[Discussion]
[Call for Papers] Future-Generation Attack and Defense in Neural Networks (FGADNN)
Special Issue -- Future-Generation Attack and Defense in Neural Networks (FGADNN)

Aims & Scope

Neural networks have demonstrated great success in many fields, e.g., natural language processing, image analysis, speech recognition, recommender systems, and physiological computing. However, recent studies have revealed that neural networks are vulnerable to adversarial attacks. This vulnerability may hinder their adoption in high-stakes scenarios. Thus, understanding the vulnerability of neural networks and developing robust neural networks have attracted increasing attention.

To understand and accommodate this vulnerability, various attack and defense techniques have been proposed. According to the stage at which the adversarial attack is performed, there are two types of attacks: poisoning attacks and evasion attacks. The former happen at the training stage and create backdoors in the machine learning model by adding contaminated examples to the training set. The latter happen at the test stage, adding deliberately designed tiny perturbations to benign test samples to mislead the neural network. According to how much the attacker knows about the target model, there are white-box, gray-box, and black-box attacks. According to the outcome, there are targeted and non-targeted (indiscriminate) attacks. Different combinations of these attack types result in many different attack scenarios.
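As a concrete illustration (not part of the call itself), the evasion attacks described above can be sketched with the Fast Gradient Sign Method (FGSM) of Goodfellow et al.: perturb the input by a small step eps in the sign of the loss gradient. The toy logistic model, weights, and inputs below are hypothetical, chosen only to keep the sketch dependency-free.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability that x belongs to class 1 under a logistic model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    """FGSM evasion attack: move x by eps in the direction that
    increases the cross-entropy loss. For a logistic model the
    gradient of the loss w.r.t. x is (p - y) * w."""
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Hypothetical toy model and a correctly classified benign input (class 1)
w, b = [2.0, -1.0], 0.0
x, y = [0.6, 0.1], 1
x_adv = fgsm(w, b, x, y, eps=0.5)

print(predict(w, b, x) > 0.5)      # -> True: benign input classified as class 1
print(predict(w, b, x_adv) > 0.5)  # -> False: the perturbed input flips the label
```

The same one-step recipe applies to deep networks, where the gradient is obtained by backpropagation; the perturbation is "tiny" in the sense that each coordinate moves by at most eps.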
Several adversarial defense strategies have also been proposed:

• Data modification, which modifies the training set in the training stage or the input data in the test stage, through adversarial training, gradient hiding, transferability blocking, data compression, data randomization, etc.
• Model modification, which modifies the target model directly to increase its robustness, through regularization, defensive distillation, feature squeezing, a deep contractive network, a mask layer, etc.
• Auxiliary tools, i.e., additional machine learning models that robustify the primary model, e.g., adversarial detection models, defense generative adversarial networks (defense-GAN), and high-level representation guided denoisers.

Because of the popularity, complexity, and lack of interpretability of neural networks, more attacks are expected to emerge, in various scenarios and applications. It is critically important to develop strategies to defend against them. This special issue focuses on adversarial attacks and defenses in various future-generation neural networks, e.g., CNNs, LSTMs, ResNets, Transformers, BERT, spiking neural networks, and graph neural networks. We invite both reviews and original contributions on the theory (design, understanding, visualization, and interpretation) and applications of adversarial attacks and defenses in future-generation natural language processing, computer vision, speech recognition, recommender systems, etc.
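Of the defenses listed above, adversarial training is perhaps the simplest to sketch: each gradient step fits both the clean example and its FGSM-perturbed copy. The minimal pure-Python example below (a hypothetical logistic model and toy dataset, not from the call) shows the idea; real defenses operate on deep networks with mini-batches.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def grads(w, b, x, y):
    """Cross-entropy gradients of a logistic model w.r.t. w, b, and x."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    err = p - y
    return [err * xi for xi in x], err, [err * wi for wi in w]

def adversarial_train(data, eps=0.3, lr=0.5, epochs=200):
    """Adversarial training: each update averages the gradient on the
    clean example and on its FGSM-perturbed copy."""
    w, b = [0.0] * len(data[0][0]), 0.0
    sign = lambda g: (g > 0) - (g < 0)
    for _ in range(epochs):
        for x, y in data:
            dw, db, dx = grads(w, b, x, y)
            # craft the FGSM copy of x against the current model
            x_adv = [xi + eps * sign(gi) for xi, gi in zip(x, dx)]
            dwa, dba, _ = grads(w, b, x_adv, y)
            w = [wi - lr * (g + ga) / 2 for wi, g, ga in zip(w, dw, dwa)]
            b -= lr * (db + dba) / 2
    return w, b

# Toy separable data: class 1 near (1, 1), class 0 near (-1, -1)
w, b = adversarial_train([([1.0, 1.0], 1), ([-1.0, -1.0], 0)])
print(sigmoid(w[0] + w[1] + b) > 0.5)  # -> True: clean positive example
```

Training against worst-case perturbations within the eps-ball is what distinguishes this from ordinary data augmentation: the adversarial copy is recomputed against the current model at every step.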
Topics of interest include, but are not limited to:
• Novel adversarial attack approaches
• Novel adversarial defense approaches
• Model vulnerability discovery and explanation
• Trust and interpretability of neural networks
• Attacks and/or defenses in NLP
• Attacks and/or defenses in recommender systems
• Attacks and/or defenses in computer vision
• Attacks and/or defenses in speech recognition
• Attacks and/or defenses in physiological computing
• Adversarial attacks and defenses in various future-generation applications

Evaluation Criteria
• Novelty of the approach (how is it different from existing ones?)
• Technical soundness (e.g., rigorous model evaluation)
• Impact (how does it change the state of the art?)
• Readability (is it clear what has been done?)
• Reproducibility and open source: pre-registration if confirmatory claims are made (e.g., via osf.io); open data, materials, and code as much as ethically possible

Submission Instructions
All submissions deemed suitable to be sent for peer review will be reviewed by at least two independent reviewers. Authors should prepare their manuscripts according to the Guide for Authors, available from the online submission page of Future Generation Computer Systems at https://ees.elsevier.com/fgcs/. Authors should select "VSI: NNVul" when they reach the "Article Type" step in the submission process. Inquiries, including questions about appropriate topics, may be sent electronically to liyangnpu@nwpu.edu.cn. Please make sure to read the Guide for Authors before writing your manuscript. The Guide for Authors and the link to submit your manuscript are available on the journal's homepage at: https://www.journals.elsevier.co ... n-computer-systems.

Important Dates
● Manuscript Submission Deadline: 20th June 2022
● Peer Review Due: 30th July 2022
● Revision Due: 15th September 2022
● Final Decision: 20th October 2022