[Discussion] [Call for Papers] Future-Generation Attack and Defense in Neural Networks (FGADNN)
Special Issue -- Future-Generation Attack and Defense in Neural Networks (FGADNN)

Aims & Scope

Neural networks have demonstrated great success in many fields, e.g., natural language processing, image analysis, speech recognition, recommender systems, and physiological computing. However, recent studies have revealed that neural networks are vulnerable to adversarial attacks, which may hinder their adoption in high-stakes scenarios. Thus, understanding their vulnerability and developing robust neural networks have attracted increasing attention.

To understand and accommodate the vulnerability of neural networks, various attack and defense techniques have been proposed. According to the stage at which the adversarial attack is performed, there are two types of attacks: poisoning attacks and evasion attacks. The former happen at the training stage, creating backdoors in the machine learning model by adding contaminated examples to the training set. The latter happen at the test stage, adding deliberately designed tiny perturbations to benign test samples to mislead the neural network. According to how much the attacker knows about the target model, there are white-box, gray-box, and black-box attacks. According to the outcome, there are targeted attacks and non-targeted (indiscriminate) attacks. Different combinations of these attack types give rise to many different attack scenarios.
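The gradient-sign evasion attack described above (the well-known Fast Gradient Sign Method, FGSM) can be sketched on a hypothetical toy model — here a logistic-regression "network" with made-up weights, not anything from the special issue itself:

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps=0.6):
    """FGSM evasion attack on a logistic model f(x) = sigmoid(w.x + b):
    add an eps-bounded perturbation in the direction that increases
    the cross-entropy loss for the true label y."""
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # model's prediction
    grad_x = (p - y) * w                           # dLoss/dx for cross-entropy
    return x + eps * np.sign(grad_x)               # perturbed (adversarial) input

# Hypothetical model and input: classified correctly before the attack...
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([1.0, 0.5]), 1.0
x_adv = fgsm_perturb(x, y, w, b)
print(np.dot(w, x) + b)       # 1.5  -> predicted class 1 (correct)
print(np.dot(w, x_adv) + b)   # -0.3 -> flipped to class 0 by the perturbation
```

The same signed-gradient step, applied to the input of a deep network instead of this toy model, is the canonical example of the test-stage attacks the issue covers.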
Several adversarial defense strategies have also been proposed:
• Data modification, which modifies the training set at the training stage or the input data at the test stage, through adversarial training, gradient hiding, transferability blocking, data compression, data randomization, etc.;
• Model modification, which modifies the target model directly to increase its robustness, through regularization, defensive distillation, feature squeezing, a deep contractive network, a mask layer, etc.;
• Auxiliary tools, i.e., additional auxiliary machine learning models that robustify the primary model, e.g., adversarial detection models, defense generative adversarial nets (defense-GAN), high-level representation guided denoisers, etc.

Because of the popularity, complexity, and lack of interpretability of neural networks, more attacks are expected to emerge, in various scenarios and applications, and it is critically important to develop strategies to defend against them. This special issue focuses on adversarial attacks and defenses in various future-generation neural networks, e.g., CNNs, LSTMs, ResNets, Transformers, BERT, spiking neural networks, and graph neural networks. We invite both reviews and original contributions on the theory (design, understanding, visualization, and interpretation) and applications of adversarial attacks and defenses in future-generation natural language processing, computer vision, speech recognition, recommender systems, etc.
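Of the data-modification defenses listed above, adversarial training is the easiest to sketch: train on worst-case perturbed copies of the data rather than the clean data. The following is a minimal illustration on a hypothetical logistic model and made-up toy data (the model, data, and hyperparameters are assumptions for the sketch, not part of the CFP):

```python
import numpy as np

def adversarial_train(X, Y, eps=0.3, lr=0.5, epochs=200):
    """Minimal adversarial-training loop (a "data modification" defense):
    a logistic model (w, b) is updated on FGSM-perturbed copies of each
    training sample, so it learns to stay correct in an eps-ball
    around the data rather than only at the clean points."""
    rng = np.random.default_rng(0)
    w, b = rng.normal(size=X.shape[1]), 0.0
    for _ in range(epochs):
        for x, y in zip(X, Y):
            p = 1 / (1 + np.exp(-(w @ x + b)))
            x_adv = x + eps * np.sign((p - y) * w)    # worst-case input (FGSM)
            p_adv = 1 / (1 + np.exp(-(w @ x_adv + b)))
            w = w - lr * (p_adv - y) * x_adv          # gradient step taken on the
            b = b - lr * (p_adv - y)                  # adversarial example
    return w, b

# Hypothetical toy data: two linearly separable classes.
X = np.array([[1., 1.], [2., 2.], [-1., -1.], [-2., -2.]])
Y = np.array([1., 1., 0., 0.])
w, b = adversarial_train(X, Y)
preds = (X @ w + b > 0).astype(float)  # clean-data predictions after defense
```

The defenses in the other two families (model modification, auxiliary tools) change the model or add a detector instead of the data, but the evaluation question is the same: does the defended model remain correct under the perturbations the attacker is allowed?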
Topics of interest include, but are not limited to:
• Novel adversarial attack approaches
• Novel adversarial defense approaches
• Model vulnerability discovery and explanation
• Trust and interpretability of neural networks
• Attacks and/or defenses in NLP
• Attacks and/or defenses in recommender systems
• Attacks and/or defenses in computer vision
• Attacks and/or defenses in speech recognition
• Attacks and/or defenses in physiological computing
• Adversarial attacks and defenses in various future-generation applications

Evaluation Criteria
• Novelty of the approach (how is it different from existing ones?)
• Technical soundness (e.g., rigorous model evaluation)
• Impact (how does it change the state of the art?)
• Readability (is it clear what has been done?)
• Reproducibility and open source: pre-registration if confirmatory claims are made (e.g., via osf.io); open data, materials, and code as much as ethically possible

Submission Instructions
All submissions deemed suitable to be sent for peer review will be reviewed by at least two independent reviewers. Authors should prepare their manuscripts according to the Guide for Authors available from the online submission page of Future Generation Computer Systems at https://ees.elsevier.com/fgcs/. Authors should select "VSI: NNVul" when they reach the "Article Type" step in the submission process. Inquiries, including questions about appropriate topics, may be sent electronically to liyangnpu@nwpu.edu.cn. Please make sure to read the Guide for Authors before writing your manuscript. The Guide for Authors and the link to submit your manuscript are available on the Journal's homepage at: https://www.journals.elsevier.co ... n-computer-systems.

Important Dates
● Manuscript Submission Deadline: 20 June 2022
● Peer Review Due: 30 July 2022
● Revision Due: 15 September 2022
● Final Decision: 20 October 2022