[Call for Papers] Future-Generation Attack and Defense in Neural Networks (FGADNN)
Special Issue -- Future-Generation Attack and Defense in Neural Networks (FGADNN)

Aims & Scope

Neural networks have demonstrated great success in many fields, e.g., natural language processing, image analysis, speech recognition, recommender systems, and physiological computing. However, recent studies have revealed that neural networks are vulnerable to adversarial attacks, which may hinder their adoption in high-stakes scenarios. Thus, understanding this vulnerability and developing robust neural networks have attracted increasing attention.

To understand and accommodate the vulnerability of neural networks, various attack and defense techniques have been proposed. According to the stage at which the adversarial attack is performed, there are two types of attacks: poisoning attacks and evasion attacks. The former happen at the training stage and create backdoors in the machine learning model by adding contaminated examples to the training set. The latter happen at the test stage, adding deliberately designed tiny perturbations to benign test samples to mislead the neural network. According to how much the attacker knows about the target model, there are white-box, gray-box, and black-box attacks. According to the outcome, there are targeted and non-targeted (indiscriminate) attacks. Many different attack scenarios result from combinations of these attack types.
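To make the evasion-attack idea above concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one classic way to craft such perturbations, applied to a toy logistic-regression "network". The weights, sample, and epsilon value are invented for illustration only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, w, b, y_true, eps):
    """One-step FGSM: move x in the sign direction of the loss gradient
    with respect to the input, which increases the model's loss."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y_true) * w   # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)

# Toy white-box setting: the attacker knows w and b.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.2])        # clean sample, true label 1

clean_pred = int(sigmoid(w @ x + b) > 0.5)           # correctly classified as 1
x_adv = fgsm_attack(x, w, b, y_true=1.0, eps=1.0)
adv_pred = int(sigmoid(w @ x_adv + b) > 0.5)         # flipped to 0
```

On a real deep network the same recipe applies with the gradient obtained by backpropagation, and a much smaller epsilon typically suffices to flip the prediction while leaving the input visually unchanged.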
Several adversarial defense strategies have also been proposed: data modification, which modifies the training set at the training stage or the input data at the test stage, through adversarial training, gradient hiding, transferability blocking, data compression, data randomization, etc.; model modification, which modifies the target model directly to increase its robustness, via regularization, defensive distillation, feature squeezing, a deep contractive network, a mask layer, etc.; and auxiliary tools, i.e., additional machine learning models that robustify the primary model, such as adversarial detection models, defense generative adversarial nets (defense-GAN), or a high-level representation guided denoiser.

Because of the popularity, complexity, and lack of interpretability of neural networks, more attacks are expected to emerge, in a variety of scenarios and applications, and it is critically important to develop strategies to defend against them. This special issue focuses on adversarial attacks and defenses in various future-generation neural networks, e.g., CNNs, LSTMs, ResNets, Transformers, BERT, spiking neural networks, and graph neural networks. We invite both reviews and original contributions on the theory (design, understanding, visualization, and interpretation) and applications of adversarial attacks and defenses in future-generation natural language processing, computer vision, speech recognition, recommender systems, etc.
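Adversarial training, the first data-modification defense listed above, can be sketched in a few lines: at every optimization step, each clean sample is replaced by its FGSM perturbation before the weight update, so the model learns on worst-case inputs. The toy dataset, learning rate, and epsilon below are invented for illustration, and a tiny logistic-regression model stands in for a real network.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train(X, y, eps=0.1, lr=0.5, steps=200):
    """Adversarial training sketch: generate FGSM examples against the
    current model at each step, then update the weights on them."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        grad_X = (p - y)[:, None] * w          # per-sample input gradients
        X_adv = X + eps * np.sign(grad_X)      # worst-case inputs (FGSM)
        err = sigmoid(X_adv @ w + b) - y       # cross-entropy gradient signal
        w -= lr * (X_adv.T @ err) / len(y)     # update on the adversarial batch
        b -= lr * err.mean()
    return w, b

# Linearly separable toy data: label 1 on the right, label 0 on the left.
X = np.array([[1.0, 1.0], [2.0, 1.0], [-1.0, -1.0], [-2.0, -1.0]])
y = np.array([1.0, 1.0, 0.0, 0.0])
w, b = adversarial_train(X, y)
clean_acc = ((sigmoid(X @ w + b) > 0.5) == y.astype(bool)).mean()
```

For deep networks the inner perturbation step is usually iterated (e.g., PGD-style adversarial training), but the structure — perturb, then update — is the same.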
Topics of interest include, but are not limited to:
• Novel adversarial attack approaches
• Novel adversarial defense approaches
• Model vulnerability discovery and explanation
• Trust and interpretability of neural networks
• Attacks and/or defenses in NLP
• Attacks and/or defenses in recommender systems
• Attacks and/or defenses in computer vision
• Attacks and/or defenses in speech recognition
• Attacks and/or defenses in physiological computing
• Adversarial attacks and defenses in various future-generation applications

Evaluation Criteria
• Novelty of the approach (how is it different from existing ones?)
• Technical soundness (e.g., rigorous model evaluation)
• Impact (how does it change the state of the art?)
• Readability (is it clear what has been done?)
• Reproducibility and open source: pre-registration if confirmatory claims are made (e.g., via osf.io); open data, materials, and code as much as ethically possible

Submission Instructions
All submissions deemed suitable for peer review will be reviewed by at least two independent reviewers. Authors should prepare their manuscript according to the Guide for Authors, available from the online submission page of Future Generation Computer Systems at https://ees.elsevier.com/fgcs/. Authors should select “VSI: NNVul” when they reach the “Article Type” step in the submission process. Inquiries, including questions about appropriate topics, may be sent to liyangnpu@nwpu.edu.cn. Please make sure to read the Guide for Authors before writing your manuscript. The Guide for Authors and the link to submit your manuscript are available on the journal’s homepage at https://www.journals.elsevier.co ... n-computer-systems.

Important Dates
● Manuscript Submission Deadline: 20 June 2022
● Peer Review Due: 30 July 2022
● Revision Due: 15 September 2022
● Final Decision: 20 October 2022