liyangnpu

[Discussion] [Call for Papers] Future-Generation Attack and Defense in Neural Networks (FGADNN)

Special Issue -- Future-Generation Attack and Defense in Neural Networks (FGADNN)
Aims & Scope
Neural networks have demonstrated great success in many fields, e.g., natural language processing, image analysis, speech recognition, recommender systems, and physiological computing. However, recent studies have revealed that neural networks are vulnerable to adversarial attacks, which may hinder their adoption in high-stakes scenarios. Thus, understanding their vulnerability and developing robust neural networks have attracted increasing attention.
To understand and accommodate the vulnerability of neural networks, various attack and defense techniques have been proposed.
According to the stage at which the adversarial attack is performed, there are two types of attacks: poisoning attacks and evasion attacks. The former happen at the training stage and create backdoors in the machine learning model by adding contaminated examples to the training set. The latter happen at the test stage, adding deliberately designed tiny perturbations to benign test samples to mislead the neural network. According to how much the attacker knows about the target model, there are white-box, gray-box, and black-box attacks. According to the outcome, there are targeted attacks and non-targeted (indiscriminate) attacks. Many different attack scenarios result from combinations of these attack types.
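To make the white-box evasion setting concrete, here is a minimal, self-contained sketch (illustrative only, not part of this call; the model and all names are hypothetical) of the classic fast gradient sign method (FGSM) applied to a plain logistic-regression classifier:

```python
import math

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def logistic_loss(w, x, y):
    """Label y in {-1, +1}; loss = log(1 + exp(-y * w.x))."""
    return math.log(1.0 + math.exp(-y * dot(w, x)))

def fgsm(w, x, y, eps):
    """White-box evasion: move each input feature eps in the
    direction (sign) of the loss gradient w.r.t. the input."""
    # d loss / d x_i = -y * w_i / (1 + exp(y * w.x))
    coeff = -y / (1.0 + math.exp(y * dot(w, x)))
    grad = [coeff * wi for wi in w]
    return [xi + eps * (1 if gi > 0 else -1 if gi < 0 else 0)
            for xi, gi in zip(x, grad)]
```

For any `eps > 0`, the returned adversarial example stays within an L∞ ball of radius `eps` around the clean input `x`, yet (for this linear model) its loss is never lower than the clean loss.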
Several different adversarial defense strategies have also been proposed: data modification, which modifies the training set in the training stage or the input data in the test stage, through adversarial training, gradient hiding, transferability blocking, data compression, data randomization, etc.; model modification, which modifies the target model directly to increase its robustness, by regularization, defensive distillation, feature squeezing, using a deep contractive network or a mask layer, etc.; and auxiliary tools, which add auxiliary machine learning models to robustify the primary model, e.g., adversarial detection models, defense generative adversarial nets (Defense-GAN), or a high-level representation guided denoiser.
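As an illustration of the data-modification family above, adversarial training takes each optimization step on a worst-case perturbed copy of the clean example rather than the example itself. A toy sketch (assumptions: a linear classifier, an FGSM-style inner step, and hypothetical names throughout):

```python
import math

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def adv_train(data, eps, lr=0.1, epochs=100):
    """Adversarial training of a linear classifier on logistic loss:
    every SGD update is computed on an FGSM-perturbed example."""
    w = [0.0] * len(data[0][0])
    for _ in range(epochs):
        for x, y in data:  # labels y in {-1, +1}
            # Inner step: FGSM perturbation of x under the current w.
            # d loss / d x_i = -y * w_i * sigmoid(-y * w.x)
            coeff = -y * sigmoid(-y * dot(w, x))
            x_adv = [xi + eps * (1 if coeff * wi > 0 else -1 if coeff * wi < 0 else 0)
                     for xi, wi in zip(x, w)]
            # Outer step: gradient descent on the adversarial example.
            g = -y * sigmoid(-y * dot(w, x_adv))
            w = [wi - lr * g * xa for wi, xa in zip(w, x_adv)]
    return w
```

The min-max structure is what distinguishes this from ordinary training: the inner loop maximizes the loss within the `eps`-ball, the outer loop minimizes it, so the learned `w` keeps a margin against perturbations of that size.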
Because of the popularity, complexity, and lack of interpretability of neural networks, more attacks are expected to emerge, in various scenarios and applications. It is critically important to develop strategies to defend against them.
This special issue focuses on adversarial attacks and defenses in various future-generation neural networks, e.g., CNNs, LSTMs, ResNet, Transformers, BERT, spiking neural networks, and graph neural networks. We invite both reviews and original contributions, on the theory (design, understanding, visualization, and interpretation) and applications of adversarial attacks and defenses, in future-generation natural language processing, computer vision systems, speech recognition, recommender systems, etc.
Topics of interest include, but are not limited to:
•        Novel adversarial attack approaches
•        Novel adversarial defense approaches
•        Model vulnerability discovery and explanation
•        Trust and interpretability of neural networks
•        Attacks and/or defenses in NLP
•        Attacks and/or defenses in recommender systems
•        Attacks and/or defenses in computer vision
•        Attacks and/or defenses in speech recognition
•        Attacks and/or defenses in physiological computing
•        Adversarial attacks and defenses in various future-generation applications
Evaluation Criteria
•        Novelty of the approach (how is it different from existing ones?)
•        Technical soundness (e.g., rigorous model evaluation)
•        Impact (how does it change the state of the art?)
•        Readability (is it clear what has been done?)
•        Reproducibility and open source: pre-registration if confirmatory claims are being made (e.g., via osf.io), open data, materials, code as much as ethically possible.
Submission Instructions
All submissions deemed suitable to be sent for peer review will be reviewed by at least two independent reviewers. Authors should prepare their manuscript according to the Guide for Authors available from the online submission page of the Future Generation Computer Systems at https://ees.elsevier.com/fgcs/. Authors should select “VSI: NNVul” when they reach the “Article Type” step in the submission process. Inquiries, including questions about appropriate topics, may be sent electronically to liyangnpu@nwpu.edu.cn.
Please make sure to read the Guide for Authors before writing your manuscript. The Guide for Authors and the link to submit your manuscript are available on the Journal’s homepage at: https://www.journals.elsevier.co ... n-computer-systems.
Important Dates
● Manuscript Submission Deadline: 20th June 2022
● Peer Review Due: 30th July 2022
● Revision Due: 15th September 2022
● Final Decision: 20th October 2022