Views: 2211 | Replies: 1
henryliu911, regular member (rank: somewhat known)

[Discussion] IEEE J. Emerging Sel. Top. Circuits Syst. call for papers (1 reply so far)
I recently received an invitation from an Associate Editor of the IEEE JOURNAL ON EMERGING AND SELECTED TOPICS IN CIRCUITS AND SYSTEMS (SCI-indexed, impact factor: 2.542). The theme of the special issue is "Customized sub-systems and circuits for deep learning". Details of the special issue follow.

Scope and purpose

[Rationale and Motivation] This special issue is dedicated to recent technical advances in emerging hardware technologies that will enable deep learning across various application areas. Over the past decade, deep learning has emerged as the dominant machine learning approach, revolutionizing a wide spectrum of challenging application domains, ranging from image processing and machine translation to speech recognition and many others. This rapid progress has been enabled by the availability of massive amounts of labeled data, coupled with the enhanced computational capabilities of advanced hardware platforms such as Graphics Processing Units (GPUs). Despite these impressive advances, it still takes significant time and energy to train and deploy these models on leading-edge hardware. Furthermore, the complexity of these models makes it challenging to perform inference with deep learning algorithms on resource-constrained IoT devices. As deep learning models become more complex, emerging hardware platforms are critical for future Artificial Intelligence (AI) breakthroughs. In this special issue, we aim to address these emerging areas and provide a comprehensive perspective on the various aspects of hardware system and circuit research for future deep learning applications.

[Scope] To cover the rapid progress of these emerging areas, we plan to organize papers around the following topics:

1. Digital deep learning processors. This session targets digital DNN processing hardware, including temporal-parallelism architectures (such as GPUs, parallel threads, and SIMD) as well as partial-parallelism and dataflow architectures (such as FPGAs, customized SoCs, and ASICs).
This session will also include software-platform topics such as programming models, firmware, accelerator evaluation tools, and EDA tools for digital deep learning processors.

2. Analog and in-memory computing approaches to deep learning. This session highlights the integration of computation into memory to save energy by reducing data movement; it also includes analog computation, ADC/DAC design, and SRAM modifications. For deep learning workloads, the communication between memory units and the location of computation can dominate energy consumption and limit computation throughput. In-memory computing is an architecture design approach that integrates some form of memory and compute to reduce data-transfer costs and improve chip efficiency. In addition to in-memory computing, custom analog circuit design for deep learning workloads is also within the scope of this special issue.

3. Algorithm-hardware interaction for deep learning. This special issue plans to publish papers presenting novel quantization schemes, pruning, sparsity exploration, compression techniques, and distribution strategies (data and model parallelism, synchronization, etc.) for deep neural networks; in short, hardware-centric deep learning algorithms. This session is also intended to discuss reinforcement learning methods amenable to hardware-efficient AI models and accelerator architectures.

Topics of interest

• Hardware-efficient deep learning model architectures for training and inference
• Energy-efficient deep learning inference accelerators
• Quantization, pruning, and sparsification techniques for hardware-efficient deep learning algorithms
• Distributed and parallel learning algorithms, systems, and demonstrations
• Deep learning system demonstrations integrating sensors, cloud, Internet of Things, wearable devices, device-cloud interactions, and home-intelligence devices
• Customized digital deep learning processors: FPGA, CGRA, dataflow, and specific temporal architectures
• Analog and in-memory computing approaches to deep learning
• Brain-inspired non-von-Neumann architectures
• Accelerator evaluation tools and EDA tools for deep learning accelerator development
• Customized hardware/software co-designs for deep learning
• Machine learning system interfaces
• Deep reinforcement learning for hardware-efficient AI models and hardware designs

Submission Procedure

Prospective authors are invited to submit their papers following the instructions provided on the JETCAS website: https://jetcas.polito.it. Submitted manuscripts should not have been previously published, nor should they currently be under consideration for publication elsewhere. Note that the relationship of the submission to the theme of this special issue should be explained clearly.

Important dates

• Manuscript submissions due: Nov. 19, 2018
• First round of reviews completed: Jan. 7, 2019
• Revised manuscripts due: Feb. 18, 2019
• Second round of reviews completed: March 18, 2019
• Notification of acceptance: March 25, 2019
• Final manuscripts due: April 18, 2019

I don't currently have a suitable manuscript on hand, and it would be awkward to flatly decline the AE's kind invitation, so teachers and students interested in collaborating on a paper are welcome to contact me. I have experience publishing in and reviewing for top IEEE journals, and I can help with language polishing, revision, quality checking, and pre-submission review. I'm also happy to advertise the call on the AE's behalf: everyone is welcome to submit to this special issue directly. Friendly replies and offers of collaboration are welcome; research is hard for all of us. I'm overseas, so please forgive any delayed replies due to the time difference.
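For readers less familiar with the quantization and pruning topics listed in the call, here is a minimal, self-contained sketch (my own illustration, not part of the official CFP, and deliberately simplified compared with real per-channel or quantization-aware schemes) of symmetric uniform INT8 post-training quantization, one of the basic techniques such papers build on:

```python
def quantize_int8(weights):
    """Symmetric uniform INT8 quantization with one per-tensor scale.

    Maps each float weight onto the integer grid [-127, 127]; the scale
    is chosen so the largest-magnitude weight lands on the grid edge.
    """
    scale = max(abs(w) for w in weights) / 127.0
    codes = [max(-127, min(127, round(w / scale))) for w in weights]
    return codes, scale


def dequantize(codes, scale):
    """Recover approximate float weights from the INT8 codes."""
    return [c * scale for c in codes]


# Round-trip a small weight vector; the worst-case reconstruction error
# is bounded by half a quantization step (scale / 2).
weights = [0.8, -1.27, 0.013, 0.5]
codes, scale = quantize_int8(weights)
restored = dequantize(codes, scale)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
```

Storing `codes` as 8-bit integers instead of 32-bit floats gives a 4x memory reduction, which is exactly the kind of hardware-efficiency trade-off (accuracy vs. storage and data movement) the special issue's algorithm-hardware topic is about.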
Anonymous (account deleted)
#2 · 2018-08-24 07:20:18