
[Discussion] Invited submissions: IEEE J. Emerging Sel. Top. Circuits Syst.

Posted by: henryliu911

I recently received an invitation from an Associate Editor of the IEEE JOURNAL ON EMERGING AND SELECTED TOPICS IN CIRCUITS AND SYSTEMS (SCI-indexed, impact factor: 2.542). The special issue topic is: Customized sub-systems and circuits for deep learning.

The details of the special issue are as follows:
Scope and purpose
[Rationale and Motivation]
This special issue is dedicated to recent technical advances in emerging hardware technologies that will enable deep learning across various application areas. Over the past decade, deep learning has emerged as the dominant machine learning approach, revolutionizing a wide spectrum of challenging application domains, from image processing and machine translation to speech recognition and many others. This rapid progress has been enabled by the availability of massive amounts of labeled data, coupled with the enhanced computational capabilities of advanced hardware platforms such as Graphics Processing Units (GPUs). Despite these impressive advances, it still takes significant time and energy to train and deploy these models on leading-edge hardware, and the complexity of these models makes it challenging to perform deep learning inference on resource-constrained IoT devices. As deep learning models become more complex, emerging hardware platforms are critical for future Artificial Intelligence (AI) breakthroughs. In this special issue, we aim to address these emerging areas and provide a comprehensive perspective on hardware systems and circuits research for future deep learning applications.


[Scope]
To cover the rapid progress of these emerging areas, we plan to organize papers around the following topics:
1.        Digital deep learning processors. This session focuses on digital DNN processing hardware, including temporal-parallelism architectures (such as GPUs, parallel threads, and SIMD) as well as partial-parallelism and data-flow architectures (such as FPGAs, customized SoCs, and ASICs). This session will also cover software platform topics such as programming models, firmware, accelerator evaluation tools, and EDA tools for digital deep learning processors.
2.        Analog and in-memory computing approaches to deep learning. This session highlights the integration of computation into memory to save energy by reducing data movement; it also includes analog computation, ADC/DAC design, and SRAM modifications. For deep learning workloads, the communication between memory units and the location of computation can dominate energy consumption and limit computation throughput. In-memory computing is an architecture design approach that integrates memory and compute to reduce data transfer costs and improve chip efficiency (a toy sketch of this data-movement argument appears right after this list). In addition to in-memory computing, custom analog circuit design for deep learning workloads is also included in this special issue.
3.        Algorithm-hardware interaction for deep learning. This special issue plans to publish papers presenting novel quantization schemes, pruning, sparsity exploration, compression techniques, and distribution strategies (data and model parallelism, synchronization, etc.) for deep neural networks, i.e., hardware-centric deep learning algorithms (a minimal quantization and pruning sketch also follows this list). This session is also intended to discuss reinforcement learning methods amenable to hardware-efficient AI models and accelerator architectures.
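To make the data-movement argument in topic 2 concrete, here is a minimal NumPy sketch of the in-memory computing idea. Everything in it (the CrossbarArray name, the layer sizes, the movement counts) is my own illustration, not part of the CFP: the weights stay resident in the simulated array, so only the input and output vectors move for each matrix-vector product.

```python
import numpy as np

class CrossbarArray:
    """Toy model of a crossbar that holds a weight matrix in place.

    Real arrays store non-negative conductances and encode signed
    weights with differential pairs; this sketch skips that detail.
    """
    def __init__(self, weights):
        self.g = np.asarray(weights, dtype=np.float64)  # conductances ~ weights

    def mvm(self, v_in):
        # Column currents sum by Kirchhoff's current law, so the whole
        # matrix-vector product happens where the weights live.
        return self.g.T @ v_in

W = np.random.randn(256, 64)   # one 256-input, 64-output layer
x = np.random.randn(256)
xbar = CrossbarArray(W)
y = xbar.mvm(x)
print(y.shape)  # (64,)

# Values moved per product (ignoring DAC/ADC traffic):
#   in-memory:    256 inputs + 64 outputs           =   320
#   conventional: 16384 weights + 256 in + 64 out   = 16704
```

That roughly 50x reduction in moved values is, in miniature, the energy argument the topic description makes.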
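Similarly for topic 3, here is a minimal sketch of two of the techniques named there: uniform 8-bit post-training quantization and magnitude-based pruning. The function names, the affine scheme, and the 0.9 sparsity target are illustrative choices of mine, not requirements of the special issue.

```python
import numpy as np

def quantize_uint8(w):
    """Affine (asymmetric) post-training quantization to 8 bits."""
    w_min, w_max = w.min(), w.max()
    scale = (w_max - w_min) / 255.0
    zero_point = np.round(-w_min / scale)
    q = np.clip(np.round(w / scale + zero_point), 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

def prune_by_magnitude(w, sparsity=0.9):
    """Zero the smallest |w| so a `sparsity` fraction of weights vanish."""
    thresh = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < thresh, 0.0, w)

w = np.random.randn(512, 512).astype(np.float32)
q, s, z = quantize_uint8(w)
w_hat = dequantize(q, s, z)
print("max quantization error:", np.abs(w - w_hat).max())  # ~one step (scale)
w_sparse = prune_by_magnitude(w, 0.9)
print("sparsity:", np.mean(w_sparse == 0.0))  # ~0.9
```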
Topics of interest
•        Hardware-efficient deep learning model architectures for training and inference
•        Energy-efficient deep learning inference accelerators
•        Quantization, pruning, and sparsification techniques for hardware-efficient deep learning algorithms
•        Distributed and parallel learning algorithms, systems, and demonstrations
•        Deep learning system demonstrations integrating sensors, cloud, Internet of Things, wearable devices, device-cloud interactions, and home-intelligence devices
•        Customized digital deep learning processors, FPGA, CGRA, dataflow and specific temporal architectures
•        Analog and in-memory computing approaches to deep learning
•        Brain-inspired non-von Neumann architectures
•        Accelerator evaluation tools, EDA tools for deep learning accelerator development
•        Customized hardware/software co-designs for deep learning
•        Machine learning system interfaces
•        Deep reinforcement learning for hardware efficient AI models and hardware designs

Submission Procedure
Prospective authors are invited to submit their papers following the instructions provided on the JETCAS website: https://jetcas.polito.it. Submitted manuscripts should not have been previously published, nor should they be under consideration for publication elsewhere. Note that the relationship to the special issue theme should be explained clearly in the submission.
Important dates
•        Manuscript submissions due:        Nov. 19, 2018
•        First round of reviews completed:        Jan. 7, 2019
•        Revised manuscripts due:        Feb. 18, 2019
•        Second round of reviews completed:        March 18, 2019
•        Notification of acceptance:        March 25, 2019
•        Final manuscripts due:        April 18, 2019


I don't currently have a suitable manuscript on hand, and it would feel rude to simply turn down the AE's kind invitation, so teachers and students interested in collaborating on a paper are welcome to contact me. I have experience publishing in and reviewing for top IEEE journals, and can help with language polishing, revision, quality checking, and pre-submission review. This post also serves as an advertisement for the AE's call for papers; everyone is welcome to submit to this special issue directly.

Friendly replies, discussion, and offers of collaboration are all welcome in this thread; research is hard for all of us.

I am overseas in a different time zone, so please forgive any delayed replies.
Reply #2, Anonymous, 2018-08-24 07:20:18
[This reply is visible only to the thread starter.]