| Views: 889 | Replies: 5 |
[Coin bounty] Answer the question in this thread and the author, agong, will award you 5 coins
agong
New bug (Forum newcomer)
[Help] Help understanding reviewer comments

I understand the literal meaning, but I would still like someone experienced to take a look and offer some criticism and suggestions, as well as advice on how to revise the paper and where to submit next. Thank you all. (Sent from the Xiaomuchong iOS client)
agong
New bug (Forum newcomer)
- Helps given: 0 (Kindergarten)
- Coins: 92.8
- Gold given away: 65
- Red flowers: 2
- Posts: 38
- Online: 22.4 hours
- Bug ID: 1567808
- Registered: 2012-01-07
- Major: Computer networks
Review 1

- Relevance and Timeliness: Acceptable. (3)
- Technical Content and Scientific Rigour: Valid work but limited contribution. (3)
- Novelty and Originality: Some interesting ideas and results on a subject well investigated. (3)
- Quality of Presentation: Readable, but revision is needed in some parts. (3)

Strong Aspects (Comments to the author: What are the strong aspects of the paper?)
This paper presents a new algorithm for offloading tasks to edge servers within an MEC environment. The authors present an improved reinforcement learning framework adapted to dynamic environments, which selects samples from an improved experience pool. Simulation experiments show improved performance.

Weak Aspects (Comments to the author: What are the weak aspects of the paper?)
First, the distinction between the proposal and state-of-the-art reinforcement learning appears limited, as the selection of experience samples seems straightforward. Second, the simulation results are not discussed in enough detail to bring out the novelty of the proposal.

Recommended Changes (Recommended changes. Please indicate any changes that should be made to the paper if accepted.)
First, the authors should explain the improvements: why the experience samples are vital to the performance of reinforcement learning, given that their selection appears straightforward. Second, an example illustrating the workflow of the proposed algorithm would be helpful. Third, the experiments do not present the detailed setup of the MEC environment or the performance metrics used.
Floor 6 | 2021-12-15 18:23:27
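On Review 1's remark that "the selection of experience samples seems straightforward": one standard way to make sample selection non-trivial is prioritized experience replay, where transitions with larger TD error are drawn more often. The sketch below is a minimal illustration under that assumption; the class name, the priority rule, and all parameter values are hypothetical, since the paper's actual mechanism is not shown in this thread.

```python
from collections import deque

import numpy as np


class PrioritizedReplayBuffer:
    """Minimal prioritized experience pool (hypothetical sketch):
    transitions with larger TD error are sampled with higher
    probability (p proportional to |delta|^alpha)."""

    def __init__(self, capacity=10000, alpha=0.6):
        self.buffer = deque(maxlen=capacity)      # (s, a, r, s_next) tuples
        self.priorities = deque(maxlen=capacity)  # one priority per transition
        self.alpha = alpha                        # 0 = uniform, 1 = fully greedy

    def push(self, transition, td_error=1.0):
        # Small epsilon keeps every transition sampleable.
        self.buffer.append(transition)
        self.priorities.append(abs(td_error) + 1e-6)

    def sample(self, batch_size):
        probs = np.asarray(self.priorities, dtype=float) ** self.alpha
        probs /= probs.sum()
        idx = np.random.choice(len(self.buffer), size=batch_size, p=probs)
        return [self.buffer[i] for i in idx]
```

After each environment step one would call push((s, a, r, s_next), td_error=delta) and train on sample(32); spelling out the priority rule and constants like alpha, and reporting them in the experimental setup, would also speak to Review 1's third recommended change.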
偃月梨落
New bug (Gaining recognition)
- Helps given: 0 (Kindergarten)
- Coins: 240.1
- Posts: 57
- Online: 27 hours
- Bug ID: 2998353
- Registered: 2014-02-26
- Gender: GG (male)
- Major: Petroleum and natural gas geology
Floor 2 | 2021-12-15 17:44:11
agong
New bug (Forum newcomer)
Link: https://pan.baidu.com/s/1jOUKi7Hp2Y76vfRuQMJ1Qg  Extraction code: euty
Floor 3 | 2021-12-15 18:15:41
agong
New bug (Forum newcomer)
Review 2

Strong Aspects (Comments to the author: What are the strong aspects of the paper?)
In this paper, the authors propose experience-based computational offloading with reinforcement learning in an MEC network.

Weak Aspects (Comments to the author: What are the weak aspects of the paper?)
1. In (11), the discount factor appears to be 1, while in (12) the discount factor is defined on [0, 1]. This is not very clear.
2. Some symbols are undefined, e.g., the immediate reward r_t and the symbol \wedge in (15).
3. There are some flaws in the presentation, e.g., the duplicated "the task" in Section II-B; the action should be written in lowercase.
4. In Algorithm 1, the meaning of "undated" is not clear.
5. It would be better to compare the proposed algorithm with DQN rather than DDPG.

Recommended Changes (Recommended changes. Please indicate any changes that should be made to the paper if accepted.)
The reviewer repeats the five comments above verbatim as the changes to be made.
Floor 4 | 2021-12-15 18:22:55
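On Review 2's first comment: the sketch below shows the standard form such equations usually take, written in assumed Q-learning notation; it is for orientation only, not the paper's actual Eqs. (11) and (12), which are not reproduced in this thread. If (11) writes the return with no discount factor, it implicitly fixes \gamma = 1, while (12) treats \gamma \in [0, 1] as a tunable parameter; stating explicitly which is intended would resolve the comment.

```latex
% Standard discounted return (assumed notation, not the paper's own):
% r_t is the immediate reward, \gamma the discount factor.
G_t = \sum_{k=0}^{\infty} \gamma^{k} \, r_{t+k+1}, \qquad \gamma \in [0, 1]

% Corresponding Q-learning update with learning rate \eta;
% omitting \gamma here is equivalent to fixing \gamma = 1.
Q(s_t, a_t) \leftarrow Q(s_t, a_t)
  + \eta \left[ r_t + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right]
```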