Views: 841 | Replies: 5
[Bounty] Answer the question in this thread and the author, agong, will award you 5 coins.
[Help]
Seeking help understanding reviewer comments
I understand the literal meaning of the reviews, but I would still appreciate it if experienced people could take a look and offer some criticism and suggestions, as well as advice on how to revise the paper and where to submit it next. Thank you, everyone. (Sent from the Xiaomuchong iOS client)
Reply #2 · 2021-12-15 17:44:11
Reply #3 · 2021-12-15 18:15:41
Strong Aspects (Comments to the author: What are the strong aspects of the paper?)
In this paper, the authors proposed an experience-based computational offloading with reinforcement learning in MEC network.

Weak Aspects (Comments to the author: What are the weak aspects of the paper?)
1. In (11), it seems that the discount factor is 1, while the discount factor is defined as [0,1] in (12). It is not very clear.
2. Some symbols are undefined, i.e., the immediate reward r_t, the symbol \wedge in (15).
3. There are some flaw in the presentation, i.e., double “the task” in section II-B, the action should be defined in lowercase.
4. In algorithm 1, the meaning of “undated” is not clear.
5. It is better to compare the proposed algorithm with DQN not DDPG.

Recommended Changes (Recommended changes. Please indicate any changes that should be made to the paper if accepted.)
In this paper, the authors proposed an experience-based computational offloading with reinforcement learning in MEC network. The reviewer has the following comments.
1. In (11), it seems that the discount factor is 1, while the discount factor is defined as [0,1] in (12). It is not very clear.
2. Some symbols are undefined, i.e., the immediate reward r_t, the symbol \wedge in (15).
3. There are some flaw in the presentation, i.e., double “the task” in section II-B, the action should be defined in lowercase.
4. In algorithm 1, the meaning of “undated” is not clear.
5. It is better to compare the proposed algorithm with DQN not DDPG.
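A note on this reviewer's comment 1, since it is the most technical one: in the standard reinforcement-learning formulation a single discount factor γ appears in both the return and the update target, and it must lie in [0,1]. As a reference point only (the paper's own equations (11) and (12) may be written differently), the usual definitions are

G_t = \sum_{k=0}^{\infty} \gamma^{k} r_{t+k+1}, \qquad \gamma \in [0,1],
y_t = r_t + \gamma \max_{a'} Q(s_{t+1}, a').

If (11) effectively sets γ = 1 (an undiscounted return), that choice should be stated explicitly in the revision and reconciled with the definition γ ∈ [0,1] given in (12).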
Reply #4 · 2021-12-15 18:22:55
Review 2

Relevance and Timeliness: Good. (4)
Technical Content and Scientific Rigour: Solid work of notable importance. (4)
Novelty and Originality: Some interesting ideas and results on a subject well investigated. (3)
Quality of Presentation: Well written. (4)

Strong Aspects (Comments to the author: What are the strong aspects of the paper?)
The paper proposes an improved experience based replay reinforcement learning algorithm (EBRL) for computation offloading by using MEC. The energy consumption and delay can be minimized by using the proposed algorithm compared with other algorithms. The paper is well written.

Weak Aspects (Comments to the author: What are the weak aspects of the paper?)
It is better to show more practical situation for performance comparison with considering realistic applications. Currently, only arrival rate is changed for considering different environment.

Recommended Changes (Recommended changes. Please indicate any changes that should be made to the paper if accepted.)
Please see the weak aspects. It is better to consider more realistic and practical situation. Robustness for environment change is another key performance for MEC offloading.
Reply #5 · 2021-12-15 18:23:15
Review 1

Relevance and Timeliness: Acceptable. (3)
Technical Content and Scientific Rigour: Valid work but limited contribution. (3)
Novelty and Originality: Some interesting ideas and results on a subject well investigated. (3)
Quality of Presentation: Readable, but revision is needed in some parts. (3)

Strong Aspects (Comments to the author: What are the strong aspects of the paper?)
This paper presents a new algorithm to offload edge tasks to edge serves within the MEC environment. Authors present an improved reinforcement learning framework according to dynamic environments, which selects samples from an improved experience pool. Simulation experiments reveal improved performance.

Weak Aspects (Comments to the author: What are the weak aspects of the paper?)
First, the distinction between the proposal and state-of-the-art reinforcement learning seems straightforward, as the selection of experience samples seems straightforward. Second, the simulation results are not discussed in details to explain the novelty of the proposal.

Recommended Changes (Recommended changes. Please indicate any changes that should be made to the paper if accepted.)
First, authors should explain the improvements. why experiences are vital to improve the performance of the reinforcement learning. And the selection of experience samples seems straightforward. Second, an example is favored to illustrate the workflow of the proposed algorithm. Third, the experiments do not present the detailed setup of the MEC environment, and the performance metrics.
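On Reviewer 1's main concern (that the selection of experience samples "seems straightforward"): a common baseline the reviewer may have in mind is priority-based experience replay, where transitions are drawn with probability proportional to a priority, typically the TD-error magnitude. Below is a minimal, generic sketch of that idea, purely for illustration; it is not the paper's EBRL algorithm, and all names and default values here are assumptions. Explicitly contrasting the paper's sample-selection rule against a baseline like this in the revision would help address the comment.

import numpy as np

class PrioritizedReplayBuffer:
    """Generic priority-based experience selection (illustrative sketch only, not the paper's method)."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity      # maximum number of stored transitions
        self.alpha = alpha            # how strongly priorities bias sampling
        self.buffer = []              # transitions (s, a, r, s_next)
        self.priorities = []          # one priority per stored transition

    def add(self, transition, priority=1.0):
        # Drop the oldest experience once the buffer is full
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append(priority)

    def sample(self, batch_size):
        # Draw indices with probability proportional to priority**alpha
        p = np.asarray(self.priorities, dtype=float) ** self.alpha
        p /= p.sum()
        idx = np.random.choice(len(self.buffer), size=batch_size, p=p)
        return [self.buffer[i] for i in idx], idx

    def update_priorities(self, idx, td_errors):
        # After a learning step, priorities are usually set to |TD error|
        for i, err in zip(idx, td_errors):
            self.priorities[i] = abs(float(err)) + 1e-6  # keep strictly positive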
Reply #6 · 2021-12-15 18:23:27