Views: 934 | Replies: 5
[Bounty] The author, agong, will award 5 gold coins for answering the question in this thread.
[Help]
Help understanding reviewer comments
I understand the literal meaning of the reviews, but I would appreciate it if experienced members could take a look and offer some criticism and suggestions, as well as advice on how to revise the paper and where to submit next. Thank you all.
Review 1

Ratings:
- Relevance and Timeliness: Acceptable. (3)
- Technical Content and Scientific Rigour: Valid work but limited contribution. (3)
- Novelty and Originality: Some interesting ideas and results on a subject well investigated. (3)
- Quality of Presentation: Readable, but revision is needed in some parts. (3)

Strong Aspects (Comments to the author: What are the strong aspects of the paper?)
This paper presents a new algorithm to offload edge tasks to edge servers within the MEC environment. The authors present an improved reinforcement learning framework for dynamic environments, which selects samples from an improved experience pool. Simulation experiments reveal improved performance.

Weak Aspects (Comments to the author: What are the weak aspects of the paper?)
First, the distinction between the proposal and state-of-the-art reinforcement learning seems limited, as the selection of experience samples appears straightforward. Second, the simulation results are not discussed in enough detail to explain the novelty of the proposal.

Recommended Changes (Recommended changes. Please indicate any changes that should be made to the paper if accepted.)
First, the authors should explain the improvements: why experiences are vital to improving the performance of reinforcement learning, given that the selection of experience samples seems straightforward. Second, an example would be helpful to illustrate the workflow of the proposed algorithm. Third, the experiments do not present the detailed setup of the MEC environment or the performance metrics.
Floor 6 · 2021-12-15 18:23:27
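Review 1's phrase "selects samples from an improved experience pool" refers to experience replay. The paper's actual mechanism is not shown in this thread, but as a generic, hypothetical sketch of what priority-weighted sampling from a replay buffer typically looks like (all names here are illustrative, not the authors' code):

```python
import random
from collections import deque

class PrioritizedReplayBuffer:
    """Hypothetical sketch: store transitions with priorities and sample
    higher-priority experiences more often. Not the paper's method."""

    def __init__(self, capacity=10000):
        # Bounded buffer: oldest transitions are evicted when full.
        self.buffer = deque(maxlen=capacity)

    def push(self, transition, priority):
        # A priority is often derived from the TD error magnitude.
        self.buffer.append((priority, transition))

    def sample(self, batch_size):
        # Draw a batch, weighting each transition by its priority.
        priorities = [p for p, _ in self.buffer]
        transitions = [t for _, t in self.buffer]
        return random.choices(transitions, weights=priorities, k=batch_size)

buf = PrioritizedReplayBuffer()
for i in range(5):
    buf.push(("state", "action", float(i), "next_state"), priority=i + 1)
batch = buf.sample(3)  # biased toward higher-priority transitions
```

If the paper's "improvement" amounts only to a weighting scheme like this, that would explain the reviewer's complaint that the sample selection "seems straightforward" relative to the prioritized-replay literature.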

Floor 2 · 2021-12-15 17:44:11
Floor 3 · 2021-12-15 18:15:41
Review 2

Strong Aspects (Comments to the author: What are the strong aspects of the paper?)
In this paper, the authors propose experience-based computational offloading with reinforcement learning in an MEC network.

Weak Aspects (Comments to the author: What are the weak aspects of the paper?)
1. In (11), it seems that the discount factor is 1, while the discount factor is defined on [0, 1] in (12). This is not very clear.
2. Some symbols are undefined, e.g., the immediate reward r_t and the symbol \wedge in (15).
3. There are some flaws in the presentation, e.g., a doubled "the task" in Section II-B; also, the action should be defined in lowercase.
4. In Algorithm 1, the meaning of "undated" (presumably a typo for "updated") is not clear.
5. It would be better to compare the proposed algorithm with DQN rather than DDPG.

Recommended Changes (Recommended changes. Please indicate any changes that should be made to the paper if accepted.)
The reviewer repeats the same five comments as under Weak Aspects, prefaced by: "In this paper, the authors proposed an experience-based computational offloading with reinforcement learning in MEC network. The reviewer has the following comments."
Floor 4 · 2021-12-15 18:22:55
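Review 2's first point concerns the discount factor: in Q-learning-style methods the one-step target is r_t + γ · max_a Q(s_{t+1}, a), and γ must lie in [0, 1]. The reviewer is flagging that equation (11) appears to fix γ = 1 while (12) defines γ on [0, 1]. A minimal generic sketch (the paper's equations are not shown here, so this only illustrates where γ enters):

```python
def q_target(reward, next_q_max, gamma=0.9):
    """One-step TD target: r_t + gamma * max_a Q(s_{t+1}, a).

    gamma must lie in [0, 1]; gamma == 1 means future rewards
    are not discounted at all, which is what the reviewer suspects
    equation (11) implicitly assumes.
    """
    assert 0.0 <= gamma <= 1.0, "discount factor must be in [0, 1]"
    return reward + gamma * next_q_max

# With gamma = 1 the future value is fully counted:
q_target(1.0, 2.0, gamma=1.0)   # 3.0
# With gamma = 0.5 the future value is halved:
q_target(1.0, 2.0, gamma=0.5)   # 2.0
```

A common fix in a revision is simply to carry the symbol γ through (11) explicitly and state its range once, so (11) and (12) are consistent.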











