
Views: 1369  |  Replies: 3
[Bounty] Answer the question in this thread and the author he1wen2zhi will award you 5 coins

he1wen2zhi

Newbie (Fledgling Writer)

[Help] Major revision due in 20 days, 3 reviewers. How should I revise?

Which aspects should I focus on when revising?
What experiments should I add?
Reviewer #3 says I did not run experiments on the dataset I proposed, but my paper actually states that I trained on my own dataset. How can I respond to this reasonably?
I would really appreciate any advice.

The three reviewers' comments are as follows:
Reviewer #1: This paper presents an audio-visual cross-modality generation method for talking face videos with rhythmic head motion.
The studied topic is meaningful.
The authors are suggested to further improve the paper from the following aspects.

The quality evaluation of the generated audio-visual talking heads is very important for the method design. The authors have used some criteria for evaluation. The authors may discuss whether it is possible to use existing quality assessment methods for evaluation, for example, the audio-visual quality assessment methods proposed in 'Study of subjective and objective quality assessment of audio-visual signals' and 'Attention-Guided Neural Networks for Full-Reference and No-Reference Audio-Visual Quality Assessment'.
The authors are suggested to give some discussions on this aspect and the above works.

'The proposed method demonstrates improved performance in terms of video quality compared to traditional approaches'
Some discussion of visual quality assessment is suggested here, considering that there are many visual quality assessment studies in the literature, for example, 'Blind quality assessment based on pseudo-reference image', 'Blind image quality estimation via distortion aggravation', 'Unified blind quality assessment of compressed natural, graphic, and screen content images', 'Objective quality evaluation of dehazed images', 'Quality evaluation of image dehazing methods using synthetic hazy images'.
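For readers unfamiliar with the metrics these reviews keep citing: a classical full-reference score such as PSNR is the usual baseline alongside learned quality models. A minimal pure-Python sketch with made-up pixel data (a generic illustration, not code from any of the cited works):

```python
import math
import random

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return math.inf  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

random.seed(0)
ref = [random.randrange(256) for _ in range(4096)]   # hypothetical 64x64 frame
noisy = [min(255, max(0, p + random.randint(-5, 5))) for p in ref]
score = psnr(ref, noisy)  # roughly high-30s dB for this noise level
```

Learned audio-visual quality models go further than PSNR (they account for perception and cross-modal sync), but a simple metric like this clarifies what "full-reference evaluation" means in the reviewer's request.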

Following the above comments, the quality assessment of multimedia signals is also highly relevant to this work, thus some surveys for quality assessment are suggested to be given in the introduction section of the paper, for example, 'Perceptual image quality assessment: a survey', 'Screen content quality assessment: overview, benchmark, and beyond'.

Audio-visual attention is critical for various audio-visual applications. Many audio-visual attention prediction methods have been proposed, for example, 'A multimodal saliency model for videos with high audio-visual correspondence', 'Fixation prediction through multimodal analysis'. The authors may give some discussions on the possibility of using audio-visual attention prediction methods to improve the proposed method.
The authors are suggested to give some discussions on this aspect and the above works.




Reviewer #2: This paper addresses the generation of realistic talking facial videos by incorporating audio and head pose information. Existing methods lack natural head pose generation and audio synchronization, impacting video realism. The authors propose Flow2Flow, an autoregressive method that encodes audio and historical head poses using a multimodal transformer block with cross-attention. They introduce AVVS, a large-scale dataset for investigating rhythmic head movement patterns. The proposed method generates identity-independent facial motion representations, enabling photo-realistic videos with natural head poses and accurate lip-syncing, as demonstrated through experiments and comparisons with state-of-the-art approaches on public datasets. However, some concerns should be addressed.
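The multimodal cross-attention fusion the summary mentions can be sketched in a few lines of pure Python. This is a generic scaled-dot-product illustration with made-up toy features, not the paper's actual Flow2Flow block:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: each query vector (e.g. an audio
    feature) attends over key/value vectors (e.g. past head-pose features)."""
    d = len(queries[0])
    fused = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        fused.append([sum(w * v[i] for w, v in zip(weights, values)) for i in range(d)])
    return fused

audio = [[1.0, 0.0], [0.0, 1.0]]   # two hypothetical audio-frame features
poses = [[1.0, 0.0], [0.5, 0.5]]   # two hypothetical past head-pose features
fused = cross_attention(audio, poses, poses)
```

Because the output is a convex combination of the value vectors, each modality's query is pulled toward the pose history it attends to; this is the mechanism the reviewer's summary refers to when it says audio and historical head poses are encoded with cross-attention.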

The organization of the paper could benefit from improvement; e.g., part of the video synthesis is introduced in the feature encoding section.

The authors pointed out that the full attention structure in the model excessively focuses on a single source during integration, leading to the neglect of crucial information from other modalities. As a result, accurately generating movements for the facial generation task becomes challenging. It would be helpful to provide supporting evidence or examples to further illustrate this issue.

Instead of delving into the intricacies of flow theory, it would be more beneficial to focus on incorporating references in the facial attribute generation process.

The model utilizes 15 neutral keypoints as facial attributes. It would be valuable for the authors to explore the impact of varying the number of keypoints and investigate whether incorporating certain 3DMM parameters and other types of audio features would enhance the results.

The authors have primarily focused on discussing the applications of common loss functions. However, IQA models also have wide-ranging applications in evaluating generative image, video, audio, and multimedia models, e.g., "Blind image quality assessment via cross-view consistency" and "Comparative perceptual assessment of visual signals using free energy features." The authors are suggested to give some discussions on this aspect and the above works. Additionally, considering the significance of the attention mechanism, the authors are encouraged to provide discussions on related works like "Toward visual behavior and attention understanding for augmented 360-degree videos," "Viewing behavior supported visual saliency predictor for 360-degree videos," and "Learning a deep agent to predict head movement in 360-degree images."



Reviewer #3:
Introduction:
This paper proposes a normalizing-flow-based network to generate realistic talking face videos, using audio and past head poses as inputs.
Besides, the authors also contribute a solo-singing-themed audio-visual dataset called AVVS for research.
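For context on why a normalizing flow is attractive here: an affine coupling layer is exactly invertible and has a cheap log-determinant, which keeps likelihood training tractable. A toy sketch with fixed parameters standing in for the conditioning network (purely illustrative, not the paper's model):

```python
import math

def coupling_forward(x, shift, log_scale):
    """One affine coupling layer: split x in half, affinely transform the
    second half conditioned on the first. In a real flow, shift/log_scale
    come from a network applied to x1; here they are fixed for illustration."""
    half = len(x) // 2
    x1, x2 = x[:half], x[half:]
    y2 = [xi * math.exp(s) + t for xi, s, t in zip(x2, log_scale, shift)]
    log_det = sum(log_scale)  # log |det Jacobian| is just the sum of log-scales
    return x1 + y2, log_det

def coupling_inverse(y, shift, log_scale):
    """Exact inverse: undo the affine map on the second half."""
    half = len(y) // 2
    y1, y2 = y[:half], y[half:]
    x2 = [(yi - t) * math.exp(-s) for yi, s, t in zip(y2, log_scale, shift)]
    return y1 + x2

x = [0.3, -1.2, 0.8, 2.0]
y, ld = coupling_forward(x, shift=[0.1, -0.4], log_scale=[0.5, 0.2])
x_rec = coupling_inverse(y, shift=[0.1, -0.4], log_scale=[0.5, 0.2])
```

Stacking many such layers (with the split alternating) gives an expressive, exactly invertible map, which is the property that makes flow-based motion generation "convincing" in the sense the reviewer notes below.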

Strength:
1. Experimental results do show that their method can generate photo-realistic videos with natural head poses and lip-syncing, and the performance looks good.
2. Utilizing a normalizing flow model is novel and convincing.

Weakness:
1. It is kind of strange that I do not see any experiments on the AVVS dataset. Since you are proposing a dataset, I think some experiments should be conducted on it.

半生梦君

Newbie (Professional Writer)

For the third reviewer, first thank them for their comments, then point to the exact places in your paper where you used your own dataset.

Sent from the MuChong Android client
#2 · 2023-08-10 15:01:44

nono2009

Super Moderator (Literary Master)

No gains, no pains.


[Answer] Helpful reply
Revise whatever you can; for what you cannot revise, explain your reasons sincerely.

Sent from the MuChong Android client
#3 · 2023-08-11 07:24:56

Mr_jianye

Newbie (Regular Writer)

#4 · 2024-01-04 12:11:30