Views: 1238 | Replies: 3
[Bounty] For answering the question in this thread, the author he1wen2zhi will award you 5 gold coins.
[Help Request]
Major revision due in 20 days, three reviewers. How should I revise? (1 person has participated)
Could the experts here advise me on which aspects to focus the revision on, and which experiments to add? The third reviewer says I did not run experiments on the dataset I proposed, but my paper actually states that training was done on my own dataset. How can I respond to this reasonably? Any suggestions would be much appreciated. The three review reports are as follows:

Reviewer #1: This paper presents an audio-visual cross-modality generation method for talking-face videos with rhythmic head movements. The studied topic is meaningful. The authors are suggested to further improve the paper from the following aspects. The quality evaluation of the generated audio-visual talking heads is very important for the method design. The authors have used some criteria for evaluation. The authors may discuss whether it is possible to use quality assessment methods for evaluation, for example the audio-visual quality assessment methods proposed in 'Study of subjective and objective quality assessment of audio-visual signals' and 'Attention-Guided Neural Networks for Full-Reference and No-Reference Audio-Visual Quality Assessment'. The authors are suggested to discuss this aspect and the above works.

Regarding the claim 'The proposed method demonstrates improved performance in terms of video quality compared to traditional approaches', some discussion of visual quality assessment is suggested here, considering that there are many visual quality assessment studies in the literature, for example 'Blind quality assessment based on pseudo-reference image', 'Blind image quality estimation via distortion aggravation', 'Unified blind quality assessment of compressed natural, graphic, and screen content images', 'Objective quality evaluation of dehazed images', and 'Quality evaluation of image dehazing methods using synthetic hazy images'. Following the above comments, the quality assessment of multimedia signals is also highly relevant to this work, so some surveys on quality assessment are suggested for the introduction section, for example 'Perceptual image quality assessment: a survey' and 'Screen content quality assessment: overview, benchmark, and beyond'.
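One concrete way to answer Reviewer #1's quality-assessment point is to report standard full-reference metrics alongside a discussion of the cited works. As a minimal sketch (illustrative only, not necessarily the metrics the paper uses), PSNR between a generated frame and a reference frame can be computed as:

```python
import numpy as np

def psnr(reference: np.ndarray, generated: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two frames of equal shape.

    Higher is better; identical frames give +inf.
    """
    mse = np.mean((reference.astype(np.float64) - generated.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((max_val ** 2) / mse)

# Toy example: an 8-bit frame and a slightly noisier copy of it.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64, 3))
noisy = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255)
quality = psnr(ref, noisy)
```

In a rebuttal, per-frame scores like this would typically be averaged over all frames of each generated video and reported next to the perceptual metrics the reviewer cites.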
Audio-visual attention is critical for various audio-visual applications. Many audio-visual attention prediction methods have been proposed, for example 'A multimodal saliency model for videos with high audio-visual correspondence' and 'Fixation prediction through multimodal analysis'. The authors may discuss the possibility of using audio-visual attention prediction methods to improve the proposed method, with reference to the above works.

Reviewer #2: This paper addresses the generation of realistic talking facial videos by incorporating audio and head-pose information. Existing methods lack natural head-pose generation and audio synchronization, impacting video realism. The authors propose Flow2Flow, an autoregressive method that encodes audio and historical head poses using a multimodal transformer block with cross-attention. They introduce AVVS, a large-scale dataset for investigating rhythmic head-movement patterns. The proposed method generates identity-independent facial motion representations, enabling photo-realistic videos with natural head poses and accurate lip-syncing, as demonstrated through experiments and comparisons with state-of-the-art approaches on public datasets. However, some concerns should be addressed. The organization of the paper could be improved; for example, part of the video synthesis is introduced in the feature-encoding section. The authors point out that the full-attention structure in the model focuses excessively on a single source during integration, leading to the neglect of crucial information from other modalities, which makes accurately generating movements for the facial generation task challenging. It would be helpful to provide supporting evidence or examples to further illustrate this issue.
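For context on the cross-attention mechanism Reviewer #2 refers to: in cross-attention, queries come from one modality and keys/values from another, so each pose step is forced to weight the audio frames rather than attend over a single concatenated sequence. A minimal single-head sketch with toy dimensions and random weights (not the paper's actual Flow2Flow block):

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries: np.ndarray, keys_values: np.ndarray,
                    wq: np.ndarray, wk: np.ndarray, wv: np.ndarray) -> np.ndarray:
    """One cross-attention head: pose queries attend to audio keys/values.

    queries:     (T_pose, d_model)  e.g. historical head-pose embeddings
    keys_values: (T_audio, d_model) e.g. audio-frame embeddings
    """
    q = queries @ wq          # (T_pose, d_k)
    k = keys_values @ wk      # (T_audio, d_k)
    v = keys_values @ wv      # (T_audio, d_k)
    scores = q @ k.T / np.sqrt(q.shape[-1])  # (T_pose, T_audio)
    weights = softmax(scores, axis=-1)       # each row sums to 1 over audio frames
    return weights @ v                       # (T_pose, d_k)

rng = np.random.default_rng(0)
d_model, d_k = 16, 8
pose = rng.normal(size=(4, d_model))    # 4 past head-pose frames
audio = rng.normal(size=(10, d_model))  # 10 audio frames
wq, wk, wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out = cross_attention(pose, audio, wq, wk, wv)  # shape (4, 8)
```

A rebuttal could support the "full attention over-focuses on one source" claim empirically by visualizing these attention weight rows for a full-attention baseline versus the cross-attention block.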
Instead of delving into the intricacies of flow theory, it would be more beneficial to focus on incorporating references in the facial attribute generation process. The model utilizes 15 neutral keypoints as facial attributes. It would be valuable for the authors to explore the impact of varying the number of keypoints and to investigate whether incorporating certain 3DMM parameters and other types of audio features would enhance the results. The authors have primarily discussed the applications of common loss functions. However, IQA models also have wide-ranging applications in evaluating generative image, video, audio, and multimedia models, e.g., 'Blind image quality assessment via cross-view consistency' and 'Comparative perceptual assessment of visual signals using free energy features'. The authors are suggested to discuss this aspect and the above works. Additionally, considering the significance of the attention mechanism, the authors are encouraged to discuss related works such as 'Toward visual behavior and attention understanding for augmented 360-degree videos', 'Viewing behavior supported visual saliency predictor for 360-degree videos', and 'Learning a deep agent to predict head movement in 360-degree images'.

Reviewer #3:
Introduction: This paper proposes a normalizing-flow-based network to generate realistic talking-face videos, using audio and past head poses as inputs. The authors also contribute a solo-singing-themed audio-visual dataset called AVVS for research.
Strengths: 1. Experimental results show that the method can generate photo-realistic videos with natural head poses and lip-syncing, and the performance looks good. 2. Utilizing a normalizing-flow model is novel and convincing.
Weakness: 1. It is somewhat strange that I do not see any experiments on the AVVS dataset. Since a dataset is being proposed, some experiments should be conducted on it.
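For readers unfamiliar with the normalizing flows that Reviewers #2 and #3 mention: a flow maps data through an invertible transform to a simple base density, and the exact likelihood follows from the change-of-variables rule. A toy 1-D affine flow (illustrative only, unrelated to the paper's actual Flow2Flow architecture) makes the rule concrete:

```python
import numpy as np

def affine_flow_logpdf(x: np.ndarray, shift: float, scale: float) -> np.ndarray:
    """Log-density of x under a 1-D affine normalizing flow.

    The flow maps data to a standard-normal base: z = (x - shift) / scale.
    Change of variables gives log p(x) = log N(z; 0, 1) + log |dz/dx|,
    where |dz/dx| = 1 / scale.
    """
    z = (x - shift) / scale
    log_base = -0.5 * (z ** 2 + np.log(2 * np.pi))  # standard-normal log-pdf
    log_det = -np.log(scale)                        # log |dz/dx|
    return log_base + log_det

# The result equals the log-pdf of Normal(shift, scale) evaluated directly,
# and the implied density integrates to 1.
x = np.linspace(-3.0, 3.0, 7)
lp = affine_flow_logpdf(x, shift=1.0, scale=2.0)
```

Deep flows stack many such invertible layers with learned parameters; the log-determinant terms simply accumulate, which is what makes exact maximum-likelihood training of these models tractable.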
半生梦君
Newbie (Professional Writer)
Reply #2 · 2023-08-10 15:01:44
nono2009
Super Moderator (Literary Master)
No gains, no pains.
Reply #3 · 2023-08-11 07:24:56
Mr_jianye
Newbie (Regular Writer)
Reply #4 · 2024-01-04 12:11:30