Strange artifacts in result #9

Open · Vadim2S opened this issue Aug 31, 2022 · 15 comments

Vadim2S commented Aug 31, 2022

I ran the script process_data.sh for "cnn", then ran the script rendering.sh with the parameters
iters="french_500000_head.tar" names="french" datasets="cnn"
and got the following image:

[image: frame_405]

What are these body-colored artifacts around the head, and how can I remove them?

sstzal (Owner) commented Sep 1, 2022

The dataset and the model you used do not match.

Vadim2S (Author) commented Sep 1, 2022

Sorry, I have clearly missed something. Could you clarify, please?

What must I do to get a result video in which the man from the "cnn.mp4" video speaks with the voice from the "english_m.mp4" video?

zhanchao019 commented

> What must I do to get a result video in which the man from the "cnn.mp4" video speaks with the voice from the "english_m.mp4" video?

Hi, I encountered the same artifacts problem. Have you solved this issue?

YifengMa9 commented Oct 18, 2022

> What must I do to get a result video in which the man from the "cnn.mp4" video speaks with the voice from the "english_m.mp4" video?

The dataset and model must match, because the model uses the training data as a reference when generating new videos.

sstzal (Owner) commented Oct 18, 2022

> What must I do to get a result video in which the man from the "cnn.mp4" video speaks with the voice from the "english_m.mp4" video?

If you use iters="french_500000_head.tar" names="french", then you should also use datasets="french".
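
For reference, that means the three identity-related parameters in rendering.sh must all point at the same subject. A minimal sketch (parameter names are the ones shown in this thread; the comments are assumptions):

```bash
# Sketch: checkpoint, experiment name, and dataset must refer to the same identity.
# The french_500000_head.tar checkpoint was trained on the "french" dataset,
# so all three variables below name "french".
iters="french_500000_head.tar"   # trained model checkpoint
names="french"                   # experiment name
datasets="french"                # must match the data the checkpoint was trained on
```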

flyingshan commented

I have the same question. Could you clarify how we should set the parameters in rendering.sh to drive the talking head of man A with the audio from man B?

Hoping for your reply, thank you!

sstzal (Owner) commented Oct 20, 2022

> Could you clarify how we should set the parameters in rendering.sh to drive the talking head of man A with the audio from man B?

To drive the talking head of man A with the audio from man B, you only need to change aud.npy to the one from B, keeping the other parameters the same.
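
In practice that swap is a plain file replacement before rendering. A minimal sketch, assuming the dataset/<name>/0/ layout used in the rendering.sh later in this thread (the A/B directory names are placeholders):

```bash
# Sketch: keep A's trained model and dataset, substitute B's audio features.
cp dataset/A/0/aud.npy dataset/A/0/aud_backup.npy   # keep A's original features
cp dataset/B/0/aud.npy dataset/A/0/aud.npy          # drive A with B's audio
bash rendering.sh                                   # re-render with unchanged parameters
```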

flyingshan commented

Thank you!

kyuhyoung commented

> To drive the talking head of man A with the audio from man B, you only need to change aud.npy to the one from B, keeping the other parameters the same.

What about sync between the output images and the audio from B? Let's say A's aud.npy is 100 frames long and B's aud.npy is 150 frames long. If we keep everything the same except aud.npy, the output will be 100 frames, corresponding to the first 100 frames of B's aud.npy. Am I wrong?

lenismerino commented

> To drive the talking head of man A with the audio from man B, you only need to change aud.npy to the one from B, keeping the other parameters the same.

This is very useful; it is what I was looking for. I suggest adding it to the README file.

AIMads commented Dec 14, 2022

So to clarify: if you want to test with a new audio file, you run the preprocessing script on the new audio, then replace the aud.npy file in the dataset you used for training, and then run the rendering script again?
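
Spelled out as commands, that workflow would look roughly like this (a sketch; the process_data.sh invocation and the dataset paths are assumptions based on the scripts named in this thread):

```bash
# Sketch of the three-step workflow described above (paths are placeholders).
bash process_data.sh new_clip                      # 1. extract features for the new audio
cp dataset/new_clip/0/aud.npy dataset/A/0/aud.npy  # 2. replace the training dataset's aud.npy
bash rendering.sh                                  # 3. render again with the same model
```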

exceedzhang commented Jan 12, 2023

> To drive the talking head of man A with the audio from man B, you only need to change aud.npy to the one from B, keeping the other parameters the same.

I conducted the experiment following the method you described: a model was pre-trained on video A, and then video B's audio was used to replace video A's audio, but the video generated this way is the same as before. Do other files need to be modified?

rendering.sh:

```bash
#!/bin/bash
iters="105000_head.tar"
names="20230112"
datasets="20230111"
near=0.7494858980178833
far=1.3494858980178832
path="dataset/finetune_models/${datasets}/${iters}"
datapath="dataset/${datasets}/0"
bc_type="torso_imgs"
suffix="val"
python NeRFs/run_nerf_deform.py --need_torso True --config dataset/test_config.txt --expname ${names}${suffix} --expname_finetune ${names}${suffix} --render_only --ft_path ${path} --datadir ${datapath} --bc_type ${bc_type} --near ${near} --far ${far}
```

[image]

sstzal (Owner) commented Apr 20, 2023

> Let's say A's aud.npy is 100 frames long and B's aud.npy is 150 frames long. If we keep everything the same except aud.npy, the output will be 100 frames, corresponding to the first 100 frames of B's aud.npy. Am I wrong?

You are right.
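
So if B's audio is longer than A's sequence, the render stops at A's frame count; trimming B's features to the same length beforehand keeps audio and video aligned. A minimal sketch, assuming aud.npy is a NumPy array indexed by frame (the paths and frame count are placeholders):

```bash
# Sketch: truncate B's audio features to A's frame count before rendering.
python - <<'EOF'
import numpy as np

aud_b = np.load("dataset/B/0/aud.npy")  # B's per-frame audio features
n_frames_a = 100                        # number of frames in A's dataset (placeholder)
np.save("dataset/A/0/aud.npy", aud_b[:n_frames_a])
EOF
```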

sstzal (Owner) commented Apr 20, 2023

> A model was pre-trained on video A, and then video B's audio was used to replace video A's audio, but the video generated this way is the same as before. Do other files need to be modified?

Add --aud_file xxx.npy (the specific audio you want to use) after python NeRFs/run_nerf_deform.py.
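
Applied to the rendering.sh posted above, the last line would become something like this (a sketch; the file passed to --aud_file is a placeholder for your own driving-audio features):

```bash
# Sketch: pass the driving audio features explicitly via --aud_file.
python NeRFs/run_nerf_deform.py --need_torso True --config dataset/test_config.txt \
    --expname ${names}${suffix} --expname_finetune ${names}${suffix} --render_only \
    --ft_path ${path} --datadir ${datapath} --bc_type ${bc_type} \
    --near ${near} --far ${far} --aud_file dataset/B/0/aud.npy
```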

jinlingxueluo commented

> I conducted the experiment following the method you described: a model was pre-trained on video A, and then video B's audio was used to replace video A's audio, but the video generated this way is the same as before.

Hello, may I ask which versions of the dependency libraries you used when running the experiments? Could there be a dependency conflict? Thank you!
