TMO

This is the official PyTorch implementation of our papers:

Treating Motion as Option to Reduce Motion Dependency in Unsupervised Video Object Segmentation, WACV 2023
Suhwan Cho, Minhyeok Lee, Seunghoon Lee, Chaewon Park, Donghyeong Kim, Sangyoun Lee
Link: [WACV] [arXiv]

Treating Motion as Option with Output Selection for Unsupervised Video Object Segmentation, arXiv 2023
Suhwan Cho, Minhyeok Lee, Jungho Lee, MyeongAh Cho, Sangyoun Lee
Link: [arXiv]

You can also find other related papers at awesome-video-object-segmentation.

Abstract

In unsupervised VOS, most state-of-the-art methods leverage motion cues obtained from optical flow maps in addition to appearance cues. However, because they depend heavily on motion cues, which can be unreliable in some cases, they cannot produce stable predictions. To overcome this limitation, we design a novel motion-as-option network that is not heavily dependent on motion cues, together with a collaborative network learning strategy that fully exploits its unique property. Additionally, an adaptive output selection algorithm is proposed to maximize the efficacy of the motion-as-option network at test time.
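As a rough illustration of the motion-as-option idea, the sketch below randomly feeds the RGB frame instead of the optical flow map to the motion stream during training, so the network does not learn to rely on flow always being available or reliable. The module definitions and the substitution probability are placeholders for illustration, not the actual TMO implementation.

import random
import torch
import torch.nn as nn

class MotionAsOptionNet(nn.Module):
    # Toy two-stream network: appearance encoder + motion encoder + decoder.
    # The layers here are placeholders standing in for the real TMO components.
    def __init__(self):
        super().__init__()
        self.appearance_encoder = nn.Conv2d(3, 64, 3, padding=1)   # placeholder
        self.motion_encoder = nn.Conv2d(3, 64, 3, padding=1)       # placeholder
        self.decoder = nn.Conv2d(128, 1, 3, padding=1)             # placeholder

    def forward(self, rgb, flow=None, p_substitute=0.5):
        # Motion as option: during training, with some probability feed the RGB
        # frame to the motion encoder instead of the flow map, so the motion
        # stream also learns to operate from appearance cues alone.
        if flow is None or (self.training and random.random() < p_substitute):
            motion_input = rgb
        else:
            motion_input = flow
        a = self.appearance_encoder(rgb)
        m = self.motion_encoder(motion_input)
        return torch.sigmoid(self.decoder(torch.cat([a, m], dim=1)))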

Preparation

1. Download DUTS, DAVIS, FBMS, YouTube-Objects, and Long-Videos from the official websites.

2. Estimate and save optical flow maps from the videos using RAFT (a minimal flow pre-computation sketch is shown after this list).

3. For convenience, I also provide the pre-processed DUTS, DAVIS, FBMS, YouTube-Objects, and Long-Videos.

4. Replace the dataset paths in the "run.py" file with your own dataset paths.
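If you estimate the flow maps yourself, the sketch below shows one way to do it with RAFT's demo-style API (it assumes the princeton-vl/RAFT repository's "core" directory is on PYTHONPATH). The checkpoint name, the example frame paths, and the output folder layout are assumptions to adapt to your setup; the pre-processed downloads above already include flow maps.

import argparse, glob, os
import numpy as np
import torch
from PIL import Image
from raft import RAFT                      # from RAFT's core/ directory
from utils.utils import InputPadder
from utils import flow_viz

DEVICE = 'cuda'

def load_image(path):
    img = np.array(Image.open(path)).astype(np.uint8)
    return torch.from_numpy(img).permute(2, 0, 1).float()[None].to(DEVICE)

args = argparse.Namespace(small=False, mixed_precision=False, alternate_corr=False)
model = torch.nn.DataParallel(RAFT(args))
model.load_state_dict(torch.load('models/raft-things.pth'))    # checkpoint name is an assumption
model = model.module.to(DEVICE).eval()

frames = sorted(glob.glob('DAVIS/JPEGImages/480p/bear/*.jpg'))  # example sequence; adjust to your layout
out_dir = 'DAVIS/FlowMaps/480p/bear'                            # output layout is an assumption
os.makedirs(out_dir, exist_ok=True)

with torch.no_grad():
    for f1, f2 in zip(frames[:-1], frames[1:]):
        im1, im2 = load_image(f1), load_image(f2)
        padder = InputPadder(im1.shape)
        im1, im2 = padder.pad(im1, im2)
        _, flow_up = model(im1, im2, iters=20, test_mode=True)
        flow = padder.unpad(flow_up)[0].permute(1, 2, 0).cpu().numpy()
        flow_img = flow_viz.flow_to_image(flow)                 # save flow as a 3-channel image
        Image.fromarray(flow_img).save(
            os.path.join(out_dir, os.path.basename(f1).replace('.jpg', '.png')))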

Training

1. Open the "run.py" file.

2. Specify the model version.

3. Verify the training settings.

4. Start TMO training!

python run.py --train
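Training is driven entirely by "run.py". Purely to illustrate the collaborative network learning strategy mentioned in the abstract, a hypothetical training step mixing a saliency image batch (e.g., DUTS, no flow available, so the RGB frame feeds the motion stream) and a video batch (e.g., DAVIS, with pre-computed flow) might look like the sketch below. The loaders, loss, and model are assumptions reusing the toy model sketched earlier, not the repo's actual training loop.

import torch
import torch.nn.functional as F

def train_step(model, optimizer, duts_batch, davis_batch):
    # duts_batch: (rgb, mask) image pairs with no flow maps.
    # davis_batch: (rgb, flow, mask) video triplets with pre-computed flow maps.
    rgb_i, mask_i = duts_batch
    rgb_v, flow_v, mask_v = davis_batch

    # Image batch: no flow exists, so the motion stream receives the RGB frame.
    pred_i = model(rgb_i, flow=None)
    # Video batch: flow is available, but may still be randomly replaced by RGB
    # inside the model (see the motion-as-option sketch above).
    pred_v = model(rgb_v, flow=flow_v)

    loss = F.binary_cross_entropy(pred_i, mask_i) + F.binary_cross_entropy(pred_v, mask_v)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()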

Testing

1. Open the "run.py" file.

2. Specify the model version and the "aos" (adaptive output selection) option.

3. Choose a pre-trained model.

4. Start TMO testing!

python run.py --test
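The "aos" option corresponds to the adaptive output selection described in the extended (arXiv 2023) paper: at test time the network can be queried both with the flow map and with the RGB frame fed to the motion stream, and one of the two outputs is selected. The selection rule below (keeping the more confident prediction) is only an illustrative assumption on top of the toy model sketched earlier; the paper's actual criterion may differ.

import torch

def adaptive_output_selection(model, rgb, flow):
    # Query the model twice: once with optical flow, once with the RGB frame
    # substituted into the motion stream.
    with torch.no_grad():
        pred_with_flow = model(rgb, flow=flow)   # motion stream sees optical flow
        pred_without = model(rgb, flow=None)     # motion stream sees the RGB frame

    # Assumed criterion: mean distance of each pixel from the 0.5 decision
    # boundary of the sigmoid output, i.e., how confident the prediction is.
    conf_flow = (pred_with_flow - 0.5).abs().mean()
    conf_rgb = (pred_without - 0.5).abs().mean()
    return pred_with_flow if conf_flow >= conf_rgb else pred_without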

Attachments

pre-trained model (rn101)
pre-trained model (mitb1)
pre-computed results

Note

Code and models are only available for non-commercial research purposes.
If you have any questions, please feel free to contact me :)

E-mail: suhwanx@gmail.com
