
xhonghu/SignDAGC_SLT-master


Prerequisites

  • This project is implemented in PyTorch (>1.8), so please install PyTorch first.
  • ctcdecode==0.4 [parlance/ctcdecode], used for beam-search decoding.
  • If installing ctcdecode via pip fails (which happens frequently), you can download ctcdecode here, unzip it, then run cd ctcdecode and pip install . (see the consolidated sketch after this list).
  • Please follow this link to install PyTorch Geometric.
  • You can install the other required modules with pip install -r requirements.txt and pip install transformers.
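A consolidated sketch of the installation steps, assuming PyTorch is already installed and a local copy of ctcdecode has been unzipped into ./ctcdecode (adjust package versions to match your CUDA/PyTorch setup):

cd ctcdecode && pip install . && cd ..    # local build, in case the pip package fails
pip install torch-geometric               # see the PyTorch Geometric docs for the wheel matching your PyTorch/CUDA
pip install -r requirements.txt
pip install transformers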

Data Preparation

  1. PHOENIX2014-T dataset: Download the RWTH-PHOENIX-Weather 2014 Dataset [download link]
  2. CSL dataset: Request the CSL Dataset from this website [download link]

Download the datasets and extract them; no further data preprocessing is needed.
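For example, a minimal extraction sketch, assuming the PHOENIX-2014-T download is a gzipped tarball named phoenix-2014-T.v3.tar.gz (the actual archive name may differ) and that the target directory is only illustrative:

mkdir -p dataset && tar -xzf phoenix-2014-T.v3.tar.gz -C dataset   # point the dataset paths in your config at the extracted folder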

Pretrained Models

  1. mbart_de / mbart_zh: pretrained language models used to initialize the translation network for German and Chinese, with weights from mbart-cc-25.
  2. We provide pretrained models for Phoenix-2014T and CSL-Daily.

Download these files and place them under pretrained_models. The directory structure is as follows:

|-- pretrained_models
|   |-- CSL-Daily
|   |   `-- best_model.pt  #Sign language recognition task weight
|   |-- CSL-Daily_g2t
|   |   `-- step_1000.ckpt  #Sign language translation pre-trained weights
|   |-- mBart_de
|   |   |-- config.json
|   |   |-- gloss2ids.pkl
|   |   |-- gloss_embeddings.bin
|   |   |-- map_ids.pkl
|   |   |-- pytorch_model.bin
|   |   |-- sentencepiece.bpe.model
|   |   `-- tokenizer.json
|   |-- mBart_zh
|   |   |-- config.json
|   |   |-- gloss2ids.pkl
|   |   |-- gloss_embeddings.bin
|   |   |-- old2new_vocab.pkl
|   |   |-- pytorch_model.bin
|   |   |-- sentence.bpe.model
|   |   `-- sentencepiece.bpe.model
|   |-- phoenix-2014T
|   |   `-- best_model.pt  #Sign language recognition task weight
|   `-- phoenix-2014T_g2t
|       `-- best.ckpt   #Sign language translation pre-trained weights
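As a quick sanity check, you can list the files you placed (optional; assumes a POSIX shell):

find pretrained_models -maxdepth 2 -type f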

Weights

Here we provide the performance of the models (on the test sets) and their corresponding weights.

| Dataset    | Backbone | ROUGE | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | Pretrained model |
|------------|----------|-------|--------|--------|--------|--------|------------------|
| Phoenix14T | Resnet34 | 53.01 | 54.85  | 42.28  | 34.24  | 28.68  | [Google Drive]   |
| CSL-Daily  | Resnet34 | 52.86 | 55.87  | 42.22  | 32.70  | 25.90  | [Google Drive]   |

Evaluate

To evaluate a pretrained model, first choose the dataset from phoenix2014/phoenix2014-T/CSL/CSL-Daily in line 3 of ./config/baseline.yaml, then run one of the commands below:
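For reference, the dataset selection in ./config/baseline.yaml is expected to look roughly like the following sketch; the exact key name is an assumption and may differ in the actual file:

dataset: phoenix2014-T   # line 3: one of phoenix2014 / phoenix2014-T / CSL / CSL-Daily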

python main.py --load-weights path_to_weight.pt --phase test

python main.py --load-weights ./phoenix2014-T/best_model.pt --phase test

python main.py --load-weights ./csl-daily/best_model.pt --phase test

Training

To train the SignDAGC model, first choose the dataset from phoenix2014/phoenix2014-T/CSL/CSL-Daily in line 3 of ./config/baseline.yaml, then run the command below:

python main.py

Multi-GPU distributed training (in practice, the results of distributed runs are not as good):

python -m torch.distributed.launch --nproc_per_node=2 main.py --device 0,1
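On recent PyTorch releases, torch.distributed.launch is deprecated in favor of torchrun; assuming main.py accepts the same arguments when launched this way, an equivalent single-node invocation would be:

torchrun --nproc_per_node=2 main.py --device 0,1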

Acknowledgments

Our code is based on SignGraph, GreedyViG, and TwoStream.
