(DAT) [wangduo@localhost DAT-main]$ python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 basicsr/train.py -opt options/Train/train_DAT_S_x3.yml --launcher pytorch
/home/wangduo/anaconda3/envs/DAT/lib/python3.8/site-packages/torch/distributed/launch.py:208: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects --local-rank argument to be set, please
change it to read from os.environ['LOCAL_RANK'] instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions
main()
W0908 20:14:24.356706 140432841385792 torch/distributed/run.py:779]
W0908 20:14:24.356706 140432841385792 torch/distributed/run.py:779] *****************************************
W0908 20:14:24.356706 140432841385792 torch/distributed/run.py:779] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0908 20:14:24.356706 140432841385792 torch/distributed/run.py:779] *****************************************
usage: train.py [-h] -opt OPT [--launcher {none,pytorch,slurm}] [--auto_resume] [--debug] [--local_rank LOCAL_RANK] [--force_yml FORCE_YML [FORCE_YML ...]]
train.py: error: unrecognized arguments: --local-rank=0
usage: train.py [-h] -opt OPT [--launcher {none,pytorch,slurm}] [--auto_resume] [--debug] [--local_rank LOCAL_RANK] [--force_yml FORCE_YML [FORCE_YML ...]]
train.py: error: unrecognized arguments: --local-rank=2
usage: train.py [-h] -opt OPT [--launcher {none,pytorch,slurm}] [--auto_resume] [--debug] [--local_rank LOCAL_RANK] [--force_yml FORCE_YML [FORCE_YML ...]]
train.py: error: unrecognized arguments: --local-rank=1
usage: train.py [-h] -opt OPT [--launcher {none,pytorch,slurm}] [--auto_resume] [--debug] [--local_rank LOCAL_RANK] [--force_yml FORCE_YML [FORCE_YML ...]]
train.py: error: unrecognized arguments: --local-rank=3
W0908 20:14:28.131211 140432841385792 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 116498 closing signal SIGTERM
W0908 20:14:28.132093 140432841385792 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 116499 closing signal SIGTERM
E0908 20:14:28.164023 140432841385792 torch/distributed/elastic/multiprocessing/api.py:833] failed (exitcode: 2) local_rank: 0 (pid: 116496) of binary: /home/wangduo/anaconda3/envs/DAT/bin/python
Traceback (most recent call last):
File "/home/wangduo/anaconda3/envs/DAT/lib/python3.8/runpy.py", line 192, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/wangduo/anaconda3/envs/DAT/lib/python3.8/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/wangduo/anaconda3/envs/DAT/lib/python3.8/site-packages/torch/distributed/launch.py", line 208, in
main()
File "/home/wangduo/anaconda3/envs/DAT/lib/python3.8/site-packages/typing_extensions.py", line 2499, in wrapper
return arg(*args, **kwargs)
File "/home/wangduo/anaconda3/envs/DAT/lib/python3.8/site-packages/torch/distributed/launch.py", line 204, in main
launch(args)
File "/home/wangduo/anaconda3/envs/DAT/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in launch
run(args)
File "/home/wangduo/anaconda3/envs/DAT/lib/python3.8/site-packages/torch/distributed/run.py", line 892, in run
elastic_launch(
File "/home/wangduo/anaconda3/envs/DAT/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 133, in call
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/wangduo/anaconda3/envs/DAT/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
basicsr/train.py FAILED
Failures:
[1]:
time : 2024-09-08_20:14:28
host : localhost.localdomain
rank : 1 (local_rank: 1)
exitcode : 2 (pid: 116497)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
Root Cause (first observed failure):
[0]:
time : 2024-09-08_20:14:28
host : localhost.localdomain
rank : 0 (local_rank: 0)
exitcode : 2 (pid: 116496)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
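The "unrecognized arguments: --local-rank" failures match what the FutureWarning at the top of the log describes: newer versions of torch.distributed.launch pass the rank to each worker as --local-rank (hyphen) and also export it as the LOCAL_RANK environment variable, while train.py only defines --local_rank (underscore), so every worker exits with code 2 before training starts. Below is a minimal sketch of the argument handling the warning suggests; it assumes train.py builds its options with argparse roughly as shown in the usage lines above, and the option names are taken from that usage string rather than from the repository's actual source.

import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument('-opt', type=str, required=True, help='Path to the option YAML file.')
parser.add_argument('--launcher', choices=['none', 'pytorch', 'slurm'], default='none')
parser.add_argument('--auto_resume', action='store_true')
parser.add_argument('--debug', action='store_true')
# Accept both spellings so torch.distributed.launch (which now passes --local-rank)
# and the old underscore form both parse; argparse stores either under args.local_rank.
parser.add_argument('--local_rank', '--local-rank', type=int, default=0)
parser.add_argument('--force_yml', nargs='+', default=None)
args = parser.parse_args()

# Newer launchers also export LOCAL_RANK; prefer it when present, as the warning advises.
local_rank = int(os.environ.get('LOCAL_RANK', args.local_rank))

With a change along those lines (or by reading os.environ['LOCAL_RANK'] wherever --local_rank is used), the same job should also start through the replacement launcher the warning recommends, for example: torchrun --nproc_per_node=4 --master_port=4321 basicsr/train.py -opt options/Train/train_DAT_S_x3.yml --launcher pytorch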
Hello, I keep getting an error saying that DATModel has not been correctly registered in MODEL_REGISTRY. Could you advise how to solve this?
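For reference, here is a minimal sketch of how a model class is normally put onto MODEL_REGISTRY in BasicSR-based code, assuming the DAT code follows the stock BasicSR layout; the file name and the base class chosen here are assumptions and may differ from the repository's actual source.

# Hypothetical basicsr/models/dat_model.py (the file name is an assumption).
# basicsr/models/__init__.py only auto-imports files under basicsr/models/ whose
# names end in "_model.py"; if the file defining the class is never imported,
# the decorator never runs and the "not registered in MODEL_REGISTRY" error appears.
from basicsr.models.sr_model import SRModel
from basicsr.utils.registry import MODEL_REGISTRY


@MODEL_REGISTRY.register()
class DATModel(SRModel):
    """Placeholder body; the real DATModel implementation goes here."""
    pass

If the class already carries the decorator, the usual causes are that the file lives outside basicsr/models/, that its name does not end in _model.py, or that the model_type entry in the training YAML does not exactly match the registered class name (DATModel).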