Hi,
Thanks for sharing this awesome repo. I'm running into memory problems when running run.sh on a single H100 GPU. I tried reducing the batch size to 1, but I still cannot fine-tune a 7B LLM. Any idea how to work around this?
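For context on why batch size 1 alone may not help: with full fine-tuning and a standard AdamW setup, the weights, gradients, and optimizer state dominate memory rather than the activations. A rough back-of-envelope estimate (this assumes fp32 weights, fp32 gradients, and fp32 AdamW states; the actual configuration in run.sh may differ):

```python
# Rough memory estimate for full fine-tuning of a 7B model with AdamW in fp32.
# Assumption for illustration only; run.sh may use a different precision/optimizer.
params = 7e9

weights = params * 4        # fp32 model weights
grads = params * 4          # fp32 gradients
adam_states = params * 8    # AdamW momentum + variance in fp32

total_gib = (weights + grads + adam_states) / 1024**3
print(f"~{total_gib:.0f} GiB before activations")   # ~104 GiB, already above the 80 GB of one H100
```

So even before counting activations, the training state alone exceeds a single 80 GB H100 under these assumptions.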
Hey, did you ever get around this? How much memory was required? I plan to drop from fp32 to bf16 and reduce the batch size, but I wanted to see if I could get any insight first. Thanks for the help!
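If the repo's training script is built on the Hugging Face `Trainer`, the usual memory-reduction knobs look roughly like the sketch below. This is a minimal, hypothetical configuration, not the repo's actual setup; run.sh may expose these options differently, and the optimizer choice requires `bitsandbytes` to be installed:

```python
from transformers import TrainingArguments

# Hypothetical settings for squeezing a 7B full fine-tune onto one GPU;
# the repo's run.sh may wire these differently or not support all of them.
args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,   # keep the effective batch size without the memory cost
    bf16=True,                        # bf16 mixed precision instead of fp32
    gradient_checkpointing=True,      # trade extra compute for less activation memory
    optim="adamw_bnb_8bit",           # 8-bit optimizer states via bitsandbytes
)
```

If that is still not enough, parameter-efficient fine-tuning (e.g. LoRA) or sharding/offloading with DeepSpeed ZeRO or FSDP are the usual next steps, assuming the repo supports them.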