[AutoParallel] Addn support AutoParallel #58434
Conversation
Your PR has been submitted successfully. Thank you for your contribution to the open-source project!
Reviewed code context:
loss = loss_fn(out, label)
loss.backward()
return loss, layer.w0.grad, layer.w1.grad
Shouldn't add_n be tested directly as an op? There is no need to put it inside the network definition.
Done, thx!
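For illustration, a minimal sketch of what exercising add_n directly as an op might look like (this is not the PR's actual test; the shapes and values are assumptions, and plain dense tensors stand in for the distributed tensors the real test exercises):

```python
import paddle

def run_add_n_case():
    # Call the op under test directly rather than embedding it in a layer's forward.
    x = paddle.ones([4, 8])
    y = paddle.full([4, 8], 2.0)
    x.stop_gradient = False
    y.stop_gradient = False
    out = paddle.add_n([x, y])  # op under test
    loss = out.sum()
    loss.backward()
    return out, x.grad, y.grad
```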
Huan, the files under phi::distributed are not compiled when WITH_DISTRIBUTE=OFF, so the dist_tensor branch here needs to be guarded with the #ifdef PADDLE_WITH_DISTRIBUTE conditional macro.
Done, thx!
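As a rough illustration of the requested guard (a self-contained stand-in, not the PR's code; the function name and the boolean flag below are assumptions), the dist_tensor branch is only compiled when the distributed build is enabled:

```cpp
#include <iostream>

// In Paddle this macro is defined by the build system when WITH_DISTRIBUTE=ON;
// uncomment the next line to simulate a distributed build.
// #define PADDLE_WITH_DISTRIBUTE

void AddNImpl(bool input_is_dist_tensor) {
#ifdef PADDLE_WITH_DISTRIBUTE
  // Anything that touches phi::distributed (e.g. DistTensor) must live inside
  // this guard, because those files are not compiled when WITH_DISTRIBUTE=OFF.
  if (input_is_dist_tensor) {
    std::cout << "dist_tensor branch\n";
    return;
  }
#endif
  std::cout << "dense tensor branch\n";
}

int main() {
  AddNImpl(false);
  return 0;
}
```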
… addn_disttensor
LGTM
* phi add_n support disttensor
PR types
Others
PR changes
Others
Description
The PHI API Addn supports AutoParallel.
Pcard-73145