Fix paddle.mode and paddle.bincount API #62995
Conversation
Your PR has been submitted successfully. Thank you for contributing to this open-source project!
paddle/phi/infermeta/binary.cc
Outdated
- out->set_dtype(weights.dtype());
+ if (weights.dtype() == DataType::FLOAT32) {
+   out->set_dtype(DataType::FLOAT32);
+ } else {
Is there any difference from out->set_dtype(weights.dtype());? The original version actually seems more concise.
This change follows this piece of logic in the kernel:
if (!has_weights) {
  int64_t* output_data = dev_ctx.template Alloc<int64_t>(output);
  phi::funcs::SetConstant<Context, int64_t>()(
      dev_ctx, output, static_cast<int64_t>(0));
  KernelBincount<T, InputT, int64_t>
      <<<GET_BLOCKS(input_numel), PADDLE_CUDA_NUM_THREADS, 0, stream>>>(
          input_data, input_numel, has_weights, weights_data, output_data);
} else {
  if (weights->dtype() == DataType::FLOAT32) {
    float* output_data = dev_ctx.template Alloc<float>(output);
    phi::funcs::SetConstant<Context, float>()(
        dev_ctx, output, static_cast<float>(0));
    KernelBincount<T, InputT, float>
        <<<GET_BLOCKS(input_numel), PADDLE_CUDA_NUM_THREADS, 0, stream>>>(
            input_data, input_numel, has_weights, weights_data, output_data);
  } else {
    double* output_data = dev_ctx.template Alloc<double>(output);
    phi::funcs::SetConstant<Context, double>()(
        dev_ctx, output, static_cast<double>(0));
    KernelBincount<T, InputT, double>
        <<<GET_BLOCKS(input_numel), PADDLE_CUDA_NUM_THREADS, 0, stream>>>(
            input_data, input_numel, has_weights, weights_data, output_data);
  }
}
}
The logic here differs from out->set_dtype(weights.dtype());.
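To make the difference concrete: in the kernel branch above, a float output is allocated only when weights is float32, and a double output for every other weights dtype, so out->set_dtype(weights.dtype()); would diverge from the kernel for, e.g., integer weights (the kernel writes a double output while infermeta would report an integer dtype). A minimal sketch of a selection that mirrors the kernel, using the same names as the snippet above (illustrative only, not the literal diff in this PR):
// Sketch only, mirroring the weighted branch of the kernel quoted above;
// this is illustrative and not the exact code changed in this PR.
if (weights.dtype() == DataType::FLOAT32) {
  out->set_dtype(DataType::FLOAT32);  // kernel allocates a float output
} else {
  out->set_dtype(DataType::FLOAT64);  // kernel allocates a double output
}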
Please add a note on what the dtype of weights is now.
Sorry to inform you that 2564443's CIs have passed for more than 7 days. To prevent PR conflicts, you need to re-run all CIs manually.
Additional details on the bincount error: ......
paddle.seed(33)
obj = naive_func
dy_out = obj(in_tensor, in_params, func)
paddle.seed(33)
jit_obj = paddle.jit.to_static(obj)
st_out = jit_obj(in_tensor, in_params, func)
print("dy_out is: ", dy_out)
print("st_out is: ", st_out)
paddle.jit.save(jit_obj, path="bincount")
print("jit.save is successfully !!!")
paddle.seed(33)
jit = paddle.jit.load("bincount")
print("jit.load is successfully !!!")
paddle.seed(33)
inputs_key = sorted(in_tensor.keys())
inputs_value = []
for k in inputs_key:
    inputs_value.append(in_tensor[k])
# print('inputs_value is: ', inputs_value)
res = jit(*inputs_value)
print('jit.load res: ', res)
compare(dy_out, res, delta=1e-5, rtol=1e-6)
The error is as follows:
Here we can see that in the scale op, the tensor's actual data type is inconsistent with the currently expected data type.
We suspect this is caused by how the output dtype is set in infermeta. In this case weights is empty and x.dtype is int32, so the output dtype was set to int32, which does not match the following logic in the kernel:
if (!has_weights) {
  int64_t* output_data = dev_ctx.template Alloc<int64_t>(output);
  phi::funcs::SetConstant<Context, int64_t>()(
      dev_ctx, output, static_cast<int64_t>(0));
  KernelBincount<T, InputT, int64_t>
      <<<GET_BLOCKS(input_numel), PADDLE_CUDA_NUM_THREADS, 0, stream>>>(
          input_data, input_numel, has_weights, weights_data, output_data);
}
Sorry to inform you that e9d0862's CIs have passed for more than 7 days. To prevent PR conflicts, you need to re-run all CIs manually.
PR Category
Others
PR Types
Others
Description
The paddle.mode and paddle.bincount APIs produce accuracy issues when a network is built and executed in static graph mode. Analysis shows the cause is the same as the problem encountered in #62801, so this PR fixes both APIs according to the data types used in their kernels.