The model scored poorly after commenting out the "detection adjustment" code #4
Comments
Thanks for your interest.
Thank you very much for your attention. In your experiments, do the reconstruction-based methods compared in the paper use the same adjustment strategy?
Sure, all the compared methods adopt this adjustment strategy for evaluation.
That's good. Thx.
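For reference, the "adjustment strategy" discussed above is the point-adjustment protocol widely used in time-series anomaly detection evaluation: if any point inside a ground-truth anomaly segment is predicted anomalous, every point of that segment is counted as detected. Below is a minimal sketch of this idea, assuming `pred` and `gt` are 0/1 NumPy arrays; the function name `adjust_predictions` is illustrative, not the repository's exact code.

```python
import numpy as np

def adjust_predictions(pred: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Point adjustment: if any point inside a ground-truth anomaly
    segment is flagged, mark the whole segment as detected."""
    pred = pred.copy()
    in_segment = False  # currently inside a ground-truth anomaly segment
    start = 0           # start index of that segment
    for i, label in enumerate(gt):
        if label == 1 and not in_segment:
            in_segment, start = True, i
        elif label == 0 and in_segment:
            in_segment = False
            if pred[start:i].any():   # the segment was hit at least once
                pred[start:i] = 1     # credit every point in the segment
    if in_segment and pred[start:].any():  # handle a segment at the very end
        pred[start:] = 1
    return pred
```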
Hi, this is an amazing job.
I have come across a small problem.
On the MSL dataset, the model performed well:
```
======================TEST MODE======================
Threshold : 0.0017330783803481142
pred: (73700,) gt: (73700,)
pred: (73700,) gt: (73700,)
Accuracy : 0.9853, Precision : 0.9161, Recall : 0.9473, F-score : 0.9314
```
But after I commented out the "detection adjustment" code, the scores were poor:
```
======================TEST MODE======================
Threshold : 0.0017330783803481142
pred: (73700,) gt: (73700,)
pred: (73700,) gt: (73700,)
Accuracy : 0.8866, Precision : 0.1120, Recall : 0.0109, F-score : 0.0199
```
And I'm sure only the "detection adjustment" code was commented out.
Can you help me with this problem?
Thanks.
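The gap between the two runs is expected under this protocol: with raw point-wise scoring, a detector that flags only a few points inside each long anomaly segment earns very low recall (the 0.0109 above), whereas point adjustment credits the entire segment for any hit, which is why precision, recall, and F-score jump so sharply. A hedged usage sketch, reusing the hypothetical `adjust_predictions` helper above, with `pred` and `gt` being the 0/1 arrays from the logs:

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Compare point-wise scores with and without point adjustment.
for name, p in [("raw", pred), ("adjusted", adjust_predictions(pred, gt))]:
    acc = accuracy_score(gt, p)
    prec, rec, f1, _ = precision_recall_fscore_support(gt, p, average="binary")
    print(f"{name}: Accuracy {acc:.4f}, Precision {prec:.4f}, "
          f"Recall {rec:.4f}, F-score {f1:.4f}")
```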