
Some questions about CHIEF pre-training #43

Open
KKIverson opened this issue Dec 10, 2024 · 1 comment
Congratulations on this work on CHIEF.
I have a few questions about pre-training. The supplementary file mentions that three losses were used when pre-training the CHIEF model: formula (3) computes the bag-level classification loss, and formula (4) computes the instance-level top-k loss. I understand both (3) and (4), but I don't understand the loss in formula (5).
First, formula (5) contains two negative signs. Is this a typo (should there be only one negative sign)? Second, for each bag G, whether it is negative or positive, does the numerator compute the similarity between each instance in G and the instances in the positive bags? If G is a negative sample, wouldn't that be counterproductive? The goal is to maximize intra-class similarity and minimize inter-class similarity.
Thank you!
[Three screenshots of the formulas from the supplementary file]
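For reference, here is my understanding of the standard single-negative-sign InfoNCE-style contrastive form that formula (5) seems to resemble. This is only a sketch of the generic pattern, not CHIEF's actual loss; the cosine similarity, the temperature value, and all variable names are my own assumptions:

```python
import numpy as np

def info_nce_loss(anchor, positives, negatives, tau=0.07):
    """Generic InfoNCE-style contrastive loss with ONE leading minus sign.

    anchor:    (d,)   embedding of one instance
    positives: (p, d) embeddings treated as same-class
    negatives: (n, d) embeddings treated as other-class
    tau:       temperature (placeholder value, not from the paper)
    """
    def cos(a, b):
        # cosine similarity between vector a and each row of b
        return (b @ a) / (np.linalg.norm(b, axis=1) * np.linalg.norm(a) + 1e-8)

    pos = np.exp(cos(anchor, positives) / tau)  # numerator terms
    neg = np.exp(cos(anchor, negatives) / tau)  # denominator-only terms
    # a single minus sign in front of the log: higher intra-class
    # similarity grows the numerator and shrinks the loss
    return -np.log(pos.sum() / (pos.sum() + neg.sum()))

rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
positives = anchor + 0.01 * rng.normal(size=(3, 8))  # near-identical instances
negatives = rng.normal(size=(5, 8))                  # unrelated instances
loss_close = info_nce_loss(anchor, positives, negatives)
```

With this form, putting a negative bag's instances in the numerator would indeed pull them toward the positives, which is why I am asking whether that is intended.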

KKIverson (Author):

Another small question: how big is this memory bank? Does it contain all the samples from the tissue source site? How is it maintained? Is it updated at each iteration?
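For context on what I am imagining: one common memory-bank design (e.g. MoCo's) is a fixed-size FIFO queue of recent embeddings that is refreshed once per training iteration. This is only a sketch of that generic pattern, not CHIEF's actual implementation; the capacity, dimensions, and any grouping by tissue source site are placeholder assumptions:

```python
from collections import deque

import numpy as np

class MemoryBank:
    """FIFO memory bank of instance embeddings (MoCo-style sketch).

    Hypothetical design: the real CHIEF bank size and whether it holds
    all samples from a tissue source site are exactly my questions above.
    """
    def __init__(self, capacity, dim):
        self.queue = deque(maxlen=capacity)  # oldest entries dropped first
        self.dim = dim

    def update(self, batch_embeddings):
        # called once per training iteration with the current batch
        for e in batch_embeddings:
            self.queue.append(np.asarray(e, dtype=np.float64))

    def negatives(self):
        # stack stored embeddings into an (n, dim) matrix of negatives
        if not self.queue:
            return np.empty((0, self.dim))
        return np.stack(self.queue)

bank = MemoryBank(capacity=4, dim=2)
for step in range(3):                    # 3 iterations, batch size 2
    bank.update(np.ones((2, 2)) * step)  # toy embeddings tagged by step
```

After the loop the bank holds only the 4 most recent embeddings (from steps 1 and 2), since the two from step 0 were evicted by the capacity limit.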
