Congratulations on this work on CHIEF.
I have a few questions about pre-training. The supplementary file mentions that three losses were used when pre-training the CHIEF model: formula (3) computes the bag-level classification loss, and formula (4) computes the instance-level top-k loss. I understand formulas (3) and (4), but I don't understand the loss in formula (5).
First, formula (5) contains two negative signs. Is that a typo (should there be only one negative sign)? Second, for each bag G, whether it is negative or positive, does the numerator compute the similarity between each instance in G and the instances in the positive bags? If bag G is a negative sample, wouldn't that be counterproductive, since the goal is to maximize intra-class similarity and minimize inter-class similarity?
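For comparison, the standard InfoNCE-style contrastive loss that formula (5) appears to follow has a single negative sign in front of the log. A minimal sketch of that canonical form (hypothetical function name and tensor shapes, not the authors' actual implementation):

```python
import torch
import torch.nn.functional as F

def info_nce_loss(anchor, positives, negatives, temperature=0.07):
    """Canonical InfoNCE with one negative sign:
    L = -log( sum_p exp(sim(a, p)/t) / sum_all exp(sim(a, x)/t) )

    anchor:    (d,)   embedding of the query instance
    positives: (P, d) embeddings from the same class
    negatives: (N, d) embeddings from other classes (e.g. a memory bank)
    """
    anchor = F.normalize(anchor, dim=0)
    positives = F.normalize(positives, dim=1)
    negatives = F.normalize(negatives, dim=1)
    pos_sim = positives @ anchor / temperature  # (P,) cosine sims
    neg_sim = negatives @ anchor / temperature  # (N,)
    all_sim = torch.cat([pos_sim, neg_sim])
    # Single leading minus: minimizing this pulls positives together
    # and pushes negatives apart.
    return -torch.log(torch.exp(pos_sim).sum() / torch.exp(all_sim).sum())
```

Under this form, the loss is small when the anchor is close to its positives and far from its negatives, which is why applying the same numerator to a negative bag would seem to work against that objective.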
Thank you!
Another small question: how big is this memory bank? Does it contain all the samples from a given tissue source site? How is it maintained? Is it updated at each iteration?
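For context on what such a memory bank might look like: a common design in contrastive pre-training (e.g. MoCo-style) is a fixed-size FIFO queue that is updated every iteration, rather than a store of every sample from the source site. A hypothetical sketch, purely to illustrate the mechanism being asked about:

```python
import torch
import torch.nn.functional as F

class MemoryBank:
    """Fixed-size FIFO queue of feature vectors (MoCo-style sketch).

    New mini-batch features overwrite the oldest entries, so the bank
    only ever holds the most recent `size` embeddings.
    """
    def __init__(self, size=4096, dim=128):
        # Initialize with random unit vectors as placeholder negatives.
        self.features = F.normalize(torch.randn(size, dim), dim=1)
        self.ptr = 0
        self.size = size

    def update(self, batch_features):
        """Enqueue a batch, wrapping around when the queue is full."""
        n = batch_features.shape[0]
        idx = torch.arange(self.ptr, self.ptr + n) % self.size
        self.features[idx] = batch_features
        self.ptr = (self.ptr + n) % self.size
```

If CHIEF follows this pattern, the bank would be bounded in size and refreshed each iteration; whether it is instead site-wide and static is exactly what the question above is asking.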