Congratulations on your work, in which I am very interested. I have the following questions about pre-training and fine-tuning:
1. I couldn't find the details of the Stage 1 unsupervised training in the paper, and it doesn't appear in the code either. Could you tell me where I can find them?
2. If I want to fine-tune the model on my own private dataset, which is only partially labeled, do I need to run Stage 1 unsupervised fine-tuning to extract patch-level features?
Re Q1: If Stage 1 refers to the tile-level feature extractor, you should find good guidance on general-purpose training in the archived cTransPath repo: https://github.com/Xiyue-Wang/TransPath
Re Q2: In my recent experience, fine-tuning large foundation models (whether CHIEF or similar works) requires careful consideration of dataset size. Unless the new dataset is comparable in scale to the original training sets, fine-tuning can easily degrade the learned embedding space.
If your dataset is only partially labeled and relatively small, I would start by using the WSI/tile-level feature extractors to generate features, and then fine-tune a classifier on those features; a sketch of this workflow is shown below.
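To make that concrete, here is a minimal PyTorch sketch of the frozen-extractor-plus-classifier setup. It assumes 768-dimensional tile embeddings and uses a small stand-in network in place of the released tile encoder checkpoint (which you would load instead); the data, shapes, and label convention (`-1` = unlabeled) are illustrative, not from the CHIEF code.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for the pretrained tile-level encoder (e.g. cTransPath / CHIEF).
# In practice, load the released checkpoint here and keep it frozen; this tiny
# CNN exists only so the sketch runs end to end.
encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 768),            # assume 768-d tile embeddings
)
encoder.eval()
for p in encoder.parameters():
    p.requires_grad = False       # the feature extractor is NOT fine-tuned

@torch.no_grad()
def extract_features(tiles: torch.Tensor) -> torch.Tensor:
    """Map a batch of tiles (N, 3, H, W) to frozen embeddings (N, 768)."""
    return encoder(tiles)

# Toy data: only part of the tiles carry labels (label -1 means unlabeled).
tiles = torch.randn(64, 3, 224, 224)
labels = torch.randint(0, 2, (64,))
labels[32:] = -1                   # pretend half the data is unlabeled

features = extract_features(tiles)
mask = labels >= 0                 # train the classifier on labeled tiles only
loader = DataLoader(TensorDataset(features[mask], labels[mask]),
                    batch_size=16, shuffle=True)

# Lightweight classifier head trained on top of the frozen features.
clf = nn.Linear(768, 2)
opt = torch.optim.AdamW(clf.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(clf(x), y)
        loss.backward()
        opt.step()
```

The key design choice is that only the small classifier head is optimized, so the limited labeled subset cannot disturb the pretrained embedding space; in a real pipeline you would extract and cache features once per slide/tile and reuse them across experiments.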