
About fine-tuning and unsupervised training in stage 1. #39

Open
tulvgengenr opened this issue Nov 28, 2024 · 1 comment

Comments

@tulvgengenr

Congratulations on your work, which I am very interested in. I have the following questions about pre-training and fine-tuning:

  1. I don't see details about the Stage 1 unsupervised training in the paper, and it is not shown in the code either. Where can I find them?
  2. If I want to fine-tune the model on my own private dataset, which is only partially labelled, do I need to do Stage 1 unsupervised fine-tuning to extract patch-level features?


@Dadatata-JZ
Collaborator

Hi @tulvgengenr

Re Q1: If Stage 1 refers to the tile-level feature extractor, you should be able to find good guidance on general-purpose training in the archived cTransPath repo:
https://github.com/Xiyue-Wang/TransPath

Re Q2: Recent evidence I've gathered suggests that fine-tuning large foundation models, whether CHIEF or similar works, requires careful consideration of dataset size. Unless the new dataset is comparable in scale to the original training sets, fine-tuning can easily disrupt the structure of the trained embedding space.

If your dataset is only partially labeled and relatively small, I would start by using the WSI/tile-level feature extractors to generate features and then training a classifier on those features.
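A minimal sketch of that workflow, assuming a PyTorch tile encoder that maps image batches to feature vectors. `load_tile_encoder`, the data loaders, and the device string are placeholders, not part of the CHIEF codebase; plug in whichever extractor you actually use (e.g. the cTransPath/CHIEF tile encoder) and your own labeled subset:

```python
# Frozen extractor + lightweight classifier: extract patch-level features once,
# then fit a small classifier on the labeled tiles instead of fine-tuning the backbone.
import torch
from sklearn.linear_model import LogisticRegression

def load_tile_encoder():
    # Placeholder: return any frozen torch.nn.Module that maps
    # (N, 3, H, W) image tensors to (N, D) feature vectors.
    raise NotImplementedError

@torch.no_grad()
def extract_features(encoder, loader, device="cuda"):
    encoder.eval().to(device)
    feats, labels = [], []
    for images, y in loader:            # loader yields only the labeled tiles
        f = encoder(images.to(device))  # (batch, D) patch-level embeddings
        feats.append(f.cpu())
        labels.append(y)
    return torch.cat(feats).numpy(), torch.cat(labels).numpy()

# Usage (hypothetical loaders for the labeled portion of your dataset):
# encoder = load_tile_encoder()
# X_train, y_train = extract_features(encoder, labeled_train_loader)
# clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# X_val, y_val = extract_features(encoder, labeled_val_loader)
# print("val accuracy:", clf.score(X_val, y_val))
```

Because the encoder stays frozen, this avoids the embedding-space degradation mentioned above and works even when only part of the dataset has labels; a small MLP head trained in PyTorch would serve the same role as the logistic-regression classifier here.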
