Releases: khanlab/hippunfold
1.5.2 - bugfix release for T1T2w model and dentate curvature
v1.5.1
Cumulative changes to dependency testing and the Dockerfile
What's Changed
- fix dependency versions for snakebids, pulp by @jordandekraker in #313
- Import patch by @jordandekraker in #314
- Update Dockerfile to latest version with dependencies by @jordandekraker in #315
- Jordandekraker patch 1 by @jordandekraker in #317
Full Changelog: v1.5.0...v1.5.1
1.5.0
💪 Changes
- use naturalneighbour interpolation instead of linear barycentric @jordandekraker (#306). This generally improves the quality of unfolded-folded warps and fixes some issues of misplaced vertices in subject-folded space (especially common in the DG and anterior/posterior tips)
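A minimal sketch (not hippunfold's code) of the scattered-data interpolation this change concerns: surface vertices have known coordinates in both unfolded and folded space, and the warp is built by interpolating folded coordinates at regular unfolded grid points. The sketch uses SciPy's linear barycentric interpolator to show the previous behaviour; hippunfold now performs this step with natural-neighbour interpolation, which weights all natural neighbours of each grid point rather than only the vertices of the enclosing Delaunay simplex. All array names and sizes below are illustrative.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(0)

# Hypothetical vertex data: positions in unfolded (2D) and folded (3D) space
uv = rng.uniform(0, 1, size=(500, 2))    # unfolded coordinates of surface vertices
xyz = rng.normal(size=(500, 3))          # corresponding folded-space coordinates

# Regular grid in unfolded space at which the warp is sampled
gu, gv = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64), indexing="ij")
grid = np.column_stack([gu.ravel(), gv.ravel()])

# Linear barycentric interpolation (the previous approach); natural-neighbour
# interpolation replaces this step with a smoother, less artifact-prone fit
warp = LinearNDInterpolator(uv, xyz)(grid).reshape(64, 64, 3)
print(warp.shape)
```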
🚀 Features
1.4.1
1.4.0
1.3.3
1.3.2
1.3.1
v1.3.0: unfolded registration
Major changes (full Methods here):
- 🎉 Inter-subject alignment is now further refined using registration in unfolded space!
- 🚀 The default subfield parcellation scheme is now based on the maxprob of seven histology samples.
- Together these changes align subfield boundaries to within approximately 0.5mm!
- 💻 Can (optionally) specify alternative U-Net tissue segmentation models, including contrast-agnostic synthseg (#244)
Minor changes:
- 🐛 Bug fix for inner/outer surfaces sometimes being pushed outside of hippocampus (#236)
- 🐛 Curvature is no longer calculated on a smoothed surface, and is instead normalized via a tanh function (#242)
- 📝 Improvements to Documentation clarity
Notes:
- The effects of inter-subject alignment are most evident at high resolutions, such as 7T, ex-vivo scanning, or histology. However, this will still improve alignment of in-vivo scans, leading to sharper group-averaged images. Check out our full evaluation here
- The new default subfield parcellation scheme includes the previous labelling from 3D BigBrain with some added corrections. Previous labelling schemes can optionally also be applied.
- The new synthseg models should be considered experimental. These should be applicable to any scanning modality and sometimes to 3D histology. They also provide better segmentation detail, leading to clearer definition of digitations/gyrifications. However, these models have not yet been formally evaluated, and may still undergo additional training iterations.
- Check out our associated toolbox for helpful Python/Matlab analysis tools, tutorials, and examples!
Full synthseg_v0.2 model training process:
Motivation
Ex-vivo hippocampi often exhibit high gyrification in the head, body, and tail, which can sometimes be observed at very high MRI resolutions. However, at lower resolutions, these details are attenuated due to partial voluming and blurring, making the hippocampus appear smoother. This effect obscures distinctions such as the SRLM and alveus, which separate the inner and outer sides of gyri, respectively.
By incorporating a training set abundant in highly folded hippocampal shapes, it may be possible to infer hippocampal folding based on prior knowledge, even when it is difficult to distinguish. Additionally, JD observed that some original UNet models trained on T1w or T2w data exhibited a bias toward a smaller uncus and tail. This bias likely arises from systematic challenges in manual segmentations, as these regions are thin, tightly folded, and difficult to delineate. Conservative raters tend to exclude some tissue from these areas, and since incremental training, quality control (QC), and retraining were used in UNet training, this bias may have been propagated across multiple samples.
Challenges in Training
A major challenge in using high-resolution, highly folded hippocampal models for training is that they predominantly come from ex-vivo datasets, where detailed 3D scanning or reconstruction is possible. However, the contrast in ex-vivo scans differs significantly from in-vivo scans. SynthSeg, developed as part of FreeSurfer, generates synthetic MRI images from labels alone and trains a UNet segmentation neural network. By randomizing contrast or gray-level intensity over thousands of iterations, this approach makes the network contrast-agnostic. Some generated images resemble T1w scans, others T2w, and some resemble no known modality. A network trained on such diverse inputs should generalize well across different scan types.
Beyond contrast randomization, SynthSeg applies various realistic MRI artifacts and augmentations, such as signal dropout, bias fields, noise distributions, and standard image transformations (e.g., rotations). nnUNet extends these capabilities by offering additional augmentations, including random diffeomorphic morphing, smoothing, and anisotropic rescaling. Notably, nnUNet applies these augmentations online, avoiding the need to store every variation on disk. It also includes features like automatic data splitting for training and validation, hyperparameter optimization, and model selection.
For SynthSeg_v0.2 training, 10,000 synthetic images were generated using SynthSeg and subsequently used in nnUNet, which provided further augmentation and modeling features.
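As a rough illustration of the generative idea described above (a minimal sketch, not SynthSeg's actual code; parameter ranges and the label map are arbitrary): each tissue label receives intensities drawn from a randomly chosen Gaussian, and blurring, a smooth bias field, and noise are then applied, so the same label map can yield T1w-like, T2w-like, or entirely unrealistic contrasts.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synth_image(labels, rng):
    """Generate one synthetic image from an integer label map, SynthSeg-style:
    random per-label Gaussian intensities, blur, a smooth bias field, and noise."""
    img = np.zeros(labels.shape, dtype=float)
    for lab in np.unique(labels):
        mask = labels == lab
        mean, std = rng.uniform(0, 255), rng.uniform(1, 25)   # random tissue contrast
        img[mask] = rng.normal(mean, std, size=mask.sum())
    img = gaussian_filter(img, sigma=rng.uniform(0.5, 1.5))   # partial-volume blur
    bias = gaussian_filter(rng.normal(0, 1, labels.shape), sigma=20)
    img *= np.exp(0.3 * bias)                                 # smooth multiplicative bias
    img += rng.normal(0, 5, labels.shape)                     # scanner noise
    return img

rng = np.random.default_rng(0)
labels = rng.integers(0, 5, size=(32, 32, 32))   # stand-in label map
image = synth_image(labels, rng)
```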
Impact of Augmentations on Labels
Some augmentations, such as rescaling in both nnUNet and SynthSeg, also affect the output labels. Downsampling label maps can smooth over highly folded areas, leading to an overall hippocampal shape that more closely resembles in-vivo MRI appearances.
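A minimal sketch of why rescaling affects the labels themselves (illustrative only; the real augmentation is handled inside SynthSeg and nnUNet): downsampling soft one-hot label maps and taking the argmax after upsampling erodes thin, tightly folded structures.

```python
import numpy as np
from scipy.ndimage import zoom

def rescale_labels(labels, factor, n_labels):
    """Downsample then upsample a label map via soft one-hot channels and argmax."""
    onehot = np.stack([(labels == k).astype(float) for k in range(n_labels)])
    down = np.stack([zoom(c, factor, order=1) for c in onehot])   # detail lost here
    up_factor = np.array(labels.shape) / np.array(down.shape[1:])
    up = np.stack([zoom(c, up_factor, order=1) for c in down])
    return np.argmax(up, axis=0)      # thin, folded structures get blurred away

labels = np.zeros((64, 64, 64), dtype=int)
labels[::4, :, :] = 1                              # a thin, repeating "folded" structure
smoothed = rescale_labels(labels, 0.25, 2)
print((labels == 1).sum(), (smoothed == 1).sum())  # far fewer voxels survive rescaling
```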
Addressing Contextual Limitations
A key limitation in generating synthetic MRI images was that manual segmentations included only hippocampal labels, excluding contextual structures such as the third ventricle, brainstem, inferior temporal lobe neocortex, white matter, amygdala, and choroid plexus. This omission does not matter for some applications of the UNet within HippUnfold, such as ex-vivo tissue blocks containing only the hippocampus. However, in in-vivo MRI, contextual information is critical for accurately interpreting hippocampal location and orientation. In some poor-quality images, hippocampal misplacement was observed in the collateral sulcus, which could have been mitigated by including a separate label for this structure.
To address this, manually segmented hippocampal labels were overlaid onto background images derived from two sources:
- BigBrain tissue CLS classification
- FreeSurfer ASEG segmentation (subcortical and cortical structures)
Pipeline Overview
Manual Segmentation:
Hippocampal tissue classes and surrounding structures were manually segmented on the following hippocampal samples:
- BigBrain (left and right samples)
- 3D PLI (left sample only)
- AHEAD brain (subject 152017, right hemisphere)
Background Labeling:
Manual hippocampal segmentations were overlaid onto three possible background label maps:
- No background
- BigBrain CLS
- FreeSurfer ASEG
Alignment was achieved by:
- Extracting low-detail hippocampal labels as a binary mask from background images
- Applying a 25-degree sagittal rotation to approximate oblique sampling (matching manual segmentation orientation)
- Registering binary masks to high-detail hippocampal segmentations using affine and deformable NiftiReg registration
- Applying concatenated transforms to background labels and superimposing original hippocampal tissue labels
This process resulted in 12 label maps, incorporating four manually segmented hippocampi and three background variations.
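A minimal sketch of the final superimposition step, assuming the NiftiReg registration and resampling have already been run (filenames and the label-offset convention are hypothetical):

```python
import numpy as np
import nibabel as nib

hipp_nii = nib.load("hipp_labels_highdetail.nii.gz")      # manual hippocampal labels
bg_nii = nib.load("background_labels_resampled.nii.gz")   # CLS/aseg labels on the same grid

hipp = np.asarray(hipp_nii.dataobj).astype(np.int32)
bg = np.asarray(bg_nii.dataobj).astype(np.int32)

# Offset background label values so they cannot collide with hippocampal labels
bg = np.where(bg > 0, bg + 100, 0)

# Hippocampal tissue labels take precedence wherever they exist
merged = np.where(hipp > 0, hipp, bg)

nib.save(nib.Nifti1Image(merged, hipp_nii.affine), "merged_labels.nii.gz")
```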
Synthetic MRI Generation:
- SynthSeg was used to generate 10,000 synthetic MRI images from the 12 label maps.
- Tissue classes were grouped based on expected similar contrasts (e.g., neocortex and amygdala).
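A minimal sketch of what such grouping can look like (the label IDs and groupings below are hypothetical, not the actual scheme): labels sharing a generation class are assigned intensities from the same random distribution during synthesis.

```python
import numpy as np

# label ID -> generation class; labels mapped to the same class (e.g. neocortex
# and amygdala) are synthesized with the same randomly drawn intensity statistics
label_to_class = {
    0: 0,   # background
    1: 1,   # hippocampal gray matter
    2: 2,   # SRLM
    3: 3,   # neocortex   } grouped: similar expected contrast
    4: 3,   # amygdala    }
    5: 4,   # white matter
    6: 5,   # CSF / ventricle
}

def remap(labels, mapping):
    """Apply a label->class lookup table to an integer label map."""
    lut = np.zeros(max(mapping) + 1, dtype=np.int32)
    for label_id, gen_class in mapping.items():
        lut[label_id] = gen_class
    return lut[labels]

labels = np.random.default_rng(0).integers(0, 7, size=(8, 8, 8))
generation_classes = remap(labels, label_to_class)
```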
UNet Training:
- nnUNet was trained on the 10,000 synthetic MRI images and corresponding label maps.
Dataset Splitting:
- 2,000 images reserved as test data
- 5-fold cross-validation applied (another 2,000 used for validation)
Note: Contamination between training, validation, and test datasets occurred, as all were derived from the same four detailed hippocampal segmentations.
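One way the described split could be implemented (a sketch with hypothetical case names; nnUNet also manages its own cross-validation folds internally):

```python
import numpy as np

rng = np.random.default_rng(0)
cases = np.array([f"synth_{i:05d}" for i in range(10_000)])
rng.shuffle(cases)

test = cases[:2_000]            # held-out test set
trainval = cases[2_000:]        # remaining cases for 5-fold cross-validation

folds = np.array_split(trainval, 5)
for k in range(5):
    val = folds[k]
    train = np.concatenate([folds[j] for j in range(5) if j != k])
    print(f"fold {k}: {len(train)} train / {len(val)} val / {len(test)} test")
```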
Application in HippUnfold:
- The trained nnUNet model was integrated into the HippUnfold workflow as an optional model choice.
Due to dataset contamination, performance was qualitatively assessed by JD across experimental datasets without formal tracking.
Results and Observations
JD observed several key findings when comparing the SynthSeg_v0.2-trained model to previous UNet models trained on T1w and T2w scans:
Improved high-resolution performance:
- In cases such as 7T MRI and ex-vivo scans, segmentation showed greater definition of gyri than previous models.
Limited generalization to unexpected datasets:
- The model failed on datasets outside SynthSeg’s intended coverage, such as other 3D histology samples in the AHEAD dataset.
Limited hippocampal gyrification in standard 3T MRI:
- Less gyrification was observed than in ex-vivo manual segmentations. Downsampling of the synthetic data and label augmentation likely contributed to this effect, and the difficulty of visually distinguishing fine gyrification in 3T scans may also play a role.
- In some cases with lower resolution or contrast, SynthSeg_v0.2 showed catastrophic failures that occurred less often with the T1w- or T2w-trained models. This may be due to limited variability or insufficient coverage of real MRI characteristics in SynthSeg image generation, or to insufficient augmentation/downsampling in the training set.
Better hippocampal delineation in high-resolution MRI:
- In high-resolution 3T, bespoke hippocampal scans, and 7T MRI, segmentation improved compared to T1w- or T2w-trained models.
- Tail and uncus appeared larger and more defined in specialized, high-resolution, and standard scans, aligning with ex-vivo histology observations.
Summary
SynthSeg_v0.2 combines SynthSeg and nnUNet to train a segmentation model from a restricted set of highly detailed hippocampal segmentations. The approach enhances segmentation performance in high-resolution MRI but struggles with extreme variations not covered in training. While improvements were observed in hippocampal definition and structure, limitations remain in reliably detecting gyrification at standard resolutions.
1.2.1
Changes
🚀 Features
- add cli arg for specifying crop-native-res in mm, default as 0.2mm iso @jordandekraker (#224)
🐛 Bug Fixes
- fix for input type in dseg qc @jordandekraker (#226)
- T1T2w model file/url was missing from the config @akhanf (#232)
- fix bug in crop native box @akhanf (#230)
- bugfix for cropref @jordandekraker (#225)
- aligns laplace_coords.py behaviour to laplace_coords_withinit.py @jordandekraker (#220)
- Added missing container for hippdwi_to_template @kaitj (#216)
📝 Documentation
- Add accepted in eLife mention and link to readme @akhanf (#227)
- Note on autotop_deps @jordandekraker (#223)
- Updates for Windows @jordandekraker (#222)
- update docs for running singularity in linux @akhanf (#219)
- Update_documentation @mcespedes99 (#214)