
Running LAMA

Neil Horner edited this page Dec 15, 2019 · 9 revisions

See notes on data preprocessing

LAMA workflow

The following three steps show how to go from a series of baseline and mutant embryo 3D images (volumes) to generating anatomy phenotype calls.

1. Create a population average

This will be used as the registration target and for generating label maps. See Population average for instructions on how to generate one, or how to obtain pre-made atlases.

You can generate a population average using the tutorial data here

$ lama_reg.py -c tests/test_data/population_average_data/registration_config_population_average.yaml

After LAMA has finished, you will see an output folder in the same directory as the inputs. Inside it is an averages folder containing the three averages created, one from each stage of the registration pipeline. In this case, the final population average is called deformable.nrrd.
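Assuming the layout just described, the final average can be located programmatically. The helper below is a hypothetical convenience for illustration, not part of the LAMA API; only the deformable.nrrd filename comes from the text above.

```python
from pathlib import Path

# Hypothetical helper: build the path to the final population average,
# assuming the "averages" folder layout described above. Not part of LAMA.
def final_population_average(output_dir: str) -> Path:
    return Path(output_dir) / "averages" / "deformable.nrrd"

print(final_population_average("tests/test_data/population_average_data/output"))
```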

2. Generate baseline and mutant data

By mapping the baseline and mutant volumes into the same space as the population average, we generate data that can be compared at the pixel level or at the organ label level. See Make baseline and mutant data.

Baselines

Generate the baseline and mutant data from the test data like so:

# -c path to the config file
# -r path to the root directory containing the line folders (in this case just a baseline folder)
# -m make a job list file 

# Make the job list file using the -m option
$ lama_job_runner.py -c tests/test_data/registration_test_data/registration_config.toml -r tests/test_data/registration_test_data/baseline/ -m

# Then run again without the -m option, from as many machines as you want
$ lama_job_runner.py -c tests/test_data/registration_test_data/registration_config.toml -r tests/test_data/registration_test_data/baseline/
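The -m / no -m split above works because the job list acts as a shared queue: each running instance claims the next unprocessed specimen until none remain. The sketch below illustrates that pattern; it is a conceptual illustration only, not LAMA's actual implementation (which must also handle safe concurrent access across machines).

```python
# Conceptual sketch of the job-list pattern: `-m` writes a list of pending
# specimens; each worker then claims the next unclaimed entry.
# Names and statuses here are invented for illustration, not LAMA's.
jobs = [{"specimen": f"embryo_{i}", "status": "to_run"} for i in range(3)]

def claim_next_job(job_list):
    """Return the next pending specimen, marking it as claimed."""
    for job in job_list:
        if job["status"] == "to_run":
            job["status"] = "running"
            return job["specimen"]
    return None  # nothing left to do

print([claim_next_job(jobs) for _ in range(4)])
# -> ['embryo_0', 'embryo_1', 'embryo_2', None]
```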

Mutants

We use the same config for the mutant data:

# Make the job list file using the -m option
$ lama_job_runner.py -c tests/test_data/registration_test_data/registration_config.toml -r tests/test_data/registration_test_data/mutant/ -m

# Then run again without the -m option, from as many machines as you want
$ lama_job_runner.py -c tests/test_data/registration_test_data/registration_config.toml -r tests/test_data/registration_test_data/mutant/

3. Run the statistical analysis

Now that we have the baseline and mutant data, we can run the statistical analysis. The voxel-based data are currently analysed with a linear model and corrected for multiple testing across the resulting image. See stats pipeline.

# -c config path
# -w root of baseline data
# -m root of mutant data
# -o output dir
# -t target folder containing the mask, labels, label metadata etc.
lama_stats -c tests/test_data/registration_test_data/stats_config.toml -w tests/test_data/registration_test_data/baseline -m tests/test_data/registration_test_data/mutant -o tests/test_data/registration_test_data -t tests/test_data/registration_test_data/baseline/target 
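The multiple-testing correction mentioned above can be illustrated with a small self-contained sketch. Benjamini-Hochberg FDR control is one standard way to correct a large set of per-voxel p-values; the helper name and the p-values below are invented for this example and are not claimed to be LAMA's exact procedure.

```python
# Illustrative sketch only: Benjamini-Hochberg step-up FDR control, a common
# way to correct per-voxel p-values for multiple testing. In a real run the
# p-values would come from the per-voxel linear model fits.
def bh_reject(pvals, alpha=0.05):
    """Return a boolean list marking which p-values survive FDR correction."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0  # largest rank whose sorted p-value meets its threshold
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= alpha * rank / m:
            k = rank
    rejected = set(order[:k])
    return [i in rejected for i in range(m)]

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(bh_reject(pvals))  # -> [True, True, False, False, False, False, False, False]
```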

For the organ volume analysis, due to the reduced amount of data, we have implemented a permutation-based statistical procedure. See permutation stats. We are currently looking to implement a similar process for the voxel-based data.
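The idea behind a permutation test on organ volumes can be sketched in a few lines: repeatedly shuffle the genotype labels and see how often chance alone produces as large a group difference as the one observed. Everything below, volumes included, is invented for illustration; see the permutation stats page for the procedure LAMA actually uses.

```python
import random

# Toy permutation test on made-up organ volumes (illustration only).
baseline = [4.1, 4.3, 3.9, 4.0, 4.2, 4.1, 4.0, 4.4]
mutant = [4.8, 5.0, 4.7, 4.9]

def mean_diff(a, b):
    return sum(b) / len(b) - sum(a) / len(a)

observed = mean_diff(baseline, mutant)
pooled = baseline + mutant
random.seed(0)

# Build a null distribution by shuffling the genotype labels
null = []
for _ in range(10000):
    random.shuffle(pooled)
    null.append(mean_diff(pooled[:len(baseline)], pooled[len(baseline):]))

# One-sided p-value: how often a random relabelling beats the observed effect
p = sum(d >= observed for d in null) / len(null)
print(p < 0.05)
```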