
Feat/parameter scan #74

Merged
merged 7 commits into from
Jan 28, 2025

Conversation

@gurdeep330 (Member) commented Jan 27, 2025

For authors

Description

T2Bdemo4.mp4

This PR introduces a new parameter_scan tool. Please take a look at the demo video above. When an incorrect parameter name is provided (e.g., kIL6RBIND instead of kIL6RBind), the agent invokes the get_modelinfo tool to extract the correct parameter name and automatically reinvokes parameter_scan. This behaviour is demonstrated in the demo and supported by tests. Additionally, when a non-existent species name or parameter is entered (for example AB instead of Ab{serum}), the agent notifies the user of the error and suggests valid alternatives from the model. These features enable the agent/tool to handle minor inaccuracies intelligently and provide helpful feedback when corrections are not possible.
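The name-recovery behaviour described above can be sketched in a few lines. This is a hypothetical illustration, not the actual tool code: `resolve_parameter_name` and the parameter list stand in for what the agent extracts via `get_modelinfo`.

```python
from difflib import get_close_matches

def resolve_parameter_name(requested, model_parameters):
    """Return the best matching parameter name from the model, or None.

    Mirrors the behaviour shown in the demo: an exact match wins;
    otherwise a near match (e.g. 'kIL6RBIND' -> 'kIL6RBind') is
    returned so parameter_scan can be re-invoked automatically.
    """
    if requested in model_parameters:
        return requested
    # Case-insensitive exact match first
    lowered = {p.lower(): p for p in model_parameters}
    if requested.lower() in lowered:
        return lowered[requested.lower()]
    # Fall back to fuzzy matching for minor typos
    matches = get_close_matches(requested, model_parameters, n=1, cutoff=0.8)
    return matches[0] if matches else None

params = ["kIL6RBind", "kIL6RUnbind", "kIL6Deg"]
print(resolve_parameter_name("kIL6RBIND", params))  # kIL6RBind
print(resolve_parameter_name("AB", params))         # None -> suggest valid names
```

When `None` is returned, the agent's fallback (reporting the error and suggesting valid alternatives from the model) would apply.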

Fixes # (issue) NA

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

How Has This Been Tested?

Please describe the tests you conducted to verify your changes. These may involve creating new test scripts or updating existing ones.

  • Added new test(s) in the tests folder
  • Added new function(s) to an existing test(s) (e.g.: tests/testX.py)
  • No new tests added (Please explain the rationale in this case)

Checklist

  • My code follows the style guidelines mentioned in the Code/DevOps guides
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation (e.g. MkDocs)
  • My changes generate no new warnings
  • I have added or updated tests (in the tests folder) that prove my fix is effective or that my feature works
  • New and existing tests pass locally with my changes
  • Any dependent changes have been merged and published in downstream modules

For reviewers

Checklist pre-approval

  • Is there enough documentation?
  • If a new feature has been added, or a bug fixed, has a test been added to confirm good behavior?
  • Does the test(s) successfully test edge/corner cases?
  • Does the PR pass the tests? (if the repository has continuous integration)

Checklist post-approval

  • Does this PR merge develop into main? If so, please make sure to add a prefix (feat/fix/chore) and/or a suffix BREAKING CHANGE (if it's a major release) to your commit message.
  • Does this PR close an issue? If so, please make sure to descriptively close this issue when the PR is merged.

Checklist post-merge

  • When you approve of the PR, merge and close it (Read this article to know about different merge methods on GitHub)
  • Did this PR merge develop into main, and is it supposed to run an automated release workflow (if applicable)? If so, please make sure to check under the "Actions" tab to see if the workflow has been initiated, and return later to verify that it has completed successfully.

@gurdeep330 gurdeep330 self-assigned this Jan 27, 2025
@gurdeep330 gurdeep330 added enhancement New feature or request T2B labels Jan 27, 2025
@gurdeep330 gurdeep330 requested a review from dmccloskey January 27, 2025 11:52
@dmccloskey (Member) left a comment

This is a good first stab at creating a custom parameter scan function 👏.

However, there are a few pieces missing. The biggest is the ability to explore which species are most sensitive to the parameter changes, which currently cannot be done since only a single species is saved. I get that this simplifies the visualization and data storage needs, but it also limits the usefulness of the tool.

The basico documentation https://copasi.org/Support/User_Manual/Tasks/Parameter_Scan/ provides a good overview of their widget strategy. However, I think we can do much better than this by providing an interactive environment for users to explore the results. Please check whether my suggestion of using the multiple-simulation feature and plotting across different simulations is feasible. If not, let's discuss some alternatives.

# check if the param_name is not None
if param_name is None:
    continue
# if param is a kinectic parameter

Suggested change
# if param is a kinectic parameter
# if param is a kinetic parameter

description="species concentration at the time point")

@dataclass
class RecurringData:

Suggested change
class RecurringData:
class ReoccurringData:

class RecurringData:
    """
    Dataclass for storing the species and time data
    on recurring basis.

Suggested change
on recurring basis.
on a reoccurring basis.

species_data: SpeciesData = Field(
    description="species name and initial concentration data",
    default=None)
recurring_data: RecurringData = Field(

Suggested change
recurring_data: RecurringData = Field(
reoccurring_data: ReoccurringData = Field(

species_data: SpeciesData = Field(
    description="species name and initial concentration data",
    default=None)
recurring_data: RecurringData = Field(

Suggested change
recurring_data: RecurringData = Field(
reocurring_data: ReoccurringData = Field(


def add_rec_events(model_object, recurring_data):
    """
    Add reocurring events to the model.

Suggested change
Add reocurring events to the model.
Add reoccurring events to the model.

"""
Dataclass for storing the parameter scan data.
"""
species_name: str = Field(description="Species name to investigate",

Is there a particular reason to limit the saved data to a single species? In principle, if we are running the simulations for each and every parameter step, we should have access to all of the species concentrations over time, correct?

I would recommend saving the simulation results for each and every parameter scan step. This could be achieved using the latest feature for multiple simulations where the name of the simulation is the parameter scan step (which I see you used below to define the column in the dataframe). The challenge will then be visualizing the results for the user...

I think the most straightforward way would be for the user to prompt the model to create a custom plot based on what they specifically would like to see. For example, after requesting a parameter scan:

assistant: The parameter scan has finished running. There is a lot of data to visualize! Please specify what species and from which parameter scan steps you would like me to plot. For example, if you ask "plot species X and Y across all parameter scans", I will create a time vs concentration plot for species X and Y with additional labels on X and Y to indicate the parameter step.
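The suggestion above (save every species for every scan step, keyed by a step label) could be sketched as follows. All names here are hypothetical; `fake_simulate` stands in for a real basico time-course run.

```python
def run_parameter_scan(simulate, param_name, values):
    """Run one simulation per parameter value and keep ALL species.

    `simulate` is a stand-in for the real backend; it returns
    {species_name: [concentrations over time]}. Results are keyed by a
    scan-step label, which later lets the user ask e.g. "plot species
    X and Y across all parameter scans".
    """
    results = {}
    for value in values:
        label = f"{param_name}={value}"
        results[label] = simulate(param_name, value)
    return results

# Toy backend: concentrations proportional to the parameter value
def fake_simulate(name, value):
    return {"Ab{serum}": [value * t for t in range(3)],
            "IL6{serum}": [value * 2 * t for t in range(3)]}

scan = run_parameter_scan(fake_simulate, "kIL6RBind", [0.1, 1.0, 10.0])
print(sorted(scan))  # ['kIL6RBind=0.1', 'kIL6RBind=1.0', 'kIL6RBind=10.0']
print(scan["kIL6RBind=1.0"]["Ab{serum}"])  # [0.0, 1.0, 2.0]
```

With this shape, a plotting step only has to select species and scan-step labels from the saved dictionary instead of re-running any simulation.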

@gurdeep330 (Member Author) commented Jan 27, 2025

@dmccloskey please correct me if I am wrong. Do you mean running a parameter scan on more than one species in the same prompt? If that is the case, it is already possible. Sorry, I forgot to add it to the demo earlier. Please take a look at the video below, where I ask the agent to run a param scan (kIL6RBind) on Ab{serum} and IL6{serum}. In this case, the agent invokes the tool twice, once for each species. And, as you said above, it generates a name each time the tool is invoked and saves the results under that name in the state's key. Users can do this by specifying multiple species.

T2Bdemo6.mp4

@dmccloskey (Member):

But this means it needs to re-run the same simulation twice then?

@gurdeep330 (Member Author):

@dmccloskey
Yes, for now... I'll rework the backend. But is the front-end rendering as expected?

@dmccloskey (Member):

Based on the code, it is rendering as I would expect. However, as I mentioned, I believe we are shortchanging ourselves on providing the type of interactive experience that is lacking from tools like Copasi and Basico. The ability to generate plots on the fly to address a specific user question is a huge selling point of our platform. Perhaps this will come with the plotting agent.

@@ -119,6 +119,68 @@ def test_simulate_model_tool():
    # Check if the data of the second model contains
    assert 'mTORC2' in dic_simulated_data[1]['data']

def test_param_scane_tool():
    '''
@dmccloskey (Member):

The explanation is much appreciated 😊

@@ -35,18 +35,21 @@ def get_model_metadata(self) -> Dict[str, Union[str, int]]:
        Returns:
            dict: Dictionary with model metadata
        """
    @abstractmethod
    def update_parameters(self, parameters: Dict[str, Union[float, int]]) -> None:
Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I like the choice to split these two methods 👍

@gurdeep330 (Member Author):

@dmccloskey Based on our discussions, I have updated the tool to consider all the species names requested by the user in a single invocation.
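A minimal sketch of that single-invocation design, under the assumption of a hypothetical `simulate` backend that returns all species at once: the time course runs once per parameter value, and every requested species is sliced from the same result.

```python
def scan_single_pass(simulate, param_name, values, species_names):
    """One simulation per parameter value, shared across all requested
    species, so asking for Ab{serum} and IL6{serum} together does not
    re-run the same time course twice."""
    report = {name: {} for name in species_names}
    runs = 0
    for value in values:
        trajectory = simulate(param_name, value)  # all species at once
        runs += 1
        for name in species_names:
            report[name][f"{param_name}={value}"] = trajectory[name]
    return report, runs

def toy_simulate(param, value):
    # Toy backend: every species trajectory scales with the parameter
    return {"Ab{serum}": [value, value * 2],
            "IL6{serum}": [value * 3, value * 4]}

report, runs = scan_single_pass(
    toy_simulate, "kIL6RBind", [0.5, 5.0], ["Ab{serum}", "IL6{serum}"])
print(runs)  # 2 (one run per parameter value, not per species)
print(report["IL6{serum}"]["kIL6RBind=0.5"])  # [1.5, 2.0]
```

This addresses the earlier concern about re-running the same simulation: the run count grows with the number of parameter values, not with the number of species requested.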

@gurdeep330 gurdeep330 requested a review from dmccloskey January 27, 2025 21:53
@dmccloskey dmccloskey merged commit 6c6d95d into main Jan 28, 2025
6 checks passed
@dmccloskey dmccloskey deleted the feat/parameter-scan branch January 28, 2025 08:17
Contributor

🎉 This PR is included in version 1.11.0 🎉

The release is available on GitHub release

Your semantic-release bot 📦🚀
