
fix: make llm.context work for async functions #909

Merged
merged 1 commit into Mirascope:main on Mar 10, 2025

Conversation

teamdandelion (Contributor)

As presently implemented, the new llm.context API (#884) does not consistently work for async functions. Consider the following two examples:

import asyncio

from mirascope import llm


@llm.call(provider="openai", model="gpt-4o-mini")
async def introspect_model() -> str:
    return "What model are you?"


async def example1():
    print("example1:")
    openai_response = await introspect_model()
    print(openai_response.content)

    with llm.context(provider="anthropic", model="claude-3-5-sonnet-latest"):
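        # Awaited while the context manager is still active, so the override applies.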
        anthropic_response = await introspect_model()
        print(anthropic_response.content)


async def example2():
    print("example2:")
    openai_response_future = introspect_model()

    with llm.context(provider="anthropic", model="claude-3-5-sonnet-latest"):
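        # Created inside the context, but not awaited until after it exits.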
        anthropic_response_future = introspect_model()

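    # Both coroutines actually execute here, after the context manager has exited.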
    openai_response, anthropic_response = await asyncio.gather(
        openai_response_future, anthropic_response_future
    )

    print(openai_response.content)
    print(anthropic_response.content)


async def main():
    await example1()
    await example2()


if __name__ == "__main__":
    asyncio.run(main())

Right now example 1 works but example 2 doesn't:

example1:
I am based on OpenAI's GPT-3 model. How can I assist you today?
I'm Claude, an AI assistant created by Anthropic. I aim to be direct and honest in my communications.

example2:
I am based on OpenAI's GPT-3 model, specifically designed for providing information and answering questions across a wide range of topics. If you have any specific questions or need assistance, feel free to ask!
I am based on OpenAI's GPT-3.5 architecture. How can I assist you today?

The issue is that in the second example, by the time the calls actually execute, we are no longer inside the body of the context manager.
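
For readers less familiar with coroutine semantics, here is a minimal, self-contained sketch (plain contextvars, not Mirascope's implementation) showing the same failure mode: calling an async function only creates a coroutine object, and its body runs when it is awaited, by which point the with block has already exited.

import asyncio
import contextvars
from contextlib import contextmanager

provider = contextvars.ContextVar("provider", default="openai")


@contextmanager
def use_provider(name: str):
    token = provider.set(name)
    try:
        yield
    finally:
        provider.reset(token)


async def which_provider() -> str:
    # The body runs only when the coroutine is awaited.
    return provider.get()


async def main():
    with use_provider("anthropic"):
        coro = which_provider()  # coroutine created, not yet running

    # Awaited after the with block has exited: the override is already gone.
    print(await coro)  # prints "openai", not "anthropic"


asyncio.run(main())

The decorated call in example 2 hits the same pattern: the coroutine is created inside llm.context but only executed inside asyncio.gather, after the context has been reset.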

This commit fixes the issue by capturing the context into a closure at call time for the decorated function. I've added tests: specifically, two tests in test_call.py, of which only the first passes under the current implementation, while both pass with the fixed implementation.
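
Purely as an illustration of that technique (not Mirascope's actual code), a hypothetical decorator that captures a context variable into a closure at call time and re-applies it when the coroutine runs could look like this, continuing the contextvars sketch above:

import contextvars
import functools

provider = contextvars.ContextVar("provider", default="openai")


def capture_provider(fn):
    # Hypothetical helper, not a Mirascope API: snapshot the active provider
    # when the decorated async function is *called*, then re-apply it when
    # the returned coroutine actually runs.
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        captured = provider.get()  # closure capture at call time

        async def runner():
            token = provider.set(captured)
            try:
                return await fn(*args, **kwargs)
            finally:
                provider.reset(token)

        return runner()

    return wrapper

With such a decorator applied to which_provider in the earlier sketch, awaiting the coroutine after the with block prints "anthropic", which is the behavior example 2 expects from llm.context.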

Note that llm.override already works correctly in the present implementation. However, I added a similar test case to test_override out of caution.


codecov bot commented Mar 10, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 100.00%. Comparing base (0611264) to head (17a5f52).
Report is 2 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff            @@
##              main      #909   +/-   ##
=========================================
  Coverage   100.00%   100.00%           
=========================================
  Files          514       514           
  Lines        20953     21030   +77     
=========================================
+ Hits         20953     21030   +77     
Flag Coverage Δ
tests 100.00% <100.00%> (ø)

teamdandelion force-pushed the fix-llm-context-async branch from 8fda98f to 17a5f52 on March 10, 2025 at 22:03
willbakst (Contributor)

Great catch, thank you for submitting the fix!

One question, otherwise approved! If it requires a change, I'll approve again once this approval goes stale.

willbakst merged commit 38c4890 into Mirascope:main on Mar 10, 2025
7 checks passed
willbakst (Contributor)

Just released this in v1.21.4
