fix: make llm.context work for async functions #909
Merged
+316
−41
As presently implemented, the new `llm.context` API (#884) does not consistently work for async functions. Consider the following two examples: right now, example 1 works but example 2 doesn't. The issue in the second example is that by the time we actually execute the code, we are no longer inside the context manager.
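The original examples aren't reproduced here, but the failure mode can be sketched with `contextvars`. The `context` manager and `answer` function below are hypothetical stand-ins for the library's machinery, not its real API:

```python
import asyncio
import contextvars
from contextlib import contextmanager

# Hypothetical stand-in for the library's context storage.
_ctx = contextvars.ContextVar("ctx", default=None)

@contextmanager
def context(value):
    token = _ctx.set(value)
    try:
        yield
    finally:
        _ctx.reset(token)

async def answer():
    # Reads the context only when the coroutine body actually runs.
    return _ctx.get()

# Example 1: awaited inside the `with` block -- the context is visible.
async def example_1():
    with context("gpt-4o"):
        return await answer()

# Example 2: the coroutine is created inside the block but awaited
# after it exits -- by then the context has been reset.
async def example_2():
    with context("gpt-4o"):
        coro = answer()
    return await coro

print(asyncio.run(example_1()))  # "gpt-4o"
print(asyncio.run(example_2()))  # None -- the bug
```

The coroutine body only executes when awaited, so any context that was active at call time but exited before the await is invisible to it.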
This commit fixes the issue by capturing the context into a closure at call time for the decorated function. I've added tests: specifically, two tests in `test_call.py`, of which only the first passes under the current implementation, while both pass with the fix. Note that in the present implementation `llm.override` works fine; however, I added a similar test case to `test_override` out of caution.
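A minimal sketch of the "capture at call time" idea, assuming context is tracked in a `ContextVar` (all names here, `capture_context`, `_ctx`, are hypothetical, not the library's actual implementation). The key is that the wrapper stays synchronous: it snapshots the context when the decorated function is *called*, still inside the `with` block, and restores it around the eventual await:

```python
import asyncio
import contextvars
import functools
from contextlib import contextmanager

# Hypothetical stand-in for the library's context storage.
_ctx = contextvars.ContextVar("ctx", default=None)

@contextmanager
def context(value):
    token = _ctx.set(value)
    try:
        yield
    finally:
        _ctx.reset(token)

def capture_context(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        # Calling wrapper() inside the `with` block snapshots the
        # active context immediately, before the block can exit.
        captured = _ctx.get()

        async def run():
            # Re-enter the captured context around the real body,
            # however late the coroutine is actually awaited.
            token = _ctx.set(captured)
            try:
                return await fn(*args, **kwargs)
            finally:
                _ctx.reset(token)

        return run()
    return wrapper

@capture_context
async def answer():
    return _ctx.get()

# The previously broken pattern: call inside the block, await after.
async def example_2():
    with context("gpt-4o"):
        coro = answer()  # context captured here, at call time
    return await coro

print(asyncio.run(example_2()))  # "gpt-4o" -- fixed
```

Because the snapshot lives in the closure returned at call time, it no longer matters whether the context manager has exited by the time the coroutine runs.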