`_get_local_provider_call` should return a client appropriate to the execution context (sync or async), as is already the case for the OpenAI provider. The patch below is probably not the best solution, but it illustrates the point:
```diff
diff --git a/mirascope/llm/_call.py b/mirascope/llm/_call.py
index 48dc2098..ebec6992 100644
--- a/mirascope/llm/_call.py
+++ b/mirascope/llm/_call.py
@@ -2,6 +2,7 @@
 
 from __future__ import annotations
 
+import asyncio
 from collections.abc import AsyncIterable, Awaitable, Callable, Iterable
 from enum import Enum
 from functools import wraps
@@ -56,9 +57,13 @@ def _get_local_provider_call(
 
         if client:
             return openai_call, client
-        from openai import OpenAI
+        from openai import AsyncOpenAI, OpenAI
 
-        client = OpenAI(api_key="ollama", base_url="http://localhost:11434/v1")
+        try:
+            asyncio.get_running_loop()
+            client = AsyncOpenAI(api_key="ollama", base_url="http://localhost:11434/v1")
+        except RuntimeError:
+            client = OpenAI(api_key="ollama", base_url="http://localhost:11434/v1")
         return openai_call, client
     else:  # provider == "vllm"
         from ..core.openai import openai_call
```
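For context, here is a minimal sketch of how the mismatch shows up, assuming Mirascope's `llm.call` decorator and a local Ollama server on the default port (the model name is illustrative): without a guard like the one above, the async-decorated call is handed a synchronous `OpenAI` client even though it runs inside an event loop.

```python
import asyncio

from mirascope import llm


# Illustrative model name; any model served by the local Ollama instance works.
@llm.call(provider="ollama", model="llama3.2")
async def recommend_book(genre: str) -> str:
    return f"Recommend a {genre} book"


async def main() -> None:
    # Inside a running event loop, _get_local_provider_call should hand back
    # an AsyncOpenAI client; today it always builds a sync OpenAI client,
    # which breaks the awaited call.
    response = await recommend_book("fantasy")
    print(response.content)


asyncio.run(main())
```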
I ran into this using Ollama, but the same issue most likely exists with vLLM as well (see the sketch below).
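For the vLLM branch, the analogous guard would presumably look like this. This is only a sketch: `_get_vllm_client` is a hypothetical helper, and the base URL and API key are the usual defaults for a vLLM OpenAI-compatible server, not values taken from the Mirascope source.

```python
import asyncio

from openai import AsyncOpenAI, OpenAI


def _get_vllm_client() -> AsyncOpenAI | OpenAI:
    """Hypothetical helper: pick a client matching the execution context."""
    base_url = "http://localhost:8000/v1"  # assumed vLLM OpenAI-compatible endpoint
    try:
        # A running event loop means we are in an async context.
        asyncio.get_running_loop()
        return AsyncOpenAI(api_key="vllm", base_url=base_url)
    except RuntimeError:
        return OpenAI(api_key="vllm", base_url=base_url)
```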
Fix released in v1.21.1! Thanks for pointing this out
willbakst