
LLM Provider config wizard should list supported models from server for ease of setup #1621

Open
sam-at-block opened this issue Mar 11, 2025 · 0 comments
Labels: enhancement (New feature or request)

sam-at-block commented Mar 11, 2025

Motivation
When configuring OpenAI-compatible providers that serve various open-source models, the model names the server expects can be hard to track down.
I ended up finding the right model name by running `curl "$URL/v1/models" | jq`.

Describe the solution you'd like
As a QoL improvement, when configuring the LLM provider, after the host and token are provided and before asking which model to use, Goose could proactively fetch all models the API serves and suggest them. This could work in both the GUI and the CLI; I was using the CLI, so the model prompt step of the config wizard could fetch the list and present it as a single-select prompt.

At least OpenAI-compatible APIs support this via GET /v1/models, and I'm assuming most API formats in the wild have some notion of "list available models". Even the OpenAI model options alone are confusing, with lots of choices, so a prepopulated list would help remove friction from the initial onboarding.
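
For illustration, here's a minimal sketch of what that CLI step could look like, assuming reqwest for the HTTP call and dialoguer for the select prompt. The crate choices, base URL, API key, and function names are hypothetical stand-ins, not Goose's actual implementation:

```rust
use dialoguer::Select;
use serde::Deserialize;

// GET /v1/models on OpenAI-compatible servers returns
// {"object": "list", "data": [{"id": "<model-name>", ...}, ...]}.
#[derive(Deserialize)]
struct ModelsResponse {
    data: Vec<Model>,
}

#[derive(Deserialize)]
struct Model {
    id: String,
}

fn list_models(base_url: &str, api_key: &str) -> Result<Vec<String>, Box<dyn std::error::Error>> {
    let resp: ModelsResponse = reqwest::blocking::Client::new()
        .get(format!("{base_url}/v1/models"))
        .bearer_auth(api_key)
        .send()?
        .error_for_status()?
        .json()?;
    Ok(resp.data.into_iter().map(|m| m.id).collect())
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // In the wizard this would run right after the host/token step.
    // Hypothetical endpoint and key for illustration only.
    let models = list_models("http://localhost:8000", "sk-...")?;
    let choice = Select::new()
        .with_prompt("Which model should this provider use?")
        .items(&models)
        .default(0)
        .interact()?;
    println!("Selected model: {}", models[choice]);
    Ok(())
}
```

If the GET fails (say, a provider that doesn't implement the listing endpoint), the wizard could fall back to the current free-text model prompt, so nothing breaks for providers without model listing.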
