Refactors #35
Merged
Conversation
- Remove global mainWindow reference in LLM module (see the sketch below)
- Pass mainWindow explicitly to chat handlers
- Clean up unused imports and remove console logs
- Delete unused avatar image assets
- Update CSS with minor formatting improvements
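As a rough illustration of the mainWindow change, here is a minimal sketch of handler registration that takes the window as an explicit argument instead of reading a module-level global; the channel names and payload shape are assumptions, not taken from the diff.

```ts
import { BrowserWindow, ipcMain } from "electron";

// Handlers receive the window they should talk to, rather than importing a
// shared mainWindow from the LLM module.
export function registerChatHandlers(mainWindow: BrowserWindow): void {
  ipcMain.handle("chat:request", async (_event, payload: { prompt: string }) => {
    // Forward output to the renderer through the injected window.
    mainWindow.webContents.send("chat:chunk", `echo: ${payload.prompt}`);
  });
}
```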
- Extract common system prompt logic into a new returnSystemPrompt helper function (see the sketch below)
- Simplify system prompt creation across all LLM providers
- Update type definitions for improved type safety
- Reduce code duplication in provider implementations
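A minimal sketch of what a returnSystemPrompt helper along these lines might look like; the parameter names and the optional context argument are assumptions.

```ts
import type { ChatCompletionMessageParam } from "openai/resources/chat/completions";

// Build the shared system message once, so each provider stops assembling
// its own prompt string.
export function returnSystemPrompt(
  prompt: string,
  collectionContext?: string
): ChatCompletionMessageParam {
  const content = collectionContext
    ? `${prompt}\n\nRelevant context:\n${collectionContext}`
    : prompt;
  return { role: "system", content };
}
```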
- Update all LLM providers to use a single ProviderInputParams interface (see the sketch below)
- Modify function signatures to accept a single params object
- Add new ProviderInputParams type in types.d.ts
- Standardize parameter passing across different LLM providers
- Improve type safety and code consistency
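Only the interface name comes from the commit; the field list below is a guess at what such a params object might carry.

```ts
import type { ChatCompletionMessageParam } from "openai/resources/chat/completions";

// One params object replaces long positional argument lists across providers.
export interface ProviderInputParams {
  messages: ChatCompletionMessageParam[];
  userId: number;
  conversationId: number;
  model: string;
  systemPrompt?: string;
  signal?: AbortSignal;
}

// Every provider can then share the same signature.
export type ProviderFn = (params: ProviderInputParams) => Promise<void>;
```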
- Extract common message preprocessing logic into a new prepMessages helper function (see the sketch below)
- Simplify message preparation across all LLM providers
- Reduce code duplication in message timestamp and context handling
- Improve code readability and maintainability
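A sketch of the kind of normalization a prepMessages helper might do, ordering by timestamp and dropping storage-only fields; the StoredMessage shape is an assumption.

```ts
import type { ChatCompletionMessageParam } from "openai/resources/chat/completions";

// Assumed shape of a message as persisted by the app.
interface StoredMessage {
  role: "user" | "assistant";
  content: string;
  timestamp: string;
}

// Normalize stored messages into the wire format every provider expects.
export function prepMessages(history: StoredMessage[]): ChatCompletionMessageParam[] {
  return history
    .slice()
    .sort((a, b) => Date.parse(a.timestamp) - Date.parse(b.timestamp))
    .map(({ role, content }): ChatCompletionMessageParam => ({ role, content }));
}
```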
- Extract common Chain of Thought logic into a new openAiChainOfThought helper function (see the sketch below)
- Remove duplicate chainOfThought implementations from each LLM provider
- Simplify imports by removing unnecessary Electron and OpenAI type imports
- Centralize Chain of Thought reasoning generation logic
- Improve code maintainability and reduce code duplication
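A rough sketch of a centralized reasoning pass for OpenAI-compatible clients; the prompt wording and return handling are assumptions, not the project's actual implementation.

```ts
import OpenAI from "openai";
import type { ChatCompletionMessageParam } from "openai/resources/chat/completions";

// Ask the model for an intermediate reasoning step before the final answer,
// in one place instead of once per provider.
export async function openAiChainOfThought(
  client: OpenAI,
  model: string,
  messages: ChatCompletionMessageParam[]
): Promise<string> {
  const completion = await client.chat.completions.create({
    model,
    messages: [
      ...messages,
      {
        role: "user",
        content: "Think step by step about how to answer, but do not give the final answer yet.",
      },
    ],
  });
  return completion.choices[0]?.message?.content ?? "";
}
```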
- Create a new providerInitialize helper function to standardize provider setup (see the sketch below)
- Remove duplicate initialization logic across different LLM providers
- Simplify API key and model retrieval for each provider
- Reduce code duplication in provider initialization
- Improve code maintainability and readability
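One way such a helper could be shaped: resolve key, model, and base URL, and hand back a ready client. The settings shape and provider list are assumptions; the base URLs are simply the public OpenAI-compatible endpoints for those services.

```ts
import OpenAI from "openai";

type ProviderName = "openai" | "xai" | "openrouter";

// Public OpenAI-compatible endpoints; undefined falls back to api.openai.com.
const BASE_URLS: Record<ProviderName, string | undefined> = {
  openai: undefined,
  xai: "https://api.x.ai/v1",
  openrouter: "https://openrouter.ai/api/v1",
};

// Assumed settings shape; the real app presumably reads this from its store.
interface ProviderSettings {
  apiKeys: Record<ProviderName, string>;
  models: Record<ProviderName, string>;
}

export function providerInitialize(provider: ProviderName, settings: ProviderSettings) {
  const client = new OpenAI({
    apiKey: settings.apiKeys[provider],
    baseURL: BASE_URLS[provider],
  });
  return { client, model: settings.models[provider] };
}
```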
- Implement XAI provider initialization in providerInit.ts
- Update XAI provider to use centralized provider initialization
- Remove redundant initialization logic in xai.ts
- Simplify XAI provider setup and API key retrieval
- Create a new chatCompletion helper function to standardize chat completion across providers (see the sketch below)
- Simplify provider implementations by extracting common streaming and message handling logic
- Remove duplicate code in Azure OpenAI, Custom, Local Model, OpenAI, OpenRouter, XAI providers
- Improve code maintainability and reduce redundancy
- Centralize chat completion workflow with a single, reusable function
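A sketch of a shared streaming loop over an OpenAI-compatible client; the onToken callback and return value are assumptions about how the app forwards chunks.

```ts
import OpenAI from "openai";
import type { ChatCompletionMessageParam } from "openai/resources/chat/completions";

// Own the streaming loop in one place; each provider just supplies a client.
export async function chatCompletion(
  client: OpenAI,
  model: string,
  messages: ChatCompletionMessageParam[],
  onToken: (token: string) => void
): Promise<string> {
  const stream = await client.chat.completions.create({ model, messages, stream: true });
  let full = "";
  for await (const chunk of stream) {
    const token = chunk.choices[0]?.delta?.content ?? "";
    if (token) {
      full += token;
      onToken(token); // e.g. forward to the renderer over IPC
    }
  }
  return full;
}
```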
- Update generateTitle to use new providerInitialize function
- Simplify title generation across different LLM providers
- Remove redundant API key and initialization logic
- Modify function signatures to use User object instead of separate userId
- Centralize title generation workflow with a single, reusable approach
- Refactor OpenAI type imports in generateTitle and returnSystemPrompt
- Use more specific import paths for ChatCompletionMessageParam
- Remove unnecessary OpenAI import in generateTitle
- Improve type import precision and clarity
- Extract type definitions from context files to dedicated type files
- Move fetchEmbeddingModels and fetchSystemSpecs to separate data files
- Simplify context providers by removing inline type definitions
- Improve code organization and maintainability
- Centralize type definitions for better type management
- Introduce ollamaInit state to track Ollama integration status (see the sketch below)
- Update Ollama component to use new initialization state
- Modify UI rendering and button text based on initialization state
- Extend SysSettingsContext to include ollamaInit and setOllamaInit
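A minimal sketch of the context extension, reduced to just the new fields; the state values and provider layout are assumptions.

```tsx
import { createContext, useContext, useState, type ReactNode } from "react";

type OllamaInitState = "idle" | "initializing" | "ready";

// Only the new fields are shown here; the real context carries more state.
interface SysSettingsContextValue {
  ollamaInit: OllamaInitState;
  setOllamaInit: (state: OllamaInitState) => void;
}

const SysSettingsContext = createContext<SysSettingsContextValue | null>(null);

export function SysSettingsProvider({ children }: { children: ReactNode }) {
  const [ollamaInit, setOllamaInit] = useState<OllamaInitState>("idle");
  return (
    <SysSettingsContext.Provider value={{ ollamaInit, setOllamaInit }}>
      {children}
    </SysSettingsContext.Provider>
  );
}

// The Ollama panel can key its rendering and button text off ollamaInit.
export function useSysSettings() {
  const ctx = useContext(SysSettingsContext);
  if (!ctx) throw new Error("useSysSettings must be used inside SysSettingsProvider");
  return ctx;
}
```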
- Extract chat logic into useChatLogic custom hook (see the sketch below)
- Split Chat component into smaller, focused components
- Improve code organization and readability
- Simplify state management and scroll handling
- Introduce ChatHeader, ChatMessagesArea components
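A sketch of the hook half of this split: useChatLogic owns input state, streaming state, and scroll handling, while the Chat page composes ChatHeader and ChatMessagesArea around it. What the real hook manages is an assumption.

```tsx
import { useRef, useState } from "react";

// Hook owns chat state and scroll handling so the Chat component stays thin.
export function useChatLogic() {
  const [input, setInput] = useState("");
  const [isStreaming, setIsStreaming] = useState(false);
  const bottomRef = useRef<HTMLDivElement | null>(null);

  const scrollToBottom = () =>
    bottomRef.current?.scrollIntoView({ behavior: "smooth" });

  return { input, setInput, isStreaming, setIsStreaming, bottomRef, scrollToBottom };
}
```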
- Improve error messaging in chat request handling
- Add more specific error messages for API key and request issues
- Update getUserConversations to fetch conversations with messages
- Reset streaming message reasoning during request cancellation
- Minor code formatting and cleanup
- Extract complex state logic into custom hooks (useChatManagement, useConversationManagement, useModelManagement, useUIState) (see the sketch below)
- Simplify UserContext by delegating state management to specialized hooks
- Remove unnecessary state variables and useEffect hooks
- Improve code organization and readability
- Ensure all required state setters and getters are included in the context value
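A sketch of the resulting shape: UserContext becomes a thin composition of focused hooks. The hook bodies below are trivial stand-ins for the real ones.

```tsx
import { createContext, useContext, useState, type ReactNode } from "react";

// Stand-ins for the specialized hooks; the real ones hold far more state.
function useChatManagement() {
  const [streamingMessage, setStreamingMessage] = useState("");
  return { streamingMessage, setStreamingMessage };
}

function useUIState() {
  const [isLoading, setIsLoading] = useState(false);
  return { isLoading, setIsLoading };
}

type UserContextValue = ReturnType<typeof useChatManagement> & ReturnType<typeof useUIState>;

const UserContext = createContext<UserContextValue | null>(null);

// The provider just spreads the hooks into one context value.
export function UserProvider({ children }: { children: ReactNode }) {
  const value = { ...useChatManagement(), ...useUIState() };
  return <UserContext.Provider value={value}>{children}</UserContext.Provider>;
}

export function useUser() {
  const ctx = useContext(UserContext);
  if (!ctx) throw new Error("useUser must be used inside UserProvider");
  return ctx;
}
```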
- Add setStreamingMessageReasoning and setIsLoading to reset method
- Update dependencies in the reset method to include new state setters
- Remove empty data files that are no longer needed