Conversation

Collaborator

@zerob13 zerob13 commented Jan 4, 2026

Finished: #1236

Summary by CodeRabbit

  • New Features
    • Added API endpoint selection for OpenAI-compatible models, allowing users to choose between chat and image endpoints based on their needs.
    • New model settings option with translations across all supported languages.

Contributor

coderabbitai bot commented Jan 4, 2026

📝 Walkthrough

This PR introduces API endpoint selection for OpenAI-compatible models, allowing users to configure whether requests route to Chat, Image, or Video endpoints. Changes include a new ApiEndpointType enum, a model config field with defaults, backend dispatch logic based on endpoint type, a UI selector in the model configuration dialog, and translations across 12 language variants.
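
A rough sketch of the dispatch idea only, not the provider's actual code: it assumes the ApiEndpointType enum from src/shared/model.ts, and the endpoint-specific call sites below are stand-ins for the real request logic inside openAICompatibleProvider.ts.

import { ApiEndpointType } from '@shared/model'

// Stand-in request functions; the real provider calls the OpenAI-compatible client instead.
const callChatEndpoint = async (payload: unknown) => ({ endpoint: 'chat', payload })
const callImageEndpoint = async (payload: unknown) => ({ endpoint: 'image', payload })
const callVideoEndpoint = async (payload: unknown) => ({ endpoint: 'video', payload })

// Route a request based on the resolved endpoint type.
async function dispatchByEndpoint(endpoint: ApiEndpointType, payload: unknown) {
  switch (endpoint) {
    case ApiEndpointType.Image:
      return callImageEndpoint(payload)
    case ApiEndpointType.Video:
      return callVideoEndpoint(payload) // enum value exists; the UI does not expose it yet
    default:
      return callChatEndpoint(payload)
  }
}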

Changes

  • Shared Type Definitions (src/shared/model.ts, src/shared/types/presenters/legacy.presenters.d.ts): Added a new ApiEndpointType enum (Chat, Image, Video) and an optional apiEndpoint field to the ModelConfig interface
  • Backend Configuration & Provider (src/main/presenter/configPresenter/modelConfig.ts, src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts): Defaulted apiEndpoint to ApiEndpointType.Chat in the fallback config; introduced a getEffectiveApiEndpoint() helper and endpoint-based request routing; updated the parseFunctionCalls() signature with an optional fallback parameter
  • Frontend UI Component (src/renderer/src/components/settings/ModelConfigDialog.vue): Added the endpoint selector UI (gated to OpenAI-compatible providers), a model data field, and default initialization logic
  • Internationalization (src/renderer/src/i18n/{en-US,da-DK,fa-IR,fr-FR,he-IL,ja-JP,ko-KR,pt-BR,ru-RU,zh-CN,zh-HK,zh-TW}/settings.json): Added apiEndpoint translation entries with description, label, and chat/image options across the 12 language files

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes

Poem

🐰 A rabbit hops through endpoints three—
Chat and Image wild and free!
Config switches flow with grace,
Each model finds its perfect place. ✨

Pre-merge checks

✅ Passed checks (3 passed)

  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title 'feat: support image api settings' directly corresponds to the main change: adding support for API endpoint selection (including image endpoints) with configuration UI and i18n.
  • Docstring Coverage: ✅ Passed. No functions were found in the changed files to evaluate docstring coverage, so the check was skipped.

@ThinkInAIXYZ ThinkInAIXYZ deleted a comment from chatgpt-codex-connector bot Jan 4, 2026
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (4)
src/main/presenter/configPresenter/modelConfig.ts (1)

1-1: Default apiEndpoint fallback is sound; consider making it consistently explicit

Importing ApiEndpointType and defaulting the fallback ModelConfig to apiEndpoint: ApiEndpointType.Chat makes the new field safe for unknown models. For configs built from the provider DB (buildConfigFromProviderModel), apiEndpoint remains undefined, so downstream code must always treat undefined as Chat. If you prefer every non-user config to carry an explicit endpoint, you could also set apiEndpoint: ApiEndpointType.Chat in buildConfigFromProviderModel (or in a small helper) so callers don’t need to special‑case undefined.

Also applies to: 375-391
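
A minimal sketch of that suggestion, assuming a simplified signature for buildConfigFromProviderModel; the real function builds the full ModelConfig, which is elided here:

import { ApiEndpointType } from '@shared/model'

function buildConfigFromProviderModel(providerModel: {
  id: string
  apiEndpoint?: ApiEndpointType
}) {
  return {
    // ...other ModelConfig fields derived from the provider DB entry...
    // Set the endpoint explicitly so downstream code never has to special-case undefined.
    apiEndpoint: providerModel.apiEndpoint ?? ApiEndpointType.Chat
  }
}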

src/renderer/src/i18n/en-US/settings.json (1)

278-284: API endpoint i18n is correct; confirm intentional omission of video option

The new apiEndpoint block is well-structured and matches other locales. Since ApiEndpointType includes video but the UI options here (and in other locales) only expose chat and image, please confirm that video endpoints are intentionally not user-selectable yet; if/when you enable them, you’ll need to add a video option and translations across all settings.json locales.

src/shared/types/presenters/legacy.presenters.d.ts (1)

6-6: ModelConfig typings updated correctly for apiEndpoint

Importing ApiEndpointType and adding optional apiEndpoint?: ApiEndpointType to ModelConfig aligns the public typings with the new config field used in the main process and renderer UI while remaining backward compatible. If you later want to persist the chosen endpoint per conversation or in static defaults, you can similarly extend CONVERSATION_SETTINGS or DefaultModelSetting, but that isn’t required for this PR’s per‑model setting behavior.

Also applies to: 146-166
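
For reference, a sketch of the typing shape described above; the neighboring ModelConfig members are placeholders, not the full interface from legacy.presenters.d.ts:

import { ApiEndpointType } from '@shared/model'

export interface ModelConfig {
  // ...existing fields (placeholders; the real interface defines many more)...
  maxTokens: number
  temperature: number
  // Optional, so configs persisted before this change continue to type-check
  apiEndpoint?: ApiEndpointType
}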

src/renderer/src/components/settings/ModelConfigDialog.vue (1)

502-516: Consider extracting the provider exclusion list to a shared constant.

The hardcoded EXCLUDED_PROVIDERS array works but could be fragile if new non-OpenAI-compatible providers are added elsewhere. Additionally, the check uses includes(), which may produce false positives (e.g., a provider ID that merely contains "ollama" as a substring).

🔎 Suggested improvement
 const isOpenAICompatibleProvider = computed(() => {
   const EXCLUDED_PROVIDERS = [
     'anthropic',
     'gemini',
     'vertex',
     'aws-bedrock',
     'github-copilot',
     'ollama',
     'acp'
   ]
   const providerId = props.providerId?.toLowerCase() || ''
-  return !EXCLUDED_PROVIDERS.some((excluded) => providerId.includes(excluded))
+  return !EXCLUDED_PROVIDERS.some((excluded) => providerId === excluded || providerId.startsWith(`${excluded}-`))
 })
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d88d866 and b9db664.

📒 Files selected for processing (17)
  • src/main/presenter/configPresenter/modelConfig.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/renderer/src/i18n/da-DK/settings.json
  • src/renderer/src/i18n/en-US/settings.json
  • src/renderer/src/i18n/fa-IR/settings.json
  • src/renderer/src/i18n/fr-FR/settings.json
  • src/renderer/src/i18n/he-IL/settings.json
  • src/renderer/src/i18n/ja-JP/settings.json
  • src/renderer/src/i18n/ko-KR/settings.json
  • src/renderer/src/i18n/pt-BR/settings.json
  • src/renderer/src/i18n/ru-RU/settings.json
  • src/renderer/src/i18n/zh-CN/settings.json
  • src/renderer/src/i18n/zh-HK/settings.json
  • src/renderer/src/i18n/zh-TW/settings.json
  • src/shared/model.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
🧰 Additional context used
📓 Path-based instructions (31)
src/renderer/src/i18n/**/*.json

📄 CodeRabbit inference engine (.cursor/rules/i18n.mdc)

src/renderer/src/i18n/**/*.json: Translation key naming convention: use dot-separated hierarchical structure with lowercase letters and descriptive names (e.g., 'common.button.submit')
Maintain consistent key-value structure across all language translation files (zh-CN, en-US, ko-KR, ru-RU, zh-HK, fr-FR, fa-IR)

Files:

  • src/renderer/src/i18n/da-DK/settings.json
  • src/renderer/src/i18n/fr-FR/settings.json
  • src/renderer/src/i18n/ko-KR/settings.json
  • src/renderer/src/i18n/he-IL/settings.json
  • src/renderer/src/i18n/fa-IR/settings.json
  • src/renderer/src/i18n/zh-HK/settings.json
  • src/renderer/src/i18n/ru-RU/settings.json
  • src/renderer/src/i18n/en-US/settings.json
  • src/renderer/src/i18n/pt-BR/settings.json
  • src/renderer/src/i18n/zh-TW/settings.json
  • src/renderer/src/i18n/zh-CN/settings.json
  • src/renderer/src/i18n/ja-JP/settings.json
src/**/*

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

New features should be developed in the src directory

Files:

  • src/renderer/src/i18n/da-DK/settings.json
  • src/renderer/src/i18n/fr-FR/settings.json
  • src/shared/model.ts
  • src/renderer/src/i18n/ko-KR/settings.json
  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/renderer/src/i18n/he-IL/settings.json
  • src/renderer/src/i18n/fa-IR/settings.json
  • src/main/presenter/configPresenter/modelConfig.ts
  • src/renderer/src/i18n/zh-HK/settings.json
  • src/renderer/src/i18n/ru-RU/settings.json
  • src/renderer/src/i18n/en-US/settings.json
  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/renderer/src/i18n/pt-BR/settings.json
  • src/renderer/src/i18n/zh-TW/settings.json
  • src/renderer/src/i18n/zh-CN/settings.json
  • src/renderer/src/i18n/ja-JP/settings.json
src/renderer/**

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

Use lowercase with dashes for directories (e.g., components/auth-wizard)

Files:

  • src/renderer/src/i18n/da-DK/settings.json
  • src/renderer/src/i18n/fr-FR/settings.json
  • src/renderer/src/i18n/ko-KR/settings.json
  • src/renderer/src/i18n/he-IL/settings.json
  • src/renderer/src/i18n/fa-IR/settings.json
  • src/renderer/src/i18n/zh-HK/settings.json
  • src/renderer/src/i18n/ru-RU/settings.json
  • src/renderer/src/i18n/en-US/settings.json
  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/renderer/src/i18n/pt-BR/settings.json
  • src/renderer/src/i18n/zh-TW/settings.json
  • src/renderer/src/i18n/zh-CN/settings.json
  • src/renderer/src/i18n/ja-JP/settings.json
**/*.{ts,tsx,js,jsx,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Use English for logs and comments (Chinese text exists in legacy code, but new code should use English)

Files:

  • src/shared/model.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/main/presenter/configPresenter/modelConfig.ts
  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Enable and maintain strict TypeScript type checking for all files

**/*.{ts,tsx}: Always use try-catch to handle possible errors in TypeScript code
Provide meaningful error messages when catching errors
Log detailed error logs including error details, context, and stack traces
Distinguish and handle different error types (UserError, NetworkError, SystemError, BusinessError) with appropriate handlers in TypeScript
Use structured logging with logger.error(), logger.warn(), logger.info(), logger.debug() methods from logging utilities
Do not suppress errors (avoid empty catch blocks or silently ignoring errors)
Provide user-friendly error messages for user-facing errors in TypeScript components
Implement error retry mechanisms for transient failures in TypeScript
Avoid logging sensitive information (passwords, tokens, PII) in logs

Files:

  • src/shared/model.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/main/presenter/configPresenter/modelConfig.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Do not include AI co-authoring information (e.g., 'Co-Authored-By: Claude') in git commits

Files:

  • src/shared/model.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/main/presenter/configPresenter/modelConfig.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
**/*.{js,ts,jsx,tsx,mjs,cjs}

📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)

Write logs and comments in English

Files:

  • src/shared/model.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/main/presenter/configPresenter/modelConfig.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/shared/**/*.{js,ts}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Shared type definitions and utilities between main and renderer processes should be placed in src/shared

Files:

  • src/shared/model.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
src/shared/**/*.ts

📄 CodeRabbit inference engine (AGENTS.md)

Shared types and utilities should be placed in src/shared/

Files:

  • src/shared/model.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
src/**/*.{ts,tsx,vue,js,jsx}

📄 CodeRabbit inference engine (AGENTS.md)

Use Prettier with single quotes, no semicolons, and 100 character width

Files:

  • src/shared/model.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/main/presenter/configPresenter/modelConfig.ts
  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (AGENTS.md)

Use OxLint for linting JavaScript and TypeScript files

Files:

  • src/shared/model.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/main/presenter/configPresenter/modelConfig.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

src/**/*.{ts,tsx}: Use camelCase for variable and function names in TypeScript files
Use PascalCase for type and class names in TypeScript
Use SCREAMING_SNAKE_CASE for constant names

Files:

  • src/shared/model.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/main/presenter/configPresenter/modelConfig.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/**/*.ts

📄 CodeRabbit inference engine (AGENTS.md)

Use EventBus for inter-process communication events

Files:

  • src/shared/model.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/main/presenter/configPresenter/modelConfig.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/shared/**/*.d.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

Define type definitions in shared/*.d.ts files for objects exposed by the main process to the renderer process

Files:

  • src/shared/types/presenters/legacy.presenters.d.ts
src/main/presenter/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Organize core business logic into dedicated Presenter classes, with one presenter per functional domain

Files:

  • src/main/presenter/configPresenter/modelConfig.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/main/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Use EventBus from src/main/eventbus.ts for main-to-renderer communication, broadcasting events via mainWindow.webContents.send()

src/main/**/*.ts: Use EventBus pattern for inter-process communication within the main process to decouple modules
Use Electron's built-in APIs for file system and native dialogs instead of Node.js or custom implementations

src/main/**/*.ts: Electron main process code belongs in src/main/ with presenters in presenter/ (Window/Tab/Thread/Mcp/Config/LLMProvider) and eventbus.ts for app events
Use the Presenter pattern in the main process for UI coordination

Files:

  • src/main/presenter/configPresenter/modelConfig.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/main/presenter/configPresenter/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Store and retrieve custom prompts via configPresenter.getCustomPrompts() for config-based data source management

Files:

  • src/main/presenter/configPresenter/modelConfig.ts
{src/main/presenter/**/*.ts,src/renderer/**/*.ts}

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

Implement proper inter-process communication (IPC) patterns using Electron's ipcRenderer and ipcMain APIs

Files:

  • src/main/presenter/configPresenter/modelConfig.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/main/**/*.{js,ts}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Main process code for Electron should be placed in src/main

Files:

  • src/main/presenter/configPresenter/modelConfig.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
**/*.vue

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.vue: Use Vue 3 Composition API for all components instead of Options API
Use Tailwind CSS with scoped styles for component styling

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/**/*.vue

📄 CodeRabbit inference engine (CLAUDE.md)

src/renderer/**/*.vue: All user-facing strings must use i18n keys via vue-i18n for internationalization
Ensure proper error handling and loading states in all UI components
Implement responsive design using Tailwind CSS utilities for all UI components

src/renderer/**/*.vue: Use composition API and declarative programming patterns; avoid options API
Structure files: exported component, composables, helpers, static content, types
Use PascalCase for component names (e.g., AuthWizard.vue)
Use Vue 3 with TypeScript, leveraging defineComponent and PropType
Use template syntax for declarative rendering
Use Shadcn Vue, Radix Vue, and Tailwind for components and styling
Implement responsive design with Tailwind CSS; use a mobile-first approach
Use Suspense for asynchronous components
Use <script setup> syntax for concise component definitions
Prefer 'lucide:' icon family as the primary choice for Iconify icons
Import Icon component from '@iconify/vue' and use with lucide icons following pattern '{collection}:{icon-name}'

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/src/**/*.{vue,ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/i18n.mdc)

src/renderer/src/**/*.{vue,ts,tsx}: All user-facing strings must use i18n keys with vue-i18n framework in the renderer
Import and use useI18n() composable with the t() function to access translations in Vue components and TypeScript files
Use the dynamic locale.value property to switch languages at runtime
Avoid hardcoding user-facing text and ensure all user-visible text uses the i18n translation system

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/**/*.{vue,js,ts}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Renderer process code should be placed in src/renderer (Vue 3 application)

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}

📄 CodeRabbit inference engine (.cursor/rules/vue-best-practices.mdc)

src/renderer/src/**/*.{vue,ts,tsx,js,jsx}: Use the Composition API for better code organization and reusability in Vue.js applications
Implement proper state management with Pinia in Vue.js applications
Utilize Vue Router for navigation and route management in Vue.js applications
Leverage Vue's built-in reactivity system for efficient data handling

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/src/**/*.vue

📄 CodeRabbit inference engine (.cursor/rules/vue-best-practices.mdc)

Use scoped styles to prevent CSS conflicts between Vue components

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/**/*.{ts,tsx,vue}

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,tsx,vue}: Write concise, technical TypeScript code with accurate examples
Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError)
Avoid enums; use const objects instead
Use arrow functions for methods and computed properties
Avoid unnecessary curly braces in conditionals; use concise syntax for simple statements

Vue 3 app code in src/renderer/src should be organized into components/, stores/, views/, i18n/, lib/ directories with shell UI in src/renderer/shell/

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/**/*.{ts,vue}

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,vue}: Use useFetch and useAsyncData for data fetching
Leverage ref, reactive, and computed for reactive state management
Use provide/inject for dependency injection when appropriate
Use Iconify/Vue for icon implementation

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/src/**/*.{ts,tsx,vue}

📄 CodeRabbit inference engine (AGENTS.md)

src/renderer/src/**/*.{ts,tsx,vue}: Use TypeScript with Vue 3 Composition API for the renderer application
All user-facing strings must use vue-i18n keys in src/renderer/src/i18n

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/src/components/**/*.vue

📄 CodeRabbit inference engine (AGENTS.md)

src/renderer/src/components/**/*.vue: Use Tailwind for styles in Vue components
Vue component files must use PascalCase naming (e.g., ChatInput.vue)

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/main/presenter/llmProviderPresenter/providers/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

src/main/presenter/llmProviderPresenter/providers/*.ts: Each LLM provider must implement the coreStream method following the standardized event interface for tool calling and response streaming
Convert MCP tools to provider-specific formats and normalize streaming responses to standard events in each provider implementation

src/main/presenter/llmProviderPresenter/providers/*.ts: In Provider implementations (src/main/presenter/llmProviderPresenter/providers/*.ts), the coreStream(messages, modelId, temperature, maxTokens) method should perform a single-pass streaming API request for each conversation round without containing multi-turn tool call loop logic
In Provider implementations, handle native tool support by converting MCP tools to Provider format using convertToProviderTools and including them in the API request; for Providers without native function call support, prepare messages using prepareFunctionCallPrompt before making the API call
In Provider implementations, parse Provider-specific data chunks from the streaming response and yield standardized LLMCoreStreamEvent objects conforming to the standard stream event interface, including text, reasoning, tool calls, usage, errors, stop reasons, and image data
In Provider implementations, include helper methods for Provider-specific operations such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/main/presenter/llmProviderPresenter/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)

Define the standardized LLMCoreStreamEvent interface with fields: type (text | reasoning | tool_call_start | tool_call_chunk | tool_call_end | error | usage | stop | image_data), content (for text), reasoning_content (for reasoning), tool_call_id, tool_call_name, tool_call_arguments_chunk (for streaming), tool_call_arguments_complete (for complete arguments), error_message, usage object with token counts, stop_reason (tool_use | max_tokens | stop_sequence | error | complete), and image_data object with Base64-encoded data and mimeType

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
🧠 Learnings (25)
📚 Learning: 2025-11-25T05:26:43.510Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-11-25T05:26:43.510Z
Learning: Applies to src/renderer/src/i18n/**/*.json : Maintain consistent key-value structure across all language translation files (zh-CN, en-US, ko-KR, ru-RU, zh-HK, fr-FR, fa-IR)

Applied to files:

  • src/renderer/src/i18n/ko-KR/settings.json
  • src/renderer/src/i18n/zh-HK/settings.json
  • src/renderer/src/i18n/ru-RU/settings.json
  • src/renderer/src/i18n/zh-TW/settings.json
  • src/renderer/src/i18n/zh-CN/settings.json
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Implement separation of concerns where `src/main/presenter/llmProviderPresenter/index.ts` manages the Agent loop and conversation history, while Provider files handle LLM API interactions, Provider-specific request/response formatting, tool definition conversion, and native vs non-native tool call mechanisms

Applied to files:

  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, include helper methods for Provider-specific operations such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt`

Applied to files:

  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/**/*.ts : Define the standardized `LLMCoreStreamEvent` interface with fields: `type` (text | reasoning | tool_call_start | tool_call_chunk | tool_call_end | error | usage | stop | image_data), `content` (for text), `reasoning_content` (for reasoning), `tool_call_id`, `tool_call_name`, `tool_call_arguments_chunk` (for streaming), `tool_call_arguments_complete` (for complete arguments), `error_message`, `usage` object with token counts, `stop_reason` (tool_use | max_tokens | stop_sequence | error | complete), and `image_data` object with Base64-encoded data and mimeType

Applied to files:

  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-11-25T05:26:24.867Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/electron-best-practices.mdc:0-0
Timestamp: 2025-11-25T05:26:24.867Z
Learning: Applies to src/shared/**/*.d.ts : Define type definitions in shared/*.d.ts files for objects exposed by the main process to the renderer process

Applied to files:

  • src/main/presenter/configPresenter/modelConfig.ts
📚 Learning: 2025-11-25T05:28:20.513Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-25T05:28:20.513Z
Learning: Applies to src/main/**/*.ts : Electron main process code belongs in `src/main/` with presenters in `presenter/` (Window/Tab/Thread/Mcp/Config/LLMProvider) and `eventbus.ts` for app events

Applied to files:

  • src/main/presenter/configPresenter/modelConfig.ts
📚 Learning: 2025-11-25T05:27:45.545Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-best-practices.mdc:0-0
Timestamp: 2025-11-25T05:27:45.545Z
Learning: Applies to src/renderer/src/**/*.{vue,ts,tsx,js,jsx} : Implement proper state management with Pinia in Vue.js applications

Applied to files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
📚 Learning: 2025-11-25T05:27:20.067Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/pinia-best-practices.mdc:0-0
Timestamp: 2025-11-25T05:27:20.067Z
Learning: Applies to src/renderer/src/stores/**/*.{vue,ts,tsx,js,jsx} : Use modules to organize related state and actions in Pinia stores

Applied to files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
📚 Learning: 2025-11-25T05:27:20.067Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/pinia-best-practices.mdc:0-0
Timestamp: 2025-11-25T05:27:20.067Z
Learning: Applies to src/renderer/src/stores/**/*.{vue,ts,tsx,js,jsx} : Keep Pinia stores focused on global state, not component-specific data

Applied to files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
📚 Learning: 2025-11-25T05:27:20.067Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/pinia-best-practices.mdc:0-0
Timestamp: 2025-11-25T05:27:20.067Z
Learning: Applies to src/renderer/src/stores/**/*.{vue,ts,tsx,js,jsx} : Use getters for computed state properties in Pinia stores

Applied to files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
📚 Learning: 2025-11-25T05:26:11.312Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.312Z
Learning: Use Pinia for frontend state management and Vue Router for SPA routing

Applied to files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
📚 Learning: 2025-11-25T05:28:20.513Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-25T05:28:20.513Z
Learning: Applies to src/renderer/src/stores/**/*.ts : Use Pinia for state management

Applied to files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
📚 Learning: 2025-11-25T05:28:04.454Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-11-25T05:28:04.454Z
Learning: Applies to src/renderer/**/stores/*.ts : Use Pinia for state management

Applied to files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
📚 Learning: 2025-11-25T05:27:20.067Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/pinia-best-practices.mdc:0-0
Timestamp: 2025-11-25T05:27:20.067Z
Learning: Applies to src/renderer/src/stores/**/*.{vue,ts,tsx,js,jsx} : Implement proper state persistence for maintaining data across sessions in Pinia stores

Applied to files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
📚 Learning: 2025-11-25T05:28:04.454Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-11-25T05:28:04.454Z
Learning: Applies to src/renderer/**/*.{ts,vue} : Leverage ref, reactive, and computed for reactive state management

Applied to files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
📚 Learning: 2025-11-25T05:27:20.067Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/pinia-best-practices.mdc:0-0
Timestamp: 2025-11-25T05:27:20.067Z
Learning: Applies to src/renderer/src/stores/**/*.{vue,ts,tsx,js,jsx} : Utilize actions for side effects and asynchronous operations in Pinia stores

Applied to files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
📚 Learning: 2025-11-25T05:26:11.312Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.312Z
Learning: Applies to src/main/presenter/configPresenter/**/*.ts : Store and retrieve custom prompts via `configPresenter.getCustomPrompts()` for config-based data source management

Applied to files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, handle native tool support by converting MCP tools to Provider format using `convertToProviderTools` and including them in the API request; for Providers without native function call support, prepare messages using `prepareFunctionCallPrompt` before making the API call

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations (`src/main/presenter/llmProviderPresenter/providers/*.ts`), the `coreStream(messages, modelId, temperature, maxTokens)` method should perform a *single-pass* streaming API request for each conversation round without containing multi-turn tool call loop logic

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-11-25T05:26:11.312Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.312Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Convert MCP tools to provider-specific formats and normalize streaming responses to standard events in each provider implementation

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, listen for standardized events yielded by `coreStream` and handle them accordingly: buffer text content (`currentContent`), handle `tool_call_start/chunk/end` events by collecting tool details and calling `presenter.mcpPresenter.callTool`, send frontend events via `eventBus` with tool call status, format tool results for the next LLM call, and set `needContinueConversation = true`

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, parse Provider-specific data chunks from the streaming response and `yield` standardized `LLMCoreStreamEvent` objects conforming to the standard stream event interface, including text, reasoning, tool calls, usage, errors, stop reasons, and image data

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts` (`startStreamCompletion`), implement the Agent loop that manages the overall conversation flow, including multiple rounds of LLM calls and tool usage, maintaining `conversationMessages` history, calling `provider.coreStream()` on each iteration, and controlling the loop using `needContinueConversation` and `toolCallCount` (compared against `MAX_TOOL_CALLS`)

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, handle `reasoning`, `text`, `image_data`, and `usage` events by processing and forwarding them through `STREAM_EVENTS.RESPONSE` events to the frontend

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-11-25T05:26:11.312Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.312Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each LLM provider must implement the `coreStream` method following the standardized event interface for tool calling and response streaming

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build-check (x64)
🔇 Additional comments (17)
src/renderer/src/i18n/zh-HK/settings.json (1)

278-284: LGTM! Consistent API endpoint translation added.

The new apiEndpoint configuration block is correctly structured and positioned within the model configuration section. The key hierarchy (apiEndpoint.label, apiEndpoint.description, apiEndpoint.options.chat/image) follows the dot-separated naming convention specified in coding guidelines.

As per coding guidelines, the key-value structure is consistent across all language files in this PR.

src/renderer/src/i18n/fr-FR/settings.json (1)

278-284: LGTM! Consistent API endpoint translation added.

The French translation maintains the same structural consistency as other locales, with proper key naming and positioning within the model configuration.

src/renderer/src/i18n/ru-RU/settings.json (1)

278-284: LGTM! Consistent API endpoint translation added.

The Russian translation follows the established pattern with correct JSON structure and key hierarchy.

src/renderer/src/i18n/da-DK/settings.json (1)

278-284: LGTM! Consistent API endpoint translation added.

The Danish translation maintains consistency with other language files in structure and positioning.

src/renderer/src/i18n/zh-CN/settings.json (1)

236-242: LGTM! Consistent API endpoint translation added.

The Simplified Chinese translation correctly adds the apiEndpoint configuration with proper structure. All translation files in this PR maintain consistent key-value structures as required by the coding guidelines.

Note: The shared ApiEndpointType enum includes Video = 'video' in addition to Chat and Image, but the video option is not included in these translations. This appears intentional for the current PR scope, likely indicating the UI only supports chat and image endpoints at this time.

src/shared/model.ts (1)

11-15: LGTM!

The ApiEndpointType enum is well-defined and follows the existing pattern of ModelType above it. String values correctly match the enum keys in lowercase, making it suitable for API routing and UI display.
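
For context, a sketch of the enum as this review describes it (the inline comment is illustrative):

// src/shared/model.ts (as described in this review)
export enum ApiEndpointType {
  Chat = 'chat',
  Image = 'image',
  // Defined in the enum but not yet exposed as a UI option; see the i18n comments below
  Video = 'video'
}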

src/renderer/src/i18n/ja-JP/settings.json (1)

278-285: Consistent with other locale files.

The translation structure matches other locale files in this PR. Note: options.video is not included here (same as other locales), though ApiEndpointType.Video exists in the enum.

Based on learnings: Maintain consistent key-value structure across all language translation files.

src/renderer/src/i18n/pt-BR/settings.json (1)

278-285: LGTM! Consistent translation structure.

The Portuguese translation follows the same structure as other locale files in this PR. The apiEndpoint configuration is properly nested under modelConfig and includes the necessary keys for UI display.

Based on learnings: Maintain consistent key-value structure across all language translation files.

src/renderer/src/i18n/ko-KR/settings.json (1)

278-285: LGTM! Translation structure is consistent.

The Korean translation properly adds the apiEndpoint configuration with appropriate localization. The structure matches all other locale files in this PR.

Based on learnings: Maintain consistent key-value structure across all language translation files.

src/renderer/src/i18n/zh-TW/settings.json (1)

278-285: Video endpoint missing from all translation files consistently—verify if intentional.

The ApiEndpointType enum in src/shared/model.ts includes Video = 'video', and it's handled in the backend (openAICompatibleProvider.ts line 1552). However, the language files in this PR (zh-CN, en-US, ko-KR, ru-RU, zh-HK, fr-FR, fa-IR, zh-TW, and the rest) consistently omit the "video" option from apiEndpoint.options.

If the video endpoint is intentionally not yet exposed in the UI, no action is needed. If it should be available, add the following entry to all locale files:

zh-TW: "video": "影片生成"
zh-CN: "video": "视频生成" (or similar for other locales)

This ensures consistency with the backend support and maintains the key-value structure pattern across all translation files.

src/renderer/src/i18n/fa-IR/settings.json (1)

278-284: fa-IR apiEndpoint i18n entry looks structurally correct

Key path, structure, and options (chat/image) mirror the English source and neighboring searchLimit entry; JSON and nesting are valid.

src/renderer/src/i18n/he-IL/settings.json (1)

278-284: he-IL apiEndpoint translation aligns with structure and meaning

The apiEndpoint block is correctly placed under model.modelConfig, mirrors the English structure, and the labels/descriptions read consistently with the intended behavior.

src/renderer/src/components/settings/ModelConfigDialog.vue (3)

149-168: LGTM! Well-structured API endpoint selector.

The UI block is properly conditionally rendered, uses i18n keys for all user-facing strings, and follows the established pattern of other selector components in this dialog.


526-526: LGTM! Default apiEndpoint correctly set.

The default value ApiEndpointType.Chat aligns with the backend default and ensures consistent behavior for new configurations.


637-640: LGTM! Proper fallback initialization for existing configs.

This ensures backward compatibility by setting apiEndpoint to Chat for existing OpenAI-compatible provider configurations that lack this field.
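
A rough sketch of that fallback, with local refs standing in for the dialog's real reactive state; modelData and the provider check are assumed names, based on the snippets quoted elsewhere in this review:

import { computed, ref } from 'vue'
import { ApiEndpointType } from '@shared/model'

// Placeholders for the dialog's actual state
const modelData = ref<{ apiEndpoint?: ApiEndpointType }>({})
const isOpenAICompatibleProvider = computed(() => true) // stands in for the provider-id check

// Configs saved before this PR carry no apiEndpoint; default to Chat so the selector
// value matches the backend default.
if (isOpenAICompatibleProvider.value && !modelData.value.apiEndpoint) {
  modelData.value.apiEndpoint = ApiEndpointType.Chat
}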

src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (2)

90-102: LGTM! Clean endpoint resolution logic.

The helper method correctly prioritizes: (1) explicit config, (2) model-based heuristic for image generation, (3) default to Chat. This maintains backward compatibility while enabling the new feature.
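
A hedged sketch of that priority order; the image-model heuristic shown here is an assumption, not the provider's actual matching logic:

import { ApiEndpointType } from '@shared/model'

function getEffectiveApiEndpoint(
  modelId: string,
  configured?: ApiEndpointType
): ApiEndpointType {
  // 1. An explicit per-model setting always wins
  if (configured) return configured
  // 2. Heuristic for models that only make sense on the image endpoint (assumed pattern)
  if (/dall-e|image/i.test(modelId)) return ApiEndpointType.Image
  // 3. Default to the chat completions endpoint
  return ApiEndpointType.Chat
}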


13-13: LGTM! Import added correctly.

The ApiEndpointType import from @shared/model aligns with the shared type definition used across the codebase.

@zerob13 zerob13 merged commit 84940b2 into dev Jan 4, 2026
2 checks passed