Setup wizard sets reasoning_effort: high which causes CAPIError 400 on every prompt with Claude models #2153

@Simon-McIntosh

Description

Describe the bug

During first launch, the Copilot CLI setup wizard prompts the user to configure reasoning_effort (offering options like "high"). When the user selects this, it is saved to ~/.copilot/config.json. However, reasoning_effort is an OpenAI-specific parameter that Claude models do not support. When the CLI sends this parameter to the API for any Claude model (including the default claude-sonnet-4.6), the API returns 400 Bad Request immediately — before any tool is called or any response is returned.

The result: the tool is completely non-functional from first launch, because the setup wizard itself configures it into a broken state.

Affected version

GitHub Copilot CLI 1.0.9

Steps to reproduce

  1. Install Copilot CLI fresh
  2. On first launch, when prompted about reasoning_effort, select "high"
  3. The config is saved to ~/.copilot/config.json:
    {
      "reasoning_effort": "high",
      "streamer_mode": true
    }
  4. Type any prompt and press Enter
  5. Every single prompt fails immediately:
    ✗ Execution failed: CAPIError: 400 400 Bad Request
    (Request ID: FB0A:105CE9:33A8B7:3D6D1E:69BB8BD2)
    

Root cause (from logs)

The error occurs in getCompletionWithTools — the API rejects the request before any AI response is produced. The reasoning_effort parameter is being forwarded to Claude models that don't support it.

From ~/.copilot/logs/process-*.log:

[ERROR] error (Request-ID FB0A:105CE9:33A8B7:3D6D1E:69BB8BD2)
[ERROR] {
  "status": 400,
  "name": "CAPIError",
  "message": "400 400 Bad Request\n",
  ...
  at wst.getCompletionWithTools

Workaround

Manually remove the reasoning_effort key from ~/.copilot/config.json, leaving the rest of the config intact, e.g.:

{
  "firstLaunchAt": "...",
  "banner": "never",
  "experimental": true
}
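The workaround can be scripted. A minimal sketch, assuming the config file is plain JSON at the path shown above (the key names are taken from the report; `strip_reasoning_effort` and `fix_config` are hypothetical helper names):

```python
import json
from pathlib import Path

def strip_reasoning_effort(config: dict) -> dict:
    """Return a copy of the config without the unsupported key."""
    cleaned = dict(config)
    # 'reasoning_effort' is the key written by the setup wizard.
    cleaned.pop("reasoning_effort", None)
    return cleaned

def fix_config(path: Path) -> None:
    """Rewrite the config file with reasoning_effort removed."""
    config = json.loads(path.read_text())
    path.write_text(json.dumps(strip_reasoning_effort(config), indent=2))
```

Usage would be `fix_config(Path.home() / ".copilot" / "config.json")`; the CLI should then start sending valid requests again without reinstalling.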

Expected behavior

Either:

  1. The CLI should not offer reasoning_effort as a setup option when the default model is Claude, OR
  2. The CLI should only forward reasoning_effort to models that support it (OpenAI o-series / GPT models), and silently ignore it for Claude/Gemini models
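Option 2 could look roughly like the following sketch. The model-name prefixes and the payload shape are illustrative assumptions, not the CLI's actual internals:

```python
# Assumption: only OpenAI reasoning-capable models accept
# reasoning_effort; Claude and Gemini models reject it with a 400.
REASONING_EFFORT_PREFIXES = ("o1", "o3", "o4", "gpt-5")

def supports_reasoning_effort(model: str) -> bool:
    """Heuristic gate based on model-name prefix (illustrative only)."""
    return model.startswith(REASONING_EFFORT_PREFIXES)

def build_request(model: str, config: dict) -> dict:
    """Build a completion payload, forwarding reasoning_effort only
    to models that support it and dropping it otherwise."""
    payload = {"model": model}
    effort = config.get("reasoning_effort")
    if effort is not None and supports_reasoning_effort(model):
        payload["reasoning_effort"] = effort
    return payload
```

With this gate in place, a config containing `"reasoning_effort": "high"` would be harmless when the active model is claude-sonnet-4.6, instead of breaking every request.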

Additional context

  • OS: WSL2 (Ubuntu), also reproduced on Linux
  • Default model at time of failure: claude-sonnet-4.6
  • The failure is opaque — nothing indicates that the config is the cause; users see only a generic "400 Bad Request" with no actionable guidance
