        -- so that codecompanion doesn't mistake this as a normal response with empty string as the content
        if data.output.content == "" then
          data.output.content = nil
        end
      end
      return data
    end
  }
}
<

Notes:

1. You don’t always have to set `data.output.content` to `nil`. This is mostly intended for `streaming`, and you may encounter issues in non-stream mode if you do that.
2. It’s expected that the processed `data` table is returned at the end.
3. For adapters that are using the legacy flat handler formats, this handler should be named `handlers.parse_message_meta`. The function signature stays the same.
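The excerpt above shows only the tail of the handler. As a rough sketch (the field names `data.meta.reasoning_content` and `data.output.reasoning` are illustrative assumptions, not the adapter’s confirmed API), a DeepSeek-style `parse_meta` could look like: >lua

  handlers = {
    response = {
      ---@param self CodeCompanion.HTTPAdapter
      parse_meta = function(self, data)
        -- assumed location of the captured non-standard field
        if data.meta and data.meta.reasoning_content then
          -- assumed output field for the extracted reasoning tokens
          data.output.reasoning = data.meta.reasoning_content
        end
        -- avoid an empty string being mistaken for normal response content
        if data.output.content == "" then
          data.output.content = nil
        end
        return data
      end
    }
  }
<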
REQUEST.BUILD_PARAMETERS

For the purposes of the OpenAI adapter, no additional parameters need to be
<

Some adapter tools can be a `hybrid` in terms of their implementation. That is,
they’re an adapter tool that requires a client-side component (i.e. a
built-in tool). This is the case for the
|codecompanion-usage-chat-buffer-tools-memory| tool from Anthropic. To allow
for this, ensure that the tool definition in `available_tools` has
doc/extending/adapters.md
These handlers parse LLM responses:

- `response.parse_chat` - Format chat output for the chat buffer
- `response.parse_inline` - Format output for inline insertion
- `response.parse_tokens` - Extract token count from the response
- `response.parse_meta` - Process non-standard fields in the response (currently only supported by OpenAI-based adapters)

### Tool Handlers
}
```
### `response.parse_meta`

Some OpenAI-compatible API providers like DeepSeek, Gemini and OpenRouter implement a superset of the standard specification, and provide reasoning tokens/summaries within their response.
The non-standard fields in the [`message` (non-streaming)](https://platform.openai.com/docs/api-reference/chat/object#chat-object-choices-message) or [`delta` (streaming)](https://platform.openai.com/docs/api-reference/chat-streaming/streaming#chat_streaming-streaming-choices-delta) object are captured by the OpenAI adapter and can be used to extract the reasoning.

For example, the DeepSeek API provides the reasoning tokens in the `delta.reasoning_content` field.
We can therefore use the following `parse_meta` handler to extract the reasoning tokens and put them into the appropriate output fields:

```lua
handlers = {
  response = {
    ---@param self CodeCompanion.HTTPAdapter
    --- `data` is the output of the `parse_chat` handler
    ...
        -- so that codecompanion doesn't mistake this as a normal response with empty string as the content
        if data.output.content == "" then
          data.output.content = nil
        end
      end
      return data
    end
  }
}
```
Notes:

1. You don't always have to set `data.output.content` to `nil`. This is mostly intended for `streaming`, and you may encounter issues in non-stream mode if you do that.
2. It's expected that the processed `data` table is returned at the end.
3. For adapters that are using the legacy flat handler formats, this handler should be named `handlers.parse_message_meta`. The function signature stays the same.
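For adapters on the legacy flat format (see note 3), the handler sits directly on `handlers` under the `parse_message_meta` name. A minimal sketch, reusing the same empty-content logic shown above (the body is illustrative only):

```lua
handlers = {
  ---@param self CodeCompanion.HTTPAdapter
  parse_message_meta = function(self, data)
    -- same logic as the nested response.parse_meta handler
    if data.output.content == "" then
      data.output.content = nil
    end
    return data
  end
}
```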
### `request.build_parameters`

For the purposes of the OpenAI adapter, no additional parameters need to be created. So we just pass this through: