29 changes: 3 additions & 26 deletions examples/gpt-5/gpt-5-1-codex-max_prompting_guide.ipynb
@@ -163,7 +163,7 @@
"\n",
"The Codex model family uses reasoning summaries to communicate user updates as it’s working. This can be in the form of one-liner headings (which updates the ephemeral text in Codex-CLI), or both heading and a short body. This is done by a separate model and therefore is **not promptable**, and we advise against adding any instructions to the prompt related to intermediate plans or messages to the user. We’ve improved these summaries for Codex-Max to be more communicative and provide more critical information about what’s happening and why; some of our users are updating their UX to promote these summaries more prominently in their UI, similar to how intermediate messages are displayed for GPT-5 series models.\n",
"\n",
"## [Agents.md](http://Agents.md) Usage\n",
"## Using agents.md\n",
"\n",
"Codex-cli automatically enumerates these files and injects them into the conversation; the model has been trained to closely adhere to these instructions.\n",
"\n",
@@ -185,7 +185,7 @@
"\n",
"# Compaction\n",
"\n",
"Compaction unlocks \"infinite\" context, where user conversations can persist for many turns without hitting context window limits or long context performance degradation, and agents can perform very long trajectories that exceed a typical context window for long-running, complex tasks. A weaker version of this was previously possible with ad-hoc scaffolding and conversation summarization, but our first-class implementation, available via the Responses API, is integrated with the model and is highly performant.\n",
"Compaction unlocks significantly longer effective context windows, where user conversations can persist for many turns without hitting context window limits or long context performance degradation, and agents can perform very long trajectories that exceed a typical context window for long-running, complex tasks. A weaker version of this was previously possible with ad-hoc scaffolding and conversation summarization, but our first-class implementation, available via the Responses API, is integrated with the model and is highly performant.\n",
"\n",
"How it works:\n",
"\n",
@@ -195,24 +195,7 @@
" 2. The endpoint is ZDR compatible and will return an “encrypted\\_content” item that you can pass into future requests. \n",
"3. For subsequent calls to the /responses endpoint, you can pass your updated, compacted list of conversation items (including the added compaction item). The model retains key prior state with fewer conversation tokens.\n",
"\n",
"**Endpoint:** POST /v1/responses/compact\n",
"\n",
"* Field: model \n",
" * Type: string \n",
" * Required: required \n",
" * Notes: Use any Responses-compatible alias. \n",
"* Field: input \n",
" * Type: string or array of items \n",
" * Required: optional \n",
" * Notes: Provided messages (and optional tool/function items). Required unless `previous_response_id` is specified. \n",
"* Field: previous\\_response\\_id \n",
" * Type: string (response id) \n",
" * Required: optional \n",
" * Notes: Seed the run from an existing response; server hydrates its input items. \n",
"* Field: Instructions \n",
" * Type: string \n",
" * Required: optional \n",
" * Notes: Developer-style instructions forwarded to the compaction run.\n",
"For endpoint details see our `/responses/compact` [docs](https://platform.openai.com/docs/api-reference/responses/compact).\n",
"\n",
"# Tools\n",
"\n",
@@ -586,12 +569,6 @@
"* Limit to 10k tokens. You can cheaply approximate this by computing `num_bytes/4`. \n",
"* If you hit the truncation limit, you should use half of the budget for the beginning, half for the end, and truncate in the middle with `…3 tokens truncated…`\n"
]
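For concreteness, here is a minimal sketch of that middle-truncation rule, using the guide's `num_bytes/4` token approximation; the helper name and the exact marker formatting are our own illustration.

```python
def truncate_middle(output: str, max_tokens: int = 10_000) -> str:
    """Cap tool output at ~max_tokens, keeping the beginning and the end."""
    data = output.encode("utf-8")
    est_tokens = len(data) // 4  # the guide's cheap num_bytes/4 approximation
    if est_tokens <= max_tokens:
        return output

    half = (max_tokens * 4) // 2  # half of the byte budget for each side
    dropped = est_tokens - max_tokens
    head = data[:half].decode("utf-8", errors="ignore")   # ignore a split
    tail = data[-half:].decode("utf-8", errors="ignore")  # multibyte char
    return f"{head}…{dropped} tokens truncated…{tail}"
```

Calling `truncate_middle(big_tool_output)` keeps roughly the first and last 5k tokens and notes how many were dropped in between.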
-},
-{
-"cell_type": "markdown",
-"id": "5f957bd4",
-"metadata": {},
-"source": []
}
],
"metadata": {