
feat: add structured output tutorial (#14)#249

Open
builder-ujaladi wants to merge 1 commit into strands-agents:main from builder-ujaladi:feat/14-structured-output

Conversation


@builder-ujaladi commented Apr 18, 2026

Summary

Adds a new tutorial 14-structured-output to python/01-learn/ covering structured output with Strands Agents.

What's covered

  1. Flat Pydantic models — define a model, pass as structured_output_model, access result.structured_output (sync + async)
  2. Complex schemas — nested models, List[SubModel], Optional fields, Field validators, Enum/Literal constraints
  3. Validation & self-correction — automatic retry loop on validation failure, StructuredOutputException handling, custom structured_output_prompt
  4. Tools + structured output — calculator tool from strands-agents-tools combined with structured results
  5. Streaming — stream_async() with structured output (the structured result appears only in the final event)
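For readers skimming the PR, the flat-model flow in item 1 looks roughly like this. The commented `Agent` call paraphrases the `structured_output_model` / `result.structured_output` API described above and is not runnable here; `PersonInfo` and the sample data are illustrative, and the runnable half shows only the equivalent Pydantic validation step:

```python
from pydantic import BaseModel

# Illustrative flat model (field names are assumptions, not from the PR)
class PersonInfo(BaseModel):
    name: str
    age: int
    occupation: str

# With Strands, per the tutorial's described API (sketch, not runnable here):
#   agent = Agent(structured_output_model=PersonInfo)
#   result = agent("John Smith is a 30-year-old software engineer.")
#   person = result.structured_output  # a validated PersonInfo instance

# Under the hood this is equivalent to validating the model's JSON reply
# against the schema, which we can run directly:
raw_reply = {"name": "John Smith", "age": 30, "occupation": "software engineer"}
person = PersonInfo.model_validate(raw_reply)
print(person.name, person.age)  # typed attribute access instead of dict keys
```

The payoff is that downstream code gets a typed object with IDE completion and validation, not a loosely shaped dict.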

Files added

  • python/01-learn/14-structured-output/README.md
  • python/01-learn/14-structured-output/structured-output.ipynb
  • python/01-learn/14-structured-output/requirements.txt
  • python/01-learn/14-structured-output/images/architecture.png

Testing

  • Notebook executed end-to-end with nbconvert --execute in a clean venv — all 18 code cells pass with zero errors
  • Manual testing also completed
  • Model: Claude Sonnet 4.5 on Amazon Bedrock (us.anthropic.claude-sonnet-4-5-20250929-v1:0)

Add a step-by-step tutorial covering structured output with Strands Agents:
- Flat Pydantic models (sync + async)
- Complex schemas (nested, lists, optional, enums, constraints)
- Validation & self-correction (retry loop, exception handling, custom forcing prompt)
- Tools + structured output (calculator integration)
- Streaming with structured output
@builder-ujaladi force-pushed the feat/14-structured-output branch from 13234b2 to 6672e91 on April 18, 2026 at 19:09
@github-actions

Latest scan for commit: 6672e91 | Updated: 2026-04-19 18:37:38 UTC

✅ Security Scan Report (PR Files Only)

Scanned Files

  • python/01-learn/14-structured-output/README.md
  • python/01-learn/14-structured-output/images/architecture.png
  • python/01-learn/14-structured-output/requirements.txt
  • python/01-learn/14-structured-output/structured-output.ipynb

Security Scan Results

| Critical | High | Medium | Low | Info |
|----------|------|--------|-----|------|
| 0        | 0    | 0      | 0   | 0    |

Threshold: High

No security issues detected in your changes. Great job!

This scan only covers files changed in this PR.


## Architecture

![Structured Output Architecture](images/architecture.png)
Collaborator

The architecture diagram tries to show everything at once, which feels a bit overwhelming. Could you simplify it to represent just the core concept? You pass a model, you get a typed object back — that's the "aha moment."


@manoj-selvakumar5 left a comment


  • I'm wondering whether the streaming section makes sense as a standalone section, since there is no visible streaming happening. Would it make sense to fold it into the previous section as a note? As written, the section is really teaching that "structured output doesn't stream."

  • Across the tutorial, the code cells print results but don't explain what the developer should take away from the output. For example, in the constraints section the output shows Confidence: 95%, but without a callout there is no way to know immediately that the raw value is actually 0.95 rendered through :.0% formatting. The same pattern repeats in the self-correction and other sections: the output appears, and the notebook moves on. A short markdown cell after each output, connecting what the developer sees back to the concept being taught, would make a big difference. Something like: "Notice that X happened because of Y" or "Notice that sentiment is exactly one of the three allowed Literal values, not 'somewhat negative' or 'frustrated'."

  • The self-correction section claims the LLM is "likely to violate" the validator on its first attempt, but all three cells show only Tool #1: the LLM gets it right every time, so the field_validator never actually rejects anything. Could you try it differently so the LLM actually guesses and fails? For example, add a cell that prints agent.messages after the call to show the retry trail, or use a harder constraint the LLM is genuinely likely to violate.
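To make this concrete, here is a minimal self-contained sketch, plain Pydantic with no agent involved. The `Summary` model, its five-word constraint, and the sample values are all illustrative assumptions; it shows a field_validator strict enough to reject a plausible first attempt, plus the :.0% formatting point:

```python
from pydantic import BaseModel, ValidationError, field_validator

class Summary(BaseModel):
    headline: str
    confidence: float  # stored as 0-1, often displayed as a percentage

    @field_validator("headline")
    @classmethod
    def headline_must_be_short(cls, v: str) -> str:
        # A constraint an LLM plausibly violates on its first attempt
        if len(v.split()) > 5:
            raise ValueError("headline must be 5 words or fewer")
        return v

# A verbose first attempt fails validation -- in the notebook, this is
# exactly what triggers the agent's self-correction retry loop.
try:
    Summary(headline="This is a rather long and wordy headline", confidence=0.95)
except ValidationError as e:
    print("first attempt rejected:", e.errors()[0]["msg"])

# A corrected second attempt passes. Note the stored value is still 0.95;
# it renders as 95% only because of the :.0% format spec.
ok = Summary(headline="Quarterly revenue beat expectations", confidence=0.95)
print(f"Confidence: {ok.confidence:.0%}")  # prints "Confidence: 95%"
```

Printing agent.messages after a call like this would show the rejected attempt followed by the corrected one, which is the retry trail the reviewer is asking to surface.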
