feat: add structured output tutorial (#14) #249

builder-ujaladi wants to merge 1 commit into strands-agents:main
Conversation
Add a step-by-step tutorial covering structured output with Strands Agents:

- Flat Pydantic models (sync + async)
- Complex schemas (nested, lists, optional, enums, constraints)
- Validation & self-correction (retry loop, exception handling, custom forcing prompt)
- Tools + structured output (calculator integration)
- Streaming with structured output
Latest scan for commit: ✅ Security Scan Report (PR Files Only)
Threshold: High. No security issues detected in your changes. Great job! This scan only covers files changed in this PR.
> ## Architecture
The architecture diagram tries to show everything at once, which feels a bit overwhelming. Could you simplify it to represent just the core concept? You pass a model, you get a typed object back — that's the "aha moment."
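For reference, the "aha moment" is small enough to sketch in a few lines. The `Person` model below is hypothetical, and the commented agent call follows the `structured_output_model` / `result.structured_output` API as this PR describes it; only the Pydantic half actually runs here:

```python
from pydantic import BaseModel

class Person(BaseModel):
    name: str
    age: int

# With a Strands agent (per this PR's description) the call would look like:
#   result = agent("Extract: Jane Doe, 34", structured_output_model=Person)
#   person = result.structured_output   # a Person instance, not a string
# The essence: JSON the model emits is parsed straight into a typed object.
person = Person.model_validate({"name": "Jane Doe", "age": 34})
print(person.name, person.age)  # typed attribute access, no dict keys
```

That single in/out pair (model class in, typed instance out) is arguably all the simplified diagram needs to show.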
manoj-selvakumar5 left a comment
- I'm wondering if the streaming section makes sense as a standalone section, since there is no visible streaming happening. Would it make sense to add it as a note instead, combining it with, for example, the previous section? As written, the section is teaching "structured output doesn't stream."
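To make that note version concrete, the behavior being described can be mocked with the stdlib alone (no Strands imports; the event shapes here are hypothetical stand-ins):

```python
import asyncio

async def stream_async_mock():
    """Stand-in for an agent's stream_async(): text streams, structure doesn't."""
    # Text deltas stream as usual...
    for chunk in ("Analyzing", " input", "..."):
        yield {"data": chunk}
    # ...but the structured result exists only on the final event,
    # because it cannot be parsed until the JSON is complete.
    yield {"result": {"sentiment": "positive", "confidence": 0.95}}

async def main():
    structured = None
    async for event in stream_async_mock():
        if "data" in event:
            print(event["data"], end="")   # visible streaming: plain text only
        if "result" in event:
            structured = event["result"]   # appears once, at the end
    print()
    return structured

final = asyncio.run(main())
print(final)
```

A short note built around something like this would explain *why* nothing visibly streams, without needing a full section.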
- Across the tutorial, the code cells print results but don't explain what the developer should take away from the output. For example, in the constraints section the output shows Confidence: 95%, but without a callout there is no way to know immediately that the raw value is actually 0.95 displayed through :.0% formatting. The same pattern repeats in the self-correction and other sections: the output appears, and the notebook moves on. A short markdown cell after each output, connecting what the developer sees back to the concept being taught, would make a big difference. Something like: "Notice that X happened because of Y", or "Notice that `sentiment` is exactly one of the three allowed `Literal` values, not 'somewhat negative' or 'frustrated'."
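The :.0% point in particular is worth a one-line demo cell, since the format spec silently multiplies by 100:

```python
confidence = 0.95  # the raw value the validated model actually stores
print(f"Confidence: {confidence:.0%}")
# ":.0%" multiplies by 100, rounds to 0 decimal places, and appends "%",
# so 0.95 is displayed as "95%" even though the field holds a 0..1 fraction.
```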
- The self-correction section claims the LLM is "likely to violate" the validator on its first attempt, but all three cells show Tool #1 only: the LLM gets it right every time, and the field_validator never actually rejects anything. Could you set it up differently so the LLM guesses and fails? For example, add a cell that prints agent.messages after the call to show the retry trail, or use a harder constraint the LLM is genuinely likely to violate.
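On the retry mechanics, the loop being discussed can be sketched with the stdlib only. The "LLM" and validator below are stand-ins (Strands' real loop feeds the validation error text back to the model on each retry):

```python
def fake_llm(error_feedback=None):
    """Stand-in for the model: guesses wrong first, corrects when given feedback."""
    if error_feedback is None:
        return {"confidence": 95}      # first attempt: a percent, not a fraction
    return {"confidence": 0.95}        # corrected after seeing the error text

def validate(payload):
    """Stand-in for a field_validator: confidence must be a 0..1 fraction."""
    c = payload["confidence"]
    if not 0.0 <= c <= 1.0:
        raise ValueError(f"confidence must be between 0 and 1, got {c}")
    return payload

attempts = []
feedback = None
for attempt in range(3):               # bounded retry loop
    candidate = fake_llm(feedback)
    attempts.append(candidate)
    try:
        result = validate(candidate)
        break                          # validator accepted the output
    except ValueError as exc:
        feedback = str(exc)            # fed back so the model can self-correct

print(result, len(attempts))
```

A cell showing this shape of retry trail (first attempt rejected, second accepted) would demonstrate the self-correction claim instead of asserting it.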
Summary
Adds a new tutorial `14-structured-output` to `python/01-learn/`, covering structured output with Strands Agents.

What's covered

- Flat models: pass `structured_output_model`, access `result.structured_output` (sync + async)
- Complex schemas: `List[SubModel]`, `Optional` fields, `Field` validators, `Enum`/`Literal` constraints
- Validation & self-correction: `StructuredOutputException` handling, custom `structured_output_prompt`
- Tools: the `calculator` tool from strands-agents-tools combined with structured results
- Streaming: `stream_async()` with structured output (appears only in the final event)

Files added

- python/01-learn/14-structured-output/README.md
- python/01-learn/14-structured-output/structured-output.ipynb
- python/01-learn/14-structured-output/requirements.txt
- python/01-learn/14-structured-output/images/architecture.png

Testing

- `nbconvert --execute` in a clean venv — all 18 code cells pass with zero errors
- Model: us.anthropic.claude-sonnet-4-5-20250929-v1:0