
feat: generation hooks and streaming across all frameworks#327

Open
AlemTuzlak wants to merge 2 commits into `main` from `feat/generation-hooks-and-streaming`

Conversation


@AlemTuzlak AlemTuzlak commented Mar 3, 2026

Summary

  • Adds streamGenerationResult() and streamVideoGeneration() server-side helpers to @tanstack/ai for wrapping one-shot generation tasks as SSE-compatible StreamChunk iterables
  • Adds GenerationClient and VideoGenerationClient state machines to @tanstack/ai-client with support for both streaming (ConnectionAdapter) and direct (fetcher) transport modes
  • Adds framework hooks across React, Solid, Svelte, and Vue: useGenerateImage, useGenerateSpeech, useGenerateVideo, useTranscription, useSummarize (Svelte uses create* naming)
  • Adds InferGenerationOutput conditional type for type-safe onResult transforms without overloads
  • Adds a new "Generations" overview doc with Mermaid architecture diagram showing the server-to-client flow
  • Adds "Full-Stack Usage" sections to all 4 existing generation guide docs (image, speech, transcription, video)
  • Adds 5 demo pages + 5 server API routes to the React example app with sidebar navigation and index page quick links
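The wrapping pattern from the first bullet — running a one-shot task and exposing it as a stream of lifecycle chunks — can be sketched self-containedly. The chunk names below are illustrative; the library's actual `StreamChunk` shape may differ:

```typescript
// Illustrative chunk type: a started event, a result payload, and a
// finished/error terminator (mirrors the RUN_STARTED → RUN_FINISHED flow
// described in this PR, but is not the real StreamChunk definition).
type Chunk =
  | { type: 'RUN_STARTED' }
  | { type: 'RESULT'; data: unknown }
  | { type: 'RUN_FINISHED' }
  | { type: 'RUN_ERROR'; error: string }

// Wrap any one-shot async task as an async iterable of chunks, suitable
// for piping into an SSE response.
async function* streamOneShot<T>(task: () => Promise<T>): AsyncIterable<Chunk> {
  yield { type: 'RUN_STARTED' }
  try {
    const data = await task()
    yield { type: 'RESULT', data }
    yield { type: 'RUN_FINISHED' }
  } catch (err) {
    yield { type: 'RUN_ERROR', error: (err as Error).message }
  }
}
```

A server route would then iterate the chunks and serialize each one as an SSE event; the consuming client reduces them back into loading/result/error state.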

Test plan

  • All 858+ unit tests pass across ai, ai-client, ai-react, ai-solid, ai-svelte, ai-vue
  • Type checks pass for all packages (pnpm test:types)
  • Full pnpm test suite passes (31 projects)
  • Build succeeds for all 23 projects
  • Manually verified image generation demo works end-to-end in React example app

🤖 Generated with Claude Code

Summary by CodeRabbit

Release Notes

  • New Features
    • Added unified generations pattern for image, text-to-speech, transcription, summarization, and video generation.
    • Supports both streaming (Server-Sent Events) and direct fetcher modes for flexible deployment.
    • Added framework hooks for React, Vue, Svelte, and Solid enabling generation features across frameworks.
    • Added interactive example pages showcasing each generation capability.
    • Includes comprehensive documentation and server-side streaming utilities.

…anscription, and summarize

Adds a complete client-side generation layer across all framework integrations
(React, Solid, Svelte, Vue) for one-shot generation tasks: image generation,
text-to-speech, audio transcription, text summarization, and video generation.

Core changes:
- `streamGenerationResult()` and `streamVideoGeneration()` server helpers in `@tanstack/ai`
- `GenerationClient` and `VideoGenerationClient` state machines in `@tanstack/ai-client`
- Framework hooks: `useGenerateImage`, `useGenerateSpeech`, `useGenerateVideo`,
  `useTranscription`, `useSummarize` (and Svelte `create*` equivalents)
- `InferGenerationOutput` type for type-safe `onResult` transforms
- New "Generations" overview doc with Mermaid architecture diagram
- Full-stack usage sections added to all 4 generation guide docs
- React example app with 5 demo pages and server API routes

coderabbitai bot commented Mar 3, 2026

📝 Walkthrough

Walkthrough

Introduces a unified "generations" pattern for non-chat AI activities with streaming and direct transport modes. Adds core GenerationClient and VideoGenerationClient classes, corresponding hooks/composables across React, Vue, Solid, and Svelte frameworks, server-side streaming utilities, API routes for image/speech/transcription/video/summarization, UI components, and comprehensive documentation guides.

Changes

Cohort / File(s) Summary
Configuration & Build
.claude/settings.json, .gitignore, knip.json, nx.json
Added Claude AI settings with allowed permissions; minor gitignore and build configuration adjustments (increased Nx parallelism to 15).
Documentation Guides
docs/guides/generations.md, docs/guides/image-generation.md, docs/guides/text-to-speech.md, docs/guides/transcription.md, docs/guides/video-generation.md, docs/guides/tree-shaking.md, docs/guides/migration.md
New comprehensive guide on generations pattern with detailed transport modes (streaming/direct), hook APIs, and result transformation; expanded existing generation guides with full-stack usage examples; updated document ordering.
Core Generation Client
packages/typescript/ai-client/src/generation-client.ts, packages/typescript/ai-client/src/generation-types.ts, packages/typescript/ai-client/src/video-generation-client.ts, packages/typescript/ai-client/src/index.ts
New GenerationClient class supporting streaming and fetcher transport modes with lifecycle hooks and state management; new VideoGenerationClient class for job-based polling; comprehensive type definitions for generation inputs, outputs, and client options; public API exports.
Generation Client Tests
packages/typescript/ai-client/tests/generation-client.test.ts, packages/typescript/ai-client/tests/video-generation-client.test.ts
Comprehensive test suites for both transport modes (streaming and fetcher), error handling, state transitions, result transformation, concurrent operation guards, and lifecycle management.
Server-Side Generation Streaming
packages/typescript/ai/src/stream-generation.ts, packages/typescript/ai/src/index.ts, packages/typescript/ai/tests/stream-generation.test.ts
New utilities to wrap generation tasks into AsyncIterable<StreamChunk> for SSE/HTTP streaming; supports polling-based video generation with status updates and timeout handling; comprehensive test coverage.
React Framework
packages/typescript/ai-react/src/use-generation.ts, packages/typescript/ai-react/src/use-generate-*.ts, packages/typescript/ai-react/src/index.ts, packages/typescript/ai-react/tests/use-generation.test.ts
New React hooks (useGeneration, useGenerateImage, useGenerateSpeech, useTranscription, useSummarize, useGenerateVideo) wrapping GenerationClient with full lifecycle support and result transformation; extensive test coverage across fetcher and streaming modes.
Vue Framework
packages/typescript/ai-vue/src/use-generation.ts, packages/typescript/ai-vue/src/use-generate-*.ts, packages/typescript/ai-vue/src/index.ts, packages/typescript/ai-vue/tests/use-generation.test.ts
Equivalent Vue composables using Vue reactivity primitives (shallow refs, readonly wrappers); maintains same API surface as React hooks; comprehensive test suite.
Solid Framework
packages/typescript/ai-solid/src/use-generation.ts, packages/typescript/ai-solid/src/use-generate-*.ts, packages/typescript/ai-solid/src/index.ts, packages/typescript/ai-solid/tests/use-generation.test.ts
Solid.js reactive hooks using accessor signals and scope cleanup patterns; same generation functionality with Solid-specific reactivity model; full test coverage.
Svelte Framework
packages/typescript/ai-svelte/src/create-generation.svelte.ts, packages/typescript/ai-svelte/src/create-generate-*.ts, packages/typescript/ai-svelte/src/index.ts, packages/typescript/ai-svelte/tests/create-generation.test.ts
Svelte 5 rune-based helpers using reactive state; factory functions return reactive getters and control methods; comprehensive test coverage across all generation types.
Example Application Routes
examples/ts-react-chat/src/routes/api.generate.*.ts, examples/ts-react-chat/src/routes/api.transcribe.ts, examples/ts-react-chat/src/routes/api.summarize.ts, examples/ts-react-chat/src/routes/generations.*.tsx
Server-side API routes handling POST requests for image/speech/video/transcription/summarization with streaming via SSE; client-side route components with UI for each generation type, input handling, progress tracking, and result display.
Example Application Infrastructure
examples/ts-react-chat/src/components/Header.tsx, examples/ts-react-chat/src/routes/index.tsx, examples/ts-react-chat/src/routeTree.gen.ts
Updated navigation header with generation section and icons; welcome page showing generation demos; extended route tree definitions for all new API and generation routes.

Sequence Diagram(s)

sequenceDiagram
    participant Client as Client Component
    participant Hook as useGeneration Hook
    participant GC as GenerationClient
    participant Adapter as ConnectionAdapter/Fetcher
    participant Server as Server/API

    Client->>Hook: call generate(input)
    Hook->>GC: client.generate(input)
    
    alt Streaming Mode (SSE)
        GC->>Adapter: connect({body, input})
        Adapter->>Server: establish connection
        Server->>Adapter: emit RUN_STARTED
        Adapter->>GC: processStream([chunk])
        GC->>GC: setIsLoading(true)
        
        loop Poll Status
            Server->>Adapter: emit CUSTOM('generation:result', data)
            Adapter->>GC: processStream([chunk])
            GC->>GC: setResult(data)
        end
        
        Server->>Adapter: emit RUN_FINISHED
        Adapter->>GC: processStream([chunk])
        GC->>GC: setIsLoading(false)
        GC->>GC: setStatus('success')
    else Direct Mode (Fetcher)
        GC->>Adapter: fetcher(input)
        Adapter->>Server: POST request
        Server-->>Adapter: return TResult
        GC->>GC: setResult(TResult)
        GC->>GC: setIsLoading(false)
        GC->>GC: setStatus('success')
    end
    
    GC->>Hook: notify onResult/onResultChange
    Hook->>Client: reactive state updated
    Client->>Client: re-render with result
sequenceDiagram
    participant Client as Client Component
    participant Hook as useGenerateVideo Hook
    participant VGC as VideoGenerationClient
    participant Server as API Server
    participant Adapter as VideoAdapter

    Client->>Hook: call generate(input)
    Hook->>VGC: client.generate(input)
    VGC->>VGC: setIsLoading(true), setStatus('generating')
    
    alt Streaming Mode with Polling
        VGC->>Server: connect({body, input})
        Server->>Server: emit RUN_STARTED
        Server->>Adapter: adapter.createVideoJob()
        Adapter-->>Server: {jobId}
        Server->>Server: emit CUSTOM('video:job:created', {jobId})
        VGC->>VGC: setJobId(jobId), onJobCreated(jobId)
        
        loop Poll Every pollingInterval
            Server->>Adapter: adapter.getVideoStatus(jobId)
            Adapter-->>Server: {status, progress}
            Server->>Server: emit CUSTOM('video:status', {...})
            VGC->>VGC: setVideoStatus({status, progress})
            VGC->>VGC: onStatusUpdate(videoStatus)
            
            alt Status = 'completed'
                Server->>Adapter: adapter.getVideoUrl(jobId)
                Adapter-->>Server: url
                Server->>Server: emit CUSTOM('generation:result', {url})
                VGC->>VGC: setResult({url, ...})
                Server->>Server: emit RUN_FINISHED
            end
        end
    else Direct Mode (Fetcher)
        VGC->>Adapter: fetcher(input)
        Adapter->>Adapter: polling loop
        Adapter-->>VGC: {url, status, jobId}
        VGC->>VGC: setResult(result)
    end
    
    VGC->>VGC: setIsLoading(false), setStatus('success')
    Hook->>Client: reactive state updated
    Client->>Client: render video player with result.url

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~70 minutes

Suggested reviewers

  • AlemTuzlak
  • jherr

Poem

🐰 A world of generations born anew,
Streaming results in a lovely hue!
From images, speech, and videos bright,
The rabbit hops through code both day and night.
With hooks and clients in harmony bound,
All frameworks celebrate this beautiful sound! 🎬✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)

  • Description check — ❓ Inconclusive. The PR description provides comprehensive details about the changes across multiple packages and includes a clear test plan with verification steps, but the description template's checklist items are left unchecked. Resolution: check the relevant boxes to confirm contribution-guide compliance and testing completion.
✅ Passed checks (2 passed)
  • Title check — ✅ Passed. The title "feat: generation hooks and streaming across all frameworks" clearly and accurately summarizes the main change: generation hooks and streaming functionality added across multiple UI frameworks.
  • Docstring coverage — ✅ Passed. Docstring coverage is 82.05%, above the required threshold of 80.00%.




nx-cloud bot commented Mar 3, 2026

View your CI Pipeline Execution ↗ for commit 1fc91d7

  • nx affected --targets=test:sherif,test:knip,tes... — ✅ Succeeded (3m 40s)
  • nx run-many --targets=build --exclude=examples/** — ✅ Succeeded (1m 17s)

☁️ Nx Cloud last updated this comment at 2026-03-03 14:00:15 UTC


pkg-pr-new bot commented Mar 3, 2026


@tanstack/ai

npm i https://pkg.pr.new/@tanstack/ai@327

@tanstack/ai-anthropic

npm i https://pkg.pr.new/@tanstack/ai-anthropic@327

@tanstack/ai-client

npm i https://pkg.pr.new/@tanstack/ai-client@327

@tanstack/ai-devtools-core

npm i https://pkg.pr.new/@tanstack/ai-devtools-core@327

@tanstack/ai-fal

npm i https://pkg.pr.new/@tanstack/ai-fal@327

@tanstack/ai-gemini

npm i https://pkg.pr.new/@tanstack/ai-gemini@327

@tanstack/ai-grok

npm i https://pkg.pr.new/@tanstack/ai-grok@327

@tanstack/ai-ollama

npm i https://pkg.pr.new/@tanstack/ai-ollama@327

@tanstack/ai-openai

npm i https://pkg.pr.new/@tanstack/ai-openai@327

@tanstack/ai-openrouter

npm i https://pkg.pr.new/@tanstack/ai-openrouter@327

@tanstack/ai-preact

npm i https://pkg.pr.new/@tanstack/ai-preact@327

@tanstack/ai-react

npm i https://pkg.pr.new/@tanstack/ai-react@327

@tanstack/ai-react-ui

npm i https://pkg.pr.new/@tanstack/ai-react-ui@327

@tanstack/ai-solid

npm i https://pkg.pr.new/@tanstack/ai-solid@327

@tanstack/ai-solid-ui

npm i https://pkg.pr.new/@tanstack/ai-solid-ui@327

@tanstack/ai-svelte

npm i https://pkg.pr.new/@tanstack/ai-svelte@327

@tanstack/ai-vue

npm i https://pkg.pr.new/@tanstack/ai-vue@327

@tanstack/ai-vue-ui

npm i https://pkg.pr.new/@tanstack/ai-vue-ui@327

@tanstack/preact-ai-devtools

npm i https://pkg.pr.new/@tanstack/preact-ai-devtools@327

@tanstack/react-ai-devtools

npm i https://pkg.pr.new/@tanstack/react-ai-devtools@327

@tanstack/solid-ai-devtools

npm i https://pkg.pr.new/@tanstack/solid-ai-devtools@327

commit: dbe6919


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 18

🧹 Nitpick comments (6)
packages/typescript/ai/tests/stream-generation.test.ts (2)

319-329: Remove unused pollCount variable.

The variable pollCount is declared and incremented but never read. This appears to be leftover from development.

🧹 Suggested fix
   it('should propagate error message from failed status', async () => {
     const adapter = createMockVideoAdapter()
-    let pollCount = 0
     adapter.getVideoStatus = vi.fn(async () => {
-      pollCount++
       return {
         jobId: 'job-123',
         status: 'failed' as const,
         error: 'Content policy violation',
       }
     })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai/tests/stream-generation.test.ts` around lines 319 -
329, Remove the unused pollCount variable and its increment in the test "should
propagate error message from failed status": delete the declaration "let
pollCount = 0" and the "pollCount++" inside the mocked adapter.getVideoStatus
implementation (the mock created via createMockVideoAdapter and vi.fn), as
pollCount is never read.

1-6: Sort imports alphabetically per ESLint configuration.

🔧 Suggested fix
-import { describe, it, expect, vi } from 'vitest'
+import { describe, expect, it, vi } from 'vitest'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai/tests/stream-generation.test.ts` around lines 1 - 6,
Reorder the import statements and their specifiers to be alphabetically sorted
per ESLint: sort the three import lines by their module specifier (e.g.,
'../src/stream-generation', '../src/types', 'vitest' in alphabetical order) and
also alphabetize the named imports inside each import (ensure
streamGenerationResult comes before streamVideoGeneration and StreamChunk before
VideoStatusResult). Update the import block in the test file to reflect this
ordering so it satisfies the lint rule.
examples/ts-react-chat/src/routes/api.generate.image.ts (1)

2-6: Sort imports alphabetically per ESLint configuration.

The static analysis flags that generateImage should come after streamGenerationResult alphabetically.

🔧 Suggested fix
 import {
-  streamGenerationResult,
   generateImage,
+  streamGenerationResult,
   toServerSentEventsResponse,
 } from '@tanstack/ai'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/ts-react-chat/src/routes/api.generate.image.ts` around lines 2 - 6,
Reorder the named imports in the import statement so they are alphabetized:
place generateImage after streamGenerationResult and before
toServerSentEventsResponse (i.e., import streamGenerationResult, generateImage,
toServerSentEventsResponse from '@tanstack/ai'); update the import line that
currently lists streamGenerationResult, generateImage,
toServerSentEventsResponse to match ESLint's alphabetical ordering.
examples/ts-react-chat/src/routes/generations.image.tsx (1)

91-101: Consider adding unique keys if image IDs are available.

Using array index as key works here since images are only appended and not reordered. However, if img.id or similar unique identifier is available from the API response, using that would be more robust.

💡 Optional improvement if images have unique IDs
           {result.images.map((img, i) => (
             <img
-              key={i}
+              key={img.id ?? i}
               src={img.url || `data:image/png;base64,${img.b64Json}`}
               alt={img.revisedPrompt || prompt}
               className="w-full rounded-lg border border-gray-700"
             />
           ))}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/ts-react-chat/src/routes/generations.image.tsx` around lines 91 -
101, The map uses the array index as the React key which is fragile; update the
key in the images mapping to use a unique identifier from the image object when
available (e.g., img.id or img.uuid) instead of i; locate the JSX that renders
result.images inside the component (the <img ... key={i} ... /> line) and
replace the index-based key with a conditional unique key such as key={img.id ??
img.uuid ?? i} so it falls back to index only if no unique id exists.
packages/typescript/ai-react/tests/use-generation.test.ts (1)

185-201: Avoid dangling async work in stop/cleanup tests

These tests start generate() but don’t await its completion after resolving the deferred promise. That can leave async work running past test boundaries and introduce flaky behavior.

✅ Suggested test hardening
-      act(() => {
-        result.current.generate({ prompt: 'test' })
-      })
+      let generatePromise!: Promise<void>
+      act(() => {
+        generatePromise = result.current.generate({ prompt: 'test' })
+      })
@@
-      resolvePromise!({ id: '1' })
+      resolvePromise!({ id: '1' })
+      await act(async () => {
+        await generatePromise
+      })

Also applies to: 240-254

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-react/tests/use-generation.test.ts` around lines 185 -
201, The test starts an async generation with result.current.generate({ prompt:
'test' }) and resolves its deferred promise via resolvePromise({ id: '1' }) but
does not wait for the generate() flow to finish, leaving async work dangling;
update the test to await the generation completing after calling resolvePromise
(for example, await waitFor(() => expect(result.current.isLoading).toBe(false))
or await act(async () => resolvePromise!({ id: '1' })) so that the generate/stop
lifecycle (generate(), stop(), isLoading, status) fully settles before the test
ends; ensure you apply the same change to the other occurrence around lines
240-254.
examples/ts-react-chat/src/routes/generations.video.tsx (1)

95-100: Clamp progress before applying it to CSS width

Line 99 applies videoStatus.progress directly into width. If the server emits out-of-range values, the bar can overflow/underflow visually. Clamp to [0, 100] before rendering.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/ts-react-chat/src/routes/generations.video.tsx` around lines 95 -
100, The progress value from videoStatus.progress is used directly in the style
width which can overflow if the server sends values <0 or >100; clamp it to the
[0,100] range before rendering (e.g., compute const clamped = Math.min(100,
Math.max(0, videoStatus.progress ?? 0)) or create a clamp helper) and use that
clamped value in the style template string for the inner progress div so the CSS
width is always valid.
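The clamping this comment asks for is a one-liner; a minimal sketch (helper name is illustrative):

```typescript
// Clamp a possibly out-of-range or missing progress value into [0, 100]
// before it reaches a CSS width.
const clampPercent = (value: number | undefined): number =>
  Math.min(100, Math.max(0, value ?? 0))

// Usage in the progress bar (videoStatus.progress per the review comment):
// style={{ width: `${clampPercent(videoStatus.progress)}%` }}
```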
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.claude/settings.json:
- Around line 7-9: The .claude/settings.json contains machine-specific allowlist
entries and a high-risk execution permission; remove entries such as
"Read(//c/Users/AlemTuzlak/.claude/**)",
"Read(//c/Users/AlemTuzlak/.agents/skills/frontend-design//**)", the absolute
repo worktree grep pattern, and any "Bash(pnpm install)" rule from the committed
file, and instead add those developer-specific paths/commands to
.claude/settings.local.json (which is in .gitignore); keep the shared
.claude/settings.json minimal and portable (only repo-relative, non-identifying
Read/Query rules) and ensure no broad Bash execution rules remain in the
committed file so it adheres to least-privilege.

In `@docs/guides/generations.md`:
- Around line 123-137: Add explicit language identifiers to the two fenced code
blocks that show the event sequences (the block containing "RUN_STARTED → ...
RUN_FINISHED" and the block containing "RUN_STARTED → ... RUN_ERROR") by
changing the opening triple-backtick to include "text" (e.g., ```text) for both
blocks so markdownlint MD040 is satisfied; update both occurrences in the
docs/guides/generations.md content where those event sequence blocks appear.

In `@docs/guides/transcription.md`:
- Around line 308-309: The example destructures the JSON root (const { audio,
language, model } = await request.json()) but the actual route handler in
api.transcribe.ts expects the payload under body.data; update the snippet to
extract these values from request.json().body.data (or adjust to read
request.json().body then access .data) so the variables audio, language, and
model come from the same shape the route implementation expects, and add a
simple existence check for body.data before using it.

In `@examples/ts-react-chat/src/routes/api.generate.video.ts`:
- Around line 9-15: Validate the incoming JSON before using request.json()
results: define a Zod schema for the expected payload (shape containing data: {
prompt: string, size?: string, duration?: number, model?: string }), use
schema.parse or safeParse to validate the parsed body, and if validation fails
return a 400 response; then destructure validated.data and pass those validated
values into openaiVideo(...) and streamVideoGeneration(...). Ensure you
reference the existing symbols (request.json(), body.data,
streamVideoGeneration, openaiVideo, and the fields model/prompt/size/duration)
so the adapter only receives validated inputs.
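The review proposes Zod; as a schema-library-free illustration of the same validation idea, a hand-rolled guard over the payload shape described above (field names follow the comment, everything else is hypothetical) might look like:

```typescript
// Expected payload shape per the review: { data: { prompt, size?, duration?, model? } }
interface VideoPayload {
  prompt: string
  size?: string
  duration?: number
  model?: string
}

// Returns the validated payload, or null so the route can respond with 400.
function parseVideoPayload(body: unknown): VideoPayload | null {
  if (typeof body !== 'object' || body === null) return null
  const data = (body as { data?: unknown }).data
  if (typeof data !== 'object' || data === null) return null
  const d = data as Record<string, unknown>
  if (typeof d.prompt !== 'string') return null
  if (d.size !== undefined && typeof d.size !== 'string') return null
  if (d.duration !== undefined && typeof d.duration !== 'number') return null
  if (d.model !== undefined && typeof d.model !== 'string') return null
  return d as unknown as VideoPayload
}
```

A Zod schema with `safeParse` would replace the body of this guard while keeping the same 400-on-failure control flow.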

In `@examples/ts-react-chat/src/routes/api.transcribe.ts`:
- Around line 2-6: The import specifiers from '@tanstack/ai' are not
alphabetically ordered; reorder them to satisfy the sort-imports rule by listing
the named imports in alphabetical order: generateTranscription,
streamGenerationResult, toServerSentEventsResponse (i.e., change the import line
to import { generateTranscription, streamGenerationResult,
toServerSentEventsResponse } from '@tanstack/ai'). Ensure there are no trailing
commas or duplicate imports after the change.

In `@examples/ts-react-chat/src/routes/generations.speech.tsx`:
- Around line 43-56: The labels are not associated with their form controls
which breaks accessibility; add explicit associations by giving the textarea
used in setText a unique id (e.g., textToSpeakId) and set the corresponding Text
label's htmlFor to that id, and likewise assign ids to the Voice control(s)
rendered in the "Voice" group (or the containing select/radio input) and set the
Voice label's htmlFor to match; update references around the textarea that uses
setText and isLoading and the Voice control rendering code so screen readers and
keyboard navigation correctly map labels to controls.
- Around line 14-23: The onResult handler creates blob URLs but never revokes
them, causing memory leaks; update the logic in onResult and the
reset()/component unmount path to call URL.revokeObjectURL on any previously
created audio URL before assigning a new one, store the current URL in the
component state so reset() can revoke it, and add a useEffect cleanup that
revokes the stored URL on unmount; reference the onResult function, the
generate() flow that triggers it, and the reset() method to ensure all created
URLs are revoked.

In `@examples/ts-react-chat/src/routes/generations.transcription.tsx`:
- Around line 17-21: The current conversion uses file.arrayBuffer(), then builds
a base64 string via new Uint8Array(buffer).reduce(...) and assigns dataUrl —
this causes O(n²) string concatenation; replace that whole path with
FileReader.readAsDataURL(file) (awaiting a Promise around the onload/onerror
events) to obtain the dataUrl directly and remove buffer/base64/reduce usage,
then use the resulting dataUrl wherever dataUrl is referenced.
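The reason the `reduce` approach is flagged: concatenating one character at a time builds ever-larger intermediate strings, which is O(n²) in the file size. In the browser, `FileReader.readAsDataURL` produces the data URL in one pass; a runtime-agnostic sketch of the single-pass encoding (using Node's `Buffer`, since `FileReader` is browser-only) illustrates the same idea:

```typescript
// One-pass bytes → data URL conversion. In the browser the equivalent is
// FileReader.readAsDataURL(file), which avoids materializing the bytes at all.
function bytesToDataUrl(bytes: Uint8Array, mime = 'audio/wav'): string {
  // Buffer performs the base64 encoding in a single pass over the bytes,
  // instead of O(n²) string concatenation.
  const base64 = Buffer.from(bytes).toString('base64')
  return `data:${mime};base64,${base64}`
}
```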

In `@examples/ts-react-chat/src/routes/generations.video.tsx`:
- Around line 46-54: Add an accessible label association: give the textarea an
explicit id (e.g., id="prompt") and update the <label> to use htmlFor="prompt"
so screen readers and form controls are correctly linked; keep the existing
value={prompt}, onChange={(e) => setPrompt(e.target.value)},
disabled={isLoading} and other props on the textarea (adjust the id string as
needed to avoid collisions).

In `@packages/typescript/ai-client/src/video-generation-client.ts`:
- Around line 155-167: The fetcher never gets the AbortSignal so long-running
fetches can't be cancelled; update the fetcher API and call site: change the
fetcher signature (where declared/typed) to accept an optional AbortSignal (e.g.
fetcher(input, signal?: AbortSignal)) and pass the signal from
generateWithFetcher into the call (replace this.fetcher(input) with
this.fetcher(input, signal)), then update any affected types/usages of fetcher
and handle errors/abort behavior in generateWithFetcher (check signal.aborted
before setResult and handle thrown AbortError accordingly).
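A self-contained sketch of the change this comment describes — the fetcher accepts an optional `AbortSignal` and `stop()` aborts the in-flight call. The class and method names mirror the review's wording but are illustrative, not the actual `VideoGenerationClient` implementation:

```typescript
type VideoFetcher<TInput, TResult> = (
  input: TInput,
  signal?: AbortSignal,
) => Promise<TResult>

class FetcherModeClient<TInput, TResult> {
  result: TResult | null = null
  status: 'idle' | 'loading' | 'success' | 'error' = 'idle'
  private controller: AbortController | null = null

  constructor(private fetcher: VideoFetcher<TInput, TResult>) {}

  async generate(input: TInput): Promise<void> {
    this.controller = new AbortController()
    const { signal } = this.controller
    this.status = 'loading'
    try {
      const result = await this.fetcher(input, signal)
      if (signal.aborted) {
        this.status = 'idle' // stopped while the fetch was in flight
        return
      }
      this.result = result
      this.status = 'success'
    } catch (err) {
      // fetch() rejects with an AbortError when its signal is aborted;
      // treat that as a clean stop rather than a failure.
      if ((err as Error).name === 'AbortError') {
        this.status = 'idle'
        return
      }
      this.status = 'error'
    }
  }

  stop(): void {
    this.controller?.abort()
  }
}
```

In the real client the fetcher would forward the signal into `fetch(url, { signal })` so the underlying HTTP request is actually cancelled.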

In `@packages/typescript/ai-client/tests/generation-client.test.ts`:
- Line 1: Reorder the named imports from 'vitest' to satisfy the sort-imports
rule: change the import line in generation-client.test.ts (currently "import {
describe, it, expect, vi } from 'vitest'") to alphabetically ordered members
"import { describe, expect, it, vi } from 'vitest'".

In `@packages/typescript/ai-client/tests/video-generation-client.test.ts`:
- Line 1: The named imports in the top-level import statement (describe, it,
expect, vi) are not alphabetized; update the import from "vitest" so the members
are in sorted order (describe, expect, it, vi) to satisfy the sort-imports lint
rule and re-run linting/tests to confirm the fix.

In `@packages/typescript/ai-react/src/use-generate-video.ts`:
- Around line 126-165: The useMemo inside useGenerateVideo currently throws
during render and only depends on clientId, so it won't recreate the
VideoGenerationClient when options.connection or options.fetcher change;
instead, stop throwing in the memo and either (A) create a safe
no-op/placeholder client inside useMemo when neither connection nor fetcher are
present, or (B) move the validation and throwing into the generate() method so
render never throws; also update the memo dependency array to include the
transport inputs used to build the client (e.g., reference
optionsRef.current.connection and optionsRef.current.fetcher or include
options.connection/options.fetcher) so a new VideoGenerationClient<TOutput> is
created when the transport changes (identify VideoGenerationClient, useMemo in
useGenerateVideo, optionsRef, and generate when making edits).

In `@packages/typescript/ai-react/src/use-generation.ts`:
- Around line 50-53: The UseGenerationReturn interface currently widens
generate's input to Record<string, any>, losing compile-time safety; update the
interface to accept a TInput generic (e.g., UseGenerationReturn<TOutput,
TInput>) and change the generate signature to generate: (input: TInput) =>
Promise<void> so it matches the internal generate implementation (the generate
function used in the hook at line ~156) and restores correct typing for wrappers
like useSummarize/useGenerateImage without casts.
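A sketch of the signature change this comment asks for: thread `TInput` through the return type instead of widening to `Record<string, any>`. The names follow the review, but the shapes are illustrative rather than the package's actual interfaces:

```typescript
// TInput flows from the hook's options into generate(), so wrappers keep
// full inference instead of accepting any object.
interface UseGenerationReturn<TInput, TOutput> {
  generate: (input: TInput) => Promise<void>
  readonly result: TOutput | null
}

function createGenerationReturn<TInput, TOutput>(
  run: (input: TInput) => Promise<TOutput>,
): UseGenerationReturn<TInput, TOutput> {
  let result: TOutput | null = null
  return {
    get result() {
      return result
    },
    async generate(input: TInput) {
      result = await run(input)
    },
  }
}
```

With `TInput` preserved, a wrapper typed as `createGenerationReturn<{ text: string }, string>` makes `generate({ wrong: true })` a compile-time error instead of silently widening.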

In `@packages/typescript/ai-solid/src/use-generate-video.ts`:
- Around line 133-168: The createMemo call in useGenerateVideo incorrectly
passes a dependency array ([clientId]) as the second argument; remove that
second argument so the call is just createMemo(() => { ... }) and let Solid
track dependencies by reading clientId and options inside the memo; ensure the
function that constructs VideoGenerationClient (references: createMemo,
clientId, options, VideoGenerationClient) remains unchanged otherwise.

In `@packages/typescript/ai-solid/src/use-generation.ts`:
- Around line 111-142: The createMemo call for the client variable incorrectly
passes a dependency array as a second argument; Solid's createMemo tracks
dependencies automatically and accepts an optional initial value (not a deps
array). Remove the second argument ([clientId]) from the createMemo invocation
so it only receives the factory function that builds client using clientOptions
and branches for options.connection/options.fetcher (leaving the
GenerationClient construction and error throw unchanged).

In `@packages/typescript/ai-svelte/tests/create-generation.test.ts`:
- Around line 304-308: The speech test fixtures use mockResult objects that
don't include the required TTSResult.id property; update each test fixture (the
mockResult declarations around create-generation.test.ts lines where mockResult
is defined and the other occurrences at the noted ranges) to include a valid id
string (e.g., id: 'mock-tts-id') so the mocked payload matches the full
TTSResult shape used by functions under test (ensure all three
occurrences—initial mockResult and the other two fixtures—are updated).

In `@packages/typescript/ai-vue/src/use-generation.ts`:
- Around line 51-54: Update the UseGenerationReturn interface to preserve the
TInput generic so callers keep compile-time safety: add a TInput type parameter
to UseGenerationReturn and change the generate signature from generate: (input:
Record<string, any>) => Promise<void> to generate: (input: TInput) =>
Promise<void>; then adjust the useGeneration function's return type to
UseGenerationReturn<TInput, TOutput, TParams> (or matching generic arity) and
remove the unsafe cast at the site where generate is returned so the real TInput
flows through without widening to Record<string, any>.
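A minimal sketch of the suggested change, using illustrative names rather than the package's real API — the point is that the TInput generic survives into the return type so no cast is needed:

```typescript
// Hypothetical shape: keeping TInput in the return type gives callers
// compile-time checking on generate()'s argument instead of the widened
// Record<string, any>.
interface UseGenerationReturn<TInput, TOutput> {
  generate: (input: TInput) => Promise<void>
  result: TOutput | null
}

function makeGeneration<TInput, TOutput>(
  run: (input: TInput) => Promise<TOutput>,
): UseGenerationReturn<TInput, TOutput> {
  const state: UseGenerationReturn<TInput, TOutput> = {
    result: null,
    // No unsafe cast needed: the real TInput flows through the return type.
    generate: async (input: TInput) => {
      state.result = await run(input)
    },
  }
  return state
}
```

With this shape, `makeGeneration(async (input: { text: string }) => ...)` rejects `generate({ wrongKey: 1 })` at compile time.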

---

Nitpick comments:
In `@examples/ts-react-chat/src/routes/api.generate.image.ts`:
- Around line 2-6: Reorder the named imports in the import statement so they are
alphabetized: place generateImage after streamGenerationResult and before
toServerSentEventsResponse (i.e., import streamGenerationResult, generateImage,
toServerSentEventsResponse from '@tanstack/ai'); update the import line that
currently lists streamGenerationResult, generateImage,
toServerSentEventsResponse to match ESLint's alphabetical ordering.

In `@examples/ts-react-chat/src/routes/generations.image.tsx`:
- Around line 91-101: The map uses the array index as the React key which is
fragile; update the key in the images mapping to use a unique identifier from
the image object when available (e.g., img.id or img.uuid) instead of i; locate
the JSX that renders result.images inside the component (the <img ... key={i}
... /> line) and replace the index-based key with a conditional unique key such
as key={img.id ?? img.uuid ?? i} so it falls back to index only if no unique id
exists.
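The fallback described above can be captured in a small helper; the `id`/`uuid` field names are taken from the review and assumed, not verified against the actual image type:

```typescript
// Hypothetical image shape — only `url` is taken as certain in this sketch.
interface GeneratedImage {
  id?: string
  uuid?: string
  url: string
}

// Prefer a stable unique identifier for the React key; fall back to the
// array index only when no id is available.
function imageKey(img: GeneratedImage, index: number): string | number {
  return img.id ?? img.uuid ?? index
}
```

In the JSX this becomes `key={imageKey(img, i)}` instead of `key={i}`.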

In `@examples/ts-react-chat/src/routes/generations.video.tsx`:
- Around line 95-100: The progress value from videoStatus.progress is used
directly in the style width which can overflow if the server sends values <0 or
>100; clamp it to the [0,100] range before rendering (e.g., compute const
clamped = Math.min(100, Math.max(0, videoStatus.progress ?? 0)) or create a
clamp helper) and use that clamped value in the style template string for the
inner progress div so the CSS width is always valid.
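As a sketch, the clamp helper described above:

```typescript
// Clamp a server-reported progress value into [0, 100] so the rendered CSS
// width is always valid, treating a missing value as 0.
function clampProgress(progress: number | undefined): number {
  return Math.min(100, Math.max(0, progress ?? 0))
}
```

The inner progress div can then use `` width: `${clampProgress(videoStatus.progress)}%` `` safely.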

In `@packages/typescript/ai-react/tests/use-generation.test.ts`:
- Around line 185-201: The test starts an async generation with
result.current.generate({ prompt: 'test' }) and resolves its deferred promise
via resolvePromise({ id: '1' }) but does not wait for the generate() flow to
finish, leaving async work dangling; update the test to await the generation
completing after calling resolvePromise (for example, await waitFor(() =>
expect(result.current.isLoading).toBe(false)) or await act(async () =>
resolvePromise!({ id: '1' })) so that the generate/stop lifecycle (generate(),
stop(), isLoading, status) fully settles before the test ends; ensure you apply
the same change to the other occurrence around lines 240-254.
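The dangling-work problem can be illustrated outside React; this is a standalone sketch of the deferred pattern the test uses, not the actual test code:

```typescript
// Minimal deferred: exposes resolve so a test can finish the promise later.
function createDeferred<T>() {
  let resolve!: (value: T) => void
  const promise = new Promise<T>((r) => {
    resolve = r
  })
  return { promise, resolve }
}

async function demo(): Promise<boolean> {
  const deferred = createDeferred<{ id: string }>()
  let isLoading = true
  // Simulates the hook's generate() flow: await the fetcher, then clear loading.
  const generation = deferred.promise.then(() => {
    isLoading = false
  })
  deferred.resolve({ id: '1' })
  // Without awaiting here (the analogue of waitFor/act in the real test),
  // the test would end while isLoading is still true and the work dangles.
  await generation
  return isLoading
}
```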

In `@packages/typescript/ai/tests/stream-generation.test.ts`:
- Around line 319-329: Remove the unused pollCount variable and its increment in
the test "should propagate error message from failed status": delete the
declaration "let pollCount = 0" and the "pollCount++" inside the mocked
adapter.getVideoStatus implementation (the mock created via
createMockVideoAdapter and vi.fn), as pollCount is never read.
- Around line 1-6: Reorder the import statements and their specifiers to be
alphabetically sorted per ESLint: sort the three import lines by their module
specifier (e.g., '../src/stream-generation', '../src/types', 'vitest' in
alphabetical order) and also alphabetize the named imports inside each import
(ensure streamGenerationResult comes before streamVideoGeneration and
StreamChunk before VideoStatusResult). Update the import block in the test file
to reflect this ordering so it satisfies the lint rule.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 228e0e3 and 1fc91d7.

📒 Files selected for processing (65)
  • .claude/settings.json
  • .gitignore
  • docs/guides/generations.md
  • docs/guides/image-generation.md
  • docs/guides/migration.md
  • docs/guides/text-to-speech.md
  • docs/guides/transcription.md
  • docs/guides/tree-shaking.md
  • docs/guides/video-generation.md
  • examples/ts-react-chat/src/components/Header.tsx
  • examples/ts-react-chat/src/routeTree.gen.ts
  • examples/ts-react-chat/src/routes/api.generate.image.ts
  • examples/ts-react-chat/src/routes/api.generate.speech.ts
  • examples/ts-react-chat/src/routes/api.generate.video.ts
  • examples/ts-react-chat/src/routes/api.summarize.ts
  • examples/ts-react-chat/src/routes/api.transcribe.ts
  • examples/ts-react-chat/src/routes/generations.image.tsx
  • examples/ts-react-chat/src/routes/generations.speech.tsx
  • examples/ts-react-chat/src/routes/generations.summarize.tsx
  • examples/ts-react-chat/src/routes/generations.transcription.tsx
  • examples/ts-react-chat/src/routes/generations.video.tsx
  • examples/ts-react-chat/src/routes/index.tsx
  • knip.json
  • nx.json
  • packages/typescript/ai-client/src/generation-client.ts
  • packages/typescript/ai-client/src/generation-types.ts
  • packages/typescript/ai-client/src/index.ts
  • packages/typescript/ai-client/src/video-generation-client.ts
  • packages/typescript/ai-client/tests/generation-client.test.ts
  • packages/typescript/ai-client/tests/video-generation-client.test.ts
  • packages/typescript/ai-react/src/index.ts
  • packages/typescript/ai-react/src/use-generate-image.ts
  • packages/typescript/ai-react/src/use-generate-speech.ts
  • packages/typescript/ai-react/src/use-generate-video.ts
  • packages/typescript/ai-react/src/use-generation.ts
  • packages/typescript/ai-react/src/use-summarize.ts
  • packages/typescript/ai-react/src/use-transcription.ts
  • packages/typescript/ai-react/tests/use-generation.test.ts
  • packages/typescript/ai-solid/src/index.ts
  • packages/typescript/ai-solid/src/use-generate-image.ts
  • packages/typescript/ai-solid/src/use-generate-speech.ts
  • packages/typescript/ai-solid/src/use-generate-video.ts
  • packages/typescript/ai-solid/src/use-generation.ts
  • packages/typescript/ai-solid/src/use-summarize.ts
  • packages/typescript/ai-solid/src/use-transcription.ts
  • packages/typescript/ai-solid/tests/use-generation.test.ts
  • packages/typescript/ai-svelte/src/create-generate-image.svelte.ts
  • packages/typescript/ai-svelte/src/create-generate-speech.svelte.ts
  • packages/typescript/ai-svelte/src/create-generate-video.svelte.ts
  • packages/typescript/ai-svelte/src/create-generation.svelte.ts
  • packages/typescript/ai-svelte/src/create-summarize.svelte.ts
  • packages/typescript/ai-svelte/src/create-transcription.svelte.ts
  • packages/typescript/ai-svelte/src/index.ts
  • packages/typescript/ai-svelte/tests/create-generation.test.ts
  • packages/typescript/ai-vue/src/index.ts
  • packages/typescript/ai-vue/src/use-generate-image.ts
  • packages/typescript/ai-vue/src/use-generate-speech.ts
  • packages/typescript/ai-vue/src/use-generate-video.ts
  • packages/typescript/ai-vue/src/use-generation.ts
  • packages/typescript/ai-vue/src/use-summarize.ts
  • packages/typescript/ai-vue/src/use-transcription.ts
  • packages/typescript/ai-vue/tests/use-generation.test.ts
  • packages/typescript/ai/src/index.ts
  • packages/typescript/ai/src/stream-generation.ts
  • packages/typescript/ai/tests/stream-generation.test.ts

Comment on lines +7 to +9
"Read(//c/Users/AlemTuzlak/.claude/**)",
"Read(//c/Users/AlemTuzlak/.agents/skills/frontend-design//**)",
"Bash(grep -n \"keyed\\\\|createMemo\" /c/Users/AlemTuzlak/projects/ts-poc/.claude/worktrees/distributed-weaving-rocket/packages/typescript/ai-devtools/src/components/conversation/*.tsx)",

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

git ls-files | grep -E "\.claude|\.gitignore" | head -20

Repository: TanStack/ai

Length of output: 442


🏁 Script executed:

if [ -f ".claude/settings.json" ]; then
  echo "=== .claude/settings.json exists ==="
  wc -l ".claude/settings.json"
  cat -n ".claude/settings.json"
else
  echo ".claude/settings.json does not exist"
fi

Repository: TanStack/ai

Length of output: 3194


🏁 Script executed:

if [ -f ".gitignore" ]; then
  echo "=== .gitignore content (relevant lines) ==="
  grep -n "\.claude\|settings\." ".gitignore" || echo "No matches found"
else
  echo ".gitignore does not exist"
fi

Repository: TanStack/ai

Length of output: 206


Remove machine-specific and high-risk permissions from committed settings.

This committed allowlist contains user-identifying absolute paths (lines 7–9, 11, 43) and a broad execution permission (Bash(pnpm install) at line 48); these entries are non-portable and violate least-privilege. Keep repo-shared rules minimal, and move developer-specific paths and commands to .claude/settings.local.json, which is already in .gitignore for this purpose.

🔧 Suggested hardening diff
 {
   "permissions": {
     "allow": [
       "Bash(pnpm test:lib)",
       "Bash(pnpm test:eslint)",
       "Bash(xargs grep -l \"devtools-event-client\")",
-      "Read(//c/Users/AlemTuzlak/.claude/**)",
-      "Read(//c/Users/AlemTuzlak/.agents/skills/frontend-design//**)",
-      "Bash(grep -n \"keyed\\\\|createMemo\" /c/Users/AlemTuzlak/projects/ts-poc/.claude/worktrees/distributed-weaving-rocket/packages/typescript/ai-devtools/src/components/conversation/*.tsx)",
       "Bash(xargs grep -l \"onConfig\\\\|onIterationStart\\\\|onIteration\")",
-      "Bash(grep -n \"^.*\\\\\\(onConfig\\\\|onIterationStart\\\\|onIteration\\\\\\)\" /c/Users/AlemTuzlak/projects/ts-poc/packages/typescript/ai/src/**/*.ts)",
       "Bash(npx `@tanstack/router-cli` generate)",
       "Bash(xargs grep -l \"generate\\\\|image\\\\|video\\\\|audio\")",
       "Bash(pnpm --filter `@tanstack/ai` build)",
       ...rest of repo-portable entries...
-      "Read(//c/Users/AlemTuzlak/projects/ts-react-chat/src/routes/**)",
       "Bash(npx tsr generate)",
       "Bash(npx nx run ts-react-media:build)",
       "Bash(npx nx run `@tanstack/ai`:test:types)",
       "Bash(npx nx run `@tanstack/ai`:test:types --verbose)",
-      "Bash(pnpm install)"
     ]
   }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/settings.json around lines 7 - 9, The .claude/settings.json contains
machine-specific allowlist entries and a high-risk execution permission; remove
entries such as "Read(//c/Users/AlemTuzlak/.claude/**)",
"Read(//c/Users/AlemTuzlak/.agents/skills/frontend-design//**)", the absolute
repo worktree grep pattern, and any "Bash(pnpm install)" rule from the committed
file, and instead add those developer-specific paths/commands to
.claude/settings.local.json (which is in .gitignore); keep the shared
.claude/settings.json minimal and portable (only repo-relative, non-identifying
Read/Query rules) and ensure no broad Bash execution rules remain in the
committed file so it adheres to least-privilege.

Comment on lines +123 to +137
```
1. RUN_STARTED          → Client sets status to 'generating'
2. CUSTOM               → Client receives the result
   name: 'generation:result'
   value: <your result>
3. RUN_FINISHED         → Client sets status to 'success'
```

If the function throws, a `RUN_ERROR` event is emitted instead:

```
1. RUN_STARTED → Client sets status to 'generating'
2. RUN_ERROR   → Client sets error + status to 'error'
   error: { message: '...' }
```

⚠️ Potential issue | 🟡 Minor

Add explicit language to fenced code blocks

Line 123 and Line 133 open fenced blocks without language identifiers, which trips markdownlint MD040.

📝 Proposed fix
-```
+```text
 1. RUN_STARTED          → Client sets status to 'generating'
 2. CUSTOM               → Client receives the result
    name: 'generation:result'
    value: <your result>
 3. RUN_FINISHED         → Client sets status to 'success'

@@
-```
+```text

 1. RUN_STARTED → Client sets status to 'generating'
 2. RUN_ERROR   → Client sets error + status to 'error'
    error: { message: '...' }
🧰 Tools
🪛 markdownlint-cli2 (0.21.0)

[warning] 123-123: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


[warning] 133-133: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/guides/generations.md` around lines 123 - 137, Add explicit language
identifiers to the two fenced code blocks that show the event sequences (the
block containing "RUN_STARTED → ... RUN_FINISHED" and the block containing
"RUN_STARTED → ... RUN_ERROR") by changing the opening triple-backtick to
include "text" (e.g., ```text) for both blocks so markdownlint MD040 is
satisfied; update both occurrences in the docs/guides/generations.md content
where those event sequence blocks appear.

Comment on lines +308 to +309
const { audio, language, model } = await request.json()


⚠️ Potential issue | 🟠 Major

Server snippet payload shape doesn’t match route implementation

Line 308 destructures the JSON root directly, but the route in examples/ts-react-chat/src/routes/api.transcribe.ts expects values under body.data. The sample will fail as written against the documented route pattern.
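A hedged sketch of the corrected snippet — the body.data envelope shape is taken from the review's description of api.transcribe.ts, not verified here:

```typescript
// Read the payload from body.data (the shape the route expects) rather than
// the JSON root, guarding against a missing data envelope.
async function readTranscribeBody(request: Request) {
  const body = await request.json()
  if (!body?.data) {
    throw new Error('Missing data in request body')
  }
  const { audio, language, model } = body.data
  return { audio, language, model }
}
```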

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/guides/transcription.md` around lines 308 - 309, The example
destructures the JSON root (const { audio, language, model } = await
request.json()) but the actual route handler in api.transcribe.ts expects the
payload under body.data; update the snippet to extract these values from
request.json().body.data (or adjust to read request.json().body then access
.data) so the variables audio, language, and model come from the same shape the
route implementation expects, and add a simple existence check for body.data
before using it.

Comment on lines +9 to +15
const body = await request.json()
const { prompt, size, duration, model } = body.data

const stream = streamVideoGeneration(
  openaiVideo(model ?? 'sora-2'),
  { prompt, size, duration },
  { pollingInterval: 3000, maxDuration: 600_000 },

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's check if the file exists and read its current content
cat -n examples/ts-react-chat/src/routes/api.generate.video.ts | head -30

Repository: TanStack/ai

Length of output: 897


🏁 Script executed:

# Check if Zod is already installed in the project
cat package.json | grep -A 50 '"dependencies"' | head -60

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

# Check for similar API routes in examples to see if they use Zod validation
fd -e ts -e tsx examples/ -exec grep -l "request.json()" {} \;

Repository: TanStack/ai

Length of output: 304


🏁 Script executed:

# Check if there's any existing validation pattern in similar example files
find examples -name "api*.ts" -type f | head -10

Repository: TanStack/ai

Length of output: 507


🏁 Script executed:

# Let's properly check package.json for Zod
grep -i zod package.json

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

# Check the full dependencies section
head -100 package.json | tail -50

Repository: TanStack/ai

Length of output: 2513


🏁 Script executed:

# Check other API route files to see validation patterns
cat -n examples/ts-react-chat/src/routes/api.summarize.ts

Repository: TanStack/ai

Length of output: 989


🏁 Script executed:

# Check another API file for comparison
cat -n examples/ts-react-chat/src/routes/api.generate.image.ts | head -30

Repository: TanStack/ai

Length of output: 1006


🏁 Script executed:

# Check the api-verification.ts file mentioned - seems relevant
cat -n examples/ts-react-chat/api-verification.ts

Repository: TanStack/ai

Length of output: 71


🏁 Script executed:

# Check if examples have their own package.json with Zod
cat -n examples/ts-react-chat/package.json | head -50

Repository: TanStack/ai

Length of output: 2062


🏁 Script executed:

# Check if Zod is used anywhere in the examples
rg -i "zod" examples/

Repository: TanStack/ai

Length of output: 1455


🏁 Script executed:

# Check if there are any toolDefinition usages in the example
rg "toolDefinition" examples/

Repository: TanStack/ai

Length of output: 3273


🏁 Script executed:

# Check the main packages to see where Zod is used
rg "from.*zod" packages/typescript/ | head -20

Repository: TanStack/ai

Length of output: 1865


🏁 Script executed:

# Check if there are API routes in the main packages that use Zod for validation
rg "request\.json()" packages/typescript/ -A 5 | head -40

Repository: TanStack/ai

Length of output: 2767


🏁 Script executed:

# Check the actual guideline definition to understand scope
cat CLAUDE.md | grep -A 10 -B 2 "toolDefinition"

Repository: TanStack/ai

Length of output: 1362


🏁 Script executed:

# Check if examples are explicitly excluded from certain guidelines
cat CLAUDE.md | grep -i "examples"

Repository: TanStack/ai

Length of output: 438


🏁 Script executed:

# Look at one of the guitar-tools files to see how Zod is used with toolDefinition
cat -n examples/ts-react-chat/src/lib/guitar-tools.ts | head -40

Repository: TanStack/ai

Length of output: 1666


Validate request payload with Zod before adapter invocation

Lines 9–15 parse request.json() and destructure body.data without runtime validation. This can crash on malformed input and forward unchecked values to openaiVideo().

🛡️ Proposed fix
 import { createFileRoute } from '@tanstack/react-router'
 import { streamVideoGeneration, toServerSentEventsResponse } from '@tanstack/ai'
 import { openaiVideo } from '@tanstack/ai-openai'
+import { z } from 'zod'
+
+const generateVideoRequestSchema = z.object({
+  data: z.object({
+    prompt: z.string().min(1),
+    size: z.string().optional(),
+    duration: z.number().int().positive().optional(),
+    model: z.string().optional(),
+  }),
+})
@@
       POST: async ({ request }) => {
-        const body = await request.json()
-        const { prompt, size, duration, model } = body.data
+        const parsed = generateVideoRequestSchema.safeParse(await request.json())
+        if (!parsed.success) {
+          return new Response(
+            JSON.stringify({ error: 'Invalid request payload' }),
+            {
+              status: 400,
+              headers: { 'content-type': 'application/json' },
+            },
+          )
+        }
+
+        const { prompt, size, duration, model } = parsed.data.data

Per coding guidelines, use Zod for runtime schema validation in TypeScript files. This applies to request handling in server routes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/ts-react-chat/src/routes/api.generate.video.ts` around lines 9 - 15,
Validate the incoming JSON before using request.json() results: define a Zod
schema for the expected payload (shape containing data: { prompt: string, size?:
string, duration?: number, model?: string }), use schema.parse or safeParse to
validate the parsed body, and if validation fails return a 400 response; then
destructure validated.data and pass those validated values into openaiVideo(...)
and streamVideoGeneration(...). Ensure you reference the existing symbols
(request.json(), body.data, streamVideoGeneration, openaiVideo, and the fields
model/prompt/size/duration) so the adapter only receives validated inputs.

Comment on lines +2 to +6
import {
  streamGenerationResult,
  generateTranscription,
  toServerSentEventsResponse,
} from '@tanstack/ai'

⚠️ Potential issue | 🟡 Minor

Resolve import sort lint failure

The import members on lines 2–6 are not ordered per the configured sort-imports rule (static analysis reports this as an error).

🧰 Tools
🪛 ESLint

[error] 4-4: Member 'generateTranscription' of the import declaration should be sorted alphabetically.

(sort-imports)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/ts-react-chat/src/routes/api.transcribe.ts` around lines 2 - 6, The
import specifiers from '@tanstack/ai' are not alphabetically ordered; reorder
them to satisfy the sort-imports rule by listing the named imports in
alphabetical order: generateTranscription, streamGenerationResult,
toServerSentEventsResponse (i.e., change the import line to import {
generateTranscription, streamGenerationResult, toServerSentEventsResponse } from
'@tanstack/ai'). Ensure there are no trailing commas or duplicate imports after
the change.

Comment on lines +50 to +53
export interface UseGenerationReturn<TOutput> {
  /** Trigger a generation request */
  generate: (input: Record<string, any>) => Promise<void>
  /** The generation result, or null if not yet generated */

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cd packages/typescript/ai-react/src && wc -l use-generation.ts

Repository: TanStack/ai

Length of output: 76


🏁 Script executed:

cd packages/typescript/ai-react/src && cat -n use-generation.ts | sed -n '40,60p'

Repository: TanStack/ai

Length of output: 939


🏁 Script executed:

cd packages/typescript/ai-react/src && cat -n use-generation.ts | sed -n '85,105p'

Repository: TanStack/ai

Length of output: 973


🏁 Script executed:

cd packages/typescript/ai-react/src && cat -n use-generation.ts | sed -n '160,180p'

Repository: TanStack/ai

Length of output: 500


🏁 Script executed:

cd packages/typescript/ai-react/src && cat -n use-generation.ts | sed -n '1,40p'

Repository: TanStack/ai

Length of output: 2051


🏁 Script executed:

cd packages/typescript/ai-react/src && cat -n use-generation.ts | sed -n '130,175p'

Repository: TanStack/ai

Length of output: 1302


🏁 Script executed:

cd packages/typescript/ai-react && rg "UseGenerationReturn" -n

Repository: TanStack/ai

Length of output: 259


🏁 Script executed:

cd packages/typescript/ai-react/src && rg "useGeneration" -n | head -20

Repository: TanStack/ai

Length of output: 1172


🏁 Script executed:

cd packages/typescript/ai-react/src && cat -n use-summarize.ts | sed -n '95,110p'

Repository: TanStack/ai

Length of output: 642


🏁 Script executed:

cd packages/typescript/ai-react/src && cat -n use-generate-image.ts | sed -n '100,115p'

Repository: TanStack/ai

Length of output: 570


🏁 Script executed:

cd packages/typescript/ai-react/src && rg "UseGenerationReturn|UseGenerateImageReturn|UseSummarizeReturn" -n | head -20

Repository: TanStack/ai

Length of output: 717


🏁 Script executed:

cd packages/typescript/ai-react/src && cat -n use-generate-image.ts | sed -n '1,30p'

Repository: TanStack/ai

Length of output: 1455


🏁 Script executed:

cd packages/typescript/ai-react/src && cat -n use-summarize.ts | sed -n '40,55p'

Repository: TanStack/ai

Length of output: 712


Add TInput generic to UseGenerationReturn to restore type-safety for direct hook consumers.

Line 52 and line 171 widen the generate function input to Record<string, any>, causing direct useGeneration<TInput, ...> calls to lose input type validation at compile time. The internal generate function at line 156 correctly accepts TInput, but the return interface discards this information, forcing wrapper hooks (useSummarize, useGenerateImage) to cast the result back to specific input types (lines 105, 108).

♻️ Proposed fix
-export interface UseGenerationReturn<TOutput> {
+export interface UseGenerationReturn<
+  TOutput,
+  TInput extends Record<string, any> = Record<string, any>,
+> {
   /** Trigger a generation request */
-  generate: (input: Record<string, any>) => Promise<void>
+  generate: (input: TInput) => Promise<void>
   /** The generation result, or null if not yet generated */
   result: TOutput | null
   /** Whether a generation is currently in progress */
   isLoading: boolean
@@
-): UseGenerationReturn<InferGenerationOutput<TResult, TOnResult>> {
+): UseGenerationReturn<InferGenerationOutput<TResult, TOnResult>, TInput> {
@@
   return {
-    generate: generate as (input: Record<string, any>) => Promise<void>,
+    generate,
     result,
     isLoading,
     error,

Per coding guidelines: "Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-react/src/use-generation.ts` around lines 50 - 53, The
UseGenerationReturn interface currently widens generate's input to
Record<string, any>, losing compile-time safety; update the interface to accept
a TInput generic (e.g., UseGenerationReturn<TOutput, TInput>) and change the
generate signature to generate: (input: TInput) => Promise<void> so it matches
the internal generate implementation (the generate function used in the hook at
line ~156) and restores correct typing for wrappers like
useSummarize/useGenerateImage without casts.

Comment on lines +133 to +168
const client = createMemo(() => {
  const baseOptions = {
    id: clientId,
    body: options.body,
    onResult: (r: VideoGenerateResult) => options.onResult?.(r),
    onError: (e: Error) => options.onError?.(e),
    onProgress: (p: number, m?: string) => options.onProgress?.(p, m),
    onChunk: (c: StreamChunk) => options.onChunk?.(c),
    onJobCreated: (id: string) => options.onJobCreated?.(id),
    onStatusUpdate: (s: VideoStatusInfo) => options.onStatusUpdate?.(s),
    onResultChange: setResult,
    onLoadingChange: setIsLoading,
    onErrorChange: setError,
    onStatusChange: setStatus,
    onJobIdChange: setJobId,
    onVideoStatusChange: setVideoStatus,
  }

  if (options.connection) {
    return new VideoGenerationClient<TOutput>({
      ...baseOptions,
      connection: options.connection,
    })
  }

  if (options.fetcher) {
    return new VideoGenerationClient<TOutput>({
      ...baseOptions,
      fetcher: options.fetcher,
    })
  }

  throw new Error(
    'useGenerateVideo requires either a connection or fetcher option',
  )
}, [clientId])

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

SolidJS createMemo second argument dependency array API documentation

💡 Result:

SolidJS createMemo does not have a “dependency array” second argument (unlike React hooks). Dependencies are automatically tracked by whatever reactive values (signals/stores/memos) you read inside the memo function. [2]

Per the official API, the signature is:
createMemo(fn, value?, options?) [1]

  • 1st argument (fn): computes the memo; it receives the previous value as its parameter. [1]
  • 2nd argument (value?): initial value used for that “previous value” parameter on the first run (not dependencies). [1]
  • 3rd argument (options?): options like equals (and name). [1]

Example (showing the real meaning of the 2nd argument): [1]

const trend = createMemo(
  (prev) => ({ value: count(), was: prev.value }),
  { value: 0 } // initial "prev"
);

Sources: Solid Docs createMemo reference [1], Solid Docs memo concept page [2].


Remove the incorrect second argument from createMemo.

In SolidJS, createMemo takes createMemo(fn, initialValue?, options?) where the second argument is the initial value for the previous-value parameter, not a dependency array. Dependencies are automatically tracked by reading reactive values (clientId, options) inside the memo function. Passing [clientId] is incorrect and misleading.

Proposed fix
-  const client = createMemo(() => {
+  const client = createMemo(() => {
     const baseOptions = {
       id: clientId,
       body: options.body,
       onResult: (r: VideoGenerateResult) => options.onResult?.(r),
       onError: (e: Error) => options.onError?.(e),
       onProgress: (p: number, m?: string) => options.onProgress?.(p, m),
       onChunk: (c: StreamChunk) => options.onChunk?.(c),
       onJobCreated: (id: string) => options.onJobCreated?.(id),
       onStatusUpdate: (s: VideoStatusInfo) => options.onStatusUpdate?.(s),
       onResultChange: setResult,
       onLoadingChange: setIsLoading,
       onErrorChange: setError,
       onStatusChange: setStatus,
       onJobIdChange: setJobId,
       onVideoStatusChange: setVideoStatus,
     }

     if (options.connection) {
       return new VideoGenerationClient<TOutput>({
         ...baseOptions,
         connection: options.connection,
       })
     }

     if (options.fetcher) {
       return new VideoGenerationClient<TOutput>({
         ...baseOptions,
         fetcher: options.fetcher,
       })
     }

     throw new Error(
       'useGenerateVideo requires either a connection or fetcher option',
     )
-  }, [clientId])
+  })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-solid/src/use-generate-video.ts` around lines 133 -
168, The createMemo call in useGenerateVideo incorrectly passes a dependency
array ([clientId]) as the second argument; remove that second argument so the
call is just createMemo(() => { ... }) and let Solid track dependencies by
reading clientId and options inside the memo; ensure the function that
constructs VideoGenerationClient (references: createMemo, clientId, options,
VideoGenerationClient) remains unchanged otherwise.

Comment on lines +111 to +142
const client = createMemo(() => {
const clientOptions: GenerationClientOptions<TInput, TResult, TOutput> = {
id: clientId,
body: options.body,
onResult: (r: TResult) => options.onResult?.(r),
onError: (e: Error) => options.onError?.(e),
onProgress: (p: number, m?: string) => options.onProgress?.(p, m),
onChunk: (c: StreamChunk) => options.onChunk?.(c),
onResultChange: setResult,
onLoadingChange: setIsLoading,
onErrorChange: setError,
onStatusChange: setStatus,
}

if (options.connection) {
return new GenerationClient<TInput, TResult, TOutput>({
...clientOptions,
connection: options.connection,
})
}

if (options.fetcher) {
return new GenerationClient<TInput, TResult, TOutput>({
...clientOptions,
fetcher: options.fetcher,
})
}

throw new Error(
'useGeneration requires either a connection or fetcher option',
)
}, [clientId])

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check current createMemo usage in the codebase
rg -n "createMemo\(" --type ts packages/typescript/ai-solid/src/ -A 1 | head -30

Repository: TanStack/ai

Length of output: 625


🌐 Web query:

Solid.js createMemo API signature documentation

💡 Result:

Solid’s official docs list createMemo with this TypeScript signature:

function createMemo<T>(
  fn: (v: T) => T,
  value?: T,
  options?: { equals?: false | ((prev: T, next: T) => boolean); name?: string }
): () => T;
  • fn is called to compute the value and receives the previous memo value as its argument.
  • Returns a read-only accessor () => T. [1]

Docs page (official):

https://docs.solidjs.com/reference/basic-reactivity/create-memo

[1]

Sources: [1] Solid Docs – “createMemo” API reference.


Remove the unnecessary second argument from createMemo.

Solid.js's createMemo does not accept a dependency array. The optional second argument is the initial value for the memo's previous-value parameter (of the same type as the computed value), and dependencies are tracked automatically by reading reactive values inside the function. Passing [clientId] here is not a syntax error, but it wrongly sets the memo's initial previous value to an array and suggests React-style dependency semantics that Solid does not have.

🔧 Proposed fix
   const client = createMemo(() => {
     const clientOptions: GenerationClientOptions<TInput, TResult, TOutput> = {
       id: clientId,
       body: options.body,
       onResult: (r: TResult) => options.onResult?.(r),
       onError: (e: Error) => options.onError?.(e),
       onProgress: (p: number, m?: string) => options.onProgress?.(p, m),
       onChunk: (c: StreamChunk) => options.onChunk?.(c),
       onResultChange: setResult,
       onLoadingChange: setIsLoading,
       onErrorChange: setError,
       onStatusChange: setStatus,
     }

     if (options.connection) {
       return new GenerationClient<TInput, TResult, TOutput>({
         ...clientOptions,
         connection: options.connection,
       })
     }

     if (options.fetcher) {
       return new GenerationClient<TInput, TResult, TOutput>({
         ...clientOptions,
         fetcher: options.fetcher,
       })
     }

     throw new Error(
       'useGeneration requires either a connection or fetcher option',
     )
-  }, [clientId])
+  })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-solid/src/use-generation.ts` around lines 111 - 142,
The createMemo call for the client variable incorrectly passes a dependency
array as a second argument; Solid's createMemo tracks dependencies automatically
and accepts an optional initial value (not a deps array). Remove the second
argument ([clientId]) from the createMemo invocation so it only receives the
factory function that builds client using clientOptions and branches for
options.connection/options.fetcher (leaving the GenerationClient construction
and error throw unchanged).

Comment on lines +304 to +308
const mockResult = {
audio: 'base64data',
format: 'mp3' as const,
model: 'tts-1',
}

⚠️ Potential issue | 🟡 Minor

Align speech test fixtures with the expected TTSResult shape (include id).

Current fixtures validate only partial payloads, which weakens contract fidelity for speech paths.

Proposed fix
     const mockResult = {
+      id: 'tts-1',
       audio: 'base64data',
       format: 'mp3' as const,
       model: 'tts-1',
     }
-    const mockResult = { audio: 'base64data', format: 'mp3', model: 'tts-1' }
+    const mockResult = { id: 'tts-1', audio: 'base64data', format: 'mp3', model: 'tts-1' }
       fetcher: async () => ({
+        id: 'tts-1',
         audio: 'base64data',
         format: 'mp3' as const,
         model: 'tts-1',
       }),

Also applies to: 321-321, 352-356
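Beyond patching each fixture by hand, the `satisfies` operator can enforce the full contract at the declaration site. A sketch, assuming TTSResult carries the id, audio, format, and model fields shown in the diff (the real type lives in @tanstack/ai and may have more):

```typescript
// Assumed shape of TTSResult for illustration only; the actual type is
// exported by @tanstack/ai and may include additional fields.
interface TTSResult {
  id: string
  audio: string
  format: 'mp3' | 'wav'
  model: string
}

// `satisfies` keeps the literal types (format stays 'mp3', no widening to
// string) while forcing the fixture to match the full contract: omitting
// `id` here would be a compile-time error instead of a silently partial
// payload.
const mockResult = {
  id: 'tts-1',
  audio: 'base64data',
  format: 'mp3',
  model: 'tts-1',
} satisfies TTSResult
```

With this in place, any future field added to TTSResult fails the test file's type check instead of weakening contract fidelity unnoticed.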

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-svelte/tests/create-generation.test.ts` around lines
304 - 308, The speech test fixtures use mockResult objects that don't include
the required TTSResult.id property; update each test fixture (the mockResult
declarations around create-generation.test.ts lines where mockResult is defined
and the other occurrences at the noted ranges) to include a valid id string
(e.g., id: 'mock-tts-id') so the mocked payload matches the full TTSResult shape
used by functions under test (ensure all three occurrences—initial mockResult
and the other two fixtures—are updated).

Comment on lines +51 to +54
export interface UseGenerationReturn<TOutput> {
/** Trigger a generation request */
generate: (input: Record<string, any>) => Promise<void>
/** The generation result, or null if not yet generated */

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n packages/typescript/ai-vue/src/use-generation.ts | head -200

Repository: TanStack/ai

Length of output: 7156


Vue base composable widens generate input from TInput to Record<string, any>.

The UseGenerationReturn interface (line 51) lacks a TInput generic parameter and hardcodes the input type. Combined with the cast at line 173, this loses compile-time type safety for direct useGeneration<TInput, ...> consumers despite the function correctly accepting TInput internally.

♻️ Proposed fix
-export interface UseGenerationReturn<TOutput> {
+export interface UseGenerationReturn<
+  TOutput,
+  TInput extends Record<string, any> = Record<string, any>,
+> {
   /** Trigger a generation request */
-  generate: (input: Record<string, any>) => Promise<void>
+  generate: (input: TInput) => Promise<void>
@@
-): UseGenerationReturn<InferGenerationOutput<TResult, TOnResult>> {
+): UseGenerationReturn<InferGenerationOutput<TResult, TOnResult>, TInput> {
@@
   return {
-    generate: generate as (input: Record<string, any>) => Promise<void>,
+    generate,
     result: readonly(result),
     isLoading: readonly(isLoading),

Affects lines 51–54, 98, and 173–174. Per coding guidelines: "Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-vue/src/use-generation.ts` around lines 51 - 54,
Update the UseGenerationReturn interface to preserve the TInput generic so
callers keep compile-time safety: add a TInput type parameter to
UseGenerationReturn and change the generate signature from generate: (input:
Record<string, any>) => Promise<void> to generate: (input: TInput) =>
Promise<void>; then adjust the useGeneration function's return type to
UseGenerationReturn<InferGenerationOutput<TResult, TOnResult>, TInput>
(matching the interface's new generic arity) and remove the unsafe cast at the
site where generate is returned so the real TInput flows through without
widening to Record<string, any>.
