feat: generation hooks and streaming across all frameworks#327
AlemTuzlak wants to merge 2 commits into main from
Conversation
…anscription, and summarize

Adds a complete client-side generation layer across all framework integrations (React, Solid, Svelte, Vue) for one-shot generation tasks: image generation, text-to-speech, audio transcription, text summarization, and video generation.

Core changes:
- `streamGenerationResult()` and `streamVideoGeneration()` server helpers in `@tanstack/ai`
- `GenerationClient` and `VideoGenerationClient` state machines in `@tanstack/ai-client`
- Framework hooks: `useGenerateImage`, `useGenerateSpeech`, `useGenerateVideo`, `useTranscription`, `useSummarize` (and Svelte `create*` equivalents)
- `InferGenerationOutput` type for type-safe `onResult` transforms
- New "Generations" overview doc with Mermaid architecture diagram
- Full-stack usage sections added to all 4 generation guide docs
- React example app with 5 demo pages and server API routes
📝 Walkthrough

Introduces a unified "generations" pattern for non-chat AI activities with streaming and direct transport modes. Adds core …

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client Component
    participant Hook as useGeneration Hook
    participant GC as GenerationClient
    participant Adapter as ConnectionAdapter/Fetcher
    participant Server as Server/API
    Client->>Hook: call generate(input)
    Hook->>GC: client.generate(input)
    alt Streaming Mode (SSE)
        GC->>Adapter: connect({body, input})
        Adapter->>Server: establish connection
        Server->>Adapter: emit RUN_STARTED
        Adapter->>GC: processStream([chunk])
        GC->>GC: setIsLoading(true)
        loop Poll Status
            Server->>Adapter: emit CUSTOM('generation:result', data)
            Adapter->>GC: processStream([chunk])
            GC->>GC: setResult(data)
        end
        Server->>Adapter: emit RUN_FINISHED
        Adapter->>GC: processStream([chunk])
        GC->>GC: setIsLoading(false)
        GC->>GC: setStatus('success')
    else Direct Mode (Fetcher)
        GC->>Adapter: fetcher(input)
        Adapter->>Server: POST request
        Server-->>Adapter: return TResult
        GC->>GC: setResult(TResult)
        GC->>GC: setIsLoading(false)
        GC->>GC: setStatus('success')
    end
    GC->>Hook: notify onResult/onResultChange
    Hook->>Client: reactive state updated
    Client->>Client: re-render with result
```
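The streaming branch of the diagram above can be sketched as a small reducer over the emitted events. This is an illustrative sketch only, not the actual `GenerationClient` source; the chunk and state shapes are assumptions derived from the event names in the diagram.

```typescript
// Hypothetical event and state shapes, inferred from the sequence diagram.
type GenerationChunk =
  | { type: 'RUN_STARTED' }
  | { type: 'CUSTOM'; name: string; value: unknown }
  | { type: 'RUN_FINISHED' }
  | { type: 'RUN_ERROR'; error: { message: string } }

interface GenerationState<TResult> {
  isLoading: boolean
  status: 'idle' | 'generating' | 'success' | 'error'
  result?: TResult
  error?: string
}

// Fold one streamed chunk into client state, mirroring the diagram's steps.
function reduceChunk<TResult>(
  state: GenerationState<TResult>,
  chunk: GenerationChunk,
): GenerationState<TResult> {
  switch (chunk.type) {
    case 'RUN_STARTED':
      return { ...state, isLoading: true, status: 'generating' }
    case 'CUSTOM':
      // Only the 'generation:result' event carries the payload.
      return chunk.name === 'generation:result'
        ? { ...state, result: chunk.value as TResult }
        : state
    case 'RUN_FINISHED':
      return { ...state, isLoading: false, status: 'success' }
    case 'RUN_ERROR':
      return { ...state, isLoading: false, status: 'error', error: chunk.error.message }
  }
}
```

Folding the happy-path sequence (`RUN_STARTED`, `CUSTOM('generation:result', …)`, `RUN_FINISHED`) over an idle state ends with `status: 'success'` and the result set, matching the diagram.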
```mermaid
sequenceDiagram
    participant Client as Client Component
    participant Hook as useGenerateVideo Hook
    participant VGC as VideoGenerationClient
    participant Server as API Server
    participant Adapter as VideoAdapter
    Client->>Hook: call generate(input)
    Hook->>VGC: client.generate(input)
    VGC->>VGC: setIsLoading(true), setStatus('generating')
    alt Streaming Mode with Polling
        VGC->>Server: connect({body, input})
        Server->>Server: emit RUN_STARTED
        Server->>Adapter: adapter.createVideoJob()
        Adapter-->>Server: {jobId}
        Server->>Server: emit CUSTOM('video:job:created', {jobId})
        VGC->>VGC: setJobId(jobId), onJobCreated(jobId)
        loop Poll Every pollingInterval
            Server->>Adapter: adapter.getVideoStatus(jobId)
            Adapter-->>Server: {status, progress}
            Server->>Server: emit CUSTOM('video:status', {...})
            VGC->>VGC: setVideoStatus({status, progress})
            VGC->>VGC: onStatusUpdate(videoStatus)
            alt Status = 'completed'
                Server->>Adapter: adapter.getVideoUrl(jobId)
                Adapter-->>Server: url
                Server->>Server: emit CUSTOM('generation:result', {url})
                VGC->>VGC: setResult({url, ...})
                Server->>Server: emit RUN_FINISHED
            end
        end
    else Direct Mode (Fetcher)
        VGC->>Adapter: fetcher(input)
        Adapter->>Adapter: polling loop
        Adapter-->>VGC: {url, status, jobId}
        VGC->>VGC: setResult(result)
    end
    VGC->>VGC: setIsLoading(false), setStatus('success')
    Hook->>Client: reactive state updated
    Client->>Client: render video player with result.url
```
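The direct-mode polling loop in the diagram above can be sketched as follows. The status shape and function names mirror the diagram and are assumptions, not the real `VideoGenerationClient` or adapter API.

```typescript
// Assumed status shape, based on the {status, progress} payloads in the diagram.
interface VideoStatus {
  status: 'queued' | 'processing' | 'completed' | 'failed'
  progress?: number
  error?: string
}

// Poll a video job until it reaches a terminal state. The sleep function is
// injectable so tests can run without real timers.
async function pollVideoJob(
  getStatus: () => Promise<VideoStatus>,
  pollingIntervalMs: number,
  sleep: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms)),
): Promise<VideoStatus> {
  for (;;) {
    const s = await getStatus()
    if (s.status === 'completed' || s.status === 'failed') return s
    await sleep(pollingIntervalMs)
  }
}
```

A real client would additionally honor an `AbortSignal` and surface intermediate `progress` updates to the UI between polls.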
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~70 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)
✅ Passed checks (2 passed)
View your CI Pipeline Execution ↗ for commit 1fc91d7
@tanstack/ai
@tanstack/ai-anthropic
@tanstack/ai-client
@tanstack/ai-devtools-core
@tanstack/ai-fal
@tanstack/ai-gemini
@tanstack/ai-grok
@tanstack/ai-ollama
@tanstack/ai-openai
@tanstack/ai-openrouter
@tanstack/ai-preact
@tanstack/ai-react
@tanstack/ai-react-ui
@tanstack/ai-solid
@tanstack/ai-solid-ui
@tanstack/ai-svelte
@tanstack/ai-vue
@tanstack/ai-vue-ui
@tanstack/preact-ai-devtools
@tanstack/react-ai-devtools
@tanstack/solid-ai-devtools
Actionable comments posted: 18
🧹 Nitpick comments (6)
packages/typescript/ai/tests/stream-generation.test.ts (2)
319-329: Remove unused `pollCount` variable.

The variable `pollCount` is declared and incremented but never read. This appears to be leftover from development.

🧹 Suggested fix

```diff
 it('should propagate error message from failed status', async () => {
   const adapter = createMockVideoAdapter()
-  let pollCount = 0
   adapter.getVideoStatus = vi.fn(async () => {
-    pollCount++
     return {
       jobId: 'job-123',
       status: 'failed' as const,
       error: 'Content policy violation',
     }
   })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai/tests/stream-generation.test.ts` around lines 319 - 329, Remove the unused pollCount variable and its increment in the test "should propagate error message from failed status": delete the declaration "let pollCount = 0" and the "pollCount++" inside the mocked adapter.getVideoStatus implementation (the mock created via createMockVideoAdapter and vi.fn), as pollCount is never read.
1-6: Sort imports alphabetically per ESLint configuration.

🔧 Suggested fix

```diff
-import { describe, it, expect, vi } from 'vitest'
+import { describe, expect, it, vi } from 'vitest'
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai/tests/stream-generation.test.ts` around lines 1 - 6, Reorder the import statements and their specifiers to be alphabetically sorted per ESLint: sort the three import lines by their module specifier (e.g., '../src/stream-generation', '../src/types', 'vitest' in alphabetical order) and also alphabetize the named imports inside each import (ensure streamGenerationResult comes before streamVideoGeneration and StreamChunk before VideoStatusResult). Update the import block in the test file to reflect this ordering so it satisfies the lint rule.

examples/ts-react-chat/src/routes/api.generate.image.ts (1)
2-6: Sort imports alphabetically per ESLint configuration.

The static analysis flags that `streamGenerationResult` should come after `generateImage` alphabetically.

🔧 Suggested fix

```diff
 import {
-  streamGenerationResult,
   generateImage,
+  streamGenerationResult,
   toServerSentEventsResponse,
 } from '@tanstack/ai'
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@examples/ts-react-chat/src/routes/api.generate.image.ts` around lines 2 - 6, Reorder the named imports in the import statement so they are alphabetized: place generateImage before streamGenerationResult, which comes before toServerSentEventsResponse (i.e., import generateImage, streamGenerationResult, toServerSentEventsResponse from '@tanstack/ai'); update the import line that currently lists streamGenerationResult, generateImage, toServerSentEventsResponse to match ESLint's alphabetical ordering.

examples/ts-react-chat/src/routes/generations.image.tsx (1)
91-101: Consider adding unique keys if image IDs are available.

Using the array index as key works here since images are only appended and not reordered. However, if `img.id` or a similar unique identifier is available from the API response, using that would be more robust.

💡 Optional improvement if images have unique IDs

```diff
 {result.images.map((img, i) => (
   <img
-    key={i}
+    key={img.id ?? i}
     src={img.url || `data:image/png;base64,${img.b64Json}`}
     alt={img.revisedPrompt || prompt}
     className="w-full rounded-lg border border-gray-700"
   />
 ))}
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@examples/ts-react-chat/src/routes/generations.image.tsx` around lines 91 - 101, The map uses the array index as the React key which is fragile; update the key in the images mapping to use a unique identifier from the image object when available (e.g., img.id or img.uuid) instead of i; locate the JSX that renders result.images inside the component (the <img ... key={i} ... /> line) and replace the index-based key with a conditional unique key such as key={img.id ?? img.uuid ?? i} so it falls back to index only if no unique id exists.

packages/typescript/ai-react/tests/use-generation.test.ts (1)
185-201: Avoid dangling async work in stop/cleanup tests.

These tests start `generate()` but don't await its completion after resolving the deferred promise. That can leave async work running past test boundaries and introduce flaky behavior.

✅ Suggested test hardening

```diff
-      act(() => {
-        result.current.generate({ prompt: 'test' })
-      })
+      let generatePromise!: Promise<void>
+      act(() => {
+        generatePromise = result.current.generate({ prompt: 'test' })
+      })
@@
-      resolvePromise!({ id: '1' })
+      resolvePromise!({ id: '1' })
+      await act(async () => {
+        await generatePromise
+      })
```

Also applies to: 240-254

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-react/tests/use-generation.test.ts` around lines 185 - 201, The test starts an async generation with result.current.generate({ prompt: 'test' }) and resolves its deferred promise via resolvePromise({ id: '1' }) but does not wait for the generate() flow to finish, leaving async work dangling; update the test to await the generation completing after calling resolvePromise (for example, await waitFor(() => expect(result.current.isLoading).toBe(false)) or await act(async () => resolvePromise!({ id: '1' })) so that the generate/stop lifecycle (generate(), stop(), isLoading, status) fully settles before the test ends; ensure you apply the same change to the other occurrence around lines 240-254.

examples/ts-react-chat/src/routes/generations.video.tsx (1)
95-100: Clamp progress before applying it to CSS width.

Line 99 applies `videoStatus.progress` directly to `width`. If the server emits out-of-range values, the bar can overflow/underflow visually. Clamp to `[0, 100]` before rendering.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@examples/ts-react-chat/src/routes/generations.video.tsx` around lines 95 - 100, The progress value from videoStatus.progress is used directly in the style width which can overflow if the server sends values <0 or >100; clamp it to the [0,100] range before rendering (e.g., compute const clamped = Math.min(100, Math.max(0, videoStatus.progress ?? 0)) or create a clamp helper) and use that clamped value in the style template string for the inner progress div so the CSS width is always valid.
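The clamp fix suggested in the comment above is small enough to show directly. This is a sketch of the helper the review proposes (the name `clampProgress` is an assumption, not code from the PR):

```typescript
// Clamp a server-reported progress value into [0, 100] before using it as a
// CSS width. `progress` may be undefined or out of range.
function clampProgress(progress?: number): number {
  return Math.min(100, Math.max(0, progress ?? 0))
}
```

The component would then render `width: ${clampProgress(videoStatus.progress)}%` instead of interpolating the raw value.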
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.claude/settings.json:
- Around line 7-9: The .claude/settings.json contains machine-specific allowlist
entries and a high-risk execution permission; remove entries such as
"Read(//c/Users/AlemTuzlak/.claude/**)",
"Read(//c/Users/AlemTuzlak/.agents/skills/frontend-design//**)", the absolute
repo worktree grep pattern, and any "Bash(pnpm install)" rule from the committed
file, and instead add those developer-specific paths/commands to
.claude/settings.local.json (which is in .gitignore); keep the shared
.claude/settings.json minimal and portable (only repo-relative, non-identifying
Read/Query rules) and ensure no broad Bash execution rules remain in the
committed file so it adheres to least-privilege.
In `@docs/guides/generations.md`:
- Around line 123-137: Add explicit language identifiers to the two fenced code
blocks that show the event sequences (the block containing "RUN_STARTED → ...
RUN_FINISHED" and the block containing "RUN_STARTED → ... RUN_ERROR") by
changing the opening triple-backtick to include "text" (e.g., ```text) for both
blocks so markdownlint MD040 is satisfied; update both occurrences in the
docs/guides/generations.md content where those event sequence blocks appear.
In `@docs/guides/transcription.md`:
- Around line 308-309: The example destructures the JSON root (const { audio,
language, model } = await request.json()) but the actual route handler in
api.transcribe.ts expects the payload under body.data; update the snippet to
extract these values from request.json().body.data (or adjust to read
request.json().body then access .data) so the variables audio, language, and
model come from the same shape the route implementation expects, and add a
simple existence check for body.data before using it.
In `@examples/ts-react-chat/src/routes/api.generate.video.ts`:
- Around line 9-15: Validate the incoming JSON before using request.json()
results: define a Zod schema for the expected payload (shape containing data: {
prompt: string, size?: string, duration?: number, model?: string }), use
schema.parse or safeParse to validate the parsed body, and if validation fails
return a 400 response; then destructure validated.data and pass those validated
values into openaiVideo(...) and streamVideoGeneration(...). Ensure you
reference the existing symbols (request.json(), body.data,
streamVideoGeneration, openaiVideo, and the fields model/prompt/size/duration)
so the adapter only receives validated inputs.
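The review above recommends a Zod schema for the video route's payload. A minimal hand-rolled type guard to the same effect is sketched below; the payload field names follow the shape described in the comment and are otherwise assumptions:

```typescript
// Expected payload shape per the review: { data: { prompt, size?, duration?, model? } }
interface VideoRequestData {
  prompt: string
  size?: string
  duration?: number
  model?: string
}

// Validate an unknown parsed JSON body; return null on any shape violation so
// the route can respond with 400 instead of passing bad input to the adapter.
function parseVideoRequest(body: unknown): VideoRequestData | null {
  if (typeof body !== 'object' || body === null) return null
  const data = (body as { data?: unknown }).data
  if (typeof data !== 'object' || data === null) return null
  const d = data as Record<string, unknown>
  if (typeof d.prompt !== 'string' || d.prompt.length === 0) return null
  if (d.size !== undefined && typeof d.size !== 'string') return null
  if (d.duration !== undefined && typeof d.duration !== 'number') return null
  if (d.model !== undefined && typeof d.model !== 'string') return null
  return d as unknown as VideoRequestData
}
```

With Zod the same check collapses to a `z.object({...}).safeParse(body.data)` call, which is what the review actually suggests.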
In `@examples/ts-react-chat/src/routes/api.transcribe.ts`:
- Around line 2-6: The import specifiers from '@tanstack/ai' are not
alphabetically ordered; reorder them to satisfy the sort-imports rule by listing
the named imports in alphabetical order: generateTranscription,
streamGenerationResult, toServerSentEventsResponse (i.e., change the import line
to import { generateTranscription, streamGenerationResult,
toServerSentEventsResponse } from '@tanstack/ai'). Ensure there are no trailing
commas or duplicate imports after the change.
In `@examples/ts-react-chat/src/routes/generations.speech.tsx`:
- Around line 43-56: The labels are not associated with their form controls
which breaks accessibility; add explicit associations by giving the textarea
used in setText a unique id (e.g., textToSpeakId) and set the corresponding Text
label's htmlFor to that id, and likewise assign ids to the Voice control(s)
rendered in the "Voice" group (or the containing select/radio input) and set the
Voice label's htmlFor to match; update references around the textarea that uses
setText and isLoading and the Voice control rendering code so screen readers and
keyboard navigation correctly map labels to controls.
- Around line 14-23: The onResult handler creates blob URLs but never revokes
them, causing memory leaks; update the logic in onResult and the
reset()/component unmount path to call URL.revokeObjectURL on any previously
created audio URL before assigning a new one, store the current URL in the
component state so reset() can revoke it, and add a useEffect cleanup that
revokes the stored URL on unmount; reference the onResult function, the
generate() flow that triggers it, and the reset() method to ensure all created
URLs are revoked.
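The blob-URL leak fix described above boils down to tracking the previously created URL and revoking it before storing a new one (and again on reset/unmount). A testable sketch follows; the revoke function is injected here, whereas in the browser it would be `URL.revokeObjectURL`, and the store name is an assumption:

```typescript
// Track the current audio object URL so stale ones are always revoked.
function createAudioUrlStore(revoke: (url: string) => void) {
  let current: string | undefined
  return {
    set(url: string) {
      if (current) revoke(current) // free the old blob URL before replacing it
      current = url
    },
    reset() {
      if (current) revoke(current)
      current = undefined
    },
    get current() {
      return current
    },
  }
}
```

In the React component, `onResult` would call `store.set(URL.createObjectURL(blob))`, and a `useEffect` cleanup plus `reset()` would call `store.reset()`.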
In `@examples/ts-react-chat/src/routes/generations.transcription.tsx`:
- Around line 17-21: The current conversion uses file.arrayBuffer(), then builds
a base64 string via new Uint8Array(buffer).reduce(...) and assigns dataUrl —
this causes O(n²) string concatenation; replace that whole path with
FileReader.readAsDataURL(file) (awaiting a Promise around the onload/onerror
events) to obtain the dataUrl directly and remove buffer/base64/reduce usage,
then use the resulting dataUrl wherever dataUrl is referenced.
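In the browser, `FileReader.readAsDataURL(file)` is the fix the comment above suggests. To illustrate why the original `reduce`-based path is a problem, here is a chunked base64 encoder that avoids per-byte string concatenation; the helper name is an assumption for illustration:

```typescript
// Encode bytes to base64 in fixed-size chunks rather than one character at a
// time, avoiding quadratic string concatenation over large audio files.
function bytesToBase64(bytes: Uint8Array, chunkSize = 0x8000): string {
  let binary = ''
  for (let i = 0; i < bytes.length; i += chunkSize) {
    const chunk = bytes.subarray(i, i + chunkSize)
    binary += String.fromCharCode(...Array.from(chunk))
  }
  return btoa(binary)
}
```

Even so, `FileReader.readAsDataURL` (wrapped in a Promise around its `onload`/`onerror` events) is simpler and delegates the encoding to the platform, which is why the review prefers it.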
In `@examples/ts-react-chat/src/routes/generations.video.tsx`:
- Around line 46-54: Add an accessible label association: give the textarea an
explicit id (e.g., id="prompt") and update the <label> to use htmlFor="prompt"
so screen readers and form controls are correctly linked; keep the existing
value={prompt}, onChange={(e) => setPrompt(e.target.value)},
disabled={isLoading} and other props on the textarea (adjust the id string as
needed to avoid collisions).
In `@packages/typescript/ai-client/src/video-generation-client.ts`:
- Around line 155-167: The fetcher never gets the AbortSignal so long-running
fetches can't be cancelled; update the fetcher API and call site: change the
fetcher signature (where declared/typed) to accept an optional AbortSignal (e.g.
fetcher(input, signal?: AbortSignal)) and pass the signal from
generateWithFetcher into the call (replace this.fetcher(input) with
this.fetcher(input, signal)), then update any affected types/usages of fetcher
and handle errors/abort behavior in generateWithFetcher (check signal.aborted
before setResult and handle thrown AbortError accordingly).
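The suggested fetcher change above can be sketched as follows. The type and function names mirror the review's wording and are assumptions, not the actual `@tanstack/ai-client` source:

```typescript
// Fetcher now receives the AbortSignal so in-flight work can be cancelled.
type Fetcher<TInput, TResult> = (
  input: TInput,
  signal?: AbortSignal,
) => Promise<TResult>

async function generateWithFetcher<TInput, TResult>(
  fetcher: Fetcher<TInput, TResult>,
  input: TInput,
  signal: AbortSignal,
): Promise<TResult | undefined> {
  try {
    const result = await fetcher(input, signal)
    // A late abort should not overwrite client state with a stale result.
    return signal.aborted ? undefined : result
  } catch (err) {
    if ((err as Error).name === 'AbortError') return undefined // cancelled
    throw err
  }
}
```

A fetch-based fetcher would simply forward the signal: `(input, signal) => fetch(url, { method: 'POST', body: JSON.stringify(input), signal }).then((r) => r.json())`.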
In `@packages/typescript/ai-client/tests/generation-client.test.ts`:
- Line 1: Reorder the named imports from 'vitest' to satisfy the sort-imports
rule: change the import line in generation-client.test.ts (currently "import {
describe, it, expect, vi } from 'vitest'") to alphabetically ordered members
"import { describe, expect, it, vi } from 'vitest'".
In `@packages/typescript/ai-client/tests/video-generation-client.test.ts`:
- Line 1: The named imports in the top-level import statement (describe, it,
expect, vi) are not alphabetized; update the import from "vitest" so the members
are in sorted order (describe, expect, it, vi) to satisfy the sort-imports lint
rule and re-run linting/tests to confirm the fix.
In `@packages/typescript/ai-react/src/use-generate-video.ts`:
- Around line 126-165: The useMemo inside useGenerateVideo currently throws
during render and only depends on clientId, so it won't recreate the
VideoGenerationClient when options.connection or options.fetcher change;
instead, stop throwing in the memo and either (A) create a safe
no-op/placeholder client inside useMemo when neither connection nor fetcher are
present, or (B) move the validation and throwing into the generate() method so
render never throws; also update the memo dependency array to include the
transport inputs used to build the client (e.g., reference
optionsRef.current.connection and optionsRef.current.fetcher or include
options.connection/options.fetcher) so a new VideoGenerationClient<TOutput> is
created when the transport changes (identify VideoGenerationClient, useMemo in
useGenerateVideo, optionsRef, and generate when making edits).
In `@packages/typescript/ai-react/src/use-generation.ts`:
- Around line 50-53: The UseGenerationReturn interface currently widens
generate's input to Record<string, any>, losing compile-time safety; update the
interface to accept a TInput generic (e.g., UseGenerationReturn<TOutput,
TInput>) and change the generate signature to generate: (input: TInput) =>
Promise<void> so it matches the internal generate implementation (the generate
function used in the hook at line ~156) and restores correct typing for wrappers
like useSummarize/useGenerateImage without casts.
In `@packages/typescript/ai-solid/src/use-generate-video.ts`:
- Around line 133-168: The createMemo call in useGenerateVideo incorrectly
passes a dependency array ([clientId]) as the second argument; remove that
second argument so the call is just createMemo(() => { ... }) and let Solid
track dependencies by reading clientId and options inside the memo; ensure the
function that constructs VideoGenerationClient (references: createMemo,
clientId, options, VideoGenerationClient) remains unchanged otherwise.
In `@packages/typescript/ai-solid/src/use-generation.ts`:
- Around line 111-142: The createMemo call for the client variable incorrectly
passes a dependency array as a second argument; Solid's createMemo tracks
dependencies automatically and accepts an optional initial value (not a deps
array). Remove the second argument ([clientId]) from the createMemo invocation
so it only receives the factory function that builds client using clientOptions
and branches for options.connection/options.fetcher (leaving the
GenerationClient construction and error throw unchanged).
In `@packages/typescript/ai-svelte/tests/create-generation.test.ts`:
- Around line 304-308: The speech test fixtures use mockResult objects that
don't include the required TTSResult.id property; update each test fixture (the
mockResult declarations around create-generation.test.ts lines where mockResult
is defined and the other occurrences at the noted ranges) to include a valid id
string (e.g., id: 'mock-tts-id') so the mocked payload matches the full
TTSResult shape used by functions under test (ensure all three
occurrences—initial mockResult and the other two fixtures—are updated).
In `@packages/typescript/ai-vue/src/use-generation.ts`:
- Around line 51-54: Update the UseGenerationReturn interface to preserve the
TInput generic so callers keep compile-time safety: add a TInput type parameter
to UseGenerationReturn and change the generate signature from generate: (input:
Record<string, any>) => Promise<void> to generate: (input: TInput) =>
Promise<void>; then adjust the useGeneration function's return type to
UseGenerationReturn<TInput, TOutput, TParams> (or matching generic arity) and
remove the unsafe cast at the site where generate is returned so the real TInput
flows through without widening to Record<string, any>.
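The `TInput` generic change requested above can be sketched with a toy, framework-free implementation. The interface shape follows the review's description; the factory name is an assumption for illustration:

```typescript
// Keep TInput on the return type so `generate` stays typed end to end
// instead of widening to Record<string, any>.
interface GenerationHandle<TInput, TOutput> {
  readonly result: TOutput | undefined
  readonly isLoading: boolean
  generate: (input: TInput) => Promise<void>
}

function makeGeneration<TInput, TOutput>(
  run: (input: TInput) => Promise<TOutput>,
): GenerationHandle<TInput, TOutput> {
  const state: { result?: TOutput; isLoading: boolean } = { isLoading: false }
  return {
    get result() {
      return state.result
    },
    get isLoading() {
      return state.isLoading
    },
    async generate(input: TInput) {
      state.isLoading = true
      state.result = await run(input)
      state.isLoading = false
    },
  }
}
```

With this shape, a wrapper like `useSummarize` gets `generate: (input: SummarizeInput) => Promise<void>` for free, and passing the wrong input shape becomes a compile-time error rather than a silent widening.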
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (65)
.claude/settings.json
.gitignore
docs/guides/generations.md
docs/guides/image-generation.md
docs/guides/migration.md
docs/guides/text-to-speech.md
docs/guides/transcription.md
docs/guides/tree-shaking.md
docs/guides/video-generation.md
examples/ts-react-chat/src/components/Header.tsx
examples/ts-react-chat/src/routeTree.gen.ts
examples/ts-react-chat/src/routes/api.generate.image.ts
examples/ts-react-chat/src/routes/api.generate.speech.ts
examples/ts-react-chat/src/routes/api.generate.video.ts
examples/ts-react-chat/src/routes/api.summarize.ts
examples/ts-react-chat/src/routes/api.transcribe.ts
examples/ts-react-chat/src/routes/generations.image.tsx
examples/ts-react-chat/src/routes/generations.speech.tsx
examples/ts-react-chat/src/routes/generations.summarize.tsx
examples/ts-react-chat/src/routes/generations.transcription.tsx
examples/ts-react-chat/src/routes/generations.video.tsx
examples/ts-react-chat/src/routes/index.tsx
knip.json
nx.json
packages/typescript/ai-client/src/generation-client.ts
packages/typescript/ai-client/src/generation-types.ts
packages/typescript/ai-client/src/index.ts
packages/typescript/ai-client/src/video-generation-client.ts
packages/typescript/ai-client/tests/generation-client.test.ts
packages/typescript/ai-client/tests/video-generation-client.test.ts
packages/typescript/ai-react/src/index.ts
packages/typescript/ai-react/src/use-generate-image.ts
packages/typescript/ai-react/src/use-generate-speech.ts
packages/typescript/ai-react/src/use-generate-video.ts
packages/typescript/ai-react/src/use-generation.ts
packages/typescript/ai-react/src/use-summarize.ts
packages/typescript/ai-react/src/use-transcription.ts
packages/typescript/ai-react/tests/use-generation.test.ts
packages/typescript/ai-solid/src/index.ts
packages/typescript/ai-solid/src/use-generate-image.ts
packages/typescript/ai-solid/src/use-generate-speech.ts
packages/typescript/ai-solid/src/use-generate-video.ts
packages/typescript/ai-solid/src/use-generation.ts
packages/typescript/ai-solid/src/use-summarize.ts
packages/typescript/ai-solid/src/use-transcription.ts
packages/typescript/ai-solid/tests/use-generation.test.ts
packages/typescript/ai-svelte/src/create-generate-image.svelte.ts
packages/typescript/ai-svelte/src/create-generate-speech.svelte.ts
packages/typescript/ai-svelte/src/create-generate-video.svelte.ts
packages/typescript/ai-svelte/src/create-generation.svelte.ts
packages/typescript/ai-svelte/src/create-summarize.svelte.ts
packages/typescript/ai-svelte/src/create-transcription.svelte.ts
packages/typescript/ai-svelte/src/index.ts
packages/typescript/ai-svelte/tests/create-generation.test.ts
packages/typescript/ai-vue/src/index.ts
packages/typescript/ai-vue/src/use-generate-image.ts
packages/typescript/ai-vue/src/use-generate-speech.ts
packages/typescript/ai-vue/src/use-generate-video.ts
packages/typescript/ai-vue/src/use-generation.ts
packages/typescript/ai-vue/src/use-summarize.ts
packages/typescript/ai-vue/src/use-transcription.ts
packages/typescript/ai-vue/tests/use-generation.test.ts
packages/typescript/ai/src/index.ts
packages/typescript/ai/src/stream-generation.ts
packages/typescript/ai/tests/stream-generation.test.ts
```text
"Read(//c/Users/AlemTuzlak/.claude/**)",
"Read(//c/Users/AlemTuzlak/.agents/skills/frontend-design//**)",
"Bash(grep -n \"keyed\\\\|createMemo\" /c/Users/AlemTuzlak/projects/ts-poc/.claude/worktrees/distributed-weaving-rocket/packages/typescript/ai-devtools/src/components/conversation/*.tsx)",
```
🧩 Analysis chain
🏁 Script executed:

```shell
git ls-files | grep -E "\.claude|\.gitignore" | head -20
```

Repository: TanStack/ai
Length of output: 442
🏁 Script executed:

```shell
if [ -f ".claude/settings.json" ]; then
  echo "=== .claude/settings.json exists ==="
  wc -l ".claude/settings.json"
  cat -n ".claude/settings.json"
else
  echo ".claude/settings.json does not exist"
fi
```

Repository: TanStack/ai
Length of output: 3194
🏁 Script executed:

```shell
if [ -f ".gitignore" ]; then
  echo "=== .gitignore content (relevant lines) ==="
  grep -n "\.claude\|settings\." ".gitignore" || echo "No matches found"
else
  echo ".gitignore does not exist"
fi
```

Repository: TanStack/ai
Length of output: 206
Remove machine-specific and high-risk permissions from committed settings.
This committed allowlist contains user-identifying absolute paths (lines 7–9, 11, 43) and broad execution permission (Bash(pnpm install) at line 48), which is non-portable and violates least-privilege. Keep repo-shared rules minimal, and move developer-specific paths and commands to .claude/settings.local.json, which is already in .gitignore for this purpose.
🔧 Suggested hardening diff

```diff
 {
   "permissions": {
     "allow": [
       "Bash(pnpm test:lib)",
       "Bash(pnpm test:eslint)",
       "Bash(xargs grep -l \"devtools-event-client\")",
-      "Read(//c/Users/AlemTuzlak/.claude/**)",
-      "Read(//c/Users/AlemTuzlak/.agents/skills/frontend-design//**)",
-      "Bash(grep -n \"keyed\\\\|createMemo\" /c/Users/AlemTuzlak/projects/ts-poc/.claude/worktrees/distributed-weaving-rocket/packages/typescript/ai-devtools/src/components/conversation/*.tsx)",
       "Bash(xargs grep -l \"onConfig\\\\|onIterationStart\\\\|onIteration\")",
-      "Bash(grep -n \"^.*\\\\\\(onConfig\\\\|onIterationStart\\\\|onIteration\\\\\\)\" /c/Users/AlemTuzlak/projects/ts-poc/packages/typescript/ai/src/**/*.ts)",
       "Bash(npx @tanstack/router-cli generate)",
       "Bash(xargs grep -l \"generate\\\\|image\\\\|video\\\\|audio\")",
       "Bash(pnpm --filter @tanstack/ai build)",
       ...rest of repo-portable entries...
-      "Read(//c/Users/AlemTuzlak/projects/ts-react-chat/src/routes/**)",
       "Bash(npx tsr generate)",
       "Bash(npx nx run ts-react-media:build)",
       "Bash(npx nx run @tanstack/ai:test:types)",
       "Bash(npx nx run @tanstack/ai:test:types --verbose)",
-      "Bash(pnpm install)"
     ]
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.claude/settings.json around lines 7 - 9, The .claude/settings.json contains
machine-specific allowlist entries and a high-risk execution permission; remove
entries such as "Read(//c/Users/AlemTuzlak/.claude/**)",
"Read(//c/Users/AlemTuzlak/.agents/skills/frontend-design//**)", the absolute
repo worktree grep pattern, and any "Bash(pnpm install)" rule from the committed
file, and instead add those developer-specific paths/commands to
.claude/settings.local.json (which is in .gitignore); keep the shared
.claude/settings.json minimal and portable (only repo-relative, non-identifying
Read/Query rules) and ensure no broad Bash execution rules remain in the
committed file so it adheres to least-privilege.
```
1. RUN_STARTED → Client sets status to 'generating'
2. CUSTOM → Client receives the result
   name: 'generation:result'
   value: <your result>
3. RUN_FINISHED → Client sets status to 'success'
```

If the function throws, a `RUN_ERROR` event is emitted instead:

```
1. RUN_STARTED → Client sets status to 'generating'
2. RUN_ERROR → Client sets error + status to 'error'
   error: { message: '...' }
```
Add explicit language to fenced code blocks
Line 123 and Line 133 open fenced blocks without language identifiers, which trips markdownlint MD040.
📝 Proposed fix
````diff
-```
+```text
 1. RUN_STARTED → Client sets status to 'generating'
 2. CUSTOM → Client receives the result
    name: 'generation:result'
    value: <your result>
 3. RUN_FINISHED → Client sets status to 'success'
@@
-```
+```text
 1. RUN_STARTED → Client sets status to 'generating'
 2. RUN_ERROR → Client sets error + status to 'error'
    error: { message: '...' }
````
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
````markdown
```text
1. RUN_STARTED → Client sets status to 'generating'
2. CUSTOM → Client receives the result
   name: 'generation:result'
   value: <your result>
3. RUN_FINISHED → Client sets status to 'success'
```

If the function throws, a `RUN_ERROR` event is emitted instead:

```text
1. RUN_STARTED → Client sets status to 'generating'
2. RUN_ERROR → Client sets error + status to 'error'
   error: { message: '...' }
```
````
🧰 Tools
🪛 markdownlint-cli2 (0.21.0)
[warning] 123-123: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
[warning] 133-133: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/guides/generations.md` around lines 123 - 137, Add explicit language
identifiers to the two fenced code blocks that show the event sequences (the
block containing "RUN_STARTED → ... RUN_FINISHED" and the block containing
"RUN_STARTED → ... RUN_ERROR") by changing the opening triple-backtick to
include "text" (e.g., ```text) for both blocks so markdownlint MD040 is
satisfied; update both occurrences in the docs/guides/generations.md content
where those event sequence blocks appear.
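The event sequences documented above map naturally onto a small state reducer on the client. The following is an illustrative sketch only, not the actual `GenerationClient` implementation; the chunk shapes are assumed from the documented sequences.

```typescript
// Illustrative reducer for the documented event sequence (not the shipped
// GenerationClient; chunk shapes are assumed from the docs above).
type Chunk =
  | { type: 'RUN_STARTED' }
  | { type: 'CUSTOM'; name: string; value: unknown }
  | { type: 'RUN_FINISHED' }
  | { type: 'RUN_ERROR'; error: { message: string } }

type State = {
  status: 'idle' | 'generating' | 'success' | 'error'
  result: unknown
  error: string | null
}

function reduce(state: State, chunk: Chunk): State {
  switch (chunk.type) {
    case 'RUN_STARTED':
      return { ...state, status: 'generating' }
    case 'CUSTOM':
      // Only the 'generation:result' custom event carries the result payload.
      return chunk.name === 'generation:result'
        ? { ...state, result: chunk.value }
        : state
    case 'RUN_FINISHED':
      return { ...state, status: 'success' }
    case 'RUN_ERROR':
      return { ...state, status: 'error', error: chunk.error.message }
  }
}

const happyPath: Chunk[] = [
  { type: 'RUN_STARTED' },
  { type: 'CUSTOM', name: 'generation:result', value: { url: 'img.png' } },
  { type: 'RUN_FINISHED' },
]
const finalState = happyPath.reduce(reduce, {
  status: 'idle',
  result: null,
  error: null,
})
console.log(finalState.status) // 'success'
```

Folding the error-path sequence through the same reducer ends in `status: 'error'` with the message captured, matching the second documented block.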
| const { audio, language, model } = await request.json() | ||
|
|
Server snippet payload shape doesn’t match route implementation
Line 308 destructures the JSON root directly, but the route in examples/ts-react-chat/src/routes/api.transcribe.ts expects values under body.data. The sample will fail as written against the documented route pattern.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/guides/transcription.md` around lines 308 - 309, the example
destructures the JSON root (`const { audio, language, model } = await
request.json()`), but the actual route handler in api.transcribe.ts expects the
payload under `body.data`; update the snippet to first read `const body = await
request.json()` and then destructure `audio`, `language`, and `model` from
`body.data`, so the variables come from the same shape the route implementation
expects, and add a simple existence check for `body.data` before using it.
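To make the shape mismatch concrete, here is a small standalone sketch of unwrapping the route's expected `{ data: { … } }` envelope instead of destructuring the JSON root. The helper name and guard behavior are illustrative, not part of the library.

```typescript
// Hypothetical helper mirroring the route's expected envelope:
// the client posts { data: { audio, language, model } }, so the handler
// must unwrap body.data instead of destructuring the JSON root.
interface TranscribeBody {
  data?: { audio?: string; language?: string; model?: string }
}

function readTranscribeInput(body: TranscribeBody) {
  if (!body.data) {
    throw new Error('Missing body.data in transcription request')
  }
  const { audio, language, model } = body.data
  return { audio, language, model }
}

const input = readTranscribeInput({
  data: { audio: '<base64 audio>', language: 'en', model: 'whisper-1' },
})
console.log(input.language) // 'en'
```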
```ts
const body = await request.json()
const { prompt, size, duration, model } = body.data

const stream = streamVideoGeneration(
  openaiVideo(model ?? 'sora-2'),
  { prompt, size, duration },
  { pollingInterval: 3000, maxDuration: 600_000 },
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's check if the file exists and read its current content
cat -n examples/ts-react-chat/src/routes/api.generate.video.ts | head -30
```

Repository: TanStack/ai
Length of output: 897

🏁 Script executed:

```shell
# Check if Zod is already installed in the project
cat package.json | grep -A 50 '"dependencies"' | head -60
```

Length of output: 37

🏁 Script executed:

```shell
# Check for similar API routes in examples to see if they use Zod validation
fd -e ts -e tsx examples/ -exec grep -l "request.json()" {} \;
```

Length of output: 304

🏁 Script executed:

```shell
# Check if there's any existing validation pattern in similar example files
find examples -name "api*.ts" -type f | head -10
```

Length of output: 507

🏁 Script executed:

```shell
# Let's properly check package.json for Zod
grep -i zod package.json
```

Length of output: 37

🏁 Script executed:

```shell
# Check the full dependencies section
head -100 package.json | tail -50
```

Length of output: 2513

🏁 Script executed:

```shell
# Check other API route files to see validation patterns
cat -n examples/ts-react-chat/src/routes/api.summarize.ts
```

Length of output: 989

🏁 Script executed:

```shell
# Check another API file for comparison
cat -n examples/ts-react-chat/src/routes/api.generate.image.ts | head -30
```

Length of output: 1006

🏁 Script executed:

```shell
# Check the api-verification.ts file mentioned - seems relevant
cat -n examples/ts-react-chat/api-verification.ts
```

Length of output: 71

🏁 Script executed:

```shell
# Check if examples have their own package.json with Zod
cat -n examples/ts-react-chat/package.json | head -50
```

Length of output: 2062

🏁 Script executed:

```shell
# Check if Zod is used anywhere in the examples
rg -i "zod" examples/
```

Length of output: 1455

🏁 Script executed:

```shell
# Check if there are any toolDefinition usages in the example
rg "toolDefinition" examples/
```

Length of output: 3273

🏁 Script executed:

```shell
# Check the main packages to see where Zod is used
rg "from.*zod" packages/typescript/ | head -20
```

Length of output: 1865

🏁 Script executed:

```shell
# Check if there are API routes in the main packages that use Zod for validation
rg "request\.json()" packages/typescript/ -A 5 | head -40
```

Length of output: 2767

🏁 Script executed:

```shell
# Check the actual guideline definition to understand scope
cat CLAUDE.md | grep -A 10 -B 2 "toolDefinition"
```

Length of output: 1362

🏁 Script executed:

```shell
# Check if examples are explicitly excluded from certain guidelines
cat CLAUDE.md | grep -i "examples"
```

Length of output: 438

🏁 Script executed:

```shell
# Look at one of the guitar-tools files to see how Zod is used with toolDefinition
cat -n examples/ts-react-chat/src/lib/guitar-tools.ts | head -40
```

Length of output: 1666
Validate request payload with Zod before adapter invocation
Lines 9–15 parse request.json() and destructure body.data without runtime validation. This can crash on malformed input and forwards unchecked values to openaiVideo().
🛡️ Proposed fix
import { createFileRoute } from '@tanstack/react-router'
import { streamVideoGeneration, toServerSentEventsResponse } from '@tanstack/ai'
import { openaiVideo } from '@tanstack/ai-openai'
+import { z } from 'zod'
+
+const generateVideoRequestSchema = z.object({
+ data: z.object({
+ prompt: z.string().min(1),
+ size: z.string().optional(),
+ duration: z.number().int().positive().optional(),
+ model: z.string().optional(),
+ }),
+})
@@
POST: async ({ request }) => {
- const body = await request.json()
- const { prompt, size, duration, model } = body.data
+ const parsed = generateVideoRequestSchema.safeParse(await request.json())
+ if (!parsed.success) {
+ return new Response(
+ JSON.stringify({ error: 'Invalid request payload' }),
+ {
+ status: 400,
+ headers: { 'content-type': 'application/json' },
+ },
+ )
+ }
+
+ const { prompt, size, duration, model } = parsed.data.data

Per coding guidelines, use Zod for runtime schema validation in TypeScript files. This applies to request handling in server routes.
📝 Committable suggestion
```ts
import { createFileRoute } from '@tanstack/react-router'
import { streamVideoGeneration, toServerSentEventsResponse } from '@tanstack/ai'
import { openaiVideo } from '@tanstack/ai-openai'
import { z } from 'zod'

const generateVideoRequestSchema = z.object({
  data: z.object({
    prompt: z.string().min(1),
    size: z.string().optional(),
    duration: z.number().int().positive().optional(),
    model: z.string().optional(),
  }),
})

export const Route = createFileRoute('/api/generate/video')({
  methods: ['POST'],
  handler: async ({ request }) => {
    const parsed = generateVideoRequestSchema.safeParse(await request.json())
    if (!parsed.success) {
      return new Response(
        JSON.stringify({ error: 'Invalid request payload' }),
        {
          status: 400,
          headers: { 'content-type': 'application/json' },
        },
      )
    }

    const { prompt, size, duration, model } = parsed.data.data

    const stream = streamVideoGeneration(
      openaiVideo(model ?? 'sora-2'),
      { prompt, size, duration },
      { pollingInterval: 3000, maxDuration: 600_000 },
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/ts-react-chat/src/routes/api.generate.video.ts` around lines 9 - 15,
Validate the incoming JSON before using request.json() results: define a Zod
schema for the expected payload (shape containing data: { prompt: string, size?:
string, duration?: number, model?: string }), use schema.parse or safeParse to
validate the parsed body, and if validation fails return a 400 response; then
destructure validated.data and pass those validated values into openaiVideo(...)
and streamVideoGeneration(...). Ensure you reference the existing symbols
(request.json(), body.data, streamVideoGeneration, openaiVideo, and the fields
model/prompt/size/duration) so the adapter only receives validated inputs.
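The review's Zod schema is the recommended route; for completeness, the same shape check can be hand-rolled without a dependency. This is only a sketch of the guard the review asks for, with field names taken from the snippet above.

```typescript
// Dependency-free sketch of the payload guard proposed above (the Zod
// version is what the review recommends; this only illustrates the check).
interface VideoRequestData {
  prompt: string
  size?: string
  duration?: number
  model?: string
}

function parseVideoRequest(body: unknown): VideoRequestData | null {
  if (typeof body !== 'object' || body === null) return null
  const data = (body as { data?: unknown }).data
  if (typeof data !== 'object' || data === null) return null
  const d = data as Record<string, unknown>
  if (typeof d.prompt !== 'string' || d.prompt.length === 0) return null
  if (d.size !== undefined && typeof d.size !== 'string') return null
  if (
    d.duration !== undefined &&
    (typeof d.duration !== 'number' ||
      !Number.isInteger(d.duration) ||
      d.duration <= 0)
  ) {
    return null
  }
  if (d.model !== undefined && typeof d.model !== 'string') return null
  return {
    prompt: d.prompt,
    size: d.size as string | undefined,
    duration: d.duration as number | undefined,
    model: d.model as string | undefined,
  }
}

console.log(parseVideoRequest({ data: { prompt: '' } })) // → null
```

A `null` return maps to the 400 response in the proposed fix; a non-null return is safe to forward to the adapter.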
```ts
import {
  streamGenerationResult,
  generateTranscription,
  toServerSentEventsResponse,
} from '@tanstack/ai'
```
Resolve import sort lint failure
Line 2-Line 6 import members are not ordered per configured sort-imports rule (static analysis reports this as an error).
🧰 Tools
🪛 ESLint
[error] 4-4: Member 'generateTranscription' of the import declaration should be sorted alphabetically.
(sort-imports)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/ts-react-chat/src/routes/api.transcribe.ts` around lines 2 - 6, The
import specifiers from '@tanstack/ai' are not alphabetically ordered; reorder
them to satisfy the sort-imports rule by listing the named imports in
alphabetical order: generateTranscription, streamGenerationResult,
toServerSentEventsResponse (i.e., change the import line to import {
generateTranscription, streamGenerationResult, toServerSentEventsResponse } from
'@tanstack/ai'). Ensure there are no trailing commas or duplicate imports after
the change.
```ts
export interface UseGenerationReturn<TOutput> {
  /** Trigger a generation request */
  generate: (input: Record<string, any>) => Promise<void>
  /** The generation result, or null if not yet generated */
```
🧩 Analysis chain
🏁 Script executed:

```shell
cd packages/typescript/ai-react/src && wc -l use-generation.ts
```

Repository: TanStack/ai
Length of output: 76

🏁 Script executed:

```shell
cd packages/typescript/ai-react/src && cat -n use-generation.ts | sed -n '40,60p'
```

Length of output: 939

🏁 Script executed:

```shell
cd packages/typescript/ai-react/src && cat -n use-generation.ts | sed -n '85,105p'
```

Length of output: 973

🏁 Script executed:

```shell
cd packages/typescript/ai-react/src && cat -n use-generation.ts | sed -n '160,180p'
```

Length of output: 500

🏁 Script executed:

```shell
cd packages/typescript/ai-react/src && cat -n use-generation.ts | sed -n '1,40p'
```

Length of output: 2051

🏁 Script executed:

```shell
cd packages/typescript/ai-react/src && cat -n use-generation.ts | sed -n '130,175p'
```

Length of output: 1302

🏁 Script executed:

```shell
cd packages/typescript/ai-react && rg "UseGenerationReturn" -n
```

Length of output: 259

🏁 Script executed:

```shell
cd packages/typescript/ai-react/src && rg "useGeneration" -n | head -20
```

Length of output: 1172

🏁 Script executed:

```shell
cd packages/typescript/ai-react/src && cat -n use-summarize.ts | sed -n '95,110p'
```

Length of output: 642

🏁 Script executed:

```shell
cd packages/typescript/ai-react/src && cat -n use-generate-image.ts | sed -n '100,115p'
```

Length of output: 570

🏁 Script executed:

```shell
cd packages/typescript/ai-react/src && rg "UseGenerationReturn|UseGenerateImageReturn|UseSummarizeReturn" -n | head -20
```

Length of output: 717

🏁 Script executed:

```shell
cd packages/typescript/ai-react/src && cat -n use-generate-image.ts | sed -n '1,30p'
```

Length of output: 1455

🏁 Script executed:

```shell
cd packages/typescript/ai-react/src && cat -n use-summarize.ts | sed -n '40,55p'
```

Length of output: 712
Add TInput generic to UseGenerationReturn to restore type-safety for direct hook consumers.
Line 52 and line 171 widen the generate function input to Record<string, any>, causing direct useGeneration<TInput, ...> calls to lose input type validation at compile time. The internal generate function at line 156 correctly accepts TInput, but the return interface discards this information, forcing wrapper hooks (useSummarize, useGenerateImage) to cast the result back to specific input types (lines 105, 108).
♻️ Proposed fix
-export interface UseGenerationReturn<TOutput> {
+export interface UseGenerationReturn<
+ TOutput,
+ TInput extends Record<string, any> = Record<string, any>,
+> {
/** Trigger a generation request */
- generate: (input: Record<string, any>) => Promise<void>
+ generate: (input: TInput) => Promise<void>
/** The generation result, or null if not yet generated */
result: TOutput | null
/** Whether a generation is currently in progress */
isLoading: boolean
@@
-): UseGenerationReturn<InferGenerationOutput<TResult, TOnResult>> {
+): UseGenerationReturn<InferGenerationOutput<TResult, TOnResult>, TInput> {
@@
return {
- generate: generate as (input: Record<string, any>) => Promise<void>,
+ generate,
result,
isLoading,
  error,

Per coding guidelines: "Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai-react/src/use-generation.ts` around lines 50 - 53, The
UseGenerationReturn interface currently widens generate's input to
Record<string, any>, losing compile-time safety; update the interface to accept
a TInput generic (e.g., UseGenerationReturn<TOutput, TInput>) and change the
generate signature to generate: (input: TInput) => Promise<void> so it matches
the internal generate implementation (the generate function used in the hook at
line ~156) and restores correct typing for wrappers like
useSummarize/useGenerateImage without casts.
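The generic change the review proposes can be sketched in isolation. The following is a toy stand-in showing how carrying `TInput` through the return interface lets `generate` keep its concrete input type; it is not the actual hook code, and `SummarizeInput` is an assumed example shape.

```typescript
// Toy sketch of the review's proposal: carrying TInput through the return
// interface so `generate` keeps its concrete input type (not the real hook).
interface UseGenerationReturn<
  TOutput,
  TInput extends Record<string, unknown> = Record<string, unknown>,
> {
  generate: (input: TInput) => Promise<void>
  result: TOutput | null
  isLoading: boolean
}

interface SummarizeInput extends Record<string, unknown> {
  text: string
  maxLength?: number
}

function makeSummarizeHook(): UseGenerationReturn<string, SummarizeInput> {
  let result: string | null = null
  return {
    async generate(input) {
      // `input` is typed as SummarizeInput here; no cast needed.
      result = input.text.slice(0, input.maxLength ?? 20)
    },
    get result() {
      return result
    },
    isLoading: false,
  }
}

const hook = makeSummarizeHook()
void hook.generate({ text: 'hello world, this is a long text' })
console.log(hook.result) // 'hello world, this is'
```

With the widened `Record<string, any>` signature, the same call site would accept a misspelled payload without a compile error; preserving `TInput` surfaces that at build time.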
```ts
const client = createMemo(() => {
  const baseOptions = {
    id: clientId,
    body: options.body,
    onResult: (r: VideoGenerateResult) => options.onResult?.(r),
    onError: (e: Error) => options.onError?.(e),
    onProgress: (p: number, m?: string) => options.onProgress?.(p, m),
    onChunk: (c: StreamChunk) => options.onChunk?.(c),
    onJobCreated: (id: string) => options.onJobCreated?.(id),
    onStatusUpdate: (s: VideoStatusInfo) => options.onStatusUpdate?.(s),
    onResultChange: setResult,
    onLoadingChange: setIsLoading,
    onErrorChange: setError,
    onStatusChange: setStatus,
    onJobIdChange: setJobId,
    onVideoStatusChange: setVideoStatus,
  }

  if (options.connection) {
    return new VideoGenerationClient<TOutput>({
      ...baseOptions,
      connection: options.connection,
    })
  }

  if (options.fetcher) {
    return new VideoGenerationClient<TOutput>({
      ...baseOptions,
      fetcher: options.fetcher,
    })
  }

  throw new Error(
    'useGenerateVideo requires either a connection or fetcher option',
  )
}, [clientId])
```
🧩 Analysis chain
🌐 Web query:
SolidJS createMemo second argument dependency array API documentation
💡 Result:
SolidJS createMemo does not have a “dependency array” second argument (unlike React hooks). Dependencies are automatically tracked by whatever reactive values (signals/stores/memos) you read inside the memo function. [2]
Per the official API, the signature is:
createMemo(fn, value?, options?) [1]
- 1st argument (`fn`): computes the memo; it receives the previous value as its parameter. [1]
- 2nd argument (`value?`): initial value used for that "previous value" parameter on the first run (not dependencies). [1]
- 3rd argument (`options?`): options like `equals` (and `name`). [1]
Example (showing the real meaning of the 2nd argument): [1]
```ts
const trend = createMemo(
  (prev) => ({ value: count(), was: prev.value }),
  { value: 0 } // initial "prev"
);
```

Sources: Solid Docs createMemo reference [1], Solid Docs memo concept page [2].
Remove the incorrect second argument from createMemo.
In SolidJS, createMemo takes createMemo(fn, initialValue?, options?) where the second argument is the initial value for the previous-value parameter, not a dependency array. Dependencies are automatically tracked by reading reactive values (clientId, options) inside the memo function. Passing [clientId] is incorrect and misleading.
Proposed fix
- const client = createMemo(() => {
+ const client = createMemo(() => {
const baseOptions = {
id: clientId,
body: options.body,
onResult: (r: VideoGenerateResult) => options.onResult?.(r),
onError: (e: Error) => options.onError?.(e),
onProgress: (p: number, m?: string) => options.onProgress?.(p, m),
onChunk: (c: StreamChunk) => options.onChunk?.(c),
onJobCreated: (id: string) => options.onJobCreated?.(id),
onStatusUpdate: (s: VideoStatusInfo) => options.onStatusUpdate?.(s),
onResultChange: setResult,
onLoadingChange: setIsLoading,
onErrorChange: setError,
onStatusChange: setStatus,
onJobIdChange: setJobId,
onVideoStatusChange: setVideoStatus,
}
if (options.connection) {
return new VideoGenerationClient<TOutput>({
...baseOptions,
connection: options.connection,
})
}
if (options.fetcher) {
return new VideoGenerationClient<TOutput>({
...baseOptions,
fetcher: options.fetcher,
})
}
throw new Error(
'useGenerateVideo requires either a connection or fetcher option',
)
- }, [clientId])
+ })

📝 Committable suggestion
```ts
const client = createMemo(() => {
  const baseOptions = {
    id: clientId,
    body: options.body,
    onResult: (r: VideoGenerateResult) => options.onResult?.(r),
    onError: (e: Error) => options.onError?.(e),
    onProgress: (p: number, m?: string) => options.onProgress?.(p, m),
    onChunk: (c: StreamChunk) => options.onChunk?.(c),
    onJobCreated: (id: string) => options.onJobCreated?.(id),
    onStatusUpdate: (s: VideoStatusInfo) => options.onStatusUpdate?.(s),
    onResultChange: setResult,
    onLoadingChange: setIsLoading,
    onErrorChange: setError,
    onStatusChange: setStatus,
    onJobIdChange: setJobId,
    onVideoStatusChange: setVideoStatus,
  }

  if (options.connection) {
    return new VideoGenerationClient<TOutput>({
      ...baseOptions,
      connection: options.connection,
    })
  }

  if (options.fetcher) {
    return new VideoGenerationClient<TOutput>({
      ...baseOptions,
      fetcher: options.fetcher,
    })
  }

  throw new Error(
    'useGenerateVideo requires either a connection or fetcher option',
  )
})
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai-solid/src/use-generate-video.ts` around lines 133 -
168, The createMemo call in useGenerateVideo incorrectly passes a dependency
array ([clientId]) as the second argument; remove that second argument so the
call is just createMemo(() => { ... }) and let Solid track dependencies by
reading clientId and options inside the memo; ensure the function that
constructs VideoGenerationClient (references: createMemo, clientId, options,
VideoGenerationClient) remains unchanged otherwise.
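To illustrate why `[clientId]` is not a dependency list, here is a plain-TypeScript caricature of `createMemo(fn, initialValue)`: the second argument only seeds the `prev` parameter on the first run. This is an educational stand-in with no reactivity, not Solid's implementation.

```typescript
// Plain-TS caricature of createMemo(fn, initialValue): the second argument
// seeds `prev` for the first computation; it is NOT a dependency array.
// (No reactivity here; Solid tracks dependencies automatically.)
function createMemoLike<T>(fn: (prev: T) => T, initialValue: T): () => T {
  let value = initialValue
  let computed = false
  return () => {
    if (!computed) {
      value = fn(value)
      computed = true
    }
    return value
  }
}

const count = 3
const trend = createMemoLike(
  (prev) => ({ value: count, was: prev.value }),
  { value: 0, was: 0 }, // initial "prev", same shape as the computed value
)
console.log(trend()) // { value: 3, was: 0 }
```

Passing an array as the second argument would merely become the first `prev` value, silently doing nothing useful, which is why the review flags it as misleading.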
```ts
const client = createMemo(() => {
  const clientOptions: GenerationClientOptions<TInput, TResult, TOutput> = {
    id: clientId,
    body: options.body,
    onResult: (r: TResult) => options.onResult?.(r),
    onError: (e: Error) => options.onError?.(e),
    onProgress: (p: number, m?: string) => options.onProgress?.(p, m),
    onChunk: (c: StreamChunk) => options.onChunk?.(c),
    onResultChange: setResult,
    onLoadingChange: setIsLoading,
    onErrorChange: setError,
    onStatusChange: setStatus,
  }

  if (options.connection) {
    return new GenerationClient<TInput, TResult, TOutput>({
      ...clientOptions,
      connection: options.connection,
    })
  }

  if (options.fetcher) {
    return new GenerationClient<TInput, TResult, TOutput>({
      ...clientOptions,
      fetcher: options.fetcher,
    })
  }

  throw new Error(
    'useGeneration requires either a connection or fetcher option',
  )
}, [clientId])
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Check current createMemo usage in the codebase
rg -n "createMemo\(" --type ts packages/typescript/ai-solid/src/ -A 1 | head -30
```

Repository: TanStack/ai
Length of output: 625
🌐 Web query:
Solid.js createMemo API signature documentation
💡 Result:
Solid’s official docs list createMemo with this TypeScript signature:
```ts
function createMemo<T>(
  fn: (v: T) => T,
  value?: T,
  options?: { equals?: false | ((prev: T, next: T) => boolean); name?: string }
): () => T;
```

- `fn` is called to compute the value and receives the previous memo value as its argument.
- Returns a read-only accessor `() => T`. [1]

Docs page (official): https://docs.solidjs.com/reference/basic-reactivity/create-memo [1]
Sources: [1] Solid Docs – “createMemo” API reference.
Remove the unnecessary second argument from createMemo.
Solid.js's createMemo does not accept a dependency array. The second argument, when provided, should be an initial value for the memo (of the same type as the computed value). Dependencies are automatically tracked. The [clientId] array here is incorrect syntax for Solid.js.
🔧 Proposed fix
const client = createMemo(() => {
const clientOptions: GenerationClientOptions<TInput, TResult, TOutput> = {
id: clientId,
body: options.body,
onResult: (r: TResult) => options.onResult?.(r),
onError: (e: Error) => options.onError?.(e),
onProgress: (p: number, m?: string) => options.onProgress?.(p, m),
onChunk: (c: StreamChunk) => options.onChunk?.(c),
onResultChange: setResult,
onLoadingChange: setIsLoading,
onErrorChange: setError,
onStatusChange: setStatus,
}
if (options.connection) {
return new GenerationClient<TInput, TResult, TOutput>({
...clientOptions,
connection: options.connection,
})
}
if (options.fetcher) {
return new GenerationClient<TInput, TResult, TOutput>({
...clientOptions,
fetcher: options.fetcher,
})
}
throw new Error(
'useGeneration requires either a connection or fetcher option',
)
- }, [clientId])
+ })

📝 Committable suggestion
```ts
const client = createMemo(() => {
  const clientOptions: GenerationClientOptions<TInput, TResult, TOutput> = {
    id: clientId,
    body: options.body,
    onResult: (r: TResult) => options.onResult?.(r),
    onError: (e: Error) => options.onError?.(e),
    onProgress: (p: number, m?: string) => options.onProgress?.(p, m),
    onChunk: (c: StreamChunk) => options.onChunk?.(c),
    onResultChange: setResult,
    onLoadingChange: setIsLoading,
    onErrorChange: setError,
    onStatusChange: setStatus,
  }

  if (options.connection) {
    return new GenerationClient<TInput, TResult, TOutput>({
      ...clientOptions,
      connection: options.connection,
    })
  }

  if (options.fetcher) {
    return new GenerationClient<TInput, TResult, TOutput>({
      ...clientOptions,
      fetcher: options.fetcher,
    })
  }

  throw new Error(
    'useGeneration requires either a connection or fetcher option',
  )
})
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai-solid/src/use-generation.ts` around lines 111 - 142,
The createMemo call for the client variable incorrectly passes a dependency
array as a second argument; Solid's createMemo tracks dependencies automatically
and accepts an optional initial value (not a deps array). Remove the second
argument ([clientId]) from the createMemo invocation so it only receives the
factory function that builds client using clientOptions and branches for
options.connection/options.fetcher (leaving the GenerationClient construction
and error throw unchanged).
```ts
const mockResult = {
  audio: 'base64data',
  format: 'mp3' as const,
  model: 'tts-1',
}
```
Align speech test fixtures with the expected TTSResult shape (include id).
Current fixtures validate only partial payloads, which weakens contract fidelity for speech paths.
Proposed fix
```diff
 const mockResult = {
+  id: 'tts-1',
   audio: 'base64data',
   format: 'mp3' as const,
   model: 'tts-1',
 }
```

```diff
-const mockResult = { audio: 'base64data', format: 'mp3', model: 'tts-1' }
+const mockResult = { id: 'tts-1', audio: 'base64data', format: 'mp3', model: 'tts-1' }
```

```diff
 fetcher: async () => ({
+  id: 'tts-1',
   audio: 'base64data',
   format: 'mp3' as const,
   model: 'tts-1',
 }),
```

Also applies to: 321-321, 352-356
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai-svelte/tests/create-generation.test.ts` around lines
304 - 308, The speech test fixtures use mockResult objects that don't include
the required TTSResult.id property; update each test fixture (the mockResult
declarations around create-generation.test.ts lines where mockResult is defined
and the other occurrences at the noted ranges) to include a valid id string
(e.g., id: 'mock-tts-id') so the mocked payload matches the full TTSResult shape
used by functions under test (ensure all three occurrences—initial mockResult
and the other two fixtures—are updated).
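A fixture factory keeps every test payload aligned with the full result shape in one place. This sketch assumes the `TTSResult` fields named in the review (`id`, `audio`, `format`, `model`) and is not the library's actual type.

```typescript
// Sketch of a fixture factory so every speech test gets a complete payload
// (field list, including `id`, is assumed from the review above).
interface TTSResultLike {
  id: string
  audio: string
  format: 'mp3' | 'wav'
  model: string
}

function makeTTSFixture(overrides: Partial<TTSResultLike> = {}): TTSResultLike {
  return {
    id: 'mock-tts-id',
    audio: 'base64data',
    format: 'mp3',
    model: 'tts-1',
    ...overrides,
  }
}

const fixture = makeTTSFixture({ model: 'tts-1-hd' })
console.log(fixture.id, fixture.model) // 'mock-tts-id' 'tts-1-hd'
```

Adding a required field later then only touches the factory, not every test that builds a mock result by hand.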
```ts
export interface UseGenerationReturn<TOutput> {
  /** Trigger a generation request */
  generate: (input: Record<string, any>) => Promise<void>
  /** The generation result, or null if not yet generated */
```
🧩 Analysis chain
🏁 Script executed:
```shell
cat -n packages/typescript/ai-vue/src/use-generation.ts | head -200
```

Repository: TanStack/ai
Length of output: 7156
Vue base composable widens generate input from TInput to Record<string, any>.
The UseGenerationReturn interface (line 51) lacks a TInput generic parameter and hardcodes the input type. Combined with the cast at line 173, this loses compile-time type safety for direct useGeneration<TInput, ...> consumers despite the function correctly accepting TInput internally.
♻️ Proposed fix
-export interface UseGenerationReturn<TOutput> {
+export interface UseGenerationReturn<
+ TOutput,
+ TInput extends Record<string, any> = Record<string, any>,
+> {
/** Trigger a generation request */
- generate: (input: Record<string, any>) => Promise<void>
+ generate: (input: TInput) => Promise<void>
@@
-): UseGenerationReturn<InferGenerationOutput<TResult, TOnResult>> {
+): UseGenerationReturn<InferGenerationOutput<TResult, TOnResult>, TInput> {
@@
return {
- generate: generate as (input: Record<string, any>) => Promise<void>,
+ generate,
result: readonly(result),
  isLoading: readonly(isLoading),

Affects lines 51–54, 98, and 173–174. Per coding guidelines: "Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai-vue/src/use-generation.ts` around lines 51 - 54,
Update the UseGenerationReturn interface to preserve the TInput generic so
callers keep compile-time safety: add a TInput type parameter to
UseGenerationReturn and change the generate signature from generate: (input:
Record<string, any>) => Promise<void> to generate: (input: TInput) =>
Promise<void>; then adjust the useGeneration function's return type to
UseGenerationReturn<TInput, TOutput, TParams> (or matching generic arity) and
remove the unsafe cast at the site where generate is returned so the real TInput
flows through without widening to Record<string, any>.
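The Vue fix mirrors the React one. This standalone sketch (plain functions, not Vue's real composable) shows the kind of payload mistake the widened signature lets through at compile time, which the `TInput` generic would catch.

```typescript
// Standalone sketch (not the Vue composable): the widened signature accepts
// any record, so payload typos only surface at runtime; the typed version
// would reject them at compile time.
type WideGenerate = (input: Record<string, unknown>) => Promise<void>
type TypedGenerate<TInput> = (input: TInput) => Promise<void>

interface ImageInput extends Record<string, unknown> {
  prompt: string
  size?: string
}

const calls: ImageInput[] = []
const generateImpl: TypedGenerate<ImageInput> = async (input) => {
  calls.push(input)
}

// Widened: the misspelled key compiles fine and only fails at runtime.
const widened: WideGenerate = generateImpl as WideGenerate
void widened({ promt: 'a cat' }) // note the typo: not caught by the compiler

// Typed: the same typo would be a compile-time error with TypedGenerate.
void generateImpl({ prompt: 'a cat', size: '512x512' })

console.log(calls.length) // 2
```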
Summary

- `streamGenerationResult()` and `streamVideoGeneration()` server-side helpers added to `@tanstack/ai` for wrapping one-shot generation tasks as SSE-compatible `StreamChunk` iterables
- `GenerationClient` and `VideoGenerationClient` state machines added to `@tanstack/ai-client`, with support for both streaming (`ConnectionAdapter`) and direct (`fetcher`) transport modes
- Framework hooks: `useGenerateImage`, `useGenerateSpeech`, `useGenerateVideo`, `useTranscription`, `useSummarize` (Svelte uses `create*` naming)
- `InferGenerationOutput` conditional type for type-safe `onResult` transforms without overloads

Test plan

- Unit tests added across `ai`, `ai-client`, `ai-react`, `ai-solid`, `ai-svelte`, `ai-vue`
- Type tests pass (`pnpm test:types`)
- Full `pnpm test` suite passes (31 projects)

🤖 Generated with Claude Code
Summary by CodeRabbit
Release Notes