LCORE-1350: Refactored utilities for more generic use#1198

Open
asimurka wants to merge 1 commit into lightspeed-core:main from asimurka:utility_refactoring

Conversation

@asimurka asimurka commented Feb 23, 2026

Description

This PR refactors the query utility functions. With the upcoming responses endpoint, our utility functions need to accept atomic or responses-compatible types so that they can be reused in the new responses endpoint without additional workarounds.

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up service version
  • Bump-up dependent library
  • Bump-up library or tool used for development (does not change the final image)
  • CI configuration change
  • Konflux configuration change
  • Unit tests improvement
  • Integration tests improvement
  • End to end tests improvement
  • Benchmarks improvement

Tools used to create PR

Identify any AI code assistants used in this PR (for transparency and review context)

  • Assisted-by: Cursor

Related Tickets & Documents

Checklist before requesting a review

  • I have performed a self-review of my code.
  • PR has passed all pre-merge test jobs.
  • If it is a core feature, I have added thorough tests.

Testing

  • Please provide detailed steps to perform tests related to this code change.
  • How were the fix/results from this change verified? Please provide relevant screenshots or results.

Summary by CodeRabbit

  • New Features

    • Transcript metadata and a transcript object model for richer conversation records.
    • Enhanced response text extraction handling more content types.
    • Added automatic model selection and referenced-document deduplication for responses.
  • Bug Fixes

    • More accurate token usage reporting.
    • Safer streaming behavior when responses are missing.
  • Refactor

    • Centralized configuration for system prompts and cache flows.
    • Simplified transcript and response assembly processes.

coderabbitai bot commented Feb 23, 2026

Walkthrough

Consolidates response handling and text-extraction APIs, migrates per-call AppConfig usage to a global configuration, introduces transcript model objects and new transcript helpers, and updates endpoint call sites and tests to use the new response/transcript utilities and signatures.

Changes

Cohort / File(s) Summary
Responses core & types
src/utils/responses.py
Replaced OpenAI-prefixed types with Response* aliases; added build_turn_summary, select_model_for_responses, extract_text_from_output_items, extract_text_from_output_item, _extract_text_from_content; changed signatures for prepare_tools, extract_token_usage, parse_referenced_documents, build_tool_call_summary, and get_mcp_tools.
Query / token / transcript logic
src/utils/query.py, src/utils/transcripts.py, src/utils/types.py
Removed per-call AppConfig parameters; introduced TranscriptMetadata/Transcript models and transcript creators; updated store_transcript to accept a Transcript object; changed store_conversation_into_cache, store_query_results, consume_query_tokens, and validate_model_provider_override signatures and flows; removed select_model_and_provider_id and evaluate_model_hints.
Endpoints calling responses utilities
src/app/endpoints/query.py, src/app/endpoints/streaming_query.py, src/app/endpoints/a2a.py, src/app/endpoints/rlsapi_v1.py
Replaced older extraction helpers with extract_text_from_output_item(s) and use build_turn_summary in response flows; updated RBAC validation to pass explicit model/provider; removed passing configuration into token/storage calls; adjusted get_mcp_tools call to no-arg.
Prompts / configuration usage
src/utils/prompts.py
get_system_prompt now accepts system_prompt: Optional[str] and uses module-level configuration; get_topic_summary_system_prompt signature updated to no-arg and uses configuration-driven defaults.
Models / errors
src/models/responses.py
NotFoundResponse.__init__ now accepts optional resource_id and changes the cause message when resource_id is None.
Tests updated
tests/unit/..., tests/integration/...
Updated imports, mocks, and assertions across unit and integration tests to reflect renamed functions, new signatures, global configuration usage, transcript object usage, and new response usage objects. Removed tests for deleted utilities.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Suggested labels

ok-to-test

Suggested reviewers

  • tisnik
  • are-ces
🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title accurately summarizes the main refactoring effort to make utilities more generic for reuse across endpoints.
Docstring Coverage ✅ Passed Docstring coverage is 99.24% which is sufficient. The required threshold is 80.00%.





- def get_system_prompt(query_request: QueryRequest, config: AppConfig) -> str:
+ def get_system_prompt(system_prompt: Optional[str]) -> str:

Do not pass the whole query-specific request type.



- def get_topic_summary_system_prompt(config: AppConfig) -> str:
+ def get_topic_summary_system_prompt() -> str:

Do not pass as an argument

)


def select_model_and_provider_id(

Deprecated. Use new function that generalizes for query and responses usage.

return _is_inout_shield(shield) or not is_output_shield(shield)


def evaluate_model_hints(

Deprecated.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 10

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
src/utils/prompts.py (1)

11-33: ⚠️ Potential issue | 🟡 Minor

Update the docstring and typing to match the function signature and implementation.

The docstring references config and query_request parameters that don't exist in the function signature. The function accepts only system_prompt, and internally uses configuration (not config). Also, update the type annotation from Optional[str] to str | None per the modern union syntax requirement.

  • Line 11: Change Optional[str] to str | None and remove the unused Optional import
  • Lines 17-19: Update docstring to reference configuration.customization.custom_profile (not config)
  • Line 19: Update to configuration.customization.system_prompt
  • Line 23: Update to configuration.customization.disable_query_system_prompt
  • Line 24: Change "the incoming query_request contains a system_prompt" to clarify the function parameter, not a query request object
  • Lines 28-31: Remove the config (AppConfig): parameter from the docstring (it's not a parameter); update system_prompt parameter description to be concise
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/prompts.py` around lines 11 - 33, Update the get_system_prompt
signature and docstring to match the implementation: change the type annotation
from Optional[str] to the modern union syntax str | None and remove the unused
Optional import; in the docstring replace all references to "config" and
"query_request" with the actual runtime variable "configuration" (e.g.,
configuration.customization.custom_profile,
configuration.customization.system_prompt,
configuration.customization.disable_query_system_prompt), clarify that the
function parameter is the incoming per-request system_prompt (not a
query_request object), and remove the nonexistent "config (AppConfig):"
parameter block while making the system_prompt parameter description concise and
accurate.
tests/unit/utils/test_transcripts.py (1)

63-66: ⚠️ Potential issue | 🟡 Minor

Mocking construct_transcripts_path hides a critical path-construction regression.

construct_transcripts_path is fully mocked here, so the test cannot detect that the path will be constructed with a double-hashed user ID in production. Recommend at minimum an assertion that construct_transcripts_path was called with the once-hashed user ID (i.e., _hash_user_id("user123")), not the raw value and not a double-hash:

from utils.transcripts import _hash_user_id   # expose for assertion
...
mock_construct = mocker.patch(
    "utils.transcripts.construct_transcripts_path",
    return_value=mocker.MagicMock(),
)
...
store_transcript(transcript)

expected_hashed = _hash_user_id(user_id)
mock_construct.assert_called_once_with(expected_hashed, conversation_id)

See the critical issue raised on store_transcript in src/utils/transcripts.py for the root cause.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/utils/test_transcripts.py` around lines 63 - 66, The test
currently fully mocks construct_transcripts_path which hides a regression where
the user_id could be double-hashed; update the test that patches
"utils.transcripts.construct_transcripts_path" to also import and use
_hash_user_id and assert that construct_transcripts_path was called once with
the once-hashed user id and the conversation_id (e.g., compute expected_hashed =
_hash_user_id(user_id) and then call
mock_construct.assert_called_once_with(expected_hashed, conversation_id)),
keeping the patch return as a MagicMock but adding this assertion around
store_transcript to catch double-hash regressions in store_transcript.
src/utils/query.py (1)

236-246: ⚠️ Potential issue | 🟡 Minor

Stale docstring: configuration parameter is still documented but was removed from the signature.

Line 244 in the docstring still lists configuration: Application configuration as an argument, but the function no longer accepts it.

📝 Suggested fix
     Args:
         user_id: The authenticated user ID
         conversation_id: The conversation ID
         model: The model identifier
         started_at: ISO formatted timestamp when the request started
         completed_at: ISO formatted timestamp when the request completed
         summary: Summary of the turn including LLM response and tool calls
         query_request: The original query request
-        configuration: Application configuration
         skip_userid_check: Whether to skip user ID validation
         topic_summary: Optional topic summary for the conversation
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/query.py` around lines 236 - 246, The docstring for the function
that documents parameters user_id, conversation_id, model, started_at,
completed_at, summary, query_request, configuration, skip_userid_check, and
topic_summary is stale: remove the `configuration: Application configuration`
entry (or replace it with the actual current parameter if the function now
accepts a differently named config) so the Args section matches the function
signature; update the Args list near the top of the docstring to only include
parameters actually present (e.g., user_id, conversation_id, model, started_at,
completed_at, summary, query_request, skip_userid_check, topic_summary).
🧹 Nitpick comments (3)
tests/unit/utils/test_transcripts.py (1)

104-111: Use distinct values for model_id/query_model and provider_id/query_provider to strengthen field mapping coverage.

model_id and query_model are both "fake-model", and provider_id and query_provider are both "fake-provider". Any accidental transposition of these arguments in create_transcript_metadata would still pass all downstream assertions.

♻️ Suggested fix: use distinct values to disambiguate fields
- model = "fake-model"
- provider = "fake-provider"
+ model = "fake-model"
+ provider = "fake-provider"
+ query_model_override = "query-model"
+ query_provider_override = "query-provider"
  ...
  metadata = create_transcript_metadata(
      user_id=user_id,
      conversation_id=conversation_id,
      model_id=model,
      provider_id=provider,
-     query_provider=query_request.provider,
-     query_model=query_request.model,
+     query_provider=query_provider_override,
+     query_model=query_model_override,
  )
  ...
- assert stored_data["metadata"]["query_provider"] == query_request.provider
- assert stored_data["metadata"]["query_model"] == query_request.model
+ assert stored_data["metadata"]["query_provider"] == query_provider_override
+ assert stored_data["metadata"]["query_model"] == query_model_override
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/utils/test_transcripts.py` around lines 104 - 111, The test uses
identical values for the model and provider pairs which masks argument
transposition; update the test call to create_transcript_metadata so model_id !=
query_model and provider_id != query_provider (e.g., use distinct strings for
model_id vs query_request.model and for provider_id vs query_request.provider)
to ensure create_transcript_metadata receives and maps each parameter correctly
(refer to the create_transcript_metadata invocation and the
query_request.model/query_request.provider variables to locate and change the
values).
src/utils/responses.py (1)

988-1019: _extract_text_from_content strips each fragment before joining — verify this is intentional.

Each text/refusal part is .strip()-ed before being space-joined. This means leading/trailing whitespace in individual content parts is discarded. If a content part intentionally ends with whitespace (e.g., "sentence one. "), the strip will alter the final output. The current tests happen to match because the join inserts a space, but this could cause subtle differences for content with intentional formatting.

If this is intentional (normalize all whitespace), this is fine. Just flagging for awareness.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/responses.py` around lines 988 - 1019, The function
_extract_text_from_content currently calls .strip() on each InputTextPart.text,
OutputTextPart.text, and ContentPartRefusal.refusal which removes intentional
leading/trailing whitespace; remove the .strip() calls so you append the raw
text/refusal values (still checking for truthiness) — e.g., change
text_fragments.append(input_text_part.text.strip()) to
text_fragments.append(input_text_part.text) (and similarly for
output_text_part.text and refusal_part.refusal) while preserving the existing
truthy checks, casts (InputTextPart, OutputTextPart, ContentPartRefusal), and
the final " ".join behavior.
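The whitespace concern above can be reproduced in isolation. This is a minimal illustration of the review's point, not the project's extraction code: stripping each fragment before a space-join normalizes whitespace, which can differ from concatenating the raw parts.

```python
# Two content fragments where the first intentionally ends with a space.
parts = ["sentence one. ", " sentence two"]

# What the current implementation effectively does: strip, then space-join.
stripped_join = " ".join(p.strip() for p in parts if p)

# What raw concatenation would produce instead.
raw_concat = "".join(parts)

print(repr(stripped_join))  # 'sentence one. sentence two'
print(repr(raw_concat))     # 'sentence one.  sentence two'
```

The outputs coincide only when fragments carry exactly one boundary space, which is why the existing tests happen to pass.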
src/app/endpoints/query.py (1)

258-270: Consider moving deduplicate_referenced_documents to a shared utility module for consistency and maintainability.

This pure utility function fits naturally with parse_referenced_documents in src/utils/responses.py, which already houses response-related utilities. While currently used only in query.py, consolidating related document-handling utilities in a single module improves code organization and makes future reuse easier.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/app/endpoints/query.py` around lines 258 - 270, Move the pure utility
function deduplicate_referenced_documents into the same utilities module that
contains parse_referenced_documents (responses.py), keeping its signature and
docstring unchanged; then replace the local definition in query.py with an
import from responses (e.g., from responses import
deduplicate_referenced_documents), update any imports/exports as needed so other
modules can reuse it, and run tests to ensure no import regressions.
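The suggested shared utility could be sketched as an order-preserving deduplication keyed on document identity. The `ReferencedDocument` model below is an illustrative stand-in; the project's actual model and key fields may differ:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ReferencedDocument:
    # Illustrative stand-in for the project's referenced-document model.
    doc_url: str
    doc_title: str


def deduplicate_referenced_documents(
    documents: list[ReferencedDocument],
) -> list[ReferencedDocument]:
    """Drop duplicate documents while preserving first-seen order."""
    seen: set[tuple[str, str]] = set()
    unique: list[ReferencedDocument] = []
    for doc in documents:
        key = (doc.doc_url, doc.doc_title)
        if key not in seen:
            seen.add(key)
            unique.append(doc)
    return unique
```

Being a pure function over its input, it has no reason to live in an endpoint module, which is the thrust of the nitpick.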
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/openapi.json`:
- Line 4375: Both the GET and POST operations for the /a2a endpoint use the same
operationId "handle_a2a_jsonrpc_a2a_get", which must be unique; update the
operationId values so they are distinct (e.g., change the POST operationId to
"handle_a2a_jsonrpc_a2a_post" or another unique name). Locate the occurrences of
operationId "handle_a2a_jsonrpc_a2a_get" in the OpenAPI document and rename the
POST one (and any other duplicates) to a unique identifier, ensuring any
references in server/client codegen or tooling are updated to match the new
names.

In `@Makefile`:
- Line 36: The Makefile test-e2e invocation uses the `script -q -e -c "uv run
behave ..."` command which leaves a stray "typescript" file; modify the command
used by the make test-e2e target to pass /dev/null as the output file to
`script` (e.g., `script -q -e /dev/null -c "uv run behave ..."`), so the
pseudo-TTY is provided but no transcript is written.
- Line 36: The Makefile target uses the `script` command without an explicit
output file which causes a `typescript` file to be created; update the
invocation of `script -q -e -c "uv run behave --color --format pretty
--tags=-skip -D dump_errors=true `@tests/e2e/test_list.txt`"` to pass `/dev/null`
as the output transcript (i.e., add `/dev/null` as the output-file argument to
`script`) so the pseudo-TTY is provided but no `typescript` artifact is written.
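The fixed target might look like the fragment below (the target name and recipe are taken from the review; the surrounding Makefile context is assumed):

```make
# Give `script` an explicit /dev/null transcript so a pseudo-TTY is still
# provided but no stray "typescript" file is left in the working directory.
test-e2e:
	script -q -e /dev/null -c "uv run behave --color --format pretty --tags=-skip -D dump_errors=true @tests/e2e/test_list.txt"
```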

In `@src/models/responses.py`:
- Around line 1756-1763: The constructor NotFoundResponse.__init__ uses
Optional[str] for the resource_id annotation; update the signature to use modern
union syntax (resource_id: str | None) per project Python 3.12+ standards, and
adjust any related type hints inside the class/file accordingly; also remove the
now-unused Optional import from typing if it becomes unused to keep imports
clean.
- Around line 1756-1769: The __init__ signature in the NotFoundResponse
constructor uses Optional[str] for resource_id; update the type annotation to
use modern union syntax `str | None` in the __init__ method signature (the
parameter named resource_id in the __init__ of the class in
src/models/responses.py) and remove any now-unused Optional import from typing
if present; keep the rest of the docstring and logic unchanged.

In `@src/utils/responses.py`:
- Around line 920-943: The build_turn_summary code currently reads
response.usage unguarded leading to AttributeError for mock/odd responses;
change the final token usage extraction to use a safe getattr (e.g., usage =
getattr(response, "usage", None)) and pass that usage into extract_token_usage
(or skip/extract None) so extract_token_usage receives None instead of raising;
update the call around summary.token_usage = extract_token_usage(response.usage,
model) to use the safe variable and ensure extract_token_usage can accept None.
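The guarded-access pattern the comment asks for can be sketched as follows. `extract_token_usage` here is a placeholder showing only the None-tolerant contract, not the project's real function or its return shape:

```python
from types import SimpleNamespace


def extract_token_usage(usage, model: str) -> dict:
    # Placeholder: tolerate a missing usage object instead of raising.
    if usage is None:
        return {"model": model, "input_tokens": 0, "output_tokens": 0}
    return {
        "model": model,
        "input_tokens": usage.input_tokens,
        "output_tokens": usage.output_tokens,
    }


def summarize(response, model: str) -> dict:
    usage = getattr(response, "usage", None)  # mocks may lack a .usage attribute
    return extract_token_usage(usage, model)
```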

In `@src/utils/transcripts.py`:
- Around line 85-87: store_transcript is double-hashing user IDs because
create_transcript_metadata already stores a hashed ID
(TranscriptMetadata.user_id) and construct_transcripts_path always calls
_hash_user_id again; change the path construction to avoid the second hash by
adding a parameter to construct_transcripts_path (e.g., pre_hashed or
hashed=True) and, in store_transcript, call
construct_transcripts_path(transcript.metadata.user_id, pre_hashed=True) so
construct_transcripts_path skips calling _hash_user_id when the caller supplies
an already-hashed ID; update any callers or tests accordingly to preserve the
single-hash invariant.
- Around line 102-133: Update the type annotations of the new parameters in
create_transcript_metadata to use modern union syntax (str | None) instead of
Optional[str]; change provider_id, query_provider, and query_model annotations
to str | None in the create_transcript_metadata signature and ensure any related
type hints or usages in that function (e.g., when constructing
TranscriptMetadata) remain compatible with the new annotations.

In `@src/utils/types.py`:
- Around line 202-225: Update the TranscriptMetadata and Transcript models to
use modern union syntax and add Attributes sections to their docstrings: replace
Optional[str] with the PEP 604 form (str | None) for provider and
query_provider/query_model in TranscriptMetadata, and update any other Optional
usages in these classes accordingly; then expand the docstrings for both
TranscriptMetadata and Transcript to include a Google-style "Attributes:"
section that documents each field (e.g., provider, model, user_id,
conversation_id, timestamp, metadata, redacted_query, query_is_valid,
llm_response, rag_chunks, truncated, attachments, tool_calls, tool_results) with
brief types/purposes so the docstrings conform to project conventions.

In `@tests/unit/utils/test_responses.py`:
- Around line 67-80: The constructor for MockContentPart (__init__) can leave
self.type unset when part_type, text, and refusal are all falsy; update __init__
so self.type is always assigned (e.g., set self.type = part_type if provided,
elif text -> "output_text", elif refusal -> "refusal", else set self.type =
"unknown" or None) so any code accessing MockContentPart.type will not raise
AttributeError.
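The fix for the test double can be sketched as below; the class is reconstructed from the comment's description and is hypothetical, not the test file's exact code:

```python
# Ensure `type` is always assigned so accessing .type never raises AttributeError.
class MockContentPart:
    def __init__(self, part_type=None, text=None, refusal=None):
        self.text = text
        self.refusal = refusal
        if part_type:
            self.type = part_type
        elif text:
            self.type = "output_text"
        elif refusal:
            self.type = "refusal"
        else:
            self.type = "unknown"  # fallback: never leave .type unset
```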

---

Outside diff comments:
In `@src/utils/prompts.py`:
- Around line 11-33: Update the get_system_prompt signature and docstring to
match the implementation: change the type annotation from Optional[str] to the
modern union syntax str | None and remove the unused Optional import; in the
docstring replace all references to "config" and "query_request" with the actual
runtime variable "configuration" (e.g.,
configuration.customization.custom_profile,
configuration.customization.system_prompt,
configuration.customization.disable_query_system_prompt), clarify that the
function parameter is the incoming per-request system_prompt (not a
query_request object), and remove the nonexistent "config (AppConfig):"
parameter block while making the system_prompt parameter description concise and
accurate.

In `@src/utils/query.py`:
- Around line 236-246: The docstring for the function that documents parameters
user_id, conversation_id, model, started_at, completed_at, summary,
query_request, configuration, skip_userid_check, and topic_summary is stale:
remove the `configuration: Application configuration` entry (or replace it with
the actual current parameter if the function now accepts a differently named
config) so the Args section matches the function signature; update the Args list
near the top of the docstring to only include parameters actually present (e.g.,
user_id, conversation_id, model, started_at, completed_at, summary,
query_request, skip_userid_check, topic_summary).

In `@tests/unit/utils/test_transcripts.py`:
- Around line 63-66: The test currently fully mocks construct_transcripts_path
which hides a regression where the user_id could be double-hashed; update the
test that patches "utils.transcripts.construct_transcripts_path" to also import
and use _hash_user_id and assert that construct_transcripts_path was called once
with the once-hashed user id and the conversation_id (e.g., compute
expected_hashed = _hash_user_id(user_id) and then call
mock_construct.assert_called_once_with(expected_hashed, conversation_id)),
keeping the patch return as a MagicMock but adding this assertion around
store_transcript to catch double-hash regressions in store_transcript.

---

Nitpick comments:
In `@src/app/endpoints/query.py`:
- Around line 258-270: Move the pure utility function
deduplicate_referenced_documents into the same utilities module that contains
parse_referenced_documents (responses.py), keeping its signature and docstring
unchanged; then replace the local definition in query.py with an import from
responses (e.g., from responses import deduplicate_referenced_documents), update
any imports/exports as needed so other modules can reuse it, and run tests to
ensure no import regressions.

In `@src/utils/responses.py`:
- Around line 988-1019: The function _extract_text_from_content currently calls
.strip() on each InputTextPart.text, OutputTextPart.text, and
ContentPartRefusal.refusal which removes intentional leading/trailing
whitespace; remove the .strip() calls so you append the raw text/refusal values
(still checking for truthiness) — e.g., change
text_fragments.append(input_text_part.text.strip()) to
text_fragments.append(input_text_part.text) (and similarly for
output_text_part.text and refusal_part.refusal) while preserving the existing
truthy checks, casts (InputTextPart, OutputTextPart, ContentPartRefusal), and
the final " ".join behavior.

In `@tests/unit/utils/test_transcripts.py`:
- Around line 104-111: The test uses identical values for the model and provider
pairs which masks argument transposition; update the test call to
create_transcript_metadata so model_id != query_model and provider_id !=
query_provider (e.g., use distinct strings for model_id vs query_request.model
and for provider_id vs query_request.provider) to ensure
create_transcript_metadata receives and maps each parameter correctly (refer to
the create_transcript_metadata invocation and the
query_request.model/query_request.provider variables to locate and change the
values).

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b9ba4ab and f754544.

📒 Files selected for processing (19)
  • Makefile
  • docs/openapi.json
  • src/app/endpoints/a2a.py
  • src/app/endpoints/query.py
  • src/app/endpoints/rlsapi_v1.py
  • src/app/endpoints/streaming_query.py
  • src/models/responses.py
  • src/utils/prompts.py
  • src/utils/query.py
  • src/utils/responses.py
  • src/utils/transcripts.py
  • src/utils/types.py
  • tests/unit/app/endpoints/test_a2a.py
  • tests/unit/app/endpoints/test_query.py
  • tests/unit/app/endpoints/test_rlsapi_v1.py
  • tests/unit/utils/test_prompts.py
  • tests/unit/utils/test_query.py
  • tests/unit/utils/test_responses.py
  • tests/unit/utils/test_transcripts.py

logger.exception("Error storing transcript: %s", e)
response = InternalServerErrorResponse.generic()
raise HTTPException(**response.model_dump()) from e
logger.info("Storing transcript")

Separate construction into multiple functions and do not pass the entire request, so it can be reused for the responses endpoint.

raise HTTPException(**response.model_dump()) from e

# Store conversation in cache
cache_entry = CacheEntry(

Moved outside of the try block.

logger = get_logger(__name__)


def extract_text_from_response_output_item(output_item: Any) -> str:

Too complicated and not type-safe.

return extract_text_from_output_items(response.output)


async def prepare_tools(

Can now be reused in responses.



async def get_mcp_tools( # pylint: disable=too-many-return-statements,too-many-locals
mcp_servers: list[ModelContextProtocolServer],

Import directly from configuration

return documents


def extract_token_usage(

The LLS API is already stable - no need for such a defensive approach.

return {"args": arguments_str}


async def select_model_for_responses(
@asimurka asimurka Feb 23, 2026

New function for model selection when a model is not provided explicitly in the request.

return summary


def extract_text_from_output_items(

Semantically equivalent to the previous implementation, but simplified thanks to API stability.

rag_chunks: list[dict],
truncated: bool,
attachments: list[Attachment],
def store_transcript(

Use a dedicated model for the transcript.

except (IOError, OSError) as e:
logger.error("Failed to store transcript into %s: %s", transcript_file_path, e)
raise
response = InternalServerErrorResponse.generic()

Raise the HTTP error immediately; do not re-raise internal errors.

) -> TranscriptMetadata:
"""Create a TranscriptMetadata BaseModel instance.

logger.info("Transcript successfully stored at: %s", transcript_file_path)

Moved right after the successful file insertion.

token_usage: TokenCounter = Field(default_factory=TokenCounter)


class TranscriptMetadata(BaseModel):

The model can be reused by a responses-like structure, where model and provider are concatenated.

@tisnik tisnik left a comment

If it changed the behaviour (see the changes in openapi.json), that change should land before the refactoring. The refactoring itself should not change "external" behaviour.

@asimurka asimurka marked this pull request as draft February 23, 2026 11:31
@asimurka asimurka force-pushed the utility_refactoring branch from f754544 to 64d98ff Compare February 23, 2026 12:14
@asimurka asimurka marked this pull request as ready for review February 23, 2026 12:16
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/utils/transcripts.py (1)

44-55: ⚠️ Potential issue | 🟡 Minor

Clarify that hashed_user_id is pre-hashed.

The docstring still implies hashing occurs in this function, which no longer matches the signature.

📝 Suggested docstring tweak
-    The returned path is built from the configured transcripts storage base
-    directory, a filesystem-safe directory derived from a hash of `user_id`,
+    The returned path is built from the configured transcripts storage base
+    directory, a filesystem-safe directory derived from a pre-hashed `user_id`,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/transcripts.py` around lines 44 - 55, The docstring for
construct_transcripts_path incorrectly implies the function performs hashing;
clarify that hashed_user_id is expected to be pre-hashed and not hashed here,
and update parameter description for hashed_user_id to state it is a
filesystem-safe hashed identifier (pre-hashed), and ensure conversation_id
description mentions it is normalized for path use; keep the rest of the
docstring intact and consistent with the function signature.
src/utils/prompts.py (1)

3-35: ⚠️ Potential issue | 🟡 Minor

Fix type hints and docstring references.

Change Optional[str] to str | None per modern Python syntax guidelines. Update docstring references from config to configuration (lines 18, 19, 23), and remove the non-existent config (AppConfig) parameter from the Parameters section (line 30)—the function accepts only system_prompt.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/prompts.py` around lines 3 - 35, Update the type annotation and
docstring in get_system_prompt: change the parameter type from Optional[str] to
the modern union syntax str | None in the function signature, update all
docstring references that mention `config` to `configuration` (lines describing
configuration.customization...), and remove the non-existent `config
(AppConfig)` entry from the Parameters section so the docstring only documents
the actual `system_prompt` parameter; keep references to
constants.DEFAULT_SYSTEM_PROMPT and configuration.customization where present to
preserve resolution precedence.
♻️ Duplicate comments (2)
src/utils/responses.py (1)

920-943: Guard response.usage access in build_turn_summary.

This was previously flagged; the direct access can still raise AttributeError when mocks or partial responses omit .usage.

llama-stack-api OpenAIResponseObject usage field optional
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/responses.py` around lines 920 - 943, The code accesses
response.usage directly in build_turn_summary (where TurnSummary,
extract_token_usage are used), which can raise AttributeError for partial/mocked
responses; change the call to safely read usage (e.g., usage = getattr(response,
"usage", None) or check hasattr(response, "usage")) and pass that usage (which
may be None) into extract_token_usage(response_usage, model) so
extract_token_usage always receives a defined value; update any related
references in the function (summary.token_usage assignment) to use this guarded
variable.
tests/unit/utils/test_responses.py (1)

67-80: MockContentPart.type remains unset when all constructor args are falsy.

No else branch assigns self.type, leaving it undefined if part_type, text, and refusal are all None/falsy. Any call to part.type in that state raises AttributeError.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/utils/test_responses.py` around lines 67 - 80, The constructor for
MockContentPart (__init__) can leave self.type unset when part_type, text, and
refusal are all falsy; always assign self.type to avoid AttributeError by adding
a final else branch that sets a default value (e.g., None or "unknown") so
self.type is defined for every instance of MockContentPart.
🧹 Nitpick comments (4)
tests/unit/utils/test_responses.py (1)

1088-1104: Rename test_extract_token_usage_with_dict_usage – the test no longer uses a dict.

Both test_extract_token_usage_with_dict_usage (line 1088) and test_extract_token_usage_with_object_usage (line 1107) now use mocker.Mock(). The "dict" name is misleading and the two tests are near-identical. Consider renaming/merging them for clarity.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/utils/test_responses.py` around lines 1088 - 1104, The test named
test_extract_token_usage_with_dict_usage no longer uses a dict and is nearly
identical to test_extract_token_usage_with_object_usage; update the tests to
remove confusion by renaming test_extract_token_usage_with_dict_usage to a more
accurate name (e.g., test_extract_token_usage_with_mock_usage) or merge the two
into a single parametrized test that covers both usage shapes, ensuring you
update any references and keep assertions against extract_token_usage (and the
mocked extract_provider_and_model_from_model_id, metrics.llm_token_sent_total,
metrics.llm_token_received_total, and _increment_llm_call_metric) unchanged.
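The suggested merge can be sketched with `pytest.mark.parametrize`. This is the shape only, under a simplified stand-in for `extract_token_usage`; the real tests additionally mock `extract_provider_and_model_from_model_id` and the token metrics.

```python
from types import SimpleNamespace

import pytest


def extract_token_usage(usage, model):
    # simplified stand-in for the src/utils/responses.py helper
    return {
        "input": getattr(usage, "input_tokens", 0),
        "output": getattr(usage, "output_tokens", 0),
    }


@pytest.mark.parametrize(
    "usage",
    [
        SimpleNamespace(input_tokens=10, output_tokens=20),
        SimpleNamespace(input_tokens=10, output_tokens=20, extra="ignored"),
    ],
    ids=["plain_object", "object_with_extra_fields"],
)
def test_extract_token_usage(usage):
    counter = extract_token_usage(usage, "provider/model")
    assert counter == {"input": 10, "output": 20}
```

One parametrized test replaces the two near-identical ones, and the ids make each usage shape explicit in the test report.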
src/app/endpoints/query.py (2)

155-155: Replace list[Any] with list[RAGChunk] and remove the leftover dev comment.

RAGChunk is available from utils.types (already imported in streaming_query.py for the same purpose). The inline comment reads like a personal TODO; it should be removed before merge.

✏️ Suggested fix
-    pre_rag_chunks: list[Any] = []  # use your RAGChunk type (or the upstream one)
+    pre_rag_chunks: list[RAGChunk] = []

Add to imports:

 from utils.types import (
     ResponsesApiParams,
     TurnSummary,
+    RAGChunk,
 )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/app/endpoints/query.py` at line 155, Replace the dev placeholder typing
and comment for the pre_rag_chunks variable by typing it as list[RAGChunk] and
removing the inline TODO/comment; ensure RAGChunk is imported from utils.types
(the project already imports RAGChunk in streaming_query.py so mirror that
import) and update the declaration from "pre_rag_chunks: list[Any] = []  # use
your RAGChunk type (or the upstream one)" to a typed empty list using RAGChunk
with no extra comment.

258-270: Move deduplicate_referenced_documents to utils/query.py to eliminate duplication and complete the docstring.

Two issues:

  1. Code duplication: An equivalent deduplication block already exists inline in streaming_query.py (lines 607-618). Placing this function in utils/query.py would let both endpoints import and share it, removing the duplicate.

  2. Incomplete docstring: The one-line description lacks Args: and Returns: sections required by the Google docstring convention followed in this repo. As per coding guidelines, all functions must have complete docstrings with Parameters, Returns, and Raises sections.

✏️ Suggested docstring fix (if keeping function here for now)
-def deduplicate_referenced_documents(
-    docs: list[ReferencedDocument],
-) -> list[ReferencedDocument]:
-    """Remove duplicate referenced documents based on URL and title."""
+def deduplicate_referenced_documents(
+    docs: list[ReferencedDocument],
+) -> list[ReferencedDocument]:
+    """Remove duplicate referenced documents based on URL and title.
+
+    Args:
+        docs: List of referenced documents to deduplicate.
+
+    Returns:
+        Ordered list of documents with duplicates (same URL + title) removed.
+    """
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/app/endpoints/query.py` around lines 258 - 270, Move the deduplication
logic into a shared utility and add a full Google-style docstring: extract the
function deduplicate_referenced_documents and place it in the shared utils.query
module so streaming_query's inline dedupe block (the duplicate logic) imports
and uses this function instead of duplicating code; update the function's
docstring to include Args:, Returns:, and Raises: sections describing docs:
list[ReferencedDocument] input, the returned list[ReferencedDocument], and any
exceptions it may raise (if none, document that it does not raise), and update
streaming_query to import and call deduplicate_referenced_documents rather than
using the inline dedupe loop.
src/app/endpoints/streaming_query.py (1)

607-618: Move inline deduplication to a shared utility to avoid code duplication.

This block duplicates the deduplication logic from deduplicate_referenced_documents in src/app/endpoints/query.py. Because that helper lives in an endpoint module rather than utils/, it can't be imported here. Moving it to utils/query.py (or utils/responses.py) would let both endpoints share one implementation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/app/endpoints/streaming_query.py` around lines 607 - 618, The inline
deduplication in the streaming handler (the loop that builds
seen/deduplicated_documents and sets turn_summary.referenced_documents from
turn_summary.pre_rag_documents + tool_based_documents) duplicates logic already
implemented in deduplicate_referenced_documents in src/app/endpoints/query.py;
extract that helper into a shared utils module (e.g., utils/query.py or
utils/responses.py), update the original deduplicate_referenced_documents to
live there and export it, then replace the inline loop with a call to the shared
helper (passing the combined list and assigning the result to
turn_summary.referenced_documents). Ensure function signatures match existing
callers (update imports in both streaming_query.py and query.py) and remove the
duplicated loop.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/utils/query.py`:
- Around line 91-110: Update the signature of validate_model_provider_override
to use modern union syntax for optional params (change Optional[str] to str |
None for both model and provider), and tighten the override detection so any
explicit model or provider triggers the check (set has_override = provider is
not None or model is not None) instead of relying on "/" in model; then if
has_override and Action.MODEL_OVERRIDE not in authorized_actions, keep raising
HTTPException(**ForbiddenResponse.model_override().model_dump()) to enforce
RBAC.

In `@src/utils/responses.py`:
- Around line 101-120: Update the type annotations in the prepare_tools function
signature to use PEP 604 union syntax instead of Optional[...]: change
vector_store_ids: Optional[list[str]] to vector_store_ids: list[str] | None,
no_tools: Optional[bool] to no_tools: bool | None, mcp_headers:
Optional[McpHeaders] to mcp_headers: McpHeaders | None, and the return
annotation Optional[list[dict[str, Any]]] to list[dict[str, Any]] | None; keep
other symbols (AsyncLlamaStackClient, token: str, Any, dict) unchanged.

In `@src/utils/transcripts.py`:
- Around line 71-96: The docstring and sync I/O in store_transcript are
incorrect: update the docstring (Raises) to reflect that the function raises
HTTPException (wrapping IO/OSError) and then convert store_transcript to async
to avoid blocking async handlers — change its signature to async def
store_transcript(...), perform file writes with an async library (e.g.,
aiofiles) or run blocking I/O in asyncio.to_thread, and keep the same error
handling (catch IOError/OSError, log, create
InternalServerErrorResponse.generic(), and raise HTTPException from the error);
finally update store_query_results to await store_transcript so callers use the
new async API.

---

Outside diff comments:
In `@src/utils/prompts.py`:
- Around line 3-35: Update the type annotation and docstring in
get_system_prompt: change the parameter type from Optional[str] to the modern
union syntax str | None in the function signature, update all docstring
references that mention `config` to `configuration` (lines describing
configuration.customization...), and remove the non-existent `config
(AppConfig)` entry from the Parameters section so the docstring only documents
the actual `system_prompt` parameter; keep references to
constants.DEFAULT_SYSTEM_PROMPT and configuration.customization where present to
preserve resolution precedence.

In `@src/utils/transcripts.py`:
- Around line 44-55: The docstring for construct_transcripts_path incorrectly
implies the function performs hashing; clarify that hashed_user_id is expected
to be pre-hashed and not hashed here, and update parameter description for
hashed_user_id to state it is a filesystem-safe hashed identifier (pre-hashed),
and ensure conversation_id description mentions it is normalized for path use;
keep the rest of the docstring intact and consistent with the function
signature.

---

Duplicate comments:
In `@src/utils/responses.py`:
- Around line 920-943: The code accesses response.usage directly in
build_turn_summary (where TurnSummary, extract_token_usage are used), which can
raise AttributeError for partial/mocked responses; change the call to safely
read usage (e.g., usage = getattr(response, "usage", None) or check
hasattr(response, "usage")) and pass that usage (which may be None) into
extract_token_usage(response_usage, model) so extract_token_usage always
receives a defined value; update any related references in the function
(summary.token_usage assignment) to use this guarded variable.

In `@tests/unit/utils/test_responses.py`:
- Around line 67-80: The constructor for MockContentPart (__init__) can leave
self.type unset when part_type, text, and refusal are all falsy; always assign
self.type to avoid AttributeError by adding a final else branch that sets a
default value (e.g., None or "unknown") so self.type is defined for every
instance of MockContentPart.

---

Nitpick comments:
In `@src/app/endpoints/query.py`:
- Line 155: Replace the dev placeholder typing and comment for the
pre_rag_chunks variable by typing it as list[RAGChunk] and removing the inline
TODO/comment; ensure RAGChunk is imported from utils.types (the project already
imports RAGChunk in streaming_query.py so mirror that import) and update the
declaration from "pre_rag_chunks: list[Any] = []  # use your RAGChunk type (or
the upstream one)" to a typed empty list using RAGChunk with no extra comment.
- Around line 258-270: Move the deduplication logic into a shared utility and
add a full Google-style docstring: extract the function
deduplicate_referenced_documents and place it in the shared utils.query module
so streaming_query's inline dedupe block (the duplicate logic) imports and uses
this function instead of duplicating code; update the function's docstring to
include Args:, Returns:, and Raises: sections describing docs:
list[ReferencedDocument] input, the returned list[ReferencedDocument], and any
exceptions it may raise (if none, document that it does not raise), and update
streaming_query to import and call deduplicate_referenced_documents rather than
using the inline dedupe loop.

In `@src/app/endpoints/streaming_query.py`:
- Around line 607-618: The inline deduplication in the streaming handler (the
loop that builds seen/deduplicated_documents and sets
turn_summary.referenced_documents from turn_summary.pre_rag_documents +
tool_based_documents) duplicates logic already implemented in
deduplicate_referenced_documents in src/app/endpoints/query.py; extract that
helper into a shared utils module (e.g., utils/query.py or utils/responses.py),
update the original deduplicate_referenced_documents to live there and export
it, then replace the inline loop with a call to the shared helper (passing the
combined list and assigning the result to turn_summary.referenced_documents).
Ensure function signatures match existing callers (update imports in both
streaming_query.py and query.py) and remove the duplicated loop.

In `@tests/unit/utils/test_responses.py`:
- Around line 1088-1104: The test named test_extract_token_usage_with_dict_usage
no longer uses a dict and is nearly identical to
test_extract_token_usage_with_object_usage; update the tests to remove confusion
by renaming test_extract_token_usage_with_dict_usage to a more accurate name
(e.g., test_extract_token_usage_with_mock_usage) or merge the two into a single
parametrized test that covers both usage shapes, ensuring you update any
references and keep assertions against extract_token_usage (and the mocked
extract_provider_and_model_from_model_id, metrics.llm_token_sent_total,
metrics.llm_token_received_total, and _increment_llm_call_metric) unchanged.

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f754544 and 64d98ff.

📒 Files selected for processing (18)
  • src/app/endpoints/a2a.py
  • src/app/endpoints/query.py
  • src/app/endpoints/rlsapi_v1.py
  • src/app/endpoints/streaming_query.py
  • src/models/responses.py
  • src/utils/prompts.py
  • src/utils/query.py
  • src/utils/responses.py
  • src/utils/transcripts.py
  • src/utils/types.py
  • tests/integration/endpoints/test_query_integration.py
  • tests/unit/app/endpoints/test_a2a.py
  • tests/unit/app/endpoints/test_query.py
  • tests/unit/app/endpoints/test_rlsapi_v1.py
  • tests/unit/utils/test_prompts.py
  • tests/unit/utils/test_query.py
  • tests/unit/utils/test_responses.py
  • tests/unit/utils/test_transcripts.py
🚧 Files skipped from review as they are similar to previous changes (5)
  • src/app/endpoints/a2a.py
  • src/app/endpoints/rlsapi_v1.py
  • src/utils/types.py
  • tests/unit/app/endpoints/test_a2a.py
  • tests/unit/app/endpoints/test_query.py

Comment on lines +71 to +96
def store_transcript(
transcript: Transcript,
) -> None:
"""Store transcript in the local filesystem.
Parameters:
user_id: The user ID (UUID).
conversation_id: The conversation ID (UUID).
model_id: Identifier of the model used to generate the LLM response.
provider_id: Optional provider identifier for the model.
query_is_valid: The result of the query validation.
query: The query (without attachments).
query_request: The request containing a query.
summary: Summary of the query/response turn.
rag_chunks: The list of serialized `RAGChunk` dictionaries.
truncated: The flag indicating if the history was truncated.
attachments: The list of `Attachment` objects.
transcript: BaseModel instance to be stored (e.g., Transcript).
Raises:
IOError, OSError: If writing the transcript file to disk fails.
"""
transcripts_path = construct_transcripts_path(user_id, conversation_id)
transcripts_path = construct_transcripts_path(
transcript.metadata.user_id, transcript.metadata.conversation_id
)
transcripts_path.mkdir(parents=True, exist_ok=True)

hashed_user_id = _hash_user_id(user_id)

data_to_store = {
"metadata": {
"provider": provider_id,
"model": model_id,
"query_provider": query_request.provider,
"query_model": query_request.model,
"user_id": hashed_user_id,
"conversation_id": conversation_id,
"timestamp": datetime.now(UTC).isoformat(),
},
"redacted_query": query,
"query_is_valid": query_is_valid,
"llm_response": summary.llm_response,
"rag_chunks": rag_chunks,
"truncated": truncated,
"attachments": [attachment.model_dump() for attachment in attachments],
"tool_calls": [tc.model_dump() for tc in summary.tool_calls],
"tool_results": [tr.model_dump() for tr in summary.tool_results],
}

# stores feedback in a file under unique uuid
# stores transcript in a file under unique uuid
transcript_file_path = transcripts_path / f"{get_suid()}.json"
try:
with open(transcript_file_path, "w", encoding="utf-8") as transcript_file:
json.dump(data_to_store, transcript_file)
json.dump(transcript.model_dump(), transcript_file)
logger.info("Transcript successfully stored at: %s", transcript_file_path)
except (IOError, OSError) as e:
logger.error("Failed to store transcript into %s: %s", transcript_file_path, e)
raise
response = InternalServerErrorResponse.generic()
raise HTTPException(**response.model_dump()) from e
Contributor

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify call sites for store_transcript
rg -n "store_transcript\(" -S src tests

Repository: lightspeed-core/lightspeed-stack

Length of output: 361


🏁 Script executed:

# Also check the actual file to confirm the code state
cat -n src/utils/transcripts.py | sed -n '71,96p'

Repository: lightspeed-core/lightspeed-stack

Length of output: 1335


🏁 Script executed:

# Check if there are any imports or async context in the module
head -30 src/utils/transcripts.py

Repository: lightspeed-core/lightspeed-stack

Length of output: 692


🏁 Script executed:

# Check the call site in query.py to understand the context
sed -n '260,280p' src/utils/query.py

Repository: lightspeed-core/lightspeed-stack

Length of output: 845


🏁 Script executed:

# Check if the calling function in query.py is async
grep -n "^async def\|^def" src/utils/query.py | grep -B5 -A5 "269"

Repository: lightspeed-core/lightspeed-stack

Length of output: 58


🏁 Script executed:

# Find the function definition containing the store_transcript call at line 269
sed -n '200,280p' src/utils/query.py | head -80

Repository: lightspeed-core/lightspeed-stack

Length of output: 2967


🏁 Script executed:

# Search for async functions that might contain this call
rg -B30 "store_transcript" src/utils/query.py | head -50

Repository: lightspeed-core/lightspeed-stack

Length of output: 1959


🏁 Script executed:

# Check the function signature of store_query_results and see if it's async
rg -n "def store_query_results" src/utils/query.py

Repository: lightspeed-core/lightspeed-stack

Length of output: 158


🏁 Script executed:

# Find all call sites of store_query_results to see if it's called from async contexts
rg -n "store_query_results\(" src tests

Repository: lightspeed-core/lightspeed-stack

Length of output: 771


🏁 Script executed:

# Check the endpoints to see if they are async functions
sed -n '390,410p' src/app/endpoints/streaming_query.py
rg -n "^async def\|^def" src/app/endpoints/streaming_query.py | grep -B2 -A2 "401"

Repository: lightspeed-core/lightspeed-stack

Length of output: 854


🏁 Script executed:

# Check the other endpoint
sed -n '220,240p' src/app/endpoints/query.py
rg -n "^async def\|^def" src/app/endpoints/query.py | grep -B2 -A2 "231"

Repository: lightspeed-core/lightspeed-stack

Length of output: 815


🏁 Script executed:

# Get the function definitions for the endpoints calling store_query_results
rg -B10 "store_query_results" src/app/endpoints/streaming_query.py | head -30

Repository: lightspeed-core/lightspeed-stack

Length of output: 877


🏁 Script executed:

# Get the function definitions for the other endpoint
rg -B10 "store_query_results" src/app/endpoints/query.py | head -30

Repository: lightspeed-core/lightspeed-stack

Length of output: 770


🏁 Script executed:

# Find the exact function definition that contains the store_query_results call in streaming_query.py
rg -B50 "store_query_results" src/app/endpoints/streaming_query.py | grep -E "^[0-9]+-async def|^[0-9]+-def" | tail -1

Repository: lightspeed-core/lightspeed-stack

Length of output: 58


🏁 Script executed:

# Find the exact function definition that contains the store_query_results call in query.py
rg -B50 "store_query_results" src/app/endpoints/query.py | grep -E "^[0-9]+-async def|^[0-9]+-def" | tail -1

Repository: lightspeed-core/lightspeed-stack

Length of output: 58


🏁 Script executed:

# Let's get the line numbers and search more directly
grep -n "async def" src/app/endpoints/streaming_query.py | head -5

Repository: lightspeed-core/lightspeed-stack

Length of output: 383


🏁 Script executed:

grep -n "async def" src/app/endpoints/query.py | head -5

Repository: lightspeed-core/lightspeed-stack

Length of output: 180


Update docstring and consider making file I/O async.

The Raises docstring documents IOError, OSError but the function raises HTTPException (line 96). Additionally, store_transcript() performs synchronous disk I/O and is called from async endpoint handlers via store_query_results(), which violates the coding guideline to use async def for I/O operations. Converting to async would require updating both store_transcript() and store_query_results() to async.

📝 Docstring fix (minimal)
-    Raises:
-        IOError, OSError: If writing the transcript file to disk fails.
+    Raises:
+        HTTPException: If writing the transcript file to disk fails.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/transcripts.py` around lines 71 - 96, The docstring and sync I/O in
store_transcript are incorrect: update the docstring (Raises) to reflect that
the function raises HTTPException (wrapping IO/OSError) and then convert
store_transcript to async to avoid blocking async handlers — change its
signature to async def store_transcript(...), perform file writes with an async
library (e.g., aiofiles) or run blocking I/O in asyncio.to_thread, and keep the
same error handling (catch IOError/OSError, log, create
InternalServerErrorResponse.generic(), and raise HTTPException from the error);
finally update store_query_results to await store_transcript so callers use the
new async API.
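One way to offload the blocking write without adding an aiofiles dependency is `asyncio.to_thread`. A minimal sketch with a simplified signature; the real `store_transcript` takes a `Transcript` model, builds the path itself, and wraps failures in `HTTPException`.

```python
import asyncio
import json
import tempfile
from pathlib import Path


def _write_json(path: Path, payload: dict) -> None:
    # the blocking write, unchanged from the synchronous version
    with open(path, "w", encoding="utf-8") as transcript_file:
        json.dump(payload, transcript_file)


async def store_transcript(path: Path, payload: dict) -> None:
    # run the blocking I/O in a worker thread so the event loop keeps serving
    await asyncio.to_thread(_write_json, path, payload)


transcript_path = Path(tempfile.mkdtemp()) / "transcript.json"
asyncio.run(store_transcript(transcript_path, {"llm_response": "ok"}))
```

Callers such as `store_query_results` would then `await store_transcript(...)` instead of calling it synchronously.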

@asimurka asimurka force-pushed the utility_refactoring branch from 64d98ff to 0611cc1 Compare February 23, 2026 13:59
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/utils/query.py (1)

236-250: ⚠️ Potential issue | 🟡 Minor

Stale docstring: references removed configuration parameter.

Line 244 still documents configuration: Application configuration but this parameter was removed from the function signature.

📝 Proposed fix
     Args:
         user_id: The authenticated user ID
         conversation_id: The conversation ID
         model: The model identifier
         started_at: ISO formatted timestamp when the request started
         completed_at: ISO formatted timestamp when the request completed
         summary: Summary of the turn including LLM response and tool calls
         query_request: The original query request
-        configuration: Application configuration
         skip_userid_check: Whether to skip user ID validation
         topic_summary: Optional topic summary for the conversation
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/query.py` around lines 236 - 250, The docstring parameter list is
stale: it still documents `configuration` although that parameter was removed
from the function signature; update the docstring in src/utils/query.py so the
Args section matches the actual signature (remove `configuration: Application
configuration` and any references to it), ensure the remaining parameters
(`user_id`, `conversation_id`, `model`, `started_at`, `completed_at`, `summary`,
`query_request`, `skip_userid_check`, `topic_summary`) are correctly described,
and keep the Raises section accurate for `HTTPException` or other errors.
♻️ Duplicate comments (4)
src/utils/transcripts.py (1)

79-80: Docstring Raises section is inaccurate.

The function catches IOError/OSError and raises HTTPException (line 96), but the docstring still documents IOError, OSError as the raised exceptions.

📝 Proposed fix
     Raises:
-        IOError, OSError: If writing the transcript file to disk fails.
+        HTTPException: If writing the transcript file to disk fails (wraps IOError/OSError).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/transcripts.py` around lines 79 - 80, Update the docstring for the
function in src/utils/transcripts.py that writes the transcript to disk so its
Raises section reflects the actual exception thrown (HTTPException) instead of
listing IOError/OSError; specifically mention that the function raises
fastapi.HTTPException (or HTTPException) when file write fails and include the
relevant status code/context, and remove or replace the outdated IOError/OSError
entries so the docstring matches the behavior where the code catches OS errors
and re-raises an HTTPException (see the raise HTTPException call in the
function).
src/utils/query.py (1)

91-110: Previous review's Optional[str] and override-logic concerns were noted.

The Optional[str] usage on lines 92-93 was flagged in a previous review. The override detection logic at line 107 intentionally allows bare model names without "/" to skip the MODEL_OVERRIDE check, which the docstring explains is by design for the Responses API "provider/model" format.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/query.py` around lines 91 - 110, Change the Optional[str] type
annotations on validate_model_provider_override's parameters to modern union
syntax (model: str | None, provider: str | None) for consistency with
authorized_actions, and keep the override detection logic using has_override =
provider is not None or (model is not None and "/" in model) unchanged; ensure
the function still raises HTTPException when has_override is true and
Action.MODEL_OVERRIDE not in authorized_actions (referencing
validate_model_provider_override, model, provider, has_override, and
Action.MODEL_OVERRIDE).
tests/unit/utils/test_responses.py (1)

67-80: ⚠️ Potential issue | 🟡 Minor

Ensure MockContentPart.type is always set.
When part_type, text, and refusal are all falsy, type remains unset and may raise AttributeError in code that expects it.

🛠️ Suggested fix
     def __init__(
         self,
         text: Optional[str] = None,
         refusal: Optional[str] = None,
         part_type: Optional[str] = None,
     ) -> None:
         self.text = text
         self.refusal = refusal
         if part_type:
             self.type = part_type
         elif text:
             self.type = "output_text"
         elif refusal:
             self.type = "refusal"
+        else:
+            self.type = "unknown"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/utils/test_responses.py` around lines 67 - 80, The constructor for
MockContentPart leaves self.type unset when part_type, text, and refusal are all
falsy; update the __init__ of MockContentPart to always set self.type by adding
a final else that assigns a sensible default (e.g., "empty" or "unknown") so any
code reading MockContentPart.type never raises AttributeError.
src/utils/responses.py (1)

101-107: 🛠️ Refactor suggestion | 🟠 Major

Use PEP 604 unions instead of Optional[...] throughout the file.
The repo guidelines require modern T | None syntax; this file contains 30+ instances of Optional[...] that need refactoring. Python 3.12+ fully supports this syntax.

Examples to update:

  • Line 103-107: prepare_tools function parameters
  • Line 448: extract_token_usage function parameter
  • Plus 25+ additional instances throughout the module
♻️ Suggested refactor pattern
 async def prepare_tools(
     client: AsyncLlamaStackClient,
-    vector_store_ids: Optional[list[str]],
-    no_tools: Optional[bool],
+    vector_store_ids: list[str] | None,
+    no_tools: bool | None,
     token: str,
-    mcp_headers: Optional[McpHeaders] = None,
-) -> Optional[list[dict[str, Any]]]:
+    mcp_headers: McpHeaders | None = None,
+) -> list[dict[str, Any]] | None:

-def extract_token_usage(usage: Optional[ResponseUsage], model: str) -> TokenCounter:
+def extract_token_usage(usage: ResponseUsage | None, model: str) -> TokenCounter:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/responses.py` around lines 101 - 107, The type hints in this module
use legacy Optional[...] forms; replace them with PEP 604 union syntax (T |
None) across the file (e.g., change prepare_tools signature parameters client:
AsyncLlamaStackClient, vector_store_ids: Optional[list[str]], no_tools:
Optional[bool], token: str, mcp_headers: Optional[McpHeaders] to use
AsyncLlamaStackClient, vector_store_ids: list[str] | None, no_tools: bool |
None, mcp_headers: McpHeaders | None) and do the same for extract_token_usage
and the other ~30 occurrences; remove or adjust any unnecessary Optional imports
(or keep typing imports used elsewhere) and ensure all annotations use X | None
consistently while preserving existing generic types like list[str] and
dict[str, Any].
🧹 Nitpick comments (2)
src/utils/prompts.py (1)

3-3: Use str | None instead of Optional[str].

The coding guidelines require modern union syntax. Optional[str] on line 11 should be str | None, and the from typing import Optional import can then be removed.

♻️ Proposed fix
-from typing import Optional
 from fastapi import HTTPException
 ...
-def get_system_prompt(system_prompt: Optional[str]) -> str:
+def get_system_prompt(system_prompt: str | None) -> str:

As per coding guidelines: "Use modern Union syntax str | int for type unions instead of Union[str, int]".

Also applies to: 11-11

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/prompts.py` at line 3, Replace the legacy typing.Optional usage
with modern union syntax: change any type annotations using Optional[str] to str
| None (e.g., function parameters or return annotations that currently use
Optional[str]) and remove the now-unneeded "from typing import Optional" import;
update all occurrences in src/utils/prompts.py so annotations use the pipe union
form.
src/app/endpoints/query.py (1)

258-270: Avoid duplicating deduplicate_referenced_documents.
There’s an identical helper in utils.responses; reusing it avoids drift and keeps behavior consistent.

♻️ Suggested refactor
-from utils.responses import (
-    build_turn_summary,
-    extract_vector_store_ids_from_tools,
-    get_topic_summary,
-    prepare_responses_params,
-)
+from utils.responses import (
+    build_turn_summary,
+    deduplicate_referenced_documents,
+    extract_vector_store_ids_from_tools,
+    get_topic_summary,
+    prepare_responses_params,
+)

- def deduplicate_referenced_documents(
-     docs: list[ReferencedDocument],
- ) -> list[ReferencedDocument]:
-     """Remove duplicate referenced documents based on URL and title."""
-     seen: set[tuple[str | None, str | None]] = set()
-     out: list[ReferencedDocument] = []
-     for d in docs:
-         key = (str(d.doc_url) if d.doc_url else None, d.doc_title)
-         if key in seen:
-             continue
-         seen.add(key)
-         out.append(d)
-     return out
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/app/endpoints/query.py` around lines 258 - 270, The
deduplicate_referenced_documents function here duplicates logic from
utils.responses; remove this local copy and import the single implementation
from utils.responses instead. Replace the local function definition with an
import of deduplicate_referenced_documents from utils.responses and update any
local references to call that imported symbol (ensure function signature and
return types match usage in this module). Keep only the shared implementation in
utils.responses to avoid behavioral drift.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/utils/prompts.py`:
- Around line 28-31: The function docstring in src/utils/prompts.py is stale:
update the parameter list to match the current signature by removing references
to the removed `config (AppConfig)` parameter and replace any remaining
`query_request` mentions with `system_prompt`; specifically edit the docstring
for the function that takes `system_prompt` (and optional other current params)
so the parameter descriptions accurately describe `system_prompt
(Optional[str])` and any other existing args, and remove the obsolete `config`
entry so the docs mirror the code.

In `@src/utils/responses.py`:
- Around line 184-188: The branch that builds model only when both
query_request.model and query_request.provider are present ignores valid
overrides where query_request.model already contains the provider (e.g.,
"provider/model"); update the logic in the response selection flow to first
validate and accept a provider-qualified model string (use
validate_model_provider_override to parse/validate query_request.model) and, if
valid, set model to that value; otherwise fall back to combining
query_request.provider + "/" + query_request.model when both fields are
provided, and only call select_model_for_responses(client, user_conversation)
when no valid override is present (adjust code around the current model
assignment where select_model_for_responses is invoked).

In `@src/utils/transcripts.py`:
- Around line 151-157: The create_transcript function currently hardcodes
Transcript fields query_is_valid=True and truncated=False; update
create_transcript to accept two new parameters (e.g., query_is_valid: bool =
True, truncated: bool = False) or implement detection logic, then pass those
parameter values into the Transcript constructor instead of the hardcoded
literals; modify the create_transcript signature and any callers to provide
appropriate values (or compute them inside create_transcript) and ensure the
returned Transcript uses the query_is_valid and truncated variables rather than
True/False constants.

---

Outside diff comments:
In `@src/utils/query.py`:
- Around line 236-250: The docstring parameter list is stale: it still documents
`configuration` although that parameter was removed from the function signature;
update the docstring in src/utils/query.py so the Args section matches the
actual signature (remove `configuration: Application configuration` and any
references to it), ensure the remaining parameters (`user_id`,
`conversation_id`, `model`, `started_at`, `completed_at`, `summary`,
`query_request`, `skip_userid_check`, `topic_summary`) are correctly described,
and keep the Raises section accurate for `HTTPException` or other errors.

---

Duplicate comments:
In `@src/utils/query.py`:
- Around line 91-110: Change the Optional[str] type annotations on
validate_model_provider_override's parameters to modern union syntax (model: str
| None, provider: str | None) for consistency with authorized_actions, and keep
the override detection logic using has_override = provider is not None or (model
is not None and "/" in model) unchanged; ensure the function still raises
HTTPException when has_override is true and Action.MODEL_OVERRIDE not in
authorized_actions (referencing validate_model_provider_override, model,
provider, has_override, and Action.MODEL_OVERRIDE).

In `@src/utils/responses.py`:
- Around line 101-107: The type hints in this module use legacy Optional[...]
forms; replace them with PEP 604 union syntax (T | None) across the file (e.g.,
change prepare_tools signature parameters client: AsyncLlamaStackClient,
vector_store_ids: Optional[list[str]], no_tools: Optional[bool], token: str,
mcp_headers: Optional[McpHeaders] to use AsyncLlamaStackClient,
vector_store_ids: list[str] | None, no_tools: bool | None, mcp_headers:
McpHeaders | None) and do the same for extract_token_usage and the other ~30
occurrences; remove or adjust any unnecessary Optional imports (or keep typing
imports used elsewhere) and ensure all annotations use X | None consistently
while preserving existing generic types like list[str] and dict[str, Any].

In `@src/utils/transcripts.py`:
- Around line 79-80: Update the docstring for the function in
src/utils/transcripts.py that writes the transcript to disk so its Raises
section reflects the actual exception thrown (HTTPException) instead of listing
IOError/OSError; specifically mention that the function raises
fastapi.HTTPException (or HTTPException) when file write fails and include the
relevant status code/context, and remove or replace the outdated IOError/OSError
entries so the docstring matches the behavior where the code catches OS errors
and re-raises an HTTPException (see the raise HTTPException call in the
function).

In `@tests/unit/utils/test_responses.py`:
- Around line 67-80: The constructor for MockContentPart leaves self.type unset
when part_type, text, and refusal are all falsy; update the __init__ of
MockContentPart to always set self.type by adding a final else that assigns a
sensible default (e.g., "empty" or "unknown") so any code reading
MockContentPart.type never raises AttributeError.

---

Nitpick comments:
In `@src/app/endpoints/query.py`:
- Around line 258-270: The deduplicate_referenced_documents function here
duplicates logic from utils.responses; remove this local copy and import the
single implementation from utils.responses instead. Replace the local function
definition with an import of deduplicate_referenced_documents from
utils.responses and update any local references to call that imported symbol
(ensure function signature and return types match usage in this module). Keep
only the shared implementation in utils.responses to avoid behavioral drift.

In `@src/utils/prompts.py`:
- Line 3: Replace the legacy typing.Optional usage with modern union syntax:
change any type annotations using Optional[str] to str | None (e.g., function
parameters or return annotations that currently use Optional[str]) and remove
the now-unneeded "from typing import Optional" import; update all occurrences in
src/utils/prompts.py so annotations use the pipe union form.

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 64d98ff and 0611cc1.

📒 Files selected for processing (18)
  • src/app/endpoints/a2a.py
  • src/app/endpoints/query.py
  • src/app/endpoints/rlsapi_v1.py
  • src/app/endpoints/streaming_query.py
  • src/models/responses.py
  • src/utils/prompts.py
  • src/utils/query.py
  • src/utils/responses.py
  • src/utils/transcripts.py
  • src/utils/types.py
  • tests/integration/endpoints/test_query_integration.py
  • tests/unit/app/endpoints/test_a2a.py
  • tests/unit/app/endpoints/test_query.py
  • tests/unit/app/endpoints/test_rlsapi_v1.py
  • tests/unit/utils/test_prompts.py
  • tests/unit/utils/test_query.py
  • tests/unit/utils/test_responses.py
  • tests/unit/utils/test_transcripts.py
🚧 Files skipped from review as they are similar to previous changes (5)
  • src/models/responses.py
  • src/app/endpoints/rlsapi_v1.py
  • tests/unit/app/endpoints/test_a2a.py
  • tests/unit/app/endpoints/test_rlsapi_v1.py
  • src/utils/types.py

Comment on lines +28 to 31
system_prompt (Optional[str]): The incoming system prompt; may contain a
per-request `system_prompt`.
config (AppConfig): Application configuration which may include
customization flags, a custom profile, and a default `system_prompt`.
⚠️ Potential issue | 🟡 Minor

Stale docstring: references removed config (AppConfig) parameter.

Lines 30-31 still document a config (AppConfig) parameter that was removed from the signature. Line 24 also refers to query_request instead of the current system_prompt parameter.

📝 Proposed fix
     Parameters:
         system_prompt (Optional[str]): The incoming system prompt; may contain a
         per-request `system_prompt`.
-        config (AppConfig): Application configuration which may include
-        customization flags, a custom profile, and a default `system_prompt`.
 
     Returns:
         str: The resolved system prompt to apply to the request.

Also update line 24:

-    `query_request` contains a `system_prompt`, an HTTP 422 Unprocessable
+    `system_prompt` is provided, an HTTP 422 Unprocessable
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
 system_prompt (Optional[str]): The incoming system prompt; may contain a
 per-request `system_prompt`.
-config (AppConfig): Application configuration which may include
-customization flags, a custom profile, and a default `system_prompt`.
 Returns:
 str: The resolved system prompt to apply to the request.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/prompts.py` around lines 28 - 31, The function docstring in
src/utils/prompts.py is stale: update the parameter list to match the current
signature by removing references to the removed `config (AppConfig)` parameter
and replace any remaining `query_request` mentions with `system_prompt`;
specifically edit the docstring for the function that takes `system_prompt` (and
optional other current params) so the parameter descriptions accurately describe
`system_prompt (Optional[str])` and any other existing args, and remove the
obsolete `config` entry so the docs mirror the code.

Comment on lines +184 to 188
if query_request.model and query_request.provider:
model = f"{query_request.provider}/{query_request.model}"
else:
model = await select_model_for_responses(client, user_conversation)

@coderabbitai coderabbitai bot Feb 23, 2026
⚠️ Potential issue | 🟠 Major

Responses-format model override is ignored when provider is omitted.
If the client sends model="provider/model" (supported by validate_model_provider_override), this branch discards it and selects a different model. This breaks explicit overrides.

✅ Suggested fix
-    if query_request.model and query_request.provider:
-        model = f"{query_request.provider}/{query_request.model}"
-    else:
-        model = await select_model_for_responses(client, user_conversation)
+    if query_request.model:
+        if query_request.provider:
+            model = f"{query_request.provider}/{query_request.model}"
+        elif "/" in query_request.model:
+            model = query_request.model
+        else:
+            model = await select_model_for_responses(client, user_conversation)
+    else:
+        model = await select_model_for_responses(client, user_conversation)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/responses.py` around lines 184 - 188, The branch that builds model
only when both query_request.model and query_request.provider are present
ignores valid overrides where query_request.model already contains the provider
(e.g., "provider/model"); update the logic in the response selection flow to
first validate and accept a provider-qualified model string (use
validate_model_provider_override to parse/validate query_request.model) and, if
valid, set model to that value; otherwise fall back to combining
query_request.provider + "/" + query_request.model when both fields are
provided, and only call select_model_for_responses(client, user_conversation)
when no valid override is present (adjust code around the current model
assignment where select_model_for_responses is invoked).

Contributor Author
This is OK; the query request validator requires that either none or both of these attributes be specified.

Contributor
@asimurka, thank you for the clarification! If the query request validator enforces that model and provider must either both be specified or both be absent, then the current logic is indeed correct. The concatenated format concern doesn't apply since the validator would reject model="provider/model" when provider is not also provided.

I appreciate you explaining the validation constraint! 🐰

ฅ^•ﻌ•^ฅ


✏️ Learnings added
Learnt from: asimurka
Repo: lightspeed-core/lightspeed-stack PR: 1198
File: src/utils/responses.py:184-188
Timestamp: 2026-02-23T14:11:34.809Z
Learning: The query request validator in the Responses API flow requires that `query_request.model` and `query_request.provider` must either both be specified or both be absent. The concatenated format (e.g., `model="provider/model"` with `provider=None`) is not permitted by the validator.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.


🧠 Learnings used
Learnt from: asimurka
Repo: lightspeed-core/lightspeed-stack PR: 1198
File: src/utils/query.py:91-110
Timestamp: 2026-02-23T14:07:34.842Z
Learning: In `src/utils/query.py`, the `validate_model_provider_override` function should only flag an override when: (1) provider is explicitly specified (separated format), or (2) model contains "/" (concatenated provider/model format). Specifying just a model name without provider is not considered an override and uses the default provider.

Comment on lines +151 to +157
return Transcript(
metadata=metadata,
redacted_query=redacted_query,
query_is_valid=True,
llm_response=summary.llm_response,
rag_chunks=[chunk.model_dump() for chunk in summary.rag_chunks],
truncated=False,
@coderabbitai coderabbitai bot Feb 23, 2026
⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check usage of query_is_valid and truncated fields
rg -n "query_is_valid|truncated" src/utils/transcripts.py src/utils/query.py -A 2 -B 2

Repository: lightspeed-core/lightspeed-stack

Length of output: 687


🏁 Script executed:

#!/bin/bash
# Check the full function context around lines 151-157
cat -n src/utils/transcripts.py | sed -n '140,170p'

Repository: lightspeed-core/lightspeed-stack

Length of output: 1127


🏁 Script executed:

#!/bin/bash
# Find the Transcript model definition to understand these fields
rg -n "class Transcript" src/ -A 15

Repository: lightspeed-core/lightspeed-stack

Length of output: 1378


🏁 Script executed:

#!/bin/bash
# Check if there are other places where Transcript objects are created
rg -n "Transcript(" src/ --type py -B 2 -A 2

Repository: lightspeed-core/lightspeed-stack

Length of output: 148


🏁 Script executed:

#!/bin/bash
# Get the full function context
cat -n src/utils/transcripts.py | sed -n '130,165p'

Repository: lightspeed-core/lightspeed-stack

Length of output: 1439


🏁 Script executed:

#!/bin/bash
# Search for Transcript instantiation more carefully
rg "Transcript\(" src/ -t py --max-count 20

Repository: lightspeed-core/lightspeed-stack

Length of output: 171


🏁 Script executed:

#!/bin/bash
# Check the function signature and docstring more clearly
cat -n src/utils/transcripts.py | sed -n '125,175p'

Repository: lightspeed-core/lightspeed-stack

Length of output: 1668


🏁 Script executed:

#!/bin/bash
# Find where create_transcript is called
rg "create_transcript" src/ -t py -B 3 -A 3

Repository: lightspeed-core/lightspeed-stack

Length of output: 1945


🏁 Script executed:

#!/bin/bash
# Check if there are any validation or truncation checks in the codebase
rg "query_is_valid|truncated" src/ -t py --max-count 30

Repository: lightspeed-core/lightspeed-stack

Length of output: 799


🏁 Script executed:

#!/bin/bash
# Check the context where create_transcript is called in query.py
cat -n src/utils/query.py | sed -n '200,280p'

Repository: lightspeed-core/lightspeed-stack

Length of output: 3569


🏁 Script executed:

#!/bin/bash
# Check if there's any validation logic before transcript creation
rg "query_is_valid|truncated" src/utils/query.py -B 5 -A 5

Repository: lightspeed-core/lightspeed-stack

Length of output: 58


🏁 Script executed:

#!/bin/bash
# Look for validation logic in query handling
rg "validate.*query|check.*query" src/ -t py --max-count 15

Repository: lightspeed-core/lightspeed-stack

Length of output: 269


🏁 Script executed:

#!/bin/bash
# Check if there's any query validation that happens before store_query_results is called
rg "store_query_results|query_is_valid" src/app/endpoints/ -t py -B 5 -A 2

Repository: lightspeed-core/lightspeed-stack

Length of output: 2184


🏁 Script executed:

#!/bin/bash
# Check if truncation happens anywhere in query processing
rg "truncated.*=|truncated.*True|truncated.*False" src/ -t py

Repository: lightspeed-core/lightspeed-stack

Length of output: 294


Pass parameters to control query_is_valid and truncated fields instead of hardcoding them.

The create_transcript() function hardcodes both query_is_valid=True (line 154) and truncated=False (line 157) with no way for callers to override these values. While the Transcript model defines these as required fields, the helper function provides no parameters to control them and no logic to determine their actual values. This prevents recording invalid queries or truncated responses in transcripts. Consider adding these as parameters to the function signature or implementing validation/truncation detection logic before transcript creation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/transcripts.py` around lines 151 - 157, The create_transcript
function currently hardcodes Transcript fields query_is_valid=True and
truncated=False; update create_transcript to accept two new parameters (e.g.,
query_is_valid: bool = True, truncated: bool = False) or implement detection
logic, then pass those parameter values into the Transcript constructor instead
of the hardcoded literals; modify the create_transcript signature and any
callers to provide appropriate values (or compute them inside create_transcript)
and ensure the returned Transcript uses the query_is_valid and truncated
variables rather than True/False constants.

Contributor Author
This is OK; these parameters are deprecated.

Contributor
@asimurka, understood! I've noted that these parameters are deprecated.

/)_/)
( •.•)
/>📝 noted for future reference
