📝 Walkthrough

Introduces in-memory micro-batching with an explicit flush for event buffering, rewrites live WebSocket handlers to batch/count payloads and enforce access checks, adds a Polar product→organization CLI, and updates SDKs and frontend components to consume simplified live count messages. Removed `getLiveEventInfo`.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant SDK as Client/SDK
    participant Buffer as EventBuffer (in-memory)
    participant Redis as Redis
    participant CH as ClickHouse
    participant WS as Pub/Sub / WebSocket
    SDK->>+Buffer: send event(s)
    Buffer->>Buffer: enqueue PendingEvent(s)
    alt timer or manual flush()
        Buffer-->>+Redis: execute single multi/pipeline (zadd, rpush, scripts...)
        Buffer-->>+CH: enqueue batched write / worker ingestion
        Redis-->>-WS: publish throttled notification (latest buffered event)
        Buffer->>-SDK: ack (if applicable)
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@packages/db/src/buffers/event-buffer.ts`:
- Around line 395-402: The catch block in the flushLocalBufferToRedis flow
currently logs the failure but drops events from eventsToFlush (causing data
loss); modify the catch handler in the method that builds and executes the Redis
multi (look for eventsToFlush and this.pendingEvents) to re-queue the failed
events back onto this.pendingEvents (preserving order—e.g., unshift or concat
depending on how pendingEvents is consumed), ensure you do this before
clearing/resetting isFlushing, and avoid duplicating events if a partial success
path exists (check any success branch that removes events). Also consider
incrementing a retry counter or emitting a metric/log entry when re-queuing so
persistent failures are observable.
In `@packages/payments/scripts/assign-product-to-org.ts`:
- Around line 42-50: The prompt object for the Polar API key uses the wrong
Inquirer type string ('string'); update the prompt definition for polarApiKey so
its type property is 'input' (i.e., change type: 'string' to type: 'input')
while keeping the existing name 'polarApiKey' and validate function intact to
ensure the same validation behavior.
🧹 Nitpick comments (3)
packages/payments/scripts/assign-product-to-org.ts (1)
130-133: Avoid duplicate Polar client instantiation.

A Polar client is already created at lines 54-57 inside `promptForInput()` but is not returned. Instead of creating it again here, consider returning the client from `promptForInput()` or moving client creation to `main()` only.

Option: Return the polar client from `promptForInput()`

```diff
- return {
-   ...polarCredentials,
-   ...restOfAnswers,
- };
+ return {
+   ...polarCredentials,
+   ...restOfAnswers,
+   polar, // Return the already-created client
+ };
  }

  async function main() {
    console.log('Assigning existing product to organization...');
    const input = await promptForInput();
-   const polar = new Polar({
-     accessToken: input.polarApiKey,
-     server: input.isProduction ? 'production' : 'sandbox',
-   });
+   const polar = input.polar;
```

packages/db/src/buffers/event-buffer.ts (1)
60-71: Consider cleaning up timers on shutdown.

The `flushTimer` and `publishTimer` are not cleaned up if the EventBuffer is destroyed or the process shuts down gracefully. This could lead to timer leaks or callbacks firing after cleanup.

Consider adding a cleanup/destroy method:

```ts
public destroy() {
  if (this.flushTimer) {
    clearTimeout(this.flushTimer);
    this.flushTimer = null;
  }
  if (this.publishTimer) {
    clearTimeout(this.publishTimer);
    this.publishTimer = null;
  }
}
```

packages/db/src/buffers/event-buffer.test.ts (1)
83-157: Consider verifying calculated duration in the test.

The test validates that events move to the buffer correctly but doesn't verify that the duration is actually calculated. Consider adding a check that processes the buffer and verifies the duration field is set correctly (e.g., ~1000ms between view1 and view2).
```diff
  } catch (error) {
-   this.logger.error('Failed to add event to Redis buffer', { error });
+   this.logger.error('Failed to flush local buffer to Redis', {
+     error,
+     eventCount: eventsToFlush.length,
+   });
  } finally {
    this.isFlushing = false;
  }
```
Events may be lost on Redis flush failure.

If `multi.exec()` fails, the events in `eventsToFlush` are discarded without retry. For critical event tracking, consider re-queuing failed events back to `pendingEvents` in the catch block.
Proposed fix to re-queue failed events
} catch (error) {
this.logger.error('Failed to flush local buffer to Redis', {
error,
eventCount: eventsToFlush.length,
});
+ // Re-queue failed events for retry
+ this.pendingEvents = [...eventsToFlush, ...this.pendingEvents];
} finally {
this.isFlushing = false;
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
} catch (error) {
  this.logger.error('Failed to flush local buffer to Redis', {
    error,
    eventCount: eventsToFlush.length,
  });
  // Re-queue failed events for retry
  this.pendingEvents = [...eventsToFlush, ...this.pendingEvents];
} finally {
  this.isFlushing = false;
}
```
🤖 Prompt for AI Agents
In `@packages/db/src/buffers/event-buffer.ts` around lines 395 - 402, The catch
block in the flushLocalBufferToRedis flow currently logs the failure but drops
events from eventsToFlush (causing data loss); modify the catch handler in the
method that builds and executes the Redis multi (look for eventsToFlush and
this.pendingEvents) to re-queue the failed events back onto this.pendingEvents
(preserving order—e.g., unshift or concat depending on how pendingEvents is
consumed), ensure you do this before clearing/resetting isFlushing, and avoid
duplicating events if a partial success path exists (check any success branch
that removes events). Also consider incrementing a retry counter or emitting a
metric/log entry when re-queuing so persistent failures are observable.
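The re-queue pattern this comment asks for can be exercised in isolation. The sketch below is illustrative only — the class, field names, and the injected `sink` stand in for the actual `EventBuffer` and its Redis multi/pipeline:

```typescript
// Minimal sketch of flush-with-requeue. The sink is an injected async
// writer standing in for the Redis multi; all names are hypothetical.
type PendingEvent = { id: number };

class RequeueBuffer {
  private pendingEvents: PendingEvent[] = [];
  private isFlushing = false;

  constructor(private sink: (events: PendingEvent[]) => Promise<void>) {}

  add(event: PendingEvent) {
    this.pendingEvents.push(event);
  }

  get size() {
    return this.pendingEvents.length;
  }

  async flush(): Promise<void> {
    if (this.isFlushing || this.pendingEvents.length === 0) return;
    this.isFlushing = true;
    // Take the current batch out of the queue before writing.
    const eventsToFlush = this.pendingEvents.splice(0, this.pendingEvents.length);
    try {
      await this.sink(eventsToFlush);
    } catch {
      // Re-queue ahead of anything enqueued during the flush,
      // preserving the original order.
      this.pendingEvents = [...eventsToFlush, ...this.pendingEvents];
    } finally {
      this.isFlushing = false;
    }
  }
}
```

Because the batch is spliced out before the write, a failure restores it in front of any events added mid-flush, so order is kept and nothing is duplicated.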
```ts
{
  type: 'string',
  name: 'polarApiKey',
  message: 'Enter your Polar API key:',
  validate: (input: string) => {
    if (!input) return 'API key is required';
    return true;
  },
},
```
Use 'input' instead of 'string' for inquirer prompt type.
Inquirer's prompt type should be 'input' for text input fields, not 'string'.
Proposed fix
{
- type: 'string',
+ type: 'input',
name: 'polarApiKey',
message: 'Enter your Polar API key:',
validate: (input: string) => {
if (!input) return 'API key is required';
return true;
},
},📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
{
  type: 'input',
  name: 'polarApiKey',
  message: 'Enter your Polar API key:',
  validate: (input: string) => {
    if (!input) return 'API key is required';
    return true;
  },
},
```
🤖 Prompt for AI Agents
In `@packages/payments/scripts/assign-product-to-org.ts` around lines 42 - 50, The
prompt object for the Polar API key uses the wrong Inquirer type string
('string'); update the prompt definition for polarApiKey so its type property is
'input' (i.e., change type: 'string' to type: 'input') while keeping the
existing name 'polarApiKey' and validate function intact to ensure the same
validation behavior.
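The corrected prompt definition can be checked without running Inquirer at all, because its `validate` callback is a pure function. The object below mirrors the suggestion (the surrounding usage is illustrative, not the actual script):

```typescript
// Corrected prompt definition with type 'input'. The validate callback
// is pure, so its behavior can be asserted directly.
const polarApiKeyPrompt = {
  type: 'input' as const,
  name: 'polarApiKey' as const,
  message: 'Enter your Polar API key:',
  validate: (input: string): true | string => {
    if (!input) return 'API key is required';
    return true;
  },
};

// Direct checks against the validator, no prompt UI needed.
console.log(polarApiKeyPrompt.validate(''));        // 'API key is required'
console.log(polarApiKeyPrompt.validate('sk_test')); // true
```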
0829e7a to a672b73
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/api/src/controllers/live.controller.ts`:
- Around line 78-82: The access-denied branch sends a raw string while normal
websocket messages use SuperJSON; update the branch in live.controller.ts where
you check "if (!access)" to produce a consistent wire format by either (a)
serializing a structured error payload with the same SuperJSON format used
elsewhere (import SuperJSON and send SuperJSON.stringify({ type: 'error',
message: 'No access' }) via socket.send) or (b) closing the socket with a proper
WebSocket policy code/reason (e.g., socket.close(1008, 'No access')) so clients
can handle it uniformly; change the socket.send/socket.close calls accordingly
and ensure the error shape matches other message handlers.
- Around line 21-24: The sendCount function currently calls
eventBuffer.getActiveVisitorCount(params.projectId) and then socket.send(...)
without handling rejections; update sendCount to catch errors from
getActiveVisitorCount and from socket.send (e.g., by adding a .catch on the
promise chain or using try/catch if converted to async), and guard socket.send
with a socket.readyState check (or handle errors from send) so any failure from
eventBuffer.getActiveVisitorCount or socket.send is logged/handled and does not
become an unhandled rejection; reference the sendCount function,
eventBuffer.getActiveVisitorCount, and socket.send when locating where to add
the error handling.
In `@apps/start/src/components/events/event-listener.tsx`:
- Around line 19-23: The project-level increment handler for
useWS(`/live/events/${projectId}`, ...) unconditionally calls counter.set and
causes the badge to drift when table filters are active; update the listener to
respect the table's current filters: either (A) stop mounting/subscribe the WS
when filters are active (use the same filter state used by the events table
toolbar to conditionally call useWS), or (B) enhance the WS payload to include
enough event metadata and change the handler callback to inspect that metadata
against the currentFilters/dateRange and only call counter.set((prev) => prev +
count) when the incoming events actually match the visible table criteria (use
the existing currentFilters/dateRange state and projectId to scope the check).
Ensure the change references the existing useWS hook, the counter.set call, and
the table's currentFilters/dateRange state so the badge only increments for
visible events.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 4e18972c-8e01-4d23-8bd3-8a1a38cf9b65
📒 Files selected for processing (4)
- apps/api/src/controllers/live.controller.ts
- apps/start/src/components/events/event-listener.tsx
- apps/start/src/components/events/table/index.tsx
- apps/start/src/components/onboarding/onboarding-verify-listener.tsx
```ts
if (!access) {
  socket.send('No access');
  socket.close();
  return;
}
```
Keep the websocket wire format consistent on access denial.
This branch sends a raw string on an endpoint whose normal messages are SuperJSON. That makes denied connections speak a different protocol than successful ones, which can break client-side parsing. Prefer closing with a policy code/reason, or send a structured SuperJSON error payload before closing.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/api/src/controllers/live.controller.ts` around lines 78 - 82, The
access-denied branch sends a raw string while normal websocket messages use
SuperJSON; update the branch in live.controller.ts where you check "if
(!access)" to produce a consistent wire format by either (a) serializing a
structured error payload with the same SuperJSON format used elsewhere (import
SuperJSON and send SuperJSON.stringify({ type: 'error', message: 'No access' })
via socket.send) or (b) closing the socket with a proper WebSocket policy
code/reason (e.g., socket.close(1008, 'No access')) so clients can handle it
uniformly; change the socket.send/socket.close calls accordingly and ensure the
error shape matches other message handlers.
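Option (b) from the prompt — closing with a policy code instead of a raw string message — can be sketched against a minimal socket interface. The interface and helper name are illustrative; 1008 is the standard "Policy Violation" close code from RFC 6455:

```typescript
// Sketch of access denial via a WebSocket close code rather than a raw
// string on the message channel, so denied clients can branch on
// event.code uniformly instead of parsing a non-SuperJSON payload.
interface ClosableSocket {
  close(code?: number, reason?: string): void;
}

function denyAccess(socket: ClosableSocket): void {
  // No payload on the message channel; the close frame carries the reason.
  socket.close(1008, 'No access');
}
```

On the client, the close handler then receives `code === 1008` and can treat it as an authorization failure without special-casing the message parser.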
```diff
  useWS<{ count: number }>(
    `/live/events/${projectId}`,
-   (event) => {
-     if (event) {
-       const isProfilePage = !!params?.profileId;
-       if (isProfilePage) {
-         const profile = 'profile' in event ? event.profile : null;
-         if (profile?.id === params?.profileId) {
-           counter.set((prev) => prev + 1);
-         }
-         return;
-       }
-       counter.set((prev) => prev + 1);
-     }
-   },
+   ({ count }) => {
+     counter.set((prev) => prev + count);
+   },
```
Project-wide batch counts will drift from the filtered table.
This listener is rendered inside the events table toolbar, but the new { count } payload only tells us how many events were written for the project. Once a user applies a date range or any other table filter, the badge can show “N new events” even though refresh adds zero visible rows. Either hide the listener while filters are active, or send enough event metadata to scope the increment to the current view.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/start/src/components/events/event-listener.tsx` around lines 19 - 23,
The project-level increment handler for useWS(`/live/events/${projectId}`, ...)
unconditionally calls counter.set and causes the badge to drift when table
filters are active; update the listener to respect the table's current filters:
either (A) stop mounting/subscribe the WS when filters are active (use the same
filter state used by the events table toolbar to conditionally call useWS), or
(B) enhance the WS payload to include enough event metadata and change the
handler callback to inspect that metadata against the currentFilters/dateRange
and only call counter.set((prev) => prev + count) when the incoming events
actually match the visible table criteria (use the existing
currentFilters/dateRange state and projectId to scope the check). Ensure the
change references the existing useWS hook, the counter.set call, and the table's
currentFilters/dateRange state so the badge only increments for visible events.
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/worker/src/boot-workers.ts (1)
250-267: ⚠️ Potential issue | 🟠 Major

Success metrics are no longer being recorded for job durations.

The `eventsGroupJobDuration` histogram is only observed on `failed` events (lines 254-256). The previously existing `completed` event listener that recorded successful job durations was removed. This breaks observability: you'll only have metrics for failed jobs, not successful ones.

🔧 Proposed fix to add success metrics
Add a `completed` handler alongside the `failed` handler:

```diff
+ (worker as Worker).on('completed', (job) => {
+   if (job?.processedOn && job?.finishedOn) {
+     const elapsed = job.finishedOn - job.processedOn;
+     eventsGroupJobDuration.observe(
+       { name: worker.name, status: 'success' },
+       elapsed
+     );
+   }
+ });
  (worker as Worker).on('failed', (job) => {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/worker/src/boot-workers.ts` around lines 250 - 267, The metrics handler currently only observes durations inside the Worker.on('failed', ...) callback, so successful runs are not recorded; add a parallel Worker.on('completed', (job) => { ... }) handler that mirrors the failed handler's logic: check job.processedOn and job.finishedOn, compute elapsed = job.finishedOn - job.processedOn, call eventsGroupJobDuration.observe with { name: worker.name, status: 'completed' } and elapsed, and log the success (using worker.name and job.id/job.data) so successful job durations are captured just like failures.
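The completed/failed instrumentation described above can be exercised standalone. The stub histogram and plain `EventEmitter` below are illustrative stand-ins for the Prometheus histogram and BullMQ `Worker`, not the real types:

```typescript
import { EventEmitter } from 'node:events';

// Sketch of duration metrics on both completed and failed jobs, using a
// plain EventEmitter in place of a BullMQ Worker and a stub histogram.
type Labels = { name: string; status: string };

class StubHistogram {
  observations: Array<{ labels: Labels; value: number }> = [];
  observe(labels: Labels, value: number) {
    this.observations.push({ labels, value });
  }
}

interface JobTimings {
  processedOn?: number;
  finishedOn?: number;
}

function instrument(worker: EventEmitter, name: string, histogram: StubHistogram) {
  const record = (status: string) => (job: JobTimings) => {
    if (job?.processedOn && job?.finishedOn) {
      histogram.observe({ name, status }, job.finishedOn - job.processedOn);
    }
  };
  // Observe successes as well as failures so both show up in the metric.
  worker.on('completed', record('success'));
  worker.on('failed', record('failed'));
}
```

Sharing one `record` factory keeps the two handlers from drifting apart, which is how the success path got dropped in the first place.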
🧹 Nitpick comments (6)
apps/api/src/controllers/live.controller.ts (1)
21-30: Error handling added, but consider logging failures.

The `.catch()` handler prevents unhandled rejections (addressing the previous review). However, silently sending `'0'` may mask underlying issues. Consider adding a log statement for debugging.

💡 Suggested improvement

```diff
  .catch(() => {
-   socket.send('0');
+   req.log?.warn?.('Failed to get active visitor count');
+   socket.send('0');
  });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/src/controllers/live.controller.ts` around lines 21 - 30, The catch block in sendCount swallows errors and returns '0' silently; update the .catch on eventBuffer.getActiveVisitorCount(params.projectId) to log the error before sending '0' so failures are visible in logs (include the error object and context like projectId and that it occurred in sendCount), referencing sendCount, eventBuffer.getActiveVisitorCount and socket.send to locate the code.

apps/worker/src/boot-workers.ts (1)
119-121: Silent continue may hide configuration issues.

When `eventsGroupQueues[index]` is undefined, the code silently continues. This could mask misconfiguration where shards are expected but not available. Consider logging a warning.

💡 Suggested improvement

```diff
  if (!queue) {
+   logger.warn(`Queue for events_${index} not found, skipping`, { index });
    continue;
  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/worker/src/boot-workers.ts` around lines 119 - 121, When eventsGroupQueues[index] is undefined the code silently continues which can hide misconfiguration; update the block that checks `if (!queue) { continue; }` to log a warning before continuing (include identifying info such as `index`, the `eventsGroupQueues.length` or the related shard/group name) using the existing logger (e.g. `processLogger` or the module's logger) so operators can detect missing queues, then continue as before.

packages/redis/cachable.ts (1)
135-148: Date parsing may have edge cases.

The `DATE_REGEX` matches ISO 8601 dates ending in `Z`, but some valid ISO dates may use timezone offsets like `+00:00` instead of `Z`. Also, strings that accidentally match the pattern (e.g., in user data) would be converted to Dates.

This is generally fine for internal cache data, but be aware of these edge cases if caching user-provided content.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/redis/cachable.ts` around lines 135 - 148, The DATE_REGEX and parseCache conversion are too strict (only trailing 'Z') and may falsely convert arbitrary strings; update DATE_REGEX to accept both 'Z' and timezone offsets (e.g., allow (Z|[+-]\d{2}:\d{2})) and in parseCache validate that the matched string is a real date before converting by creating a Date and checking !isNaN(date.getTime()); reference the DATE_REGEX constant and parseCache function to locate and change the logic.

apps/api/src/hooks/is-bot.hook.ts (1)
46-47: Response body change: Ensure clients handle the new format.

The response now includes `{ bot }` instead of an empty body. If any clients parse the 202 response body for bot detection, they'll need to handle this new structure.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/src/hooks/is-bot.hook.ts` around lines 46 - 47, The response now returns an object via reply.status(202).send({ bot }) in is-bot.hook.ts; update any clients that consume this 202 response to expect and parse the JSON body with a top-level "bot" property (e.g., response.json().bot or checking body.bot) or, if you must preserve the prior empty-body contract, change the server call site reply.status(202).send({ bot }) back to reply.status(202).send() and document the contract; locate the usage of reply.status(202).send({ bot }) and update client-side handlers or revert the server response to maintain compatibility.

packages/db/src/services/project.service.ts (1)
106-109: Consider using the custom query builder for ClickHouse queries.

Per coding guidelines, ClickHouse queries should use the custom query builder from `packages/db/src/clickhouse/query-builder.ts`. This function uses raw string interpolation with `sqlstring.escape`. While `sqlstring.escape` provides SQL injection protection, consider aligning with the project's query builder pattern for consistency.

As per coding guidelines: "When writing ClickHouse queries, always use the custom query builder from `./packages/db/src/clickhouse/query-builder.ts`"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/db/src/services/project.service.ts` around lines 106 - 109, The function getProjectEventsCount builds a raw ClickHouse SQL string with sqlstring.escape and should instead use the project's ClickHouse query builder; update getProjectEventsCount to call the query-builder helper (from ./clickhouse/query-builder.ts) to construct the SELECT count(*) FROM TABLE_NAMES.events WHERE project_id = ? AND name NOT IN (...) query and pass projectId as a parameter (or use the builder's escaping API), then invoke chQuery with the builder-produced query; this replaces the direct string interpolation and aligns getProjectEventsCount and chQuery usage with the project's query-builder pattern.

apps/worker/src/jobs/events.incoming-event.ts (1)
164-197: Variable shadowing: `session` is redeclared.

Line 166 declares `const session = ...`, which shadows the `session` variable already destructured from `jobPayload` at line 95. While this works because the outer `session` is unused in this branch (since `uaInfo.isServer || isTimestampFromThePast` means the function returns early at line 197), the naming collision is confusing.

Consider renaming to clarify intent:

♻️ Suggested rename for clarity

```diff
- const session =
+ const existingSession =
    profileId && !isTimestampFromThePast
      ? await sessionBuffer.getExistingSession({
          profileId,
          projectId,
        })
      : null;

  const payload = {
    ...baseEvent,
-   deviceId: session?.device_id ?? '',
-   sessionId: session?.id ?? '',
-   referrer: session?.referrer ?? undefined,
-   referrerName: session?.referrer_name ?? undefined,
-   referrerType: session?.referrer_type ?? undefined,
-   path: session?.exit_path ?? baseEvent.path,
-   origin: session?.exit_origin ?? baseEvent.origin,
-   os: session?.os ?? baseEvent.os,
-   osVersion: session?.os_version ?? baseEvent.osVersion,
-   browserVersion: session?.browser_version ?? baseEvent.browserVersion,
-   browser: session?.browser ?? baseEvent.browser,
-   device: session?.device ?? baseEvent.device,
-   brand: session?.brand ?? baseEvent.brand,
-   model: session?.model ?? baseEvent.model,
-   city: session?.city ?? baseEvent.city,
-   country: session?.country ?? baseEvent.country,
-   region: session?.region ?? baseEvent.region,
-   longitude: session?.longitude ?? baseEvent.longitude,
-   latitude: session?.latitude ?? baseEvent.latitude,
+   deviceId: existingSession?.device_id ?? '',
+   sessionId: existingSession?.id ?? '',
+   referrer: existingSession?.referrer ?? undefined,
+   referrerName: existingSession?.referrer_name ?? undefined,
+   referrerType: existingSession?.referrer_type ?? undefined,
+   path: existingSession?.exit_path ?? baseEvent.path,
+   origin: existingSession?.exit_origin ?? baseEvent.origin,
+   os: existingSession?.os ?? baseEvent.os,
+   osVersion: existingSession?.os_version ?? baseEvent.osVersion,
+   browserVersion: existingSession?.browser_version ?? baseEvent.browserVersion,
+   browser: existingSession?.browser ?? baseEvent.browser,
+   device: existingSession?.device ?? baseEvent.device,
+   brand: existingSession?.brand ?? baseEvent.brand,
+   model: existingSession?.model ?? baseEvent.model,
+   city: existingSession?.city ?? baseEvent.city,
+   country: existingSession?.country ?? baseEvent.country,
+   region: existingSession?.region ?? baseEvent.region,
+   longitude: existingSession?.longitude ?? baseEvent.longitude,
+   latitude: existingSession?.latitude ?? baseEvent.latitude,
  };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/worker/src/jobs/events.incoming-event.ts` around lines 164 - 197, The branch declares a new const named session which shadows the outer session destructured from jobPayload; rename the inner variable (e.g., existingSession or bufferedSession) returned from sessionBuffer.getExistingSession and update all subsequent uses in that branch (device_id, id, referrer, referrer_name, referrer_type, exit_path, exit_origin, os, os_version, browser_version, browser, device, brand, model, city, country, region, longitude, latitude) so they reference the new name before calling createEventAndNotify, leaving the outer session identifier unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/api/src/controllers/track.controller.ts`:
- Around line 190-194: The server-side groupId logic in track.controller.ts sets
groupId to undefined when uaInfo.isServer is true and payload.profileId is
missing; update the assignment for groupId to mirror event.controller.ts by
using projectId and payload.profileId with a fallback to generateId() so groupId
is always a string: adjust the expression that sets groupId (currently using
uaInfo.isServer, payload.profileId, deviceId) to use
`${projectId}:${payload.profileId ?? generateId()}` when uaInfo.isServer is
true, ensuring subsequent queue job usage receives a string groupId.
In `@apps/worker/src/boot-workers.ts`:
- Line 95: The function bootWorkers is currently synchronous but is awaited at
its call site in apps/worker/src/index.ts; either remove the unnecessary await
there or make bootWorkers return a Promise that resolves when workers are
actually initialized. Locate the bootWorkers() declaration and decide: if
initialization is synchronous, update the index.ts call to remove the await; if
initialization involves async work (e.g., starting services, connecting to
queues), change bootWorkers to an async function that performs the async steps
and returns a Promise (or explicitly returns after awaiting those operations) so
callers using await bootWorkers() correctly wait for readiness.
In `@apps/worker/src/utils/session-handler.ts`:
- Around line 6-35: The CHANGE_DELAY_THROTTLE_MAP used by extendSessionEndJob
grows unbounded and causes a memory leak; replace the plain Map with a
bounded/TTL cache (e.g., use LRUCache from 'lru-cache' or implement periodic
pruning) so entries auto-expire — specifically swap CHANGE_DELAY_THROTTLE_MAP
for an LRUCache<string, number> configured with a sensible max (e.g., 100k) and
ttl (e.g., CHANGE_DELAY_THROTTLE_MS * 2) or add a cleanup routine that removes
keys older than CHANGE_DELAY_THROTTLE_MS, ensuring extendSessionEndJob continues
to read/write the cache the same way.
In `@packages/db/src/services/project.service.ts`:
- Around line 28-29: Several project mutations are missing cache invalidation:
after any project update or delete you must call
getProjectByIdCached.clear(projectId) to purge both L1 and L2 caches; add this
call in the project router handlers that schedule/cancel deletion (the functions
that set/unset deleteAt), in the delete service where db.project.delete(...) is
invoked, and in the worker job that updates eventsCount, following the existing
getProjectByIdCached.clear(...) usage pattern elsewhere in the file to ensure
callers see fresh project data immediately.
In `@packages/redis/cachable.ts`:
- Around line 264-276: cachedFn.set's inner function currently returns undefined
when hasResult(payload) is false and swallows errors in the .catch, so callers
expecting Promise<'OK'> can break; update the implementation of cachedFn.set
(the setter returned by cachedFn.set) to always return a Promise<'OK'>: when
hasResult(payload) is false return Promise.resolve('OK'), and when writing to
Redis use getRedisCache().setex(key, expireInSec,
JSON.stringify(payload)).then(() => 'OK') while removing the silent .catch (or
rethrow the error via .catch(err => Promise.reject(err))) so errors aren't
swallowed; keep the in-memory lruCache.set(key, payload) behavior when hasResult
is true and use getKey, lruCache, and expireInSec as in the current code.
In `@packages/sdks/react-native/index.ts`:
- Around line 40-42: The track method currently always injects __path by
spreading properties first then adding __path: this.lastPath, which overwrites
any caller-provided __path and can emit an empty string before the first
screenView; change track (the track(name: string, properties?: TrackProperties)
method) so it only adds __path when properties?.__path is undefined AND
this.lastPath is a non-empty value (i.e., if properties has its own __path,
leave it intact; if this.lastPath is falsy, don't add __path), then call
super.track with the merged properties.
In `@packages/sdks/web/src/index.ts`:
- Around line 295-297: The track method currently always overwrites any provided
__path by spreading properties then setting __path: this.lastPath; change it to
only inject __path when the caller didn't supply one: in track(name: string,
properties?: TrackProperties) check if properties?.__path is undefined (or not
present) and only then add __path: this.lastPath to the merged properties; keep
the rest of the merge behavior intact so explicit __path values from callers are
preserved and the safe fallback pattern used elsewhere is reused.
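The `__path` fallback described in the last two bullets can be sketched as a small pure helper. The helper and the loose `TrackProperties` type below are illustrative, not the actual SDK code:

```typescript
// Sketch of conditional __path injection: only add __path when the
// caller didn't supply one and lastPath is non-empty. TrackProperties
// is a loose stand-in for the SDK's type.
type TrackProperties = Record<string, unknown> & { __path?: string };

function withPath(
  properties: TrackProperties | undefined,
  lastPath: string | undefined,
): TrackProperties {
  const merged: TrackProperties = { ...properties };
  // Preserve an explicit __path from the caller; otherwise fall back,
  // and never inject an empty string before the first screenView.
  if (merged.__path === undefined && lastPath) {
    merged.__path = lastPath;
  }
  return merged;
}
```

A `track` method could then pass `withPath(properties, this.lastPath)` to its superclass, fixing both the caller-override and empty-string cases in one place.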
---
Outside diff comments:
In `@apps/worker/src/boot-workers.ts`:
- Around line 250-267: The metrics handler currently only observes durations
inside the Worker.on('failed', ...) callback, so successful runs are not
recorded; add a parallel Worker.on('completed', (job) => { ... }) handler that
mirrors the failed handler's logic: check job.processedOn and job.finishedOn,
compute elapsed = job.finishedOn - job.processedOn, call
eventsGroupJobDuration.observe with { name: worker.name, status: 'completed' }
and elapsed, and log the success (using worker.name and job.id/job.data) so
successful job durations are captured just like failures.
---
Nitpick comments:
In `@apps/api/src/controllers/live.controller.ts`:
- Around line 21-30: The catch block in sendCount swallows errors and returns
'0' silently; update the .catch on
eventBuffer.getActiveVisitorCount(params.projectId) to log the error before
sending '0' so failures are visible in logs (include the error object and
context like projectId and that it occurred in sendCount), referencing
sendCount, eventBuffer.getActiveVisitorCount and socket.send to locate the code.
In `@apps/api/src/hooks/is-bot.hook.ts`:
- Around line 46-47: The response now returns an object via
reply.status(202).send({ bot }) in is-bot.hook.ts; update any clients that
consume this 202 response to expect and parse the JSON body with a top-level
"bot" property (e.g., response.json().bot or checking body.bot) or, if you must
preserve the prior empty-body contract, change the server call site
reply.status(202).send({ bot }) back to reply.status(202).send() and document
the contract; locate the usage of reply.status(202).send({ bot }) and update
client-side handlers or revert the server response to maintain compatibility.
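A client-side parse of the new 202 body could be sketched as below. Only the top-level `bot` property comes from the hook itself; the `BotResponse` shape and `isFlaggedAsBot` name are illustrative assumptions:

```typescript
// Sketch: tolerate both the old empty-body 202 and the new JSON body.
interface BotResponse {
  bot?: unknown;
}

function isFlaggedAsBot(body: unknown): boolean {
  if (typeof body !== "object" || body === null) {
    return false; // old empty-body contract: treat as "not flagged"
  }
  const { bot } = body as BotResponse;
  return Boolean(bot);
}
```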
In `@apps/worker/src/boot-workers.ts`:
- Around line 119-121: When eventsGroupQueues[index] is undefined the code
silently continues which can hide misconfiguration; update the block that checks
`if (!queue) { continue; }` to log a warning before continuing (include
identifying info such as `index`, the `eventsGroupQueues.length` or the related
shard/group name) using the existing logger (e.g. `processLogger` or the
module's logger) so operators can detect missing queues, then continue as
before.
In `@apps/worker/src/jobs/events.incoming-event.ts`:
- Around line 164-197: The branch declares a new const named session which
shadows the outer session destructured from jobPayload; rename the inner
variable (e.g., existingSession or bufferedSession) returned from
sessionBuffer.getExistingSession and update all subsequent uses in that branch
(device_id, id, referrer, referrer_name, referrer_type, exit_path, exit_origin,
os, os_version, browser_version, browser, device, brand, model, city, country,
region, longitude, latitude) so they reference the new name before calling
createEventAndNotify, leaving the outer session identifier unchanged.
In `@packages/db/src/services/project.service.ts`:
- Around line 106-109: The function getProjectEventsCount builds a raw
ClickHouse SQL string with sqlstring.escape and should instead use the project's
ClickHouse query builder; update getProjectEventsCount to call the query-builder
helper (from ./clickhouse/query-builder.ts) to construct the SELECT count(*)
FROM TABLE_NAMES.events WHERE project_id = ? AND name NOT IN (...) query and
pass projectId as a parameter (or use the builder's escaping API), then invoke
chQuery with the builder-produced query; this replaces the direct string
interpolation and aligns getProjectEventsCount and chQuery usage with the
project's query-builder pattern.
In `@packages/redis/cachable.ts`:
- Around line 135-148: The DATE_REGEX and parseCache conversion are too strict
(only trailing 'Z') and may falsely convert arbitrary strings; update DATE_REGEX
to accept both 'Z' and timezone offsets (e.g., allow (Z|[+-]\d{2}:\d{2})) and in
parseCache validate that the matched string is a real date before converting by
creating a Date and checking !isNaN(date.getTime()); reference the DATE_REGEX
constant and parseCache function to locate and change the logic.
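One way the relaxed pattern plus real-date validation could look is sketched below; the regex and `reviveDate` name are illustrative, not the actual `cachable.ts` code:

```typescript
// Sketch: accept trailing 'Z' or a numeric UTC offset, then confirm the
// string parses to a real date before converting it.
const DATE_REGEX =
  /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(?:\.\d+)?(?:Z|[+-]\d{2}:\d{2})$/;

function reviveDate(value: string): Date | string {
  if (!DATE_REGEX.test(value)) {
    return value;
  }
  const date = new Date(value);
  // Guard against look-alike strings (e.g. month 99) that match the
  // pattern but do not parse to a valid date.
  return Number.isNaN(date.getTime()) ? value : date;
}
```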
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d705c21b-f394-4605-b0d0-9dd674315f4e
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (26)
- `apps/api/package.json`
- `apps/api/src/bots/index.ts`
- `apps/api/src/controllers/live.controller.ts`
- `apps/api/src/controllers/manage.controller.ts`
- `apps/api/src/controllers/track.controller.ts`
- `apps/api/src/hooks/is-bot.hook.ts`
- `apps/api/src/routes/track.router.ts`
- `apps/api/src/utils/ids.ts`
- `apps/worker/package.json`
- `apps/worker/src/boot-workers.ts`
- `apps/worker/src/jobs/cron.salt.ts`
- `apps/worker/src/jobs/events.incoming-event.ts`
- `apps/worker/src/jobs/events.incoming-events.test.ts`
- `apps/worker/src/metrics.ts`
- `apps/worker/src/utils/session-handler.ts`
- `biome.json`
- `packages/db/src/services/clients.service.ts`
- `packages/db/src/services/notification.service.ts`
- `packages/db/src/services/project.service.ts`
- `packages/db/src/services/salt.service.ts`
- `packages/queue/src/queues.ts`
- `packages/redis/cachable.ts`
- `packages/sdks/react-native/index.ts`
- `packages/sdks/web/src/index.ts`
- `packages/trpc/src/routers/project.ts`
- `pnpm-workspace.yaml`
✅ Files skipped from review due to trivial changes (4)
- packages/db/src/services/notification.service.ts
- pnpm-workspace.yaml
- apps/api/package.json
- apps/worker/package.json
```diff
 const groupId = uaInfo.isServer
   ? payload.profileId
     ? `${projectId}:${payload.profileId}`
     : `${projectId}:${generateId()}`
-  : undefined
+  : deviceId;
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Check if there are other places that rely on server event ordering
rg -n 'isServer' --type=ts -C3 | head -60
```

Repository: Openpanel-dev/openpanel
Length of output: 3000

🏁 Script executed:

```bash
#!/bin/bash
# Find and examine the GroupQueue implementation
rg -n 'class GroupQueue|interface GroupQueue' --type=ts -A 20 | head -100
```

Length of output: 49

🏁 Script executed:

```bash
#!/bin/bash
# Check the queues.ts file to understand how groupId is used and what GroupQueue does
cat -n packages/queue/src/queues.ts | head -100
```

Length of output: 3647

🏁 Script executed:

```bash
#!/bin/bash
# Search for how groupId is used in the codebase
rg -n 'groupId' --type=ts -B 2 -A 2 | head -80
```

Length of output: 3385

🏁 Script executed:

```bash
#!/bin/bash
# Check if groupmq is a local package or external dependency
fd -t f 'package.json' -x grep -l 'groupmq' {}
```

Length of output: 147

🏁 Script executed:

```bash
#!/bin/bash
# Look for documentation or tests related to groupId ordering
rg -n 'orderMs|groupId' --type=ts apps/api/src/controllers/ -B 3 -A 3 | head -100
```

Length of output: 3840

🏁 Script executed:

```bash
#!/bin/bash
# Check event.controller.ts for comparison with track.controller.ts
cat -n apps/api/src/controllers/event.controller.ts | head -80
```

Length of output: 2480

🏁 Script executed:

```bash
#!/bin/bash
# Check if there are tests for server event handling
find . -name '*.test.ts' -o -name '*.spec.ts' | xargs rg -l 'isServer|groupId' 2>/dev/null | head -10
```

Length of output: 171

🏁 Script executed:

```bash
#!/bin/bash
# Look for any comments or documentation about groupId behavior
rg -n 'groupId|group.*id' --type=ts packages/queue -B 2 -A 2 | head -100
```

Length of output: 605

🏁 Script executed:

```bash
#!/bin/bash
# Check the complete track.controller.ts around lines 190-230 to see full context
sed -n '185,235p' apps/api/src/controllers/track.controller.ts | cat -n
```

Length of output: 1705
Align server event groupId handling with event.controller.ts pattern.
Lines 190-194 differ from the equivalent logic in event.controller.ts (line 40-41): when uaInfo.isServer is true and there's no profileId, groupId becomes undefined. This is then passed to the queue job at line 230, whereas event.controller.ts always generates a string groupId using ${projectId}:${profileId ?? generateId()}.
Change lines 190-194 to match the pattern in event.controller.ts:
```ts
const groupId = uaInfo.isServer
  ? `${projectId}:${payload.profileId ?? generateId()}`
  : deviceId;
```
This ensures the groupId field in the queue job is always a string, consistent with how the other controller handles server events.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/api/src/controllers/track.controller.ts` around lines 190 - 194, The
server-side groupId logic in track.controller.ts sets groupId to undefined when
uaInfo.isServer is true and payload.profileId is missing; update the assignment
for groupId to mirror event.controller.ts by using projectId and
payload.profileId with a fallback to generateId() so groupId is always a string:
adjust the expression that sets groupId (currently using uaInfo.isServer,
payload.profileId, deviceId) to use `${projectId}:${payload.profileId ??
generateId()}` when uaInfo.isServer is true, ensuring subsequent queue job usage
receives a string groupId.
```diff
 }

-export async function bootWorkers() {
+export function bootWorkers() {
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Check how bootWorkers is called in the codebase
rg -n 'bootWorkers' --type=ts -C2
```

Length of output: 933

🏁 Script executed:

```bash
cat -n apps/worker/src/boot-workers.ts | head -150
```

Length of output: 5554

🏁 Script executed:

```bash
rg -n "workers\.run\(" --type=ts -A5 -B2
```

Length of output: 49

🏁 Script executed:

```bash
tail -n +150 apps/worker/src/boot-workers.ts
```

Length of output: 5538

🏁 Script executed:

```bash
rg "\.run\(\)" apps/worker/src/boot-workers.ts -B2 -A2
```

Length of output: 188

🏁 Script executed:

```bash
# Check if there's any git history showing if 'completed' handler was removed
git log -p --all -- apps/worker/src/boot-workers.ts | grep -A10 -B10 "completed" | head -50
```

Length of output: 49

🏁 Script executed:

```bash
# Check the metrics definition to understand what's expected
cat -n apps/worker/src/metrics.ts | head -100
```

Length of output: 3286
Remove unnecessary await or make bootWorkers() async.
The function is now synchronous but the call site in apps/worker/src/index.ts still uses await bootWorkers(). While this won't cause errors (awaiting a non-Promise resolves immediately), it misleads developers into thinking the await waits for initialization completion when the function returns immediately.
Either remove the await at the call site if workers are fully initialized synchronously, or make bootWorkers() return a Promise that resolves when workers are ready.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/worker/src/boot-workers.ts` at line 95, The function bootWorkers is
currently synchronous but is awaited at its call site in
apps/worker/src/index.ts; either remove the unnecessary await there or make
bootWorkers return a Promise that resolves when workers are actually
initialized. Locate the bootWorkers() declaration and decide: if initialization
is synchronous, update the index.ts call to remove the await; if initialization
involves async work (e.g., starting services, connecting to queues), change
bootWorkers to an async function that performs the async steps and returns a
Promise (or explicitly returns after awaiting those operations) so callers using
await bootWorkers() correctly wait for readiness.
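If `bootWorkers()` is made async, one possible shape is sketched below. This is purely illustrative: the `StartableWorker` interface and the assumption that `run()` resolves on readiness are inventions for the sketch (BullMQ-style workers may resolve `run()` on close rather than on startup, in which case a separate readiness signal would be needed):

```typescript
// Sketch: resolve only after every worker has signalled readiness, so
// `await bootWorkers()` at the call site genuinely waits.
interface StartableWorker {
  name: string;
  run(): Promise<void>; // assumed to resolve once the worker is ready
}

async function bootWorkersSketch(
  workers: StartableWorker[],
): Promise<string[]> {
  await Promise.all(workers.map((w) => w.run()));
  return workers.map((w) => w.name); // names of workers now running
}
```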
```ts
const CHANGE_DELAY_THROTTLE_MS = process.env.CHANGE_DELAY_THROTTLE_MS
  ? Number.parseInt(process.env.CHANGE_DELAY_THROTTLE_MS, 10)
  : 60_000; // 1 minute

const CHANGE_DELAY_THROTTLE_MAP = new Map<string, number>();

export async function extendSessionEndJob({
  projectId,
  deviceId,
}: {
  projectId: string;
  deviceId: string;
}) {
  const last = CHANGE_DELAY_THROTTLE_MAP.get(`${projectId}:${deviceId}`) ?? 0;
  const isThrottled = Date.now() - last < CHANGE_DELAY_THROTTLE_MS;

  if (isThrottled) {
    return;
  }

  const jobId = getSessionEndJobId(projectId, deviceId);
  const job = await sessionsQueue.getJob(jobId);

  if (!job) {
    return;
  }

  await job.changeDelay(SESSION_TIMEOUT);
  CHANGE_DELAY_THROTTLE_MAP.set(`${projectId}:${deviceId}`, Date.now());
}
```
Memory leak: CHANGE_DELAY_THROTTLE_MAP grows unbounded.
The throttle map stores entries for each unique projectId:deviceId combination but never removes them. In a long-running worker process, this will continuously grow as new devices interact with the system, eventually consuming significant memory.
Consider one of these approaches:

- Use a TTL-based cache (e.g., `lru-cache` with TTL)
- Periodically prune stale entries
- Store throttle state in Redis alongside the job
🛠️ Suggested fix using Map with periodic cleanup

```diff
-const CHANGE_DELAY_THROTTLE_MAP = new Map<string, number>();
+const CHANGE_DELAY_THROTTLE_MAP = new Map<string, number>();
+
+// Cleanup stale throttle entries every 10 minutes
+setInterval(() => {
+  const now = Date.now();
+  for (const [key, timestamp] of CHANGE_DELAY_THROTTLE_MAP) {
+    // Remove entries older than 2x the throttle window
+    if (now - timestamp > CHANGE_DELAY_THROTTLE_MS * 2) {
+      CHANGE_DELAY_THROTTLE_MAP.delete(key);
+    }
+  }
+}, 10 * 60 * 1000);
```

Alternatively, consider using `lru-cache`:

```ts
import { LRUCache } from 'lru-cache';

const CHANGE_DELAY_THROTTLE_MAP = new LRUCache<string, number>({
  max: 100_000, // Maximum entries
  ttl: CHANGE_DELAY_THROTTLE_MS * 2, // Auto-expire entries
});
```

🤖 Prompt for AI Agents
});🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/worker/src/utils/session-handler.ts` around lines 6 - 35, The
CHANGE_DELAY_THROTTLE_MAP used by extendSessionEndJob grows unbounded and causes
a memory leak; replace the plain Map with a bounded/TTL cache (e.g., use
LRUCache from 'lru-cache' or implement periodic pruning) so entries auto-expire
— specifically swap CHANGE_DELAY_THROTTLE_MAP for an LRUCache<string, number>
configured with a sensible max (e.g., 100k) and ttl (e.g.,
CHANGE_DELAY_THROTTLE_MS * 2) or add a cleanup routine that removes keys older
than CHANGE_DELAY_THROTTLE_MS, ensuring extendSessionEndJob continues to
read/write the cache the same way.
```ts
/** L1 LRU (60s) + L2 Redis. clear() invalidates Redis + local LRU; other nodes may serve stale from LRU for up to 60s. */
export const getProjectByIdCached = cacheable(getProjectById, 60 * 60 * 24);
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Description: Verify all project update paths clear the cache properly
# Expected: All update/delete operations on projects should call getProjectByIdCached.clear()
rg -n "getProjectByIdCached" --type ts -C 3
```

Length of output: 8071

🏁 Script executed:

```bash
#!/bin/bash
# Find the cacheable function implementation to understand L1/L2 behavior
rg -n "export.*cacheable|function cacheable" --type ts -B 2 -A 10
```

Length of output: 13366

🏁 Script executed:

```bash
#!/bin/bash
# Search for all project update/delete operations to ensure cache is cleared
rg -n "project\.(update|delete|create)" --type ts | head -30
```

Length of output: 1854

🏁 Script executed:

```bash
#!/bin/bash
# Check for db.project mutations that might not call clear()
rg -n "db\.\$primary\(\)\.project\.(update|delete|create)" --type ts -A 2
```

Length of output: 49

🏁 Script executed:

```bash
#!/bin/bash
# Check packages/trpc/src/routers/project.ts for all update/clear patterns
sed -n '60,100p' packages/trpc/src/routers/project.ts
sed -n '160,200p' packages/trpc/src/routers/project.ts
```

Length of output: 2090

🏁 Script executed:

```bash
#!/bin/bash
# Check packages/db/src/services/delete.service.ts
sed -n '20,40p' packages/db/src/services/delete.service.ts
```

Length of output: 460

🏁 Script executed:

```bash
#!/bin/bash
# Check apps/worker/src/jobs/sessions.ts
sed -n '40,60p' apps/worker/src/jobs/sessions.ts
```

Length of output: 519

🏁 Script executed:

```bash
#!/bin/bash
# Check packages/trpc/src/routers/onboarding.ts for project.create
sed -n '95,115p' packages/trpc/src/routers/onboarding.ts
```

Length of output: 658

🏁 Script executed:

```bash
#!/bin/bash
# Check if there are clear() calls after the updateAt operations in project.ts
sed -n '155,210p' packages/trpc/src/routers/project.ts | grep -A 5 "deleteAt"
```

Length of output: 239

🏁 Script executed:

```bash
#!/bin/bash
# Check context around delete.service.ts delete operation - look for any clear calls
sed -n '1,50p' packages/db/src/services/delete.service.ts
```

Length of output: 1232

🏁 Script executed:

```bash
#!/bin/bash
# Check full context of sessions.ts update
sed -n '35,65p' apps/worker/src/jobs/sessions.ts
```

Length of output: 701
Add cache clear() calls to all project mutation operations.
The docstring is accurate: `cacheable()` internally implements two-level caching (L1 local LRU + L2 Redis), with the single TTL parameter configuring L2 expiration. However, verification reveals missing cache invalidation in several project update/delete paths:

- `packages/trpc/src/routers/project.ts` lines 166 & 193: schedule/cancel deletion operations update `deleteAt` but don't call `getProjectByIdCached.clear()`
- `packages/db/src/services/delete.service.ts` line 29: `db.project.delete()` lacks cache clearing
- `apps/worker/src/jobs/sessions.ts` line 49: background `eventsCount` update doesn't clear cache
Without these calls, project data mutations won't reflect until the 24-hour TTL expires. Add getProjectByIdCached.clear(projectId) after each update/delete operation (consistent with the existing pattern in line 98).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/db/src/services/project.service.ts` around lines 28 - 29, Several
project mutations are missing cache invalidation: after any project update or
delete you must call getProjectByIdCached.clear(projectId) to purge both L1 and
L2 caches; add this call in the project router handlers that schedule/cancel
deletion (the functions that set/unset deleteAt), in the delete service where
db.project.delete(...) is invoked, and in the worker job that updates
eventsCount, following the existing getProjectByIdCached.clear(...) usage
pattern elsewhere in the file to ensure callers see fresh project data
immediately.
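The invalidate-after-mutation pattern described above can be sketched generically. `updateProjectAndInvalidate` and `ProjectCache` are illustrative stand-ins for `getProjectByIdCached` and the Prisma client, not the project's actual API:

```typescript
// Sketch: run the mutation, then purge the cached project so subsequent
// reads (L1 and L2) see the fresh row.
interface ProjectCache {
  clear(projectId: string): Promise<void> | void;
}

async function updateProjectAndInvalidate<T>(
  projectId: string,
  mutate: () => Promise<T>, // e.g. db.project.update(...) or .delete(...)
  cache: ProjectCache,
): Promise<T> {
  const result = await mutate();
  await cache.clear(projectId); // invalidate only after the write succeeds
  return result;
}
```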
```diff
 cachedFn.set =
   (...args: Parameters<T>) =>
-  (payload: ReturnType<T>) => {
+  (payload: Awaited<ReturnType<T>>) => {
     const key = getKey(...args);
     if (hasResult(payload)) {
-      functionLruCache.set(key, payload);
+      lruCache.set(key, payload);
       return getRedisCache()
         .setex(key, expireInSec, JSON.stringify(payload))
         .catch(() => {
           // ignore error
         });
     }
   };
```
Return type inconsistency in set() method.
The type signature declares `set(...) => (payload) => Promise<'OK'>`, but:

- When `hasResult(payload)` is false, the function returns `undefined` (no explicit return).
- The `.catch()` on lines 272-274 swallows errors and returns `undefined`, not `'OK'`.
This can cause type errors at runtime if callers expect Promise<'OK'>.
🔧 Proposed fix

```diff
 cachedFn.set =
   (...args: Parameters<T>) =>
-  (payload: Awaited<ReturnType<T>>) => {
+  (payload: Awaited<ReturnType<T>>): Promise<'OK'> => {
     const key = getKey(...args);
     if (hasResult(payload)) {
       lruCache.set(key, payload);
       return getRedisCache()
         .setex(key, expireInSec, JSON.stringify(payload))
+        .then(() => 'OK' as const)
         .catch(() => {
-          // ignore error
+          return 'OK' as const; // Return OK even on failure since LRU was set
         });
     }
+    return Promise.resolve('OK');
   };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/redis/cachable.ts` around lines 264 - 276, cachedFn.set's inner
function currently returns undefined when hasResult(payload) is false and
swallows errors in the .catch, so callers expecting Promise<'OK'> can break;
update the implementation of cachedFn.set (the setter returned by cachedFn.set)
to always return a Promise<'OK'>: when hasResult(payload) is false return
Promise.resolve('OK'), and when writing to Redis use getRedisCache().setex(key,
expireInSec, JSON.stringify(payload)).then(() => 'OK') while removing the silent
.catch (or rethrow the error via .catch(err => Promise.reject(err))) so errors
aren't swallowed; keep the in-memory lruCache.set(key, payload) behavior when
hasResult is true and use getKey, lruCache, and expireInSec as in the current
code.
```ts
track(name: string, properties?: TrackProperties) {
  return super.track(name, { ...properties, __path: this.lastPath });
}
```
Preserve explicit __path and avoid emitting empty path by default.
At Line 41, the merge order always overrides any caller-provided __path, and before the first screenView() it sends __path: ''. Prefer injecting only when needed.
Suggested patch

```diff
 track(name: string, properties?: TrackProperties) {
-  return super.track(name, { ...properties, __path: this.lastPath });
+  const path = (properties?.__path as string | undefined) ?? this.lastPath;
+  return super.track(
+    name,
+    path ? { ...properties, __path: path } : properties
+  );
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 track(name: string, properties?: TrackProperties) {
-  return super.track(name, { ...properties, __path: this.lastPath });
+  const path = (properties?.__path as string | undefined) ?? this.lastPath;
+  return super.track(
+    name,
+    path ? { ...properties, __path: path } : properties
+  );
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/sdks/react-native/index.ts` around lines 40 - 42, The track method
currently always injects __path by spreading properties first then adding
__path: this.lastPath, which overwrites any caller-provided __path and can emit
an empty string before the first screenView; change track (the track(name:
string, properties?: TrackProperties) method) so it only adds __path when
properties?.__path is undefined AND this.lastPath is a non-empty value (i.e., if
properties has its own __path, leave it intact; if this.lastPath is falsy, don't
add __path), then call super.track with the merged properties.
```ts
track(name: string, properties?: TrackProperties) {
  return super.track(name, { ...properties, __path: this.lastPath });
}
```
Avoid hard-overriding __path in track().
At Line 296, this always replaces any explicit __path and can emit an empty path until screenView() runs. Recommend the same conditional injection pattern used for safe fallback.
Suggested patch

```diff
 track(name: string, properties?: TrackProperties) {
-  return super.track(name, { ...properties, __path: this.lastPath });
+  const path = (properties?.__path as string | undefined) ?? this.lastPath;
+  return super.track(
+    name,
+    path ? { ...properties, __path: path } : properties
+  );
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 track(name: string, properties?: TrackProperties) {
-  return super.track(name, { ...properties, __path: this.lastPath });
+  const path = (properties?.__path as string | undefined) ?? this.lastPath;
+  return super.track(
+    name,
+    path ? { ...properties, __path: path } : properties
+  );
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/sdks/web/src/index.ts` around lines 295 - 297, The track method
currently always overwrites any provided __path by spreading properties then
setting __path: this.lastPath; change it to only inject __path when the caller
didn't supply one: in track(name: string, properties?: TrackProperties) check if
properties?.__path is undefined (or not present) and only then add __path:
this.lastPath to the merged properties; keep the rest of the merge behavior
intact so explicit __path values from callers are preserved and the safe
fallback pattern used elsewhere is reused.
Summary by CodeRabbit
New Features
Performance
Tests
Refactor