
feat(core): add emitPerformanceMetric bridge for runtime telemetry#393

Open
vanceingalls wants to merge 1 commit into main from perf/x-1-emit-performance-metric

Conversation

Collaborator

@vanceingalls vanceingalls commented Apr 21, 2026

Summary

Extend the runtime analytics bridge with a numeric performance metric channel. Hosts subscribe via the existing postMessage transport (one bridge, two channels) and aggregate per-session p50 / p95 for scrub latency, sustained fps, dropped frames, decoder count, composition load time, and media sync drift before forwarding to their observability pipeline.
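A hedged sketch of the host-side aggregation described here. The `MetricEnvelope` shape follows the envelope in this PR, but the aggregator itself (`record`, `percentile`, the nearest-rank method) is illustrative and not part of `@hyperframes/core`:

```typescript
// Hypothetical host-side aggregator. The envelope shape matches the PR;
// everything else is an assumption about how a host might compute p50/p95.
type MetricEnvelope = {
  type: "performance-metric";
  name: string;
  value: number;
  tags?: Record<string, string>;
};

const samples = new Map<string, number[]>();

function record(envelope: MetricEnvelope): void {
  // Ignore anything arriving on the analytics channel.
  if (envelope.type !== "performance-metric") return;
  const bucket = samples.get(envelope.name) ?? [];
  bucket.push(envelope.value);
  samples.set(envelope.name, bucket);
}

// Nearest-rank percentile over the session's samples for one metric.
function percentile(name: string, p: number): number | undefined {
  const bucket = samples.get(name);
  if (!bucket || bucket.length === 0) return undefined;
  const sorted = [...bucket].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.min(sorted.length - 1, Math.max(0, rank))];
}
```

A host would call `record` from its postMessage listener and flush `percentile(name, 50)` / `percentile(name, 95)` to its observability pipeline at session end.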

This is the foundation other perf tooling sits on — the player itself emits the events; player-side aggregation and flush land in a follow-up.

Why

Step X-1 of the player perf proposal. Today there is no way for an embedding host to learn that scrub latency spiked, that a composition took 3 s to load, or that the media-sync loop is running 200 ms behind real time. The only signals are anecdotal user reports.

A single shared bridge keeps the runtime → host surface area minimal: hosts that already wire up the analytics channel get perf for free, and hosts that don't aren't paying for it.

What changed

  • New emitPerformanceMetric(name, value, tags?) helper in @hyperframes/core that forwards a { type: "performance-metric", name, value, tags } envelope through the existing analytics postMessage transport.
  • Six initial metric names defined in the proposal:
    • scrub_latency_ms — wall-clock from seek() call to first paint at the new frame.
    • playback_fps — sustained rAF cadence during play.
    • dropped_frames — count of >25 ms gaps within a play window.
    • decoder_count — number of concurrently-decoding video elements.
    • composition_load_ms — navigation-start to player-ready.
    • media_sync_drift_ms — drift between expected and actual decoder time.
  • Each emit also writes a performance.mark() with { value, tags } on detail, so the same numbers surface in the DevTools Performance panel's User Timing track for local debugging without instrumenting the host.
  • Zero PostHog (or any other analytics SDK) dependency in core — the host decides where to forward the events.
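The helper's shape can be sketched as follows. The metric names and envelope come from this PR; the transport injection (`setTransport`) is an assumption made so the sketch is self-contained — in core the helper reuses the analytics bridge's existing postMessage wiring:

```typescript
type Tags = Record<string, string>;

type MetricEnvelope = {
  type: "performance-metric";
  name: string;
  value: number;
  tags?: Tags;
};

// Assumed stand-in for the analytics bridge's transport; undefined models
// the case where no host has wired up the bridge.
let transport: ((envelope: MetricEnvelope) => void) | undefined;

function setTransport(t: ((envelope: MetricEnvelope) => void) | undefined): void {
  transport = t;
}

function emitPerformanceMetric(name: string, value: number, tags?: Tags): void {
  // Mirror into the DevTools User Timing track. Both writes are guarded:
  // telemetry must never be able to affect playback.
  try {
    performance.mark(name, { detail: { value, tags } });
  } catch {
    /* a missing or throwing performance API is ignored */
  }
  try {
    transport?.({ type: "performance-metric", name, value, tags });
  } catch {
    /* a throwing host transport is ignored */
  }
}
```

The double try/catch is the no-op guarantee the test plan below exercises: with no transport wired up, emitting is free apart from the local `performance.mark`.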

Test plan

  • Unit tests cover the envelope shape, the performance.mark mirror, and the no-op path when no host has wired up the bridge.
  • Manual: verified marks appear in the User Timing track when scrubbing the studio preview.
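As a concrete reading of the dropped_frames definition above (count of >25 ms gaps within a play window), a minimal sketch — the function name and the timestamp-array input are illustrative, not an API from this PR; in the player the timestamps would come from the rAF loop:

```typescript
// Illustrative only: computes dropped_frames as defined in the metric list,
// i.e. the number of >25 ms gaps between successive frame timestamps.
function countDroppedFrames(frameTimesMs: number[], thresholdMs = 25): number {
  let dropped = 0;
  for (let i = 1; i < frameTimesMs.length; i++) {
    if (frameTimesMs[i] - frameTimesMs[i - 1] > thresholdMs) dropped++;
  }
  return dropped;
}
```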

Stack

Step X-1 of the player perf proposal. Foundation for the perf gate (P0-1a/b/c) — the perf scenarios in this stack instrument these same channels for CI measurement.

Extends the runtime analytics bridge with a numeric performance metric channel
for scrub latency, sustained fps, dropped frames, decoder count, composition
load time, and media sync drift. Metrics flow through the existing
postMessage transport (one bridge, two channels) so hosts can aggregate per
session (p50/p95) and forward to their observability pipeline.

Also writes performance.mark() with value+tags on detail so metrics surface
in the DevTools Performance panel's User Timing track for local debugging.

No PostHog dependency in core. Player-side aggregation and flush land in a
follow-up PR per the player-perf proposal.
Collaborator

@jrusso1020 jrusso1020 left a comment


Clean bridge. Additive, no breaking changes, everything downstream of postMessage is guarded. Test coverage is exactly what I'd want to see for a telemetry path — nulled transport, throwing transport, throwing performance.mark, zero/negative values, tag normalization, and a DevTools User Timing assertion when the environment supports it. Nice.

The double-guard (both performance.mark and postMessage wrapped in try/catch) is the right shape for runtime code that must never affect playback. One small non-blocking observation: the RuntimePerformanceTags type duplicates RuntimeAnalyticsProperties — may be worth collapsing to a shared alias in a followup if any third shape gets added, but carving them apart now is also fine since the semantics genuinely differ.

Approved.

Rames Jusso
