feat(core): add emitPerformanceMetric bridge for runtime telemetry#393
vanceingalls wants to merge 1 commit into main
Conversation
Extends the runtime analytics bridge with a numeric performance metric channel for scrub latency, sustained fps, dropped frames, decoder count, composition load time, and media sync drift. Metrics flow through the existing postMessage transport (one bridge, two channels) so hosts can aggregate per session (p50/p95) and forward to their observability pipeline. It also records performance.mark() entries with { value, tags } on detail, so the metrics surface in the DevTools Performance panel's User Timing track for local debugging. No PostHog dependency in core. Player-side aggregation and flush land in a follow-up PR per the player-perf proposal.
jrusso1020
left a comment
Clean bridge. Additive, no breaking changes, everything downstream of postMessage is guarded. Test coverage is exactly what I'd want to see for a telemetry path — nulled transport, throwing transport, throwing performance.mark, zero/negative values, tag normalization, and a DevTools User Timing assertion when the environment supports it. Nice.
The double-guard (both performance.mark and postMessage wrapped in try/catch) is the right shape for runtime code that must never affect playback. One small non-blocking observation: the RuntimePerformanceTags type duplicates RuntimeAnalyticsProperties — it may be worth collapsing them to a shared alias in a follow-up if a third shape gets added, but carving them apart now is also fine since the semantics genuinely differ.
Approved.
— James Russo
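The double-guard shape called out in the review can be sketched roughly like this. All names here are illustrative assumptions, not the PR's actual code:

```typescript
// Hedged sketch of the double-guard emit path: the User Timing mirror and
// the postMessage forward are each independently wrapped, so neither a
// missing performance API nor a throwing host transport can affect playback.
type RuntimePerformanceTags = Record<string, string>;

interface BridgeTransport {
  postMessage(msg: unknown): void;
}

function emitPerformanceMetric(
  name: string,
  value: number,
  tags?: RuntimePerformanceTags,
  transport?: BridgeTransport | null,
): void {
  try {
    // Mirror into the DevTools User Timing track for local debugging.
    performance.mark(name, { detail: { value, tags } });
  } catch {
    // performance.mark can be absent or throw in some environments; ignore.
  }
  try {
    transport?.postMessage({ type: "performance-metric", name, value, tags });
  } catch {
    // A throwing host transport must never propagate into runtime code.
  }
}
```

The point of the shape is that each side fails independently: a broken transport still leaves the local DevTools mirror working, and vice versa.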

Summary
Extend the runtime analytics bridge with a numeric performance metric channel. Hosts subscribe via the existing postMessage transport (one bridge, two channels) and aggregate per-session p50 / p95 for scrub latency, sustained fps, dropped frames, decoder count, composition load time, and media sync drift before forwarding to their observability pipeline.
This is the foundation other perf tooling sits on — the player itself emits the events; player-side aggregation and flush land in a follow-up.
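The per-session p50/p95 aggregation described above lands host-side in the follow-up, but its shape could look roughly like this (class and method names are assumptions, not the follow-up's actual code):

```typescript
// Hedged sketch of host-side per-session aggregation: collect raw metric
// values per name, then compute nearest-rank p50/p95 before forwarding.
function percentile(sorted: number[], p: number): number {
  if (sorted.length === 0) return NaN;
  // Nearest-rank method: rank = ceil(p/100 * n), clamped to valid indices.
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[Math.max(0, idx)];
}

class SessionAggregator {
  private values = new Map<string, number[]>();

  record(name: string, value: number): void {
    const arr = this.values.get(name) ?? [];
    arr.push(value);
    this.values.set(name, arr);
  }

  summary(name: string): { p50: number; p95: number } {
    const sorted = [...(this.values.get(name) ?? [])].sort((a, b) => a - b);
    return { p50: percentile(sorted, 50), p95: percentile(sorted, 95) };
  }
}
```

A host would call `record` from its bridge message listener and flush `summary` to its observability pipeline at session end.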
Why
Step X-1 of the player perf proposal. Today there is no way for an embedding host to learn that scrub latency spiked, that a composition took 3 s to load, or that the media-sync loop is running 200 ms behind real time. The only signals are anecdotal user reports.
A single shared bridge keeps the runtime → host surface area minimal: hosts that already wire up the analytics channel get perf for free, and hosts that don't aren't paying for it.
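The "one bridge, two channels" idea can be sketched as a discriminated union over the envelope's type tag. The performance shape follows this PR's description; the analytics envelope's exact fields are an assumption:

```typescript
// Hedged sketch: a single postMessage bridge carrying two channels,
// distinguished by the envelope's "type" field.
type AnalyticsEnvelope = {
  type: "analytics-event";
  name: string;
  properties?: Record<string, string>; // assumed shape of the existing channel
};

type PerformanceEnvelope = {
  type: "performance-metric";
  name: string;
  value: number;
  tags?: Record<string, string>;
};

type BridgeEnvelope = AnalyticsEnvelope | PerformanceEnvelope;

// A host that already listens on the bridge just routes on the type tag,
// which is why the perf channel comes "for free" for wired-up hosts.
function route(msg: BridgeEnvelope): string {
  switch (msg.type) {
    case "analytics-event":
      return `event:${msg.name}`;
    case "performance-metric":
      return `metric:${msg.name}=${msg.value}`;
  }
}
```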
What changed
- `emitPerformanceMetric(name, value, tags?)` helper in `@hyperframes/core` that forwards a `{ type: "performance-metric", name, value, tags }` envelope through the existing analytics postMessage transport.
- Six metrics:
  - `scrub_latency_ms` — wall-clock from `seek()` call to first paint at the new frame.
  - `playback_fps` — sustained rAF cadence during play.
  - `dropped_frames` — count of >25 ms gaps within a play window.
  - `decoder_count` — number of concurrently-decoding video elements.
  - `composition_load_ms` — navigation-start to player-ready.
  - `media_sync_drift_ms` — drift between expected and actual decoder time.
- `performance.mark()` with `{ value, tags }` on `detail`, so the same numbers surface in the DevTools Performance panel's User Timing track for local debugging without instrumenting the host.
- No PostHog dependency in `core` — the host decides where to forward the events.
Test plan
Tests cover the `performance.mark` mirror and the no-op path when no host has wired up the bridge.
Stack
Step X-1 of the player perf proposal. Foundation for the perf gate (P0-1a/b/c) — the perf scenarios in this stack instrument these same channels for CI measurement.
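As an illustration of what instrumenting one of these channels could look like, here is a rough sketch of the scrub-latency measurement. The helpers (`seek`, `onFirstPaint`, `emit`) are hypothetical stand-ins for whatever the player actually exposes:

```typescript
// Hypothetical sketch of measuring scrub_latency_ms: wall-clock from the
// seek() call to the first paint at the new frame, per this PR's definition.
function measureScrubLatency(
  seek: () => void,
  onFirstPaint: (cb: () => void) => void,
  emit: (name: string, value: number) => void,
): void {
  const start = performance.now();
  seek();
  onFirstPaint(() => {
    emit("scrub_latency_ms", performance.now() - start);
  });
}
```

A CI scenario would drive `seek` and subscribe to the emitted metric to gate on its p95.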