
perf(traces): Cache valid span statuses in a module-level frozenset#6208

Merged
ericapisani merged 1 commit into master from py-2403-perf-bump-for-status-method on May 5, 2026

Conversation


@ericapisani ericapisani commented May 5, 2026

The status setter on StreamedSpan was reconstructing a set from the SpanStatus enum on every invocation. In high-throughput tracing workloads this is a hot path, so caching the values in a module-level frozenset avoids repeated allocations.

Fixes #6207
Fixes PY-2403
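The change can be sketched roughly as follows. Note that `StreamedSpan`'s fields, the property-based setter, and the exact validation/error behavior are assumptions for illustration; only the module-level frozenset caching pattern reflects what the PR describes:

```python
from enum import Enum

class SpanStatus(str, Enum):
    OK = "ok"
    ERROR = "error"

# Module-level cache: built once at import time instead of on every
# setter invocation.
_VALID_SPAN_STATUSES = frozenset(e.value for e in SpanStatus)

class StreamedSpan:
    """Illustrative stand-in; the real class carries more state."""

    def __init__(self):
        self._status = SpanStatus.OK.value

    @property
    def status(self):
        return self._status

    @status.setter
    def status(self, value):
        # Before this change the check was effectively
        #   value not in {e.value for e in SpanStatus}
        # which rebuilt the set on every assignment.
        if value not in _VALID_SPAN_STATUSES:
            raise ValueError(f"invalid span status: {value!r}")
        self._status = value
```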


Using the following benchmarking script:

import timeit
from enum import Enum

class SpanStatus(str, Enum):
    OK = "ok"
    ERROR = "error"


# Approach 1: Current — rebuild set every call
def check_current(status):
    return status not in {e.value for e in SpanStatus}


# Approach 2: Module-level frozenset
_VALID = frozenset(e.value for e in SpanStatus)

def check_frozenset(status):
    return status not in _VALID


N = 100_000
REPEATS = 5

for label, fn in [
    ("current (set comprehension)", check_current),
    ("frozenset lookup",            check_frozenset),
]:
    # Test with a valid status (common path)
    times = timeit.repeat(lambda: fn("ok"), number=N, repeat=REPEATS)
    best = min(times)
    per_call_ns = (best / N) * 1e9
    print(f"{label:30s}  valid    {best:.4f}s / {N} calls  ({per_call_ns:.0f} ns/call)")

    # Test with an invalid status (rare path)
    times = timeit.repeat(lambda: fn("bogus"), number=N, repeat=REPEATS)
    best = min(times)
    per_call_ns = (best / N) * 1e9
    print(f"{label:30s}  invalid  {best:.4f}s / {N} calls  ({per_call_ns:.0f} ns/call)")

I was able to confirm the performance bump with this change; in this microbenchmark the frozenset lookup is roughly an order of magnitude faster per call (about 280–350 ns saved per call):

Run 1:

current (set comprehension)     valid    0.0375s / 100000 calls  (375 ns/call)
current (set comprehension)     invalid  0.0379s / 100000 calls  (379 ns/call)
frozenset lookup                valid    0.0031s / 100000 calls  (31 ns/call)
frozenset lookup                invalid  0.0029s / 100000 calls  (29 ns/call)

Run 2:

current (set comprehension)     valid    0.0308s / 100000 calls  (308 ns/call)
current (set comprehension)     invalid  0.0311s / 100000 calls  (311 ns/call)
frozenset lookup                valid    0.0024s / 100000 calls  (24 ns/call)
frozenset lookup                invalid  0.0025s / 100000 calls  (25 ns/call)

Run 3:

current (set comprehension)     valid    0.0312s / 100000 calls  (312 ns/call)
current (set comprehension)     invalid  0.0312s / 100000 calls  (312 ns/call)
frozenset lookup                valid    0.0025s / 100000 calls  (25 ns/call)
frozenset lookup                invalid  0.0024s / 100000 calls  (24 ns/call)
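The two checks are behaviorally identical; the speedup comes purely from hoisting the set construction out of the call, turning a per-call O(len(SpanStatus)) allocation into a single O(1) hash lookup. A quick equivalence check, reusing the same definitions as the benchmark script above:

```python
from enum import Enum

class SpanStatus(str, Enum):
    OK = "ok"
    ERROR = "error"

_VALID = frozenset(e.value for e in SpanStatus)

def check_current(status):
    # Rebuilds a set with one entry per enum member on every call.
    return status not in {e.value for e in SpanStatus}

def check_frozenset(status):
    # Single hash lookup against a set built once at import time.
    return status not in _VALID

# Both predicates agree on valid and invalid inputs alike.
for probe in ("ok", "error", "bogus", ""):
    assert check_current(probe) == check_frozenset(probe)
print("predicates agree")
```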

Avoids rebuilding the set on every `set_status` call by computing it once at module load time.

linear-code Bot commented May 5, 2026


github-actions Bot commented May 5, 2026

Codecov Results 📊

13 passed | Total: 13 | Pass Rate: 100% | Execution Time: 7.73s

All tests are passing successfully.

✅ Patch coverage is 100.00%. Project has 15002 uncovered lines.

Files with missing lines (1)

File      | Patch %   | Lines
traces.py | 69.38% ⚠️ | 94 missing and 21 partials

Generated by Codecov Action

@ericapisani ericapisani marked this pull request as ready for review May 5, 2026 17:22
@ericapisani ericapisani requested a review from a team as a code owner May 5, 2026 17:22
@ericapisani ericapisani merged commit 7f09094 into master May 5, 2026
156 checks passed
@ericapisani ericapisani deleted the py-2403-perf-bump-for-status-method branch May 5, 2026 18:58


Development

Successfully merging this pull request may close these issues.

Replace creation of set on every span.status call with frozen set

2 participants