
fix: stop video drain when real subscriber arrives#232

Merged
aliev merged 7 commits into main from fix/stop-drain-on-subscribe on Apr 2, 2026

Conversation

@aliev (Member) commented Apr 1, 2026

Why

When a real subscriber calls add_track_subscriber, the MediaBlackhole drain for that track kept running, competing with the actual consumer for frames. Additionally, drain was off by default, so tracks without explicit subscribers would accumulate unconsumed frames (aiortc issue #554).
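The frame-buildup half of the problem can be sketched with a stdlib-only toy. All names here are illustrative; in the real code, aiortc's MediaBlackhole plays the drain role by consuming frames from tracks nobody reads:

```python
import asyncio

async def produce(queue: asyncio.Queue, n: int) -> None:
    # Simulates an incoming video track: frames arrive whether or not
    # anyone consumes them.
    for i in range(n):
        await queue.put(i)

async def drain(queue: asyncio.Queue) -> None:
    # Plays the MediaBlackhole role: read and discard frames forever,
    # so the queue never accumulates.
    while True:
        await queue.get()

async def demo() -> tuple[int, int]:
    frames: asyncio.Queue = asyncio.Queue()
    await produce(frames, 100)
    before = frames.qsize()          # no consumer: 100 frames pile up

    drain_task = asyncio.create_task(drain(frames))
    await asyncio.sleep(0)           # yield once so the drain runs
    after = frames.qsize()           # drained to empty

    drain_task.cancel()
    return before, after

before, after = asyncio.run(demo())
```

With draining off by default, every track without an explicit subscriber behaves like the `before` case; enabling it by default gives the `after` behavior.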

Changes

  • Enable drain_video_frames by default across all layers (ConnectionManager, PeerConnectionManager, SubscriberPeerConnection)
  • Stop and clean up blackhole drain in add_track_subscriber when a real subscriber arrives
  • Prevent GC from collecting the blackhole stop task via _background_tasks set
  • Add unit tests for drain cleanup behavior
  • Document drain_video_frames lifecycle in docstring
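The blackhole cleanup in the second bullet might look roughly like this stdlib-only sketch. FakeBlackhole is a stand-in for aiortc's MediaBlackhole; the `_video_blackholes` and `_background_tasks` names come from the PR summary, while the rest is illustrative:

```python
import asyncio

class FakeBlackhole:
    """Stand-in for aiortc's MediaBlackhole (illustrative only)."""
    def __init__(self) -> None:
        self.stopped = False

    async def stop(self) -> None:
        self.stopped = True

class Subscriber:
    # Mirrors the shape described in the PR: one dict maps track_id to
    # (blackhole, drain_task); _background_tasks holds strong references
    # so pending stop tasks are not garbage-collected.
    def __init__(self) -> None:
        self._video_blackholes: dict[str, tuple[FakeBlackhole, asyncio.Task]] = {}
        self._background_tasks: set[asyncio.Task] = set()

    def add_track_subscriber(self, track_id: str) -> str:
        entry = self._video_blackholes.pop(track_id, None)
        if entry is not None:
            blackhole, drain_task = entry
            drain_task.cancel()                       # stop the drain loop
            task = asyncio.create_task(blackhole.stop())
            self._background_tasks.add(task)          # keep a strong ref
            task.add_done_callback(self._background_tasks.discard)
        return f"subscription:{track_id}"

async def demo() -> tuple[bool, int]:
    sub = Subscriber()
    bh = FakeBlackhole()
    drain = asyncio.create_task(asyncio.sleep(3600))  # pretend drain loop
    sub._video_blackholes["t1"] = (bh, drain)

    sub.add_track_subscriber("t1")
    await asyncio.sleep(0)            # let the scheduled stop task run
    return bh.stopped, len(sub._video_blackholes)

stopped, remaining = asyncio.run(demo())
```

The key point is that the drain is both cancelled and removed from the dict before the real subscriber starts consuming, so the two never compete for frames.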

Summary by CodeRabbit

  • Bug Fixes

    • Video-frame draining is enabled by default to prevent unconsumed-frame buildup and reduce resource usage.
    • Per-track draining now stops cleanly when a subscriber attaches, preventing lingering background tasks and improving connection stability.
  • Tests

    • Added tests covering the video drain lifecycle and verifying drains stop when subscribers are added.

Enable drain by default and clean up blackhole when add_track_subscriber
is called, preventing stale drains from competing with actual consumers.
@coderabbitai bot commented Apr 1, 2026

📝 Walkthrough

The drain_video_frames default was changed from False to True across ConnectionManager, PeerConnectionManager, and SubscriberPeerConnection. SubscriberPeerConnection now tracks per-track drain state as a (MediaBlackhole, drain_task) tuple and schedules stop tasks when a subscriber is added; tests validate the cleanup.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Constructor defaults: getstream/video/rtc/connection_manager.py, getstream/video/rtc/peer_connection.py, getstream/video/rtc/pc.py | Changed the drain_video_frames constructor default from False to True; updated the ConnectionManager docstring to describe attaching incoming video tracks to a MediaBlackhole. |
| Per-track drain lifecycle: getstream/video/rtc/pc.py | Refactored per-track drain tracking to _video_blackholes: track_id -> (MediaBlackhole, asyncio.Task) and removed the separate _video_drain_tasks; introduced _background_tasks: set[asyncio.Task]; add_track_subscriber() now pops the existing drain tuple, cancels the drain task, schedules blackhole.stop(), and retains stop tasks until completion. |
| Manager wiring: getstream/video/rtc/connection_manager.py, getstream/video/rtc/peer_connection.py | The drain_video_frames argument is forwarded from ConnectionManager → PeerConnectionManager → SubscriberPeerConnection so peers inherit the new default drain behavior. |
| Tests: tests/rtc/test_subscriber_drain.py | Added tests that mock a SubscriberPeerConnection to assert that add_track_subscriber() stops and removes an existing per-track blackhole, and that adding a subscriber when no drain exists returns a non-None result. |
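A stdlib-only sketch of what such a test might look like (illustrative only, not the contents of tests/rtc/test_subscriber_drain.py; the helper re-states the behavior under test rather than importing the real SubscriberPeerConnection):

```python
import asyncio
from unittest.mock import AsyncMock

def add_track_subscriber(pc: dict, track_id: str):
    # Minimal re-statement of the behavior under test (illustrative).
    entry = pc["_video_blackholes"].pop(track_id, None)
    if entry is not None:
        blackhole, drain_task = entry
        drain_task.cancel()
        task = asyncio.create_task(blackhole.stop())
        pc["_background_tasks"].add(task)
        task.add_done_callback(pc["_background_tasks"].discard)
    return object()  # non-None subscription handle

async def test_stops_existing_drain() -> bool:
    blackhole = AsyncMock()                       # stands in for MediaBlackhole
    drain_task = asyncio.create_task(asyncio.sleep(3600))
    pc = {"_video_blackholes": {"t1": (blackhole, drain_task)},
          "_background_tasks": set()}

    result = add_track_subscriber(pc, "t1")
    await asyncio.sleep(0)                        # let cancel + stop settle

    assert result is not None
    assert "t1" not in pc["_video_blackholes"]
    blackhole.stop.assert_awaited_once()
    assert drain_task.cancelled()
    return True

ok = asyncio.run(test_stops_existing_drain())
```

Using AsyncMock for the blackhole keeps the test independent of aiortc while still asserting that stop() was awaited exactly once.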

Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Client
    participant ConnectionManager
    participant PeerConnectionManager
    participant SubscriberPeerConnection
    participant MediaBlackhole

    Client->>ConnectionManager: create peer (drain_video_frames=True)
    ConnectionManager->>PeerConnectionManager: instantiate peer (drain enabled)
    PeerConnectionManager->>SubscriberPeerConnection: create subscriber PC
    Note over SubscriberPeerConnection,MediaBlackhole: Incoming video track arrives
    SubscriberPeerConnection->>MediaBlackhole: attach track, start() (drain task)
    Client->>SubscriberPeerConnection: add_track_subscriber(track_id)
    SubscriberPeerConnection->>SubscriberPeerConnection: pop (MediaBlackhole, drain_task)
    SubscriberPeerConnection->>drain_task: cancel
    SubscriberPeerConnection->>MediaBlackhole: schedule stop() (background task)
    MediaBlackhole-->>SubscriberPeerConnection: stop complete
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐇
I nibble frames in moonlit queues so deep,
a little blackhole tending what I keep.
A subscriber hops — I quiet my song,
stop and tidy, then hop along. 🥕

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 37.50%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped because CodeRabbit’s high-level summary is enabled. |
| Title check | ✅ Passed | The title 'fix: stop video drain when real subscriber arrives' directly and clearly describes the main fix. |


aliev added 2 commits April 1, 2026 20:13

  • Replace ensure_future with create_task and hold a strong reference in the _background_tasks set to avoid silent task cancellation by GC.
  • ConnectionManager and PeerConnectionManager had drain_video_frames defaulting to False, overriding the True default in SubscriberPeerConnection. Aligned all three layers and added a docstring explaining the drain lifecycle.
  • Two separate dicts (_video_blackholes, _video_drain_tasks) tracked the same lifecycle, risking desync. Store them as a tuple in one dict.
  • stop() handles current cleanup, but if aiortc changes start() to be long-lived, the drain_task could leak. Cancel it defensively.
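The first commit message refers to a well-known asyncio pitfall: the event loop keeps only weak references to tasks, so a fire-and-forget create_task can be garbage-collected before it finishes. A minimal sketch of the keep-a-strong-reference pattern (names here are illustrative, not the SDK's API):

```python
import asyncio

background_tasks: set[asyncio.Task] = set()

def spawn(coro) -> asyncio.Task:
    # The event loop holds only a weak reference to tasks, so a task
    # created with create_task and never stored elsewhere may be
    # garbage-collected mid-flight. Storing it in a set keeps it alive;
    # the done-callback removes it on completion so the set stays small.
    task = asyncio.create_task(coro)
    background_tasks.add(task)
    task.add_done_callback(background_tasks.discard)
    return task

async def demo() -> tuple[int, int]:
    async def work() -> None:
        await asyncio.sleep(0)

    spawn(work())
    held = len(background_tasks)      # strong reference held while running
    await asyncio.sleep(0.01)         # let the task finish
    released = len(background_tasks)  # discarded once complete
    return held, released

held, released = asyncio.run(demo())
```

This is the same pattern the PR applies to the blackhole stop tasks via _background_tasks.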
@coderabbitai bot left a comment

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
getstream/video/rtc/pc.py (1)

221-228: ⚠️ Potential issue | 🟡 Minor

Clean up _video_blackholes when track ends.

If a track ends before a real subscriber is added via add_track_subscriber, the blackhole entry remains in _video_blackholes. While the blackhole will likely stop naturally (track ended), the dictionary entry persists and the blackhole isn't explicitly stopped.

Proposed fix

```diff
     def handle_track_ended(self, track: aiortc.mediastreams.MediaStreamTrack) -> None:
         logger.info(f"track ended: {track.id}")

         # Clean up stored references when track ends
         if track.id in self.track_map:
             del self.track_map[track.id]
         if track.id in self.video_frame_trackers:
             del self.video_frame_trackers[track.id]
+
+        blackhole, drain_task = self._video_blackholes.pop(track.id, (None, None))
+        if blackhole and drain_task:
+            task = asyncio.create_task(blackhole.stop())
+            drain_task.cancel()
+            self._background_tasks.add(task)
+            task.add_done_callback(self._background_tasks.discard)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@getstream/video/rtc/pc.py` around lines 221 - 228, In handle_track_ended,
when cleaning up track_map and video_frame_trackers also check and clean up the
_video_blackholes entry for the same track id: if a blackhole exists stop it
(call its stop/close/terminate method as appropriate) and remove the key from
self._video_blackholes to avoid a lingering stopped blackhole for tracks that
ended before add_track_subscriber was called; reference the handle_track_ended
method and the _video_blackholes attribute (and relate to add_track_subscriber
behavior) so you stop and delete the blackhole consistently with how other
trackers are removed.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 18de4201-5f04-4efa-b679-8608f36529ca

📥 Commits

Reviewing files that changed from the base of the PR and between 90975d1 and ba06494.

📒 Files selected for processing (1)
  • getstream/video/rtc/pc.py
