fix: stop video drain when real subscriber arrives #232
Conversation
Enable drain by default and clean up blackhole when add_track_subscriber is called, preventing stale drains from competing with actual consumers.
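A minimal, self-contained sketch of the cleanup described above, assuming the attribute and method names from the PR summary (`_video_blackholes`, `_background_tasks`, `add_track_subscriber`); `FakeBlackhole` is a stand-in for aiortc's `MediaBlackhole`, and the bodies are illustrative only:

```python
import asyncio


class FakeBlackhole:
    """Stand-in for aiortc's MediaBlackhole: consumes frames until stopped."""

    def __init__(self):
        self.stopped = False

    async def stop(self):
        self.stopped = True


class SubscriberPC:
    """Per-track drain bookkeeping as described in the PR summary."""

    def __init__(self):
        # track_id -> (blackhole, drain_task): one dict, so the pair
        # can never fall out of sync.
        self._video_blackholes = {}
        self._background_tasks = set()

    def start_drain(self, track_id):
        blackhole = FakeBlackhole()
        # Placeholder for the real drain loop that pulls frames.
        drain_task = asyncio.create_task(asyncio.sleep(3600))
        self._video_blackholes[track_id] = (blackhole, drain_task)

    def add_track_subscriber(self, track_id):
        # A real consumer arrived: cancel the drain and schedule stop()
        # in the background, holding a strong reference to the task.
        blackhole, drain_task = self._video_blackholes.pop(track_id, (None, None))
        if blackhole and drain_task:
            drain_task.cancel()
            task = asyncio.create_task(blackhole.stop())
            self._background_tasks.add(task)
            task.add_done_callback(self._background_tasks.discard)


async def main():
    pc = SubscriberPC()
    pc.start_drain("video-0")
    blackhole, _ = pc._video_blackholes["video-0"]
    pc.add_track_subscriber("video-0")
    await asyncio.sleep(0)  # let the scheduled stop() run
    return "video-0" in pc._video_blackholes, blackhole.stopped


removed, stopped = asyncio.run(main())
print(removed, stopped)  # → False True
```

Popping the `(blackhole, drain_task)` tuple in one call is what prevents a stale drain from surviving alongside the new subscriber.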
📝 Walkthrough

Default behavior for video-frame draining was enabled (`False` → `True`) across `ConnectionManager`, `PeerConnectionManager`, and `SubscriberPeerConnection`. `SubscriberPeerConnection` now tracks per-track drain state as a `(MediaBlackhole, drain_task)` tuple, schedules stop tasks when a subscriber is added, and tests were added to validate cleanup.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Client
    participant ConnectionManager
    participant PeerConnectionManager
    participant SubscriberPeerConnection
    participant MediaBlackhole
    Client->>ConnectionManager: create peer (drain_video_frames=True)
    ConnectionManager->>PeerConnectionManager: instantiate peer (drain enabled)
    PeerConnectionManager->>SubscriberPeerConnection: create subscriber PC
    Note over SubscriberPeerConnection,MediaBlackhole: Incoming video track arrives
    SubscriberPeerConnection->>MediaBlackhole: attach track, start() (drain task)
    Client->>SubscriberPeerConnection: add_track_subscriber(track_id)
    SubscriberPeerConnection->>SubscriberPeerConnection: pop (MediaBlackhole, drain_task)
    SubscriberPeerConnection->>drain_task: cancel
    SubscriberPeerConnection->>MediaBlackhole: schedule stop() (background task)
    MediaBlackhole-->>SubscriberPeerConnection: stop complete
```
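The drain step in the flow above can be sketched as a loop that keeps pulling frames so the receive buffer doesn't back up when no consumer reads the track (the problem behind aiortc issue #554). `FakeTrack` below is a hypothetical stand-in for an aiortc `MediaStreamTrack`, which raises from `recv()` when the track ends:

```python
import asyncio


class FakeTrack:
    """Stand-in for an aiortc MediaStreamTrack: yields a few frames, then ends."""

    def __init__(self, frames):
        self._frames = frames

    async def recv(self):
        if not self._frames:
            raise ConnectionError("track ended")
        return self._frames.pop(0)


async def drain(track):
    """Pull and discard frames until the track ends; return how many were drained."""
    count = 0
    while True:
        try:
            await track.recv()  # discard the frame
        except ConnectionError:
            return count
        count += 1


drained = asyncio.run(drain(FakeTrack(["f1", "f2", "f3"])))
print(drained)  # → 3
```

This is the task that must be cancelled the moment `add_track_subscriber` hands the track to a real consumer, since two readers would otherwise compete for the same frames.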
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ❌ 1 failed (1 warning) | ✅ 2 passed
Replace ensure_future with create_task and hold a strong reference in _background_tasks set to avoid silent task cancellation by GC.
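The strong-reference pattern this comment asks for can be sketched as follows. The event loop only keeps a weak reference to tasks, so an otherwise-unreferenced task can be garbage collected and vanish mid-flight; holding it in a set and discarding it on completion is the pattern recommended in the asyncio documentation (`spawn` is a hypothetical helper name):

```python
import asyncio

background_tasks = set()


def spawn(coro):
    """create_task plus a strong reference, so GC can't collect a running task."""
    task = asyncio.create_task(coro)
    background_tasks.add(task)  # strong reference for the task's lifetime
    task.add_done_callback(background_tasks.discard)  # drop it once finished
    return task


async def main():
    async def work():
        await asyncio.sleep(0)
        return "done"

    task = spawn(work())
    result = await task
    await asyncio.sleep(0)  # let the discard callback settle
    return result, len(background_tasks)


result, remaining = asyncio.run(main())
print(result, remaining)  # → done 0
```

`asyncio.ensure_future` has the same weak-reference pitfall; switching to `create_task` alone does not fix it, which is why the set is the essential part of this change.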
ConnectionManager and PeerConnectionManager had drain_video_frames defaulting to False, overriding the True default in SubscriberPeerConnection. Aligned all three layers and added a docstring explaining the drain lifecycle.
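A hypothetical three-layer sketch of the default mismatch this comment fixes; the class and parameter names come from the PR, the bodies are illustrative only:

```python
class SubscriberPeerConnection:
    def __init__(self, drain_video_frames: bool = True):
        self.drain_video_frames = drain_video_frames


class PeerConnectionManager:
    # Before the fix this default was False, so callers that omitted the
    # argument silently disabled draining despite the inner True default.
    def __init__(self, drain_video_frames: bool = True):
        self.pc = SubscriberPeerConnection(drain_video_frames=drain_video_frames)


class ConnectionManager:
    def __init__(self, drain_video_frames: bool = True):
        self.pcm = PeerConnectionManager(drain_video_frames=drain_video_frames)


cm = ConnectionManager()
print(cm.pcm.pc.drain_video_frames)  # → True
```

Because each layer forwards the keyword explicitly, the innermost default is only reachable if every outer layer agrees with it, which is why all three had to change together.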
Two separate dicts (_video_blackholes, _video_drain_tasks) tracked the same lifecycle, risking desync. Store them as a tuple in one dict.
stop() handles current cleanup, but if aiortc changes start() to be long-lived, the drain_task could leak. Cancel it defensively.
Caution: Some comments are outside the diff and can't be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
getstream/video/rtc/pc.py (1)
Lines 221-228: ⚠️ Potential issue | 🟡 Minor

Clean up `_video_blackholes` when a track ends. If a track ends before a real subscriber is added via `add_track_subscriber`, the blackhole entry remains in `_video_blackholes`. While the blackhole will likely stop naturally (track ended), the dictionary entry persists and the blackhole isn't explicitly stopped.

Proposed fix:
```diff
 def handle_track_ended(self, track: aiortc.mediastreams.MediaStreamTrack) -> None:
     logger.info(f"track ended: {track.id}")
     # Clean up stored references when track ends
     if track.id in self.track_map:
         del self.track_map[track.id]
     if track.id in self.video_frame_trackers:
         del self.video_frame_trackers[track.id]
+
+    blackhole, drain_task = self._video_blackholes.pop(track.id, (None, None))
+    if blackhole and drain_task:
+        task = asyncio.create_task(blackhole.stop())
+        drain_task.cancel()
+        self._background_tasks.add(task)
+        task.add_done_callback(self._background_tasks.discard)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@getstream/video/rtc/pc.py` around lines 221 - 228, In handle_track_ended, when cleaning up track_map and video_frame_trackers also check and clean up the _video_blackholes entry for the same track id: if a blackhole exists stop it (call its stop/close/terminate method as appropriate) and remove the key from self._video_blackholes to avoid a lingering stopped blackhole for tracks that ended before add_track_subscriber was called; reference the handle_track_ended method and the _video_blackholes attribute (and relate to add_track_subscriber behavior) so you stop and delete the blackhole consistently with how other trackers are removed.
Why

When a real subscriber calls `add_track_subscriber`, the MediaBlackhole drain for that track kept running, competing with the actual consumer for frames. Additionally, drain was off by default, so tracks without explicit subscribers would accumulate unconsumed frames (aiortc issue #554).

Changes

- Enable `drain_video_frames` by default across all layers (`ConnectionManager`, `PeerConnectionManager`, `SubscriberPeerConnection`)
- Stop the blackhole drain in `add_track_subscriber` when a real subscriber arrives
- Hold strong references to cleanup tasks in a `_background_tasks` set
- Document the `drain_video_frames` lifecycle in the docstring

Summary by CodeRabbit
Bug Fixes
Tests