Add a devcontainer based on the collector-builder image that enables agent-driven development of collector. The devcontainer includes all C++ build dependencies, Go, Node.js, Claude Code, gcloud CLI, and developer tooling (ripgrep, fd, gh). Verified: cmake configure, full collector build, and 17/17 unit tests pass inside the container. Claude Code authenticates to Vertex AI via a read-only gcloud credential mount.

- .devcontainer/: Dockerfile, devcontainer.json, network firewall
- CLAUDE.md: agent development guide with build/test workflows
- .claude/skills/: /build, /ci-status, /iterate slash commands
- .claude/settings.json: deny Read(.devcontainer/**) for security
- Security: bubblewrap sandboxing, npm hardening, read-only mounts, optional iptables firewall with NET_ADMIN

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…DE.md Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The run.sh script launches Claude Code in the devcontainer with:

- Git worktree isolation: the agent works on its own copy, never touches the user's checkout. Worktree is cleaned up on exit.
- GitHub auth: supports fine-grained PAT via GITHUB_TOKEN or host gh CLI config (read-only mount)
- Modes: autonomous (-p task), interactive, shell, no-worktree

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Replace gh CLI and Docker-based MCP server with official GitHub MCP server at api.githubcopilot.com/mcp (OAuth, project-scoped .mcp.json)
- Add permissions.deny for dangerous MCP tools (merge, delete, fork)
- Add bubblewrap, socat, iptables to Dockerfile for sandboxing
- Remove gh CLI install from Dockerfile
- Fix run.sh: suppress git worktree output, use bash array for docker args instead of eval with string (fixes --interactive mode)
- Remove Docker socket mount and GITHUB_TOKEN forwarding from run.sh
- Update skills to reference mcp__github__* tools instead of gh CLI

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
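The eval-to-array fix mentioned above can be sketched as follows. The image name and flags here are illustrative, not the actual run.sh values:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Collect docker arguments in a bash array rather than a string fed to eval,
# so multi-word values and flags like --interactive survive word-splitting intact.
docker_args=(run --rm)
interactive=true
if [[ "$interactive" == true ]]; then
  docker_args+=(--interactive --tty)
fi
# Hypothetical image name for illustration:
docker_args+=(collector-devcontainer:latest bash)
printf '%s\n' "docker ${docker_args[*]}"
```

Unlike `eval "docker $args_string"`, the array form is never re-parsed by the shell, so an argument containing spaces or quotes stays a single argument.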
…missions

Move skills from .claude/skills/ to .claude/plugins/collector-dev/ as a proper Claude Code plugin. Each skill now declares only the tools it needs via allowed-tools frontmatter:

- /collector-dev:build — cmake, make, git describe, strip (no GitHub)
- /collector-dev:ci-status — git branch/log + GitHub MCP read-only tools
- /collector-dev:iterate — build tools + git + clang-format + GitHub MCP PR/push tools

The GitHub MCP server config moves from the root .mcp.json into the plugin's .mcp.json so it's bundled with the skills that use it.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
run.sh now creates the branch, pushes it, and opens a draft PR before launching the agent. The agent receives the branch name and PR URL in its prompt and only needs to commit and push.

The iterate skill drops all GitHub MCP write tools (create_branch, push_files, create_pull_request, update_pull_request). It retains only read-only GitHub MCP tools for checking CI status.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New skill that checks CI status and reacts to failures:

- PASSED: all checks green, stop
- PENDING: still running, wait for next loop
- FIXED: diagnosed failure, pushed fix, awaiting new CI
- FLAKE: infra issue, not code
- BLOCKED: needs human intervention

Usage: /loop 30m /collector-dev:watch-ci

Same restricted tool set as iterate — read-only GitHub MCP, build tools, git push to existing branch only.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New /collector-dev:task skill that runs the full lifecycle:

1. Implement the task (edit, build, unit test, format, push)
2. Monitor CI in a loop (sleep 10m, check status, fix failures)
3. Stop when all checks pass, or after 6 cycles (~3h)

Reports final status: PASSED, BLOCKED, or TIMEOUT.

run.sh now invokes /collector-dev:task directly so a single command goes from task description to green CI.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Claude Code doesn't auto-discover plugins from .claude/plugins/. Add --plugin-dir /workspace/.claude/plugins/collector-dev to all claude invocations so skills like /collector-dev:task are available. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Use --output-format stream-json --verbose for autonomous task mode so all messages (tool calls, responses, thinking) stream to container stdout in real time. Interactive mode keeps the normal TUI. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
--local edits the working tree directly with interactive TUI. No worktree, no branch, no PR. For debugging and experimentation. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
git worktree add does not init submodules. Without this, cmake fails because falcosecurity-libs and other submodules are missing. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Only init falcosecurity-libs and collector/proto/third_party/stackrox. The 17 builder/third_party/* submodules are baked into the builder image and not needed for compiling collector. This avoids cloning 49 recursive submodules (was hanging on large repos like grpc). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
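The selection logic above can be sketched as follows, under the assumption that the skip list is everything under builder/third_party/. The sample .gitmodules content is fabricated for illustration:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Keep only the submodule paths the collector build needs, dropping the
# builder/third_party/* ones that are baked into the builder image.
tmp=$(mktemp -d)
cat > "$tmp/.gitmodules" <<'EOF'
[submodule "falcosecurity-libs"]
  path = falcosecurity-libs
[submodule "stackrox"]
  path = collector/proto/third_party/stackrox
[submodule "grpc"]
  path = builder/third_party/grpc
EOF

needed=$(git config -f "$tmp/.gitmodules" --get-regexp '^submodule\..*\.path$' \
  | awk '{print $2}' \
  | grep -v '^builder/third_party/')
printf '%s\n' "$needed"
rm -rf "$tmp"
```

The actual init would then be `git submodule update --init --depth 1 -- $needed`, run from the worktree root.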
… mode

Both headless and default task mode now use the same task_prompt() that explicitly invokes /collector-dev:task, ensuring the skill's allowed-tools restrictions are enforced. The only difference is that headless skips PR creation.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Validate before launching the container:

- Docker running and image exists (with build command hint)
- gcloud ADC credentials file exists
- Vertex AI env vars set (CLAUDE_CODE_USE_VERTEX, GOOGLE_CLOUD_PROJECT, GOOGLE_CLOUD_LOCATION)
- gh CLI authenticated (only for PR mode)
- ~/.gitconfig and ~/.ssh exist (warnings)
- git push and gh pr create errors are no longer silent

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Remove gh from run.sh entirely (no check_gh, no setup_pr)
- Remove --headless (was identical to default mode without PR)
- task skill now has create_pull_request and create_branch in allowed-tools
- Agent pushes branch and creates draft PR via GitHub MCP server
- iterate skill stays read-only on GitHub (only task can create PRs)
- Simplified to 4 modes: default task, --interactive, --local, --shell

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Move skills from .claude/plugins/collector-dev/ to .claude/skills/ (standalone skills, no plugin wrapper). Fixes skills not loading in worktrees since the plugin directory was never committed.
- Delete collector-dev plugin entirely (caused phantom GitHub MCP)
- Remove --plugin-dir from run.sh
- Add entrypoint.sh that creates .claude dirs and registers GitHub MCP server via claude mcp add-json when GITHUB_TOKEN is set
- Add --skip-submodules and --debug flags to run.sh
- Add COPY --chmod=755 for entrypoint.sh in Dockerfile
- Simplify CLAUDE.md from 249 to 30 lines — just build commands, key paths, testing rules, and conventions

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…t issues Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The git push deny was blocking the host too. Move it to the entrypoint which writes /home/dev/.claude/settings.json (user scope) inside the container only. Project-level settings.json keeps only the MCP deny rules which apply everywhere. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
No SSH keys in container = git push fails at auth. No deny rule needed. GitHub MCP (PAT via GITHUB_TOKEN) is the only push path. Also removes git push deny from entrypoint settings. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
git clone --local sets origin to the local filesystem path. Fix the remote to the real GitHub URL so git push works from the clone. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
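The remote repair described above amounts to one `git remote set-url`. A minimal reproduction, with the GitHub URL assumed from the repository name:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Simulate a clone whose origin points at a local filesystem path,
# then repoint it at the real GitHub remote so `git push` can work.
clone=$(mktemp -d)
git init -q "$clone"
git -C "$clone" remote add origin /path/to/main/checkout
git -C "$clone" remote set-url origin git@github.com:stackrox/collector.git
url=$(git -C "$clone" remote get-url origin)
printf '%s\n' "$url"
rm -rf "$clone"
```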
Mount the main repo's .git/ directory at its original absolute path inside the container (read-only). The worktree's .git file resolves correctly without any path patching. No cleanup hacks needed. Replaces the git clone --local approach which had issues with the remote URL pointing to the local filesystem. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Mount falcosecurity-libs and collector/proto/third_party/stackrox read-only from the main repo into the container. Eliminates submodule init entirely — worktree creation is now just git worktree add. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Default: git submodule update --init --depth 1 (slower, independent copy)

--symlink-submodules: mount from main repo read-only (instant, shared)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
--branch <name> sets a custom branch name instead of an auto-generated one. Worktrees are now created under /tmp/collector-worktrees/<branch-name>, with slashes replaced by dashes in the directory name.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
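The directory naming rule is plain bash parameter expansion; a sketch with a hypothetical branch name:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Worktrees live under /tmp/collector-worktrees/<branch>, with any
# slashes in the branch name flattened to dashes for the directory.
branch="feature/devcontainer/run-sh"
worktree_dir="/tmp/collector-worktrees/${branch//\//-}"
printf '%s\n' "$worktree_dir"
```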
git commit writes to .git/worktrees/<name>/ (HEAD, index, refs). Read-only mount was blocking commits. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
.git/ is mounted read-only so the agent can't modify shared refs, objects, or config. Only .git/worktrees/<name>/ is read-write, which is what git commit needs (HEAD, index, refs for that branch). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
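The layout this relies on can be checked in a throwaway repo: a worktree's `.git` is a plain file pointing into the main repo's `.git/worktrees/<name>/` directory, which holds that worktree's HEAD and index.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Create a repo plus one linked worktree and inspect where its metadata lives.
repo=$(mktemp -d)
git init -q "$repo"
git -C "$repo" -c user.email=dev@example.com -c user.name=dev \
  commit -q --allow-empty -m "init"
wt="$repo-wt"
git -C "$repo" worktree add -q -b demo "$wt"

gitfile=$(cat "$wt/.git")   # a one-line pointer, not a directory
name=$(basename "$wt")
per_wt_head="no"
if [ -f "$repo/.git/worktrees/$name/HEAD" ]; then per_wt_head="yes"; fi
printf '%s\n' "$gitfile"
printf 'per-worktree HEAD exists: %s\n' "$per_wt_head"
rm -rf "$repo" "$wt"
```

So a mount that exposes only `.git/worktrees/<name>/` read-write is exactly the surface `git commit` needs.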
…mpts Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Host uid (502) != container uid (1000). Make the worktree's .git/worktrees/<name>/ world-writable so the container's dev user can write index.lock, HEAD, etc. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
a+rw doesn't add execute on directories, so creating files inside modules/ subdirs fails. a+rwX adds execute on directories only. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
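The X-vs-x distinction is easy to demonstrate: `chmod -R a+rwX` makes directories traversable without marking regular files executable.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Reproduce the situation: a modules/ subtree where files have no execute
# bits, then apply a+rwX and see what it does and does not touch.
tmp=$(mktemp -d)
mkdir -p "$tmp/modules/sub"
touch "$tmp/modules/sub/index"
chmod 600 "$tmp/modules/sub/index"   # no execute bits anywhere on the file
chmod 700 "$tmp/modules/sub"

chmod -R a+rwX "$tmp"

dir_exec="no"
if [ -x "$tmp/modules/sub" ]; then dir_exec="yes"; fi
file_exec="no"
if [ -x "$tmp/modules/sub/index" ]; then file_exec="yes"; fi
printf 'dir executable: %s, file executable: %s\n' "$dir_exec" "$file_exec"
rm -rf "$tmp"
```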
Submodule init and git commit both need to write to various places under .git/ (worktrees/, modules/, index.lock). The ro mount with rw overlay on the worktree subdir is insufficient. Mount .git rw. Agent still can't push (no SSH keys). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…bmodules

- Remove SYS_PTRACE (container escape risk), NET_ADMIN, NET_RAW from devcontainer.json runArgs
- Remove --symlink-submodules flag entirely (broke cmake builds, added complexity for marginal speedup)
- Show submodule init progress instead of silencing output

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Request repos, pull_requests, and actions toolsets so the agent can create PRs, push files, list workflow runs, and download job logs. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds an Agent Status section to the PR description with timestamp, last commit SHA, cycle count, status, and one-line details. Keeps the original PR description above the status section. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Planning doc for Trail of Bits mcp-context-protector as a security proxy for GitHub MCP. Covers architecture, open questions, threat model comparison, and phased implementation approach. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…ask skill

The /task skill's STOP condition caused the agent to exit before running /watch-ci. Instead, give the agent inline instructions for step 1 (implement) and explicitly tell it to run /watch-ci for step 2.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…ling

Single skill for the full autonomous cycle: implement → build → test → commit → push via MCP → create PR → monitor CI → address review comments → loop until green.

run.sh autonomous mode now invokes /dev-loop directly.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
If the task starts with /, pass it as-is (e.g., /task, /watch-ci). Otherwise default to /dev-loop. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
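That dispatch rule is a one-line prefix check; a sketch with hypothetical task strings:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Tasks that already name a skill (leading "/") pass through unchanged;
# anything else is wrapped in the default /dev-loop skill.
build_prompt() {
  local task="$1"
  if [[ "$task" == /* ]]; then
    printf '%s' "$task"
  else
    printf '%s' "/dev-loop $task"
  fi
}

build_prompt "fix the flaky NRadix unit test"; echo
build_prompt "/watch-ci"; echo
```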
…ext-protector

mcp-watchdog is lightweight (pattern-based, no ML) and covers 70+ attack patterns including credential redaction, prompt injection, rug pull detection, and SSRF blocking. mcp-context-protector requires PyTorch/LlamaFirewall (~3GB) and is better suited for centralized deployment.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The agent can't git push (no SSH keys) and GitHub MCP push_files requires an existing remote branch. Push from the host in run.sh so /watch-ci and /dev-loop can push via MCP immediately. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…skills Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
All modes now use interactive TUI by default, including autonomous task mode. This means you can watch, interrupt, and interact with the agent even in /dev-loop mode. Use --no-tui to get the old stream-json output for piping or logging. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Explain that push_files sends file contents via GitHub API — it does not sync local git history. Provide explicit steps: get branch name, get changed files list, read contents, call push_files. Remove the remote branch prerequisite from watch-ci since run.sh handles it. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…uide

Covers: quick start, modes, skills, token permissions (required vs optional vs not needed), security model, env vars, worktree management.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace manual memory management with RAII using std::unique_ptr for:

- nRadixNode members (value_, left_, right_)
- NRadixTree root_ member

This eliminates manual delete calls, simplifies copy/assignment operators, and prevents potential memory leaks.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Codecov Report

❌ Patch coverage is

Additional details and impacted files

```
@@           Coverage Diff            @@
##           master    #3157    +/-  ##
========================================
+ Coverage   27.38%   27.42%   +0.03%
========================================
  Files          95       95
  Lines        5427     5423       -4
  Branches     2548     2540       -8
========================================
+ Hits         1486     1487       +1
  Misses       3214     3214
+ Partials      727      722       -5
```

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
📝 Walkthrough

This PR adds a comprehensive Claude Code devcontainer setup for the Collector eBPF runtime security project, including automated workflow skills, Docker configuration, initialization scripts, and security policies. It also refactors C++ smart pointer usage in the NRadix tree data structure.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Claude as Claude Code
    participant Git as Local Git
    participant MCP as GitHub MCP
    participant CI as CI System
    participant Dev as Developer

    Dev->>Claude: Start dev-loop with task
    Claude->>Git: Read task & explore repo
    Claude->>Git: Implement changes
    Claude->>Git: Build (CMake Release)
    Claude->>Git: Run tests (ctest)
    Claude->>Git: Format code (clang-format)
    Claude->>Git: Commit changes
    Claude->>MCP: Determine changed files
    Claude->>MCP: Push files to branch
    Claude->>MCP: Find/create PR
    Claude->>MCP: Fetch CI status
    loop CI Monitoring (max 6 cycles)
        Claude->>CI: Check workflow status
        alt Checks Passed
            Claude->>MCP: Update PR status PASSED
            Claude-->>Dev: Complete with summary
        else Checks Failed
            Claude->>MCP: Retrieve job logs
            Claude->>Claude: Classify failure (lint/test/flake)
            alt Fixable Failure
                Claude->>Git: Apply fixes
                Claude->>Git: Rebuild & retest
                Claude->>Git: Commit fixes
                Claude->>MCP: Push updated files
                Claude->>MCP: Update PR status FIXED
            else Unresolvable
                Claude->>MCP: Update PR status BLOCKED
                Claude-->>Dev: Report diagnosis
            end
        end
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks: ❌ 3 failed checks (2 warnings, 1 inconclusive)
Actionable comments posted: 14
🧹 Nitpick comments (2)
.devcontainer/run.sh (2)
242-245: Silent push failure may confuse users.

The `git push` stderr is suppressed, so if it fails (e.g., branch already exists on remote with different history), users won't know why subsequent MCP operations might behave unexpectedly. Consider logging a warning on failure.

🛡️ Proposed fix with warning on failure

```diff
 if [[ "$TASK" != /* ]]; then
     echo "Pushing branch $BRANCH..." >&2
-    git -C "$WORKTREE" push -u origin "$BRANCH" >/dev/null 2>&1
+    if ! git -C "$WORKTREE" push -u origin "$BRANCH" 2>&1 | head -5 >&2; then
+        echo "WARNING: git push failed (expected if no SSH keys). MCP push_files will be used." >&2
+    fi
 fi
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.devcontainer/run.sh around lines 242 - 245, The git push in the run.sh script currently suppresses stderr causing silent failures; update the block that checks TASK and runs git -C "$WORKTREE" push -u origin "$BRANCH" so that you capture its exit status and, on non-zero exit, emit a warning to stderr (including the branch name and a short message) and optionally show git's error output; modify the push invocation in the TASK/BRANCH/WORKTREE section to not fully discard stderr (or redirect it to a variable) and echo a clear warning if the push failed.
120-120: Consider handling the case where the branch already exists.

If `--branch` specifies an existing branch name, `git worktree add -b` will fail. Consider using `-B` to reset an existing branch or adding explicit error handling.

🛡️ Proposed fix with error handling

```diff
-git -C "$REPO_ROOT" worktree add -b "$branch" "$worktree_dir" HEAD >/dev/null 2>&1
+if ! git -C "$REPO_ROOT" worktree add -b "$branch" "$worktree_dir" HEAD 2>&1; then
+    echo "ERROR: Failed to create worktree. Branch '$branch' may already exist." >&2
+    exit 1
+fi
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.devcontainer/run.sh at line 120, The git worktree add command in run.sh uses -b which fails if the branch already exists; update the logic around git -C "$REPO_ROOT" worktree add -b "$branch" "$worktree_dir" HEAD to handle pre-existing branches—either switch to using -B to force/reset the branch when appropriate or add a prior check (e.g., run git -C "$REPO_ROOT" rev-parse --verify "$branch" or git branch --list "$branch") and then use git worktree add without -b or delete/reset the branch accordingly; ensure the modified flow covers both creating a new worktree and reusing/resetting an existing branch and returns/prints a clear error on failure.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.claude/skills/task/SKILL.md:
- Around line 34-40: The fenced code block in SKILL.md is missing a language tag
which triggers markdownlint; update the triple-backtick fence that contains the
TASK COMPLETE summary to include a language identifier (e.g., ```text or
```markdown) so the linter stops flagging it—locate the fenced block in
.claude/skills/task/SKILL.md and add the language after the opening backticks.
In @.devcontainer/devcontainer.json:
- Around line 16-17: Replace the hard bind mounts for
"source=${localEnv:HOME}/.gitconfig,target=/home/dev/.gitconfig,type=bind,readonly"
and
"source=${localEnv:HOME}/.config/gcloud,target=/home/dev/.config/gcloud,type=bind,readonly"
with optional named-volume mounts (e.g.,
"source=devcontainer-gitconfig,target=/home/dev/.gitconfig,type=volume,readonly"
and
"source=devcontainer-gcloud,target=/home/dev/.config/gcloud,type=volume,readonly")
and add a short README or helper script to create/populate those volumes for
contributors who opt-in; this makes mounts safe when host paths don’t exist
while keeping the credential files available for users who want them.
In @.devcontainer/Dockerfile:
- Around line 51-52: The Dockerfile currently installs Claude Code without a
version, which risks breaking CLI subcommands used by the entrypoint; update the
RUN command that installs "@anthropic-ai/claude-code" to pin it to a tested
release (e.g. append @<tested-version>) so the build always pulls the known-good
CLI that supports "claude config set" and "claude mcp add-json"; optionally
document the chosen version in a comment or build ARG so future updates are
deliberate.
- Around line 47-49: The Dockerfile currently executes remote installer scripts
at build time (the RUN curl -fsSL
https://rpm.nodesource.com/setup_${NODE_VERSION}.x | bash - \ line referencing
NODE_VERSION and the similar gcloud install block), which is non-reproducible
and insecure; replace these live installer pipes with pinned, verifiable
artifacts: fetch specific Node.js rpm or tarball for the desired NODE_VERSION
(or use the official distro package with a fixed version), verify its
checksum/signature before installing, and similarly download a fixed gcloud SDK
release archive and verify signatures instead of piping to bash; update the RUN
steps (the Node setup and the gcloud install blocks) to use curl/wget to
download a pinned file, verify integrity, and install via package manager or
extraction so builds are reproducible and supply-chain risk is mitigated.
- Around line 30-39: The Dockerfile currently streams Go, ripgrep, and fd
archives directly into tar (the RUN block using ARCH/GOARCH and the filenames
go1.23.6.linux-${GOARCH}.tar.gz,
ripgrep-14.1.1-${ARCH}-unknown-linux-gnu.tar.gz, and
fd-v10.2.0-${ARCH}-unknown-linux-gnu.tar.gz); change this to download each
archive to a temporary file, download the corresponding published .sha256 (or
checksum) file, verify the archive with sha256sum -c (or compute and compare),
only extract after verification, and then delete the temp files; ensure the same
RUN step uses the ARCH/GOARCH variables and preserves the exact archive
filenames so the subsequent tar commands target the verified files.
In @.devcontainer/entrypoint.sh:
- Around line 11-13: The script currently embeds $GITHUB_TOKEN into the JSON
passed to claude mcp add-json which writes the bearer token into persisted
config; instead implement and run the mcp-watchdog proxy that injects the
Authorization header at request time and update the entrypoint to register the
MCP against the local proxy (e.g., claude mcp add-json ...
'{"type":"http","url":"http://localhost:PORT","headers":{"X-MCP-Toolsets":"context,..."}
}' --scope user) without the Authorization field, ensure mcp-watchdog (the
lightweight proxy service) starts before calling claude mcp add-json and is
coded to read the secret from an environment variable at runtime and forward it
to upstream GitHub while redacting/stopping any secret from being written into
~/.claude or volumes.
In @.devcontainer/init-firewall.sh:
- Line 43: Remove the iptables rule that whitelists the GCE metadata server by
deleting the line containing "iptables -A OUTPUT -d metadata.google.internal -j
ACCEPT" (or replace it with a commented note if you want to keep history);
ensure there are no other rules that allow traffic to metadata.google.internal
so the metadata-server exception is fully removed while keeping the existing ADC
mount/config intact.
- Around line 62-63: The current iptables rule "iptables -A OUTPUT -p tcp
--dport 22 -j ACCEPT" allows SSH to any Internet host; restrict it by either
specifying explicit destination IPs/CIDRs with the -d option (only the GCP test
VM addresses) or gate the rule behind an opt-in variable (e.g.,
CHECK_INTEGRATION_TESTING/ENABLE_TEST_SSH) so the rule is only added when
testing is intentionally enabled; update the script to validate the provided
test targets and fail-safe (do not add the broad rule by default).
- Around line 33-42: The current iptables rules use hostname-based -d entries
(e.g., iptables -A OUTPUT -d oauth2.googleapis.com, -d www.googleapis.com, -d
storage.googleapis.com) which are resolved once and become brittle; replace
these with either (A) a local outbound proxy (configure and document
HTTP(S)_PROXY/NO_PROXY for the devcontainer and remove the hostname -d rules) or
(B) ipset-based allowlists: create ipset sets (e.g., "google_apis" and
"vertex_aiplatform"), populate them from DNS/CDN IP ranges and use iptables -m
set --match-set <set> dst -j ACCEPT instead of -d hostnames, and add a refresh
mechanism (cron or systemd timer/script invoked at container startup) to
periodically update the ipset entries for hostnames used in the script
(oauth2.googleapis.com, accounts.google.com, github.com, registry.npmjs.org,
etc.); update the script to create/populate the ipsets, use iptables -m set
matches, and ensure cleanup on container stop.
In @.devcontainer/run.sh:
- Line 252: Replace the trap invocation so it follows the SC2064 pattern: stop
using a double-quoted trap string that expands WORKTREE immediately and instead
use a single-quoted trap command with the WORKTREE variable wrapped in double
quotes inside the single quotes; update the line that calls trap with the
cleanup_worktree function and the WORKTREE variable accordingly so the variable
is expanded at trap runtime and is safely quoted.
- Line 179: The trap currently uses double quotes which expands $WORKTREE at
definition time (trap "cleanup_worktree '$WORKTREE'" EXIT) causing ShellCheck
SC2064; either change the trap to use single quotes so WORKTREE is expanded at
signal time (use trap with single-quoted command invoking cleanup_worktree and
expanding "$WORKTREE" when the trap runs) or, if the current behavior of
capturing the local WORKTREE value at definition time is intentional, keep the
existing trap and add a ShellCheck disable directive for SC2064 immediately
above the trap line to silence the warning; refer to the trap invocation and
cleanup_worktree function and the WORKTREE variable when making the change.
- Line 205: The trap command uses double quotes currently which triggers SC2064;
change the trap invocation so the command is single-quoted and the variable is
double-quoted for expansion at trap execution (i.e., update the trap that
references cleanup_worktree and $WORKTREE to use the pattern: trap
'cleanup_worktree "$WORKTREE"' EXIT) so WORKTREE is expanded when the trap runs,
not when it’s defined.
In `@collector/lib/NRadix.h`:
- Line 115: Make collector::NRadixTree::root_ a private member (instead of
public) and stop exposing the owning std::unique_ptr; change its declaration to
private and replace any external uses with controlled accessors. Add a
non-owning getter like nRadixNode* Root() / const nRadixNode* Root() (or a const
reference helper) and update callers that previously accessed root_ directly to
call Root(); update internal methods Insert, Find, IsEmpty and the copy
constructor/operator= to use the new getter or operate on the unique_ptr
internally so external code cannot reset/move root_. Ensure no code retains
ownership or performs reset/move on the unique_ptr from outside the class.
- Around line 56-61: The copy-and-swap in nRadixNode::operator=(const
nRadixNode& other) incorrectly calls std::swap(*new_node, *this) which relies on
a move assignment/swap for nRadixNode; instead construct the temporary with
std::make_unique<nRadixNode>(other) and swap each std::unique_ptr member
explicitly (value_, left_, right_) between *new_node and *this (and swap any
other non-pointer members if needed) so the unique_ptrs are exchanged safely;
update operator= to perform these member-wise swaps and return *this.
---
Nitpick comments:
In @.devcontainer/run.sh:
- Around line 242-245: The git push in the run.sh script currently suppresses
stderr causing silent failures; update the block that checks TASK and runs git
-C "$WORKTREE" push -u origin "$BRANCH" so that you capture its exit status and,
on non-zero exit, emit a warning to stderr (including the branch name and a
short message) and optionally show git's error output; modify the push
invocation in the TASK/BRANCH/WORKTREE section to not fully discard stderr (or
redirect it to a variable) and echo a clear warning if the push failed.
- Line 120: The git worktree add command in run.sh uses -b which fails if the
branch already exists; update the logic around git -C "$REPO_ROOT" worktree add
-b "$branch" "$worktree_dir" HEAD to handle pre-existing branches—either switch
to using -B to force/reset the branch when appropriate or add a prior check
(e.g., run git -C "$REPO_ROOT" rev-parse --verify "$branch" or git branch --list
"$branch") and then use git worktree add without -b or delete/reset the branch
accordingly; ensure the modified flow covers both creating a new worktree and
reusing/resetting an existing branch and returns/prints a clear error on
failure.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: febca997-8875-4a27-8a14-ecd90947e434
📒 Files selected for processing (15)
- .claude/settings.json
- .claude/skills/dev-loop/SKILL.md
- .claude/skills/task/SKILL.md
- .claude/skills/watch-ci/SKILL.md
- .devcontainer/Dockerfile
- .devcontainer/README.md
- .devcontainer/devcontainer.json
- .devcontainer/entrypoint.sh
- .devcontainer/init-firewall.sh
- .devcontainer/mcp_protector_plan.md
- .devcontainer/run.sh
- .gitignore
- CLAUDE.md
- collector/lib/NRadix.cpp
- collector/lib/NRadix.h
💤 Files with no reviewable changes (1)
- .gitignore
```
TASK COMPLETE
Branch: <current branch>
Commit: <commit hash>
Files changed: <list>
Tests: <pass/fail count>
```
Specify a language for this fenced block.
markdownlint is already flagging this fence, so this change leaves the warning open until the block is annotated (for example, text).
🧰 Tools
🪛 markdownlint-cli2 (0.22.0)
[warning] 38-38: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.claude/skills/task/SKILL.md around lines 34 - 40, The fenced code block in
SKILL.md is missing a language tag which triggers markdownlint; update the
triple-backtick fence that contains the TASK COMPLETE summary to include a
language identifier (e.g., ```text or ```markdown) so the linter stops flagging
it—locate the fenced block in .claude/skills/task/SKILL.md and add the language
after the opening backticks.
```
"source=${localEnv:HOME}/.gitconfig,target=/home/dev/.gitconfig,type=bind,readonly",
"source=${localEnv:HOME}/.config/gcloud,target=/home/dev/.config/gcloud,type=bind,readonly",
```
Make these host credential mounts optional.
Docker bind mounts fail when the source path does not exist. Requiring ${HOME}/.config/gcloud here (and ${HOME}/.gitconfig on fresh machines) means contributors without those files cannot even create the devcontainer, despite these being optional workflows. Gate them behind an opt-in setting or replace them with named volumes initialized on demand.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.devcontainer/devcontainer.json around lines 16 - 17, Replace the hard bind
mounts for
"source=${localEnv:HOME}/.gitconfig,target=/home/dev/.gitconfig,type=bind,readonly"
and
"source=${localEnv:HOME}/.config/gcloud,target=/home/dev/.config/gcloud,type=bind,readonly"
with optional named-volume mounts (e.g.,
"source=devcontainer-gitconfig,target=/home/dev/.gitconfig,type=volume,readonly"
and
"source=devcontainer-gcloud,target=/home/dev/.config/gcloud,type=volume,readonly")
and add a short README or helper script to create/populate those volumes for
contributors who opt-in; this makes mounts safe when host paths don’t exist
while keeping the credential files available for users who want them.
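A third option is to make the launcher emit the mounts conditionally. This is a hedged sketch only: `optional_mount` is a hypothetical helper (not part of the repo), and the mount-spec format is illustrative. It emits a bind-mount spec only when the host path exists, so a fresh machine without `~/.gitconfig` or `~/.config/gcloud` can still create the container.

```shell
#!/usr/bin/env bash
# Hypothetical helper: print a readonly bind-mount spec only if the host
# source path exists; print nothing (and succeed) otherwise.
optional_mount() {
  local src=$1 dst=$2
  if [ -e "$src" ]; then
    printf 'source=%s,target=%s,type=bind,readonly\n' "$src" "$dst"
  fi
  return 0
}

# Demo with deterministic paths instead of real credentials:
demo=$(mktemp)                                         # stands in for a host credential file
optional_mount "$demo" /home/dev/.gitconfig            # prints a mount spec
optional_mount /no/such/path /home/dev/.config/gcloud  # prints nothing
rm -f "$demo"
```

The launcher would append each non-empty line to the docker args array, so a missing optional file simply results in no mount.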
RUN ARCH=$(uname -m) \
    && GOARCH=$([ "$ARCH" = "aarch64" ] && echo "arm64" || echo "amd64") \
    # Install Go
    && curl -fsSL "https://go.dev/dl/go1.23.6.linux-${GOARCH}.tar.gz" | tar -C /usr/local -xzf - \
    # Install ripgrep
    && curl -fsSL "https://github.com/BurntSushi/ripgrep/releases/download/14.1.1/ripgrep-14.1.1-${ARCH}-unknown-linux-gnu.tar.gz" \
    | tar -xzf - --strip-components=1 -C /usr/local/bin "ripgrep-14.1.1-${ARCH}-unknown-linux-gnu/rg" \
    # Install fd
    && curl -fsSL "https://github.com/sharkdp/fd/releases/download/v10.2.0/fd-v10.2.0-${ARCH}-unknown-linux-gnu.tar.gz" \
    | tar -xzf - --strip-components=1 -C /usr/local/bin "fd-v10.2.0-${ARCH}-unknown-linux-gnu/fd"
🧩 Analysis chain
🏁 Script executed:
# First, let's read the Dockerfile to understand the full context
cat -n .devcontainer/Dockerfile

Repository: stackrox/collector
Length of output: 3854
🏁 Script executed:
# Check if checksums or signatures are available for these tools
# Go - check official documentation and available checksums
curl -sL https://go.dev/dl/ | grep -i "sha256\|checksum" | head -20
Length of output: 655
🏁 Script executed:
# Check ripgrep releases for checksum information
curl -sL "https://api.github.com/repos/BurntSushi/ripgrep/releases/tags/14.1.1" | jq '.body' 2>/dev/null | head -50
Length of output: 66
🏁 Script executed:
# Check fd releases for checksum information
curl -sL "https://api.github.com/repos/sharkdp/fd/releases/tags/v10.2.0" | jq '.body' 2>/dev/null | head -50
Length of output: 66
🌐 Web query:
Go 1.23.6 official release checksums download
💡 Result:
Go 1.23.6 was released on 2025-02-04 as a security release fixing crypto/elliptic timing sidechannel on ppc64le (CVE-2025-22866). Official binary and source distributions are available at https://go.dev/dl/. SHA256 checksums for all official download files are listed on that page in a table under the go1.23.6 section (likely in "Archived versions" since newer releases like go1.26.1 are featured). To verify downloads, compute the SHA256 hash of the file and match it against the official checksums on https://go.dev/dl/. Alternatively, use go install golang.org/dl/go1.23.6@latest followed by go1.23.6 download, which fetches and verifies the official distribution automatically. Source code can also be obtained via git checkout go1.23.6 in the golang/go repository at https://go.googlesource.com/go. No explicit checksum values were extractable from search snippets, but the official page provides them directly.
Citations:
- 1: https://go.dev/dl/
- 2: https://groups.google.com/g/golang-announce/c/xU1ZCHUZw3k
- 3: https://go.dev/doc/devel/release
- 4: https://pkg.go.dev/golang.org/dl/go1.23.6
- 5: https://www.reddit.com/r/golang/comments/1ihsl61/go_1236_is_released/
🌐 Web query:
ripgrep 14.1.1 release checksums available
💡 Result:
Yes, checksums (SHA256) for ripgrep 14.1.1 release binaries are available on the official GitHub releases page at https://github.com/BurntSushi/ripgrep/releases/tag/14.1.1. Each binary archive, such as ripgrep-14.1.1-x86_64-unknown-linux-musl.tar.gz, has a corresponding .sha256 file containing its checksum. These are standard for ripgrep releases, as confirmed by examples in 14.0.0 and 15.x releases (e.g., ripgrep-14.0.0-aarch64-apple-darwin.tar.gz.sha256). SourceForge mirrors also host these .sha256 files specifically for 14.1.1 assets, like ripgrep-14.1.1-x86_64-pc-windows-gnu.zip.sha256 and ripgrep_14.1.1-1_amd64.deb.sha256. The release occurred on 2024-09-09 (patch fixing matching bug #2884), and is verified with GPG signature.
Citations:
- 1: https://github.com/burntsushi/ripgrep/releases
- 2: https://crates.io/crates/ripgrep/14.1.0
- 3: https://github.com/BurntSushi/ripgrep/releases/tag/14.0.0
- 4: https://sourceforge.net/projects/ripgrep.mirror/files/14.1.1/ripgrep-14.1.1-x86_64-pc-windows-gnu.zip.sha256/download
- 5: https://sourceforge.net/projects/ripgrep.mirror/files/14.1.1/ripgrep_14.1.1-1_amd64.deb.sha256/download
- 6: https://github.com/BurntSushi/ripgrep/releases
- 7: https://ripgrep.dev/changelog/
🌐 Web query:
fd 10.2.0 release checksums github
💡 Result:
The fd (find alternative) tool version 10.2.0 was released on 2024-08-23 by sharkdp/fd on GitHub. SHA256 checksums for its binary assets are provided directly on the GitHub release page (https://github.com/sharkdp/fd/releases/tag/v10.2.0), listed next to each downloadable file (e.g., similar to fd-musl_10.4.2_amd64.deb sha256:a8d10259388c32f9aafd65fb65b6e3e2d2782043bed687112f50e2c4cf000c27 for recent releases). There is no separate checksums.txt file, as confirmed by closed issue #788 requesting one. Visit the specific release page to view all 22-23 assets and their individual SHA256 hashes for verification. The release includes features like --hyperlink option and builds Windows releases with Rust 1.77 for Windows 7 support.
Citations:
- 1: https://github.com/sharkdp/fd/releases
- 2: Add a checksums.txt file or equivalent for binary/artifacts checksums on GitHub release pages sharkdp/fd#788
- 3: https://raw.githubusercontent.com/sharkdp/fd/master/CHANGELOG.md
- 4: https://github.com/sharkdp/fd
- 5: https://crates.io/crates/fd-find/10.2.0
🌐 Web query:
docker dockerfile best practices piping curl to tar security
💡 Result:
Dockerfile best practices for piping curl to tar, such as RUN curl https://example.com/file.tar.gz | tar -xzf -, are endorsed by official Docker documentation as efficient for downloading and extracting remote archives in a single layer. This avoids leaving temporary files in image layers, unlike using ADD with remote URLs followed by separate extraction.

Key best practices:
- Prefer RUN curl ... | tar over ADD http://... for remote archives: ADD creates an extra layer with the unextracted archive, bloating the image and complicating cleanup. RUN allows download, extract, and delete in one step.
- Pin exact versions in URLs (e.g., via ENV PG_VERSION=9.3.4; curl https://example.com/postgres-$PG_VERSION.tar.xz | tar ...) for reproducibility and supply chain security.
- Verify integrity: Download to a temp file first, check SHA256 checksum, then extract and remove (e.g., RUN curl -fsSL https://example.com/package.tar.gz -o /tmp/pkg.tar.gz && echo "expected_sha256 /tmp/pkg.tar.gz" | sha256sum -c - && tar -xzf /tmp/pkg.tar.gz -C / && rm /tmp/pkg.tar.gz).
- Use --fail/-f with curl to fail on HTTP errors, -SL for location following.
- Combine with apt cleanup: In Debian/Ubuntu, chain with apt-get update/install/curl && ... && apt-get purge -y curl ca-certificates && rm -rf /var/lib/apt/lists/* to minimize image size.
- Avoid piping curl directly to sh/bash (curl | sh), as it executes untrusted scripts without review; always inspect or verify downloads.
- Use multi-stage builds for complex cases to further reduce final image size.
- Scan images with tools like Trivy or Docker Scout for vulnerabilities, including in curl itself (keep base images updated).

Security risks if not followed:
- Unpinned URLs allow supply chain attacks if the source changes maliciously.
- No checksum: tampered archives execute arbitrary code during the build.
- Separate layers retain artifacts inspectable in image history.
- ADD remote URLs lack transparency and checksum support, enabling user-agent-based attacks.

Official Docker docs explicitly recommend this pattern in multiple best-practices pages (e.g., the ENV example with curl | tar).
Citations:
- 1: https://docs.docker.com/develop/develop-images/dockerfile_best-practices
- 2: https://docs.docker.com/build/building/best-practices/
- 3: https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/
- 4: https://docs.datadoghq.com/security/code_security/iac_security/iac_rules/dockerfile/curl_or_wget_instead_of_add/
- 5: https://docs.docker.com/develop/security-best-practices/
- 6: https://stackoverflow.com/questions/72971015/is-there-any-cache-advantage-to-using-add-url-vs-run-wget-curl-url-in-a-dock
Verify downloaded archives before extracting them.
Go, ripgrep, and fd are streamed directly from the network into tar. A compromised mirror or TLS tampering could inject arbitrary code into the image. All three tools publish SHA256 checksums: Go on https://go.dev/dl/, ripgrep as .sha256 files on the release page, and fd alongside release assets. Download to a temp file, verify the checksum, then extract and remove it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.devcontainer/Dockerfile around lines 30 - 39, The Dockerfile currently
streams Go, ripgrep, and fd archives directly into tar (the RUN block using
ARCH/GOARCH and the filenames go1.23.6.linux-${GOARCH}.tar.gz,
ripgrep-14.1.1-${ARCH}-unknown-linux-gnu.tar.gz, and
fd-v10.2.0-${ARCH}-unknown-linux-gnu.tar.gz); change this to download each
archive to a temporary file, download the corresponding published .sha256 (or
checksum) file, verify the archive with sha256sum -c (or compute and compare),
only extract after verification, and then delete the temp files; ensure the same
RUN step uses the ARCH/GOARCH variables and preserves the exact archive
filenames so the subsequent tar commands target the verified files.
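The download-verify-extract pattern the prompt describes looks roughly like this. A locally built archive stands in for the network download so the sketch is hermetic; in the real Dockerfile the archive and its `.sha256` file would come from `curl -fsSL -o`.

```shell
#!/usr/bin/env bash
# Sketch of download -> verify -> extract. With `set -e`, a checksum
# mismatch aborts the whole RUN step (and therefore the image build).
set -euo pipefail

workdir=$(mktemp -d)
cd "$workdir"

echo "hello" > payload.txt
tar -czf archive.tar.gz payload.txt               # stand-in for: curl -fsSL -o archive.tar.gz <url>
sha256sum archive.tar.gz > archive.tar.gz.sha256  # stand-in for fetching the published .sha256 file

# The verification step a hardened Dockerfile would perform:
sha256sum -c archive.tar.gz.sha256                # prints "archive.tar.gz: OK" or fails

mkdir extracted && tar -xzf archive.tar.gz -C extracted
cat extracted/payload.txt                         # hello
rm -f archive.tar.gz archive.tar.gz.sha256        # clean up the temp files
```

The same three-step shape (download to a temp file, `sha256sum -c`, extract then delete) applies to the Go, ripgrep, and fd archives alike.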
RUN curl -fsSL https://rpm.nodesource.com/setup_${NODE_VERSION}.x | bash - \
    && dnf install -y nodejs \
    && dnf clean all
🧩 Analysis chain
🏁 Script executed:
cat -n .devcontainer/Dockerfile | head -70Repository: stackrox/collector
Length of output: 3453
Avoid executing live installer scripts during the build.
Both the NodeSource and gcloud installs execute remote scripts fetched at build time without any integrity check. This makes the image non-reproducible and extends the supply-chain trust boundary to live network content (lines 47-49 and 55-57). Prefer pinned packages or verified release artifacts instead.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.devcontainer/Dockerfile around lines 47 - 49, The Dockerfile currently
executes remote installer scripts at build time (the RUN curl -fsSL
https://rpm.nodesource.com/setup_${NODE_VERSION}.x | bash - \ line referencing
NODE_VERSION and the similar gcloud install block), which is non-reproducible
and insecure; replace these live installer pipes with pinned, verifiable
artifacts: fetch specific Node.js rpm or tarball for the desired NODE_VERSION
(or use the official distro package with a fixed version), verify its
checksum/signature before installing, and similarly download a fixed gcloud SDK
release archive and verify signatures instead of piping to bash; update the RUN
steps (the Node setup and the gcloud install blocks) to use curl/wget to
download a pinned file, verify integrity, and install via package manager or
extraction so builds are reproducible and supply-chain risk is mitigated.
# Install Claude Code
RUN npm install -g @anthropic-ai/claude-code
🧩 Analysis chain
🏁 Script executed:
# Check the .devcontainer directory structure
ls -la .devcontainer/Repository: stackrox/collector
Length of output: 624
🏁 Script executed:
# Read the Dockerfile to confirm the current state at lines 51-52
cat -n .devcontainer/Dockerfile | head -60Repository: stackrox/collector
Length of output: 2895
🏁 Script executed:
# Read entrypoint.sh to check for subcommand usage
cat -n .devcontainer/entrypoint.shRepository: stackrox/collector
Length of output: 1028
Pin @anthropic-ai/claude-code to a tested version.
The entrypoint script relies on specific subcommands: claude config set and claude mcp add-json. An unpinned installation will pull whatever version is current at build time, risking silent breakage if a release changes these command interfaces. Pin the version and update it deliberately.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.devcontainer/Dockerfile around lines 51 - 52, The Dockerfile currently
installs Claude Code without a version, which risks breaking CLI subcommands
used by the entrypoint; update the RUN command that installs
"@anthropic-ai/claude-code" to pin it to a tested release (e.g. append
@<tested-version>) so the build always pulls the known-good CLI that supports
"claude config set" and "claude mcp add-json"; optionally document the chosen
version in a comment or build ARG so future updates are deliberate.
--interactive|-i)
  preflight
  WORKTREE=$(setup_worktree)
  trap "cleanup_worktree '$WORKTREE'" EXIT
Fix shellcheck SC2064: Use single quotes for trap command.
The trap command expands $WORKTREE at definition time due to double quotes. While this happens to work correctly here (since WORKTREE is set before the trap), the idiomatic approach is to use single quotes and let the variable expand at signal time.
🔧 Proposed fix
- trap "cleanup_worktree '$WORKTREE'" EXIT
+ trap 'cleanup_worktree "$WORKTREE"' EXIT

However, since WORKTREE is a local variable that goes out of scope, the current approach of capturing it at definition time is actually intentional. An alternative fix that keeps the same behavior while satisfying shellcheck:
🔧 Alternative: disable shellcheck for this line
+ # shellcheck disable=SC2064 # Intentional: capture WORKTREE value at trap definition time
  trap "cleanup_worktree '$WORKTREE'" EXIT

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
# shellcheck disable=SC2064 # Intentional: capture WORKTREE value at trap definition time
trap "cleanup_worktree '$WORKTREE'" EXIT
🧰 Tools
🪛 GitHub Actions: Lint
[error] 179-179: shellcheck failed in pre-commit hook "shellcheck". SC2064 (warning): Use single quotes, otherwise this expands now rather than when signalled.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.devcontainer/run.sh at line 179, The trap currently uses double quotes
which expands $WORKTREE at definition time (trap "cleanup_worktree '$WORKTREE'"
EXIT) causing ShellCheck SC2064; either change the trap to use single quotes so
WORKTREE is expanded at signal time (use trap with single-quoted command
invoking cleanup_worktree and expanding "$WORKTREE" when the trap runs) or, if
the current behavior of capturing the local WORKTREE value at definition time is
intentional, keep the existing trap and add a ShellCheck disable directive for
SC2064 immediately above the trap line to silence the warning; refer to the trap
invocation and cleanup_worktree function and the WORKTREE variable when making
the change.
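The difference SC2064 warns about fits in a few lines. Each command substitution below runs in a subshell, and the EXIT trap set inside it fires when that subshell exits:

```shell
#!/usr/bin/env bash
# Double quotes: $v is expanded when the trap is *defined*.
now=$(v="captured"; trap "echo $v" EXIT; v="changed")

# Single quotes: $v is expanded when the trap *fires*.
later=$(v="captured"; trap 'echo $v' EXIT; v="changed")

echo "double quotes: $now"    # double quotes: captured
echo "single quotes: $later"  # single quotes: changed
```

In run.sh the double-quoted (expand-now) form is the intended behavior, since WORKTREE may be out of scope when the trap fires; that is exactly why the shellcheck-disable comment, rather than a quoting change, is the right fix.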
--shell|-s)
  check_docker; check_image
  WORKTREE=$(setup_worktree)
  trap "cleanup_worktree '$WORKTREE'" EXIT
Same SC2064 fix needed here.
Apply the same fix as line 179.
🔧 Proposed fix (same pattern)
+ # shellcheck disable=SC2064 # Intentional: capture WORKTREE value at trap definition time
  trap "cleanup_worktree '$WORKTREE'" EXIT

📝 Committable suggestion
# shellcheck disable=SC2064 # Intentional: capture WORKTREE value at trap definition time
trap "cleanup_worktree '$WORKTREE'" EXIT
🧰 Tools
🪛 GitHub Actions: Lint
[error] 205-205: shellcheck failed in pre-commit hook "shellcheck". SC2064 (warning): Use single quotes, otherwise this expands now rather than when signalled.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.devcontainer/run.sh at line 205, The trap command uses double quotes
currently which triggers SC2064; change the trap invocation so the command is
single-quoted and the variable is double-quoted for expansion at trap execution
(i.e., update the trap that references cleanup_worktree and $WORKTREE to use the
pattern: trap 'cleanup_worktree "$WORKTREE"' EXIT) so WORKTREE is expanded when
the trap runs, not when it’s defined.
echo "Task: $TASK"
echo "---"

trap "cleanup_worktree '$WORKTREE'" EXIT
Same SC2064 fix needed here.
Apply the same fix as line 179.
🔧 Proposed fix (same pattern)
+ # shellcheck disable=SC2064 # Intentional: capture WORKTREE value at trap definition time
  trap "cleanup_worktree '$WORKTREE'" EXIT

🧰 Tools
🪛 GitHub Actions: Lint
[error] 252-252: shellcheck failed in pre-commit hook "shellcheck". SC2064 (warning): Use single quotes, otherwise this expands now rather than when signalled.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.devcontainer/run.sh at line 252, Replace the trap invocation so it follows
the SC2064 pattern: stop using a double-quoted trap string that expands WORKTREE
immediately and instead use a single-quoted trap command with the WORKTREE
variable wrapped in double quotes inside the single quotes; update the line that
calls trap with the cleanup_worktree function and the WORKTREE variable
accordingly so the variable is expanded at trap runtime and is safely quoted.
  nRadixNode& operator=(const nRadixNode& other) {
    if (this == &other) {
      return *this;
    }
-   auto* new_node = new nRadixNode(other);
+   auto new_node = std::make_unique<nRadixNode>(other);
    std::swap(*new_node, *this);
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== move special members =="
rg -n -C2 'nRadixNode\s*\(\s*nRadixNode&&|operator=\s*\(\s*nRadixNode&&' collector/lib/NRadix.h collector/lib/NRadix.cpp || true
echo
echo "== custom swap overloads =="
rg -n -C2 '\bswap\s*\(\s*nRadixNode\s*&' collector/lib/NRadix.h collector/lib/NRadix.cpp || true
Length of output: 117
nRadixNode::operator= lacks safe swap semantics for std::unique_ptr members.
The copy-and-swap implementation at lines 56-61 uses std::swap(*new_node, *this), but nRadixNode has no move assignment operator and no custom swap overload. Without these, the swap operation cannot safely exchange the three std::unique_ptr members (value_, left_, right_), risking undefined behavior or infinite recursion.
Suggested fix
  nRadixNode& operator=(const nRadixNode& other) {
    if (this == &other) {
      return *this;
    }
    auto new_node = std::make_unique<nRadixNode>(other);
-   std::swap(*new_node, *this);
+   value_.swap(new_node->value_);
+   left_.swap(new_node->left_);
+   right_.swap(new_node->right_);
    return *this;
  }

Swap each std::unique_ptr member directly instead of relying on a non-existent move assignment operator.
📝 Committable suggestion
nRadixNode& operator=(const nRadixNode& other) {
  if (this == &other) {
    return *this;
  }
  auto new_node = std::make_unique<nRadixNode>(other);
  value_.swap(new_node->value_);
  left_.swap(new_node->left_);
  right_.swap(new_node->right_);
  return *this;
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@collector/lib/NRadix.h` around lines 56 - 61, The copy-and-swap in
nRadixNode::operator=(const nRadixNode& other) incorrectly calls
std::swap(*new_node, *this) which relies on a move assignment/swap for
nRadixNode; instead construct the temporary with
std::make_unique<nRadixNode>(other) and swap each std::unique_ptr member
explicitly (value_, left_, right_) between *new_node and *this (and swap any
other non-pointer members if needed) so the unique_ptrs are exchanged safely;
update operator= to perform these member-wise swaps and return *this.
  bool IsAnyIPNetSubset(Address::Family family, const NRadixTree& other) const;

- nRadixNode* root_;
+ std::unique_ptr<nRadixNode> root_;
🛠️ Refactor suggestion | 🟠 Major
Make root_ private.
The class now exposes its owning std::unique_ptr directly. Any caller can reset() or move collector::NRadixTree::root_ out and turn the unguarded dereferences in Insert, Find, IsEmpty, and the copy operations into nullptr crashes.
Suggested fix
  class NRadixTree {
   public:
    NRadixTree() : root_(std::make_unique<nRadixNode>()) {}
    explicit NRadixTree(const std::vector<IPNet>& networks) : root_(std::make_unique<nRadixNode>()) {
      for (const auto& network : networks) {
        auto inserted = this->Insert(network);
        if (!inserted) {
          CLOG(ERROR) << "Failed to insert CIDR " << network << " in network tree";
        }
      }
    }
    NRadixTree(const NRadixTree& other) : root_(std::make_unique<nRadixNode>(*other.root_)) {}
    ~NRadixTree() = default;
    NRadixTree& operator=(const NRadixTree& other) {
      if (this == &other) {
        return *this;
      }
      root_ = std::make_unique<nRadixNode>(*other.root_);
      return *this;
    }
    bool Insert(const IPNet& network) const;
    IPNet Find(const IPNet& network) const;
    IPNet Find(const Address& addr) const;
    std::vector<IPNet> GetAll() const;
    bool IsEmpty() const;
    bool IsAnyIPNetSubset(const NRadixTree& other) const;
    bool IsAnyIPNetSubset(Address::Family family, const NRadixTree& other) const;
+  private:
    std::unique_ptr<nRadixNode> root_;
  };

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@collector/lib/NRadix.h` at line 115, Make collector::NRadixTree::root_ a
private member (instead of public) and stop exposing the owning std::unique_ptr;
change its declaration to private and replace any external uses with controlled
accessors. Add a non-owning getter like nRadixNode* Root() / const nRadixNode*
Root() (or a const reference helper) and update callers that previously accessed
root_ directly to call Root(); update internal methods Insert, Find, IsEmpty and
the copy constructor/operator= to use the new getter or operate on the
unique_ptr internally so external code cannot reset/move root_. Ensure no code
retains ownership or performs reset/move on the unique_ptr from outside the
class.
Description
A detailed explanation of the changes in your PR.
Feel free to remove this section if it is overkill for your PR, and the title of your PR is sufficiently descriptive.
Checklist
Automated testing
If any of these don't apply, please comment below.
Testing Performed
TODO(replace-me)
Use this space to explain how you tested your PR, or, if you didn't test it, why you did not do so. (Valid reasons include "CI is sufficient" or "No testable changes")
In addition to reviewing your code, reviewers must also review your testing instructions, and make sure they are sufficient.
For more details, see the Confluence page about this section.
Summary by CodeRabbit
New Features
Documentation
Chores