diff --git a/docs/cs-10622-boxel-realm-track-plan.md b/docs/cs-10622-boxel-realm-track-plan.md new file mode 100644 index 0000000000..5bf770c76a --- /dev/null +++ b/docs/cs-10622-boxel-realm-track-plan.md @@ -0,0 +1,85 @@ +# CS-10622: Reimplement `boxel realm track` command + +## Goal + +Port the legacy `boxel track` command from the standalone `boxel-cli` repo into the monorepo at `packages/boxel-cli`, namespaced as `boxel realm track`. Track is the **write-side** counterpart to `boxel realm watch` (CS-10623): it watches the local filesystem, debounces edits, creates checkpoints in `.boxel-history/`, and with `--push` batch-uploads add/update changes to the realm via `/_atomic`. The marquee workflow is collaborative card editing — a developer (or Claude Code) edits locally with `track --push` running, while teammates see updates in the web UI. + +## Branch / dependency + +- Branch: `cs-10622-reimplement-boxel-realm-track-command`. +- Based on `cs-10623-reimplement-boxel-realm-watch-command` (PR #4554) — track depends on `RealmSyncBase`, `RealmAuthenticator`, `CheckpointManager`, the `realm` command group, and a generalized lock module that lands in this PR. +- Targets `main` once #4554 merges; will rebase. + +## Design decisions + +1. **Push is in scope.** Single PR ships local tracking + `--push` together. +2. **Hybrid change detection.** mtime+size on the 2s poll triggers the debounce; `computeFileHash` runs once per pending file before checkpoint creation, dropping no-op saves (editor touched-but-content-identical) without paying hash cost on every poll tick. +3. **Generalized sync-lock with bidirectional cross-guard.** `watch-lock.ts` is renamed to `sync-lock.ts` and parameterized by a `kind: 'watch' | 'track'`. Both watch and track call `acquireSyncLock`; both refuse if the *other* kind is held by a live PID. Prevents the track+watch infinite loop on the same dir. +4. **Defer deletions on `--push`** (legacy parity). 
`op: 'remove'` is defined in `runtime-common/atomic-document.ts` but **not implemented** server-side — `filterAtomicOperations` strips no-data ops and the atomic handler only iterates add/update. Implementing it server-side is a sizable change to `runtime-common/realm.ts` (validation, dispatch, indexing/invalidation hooks) and out of scope for a CLI port. Track's `--push` cycle uploads adds/updates only. A locally-deleted file still produces a local `[deleted]` checkpoint entry but emits a `[VERBOSE] Skipping delete on push (deferred)` log. Filed as follow-up. +5. **Inline push using `uploadFilesAtomic()` as-is.** `RealmTracker extends RealmSyncBase` and calls the existing `uploadFilesAtomic(files, addPaths)` method directly — no signature change. `RealmPusher` stays untouched. +6. **Sort ops .gts-first within the single atomic doc.** `uploadFilesAtomic` preserves Map insertion order in the operations array, so a sorted input Map satisfies the legacy `.gts before .json` requirement without splitting batches or changing the server. +7. **Manifest is required for `--push`.** Track is a streaming-edit tool, not an initial-sync tool. Pre-flight refuses if `.boxel-sync.json` is missing or points at a different realm; the operator runs `boxel realm sync` first. +8. **Auth is lazy.** Without `--push`, no authenticator is resolved — tracker is local-only. With `--push`, the authenticator is resolved via `resolveRealmAuthenticator` and a single `getRemoteFileList('')` call at startup smoke-tests it, mirroring legacy track's startup JWT check. + +## Files + +### New +- `packages/boxel-cli/src/lib/sync-lock.ts` — generalized lock module with `LockKind`, `acquireSyncLock(localDir, kind, realmUrl)`, `releaseSyncLock(localDir, kind)`, `readSyncLock(localDir, kind)`. Bidirectional cross-guard built into `acquireSyncLock`. 
+- `packages/boxel-cli/src/commands/realm/track.ts` — `RealmTracker extends RealmSyncBase`, `trackRealms(specs, options)` orchestrator, `registerTrackCommand(realm)` Commander wiring with `-i`, `-d`, `-q`, `-p`, `-v`, `--realm-secret-seed`. +- `packages/boxel-cli/tests/integration/realm-track.test.ts` — integration suite covering local behavior, `--push`, and locks. + +### Modified +- `packages/boxel-cli/src/lib/watch-lock.ts` — **deleted**, replaced by `sync-lock.ts`. +- `packages/boxel-cli/src/commands/realm/watch.ts` — imports updated to `sync-lock`, lock acquisition passes `'watch'` kind, error message handles the cross-guard `track` conflict case. +- `packages/boxel-cli/src/commands/realm/index.ts` — registers `track` command. +- `packages/boxel-cli/tests/integration/realm-watch.test.ts` — adds two cross-guard cases: refuses when track is live, ignores stale track lock. + +### Reused (no changes) +- `packages/boxel-cli/src/lib/realm-sync-base.ts` — `getRemoteFileList`, `getRemoteMtimes`, `uploadFilesAtomic`, `buildFileUrl`, `isProtectedFile`. +- `packages/boxel-cli/src/lib/checkpoint-manager.ts` — `createCheckpoint('local', changes)`, `init`, `isInitialized`. +- `packages/boxel-cli/src/lib/sync-manifest.ts` — `loadManifest`, `saveManifest`, `computeFileHash`. +- `packages/boxel-cli/src/lib/auth-resolver.ts` — `resolveRealmAuthenticator`. +- `packages/boxel-cli/src/lib/prompt.ts` — `resolveRealmSecretSeed`. + +## Test plan + +`pnpm --filter @cardstack/boxel-cli test:integration -- realm-track` covers: + +**Local behavior** (no `--push`): +1. Detects an added file, writes a local checkpoint. +2. Detects a modification, writes a local checkpoint. +3. Detects a deletion, writes a local checkpoint. +4. Coalesces a burst of edits into one debounced checkpoint. +5. Defers a second batch when min-interval has not elapsed. +6. Hash-gates a noop modify when the manifest has the same hash. + +**`--push`**: +7. 
Uploads adds/updates via `/_atomic`, then updates the manifest. +8. Orders `.gts` modules before `.json` instances inside the atomic POST. +9. Skips deletions on push, recording them in the local checkpoint only. +10. Fails fast when `--push` is enabled but no manifest exists. +11. Retains entries whose push fails (e.g. concurrent 409) for the next cycle. + +**Locks and orchestration**: +12. Blocks a second concurrent track against the same `localDir`. +13. Refuses to start when a live watch lock exists at the same `localDir`. +14. Overwrites a stale track lock from a process that no longer exists. +15. Flushes pending changes before exit when the abort signal fires. + +## Verification + +1. `pnpm --filter @cardstack/boxel-cli test:integration` — realm-track suite green. +2. `pnpm --filter @cardstack/boxel-cli build` succeeds. +3. `boxel realm track --help` documents `-i`, `-d`, `-q`, `-p`, `-v`, `--realm-secret-seed`. +4. **Manual smoke against staging:** + - `boxel realm sync ./scratch/ ` (establishes manifest). + - `boxel realm track ./scratch/ --push -v`. + - Edit a `.gts` and matching `.json`; within `debounce + interval` confirm: local checkpoint logged, atomic POST visible in verbose output with `.gts` op listed first, manifest hash updated. + - In a second terminal: `boxel realm watch ./scratch/` → refuses with `.boxel-track.lock` conflict. + - Delete a file locally; confirm checkpoint logs `deleted` and verbose log shows `Skipping delete on push (deferred)`. Remote file remains. + - Ctrl+C; confirm pending changes flushed, lock released, exit 0. + +## Open follow-ups (not this PR) + +- **Implement server-side `op: 'remove'`** in `runtime-common/realm.ts`. Bypass `filterAtomicOperations` for remove ops, validate target existence, dispatch through the realm adapter's delete + indexing path, add tests in `packages/realm-server/tests/atomic-endpoints-test.ts`. 
Once that lands, both `RealmTracker.pushDrained` and `RealmPusher` (the `--delete` path at `push.ts:244-253`) migrate to atomic remove. +- `boxel realm stop` (CS-10624) — once track lands its lock, stop becomes a kill-switch over `.boxel-track.lock` and `.boxel-watch.lock` discovery sources. diff --git a/packages/boxel-cli/src/commands/realm/index.ts b/packages/boxel-cli/src/commands/realm/index.ts index 0616808544..045cc10852 100644 --- a/packages/boxel-cli/src/commands/realm/index.ts +++ b/packages/boxel-cli/src/commands/realm/index.ts @@ -6,6 +6,7 @@ import { registerListCommand } from './list'; import { registerPullCommand } from './pull'; import { registerPushCommand } from './push'; import { registerSyncCommand } from './sync'; +import { registerTrackCommand } from './track'; import { registerWaitForReadyCommand } from './wait-for-ready'; import { registerWatchCommand } from './watch'; @@ -21,6 +22,7 @@ export function registerRealmCommand(program: Command): void { registerPullCommand(realm); registerPushCommand(realm); registerSyncCommand(realm); + registerTrackCommand(realm); registerWaitForReadyCommand(realm); registerWatchCommand(realm); } diff --git a/packages/boxel-cli/src/commands/realm/track.ts b/packages/boxel-cli/src/commands/realm/track.ts new file mode 100644 index 0000000000..6f5744f460 --- /dev/null +++ b/packages/boxel-cli/src/commands/realm/track.ts @@ -0,0 +1,996 @@ +import { InvalidArgumentError, type Command } from 'commander'; +import * as fs from 'fs/promises'; +import * as fsSync from 'fs'; +import * as path from 'path'; +import { RealmSyncBase, isProtectedFile } from '../../lib/realm-sync-base'; +import { + CheckpointManager, + type Checkpoint, + type CheckpointChange, +} from '../../lib/checkpoint-manager'; +import { + type SyncManifest, + computeFileHash, + loadManifest, + saveManifest, +} from '../../lib/sync-manifest'; +import type { ProfileManager } from '../../lib/profile-manager'; +import type { RealmAuthenticator } from 
'../../lib/realm-authenticator'; +import { resolveRealmAuthenticator } from '../../lib/auth-resolver'; +import { resolveRealmSecretSeed } from '../../lib/prompt'; +import { + acquireSyncLock, + releaseSyncLock, + type SyncLockInfo, + type LockKind, +} from '../../lib/sync-lock'; +import { + FG_CYAN, + FG_GREEN, + FG_RED, + FG_YELLOW, + DIM, + RESET, +} from '../../lib/colors'; + +export interface TrackRealmSpec { + realmUrl: string; + localDir: string; +} + +interface FileState { + mtime: number; + size: number; +} + +interface PendingChange { + status: 'added' | 'modified' | 'deleted'; + mtime: number; + size: number; +} + +export interface TrackFlushResult { + added: string[]; + modified: string[]; + deleted: string[]; + pushed: string[]; + pushFailed: { path: string; reason: string }[]; + checkpoint: Checkpoint | null; +} + +/** + * Tracks a single localDir → realm pair: detects local FS changes via + * fs.watch + 2s polling, debounces, gates with content-hash, creates a + * local checkpoint, and (with --push) batch-uploads add/update changes + * to the realm via /_atomic. Deletions are recorded in the checkpoint + * but not pushed — server-side `op: 'remove'` is unimplemented and + * per-file DELETE was scoped out for this PR. See follow-up. 
+ */ +export class RealmTracker extends RealmSyncBase { + readonly name: string; + private readonly debounceMs: number; + private readonly minIntervalMs: number; + private readonly quiet: boolean; + private readonly verbose: boolean; + private readonly push: boolean; + private readonly checkpointManager: CheckpointManager; + private readonly fileStates = new Map(); + private readonly pendingChanges = new Map(); + private debounceTimer: NodeJS.Timeout | null = null; + private intervalTimer: NodeJS.Timeout | null = null; + private lastCheckpointTime = 0; + private fsWatcher: fsSync.FSWatcher | null = null; + private pollTimer: NodeJS.Timeout | null = null; + + constructor( + spec: TrackRealmSpec, + authenticator: RealmAuthenticator, + options: { + debounceMs: number; + minIntervalMs: number; + quiet: boolean; + verbose: boolean; + push: boolean; + }, + ) { + super({ realmUrl: spec.realmUrl, localDir: spec.localDir }, authenticator); + this.debounceMs = options.debounceMs; + this.minIntervalMs = options.minIntervalMs; + this.quiet = options.quiet; + this.verbose = options.verbose; + this.push = options.push; + this.checkpointManager = new CheckpointManager(spec.localDir); + this.name = deriveRealmName(this.normalizedRealmUrl); + } + + /** RealmSyncBase requires `sync()`. Single-pass scan-and-flush. */ + async sync(): Promise { + await this.scanForChanges(); + await this.flushPending(true); + } + + get localDir(): string { + return this.options.localDir; + } + + get realmUrl(): string { + return this.normalizedRealmUrl; + } + + get pendingCount(): number { + return this.pendingChanges.size; + } + + /** + * Pre-flight: + * 1. With --push: require a sync manifest pointing at this realm and + * smoke-test auth by listing the remote root. + * 2. Initialize the .boxel-history checkpoint repo if needed. + * 3. Seed `fileStates` from a recursive walk so the first poll only + * reports actual changes, not the initial inventory. 
+ */ + async initialize(): Promise { + if (this.push) { + const manifest = await loadManifest(this.options.localDir); + if (!manifest) { + throw new Error( + `--push requires a synced workspace. Run "boxel realm sync ${this.options.localDir} ${this.normalizedRealmUrl}" first.`, + ); + } + if (manifest.realmUrl !== this.normalizedRealmUrl) { + throw new Error( + `Manifest realm URL (${manifest.realmUrl}) does not match the target realm (${this.normalizedRealmUrl}). Re-sync to align.`, + ); + } + // Surface auth/network failures here, before we enter the loop — + // matches legacy track's startup JWT check. + await this.getRemoteFileList(''); + } + + if (!(await this.checkpointManager.isInitialized())) { + await this.checkpointManager.init(); + } + + await this.seedFileStates(this.options.localDir); + } + + private async seedFileStates(dir: string, prefix = ''): Promise { + let entries: import('fs').Dirent[]; + try { + entries = await fs.readdir(dir, { withFileTypes: true }); + } catch (err: any) { + if (err.code === 'ENOENT') return; + throw err; + } + for (const entry of entries) { + if (shouldSkipEntry(entry.name)) continue; + const full = path.join(dir, entry.name); + const rel = prefix ? `${prefix}/${entry.name}` : entry.name; + if (entry.isDirectory()) { + await this.seedFileStates(full, rel); + } else { + try { + const stats = await fs.stat(full); + this.fileStates.set(rel, { + mtime: stats.mtimeMs, + size: stats.size, + }); + } catch (err: any) { + if (err.code !== 'ENOENT') throw err; + } + } + } + } + + /** + * Walk localDir, diff against `fileStates`, and update + * `pendingChanges`. Returns true if at least one new pending entry + * was added or upgraded. 
+ */ + async scanForChanges(): Promise { + const current = new Map(); + await this.collectFiles(this.options.localDir, current); + + let hasNew = false; + + for (const [file, state] of current) { + if (isProtectedFile(file)) continue; + const prev = this.fileStates.get(file); + if (!prev) { + if (this.recordPending(file, { status: 'added', ...state })) { + hasNew = true; + } + } else if (state.mtime > prev.mtime || state.size !== prev.size) { + if (this.recordPending(file, { status: 'modified', ...state })) { + hasNew = true; + } + } + } + + for (const file of this.fileStates.keys()) { + if (isProtectedFile(file)) continue; + if (!current.has(file)) { + if ( + this.recordPending(file, { status: 'deleted', mtime: 0, size: 0 }) + ) { + hasNew = true; + } + } + } + + this.fileStates.clear(); + for (const [file, state] of current) { + this.fileStates.set(file, state); + } + + return hasNew; + } + + private async collectFiles( + dir: string, + accum: Map, + prefix = '', + ): Promise { + let entries: import('fs').Dirent[]; + try { + entries = await fs.readdir(dir, { withFileTypes: true }); + } catch (err: any) { + if (err.code === 'ENOENT') return; + throw err; + } + for (const entry of entries) { + if (shouldSkipEntry(entry.name)) continue; + const full = path.join(dir, entry.name); + const rel = prefix ? 
`${prefix}/${entry.name}` : entry.name; + if (entry.isDirectory()) { + await this.collectFiles(full, accum, rel); + } else { + try { + const stats = await fs.stat(full); + accum.set(rel, { mtime: stats.mtimeMs, size: stats.size }); + } catch (err: any) { + if (err.code !== 'ENOENT') throw err; + } + } + } + } + + private recordPending(file: string, change: PendingChange): boolean { + const existing = this.pendingChanges.get(file); + if ( + existing && + existing.status === change.status && + existing.mtime === change.mtime && + existing.size === change.size + ) { + return false; + } + this.pendingChanges.set(file, change); + return true; + } + + /** + * Schedule a debounced flush. Subsequent calls reset the timer so a + * burst of edits lands in a single checkpoint. + */ + scheduleFlush(onFlush?: (result: TrackFlushResult) => void): void { + if (this.debounceTimer) clearTimeout(this.debounceTimer); + this.debounceTimer = setTimeout(async () => { + this.debounceTimer = null; + try { + const result = await this.flushPending(); + if (result) onFlush?.(result); + } catch (err) { + console.error( + `${FG_RED}[${this.name}] flush error:${RESET}`, + err instanceof Error ? err.message : err, + ); + } + }, this.debounceMs); + } + + /** + * Apply pending changes: hash-gate, create a local checkpoint, and + * (with --push) batch-upload adds/updates. Honors the min-interval + * gate unless `force` is set (used on shutdown to flush before exit). + * Returns null if the call was deferred (waiting on min-interval) or + * if there's nothing pending. 
+ */ + async flushPending(force = false): Promise { + if (this.pendingChanges.size === 0) return null; + + const now = Date.now(); + const elapsed = now - this.lastCheckpointTime; + if (!force && elapsed < this.minIntervalMs) { + if (!this.intervalTimer) { + const wait = this.minIntervalMs - elapsed; + if (!this.quiet) { + console.log( + `${DIM}[${timestamp()}]${RESET} [${this.name}] ${FG_YELLOW}⏳ waiting ${Math.ceil( + wait / 1000, + )}s before next checkpoint${RESET}`, + ); + } + this.intervalTimer = setTimeout(async () => { + this.intervalTimer = null; + try { + await this.flushPending(); + } catch (err) { + console.error( + `${FG_RED}[${this.name}] flush error:${RESET}`, + err instanceof Error ? err.message : err, + ); + } + }, wait); + } + return null; + } + + // Snapshot then clear before any await — anything an interleaved + // scan() records during this flush rolls into the next pass. + const drained = new Map(this.pendingChanges); + this.pendingChanges.clear(); + + // Hash-gate: drop modified entries whose content hash matches the + // manifest. Stops editors that touch-but-don't-change from creating + // empty checkpoints. + const manifest = await loadManifest(this.options.localDir); + if (manifest && manifest.realmUrl === this.normalizedRealmUrl) { + for (const [file, change] of Array.from(drained.entries())) { + if (change.status !== 'modified') continue; + const prevHash = manifest.files[file]; + if (!prevHash) continue; + try { + const currHash = await computeFileHash( + path.join(this.options.localDir, file), + ); + if (currHash === prevHash) { + drained.delete(file); + } + } catch (err: any) { + // File vanished between scan and hash; reclassify as a delete + // and let the next pass handle it. 
+ if (err.code !== 'ENOENT') throw err; + drained.delete(file); + this.pendingChanges.set(file, { + status: 'deleted', + mtime: 0, + size: 0, + }); + } + } + } + + if (drained.size === 0) { + return { + added: [], + modified: [], + deleted: [], + pushed: [], + pushFailed: [], + checkpoint: null, + }; + } + + const added: string[] = []; + const modified: string[] = []; + const deleted: string[] = []; + const changes: CheckpointChange[] = []; + for (const [file, change] of drained) { + changes.push({ file, status: change.status }); + if (change.status === 'added') added.push(file); + else if (change.status === 'modified') modified.push(file); + else deleted.push(file); + } + + // Always checkpoint locally before any network I/O so a push + // failure never loses the local history record. + const checkpoint = await this.checkpointManager.createCheckpoint( + 'local', + changes, + ); + + this.lastCheckpointTime = Date.now(); + + let pushed: string[] = []; + let pushFailed: { path: string; reason: string }[] = []; + + if (this.push) { + const result = await this.pushDrained(added, modified, deleted); + pushed = result.pushed; + pushFailed = result.failed; + // Re-queue files whose push failed transiently so the next cycle + // retries them. + for (const fail of pushFailed) { + const status = added.includes(fail.path) ? 'added' : 'modified'; + try { + const stats = await fs.stat( + path.join(this.options.localDir, fail.path), + ); + this.pendingChanges.set(fail.path, { + status, + mtime: stats.mtimeMs, + size: stats.size, + }); + } catch (err: any) { + if (err.code !== 'ENOENT') throw err; + } + } + } else if (deleted.length > 0 && this.verbose) { + for (const file of deleted) { + console.log( + `${DIM}[${timestamp()}]${RESET} [${this.name}] [VERBOSE] Skipping delete on push (deferred): ${file}`, + ); + } + } + + return { added, modified, deleted, pushed, pushFailed, checkpoint }; + } + + /** + * Push add/update operations to /_atomic. Deletions are not pushed. 
+ * After a successful push the manifest is updated with fresh hashes + * and remoteMtimes so a later `realm pull` doesn't see drift. + */ + private async pushDrained( + added: string[], + modified: string[], + deleted: string[], + ): Promise<{ + pushed: string[]; + failed: { path: string; reason: string }[]; + }> { + if (deleted.length > 0 && this.verbose) { + for (const file of deleted) { + console.log( + `${DIM}[${timestamp()}]${RESET} [${this.name}] [VERBOSE] Skipping delete on push (deferred): ${file}`, + ); + } + } + + const allUploads = [...added, ...modified]; + if (allUploads.length === 0) { + return { pushed: [], failed: [] }; + } + + // Sort modules (.gts/.ts/.js) before instances (.json) so the + // atomic doc processes definitions before instances. uploadFilesAtomic + // iterates the Map in insertion order, so a sorted Map yields a + // sorted operations array. + const sorted = allUploads.sort((a, b) => { + const ka = pushOrderKey(a); + const kb = pushOrderKey(b); + if (ka !== kb) return ka - kb; + return a < b ? -1 : a > b ? 1 : 0; + }); + + const filesToUpload = new Map(); + for (const rel of sorted) { + filesToUpload.set(rel, path.join(this.options.localDir, rel)); + } + + // Mirror RealmPusher's add/update discrimination: use *intent*, not + // just remote presence. A file not in our manifest expresses "I'm + // adding this" — if the realm has a file at that href anyway, + // someone else created it concurrently and the atomic 409 surfaces + // the conflict instead of silently overwriting their changes. 
+ const manifest = await loadManifest(this.options.localDir); + const remoteFiles = await this.getRemoteFileList(''); + const addPaths = new Set(); + for (const rel of filesToUpload.keys()) { + const knownToManifest = manifest?.files[rel] !== undefined; + const knownMissingOnRemote = + knownToManifest && !remoteFiles.has(rel); + if (!knownToManifest || knownMissingOnRemote) { + addPaths.add(rel); + } + } + + const result = await this.uploadFilesAtomic(filesToUpload, addPaths); + + if (result.error) { + const failed = result.error.perFile.map((p) => ({ + path: p.path, + reason: `${p.status} ${p.title}`, + })); + console.error( + `${FG_RED}[${this.name}] push failed: ${result.error.message}${RESET}`, + ); + for (const entry of result.error.perFile) { + let hint: string; + if (entry.status === 409) { + hint = `${entry.path} was created on the realm concurrently — will retry on the next cycle.`; + } else if (entry.status === 404) { + hint = `${entry.path} was removed from the realm concurrently — will retry on the next cycle.`; + } else { + hint = `${entry.path}: ${entry.title}`; + } + console.error(` ${hint}`); + } + return { pushed: [], failed }; + } + + const succeeded = new Set(result.succeeded); + if (succeeded.size > 0) { + try { + await this.updateManifestForPush(succeeded); + } catch (err) { + console.error( + `${FG_RED}[${this.name}] manifest update failed:${RESET}`, + err instanceof Error ? err.message : err, + ); + } + } + + return { pushed: result.succeeded, failed: [] }; + } + + private async updateManifestForPush(succeeded: Set): Promise { + const prior = await loadManifest(this.options.localDir); + if (!prior) return; + const next: SyncManifest = { + realmUrl: this.normalizedRealmUrl, + files: { ...prior.files }, + remoteMtimes: { ...(prior.remoteMtimes ?? 
{}) }, + }; + for (const rel of succeeded) { + try { + next.files[rel] = await computeFileHash( + path.join(this.options.localDir, rel), + ); + } catch (err: any) { + if (err.code !== 'ENOENT') throw err; + } + } + try { + const fresh = await this.getRemoteMtimes(); + for (const rel of succeeded) { + const mtime = fresh.get(rel); + if (mtime !== undefined) { + next.remoteMtimes![rel] = mtime; + } + } + } catch { + // Best-effort; remote mtimes will refresh on the next pull. + } + if (Object.keys(next.remoteMtimes ?? {}).length === 0) { + delete next.remoteMtimes; + } + await saveManifest(this.options.localDir, next); + } + + /** + * Wire fs.watch (recursive on macOS/Windows; flat on Linux) plus a + * 2s safety poll. The poll catches editors whose write pattern + * (atomic-rename, etc.) doesn't reliably fire fs.watch. + */ + startWatching(onFlush: (result: TrackFlushResult) => void): void { + const isLinux = process.platform === 'linux'; + const watchOptions: fsSync.WatchOptions = isLinux + ? {} + : { recursive: true }; + + try { + this.fsWatcher = fsSync.watch( + this.options.localDir, + watchOptions, + (_eventType, filename) => { + if (!filename) return; + const name = + typeof filename === 'string' ? 
filename : filename.toString(); + const head = name.split(path.sep)[0]; + if (shouldSkipEntry(head)) return; + this.triggerScan(onFlush); + }, + ); + this.fsWatcher.on('error', (err) => { + if (!this.quiet) { + console.error( + `${FG_RED}[${this.name}] fs.watch error:${RESET}`, + err.message, + ); + } + }); + } catch { + if (!this.quiet) { + console.log( + `${DIM}[${timestamp()}]${RESET} [${this.name}] fs.watch unavailable; polling only`, + ); + } + } + + this.pollTimer = setInterval(() => { + this.triggerScan(onFlush); + }, 2000); + } + + private triggerScan(onFlush: (result: TrackFlushResult) => void): void { + void this.scanForChanges() + .then((hasNew) => { + if (hasNew) { + if (!this.quiet) { + console.log( + `${DIM}[${timestamp()}]${RESET} [${this.name}] ${FG_YELLOW}⚡ ${this.pendingCount} change(s) detected${RESET}`, + ); + } + this.scheduleFlush(onFlush); + } + }) + .catch((err) => + console.error( + `${FG_RED}[${this.name}] scan error:${RESET}`, + err instanceof Error ? err.message : err, + ), + ); + } + + shutdown(): void { + if (this.debounceTimer) { + clearTimeout(this.debounceTimer); + this.debounceTimer = null; + } + if (this.intervalTimer) { + clearTimeout(this.intervalTimer); + this.intervalTimer = null; + } + if (this.pollTimer) { + clearInterval(this.pollTimer); + this.pollTimer = null; + } + if (this.fsWatcher) { + this.fsWatcher.close(); + this.fsWatcher = null; + } + } +} + +function pushOrderKey(rel: string): number { + const ext = rel.toLowerCase().match(/\.[^.]+$/)?.[0] ?? 
''; + if (ext === '.gts' || ext === '.ts' || ext === '.js') return 0; + if (ext === '.json') return 2; + return 1; +} + +function shouldSkipEntry(name: string | undefined): boolean { + if (!name) return false; + if (name === '.git' || name === 'node_modules') return true; + if (name.startsWith('.boxel-')) return true; + if (name.startsWith('.') && name !== '.realm.json') return true; + return false; +} + +export interface TrackRealmsOptions { + intervalMs?: number; + debounceMs?: number; + quiet?: boolean; + verbose?: boolean; + push?: boolean; + profileManager?: ProfileManager; + realmSecretSeed?: string; + authenticator?: RealmAuthenticator; + signal?: AbortSignal; +} + +export interface TrackRealmsResult { + trackers: RealmTracker[]; + error?: string; +} + +const noAuthAuthenticator: RealmAuthenticator = { + async authedRealmFetch() { + throw new Error( + 'Network operation attempted on a tracker started without --push.', + ); + }, +}; + +/** + * Programmatic entry point. The CLI passes a single spec; the array + * shape lets tests / future multi-realm callers reuse the loop. With + * --push the authenticator is resolved once from `specs[0]` and shared + * across all trackers, so multi-realm callers must use realms that + * share a profile / secret seed. + */ +export async function trackRealms( + specs: TrackRealmSpec[], + options: TrackRealmsOptions = {}, +): Promise { + if (specs.length === 0) { + return { trackers: [], error: 'No realms provided to track.' }; + } + + const intervalMs = options.intervalMs ?? 10_000; + const debounceMs = options.debounceMs ?? 3_000; + const quiet = options.quiet ?? false; + const verbose = options.verbose ?? false; + const push = options.push ?? false; + + if (!Number.isFinite(intervalMs) || intervalMs <= 0) { + return { trackers: [], error: '`intervalMs` must be a positive number.' 
}; + } + if (!Number.isFinite(debounceMs) || debounceMs < 0) { + return { + trackers: [], + error: '`debounceMs` must be a non-negative number.', + }; + } + + let authenticator: RealmAuthenticator = noAuthAuthenticator; + if (push) { + if (options.authenticator) { + authenticator = options.authenticator; + } else { + const resolution = resolveRealmAuthenticator({ + realmUrl: specs[0].realmUrl, + realmSecretSeed: options.realmSecretSeed, + profileManager: options.profileManager, + }); + if (!resolution.ok) { + return { trackers: [], error: resolution.error }; + } + authenticator = resolution.authenticator; + } + } + + const lockedDirs: string[] = []; + for (const spec of specs) { + const result = await acquireSyncLock(spec.localDir, 'track', spec.realmUrl); + if (!result.ok) { + for (const dir of lockedDirs) await releaseSyncLock(dir, 'track'); + return { + trackers: [], + error: formatLockedError( + spec.localDir, + result.existing, + result.conflictKind, + ), + }; + } + if (result.staleOverwrote && !quiet) { + console.log( + `${DIM}[${timestamp()}] overwrote stale lock at ${spec.localDir}${RESET}`, + ); + } + lockedDirs.push(spec.localDir); + } + + const trackers: RealmTracker[] = []; + for (const spec of specs) { + const tracker = new RealmTracker(spec, authenticator, { + debounceMs, + minIntervalMs: intervalMs, + quiet, + verbose, + push, + }); + try { + await tracker.initialize(); + } catch (err) { + for (const t of trackers) t.shutdown(); + for (const dir of lockedDirs) await releaseSyncLock(dir, 'track'); + return { + trackers: [], + error: `Failed to initialize track on ${spec.realmUrl}: ${ + err instanceof Error ? err.message : String(err) + }`, + }; + } + trackers.push(tracker); + } + + if (!quiet) { + console.log( + `${FG_CYAN}⇆ Tracking ${trackers.length} realm${ + trackers.length > 1 ? 
's' : '' + }:${RESET}`, + ); + for (const t of trackers) { + console.log(` ${t.localDir} ${DIM}→${RESET} ${t.name}`); + } + console.log( + ` ${DIM}Debounce: ${debounceMs / 1000}s, Min interval: ${intervalMs / 1000}s${RESET}`, + ); + if (push) console.log(` ${DIM}Push: enabled${RESET}`); + if (verbose) console.log(` ${DIM}Verbose: enabled${RESET}`); + console.log(` ${DIM}Press Ctrl+C to stop${RESET}\n`); + } + + for (const tracker of trackers) { + tracker.startWatching((result) => { + if (!quiet) logFlush(tracker.name, result); + }); + } + + let stopped = false; + await new Promise((resolve) => { + let sigintHandler: (() => void) | null = null; + let sigtermHandler: (() => void) | null = null; + + const cleanup = async () => { + if (stopped) return; + stopped = true; + // Force-flush before shutdown so anything buffered lands in a final + // checkpoint even when we're under the min-interval gate. + for (const t of trackers) { + try { + await t.flushPending(true); + } catch (err) { + console.error( + `${FG_RED}[${t.name}] final flush error:${RESET}`, + err instanceof Error ? err.message : err, + ); + } + } + for (const t of trackers) t.shutdown(); + if (sigintHandler) process.off('SIGINT', sigintHandler); + if (sigtermHandler) process.off('SIGTERM', sigtermHandler); + for (const dir of lockedDirs) { + try { + await releaseSyncLock(dir, 'track'); + } catch { + // Best-effort — a leftover lock will be detected as stale next run. 
+ } + } + resolve(); + }; + + if (options.signal) { + if (options.signal.aborted) { + void cleanup(); + return; + } + options.signal.addEventListener('abort', () => void cleanup(), { + once: true, + }); + } else { + sigintHandler = () => { + if (!quiet) console.log(`\n${FG_CYAN}⇆ Tracking stopped${RESET}`); + void cleanup(); + }; + sigtermHandler = sigintHandler; + process.on('SIGINT', sigintHandler); + process.on('SIGTERM', sigtermHandler); + } + }); + + return { trackers }; +} + +function formatLockedError( + localDir: string, + info: SyncLockInfo, + conflictKind: LockKind, +): string { + if (conflictKind === 'watch') { + return ( + `A boxel realm watch process is already active for ${localDir} ` + + `(pid ${info.pid}, started ${info.startedAt}). Stop it before starting track — ` + + `running track and watch against the same directory creates a push/pull loop.` + ); + } + return ( + `A boxel realm track process is already active for ${localDir} ` + + `(pid ${info.pid}, started ${info.startedAt}). Stop it before starting ` + + `a new one, or remove ${path.join(localDir, '.boxel-track.lock')} if it's stale.` + ); +} + +function logFlush(name: string, result: TrackFlushResult): void { + if (result.checkpoint) { + const tag = result.checkpoint.isMajor ? 
'[MAJOR]' : '[minor]'; + console.log( + `${DIM}[${timestamp()}]${RESET} [${name}] ${FG_GREEN}checkpoint:${RESET} ${result.checkpoint.shortHash} ${tag} ${result.checkpoint.message}`, + ); + } + if (result.added.length || result.modified.length || result.deleted.length) { + const parts: string[] = []; + if (result.added.length) parts.push(`+${result.added.length}`); + if (result.modified.length) parts.push(`~${result.modified.length}`); + if (result.deleted.length) parts.push(`-${result.deleted.length}`); + console.log(` ${DIM}${parts.join(' ')}${RESET}`); + } + if (result.pushed.length) { + console.log( + ` ${FG_GREEN}✓ pushed ${result.pushed.length} file(s)${RESET}`, + ); + } + if (result.pushFailed.length) { + console.log( + ` ${FG_RED}✗ ${result.pushFailed.length} push failure(s) (will retry)${RESET}`, + ); + } +} + +function deriveRealmName(normalizedUrl: string): string { + const parts = normalizedUrl.replace(/\/$/, '').split('/'); + return parts.slice(-2).join('/'); +} + +function timestamp(): string { + return new Date().toLocaleTimeString(); +} + +function parsePositiveSeconds(name: string): (value: string) => number { + return (value: string) => { + const n = Number.parseFloat(value); + if (!Number.isFinite(n) || n <= 0) { + throw new InvalidArgumentError(`${name} must be a positive number.`); + } + return n; + }; +} + +function parseNonNegativeSeconds(name: string): (value: string) => number { + return (value: string) => { + const n = Number.parseFloat(value); + if (!Number.isFinite(n) || n < 0) { + throw new InvalidArgumentError(`${name} must be a non-negative number.`); + } + return n; + }; +} + +export function registerTrackCommand(realm: Command): void { + realm + .command('track') + .description( + 'Watch a local directory for changes, create checkpoints, and (with --push) sync them to a Boxel realm', + ) + .argument('<localDir>', 'The local directory to track') + .argument( + '<realmUrl>', + 'The URL of the realm this directory mirrors (used for --push and history 
attribution)', + ) + .option( + '-d, --debounce <seconds>', + 'Seconds to wait after a burst of edits before applying them', + parseNonNegativeSeconds('--debounce'), + 3, + ) + .option( + '-i, --interval <seconds>', + 'Minimum seconds between checkpoints', + parsePositiveSeconds('--interval'), + 10, + ) + .option('-q, --quiet', 'Only print on checkpoint creation') + .option( + '-p, --push', + 'Push add/update changes to the realm after each checkpoint', + ) + .option('-v, --verbose', 'Print detailed debug output') + .option( + '--realm-secret-seed', + 'Administrative auth: prompt for a realm secret seed and mint a JWT locally instead of using a Matrix profile (env: BOXEL_REALM_SECRET_SEED)', + ) + .action( + async ( + localDir: string, + realmUrl: string, + options: { + debounce: number; + interval: number; + quiet?: boolean; + push?: boolean; + verbose?: boolean; + realmSecretSeed?: boolean; + }, + ) => { + const realmSecretSeed = await resolveRealmSecretSeed( + options.realmSecretSeed === true, + ); + const result = await trackRealms([{ realmUrl, localDir }], { + intervalMs: options.interval * 1000, + debounceMs: options.debounce * 1000, + quiet: options.quiet, + verbose: options.verbose, + push: options.push, + realmSecretSeed, + }); + if (result.error) { + console.error(`${FG_RED}Error:${RESET} ${result.error}`); + process.exit(1); + } + }, + ); +} diff --git a/packages/boxel-cli/src/commands/realm/watch.ts b/packages/boxel-cli/src/commands/realm/watch.ts index 4e7fe4baea..7f0e1c0b0e 100644 --- a/packages/boxel-cli/src/commands/realm/watch.ts +++ b/packages/boxel-cli/src/commands/realm/watch.ts @@ -18,10 +18,11 @@ import type { RealmAuthenticator } from '../../lib/realm-authenticator'; import { resolveRealmAuthenticator } from '../../lib/auth-resolver'; import { resolveRealmSecretSeed } from '../../lib/prompt'; import { - acquireWatchLock, - releaseWatchLock, - type WatchLockInfo, -} from '../../lib/watch-lock'; + acquireSyncLock, + releaseSyncLock, + type SyncLockInfo, + type 
LockKind, +} from '../../lib/sync-lock'; import { FG_CYAN, FG_GREEN, @@ -376,12 +377,16 @@ export async function watchRealms( // failure rolls back all earlier locks rather than leaving them dangling. const lockedDirs: string[] = []; for (const spec of specs) { - const result = await acquireWatchLock(spec.localDir, spec.realmUrl); + const result = await acquireSyncLock(spec.localDir, 'watch', spec.realmUrl); if (!result.ok) { - for (const dir of lockedDirs) await releaseWatchLock(dir); + for (const dir of lockedDirs) await releaseSyncLock(dir, 'watch'); return { watchers: [], - error: formatLockedError(spec.localDir, result.existing), + error: formatLockedError( + spec.localDir, + result.existing, + result.conflictKind, + ), }; } if (result.staleOverwrote && !quiet) { @@ -401,7 +406,7 @@ export async function watchRealms( await watcher.initialize(); } catch (err) { for (const w of watchers) w.shutdown(); - for (const dir of lockedDirs) await releaseWatchLock(dir); + for (const dir of lockedDirs) await releaseSyncLock(dir, 'watch'); return { watchers: [], error: `Failed to initialize watch on ${spec.realmUrl}: ${ @@ -483,7 +488,7 @@ export async function watchRealms( if (sigtermHandler) process.off('SIGTERM', sigtermHandler); for (const dir of lockedDirs) { try { - await releaseWatchLock(dir); + await releaseSyncLock(dir, 'watch'); } catch { // Best effort — a leftover lock will be detected as stale next run. } @@ -513,7 +518,18 @@ export async function watchRealms( return { watchers }; } -function formatLockedError(localDir: string, info: WatchLockInfo): string { +function formatLockedError( + localDir: string, + info: SyncLockInfo, + conflictKind: LockKind, +): string { + if (conflictKind === 'track') { + return ( + `A boxel realm track process is already active for ${localDir} ` + + `(pid ${info.pid}, started ${info.startedAt}). 
Stop it before starting watch — ` + + `running track and watch against the same directory creates a push/pull loop.` + ); + } return ( `A boxel realm watch process is already active for ${localDir} ` + `(pid ${info.pid}, started ${info.startedAt}). Stop it before starting ` + diff --git a/packages/boxel-cli/src/lib/sync-lock.ts b/packages/boxel-cli/src/lib/sync-lock.ts new file mode 100644 index 0000000000..e146312f39 --- /dev/null +++ b/packages/boxel-cli/src/lib/sync-lock.ts @@ -0,0 +1,113 @@ +import * as fs from 'fs/promises'; +import * as path from 'path'; + +const LOCK_FILES = { + watch: '.boxel-watch.lock', + track: '.boxel-track.lock', +} as const; + +export type LockKind = keyof typeof LOCK_FILES; + +export interface SyncLockInfo { + pid: number; + startedAt: string; + realmUrl: string; +} + +export type SyncLockResult = + | { ok: true; staleOverwrote: boolean } + | { ok: false; conflictKind: LockKind; existing: SyncLockInfo }; + +function lockPath(localDir: string, kind: LockKind): string { + return path.join(localDir, LOCK_FILES[kind]); +} + +function isProcessAlive(pid: number): boolean { + try { + process.kill(pid, 0); + return true; + } catch (err: any) { + // EPERM means the process exists but we can't signal it — still alive. + return err?.code === 'EPERM'; + } +} + +async function readLock( + localDir: string, + kind: LockKind, +): Promise<SyncLockInfo | null> { + try { + const raw = await fs.readFile(lockPath(localDir, kind), 'utf8'); + const parsed = JSON.parse(raw) as Partial<SyncLockInfo>; + if ( + typeof parsed.pid !== 'number' || + typeof parsed.startedAt !== 'string' || + typeof parsed.realmUrl !== 'string' + ) { + return null; + } + return parsed as SyncLockInfo; + } catch { + return null; + } +} + +/** + * Acquire a sync-lock of `kind` against `localDir`. 
Refuses if either + * - the same kind is already held by a live process, or + * - the *other* kind is held by a live process — running `boxel realm + * track` and `boxel realm watch` against the same dir would create a + * push/pull loop. + * A stale same-kind lock (dead pid) is overwritten. A stale other-kind + * lock is left in place — its owner will overwrite it on next run. + */ +export async function acquireSyncLock( + localDir: string, + kind: LockKind, + realmUrl: string, +): Promise<SyncLockResult> { + await fs.mkdir(localDir, { recursive: true }); + + const otherKind: LockKind = kind === 'watch' ? 'track' : 'watch'; + const otherLock = await readLock(localDir, otherKind); + if (otherLock && isProcessAlive(otherLock.pid)) { + return { ok: false, conflictKind: otherKind, existing: otherLock }; + } + + const existing = await readLock(localDir, kind); + let staleOverwrote = false; + if (existing && isProcessAlive(existing.pid)) { + return { ok: false, conflictKind: kind, existing }; + } + if (existing) { + staleOverwrote = true; + } + const info: SyncLockInfo = { + pid: process.pid, + startedAt: new Date().toISOString(), + realmUrl, + }; + await fs.writeFile( + lockPath(localDir, kind), + JSON.stringify(info, null, 2) + '\n', + ); + return { ok: true, staleOverwrote }; +} + +export async function releaseSyncLock( + localDir: string, + kind: LockKind, +): Promise<void> { + try { + await fs.unlink(lockPath(localDir, kind)); + } catch (err: any) { + if (err?.code !== 'ENOENT') throw err; + } +} + +export async function readSyncLock( + localDir: string, + kind: LockKind, +): Promise<SyncLockInfo | null> { + return readLock(localDir, kind); +} diff --git a/packages/boxel-cli/src/lib/watch-lock.ts b/packages/boxel-cli/src/lib/watch-lock.ts deleted file mode 100644 index d29b6ade17..0000000000 --- a/packages/boxel-cli/src/lib/watch-lock.ts +++ /dev/null @@ -1,81 +0,0 @@ -import * as fs from 'fs/promises'; -import * as path from 'path'; - -const LOCK_FILE = '.boxel-watch.lock'; - -export interface 
WatchLockInfo { - pid: number; - startedAt: string; - realmUrl: string; -} - -export type WatchLockResult = - | { ok: true; staleOverwrote: boolean } - | { ok: false; existing: WatchLockInfo }; - -function lockPath(localDir: string): string { - return path.join(localDir, LOCK_FILE); -} - -function isProcessAlive(pid: number): boolean { - try { - process.kill(pid, 0); - return true; - } catch (err: any) { - // EPERM means the process exists but we can't signal it — still alive. - return err?.code === 'EPERM'; - } -} - -async function readLock(localDir: string): Promise<WatchLockInfo | null> { - try { - const raw = await fs.readFile(lockPath(localDir), 'utf8'); - const parsed = JSON.parse(raw) as Partial<WatchLockInfo>; - if ( - typeof parsed.pid !== 'number' || - typeof parsed.startedAt !== 'string' || - typeof parsed.realmUrl !== 'string' - ) { - return null; - } - return parsed as WatchLockInfo; - } catch { - return null; - } -} - -export async function acquireWatchLock( - localDir: string, - realmUrl: string, -): Promise<WatchLockResult> { - await fs.mkdir(localDir, { recursive: true }); - const existing = await readLock(localDir); - let staleOverwrote = false; - if (existing && isProcessAlive(existing.pid)) { - return { ok: false, existing }; - } - if (existing) { - staleOverwrote = true; - } - const info: WatchLockInfo = { - pid: process.pid, - startedAt: new Date().toISOString(), - realmUrl, - }; - await fs.writeFile(lockPath(localDir), JSON.stringify(info, null, 2) + '\n'); - return { ok: true, staleOverwrote }; -} - -export async function releaseWatchLock(localDir: string): Promise<void> { - try { - await fs.unlink(lockPath(localDir)); - } catch (err: any) { - if (err?.code !== 'ENOENT') throw err; - } -} - -export async function readWatchLock( - localDir: string, -): Promise<WatchLockInfo | null> { - return readLock(localDir); -} diff --git a/packages/boxel-cli/tests/integration/realm-track.test.ts b/packages/boxel-cli/tests/integration/realm-track.test.ts new file mode 100644 index 0000000000..416c5cabff --- /dev/null +++ 
b/packages/boxel-cli/tests/integration/realm-track.test.ts @@ -0,0 +1,636 @@ +import '../helpers/setup-realm-server'; +import { describe, it, expect, beforeAll, afterAll } from 'vitest'; +import * as fs from 'fs'; +import * as os from 'os'; +import * as path from 'path'; +import { RealmTracker, trackRealms } from '../../src/commands/realm/track'; +import { CheckpointManager } from '../../src/lib/checkpoint-manager'; +import type { ProfileManager } from '../../src/lib/profile-manager'; +import { createRealm } from '../../src/commands/realm/create'; +import { + startTestRealmServer, + stopTestRealmServer, + createTestProfileDir, + setupTestProfile, + uniqueRealmName, +} from '../helpers/integration'; + +let profileManager: ProfileManager; +let cleanupProfile: () => void; +const localDirs: string[] = []; + +function makeLocalDir(): string { + let dir = fs.mkdtempSync(path.join(os.tmpdir(), 'boxel-track-int-')); + localDirs.push(dir); + return dir; +} + +function writeLocal(localDir: string, relPath: string, content: string): void { + let full = path.join(localDir, relPath); + fs.mkdirSync(path.dirname(full), { recursive: true }); + fs.writeFileSync(full, content); +} + +function deleteLocal(localDir: string, relPath: string): void { + fs.unlinkSync(path.join(localDir, relPath)); +} + +function buildFileUrl(realmUrl: string, relPath: string): string { + let base = realmUrl.endsWith('/') ? 
realmUrl : `${realmUrl}/`; + return `${base}${relPath.replace(/^\/+/, '')}`; +} + +async function remoteFileExists( + realmUrl: string, + relPath: string, +): Promise<boolean> { + let response = await profileManager.authedRealmFetch( + buildFileUrl(realmUrl, relPath), + { headers: { Accept: 'application/vnd.card+source' } }, + ); + return response.ok; +} + +async function fetchRemoteFile( + realmUrl: string, + relPath: string, +): Promise<string> { + let response = await profileManager.authedRealmFetch( + buildFileUrl(realmUrl, relPath), + { headers: { Accept: 'application/vnd.card+source' } }, + ); + if (!response.ok) { + throw new Error( + `fetchRemoteFile ${relPath} failed: ${response.status} ${response.statusText}`, + ); + } + return response.text(); +} + +async function writeRemoteFile( + realmUrl: string, + relPath: string, + content: string, +): Promise<void> { + let response = await profileManager.authedRealmFetch( + buildFileUrl(realmUrl, relPath), + { + method: 'POST', + headers: { + 'Content-Type': 'text/plain;charset=UTF-8', + Accept: 'application/vnd.card+source', + }, + body: content, + }, + ); + if (!response.ok) { + throw new Error( + `writeRemoteFile ${relPath} failed: ${response.status} ${response.statusText}`, + ); + } +} + +function seedManifest( + localDir: string, + realmUrl: string, + files: Record<string, string> = {}, +): void { + fs.writeFileSync( + path.join(localDir, '.boxel-sync.json'), + JSON.stringify({ realmUrl, files, remoteMtimes: {} }, null, 2), + ); +} + +async function createTestRealm(): Promise<string> { + let name = uniqueRealmName(); + await createRealm(name, `Test ${name}`, { profileManager }); + let realmTokens = + profileManager.getActiveProfile()!.profile.realmTokens ?? 
{}; + let entry = Object.entries(realmTokens).find(([url]) => url.includes(name)); + if (!entry) { + throw new Error(`No realm JWT stored for ${name}`); + } + return entry[0]; +} + +function sleep(ms: number): Promise<void> { + return new Promise((resolve) => setTimeout(resolve, ms)); +} + +beforeAll(async () => { + await startTestRealmServer(); + let testProfile = createTestProfileDir(); + profileManager = testProfile.profileManager; + cleanupProfile = testProfile.cleanup; + await setupTestProfile(profileManager); +}); + +afterAll(async () => { + for (let dir of localDirs) { + fs.rmSync(dir, { recursive: true, force: true }); + } + cleanupProfile?.(); + await stopTestRealmServer(); +}); + +describe('realm track (integration) — local behavior', () => { + it('detects an added file and writes a local checkpoint', async () => { + let localDir = makeLocalDir(); + let realmUrl = 'https://example.test/track-add/'; + + let tracker = new RealmTracker({ realmUrl, localDir }, profileManager, { + debounceMs: 0, + minIntervalMs: 0, + quiet: true, + verbose: false, + push: false, + }); + await tracker.initialize(); + + writeLocal(localDir, 'cards/foo.gts', 'export const foo = 1;\n'); + + let hasNew = await tracker.scanForChanges(); + expect(hasNew).toBe(true); + expect(tracker.pendingCount).toBe(1); + + let result = await tracker.flushPending(true); + expect(result).not.toBeNull(); + expect(result!.added).toEqual(['cards/foo.gts']); + expect(result!.modified).toEqual([]); + expect(result!.deleted).toEqual([]); + expect(result!.checkpoint).not.toBeNull(); + expect(result!.checkpoint!.source).toBe('local'); + + tracker.shutdown(); + }); + + it('detects a modification and writes a local checkpoint', async () => { + let localDir = makeLocalDir(); + let realmUrl = 'https://example.test/track-mod/'; + writeLocal(localDir, 'a.gts', 'export const a = 1;\n'); + + let tracker = new RealmTracker({ realmUrl, localDir }, profileManager, { + debounceMs: 0, + minIntervalMs: 0, + quiet: true, + 
verbose: false, + push: false, + }); + await tracker.initialize(); + + // Realm mtimes are second-precision; wait so the next write bumps it. + await sleep(1100); + writeLocal(localDir, 'a.gts', 'export const a = 2;\n'); + + let hasNew = await tracker.scanForChanges(); + expect(hasNew).toBe(true); + let result = await tracker.flushPending(true); + expect(result!.modified).toEqual(['a.gts']); + expect(result!.added).toEqual([]); + + tracker.shutdown(); + }); + + it('detects a deletion and writes a local checkpoint', async () => { + let localDir = makeLocalDir(); + let realmUrl = 'https://example.test/track-del/'; + + let tracker = new RealmTracker({ realmUrl, localDir }, profileManager, { + debounceMs: 0, + minIntervalMs: 0, + quiet: true, + verbose: false, + push: false, + }); + await tracker.initialize(); + + // Baseline: tracker must have already checkpointed the file at least + // once before the delete; otherwise .boxel-history has nothing to + // diff against and createCheckpoint returns null. This mirrors the + // real-world flow (file edited or added under track's watch) and + // matches the watch test's pre-flush pattern. 
+ writeLocal(localDir, 'doomed.gts', 'export const x = 1;\n'); + await tracker.scanForChanges(); + await tracker.flushPending(true); + + deleteLocal(localDir, 'doomed.gts'); + await tracker.scanForChanges(); + let result = await tracker.flushPending(true); + expect(result!.deleted).toEqual(['doomed.gts']); + expect(result!.checkpoint).not.toBeNull(); + + tracker.shutdown(); + }); + + it('coalesces a burst of edits into one debounced checkpoint', async () => { + let localDir = makeLocalDir(); + let realmUrl = 'https://example.test/track-burst/'; + + let tracker = new RealmTracker({ realmUrl, localDir }, profileManager, { + debounceMs: 75, + minIntervalMs: 0, + quiet: true, + verbose: false, + push: false, + }); + await tracker.initialize(); + + let flushes: Array<{ added: string[]; modified: string[] }> = []; + let flushSettled = new Promise<void>((resolve) => { + let onFlush = (result: { added: string[]; modified: string[] }) => { + flushes.push(result); + resolve(); + }; + + (async () => { + writeLocal(localDir, 'b1.gts', '1'); + await tracker.scanForChanges(); + tracker.scheduleFlush(onFlush); + + writeLocal(localDir, 'b2.gts', '2'); + await tracker.scanForChanges(); + tracker.scheduleFlush(onFlush); + })(); + }); + + await flushSettled; + await sleep(40); + + expect(flushes.length).toBe(1); + expect(flushes[0].added.sort()).toEqual(['b1.gts', 'b2.gts']); + + tracker.shutdown(); + }); + + it('defers a second batch when min-interval has not elapsed', async () => { + let localDir = makeLocalDir(); + let realmUrl = 'https://example.test/track-int/'; + + let tracker = new RealmTracker({ realmUrl, localDir }, profileManager, { + debounceMs: 0, + minIntervalMs: 200, + quiet: true, + verbose: false, + push: false, + }); + await tracker.initialize(); + + writeLocal(localDir, 'first.gts', '1'); + await tracker.scanForChanges(); + let r1 = await tracker.flushPending(); + expect(r1!.added).toEqual(['first.gts']); + + // Second batch within minIntervalMs — should be deferred. 
+ writeLocal(localDir, 'second.gts', '2'); + await tracker.scanForChanges(); + let r2 = await tracker.flushPending(); + expect(r2).toBeNull(); + // Pending entry stays buffered until the interval timer fires. + expect(tracker.pendingCount).toBe(1); + + // Wait past the min interval; the interval timer should drain it. + await sleep(300); + expect(tracker.pendingCount).toBe(0); + + tracker.shutdown(); + }); + + it('hash-gates a noop modify when the manifest has the same hash', async () => { + let localDir = makeLocalDir(); + let realmUrl = 'https://example.test/track-hash/'; + let content = 'export const noop = 1;\n'; + writeLocal(localDir, 'noop.gts', content); + + // md5 of `content`. + let crypto = await import('crypto'); + let hash = crypto.createHash('md5').update(content).digest('hex'); + seedManifest(localDir, realmUrl, { 'noop.gts': hash }); + + let tracker = new RealmTracker({ realmUrl, localDir }, profileManager, { + debounceMs: 0, + minIntervalMs: 0, + quiet: true, + verbose: false, + push: false, + }); + await tracker.initialize(); + + // Touch (rewrite identical content) so mtime/size diff is recorded. + await sleep(1100); + writeLocal(localDir, 'noop.gts', content); + + await tracker.scanForChanges(); + let result = await tracker.flushPending(true); + expect(result).not.toBeNull(); + // Hash gate dropped the modify; nothing to checkpoint. 
+ expect(result!.modified).toEqual([]); + expect(result!.added).toEqual([]); + expect(result!.checkpoint).toBeNull(); + + tracker.shutdown(); + }); +}); + +describe('realm track (integration) — --push', () => { + it('uploads adds and updates via /_atomic, then updates the manifest', async () => { + let realmUrl = await createTestRealm(); + let localDir = makeLocalDir(); + seedManifest(localDir, realmUrl); + + let tracker = new RealmTracker({ realmUrl, localDir }, profileManager, { + debounceMs: 0, + minIntervalMs: 0, + quiet: true, + verbose: false, + push: true, + }); + await tracker.initialize(); + // Write AFTER init so seedFileStates doesn't capture it as already-known. + writeLocal(localDir, 'thing.gts', 'export const t = 1;\n'); + await tracker.scanForChanges(); + let result = await tracker.flushPending(true); + + expect(result!.added).toEqual(['thing.gts']); + expect(result!.pushed).toEqual(['thing.gts']); + expect(result!.pushFailed).toEqual([]); + expect(await remoteFileExists(realmUrl, 'thing.gts')).toBe(true); + expect(await fetchRemoteFile(realmUrl, 'thing.gts')).toContain('t = 1'); + + let manifest = JSON.parse( + fs.readFileSync(path.join(localDir, '.boxel-sync.json'), 'utf8'), + ); + expect(manifest.files['thing.gts']).toBeTypeOf('string'); + expect(manifest.files['thing.gts'].length).toBeGreaterThan(0); + + tracker.shutdown(); + }); + + it('orders .gts modules before .json instances inside the atomic POST', async () => { + let realmUrl = await createTestRealm(); + let localDir = makeLocalDir(); + seedManifest(localDir, realmUrl); + + // Spy on authedRealmFetch to capture the atomic POST body. + let capturedBody: string | null = null; + let originalFetch = profileManager.authedRealmFetch.bind(profileManager); + profileManager.authedRealmFetch = async (input, init) => { + let urlString = + typeof input === 'string' + ? input + : input instanceof URL + ? 
input.href + : input.url; + if (urlString.endsWith('_atomic') && init?.body) { + capturedBody = init.body as string; + } + return originalFetch(input, init); + }; + + try { + let tracker = new RealmTracker({ realmUrl, localDir }, profileManager, { + debounceMs: 0, + minIntervalMs: 0, + quiet: true, + verbose: false, + push: true, + }); + await tracker.initialize(); + // Write AFTER init so seedFileStates doesn't pre-capture the files. + writeLocal( + localDir, + 'cards/Person/Person.gts', + 'export const p = 1;\n', + ); + writeLocal(localDir, 'cards/Person/instance-1.json', '{"x":1}\n'); + await tracker.scanForChanges(); + await tracker.flushPending(true); + tracker.shutdown(); + } finally { + profileManager.authedRealmFetch = originalFetch; + } + + expect(capturedBody).not.toBeNull(); + let parsed = JSON.parse(capturedBody!); + let ops: Array<{ op: string; href: string }> = parsed['atomic:operations']; + let gtsIdx = ops.findIndex((o) => o.href.endsWith('Person.gts')); + let jsonIdx = ops.findIndex((o) => o.href.endsWith('instance-1.json')); + expect(gtsIdx).toBeGreaterThanOrEqual(0); + expect(jsonIdx).toBeGreaterThanOrEqual(0); + expect(gtsIdx).toBeLessThan(jsonIdx); + }); + + it('skips deletions on push, recording them in the local checkpoint only', async () => { + let realmUrl = await createTestRealm(); + let localDir = makeLocalDir(); + seedManifest(localDir, realmUrl); + + let tracker = new RealmTracker({ realmUrl, localDir }, profileManager, { + debounceMs: 0, + minIntervalMs: 0, + quiet: true, + verbose: false, + push: true, + }); + await tracker.initialize(); + + // Establish the file via a normal track add+push cycle so manifest, + // remote, and .boxel-history all agree on its existence. 
+ writeLocal(localDir, 'persistent.gts', 'export const x = 1;\n'); + await tracker.scanForChanges(); + await tracker.flushPending(true); + expect(await remoteFileExists(realmUrl, 'persistent.gts')).toBe(true); + + deleteLocal(localDir, 'persistent.gts'); + await tracker.scanForChanges(); + let result = await tracker.flushPending(true); + + expect(result!.deleted).toEqual(['persistent.gts']); + expect(result!.pushed).toEqual([]); + expect(result!.pushFailed).toEqual([]); + expect(result!.checkpoint).not.toBeNull(); + // Remote file untouched — deferred deletion semantics. + expect(await remoteFileExists(realmUrl, 'persistent.gts')).toBe(true); + + tracker.shutdown(); + }); + + it('fails fast when --push is enabled but no manifest exists', async () => { + let localDir = makeLocalDir(); + let realmUrl = 'https://example.test/no-manifest/'; + + let tracker = new RealmTracker({ realmUrl, localDir }, profileManager, { + debounceMs: 0, + minIntervalMs: 0, + quiet: true, + verbose: false, + push: true, + }); + await expect(tracker.initialize()).rejects.toThrow( + /requires a synced workspace/, + ); + + tracker.shutdown(); + }); + + it('retains entries whose push fails (e.g. concurrent 409) for the next cycle', async () => { + let realmUrl = await createTestRealm(); + let localDir = makeLocalDir(); + seedManifest(localDir, realmUrl); + + // Pre-create the file on the realm so an 'add' op gets a 409. Since + // the manifest is empty, our addPaths logic will use op:add, but the + // server already has the resource → 409. + await writeRemoteFile(realmUrl, 'race.gts', 'export const r = 1;\n'); + + let tracker = new RealmTracker({ realmUrl, localDir }, profileManager, { + debounceMs: 0, + minIntervalMs: 0, + quiet: true, + verbose: false, + push: true, + }); + await tracker.initialize(); + // Write AFTER init so scanForChanges sees the file as new. 
+ writeLocal(localDir, 'race.gts', 'export const r = 2;\n'); + await tracker.scanForChanges(); + let result = await tracker.flushPending(true); + + // Push failed; entry is retained for retry. + expect(result!.pushFailed.length).toBe(1); + expect(result!.pushFailed[0].path).toBe('race.gts'); + expect(tracker.pendingCount).toBe(1); + + tracker.shutdown(); + }); +}); + +describe('realm track (integration) — locks and orchestration', () => { + it('blocks a second concurrent track against the same localDir', async () => { + let localDir = makeLocalDir(); + let realmUrl = 'https://example.test/lock-self/'; + + let firstController = new AbortController(); + let firstRun = trackRealms([{ realmUrl, localDir }], { + debounceMs: 25, + intervalMs: 1000, + quiet: true, + push: false, + signal: firstController.signal, + }); + + await sleep(150); + let lockPath = path.join(localDir, '.boxel-track.lock'); + expect(fs.existsSync(lockPath)).toBe(true); + + let second = await trackRealms([{ realmUrl, localDir }], { + debounceMs: 25, + intervalMs: 1000, + quiet: true, + push: false, + }); + expect(second.error).toBeDefined(); + expect(second.error).toContain(`pid ${process.pid}`); + expect(second.trackers).toEqual([]); + + firstController.abort(); + await firstRun; + expect(fs.existsSync(lockPath)).toBe(false); + }); + + it('refuses to start when a live watch lock exists at the same localDir', async () => { + let localDir = makeLocalDir(); + let realmUrl = 'https://example.test/lock-cross/'; + fs.mkdirSync(localDir, { recursive: true }); + let watchLockPath = path.join(localDir, '.boxel-watch.lock'); + fs.writeFileSync( + watchLockPath, + JSON.stringify({ + pid: process.pid, + startedAt: new Date().toISOString(), + realmUrl, + }), + ); + + let result = await trackRealms([{ realmUrl, localDir }], { + debounceMs: 25, + intervalMs: 1000, + quiet: true, + push: false, + }); + expect(result.error).toBeDefined(); + expect(result.error).toContain('boxel realm watch'); + 
expect(result.error).toContain(`pid ${process.pid}`); + expect(result.trackers).toEqual([]); + // Track refused — must not have written its own lock or touched the + // watch lock. + expect(fs.existsSync(path.join(localDir, '.boxel-track.lock'))).toBe(false); + expect(fs.existsSync(watchLockPath)).toBe(true); + fs.rmSync(watchLockPath); + }); + + it('overwrites a stale track lock from a process that no longer exists', async () => { + let localDir = makeLocalDir(); + let realmUrl = 'https://example.test/lock-stale/'; + let lockPath = path.join(localDir, '.boxel-track.lock'); + fs.mkdirSync(localDir, { recursive: true }); + fs.writeFileSync( + lockPath, + JSON.stringify({ + pid: 999_999_999, + startedAt: '2020-01-01T00:00:00.000Z', + realmUrl, + }), + ); + + let controller = new AbortController(); + let run = trackRealms([{ realmUrl, localDir }], { + debounceMs: 25, + intervalMs: 1000, + quiet: true, + push: false, + signal: controller.signal, + }); + + await sleep(150); + let parsed = JSON.parse(fs.readFileSync(lockPath, 'utf8')); + expect(parsed.pid).toBe(process.pid); + + controller.abort(); + let result = await run; + expect(result.error).toBeUndefined(); + expect(fs.existsSync(lockPath)).toBe(false); + }); + + it('flushes pending changes before exit when the abort signal fires', async () => { + let localDir = makeLocalDir(); + let realmUrl = 'https://example.test/abort-flush/'; + + let controller = new AbortController(); + let run = trackRealms([{ realmUrl, localDir }], { + debounceMs: 0, + intervalMs: 5000, // long min-interval — exit must force-flush past it + quiet: true, + push: false, + signal: controller.signal, + }); + + await sleep(150); + writeLocal(localDir, 'last-minute.gts', '1'); + // Give the poll loop one tick to detect. + await sleep(2200); + + controller.abort(); + let result = await run; + expect(result.error).toBeUndefined(); + + // Final force-flush should have written a checkpoint covering the file. 
+ let checkpoints = await new CheckpointManager(localDir).getCheckpoints(); + expect( + checkpoints.some((c) => + c.message.toLowerCase().includes('last-minute'), + ) || + // Some checkpoint message conventions just say "X file added". + checkpoints.length >= 1, + ).toBe(true); + }); +}); diff --git a/packages/boxel-cli/tests/integration/realm-watch.test.ts b/packages/boxel-cli/tests/integration/realm-watch.test.ts index 814310df6a..c80083111a 100644 --- a/packages/boxel-cli/tests/integration/realm-watch.test.ts +++ b/packages/boxel-cli/tests/integration/realm-watch.test.ts @@ -383,6 +383,69 @@ describe('realm watch (integration)', () => { expect(fs.existsSync(lockPath)).toBe(false); }); + it('refuses when a live track lock exists at the same localDir', async () => { + let localDir = makeLocalDir(); + fs.mkdirSync(localDir, { recursive: true }); + let trackLockPath = path.join(localDir, '.boxel-track.lock'); + fs.writeFileSync( + trackLockPath, + JSON.stringify({ + pid: process.pid, + startedAt: new Date().toISOString(), + realmUrl, + }), + ); + + let result = await watchRealms([{ realmUrl, localDir }], { + profileManager, + intervalMs: 1000, + debounceMs: 25, + quiet: true, + }); + expect(result.error).toBeDefined(); + expect(result.error).toContain('boxel realm track'); + expect(result.error).toContain(`pid ${process.pid}`); + expect(result.watchers).toEqual([]); + expect(fs.existsSync(path.join(localDir, '.boxel-watch.lock'))).toBe(false); + expect(fs.existsSync(trackLockPath)).toBe(true); + fs.rmSync(trackLockPath); + }); + + it('ignores a stale track lock from a dead pid and proceeds', async () => { + let localDir = makeLocalDir(); + fs.mkdirSync(localDir, { recursive: true }); + let trackLockPath = path.join(localDir, '.boxel-track.lock'); + fs.writeFileSync( + trackLockPath, + JSON.stringify({ + pid: 999_999_999, + startedAt: '2020-01-01T00:00:00.000Z', + realmUrl, + }), + ); + + let controller = new AbortController(); + let run = watchRealms([{ realmUrl, 
localDir }], { + profileManager, + intervalMs: 1000, + debounceMs: 25, + quiet: true, + signal: controller.signal, + }); + + await sleep(150); + expect(fs.existsSync(path.join(localDir, '.boxel-watch.lock'))).toBe(true); + + controller.abort(); + let result = await run; + expect(result.error).toBeUndefined(); + + // Watch doesn't own the track lock — leave the stale file in place; + // track itself will overwrite it on next run. + expect(fs.existsSync(trackLockPath)).toBe(true); + fs.rmSync(trackLockPath); + }); + it('downgrades a pending modify to a delete when the remote file disappears', async () => { let localDir = makeLocalDir(); await writeRemoteFile(realmUrl, 'flip.gts', 'export const x = 1;\n');