molecule-core/scripts
Hongming Wang 8bf29b7d0e fix(sweep-cf-tunnels): parallelize deletes + raise workflow timeout
The hourly Sweep stale Cloudflare Tunnels job got cancelled mid-cleanup
on 2026-05-02 (run 25248788312, killed at 5min after deleting 424/672
stale tunnels). A second manual dispatch finished the remaining 254
fine, so the immediate backlog cleared, but two underlying bugs would
re-trip on the next big cleanup.

Bug 1: serial delete loop. The execute branch was a `while read; do
curl -X DELETE; done` pipeline at ~0.7s/tunnel — fine for the
steady-state cleanup of a handful, but a 600+ backlog needs ~7-8min.
This commit fans out to $SWEEP_CONCURRENCY (default 8) workers via
`xargs -P "$SWEEP_CONCURRENCY" -I {} bash -c '...' _ {} < "$DELETE_PLAN"`.
(No `-L 1`: `-I` already implies one input line per invocation, and GNU
xargs warns that the two options are mutually exclusive.) With 8x
parallelism the same 600+ list drains in ~60s. Notes:

  - We use stdin (`<`) not GNU's `xargs -a FILE` so the script stays
    portable to BSD xargs (matters for local-runner testing on macOS).
  - We pass ONLY the tunnel id on argv. xargs tokenizes on whitespace
    by default; tab-separating id+name on argv risks mangling. The
    name is kept in a side-channel id->name map ($NAME_MAP) and looked
    up by the worker only on failure, for FAIL_LOG readability.
  - Workers print exactly `OK` or `FAIL` on stdout; tally with
    `grep -c '^OK$' / '^FAIL$'`.
  - On non-zero FAILED, log the first 20 lines of $FAIL_LOG as
    "Failure detail (first 20):" — same diagnostic surface as before
    but consolidated so we don't spam logs on a flaky CF API.
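A minimal, runnable sketch of the fan-out shape described above (variable names match the commit; the real Cloudflare DELETE call is stubbed with a fake that fails for ids containing "bad", purely for illustration):

```shell
#!/usr/bin/env bash
# Sketch: parallel delete fan-out with OK/FAIL tally. The curl -X DELETE
# against the Cloudflare API is replaced by a stub failure predicate.
set -euo pipefail

DELETE_PLAN=$(mktemp)   # one tunnel id per line
RESULT_LOG=$(mktemp)
FAIL_LOG=$(mktemp)
trap 'rm -f "$DELETE_PLAN" "$RESULT_LOG" "$FAIL_LOG"' EXIT

printf '%s\n' tun-001 tun-bad-002 tun-003 > "$DELETE_PLAN"

SWEEP_CONCURRENCY="${SWEEP_CONCURRENCY:-8}"
export FAIL_LOG

# Worker gets ONLY the tunnel id on argv and prints exactly OK or FAIL.
# stdin (<) keeps this portable to BSD xargs, unlike GNU's -a FILE.
xargs -P "$SWEEP_CONCURRENCY" -I {} bash -c '
  id="$1"
  if [[ "$id" == *bad* ]]; then   # stand-in for a failed DELETE response
    echo "$id" >> "$FAIL_LOG"
    echo FAIL
  else
    echo OK
  fi
' _ {} < "$DELETE_PLAN" > "$RESULT_LOG"

DELETED=$(grep -c '^OK$' "$RESULT_LOG" || true)
FAILED=$(grep -c '^FAIL$' "$RESULT_LOG" || true)
echo "DELETED=$DELETED FAILED=$FAILED"   # prints: DELETED=2 FAILED=1
```

The `|| true` on the tallies matters: `grep -c` exits non-zero when the count is 0, which would otherwise trip `set -e` on an all-FAIL or all-OK run.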

Bug 2: the workflow's 5-min cap was intended as a hang detector but in
practice killed a legitimately slow job. Raised to 30 min — generous
headroom over the ~60s steady-state run while still surfacing genuine
hangs (and in line with the sweep-cf-orphans companion job).

Bug 3 (drive-by): the existing trap was `trap 'rm -rf "$PAGES_DIR"'
EXIT`, which would have been silently overwritten by any later trap
registration. Replaced with a single `cleanup()` function that wipes
PAGES_DIR + all four new tempfiles (DELETE_PLAN, NAME_MAP, FAIL_LOG,
RESULT_LOG), called once via `trap cleanup EXIT`.
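The consolidated trap has roughly this shape (file names taken from the commit; contents illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail

PAGES_DIR=$(mktemp -d)
DELETE_PLAN=$(mktemp)
NAME_MAP=$(mktemp)
FAIL_LOG=$(mktemp)
RESULT_LOG=$(mktemp)

# One cleanup function, registered exactly once. Bash keeps only the most
# recent handler per signal, so any later bare `trap '...' EXIT` would
# silently replace this one — hence all teardown lives in one place.
cleanup() {
  rm -rf "$PAGES_DIR"
  rm -f "$DELETE_PLAN" "$NAME_MAP" "$FAIL_LOG" "$RESULT_LOG"
}
trap cleanup EXIT
```

If another teardown step is needed later, it gets added to `cleanup()` rather than registered as a second trap.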

Verification:
  - bash -n scripts/ops/sweep-cf-tunnels.sh: clean
  - shellcheck -S warning scripts/ops/sweep-cf-tunnels.sh: clean
  - python3 yaml.safe_load on the workflow: clean
  - Synthetic 30-line delete plan with every 7th id sentinel'd to
    return {"success":false}: TEST PASS, DELETED=26 FAILED=4, FAIL_LOG
    side-channel name lookup verified.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-02 02:35:46 -07:00
demo-freeze-snapshots ops: demo-day freeze + rollback runbook 2026-05-01 12:04:30 -07:00
ops fix(sweep-cf-tunnels): parallelize deletes + raise workflow timeout 2026-05-02 02:35:46 -07:00
build_runtime_package.py fix(runtime): register configs_dir in TOP_LEVEL_MODULES + drop alias 2026-05-01 13:13:57 -07:00
build-images.sh initial commit — Molecule AI platform 2026-04-13 11:55:37 -07:00
bundle-compile.sh initial commit — Molecule AI platform 2026-04-13 11:55:37 -07:00
canary-smoke.sh feat(canary): smoke harness + GHA verification workflow (Phase 2) 2026-04-19 03:30:19 -07:00
cleanup-rogue-workspaces.sh fix(provisioner): stop rogue config-missing restart loop (#17) 2026-04-14 07:32:58 -07:00
clone-manifest.sh fix(quickstart): wire up template/plugin registry via manifest.json 2026-04-23 14:55:34 -07:00
demo-day-runbook.md ops: demo-day freeze + rollback runbook 2026-05-01 12:04:30 -07:00
demo-freeze.sh ops: demo-day freeze + rollback runbook 2026-05-01 12:04:30 -07:00
demo-thaw.sh ops: demo-day freeze + rollback runbook 2026-05-01 12:04:30 -07:00
dev-start.sh fix(dev-start): detect missing Go and fall back to docker-compose platform 2026-04-29 20:04:37 -07:00
import-agent.sh initial commit — Molecule AI platform 2026-04-13 11:55:37 -07:00
lockdown-tenant-sg.sh feat(security): Phase 35.1 — SG lockdown script for tenant EC2 instances 2026-04-18 12:01:41 -07:00
measure-coordinator-task-bounds-runner.sh fix(harness-runner): switch from non-existent /heartbeat-history to /activity 2026-04-28 23:12:51 -07:00
measure-coordinator-task-bounds.sh docs: registry pattern + harness scripts READMEs 2026-04-28 22:19:40 -07:00
nuke-and-rebuild.sh fix(scripts): nuke-and-rebuild self-bootstraps templates; add E2E test 2026-04-26 14:37:04 -07:00
post-rebuild-setup.sh security: remove hardcoded API keys from post-rebuild-setup.sh 2026-04-20 13:02:52 -07:00
README.md docs(scripts): rename /heartbeat-history → /activity in README 2026-04-29 02:23:00 -07:00
refresh-workspace-images.sh feat(platform/admin): /admin/workspace-images/refresh + Docker SDK + GHCR auth 2026-04-26 10:17:21 -07:00
rollback-latest.sh fix(scripts): correct platform dir path + add ROOT isolation (shellcheck clean) 2026-04-22 15:42:24 +00:00
test_build_runtime_package.py chore: rewriter unit tests + drop misleading noqa on import inbox 2026-04-30 20:45:32 -07:00
test-a2a-cross-runtime.sh initial commit — Molecule AI platform 2026-04-13 11:55:37 -07:00
test-all-adapters.sh initial commit — Molecule AI platform 2026-04-13 11:55:37 -07:00
test-all.sh initial commit — Molecule AI platform 2026-04-13 11:55:37 -07:00
test-cross-agent-chat.sh initial commit — Molecule AI platform 2026-04-13 11:55:37 -07:00
test-nuke-and-rebuild.sh fix(scripts): nuke-and-rebuild self-bootstraps templates; add E2E test 2026-04-26 14:37:04 -07:00
test-team-e2e.sh initial commit — Molecule AI platform 2026-04-13 11:55:37 -07:00
wheel_smoke.py feat(mcp): notifications/claude/channel for push-feel inbox UX 2026-04-30 20:10:01 -07:00

scripts/

Operational and one-off scripts for molecule-core. Most are self-documenting — see the header comments in each file.

RFC #2251 coordinator task-bound harnesses

There are three related scripts; pick the right one:

measure-coordinator-task-bounds.sh
  Purpose: Canonical v1 harness for the RFC #2251 / Issue 4 reproduction.
  Provisions a PM coordinator + Researcher child via the
  claude-code-default + langgraph templates, sends a synthesis-heavy A2A
  kickoff, and observes elapsed time + activity trace.
  Targets: OSS-shape platform — localhost or any /workspaces-shaped
  endpoint. Has tenant/admin-token guards for non-localhost runs.

measure-coordinator-task-bounds-runner.sh
  Purpose: Generalised runner for the same measurement contract, but with
  arbitrary template + secret + model combinations (Hermes/MiniMax, etc.).
  Useful for cross-runtime variants without modifying the canonical
  harness.
  Targets: Same as above (local or SaaS via MODE=saas).

measure-coordinator-task-bounds.sh (in molecule-controlplane)
  Purpose: Production-shape variant that bootstraps a real staging tenant
  via POST /cp/admin/orgs, then runs the same measurement against
  <slug>.staging.moleculesai.app.
  Targets: Staging controlplane only — refuses to run against production.

See reference_harness_pair_pattern (auto-memory) for when to use which and the cross-repo design rationale.

Common safety pattern across all three

  • Cleanup trap on EXIT/INT/TERM auto-deletes provisioned resources.
  • DRY_RUN=1 prints plan + auth fingerprint, exits before any state mutation. Run this before pointing at staging or any shared infrastructure.
  • Non-target guard refuses arbitrary endpoints (the controlplane variant is locked to staging-api.moleculesai.app; the OSS variant requires explicit auth + tenant scoping for non-localhost PLATFORM).
  • Cleanup failures emit cleanup_*_failed events with remediation hints; no silenced curl. ADMIN_TOKEN expiring mid-run surfaces as a structured event rather than a silent leak.
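A hedged sketch of how the DRY_RUN gate and structured cleanup-failure events fit together (names like `delete_workspace` and `emit_event` are illustrative stand-ins, not the scripts' exact helpers):

```shell
#!/usr/bin/env bash
# Sketch of the shared safety pattern: DRY_RUN gate before any mutation,
# EXIT/INT/TERM cleanup trap, and a structured cleanup_*_failed event
# instead of a silenced curl. delete_workspace is a hypothetical stub
# that simulates a failed teardown call.
set -euo pipefail

PLATFORM="${PLATFORM:-http://localhost:8080}"
DRY_RUN="${DRY_RUN:-0}"
WS_ID="ws-demo"

emit_event() { printf 'event=%s %s\n' "$1" "${2:-}"; }

delete_workspace() { return 1; }   # stub: pretend the DELETE failed

cleanup() {
  # Failed teardown surfaces as a grep-able event with a remediation hint
  # rather than a silent resource leak.
  if ! delete_workspace "$WS_ID"; then
    emit_event "cleanup_workspace_failed" \
      "ws=$WS_ID hint=cleanup-rogue-workspaces.sh"
  fi
}

if [[ "$DRY_RUN" == "1" ]]; then
  echo "PLAN: provision coordinator + child against $PLATFORM"
  exit 0                           # exit before any state mutation
fi

trap cleanup EXIT INT TERM
```

The DRY_RUN branch exits before the trap is even registered, so a dry run provably touches nothing.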

Activity trace caveat

If activity_trace.raw == "<endpoint_unavailable>", the per-workspace /activity endpoint isn't wired on the target build — the bound measurement is INCONCLUSIVE on the platform-ceiling question. Either wire the endpoint or replace with the equivalent Datadog query. Note that /activity accepts a since_secs query parameter; see the endpoint handler for the supported range.

Other scripts

  • cleanup-rogue-workspaces.sh — emergency teardown for leaked workspaces. Prompts for confirmation. Pair with the harnesses if a cleanup trap fails (see cleanup_*_failed events).
  • canary-smoke.sh — quick smoke test for canary releases.
  • dev-start.sh — local-dev platform bring-up.

The rest are self-documenting in their header comments.