The molecule-ai-workspace-runtime mirror is regenerated on every
runtime-v* tag from this monorepo's workspace/. Per saved memory
reference_runtime_repo_is_mirror_only, mirror-guard rejects direct
PRs to the mirror; edit at source.
Source-side files whose content propagates to the mirror's published
README or is read by users of the in-monorepo workspace-runtime docs:
- scripts/build_runtime_package.py (the README generator):
* line 281 README_TEMPLATE: 'Shared workspace runtime for Molecule
AI' link → Gitea
* line 399 doc-link to workspace-runtime-package.md → Gitea path
(with /src/branch/main/ shape)
LEFT AS-IS (per Q3 audit-trail decision):
* lines 379, 392 historical issue cross-refs (#2936, #2937)
- workspace/build-all.sh:5 — comment block linking to template-*
repos. Migrated to Gitea path-shape.
- docs/workspace-runtime-package.md:
* lines 101-108 adapter→repo table (8 templates, all PUBLIC on
Gitea) — Gitea URLs
* line 247 starter-repo link — substituted host + added inline
note that starter doesn't survive the suspension migration
(recreation pending; cross-link to this issue)
* line 259 generic git clone command for new templates → Gitea
* line 289 second starter mention — same handling as 247
Files NOT touched in this PR:
- workspace/ Python source code (.py files) — those use github
paths in docstrings + a few log strings; fix bundled with the
cross-repo Go-module-style migration (per #37 Q5 + parked
follow-ups).
- 'Writing a new adapter' section's `gh repo create` command (lines
  254-256) — the gh CLI doesn't talk to Gitea (per #45 parked follow-up).
- 'Writing a new adapter' section's ghcr.io image ref (line 276) —
per #46 ghcr→ECR migration (separate concern).
After this PR merges to staging + a runtime-v* tag is pushed, the
mirror's published README will inherit the Gitea link. Until then
the mirror's README continues to reference github.com/Molecule-AI
(stale but historical-marker-correct since the mirror existed
pre-suspension).
Refs: molecule-ai/internal#41, molecule-ai/internal#37,
molecule-ai/internal#38, molecule-ai/internal#42,
molecule-ai/internal#45, molecule-ai/internal#46
User-reported friction: pip install molecule-ai-workspace-runtime on a
3.10 interpreter fails with "Could not find a version that satisfies the
requirement (from versions: none)" — pip's requires_python filter
silently drops the only available artifact before attempting install,
so the error doesn't mention Python at all. Operators see
"package missing", file a bug, and chase a phantom CDN/visibility
issue.
Two changes mirror the requirement at the two operator-touch surfaces:
1. workspace-server/internal/handlers/external_connection.go:
the externalUniversalMcpTemplate snippet (rendered into the
canvas Connect-External-Agent modal) now leads with a brief
"Requires Python >= 3.11" block + diagnostic + upgrade paths.
2. docs/workspace-runtime-package.md: same callout at the top of
the doc, before the Overview, so anyone landing here from search
gets the answer immediately.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
- workspace-runtime-package.md: add explicit "Where to make changes"
section documenting the mirror-only policy on
Molecule-AI/molecule-ai-workspace-runtime — direct PRs are auto-rejected
by mirror-guard CI; staging push regenerates both the mirror and the
PyPI wheel via .github/workflows/publish-runtime.yml.
- infra/workspace-terminal.md: replace dead molecule-core#1528 reference
(repo renamed to molecule-monorepo, no longer accepting issues at the
old name) with a forward-pointer to monorepo + molecule-controlplane
issue trackers.
- architecture/backends.md: bump audit date to 2026-05-02 and add rows
for channel envelope enrichment (#2471), chat_history MCP tool
(#2474), /activity before_ts paging (#2476), /activity peer_id filter
(#2472), runtime_wedge smoke gate (#2473 + #2475), and the canvas-E2E
state-file requirement (#2327).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds an opt-in goroutine that polls GHCR every 5 minutes for digest
changes on each workspace-template-*:latest tag and invokes the same
refresh logic /admin/workspace-images/refresh exposes. With this, the
chain from "merge runtime PR" to "containers running new code" is fully
hands-off — no operator step between auto-tag → publish-runtime →
cascade → template image rebuild → host pull + recreate.
Opt-in via IMAGE_AUTO_REFRESH=true. SaaS deploys whose pipeline already
pulls every release should leave it off (would be redundant work);
self-hosters get true zero-touch.
Why a refactor of admin_workspace_images.go is in this PR:
The HTTP handler held all the refresh logic inline. To share it with
the new watcher without an HTTP loopback, the logic was extracted into
WorkspaceImageService with a Refresh(ctx, runtimes, recreate)
(RefreshResult, error) shape.
HTTP handler is now a thin wrapper; behavior is preserved (same JSON
response, same 500-on-list-failure, same per-runtime soft-fail).
Watcher design notes:
- Last-observed digest tracked in memory (not persisted). On boot the
first observation per runtime is seed-only — no spurious refresh
fires on every restart.
- On Refresh error, the seen digest rolls back so the next tick retries.
Without this rollback a transient Docker glitch would convince the
watcher the work was done.
- Per-runtime fetch errors don't block other runtimes (one template's
brief 500 doesn't pause the others).
- digestFetcher injection seam in tick() lets unit tests cover all
bookkeeping branches without standing up an httptest GHCR server.
Verified live: probed GHCR's /token + manifest HEAD against
workspace-template-claude-code; got HTTP 200 + a real
Docker-Content-Digest. Same calls the watcher makes.
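For reference, the manifest HEAD side of that probe can be built like this. The function name is ours; the `/v2/<repo>/manifests/<tag>` path and Accept media types follow the OCI distribution spec, and the response's Docker-Content-Digest header is what the watcher compares.

```go
package main

import (
	"fmt"
	"net/http"
)

// buildManifestHEAD constructs the registry-v2 HEAD request whose
// Docker-Content-Digest response header carries the tag's digest.
// The bearer token comes from GHCR's /token endpoint (not shown here).
func buildManifestHEAD(repo, tag, bearer string) (*http.Request, error) {
	url := fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag)
	req, err := http.NewRequest(http.MethodHead, url, nil)
	if err != nil {
		return nil, err
	}
	// Accept both index and single-arch manifest media types so the
	// registry doesn't fall back to a schema we don't expect.
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	req.Header.Add("Accept", "application/vnd.docker.distribution.manifest.v2+json")
	if bearer != "" {
		req.Header.Set("Authorization", "Bearer "+bearer)
	}
	return req, nil
}

func main() {
	req, _ := buildManifestHEAD("molecule-ai/workspace-template-claude-code", "latest", "tok")
	fmt.Println(req.Method, req.URL.Path)
}
```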
Co-authored-by: Hongming Wang <hongmingwangalt@gmail.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs: point new-runtime-template flow at the GitHub template repo
The 'Writing a new adapter' section was a 6-step manual checklist that
re-derived the canonical shape every time. Now that
Molecule-AI/molecule-ai-workspace-template-starter exists as a GitHub
template, the flow collapses to:
gh repo create ... --template Molecule-AI/molecule-ai-workspace-template-starter
Plus a fill-in-the-TODO-markers table.
Why this matters: the starter ships with the
'repository_dispatch: [runtime-published]' cascade receiver pre-wired,
which means new templates pick up runtime PyPI publishes automatically
without the one-time setup PR each existing template needed (PRs #6-#22,
which we just opened across the 8 template repos to retrofit them). At
'hundreds of runtimes' scale this is the difference between linear
PR-toil and zero PR-toil per template addition.
Also adds: 'When the starter itself needs to evolve' — explicit pattern
for keeping the canonical shape in one place when it changes.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
* docs(workspace-runtime): drop PYPI_TOKEN refs — OIDC is the new auth
Reflects PR #2113 (PyPI Trusted Publisher / OIDC migration). No static
PyPI token exists in the repo anymore, so the docs shouldn't claim one
does. Replaces the PYPI_TOKEN row in the Required Secrets table with an
"Auth" section pointing at the OIDC config; TEMPLATE_DISPATCH_TOKEN is
still the only repo secret the cascade needs.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Hongming Wang <hongmingwangalt@gmail.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The production-side end of the runtime CD chain. Operators (or the post-
publish CI workflow) hit this after a runtime release to pull the latest
workspace-template-* images from GHCR and recreate any running ws-* containers
so they adopt the new image. Without this, a freshly published runtime sat
in the registry while containers kept the old image until they naturally
cycled.
Implementation notes:
- Uses Docker SDK ImagePull rather than shelling out to docker CLI — the
alpine platform container has no docker CLI installed.
- ghcrAuthHeader() reads GHCR_USER + GHCR_TOKEN env, builds the base64-
encoded JSON payload Docker engine expects in PullOptions.RegistryAuth.
Both empty → public/cached images only; both set → private GHCR pulls.
- Container matching uses ContainerInspect (NOT ContainerList) because
ContainerList returns the resolved digest in .Image, not the human tag.
Inspect surfaces .Config.Image which is what we need.
- Provisioner.DefaultImagePlatform() exported so admin handler picks the
same Apple-Silicon-needs-amd64 platform as the provisioner — single
source of truth for the multi-arch override.
Local-dev companion: scripts/refresh-workspace-images.sh runs on the
host and inherits the host's docker keychain auth — alternate path for
when GHCR_USER/TOKEN aren't set in the platform env.
🤖 Generated with [Claude Code](https://claude.com/claude-code)