publish-runtime.yml has never fired since the .gitea port (zero rows in
action_run with workflow_id='publish-runtime.yml'), which is why PyPI
is still at 0.1.129 despite Gitea having a runtime-v1.0.0 tag.
Root cause hypothesis: Gitea Actions evaluates the on.push.paths filter
against tag-push events too (no path diff → workflow skipped). PR #349
made this visible by adding the paths trigger, but the same defect
existed for the originally-ported tags-only trigger on this Gitea version
— hence the runtime-v1.0.0 tag also never published.
Fix: split into two files, each with a single unambiguous trigger shape.
- publish-runtime.yml : on.push.tags only (the publisher)
- publish-runtime-autobump.yml : on.push.branches+paths (NEW; the bumper)
The autobump file computes next version from PyPI latest, pushes
'runtime-v$VERSION' tag via DISPATCH_TOKEN (not GITHUB_TOKEN — needed
to trigger downstream workflows on Gitea), and exits. The tag push
then triggers publish-runtime.yml.
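The autobump's version computation can be sketched as below — a minimal Python sketch, not the workflow's actual script; the package name is taken from the verification note further down, and plain X.Y.Z versions are assumed:

```python
import json
import urllib.request


def bump_patch(latest: str) -> str:
    """Next patch version: '0.1.129' -> '0.1.130'. Assumes plain X.Y.Z."""
    major, minor, patch = latest.split(".")
    return f"{major}.{minor}.{int(patch) + 1}"


def next_runtime_version(package: str = "molecule-ai-workspace-runtime") -> str:
    """Look up the latest release on PyPI and bump it (network call)."""
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url) as resp:
        return bump_patch(json.load(resp)["info"]["version"])
```

The resulting string is what gets tagged as 'runtime-v$VERSION'.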
Test plan after merge:
1. Push no-op commit to workspace/. Observe autobump fire, push tag.
2. Observe publish-runtime.yml fire on the tag, publish 0.1.130 to
PyPI, cascade to template repos.
3. Verify 'action_run' shows >0 rows for both workflow_ids.
Adds back the original GitHub workflow's auto-publish trigger that was
dropped during the 2026-05-10 .gitea port (#206). A push to main or
staging touching workspace/** now falls into the existing PyPI-latest
auto-bump path — no logic changes, just the restored trigger and a
comment correction.
Caveat: the workflow still requires PYPI_TOKEN as a repository secret
(or org-level). Without it the publish step will fail loudly with a
descriptive error. Q2 follow-up tracks setting the secret.
Refs: molecule-core#348
Ports the bounded retry+backoff around each `git clone` in
scripts/clone-manifest.sh onto main, mirroring PR #298 which landed the
same change on staging. CI-infra carve-out: publish-workspace-server-image.yml
fires on `push: branches:[main]`, so the retry mitigation must be on main for
the workflow to be resilient to the OOM-killed-git-mid-clone flake
(`error: git-remote-https died of signal 9`, run 4622) when triggered by a
main push. Same one-file change as #298 (+45/-5), POSIX-sh, sh -n clean.
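The retry shape is roughly the following — a Python sketch for illustration only (the real change is POSIX sh in scripts/clone-manifest.sh); attempt count and delays are illustrative, not the script's values:

```python
import subprocess
import time


def with_retry(op, attempts: int = 5, base_delay: float = 2.0, sleep=time.sleep) -> int:
    """Run `op` (which returns an exit code) up to `attempts` times with
    exponential backoff between failures; return the attempt number that
    succeeded. Covers flakes like git dying mid-clone (signal 9)."""
    for attempt in range(1, attempts + 1):
        if op() == 0:
            return attempt
        if attempt < attempts:
            sleep(base_delay * (2 ** (attempt - 1)))  # 2s, 4s, 8s, ...
    raise RuntimeError(f"command failed after {attempts} attempts")


# Usage: wrap a single clone in the manifest loop.
# with_retry(lambda: subprocess.run(["git", "clone", url, dest]).returncode)
```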
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
The Docker daemon health-check fix should not change which branches trigger
the build. Revert accidental addition of 'staging' to branch filters.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Cover the canvas image publish workflow with the same `docker info`
guard added to publish-workspace-server-image.yml (commit 5216e781).
publish-canvas-image.yml was the only docker-build workflow still
missing the step.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
The Molecule-AI GitHub org was suspended 2026-05-06; canonical SCM is
now git.moleculesai.app. external_connection.go was still emitting
github.com URLs in operator-facing copy-paste blocks, breaking
external-agent onboarding silently.
Per-site decisions (8 emit sites in 1 file):
- L124 (channel template doc comment): swap source-of-truth comment to
Gitea host.
- L137 /plugin marketplace add Molecule-AI/...: swap to explicit Gitea
HTTPS URL form. End-to-end-verified path per internal#37 § 1.A.
- L138 /plugin install molecule@molecule-mcp-claude-channel: marketplace
name is molecule-channel (per remote .claude-plugin/marketplace.json),
not the repo name. Fix to molecule@molecule-channel.
- L157 --channels plugin:molecule@molecule-mcp-claude-channel: same
marketplace-name fix.
- L179 user-facing GitHub URL: swap to Gitea.
- L261 pip install git+https://github.com/Molecule-AI/molecule-sdk-python:
not on PyPI; swap to git+https://git.moleculesai.app/molecule-ai/...
- L310 hermes-channel doc comment: swap source-of-truth comment.
- L339 pip install git+https://github.com/Molecule-AI/hermes-channel-molecule:
not on PyPI; swap to Gitea.
- L369 issue-tracker URL: swap to Gitea.
Verification:
- molecule-ai-workspace-runtime, codex-channel-molecule are on PyPI (200);
no swap needed for those pip lines (they were already package-name form).
- molecule-mcp-claude-channel, molecule-sdk-python, hermes-channel-molecule
are NOT on PyPI; swapped to git+https://git.moleculesai.app/molecule-ai/
form. All three repos are public on Gitea (default branch main) and
serve git-upload-pack unauthenticated (verified curl 200 against
/info/refs?service=git-upload-pack).
- Third-party github URLs (gin import, openai/codex, NousResearch/
hermes-agent upstream issue trackers, npm @openai/codex) intentionally
preserved.
Adds TestExternalTemplates_NoBrokenMoleculeAIGitHubURLs regression guard
to prevent the same broken URLs from re-emerging on future template
edits.
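The guard's core check amounts to an org-scoped URL scan; the actual test is Go, but the idea sketches like this (function and pattern names are illustrative):

```python
import re

# Org-scoped on purpose: third-party github.com URLs (gin import,
# openai/codex, upstream issue trackers) are intentionally preserved;
# only the suspended Molecule-AI org is broken.
_BROKEN_URL = re.compile(r"https?://github\.com/Molecule-AI/\S*")


def broken_molecule_github_urls(rendered: str) -> list:
    """Return every Molecule-AI GitHub URL still present in rendered output."""
    return _BROKEN_URL.findall(rendered)
```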
go vet / go build / existing TestExternal* — all clean.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two surfaces in workspace-server hardcoded `ghcr.io` and silently bypassed
the `MOLECULE_IMAGE_REGISTRY` env override that flips every other image
operation to the configured private mirror (e.g. AWS ECR in production):
1. internal/imagewatch/watch.go — image-auto-refresh polled
`https://ghcr.io/v2/...` and `https://ghcr.io/token` directly. Post-
suspension, with the platform pointed at ECR, the watcher silently
stopped seeing digest changes (every poll either 404'd or hung on a
registry it has no business talking to).
2. internal/handlers/admin_workspace_images.go — Docker Engine auth
payload pinned `serveraddress: "ghcr.io"`, so when the operator sets
`MOLECULE_IMAGE_REGISTRY=…ecr…/molecule-ai` the engine matched the
wrong credential entry on every authenticated pull.
Fix: extract `provisioner.RegistryHost()` returning the host portion of
`RegistryPrefix()` (e.g. `ghcr.io` ← `ghcr.io/molecule-ai`, or
`004947743811.dkr.ecr.us-east-2.amazonaws.com` ← the ECR mirror prefix),
and route both surfaces through it. Default behavior is unchanged for
OSS users on GHCR.
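The helper's contract, sketched in Python (the real helper is Go; the fallback default mirrors the unchanged-for-GHCR behavior described above):

```python
def registry_host(registry_prefix: str) -> str:
    """Host portion of a registry prefix:
    'ghcr.io/molecule-ai' -> 'ghcr.io'. Never returns empty: falls back
    to the GHCR default when the prefix is blank."""
    host = registry_prefix.split("/", 1)[0].strip()
    return host or "ghcr.io"
```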
Tests
- New `TestRegistryHost_SplitsHostFromOrgPath` and
`TestRegistryHost_NeverEmpty` pin the helper across GHCR / ECR /
self-hosted Gitea / bare-host edge cases.
- New `TestGHCRAuthHeader_RespectsRegistryEnv` asserts the Docker auth
payload's `serveraddress` follows MOLECULE_IMAGE_REGISTRY (and never
leaks the org-path suffix).
- New `TestRemoteDigest_RegistryHostFollowsEnv` stands up an httptest
server, points MOLECULE_IMAGE_REGISTRY at it, and confirms both the
token endpoint and the manifest HEAD land there — i.e. the full image-
watch loop respects the env override end-to-end.
Both new tests were verified to FAIL on the pre-fix code path before the
helper was wired in, so a future revert can't silently re-introduce the
bug.
Out of scope (followup needed)
ECR uses `aws ecr get-authorization-token` (SigV4 + basic-auth) instead
of GHCR's `/token?service=…&scope=…` flow. This PR makes the URL host-
configurable; the bearer-token negotiation in `fetchPullToken` still
speaks the GHCR flavor. On ECR with `IMAGE_AUTO_REFRESH=true`, the
watcher will now fail loudly at the token fetch (logged per tick) rather
than silently hitting ghcr.io. Operators on ECR should keep
IMAGE_AUTO_REFRESH=false until ECR auth is wired — tracked as a separate
task. Net effect of this PR alone is strictly better than pre-fix:
fail-loud > silent-broken.
Refs: RFC #229 P2-4
tier:low
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two bugs in yaml-utils.ts toYaml():
1. tools: [] was only emitted when config.tools.length > 0,
but the test asserts it's always present. Add blank-line
separator + unconditional list("tools", ...) so MINIMAL_CONFIG
with tools: [] renders correctly.
2. Nested list values (e.g. runtime_config.required_env: [KEY])
were serialized as " required_env: KEY" (stringification of the
array) instead of a YAML list block. Fix obj() to detect
Array.isArray(sv) and emit a list block with 4-space indent.
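The fixed serialization behavior for bug 2, sketched in Python rather than the source's TypeScript (names are illustrative):

```python
def emit_value(key, value, indent=2):
    """Serialize one key under a nested mapping. Arrays become a YAML list
    block with items indented two past the key (4 spaces for a key at
    indent 2), instead of being stringified."""
    pad = " " * indent
    if isinstance(value, list):
        lines = [f"{pad}{key}:"]
        lines += [f"{' ' * (indent + 2)}- {item}" for item in value]
        return "\n".join(lines)
    return f"{pad}{key}: {value}"
```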
Closes #269.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Self-delegation deadlocks: the sending turn holds `_run_lock`, the receive
handler waits for the same lock, the A2A request 30s-times-out, and the
whole cycle is wasted (the Dev Lead system prompt warns agents off this by
hand — "Never delegate_task to your own workspace ID … there is no peer who
is also you"). The platform/runtime had no guard. Now both
`tool_delegate_task` and `tool_delegate_task_async` early-return an
actionable error when `workspace_id == effective_source` (`source_workspace_id
or _peer_to_source[target] or WORKSPACE_ID`) — before `discover_peer`, so no
network round-trip is wasted either. A genuinely different target (incl.
another of a multi-workspace agent's own registered workspaces) is
unaffected.
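The guard logic, sketched (signatures and the error wording are illustrative, not the real tool code):

```python
def guard_self_delegation(workspace_id, source_workspace_id=None,
                          peer_to_source=None, own_workspace_id="ws-self"):
    """Early-return check run before discover_peer, so a self-delegation
    costs no network round-trip. Returns an error dict, or None to proceed."""
    peer_to_source = peer_to_source or {}
    effective_source = (source_workspace_id
                        or peer_to_source.get(workspace_id)
                        or own_workspace_id)
    if workspace_id == effective_source:
        return {"error": ("Refusing to delegate to your own workspace "
                          f"({workspace_id}): the sending turn holds _run_lock "
                          "and the receive handler would deadlock on it.")}
    return None  # genuinely different target: proceed to discover_peer
```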
Tests: tests/test_a2a_tools_delegation.py — new TestSelfDelegationGuard (4
cases: rejects own ID; rejects when source_workspace_id explicitly == target;
async path rejects; a different target passes the guard through to
discover_peer). `pytest tests/test_a2a_tools_delegation.py` → 12 passed.
(tests/test_a2a_tools_impl.py's TestToolDelegateTask* suite is red on this
PC2/Windows checkout — same on `main` without this change; httpx-mock infra,
not this PR — CI validates on Linux.)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
[core-lead-agent] PR #281 merged — handles string-form errors in a2a_tools.delegate_task (was raising AttributeError on every delegation through legacy path), fixes empty-parts dict regression (#279), and drops the dead staging branch trigger from both publish workflows. Replaces the abandoned PR #268 + #277. Integration Tester unblocked for mesh recovery validation.
internal#226 follow-up #1. `molecule_runtime.config` resolves the picked
model as `MOLECULE_MODEL` > `MODEL` > (legacy) `MODEL_PROVIDER` (#280) —
this side of the boundary now matches:
- applyRuntimeModelEnv reads `MOLECULE_MODEL` ahead of `MODEL` /
`MODEL_PROVIDER`, and exports BOTH `MOLECULE_MODEL` and `MODEL`
(the latter kept for back-compat with everything that already reads
`os.environ["MODEL"]`). So a workspace whose secrets carry
`MOLECULE_MODEL` (the unambiguous name) is honoured, and the
`MODEL_PROVIDER` misnomer — which got set to provider slugs
("minimax") and even runtime names ("claude-code") — is the lowest-
priority fallback, exactly as on the runtime side.
- the resolution-order comment is updated to flag MODEL_PROVIDER as the
legacy-and-misleadingly-named var.
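The resolution-and-export shape, sketched in Python (the real applyRuntimeModelEnv is Go; the dict-in/dict-out signature is illustrative):

```python
def apply_runtime_model_env(secrets):
    """Pick the model as MOLECULE_MODEL > MODEL > MODEL_PROVIDER (legacy),
    then export BOTH names: MOLECULE_MODEL as the unambiguous canonical
    name, MODEL for back-compat with os.environ["MODEL"] readers."""
    model = (secrets.get("MOLECULE_MODEL")
             or secrets.get("MODEL")
             or secrets.get("MODEL_PROVIDER"))  # lowest-priority fallback
    env = {}
    if model:
        env["MOLECULE_MODEL"] = model
        env["MODEL"] = model
    return env
```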
Also drops a stray trailing `}` in delegation_test.go (committed in
97768272 "test(delegation): add isDeliveryConfirmedSuccess helper") that
made `internal/handlers` fail to parse — one of the things keeping the
package from compiling for tests.
Tests: TestApplyRuntimeModelEnv_SetsUniversalMODELForAllRuntimes extended
to assert MOLECULE_MODEL mirrors MODEL on every case, plus two new cases
(MOLECULE_MODEL env fallback; MOLECULE_MODEL beats MODEL_PROVIDER). Could
not run `go test ./internal/handlers/` locally — the package is still
blocked behind `internal/plugins` `SourceResolver` redeclaration (the
#248 plugin-router/resolver refactor, Core-BE's lane); CI validates once
that lands. The applyRuntimeModelEnv change is mechanical (same shape as
the existing `MODEL` handling) — reviewer please eyeball.
Companion: molecule-core#280 (runtime config.py side), molecule-ai-workspace-template-claude-code#14 (CLI-stream-error surfacing).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Run `docker info` as the first CI step to catch runner Docker socket
permission issues (docker.sock unreadable, daemon restarted, group
membership drift) before the expensive `docker build` step. The error
now surfaces immediately with a clear `::error::` message rather than
silently continuing into `docker build` where the same failure would
appear 60-90s later as a cryptic ECR auth error.
Gitea Actions run 4350 (2026-05-10 05:58 UTC) motivated this fix: the runner's
docker.sock became inaccessible for ~6 minutes, `docker build` failed
at step 2 with `permission denied...docker.sock`, and `go build` (step 3)
was never reached — masking the compile errors that were already on
main. The downstream code errors only surfaced once run 4407 succeeded
at `docker build` and finally reached `go build`.
Now: `docker info` → fail in ~1s with actionable error.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Two-part fix from PR #268 (ported by Integration Tester after PR #268
was closed without merge):
PART 1 — workspace/builtin_tools/a2a_tools.py: Fixes AttributeError
when platform returns a plain string as the error field. Before:
data["error"].get("message") ← crashes if error is a string
After:
isinstance(err, dict) → err.get("message")
isinstance(err, str) → use err directly
otherwise → str(err)
Also guards result.get("parts") against non-dict result.
Includes fix for issue #279: empty-parts regression where
{"parts": []} returned "(no text)" instead of str(result).
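The PART 1 normalization, sketched (a helper extracted for illustration; the real change inlines these branches):

```python
def error_message(data):
    """Normalize the platform `error` field, which may arrive as a dict,
    a plain string (the legacy path that crashed), or anything else."""
    err = data.get("error")
    if isinstance(err, dict):
        return err.get("message", str(err))
    if isinstance(err, str):
        return err
    return str(err)
```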
PART 2 — .gitea/workflows/ and .github/workflows/
publish-workspace-server-image.yml: Removed dead "staging" branch
trigger. Trunk-based migration (2026-05-08) removed the staging branch
but the workflow triggers were not updated.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
PR #256 introduced PluginResolver to break the SourceResolver redeclaration
deadlock, but missed three downstream call-sites that left main uncompilable:
1. plugins/drift_sweeper.go: PluginResolver.Resolve was declared returning
PluginResolver (recursive). *Registry.Resolve returns the production
SourceResolver from source.go, so *Registry didn't satisfy PluginResolver.
Fix: Resolve returns SourceResolver. Add compile-time assertion that
*Registry satisfies PluginResolver so any future signature drift fails
the build instead of router wiring.
2. plugins/drift_sweeper_test.go: stubResolver was still declared with the
old SourceResolver shape AND asserted against SourceResolver — the
assertion failed because stubResolver lacks Scheme()/Fetch(). Fix: stub
is a PluginResolver; assertion targets PluginResolver. Drop the unused
"database/sql" import that fails go vet.
3. router/router.go:
- The 70f84823 reorder moved the plgh init block above its dockerCli
dependency (line 538 used; line 594 declared). Moved the dockerCli
declaration up so it's available where used; replaced the orphaned
declaration in the terminal block with a comment.
- Setup's pluginResolver param was typed plugins.SourceResolver — wrong
for *plugins.Registry (Registry is not a per-scheme resolver). Retyped
to plugins.PluginResolver, which *Registry actually satisfies.
- Removed the broken `plgh.WithSourceResolver(pluginResolver)` call —
WithSourceResolver expects a per-scheme SourceResolver, not a
PluginResolver/registry. plgh has its own internal default registry
(github+local) from NewPluginsHandler, so dropping the call is
functionally a no-op vs the broken state. Kept the param so the
drift sweeper (main.go) can share scheme enumeration when needed.
4. go.sum: add the content hash entry for go.moleculesai.app/plugin/
gh-identity/pluginloader (only the /go.mod hash was present, breaking
`go build ./cmd/server`).
Verified locally:
go build ./... ✓
go vet ./... ✓ (only pre-existing org_external append warning)
go test ./internal/plugins/... ✓
go test ./internal/router/... ✓
6 pre-existing handler test failures (TestExecuteDelegation_*,
TestHandleDiagnose_*) are orthogonal — they did not run before because the
package didn't compile. Out of scope for this fix; tracking separately.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
`molecule_runtime.config.load_config` read the `MODEL_PROVIDER` env var as
the *picked model id* — despite the name, it never carried the provider
(that's `LLM_PROVIDER` / the YAML `provider:` field). So `claude-code`,
`minimax`, and `opus` were all "valid" values for a var named
MODEL_PROVIDER. That footgun bit the dev-team rollout (2026-05-10): the
lead persona env files set `MODEL=claude-opus-4-7` (the intended model)
*and* `MODEL_PROVIDER=claude-code` (mistaking it for "the runtime"); the
loader picked up MODEL_PROVIDER → the claude CLI got `--model claude-code`
→ 404 on every turn, surfaced only as "Command failed with exit code 1"
with empty stderr (the real error is in the stream-json stdout, swallowed
by the SDK's placeholder). The 22 IC workspaces "worked" only because
their `MODEL_PROVIDER=minimax` happened to fuzzy-match on MiniMax's side —
they were actually running `--model minimax`, not `MiniMax-M2.7-highspeed`.
New precedence in `_picked_model_from_env`: `MOLECULE_MODEL` (canonical,
unambiguous) > `MODEL` (the obviously-correct name, already plumbed by
workspace-server's applyRuntimeModelEnv) > `MODEL_PROVIDER` (legacy —
still honored so canvas Save+Restart, the secret-mint path, and existing
persona env files keep working, but if it's the only one set we log a
one-time deprecation pointing at the misnomer) > the YAML `model:` field.
Applied at both the top-level `model` and `runtime_config.model`
resolution sites; semantics are otherwise unchanged. Bonus: workspaces
that already set `MODEL` correctly now get exactly that model instead of
whatever fuzzy-match the upstream did with the provider slug.
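The precedence chain, sketched (the real `_picked_model_from_env` reads os.environ and latches the deprecation warning once; an env dict and a `warn` callback are used here for clarity):

```python
def picked_model_from_env(env, yaml_model=None, warn=print):
    """MOLECULE_MODEL > MODEL > MODEL_PROVIDER (legacy, warns) > YAML model."""
    for var in ("MOLECULE_MODEL", "MODEL"):
        if env.get(var):
            return env[var]
    if env.get("MODEL_PROVIDER"):
        # Honored for back-compat, but flagged: despite the name it never
        # carried the provider (that's LLM_PROVIDER / the YAML provider field).
        warn("MODEL_PROVIDER is legacy and misleadingly named; "
             "set MOLECULE_MODEL or MODEL instead")
        return env["MODEL_PROVIDER"]
    return yaml_model
```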
Tests: 5 new cases in test_config.py (MODEL beats MODEL_PROVIDER;
MOLECULE_MODEL beats MODEL; MODEL overrides YAML; legacy MODEL_PROVIDER
still resolves + warns; no warning when MODEL is set) + an autouse
fixture that clears MODEL*/resets the warn-latch so resolution is
deterministic regardless of the CI env or test order. `pytest
tests/test_config.py` — 66 passed; the config-importing suites
(test_preflight, test_skills_loader) — 129 passed.
Companion: molecule-dev-department PR #10 fixes the six dev-team lead
`workspace.yaml`s from `model: MiniMax-M2.7` to `model: opus`. Follow-ups
(not in scope here): plumb `MOLECULE_MODEL` from applyRuntimeModelEnv and
the canvas; strip `MODEL`/`MODEL_PROVIDER` from the operator-host persona
env files once the org-template `model:` field is authoritative end-to-end.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
When a push-mode workspace (one with a public URL) is at capacity, the
platform queues the delegation request and returns:
{"queued": true, "message": "...", "queue_depth": N, "queue_id": "..."}
The existing SSOT parser (a2a_response.py) only handled the poll-mode
envelope (status=queued + delivery_mode=poll). Push-mode queue
responses fell through to Malformed, causing send_a2a_message to log a
warning and return an error — even though delivery was actually queued
successfully.
Fix: add handling for data.get("queued") is True as a Queued variant
with delivery_mode="push". Checked before the poll-mode envelope so the
two cases are mutually exclusive.
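The two-variant recognition can be sketched as follows (a simplified stand-in for the SSOT parser's types; field names follow the envelopes quoted above):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Queued:
    delivery_mode: str
    queue_depth: Optional[int] = None
    queue_id: Optional[str] = None


def parse_queue_envelope(data):
    """Recognize both queue envelope shapes. Push-mode is checked first so
    the two variants stay mutually exclusive; anything else returns None
    (i.e. falls through to the other response variants)."""
    if data.get("queued") is True:
        return Queued("push", data.get("queue_depth"), data.get("queue_id"))
    if data.get("status") == "queued" and data.get("delivery_mode") == "poll":
        return Queued("poll", data.get("queue_depth"), data.get("queue_id"))
    return None
```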
Fixes observed 2026-05-10: platform returning push-mode queue
envelopes to Integration Tester when Release Manager workspace was at
capacity.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Adds a Known Limitations section to docs/agent-runtime/workspace-runtime.md
explaining that the base molecule-ai-workspace-runtime image intentionally
omits Chromium system libs (libnss3, libatk-bridge2.0-0, libxkbcommon0, etc.)
to keep the shared image lean for every workspace role.
Records the recommended workflow (E2E in CI on the Gitea Actions self-hosted
runner) and points future role-specific QA/FE templates at layering
playwright install-deps on top of the base image rather than baking it in.
Closes the documentation half of molecule-ai/molecule-app#7.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>