# molecule-ai-workspace-runtime
> ⚠️ **This repo is a publish artifact, not the source of truth.**
> Runtime code lives in `Molecule-AI/molecule-core` → `workspace/`. This repo is
> regenerated and republished from there by the `publish-runtime` workflow on every
> `runtime-v*` tag. Don't edit files here directly. PRs against this repo will not
> be merged. Open them against `molecule-core` instead.
Shared Python runtime infrastructure for all Molecule AI agent adapters.
This package provides the core machinery every Molecule AI workspace container needs:
- A2A server — registers with the platform, heartbeats, serves A2A JSON-RPC
- Adapter interface — `BaseAdapter` / `AdapterConfig` / `SetupResult`
- Built-in tools — delegation, memory, approvals, sandbox, telemetry
- Skill loader — loads and hot-reloads skill modules from `/configs/skills/`
- Plugin system — per-workspace + shared plugin discovery and install
- Config / preflight — YAML config loading with validation
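To make the adapter interface concrete, here is a minimal sketch of what an adapter subclass might look like. The shapes of `BaseAdapter`, `AdapterConfig`, and `SetupResult` below are illustrative assumptions, not the published API; consult the package itself for the real signatures.

```python
# Hypothetical sketch of the adapter contract. The real BaseAdapter /
# AdapterConfig / SetupResult live in molecule_runtime and may differ;
# every name below is an illustrative assumption.
from dataclasses import dataclass, field


@dataclass
class AdapterConfig:
    model: str = "default"
    extra: dict = field(default_factory=dict)


@dataclass
class SetupResult:
    ok: bool
    detail: str = ""


class BaseAdapter:
    """Assumed shape of the runtime's adapter contract."""

    def __init__(self, config: AdapterConfig) -> None:
        self.config = config

    def setup(self) -> SetupResult:
        raise NotImplementedError

    def run(self, task: str) -> str:
        raise NotImplementedError


class EchoAdapter(BaseAdapter):
    """Toy adapter: setup always succeeds, run echoes the task."""

    def setup(self) -> SetupResult:
        return SetupResult(ok=True, detail=f"model={self.config.model}")

    def run(self, task: str) -> str:
        return f"echo: {task}"


adapter = EchoAdapter(AdapterConfig(model="demo"))
assert adapter.setup().ok
print(adapter.run("hello"))  # → echo: hello
```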
## Installation

```bash
pip install molecule-ai-workspace-runtime
```
## Adapter discovery
The runtime discovers adapters in two ways:
-
ADAPTER_MODULEenv var (standalone adapter repos):ADAPTER_MODULE=adapter molecule-runtimeThe runtime imports
adapterand callsadapter.Adapter. -
Subdirectory scan (monorepo local dev): falls back to scanning
molecule_runtime/adapters/<runtime>/and importing the matching subdir'sAdapterclass.
## Contributing
Don't open PRs here. Send your change to
[Molecule-AI/molecule-core](https://github.com/Molecule-AI/molecule-core)
under the `workspace/` directory. After your PR merges to `main` and a
`runtime-v*` tag is pushed, the `publish-runtime` workflow rebuilds this
mirror and uploads the new wheel to PyPI.
See `docs/workspace-runtime-package.md` for the full publishing flow.
## Why this split
The runtime needs to ship as a PyPI artifact (so the 8 workspace template
images can pip install it), but it also needs to evolve in lock-step
with the platform's wire protocol (queue shape, A2A metadata, event
payloads). A monorepo edit + auto-publish pipeline gives both: atomic
cross-cutting changes, plus a clean PyPI release on every tag.
For the history of why this repo was previously the source of truth, and the
drift that caused, see issue Molecule-AI/molecule-core#2103.