chore: extract workspace runtime to PyPI + move adapter Dockerfiles to template repos

Published `molecule-ai-workspace-runtime==0.1.0` to PyPI:
  https://pypi.org/project/molecule-ai-workspace-runtime/0.1.0/

Source repo: https://github.com/Molecule-AI/molecule-ai-workspace-runtime

Each adapter's Dockerfile and requirements.txt have moved to the corresponding
standalone template repo (molecule-ai-workspace-template-<runtime>). The adapter
Python code (.py files) stays in the monorepo for local dev and testing.

Changes:
- workspace-template/pyproject.toml — new, packages the shared runtime as a PyPI package
- workspace-template/adapters/*/Dockerfile — removed (now in template repos)
- workspace-template/adapters/*/requirements.txt — removed (now in template repos)
- workspace-template/Dockerfile — drop COPY adapters/ (still copies .py files via *.py glob)
- workspace-template/build-all.sh — simplified to base-image-only build
- workspace-template/entrypoint.sh — remove adapter requirements.txt install step
- workspace-template/tests/test_hermes_adapter.py — skip Dockerfile/requirements.txt checks
- CLAUDE.md — update architecture description + workspace image table
- docs/workspace-runtime-package.md — new, explains the package + adapter repo layout

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
This commit is contained in:
Hongming Wang 2026-04-16 04:33:10 -07:00
parent d424bd947f
commit 03f6fc81dd
7 changed files with 160 additions and 127 deletions


@@ -118,7 +118,7 @@ Canvas (Next.js :3000) ←WebSocket→ Platform (Go :8080) ←HTTP→ Postgres +
Four main components:
- **Platform** (`platform/`): Go/Gin control plane — workspace CRUD, registry, discovery, WebSocket hub, liveness monitoring
- **Canvas** (`canvas/`): Next.js 15 + React Flow (@xyflow/react v12) + Zustand + Tailwind — visual workspace graph
- **Workspace Runtime** (`workspace-template/`): Unified Docker image with pluggable adapter system — supports LangGraph, Claude Code, OpenClaw, DeepAgents, CrewAI, AutoGen. Adapters in `workspace-template/adapters/`. Deps installed at startup via `entrypoint.sh`.
- **Workspace Runtime** (`workspace-template/`): Shared runtime published as [`molecule-ai-workspace-runtime`](https://pypi.org/project/molecule-ai-workspace-runtime/) on PyPI. Supports LangGraph, Claude Code, OpenClaw, DeepAgents, CrewAI, AutoGen. Each adapter lives in its own standalone template repo (e.g. `molecule-ai-workspace-template-claude-code`). See `docs/workspace-runtime-package.md` for the full picture.
- **molecli** (`platform/cmd/cli/`): Go TUI dashboard (Bubbletea + Lipgloss) — real-time workspace monitoring, event log, health overview, delete/filter operations
## Build & Run Commands
@@ -172,21 +172,20 @@ Env vars: `NEXT_PUBLIC_PLATFORM_URL` (default http://localhost:8080), `NEXT_PUBL
### Workspace Images
```bash
bash workspace-template/build-all.sh # Build base + ALL runtime images
bash workspace-template/build-all.sh claude-code # Build base + specific runtime only
bash workspace-template/build-all.sh # Build base image only (workspace-template:base)
```
Each runtime has its own Docker image extending `workspace-template:base`, with deps pre-installed for fast startup. The base Dockerfile (`workspace-template/Dockerfile`) builds `:base`, then each `adapters/*/Dockerfile` extends it (e.g. `claude_code/Dockerfile` installs the `claude` CLI). **Always use `build-all.sh`** — it builds base first, then all runtimes in order. No `:latest` tag — each runtime uses its own tag to avoid confusion.
Adapters are now in standalone template repos. Each repo has its own `Dockerfile` that installs `molecule-ai-workspace-runtime` from PyPI + adapter-specific deps. The base `workspace-template/Dockerfile` still builds `:base` for local dev. See `docs/workspace-runtime-package.md` for the adapter repo list and details.
| Runtime | Image Tag | Key Deps |
|---------|-----------|----------|
| langgraph | `workspace-template:langgraph` | langchain-anthropic, langgraph |
| claude-code | `workspace-template:claude-code` | claude-agent-sdk (pip), @anthropic-ai/claude-code (npm) |
| openclaw | `workspace-template:openclaw` | openclaw deps |
| crewai | `workspace-template:crewai` | crewai |
| autogen | `workspace-template:autogen` | autogen |
| deepagents | `workspace-template:deepagents` | deepagents |
| hermes | `workspace-template:hermes` | openai (OpenAI-compatible client; Nous Portal via `HERMES_API_KEY` or OpenRouter via `OPENROUTER_API_KEY` fallback) |
| gemini-cli | `workspace-template:gemini-cli` | @google/gemini-cli (npm); requires `GEMINI_API_KEY`; MCP wired via `~/.gemini/settings.json`; memory file: `GEMINI.md` |
| Runtime | Standalone Repo | Key Deps |
|---------|-----------------|----------|
| langgraph | `molecule-ai-workspace-template-langgraph` | molecule-ai-workspace-runtime, langchain-anthropic, langgraph |
| claude-code | `molecule-ai-workspace-template-claude-code` | molecule-ai-workspace-runtime, claude-agent-sdk (pip), @anthropic-ai/claude-code (npm) |
| openclaw | `molecule-ai-workspace-template-openclaw` | molecule-ai-workspace-runtime, openclaw (npm) |
| crewai | `molecule-ai-workspace-template-crewai` | molecule-ai-workspace-runtime, crewai |
| autogen | `molecule-ai-workspace-template-autogen` | molecule-ai-workspace-runtime, autogen |
| deepagents | `molecule-ai-workspace-template-deepagents` | molecule-ai-workspace-runtime, deepagents |
| hermes | `molecule-ai-workspace-template-hermes` | molecule-ai-workspace-runtime, openai, anthropic, google-genai |
| gemini-cli | `molecule-ai-workspace-template-gemini-cli` | molecule-ai-workspace-runtime, @google/gemini-cli (npm) |
Templates live in standalone repos under `Molecule-AI/molecule-ai-workspace-template-*` (8 workspace templates) and `Molecule-AI/molecule-ai-org-template-*` (5 org templates). They're cloned at Docker build time into the platform image. The template registry (`template_registry` table in the control plane DB) tracks all templates with their `github://` source URLs. Agent roles are configured after deployment via Config tab or API.


@@ -0,0 +1,79 @@
# Workspace Runtime PyPI Package
## Overview
The shared workspace runtime infrastructure lives in two places:
1. **Source of truth (monorepo):** `workspace-template/` — this is where all development happens
2. **Published package:** [`molecule-ai-workspace-runtime`](https://pypi.org/project/molecule-ai-workspace-runtime/) on PyPI
## What's in the package
Everything in `workspace-template/` except adapter-specific code:
- `molecule_runtime/` — all shared `.py` files (main.py, config.py, heartbeat.py, etc.)
- `molecule_runtime/adapters/` — `BaseAdapter`, `AdapterConfig`, `SetupResult`, `shared_runtime`
- `molecule_runtime/builtin_tools/` — delegation, memory, approvals, sandbox, telemetry
- `molecule_runtime/skill_loader/` — skill loading + hot-reload
- `molecule_runtime/plugins_registry/` — plugin discovery and install pipeline
- `molecule_runtime/policies/` — namespace routing policies
- Console script: `molecule-runtime` → `molecule_runtime.main:main_sync`
## Adapter repos
Each of the 8 adapter repos now contains:
- `adapter.py` — runtime-specific `Adapter` class
- `requirements.txt` — `molecule-ai-workspace-runtime>=0.1.0` + adapter deps
- `Dockerfile` — standalone image (no longer extends workspace-template:base)
| Adapter | Repo |
|---------|------|
| claude-code | https://github.com/Molecule-AI/molecule-ai-workspace-template-claude-code |
| langgraph | https://github.com/Molecule-AI/molecule-ai-workspace-template-langgraph |
| crewai | https://github.com/Molecule-AI/molecule-ai-workspace-template-crewai |
| autogen | https://github.com/Molecule-AI/molecule-ai-workspace-template-autogen |
| deepagents | https://github.com/Molecule-AI/molecule-ai-workspace-template-deepagents |
| hermes | https://github.com/Molecule-AI/molecule-ai-workspace-template-hermes |
| gemini-cli | https://github.com/Molecule-AI/molecule-ai-workspace-template-gemini-cli |
| openclaw | https://github.com/Molecule-AI/molecule-ai-workspace-template-openclaw |
## Adapter discovery (ADAPTER_MODULE)
Standalone adapter repos set `ENV ADAPTER_MODULE=adapter` in their Dockerfile.
The runtime's `get_adapter()` checks this env var first:
```python
# In molecule_runtime/adapters/__init__.py
def get_adapter(runtime: str) -> type[BaseAdapter]:
    adapter_module = os.environ.get("ADAPTER_MODULE")
    if adapter_module:
        mod = importlib.import_module(adapter_module)
        return getattr(mod, "Adapter")
    # Fall back to built-in subdirectory scan (monorepo local dev)
    ...
```
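The same lookup order can be exercised with the standard library alone. The sketch below is illustrative, not the runtime's actual code: `resolve_adapter_class` and the `demo_adapter` module are made-up names, and a fake module is registered in `sys.modules` to stand in for a standalone repo's `adapter.py`:

```python
import importlib
import os
import sys
import types

def resolve_adapter_class(attr: str = "Adapter"):
    """Mirror the lookup order above: ADAPTER_MODULE wins when set;
    None tells the caller to fall back to the built-in directory scan."""
    module_name = os.environ.get("ADAPTER_MODULE")
    if not module_name:
        return None
    mod = importlib.import_module(module_name)
    return getattr(mod, attr)

# Stand-in for a standalone repo's adapter.py (normally importable from the image).
fake = types.ModuleType("demo_adapter")
fake.Adapter = type("Adapter", (), {})
sys.modules["demo_adapter"] = fake

os.environ["ADAPTER_MODULE"] = "demo_adapter"
print(resolve_adapter_class().__name__)  # → Adapter
```

Because `importlib.import_module` consults `sys.modules` first, the fake module resolves without any file on disk — the same mechanism that lets each adapter image ship a top-level `adapter.py` and point `ADAPTER_MODULE` at it.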
## Publishing a new version
```bash
cd workspace-template
# 1. Bump version in pyproject.toml
# 2. Sync to molecule-ai-workspace-runtime repo
# 3. Tag and push — CI publishes to PyPI via PYPI_TOKEN secret
```
Or manually:
```bash
cd workspace-template
python -m build
python -m twine upload dist/*
```
## Writing a new adapter
1. Create a new standalone repo `molecule-ai-workspace-template-<runtime>`
2. Copy `adapter.py` pattern from any existing adapter repo
3. Change imports: `from molecule_runtime.adapters.base import BaseAdapter, AdapterConfig`
4. Create `requirements.txt` with `molecule-ai-workspace-runtime>=0.1.0` + your deps
5. Create `Dockerfile` with `ENV ADAPTER_MODULE=adapter` and `ENTRYPOINT ["molecule-runtime"]`
6. Register the runtime name in the platform's known runtimes list
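Step 2's `adapter.py` follows a small contract. The sketch below defines stand-in base classes so it runs on its own — a real adapter imports `BaseAdapter`, `AdapterConfig`, and `SetupResult` from `molecule_runtime.adapters.base` (step 3), and the `setup()` hook name here is illustrative, not the runtime's actual interface:

```python
from dataclasses import dataclass, field

# Stand-ins for the real classes in molecule_runtime.adapters.base;
# field names and methods are illustrative only.
@dataclass
class AdapterConfig:
    runtime: str = "my-runtime"
    env: dict = field(default_factory=dict)

@dataclass
class SetupResult:
    ok: bool
    detail: str = ""

class BaseAdapter:
    def __init__(self, config: AdapterConfig):
        self.config = config

class Adapter(BaseAdapter):
    """The class get_adapter() resolves when ADAPTER_MODULE=adapter is set."""
    def setup(self) -> SetupResult:  # illustrative hook name
        # Validate runtime-specific prerequisites here (API keys, CLIs, ...).
        return SetupResult(ok=True, detail=f"{self.config.runtime} ready")

result = Adapter(AdapterConfig()).setup()
print(result.detail)  # → my-runtime ready
```

When in doubt, copy the real method signatures from an existing adapter repo rather than this sketch.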


@@ -25,12 +25,12 @@ RUN useradd -m -s /bin/bash agent
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy runtime code
# Copy runtime code (adapters/ has been removed — adapters now live in standalone
# template repos and install molecule-ai-workspace-runtime from PyPI)
COPY *.py ./
COPY entrypoint.sh ./
COPY skill_loader/ ./skill_loader/
COPY builtin_tools/ ./builtin_tools/
COPY adapters/ ./adapters/
COPY plugins_registry/ ./plugins_registry/
COPY policies/ ./policies/


@@ -1,12 +1,19 @@
#!/usr/bin/env bash
# build-all.sh — Rebuild base + all runtime images in the correct order.
# build-all.sh — Rebuild base image and optionally adapter images.
#
# NOTE: Adapters have been extracted to standalone template repos:
# https://github.com/Molecule-AI/molecule-ai-workspace-template-<runtime>
#
# This script now only builds the base image from workspace-template/Dockerfile.
# Each adapter repo has its own Dockerfile that installs molecule-ai-workspace-runtime
# from PyPI and the adapter-specific deps.
#
# Usage:
# bash workspace-template/build-all.sh # Build all
# bash workspace-template/build-all.sh claude-code langgraph # Build specific runtimes only
# bash workspace-template/build-all.sh # Build base image only
#
# The base image must be built first, then each adapter extends it.
# Adapter directory names use underscores (claude_code), Docker tags use hyphens (claude-code).
# Standalone adapter repos still reference the legacy base image for local dev
# (e.g. FROM workspace-template:base). To build those locally, clone the adapter
# repo and run `docker build -t workspace-template:<runtime> .` from its root.
set -euo pipefail
@@ -20,90 +27,11 @@ NC='\033[0m'
log() { echo -e "${GREEN}[build]${NC} $1" >&2; }
err() { echo -e "${RED}[error]${NC} $1" >&2; }
# Convert between dir name (underscore) and tag name (hyphen)
dir_to_tag() { echo "${1//_/-}"; }
tag_to_dir() { echo "${1//-/_}"; }
# Step 1: Build base image (always — all runtimes depend on it)
# Build base image
log "Building workspace-template:base ..."
if ! docker build -t workspace-template:base -f Dockerfile . ; then
    err "Base image build failed"
    exit 1
fi
log "Base image built"
# Step 2: Determine which runtimes to build
RUNTIMES=()
if [ $# -gt 0 ]; then
    for arg in "$@"; do
        dir="$(tag_to_dir "$arg")"
        if [ -f "adapters/$dir/Dockerfile" ]; then
            RUNTIMES+=("$dir")
        else
            err "No Dockerfile for runtime: $arg (looked in adapters/$dir/)"
            exit 1
        fi
    done
else
    for df in adapters/*/Dockerfile; do
        RUNTIMES+=("$(basename "$(dirname "$df")")")
    done
fi
# Step 3: Build each runtime image — in parallel (Top-5 #3 from outcomes doc).
#
# All adapter Dockerfiles `FROM workspace-template:base` with no inter-adapter
# dependency, so they're safe to run concurrently. Single-runtime builds
# (`bash build-all.sh claude-code`) still run serially — no benefit to fork.
# Per-adapter stderr/stdout goes to /tmp/build_<tag>.log so failures are
# debuggable without interleaved output.
FAILED=()
if [ "${#RUNTIMES[@]}" -le 1 ] || [ "${SERIAL_BUILD:-}" = "1" ]; then
    # Serial path — preserves the old behaviour for single-runtime rebuilds and
    # for CI environments that prefer bounded concurrency (set SERIAL_BUILD=1).
    for dir_name in "${RUNTIMES[@]}"; do
        tag="$(dir_to_tag "$dir_name")"
        log "Building workspace-template:$tag (serial) ..."
        if docker build -t "workspace-template:$tag" -f "adapters/$dir_name/Dockerfile" . ; then
            log "workspace-template:$tag built"
        else
            err "workspace-template:$tag FAILED"
            FAILED+=("$tag")
        fi
    done
else
    # Parallel path — fan out one `docker build` per adapter, capture each
    # output to /tmp/build_<tag>.log, wait for all, then tally.
    declare -a PIDS=()
    declare -a TAGS=()
    for dir_name in "${RUNTIMES[@]}"; do
        tag="$(dir_to_tag "$dir_name")"
        log "Building workspace-template:$tag (parallel, log=/tmp/build_${tag}.log) ..."
        docker build -t "workspace-template:$tag" \
            -f "adapters/$dir_name/Dockerfile" . \
            > "/tmp/build_${tag}.log" 2>&1 &
        PIDS+=("$!")
        TAGS+=("$tag")
    done
    # Wait for each, report per-tag outcome.
    for i in "${!PIDS[@]}"; do
        pid="${PIDS[$i]}"
        tag="${TAGS[$i]}"
        if wait "$pid"; then
            log "workspace-template:$tag built"
        else
            err "workspace-template:$tag FAILED — see /tmp/build_${tag}.log"
            FAILED+=("$tag")
        fi
    done
fi
echo ""
if [ ${#FAILED[@]} -eq 0 ]; then
    log "All ${#RUNTIMES[@]} runtime images built successfully"
else
    err "${#FAILED[@]} failed: ${FAILED[*]}"
    exit 1
fi
log "Done. Adapters are in standalone template repos — see docs/workspace-runtime-package.md"


@@ -52,33 +52,11 @@ else:
print('langgraph')
" 2>/dev/null || echo "langgraph")
# Normalize runtime name for directory lookup (claude-code -> claude_code)
ADAPTER_DIR=$(echo "$RUNTIME" | tr '-' '_')
echo "=== Molecule AI Workspace ==="
echo "Runtime: $RUNTIME"
# Install adapter-specific Python requirements (skip if already pre-installed in image)
REQ_FILE="/app/adapters/${ADAPTER_DIR}/requirements.txt"
if [ -f "$REQ_FILE" ]; then
    if grep -q '^[^#]' "$REQ_FILE" 2>/dev/null; then
        # Check if first package is already installed
        FIRST_PKG=$(grep '^[^#]' "$REQ_FILE" | head -1 | sed 's/[>=<].*//')
        if python3 -c "import importlib; importlib.import_module('${FIRST_PKG//-/_}')" 2>/dev/null; then
            echo "Adapter deps already installed (${FIRST_PKG})"
        else
            echo "Installing Python adapter dependencies..."
            pip install --no-cache-dir --user -q -r "$REQ_FILE" 2>&1 | tail -3
        fi
    fi
fi
# Install adapter-specific npm packages (for Node.js-based runtimes like OpenClaw)
NPM_FILE="/app/adapters/${ADAPTER_DIR}/package.json"
if [ -f "$NPM_FILE" ]; then
    echo "Installing npm adapter dependencies..."
    cd "/app/adapters/${ADAPTER_DIR}" && npm install --production 2>&1 | tail -3
    cd /app
fi
# NOTE: Adapter-specific deps are now pre-installed in each adapter's Docker image
# (standalone template repos). Each image installs molecule-ai-workspace-runtime
# from PyPI plus the adapter-specific requirements. No per-runtime pip install needed here.
exec python3 main.py


@@ -0,0 +1,35 @@
[build-system]
requires = ["setuptools>=68.0", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "molecule-ai-workspace-runtime"
version = "0.1.0"
description = "Molecule AI workspace runtime — shared infrastructure for all agent adapters"
requires-python = ">=3.11"
license = {text = "BSL-1.1"}
readme = "README.md"
# Don't pin heavy deps — each adapter adds its own
dependencies = [
"a2a-sdk[http-server]>=0.3.25",
"httpx>=0.27.0",
"uvicorn>=0.30.0",
"starlette>=0.38.0",
"websockets>=12.0",
"pyyaml>=6.0",
"langchain-core>=0.3.0",
"opentelemetry-api>=1.24.0",
"opentelemetry-sdk>=1.24.0",
"opentelemetry-exporter-otlp-proto-http>=1.24.0",
"temporalio>=1.7.0",
]
[project.scripts]
molecule-runtime = "molecule_runtime.main:main_sync"
[tool.setuptools.packages.find]
where = ["."]
include = ["molecule_runtime*"]
[tool.setuptools.package-data]
"molecule_runtime" = ["py.typed"]


@@ -48,22 +48,33 @@ class TestHermesShellLayout:
    def test_directory_exists(self):
        assert HERMES_DIR.is_dir(), "adapters/hermes/ directory is missing"
    @pytest.mark.skip(
        reason="Dockerfile moved to molecule-ai-workspace-template-hermes standalone repo"
    )
    def test_dockerfile_present(self):
        assert (HERMES_DIR / "Dockerfile").is_file(), "Dockerfile missing from hermes shell"
    def test_init_py_present(self):
        assert (HERMES_DIR / "__init__.py").is_file(), "__init__.py missing from hermes shell"
    @pytest.mark.skip(
        reason="requirements.txt moved to molecule-ai-workspace-template-hermes standalone repo"
    )
    def test_requirements_txt_present(self):
        assert (HERMES_DIR / "requirements.txt").is_file(), "requirements.txt missing"
# ---------------------------------------------------------------------------
# 2. requirements.txt — primary dependency contract
# NOTE: requirements.txt has moved to the standalone template repo.
# The source-of-truth checks below are now in that repo's own test suite.
# ---------------------------------------------------------------------------
class TestHermesRequirements:
    @pytest.mark.skip(
        reason="requirements.txt moved to molecule-ai-workspace-template-hermes standalone repo"
    )
    def test_openai_version_pin(self):
        text = (HERMES_DIR / "requirements.txt").read_text()
        assert "openai>=1.0.0" in text, (
@@ -71,6 +82,9 @@ class TestHermesRequirements:
            "the Hermes adapter relies on the OpenAI-compat client for Nous Portal / OpenRouter."
        )
    @pytest.mark.skip(
        reason="requirements.txt moved to molecule-ai-workspace-template-hermes standalone repo"
    )
    def test_no_heavy_framework_deps(self):
        """PR-1 shell must not introduce heavy deps that aren't committed to yet."""
        text = (HERMES_DIR / "requirements.txt").read_text().lower()