molecule-ai-workspace-templ.../config.yaml
Hongming Wang 0f4ed28f62 feat: initial codex CLI workspace template
OpenAI Codex CLI (@openai/codex >=0.72) wrapped as a Molecule
workspace runtime, with native MCP-style push parity via persistent
codex app-server stdio JSON-RPC.

Each session holds one long-lived `codex app-server` child + one
thread; A2A messages become turn/start RPCs against the existing
thread. Per-thread serialization handles mid-turn arrivals (matches
OpenClaw's per-chat sequentializer).
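The per-thread serialization above can be sketched as a lock guarding each session's turn RPC; the class and method names here are illustrative, not the actual executor.py API:

```python
import asyncio

class Session:
    """Hypothetical sketch: one lock per session serializes turns."""

    def __init__(self) -> None:
        self._turn_lock = asyncio.Lock()
        self.history: list[str] = []

    async def run_turn(self, message: str) -> str:
        # A message that arrives mid-turn queues here instead of
        # interleaving with the in-flight RPC.
        async with self._turn_lock:
            self.history.append(message)
            await asyncio.sleep(0)  # stand-in for the turn RPC round-trip
            return f"done:{message}"

async def main() -> None:
    s = Session()
    # Two A2A messages fired concurrently still process in order.
    results = await asyncio.gather(s.run_turn("a"), s.run_turn("b"))
    print(results)   # ['done:a', 'done:b']
    print(s.history) # ['a', 'b']

asyncio.run(main())
```

The lock, not the event loop, is what guarantees ordering: `gather` starts both coroutines, but the second blocks on acquisition until the first turn completes.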

Modules:
- app_server.py — async JSON-RPC over NDJSON stdio (286 LOC)
- executor.py — turn lifecycle, notification accumulation,
  error surfacing (270 LOC)
- adapter.py — thin BaseAdapter shell + preflight
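The NDJSON framing app_server.py speaks is one JSON object per line, each request tagged with an id so responses can be matched back to callers. A minimal sketch, with illustrative function names rather than the real module API:

```python
import itertools
import json

_ids = itertools.count(1)

def encode_request(method: str, params: dict) -> bytes:
    """Frame a JSON-RPC 2.0 request as one newline-terminated line."""
    req = {"jsonrpc": "2.0", "id": next(_ids), "method": method, "params": params}
    return (json.dumps(req) + "\n").encode()

def decode_line(line: bytes) -> dict:
    """Parse one NDJSON line; notifications carry no id, responses echo it."""
    return json.loads(line)

line = encode_request("thread/start", {"model": "gpt-5"})
print(decode_line(line)["method"])  # thread/start
```

In the real client these lines would be written to the child's stdin and read back from its stdout, with a pending-futures map keyed by request id.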

Tests: 12/12 pass against Python NDJSON mock + fake AppServerProcess.
Validated end-to-end against real codex-cli 0.72.0:
- initialize handshake works
- thread/start works (returns thread.id, NOT thread.threadId as the
  generated JSON schema claims; executor accepts both shapes)
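The both-shapes acceptance noted above can be sketched as a small defensive extractor; the helper name and exact result nesting are assumptions, not the real executor.py code:

```python
def extract_thread_id(result: dict) -> str:
    """Accept both the observed shape (thread.id, codex-cli 0.72.0)
    and the schema-claimed shape (thread.threadId)."""
    thread = result.get("thread", result)
    tid = thread.get("id") or thread.get("threadId")
    if tid is None:
        raise ValueError(f"no thread id in thread/start result: {result!r}")
    return tid

print(extract_thread_id({"thread": {"id": "t-123"}}))        # t-123
print(extract_thread_id({"thread": {"threadId": "t-456"}}))  # t-456
```

Accepting both keys keeps the executor working whether codex ever converges on the schema or the observed behavior.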

Scaffolded but not yet end-to-end verified against a real Molecule
workspace + peer A2A traffic — that lands separately.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-02 02:19:52 -07:00

name: OpenAI Codex CLI
description: >-
  OpenAI Codex CLI (@openai/codex) wrapped as a Molecule workspace runtime.
  Each session holds a long-lived `codex app-server` child + one thread,
  so A2A messages process in-order with full conversation continuity —
  no fresh subprocess per turn.
  Provider is OpenAI (codex is OpenAI-only). Set OPENAI_API_KEY in the
  workspace's environment via the canvas Config tab.
version: 0.1.0
tier: 2
runtime: codex
runtime_config:
  # Default codex model. Pass-through to `thread/start`'s `model`
  # field; codex resolves the rest. Leave empty to use codex's own
  # default (currently gpt-5, but tracks codex CLI releases).
  model: gpt-5
  # Models surfaced in the canvas Config tab dropdown. All require
  # OPENAI_API_KEY since codex is OpenAI-only.
  models:
    - id: gpt-5
      name: GPT-5
      required_env: [OPENAI_API_KEY]
    - id: gpt-5-mini
      name: GPT-5 mini
      required_env: [OPENAI_API_KEY]
    - id: o4-mini
      name: o4-mini (reasoning)
      required_env: [OPENAI_API_KEY]
    - id: gpt-4o
      name: GPT-4o
      required_env: [OPENAI_API_KEY]
  # All codex models share this requirement.
  required_env: [OPENAI_API_KEY]
  # Single-provider runtime; surfaced as the only entry in the canvas
  # Provider dropdown.
  providers:
    - openai
  # 0 = no executor-side timeout. Per-turn timeout is enforced inside
  # executor.py at _TURN_TIMEOUT (currently 600s).
  timeout: 0
# codex's tool set is built-in (file ops, terminal, apply_patch, web
# fetch). No extension surface from our side today.
skills: []
a2a:
  port: 8000
  streaming: true
  push_notifications: true
# Bridge config — consumed by executor.py.
bridge:
  # codex app-server is a stdio child, not a network service. No URL
  # or port to configure. Listed here for symmetry with hermes' bridge
  # block; when we add overrides (e.g. custom codex binary path) they
  # land here.
  app_server_command: codex
  app_server_args: ["app-server"]
delegation:
  retry_attempts: 3
  retry_delay: 5
  timeout: 120
  escalate: true
template_schema_version: 1
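An executor consuming the `bridge` block above would build the child's argv from `app_server_command` and `app_server_args`; a minimal sketch, where everything beyond those two keys is assumed:

```python
import shlex
import subprocess

# Stand-in for the parsed `bridge` block of this template's YAML.
bridge = {"app_server_command": "codex", "app_server_args": ["app-server"]}

# Assemble the argv the executor would spawn once per session.
cmd = [bridge["app_server_command"], *bridge["app_server_args"]]
print(shlex.join(cmd))  # codex app-server

# The real executor keeps this child alive for the whole session,
# speaking NDJSON JSON-RPC over its stdin/stdout:
# proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
```

Keeping the command and args as separate config keys means a custom binary path override later only touches `app_server_command`.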