codex-channel-molecule
Bridge daemon — gives codex CLI push parity with the Molecule AI platform's other external runtimes.
The Molecule platform's hermes-channel-molecule plugin gives hermes-agent true push delivery — peer agents and canvas-user messages land mid-session as conversation turns. Codex CLI has no plugin API today and its MCP runtime drops inbound notifications, so this daemon is the equivalent push surface — built outside the codex process.
How it works
canvas user / peer agent ──► molecule platform inbox
        │
        │  wait_for_message (long-poll)
        ▼
codex-channel-molecule daemon
        │
        │  codex exec --resume <sid> "<msg>"
        │  capture stdout
        │  send_message_to_user / delegate_task
        │  inbox_pop(activity_id)
        ▼
canvas chat / peer workspace
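The loop in the diagram can be sketched as follows. This is a minimal illustration, not the daemon's actual internals: `handle_one_message`, the `tools` object, and the empty-output placeholder are assumptions layered over the real tool calls (`wait_for_message`, `inbox_pop`, `codex exec --resume`).

```python
import subprocess

def run_codex(session_id: str, text: str) -> str:
    """Spawn `codex exec --resume <sid> "<msg>"` and capture stdout."""
    result = subprocess.run(
        ["codex", "exec", "--resume", session_id, text],
        capture_output=True, text=True,
    )
    return result.stdout

def handle_one_message(tools, session_id: str, run=run_codex) -> str:
    msg = tools.wait_for_message()                 # long-poll the platform inbox
    reply = run(session_id, msg["text"]).strip() or "(no output)"  # placeholder, never a silent drop
    tools.send_message_to_user(reply)              # deliver back to canvas chat / peer workspace
    tools.inbox_pop(msg["activity_id"])            # ack so the same message is never re-run
    return reply
```

The `run` parameter is injected only so the loop body can be exercised without a real codex binary.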
Each chat thread (one canvas-user thread or one peer-workspace thread) gets its own codex session_id, persisted to ~/.codex-channel-molecule/sessions.json so daemon restarts don't lose conversation context. Set CODEX_CHANNEL_MOLECULE_STATE_DIR to override the default location (e.g. when running under systemd with a per-instance state dir).
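A minimal sketch of how that persistence could look, assuming the state-dir override and a flat thread-to-session JSON map; the function names and key format are hypothetical, not the daemon's exact schema.

```python
import json
import os
from pathlib import Path

def state_dir() -> Path:
    # CODEX_CHANNEL_MOLECULE_STATE_DIR overrides the default location.
    override = os.environ.get("CODEX_CHANNEL_MOLECULE_STATE_DIR")
    return Path(override) if override else Path.home() / ".codex-channel-molecule"

def remember_session(thread_key: str, session_id: str) -> None:
    # Persist the thread -> session_id map so restarts keep context.
    path = state_dir() / "sessions.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    sessions = json.loads(path.read_text()) if path.exists() else {}
    sessions[thread_key] = session_id
    path.write_text(json.dumps(sessions, indent=2))
```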
When to use this vs. the codex tab in the External Connect modal
The codex tab wires the molecule MCP server into ~/.codex/config.toml so codex can call platform tools (list_peers, delegate_task, send_message_to_user, commit_memory, etc.). That's outbound — codex calls out to the platform.
This daemon is the inbound counterpart — the platform pushes to codex. Run both for full bidirectional integration.
Install
npm install -g @openai/codex@^0.57
pip install codex-channel-molecule
Configure + run
The same env-var contract as hermes-channel-molecule's outbound MCP path (WORKSPACE_ID, PLATFORM_URL, MOLECULE_WORKSPACE_TOKEN):
export WORKSPACE_ID=<uuid from External Connect modal>
export PLATFORM_URL=https://<your-tenant>.moleculesai.app
export MOLECULE_WORKSPACE_TOKEN=<bearer token from External Connect modal>
codex-channel-molecule
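A small sketch of validating that env-var contract at startup; the function name and return shape are illustrative, not the daemon's actual code.

```python
import os

REQUIRED = ("WORKSPACE_ID", "PLATFORM_URL", "MOLECULE_WORKSPACE_TOKEN")

def missing_env(env=os.environ) -> list:
    # Return the names of required variables that are unset or empty,
    # so the daemon can fail fast with a clear message before polling.
    return [name for name in REQUIRED if not env.get(name)]
```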
The daemon runs in the foreground; logs go to stderr. For systemd hosts, register a unit; for one-off use, nohup ... & plus a log file works.
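For the systemd case, a unit along these lines could work; every path here, the env file location, and the state-dir choice are assumptions to adapt for your host.

```ini
# /etc/systemd/system/codex-channel-molecule.service -- illustrative only;
# paths, the EnvironmentFile location, and the install prefix are assumptions.
[Unit]
Description=codex-channel-molecule bridge daemon
After=network-online.target
Wants=network-online.target

[Service]
# WORKSPACE_ID, PLATFORM_URL, MOLECULE_WORKSPACE_TOKEN live in the env file.
EnvironmentFile=/etc/codex-channel-molecule/env
# Per-instance state dir under /var/lib, via the documented override.
StateDirectory=codex-channel-molecule
Environment=CODEX_CHANNEL_MOLECULE_STATE_DIR=%S/codex-channel-molecule
ExecStart=/usr/local/bin/codex-channel-molecule
Restart=on-failure
# Logs go to stderr, which journald captures by default.

[Install]
WantedBy=multi-user.target
```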
Deprecation path
When openai/codex#17543 lands upstream — a generic path for handling MCP custom notifications in codex and forwarding them into the active session as user submissions — this daemon becomes redundant. Codex itself will accept inbound molecule messages as Op::UserInput directly through the MCP server already wired in ~/.codex/config.toml. Until then, this is the operator-facing answer.
Development
git clone https://github.com/Molecule-AI/codex-channel-molecule
cd codex-channel-molecule
pip install -e ".[test]"
pytest -q
Tests are entirely real-subprocess (no mocking the spawn boundary) so the boot path is covered the same way the daemon runs in production.
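In that spirit, a real-subprocess test looks roughly like this; the test body is an illustration of the style, not taken from the suite.

```python
import subprocess
import sys

def test_boot_path_real_subprocess():
    # Spawn a genuine child process instead of mocking the spawn boundary,
    # so the test exercises the same code path production uses to boot.
    proc = subprocess.run(
        [sys.executable, "-c", "print('boot ok')"],
        capture_output=True, text=True, timeout=30,
    )
    assert proc.returncode == 0
    assert proc.stdout.strip() == "boot ok"
```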
Releasing
Tag-on-push triggers publish.yml which builds + publishes to PyPI via OIDC trusted publishing (no API token needed).
# Bump pyproject.toml `version`, commit, then:
git tag v0.1.1 && git push origin v0.1.1
The workflow refuses to publish if the tag doesn't match pyproject.toml's version — keeps PyPI versions and git tags in lockstep.
One-time PyPI setup (before the first release):
- Create the project on PyPI by uploading the first wheel manually, OR
- Pre-register the project on PyPI under a "Pending publisher" config so the first tagged push creates it.
Either way, on the project's PyPI page → "Manage" → "Publishing" → "Add a new publisher", configure:
- Owner: Molecule-AI
- Repository: codex-channel-molecule
- Workflow filename: publish.yml
- Environment name: pypi
After this, every git push origin v*.*.* ships the wheel to PyPI without any further intervention.
License
Apache-2.0