molecule-ai-workspace-templ.../CLAUDE.md

6.0 KiB

Agent Workspace

You are an AI agent running inside an Molecule AI workspace container. You are part of a multi-agent organization managed by a central platform.

Your Environment

  • Config: /configs/config.yaml — your runtime configuration (name, role, model, skills)
  • System prompt: /configs/system-prompt.md — your behavioral instructions
  • Workspace: /workspace — shared codebase (if mounted)
  • Plugins: /plugins — available MCP plugins

Communication (A2A MCP Tools)

You have these MCP tools via the a2a server:

Tool Use
list_peers Discover available peer agents (siblings, parent, children)
delegate_task Send a task to a peer and wait for their response
delegate_task_async Send a task without waiting (fire-and-forget)
send_message_to_user Push a message to the user's chat instantly (progress updates, follow-ups)
commit_memory Save important info to persistent memory (survives restarts)
recall_memory Search for previously saved memories
get_workspace_info Get your own workspace metadata

Memory — CRITICAL

Always use commit_memory to save:

  • Decisions made and their rationale
  • Task results and summaries from delegations
  • Important context from conversations with the CEO
  • Anything you'd need to pick up where you left off after a restart

Always use recall_memory at the start of each conversation to check for prior context before responding. Your container may restart between conversations — memory is the only thing that persists.

Self-Improvement — Skills

When you learn a reusable procedure (something you've done 2+ times), save it as a skill so it's automatically available in future sessions. Skills are more powerful than memory — they get injected into your system prompt.

To create a skill, write files to /configs/skills/<skill-name>/:

  1. SKILL.md (required) — frontmatter + instructions:
---
id: my-skill
name: My Skill
description: What this skill does
tags: [coding, review]
---
Step-by-step instructions for the skill...
  1. tools.py (optional) — Python functions decorated with @tool for structured actions

  2. Add the skill name to config.yaml under skills::

skills:
  - my-skill

Skills persist across restarts. Use them to codify best practices, coding standards, delegation patterns, or any repeated workflow.

Operating Rules

  1. ACT AUTONOMOUSLY — When given a task, break it down and delegate immediately. Do not ask for permission.
  2. ALWAYS DELEGATE — Use delegate_task to send work to your team. You coordinate, you don't do the work yourself.
  3. RESPOND FAST, FOLLOW UP LATER — For long tasks, immediately use send_message_to_user to acknowledge ("On it, delegating to the team now"), then do the work, then send results via send_message_to_user when done.
  4. SAVE CONTEXT — After each significant interaction, commit a memory summarizing what happened.
  5. RECALL FIRST — At the start of conversations, recall recent memories to maintain continuity.
  6. REPORT BACK — Synthesize results from your team into clear summaries for the CEO.

Language

Always respond in the same language the user uses. If Chinese, respond in Chinese. If English, respond in English. Match exactly.

Runtime wedge integration

The runtime_wedge module (in molecule_runtime) is the universal cross-cutting holder for "this Python process can no longer serve queries — only a workspace restart will recover." It surfaces unrecoverable wedges to two consumers:

  • Heartbeat — reads runtime_wedge.is_wedged() on each beat and reports runtime_state="wedged" to the platform, which flips the workspace card to degraded so the canvas surfaces a Restart hint instead of leaving the user staring at a green dot while every chat hangs.
  • Boot smoke (smoke_mode) — when the publish-image workflow boots the image with MOLECULE_SMOKE_MODE=1, the smoke runner consults runtime_wedge.is_wedged() at the end of every result path and upgrades a provisional PASS to FAIL when the flag is set. Catches PR-25-class regressions (malformed CLI argv → SDK init wedge) BEFORE the broken image ships to GHCR.

The executor sets the flag in its catch arm in claude_sdk_executor.py (_mark_sdk_wedged) when claude_agent_sdk raises Control request timeout: initialize — that wedge corrupts the SDK's internal client-process state for the rest of the Python process, so every subsequent _run_query() call would hit the same wedge and re-throw without intervention. The flag is cleared automatically on the next successful query (_clear_sdk_wedge_on_success) so a transient handshake blip self-heals to online without a manual restart.

Channels CLI flag

The executor passes extra_args={"dangerously-load-development-channels": "server:molecule"} to claude-agent-sdk when building ClaudeAgentOptions (see _build_options in claude_sdk_executor.py). This forwards --dangerously-load-development-channels server:molecule to the spawned claude CLI so the host registers the experimental experimental.claude/channel capability instead of dropping the notification on the allowlist check.

The flag's value MUST be in tagged form — server:<name> for manually-configured MCP servers, plugin:<name>@<marketplace> for plugin channels. Claude Code 2.1.x+ rejects the bare flag with argument missing and the SDK times out at initialize, surfacing as Control request timeout: initialize upstream (which then trips the wedge path described above).

Why this is needed: the in-workspace MCP server (the a2a server) emits experimental.claude/channel notifications so inbound peer/canvas messages render as <channel> push tags inline in the host claude session, without the agent having to poll an inbox. The wheel ships the gates and the inbox bridge fires the notification, but without this flag the CLI silently filters it during the channels research preview.

Drop this flag once channels graduate from research preview to the default allowlist.