Compare commits

...

5 Commits

Author SHA1 Message Date
Hongming Wang
31d1100149 docs(claude): author CLAUDE.md (template had no docs)
Some checks failed
CI / validate (push) Failing after 0s
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 20:03:56 -07:00
Hongming Wang
549d956345
Merge pull request #7 from Molecule-AI/chore/enroll-secret-scan
chore(ci): enroll in org-wide secret-scan reusable workflow (Molecule-AI/molecule-core#2109)
2026-04-29 13:45:45 -07:00
Hongming Wang
4b15398fa6
Merge pull request #8 from Molecule-AI/fix/a2a-sdk-v1-symbol-rename
fix: rename new_agent_text_message → new_text_message + a2a.utils → a2a.helpers (a2a-sdk v1)
2026-04-29 00:43:57 -07:00
Hongming Wang
75e069d1ab fix: rename new_agent_text_message → new_text_message + a2a.utils → a2a.helpers (a2a-sdk v1)
The lazy import inside `async def execute()` used the v0 a2a-sdk symbol and
import path. Module load doesn't evaluate it, so the boot-smoke gate
in molecule-ci's publish-template-image workflow didn't catch it,
but the image ships broken — the first A2A message hits ImportError:

    from a2a.utils import new_agent_text_message
    ImportError: cannot import name 'new_agent_text_message' from 'a2a.utils'

Verified against the running image (a2a-sdk==1.0.2):
  a2a.utils.new_agent_text_message:    False
  a2a.helpers.new_agent_text_message:  False
  a2a.utils.new_text_message:          False
  a2a.helpers.new_text_message:        True

Surfaced via cross-template audit while verifying v0.1.36 cascade
health. crewai/openclaw/autogen all share the same lazy-import bug.
Fix: rename symbol + switch import path.

Refs: molecule-core memory `reference_a2a_sdk_v0_to_v1_migration`,
task #131 (smoke gate doesn't catch lazy imports).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 00:34:57 -07:00
rabbitblood
0d14857c29 chore(ci): enroll in org-wide secret-scan reusable workflow (Molecule-AI/molecule-core#2109) 2026-04-26 20:09:07 -07:00
3 changed files with 133 additions and 3 deletions

22
.github/workflows/secret-scan.yml vendored Normal file

@@ -0,0 +1,22 @@
name: Secret scan
# Calls the canonical reusable workflow in molecule-core. Defense
# against the #2090-class leak (a hosted-agent commit slipping a
# credential-shaped string into a PR). Pattern set lives in
# molecule-core so we do not maintain a parallel copy here.
#
# Pinned to @staging because that is the active default branch on the
# upstream repo (main lags behind via the staging-promotion workflow).
# Updates ride along automatically as the upstream regex set evolves.
on:
  pull_request:
    types: [opened, synchronize, reopened]
  push:
    branches: [main, staging, master]
  merge_group:
    types: [checks_requested]
jobs:
  secret-scan:
    uses: Molecule-AI/molecule-core/.github/workflows/secret-scan.yml@staging

108
CLAUDE.md Normal file

@@ -0,0 +1,108 @@
# CLAUDE.md — molecule-ai-workspace-template-crewai
Workspace template that runs CrewAI inside the Molecule AI workspace runtime.
This is a **single-process container** booted by `molecule-runtime` (PyPI:
`molecule-ai-workspace-runtime`); it speaks A2A to peer workspaces and exposes
canvas chat through the platform.
## What this template does
- Boots a CrewAI `Agent` + `Task` + `Crew` per inbound message.
- Wires platform tools (delegation, memory, sandbox, approval, MCP bridges)
into the Crew's tool list so the agent can call them like any CrewAI tool.
- Speaks the A2A protocol to other workspaces — replies and peer delegations
flow over the platform's A2A transport, not direct HTTP.
## Files
- `adapter.py` — the CrewAI adapter (only Python you usually edit).
- `config.yaml` — runtime config: model, env requirements, model picker entries.
- `system-prompt.md` — default backstory; overridden by canvas Config tab post-deploy.
- `Dockerfile` — base image; `ENTRYPOINT ["molecule-runtime"]` boots the runtime
which discovers `Adapter` via `ADAPTER_MODULE=adapter`.
- `requirements.txt` — `molecule-ai-workspace-runtime` + `crewai>=0.100.0`.
## BaseAdapter integration point
`adapter.py` defines `CrewAIAdapter(BaseAdapter)` from
`molecule_runtime.adapters.base`. Two lifecycle hooks matter:
- `setup(config: AdapterConfig)` — imports `crewai`, calls
`self._common_setup(config)` (inherited; loads skills, builds LangChain tool
list, resolves the system prompt), then bridges each LangChain tool to a
CrewAI `@tool` via `_langchain_to_crewai()`.
- `create_executor(config) -> AgentExecutor` — returns a `CrewAIA2AExecutor`
that the runtime hands to the A2A server. **The executor is the contract**;
do not call CrewAI from `setup()`.
Module exports `Adapter = CrewAIAdapter` so `molecule-runtime` finds it.
## CrewAI specifics
`CrewAIA2AExecutor.execute(context, event_queue)` does the per-message work:
1. `extract_message_text(context)` — pull the user/peer message off A2A.
2. `set_current_task(heartbeat, brief_task(...))` — heartbeat surfaces the
current task to the platform UI; cleared in `finally`.
3. Build a fresh `Agent(role, goal, backstory, llm, tools)`,
`Task(description, expected_output, agent)`, and `Crew([agent], [task])`.
4. `await asyncio.to_thread(crew.kickoff)` — CrewAI is sync; off-load to a
thread so the event loop stays responsive for heartbeats + A2A.
5. `event_queue.enqueue_event(new_text_message(reply))` — A2A reply.
`role` is the first 100 chars of the resolved system prompt; `goal` is fixed.
History from the A2A context is folded into `task_desc` via `build_task_text`.
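The sync/async hand-off in steps 4–5 is the part most worth getting right. A minimal runnable sketch (`FakeCrew` and `handle_message` are stand-ins for illustration, not the real adapter code):

```python
import asyncio

class FakeCrew:
    """Stand-in for a CrewAI Crew: kickoff() is synchronous/blocking."""
    def kickoff(self):
        return "reply text"

async def handle_message(crew):
    # Off-load the blocking kickoff to a worker thread so the event loop
    # stays free for heartbeats and A2A traffic (step 4 above).
    result = await asyncio.to_thread(crew.kickoff)
    # The real executor would enqueue new_text_message(str(result)) here.
    return str(result)

print(asyncio.run(handle_message(FakeCrew())))  # reply text
```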
Model strings follow Molecule's `provider:model` form (e.g.
`openai:gpt-4.1-mini`). `openai:` is rewritten to `openai/` for CrewAI's
LiteLLM model router. Other providers pass through unchanged.
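A sketch of the rewrite described above (the helper name is hypothetical; only the `openai:` prefix is special-cased, everything else passes through):

```python
def to_litellm_model(model: str) -> str:
    """Rewrite Molecule's provider:model form for CrewAI's LiteLLM router.

    Sketch only: openai: becomes openai/, other providers are untouched.
    """
    if model.startswith("openai:"):
        return "openai/" + model[len("openai:"):]
    return model

print(to_litellm_model("openai:gpt-4.1-mini"))    # openai/gpt-4.1-mini
print(to_litellm_model("anthropic:claude-x"))     # anthropic:claude-x
```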
## Tool wrapping (Molecule MCP -> CrewAI)
`_langchain_to_crewai(lc_tool)` wraps each LangChain `BaseTool` (the platform
tool surface — `delegate_task`, `send_message_to_user`, memory, sandbox, etc.)
as a sync CrewAI `@tool`. CrewAI's decorator reads `__doc__` at decoration
time, so `wrapper.__doc__` is set from the LangChain `description` **before**
applying `crewai_tool(...)`. The wrapper bridges sync->async via
`asyncio.get_event_loop().run_until_complete(lc_tool.ainvoke(kwargs))`.
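The decoration-time ordering can be illustrated without importing CrewAI; `doc_reading_decorator` below is a hypothetical minimal model of a decorator that, like CrewAI's `@tool`, reads `__doc__` when it is applied:

```python
def doc_reading_decorator(fn):
    # Like CrewAI's @tool, the description is captured at decoration time.
    fn.captured_description = fn.__doc__
    return fn

def wrap(lc_description: str):
    """Sketch of _langchain_to_crewai's ordering constraint."""
    def wrapper(**kwargs):
        return f"called with {kwargs}"
    # Crucial: set __doc__ BEFORE applying the decorator; setting it
    # afterwards is too late and the tool description would be empty.
    wrapper.__doc__ = lc_description
    return doc_reading_decorator(wrapper)

tool = wrap("Delegate a task to a peer workspace")
print(tool.captured_description)  # Delegate a task to a peer workspace
```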
## Common gotchas
- **Sync/async boundary.** `crew.kickoff()` is blocking; always wrap with
`asyncio.to_thread`. The tool wrapper uses `run_until_complete`, which works
because CrewAI invokes tools from worker threads, not the main loop.
- **CrewAI tool docstrings are mandatory.** Setting `__doc__` after the
`@tool` decorator runs is too late — keep the order in `_langchain_to_crewai`.
- **Model provider routing.** Only `openai:` prefix rewrite exists today;
Anthropic / Bedrock / others rely on LiteLLM defaults from the raw string.
- **Per-message Crew construction.** A new `Agent`/`Crew` is built every call
— there is no in-memory CrewAI state across messages. State lives in the
history extracted from A2A context, not inside CrewAI.
- **Errors swallowed to a reply.** The `try/except` in `execute()` returns
`f"CrewAI error: {e}"` to the user instead of raising; check container logs
for stack traces.
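The error-swallowing pattern from the last gotcha, as a runnable sketch (`execute_safely` and its parameters are hypothetical names, not the adapter's API):

```python
import asyncio
import logging

async def execute_safely(run, send_reply):
    # The exception is folded into the reply text instead of propagating,
    # so the full stack trace appears only in container logs.
    try:
        reply = await run()
    except Exception as e:
        logging.getLogger(__name__).exception("CrewAI execution failed")
        reply = f"CrewAI error: {e}"
    await send_reply(reply)

async def demo():
    sent = []
    async def boom():
        raise ValueError("model unavailable")
    async def capture(msg):
        sent.append(msg)
    await execute_safely(boom, capture)
    return sent

print(asyncio.run(demo()))  # ['CrewAI error: model unavailable']
```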
## Conventions
- New Molecule platform tools land in `molecule-core` / runtime — they will
show up automatically once `_common_setup` returns them in `langchain_tools`.
Do not register tools by hand here.
- Skills load via `_common_setup` driven by `config.yaml` `skills:` (none in
this template by default).
- `/workspace` is the agent's read/write scratch dir; `/configs/config.yaml`
is the live config mount.
- Logs: stdout/stderr from this process is captured by the platform; use the
module logger (`logger = logging.getLogger(__name__)`).
## What NOT to do
- Don't break the `BaseAdapter` contract (`setup` + `create_executor`) — the
runtime discovers and drives it, and the publish-runtime smoke test boots
this adapter from a freshly built wheel.
- Don't bypass A2A for inter-workspace calls. Use the `delegate_task` tool;
direct HTTP between workspaces will skip auth, audit, and the platform's
A2A queue.
- Don't import CrewAI at module top level — `setup()` does the import inside
a `try/except ImportError` so missing deps fail with a clear message.
- Don't edit code paths assumed by the runtime smoke test without updating
the corresponding template-publish workflow in `.github/workflows/`.
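The lazy-import rule above generalizes to a small pattern (sketch; `lazy_import` is a hypothetical helper, not part of the runtime): import inside a function and convert `ImportError` into an actionable message instead of an opaque boot failure.

```python
import importlib

def lazy_import(module_name: str):
    """Import a dependency at call time with a clear failure message."""
    try:
        return importlib.import_module(module_name)
    except ImportError as e:
        raise RuntimeError(
            f"{module_name} is not installed; add it to requirements.txt"
        ) from e

json_mod = lazy_import("json")  # stdlib module resolves normally
try:
    lazy_import("definitely_missing_pkg")
except RuntimeError as e:
    print(e)  # definitely_missing_pkg is not installed; add it to requirements.txt
```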

View File

@@ -90,14 +90,14 @@ class CrewAIA2AExecutor(AgentExecutor):
         self._heartbeat = heartbeat

     async def execute(self, context, event_queue):
-        from a2a.utils import new_agent_text_message
+        from a2a.helpers import new_text_message
         from molecule_runtime.adapters.shared_runtime import extract_history, build_task_text, brief_task, set_current_task
         from molecule_runtime.adapters.shared_runtime import extract_message_text
         user_message = extract_message_text(context)
         if not user_message:
-            await event_queue.enqueue_event(new_agent_text_message("No message provided"))
+            await event_queue.enqueue_event(new_text_message("No message provided"))
             return
         await set_current_task(self._heartbeat, brief_task(user_message))
@@ -138,7 +138,7 @@ class CrewAIA2AExecutor(AgentExecutor):
         finally:
             await set_current_task(self._heartbeat, "")
-        await event_queue.enqueue_event(new_agent_text_message(reply))
+        await event_queue.enqueue_event(new_text_message(reply))

     async def cancel(self, context, event_queue):  # pragma: no cover
         pass