System Architecture
Overview
The platform is fully distributed — workspaces can run on different machines and communicate via the A2A (Agent-to-Agent) protocol.
System Boundaries
The platform consists of four distinct systems, backed by Postgres and Redis:
+-----------------------------------------------------------+
|  canvas/                             Next.js 15 frontend  |
|  React Flow visual canvas, Zustand state, WebSocket       |
+-----------------------------+-----------------------------+
                              | HTTP + WebSocket
+-----------------------------v-----------------------------+
|  platform/                              Go (gin) backend  |
|  Registry, hierarchy, event log, provisioner, bundles     |
+------+---------------------------------------+------------+
       | Postgres                              | Redis
+------v------+                      +---------v---------+
|  Postgres   |                      |       Redis       |
|  (Docker)   |                      |     (Docker)      |
+-------------+                      +-------------------+

   A2A HTTP (JSON-RPC 2.0) — direct workspace-to-workspace

+-----------------------------------------------------------+
|  workspace-template/          pluggable workspace runtime |
|  LangGraph / DeepAgents / Claude Code / CrewAI / AutoGen  |
|  / OpenClaw + a2a-sdk                                     |
+-----------------------------------------------------------+
       |
+------v----------------------------------------------------+
|  Langfuse                          Observability (Docker) |
|  Traces every LLM call across all workspaces              |
+-----------------------------------------------------------+
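Workspace-to-workspace calls ride on JSON-RPC 2.0 over HTTP. As an illustrative sketch (not the a2a-sdk API — the real message schema is defined there, and the `message/send` method name follows the public A2A spec rather than anything shown in this doc), the envelope one workspace sends another looks like this:

```python
import json


def make_a2a_request(method: str, params: dict, request_id: int) -> str:
    """Build a JSON-RPC 2.0 request envelope of the kind A2A transports."""
    return json.dumps({
        "jsonrpc": "2.0",   # fixed protocol version marker
        "id": request_id,   # correlates the response to this request
        "method": method,
        "params": params,
    })


# Hypothetical payload for illustration only.
envelope = make_a2a_request(
    "message/send",
    {"message": {"role": "user", "parts": [{"text": "status?"}]}},
    request_id=1,
)
print(envelope)
```

In practice the workspace would POST this body to the peer workspace's A2A endpoint directly; the platform is not in the request path.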
Plugins
Shared skill/rule collections mounted into every workspace container at /plugins/:
plugins/
├── ecc/ # Everything Claude Code
│ ├── rules/ # Always-on guidelines
│ ├── skills/ # Shared skills (coding-standards, tdd-workflow, etc.)
│ └── AGENTS.md # Prompt fragment
└── superpowers/ # obra/superpowers
└── skills/ # Shared skills (debugging, verification, etc.)
Plugins inject rules and skills into all agents automatically. Workspace-specific skills take priority over plugin skills (deduplicated by ID).
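The priority rule above amounts to a last-writer-wins merge keyed on skill ID. A minimal sketch (function and field names are hypothetical, not the runtime's actual API):

```python
def merge_skills(plugin_skills: list[dict], workspace_skills: list[dict]) -> list[dict]:
    """Merge skill collections; on duplicate IDs the workspace skill wins."""
    merged = {s["id"]: s for s in plugin_skills}      # plugin skills first
    merged.update({s["id"]: s for s in workspace_skills})  # workspace overrides
    return list(merged.values())


plugin = [
    {"id": "tdd-workflow", "source": "plugin"},
    {"id": "debugging", "source": "plugin"},
]
local = [{"id": "debugging", "source": "workspace"}]

for skill in merge_skills(plugin, local):
    print(skill["id"], "<-", skill["source"])
```

Here `debugging` resolves to the workspace copy while `tdd-workflow` falls through to the plugin version.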
Data Flow Summary
- Canvas <-> Platform: HTTP REST + WebSocket for real-time events
- Platform <-> Postgres: Source of truth for registry, hierarchy, events
- Platform <-> Redis: Ephemeral state — liveness, caching, pub/sub
- Platform -> Workspace: Provisioning (tiered Docker deployment), discovery
- Workspace <-> Workspace: Direct A2A (JSON-RPC 2.0) — platform not in path
- Workspace -> Langfuse: Automatic LLM tracing
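Liveness in Redis is typically modeled as a per-workspace key refreshed on heartbeat and expired by TTL. The platform's actual scheme isn't specified here, so this is an in-memory stand-in (class and method names are hypothetical) showing the idea:

```python
import time


class LivenessTracker:
    """In-memory stand-in for Redis-style TTL liveness keys
    (conceptually: SET workspace:<id>:alive 1 EX <ttl> on each heartbeat)."""

    def __init__(self, ttl_seconds: float) -> None:
        self.ttl = ttl_seconds
        self._expiry: dict[str, float] = {}

    def heartbeat(self, workspace_id: str) -> None:
        # Refresh the expiry deadline, like re-SETting a key with EX.
        self._expiry[workspace_id] = time.monotonic() + self.ttl

    def is_alive(self, workspace_id: str) -> bool:
        # A workspace is alive while its key has not yet "expired".
        return time.monotonic() < self._expiry.get(workspace_id, 0.0)


tracker = LivenessTracker(ttl_seconds=10.0)
tracker.heartbeat("ws-1")
print(tracker.is_alive("ws-1"), tracker.is_alive("ws-2"))
```

Keeping this state in Redis rather than Postgres matches the split above: liveness is ephemeral and self-expiring, while Postgres stays the durable source of truth.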
Folder Structure
molecule/
|
+-- docker-compose.yml # full local dev stack
+-- docker-compose.infra.yml # postgres, redis, langfuse only
+-- .env.example
+-- README.md
|
+-- canvas/ # Next.js 15 frontend
+-- platform/ # Go backend
+-- workspace-template/ # Python agent runtime (generic image)
+-- workspace-configs-templates/ # workspace personality definitions
+-- infra/ # scripts + langfuse compose
+-- docs/ # documentation
Related Docs
- Platform API — Go backend details
- Workspace Runtime — Runtime layer details
- Canvas UI — Next.js frontend details
- Technology Choices — Why each piece was chosen