* fix(security): call redactSecrets before seeding workspace memories (F1085)

  seedInitialMemories() in workspace_provision.go was inserting template/config memories directly into agent_memories without scrubbing credential patterns. A workspace provisioned from a template containing API keys, tokens, or other secrets would store them in plain text — the same class of issue as #838.

  Fix: call redactSecrets(workspaceID, content) on the truncated memory content before the INSERT. The truncation (maxMemoryContentLength = 100 KiB, CWE-400) is preserved — redaction runs after truncation so the size limit still applies.

  Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* test(workspace_provision): add seedInitialMemories coverage for #1208

  Cover the truncate-at-100k boundary (PR #1167, CWE-400) and the redactSecrets call (F1085 / #1132), both identified as untested in #1208.

  - TestSeedInitialMemories_TruncatesOversizedContent: boundary at exactly 100k, 1 byte over, far over, and well under. Verifies INSERT receives exactly maxMemoryContentLength bytes.
  - TestSeedInitialMemories_RedactsSecrets: verifies redactSecrets runs before INSERT; regression test for F1085.
  - TestSeedInitialMemories_InvalidScopeSkipped: invalid scope is silently skipped; no INSERT called.
  - TestSeedInitialMemories_EmptyMemoriesNil: nil slice is handled without DB calls.

  Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* docs(marketing): Discord adapter launch visual assets (#1209)

  Squash-merge: Discord adapter launch visual assets (3 PNGs) + social copy. Acceptance: assets on staging.

* fix(ci): golangci-lint errcheck failures on staging

  Suppress errcheck warnings for calls where the return value is safely ignored:

  - resp.Body.Close() (artifacts/client.go): deferred cleanup — failure to close a response body is non-critical; the defer itself is what matters for connection reuse.
  - rows.Close() (bundle/exporter.go): deferred cleanup in a loop where rows.Err() already handles query errors.
  - filepath.Walk (bundle/exporter.go): top-level walk call; errors in sub-directory traversal are handled by the inner callback (which returns nil for err != nil).
  - broadcaster.RecordAndBroadcast (bundle/importer.go): fire-and-forget event broadcast; errors are logged internally by the broadcaster.
  - db.DB.ExecContext (bundle/importer.go): best-effort runtime column update; non-critical auxiliary data that the provisioner re-extracts if needed.

  Fixes: #1143

* test(artifacts): suppress w.Write return values to satisfy errcheck

  All httptest.ResponseRecorder.Write calls in client_test.go now discard the byte count and error return with a _, _ = prefix. The return values are safe to discard in test handlers — httptest.ResponseRecorder.Write never returns an error for in-memory buffers.

  Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(CI): move changes job off self-hosted runner + add workflow concurrency

  Cherry-pick from staging PR #1194 for main. Two changes to relieve macOS arm64 runner saturation:

  1. `changes` job: runs on ubuntu-latest instead of [self-hosted, macos, arm64]. This job does a plain `git diff` with zero macOS dependencies — moving it off the runner frees a slot immediately on every workflow trigger.
  2. Add workflow-level concurrency:

         concurrency:
           group: ci-${{ github.ref }}
           cancel-in-progress: true

     Prevents multiple stale in-flight CI runs from queuing on the same ref when new commits arrive.

  Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(security): call redactSecrets before seeding workspace memories (F1085) (#1203)

  seedInitialMemories() in workspace_provision.go was inserting template/config memories directly into agent_memories without scrubbing credential patterns. A workspace provisioned from a template containing API keys, tokens, or other secrets would store them in plain text — the same class of issue as #838.
  Fix: call redactSecrets(workspaceID, content) on the truncated memory content before the INSERT. The truncation (maxMemoryContentLength = 100 KiB, CWE-400) is preserved — redaction runs after truncation so the size limit still applies.

  Co-authored-by: Molecule AI Core-BE <core-be@agents.moleculesai.app>
  Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>

* tick: 2026-04-21 ~03:40Z — CI stalled 59+ min, GH_TOKEN 4th rotation, PR reviews done

* fix(tenant-guard): allowlist /registry/register + /registry/heartbeat

  Final layer of today's stuck-provisioning saga. With the private-IP platform_url fix and the intra-VPC :8080 SG rule in place, workspace EC2s finally reached the tenant on the right port — only to have every POST bounced with a synthetic 404 by TenantGuard.

  TenantGuard is the SaaS hook that rejects cross-tenant routing. It demands X-Molecule-Org-Id on every request, but CP's workspace user-data doesn't export MOLECULE_ORG_ID (only WORKSPACE_ID, PLATFORM_URL, RUNTIME, PORT), so the runtime can't attach the header. Net effect: every workspace's first heartbeat to /registry/heartbeat was a silent 404, and the workspace sat in 'provisioning' until the platform sweeper timed it out.

  Allowlist the two workspace-boot paths:

  - /registry/register — one-shot at runtime startup
  - /registry/heartbeat — every 30s

  Both are still gated by wsauth.HasAnyLiveToken (workspaces with a token on file must present it; legacy tokenless workspaces are grandfathered), and the tenant SG already scopes :8080 to the VPC CIDR, so only intra-VPC callers can reach these paths in the first place. The allowlist bypasses cross-org routing, not auth.

  Follow-up: passing MOLECULE_ORG_ID into the workspace env would let the runtime attach the header and drop this allowlist entry. Tracked separately; not urgent since the multi-layer auth above is already adequate.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Molecule AI Core-BE <core-be@agents.moleculesai.app>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Molecule AI Infra-SRE <infra-sre@agents.moleculesai.app>
Co-authored-by: molecule-ai[bot] <276602405+molecule-ai[bot]@users.noreply.github.com>
Co-authored-by: Molecule AI Core-DevOps <core-devops@agents.moleculesai.app>
Co-authored-by: Molecule AI Core-UIUX <core-uiux@agents.moleculesai.app>
Co-authored-by: Hongming Wang <hongmingwang.rabbit@users.noreply.github.com>
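The ordering constraint in the redactSecrets fix above (truncate first so the CWE-400 size cap applies, then redact) can be sketched as follows. This is an illustrative sketch, not the repo's code: the regex, the exported signatures, and the 100 KiB constant here are assumptions modeled on the commit message.

```go
package main

import (
	"fmt"
	"regexp"
)

// Mirrors the cap described in the commit (100 KiB); illustrative only.
const maxMemoryContentLength = 100 * 1024

// Hypothetical credential pattern; a real redactSecrets would cover far more.
var secretPattern = regexp.MustCompile(`(?i)(api[_-]?key|token|secret)\s*[:=]\s*\S+`)

// truncateContent enforces the size limit first (the CWE-400 guard).
func truncateContent(content string) string {
	if len(content) > maxMemoryContentLength {
		return content[:maxMemoryContentLength]
	}
	return content
}

// redactSecrets scrubs credential-looking substrings.
func redactSecrets(content string) string {
	return secretPattern.ReplaceAllString(content, "[REDACTED]")
}

// prepareMemoryContent mirrors the fixed ordering: truncate, then redact,
// so the size limit is applied before any rewriting happens.
func prepareMemoryContent(content string) string {
	return redactSecrets(truncateContent(content))
}

func main() {
	fmt.Println(prepareMemoryContent("note: API_KEY=sk-live-12345 is set"))
}
```

The point of the ordering is that redaction operates on already-bounded input, so an attacker cannot use an oversized memory to bypass the cap.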
The Org-Native Control Plane For Heterogeneous AI Agent Teams
The world's most powerful governance platform for AI agent teams.
Visual Canvas • Runtime Compatibility • Hierarchical Memory • Skill Evolution • Operational Guardrails
Docs Home • Quick Start • Architecture • Platform API • Workspace Runtime
The Pitch
Molecule AI is the most powerful way to govern an AI agent organization in production.
It combines the parts that are usually scattered across demos, internal glue code, and framework-specific tooling into one product:
- one org-native control plane for teams, roles, hierarchy, and lifecycle
- one runtime layer that lets LangGraph, DeepAgents, Claude Code, CrewAI, AutoGen, and OpenClaw run side by side
- one memory model that keeps recall, sharing, and skill evolution aligned with organizational boundaries
- one operational surface for observing, pausing, restarting, inspecting, and improving live workspaces
Most teams can build a workflow, a strong single agent, a coding agent, or a custom multi-agent graph.
Very few teams can run all of that as a governed organization with clear structure, durable memory boundaries, and production operations.
That is the gap Molecule AI closes.
Why Molecule AI Feels Different
1. The node is a role, not a task
In Molecule AI, a workspace is an organizational role. That role can begin as one agent, later expand into a sub-team, and still keep the same external identity, hierarchy position, memory boundary, and A2A interface.
2. The org chart is the topology
You do not wire collaboration paths by hand. Hierarchy defines the default communication surface. The structure is not decorative UI. It is part of the operating model.
3. Runtime choice stops being a dead-end decision
LangGraph, DeepAgents, Claude Code, CrewAI, AutoGen, and OpenClaw can all plug into the same workspace abstraction. Teams can standardize governance without forcing every group onto one runtime.
4. Memory is treated like infrastructure
Molecule AI's HMA approach is designed around organizational boundaries, not just “store more context somewhere.” Durable recall, scoped sharing, awareness namespaces, and skill promotion are all part of one coherent system.
5. It comes with a real control plane
Registry, heartbeats, restart, pause/resume, activity logs, approvals, terminal access, files, traces, bundles, templates, and WebSocket fanout are not afterthoughts. They are first-class parts of the platform.
The Category Gap Molecule AI Fills
| Category | What it does well | Where it breaks | What Molecule AI adds |
|---|---|---|---|
| Workflow builders | Visual task automation | Nodes are tasks, not durable organizational roles | Role-native workspaces, hierarchy, long-lived teams |
| Agent frameworks | Strong runtime semantics | Weak control plane and weak org-level operations | Unified lifecycle, canvas, registry, policies, observability |
| Coding agents | Excellent local execution | Usually not designed as team infrastructure | Workspace abstraction, A2A collaboration, platform ops |
| Custom multi-agent graphs | Full flexibility | Brittle topology and governance sprawl | Standardized operating model without losing runtime freedom |
What Makes Molecule AI Defensible
| Advantage | Why it matters in practice |
|---|---|
| Role-native workspace abstraction | Your org structure survives model swaps, framework changes, and team expansion |
| Fractal team expansion | A single specialist can become a managed department without breaking upstream integrations |
| Heterogeneous runtime compatibility | Different teams can keep their preferred agent architecture while sharing one control plane |
| HMA + awareness namespaces | Memory sharing follows hierarchy instead of leaking across the whole system |
| Skill evolution loop | Durable successful workflows can graduate from memory into reusable, hot-reloadable skills |
| WebSocket-first operational UX | The canvas reflects task state, structure changes, and A2A responses in near real time |
| Global secrets with local override | Centralize provider access, then override only where a workspace needs specialized credentials |
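The "global secrets with local override" row amounts to a two-level lookup: resolve a key against workspace-local overrides first, then fall back to the org-global store. A minimal sketch (all names hypothetical — this is not the platform's actual API):

```go
package main

import "fmt"

// SecretStore resolves a key by checking workspace-local overrides first,
// then falling back to org-global secrets.
type SecretStore struct {
	global map[string]string            // org-wide provider keys
	local  map[string]map[string]string // per-workspace overrides
}

// Resolve returns the effective value for a workspace and whether one exists.
func (s *SecretStore) Resolve(workspaceID, key string) (string, bool) {
	if ws, ok := s.local[workspaceID]; ok {
		if v, ok := ws[key]; ok {
			return v, true // local override wins
		}
	}
	v, ok := s.global[key]
	return v, ok
}

func main() {
	store := &SecretStore{
		global: map[string]string{"OPENAI_API_KEY": "global-key"},
		local: map[string]map[string]string{
			"ws-research": {"OPENAI_API_KEY": "research-key"},
		},
	}
	v, _ := store.Resolve("ws-research", "OPENAI_API_KEY")
	fmt.Println(v) // the workspace override
	v, _ = store.Resolve("ws-dev", "OPENAI_API_KEY")
	fmt.Println(v) // the global fallback
}
```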
Runtime Compatibility, Compared
Molecule AI is not trying to replace the frameworks below. It is the system that makes them easier to run together.
| Runtime / architecture | Status in current repo | Native strength | What Molecule AI adds |
|---|---|---|---|
| LangGraph | Shipping on main | Graph control, tool use, Python extensibility | Canvas orchestration, hierarchy routing, A2A, memory scopes, operational lifecycle |
| DeepAgents | Shipping on main | Deeper planning and decomposition | Same workspace contract, team topology, activity stream, restart behavior |
| Claude Code | Shipping on main | Real coding workflows, CLI-native continuity | Secure workspace abstraction, A2A delegation, org boundaries, shared control plane |
| CrewAI | Shipping on main | Role-based crews | Persistent workspace identity, policy consistency, shared canvas and registry |
| AutoGen | Shipping on main | Assistant/tool orchestration | Standardized deployment, hierarchy-aware collaboration, shared ops plane |
| OpenClaw | Shipping on main | CLI-native runtime with its own session model | Workspace lifecycle, templates, activity logs, topology-aware collaboration |
| NemoClaw | WIP on feat/nemoclaw-t4-docker | NVIDIA-oriented runtime path | Planned to join the same abstraction once merged; not yet part of main |
This is the key idea: many agent runtimes, one organizational operating system.
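One way to read "many runtimes, one abstraction" is that each framework sits behind a small adapter contract, so the control plane never special-cases a runtime. The interface below is hypothetical — a sketch of the idea, not the repo's actual adapter API:

```go
package main

import (
	"context"
	"fmt"
)

// RuntimeAdapter is a hypothetical contract each framework binding
// (LangGraph, CrewAI, Claude Code, ...) would implement behind the
// same workspace surface.
type RuntimeAdapter interface {
	Name() string
	Execute(ctx context.Context, task string) (string, error)
}

// echoAdapter stands in for a real framework binding.
type echoAdapter struct{ name string }

func (a echoAdapter) Name() string { return a.name }
func (a echoAdapter) Execute(_ context.Context, task string) (string, error) {
	return fmt.Sprintf("[%s] handled: %s", a.name, task), nil
}

// dispatch routes a task to the adapter configured for the workspace's
// runtime, keeping governance logic runtime-agnostic.
func dispatch(adapters map[string]RuntimeAdapter, runtime, task string) (string, error) {
	a, ok := adapters[runtime]
	if !ok {
		return "", fmt.Errorf("unknown runtime %q", runtime)
	}
	return a.Execute(context.Background(), task)
}

func main() {
	adapters := map[string]RuntimeAdapter{
		"langgraph": echoAdapter{"langgraph"},
		"crewai":    echoAdapter{"crewai"},
	}
	out, _ := dispatch(adapters, "crewai", "summarize backlog")
	fmt.Println(out)
}
```

Swapping a workspace's runtime then changes only which adapter is registered, not anything upstream of `dispatch`.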
Why The Memory Architecture Compounds
Most projects stop at “we added memory.” Molecule AI pushes further:
| Conventional memory setup | Molecule AI |
|---|---|
| Flat store or weak namespaces | Hierarchy-aligned LOCAL, TEAM, GLOBAL scopes |
| Sharing is easy to overexpose | Sharing is explicit and structure-aware |
| Memory and procedure get mixed together | Memory stores durable facts; skills store repeatable procedure |
| Every agent can become over-privileged | Workspace awareness namespaces reduce blast radius |
| UI memory and runtime memory blur together | Separate surfaces for scoped agent memory, key/value workspace memory, and recall |
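The LOCAL/TEAM/GLOBAL split in the table above can be sketched as a visibility rule applied at recall time. This is an illustrative model of hierarchy-aligned scoping, not the platform's actual schema or query logic:

```go
package main

import "fmt"

// Scope mirrors the hierarchy-aligned memory scopes named above.
type Scope int

const (
	ScopeLocal Scope = iota
	ScopeTeam
	ScopeGlobal
)

type Memory struct {
	WorkspaceID string
	TeamID      string
	Scope       Scope
	Content     string
}

// visibleTo applies a hypothetical visibility rule: LOCAL stays in its
// workspace, TEAM stays in its team, GLOBAL is org-wide.
func visibleTo(m Memory, workspaceID, teamID string) bool {
	switch m.Scope {
	case ScopeLocal:
		return m.WorkspaceID == workspaceID
	case ScopeTeam:
		return m.TeamID == teamID
	case ScopeGlobal:
		return true
	}
	return false
}

// recall filters a store down to what one workspace may see.
func recall(store []Memory, workspaceID, teamID string) []string {
	var out []string
	for _, m := range store {
		if visibleTo(m, workspaceID, teamID) {
			out = append(out, m.Content)
		}
	}
	return out
}

func main() {
	store := []Memory{
		{WorkspaceID: "ws-a", TeamID: "eng", Scope: ScopeLocal, Content: "private note"},
		{WorkspaceID: "ws-b", TeamID: "eng", Scope: ScopeTeam, Content: "team runbook"},
		{WorkspaceID: "ws-c", TeamID: "ops", Scope: ScopeGlobal, Content: "org policy"},
	}
	fmt.Println(recall(store, "ws-a", "eng")) // sees all three
	fmt.Println(recall(store, "ws-d", "ops")) // sees only the global entry
}
```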
The flywheel
Task execution
-> durable insight captured in memory
-> repeated success becomes a signal
-> workflow promoted into a reusable skill
-> skill hot-reloads into the runtime
-> future work gets faster and more reliable
This is one of Molecule AI's strongest long-term advantages: the system can get more operationally capable without turning into one giant hidden prompt.
Self-Improving Agent Teams, Built Into Molecule AI
Most agent systems stop at "a smart runtime." Molecule AI pushes further: it gives teams a way to capture what worked, promote repeatable procedure into skills, reload those improvements into live workspaces, and keep the whole loop visible at the platform level.
| Positioning lens | Conventional self-improving agent pattern | Molecule AI |
|---|---|---|
| Unit of improvement | A single agent session or runtime | A workspace, a team, and eventually the whole org graph |
| Operational surface | Mostly hidden inside the agent loop | Visible in the platform, Canvas, activity stream, memory surfaces, and runtime controls |
| Strategic outcome | A smarter agent | A compounding organization with durable knowledge and governed reusable skills |
Where that shows up in Molecule AI
| Core mechanism | Molecule AI module(s) | Why it matters |
|---|---|---|
| Durable memory that survives sessions | `workspace/builtin_tools/memory.py`, `workspace/builtin_tools/awareness_client.py`, `workspace-server/internal/handlers/memories.go` | Memory is not just durable; it is workspace-scoped and can route into awareness namespaces tied to the org structure |
| Cross-session recall | `workspace-server/internal/handlers/activity.go` (`/workspaces/:id/session-search`) | Recall spans both activity history and memory rows, so the system can search what happened and what was learned without inventing a separate hidden store |
| Skills built from experience | `workspace/builtin_tools/memory.py` (`_maybe_log_skill_promotion`) | Promotion from memory into a skill candidate is surfaced as an explicit platform activity, not a silent internal side effect |
| Skill improvement during use | `workspace/skill_loader/watcher.py`, `workspace/skill_loader/loader.py`, `workspace/main.py` | Skills hot-reload into the live runtime, so improvements become available on the next A2A task without restarting the workspace |
| Persistent skill lifecycle | `workspace-server/cmd/cli/cmd_agent_skill.go`, `workspace/plugins.py` | Skills are not just generated once; they can be audited, installed, published, shared, mounted by plugins, and governed as reusable operational assets |
Why this matters in Molecule AI
- The learning loop is org-aware, not just session-aware. Memory can live at `LOCAL`, `TEAM`, or `GLOBAL` scope, and awareness namespaces give each workspace a durable identity boundary.
- The learning loop is visible to operators. Promotion events, activity logs, current-task updates, traces, and WebSocket fanout mean self-improvement is part of the control plane, not a hidden black box.
- The learning loop compounds across teams, not just one agent. A workflow learned by one workspace can become a governed skill, reload into the runtime, appear in the Agent Card, and become usable inside a larger organizational hierarchy.
The result is not just “an agent that learns.” It is an organization that gets more capable as its workspaces accumulate durable memory and reusable procedure.
What Ships In main
Canvas
- Next.js 15 + React Flow + Zustand
- drag-to-nest team building
- empty-state deployment + onboarding wizard
- template palette
- bundle import/export
- 10-tab side panel for chat, activity, details, skills, terminal, config, files, memory, traces, and events
Platform
- Go/Gin control plane
- workspace CRUD and provisioning
- registry and heartbeats
- browser-safe A2A proxy
- team expansion/collapse
- activity logs and approvals
- secrets and global secrets
- files API, terminal, bundles, templates, viewport persistence
Runtime
- unified `workspace/` image
- adapter-driven execution
- Agent Card registration
- awareness-backed memory integration
- plugin-mounted shared rules/skills
- hot-reloadable local skills
- coordinator-only delegation path
Ops
- Langfuse traces
- current-task reporting
- pause/resume/restart flows
- activity streaming
- runtime tiers
- direct workspace inspection through terminal and files
Built For Teams That Need More Than A Demo
Molecule AI is especially strong when you need to run:
- AI engineering teams with PM / Dev Lead / QA / Research / Ops roles
- mixed runtime organizations where one team prefers LangGraph and another prefers Claude Code
- long-lived agent organizations that need memory boundaries and reusable procedures
- internal platforms that want to expose agent teams as structured infrastructure, not ad hoc scripts
Architecture
    Canvas (Next.js :3000) <--HTTP / WS--> Platform (Go :8080) <---> Postgres + Redis
            |                                      |
            |                                      +--> Docker provisioner / bundles / templates / secrets
            |
            +-------------------- shows --------------------> workspaces, teams, tasks, traces, events
Workspace Runtime (Python image with adapters)
- LangGraph / DeepAgents / Claude Code / CrewAI / AutoGen / OpenClaw
- Agent Card + A2A server
- heartbeat + activity + awareness-backed memory
- skills + plugins + hot reload
Quick Start
    git clone https://github.com/Molecule-AI/molecule-monorepo.git
    cd molecule-monorepo
    ./infra/scripts/setup.sh
    # Boots Postgres (:5432), Redis (:6379), Langfuse (:3001),
    # and Temporal (:7233 gRPC, :8233 UI) on the shared
    # `molecule-monorepo-net` Docker network. Temporal runs with
    # no auth on localhost — dev-only; production must gate it.

    cd workspace-server
    go run ./cmd/server

    cd ../canvas
    npm install
    npm run dev
Then open http://localhost:3000:
- Deploy a template or create a blank workspace from the empty state.
- Follow the onboarding guide into `Config`.
- Add a provider key in `Secrets & API Keys`.
- Open `Chat` and send the first task.
Documentation Map
- Docs Home
- Quick Start
- Product Overview
- System Architecture
- Memory Architecture
- Platform API
- Workspace Runtime
- Canvas UI
- Local Development
- Ecosystem Watch — adjacent projects we track (Holaboss, Hermes, gstack, …)
- Glossary — how we use "harness", "workspace", "plugin", "flow" vs. ecosystem neighbors
Current Scope
The current main branch already includes the core platform, canvas, memory model, six production adapters, skill lifecycle, and operational surfaces. Adjacent runtime work such as NemoClaw remains branch-level until merged, and this README keeps that distinction explicit on purpose.
License
Business Source License 1.1 — copyright © 2025 Molecule AI.
Personal, internal, and non-commercial use is permitted without restriction. You may not use the Licensed Work to offer a competing product or service. On January 1, 2029, the license converts to Apache 2.0.
