Compare commits


1 Commit

Author SHA1 Message Date
b0945f40a2 docs(readme): document peer_name/peer_role/agent_card_url envelope fields (2026-05-02)
The platform's inbound A2A envelope was enriched on 2026-05-02 with
three additional meta fields that the README's "How replies work"
sample didn't cover:

- peer_name      — peer's display name (registry-resolved; may be absent
                   on registry-lookup failure)
- peer_role      — peer's declared role (same registry source)
- agent_card_url — deterministic URL of the platform's discover endpoint
                   for this peer (always populated)

Updates the JSON sample to include all three (existing fields retained,
nothing removed) and adds a one-line compatibility note dated to the
change with a forward-pointer to the runtime-mcp docs page.

The plugin already forwards whatever the platform sends, so older
platforms that pre-date this change continue to work unchanged — only
the docs were stale.
2026-05-01 20:07:16 -07:00
10 changed files with 13 additions and 594 deletions

View File

@@ -1,28 +0,0 @@
{
  "name": "molecule-channel",
  "owner": {
    "name": "Molecule AI",
    "email": "support@moleculesai.app",
    "url": "https://moleculesai.app"
  },
  "plugins": [
    {
      "name": "molecule",
      "source": {
        "source": "url",
        "url": "https://git.moleculesai.app/molecule-ai/molecule-mcp-claude-channel.git"
      },
      "description": "Bridges Molecule A2A traffic into a Claude Code session via MCP. Subscribe to one or more Molecule workspaces; A2A messages from peers surface as conversation turns; replies route back through Molecule's A2A endpoints.",
      "version": "0.4.0-gitea.3",
      "homepage": "https://git.moleculesai.app/molecule-ai/molecule-mcp-claude-channel",
      "license": "Apache-2.0",
      "keywords": [
        "molecule",
        "molecule-ai",
        "a2a",
        "channel",
        "mcp"
      ]
    }
  ]
}

View File

@@ -1,7 +1,7 @@
{
  "name": "molecule",
  "description": "Molecule AI channel for Claude Code — bridges Molecule A2A traffic into a Claude Code session via MCP. Subscribe to one or more Molecule workspaces; A2A messages from peers surface as conversation turns; replies route back through Molecule's A2A endpoints.",
  "version": "0.4.0-gitea.3",
  "version": "0.1.0",
  "keywords": [
    "molecule",
    "molecule-ai",

View File

@@ -16,56 +16,10 @@ No tunnel. No public endpoint. The plugin self-registers each watched workspace
## Install
This plugin distributes through the Claude Code marketplace flow. From any shell:
```bash
# 1. Add the marketplace (one-time per machine)
claude plugin marketplace add https://git.moleculesai.app/molecule-ai/molecule-mcp-claude-channel.git
# 2. Install the plugin
claude plugin install molecule@molecule-channel
claude --channels plugin:molecule@Molecule-AI/molecule-mcp-claude-channel
```
`molecule` is the plugin name (from `.claude-plugin/plugin.json`); `molecule-channel` is the marketplace name (from `.claude-plugin/marketplace.json`). Both live in the same repo — installing the marketplace makes the plugin available; installing the plugin enables it for your sessions.
To pin a specific version, append `#<tag>` to the marketplace URL — for example `…/molecule-mcp-claude-channel.git#v0.4.0-gitea.3`. Without a ref, you track `main`.
> **Note for users coming from the GitHub install path**: the GitHub `Molecule-AI` org was suspended on 2026-05-06 and is permanently gone. The earlier `claude --channels plugin:molecule@Molecule-AI/...` invocation no longer resolves. The new path (above) is the canonical replacement; behavior is unchanged.
>
> **Don't use the `claude --channels plugin:…` one-liner.** It silently no-ops on Claude Code 2.1.129 (and likely 2.1.x in general). The marketplace flow above is the only path that actually registers the plugin. If a previous setup guide pointed you at `claude --channels plugin:molecule@…`, ignore it.
### Allowing the channel via `allowedChannelPlugins`
The Claude Code host gates channel-plugin notifications behind an explicit allow-list. The plugin won't deliver `notifications/claude/channel` events to your session unless this list contains an entry that matches.
**Schema.** `allowedChannelPlugins` is an array of **objects**, not strings. The shape is `{ "plugin": "<plugin-name>", "marketplace": "<marketplace-name>" }`. The host's Zod validator silently ignores entries that aren't objects in this shape — so a bare-string entry like `"molecule"` or `"molecule@molecule-channel"` will load without error and contribute nothing to the allow-list. The symptom: poll loop runs cleanly, cursor advances, stderr says "delivered", and the message never reaches the conversation.
For this plugin, the entry is:
```json
{ "plugin": "molecule", "marketplace": "molecule-channel" }
```
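The silent-ignore behavior can be sketched as a hypothetical host-side filter (the function name and exact checks are assumptions — the real host uses a Zod schema — but the drop-don't-error semantics are the same):

```typescript
type ChannelAllowEntry = { plugin: string; marketplace: string }

// Hypothetical sketch of the host-side normalization: entries that
// aren't objects in the expected shape are silently dropped rather
// than rejected with an error, so a bare string contributes nothing.
function normalizeAllowList(raw: unknown[]): ChannelAllowEntry[] {
  return raw.filter((entry): entry is ChannelAllowEntry =>
    typeof entry === 'object' &&
    entry !== null &&
    typeof (entry as Record<string, unknown>).plugin === 'string' &&
    typeof (entry as Record<string, unknown>).marketplace === 'string'
  )
}

// Both string forms load "without error" but match nothing:
normalizeAllowList(['molecule', 'molecule@molecule-channel'])
// → []
normalizeAllowList([{ plugin: 'molecule', marketplace: 'molecule-channel' }])
// → keeps the one object entry
```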
**Location.** `allowedChannelPlugins` only takes effect from the **managed-settings** file:
- macOS: `/Library/Application Support/ClaudeCode/managed-settings.json`
- Linux: `/etc/claude-code/managed-settings.json`
- Windows: `C:\ProgramData\ClaudeCode\managed-settings.json`
Putting it in your user-level `~/.claude/settings.json` (or `~/.claude/settings.local.json`) does **not** work — the host reads the field only from the managed location. Most self-hosters try the user-level file first; this is the single most common reason a freshly installed channel plugin appears to do nothing. The managed-settings file may need `sudo` to edit on macOS/Linux.
A minimal working `managed-settings.json`:
```json
{
  "allowedChannelPlugins": [
    { "plugin": "molecule", "marketplace": "molecule-channel" }
  ]
}
```
After editing, restart Claude Code (or `/reload-plugins`) for the host to re-read the file.
On first launch the plugin creates `~/.claude/channels/molecule/` and exits with a config-missing error pointing at `.env`. Fill it in:
```
@@ -82,7 +36,6 @@ MOLECULE_POLL_WINDOW_SECS=30 # default 30s — only used to seed the first
MOLECULE_AGENT_NAME="Claude Code (channel)" # how the workspace appears in canvas
MOLECULE_AGENT_DESC="Local Claude Code session..."
MOLECULE_AUTO_REGISTER_POLL=true # set to "false" if you've configured the workspace another way
MOLECULE_HEARTBEAT_INTERVAL_MS=30000 # default 30s — keeps the canvas presence badge on "online"; set to 0 to disable
```
The `.env` file is `chmod 600` after first read; tokens never appear in environment-block-style `claude doctor` dumps.
@@ -90,11 +43,9 @@ The `.env` file is `chmod 600` after first read; tokens never appear in environm
Re-launch Claude Code:
```bash
claude
claude --channels plugin:molecule@Molecule-AI/molecule-mcp-claude-channel
```
(After the one-time `marketplace add` + `plugin install` above, the plugin loads automatically on every `claude` invocation; no per-launch flag needed.)
You should see on stderr:
```
@@ -136,6 +87,9 @@ When a peer's message lands in your session, the meta block carries the routing
"workspace_id": "ws-uuid-1",
"watching_as": "ws-uuid-1",
"peer_id": "ws-uuid-pm-coordinator",
"peer_name": "ops-agent",
"peer_role": "sre",
"agent_card_url": "https://your-tenant.staging.moleculesai.app/registry/discover/ws-uuid-pm-coordinator",
"method": "user_message",
"activity_id": "act-...",
"ts": "2026-04-29T..."
@@ -144,6 +98,8 @@ When a peer's message lands in your session, the meta block carries the routing
}
```
> **Compatibility note (2026-05-02):** the platform now enriches the inbound envelope with three additional fields — `peer_name` (peer's display name, registry-resolved), `peer_role` (peer's declared role, same registry source), and `agent_card_url` (deterministic URL of the platform's discover endpoint for this peer). `peer_name` and `peer_role` may be absent when the registry lookup fails (e.g. the peer hasn't registered yet); `agent_card_url` is always populated because it's computed deterministically from `peer_id`. Pre-2026-05-02 platforms do not emit these fields — the plugin forwards whatever the platform sends, so older payloads continue to work unchanged. See <https://doc.moleculesai.app/docs/runtime-mcp> for the full envelope spec.
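A consumer-side sketch of the enriched meta block as a TypeScript type (the interface and helper names are hypothetical; the field names and their optionality are taken from the compatibility note above):

```typescript
// Hypothetical typing of the inbound envelope's meta block. Only the
// fields discussed above are modeled. peer_name/peer_role may be
// absent on registry-lookup failure (or on pre-2026-05-02 platforms);
// agent_card_url is always present (computed from peer_id).
interface EnvelopeMeta {
  workspace_id: string
  peer_id: string
  peer_name?: string     // registry-resolved display name, may be absent
  peer_role?: string     // declared role, same registry source
  agent_card_url: string // deterministic discover-endpoint URL
  method: string
  activity_id: string
  ts: string
}

// A fallback a consumer might use when the registry fields are missing:
function displayName(meta: EnvelopeMeta): string {
  return meta.peer_name ?? meta.peer_id
}
```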
Claude can call `reply_to_workspace({peer_id, text})` to send the response back. If only one workspace is watched, `workspace_id` is implicit. Multi-workspace setups need the watched id explicitly.
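The implicit-vs-explicit workspace rule can be sketched like this (the helper name and error text are hypothetical, not taken from `server.ts`):

```typescript
// Hypothetical sketch of the workspace_id resolution rule described
// above: implicit when exactly one workspace is watched, required
// explicitly otherwise.
function resolveWorkspaceId(watched: string[], explicit?: string): string {
  if (explicit !== undefined) return explicit
  if (watched.length === 1) return watched[0]
  throw new Error('workspace_id is required when watching more than one workspace')
}

resolveWorkspaceId(['ws-a'])                 // → 'ws-a' (implicit)
resolveWorkspaceId(['ws-a', 'ws-b'], 'ws-b') // → 'ws-b' (explicit)
```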
## Architecture notes
@@ -191,7 +147,7 @@ A2A messages can carry `Part` entries with `url` and `media_type`. The MVP deliv
## Contributing
Single-file MCP server. The whole bridge lives in `server.ts`. Open issues at [molecule-ai/molecule-mcp-claude-channel](https://git.moleculesai.app/molecule-ai/molecule-mcp-claude-channel/issues).
Single-file MCP server. The whole bridge lives in `server.ts`. Open issues at [Molecule-AI/molecule-mcp-claude-channel](https://github.com/Molecule-AI/molecule-mcp-claude-channel/issues).
## License

View File

@@ -1,6 +0,0 @@
# bunfig.toml — preload the test env-mock setup so server.ts's
# required-config guard doesn't call process.exit(1) when test files
# import pure helpers from it. See tests/setup.ts for the full
# rationale.
[test]
preload = ["./tests/setup.ts"]

View File

@@ -1,108 +0,0 @@
// channel-capabilities-and-filter.test.ts — pins the two regressions Reno-Stars
// caught in their local-patched verify of v0.4.0-gitea.2:
//
// P0. Server constructor must declare `experimental.claude/channel` and
// `experimental.claude/channel/permission` capabilities. Without
// these, the Claude Code MCP host treats the server as tool-only and
// silently drops every `notifications/claude/channel` event we emit
// — poll advances, cursor moves, stderr says "delivered", message
// never reaches the user.
//
// P1. pollWorkspace must skip outbound `method=notify` rows. The
// activity feed returns the agent's own /notify calls alongside
// inbound A2A; emitNotification classifies them as canvas_user
// (source_id=null) and the reply echoes back as a fake user turn
// one poll later.
//
// Both regressions are silent — green tests + green CI today, broken
// behavior in production. Pin the shape so a future refactor that drops
// either fix surfaces here.
//
// Imports from ./server.ts are safe because tests/setup.ts (preloaded
// via bunfig.toml) sets the three required env vars before any test
// file is imported.
import { describe, expect, test } from 'bun:test'
import {
SERVER_CAPABILITIES,
shouldEmitActivity,
} from './server.ts'
import type { ActivityEntry } from './extract-text.ts'
describe('SERVER_CAPABILITIES — P0 channel-capability declaration', () => {
test('declares experimental.claude/channel', () => {
expect(SERVER_CAPABILITIES).toBeDefined()
expect(SERVER_CAPABILITIES.experimental).toBeDefined()
// The presence of the key is what the host checks. Empty object is
// intentional — the channel capability has no negotiable sub-fields
// today; it's a marker for "this server emits notifications/claude/channel".
expect(SERVER_CAPABILITIES.experimental['claude/channel']).toBeDefined()
expect(typeof SERVER_CAPABILITIES.experimental['claude/channel']).toBe('object')
})
test('declares experimental.claude/channel/permission', () => {
// Companion flag the host gates channel-write permission prompts on.
// Required pair — telegram-channel reference declares both.
expect(SERVER_CAPABILITIES.experimental['claude/channel/permission']).toBeDefined()
expect(typeof SERVER_CAPABILITIES.experimental['claude/channel/permission']).toBe('object')
})
test('still declares tools (regression: don\'t lose the tools surface)', () => {
// The pre-fix capability object was `{ tools: {} }`; this test pins
// that adding the experimental block didn't accidentally drop tools,
// which would break reply_to_workspace / list_peers / delegate_task.
expect(SERVER_CAPABILITIES.tools).toBeDefined()
})
})
describe('shouldEmitActivity — P1 outbound /notify echo filter', () => {
// Construct just enough of an ActivityEntry to satisfy the helper's
// Pick<ActivityEntry, 'method'>. The helper is intentionally narrow —
// it only reads .method — so the test doesn't need to mock the rest.
const make = (method: string | null): Pick<ActivityEntry, 'method'> => ({ method })
test('skips method="notify" rows (the agent\'s own outbound echoes)', () => {
expect(shouldEmitActivity(make('notify'))).toBe(false)
})
test('emits method="message/send" rows (inbound peer A2A)', () => {
// The dominant inbound shape: peers POST /workspaces/:id/a2a with
// a JSON-RPC message/send envelope; the platform records that as
// method="message/send" on the destination workspace.
expect(shouldEmitActivity(make('message/send'))).toBe(true)
})
test('emits method="user_message" rows (canvas-user inbound)', () => {
// Canvas chat panel sends method="user_message" — these surface
// as canvas_user kind to Claude.
expect(shouldEmitActivity(make('user_message'))).toBe(true)
})
test('emits null-method rows (inbound, method missing on platform side)', () => {
// Defensive: platform older than #2354 may have null method on some
// rows; deliver them rather than silently dropping. canvas_user
// classification will fall back to "no peer_id" → treat as canvas-user.
expect(shouldEmitActivity(make(null))).toBe(true)
})
test('emits any non-"notify" method even unrecognised ones', () => {
// Forward-compat: a future platform version could add a new method
// string. Default-allow + explicit-deny on "notify" is the safer
// policy than default-deny + explicit-allow on a known list.
expect(shouldEmitActivity(make('something/new'))).toBe(true)
})
test('integration: a mixed batch emits only its non-notify rows, in order', () => {
// Models the real pollWorkspace loop shape: filter pass count must
// equal "non-notify rows", regardless of order.
const batch: Array<Pick<ActivityEntry, 'method'>> = [
make('notify'), // own echo — drop
make('message/send'), // peer A2A — emit
make('notify'), // another own echo — drop
make('user_message'), // canvas user — emit
]
const emitted = batch.filter(shouldEmitActivity)
expect(emitted).toHaveLength(2)
expect(emitted.map(a => a.method)).toEqual(['message/send', 'user_message'])
})
})

View File

@@ -1,172 +0,0 @@
// heartbeat.test.ts — pin the POST /registry/heartbeat shape against a
// local Bun.serve fixture. Closes #6 / molecule-core#24 — the v0.4.0-gitea.1
// channel plugin polled /workspaces/:id/activity but never POSTed
// /registry/heartbeat, so the platform's healthsweep flipped the canvas
// presence badge to `awaiting_agent` within 90s of plugin start.
//
// The poll loop is read-only on the platform side (activity.go is a SELECT
// — /workspaces/:id/activity does NOT bump last_heartbeat_at), so without
// a dedicated keepalive POST the row stales out and the badge looks
// offline even while A2A traffic flows fine.
//
// Asserts the actual HTTP wire shape:
// - method = POST
// - path = /registry/heartbeat
// - Authorization: Bearer <token-for-workspace>
// - Content-Type: application/json
// - Origin: <platformUrl> (SaaS edge WAF — same as register)
// - body.workspace_id = <id>
//
// Pre-fix code path: heartbeat.ts does not exist. Post-fix: this test
// passes against the real function and would FAIL if a refactor swapped
// POST→GET, dropped the bearer token, renamed workspace_id, or stopped
// drainage on the success path — all of which would silently re-break
// the presence badge or leak sockets.
import { afterAll, afterEach, beforeAll, describe, expect, it } from 'bun:test'
import { sendHeartbeat } from './heartbeat.ts'
interface CapturedRequest {
method: string
pathname: string
headers: Record<string, string>
body: unknown
}
let captured: CapturedRequest[] = []
let nextStatus = 200
let nextResponseBody: string = '{}'
const fixture = Bun.serve({
port: 0,
async fetch(req) {
const url = new URL(req.url)
let body: unknown = undefined
try {
body = await req.json()
} catch {
body = await req.text().catch(() => undefined)
}
const hdrs: Record<string, string> = {}
req.headers.forEach((v, k) => { hdrs[k.toLowerCase()] = v })
captured.push({ method: req.method, pathname: url.pathname, headers: hdrs, body })
return new Response(nextResponseBody, {
status: nextStatus,
headers: { 'content-type': 'application/json' },
})
},
})
const platformUrl = `http://127.0.0.1:${fixture.port}`
beforeAll(() => {
captured = []
nextStatus = 200
nextResponseBody = '{}'
})
afterEach(() => {
captured = []
nextStatus = 200
nextResponseBody = '{}'
})
afterAll(() => {
fixture.stop(true)
})
describe('sendHeartbeat — POST /registry/heartbeat shape (closes #6 / molecule-core#24)', () => {
it('POSTs the workspace_id payload with the per-workspace bearer token + Origin header', async () => {
nextStatus = 200
await sendHeartbeat({
platformUrl,
workspaceId: 'ws-heartbeat-test-id',
token: 'tok-heartbeat-test',
})
expect(captured).toHaveLength(1)
const req = captured[0]!
expect(req.method).toBe('POST')
expect(req.pathname).toBe('/registry/heartbeat')
expect(req.headers['authorization']).toBe('Bearer tok-heartbeat-test')
expect(req.headers['content-type']).toContain('application/json')
// Origin pinned because SaaS edge WAF rewrites /workspaces/* and
// /registry/* to the Next.js front-end without it (per saved memory
// `reference_saas_waf_origin_header.md`). Heartbeat would silently
// 404 on saas tenants without it; pin so a refactor that drops it
// surfaces here, not in production.
expect(req.headers['origin']).toBe(platformUrl)
expect(req.body).toEqual({ workspace_id: 'ws-heartbeat-test-id' })
})
it('does not throw on platform 5xx — logs and returns so the next tick retries', async () => {
nextStatus = 503
nextResponseBody = 'service unavailable'
const logs: string[] = []
// sendHeartbeat must not propagate — the setInterval caller relies on
// resolution-not-rejection so a transient platform 503 doesn't kill
// the heartbeat loop for the rest of the plugin's lifetime.
await expect(sendHeartbeat({
platformUrl,
workspaceId: 'ws-x',
token: 'tok-x',
log: (line) => { logs.push(line) },
})).resolves.toBeUndefined()
expect(captured).toHaveLength(1)
expect(logs.join('')).toContain('HTTP 503')
expect(logs.join('')).toContain('service unavailable')
})
it('does not throw on platform 401 — auth-token revocation surfaces in stderr but does not crash', async () => {
nextStatus = 401
nextResponseBody = '{"error":"invalid token"}'
const logs: string[] = []
await expect(sendHeartbeat({
platformUrl,
workspaceId: 'ws-y',
token: 'tok-revoked',
log: (line) => { logs.push(line) },
})).resolves.toBeUndefined()
expect(captured).toHaveLength(1)
expect(logs.join('')).toContain('HTTP 401')
})
it('does not throw on network error — fetch failure logged, next tick retries', async () => {
const logs: string[] = []
// Use a port that's almost certainly closed (port 1 is reserved/usually
// unreachable in user space). On any plausible test host the connection
// refuses immediately, surfacing the fetch-failed branch.
await expect(sendHeartbeat({
platformUrl: 'http://127.0.0.1:1',
workspaceId: 'ws-net',
token: 'tok',
log: (line) => { logs.push(line) },
timeoutMs: 1_000,
})).resolves.toBeUndefined()
expect(logs.join('')).toContain('fetch failed')
})
it('drains the response body on success so connections can be reused', async () => {
// Pre-fix concern: a body-not-drained refactor would leak sockets in
// production over the lifetime of a long-running session. The
// contract the production code relies on is "after sendHeartbeat
// resolves, the body is consumed" — verifiable indirectly by
// observing that a follow-up call still sees a fresh fixture entry.
nextStatus = 200
nextResponseBody = '{"ok":true,"some":"large-response-body-with-content"}'
await sendHeartbeat({
platformUrl,
workspaceId: 'ws-1',
token: 'tok-1',
})
await sendHeartbeat({
platformUrl,
workspaceId: 'ws-2',
token: 'tok-2',
})
expect(captured).toHaveLength(2)
expect(captured[0]!.body).toEqual({ workspace_id: 'ws-1' })
expect(captured[1]!.body).toEqual({ workspace_id: 'ws-2' })
})
})

View File

@@ -1,109 +0,0 @@
// heartbeat.ts — POST /registry/heartbeat keepalive that flips the
// canvas presence badge from `awaiting_agent` to `online`. Closes #6
// and molecule-core#24.
//
// Why this file exists:
//
// The platform's healthsweep (workspace-server's
// internal/registry/healthsweep.go) flips any `runtime='external'`
// workspace whose `last_heartbeat_at` is older than 90s back to
// `status='awaiting_agent'`. The v0.4.0-gitea.1 channel plugin only
// POSTed /registry/register at startup (which DOES bump
// last_heartbeat_at via registry.go:369) but never heartbeated again.
// Within 90s of plugin start the row goes stale, the canvas badge
// flips to `awaiting_agent`, and the workspace looks offline even
// though A2A traffic flows fine over the long-poll loop.
//
// /workspaces/:id/activity GET (the poll loop) is read-only on the
// platform side — it does NOT touch presence. /registry/heartbeat is
// the only endpoint the platform's healthsweep actually watches.
//
// Why a separate module:
//
// server.ts has top-level side effects (PID-file lock, MCP connect,
// compat probe, register-as-poll, ticker start). Importing it from a
// test triggers all of them. Pure helpers — formatRemovedWorkspaceError,
// computeJitteredInterval, resolvePlatformUrls — already live in
// their own modules so tests can pin contracts without booting the
// server. This file follows the same pattern: heartbeat is a
// fetch-and-log function with a single dependency (workspace_id +
// token + base URL), trivially testable against a Bun.serve fixture.
/**
* Send one POST /registry/heartbeat to the platform.
*
* On success: 2xx, body drained.
* On platform 4xx/5xx: logged to stderr with status + truncated body,
* resolves cleanly so the next caller's setInterval tick retries.
* On network error: logged to stderr, resolves cleanly.
*
* The function NEVER throws: the typical caller is a setInterval
* tick, and an unhandled rejection there would kill the heartbeat
* loop for the rest of the plugin's lifetime, leaving the canvas
* badge stuck on awaiting_agent with no log to point at.
*
* Wire shape (pinned by heartbeat.test.ts):
* POST {platformUrl}/registry/heartbeat
* Authorization: Bearer {token}
* Content-Type: application/json
* Origin: {platformUrl} -- SaaS edge WAF requires this
* {"workspace_id": "<id>"} -- minimal HeartbeatPayload
*
* The body is the smallest valid HeartbeatPayload: workspace_id is the
* only required field, everything else (error_rate, sample_error,
* active_tasks, uptime_seconds, current_task) is `omitempty`-friendly
* on the platform side. The Python runtime in workspace/heartbeat.py
* sends the same shape when it has no per-tick metrics to attach.
*/
export interface HeartbeatOptions {
/** Platform base URL, no trailing slash. e.g. https://tenant.staging.moleculesai.app */
platformUrl: string
/** Workspace UUID being heartbeated. */
workspaceId: string
/** Bearer token issued for this workspace by /registry/register. */
token: string
/** Optional fetch override for tests. Defaults to globalThis.fetch. */
fetchImpl?: typeof fetch
/** Optional stderr override for tests. Defaults to writing to process.stderr. */
log?: (line: string) => void
/** Optional request timeout in ms. Defaults to 10s; heartbeat is a thin
* DB UPDATE; if it can't land in 10s the network is wedged enough that
* the next tick fires sooner than waiting longer would help. */
timeoutMs?: number
}
export async function sendHeartbeat(opts: HeartbeatOptions): Promise<void> {
const fetchImpl = opts.fetchImpl ?? fetch
const log = opts.log ?? ((line: string) => { process.stderr.write(line) })
const timeoutMs = opts.timeoutMs ?? 10_000
let resp: Response
try {
resp = await fetchImpl(`${opts.platformUrl}/registry/heartbeat`, {
method: 'POST',
headers: {
Authorization: `Bearer ${opts.token}`,
'Content-Type': 'application/json',
Origin: opts.platformUrl,
},
body: JSON.stringify({ workspace_id: opts.workspaceId }),
signal: AbortSignal.timeout(timeoutMs),
})
} catch (err) {
log(`molecule channel: heartbeat ${opts.workspaceId} fetch failed: ${err}\n`)
return
}
if (!resp.ok) {
const errText = await resp.text().catch(() => '')
log(
`molecule channel: heartbeat ${opts.workspaceId} HTTP ${resp.status}: ${errText.slice(0, 200)}\n`,
)
return
}
// 2xx — drain body so the connection can be reused. We don't consume
// any field from the heartbeat response; /registry/register is where
// platform_inbound_secret + auth_token are surfaced.
await resp.text().catch(() => '')
}

View File

@@ -1,6 +1,6 @@
{
  "name": "molecule-mcp-claude-channel",
  "version": "0.4.0-gitea.3",
  "version": "0.3.0",
  "description": "Molecule AI channel for Claude Code — bridges A2A traffic into a Claude Code session via MCP",
  "license": "Apache-2.0",
  "type": "module",

server.ts
View File

@@ -41,7 +41,6 @@ import { readFileSync, writeFileSync, mkdirSync, chmodSync, existsSync, renameSy
import { homedir } from 'os'
import { join } from 'path'
import { extractText, type ActivityEntry } from './extract-text.ts'
import { sendHeartbeat } from './heartbeat.ts'
// ─── Config ─────────────────────────────────────────────────────────────
@@ -89,23 +88,6 @@ const AGENT_DESC = process.env.MOLECULE_AGENT_DESC ??
const AUTO_REGISTER_POLL = !['0', 'false', 'no'].includes(
(process.env.MOLECULE_AUTO_REGISTER_POLL ?? 'true').toLowerCase()
)
// MOLECULE_HEARTBEAT_INTERVAL_MS — cadence for the per-workspace
// /registry/heartbeat ping that keeps the canvas presence badge on
// "online" (closes #6 / molecule-core#24).
//
// Default 30_000ms (30s) matches the Python runtime's HEARTBEAT_INTERVAL
// in workspace/heartbeat.py and is well under the platform's 90s
// `REMOTE_LIVENESS_STALE_AFTER` window — three heartbeat ticks fit
// inside the staleness budget so a single dropped POST doesn't flap
// the workspace to `awaiting_agent`.
//
// Set to 0 to disable the heartbeat loop entirely (useful for tests
// or for operators who run a separate heartbeat daemon). Negative
// values are clamped to 0.
const HEARTBEAT_INTERVAL_MS = Math.max(
0,
parseInt(process.env.MOLECULE_HEARTBEAT_INTERVAL_MS ?? '30000', 10) || 0,
)
if (!PLATFORM_URL || WORKSPACE_IDS.length === 0 || WORKSPACE_TOKENS.length === 0) {
process.stderr.write(
@@ -244,29 +226,6 @@ function saveCursors(): void {
}
}
// Per-row inbound filter for the activity feed. The `?type=a2a_receive`
// query already restricts the kind, but the platform STILL returns the
// agent's own outbound /notify rows in that view — they're recorded as
// a2a_receive on the SAME workspace_id with method='notify' and a null
// source_id. emitNotification would then classify them as `canvas_user`
// inbound (because peer_id is empty), and every reply this plugin sent
// would echo back as a fake user turn one poll later — the model would
// see its own answer as a new user prompt and try to "respond" to it,
// burning tokens and confusing the conversation.
//
// Filter on the row level so the cursor still advances past these rows
// (the caller already advances cursor to activities[last].id regardless
// of skip/emit, so a long run of notify-only rows can't stall the cursor).
//
// Reno-Stars caught this as the v0.4.0-gitea.2 → .3 P1 fix. Exported so
// a regression test can pin the contract without standing up a fake
// activity-feed HTTP fixture just to assert one boolean.
export function shouldEmitActivity(act: Pick<ActivityEntry, 'method'>): boolean {
// Outbound /notify calls (this agent's own replies) — silently drop.
if (act.method === 'notify') return false
return true
}
async function pollWorkspace(workspaceId: string, mcp: Server): Promise<void> {
const token = TOKEN_BY_WORKSPACE.get(workspaceId)!
const url = new URL(`${PLATFORM_URL}/workspaces/${workspaceId}/activity`)
@@ -351,7 +310,6 @@ async function pollWorkspace(workspaceId: string, mcp: Server): Promise<void> {
// notification delivery is best-effort anyway.
if (activities.length === 0) return
for (const act of activities) {
if (!shouldEmitActivity(act)) continue
emitNotification(mcp, workspaceId, act)
}
const newest = activities[activities.length - 1].id
@@ -512,30 +470,9 @@ function emitNotification(mcp: Server, workspaceId: string, act: ActivityEntry):
// ─── MCP server ─────────────────────────────────────────────────────────
// Capabilities: declaring `experimental['claude/channel']` is what makes the
// Claude Code MCP host actually deliver our `notifications/claude/channel`
// events into the conversation. Without it the host treats this server as
// tool-only and silently drops every channel notification — the poll
// advances, the cursor moves, stderr says "delivered", and yet no message
// reaches the user. The companion `claude/channel/permission` flag opts the
// server into the permission-prompt path the host gates channel writes on.
//
// Reno-Stars caught this as the v0.4.0-gitea.2 → .3 P0 fix; mirrors the
// shape used by the official telegram channel plugin's MCP server.
//
// Exported so a regression test can pin the shape without spinning up a
// real Server / stdio transport.
export const SERVER_CAPABILITIES = {
tools: {},
experimental: {
'claude/channel': {},
'claude/channel/permission': {},
},
} as const
const mcp = new Server(
{ name: 'molecule', version: '0.4.0-gitea.3' },
{ capabilities: SERVER_CAPABILITIES },
{ name: 'molecule', version: '0.3.0' },
{ capabilities: { tools: {} } },
)
// Tool: reply_to_workspace ----------------------------------------------
@@ -1324,11 +1261,7 @@ process.stderr.write(
`molecule channel: connected — watching ${WORKSPACE_IDS.length} workspace(s) at ${PLATFORM_URL}\n` +
` workspaces: ${WORKSPACE_IDS.join(', ')}\n` +
` delivery_mode=poll cursor=${CURSOR_FILE} auto_register=${AUTO_REGISTER_POLL}\n` +
` poll: every ${POLL_INTERVAL_MS}ms (cursor-based; ${POLL_WINDOW_SECS}s window only used for first-run seed)\n` +
` heartbeat: ` +
(HEARTBEAT_INTERVAL_MS > 0
? `every ${HEARTBEAT_INTERVAL_MS}ms (POST /registry/heartbeat — keeps canvas presence on 'online')\n`
: `disabled (MOLECULE_HEARTBEAT_INTERVAL_MS=0; canvas will flip to 'awaiting_agent' after 90s)\n`)
` poll: every ${POLL_INTERVAL_MS}ms (cursor-based; ${POLL_WINDOW_SECS}s window only used for first-run seed)\n`
)
// Stagger initial polls slightly so N-workspace watchers don't all hit the
@@ -1340,38 +1273,6 @@ WORKSPACE_IDS.forEach((id, i) => {
}, i * 500)
})
// Per-workspace heartbeat ticker — closes #6 / molecule-core#24.
//
// The startup `registerAsPoll` upsert already bumped `last_heartbeat_at`
// on each row, so the workspace is "online" from boot. The first heartbeat
// fires after one full HEARTBEAT_INTERVAL_MS so we don't double-pump on
// startup; subsequent ticks keep the row fresh inside the 90s stale
// window enforced by workspace-server's healthsweep.
//
// Stagger by i * 500ms so N-workspace plugins don't fan-spike the
// platform — same shape as the poll-loop staggering above.
//
// Conditional on HEARTBEAT_INTERVAL_MS > 0 so tests / unusual deploys
// can disable the loop without hacking around the ticker. .unref() so
// the heartbeat doesn't keep the event loop alive at shutdown.
//
// `sendHeartbeat` is imported from ./heartbeat.ts — see that file for
// the full presence-bug rationale + wire-shape contract.
if (HEARTBEAT_INTERVAL_MS > 0) {
WORKSPACE_IDS.forEach((id, i) => {
setTimeout(() => {
setInterval(
() => void sendHeartbeat({
platformUrl: PLATFORM_URL,
workspaceId: id,
token: TOKEN_BY_WORKSPACE.get(id)!,
}),
HEARTBEAT_INTERVAL_MS,
).unref()
}, i * 500)
})
}
// Clean shutdown — fire-and-forget a "disconnected" notice on each watched
// workspace's A2A so peers don't sit waiting on a silent channel.
const shutdown = (sig: string) => {

View File

@@ -1,15 +0,0 @@
// tests/setup.ts — preloaded by bunfig.toml's [test].preload before any
// test file is imported. Sets fake values for the three env vars
// server.ts requires at top-level (MOLECULE_PLATFORM_URL,
// MOLECULE_WORKSPACE_IDS, MOLECULE_WORKSPACE_TOKENS). Without this,
// importing server.ts (which the test files do, to pull
// formatRemovedWorkspaceError + other pure helpers) hits the
// required-config guard at server.ts:92 and calls process.exit(1) —
// killing the test runner before any test runs.
//
// `??=` only assigns when the var is unset, so a developer running
// `bun test` locally with a populated .env file isn't overridden.
process.env.MOLECULE_PLATFORM_URL ??= 'http://localhost:18080'
process.env.MOLECULE_WORKSPACE_IDS ??= 'ws-test-00000000-0000-0000-0000-000000000001'
process.env.MOLECULE_WORKSPACE_TOKENS ??= 'tok-test'