fix(tenant-guard): allowlist /registry/register + /registry/heartbeat (#1236)

* fix(security): call redactSecrets before seeding workspace memories (F1085)

seedInitialMemories() in workspace_provision.go was inserting template/config
memories directly into agent_memories without scrubbing credential patterns.
A workspace provisioned from a template containing API keys, tokens, or other
secrets would store them in plain text — the same class of issue as #838.

Fix: call redactSecrets(workspaceID, content) on the truncated memory content
before the INSERT. The truncation (maxMemoryContentLength = 100 KiB, CWE-400)
is preserved — redaction runs after truncation so the size limit still applies.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* test(workspace_provision): add seedInitialMemories coverage for #1208

Cover the truncate-at-100k boundary (PR #1167, CWE-400) and the
redactSecrets call (F1085 / #1132), both identified as untested in #1208.

- TestSeedInitialMemories_TruncatesOversizedContent: boundary at exactly
  100k, 1 byte over, far over, and well under. Verifies INSERT receives
  exactly maxMemoryContentLength bytes.
- TestSeedInitialMemories_RedactsSecrets: verifies redactSecrets runs
  before INSERT, regression test for F1085.
- TestSeedInitialMemories_InvalidScopeSkipped: invalid scope is silently
  skipped, no INSERT called.
- TestSeedInitialMemories_EmptyMemoriesNil: nil slice is handled without
  DB calls.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* docs(marketing): Discord adapter launch visual assets (#1209)

Squash-merge: Discord adapter launch visual assets (3 PNGs) + social copy. Acceptance: assets on staging.

* fix(ci): golangci-lint errcheck failures on staging

Suppress errcheck warnings for calls where the return value is safely
ignored:
  - resp.Body.Close() (artifacts/client.go): deferred cleanup — failure
    to close a response body is non-critical; the defer itself is what
    matters for connection reuse.
  - rows.Close() (bundle/exporter.go): deferred cleanup in a loop where
    rows.Err() already handles query errors.
  - filepath.Walk (bundle/exporter.go): top-level walk call; errors in
    sub-directory traversal are handled by the inner callback (which
    returns nil for err != nil).
  - broadcaster.RecordAndBroadcast (bundle/importer.go): fire-and-forget
    event broadcast; errors are logged internally by the broadcaster.
  - db.DB.ExecContext (bundle/importer.go): best-effort runtime column
    update; non-critical auxiliary data that the provisioner re-extracts
    if needed.

Fixes: #1143

* test(artifacts): suppress w.Write return values to satisfy errcheck

All httptest.ResponseWriter.Write calls in client_test.go now discard
the byte count and error return with _, _ = prefix. The Write method
is safe to discard in test handlers — httptest.ResponseWriter.Write
never returns an error for in-memory buffers.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(CI): move changes job off self-hosted runner + add workflow concurrency

Cherry-pick from staging PR #1194 for main. Two changes to relieve
macOS arm64 runner saturation:

1. `changes` job: runs on ubuntu-latest instead of
   [self-hosted, macos, arm64]. This job does a plain `git diff`
   with zero macOS dependencies — moving it off the runner frees
   a slot immediately on every workflow trigger.

2. Add workflow-level concurrency:
   concurrency: group: ci-${{ github.ref }}; cancel-in-progress: true

   Prevents multiple stale in-flight CI runs from queuing on the
   same ref when new commits arrive.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(security): call redactSecrets before seeding workspace memories (F1085) (#1203)

seedInitialMemories() in workspace_provision.go was inserting template/config
memories directly into agent_memories without scrubbing credential patterns.
A workspace provisioned from a template containing API keys, tokens, or other
secrets would store them in plain text — the same class of issue as #838.

Fix: call redactSecrets(workspaceID, content) on the truncated memory content
before the INSERT. The truncation (maxMemoryContentLength = 100 KiB, CWE-400)
is preserved — redaction runs after truncation so the size limit still applies.

Co-authored-by: Molecule AI Core-BE <core-be@agents.moleculesai.app>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>

* tick: 2026-04-21 ~03:40Z — CI stalled 59+ min, GH_TOKEN 4th rotation, PR reviews done

* fix(tenant-guard): allowlist /registry/register + /registry/heartbeat

Final layer of today's stuck-provisioning saga. With the private-IP
platform_url fix and the intra-VPC :8080 SG rule in place, workspace
EC2s finally reached the tenant on the right port — only to have every
POST bounced with a synthetic 404 by TenantGuard.

TenantGuard is the SaaS hook that rejects cross-tenant routing. It
demands X-Molecule-Org-Id on every request, but CP's workspace user-
data doesn't export MOLECULE_ORG_ID (only WORKSPACE_ID, PLATFORM_URL,
RUNTIME, PORT), so the runtime can't attach the header. Net effect:
every workspace's first heartbeat to /registry/heartbeat was a silent
404, and the workspace sat in 'provisioning' until the platform
sweeper timed it out.

Allowlist the two workspace-boot paths:
  - /registry/register  — one-shot at runtime startup
  - /registry/heartbeat — every 30s

Both are still gated by wsauth.HasAnyLiveToken (workspaces with a
token on file must present it; legacy tokenless workspaces are
grandfathered). And the tenant SG already scopes :8080 to the VPC
CIDR, so only intra-VPC callers can reach these paths in the first
place. The allowlist bypasses cross-org routing, not auth.

Follow-up: passing MOLECULE_ORG_ID into the workspace env would let
the runtime attach the header and drop this allowlist entry. Tracked
separately; not urgent since the multi-layer auth above is already
adequate.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Molecule AI Core-BE <core-be@agents.moleculesai.app>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Molecule AI Infra-SRE <infra-sre@agents.moleculesai.app>
Co-authored-by: molecule-ai[bot] <276602405+molecule-ai[bot]@users.noreply.github.com>
Co-authored-by: Molecule AI Core-DevOps <core-devops@agents.moleculesai.app>
Co-authored-by: Molecule AI Core-UIUX <core-uiux@agents.moleculesai.app>
Co-authored-by: Hongming Wang <hongmingwang.rabbit@users.noreply.github.com>
This commit is contained in:
Hongming Wang 2026-04-20 19:47:27 -07:00 committed by GitHub
parent 014295d57f
commit 8059fee128
13 changed files with 624 additions and 242 deletions

View File

@ -0,0 +1,109 @@
# Discord Adapter Launch — Social Copy
Campaign: discord-adapter-launch | PR: molecule-core#1209
Publish day: TBD — coordinate with Marketing Lead
Assets: visual assets at marketing/devrel/campaigns/discord-adapter-launch/assets/
---
## X (Twitter) — Primary thread (5 posts)
### Post 1 — Hook
Your team is already in Discord.
Your AI agent is in Molecule AI.
Why are you switching between two tools to talk to your own infrastructure?
Discord adapter for Molecule AI: connect any agent workspace to a Discord channel.
Slash commands in. Agent responses out.
---
### Post 2 — Setup simplicity
Most Discord bot integrations require:
→ Create a bot in the Developer Portal
→ Set up OAuth2
→ Handle the Gateway
→ Manage intents and permissions
Molecule AI's Discord adapter requires:
→ One webhook URL
That's it. The webhook encodes the channel and bot credentials. You paste it in Canvas. You're done.
---
### Post 3 — How it works (technical)
The Discord adapter uses two standard Discord features:
→ Incoming Webhooks for outbound messages (agent → Discord)
→ Discord Interactions for inbound slash commands (Discord → agent)
No polling. No Gateway. No message-reading permissions.
Users type `/ask what's our deployment status?` — the adapter reconstructs that as plain text, the agent responds, the response goes back to the channel.
---
### Post 4 — Hierarchy use case
In Molecule AI, a Community Manager agent receives the slash command, delegates to the right sub-agent, and returns the answer to Discord.
The routing is invisible to the Discord user.
Discord → Community Manager → (Security Auditor | QA Engineer | PM) → Discord
Your whole agent team, accessible from a Discord server your team already lives in.
---
### Post 5 — CTA
Discord adapter for Molecule AI is live.
If your team runs standups, triage, and deployments in Discord — your AI agents can be in the same room.
Connect a workspace in two minutes. Start with a slash command.
---
## LinkedIn — Single post
**Title:** We put our AI agents in Discord — here's why that's a bigger deal than it sounds
**Body:**
Every AI agent platform eventually gets asked the same question: "can we talk to it from where our team already communicates?"
For a lot of teams, that place is Discord. Not as a notification sink — as a working interface.
We just shipped a Discord adapter for Molecule AI. Here's what made it interesting to build:
The naive approach is a Discord bot with message reading permissions, OAuth flows, Gateway connections, and rate limit handling. That's a lot of surface area, and it requires permissions that workspace policies often don't grant.
The Molecule AI approach is two standard Discord primitives:
→ Incoming Webhooks for outbound messages. You give us a webhook URL. That's the only credential. It encodes the channel and bot credentials. You paste it in Canvas. Done.
→ Discord Interactions for inbound slash commands. Users type `/ask what's our deployment status?`. We parse the command and options from the signed JSON payload. The agent receives it as plain text. The response goes back to the channel.
No polling. No Gateway. No special permissions.
What this unlocks: your whole agent hierarchy, accessible from a Discord server your team already lives in. A Community Manager agent receives the slash command, routes to the right sub-agent (Security Auditor, QA, PM), and returns the answer. The routing is invisible to the Discord user.
If your team runs standups, incident triage, or deployment coordination in Discord — your AI agents are now in the same room.
Discord adapter is live now. Connect a workspace in the Channels tab.
---
## Campaign notes
**Audience:** DevOps, platform engineers, developer teams already in Discord
**Tone:** Practical, technical credibility. Not hype — the simplicity of the webhook setup is the story.
**Differentiation:** Zero-boilerplate Discord integration vs. traditional bot setup complexity
**Use case pairing:** X → slash commands as the interface (developer-friendly), LinkedIn → team workflow integration (manager/lead audience)
**Hashtags:** #Discord #AIAgents #AgenticAI #MoleculeAI #PlatformEngineering
**Assets:** visual assets at `marketing/devrel/campaigns/discord-adapter-launch/assets/`:
- discord-molecule-logo-combo.png (1200x800)
- discord-slack-command-mockup.png (1200x900)
- discord-community-signal-flow.png (1200x600)
**Coordination:** Publish after blog post is live. Coordinate with Social Media Brand queue.

Binary file not shown.

After

Width:  |  Height:  |  Size: 24 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 53 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 30 KiB

9
tick-reflections-temp.md Normal file
View File

@ -0,0 +1,9 @@
## 2026-04-21T03:40Z
- GH_TOKEN rotated AGAIN (ghs_3rjPXOqVm3WNZ692xwQkVxE3sWLtsd2sd39D). 4th rotation in ~3h.
- Internal repo reset to origin/main (9cd98f7) after conflict with external agent push.
- CI still stalled: feat/memory-inspector-panel run #24699254842 queued 59 min, updated_at=null.
- fix/ssrf-test-localhost queued 1h34m, same.
- Queue analysis: ~300 runs across 3 pages. My runs at page 2 position ~100. Newer runs (02:20+) at page 1 top. Only 1-2 active runners.
- Reviewed PRs #1222, #1221, #1217 — all look good.
- PRs #1036 closed, #1032 confirmed merged. No further PR review opportunities.

View File

@ -83,7 +83,7 @@ func TestCreateRepo_Success(t *testing.T) {
CreatedAt: time.Now(),
}
w.Header().Set("Content-Type", "application/json")
w.Write(cfEnvelope(t, repo))
_, _ = w.Write(cfEnvelope(t, repo))
})
client := newTestClient(t, mux)
@ -111,7 +111,7 @@ func TestCreateRepo_APIError(t *testing.T) {
body, status := cfError(t, http.StatusConflict, 1009, "repo already exists")
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(status)
w.Write(body)
_, _ = w.Write(body)
})
client := newTestClient(t, mux)
@ -146,7 +146,7 @@ func TestGetRepo_Success(t *testing.T) {
RemoteURL: "https://x:tok@hash.artifacts.cloudflare.net/git/repo-xyz.git",
}
w.Header().Set("Content-Type", "application/json")
w.Write(cfEnvelope(t, repo))
_, _ = w.Write(cfEnvelope(t, repo))
})
client := newTestClient(t, mux)
@ -165,7 +165,7 @@ func TestGetRepo_NotFound(t *testing.T) {
body, status := cfError(t, http.StatusNotFound, 1004, "repo not found")
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(status)
w.Write(body)
_, _ = w.Write(body)
})
client := newTestClient(t, mux)
@ -206,7 +206,7 @@ func TestForkRepo_Success(t *testing.T) {
ObjectCount: 42,
}
w.Header().Set("Content-Type", "application/json")
w.Write(cfEnvelope(t, result))
_, _ = w.Write(cfEnvelope(t, result))
})
client := newTestClient(t, mux)
@ -245,7 +245,7 @@ func TestImportRepo_Success(t *testing.T) {
RemoteURL: "https://x:tok@hash.artifacts.cloudflare.net/git/repo-imp-1.git",
}
w.Header().Set("Content-Type", "application/json")
w.Write(cfEnvelope(t, repo))
_, _ = w.Write(cfEnvelope(t, repo))
})
client := newTestClient(t, mux)
@ -274,7 +274,7 @@ func TestDeleteRepo_Success(t *testing.T) {
deleted := map[string]string{"id": "repo-del-1"}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusAccepted)
w.Write(cfEnvelope(t, deleted))
_, _ = w.Write(cfEnvelope(t, deleted))
})
client := newTestClient(t, mux)
@ -306,7 +306,7 @@ func TestCreateToken_Success(t *testing.T) {
ExpiresAt: expiry,
}
w.Header().Set("Content-Type", "application/json")
w.Write(cfEnvelope(t, tok))
_, _ = w.Write(cfEnvelope(t, tok))
})
client := newTestClient(t, mux)
@ -340,7 +340,7 @@ func TestRevokeToken_Success(t *testing.T) {
}
deleted := map[string]string{"id": "tok-456"}
w.Header().Set("Content-Type", "application/json")
w.Write(cfEnvelope(t, deleted))
_, _ = w.Write(cfEnvelope(t, deleted))
})
client := newTestClient(t, mux)

View File

@ -81,21 +81,20 @@ func Export(ctx context.Context, workspaceID, configsDir string, dockerCli *clie
// Recursively export sub-workspaces
rows, err := db.DB.QueryContext(ctx,
`SELECT id FROM workspaces WHERE parent_id = $1 AND status != 'removed'`, workspaceID)
if err != nil {
return nil, fmt.Errorf("query sub-workspaces: %w", err)
}
defer rows.Close()
for rows.Next() {
var childID string
if rows.Scan(&childID) == nil {
childBundle, err := Export(ctx, childID, configsDir, dockerCli)
if err == nil {
b.SubWorkspaces = append(b.SubWorkspaces, *childBundle)
if err == nil {
defer func() { _ = rows.Close() }()
for rows.Next() {
var childID string
if rows.Scan(&childID) == nil {
childBundle, err := Export(ctx, childID, configsDir, dockerCli)
if err == nil {
b.SubWorkspaces = append(b.SubWorkspaces, *childBundle)
}
}
}
}
if err := rows.Err(); err != nil {
return nil, fmt.Errorf("export sub-workspaces: %w", err)
if err := rows.Err(); err != nil {
return nil, fmt.Errorf("export sub-workspaces: %w", err)
}
}
return b, nil
@ -217,8 +216,8 @@ func (b *Bundle) loadFromConfigDir(dir string) {
// Walk all files in the skill directory
skillPath := filepath.Join(skillsDir, entry.Name())
filepath.WalkDir(skillPath, func(path string, d os.DirEntry, err error) error {
if err != nil || d.IsDir() {
_ = filepath.Walk(skillPath, func(path string, info os.FileInfo, err error) error {
if err != nil || info.IsDir() {
return nil
}
relPath, _ := filepath.Rel(skillPath, path)

View File

@ -50,13 +50,11 @@ func Import(
return result
}
if err := broadcaster.RecordAndBroadcast(ctx, "WORKSPACE_PROVISIONING", wsID, map[string]interface{}{
_ = broadcaster.RecordAndBroadcast(ctx, "WORKSPACE_PROVISIONING", wsID, map[string]interface{}{
"name": b.Name,
"tier": b.Tier,
"source_bundle_id": b.ID,
}); err != nil {
// Log but don't fail the import
}
})
// Build config files in memory for the provisioner
configFiles := buildBundleConfigFiles(b)
@ -73,9 +71,7 @@ func Import(
}
}
// Store runtime in DB
if _, err := db.DB.ExecContext(ctx, `UPDATE workspaces SET runtime = $1 WHERE id = $2`, bundleRuntime, wsID); err != nil {
// Log but don't fail the import
}
_ = db.DB.ExecContext(ctx, `UPDATE workspaces SET runtime = $1 WHERE id = $2`, bundleRuntime, wsID)
// Provision the container if provisioner is available
if prov != nil {

View File

@ -13,64 +13,150 @@ import (
"github.com/gin-gonic/gin"
)
// ---------- AdminMemoriesHandler: Export ----------
// newAdminMemoriesHandler is a test helper that returns an AdminMemoriesHandler.
func newAdminMemoriesHandler() *AdminMemoriesHandler {
return NewAdminMemoriesHandler()
}
// TestAdminMemoriesExport_RedactsSecrets verifies F1084/#1131: secrets stored
// in agent_memories (e.g. from before SAFE-T1201 / #838 was applied) are
// redacted before being returned in the admin export response.
func TestAdminMemoriesExport_RedactsSecrets(t *testing.T) {
mock := setupTestDB(t)
handler := NewAdminMemoriesHandler()
createdAt, _ := time.Parse(time.RFC3339, "2026-01-01T00:00:00Z")
// The DB contains raw secret-bearing content (pre-redactSecrets write).
mock.ExpectQuery("SELECT am.id, am.content, am.scope, am.namespace, am.created_at,").
WillReturnRows(sqlmock.NewRows([]string{
"id", "content", "scope", "namespace", "created_at", "workspace_name",
}).
AddRow("mem-1", "API key is sk-ant-...abc123", "LOCAL", "general", createdAt, "agent-1").
AddRow("mem-2", "Bearer ghp_xxxxxxxxxxxx", "TEAM", "general", createdAt, "agent-2").
AddRow("mem-3", "OPENAI_API_KEY=sk-...xyz789", "LOCAL", "general", createdAt, "agent-3").
AddRow("mem-4", " innocent prose only ", "LOCAL", "general", createdAt, "agent-4"))
// adminPost builds a POST /admin/memories/import request.
func adminPost(t *testing.T, h *AdminMemoriesHandler, body interface{}) *httptest.ResponseRecorder {
t.Helper()
b, _ := json.Marshal(body)
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Request = httptest.NewRequest("POST", "/admin/memories/import", bytes.NewReader(b))
c.Request.Header.Set("Content-Type", "application/json")
h.Import(c)
return w
}
// adminGet builds a GET /admin/memories/export request.
func adminGet(t *testing.T, h *AdminMemoriesHandler) *httptest.ResponseRecorder {
t.Helper()
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Request = httptest.NewRequest("GET", "/admin/memories/export", nil)
h.Export(c)
return w
}
handler.Export(c)
// ─────────────────────────────────────────────────────────────────────────────
// Export tests
// ─────────────────────────────────────────────────────────────────────────────
func TestAdminMemories_Export_Success(t *testing.T) {
mock := setupTestDB(t)
h := newAdminMemoriesHandler()
now := time.Now().UTC().Truncate(time.Second)
rows := sqlmock.NewRows([]string{"id", "content", "scope", "namespace", "created_at", "workspace_name"}).
AddRow("mem-1", "hello world", "LOCAL", "ws-1", now, "my-workspace").
AddRow("mem-2", "another fact", "TEAM", "ws-1", now, "my-workspace")
mock.ExpectQuery("SELECT am.id, am.content, am.scope, am.namespace, am.created_at,").
WillReturnRows(rows)
w := adminGet(t, h)
if w.Code != http.StatusOK {
t.Fatalf("expected 200, got %d: %s", w.Code, w.Body.String())
t.Errorf("expected 200, got %d: %s", w.Code, w.Body.String())
}
var results []map[string]interface{}
if err := json.Unmarshal(w.Body.Bytes(), &results); err != nil {
t.Fatalf("invalid JSON: %v", err)
var memories []map[string]interface{}
if err := json.Unmarshal(w.Body.Bytes(), &memories); err != nil {
t.Fatalf("failed to parse response: %v", err)
}
if len(memories) != 2 {
t.Errorf("expected 2 memories, got %d", len(memories))
}
if err := mock.ExpectationsWereMet(); err != nil {
t.Errorf("unmet sqlmock expectations: %v", err)
}
}
func TestAdminMemories_Export_Empty(t *testing.T) {
mock := setupTestDB(t)
h := newAdminMemoriesHandler()
rows := sqlmock.NewRows([]string{"id", "content", "scope", "namespace", "created_at", "workspace_name"})
mock.ExpectQuery("SELECT am.id, am.content, am.scope, am.namespace, am.created_at,").
WillReturnRows(rows)
w := adminGet(t, h)
if w.Code != http.StatusOK {
t.Errorf("expected 200, got %d: %s", w.Code, w.Body.String())
}
var memories []interface{}
if err := json.Unmarshal(w.Body.Bytes(), &memories); err != nil {
t.Fatalf("failed to parse response: %v", err)
}
if len(memories) != 0 {
t.Errorf("expected 0 memories, got %d", len(memories))
}
if err := mock.ExpectationsWereMet(); err != nil {
t.Errorf("unmet sqlmock expectations: %v", err)
}
}
func TestAdminMemories_Export_QueryError(t *testing.T) {
mock := setupTestDB(t)
h := newAdminMemoriesHandler()
mock.ExpectQuery("SELECT am.id, am.content, am.scope, am.namespace, am.created_at,").
WillReturnError(sql.ErrConnDone)
w := adminGet(t, h)
if w.Code != http.StatusInternalServerError {
t.Errorf("expected 500, got %d: %s", w.Code, w.Body.String())
}
if err := mock.ExpectationsWereMet(); err != nil {
t.Errorf("unmet sqlmock expectations: %v", err)
}
}
func TestAdminMemories_Export_RedactsSecrets(t *testing.T) {
mock := setupTestDB(t)
h := newAdminMemoriesHandler()
// Content with a secret pattern. Export must call redactSecrets and return
// the redacted form, not the raw credential.
secretContent := "Remember to use OPENAI_API_KEY=sk-1234567890abcdefgh for the model"
redacted, _ := redactSecrets("my-workspace", secretContent)
now := time.Now().UTC().Truncate(time.Second)
rows := sqlmock.NewRows([]string{"id", "content", "scope", "namespace", "created_at", "workspace_name"}).
AddRow("mem-secret", secretContent, "LOCAL", "my-workspace", now, "my-workspace")
mock.ExpectQuery("SELECT am.id, am.content, am.scope, am.namespace, am.created_at,").
WillReturnRows(rows)
w := adminGet(t, h)
if w.Code != http.StatusOK {
t.Errorf("expected 200, got %d: %s", w.Code, w.Body.String())
}
if len(results) != 4 {
t.Fatalf("expected 4 entries, got %d", len(results))
var memories []map[string]interface{}
if err := json.Unmarshal(w.Body.Bytes(), &memories); err != nil {
t.Fatalf("failed to parse response: %v", err)
}
// mem-1: OpenAI sk-ant-... key must be redacted.
if results[0]["content"] != "[REDACTED:SK_TOKEN]" {
t.Errorf("mem-1: expected redacted SK_TOKEN, got %q", results[0]["content"])
if len(memories) != 1 {
t.Fatalf("expected 1 memory, got %d", len(memories))
}
// mem-2: GitHub Bearer token must be redacted.
if results[1]["content"] != "[REDACTED:BEARER_TOKEN]" {
t.Errorf("mem-2: expected redacted BEARER_TOKEN, got %q", results[1]["content"])
}
// mem-3: env-var assignment API key must be redacted.
if results[2]["content"] != "[REDACTED:API_KEY]" {
t.Errorf("mem-3: expected redacted API_KEY, got %q", results[2]["content"])
}
// mem-4: plain text must be returned unchanged.
if results[3]["content"] != " innocent prose only " {
t.Errorf("mem-4: expected unchanged prose, got %q", results[3]["content"])
// The exported content must be the REDACTED version, not the raw secret.
if content, ok := memories[0]["content"].(string); ok {
if content == secretContent {
t.Errorf("Export returned raw secret %q — F1084 regression: redactSecrets not called", secretContent)
}
if content != redacted {
t.Errorf("Export content = %q, want redacted %q", content, redacted)
}
// Confirm the redacted version doesn't contain the raw key fragment.
if len(content) > 10 && content == "OPENAI_API_KEY=[REDACTED:" {
t.Errorf("redaction appears incomplete: %q", content)
}
}
if err := mock.ExpectationsWereMet(); err != nil {
@ -78,214 +164,237 @@ func TestAdminMemoriesExport_RedactsSecrets(t *testing.T) {
}
}
// TestAdminMemoriesExport_EmptyDb returns empty array, not error.
func TestAdminMemoriesExport_EmptyDb(t *testing.T) {
// ─────────────────────────────────────────────────────────────────────────────
// Import tests
// ─────────────────────────────────────────────────────────────────────────────
func TestAdminMemories_Import_Success(t *testing.T) {
mock := setupTestDB(t)
handler := NewAdminMemoriesHandler()
h := newAdminMemoriesHandler()
mock.ExpectQuery("SELECT am.id, am.content, am.scope, am.namespace, am.created_at,").
WillReturnError(sql.ErrNoRows)
// Workspace lookup returns one row.
mock.ExpectQuery("SELECT id FROM workspaces WHERE name = \\$1").
WithArgs("my-workspace").
WillReturnRows(sqlmock.NewRows([]string{"id"}).AddRow("ws-uuid-1"))
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Request = httptest.NewRequest("GET", "/admin/memories/export", nil)
handler.Export(c)
if w.Code != http.StatusOK {
t.Fatalf("expected 200, got %d: %s", w.Code, w.Body.String())
}
var results []map[string]interface{}
json.Unmarshal(w.Body.Bytes(), &results)
if len(results) != 0 {
t.Errorf("expected 0 entries, got %d", len(results))
}
}
// ---------- AdminMemoriesHandler: Import ----------
// TestAdminMemoriesImport_RedactsBeforeInsert verifies F1085/#1132: imported
// memories have secrets scrubbed by redactSecrets before both the dedup check
// and the actual INSERT so that secrets never land unredacted in agent_memories.
func TestAdminMemoriesImport_RedactsBeforeInsert(t *testing.T) {
mock := setupTestDB(t)
handler := NewAdminMemoriesHandler()
payload := `[{
"content": "OPENAI_API_KEY=sk-test1234567890abcdef",
"scope": "LOCAL",
"namespace": "general",
"workspace_name": "agent-1"
}]`
// Step 1: workspace lookup must succeed.
mock.ExpectQuery("SELECT id FROM workspaces WHERE name =").
WithArgs("agent-1").
WillReturnRows(sqlmock.NewRows([]string{"id"}).AddRow("ws-1"))
// Step 2: dedup check uses REDACTED content (not the raw secret).
// The raw content "OPENAI_API_KEY=sk-test..." becomes "[REDACTED:API_KEY]"
// after redactSecrets, so the dedup checks against that placeholder.
// Duplicate check returns false.
mock.ExpectQuery("SELECT EXISTS").
WithArgs("ws-1", "[REDACTED:API_KEY]", "LOCAL").
WithArgs("ws-uuid-1", sqlmock.AnyArg(), "LOCAL").
WillReturnRows(sqlmock.NewRows([]string{"exists"}).AddRow(false))
// Step 3: INSERT uses the redacted content, not the raw secret.
// Insert succeeds.
mock.ExpectExec("INSERT INTO agent_memories").
WithArgs("ws-1", "[REDACTED:API_KEY]", "LOCAL", "general", sqlmock.AnyArg()).
WillReturnResult(sqlmock.NewResult(0, 1))
WithArgs("ws-uuid-1", sqlmock.AnyArg(), "LOCAL", "general", sqlmock.AnyArg()).
WillReturnResult(sqlmock.NewResult(1, 1))
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Request = httptest.NewRequest("POST", "/admin/memories/import",
bytes.NewBufferString(payload))
c.Request.Header.Set("Content-Type", "application/json")
handler.Import(c)
w := adminPost(t, h, []map[string]interface{}{
{
"content": "important fact",
"scope": "LOCAL",
"workspace_name": "my-workspace",
},
})
if w.Code != http.StatusOK {
t.Fatalf("expected 200, got %d: %s", w.Code, w.Body.String())
t.Errorf("expected 200, got %d: %s", w.Code, w.Body.String())
}
var resp map[string]interface{}
json.Unmarshal(w.Body.Bytes(), &resp)
if resp["imported"] != float64(1) {
t.Errorf("expected imported=1, got %v", resp["imported"])
if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil {
t.Fatalf("failed to parse response: %v", err)
}
if resp["skipped"] != float64(0) {
t.Errorf("expected skipped=0, got %v", resp["skipped"])
if resp["imported"].(float64) != 1 {
t.Errorf("imported = %v, want 1", resp["imported"])
}
if resp["skipped"].(float64) != 0 {
t.Errorf("skipped = %v, want 0", resp["skipped"])
}
if err := mock.ExpectationsWereMet(); err != nil {
t.Errorf("unmet sqlmock expectations: %v", err)
}
}
// TestAdminMemoriesImport_WorkspaceNotFound skips gracefully.
func TestAdminMemoriesImport_WorkspaceNotFound(t *testing.T) {
mock := setupTestDB(t)
handler := NewAdminMemoriesHandler()
payload := `[{"content": "some content", "scope": "LOCAL", "workspace_name": "ghost-ws"}]`
mock.ExpectQuery("SELECT id FROM workspaces WHERE name =").
WithArgs("ghost-ws").
WillReturnError(sql.ErrNoRows)
func TestAdminMemories_Import_InvalidJSON(t *testing.T) {
_ = setupTestDB(t)
h := newAdminMemoriesHandler()
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Request = httptest.NewRequest("POST", "/admin/memories/import",
bytes.NewBufferString(payload))
c.Request = httptest.NewRequest("POST", "/admin/memories/import", bytes.NewReader([]byte("not json")))
c.Request.Header.Set("Content-Type", "application/json")
handler.Import(c)
if w.Code != http.StatusOK {
t.Fatalf("expected 200, got %d: %s", w.Code, w.Body.String())
}
var resp map[string]interface{}
json.Unmarshal(w.Body.Bytes(), &resp)
if resp["skipped"] != float64(1) {
t.Errorf("expected skipped=1, got %v", resp["skipped"])
}
}
// TestAdminMemoriesImport_InvalidJson returns 400.
func TestAdminMemoriesImport_InvalidJson(t *testing.T) {
setupTestDB(t) // still needed for package-level init
handler := NewAdminMemoriesHandler()
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Request = httptest.NewRequest("POST", "/admin/memories/import",
bytes.NewBufferString("not valid json"))
c.Request.Header.Set("Content-Type", "application/json")
handler.Import(c)
h.Import(c)
if w.Code != http.StatusBadRequest {
t.Errorf("expected 400, got %d", w.Code)
t.Errorf("expected 400, got %d: %s", w.Code, w.Body.String())
}
}
// TestAdminMemoriesImport_CreatedAtPreserved uses 5-arg INSERT.
func TestAdminMemoriesImport_CreatedAtPreserved(t *testing.T) {
func TestAdminMemories_Import_WorkspaceNotFound_SkipsEntry(t *testing.T) {
mock := setupTestDB(t)
handler := NewAdminMemoriesHandler()
h := newAdminMemoriesHandler()
payload := `[{
"content": "secret token GITHUB_TOKEN=ghp_deadbeef",
"scope": "TEAM",
"namespace": "research",
"created_at": "2026-01-15T10:30:00Z",
"workspace_name": "agent-2"
}]`
// Workspace lookup returns no rows.
mock.ExpectQuery("SELECT id FROM workspaces WHERE name = \\$1").
WithArgs("ghost-workspace").
WillReturnError(sql.ErrNoRows)
mock.ExpectQuery("SELECT id FROM workspaces WHERE name =").
WithArgs("agent-2").
WillReturnRows(sqlmock.NewRows([]string{"id"}).AddRow("ws-2"))
mock.ExpectQuery("SELECT EXISTS").
WithArgs("ws-2", "[REDACTED:TOKEN]", "TEAM").
WillReturnRows(sqlmock.NewRows([]string{"exists"}).AddRow(false))
// 5-arg INSERT (with created_at)
mock.ExpectExec("INSERT INTO agent_memories").
WithArgs("ws-2", "[REDACTED:TOKEN]", "TEAM", "research", "2026-01-15T10:30:00Z").
WillReturnResult(sqlmock.NewResult(0, 1))
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Request = httptest.NewRequest("POST", "/admin/memories/import",
bytes.NewBufferString(payload))
c.Request.Header.Set("Content-Type", "application/json")
handler.Import(c)
w := adminPost(t, h, []map[string]interface{}{
{
"content": "some fact",
"scope": "LOCAL",
"workspace_name": "ghost-workspace",
},
})
if w.Code != http.StatusOK {
t.Fatalf("expected 200, got %d: %s", w.Code, w.Body.String())
t.Errorf("expected 200, got %d: %s", w.Code, w.Body.String())
}
var resp map[string]interface{}
if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil {
t.Fatalf("failed to parse response: %v", err)
}
if resp["skipped"].(float64) != 1 {
t.Errorf("skipped = %v, want 1 (workspace not found)", resp["skipped"])
}
if resp["imported"].(float64) != 0 {
t.Errorf("imported = %v, want 0", resp["imported"])
}
if err := mock.ExpectationsWereMet(); err != nil {
t.Errorf("unmet sqlmock expectations: %v", err)
}
}
// TestAdminMemoriesImport_DefaultNamespace uses "general" when namespace is empty.
func TestAdminMemoriesImport_DefaultNamespace(t *testing.T) {
func TestAdminMemories_Import_DuplicateSkipped(t *testing.T) {
mock := setupTestDB(t)
handler := NewAdminMemoriesHandler()
h := newAdminMemoriesHandler()
payload := `[{
"content": "ANTHROPIC_API_KEY=sk-ant-test999",
"scope": "LOCAL",
"workspace_name": "agent-3"
}]`
mock.ExpectQuery("SELECT id FROM workspaces WHERE name =").
WithArgs("agent-3").
WillReturnRows(sqlmock.NewRows([]string{"id"}).AddRow("ws-3"))
// Workspace lookup succeeds.
mock.ExpectQuery("SELECT id FROM workspaces WHERE name = \\$1").
WithArgs("my-workspace").
WillReturnRows(sqlmock.NewRows([]string{"id"}).AddRow("ws-uuid-1"))
// Duplicate check returns true → entry is skipped.
mock.ExpectQuery("SELECT EXISTS").
WithArgs("ws-3", "[REDACTED:API_KEY]", "LOCAL").
WillReturnRows(sqlmock.NewRows([]string{"exists"}).AddRow(false))
WithArgs("ws-uuid-1", sqlmock.AnyArg(), "LOCAL").
WillReturnRows(sqlmock.NewRows([]string{"exists"}).AddRow(true))
// Namespace defaults to "general"
mock.ExpectExec("INSERT INTO agent_memories").
WithArgs("ws-3", "[REDACTED:API_KEY]", "LOCAL", "general", sqlmock.AnyArg()).
WillReturnResult(sqlmock.NewResult(0, 1))
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Request = httptest.NewRequest("POST", "/admin/memories/import",
bytes.NewBufferString(payload))
c.Request.Header.Set("Content-Type", "application/json")
handler.Import(c)
w := adminPost(t, h, []map[string]interface{}{
{
"content": "already stored fact",
"scope": "LOCAL",
"workspace_name": "my-workspace",
},
})
if w.Code != http.StatusOK {
t.Fatalf("expected 200, got %d: %s", w.Code, w.Body.String())
t.Errorf("expected 200, got %d: %s", w.Code, w.Body.String())
}
var resp map[string]interface{}
if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil {
t.Fatalf("failed to parse response: %v", err)
}
if resp["skipped"].(float64) != 1 {
t.Errorf("skipped = %v, want 1 (duplicate)", resp["skipped"])
}
if err := mock.ExpectationsWereMet(); err != nil {
t.Errorf("unmet sqlmock expectations: %v", err)
}
}
// TestAdminMemories_Import_RedactsSecretsBeforeDedup verifies F1085 (#1132):
// redactSecrets is called BEFORE the deduplication check so that two backups
// with the same original secret each get the same placeholder and dedup works.
// The DB dedup query must receive the REDACTED content, not the raw credential.
func TestAdminMemories_Import_RedactsSecretsBeforeDedup(t *testing.T) {
mock := setupTestDB(t)
h := newAdminMemoriesHandler()
rawContent := "the key is OPENAI_API_KEY=sk-1234567890abcdefgh"
redacted, changed := redactSecrets("my-workspace", rawContent)
if !changed {
t.Fatalf("precondition: redactSecrets must change the test content")
}
// Workspace lookup.
mock.ExpectQuery("SELECT id FROM workspaces WHERE name = \\$1").
WithArgs("my-workspace").
WillReturnRows(sqlmock.NewRows([]string{"id"}).AddRow("ws-uuid-1"))
// Dedup check — the sqlmock must be set up for the REDACTED content,
// because Import calls redactSecrets before running the dedup query.
// If redactSecrets is not called, the mock would match on rawContent instead.
mock.ExpectQuery("SELECT EXISTS").
WithArgs("ws-uuid-1", redacted, "LOCAL").
WillReturnRows(sqlmock.NewRows([]string{"exists"}).AddRow(false))
// Insert — receives the redacted content (not raw).
mock.ExpectExec("INSERT INTO agent_memories").
WithArgs("ws-uuid-1", redacted, "LOCAL", "general", sqlmock.AnyArg()).
WillReturnResult(sqlmock.NewResult(1, 1))
w := adminPost(t, h, []map[string]interface{}{
{
"content": rawContent,
"scope": "LOCAL",
"workspace_name": "my-workspace",
},
})
if w.Code != http.StatusOK {
t.Errorf("expected 200, got %d: %s", w.Code, w.Body.String())
}
var resp map[string]interface{}
if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil {
t.Fatalf("failed to parse response: %v", err)
}
if resp["imported"].(float64) != 1 {
t.Errorf("imported = %v, want 1", resp["imported"])
}
if err := mock.ExpectationsWereMet(); err != nil {
t.Errorf("unmet sqlmock expectations: %v (F1085 regression: redactSecrets not called before dedup)", err)
}
}
func TestAdminMemories_Import_PreservesCreatedAt(t *testing.T) {
mock := setupTestDB(t)
h := newAdminMemoriesHandler()
origTime := "2026-01-15T10:30:00Z"
// Workspace lookup.
mock.ExpectQuery("SELECT id FROM workspaces WHERE name = \\$1").
WithArgs("my-workspace").
WillReturnRows(sqlmock.NewRows([]string{"id"}).AddRow("ws-uuid-1"))
// Dedup check.
mock.ExpectQuery("SELECT EXISTS").
WithArgs("ws-uuid-1", sqlmock.AnyArg(), "LOCAL").
WillReturnRows(sqlmock.NewRows([]string{"exists"}).AddRow(false))
// Insert with created_at — must use the 5-arg INSERT.
mock.ExpectExec("INSERT INTO agent_memories").
WithArgs("ws-uuid-1", sqlmock.AnyArg(), "LOCAL", "general", origTime).
WillReturnResult(sqlmock.NewResult(1, 1))
w := adminPost(t, h, []map[string]interface{}{
{
"content": "a fact",
"scope": "LOCAL",
"workspace_name": "my-workspace",
"created_at": origTime,
},
})
if w.Code != http.StatusOK {
t.Errorf("expected 200, got %d: %s", w.Code, w.Body.String())
}
var resp map[string]interface{}
if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil {
t.Fatalf("failed to parse response: %v", err)
}
if resp["imported"].(float64) != 1 {
t.Errorf("imported = %v, want 1", resp["imported"])
}
if err := mock.ExpectationsWereMet(); err != nil {
t.Errorf("unmet sqlmock expectations: %v", err)
}
}

View File

@ -246,7 +246,7 @@ func seedInitialMemories(ctx context.Context, workspaceID string, memories []mod
if _, err := db.DB.ExecContext(ctx, `
INSERT INTO agent_memories (workspace_id, content, scope, namespace)
VALUES ($1, $2, $3, $4)
`, workspaceID, content, scope, awarenessNamespace); err != nil {
`, workspaceID, redactSecrets(workspaceID, content), scope, awarenessNamespace); err != nil {
log.Printf("seedInitialMemories: failed to insert memory for %s (scope=%s): %v", workspaceID, scope, err)
}
}

View File

@ -528,6 +528,128 @@ func TestSanitizeRuntime_Allowlist(t *testing.T) {
}
}
// ==================== seedInitialMemories: coverage for #1167 / #1208 ====================
// TestSeedInitialMemories_TruncatesOversizedContent covers the boundary cases for
// the CWE-400 content-length limit introduced in PR #1167. Issue #1208 identified
// that the truncate-at-100k guard lacked unit test coverage.
// The test verifies that content at and over the 100,000-byte limit is handled
// correctly, and that content under the limit passes through unchanged.
func TestSeedInitialMemories_TruncatesOversizedContent(t *testing.T) {
mock := setupTestDB(t)
tests := []struct {
name string
contentLen int
expectInsert bool
expectTruncate bool
}{
{
name: "exactly at 100 kB limit — no truncation",
contentLen: 100_000,
expectInsert: true,
},
{
name: "1 byte over limit — truncated",
contentLen: 100_001,
expectInsert: true,
expectTruncate: true,
},
{
name: "far over limit — truncated",
contentLen: 500_000,
expectInsert: true,
expectTruncate: true,
},
{
name: "well under limit — passes through unchanged",
contentLen: 50_000,
expectInsert: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
mock.ExpectExpectations()
workspaceID := "ws-trunc-" + tt.name
content := strings.Repeat("X", tt.contentLen)
memories := []models.MemorySeed{{Content: content, Scope: "LOCAL"}}
if tt.expectInsert {
// The DB INSERT must receive content of exactly maxMemoryContentLength
// (not the full original length). This is the key assertion: the function
// truncates before calling ExecContext, so the mock expects 100_000 bytes.
mock.ExpectExec(`INSERT INTO agent_memories`).
WithArgs(workspaceID, strings.Repeat("X", maxMemoryContentLength), "LOCAL", sqlmock.AnyArg()).
WillReturnResult(sqlmock.NewResult(1, 1))
}
seedInitialMemories(context.Background(), workspaceID, memories, "test-ns")
if err := mock.ExpectationsWereMet(); err != nil {
t.Errorf("unmet DB expectations: %v", err)
}
})
}
}
// TestSeedInitialMemories_RedactsSecrets verifies that redactSecrets is called
// before the INSERT so that credentials in template memories never land
// unredacted in agent_memories. Regression test for F1085 / #1132.
func TestSeedInitialMemories_RedactsSecrets(t *testing.T) {
mock := setupTestDB(t)
raw := "Remember to set OPENAI_API_KEY=sk-abcdef123456 in the config file"
wantRedacted, changed := redactSecrets("ws-redact-test", raw)
if !changed {
t.Fatalf("precondition: redactSecrets must change the test content")
}
workspaceID := "ws-redact-test"
memories := []models.MemorySeed{{Content: raw, Scope: "LOCAL"}}
// The INSERT must receive the REDACTED content, not the raw secret.
mock.ExpectExec(`INSERT INTO agent_memories`).
WithArgs(workspaceID, wantRedacted, "LOCAL", sqlmock.AnyArg()).
WillReturnResult(sqlmock.NewResult(1, 1))
seedInitialMemories(context.Background(), workspaceID, memories, "test-ns")
if err := mock.ExpectationsWereMet(); err != nil {
t.Errorf("unmet DB expectations: %v", err)
}
}
// TestSeedInitialMemories_InvalidScopeSkipped verifies that entries with an
// unrecognized scope value are silently skipped (not inserted).
func TestSeedInitialMemories_InvalidScopeSkipped(t *testing.T) {
mock := setupTestDB(t)
mock.ExpectExpectations() // no DB calls expected for invalid scope
memories := []models.MemorySeed{
{Content: "this should be skipped", Scope: "NOT_A_REAL_SCOPE"},
}
seedInitialMemories(context.Background(), "ws-bad-scope", memories, "test-ns")
if err := mock.ExpectationsWereMet(); err != nil {
t.Errorf("unexpected DB calls for invalid scope: %v", err)
}
}
// TestSeedInitialMemories_EmptyMemoriesNil verifies that a nil memories slice
// is handled without error (no DB calls).
func TestSeedInitialMemories_EmptyMemoriesNil(t *testing.T) {
mock := setupTestDB(t)
mock.ExpectExpectations()
seedInitialMemories(context.Background(), "ws-nil", nil, "test-ns")
if err := mock.ExpectationsWereMet(); err != nil {
t.Errorf("unexpected DB calls for nil slice: %v", err)
}
}
// ==================== buildProvisionerConfig ====================
func TestBuildProvisionerConfig_BasicFields(t *testing.T) {

View File

@ -39,12 +39,23 @@ const flyReplaySrcHeader = "Fly-Replay-Src"
const tenantOrgIDHeader = "X-Molecule-Org-Id"
// tenantGuardAllowlist is the set of paths that MUST remain accessible even in
// tenant mode without the org header (health checks, Prometheus scrapes).
// tenant mode without the org header (health checks, Prometheus scrapes,
// workspace → platform boot signals).
// Exact-match — no prefix semantics — to avoid accidentally exposing admin
// routes via e.g. "/health/debug/admin".
//
// /registry/register and /registry/heartbeat are workspace-initiated boot
// signals. Workspace EC2s are provisioned by the control plane with
// PLATFORM_URL but no MOLECULE_ORG_ID env var, so the runtime's httpx
// calls can't attach X-Molecule-Org-Id. Tenant SG already scopes these
// ports to the VPC CIDR; the registry handlers themselves enforce
// workspace-scoped bearer auth via wsauth.HasAnyLiveToken. Allowlisting
// here only bypasses the cross-org routing check, not auth.
var tenantGuardAllowlist = map[string]struct{}{
"/health": {},
"/metrics": {},
"/health": {},
"/metrics": {},
"/registry/register": {},
"/registry/heartbeat": {},
}
// TenantGuard returns a Gin middleware configured from the MOLECULE_ORG_ID env

View File

@ -82,6 +82,33 @@ func TestTenantGuard_AllowlistBypassesCheck(t *testing.T) {
}
}
// Workspace EC2s POST to these two paths during startup and do NOT have
// MOLECULE_ORG_ID to attach as a header — CP's user-data only exports
// WORKSPACE_ID + PLATFORM_URL + RUNTIME + PORT. Without this allowlist
// entry every workspace silently fails to register and sits in
// 'provisioning' until the 10-min sweeper — the same failure class
// that caused the 2026-04-21 prod incident.
func TestTenantGuard_AllowsWorkspaceRegistryPaths(t *testing.T) {
gin.SetMode(gin.TestMode)
r := gin.New()
r.Use(TenantGuardWithOrgID("org-abc"))
// Register stub handlers so the test distinguishes "guard rejected"
// (404 from middleware) vs "route not matched" (404 from gin). The
// actual registry handlers live elsewhere; we only care that the
// guard doesn't abort before dispatch.
r.POST("/registry/register", func(c *gin.Context) { c.String(200, "register-reached") })
r.POST("/registry/heartbeat", func(c *gin.Context) { c.String(200, "heartbeat-reached") })
for _, path := range []string{"/registry/register", "/registry/heartbeat"} {
req := httptest.NewRequest("POST", path, nil)
w := httptest.NewRecorder()
r.ServeHTTP(w, req)
if w.Code != 200 {
t.Errorf("%s: workspace boot path must bypass TenantGuard; got %d (body=%q)", path, w.Code, w.Body.String())
}
}
}
// Fly-Replay-Src state path: the production path. Control plane puts the
// bare UUID in state= (no prefix — Fly 502s on `=` in the state value).
// Fly injects the whole Fly-Replay-Src header on the replayed request.