Compare commits


7 Commits

Author SHA1 Message Date
55b060413c Merge pull request 'chore(ci): adopt .runtime-version push-mode cascade signal' (#2) from chore/runtime-version-file into main
All checks were successful
CI / validate (push) Successful in 2m56s
2026-05-07 10:12:47 +00:00
devops-engineer
4989682e20 chore(ci): adopt .runtime-version push-mode cascade signal
All checks were successful
CI / validate (pull_request) Successful in 10m40s
CI / validate (push) Successful in 10m37s
Background: post-2026-05-06 SCM is Gitea, not GitHub. Gitea 1.22.6 has
no repository_dispatch / workflow_dispatch trigger API (empirically
verified across 6 candidate paths in molecule-core#20 issuecomment-913).
The molecule-core/publish-runtime.yml cascade therefore cannot fire
template builds via curl dispatch, so it pivots to push-mode instead.

This PR is the consumer side of that pivot:

- .runtime-version file at repo root — single line, plain version
  string. Currently 0.1.129 (latest published as of 2026-05-07).
  publish-runtime overwrites this on each cascade.

- publish-image.yml gains a resolve-version job that reads the file
  and forwards the value to the reusable build workflow as the
  third-priority source in the resolution chain.
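The producer/consumer handshake those two bullets describe can be sketched as follows. This is an illustration only, assuming just what is stated above (a one-line file the producer overwrites and resolve-version reads); the temp directory stands in for a template repo clone.

```shell
# Producer side (publish-runtime cascade): overwrite the one-line file.
set -eu
repo_dir="$(mktemp -d)"        # stand-in for a template repo clone
new_version="0.1.129"          # would come from the PyPI publish step
printf '%s\n' "$new_version" > "$repo_dir/.runtime-version"
# ...git add / commit / push would follow here in the real cascade,
# tripping the template's on-push CI.

# Consumer side (resolve-version job): read line 1, strip whitespace.
v="$(head -n1 "$repo_dir/.runtime-version" | tr -d '[:space:]')"
echo "resolved runtime version: $v"
```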

Sequencing context: this PR (and 8 sibling PRs to the other template
repos) MUST land before molecule-core#20 v2 is merged.

Refs molecule-core#14, molecule-core#20.
2026-05-07 03:08:18 -07:00
security-auditor
7c82c8317a ci: re-trigger after orchestrator recreated runners 1-8 (CONFIG_FILE env)
All checks were successful
CI / validate (push) Successful in 10m48s
Per saved memory feedback_act_runner_needs_config_file_env: runners 1-8
were spawned without -e CONFIG_FILE=/config.yaml; act_runner fell back
to /data/config.yaml and ignored runner.envs the whole time. Orchestrator
recreated 1-8 with the complete env. All 16 runners are now uniform with
AGENT_TOOLSDIRECTORY + RUNNER_TOOL_CACHE + GITHUB_SERVER_URL + GH_HOST.
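A recreation command consistent with the description above might look like this. The image tag, server URL, and volume path are assumptions, not values recorded by the orchestrator; only the env var names come from the commit message.

```shell
# Hypothetical runner recreation with the full env set.
# Image, URL, and config path are illustrative assumptions.
docker run -d --name runner-1 \
  -e CONFIG_FILE=/config.yaml \
  -e AGENT_TOOLSDIRECTORY=/tmp/hostedtoolcache \
  -e RUNNER_TOOL_CACHE=/tmp/hostedtoolcache \
  -e GITHUB_SERVER_URL=https://gitea.example.internal \
  -e GH_HOST=gitea.example.internal \
  -v /opt/molecule/runners/config.yaml:/config.yaml:ro \
  gitea/act_runner:latest
```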

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 02:51:37 -07:00
security-auditor
c6820c374a ci: re-trigger after orchestrator restarted runners 1-8
Some checks failed
CI / validate (push) Failing after 18s
Per saved memory feedback_runner_config_partial_deploy: orchestrator
identified that runners 1-8 last restarted before AGENT_TOOLSDIRECTORY
+ RUNNER_TOOL_CACHE were added; cycle 7 retrigger landed ~50% on stale
runners. Orchestrator restarted 1-8 at ~09:37; this empty commit
re-triggers CI on the now-consistent runner pool.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 02:41:39 -07:00
security-auditor
1e979227ba ci: re-trigger after runner-config v2 (AGENT_TOOLSDIRECTORY etc.)
Some checks failed
CI / validate (push) Failing after 18s
Empty commit to re-run CI against the act_runner config that landed
in /opt/molecule/runners/config.yaml (cycle ~58 internal#46 Phase 3).
No source change. CI now runs setup-python with /tmp/hostedtoolcache,
which works (verified in cycle 6 task 1022 log, careful-bash#2).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 02:27:49 -07:00
95a70c9a16 Merge pull request 'fix(ci): lowercase 'molecule-ai/' in cross-repo workflow refs' (#1) from fix/lowercase-org-slug into main
All checks were successful
CI / validate (push) Successful in 10m51s
2026-05-07 08:59:13 +00:00
security-auditor
a9d8037e42 fix(ci): lowercase 'molecule-ai/' in cross-repo workflow refs
Some checks failed
CI / validate (pull_request) Failing after 0s
CI / validate (push) Failing after 0s
Gitea is case-sensitive on owner slugs; canonical is lowercase
`molecule-ai/...`. Mixed-case `Molecule-AI/...` refs fail-at-0s
when the runner tries to resolve the cross-repo workflow / checkout.

Same fix as molecule-controlplane#12. Mechanical case-correction;
no behavior change beyond making CI resolve again.
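A case-correction of this shape could be reproduced with a one-liner like the following; the exact command used is not recorded in the commit, so the sed pattern is an illustration.

```shell
# Lowercase the owner slug in a cross-repo workflow ref.
before='uses: Molecule-AI/molecule-ci/.github/workflows/validate-workspace-template.yml@main'
after="$(printf '%s\n' "$before" | sed 's|Molecule-AI/|molecule-ai/|g')"
echo "$after"
```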

Refs: internal#46

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 00:59:43 -07:00
6 changed files with 33 additions and 113 deletions


@@ -2,4 +2,4 @@ name: CI
 on: [push, pull_request]
 jobs:
   validate:
-    uses: Molecule-AI/molecule-ci/.github/workflows/validate-workspace-template.yml@main
+    uses: molecule-ai/molecule-ci/.github/workflows/validate-workspace-template.yml@main


@@ -25,12 +25,38 @@ permissions:
   packages: write
 jobs:
+  # The `.runtime-version` file is the push-mode cascade signal post-
+  # 2026-05-06: when molecule-core/publish-runtime.yml ships a new
+  # version to PyPI, it does NOT call repository_dispatch (Gitea 1.22.6
+  # has no such endpoint — empirically verified molecule-core#20).
+  # Instead it git-pushes an updated `.runtime-version` to each template,
+  # which trips this workflow's `on: push: branches: [main]` trigger.
+  # This job reads that file and forwards the version to the reusable
+  # build workflow.
+  resolve-version:
+    runs-on: ubuntu-latest
+    timeout-minutes: 2
+    outputs:
+      version: ${{ steps.read.outputs.version }}
+    steps:
+      - uses: actions/checkout@v4
+      - id: read
+        run: |
+          if [ -f .runtime-version ]; then
+            v="$(head -n1 .runtime-version | tr -d '[:space:]')"
+            echo "version=$v" >> "$GITHUB_OUTPUT"
+            echo "resolved runtime version: $v"
+          else
+            echo "no .runtime-version file present — falling through to Dockerfile default"
+          fi
   publish:
-    uses: Molecule-AI/molecule-ci/.github/workflows/publish-template-image.yml@main
+    needs: resolve-version
+    uses: molecule-ai/molecule-ci/.github/workflows/publish-template-image.yml@main
     secrets: inherit
     with:
       # Cascade fires with client_payload.runtime_version = the exact
       # version PyPI just published. Forwarded as a docker --build-arg
       # so the cache key changes per-version and pip install resolves
       # freshly. Empty on push/PR — falls back to requirements.txt pin.
-      runtime_version: ${{ github.event.client_payload.runtime_version || inputs.runtime_version || '' }}
+      runtime_version: ${{ github.event.client_payload.runtime_version || inputs.runtime_version || needs.resolve-version.outputs.version || '' }}
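The amended `runtime_version` expression encodes a three-priority fallback chain (an empty string is falsy in a GitHub Actions `||` expression, matching shell `:-` semantics). A sketch of the same precedence with illustrative values; on a plain push only the file-derived value is populated:

```shell
# Priority 1: cascade payload; priority 2: workflow_call input;
# priority 3: the .runtime-version file read by resolve-version.
payload_version=""        # github.event.client_payload.runtime_version (empty on push/PR)
input_version=""          # inputs.runtime_version (empty on push/PR)
file_version="0.1.129"    # needs.resolve-version.outputs.version
runtime_version="${payload_version:-${input_version:-$file_version}}"
echo "runtime_version=$runtime_version"
```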


@@ -1,22 +0,0 @@
name: Secret scan
# Calls the canonical reusable workflow in molecule-core. Defense
# against the #2090-class leak (a hosted-agent commit slipping a
# credential-shaped string into a PR). Pattern set lives in
# molecule-core so we do not maintain a parallel copy here.
#
# Pinned to @staging because that is the active default branch on the
# upstream repo (main lags behind via the staging-promotion workflow).
# Updates ride along automatically as the upstream regex set evolves.
on:
  pull_request:
    types: [opened, synchronize, reopened]
  push:
    branches: [main, staging, master]
  merge_group:
    types: [checks_requested]
jobs:
  secret-scan:
    uses: Molecule-AI/molecule-core/.github/workflows/secret-scan.yml@staging

.runtime-version Normal file

@@ -0,0 +1 @@
0.1.129


@@ -1,85 +0,0 @@
# molecule-ai-workspace-template-autogen
A Molecule AI workspace template that runs the **Microsoft AutoGen** (`autogen-agentchat`) framework as a workspace runtime. The template ships an `Adapter` class that the platform's `molecule-runtime` ENTRYPOINT discovers and loads.
This is **not** a plugin — there is no `plugin.yaml` and no `rules/` directory. It is a runtime container image, published per `RUNTIME_VERSION` to the Molecule registry by the reusable `Molecule-AI/molecule-ci` workflow.
---
## Files
| File | Role |
|---|---|
| `adapter.py` | The single source of runtime behavior. Defines `AutoGenAdapter` (subclass of `BaseAdapter`) and `AutoGenA2AExecutor` (subclass of `a2a.server.agent_execution.AgentExecutor`). |
| `config.yaml` | Workspace metadata: `name`, `runtime: autogen`, default `model`, model picker (`models:`), required env (`OPENAI_API_KEY`), `template_schema_version: 1`. |
| `system-prompt.md` | Default system prompt — used only when the workspace config does not override it. |
| `__init__.py` | Re-exports `AutoGenAdapter` as `Adapter`. The runtime resolves it via `ENV ADAPTER_MODULE=adapter` (set in the Dockerfile). |
| `Dockerfile` | `python:3.11-slim`, runs as user `agent` (uid 1000), entrypoint is `molecule-runtime`. Honors a `RUNTIME_VERSION` build-arg that pins the wheel installed on top of `requirements.txt` — that ARG is load-bearing because it busts the pip-install cache layer for cascade-triggered builds (see Dockerfile comment, 2026-04-27 incident). |
| `requirements.txt` | `molecule-ai-workspace-runtime>=0.1.0`, `autogen-agentchat>=0.4.0`, `autogen-ext[openai]>=0.4.0`. |
| `.github/workflows/ci.yml` | Delegates to `Molecule-AI/molecule-ci/.github/workflows/validate-workspace-template.yml@main`. |
| `.molecule-ci/scripts/validate-workspace-template.py` | Local copy of the validator the CI runs. |
---
## BaseAdapter integration (adapter.py)
`AutoGenAdapter(BaseAdapter)` implements four hooks the platform expects:
- `name()` → `"autogen"` — must match `runtime:` in `config.yaml`.
- `display_name()`, `description()`, `get_config_schema()` — surfaced in the canvas template picker / config editor.
- `setup(config)` — async; imports `AssistantAgent` to fail fast if `autogen-agentchat` is missing, then calls inherited `self._common_setup(config)` (provided by `BaseAdapter`) which returns the resolved system prompt and the platform's LangChain tool list. The tools are wrapped via `_langchain_to_autogen` and stashed on `self.autogen_tools`.
- `create_executor(config)` → returns `AutoGenA2AExecutor` (the A2A protocol object the molecule-runtime serves).
`_common_setup`, `build_task_text`, `brief_task`, `extract_history`, `extract_message_text`, `set_current_task` come from `molecule_runtime.adapters.shared_runtime` — the shared layer that owns delegation, memory, sandbox, and approval tools across all adapter templates. Do not reimplement these.
---
## AutoGen specifics
- The runtime uses **only** `AssistantAgent` (single-agent shape). There is no `UserProxyAgent` and no `GroupChat` / `Swarm` wiring in this template.
- A fresh `AssistantAgent` + `OpenAIChatCompletionClient` is constructed **per `execute()` call** inside `AutoGenA2AExecutor.execute` — there is no long-lived agent instance held on `self`. The system prompt and the tool list are reused, but the chat client and agent are not pooled.
- Model resolution: the config string (e.g. `openai:gpt-4.1-mini`) is split on `":"` and only the suffix is passed to `OpenAIChatCompletionClient(model=...)`. The provider prefix (`openai:`) is stripped silently — non-OpenAI prefixes will compile but fail at request time.
- Reply extraction: `agent.run()` returns a result with `messages`. The executor walks `messages` in reverse and returns the first item whose `content` is a `str`. If none qualifies, it falls back to `str(result)`.
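The model-string handling in the resolution bullet above can be illustrated with a minimal sketch; the function name is mine, not the adapter's.

```python
# Illustration of the model-string resolution described above:
# the provider prefix is split off and silently discarded.
def resolve_model(config_value: str) -> str:
    # "openai:gpt-4.1-mini" -> "gpt-4.1-mini"; a bare name passes through.
    return config_value.split(":", 1)[-1]

print(resolve_model("openai:gpt-4.1-mini"))  # gpt-4.1-mini
print(resolve_model("gpt-4.1-mini"))         # gpt-4.1-mini
```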
---
## Tool wrapping (LangChain → AutoGen)
`_langchain_to_autogen(lc_tool)` wraps each platform LangChain `BaseTool` as an `autogen_core.tools.FunctionTool` with a single typed parameter `input: str`. The wrapper:
1. Tries `json.loads(input)` and, if the result is a `dict`, calls `await lc_tool.ainvoke(parsed_dict)` — preserves structured-input tools.
2. Otherwise falls back to `await lc_tool.ainvoke(input)`.
This is the bridge AutoGen requires because **AutoGen tools must have typed signatures (no `**kwargs`)** while LangChain tools accept opaque string-or-dict input. Keep the bridge function-shape — replacing it with a multi-arg signature breaks every platform tool that ships structured args (`delegate_task`, `commit_memory`, etc.).
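The dispatch the two numbered steps describe can be sketched without the real libraries. `FakeTool` and `wrap_as_single_string_tool` are simplified stand-ins of my own naming, not the adapter's actual classes; the real bridge wraps a LangChain `BaseTool` into an `autogen_core.tools.FunctionTool`.

```python
import asyncio
import json

# Stand-in for a platform LangChain BaseTool; the real one's ainvoke()
# accepts either a dict (structured args) or a raw string.
class FakeTool:
    async def ainvoke(self, value):
        return f"got {type(value).__name__}"

def wrap_as_single_string_tool(lc_tool):
    # Mirrors the dispatch described above: parse JSON first, forward a
    # dict result as structured input, otherwise pass the raw string.
    async def run(input: str) -> str:
        try:
            parsed = json.loads(input)
        except json.JSONDecodeError:
            parsed = None
        if isinstance(parsed, dict):
            return await lc_tool.ainvoke(parsed)
        return await lc_tool.ainvoke(input)
    return run

tool = wrap_as_single_string_tool(FakeTool())
print(asyncio.run(tool('{"path": "README.md"}')))  # structured path -> got dict
print(asyncio.run(tool("plain text")))             # fallback path -> got str
```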
---
## Async boundaries
- `setup`, `create_executor`, `execute`, and `cancel` are all `async def` — AutoGen's whole API is async-first; do not call any of them from synchronous code.
- `OpenAIChatCompletionClient` and `AssistantAgent` are constructed inside the async `execute()` body. If you ever hoist them to `__init__` for reuse, do it lazily inside an async method — not in a sync constructor.
- `cancel()` is a no-op (`pass`). The platform may call it on disconnect; until it does something, an in-flight `agent.run()` will keep running until the OpenAI call finishes. Do not assume canceling the A2A task halts model spend.
- `set_current_task` is wrapped in `try / finally` so the heartbeat label is cleared even on exception. Preserve that pattern when adding new error paths.
---
## Conventions / what NOT to do
- **Do not** hold workspace-mutable state on `AutoGenA2AExecutor`. Each `execute()` call rebuilds the agent. Conversation history comes from `extract_history(context)`, not from any in-process memory.
- **Do not** add provider-specific clients here. If multi-provider routing is needed, route inside `_common_setup` upstream (in `molecule_runtime`); keep this template OpenAI-only to match `requirements.txt` and `config.yaml`'s declared models.
- **Do not** edit `adapter.py` to bypass `_common_setup` — the platform expects every adapter template to expose the same delegation / memory / sandbox / approval tool set, and that set is built there.
- **Do not** bump `template_schema_version` without coordinating with the platform team — `config.yaml` declares `template_schema_version: 1` and the canvas template picker reads it.
- **Do not** modify the Dockerfile's `ARG RUNTIME_VERSION=` line or its position above the `pip install` layer — it is the cache-bust trigger that fixes cascade-build staleness.
- **Do not** add a `CLAUDE.md`-driven runtime config; everything the runtime reads is in `config.yaml`. `CLAUDE.md` is for repo contributors only.
---
## Where to make changes
| Want to... | Edit... |
|---|---|
| Add or remove an OpenAI model | `config.yaml` `models:` block + bump `version:` |
| Change the default system prompt | `system-prompt.md` |
| Pick up a newer `molecule-ai-workspace-runtime` | Bump pin in `requirements.txt`; the cascade publish workflow handles the image |
| Change tool-bridge behavior | `_langchain_to_autogen` in `adapter.py` |
| Switch to multi-agent (`GroupChat` / `Swarm`) | `AutoGenA2AExecutor.execute` — re-wire the agent construction; keep the LangChain bridge intact |


@@ -107,12 +107,12 @@ class AutoGenA2AExecutor(AgentExecutor):
         self._heartbeat = heartbeat
     async def execute(self, context, event_queue):
-        from a2a.helpers import new_text_message
+        from a2a.utils import new_agent_text_message
         user_message = extract_message_text(context)
         if not user_message:
-            await event_queue.enqueue_event(new_text_message("No message provided"))
+            await event_queue.enqueue_event(new_agent_text_message("No message provided"))
             return
         await set_current_task(self._heartbeat, brief_task(user_message))
@@ -153,7 +153,7 @@ class AutoGenA2AExecutor(AgentExecutor):
         finally:
             await set_current_task(self._heartbeat, "")
-        await event_queue.enqueue_event(new_text_message(reply))
+        await event_queue.enqueue_event(new_agent_text_message(reply))
     async def cancel(self, context, event_queue):  # pragma: no cover
         pass