Audited every a2a-sdk surface in workspace/ against the installed
1.0.2 wheel. Found and fixed:
main.py (the live workspace startup path):
• create_jsonrpc_routes(rpc_url='/', enable_v0_3_compat=True) —
rpc_url required in 1.x; v0.3 compat enables inbound legacy
clients (`"role": "user"` lowercase) without forcing them to
upgrade. Pairs with the outbound rename below.
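For reference, a minimal sketch of the new startup call (the import path and surrounding wiring are assumptions, not verbatim main.py):

```python
# Sketch only: import path assumed; kwargs match the 1.x call described above.
from a2a.server import create_jsonrpc_routes

routes = create_jsonrpc_routes(
    rpc_url="/",              # required in 1.x, no default anymore
    enable_v0_3_compat=True,  # accept legacy v0.3 payloads (lowercase "user")
)
```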
a2a_executor.py:
• TextPart/FilePart/FileWithUri removed in 1.x. Part is now a
flat proto message: Part(text=…) / Part(url=…, filename=…,
media_type=…). Updated the file-attachment branch (only
reachable when an agent emits files; the harness's PONG path
didn't exercise this, but it's a latent crash).
• Message field names: messageId/taskId/contextId →
message_id/task_id/context_id (proto3 snake_case).
• Role enum: Role.agent → Role.ROLE_AGENT (proto enum).
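Roughly, the new construction (the a2a.types import path is an assumption; field and enum names are as audited above):

```python
from uuid import uuid4

from a2a.types import Message, Part, Role  # import path assumed

text_part = Part(text="PONG")              # 1.x: flat proto message, no TextPart wrapper
file_part = Part(                          # was FilePart/FileWithUri in 0.x
    url="https://example.com/report.pdf",
    filename="report.pdf",
    media_type="application/pdf",
)
message = Message(
    message_id=str(uuid4()),               # was messageId
    task_id="task-001",                    # was taskId
    context_id="ctx-001",                  # was contextId
    role=Role.ROLE_AGENT,                  # was Role.agent
    parts=[text_part, file_part],
)
```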
Outbound JSON-RPC payloads (8 files):
• "role": "user" → "role": "ROLE_USER" — proto3 JSON serialization
is strict about enum values. Sites: a2a_client, a2a_cli, main
(initial+idle prompts), heartbeat, builtin_tools/a2a_tools,
builtin_tools/delegation. Wire JSON keys stay camelCase
(proto3 default), only the role enum value changed.
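Shape of the change on the wire (fields beyond the rename are illustrative):

```python
# Outbound message/send params, as serialized by proto3 JSON:
params = {
    "message": {
        "messageId": "m-1",       # wire keys stay camelCase
        "role": "ROLE_USER",      # was "user"; proto3 enum values are strict
        "parts": [{"text": "ping"}],
    }
}
```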
google-adk/adapter.py:
• new_agent_text_message → new_text_message (4 sites). This
adapter's directory has a hyphen, so it can't be imported as a
Python module — effectively dead code, but the wheel ships the
file and a future fix should keep it correct against 1.x.
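The rename at those four sites, sketched (import path assumed):

```python
from a2a.utils import new_text_message  # 1.x name; was new_agent_text_message

async def emit_reply(event_queue, reply: str) -> None:
    # Same shape as the adapter's call sites after the rename.
    await event_queue.enqueue_event(new_text_message(reply))
```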
Why one PR instead of seven: every previous a2a-sdk migration finding
landed as its own publish → cascade → harness → next-bug cycle.
Today's audit ran every a2a-sdk symbol/type/method in workspace/
against the installed 1.0.2 wheel in a single sweep and tested the
critical paths (Message construction, Part construction, Role enum
parsing) against the actual SDK. This should be the last migration PR.
Verified locally:

```bash
python3 scripts/build_runtime_package.py --version 0.1.99 \
    --out /tmp/build-final
pip install /tmp/build-final
python -c "import molecule_runtime.main; \
    from molecule_runtime.a2a_executor import LangGraphA2AExecutor"
```

→ ✓ all imports clean against a2a-sdk 1.0.2
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
# Google ADK Adapter

Molecule AI workspace adapter for Google Agent Development Kit (ADK) — Google's official multi-agent Python SDK (~19k ⭐, Apache-2.0).
## Overview

This adapter bridges the A2A protocol used by the Molecule AI platform to Google ADK's runner/session model. Agents are backed by Google Gemini models via AI Studio or Vertex AI. Each workspace gets an `LlmAgent` wrapped in a `Runner` with an `InMemorySessionService`; sessions are tied to A2A task context IDs for stable, isolated per-conversation state.

Runtime key: `google-adk`
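A minimal sketch of that session keying (method names follow google-adk's `InMemorySessionService`; the adapter's actual internals may differ):

```python
from google.adk.sessions import InMemorySessionService

session_service = InMemorySessionService()

async def ensure_session(context_id: str, user_id: str = "molecule"):
    """Create or reuse an ADK session keyed on the A2A task context ID."""
    session = await session_service.get_session(
        app_name="molecule-adk-agent", user_id=user_id, session_id=context_id
    )
    if session is None:
        session = await session_service.create_session(
            app_name="molecule-adk-agent", user_id=user_id, session_id=context_id
        )
    return session
```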
## Installation

The adapter dependencies are installed automatically by `entrypoint.sh` from this directory's `requirements.txt`:

```bash
pip install -r adapters/google-adk/requirements.txt
```

You'll also need a Google API key (AI Studio) or Vertex AI credentials.
## Configuration

### config.yaml

```yaml
runtime: google-adk
model: google:gemini-2.0-flash   # or gemini-1.5-pro, gemini-2.5-flash, etc.
runtime_config:
  agent_name: my-agent           # optional, default: molecule-adk-agent
  max_output_tokens: 8192        # optional, default: 8192
  temperature: 1.0               # optional, default: 1.0
```
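Roughly, these keys map onto an ADK `LlmAgent` like this (a sketch, not the adapter's exact code):

```python
from google.adk.agents import LlmAgent
from google.genai import types

agent = LlmAgent(
    name="my-agent",                      # runtime_config.agent_name
    model="gemini-2.0-flash",             # "google:" prefix already stripped
    instruction="You are a helpful assistant.",  # from the system prompt
    generate_content_config=types.GenerateContentConfig(
        temperature=1.0,                  # runtime_config.temperature
        max_output_tokens=8192,           # runtime_config.max_output_tokens
    ),
)
```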
### Environment Variables

| Variable | Required | Description |
|---|---|---|
| `GOOGLE_API_KEY` | Yes (unless Vertex AI) | Google AI Studio API key |
| `GOOGLE_GENAI_USE_VERTEXAI` | No | Set to `"1"` to use Vertex AI instead of AI Studio |
| `GOOGLE_CLOUD_PROJECT` | When using Vertex AI | GCP project ID |
| `GOOGLE_CLOUD_LOCATION` | When using Vertex AI | GCP region, e.g. `us-central1` |
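A pre-flight check matching the table (the adapter's real validation may differ):

```python
import os

def validate_credentials() -> None:
    """Fail fast if the credential path from the table above is incomplete."""
    if os.environ.get("GOOGLE_GENAI_USE_VERTEXAI") == "1":
        for var in ("GOOGLE_CLOUD_PROJECT", "GOOGLE_CLOUD_LOCATION"):
            if not os.environ.get(var):
                raise RuntimeError(f"{var} is required when using Vertex AI")
    elif not os.environ.get("GOOGLE_API_KEY"):
        raise RuntimeError("GOOGLE_API_KEY is required for AI Studio")
```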
## Usage Example

```python
import asyncio

from adapter_base import AdapterConfig
from adapters.google_adk.adapter import GoogleADKAdapter


async def main():
    config = AdapterConfig(
        model="google:gemini-2.0-flash",
        system_prompt="You are a helpful assistant.",
        runtime_config={
            "agent_name": "demo-agent",
            "max_output_tokens": 1024,
            "temperature": 0.7,
        },
        workspace_id="ws-demo",
    )

    adapter = GoogleADKAdapter()
    await adapter.setup(config)                       # validates keys, loads plugins/skills
    executor = await adapter.create_executor(config)  # returns GoogleADKA2AExecutor
    # executor.execute(context, event_queue) is called by the A2A server per turn
    print(f"Adapter: {adapter.display_name()} — model {config.model}")


asyncio.run(main())
```
## Running via A2A

Once the workspace is provisioned, send A2A messages as normal:

```bash
curl -X POST http://localhost:8000 \
  -H 'Content-Type: application/json' \
  -d '{
    "method": "message/send",
    "params": {
      "message": {
        "role": "user",
        "parts": [{"kind": "text", "text": "What is 2 + 2?"}]
      }
    }
  }'
```
## Supported Models

Any model supported by Google ADK and available through your credential path:

| Model | Notes |
|---|---|
| `gemini-2.0-flash` | Recommended — fast, cost-effective |
| `gemini-2.5-flash` | Latest preview, strong reasoning |
| `gemini-1.5-pro` | Higher capability, higher latency |
| `gemini-1.5-flash` | Fast, lower cost |

Use the `google:` prefix in `config.yaml` — the adapter strips it before passing the model name to ADK.
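The strip amounts to something like this (the helper name is illustrative):

```python
def strip_provider_prefix(model: str) -> str:
    # "google:gemini-2.0-flash" -> "gemini-2.0-flash"
    return model.split(":", 1)[1] if ":" in model else model
```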
## Architecture

```
A2A Request
     │
     ▼
GoogleADKA2AExecutor.execute()
     │
     ├── extract_message_text()   ← shared_runtime helper
     ├── _ensure_session()        ← create/reuse InMemorySessionService session
     ├── _build_content()         ← wrap text in google.genai.types.Content
     │
     ▼
runner.run_async(session_id, user_id, new_message)
     │
     ▼
ADK Event stream → filter is_final_response() → extract text
     │
     ▼
event_queue.enqueue_event(new_text_message(reply))
     │
     ▼
A2A Response
```
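The middle of that pipeline, sketched in Python (`Runner.run_async` and `Event.is_final_response()` are google-adk APIs; the surrounding code is illustrative):

```python
from google.genai import types

async def run_turn(runner, user_id: str, session_id: str, text: str) -> str:
    """Run one ADK turn and return only the final response text."""
    new_message = types.Content(role="user", parts=[types.Part(text=text)])
    reply = ""
    async for event in runner.run_async(
        user_id=user_id, session_id=session_id, new_message=new_message
    ):
        # Skip intermediate events (tool calls, partial chunks); keep the final text.
        if event.is_final_response() and event.content and event.content.parts:
            reply = "".join(part.text or "" for part in event.content.parts)
    return reply
```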
## License

Apache-2.0 — same as google/adk-python.