The old template was a thin OpenAI-compat multi-provider dispatcher that shared the name "hermes" with Nous Research's hermes-agent but had none of its actual capabilities (skills, memory, tools, learning loop, multi-platform gateway). Customers picking "Hermes" in canvas got a stateless chat shim instead of the agent framework they expected.

This PR rewrites the template to run the real hermes-agent (github.com/NousResearch/hermes-agent) inside the workspace container:

- Dockerfile installs hermes-agent via its upstream install.sh (the same pattern template-claude-code uses for the claude CLI).
- start.sh boots `hermes gateway` with the api_server platform on 127.0.0.1:8642, waits for /health, then exec's molecule-runtime on :8000.
- adapter.py / executor.py collapse to a thin A2A proxy that forwards every incoming message to /v1/chat/completions on the local gateway and returns the response on the A2A queue.
- providers.py and escalation.py are deleted: hermes-agent owns provider selection (`hermes model`), and its own skill/memory loop supersedes escalation.
- Env vars unchanged: HERMES_API_KEY, OPENROUTER_API_KEY, ANTHROPIC_API_KEY, OPENAI_API_KEY, GEMINI_API_KEY, MINIMAX_API_KEY are all forwarded into ~/.hermes/.env at boot.

All planning and rationale live in this repo under docs/:

- docs/PLANNING.md: why, scope, phases, risks, success criteria
- docs/ARCHITECTURE.md: port map, boot sequence, request flow, and what the bridge deliberately does NOT do
- docs/MIGRATION.md: v1.x → v2.0.0 behaviour changes (no customer migration needed; v1.x was CI-canary-only)
- docs/CONFIGURATION.md: model picking, persistence, gateway restart, inspection, timeouts

Net -195 lines of code for a massive capability upgrade.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
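The "wait for /health" step in start.sh can be sketched in Python as a simple polling loop. This is a minimal illustration, not the template's actual script: the function name `wait_for_gateway` and the timeout/interval defaults are assumptions; only the port (8642) and the /health path come from the description above.

```python
import time
import urllib.error
import urllib.request

# Port and path from the template description; everything else is illustrative.
GATEWAY_HEALTH = "http://127.0.0.1:8642/health"

def wait_for_gateway(url: str = GATEWAY_HEALTH,
                     timeout: float = 30.0,
                     interval: float = 0.5) -> None:
    """Poll the local hermes gateway's /health endpoint until it answers 200."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return  # gateway is up; caller can exec molecule-runtime
        except (urllib.error.URLError, OSError):
            pass  # gateway not listening yet; retry
        time.sleep(interval)
    raise TimeoutError(f"gateway not healthy after {timeout}s")
```

Only after this returns does the boot sequence hand off to molecule-runtime on :8000, so the A2A proxy never sees a request before the gateway can serve it.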
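The thin A2A proxy that adapter.py / executor.py collapse to amounts to: wrap the incoming message as a single-turn chat request, POST it to the local gateway, and return the reply. A minimal sketch, assuming an OpenAI-compatible request/response shape; the function name `forward_to_gateway` and the exact payload fields (e.g. whether a `model` field is required) are assumptions, while the host, port, and /v1/chat/completions path come from the description above.

```python
import json
import urllib.request

# Local gateway endpoint from the template description.
GATEWAY_URL = "http://127.0.0.1:8642/v1/chat/completions"

def forward_to_gateway(text: str,
                       url: str = GATEWAY_URL,
                       timeout: float = 120.0) -> str:
    """Forward one incoming A2A message to the gateway and return the reply text."""
    payload = json.dumps({
        # Single user turn; hermes-agent owns model selection, so no model field here.
        "messages": [{"role": "user", "content": text}],
    }).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        body = json.load(resp)
    # OpenAI-compatible response shape: choices[0].message.content
    return body["choices"][0]["message"]["content"]
```

The returned string is what the bridge would place on the A2A queue; everything stateful (skills, memory, tools) happens inside the gateway, which is why the proxy can stay this small.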
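The env-var forwarding at boot can be sketched as copying each listed key from the process environment into the hermes-agent dotenv file. The key list and the ~/.hermes/.env destination come from the description above; the function name `write_hermes_env` and the skip-unset behaviour are assumptions about how the actual boot script behaves.

```python
import os
from pathlib import Path

# Keys forwarded at boot, per the template description.
FORWARDED_KEYS = [
    "HERMES_API_KEY", "OPENROUTER_API_KEY", "ANTHROPIC_API_KEY",
    "OPENAI_API_KEY", "GEMINI_API_KEY", "MINIMAX_API_KEY",
]

def write_hermes_env(dest: Path = Path.home() / ".hermes" / ".env") -> None:
    """Write the forwarded provider keys into the hermes dotenv file,
    skipping any variable that is unset in the container environment."""
    dest.parent.mkdir(parents=True, exist_ok=True)
    lines = [f"{k}={os.environ[k]}" for k in FORWARDED_KEYS if k in os.environ]
    dest.write_text("\n".join(lines) + "\n")
```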
from adapter import HermesAgentAdapter

# Generic alias so callers can import `Adapter` without knowing the concrete class.
Adapter = HermesAgentAdapter

__all__ = ["HermesAgentAdapter", "Adapter"]