The canary workflow has been failing for ~30 consecutive runs (issue #1500, opened 2026-04-21) with the same error:

```
[hermes-agent error 500] No LLM provider configured. Run `hermes model` to select a provider, or run `hermes setup` for first-time configuration.
```

Root cause: the canary's `env` block was missing `E2E_OPENAI_API_KEY`. Without it:

1. `tests/e2e/test_staging_full_saas.sh` provisions the workspace with empty secrets;
2. template-hermes `start.sh` seeds `~/.hermes/.env` with no provider keys;
3. `derive-provider.sh` resolves the model slug `openai/gpt-4o` to `PROVIDER=openrouter` (hermes has no native `openai` provider in its registry);
4. the A2A request at step 8/11 fails with the "No LLM provider configured" error from hermes-agent.

The full-lifecycle workflow (`e2e-staging-saas.yml`, line 84) carries the same secret correctly. Fix: mirror its pattern and add a fail-fast preflight so future regressions surface in under 5 seconds instead of after 8 minutes of provision-then-die.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
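The fail-fast preflight described above could be sketched roughly as follows. This is an assumption, not code from the repo: the function name `preflight_check` and its messages are hypothetical; only the variable name `E2E_OPENAI_API_KEY` comes from the issue. The idea is to validate required secrets before the lengthy provision step runs.

```shell
#!/usr/bin/env bash
# Hypothetical preflight sketch: fail immediately if the canary's
# required secret is absent, instead of provisioning for ~8 minutes
# and then dying inside the workspace.
preflight_check() {
  if [ -z "${E2E_OPENAI_API_KEY:-}" ]; then
    echo "preflight: E2E_OPENAI_API_KEY is not set; aborting before provisioning" >&2
    return 1
  fi
  echo "preflight: required secrets present"
}
```

Run early in the workflow (before any provisioning step), this turns a silent empty-secret into an explicit sub-second failure with an actionable message.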
Workflow files:

- auto-promote-staging.yml
- block-internal-paths.yml
- canary-staging.yml
- canary-verify.yml
- check-merge-group-trigger.yml
- ci.yml
- codeql.yml
- e2e-api.yml
- e2e-staging-canvas.yml
- e2e-staging-saas.yml
- e2e-staging-sanity.yml
- promote-latest.yml
- publish-canvas-image.yml
- publish-workspace-server-image.yml
- retarget-main-to-staging.yml