Bundles the same hermes+OpenAI → claude-code+MiniMax migration onto
the full-lifecycle E2E that's been red on every provisioning-critical
push since 2026-05-01. Same root cause as the canary fix in the prior
commit: MOLECULE_STAGING_OPENAI_KEY hit insufficient_quota and there's
no SLA on operator billing top-up.
Same shape as the canary commit: claude-code as the default runtime +
MiniMax as the primary key + hermes/langgraph kept as workflow_dispatch
options with OpenAI fallback. The per-runtime verify-key case statement
matches canary-staging.yml + continuous-synth-e2e.yml byte-for-byte.
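For reference, a minimal sketch of what a per-runtime verify-key case
statement looks like. Variable and secret names here are illustrative,
not copied from the actual workflows:

```shell
# Illustrative only: fail fast if the key for the selected runtime is
# absent, so the E2E doesn't burn minutes before hitting a 401.
verify_key() {
  runtime="$1"
  case "$runtime" in
    claude-code)
      # MiniMax path: separate billing account, no OpenAI quota dependency
      [ -n "${MINIMAX_API_KEY:-}" ] || { echo "missing MINIMAX_API_KEY" >&2; return 1; }
      ;;
    hermes|langgraph)
      # Operator-dispatch fallback path: still depends on OpenAI quota
      [ -n "${OPENAI_API_KEY:-}" ] || { echo "missing OPENAI_API_KEY" >&2; return 1; }
      ;;
    *)
      echo "unknown runtime: $runtime" >&2
      return 1
      ;;
  esac
  echo "key present for $runtime"
}
```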
Two extra wrinkles vs canary:
- Dispatch input `runtime` default flipped from "hermes" to "claude-code"
so operators dispatching from the UI get the safe path by default.
They can still pick hermes/langgraph from the dropdown when they
specifically want to exercise OpenAI.
- E2E_MODEL_SLUG is dispatch-aware: MiniMax-M2.7-highspeed for
claude-code, openai/gpt-4o for hermes (slash-form per
derive-provider.sh), openai:gpt-4o for langgraph (colon-form per
init_chat_model). The branch comment in lib/model_slug.sh covers
the rationale; pinning the slug here keeps the dispatch UX stable
even when operators don't override.
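The first wrinkle, sketched as a workflow_dispatch input. This is an
illustrative fragment, not the actual workflow file; the option list
mirrors the runtimes described above:

```yaml
# Illustrative: operators get claude-code (MiniMax) by default from the
# UI dropdown; hermes/langgraph remain selectable to exercise OpenAI.
on:
  workflow_dispatch:
    inputs:
      runtime:
        description: "Agent runtime for the full-lifecycle E2E"
        type: choice
        default: claude-code
        options:
          - claude-code
          - hermes
          - langgraph
```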
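The second wrinkle, sketched as a runtime-to-slug mapping. The real
branch logic lives in lib/model_slug.sh per the text above, so treat
this as a shape reference only:

```shell
# Illustrative: pin the model slug per runtime so dispatch UX stays
# stable when operators don't override E2E_MODEL_SLUG.
model_slug_for() {
  case "$1" in
    claude-code) echo "MiniMax-M2.7-highspeed" ;;  # MiniMax default path
    hermes)      echo "openai/gpt-4o" ;;           # slash-form (derive-provider.sh)
    langgraph)   echo "openai:gpt-4o" ;;           # colon-form (init_chat_model)
    *)
      echo "unknown runtime: $1" >&2
      return 1
      ;;
  esac
}
```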
Once this and the canary commit both land, the only OpenAI-dependent
E2E surface is the operator-dispatch fallback. The cron canary, the
synth E2E, AND the full-lifecycle gate are all on MiniMax — separate
billing account, no OpenAI quota dependency on auto-runs.