feat(comfyui): add hardware check + auto-gate local install on verdict

Layers a programmatic hardware-feasibility check on top of the v4 skill
so the agent doesn't silently push users toward a local install they
can't actually run. The official comfy-cli supports --nvidia / --amd /
--m-series / --cpu, but has no guard against "4 GB laptop GPU on SDXL"
or "Intel Mac falling back to CPU" — both route to comfy-cli paths in
the original table and then fail on first workflow.

- scripts/hardware_check.py: detect OS/arch/GPU (NVIDIA nvidia-smi,
  AMD rocm-smi, Apple M1+ via arm64+sysctl, Intel Arc via clinfo),
  VRAM, system/unified RAM. Emits JSON
  {verdict: ok|marginal|cloud, recommended_install_path, comfy_cli_flag}
  with practical thresholds: discrete GPU >=6 GB VRAM minimum,
  Apple Silicon >=16 GB unified memory minimum, Intel Mac -> cloud,
  no accelerator -> cloud. comfy_cli_flag maps directly to
  `comfy install` so the agent can stitch the whole flow together
  (an illustrative report is shown at the end of this message).

- scripts/comfyui_setup.sh: runs hardware_check.py first when no
  explicit flag is passed. If verdict=cloud, refuses to install
  locally, prints Comfy Cloud URL + an override command, exits 2.
  Otherwise auto-selects the right --nvidia/--amd/--m-series flag
  for `comfy install`. Surfaces marginal-verdict notes to the user.

- SKILL.md Setup & Onboarding: adds mandatory Step 0 "Check If This
  Machine Can Run ComfyUI Locally" ahead of the Path A-E selection.
  Documents the verdict thresholds inline, ties verdict + comfy_cli_flag
  to the install paths, and updates the path-choice table so
  "verdict: cloud" is the first row. Quick-Start "Detect Environment"
  block extended to include the hardware check. Verification
  checklist gains a hardware-check gate.

- Frontmatter setup.help rewritten to point at hardware_check.py
  first. Version bumped 4.0.0 -> 4.1.0.
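
For reference, an illustrative report for a hypothetical 8 GB NVIDIA card (GPU name,
driver, and RAM figures are made up; field names and note text follow the script's
output schema):

```json
{
  "os": "Linux",
  "arch": "x86_64",
  "system_ram_gb": 32.0,
  "gpu": {"vendor": "nvidia", "name": "NVIDIA GeForce RTX 3070", "vram_gb": 8.0, "driver": "550.54.14"},
  "verdict": "ok",
  "recommended_install_path": "nvidia",
  "comfy_cli_flag": "--nvidia",
  "notes": ["NVIDIA GeForce RTX 3070 (8.0 GB VRAM) — SDXL comfortable, Flux possible with optimizations."],
  "install_urls": {
    "desktop": "https://docs.comfy.org/installation/desktop",
    "manual": "https://docs.comfy.org/installation/manual_install",
    "comfy_cli": "https://docs.comfy.org/comfy-cli/getting-started",
    "cloud": "https://platform.comfy.org"
  }
}
```
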
teknium1 2026-04-29 12:38:09 -07:00 committed by Teknium
parent 528a13b37a
commit 9d7ece362d
4 changed files with 431 additions and 12 deletions

View File

@@ -1,7 +1,7 @@
---
name: comfyui
description: "Generate images, video, and audio with ComfyUI — install, launch, manage nodes/models, run workflows with parameter injection. Uses the official comfy-cli for lifecycle and direct REST API for execution."
-version: 4.0.0
+version: 4.1.0
requires: ComfyUI (local or Comfy Cloud); comfy-cli (pip install comfy-cli)
author: [kshitijk4poor, alt-glitch]
license: MIT
@@ -9,7 +9,7 @@ platforms: [macos, linux, windows]
prerequisites:
commands: ["python3"]
setup:
-help: "pip install comfy-cli && comfy install. Cloud: get API key at platform.comfy.org"
+help: "Run scripts/hardware_check.py FIRST to decide local vs Comfy Cloud; then scripts/comfyui_setup.sh auto-installs locally (or use Cloud API key for platform.comfy.org)."
metadata:
hermes:
tags:
@@ -37,7 +37,8 @@ setup/management and direct REST API calls for workflow execution.
**Scripts in this skill:**
-- `scripts/comfyui_setup.sh` — full setup automation (install + launch + verify)
+- `scripts/hardware_check.py` — detect GPU/VRAM/Apple Silicon, decide local vs Comfy Cloud
+- `scripts/comfyui_setup.sh` — full setup automation (hardware check + install + launch + verify)
- `scripts/extract_schema.py` — reads workflow JSON, outputs which parameters are controllable
- `scripts/run_workflow.py` — injects user args, submits workflow, monitors progress, downloads outputs
- `scripts/check_deps.py` — checks if required custom nodes and models are installed
@@ -81,9 +82,13 @@ param injection, execution monitoring, and output download that the CLI doesn't
# What's available?
command -v comfy >/dev/null 2>&1 && echo "comfy-cli: installed"
curl -s http://127.0.0.1:8188/system_stats 2>/dev/null && echo "server: running"
# Can this machine actually run ComfyUI locally? (GPU/VRAM/Apple Silicon check)
python3 scripts/hardware_check.py
```
-If nothing is installed, go to **Setup & Onboarding** below.
+If nothing is installed, go to **Setup & Onboarding** below — but always run the
+hardware check first, before picking an install path.
If the server is already running, skip to **Core Workflow**.
## Core Workflow
@@ -175,22 +180,69 @@ Show images to the user via `vision_analyze` or return the file path directly.
## Setup & Onboarding
When a user asks to set up ComfyUI, walk them through the path that fits their situation.
Ask what hardware they have and whether they want local or cloud.
**Do not assume they can run ComfyUI locally** — run the hardware check first, then use
its verdict to pick the right install path.
**Official docs:** https://docs.comfy.org/installation
**CLI docs:** https://docs.comfy.org/comfy-cli/getting-started
**Cloud docs:** https://docs.comfy.org/get_started/cloud
### Step 0: Check If This Machine Can Run ComfyUI Locally (MANDATORY)
Before recommending an install path, run:
```bash
python3 scripts/hardware_check.py --json
```
It detects OS, GPU (NVIDIA CUDA / AMD ROCm / Apple Silicon / Intel Arc), VRAM,
and unified/system RAM, then returns a verdict plus a suggested `comfy-cli` flag:
| Verdict | Meaning | Action |
|------------|-----------------------------------------------------------|-------------------------------------------------|
| `ok` | ≥8 GB VRAM (discrete) OR ≥32 GB unified (Apple Silicon) | Local install — use `comfy_cli_flag` from report |
| `marginal` | SD1.5 works; SDXL tight; Flux/video unlikely | Local OK for light workflows, else **Path A (Cloud)** |
| `cloud`    | No usable GPU, <6 GB VRAM, <16 GB Apple unified, Intel Mac | **Go straight to Path A (Comfy Cloud)**; do NOT install locally |
Hardware thresholds the skill enforces:
- **Discrete GPU minimum:** 6 GB VRAM. Below that, most modern models won't load.
- **Apple Silicon:** M1 or newer (ARM64). Intel Macs have no MPS backend — Cloud only.
- **Apple Silicon memory:** 16 GB unified minimum. 8 GB M1/M2 will swap/OOM on SDXL/Flux.
- **No accelerator at all:** CPU-only is listed as a comfy-cli option but a single SDXL
image takes 10+ minutes — treat it as unusable and route to Cloud.
The report's `comfy_cli_flag` field gives you the exact flag for Step 1 below:
`--nvidia`, `--amd`, or `--m-series`. For Intel Arc, use Path E (manual install).
For `verdict: cloud`, skip to Path A.
Surface the `notes` array verbatim to the user so they understand why a
particular path was recommended.
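
If you want to wire these steps together manually instead of using
`scripts/comfyui_setup.sh` (which automates exactly this), a minimal sketch looks like
the following (it assumes you are running from the skill root; the JSON parsing mirrors
what the setup script does):

```bash
# Minimal sketch of the Step 0 gate — same logic comfyui_setup.sh implements.
REPORT="$(python3 scripts/hardware_check.py --json)"
VERDICT="$(echo "$REPORT" | python3 -c 'import sys,json; print(json.load(sys.stdin)["verdict"])')"
FLAG="$(echo "$REPORT" | python3 -c 'import sys,json; print(json.load(sys.stdin).get("comfy_cli_flag") or "")')"

if [ "$VERDICT" = "cloud" ] || [ -z "$FLAG" ]; then
  echo "Local install not recommended; use Comfy Cloud: https://platform.comfy.org"
else
  comfy --skip-prompt install "$FLAG"   # --nvidia, --amd, or --m-series per the report
fi
```
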
### Choosing an Installation Path
Use the hardware check result first. The table below is a fallback for when the user
has already told you their hardware or you need to narrow down between multiple
viable paths:
| Situation | Recommended Path |
|-----------|-----------------|
-| No GPU / just want to try it | **Comfy Cloud** (zero setup) |
-| Windows + NVIDIA GPU + non-technical | **ComfyUI Desktop** (one-click installer) |
-| Windows + NVIDIA GPU + technical | **Portable build** or **comfy-cli** |
-| Linux + any GPU | **comfy-cli** (easiest) or manual install |
-| macOS + Apple Silicon | **ComfyUI Desktop** (macOS) or **comfy-cli** |
-| Headless / server / CI | **comfy-cli** |
+| `verdict: cloud` from hardware check | **Path A: Comfy Cloud** |
+| No GPU / just want to try it | **Path A: Comfy Cloud** (zero setup) |
+| Windows + NVIDIA GPU + non-technical | **Path B: ComfyUI Desktop** (one-click installer) |
+| Windows + NVIDIA GPU + technical | **Path C: Portable** or **Path D: comfy-cli** |
+| Linux + any GPU | **Path D: comfy-cli** (easiest) or Path E manual |
+| macOS + Apple Silicon | **Path B: ComfyUI Desktop** or **Path D: comfy-cli** |
+| Headless / server / CI | **Path D: comfy-cli** |
For the fully automated path (hardware check → install → launch), just run:
```bash
bash scripts/comfyui_setup.sh
```
It runs `hardware_check.py` internally, refuses to install locally when the verdict
is `cloud`, picks the right `comfy-cli` flag otherwise, then installs and launches.
---
@@ -559,6 +611,7 @@ curl -s http://127.0.0.1:8188/system_stats | python3 -m json.tool
## Verification Checklist
- [ ] `hardware_check.py` verdict is `ok` OR the user explicitly chose Comfy Cloud
- [ ] `comfy` available on PATH (or `uvx --from comfy-cli comfy --help` works)
- [ ] `curl http://127.0.0.1:8188/system_stats` returns JSON
- [ ] `comfy model list` shows at least one checkpoint

View File

@@ -2,8 +2,14 @@
# ComfyUI Setup — Install, launch, and verify using the official comfy-cli.
# Usage: bash scripts/comfyui_setup.sh [--nvidia|--amd|--m-series|--cpu]
#
# If no flag is passed, runs hardware_check.py to detect the right one
# automatically, and refuses to install locally when the verdict is "cloud"
# (no usable GPU, too little VRAM, Intel Mac, etc.) — pointing the user
# at Comfy Cloud instead.
#
# Prerequisites: Python 3.10+, pip
# What it does:
# 0. Hardware check (skipped if a flag was passed explicitly)
# 1. Installs comfy-cli (if not present)
# 2. Disables analytics tracking
# 3. Installs ComfyUI + ComfyUI-Manager
@@ -12,7 +18,55 @@
set -euo pipefail
GPU_FLAG="${1:---nvidia}" # Default to NVIDIA
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
HARDWARE_CHECK="$SCRIPT_DIR/hardware_check.py"
# Step 0: Hardware check (auto-detect GPU flag when none was provided)
if [ $# -ge 1 ]; then
  GPU_FLAG="$1"
  echo "==> GPU flag: $GPU_FLAG (user-supplied, skipping hardware check)"
else
  if [ ! -f "$HARDWARE_CHECK" ]; then
    echo "==> hardware_check.py not found, defaulting to --nvidia"
    GPU_FLAG="--nvidia"
  else
    echo "==> Running hardware check..."
    set +e
    HW_JSON="$(python3 "$HARDWARE_CHECK" --json)"
    HW_EXIT=$?
    set -e
    echo "$HW_JSON"
    echo ""

    VERDICT="$(echo "$HW_JSON" | python3 -c 'import sys,json; print(json.load(sys.stdin).get("verdict",""))')"
    FLAG="$(echo "$HW_JSON" | python3 -c 'import sys,json; print(json.load(sys.stdin).get("comfy_cli_flag") or "")')"

    if [ "$VERDICT" = "cloud" ]; then
      echo ""
      echo "==> Hardware check: this machine is not suitable for local ComfyUI."
      echo "    Recommended: Comfy Cloud — https://platform.comfy.org"
      echo ""
      echo "    If you want to override and install anyway, re-run with an"
      echo "    explicit flag: bash $0 --nvidia|--amd|--m-series|--cpu"
      exit 2
    fi

    if [ -z "$FLAG" ]; then
      echo "==> Hardware check couldn't pick a comfy-cli flag. Defaulting to --nvidia."
      echo "    (For Intel Arc or unsupported hardware, use the manual install path.)"
      GPU_FLAG="--nvidia"
    else
      GPU_FLAG="$FLAG"
    fi

    if [ "$VERDICT" = "marginal" ]; then
      echo "==> Hardware check: verdict is MARGINAL."
      echo "    SD1.5 should work; SDXL/Flux may be slow or OOM."
      echo "    Consider Comfy Cloud for heavier workflows: https://platform.comfy.org"
      echo ""
    fi
  fi
fi
echo "==> ComfyUI Setup"
echo " GPU flag: $GPU_FLAG"

View File

@@ -0,0 +1,311 @@
#!/usr/bin/env python3
"""Detect whether this machine can realistically run ComfyUI locally.

Emits a structured JSON report the agent can read to decide whether to:
  - help the user install ComfyUI locally, or
  - steer them to Comfy Cloud instead.

Usage:
    python3 hardware_check.py [--json]

Exit code:
    0  "ok"        can run local ComfyUI at reasonable speed
    1  "marginal"  technically works but slow / memory-tight
    2  "cloud"     local is not viable, recommend Comfy Cloud

The JSON report always prints to stdout regardless of exit code.

Output fields the agent should read:
    verdict:                  "ok" | "marginal" | "cloud"
    recommended_install_path: "nvidia" | "amd" | "apple-silicon" | "intel" | "comfy-cloud"
    comfy_cli_flag:           "--nvidia" | "--amd" | "--m-series" | None
                              (pass directly to `comfy install` when verdict != cloud)
    gpu:                      detected GPU info or null
    notes:                    list of human-readable strings to surface to the user
"""
from __future__ import annotations

import json
import os
import platform
import re
import shutil
import subprocess
import sys

# Rough thresholds. SDXL/Flux need real VRAM; SD1.5 will scrape by on 6GB.
# Apple Silicon shares RAM with GPU — unified memory budget is total RAM.
MIN_VRAM_GB_USABLE = 6   # below this, most modern models won't load
OK_VRAM_GB = 8           # SDXL fits comfortably here
GREAT_VRAM_GB = 12       # Flux / video models start being realistic
MIN_MAC_RAM_GB = 16      # Apple Silicon unified memory; below = pain
OK_MAC_RAM_GB = 32       # smooth for SDXL / most workflows


def _run(cmd: list[str], timeout: int = 5) -> str:
    try:
        out = subprocess.run(
            cmd, capture_output=True, text=True, timeout=timeout, check=False
        )
        return (out.stdout or "") + (out.stderr or "")
    except (FileNotFoundError, subprocess.TimeoutExpired, OSError):
        return ""


def detect_nvidia() -> dict | None:
    if not shutil.which("nvidia-smi"):
        return None
    out = _run([
        "nvidia-smi",
        "--query-gpu=name,memory.total,driver_version",
        "--format=csv,noheader,nounits",
    ])
    if not out.strip():
        return None
    first = out.strip().splitlines()[0]
    parts = [p.strip() for p in first.split(",")]
    if len(parts) < 2:
        return None
    name = parts[0]
    try:
        vram_mb = int(parts[1])
    except ValueError:
        vram_mb = 0
    driver = parts[2] if len(parts) > 2 else ""
    return {
        "vendor": "nvidia",
        "name": name,
        "vram_gb": round(vram_mb / 1024, 1),
        "driver": driver,
    }


def detect_rocm() -> dict | None:
    if not shutil.which("rocm-smi"):
        return None
    out = _run(["rocm-smi", "--showproductname", "--showmeminfo", "vram"])
    if not out.strip():
        return None
    name_m = re.search(r"Card series:\s*(.+)", out)
    vram_m = re.search(r"VRAM Total Memory \(B\):\s*(\d+)", out)
    vram_gb = 0.0
    if vram_m:
        vram_gb = round(int(vram_m.group(1)) / (1024**3), 1)
    return {
        "vendor": "amd",
        "name": name_m.group(1).strip() if name_m else "AMD GPU",
        "vram_gb": vram_gb,
        "driver": "rocm",
    }


def detect_apple_silicon() -> dict | None:
    if platform.system() != "Darwin":
        return None
    if platform.machine() != "arm64":
        return None  # Intel Mac — no usable MPS
    chip = _run(["sysctl", "-n", "machdep.cpu.brand_string"]).strip()
    # Examples: "Apple M1", "Apple M1 Pro", "Apple M2 Max", "Apple M3 Ultra"
    m = re.search(r"Apple M(\d+)", chip)
    generation = int(m.group(1)) if m else 1
    mem_bytes = 0
    try:
        mem_bytes = int(_run(["sysctl", "-n", "hw.memsize"]).strip() or 0)
    except ValueError:
        pass
    ram_gb = round(mem_bytes / (1024**3), 1) if mem_bytes else 0.0
    return {
        "vendor": "apple",
        "name": chip or "Apple Silicon",
        "generation": generation,
        "unified_memory_gb": ram_gb,
    }


def detect_intel_arc() -> dict | None:
    if platform.system() != "Linux":
        return None
    if not shutil.which("clinfo"):
        return None
    out = _run(["clinfo", "--list"])
    if "Intel" in out and ("Arc" in out or "Xe" in out):
        return {"vendor": "intel", "name": "Intel Arc/Xe", "vram_gb": 0.0}
    return None


def total_system_ram_gb() -> float:
    sysname = platform.system()
    if sysname == "Darwin":
        try:
            return round(int(_run(["sysctl", "-n", "hw.memsize"]).strip() or 0) / (1024**3), 1)
        except ValueError:
            return 0.0
    if sysname == "Linux":
        try:
            with open("/proc/meminfo", "r") as fh:
                for line in fh:
                    if line.startswith("MemTotal:"):
                        kb = int(line.split()[1])
                        return round(kb / (1024**2), 1)
        except OSError:
            return 0.0
    if sysname == "Windows":
        out = _run(["wmic", "ComputerSystem", "get", "TotalPhysicalMemory"])
        m = re.search(r"(\d{6,})", out)
        if m:
            return round(int(m.group(1)) / (1024**3), 1)
    return 0.0


# Map recommended_install_path → flag the agent can pass to `comfy install`
# Set to None when no local install is advised (verdict=cloud).
_COMFY_CLI_FLAG = {
    "nvidia": "--nvidia",
    "amd": "--amd",
    "apple-silicon": "--m-series",
    "intel": None,  # comfy-cli has no Intel Arc flag — manual install
    "comfy-cloud": None,
}


def classify(gpu: dict | None, ram_gb: float) -> tuple[str, str, list[str]]:
    """Return (verdict, recommended_install_path, notes)."""
    notes: list[str] = []
    if gpu is None:
        notes.append(
            "No supported accelerator found (NVIDIA CUDA / AMD ROCm / Apple Silicon / Intel Arc)."
        )
        notes.append(
            "CPU-only ComfyUI works but is unusably slow for modern models — use Comfy Cloud."
        )
        return "cloud", "comfy-cloud", notes
    if gpu["vendor"] == "apple":
        gen = gpu.get("generation", 1)
        mem = gpu.get("unified_memory_gb", 0.0)
        if mem < MIN_MAC_RAM_GB:
            notes.append(
                f"Apple Silicon with {mem} GB unified memory — below the {MIN_MAC_RAM_GB} GB practical minimum."
            )
            notes.append("SD1.5 may work; SDXL/Flux will swap or OOM. Recommend Comfy Cloud.")
            return "cloud", "comfy-cloud", notes
        if mem < OK_MAC_RAM_GB:
            notes.append(
                f"Apple Silicon M{gen} with {mem} GB — SDXL works but slow. Flux/video likely too tight."
            )
            return "marginal", "apple-silicon", notes
        notes.append(f"Apple Silicon M{gen} with {mem} GB unified memory — good for SDXL/Flux.")
        return "ok", "apple-silicon", notes
    # Discrete GPU path (nvidia/amd/intel)
    vram = gpu.get("vram_gb", 0.0)
    if gpu["vendor"] == "intel":
        notes.append("Intel Arc detected — ComfyUI IPEX support is experimental; Comfy Cloud is more reliable.")
        return "marginal", "intel", notes
    if vram < MIN_VRAM_GB_USABLE:
        notes.append(
            f"{gpu['name']} has only {vram} GB VRAM — below the {MIN_VRAM_GB_USABLE} GB practical minimum."
        )
        notes.append("Most modern models won't load. Recommend Comfy Cloud.")
        return "cloud", "comfy-cloud", notes
    if vram < OK_VRAM_GB:
        notes.append(
            f"{gpu['name']} ({vram} GB VRAM) — SD1.5 works, SDXL tight, Flux/video unlikely."
        )
        return "marginal", gpu["vendor"], notes
    if vram < GREAT_VRAM_GB:
        notes.append(f"{gpu['name']} ({vram} GB VRAM) — SDXL comfortable, Flux possible with optimizations.")
        return "ok", gpu["vendor"], notes
    notes.append(f"{gpu['name']} ({vram} GB VRAM) — can run everything including Flux/video.")
    return "ok", gpu["vendor"], notes


def build_report() -> dict:
    sysname = platform.system()
    arch = platform.machine()
    ram_gb = total_system_ram_gb()
    gpu = (
        detect_nvidia()
        or detect_rocm()
        or detect_apple_silicon()
        or detect_intel_arc()
    )
    # Intel Mac special case — fall out of apple-silicon detection with no GPU
    if gpu is None and sysname == "Darwin" and platform.machine() != "arm64":
        notes = [
            "Intel Mac detected — no MPS backend available.",
            "ComfyUI will fall back to CPU which is unusably slow. Use Comfy Cloud.",
        ]
        return {
            "os": sysname,
            "arch": arch,
            "system_ram_gb": ram_gb,
            "gpu": None,
            "verdict": "cloud",
            "recommended_install_path": "comfy-cloud",
            "comfy_cli_flag": None,
            "notes": notes,
            "install_urls": _install_urls(),
        }
    verdict, install_path, notes = classify(gpu, ram_gb)
    return {
        "os": sysname,
        "arch": arch,
        "system_ram_gb": ram_gb,
        "gpu": gpu,
        "verdict": verdict,
        "recommended_install_path": install_path,
        "comfy_cli_flag": _COMFY_CLI_FLAG.get(install_path),
        "notes": notes,
        "install_urls": _install_urls(),
    }


def _install_urls() -> dict:
    return {
        "desktop": "https://docs.comfy.org/installation/desktop",
        "manual": "https://docs.comfy.org/installation/manual_install",
        "comfy_cli": "https://docs.comfy.org/comfy-cli/getting-started",
        "cloud": "https://platform.comfy.org",
    }


def main() -> int:
    report = build_report()
    json_mode = "--json" in sys.argv
    if json_mode:
        print(json.dumps(report, indent=2))
    else:
        print(f"OS: {report['os']} ({report['arch']})")
        print(f"RAM: {report['system_ram_gb']} GB")
        if report["gpu"]:
            g = report["gpu"]
            if g["vendor"] == "apple":
                print(f"GPU: {g['name']}, {g.get('unified_memory_gb', 0)} GB unified memory")
            else:
                print(f"GPU: {g['name']}, {g.get('vram_gb', 0)} GB VRAM")
        else:
            print("GPU: (none detected)")
        print(f"Verdict: {report['verdict']} → {report['recommended_install_path']}")
        if report["comfy_cli_flag"]:
            print(f"  → run: comfy --skip-prompt install {report['comfy_cli_flag']}")
        for n in report["notes"]:
            print(f"  - {n}")
    if report["verdict"] == "ok":
        return 0
    if report["verdict"] == "marginal":
        return 1
    return 2


if __name__ == "__main__":
    sys.exit(main())

View File

@@ -324,6 +324,7 @@ AUTHOR_MAP = {
"dalvidjr2022@gmail.com": "Jr-kenny",
"m@statecraft.systems": "mbierling",
"balyan.sid@gmail.com": "alt-glitch",
"52913345+alt-glitch@users.noreply.github.com": "alt-glitch",
"oluwadareab12@gmail.com": "bennytimz",
"simon@simonmarcus.org": "simon-marcus",
"xowiekk@gmail.com": "Xowiek",