Cost governance
Daily and per-cycle budgets enforced mid-stream. Hard caps kill a cycle the moment the budget runs out; override on demand from bot or CLI when you want to.
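A sketch of what this looks like in config. The key names (daily_spend, per_cycle_spend, cycles_per_day, cycles_per_hour) match the controls listed later on this page; their exact nesting is illustrative, so verify against the shipped config reference:

```yaml
# Illustrative placement — verify key nesting against the config reference.
limits:
  daily_spend: 25.00      # USD hard cap per day; the cycle is killed mid-stream
  per_cycle_spend: 5.00   # USD hard cap per cycle
  cycles_per_day: 20      # retunable live: ops config cycles_per_day 20
  cycles_per_hour: 4
```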
Agents you can leave alone overnight. Hard budget caps, audit trails, git-tagged rollbacks, and a messenger control plane you can drive from anywhere.
AI agent frameworks help you build agents. They don't tell you how to run one safely at 3 a.m. when nobody's watching the API bill. loopctl does — hard budget caps, per-cycle spend enforcement, blocked-branch guardrails, rollback tags on every cycle, and a messenger control plane you can drive from anywhere.
Everything you'd otherwise cobble together yourself for an unattended coding agent — in one binary + one YAML file.
Blocked commands (rm -rf, git push --force). Blocked branches — never direct to main. Blocked file patterns — secrets + CI stay human-only.
Approval gates on any step. Break-glass PIN for emergency pause from chat.
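Sketched in config terms. The guardrail features are the ones listed above, but the key names and glob patterns here are illustrative, not the documented schema:

```yaml
# Illustrative key names — check the shipped config reference.
guardrails:
  blocked_commands: ["rm -rf", "git push --force"]
  blocked_branches: [main]                 # never direct to main
  blocked_files: ["**/*.pem", ".github/workflows/**"]  # secrets + CI stay human-only
  break_glass_pin: true                    # emergency pause from chat
```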
30+ Telegram and Slack slash commands. The same verbs work from the CLI — parity is contract-tested, so anything you do on your laptop works from your phone.
Multi-agent workflows where each step has a named role, per-step handoff, per-role model and effort, and optional human approval gates.
Every cycle tagged in git. commit_push_pr policy plus blocked-branches plus GitHub branch protection — three layers. One-command rollback.
Not a coding assistant. Software, docs, support triage, ops monitoring — skills define what the agent does; the framework handles the loop, the bot, and the guardrails.
20+ skills ship in the box — scan-and-fix, chat, team pipelines, audit runners. Three scopes: framework (built-in), project (operator-local), dev-time (maintainer-local). Override any shipped skill by dropping the same-named file in your .agentic/skills/. No forking required.
Claude today. OpenAI, Ollama, whatever tomorrow. Adapters plug in behind one interface — the loop, the bot, and the guardrails don't care which brain you wire up.
Install any Model Context Protocol server with agentic mcp install.
Every pipeline agent gets the same tools — GitHub, Jira, Linear, your own — with the same auth, no wrapper code. Same adapter pattern, different endpoint.
A loop can run for hours without you. When it touches a protected file, asks for a risky command, crosses a budget boundary, or reaches a review step, it pauses and asks a named human to approve, reject, or edit the handoff. Omit the setting for a fully autonomous step; add it where human judgment is part of the workflow.
```yaml
organization:
  pipeline: [pm, dev, reviewer, lead]
  agents:
    - name: dev        # no approval key: autonomous unless a guardrail pauses it
    - name: reviewer
      approval: human  # by design: wait for a person
```
loopctl is a narrow tool. If one of these sounds like you, use something else — we'll save you time.
Something that helps you write the next line of code while you're in-flow. loopctl doesn't do that.
Use Claude Code, Cursor, or Aider.
Upload a task, come back to a PR, no infra to manage. loopctl runs on your machines — you own the containers, the logs, the secrets.
Use Devin, AgentDock, or an enterprise agent-fleet SaaS.
loopctl assumes you've already shipped something with a coding agent and now want to run it unattended. The onboarding curve is not gentle.
Start with Goose or Claude Code for a few weeks first.
Define who does what in YAML — write it yourself, or run /setup-pipeline inside Claude Code and let the skill interview you about domain, risk, and budget, then write the organization block for you. Either way, the pipeline runs what you produce. One agent or twenty — any shape, any domain. Each agent gets its own model, effort, skills, and personality. Change the shape without rewriting code.
```yaml
# Hand-written below, or generated by running `/setup-pipeline`
# inside Claude Code — either way the output lands here.
organization:
  pipeline: [pm, dev, reviewer, lead]
  agents:
    - name: pm
      role: orchestrator
      model: claude-opus-4-7
      effort: max
      description: >-
        Reads backlog, picks target, writes spec.
      skills: [team-orchestrator]
    - name: dev
      role: developer
      agent: codex
      model: gpt-5.4
      effort: high
      description: >-
        Implements the spec. Knows when to ask back.
      skills: [team-developer]
    - name: reviewer
      role: reviewer
      model: claude-opus-4-7
      effort: max
      approval: human   # optional: require a person here
      description: >-
        Naming, invariants, security. No emotion.
      skills: [team-reviewer]
    - name: lead
      role: lead
      agent: codex
      model: gpt-5.4-mini
      effort: medium
      description: >-
        Final SHIP or HOLD. Sees the full output.
      skills: [team-lead]
```
Solo is one agent. Team is a flat list. Enterprise is nested departments. Scale the shape to the problem; the framework doesn't care.
Opus where judgment matters. Codex where implementation or fast lead checks fit better. Set agent, model, and effort per role in YAML — no code rebuild.
Each agent gets only the skills (markdown playbooks) it needs. Description sets tone and approach — the reviewer isn't the dev isn't the lead. Handoff between steps is structured, not just context dump.
Omit approval and the step runs on its own. Add approval: human when review is part of the design, or let guardrails pause the loop only when something breaks policy. Approve, reject, or edit from Telegram / Slack / CLI.
Write the YAML yourself, or run /setup-pipeline in any Claude Code session. The skill interviews you, proposes a shape, explains the trade-offs, and writes it into your .agentic/config.yml.
Ship working code from issue to PR.
pm → dev → reviewer → qa → lead
Editor plans, writer drafts, fact-checker verifies, lead signs off.
editor → writer → checker → lead
Triager reads alerts, investigator finds cause, fixer patches, postmortem writes.
triager → investigator → fixer → postmortem
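The ops-monitoring shape above, written in the same schema as the engineering pipeline example earlier on the page. The role strings and skill names here are illustrative:

```yaml
organization:
  pipeline: [triager, investigator, fixer, postmortem]
  agents:
    - name: triager
      description: Reads alerts, separates signal from noise.
    - name: investigator
      description: Finds the root cause.
    - name: fixer
      approval: human   # a person signs off before a patch lands
      description: Patches the cause, not the symptom.
    - name: postmortem
      description: Writes the incident report.
```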
Three layers every cycle flows through: what comes in, the controls you hold, and the audit trail that comes out. Nothing happens that isn't logged, capped, or revertable.
```
> watch production. ping me if a cycle fails.
Tailing production-progress.log, watching for result=failed…
[19:35] ✓ cycle-1776794642 · #510 · PR #536
[19:42] ○ cycle-1776795882 · scotty step 2
[19:47] ✓ cycle-1776795882 · $4.12 · PR #537
[19:58] ✓ cycle-1776796410 · $2.80 · PR #538
All green. 3 PRs in 23m, $6.92 spent.
>
```
```
agentic ops tail --all-projects
[production] 19:42  Step 2/6 — scotty
[production] 19:42  reading loop/loop.go
[staging   ] 19:43  cycle result=success
[dev       ] 19:43  auto-pick from backlog
[production] 19:44  Step 3/6 — geordi
[production] 19:46  Step 6/6 — picard
[production] 19:47  Pipeline completed 5m32s
[production] 19:47  PR #537
```
```
agentic ops tail --all-projects --filter events
[production] 19:35:12  cycle-1776794642  result=success · PR #536 · 31m32s · $4.12
[production] 19:42:03  cycle-1776795882  Step 1/6 — riker
[production] 19:42:47  cycle-1776795882  Step 2/6 — scotty
[staging]    19:43:11  cycle-1776795934  result=success · $0.93 · 2m26s
[staging]    19:43:12  PR-error: no commits between main and branch
[dev]        19:43:39  cycle-1776795966  auto-pick from backlog
[production] 19:44:51  cycle-1776795882  Step 3/6 — geordi
[production] 19:45:55  cycle-1776795882  Step 4/6 — data
[production] 19:46:22  cycle-1776795882  Step 5/6 — spock
[production] 19:46:59  cycle-1776795882  Step 6/6 — picard
[dev]        19:47:20  cycle-1776795966  result=success · $0.71 · 2m8s
[production] 19:47:31  cycle-1776795882  Pipeline completed 5m32s
[production] 19:47:32  cycle-1776795882  result=success · PR #537 · $2.80
```
```
19:42:03  Step 1/6 — riker (Project Manager)   model=opus-4-7 · effort=max
          reading backlog, selecting highest-value target…
          → picked #510 (pipeline phase transitions)
19:42:47  Step 2/6 — scotty (Senior Developer)
          editing loop/loop.go, orchestrator/pipeline.go
          → core impl + unit tests · 2m14s · $0.84
19:44:51  Step 3/6 — geordi (Developer)
          polishing edges, adding step-lock TTL
          → refinements + table tests · 1m4s · $0.42
19:45:55  Step 4/6 — data (QA Engineer)
          running test suite, static analysis
          → all 383 tests pass · 27s · $0.15
19:46:22  Step 5/6 — spock (Senior Code Reviewer)
          code-quality review, naming, invariants
          → 0 findings, approved · 37s · $0.29
19:46:59  Step 6/6 — picard (Engineering Lead)  model=opus-4-6 · effort=max
          reading full pipeline output, final verdict…
          → SHIP IT · 32s · $0.21
19:47:31  Pipeline completed in 5m32s · $2.80 total
19:47:32  PR https://github.com/LoopCtl/agentic/pull/537
```
The diagram above shows one loop in one env. Real deployments run N envs, multiple loops inside each env, and can fan out parallel steps within a single pipeline. Same config surface for all three shapes.
One env, three loops. Fast pipeline for urgent fixes. Main pipeline for planned work. Nightly audit. Each with its own cadence, pipeline shape, and budget — they share the env's workspace and secrets.
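A sketch of what that could look like. The loop-level key names here are hypothetical, invented purely for illustration; consult the shipped config reference for the real schema:

```yaml
# Hypothetical sketch — key names invented for illustration only.
loops:
  - name: fast       # urgent fixes
    interval: 15m
    daily_spend: 5.00
  - name: main       # planned work
    interval: 1h
    daily_spend: 20.00
  - name: audit      # nightly audit
    interval: 24h
    daily_spend: 2.00
```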
Same pipeline, parallel steps. Two devs split the implementation. Two QAs run the test matrix in parallel. The reviewer and lead stay sequential because they need the full picture. Fan-out / fan-in declared in YAML, same surface as sequential steps.
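Fan-out might be declared like this. The nested-list notation is hypothetical, shown only to illustrate the shape described above:

```yaml
# Hypothetical syntax — the fan-out notation is illustrative.
organization:
  pipeline:
    - pm
    - [dev-core, dev-edge]   # two devs split the implementation
    - [qa-unit, qa-matrix]   # test matrix runs in parallel
    - reviewer               # needs the full picture — sequential
    - lead
```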
Three ways to start a cycle: interval (e.g. 30m) wakes one on schedule; /scan, /fix <issue>, /force trigger one from the bot; agentic ops cycle triggers one from the CLI.
Rate caps: cycles_per_hour · cycles_per_day. Spend caps: daily_spend · per_cycle_spend. Loop control: /loop on|off · /pause · /cancel. /god — bypass caps for one incident. ops config cycles_per_day 20 — retune live. Commit trailers: Agentic-Cycle · Agentic-Agent · Agentic-Model · Agentic-Skill · Agentic-Role.
Issue labels: the agent sets agentic:in-progress, comments progress, and clears the label on PR merge. Anyone reading the issue sees the full arc.
Git tags: agentic-cycle-<id> — rollback to any point with one command.
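Because the tags are ordinary git tags, plain git is enough to inspect or roll back a cycle (loopctl may also expose its own rollback verb; check the CLI help). A sketch, using a scratch repo and a cycle id taken from the sample logs above:

```shell
set -e
# Scratch repo to demonstrate tag-based rollback — in a real project
# the agentic-cycle-<id> tags already exist on every cycle.
dir=$(mktemp -d) && cd "$dir" && git init -q
git -c user.email=ops@example.com -c user.name=ops \
    commit -q --allow-empty -m "baseline"
git tag agentic-cycle-1776795882   # the tag loopctl writes for a cycle
git -c user.email=ops@example.com -c user.name=ops \
    commit -q --allow-empty -m "unwanted cycle output"

# One-command rollback to the state at that cycle:
git reset --hard -q agentic-cycle-1776795882
```

After the reset, the branch tip is back at the tagged commit; the unwanted cycle's commit is gone from the branch.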
Everything loopctl produces is grep-able from the moment it lands. Three surfaces, same audit trail:
git log --grep='Agentic-Cycle: ...' — every commit from one cycle · every commit from one agent
gh issue view 510 --comments — the agent's own progress narration on the issue thread
jq '.result' production-cycles.jsonl — success / failure counts from the event log
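The event-log surface in practice. The filename and the .result field come from the command above; the sample records and their cycle field are invented for illustration:

```shell
# Sample event log — in production loopctl appends one JSON object per cycle.
printf '%s\n' \
  '{"cycle":"cycle-1776795882","result":"success"}' \
  '{"cycle":"cycle-1776795934","result":"success"}' \
  '{"cycle":"cycle-1776795966","result":"failed"}' \
  > production-cycles.jsonl

# Success / failure counts from the event log:
jq -r '.result' production-cycles.jsonl | sort | uniq -c
```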
Nothing anonymous lands in your repo. Nothing hidden from the issue thread. Nothing unlogged.
The AI-agent ecosystem has split into distinct shapes. Most tools do one of these well — loopctl is the combination: self-hosted, autonomous, governed, integrated. The comparison below is heavily biased toward operational dimensions; if you're comparing tools for production ops, these are the axes that matter.
| Tool | Self-hosted | 24/7 loop | Messenger ops | Runtime |
|---|---|---|---|---|
| loopctl | ✓ native binary OR Docker · your infra | ✓ scan loop | ✓ adapter · TG + Slack ship | Single Go binary · no runtime deps |
| **Self-hosted peers — autonomous loop shape** | | | | |
| Ralph | ✓ shell + prd.json | ✓ until PRD done | No | Shell script |
| OpenHands | ✓ local GUI / K8s enterprise | ✓ autonomous + interactive | Slack / Jira / Linear (Cloud) | Python + TS frontend · Docker |
| NVIDIA NemoClaw | ✓ local · NVIDIA hardware | ✓ always-on agents | No | RTX / DGX + open models (Nemotron) |
| **Hosted SaaS alternatives** | | | | |
| Devin / hosted SaaS | No — vendor cloud | Vendor-controlled | Slack / dashboard | SaaS (nothing to install) |
| AgentDock | No — vendor cloud | No — visual builder / workflow | No | SaaS (free + paid Pro) |
| Runloop | No — vendor cloud | No — sandbox compute layer | No | SaaS (enterprise pricing) |
| **Interactive self-hosted agents — run inside loopctl** | | | | |
| Goose | ✓ native CLI / desktop | No — interactive | No (MCP instead) | Rust binary + MCP extensions |
| Pi | ✓ npm / git install | No — interactive terminal | No | Node.js CLI + TypeScript extensions |
| pi-mono | ✓ monorepo toolkit | No — bot delegates to CLI | Slack (pi-mom) | Node.js / TypeScript modules |
| **Skill bundles — extend an existing agent** | | | | |
| gstack | ✓ local skills dir | No — slash-command | No | Markdown skills in Claude Code |
| pi-skills | ✓ clone + symlink | No — manually triggered | No | Markdown skills for Pi / Claude Code / Codex CLI / Amp / Droid |
| **Name-collision disambiguation** | | | | |
| loopctl.com (unrelated) | OSS core + SaaS | No — governance layer | No | SaaS + self-host OSS |
| Tool | Handoff | Parallel | Context | Git discipline |
|---|---|---|---|---|
| loopctl | ✓ env / dept / project scope | ✓ envs + steps | ✓ fresh window · memory + handoff re-injected | ✓ PR + rollback tags |
| **Self-hosted peers** | | | | |
| Ralph | prd.json between iters | No | ✓ fresh per iteration | Memory via git history |
| OpenHands | Agent SDK state | Multi-agent supported | Session-bound | No built-in lifecycle |
| NVIDIA NemoClaw | Wraps OpenClaw agents | Depends on wrapped agent | Policy-governed | No |
| **Hosted SaaS** | | | | |
| Devin / hosted SaaS | Vendor-managed | Vendor-managed | Vendor-managed | Vendor-managed |
| AgentDock | Workflow chain vars | Workflow state | Session / task | No |
| Runloop | Sandbox state per agent | — | Sandbox-scoped | No |
| **Interactive self-hosted** | | | | |
| Goose | MCP extensions | No | Session-bound | No |
| Pi | Tree-structured history | — | Session-bound | No |
| pi-mono | Shared session logs | — | Session-bound | No |
| **Skill bundles** | | | | |
| gstack | None | No | Per slash-cmd | Skill-driven, no lifecycle |
| pi-skills | None | No | Per skill invocation | No |
| **Disambiguation** | | | | |
| loopctl.com (unrelated) | Per-agent token tracking | — | Anomaly detection | Review-gate enforcement |
Name collision: loopctl.com is a separate product in the agent-governance / audit category. Same name, different company, different category. If you were looking for them, this isn't them.
A note on method: third-party claims reflect best-effort reading of each tool's public positioning. Verify against their current docs before relying on them. This space moves fast.
Your repo stays where it lives. Data never leaves your infra unless you wire up an adapter. No SaaS tier, no telemetry beacon.
Claude today. OpenAI, Ollama, whatever tomorrow. The loop doesn't care — adapters plug in behind one interface.
YAML for the environment, SKILL.md for what agents do. No Python module to import, no hidden runtime to learn.
Nothing is forced. Skip the pipeline and run one agent. Skip the messenger. Skip MCP. Skip approvals. Skip the bot entirely. Start with the minimum shape you need, layer in the rest when the pain shows up.
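A minimal one-agent config, as a sketch. The field names mirror the pipeline example earlier on the page; treat the exact values as illustrative:

```yaml
organization:
  pipeline: [dev]
  agents:
    - name: dev
      model: claude-opus-4-7
      effort: high
      description: Scan the repo, fix what's broken, open a PR.
      skills: [scan-and-fix]
```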
One YAML file. One binary. Your repo, your infra, your budget caps.