loopctl
Private beta — code's not public yet
Self-hosted · Agent-agnostic · Config-over-code

Your AI agent, running 24/7, on your infrastructure.

Agents you can leave alone overnight. Hard budget caps, audit trails, git-tagged rollbacks, and a messenger control plane you can drive from anywhere.

production ↻ 30m · staging ↻ 15m · dev ↻ 5m — each env runs its own loop in parallel, with its own cadence and its own pipeline.

SIGNAL QUEUE (issues · bot · webhook · cron) → LOOP (every 30m · budgets · guardrails · optional human gate, policy-triggered) → HANDOFF (pm → dev → reviewer → qa → lead · handback when blocked · optional human checkpoint) → OUTPUTS (PR · rollback tag · trailer · JSONL) — the push feeds the next cycle.

AI agent frameworks help you build agents. They don't tell you how to run one safely at 3 a.m. when nobody's watching the API bill. loopctl does — hard budget caps, per-cycle spend enforcement, blocked-branch guardrails, rollback tags on every cycle, and a messenger control plane you can drive from anywhere.

What you get.

Everything you'd otherwise cobble together yourself for an unattended coding agent — in one binary + one YAML file.

Cost governance

Daily and per-cycle budgets enforced mid-stream. Hard caps kill a cycle the moment it runs out; override on demand from bot or CLI when you want to.
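A sketch of how those caps might appear in .agentic/config.yml. The daily_spend, per_cycle_spend, and cycles_per_* names appear in the controls section of this page, but the block shape here is an assumption, not the shipped schema:

```yaml
# Hypothetical budget block: the nesting is illustrative, not the shipped schema.
budgets:
  daily_spend: 25.00      # hard cap across all cycles in a day (USD)
  per_cycle_spend: 5.00   # a cycle is killed mid-stream the moment it crosses this
rate_limits:
  cycles_per_hour: 4      # rate limits sit alongside spend caps
  cycles_per_day: 20      # retunable live, e.g. via `ops config cycles_per_day 20`
```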

Guardrails

Blocked commands (rm -rf, git push --force). Blocked branches — never direct to main. Blocked file patterns — secrets + CI stay human-only. Approval gates on any step. Break-glass PIN for emergency pause from chat.
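As a hedged sketch, the guardrails above could be expressed in config like this; the key names are illustrative, and the blocked items mirror the prose:

```yaml
# Hypothetical guardrails block: key names are illustrative.
guardrails:
  blocked_commands: ["rm -rf", "git push --force"]
  blocked_branches: [main, master]        # never direct to main
  blocked_files: [".env*", ".github/workflows/*"]  # secrets + CI stay human-only
  break_glass_pin: true                   # emergency pause from chat requires the PIN
```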

Messenger ops

30+ Telegram and Slack slash commands. The same verbs work from the CLI — parity is contract-tested, so anything you do on your laptop works from your phone.

Pipeline orchestration

Multi-agent workflows where each step has a named role, per-step handoff, per-role model and effort, and optional human approval gates.

Git discipline

Every cycle tagged in git. commit_push_pr policy plus blocked-branches plus GitHub branch protection — three layers. One-command rollback.

Works for any domain

Not a coding assistant. Software, docs, support triage, ops monitoring — skills define what the agent does; the framework handles the loop, the bot, and the guardrails.

Curated skills, ready to run

20+ skills ship in the box — scan-and-fix, chat, team pipelines, audit runners. Three scopes: framework (built-in), project (operator-local), dev-time (maintainer-local). Override any shipped skill by dropping the same-named file in your .agentic/skills/. No forking required.

Agent-agnostic

Claude today. OpenAI, Ollama, whatever tomorrow. Adapters plug in behind one interface — the loop, the bot, and the guardrails don't care which brain you wire up.

MCP-native

Install any Model Context Protocol server with agentic mcp install. Every pipeline agent gets the same tools — GitHub, Jira, Linear, your own — with the same auth, no wrapper code. Same adapter pattern, different endpoint.
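As an illustration only, a config-over-code sketch of what registering MCP servers for every pipeline agent might look like; the mcp block and its keys are assumptions:

```yaml
# Hypothetical MCP block: server names and keys are illustrative.
mcp:
  servers:
    - name: github   # every pipeline agent gets the same tools with the same auth
    - name: jira
    - name: linear
```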

Human-in-the-loop checkpoint

Autonomous until judgment is required.

A loop can run for hours without you. When it touches a protected file, asks for a risky command, crosses a budget boundary, or reaches a review step, it pauses and asks a named human to approve, reject, or edit the handoff. Omit the setting for a fully autonomous step; add it where human judgment is part of the workflow.

  • Trigger: risk detected
  • State: cycle paused
  • Review: chat or CLI
  • Result: resume same run
.agentic/config.yml (optional)
organization:
  pipeline: [pm, dev, reviewer, lead]
  agents:
    - name: dev
      # no approval key: autonomous unless a guardrail pauses it

    - name: reviewer
      approval: human # by design: wait for a person
Agent step (scotty · edits) → PAUSED · needs-human → resume. Telegram approval request: "Protected file touched: .github/workflows/deploy.yml" · /approve cycle-42 · /reject · /edit handoff. The approval decision is written to the cycle log, PR body, and issue timeline.

Who this isn't for.

loopctl is a narrow tool. If one of these sounds like you, use something else — we'll save you time.

You want a coding assistant in your editor.

Something that helps you write the next line of code while you're in flow. loopctl doesn't do that.

Use Claude Code, Cursor, or Aider.

You want a hosted agent that "just works".

Upload a task, come back to a PR, no infra to manage. loopctl runs on your machines — you own the containers, the logs, the secrets.

Use Devin, AgentDock, or an enterprise agent-fleet SaaS.

You're learning what agents are.

loopctl assumes you've already shipped something with a coding agent and now want to run it unattended. The onboarding curve is not gentle.

Start with Goose or Claude Code for a few weeks first.

Build your dream team in minutes.

Define who does what in YAML — write it yourself, or run /setup-pipeline inside Claude Code and let the skill interview you about domain, risk, and budget, then write the organization block for you. Either way, the pipeline runs what you produce. One agent or twenty — any shape, any domain. Each agent gets its own model, effort, skills, and personality. Change the shape without rewriting code.

.agentic/config.yml
# Hand-written below, or generated by running `/setup-pipeline`
# inside Claude Code — either way the output lands here.
organization:
  pipeline: [pm, dev, reviewer, lead]
  agents:
    - name: pm
      role: orchestrator
      model: claude-opus-4-7
      effort: max
      description: >-
        Reads backlog, picks target, writes spec.
      skills: [team-orchestrator]

    - name: dev
      role: developer
      agent: codex
      model: gpt-5.4
      effort: high
      description: >-
        Implements the spec. Knows when to ask back.
      skills: [team-developer]

    - name: reviewer
      role: reviewer
      model: claude-opus-4-7
      effort: max
      approval: human # optional: require a person here
      description: >-
        Naming, invariants, security. No emotion.
      skills: [team-reviewer]

    - name: lead
      role: lead
      agent: codex
      model: gpt-5.4-mini
      effort: medium
      description: >-
        Final SHIP or HOLD. Sees the full output.
      skills: [team-lead]

Any shape, any size.

Solo is one agent. Team is a flat list. Enterprise is nested departments. Scale the shape to the problem; the framework doesn't care.
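A sketch of the three shapes in config. The solo and team forms follow the organization block shown elsewhere on this page; the departments key is a hypothetical illustration of nesting, not confirmed syntax:

```yaml
# Solo: one agent, no pipeline.
organization:
  agents:
    - name: solo-dev

# Team: a flat list (the shape used throughout this page).
# organization:
#   pipeline: [pm, dev, reviewer, lead]

# Enterprise: nested departments. The `departments` key is hypothetical.
# organization:
#   departments:
#     - name: platform
#       pipeline: [pm, dev, reviewer, lead]
#     - name: docs
#       pipeline: [editor, writer, checker, lead]
```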

Right model per role.

Opus where judgment matters. Codex where implementation or fast lead checks fit better. Set agent, model, and effort per role in YAML — no code rebuild.

Scoped skills + personality.

Each agent gets only the skills (markdown playbooks) it needs. The description sets tone and approach — the reviewer isn't the dev isn't the lead. Handoff between steps is structured, not just a context dump.

Autonomous until judgment is required.

Omit approval and the step runs on its own. Add approval: human when review is part of the design, or let guardrails pause the loop only when something breaks policy. Approve, reject, or edit from Telegram / Slack / CLI.

Hand-written or AI-guided.

Write the YAML yourself, or run /setup-pipeline in any Claude Code session. The skill interviews you, proposes a shape, explains the trade-offs, and writes it into your .agentic/config.yml.

Starter templates — any domain, same pipeline shape

Software

SDLC crew

Ship working code from issue to PR.

pm → dev → reviewer → qa → lead

Docs & content

Content team

Editor plans, writer drafts, fact-checker verifies, lead signs off.

editor → writer → checker → lead

Incidents

Ops crew

Triager reads alerts, investigator finds cause, fixer patches, postmortem writes.

triager → investigator → fixer → postmortem

How it works.

Three layers every cycle flows through: what comes in, the controls you hold, and the audit trail that comes out. Nothing happens that isn't logged, capped, or revertible.

ops@loopctl-prod:~/loopctl/workspace
pane 0 · claude
> watch production. ping me if a cycle fails.

Tailing production-progress.log, watching for result=failed

[19:35]  cycle-1776794642 · #510 · PR #536
[19:42]  cycle-1776795882 · scotty step 2
[19:47]  cycle-1776795882 · $4.12 · PR #537
[19:58]  cycle-1776796410 · $2.80 · PR #538

All green. 3 PRs in 23m, $6.92 spent.
> 
pane 1 · ops tail
agentic ops tail --all-projects

[production] 19:42 Step 2/6 — scotty
[production] 19:42 reading loop/loop.go
[staging   ] 19:43 cycle result=success
[dev       ] 19:43 auto-pick from backlog
[production] 19:44 Step 3/6 — geordi
[production] 19:46 Step 6/6 — picard
[production] 19:47 Pipeline completed 5m32s
[production] 19:47 PR #537
pane 0 · ops tail --all-projects
agentic ops tail --all-projects --filter events

[production] 19:35:12 cycle-1776794642 result=success · PR #536 · 31m32s · $4.12
[production] 19:42:03 cycle-1776795882 Step 1/6 — riker
[production] 19:42:47 cycle-1776795882 Step 2/6 — scotty
[staging] 19:43:11 cycle-1776795934 result=success · $0.93 · 2m26s
[staging] 19:43:12 PR-error: no commits between main and branch
[dev] 19:43:39 cycle-1776795966 auto-pick from backlog
[production] 19:44:51 cycle-1776795882 Step 3/6 — geordi
[production] 19:45:55 cycle-1776795882 Step 4/6 — data
[production] 19:46:22 cycle-1776795882 Step 5/6 — spock
[production] 19:46:59 cycle-1776795882 Step 6/6 — picard
[dev] 19:47:20 cycle-1776795966 result=success · $0.71 · 2m8s
[production] 19:47:31 cycle-1776795882 Pipeline completed 5m32s
[production] 19:47:32 cycle-1776795882 result=success · PR #537 · $2.80
pane 0 · production pipeline · cycle-1776795882
19:42:03 Step 1/6 — riker (Project Manager)
         model=opus-4-7 · effort=max
         reading backlog, selecting highest-value target…
         → picked #510 (pipeline phase transitions)

19:42:47 Step 2/6 — scotty (Senior Developer)
         editing loop/loop.go, orchestrator/pipeline.go
         → core impl + unit tests · 2m14s · $0.84

19:44:51 Step 3/6 — geordi (Developer)
         polishing edges, adding step-lock TTL
         → refinements + table tests · 1m4s · $0.42

19:45:55 Step 4/6 — data (QA Engineer)
         running test suite, static analysis
         → all 383 tests pass · 27s · $0.15

19:46:22 Step 5/6 — spock (Senior Code Reviewer)
         code-quality review, naming, invariants
         → 0 findings, approved · 37s · $0.29

19:46:59 Step 6/6 — picard (Engineering Lead)
         model=opus-4-6 · effort=max
         reading full pipeline output, final verdict…
         → SHIP IT · 32s · $0.21

19:47:31 Pipeline completed in 5m32s · $2.80 total
19:47:32 PR https://github.com/LoopCtl/agentic/pull/537


Parallel at every level.

The diagram above shows one loop in one env. Real deployments run N envs, multiple loops inside each env, and can fan out parallel steps within a single pipeline. Same config surface for all three shapes.

Multiple loops per env
ENV · production
  • loop-fast ↻ 5m · pipeline: scan-fix
  • loop-main ↻ 30m · pipeline: pm → dev → rev → lead
  • loop-audit ↻ 24h · pipeline: spock (code review only)

One env, three loops. Fast pipeline for urgent fixes. Main pipeline for planned work. Nightly audit. Each with its own cadence, pipeline shape, and budget — they share the env's workspace and secrets.
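A hedged sketch of what that env could look like in YAML; the loops and interval keys are assumptions, while the names and cadences come from the diagram above:

```yaml
# Hypothetical multi-loop env: key names are illustrative.
env: production
loops:
  - name: loop-fast
    interval: 5m
    pipeline: [scan-fix]
  - name: loop-main
    interval: 30m
    pipeline: [pm, dev, reviewer, lead]
  - name: loop-audit
    interval: 24h
    pipeline: [spock]   # code review only
```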

Parallel steps in one pipeline
pm → [dev·1 ∥ dev·2] → reviewer → [qa·1 ∥ qa·2] · fan-out to dev·1 / dev·2 · merge at reviewer · fan-out again to qa·1 / qa·2

Same pipeline, parallel steps. Two devs split the implementation. Two QAs run the test matrix in parallel. The reviewer and lead stay sequential because they need the full picture. Fan-out / fan-in declared in YAML, same surface as sequential steps.
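As a sketch, assuming a nested-list notation for fan-out (the actual syntax may differ; dots in agent names are swapped for hyphens here to stay YAML-friendly):

```yaml
# Hypothetical fan-out notation: the nested lists are an assumption.
organization:
  pipeline:
    - pm
    - [dev-1, dev-2]   # fan-out: both devs run in parallel
    - reviewer         # fan-in: the reviewer sees both outputs
    - [qa-1, qa-2]     # fan-out again for the test matrix
    - lead
```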

1 · In
Signals & triggers

What the loop picks up.

  • Interval ticks — every interval (e.g. 30m) wakes a cycle.
  • Issue backlog — pulls work from GitHub issues, labels, or your tracker adapter.
  • Messenger commands: /scan, /fix <issue>, /force.
  • Webhooks — GitHub PR-opened / issue-labeled → priority queue.
  • Scheduled — cron-like triggers via agentic ops cycle.
2 · Controls
What you hold

Every lever is live.

Pre-cycle gate
  • Rate limits: cycles_per_hour · cycles_per_day
  • Budgets: daily_spend · per_cycle_spend
  • Blocked branches (main/master) + blocked shell commands
  • Pause / resume + emergency stop
Pipeline
  • N steps, each with its own model + effort
  • Per-step mid-stream budget enforcement
  • Human review gates (any step, any agent, only when needed)
  • HANDBACK / HANDOFF between roles
Runtime overrides
  • /loop on|off · /pause · /cancel
  • /god — bypass caps for one incident
  • ops config cycles_per_day 20 — retune live
  • Break-glass PIN for emergency pause from chat
3 · Out
Everything is traceable

Full accountability.

  • Commit trailers. Every commit carries Agentic-Cycle · Agentic-Agent · Agentic-Model · Agentic-Skill · Agentic-Role.
  • Issue timeline. The agent labels the issue agentic:in-progress, comments progress, clears the label on PR merge. Anyone reading the issue sees the full arc.
  • PR body. Links the issue, the cycle-id, every step's duration / tokens / spend.
  • Cycle JSONL. Every tool call, every token count, every PR URL — one file per env.
  • Git tag per cycle. agentic-cycle-<id> — rollback to any point with one command.
  • Live stream. Bot pushes step transitions + failure causes to your messenger in real time.
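Concretely, a commit message carrying the trailers above might look like this — the values are illustrative, drawn from the demo pipeline on this page:

```
feat(loop): add step-lock TTL to pipeline orchestrator

Agentic-Cycle: cycle-1776795882
Agentic-Agent: scotty
Agentic-Model: gpt-5.4
Agentic-Skill: team-developer
Agentic-Role: developer
```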

Everything loopctl produces is grep-able from the moment it lands. Three surfaces, same audit trail:

  • git log --grep='Agentic-Cycle: ...' → every commit from one cycle · every commit from one agent
  • gh issue view 510 --comments → the agent's own progress narration on the issue thread
  • jq '.result' production-cycles.jsonl → success / failure counts from the event log

Nothing anonymous lands in your repo. Nothing hidden from the issue thread. Nothing unlogged.

How it compares.

The AI-agent ecosystem has split into distinct shapes. Most tools do one of these well — loopctl is the combination: self-hosted, autonomous, governed, integrated. The comparison below is heavily biased toward operational dimensions; if you're comparing tools for production ops, these are the axes that matter.

Shape & deployment
| Tool | Self-hosted | 24/7 loop | Messenger ops | Runtime |
|---|---|---|---|---|
| **loopctl** | ✓ native binary or Docker · your infra | ✓ scan loop | ✓ adapter · TG + Slack ship | Single Go binary · no runtime deps |
| **Self-hosted peers — autonomous loop shape** | | | | |
| Ralph | ✓ shell + prd.json | ✓ until PRD done | No | Shell script |
| OpenHands | ✓ local GUI / K8s enterprise | ✓ autonomous + interactive | Slack / Jira / Linear (Cloud) | Python + TS frontend · Docker |
| NVIDIA NemoClaw | ✓ local · NVIDIA hardware | ✓ always-on agents | No | RTX / DGX + open models (Nemotron) |
| **Hosted SaaS alternatives** | | | | |
| Devin / hosted SaaS | No — vendor cloud | Vendor-controlled | Slack / dashboard | SaaS (nothing to install) |
| AgentDock | No — vendor cloud | No — visual builder / workflow | No | SaaS (free + paid Pro) |
| Runloop | No — vendor cloud | No — sandbox compute layer | No | SaaS (enterprise pricing) |
| **Interactive self-hosted agents — run inside loopctl** | | | | |
| Goose | ✓ native CLI / desktop | No — interactive | No (MCP instead) | Rust binary + MCP extensions |
| Pi | ✓ npm / git install | No — interactive terminal | No | Node.js CLI + TypeScript extensions |
| pi-mono | ✓ monorepo toolkit | No — bot delegates to CLI | Slack (pi-mom) | Node.js / TypeScript modules |
| **Skill bundles — extend an existing agent** | | | | |
| gstack | ✓ local skills dir | No — slash-command | No | Markdown skills in Claude Code |
| pi-skills | ✓ clone + symlink | No — manually triggered | No | Markdown skills for Pi / Claude Code / Codex CLI / Amp / Droid |
| **Name-collision disambiguation** | | | | |
| loopctl.com (unrelated) | OSS core + SaaS | No — governance layer | No | SaaS + self-host OSS |
Pipeline, memory & git
| Tool | Handoff | Parallel | Context | Git discipline |
|---|---|---|---|---|
| **loopctl** | ✓ env / dept / project scope | ✓ envs + steps | ✓ fresh window · memory + handoff re-injected | ✓ PR + rollback tags |
| **Self-hosted peers** | | | | |
| Ralph | prd.json between iters | No | ✓ fresh per iteration | Memory via git history |
| OpenHands | Agent SDK state | Multi-agent supported | Session-bound | No built-in lifecycle |
| NVIDIA NemoClaw | Wraps OpenClaw agents | Depends on wrapped agent | Policy-governed | No |
| **Hosted SaaS** | | | | |
| Devin / hosted SaaS | Vendor-managed | Vendor-managed | Vendor-managed | Vendor-managed |
| AgentDock | Workflow chain vars | Workflow state | Session / task | No |
| Runloop | Sandbox state per agent | — | Sandbox-scoped | No |
| **Interactive self-hosted** | | | | |
| Goose | MCP extensions | No | Session-bound | No |
| Pi | Tree-structured history | — | Session-bound | No |
| pi-mono | Shared session logs | — | Session-bound | No |
| **Skill bundles** | | | | |
| gstack | None | No | Per slash-cmd | Skill-driven, no lifecycle |
| pi-skills | None | No | Per skill invocation | No |
| **Disambiguation** | | | | |
| loopctl.com (unrelated) | Per-agent token tracking | Anomaly detection | Review-gate enforcement | — |

Name collision: loopctl.com is a separate product in the agent-governance / audit category. Same name, different company, different category. If you were looking for them, this isn't them.

A note on method: third-party claims reflect best-effort reading of each tool's public positioning. Verify against their current docs before relying on them. This space moves fast.

Self-hosted by default.

Your repo stays where it lives. Data never leaves your infra unless you wire up an adapter. No SaaS tier, no telemetry beacon.

Agent-agnostic.

Claude today. OpenAI, Ollama, whatever tomorrow. The loop doesn't care — adapters plug in behind one interface.

Config over code.

YAML for the environment, SKILL.md for what agents do. No Python module to import, no hidden runtime to learn.

Opt-in by default.

Nothing is forced. Skip the pipeline and run one agent. Skip the messenger. Skip MCP. Skip approvals. Skip the bot entirely. Start with the minimum shape you need, layer in the rest when the pain shows up.

Run your agent like infrastructure.

One YAML file. One binary. Your repo, your infra, your budget caps.