loopctl
Private beta — code isn't public yet · Self-hosted · Agent-agnostic · Config-over-code

The control plane for autonomous work, on your infrastructure.

Ground agents in your repo, issues, skills, and tools. Keep execution local-first, push decisions through budgets and guardrails, and drive the loop from chat or CLI when judgment is required.

Each env runs its own loop in parallel, with its own cadence and pipeline: production ↻ 30m · staging ↻ 15m · dev ↻ 5m.

SIGNAL QUEUE (issues · bot · webhook · cron) → LOOP (every 30m · budgets · guardrails · human gate optional, policy-triggered) → HANDOFF/HANDBACK (pm → dev → reviewer → qa → lead · handoff forward · handback when blocked · optional human checkpoint) → OUTPUTS (PR · rollback tag · trailer · JSONL) — each push feeds the next cycle.

AI agent frameworks help you build agents. They don't tell you how to run one safely at 3 a.m. when nobody's watching the API bill. loopctl does — hard budget caps, per-cycle spend enforcement, blocked-branch guardrails, rollback tags on every cycle, and a messenger control plane you can drive from anywhere.

AgenticOps model

Grounded agents. Local-first control plane. Remote integrations where they belong.

loopctl gives autonomous work a clean operating model: select the next unit of work, assemble grounded context, enforce policy, run the agent with live tools, and leave a durable record operators can inspect or resume from any surface.

Grounding

Repo + ops state

Workspace files, issue context, prior handoffs, rules, skills, and budget state are the source of truth for each step.

Data flow

Prompt + live tools

loopctl assembles role, task, policy, and prior outputs up front. The agent reads the repo and live state through shell, git, GitHub, and MCP tools as it works.

Boundary

Local first

Control plane, logs, queues, and workspace stay close to the repo. GitHub, Telegram, Slack, Mattermost, MCP servers, and deploy targets stay remote behind adapters.

1. Intake
Issue, webhook, bot, cron
Signals enter the queue with env policy and department routing.
2. Control plane
Policy, budget, topology
loopctl selects work, injects role context, and decides who must approve.
3. Agent runtime
Model + tools
The model receives a task packet, then works against the mounted workspace and tools.
4. Outputs
PR, log, audit trail
Every run leaves durable state operators can tail, review, resume, or roll back.

What you get.

Everything you'd otherwise cobble together yourself for an unattended coding agent — in one binary + one YAML file.

Cost governance

Daily and per-cycle budgets enforced mid-stream. Hard caps kill a cycle the moment the budget runs out; override on demand from bot or CLI.
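As a sketch of what that looks like in config — the nesting and field names here are illustrative, not the shipped schema; only the knob names (`daily_spend`, `per_cycle_spend`, `cycles_per_hour`, `cycles_per_day`) come from the controls listed below:

```yaml
# Hypothetical budget block in .agentic/config.yml — check the real
# config reference for the exact schema; this shows the shape of the idea.
budgets:
  daily_spend: 25.00      # USD hard cap across all cycles in a day
  per_cycle_spend: 5.00   # enforced mid-stream: the cycle dies the moment this is hit
rate_limits:
  cycles_per_hour: 4
  cycles_per_day: 20
```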

Guardrails

Blocked commands (rm -rf, git push --force). Blocked branches — never direct to main. Blocked file patterns — secrets + CI stay human-only. Approval gates on any step. Break-glass PIN for emergency pause from chat.
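A hedged sketch of how those guardrails might sit in config — key names are illustrative; the blocked commands, branches, and file categories are the ones named above:

```yaml
# Hypothetical guardrails block — names are illustrative, not the shipped schema.
guardrails:
  blocked_commands: ["rm -rf", "git push --force"]
  blocked_branches: [main, master]          # never direct to main
  blocked_file_patterns:
    - "*.env"                               # secrets stay human-only
    - ".github/workflows/*"                 # CI stays human-only
```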

Messenger ops

30+ Telegram, Slack, and Mattermost slash commands. The same verbs work from the CLI — parity is contract-tested, so anything you do on your laptop works from your phone.

Pipeline orchestration

Multi-agent workflows where each step has a named role, per-step handoff, per-role model and effort, optional human approval gates, and a control plane that decides when to hand work forward, hand it back, or pause for judgment.

Git discipline

Every cycle tagged in git. commit_push_pr policy plus blocked-branches plus GitHub branch protection — three layers. One-command rollback.
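Concretely, here is what per-cycle tags buy you in plain git — a throwaway-repo sketch; the `agentic-cycle-<id>` tag convention is from the docs, but the repo, ids, and the "bad change" are made up:

```shell
set -e
repo=$(mktemp -d)
cd "$repo" && git init -q
git -c user.email=ops@example.com -c user.name=ops \
    commit -q --allow-empty -m "baseline"
git tag agentic-cycle-41                      # state after a good cycle
git -c user.email=ops@example.com -c user.name=ops \
    commit -q --allow-empty -m "cycle 42: bad change"
git reset --hard -q agentic-cycle-41          # one-command rollback
git describe --tags                           # prints: agentic-cycle-41
```

Because every cycle leaves a tag, rollback is always `git reset --hard agentic-cycle-<id>` to any prior point.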

Works for any domain

Not a coding assistant. Software, docs, support triage, ops monitoring — skills define what the agent does; the framework handles the loop, the bot, and the guardrails.

Curated skills, ready to run

20+ skills ship in the box — scan-and-fix, chat, team pipelines, audit runners. Three scopes: framework (built-in), project (operator-local), dev-time (maintainer-local). Override any shipped skill by dropping the same-named file in your .agentic/skills/. No forking required.

Agent-agnostic

Claude, Codex, Gemini, Grok, OpenAI, and Ollama out of the box — plus custom commands for anything else. Adapters plug in behind one interface; the loop, the bot, and the guardrails don't care which brain you wire up.

MCP-native

Install any Model Context Protocol server with agentic mcp install. Every pipeline agent gets the same tools — GitHub, Jira, Linear, your own — with the same auth, no wrapper code. Same adapter pattern, different endpoint.
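A sketch of the idea — the `mcp:` block and server names here are hypothetical, chosen only to show that one server list is shared by every pipeline agent:

```yaml
# Hypothetical: one MCP server list, visible to all pipeline agents
# with the same auth. Exact keys are illustrative.
mcp:
  servers:
    - name: github
    - name: linear
organization:
  pipeline: [pm, dev, reviewer, lead]   # all four get the same github + linear tools
```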

Human-in-the-loop checkpoint

Autonomous until judgment is required.

A loop can run for hours without you. When it touches a protected file, asks for a risky command, crosses a budget boundary, or reaches a review step, it pauses and asks a named human to approve, reject, or edit the handoff. Omit the setting for a fully autonomous step; add it where human judgment is part of the workflow.

Trigger
Risk detected
State
Cycle paused
Review
Chat or CLI
Result
Resume same run
.agentic/config.yml — the approval key is optional
organization:
  pipeline: [pm, dev, reviewer, lead]
  agents:
    - name: dev
      # no approval key: autonomous unless a guardrail pauses it

    - name: reviewer
      approval: human # by design: wait for a person
Example flow: agent step scotty (edits) goes PAUSED needs-human when a protected file is touched (.github/workflows/deploy.yml). A Telegram approval request offers /approve cycle-42 · /reject · /edit handoff, then the loop resumes. The approval decision is written to the cycle log, PR body, and issue timeline.

Who this isn't for.

loopctl is a narrow tool. If one of these sounds like you, use something else — we'll save you time.

You want a coding assistant in your editor.

Something that helps you write the next line of code while you're in-flow. loopctl doesn't do that.

Use Claude Code, Cursor, or Aider.

You want a hosted agent that "just works".

Upload a task, come back to a PR, no infra to manage. loopctl runs on your machines — you own the containers, the logs, the secrets.

Use Devin, AgentDock, or an enterprise agent-fleet SaaS.

You're learning what agents are.

loopctl assumes you've already shipped something with a coding agent and now want to run it unattended. The onboarding curve is not gentle.

Start with Goose or Claude Code for a few weeks first.

Build your dream team in minutes.

Define who does what in YAML — write it yourself, or run /setup-pipeline inside Claude Code and let the skill interview you about domain, risk, and budget, then write the organization block for you. Either way, the pipeline runs what you produce. One agent or twenty — any shape, any domain. Each agent gets its own model, effort, skills, and personality. Change the shape without rewriting code.

.agentic/config.yml
# Hand-written below, or generated by running `/setup-pipeline`
# inside Claude Code — either way the output lands here.
organization:
  pipeline: [pm, dev, reviewer, lead]
  agents:
    - name: pm
      role: orchestrator
      model: claude-opus-4-7
      effort: max
      description: >-
        Reads backlog, picks target, writes spec.
      skills: [team-orchestrator]

    - name: dev
      role: developer
      agent: codex
      model: gpt-5.4
      effort: high
      description: >-
        Implements the spec. Knows when to ask back.
      skills: [team-developer]

    - name: reviewer
      role: reviewer
      model: claude-opus-4-7
      effort: max
      approval: human # optional: require a person here
      description: >-
        Naming, invariants, security. No emotion.
      skills: [team-reviewer]

    - name: lead
      role: lead
      agent: codex
      model: gpt-5.4-mini
      effort: medium
      description: >-
        Final SHIP or HOLD. Sees the full output.
      skills: [team-lead]

Any shape, any size.

Solo is one agent. Team is a flat list. Enterprise is nested departments. Scale the shape to the problem; the framework doesn't care.

Right model per role.

Opus where judgment matters. Codex where implementation or fast lead checks fit better. Set agent, model, and effort per role in YAML — no code rebuild.

Scoped skills + personality.

Each agent gets only the skills (markdown playbooks) it needs. Description sets tone and approach — the reviewer isn't the dev isn't the lead. Handoff between steps is structured, not just context dump.

Autonomous until judgment is required.

Omit approval and the step runs on its own. Add approval: human when review is part of the design, or let guardrails pause the loop only when something breaks policy. Approve, reject, or edit from Telegram / Slack / CLI.

Hand-written or AI-guided.

Write the YAML yourself, or run /setup-pipeline in any Claude Code session. The skill interviews you, proposes a shape, explains the trade-offs, and writes it into your .agentic/config.yml.

Starter templates — any domain, same pipeline shape

Software

SDLC crew

Ship working code from issue to PR.

pm → dev → reviewer → qa → lead

Docs & content

Content team

Editor plans, writer drafts, fact-checker verifies, lead signs off.

editor → writer → checker → lead

Incidents

Ops crew

Triager reads alerts, investigator finds cause, fixer patches, postmortem writes.

triager → investigator → fixer → postmortem

How it works.

Three layers every cycle flows through: what comes in, the controls you hold, and the audit trail that comes out. Nothing happens that isn't logged, capped, or revertable.

ops@loopctl-prod:~/loopctl/workspace $ claude
pane 0 · claude
> watch production. ping me if a cycle fails.

Tailing production-progress.log, watching for result=failed

[19:35]  cycle-1776794642 · #510 · PR #536
[19:42]  cycle-1776795882 · scotty step 2
[19:47]  cycle-1776795882 · $4.12 · PR #537
[19:58]  cycle-1776796410 · $2.80 · PR #538

All green. 3 PRs in 23m, $6.92 spent.
> 
pane 1 · ops tail
agentic ops tail --all-projects

[production] 19:42 Step 2/6 — scotty
[production] 19:42 reading loop/loop.go
[staging   ] 19:43 cycle result=success
[dev       ] 19:43 auto-pick from backlog
[production] 19:44 Step 3/6 — geordi
[production] 19:46 Step 6/6 — picard
[production] 19:47 Pipeline completed 5m32s
[production] 19:47 PR #537
pane 0 · ops tail --all-projects
agentic ops tail --all-projects --filter events

[production] 19:35:12 cycle-1776794642 result=success · PR #536 · 31m32s · $4.12
[production] 19:42:03 cycle-1776795882 Step 1/6 — riker
[production] 19:42:47 cycle-1776795882 Step 2/6 — scotty
[staging] 19:43:11 cycle-1776795934 result=success · $0.93 · 2m26s
[staging] 19:43:12 PR-error: no commits between main and branch
[dev] 19:43:39 cycle-1776795966 auto-pick from backlog
[production] 19:44:51 cycle-1776795882 Step 3/6 — geordi
[production] 19:45:55 cycle-1776795882 Step 4/6 — data
[production] 19:46:22 cycle-1776795882 Step 5/6 — spock
[production] 19:46:59 cycle-1776795882 Step 6/6 — picard
[dev] 19:47:20 cycle-1776795966 result=success · $0.71 · 2m8s
[production] 19:47:31 cycle-1776795882 Pipeline completed 5m32s
[production] 19:47:32 cycle-1776795882 result=success · PR #537 · $2.80
pane 0 · production pipeline · cycle-1776795882
19:42:03 Step 1/6 — riker (Project Manager)
         model=opus-4-7 · effort=max
         reading backlog, selecting highest-value target…
         → picked #510 (pipeline phase transitions)

19:42:47 Step 2/6 — scotty (Senior Developer)
         editing loop/loop.go, orchestrator/pipeline.go
         → core impl + unit tests · 2m14s · $0.84

19:44:51 Step 3/6 — geordi (Developer)
         polishing edges, adding step-lock TTL
         → refinements + table tests · 1m4s · $0.42

19:45:55 Step 4/6 — data (QA Engineer)
         running test suite, static analysis
         → all 383 tests pass · 27s · $0.15

19:46:22 Step 5/6 — spock (Senior Code Reviewer)
         code-quality review, naming, invariants
         → 0 findings, approved · 37s · $0.29

19:46:59 Step 6/6 — picard (Engineering Lead)
         model=opus-4-6 · effort=max
         reading full pipeline output, final verdict…
         → SHIP IT · 32s · $0.21

19:47:31 Pipeline completed in 5m32s · $2.80 total
19:47:32 PR https://github.com/LoopCtl/agentic/pull/537


Parallel at every level.

The diagram above shows one loop in one env. Real deployments run N envs, multiple loops inside each env, and can fan out parallel steps within a single pipeline. Same config surface for all three shapes.

Multiple loops per env
ENV · production
  • loop-fast · ↻ 5m · pipeline: scan-fix
  • loop-main · ↻ 30m · pipeline: pm → dev → rev → lead
  • loop-audit · ↻ 24h · pipeline: spock (code review only)

One env, three loops. Fast pipeline for urgent fixes. Main pipeline for planned work. Nightly audit. Each with its own cadence, pipeline shape, and budget — they share the env's workspace and secrets.
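A hedged sketch of that shape in config — the `loops:` key and interval syntax are hypothetical; the loop names, cadences, and pipelines are the ones described above:

```yaml
# Hypothetical multi-loop layout for one env — key names are illustrative.
env: production
loops:
  - name: loop-fast
    interval: 5m
    pipeline: [scan-fix]                  # urgent fixes
  - name: loop-main
    interval: 30m
    pipeline: [pm, dev, reviewer, lead]   # planned work
  - name: loop-audit
    interval: 24h
    pipeline: [spock]                     # nightly code review only
```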

Parallel steps in one pipeline
pm → dev·1 / dev·2 → reviewer → qa·1 / qa·2 — fan-out to dev·1 / dev·2 · merge at reviewer · fan-out again to qa·1 / qa·2

Same pipeline, parallel steps. Two devs split the implementation. Two QAs run the test matrix in parallel. The reviewer and lead stay sequential because they need the full picture. Fan-out / fan-in declared in YAML, same surface as sequential steps.
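One plausible way to declare that fan-out / fan-in — the nested-list syntax here is an assumption, not the documented schema; a nested list marks steps that run in parallel:

```yaml
# Hypothetical fan-out declaration — the real syntax may differ.
organization:
  pipeline:
    - pm
    - [dev-1, dev-2]      # fan-out: both devs implement in parallel
    - reviewer            # fan-in: reviewer sees both branches
    - [qa-1, qa-2]        # fan-out again for the test matrix
    - lead                # sequential: needs the full picture
```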

1 · In
Signals & triggers

What the loop picks up.

  • Interval ticks — every interval (e.g. 30m) wakes a cycle.
  • Issue backlog — pulls work from GitHub issues, labels, or your tracker adapter.
  • Messenger commands — /scan, /fix <issue>, /force.
  • Webhooks — GitHub PR-opened / issue-labeled → priority queue.
  • Scheduled — cron-like triggers via agentic ops cycle.
2 · Controls
What you hold

Every lever is live.

Pre-cycle gate
  • Rate limits: cycles_per_hour · cycles_per_day
  • Budgets: daily_spend · per_cycle_spend
  • Blocked branches (main/master) + blocked shell commands
  • Pause / resume + emergency stop
Pipeline
  • N steps, each with its own model + effort
  • Per-step mid-stream budget enforcement
  • Human review gates (any step, any agent, only when needed)
  • HANDBACK / HANDOFF between roles
Runtime overrides
  • /loop on|off · /pause · /cancel
  • /god — bypass caps for one incident
  • ops config cycles_per_day 20 — retune live
  • Break-glass PIN for emergency pause from chat
3 · Out
Everything is traceable

Full accountability.

  • Commit trailers. Every commit carries Agentic-Cycle · Agentic-Agent · Agentic-Model · Agentic-Skill · Agentic-Role.
  • Issue timeline. The agent labels the issue agentic:in-progress, comments progress, clears the label on PR merge. Anyone reading the issue sees the full arc.
  • PR body. Links the issue, the cycle-id, every step's duration / tokens / spend.
  • Cycle JSONL. Every tool call, every token count, every PR URL — one file per env.
  • Git tag per cycle. agentic-cycle-<id> — rollback to any point with one command.
  • Live stream. Bot pushes step transitions + failure causes to your messenger in real time.

Everything loopctl produces is grep-able from the moment it lands. Three surfaces, same audit trail:

  • git log --grep='Agentic-Cycle: ...' every commit from one cycle · every commit from one agent
  • gh issue view 510 --comments the agent's own progress narration on the issue thread
  • jq '.result' production-cycles.jsonl success / failure counts from the event log

Nothing anonymous lands in your repo. Nothing hidden from the issue thread. Nothing unlogged.
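To make the trailer surface concrete, here is the plain-git mechanics in a throwaway repo — the trailer names are the ones documented above; the cycle id, agent name, and commit are made up:

```shell
set -e
repo=$(mktemp -d)
cd "$repo" && git init -q
# A commit the way the agent would leave it: subject, then provenance trailers.
git -c user.email=ops@example.com -c user.name=agent \
    commit -q --allow-empty \
    -m "fix: pipeline phase transitions" \
    -m "Agentic-Cycle: cycle-1776795882
Agentic-Agent: scotty
Agentic-Model: gpt-5.4"
# every commit from one cycle:
git log --grep='Agentic-Cycle: cycle-1776795882' --format='%h %s'
# every commit from one agent:
git log --grep='Agentic-Agent: scotty' --oneline
```

Because the trailers live in the commit message itself, the audit trail needs no sidecar database: any `git log --grep` can slice history by cycle, agent, model, or role.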

How we compare.

Five tools closest to loopctl in shape and intent — the architectural peers. Honest framing per row: what they're great at, where loopctl differs, who should pick which. For tools you may also be evaluating but that solve different problems (Cursor, Claude Code, Devin, Goose), see “Who this isn't for” above — those are different jobs, not direct comparisons. Assessments dated 2026-04; verify against current docs.

Each entry: tool — shape · what they're great at · what loopctl adds on top · which to pick.

loopctl — Self-hosted ops control plane.
Great at: scheduled scan loop · messenger ops · commit provenance · tracker timeline per cycle · controller/worker publication.
Pick which: "AI runs my project overnight."

Gas Town — Multi-agent workspace coordination (OSS).
Great at: persistent named agents in towns/rigs · git-backed work state via Beads · mailbox-style handoff · supervisor/watchdog patterns · refinery-style merge queue.
What loopctl adds: scheduled scan loop on top of workspace coordination · messenger ops with CLI parity contract · per-cycle + daily cost governance · provenance per commit (named role trailers) · tracker timeline per cycle.
Pick which: Gas Town to coordinate persistent named agents across parallel workspaces; loopctl when you want the operations layer (scheduling, messenger, audit, publication) on top.

Gas City — Multi-agent orchestration SDK (OSS).
Great at: the lower-level building blocks behind Gas Town — runtime providers, controller/supervisor reconciliation, work routing, mail/formulas/orders integration with Beads.
What loopctl adds: a ready-to-run ops platform, no SDK assembly required · scheduled scan loop with messenger ops · per-cycle + daily cost governance · provenance per commit · tracker timeline per cycle · built-in guardrails + approval gates.
Pick which: Gas City to assemble your own custom orchestration; loopctl when you want a working ops platform without building it.

OpenHands — OSS agent platform.
Great at: larger contributor community · mature multi-agent SDK · polished local GUI · K8s enterprise story · Slack / Jira / Linear cloud integration.
What loopctl adds: scheduled scan loop, not interactive-first · CLI ↔ bot parity contract enforced by tests on every PR · provenance per commit (named role trailers) · per-cycle + daily cost governance · tracker timeline per cycle.
Pick which: OpenHands to build into a platform with a richer agent SDK; loopctl as an operations layer your agent runs inside.

Composio Agent Orchestrator — Parallel coding-agent orchestrator (OSS).
Great at: per-worker worktree + branch + PR · CI/review feedback routed back to agents · local dashboard with desktop notifications · tmux/Docker runtime.
What loopctl adds: scheduled scan loop on top of parallel workspaces · bidirectional messenger ops over Slack / Telegram / Mattermost (not desktop notifications) · per-cycle + daily cost governance with mid-stream circuit breaker · provenance per commit · tracker timeline per cycle.
Pick which: Composio Agent Orchestrator to supervise N parallel coding agents from a dashboard; loopctl when you want operations, budgets, and audit on top.

Loki Mode — Multi-agent app generator from spec.
Great at: turning a written specification into a working app — source code, tests, CI, audit log · multi-agent cycles · provider failover.
What loopctl adds: continuous ops on your existing project, not one-shot generation from a spec · scheduled scan loop against your backlog · messenger ops with CLI parity · per-cycle + daily cost governance · provenance per commit · tracker timeline per cycle.
Pick which: Loki Mode to generate a new app from a written spec; loopctl to run agents on your existing project's issues 24/7.

Method: assessments reflect best-effort reading of each tool's public positioning + docs as of 2026-04. The agent-tooling space moves fast; verify against current docs before deciding.

Broader ecosystem.

Tools we've researched that sit in adjacent categories — complementary, niche, or solving a different problem. Grouped by shape so you can see what each one is actually for, not just how it scores on loopctl's axes. Per-row “when to pick” honestly points at the alternative when it's the better fit.

Each entry: tool — what it's best at · stack · when to pick it.
Self-hosted autonomous loops
Other autonomous-loop tools that didn't make the head-to-head above
Ralph — Bash script that keeps re-running an agent against a fixed task list until every item is marked done. Stack: shell + JSON task file + git history. Pick when: you want the loop concept only and will build your own ops layer.
NVIDIA NemoClaw — open privacy/security wrapper around autonomous agents, using NVIDIA's Nemotron open models. Stack: open source; designed for local NVIDIA compute. Pick when: you need privacy/policy enforcement on top of an open autonomous agent stack.
Workspace managers
Other tools for supervising agents across parallel git workspaces
Claude Squad — terminal UI to spin up and supervise several agent sessions side by side, each in its own git workspace. Stack: tmux + git workspaces. Pick when: you want a TUI to babysit several manual sessions at once.
Hosted SaaS alternatives
Vendor-managed — trade infra control for zero-setup
Devin — hosted autonomous coding agent: upload a task, come back to a PR, no infra to manage. Stack: vendor cloud, per-seat pricing. Pick when: you want autonomous coding without operating any infrastructure.
AgentDock — visual / no-code agent builder + workflow chains, hosted. Stack: vendor cloud, free + paid Pro. Pick when: you want a no-code visual builder rather than self-hosted OSS code.
Runloop — managed sandbox compute for agents to run tools in; an infrastructure layer, not an agent itself. Stack: vendor cloud, enterprise pricing. Pick when: you need agent sandbox compute as a managed primitive.
MuleSoft Agent Fabric — enterprise agent control plane with token tracking and LLM governance, embedded in Salesforce. Stack: Salesforce cloud, enterprise. Pick when: you're a Salesforce shop and need an enterprise control plane.
Kore.ai — enterprise conversational AI + agent platform with hosted orchestration and ecosystem integrations. Stack: vendor cloud, enterprise pricing. Pick when: you're enterprise and want a hosted agent platform with vendor support.
Interactive agents — you drive these by hand
Inference primitives loopctl can run as adapters when you want them automated
Cursor — AI code editor (VS Code fork) with Composer background-task agents and real-time pair programming. Stack: proprietary IDE, cloud + local agents. Pick when: you want AI in your editor, not on your infrastructure.
Claude Code — Anthropic's first-party CLI agent for coding: structured tool use, first-class Claude API access. Stack: CLI binary, Claude API. Pick when: you want to drive Claude interactively — or run it as the agent loopctl invokes.
Goose — native desktop + CLI agent with MCP-first design and Linux Foundation governance. Stack: Rust binary + MCP extensions. Pick when: you want an interactive agent with strong OSS governance — or run it via loopctl.
Pi — interactive terminal coding agent with branching conversation history; supports many model providers. Stack: Node.js CLI + TypeScript extensions. Pick when: you want an alternative interactive terminal coding agent.
pi-mono — the toolkit behind Pi: the CLI, a unified LLM API, TUI/web libraries, a Slack bot that delegates to the CLI. Stack: Node.js / TypeScript modules. Pick when: you want Pi's broader toolkit, not just the CLI.
Antigravity — Google's IDE for AI coding, with a panel that orchestrates multiple agents on one task. Stack: Google platform, public preview. Pick when: you're inside Google's ecosystem and want IDE-integrated multi-agent.
Skill bundles — extend an existing agent
Workflow + instruction assets that drop into Claude Code, Pi, or similar
gstack — disciplined plan / build / review / canary slash commands for Claude Code. Stack: Markdown skills in Claude Code. Pick when: you already use Claude Code and want structured commands for development phases.
pi-skills — skill pack (web search, browser automation, Gmail/Drive, VS Code, YouTube transcripts) that works across multiple agent harnesses. Stack: Markdown skills (Pi / Claude Code / Codex CLI / Amp / Droid). Pick when: you want portable skills that work across several coding agents.
Superpowers — skills framework + opinionated software-development methodology you install into your agent. Stack: local plugin / skill install. Pick when: you want a methodology layer on top of an existing coding agent.
BMAD Method — Agile AI-development method with specialized role agents and workflows you install into your agent. Stack: local install in supported coding agents. Pick when: you want an Agile-flavored multi-agent process imposed on your tooling.
Spec Kit — toolkit + slash commands for spec-driven development: write the spec first, then have the agent implement it. Stack: CLI + local agent integrations. Pick when: you want spec-first workflows guiding the agent through phases.
Everything Claude Code — curated bundle of Claude Code productivity assets: skills, instincts, memory, security patterns. Stack: local Claude Code assets. Pick when: you want a grab-bag of Claude Code productivity assets in one place.
Agent frameworks — libraries to build with
SDKs you import into your own code, not daemons that run agents
CrewAI — library for assembling multi-agent flows in code, with task chaining and memory stores. Stack: Python library, open core + paid cloud. Pick when: you're writing Python and want to build multi-agent flows in code.
LangChain — Python / JS framework for building LLM apps; multi-agent orchestration via LangGraph; massive integrations ecosystem. Stack: Python + JavaScript libraries; LangSmith SaaS for tracing. Pick when: you're writing Python or JS and want the broadest LLM-tool ecosystem for building agent apps in code.
Microsoft Agent Framework — multi-agent orchestration library; the merger of Microsoft's AutoGen and Semantic Kernel. Stack: .NET + Python library. Pick when: you're in the Microsoft stack and want a first-party agent SDK.
Memory / state infrastructure
Persistence layers, not runners — complement an existing agent
Task Orchestrator — server that enforces a workflow / task graph the agent has to follow, with required notes and actor attribution. Stack: local server (Model Context Protocol) + SQLite. Pick when: you want server-enforced workflow gates that prompts can't bypass.
Claude Mem — long-term memory plugin: captures Claude Code sessions, compresses them, retrieves relevant context next time. Stack: Claude Code plugin, local memory store. Pick when: you use Claude Code and want session memory that carries between runs.
Observability & personal assistants
Tracing for developers, or always-on personal AI — not coding ops platforms
AgentOps — tracing and telemetry for your own agent code: per-step timing, token usage, errors. Stack: SaaS, open core + $40/mo. Pick when: you've built an agent and want production telemetry.
OpenClaw — personal AI assistant gateway across many messaging channels: Telegram, Slack, Discord, Signal, WhatsApp, Matrix, Mattermost. Stack: local daemon + sessions + sandbox. Pick when: you want a personal assistant across many channels, not coding ops.

Self-hosted by default.

Your repo stays where it lives. Data never leaves your infra unless you wire up an adapter. No SaaS tier, no telemetry beacon.

Agent-agnostic.

Claude, Codex, Gemini, Grok, OpenAI, Ollama, plus custom commands. The loop doesn't care — adapters plug in behind one interface.

Config over code.

YAML for the environment, SKILL.md for what agents do. No Python module to import, no hidden runtime to learn.

Opt-in by default.

Nothing is forced. Skip the pipeline and run one agent. Skip the messenger. Skip MCP. Skip approvals. Skip the bot entirely. Start with the minimum shape you need, layer in the rest when the pain shows up.
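What the minimum shape might look like — keys here are illustrative, not the shipped schema; `scan-and-fix` is one of the shipped skills named above:

```yaml
# Hypothetical minimal .agentic/config.yml: one agent, no pipeline,
# no messenger, no MCP, no approvals. Exact keys are illustrative.
organization:
  agents:
    - name: solo
      skills: [scan-and-fix]
```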

Run your agent like infrastructure.

One YAML file. One binary. Your repo, your infra, your budget caps.