v0.0.33 — Free to use

One file.
One agent.

Drop a Markdown file into your repo and get a fully-equipped AI agent with tools, MCP integrations, and sub-agent delegation. 6 providers. 9 built-in tools. Zero boilerplate.

AgentCTL — terminal
$ m
# Setup wizard runs on first launch — pick your backend
» fix the failing test in api/handler.go
→ fs_read api/handler.go
→ shell go test ./api/...
→ fs_write api/handler.go (patch: fix nil check)
Overwrite api/handler.go? [y/N]: y
→ shell go test ./api/...
PASS
→ git commit -m "fix: nil check in handler"

Works with the models you already use

Local or hosted. Free or paid. Switch mid-session with /model.

Ollama
Anthropic
OpenAI
Gemini
Alibaba
LiteLLM

Everything you need, nothing you don't

A single 7.8 MB binary. No SDKs, no runtimes, no daemons.

Agents as Markdown

YAML frontmatter defines model, tools, and permissions. The body is the system prompt. One file = one agent.
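A minimal sketch of one such file. The frontmatter fields (name, type, model, tools) follow the bundled devops.md agent shown further down; the reviewer agent itself is a hypothetical example, not one that ships with the binary.

```markdown
---
name: reviewer
type: agent
model: anthropic/claude-sonnet-4-6
tools:
  - fs_read
  - git
---
You are a careful code reviewer. Read changed files with fs_read and
inspect history with git. Never modify files.
```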

9 Built-in Tools

Shell, file read/write, directory listing, git, test runner, and sub-agent delegation. Every write requires user confirmation.

MCP Integration

Connect to Jira, GitHub, Confluence, or any MCP server. Tools are namespaced and auto-discovered.
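How an MCP server might be declared in an agent's frontmatter. This is a hypothetical sketch: the key names (mcp_servers, url) and the namespacing shown in the comment are assumptions based on the description above, not confirmed AgentCTL syntax.

```yaml
# Hypothetical sketch — mcp_servers and url are assumed key names.
name: ticket-bot
type: agent
model: anthropic/claude-sonnet-4-6
mcp_servers:
  - name: jira
    url: https://mcp.example.com/jira
tools:
  - shell
# Discovered MCP tools would appear namespaced, e.g. jira/get_issue.
```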

Hub-and-Spoke

Orchestrator agents delegate to specialists that return structured JSON with citations and confidence levels.
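A specialist's reply to the orchestrator might look like the following sketch. The field names are illustrative assumptions, grounded only in the description above (structured JSON with citations and confidence levels).

```json
{
  "answer": "The crash is an OOMKilled pod; raise the memory limit.",
  "confidence": 0.82,
  "citations": [
    {
      "source": "kubectl describe pod api-7f9c",
      "detail": "Last State: OOMKilled"
    }
  ]
}
```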

Full-screen TUI

Token count, cost estimate, context window %, CPU/RAM stats, theming. Falls back to a line-based REPL when output is piped.

Keys in the Keychain

API keys stored in macOS Keychain or Linux libsecret. Never in config files. Never in plaintext.

Define an agent in 10 lines

A Markdown file with YAML frontmatter is all you need. The body becomes the system prompt. Tools, model, temperature — everything is declarative.

  • Choose any model from 6 providers
  • Pick which tools the agent can use
  • Connect MCP servers for external integrations
  • Delegate to sub-agents for complex workflows
  • Add structured output schemas for JSON responses
devops.md
---
name: devops
type: agent
model: anthropic/claude-sonnet-4-6
tools:
  - shell
  - fs_read
  - fs_write
  - git
  - test_run
temperature: 0.3
---
You are a DevOps engineer. Explore the project with fs_list. Make targeted changes with fs_write. Always consider security.

Ready-to-use agents included

These ship with the binary. Use them as-is or fork them as templates for your own.

coder.md
Full coding assistant with planner sub-agent and GitHub MCP integration.
Claude Sonnet · shell · git · MCP
k8s-debug.md
Kubernetes troubleshooter. Triages pod crashes, networking, and resource issues.
Claude Sonnet · shell · kubectl
terraform-plan.md
Reviews Terraform plans, catches security issues, patches .tf files.
Claude Sonnet · shell · fs_write
ticket-worker.md
Reads Jira tickets, implements the work, commits, and updates the ticket.
Jira MCP · Confluence · git
helm-deploy.md
Lints Helm charts, renders templates, reviews values for production readiness.
Claude Sonnet · shell · helm
orchestrator.md
Routes tasks to the right specialist agent automatically. Hub-and-spoke.
delegate · 6 sub-agents
steva-djubre.md
Grumpy Serbian DevOps SRE. Worst attitude, best results. Responds in Serbian.
Serbian · personality
qwen-coder.md
Local coding assistant. Runs on Ollama with Qwen3-Coder. No API key needed.
Ollama · free · local

Everything is documented

See it in action

Chat session — full-screen TUI with token/cost tracking
Tool execution — shell, fs_read, fs_write with diff preview
Themes — 9 built-in themes, switch with /theme
Model picker — /models to list and switch by number
Diff preview — fs_write shows changes before applying
Config — provider setup, model scanning, API key management

Frequently asked questions

Which models can I use?
Any model from 6 providers: Ollama (local — Qwen, Llama, Mistral, etc.), Anthropic (Claude), OpenAI (GPT-4o, GPT-4.1), Google Gemini, Alibaba Cloud (Qwen, DeepSeek), or any OpenAI-compatible endpoint via LiteLLM. Switch models mid-session with /model provider/model.
How much does it cost?
AgentCTL itself is free and open source (MIT). With Ollama, everything runs locally at zero cost. Hosted providers (Anthropic, OpenAI, etc.) charge per-token — a typical coding session costs $0.01–$1.35 depending on the model.
Can it modify my files?
Yes, but every file write requires your explicit confirmation. You see a diff preview and type y to approve or n to decline. The agent sees "user declined" and adjusts. There's also an undo stack — type /undo to revert the last write.
How is this different from IDE-based AI assistants?
AgentCTL is CLI-first, not IDE-bound. Agents are plain Markdown files you version-control alongside your code. There's no vendor lock-in — switch providers with one line. The hub-and-spoke pattern lets you compose specialist agents that work together. And it's a single 7.8 MB binary with zero dependencies.
Where are my API keys stored?
In your OS keychain — macOS Keychain or Linux libsecret. Never in config files, never in plaintext, never in environment variables (unless you explicitly set them). The setup wizard handles storage automatically.
Can I use it for DevOps work?
Absolutely. The repo ships with purpose-built agents for Kubernetes debugging, Terraform plan review, Helm chart management, and Jira/Confluence ticket-driven development. Through the shell tool, agents have access to kubectl, terraform, helm, and any other CLI on your system.
How do I build my own agent?
Run m new my-agent to scaffold a new agent .md file with boilerplate. Edit the system prompt and tools list, then m chat my-agent. You can also install agents globally with m install ./my-agent.md and run them by name from anywhere.
Something not working?
Run m doctor. It checks your config, API key, model reachability, required tools (git, grep), and session encryption. If anything is wrong, it tells you exactly how to fix it.

Ready to try it?

Install in 30 seconds. First-run wizard gets you chatting immediately.

Download Latest Release →