AI Agent in Your Terminal

Write, edit, debug, and ship code with AI — powered by local models (Ollama) or cloud APIs. Zero telemetry. Your code stays yours.

Download Roncon Code


v0.3.0 — standalone binary, no Python required

Or install via Homebrew:

brew install Hypn-Tech/roncon/roncon-code

roncon-code — ~/project

$ roncon-code --model ollama:qwen2.5:7b
[AdaptiveProfile] Level: LIGHT (7 tools, Metal GPU, 8.7GB free)
[ModelFallback] Chain: ollama:qwen2.5:7b
[PrivacyGuard] Active — all telemetry DISABLED
Ready to code. What would you like to build?

> Fix the bug in auth.py and add tests

Reading auth.py...
Found issue: missing null check on line 42
Editing auth.py: replaced 1 occurrence
Creating test_auth.py (234 chars, 12 lines)
Running: pytest test_auth.py
All tests passing. Bug fixed.

> /cost
Session: 0 tokens | $0.00 (local model — free)

Features

🔍 Adaptive Profiles

Auto-detects GPU, RAM, and model size. Adjusts tool complexity to match your hardware — from MINIMAL (3 tools) to FULL (all tools).
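As a rough illustration, the selection logic could be sketched like this; the function name, thresholds, and the FULL tool count are assumptions for this sketch, while the profile names and the LIGHT (7 tools, 8.7GB free) data point come from the demo session above.

```python
# Sketch of adaptive profile selection (assumed logic; thresholds
# and the FULL tool count are illustrative, not Roncon Code's values).

def pick_profile(free_ram_gb: float, has_gpu: bool, model_params_b: float) -> tuple[str, int]:
    """Return (profile_name, tool_count) for the detected hardware."""
    if free_ram_gb < 4 or model_params_b <= 3:
        return ("MINIMAL", 3)   # tiny models get a stripped-down toolset
    if free_ram_gb < 16 or not has_gpu:
        return ("LIGHT", 7)     # mid-range: the 7-tool profile from the demo
    return ("FULL", 25)         # all tools enabled (count is a placeholder)

print(pick_profile(8.7, True, 7))   # mirrors the demo session: ('LIGHT', 7)
```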

🔄 Model Fallback

Cloud API hits rate limit? Auto-switches to local Ollama. Re-adapts tool descriptions for the new model. Zero downtime.
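The fallback chain can be pictured as a loop that tries each provider in order and drops to the next on a rate-limit error. Everything here (the `RateLimitError` class, provider stand-ins) is a placeholder sketch, not Roncon Code's real API.

```python
# Illustrative fallback chain (assumed design, placeholder names).

class RateLimitError(Exception):
    pass

def complete_with_fallback(prompt, providers):
    """providers: list of (name, call) pairs, tried in order."""
    errors = {}
    for name, call in providers:
        try:
            return name, call(prompt)
        except RateLimitError as e:
            errors[name] = e   # record, then fall through to the next provider
    raise RuntimeError(f"all providers failed: {errors}")

def cloud(prompt):             # stand-in for a rate-limited cloud API
    raise RateLimitError("429 Too Many Requests")

def local(prompt):             # stand-in for a local Ollama model
    return f"echo: {prompt}"

print(complete_with_fallback("hi", [("groq", cloud), ("ollama", local)]))
# → ('ollama', 'echo: hi')
```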

🔒 Privacy Guard

Zero telemetry by default. All LangSmith tracing blocked at code level. Your data never leaves your machine.
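One way to block LangSmith tracing at the process level is to force its environment switches off before any LangChain import. The variable names below are real LangChain/LangSmith switches; whether Roncon Code uses exactly this mechanism is an assumption.

```python
# Minimal sketch: hard-disable LangSmith tracing for this process.
import os

TRACING_VARS = {
    "LANGCHAIN_TRACING_V2": "false",   # legacy LangChain tracing switch
    "LANGSMITH_TRACING": "false",      # current LangSmith tracing switch
}

def disable_tracing():
    for var, value in TRACING_VARS.items():
        os.environ[var] = value                    # override anything inherited
    os.environ.pop("LANGCHAIN_API_KEY", None)      # drop the key so uploads can't auth

disable_tracing()
print(os.environ["LANGCHAIN_TRACING_V2"])   # → false
```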

💰 Cost Tracking

Real-time token usage and cost per provider. Free with Ollama, from $0.14/M tokens with cloud APIs.
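The bookkeeping behind per-provider cost is straightforward: tokens divided by a million, times the provider's rate. This is a sketch with assumed class and provider names; the $0.14/M figure is the cheapest cloud rate quoted above.

```python
# Sketch of per-provider cost tracking (assumed design).

PRICE_PER_MTOK = {"ollama": 0.0, "openrouter": 0.14}  # USD per million tokens

class CostTracker:
    def __init__(self):
        self.tokens = {}                      # provider -> total tokens used

    def record(self, provider: str, n_tokens: int):
        self.tokens[provider] = self.tokens.get(provider, 0) + n_tokens

    def cost(self, provider: str) -> float:
        return self.tokens.get(provider, 0) / 1e6 * PRICE_PER_MTOK[provider]

t = CostTracker()
t.record("openrouter", 2_000_000)
print(f"${t.cost('openrouter'):.2f}")   # 2M tokens at $0.14/M → $0.28
print(f"${t.cost('ollama'):.2f}")       # local model stays free → $0.00
```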

🛡 Permission System

Blocks dangerous operations: rm -rf, git push --force, DROP TABLE. Configurable patterns and sensitive paths.
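A pattern-based check of this kind can be sketched in a few lines. The blocked patterns are the three examples given above; the function and the exact regexes are illustrative, a real configuration would be richer.

```python
# Sketch of a configurable pattern blocker (assumed implementation).
import re

BLOCKED = [
    r"\brm\s+-rf\b",            # recursive force delete
    r"git\s+push\s+--force",    # history-rewriting push
    r"\bDROP\s+TABLE\b",        # destructive SQL
]

def is_allowed(command: str) -> bool:
    return not any(re.search(p, command, re.IGNORECASE) for p in BLOCKED)

print(is_allowed("ls -la"))            # → True
print(is_allowed("rm -rf /tmp/x"))     # → False
print(is_allowed("git push --force"))  # → False
```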

📡 Live Model Registry

Polls Ollama every 30s. New model pulled? Auto-discovered. Model removed? Automatically deregistered. All in real time.
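Ollama exposes its installed models via `GET /api/tags`, so a registry refresh reduces to polling that endpoint and diffing against the last snapshot. The endpoint is real Ollama API; the 30s loop and the diffing helper are an assumed sketch of the behavior described above.

```python
# Sketch of the registry refresh against Ollama's /api/tags endpoint.
import json
import urllib.request

def list_ollama_models(host="http://localhost:11434"):
    """Return the set of model names Ollama currently has installed."""
    with urllib.request.urlopen(f"{host}/api/tags") as r:
        return {m["name"] for m in json.load(r)["models"]}

def diff_registry(known: set, current: set):
    """Return (added, removed) model names since the last poll."""
    return current - known, known - current

# A background loop would call this every 30s:
#   current = list_ollama_models(); added, removed = diff_registry(known, current)

print(diff_registry({"qwen2.5:7b"}, {"qwen2.5:7b", "llama3.2:3b"}))
# → ({'llama3.2:3b'}, set())
```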

53 CLI Commands

/model, /adaptive, /fallback, /registry, /privacy, /providers, /doctor, /benchmark, and 45 more.

🧪 694 Tests

100% coverage on all custom middleware. 29 eval scenarios across 4 categories. Battle-tested.

🌐 Multi-Provider

Ollama, Groq, DeepInfra, Together, OpenRouter, Anthropic, OpenAI — switch with one flag.

Pricing

Roncon Code charges only for the CLI tool; you bring your own LLM (local, or a cloud API with your own key).

CLI License

Free (0 sats)
- Proof-of-work throttled:
  - ~0.1s for the first 10 actions/hr
  - ~1s up to 30/hr
  - ~16s up to 50/hr
- No signup required
Starter (500 sats)
- 100 instant actions
- Valid 30 days
- No PoW delay
- Buy with Lightning
Pro (2,000 sats)
- 500 instant actions
- Valid 90 days
- 20% off
- Buy with Lightning
Power (5,000 sats)
- 1,500 instant actions
- Valid 180 days
- 40% off
- Buy with Lightning

LLM Cost vs Quality

You bring your own model — local (free) or cloud API (your key, your cost).

Model Setup           | LLM Cost          | Quality   | Privacy    | Speed
----------------------|-------------------|-----------|------------|-----------
Ollama local (7B)     | $0 (free forever) | Good      | 100% local | ~5 tok/s
Ollama local (14B+)   | $0 (free forever) | Very Good | 100% local | ~3 tok/s
Groq (70B cloud)      | ~$37/mo           | Very Good | API only   | ~300 tok/s
DeepInfra (72B cloud) | ~$22/mo           | Very Good | API only   | ~50 tok/s
OpenRouter (DeepSeek) | ~$10/mo           | Good      | API only   | ~40 tok/s
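A "~$/mo" figure like those in the table is just token volume times the provider's per-million rate. The usage volume below is a hypothetical example, not the figure the table was built from.

```python
# Sketch: estimate monthly LLM cost from a per-million-token rate.

def monthly_cost(price_per_mtok: float, tokens_per_day: float, days: int = 30) -> float:
    return tokens_per_day * days / 1e6 * price_per_mtok

# e.g. a hypothetical 2M tokens/day at the $0.14/M rate quoted above:
print(f"${monthly_cost(0.14, 2_000_000):.2f}/mo")   # → $8.40/mo
```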

Compare: Claude Code costs $100-200/mo (CLI and model bundled); GitHub Copilot is $19/mo.
With Roncon Code you pay only for the CLI and choose your own model at your own cost.