LiteLLM is great. Here’s what you still need to build.
LiteLLM is the best open-source LLM proxy. If you want governance out of the box -- PII scanning, cost caps, HITL approvals, managed runners, and a dashboard -- without running the infrastructure yourself, that's what we built Curate-Me for.
What LiteLLM gives you.
LiteLLM is an excellent open-source project. 16K+ GitHub stars, 100+ provider integrations, and a thriving community. We respect what BerriAI has built.
100+ LLM Providers
Call OpenAI, Anthropic, Google, Cohere, Mistral, and dozens more through a unified OpenAI-compatible API.
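The point of a unified OpenAI-compatible API is that the request shape never changes; only the model string does. A minimal sketch of that idea in plain Python (model names are illustrative, and this is the payload shape, not LiteLLM's SDK):

```python
def chat_payload(model, user_message):
    """OpenAI-style chat payload; switching providers means switching the model string."""
    return {"model": model, "messages": [{"role": "user", "content": user_message}]}

openai_req = chat_payload("gpt-4o", "Summarize this doc.")
anthropic_req = chat_payload("claude-3-5-sonnet-20240620", "Summarize this doc.")

# Identical structure either way -- only the "model" value differs.
print(openai_req.keys() == anthropic_req.keys())  # True
```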
Open Source
MIT-licensed, 16K+ GitHub stars. Deploy on your own infrastructure with full control over the codebase.
Great Python SDK
Clean, well-documented Python SDK that works as a drop-in replacement for the OpenAI client. Active community contributions.
Load Balancing & Fallbacks
Route across multiple deployments with automatic retries and fallback chains when a provider goes down.
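The retry-then-fallback pattern reduces to a small loop. A sketch in plain Python -- the provider callables are stand-ins for illustration, not LiteLLM's actual Router API:

```python
def call_with_fallbacks(providers, prompt, max_retries=1):
    """Try each provider in order, retrying transient failures,
    then fall back to the next provider when retries are exhausted."""
    last_err = None
    for call in providers:
        for _attempt in range(max_retries + 1):
            try:
                return call(prompt)
            except Exception as err:  # real code would catch provider-specific errors
                last_err = err
    raise RuntimeError("all providers failed") from last_err

# Stand-in providers: the first always fails, the second succeeds.
def flaky(prompt):
    raise TimeoutError("provider down")

def healthy(prompt):
    return f"echo: {prompt}"

print(call_with_fallbacks([flaky, healthy], "hi"))  # echo: hi
```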
Active Community
Thriving open-source community with frequent releases, responsive maintainers, and a growing ecosystem.
Spend Tracking
Track costs per model, per virtual key. Set basic budgets and get usage reports.
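Per-key budgets of the kind described here boil down to a small spend ledger. A sketch with illustrative names (not LiteLLM's actual budget API):

```python
class BudgetTracker:
    """Track spend per virtual key and flag keys that hit their cap."""

    def __init__(self, caps_usd):
        self.caps = dict(caps_usd)
        self.spend = {key: 0.0 for key in self.caps}

    def record(self, key, cost_usd):
        self.spend[key] = self.spend.get(key, 0.0) + cost_usd

    def within_budget(self, key):
        return self.spend.get(key, 0.0) < self.caps.get(key, float("inf"))

tracker = BudgetTracker({"team-a": 100.0})
tracker.record("team-a", 99.5)
print(tracker.within_budget("team-a"))  # True
tracker.record("team-a", 1.0)
print(tracker.within_budget("team-a"))  # False
```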
What you’d still need to build on top.
LiteLLM gives you the proxy. But governance, compliance, and operations? That’s on you. Here’s what teams typically need to build on top.
LiteLLM + governance + runners + dashboard = Curate-Me
We don’t replace LiteLLM -- we build the governance layer that sits above it. Everything below is what you’d need to build yourself, or what you get out of the box with Curate-Me.
PII Scanning
Build regex + NLP pipelines to catch API keys, SSNs, credit card numbers, and PII before they reach any provider. Handle deny vs. redact modes.
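A regex-only starting point for the deny/redact pipeline described above. The patterns are illustrative; a production scanner layers NLP-based detection and far more rules on top:

```python
import re

# Illustrative patterns -- real scanners need many more, plus NLP detection.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def scan(text, mode="redact"):
    """Return (clean_text, findings). In 'deny' mode, raise instead of redacting."""
    found = [name for name, pat in PII_PATTERNS.items() if pat.search(text)]
    if not found:
        return text, found
    if mode == "deny":
        raise ValueError(f"PII detected: {found}")
    redacted = text
    for name in found:
        redacted = PII_PATTERNS[name].sub(f"[REDACTED_{name.upper()}]", redacted)
    return redacted, found

clean, found = scan("Card on file: 4111 1111 1111 1111")
print(clean)  # Card on file: [REDACTED_CREDIT_CARD]
```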
HITL Approvals
Approval queue, notification system, dashboard UI, timeout handling, and audit logging. Plus the WebSocket layer for real-time updates.
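The core state machine of such an approval queue -- minus the dashboard, notifications, and WebSocket layer -- is small. A sketch with illustrative names:

```python
import time

class ApprovalQueue:
    """Pending requests wait for a human decision or expire after a timeout."""

    def __init__(self, timeout_s=300.0):
        self.timeout_s = timeout_s
        self.requests = {}  # id -> {"status", "created_at", "payload"}

    def submit(self, req_id, payload, now=None):
        now = time.monotonic() if now is None else now
        self.requests[req_id] = {"status": "pending", "created_at": now, "payload": payload}

    def decide(self, req_id, approve):
        req = self.requests[req_id]
        if req["status"] == "pending":  # ignore decisions on already-closed requests
            req["status"] = "approved" if approve else "denied"
        return req["status"]

    def expire_stale(self, now=None):
        now = time.monotonic() if now is None else now
        for req in self.requests.values():
            if req["status"] == "pending" and now - req["created_at"] > self.timeout_s:
                req["status"] = "expired"  # an audit log would record this transition

q = ApprovalQueue(timeout_s=300)
q.submit("r1", "drop prod table", now=0.0)
q.submit("r2", "send email", now=0.0)
print(q.decide("r1", approve=False))   # denied
q.expire_stale(now=301.0)
print(q.requests["r2"]["status"])      # expired
```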
Immutable Audit Trail
Append-only event log for every request, response, and governance decision. Tamper-proof storage with time-travel replay.
Dashboard & Observability
Cost dashboards, request explorer, governance policy editor, team management, API key console, usage analytics, alert configuration.
Managed Runners
Sandboxed execution environments with lifecycle management, 4-tier access control, network isolation, desktop streaming, and CI auto-fix.
Model Allowlists + Cost Caps
Per-org model restrictions with wildcard support. Admin UI to manage policies. Enforcement middleware in the proxy chain.
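Wildcard allowlists map neatly onto `fnmatch`-style patterns. A sketch of the enforcement check itself (the surrounding middleware wiring and admin UI are assumed, not shown):

```python
from fnmatch import fnmatch

def model_allowed(model, allowlist):
    """True if the model matches any allowlist pattern; '*' wildcards supported."""
    return any(fnmatch(model, pattern) for pattern in allowlist)

# Illustrative per-org policy.
ALLOWLIST = ["gpt-4*", "anthropic/claude-3*"]

print(model_allowed("gpt-4o-mini", ALLOWLIST))                  # True
print(model_allowed("anthropic/claude-3-5-sonnet", ALLOWLIST))  # True
print(model_allowed("mistral-large", ALLOWLIST))                # False
```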
28-42 engineer-weeks to build, test, and maintain these features yourself. Or swap one base URL and get them all for $49/mo.
Feature-by-feature comparison.
LiteLLM excels at provider coverage and self-hosted flexibility. Curate-Me adds the governance, observability, and execution layers.
| Feature | LiteLLM | Curate-Me |
|---|---|---|
| LLM Gateway Proxy | Self-hosted | Managed |
| Provider Count | 100+ | 17+ |
| Cost Tracking | ✅ | ✅ |
| Budget Enforcement | Basic | Real-time |
| Rate Limiting | ✅ | ✅ |
| PII Scanning | ❌ | ✅ |
| Model Allowlists | ❌ | ✅ |
| HITL Approvals | ❌ | ✅ |
| Managed Runners | ❌ | ✅ |
| Immutable Audit Trail | ❌ | ✅ |
| Dashboard | ❌ | 100+ pages |
| Self-Hosted Option | ✅ | ❌ |
| Free Tier | Unlimited (self-host) | 1K req/day (~30K/mo) |
| Starting Paid | Free (managed: custom) | $49/mo |
LiteLLM supports 100+ providers vs our 17+ built-in providers (OpenAI, Anthropic, Google, Groq, Mistral, xAI, and more). If provider breadth is your primary need, LiteLLM is the better choice. If governance and operations are, we are.
Total cost of ownership.
LiteLLM is free to self-host. But building the governance layer on top is not. Here’s what it actually costs.
$54,000 one-time build + $3,700/mo ongoing, assuming a mid-level engineer at $75/hr.
$49/mo. No build cost, no ops overhead, no maintenance.
167x cheaper in year one: $98,400 ($54,000 build + 12 × $3,700) vs. $588 (12 × $49). And that's before accounting for opportunity cost -- the weeks your engineers spend building governance instead of shipping product.
LiteLLM for the proxy.
Curate-Me for everything else.
Swap one base URL. Get PII scanning, HITL approvals, cost caps, model allowlists, managed runners, and a full dashboard -- instantly.
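Because the gateway speaks the OpenAI wire format, the swap really is just a base URL. A pure-stdlib sketch of building such a request -- the URL and key below are hypothetical placeholders, not real endpoints:

```python
import json
import urllib.request

# Hypothetical values for illustration; the real base URL and key come from your account.
BASE_URL = "https://gateway.example.com/v1"
API_KEY = "sk-demo"

def build_chat_request(base_url, api_key, model, user_msg):
    """Build an OpenAI-compatible chat completion request against any base URL."""
    payload = {"model": model, "messages": [{"role": "user", "content": user_msg}]}
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request(BASE_URL, API_KEY, "gpt-4o", "hello")
print(req.full_url)  # https://gateway.example.com/v1/chat/completions
```

Pointing an existing OpenAI SDK client at a different `base_url` achieves the same thing without changing any call sites.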