How It Works

One proxy. Full visibility. Zero code changes.

AI SpendOps sits between your applications and AI providers, capturing token usage from every API call, enforcing policy, and generating finance-ready reports.

Architecture

Your Application → AI SpendOps Proxy (Metadata + Policy) → AI Providers (OpenAI, Anthropic, etc.)

The proxy also feeds the Dashboard, Reports & Exports, and the Audit Trail.
01. Route through the proxy

Point your AI API calls at AI SpendOps with a single base URL change: no SDK swap, no code refactor. Our proxy captures token usage from every request and forwards it to your chosen provider with millisecond-level overhead.
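As a minimal sketch of the base URL swap, here is how a request might be pointed at the proxy in Python. The proxy URL below is a placeholder invented for the example, not the real endpoint:

```python
import json
import urllib.request

# Placeholder proxy base URL -- swap it in wherever your code currently
# uses the provider's base URL. The real endpoint may differ.
PROXY_BASE = "https://proxy.aispendops.example/v1"

body = json.dumps({
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello"}],
}).encode()

# Only the base URL changes -- path, headers, and body stay as they were.
req = urllib.request.Request(
    f"{PROXY_BASE}/chat/completions",
    data=body,
    headers={
        "Authorization": "Bearer sk-...",
        "Content-Type": "application/json",
    },
    method="POST",
)
print(req.full_url)
```

The request is built but not sent here; in production you would pass `req` to `urllib.request.urlopen` (or make the equivalent one-line `base_url` change in your provider SDK).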

02. Tag with dimensions

Attach metadata to every request: team, feature, environment, cost centre. Enforce required dimensions so no request goes untagged.

03. Enforce policies

Control which providers and models each API key can access. Block disallowed requests at the proxy before they reach the provider.
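Conceptually, the check behaves like the allowlist sketch below. The policy shape, key names, and model names are illustrative assumptions, not the actual implementation:

```python
# Illustrative policy table: each API key maps to the provider/model
# pairs it is allowed to call. Keys and models here are invented.
POLICIES = {
    "key-backend": {
        "openai": {"gpt-4o-mini"},
        "anthropic": {"claude-sonnet"},
    },
    "key-research": {
        "anthropic": {"claude-opus"},
    },
}

def is_allowed(api_key: str, provider: str, model: str) -> bool:
    """Return True only if the key's policy permits this provider/model."""
    allowed = POLICIES.get(api_key, {})
    return model in allowed.get(provider, set())

print(is_allowed("key-backend", "openai", "gpt-4o-mini"))  # allowed
print(is_allowed("key-backend", "openai", "gpt-4o"))       # blocked at the proxy
```

A disallowed request never leaves the proxy, so the provider sees no traffic and no tokens are spent.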

04. Report and export

See real-time dashboards with token-level cost breakdowns. Set budgets and alerts. Export chargeback reports. Give finance exactly what they need.

Dimension tagging

Every API request carries metadata dimensions that tell you who spent what, and why. Configure required dimensions per API key. Untagged requests are rejected before reaching the provider.

// Request header
X-ASO-Dims: {
  "team": "backend",
  "feature": "search",
  "environment": "production",
  "cost-centre": "CC-1001"
}
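Assuming the header carries a JSON object as shown above, a client-side helper might build and sanity-check it like this. The required-dimension set is illustrative, and the real enforcement happens at the proxy, which rejects any request missing a required dimension:

```python
import json

# Illustrative required-dimension set; in AI SpendOps this is configured
# per API key, and the proxy itself rejects untagged requests.
REQUIRED_DIMS = {"team", "feature", "environment", "cost-centre"}

def build_dims_header(dims: dict) -> str:
    """Serialise dimensions for the X-ASO-Dims header, failing early
    if any required dimension is missing."""
    missing = REQUIRED_DIMS - dims.keys()
    if missing:
        raise ValueError(f"untagged request: missing {sorted(missing)}")
    return json.dumps(dims)

header = build_dims_header({
    "team": "backend",
    "feature": "search",
    "environment": "production",
    "cost-centre": "CC-1001",
})
print(header)
```

Failing early in the client is optional; an incomplete header would be rejected by the proxy anyway, before the request reaches the provider.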

What you get out

Spend Reports

Token usage and cost by team, model, provider, dimension

Budget Reports

Actual vs budget with burn projections

Audit Logs

Every request, policy decision, key change

Chargeback Exports

CSV/JSON by department and cost centre
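To make the chargeback export concrete, here is a sketch that rolls usage rows up by cost centre and team into CSV. The column names and figures are invented for the example, not the platform's actual export schema:

```python
import csv
import io
from collections import defaultdict

# Invented sample usage rows -- real exports are generated from the
# proxy's recorded token usage and pricing.
usage = [
    {"cost_centre": "CC-1001", "team": "backend", "cost_usd": 12.40},
    {"cost_centre": "CC-1001", "team": "backend", "cost_usd": 3.10},
    {"cost_centre": "CC-2002", "team": "research", "cost_usd": 48.75},
]

# Aggregate spend per (cost centre, team) pair.
totals = defaultdict(float)
for row in usage:
    totals[(row["cost_centre"], row["team"])] += row["cost_usd"]

# Write the rollup as CSV, one row per cost centre/team.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["cost_centre", "team", "cost_usd"])
for (cc, team), total in sorted(totals.items()):
    writer.writerow([cc, team, f"{total:.2f}"])

print(buf.getvalue())
```

The same rollup serialises naturally to JSON for teams that prefer it over CSV.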

Fully managed SaaS

AI SpendOps is a fully managed platform. No infrastructure to deploy, no agents to install. Change a base URL, add a header, and you're live.

Ready to make AI spend auditable?

Be the first to know when we launch. Priority access and early-adopter discounts for waitlist members.

Join the Waitlist

Have a question? Get in touch