2 min setup · No credit card

How Verbal Works

One command gives you complete visibility into every AI dollar you spend — across tools, models, and providers.

$ npx @getverbal/cli init
Step 1

Install in 2 Minutes

One command installs the Verbal CLI. No configuration, no agent to run — it auto-detects the AI tools you already use (Claude, Cursor, Copilot, ChatGPT, and more) and starts tracking immediately.

  • Works with your existing tools — no API key changes
  • Detects Claude Code, Cursor, GitHub Copilot automatically
  • Cloud-synced with automatic PII redaction — opt out any time
Terminal

Detecting AI tools…

✓ Claude Code detected

✓ Cursor detected

✓ GitHub Copilot detected

Dashboard ready at http://localhost:4000

Step 2

Live data arriving…

  • Events today: 1
  • Tokens used: 2.2K
  • Cost so far: $0.04

First event: claude-sonnet-4-6 via Cursor · 3 min ago · 2,200 tokens · $0.04

Automatic Detection

Verbal hooks into your existing AI tools the moment you run the first command. No manual logging, no code changes — your first data point appears within seconds.

  • Hooks into MCP protocol for Claude Code sessions
  • Captures every API call via local proxy (optional)
  • Tracks Cursor, Copilot usage from editor metadata
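Conceptually, each captured call becomes a small usage event with a model, token counts, and a computed cost. A minimal TypeScript sketch of the idea (the event shape, field names, and per-million-token rates here are illustrative placeholders, not Verbal's actual schema or real pricing):

```ts
// Hypothetical usage event, as a local proxy might record one API call.
interface UsageEvent {
  tool: string;          // e.g. "cursor"
  model: string;         // e.g. "claude-sonnet-4-6"
  inputTokens: number;
  outputTokens: number;
}

// Per-million-token rates; placeholder values, not real provider pricing.
interface Rates {
  inputPerM: number;
  outputPerM: number;
}

// Cost of one event in USD, rounded to 4 decimal places.
function eventCostUsd(e: UsageEvent, r: Rates): number {
  const raw =
    (e.inputTokens / 1_000_000) * r.inputPerM +
    (e.outputTokens / 1_000_000) * r.outputPerM;
  return Math.round(raw * 10_000) / 10_000;
}
```

Everything downstream (the dashboard, the spend chart, the ROI view) can be derived by aggregating events of roughly this shape.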
Step 3

Unified Spend View

See every dollar across OpenAI, Anthropic, and Google in one chart. Spot weekend dips, model switches, and cost trends as they happen.

12+ providers · 47% avg savings · 500+ teams using it

[Chart: Daily AI Spend, last 14 days]
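Under the hood, a unified spend view is just an aggregation: group usage events by day and sum their cost across providers. A rough TypeScript sketch with illustrative field names (not Verbal's internal data model):

```ts
// Simplified spend event; real events would carry more metadata.
interface SpendEvent {
  provider: string; // "openai" | "anthropic" | "google" | ...
  day: string;      // ISO date, e.g. "2025-01-06"
  costUsd: number;
}

// Total spend per day across all providers, rounded to cents.
function dailySpend(events: SpendEvent[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const e of events) {
    totals.set(e.day, (totals.get(e.day) ?? 0) + e.costUsd);
  }
  for (const [day, v] of totals) {
    totals.set(day, Math.round(v * 100) / 100);
  }
  return totals;
}
```

Grouping by provider or model instead of day gives the per-provider breakdowns the same way.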

Step 4

Prompt Analysis — claude-sonnet-4-6

Overall Score: 62 / 100

  • Clarity: 72
  • Specificity: 48
  • Structure: 58
  • Efficiency: 65
  • Context: 68

Suggested Rewrite

Refactor our Next.js API route error handling (TypeScript, App Router). Currently every route has its own try/catch returning ad-hoc error JSON. I want: (1) a centralized error handler middleware, (2) custom error classes with typed codes like NOT_FOUND and VALIDATION_ERROR, (3) consistent JSON error shape { error: { code, message } }. Here's an example of the current pattern:

```ts
export async function GET() {
  try {
    // ...
  } catch (e) {
    return Response.json({ error: "Server error" }, { status: 500 });
  }
}
```

Keep backward compatibility with the existing { error: string } shape our frontend expects during migration.

Est. token reduction: ~340 tokens

Prompt Coaching

Every session gets a prompt quality score across 5 dimensions. Verbal spots ambiguous instructions, missing context, and wasted tokens — then shows you exactly how to fix them.

  • Scored across clarity, specificity, structure, efficiency, and context
  • Rewrite suggestions that cut token usage by 20–40%
  • Trend tracking: watch your scores improve week over week
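For illustration only: if the overall score were a plain unweighted mean of the five dimension scores (an assumption on our part; the actual weighting isn't documented here), the sample panel above works out to roughly the same number:

```ts
// Unweighted mean of dimension scores, to one decimal place.
// Assumption for illustration; the real scoring may weight dimensions.
function overallScore(dimensions: number[]): number {
  const sum = dimensions.reduce((a, b) => a + b, 0);
  return Math.round((sum / dimensions.length) * 10) / 10;
}

// Clarity 72, Specificity 48, Structure 58, Efficiency 65, Context 68
// comes out near the 62/100 shown in the sample panel.
overallScore([72, 48, 58, 65, 68]); // 62.2
```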
Step 5

Track Your ROI

Verbal tracks the ROI of your AI spend week over week. As you right-size models and improve your prompts, watch your cost-per-token drop while output quality holds.

  • Model right-sizing: switch to cheaper models for simple tasks
  • Prompt efficiency: same output for fewer tokens
  • Weekly spend trend against team benchmarks
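The week-over-week math is simple: each week's saving is the baseline week's spend minus that week's spend. A quick TypeScript sketch using the sample figures from the progress table:

```ts
// Savings per week relative to the first (baseline) week's spend.
function savingsVsBaseline(weeklySpend: number[]): number[] {
  const baseline = weeklySpend[0];
  return weeklySpend.slice(1).map((spend) => baseline - spend);
}

// Weekly spends $1,480 (baseline), $1,320, $1,180, $1,042:
savingsVsBaseline([1480, 1320, 1180, 1042]); // [160, 300, 438]
```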

4-Week Optimization Progress

  • Week 1: 42 hours saved · $1,480 spend · baseline
  • Week 2: 48 hours saved · $1,320 spend · -$160 vs. baseline
  • Week 3: 52 hours saved · $1,180 spend · -$300 vs. baseline
  • Week 4: 58 hours saved · $1,042 spend · -$438 vs. baseline

Total saved over 4 weeks: $898, with weekly spend down $438 from the baseline.

Start Tracking in 2 Minutes

No signup. No credit card. Just one command and you have full visibility.

$ npx @getverbal/cli init

Or create a free account to sync across devices.