Founder's Note
Why We Built Verbal
Posted February 2026
A few months ago I was staring at my credit card statement trying to figure out why my monthly expenses had quietly crept up by $300. After twenty minutes of digging, I found the culprit: AI subscriptions. ChatGPT Pro. Claude Pro. Copilot. Cursor. An OpenAI API bill I'd forgotten about. Perplexity. A niche tool I'd signed up for and never cancelled.
None of them were obviously expensive. That's the thing. It's $20 here, $25 there, a few dollars in API overages. Each one felt justified in isolation. Together, they added up to a meaningful subscription bill — and I had no idea what I was actually getting from any of them.
The Explosion of AI Tools
This is not a niche problem. The number of AI tools people actually pay for has multiplied several times over in the past two years. Developers are running agents, researchers are burning through API credits, and teams are adding Copilot to everyone's IDE without thinking about what it costs at scale.
Meanwhile, the AI providers themselves give you almost no visibility. OpenAI shows you a usage graph. Anthropic shows you token counts. But there is no unified view. No way to compare what you're spending on ChatGPT versus Claude versus your API bill. No way to see if the expensive model is actually producing better results than the cheap one.
The Mint.com Analogy
When Mint launched in 2007, it did something simple: it showed you all your bank accounts in one place. That's it. But seeing everything together — automatically, without manual work — changed how people thought about money. You cannot optimize what you cannot see.
That's the idea behind Verbal. Not another AI tool. A way to see what all your AI tools are costing you, automatically, in one place — and then actually understand whether they're worth it.
What Makes Verbal Different
Most tools stop at spend tracking. Log your API calls, see a bar chart, done. That is useful but not enough. The more interesting question is not "how much did I spend?" but "how good were the prompts I was spending it on?"
Verbal scores your prompts. Not with a single vague quality number, but on five specific dimensions: clarity, specificity, structure, efficiency, and context. Each score tells you whether a prompt was underspecified, over-verbose, or missing the context that would have gotten you a better answer the first time. That matters because a poorly written prompt does not just produce a worse answer; it costs more tokens to correct.
We also track ROI. Not in an abstract sense — in actual dollar terms. Sessions that produced useful output, sessions that got abandoned after three retries, sessions where the model was overkill for the task. The goal is to give you the same visibility into AI spend that a good finance team gives a company: not just what you spent, but whether it was worth it.
Try It
Verbal is free to start. Connect your providers, import your history, and you will know in minutes what you have been spending and where the waste is.