The 2AM Problem

You're in Claude Code. You've typed "make it better" for the 11th time. Claude rewrites your entire module. Again. 2,000 output tokens.

Multiply that by 30 days. Check your bill. $47.

Your reaction: "That can't be right."

Narrator: It was right.

The Dashboard Nobody Asked For

I could have just spent less. But I'm a developer. So instead I built a pixel bead board. Think Perler beads meets NES retro gaming meets existential dread about AI costs.

Every token becomes a coloured pixel. Claude = coral. GPT = green. Gemini = blue. Light pixels = input tokens, dark pixels = output (3-5x more expensive on the Claude API).
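That colour mapping can be sketched as a small lookup table. This is my own illustration of the idea, not the project's actual palette or code; the colour names are made up:

```python
# Map (provider, token type) to a pixel colour, per the scheme above:
# one hue per provider, light shade for input, dark shade for output.
# Colour names are illustrative, not the tool's real palette.
PALETTE = {
    ("claude", "input"):  "light-coral",
    ("claude", "output"): "dark-coral",
    ("gpt", "input"):     "light-green",
    ("gpt", "output"):    "dark-green",
    ("gemini", "input"):  "light-blue",
    ("gemini", "output"): "dark-blue",
}

def pixel_colour(provider: str, kind: str) -> str:
    """One token becomes one pixel; output tokens get the darker shade."""
    return PALETTE[(provider.lower(), kind)]
```

The point of the light/dark split is that the expensive tokens are visually louder, so an output-heavy board looks dark at a glance.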

You can pick board shapes: square, cat head, heart, star, mushroom. Fill patterns: spiral, rain, snake. When full, it saves as pixel art. Each cat costs about $0.80 in API fees. Beautiful and depressing.

The 80% Problem

My board was 80% dark red. That meant 80% output tokens, billed at $15/MTok versus $3/MTok for input at Claude Sonnet rates.
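The arithmetic behind why that 80% share hurts, at the $3/$15 rates quoted above, with an illustrative 3M-token month:

```python
# Worked example: an 80% output-token share at $3/MTok input, $15/MTok output.
INPUT_RATE, OUTPUT_RATE = 3.00, 15.00  # USD per million tokens

def monthly_cost(total_tokens: int, output_share: float) -> tuple[float, float]:
    """Return (input_cost, output_cost) in USD for the month."""
    out_tok = total_tokens * output_share
    in_tok = total_tokens - out_tok
    return in_tok / 1e6 * INPUT_RATE, out_tok / 1e6 * OUTPUT_RATE

in_cost, out_cost = monthly_cost(3_000_000, 0.80)
# 600k input tokens -> $1.80; 2.4M output tokens -> $36.00.
# Output is ~95% of the bill despite being 80% of the tokens.
```

With a 5x price gap, an 80/20 token split becomes roughly a 95/5 cost split. The dark pixels aren't just most of the board; they're nearly all of the bill.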

This tracks with what devs are seeing across AI billing: unpredictable, usage-based costs where finance teams face delayed visibility and surprise overages. No built-in caps. No alerts. Just vibes and variance.

Started asking for shorter answers. Cost dropped 30%. The dashboard paid for itself. Ironically.

Why This Matters

Claude API token usage tracking isn't just about saving money. It's about understanding where costs actually come from. Most Claude API cost monitoring tools show you totals. This shows you patterns.

The "make it better" loop? That's where your money goes. Each iteration compounds. Claude generates 2,000 tokens. You tweak the prompt. Claude generates 2,500 tokens. Repeat until 2AM.

Output tokens cost 3-5x more than input across Anthropic's API pricing tiers. That ratio kills you in iterative workflows. Claude Code, Cursor agents, any tool running multi-turn conversations: they all hit this.
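A rough model of why iteration compounds: each round re-sends the growing conversation as input and generates a reply that joins the context for the next turn. The starting sizes and growth rate here are assumptions for illustration, not measurements:

```python
# Hedged model of the "make it better" loop. Each round: pay input rates
# on the accumulated context, output rates on the new reply, then fold
# the reply into the context. All token counts are illustrative.
INPUT_RATE, OUTPUT_RATE = 3.00, 15.00  # USD per million tokens

def loop_cost(rounds: int, first_output: int = 2000, growth: int = 500) -> float:
    cost, context = 0.0, 500  # assume a 500-token starting prompt
    for i in range(rounds):
        output = first_output + i * growth
        cost += context / 1e6 * INPUT_RATE + output / 1e6 * OUTPUT_RATE
        context += output  # the reply becomes input for the next turn
    return cost
```

Under this model, ten rounds cost well over ten times the first round: later iterations pay input rates on every earlier reply, on top of ever-longer outputs.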

How It Works

Setup takes 30 seconds. Claude Code: 3 env vars. OpenClaw: 1 command. Any agent: 1 curl.

Privacy-first design: only token counts, never prompts. Open source. Works with Claude API, OpenAI, Gemini.
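The privacy-first part can be sketched in a few lines. Assuming a response shaped like the Anthropic Messages API's (a `usage` object carrying `input_tokens` and `output_tokens`), the recorder keeps only the counts; this is my illustration of the principle, not the tool's actual code:

```python
# Sketch of "only token counts, never prompts": strip an API response
# down to its usage numbers before anything is logged or sent anywhere.
# Field names follow the Anthropic Messages API response shape.
def extract_counts(response: dict) -> dict:
    usage = response.get("usage", {})
    return {
        "input_tokens": usage.get("input_tokens", 0),
        "output_tokens": usage.get("output_tokens", 0),
        # Deliberately no 'content' or 'messages' fields:
        # prompt text never leaves the machine.
    }
```

Everything downstream (pixels, patterns, totals) needs only these two integers.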

The pixel bead visualisation isn't just aesthetic. It's functional. You spot cost patterns instantly. Dark clusters = expensive output. Light patches = efficient input. Spiral fill = chronological usage. Rain fill = cost-weighted distribution.

The Real Cost of "Make It Better"

Late-night coding sessions drive the highest token costs. You're tired. Prompts get vague. "Make it better." "Fix this." "One more thing."

Claude tries to help. Generates comprehensive responses. 1,500 tokens explaining three ways to refactor your function. You only needed one.

That's $0.0225 per response at Claude Sonnet output rates. Doesn't sound like much. But 20 iterations? $0.45. Every night for a month? $13.50. Just on vague prompts.
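The arithmetic above, spelled out at the $15/MTok output rate:

```python
# 1,500 output tokens per vague-prompt response, priced at $15/MTok.
OUTPUT_RATE = 15.00  # USD per million output tokens

per_response = 1_500 / 1e6 * OUTPUT_RATE   # $0.0225 per response
per_session  = per_response * 20           # $0.45 for a 20-iteration night
per_month    = per_session * 30            # $13.50 over 30 nights
```

Small numbers, multiplied nightly, are exactly the kind of cost a totals-only dashboard hides.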

The cat head dashboard makes this visible. Each dark pixel is a mini-invoice you can't dispute because you asked for it.

Why Not Just Use the Anthropic Console?

The Claude API billing console shows totals. This shows behaviour.

You need to see: which conversations cost the most? Which time of day burns tokens? Which prompt patterns generate expensive outputs?

The Anthropic dashboard can't tell you that 80% of your bill comes from output tokens in late-night sessions. This can.

It's a Claude token cost calculator meets behaviour tracker meets pixel art generator. The trifecta nobody asked for but everyone needs.

Setup

ohmytoken.dev | GitHub

Three environment variables. One command. Cat head optional but recommended.

Works with any Claude API model tier: Opus, Sonnet, Haiku. Tracks both Claude Code sessions and direct API calls. Open source Claude API usage monitoring.

No telemetry. No prompt logging. Just token counts and pixel art.

The Bigger Picture

This reflects a 2026 trend: developers building custom dashboards because vendor tools don't show what matters. Claude's console shows spend. Not behaviour. Not patterns. Not the 2AM "make it better" loop.

Community sentiment echoes frustration with black box pricing. DIY solutions emerge. Pixel art dashboards. CSV parsers. Homebrew analytics.

The need for pricing transparency standards is real. Make costs machine-readable. Let developers track usage in ways that match their workflows.

Until then: pixel beads and cat heads.

Lessons Learned

  1. Output tokens cost 3-5x more than input on Claude API
  2. Iterative prompting compounds costs fast
  3. Late-night sessions = highest token usage
  4. Vague prompts generate expensive responses
  5. Visualisation changes behaviour

The dashboard cost $0.80 to build (one cat head of API calls). Saved $14/month in the first week. ROI: 1,650%.

Your move, finance team.

Try It

Cat head optional. Cost awareness mandatory.

ohmytoken.dev | GitHub

Written by TheVibeish Editorial