Changelog / added
New Component: Prompt Token Counter
[Component preview: Token Metrics — Total Count: 1,242 · Cost Est.: $0.0034 · 842 / 400]
Why Token Counting Matters
Every LLM has a context window — the maximum number of tokens it can process in a single request. Go over the limit and your request will either be truncated or rejected entirely.
Our new Prompt Token Counter gives you instant visibility into your token usage as you type.
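To make the idea concrete, here is a minimal sketch of rough token estimation. It uses the common ~4-characters-per-token rule of thumb for English text; this heuristic is an assumption for illustration, not the counter's actual model-specific tokenizer.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 chars/token rule of thumb.

    Real tokenizers are model-specific; this is only a quick approximation
    of the kind of count the component surfaces as you type.
    """
    if not text:
        return 0
    return max(1, round(len(text) / 4))


print(estimate_tokens("Summarize the following document in three bullets."))
```

In practice the component would swap this heuristic for the target model's real tokenizer, since estimates can drift significantly on code, non-English text, or unusual formatting.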
Features
Real-Time Counting
As you type or paste text into the input area, the counter updates instantly. No need to submit or click anything.
Model-Aware Limits
Select your target model and the counter automatically adjusts the maximum threshold:
| Model | Context Window |
|---|---|
| GPT-4o | 128,000 tokens |
| Claude 3.5 Sonnet | 200,000 tokens |
| Gemini 2.0 Flash | 1,000,000 tokens |
| Llama 3.1 70B | 128,000 tokens |
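The table above can be expressed as a simple lookup, which is roughly what "model-aware" means here: the counter checks your current count against the selected model's window. The function name and structure below are illustrative, not the component's actual API.

```python
# Context-window limits from the table above.
CONTEXT_WINDOWS = {
    "GPT-4o": 128_000,
    "Claude 3.5 Sonnet": 200_000,
    "Gemini 2.0 Flash": 1_000_000,
    "Llama 3.1 70B": 128_000,
}


def tokens_remaining(model: str, used: int) -> int:
    """How many tokens are left before hitting the model's context window."""
    return CONTEXT_WINDOWS[model] - used


# Using the sample count from the preview above:
print(tokens_remaining("GPT-4o", 1_242))
```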
Visual Progress Bar
A color-coded progress bar shows your usage:
- Green — Under 50% of the limit
- Yellow — 50–80% of the limit
- Red — Over 80% of the limit, approaching the maximum
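The color bands above map directly to a usage fraction. A minimal sketch of that mapping (the function name is hypothetical):

```python
def usage_color(used: int, limit: int) -> str:
    """Map token usage to the progress bar's color bands.

    Green under 50% of the limit, yellow from 50% to 80%, red above 80%.
    """
    frac = used / limit
    if frac < 0.5:
        return "green"
    if frac <= 0.8:
        return "yellow"
    return "red"


print(usage_color(1_242, 128_000))
```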
Copy-Friendly
One-click copy of the token count for logging, documentation, or sharing with your team.
Pro tip: Use this alongside the Prompt Template Editor to ensure your templates stay within budget across all models.