About AI Token Counter
Token counts drive LLM API pricing, context-window limits, and the quality of long-prompt completions. Knowing roughly how many tokens your prompt costs is useful before you hit Send. The AI Token Counter estimates token counts for the major model families — GPT-4 (cl100k), GPT-4o (o200k), Claude, Gemini and Llama — using length and word-shape heuristics calibrated against each tokenizer. Counts are estimates within roughly ±10-15% for English; non-English text and code-heavy prompts have wider error bands, which the page surfaces. No real tokenizer is bundled — that would add 150-250 KB for a tool that's meant to be a quick sanity check, not a billing engine.
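A length-and-word-shape heuristic like the one described can be sketched as below. The chars-per-token ratios and the word-count floor are illustrative assumptions, not the page's actual calibration:

```javascript
// Rough chars-per-token ratios per encoding family (illustrative values).
const CHARS_PER_TOKEN = {
  cl100k: 4.0, // GPT-4 / GPT-3.5 Turbo
  o200k: 4.4,  // GPT-4o — packs slightly more text per token
};

function estimateTokens(text, encoding = "cl100k") {
  const ratio = CHARS_PER_TOKEN[encoding] ?? 4.0;
  // Word-based floor: BPE rarely merges across spaces, so each
  // whitespace-separated word costs at least one token.
  const words = text.trim().split(/\s+/).filter(Boolean).length;
  const byChars = text.length / ratio;
  return Math.max(words, Math.round(byChars));
}
```

Taking the maximum of the two signals keeps the estimate sane at both extremes: long runs of characters without spaces are counted by length, while many short words are counted per word.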
- No uploads
- Browser-only
- Works offline
- 100% free
How it works
1. Paste your text
   Anything from a one-line system prompt to a multi-page document. The counter updates as you type.
2. Compare across models
   Each model card shows its estimate, the encoding family it uses, and the accuracy band. Pricing is shown for closed-weight models.
3. Use for planning
   If you're approaching a context-window limit, use the count to chunk your prompt or trim. For exact billing, run through the provider's API.
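The planning step above can be sketched as a simple paragraph-boundary chunker. The flat 4-characters-per-token ratio and the `chunkByBudget` helper are illustrative assumptions, not the tool's code:

```javascript
// Crude estimate: assume ~4 characters per token (an assumption, not exact).
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Greedily pack paragraphs into chunks that each fit a token budget.
function chunkByBudget(text, maxTokens) {
  const chunks = [];
  let current = "";
  for (const para of text.split(/\n{2,}/)) {
    const candidate = current ? current + "\n\n" + para : para;
    if (estimateTokens(candidate) > maxTokens && current) {
      chunks.push(current); // budget exceeded — start a new chunk
      current = para;
    } else {
      current = candidate;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```

Note this sketch never splits inside a paragraph, so a single paragraph larger than the budget still comes out as one oversized chunk; a production version would fall back to sentence or line boundaries.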
Related tools
- Format, clean and JSON-escape your prompts — strip invisible characters and fix smart quotes.
- Count words, characters, sentences — analyse reading time.
- Normalise whitespace, line endings, smart quotes and invisibles.
- Render GitHub-flavoured markdown live with safe HTML output.
Frequently asked questions
Are my files uploaded to a server?
No. Every tool on SnapToolz runs entirely inside your browser using JavaScript and WebAssembly. Your file is read locally, processed in memory, and the result is offered as a download. Nothing is sent to a server — there isn't one to send to.
How accurate is the estimate?
For English prose, the estimate lands within ~10% of the real tokenizer's count; for code-heavy or punctuation-heavy text, within ~15%. For non-Latin scripts (Chinese, Arabic, Hindi) accuracy drops, because BPE tokenizers fragment those scripts at very different rates depending on the encoding. The page applies a non-ASCII penalty to compensate, but the result is still approximate.
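The penalty could look something like this sketch; the `nonAsciiAdjusted` name and the 1.5x inflation factor are assumptions for illustration, not the page's calibrated values:

```javascript
// Inflate a base token estimate by the share of non-ASCII characters,
// since BPE encodings fragment non-Latin scripts into more tokens.
// The 1.5 factor is an illustrative assumption, not a calibrated constant.
function nonAsciiAdjusted(baseEstimate, text) {
  if (text.length === 0) return baseEstimate;
  const chars = [...text]; // spread iterates code points, not UTF-16 units
  const nonAscii = chars.filter((ch) => ch.codePointAt(0) > 127).length;
  const share = nonAscii / chars.length;
  return Math.round(baseEstimate * (1 + 1.5 * share));
}
```

Pure ASCII text passes through unchanged; fully non-Latin text gets the maximum inflation, with mixed text scaled in between.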
Why not bundle a real tokenizer?
gpt-tokenizer and js-tiktoken are 150-250 KB each. For a tool that exists to estimate counts, that's a steep tax. The heuristic gets within ±10% for the common case at 0 KB extra.
What's the difference between cl100k_base and o200k_base?
cl100k_base is the encoding OpenAI used through GPT-4 and GPT-3.5 Turbo. o200k_base is the newer encoding used by GPT-4o and GPT-4o-mini — it's more efficient for non-English text and code, so o200k counts tend to run 5-15% lower than cl100k counts for the same input.
Are the prices accurate?
They're based on public 2026Q1 pricing for input/output tokens of the flagship tier. Actual cost depends on the specific model, tier and region. Use the figures as relative estimates, not invoices.
Does it work offline?
Yes. SnapToolz is a Progressive Web App. After your first visit, the app is cached on your device and every tool keeps working without an internet connection.
Is SnapToolz free?
Yes — every tool is 100% free with no sign-up, no watermark, no hidden tier. The whole platform is open source and we have no plan to gate features.