tiktoken is great if you are writing Python code that needs exact GPT token counts. It's overkill if you just need to check whether your prompt fits in the context window, or estimate what an API call will cost. For one-off token counting, you don't need to install anything.
tiktoken is OpenAI's official tokenizer library. Developers install it via pip install tiktoken and use it like this:
```python
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4o")
tokens = enc.encode("Hello, world!")
print(len(tokens))  # 4
```
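One wrinkle worth knowing: `encoding_for_model` raises a `KeyError` for model names tiktoken doesn't recognize, such as a release newer than your installed version. A defensive sketch (the fallback choice of `o200k_base`, the vocabulary recent GPT models use, is an assumption you should match to your model family):

```python
try:
    import tiktoken
except ImportError:  # environment without the package
    tiktoken = None

def get_encoder(model: str):
    """Return a tokenizer for `model`, with a fallback for unknown names."""
    if tiktoken is None:
        raise RuntimeError("tiktoken is not installed (pip install tiktoken)")
    try:
        return tiktoken.encoding_for_model(model)
    except KeyError:
        # Unrecognized model name: fall back to a recent base vocabulary.
        return tiktoken.get_encoding("o200k_base")
```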
This is the right tool when:

- You're writing Python code that needs exact GPT token counts at runtime
- You're chunking or truncating text programmatically against a hard token limit
- The numbers have to match what OpenAI's tokenizer actually produces

It's the wrong tool when:

- You just want to know whether a prompt fits before you paste it into a chat
- You need a rough cost estimate for a one-off project
- You don't have (or want) a Python environment handy
For quick counting, an online token counter takes 5 seconds:
No install. No Python. No API key. Works on any device. Counts work for GPT, Claude, Gemini, Llama, and DeepSeek.
Count tokens in seconds. No install, no signup.

Open Token Counter →

Browser token counters typically use a word-based approximation: roughly 1 token per 0.75 words in English. tiktoken gives the exact count using OpenAI's actual tokenizer vocabulary. For most English text, the two agree within 5-10%.
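The word-based approximation is simple enough to sketch in a few lines. This is a minimal illustration of the heuristic described above, not the exact formula any particular browser tool uses:

```python
import math

def estimate_tokens(text: str, words_per_token: float = 0.75) -> int:
    """Rough token estimate: ~1 token per 0.75 English words."""
    words = len(text.split())
    return math.ceil(words / words_per_token)

print(estimate_tokens("The quick brown fox jumps over the lazy dog"))  # 12
```

As the table below shows, this kind of estimate tracks plain English well and drifts most on code and math, where tokens stop lining up with words.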
Where they diverge:
| Content type | Browser estimate | Actual tiktoken | Variance |
|---|---|---|---|
| Plain English text | 1,000 | 985 | 1.5% |
| Technical/jargon | 1,000 | 1,055 | 5.5% |
| Code (Python) | 1,000 | 1,180 | 18% |
| Mixed English/Spanish | 1,000 | 1,025 | 2.5% |
| JSON output | 1,000 | 1,090 | 9% |
| Math equations | 1,000 | 1,250 | 25% |
For chat prompts, summarization, and content generation, the browser estimate is accurate enough for budgeting. For code-heavy or symbol-heavy content, run it through the actual tokenizer if precision matters.
If you're writing production code that does any of these:

- Splitting documents into chunks that must fit a model's context window
- Truncating or validating user input against a hard token limit
- Metering usage where your counts have to match the provider's bill

...then install tiktoken (or the equivalent tokenizer for your chosen model) and use it inside your code. The browser counter is for humans, not for production systems.
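A sketch of what that production check looks like: a helper that reserves headroom for the model's reply before deciding a prompt fits. The window size, reply budget, and the word-based stand-in counter are all illustrative; in real code you'd pass an exact counter built on tiktoken's `encode`:

```python
import math

def estimate_tokens(text: str) -> int:
    # Stand-in counter (~1 token per 0.75 words). For exact counts, use:
    # len(tiktoken.encoding_for_model("gpt-4o").encode(text))
    return math.ceil(len(text.split()) / 0.75)

def fits_in_context(prompt: str, context_window: int = 128_000,
                    reply_budget: int = 4_096,
                    count=estimate_tokens) -> bool:
    """True if the prompt plus a reserved reply budget fits the window."""
    return count(prompt) + reply_budget <= context_window

print(fits_in_context("Hello, world!"))  # True
```

Injecting the counter as a parameter keeps the fit logic testable and lets you swap the heuristic for the exact tokenizer without touching callers.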
If you need the exact official tokenizer for a specific model:

- GPT models (OpenAI): tiktoken
- Claude (Anthropic): the token counting endpoint in the Anthropic API
- Gemini (Google): the countTokens method in the Gemini API
- Llama and DeepSeek: the tokenizer shipped with the model, via Hugging Face transformers
Use tiktoken for production code that needs exact GPT counts. Use the browser tool for everything else: quick estimates, sizing prompts, checking context window fit, comparing models, budgeting projects, sharing token math with non-developers. The 5-10% accuracy gap doesn't matter when the question is "will this fit, and roughly what will it cost?"
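"Roughly what will it cost" is one multiplication once you have a count. The per-million-token prices below are placeholders, not current OpenAI rates; substitute the numbers from your provider's pricing page:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_m: float = 2.50,
                  output_price_per_m: float = 10.00) -> float:
    """Dollar cost given placeholder per-million-token prices."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

print(round(estimate_cost(10_000, 2_000), 4))  # 0.045
```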
Skip the install. Count tokens in your browser.
Open Token Counter →