Pricing
You pay for what you translate
No tiers. No monthly minimums. No enterprise sales calls. Translation is billed per token; storage is billed per entry. Cached results cost nothing to serve.
Translations run on frontier LLMs via Vercel AI Gateway. Prefer your own provider? See BYOK below.
10,000 cached translations cost roughly $1/month in storage. Most apps spend more on their database than on their auto18n cache.
What things actually cost
Small app
5,000 translations/month, 10 languages
~$2-5/mo
Most of your strings are UI labels — short, cached quickly. After the first run, 90%+ are cache hits.
Content platform
50,000 translations/month, user-generated content
~$15-40/mo
UGC is more varied, so the cache hit rate is lower. The bulk endpoint saves 10%. Storage grows but stays cheap.
Database backfill
1M strings, one-time bulk job
~$50-120 one-time
The bulk discount applies. Subsequent re-translations of the same content hit the cache and are free.
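The scenarios above reduce to simple arithmetic: uncached translations times a per-translation rate, plus stored entries times a per-entry storage rate. A toy estimator, where both rates are placeholders for illustration (the real per-token prices are on this page and in your dashboard, not in this sketch):

```typescript
// Toy cost estimator. Both rates are PLACEHOLDERS, not auto18n's real prices.
const COST_PER_TRANSLATION = 0.002;  // assumed $ per uncached translation
const STORAGE_PER_ENTRY_MO = 0.0001; // assumed: 10,000 entries ~ $1/month

function estimateMonthly(
  translations: number, // translation requests this month
  cacheHitRate: number, // fraction served from cache (free)
  cachedEntries: number // entries sitting in cache storage
): number {
  const billable = translations * (1 - cacheHitRate);
  return billable * COST_PER_TRANSLATION + cachedEntries * STORAGE_PER_ENTRY_MO;
}

// Small-app scenario: 5,000 requests/month, 90% cache hits, 10,000 stored entries
const smallApp = estimateMonthly(5000, 0.9, 10_000); // ~ $2/month at these placeholder rates
```

The shape of the formula is the point: once the cache warms up, the first term shrinks toward zero and the bill is dominated by cheap storage.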
By default, translations run on our platform: one bill, predictable per-token rates. Prefer your own OpenAI, Anthropic, xAI, or Groq account? Route requests through us to your provider instead: you pay the model bill directly, and we charge a small platform fee on token volume.
Use BYOK to lock in a specific model, hit your own rate limits, or apply provider credits. Integration takes one extra header.
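"One extra header" might look like the sketch below. The endpoint path and the `X-Provider-Key` header name are placeholders for illustration; the real names are in the BYOK guide:

```typescript
// Sketch of a BYOK request setup. "X-Provider-Key" and the endpoint path
// are ASSUMED placeholder names, not the documented API -- see the BYOK guide.
function byokHeaders(apiKey: string, providerKey: string): Record<string, string> {
  return {
    Authorization: `Bearer ${apiKey}`, // your auto18n key, same as non-BYOK calls
    "X-Provider-Key": providerKey,     // the one extra header: your provider's key
    "Content-Type": "application/json",
  };
}

const headers = byokHeaders("sk_live_yourkey", "sk-ant-yourproviderkey");
// fetch("https://api.example.com/v1/translate", { method: "POST", headers, body: ... })
```

Everything else about the request stays the same; only the extra header tells the platform to bill tokens against your provider account.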
See the BYOK guide.

Test keys (sk_test_...) hit the real model and return production-quality translations, but usage is tracked separately and not billed. Build your whole integration before spending a cent.
No credit card needed. Start with a test key.
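Because test and live keys differ only by prefix, switching is a one-line config change. A minimal sketch, assuming the `sk_test_` prefix described above (the helper name is illustrative, not part of the SDK):

```typescript
// Illustrative helper: test keys (sk_test_...) return real translations
// but are tracked separately and never billed.
function isTestKey(key: string): boolean {
  return key.startsWith("sk_test_");
}

// Typical pattern: keep the key in an env var and flip it at deploy time.
const key = process.env.AUTO18N_KEY ?? "sk_test_local_dev";
const billable = !isTestKey(key); // false during development
```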