Pricing

You pay for what you translate

No tiers. No monthly minimums. No enterprise sales calls. Translation is billed per token; storage is billed per cached entry. Cached results cost nothing to serve.

Translation
Billed per token. Only charged on cache misses.
Input tokens: $0.22 / 1M
Output tokens: $0.90 / 1M
Bulk endpoint discount: 10% off
BYOK platform fee: $0.10 / 1M tokens

Translations run on frontier LLMs via Vercel AI Gateway. Prefer your own provider? See BYOK below.
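To see how the per-token rates combine, here is a small estimator. The rates and the 10% bulk discount come from the table above; the function itself is illustrative, not part of any official SDK.

```python
# Rough cost estimator for per-token translation billing.
INPUT_RATE = 0.22 / 1_000_000   # $ per input token
OUTPUT_RATE = 0.90 / 1_000_000  # $ per output token
BULK_DISCOUNT = 0.10            # 10% off via the bulk endpoint

def translation_cost(input_tokens, output_tokens, bulk=False, cache_hit_rate=0.0):
    """Estimate spend in dollars. Cache hits are free, so only misses are billed."""
    billable = 1.0 - cache_hit_rate
    cost = (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) * billable
    if bulk:
        cost *= 1.0 - BULK_DISCOUNT
    return round(cost, 2)

# 1M input + 1M output tokens, no cache, standard endpoint:
print(translation_cost(1_000_000, 1_000_000))  # 1.12
```

With a 90% cache hit rate, the same volume costs about a tenth as much, since only misses are billed.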

Cache storage
Billed monthly per cached translation.
Per cached entry: $0.0001 / month
Per MB stored: $0.05 / month

10,000 cached translations cost roughly $1/month in storage. Most apps spend more on their database than on the auto18n cache.
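The $1/month figure can be sanity-checked directly. The rates come from the table above; the average entry size is an assumption (a short UI string plus metadata).

```python
# Sanity-checking storage cost for 10,000 cached translations.
ENTRY_RATE = 0.0001  # $ per cached entry per month
MB_RATE = 0.05       # $ per MB stored per month

entries = 10_000
avg_entry_bytes = 200  # assumed average size per cached entry

entry_cost = entries * ENTRY_RATE
size_cost = (entries * avg_entry_bytes / 1_000_000) * MB_RATE
print(round(entry_cost, 2), round(entry_cost + size_cost, 2))  # 1.0 1.1
```

Even with the per-MB charge included, the total stays close to $1/month at typical UI-string sizes.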

What things actually cost

Small app

5,000 translations/month, 10 languages

~$2-5/mo

Most of your strings are UI labels — short, cached quickly. After the first run, 90%+ are cache hits.

Content platform

50,000 translations/month, user-generated content

~$15-40/mo

UGC is more varied, so cache hit rate is lower. Bulk endpoint saves 10%. Storage grows but stays cheap.

Database backfill

1M strings, one-time bulk job

~$50-120 one-time

Bulk discount applies. Subsequent re-translations of the same content hit cache — free.
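Here is one way the backfill range pencils out. The average token counts per string are assumptions; the per-token rates and the 10% bulk discount come from the pricing table above.

```python
# Worked estimate for a 1M-string bulk backfill.
strings = 1_000_000
avg_input_tokens = 50    # assumed: prompt plus source string
avg_output_tokens = 60   # assumed: translated string

input_cost = strings * avg_input_tokens * 0.22 / 1_000_000
output_cost = strings * avg_output_tokens * 0.90 / 1_000_000
total = (input_cost + output_cost) * 0.90  # bulk endpoint discount
print(round(total, 2))  # 58.5
```

Longer strings push the estimate toward the top of the ~$50-120 range; shorter UI labels pull it down.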

Bring your own provider

By default, translations run on our platform — one bill, predictable per-token rates. Prefer your own OpenAI, Anthropic, xAI, or Groq account? Route requests through us to your provider instead. You pay the model bill directly, we charge a small platform fee on token volume.

Use BYOK to lock in a specific model, hit your own rate limits, or apply provider credits. Integration takes one extra header.
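The "one extra header" might look something like the sketch below. The header name `X-Provider-Key` and the key formats are placeholders, not the documented values; the BYOK guide has the real names.

```python
def byok_headers(api_key, provider_key):
    """Build request headers for BYOK routing.

    "X-Provider-Key" is a placeholder header name used for illustration.
    """
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
        "X-Provider-Key": provider_key,  # your own OpenAI/Anthropic/xAI/Groq key
    }

headers = byok_headers("sk_live_...", "sk-ant-...")
print("X-Provider-Key" in headers)  # True
```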

See the BYOK guide

Free to test

Test keys (sk_test_...) hit the real model and return production-quality translations, but usage is tracked separately and not billed. Build your whole integration before spending a cent.

Get your API key

No credit card needed. Start with a test key.