Best cheap AI for broad day-to-day work — now with 1M context.
Coding: 68
Writing: 75
Research: 76
Images: 82
Value: 97
Long Context: 82
Use this when
High-volume everyday AI usage where speed and cost both matter
Skip this if
You need premium reasoning depth or the highest coding benchmark scores.
Strengths
1M token context window at $0.50/$3 per million tokens
2.5× faster time-to-first-token than Gemini 2.5 Flash
Strong multimodal support across text, images, audio, and video
Weaknesses
Not as sharp as premium models on hard reasoning or complex coding
May need more validation on nuanced technical tasks
Monthly cost estimate
See what Gemini 3.1 Flash actually costs at your usage level
Input tokens / month: 1M (range: 10k–50M)
Output tokens / month: 500k (range: 10k–25M)
Input cost
$0.50
Output cost
$1.50
Total / month
$2.00
Based on Gemini 3.1 Flash API pricing: $0.50/1M input · $3.00/1M output. Real costs vary by provider discounts and caching; check the provider for exact current rates.
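The estimate above is simple linear arithmetic. A minimal sketch, with the listed $0.50/$3.00 per-million-token rates hard-coded as defaults (actual rates may change, and this ignores caching or batch discounts):

```python
def monthly_cost(input_tokens: int, output_tokens: int,
                 input_rate: float = 0.50, output_rate: float = 3.00) -> float:
    """Estimate monthly API spend in USD. Rates are USD per 1M tokens."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# The calculator example above: 1M input + 500k output per month
print(monthly_cost(1_000_000, 500_000))  # 2.0
```

Plugging in your own token volumes reproduces the calculator's total for any usage level.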
Price History
Gemini 3.1 Flash pricing over time
Tracked daily since May 8, 2026 · 1 data point so far, more data builds daily
Ready to try it?
Start using Gemini 3.1 Flash
High-volume everyday AI usage where speed and cost both matter. Start free — no card required.
Recommendations are made independently based on real-world use and public benchmarks. See our disclosures for details.
Compare alternatives
Similar models worth checking before you commit.
Google · Budget
Gemma 4 26B A4B
Gemma 4 26B A4B is a sparse mixture-of-experts open model from Google, activating only ~4B parameters per forward pass despite having 26B total parameters. It offers a 262K context window at budget pricing, making it one of the more capable open-weight models for its cost tier.
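The cost advantage of a sparse mixture-of-experts model follows from per-token compute scaling with active rather than total parameters. A rough illustration using the figures above (~4B active of 26B total); this is a simplification that ignores router overhead and memory costs:

```python
total_params = 26e9   # all experts combined
active_params = 4e9   # parameters actually run per forward pass ("A4B")

# Per-token transformer FLOPs scale roughly with 2 * params;
# a sparse MoE only executes its active experts each token.
dense_flops_per_token = 2 * total_params
moe_flops_per_token = 2 * active_params

print(f"compute per token vs. a dense 26B model: "
      f"{moe_flops_per_token / dense_flops_per_token:.0%}")
# roughly 15% of the dense model's per-token compute
```

This is why a 26B-total model can be priced closer to a ~4B dense model while retaining a larger pool of learned parameters.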
Verdict
A lean, fast, and surprisingly capable budget model best suited for high-volume text tasks where cost efficiency trumps peak quality.
Gemini 3.1 Flash is best for high-volume everyday AI usage where speed and cost both matter. It is a strong fit when that workflow matters more than peak reasoning or coding quality, thanks to its budget pricing and very fast speed.
When should I avoid Gemini 3.1 Flash?
You need premium reasoning depth or the highest coding benchmark scores.
What is a cheaper alternative to Gemini 3.1 Flash?
Meta's Llama 3.1 8B Instruct is the lower-cost option to compare first when you want a similar workflow fit with less token spend.
What is a faster alternative to Gemini 3.1 Flash?
Gemma 4 26B A4B is the better pick when response time matters more than maximum depth or premium quality.
Context
262k tokens
As an open-weight model, Gemma 4 26B can also be self-hosted, making API pricing largely irrelevant at scale. The 'A4B' suffix denotes the active parameter count in its MoE configuration. Listed as superseding Gemini 3 Flash Preview, though Gemini 2.0 Flash remains a stronger hosted alternative.
Open-weight · Budget · MoE · Long Context · Google
Best for
Cost-sensitive applications needing long-context processing with reasonable quality, such as document summarization pipelines or lightweight coding assistants.
Gemma 4 31B is Google's open-weight instruction-tuned model offering a strong balance of capability and cost efficiency at just $0.14/$0.40 per million tokens. It features a 262K context window and is designed for developers who need capable on-premise or API-hosted inference without flagship pricing.
Verdict
A well-priced, long-context open-weight model that's ideal for high-volume developer workloads but won't match frontier models on complex reasoning.
Quality score
66%
Pricing
$0.14/1M in
$0.40/1M out
Speed
Fast
Context
262k tokens
As an open-weight model, Gemma 4 31B can be self-hosted via Ollama or Hugging Face in addition to Google's API. Pricing shown is for hosted inference. No image input capability confirmed at launch.
Open Weight · Budget · Long Context · Coding · Self-Hostable
Best for
Cost-conscious developers needing a capable open-weight model for coding assistance, summarization, and document analysis at scale.
Gemma 2 9B is Google's open-weight 9-billion parameter model designed for efficient on-device and API deployment. It punches above its weight class for instruction-following and general language tasks at an exceptionally low cost.
Verdict
A capable open-weight budget model hamstrung by a frustratingly small context window.
Quality score
45%
Pricing
$0.03/1M in
$0.09/1M out
Speed
Very fast
Context
8k tokens
Pricing reflects API access through third-party providers; Google also offers Gemma 2 9B weights for free download and self-hosting. The 8,192 token limit is a hard architectural constraint of this version.
Open Weight · Budget · Small Model · Google · On-Device
Best for
Lightweight text tasks, classification, and summarization where cost matters more than frontier-level quality.