UseRightAI

Cut through AI hype. Pick what works.

Independent AI model tracker. Live pricing, real benchmarks, zero vendor bias.


© 2026 UseRightAI. Independent · Free forever · Not affiliated with any AI provider.

Affiliate links are clearly labeled. See disclosures.

OpenAI · Balanced

OpenAI: GPT-3.5 Turbo (older v0613)

A once-useful workhorse now completely overshadowed by cheaper, more capable successors.

Coding: 42
Writing: 48
Research: 30
Images: 0
Value: 62
Long Context: 5
Use this when

High-volume, cost-sensitive text tasks like classification, summarization, and simple Q&A where bleeding-edge quality is not required.

Skip this if

You need to process documents longer than a few paragraphs, require strong reasoning, or are starting a new project where GPT-4o mini or Claude Haiku are viable alternatives.

Pricing
$1.00/1M in
$2.00/1M out
→ 0% since Mar 2026
Context
4k tokens
Speed
Very fast
How to access
API
$1/1M input tokens
Subscription = chat interface. API = build with it. Compare all subscription plans
Switch instead if...
Best overall
Claude Opus 4.6
Cheaper option
Meta: Llama 3.1 8B Instruct
Faster option
OpenAI: GPT-5 Mini

Strengths

Low cost at $1/$2 per million tokens makes it viable for large-scale batch processing

Fast inference speed suitable for latency-sensitive applications

Reliable instruction-following for structured, well-defined prompts

Stable, versioned snapshot ensures consistent, reproducible outputs over time

Weaknesses

Severely limited 4,095-token context window makes it unsuitable for long documents or multi-turn conversations

Substantially outclassed by GPT-4o mini at a similar price point for nearly every task

Frozen older checkpoint means it lacks improvements in reasoning, refusals, and factual accuracy from later model versions
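The context limitation above is usually worked around by chunking: splitting a long document into pieces small enough to process separately. A minimal sketch of the chunking step (the ~4 chars/token ratio is a rough assumption; a real tokenizer such as tiktoken gives exact counts):

```python
def chunk_for_context(text: str, max_tokens: int = 3000,
                      chars_per_token: float = 4.0) -> list[str]:
    """Split text into pieces that should fit a small context window.

    Uses a rough ~4 chars/token heuristic (an assumption; use a real
    tokenizer for exact counts). max_tokens is kept well under the 4k
    window to leave room for the prompt and the model's reply.
    """
    max_chars = int(max_tokens * chars_per_token)
    paragraphs = text.split("\n\n")
    chunks, current = [], ""
    for para in paragraphs:
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = current + "\n\n" + para if current else para
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be summarized or classified on its own, with a second pass over the per-chunk results if a single combined answer is needed.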

Monthly cost estimate

See what OpenAI: GPT-3.5 Turbo (older v0613) actually costs at your usage level

Input tokens / month: 1M (adjustable 10k–50M)
Output tokens / month: 500k (adjustable 10k–25M)
Input cost: $1.00
Output cost: $1.00
Total / month: $2.00

Based on OpenAI: GPT-3.5 Turbo (older v0613) API pricing: $1/1M input · $2/1M output. Real costs vary by provider discounts and caching. Check the provider for exact current rates.
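The estimate above is just the listed per-million rates applied to monthly volume. A small sketch, with the rates hardcoded from this page (and liable to change):

```python
def monthly_cost(input_tokens: int, output_tokens: int,
                 in_rate: float = 1.00, out_rate: float = 2.00) -> float:
    """Estimated monthly USD cost; rates are USD per 1M tokens.

    Defaults are this page's listed GPT-3.5 Turbo (v0613) rates;
    check the provider for current pricing.
    """
    m = 1_000_000
    return input_tokens / m * in_rate + output_tokens / m * out_rate

# The calculator example above: 1M input + 500k output per month.
total = monthly_cost(1_000_000, 500_000)  # 2.0 USD
```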

Price History

OpenAI: GPT-3.5 Turbo (older v0613) pricing over time

→ 0% since Mar 27

[Price chart: input price flat at $1.00/1M from Mar 27 to Mar 28]

2 data points · tracked daily since Mar 27, 2026

Ready to try it?

Start using OpenAI: GPT-3.5 Turbo (older v0613)

Best for high-volume, cost-sensitive text tasks like classification, summarization, and simple Q&A where bleeding-edge quality is not required. Start free — no card required.

Try OpenAI: GPT-3.5 Turbo (older v0613) free · Compare alternatives

Recommendations are made independently based on real-world use and public benchmarks. See our disclosures for details.

Compare alternatives

Similar models worth checking before you commit.

OpenAIBudget

OpenAI: GPT-5 Mini

GPT-5 Mini is OpenAI's budget-tier distillation of GPT-5, designed for high-volume, cost-sensitive tasks that don't require full flagship reasoning depth. It supersedes GPT-4o with improved instruction following and a massively expanded 400K context window at a fraction of the cost.

Verdict
The new budget default for OpenAI API users: faster, cheaper, and smarter than GPT-4o with a context window that punches well above its price tier.
Quality score
66%
Pricing
$0.25/1M in
$2.00/1M out
Speed
Very fast
Context
400k tokens
Output cost of $2/1M tokens is higher than some competing budget models (Gemini Flash at ~$0.60/1M output). At scale, output-heavy tasks may erode cost advantages — monitor token ratios carefully. Supersedes GPT-4o, which may be deprecated on a rolling basis.
BudgetFastLong ContextHigh VolumeOpenAI
Best for
High-volume production workloads — chatbots, summarization pipelines, and document Q&A — where cost efficiency matters more than peak reasoning.
View model
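The output-cost caveat in the GPT-5 Mini card above can be made concrete: the blended per-token rate depends heavily on what fraction of traffic is output. A quick sketch using this page's listed $0.25/$2.00 rates (the traffic mixes are hypothetical):

```python
def blended_rate(output_share: float, in_rate: float = 0.25,
                 out_rate: float = 2.00) -> float:
    """USD per 1M total tokens when `output_share` of them are output.

    Default rates are this page's listed GPT-5 Mini pricing.
    """
    return (1 - output_share) * in_rate + output_share * out_rate

# Summarization-style traffic (~10% output) stays cheap (~$0.43/1M),
# while generation-heavy traffic (~50% output) more than doubles it (~$1.13/1M).
summarize = blended_rate(0.10)
generate = blended_rate(0.50)
```

This is why the card suggests monitoring token ratios: the $0.25 input rate dominates only when outputs stay short.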
AnthropicBalanced

Anthropic: Claude 3.5 Haiku

Claude 3.5 Haiku is Anthropic's fastest and most affordable model in the Claude 3.5 family, designed for high-throughput tasks requiring quick responses without sacrificing Claude's core instruction-following quality. It handles a massive 200K context window while maintaining speed suitable for production pipelines.

Verdict
The fastest way to get Claude's quality in production — just don't confuse 'fast' with 'cheap'.
Quality score
64%
Pricing
$0.80/1M in
$4.00/1M out
Speed
Very fast
Context
200k tokens
Output cost of $4/1M is notably higher than competing fast/mini models. Input cost at ~$0.80/1M is competitive. Best value emerges in input-heavy pipelines like document classification or RAG retrieval where output tokens are minimal.
FastLong ContextBudget-FriendlyClaude FamilyAgentic
Best for
High-volume, latency-sensitive applications like chatbots, classification, data extraction, and agentic tool use where speed and cost matter more than peak reasoning depth.
View model
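The input-heavy caveat in the Claude 3.5 Haiku card can likewise be checked with quick arithmetic, using the listed $0.80/$4.00 rates (the request shapes below are hypothetical examples):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 in_rate: float = 0.80, out_rate: float = 4.00) -> float:
    """Per-request USD cost; default rates are this page's Claude 3.5 Haiku pricing."""
    m = 1_000_000
    return input_tokens / m * in_rate + output_tokens / m * out_rate

# Input-heavy RAG request: 20k tokens of retrieved context, a 500-token answer.
# Input dominates (~$0.016 of ~$0.018), so the high output rate barely matters.
rag = request_cost(20_000, 500)

# Output-heavy generation: 1k-token prompt, 4k tokens of generated text.
# Now output is ~95% of the ~$0.017 cost.
gen = request_cost(1_000, 4_000)
```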
GoogleBudget

Google: Gemma 2 9B

Gemma 2 9B is Google's open-weight 9-billion parameter model designed for efficient on-device and API deployment. It punches above its weight class for instruction-following and general language tasks at an exceptionally low cost.

Verdict
A capable open-weight budget model hamstrung by a frustratingly small context window.
Quality score
45%
Pricing
$0.03/1M in
$0.09/1M out
Speed
Very fast
Context
8k tokens
Pricing reflects API access through third-party providers; Google also offers Gemma 2 9B weights for free download and self-hosting. The 8,192 token limit is a hard architectural constraint of this version.
Open WeightBudgetSmall ModelGoogleOn-Device
Best for
Lightweight text tasks, classification, and summarization where cost matters more than frontier-level quality.
View model

Change history

Pricing moves, ranking shifts, and capability updates.

New ModelMar 27, 2026

OpenAI: GPT-3.5 Turbo (older v0613) — added to UseRightAI

OpenAI's GPT-3.5 Turbo (older v0613) is now indexed. A once-useful workhorse now completely overshadowed by cheaper, more capable successors.

View model

FAQ

What is OpenAI: GPT-3.5 Turbo (older v0613) best for?

OpenAI: GPT-3.5 Turbo (older v0613) is best for high-volume, cost-sensitive text tasks like classification, summarization, and simple Q&A where bleeding-edge quality is not required. It is a strong fit when that kind of workflow matters more than the model's tradeoffs in context length and reasoning depth.

When should I avoid OpenAI: GPT-3.5 Turbo (older v0613)?

Avoid it when you need to process documents longer than a few paragraphs, require strong reasoning, or are starting a new project where GPT-4o mini or Claude Haiku is a viable alternative.

What is a cheaper alternative to OpenAI: GPT-3.5 Turbo (older v0613)?

Meta: Llama 3.1 8B Instruct is the lower-cost option to compare first when you want a similar workflow fit with less token spend.

What is a faster alternative to OpenAI: GPT-3.5 Turbo (older v0613)?

OpenAI: GPT-5 Mini is the better pick when response time matters more than maximum depth or premium quality.

Newsletter

Get notified when OpenAI: GPT-3.5 Turbo (older v0613) pricing changes

We track pricing daily. When this model drops or spikes, you'll know first.

No spam. Useful updates only. Affiliate disclosures always clearly labeled.