
Yi (01.AI)

Status: Active

Kai-Fu Lee's open-weight LLM family from 01.AI, now frozen after the March 2025 pivot away from pre-training.

Best plan: Free (open-weight); Yi-Lightning ~$0.14/M tokens (open-source + cloud)
Best for: Researchers using Yi-34B-200K for long-context experiments on fixed weights
Watch: Production workloads needing ongoing model updates (check fit before switching)
Pricing: Free (open-weight); Yi-Lightning ~$0.14/M tokens
Launched: 2023

Decision badges: Readiness signals
Active product · Open source · Public repo listed · Verified this month · Irregular review cycle · Niche or situational score
Fact ledger: Verified fields
Company: 01.AI
Category: Research
Pricing model: Open source
Price range: Free (open-weight); Yi-Lightning ~$0.14/M tokens
Status: Active
Last verified: May 2, 2026
Pricing anchor: Pricing should be checked on the current Yi (01.AI) source before purchase; AIpedia has not yet promoted this page to a full Tier 1 pricing profile. (Source: 01.AI Wikipedia)
Best for: Kai-Fu Lee's open-weight LLM family from 01.AI, now frozen after the March 2025 pivot away from pre-training. Best for research, literature review, evidence collection, and document analysis. (Source: 01.AI Wikipedia)
Watch out for: Non-Tier-1 canonical profile; verify current pricing, usage limits, data policy, and integration details before procurement. (Source: 01.AI Wikipedia)
Change timeline: What moved recently
  1. Verified: Core pricing and product facts checked May 2, 2026 (irregular cadence)
  2. Updated: Editorial page changed May 2, 2026
Knowledge graph: Adjacent context
Company: 01.AI
Category: Research
Best for
  • Researchers using Yi-34B-200K for long-context experiments on fixed weights
  • Self-hosting bilingual Chinese-English workloads on Apache 2.0 Yi-Coder
  • Budget API use where freezing model state is acceptable
Not ideal for
  • Production workloads needing ongoing model updates
  • Teams expecting vendor roadmap or benchmark parity with current frontier
  • New deployments where frozen-model risk outweighs cost savings

Yi is 01.AI’s open-weight LLM family, developed in Beijing by Kai-Fu Lee’s lab starting in March 2023. 01.AI achieved unicorn status within eight months and released Yi-34B, Yi-Lightning, and Yi-Coder across 2023 and 2024. Yi-Lightning briefly ranked joint third on Chatbot Arena at its October 2024 launch.

Status as of April 2026: frozen. In March 2025, 01.AI stopped pre-training large language models and pivoted the company to selling enterprise AI solutions built on DeepSeek’s models. Kai-Fu Lee publicly stated that only Alibaba, Google, DeepSeek, and ByteDance have the user base to justify pre-training frontier models. Existing Yi weights remain on Hugging Face. No Yi-2 successor is planned.

System Verdict

Pick Yi only for narrow legacy use cases. Yi-34B-200K still works as a fixed open-weight baseline for research. Yi-Coder (Apache 2.0) runs as a small coding model for teams that have already integrated it. Yi-Lightning remains accessible through third-party gateways at ~$0.14/M tokens. For any of these, the model delivers what it promised at launch.

Skip it for new production. The family receives no pre-training updates. DeepSeek now delivers what Yi-Lightning promised with an active roadmap. Qwen offers a broader open-weight family under Apache 2.0 with monthly releases. 01.AI itself points enterprise buyers to DeepSeek, not Yi.

Who still uses Yi: Academic researchers freezing a fixed baseline, hobbyists running Yi-34B-200K on existing infrastructure, teams with pre-existing Yi-Coder deployments. New projects should default elsewhere.

Key Facts

Status: Frozen since March 2025 pre-training pivot
Current focus of 01.AI: WanZhi platform reselling DeepSeek-based enterprise AI
Highest-performing Yi model: Yi-Lightning (October 2024)
Flagship open-weight: Yi-34B-200K (4K base, 200K extended context)
Coding models: Yi-Coder-9B and Yi-Coder-1.5B (Apache 2.0)
Productivity app: WanZhi (Copilot-style, now DeepSeek-backed)
Yi-Lightning pricing: ~$0.14/M input, ~$0.14/M output (October 2024 direct pricing; cost sketch below)
Third-party access: OpenRouter, AIMLAPI, other aggregators
License (Yi-34B): Custom Yi License, commercial use with thresholds
License (Yi-Coder): Apache 2.0
No successor planned: Kai-Fu Lee confirmed the strategy shift publicly
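
To make the listed rate concrete, a minimal cost sketch; the ~$0.14/M figure is the historical October 2024 direct price, the token counts are hypothetical, and gateway rates may differ:

```python
# Minimal cost sketch at the historical ~$0.14 per million tokens
# (same rate quoted for input and output; gateway pricing may differ).
RATE_PER_MILLION = 0.14

input_tokens = 8_000_000   # hypothetical monthly input volume
output_tokens = 2_000_000  # hypothetical monthly output volume
cost = (input_tokens + output_tokens) / 1_000_000 * RATE_PER_MILLION
print(f"Estimated cost: ${cost:.2f}")  # Estimated cost: $1.40
```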

Every data point above was verified on 2026-04-17. See Sources.

What it actually is

Yi is an open-weight LLM family in maintenance mode. The Yi weights remain downloadable on Hugging Face. Third-party API providers still host Yi-Lightning. 01.AI itself no longer develops new Yi generations.
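
Pulling the frozen weights is still a one-liner; a minimal sketch with huggingface_hub, assuming the 01-ai/Yi-34B-200K repository remains hosted at its current path:

```python
# Minimal sketch: fetch the frozen Yi weights for local use.
# Assumes the 01-ai/Yi-34B-200K repository is still live on Hugging Face.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="01-ai/Yi-34B-200K",
    revision="main",  # pinning a specific commit hash is safer for a research baseline
)
print(f"Weights cached at: {local_dir}")
```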

The original product delivered strong results. Yi-34B beat Llama 2 at launch. Yi-Lightning ranked competitively on Chatbot Arena at $0.14/M tokens, one of the best price-per-quality ratios of its era. Yi-Coder is a capable open coding model under Apache 2.0.

The moats collapsed with the pivot. 01.AI’s WanZhi platform now sells DeepSeek-based agents to enterprise customers in finance, video gaming, and legal verticals. Kai-Fu Lee’s public stance is that pre-training below hundred-million-user scale is uneconomic. Yi is the artifact of that strategic reversal.

When to pick Yi

  • Fixed-weight research baselines. Academic papers benefit from models that will not change. Yi-34B-200K gives a citable frozen target.
  • Long-context experiments on older hardware. Yi-34B-200K runs on a single A100 and handles 200K tokens for research prototypes (a loading sketch follows this list).
  • Legacy Yi-Coder deployments. Teams already using Yi-Coder-9B or Yi-Coder-1.5B under Apache 2.0 can continue running those models at no cost.
  • Bilingual Chinese-English tasks with fixed scope. Yi’s training still handles both languages competently for narrow-scope workloads.
  • Budget API through third-party gateways. Yi-Lightning’s ~$0.14/M pricing remains competitive on aggregator platforms.
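
As referenced above, a minimal loading sketch with transformers; the dtype and device settings are assumptions to adjust for local hardware (bf16 on one 80 GB A100 fits only modest context lengths; quantize for longer runs):

```python
# Minimal sketch: Yi-34B-200K as a fixed long-context research baseline.
# Hardware assumptions: bf16 on a single 80 GB A100 fits short contexts only;
# use 4-bit quantization (e.g. bitsandbytes) for longer-context experiments.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "01-ai/Yi-34B-200K"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # shard across whatever GPUs are visible
)

prompt = "Summarize the key claims of the following paper:\n..."  # long input here
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```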

When to pick something else

  • Active open-weight development: Qwen releases monthly under Apache 2.0 across 119 languages.
  • Cheapest capable API with a roadmap: DeepSeek delivers Yi-Lightning’s value proposition with ongoing model updates.
  • Polished English product: ChatGPT or Claude.
  • Open-weight coding leader: GLM-5.1 under MIT, at SWE-Bench Pro leader tier.
  • Long-context frontier model: Kimi K2.5 at 256K free, or Claude Opus 4.7 at 1M tokens.

Pricing

Yi-Lightning is priced via third-party aggregators; the open weights are free on Hugging Face (01-ai).

Surface | Price | Notes
Yi-34B open weights | Free | Custom Yi License, commercial use with revenue thresholds
Yi-Coder-9B / 1.5B | Free | Apache 2.0, no commercial restrictions
Yi-Lightning (direct, historical) | ~$0.14/M input and output | October 2024 pricing via Kai-Fu Lee announcement
Yi-Lightning (third-party gateways) | Varies | OpenRouter, AIMLAPI, other aggregators
WanZhi enterprise platform | Custom | Now DeepSeek-backed, not Yi-based

Prices verified 2026-04-17 via Hugging Face 01-ai, 01.AI Wikipedia, and SCMP coverage of 01.AI’s DeepSeek pivot. Direct 01.AI API access post-pivot is uncertain; third-party gateways remain the reliable path.
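
Gateways expose Yi-Lightning through OpenAI-compatible endpoints; a minimal sketch against OpenRouter, where the model slug shown is an assumption to confirm in the gateway's current catalog:

```python
# Minimal sketch: call Yi-Lightning through an OpenAI-compatible gateway.
# The base URL is OpenRouter's documented endpoint; the model slug
# "01-ai/yi-lightning" is an assumption -- verify it in the gateway catalog.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

response = client.chat.completions.create(
    model="01-ai/yi-lightning",  # hypothetical slug; check availability first
    messages=[{"role": "user", "content": "Summarize the Yi model family in two sentences."}],
)
print(response.choices[0].message.content)
```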

Against the alternatives

Dimension | Yi (frozen) | DeepSeek V3 | Qwen3 open-weight | GLM-5.1
Active development | No | Yes | Yes (monthly) | Yes
Open weights | Custom Yi License + Apache 2.0 (Coder) | V3 open | Apache 2.0 | MIT
Current benchmarks | Frozen at 2024 level | Frontier reasoning | Frontier multilingual | SWE-Bench Pro leader
API price | ~$0.14/M (third-party) | ~$0.28/M | ~$0.15-$0.325/M | $1.00/M
Context window | 200K (Yi-34B-200K) | 64K | 1M (Qwen3.6 Plus) | 200K
Vendor roadmap | None | Active | Active | Active
Best viewed as | Legacy research baseline | Cheap capable API | Open-weight multilingual | Open-weight coding leader

Failure modes

  • No further pre-training. 01.AI confirmed in March 2025 that new Yi generations are not planned. The model family will never match current frontier benchmarks.
  • Yi-Lightning API availability is uncertain. Direct 01.AI hosted access post-pivot is ambiguous. Third-party gateways remain the reliable path, but per-provider SLAs vary.
  • Yi License restrictions on Yi-34B. Commercial deployment above certain revenue thresholds requires separate agreement. Apache 2.0 only covers the Yi-Coder variants.
  • Community support declining. GitHub activity on the Yi repo has dropped following the pivot. Fewer third-party tutorials, fewer active contributors.
  • Enterprise path flows through DeepSeek. 01.AI’s WanZhi platform sells DeepSeek-backed agents. Customers buying from 01.AI are effectively buying DeepSeek with 01.AI’s consulting layer.
  • Benchmark drift. Yi-Lightning’s late-2024 Chatbot Arena ranking does not reflect 2026 frontier scores. DeepSeek V3, Qwen3, and GLM-5.1 have all surpassed it.
  • Same Chinese-provider data residency posture as active vendors, without the benefit of an active roadmap to justify the tradeoff.

Methodology

This page was produced by the aipedia.wiki editorial pipeline, an automated system that ingests vendor documentation, verifies pricing and model details against primary sources, and generates the editorial analysis you are reading. No individual human wrote this review. Scoring follows the four-dimension rubric at /about/scoring/ (Utility, Value, Moat, Longevity; unweighted average). Longevity is scored low to reflect the frozen-model status. Last verified 2026-04-17 against 01.AI Wikipedia, SCMP interview with Kai-Fu Lee, TMTPost WanZhi coverage, and Hugging Face 01-ai.

FAQ

Is 01.AI still developing Yi? No. In March 2025, 01.AI stopped pre-training large language models and pivoted to selling enterprise AI solutions built on DeepSeek’s models. Existing Yi weights remain on Hugging Face. No Yi-2 is planned.

Is Yi open source? Mixed. Yi-34B uses a custom Yi License that permits commercial use up to certain revenue thresholds. Yi-Coder-9B and Yi-Coder-1.5B ship under Apache 2.0 with no commercial restrictions.
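
For teams relying on that Apache 2.0 grant, a minimal local-completion sketch with transformers; the 01-ai/Yi-Coder-1.5B repo id follows the family naming above and should be confirmed on Hugging Face:

```python
# Minimal sketch: local code completion with the Apache 2.0 Yi-Coder-1.5B.
# Repo id follows the family naming used above; confirm it on Hugging Face.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "01-ai/Yi-Coder-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "def fibonacci(n: int) -> int:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```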

How does Yi-Lightning compare in 2026? At its October 2024 launch, Yi-Lightning ranked joint third on Chatbot Arena at ~$0.14/M tokens. Since then, DeepSeek V3 and similar models have passed it on both quality and price. Yi-Lightning remains accessible through third-party providers but is no longer a frontier model.

Should I start a new project on Yi? Only if you specifically need a frozen-weight research baseline or already have Yi-Coder integrated. For new production work, DeepSeek, Qwen, and GLM offer active roadmaps at similar or better price points.

What is WanZhi? 01.AI’s enterprise AI platform, now built on DeepSeek rather than Yi. It serves finance, video gaming, and legal customers with custom agent deployments. Buyers get DeepSeek capability with 01.AI’s consulting and integration layer.

Sources

  • 01.AI Wikipedia
  • SCMP interview with Kai-Fu Lee
  • TMTPost WanZhi coverage
  • Hugging Face 01-ai

