Yi is 01.AI’s open-weight LLM family, developed in Beijing by Kai-Fu Lee’s lab starting March 2023. 01.AI achieved unicorn status within eight months and released Yi-34B, Yi-Lightning, and Yi-Coder across 2023 and 2024. Yi-Lightning briefly ranked joint third on Chatbot Arena at its October 2024 launch.
Status as of April 2026: frozen. In March 2025, 01.AI stopped pre-training large language models and pivoted the company to selling enterprise AI solutions built on DeepSeek’s models. Kai-Fu Lee publicly stated that only Alibaba, Google, DeepSeek, and ByteDance have the user base to justify pre-training frontier models. Existing Yi weights remain on Hugging Face. No Yi-2 successor is planned.
System Verdict
Pick Yi only for narrow legacy use cases. Yi-34B-200K still works as a fixed open-weight baseline for research. Yi-Coder (Apache 2.0) runs as a small coding model for teams that have already integrated it. Yi-Lightning remains accessible through third-party gateways at ~$0.14/M tokens. For any of these, the model delivers what it promised at launch.
Skip it for new production. The family receives no pre-training updates. DeepSeek now delivers what Yi-Lightning promised with an active roadmap. Qwen offers a broader open-weight family under Apache 2.0 with monthly releases. 01.AI itself points enterprise buyers to DeepSeek, not Yi.
Who still uses Yi: Academic researchers freezing a fixed baseline, hobbyists running Yi-34B-200K on existing infrastructure, teams with pre-existing Yi-Coder deployments. New projects should default elsewhere.
Key Facts
| Fact | Detail |
|---|---|
| Status | Frozen since March 2025 pre-training pivot |
| Current focus of 01.AI | WanZhi platform reselling DeepSeek-based enterprise AI |
| Highest-performing Yi model | Yi-Lightning (October 2024) |
| Flagship open-weight | Yi-34B-200K (4K base, 200K extended context) |
| Coding models | Yi-Coder-9B and Yi-Coder-1.5B (Apache 2.0) |
| Productivity app | WanZhi (Copilot-style, now DeepSeek-backed) |
| Yi-Lightning pricing | ~$0.14/M input, ~$0.14/M output (October 2024 direct pricing) |
| Third-party access | OpenRouter, AIMLAPI, other aggregators |
| License (Yi-34B) | Custom Yi License, commercial use with thresholds |
| License (Yi-Coder) | Apache 2.0 |
| No successor planned | Kai-Fu Lee confirmed strategy shift publicly |
Every data point above was verified on 2026-04-17. See Sources.
What it actually is
Yi is an open-weight LLM family in maintenance mode. The Yi weights remain downloadable on Hugging Face. Third-party API providers still host Yi-Lightning. 01.AI itself no longer develops new Yi generations.
The original product delivered strong results. Yi-34B beat Llama 2 at launch. Yi-Lightning ranked competitively on Chatbot Arena at $0.14/M tokens, one of the best price-per-quality ratios of its era. Yi-Coder is a capable open coding model under Apache 2.0.
The moats collapsed with the pivot. 01.AI’s WanZhi platform now sells DeepSeek-based agents to enterprise customers in finance, video gaming, and legal verticals. Kai-Fu Lee’s public stance is that pre-training below hundred-million-user scale is uneconomic. Yi is the artifact of that strategic reversal.
When to pick Yi
- Fixed-weight research baselines. Academic papers benefit from models that will not change. Yi-34B-200K gives a citable frozen target.
- Long-context experiments on older hardware. Yi-34B-200K runs on a single A100 and handles 200K tokens for research prototypes.
- Legacy Yi-Coder deployments. Teams already using Yi-Coder-9B or Yi-Coder-1.5B under Apache 2.0 can continue running those models at no cost.
- Bilingual Chinese-English tasks with fixed scope. Yi’s training still handles both languages competently for narrow-scope workloads.
- Budget API through third-party gateways. Yi-Lightning’s ~$0.14/M pricing remains competitive on aggregator platforms.
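For the frozen-baseline and legacy Yi-Coder cases above, loading the published weights is a standard Hugging Face workflow. A minimal sketch using the transformers library; the model IDs follow the 01-ai organization naming on Hugging Face, and the dtype/device settings are illustrative rather than prescriptive:

```python
# Sketch: loading a frozen Yi checkpoint as a fixed research baseline.
# Greedy decoding keeps baseline outputs reproducible across runs.

BASELINE_ID = "01-ai/Yi-34B-200K"   # 200K-context baseline (fits a single A100)
CODER_ID = "01-ai/Yi-Coder-1.5B"    # small Apache 2.0 coding model

def load_yi(model_id: str = CODER_ID):
    """Load tokenizer and weights for a Yi checkpoint."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",  # keep the dtype stored in the checkpoint
        device_map="auto",   # place weights across available GPUs
    )
    return tokenizer, model

def complete(tokenizer, model, prompt: str, max_new_tokens: int = 64) -> str:
    """Greedy (do_sample=False) completion for deterministic baselines."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    return tokenizer.decode(out[0], skip_special_tokens=True)
```

Because the weights are frozen, pinning the checkpoint revision in `from_pretrained` gives a fully citable target for papers.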
When to pick something else
- Active open-weight development: Qwen releases monthly under Apache 2.0 across 119 languages.
- Cheapest capable API with a roadmap: DeepSeek delivers Yi-Lightning’s value proposition with ongoing model updates.
- Polished English product: ChatGPT or Claude.
- Open-weight coding leader: GLM-5.1, MIT-licensed and at SWE-Bench Pro leader tier.
- Long-context frontier model: Kimi K2.5 at 256K free, or Claude Opus 4.7 at 1M tokens.
Pricing
Yi-Lightning is priced via third-party aggregators; the open weights are free from the Hugging Face 01-ai organization.
| Surface | Price | Notes |
|---|---|---|
| Yi-34B open weights | Free | Custom Yi License, commercial use with revenue thresholds |
| Yi-Coder-9B / 1.5B | Free | Apache 2.0, no commercial restrictions |
| Yi-Lightning (direct, historical) | ~$0.14/M input and output | October 2024 pricing via Kai-Fu Lee announcement |
| Yi-Lightning (third-party gateways) | Varies | OpenRouter, AIMLAPI, other aggregators |
| WanZhi enterprise platform | Custom | Now DeepSeek-backed, not Yi-based |
Prices verified 2026-04-17 via Hugging Face 01-ai, 01.AI Wikipedia, and SCMP coverage of 01.AI’s DeepSeek pivot. Direct 01.AI API access post-pivot is uncertain; third-party gateways remain the reliable path.
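Since third-party gateways are the reliable access path, requests go through a standard OpenAI-compatible chat-completions endpoint. A minimal sketch; the OpenRouter base URL and the `01-ai/yi-lightning` model slug are assumptions here, so check your aggregator's catalog for the exact identifiers:

```python
# Sketch: calling Yi-Lightning through an OpenAI-compatible aggregator.
import os
import requests

GATEWAY_URL = "https://openrouter.ai/api/v1/chat/completions"  # assumed gateway

def build_payload(prompt: str, model: str = "01-ai/yi-lightning") -> dict:
    """Assemble a standard chat-completions request body."""
    return {
        "model": model,  # assumed slug; varies per gateway
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 200,
    }

def yi_lightning_chat(prompt: str) -> str:
    """Send the request; requires an API key for the chosen gateway."""
    resp = requests.post(
        GATEWAY_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
        json=build_payload(prompt),
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```

Keeping the payload builder separate from the HTTP call makes it easy to swap gateways if a provider drops the model.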
Against the alternatives
| | Yi (frozen) | DeepSeek V3 | Qwen3 open-weight | GLM-5.1 |
|---|---|---|---|---|
| Active development | No | Yes | Yes (monthly) | Yes |
| Open weights | Custom Yi License + Apache 2.0 (Coder) | V3 open | Apache 2.0 | MIT |
| Current benchmarks | Frozen at 2024 level | Frontier reasoning | Frontier multilingual | SWE-Bench Pro leader |
| API price | ~$0.14/M (third-party) | ~$0.28/M | ~$0.15-$0.325/M | $1.00/M |
| Context window | 200K (Yi-34B-200K) | 64K | 1M (Qwen3.6 Plus) | 200K |
| Vendor roadmap | None | Active | Active | Active |
| Best viewed as | Legacy research baseline | Cheap capable API | Open-weight multilingual | Open-weight coding leader |
Failure modes
- No further pre-training. 01.AI confirmed in March 2025 that new Yi generations are not planned. The model family will never match current frontier benchmarks.
- Yi-Lightning API availability is uncertain. Direct 01.AI hosted access post-pivot is ambiguous. Third-party gateways remain the reliable path, but per-provider SLAs vary.
- Yi License restrictions on Yi-34B. Commercial deployment above certain revenue thresholds requires separate agreement. Apache 2.0 only covers the Yi-Coder variants.
- Community support declining. GitHub activity on the Yi repo has dropped following the pivot. Fewer third-party tutorials, fewer active contributors.
- Enterprise path flows through DeepSeek. 01.AI’s WanZhi platform sells DeepSeek-backed agents. Customers buying from 01.AI are effectively buying DeepSeek with 01.AI’s consulting layer.
- Benchmark drift. Yi-Lightning’s late-2024 Chatbot Arena ranking does not reflect 2026 frontier scores. DeepSeek V3, Qwen3, and GLM-5.1 have all surpassed it.
- Same Chinese-provider data residency posture as active vendors, but without an active roadmap to justify the tradeoff.
Methodology
This page was produced by the aipedia.wiki editorial pipeline, an automated system that ingests vendor documentation, verifies pricing and model details against primary sources, and generates the editorial analysis you are reading. No individual human wrote this review. Scoring follows the four-dimension rubric at /about/scoring/ (Utility, Value, Moat, Longevity; unweighted average). Longevity is scored low to reflect the frozen-model status. Last verified 2026-04-17 against 01.AI Wikipedia, SCMP interview with Kai-Fu Lee, TMTPost WanZhi coverage, and Hugging Face 01-ai.
FAQ
Is 01.AI still developing Yi? No. In March 2025, 01.AI stopped pre-training large language models and pivoted to selling enterprise AI solutions built on DeepSeek’s models. Existing Yi weights remain on Hugging Face. No Yi-2 is planned.
Is Yi open source? Mixed. Yi-34B uses a custom Yi License that permits commercial use up to certain revenue thresholds. Yi-Coder-9B and Yi-Coder-1.5B ship under Apache 2.0 with no commercial restrictions.
How does Yi-Lightning compare in 2026? At its October 2024 launch, Yi-Lightning ranked joint third on Chatbot Arena at ~$0.14/M tokens. Since then, DeepSeek V3 and similar models have passed it on both quality and price. Yi-Lightning remains accessible through third-party providers but is no longer a frontier model.
Should I start a new project on Yi? Only if you specifically need a frozen-weight research baseline or already have Yi-Coder integrated. For new production work, DeepSeek, Qwen, and GLM offer active roadmaps at similar or better price points.
What is WanZhi? 01.AI’s enterprise AI platform, now built on DeepSeek rather than Yi. It serves finance, video gaming, and legal customers with custom agent deployments. Buyers get DeepSeek capability with 01.AI’s consulting and integration layer.
Sources
- 01.AI Wikipedia: pivot timeline and current strategy
- SCMP: Kai-Fu Lee bets on DeepSeek: strategic reversal interview
- TMTPost: 01.AI WanZhi platform: DeepSeek-based enterprise pivot
- TechNode: Kai-Fu Lee predicts China AI endgame: public stance on pre-training economics
- Hugging Face 01-ai: current weight availability
- Yi open-source GitHub: repo activity and community status
- Yi: Open Foundation Models paper: original model architecture and training
Related
- Category: AI Research
Embed this score on your site. Free; links back.
<a href="https://aipedia.wiki/tools/yi/" target="_blank" rel="noopener"><img src="https://aipedia.wiki/badges/yi.svg" alt="Yi (01.AI) on aipedia.wiki" width="260" height="72" /></a> Badge value auto-updates if the editorial score changes. Attribution via the link is required.
Cite this page
For journalists, researchers, and bloggers

According to aipedia.wiki Editorial at aipedia.wiki (https://aipedia.wiki/tools/yi/)

APA: aipedia.wiki Editorial. (2026). Yi (01.AI) — Editorial Review. aipedia.wiki. Retrieved May 8, 2026, from https://aipedia.wiki/tools/yi/

MLA: aipedia.wiki Editorial. "Yi (01.AI) — Editorial Review." aipedia.wiki, 2026, https://aipedia.wiki/tools/yi/. Accessed May 8, 2026.

Chicago: aipedia.wiki Editorial. 2026. "Yi (01.AI) — Editorial Review." aipedia.wiki. https://aipedia.wiki/tools/yi/.

BibTeX:
@misc{yi-01-ai-editorial-review-2026,
author = {{aipedia.wiki Editorial}},
title = {Yi (01.AI) — Editorial Review},
year = {2026},
publisher = {aipedia.wiki},
url = {https://aipedia.wiki/tools/yi/},
note = {Accessed: 2026-05-08}
}

Spotted an error or want to share your experience with Yi (01.AI)?
Every tool page is re-verified on a recurring cycle, and corrections land faster when readers flag them directly. If you spot a stale fact, a missing capability, or have used Yi (01.AI) and want to share what worked or didn't, the editorial desk reviews every message sent through this form.
Email editorial@aipedia.wiki