Tool: Chatbots · Freemium · Active · Score below 8
Verified May 2026 · Editorial only, no paid placements

GLM (ChatGLM)

Active

Zhipu AI's open-source LLM family, with GLM-5.1 topping SWE-Bench Pro at 58.4% as of April 2026.

Best plan: Free (GLM-4.7-Flash) / API from $1.00/M tokens (GLM-5)
Pricing model: Free + paid plans
Best for: Agentic coding and SWE-bench-style tasks
Watch: Polished English consumer chat; check fit before switching
Pricing: Free (GLM-4.7-Flash) / API from $1.00/M tokens (GLM-5)
Launched: 2022

Decision badges Readiness signals
  • Active product
  • Free tier
  • Public repo listed
  • Verified this month
  • Quarterly review cycle
  • Niche or situational score
Fact ledger Verified fields
Company
Zhipu AI (Z.ai)
Category
Chatbots
Pricing model
Free tier
Price range
Free (GLM-4.7-Flash) / API from $1.00/M tokens (GLM-5)
Status
Active
Last verified
May 4, 2026
Flagship model: GLM-5 is the current public anchor in AIpedia source coverage, with model information available through Z.ai and Hugging Face. Source: GLM-5 on Hugging Face
Pricing anchor: API pricing lives on the Zhipu AI Open Platform and should be checked per model, because token pricing and the free Flash tiers can change. Source: Zhipu AI Open Platform pricing
API available: BigModel provides hosted API access and documentation for GLM-family models. Source: Zhipu AI Open Platform docs
Open source or local: Some GLM releases are available as model repositories, but production use still requires checking the specific model license and deployment constraints. Source: GLM-5 on Hugging Face
Best for: Teams comparing Chinese frontier/open-weight model options, especially when Z.ai/BigModel platform access and GLM open releases matter. Source: Z.ai official site
Watch out for: International teams should validate language coverage, API availability, compliance, and documentation fit before treating GLM as a drop-in OpenAI or Anthropic alternative. Source: Zhipu AI Open Platform docs
Change timeline What moved recently
  1. Verified
    Core pricing and product facts checked May 4, 2026 | Quarterly cadence
  2. Updated
    Editorial page changed May 4, 2026
Knowledge graph Adjacent context
Company Zhipu AI (Z.ai)
Category Chatbots
Best for
  • Agentic coding and SWE-bench-style tasks
  • Developers wanting open-weight frontier coding models
  • Bilingual Chinese-English professional workflows
  • Teams evaluating domestic-hardware-trained models
Not ideal for
  • Polished English consumer chat
  • Teams needing Western data residency on hosted API
  • Users prioritizing writing quality over coding

Zhipu AI’s open-source LLM family, developed in Beijing as a 2019 spinoff of Tsinghua University’s KEG Lab. The consumer product rebranded to Z.ai in early 2026. The current flagship is GLM-5.1, released open-source under MIT license on April 7, 2026, and trained entirely on Huawei Ascend 910B chips with no Nvidia hardware.

GLM-5.1 topped SWE-Bench Pro at 58.4% at launch. The proprietary GLM-5 (February 2026) scored 77.8% on SWE-bench Verified, ahead of Gemini 3 Pro at 76.2% and within reach of Claude Opus 4.6 at 80.9%.

System Verdict

Pick GLM if you need an open-weight coding model near the SWE-Bench frontier. GLM-5.1 is freely downloadable from Hugging Face under an MIT license, runs on self-hosted hardware or enterprise Ascend clusters, and matches or edges past Claude Opus 4.6 on specific coding benchmarks. OpenAI and Anthropic API compatibility makes it a drop-in backend for Cursor, Cline, and Continue.dev.

Skip it if you want a polished English consumer product or the cheapest API. Z.ai chat is functional but secondary to the Chinese-language experience. DeepSeek undercuts GLM-5 by 3-4x on pure API cost for non-coding chat. Zhipu raised prices 30% at GLM-5’s launch and another 10% at GLM-5.1, so cost predictability trails Qwen and DeepSeek.

Who uses which surface: Free Flash models for hobbyists and light tasks, GLM-5 API for production coding workloads, Coding Plan subscription (~$10 to $80/month) for IDE users, self-hosted GLM-5.1 weights for teams avoiding vendor lock-in.

Key Facts

Flagship model: GLM-5.1 (open-sourced April 7, 2026 under MIT)
Proprietary flagship: GLM-5 (released February 11, 2026)
Architecture: 744-754B MoE, ~40B active parameters per token
Context window: 200K tokens
Max output: 128K tokens per response
SWE-Bench Pro: 58.4% (GLM-5.1) · above the cited OpenAI frontier baseline at 57.7% and Claude Opus 4.6 at 57.3%
SWE-bench Verified: 77.8% (GLM-5)
Training hardware: Huawei Ascend 910B, MindSpore framework, zero Nvidia
API pricing: GLM-5 $1.00/M input · GLM-4.7 $0.60/M input
Free tier: GLM-4.7-Flash (203K context) and GLM-4.5-Flash, for registered users
License: MIT on GLM-5.1 weights

Every data point above was verified on 2026-04-17. See Sources.
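The context and output limits above imply a simple pre-flight check before dispatching a request. A minimal sketch, assuming a crude ~4-characters-per-token estimate (a real integration would count tokens with the model's actual tokenizer):

```python
# Rough request-budget check against GLM-5's published limits.
# The ~4 chars/token estimate is a crude approximation, not the real tokenizer.
CONTEXT_WINDOW = 200_000   # GLM-5 context window, tokens
MAX_OUTPUT = 128_000       # GLM-5 max output tokens per response

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def fits_budget(prompt: str, requested_output: int) -> bool:
    """True if the prompt plus the requested output fits the model's limits."""
    if requested_output > MAX_OUTPUT:
        return False
    return estimate_tokens(prompt) + requested_output <= CONTEXT_WINDOW

print(fits_budget("hello " * 1000, 4_000))   # small prompt: fits
print(fits_budget("x" * 1_000_000, 4_000))   # ~250K estimated tokens: does not fit
```

The same check applies to any model in the comparison table below; only the two constants change.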

What it actually is

A single LLM family covering two audiences: developers calling the API or self-hosting weights, and consumers chatting at z.ai or chatglm.cn. The model family is optimized for agentic engineering and long-horizon coding, not generalist chat.

GLM-5.1 ships as open weights under MIT on Hugging Face. That combination, MIT license plus frontier SWE-Bench scores, is rare. Most labs at this benchmark tier keep weights closed.
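Open weights make self-hosting possible, but a ~750B-parameter MoE is still heavy to serve: routing cuts per-token compute (~40B active parameters), not storage, since every expert must stay resident. A back-of-the-envelope sketch, using the parameter count from the Key Facts above (precision choices are illustrative):

```python
# Rough memory footprint for hosting the full GLM-5.1 weight set.
# MoE reduces per-token compute, not storage: all experts must be loaded.
TOTAL_PARAMS = 750e9  # midpoint of the 744-754B range cited above

def weight_memory_gb(params: float, bytes_per_param: float) -> float:
    """Weight storage in GB at a given numeric precision."""
    return params * bytes_per_param / 1e9

for label, bpp in [("FP16/BF16", 2.0), ("FP8/INT8", 1.0), ("INT4", 0.5)]:
    print(f"{label}: ~{weight_memory_gb(TOTAL_PARAMS, bpp):.0f} GB for weights alone")
```

Even aggressively quantized, that is multi-GPU territory, which is why the "self-hosted" path below is framed as a team decision rather than a laptop one.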

The real moats are the open-weight coding lead and the Huawei Ascend training stack. GLM is the first Chinese frontier model trained without Nvidia, which matters for customers evaluating supply-chain independence. The consumer chat product is secondary to that positioning.

When to pick GLM

  • Agentic coding workloads. GLM-5.1’s 58.4% SWE-Bench Pro score is the current open-weight leader, edging past closed frontier models on that specific benchmark.
  • Cursor, Cline, or Continue.dev backend swap. GLM supports OpenAI-compatible and Anthropic-compatible API formats. Drop-in replacement against paid Claude or GPT endpoints.
  • Self-hosted frontier coding. MIT-licensed weights permit local deployment, fine-tuning, and commercial use without licensing fees.
  • Huawei Ascend infrastructure. Enterprises on domestic Chinese compute get a native-stack option that does not require Nvidia.
  • Bilingual Chinese-English technical work. Legal, finance, and engineering teams operating across both languages get trained-in-parallel fluency.

When to pick something else

  • Cheapest API for general chat: DeepSeek at ~$0.28/M input is 3-4x cheaper than GLM-5.
  • Polished English writing: Claude Opus 4.7 or ChatGPT. GLM’s English is functional, not best-in-class.
  • Broadest open-weight coverage: Qwen. Apache 2.0 across more sizes, wider language coverage, more active monthly releases.
  • Google Workspace integration: Gemini. GLM has no Workspace hooks.
  • Consumer-grade product polish: ChatGPT or Claude. Z.ai chat is developer-adjacent, not consumer-first.

Pricing

Subscription pricing via Zhipu AI Open Platform and bigmodel.cn pricing.

Plan / Model | Price | Notes
GLM-4.7-Flash | Free | 203K context, registered users, lightweight tasks
GLM-4.5-Flash | Free | General-purpose lightweight
GLM-5 API | $1.00/M input | Flagship; 30% increase at Feb 2026 launch, +10% on GLM-5.1
GLM-4.7 API | $0.60/M input | Previous flagship, still available
Coding Plan Lite | ~$10/month | Entry IDE access; Q2 2026 discount to $27/quarter
Coding Plan Pro | ~$30/month | GLM-5 access, higher limits; $81/quarter discounted
Coding Plan Max | ~$80/month | Highest limits; $216/quarter discounted

Prices verified 2026-04-17 via Zhipu AI Open Platform pricing and Vibecoding pricing breakdown. Zhipu raised GLM prices 30% in February 2026 and another 10% on the GLM-5.1 release.
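For budgeting, the per-million rates above translate directly into monthly spend, and the two successive hikes compound multiplicatively. A hedged sketch using only the input-token rates quoted on this page (output-token pricing, caching discounts, and tier details are omitted):

```python
# Monthly input-token cost at the per-million rates quoted above.
RATES_PER_M = {
    "GLM-5": 1.00,      # $/M input tokens
    "GLM-4.7": 0.60,
    "DeepSeek": 0.28,   # approximate rate cited on this page
}

def monthly_cost(tokens_per_month: float, rate_per_m: float) -> float:
    """Dollar cost for a given monthly input-token volume."""
    return tokens_per_month / 1e6 * rate_per_m

tokens = 500e6  # example workload: 500M input tokens per month
for model, rate in RATES_PER_M.items():
    print(f"{model}: ${monthly_cost(tokens, rate):.2f}/month")

# The 30% hike at GLM-5 and 10% hike at GLM-5.1 compound to ~1.43x:
print(f"cumulative increase: {1.30 * 1.10:.2f}x")
```

At that example volume, the GLM-5 vs DeepSeek gap is $500 vs $140 per month, which is the "3-4x" value spread referenced throughout this review.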

Against the alternatives

Model | GLM-5.1 | Claude Opus 4.7 | DeepSeek V3 | Qwen3.6 Plus
Open weights | MIT | Closed | Partial (V3 open) | Apache 2.0
SWE-Bench Pro | 58.4% | 57.3% (Opus 4.6) | Strong reasoning | Strong coding
API input price | $1.00/M | $5.00/M | ~$0.28/M | ~$0.325/M
Context window | 200K | 1M | 64K | 1M
Training hardware | Huawei Ascend | Nvidia | Nvidia | Nvidia + Alibaba silicon
English polish | Functional | Strongest | Moderate | Moderate
Best viewed as | Open-weight coding leader | Reasoning specialist | Cheap capable API | Open-weight multilingual

Failure modes

  • Price increases at each major release. Zhipu raised API rates 30% at GLM-5 and 10% at GLM-5.1. Cost predictability is lower than at DeepSeek or Qwen.
  • Z.ai consumer UX lags. The chat interface is functional in English but built for a Chinese-first audience. Menu labels, error messages, and onboarding trail ChatGPT or Claude.
  • Thin competitive moat. SWE-Bench leaderboard positions shift monthly. DeepSeek, Qwen, and Kimi are all active challengers with comparable release cadence.
  • API cost undercut by DeepSeek. At $1/M input, GLM-5 is ~3.5x DeepSeek’s rate. The benchmark advantage is coding-specific; on general chat, DeepSeek wins on value.
  • Huawei Ascend training is a signal, not a benefit. For most Western developers, it does not change the product. For Chinese enterprises it matters materially.
  • Third-party tutorials are thinner. Smaller global developer community than ChatGPT or Claude. Fewer Stack Overflow threads, fewer YouTube walkthroughs.
  • Hosted API runs on Chinese infrastructure. Western teams with strict data residency rules cannot use bigmodel.cn. Self-hosted weights are the workaround.

Methodology

This page was produced by the aipedia.wiki editorial pipeline, an automated system that ingests vendor documentation, verifies pricing and model details against primary sources, and generates the editorial analysis you are reading. No individual human wrote this review. Scoring follows the four-dimension rubric at /about/scoring/ (Utility, Value, Moat, Longevity; unweighted average). Last verified 2026-04-17 against GLM-5.1 on Hugging Face, Zhipu AI Open Platform pricing, Modem Guides GLM-5.1 benchmark coverage, and TrendForce price-hike coverage.

FAQ

Is GLM free to use? Partially. GLM-4.7-Flash and GLM-4.5-Flash are free for registered users on the Zhipu AI platform. GLM-5 and GLM-5.1, the SWE-Bench leaders, require either a paid API ($1/M input) or a Coding Plan subscription starting at ~$10/month.

What changed between GLM-5 and GLM-5.1? GLM-5 launched February 11, 2026 as a proprietary API. GLM-5.1 shipped April 7, 2026 with a 10% price increase and full open-source release under MIT license. GLM-5.1 also took the SWE-Bench Pro lead at 58.4%.

Can I use GLM with Cursor or VS Code? Yes. GLM exposes OpenAI-compatible and Anthropic-compatible API formats. Set a custom endpoint in Cursor, Cline, Continue.dev, or any editor that accepts a custom OpenAI endpoint, using your Zhipu API key.
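Because the endpoints advertise OpenAI-compatible request formats, wiring an editor or script to GLM mostly means pointing a standard chat-completions payload at Zhipu's base URL. A sketch that builds such a payload locally; the base URL and model id here are placeholders, so check the Zhipu AI Open Platform docs for the current values:

```python
# Build an OpenAI-style chat-completions request body for a GLM endpoint.
# BASE_URL and MODEL_ID are placeholders; verify both against Zhipu's docs.
import json

BASE_URL = "https://example-glm-endpoint/v1"  # placeholder, not the real endpoint
MODEL_ID = "glm-5"                            # placeholder model id

def build_chat_request(prompt: str, max_tokens: int = 1024) -> dict:
    """OpenAI-compatible chat.completions payload."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

body = build_chat_request("Write a binary search in Python.")
print(json.dumps(body, indent=2))
# POST this to f"{BASE_URL}/chat/completions" with your Zhipu API key in the
# Authorization header, exactly as you would against an OpenAI endpoint.
```

Editors like Cursor or Continue.dev do the same thing internally when you set a custom OpenAI-compatible endpoint and key.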

What does “trained on Huawei Ascend” mean in practice? GLM-5 and GLM-5.1 were trained end-to-end on Huawei Ascend 910B chips with the MindSpore framework, with no Nvidia hardware. For most users the output is indistinguishable. For Chinese enterprises and compliance buyers, it is a supply-chain independence signal.

Is GLM faster or slower than Claude or GPT? It depends; inference latency varies with regional load, so measure it against your own workload rather than relying on published numbers.
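A generic timing sketch for that measurement; `fake_request` is a stand-in, so substitute your actual API call to whichever provider you are comparing:

```python
# Time repeated calls to any request function and report simple stats.
# `fake_request` is a stand-in; replace it with your real GLM/Claude/GPT call.
import time
import statistics

def time_calls(fn, n: int = 5) -> dict:
    """Run fn() n times and return median and worst wall-clock duration."""
    durations = []
    for _ in range(n):
        start = time.perf_counter()
        fn()
        durations.append(time.perf_counter() - start)
    return {"median_s": statistics.median(durations),
            "worst_s": max(durations)}

def fake_request():
    time.sleep(0.01)  # stand-in for a network round trip

print(time_calls(fake_request))
```

Median is usually the fairer comparison across providers; worst-case matters more for interactive IDE use.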

Sources

Embed this score on your site (free; links back):
GLM (ChatGLM) editorial score badge
<a href="https://aipedia.wiki/tools/glm/" target="_blank" rel="noopener"><img src="https://aipedia.wiki/badges/glm.svg" alt="GLM (ChatGLM) on aipedia.wiki" width="260" height="72" /></a>
[![GLM (ChatGLM) on aipedia.wiki](https://aipedia.wiki/badges/glm.svg)](https://aipedia.wiki/tools/glm/)

Badge value auto-updates if the editorial score changes. Attribution via the link is required.

Cite this page For journalists, researchers, and bloggers
According to aipedia.wiki Editorial at aipedia.wiki (https://aipedia.wiki/tools/glm/)
aipedia.wiki Editorial. (2026). GLM (ChatGLM) — Editorial Review. aipedia.wiki. Retrieved May 8, 2026, from https://aipedia.wiki/tools/glm/
aipedia.wiki Editorial. "GLM (ChatGLM) — Editorial Review." aipedia.wiki, 2026, https://aipedia.wiki/tools/glm/. Accessed May 8, 2026.
aipedia.wiki Editorial. 2026. "GLM (ChatGLM) — Editorial Review." aipedia.wiki. https://aipedia.wiki/tools/glm/.
@misc{glm-chatglm-editorial-review-2026, author = {{aipedia.wiki Editorial}}, title = {GLM (ChatGLM) — Editorial Review}, year = {2026}, publisher = {aipedia.wiki}, url = {https://aipedia.wiki/tools/glm/}, note = {Accessed: 2026-05-08} }
Spotted an error or want to share your experience with GLM (ChatGLM)?

Every tool page is re-verified on a recurring cycle, and corrections land faster when readers flag them directly. If you spot a stale fact, a missing capability, or have used GLM (ChatGLM) and want to share what worked or didn't, the editorial desk reviews every message sent through this form.

Email editorial@aipedia.wiki to report outdated info and help keep this page accurate.