
GLM-4.5 - Unifying agentic capabilities in one open model
GLM-4.5 is a new 355B parameter open-weight MoE model (32B active). It delivers state-of-the-art performance on reasoning, code, and agentic tasks. Both the 355B flagship and a 106B Air version are now available, featuring dual-mode inference.
Replies
Z.ai
Today, we introduce two new GLM family members: GLM-4.5 and GLM-4.5-Air — our latest flagship models.
GLM-4.5 is built with 355 billion total parameters and 32 billion active parameters, and GLM-4.5-Air with 106 billion total parameters and 12 billion active parameters. Both are designed to unify reasoning, coding, and agentic capabilities in a single model to meet the increasingly complex requirements of fast-growing agentic applications.
We're proud to contribute GLM-4.5 to the open-source community and can't wait to see what developers build with it. Looking forward to your feedback!
@zixuan_lii How well does it scale with multi-agent setups?
Any memory limits users should watch for?
Z.ai
@masump The context window is 128K. For specialized scenarios, we use a single-agent solution where the GLM-4.5 model continuously thinks and calls tools, using the results as context for its subsequent actions.
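A minimal sketch of the single-agent loop described in this reply, assuming an OpenAI-compatible chat endpoint with function calling. The base URL, the model name "glm-4.5", and the "search" tool are placeholders for illustration, not the official API.

```python
# Sketch of a think -> call tool -> observe loop; endpoint, model name,
# and the "search" tool are placeholders, not the official API.
import json
from openai import OpenAI

client = OpenAI(base_url="https://api.z.ai/api/paas/v4", api_key="YOUR_API_KEY")  # assumed base URL

tools = [{
    "type": "function",
    "function": {
        "name": "search",  # hypothetical tool for illustration
        "description": "Search the web and return a short text summary.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

def run_search(query: str) -> str:
    """Stand-in tool implementation; replace with a real search call."""
    return f"(stub) top results for: {query}"

messages = [{"role": "user", "content": "Find a recent agentic-AI paper and summarize it."}]

while True:
    reply = client.chat.completions.create(model="glm-4.5", messages=messages, tools=tools)
    msg = reply.choices[0].message
    messages.append(msg)                # keep the assistant turn as context
    if not msg.tool_calls:              # no tool request: the agent is done
        print(msg.content)
        break
    for call in msg.tool_calls:         # execute each requested tool call
        args = json.loads(call.function.arguments)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": run_search(**args),
        })
```

The loop stops as soon as the model answers without requesting a tool, which matches the "results as context for subsequent actions" behavior described above.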
@zixuan_lii Impressive work, Zixuan! Excited to see GLM-4.5 and 4.5-Air pushing boundaries in unified reasoning, coding, and agentic performance. Open-sourcing such a powerful model is a huge step forward for the AI community—looking forward to exploring its capabilities!
Eternal AI
Congrats on the launch! GLM-4.5 could be a major step forward in unifying reasoning, coding, and agentic capabilities. Open-source models are the future.
Z.ai
@sig_eternal Thanks Sig! Open Source is all we need. 🙌
For the past few years, the GLM model series from the Zhipu team (now Z.ai) has been one of the favorites in the open-source community, often as a great, affordable option. With GLM-4.5, however, it's clear they're now aiming for the top.
So, long story short: imagine a model with agentic coding capabilities comparable to Claude 4, but 2-3x faster, 10x cheaper (based on official API prices), and completely open-source.
This. is. simply wild. 🫡
Haha "keep gravity weak so the flappy bird game is easy". How good is performance on M-series macs (if this is possible at all)?
Z.ai
@steve_androulakis GLM-4.5 is compatible with MLX. You can find more test cases on X (we recommend @ivanfioravanti).
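For those asking about Apple Silicon, here is a minimal sketch using the mlx-lm package; the quantized checkpoint id "mlx-community/GLM-4.5-Air-4bit" is an assumption, so substitute whichever MLX conversion of GLM-4.5 is actually published.

```python
# Sketch of running a GLM-4.5 variant with mlx-lm on an M-series Mac;
# the checkpoint id below is an assumption.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/GLM-4.5-Air-4bit")  # assumed model id

messages = [{"role": "user", "content": "Write a tiny Flappy Bird clone in Python."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# verbose=True prints tokens-per-second, handy for judging M-series performance
response = generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True)
print(response)
```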
Elisi : AI-powered Goal Management App
Love that you’re pushing open innovation at this scale. Big step for agentic AI devs — already thinking of ways to build with GLM-4.5. Appreciate the contribution!
Z.ai
@venkateshiyer Truly appreciate your comment. We're honored to have users like you! Feel free to join our Discord community for interactions with other GLMers!
I’m interested in how GLM-4.5 balances reasoning quality and efficiency, especially with the MoE setup. The 32B active-parameter design seems like a smart trade-off for keeping inference costs manageable.
Z.ai
@santosh__kumar9 With its MoE architecture, GLM-4.5 uses 32B active parameters (from 355B total) for efficient inference, while a depth-focused design (more layers, extra attention heads) enhances reasoning. Its hybrid modes (thinking for complex tasks, non-thinking for speed) further optimize this balance.
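A hedged sketch of switching between the hybrid modes mentioned above, assuming an OpenAI-compatible client. The base URL and the exact name of the thinking toggle passed through extra_body are assumptions; check the official API reference for the real parameter.

```python
# Sketch of toggling hybrid inference modes via an OpenAI-compatible client;
# the base URL and the "thinking" field are assumptions, not confirmed API.
from openai import OpenAI

client = OpenAI(base_url="https://api.z.ai/api/paas/v4", api_key="YOUR_API_KEY")  # assumed base URL

# Thinking mode: spend extra reasoning tokens on a hard task.
deep = client.chat.completions.create(
    model="glm-4.5",
    messages=[{"role": "user", "content": "Prove that the square root of 2 is irrational."}],
    extra_body={"thinking": {"type": "enabled"}},  # assumed field name
)

# Non-thinking mode: trade reasoning depth for latency on a simple task.
fast = client.chat.completions.create(
    model="glm-4.5",
    messages=[{"role": "user", "content": "Say hello in French."}],
    extra_body={"thinking": {"type": "disabled"}},  # assumed field name
)

print(deep.choices[0].message.content)
print(fast.choices[0].message.content)
```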
I am just enjoying working with z.ai. Though I did notice that it won't think beyond 5 minutes, so I have to manage the direction in the next prompt. But yes, I can say it's better than Gemini 2.5 Pro and Kimi.ai, and I am using these three together to get out of complicated situations.
I just tested it again with a complex prompt and it went beyond 10 minutes in reasoning :)
Started today at 9:22 PM; my Internet got disconnected, but I am testing again to see how well it can reply to this complex task.
Great work.
Z.ai
@devanshvarshney Shout out to Devansh! Feel free to join our Discord community for more info and interactions with other GLMers!
A strong model and open sourced, I can't wait to have a try. And how about its translation ability?
Z.ai
@guanghuadavid It’s a top-tier model. We’ve already integrated it with several well-known translation apps and tools. Feel free to test it via API on OpenRouter or directly on our web chat.
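A quick way to try the translation ability mentioned here is through OpenRouter's OpenAI-compatible API; the model slug "z-ai/glm-4.5" is an assumption, so confirm the exact identifier in the OpenRouter model list.

```python
# Sketch of a quick translation test through OpenRouter; the model slug
# is an assumption.
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="YOUR_OPENROUTER_KEY")

resp = client.chat.completions.create(
    model="z-ai/glm-4.5",  # assumed slug
    messages=[
        {"role": "system", "content": "You are a careful professional translator."},
        {"role": "user", "content": "Translate into English: 海内存知己，天涯若比邻。"},
    ],
)
print(resp.choices[0].message.content)
```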
@olliez1 Thanks, we will give it a try
BestPage.ai
Dang, a clean playground for testing MIT-licensed GLM models? That’s exactly what I needed for quick prototyping—no distractions, just pure model tinkering. How often will you update the models?
Z.ai
@joey_zhu_seopage_ai Our tech team is always on call for improvements. :)
I tested it on one: it can read a URL directly, analyze it, and offer better suggestions at the same time. Very good.
Z.ai
@yicheng_du_easychinese_ Thanks Yicheng.
Loving the minimalist vibe. Z.ai feels like a refreshing take on model exploration. With an MIT license and GLM variants like Rumination and Reasoning, it’s clear this is more than a demo.
Big kudos for keeping it open, lightweight, and free to dive into!
Z.ai
@vivek_sharma_25 Glad you like it! It means the world to us!
What a powerful, high-performance model that fully supports agent workflow building.
Whoa, GLM-4.5 MoE is absolutely coooooool! The way it handles complex tasks with that mixture-of-experts architecture is mind-blowing - oops, did I just spend three hours playing with it instead of working? Totally worth it though!
Z.ai definitely surprised me. The quality of the PowerPoints it outputs is so much better than manual work, and the coding tasks it handles are really smart! Love it.
@zixuan_lii Great product! But a little bit slow compared to other open AI models.
Congratulations on the launch! Open source is the optimal path to drive the era forward, enabling global mutual benefit.
GLM-4.5 is an impressive advancement in the open-source large language model landscape. Building on the strengths of its predecessors, it demonstrates notable improvements in reasoning, code generation, and multilingual understanding.