Zac Zuo

GLM-4.5 - Unifying agentic capabilities in one open model

GLM-4.5 is a new 355B parameter open-weight MoE model (32B active). It delivers state-of-the-art performance on reasoning, code, and agentic tasks. Both the 355B flagship and a 106B Air version are now available, featuring dual-mode inference.

Zixuan Li
Maker

Today, we introduce two new GLM family members: GLM-4.5 and GLM-4.5-Air — our latest flagship models.

GLM-4.5 is built with 355 billion total parameters and 32 billion active parameters, and GLM-4.5-Air with 106 billion total and 12 billion active. Both are designed to unify reasoning, coding, and agentic capabilities into a single model, meeting the increasingly complex requirements of fast-growing agentic applications.

We're proud to contribute GLM-4.5 to the open-source community and can't wait to see what developers build with it. Looking forward to your feedback!

Sig Eternal

Congrats on the launch! GLM-4.5 could be a major step forward in unifying reasoning, coding, and agentic capabilities. Open-source models are the future.

Oliver Zenn

@sig_eternal Thanks Sig! Open Source is all we need. 🙌

Zac Zuo
Hunter

For the past few years, the GLM model series from the Zhipu team (now Z.ai) has been one of the favorites in the open-source community, often as a great, affordable option. With GLM-4.5, however, it's clear they're now aiming for the top.

So, long story short: imagine a model with agentic coding capabilities on par with Claude 4, but 2-3x faster, 10x cheaper (based on official API prices), and completely open-source.

This. is. simply wild. 🫡

Devansh Varshney

I am really enjoying working with z.ai. Though I did notice that it won't think beyond 5 minutes, so I have to manage the direction in the next prompt. Still, I can say it's better than Gemini 2.5 Pro and Kimi.ai, and I use all three together to get out of complicated situations.

Devansh Varshney

I just tested it again with a complex prompt and it went beyond 10 minutes in reasoning :)

It started today at 9:22 PM; my internet disconnected, but I'm testing again to see how well it can reply to this complex task.

Great work.

Oliver Zenn

@devanshvarshney Shout out to Devansh! Feel free to join our Discord community for more info and interactions with other GLMers!

Devansh Varshney

@olliez1 I just joined the Discord server, thanks. And not just GLM-4.5: I've pushed the Gemini 2.5 Pro CLI to its extreme, found issues, and tried to solve them, but I'm also stuck on BASIC IDE work for LibreOffice. If there's something for me, I would love to work with you guys <3

Santosh Kumar

I'm interested in how GLM-4.5 balances reasoning quality and efficiency, especially with the MoE setup. The 32B active-parameter design seems like a smart trade-off for keeping inference costs manageable.

Oliver Zenn

@santosh__kumar9 With its MoE architecture, GLM-4.5 uses 32B active parameters (from 355B total) for efficient inference, while a depth-focused design (more layers, extra attention heads) enhances reasoning. Its hybrid modes (thinking for complex tasks, non-thinking for speed) further optimize this balance.
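The efficiency argument can be made concrete with a quick back-of-the-envelope calculation: in an MoE transformer, each token is routed to only a few experts, so per-token inference compute scales roughly with the active parameter count rather than the total. A minimal sketch using the figures from the launch post:

```python
# Rough per-token cost intuition for MoE models: compute scales with
# *active* parameters, not total parameters, because each token only
# passes through a small subset of experts.

def active_fraction(total_b: float, active_b: float) -> float:
    """Fraction of parameters exercised per forward pass (inputs in billions)."""
    return active_b / total_b

glm_45 = active_fraction(355, 32)      # GLM-4.5: 32B active of 355B total
glm_45_air = active_fraction(106, 12)  # GLM-4.5-Air: 12B active of 106B total

print(f"GLM-4.5 activates ~{glm_45:.0%} of its parameters per token")
print(f"GLM-4.5-Air activates ~{glm_45_air:.0%} of its parameters per token")
```

So both models run each token through roughly a tenth of their weights, which is why a 355B model can serve at something closer to 32B-dense cost (memory footprint for the full weights still applies).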

Steve Androulakis

Haha, "keep gravity weak so the flappy bird game is easy". How good is performance on M-series Macs (if this is possible at all)?

Oliver Zenn

@steve_androulakis GLM-4.5 is compatible with MLX. You can find more test cases on X. (recommending @ivanfioravanti)

Guanghua(David)

A strong model, and open-sourced. I can't wait to give it a try. How is its translation ability?

Oliver Zenn

@guanghuadavid It’s a top-tier model. We’ve already integrated it with several well-known translation apps and tools. Feel free to test it via API on OpenRouter or directly on our web chat.
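For anyone who wants to try the API route mentioned above, here is a minimal sketch of what a request through an OpenAI-compatible gateway such as OpenRouter might look like. The model id, endpoint path, and the `reasoning` toggle for the hybrid thinking mode are illustrative assumptions; check the provider's model list and docs before using them.

```python
# Sketch: build a chat-completion request for GLM-4.5 via an
# OpenAI-compatible gateway. Model id, endpoint, and the "reasoning"
# field are assumptions for illustration, not confirmed parameters.
import json
import urllib.request

def build_request(prompt: str, thinking: bool) -> urllib.request.Request:
    payload = {
        "model": "z-ai/glm-4.5",  # assumed OpenRouter-style model id
        "messages": [{"role": "user", "content": prompt}],
        # Hybrid inference: assumed toggle between the deep "thinking"
        # mode and the fast non-thinking mode.
        "reasoning": {"enabled": thinking},
    }
    return urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer <YOUR_API_KEY>",  # placeholder
        },
    )

req = build_request("Translate 'open weights' into French.", thinking=False)
```

Sending the request (e.g. with `urllib.request.urlopen`) requires a real API key; the sketch stops at building the payload so the shape of the call is clear.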

Guanghua(David)

@olliez1 Thanks, we will give it a try

Joey Judd

Dang, a clean playground for testing MIT-licensed GLM models? That’s exactly what I needed for quick prototyping—no distractions, just pure model tinkering. How often will you update the models?

Oliver Zenn

@joey_zhu_seopage_ai Our tech team is always on call for improvements. :)

Venkatesh Iyer

Love that you’re pushing open innovation at this scale. Big step for agentic AI devs — already thinking of ways to build with GLM-4.5. Appreciate the contribution!

Oliver Zenn

@venkateshiyer Truly appreciate your comment. We're honored to have users like you! Feel free to join our Discord community for interactions with other GLMers!

Yicheng Du (EasyChinese)

I ran a test: it can read a URL directly, analyze it, and offer better suggestions at the same time. Very good.

Oliver Zenn

@yicheng_du_easychinese_ Thanks Yicheng.

vivek sharma

Loving the minimalist vibe. Z AI feels like a refreshing take on model exploration. With an MIT license and GLM variants like Rumination and Reasoning, it's clear this is more than a demo.

Big kudos for keeping it open, lightweight, and free to dive into!

Oliver Zenn

@vivek_sharma_25 Glad you like it! It means the world to us!

Simon Zhang

A powerful, high-performance model that fully supports agent workflow building.

Gin Tse

Love that you can just jump in and try these high-perf GLM models for free—no setup, super smooth UI. Ngl, that's the way to get ppl hooked!

Tony Stark

Whoa, GLM-4.5 MoE is absolutely coooooool! The way it handles complex tasks with that mixture-of-experts architecture is mind-blowing - oops, did I just spend three hours playing with it instead of working? Totally worth it though!

yun gong

Your product is truly outstanding—innovative, user-friendly, and built with exceptional quality. It sets a new standard in its category and reflects a deep understanding of customer needs.
Jessin Sam S

@zixuan_lii Great product! But a little bit slow compared to other open models.

Gin Tse

Congratulations on the launch! Open source is the optimal path to drive the era forward, enabling global mutual benefit.

AIlidh

GLM-4.5 is an impressive advancement in the open-source large language model landscape. Building on the strengths of its predecessors, it demonstrates notable improvements in reasoning, code generation, and multilingual understanding.