Zac Zuo

GLM-4.5 - Unifying agentic capabilities in one open model

GLM-4.5 is a new 355B parameter open-weight MoE model (32B active). It delivers state-of-the-art performance on reasoning, code, and agentic tasks. Both the 355B flagship and a 106B Air version are now available, featuring dual-mode inference.

Zixuan Li
Maker
📌

Today, we introduce two new GLM family members: GLM-4.5 and GLM-4.5-Air — our latest flagship models.

GLM-4.5 is built with 355 billion total parameters and 32 billion active parameters, and GLM-4.5-Air with 106 billion total parameters and 12 billion active parameters. Both are designed to unify reasoning, coding, and agentic capabilities in a single model, meeting the increasingly complex requirements of fast-growing agentic applications.

We're proud to contribute GLM-4.5 to the open-source community and can't wait to see what developers build with it. Looking forward to your feedback!

Masum Parvej

@zixuan_lii How well does it scale with multi-agent setups?

Are there any memory limits users should watch for?

Zixuan Li

@masump The context window is 128K. For specialized scenarios, we use a single-agent solution where the GLM-4.5 model continuously thinks and calls tools, using the results as context for its subsequent actions.
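For illustration, here is a minimal sketch of that single-agent loop, assuming an OpenAI-compatible chat endpoint; the base URL, model id, and the example tool below are placeholders rather than official values:

```python
# Minimal single-agent loop sketch: the model thinks, requests tools,
# and each tool result is fed back into its context before it acts again.
# base_url, model id, and the web_search tool are placeholders.
import json
from openai import OpenAI

client = OpenAI(base_url="https://api.example.com/v1", api_key="YOUR_KEY")  # placeholder endpoint

def web_search(query: str) -> str:
    """Placeholder tool; swap in a real search implementation."""
    return f"results for: {query}"

tools = [{
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web for a query.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

messages = [{"role": "user", "content": "Summarize today's MoE research news."}]

while True:
    resp = client.chat.completions.create(model="glm-4.5", messages=messages, tools=tools)
    msg = resp.choices[0].message
    messages.append(msg)  # keep the model's reply / tool request in context
    if not msg.tool_calls:
        print(msg.content)
        break
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        result = web_search(**args)  # only one tool in this sketch
        messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
```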

Sudarshan Nath

@zixuan_lii Impressive work, Zixuan! Excited to see GLM-4.5 and 4.5-Air pushing boundaries in unified reasoning, coding, and agentic performance. Open-sourcing such a powerful model is a huge step forward for the AI community—looking forward to exploring its capabilities!

Sig Eternal

Congrats on the launch! GLM-4.5 could be a major step forward in unifying reasoning, coding, and agentic capabilities. Open-source models are the future.

Oliver Zenn

@sig_eternal Thanks Sig! Open Source is all we need. 🙌

Zac Zuo
Hunter

For the past few years, the GLM model series from the Zhipu team (now Z.ai) has been one of the favorites in the open-source community, often as a great, affordable option. With GLM-4.5, however, it's clear they're now aiming for the top.

So, long story short: imagine a model with agentic coding capabilities comparable to Claude 4, but 2-3x faster, 10x cheaper (based on official API prices), and completely open-source.

This. is. simply wild. 🫡

Steve Androulakis

Haha "keep gravity weak so the flappy bird game is easy". How good is performance on M-series macs (if this is possible at all)?

Oliver Zenn

@steve_androulakis GLM-4.5 is compatible with MLX. You can find more test cases on X (I recommend following @ivanfioravanti).
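As a rough local-inference sketch with mlx-lm on an M-series Mac (the model path below is a placeholder; look for community MLX conversions, e.g. a quantized GLM-4.5-Air, that fit your RAM):

```python
# Rough sketch of local inference with mlx-lm on Apple Silicon.
# The repo id is a placeholder; pick an actual MLX conversion that fits your memory.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/GLM-4.5-Air-4bit")  # placeholder repo id

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Write a tiny flappy-bird game in Python."}],
    add_generation_prompt=True,
    tokenize=False,
)
print(generate(model, tokenizer, prompt=prompt, max_tokens=512))
```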

Venkatesh Iyer

Love that you’re pushing open innovation at this scale. Big step for agentic AI devs — already thinking of ways to build with GLM-4.5. Appreciate the contribution!

Oliver Zenn

@venkateshiyer Truly appreciate your comment. We're honored to have users like you! Feel free to join our Discord community for interactions with other GLMers!

Santosh Kumar

I’m interested in how GLM-4.5 balances reasoning quality and efficiency, especially with the MoE setup. The 32B active parameter design seems like a smart trade-off for keeping inference costs manageable.

Oliver Zenn

@santosh__kumar9 With its MoE architecture, GLM-4.5 uses 32B active parameters (from 355B total) for efficient inference, while a depth-focused design (more layers, extra attention heads) enhances reasoning. Its hybrid modes (thinking for complex tasks, non-thinking for speed) further optimize this balance.
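As a sketch of switching between the two modes through an OpenAI-compatible API (here OpenRouter): the `thinking` field and the model slug are assumptions, so check the provider docs for the exact parameter that controls hybrid reasoning:

```python
# Sketch of toggling thinking vs. non-thinking modes via an
# OpenAI-compatible endpoint. The extra_body "thinking" field and the
# model slug are assumptions; verify against the provider's docs.
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="YOUR_KEY")

def ask(prompt: str, thinking: bool) -> str:
    resp = client.chat.completions.create(
        model="z-ai/glm-4.5",  # assumed OpenRouter slug; verify before use
        messages=[{"role": "user", "content": prompt}],
        extra_body={"thinking": {"type": "enabled" if thinking else "disabled"}},
    )
    return resp.choices[0].message.content

print(ask("Prove that sqrt(2) is irrational.", thinking=True))  # complex task
print(ask("Translate 'hello' to French.", thinking=False))      # quick answer
```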

Devansh Varshney

I am really enjoying working with z.ai. Though I did notice that it won't think beyond 5 minutes, so I have to manage the direction in the next prompt. But yes, I can say it's better than Gemini 2.5 Pro and Kimi.ai, though I use all three together to get out of complicated situations.

Devansh Varshney

I just tested it again with a complex prompt and it went beyond 10 minutes in reasoning :)

It started at 9:22 PM today, then my Internet disconnected, but I am testing again to see how well it can reply to this complex task.

Great work.

Oliver Zenn

@devanshvarshney Shout out to Devansh! Feel free to join our Discord community for more info and interactions with other GLMers!

Guanghua(David)
Launching soon!

A strong model, and open-sourced. I can't wait to give it a try. How is its translation ability?

Oliver Zenn

@guanghuadavid It’s a top-tier model. We’ve already integrated it with several well-known translation apps and tools. Feel free to test it via API on OpenRouter or directly on our web chat.

Guanghua(David)
Launching soon!

@olliez1 Thanks, we will give it a try

Joey Judd

Dang, a clean playground for testing MIT-licensed GLM models? That’s exactly what I needed for quick prototyping—no distractions, just pure model tinkering. How often will you update the models?

Oliver Zenn

@joey_zhu_seopage_ai Our tech team is always on call for improvements. <:

Yicheng Du (EasyChinese)

I tested it: it can read a URL directly, analyze it, and give better suggestions at the same time. Very good.

Oliver Zenn

@yicheng_du_easychinese_ Thanks Yicheng.

vivek sharma

Loving the minimalist vibe Z AI feels like a refreshing take on model exploration. With an MIT license and GLM variants like Rumination and Reasoning, it’s clear this is more than a demo.

Big kudos for keeping it open, lightweight, and free to dive into!

Oliver Zenn

@vivek_sharma_25 Glad you like it! It means the world to us!

Simon Zhang

What a powerful, high-performance model that fully supports agent workflow building.

Tony Stark

Whoa, GLM-4.5 MoE is absolutely coooooool! The way it handles complex tasks with that mixture-of-experts architecture is mind-blowing - oops, did I just spend three hours playing with it instead of working? Totally worth it though!

Robin Devon Calandri

Z.ai definitely surprised me. The quality of PowerPoints it outputs is so much better than manual work, and the coding tasks it handles are really smart! Love it.

yun gong

Your GLM-4.5 is seriously impressive: next-level innovation, smooth and easy to use, with top-quality performance. It's a game-changer that clearly shows you get what customers need. Keep up the awesome work!

Jessin Sam S

@zixuan_lii Great product! But it's a little bit slow compared to other open AI models.

Gin Tse

Congratulations on the launch! Open source is the optimal path to drive the era forward, enabling global mutual benefit.

AIlidh

GLM-4.5 is an impressive advancement in the open-source large language model landscape. Building on the strengths of its predecessors, it demonstrates notable improvements in reasoning, code generation, and multilingual understanding.