Qwen 1.5 MoE
Highly efficient mixture-of-experts (MoE) model from Alibaba
5.0 • 1 review • 51 followers
Qwen1.5-MoE-A2.7B is a small mixture-of-experts (MoE) model with only 2.7 billion activated parameters, yet it matches the performance of state-of-the-art 7B models such as Mistral 7B and Qwen1.5-7B.
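As a rough sketch of how a model like this is typically loaded, the snippet below uses the Hugging Face transformers API; the checkpoint ID "Qwen/Qwen1.5-MoE-A2.7B-Chat" and the required transformers version are assumptions, not details confirmed on this page.

```python
# Minimal sketch: loading and prompting Qwen1.5-MoE-A2.7B with Hugging Face transformers.
# Assumptions: the checkpoint ID "Qwen/Qwen1.5-MoE-A2.7B-Chat" and a transformers
# release recent enough to include the Qwen MoE architecture.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-MoE-A2.7B-Chat"  # assumed Hugging Face model ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat prompt with the tokenizer's chat template.
messages = [{"role": "user", "content": "Explain what a mixture-of-experts model is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Only ~2.7B parameters are activated per token, so inference cost stays close
# to that of a small dense model despite the larger total parameter count.
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```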
Reviews
Salman Paracha used Qwen 1.5 MoE to build Arch (262 points):
"Highly performant base models that can be used for task-specific training, such as the function calling experience built into Arch."
10mo ago