Lora is a local LLM designed for Flutter. It delivers GPT-4o-mini-level performance and is built for seamless integration—call it with just one line of code.
👋 Hello, everyone!
We're excited to introduce our new product, “Lora for Flutter”.
🔥 What is Lora?
Lora is an on-device LLM that comes with an SDK for integrating it into your Flutter-based app.
🔎 Key Features of Lora
- LLM: on-device LLM with GPT-4o-mini-level performance
- SDK: integrates seamlessly with just one line of code in Flutter
- Price: $99/month, with unlimited tokens
We'd love your feedback to make Lora a “Wow!” product.
Questions or suggestions? DM me anytime. Thank you!
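For anyone curious what that one-line integration could look like in practice, here is a minimal, purely hypothetical Dart sketch; the import path and the `Lora.generate` call are illustrative assumptions, not the SDK's published API:

```dart
// Hypothetical sketch only: the import path and Lora.generate are assumed
// names for illustration, not the actual Lora SDK interface.
// import 'package:lora_flutter/lora_flutter.dart';

// Stand-in so the sketch runs on its own; the real SDK would provide this.
class Lora {
  static Future<String> generate(String prompt) async =>
      'on-device answer to: $prompt';
}

Future<void> main() async {
  // The advertised "one line": prompt in, completion out, no network round trip.
  print(await Lora.generate('Summarize my meeting notes'));
}
```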
@seungwhan It's really cool that we're getting more local tools like this. Are you guys planning to add a feature to select other models? I'd love to try it with my local R1s. Congrats on the launch! 🚀
@manuhortet Thanks a lot, Manu. Do you mean you'd like to use our SDK with your own local R1? We hadn't thought of that, but we'll definitely consider the feature. Thanks for the request!
It looks very promising, but it would be great to have more technical information. Is it really local, or does it need internet access to make requests? How much space does it need? Is performance good on cheap devices?
@aleksedtech Hello! Thank you for stopping by. And yes, Lora DOES NOT NEED INTERNET ACCESS to make a request and get a response. About 1.5 GB is needed for the model, and it shows impressive performance if the device has over 8 GB of RAM :)
@aleksedtech @woobeen_back But if it's local and doesn't require internet to work, how do you charge monthly? Is it a monthly license/key renewal kind of thing?
@aleksedtech @woobeen_back @vorniches Great question! 🤔 We've been putting a lot of thought into our revenue model. The fact that it runs locally is a major security advantage, but we're still exploring how much monitoring, and how deep, is appropriate while maintaining that strength. Balancing security and sustainability is definitely a challenge! 🔐💡
@hansol_nam You guys have never failed even once—always bringing something unique that meets the current market needs. Congratulations on the launch of your new product!
@waseem_panhwer Wow, that truly means a lot! 😊 Thank you for your kind words and constant support. We always strive to build something meaningful, and hearing this from you makes it all the more rewarding. Excited to keep pushing forward—really appreciate you being part of the journey! 🚀✨
@waseem_panhwer Wow, that means a lot! 🚀 We always strive to bring something fresh and valuable to the market, so hearing this really motivates us. Thanks for the support! 🙌😊
@seungwhan @michael_vavilov Thank you for your kind words about our product that we put so much thought and effort into :) I will definitely work on solving the excessive AI cost issue using Lora!
Congratulations on your launch again! It's so cool that Lora's performance is significantly better than average. Good luck with this launch!
@evakk Thank you so much! 🎉 Your support means a lot! We're really excited about Lora’s performance and how it’s pushing the boundaries of on-device AI. Appreciate the encouragement—let’s keep innovating! 💡
LoRA is an efficient and flexible fine-tuning technique, particularly suitable for environments with limited resources and a need to adapt quickly to new tasks. Although it may not perform as well as full-parameter fine-tuning on certain tasks, its efficiency and ease of use make it a powerful tool for fine-tuning large pretrained models.
LoRA is an efficient and flexible fine-tuning technique that is particularly suitable for resource-limited environments and scenarios where rapid adaptation to new tasks is required. Although it may not perform as well as full-parameter fine-tuning on some tasks, its high efficiency and ease of use make it a powerful tool for fine-tuning large pretrained models.
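Side note: the two comments above describe LoRA, the low-rank adaptation fine-tuning technique, which is a different thing from the Lora product in this launch. For anyone curious why that technique is parameter-efficient, here is a tiny self-contained Dart sketch; the shapes and numbers are illustrative only. The pretrained weight W stays frozen and only a low-rank update B*A is trained.

```dart
// Toy numeric sketch of LoRA the fine-tuning technique (not the Lora SDK):
// keep the pretrained weight W frozen and learn a low-rank update B*A,
// so the adapted weight is W + B*A (the alpha/r scaling factor is omitted).

List<List<double>> matMul(List<List<double>> x, List<List<double>> y) {
  final rows = x.length, inner = y.length, cols = y[0].length;
  return List.generate(
    rows,
    (i) => List.generate(
      cols,
      (j) => List.generate(inner, (t) => x[i][t] * y[t][j])
          .reduce((a, b) => a + b),
    ),
  );
}

void main() {
  // Why it's parameter-efficient: a full d x k update vs. a rank-r update.
  const d = 4096, k = 4096, r = 8;
  print('full update params: ${d * k}');       // 16777216
  print('LoRA update params: ${r * (d + k)}'); // 65536, about 0.4% of the above

  // Tiny concrete example: W (2x2) is frozen; only B (2x1) and A (1x2)
  // would be trained during fine-tuning.
  final w = [ [1.0, 0.0], [0.0, 1.0] ];
  final b = [ [0.5], [0.25] ];
  final a = [ [2.0, -1.0] ];
  final delta = matMul(b, a); // rank-1 update
  final adapted = List.generate(
      2, (i) => List.generate(2, (j) => w[i][j] + delta[i][j]));
  print(adapted); // [[2.0, -0.5], [0.5, 0.75]]
}
```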
This sounds like a fantastic tool for Flutter developers! An on-device LLM with GPT-4o-mini-level performance is impressive, especially with the added benefit of privacy and faster response times. Seamless integration with just one line of code is really a big thing for devs looking to enhance their apps without the usual complexity.
Congrats on the launch!
Best wishes and sending wins to the team :) @seungwhan
@whatshivamdo Thanks a lot, Shivam! Please try it and leave your feedback. It'd be really helpful for growing our product. I turned on notifications for your product and am looking forward to seeing it. Have a good day!
Love what you've built here! As a Flutter dev, I've been looking for a way to add LLM capabilities without the complexity of cloud services. That one-line integration is exactly what we need - nobody wants to spend days just setting up AI features.
Quick question though - how's the performance on lower-end devices? I'm working on an app targeting markets where users might not have the latest phones.
Really impressed by what you've achieved with local processing. The privacy angle is huge for my clients too. Keep crushing it! 🚀
@xi_z Really appreciate your thoughtful feedback! 🙌 We’ve put a lot of effort into making integration as seamless as possible while ensuring solid performance across various devices. 🌍⚡ We’re continuously optimizing for lower-end hardware, so stay tuned for even more improvements! Thanks for the support—let’s keep pushing the boundaries of local AI together! 😎
Have a successful launch and continue building out your roadmap for 2025 and beyond. Use all the sales and marketing strategies to build out your user base.
Replies
@artem_stenko Thanks a lot, Artem! Please try it and leave your feedback. It'll be really helpful for growing our product. Have a good one!
@justonedev Thank you so much for your valuable feedback! 😊
In addition to the model benchmarks, we’ve documented detailed usage separately, and we’ll work on incorporating it into our website in the future.
Lora is a fully local LLM that works even in airplane mode! ✈️
It looks like my teammate has already shared more details—feel free to check it out! 🚀
Prototype
@aleksedtech @woobeen_back @peekabooooo It doesn't make it clear :|
Congrats on another launch!
@mikita_aliaksandrovich Thanks a lot, Mikita. I'd love to see your product; I turned on notifications for yours. Wish you all the best!
@mikita_aliaksandrovich Thank you! Hope you like Lora :) And I can't wait to see your product!
@woobeen_back Thanks!
@mikita_aliaksandrovich Thank you so much for congratulating me :)
Ollie
@mikita_aliaksandrovich Super thanks :) Excited to see what you're building too! 🚀 Looking forward to it! 😊
@hansol_nam @waseem_panhwer Thanks for your kind comment, Muhammad! I'd love to provide a WOW product to the world. Please stay tuned!
@hansol_nam @waseem_panhwer Thank you for your sweet words :) Hope you like it!
$99 sounds reasonable! @seungwhan congrats!
@seungwhan @michael_vavilov Thank you! Lora will reduce development and operation costs a lot!
Thanks a lot, Michael! Please try it and leave your feedback. It'll be really helpful for growing our product. Have a good one!
@evakk Thanks a lot, Evak! Please try it and leave your feedback. It'd be really helpful for growing our product. Have a nice day!
@lle_crh Sure. Please try it and leave your feedback. It'd be really helpful for growing our product.
Ollie
@promise Really appreciate you recognizing the things I'm particular about (Just One Line)! lol Thank you so much! 😊
@lle_lile Sure. Please try it and leave your feedback. It'd be really helpful for growing our product.
@seungwhan @peekabooooo @hansol_nam @woobeen_back Congratulations on the launch. This is really taking LLM experiences and use cases to another level.
Ollie
@seungwhan @hansol_nam @woobeen_back @michael_talreja Thank you! 🚀 We're already working on taking it to an even higher level! Stay tuned and keep cheering us on! 🔥
Ollie
@seungwhan @whatshivamdo Thanks so much! 🙌 We're planning to support even more models in the future, so stay tuned and keep cheering us on! 🚀
As a Flutter developer, I'm amazed by how simple it is to integrate 👍
Ollie
@shenjun Just a "Single line of code", and it's ready to go! 🚀
This looks great! Love the easy integration with just one line of code.
Ollie
@marianna_tymchuk Exactly! My biggest focus is making integration "SUPER EASY". 🔥 Really appreciate you noticing that! 🙌
Great launch! On-device AI makes everything faster and better.
Ollie
@hanna_kuznietsova On-device AI makes everything faster and better WITH YOU 😘
Congrats! 🙌 AI-powered Flutter apps just got easier.
Ollie
@anton_diduh Just add one line of code and build your own LLM-powered AI service—seamless, fast, and private! 🚀
Love it! Simple, fast, and perfect for Flutter apps.
Ollie
@viktoriia_vasylchenko Just add one line of code and build your own LLM-powered AI service—seamless, fast, and private! ✨
Integration is a big deal in Flutter! Thanks for making this process so much easier! Wishing you good luck with the launch! 🎉
@kay_arkain Thanks a lot, Kay! Please try it and leave your feedback. It'd be helpful for growing our product.
WOW! Lora DOES NOT NEED INTERNET ACCESS to make requests!! It's very useful!!
@mahyar_hsh Sure. Please try it and leave your feedback. It'd be helpful for growing our product.
@onelocalfamily Thanks a lot! We'll keep launching whenever we build something new. Please stay tuned!