I was on the lookout for a platform that covers the widest range of prompt stylability testing functionality. The first one I came across was GenumLab. The product recently entered beta and has already surprised me! It's still raw and, as I understand it, in active development, but you can already see its great potential! The platform positions itself as PromptFinOps. After that I came across PromptForge. Judging by the feature descriptions, they are about as similar as can be. What is the difference between them? I would like to know the advantages and disadvantages of these platforms.
Olvy
@insaanimanav nice work. Good luck.
Smart name tag.
Olvy
@django_vinci Haha thanks
@insaanimanav Awesome idea, thanks for sharing. When will this be ready for actual use?
Olvy
@the_league It's out right now; you can clone the repository and follow the setup instructions to start using it.
This is cool — would love to be able to run it locally and use @LM Studio or @Ollama for inference. Would that be possible? (I also have Docker installed but it'd be nice to avoid unnecessary $$ cloud bills)
Olvy
@chrismessina Thanks, love this feedback! Local inference is definitely on the roadmap!
The Docker setup already runs locally - you just need to plug in your API keys. But adding LM Studio/Ollama support would be AMAZING for cost-free local inference!
Would you prefer Ollama integration first? And what models are you running locally? Always looking to prioritize based on real usage!
Thanks for the great suggestion!
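For readers wondering what "plug in your API keys" looks like in practice, here is a minimal sketch of running the Docker setup locally with keys supplied via the environment. The variable name and compose usage are illustrative assumptions, not taken from the repository; check the actual setup instructions for the exact names.

```shell
# Hypothetical example: export a provider key, then bring the stack up.
# OPENAI_API_KEY is an assumed variable name; the repo's docs are authoritative.
export OPENAI_API_KEY=sk-...   # your real key here
docker compose up              # runs everything locally; only API calls leave your machine
```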
Olvy
@chrismessina We now have support for local inference in prompt-forge: you can now use Ollama for local inference, thanks to the work by https://github.com/halilugur
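For anyone curious what talking to a local Ollama server involves, here is a minimal sketch of Ollama's documented generate endpoint. It assumes Ollama is running at its default address (http://localhost:11434) and that a model such as "llama3" has been pulled with `ollama pull llama3`; how prompt-forge wires this up internally may differ.

```python
# Minimal sketch: call a locally running Ollama server directly.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    # stream=False asks Ollama for one JSON object instead of a chunk stream.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    data = json.dumps(build_payload(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # The non-streaming response carries the full text under "response".
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires a running Ollama instance with the model pulled.
    print(generate("llama3", "Say hello in one word."))
```

Because everything stays on localhost, no cloud bills are involved, which addresses the cost concern raised above.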
This is great! Congratulations on the launch @insaanimanav
Olvy
@nshntarora Thanks