Latitude
Hello Product Hunt!

We're happy to come back to PH to introduce Latitude AI Agents, the end-to-end platform to design, evaluate, and refine your agents.

Key Features:
- Autonomous Agentic Runtime: Craft prompts that run in a loop until the agent achieves its goal, fully integrated with your existing tools and data.
- Multi-Agent Orchestration: Break down complex tasks into smaller agents and easily manage their contexts.
- Self-Improving Prompts: Use other LLMs to evaluate agent performance and automatically refine the agent's instructions based on the results.
- Easy Integration via SDK or API: Integrate agents into your codebase using our SDKs for Python and TypeScript.
- Model Context Protocol Ready: Connect with many platforms offering tools and resources for agents, or create your own custom MCP server.

We'd love to hear your thoughts. Are you building an agent in 2025?

Looking forward to your feedback!
Awesome stuff!
The "self-improving" part is interesting; it could be useful for creating agents that get better over time instead of staying static.
Pullpo.io
Hey César! This is cool! How does agent performance evaluation work? I imagine it can sometimes be really hard to do, even for a human.
@marco_patino Thanks Marco! We have a range of evaluators available:
- LLM-as-judge: an LLM analyzes the instructions and list of messages your agent produces
- Human in the loop: a human reviews agent generations and scores them manually
- Code evals: you can push evaluation results directly from your backend
It really depends on the use case, but we've seen improvements of up to 30% using our automatic prompt refiner.
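To make the LLM-as-judge idea above concrete, here is a minimal sketch of one in Python. This is not Latitude's actual API; `call_llm` is a hypothetical stand-in for any chat-completion client, stubbed here so the example is self-contained.

```python
# Minimal LLM-as-judge sketch (illustrative only, not Latitude's SDK).

def call_llm(prompt: str) -> str:
    # Hypothetical stub: a real implementation would call an LLM provider.
    # It returns a fixed score here so the sketch runs without network access.
    return "4"

def judge_conversation(instructions: str, messages: list[dict]) -> int:
    """Ask a judge model to score an agent conversation from 1 to 5."""
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
    prompt = (
        "You are evaluating an AI agent.\n"
        f"Agent instructions:\n{instructions}\n\n"
        f"Conversation:\n{transcript}\n\n"
        "Score how well the agent followed its instructions, 1-5. "
        "Reply with the number only."
    )
    return int(call_llm(prompt).strip())

score = judge_conversation(
    "Answer billing questions politely.",
    [{"role": "user", "content": "Why was I charged twice?"},
     {"role": "assistant", "content": "Sorry about that! Let me check."}],
)
```

The key design point is that the judge sees both the instructions and the full message list, so it can score adherence rather than just answer quality.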
Great idea. How does the self-improvement part work?
@ed_preble We use a technique called Semantic Backpropagation:
- You can evaluate any conversations generated by the agent automatically using LLM-as-judge
- We use the results of those evaluations to suggest changes in your prompt automatically
If you want to learn more, I highly recommend this paper: https://arxiv.org/pdf/2412.03624
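The evaluate-then-refine loop described above can be sketched roughly as follows. This is a hedged illustration of the general idea, not Latitude's implementation; `evaluate` and `refine` are hypothetical stand-ins for the judge and refiner LLM calls.

```python
# Illustrative evaluate-and-refine loop in the spirit of semantic
# backpropagation. Function names and thresholds are assumptions.

def evaluate(prompt: str, conversations: list[str]) -> list[tuple[float, str]]:
    # Stand-in for LLM-as-judge: returns (score, textual critique) per
    # conversation. A real judge would read each transcript.
    return [(0.6, "Agent re-asked for info the user already gave.")
            for _ in conversations]

def refine(prompt: str, critiques: list[str]) -> str:
    # Stand-in for the refiner LLM: folds critiques back into the instructions.
    return prompt + "\nAvoid re-asking for information already provided."

def improve(prompt: str, conversations: list[str], rounds: int = 2) -> str:
    """Iteratively refine a prompt using evaluation feedback."""
    for _ in range(rounds):
        results = evaluate(prompt, conversations)
        critiques = [c for s, c in results if s < 0.8]
        if not critiques:
            break  # all conversations scored well; stop refining
        prompt = refine(prompt, critiques)
    return prompt
```

The critiques play the role of a "gradient": textual feedback that flows back from the evaluated outputs into the prompt, which the linked paper formalizes.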
Thinkbuddy AI
I've been using them for months and fell in love at first sight. Great product in terms of UX/UI, and very useful too. Nice to see that their innovation pace is so fast! Looking forward to seeing them in great places!