Latitude

The Open-Source Prompt Engineering Platform

Build, evaluate, and refine your prompts with AI. Latitude is the open-source prompt engineering platform to ship LLM features with confidence.
This is the 5th launch from Latitude.
Latitude Agents

Build self-improving AI agents
Latitude empowers the next billion AI builders to design, evaluate, and deploy truly autonomous AI agents.

CΓ©sar M.
Maker
πŸ“Œ

Hello Product Hunt!

We're happy to come back to PH to introduce Latitude AI Agents, the end-to-end platform to design, evaluate, and refine your agents.

Key Features:

- Autonomous Agentic Runtime: Craft prompts that run in a loop until the agent achieves its goal, fully integrated with your existing tools and data.

- Multi-Agent Orchestration: Break down complex tasks into smaller agents and easily manage their contexts.

- Self-Improving Prompts: Use other LLMs to evaluate agent performance and automatically refine the agent's instructions based on the results.

- Easy Integration via SDK or API: Integrate agents into your codebase using our SDKs for Python and TypeScript (see the sketch after this list).

- Model Context Protocol Ready: Connect with many platforms offering tools and resources for agents, or create your own custom MCP server.
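
For context, here is a minimal sketch of what triggering an agent run from a Python backend could look like. The base URL, route, payload fields, and response shape are illustrative assumptions rather than the documented API; the official SDKs wrap this kind of call for you.

```python
# Minimal sketch of triggering a Latitude agent run from Python over HTTP.
# The base URL, route, payload fields, and response shape are illustrative
# assumptions; check the SDK/API docs for the real interface.
import os

import requests

API_KEY = os.environ["LATITUDE_API_KEY"]        # assumed env var name
BASE_URL = "https://gateway.latitude.so/api"    # assumed base URL


def run_agent(agent_path: str, parameters: dict) -> dict:
    """Start an agent run and return its final result (hypothetical endpoint)."""
    resp = requests.post(
        f"{BASE_URL}/v1/agents/run",            # hypothetical route
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"path": agent_path, "parameters": parameters},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    result = run_agent("support-triage", {"ticket": "My export job keeps failing."})
    print(result)
```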

We'd love to hear your thoughts. Are you building an agent in 2025?

Looking forward to your feedback!

Guli Moreno

Awesome stuff!

Marco PatiΓ±o

Hey CΓ©sar! This is cool! How does agent performance evaluation work? I imagine it can sometimes be really hard to do, even for a human.

CΓ©sar M.

@marco_patino Thanks Marco! We have a range of evaluators available:

  • LLM-as-judge: an LLM analyzes the instructions and list of messages your agent produces

  • Human in the loop: a human reviews agent generations and scores them manually

  • Code evals: you can push evaluation results directly from your backend (see the sketch below)


It really depends on the use case, but we've seen improvements of up to 30% using our automatic prompt refiner.
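
To make the code-evals option concrete, here is a rough sketch of a backend pushing an externally computed score for a logged agent conversation. The endpoint, field names, and score scale are assumptions for illustration, not the documented API.

```python
# Rough sketch of a "code eval": your backend computes a score for a logged
# agent conversation and pushes it to Latitude. Endpoint, field names, and
# the score scale are illustrative assumptions, not the documented API.
import os

import requests

API_KEY = os.environ["LATITUDE_API_KEY"]        # assumed env var name
BASE_URL = "https://gateway.latitude.so/api"    # assumed base URL


def push_eval_result(conversation_id: str, evaluation: str, score: int, reason: str) -> None:
    """Attach an externally computed score to an agent conversation (hypothetical route)."""
    resp = requests.post(
        f"{BASE_URL}/v1/evaluations/results",   # hypothetical route
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "conversationId": conversation_id,  # assumed id returned when the run was logged
            "evaluation": evaluation,
            "score": score,                     # 0-100 scale is an assumption
            "reason": reason,
        },
        timeout=30,
    )
    resp.raise_for_status()


# Example: flag a run where the agent never called the refund tool.
push_eval_result("conv_123", "tool-usage-check", 0, "Agent skipped the refund tool.")
```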

Marco PatiΓ±o

@heycesr nicee πŸ‘Œ