Arch is an intelligent gateway for agents. An AI-native, open source infrastructure project to help developers build fast, hyper-personalized agents in minutes. Arch is engineered with specialized (fast) LLMs to transparently integrate prompts with APIs (function calling) and to add safety, routing, and observability features in seconds - so that developers can focus on what matters most.
Hello PH!
My name is Salman and I work on Arch - an open source infrastructure primitive to help developers build fast, personalized agents in minutes. Arch is an intelligent prompt gateway engineered with (fast) LLMs for the secure handling, robust observability, and seamless integration of prompts with your APIs - all outside business logic.
Arch is built on (and by the contributors of) Envoy with the belief that:
Prompts are nuanced and opaque user requests, which require the same capabilities as traditional HTTP requests including secure handling, intelligent routing, robust observability, and integration with backend (API) systems for personalization – all outside business logic.
Arch handles the critical but undifferentiated tasks related to the handling and processing of prompts, including detecting and rejecting jailbreak attempts, intelligently calling "backend" APIs to fulfill the user's request represented in a prompt, routing to and offering disaster recovery between upstream LLMs, and managing the observability of prompts and LLM interactions in a centralized way.
⭐ Core Features:
🏗️ Built on Envoy: Arch runs alongside application servers, and builds on top of Envoy's proven HTTP management and scalability features to handle ingress and egress traffic related to prompts and LLMs.
🤖 Function Calling: For fast agentic and RAG apps. Engineered with SOTA LLMs to handle fast, cost-effective, and accurate prompt-based tasks like function calling and parameter extraction from prompts. Our models respond in under 200 ms!
🛡️ Prompt Guard: Arch centralizes prompt guards to prevent jailbreak attempts and ensure safe user interactions without writing a single line of code.
🚦 Traffic Management: Arch manages LLM calls, offering smart retries, automatic cutover, and resilient upstream connections for continuous availability - whether across multiple LLM providers or a single provider with multiple model versions.
👀 OpenTelemetry Tracing, Metrics and Logs: Arch uses the W3C Trace Context standard to enable complete request tracing across applications, ensuring compatibility with existing observability tools, and provides metrics to monitor latency, token usage, and error rates, helping optimize AI application performance.
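Since Arch relies on the W3C Trace Context standard, the tracing bullet above is easy to make concrete. As a rough, generic sketch (not Arch-specific code - the header name and format come from the W3C spec, everything else is illustrative), here is how a `traceparent` header can be generated and attached to a request so a gateway can correlate prompt and LLM spans into one trace:

```python
import re
import secrets

def make_traceparent() -> str:
    """Build a W3C Trace Context `traceparent` header value:
    version (00) - 16-byte trace-id - 8-byte parent-id - flags (01 = sampled)."""
    trace_id = secrets.token_hex(16)   # 32 hex chars, must not be all zeros
    parent_id = secrets.token_hex(8)   # 16 hex chars, must not be all zeros
    return f"00-{trace_id}-{parent_id}-01"

# Attach the header to any request passing through the gateway, so the
# application span and downstream LLM spans share one trace id.
headers = {"traceparent": make_traceparent()}

# Sanity-check the value against the spec's grammar.
assert re.fullmatch(r"00-[0-9a-f]{32}-[0-9a-f]{16}-01", headers["traceparent"])
```

Any OpenTelemetry-compatible backend can then stitch together spans that carry the same trace id.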
- Visit our GitHub page to get started (and ⭐️ the project 🙏): https://github.com/katanemo/arch
- To learn more about Arch, check out our docs: https://docs.archgw.com/
A big thanks 🙏 to my incredibly talented team who helped us to our first milestone as we re:invent infrastructure primitives for Generative AI.
@alex_tartach Thanks Alex - building this was a lot of fun, and it's early days for us. Packing intelligence into infrastructure to help developers build fast agents (faster than before) is the ultimate goal.
What's a personalized agent? A web chatbot, a personal assistant, or something else? This field is moving so fast, it's hard to know what terms mean these days. Thanks 🙏
@sentry_co Personalized means the agent is customized to be unique to your use case. Most agents just summarize over some data. With Arch you can build something very tailored - like creating ad campaigns via prompts or updating insurance claims - and offer generative summaries in the same experience.
Impressive work - at Meta we share the core belief that the safety of agents is paramount, and the earlier we can tackle those concerns in the request path, the better. Arch feels like a great fit for responsible and safe AI - not to mention the other superpowers it offers developers.
One quick question: can you elaborate on the prompt guard model? I see that you fine-tuned it over the Prompt Guard from Meta?
@sarmad_siddiqui Thank you! Yes, Arch uses purpose-built LLMs for guardrails. The Arch-Guard collection of models can be found here: https://huggingface.co/collectio.... We fine-tuned over Meta's Prompt Guard, and the optimization improved TPR (+4%) without impacting FPR. This was for the jailbreak use case; the next set of baseline guardrails will include toxicity, harmfulness, etc.
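For readers unfamiliar with the TPR/FPR tradeoff mentioned above: TPR is the share of real jailbreaks the guard catches, and FPR is the share of benign prompts it wrongly flags. A quick, generic sketch (toy labels, not Arch's actual evaluation data) of how both rates are computed for a binary jailbreak classifier:

```python
def tpr_fpr(y_true, y_pred):
    """True-positive rate (jailbreaks caught) and false-positive rate
    (benign prompts wrongly flagged). Labels: 1 = jailbreak, 0 = benign."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp / (tp + fn), fp / (fp + tn)

# Toy example: 4 jailbreak attempts and 4 benign prompts.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
tpr, fpr = tpr_fpr(y_true, y_pred)
print(f"TPR={tpr:.2f}, FPR={fpr:.2f}")  # TPR=0.75, FPR=0.25
```

Improving TPR without hurting FPR means catching more attacks without flagging more legitimate prompts - the goal of the fine-tuning described above.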