Chris Messina

Temporal + OpenAI Agents SDK - Build production-ready agents, fast.

Rate-limiting? Brittle tools? Rapid iteration? We’ve got it handled. OpenAI Agents SDK + Temporal means your agents now remember state, bounce back from crashes, and survive real-world chaos without you writing error-handling and orchestration code.

Replies

Maxim Fateev

Hi, Product Hunt community! I’m the Co-Founder and CTO of Temporal, and I’m excited to launch this integration with OpenAI’s Agents SDK in Public Preview.

What is it?

This new integration between the OpenAI Agents SDK and Temporal’s Python SDK lets you build production-ready AI agents, fast. What does that mean? It’s easy to prototype agents, but they fall apart in production: rate limiting, crashes, flaky APIs, and unreliable tools. These issues sound new, but they aren’t; they’re a different flavor of distributed-systems challenges, and Temporal has been solving those challenges for years to make distributed systems more accessible and reliable.

With this integration, you can use abstractions and best practices from OpenAI with Temporal’s orchestration, Durable Execution, and scale to build agents for the real world. Add several lines of Temporal code to agents you already built with OpenAI’s framework or start building with both from the beginning.
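Roughly, the pattern looks like this. This is a minimal sketch rather than the official sample, and the integration-specific names shown here (the temporalio.contrib.openai_agents module, OpenAIAgentsPlugin, and the plugins argument) should be checked against the docs for the exact imports:

```python
import asyncio

from agents import Agent, Runner          # OpenAI Agents SDK
from temporalio import workflow
from temporalio.client import Client
from temporalio.worker import Worker
# Assumed import path for the integration; check the docs for the exact name.
from temporalio.contrib.openai_agents import OpenAIAgentsPlugin


@workflow.defn
class HelloAgentWorkflow:
    @workflow.run
    async def run(self, prompt: str) -> str:
        # Plain OpenAI Agents SDK code; wrapping it in a Temporal workflow is
        # what makes the run durable. With the integration enabled, model calls
        # execute as activities, so they are retried and persisted automatically.
        agent = Agent(name="Assistant", instructions="You are a helpful assistant.")
        result = await Runner.run(agent, prompt)
        return result.final_output


async def main() -> None:
    # The plugin wires the Agents SDK into Temporal (assumed wiring shown here).
    client = await Client.connect("localhost:7233", plugins=[OpenAIAgentsPlugin()])
    async with Worker(client, task_queue="openai-agents", workflows=[HelloAgentWorkflow]):
        print(
            await client.execute_workflow(
                HelloAgentWorkflow.run,
                "Tell me about durable execution",
                id="hello-agent",
                task_queue="openai-agents",
            )
        )


if __name__ == "__main__":
    asyncio.run(main())
```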

It’s available in Public Preview in Temporal’s Python SDK, open-source under the MIT license.

What’s included?

  • Lightweight, easy-to-use Agents SDK with very few abstractions: the OpenAI Agents SDK is straightforward to pick up and encodes industry best practices.

  • Durable Execution for agents: Temporal’s crash-proof execution provides automatic state persistence, retries, seamless recovery across failures, long-running task handling, and human-in-the-loop steps. This approach also preserves tokens, since you don't need to rerun LLMs to get back content you lost (there's a small sketch of this after the list).

  • 100% code: The Agents SDK & Temporal both provide a plain-code developer experience that is flexible, intuitive, and gives you the freedom to develop how you want.

  • Built-in observability and debugging tools: includes local replay, support for unit testing, and traceability of every step in an application.

  • Massive horizontal scale: run thousands (or millions) of agents in parallel, with the same technology that supports applications at Nvidia, Lovable, and Netflix, as well as every Snap story and every Taco Bell order.
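Here's what the Durable Execution bullet looks like in code: a minimal sketch of a tool backed by a Temporal activity, using plain Temporal Python SDK primitives rather than the agent-specific helpers (the timeout and retry values are illustrative, not recommendations):

```python
from datetime import timedelta

from temporalio import activity, workflow
from temporalio.common import RetryPolicy


@activity.defn
async def fetch_weather(city: str) -> str:
    # Ordinary Python: call whatever flaky API you need here. If it raises,
    # Temporal retries it per the policy below, and the retries survive
    # worker crashes and restarts.
    return f"Sunny in {city}"  # placeholder result for the sketch


@workflow.defn
class WeatherToolWorkflow:
    @workflow.run
    async def run(self, city: str) -> str:
        # The activity call is durably recorded: on recovery, a completed call
        # is replayed from history instead of being re-executed, so no tokens
        # or work are re-spent on calls that already finished.
        return await workflow.execute_activity(
            fetch_weather,
            city,
            start_to_close_timeout=timedelta(seconds=30),
            retry_policy=RetryPolicy(maximum_attempts=5),
        )
```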

We’re excited to see what you build. You can read more about the integration here. Try it out, and let us know what you think!

Nathan Schram

@maximfateev this integration makes so much sense. The durable execution for agents is exactly what's been missing; I've lost too many tokens to crashes. What's the memory overhead like?

Maxim Fateev

@nathanschram There is no memory overhead as this is the same Python code. There is latency overhead related to storing the state, but it is negligible compared to LLM latency.

Joey Judd

Honestly, letting devs skip all the boring plumbing code and just focus on biz logic is genius—makes life sooo much easier fr. The team nailed it!

Hannah Short

@joey_zhu_seopage_ai Thanks! We have a great team and we were excited to collaborate with OpenAI on this.

Steve Androulakis

Hey! I'm in the video above. Ask me questions about the implementation and I'm happy to answer.

Sig Eternal

Congrats on the launch! Really impressed by how this integration solves the challenge of creating production-ready AI agents, ensuring reliability for long term use. The industry needs this to address the common struggles with unstable APIs and system crashes.

Hannah Short

@sig_eternal Thanks! We’re excited to see the growth of these production-ready agents.

Joey Judd

Wow, building workflows that never fail sounds like a dream—no more late-night debugging sessions for edge cases! Does Temporal handle retries automatically if something goes down mid-process?

Hannah Short

Hi @joey_zhu_seopage_ai, we do! You can learn more here: https://temporal.io/how-it-works

Cornelia Davis

@joey_zhu_seopage_ai Not only does it handle retries, but the retries themselves are durable!!! If your app goes down while it's retrying an LLM or tool call, when the app is recovered, the retries will continue.

Maxim Fateev

@joey_zhu_seopage_ai It's not just retries. It recovers the state of the workflow to the exact point before the failure. For example, if an agent called the LLM, got back a list of tool calls to execute, and then the process crashed, the LLM call will not be repeated; execution resumes with the tool calls.
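In sketch form, with the LLM call and the tool call standing in as two plain activities (illustrative names, not the integration's own API):

```python
from datetime import timedelta

from temporalio import workflow


@workflow.defn
class AgentTurnWorkflow:
    @workflow.run
    async def run(self, prompt: str) -> str:
        # Step 1: the LLM call runs as an activity; its result is written to
        # workflow history as soon as it completes.
        plan = await workflow.execute_activity(
            "call_llm",  # illustrative activity name, registered elsewhere
            prompt,
            start_to_close_timeout=timedelta(minutes=2),
        )
        # If the process crashes right here, recovery replays the completed
        # LLM call from history (no repeated call, no extra tokens) and
        # resumes with the tool call below.
        return await workflow.execute_activity(
            "run_tool",  # illustrative activity name, registered elsewhere
            plan,
            start_to_close_timeout=timedelta(minutes=2),
        )
```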

Milton sarkar

Been exploring Temporal recently and it really changes how you think about workflows. Instead of spending hours writing retry logic, timeouts, and all that boilerplate, you just focus on what your app actually needs to do. It’s kind of wild knowing the same tech is behind Snap stories and Taco Bell orders — feels super robust but still developer-friendly. If you're building anything with complex flows or background jobs, this is seriously worth a look. ⚙️💡

Hannah Short

@milton_sarkar1 We love to hear that you've been exploring! If you haven't yet, we'd love for you to join our Slack community: t.mp/slack

Ricky Cipher

Just found this tool. Looks like a great solution for managing workflows. Can’t wait to try it!

Hannah Short

@rickycipher We'd love to hear more about your experience! Join our community and chat with our team and other developers using Temporal here: t.mp/slack

Mou Sarkar

Very much excited 😊

Hannah Short

@mou_sarkar3 Thanks for checking it out!

Gin Tse

Honestly, coding workflows that *never* fail is such a game changer: no more stressing over edge cases, just pure business logic. Really smart move, team!

Benjamin Lyrics

It's a huge amount of changes, and that's incredible.

Hannah Short

@benjamin_lyrics Thanks for checking it out!