
I'm Zach, founder of Warp, the first Agentic Development Environment (ADE). AMA! 🔥
Hi Product Hunt!!
I'm Zach, the CEO of @Warp, the first Agentic Development Environment (ADE).
We are in the midst of a change from "development by hand" to "development by prompt," where instead of hand-writing code and commands, developers ask agents to build features, fix bugs, debug server crashes, and more. At Warp, we are building the first agentic development environment designed from the ground up for this new workflow of humans and agents collaborating to ship better software.
Ask me anything about how we are building Warp (e.g. why it's a native Rust app), why a new environment that's neither a terminal nor an IDE is needed, how cloud agents fit into the picture, and where this is all headed in the next few years.
— Zach
Replies
Wordlink
Hey Zach, do you plan on hosting a hackathon going forward to showcase all the capabilities Warp can do?
Warp
@angus_mac love this idea. we participate in a ton of hackathons and i think it would be cool to do a warp hosted one. @erikeliason
Product Hunt
@angus_mac @erikeliason @zach_lloyd we should co-host!
@angus_mac hackaton with Warp sounds like a brilliant idea! Let's realize it!
Wordlink
@bobvasic everyone seems excited about it, I think we're manifesting it!
Hey, @zach_lloyd ! You're positioning Warp as neither a terminal nor an IDE, but a new category entirely. Given the massive installed base and muscle memory developers have with existing tools, what's your strategy for overcoming the switching costs? Are you seeing early adoption primarily from junior developers who haven't built deep tool preferences, or are senior developers also making the transition to prompt-driven development?
Warp
@lina_huchok That's a great question. For developers who are super familiar with using a terminal, they can actually just still use that same muscle memory for the most part within Warp.
The future though is starting every task with a prompt. And in that respect, Warp is capitalizing on broad momentum towards the CLI as the primary interface.
Tools like Claude Code and Gemini CLI show that the command line is actually a great place to do agentic work. And with Warp, we realized that the best possible experience isn't just using a text-based CLI tool, it's a rich interface for doing agentic work that lets you review and iterate on diffs with the agent, run multiple agents at once, and save and share agent contexts all in an app that's built for the new workflow.
We have seen success in getting long-term Warp users to adopt the more agentic workflow. From a product perspective, we've done this by trying to get them to experience an AHA moment - usually when they're in the course of fixing some terminal error and see that the agent can help them achieve the task more easily.
We also have a lot of users who are coming into Warp who don't carry the baggage of using terminals or IDEs and are getting going with Agentic development for the first time.
Product Hunt
Okay so I'm a non-dev but I've been using Warp more and more lately. I run Claude Code through it, default to using it vs just Terminal or another CLI, and have started to have it open hand-in-hand with Cursors. With that being said a couple questions I have:
Are you seeing a new user set of "vibe coders" or non-technical folks using Warp?
If yes, how is this subset of users impacting the product roadmap/vision?
"Develop by Prompt" is so green, how are you evaluating what and how to build for something so new that even users might not know how to interact with the toolset (yet)
What gets you most excited about the future of development and the "prompting" trend you're seeing?
TIA!
Warp
@gabe Awesome questions.
Yes, we do see vibe coders, and our data shows that about 25% of people who are using Warp for coding are not experienced developers. However, most people who are using Warp these days do have a technical background.
Our roadmap is focused pretty squarely on the pro developer use case under the theory that if we can build something that's awesome for pro devs, it will probably also be sufficient for vibe coders. However, we don't really believe that the converse is true - a tool that's built primarily for vibe coders is unlikely to be super usable for the real development use cases that we want to support.
Totally agree that "develop by prompt" is totally new. The way we evaluate it is through a bunch of standard product development practices that I've always believed in. You should check out our write-up on how we solve user problems to get a good sense of it: https://notion.warp.dev/original-How-we-solve-user-problems-at-Warp-21643263616d81bc9347e20fc1b73d8e
The biggest thing is that we only want to build features if we're clear on what problem they're solving. We validate that by testing the features quickly with users and by testing the features internally at Warp.
The thing that has me most excited is that for people who love building software, this is a golden age where you can build not just more apps (I worry that there's a lot of crap that might get built) but better apps more quickly that solve more user problems. I also think it's cool how the agentic coding workflows are democratizing development so that a larger range of people are able to build software to solve their own problems.
If I move from Cursor to Warp what will be the replacement for tab completion?
@admiralrohan Nothing, I suppose? Because "tab, tab, tab" is still development by hand.
Warp
@admiralrohan The idea with Warp is that for the most part, you don't need to do tab completion; it works a little bit more like Claude Code. For the most part, you should be able to just prompt Warp and get a good code diff. If the diff isn't quite right, Warp has some simple editing features that we're working on expanding and improving. And if you really need to, you can always hop into your IDE and use completion there.
There's a lot of concern around LLM costs these days, as promises of "millions of tokens per second" are as much of a financial burden as they are an unlock.
Do you see costs exploding to the point where it's a top 3 variable cost of a company?
Warp
@le_zhu We are already at this point. For Warp, LLM costs are far and away the number one variable cost at the company, which means that we need to be smart about how we manage the tokens that we send and receive from the LLMs.
It'll be interesting to see what happens to these costs over time. For any fixed level of intelligence, the cost of tokens is dropping dramatically. But for the frontier models, which are the ones most useful for coding, costs have stayed about the same. Our hope is that competition at the model layer drives these costs down.
I'd like insights into how the team at Warp is using Warp for their bi-directional agentic development. Is the team plugging any agentic capabilities into their pipelines for auto-alerts, QC, or any kind of feedback-driven self-improvement paradigms? C'mon, spill <3
Warp
@alt8451 I wrote something up on this: https://notion.warp.dev/How-Warp-uses-Warp-to-build-Warp-21643263616d81a6b9e3e63fd8a7380c
@zach_lloyd Hey Zach, this is great, and I'm sure it's going to help with adoption; I'm referring more to the underlying patterns in the context of your deployment. I appreciate the time.
Purposeful Poop
Hey Zach! thanks for doing this AMA.
I'm curious, are you a successful pivot, or did you all set out to build this from the get go?
Purposeful Poop
and a follow up that is totally unrelated, what is the "highest complexity" code in your codebase? do you consider the "technicality of the code" to be part of your moat?
Warp
@catt_marroll another great question.
yes, i do consider it to be part of our moat. even as coding agents become more powerful they aren't close to being able to build warp from scratch (although we use them now every day to iterate on warp's existing codebase).
There are a ton of parts that are technically complex, but probably the part that would be most difficult for anyone to replicate is our integration with the shell and moving a lot of the traditional shell layer into the gui. This is how we support rich input and blocks in warp.
Warp
@catt_marroll somewhere in the middle.
our mission has stayed the same throughout -- empowering developers to ship better software more quickly, but the product has changed a ton with the rise of LLMs.
Our original product strategy and product vision of building the world's best command line is still highly relevant and differentiated, and happens to work extremely well for agents, not just terminal commands. Building a better terminal also helped us get hundreds of thousands of active developers into Warp even before our move towards the ADE.
That said, it wasn't until we started really embracing agentic development that our revenue took off.
Really cool! The ADE feels inevitable with AI's rise. I'd love to hear: how does Warp handle long-lived agent memory? Do agents remember previous context across sessions, or is everything task-specific right now?
Warp
@carter_hill1 It's a great question. We have a feature in Warp called session restoration where we store conversations in a local SQLite database so that even across app restarts you can specify that you want to continue a conversation. I think eventually these will be more cloud-synced so you can share them with your team or pick them up from different computers.
We also have global and cloud-synced rules, as well as the local and project-specific warp.md file. The agent has the ability to edit the warp.md file to save things so that it remembers your context. We're working on making the agent better at knowing when to create rules and when to edit the Warp MD file so it remembers stuff for you.
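Warp's actual implementation isn't public, but the core idea of session restoration (persisting conversation turns to a local SQLite database so they survive app restarts) can be sketched minimally; the schema and function names below are illustrative assumptions, not Warp's real code:

```python
import sqlite3

def open_store(path=":memory:"):
    """Open (or create) a local conversation store. A real app would
    pass a file path so data survives restarts; ":memory:" is for demo."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS turns (
        conversation_id TEXT,
        seq INTEGER,
        role TEXT,          -- "user" or "agent"
        content TEXT,
        PRIMARY KEY (conversation_id, seq))""")
    return db

def save_turn(db, conv_id, seq, role, content):
    """Persist one turn of a conversation."""
    db.execute("INSERT OR REPLACE INTO turns VALUES (?, ?, ?, ?)",
               (conv_id, seq, role, content))
    db.commit()

def restore(db, conv_id):
    """Reload a conversation in order, e.g. to continue it after a restart."""
    return db.execute(
        "SELECT role, content FROM turns WHERE conversation_id = ? ORDER BY seq",
        (conv_id,)).fetchall()
```

Because the store is just a local database file, syncing it to the cloud later (as mentioned above) becomes a matter of replicating rows rather than redesigning the feature.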
Hi Zach,
As an aspiring founder, I’m curious- are you aiming for Warp to eventually be acquired, or is the focus on growing independently over the long term? How does that vision shape the way you’re building Warp today?
Warp
@rolfadd The goal is to build the company for the long term, but I'm also pragmatic. If we get to a point where it seems like acquisition is the best option, we would consider it.
Right now, my goal is to position Warp to be the de facto tool that developers are using to build with AI, and I think we have a strong chance of doing that because we have such a differentiated approach, strong growth, and a very big and engaged user base.
Textify
Love Warp - awesome product (happy user for 1+ years)!
Warp
@mihail_eric Thanks Mihail!!
Warp
@gsmbk Glad it's working well. I'm going to steal that line. Love at first prompt. That's awesome.
I think we have a pretty differentiated spot in the space right now. All of our competitors are either forks of VSCode (which are basically the same UI app) or CLI apps (which are also basically the same app). To my knowledge, Warp is the only app trying to build something from ground-up principles for the agentic workflow, not just of today but of what we see coming as agents become more and more autonomous.
So we are making a bet here that there's a bunch of product differentiation that actually isn't that easy for our competitors to replicate. And if it was, I think you would see more people trying to replicate what Warp has built (and maybe that's happening and I don't know it yet).
We do have BYOLLM on our roadmap, and it's something that I would really like to build. It's not just a common end-user request; it's a common enterprise request. My main concern with BYO LLM at the moment is actually that if you're talking about local LLMs, the quality isn't the same as the frontier models.
If you're talking about API keys, we'd have to figure out how to make our business model support that. We would probably want to do something where we charge a usage premium on top of the API key so that we can recoup some value.
Coming from Cursor, I've been surprised to see no counter of current context length of each conversation window. What is Warp's idea on tracking context length and resetting / summarizing context?
Warp
@gusto_js We actually recently added something that shows you when the context window starts to fill up. It's a little battery icon that you can hover over and see the context window usage. It shows up in the prompt. I'm trying to find an image of it.
We also added a second feature recently that uses an LLM to suggest when you should start a new conversation. So it kicks in when you change subject midway through.
Finally, our general approach is that we try to make users not have to think about this too much and we will do things like summarize and truncate for you if you don't actively manage yourself.
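The mechanics behind a usage meter and automatic truncation are straightforward to sketch; the token counter below is a stand-in (a real client would use the model's tokenizer), and summarizing rather than dropping old turns is the likelier production behavior:

```python
def context_usage(turns, window_tokens, count_tokens=lambda t: len(t.split())):
    """Fraction of the model's context window the conversation occupies,
    i.e. what a battery-style indicator would display on hover."""
    used = sum(count_tokens(t) for t in turns)
    return used / window_tokens

def truncate_to_fit(turns, window_tokens, count_tokens=lambda t: len(t.split())):
    """Drop the oldest turns until the conversation fits the window.
    A real tool might summarize the dropped turns instead of discarding them."""
    turns = list(turns)
    while turns and sum(count_tokens(t) for t in turns) > window_tokens:
        turns.pop(0)  # evict oldest first
    return turns
```

The point of automating this is exactly what's described above: the user never has to reason about token budgets unless they want to.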
Hi @zach_lloyd
I have been using Warp for 2 months now, and I really appreciate how it integrates agent/LLM asking with the command line.
I confess I don't use it specifically to code, since I delegate it to Claude Code and Cursor.
But I use it a lot to quickly understand a part of codebase, configuration files, and the most used: to generate complicated commands using natural language.
The models offered by Warp today are the "mainstream" commercial models (Sonnet/Opus/GPT, etc.), but they are kind of slow, which takes me out of the flow when executing CLI actions/devops/etc.
I recently published on Twitter a simple alias that uses fast LLM inference like Groq to generate commands from natural language.
I always give the AWS CLI as an example because it requires a bunch of parameters depending on which service I'm using. Asking things like: "give me all the records in my meetings table on Dynamo where length is greater than x, in JSON format."
This ask, in the normal Warp agent using the currently offered models, would take at least 30 seconds and would also come with explanations, etc.
I just want to run that command, but ask for it in natural language as above.
So, integrating fast inference/llm, would make users achieve this without getting out of the flow.
With that said, I would like to know if you guys have any plans to implement fast inference models/servers that allow us to use Warp as a "natural language" command line for every commands.
Warp
@douglas_correa Really good question. Couple thoughts here:
1. You can pick a faster, cheaper model to do things in Warp. So I would start with that by changing the model in the model selector to something that's lower latency, maybe like GPT-4.1. We are also thinking about implementing some model routing so this can happen automatically.
2. For the real-time translation of English into commands, we actually have a feature that's built for this that you can activate by typing # into the input and we'll bring up real-time English to command translation. It might not have exactly the latency that you're looking for, but you should try it and give me feedback.
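The model routing mentioned in point 1 is only "being thought about," so here is one toy way it could work; the model names and heuristics are hypothetical, not Warp's:

```python
# Hypothetical model identifiers for illustration only.
FAST_MODEL = "fast-model"        # low-latency, e.g. for command generation
FRONTIER_MODEL = "frontier-model"  # slower, stronger, e.g. for coding tasks

def route(prompt):
    """Send short, command-style asks to a low-latency model and longer
    coding/debugging tasks to a frontier model. A production router would
    likely use a classifier rather than keyword heuristics."""
    command_hints = ("run", "list", "show", "give me", "generate a command")
    short = len(prompt.split()) < 30
    if short and any(h in prompt.lower() for h in command_hints):
        return FAST_MODEL
    return FRONTIER_MODEL
```

Routing like this would address exactly the flow-breaking latency described in the question: quick natural-language-to-command asks never wait on a frontier model.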
Dashcam
How do y'all test the Warp client?
Warp
@sw1tch We have a whole bunch of different ways of testing it: everything from unit tests to an integration test harness that actually runs the full app and simulates user actions to dogfooding it to putting it out on a preview build to being able to run different experiments where certain features are enabled for different parts of the user base. So we test it at a lot of different levels.
Hey Zach, huge fan of what you're building with the ADE.
Thinking ahead to more complex workflows, where you have multiple agents collaborating (e.g. one scaffolds a service, another containerizes it, a third deploys)... My main question is about the handoff: How do you see agents sharing state and managing dependencies between each other? Is that something you envision Warp handling natively, or more through ecosystem tools?
Warp
@dhairya_thakkar1 This is an awesome question. I think we'll probably support both, so we'll want to make native handoff of context between agents seamless in Warp.
But we also support things like MCP for this sort of thing. We're also launching a Warp MD file that's compatible with Claude MD and cursor rules. So it's kind of about both approaches being in play.
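As a purely hypothetical illustration of the kind of project-level rules file described here (the actual warp.md conventions may differ; the commands below assume a Rust project):

```markdown
# warp.md — project rules for the agent (hypothetical example)

## Build & test
- Build with `cargo build`; run `cargo test` before proposing a diff.

## Conventions
- Use the existing error type in `src/error.rs`; don't introduce new ones.
- Keep diffs small and scoped to one change at a time.
```

Because the file lives in the repo, it travels with the project and any agent (or teammate) picks up the same context.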
@zach_lloyd Makes a ton of sense to have both Warp-native handoff and MCP/Warp MD in play. Passing context cleanly feels like (at least to me) what might fully unlock multi-agent workflows for bigger tasks.
On a side note, curious what you and the Warp team look for in early-career folks (new grads, interns, etc.) whether that’s particular skills, attitudes, or just the kinds of problems you like to see them dive into.
Invoice generator
Hey Zach! Any plans for Grok 4 integration? Users claim that on X, it generates much less bloated code.
Warp
@csaba_kissi We have explored Grok 4 but haven't yet done the work to ship it. Every time we ship a new model in Warp, we want to make sure it's high quality, which depends not just on the underlying model but also on how we prompt it and how we handle tool calling and prompt caching.
There's a whole bunch of work that has to go in, and then we actually want to run our internal evals on it to make sure that it's good for users. So, short answer is, we are evaluating every new model that's coming out.
We haven't quite gotten Grok 4 over the line; it's possible that we'll launch it, and we definitely want to keep offering Warp users state-of-the-art models from all the providers.
Hey @zach_lloyd , long time user of Warp here, and I had the chance to use Warp 2.0 as soon as it was released.
Before that, for my agentic developments, I used a lot of Windsurf/Cursor/Copilot. I was switching between them because the cost of switching was very low so I was using the best IDE at any point in time.
I recently switched my development workflow totally to Claude Code (well, I still open the project in VSCode for the diff and the review haha).
Today I'm still considering the switch from Claude Code to Warp, but I still have a few questions:
- what are your differentiators vs Claude Code, aka what can you do with Warp that you cannot with Claude Code?
- I have the feeling that I can still ask Claude to execute terminal commands and ask it to debug the results, correct? If yes, then what are the advantages of Warp?
- There are no agents/subagents yet in Warp; do you plan to develop these kinds of features? Same question for the /commands.
- How is the performance of your agent compared to Claude Code that uses the Claude models "fully"? I'm using Claude Code with AWS Bedrock (not a Claude subscription) so I pay for every token I use, so the development performance is very powerful. How do you compare to that as you're on a subscription model?
thank you
Warp
@benjamin_sicard This is a great question, and I'm glad you're a fan of Warp! The short answer is, it's totally fine to use Claude Code within Warp, but we feel like we have several advantages.
The biggest one is that Warp, by virtue of being a GUI app, can do things at the UI layer that you simply can't do with a purely TUI-based app like Claude Code. If you check out our preview build, you'll see it's possible to review code diffs, edit the diffs produced by the agent, and iterate in a much tighter loop than with Claude Code, where you end up having to context-switch into an editor, the GitHub code review UI, or something like Git Tower to actually see and edit what the agent is doing.
I think this is really important if you're doing production coding. It's less important if you're doing vibe coding, where you don't have strong requirements about the maintainability of the code or maybe you don't need to comprehend it at the same level.
We also have the advantage at Warp of being multi-model, so the next time a model comes out that we think could either be better than Claude or supplement it in a different way, you can switch. You're not stuck with one model provider, and you don't have to redo all your workflows to move across providers.
For subagents, I'd be really interested in hearing what your use case is. It's something we're considering adding in Warp, but I'm not yet fully convinced of the value for most users, though I could potentially get there.
For pricing and performance, I think we're basically at par with Claude Code's efficiency. We're not trying to compete on price; in fact, I think that's a losing strategy for us. We're trying to add value through the application layer on top of what these models provide.
Nice, Really like this. Lowkey wild to think we might look back at hand-coding the way people look at writing assembly today 😂.
Hey Zach, I’m curious since ChatGPT can already handle tasks like writing code, debugging, and fixing errors, what’s truly new or unique about your Agentic Development Environment (ADE) compared to what AI tools like ChatGPT already do?