Pieces Long-Term Memory Agent - The first AI that remembers everything you work on
Ever wish you had an AI tool that remembered what you worked on, with whom, and when across your entire desktop? Pieces Long-Term Memory Agent captures, preserves, and resurfaces historical workflow details, so you can pick up where you left off.
Replies
Pieces for Developers
Hey Product Hunt!
Tsavo Knott here, CEO and Co-Founder of Pieces. Today, we're launching Pieces Long-Term Memory Agent (LTM-2), the first AI that remembers everything you work on.
AI has transformed how we build software, but it's always been missing one critical component: Memory.
Developers waste hours every week searching for lost work, switching between apps, and trying to reconstruct past decisions.
Code snippets, critical insights, and problem-solving steps get buried in Slack, lost in browser tabs, or forgotten in old commits. By the time you find what you need, your flow is broken, momentum is gone, and sometimes you end up rebuilding from scratch.
AI can generate content, but it doesn't remember your work. Every day starts from zero.
That's why we built Pieces Long-Term Memory (LTM-2), the first AI that captures everything you work on across desktop applications, browsers, IDEs, and terminals: wherever you work.
It runs directly on macOS, Windows, and Linux, continuously structuring and resurfacing your work so you can pick up right where you left off. It remembers for months, organizing everything just like your brain does.
What LTM-2 Does
Remembers for 9+ months: Code, notes, links, and side conversations that shaped your decisions.
Instant recall: Ask "When was the last time I hit this error?" or "What was I working on before Thanksgiving?" and get answers fast.
Workstream Activities: A full visual timeline showing what you did, when, and why.
Private & Built for Teams
90% runs offline: Keeps your data secure and private.
Full control: Pause memory, block apps, or delete data anytime.
Pioneering OS-Level AI Memory
We're not just launching a tool; we're building a new category: OS-level Artificial Memory.
LTM-2 is a massive leap forward, and LTM-2.5 and LTM-3 are already in the works.
We can't wait for you to try it and see how AI that remembers changes the way you work.
Let us know what you think!
Cheers,
Tsavo
@tsavo_at_pieces The offline privacy and instant search are clutch for devs.
Pieces for Developers
@tsavo_at_pieces
casually ships cutting-edge AI memory
already working on LTM-2.5 and LTM-3
Geeez y'all, save some innovation for the rest of us!!
Depict
@tsavo_at_pieces Very cool launch, was testing Rewind previously and will test Pieces too.
Let's Trip
@tsavo_at_pieces This is a great problem to solve. I have been maintaining the same chat for a very long time on ChatGPT, and the page gets heavy every time I ask for a new answer. How are you addressing this problem? Congratulations on the launch.
Pieces for Developers
Pieces is a revolutionary AI-powered productivity tool designed specifically for developers and content creators who work with text. As a centralized code snippet manager enhanced with on-device AI capabilities, it's become an essential part of my daily workflow.
What Makes Pieces Stand Out: The Long-Term Memory Advantage
Pieces has pioneered a groundbreaking innovation in developer tools - the Long-Term Memory Agent (LTM-2). This isn't just another AI tool; it's the first AI that truly remembers everything you work on across your entire digital ecosystem.
Unlike other tools that force you to start from zero each day, Pieces creates a persistent memory of your development journey. The LTM-2 agent captures everything across desktop applications, browsers, IDEs, and terminals, remembering your work for 9+ months. This solves one of the most frustrating aspects of development work - losing valuable time searching for lost snippets, past solutions, and critical insights buried in various platforms.
I've been using Pieces daily, and its ability to instantly recall previous work with natural language queries like "When was the last time I hit this error?" has dramatically improved my productivity. The visual timeline showing my entire workstream activity helps me maintain context between sessions and pick up exactly where I left off.
Privacy-Focused Architecture
What particularly impresses me is Pieces' commitment to privacy. With 90% of processing running offline and directly on my machine (macOS, Windows, or Linux), my code and sensitive information remain secure. The ability to pause memory collection, block specific apps, or delete data provides complete control over what gets remembered.
Customer-Focused Development
What truly sets Pieces apart is their exceptional team's dedication to user feedback. They maintain active engagement with their community, consistently incorporating user suggestions into their roadmap. This responsiveness has created a product that genuinely addresses real developer pain points - particularly the fragmentation of knowledge across different tools and platforms.
Whether you're a student, indie developer, part of a startup team, or working in enterprise development, Pieces offers significant productivity gains by eliminating the constant context-switching and search time that plagues modern development workflows.
If you're looking to elevate your development efficiency and maintain better focus on complex projects, I strongly recommend giving Pieces and its Long-Term Memory Agent a try. It's not just saving my time; it's fundamentally changing how I approach software development.
@henry_rausch Such a nice explanation
@shivay_at_pieces Thank you so much!!
Pieces for Developers
@henry_rausch Absolutely love this summary Henry! The whole team at Pieces is so thankful for all the support you've shown throughout the past few years. Thank you so much!
@elliezub thank you and the team for the kind words! I'm rooting for you!
Pieces is the best AI app I've used so far! Try it out and enjoy it for yourself!
Pieces for Developers
@hra42 Thank you Henry! Would love to know what your favorite prompts for Pieces are! I still use "Where did I leave off yesterday?" basically every morning
@jackross my day starts with a daily standup, so it's every morning:
After that, I'm prepared for the daily, get a nice start to the day, and connect with my team.
Pieces for Developers
@henry_rausch Love the conversation summaries with #3! Also been using "What didn't I do yesterday that I was supposed to?" recently and it's super helpful for finding things I actually forgot about! Pieces might actually have a better memory than me sometimes!
Pieces for Developers
@hra42 Thank you so much Henry!!
Looks really cool. Does this have an API too? One of our top problems is that when we hit the OpenAI API, memory is lost and each call starts fresh.
Pieces for Developers
@kunwar_raj You can develop with Pieces using one of our open-source SDKs, like this one for Python and this one for C#. If you are interested, please reach out; we are always happy to support people building on top of our stack!
This is truly a game changer! Thanks for sharing this project. Congrats on the launch!
Pieces for Developers
@kay_arkain Thank you for the support Kay! We can't wait to hear what people think about LTM-2!
Pieces for Developers
@kay_arkain Thank YOU for the support Kay! You're who we build this for!!
Pieces for Developers
I am personally SO excited about the new Workstream Activity view. The rollups every 30 minutes are going to be an absolute game-changer for teams. Can't wait to see how people end up using LTM-2!
Pieces for Developers
@elliezub I'm never writing my own standup updates ever again LOL!
Super excited to see Pieces LTM 2!
Think of LTM-2 as that close friend who knows you better than you know yourself. It doesn't just record what you did two hours ago - it understands the contextual relationships between all your system interactions, revealing patterns and connections in your daily workstream that even you weren't aware of.
LTM-2 observes how you move between tasks, recognizes the relationships between seemingly separate activities, and constructs a meaningful narrative of your digital behavior. It can identify when you're context-switching between projects, recognize recurring workflows, and highlight inefficiencies you might have missed. By understanding not just the individual actions but how they relate to your broader objectives, LTM-2 provides insights that transform how you understand and optimize your own work patterns.
Pieces for Developers
@shivay_at_pieces Wow! Thank you for all of the support Shivay, and that is a great way to describe LTM-2, "that close friend who knows you better than you know yourself," love it!
Pieces for Developers
@shivay_at_pieces Always fun having Pieces diagnose problems I didn't even know I was working on! Amazing what AI can do when it actually remembers your code, your conversations, and the actual GOAL of your work!
Definitely a new innovation in the AI space, where everyone is just chasing raw compute and reasoning but no one is catering to what would truly make AI personalised for the end user: the context. All the best for the launch.
Pieces for Developers
@nikhil_l Personalized AI is the logical next step with LLMs - Pieces Long-Term Memory makes that happen!
You'd never hire a personal assistant who woke up every morning with amnesia! AI shouldn't be any different.
Thanks for the support and hope you're loving LTM-2!!
Exciting times for Pieces! The team is building a game-changer with LTM-2, redefining AI memory.
Pieces for Developers
@hanna_stechenko2 This is probably the coolest update we have had yet!
Congrats on the launch! Curious about how you see recent developments in MCP impacting or complementing the offering. Seems like things are heading in that direction and could open up a lot of opportunities for Pieces.
Thanks @steve_caldwell2, sorry, I was not able to catch what MCP is. Can you shed more light on what MCP means? Happy to answer your question!
@steve_caldwell2 @ialimustufa MCP = Model Context Protocol (https://www.anthropic.com/news/model-context-protocol), a standard for how AIs can fetch data. It's good, but not comparable to Pieces.
@ialimustufa @henry_rausch Thanks for providing the additional context there Henry. I'm not sure I'd call it "comparable" to Pieces, but it certainly seems like a protocol that Pieces could leverage to quickly connect to many more data sources. It's kind of the hot girl at the AI context dance right now.
@henry_rausch @steve_caldwell2 Great question Steve! We don't currently use MCP, but we're excited about any standardization of data sources. With LTM-2 we're currently focussed more on the contextual data you're explicitly working with, but in the future we are definitely planning on augmenting the context for a query with these external data sources if we feel that it will provide a more useful answer!
Pieces for Developers
@henry_rausch @steve_caldwell2 @ialimustufa Funnily enough I've been playing with MCP recently as a side project. We have a Python SDK so it wouldn't be too hard to implement this yourself for now using our Python SDK and the MCP Python SDK.
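For anyone curious what that DIY bridge could look like, here is a minimal sketch using the official MCP Python SDK's FastMCP server. The `ask_pieces_ltm` helper is a hypothetical placeholder for whatever query you would build with the Pieces Python SDK; it is not a real Pieces API.

```python
# Minimal sketch of exposing Pieces Long-Term Memory as an MCP tool.
# Only the MCP side uses a real API (the `mcp` package's FastMCP server);
# ask_pieces_ltm is an invented placeholder for a Pieces Python SDK query.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("pieces-ltm")


def ask_pieces_ltm(question: str) -> str:
    # Placeholder: wire this up to PiecesOS via the Pieces Python SDK and
    # return the copilot's answer grounded in Long-Term Memory.
    return f"(placeholder) would ask Pieces LTM: {question}"


@mcp.tool()
def recall(question: str) -> str:
    """Ask Pieces Long-Term Memory a natural-language question."""
    return ask_pieces_ltm(question)


if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so any MCP client can call it
```

Point an MCP-capable client at this script and it gains a `recall` tool backed by whatever you wire into the placeholder.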
Pieces for Developers
If your current AI assistant was a real person, you'd FIRE them.
No idea what you worked on yesterday
Makes you manually give them all your information
And even forgets your name!!!
Cutting-edge LLMs (as great as they are) have the memory of a goldfish.
Pieces is the first AI that remembers EVERYTHING you do.
"Who asked me about that API bug last quarter and how did we solve it?" - is a question that would make ChatGPT break down into tears. Pieces can answer it, show you the links you clicked, find emails where you talked about it, and summarizes the entire thing so you can jump right back into your work with ZERO context swtiching.
Stop wasting time using assistants that don't grow with you.
All your context, all your memories, all your AI models. All in one place.
Pieces for Developers
@jackross Exactly! LTM-2 is the major upgrade in AI assistants that we have all been waiting for. Thank you for the support Jack!
Can't wait to integrate Workstream Activities into my workflow! I'll never need to use the ChatGPT interface again.
Pieces for Developers
@sam_parks_at_pieces Workstream Activities are a game-changer for sure.
MGX (MetaGPT X)
Really intrigued by Pieces' approach to long-term memory! While the memory chunking system looks promising, I'm curious about how you handle memory contamination issues. When multiple conversations or contexts overlap, how do you maintain clarity and prevent incorrect information bleed?
Also wondering about the memory cleanup process - is there a way to identify and remove potentially contaminated or outdated memory blocks? Would love to hear more about your solution to these challenges, as memory pollution has been a significant hurdle in long-term memory implementations.
Pieces for Developers
@zongze_x Thanks for the great technical question; you are spot on, the issue of memory contamination is complex and a core challenge in designing features like this one. I suspect you can appreciate that writing a full answer here is tough - but it would make an excellent topic for a technical article (watch this space). At a high level, our approach to identifying and minimizing contamination happens at three levels:
On entry: we are very selective about what is added to the LTM. By analysing where the user's focus is, and how what they are focusing on currently relates to the big picture of their workstream, we can prevent a lot of corruption at the source.
On roll-up: when we roll up memories into periodic summaries, our agent looks for narratives and themes across workflow elements. When we find contradictions, we resolve them by comparing those narratives to cut out random chatter and keep focus on core tasks.
At query time: when you interact with your workstream data, through the copilot or the summaries, those interactions are used to infer which aspects are useful/truthful and which are not, which allows us to elevate quality information whilst demoting the noise.
Additionally, signals from all of these levels are used to periodically clean contamination from your stored memories. It's a work in progress but I have found the LTM to be much more resistant to context corruption than other solutions out there.
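To make those three levels concrete, here is a purely illustrative Python sketch (invented for this explanation, not Pieces' actual implementation): entry filtering against the current focus, roll-up pruning against the dominant narrative, and query-time feedback that promotes or demotes stored memories.

```python
# Purely illustrative sketch of three-level contamination control;
# the data model and thresholds are invented for the example.
from dataclasses import dataclass


@dataclass
class Memory:
    text: str
    relevance: float      # how strongly it relates to the user's current focus
    quality: float = 0.5  # adjusted later by query-time feedback


def on_entry(candidate: Memory, store: list[Memory], threshold: float = 0.6) -> None:
    # Level 1: be selective at the source; only admit focused memories.
    if candidate.relevance >= threshold:
        store.append(candidate)


def on_rollup(store: list[Memory]) -> list[Memory]:
    # Level 2: keep memories that fit the dominant narrative, drop chatter.
    if not store:
        return store
    avg = sum(m.relevance for m in store) / len(store)
    return [m for m in store if m.relevance >= avg]


def on_query_feedback(memory: Memory, was_useful: bool) -> None:
    # Level 3: elevate information that answered questions well, demote noise.
    delta = 0.1 if was_useful else -0.1
    memory.quality = max(0.0, min(1.0, memory.quality + delta))
```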
Hey, super cool launch. It's a beautiful coincidence: I just finished a paper on long-term memory as a weight in a new form of transformer architecture. The paper is still in review, but your launch is fun and practical.
All the best!
Pieces for Developers
@themisty Hey Krishna! Thank you for checking out the launch! Your paper sounds really interesting, I'm sure a lot of people from my team would enjoy reading it. Where will it be published?
@elliezub Dear Ellie, I am excited to have readers interested already :) It will be on arXiv, fingers crossed. Still under heavy review, lol.
Pieces for Developers
@themisty Sounds great! Looks like we are connected on LinkedIn now, so hopefully you will post about it once it's published. Can't wait to read it!
Pieces for Developers
@themisty Can't wait to read your paper Krishna! Thanks for the support as always!!
@elliezub Thanks, will definitely ping the team. In the meantime, I definitely don't mind if you guys get a feel for my product, nonilion? I wouldn't mind some solid feedback.
Pieces for Developers
@bishoy_hany1 Same! The possibilities are really endless with how it can improve your workflow. Thank you for the support Bishoy!
A @Quadratic integration would be too good... "What was the Python code I used to generate a data visualization in my spreadsheet last week? Run that again here."
Pieces for Developers
@cole_at_quadratic That would be epic Cole! By the way, good luck on the launch tomorrow!
@elliezub Thank you!! Good luck to you as well. Killing it so far!
Pieces for Developers
@elliezub @cole_at_quadratic I think partnerships and integrations are about to become a large part of the business model! Most definitely should explore this.
Shram
This launch sounds incredibly promising! The concept of an AI with long-term memory definitely addresses a major pain point for developers. The ability to recall important details from past projects could be a game-changer for maintaining workflow and enhancing productivity.
Congrats on the launch! Best wishes and sending lots of wins :) @tsavo_at_pieces
Pieces for Developers
@whatshivamdo Really appreciate the support Shivam! LTM is definitely the biggest productivity boost I've gotten from LLMs in a loooong time!
Pieces for Developers
@stemonte I've not tried one that large, I "only" have a 39" ultrawide, but it works fine for me! If you want to lend me your setup for a while, I'll be happy to test for you.
Pieces for Developers
@stemonte Be aware that the larger the monitor, the more system resources will be used. But Pieces typically uses only 1-2% of CPU, so the increase will be minimal. We also only extract memories from the current active window, to keep them more human-centric, so using multiple monitors has no impact. I'm also guessing that if you have large monitors, you probably are not running on a 2016 low-spec IBM ThinkPad. So your CPU impact will be negligible.
This is awesome! Long-term memory in AI is definitely something that's been missing, and you guys nailed it. Huge congrats to the team for making it happen; this will really change how we work with AI.
@henry_habib Thank you very much, do try it out and let us know your favorite prompt!
@henry_habib Thanks for all the support Henry! Can't wait to see how LTM changes the way you use AI!
Really impressive product @tsavo_at_pieces + team!
I enabled LTM-2; does that mean there is no need to install plugins, since it uses the universal screen recording interface? Or would installing the VS Code plugin give better results for memory capture?
Hey @tleyden, thanks for your support! Means a lot.
PiecesOS captures your information and works with a local Couchbase database to store it. Plugins allow you to bring this memory into your IDE of choice. I am usually coding in VS Code and Chrome, so I have both plugins.
While using the plugin, you can provide more context by adding your codebase, etc. Hope this clarifies your question!
@tsavo_at_pieces @tleyden You don't need to install the plugins for memory capture; Pieces runs just like magic. Still, the VS Code extension makes working with Pieces easier, so I would install it anyway.
Pieces for Developers
Hey @tleyden, thanks for the great question and your awesome intuition, you're spot on!
With LTM-2 enabled, our system already leverages uniform screen segmentation (we never actually record any video... too heavy to process), vision processing, and accessibility APIs, so it works incredibly well out of the box. That said, installing plugins (like the VS Code one) can provide even richer data: think deep stack traces, AST details, and more discrete file paths.
Believe it or not, we're already leveraging some integrations sending what we call "Tier 3" data, which gets blended with the lower-level visual and accessibility data and interconnected there on the device through a couple of classic algos. That said, both data sources, direct-from-plugin and uniform at the OS level, are unique and additive, so you'll definitely be seeing us continue to invest in the plugins.
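As a rough, purely illustrative picture of what blending OS-level captures with richer Tier 3 plugin events could involve, the sketch below merges two pre-sorted event streams by timestamp; the event type and field names are invented for the example and are not Pieces' internals.

```python
# Illustrative only: interleave OS-level captures with richer plugin events
# into one chronological workstream; types and fields are invented here.
from dataclasses import dataclass
from heapq import merge


@dataclass(frozen=True)
class WorkstreamEvent:
    timestamp: float  # seconds since epoch
    source: str       # e.g. "os-accessibility" or "vscode-plugin"
    payload: str      # text pulled from the active window or the plugin


def blend(os_events: list[WorkstreamEvent],
          plugin_events: list[WorkstreamEvent]) -> list[WorkstreamEvent]:
    # Both streams arrive pre-sorted; a timestamp merge keeps one
    # interleaved narrative for downstream roll-ups to summarize.
    return list(merge(os_events, plugin_events, key=lambda e: e.timestamp))
```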
Anyway, hopefully that answers your question and thanks again for your support!
Cheers,
-Tsavo