The Roundup
April 20th, 2025
GPT in your terminal
AI is teaming up with dolphins

gm, yes you read that title right. Happy Sunday! In today's weekly Roundup, we've got OpenAI's new coding tool, Claude's new research tool, a new voice AI, Google's Dolphin AI, and a trending forum post about poop.

Weekly leaderboard highlights
OpenAI Codex CLI — Frontier reasoning in the terminal
OpenAI Codex CLI brings GPT‑4.1 power right into your terminal. You can run scripts, edit files, and even commit changes with Git, all by chatting with your shell.
Aqua Voice — Incredibly fast voice input for Mac and Windows
Aqua Voice gives you fast, system-wide voice input for Mac and Windows. It works in any text field, launches in 50ms, and uses accessibility APIs instead of screen scraping. Just press a shortcut and speak.
Claude Research — Claude takes research to new places
Claude Research gives Claude a new research mode. It can browse the web in real time, cite sources, and pull context from your Gmail, Calendar, and Docs. Ask it to write a reply, summarize a paper, or prep you for a meeting — it’ll do the digging so you don’t have to.
Universal Memory MCP — Your memories, in every LLM you use.
Universal Memory MCP makes your memory portable across ChatGPT, Claude, Gemini, and more. Install it once and carry your links, chats, and notes wherever you go. No copy-paste. No re-explaining yourself.
Notion Mail — The first inbox that thinks like you
Notion Mail plugs into Gmail so you can handle your inbox from inside your Notion workspace. It helps you write replies, pull messages into docs, and turn threads into action items — all without jumping between apps.
FROM THE FRONTIER
Dolphins 🤝 AI

Google’s newest AI project isn’t trying to pass the bar exam or write your emails. It’s trying to talk to dolphins.

DolphinGemma is a lightweight 400M-parameter model built to decode the squeaks, whistles, and clicks of Atlantic spotted dolphins. It’s part of a wild collaboration with the Wild Dolphin Project and Georgia Tech, and it runs on Pixel phones. Not to generate selfies, but to analyze dolphin sounds in the ocean. In real time. From a boat.

It works kind of like a language model. The AI listens to dolphin sounds and tries to predict what might come next. Like autocomplete, but for underwater whistles.
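For the curious, here's a tiny Python sketch of what "predict what might come next" means. To be clear, this isn't DolphinGemma (the real model works on raw audio, not tidy text labels); it's just a toy bigram counter over made-up sound names to show the autocomplete idea.

```python
# Toy sketch of "autocomplete for whistles": NOT DolphinGemma itself, just a
# bigram counter over invented vocalization labels to show what it means for
# a sequence model to predict the next sound.
from collections import Counter, defaultdict

# Hypothetical sequence of labeled dolphin sounds (made up for illustration).
sounds = ["whistle_a", "click_burst", "whistle_b", "whistle_a",
          "click_burst", "whistle_b", "whistle_a", "click_burst"]

# Count which sound tends to follow which.
following = defaultdict(Counter)
for prev, nxt in zip(sounds, sounds[1:]):
    following[prev][nxt] += 1

def predict_next(sound: str) -> str:
    """Return the most frequently observed follower of `sound`."""
    candidates = following.get(sound)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("whistle_a"))   # -> "click_burst"
print(predict_next("click_burst")) # -> "whistle_b"
```

The real model does this over audio rather than labels, and at a far bigger scale, but the core "guess the next sound" idea is the same.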

Even weirder? They’re testing two-way communication. Dolphins are trained to associate certain sounds with objects, and when they mimic those sounds back, the AI translates it. Basically, if a dolphin wants a toy, it can ask for it. Which is adorable and also mildly terrifying.
