Hi {{firstName|Futurist}},

A lobster booked flights last week. That wasn’t a joke. It joined meetings. It ran code. It had claws. It was called Clawdbot, but now it’s Moltbot, and it went viral for a reason. Not because it talked. But because it acted. That’s the story everyone’s chewing on right now: AI isn’t just answering questions anymore. It’s doing stuff. Helping. Coordinating. Running little pieces of work like a swarm of invisible interns. Let’s take a look at what’s on the menu: Claude is turning your tools into teammates. China’s pushing open-source swarms that rival the big names. And world models are starting to reason, plan, and act across time. After all, the lobster was just the appetizer of what’s to come this year.

So grab your favorite snack, settle in, and let's dip into what's cooking. No time to read? Listen to this episode of Digital Dips on Spotify and stay updated while you’re on the move. The link to the podcast is only available to subscribers. If you haven’t subscribed already, I recommend doing so.

🍢 Finger food for thought

One topic that deserves more than a bite. Not too long. Just enough to chew on.

Headstory: Why a lobster went viral last week, and why it matters

Last week, a lobster went viral. It had claws. It booked flights. It sent Slack messages. It did daily stand-ups. It was called Clawdbot, but has since been renamed Moltbot. It spread like wildfire. Not because of marketing. Not because of big budgets. But because it actually did things. It didn’t just answer questions, it acted. And people noticed. 44,000 GitHub stars. Cloudflare stock jumped by 12%. Suddenly, everyone was buying a Mac Mini. A lobster-shaped agent proved a point Silicon Valley has been circling for years: AI isn’t a chatbot. It’s a co-worker. A doer. You do, however, still need some technical knowledge to get Moltbot running.

Behind the buzz and the funny name, there’s something deeper going on. My guess: by the end of 2026, everyone who wants their own army of agents will have one, without needing the technical know-how to set it up. Workers will multiply themselves, not once, but hundreds of times. One AI assistant for email. Another for meetings. Another to build prototypes, coordinate projects, track competitors, run experiments. You don’t need scale. You become scale. That’s what this little lobster taught us last week.

Which brings us to the bigger shift happening right now. It’s clear: the coordination layer is the next battleground. The messy human glue between systems (notes, status updates, who said what in which meeting) is finally becoming structured. Actionable. Shared. That’s the first domino. And then the real shift begins: one person doesn’t just get faster, they multiply. A team becomes a company. A company becomes a hundred versions of itself, running in parallel, 24/7, without meetings, without burnout.

And this isn’t just theory; you get there simply by managing multiple Moltbots. Take a look at what Ethan Mollick showed happens when regular people, not developers, get their hands on AI agents. His MBA students, most of whom couldn’t code, built full startup prototypes in just four days. Not because they’re tech experts. But because they know how to lead. They know what a good result looks like. They know how to give feedback. That’s the skill that matters now. In a world full of capable agents, your real value is knowing what to ask for, and how to spot when something’s off. That’s not prompt engineering. That’s just good management. That shift in value, from building to guiding, is becoming more obvious by the day.

And this is where Dario Amodei comes in. He’s the CEO of Anthropic. One of the people building the most powerful AI systems in the world. He reminds us that this isn’t a story of magic. It’s a story of timing. Of leverage. Of maturity. These tools don’t just think anymore. They act. And when they act before you are ready, things will break. The Moltbot moment is exciting. But it’s also a test. Because power, real power, is arriving before most companies have figured out how to govern it and actually create real value out of it.

The lobster was just the beginning. If every worker can scale 10x, are you ready for what that means? Curious how a family of lobsters could multiply you, your team, or your entire organization? Send me a message. I’ll help you get it set up.

🍟 Crispy bites

Fresh tech nuggets. Short, sharp, snackable.

Interact with your work tools directly in Claude

TL;DR

Claude allows you to access and interact with your favorite work tools like Slack, Asana, and Figma without switching tabs. This integration means real-time project updates and message drafting right within the Claude app. Built on the open Model Context Protocol, Claude's new feature pushes the boundaries of how tools and AI can interact. As organizations strive for seamless workflows, this enhancement could redefine efficiency in collaborative environments.

Why this matters

  • Work processes can be managed directly in Claude without switching tabs.

  • Claude users can now draft and preview Slack messages directly within the app, build and update Asana timelines in real time without leaving the platform, or create and visualize diagrams instantly in Figma.

  • It supports other interactive tools as well, like Amplitude, Box, and Canva, with more apps coming soon.

  • This is powered by the Model Context Protocol (MCP), an open standard for AI connectivity.

  • Available on Pro, Max, Team, and Enterprise plans, with Claude Cowork support coming soon.
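Under the hood, MCP is a JSON-RPC 2.0 protocol: a client lists a server's tools, then invokes them by name with structured arguments. A minimal sketch of what a `tools/call` request looks like on the wire; note the tool name and arguments here are made-up examples for illustration, not Claude's actual Slack integration:

```python
import json

# MCP messages follow JSON-RPC 2.0. A client invokes a server-side tool
# with the "tools/call" method. The tool name and arguments below are
# hypothetical examples, not a real integration schema.
def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

request = make_tool_call(1, "draft_slack_message", {
    "channel": "#launch",
    "text": "Draft: Moltbot demo at 3pm?",
})

parsed = json.loads(request)
print(parsed["method"])          # tools/call
print(parsed["params"]["name"])  # draft_slack_message
```

Because the envelope is this simple and open, any tool vendor can expose capabilities the same way, which is why one standard can cover Slack, Asana, and Figma alike.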

My Taste

This move by Claude tells us something significant about the direction of workflow technology: it's all about removing friction. Integrating all your key tools right where the conversation happens cuts down on the constant task-switching that seems minor but adds up. What intrigues me is the reliance on the open Model Context Protocol. It’s a promising step towards fluid AI interactions across platforms. As more players open up their standards like this, the future could see traditionally competitive tools working in surprising harmony.

Open-source AI challenger Kimi K2.5 debuts

TL;DR

Kimi K2.5, a new open-source AI model, has been launched, offering advanced coding and vision capabilities. It features an agent swarm system that speeds up complex tasks by up to 4.5 times compared to single-agent setups. This model combines visual and text data training, enabling quick development of front-end interfaces and enhanced visual debugging. As China continues to refine open-source models like Kimi K2.5, the competitive gap with the U.S. narrows, challenging heavyweights like GPT-5.2, Gemini 3, and Claude 4.5 Opus.

Why this matters

  • It boasts state-of-the-art performance on agentic benchmarks like HLE (can an AI reason the way a capable human does?) and BrowseComp (can an AI use the web like a human researcher?), positioning it as a strong open-source contender in AI.

  • Kimi K2.5 can orchestrate up to 100 sub-agents to execute tasks in parallel, increasing speed by up to 4.5x.

  • It integrates 15 trillion visual and text tokens for enhanced coding and visualization tasks.

  • The agent swarm strategy allows complex tasks to run concurrently, significantly reducing completion times.

  • In benchmarks, Kimi K2.5 shows 59.3% improvement in office productivity tasks over previous versions.
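The swarm idea itself is plain fan-out/fan-in: split a job into independent tasks, hand each to a sub-agent, collect the results. A toy sketch using Python's standard library in place of Kimi's actual orchestration; `run_subagent` is a stand-in for a real model call, which is an assumption, not Kimi's API:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a real sub-agent/model call. In a real swarm this would be
# an API request taking seconds, which is where parallelism pays off.
def run_subagent(task: str) -> str:
    return f"result for {task!r}"

def run_swarm(tasks: list[str], max_agents: int = 100) -> list[str]:
    # Fan out: each task goes to its own sub-agent, up to max_agents at once.
    # pool.map preserves input order, so results line up with tasks.
    workers = max(1, min(max_agents, len(tasks)))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_subagent, tasks))

results = run_swarm([f"task-{i}" for i in range(8)])
print(len(results))  # 8
```

The 4.5x speedup claim makes intuitive sense under this shape: when tasks are independent and dominated by waiting on model responses, wall-clock time shrinks toward the longest single task rather than the sum of all of them.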

My Taste

Kimi K2.5 is a clear example of how open-source AI in China is catching up with and challenging U.S.-based closed systems. The ability to perform complex tasks rapidly using a multi-agent approach signifies a shift from sheer processing power to intelligent orchestration. The benchmarks show impressive efficiency and signal a robust answer to heavyweights like GPT and Claude. As organizations increasingly adopt these models, access to flexible, cost-effective AI could redefine competitive advantages in the tech industry. The real challenge ahead will be differentiating when so many advanced tools are available to all. Keep an eye on how this pressures closed models to adapt or risk losing relevance.

Odyssey-2 Pro and API launch: Opening new doors for AI

TL;DR

Today marks a pivotal moment with the release of Odyssey-2 Pro and its developer API. Now, developers can integrate detailed, interactive simulations powered by large video and interaction datasets into their applications. This opens new avenues for creativity, reminiscent of the GPT-2 era. The potential here is immense, from gaming to healthcare, as we stand on the brink of an AI application boom.

Why this matters

  • Odyssey-2 Pro offers interactive simulations in real time, streaming at 720p and 22 FPS.

  • The model's ability to simulate basic physics and dynamics has greatly improved.

  • Developers can embed simulations via three endpoints: interactive, viewable streams, and simulations.

  • The API supports both real-time and offline generation workflows.

  • A developer portal and SDKs for JavaScript and Python have been launched, with iOS and Android coming soon.
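To make the real-time-vs-offline split concrete, here is a hypothetical request-builder sketch for the three endpoint kinds the release mentions. The field names and schema are my own assumptions for illustration; the actual API shape lives in Odyssey's developer portal:

```python
import json

# Hypothetical payloads for the three endpoint kinds mentioned above
# (interactive, viewable streams, simulations). All field names are
# invented for illustration; consult the real docs for the actual schema.
def build_request(endpoint: str, prompt: str, realtime: bool) -> dict:
    payload = {
        "model": "odyssey-2-pro",
        "prompt": prompt,
        "stream": realtime,     # real-time streaming vs offline generation
        "resolution": "720p",   # the release cites 720p at 22 FPS
        "fps": 22,
    }
    return {"endpoint": endpoint, "body": json.dumps(payload)}

req = build_request("interactive", "a ball rolling down a ramp", realtime=True)
print(req["endpoint"])  # interactive
```

The point of the sketch is the split itself: interactive use wants a streaming flag and tight latency budgets, while offline generation can trade latency for quality.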

My Taste

There's a gap in AI often overlooked: understanding causality, not just pattern matching. Odyssey-2 Pro takes a step forward by letting machines simulate what comes next. This isn't just about making smarter AI—it's about giving technology the power to reason and plan ahead, like a chess player sizing up their next move several steps in advance. This shift could redefine how AI systems operate in the real world, moving them from reactive tools to proactive systems. We're not just teaching AIs to answer; we're teaching them to imagine possibilities. Now, there's a platform ready for developers to start building this future.

Audio drives AI's new video frontier

TL;DR

LTX introduces a fresh way to make videos with AI, starting from audio to generate consistent voices and synchronized visuals. Their new Audio-to-Video feature shapes full scenes by using sound, which means fewer manual adjustments and more natural performances. Partnering with ElevenLabs, this tool transforms how we think about video creation, allowing for seamless transitions across platforms without re-recording. It opens up possibilities from music videos to multi-character scenes.

Why this matters

  • Audio drives scene timing, motion, and pacing.

  • Voices stay consistent across generations, streamlining video production.

  • Supports common audio formats like MP3, WAV, and AAC.

  • Enables multi-platform video creation in seconds from a single audio file.

  • Ideal for creating believable AI influencers and branded content.

My Taste

What's compelling here is the shift of control from video to audio, giving creators a new kind of creative freedom. AI video generation often struggles with stability, particularly in maintaining consistent voices. LTX's approach addresses this directly, setting a new standard for what AI-generated content can achieve. As audio becomes the guiding thread, we're seeing the beginnings of true multimedia versatility. The future could see audio-led storytelling expanding beyond traditional constraints, creating dynamic, immersive experiences where every element is perfectly in sync.

🧀 Cheesy pick

A cheesy selection of three tools and one tasty rabbit hole.

  • Blink.new builds full apps from simple prompts in minutes with no code.

  • Twin builds autonomous AI workflows that run your tasks for you.

  • Remotion builds videos in code using React and turns them into MP4s.

  • Bonus: Why agentic AI demands a rethink of strategy, talent and value.

🍱 Leftovers

A roundup of updates that are too cheesy to ignore.

  • OpenAI’s ChatGPT introduces global age prediction to tailor experiences and safeguards for teens.

  • OpenAI’s Prism rolls out a free AI-powered workspace for scientists to write and collaborate using GPT-5.2.

  • Google’s Stitch MCP Server lets your Coding Agent design directly in your IDE and syncs with Antigravity.

  • Google’s Stitch injects design consistency with Skills that create design.md files and React components.

  • Google’s D4RT transforms video into 4D representations, enhancing AI's understanding of space and time.

  • Google’s Antigravity launches Terminal Sandboxing, confining terminal commands to your project folder for safer coding.

  • Google Antigravity Agent Manager unveils a seamless workflow for orchestrating software development and debugging.

  • Google’s Gemini 3 Flash's Agentic Vision transforms image analysis with AI-driven visual reasoning and code execution.

  • Google’s Gemini 3 auto-browses Chrome, multitasks with tabs, and integrates with Gmail, Calendar, and YouTube for AI Pro and Ultra.

  • Google’s Gemini CLI introduces Hooks for customizing your agentic loop like never before.

  • Apple reinvents Siri as a full-fledged chatbot in iOS 27, gearing up to challenge ChatGPT.

  • Anthropic’s Claude now taps securely into your health with Apple Health, Health Connect, HealthEx, and Function Health integrations.

  • Amazon unveils Health AI for One Medical, leveraging Bedrock to manage prescriptions and appointments.

  • World’s API lets you create persistent 3D worlds from text, images, and video.

  • LangSmith’s Agent Builder Template Library delivers ready-to-deploy AI agents for seamless business integration.

  • NVIDIA revolutionizes Voice AI with PersonaPlex-7B, enabling real-time, natural conversations through full-duplex audio.

  • Runway’s Gen-4.5 Image to Video debuts for paid users, enhancing story creation with precise camera control and consistent characters.

  • Alibaba’s Qwen3-TTS unleashes its open-source TTS family, offering voice design and cloning in 10 languages with SOTA performance.

  • Alibaba’s Qwen3-Max-Thinking unleashes advanced reasoning, topping the charts in adaptive tool use and complex problem-solving.

  • Windows releases winapp CLI in public preview to streamline app development across frameworks.

  • GitHub Copilot SDK powers up your app with agentic execution and multi-step planning.

  • Cursor accelerates task completion with subagents, now featuring image generation and clarifying questions.

  • Cursor introduces Agent Skills, enabling agents to discover and execute specialized prompts and code.

  • Cursor enables seamless multitasking with new subagent support for multiple browsers.

  • Gamma Remix reinvents your slides for any audience with seamless transformations.

  • Freepik’s New Video Color Grading tool transforms your clips with professional styles instantly, free for all creators.

  • Exa’s Semantic revolution: Search across 60M+ companies for insights on web traffic, headcount, and more.

  • XREAL's Real 3D™ update transforms your 2D movies and games into immersive 3D with a simple toggle.

  • HeyGen avatars now talk on video with Remotion and Claude Code integration.

  • HeyGen's updated Video Agent empowers everyone to produce stunning videos with ease and precision.

  • Abacus AI’s Autonomous agents debut with infinite memory, ready to tackle any task with persistent information storage.

  • Abacus AI's Deep Agent autonomously builds apps and remembers tasks to execute effortlessly.

  • Tencent HunyuanImage 3.0-Instruct upgrades to native multimodal image editing with unmatched precision and reasoning skills.

  • PixVerse launches V5.6 with cinematic visuals and native vocals, free for Pro+ subscribers.

  • LumaLabs’ Ray3.14 rolls out with 1080p HD visuals, 4x speed boost, and budget-friendly pricing for pro creators.

  • Manus introduces Skills, turning your sessions into customizable expertise with a click.

  • BlackForestLabs Skills wraps FLUX into a one-command install for seamless coding agent integration and sub-second enhancements.

  • DecartAI’s Lucy 2.0 debuts as a World Editing Model, offering 1080p, 30FPS real-time performance.

  • MiniMax launches their Agent, an AI-native colleague that now lives inside your workflows, from code to alerts to sales outreach.

  • Vidu launches Agent 1.0 for effortless video creation with one-click editing and multilingual support.

  • Wondercraft Video turns your ideas into professional explainer videos with AI precision.

  • Mistral Vibe 2.0 powers Le Chat Pro and Team plans with faster code deployment.

  • Kimi Code unleashes an open-source coding agent that's Python-based, VS Code-ready, and fully transparent.

  • Lovable levels up with smarter planning, automated testing, and prompt queuing for smoother app development.

  • n8n Chat Hub lets your entire team use AI agents without fuss, ensuring centralized security and a user-friendly interface.

  • Helix 02 redefines robotics with intelligent control across complex and long-term tasks.

How’d this digital dip taste?

You made it to the bottom. Quick taste test before you go.


This was it. Our forty-eighth digital dip together. Forward this to someone who’s still wondering if AI is here to stay.

Not sure how to turn all this into action? I can help. Whether you're flying solo, leading a team, or running an entire organization, I'll help you figure out how to multiply yourself with AI. What to automate. What to delegate. And how to turn your coordination layer into a force multiplier. Just reply to this email.

Looking forward to what tomorrow brings! ▽

-Wesley