Hi {{firstName|Futurist}},

Nvidia CEO Jensen Huang just said it out loud: “I think we’ve achieved AGI.” Last year, saying this would have sounded like science fiction. Today, it comes straight from the CEO of NVIDIA.

At the same time, Anthropic has been quietly testing a new model called ‘Mythos’ with select customers. Insiders call it a “step change.” Dramatically higher scores in coding. Academic reasoning. Cybersecurity. Some even say it is far ahead of any other AI model in cyber capabilities. It’s part of a new Capybara series, larger and smarter than their current frontier model Opus. More expensive too. Not ready for the public, yet. And reportedly so powerful that the rollout needs to be slow, because of security risks.

And they’re not alone. OpenAI is moving in the same direction. Their upcoming model, code-named ‘Spud’, is described as a massive leap with serious economic impact. They even renamed their department to “AGI Deployment.” We mentioned it in the previous edition of Digital Dips: Morgan Stanley warned that a massive leap in AI capability could hit in the first half of 2026. They weren’t wrong. 2026 looks like the year we hit the steep part of the curve.

On today’s menu: Europe building AI on its own terms, models that improve themselves, Claude stepping onto your desktop as a digital co-worker, OpenAI making hard cuts to double down on what it believes will move the global economy, and memory getting 6x lighter while the race to AGI gets louder.

So grab your favorite snack, settle in, and let's dip into what's cooking. No time to read? Listen to this episode of Digital Dips on Spotify and stay updated while you’re on the move. The link to the podcast is only available to subscribers. If you haven’t subscribed already, I recommend doing so.

🍟 Crispy bites

Fresh tech nuggets. Short, sharp, snackable.

Mistral AI introduces Forge for enterprise AI models

TL;DR

Mistral AI has launched Forge, a new platform for enterprises to build AI models using their proprietary data. This shift caters to the need for AI solutions that internalize company-specific knowledge, moving beyond generic models. Key partners like ASML and the European Space Agency are already on board. As AI becomes a cornerstone of enterprise tech, the focus shifts to strategic autonomy and customization.

Why this matters

  • Forge uses proprietary data instead of public datasets for training.

  • Partners include ASML, Ericsson, and the European Space Agency.

  • Supports both dense and mixture-of-experts model architectures.

  • Allows continuous learning through reinforcement learning methods.

  • Promotes strategic autonomy by keeping data control in-house.

My thoughts

It's refreshing to see Europe investing in AI infrastructure rather than just policy tweaks. Mistral's Forge stands out by anchoring AI in the unique knowledge of each enterprise rather than offering a one-size-fits-all solution. The real battleground isn't over who has the best model but over who can provide genuine ownership of context without compromising on security or control. This approach could redefine AI utility in Europe where regulatory landscapes are challenging and ownership matters. More tailored solutions like this are likely what we'll see shaping the future of AI in tightly regulated sectors.

MiniMax-M2.7 builds its own future

TL;DR

MiniMax M2.7 isn't just another AI model; it's actively shaping its own development. By running over 100 iterations, it improved its performance by 30%. The model also monitors research workflows, covering 30-50% of tasks, and excelled in machine learning competitions, achieving a 66.6% medal rate. This marks a shift towards AI models capable of self-evolution, indicating a future where AI systems might operate with minimal human intervention.

Why this matters

  • Achieved 30% performance improvement through 100+ autonomous iterations

  • Handles 30-50% of the research workflow in reinforcement learning experiments

  • Scored 56.22% on SWE-Pro, approaching Claude’s Opus level

  • Achieved a 66.6% medal rate in ML competitions, tying with Gemini 3.1

  • Capabilities extend to software engineering, office automation, and interactive entertainment

  • Demonstrates potential for full AI autonomy in development processes

My thoughts

That's really fascinating: MiniMax M2.7 participated in its own development. Watching it autonomously run over 100 loops, analyzing failures, tweaking code, and improving performance is impressive. And it doesn't stop there: handling up to half the research workflow and excelling in competitions shows its capability. MiniMax is clearly moving towards fully autonomous AI processes. This isn't a theory anymore; it's a glimpse into a future where AI models manage themselves.

Google AI Studio transforms app development

TL;DR

Google AI Studio has launched a revamped development experience that bundles everything needed to create production-ready apps directly in the browser. Key features include the new Antigravity coding agent and built-in Firebase integration for databases and authentication. This streamlined approach could reshape the market by making scalable app deployment as easy as writing a prompt. The first-party infrastructure Google provides could be a game-changer for developers.

Why this matters

  • Google AI Studio now includes Antigravity for fast code generation

  • One-click deployment to the web is available in the browser

  • Firebase is integrated for databases and secure sign-in

  • Supports modern web development with tools like Next.js, Three.js

  • Remembers your work across devices and sessions

  • Addresses a critical gap in vibe coding platforms lacking built-in backends

My thoughts

This is where it gets interesting. And a little painful for some parties. While startups like Bolt, Replit and Lovable focused on making creation feel smooth and fun, Google quietly bundled the part that actually matters in production. Backend, hosting, auth, database. Live in one tab. And for free. The others charge monthly fees and still ask you to glue together five tools before you can ship something that doesn’t break. And when things do break? Token bills climb. Context disappears. The AI starts rewriting parts of your app you didn’t touch. The real gap wasn’t building nicer prototypes. It was owning the layer underneath. And Google owns that layer. With millions of developers already inside their ecosystem, fixing the “front door” is a much smaller problem than building world-class infrastructure from scratch.

Google shrinks AI memory footprint with TurboQuant

TL;DR

Google just unveiled TurboQuant, a new algorithm that reduces large language models' working memory by a stunning 6x. This shift doesn't compromise accuracy and speeds up processing by up to 8x. Historically, AI's memory demands have taxed both technology infrastructure and budgets. With TurboQuant, running AI becomes vastly more efficient, cheaper, and scalable. The ripple effect could dramatically alter existing memory and GPU market dynamics. Ultimately, it suggests that expensive AI hardware costs could fade into the background.

Why this matters

  • Google introduces TurboQuant with 6x memory compression

  • Processing speeds on AI models improve up to 8x

  • No retraining or accuracy loss across benchmarks

  • AI cache memory shrinks from 32 bits to 3 bits per value

  • GPU memory costs make up 55% of AI compute spending

  • Hyperscalers are investing $700 billion in AI infrastructure by 2026

My thoughts

Google may have just taken a needle to the AI memory hype, and the market reacted fast. In simple terms: AI systems need a lot of temporary memory to remember your conversation, and that memory has become one of the most expensive parts of running AI. Google now claims it can shrink that memory by 6x without making the model worse, and even make it up to 8x faster. If that holds, companies won’t need as much high-end memory hardware to serve the same number of users. That matters because AI has pushed memory prices up sharply, based on the belief that more AI always means more chips. But if smart software reduces the need for that hardware, the economics change. The same machines could handle more users, longer conversations and lower costs. Nothing breaks overnight, contracts are signed and demand is still strong, but strategically this is big: when software removes a bottleneck, entire markets quietly reset.
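Google hasn’t published the mechanics behind the numbers above, so here’s a back-of-the-envelope sketch of how low-bit cache quantization works in general, not TurboQuant itself. The function names and the group size are my own illustrative choices; the arithmetic at the bottom shows why a headline figure like 6x can sit below the raw 32/3 ≈ 10.7x ratio once per-group scale metadata is counted.

```python
import numpy as np

def quantize_3bit(values, group_size=32):
    """Naive 3-bit quantization with one fp16 scale per group.
    Illustrative only; not TurboQuant's actual algorithm."""
    groups = values.reshape(-1, group_size)
    # One scale per group maps the group's max magnitude onto [-3, 3].
    scales = np.abs(groups).max(axis=1, keepdims=True) / 3.0
    scales[scales == 0] = 1.0  # avoid divide-by-zero for all-zero groups
    codes = np.clip(np.round(groups / scales), -4, 3).astype(np.int8)
    return codes, scales.astype(np.float16)

def dequantize(codes, scales):
    """Reconstruct approximate fp32 values from 3-bit codes and scales."""
    return codes.astype(np.float32) * scales

# Memory math: 32-bit values become 3-bit codes plus one 16-bit scale
# shared across each group of 32 values.
bits_before = 32
bits_after = 3 + 16 / 32          # 3.5 bits per value including scale overhead
print(bits_before / bits_after)   # ≈ 9.14x raw compression before other metadata
```

The per-group scales are what keep accuracy intact: within each group the rounding error is bounded by half a quantization step, so aggressive bit-widths stay usable as long as groups are small enough.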

Claude now operates your computer

TL;DR

Claude has taken a big step forward, now capable of autonomously controlling your entire computer. This includes opening apps, navigating browsers, and filling spreadsheets. Running initially on macOS, it enhances productivity by allowing tasks to be assigned from a phone and completed on a desktop. The strategic move positions Claude as a "digital employee" capable of handling complex computer tasks, adding another layer to what AI can do right now.

Why this matters

  • Claude can now use a mouse, keyboard, and screen autonomously

  • Works with connected apps like Slack and Calendar; requires permission for others

  • Allows task assignment from mobile, leading to completed work on desktop

  • Feature is released under Claude Cowork and Claude Code for macOS

  • Runs only when the desktop is active, preventing runaway tasks

My thoughts

Anthropic just delivered the biggest AI product launch of the year so far. In 30 days they rebuilt OpenClaw, the open-source agent that pulled 250K GitHub stars, but this time wrapped in enterprise guardrails. Where OpenClaw let you text an agent from WhatsApp to control your desktop, Anthropic shipped Dispatch: persistent threads from phone to desktop inside their own app. Where OpenClaw used Discord and Telegram as control panels, Anthropic added Code Channels with controlled bridges. Where OpenClaw offered full OS access and 100+ unreviewed skills (and the malware that came with it), Anthropic introduced curated plugins, admin controls, permission prompts and sandboxed execution. Even the always-on “heartbeat” daemon has an opposite: Claude only runs when your desktop is open, friction by design to prevent runaway automation. The strategy is obvious. Let open source test the edges. Then ship the safe version for the enterprise before anyone else does. Yes, there are gaps. No WhatsApp or iMessage integration yet. No headless mode. Your Mac must stay awake. But make no mistake: this is a digital employee. Claude can use your screen like a human, any app, any browser, any spreadsheet, and you can text it from your phone and return to finished work. Meanwhile OpenAI hired the builder of OpenClaw, announced internal reorganizations, and is still running agents in cloud sandboxes without local file control or true async desktop handoff. The demand signal is clear: executives want an AI that works on their machine, not in a demo browser. Anthropic shipped it. Others are still aligning slide decks.

OpenAI kills Sora to fuel its next model

TL;DR

OpenAI is shutting down its Sora app and discontinuing all video generation capabilities inside ChatGPT. Despite a $1 billion deal with Disney only four months ago, the company is pivoting away from video to focus on its newest language model, codenamed Spud. This decision follows the significant revenue growth displayed by competitors like Anthropic, which have thrived by concentrating on text and code solutions. Expect future announcements on timelines and data preservation for current Sora users.

Why this matters

  • OpenAI ends Sora, its video generation app, after just six months.

  • Disney's $1 billion plan for video content through Sora is now defunct.

  • Anthropic reported $19 billion in annual revenue with text and code, no video.

  • Sora generated only $1.4 million in consumer revenue over six months.

  • Spud, OpenAI’s new language model, is ready to accelerate economic growth.

My thoughts

OpenAI didn’t tweak the video strategy. They killed it. App gone. API gone. No video inside ChatGPT. And Disney’s $1 billion deal? Dead in four months. This is not a product clean-up, it’s a full reset driven by economics. While Sora burned GPUs and delivered $1.4 million in consumer revenue, Anthropic quietly built a $19 billion run rate with one surface: chat, code, enterprise. No video circus. OpenAI looked at the numbers and followed the money. So now ChatGPT, Codex and the browser become one focused product, Instant Checkout is cut, and the Sora team moves to robotics. Even the product org gets renamed to “AGI Deployment,” which tells you how leadership wants this story framed. Either they are closer to something big, or this is the sharpest strategic pivot we’ve seen in years.

🧀 Cheesy pick

A cheesy selection of three tools and one tasty rabbit hole.

  • Superscale turns one sentence into ads ready to launch.

  • CrowdReply makes AI search visibility something brands can grow.

  • Perceptis turns your sources into business-grade slides in minutes.

  • Bonus: Deloitte shows AI is boosting efficiency, but the real upside lies in reinventing the business.

🍱 Leftovers

A roundup of updates that are too cheesy to ignore.

  • World’s AgentKit launches to human-verify automation in the agent economy.

  • Perplexity launches Comet Enterprise, the ultimate AI browser for streamlined team productivity.

  • Anthropic’s Claude Cowork adds a “Dispatch” feature that lets you resume work from any device.

  • Anthropic’s Cowork introduces Projects, keeping your tasks and files organized in one place.

  • Anthropic’s Claude Code introduces cloud-based task scheduling for hassle-free automation.

  • Anthropic’s Claude Code channels now let you control sessions via Telegram and Discord on your phone.

  • Google's Stitch transforms your spoken ideas into high-fidelity designs with AI prowess.

  • Google unveils a Developer’s Guide for B2B AI agents using 6 open standards for seamless integration.

  • Google unveils Gemini in its Marketing Platform, uniting inventory and boosting ROI with top-tier AI.

  • Google’s Veo and Google Ads team up to transform images into dynamic videos for your campaigns.

  • Google’s Vibe Coding XR debuts, enabling Gemini Canvas users to transform prompts into interactive WebXR experiences.

  • OpenArt Worlds lets you generate a fully navigable 3D environment from a single prompt or image.

  • NVIDIA's Vera Rubin powers instant HD video creation, redefining real-time media dynamics.

  • Stripe introduces Machine Payments Protocol (MPP), an open standard, internet-native way for agents to pay.

  • Gamma evolves with Imagine design, seamless Connectors, and AI-native Templates for the ultimate creative toolkit.

  • Visa unleashes Visa CLI, an experimental tool straight from Visa Crypto Labs.

  • Gumloop introduces Gumstack as a unified dashboard for AI agent data visibility and access control.

  • Krea introduces Node Agent: describe what you want, and watch their agent build and refine creative workflows to make it happen.

  • Cursor debuts Composer 2, a lean and cost-effective coding agent poised to rival Opus 4.6.

  • Cursor lets you run cloud agents on your own infrastructure, keeping code and tools in-house.

  • Lovable expands from app-building to multitasking, doubling as your data scientist, business analyst, and more.

  • Lovable integrates Twilio, adding SMS, WhatsApp, 2FA, and AI voice to your projects.

  • Lovable ushers in AI-powered vibe coding pentests with @AikidoSecurity, slashing time and cost for app security.

  • Microsoft’s MAI-Image-2 lands in the top 5 for text-to-image creation, crafted for true artists.

  • Devin clones itself to tackle big tasks by delegating to virtual Devins.

  • Meta introduces AI support assistants on Instagram and Facebook to enhance user experience and content enforcement.

  • Langchain launches Fleet: an enterprise hub for managing your team of smart agents across daily channels.

  • White House unveils National AI Policy Framework to unify federal AI strategy across all states.

  • Trump taps tech titans Zuckerberg, Ellison, and Huang for AI-focused technology council.

  • OpenAI plans a "superapp" to unify its offerings and streamline the user experience.

  • OpenAI aims for an AI researcher by 2026, eyeing a full-fledged multi-agent lab by 2028.

  • OpenAI’s ChatGPT unveils a new feature allowing easy file browsing and referencing in the Library tab, rolling out worldwide.

  • OpenAI teases Spud, a powerful model set to boost the economy, alongside the rebranding to "AGI Deployment."

  • WordPress introduces AI agents to automate writing and publishing tasks.

  • Palantir's AI to become the backbone of the US military's core system.

  • Syndica now offers enhanced blockchain scalability with Firehose integration for developers.

  • SWIFT launches blockchain-powered 24/7 cross-border payments with 25+ banks.

  • MoonPay open-sourced the Open Wallet Standard (OWS), a secure wallet standard for AI agents and developer tools.

  • Spline releases Omma, an AI agent for crafting 3D designs, websites, and apps.

  • Apple unveils a revamped Siri with a dedicated app, Dynamic Island chatbot, and unified search.

  • Apple distills Google's Gemini model to enhance Siri's AI capabilities.

  • Shopify’s Agentic plan lets non-Shopify brands sell across AI chat platforms by adding products to a universal catalog.

  • Cloudflare’s AI Search lets you create public endpoints and UI snippets effortlessly to enhance your website's search capabilities.

  • Luma’s Uni-1 debuts as an innovative model that thinks and generates pixels simultaneously with heightened intelligence.

How’d this digital dip taste?

You made it to the bottom. Quick taste test before you go.

Login or Subscribe to participate

This was it. Our fifty-third digital dip together. Forward this to someone who sees AI as a feature, while others use it to change the game.

If this edition made one thing clear, it’s this: The models are not the bottleneck anymore. Anthropic ships digital employees. OpenAI restructures around deployment. Mistral AI builds Forge so enterprises can train on their own data. If you’re a leadership team thinking about autonomy, control, and real operational impact, this is where I can support you. Not by adding another layer of tooling. But by helping you translate these shifts into structure. Mapping how your organization actually runs. Identifying where knowledge should live inside models. Designing workflows where agents don’t just assist, but own outcomes. The labs have made their move. If you want to make yours, I’m here to help you. Reply and let’s design how this fits inside your business.

Looking forward to what tomorrow brings! ▽

-Wesley

Other dips you might like as well