- Digital Dips
- Posts
- We’re living in the exponential
We’re living in the exponential
Dipping into the digital future: Claude learns new Skills and the next phase of AI is here with Gemini 3

Hi Futurist,
While I was busy buying my first home-viewings, paperwork, and so on-I didn’t get to write this edition the way I wanted to. And of course, that’s when the AI world decided to go full throttle. New models, new breakthroughs, everything changing, again. It’s like the second I took my eyes off the track, the race doubled in speed. So, this edition comes slightly late, slightly chaotic, and packed with everything I’ve been catching up on. The rumors had been wild all week that Gemini 3 would drop yesterday. Polymarket even put the odds around 85 percent, and let me tell you…it was worth the wait. The moral of this edition? We’re living in the exponential, and the last few weeks just proved it. Therefore, I am sending you insights, inspiration, and innovation straight to your inbox. Let’s dive into the depths of the digital future together and discover the waves of change shaping our industry.
💡 In this post, we're dipping in:
📣 Byte-Sized Breakthroughs: Quantum chips leave supercomputers in the dust, Claude stops prompting and starts thinking in Skills and Google shift the baseline with Gemini 3.
🎙️ MarTech Maestros: Everyone’s using AI, but most aren’t getting much out of it. A new report shows companies are stuck in pilot mode, while a few frontrunners are quietly scaling, rewiring how they work, and leaving the rest behind.
🧐 In Case You Missed It: Claude gets serious about code, finance, and life sciences. Google goes wild with agents, annotations, and AI copilots for Maps. OpenAI dabbles in group chats and GPT-5.1 tweaks. And elsewhere? Budget voice cloning, no-code app builders, humanoid robots, and AI turning music into motion. It’s a lot. And it’s all happening now.
Do you have tips, feedback or ideas? Or just want to give your opinion? Feel free to share it at the bottom of this dip. That's what I'm looking for.

No time to read? Listen to this episode of Digital Dips on Spotify and stay updated while you’re on the move. The link to the podcast is only available to subscribers. If you haven’t subscribe already, I recommend to do so.

Quick highlights of the latest technological developments.
Headstory: We’re living in the exponential
The future just hit fast-forward. Quantum computing just crossed a milestone we’ve been chasing for decades. It didn’t just outperform one of the best supercomputer in the world to date, it out-verified it. Let me explain why this matters. Imagine trying to solve a 1,000-piece jigsaw puzzle. A regular computer goes piece by piece. A quantum computer looks at every possible combination at once. That’s not science fiction. That’s today. Google’s new chip, Willow, just solved a molecular structure problem 13,000 times faster than it would’ve taken one of the best supercomputer on Earth. Their quantum computer did it in seconds. And then it did it again. And again. Same result. That’s what "verifiable" means. It’s not guesswork. It’s proof.
At the same time, Google just introduced Nested Learning, a way to train AI like the human brain learns. Not just storing information, but learning how to learn over time. It solves one of the biggest problems in AI: forgetting what it learned yesterday when you teach it something new today. It’s like building AI with memory, intuition, and long-term thinking.
On another front, for years, AI models have operated by predicting one word at a time, token by token. Now, a new model called CALM is flipping that idea. Instead of thinking word by word, AI can think in full ideas per step. That’s 4x fewer steps. That’s 44% less compute. That’s not an upgrade. That’s a paradigm shift.
And now add this: Google just publicly released Gemini 3. Gemini 3 scores dramatically higher on the ARC‑AGI 2 benchmark, a benchmark designed to stress test the efficiency and capability of state-of-the-art AI reasoning system. Where previous Gemini models limped along around 4–6 %, Gemini 3 Pro hits about 30 % and its Deep Think mode hits 45 %. That isn’t doubling. That’s exponential. Google now calls it “an important step towards AGI.” They aren’t hiding it anymore. They are saying: this is the next phase. When a model built for novel problems starts delivering at that scale, it signals something profound: we’re not just improving. We’re shifting the baseline.
So why should this matter to you? Because, the era of linear planning is over.
The next five years won’t look like the last five. We’re no longer living in a world of 5% year-over-year improvement. We’re living in a world where your competition could automate 80% of a function in 18 months. And not just automate it, outperform your current process in both speed and quality.
Here’s what leaders need to understand: we’re entering the exponential phase. Progress will look flat… until it isn’t. And by the time it bends up, you’ll either be riding it, or watching it disappear into the distance.
Just look at the last two years. Two years ago, AI could handle tasks that took a person a few seconds. This year, it handled tasks that took a person one to two hours. Within two years, it will take on tasks that would take a skilled team weeks.
And soon after, tasks that would take a lifetime. It feels abstract until you step back and look at the pattern. Every few months, capability doubles. And when you trace that curve backward, what looked “slow” was simply the long, quiet climb before the vertical rise.
That means your five-year roadmap must change. Now. So, how should you respond?
Shorten your feedback loops. Quarterly planning isn’t enough. Treat your AI and tech bets like a venture fund, lots of small, fast experiments.
Rebuild your assumptions. If your model assumes skilled labour, compute power, or technical talent as bottlenecks, it’s outdated.
Invest in intelligent infrastructure. From cloud platforms to modular org design, you’ll need a business that adapts as fast as the technology around it.
Hire for what’s coming, not what’s here. You don’t only need AI engineers. You also need people who can see patterns, simplify complexity, and steer exponential systems with clarity.
Prepare for abundance. Scarcity of intelligence, code, or knowledge is disappearing. The value will shift to trust, time, taste, and vision. Lead with those.
The companies that will win the next decade aren’t just digital. They’re adaptive. They don’t wait to be proven right, they bet when the signal is weak but rising. Let’s not forget: in 1998, Google didn’t exist. By 2008, it was everywhere. That’s how exponentials work. They feel like nothing. Then they feel like everything. In a world that’s changing this fast, the biggest risk isn’t moving too early, it’s waking up too late.
The next phase of AI is here with Gemini 3
TL;DR
Google introduces Gemini 3, its smartest AI model so far. With new levels of reasoning, better multimodal understanding, and a Deep Think mode, Gemini 3 outperforms previous models across all major benchmarks. It’s rolling out today across Search, the Gemini app, and developer tools like AI Studio and Google Antigravity. Gemini 3 marks a serious step towards AGI.
Read it yourself?
Sentiment
The initial reaction? Better than expected. Many thought we’d hit the ceiling with current hardware limits, but Gemini 3 proves otherwise. Most people are mentioning they are vibe coding complete apps like a lofi beat maker app in just 15 minutes or complete web designs in one shot. It feels like we’ve entered a new phase where “vibe coding” isn’t a meme, it’s workflow.
My thoughts
The benchmarks Google is showing are insane. Just 8 months after 2.5 and only 6 months since its update, Gemini 3 doesn't just improve, it smashes expectations. On the AGI-Arc 2 benchmark, it jumps from 6% to over 30%. The Deep Think model? Hits 45%. That's not a step forward, that’s a launch. If this doesn't prove what I wrote in the headstory, I don't know what will. Google themselves publicly announced that, for them, this is an important step towards AGI. It’s the first lab that speaks this openly about it. If that doesn’t add up to what I wrote earlier, again, I don’t know what will. We’re entering a new shift. Gemini 3 isn't just a tool, it’s a very good thinking and doing partner, now in the hands of billions users world wide. It proves we’re not slowing down. Not even close. Three years ago we clapped when a chatbot wrote a poem. Now I’m building with an agent. The chatbot era? That’s over. We’re entering the age of digital coworkers. Yes, Gemini 3 still needs a manager. But “human in the loop” now means “human directing AI,” and that may be the biggest shift since ChatGPT dropped.
Claude learns new Skills
TL;DR
Anthropic introduced Agent Skills: portable, stackable, task-specific add-ons that let Claude work like a specialist, without needing a new prompt each time. You can now build skills once and use them across Claude apps, Claude Code and API. They auto-load when relevant and even support executable code.
Sentiment
Reactions are positive, even if there’s some confusion. Are these tools? Sub-agents? Templates? What’s clear is this: people love that Claude can now adapt to workflows dynamically, calling only what it needs, when it needs it. No bloated context windows. No repetitive prompts. The model adapts. That alone is a win.
My thoughts
This looks interesting. Not because of what Skills do right now, but because of how they’re designed. They’re composable. Portable. Auto-loading. You can stack multiple Skills, mix them across apps, code, and API, and Claude figures out when to use what. No prompts. No retraining. It just works. This is the Unix philosophy applied to AI agents. Small pieces of logic, reusable everywhere, working together like a pipeline. Whoever builds the best library of Skills wins. This isn’t a nice-to-have. It’s the start of an ecosystem. And here's the clever bit: Claude doesn’t become smarter in general, it becomes a specialist in your exact workflow. Your brand. Your process. Your files. Without re-prompting every damn time. This isn’t just an upgrade. It’s a framework. And it changes how AI fits into real work.
More byte-sized breakthroughs:
World Labs turns prompts into explorable 3D worlds
The team at World Labs has just made their model Marble generally available, meaning anyone can start turning simple text, images, videos into full, explorable 3D environments. For marketers, storytellers or design‑minded professionals, this opens up a new playground: imagine building immersive brand spaces, virtual showrooms, or interactive product demos without a full 3D team.A new open-source model from China outperforms GPT-5
Kimi K2 Thinking is a new open-source model from China that outperforms GPT-5, Claude, and Grok on key benchmarks, especially in reasoning, coding, and web tasks. It’s beating one of the the most expensive closed models in the world, it's cheaper to run, available via API, and released under a modified MIT license. Which means, you can use it, adjust it and deploy it anywhere.OpenAI launches Atlas browser with ChatGPT built-in
OpenAI’s new browser, Atlas, weaves ChatGPT into your web experience, offering help, suggestions, and even tab control. You can ask questions, automate tasks, and let ChatGPT remember your online activity. But, AI browsers like this come with serious privacy risks. Prompt injection attacks, where hidden commands on websites hijack your AI assistant, are a growing issue. So, for now? Maybe stick with the browser you trust.

A must-see webinar, podcast, or article that’s too good to miss.
AI in 2025: promising, but not yet performing
Nearly every company is using AI, but here’s the twist: most are still just playing with it. Pilots, prototypes, experiments. Only a small group is actually scaling it across the business and seeing real impact. And those high performers? They think bigger. They’re not just chasing efficiency, they’re redesigning their workflows, pushing for innovation, and getting leadership to fully back the shift. This new survey peels back the layers to show where organizations stand, and what it really takes to move from potential to performance.

A roundup of updates that are too cheesy to ignore.
Anthropic’s Claude Code empowers you to outsource coding tasks directly from your browser.
Anthropic’s Claude for Life Sciences enhances research with new tool connectors and scientific partnerships.
Anthropic’s Memory launches for Max users, offering focused project-scoped context and reaching Pro soon.
Anthropic’s Claude expands for Finance with an Excel add-in and real-time analytics connectors.
Fish Audio unveils S1, a budget-friendly Text-To-Speech model letting developers clone voices for free.
Atlassian partners with Lovable to streamline workflows and spark innovation for creative teams.
Lovable integrates with Shopify to build and launch stores in minutes using prompts.
Google AI-powered apps let you mix AI capabilities with ease using the new build mode.
Google’s Google Veo 3.1 adds Portrait Mode to scenebuilder and lets you annotate any image.
Google AI Studio's Annotation Mode lets you refine app elements by simply highlighting them for changes.
Google’s Julius Slack Agent serves instant data analyses directly in your Slack conversations.
Google debuts next-gen conversational agents with low-code builder and natural voices.
Google’s Pomelli launches to craft on-brand marketing campaigns by understanding your business identity.
Google Gemini's Canvas transforms project notes into presentations with a single prompt.
Google’s Gemini transforms Google Maps into your hands-free co-pilot, finding spots and planning routes.
Google’s Gemini Deep Research taps into Google Workspace, offering enriched reports from Gmail, Drive, and Chat for Pro and Ultra users.
Google’s Vertex AI Agent Builder accelerates from prototype to production with new deployment and scalability tools.
Google’s Opal expands globally, empowering 160+ countries to build AI apps with no code.
Google’s Space DJ lets you pilot a spaceship through a 3D constellation of music, creating dynamic soundtracks with every move.
Google’s Gemini API unveils File Search Tool offering free storage and query time embeddings for smarter AI systems.
Google’s NotebookLM auto-suggests report formats from your docs, creating glossaries and critiques in a flash.
Google’s NotebookLM CustomVideo introduces prompt-based video overview styles for personalized creation.
Google’s NotebookLM Deep Research crafts organized reports with annotated sources straight to your notebook.
Google adds images as sources within NotebookLM. Whether it's a photo of handwritten notes, NotebookLM produce outputs from it.
Google’s Analytics Advisor integrates with Google Analytics to deliver instant insights through conversational AI.
Google introduces Ads Advisor to optimize your campaigns directly within Google Ads with personalized recommendations.
Google DeepMind unveils SIMA 2, an AI agent mastering language and virtual world tasks with Gemini's power.
Google Antigravity lets developers build faster with AI agents powered by Gemini and Nano Banana.
ElevenLabs Voice Isolator enhances video with studio-quality audio for films and social media.
ElevenLabs upgrades Agents Platform with LLMs for lightning-fast, cost-efficient voice conversations.
Elevenlabs’ Scribe v2 Realtime transcribes speech in 150ms across 90+ languages for instant voice applications.
ElevenLabs unveils Image & Video, merging top audio, image, and video models with premium sound enhancements.
Freepik unveils Spaces for creative collaboration and innovation, launching soon.
Freepik unveils Spaces: an infinite canvas for real-time team collaboration and workflow automation.
Freepik Spaces unveils Camera Angles, redefining how you rotate, shift, and preserve every view.
Alibaba’s Qwen Deep Research now transforms reports into live webpages and podcasts with Qwen3-Coder, Qwen-Image, and Qwen3-TTS.
Hunyuan World 1.1 goes open-source, unlocking video-to-3D and multi-view world creation on a single GPU.
Runway’s Remove from Video app makes editing a breeze. Just upload and describe what to cut.
Runway introduces Apps for Advertising, streamlining product shots and design mockups effortlessly.
Runway’s new Workflows feature lets you link models, media and tasks together, into one smooth pipeline that runs with a single click.
Genspark AI Developer 2.0 lets you build native iOS and Android apps with a single prompt.
Genspark Hub centralizes your files and context, making projects smarter and more efficient.
Genspark lands in Slack, offering web search and AI-powered content creation in DMs and channels.
Decart's Lip Sync API ensures avatars move in perfect sync with your voice in real time.
Microsoft Copilot unveils humanist AI features, including Groups and the new character Mico, all to prioritize people.
Microsoft Copilot's long-term memory tracks your thoughts and tasks for future recall.
Microsoft 365 Copilot integrates Teams Mode to supercharge your team collaborations in Microsoft Teams.
Microsoft 365 Copilot introduces App Builder to turn your app ideas into reality.
LTX Studio’s LTX-2 debuts as a cutting-edge open-source AI engine, offering synchronized 4K video and audio creation.
LTX Studio’s LTX-2 crafts 20-second cinematic scenes from a single prompt, live on the LTX API Playground.
LTX Studio unveils Elements for precise tagging and editing of scene details.
PayPal integrates payments into ChatGPT, streamlining checkout for millions.
OpenAI debuts new GPT-5.1 model with customizable preset personalities for ChatGPT's 800M+ users.
OpenAI’s ChatGPT pilots group chats for seamless collaboration in Japan, NZ, South Korea, and Taiwan.
OpenAI’s ChatGPT’s Company Knowledge transforms tool chaos into streamlined action with GPT-5's help.
OpenAI’s ChatGPT expands Shared Projects to all users, enabling collaboration with shared chats, files, and instructions.
OpenAI’s ChatGPT Business integrates app data from Slack to GitHub for tailored insights.
OpenAI gears up for its Facebook era, evolving into Meta-style innovation.
OpenAI tunes up for a musical leap, working with Juilliard students to craft AI-driven music tools.
OpenAI aims for an automated AI researcher by 2028, backed by massive GPU ambitions.
OpenAI’s Aardvark enters private beta, deploying GPT-5 to hunt and patch security bugs.
OpenAI now lets you refine and update long-running queries without losing progress.
Stability AI collaborates with EA to revolutionize game creation using generative AI models and tools.
CapCut AI Design transforms your product visuals with ease, now available on Desktop and Web.
CapCut's new AI Effect Presets bring your videos to life with audio-visual sync and realistic physics.
HeyGen unveils Motion Designer, turning ideas into custom animations without templates or tutorials.
HeyGen introduces LiveAvatar, it redefines AI interaction with hyper-realistic avatars for on-demand conversations.
HeyGen’s Global Video Translation takes videos global in 170+ languages with human-level lip sync and smarter translation, now on iOS.
Mistral’s AI Studio transitions AI projects from lab to launch with agent runtimes and full lifecycle oversight.
Tableau introduces real-time collaboration with Slack, collaborate on key metrics in real-time with Tableau Next.
MiniMax opens up M2, a budget-friendly, high-speed code wizard, free globally for a limited time.
MiniMax Speech 2.6 debuts with ultra-fast voice cloning and smart text normalization in 40+ languages.
Odyssey-2 launches: type, stream, and interact with instant AI-generated video.
Gumloop Agents natively execute tasks in Slack with MCP and workflows.
1x Tech launches their new robot NEO Gamma. It debuts as the first consumer-ready humanoid robot, marking a new era in robotics.
xAI launches Grokpedia to rival Wikipedia with a promising debut and sets the stage for a 10X future upgrade.
xAI’s Grok 4.1 redefines conversational AI with emotional IQ and real-world smarts, now free on desktop and mobile.
xAI’s Grok 4 Fast expands your horizons with a staggering 2M token context window for seamless document analysis.
Slack unveils work objects, transforming app static previews into interactive action hubs.
Hailuo 2.3 sets a new standard in cinematic realism and expression with two dynamic modes.
Producer lets you craft personalized music videos, transforming your tracks into vibrant visuals.
Flowith OS unveils the Atlas killer browser, competing with top AI agents across all platforms.
Github’s Agent HQ unleashes coding agents from Claude, OpenAI, xAI and others on GitHub, exclusive to Copilot paid users.
Morphic unveils Frames to Video, creating seamless transitions from selected keyframes.
LangSmith unveils a no-code agent builder with built-in memory for adaptive learning.
Amplitude launches AI Visibility to track how AI tools like ChatGPT mention your brand online.
Figma acquires Weavy AI, rebranding it as Figma Weave, to boost canvas creativity tools.
Firecrawl v2.5 debuts as the world's most comprehensive Web Data API, now turbocharged with a Semantic Index.
Firecrawl introduces Branding format: extract complete brand DNA from any site in one API call.
Samsung’s GalaxyXR lets you customize your viewing experience with front-row seats or sideline chats.
Supermemory introduces User Profiles. The first API that personalizes AI instantly, teaching it to learn and grow with each interaction.
MagicPath lets you import and edit web elements directly in your browser for seamless building.
Comfy Cloud enters public beta with zero setup, fast GPUs, and ready-to-go workflows.
Manus now lets you upload slide templates for instant brand-consistent presentations.
Manus integrates Stripe to transform your ideas into income with seamless payments.
Manus Browser Operator lets any browser harness AI with a single extension, no downloads needed.
Kosmos debuts as the ultimate AI Scientist, reading 1,500 papers and writing 42,000 lines of code daily.
MayaResearch unveils Maya1, an open-source model leading the future of voice intelligence.
Sierra unveils Agent Data Platform, granting AI agents memory and intelligence for personalized customer service.
Memories AI LVMM 2.0 brings Qualcomm-powered AI to your phone and home, analyzing visuals privately on device.
Hume's Voice Conversion lets you mimic any voice with perfect pacing and intonation in their creator studio and API.
Krea Nodes unifies all your Krea tools in a single, powerful interface.
Krea Realtime open-sourced: a 14B model generating long-form videos at 11 fps on a single B200.
Leonardo AI’s Blueprints streamline your creative workflow into repeatable systems.
Snowflake on v0 lets you query data and build secure, data-driven Next.js apps.
Cursor introduces a fast coding model, Composer, and a new multi-agent interface for efficient parallel work.
Adobe’s Premiere's beta unleashes Object Mask Tool, making subject-tracking a one-click wonder.
Perplexity Patents debuts as your AI partner for seamless IP research and intelligence.
Replit introduces AI Integrations, offering 300+ models for effortless AI app building.
Replit unveils Design, offering sleek UIs with the power of Gemini 3.0 AI.
Lovart debuts Edit Elements for AI design with dynamic text editing and layer control.
Meta launches GEM, boosting Instagram ad conversions by 5% with LLM-scale tech.
Hedra Batches speeds creativity with one-click 8-image and video generations.
Hedra unveils the Prompt Enhancer, optimizing prompts instantly with a single click.
Hedra Labs Character-3 HD debuts with cinematic realism and 2,500 free credits for early users.
HyperVision reveals how AI perceives your website, enhancing LLM-friendly optimizations.
Kling 2.5 Turbo starts and ends your frames with seamless imagination flow.
Hightouch launches Agents. An AI platform purpose-built for marketing teams.
FLORA v2 debuts with a complete rebuild for a lightning-fast experience.
Disney+ to introduce user-generated AI short form content, says Bob Iger.

How was your digital dip in this edition?You're still here? Let me know your opinion about this dip! |

This was it. Our forty-fourth digital dip together. It might seem like a lot, but remember; this wasn't even everything that happened in the past few weeks. This was just a fraction.
Let’s turn this into business relevance. I work with executive teams to understand what these shifts really mean, identifying where to act, where to wait, and where to bet early. I help you pressure-test your roadmap, reframe outdated assumptions and build an adaptive strategy that keeps pace with exponential change. You don’t need to predict the future. You just need to prepare for it. Drop me a message.
Looking forward to what tomorrow brings! ▽
-Wesley