📝 19/05/25: Google I/O 2025 Updates and How Not to Do AI Adoption

📌 This Week’s News Highlight: Google I/O Updates
This week, unless you've been living under a rock, you'll have seen updates from Google I/O 2025 (its annual developer conference) everywhere. This year, the company doubled down on its vision of AI-first everything.
Key highlights:
- Project Mariner: an AI agent that can interact with the web and get stuff done on users' behalf (through Chrome, Search and the Gemini app). Google calls this new suite of capabilities "Agent Mode". Coming soon for Gemini subscribers.

- Personalised Email Replies: as part of its new "Agent Mode", users will soon be able to hand over email responses entirely. New capabilities include sending emails on your behalf using context from across your Google apps (drawing on information you've ever shared across Google Docs, Sheets etc.), even capturing your tone, style and preferred word choices.

- AI Mode in Search: Google is doubling down on AI across Search, giving users more relevant, specific, AI-generated responses to search queries (see that AI summary you often get when doing a Google search? It's about to get way better). For example, AI Mode will answer queries more specifically and creatively by reasoning over them and delivering the response in a more digestible format, such as a graph or pie chart. AI Mode in Search is rolling out across the US soon (initially for certain verticals like sports).

- Google Shop: AI’s integration in search is also powering more seamless shopping experiences, putting Google firmly in the e-commerce race alongside OpenAI’s Shopify partnership and Perplexity’s in-app checkout features. This will involve faster ways for users to convert directly (e.g. buying a cinema ticket surfaced in AI search with one click), and even supports AI-driven virtual try-ons designed specifically for the fashion industry.

- Google Beam: New AI-first video communications platform (incl. real-time speech translation directly in Google Meet, which even matches your tone of voice!). So you can sound like you, but in Spanish...
- Latest Image/Video Generation Models + New Filmmaking Tool (Imagen 4, Veo 3 and Flow respectively): These updated models are a step change for video, audio generation and filmmaking in general. The quality and realism improve dramatically with each new release. Potentially game-changing, or an existential threat for content creators and creatives?
- Android XR: Personally, I'm still not sold on the prospect of wearing "intelligent" glasses... but these new Android XR-powered Google glasses are nevertheless impressive (if slightly dizzying), with their ability to show routes as a real-time 3D map in front of your eyes, let you see and respond to messages without lifting a finger, and display search results within the glasses themselves. Oh, and for the fashionistas out there, Gentle Monster and Warby Parker are the official partners building new glasses with Android XR, so you might be able to get away with being fashionably high-tech...

Whilst impressive and exciting from a user perspective, many of these updates risk undercutting major product categories and AI-first businesses. Google not only owns the infrastructure and data, it often leads in technical innovation too. The new video models, for example, compete directly with companies like Runway. Real-time translation features in Google Meet make entire businesses built around that use case redundant (not to mention ending the careers of professional translators).
Could we see more VC-backed AI startups quietly disappear? Probably, and not because their ideas weren’t good, but because their core product becomes just another feature inside Google’s ecosystem.
For founders, the way forward may be going niche, going deeper, or building where big platforms can’t or won’t (e.g. in highly regulated niches).
🔗 Watch Google I/O 2025 recap in under 10min
💡 Thought of the Week
A founder in a WhatsApp group I’m in said they want their team to use AI more, and that the best way to do that is by encouraging personal use outside of work.
Not only does this feel weirdly far-fetched and borderline invasive, it puts the burden on individuals rather than creating systems to facilitate meaningful adoption.
Some suggestions instead:
• Foster psychological safety so people don’t feel stupid for asking how to use something
• Protect time for experimentation and learning
• Build infrastructure for sharing wins, ideas and use cases
• Provide clear AI policies, use case suggestions, and strategic direction
Given the countless company memos coming out these days, I don't think adoption comes from pressure, but rather from clarity, context and culture/mindset shifts.
🔗 Sneaky Links
• Claude 4 arrives: Anthropic’s new Opus 4 and Sonnet 4 models bring stronger reasoning and code generation
• Anthropic's testing of Claude suggests it could resort to "extremely harmful actions" if threatened with removal
• Why billion-dollar AI startups are failing: A warning on premature scaling, shaky security guardrails and Big Tech encroachment
• AI is Disrupting SaaS: From fixed workflows to self-adapting systems, SaaS is increasingly shaped by 'jobs to be done' logic, not just feature sets
• Sam Altman's insights on how different generations are using ChatGPT: From a replacement for Google Search for Millennials, to an operating system for Gen Z
• Mayo Clinic trials personalised AI care: Foundation models now helping shape patient care plans even in regulated industries like healthcare