Google I/O 2025: Intelligent Agents & Personalized AI


On May 20th, Google CEO Sundar Pichai took the stage at the iconic Shoreline Amphitheatre in Mountain View, California, to deliver the keynote for Google I/O 2025. This year’s event was all about “Intelligent Agents and Personalized AI.” In simple terms, Google is doubling down on AI’s ability to act on your behalf and adapt to your personal needs.

As Pichai summarized in his opening remarks, we’re entering a new era of AI platform transformation. Decades of AI theory and foundational research are accelerating into real-world applications, embedding themselves into our daily lives and work — from search and assistants to video calling, dev tools, and app ecosystems. This is the essence of the Gemini era.

A Flood of AI Announcements

After covering NVIDIA and Microsoft’s recent showcases, it was clear this Google I/O would be all about AI. Google rolled out 15 major updates and product launches (some teased even before the event), but plenty of surprises remained.

Highlights included:

  • Gemini 2.5 Pro with a new “Deep Think Mode”
  • The next-gen multimodal AI assistant, Project Astra
  • Project Mariner, a web agent that can manage up to 10 tasks at once
  • AI-powered upgrades for Google Search, Chrome, and an AI browser that summarizes webpage content
  • Big upgrades to multimodal tools like Imagen and Veo, plus a new AI-powered video creation app called Flow
  • And, in partnership with Xreal, the unveiling of Project Aura — smart glasses built on Android XR

Gemini: Google’s AI Powerhouse

Over the past six months, Google has mounted a strong counterattack in the AI race against OpenAI. Pichai proudly shared Gemini’s progress at the very start: Google has announced over a dozen AI breakthroughs and released 20+ major AI products and features in the past year. Gemini’s performance has leapt ahead, with its Elo score (a head-to-head rating of model performance) up more than 300 points since the first Gemini Pro. The new Gemini 2.5 Pro topped leaderboards, especially in web development tasks.
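To put that 300-point jump in perspective: under the standard Elo model (the same formula used in chess ratings, not anything Google-specific), a 300-point rating gap implies roughly an 85% expected head-to-head win rate for the higher-rated model. A minimal sketch:

```python
def elo_win_probability(rating_diff: float) -> float:
    """Expected win probability for the higher-rated side under the
    standard Elo model: P(win) = 1 / (1 + 10^(-diff/400))."""
    return 1.0 / (1.0 + 10 ** (-rating_diff / 400.0))

# A 300-point gap -- the improvement reported since the first
# Gemini Pro -- corresponds to winning about 85% of pairwise matchups.
print(round(elo_win_probability(300), 3))  # ~0.849
```

This is why a few hundred Elo points is a dramatic move on a leaderboard: each 400-point gap multiplies the expected odds of winning by 10.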

Gemini is now the fastest-growing model on coding platforms like Cursor, and it’s even managed to beat Pokémon Blue, defeating all elite trainers in the game. Pichai joked that this marks another step toward API — Artificial Pokémon Intelligence!

User adoption is skyrocketing: Google now processes 480 trillion tokens a month (up 50x from last year), with over 7 million developers using Gemini (5x more than a year ago), and monthly active users topping 400 million. Gemini’s AI Overview feature in Search alone is used by over 1.5 billion people.

Deep Think Mode: Next-Level Reasoning

One headline feature is Gemini 2.5 Pro’s Deep Think Mode, designed for complex math and programming challenges. It considers multiple hypotheses before answering, achieving top scores on coding and multimodal reasoning benchmarks. For now, this mode is available by invite only via API, pending further safety reviews.

Meanwhile, the lighter-weight Gemini 2.5 Flash is out for all users, boasting better efficiency and improved reasoning, multimodality, and code abilities.

Towards a World Model

Google’s vision for Gemini is to become a “world model” — a system that can simulate aspects of reality, plan, and imagine new experiences, much like the human brain. DeepMind CEO Demis Hassabis highlighted progress in AI agents mastering complex games and generating interactive 3D environments from single images (as with the new Genie 2 model). Gemini is being fine-tuned to help robots with physical tasks too.

Ultimately, Google wants Gemini to act on your behalf in any environment, across any device — a crucial milestone on the road to AGI (Artificial General Intelligence).

Project Astra: Smarter Than Ever

Project Astra is Google’s vision for a truly helpful AI assistant. In this new version, Astra can proactively complete tasks or flag errors without being asked (like pointing out a math mistake in homework), and now sounds more natural thanks to upgraded voice output. It can even help you fix a bike, as shown in a charming demo.

Project Mariner brings AI agents to the browser, letting users automate up to 10 web tasks at once, learn from user demonstrations, and assign complex jobs to AI. It’s rolling out to subscribers soon and to the public this summer.

AI-First Search and Smarter Chrome

Google Search’s new AI Mode brings a major upgrade: AI-generated summaries, conversational search, and personalized shopping (including virtual try-ons from a photo and automatic purchases based on your preferences). AI Mode is already live in the US, with more features like deep search and chart generation coming soon.

Chrome is getting smarter too — AI Pro and Ultra subscribers can now use the Gemini button to summarize webpages, and by year’s end, Chrome will suggest and update strong passwords when a breach is detected.

Creativity Unleashed: Imagen, Veo, Flow, and Beam

The newly released Imagen 4 delivers sharper, higher-res images with accurate design elements, while Veo 3 takes video generation to the next level, producing synchronized audio and video that rivals movie quality. Users can create short films with just a text prompt.

Flow, a new AI video creation app, lets users generate 8-second clips from text or images and stitch them together using an in-app editor.

On the 3D front, Project Starline is now Google Beam — turning 2D video into immersive 3D conversations, soon to be built into HP devices and piloted by companies like Deloitte and Salesforce.

Smart Glasses and AI Subscriptions

The Project Aura smart glasses (built with Xreal) come loaded with Gemini AI, wide-angle cameras, mics, speakers, and seamless phone integration. They can see and hear what you do, proactively assist, and even translate conversations in real time. Google is also collaborating with Samsung and others on more Android XR-based glasses.

For power users, Google introduced the AI Ultra subscription ($249.99/month, with a 50% discount for the first three months), unlocking early access to the latest models and features (like Veo 3, Flow, Deep Think Mode), top usage limits, exclusive agent modes, YouTube Premium, and 30TB of cloud storage. There’s also a more affordable AI Pro plan at $19.99/month.

Conclusion

From deep reasoning and world modeling to AI agents, creative tools, and smart glasses, Google is flexing its muscles across every hot AI trend. Their strength remains in multimodal AI — both in models and in practical applications. But in some areas, like general-purpose browser features, they’re still playing catch-up with OpenAI.

Gemini 2.5 Pro is the clear star of the show, with other features feeling a bit less groundbreaking (perhaps because so many had been pre-announced or are still in early stages). Still, Google’s technical depth and determination to reclaim its AI crown are undeniable. The race is heating up, and I, for one, can’t wait to see what breakthroughs come next.