Everything We Learned at Google I/O. AI Mode in Chrome, Gemini Live, XR Glasses and Much More

Diverging Reports Breakdown

Everything We Learned at Google I/O. AI Mode in Chrome, Gemini Live, XR Glasses and Much More

Google’s I/O developer conference was packed with AI. The company showed off the latest updates to its Gemini AI platform and debuted its pricey new AI Ultra subscription plan. Google also introduced its new Flow app that expands its video-generation toolset, and capped off the presentation with a demo of its new Android XR glasses. For a play-by-play recap of the event, check out our live blog coverage of Google I/O. Google is also proud to announce that AI usage and performance numbers are up, with a new 42.5-exaflop Ironwood Tensor Processing Unit coming to Google Cloud later this year.

Read full article ▼
This year’s Google I/O developer conference was packed with AI. Google showed off the latest updates to its Gemini AI platform and debuted its pricey new AI Ultra subscription plan (spoiler: it’s $250 per month).

Google also introduced its new Flow app that expands its video-generation toolset, and capped off the presentation with a demo of its new Android XR glasses.

The company was also proud to announce that AI usage and performance numbers are up. (Given that a new 42.5-exaflop Ironwood Tensor processing unit is coming to Google Cloud later this year, they’ll continue to rise.)

For a play-by-play recap of the event, check out our live blog coverage of Google I/O.

Google’s new video generation tool Flow in action. Google/Screenshot by Joe Maldonado/CNET

On May 13, Google held a separate event for Android, where it launched Android 16, debuting its new Material 3 Expressive interface, security updates and news on Gemini integration and features.

A lot of the whizzy new AI features are only available via one of its subscription levels. AI Pro is just a rebranding of Google’s $20-per-month Gemini Advanced plan (adding some new features), but Google AI Ultra is a pricier new option — $250 per month, with half off the first three months for the moment — that provides access to the latest, spiffiest and least usage-limited of all its tools and models, as well as a prototype for managing AI agents and the 30 terabytes of storage you’re going to need to store it all. They’re both available today.

Google also wants to make your automation sound smarter with Personalized Smart Replies, which make your generated answers sound more like you by plowing through the information on your device to pull in relevant details. It’ll be in Gmail this summer for subscribers. Eventually, it’ll be everywhere.

One of Google’s boasts was Gemini’s victory in Pokemon Blue. Staff writer Zach McAuliffe has questions, because you do NOT mess with his childhood memories.

Project Aura’s XR glasses. James Martin/CNET

The keynote also included lots of better models, better coding tools and other details on developer-friendly things you expect from a developer conference. Among the announcements was conversational Gemini Live, formerly part of Project Astra, Google’s interactive, agentic, voice-driven, kitchen-sink AI app. (As Managing Editor Patrick Holland says, “Astra is a rehearsal of features that, when they’re ready for the spotlight, get added to Gemini Live.”) And for researchers, NotebookLM incorporates Gemini Live to improve its… everything.

It’s available now in the US.

Chrome AI Mode

People (that is, those over 18) who pony up for the subscriptions, plus people on the Chrome Beta, Dev and Canary tracks, will be able to try out the company’s expanded Gemini integration with Chrome — summary, research and agentic chat based on the contents of your screen, somewhat like Gemini Live does for phones (which, by the way, is available for free on Android and iOS as of today). But the Chrome version is more suited to the type of things you do at a computer rather than a phone. (Microsoft already does this with Copilot in its own Edge browser.)

Eventually, Google plans for Gemini in Chrome to be capable of synthesizing using multiple tabs and voice navigation.

The company is also expanding how you can interact with AI in Google Search through AI Mode, which builds on AI Overviews and adds more agentic shopping help. It appears as a new tab in Search, or in the search bar, and it’s available now. It includes deeper searches and Personal Context — which uses all the information Google knows about you, and that’s a lot — to make suggestions and customize replies.

The company detailed its new AI Mode for shopping, which has an improved conversational shopping experience, a checkout that monitors for the best pricing, and an updated “try on” interface that lets you upload a photo of yourself rather than modeling it on a generic body.

We have reservations about the feature — it sounds like a privacy nightmare, for one thing, and I don’t really want to see clothes on the “real” me for another.

Google plans to launch it soon, though the updated “try on” feature is now available in the US via Search Labs.

Google Beam

Formerly known as Project Starline, Google Beam is the updated version of the company’s 3D videoconferencing, now with AI. It uses a six-camera array to capture all angles of you, which the AI then stitches together, uses head tracking to follow your movements, and sends at up to 60 frames per second.

The platform uses a light field display that doesn’t require wearing any special equipment, but that technology also tends to be sensitive to off-angle viewing. HP is an old hand in the large-scale scanning biz, including 3D scanning, so the partnership with Google isn’t a big surprise.

Imagen 4 gets better with type. Google/Screenshot by Joe Maldonado/CNET

Flow and other generative creative tools

Google Flow is a new tool that builds on Imagen 4 and Veo 3 to perform tasks like creating AI video clips and stitching them into longer sequences, or extending them, with a single prompt while keeping them consistent from scene to scene. It also provides editing tools like camera controls. It’s available as part of Gemini AI Ultra.

Imagen 4 image generation is more detailed, with improved tonality and better text and typography. And it’s faster. Meanwhile, Veo 3, also available today, has a better understanding of physics and native audio generation — sound effects, background sounds and dialogue.

All this is available under the AI Pro plan. Google’s SynthID gen AI detection tool is also available today.

Source: Cnet.com | View original article

Google I/O 2025 recap: AI updates, Android XR, Google Beam and everything else announced at the annual keynote

Google kicked off its annual I/O developer event with its keynote on Tuesday. As ever, the company had many updates for a wide range of products to talk about. The bulk of the Android news was revealed last week, during a special edition of The Android Show, but Tuesday’s keynote still included a ton of stuff including, of course, a pile of AI-related news. We covered the event in real-time in our live blog, which includes expert commentary (and even some jokes!) from our team. Here are all the juicy details worth knowing about: AI Mode, a new chatbot, is coming to Search for all US users. Google has released iOS and Android versions of the NotebookLM mobile app. The company aims to transform Gemini into a universal AI assistant that can handle everyday tasks. Gemini 2.5 Pro can do things like turn a grid of photos into a 3D sphere of pictures, then add narration for each image, and its text-to-speech can switch languages on the fly.

Read full article ▼
Today is one of the most important days on the tech calendar as Google kicked off its I/O developer event with its annual keynote. As ever, the company had many updates for a wide range of products to talk about.

The bulk of the Android news was revealed last week, during a special edition of The Android Show. However, Tuesday’s keynote still included a ton of stuff including, of course, a pile of AI-related news. We covered the event in real-time in our live blog, which includes expert commentary (and even some jokes!) from our team.

If you’re on the hunt for a breakdown of everything Google announced at the I/O keynote, though, look no further. Here are all the juicy details worth knowing about:

AI Mode chatbot is coming to Search for all US users

Quelle surprise, Google is continuing to shove more generative AI features into its core products. AI Mode, which is what the company is calling a new chatbot, will soon be live in Search for all US users.

AI Mode is in a separate tab and it’s designed to handle more complex queries than people have historically used Search for. You might use it to compare different fitness trackers or find the most affordable tickets for an upcoming event. AI Mode will soon be able to whip up custom charts and graphics related to your specific queries too. It can also handle follow-up questions.

The chatbot now runs on Gemini 2.5. Google plans to bring some of its features into the core Search experience by injecting them into AI Overviews. Labs users will be the first to get access to the new features before Google rolls them out more broadly.

Meanwhile, AI Mode is powering some new shopping features. You’ll soon be able to upload a single picture of yourself to see what a piece of clothing might look like on a virtual version of you.

Also, similar to the way in which Google Flights keeps an eye out for price drops, Google will be able to let you know when an item you want (in its specific size and color) is on sale for a price you’re willing to pay. It can even complete the purchase on your behalf if you want.

1.5 billion people see AI Overviews each month

AI Overviews, the Gemini-powered summaries that appear at the top of search results and have been buggy to say the least, are seen by more than 1.5 billion folks every month, according to Google. The “overwhelming majority” of people interact with these in a meaningful way, the company said — this could mean clicking on something in an overview or keeping it on their screen for a while (presumably to read through it).

Still, not everyone likes the AI Overviews and would rather just have a list of links to the information they’re looking for. You know, like Search used to be. As it happens, there are some easy ways to declutter the results.

Another look at Google’s universal AI assistant

We got our first peek at Project Astra, Google’s vision for a universal AI assistant, at I/O last year and the company provided more details this time around. A demo showed Astra carrying out a number of actions to help fix a mountain bike, including diving into your emails to find out the bike’s specs, researching information on the web and calling a local shop to ask about a replacement part.

It already feels like a culmination of Google’s work in the AI assistant and agent space, though elements of Astra (such as granting it access to Gmail) might feel too intrusive for some. In any case, Google aims to transform Gemini into a universal AI assistant that can handle everyday tasks. The Astra demo is our clearest look yet at what that might look like in action.

NotebookLM mobile app

On the NotebookLM front, Google has released an iOS and Android app for the tool. The company also took the opportunity at I/O to show off what NotebookLM can do.

Google put together a notebook featuring the I/O keynote video from YouTube as well as associated blog posts, press releases and product demos. You can drill down into all of this information or just ask the AI questions about I/O. Of course, you’ll be able to generate audio summaries as well as a mind map to structure all the info that’s in the notebook.

Gemini 2.5 is here with (according to Google) improved functionality, upgraded security and transparency, extra control and better cost efficiency. Gemini 2.5 Pro is bolstered by a new enhanced reasoning mode called Deep Think. The model can do things like turn a grid of photos into a 3D sphere of pictures, then add narration for each image. Gemini 2.5’s text-to-speech feature can also change up languages on the fly. There’s much more to it than that, of course, and we’ve got more details in our Gemini 2.5 story.

You know those smart replies in Gmail that let you quickly respond to an email with an acknowledgement? Google is now going to offer personalized versions of those so that they better match your writing style. For this to work, Gemini looks at your emails and Drive documents. Gemini will need your permission before it plunders your personal information. Subscribers will be able to use this feature in Gmail starting this summer.

Google Meet is getting a real-time translation option, which should come in very useful for some folks. A demo showed Meet being able to match the speaker’s tone and cadence while translating from Spanish to English.

Subscribers on the Google AI Pro and Ultra (more on that momentarily) plans will be able to try out real-time translations between Spanish and English in beta starting this week. This feature will soon be available for other languages.

An example of camera sharing using Google’s Gemini Live AI.

Gemini Live, a tool Google brought to Pixel phones last month, is coming to all compatible Android and iOS devices in the Gemini app (which already has more than 400 million monthly active users). This allows you to ask Gemini questions about screenshots, as well as live video that your phone’s camera is capturing. Google is rolling out Gemini Live to the Gemini iOS and Android app starting today.

Google Search Live is a similar-sounding feature. You’ll be able to have a “conversation” with Search about what your phone’s camera can see. This will be accessible through Google Lens and AI Mode.

A new filmmaking app called Flow, which builds on VideoFX, includes features such as camera movement and perspective controls; options to edit and extend existing shots; and a way to fold AI video content generated with Google’s Veo model into projects. Flow is available to Google AI Pro and Ultra subscribers in the US starting today. Google will expand availability to other markets soon.

Speaking of Veo, that’s getting an update. The latest version, Veo 3, is the first iteration that can generate videos with sound (it probably can’t add any soul or actual meaning to the footage, though). The company also suggests that its Imagen 4 model is better at generating photorealistic images and handling fine details like fabrics and fur than earlier versions.

Handily, Google has a tool it designed to help you determine if a piece of content was generated using its AI tools. It’s called SynthID Detector — naturally, it’s named after the tool that applies digital watermarks to AI-generated material.

According to Google, SynthID Detector can scan an image, piece of audio, video or text for the SynthID watermark and let you know which parts are likely to have a watermark. Early testers will be able to try this out starting today. Google has opened up a waitlist for researchers and media professionals. (Gen AI companies should offer educators a version of this tech ASAP.)

The new AI Ultra plan costs $250 per month

Google AI Ultra pricing chart

To get access to all of its AI features, Google wants you to pay 250 American dollars every month for its new AI Ultra plan. There’s really no other way to react to this other than “LOL. LMAO.” I rarely use either of those acronyms, which highlights just how absurd this is. What are we even doing here? That’s obscenely expensive.

Anyway, this plan includes early access to the company’s latest tools and unlimited use of features that are costly for Google to run, such as Deep Research. It comes with 30TB of storage across Google Photos, Drive and Gmail. You’ll get YouTube Premium as well — arguably the Google product that’s most worth paying for.

Google is offering new subscribers 50 percent off an AI Ultra subscription for the first three months. Woohoo. In addition, the AI Premium plan is now known as Google AI Pro.

A second Android XR device has been announced

As promised during last week’s edition of The Android Show, Google offered another look at Android XR. This is the platform that the company is working on in the hope of doing for augmented reality, mixed reality and virtual reality what Android did for smartphones. After the company’s previous efforts in those spaces, it’s now playing catchup to the likes of Meta and Apple.

The initial Android XR demo at I/O didn’t offer much to get too excited about for now. It showed off features like a mini Google Map that you can access on a built-in display and a way to view 360-degree immersive videos. We’re still waiting for actual hardware that can run this stuff.

Xreal’s Project Aura is the second official Android XR headset

As it happens, Google revealed the second Android XR device. Xreal is working on Project Aura, a pair of tethered smart glasses. We’ll have to wait a bit longer for more details on Google’s own Android XR headset, which it’s collaborating with Samsung on. That’s slated to arrive later this year.

A second demo of Android XR was much more interesting. Google showed off a live translation feature for Android XR with a smart glasses prototype that the company built with Samsung. That seems genuinely useful, as do many of the accessibility-minded applications of AI. Gentle Monster and Warby Parker are making smart glasses with Android XR too. Just don’t call it Google Glass (or do, I’m not your dad).

Chrome’s password manager is getting an upgrade

Google is giving the Chrome password manager a very useful weapon against hackers. It will be able to automatically change passwords on accounts that have been compromised in data breaches. So if a website, app or company is infiltrated, user data is leaked and Google detects the breach, the password manager will let you generate a new password and update a compatible account with a single click.

The main sticking point here is that it only works with websites that are participating in the program. Google’s working with developers to add support for this feature. Still, making it easier for people to lock down their accounts is a definite plus. (And you should absolutely be using a password manager if you aren’t already.)

On the subject of Chrome, Google is stuffing Gemini into the browser as well. The AI assistant will be able to answer questions about the tabs you have open. You’ll be able to access it from the taskbar and a new menu at the top of the browser window.

Beam is the new name of Google’s 3D video conferencing booths

It’s been a few years since we first heard about Project Starline, a 3D video conferencing project. We tried this tech out at I/O 2023 and found it to be an enjoyable experience.

Now, Google is starting to sell this tech, but only to enterprise customers (i.e. big companies) for now. It’s got a new name for all of this too: Google Beam. And it’s probably not going to be cheap. HP will reveal more details in a few weeks.

Source: Tech.yahoo.com | View original article

Google I/O 2025 Key Highlights: Android XR glasses, AI Mode in Search, Gemini 2.5 updates, and other big announcements

Google kicked off its annual I/O developer conference today (May 20) at Mountain View, California, US, with an opening keynote that was about AI. In a long stream of announcements, the search giant drew the curtains back on the new AI products it has been working on over the past few months. It introduced upgraded versions of existing AI models and tools such as Gemini 2.5 Flash and Pro, Imagen 4, Veo 3, and Lyria 2, as well as new updates to AI Mode in Search, Deep Research, Canvas, Gmail, Google Meet, etc. Google also announced two new AI subscription plans, the AI Pro plan that will be available globally at $19.99 per month and the pricier $249 per month AI Ultra plan that is being rolled out in the US. We got a glimpse of how Android XR-powered smart glasses will work in real-world scenarios such as messaging friends, taking photos, or asking Gemini for turn-by-turn directions. The Indian Express’ Nandagopal Rajan is on-site at the event, bringing you the news as it breaks.

Read full article ▼
Google I/O 2025 Event Live Stream Updates: Sundar Pichai speaking about the new AI update in his keynote. (Image: Nandagopal Rajan/The Indian Express)

Google I/O 2025 Live Updates: Google kicked off its annual I/O developer conference today (May 20) at Mountain View, California, US, with an opening keynote that was about AI (at least for the most part).

In a long stream of announcements, the search giant drew the curtains back on the new AI products it has been working on over the past few months, including an AI tool for filmmaking, an asynchronous AI coding agent, an AI-first 3D video communication platform, and more.

It introduced upgraded versions of existing AI models and tools such as Gemini 2.5 Flash and Pro, Imagen 4, Veo 3, and Lyria 2, as well as new updates to AI Mode in Search, Deep Research, Canvas, Gmail, Google Meet, etc. Google also announced two new AI subscription plans, the AI Pro plan that will be available globally at $19.99 per month and the pricier $249 per month AI Ultra plan that is being rolled out in the US, with more countries coming soon. But Google I/O 2025 was not just about AI either. We got a glimpse of how Android XR-powered smart glasses will work in real-world scenarios such as messaging friends, taking photos, or asking Gemini for turn-by-turn directions. The Indian Express’ Nandagopal Rajan is on-site at the event, bringing you the news as it breaks. Stay tuned for more coverage of Google I/O 2025.

Live Updates

00:54 (IST) 21 May 2025: That’s a wrap
Keep a tab on the tech section of The Indian Express for more AI news coming out of Google I/O 2025.

00:53 (IST) 21 May 2025: Google now brings GenAI to more people than any other product
“More intelligence is available for everyone, everywhere. And the world is responding, adopting AI faster than ever before…,” Google CEO Sundar Pichai said, adding that the company has released over a dozen foundation models since the last edition of the conference. You can read the full on-ground news report of The Indian Express here.

00:43 (IST) 21 May 2025: The biggest announcements at Google I/O 2025
Here’s an easy-to-digest list of everything that was announced during the opening keynote of Google I/O 2025:
– Updates to Gemini 2.5 Flash and Pro models, including native audio outputs
– Deep Think reasoning mode in Gemini 2.5 Pro
– Shopping in AI Mode, agentic checkout, and a virtual try-on tool
– New updates to Deep Research and Canvas
– Imagen 4
– Veo 3
– SynthID Detector
– Personalised smart replies in Gmail
– Speech translation in Meet
– Support for Anthropic’s Model Context Protocol (MCP)
– Search Live in AI Mode and Lens
– Jules, an asynchronous AI coding agent
– Flow, a new AI filmmaking tool
– Beam, a 3D AI-first video communication platform
– Google AI Pro and AI Ultra subscription plans
– A look at Android XR-powered smart glasses
– Expanded access to AI Overviews
– Gemini in Google Chrome
– Expanded access to Gemini Live’s camera and screen sharing capabilities

00:26 (IST) 21 May 2025: ‘Inspiring to think of research of today becoming reality in the future’: Pichai
Sundar Pichai is back onstage talking about examples of how AI is helping people, such as detecting wildfires and drone deliveries of critical medicine.

00:24 (IST) 21 May 2025: We’re almost near the end of the keynote

00:21 (IST) 21 May 2025: How 3D maps look on Android XR-powered smart glasses

00:17 (IST) 21 May 2025: An early demo of Android XR on glasses

00:15 (IST) 21 May 2025: Google teases Android XR
Android XR supports a broad spectrum of devices including headsets, smart glasses, and more. It is built in partnership with Samsung and optimised for Qualcomm Snapdragon. Samsung’s Project Moohan will be available later this year. Meanwhile, glasses with Android XR will let Gemini see and hear the world. They work with your phone for hands-free access to apps.

00:10 (IST) 21 May 2025: Google announces two new AI subscription plans
Google AI Pro will be available globally with a full suite of AI products, higher limits, and Gemini Advanced. It is priced at $19.99/month. Meanwhile, Google AI Ultra is a $249.99/month AI subscription plan, offering higher usage limits and access to Google’s top-tier AI models and features. AI Ultra is only available in the US for now.

00:05 (IST) 21 May 2025: Google launches new AI filmmaking tool ‘Flow’
Flow combines the best of Imagen, Veo, and Gemini. While the AI tool relies on the Veo and Imagen models for generating cinematic visuals, the Gemini integration is meant to make prompting more intuitive so that filmmakers can describe their vision in everyday language.

00:00 (IST) 21 May 2025: Over 10 billion pieces of content marked with SynthID so far
Google also launched a new SynthID Detector, a verification platform to help users identify whether a piece of content is AI-generated or not. Users just have to upload a piece of content and the SynthID Detector will identify if either the entire file or just a part of it has SynthID in it.

23:56 (IST) 20 May 2025: Google’s Lyria AI music generator demo features Shankar Mahadevan

23:52 (IST) 20 May 2025: Veo 3 comes with native audio generation
That means Veo 3, Google’s AI video generator, can generate sound effects, background sounds, and dialogue. The AI tool is available for Google Ultra subscribers in the US via the Gemini app and in Flow. It’s also available for enterprise users on Vertex AI.

23:51 (IST) 20 May 2025: Imagen 4 is here
Imagen 4 has been upgraded to generate 2K-resolution images with clarity in fine details like intricate fabrics, water droplets, and animal fur. The AI-generated images are now available to download in a range of aspect ratios. Imagen 4 is also significantly better at spelling and typography, Google claimed. It is available today in the Gemini app, Whisk, Vertex AI and across Gemini Workspace.

23:50 (IST) 20 May 2025: Gemini is coming to Chrome
Gemini is coming to the desktop version of Google Chrome but only for Google AI Pro and Google AI Ultra subscribers in the US. This enables users to ask Gemini to clarify or summarise any information on any webpage that users are reading. “In the future, Gemini will be able to work across multiple tabs and navigate websites on your behalf,” Google said.

23:49 (IST) 20 May 2025: New updates to Deep Research and Canvas in the Gemini app
Starting today, users can get a more customised Deep Research report by uploading their own private PDFs and images. “For instance, a market researcher can now upload internal sales figures (as PDFs) to cross-reference with public market trends, all within Deep Research,” Google said. On Canvas, users can use AI to generate interactive infographics, quizzes and podcast-style Audio Overviews in 45 languages.

23:45 (IST) 20 May 2025: All the new features announced within AI Mode

23:43 (IST) 20 May 2025: Google demoes a new virtual try-on shopping feature within AI Mode
The “try on” experimental feature is rolling out in Search Labs in the US today.

23:42 (IST) 20 May 2025: Shopping in AI Mode
With AI Mode, users who search for products will be shown a browsable panel of images and product listings personalised to their taste. The right-hand panel dynamically updates with relevant products and images, helping users pinpoint exactly what they’re looking for and discover new brands.

23:36 (IST) 20 May 2025: Google unveils Search Live in AI Mode and Lens
Google is bringing Project Astra’s live capabilities into Search. With Search Live, users can talk back-and-forth with Search about what they see in real time, using their phone camera. For instance, if a student wants help with their project, they can tap the Live icon in AI Mode or Lens, point their camera, and ask a question.

23:21 (IST) 20 May 2025: AI Mode in Google Search rolling out to everyone in the US
AI Mode offers a more dynamic UI with a combination of text, images, and links. It uses a query fan-out technique to break down a user’s question into subtopics and issue a multitude of queries simultaneously. This enables Search to dive deeper into the web than a traditional search on Google, helping users discover more of what the web has to offer and find hyper-relevant content that matches a user’s question. AI Mode will soon be able to make its responses even more helpful with personalised suggestions based on past searches. Users can also opt in to connect other apps such as Gmail with AI Mode. “Because your flight and hotel confirmations are in your inbox, it will sync when you’re actually in Nashville (for instance).” Personal Context in AI Mode is coming this summer. AI Mode will also get Deep Search to help users unpack a topic. It can issue hundreds of searches on a user’s behalf to create a research report. Complex data analysis and visualisations will also be possible via AI Mode. Project Mariner’s agentic capabilities are also coming to AI Mode to help users, for instance, find ticket options for sporting events.

23:19 (IST) 20 May 2025: The US and India are among Google AI Overviews’ biggest markets: Pichai
AI Overviews is driving an over 10% increase in usage of Google for the types of queries that show AI Overviews. People are coming to Google to ask more of their questions, including more complex, longer and multimodal questions, says Pichai.

23:15 (IST) 20 May 2025: ‘We’re ushering in a new golden age of discovery’: Demis Hassabis

23:08 (IST) 20 May 2025: Gemini 2.5 Pro gets new enhanced reasoning mode ‘Deep Think’
“Deep Think uses our cutting edge research in thinking and reasoning, including parallel techniques,” says DeepMind’s Demis Hassabis. Deep Think is currently being rolled out to Google’s trusted testers.

23:05 (IST) 20 May 2025: Google debuts new AI coding agent
Jules, an AI agent powered by Gemini 2.5 Pro and capable of autonomously reading and generating code, makes its debut at Google I/O 2025. Users can integrate Jules directly into their existing code repositories. The AI coding agent then makes a clone of the user’s codebase within a Google Cloud virtual machine (VM) to ‘understand’ the context of the project and perform tasks such as writing tests, building new features, fixing bugs, etc.

23:00 (IST) 20 May 2025: Native audio outputs coming to Gemini 2.5 Flash, Pro
Google is introducing a preview version of audio-visual input and native audio output dialogue for both Gemini 2.5 Flash and Pro. It allows the user to steer the AI-generated tone, accent and style of speaking. For example, you can tell the model to use a stronger voice or whisper when telling a story. It can also switch between languages (say from English to Hindi) mid-speech. The text-to-speech capability is now available in the Gemini API.

22:58 (IST) 20 May 2025: Tulsee Doshi takes the stage
Tulsee Doshi, Senior Director, Product Management at Google DeepMind, is onstage talking about the updated versions of Gemini 2.5 Flash and Pro.

22:54 (IST) 20 May 2025: Updated version of Gemini 2.5 Flash coming in June
“AI is making an amazing new future possible. Gemini 2.5 Pro is our most intelligent model yet, and we’re truly impressed with what we’ve created. It can build entire apps from sketches. It also leads in learning and ranks number one across multiple benchmarks. I’m thrilled to announce Gemini 2.5 Flash, which comes with enhanced capabilities. Flash will be generally available in early June, and we’re making final adjustments based on your feedback,” said Google DeepMind chief Demis Hassabis.

22:52 (IST) 20 May 2025: Pichai introduces Demis Hassabis onstage as ‘Nobel Prize laureate’

22:51 (IST) 20 May 2025: Gemini in Gmail
Google launches personalized smart replies in Gmail. The feature will help users draft emails that match their specific context and tone, and it will be available in Gmail this summer for subscribers. It pulls from a user’s past emails and Google Drive to provide suggestions that are more relevant to the user.

22:47 (IST) 20 May 2025: Agent Mode in Gemini app
Using Agent Mode, the Gemini app can help users, for instance, find real-estate listings. Gemini uses MCP (Model Context Protocol) to access the listings and even schedule a tour on your behalf. Coming soon to subscribers.

22:43 (IST) 20 May 2025: Google Meet gets real-time speech translation
The new AI feature on Google Meet translates a user’s words into their listener’s preferred language in near real time, with low latency.

22:42 (IST) 20 May 2025: Google unveils Beam, a new video communication platform
Google Beam is an AI-first video communication platform that has evolved from Project Starline, which Pichai said brings the world closer to having a “natural freeflowing conversation across languages”. Beam instantly translates spoken languages in “real-time, preserving the quality, nuance, and personality in their voices”.

22:41 (IST) 20 May 2025: ‘Best models at most effective price point’: Pichai
As expected, Google’s annual conference this time is focused on AI, with the search and advertising giant showcasing a lot of new products and updates. “There’s a hard trade-off between price and performance. Yet, we’ll be able to deliver the best models at the most effective price point… we are in a new phase of AI product shift,” Pichai said in his opening keynote.

22:39 (IST) 20 May 2025: Google is processing 480 trillion monthly tokens, wow
“The world is adopting AI faster than ever before. There’s one marker of progress. This time last year, we were processing 9.7 trillion tokens a month across our products and APIs. Now we are processing 480 trillion monthly tokens. That’s about a 50x increase in just a year,” says Pichai.

22:35 (IST) 20 May 2025: Sundar Pichai takes the stage

22:32 (IST) 20 May 2025: The first video also says it was created by Veo 3. Adobe seems to have some serious competition coming its way.

22:32 (IST) 20 May 2025: All set for the show to start. The countdown seems powered by Google’s image creation tools.

22:29 (IST) 20 May 2025: Few minutes left for Google I/O 2025
There are a little over 28,000 folks tuned into the livestream right now. Google CEO Sundar Pichai should be taking the stage at any moment now, so get your last-minute predictions in. You can also watch the keynote interpreted in American Sign Language here.

22:21 (IST) 20 May 2025: How does Google plan to use AI for Search?
That is something worth keeping an eye out for at this year’s Google I/O event, which is being held at a time when the tech giant is staring down the possibility of its search business being broken up by a US district court on the proposal of the US government. On whether AI chatbots could replace search engines, Sundar Pichai had said in his testimony that while it was not a zero-sum battle between the two products, AI “is going to deeply transform Google Search.” Read the top five moments from Sundar Pichai’s testimony in the Google antitrust remedies trial here.

21:56 (IST) 20 May 2025: What time is it?
https://www.instagram.com/reel/DJ4XL9rppuf/?igsh=MTJrcnk0ejR6dDM4eA%3D%3D

21:53 (IST) 20 May 2025: NotebookLM goes mobile
A day ahead of Google I/O 2025, the tech giant announced it is launching its widely popular AI offering NotebookLM as an app for both Android and iOS devices. Not just that, Google also said that users will be able to download the acclaimed AI-generated podcasts created using NotebookLM for offline listening. Wondering how to use the app? Follow the steps mentioned here.

21:49 (IST) 20 May 2025: ‘Tis the season of tech conferences
On the morning of Day 1 of the Google I/O developer conference, Apple announced that its annual Worldwide Developers Conference (WWDC 2025) will be taking place from June 9-13 this year. Just yesterday, Microsoft held its own conference (Build 2025), where CEO Satya Nadella announced a series of sweeping updates that reflect the tech giant’s vision for an open agentic future.

21:37 (IST) 20 May 2025: One hour to go…
At 10:30pm IST, Google CEO Sundar Pichai is expected to take the stage to share the new wave of AI products being shipped by the company. Stay tuned.

20:53 (IST) 20 May 2025: Google I/O gates open
The gates are open at Shoreline Amphitheatre as attendees start arriving for Google I/O 2025. With an hour to go before Sundar Pichai takes the stage, excitement is already in the air. Expect big updates around AI, Gemini, Android, and more. Stay tuned for all the action, live from Mountain View. (Image: Nandagopal Rajan/The Indian Express)

19:27 (IST) 20 May 2025: Gemini Ultra plan in the offing
Only a few hours are left for Google to commence its much-awaited I/O 2025 event. The tech giant is expected to introduce a slew of AI offerings tonight, including the rumoured ‘Gemini Ultra’ plan for Google One. There are no details available yet about the new plan. Currently, the free plan includes Gemini Pro for basic tasks like writing, multimodal input, summarisation, etc. The premium plan at $19.99/month comes with Gemini Advanced featuring enhanced capabilities, 2TB storage, and integration with Google apps like Gmail and Docs.

18:09 (IST) 20 May 2025: What is Project Mariner?
The highly anticipated Google I/O event may also see Google detailing Project Mariner, an experimental AI agent showcased last year. This is an agent that has been designed to autonomously navigate and interact with web pages on behalf of users. Functioning as a Chrome browser extension, it can read, click, scroll, and even fill information into forms. The AI agent is reportedly built upon Gemini 2.0, giving it advanced multimodal understanding. Its comprehension abilities allow it to perform complex tasks such as adding items to an online shopping cart, planning travel itineraries, offering users real-time updates, and maintaining control over the entire process. As of now, Project Mariner is in a testing phase with a select group of users. Reportedly, the search giant is ensuring user safety before its rollout. With advanced AI capabilities coming to the browsing experience, Project Mariner seems like a significant step toward more intuitive and efficient human-machine interactions.

15:52 (IST) 20 May 2025: Android XR could be the biggest thing at Google I/O
With Google hyping up Project Astra and Android XR, it looks like we might finally see a demo of Project Moohan, Samsung’s first mixed reality headset. While the company’s extended reality OS did not get much screentime during last week’s The Android Show, Android head Sameer Samat hinted that we may see some sort of prototype smart glasses during this year’s Google I/O. With Samsung gearing up to launch its mixed reality headset later this year, this may be Google’s best chance to share what its new operating system is capable of.

15:22 (IST) 20 May 2025: WearOS 6 to get Gemini and Material 3 Expressive treatment
During The Android Show livestream, Google announced that it has big plans for WearOS. Like Android 16, the Material 3 Expressive visual upgrades will be making their way to WearOS 6, modernising the quite dated-looking user interface. This means we will see more rounded UI elements as well as new animations that make better use of the screen. The new user interface also brings glanceable information, making it easier for users to go through information like messages and calendar appointments. Google also said that it is “continuing to improve performance and optimize power”, which roughly translates to 10 per cent more battery life on Pixel watches. However, the biggest news is that the tech giant is finally bringing Gemini to WearOS-powered devices like the Samsung Galaxy Watch 7 and the Galaxy Watch Ultra.

14:45 (IST) 20 May 2025: Google may announce a new “Pinterest-like” feature
Google is reportedly working on a new Pinterest-like feature that will show “images designed to give people ideas for fashion or interior design”, according to The Information. However, it is still unclear if Google will allow users to upload their own images or simply view other people’s collections. Also, there is no news on whether the tech giant plans to integrate the Pinterest-like functionality into an existing Google service or whether it will emerge as a standalone website or app. The move may enable Google to offer a more interesting way to get search results and may help the company generate revenue from ads.

13:51 (IST) 20 May 2025: Google One rumoured to get Gemini Pro and Gemini Ultra plans
Rumour has it that Google One, the company’s subscription service, will be getting a bunch of new AI plans. Earlier this month, known tipster TestingCatalog shared a post on X claiming that Google was preparing to introduce two new plans – Gemini Pro and Gemini Ultra. These subscription plans may be available alongside the existing Google Advanced bundle, which offers access to the company’s most capable AI models and comes with 2TB of cloud storage. As for pricing, it looks like the upcoming plans will be more expensive than Google’s current AI Premium plan.

13:38 (IST) 20 May 2025: This year’s Google I/O will focus mostly on AI
Google already shared what Android 16 and WearOS 6 look like at The Android Show last week, meaning the tech giant might mostly talk about new features coming to Gemini alongside some other AI news. The tech giant is also steadily integrating AI with Search by introducing features like AI Overviews, so it will be interesting to see how much Google’s AI feature set has improved in the last year and where it’s going next.

12:53 (IST) 20 May 2025: Having a deep think…
A few hours before Google I/O starts, Google CEO Sundar Pichai shares a picture of him and DeepMind head Demis Hassabis standing in the Shoreline Amphitheater in Mountain View, California. (via @sundarpichai on X, May 20, 2025)

12:05 (IST) 20 May 2025: Google unveils Android 16 a week before Google I/O
Ahead of the annual developer conference, Google announced Android 16, the latest version of the world’s most popular mobile operating system. The upcoming version of Android will use “Material 3 Expressive”, a new design language that brings several changes to the user interface. According to Mindy Brooks, the Senior Director at Android Platform, Material 3 Expressive brings “fluid, natural, and springy animations” to the interface. For example, when users dismiss a notification, they will see a smooth detached transition followed by haptic feedback.

Source: Indianexpress.com | View original article

Gemini, AI-Powered Search, Android Updates and More ~ My Mobile India

Google’s I/O 2025 keynote was all about AI and how it’s going to power everything from Search and Gmail to Chrome, Android and even shopping. Google introduced new AI subscription plans, Gemini 2.5 Flash and Deep Think, AI-powered tools for Search, Workspace and Project Astra integration. Google also previewed the latest from Project Astra, showing Gemini controlling Android phones, making calls, and navigating apps. AI Mode in Google Search is now rolling out to all U.S. users. Google is rebranding AI Premium to Google AI Pro at the same price of $19.99/month. You’ll also be able to upload your own PDFs and images to fuel Deep Research, and what you create in Gemini Canvas, like notes or drafts, can now become a webpage, infographic, quiz, or audio summary.

Read full article ▼
Highlights

Google introduced new AI subscription plans, Gemini 2.5 Flash and Deep Think, AI-powered tools for Search, Workspace and Project Astra integration.

AI Mode now provides direct answers, taps into Gmail context, supports shopping and ticket purchases and expands to 200+ countries.

Wear OS 6 Developer Preview, Jules AI coding assistant, AI-powered filmmaking, Google Play enhancements and XR partnerships.

Caption – Google at I/O 2025. (Image credit – Google)

Google I/O 2025 was all about AI and how it’s going to power everything from Search and Gmail to Chrome, Android and even shopping. While Android didn’t take centre stage during the main keynote, Google still had plenty to share across its products and services.

Here’s a simple breakdown of all the big announcements.

Gemini

Caption – Google Gemini plans. (Image credit – Google)

Google is rebranding AI Premium to Google AI Pro at the same price of $19.99/month. It gives you more features and higher usage limits than the free tier, with more updates on the way.

Students in Japan, Brazil, Indonesia, and the UK can now get AI Pro free for one school year.

For power users, Google introduced a new AI Ultra plan at $249.99/month, giving you early access to cutting-edge tools.

Gemini 2.5 Flash is now out of preview, running faster and more efficiently.

Gemini 2.5 Deep Think, coming soon, will offer stronger reasoning in math, code, and multimodal tasks, but it’s still undergoing safety checks. It’ll launch with AI Ultra first.

Imagen 4, with improved image and text generation, is now rolling out in the Gemini app.

Veo 3 adds native audio generation to videos and is also exclusive to AI Ultra in the U.S.

Gemini Live is getting camera and screen sharing support in the iOS app.

Soon, Gemini Live will support Calendar, Keep, Maps, and Tasks, helping you with more day-to-day tasks.

Caption – New Gemini announcements made by Google during I/O 2025. (Image credit – Google)

You’ll also be able to upload your own PDFs and images to fuel Deep Research, and what you create in Gemini Canvas, like notes or drafts, can now become a webpage, infographic, quiz, or audio summary.

Gemini can now generate interactive quizzes too.

Google also previewed the latest from Project Astra, showing Gemini controlling Android phones, making calls, and navigating apps.

Gemini and Gemini Live are heading to desktop Chrome (Mac and Windows). At launch, they’ll answer questions about webpages, with full site navigation coming later.

Project Mariner, which turns Gemini into a smart agent that can complete tasks online, will be available to AI Ultra users. This powers the upcoming Agent Mode in the Gemini app.

Search Gets Smarter with AI

AI Mode in Google Search is now rolling out to all U.S. users. It will get new features first before they appear in AI Overviews, and both will run on a custom version of Gemini 2.5.

AI Mode can soon tap into your Gmail context to deliver personalised results.

It will also support custom charts and graphs for things like sports scores or finance trends.

A shopping panel will update live as you refine your search.

Deep Search, similar to Deep Research, is also coming.

Search Live will use Project Astra to offer real-time, interactive AI help.

You’ll be able to say things like “Find 2 affordable tickets for Saturday’s Reds game in the lower level,” and AI will handle the ticket purchase and even book restaurant reservations with your approval.

Caption – AI Mode explained at Google I/O 2025. (Image credit – Google)

The “buy for me” tool is also coming to AI Mode, allowing Google to add items to your cart and checkout via Google Pay.

And yes, you’ll be able to virtually try on clothes by uploading a photo. The AI understands how fabrics fold and stretch on different body types.

AI Overviews are expanding to 200+ countries in 40+ languages.

Google Workspace Updates

Caption – New Google Meet Speech Translation upgrades explained by Google during I/O 2025. (Image credit – Google)

Gmail is getting Personalised Smart Replies, Inbox Cleanup, and Calendar scheduling tools.

Google Meet Speech Translation will now feel like having a live interpreter on the call. It’s available now for AI Pro and Ultra users.

Docs will let you ground AI responses with info directly from your document.

Imagen 4 is coming to Docs, Slides, Vids, and more.

Google Vids can now turn a full slide deck into a video with AI avatars that only need a script. Other new tools include Transcript Trim and Balance Sound.

More AI Announcements

Caption – Google Beam at Google I/O 2025. (Image credit – Google)

Project Starline has been renamed to Google Beam, a new 3D video communication platform. HP is making the first device.

Google teased more XR glasses demos and announced a bigger partnership with Samsung.

Google Flow is a new AI filmmaking tool, evolving from VideoFX. It uses Gemini, Veo, and Imagen to let you create scenes with simple text prompts.

Google introduced the SynthID Detector, a portal where you can upload content to check if it includes Google’s AI watermark.

Tools for Developers

Jules is Google’s new AI coding agent. It “reads your code, understands your intent, and gets to work.” Jules can write tests, fix bugs, and even upgrade dependencies. It’s in public beta with no waitlist now.

Wear OS 6 Developer Preview is out with a new Material 3 Expressive design.

Android Studio now has built-in Gemini 2.5 Pro and a new Studio Labs to test AI tools in stable versions.

Google Play Updates

Devs can now pause buggy app updates before they reach more users.

The Play Store is getting curated spaces for local and seasonal interests like football in Brazil or cricket in India.

New topic browse pages will help users discover timely content, like live sports and movies.

Google Play Collections are expanding with a Travel category and are coming to Brazil, India, Indonesia, Japan, and Mexico.

From deeper AI integration across platforms to a personal AI agent that can shop and organise your day, Google I/O 2025 clearly marks the company’s push into a future where AI touches almost everything you do.

FAQs

Q1. What new AI features did Google announce for Gemini?

Answer. Google introduced Gemini 2.5 Flash for faster AI responses and Gemini 2.5 Deep Think for stronger reasoning in math, coding, and multimodal tasks.

Q2. How is Google making Search smarter with AI?

Answer. AI Mode now provides AI-generated answers, taps into Gmail context, supports shopping and ticket purchases, and expands to 200+ countries in 40+ languages.

Q3. What updates did Google reveal for developers?

Answer. Google announced Wear OS 6 Developer Preview, Jules AI coding assistant, AI-powered filmmaking tools, and new Google Play updates for app discovery and curation.

Source: Mymobileindia.com | View original article

Google I/O 2025: The best from the developer conference of Apple’s competitor

Gemini, a direct competitor to tools like ChatGPT or Claude, is getting new modes and features that make it a full-fledged personal assistant. Google also introduced at this year’s I/O a new AI Ultra plan for $250 a month, which will provide access to the latest AI tools, high usage limits and bonuses such as YouTube Premium and 30TB of cloud storage. In Gmail, Gemini now analyzes past communication and the user’s style to generate responses that sound personal and natural. In the Chrome browser, the newly integrated AI can answer questions about open tabs or summarize long articles. Google Meet is introducing a new live translation feature – currently between English and Spanish, with other languages gradually being added. Google also previewed Android XR smart glasses with AI translation and navigation.

Read full article ▼
This week, at its annual developer conference I/O 2025, Google unveiled groundbreaking innovations that will fundamentally change the way people interact with technology. The key theme was deep integration of artificial intelligence (AI) across all products – from the search engine to Gmail to new augmented reality smart glasses. The main star is the Gemini AI platform, which receives a whole range of new features and capabilities.

Gemini, a direct competitor to tools like ChatGPT or Claude, is getting new modes and features that make it a full-fledged personal assistant. So what new features is Gemini gaining?

Gemini Live

Gemini Live is now available for iPhone users. It enables real-time interaction with the AI via screen sharing or the phone camera. For example, the user can recognize objects in their surroundings, get advice on DIY projects, or get help with planning and organization.

Agent Mode

A forthcoming feature called Agent Mode has the potential to be a breakthrough. It will allow users, for example, to find ideal tickets for a sports match or search for suitable housing based on specified parameters.

Personal context

Gemini will actively use the user’s personal context – search history, data from apps like Gmail, Calendar, or Maps – and offer proactive suggestions and reminders.

AI Ultra: Artificial intelligence on steroids (and for $250 a month)

Google also introduced at this year’s I/O a new AI Ultra plan for $250 a month, which will provide access to the latest AI tools, high usage limits and bonuses such as YouTube Premium and 30TB of cloud storage. At the same time, it renamed the original AI Premium plan to AI Pro.

Gemini in Google Search, realistic videos and images

The Veo 3 and Imagen 4 tools are pushing the boundaries of visual content creation using artificial intelligence. Veo 3 can generate realistic videos complete with sound effects, believable physics and dialogue, but it is only available to AI Ultra subscribers. Imagen 4 focuses on creating detailed and photorealistic images with sophisticated elements such as fur, fabrics or text directly in the image. This tool is immediately available in Gemini.

At the same time, Google is fundamentally transforming its search with artificial intelligence. The new AI Mode replaces traditional results with AI-powered answers and uses query fan-out – breaking a question down into sub-queries and performing hundreds of searches simultaneously. The Deep Search feature can create expert insights based on extensive data analysis and visualize them in the form of graphs or summaries. AI Shopping Mode then offers the possibility of realistically "trying on" clothes using a camera or photo, recommends suitable variants and finds discounts.
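
To make the fan-out idea concrete, here is a minimal, purely illustrative Python sketch of the pattern described above: split a question into sub-queries, run them concurrently, and merge the results. The helper names (decompose, run_search) and the fake search backend are assumptions made for illustration only; they are not Google's actual implementation or API.

```python
import asyncio

def decompose(question: str) -> list[str]:
    # Hypothetical stand-in for the model that splits a question into subtopics.
    return [f"{question} reviews", f"{question} price", f"{question} alternatives"]

async def run_search(query: str) -> dict:
    # Hypothetical stand-in for a search backend; just simulates network latency.
    await asyncio.sleep(0.1)
    return {"query": query, "results": [f"result for {query!r}"]}

async def fan_out(question: str) -> list[dict]:
    sub_queries = decompose(question)
    # Issue all sub-queries simultaneously instead of one traditional search,
    # then gather the answers into a single result set.
    return await asyncio.gather(*(run_search(q) for q in sub_queries))

if __name__ == "__main__":
    for hit in asyncio.run(fan_out("lightweight fitness tracker")):
        print(hit["query"], "->", hit["results"][0])
```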

Gemini coming to Gmail, Chrome and Meet

Google is expanding the use of its Gemini artificial intelligence into key tools like Gmail, Chrome and Meet. In Gmail, Gemini now analyzes past communication and the user’s style to generate responses that sound personal and natural. In the Chrome browser, the newly integrated AI can answer questions about open tabs or summarize long articles. Google Meet is introducing a new feature in the form of live translation – currently between English and Spanish, with other languages gradually being added.

Android XR: Smart glasses with AI translation and navigation

Google also unveiled a prototype of smart AR glasses with an integrated display. These lightweight glasses offer live translation directly on the lens as well as visual navigation instructions. They are equipped with cameras, microphones and speakers and are connected to the Gemini AI model, which provides image recognition, gives live responses and generates personalized directions. The glasses are being created in collaboration with the design brands Gentle Monster and Warby Parker. The first commercial device running on the Android XR platform, however, will be a headset from Samsung, which is set to be released later this year.

Source: Letemsvetemapplem.eu | View original article

Source: https://www.cnet.com/tech/mobile/everything-we-learned-at-google-io-ai-mode-in-chrome-gemini-live-xr-glasses-and-much-more/
