I Saw Google’s New ChromeOS AI Tool in Action—and It’s Pretty Amazing

Diverging Reports Breakdown

Google’s Veo 3 AI Slopfest Just Reached New Heights

Google’s Veo 3 video-generating tool made its prime-time debut in an ad that aired on Thursday. The spot was scripted with a combination of Gemini and ChatGPT, and its creator says it took him all of two days and “300-400 generations,” so this isn’t quite wave-a-magic-wand technology yet. It did, however, effectively eliminate a lot of jobs on what would have been a much bigger payroll. Veo 3 isn’t yet the preferred method of advertising, but the job-killing potential of this kind of technology is undoubtedly high. And if there’s a way to cut costs, you can bet your ass that we’ll see a lot more of this kind of AI slop in the near future. Veo 3 was unveiled at Google’s I/O conference last month and has appeared in a number of other demos, including an action-schlock AI slopfest.

Apparently, Veo 3 is ready for prime-time TV whether you like it or not.

Just when I thought I met my slop quota for the month, Google had to go ahead and pile on. With Veo 3, AI-generated video has reached a whole new and stupefying level. YouTube slop; video game slop; VR slop; app slop—you name it. All of that slop, however interesting, disheartening, or inane, has been pretty low stakes, but apparently it’s all pointed in one direction, and that’s straight for prime time.

Kalshi hired me to make the most unhinged NBA Finals commercial possible. Network TV actually approved this GTA-style madness 🤣 High-dopamine Veo 3 videos will be the ad trend of 2025. Here’s how I made it in just TWO DAYS 👇🏼 (Prompt included) pic.twitter.com/XcT3m7CROL — PJ Ace (@PJaccetturo) June 11, 2025

On Thursday night, Veo 3 made its debut as a tool for AI advertising, becoming the engine behind this commercial for the financial services company Kalshi, which aired during the NBA Finals. This isn’t the first AI-generated ad—those have been happening for a little while now and were a pretty major theme at this past year’s Super Bowl—but it’s certainly an ascension for Veo 3, which was just unveiled at Google’s I/O conference last month. But just because AI ads aren’t new doesn’t mean the idea of video generation didn’t simultaneously reach new heights and new lows.

As you can see from the creator of the ad (or I guess prompter in this case), PJ Ace, the whole process was rife with Google AI, from ideation to generation. “Kalshi asked me to create a spot about people betting on various markets, including the NBA Finals,” wrote Ace on X. “I said the best Veo 3 content is crazy people doing crazy things while showcasing your brand.” After the initial idea—which was apparently a thematic mashup of GTA and Florida—Ace used a mixture of Gemini and ChatGPT to help write and devise the script, and then took those ideas and had Gemini literally write a prompt that he could feed into Veo 3. That’s right, folks, he had AI prompt itself, and that’s how a prime-time ad was born.
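The workflow Ace describes—one model drafting the prompt that a second model consumes—can be sketched generically. Everything below is a hypothetical stand-in: the function names and stub "models" are illustrative, not Google's actual Gemini or Veo 3 APIs.

```python
# Sketch of the "AI prompts AI" pipeline: a text model expands a creative
# brief into a detailed video prompt, which is then fed to a video model.
# Both model calls here are local stubs so the pipeline runs end to end.

def draft_video_prompt(text_model, brief: str) -> str:
    """Ask a text model to turn a creative brief into a shot-level prompt."""
    instruction = (
        "Write a single detailed text-to-video prompt (camera, subject, "
        f"action, style) for this brief: {brief}"
    )
    return text_model(instruction)

def generate_clip(video_model, prompt: str) -> dict:
    """Hand the drafted prompt to a video model and return its result."""
    return video_model(prompt)

# Stand-in models (no external services involved).
fake_text_model = lambda instr: (
    "Handheld shot, neon-lit street, chaotic crowd cheering, GTA-style grade."
)
fake_video_model = lambda prompt: {"prompt": prompt, "status": "rendered"}

result = generate_clip(
    fake_video_model,
    draft_video_prompt(fake_text_model, "people betting on the NBA Finals"),
)
print(result["status"])  # rendered
```

The point of the chain is that the human only supplies the brief; the intermediate, highly specific prompt is itself machine-written.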

The result looks about on par with what we’ve seen other people generating with Veo 3. The visuals themselves are realistic, but you’ll notice that each scene in the ad is very short. That’s because Veo 3 still has trouble with continuity. Even in Google’s curated demos of its new video generation model last month, including this action schlock AI slopfest, things get weird when you try to stitch coherent scenes together. Though Google’s AI filmmaking tool, Flow, is made for creating longer, coherent AI videos, allowing you to describe angles and characters and retain them across scenes, things still get wonky. The aforementioned action-oriented AI slopfest is full of strange scenes of a SWAT team shooting at nothing and jarring camera angle shifts that make the fact it was AI-generated pretty obvious.

Created with Google Flow. Visuals, Sound Design, and Voice were prompted using Veo 3 text-to-video. Welcome to a new era of filmmaking. pic.twitter.com/E3NSA1WsXe — Dave Clark (@Diesol) May 21, 2025

Ace says his ad took him all of two days to create and “300-400 generations,” so clearly this isn’t quite waving a magic wand-type technology yet. Though, as Ace points out, it did effectively kill a lot of jobs on what would have been a much bigger payroll. Ace estimates that the whole thing was about a “95 percent cost reduction” as opposed to “traditional ads.” There’s a lot to unpack here, and based on the limitations I just described above, I don’t think we can herald Veo 3 in as the new, preferred method of advertising, but the job-killing potential for this type of technology is undoubtedly high. And if there’s a way to cut costs, you can bet your ass that we’ll see a lot more of this kind of AI slop in the near future.

Source: Gizmodo.com

I Saw Google’s New ChromeOS AI Tool in Action—and It’s Pretty Amazing

Text Capture lets you select any part of your screen with text, whether it’s part of an image or by itself. It can, for example, create a calendar event with all the details from a selected image. It’s similar to what Apple’s Visual Intelligence promises for future versions of iOS and what Click to Do already does on Copilot+ PCs. Google’s vice president of product management for ChromeOS, John Maletis, said it’s about “bringing AI to where the user is.” That means offering AI tools, no matter what you’re doing on your computer, without requiring you to open a separate application. Like Apple and Microsoft, Google is emphasizing on-device AI processing but tapping into the cloud for some of the more computationally intensive features. Text Capture works directly with any Chromebook app, meaning you don’t have to stop whatever you are doing to launch it. Even the most advanced Chromebooks cost less than a competent Copilot+ PC, iPhone, or MacBook. Below, you can read my initial impressions of Text Capture and the other new ChromeOS AI features, based on a live demo.

Google isn’t giving up any ground to Apple or Microsoft in the race to add AI features to their respective platforms. Alongside a fresh class of Chromebooks with neural processing units (NPUs), Google introduced some intriguing new AI features for ChromeOS. Some work on every Chromebook, some only on Chromebook Plus machines, and some only on the new NPU-equipped models.

One new feature stood out to me amid all the announcements: Text Capture. With this Chromebook Plus tool, you can select any part of your screen with text, whether it’s part of an image or by itself. Text Capture then proposes related actions and carries out the one you choose. It can, for example, create a calendar event with all the details from a selected image. It’s similar to what Apple’s Visual Intelligence promises for future versions of iOS and what Click to Do already does on Copilot+ PCs.

Text Capture is indicative of Google’s strategy to make AI features as frictionless as possible. Google’s vice president of product management for ChromeOS, John Maletis, said it’s about “bringing AI to where the user is.” That means offering AI tools, no matter what you’re doing on your computer, without requiring you to open a separate application. Like Apple and Microsoft, Google is emphasizing on-device AI processing but tapping into the cloud for some of the more computationally intensive features.

I saw the Text Capture tool (and others) in action at Google’s SoHo offices in New York and came away impressed, especially since even the most advanced Chromebooks cost less than a competent Copilot+ PC, iPhone, or MacBook. Below, you can read my initial impressions of Text Capture and the other new ChromeOS AI features based on a live demo. I look forward to testing everything for myself soon.

How to Use Text Capture in ChromeOS

Despite not requiring one of the new Chromebooks with an NPU, Text Capture uses an on-device AI model. It works directly with any Chromebook app, meaning you don’t have to stop whatever you are doing to launch it.

(Credit: Google/PCMag)

To get started, long-press the Launcher key (the button in the lower-left corner of the desktop with the G) or the space bar. An overlay message appears and instructs you to select anywhere on the screen, either by dragging your finger on a touchscreen or by dragging with a touchpad or mouse. During my demo, the Google rep selected an area of a grocery receipt with purchases and prices.

(Credit: Google/PCMag)

Once you select something, you should see a panel that shows an AI overview about it. Otherwise, you can run a search on the image or text in question. That part works like the Google Lens AI feature I mention later. You get a colorful border animation around the panel, and a set of relevant action buttons appears. For the receipt, the options were Create Google Sheet, Copy Text, and Copy With Formatting. That first option seems the most useful, since a spreadsheet gives you tons of options for working with numerical data.

(Credit: Google/PCMag)

After choosing the Google Sheets option, a brief “Creating” dialog appeared in the lower right side of the screen. Finally, Google Sheets opened with the selected text from the image all in neat rows and columns. Just note that you might need to clean up any text from an image that doesn’t fit a pattern, such as the “2 @ $1.25 ea” line below.
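That kind of cleanup can be automated with a small post-processing pass. The sketch below is purely illustrative—it is not part of Text Capture or Google Sheets—and shows one way to expand a quantity line like “2 @ $1.25 ea” into explicit columns.

```python
import re

# Hypothetical cleanup for receipt rows exported to a spreadsheet: expand
# quantity lines like "2 @ $1.25 ea" into quantity, unit price, and total.
QTY_PATTERN = re.compile(r"^(\d+)\s*@\s*\$(\d+\.\d{2})\s*ea$")

def normalize_row(raw: str) -> dict:
    match = QTY_PATTERN.match(raw.strip())
    if match:
        qty, unit = int(match.group(1)), float(match.group(2))
        return {"qty": qty, "unit_price": unit, "total": round(qty * unit, 2)}
    return {"raw": raw.strip()}  # leave anything else untouched

print(normalize_row("2 @ $1.25 ea"))  # {'qty': 2, 'unit_price': 1.25, 'total': 2.5}
print(normalize_row("MILK  3.49"))    # {'raw': 'MILK  3.49'}
```

Rows that already fit the item/price pattern pass through unchanged, so the fix-up only touches the lines the OCR export couldn’t structure.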

(Credit: Google/PCMag)

You can also create a calendar entry from an image you select. Text Capture can tell if a selection contains event information, in which case you get a button that proposes adding a calendar appointment. (Below, you can see the colorful animation around the selection box.)

(Credit: Google/PCMag)

Again, relevant buttons appear below your selection. After the Google rep chose Add to Calendar, a fully populated event entry appeared in Google Calendar.

(Credit: Google/PCMag)

Text Capture vs. Similar Apple and Microsoft Features

Text Capture somewhat overlaps with the Visual Intelligence features Apple announced at WWDC, but there are some important differences. First, the Apple tool is available only on iPhones running iOS 26, meaning iPad and Mac users are out of luck. Second, with Apple, you have to take a photo or screenshot before you can do anything, whereas Google’s Text Capture lets you select anything that’s on the screen. In its favor, Apple’s Visual Intelligence understands objects in images in addition to text.

Microsoft’s Click to Do feature also lets you select objects on your Windows 11 desktop and then suggests relevant actions. Click to Do doesn’t take you out of your current activity either: you just press the Windows key and click on the object of interest. Click to Do can summarize or rewrite text with AI and open or edit images, but it lacks Text Capture’s ability to create a calendar event or spreadsheet based on a selection.

Even More New ChromeOS Features

Google also announced the Smart Grouping feature for the new range of Chromebooks with NPUs. This uses on-device AI to identify apps, tabs, and windows that are part of a related overall task and groups them together on a new virtual desktop.

The ability to generate AI images via the Quick Insert key is another cool exclusive for Chromebooks with an NPU. This works directly within an app, such as Google Docs. You have to give up the Caps Lock key to use this feature, though you can still get that functionality in other ways. Here’s what the Quick Insert interface looks like in use:

(Credit: Google/PCMag)

All ChromeOS users will get the new Select to Search With Lens feature: When you select an object in any application, a Google Lens panel with information about it pops up. This tool is also available from the screenshot utility.

The “Help me read” feature now includes a Simplify option that uses AI to condense or clarify complex text. Finally, Google’s NotebookLM AI research tool will be preinstalled on Chromebook Plus laptops.

Can Google Deliver on Its Goal of Practical AI?

All the AI tools appearing on desktop OSes, whether from Apple, Google, or Microsoft, have the potential to make everyday computing easier. Google’s latest features seem competitive and genuinely useful so far, but I plan to test them more to see just how much they improve my efficiency.

Source: Me.pcmag.com

OpenAI Takes Down Jony Ive’s ‘io’ References Amid Trademark Dispute

OpenAI has removed all mentions of Jony Ive’s startup “io” from its website and social media pages. The scrub follows a court order prompted by a trademark complaint from iyO, a hearing device startup that graduated from Alphabet’s X moonshot lab. Last month, OpenAI announced it was acquiring the startup for an estimated $6.5 billion. That came on day two of the Google I/O developer conference, prompting some to speculate that OpenAI and Ive were looking to steal some of Google’s thunder with io.

OpenAI has removed all mentions of Jony Ive’s startup “io” from its website and social media pages. The scrub follows a court order prompted by a trademark complaint from iyO, a hearing device startup that graduated from Alphabet’s X moonshot lab.

“This page is temporarily down due to a court order following a trademark complaint from iyO about our use of the name ‘io.’ We don’t agree with the complaint and are reviewing our options,” OpenAI said in an X post on Sunday.

Ive’s io was founded last year by the former Apple designer to develop next-generation hardware for AI. Last month, OpenAI announced it was acquiring the startup for an estimated $6.5 billion. That came on day two of the Google I/O developer conference, prompting some to speculate that OpenAI and Ive were looking to steal some of Google’s thunder with io.

As of this writing, OpenAI has taken down mentions of io, but CEO Sam Altman has not.

In the launch video still viewable on his X feed, Altman and Ive are seen hyping up new hardware that can make better use of AI. “I think we have the opportunity here to completely reimagine what it means to use a computer,” Altman says in the video.

According to Bloomberg, the acquisition was to be completed in the summer following regulatory approvals and would have cost OpenAI nearly $6.5 billion. Despite the lawsuit, OpenAI has no plans to drop the acquisition, a company spokesperson tells The Verge.

Once the takeover is completed, OpenAI would merge io’s team of 55 hardware engineers, software developers, and manufacturing experts with its teams in San Francisco. Together, they are expected to create a family of AI-powered devices, and the first one is already being tested.

“The first one we’ve been working on…has just completely captured our imagination,” Ive said in the video. Altman, who has been testing the prototype at home, says: “I’ve been able to live with it, and I think it’s the coolest piece of technology that the world will have ever seen.”

Disclosure: Ziff Davis, PCMag’s parent company, filed a lawsuit against OpenAI in April 2025, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.

Source: Me.pcmag.com

Gen Z and Boomers Agree: This Is the Rudest Thing You Can Do With Your Smartphone

Survey: 75% of US adults consider it unacceptable to video chat or take phone calls on speaker without headphones in public spaces like coffee shops or grocery stores. 87% of Boomers believe using speakerphones in public is inappropriate, while 41% of Gen Zers deem this sort of behavior acceptable. Nearly one-quarter (23%) of Millennials view looking through a partner’s device as acceptable. But the worst action of all is looking at a friend or family member’s device without their knowledge: 92% of respondents call it unacceptable, according to the YouGov survey. Read on to learn what people think is rude and what is OK when it comes to modern-day tech usage, including how they feel about sending a text or email written by ChatGPT.

I live in New York, where, all too often, I find myself waiting in line at the grocery store as the person behind me yammers away to a friend on speakerphone. Or I’ll be riding the subway next to a dude (it’s always a dude) blasting music from his phone. To me, this sort of behavior is beyond rude. Headphones were invented for a reason!

Of course, we all have different thresholds for what’s annoying. For instance, according to PCMag’s new survey on tech etiquette, 75% of US adults consider it unacceptable to video chat or take phone calls on speaker without headphones in public spaces like coffee shops or grocery stores. However, a full quarter of folks find this behavior acceptable.

(Credit: Cole Kan)

“It’s a little shocking how many people think it’s appropriate to actually use these devices in public, like to play music without headphones, for example,” says business speaker Scott Steinberg, author of The Business Etiquette Bible: Modern and High-Tech Rules, Tips and Training for Working Professionals. (Just under 20% of US adults say playing music without headphones in public spaces is acceptable, according to our survey.) “But at the same time, it mirrors what you see in real-world environments.”

When it comes to tech etiquette, there is often evidence of a generational divide. Our survey of more than 2,000 adults, conducted by YouGov last month, finds that 87% of Boomers believe using speakerphones in public is inappropriate, while 41% of Gen Zers deem this sort of behavior acceptable. “It tells you that you’re seeing changing generational norms and that most of us haven’t received a great deal of training when it comes to high-tech etiquette—we’re kind of left to figure it out for ourselves,” Steinberg says.

“So it makes sense that younger generations would feel their way as they go and as they see their peers using these devices,” he continues. “Whereas older generations perhaps received a little bit more traditional training in terms of etiquette, which has bled over into the high-tech realm.”

Etiquette expert Elaine Swann, the author of the forthcoming Elaine Swann’s Book of Modern Etiquette, expands on Steinberg’s observations. “Gen Z folks grew up with this technology,” she says. “It’s part of their makeup, and so they have a more relaxed perspective in terms of how to use it. The technology comes first, before the consideration piece.”

But what, according to our survey, is the least acceptable thing you can do involving tech? And how do people feel about, say, sending a text or email written by ChatGPT? Read on to learn more about what people think is rude and what is OK when it comes to modern-day tech usage.

How Terrible Is Snooping on Someone Else’s Device?

Our survey finds that 84% of US adults consider looking at a partner’s device without their knowledge to be unacceptable, with the remaining 16% finding such snooping OK. However, nearly one-quarter (23%) of Millennials view looking through a partner’s device as acceptable.

But the worst action of all is looking at a friend or family member’s device without their knowledge: 92% of respondents call it unacceptable.

(Credit: Cole Kan)

Why is peeping at a friend or family member’s device considered worse? “I think that intimacy gives a person more comfort in accessing [their partner’s] device,” Swann says.

Steinberg cites the lurid, tech-enabled cheating stories we’ve all heard as the reason people in relationships are “a little bit more comfortable engaging in these behaviors, even if they wouldn’t, for example, snoop on a friend or family member.” He adds, “It’s just to say it’s hard out there on the dating scene.”

Is Having ChatGPT Write Your Text or Email OK?

Sending a text or email written by ChatGPT might seem convenient, but people are split on whether it’s OK. A majority—56%—deem it unacceptable. However, younger generations are more forgiving: 51% of Gen Zers and 54% of Millennials find using ChatGPT in these instances acceptable.

“Gen Z and Millennials think it’s more acceptable because they’re balancing more things in their lives, and looking for ways to be more efficient,” Swann says. “With the older generations, they have a very strong work ethic and ChatGPT feels like they’re cheating.”

But Steinberg thinks the entire population’s views on using generative AI for texts and emails are likely to change dramatically over the next few years. “As AI becomes more and more integrated into virtually every aspect of communications, in business and our daily lives, we’re going to become more accepting of it,” he says, “just because it’s going to be there at every turn.”

Why Gen Z Is (Mostly) Different

Three-quarters of US adults say that texting or emailing while actively conversing with someone else IRL is unacceptable. However, 40% of Gen Zers think this behavior is no problem. “Gen Z folks prioritize electronic communication over verbal communication,” Swann explains. “This is the same generation that has challenges making phone calls, and they tend to be a little more socially awkward.”

Gen Zers also rank first among the generations when it comes to thinking it’s acceptable to use devices in the bathroom, whether for a selfie or a mirror shot: 44% consider it just fine. “Gen Z thinks it’s OK, because if the lighting is right, honey, they’re going to take that shot,” Swann says.

(Note: Click on the headers in the chart below to see what different generations think.)

Meanwhile, nearly three in four adults overall find it unacceptable. Steinberg, who is Gen X, isn’t a fan of bathroom phone usage. “People are social creatures,” he says. “We learn from our environments. Unfortunately, you know, I guess some of us learn from the next stall.”

Generations tend to align when it comes to recording or photographing strangers in public. Eight in 10 US adults find such behavior unacceptable, but around one in five from each generation—Gen Z (19%), Millennials (22%), Gen X (21%), and Boomers (19%)—are cool with it.

“It is good to know that [overall] people do find this unacceptable because we should be mindful of recording strangers in public, and even people we know,” Swann says. “I always say that if you take a picture with someone, just take a millisecond to ask them if you can post it online before you do.”

Methodology

PCMag commissioned YouGov to conduct the survey. All figures, unless otherwise stated, are from YouGov. Total sample size was 2,104 adults. Fieldwork was undertaken from May 13-15. The survey was carried out online. The figures have been weighted and are representative of all US adults (aged 18 and older).
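Weighting, as described above, means each respondent's answer counts in proportion to an assigned weight rather than as a raw tally. The sketch below illustrates the idea with made-up numbers; it is not YouGov's actual data or methodology.

```python
# Illustrative weighted-percentage calculation: reported figures are
# weight-adjusted shares, not raw counts. All values here are invented.

def weighted_share(answers, weights, target):
    """Fraction of weighted respondents giving the `target` answer."""
    total = sum(weights)
    hit = sum(w for a, w in zip(answers, weights) if a == target)
    return hit / total

answers = ["unacceptable", "acceptable", "unacceptable", "unacceptable"]
weights = [1.2, 0.8, 1.0, 1.0]  # e.g. up-weight under-sampled demographics

print(round(weighted_share(answers, weights, "unacceptable"), 2))  # 0.8
```

With equal weights this reduces to the raw proportion; unequal weights shift the figure toward the answers of groups the sample under-represents.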

Disclosure: Ziff Davis, PCMag’s parent company, filed a lawsuit against OpenAI in April 2025, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.

Source: Me.pcmag.com

Apple Is Facing Another Siri AI Lawsuit, This Time From Shareholders

A federal securities class-action lawsuit has been filed against Apple. It claims that the company promoted a supercharged Siri for the iPhone 16 despite knowing it wasn’t ready. Apple’s stock hit an all-time high in December 2024 and fell in March after an executive told Daring Fireball that the Siri features would not be available until 2026. The case is yet to be certified by a judge, but the plaintiffs have alleged violations of US securities laws and demanded relief for damages.

Apple is being sued for delaying Apple Intelligence features once again. This time, the lawsuit comes in the form of a federal securities class action on behalf of all individuals or entities who purchased Apple shares between June 10, 2024, and June 9, 2025.

Filed by shareholder Eric Tucker, the lawsuit names CEO Tim Cook, current CFO Kevan Parekh, and former CFO Luca Maestri as defendants. It claims that the company promoted a supercharged Siri for iPhone 16 despite knowing it wasn’t ready.

“Unbeknownst to investors, Apple lacked a functional prototype of these advanced AI-based Siri features at the time of the 2024 WWDC and had no reasonable basis to believe it could deliver the product it was advertising within the iPhone 16 product cycle, if ever,” the lawsuit says.

The complaint also notes the number of times Apple promoted the Siri features on its website and YouTube since June 2024 before removing them and announcing they were behind schedule in March 2025. Apple’s stock hit an all-time high in December 2024 and fell in March after an executive told Daring Fireball that Siri features would not be available until 2026.

As MacRumors notes, the case is yet to be certified by a judge, but the plaintiffs have alleged violations of US securities laws and demanded relief for damages.

Similar claims have been noted in at least three other lawsuits filed by iPhone 16 users. They claim they have had to pay a premium for some iPhone 16 features that are not yet available. The device launched in September 2024, and the delayed Siri features won’t reportedly arrive until iOS 26.4 in spring 2026.

Among those delayed is Siri’s ability to tap into users’ personal context and take actions based on what’s on the screen. According to a demo at WWDC 24, when you ask Siri about the arrival details of a flight you booked, it would be able to go through your email, find the ticket, search for its live status on the web, and then provide a response.

Responding to what caused the delay, Apple software boss Craig Federighi recently said that things started falling apart when the team tried to merge old Siri code with new. Following the debacle, Siri has been completely shifted to the new code to meet the quality standards customers expect from Apple, Federighi added.

Source: Uk.pcmag.com

Source: https://www.pcmag.com/news/google-chromeos-ai-capture-text-tool-preview
