
Be Careful What You Tell ChatGPT: Your Chats Could Show Up on Google Search
Diverging Reports Breakdown
Rare Outage Takes Down Major ChatGPT Feature
ChatGPT’s conversation history feature went down today, leaving users in the dark. OpenAI confirmed the issue just after noon ET and deployed a fix by 3:24 p.m. ChatGPT automatically names every new chat with its interpretation of the topic. It’s unclear if the issue is a design bug or if the conversations are actually deleted. Some people are reporting issues only on desktop, not mobile, and others are experiencing the problem on both.
Original Story 7/30:
ChatGPT’s conversation history feature went down today, leaving users in the dark.
“I’ve closed ChatGPT for a minute and opened it again, only to find three months of data is gone!” says one Redditor. “Images I’ve created during this period are still in the library, but chats are just…Poofed!”
“I had some important convos saved there,” writes another.
OpenAI confirmed the issue just after noon ET and deployed a fix by 3:24 p.m., according to its status tracker. “We have applied the mitigation and are monitoring the recovery,” it says. Outage tracker Downdetector also shows a spike in user-reported issues around 10:30 a.m. ET.
(Credit: Downdetector)
Conversation history is a core feature within the otherwise simple chat interface. Users often return to previous chats to reference or continue them, since each tends to cover one or two core topics. ChatGPT automatically names every new chat with its interpretation of the topic, such as “Identifying trees” or “calculating tips.”
It’s unclear if the issue is a design bug or if the conversations are actually deleted. Some people are reporting issues only on desktop, not mobile, and others are experiencing the problem on both.
ChatGPT is generally a reliable product; OpenAI self-reports 99.55% uptime across its 23 services. Conversation history has 99.81% uptime, so an outage is an unusual event for the chatbot overall and for this particular feature. OpenAI has not posted about the issue on its social media. We’ve reached out for more information and will update this story if we hear back.
Disclosure: PCMag parent company Ziff Davis owns Downdetector.
China to Nvidia: Do Your AI Chips Contain a Backdoor?
The Chinese government says Nvidia’s H20 AI chip is a potential spying risk and has summoned Nvidia to discuss its concerns about the H20 containing “backdoors.” US lawmakers have floated a proposed bill to require Nvidia to install location-tracking technology on high-end GPUs. Nvidia: “Cybersecurity is critically important to us. Nvidia does not have ‘backdoors’ in our chips that would give anyone a remote way to access or control them.” “The question is not whether China will have AI. It already does,” Nvidia CEO Jensen Huang says.
On Thursday, the Cyberspace Administration of China flagged Nvidia’s H20 AI GPU as a potential spying risk, after Nvidia secured White House approval to sell the product in China. It summoned Nvidia to discuss its concerns about the H20 containing “backdoors.”
In a statement, the agency noted that US lawmakers have floated a proposed bill to require Nvidia to install location-tracking technology on high-end GPUs. The bill’s sponsor, Sen. Tom Cotton (R-Ark.), says doing so will help prevent the technology from falling into the hands of “adversaries like Communist China.”
Without elaborating, China’s cybersecurity agency also cited unnamed “US AI experts” who claim Nvidia chips contain mature tracking and remote shutdown capabilities. The agency has asked Nvidia to submit documentation that proves the H20 doesn’t pose a spying risk.
In response, Nvidia tells PCMag: “Cybersecurity is critically important to us. Nvidia does not have ‘backdoors’ in our chips that would give anyone a remote way to access or control them.”
According to Reuters, Nvidia has ordered Taiwan’s TSMC to build 300,000 H20 units to meet the demand in China, an indicator that the GPU will become widely used in Chinese data centers.
Although the H20 was downgraded to comply with US export controls, Democratic lawmakers fear the AI chip will still prove powerful enough to help the country develop cutting-edge AI technologies for a wide variety of applications, including for the military, putting US national security at risk.
However, Nvidia CEO Jensen Huang has defended the advanced chip sales to China, arguing the country is bound to develop cutting-edge AI, with or without US tech. “The question is not whether China will have AI. It already does. The question is whether one of the world’s largest AI markets will run on American platforms,” he said in May.
Meanwhile, the US Commerce Department notes that H20 shipments to China still require an export license. “The Trump Administration will consider any H20 license applications carefully, accounting for both the benefits and the costs of potential exports from America, and considering the views of experts across the US Government,” the department tells PCMag.
Zuckerberg Walks Back Open-Source AI Pledge, Citing Safety Risk
Meta CEO Mark Zuckerberg is hyping up the emergence of superintelligent AI. Just don’t expect it to be open source, like the company’s Llama large language models. Zuckerberg has been offering massive deals to poach top AI researchers from rivals like OpenAI and Apple. At the same time, Meta is investing billions in new data centers that promise to develop and run future AI systems. The company has tightly restricted the use of Llama in the European Union and has not released the training data to build the large language model.
On Wednesday, Zuckerberg published a manifesto about the company’s efforts to build AI that’s smarter than humans. “Developing superintelligence is now in sight,” he writes.
“Over the last few months, we have begun to see glimpses of our AI systems improving themselves. The improvement is slow for now, but undeniable,” he said.
But while the manifesto talks up Zuckerberg’s vision to use AI to improve society, he also signals a retreat from openness, citing safety. “We believe the benefits of superintelligence should be shared with the world as broadly as possible. That said, superintelligence will raise novel safety concerns. We’ll need to be rigorous about mitigating these risks and careful about what we choose to open source,” he says.
The statement stands out since it was only a year ago that Zuckerberg wrote a separate article pledging to make the company’s AI open source. “There is an ongoing debate about the safety of open source AI models, and my view is that open source AI will be safer than the alternatives,” he said at the time.
Other critics say Zuckerberg’s AI efforts were never really open source, which is supposed to mean that the software’s code is freely available for developers to inspect and modify. But with Llama, Meta has imposed certain restrictions, according to the Open Source Initiative, a group that stewards the open-source definition.
For example, users need to sign off on Meta’s own licensing document to download Llama. Although the agreement is designed to prevent malicious use of the AI technology, it also prohibits deploying Llama in critical infrastructure or for regulated controlled substances.
Meta’s agreement also requires users to seek out a license if the Llama implementation gains “greater than 700 million monthly active users in a calendar month.” In addition, the company has tightly restricted Llama’s use in the European Union and has not released the training data to build the large language model.
Of course, keeping Meta’s superintelligent AI under wraps is also about staying ahead of the competition. Zuckerberg has been offering massive deals to poach top AI researchers from rivals like OpenAI and Apple. At the same time, Meta is investing billions in new data centers that promise to develop and run future AI systems.
In his manifesto, Zuckerberg added that his intent is to deliver the AI capabilities to “billions of people across our products.”
Uber Eats Adds New Perk: Upload Your Food Pics, Get Uber Cash
Uber Eats is offering perks to those who add their own pics. Customers can snap a picture after delivery and earn $3 in Uber Cash if their photo gets selected. Uber’s AI will now automatically detect and enhance low-quality images with better lighting, resolution, framing, and plating.
If a restaurant doesn’t have an image for a menu item, customers can snap a picture after delivery and earn $3 in Uber Cash if their photo gets selected. Uber Cash balance can be used for both Uber rides and food or package deliveries. To get started, tap “Add Photos” on the rate order screen; it appears you can share up to five photos for each food item.
(Credit: Uber)
Alongside its new perk for customers, Uber announced a set of AI features for vendors. To help restaurants present their food better, Uber’s AI will now automatically detect and enhance low-quality images with better lighting, resolution, framing, and plating.
Restaurants can also use AI to write menu descriptions and summarize customer reviews. The AI tool will highlight strengths and areas of improvement.
The platform is also getting a Live Order Chat for customers. These chats can only be initiated by restaurants, and are intended to help increase order accuracy by checking in on special requests, dietary preferences, and allergies. They can also be used to provide real-time updates about replacements for out-of-stock items.
Uber Eats also shared its latest merchant impact report, which says the platform completed 140 million orders and generated $9.4 billion in revenue from the US in 2024.
ChatGPT chats will now show up in Google search, which is alarming — but there’s an easy way to stop it from happening
ChatGPT users can now find other users’ chat sessions through Google Search. For this to happen, you need to have clicked the share button on a chat. When you do, ChatGPT generates a URL that anyone you share it with can visit. Once this URL has been generated, Google can scrape it and add it to search results. Be aware: anything inside the chat when you click that button can be found by someone through Google. The upside is that any further content you add to the chat after clicking share will not be exposed. The shared URL is a snapshot, and unless you click share again, subsequent messages won’t go the same way.
You might assume your ChatGPT conversations stay between you and the chatbot. As reported by our pals over at TechRadar, though, that is very much not the case. At least, in some cases. You can now, quite easily, go and find chat sessions other ChatGPT users have had through Google Search.
Yikes.
There’s no immediate indication that your ChatGPT session building a workout plan is suddenly going to start topping Google’s rankings. Instead, by entering site:chatgpt.com/share followed by the topic you’re looking for, you can bring up a whole raft of results from OpenAI’s chatbot.
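To make the mechanics of that search concrete, here’s a small Python sketch that builds the corresponding Google search URL from a topic using the standard site: operator. The helper name is my own for illustration; it isn’t part of any official Google or OpenAI API.

```python
from urllib.parse import quote_plus

def chatgpt_share_search_url(topic: str) -> str:
    # The site: operator restricts results to pages under chatgpt.com/share,
    # i.e. conversations whose share links Google has already indexed.
    query = f"site:chatgpt.com/share {topic}"
    return "https://www.google.com/search?q=" + quote_plus(query)

print(chatgpt_share_search_url("workout plan"))
# → https://www.google.com/search?q=site%3Achatgpt.com%2Fshare+workout+plan
```

Pasting the bare query (`site:chatgpt.com/share workout plan`) into Google’s search box does the same thing; the URL form is just handy if you want to link to the results directly.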
Anyone can do this and will be taken to a full ChatGPT transcript. (Image credit: Windows Central)
I know I wasn’t aware of this, but equally, I’m also fairly confident none of my ChatGPT chats will be in there, either. Not that there’s anything interesting for people to look at anyway.
That’s because it seems that for this to happen you need to have clicked the share button on a chat. When you do that ChatGPT will generate a URL which anyone you share it with can go to.
What’s also happening, and isn’t clearly communicated, is that once this URL has been generated, Google can scrape it and add it to search results. As I’ve never once done this, I’m fairly confident that nobody can see my terrible attempts to use ChatGPT to learn a bit of PowerShell.
So, if you don’t share, your stuff won’t hit Google. I even asked ChatGPT itself about this whole situation, and this was the response.
I had to ask twice, but eventually ChatGPT itself gave an answer on this topic. (Image credit: Windows Central)
I do think OpenAI needs to be a little more up front and obvious with users of ChatGPT that this is what’s happening, though. When you click the share button, there’s no mention that doing so will allow your content to hit Google.
As such, you need to be especially careful if you are using the share function. I’d always urge caution over sharing personal information with a faceless computer somewhere on the planet anyway. But be aware, anything inside the chat when you click that button can be found by someone through Google.
The upside, if there is one, is that any further content you add to your chat after clicking share will not be visible at the link. The shared URL captures the conversation as it was at that moment, and unless you click share again, subsequent messages won’t go the same way.
So yes, it’s alarming. But it’s also avoidable. Be careful out there, folks.