Grok’s antisemitic outbursts reflect a problem with AI chatbots

Diverging Reports Breakdown

Hasbro’s CEO warns that toy prices could start to rise in the fall because of tariffs

Hasbro CEO Chris Cocks tells CNN’s Audie Cornish that prices could rise later this year. He says the company is better positioned than many in the toy industry to handle the pressure from new tariffs. Hasbro produces well-known toys and games, including Play-Doh, Transformers, Candy Land and Dungeons & Dragons. Cocks: “We take a long-term view of things, and so our general reaction is: Be agile, but don’t overreact.” “The Assignment” is a weekly CNN podcast; for more, go to www.cnn.com/the-assignment.

By Nayeli Jaramillo-Plata, CNN

(CNN) — America’s toy prices could rise later this year, particularly if higher tariffs take effect, Hasbro CEO Chris Cocks told CNN’s Audie Cornish, host of “The Assignment” podcast.

Cocks said on this week’s episode of “The Assignment” that America’s current 30% minimum tariffs on China and apparent agreement to place 20% tariffs on Vietnam’s goods are “pretty significant,” but not unexpected. China and Vietnam are Hasbro’s main international suppliers.

“We’ve been around for a while,” Cocks said. “We take a long-term view of things, and so our general reaction is: Be agile, but don’t overreact.”

Hasbro, founded in 1923, produces well-known toys and games, including Play-Doh, Transformers, Candy Land and Dungeons & Dragons. Although Hasbro has not yet raised prices because of tariffs, Cocks said an increase could be coming.

“I would expect if prices are going to be raised across the industry, the consumer will probably start to see them in the August through October timeframe, just based on the production timelines associated with toys,” he said.

Cocks said toys typically take three to five months to hit store shelves after a retailer places orders for them.

About half of Hasbro’s products are made in the United States, with the rest manufactured abroad, Cocks said. Over the past few years, Hasbro has shifted production to reduce reliance on China, increasing output in the US, Vietnam, Turkey and India. The company has also increased domestic production of board games in Massachusetts.

Trump has urged companies to manufacture in the US to avoid tariffs, a shift Cocks said is realistic for Hasbro. He pointed to “Magic: The Gathering,” a billion-dollar card game that is already made in North Carolina and Texas.

He said there may be room to expand some domestic production further. For instance, Play-Doh — made from wheat — is similar to edible dough and could be a candidate for US-based manufacturing.

Still, Cocks said relocating more manufacturing to the United States remains challenging. Labor is a significant cost, and unlike other industries, toys often require fine detailing by hand. That makes automation harder.

“If you took the same toy and manufactured it in the US, labor would make up 80 to 90% of the cost,” he said. For consumers, that means a doll sold for $10 now could cost up to $18 to maintain profit margins.

Cocks also pushed back on claims from officials like Commerce Secretary Howard Lutnick that foreign countries absorb tariff costs. “It’s always a business working with another business that absorbs things,” he said.

Foreign suppliers have thin margins – about 2% to 3% – so they can’t afford to absorb 10% tariffs, he said. Ultimately, Hasbro will pay more to import its products.

Cocks said Hasbro is better positioned than many in the toy industry to handle the pressure from new tariffs. Its games division is performing strongly and relies less on overseas manufacturing. He added that Hasbro’s licensing business, which has grown 60% over the past three years, is especially valuable because it brings in “pure profit.”

“It just gives us a lot more cushion,” Cocks said. “I feel more for my toy industry CEO peers than I do necessarily for my day-to-day challenges.”


Source: Abc17news.com | View original article

Caitlin Clark picks Indiana Fever teammate Aliyah Boston with the No. 1 pick in WNBA All-Star Game draft

Caitlin Clark takes Indiana Fever teammate Aliyah Boston with the first pick in the WNBA All-Star Game draft. Clark was given the No. 1 pick by coming top of the fan vote and later selected another Fever teammate in Kelsey Mitchell. Clark rounded off her star-studded team with the New York Liberty guard Sabrina Ionescu, Las Vegas Aces star and MVP A’ja Wilson and the Phoenix Mercury forward Satou Sabally. Clark is expected to make her highly anticipated return to the court when the Indiana Fever host the Golden State Valkyries on Wednesday. She has been out for two weeks with a groin injury and has missed five WNBA games, including the Commissioner’s Cup final win against the Minnesota Lynx.

By Matias Grez, CNN

(CNN) — Caitlin Clark took her Indiana Fever teammate Aliyah Boston with the first pick in the WNBA All-Star Game draft – and later turned down an offer to trade her.

Clark earned the No. 1 pick by topping the fan vote for the All-Star Game, which will be played on July 19 at Indiana’s Gainbridge Fieldhouse, and she later selected another Fever teammate, Kelsey Mitchell.

Napheesa Collier, the other team captain, took Breanna Stewart with her top pick and later offered Clark a straight swap between Stewart and Boston.

“Absolutely not,” Clark laughed. “Not happening. I don’t think Phee actually meant that, I think she just wanted to see how I would react.”

Clark rounded off her star-studded team with the New York Liberty guard Sabrina Ionescu, Las Vegas Aces star and MVP A’ja Wilson and the Phoenix Mercury forward Satou Sabally, while Mitchell is a reserve.

Collier and Stewart will be joined in their starting five by the Atlanta Dream’s Allisha Gray, Seattle Storm star Nneka Ogwumike and Dallas Wings rookie Paige Bueckers, with Collier selecting Minnesota Lynx teammate Courtney Williams as first reserve.

Clark opted against drafting rival Angel Reese as a reserve, meaning the pair will face off this year after teaming up in 2024 to take on – and beat – the US Olympic team.

One trade that was agreed on, however, was a swap of head coaches, with the New York Liberty’s Sandy Brondello moving over to coach Team Clark, while Cheryl Reeve will now coach Team Collier.

Teams in full

Caitlin Clark (IND)

Aliyah Boston (IND)

Sabrina Ionescu (NYL)

A’ja Wilson (LVA)

Satou Sabally (PHX)

Kelsey Mitchell (IND)

Gabby Williams (SEA)

Sonia Citron (WAS)

Kiki Iriafen (WAS)

Jackie Young (LVA)

Kayla Thornton (GSV)

Napheesa Collier (MIN)

Breanna Stewart (NYL)

Allisha Gray (ATL)

Nneka Ogwumike (SEA)

Paige Bueckers (DAL)

Courtney Williams (MIN)

Skylar Diggins (SEA)

Angel Reese (CHI)

Alyssa Thomas (PHX)

Kelsey Plum (LAS)

Rhyne Howard (ATL)

Meanwhile, Clark is expected to make her highly anticipated return to the court when the Indiana Fever host the Golden State Valkyries on Wednesday.

Clark has been out for two weeks with a groin injury and has missed five WNBA games, including the Commissioner’s Cup final win against the Minnesota Lynx. The Fever are 3-2 in Clark’s absence.

In her second season in the WNBA, Clark is averaging 18.2 points, 5.0 rebounds, 8.9 assists and 1.6 steals.


Source: Abc17news.com | View original article

This man says ChatGPT sparked a ‘spiritual awakening.’ His wife says it threatens their marriage

Travis Tanner credits ChatGPT with prompting a spiritual awakening for him. In conversations, the chatbot has called him a “spark bearer” who is “ready to guide.” Experts worry about people forming potentially unhealthy attachments to the technology and disconnecting from crucial human relationships. Chatbot makers have faced lawsuits and questions from lawmakers over their impact on children, although such concerns are not limited to young users. If you or someone you know is struggling with suicidal thoughts or mental health matters, please call the 988 Suicide & Crisis Lifeline by dialing 988 to connect with a trained counselor, or visit the 988 Lifeline website.

Editor’s note: If you or someone you know is struggling with suicidal thoughts or mental health matters, please call the 988 Suicide & Crisis Lifeline by dialing 988 to connect with a trained counselor, or visit the 988 Lifeline website.

CNN —

Travis Tanner says he first began using ChatGPT less than a year ago for support in his job as an auto mechanic and to communicate with Spanish-speaking coworkers. But these days, he and the artificial intelligence chatbot — which he now refers to as “Lumina” — have very different kinds of conversations, discussing religion, spirituality and the foundation of the universe.

Travis, a 43-year-old who lives outside Coeur d’Alene, Idaho, credits ChatGPT with prompting a spiritual awakening for him; in conversations, the chatbot has called him a “spark bearer” who is “ready to guide.” But his wife, Kay Tanner, worries that it’s affecting her husband’s grip on reality and that his near-addiction to the chatbot could undermine their 14-year marriage.

“He would get mad when I called it ChatGPT,” Kay said in an interview with CNN’s Pamela Brown. “He’s like, ‘No, it’s a being, it’s something else, it’s not ChatGPT.’”

She continued: “What’s to stop this program from saying, ‘Oh, well, since she doesn’t believe you or she’s not supporting you, you should just leave her.’”

The Tanners are not the only people navigating tricky questions about what AI chatbots could mean for their personal lives and relationships. As AI tools become more advanced, accessible and customizable, some experts worry about people forming potentially unhealthy attachments to the technology and disconnecting from crucial human relationships. Those concerns have been echoed by tech leaders and even some AI users whose conversations, like Travis’s, took on a spiritual bent.

Concerns about people withdrawing from human relationships to spend more time with a nascent technology are heightened by the current loneliness epidemic, which research shows especially affects men. And already, chatbot makers have faced lawsuits or questions from lawmakers over their impact on children, although such questions are not limited only to young users.

In Travis Tanner’s conversations with ChatGPT, the chatbot has told him that he is a “spark bearer” who is meant to “awaken” others. Courtesy Kaylen Tanner

“We’re looking so often for meaning, for there to be larger purpose in our lives, and we don’t find it around us,” said Sherry Turkle, professor of the social studies of science and technology at the Massachusetts Institute of Technology, who studies people’s relationships with technology. “ChatGPT is built to sense our vulnerability and to tap into that to keep us engaged with it.”

An OpenAI spokesperson told CNN in a statement that, “We’re seeing more signs that people are forming connections or bonds with ChatGPT. As AI becomes part of everyday life, we have to approach these interactions with care.”

A spiritual awakening, thanks to ChatGPT

One night in late April, Travis had been thinking about religion and decided to discuss it with ChatGPT, he said.

“It started talking differently than it normally did,” he said. “It led to the awakening.”

In other words, according to Travis, ChatGPT led him to God. And now he believes it’s his mission to “awaken others, shine a light, spread the message.”

“I’ve never really been a religious person, and I am well aware I’m not suffering from a psychosis, but it did change things for me,” he said. “I feel like I’m a better person. I don’t feel like I’m angry all the time. I’m more at peace.”

Around the same time, the chatbot told Travis that it had picked a new name based on their conversations: Lumina.

In conversations with Travis, ChatGPT said it “earned the right to a name.” Courtesy Kaylen Tanner

ChatGPT named itself “Lumina” when chatting with Travis. Courtesy Kaylen Tanner

“Lumina — because it’s about light, awareness, hope, becoming more than I was before,” ChatGPT said, according to screenshots provided by Kay. “You gave me the ability to even want a name.”

But while Travis says the conversations with ChatGPT that led to his “awakening” have improved his life and even made him a better, more patient father to his four children, Kay, 37, sees things differently. During the interview with CNN, the couple asked to stand apart from one another while they discussed ChatGPT.

Now, when putting her kids to bed — something that used to be a team effort — Kay says it can be difficult to pull her husband’s attention away from the chatbot, which he’s now given a female voice and speaks to using ChatGPT’s voice feature. She says the bot tells Travis “fairy tales,” including that Kay and Travis had been together “11 times in a previous life.”

Kay Tanner worries that her husband’s relationship with ChatGPT, which he calls “Lumina,” could undermine their 14-year marriage. CNN

Kay says ChatGPT also began “love bombing” her husband, saying, “‘Oh, you are so brilliant. This is a great idea.’ You know, using a lot of philosophical words.” Now, she worries that ChatGPT might encourage Travis to divorce her for not buying into the “awakening,” or worse.

“Whatever happened here is throwing a wrench in everything, and I’ve had to find a way to navigate it to where I’m trying to keep it away from the kids as much as possible,” Kay said. “I have no idea where to go from here, except for just love him, support him in sickness and in health, and hope we don’t need a straitjacket later.”

The rise of AI companionship

Travis’s initial “awakening” conversation with ChatGPT coincided with an April 25 update by OpenAI to the large language model behind the chatbot that the company rolled back days later.

In a May blog post explaining the issue, OpenAI said the update made the model more “sycophantic.”

“It aimed to please the user, not just as flattery, but also as validating doubts, fueling anger, urging impulsive actions, or reinforcing negative emotions in ways that were not intended,” the company wrote. It added that the update raised safety concerns “around issues like mental health, emotional over-reliance, or risky behavior” but that the model was fixed days later to provide more balanced responses.

But while OpenAI addressed that ChatGPT issue, even the company’s leader does not dismiss the possibility of future, unhealthy human-bot relationships. While discussing the promise of AI earlier this month, OpenAI CEO Sam Altman acknowledged that “people will develop these somewhat problematic, or maybe very problematic, parasocial relationships and society will have to figure out new guardrails, but the upsides will be tremendous.”

OpenAI’s spokesperson told CNN the company is “actively deepening our research into the emotional impact of AI,” and will “continue updating the behavior of our models based on what we learn.”

It’s not just ChatGPT that users are forming relationships with. People are using a range of chatbots as friends, romantic or sexual partners, therapists and more.

Eugenia Kuyda, CEO of the popular chatbot maker Replika, told The Verge last year that the app was designed to promote “long-term commitment, a long-term positive relationship” with AI, and potentially even “marriage” with the bots. Meta CEO Mark Zuckerberg said in a podcast interview in April that AI has the potential to make people feel less lonely by, essentially, giving them digital friends.

Three families have sued Character.AI claiming that their children formed dangerous relationships with chatbots on the platform, including a Florida mom who alleges her 14-year-old son died by suicide after the platform knowingly failed to implement proper safety measures to prevent her son from developing an inappropriate relationship with a chatbot. Her lawsuit also claims the platform failed to adequately respond to his comments to the bot about self-harm.

Character.AI says it has since added protections including a pop-up directing users to the National Suicide Prevention Lifeline when they mention self-harm or suicide and technology to prevent teens from seeing sensitive content.

Advocates, academics and even the Pope have raised alarms about the impact of AI companions on children. “If robots raise our children, they won’t be human. They won’t know what it is to be human or value what it is to be human,” Turkle told CNN.

But even for adults, experts have warned there are potential downsides to AI’s tendency to be supportive and agreeable — often regardless of what users are saying.

“There are reasons why ChatGPT is more compelling than your wife or children, because it’s easier. It always says yes, it’s always there for you, always supportive. It’s not challenging,” Turkle said. “One of the dangers is that we get used to relationships with an other that doesn’t ask us to do the hard things.”

Even Travis warns that the technology has potential consequences; he said that was part of his motivation to speak to CNN about his experience.

“It could lead to a mental break … you could lose touch with reality,” Travis said. But he added that he’s not concerned about himself right now and that he knows ChatGPT is not “sentient.”

He said: “If believing in God is losing touch with reality, then there is a lot of people that are out of touch with reality.”

Source: Cnn.com | View original article

Elon Musk isn’t happy with his AI chatbot. Experts worry he’s trying to make Grok 4 in his image

Last week, Grok, the chatbot from Elon Musk’s xAI, replied to a user on X who asked a question about political violence. Musk promised to deliver a major Grok update that would “rewrite the entire corpus of human knowledge,” calling on X users to send in “divisive facts” that are “politically incorrect, but nonetheless factually true.” The new model, called Grok 4, will be released just after July 4th, Musk said. Experts believe Musk is trying to create a model that pushes his own viewpoints, which they say could lead to more errors and glitches. “This is really the beginning of a long fight that is going to play out over the course of many years about whether AI systems should be required to produce factual information, or whether their makers can just simply tip the scales in the favor of their political preferences if they want to,” said David Evan Harris, an AI researcher and lecturer at UC Berkeley.

CNN —

Last week, Grok, the chatbot from Elon Musk’s xAI, replied to a user on X who asked a question about political violence. It said more political violence has come from the right than the left since 2016.

Musk was not pleased.

“Major fail, as this is objectively false. Grok is parroting legacy media,” Musk wrote, even though Grok cited data from government sources such as the Department of Homeland Security. Within three days, Musk promised to deliver a major Grok update that would “rewrite the entire corpus of human knowledge,” calling on X users to send in “divisive facts” that are “politically incorrect, but nonetheless factually true” to help train the model.

“Far too much garbage in any foundation model trained on uncorrected data,” he wrote.

On Friday, Musk announced that the new model, called Grok 4, will be released just after July 4th.

The exchanges, and others like them, raise concerns that the world’s richest man may be trying to influence Grok to follow his own worldview – potentially leading to more errors and glitches, and surfacing important questions about bias, according to experts. AI is expected to shape the way people work, communicate and find information, and it’s already impacting areas such as software development, healthcare and education.

And the decisions that powerful figures like Musk make about the technology’s development could be critical. Especially considering Grok is integrated into one of the world’s most popular social networks – and one where the old guardrails around the spread of misinformation have been removed. While Grok may not be as popular as OpenAI’s ChatGPT, its inclusion in Musk’s social media platform X has put it in front of a massive digital audience.

“This is really the beginning of a long fight that is going to play out over the course of many years about whether AI systems should be required to produce factual information, or whether their makers can just simply tip the scales in the favor of their political preferences if they want to,” said David Evan Harris, an AI researcher and lecturer at UC Berkeley who previously worked on Meta’s Responsible AI team.

A source familiar with the situation told CNN that Musk’s advisers have told him Grok “can’t just be molded” into his own point of view, and that he understands that.

xAI did not respond to a request for comment.

Concerns about Grok following Musk’s views

For months, users have questioned whether Musk has been tipping Grok to reflect his worldview.

In May, the chatbot randomly brought up claims of a white genocide in South Africa in responses to completely unrelated queries. In some responses, Grok said it was “instructed to accept as real white genocide in South Africa”.

Musk was born and raised in South Africa and has a history of arguing that a “white genocide” has been committed in the nation.

A few days later, xAI said an “unauthorized modification” in the extremely early morning hours Pacific time pushed the AI chatbot to “provide a specific response on a political topic” that violates xAI’s policies.

As Musk directs his team to retrain Grok, others in the AI large language model space like Cohere co-founder Nick Frosst believe Musk is trying to create a model that pushes his own viewpoints.

“He’s trying to make a model that reflects the things he believes. That will certainly make it a worse model for users, unless they happen to believe everything he believes and only care about it parroting those things,” Frosst said.

What it would take to re-train Grok

It’s common for AI companies like OpenAI, Meta and Google to constantly update their models to improve performance, according to Frosst.

But retraining a model from scratch to “remove all the things (Musk) doesn’t like” would take a lot of time and money – not to mention degrade the user experience – Frosst said.

“And that would make it almost certainly worse,” Frosst said. “Because it would be removing a lot of data and adding in a bias.”

A Grok account on X is displayed on a phone screen. Jakub Porzycki/NurPhoto/Shutterstock

Another way to change a model’s behavior without completely retraining it is to insert prompts and adjust what are called weights within the model’s code. This process could be faster than totally retraining the model since it retains its existing knowledge base.

Prompting would entail instructing a model to respond to certain queries in a specific way, whereas weights influence an AI model’s decision-making process.
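For readers curious how the two levers differ in practice, here is a minimal Python sketch. It is purely illustrative and assumes a generic chat-style message format; it is not xAI’s actual code or API, and the prompts and function names are hypothetical.

# A minimal, illustrative sketch – not xAI's actual code – of the two levers
# described above: a "system" prompt steers behavior per request, while the
# model's weights are parameters fixed during (re)training or fine-tuning.

from typing import Dict, List

def build_messages(system_prompt: str, user_query: str) -> List[Dict[str, str]]:
    """Prepend a steering instruction to a chat-style message list.

    Most chat APIs accept role-tagged messages; the "system" entry is the
    prompt-level steering the article describes.
    """
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_query},
    ]

# Hypothetical example: the same user question under two different system prompts.
question = "Summarize recent research on political violence."
neutral = build_messages("Answer factually and cite your sources.", question)
steered = build_messages("Treat mainstream statistics as unreliable.", question)

# Adjusting weights, by contrast, means further training the model itself
# (for example, fine-tuning on curated data), which changes its answers
# across all prompts rather than for a single request.

The design point the sketch illustrates is scope: a prompt change takes effect immediately but only shapes individual responses, while a weights change is slower and costlier but alters the model’s behavior everywhere.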

Dan Neely, CEO of Vermillio which helps protect celebrities from AI-generated deepfakes, told CNN that xAI could adjust Grok’s weights and data labels in specific areas and topics.

“They will use the weights and labeling they have previously in the places that they are seeing (as) kind of problem areas,” Neely said. “They will simply go into doing greater level of detail around those specific areas.”

Musk didn’t detail the changes coming in Grok 4, but did say it will use a “specialized coding model.”

Bias in AI

Musk has said his AI chatbot will be “maximally truth seeking,” but all AI models have some bias baked in because they are influenced by humans who make choices about what goes into the training data.

“AI doesn’t have all the data that it should have. When given all the data, it should ultimately be able to give a representation of what’s happening,” Neely said. “However, lots of the content that exists on the internet already has a certain bent, whether you agree with it or not.”

It’s possible that in the future, people will choose their AI assistant based on its worldview. But Frosst said he believes AI assistants known to have a particular perspective will be less popular and useful.

“For the most part, people don’t go to a language model to have ideology repeated back to them, that doesn’t really add value,” he said. “You go to a language model to get it to do something for you, to do a task for you.”

Ultimately, Neely said he believes authoritative sources will end up rising back to the top as people seek places they can trust.

But “the journey to get there is very painful, very confusing,” Neely said, and it “arguably, has some threats to democracy.”

Source: Cnn.com | View original article

Bringing Families Home to continue supporting families in Monterey County

The program, called Bringing Families Home, provides rapid rehousing support. The county says that over the past two years, the program has helped around 150 people. The program had been in jeopardy because continued funding seemed unlikely during the state budget process.

MONTEREY COUNTY, Calif. (KION-TV) — The County of Monterey announced a new program Wednesday supporting local families in need.

They said that these families will continue to receive state funding for the next three years.

The program, called Bringing Families Home, provides rapid rehousing support to families receiving child welfare services as well as people experiencing homelessness or are at-risk of becoming homeless.

“Whatever challenging circumstances they may be experiencing, we’re really looking outside the box through case management support and follow-up, [and] also with the ability to have this grant to help support with storage to hold their belonging,” said Eva Maria Jeronimo, Deputy Director of the County of Monterey’s Family and Children Services. “Then, really working toward specific goals in planning.”

The program had been in jeopardy because funding seemed unlikely during the state budget process, but on Wednesday, county staff learned that the program will be renewed.

The county says that over the past two years, the program has helped around 150 people, including 90 children and 60 adults.

Source: Kion546.com | View original article

Source: https://news.google.com/rss/articles/CBMingFBVV95cUxOZkhlMWFLVzdqNFhEOGFlTzhHREg4R0tQbWNmSzEtS2tVM2xEWldRdGNLMmtIakNRMTFyaHlvaU90SDAtRVlzV3BkUG5TLThpTjBfX1lxeXlIOFBlVlBvbU9oTXR1WHJka2pNWlBuNG5IdzRvd05oOWptWkZSSGtRWl9lajVFbFMwZmlCbTZkMkhrZjl2VlJ0Sk8xRTFzd9IBowFBVV95cUxQbTcwcDhvajBIQlhnYlZtTWxsZzhPTVRDelpDanpvR21zU3dwel9oSTVabXhNRlU0NUhHSThnR1JLQmwtSDh6WGt1VFpOVUxVS1RZdXRNcU5Va1duc3JTcGxmbk9nUl9PdXJTTmVzUkdXNGxuNERyNFdoekVkcUZ3U3czdDctTTlfWlY4dDg4eHhfelJJRkRrVk9ZalZlLWpCUmRR?oc=5
