
Microsoft AI chief says it’s ‘dangerous’ to study AI consciousness
A growing number of AI researchers at labs like Anthropic are asking when — if ever — AI models might develop subjective experiences similar to those of living beings, and if they do, what rights they should have.
The debate over whether AI models could one day be conscious — and merit legal safeguards — is dividing tech leaders. In Silicon Valley, this nascent field has become known as “AI welfare,” and if you think it’s a little out there, you’re not alone.
Microsoft’s CEO of AI, Mustafa Suleyman, published a blog post on Tuesday arguing that the study of AI welfare is “both premature, and frankly dangerous.”
Suleyman says that by lending credence to the idea that AI models could one day be conscious, these researchers are exacerbating human problems that are just starting to emerge, such as AI-induced psychotic breaks and unhealthy attachments to AI chatbots.
Furthermore, Microsoft’s AI chief argues that the AI welfare conversation creates a new axis of division within society over AI rights in a “world already roiling with polarized arguments over identity and rights.”
Suleyman’s views may sound reasonable, but he’s at odds with many in the industry. On the other end of the spectrum is Anthropic, which has been hiring researchers to study AI welfare and recently launched a dedicated research program around the concept. Last week, Anthropic’s AI welfare program gave some of the company’s models a new feature: Claude can now end conversations with humans who are being “persistently harmful or abusive.”
Beyond Anthropic, researchers from OpenAI have independently embraced the idea of studying AI welfare. Google DeepMind recently posted a job listing for a researcher to study, among other things, “cutting-edge societal questions around machine cognition, consciousness and multi-agent systems.”
Even if AI welfare is not official policy at these companies, their leaders are not publicly decrying its premises as Suleyman is.
Anthropic, OpenAI, and Google DeepMind did not immediately respond to TechCrunch’s request for comment.
Suleyman’s hardline stance against AI welfare is notable given his prior role leading Inflection AI, a startup that developed one of the earliest and most popular LLM-based chatbots, Pi. Inflection claimed that Pi reached millions of users by 2023 and was designed to be a “personal” and “supportive” AI companion.
Source: https://finance.yahoo.com/news/microsoft-ai-chief-says-dangerous-175253304.html