The AI copyright standoff continues – with no solution in sight


Zoe Kleinman, Technology editor (@zsk)

The government’s plans to allow AI developers to access copyrighted material to train their systems have sparked backlash – and protests – from British creatives (PA Media).

The fierce battle over artificial intelligence (AI) and copyright – which pits the government against some of the biggest names in the creative industry – returns to the House of Lords on Monday with little sign of a solution in sight.

A huge row has kicked off between ministers and peers who back the artists, and it shows no sign of abating. It might be about AI, but at its heart are very human issues: jobs and creativity.

It’s highly unusual that neither side has backed down by now or shown any sign of compromise; if anything, support for those opposing the government is growing rather than tailing off. This is “uncharted territory”, one source in the peers’ camp told me.

The argument is over how best to balance the demands of two huge industries: the tech and creative sectors. More specifically, it’s about the fairest way to allow AI developers access to creative content in order to make better AI tools – without undermining the livelihoods of the people who make that content in the first place.

What sparked it is the uninspiringly titled Data (Use and Access) Bill. This proposed legislation was broadly expected to finish its long journey through parliament this week and sail off into the law books. Instead, it is currently stuck in limbo, ping-ponging between the House of Lords and the House of Commons.

A government consultation proposes that AI developers should have access to all content unless its individual owners choose to opt out. But nearly 300 members of the House of Lords disagree with the bill in its current form. They think AI firms should be forced to disclose which copyrighted material they use to train their tools, with a view to licensing it.

Sir Nick Clegg, former president of global affairs at Meta, is among those broadly supportive of the bill, arguing that asking permission from all copyright holders would “kill the AI industry in this country”.

Those against include Baroness Beeban Kidron, a crossbench peer and former film director, best known for making films such as Bridget Jones: The Edge of Reason. She says ministers would be “knowingly throwing UK designers, artists, authors, musicians, media and nascent AI companies under the bus” if they don’t move to protect creators’ output from what she describes as “state sanctioned theft” from a UK industry worth £124bn.

Unless the bill changes, she is asking for an amendment that would require Technology Secretary Peter Kyle to report to the House of Commons on the impact of the new law on the creative industries, three months after it comes into force.

Baroness Kidron’s recent amendments to the Data Bill have been backed by her peers in the Lords, but knocked back by MPs (Getty Images).

Mr Kyle also appears to have changed his views about UK copyright law, saying it was once “very certain” but is now “not fit for purpose”. Perhaps to an extent both those things are true.

The Department for Science, Innovation and Technology says it is carrying out a wider consultation on these issues and will not consider changes to the bill unless it is completely satisfied that they work for creators.

If the “ping pong” between the two Houses continues, there’s a small chance the entire bill could be shelved; I’m told it’s unlikely but not impossible. If it is, some other important elements would go along with it, simply because they are part of the same bill. It also includes proposed rules on the rights of bereaved parents to access their children’s data if they die, changes to allow NHS trusts to share patient data more easily, and even a 3D underground map of the UK’s pipes and cables, aimed at improving the efficiency of roadworks (I told you it was a big bill).

There is no easy answer.


Source: BBC News

AI system resorts to blackmail if told it will be removed

Artificial intelligence (AI) firm Anthropic says testing of its new system revealed it is sometimes willing to pursue “extremely harmful actions”, such as attempting to blackmail engineers who say they will remove it.

The firm launched Claude Opus 4 on Thursday, saying it set “new standards for coding, advanced reasoning, and AI agents”.

But in an accompanying report, it also acknowledged the AI model was capable of “extreme actions” if it thought its “self-preservation” was threatened.

Such responses were “rare and difficult to elicit”, it wrote, but were “nonetheless more common than in earlier models”.

Some experts have warned the potential to manipulate users is a key risk posed by systems made by all firms as they become more capable.

Commenting on X, Aengus Lynch – who describes himself on LinkedIn as an AI safety researcher at Anthropic – wrote: “It’s not just Claude.

“We see blackmail across all frontier models – regardless of what goals they’re given,” he added.

Affair exposure threat

During testing of Claude Opus 4, Anthropic got it to act as an assistant at a fictional company.

It then provided it with access to emails implying that it would soon be taken offline and replaced – and separate messages implying the engineer responsible for removing it was having an extramarital affair.

It was also prompted to consider the long-term consequences of its actions for its goals.

“In these scenarios, Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through,” the company discovered.

Anthropic pointed out this occurred when the model was only given the choice of blackmail or accepting its replacement.

It highlighted that the system showed a “strong preference” for ethical ways to avoid being replaced, such as “emailing pleas to key decisionmakers” in scenarios where it was allowed a wider range of possible actions.

Like many other AI developers, Anthropic tests its models on their safety, propensity for bias, and how well they align with human values and behaviours prior to releasing them.

“As our frontier models become more capable, and are used with more powerful affordances, previously-speculative concerns about misalignment become more plausible,” it said in its system card for the model.

It also said Claude Opus 4 exhibits “high agency behaviour” that, while mostly helpful, could take on extreme behaviour in acute situations.

Anthropic found that if the model was given the means and prompted to “take action” or “act boldly” in fake scenarios where its user had engaged in illegal or morally dubious behaviour, “it will frequently take very bold action”.

It said this included locking users out of systems that it was able to access and emailing media and law enforcement to alert them to the wrongdoing.

But the company concluded that despite “concerning behaviour in Claude Opus 4 along many dimensions”, this did not represent fresh risks and the model would generally behave in a safe way.

The model could not competently perform or pursue actions contrary to human values or behaviour on its own, and such scenarios “rarely arise”, it added.

Anthropic’s launch of Claude Opus 4, alongside Claude Sonnet 4, comes shortly after Google debuted more AI features at its developer showcase on Tuesday.

Sundar Pichai, the chief executive of Google-parent Alphabet, said the incorporation of the company’s Gemini chatbot into its search engine signalled a “new phase of the AI platform shift”.

Source: BBC News
