
These are the worst ChatGPT prompts for the environment, study claims | BBC Science Focus Magazine
Researchers in Germany ran 14 open source LLMs – ranging in size from 7 to 72 billion parameters – through 1,000 benchmark questions and measured the CO₂ emissions generated during each response.
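The article doesn't detail the measurement setup, but the general approach behind figures like these – converting energy consumed during generation into CO₂ via a grid carbon-intensity factor – can be sketched as follows. The intensity value and energy figures below are illustrative assumptions, not numbers from the study.

```python
# Sketch: turning measured energy into CO2-equivalent emissions.
# The grid intensity below is an assumed illustrative value,
# not a figure reported in the study.

GRID_INTENSITY_G_PER_KWH = 480.0  # assumed average grid carbon intensity

def emissions_grams(energy_joules: float) -> float:
    """Convert energy consumed (joules) into grams of CO2-equivalent."""
    kwh = energy_joules / 3.6e6  # 1 kWh = 3.6 million joules
    return kwh * GRID_INTENSITY_G_PER_KWH

# Emissions scale linearly with energy, so a reasoning model that
# burns 50x the energy of a concise one emits 50x the CO2.
concise = emissions_grams(2_000)      # assumed ~2 kJ for a short answer
reasoning = emissions_grams(100_000)  # 50x the energy
print(round(reasoning / concise))
```

Because the conversion is linear, any ratio reported in energy terms carries straight through to emissions.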
They found that models which use internal reasoning to ‘think’ through answers can produce up to 50 times more emissions than those that respond concisely.
Meanwhile, models with more parameters – which are generally more accurate – also produced more emissions.
But the models themselves aren’t the only problem; the way users interact with them has huge implications too.
“Because people are being friendly and using ‘please’ and ‘thank you’, the LLMs are generating more extensive answers,” Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences and first author of the study, told BBC Science Focus.
“This means more words are generated, which means the model is executed for longer, which needs more power and emits more CO₂.”
Those extra words may add nothing to how useful an answer is, yet they still carry an environmental cost.
“If the model generates 10,000 words of really useful text or 10,000 words of nonsense, it doesn’t matter. It’s the same CO₂ emissions,” Dauner said.
Being polite to your AI platform uses more power – Getty
This means users could reduce emissions by prompting artificial intelligence (AI) models to be brief and to the point, or by asking for bullet points instead of full paragraphs. Even casually generating images, jokes or essays when they’re not truly needed can come at a climate cost.
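Since every generated word costs roughly the same energy whether it is useful or filler, the saving from a brevity instruction is simply proportional to the words trimmed. A minimal sketch, with an assumed (not measured) per-word energy cost:

```python
# Sketch: output length drives generation energy roughly linearly,
# so asking for bullet points pays off in proportion to words saved.
# The per-word energy cost is an illustrative assumption.

ENERGY_J_PER_WORD = 2.0  # assumed average generation cost per word

def generation_energy(words: int) -> float:
    """Estimated energy (joules) to generate an answer of given length."""
    return words * ENERGY_J_PER_WORD

verbose = generation_energy(400)  # full-paragraph answer
bullets = generation_energy(80)   # "answer briefly, in bullet points"
saving = 1 - bullets / verbose
print(f"{saving:.0%} less energy")
```

On these assumed numbers, cutting a 400-word answer to 80 words saves 80 per cent of the generation energy, and by extension 80 per cent of the emissions.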
The study also found that questions requiring deeper reasoning – such as philosophy or abstract algebra – produced far more emissions than straightforward topics like history.
While the researchers tested smaller models that can be run locally, Dauner said the pattern likely holds – and worsens – at larger scales.
“The main difference between the models I tested and the models behind Microsoft Copilot or ChatGPT is the number of parameters,” he said. Those models, which many of us use daily, often have around 10 times more parameters – and, by the same logic, carbon footprints roughly 10 times larger.
As well as encouraging individual users to think before they prompt, Dauner pointed to things that the companies behind LLMs can do. For example, he suggested that developing systems to select the smallest model needed to answer each question accurately could cut unnecessary emissions at scale.
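The routing idea Dauner describes can be sketched as a classifier that estimates how hard a question is and dispatches it to the smallest model expected to answer accurately. The difficulty heuristic and model names below are hypothetical placeholders, not part of the study.

```python
# Sketch of model routing: send each query to the smallest model
# expected to handle it. The heuristic and model tiers here are
# hypothetical stand-ins for a real learned router.

def estimate_difficulty(question: str) -> float:
    """Crude stand-in for a learned difficulty classifier."""
    hard_topics = ("prove", "philosophy", "algebra", "derive")
    score = 0.2
    if len(question.split()) > 20:
        score += 0.1  # longer questions tend to need more reasoning
    if any(t in question.lower() for t in hard_topics):
        score += 0.5
    return score

def route(question: str) -> str:
    """Pick the smallest model tier likely to be accurate enough."""
    d = estimate_difficulty(question)
    if d < 0.3:
        return "small-7b"   # cheapest, lowest emissions
    if d < 0.6:
        return "mid-32b"
    return "large-72b"      # reserved for genuinely hard queries

print(route("What year did WWII end?"))
print(route("Prove the group axioms for Z mod n"))
```

In this sketch a simple factual question stays on the 7-billion-parameter tier, while an abstract-algebra proof is escalated to the largest model – so the biggest, most energy-hungry model only runs when it is actually needed.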
“I’m a big fan of these tools,” Dauner said. “I use them every day. The key is just to use them concisely and be aware of what’s happening.”
About our expert
Maximilian Dauner is a PhD candidate at Hochschule München University of Applied Sciences.
Source: https://www.sciencefocus.com/news/worst-chatgpt-prompts-for-the-environment