Fraudulent Scientific Papers Are Rapidly Increasing, Study Finds


Diverging Reports Breakdown

Is it OK for AI to write science papers? Nature survey shows researchers are split

Nature surveyed 5,229 academics about the ethics of using artificial intelligence. Most think it is acceptable to use AI to edit one’s research paper or to translate it. But views are more divided when it comes to writing all or part of a paper: a majority (65%) think it is ethically acceptable, while about one-third are against it. The survey suggests that current opinions on AI use vary among academics — sometimes widely. The results do not necessarily represent the views of researchers in general, because of inevitable response bias, but respondents were drawn from all around the world — 21% were from the United States, 10% from India and 8% from Germany, for instance — and represent various career stages and fields. Using AI to generate an initial peer-review report was frowned upon, with more than 60% saying it was not appropriate (about one-quarter of those cited privacy concerns), but a majority (57%) felt it was acceptable to use AI to assist in peer review by answering questions about a manuscript. The number of papers with signs of AI use is rising rapidly, raising questions around plagiarism and other ethical concerns.

How much is the artificial intelligence (AI) revolution altering the process of communicating science? With generative AI tools such as ChatGPT improving so rapidly, attitudes about using them to write research papers are also evolving. The number of papers with signs of AI use is rising rapidly (D. Kobak et al. Preprint at arXiv https://doi.org/pkhp; 2024), raising questions around plagiarism and other ethical concerns.

To capture a sense of researchers’ thinking on this topic, Nature posed a variety of scenarios to some 5,000 academics around the world, to understand which uses of AI are considered ethically acceptable.

Take Nature’s AI research test: find out how your ethics compare

The survey results suggest that researchers are sharply divided on what they feel are appropriate practices. Whereas academics generally feel it’s acceptable to use AI chatbots to help to prepare manuscripts, relatively few report actually using AI for this purpose — and those who did often say they didn’t disclose it.

Past surveys reveal that researchers also use generative AI tools to help them with coding, to brainstorm research ideas and for a host of other tasks. In some cases, most in the academic community already agree that such applications are either appropriate or, as in the case of generating AI images, unacceptable. Nature’s latest poll focused on writing and reviewing manuscripts — areas in which the ethics aren’t as clear-cut.

A divided landscape

Nature’s survey laid out several scenarios in which a fictional academic, named Dr Bloggs, had used AI without disclosing it — such as to generate the first draft of a paper, to edit their own draft, to craft specific sections of the paper and to translate a paper. Other scenarios involved using AI to write a peer review or to provide suggestions about a manuscript Dr Bloggs was reviewing (see Supplementary information for full survey, data and methodology, and you can also test yourself against some of the survey questions).

Survey participants were asked what they thought was acceptable and whether they had used AI in these situations, or would be willing to. They were not informed about journal policies, because the intent was to reveal researchers’ underlying opinions. The survey was anonymous.

The 5,229 respondents were contacted in March, through e-mails sent to randomly chosen authors of research papers recently published worldwide and to some participants in Springer Nature’s market-research panel of authors and reviewers, or through an invitation from Nature’s daily briefing newsletter. They do not necessarily represent the views of researchers in general, because of inevitable response bias. However, they were drawn from all around the world — of those who stated a country, 21% were from the United States, 10% from India and 8% from Germany, for instance — and represent various career stages and fields. (Authors in China are under-represented, mainly because many didn’t respond to e-mail invitations).

The survey suggests that current opinions on AI use vary among academics — sometimes widely. Most respondents (more than 90%) think it is acceptable to use generative AI to edit one’s research paper or to translate it. But they differ on whether the AI use needs to be disclosed, and in what format: for instance, through a simple disclosure, or by giving details about the prompts given to an AI tool.

When it comes to generating text with AI — for instance, to write all or part of one’s paper — views are more divided. In general, a majority (65%) think it is ethically acceptable, but about one-third are against it.

Asked about using AI to draft specific sections of a paper, most researchers felt it was acceptable to do this for the paper’s abstract, but more were opposed to doing so for other sections.

Although publishers generally agree that substantive AI use in academic writing should be declared, the response from Nature’s survey suggests that not all researchers have the same opinion, says Alex Glynn, a research literacy and communications instructor at the University of Louisville in Kentucky. “Does the disconnect reflect a lack of familiarity with the issue or a principled disagreement with the publishing community?”

Using AI to generate an initial peer-review report was more frowned upon — with more than 60% of respondents saying it was not appropriate (about one-quarter of these cited privacy concerns). But the majority (57%) felt it was acceptable to use AI to assist in peer review by answering questions about a manuscript.

“I’m glad to see people seem to think using AI to draft a peer-review report is not acceptable, but I’m more surprised by the number of people who seem to think AI assistance for human reviewers is also out of bounds,” says Chris Leonard, a scholarly-communications consultant who writes about developments in AI and peer review in his newsletter, Scalene. (Leonard also works as a director of product solutions at Cactus Communications, a multinational firm in Mumbai, India.) “That hybrid approach is perfect to catch things reviewers may have missed.”

AI still used only by a minority

In general, few academics said they had actually used AI for the scenarios Nature posed. The most popular category was using AI to edit one’s research paper, but only around 28% said they had done this (another 43%, however, said they’d be willing to). Those numbers dropped to around 8% for writing a first draft, making summaries of other articles for use in one’s own paper, translating a paper and supporting peer review.

A mere 4% of respondents said they’d used AI to conduct an initial peer review.

Overall, about 65% reported that they had never used AI in any of the scenarios given, with people earlier in their careers being more likely to have used AI in at least one scenario. But when respondents did say they had used AI, they more often than not said they hadn’t disclosed it at the time.

“These results validate what we have also heard from researchers — that there’s great enthusiasm but low adoption of AI to support the research process,” says Josh Jarrett, a senior vice-president at Wiley, the multinational scholarly publisher, which has also surveyed researchers about use of AI.

Split opinions

When given the opportunity to comment on their views, researchers’ opinions varied drastically. On the one hand, some said that the broad adoption of generative AI tools made disclosure unnecessary. “AI will be, if not already is, a norm just like using a calculator,” says Aisawan Petchlorlian, a biomedical researcher at Chulalongkorn University in Bangkok. “‘Disclosure’ will not be an important issue.”

On the other hand, some said that AI use would always be unacceptable. “I will never condone using generative AI for writing or reviewing papers, it is pathetic cheating and fraud,” said an Earth-sciences researcher in Canada.

AI is transforming peer review — and many scientists are worried

Others were more ambivalent. Daniel Egan, who studies infectious diseases at the University of Cambridge, UK, says that although AI is a time-saver and excellent at synthesizing complex information from multiple sources, relying on it too heavily can feel like cheating oneself. “By using it, we rob ourselves of the opportunities to learn through engaging with these sometimes laborious processes.”

Respondents also raised a variety of concerns, from ethical questions around plagiarism and breaching trust and accountability in the publishing and peer-review process to worries about AI’s environmental impact.

Some said that although they generally accepted that the use of these tools could be ethical, their own experience revealed that AI often produced sub-par results — false citations, inaccurate statements and, as one person described it, “well-formulated crap”. Respondents also noted that the quality of an AI response could vary widely depending on the specific tool that was used.

There were also some positives: many respondents pointed out that AI could help to level the playing field for academics for whom English was not a first language.

Several also explained why they supported certain uses, but found others unacceptable. “I use AI to self-translate from Spanish to English and vice versa, complemented with intensive editing of the text, but I would never use AI to generate work from scratch because I enjoy the process of writing, editing and reviewing,” says a humanities researcher from Spain. “And I would never use AI to review because I would be horrified to be reviewed by AI.”

Career stage and location

Perhaps surprisingly, academics’ opinions didn’t generally seem to differ widely by their geographical location, research field or career stage. However, respondents’ self-reported experience with AI for writing or reviewing papers did correlate strongly with having favourable opinions of the scenarios, as might be expected.

Career stage did seem to matter when it came to the most popular use of AI — to edit papers. Here, younger researchers were both more likely to think the practice acceptable, and more likely to say they had done it.

And respondents from countries where English is not a first language were generally more likely than those in English-speaking nations to have used AI in the scenarios. Their underlying opinions on the ethics of AI use, however, did not seem to differ greatly.

Related surveys

Various researchers and publishers have conducted surveys of AI use in the academic community, looking broadly at how AI might be used in the scientific process. In January, Jeremy Ng, a health researcher at the Ottawa Hospital Research Institute in Canada, and his colleagues published a survey of more than 2,000 medical researchers, in which 45% of respondents said they had previously used AI chatbots (J. Y. Ng et al. Lancet Dig. Health 7, e94–e102; 2025). Of those, more than two-thirds said they had used it for writing or editing manuscripts — meaning that, overall, around 31% of the people surveyed had used AI for this purpose. That is slightly more than in Nature’s survey.

Science sleuths flag hundreds of papers that use AI without disclosing it

“Our findings revealed enthusiasm, but also hesitation,” Ng says. “They really reinforced the idea that there’s not a lot of consensus around how, where or for what these chatbots should be used for scientific research.”

In February, Wiley published a survey examining AI use in academia by nearly 5,000 researchers around the world (see go.nature.com/438yngu). Among other findings, this revealed that researchers felt most uses of AI (such as writing up documentation and increasing the speed and ease of peer review) would be commonly accepted in the next few years. But less than half of the respondents said they had actually used AI for work, with 40% saying they’d used it for translation and 38% for proofreading or editing of papers.

Source: Nature.com | View original article

Measles may be making a comeback in the U.S., Stanford Medicine-led research finds

Measles is one of the most infectious diseases that exists, so the share of people who must be immune to prevent it from spreading is extremely high. The MMR (measles, mumps and rubella) vaccine has become particularly controversial, partly due to a history of fraudulent medical research that raised safety concerns. If vaccination rates were cut in half, we’d expect 51.2 million cases of measles, 9.9 million cases of rubella, 4.3 million cases of polio and 200 cases of diphtheria over 25 years. This would lead to 10.3 million hospitalizations and 159,200 deaths, plus an estimated 51,200 children with post-measles neurological complications. If we start to see major changes to the childhood vaccination schedule and policy, and coverage really drops, we could see the return of awful complications from diseases that most clinicians today have not encountered, thanks to decades of successful immunization. If that were to happen, it would take time to eliminate these diseases again.

Why is measles likely to become endemic and not the others?

Lo: Measles is one of the most infectious diseases that exists, so the number of people who have to be immune to prevent it from spreading is extremely high. Polio, diphtheria and rubella are still far more infectious than COVID-19, for example, but measles is in a different ballpark, with one person infecting up to 20 others (though our model took a conservative estimate of 12) in a fully susceptible population. Also, the MMR (measles, mumps and rubella) vaccine has become particularly controversial, partly due to a history of fraudulent medical research that raised safety concerns; it has been conclusively shown that there is no link with autism. Measles is also more common around the world, so travelers are more likely to bring it back.

Kiang: Travelers importing a disease are like matches, and U.S. under-vaccination is the tinder. With measles, you’re throwing a lot of matches in, and eventually something is going to happen.
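The "extremely high" immunity level Lo describes follows from the standard herd-immunity formula, threshold = 1 − 1/R0, where R0 is the number of people one case infects in a fully susceptible population. A minimal sketch of that arithmetic, assuming illustrative R0 values (only the conservative measles figure of 12 and the upper figure of 20 come from the interview; the polio value is a rough textbook estimate, not from this study):

```python
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of a population that must be immune so that, on average,
    each case infects fewer than one other person (threshold = 1 - 1/R0)."""
    return 1.0 - 1.0 / r0

examples = [
    ("measles, conservative R0", 12),  # the model's estimate cited above
    ("measles, upper R0", 20),         # one person infecting up to 20 others
    ("polio (illustrative)", 6),       # rough textbook value, not from the study
]

for label, r0 in examples:
    print(f"{label}: R0={r0} -> {herd_immunity_threshold(r0):.1%} must be immune")
# measles at R0=12 requires ~91.7% immunity; at R0=20, 95.0%
```

This is why measles is "in a different ballpark": even the conservative estimate puts the required immunity above 90%, so modest drops in vaccine coverage can reopen the door to sustained spread.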

What do you predict will happen if fewer people get their kids vaccinated?

Kiang: If vaccination were to fall by even 10% today, measles cases would skyrocket to 11.1 million over the next 25 years. If vaccination rates were cut in half, we’d expect 51.2 million cases of measles, 9.9 million cases of rubella, 4.3 million cases of polio and 200 cases of diphtheria over 25 years. This would lead to 10.3 million hospitalizations and 159,200 deaths, plus an estimated 51,200 children with post-measles neurological complications, 10,700 cases of birth defects due to rubella and 5,400 people paralyzed from polio. Measles would become endemic in less than five years, and rubella would become endemic in less than 20. Under these conditions, polio became endemic in about half of simulations in around 20 years.

What differences did you find at the state level?

Kiang: Massachusetts has high vaccination rates and was consistently low risk. Both California and Texas were higher risk, even after accounting for larger population size, because vaccination rates in both have dropped and there’s a lot of travel to those states. Our model assumed there was no spillover of infections across state lines, so the numbers could be an underestimate.

If these diseases become more widespread, who is in danger?

Lo: Folks who are unvaccinated are most at risk of infection and the awful complications. That includes babies, especially those between the ages of 6 and 12 months, who have waning antibodies from their mothers but are not yet eligible for their first dose of an MMR vaccine. Others are also at risk. People who are immunocompromised, which is a sizable segment of the U.S. population, can also be at risk. While the effects of declining vaccination won’t be immediate, we could eventually see the return of awful complications from diseases that most clinicians today have not encountered thanks to decades of successful immunization.

Which of your scenarios is most likely to play out?

Lo: One that’s conceivable is that vaccine coverage continues to drop, measles outbreaks become larger and more frequent, and eventually measles becomes endemic again. Hopefully, some fraction of the unvaccinated population seeks vaccination as a result, and state and local public health departments continue to do their vital work in outbreak response, allowing us to reestablish elimination. We might get into that kind of pattern. If we start to see major changes to the childhood vaccination schedule and policy, and coverage really drops, you get into a world where you worry about diseases like polio and rubella, but that would likely take well over a decade. If that were to happen, you can’t just flip a switch — once these diseases get unleashed, it would take time to eliminate them again.

What should parents, providers and others do based on these results?

Lo: I would encourage parents who aren’t sure about vaccination to discuss this with their pediatrician and believe in our health care providers. We hope our research also provides the data for federal and state officials, vaccine guideline committees, and others to understand what will happen if decisions are made that lead to declines in vaccination.

With measles, we’re right on the cusp. Increasing vaccination levels by just 5% brings the number of measles cases down, safely away from returning to endemic levels. These are the kinds of small percentages that can really be a tipping point. It’s empowering that a small segment of the population can make a difference here.

Kiang: It’s worth emphasizing that there really shouldn’t be any cases at this point, because these diseases are preventable. Anything above zero is tragic. When you’re talking about potentially thousands or millions, that’s unfathomable.

Researchers from Baylor College of Medicine, Rice University, and Texas A&M University contributed to the study.

A grant from the National Institutes of Health (T32AI007502) supported the research.

Source: Med.stanford.edu | View original article

Global Water Supplies Threatened by Overmining of Aquifers: New Study — ProPublica

Earth is being slowly dehydrated by the unmitigated mining of groundwater. Nearly 6 billion people, or three-quarters of humanity, live in the 101 countries that the study identified as confronting a net decline in water supply. Most of the water lost from drying regions is from groundwater pumping, which ultimately shifts fresh water from aquifers into the oceans. Moisture lost to evaporation and drought, plus runoff from pumped groundwater, now outpaces the melting of glaciers and the ice sheets of either Antarctica or Greenland as the largest contributor of water to the oceans, the study says. The paper “provides a glimpse of what the future is going to be,” said Hrishikesh Chandanpurkar, an earth systems scientist working with Arizona State University and the study’s lead author. “We are already dipping from a trust fund. We don’t actually know how much the account has.” The study examines 22 years of observational data from NASA’s Gravity Recovery and Climate Experiment, or GRACE, satellites.

So much groundwater is now being pumped that it is filling the oceans as it drains off land, becoming one of the largest drivers of global sea level rise.

But farther south — where most people live — it is largely the race to suck groundwater from aquifers that is removing the water from the continents.

As the planet gets hotter and its reservoirs shrink and its glaciers melt, people have increasingly drilled into a largely ungoverned, invisible cache of fresh water: the vast, hidden pools found deep underground.

Now, a new study that examines the world’s total supply of fresh water — accounting for its rivers and rain, ice and aquifers together — warns that Earth’s most essential resource is quickly disappearing, signaling what the paper’s authors describe as “a critical, emerging threat to humanity.” The landmasses of the planet are drying. In most places there is less precipitation even as moisture evaporates from the soil faster. More than anything, Earth is being slowly dehydrated by the unmitigated mining of groundwater, which underlies vast proportions of every continent. Nearly 6 billion people, or three quarters of humanity, live in the 101 countries that the study identified as confronting a net decline in water supply — portending enormous challenges for food production and a heightening risk of conflict and instability.

The paper “provides a glimpse of what the future is going to be,” said Hrishikesh Chandanpurkar, an earth systems scientist working with Arizona State University and the lead author of the study. “We are already dipping from a trust fund. We don’t actually know how much the account has.”

The research, published on Friday in the journal Science Advances, confirms not just that droughts and precipitation are growing more extreme but reports that drying regions are fast expanding. It also found that while parts of the planet are getting wetter, those areas are shrinking. The study, which excludes the ice sheets of Antarctica and Greenland, concludes not only that Earth is suffering a pandemic of “continental drying” in lower latitudes, but that it is the uninhibited pumping of groundwater by farmers, cities and corporations around the world that now accounts for 68% of the total loss of fresh water in those areas, which generally don’t have glaciers.

Groundwater is ubiquitous across the globe, but its quality and depth vary, as does its potential to be replenished by rainfall. Major groundwater basins — the deep and often high-quality aquifers — underlie roughly one-third of the planet, including roughly half of Africa, Europe and South America. But many of those aquifers took millions of years to form and might take thousands of years to refill. Instead, a significant portion of the water taken from underground flows off the land through rivers and on to the oceans.

The researchers were surprised to find that the loss of water on the continents has grown so dramatically that it has become one of the largest causes of global sea level rise. Moisture lost to evaporation and drought, plus runoff from pumped groundwater, now outpaces the melting of glaciers and the ice sheets of either Antarctica or Greenland as the largest contributor of water to the oceans.

[Chart: Water From Land Has Become a Leading Driver of Sea Level Rise. Most of the water lost from drying regions is from groundwater pumping, which ultimately shifts fresh water from aquifers into the oceans. Note: Glaciers refer to the parts of the continents covered in glaciers but exclude the ice sheets of Greenland and Antarctica. Drying land and aquifers refer to the water lost by the continents in areas not covered by glaciers, including river flow and evaporation. Groundwater loss accounts for 68% of the drying in those places.]

The study examines 22 years of observational data from NASA’s Gravity Recovery and Climate Experiment, or GRACE, satellites, which measure changes in the mass of the earth and have been applied to estimate its water content. The technique was groundbreaking two decades ago when the study’s co-author, Jay Famiglietti, who was then a professor at the University of California, Irvine, used it to pinpoint where aquifers were in decline. Since then, he and others have published dozens of papers using GRACE data, but the question has always lingered: What does the groundwater loss mean in the context of all of the water available on the continents? So Famiglietti, now a professor at Arizona State University, set out to inventory all the land-based water contained in glaciers, rivers and aquifers and see what was changing. The answer: everything, and quickly.

Since 2002, the GRACE sensors have detected a rapid shift in water loss patterns around the planet. Around 2014, though, the pace of drying appears to have accelerated, the authors found, and is now growing by an area twice the size of California each year. “It’s like this sort of creeping disaster that has taken over the continents in ways that no one was really anticipating,” Famiglietti said. (Six other researchers also contributed to the study.) The parts of the world drying most acutely are becoming interconnected, forming what the study’s authors describe as “mega” regions spreading across the earth’s mid-latitudes. One of those regions covers almost the whole of Europe, the Middle East, North Africa and parts of Asia.

[Chart: The Drying of the Earth Accelerated in Recent Years. The dramatic depletion of groundwater and surface water, plus the melting of glaciers, between 2014 and 2024 has connected once-separate arid places, forming “mega-drying” regions that stretch across whole continents. Note: Data is for February 2003 to December 2013 and January 2014 to April 2024; the first period contains seven more months of data than the second.]

In the American Southwest and California, groundwater loss is a familiar story, but over the past two decades that hot spot has also spread dramatically. It now extends through Texas and up through the southern High Plains, where the Ogallala aquifer is depended on for agriculture, and it spreads south, stretching throughout Mexico and into Central America. These regions are connected not because they rely on the same water sources — in most cases they don’t — but because their populations will face the same perils of water stress: the most likely, a food crisis that could ultimately displace millions of people.

“This has to serve as a wake-up call,” said Aaron Salzberg, a former fellow at the Woodrow Wilson Center and the former director of the Water Institute at the University of North Carolina, who was not involved with the study.

Research has long established that people take more water from underground when climate-driven heat and drought are at their worst. For example, during droughts when California has enforced restrictions on delivery of surface water to its farmers — which the state regulates — the enormous agriculture enterprises that dominate the Central Valley have drilled deeper and pumped harder, depleting the aquifer — which the state regulates less precisely — even more.

For the most part, such withdrawals have remained invisible. Even with the GRACE data, scientists cannot measure the exact levels or know when an aquifer will be exhausted. But there is one foolproof sign that groundwater is disappearing: The earth above it collapses as the ground compresses like a drying sponge. The visible signs of such subsidence around the world appear to match what the GRACE data says. Mexico City is sinking as its groundwater aquifers are drained, as are large parts of China, Indonesia, Spain and Iran, to name a few. A recent study by researchers at Virginia Tech in the journal Nature Cities found that 28 cities across the United States are sinking — New York, Houston and Denver, among them — threatening havoc for everything from building safety to transit. In the Central Valley, the ground surface is nearly 30 vertical feet lower than it was in the first part of the 20th century.

Ground subsidence around the world is one of the clearest ways to identify where groundwater is overdrawn.

When so much water is pumped, it has to drain somewhere. Just like rivers and streams fed by rainfall, much of the used groundwater makes its way into the ocean. The study pinpoints a remarkable shift: Groundwater drilled by people, used for agriculture or urban supplies and then discarded into drainages now contributes more water to the oceans than melting from each of the world’s largest ice caps.

People aren’t just misusing groundwater, they are flooding their own coasts and cities in the process, Famiglietti warns. That means they are also imperiling some of the world’s most important food-producing lowlands in the Nile and Mekong deltas and cities from Shanghai to New York. Once in the oceans, of course, groundwater will never again be suitable for drinking and human use without expensive and energy-sucking treatment or through the natural cycle of evaporating and precipitating as rain. But even then, it may no longer fall where it is needed most. Groundwater “is an intergenerational resource that is being poorly managed, if managed at all,” the study states, “at tremendous and exceptionally undervalued cost to future generations.”

That such rapid and substantial overuse of groundwater is also causing coastal flooding underscores the compounding threat of rising temperatures and aridity. It means that water scarcity and some of the most disruptive effects of climate change are now inextricably intertwined. And here, the study’s authors implore leaders to find a policy solution: Improve water management and reduce groundwater use now, and the world has a tool to slow the rate of sea level rise. Fail to adjust the governance and use of groundwater around the world, and humanity risks surrendering parts of its coastal cities while pouring out finite reserves it will sorely need as the other effects of climate change take hold.

How Groundwater Becomes Ocean Water The process starts when deep underground aquifers are tapped to make up for a lack of water from rainfall and rivers. Worldwide, 70% of fresh water is used for growing crops, with more of it coming from groundwater as droughts grow more extreme. Only a small amount of that water seeps back into aquifers. Instead, most of the water runs off the land into streams, eventually flowing into rivers. The rivers ultimately drain into the ocean, where fresh water becomes salt water. For that water to be usable again, it must either be industrially treated or return to the land as rain. But with climate change, these same drying regions are seeing less rainfall.

If the drying continues — and the researchers warn that it is now nearly impossible to reverse “on human timescales” — it heralds “potentially staggering” and cascading risks for global order. The majority of the earth’s population lives in the 101 countries that the study identified as losing fresh water, making up not just North America, Europe and North Africa but also much of Asia, the Middle East and South America. This suggests the middle band of Earth is becoming less habitable. It also correlates closely with the places that a separate body of climate research has already identified as a shrinking environmental niche that has suited civilization for the past 6,000 years. Combined, these findings all point to the likelihood of widespread famine, the migration of large numbers of people seeking a more stable environment and the carry-on impact of geopolitical disorder.

Peter Gleick, a climate scientist and a member of the National Academy of Sciences, lauded the new report for confirming trends that were once theoretical. The ramifications, he said, could be profoundly destabilizing. “The massive overpumping of groundwater,” Gleick said, “poses enormous risk to food production.” And food, he pointed out, is the foundation for stability. The water science center he co-founded, the Pacific Institute, has tracked more than 1,900 incidents in which water supplies were either the casualty of, a tool for or the cause of violence. In Syria, beginning in 2011, drought and groundwater depletion drove rural unrest that contributed to the civil war, which displaced millions of people. In Ghana, in 2017, protesters rioted as wells ran dry. And in Ukraine, whose wheat supports much of the world, water infrastructure has been a frequent target of Russian attacks.

“Water is being used as a strategic and political tool,” said Salzberg, who spent nearly two decades analyzing water security issues as the special director for water resources at the State Department. “We should expect to see that more often as the water supply crisis is exacerbated.”

India, for example, recently weaponized water against Pakistan. In April, following terrorist attacks in Kashmir, Prime Minister Narendra Modi suspended his country’s participation in the Indus Waters Treaty, a river-sharing agreement between the two nuclear powers that was negotiated in 1960. The Indus system flows northwest out of Tibet into India, before turning southward into Pakistan. Pakistan has severely depleted its groundwater reserves — the region is facing one of the world’s most urgent water emergencies according to the Science Advances paper. The Indus has only become more essential as a supply of fresh water for its 252 million people. Allowing that water to cross the border would be “prejudicial to India’s interests,” Modi said. In this case, he wasn’t attempting to recoup water supply for his country, Salzberg said, but was leveraging its scarcity to win a strategic advantage over his country’s principal rival.

What’s needed most is governance of water that recognizes it as a crucial resource that determines both sovereignty and progress, Salzberg added. Yet there is no international framework for water management, and only a handful of countries have national water policies of their own.

The United States has taken stabs at regulating its groundwater use, but in some cases those attempts appear to be failing. In 2014, California passed what seemed to many a revolutionary groundwater management act that required communities to assess their total water supply and budget its long-term use. But the act doesn’t take full effect until 2040, which has allowed many groundwater districts to continue to draw heavily from aquifers even as they complete their plans to conserve those resources. Chandanpurkar and Famiglietti’s research underscores the consequences of such a slow approach.

Arizona pioneered groundwater regulations in 1980, creating what it called active management areas where extraction would be limited and surface waters would be used to replenish aquifers. But it only chose to manage the water in metropolitan areas, leaving vast, unregulated swaths of the state where investors, farmers and industry have all pounced on the availability of free water for profit. In recent years, Saudi investors have pumped rural water to grow feed for cattle exported back to the Arabian Peninsula, and hedge funds are competing to pump and sell water to towns near Phoenix. Meanwhile, four out of the original five active management areas are failing to meet the state’s own targets.

“They like to say, ‘Oh, the management’s doing well,’” Famiglietti said, but looking out over the next century, the trends suggest the aquifers will continue to empty out. “No one talks about that. I don’t think it’s an exaggeration to say it’s an existential issue for cities like Phoenix.”


Both California and Arizona grow significant portions of America’s fruits and vegetables. Something has to give. “If you want to grow food in a place like California,” Famiglietti asked, “do you just bring in water? If we deplete that groundwater, I don’t think there’s enough water to really replace what we’re doing there.” The United States might not have much choice, he added, but to move California’s agriculture production somewhere far away and retire the land.

Chandanpurkar, Famiglietti and the report’s other authors suggest there are ready solutions to the problems they have identified, because unlike so many aspects of the climate crisis, the human decisions that lead to the overuse of water can be speedily corrected. Agriculture, which uses the vast majority of the world’s fresh water, can deploy well-tested technologies like drip irrigation, as Israel has, that can cut water use by as much as 50%. When California farms reduced their take of Colorado River water in 2023 and 2024, the water levels in Lake Mead, the nation’s largest reservoir, jumped by 16 vertical feet as some 390 billion gallons were saved by 2025. Individuals can reduce water waste by changing simple routines: shortening showers or removing lawns. And cities can look to recycle more of the water they use, as San Diego has.

A national policy that establishes rules around water practices but also prioritizes the use of water resources for national security and a collective interest could counterbalance the forces of habit and special interests, Salzberg said. Every country needs such a policy, and if the United States were to lead, it might offer an advantage. But “the U.S. doesn’t have a national water strategy,” he said, referring to a disjointed patchwork of state and court oversight. “We don’t even have a national water institution. We haven’t thought as a country about how we would even protect our own water resources for our own national interests, and we’re a mess.”

Source: Propublica.org | View original article

Pros, Cons, Debate, Arguments, Computer Science, & Technology

Artificial intelligence (AI) is the use of computers and machines to mimic the problem-solving and decision-making capabilities of the human mind. The modern notion of AI largely began when Alan Turing created the “Turing test” to determine whether a computer is capable of “thinking.” AI is also forging new paths in medicine for patient diagnosis, robotic surgery, and drug development. Pro 1: AI can make everyday life more enjoyable and convenient, while improving our health and standard of living. Pro 2: AI makes work easier for students and professionals alike. Pro 3: AI helps marginalized groups by offering accessibility for people with disabilities. Pro 4: Artificial intelligence can improve workplace safety.

Read full article ▼

Now, artificial intelligence is used for a variety of everyday implementations including facial recognition software, online shopping algorithms, search engines, digital assistants like Siri and Alexa, translation services, automated safety functions on cars, cybersecurity, airport body scanning security, poker playing strategy, and fighting disinformation on social media. [13] [58]

More recently, advances in computer storage limits and speeds have opened new avenues for AI research and implementation, aiding scientific research and forging new paths in medicine for patient diagnosis, robotic surgery, and drug development. [1] [10] [11] [12]

AI technology continued to grow at a rapid pace during the 1950s. And, as computers became cheaper in the 1960s and ’70s, AI programs flourished, and U.S. government agencies including the Defense Advanced Research Projects Agency (DARPA) began to fund AI-related research. But computers were still too weak to manage the language tasks researchers asked of them. Another influx of funding in the 1980s and early ’90s furthered the research, including the invention of expert systems. But progress again waned with another drop in government funding. [10]

The “Father of Artificial Intelligence,” John McCarthy, coined the term “artificial intelligence,” defining it as “the science and engineering of making intelligent machines.” He would go on to create the computer programming language LISP (which is still used in AI), host computer chess games against human Russian opponents, and develop the first computer with “hand-eye” capability, all important building blocks for AI. [1] [5] [6] [7]

The modern notion of AI largely began when Alan Turing, who contributed to breaking the Nazis’ Enigma code during World War II, created the “Turing test” to determine if a computer is capable of “thinking.” The value and legitimacy of the test have long been debated. [1] [3] [4]

Mayor notes that the myths about Hephaestus, the Greek god of invention and blacksmithing, included precursors to AI. For example, Hephaestus created the giant bronze man Talos, which had a mysterious life force from the gods called ichor. Hephaestus also created Pandora and her infamous and powerful jar/box, as well as a set of automated servants made of gold that were given the knowledge of the gods. Mayor concludes, “Not one of those myths has a good ending once the artificial beings are sent to Earth. It’s almost as if the myths say that it’s great to have these artificial things up in heaven used by the gods. But once they interact with humans, we get chaos and destruction.” [2]

The idea of AI dates back at least 2,700 years. As Adrienne Mayor, research scholar, folklorist, and science historian at Stanford University, explains: “Our ability to imagine artificial intelligence goes back to ancient times. Long before technological advances made self-moving devices possible, ideas about creating artificial life and robots were explored in ancient myths.” [2]

Artificial intelligence (AI) is the use of “computers and machines to mimic the problem-solving and decision-making capabilities of the human mind,” according to IBM. [1]

Pro 1: AI can make everyday life more enjoyable and convenient, while improving our health and standard of living. Why sit in a traffic jam when a map app can navigate you around the car accident? Why fumble with shopping bags searching for your keys in the dark when a preset location-based command can have your doorway illuminated as you approach your now unlocked door? [23] Why scroll through hundreds of possible TV shows when the streaming app already knows what genres you like? Why forget eggs at the grocery store when a digital assistant can take an inventory of your refrigerator and add them to your grocery list and have them delivered to your home? All of these marvels are assisted by AI technology. [23] AI-enabled fitness apps boomed during the COVID-19 pandemic when gyms were closed, increasing the number of AI options for at-home workouts. Now, you can not only set a daily steps goal with encouragement reminders on your smart watch, but you can ride virtually through the countryside on a Peloton bike from your garage or have a personal trainer on your living room TV. For more specialized fitness, AI wearables can monitor yoga poses or golf and baseball swings. [24][25] AI can even enhance your doctor’s appointments and medical procedures. It can alert medical caregivers to patterns in your health data as compared to a vast library of medical data, while also doing the paperwork tied to medical appointments so doctors have more time to focus on their patients, resulting in more personalized care. AI can even help surgeons be quicker, more accurate, and less invasive in their operations. [26] Smart speakers including Amazon’s Echo can use AI to soothe babies to sleep and monitor their breathing. Using AI, speakers can also detect regular and irregular heartbeats, as well as heart attacks and congestive heart failure. [27][28][29] AI is even beginning to excel at creative writing, producing fiction and poetry that some readers enjoy. 
Some observers predict that TV and film scripts will also soon benefit from the compositional powers of AI.

Pro 2: AI makes work easier for students and professionals alike. Much like the calculator did not signal the end of students’ grasp of mathematics, typing did not eliminate handwriting, and Google did not herald the end of research skills, AI does not signal the end of reading and writing or of education in general. [78][79] Elementary school teacher Shannon Morris explains that AI tools like “ChatGPT can help students by providing real-time answers to their questions, engaging them in personalized conversations, and providing customized content based on their interests. It can also offer personalized learning resources, videos, articles, and interactive activities. This resource can even provide personalized recommendations for studying, help with research, provide context-specific answers, and offer educational games.” She also notes that teachers’ more daunting tasks like grading and making vocabulary lists can be streamlined with AI tools. [79] For adults AI can similarly make work easier and more efficient, rather than signaling the rise of the robot employee. Pesky, time-consuming tasks like scheduling and managing meetings, finding important emails amongst the spam, prioritizing tasks for the day, and creating and posting social media content can be delegated to AI, freeing up time for more important and rewarding work. The technology can also help with brainstorming, understanding difficult concepts, finding errors in code, and learning languages via conversation, making daunting tasks more manageable. [80] AI is a tool that, if used responsibly, can enhance both learning and work for everyone. Carri Spector of the Stanford Graduate School of Education says, “I think of AI literacy as being akin to driver’s ed: We’ve got a powerful tool that can be a great asset, but it can also be dangerous. We want students to learn how to use it responsibly.” [81]

Pro 3: AI helps marginalized groups by offering accessibility for people with disabilities. Artificial intelligence is commonly integrated into smartphones and other household devices. Virtual assistants, including Siri, Alexa, and Cortana, can perform innumerable tasks from making a phone call to navigating the internet. People who are deaf and hearing impaired can access transcripts of voicemail or other audio, for example. [20] Other virtual assistants can transcribe conversations as they happen, allowing for more comprehension and participation by those who have impairments that affect their communication. Using voice commands with virtual assistants can help people with mobility disabilities who may have difficulty navigating small buttons or screens or turning on a lamp. [20] Apps enabled by AI on smartphones and other devices, including VoiceOver and TalkBack, can read messages, describe app icons or images, and give information such as battery levels for visually impaired people. Other apps, such as Voiceitt, can transcribe and standardize the voices of people with speech impediments. [20] Wheelmap provides users with information about wheelchair accessibility, and Evelity offers indoor navigation tools that are customized to the user’s needs, providing audio or text instructions and routes for wheelchair accessibility. [20] Other AI implementations, such as smart thermostats, smart lighting, and smart plugs, can be automated to work on a schedule to aid people with mobility or cognitive disabilities to lead more independent lives. [21] More advanced AI projects can combine with robotics to help physically disabled people. HOOBOX Robotics, for example, uses facial recognition software to allow a wheelchair user to move their wheelchair with facial expressions, making movement easier for seniors and those with ALS or quadriparesis. [22]

Pro 4: Artificial intelligence can improve workplace safety. AI doesn’t get stressed, tired, or sick, three major causes of human accidents in the workplace. AI robots can collaborate with or replace humans for especially dangerous tasks. For example, 50 percent of construction companies that used drones to inspect roofs and other risky tasks saw improvements in safety. [14][15] Artificial intelligence can also help humans be safer. For instance, AI can ensure employees are up to date on training by tracking and automatically scheduling safety or other training. AI can also check and offer corrections for ergonomics to prevent repetitive stress injuries or worse. [16] An AI program called AI-SAFE (Automated Intelligent System for Assuring Safe Working Environments) aims to automate the workplace personal protective equipment (PPE) check, eliminating human errors that could cause accidents in the workplace. As more people wear PPE to prevent the spread of COVID-19 and other viruses, this sort of AI could protect against large-scale outbreaks. [17][18][19] In India, AI was used during the coronavirus pandemic to reopen factories safely by providing camera, cell phone, and smart wearable device-based technology to ensure social distancing, take employee temperatures at regular intervals, and perform contact tracing if anyone tested positive for the virus. [18][19] AI can also perform more sensitive tasks in the workplace such as scanning work emails for improper behavior and types of harassment. [15]

Source: Britannica.com | View original article

AI integration in financial services: a systematic review of trends and regulatory challenges

Research focus in AI finance literature has evolved based on contemporary trends and scientific relevance. Through co-occurrence analysis of author keywords examined temporally, we can trace the thematic evolution of topics. Figure 13 shows the dataset divided into five developmental periods, each marked by distinct research themes and technological innovations. The co-citation analysis reveals that AI research in finance draws upon existing research from five foundational clusters: bankruptcy prediction and machine learning techniques, fintech ecosystems, deep learning and neural networks, credit scoring and asset selection, and stock market predictions using sentiment and textual analysis. The convergence of these themes suggests a growing recognition of technology’s interconnected role in finance. The thematic map identifies five clusters of machine learning, financial technology, big data, text mining and financial management, each containing AI research topics explored in the literature. These clusters may interact depending on the research theme. Callon centrality and density metrics quantify, respectively, the influence and internal cohesion of these research themes.

Read full article ▼
Thematic evolution of keywords

Research focus in AI finance literature has evolved based on contemporary trends and scientific relevance. Through co-occurrence analysis of author keywords examined temporally, we can trace the thematic evolution of topics. Figure 13 shows the dataset divided into five developmental periods, each marked by distinct research themes and technological innovations.
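The co-occurrence analysis described here reduces to counting, per paper, each unordered pair of author keywords; pair counts across the corpus become the edge weights of the keyword network. A minimal sketch with invented keyword lists:

```python
from collections import Counter
from itertools import combinations

# Each entry is one paper's author-keyword list (hypothetical data).
papers = [
    ["machine learning", "credit scoring", "neural networks"],
    ["machine learning", "fraud detection"],
    ["credit scoring", "machine learning"],
]

cooccurrence = Counter()
for keywords in papers:
    # Count each unordered keyword pair once per paper.
    for a, b in combinations(sorted(set(keywords)), 2):
        cooccurrence[(a, b)] += 1

print(cooccurrence[("credit scoring", "machine learning")])  # 2 papers share the pair
```

Tools like biblioshiny perform this counting over thousands of records and then slice the resulting network by publication period to produce evolution maps like Figure 13.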

Fig. 13 Thematic evolution of keywords. Full size image

The foundational period (1990–2005) marked the initial integration of AI technologies into financial applications, with particular emphasis on risk management, fraud detection, and trading strategies. This era witnessed the emergence of fundamental methodologies, including data mining, neural networks, and economic forecasting. While innovative, this period of technological experimentation exposed the limitations of early AI applications in finance. The lack of sufficient regulatory frameworks and control mechanisms for these technologies would later contribute to vulnerabilities exposed during the 2007–2008 financial crisis.

The crisis response period (2006–2009) brought a shift toward more sophisticated analytical approaches. Research expanded to include text mining, eXtensible Business Reporting Language (XBRL), and biometric finance models, to reflect a growing emphasis on data standardization and analytical rigor. The focus turned to enhancing financial system robustness through improved credit assessment, market forecasting, and cybersecurity measures.

The consolidation phase (2010–2015) addressed systemic vulnerabilities while enhancing operational efficiencies. Cloud computing infrastructure, decision support systems, and sentiment analysis capabilities emerged during this time. The integration of big data analytics significantly advanced the field’s analytical capabilities, enabling more comprehensive financial modeling and risk assessment.

The transformation period (2016–2019) heralded the mainstream adoption of advanced AI technologies, particularly deep learning and sophisticated machine learning applications. This era introduced diverse approaches to credit scoring and broader AI integration across financial services. Research focus shifted from individual technological applications to holistic approaches to financial system transformation.

The contemporary period (2020–2024) represents a synthesis of previous developments, with research clustering around three primary domains: financial technology, artificial intelligence, and machine learning. This integration reflects the field’s maturation, where technological innovations are increasingly viewed within the broader context of financial system transformation. The convergence of these themes suggests a growing recognition of technology’s interconnected role in finance.

Knowledge foundations of AI research in finance through co-citation analysis

The semantic associations of co-cited references uncovered through co-citation analysis illustrate a field’s knowledge foundations (Donthu et al., 2021). Figure 14 presents the co-citation map of references cited at least eight times by articles in the review corpus. The co-citation analysis reveals that AI research in finance draws upon existing research from five foundational clusters: bankruptcy prediction and machine learning techniques (red nodes), fintech ecosystems (dark green nodes), deep learning and neural networks (blue nodes), credit scoring and asset selection (lemon green nodes), and stock market predictions using sentiment and textual analysis (violet nodes). Notably, the largest foundational clusters relate to bankruptcy prediction and machine learning techniques (red nodes) and fintech ecosystems (dark green nodes).

Fig. 14: Co-citation of references cited by articles in AI finance research. Each node represents a cited reference. Each color of nodes represents a semantic cluster of references based on thematic similarity. The size of nodes represents the degree of local citations wherein larger nodes reflect greater intensity of local citations. The link between nodes represents co-citations. The size of the link between nodes represents the degree of co-citations wherein thicker links reflect greater co-citation intensity. Full size image

Thematic clusters by Walk Trap

A walk trap algorithm from RStudio’s biblioshiny feature is employed to cluster author keywords in the bibliometric data with a threshold of 250 keyword co-occurrences to identify distinct research categories. The resulting thematic map (Fig. 15) shows these categories in a matrix of niche themes, motor themes, emerging or declining themes and basic themes. Niche themes represent specialized finance topics with relatively low occurrence but significant future impact potential. Motor themes show high co-occurrence in literature and are considered influential in AI research. Emerging or declining topics exhibit fluctuating research interest, either gaining or losing popularity within the research community. Basic themes serve as foundational building blocks for advanced research. These categories may interact based on the given research theme. The thematic map identifies five clusters of AI research: machine learning, financial technology, big data, text mining and financial management, each containing several AI research topics explored in literature.

Table 10 shows the quantitative features of the thematic maps. Themes are categorized and ranked based on multiple metrics. Callon centrality measures theme influence within the broader research network, while Callon density evaluates internal theme cohesion. Rank centrality and density metrics provide relative measures of thematic significance and concentration, respectively. Cluster frequency quantifies thematic prevalence, offering insights into research intensity across domains. These metrics collectively provide information about the structure, centrality and connectivity of the various research streams.

Table 10 Thematic map features. Full size table
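As a rough illustration of how the Table 10 metrics are derived: Callon density sums a cluster's internal co-occurrence links (normalized by cluster size), while Callon centrality sums its links to keywords in other clusters. The sketch below uses one common normalization; scaling constants differ across bibliometric tools, and the edge weights are invented.

```python
# Simplified Callon metrics for one cluster of a keyword co-occurrence
# network. Edges map keyword pairs to co-occurrence counts (invented data).

def callon_metrics(cluster, edges):
    # Internal links: both endpoints inside the cluster.
    internal = sum(w for (a, b), w in edges.items()
                   if a in cluster and b in cluster)
    # External links: exactly one endpoint inside the cluster.
    external = sum(w for (a, b), w in edges.items()
                   if (a in cluster) != (b in cluster))
    density = 100 * internal / len(cluster)   # internal cohesion
    centrality = 10 * external                # influence on other clusters
    return centrality, density

edges = {
    ("machine learning", "credit scoring"): 4,
    ("machine learning", "fintech"): 2,
    ("credit scoring", "big data"): 1,
}
cluster = {"machine learning", "credit scoring"}
print(callon_metrics(cluster, edges))  # (30, 200.0)
```

Plotting each cluster's (centrality, density) pair yields the four-quadrant map of Fig. 15: high/high clusters are motor themes, low/high are niche themes, and so on.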

Machine learning stands out as both a basic and motor theme, encompassing essential applications such as neural networks, deep learning, data mining, and algorithmic trading. This dual nature reflects its importance as both a technological foundation and an innovative catalyst for predictive analytics and automation. The strong interconnections between credit risk assessment, fraud detection, and behavioral finance applications suggest a well-established research area that continues to evolve.

Financial technology represents the most influential research cluster today, showing particularly robust centrality measurements. This domain—which includes artificial intelligence, digital finance, and financial inclusion—highlights the increasing focus on democratizing financial services through technology. These themes serve as crucial bridges connecting traditional financial research with innovative technological applications that expand access and enhance financial literacy.

The big data and analytics cluster shows unique patterns over time, with research intensity that rises and falls, indicating an area still finding its footing. While essential to AI applications in finance, shifting density metrics point to changes in research focus, possibly reflecting technological maturation and evolving practical needs. Crowdfunding exhibits similar patterns, with research interest tracking alongside market developments and regulatory shifts.

Text mining and natural language processing function as dual-nature themes—simultaneously foundational and emerging—with growing centrality indicators. Their application to unstructured financial data represents a significant frontier in AI finance research. The dual nature also suggests their importance for future innovations, particularly in analyzing market sentiment and processing financial communications.

The financial management cluster, despite lower occurrence frequencies, shows high specialization, particularly regarding cloud computing and security applications. This niche positioning, paired with strong internal density measurements, indicates a focused research area addressing critical operational and security challenges in AI-enabled financial systems. The cluster’s focus on infrastructure and security concerns also underscores its essential role in ensuring the practical viability of AI financial innovations.

Thematic clusters of AI finance research through bibliographic coupling

Using a minimum of 100 citations, six main research themes are drawn from the present dataset through bibliographic coupling. These themes classify AI finance research over the studied period. Table 11 presents these clusters and the top articles that relate to them.

Table 11 Research clusters from bibliographic coupling. Full size table

Theme 1: Digitalization, Servitization, and Financial Transformation

AI and machine learning technologies have triggered the rise of concepts like digital servitization and financial transformation that are reshaping finance today. Digitalization involves integrating digital technologies into business processes and changing service delivery methods while servitization represents a transition from product-focused models to service-based offerings enabled by digital tools (Kohtamäki et al., 2020). These twin forces are actively redefining financial models, strategies, and performance metrics across the industry.

Research increasingly shows the critical connection between digitalization, servitization and financial performance. Examining the “digitalization paradox”, Kohtamäki et al. (2020) discovered a non-linear, U-shaped relationship linking these elements. The study found that when firms combine higher digital investment with service-oriented strategies, financial outcomes improve significantly. This finding suggests firms should not pursue digital transformation in isolation—this strategy works best when integrated with service-focused business models. Abou-Foul et al. (2021) reinforced this outcome, noting that IoT-based digitalization specifically enhances market offerings and drives better financial results. Both research streams converge on a key insight: digital technologies deliver optimal financial potential only when coupled with flexible service models.

Big Data Analytics has emerged as another essential tool for financial strategy development, enhancing decision-making and risk management (Kushawa et al., 2021). AI’s growing integration into financial services—from predictive analytics to automated customer service—is fundamentally transforming traditional operations. The insurance sector particularly benefits from Robotic Process Automation (RPA), robotics, and AI, which automate repetitive tasks such as data entry and document processing to accelerate operational digitalization. Beyond improving data management, AI enables personalized insurance products and maintains customer engagement through robotics and chatbots, effectively shifting focus from traditional offerings toward service models. This allows insurance firms to cut operational costs, improve efficiency and allocate resources more strategically while enhancing risk management and optimizing portfolios. Some digital insurance advisors have gone further by incorporating gamification (Grgurevic & Stroughair, 2018) and human-centric AI approaches (Pisoni & Díaz-Rodríguez, 2023) to attract clients. Palmié et al. (2020) argue that innovations like robo-advisors, predictive analytics, and fraud detection systems are fundamentally reshaping financial services through personalization, automation, and decision-making efficiency. Yet, despite AI’s deployment in finance, its broader implications for management skills remain surprisingly unexplored (Gupta & George, 2016).

The disruption from AI extends beyond individual firms to the entire FinTech ecosystem. Traditional institutions now face the challenge of adapting to ecosystem-driven transformation that fosters innovation across financial services (Christensen et al., 2018). Fuller et al. (2019) highlighted that FinTech innovations do not develop in isolation but thrive through collaborative ecosystems. This insight points to an important new research direction: understanding the mechanisms that enable commercialization and scaling of AI-based financial technologies within these ecosystems. At the strategic level, Alkaraan et al. (2022) investigated how companies communicate their Industry 4.0 transformation in annual reports and its impact on financial performance. Their work highlights that strategic investment decision-making practices aligned with technologies like AI, cloud computing, and big data analytics have become increasingly important drivers of financial performance. Interestingly, ESG practices appear to moderate the relationship between digital transformation and financial success, suggesting sustainability is becoming intrinsically linked to corporate financial strategies (Shiyyab et al., 2023; Lim, 2024).

FinTech’s sustainability contribution is further illuminated by Pizzi et al. (2021) through their ReSOLVE conceptual model. They propose that FinTech solutions—including secure payment systems and alternative financing options like tokenized crowdfunding—help SMEs transition toward sustainable business models by integrating circular economy practices. This complements the multi-attribute decision-making framework by Soni et al. (2022) for selecting optimal Industry 4.0 technologies to enhance sustainable supply chain finance for SMEs. Their framework prioritizes IoT, cloud computing, big data, and analytics as critical tools for improving SME competitiveness and working capital management.

Luo et al. (2022) further strengthen the case for AI’s role in improving financial performance and risk management within supply chains. Their testing of a custom optimization algorithm combining support vector machines (SVM) with AdaBoost demonstrated significant improvements in credit risk assessment and financial performance. This adds weight to the argument that AI-driven models are becoming essential tools for financial transformation, especially in supply chain finance optimization.
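To make the boosting half of such a pipeline concrete, the sketch below implements AdaBoost from scratch over one-dimensional decision stumps. This is illustrative only: Luo et al. (2022) pair AdaBoost with SVM base learners, whereas stumps are used here so the example stays dependency-free; the data are invented.

```python
import math

def stump_predict(x, threshold, polarity):
    # A decision stump: predict +1/-1 depending on which side of the threshold x falls.
    return polarity if x >= threshold else -polarity

def train_adaboost(xs, ys, n_rounds=10):
    """Classic AdaBoost: repeatedly fit the best weighted stump and re-weight errors."""
    n = len(xs)
    weights = [1.0 / n] * n
    learners = []  # list of (threshold, polarity, alpha)
    for _ in range(n_rounds):
        best = None  # (weighted error, threshold, polarity)
        for threshold in xs:
            for polarity in (1, -1):
                err = sum(w for x, y, w in zip(xs, ys, weights)
                          if stump_predict(x, threshold, polarity) != y)
                if best is None or err < best[0]:
                    best = (err, threshold, polarity)
        err, threshold, polarity = best
        err = max(err, 1e-10)                      # avoid log(0) on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)    # learner weight
        # Re-weight the sample: misclassified points gain weight.
        weights = [w * math.exp(-alpha * y * stump_predict(x, threshold, polarity))
                   for x, y, w in zip(xs, ys, weights)]
        total = sum(weights)
        weights = [w / total for w in weights]
        learners.append((threshold, polarity, alpha))
    return learners

def predict(learners, x):
    # Weighted vote of all stumps.
    score = sum(alpha * stump_predict(x, t, p) for t, p, alpha in learners)
    return 1 if score >= 0 else -1

# Toy "default risk" data: small feature values are healthy (-1), large are risky (+1).
xs = [0.1, 0.2, 0.3, 0.8, 0.9]
ys = [-1, -1, -1, 1, 1]
learners = train_adaboost(xs, ys, n_rounds=3)
```

Swapping the stump for an SVM (as in the paper) only changes the weak learner; the re-weighting loop is the same.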

Despite these promising advances in AI-based financial transformation, several research gaps remain. More attention is needed on integrating AI with human oversight in financial decision-making, particularly in balancing automation with ethical standards and skill development. More research should explore ecosystem innovation mechanics, especially how FinTech ecosystems sustain and scale disruptive technologies. And while financial performance has received considerable attention, the long-term sustainability and resilience of digital servitization strategies in finance remains understudied. Future research could examine how these strategies adapt to changing market conditions and evolving regulatory environments.

Theme 2: AI-Driven Financial Forecasting and Text Mining

The financial sector has witnessed a remarkable transformation in textual data analysis through AI-driven forecasting methods. Text mining has become a critical approach for extracting meaningful insights from the vast array of unstructured data found in financial reports, press releases, news coverage, and social media (Shirazi & Mohammadi, 2019). Market efficiency depends on incorporating all available information—a task that has become increasingly challenging given today’s information deluge. The sheer volume and complexity of financial textual data now exceed traditional analytical capabilities, prompting the use of natural language processing (NLP) and machine learning techniques that enable institutions to process data at previously impossible scales.

Natural language-based financial forecasting (NLFF) stands out as a pivotal development, employing semantic modeling and sentiment analysis to predict market movements. Xing et al. (2018) note that these methods are evaluated primarily on forecasting accuracy, correspondence between predictions and outcomes, and performance in trading simulations. Yet challenges persist, particularly regarding volatility in trading strategies and market anomalies like the “20-minute theory” (LeBaron et al., 1999) and the “Monday effect” (Lakonishok & Maberly, 1990). These issues highlight the need for more sophisticated models capturing nuanced market behaviors. Xing et al. (2018) advocate developing specialized resources, online predictive frameworks, and comprehensive evaluation methods to enhance NLFF applications. Interestingly, Day and Lee (2016) observed that sentiment derived from different financial sources produced varying impacts on investment outcomes.

The research community has responded with innovative approaches. Malo et al. (2014) developed the Linearized Phrase-Structure model to strengthen financial lexicons by identifying semantic orientations within specialized financial language. Oliveira et al. (2016) took a different approach, using microblogging data to construct stock market-specific lexicons for sentiment analysis. Chan and Chong (2017) further expanded these capabilities by creating a sentiment analysis system that captures both individual terms and contextual phrases. Das and Chen (2007) emphasized the importance of classification algorithms in enhancing sentiment analysis precision and reducing misidentifications. In a broader investigation, Atkins et al. (2018) determined that financial news proves more valuable for predicting market volatility than direct price movements. These advances highlight text mining’s crucial role in capturing investment sentiment—an increasingly vital factor in market prediction.
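The mechanics of the lexicon-based scoring these papers build on can be shown in a few lines. The word lists and headlines below are invented for illustration; real financial lexicons such as those of Malo et al. or Oliveira et al. are far larger and typically weighted.

```python
# Toy finance-oriented sentiment lexicons (invented; for illustration only).
POSITIVE = {"beat", "growth", "surge", "upgrade", "profit", "record"}
NEGATIVE = {"miss", "loss", "downgrade", "fraud", "decline", "default"}

def sentiment_score(text):
    """Return (#positive - #negative) / #tokens, a score in [-1, 1]."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    if not tokens:
        return 0.0
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / len(tokens)

headlines = [
    "Quarterly profit beat forecasts, shares surge",
    "Analyst downgrade follows unexpected loss",
]
scores = [sentiment_score(h) for h in headlines]
```

In an NLFF pipeline, scores like these would be aggregated over time and fed to a forecasting model alongside price data.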

Various NLP architectures have also been applied to financial fraud detection, including hierarchical attention networks, GPT, artificial neural networks, and XGBoost (Craja et al., 2020; Hajek & Henriques, 2017). Kumar and Ravi (2016) revealed in their review that text mining in finance primarily addresses foreign exchange rate prediction, stock market forecasting, customer relationship management, and cyber fraud detection. Long short-term memory networks show particular promise for stock price prediction when combining sentiment data with market indicators (Borovkova & Tsiamas). By contrast, convolutional neural networks demonstrate greater effectiveness when analyzing order book data from foreign exchanges (Tsantekidis et al., 2017; Gunduz et al., 2017).

Social media analytics combined with text mining also presents fresh opportunities for financial decision-making. Schniederjans et al. (2013) demonstrated how mining social platform content could enhance impression management strategies, potentially improving financial outcomes. This finding suggests properly analyzed social sentiment offers meaningful insights into investor behavior, further establishing NLP’s importance in modern finance.

Beyond market predictions, text mining and data analytics address customer retention challenges across financial services, especially in retirement markets and banking (Shirazi & Mohammadi, 2019). The ability to anticipate customer decisions through textual interactions gives institutions powerful tools for relationship management. NLP techniques also help synthesize finance literature, identifying research trends and highlighting future research directions (Goodell et al., 2021; Cockcroft & Russell, 2018; Raman et al., 2024; Horobet et al., 2024).

Future work should explore hybrid approaches combining multiple machine-learning methodologies to strengthen financial forecasting. Integrating sentiment analysis with deep learning models like GPT or HAN could improve prediction accuracy by capturing both short-term sentiment shifts and underlying trends. Real-time predictive modeling remains notably underexplored; researchers might investigate how real-time text mining from news outlets and social platforms could enhance trading algorithms, creating more responsive approaches to market fluctuations.

As this field evolves, developing specialized lexicons and models will be crucial for advancing forecasting techniques. With unstructured financial data growing exponentially, the need for sophisticated NLP frameworks capable of processing this information at scale becomes increasingly important. These developments will not only enhance forecasting accuracy but also deepen our understanding of market dynamics, investor psychology, and fraud patterns, opening numerous avenues for further investigation.

Theme 3: Machine Learning in Credit Scoring and Risk Management

Credit scoring and risk management practices, through machine learning applications, have evolved to meet the growing demands for more precise default prediction and risk assessment. While conventional credit systems typically focus on default probability (PD), newer approaches focus on profitability and risk evaluation capabilities. Serrano-Cinca and Gutiérrez-Nieto (2016) developed a profit scoring Decision Support System (DSS) for peer-to-peer (P2P) lending markets, arguing that predicting internal rates of return (IRR) is essential for sustainable lending. Their findings suggest that traditional logistic regression methods often inadequately capture profitability metrics, whereas multivariate regression and CHAID decision trees demonstrate significantly better performance for these specific objectives.
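The IRR objective at the heart of such profit scoring can be made concrete with a short sketch: the IRR is the discount rate at which a loan's net present value is zero. The cash-flow figures are invented, and the bisection solver assumes a single sign change of the NPV over the search bracket.

```python
def npv(rate, cashflows):
    """Net present value of a cash-flow stream, one flow per period."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0):
    """Find the rate where NPV crosses zero, by bisection over [lo, hi]."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid  # sign change lies in [lo, mid]
        else:
            lo = mid  # sign change lies in [mid, hi]
    return (lo + hi) / 2

# An invented P2P loan: lend 1,000 now, repaid in three installments of 400.
loan = [-1000, 400, 400, 400]
```

A profit scoring system would rank loans by predicted IRR rather than by predicted default probability alone.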

The credit industry faces mounting operational pressures and regulatory requirements that diminish reliance on conventional logistic regression in favour of ensemble and multi-classifier systems. Ma et al. (2018) documented the LightGBM algorithm’s effectiveness for default risk prediction in P2P lending, identifying key predictors across loan characteristics, financial indicators, credit history, and personal information. Fintech lenders increasingly incorporate alternative data sources to expand credit accessibility. By analyzing non-traditional information—including online purchasing patterns, insurance claims history, and utility payment records—these platforms enhance risk assessments while extending credit to previously underserved market segments (Kshetri, 2021; Jagtiani & Lemieux, 2019). This approach improves default prediction while potentially reducing credit costs through more precise risk evaluation (Jagtiani & Lemieux, 2019). In a unique application, Netzer et al. (2019) demonstrated how text mining combined with machine learning could predict borrower default probabilities by analyzing written loan applications, revealing unexpected correlations between specific writing patterns and subsequent default behavior.

Practical adoption of machine learning models in credit scoring remains limited, partly due to inconsistent modeling procedures across the industry. Montevechi et al. (2024) surveyed ML techniques for consumer credit risk assessment—including decision trees, support vector machines (SVM), and various neural network architectures—yet found many remain underutilized in actual practice. Addressing this gap, Ala’raj and Abbod (2016) proposed a classifier consensus system that combines outputs from multiple classifiers to optimize credit scoring outcomes, reflecting growing recognition that collaborative approaches may yield superior results.

Model explainability also represents a limitation in adopting machine learning in credit decisions. Industry practitioners typically employ tools like Shapley Additive Explanations (SHAP) to enhance model interpretability—essential for regulatory compliance and stakeholder trust (Al Shiam et al., 2024). In day-to-day applications, decision trees and random forests have proven particularly effective for short-term credit risk evaluation (Butaru et al., 2016; Montevechi et al., 2024). Default risk factors vary considerably across financial institutions, requiring customized risk management strategies rather than one-size-fits-all approaches. Big data analytics integration can substantially improve systemic risk measurement for credit assets, yielding insights beyond traditional modeling capabilities (Butaru et al., 2016). Other valuable machine learning applications include decision support systems for bank ratings and early-warning mechanisms alerting institutions to emerging risks (Doumpos & Zopounidis, 2010).
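To make the idea behind SHAP-style explanations concrete, the sketch below computes exact Shapley values for a tiny hand-written linear scoring model with three features. Everything here is invented (the coefficients, the baseline, the applicant); production SHAP libraries approximate this computation for large models, since exact enumeration grows exponentially in the number of features.

```python
from itertools import combinations
from math import factorial

FEATURES = ["income", "debt_ratio", "late_payments"]

def model(present, x, baseline):
    # Features absent from the coalition are replaced by their baseline values.
    filled = {f: (x[f] if f in present else baseline[f]) for f in FEATURES}
    # An invented linear credit score: income helps, debt and late payments hurt.
    return (0.5 * filled["income"]
            - 2.0 * filled["debt_ratio"]
            - 1.0 * filled["late_payments"])

def shapley(x, baseline):
    """Exact Shapley value of each feature: its weighted average marginal
    contribution over all coalitions of the other features."""
    n = len(FEATURES)
    values = {}
    for f in FEATURES:
        others = [g for g in FEATURES if g != f]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = model(set(coalition) | {f}, x, baseline)
                without_f = model(set(coalition), x, baseline)
                total += weight * (with_f - without_f)
        values[f] = total
    return values

# Invented applicant vs. an invented population baseline.
x = {"income": 60, "debt_ratio": 0.4, "late_payments": 2}
baseline = {"income": 50, "debt_ratio": 0.3, "late_payments": 0}
```

For a linear model the Shapley value of each feature reduces to coefficient × (value − baseline), which makes the output easy to sanity-check by hand.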

Future studies should prioritize establishing industry-wide standards for machine learning models in credit scoring to enhance practical implementation and cross-institutional comparability, particularly regarding alternative data incorporation. Strengthening collaboration between academic researchers and industry practitioners could help bridge persistent gaps between theoretical advances and practical implementations. The integration of machine learning with comprehensive data analytics in credit scoring will prove increasingly vital for refining risk assessment methodologies and promoting responsible lending practices across the financial ecosystem.

Theme 4: Digital Finance, Inclusion and Sustainability

Digital financial inclusion involves the deployment of technological platforms to extend financial services to underserved populations, addressing barriers of physical access and eligibility requirements. The full potential of these platforms continues to be constrained by limited connectivity, inadequate financial literacy, and insufficient social awareness. Success also requires localizing financial offerings to match consumer realities (Aziz & Naima, 2021). Financial institutions continue to leverage AI and blockchain technologies to gather client information and secure transactions, improving service delivery in traditionally underserved communities (Inairat & Al-kassem, 2022; Demirkan et al., 2020).

The fintech revolution has enabled banks and financial startups to embrace more sustainable operational models, strengthen risk management, and develop genuinely customer-focused services. Blockchain implementation, beyond merely securing online transactions, enhances transparency throughout financial processes, thereby widening inclusion opportunities (Demirkan et al., 2020). These developments nonetheless bring increased financial intermediation, regulatory challenges, and heightened privacy concerns—issues that demand stronger regulatory frameworks to mitigate emerging risks (Wang et al., 2020).

Beyond expanding access, digital finance demonstrates capacity to support broader economic sustainability. Razzaq and Yang (2023) utilized web-crawler technology alongside a super-efficiency SBM model to evaluate inclusive digital finance, finding that technological transformation promotes environmentally conscious growth by stimulating enterprise innovation and tackling energy poverty. Their research shows that fintech advances sustainable practices and positions digital tools as vital catalysts for both financial access and environmental stewardship.

Despite promising advances, legitimate concerns have emerged regarding AI’s impact on achieving the 2030 Sustainable Development Goals, particularly those related to financial inclusion. Truby (2020) contends that algorithmic financial decision-making can potentially undermine the SDG agenda through inherent biases, ethical governance gaps, and transparency deficiencies. These observations suggest the necessity for anticipatory regulatory requirements to ensure AI applications foster inclusive, equitable financial ecosystems. Responsible AI deployment, supported by robust ethical frameworks, remains essential for meaningful financial inclusion.

The use of AI in digital finance also presents both opportunities and challenges. Yuan et al. (2021) note that elevated financial risk can impede adoption of AI and complementary technologies in financial sectors, especially across G7 nations where risk minimization directly influences technological innovation and sustainability efforts. Thoughtful policy frameworks must balance innovation against prudent risk management to ensure digital finance continues supporting both economic development and environmental goals.

Blockchain technology maintains its pivotal role in cross-border payment systems and customer needs identification, establishing foundations for more inclusive financial markets (Zhang et al., 2020). Fintech firms, employing AI applications, also work to ensure traditionally marginalized groups—particularly low-income populations—can actively participate in financial markets (Fazal et al., 2024; Mhlanga, 2020). Mhlanga (2020) specifically notes AI’s contribution to digital inclusion through enhanced risk detection capabilities, fraud prevention mechanisms, and strengthened cybersecurity measures. AI-powered chatbots and support systems further improve accessibility by providing responsive customer assistance.

Mobile money transfer innovations have also significantly reduced the exclusion of high-risk populations from financial services, enabling forms of market participation that were previously unavailable to them (Park & Mercado, 2018). Blockchain implementations and cryptocurrencies, including Bitcoin, have helped mitigate currency-related risks (Paul et al., 2019). Online payment platforms paired with social networks address information asymmetry problems in credit markets, effectively resolving credit rationing challenges that historically limited access for underserved communities (Wang et al., 2020).

Arner et al. (2020) argue that AI and fintech are key drivers of financial inclusion due to two main factors: first, the implementation of electronic Know Your Customer (e-KYC) protocols simplifies account opening processes and helps marginalized groups build digital identities; and second, interoperable electronic payment systems facilitate smoother financial transactions. These innovations not only make it easier for marginalized groups to access financial services but also reduce operational barriers for financial institutions, creating a more inclusive and efficient financial ecosystem.

Prioritizing the development of integrative frameworks that harness AI, blockchain, and fintech innovations while addressing ethical governance concerns, transparency requirements, and regulatory compliance challenges is necessary in this sphere. Balancing innovation against consumer protection will remain crucial in ensuring digital finance advances both inclusion and sustainability objectives.

Theme 5: Financial Fraud Detection through Data Mining and AI

Financial fraud constitutes a grave offense with far-reaching consequences for industries, economies, and customers alike. The explosion of big data has rendered traditional detection methods increasingly obsolete, driving adoption of artificial intelligence and machine learning approaches. AI provides adaptive solutions for identifying fraudulent activities through real-time monitoring and predictive capabilities—crucial advances in an environment where fraudsters constantly refine their techniques (West & Bhattacharya, 2016). This unending evolution of fraudulent practices requires detection systems to remain flexible and responsive, spurring preference for advanced computational methods like neural networks and ensemble techniques (Kamuangu, 2024).

AI-powered fraud detection faces several obstacles: classification challenges, rapidly evolving fraud strategies, data privacy issues, and substantial costs tied to developing complex analytical frameworks. Financial fraud falls into two broad categories: external fraud, which includes credit card fraud, insurance scams, online payment fraud, and loan fraud; and internal fraud, which involves financial statement manipulation, money laundering operations, and various forms of corruption (Ali et al., 2022; Hamza et al., 2023). Researchers have applied AI and ML models to address both categories. Analysis of linguistic patterns in corporate disclosures, particularly annual SEC filings, has emerged as a valuable input for detecting financial reporting irregularities (Glancy & Yadav, 2011).

Among effective detection models, supervised learning techniques—random forest algorithms, logistic regression models, decision trees, support vector machines, and Naïve Bayes classifiers—have gained popularity (Ravisankar et al., 2011; Hernandez Aros et al., 2024). Ensemble methods, particularly gradient-boosting machines, demonstrate exceptional effectiveness, with Kamuangu (2024) observing that gradient-boosting techniques and autoencoders (in deep learning) represent leading models in current literature. Hernandez Aros et al. (2024) found random forest remains the predominant model in supervised learning applications, while autoencoders lead unsupervised techniques. These models prove instrumental in combating fraud across sectors like credit card and insurance fraud, where detecting fraudulent activities requires both unsupervised and semi-supervised models (Hilal et al., 2022). Generative adversarial networks, autoencoders, convolutional neural networks, and long short-term memory networks all show considerable promise in these domains.
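As a toy illustration of the supervised side, the sketch below trains a miniature Naïve Bayes classifier over categorical transaction features. The features, data, and labels are invented; real deployments use far richer feature sets and the library implementations cited above.

```python
from collections import defaultdict
import math

def train_nb(rows, labels, smoothing=1.0):
    """rows: list of dicts feature -> value; labels: class per row."""
    class_counts = defaultdict(float)
    feat_counts = defaultdict(float)  # (label, feature, value) -> count
    for row, label in zip(rows, labels):
        class_counts[label] += 1
        for f, v in row.items():
            feat_counts[(label, f, v)] += 1
    return class_counts, feat_counts, smoothing

def predict_nb(model, row):
    """Pick the class maximizing log P(class) + sum log P(feature | class)."""
    class_counts, feat_counts, s = model
    total = sum(class_counts.values())
    best, best_lp = None, None
    for label, count in class_counts.items():
        lp = math.log(count / total)
        for f, v in row.items():
            # Laplace smoothing so unseen feature values never zero out a class.
            lp += math.log((feat_counts[(label, f, v)] + s) / (count + 2 * s))
        if best_lp is None or lp > best_lp:
            best, best_lp = label, lp
    return best

# Invented training data: where the card was used and how large the charge was.
rows = [
    {"country": "home", "amount": "low"},
    {"country": "home", "amount": "low"},
    {"country": "home", "amount": "high"},
    {"country": "abroad", "amount": "high"},
    {"country": "abroad", "amount": "high"},
]
labels = ["legit", "legit", "legit", "fraud", "fraud"]
model = train_nb(rows, labels)
```

The same interface extends to more classes and features; in practice, class imbalance (fraud is rare) dominates the modeling effort, which motivates the oversampling techniques discussed next in the literature.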

A possible future study in this area is the integration of generative models like GANs with temporal modeling capabilities of CNNs and LSTMs. Such hybrid systems could leverage generative models’ oversampling abilities to address imbalances in fraud detection datasets while capturing temporal patterns in fraudulent behaviors (Hilal et al., 2022). Additionally, combining supervised and unsupervised learning techniques alongside ensemble methods might enhance the detection of evolving fraud strategies, particularly within high-risk domains such as online transactions and digital payment systems.
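A lightweight stand-in for the oversampling role such generative models play is SMOTE-style interpolation between minority-class samples: far simpler than a GAN, but it shows the rebalancing idea. The data here are invented two-dimensional points standing in for fraud-case feature vectors.

```python
import random

def interpolate(a, b, t):
    # A point a fraction t of the way from a to b, componentwise.
    return [ai + t * (bi - ai) for ai, bi in zip(a, b)]

def oversample(minority, n_new, k=2, seed=0):
    """Generate n_new synthetic minority points, each placed randomly on the
    segment between a minority sample and one of its k nearest neighbours."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        # k nearest neighbours by squared Euclidean distance (excluding a itself).
        neighbours = sorted((p for p in minority if p is not a),
                            key=lambda p: sum((x - y) ** 2 for x, y in zip(a, p)))[:k]
        b = rng.choice(neighbours)
        synthetic.append(interpolate(a, b, rng.random()))
    return synthetic

# Invented minority-class (fraud) feature vectors.
minority = [[0.0, 0.0], [1.0, 1.0], [0.5, 0.4]]
synthetic = oversample(minority, 5)
```

Because each synthetic point lies on a segment between real minority samples, the augmented set stays inside the minority class's convex hull—one reason GAN-based oversampling, which can model more complex distributions, is an active research direction.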

Theme 6: Other AI Applications in Finance

Beyond the core applications discussed earlier, AI is making significant inroads in several other financial domains. Financial crisis prediction has seen innovative approaches, with Uthayakumar et al. (2020) developing an ant colony optimization model that surpasses conventional techniques like genetic algorithms. Similarly, Pan (2012) introduced the Fruit Fly Optimization Algorithm, which effectively predicts financial distress using Taiwan stock exchange data.

The wealth management landscape has been transformed by robo-advisors, which Belanche et al. (2019) note are changing how investment services are delivered. Cryptocurrencies present another frontier, with research by Huynh et al. (2020) and Maiti et al. (2023) suggesting Bitcoin and other digital currencies offer portfolio diversification benefits, though Le et al. (2021) present more cautious perspectives on their effectiveness.

Blockchain technology, cryptocurrencies, and tokenization are also creating new funding avenues for entrepreneurs while enhancing transaction security and streamlining financial processes (Ahluwalia et al., 2020).

Source: Nature.com

Source: https://www.nytimes.com/2025/08/04/science/04hs-science-papers-fraud-research-paper-mills.html
