Agentic AI's greatest potential benefit? Changing how a health system functions
Agentic AI's greatest potential benefit? Changing how a health system functions

Agentic AI’s greatest potential benefit? Changing how a health system functions

How did your country report this? Share your view in the comments.

Diverging Reports Breakdown

State of Generative AI in the Enterprise 2024

Most organizations are pursuing 20 or fewer experiments or proofs of concept (POCs) Two-thirds said that 30% or fewer of their experiments will be fully scaled in the next three to six months. The most advanced initiatives target IT (28%), operations (11%), marketing (10%), and customer service (8%) Beyond IT, organizations overwhelmingly focus their deepest deployments on functions uniquely critical to success in their industries. Learn how a bank is using GenAI to triage millions of cybersecurity alerts into fewer than 10 real threats per day. See how a tech company is developing GenAI tools to accelerate sales with an eye toward commercialization in the future.Explore insights from our Generative AI report.

Read full article ▼
Explore insights from our Generative AI report In the final installment of the series, we asked C-suite executives several overarching questions about Generative AI investment, scaling, and value realization across a variety of industries.

What is the state of GenAI experimentation? Excitement remains high but has evolved into a feeling of positive pragmatism. More organizations are dedicating a greater portion of their budgets to GenAI than earlier in the year, but they’re focusing their efforts and taking their time: Most are pursuing 20 or fewer experiments or proofs of concept (POCs) and over two-thirds said that 30% or fewer of their experiments will be fully scaled in the next three to six months. % of experiments/POCs fully scaled in three to six months % of

experiments/

POCs 80% 2% 70% 2% 60% 5% 50% 9% 40% 13% 30% 26% 20% 27% 10% 16% 0% 1% % of organizations

Are some use cases showing more promise? The most advanced initiatives target IT (28%), operations (11%), marketing (10%), and customer service (8%)—but this spread varies by industry in a way that suggests a shift toward competitive differentiation: Beyond IT, organizations overwhelmingly focus their deepest deployments on functions uniquely critical to success in their industries. Top functions of most-advanced (scaled) GenAI initiatives Select industry: Consumer Energy, resources & industrial Financial services Life sciences & health care Tech, media & telecom Government 20% IT 20% Marketing 12% Customer service 17% IT 23% Operations 11% Strategy 13% Finance 14% Cybersecurity 21% IT 11% Operations 21% R&D 23% IT 12% Cybersecurity 17% Product development 34% IT 96% IT 3% Operations Real-world results: Learn how a global consumer company is increasing productivity and sales while reducing media costs through GenAI. View case study

Are advanced GenAI initiatives meeting expectations? Almost all organizations report measurable ROI with GenAI in their most advanced initiatives, and 20% report ROI in excess of 30%. The vast majority (74%) say their most advanced initiative is meeting or exceeding ROI expectations. Cybersecurity initiatives are far more likely to exceed expectations, with 44% delivering ROI above expectations. % of leaders reporting on ROI expectations 2% 31% 43% 24% Below Meeting Exceeding Unsure Real-world results: Learn how a bank is using GenAI to triage millions of cybersecurity alerts into fewer than 10 real threats per day. View case study

How long will it take to resolve challenges? Organizations have learned that Generative AI scaling and value creation is hard work. The majority acknowledge they need at least a year to resolve ROI and adoption challenges such as governance, training, talent, trust, and data issues—and they’re willing to put in the time. 55-70% need 12+ months to resolve adoption challenges 70% need 12+ months to resolve ROI challenges 76% say they’ll wait at least 12 months or more before reducing investment if value targets weren’t being met Real-world results: Learn how a tech company is developing GenAI tools to accelerate sales with an eye toward commercialization in the future. View case study

Source: Deloitte.com | View original article

12 top ways artificial intelligence will impact healthcare

Healthcare is a data-rich industry ripe for artificial intelligence deployment. Hospitals, health systems and other provider-based organizations have integrated AI into their daily workflows. Innovations in AI are making waves in the healthcare industry, attempting to solve some of healthcare’s most significant pain points. This list details, in alphabetical order, the top 12 ways AI has and will continue to impact healthcare. The list also includes: precision medicine, telehealth, health IT, digital transformation and predictive analytics. The top 12 reasons AI is here to stay in healthcare, and its reach will likely increase. It can help overcome major drug discovery and development barriers. It could also help tackle clinician burnout, most of which aim to automate aspects of the most complex aspects of care. It has the potential to streamline clinical workflows while bolstering cost-effective care delivery. It’s also helpful in the data-gathering systems for complex drug manufacturing, and models to identify novel drug targets. It will continue revolutionizing the pharmaceutical industry.

Read full article ▼
Healthcare is a data-rich industry ripe for artificial intelligence deployment. The impact of AI in healthcare has been positive.

Hospitals, health systems and other provider-based organizations have integrated AI into their daily workflows to improve patient care, reduce costs and enhance efficiency. Providers, payers and other stakeholders have also realized many other advantages, including more personalized treatment plans, improved communications across stakeholders and digital transformation across the enterprise.

AI is here to stay in healthcare, and its reach will likely increase. Innovations in AI, such as generative AI (GenAI), agentic AI and intelligent automation, are making waves in the healthcare industry, attempting to solve some of healthcare’s most significant pain points.

This list details, in alphabetical order, the top 12 ways AI has and will continue to impact healthcare.

1. Clinical decision support At its core, a clinical decision support system (CDSS) is a critical tool designed to improve care quality and patient safety. But technologies such as AI and machine learning are transforming clinical decision-making. In the early days of CDSS tools, many were standalone offerings that were not well integrated into clinical workflows. Today, many CDSSes are integrated into electronic health records (EHRs) to help improve deployment and gain more value from the use of these tools at the bedside. This article is part of AI in healthcare: A guide to improving patient care with AI Which also includes:

Health IT (health information technology)

Health IT (health information technology) precision medicine (PM)

precision medicine (PM) telehealth (telemedicine) AI takes this one step further by enabling providers to take advantage of information within the EHR and data pulled from outside of it. Because AI tools can process larger amounts of data more efficiently than other tools while enabling stakeholders to pull fine-grained insights, they have significant potential to transform clinical decision-making. Using AI’s advanced pattern recognition capabilities, CDSS tools can incorporate risk stratification and predictive analytics to help clinicians make more informed, personalized treatment recommendations in high-value use cases, such as chronic disease management.

2. Drug discovery and development Drug discovery, development and manufacturing have created new treatment options for various health conditions. Integrating AI and other technologies into these processes will continue revolutionizing the pharmaceutical industry. High drug development costs and other challenges are driving clinical researchers to seek out new tools to get new drugs to market more efficiently. The process is often high risk, high reward: The drug development lifecycle takes billions of dollars and decades of research, but new medicines aren’t guaranteed to receive regulatory approval from the U.S. Food and Drug Administration. AI and other technologies can help overcome major drug discovery and development barriers. AI and machine learning, in particular, are revolutionizing drug manufacturing by enhancing process optimization, predictive maintenance and quality control while flagging data patterns a human might miss, improving efficiency. These tools are also helpful in the data-gathering systems for complex drug manufacturing, and models to identify novel drug targets are reducing the time and resource investment required for drug discovery. According to pharmaceutical developer Roche, AI is instrumental in creating a more feasible and sustainable timeline for drug development. “The sheer scale and complexity of the scientific data involved in drug discovery pose significant barriers to progress,” the company wrote in a Jan. 30, 2025, post. “Computational approaches have enhanced data collection and analysis, but have historically not matched the magnitude of this problem. Thus, there’s still potential for further advancements in the faster delivery of new medicines and improved success rates in research.” Investments in AI are paying off for pharmaceutical companies. A study published in the June 2024 issue of Drug Discovery Today revealed that AI discovered molecules far better than historic industry averages. The study’s authors suggested that scientists should continue to measure their success as AI-discovered molecules continue down the development pipeline.

3. Electronic health records EHRs hold vast information about a patient’s health and well-being in structured and unstructured formats. This data is valuable for clinicians — but making it accessible and actionable has challenged health systems. AI has given healthcare organizations a unique opportunity to overcome some of these hurdles, and some already see the benefits. EHR adoption aims to streamline clinical workflows while bolstering cost-effective care delivery. However, clinicians cite clinical documentation and administrative tasks as EHR burdens and sources of burnout. AI tools are key to addressing these issues and giving providers back their time so they can focus on patients. There are multiple AI use cases to tackle clinician burnout, most of which aim to automate aspects of the EHR workflow. Health data extraction products can help clinicians find the information they’re looking for quickly and effectively, reducing information overload. Many of these tools use natural language processing (NLP). This AI approach enables algorithms to flag key components of human language and use those insights to parse through text data to extract meaning. AI is also beneficial when healthcare organizations move to new EHR platforms and must undertake legacy data conversion. This process often reveals that patient records are missing, incomplete or inconsistent, which can create significant inefficiencies. Typically, inconsistencies pulled from a medical record require data translation to convert the information into the language of the EHR. The process usually requires humans to translate the data manually, which is time-consuming and labor-intensive and can also introduce new errors that could threaten patient safety. AI-based tools can automate this process, saving time and effort for care teams. Finally, ambient documentation systems powered by AI are instrumental in streamlining provider documentation burdens. Using NLP and machine learning, these tools “listen” to patient-provider conversations during the clinical encounter, transcribe them and then generate a clinical note filed into the EHR for provider review.

4. Genomics Genomics has sparked a wealth of excitement across the healthcare and life sciences industries. Genetic data lets researchers and clinicians better understand what drives patient outcomes, potentially improving care. Particularly, genomics plays a key role in precision and personalized medicine, but making these insights useful requires analyzing large, complex data sets. By enabling providers to combine the power of genomics and big data analytics, AI models can tailor care and treatment recommendations for various medical conditions. These tools are invaluable for overcoming a significant obstacle to using genomics in clinical settings: the data’s actionability. Access to a patient’s genome sequence data sounds promising, as genetic information is relevant to identifying potential health concerns, such as hereditary disease. However, to truly transform care delivery, providers need to know more than just what the data says about a patient’s genetic makeup. They must also determine how that information can be used in the real world. One approach to achieving this involves integrating genomic data into EHRs, which can help providers access and evaluate a more complete picture of a patient’s health. But AI can take this further. “Artificial Intelligence (AI) is valuable in genomics because it enables researchers to analyse vast amounts of complex genomic data more efficiently and accurately than before,” according to an Oct. 17, 2024, blog post by Katrina Costa, a science writer at the Wellcome Sanger Institute. “For example, each human genome contains around 3 billion base pairs and large-scale studies can involve hundreds of thousands of genomes. AI can also help identify patterns and correlations in data that are too subtle or complex for us to detect, and predict the impact of specific changes.” A study published in the May 20, 2024, issue of Nature Communications detailed how an AI-driven model used genomics and epigenetics to assess risk for certain autoimmune diseases. To flag genetic mutations causing certain illnesses, medical researchers must distinguish between cell types — something that’s not always possible. Using an AI-powered tool and genomics, the researchers could predict disease more accurately and thus intervene sooner.

5. Hospital management Managing health system operations is at the heart of how healthcare is delivered. Optimizing workflows and monitoring capacity can have major implications for a healthcare organization’s bottom line and its ability to provide high-quality care. However, monitoring and managing all the resources required is no small undertaking, and health systems are increasingly looking to data analytics tools such as AI to help. Capacity management is a significant challenge for health systems, as issues like ongoing staffing shortages and recent surges in respiratory viruses can exacerbate existing hospital management challenges. Many hospitals, such as Cleveland Clinic, have implemented smart scheduling that uses AI to analyze historical data — including patient volume trends and staff availability — to optimize shift rosters. This type of scheduling can also predict when more staff might be needed, such as during peak flu season and holidays. AI-enabled capacity management is beneficial for surgical scheduling. Since operating rooms are high-cost, high-demand hospital areas, AI can minimize OR downtime by optimizing procedure scheduling and staff availability. Some hospitals have also started using digital twins to improve operational management and performance. Digital twins are virtual replicas of a hospital, including its typical patients, workflows and departments. The technology mirrors data from the EHR, real-time solutions and other IT systems to provide hospital leaders with a platform to test changes and how they might affect care delivery. According to a March 22, 2024, article in npj Digital Medicine, typical applications in healthcare include hospital management, facility design, workflow development, decision-making and individualized therapy. Use of AI in healthcare ranges from chatbots that provide patients with basic information to robot-assisted surgery.

6. Medical imaging Medical imaging is critical in diagnostics and pathology, but effectively interpreting these images requires significant clinical expertise and experience. Imaging analytics, often driven by AI, aims to tackle this. AI technologies are already changing medical imaging by enhancing screening, risk assessment and precision medicine. In a study published in the March 11, 2024, issue of Communications Medicine, Johns Hopkins researchers showed that a deep neural network-based automated detection tool could assist emergency room clinicians in diagnosing COVID-19 by analyzing lung ultrasound images. The tool is designed to identify B-lines — bright, vertical image abnormalities that indicate inflammation in patients with pulmonary complications — with a high degree of accuracy to diagnose COVID-19 infection. The model’s success suggests that a similar approach could be applied to other severe conditions, such as heart failure, to diagnose patients efficiently at the point of care. The researchers emphasized that such a capability would be instrumental in scenarios where emergency department clinicians face high caseloads, like during flu and COVID seasons, or for integration into wearable technologies and other wireless devices for enhanced remote patient monitoring. AI can improve every aspect of a radiologist’s workflow, a top priority for healthcare organizations as the demand for radiologists is expected to grow by almost 26% between 2023 and 2055, according to a study from the Harvey L. Neiman Health Policy Institute published in the February 2025 issue of the Journal of the American College of Radiology.

7. Medical research and clinical trials Medical research is a cornerstone of the healthcare industry, facilitating the development of game-changing treatments and therapies. But this research, particularly clinical trials, requires vast amounts of money, time and resources. AI tools can help researchers overcome the top challenges of clinical trials, including the time it takes to recruit or match patients to a trial, collect large amounts of data from various sources and manually analyze data. AI-powered chatbots can be especially useful for clinical trials to guide patients through eligibility screening and onboarding. These technologies are particularly valuable for accelerating clinical trials by improving trial design, optimizing eligibility screening and enhancing recruitment workflows. Further, AI models can help advance clinical trial data analysis, as they enable researchers to process extensive data sets, detect patterns, predict results and propose treatment strategies informed by patient data. AI has also proven helpful for trial design, enabling protocol simulation to reduce costly amendments.

8. Patient engagement Patient engagement significantly improves health outcomes by enabling patients and their loved ones to be actively involved in care. Patient engagement solutions are often designed to balance convenience and high-quality interpersonal interaction. While digital technologies cannot replace the human elements of the patient experience, they have their place in healthcare consumerism. AI, specifically, can be valuable for personalizing patient engagement tools. Communication is a key aspect of patient experience and activation. EHRs can help facilitate that communication by allowing patients and providers to send messages to one another using the patient portal. However, overflowing inboxes can contribute to clinician burnout, and some queries can be too complex or time-consuming to address using an EHR message. This creates frustration on both sides, as clinicians want to spend more time on care and less on administrative tasks, while patients want their healthcare to be accessible and frictionless. AI chatbots are emerging as a potential solution to this conundrum. They are well-suited to analyzing patient needs and providing resources in certain areas. For example, GenAI is being embedded into more patient portals to address the following two tasks: Mining patient messages and triaging them to the appropriate clinical team member.

Fielding patient messages, analyzing them and generating a response. Studies have shown that GenAI tools provide good medical advice in patient portal messages, although their responses still require review by a healthcare provider. What’s more, chatbots can help filter patient phone calls, sifting out those that can be resolved by providing basic information, such as giving parking information to hospital visitors. The emergence of agentic AI takes this a step further by helping to complete these administrative tasks. These AI tools can also be applied to clinical needs, using patient symptom data to provide care recommendations. AI-driven patient engagement can also take the form of tools designed to conduct patient outreach based on clinical risk assessment data or systems to translate health information for users in a patient portal.

9. Predictive analytics and risk stratification In recent years, the rise of predictive analytics has aided providers in delivering more proactive healthcare to patients. In the era of value-based care, the capability to forecast outcomes is invaluable for developing crucial interventions and guiding clinical decision-making. To successfully use predictive analytics, stakeholders must be able to process vast amounts of high-quality data from multiple sources. For this reason, many predictive modeling tools incorporate AI in some way, and AI-driven predictive analytics technologies have various benefits and high-value use cases. Predictive analytics enables improved clinical decision support, population health management and value-based care delivery, and its healthcare applications are continually expanding. AI-based risk stratification is a crucial component of many of these efforts, as flagging patients at risk for adverse outcomes and preventing those outcomes is integral to advancing high-quality care delivery. For example, researchers at the University Medical Center Groningen developed an AI-driven model to stratify the risk of coronary artery disease (CAD). The model uses an AI-powered questionnaire to forecast a person’s 10-year CAD risk by analyzing their answers about lifestyle, medical history and social factors. The model proved to be as accurate as traditional clinical risk tools that require laboratory analyses, reducing unnecessary utilization. Researchers also touted the cost-effectiveness and scalability of the AI-powered questionnaire, which is more accessible to patients than lab tests. Hospitals are also applying AI capabilities to established predictive analytics solutions that predict adverse events before they happen. Top use cases in this area include risk assessment for sepsis, heart failure and hospital readmissions.

10. Remote patient monitoring Remote patient monitoring (RPM) has become more familiar to patients following the COVID-19 pandemic and the resulting rise in telehealth and virtual care. However, RPM technologies present significant opportunities to enhance patient well-being and improve care by enabling providers and researchers to use additional patient-generated health data. AI can be incorporated into RPM tools or used to streamline RPM data processing. Common RPM tools that use advanced analytics approaches like AI play a significant role in advancing hospital-at-home programs. These initiatives let patients receive care outside the hospital setting, necessitating that clinical decision-making must rely on real-time patient data. RPM offerings enable continuous and intermittent recording and transmission of this data. Tools such as biosensors and wearables are frequently used to help care teams gain insights into a patient’s vital signs or activity levels. AI bolsters these tools’ capabilities by helping to predict complications, helping care teams to preemptively intervene in cases of clinical deterioration, and flagging patients who are likely to benefit from hospital-at-home services compared to inpatient care. These technologies are also helpful because they can learn a patient’s baseline biometrics, detect deviations from that baseline and adjust accordingly or alert the care team when a patient is at high risk for an adverse event.

11. Revenue cycle management Revenue cycle management (RCM) ensures that health systems can focus on providing high-quality patient care. However, effectively tackling revenue challenges and optimizing operations requires heavy lifting on the administrative side. AI tools can help ease these burdens in a variety of ways. RCM still relies heavily on manual processes, but recent trends in AI adoption show that stakeholders are looking at the potential of advanced technologies for automation. Providers are investigating AI-based tools to streamline claims management, which is rife with labor- and resource-intensive tasks, such as managing denials and medical coding. To that end, many in healthcare are interested in AI-enabled autonomous coding, patient estimate automation and prior authorization technology. Healthcare organizations are seeking more information on their ROI before adopting these tools. However, adoption will likely center on operational optimization, leading to automation tools deployed in areas with the highest administrative burden, such as claims management. AI technologies can take over mundane, repetitive tasks — such as checking a claim’s status — and enable staff to focus on more complex revenue cycle management objectives. Revenue cycle management has also been a top target for GenAI in healthcare, considering the relatively low risks of applying the newer technology to administrative versus clinical tasks. For example, GenAI can be used for appointment reminders, preauthorization updates, payment reminders and insurance claim updates. GenAI has also been tapped to improve medical coding by validating codes based on clinical documentation and EHR data and using natural language to turn unstructured data into structured, billing-ready information. Similarly, providers are starting to use GenAI to draft appeal letters for claim denials management, with some AI tools now able to customize denial workflows by payer.

Source: Techtarget.com | View original article

Agentic AI: 9 promising use cases for business

AI agents can be used to answer employee questions and handle simple tasks on their behalf. Coding agents are an early use case because programming is detail-driven and time consuming. An AI agent paired with a BI solution could give more employees access to useful analytics. A January IBM survey concluded that 43% of companies use AI agents for HR. But there are hundreds of uses yet to be discovered, says Ryan Janssen, co-founder and CEO of Zenlytic, an AI-powered BI vendor. system. Each agent is like a microservice, specializing in one particular thing. “And they all talk to each other in a multi-agent system,” says Daniel Avancini, CDO of Indicium, a global data services company. The tricky thing is there’s a lot of tweaking of the model so they don’t do the wrong thing or access the wrong information. ‘Oh, wait, this is ambiguous; I need to go back and use a tool for this.’”

Read full article ▼
That’s the way it was done in the past, until gen AI came along. Human experts now enhance reports generated by AI.

“Now we can feed AI all the contact and public documentation, and it can spin out a report in minutes instead of days with tremendous accuracy and detail,” he says. “AI plus human expertise is a tremendous boost in quality,” he says.

Now, with AI agents, the process is changing yet again. EY will release an agent-drive version of the process to evaluate vendors. “It’ll be a continuous monitoring of vendors, which was previously not possible,” Schuller says.

AI agents aren’t just about optimization use cases, he adds. “The real value is this expansion of the market, and expansion of revenue opportunities.”

HR and employee support

Another relatively low-risk, high-value use case for AI agents is answering employee questions and handling simple tasks on their behalf. A January IBM survey on gen AI development, in fact, concluded that 43% of companies use AI agents for HR.

Indicium, a global data services company, began deploying AI agents in mid-2024, for example, when the technology started to mature.

“You’d start seeing off-the-shelf applications — both open source and proprietary — that made it easier to build them,” says Daniel Avancini, the company’s CDO.

The agents are used to making things easier for HR, he says, including tasks such as internal knowledge retrieval, tagging, and documenting, as well as other business processes.

Each agent is like a microservice, specializing in one particular thing. “And they all talk to each other in a multi-agent system,” he says.

And these prompt-based conversations can get peculiar. The tricky thing is there’s a possibility of hallucinations and all the other problems that come with gen AI. “So there’s a lot of tweaking of the model so they don’t do the wrong thing or access the wrong information,” he says.

On the positive side, the AI agents can handle a lot of questions autonomously, creating a another business benefit. “And we’re finding things that aren’t correctly documented, so it helps us make the processes better,” Avancini adds.

Business intelligence

Another area where AI agents will have a large impact is business intelligence. While BI dashboards are relatively simple to use, gaining insights that go beyond the standard categories has often taken the work of a data team to extract, says Ryan Janssen, co-founder and CEO at Zenlytic, an AI-powered BI vendor.

An AI agent paired with a BI solution could give more employees access to useful analytics, he says. For example, an AI agent for BI could advise a marketing team about where to spend its budget or create a chart based on an example drawn on a napkin, Janssen says.

AI agents that understand voice inputs can generate business data insights based on spoken questions such as, “What are our top three marketing channels?”

“That’s a very natural question, but it’s ambiguous,” Janssen says. “What you can’t do with the chatbot versus an agent is disambiguating that ambiguous question. What do you mean by ‘top’? The agent, when well built, will say, ‘Oh, wait, this is ambiguous; I need to go back and use a tool for this.’”

Many organizations are just at the start of their agentic AI journeys, and there are hundreds of uses yet to be discovered, Janssen adds. Coding agents are an early use case because programming is detail-driven and time consuming, but now coding hobbyists are building apps using coding assistants.

“The way that they are best applied is when you have work that is grindy, takes a lot of work, or requires a lot of attention to detail,” Janssen says.

When dozens of agents get strung together and organized, enterprises will see new breakthroughs, he adds.

“We haven’t even scratched the surface yet with what agents can do,” he says. “We don’t know what an organization looks like yet, how they’re supposed to interact, and how it is governed. But I have no doubt that over the next couple of years, we’re going to figure that out.”

Source: Cio.com | View original article

Financial Stability in Focus: Artificial intelligence in the financial system

The effective monitoring of AI-related risks is essential to understand whether additional risk mitigations might be warranted in support of safe innovation. By doing so, the FPC can contribute to the safe adoption of the technology from the perspective of financial stability, which will support sustainable growth. The approach will need to be flexible and forward-looking given the uncertainties and potential pace of change in AI. AI has the potential to save workers time on a wide range of tasks, thus potentially boosting productivity. It can enhance firms’ decision-making processes and help make products and services better and more tailored to customers’ needs. At the cutting edge it can catalyse other scientific or technical breakthroughs, such as in computing or medicine. The FPC is focused on financial stability risks, which can ultimately impact households and businesses. AI is already helping many financial institutions to automate and optimise their internal processes. A likely area of development over the coming years is advanced forms of AI increasingly helping to inform firms�’ core financial decisions.

Read full article ▼
The effective monitoring of AI-related risks is essential to understand whether additional risk mitigations might be warranted in support of safe innovation, what they might be, and at what point they may become appropriate. The FPC will also continue to engage actively with domestic and international initiatives to monitor and mitigate AI-related risks, not least because many of these risks are likely to be cross-border in nature. Section 3 of this FSiF sets out the FPC’s planned approach to monitoring and mitigating AI-related risks in more detail.

The FPC’s intends to build out its monitoring approach to enable it to track the development of AI-related risks to financial stability. The approach will need to be flexible and forward-looking given the uncertainties and potential pace of change in AI. To this end, the FPC plans – supported by the Bank, the Prudential Regulation Authority (PRA) and the Financial Conduct Authority (FCA) – to make use of a blend of quantitative and qualitative information sources. These include the regular Bank and FCA Survey on AI in UK financial services (hereafter ‘the AI Survey’), the AI Consortium , and targeted market and supervisory intelligence gathering. The FPC will continue to adapt and add to these tools in a flexible way as the risk environment evolves.

A more indirect – but potentially significant – way in which AI could affect the financial system is through its adoption across the wider economy. For example, if it challenges established business models in certain economic sectors, this could impact some borrower firms’ creditworthiness and thus increase credit risk for those lenders that have exposures to them. Section 2 of this FSiF sets out the FPC’s view of the potential financial stability implications of AI in more detail.

Given the significant levels of uncertainty around how AI will evolve, the FPC is considering the potential macroprudential implications of more widespread, and changing, use of AI in the financial system. By doing so, the FPC can contribute to the safe adoption of the technology from the perspective of financial stability, which will support sustainable growth. In this context, the FPC is focused on the following areas:

As a macroprudential policymaker, the FPC is focused on financial stability risks, which can ultimately impact households and businesses. A stable financial system is one that has sufficient resilience to be able to facilitate and supply vital services by financial institutions, markets and market infrastructure to households and businesses in a manner that absorbs rather than amplifies shocks. Financial stability risks can arise even where risks to the safety and soundness of individual firms are well managed by microprudential authorities, for example arising as a result of the collective behaviour of firms.

Finance is among those sectors benefiting from this source of innovation. AI is already helping many financial institutions to automate and optimise their existing internal processes, such as code generation, as well as their interactions with customers. A likely area of development over the coming years is advanced forms of AI increasingly helping to inform firms’ core financial decisions, such as credit and insurance underwriting, potentially shifting the allocation of capital. By enabling new sources of data to be used, the technology could ultimately enhance firms’ offering to customers. However, in the context of the new and distinct features of advanced AI, and the rapid pace of its development, there is a high degree of uncertainty over how the technology and its use will evolve. Section 1 of this Financial Stability in Focus (FSiF) discusses this broader context to the Financial Policy Committee’s (FPC’s) consideration of AI.

The development and deployment of artificial intelligence (AI) is likely to have a transformative impact across many sectors of the UK economy. AI has the potential to save workers time on a wide range of tasks, thus potentially boosting productivity. It can enhance firms’ decision-making processes and help make products and services better and more tailored to customers’ needs. At the cutting edge it can catalyse other scientific or technical breakthroughs, such as in computing or medicine. All of this has the potential to increase long-term productive economic growth.

Investment managers similarly stand to benefit from the rapid development of AI techniques. For example, the same IMF report identified the use of generative AI by investment managers to help them better use alternative data sets, such as social media content, to uncover previously unknown relationships between economic or financial variables, and hence to generate new investment strategies. As well as benefiting end-investors, the exploitation of novel sources of data could help increase market efficiency, with new sources of information incorporated into pricing faster and more accurately than was previously possible.

Firms participating in financial markets seek to make the best possible use of available data to optimise their trading strategies. Among institutions undertaking algorithmic (rules-based) trading in highly liquid markets, established AI techniques (such as decision trees) are already deployed to help refine the predictive power of models that feed into their trading strategies. And there is active innovation in this space, with a recent International Monetary Fund (IMF) report highlighting that over half of all patents filed by high frequency or algorithmic trading firms now relate to AI. And while autonomous AI-based trading models do not yet appear to be in widespread production, it is plausible that in the future such approaches will be employed (Box A).

Lending decisions are at the heart of banks’ financial risk management. Supervisory intelligence suggests that while in aggregate the use of AI in credit risk management is still in its infancy, some firms are using AI-based techniques (such as established gradient boosting decisions tree models) at various stages of the lending process. This includes in the pre-screening, application scoring, pricing and provisioning steps, and applies across lending classes. Among insurers, AI-based models are currently widely used to support pricing and underwriting decisions. The Organisation for Economic Co-operation and Development (OECD) has highlighted how the use of telematics – which can generate large quantities of data to feed into AI-based models – may in the future increasingly be integrated into insurance products, beyond its current widespread use in motor insurance. Such innovations have the potential to enable more tailored insurance products or pricing, as well as potentially helping with insurers’ risk management.

The broad capabilities of AI point to its likely spread across use cases over time, including in areas central to financial institutions’ business models. Greater use of AI could ultimately help firms to enhance their offering to customers. For example, the potential ability for some lenders, including non-bank lenders, to leverage a wider range of structured and unstructured data could, in principle, widen choice and access to finance for creditworthy companies, including small and medium-sized enterprises.

While there is considerable uncertainty about the longer-term economic impacts of advanced AI, some analysis suggests that these could be very significant, with one study estimating that, over the next 15 years, generative AI could bring productivity gains of up to 30% to the banking and insurance sectors, and to firms operating in capital markets. AI can also help public authorities to achieve their objectives more efficiently and effectively. Box B discusses how the Bank of England is adopting AI in its work, including in support of the FPC’s financial stability objective.

A key breakthrough has been the rapid development of generative AI, such as large language models that can generate natural language text output. These models are often pre-trained and provided as cloud-based services by third-party providers. Such models are being used to help streamline various internal functions within financial institutions, such as code generation and information search and retrieval, helping improve productivity. And AI-based analytics are being used to enhance customer interactions, for example helping payment firms better predict a customer’s preferred payment option.

The 2024 AI Survey indicates that the top near-term (in the next three years) use cases for AI include optimising internal processes, enhancing customer support and combatting financial crime (Chart 1). Financial firms appear most willing to deploy AI in these types of operationally focused use cases, which are expected, according to the same survey, to be among those delivering the biggest benefits in three years’ time.

Financial institutions’ decisions about the pace, scale and nature of their AI adoption will depend on a complex combination of technical developments, commercial incentives and risk appetite, implying a wide range of possible future adoption scenarios. The regulatory environment will also be relevant to the pace and nature of industry adoption: the appropriate management of firm and system level risks can help create an environment in which firms are able to innovate safely and unlock the full benefits of the technology.

Many AI-based analytical techniques, such as those which assist in statistical analysis, are not new and have been established for a decade or more. Meanwhile, some use cases for advanced forms of the technology, such as generative AI, are now in production. The AI market continues to change at pace, for example with the development of so-called agentic AI systems. Such systems, which are not widespread at present, can take autonomous action to achieve specified goals by utilising tools, learning from feedback, and adapting to dynamic environments. It is also possible that some applications will not in practice live up to their initial promise, while currently-unforeseen developments may have significant impacts.

While AI adoption is currently happening at pace in many parts of the financial services sector, there is a high degree of uncertainty about its specific future impacts.

Subject to its objective in relation to financial stability (which is itself a vital foundation for sustainable growth), the FPC also has an objective to support the Government’s economic policy. Supporting broad-based and resilient growth built on strong and secure foundations contributes to that objective. As such, when considering the implications for financial stability of emerging technologies such as AI, the FPC is also mindful of the significant economic opportunities presented by them.

As highlighted by the Government-commissioned AI Opportunities Action Plan , effective and swift AI adoption has the potential to enhance the competitiveness of areas of UK economic strength, and to unlock new growth opportunities across the whole economy. As the third largest destination for AI investment globally, the UK is well placed to capitalise on these opportunities.

As a general-purpose technology, AI can bring productivity gains to many economic sectors. For example, it can help reduce resources spent on routine administrative tasks, freeing up employees’ time for higher value-added work. It can enhance institutional decision-making and help make products and services better and more tailored to customers’ needs. And at the cutting edge it could catalyse other scientific or technical breakthroughs, such as in computing or medicine. All of this has the potential to support long-term productive economic growth.

The FPC fulfils its financial stability objective of contributing to protecting and enhancing the resilience of the UK financial system by identifying, monitoring and taking action to remove or reduce systemic risks to financial stability. This includes work on emerging systemic risks where there is considerable uncertainty over their potential timing and size. The FPC is considering the potential implications of AI in order to contribute to its adoption in a way that safeguards financial stability and so is conducive to sustainable growth. It is doing so in the context of a wider public discussion over the benefits and potential risks around AI, including in relation to financial stability.

While the distinct features of AI are the source of its unique benefits, they can also be additional sources of risk (as discussed by Sarah Breeden (2024) ). For example, the complexity of some AI models – coupled with their ability to change dynamically – poses new challenges around the predictability, explainability and transparency of model outputs. And their use of very large amounts of data poses new challenges for users around ensuring the integrity of that data. The potential for market concentration in AI-related services, including vendor-provided models, is a further challenge.

AI brings both benefits and potential risks, so is of relevance to the FPC’s financial stability objective of protecting and enhancing the resilience of the UK financial system.

Some forms of AI represent a discontinuity relative to previous modelling technologies. For example, advanced AI models (including generative AI models) can be dynamic, learning automatically from new input data, meaning that their outputs can evolve over time. They can be used to produce complex outputs and make decisions autonomously. And they are trained on vast volumes of data, on a different scale to previous modelling tools. As a result, they have powerful capabilities across a wide range of use cases, bringing significant – and potentially transformative – benefits to their users. In the coming years, various parts of the UK economy, including financial services, may be reshaped as the use of this technology becomes more widespread and evolves.

2: The financial stability implications of AI

There are various ways in which AI-related developments might impact financial stability.

This section explores several key ways in which AI might interact with vulnerabilities at the firm and system level, and could – especially in the absence of sufficient mitigations – lead to financial stability risks. These risks to financial stability could transmit to the real economy via their effect on systemically important institutions, systemically important markets or by affecting the operational delivery of vital services. Specifically, it explores the following four AI-related areas of focus for the FPC:

Greater use of AI in banks’ and insurers’ core financial decision-making.

Greater use of AI in financial markets.

Operational risks in relation to AI service providers.

Changing external cyber threat environment.

The distinct features of advanced AI models relative to other modelling technologies are relevant to each of these areas, and feed into potential risks to financial firms and the financial system. For example, the potential for dynamism in complex AI models (updating as new data is available) and a lack of predictability and explainability of their outputs could, other things equal, make it harder for firms to manage risks related to their use. In the context of widespread use of vendor-provided AI models, around half of the respondents to the 2024 AI Survey report having only a ‘partial understanding’ of the AI technologies they use. More generally, the quality of AI model output relies on the quality of the input data. The use of very large-scale training data can add to existing challenges for firms to ensure that data used is relevant, of sufficient quality, and does not introduce bias.

The potential for market concentration in AI-related services is also of relevance. In the generative AI market in particular there are various factors that could increase concentration over time. These include the cost and complexity of the models and vertical integration of parts of the ‘AI stack’. However, there are also various factors that could have the opposite effect on market concentration, including the widespread availability of open-source models.

Given these considerations, a scenario in which AI models are increasingly deployed in an autonomous manner (as opposed to being used largely as an assistive tool) could potentially pose significant additional risks to financial stability in the future.

This section does not seek to present a comprehensive overview of all possible AI-related risks to financial stability, which would also encompass potential effects on business models and market structures. These could be significant, especially over the longer term. For example, if AI challenges established business models in certain economic sectors this could, in principle, impact some borrower firms’ creditworthiness and thus credit risk for those lenders that are exposed to them. AI could also increase the relative footprint of non-bank financial institutions (NBFIs) in certain markets, for example as a result of a greater use of algorithmic trading. Given the high level of uncertainty around the future trajectory of AI, it is challenging at present to assess all such potential longer-term impacts.

Risks can stem from vulnerabilities at both the firm and system level.

The FPC identifies and assesses risks by considering vulnerabilities arising at both the institution level (microfinancial) and system level (macrofinancial) and the transmission channels through which they can impact financial stability (Figure 1). Actions by both financial institutions and public authorities can help build resilience to systemic risks. Microfinancial vulnerabilities often relate to risks that can impact individual firms’ safety and soundness, such as model risk, and microprudential regulation helps to mitigate such risks.

But even where risks are well managed from the perspective of individual firms, macrofinancial vulnerabilities can mean that the collective behaviour of firms in response to a shock can have implications for financial stability. In particular, this may be the case when firms do not have sufficient information or incentives to take account of system level outcomes in deciding their actions – in other words when they are outcome agnostic from a system perspective. Such risks to the system are the focus of the FPC’s macroprudential work.

Figure 1: How shocks can interact with vulnerabilities to affect financial stability (a) (b) Source: Bank of England.

(a) ‘Microfinancial vulnerabilities’ are entity-level vulnerabilities that determine their financial and operational susceptibility to shocks.

(b) ‘Macrofinancial vulnerabilities’ are features of the system that help determine the impact of shocks on the system and real economy, given microfinancial vulnerabilities. More precisely, they are features that – conditional on the existence of microfinancial vulnerabilities – can increase the impact of shocks across the system and real economy.

The FPC’s analysis of AI-related risks will continue to be updated as the external risk environment evolves and as more information becomes available, including through its ongoing monitoring work (Section 3).

Greater use of AI in banks’ and insurers’ core financial decision making

While bringing various potential benefits to both firms and customers, AI can introduce new risks for individual firms, especially in relation to data and models.

As described in Section 1, it is likely that banks and insurers will increasingly integrate the use of AI into their core business decisions around the provision of credit and insurance, respectively. Doing so could help enhance their product and service offering to customers, and it could also improve the accuracy of their financial risk management. At the same time, it is important to be alert to potential risks that could arise from the deployment of AI in business functions that have a direct impact on the financial position of the firm and outcomes for customers. For example, the lack of explainability and potential autonomy of advanced AI models could – if deployed without appropriate testing, governance and risk controls – lead to a level of financial risk-taking that is not properly understood at the time.

Microprudential regulation can help mitigate risks from AI…

A range of existing microprudential principles, regulation and guidance is of relevance to firm level risks, notably measures in relation to model risk management, data and governance, and conduct. And the Senior Managers and Certification Regime (SM&CR) is a supervisory tool that can be used to ensure appropriate individual accountability for conduct and competence in relation to these issues. In the context of the changing risk landscape around AI, a number of aspects where existing regulatory regimes might need to evolve were highlighted in responses to the FCA, Bank and PRA discussion paper DP5/22 – Artificial Intelligence and Machine Learning (summarised in feedback statement FS2/23 – Artificial Intelligence and Machine Learning).

It will be important to ensure that existing regulatory frameworks, alongside firm level controls, mitigate microfinancial risks from AI sufficiently, especially as AI models are increasingly used in agent functions. The FPC will continue to engage other regulatory authorities on relevant frameworks, to help inform its assessment and monitoring of systemic risks from AI (Section 3).

… but it is also important to consider system level implications.

At the system level, common weaknesses in model and data risk management across firms would represent a macrofinancial vulnerability. For example, in the event that large numbers of firms rely on the same open-source model components or data libraries, a significant unknown error or bias could cause many firms to misestimate certain risks and so misprice and misallocate credit as a result. The eventual crystallisation of such a weakness could generate losses for a number of systemic firms, leading to a tightening of credit supply to the real economy, or broader financial contagion through a loss of confidence. This type of scenario was seen in the 2008 Global Financial Crisis, where a debt bubble was partly fuelled by the collective mispricing of risk (as transformed by innovations around securitisation). More widely, a high level of reliance on AI models for key risk management decisions, could, in principle, impact other areas of firms’ resilience, such as liquidity preparedness.

Under a scenario in which core decisions on the availability and pricing of services are underpinned by AI models, biased or wrongly calibrated data or models could directly affect outcomes for consumers, such as their access to products. This could in turn give rise to conduct-related risks, for example if certain decisions or processes were to be subject to legal challenge and financial redress. This could be amplified by practical issues related to establishing who is ultimately liable for decisions made by AI models.

Greater use of AI in financial markets

AI could be used to inform more trading and investment decisions, and that may be associated with greater market efficiency but also require appropriate risk management.

Market participants appear likely to integrate more advanced AI-based analysis into their core trading and investment activities, although the speed and scope of AI deployment is uncertain and could vary significantly across institutions and asset classes. In particular, institutions undertaking algorithmic trading already widely use established AI techniques (such as decision trees) to calibrate their algorithms, with scope for further innovation in this space (Box A). And some investment managers are turning to AI to help generate profitable insights.

Greater use of AI by market participants in their core business processes could help increase market efficiency (for example through the faster incorporation of new information), while also being beneficial for end-investors through increased returns. At the same time, the deployment of increasingly complex AI models in this way raises various potential firm level risk management challenges, in common with those already discussed under the previous section. Unknown data or model flaws might mean that a company’s exposures turn out to have been incorrectly measured or interpreted, leading to it having insufficient financial resilience to market stress events. And it may be particularly challenging for AI models to respond to extreme events and situations of radical uncertainty, such as historically unprecedented shocks.

AI-driven trading and investment strategies could increase the tendency for market participants to take correlated positions.

Greater use of AI-driven trading strategies could lead to various potential outcomes for markets and the practical implications are uncertain. From a systemic risk perspective, the potential for AI-based participants to take increasingly correlated positions is an important consideration. This could be driven by the widespread use of a small number of open-source or vendor-provided models or underlying data sets, or a more general convergence on very similar model designs across the market. Herding and market concentration was the top risk cited in recent IMF outreach when stakeholders were asked about risks that could result from wider adoption of generative AI in capital markets.

As explored by Jonathan Hall (2024), a potential future market with widespread use of autonomous AI-based trading might be more informationally efficient than a market shaped by human traders, but it could also be less resilient to shocks. For example, increasingly correlated positioning and strategies could exacerbate the impact of fire-sales in response to a stress event (where firms could be forced to unwind leveraged positions). The potential for this type of correlated deleveraging was explored in the system-wide exploratory scenario. It arises, in part, because individual institutions may not factor in the collective impact of their actions on the market. Systemic markets, such as core bond markets, are central to the flow of finance to the real economy. Their effective functioning is therefore an important aspect of financial stability.

There are also ways in which the greater use of AI could, in principle, contribute to the improvement of market resilience. For instance, AI could enhance risk management by enabling better use of available data, meaning that the sort of fire-sale scenario described above – where leveraged firms are caught out by price moves – becomes less likely or has less of an impact. And, as noted by the Financial Stability Board (FSB), the ability of investment managers to offer increasingly customised options to their clients might have the effect, other things equal, of reducing market correlations.

Advanced AI models could rationally exploit profit-making opportunities in a destabilising way or engage in other adverse behaviours.

Under a scenario of advanced AI trading models being deployed to act with more autonomy, these models might identify and exploit weaknesses in the trading strategies of other firms in a way that triggers or amplifies price movements. For example, models might learn that stress events increase their opportunity to make profit and so take actions actively to increase the likelihood of such events.

Another source of risk is the potential for such models to facilitate collusion or other forms of market manipulation. Given the ability of some AI models to learn dynamically in multi-agent environments, and challenges around the explainability of model outputs, such adverse behaviours might emerge without the human manager’s intention or awareness.

Existing market monitoring and oversight measures are relevant to AI risks, and the FPC will continue to follow closely the implications of AI for systemic markets.

There are market conduct regulations to guard against market manipulation, alongside the SM&CR to help ensure appropriate individual accountability. And risks around herding in markets, potentially leading to procyclical fire-sales, are not new. The FPC already considers the resilience of systemic markets to such sources of disruption, including through assessment tools such as the system-wide exploratory scenario. Internationally, the FSB is consulting on enhancing public disclosure on aggregate market positioning and liquidity.

The rapid pace of change in AI technology could lead to correspondingly fast and significant shifts in risks to systemic markets. As such, AI-related risks in this area merit an ongoing focus from a macroprudential perspective, and the FPC intends to monitor relevant developments closely (Section 3).

Operational risks in relation to AI service providers

In order to capitalise on the productivity benefits of AI, financial institutions generally rely on service providers outside the financial sector.

For many AI implementations, financial institutions rely on vendor-provided AI models. This is particularly so for very complex and powerful models, such as the most recent large language models, where significant scale is required to motivate the large amounts of capital investment needed for their development. As discussed in Section 1, these models are used in various ways to increase companies’ productivity, such as assisting with code generation. For other use cases, financial institutions build AI models in-house. But even so, they may rely on cloud computing to develop and operate these models, and on external data aggregators to obtain the large data sets on which the models are trained.

Growing concentration in the supply of AI-related services could increase risks to the financial system.

Evidence from the AI Survey supports the view that third-party exposure will continue to increase as the complexity of models increases and outsourcing costs decrease. A further increase in interconnectedness between nodes in the financial system driven by AI has the potential to heighten existing vulnerabilities in this regard. A reliance on a small number of providers for a given service could also generate systemic risks in the event of disruptions to them, especially if is not feasible to migrate rapidly to alternative providers.

A severe operational disruption related to external service providers could, if its impact on the financial system is severe, lead to financial stability issues. For example, under a scenario in which customer-facing functions have become heavily reliant on vendor-provided AI models, a widespread outage of one or several key models could leave many firms unable to deliver vital services such as time-critical payments. The potential for disruption from such operational risks has been underscored by several temporary outages of important banking and payment services (for example those triggered by the July 2024 worldwide IT outage caused by a flawed update distributed by the cyber security technology firm CrowdStrike).

This highlights the importance of building and maintaining operational resilience, an existing area of focus for the FPC.

In March 2024, the FPC set out its macroprudential approach to operational resilience. The FPC noted that firm level operational resilience provides the essential foundation for operational resilience across the system. Firms can mitigate operational risks through effective control frameworks and investment in operational resilience, and both microprudential and macroprudential policies are in place to help manage certain existing risks in this space.

Public-private sector collaboration (between the Bank, other authorities and a range of financial and non-financial firms) is supporting the development of a ‘shared responsibility model’ for AI. The output will be guidance on a structure for managing implementation risks, such as whether the third-party provider or the client firm is responsible for managing the data within different kinds of AI deployment. This should help minimise the potential for divergences in approach leading to firm or sector level operational impacts.

Additionally, the Financial Services and Markets Act 2023 established a new regulatory regime for critical third parties, in response to the FPC’s 2021 recommendation that additional policy measures were likely to be needed to mitigate the financial stability risks stemming from concentration in the provision of services to UK firms and FMIs. The Bank, the PRA and the FCA jointly published rules for the new regime in November 2024 (supervisory statement 6/24). Certain third parties providing data and AI models could also emerge as potential future critical third parties as a result of increasing use of them by the financial sector. There are ways in which AI services might differ from other types of third-party usage, including the complexity of the most computationally powerful foundation models, and also potential challenges around identifying specialised and niche providers.

Given the rapid evolution in how financial institutions are interacting with the AI market, and in the structure of that market, the FPC will monitor developments in this space closely (Section 3).

Changing external cyber threat environment

AI could be a new tool for malicious actors that already pose risks to financial companies…

Cyberattacks are a significant source of risk faced by many financial firms, and in the Bank’s most recent Systemic Risk Survey, they remain near the top of the list of the perceived key sources of risk to financial system. The FPC has previously noted that higher geopolitical tensions create an environment of heightened risk of cyberattacks. And in the 2024 AI Survey, cybersecurity came near the top of perceived current AI-related risks, and respondents expected this risk to grow over the next three years.

The use of AI by threat actors could increase their capability to carry out successful cyberattacks against the financial system, with potentially greater sophistication and scale than was previously possible. Financial institutions’ own use of AI could also open up new opportunities for malicious actors to exploit, for example via any vulnerabilities around the software or hardware of third-party providers. The model development stage could also be a potential target, for example via the malicious manipulation of model training data (so-called data poisoning). Longer term, the potential for cyberattackers to combine AI with possible future developments in quantum computing will need to be monitored as both technologies evolve over the coming years.

In addition to cyberattackers seeking to cause disruption to the financial system, AI might also increase the capabilities or opportunities of other types of malicious actor. For example, those engaged in illicit financing (money laundering or terrorism financing) could seek to use AI models to circumvent institutional controls. And the use of public or customer-facing AI models by financial institutions creates new risks such as ‘prompt injection’, whereby attackers seek to manipulate models to extract confidential information. Further, cyberattackers perpetrating fraud schemes (against financial institutions directly or against retail customers) could be rendered more effective and harder to detect as a result of generative AI models. For example, the capability of these models to produce so-called ‘deepfakes’, as well as highly personalised text, could increase the ability of those intent on committing fraud to manipulate employees or retail customers.

… exacerbating the risks they pose to the wider financial system.

As well as posing material risks to individual institutions, AI-related cyberattacks could have systemic implications. For example, the widespread deployment of common AI models with shared cyber vulnerabilities across systemic firms would represent a system-wide vulnerability. This might increase the impact of large-scale cyberattacks, which could spread to other parts of the financial system through operational contagion or a general loss of confidence. Financial stability might ultimately be affected, if, for example, systemic markets or the operational delivery of vital services were to be materially disrupted as a result. While recent ransomware attacks at several financial firms and third-party providers did not impact financial stability, they showed how such incidents have the potential to amplify risks across the financial system, as disruption at one firm can cause disruption at others.

A more general increase in cyberthreats could have impacts on the real economy and so indirectly on financial institutions. For example, AI-based disinformation tools such as deepfakes could be used to exacerbate existing geopolitical tensions, increasing economic uncertainty.

However, AI could improve firms’ ability to combat threat actors.

The use of AI by financial institutions could improve their ability to combat malicious actors. For example, it could assist with the detection of cyberthreats by improving the automated identification of malware or illicit finance activity. Indeed, respondents to the 2024 AI Survey expected the benefits of AI for cybersecurity and anti-money laundering to grow significantly over the next three years.

The impact of AI in this area is therefore bi-directional, raising the prospect of a technological arms race between financial companies and malicious actors, and making the overall impact of AI from a financial stability perspective uncertain.

Public-private sector collaboration will be important for helping address AI-related cyberthreats.

UK authorities and industry have established approaches to analysing and mitigating cyber risks. This includes public-private sector collaboration through the Cross Market Operational Resilience Group (CMORG).

In 2024, CMORG established an AI Taskforce to identify and mitigate potential emerging operational risks to the sector through the widescale adoption of AI. The Taskforce is developing scenarios exploring how malicious actors could utilise generative AI to enhance their ability to conduct attacks against individual financial services firms and the sector more widely. This includes consideration of the way in which generative AI could be used to circumvent established security and authentication controls at scale. The scenarios developed through this work will inform sector-wide collaboration on the development of proactive mitigation measures, while also informing firm and sector level exercises.

Tracking the evolution of AI-related cyber risks will be an important part of the FPC’s approach to monitoring AI (Section 3).

Source: Bankofengland.co.uk | View original article

How COOs maximize operational impact from gen AI and agentic AI

McKinsey estimates that over the long term, gen AI could yield $4.4 trillion in productivity growth potential. Gen AI’s deep learning AI models are already helping companies achieve performance breakthroughs across the operations value chain. Only 12 percent of these organizations have been able to find revenue-generating use cases for gen AI, according to a McKinsey survey of 118 US C-suite executives. In mid-2024, just 17 percent of organizations worldwide said that they derive more than 10 percent of EBIT from gen AI. The COO can help build enterprise capabilities for gen-AI-based rewiring in three ways: identifying the highest-potential domains for AI, building the capabilities needed, and building the right processes to deploy the technology effectively, says McKinsey’S Andrew Kuchins, senior vice president of research and innovation at McKinsey & Company. For more information, visit McKinsey’s gen AI blog and the company’s gen- AI white paper.

Read full article ▼
Better, faster, easier, cheaper: That’s the promise of gen AI. For at least some companies, it’s becoming the reality as well, as leaders find new ways for gen AI—and the increasingly capable agents it enables—to automate, augment, and accelerate work across virtually every function. Early adopters are using gen AI to help strengthen supplier negotiations in procurement and improve quality control in equipment maintenance (see sidebar “Gen AI’s potential across operations”). One digital marketing platform is even using gen AI to manage “long tail” sales accounts that were previously too labor-intensive to serve, for an annual revenue gain of more than $30 million.

Gen AI’s potential across operations McKinsey estimates that over the long term, gen AI could yield $4.4 trillion in productivity growth potential. That’s on top of “traditional” or “analytical” AI, which relies on structured data to solve discrete analytic tasks—such as predictive analytics for optimizing equipment maintenance. Gen AI’s deep learning AI models are already helping companies achieve performance breakthroughs across the operations value chain, especially by finding new opportunities to break internal silos. Multiagent systems can achieve even more (exhibit). R&D. A European automotive supplier initially used gen AI to analyze its existing code base, then deployed targeted gen AI applications to automate new-code generation, quality assurance, and documentation. Together, the efforts across these three subfunctions yielded a time savings of 20 to 30 percent while improving code quality.

Manufacturing. A global heavy-industry manufacturer developed a gen-AI-based “technician maintenance copilot” to help frontline workers quickly find nonintuitive root causes for equipment failures. Frontline workers can now complete tasks that previously required automation experts from other parts of the operation to solve, reducing maintenance workload by 40 percent while increasing overall equipment effectiveness (OEE) by 3 percent.

Procurement. At a multinational resources company, upskilled procurement teams worked with the legal department to use gen AI models to review thousands of pages of contracts covering more than $1 billion in spend. The effort revealed more than $15 million in quick-win savings, such as through improved compliance with payment terms—with substantial further savings to come from negotiation insights.

Production line. Future production lines could take these ideas even further by deploying gen AI agents that, underpinned by the most advanced large language models, are increasingly able to take autonomous action to complete complex tasks. Picture a shift turnover in a factory or hospital—a moment notoriously prone to miscommunication, delay, and rework. As the incoming shift supervisor checks in, a gen-AI-based agent could aggregate the prior shift’s performance, identify ongoing problems and their resolution status, and even develop remediation plans for the supervisor to review. Once the supervisor gives feedback, the agents can either iterate and refine the plan or start implementation.

Yet, as encouraging as these results are, there’s still much to do. In a recent McKinsey survey of 118 US C-suite executives, only 19 percent said that gen AI increased their company’s revenue by more than 5 percent. It’s a similar picture elsewhere: In mid-2024, just 17 percent of organizations worldwide said that they derive more than 10 percent of EBIT from gen AI.

Not surprisingly, about half of senior executives in that survey describe their organization’s development and release of gen AI tools as too slow—despite the fact that three-quarters also say they have at least a draft of their gen AI strategy. Only 12 percent of these organizations have been able to find revenue-generating use cases for gen AI. And while the ultimate goal for these organizations is to achieve gen AI maturity, with gen AI fundamentally changing how work gets done, a mere 1 percent of executives say their organization has reached that point.

That’s where the COO plays a critical role, as illustrated by several recent success stories where gen AI and gen-AI-based agents have helped redefine how a company creates value. Specifically, the COO can help build enterprise capabilities for gen AI-based rewiring in three ways: First, they can define the company’s operating structure for gen AI, identifying the highest-potential domains for gen AI deployment and building the capabilities needed to scale the technology effectively across the enterprise.

Second, they can shape the organization’s data governance, addressing the complex challenges associated with extracting and structuring data from legacy operating systems and minimizing risks associated with inaccuracy. Third, and most important for sustaining gen AI’s advantages over time, they can oversee change management initiatives so that people learn, use, and improve the tools and processes gen AI enables.

Getting these three factors right takes work, not just in operations but also in collaboration with other leaders, such as the chief information officer (CIO). But it’s how companies’ investments in gen AI can pay off: by reshaping how work gets done every day.

Ensuring gen AI creates real business value

Getting gen AI wrong could be costly: not just in wasted investment but also in missed opportunities. Companies that move quickly are already securing major advantages, increasing the stakes.

Sensing gen AI’s possibilities, senior leaders of a European equipment maker with more than €10 billion in revenue wanted to avoid one of gen AI’s most common pitfalls: fragmentation in development. Too often, individual functions and business units design gen AI tools that optimize their own tasks but fail at the enterprise level—such as a production-scheduling tool that raises factory output higher than the logistics department can absorb.

The company’s COO recognized that, in facing the future of operations, he and his team needed much more than a list of potential gen AI use cases. They needed to rethink the entire operating model to see how this new automation could transform people’s work.

Rethinking operating structures for gen AI

To an even greater degree than seen in earlier waves of technology-based transformation, gen AI touches virtually every part of a business organization. This expanded scope for coordination makes the operating structure particularly important to get right, both to identify the highest-potential gen AI opportunities at the enterprise level and to see them through to fruition. From the beginning, therefore, the European equipment manufacturer brought together the COO, CIO, chief technology officer (CTO), and heads of manufacturing, procurement, supply chain, and quality control, along with business unit leads responsible for marketing and sales, to undertake a gen-AI-prompted reassessment of its operating assumptions.

Centralization. The equipment manufacturer’s leaders recognized that sustaining this sort of centralized approach would be essential, especially as the organization developed foundational capabilities in fields ranging from platform architecture to risk and ethics. At least initially, a center of excellence (COE) or “factory” model, with a steering committee providing executive leadership and an operating committee overseeing day-to-day work, would help keep stakeholders collaborating to generate lasting value (Exhibit 1).

The next question is where the COE should sit within the larger organization. Under the most centralized approach, the COE directs gen AI strategy and reports directly to the CEO, operating in parallel to the business units (Exhibit 2). By enforcing enterprise-wide standards and minimizing the risks of duplication and resource waste, this option is often the most practical one at the very earliest stages of gen AI exploration. For the equipment manufacturer, following this model has so far yielded a prioritized road map of relevant use cases for €300 million in EBITDA improvement.

As the company builds more confidence, it could evolve toward one of two middle alternatives in which the business units develop their own gen AI capabilities. In some instances, the COE takes the lead and the business unit executes, while in others, the business unit takes the lead with support from the COE. Only a few organizations have fully decentralized their gen AI function and left it to the business units to run.

Identifying domains. Developing a clear structure helps organizations find the right balance in designing gen-AI-based solutions that are large enough to achieve meaningful end-to-end impact yet small enough to be achievable within a reasonable time frame. Thinking in terms of domains can push gen AI past the “pilot purgatory” stage, in which organizations spend time and resources and incur opportunity costs on developing gen AI tools that have little effect beyond saving workers a few minutes a day.

Most important is to start by assessing the strategic fit for gen AI, with an expansive view of the art of the possible so that the solution can have a lasting effect. For example, a finance function might start by identifying a pain point—such as analysts being overloaded with simple requests from other managers that would take days to answer. An initial response might be to create a gen AI chatbot that would allow anyone in the company to directly query finance data on their own.

This addresses the initial problem by enabling faster query resolution and freeing up analysts for higher-value work. But a deeper examination would seek the root causes for the frequent queries, and whether a more sophisticated gen AI tool—perhaps an agent or a set of agents—could start to produce certain analyses automatically when certain scenarios occur.

This sort of thinking implies another major question: Can we keep gen AI from destroying value? Automating financial analyses for internal purposes, such as to find lessons relevant to new product launches, tends to be substantially less risky than automating analyses for compiling into quarterly securities reporting. And that leads to a final question: Is gen AI the right solution? For some reporting, simple and (comparatively) inexpensive analytic AI may be completely adequate.

Data governance

Centralization helps operations leaders deal with what 70 percent of gen AI high performers reported as a challenge: managing data (Exhibit 3). With gen AI, the accuracy, availability, and usability of operational data become even more important, yet old challenges persist. A global materials company provides a typical example, with teams in different functions each developing their own unique information about the same products. The R&D department’s data focused on safety issues; the application engineering team developed tailored customer solutions; commercialization owned the product descriptions; and customer support assembled a set of highly specific product details to answer user queries. With no single source of truth, conflicts naturally arose in the underlying data, which gen AI models struggled to parse.

To resolve the issue, the company is now following a centralized data management system that harmonizes data from different sources, eliminating discrepancies and ensuring that all teams have access to the same accurate information. Crucially, the system emphasizes human oversight to maintain high data quality and reliability, especially for AI-generated answers. A robust governance structure further validates and regularly updates data.

Change management

As with earlier waves of digital innovation, gen-AI-based transformations are less about the technology itself and more about rethinking how humans work. If anything, gen AI’s potential to enhance creativity and innovation makes change management even more central, particularly as its impact depends on integrating human and gen AI capabilities.

It’s a tall order. A gen AI transformation must not only account for the complexities of an evolving technology landscape while yielding clear business results but also address risk concerns (see sidebar “Mitigating risk”), overcome skill gaps, and foster innovation and adaptability. And gen AI itself must keep improving, with AI agents subject to their own performance management systems.

Mitigating risk McKinsey research has identified eight categories of risk associated with deploying gen AI, which companies can systematically assess as they adopt gen AI across their organizations (exhibit). data privacy and quality due to unauthorized use or disclosure of personal, sensitive, or other legally protected data, or use of incomplete or inaccurate data in model training

impaired fairness due to bias in training data, model performance, or misrepresentation of model outputs

intellectual property (IP) infringement due to unauthorized use of proprietary content or inadvertent “leakage” of IP into the public domain

malicious use due to heightened potential for bad actors to create harmful AI-generated content (such as deepfakes, hate speech, or phishing scams)

poor performance or explainability due to model limitations that result in inaccurate, outdated, or completely fabricated output (“hallucinations”)

security threats due to AI system vulnerabilities that could allow for bypass of existing filters or manipulation of open-source models

strategic missteps due to noncompliance with standards, regulations, or social norms, resulting in reputational or societal risk

third-party risks due to misunderstanding where and how outside actors may be using gen AI, creating potential unknown exposures from, for example, use of proprietary data in public models

Setting a bold aspiration for enterprise-wide impact. These obstacles are all too familiar to the typical COO, who is charged with leading the continuous-improvement efforts that sit at the core of next-generation operational excellence. They were the starting point for a tech industry COO who recognized gen AI’s potential to break long-standing operational logjams—and understood that success would depend on how well people embraced gen AI solutions.

The tech company’s work with gen AI started by tackling one of its thorniest cross-functional problems, where complex coordination led to frequent delays in generating highly tailored statements of work that outlined the details of the technology services each client would buy. Assembling a statement of work required the relationship manager to collect input from experts in internal functions ranging from finance and legal to data security, as well as from the delivery managers and solution architects leading the day-to-day work—and the client, too. Rework and errors were a fact of life, slowing response times to such a degree that relationship managers missed deadlines for important requests for proposals.

To build a tool that could generate statements of work for more than a dozen product lines, the company needed to scale quickly. The answer for this organization was to centralize. Leaders created a single working group comprising three main teams: one for engineering, one for business and data requirements, and one for change management.

The three teams collaborated extensively, particularly in reimagining workflows that would take full advantage of gen AI’s efficiencies. Previously, for example, creating a statement of work involved elaborate rounds of requirements gathering, feasibility analysis, and risk assessment—inevitably generating rework as later reviews identified issues that affected earlier decisions. By analyzing thousands of earlier statements, the new tool developed templates that highlight the most frequent potential problems up front. Specialist experts in legal, compliance, or related functions can instead focus their efforts on problems that don’t have a clear precedent.

Increasing employees’ confidence in a gen AI solution. The change management team’s involvement proved crucial not only in building the tool but also in ensuring uptake once it was deployed. Following the core principles of the influence model, leaders ensured that each product line had its own dedicated change champion, who served as an intermediary between users and the working group to develop and adapt statement-of-work templates that would meet user needs. The change champion would then help communicate with users and build their skills both in using the tool and in improving its capabilities.

The ultimate result is a templatized statement of work that replaces hundreds of document variations, each taking days to produce, with just five that now require only hours to build. This has eliminated thousands of hours of repetitive labor, freeing experienced employees to focus more on high-value work.

Strengthening COO–CIO collaboration

These examples illustrate how using AI to rethink a stream of value can yield much more improvement than simply automating a few tasks. It also requires a much closer integration between the COO and CIO, whose traditional incentives have often been in tension.

COOs charged with modernizing complex, legacy operations have often found off-the-shelf IT solutions to be a difficult fit at best. Yet the cost and complexity of bespoke technology can create substantial burdens for the IT function and the CIO. Some of the friction has dissipated as newer technologies, such as edge computing and standardized industrial communications protocols, have taken hold—along with modular IT architecture and more flexible development practices. But there’s more to be done.

AI’s short innovation cycles and high resource needs have raised the pressure for technology investments to yield their projected returns on schedule, if not sooner. When COOs and CIOs collaborate more effectively, troves of data can become usable insights for revamping operations and creating entirely new sources of value.

The technology company shows how this collaboration can produce results. The COO of the business took the lead in identifying the transformation opportunity and developing it so that it met operational requirements. The CIO’s involvement expanded the vision of what was possible, such as by finding new opportunities to adapt enterprise-wide gen AI investments for the specific data needs of creating statements of work. Along the way, the CIO’s team became more agile in working with the operations team so that the entire project could meet milestones.

COOs already know that dozens of narrow gen AI use cases are unlikely to add up to lasting operational improvement. Instead, gen AI’s potential comes from how it helps leaders rethink entire value chains. This is at the heart of the COO’s role, and its future.

Source: Mckinsey.com | View original article

Source: http://www.healthcareitnews.com/news/agentic-ais-greatest-potential-benefit-changing-how-health-system-functions

Leave a Reply

Your email address will not be published. Required fields are marked *