Practitioner perspectives on informing decisions in One Health sectors with predictive models

Variation in participant definitions and familiarity with models

When asked what a model is, participants most commonly said: a tool for understanding a system, a decision-support tool, and/or a simplification of real-world complexity (Fig. 3). The top three types of models participants were familiar with were computational or simulation models (including scenario trees), statistical models, and mathematical models that solve equations (Fig. 3), highlighting an emphasis on knowledge of quantitative models among the professionals interviewed.

Fig. 3: Number of participants defining models in different ways (top). Number of participants familiar with different modeling techniques (bottom). Some participants gave multiple definitions and were familiar with multiple methods (i.e., sum of the bars > 41).

How models are useful for informing decisions

When participants were asked how models could be useful for guiding decisions in One Health sectors, 80% (33/41) said models were useful for comparing disease control policy scenarios. The second most frequently described use was in creating a joint understanding among stakeholders. Here, the process of developing the model was seen as the most useful deliverable—it helped decision-makers understand the system well enough to make informed decisions. Models provide a robust method for assimilating relevant information and bridging information gaps that cannot be resolved through other means.

When asked to describe an experience where they were involved in a model-to-decision workflow, most participants said the model played a critical role. Critical roles described included:

Motivating policy development through risk assessment

Facilitating communication, collaboration, and understanding at the science-management-policy interface

Identifying important knowledge gaps for focusing future data collection

Providing inference for processes that cannot be directly measured

Identifying refined resource prioritization strategies

Building knowledge in the early stages of a new event (e.g., COVID-19 emergence) when knowledge of the system is poor

Motivating decision-makers by rapidly building evidence about whether action is needed when experiments are infeasible and decision-makers are otherwise reluctant to act on sparse information

Building agreement among stakeholders on priorities

Quantifying and framing decision risk—providing numbers alongside intuition about the likelihood of outcomes

Managing stakeholder expectations and social licensing for a policy by providing intelligence about how long it might take before changes are observed

Risks and challenges with using models to inform decisions

Participants reported that effective model-to-decision workflows are challenged when decision-makers are not sufficiently informed about the development of models, the evaluation of their robustness, and the interpretation of model-based evidence. Participants thought this lack of understanding can lead to misinterpretation of evidence, an inability to recognize what can and cannot be asked of a model, or misunderstanding of uncertainty, including a false sense of precision. A decision-maker may want to feel confident about the distinction between risk categories, yet:

‘Some models can be very bad at distinguishing unlikely from very unlikely.’ (Policy advisor and decision-maker)

Often decision-makers need intelligence on a relative (or qualitative) scale (e.g., outcome likely to be worse than X or better than Y), for example,

‘It’s [referring to model utility] on the scale of ‘government must act’. And so that was when it was super useful. And to me, the actual number, whether it was 5000 or 80,000, didn’t matter, but it was the fact that it was in thousands that mattered. It made it material. It was a ‘way worse than flu’. So we needed to act. That’s where the model came in.’ (Science advisor)

And to use the qualitative scale of effects to evaluate potential policy outcomes, for example,

‘They [models] are incredibly useful to show worst case and best case scenarios and illustrate that there are policy levers to pull that can shift you one way or another.’ (Science advisor),

yet scientists may present the information in a more nuanced format that makes it challenging for decision-makers to interpret on the time scale over which they need answers. Also, with increasing misinformation (Scheufele and Krause, 2019) and scientific specialization, it is challenging to bridge information gaps between the two cultures of scientists and policy-makers (Snow, 2012). The average level of scientific literacy among decision-makers, combined with increasing scientific misinformation, obscures their ability to consider scientific evidence efficiently and effectively.

‘The huge problem these days is this vast misinformation blogosphere [blogs considered collectively as an interconnected source of information] of nonsense [speaking about information that is not scientific or from rational thinking, but readily accessible in people’s immediate realm of information] that’s just got worse with COVID.’ (Science producer)

Another risk described by participants is that a user can refine a model to deliver a desired output.

‘You can construct a model and parametrize it to give you the decision that you want.’ (Operational Manager)

This can be done by inserting narrow ranges of parameter values that meet a desired outcome (e.g., prior distributions in a Bayesian analysis or parameter ranges in a sensitivity analysis of a computational or mathematical model), excluding processes one does not want effects from, or interpreting uncertainty in a way that supports one's agenda (e.g., biased presentation of model results and conclusions about untested effects; Pepin et al. 2017; Stien, 2017). Users who do not understand how using models this way violates the scientific process are susceptible to unintentional misuse, while users who understand this well can be susceptible to intentional misuse; a minimal sketch of this mechanism follows the example statements below. Example statements included:

‘I guess the key thing is making sure that decision makers are not cherry picking information they want to use – this is where one of the risks arises. …if you’ve got different models that have different results’ (Policy decision-maker)

or

‘…it felt like some of the policy people had learned enough math epi terms or had enough of a feel for roughly how things worked that they could sort of form their own assumptions about if I want to get to here, this is how the model might be able to create a narrative that gets me there.’ (Science producer—modeler)

or

‘There’s probably been examples where maybe models have been developed in order to try and push a particular policy agenda, which can be problematic or models that presuppose their own conclusions. I think you see that sometimes as well. There are certainly bad ways of building a model and that’s a risk.’ (Science producer—modeler)

or

‘There’s always a conflation of ‘this is what the model says versus this is what you parameterize the model to say’. Isn’t there? And that really worries me more and more as I carry on in my career.’ (Policy advisor and decision-maker)
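To make this mechanism concrete, the following is a minimal, hypothetical sketch (in Python, with invented values; it does not reproduce any participant's workflow) of how a deliberately narrow prior in a simple conjugate Bayesian analysis can dominate the data and deliver a pre-chosen conclusion:

```python
import numpy as np

def posterior_normal(prior_mean, prior_sd, data, obs_sd):
    """Conjugate normal-normal update: posterior for an effect size."""
    prior_prec = 1.0 / prior_sd**2
    data_prec = len(data) / obs_sd**2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * np.mean(data))
    return post_mean, np.sqrt(post_var)

rng = np.random.default_rng(1)
# Hypothetical field data suggesting the control measure has essentially no effect.
data = rng.normal(loc=0.0, scale=1.0, size=20)

# An honest, weakly informative prior lets the data speak.
print(posterior_normal(prior_mean=0.0, prior_sd=10.0, data=data, obs_sd=1.0))

# A narrow prior centred on the "desired" effect (-0.8) dominates the same data,
# yielding roughly that effect regardless of the evidence.
print(posterior_normal(prior_mean=-0.8, prior_sd=0.05, data=data, obs_sd=1.0))
```

With the weakly informative prior the posterior tracks the data (an effect near zero); with the narrow prior the posterior sits near the 'desired' effect of -0.8 almost regardless of what the data say, which is the pattern participants described when a model is parametrized to give the decision one wants.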

The risk of unintentional and intentional misuse affects trust in models, potentially creating mistrust among both those who understand the process and those who do not. These concerns suggest that it is important to develop standards for model-to-decision workflows, including transparent and professional expert peer review, to reduce misinterpretation and misuse. One participant noted:

‘Often there was the pressure for the scientist or the researcher to be able to justify how they had arrived at conclusion A to the Nth degree and yet the policy person could ultimately choose to ignore that science and just do whatever they felt was appropriate, but not the same kind of equivalent, robust, and transparent process around their decision making.’ (Operational Manager)

Several participants thought models should not be developed and used for real-time forecasting during emergencies—the appropriate use of models is for preparedness, when time is available for appropriate rigor and consideration (i.e., during non-emergency situations). One reason is that scientists are eager to provide real-time intelligence and to advocate for their own approaches, even though each approach carries uncertainty and potentially conflicting results. For example,

‘On the day we say we’ve got an outbreak, every university in NZ and some offshore will start modeling our outbreak. I’d like us to be a little bit ahead and to have had some of these conversations because I’m not gonna be in a conversation with 10 academics who all think their model is God’s gift…[while managing an emergency]’ (Policy advisor and decision-maker)

This suggests that staff dedicated to preparedness modeling may increase the use of models in management and policy decisions. Some participants thought real-time forecasting might also open the door to selecting models that provide confirmation bias or support political agendas that might not align with robust science, eroding trust in the quality of model-based science. Participants also said models may have too much uncertainty to provide the level of precision for which the public would hold decision-makers accountable when different numbers play out in reality. It can be counterproductive to put

‘…scary numbers with high uncertainty in the public domain’, (Policy advisor & decision-maker)

And equally counterproductive to provide numbers with the wrong level of detail. For example,

‘And once you tell people that probably X number of million people will die, it’s very hard for them to think about other risks, right? So how do we responsibly portray what models are actually able to predict in the band [speaking about range of statistical uncertainty] and what the dependencies [potential downstream impacts] are on prioritization [of actions that may be taken]’. (Policy advisor & decision-maker)

Participants warned that modeling should be one source of evidence considered alongside other sources. However, some people may value numerical comparisons more than qualitative information, and numerical comparisons can give the semblance of clear-cut answers, when reality is more complex. Thus, participants warned that modelers should not be

‘…let out of their pens unaccompanied’, (Science producer—subject matter expert and Policy advisor)

meaning that subject-matter-expert practitioners (e.g., public health specialists, physicians, veterinarians, wildlife managers) should be working hand-in-glove with modelers to shape their questions and interpret the results in terms of policy, social, or health impacts. For example,

‘So that needs to be clear…just technical experts [speaking about modelers] aren’t decision makers and that’s mostly appropriate because the decision makers have to worry about a whole lot of stuff that’s not technical, right? So they have to balance up hard choices. And in a way, that’s why I think when models go to decision makers, then you start having bizarre outcomes because they’re interpreting them as another equally weighted, but uninterrogated source of advice. So the model says this and so they must be okay, whereas if the model goes to technical experts [speaking about health practitioners/subject-matter-experts] who then formulate advice that’s in a conversation with decision makers, there’s some kind of interpretation of what the model said, which is where I think my main focus has been.’ (Policy advisor and decision-maker)

Several participants referenced the 2001 outbreak of foot-and-mouth disease in the United Kingdom as an example of why it is important to emphasize communication and not to have modelers translating the health policy intelligence from modeling results (e.g., modelers directly making management recommendations to decision-makers without filtering the model results through subject-matter experts in health management for the system being modeled; Kitching et al. 2006; Mansley et al. 2011). Even though NZ has set up effective model-to-decision workflows in some sectors, there is a legacy effect from both past experiences and ongoing hurdles that have made it difficult to build trust.

Participants highlighted that models can reveal contextual details that affect other policy-making challenges because models can identify unanticipated phenomena. Policy impacts can vary dramatically for different groups of people or animals. For example,

‘One of the big things was around if you move anything from the mandatory to being optional in terms of protections, then people who have more privilege can take those protections. They can stay home when case numbers are up, they can buy better masks, all the rest of it. Meanwhile, people who are most vulnerable get infected. Those people are then removed from your susceptible populations. That’s doubly benefiting the people who are more privileged. So they benefit first from their initial privilege and then because they infected these other people who are now recovered and at least in the short-term, aren’t going to infect you anymore.’ (Science producer—modeler)

When variation in policy effects on different groups is accounted for, policy decision-making becomes more complex and can take longer to achieve consensus among stakeholders.

Effective use of models

When asked to describe their experience in model-to-decision workflows, participants mentioned a variety of applications, ranging from 'What transportation policies should we implement to improve human and environmental health?' to 'What is the local probability of freedom from bovine tuberculosis?' We summarize two examples participants described to show common themes that led to the use of the models in practical decision-making (Boxes 1 and 2). In both cases, individuals in decision-making roles successfully championed the use of models and acted as ‘navigators’ (knowledge brokers) of the models into the decision-making ecosystem. The drive to champion models often came from long-term trusted relationships between the decision-maker and the scientists who develop models. The decision-maker who championed the models invested substantial effort in regular communication between modelers and decision-makers to ensure the models addressed context-specific needs and constraints and provided a trusted source of evidence. Thus, trusted relationships and dedicated knowledge brokers are essential for the successful uptake of modeling intelligence in policy decisions.

Box 1 An example where participants described their experience with using predictive models for informing decisions during the COVID-19 pandemic response in humans

SARS-CoV-2—humans: Roughly 27% of participants reported playing a role in the model-to-decision workflow for control of COVID-19 in humans in NZ (see Stats NZ, 2021). Key objectives for the models included comparing control policy scenarios (mathematical and computational models) and inferring the source of new cases for contact tracing and identifying and managing local introductions or outbreaks (phylodynamic models of genomic data) (Douglas et al. 2021a, 2021b; Jelley et al. 2022; Mccaw and Plank, 2022). Modelers in NZ reached out to government agencies to ask how they could help. Science advisors from the Office of the Prime Minister’s Chief Science Advisors (OPMCSA, 2024) saw the value in leveraging the research community and developed a system to integrate modeling expertise. Modelers rapidly leveraged available epidemiological models of SARS-CoV-2 in other countries and adapted them to the NZ context to address decision-maker needs. This application was followed by adaptation of the models to inform control policies (e.g., when to impose local lockdowns and how long quarantine should last) throughout the pandemic. We interviewed participants involved in the model-to-decision workflow, including modelers and other science producers, science and policy advisors, operational managers, and policy decision-makers, to understand what led to the acceptance and use of model-based evidence by decision-makers.

Models were viewed as important tools for informing the evidence landscape that supported policy decisions during the pandemic for several reasons. First, NZ decision-makers had recently leveraged models in a similar capacity to inform control policies for the introduction of Mycoplasma bovis in the country (Ministry for Primary Industries, Manatū Ahu Matua 2023). Second, they integrated a modeling team into the incident command system (ICS). The ICS communication structure involved a dedicated communicator between the modeling team and policy decision-makers. Communication occurred at least weekly early on, sometimes daily, where modelers provided guidance about resource needs and timelines for each decision-maker request, explained uncertainties in results, and provided guidance on the appropriate selection of models for each question. Decision-makers were able to specify high-priority questions. Third, public health experts were integrated into the communication structure through an advisory group to interpret modeling results through a public health lens. Participants felt this helped to avoid the nonsensical public health policy decisions that can occur when evidence from models is interpreted directly without expertise from a practitioner in the appropriate health field. Fourth, decision-makers first asked a very clear-cut, simple question: “Should we continue with an elimination strategy?” NZ effectively implemented an elimination strategy on 23 March 2020 with an announced move towards a national stay-at-home order (‘lockdown’) and other measures aimed at eliminating COVID-19 (Baker et al. 2020a). Elimination is a well-established approach for ending transmission of endemic infectious diseases but had not previously been used for pandemics, where mitigation was the response strategy built into preparedness plans (Baker et al. 2020b).

Modelers first started tackling this question by estimating the consequences of a poorly controlled pandemic wave (Hendy et al. 2021). They subsequently developed a framework that could query what the course of the pandemic could look like if an elimination strategy was continued versus a mitigation strategy (James et al. 2020). Starting with a feasible, simple question helped to develop a functioning workflow that was viewed as useful by users, which led to greater demand. The validity of this work was reinforced by the successful elimination of COVID-19 from New Zealand, 103 days after the first identified case (Baker et al. 2020b). Subsequent modeling also showed that making this decision early in the pandemic produced better outcomes than if it had been delayed (Binny et al. 2021). Fifth, trusted government employees with strong quantitative skills were seconded from The Treasury into the COVID-19 modeling teams to act as knowledge brokers. These individuals provided two important elements: (1) a trusted source of expert oversight and (2) knowledge of the national government context to facilitate communication. Sixth, the Ministry of Health conducted regular surveys of public perspectives to understand the perceived impacts of different policies and used the information to specify constraints in formulating questions for modelers. This example mainly concerns the beginning of the pandemic (~first 6 months of 2020). More nuanced policy questions were addressed later during the pandemic (e.g., Mccaw and Plank, 2022; Binny et al. 2021). Participants said that the successful incorporation of models early on to address simple, pressing questions helped to demonstrate their value for later uptake (good return on investment) and provided infrastructure that could be readily leveraged (accessibility).
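The scenario-comparison role described in Box 1 can be illustrated with a deliberately simplified sketch. The discrete-time SIR model below (in Python) is a toy with invented parameters; it is not the framework used by the NZ teams (e.g., Hendy et al. 2021; James et al. 2020), but it shows how the same model structure can be run under a 'mitigation' versus an 'elimination' policy assumption to compare outcomes:

```python
import numpy as np

def sir_cumulative(beta, gamma, n_days, pop, i0):
    """Discrete-time SIR model; returns cumulative infections after n_days."""
    s, i = pop - i0, float(i0)
    cum = float(i0)
    for _ in range(n_days):
        new_inf = beta * s * i / pop   # new infections this day
        new_rec = gamma * i            # recoveries this day
        s -= new_inf
        i += new_inf - new_rec
        cum += new_inf
    return cum

# Illustrative values only (roughly NZ-sized population, 10-day infectious period).
pop, i0, gamma, baseline_beta = 5_000_000, 20, 1 / 10, 0.25

# Two stylised policy scenarios: moderate vs strong reduction in transmission.
for label, reduction in [("mitigation", 0.4), ("elimination", 0.8)]:
    total = sir_cumulative(baseline_beta * (1 - reduction), gamma, 180, pop, i0)
    print(f"{label:12s} cumulative infections after 180 days: {total:,.0f}")
```

In a real workflow, as participants emphasized, such comparisons would be read on a relative scale (thousands versus tens of thousands) rather than as precise predictions.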

Box 2 An example where participants described their experience with using predictive models for informing decisions relating to bovine tuberculosis biosecurity in cattle and possums

Mycobacterium bovis—cattle/possum interface: Another 22% of participants described the use of models for informing control policies for bovine tuberculosis (bTB) in NZ. In contrast to COVID-19, bTB rarely infects humans in NZ but poses an ongoing threat to cattle due to its persistence in invasive, non-native brushtail possums (Trichosurus vulpecula) (Barron et al. 2015; Gormley et al. 2016; Nugent et al. 2015). NZ has been managing bTB in domestic cattle since the mid-to-late 1900s (Nugent et al. 2015; Livingstone et al. 2015). Currently, the mission of OSPRI (an organization that includes TBfree NZ Ltd, a shareholder and government-funded organization) includes bTB elimination. Models are used as standard practice for estimating local bTB freedom during control operations: for evaluating local bTB status in possum populations and planning surveillance design (Anderson et al. 2013, 2017), and for planning how much and how often possum control (i.e., killing) is necessary for reaching bTB freedom in local areas (Ramsey and Efford, 2010), including efficient strategies for resource allocation (Gormley et al. 2016). This involves a suite of models that have been developed, iteratively refined, and used routinely over the years (Nugent et al. 2018).

We interviewed both modelers and bTB operational managers to understand how modeling intelligence is integrated into bTB operational decisions. First, in the early years of the bTB elimination program there was a decision-maker who believed evidence from models was necessary for effective strategic planning and program evaluation. This decision-maker spent time building relationships with modelers and championed the approach. Second, the trusted relationship with frequent communication allowed the decision-maker to shape model development with their expertise and knowledge (similar to the ‘co-design’ concept mentioned in Box 1), which enhanced confidence in the approach. Third, long-term continuity in the use of modeling tools has allowed them to be tested and adapted against their predictions over time and has shown that the predictions met the needs of decision-makers. Fourth, the initial champion of modeling tools incorporated these tools as a ‘rule of business’ and trained subsequent operational managers on the function and utility of the tools.
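As a concrete but simplified illustration of the 'probability of local disease freedom' estimates mentioned in Box 2, the sketch below applies the generic proof-of-freedom calculation used in surveillance design: after each round of negative surveillance, the probability an area is free is updated from a prior and then discounted for possible re-introduction. The numbers and the annual structure are invented for illustration and do not represent the specific models of Anderson et al. (2013, 2017):

```python
def prob_freedom(prior_freedom: float, surveillance_sensitivity: float) -> float:
    """Posterior probability that an area is disease-free after negative surveillance.

    surveillance_sensitivity is the probability the surveillance round would have
    detected infection, had it been present at the design prevalence.
    """
    p_infected = 1.0 - prior_freedom
    # A negative result arises either because the area is truly free, or because
    # it is infected and the surveillance missed it.
    return prior_freedom / (prior_freedom + p_infected * (1.0 - surveillance_sensitivity))

# Accumulate evidence over successive annual surveillance rounds (illustrative values),
# allowing a small annual probability that infection is re-introduced.
p_free, reintroduction_risk = 0.5, 0.05
for year, sse in enumerate([0.6, 0.7, 0.8], start=1):
    p_free = prob_freedom(p_free, sse)
    p_free *= 1.0 - reintroduction_risk
    print(f"Year {year}: P(local bTB freedom) = {p_free:.3f}")
```

Each negative surveillance round increases confidence in local freedom, which is the kind of intelligence participants described using to plan how much and how often possum control is needed.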

How models become trusted

There are many ways to build models (Silberzahn et al. 2018), each with advantages and disadvantages, so what gives professionals confidence in using results from models? Participants described a variety of metrics that gave them confidence. The most frequent answer was having a trusted interpersonal (often collaborative) relationship between scientists (and/or science advisors) and decision-makers (Gluckman et al. 2021). Confidence also grew when the model predictions proved accurate (validation against eventual real-life outcomes) and when the decision-maker was

‘brought on the journey of model development’ (Operational manager)

And

‘understood what was under the hood.’ (Operational manager)

It was also important that the model and results were peer-reviewed, rigorous statistical evaluation was conducted, and well-established scientific theories were applied (trust in the scientific method). It was equally important that the model led to a high-impact policy change (added value to the decision-maker's bottom line). One participant said that having models was considered essential by other colleagues or stakeholders, and thus the model was trusted to meet a variety of demands, for example,

‘…having them [models] and the answers they [models] provide was considered a hundred percent necessary to the investment’ (Operational manager)

Models for shaping versus supporting decisions

Models can be applied before a decision is made to shape decisions or post hoc to evaluate decisions. Most participants felt the use of models a priori is ideal but post hoc use can be valuable and appropriate when decisions need to be made more quickly than the science can be produced. Retrospective evaluation of decisions with models was seen as advantageous for providing support for further investment, providing scientific evidence for high-stakes decisions, or evaluating whether the decision should be revised to improve outcomes. Participants said it’s useful to have a model for evaluation as conditions change and to identify alternatives. However, many thought there are special considerations and increased risk for model development as post hoc decision intelligence, for example,

‘I think the risk in using a model to justify a past decision is that models have bias and scientists have biases. And I think if you’re not careful about those biases and the assumptions that you make in a model, you’d have to be quite disciplined in just making sure that you are testing the full range of model assumptions or options.’ (Science producer—modeler)

Participants said that when modeling for post hoc decision intelligence it is important to understand whether the decision could be changed with additional intelligence. They said more safeguards are needed for conducting the science objectively because of the risk of confirmation bias (Nickerson, 1998), meaning it can be difficult to design the scientific questions objectively and comprehensively when a desired outcome is known. Using multiple independent scientific groups (Li et al. 2017) may be an important safeguard against this risk.

Based on risks previously described regarding decision-makers lacking expertise in modeling, there may be an additional risk of misuse of science relative to other intelligence production techniques because, as mentioned above, one can construct and parametrize a model to produce a desired result. Using models for post hoc evaluation of decisions requires good-faith decision-making. One participant described this as

‘You need to have clean governance over decisions—you need to separate the decision-making from the people who are going to benefit from the decisions’. (Operational manager)

Other participants said that post hoc use of models requires encouraging decision-makers to ask the right (objective) questions with an understanding of what is and what isn’t scientific evidence that aligns with robust scientific practice. Otherwise, evidence claimed to be derived from science is actually misinformation. Examples of bad practice with models relayed by participants included:

‘…models that presuppose their own conclusions’ (Science producer—modeler),

or

‘…a model with that predetermined outcome in mind’ (Science producer)

or

‘…creating the science to support what you want to do—it [the model] shouldn’t be reverse engineered’ (Science producer—modeler),

or

‘…what someone referred to as policy-based evidence. I think the desire for evidence that will support the policy decision you’ve already made is quite high.’ (Science producer and advisor)

Participants agreed it was bad practice to choose a single result or model from a set of models with similar scientific rigor, similar probabilities of being true, and/or similar levels of uncertainty about the truth. Modeling science has developed a variety of methods for objectively combining information gained from multiple models with different uncertainty and divergent sources of data to produce combined and more robust model-based evidence (e.g., value of information analysis, ensemble modeling; Li et al. 2017; Oidtman et al. 2021); a simple sketch of one such approach follows the quote below. Post hoc modeling needs to be about testing the value and effectiveness of a decision and evaluating the potential impacts of alternative decisions, for example,

‘I think as a scientist in a policy making space, it’s incumbent upon you to point out what the evidence says, what the range of evidence is. If a policymaker chooses to cherry pick the bit that suits them, then there is a limited amount you can do about that. But I think that you need to be presenting them, if you’re talking about modeling, presenting the model outputs as they stand and encouraging policymakers to consider asking the right questions and then seeing the answers’ (Science producer and advisor)
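One way to avoid choosing a single result from a set of similarly credible models, as discussed above, is to combine them. The sketch below shows one simple option, an inverse-variance (precision-weighted) ensemble of hypothetical point forecasts, purely for illustration; it is not the specific approach of Li et al. (2017) or Oidtman et al. (2021), and many other combination methods exist:

```python
import numpy as np

# Hypothetical point forecasts (e.g., cumulative cases) and standard deviations
# from three independent models; all values are invented for illustration.
forecasts = np.array([12_000.0, 18_000.0, 15_000.0])
std_devs = np.array([4_000.0, 6_000.0, 3_000.0])

# Precision weighting: models reporting smaller uncertainty receive more weight.
weights = 1.0 / std_devs**2
weights /= weights.sum()

ensemble_mean = np.sum(weights * forecasts)
ensemble_sd = np.sqrt(1.0 / np.sum(1.0 / std_devs**2))  # assumes independent errors
print(f"Ensemble forecast: {ensemble_mean:,.0f} ± {ensemble_sd:,.0f}")
```

Presenting a combined estimate with its uncertainty, rather than a cherry-picked single model, is in the spirit of the practice participants recommended.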

Because political pressure can be high to not ‘rock the boat’ (i.e., change the business rules), and humans have a natural tendency to trust evidence that best matches their understanding (confirmation bias), it is especially important to develop standards for appropriate use of models in post hoc decision-making.

Tension in the scientist–decision-maker relationship

There is tension between professionals working in research-focused organizations and those working in public policy or government-based operational programs. Considering the frustrations expressed by "both sides", collaboration could be improved (Power, 2018). Key frustrations for research-focused professionals include difficulty keeping up with which scientific questions are most helpful to pursue. Reasons included not understanding the government context, little transparency about whom to contact for different issues (leading to relationship-building with someone at the wrong level of government for use of the science in decision-making), and not having the capacity to develop and maintain relationships alongside all the other responsibilities of a research scientist, for example,

‘Mid-career folks are just dropping out at the moment. Going to do other stuff because of the workload at universities. For academic scientists time is a big challenge for them which is a challenge for relationship building. So the kind of thing where I took this really strategic approach [to relationship building], but also using those informal connections, I think for a lot of academics, they just don’t have time to do that.’ (Science producer—modeler)

Participants also said there is high turnover in government positions, which leads to researchers investing less time in building relationships because when collaborators move positions, the knowledge and relationships are lost, for example,

‘One of the biggest challenges in New Zealand is most government agencies’ staff change their jobs like they change their underwear. As a result, they move around way too quickly and they never actually get anything finished. And you’re always starting again. Not just relationships, but knowledge.’ (Science producer—modeler)

Additionally, participants said government professionals can be unhelpful in connecting researchers in a meaningful way, for example,

‘We’ve got data, we’ve done all the work for you, we want to engage. We’re reaching out to them and really not getting any kind of feedback or any response from people there.’ (Science producer)

One participant said the solution to improving relationships involves actively discussing and planning for

‘…how to go about building an ecosystem where there’s a richer, more resilient network of relationships between researchers and policy makers so that policy makers can quickly take soundings from the research community in a very informal way and build those relationships that enable deeper pieces, whether the scientists are really coming together with a more proactive, long-term evidence base for more strategic work.’ (Science advisor)

Participants expressed that policy-making processes take a very long time, with numerous reviews that involve input from a variety of stakeholders. An original policy proposal that was well justified by scientific evidence can be refined repeatedly by different people so that the rationale is changed or lost altogether. It can be difficult to bring outside modelers in later in the process, making it important to have involvement from the beginning and a clear mechanism for incorporating new evidence. Decision-makers admitted that researchers often understand so little about policy processes that they cannot bridge the gap to work with decision-makers. Potential solutions described by participants included increasing opportunities for scientist secondments into government contexts (Gluckman et al. 2021), such as AAAS Science & Technology Policy Fellowships in the USA for early-career researchers or science advisor roles for those more senior.

Modeler (or scientist) behavior can also inhibit collaboration. For example:

‘It doesn’t really help the situation—saying I told you so when it does actually happen. That’s the time when we roll up our sleeves and see, how can we minimize the damage that is associated with this?’ (Science advisor)

Thus, the attitude of being patronizing or unsupportive when an adverse event occurs is counterproductive to engagement and shows a lack of sensitivity or awareness of the position of other professionals. Another comment was that scientists should:

‘park the passive aggression, and listen to hear rather than listen to respond’ (Science advisor),

meaning that the model-based evidence contributes only in part to the solutions and that modelers must be constructive, and respect the contributions from others. For example,

‘Technical experts know what they know, but they very rarely know the context in which that information needs to feed into. They tend to be passive aggressive. It’s a common trait within scientists. And so when their evidence isn’t listened to, they tend to be highly critical of the people that didn’t listen to their evidence, even though their evidence was listened to. It just was other decisions, other pieces of information fed into the decision making process.’ (Science advisor)

The contribution from any one modeler will be considered in a team environment that may or may not have the capacity for direct communication between the modeler and the decision-maker. Counterproductive behaviors can be minimized with more transparent communication between modelers and decision-makers about the constraints experienced by each group and opportunities for engagement.

Relatedly, it’s important for researchers to come to the table with the attitude that

‘modeling is only one piece in the puzzle’ (Policy advisor and decision-maker)

For example, participants made statements such as

‘The fact that science only informs policy, it will never set policy. It can’t set policy. When you think about it, we have the Treaty of Waitangi, a legal document that must be overlayed over the top of everything that we decide within the country. So that’s got to get an equal weighting to science in a lot of those discussions.’ (Science advisor)

And

‘Policy making is not a pure process. It’s messy, and a values process. It’s a judgment process ultimately, and evidence and data inputs and modeling are one group of inputs into it. But there’s a whole lot of other inputs and lenses that are brought to bear. Of course, governments, for the most part, set out to make the best decisions they can for their populations. We had the good fortune of being able to use some modeling to help shape and inform decisions and policy options.’ (Policy advisor and decision-maker),

meaning that the model results contribute only in part to the solutions. For this reason, some decision-makers are replacing the term ‘science-based decisions’ with ‘science-informed decisions’ (Gluckman et al. 2021).

Finally, similar to research-focused professionals, decision-makers are usually overburdened, with little flexible time and very tight timelines for decision-making. Participants said that these constraints generally do not align well with, for example, the participation of graduate students. Limitations for decision-makers include that most graduate students cannot solve problems fast enough, nor can they communicate the results effectively. Involving graduate students in solutions for decision-makers can lead to needs not being met and can reduce a decision-maker's willingness to involve outside scientists in the future. Yet the involvement of graduate students is part of the business structure for research organizations and trains the next generation of scientists to be more policy-aware when defining scientific directions.

What practitioners recommend for effective generation and use of evidence from models

Consistent with previous work, several participants said one important feature for successful model-to-decision workflows is to have a knowledge broker (Gluckman et al. 2021; Kiem et al. 2014). For example,

‘You need those people who can really walk across the bridge there and turn the outputs into understandable intelligence that paints a picture rather than a series of numbers or figures or whatever. It’s communicating here’s what this overall picture means. That intelligence.’ (Policy advisor and decision-maker)

The knowledge broker is often a science advisor but may be a subject-matter expert who serves as a trusted advisor or navigator to the decision-maker (e.g., health practitioner or health/environment policy expert in our system, Table 1). A good knowledge broker for model-to-decision workflows will be experienced in science communication to decision-makers, be able to quickly convert complex evidence into intelligence, have a strong grasp on quantitative analysis techniques, understand the decision-maker’s context, be objective about consideration of relevant intelligence, and have a trusted relationship with the decision-maker. An example comment describing beneficial traits for knowledge brokers:

‘…bring it back to the honest broker and go remembering that minister has five minutes to listen to you. Five minutes to gather as much as they can from this topic, and then make an important decision. You need to have gone through all of that literature, deduced it all down into one pager and go, if I was in your position on the basis of what I’ve read, I would do this. Now, that’s not an advocate, that’s a person that has actually just developed this balanced understanding of the literature.’ (Science advisor)

Table 1 Synthesis of the actions recommended by participants for an effective model-to-decision workflow in determining management strategies or control policies in One Health sectors.

Being explicit about the roles and responsibilities of team members is important for minimizing miscommunication or unwanted communication. In decision-making for public policy, the role of science producers is to explain the science to the decision-maker, not to the public. The role of a science advisor is to recommend decisions to the decision-maker based on a broad set of intelligence (Table 1). This can be confusing because, for example, NZ's Education Act encourages academic freedom and a role for academics as critic and conscience of society. Because roles are context-dependent, it is important to discuss them when academic scientists engage with decision-makers as science producers.

Decision-making is a values-based process (von Winterfeldt, 2013). Because decision-makers use different criteria for gaining trust in intelligence, it is important to understand the value system used to gain confidence about information quality. This can be done when modelers work with knowledge brokers and/or decision-makers to define the problem. When identifying the decision-maker's objectives, it is important to understand the decision-maker's constraints (e.g., timelines for answers, budget, infrastructure, time they or their knowledge broker have to participate) and to ensure decision-makers understand the scientist's constraints (data quality needs, resources, feasible timelines, the type of answers that can be generated). During NZ's early COVID-19 experience, this exchange was facilitated through a key knowledge broker who set up regular communication among modelers and policy developers in an incident-command-type structure. All participants we interviewed who were part of this workflow commented on how effective this approach was in developing models and appropriate timelines for policy decisions. These conversations are important for determining the level of investment needed for modeling and the appropriate modeling techniques. An ability to clearly understand the problem the decision-maker is trying to solve is a critical skill for building trust and adding value. When done effectively, the modeler and knowledge broker can assist and empower the decision-maker to refine questions in the most effective way for their problem. This approach is synergistic with how participants described their vision for collaboration with Māori—NZ's Indigenous population—to address structural disparities (Box 3).

Making a communication plan to establish the preferred frequency and method of communication can help prevent the inefficient delivery of products, which causes frustration. Establishing early on whom the science producer should work with to access data and get input for model development is important for efficiency.

It is good practice to make sure the decision-maker understands what value modeling can and cannot bring to the problem. This helps to manage expectations and regret in the investment, which can have legacy effects for future investment. For emerging fields, it is important to articulate to decision-makers the long-term benefits of investment in terms of capacity building and readiness. Relatedly, it is useful to understand concerns the decision-maker might have about the modeling and identify a workflow that minimizes those risks. After the model has been developed and used, demonstrating the return on investment is important for continued building of trust in the use of models for future decision-making.

