Best Practices for AI Governance
Several best practices underpin sound AI governance: setting standards (define AI policies for planning), monitoring data (keep inputs accurate and protect privacy), reviewing results (have experts validate AI outputs), and integrating effectively (align AI with current planning processes). When AI systems feature transparent decision-making, they are easier to audit and explain to stakeholders such as customers, regulators, and internal teams.
Effective governance and utilization also depend on the right tooling. First, descriptive tools should provide advanced machine learning for tasks such as customer data analysis, customer segmentation, demand forecasting, and operational optimization to help make informed, efficient decisions. Second, predictive and prescriptive tools should support building models, analyzing data, and generating insights; these suit organizations aiming to use AI in decision-making, including forecasting, risk analysis, and market trends. Third, a data visualization platform should offer AI features such as “Explain Data” and “Ask Data” that let users pose questions in natural language and get instant answers.
Weak AI governance can expose healthcare organizations to serious risks, including breaches of patient privacy, regulatory violations, and unchecked algorithmic bias. These failures may result in investigations, costly fines, or even operational disruptions. Beyond regulatory and legal risks, governance lapses can quickly erode stakeholder trust and damage reputation. Patients, partners, and investors may lose confidence, leading to financial losses as business relationships suffer. To prevent these outcomes, healthcare executives must ensure AI systems are transparent, auditable, and regularly monitored for issues such as poor data quality and limited explainability. Proactive governance protects organizations and supports a culture of accountability and innovation.
Healthcare Leaders Still Needed
AI is a powerful aid, but because strategy involves such complexity, human input will remain integral for the foreseeable future.[11] Strategy is challenging due to the need for multi-level reasoning, contextual awareness, and understanding of human behavior.[12] Current AI models cannot yet perform complex, hypothesis-driven, multi-step reasoning. Healthcare institutions face many intricate analyses (e.g., buyside due diligence on a major clinic acquisition, determining whether to close a hospital, or deciding whether to significantly expand a care unit). Careful review by humans will still be required to validate calculations and ensure that any incorrect assumptions made by the model are identified and amended.[13]
When used to support humans, AI poses risks such as unrepresentative or skewed data, a lack of clear explanations, and false but convincing output (i.e., hallucinations). In particular, generative AI models exhibit inherent biases associated with the datasets and natural language tasks used during pre-training. It is therefore important to review the diversity and representativeness of the training datasets behind these systems. Systems trained on biased or incomplete data can generate discriminatory or inaccurate results, leading to poor decisions. Ensuring high data quality through proactive measures prevents costly mistakes and improves system reliability.
In terms of ongoing human engagement and supervision of AI, organizational leaders should consider several key points.[14] First, proprietary data access becomes more critical. Second, increased data makes separating valuable information from noise essential. Third, with easier insight generation, executive-level synthesis gains importance. Fourth, strong strategy development processes matter more than the quality of insights. Fifth, strategy teams must invest in technology to build and access proprietary data ecosystems. And in the final analysis, it will be the responsibility of healthcare leaders to make the ultimate difficult strategic decisions. As McKinsey notes, “AI won’t change the need for leaders to demonstrate strategic courage by committing to big moves.”[15]
Power of Averaging Multiple Perspectives
Strategic foresight, or predicting the results of strategic decisions, is central to key strategy theories.[16] Researchers have focused on how both individual and combined predictions influence the evaluation of a strategic decision.[17] Prior research has shown that aggregating many imperfect predictions can improve the overall prediction by offsetting errors.[18][19] This benefit of aggregation is often called the wisdom of crowds.
Previous research on human evaluators has examined the effects of differences in their expertise, cognitive approaches, and demographic characteristics.[20] Aggregating multiple predictions improves accuracy when their errors partially cancel out. In regression tasks, positive and negative errors balance each other, while in classification tasks, majority voting ensures most correct predictions prevail. Aggregation works best with a larger and more diverse set of predictions. It can occur through many techniques, including averaging, majority voting, and hybrid approaches. A widely adopted method involves selecting evaluators with varied backgrounds.
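The two aggregation rules named above can be sketched in a few lines of Python. This is an illustrative example, not code from the research cited; the forecast numbers and vote labels are hypothetical.

```python
from statistics import mean
from collections import Counter

def aggregate_regression(predictions):
    """Average continuous forecasts so positive and negative errors offset."""
    return mean(predictions)

def aggregate_classification(votes):
    """Majority vote: the label chosen by the most evaluators wins."""
    return Counter(votes).most_common(1)[0][0]

# Hypothetical demand forecasts scattered around a true value of 100:
# individual errors range from -8 to +10, but they cancel in the average.
forecasts = [92, 104, 98, 110, 96]
print(aggregate_regression(forecasts))  # 100.0

# Hypothetical go/no-go recommendations from five evaluators.
votes = ["expand", "expand", "hold", "expand", "hold"]
print(aggregate_classification(votes))  # expand
```

The same pattern extends to the hybrid approaches mentioned above, for example averaging within evaluator groups before a final vote across groups.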
Researchers compared rankings from large language models versus human experts in analyses of 60 business models.[21] They find that generative AI can produce inconsistent and biased evaluations, but its aggregated rankings are similar to those of humans. This study shows that generative AI offers useful predictions for strategic decision making. Single evaluations from generative AI can be inconsistent or biased. However, combining multiple assessments from different LLMs, prompts, or roles produces results similar to human experts. This method efficiently offers healthcare executives strategic insights across domains and can supplement human judgment.
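One simple way to combine multiple assessments into a single ranking, as described above, is rank averaging: each evaluation (a different LLM, prompt, or assigned role) ranks the candidate options, and the final ordering sorts candidates by mean rank. The sketch below uses hypothetical evaluator names and rankings, not data from the cited study.

```python
from statistics import mean

# Hypothetical rankings of four business models (A-D) from three separate
# evaluations (e.g., two different LLMs and one prompt variant). Lower = better.
rankings = {
    "llm_1": {"A": 1, "B": 2, "C": 3, "D": 4},
    "llm_2": {"A": 2, "B": 1, "C": 4, "D": 3},
    "prompt_variant": {"A": 1, "B": 3, "C": 2, "D": 4},
}

def aggregate_rankings(rankings):
    """Order candidates by their average rank across all evaluations."""
    candidates = next(iter(rankings.values())).keys()
    avg_rank = {c: mean(r[c] for r in rankings.values()) for c in candidates}
    return sorted(avg_rank, key=avg_rank.get)

print(aggregate_rankings(rankings))  # ['A', 'B', 'C', 'D']
```

Note that the individual evaluations disagree (llm_2 prefers B to A), yet the aggregate settles on a stable ordering, which is the behavior the study reports for combined LLM assessments.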
When applying the wisdom of crowds, we need both scale and diversity in predictions.[22] First, diversity means predictions vary from one another. By combining different predictions, errors can be balanced out, so optimistic forecasts offset pessimistic ones for continuous outcomes. Without diversity, group predictions offer little advantage over individual ones. Second, scale is the number of predictions used in aggregation. Using many predictions increases the chance of offsetting errors. If too few predictions are chosen, they may all be overly optimistic or pessimistic, reducing aggregation effectiveness.
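The interplay of scale and diversity can be demonstrated with a small simulation. The numbers below (a true value of 100, biases of ±5, noise of 10) are illustrative assumptions chosen only to make the effect visible: a diverse panel's average converges toward the truth as the panel grows, while a homogeneous panel keeps its shared bias no matter how many predictions are added.

```python
import random
from statistics import mean

random.seed(0)
TRUE_VALUE = 100.0

def forecast(bias, noise=10.0):
    """One imperfect prediction: truth plus a systematic bias plus random noise."""
    return TRUE_VALUE + bias + random.gauss(0, noise)

def aggregation_error(n, biases):
    """Absolute error of the averaged forecast from n evaluators,
    each drawing a systematic bias from the `biases` pool."""
    predictions = [forecast(random.choice(biases)) for _ in range(n)]
    return abs(mean(predictions) - TRUE_VALUE)

diverse = [-5.0, +5.0]   # optimists and pessimists cancel on average
homogeneous = [+5.0]     # everyone shares the same optimistic tilt

for n in (3, 30, 300):
    print(f"n={n}: diverse error {aggregation_error(n, diverse):.2f}, "
          f"homogeneous error {aggregation_error(n, homogeneous):.2f}")
```

With a diverse pool, the error shrinks as n grows; with the homogeneous pool, it stays near the shared bias of 5, which is why scale alone is not enough.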
Actionable Recommendations for Healthcare Executives (Next 6–12 Months)
1. Assess AI Readiness and Data Infrastructure
- Evaluate current data assets, technology, and workforce capabilities to identify gaps in AI readiness.
- Invest in secure, high-quality, and diverse data sources for robust AI-driven insights.
2. Establish Clear AI Governance and Policies
- Develop transparent AI governance frameworks for data privacy, bias mitigation, and regular auditing.
- Engage cross-functional teams to oversee AI integration and ensure alignment with organizational goals.
3. Pilot AI-Driven Strategic Planning Tools
- Launch pilot projects using generative AI for scenario planning, market forecasting, and risk management.
- Refine processes, measure ROI, and build internal expertise through these pilots.
4. Cultivate Diverse AI Perspectives
- Aggregate insights from multiple AI models, prompts, and roles to reduce bias and improve decision quality.
- Encourage collaboration between human experts and AI systems to validate and interpret recommendations.
5. Invest in Executive Education and Change Management
6. Monitor, Measure, and Refine
By taking these steps in the next 6–12 months, healthcare executives can accelerate AI adoption, enhance strategic decision-making, and position their organizations for sustainable growth and improved patient outcomes.
Conclusion
The rapid integration of generative AI offers healthcare leaders a significant opportunity to enhance strategic planning and decision-making amid increasing complexity and uncertainty. It facilitates forecasting, scenario modeling, competitor simulation, and strategy formulation, enabling healthcare executives to evaluate assumptions and adjust strategies in real time. AI can synthesize a wide array of data sources (such as financial metrics, regulatory guidance, market intelligence, and stakeholder input) and convert them into actionable insights that inform flexible, evidence-based leadership.
To fully realize these benefits, disciplined implementation by health systems and hospitals is essential. An effective AI-supported strategy requires both sufficient scale and diversity in predictions. Diversity ensures that AI-generated perspectives are meaningfully varied, which allows errors to offset one another and mitigates systematic bias. Scale amplifies this effect by increasing the reliability of aggregated insights. Absent adequate diversity, aggregation delivers limited benefit; without sufficient scale, too few predictions remain for their errors to offset one another. It is therefore crucial for healthcare executives to move beyond single-model outputs and intentionally cultivate a range of independent AI perspectives across different models, prompts, and roles.
When applied in this manner, generative AI serves to augment, rather than replace, executive judgment. Aggregated AI insights can improve strategic foresight, support comprehensive risk management, and elevate the quality of complex decision-making, all while maintaining human accountability and ethical oversight. As healthcare organizations continue to embed AI into their operations, those embracing a thoughtful, aggregation-driven approach will be better equipped to navigate uncertainty, sustain competitive advantage, and enhance both organizational performance and patient outcomes. This will benefit patients, their families, clinicians, other stakeholders, and the organization itself.