Clinicians and AI: A Conversation with Dr. Jay Anders on Future Healthcare
In an era where technology permeates nearly every aspect of our lives, the healthcare industry is no exception. Artificial intelligence (AI) promises faster diagnoses, more efficient care coordination, and improved patient outcomes, and a growing body of research is exploring that potential. Yet, as AI tools and large language models (LLMs) continue to gain attention, many medical professionals are left pondering how to incorporate these innovations into an already complex healthcare environment without losing the crucial human touch.
Recently, we had the opportunity to sit down with Jay Anders, MD, a seasoned internist and Chief Medical Officer who has spent decades in roles spanning private practice, information technology, and medical leadership. He has been at the frontlines of healthcare’s digital transformation since the early 2000s, providing both clinical and administrative perspectives on emerging technologies. In our conversation, he offered a candid view of AI’s promise, its pitfalls, and how healthcare leaders might strike the best balance between technology and compassionate patient care in this AI-influenced clinical world.
From Paper Records to Digital Frontiers
Dr. Anders’s journey in healthcare began in a multi-specialty group practice, where he spent 20 years as a general internist and took on administrative duties as president and chief medical director. His first foray into healthcare IT was spurred by a simple logistical challenge: How do we stop transporting paper charts between offices?
His passion for healthcare IT led him to become the Chief Medical Officer (CMO) of multiple organizations, both on the payer side and within the health IT field, including stints at Christie Clinic, InteGreat, Med3000, McKesson, and eventually his current position, where he has spent over a decade exploring advanced technologies for health information management at Medicomp Systems, Inc. This experience has given him unique insights into the relationship between AI and clinicians, particularly in the realm of electronic health records and telemedicine.
A Tool, Not a “Thinker”: Understanding AI’s Real Role
When discussing AI, one of Dr. Anders’s first points is the need to demystify what large language models (LLMs) and other AI tools actually do. This understanding is crucial for both clinicians and patients as they navigate an increasingly AI-influenced healthcare landscape.
Although AI does not think in the human sense, it excels at certain repetitive, data-heavy tasks and can generate plausible-sounding responses. AI algorithms have shown promising results in diagnostics and clinical decision support. However, as Dr. Anders warns, there is a danger in assuming that AI’s convincing language equates to correctness.
This oversight, he emphasizes, lies at the heart of the clinician’s role in an AI-driven world. Just as no physician would accept a new medication without understanding its clinical studies, so too must they rigorously evaluate AI output to ensure it meets standards of quality and accuracy. This critical evaluation is essential for building trust in AI within the healthcare community and for addressing the ethical questions raised by AI-enabled clinical decision support systems (CDSS).
Preserving the Patient-Clinician Relationship
The practice of medicine, according to Dr. Anders, is far more than diagnosing conditions and prescribing treatments. Much of a clinician’s expertise lies in subtle observations—body language, emotional state, lifestyle context, and more. This human element is crucial when considering AI’s impact on medical decision-making and personalized medicine.
He worries that if AI tools are deployed haphazardly, especially in settings like chatbots making independent decisions, the all-important human element could be diminished. The art of medicine requires empathy, judgment, and the responsibility that comes with directly interacting with a living, breathing human being—none of which can be replaced by an AI’s text predictions.
Instead, Dr. Anders envisions AI as a complementary tool, providing clinicians with immediate access to vast amounts of data they can then contextualize with their professional expertise. This vision aligns with the concept of human-AI collaboration, where AI’s ability to identify patterns in large datasets, sort through clinical histories, or flag unusual findings can free up clinicians to do what they do best: treat patients holistically. This approach is particularly promising in the field of AI-powered patient monitoring, where continuous data collection can enhance personalized care strategies.
Fostering Trust in AI: “We Have to Train It Well”
One of the greatest barriers to widespread AI adoption is the question of trust. Can clinicians trust an AI’s outputs if its training data is incomplete, inaccurate, or biased? According to Dr. Anders, trust in AI is built through:
- Transparent Data Sources – “You’ve got to trust the data you’re giving it,” he says. “If 50% of these devices are trained on artificial data—made-up data—clinicians are going to push back immediately.”
- Validation & Testing – AI tools must be rigorously validated in real-world scenarios before being trusted for clinical decision-making. This verification process, paired with transparency about how a model was built and tested, is central to earning clinician confidence.
- Human Oversight – Ultimately, a human clinician should take responsibility for care decisions and confirm AI-generated suggestions. This oversight is a key component of clinician-AI collaboration and ensures AI accountability.
Dr. Anders recommends leveraging AI’s strengths in non-clinical areas first, such as scheduling, materials management, and operational logistics. By proving its value in these low-risk areas, AI can build a track record of reliability, which might then ease its acceptance in more critical applications like diagnosis and treatment planning. This staged approach also lets organizations assess a system’s reliability and robustness before it ever touches patient care.
Meeting Quality Measures & Improving Care
Healthcare organizations are constantly striving to meet quality metrics—blood pressure checks, diabetes management, preventive screenings, and more. These are typically tracked through numerator/denominator calculations requiring myriad billing codes and clinical documentation details.
AI can potentially sort and analyze these huge datasets, identifying trends or overlooked details that clinicians might miss amid their workloads. This capability aligns with broader movements toward clinical decision support and predictive analytics in healthcare.

There is significant promise in using AI to identify patients at risk, highlight missing screenings, and detect patterns in population health. The key is combining AI’s data analysis with the clinician’s informed perspective on what to do next, exemplifying the potential of human-AI collaboration in improving patient care.
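To make the numerator/denominator idea above concrete, here is a minimal sketch in Python. Everything in it is hypothetical for illustration: the `Patient` fields and the A1c screening measure are invented stand-ins, not a real EHR schema or an official quality program definition.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional, Tuple

@dataclass
class Patient:
    # Hypothetical record; field names are illustrative, not a real EHR schema.
    patient_id: str
    has_diabetes: bool
    last_a1c_screening: Optional[date]

def a1c_screening_measure(patients: List[Patient],
                          period_start: date) -> Tuple[float, List[str]]:
    """Numerator/denominator quality measure: of all diabetic patients
    (the denominator), how many had an A1c screening during the
    measurement period (the numerator)? Also returns the care gaps,
    i.e. denominator patients with no qualifying screening."""
    denominator = [p for p in patients if p.has_diabetes]
    numerator = [p for p in denominator
                 if p.last_a1c_screening is not None
                 and p.last_a1c_screening >= period_start]
    gaps = [p.patient_id for p in denominator if p not in numerator]
    rate = len(numerator) / len(denominator) if denominator else 0.0
    return rate, gaps

roster = [
    Patient("A", has_diabetes=True, last_a1c_screening=date(2024, 3, 1)),
    Patient("B", has_diabetes=True, last_a1c_screening=None),
    Patient("C", has_diabetes=False, last_a1c_screening=None),
]
rate, gaps = a1c_screening_measure(roster, period_start=date(2024, 1, 1))
```

On this toy roster the measure comes out to 0.5 with patient "B" flagged as a care gap: exactly the kind of roster-level sweep that is tedious by hand but trivial for software, leaving the clinician to decide the clinical follow-up.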
Interoperability: Sharing Data, Sharing Responsibility
A long-standing challenge in healthcare is interoperability—the ability of different systems to seamlessly exchange and make use of patient information. Dr. Anders points to recent strides by the Office of the National Coordinator (ONC) and their rules against “information blocking” as major steps forward.
Yet, simply enabling data exchange is only half the battle. Clinicians often find themselves deluged with PDFs or non-standardized data that are too cumbersome to parse in day-to-day workflows. This challenge highlights the need for AI in hospital operations to streamline data management and improve efficiency.
This is another space where AI can shine: sifting through the noise and distilling clinical insights. However, leaders must address cultural barriers that hinder true data sharing—such as competition among hospital systems, concerns about patient retention, and the lack of proper resources in rural settings. Ensuring patient data privacy, making AI systems explainable, and being honest about their limitations remain equally critical for building trust in these tools.
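As a toy illustration of “sifting through the noise,” the sketch below pulls two clinical values out of an unstructured note using regular expressions. This is a deliberate simplification: a production pipeline would rely on NLP or LLM extraction over standardized formats such as FHIR, and the patterns and note text here are invented for illustration.

```python
import re
from typing import Dict

# Illustrative stand-in for AI-driven document triage. A real system would
# use NLP/LLM extraction, not hand-written regexes.
CLINICAL_PATTERNS = {
    "blood_pressure": re.compile(r"\b\d{2,3}\s*/\s*\d{2,3}\s*mmHg", re.IGNORECASE),
    "a1c": re.compile(r"\bA1c[:\s]+\d{1,2}(?:\.\d)?\s*%", re.IGNORECASE),
}

def triage_document(text: str) -> Dict[str, str]:
    """Surface a few structured highlights from an unstructured note so a
    clinician sees key values instead of a raw document dump."""
    findings = {}
    for name, pattern in CLINICAL_PATTERNS.items():
        match = pattern.search(text)
        if match:
            findings[name] = match.group(0)
    return findings

note = "Outside records: BP 142/91 mmHg on 3/2. A1c: 8.1 % last month."
highlights = triage_document(note)
```

Even this crude version turns a blob of text into two labeled findings; the point is the workflow shape (documents in, clinician-ready highlights out), not the extraction method.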
Bridging the Rural Gap
One of Dr. Anders’s passions is improving healthcare in rural America, where patients often lack close access to specialist services and advanced medical resources. AI-enabled tools and data-sharing platforms can make a huge difference—if these communities are granted the funding and support they need.
In many rural areas, the biggest hurdle is not a lack of interest, but a lack of funding for broadband, updated electronic health record (EHR) systems, and ongoing technical support. These gaps are significant barriers to implementing AI solutions. By championing these issues, healthcare leaders can ensure that AI’s benefits reach some of the most underserved populations, including through AI-assisted surgery and medical imaging tools that could bring advanced care to remote areas.
Leadership in an AI-Driven Future
At the close of our conversation, Dr. Anders circled back to the essence of leadership in this quickly changing environment. Technology moves fast, but the principles of quality care and patient well-being cannot be compromised. Preparing medical professionals for an AI-influenced clinical world is a crucial task for healthcare leaders.
He encourages clinician leaders to remain vocal advocates for patient-centered AI deployments:
- Stay Informed & Engage – Leaders must understand AI’s capabilities and limitations and communicate them clearly to their teams, staying current on developments in AI governance, regulation, and ethics.
- Build Trust through Transparency – Whether it’s clarifying how a chatbot’s advice is generated or showing how AI identifies high-risk patients, transparency fosters acceptance, and confronting bias head-on is part of ensuring fairness.
- Champion Collaboration – Encourage alliances among hospitals, payers, technology providers, and local communities to break down barriers and extend AI’s benefits to all, from clinical trials to treatment planning.
- Never Lose the Human Touch – Above all, keep the patient-clinician bond at the forefront. AI is a tool for augmenting, not replacing, the empathetic care that patients trust, a principle that also frames questions of safety and liability.
Conclusion
Artificial intelligence has the potential to reshape modern healthcare—streamlining administrative tasks, boosting diagnostic accuracy, and enhancing the quality of care that patients receive. Yet, success hinges on prudent leadership and a deep understanding that AI is an aid, not a substitute, for the human clinician.
Dr. Anders’s journey from paper charts and trucking records between clinics to exploring advanced AI applications underscores just how far healthcare has come. His insights remind us that while technology continues to evolve, the fundamental mission of healthcare—to care for and heal people—remains unchanged. By embracing AI thoughtfully, ensuring it is well-trained, fostering interoperability, and prioritizing the patient-physician relationship, clinicians and leaders can harness the power of AI to forge a brighter, more equitable future in healthcare.
As we move forward, it’s crucial to keep monitoring and analyzing AI’s real-world impact. Ongoing research in areas like medical imaging and mental health will help guide the ethical and practical implementation of AI in healthcare settings. The future of healthcare lies in the synergy between human expertise and technological innovation, with AI-enabled clinical decision support and genuine human-AI collaboration paving the way for more efficient, accurate, and personalized patient care.