Key Takeaways
- Healthcare leaders must act as vigilant gatekeepers to ensure AI implementation does not compromise patient safety or outpace clinical validation.
In the swirl of artificial intelligence hype and healthcare transformation, the stakes have never been higher. Hospital leaders face pressure to innovate, sometimes at a pace that outstrips caution, while clinicians, administrators, and technologists navigate an increasingly complex and consequential landscape. AI promises operational efficiency, clinical breakthroughs, and even new hope for addressing longstanding disparities. But who keeps the system honest? Who ensures that technology is a servant, not a master, to the needs of patients and communities?
Tim O'Connell, MD, MEng, practicing radiologist and CEO of emtelligent, stands at the intersection of these challenges. His career arc, from network engineering to the highest levels of medical AI entrepreneurship, illuminates not only the promise of AI but also its potential pitfalls. As he warns, “our healthcare leaders and organizations, institutions, payers—wherever they are—need to remain the gatekeepers to ensure that implementing new technologies doesn't happen too quickly or for the wrong reasons, and that it [does not] adversely affect patient care.”
This article explores Dr. O’Connell’s unique perspective on AI in healthcare, the importance of clinical leadership, the urgent need for community collaboration, and the non-negotiable focus on patient and clinician well-being.
Before he was Dr. O’Connell, physician and medical AI CEO, Tim O’Connell was a network engineer, designing systems for telecom giants. “I’ve got a master’s degree in engineering and I worked for both a large equipment manufacturer and a large telephone company, doing networking and working in information technology,” he explains. “So I’ve been able to combine my love of medicine with, you know, my love of technology—and every day is a great day in the candy shop.”
His dual fluency—in the language of computers and the lived experience of clinical medicine—has shaped emtelligent’s mission: to make sense of unstructured medical text, the free-form reports generated by doctors, nurses, and care teams every day. O’Connell spends one to two days a week practicing radiology in Vancouver, British Columbia, and the rest immersed in building tools to transform how healthcare systems use data.
The core problem is one many health leaders know too well: vital information is locked in the narrative notes of patient charts, inaccessible to most analytics tools. “We started the company with some partners about eight years ago now to solve some of the problems that I certainly see in healthcare with unstructured medical text,” O’Connell says.
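To make the problem concrete, here is a deliberately toy sketch of what pulling structured findings out of free-text notes involves. This is not emtelligent's approach (real medical-language systems use trained NLP models, not keyword rules); the finding terms, negation patterns, and sample note are hypothetical illustrations.

```python
import re

# Hypothetical finding vocabulary and a crude negation cue. Illustrative only:
# production clinical NLP handles negation, abbreviations, and context with
# trained models rather than regular expressions.
FINDING_PATTERNS = {
    "pneumonia": re.compile(r"\bpneumonia\b", re.IGNORECASE),
    "pleural_effusion": re.compile(r"\bpleural effusion\b", re.IGNORECASE),
}
NEGATION = re.compile(r"\bno (?:evidence of|sign of)\b", re.IGNORECASE)

def extract_findings(note: str) -> list[str]:
    """Return finding labels mentioned in sentences that are not crudely negated."""
    findings = []
    for sentence in re.split(r"[.!?]", note):
        if NEGATION.search(sentence):
            continue  # skip sentences like "No evidence of ..."
        for label, pattern in FINDING_PATTERNS.items():
            if pattern.search(sentence):
                findings.append(label)
    return findings

note = ("Chest x-ray shows right lower lobe pneumonia. "
        "No evidence of pleural effusion.")
print(extract_findings(note))  # ['pneumonia']
```

Even this tiny example shows why the problem is hard: the effusion is mentioned but negated, so a naive keyword search over narrative notes would mislead any downstream analytics.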
When asked how AI is shaping the future, O’Connell’s response is pragmatic. “The landscape is moving so quickly right now, I think I can predict where we’re going to be five months from now, but ten months from now, I have no idea,” he quips, echoing Bill Gates. The frenzy is real, but so is the opportunity.
O’Connell sees three main fronts for AI in healthcare:
- Operational efficiency. Compared with industries like manufacturing, healthcare workflows remain heavily manual, a human doing a task and handing it off to another human, and AI systems can find those inefficiencies and fix them through automation.
- Automated outcome monitoring. Tools that continuously watch patient outcomes can stop problems before they start; a hospital-acquired infection, for instance, massively increases length of stay.
- Clinical medicine itself. He points to a recent study (he recalls it appearing in JAMA or the New England Journal) reporting that a large language model diagnosed complex cases better than physicians did, even physicians working alongside the model.
But O’Connell is clear-eyed about the obstacles.
Despite rapid progress, O’Connell identifies four major barriers to meaningful AI adoption in healthcare:
- The products have to actually work. A model that succeeds in a one-off demo may show unacceptable accuracy across millions of patient reports; he has watched hospital pilots of generative use cases, such as automated discharge summaries, “really go off the rails.”
- Workflow and job changes. A hospital is so complex that changing one process produces downstream effects everywhere else, so safe implementation is necessarily slow.
- Security and data transparency. Some hyperscalers remain opaque about how they will use customer data; as O’Connell puts it, “we’ll retain your data for 30 days, but we’re not going to use it.” What does that mean?
- Unclear return on investment. “AI is capable of really cool party tricks,” but a party trick with a massive implementation or opportunity cost may never save money or produce efficiency.
In short: robust pilots, real-world evidence, and transparency are essential before full-scale adoption.
If AI is to fulfill its potential, it needs more than data and algorithms; it needs trust from clinicians, administrators, and patients.
O’Connell pushes back against the stereotype that clinicians fear AI as a job threat. “I really don’t think there are any doctors out there who are like, ‘Oh, I don’t want to use this thing because I’m worried about it stealing my job.’ It’s more that people have used AI products or workflow augmentation products in the past and had bad experiences.”
So what does build trust?
- Well-run pilots and proof-of-concept projects with clear goals, carried out with the same rigor medicine demands of clinical research: careful experimental design, ethics review, and non-inferiority as an explicit target.
- Leadership without conflicts of interest. O’Connell is wary of executives who hand all their business to a single hyperscaler; “Were you auditioning for a job with them?” he asks. Transparency about conflicts of interest and of commitment is essential.
The lesson: AI is not a silver bullet. Its success hinges on inclusive leadership, honest communication, and a relentless focus on patient and clinician needs.
How should physician executives and clinical leaders champion AI initiatives without losing sight of what matters? O’Connell borrows from psychology to make his point: “People are familiar with Maslow’s hierarchy of needs, right? At the bottom, we need shelter and food and things like that. There’s absolutely a hierarchy of needs in healthcare and in project implementation, and number one in that hierarchy is that the patient always comes first. Absolutely. There can be no negative impact on patient care.”
This hierarchy, he argues, should guide every decision:
1. The patient always comes first. A new system must have no negative impact on care and should deliver a net benefit in quality, safety, or efficiency.
2. Sustainability of care comes next. A tool that adds an hour of work to a caregiver’s day can drive people to quit or change jobs, a net negative no matter what else it achieves.
3. Organizational sustainability, profit, and ROI come last, only after the first two needs are met.
This ethical sequence echoes through O’Connell’s philosophy and underscores the essential role of clinical leaders in organizational decision-making.
The conversation inevitably turns to personalized medicine and health equity—a topic that sits at the heart of modern healthcare debates.
Dr. O’Connell is refreshingly candid: “I think it’s one of the most pressing questions of our time in healthcare—how can we reduce disparities in health care delivery? The very short answer is: I’m not exactly sure.”
He notes that before technology can solve equity, healthcare systems must first build trust with marginalized communities. “I’ve met many patients in marginalized communities, and they don’t even want to seek care because they’ve had such negative experiences in the past...I think we have some fundamental problems with our health care systems and our health care delivery models that need to be addressed first.”
That said, he sees promise in using AI for more representative training data and awareness of rare conditions. He shares a telling story: “I just had a clinical case where I looked at a chest x-ray. I’m based in Vancouver; we have a very small population of African American individuals here. I called the emergency room doctor and I said, ‘I think this very young patient has sickle cell anemia.’ He said, ‘How can you tell that? Why would you even think about that?’ I said, ‘Well, I trained in the U.S. Northeast, and there’s some signs here, and in my training that’s a very common disease.’ Here in Vancouver, it’s extremely uncommon. And so there’s just general unawareness of it. That emergency room doctor was very thankful, and worked the patient up, and they were positive.”
His point: clinical expertise is still essential, but AI—properly designed and trained—can help fill gaps, particularly when local training data is sparse.
Rapid AI advances can be exhilarating and anxiety-inducing in equal measure. For health leaders, the need to keep up can feel overwhelming.
O’Connell offers a practical strategy: “One of the ways we are able to very rapidly use new technologies and yet still maintain that focus is through good, high-quality benchmarks...data sets that have been human-annotated as a gold standard and then testing new technologies against them as they come out.”
But he also warns against overreliance on superficial metrics. “There was a movie called Catch Me If You Can...he passes the Louisiana State Bar Exam, but is he a lawyer? And you have the same question of, well, you passed your USMLE Step 2 medical licensing exam. Does that make you a doctor? And the answer is, of course, no...We’re seeing, you know, some very low quality benchmarks being used to test models, and then we’re seeing press releases: ‘Oh, this model passed the USMLE Step 2.’ And it’s like, that’s not what makes someone a physician.”
The message: demand better evidence, keep the focus on patient outcomes, and don’t be distracted by surface-level achievements.
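The gold-standard benchmarking approach O’Connell describes can be sketched as a simple evaluation harness: a human-annotated set of (input, expected answer) pairs, against which each new model is scored. The clinical questions, answers, and toy model below are hypothetical placeholders, not a real medical benchmark.

```python
# Minimal sketch of benchmark-driven evaluation: compare a candidate model's
# answers against a human-annotated gold standard and report accuracy.
def evaluate(model_fn, gold_standard: list[tuple[str, str]]) -> float:
    """Return the fraction of gold-standard items the model answers correctly."""
    correct = sum(
        1 for text, expected in gold_standard
        if model_fn(text).strip().lower() == expected.strip().lower()
    )
    return correct / len(gold_standard)

# Hypothetical human-annotated gold standard (in practice, thousands of
# expert-labeled cases, not two).
gold = [
    ("Fasting glucose 180 mg/dL on two visits.", "diabetes"),
    ("HbA1c 5.2%, normal fasting glucose.", "no diabetes"),
]

# A stand-in "model"; in practice this would call the AI system under test.
def toy_model(text: str) -> str:
    return "diabetes" if "180" in text else "no diabetes"

accuracy = evaluate(toy_model, gold)
print(f"accuracy = {accuracy:.2f}")  # accuracy = 1.00
```

The design point is the one O’Connell makes: the value of the number printed here depends entirely on the quality of the gold standard. A high score against a shallow benchmark, like passing the USMLE, proves background knowledge, not clinical competence.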
Throughout O’Connell’s narrative, a recurring theme is the indivisibility of patient outcomes and clinician well-being. New technologies, if poorly implemented, risk not only patient safety but also staff morale, retention, and engagement.
He is adamant that the well-being of clinicians and the health of the community are inseparable: “If you’re going to do a whole lot of work to implement new systems and workflows, then there actually should be a net benefit to patient care—either an improvement in quality of care and safety of care or efficiency of care.”
Community collaboration is likewise non-negotiable. Tackling disparities, creating trustworthy AI, and managing disruptive change all require dialogue between clinicians, administrators, patients, technologists, and the communities they serve.
Artificial intelligence may be the most disruptive force in modern medicine, but it will not—cannot—replace the human judgment and leadership that safeguard patients and communities. As Dr. Timothy O’Connell’s story shows, the future belongs to leaders who blend technical vision with clinical wisdom, ethical courage, and a deep commitment to both patient and provider well-being.
In an era where change is the only constant, O’Connell’s message to his peers resonates: “You shouldn’t feel bad if you find that things are moving too quickly...it’s moving quickly for everyone. But...you have to be the gatekeeper.”
Healthcare’s greatest advances will not come from algorithms alone, but from the collaborative spirit of leaders who hold technology—and themselves—to the highest standard.
<p>Our healthcare leaders and in healthcare organizations, institutions, payers, wherever they are, need to remain the gatekeepers to ensure that implementing new technologies doesn't happen too quickly or for the wrong reasons and that it adversely affects patient care. [Music] Hello, this is Zach with the American Journal of Healthcare Strategy and you are listening to the Clinicians and Leadership podcast where we focus on empowering clinicians from bedside to boardroom.</p> <p>Today we have the pleasure of being joined by Dr. Tim O'Connell. Dr. Okonnell, why don't you take a second to introduce yourself, share a little bit about your experience and your current role um and the ways that you're serving now. Thank you, Zach. Good morning. It's very nice to meet you and thank you very much for having me on. Um, as as you mentioned, my name is uh Tim Oonnell. I'm a practicing radiologist. I'm based in Vancouver, British Columbia, and Canada.</p> <p>Um, I've been practicing medicine now as a a licensed physician since 2012. And I work at a number of different hospitals here in Vancouver. I work clinically one to two days a week. And the rest of the time, the other nine days a week it seems like. Uh I I'm CEO of a medical language AI company called Intelligent. Uh we started the company with some partners about uh eight years ago now to solve some of the problems that that I certainly see in healthcare with unstructured medical text.</p> <p>And that's you know the the thousands and thousands of reports that every hospital generates every day and clinics and things like that where there's you know tech pros text that a caregiver has has dictated or typed or created. So um yeah, it's it's been quite the journey and quite an exciting time for us right now and and before going into medicine. 
I used to work as a network engineer.</p> <p>So got a master's degree in engineering and and I worked for uh both a large equipment manufacturer and a large telephone company um doing networking and and working in information technology. So I've been able to combine my love of medicine with with you know my love of technology and every day is a great day in the candy shop. Yeah. Well, Dr. Okonnell, we are thrilled to have you on today.</p> <p>Um, and and particularly just to discuss this topic of AI and we're going to dive through some questions um, kind of focusing on AI and some some impacts that can have on on the medical community. But I think your background and is is allows you to to speak to this in some very unique ways. I mean, you've mentioned uh, you know, still practicing medicine, serving in an administrative capacity, as well as your background as as a software engineer.</p> <p>like, holy cow, that's that's the combination of of that background is just the perfect setup for for this topic today. And so I'm I'm excited to to dive into this. And so as we get started as as someone that has served in in the IT world, in the software world, in the medical world, and in the administrative world, how do you see AI shaping the future of patient care and operate like operational and organizational efficiency? Well, thanks Zach.</p> <p>So, I think that's definitely kind of a crystal ball kind of question and I love answering those, but to paraphrase Bill Gates many years ago, he said, "I I can predict where we're going to be five years from now, but 10 years from now, I have no idea." And in AI, the landscape is moving so quickly right now. 
I think I can predict where we're going to be five months from now, but 10 months from now, I have no I have no idea.</p> <p>Um, but you know, I think I think it's going to change uh care in sort of three areas.</p> <p>one I think I I think right now compared to a lot of other organizations like manufacturing and things like that I think in healthcare I think we have relatively poor organizational efficiency a lot of our workflows are very manual it's a human doing a task and handing it off to another human and and there's all kinds of efficiencies there so I I think I think AI systems for um finding problems in efficiency and and fixing them through automation I think there's a there's a lot of room there for improvement I think we're also going to Another area for improving efficiency is just automating monitoring of patient outcomes and stopping problems before they start.</p> <p>For example, a patient gets an infection while they're in hospital, it's going to massively increase their length of stay. So, having tools that can automate um monitoring and solving problems like that, I think will be great for uh improving efficiency, shortening length of stay, things like that. Uh currently, I think another area that's going to be really important is just general improvements in clinical medicine.</p> <p>I think there a really interesting study which was just sort of uh I think it was posted in JAMAMA or maybe the New England Journal. 
Um and it sort of said that large language model was able to do a better job diagnosing complex patient uh you know disorders essentially or diagnosis uh than humans were and was even better than human and large language model alone.</p> <p>So I think we're going to see you know and and those like you know complex cases that come into the hospital and the p you know it takes two weeks to understand what's going on with the patient the sort of host episode um those cause a lot of problems and efficiency and so I think there's there's real areas for improvement there. Well and Dr. Okonnell I'm I'm interested in in hearing your thoughts on this next question.</p> <p>you you've mentioned in that previous answer just uh the the rapid rate at which AI is developing and and uh progressing and becoming more commonplace and that that really is true if you look back over the the last number of years um it's it's becoming more commonplace it's becoming more common practice becoming more accepted uh yet there still is some barriers to the adoption of AI and seeing the usage of AI and and trusting AI and I want to dive into that a little bit but prior to diving into some of the specific uh things that we we'll cover later on.</p> <p>I'm I'm interested to hear from you with your experience and your background. What are some of the biggest barriers to the adoption of AI in healthcare and and how can organizations overcome those barriers? Sure. 
I think I mean I think one of the big barriers is the products have to actually work.</p> <p>uh there's a lot of sort of froth and excitement right now in a lot of areas where people going oh like you know I can use this large language model to diagnose something and that may work on a a oneoff kind of scenario but then when you use them to deal with millions of patient reports or things like that you may find that the accuracy is unacceptable I've seen lots of trials of even things for generative use cases like generating discharge summaries and things like this at hospitals really go off the rails because the products don't work to for that use case so I think that's one One of the issues um I think one of the other issues is a lot of the use cases for AI involve workflow changes and job changes and there can be many many reasons why um those uh products are taking time to adopt right you you can't go into an organization as complex as a hospital where if you change one thing there's all these downstream effects the sort of you know butterfly flapping its wings in Brazil kind of effect that you know there can be unintended outcomes and consequences so implementation always takes a long time because everything has to be done safely.</p> <p>You you you know you don't want to be like oh we improved efficiency in this area and then have all these like bad patient outcomes in another area. 
I I think one of the other issues is is security.</p> <p>I think um in using AI some of what we're seeing from you know and I have some personal experience in this in some of what you see in when you go to use a large language model is that some of the hyperscalers can be fairly opaque about how they plan to use your data right and and you know well we'll retain your data for 30 days but we're not going to use it and what does that mean and this sort of thing so I think there needs to be uh more clarity on on how some of these companies that if they're hosting models and this sort of thing about how they can use their data.</p> <p>And and finally, I think one of the other big adoptions is for some of these products, you know, I often tell people that AI is capable of really cool party tricks, but that doesn't mean that there's a good ROI for the investment you have to make to uh, you know, to actually be like, oh, we should implement this because there may be a massive implementation cost or an opportunity cost to implement a new system. And at the end of the day, it doesn't actually save you money or produce efficiency.</p> <p>it's just like well we did it because it could. Um so I think those are sort of the four areas that that are slowing down adoption right now. Um one of the areas that you you mentioned is is just that the products have to work and and they have to work well and they have to they have to accomplish the goal and they have to be safe.</p> <p>Um another common barrier that I I think is is felt more in the organization sense is is a lack of trust among your clinicians and your frontline staff who are actually going to be using those products. um to to help inform their patient care. 
And that's that's understandable as you've had, you know, with an aging population and an in an aging and growing, you know, an aging provider population as well who have served in these patient care settings for a wide variety of years.</p> <p>Um and just in the most recent years, AI is becoming to the point where it could be helpful. And so how can clinicians and physicians and leadership build trust among clinicians that work in their organizations and among patients to in the decision-m capabilities of of AI? Sure.</p> <p>So I think I think one thing's worth addressing right off right off the bat for for the audience and that's you know I really don't think there are any doctors out there who are who are like you know a very very small number of physicians out there who are who are like oh I don't want to use this thing because I'm worried about it stealing my job right I I think it I don't think it has anything to do with that I I think a a lot of people have used sort of AI products or workflow augmentation products in the past but had bad experiences of them with them and and so they're very wary you If you used computer AED detection tools 20 years ago in at least in radiology, you you might have been like, oh, like you know, these were as wrong as often as they were right.</p> <p>And and you know, but there has been 25 years of progress since then and and and and things are better now. But but I think that you know really how you get to uh have trust by your care providers in in in these products is through really well-run proof of concept projects and really well-run pilots. You know that and and really clear goals. In medicine we're all about like you know that there has to be all this rigor around experimental testing. There has to be ethics review.</p> <p>You know you know establishing non-inferiority is often the goal. And so we need to approach implementation implementing changes in workflow uh the exact same way. 
Uh it really has to be that um you know we're we're being rigorous and we're being careful when we're implementing new workflow solutions not to just swap stuff out. And I think another area that I think clinicians need to see from their leadership is no conflict of interest.</p> <p>I I think, you know, it's often, you know, people have made accusations in the past about people in the military that, you know, by the time they get to be a general, they're really just auditioning for a job with a defense contractor. And and and I think the same thing is true in in healthcare leadership.</p> <p>I I think that leadership in healthcare organizations often they're they're willing to go and sign, you know, large agreements with large hyperscalers to say, "Oh, we're going to give all our business to this one giant hyperscaler." and you're sort of like that doesn't make any sense from an organizational perspective, from a patient care perspective. It's like why did you do that? Were you auditioning for a job with them?</p> <p>So, I think that we need to be very uh transparent about potential conflicts of interest, conflicts of commitment is as is uh in healthcare leadership. Well, and and Dr. Okonnell, I'm I I think that this next question uh kind of comes off of that previous question a little bit. Um and so I'm I'm curious to hear your thoughts on this one.</p> <p>um how can your your physicians uh your your physician executives your your clinicians and leaderships uh champion these AIdriven initiatives while while ensuring that the alignment of the broader organization and and the the the goals of the organization and serving the populations and the patient populations that they serve. 
How can you ens how can your physician executives ensure that those line up?</p> <p>Um because like you said, you've mentioned before AI uh is is really good at doing party tricks and but how can you make sure that it's not just a party trick but actually no this is benefiting the the patients and the the populations that we're serving and and also benefiting the organization as a whole. How how do you make sure that balance is there? Sure.</p> <p>I I think you know I think you know people are familiar with Maslo's hierarchy of needs right for bottom line we need sort of shelter and food and things like that. There's absolutely a hierarchy of needs in in healthcare and in project implementation. And the the number one hierarchy is the patient always comes first. Absolutely. There can be no negative impact on patient care.</p> <p>And if you're going to do a whole lot of work to implement new systems and workflows, then there actually should be a net benefit to patient care, right? Either an improvement in quality of care and safety of care or efficiency of care. And so, um, that's always the yard stick, right? The next sort of step in in the hierarchy of needs, I think, is the sustainability of care. Are we implementing a system which is going to adversely affect how sustainable our care is, right?</p> <p>Is it going to create an extra hour of work for one of the caregivers involved in the patient workflow? Then it, you know, we need to look at it and go like this may result in people quitting jobs or changing jobs and so this is going to be a net negative impact, right? 
And then we also have to then look at organizational sustainability, profit, ROI, things like that.</p> <p>So I think by always making sure that we put the patient first, I I think that that is is is the most important thing to ensure alignment and not just doing stuff because we can.</p> <p>And that that just speaks to the importance of having someone with that clinical experience in in the room and being a part of that decision-making process because they have tangible ex number numerous years of experience of keeping that that patient first and and interacting with that patient and doing what's best for that patient.</p> <p>And so absolutely right and and I mean this is the history of of technology implementation right is is you can you can always implement new technology to replace humans but um you have to be careful about it and smart about it and make sure that that you're not sort of you know um creating an adverse outcome somewhere along the way. Um a a common phrase Dr. Okonnell that that we hear a lot nowadays is is that we're we're entering into this era of personalized medicine.</p> <p>Um and and uh just that that's just a common phrase maybe some buzzwords that are being thrown out a lot and but but really with it some more of the recent innovations that that seems to be more of the case. Um with that we're we're entering this era of personalized medicine. Um and we are also becoming more and more aware of disparities in healthcare. um and and equity in healthcare delivery and and where those disparities and equity don't don't line up.</p> <p>And so I'm curious with these innovations and tools, particularly with AI, how can clinicians and and leadership use AI to address some of these health care disparities and and improve equity in care delivery? how how can your your health care leaders drive these initiatives and and and drive innovation that that really addresses those areas of need. 
That's a great that's a great question.</p> <p>I think it's one of the most pressing questions of our time in healthcare is how can we reduce disparities in healthcare delivery? AB: Absolutely. Um, you know, the very short question, the very short answer is is is I'm not exactly sure. I think that I think that it is it is an incredibly important question. Um, I don't think we're at the stage yet where we always know what the problem is.</p> <p>I think that in recent years, healthcare organizations have made tremendous steps forward, but but quite arguably not enough to listening to people in marginalized communities, right? I I think in and I've met many patients in marginalized communities and they don't even want to seek care because they've had such negative experiences in the past when trying to obtain care.</p> <p>Um, and so I think we have some fundamental problems with our health care systems and our healthcare delivery models that need to be addressed first um for providing care to people in in in marginalized communities to obtain their trust and get them even coming into the health care system.</p> <p>And and you know, due to cultural differences, maybe having a thousand bed hospital with, you know, an an intake area that looks like uh you're going into a conveyor belt in a factory isn't the right system for for some of these people in these communities. 
And so, um, I I would love to have, you know, I'm hopeful that AI can we can start using AI in smart ways to to help address disparities in healthcare.</p> <p>Um, but I'm I'm I'm not sure we have this fundamental problem of people not even seeking healthcare that that we need to solve first.</p> <p>um you know but I think that's a bit of a separate a separate question from the topic of like how can we ensure a lack of bias and equity when we are training models and and and I think we're I think we still have more work to do there but it certainly I think you know training on you know training data not just on what's available but rather working with organizations to ensure that you are training on broad selections of data um from people with different backgrounds uh so that you can um you know make your models aware of all the you know enormous genotypic and phenotypic differences that that exist in in in humans today let alone differences in the social environment and and you know professional environment that that can overlay with these things.</p> <p>I I just had a a clinical case where I looked at a chest X-ray and I'm based in Vancouver. We have a very very small population of African-American individuals here and people from West Africa. And I looked at a chest X-ray and I called the emergency room doctor and I said, "I think this this very young patient has cickle cell anemia." And he said, "How how can you tell that?</p> <p>Why would you think about that?" And I said, "Well, I I trained in the US Northeast and there's some signs here and you know in in where I where I did some of my fellowship training that's a very common disease, right? But here in Vancouver, it's extremely uncommon. And so there's just general unawareness of it. 
And and so that emergency room doctor was, you know, very thankful and worked the patient up and they're positive.</p> <p>But it's just an example of where, you know, geography and and background and what's in your training data can make a huge difference to patient care. Well, and I think one of the things that I loved just throughout that answer is is the the importance of that human connection in healthcare and and the importance of uh just the not the not necessarily important but the the benefit that a caring provider can have on a patient to encourage them to return to seek care.</p> <p>And and like you said, there's there's fundamental issues that that need to be addressed with just building trust with the health care systems as a whole. And that's a topic for a whole another interview in another episode uh that we can we can dive into another time. But but in particularly just the the benefit uh that that the tools that that recent innovations such as artificial intelligence can can provide.</p> <p>But ultimately it does come back to that can the provider build that trust with that patient to encourage them to return. And I I think that that is such a critical component uh that that is often we are looking for AI and other things like hey let's fix this issue when in reality like you identified um and I think identified correctly there there's a lot more fundamental issues that we have to address.</p> <p>AI can help innovation and tools can help but it's not just going to be the band-aid that fixes everything. It it it I mean quite literally could be a band-aid to cover up some issues that we still need to address later on. And so, uh, Dr. Okonnell, um, shifting gears just a little bit. 
We've we've mentioned a number of times just the rapid advancement that that AI has has undergone over the past number of years.</p> <p>Um, and and how you've mentioned that, you know, you may be able to predict where AI is going to be in five months, but but past that, who knows? And that's that is both exciting from a like a perspective on on on ways that that could impact the medical field, but also terrifying, I would imagine, to to leaders that are trying to make decisions and and evaluate data and stay on top of an everchanging field.</p> <p>And so how do you keep your organization prepared for these rapid advancements while while still maintaining focus on the a the everyday responsibilities of of taking care of patients um and and doing just the the the regular organizational uh tasks that that you need to do. Sure. I I think well I think one of the nice things about medicine is that it turns out there's only usually one sort of right answer when you're talking about like answering clinical questions, right?</p> <p>like the patient has diabetes or they don't have diabetes, right? But so one of the ways which we you know are able to very rapidly use new technologies and yet still maintain that focus is through um good high quality benchmarks and so these could be you know data sets that have been human annotated as a gold standard and then testing new technologies against them as they come out. 
Now, I think one area where the industry is very much still struggling is large language model benchmarks.</p> <p>I think some were published in the last month that I'm hopeful are higher quality, but at my company we've done a deep dive into a lot of these benchmarks and found that they are extremely low quality. There was a movie years ago that illustrates it very well.</p> <p>It was called Catch Me If You Can, and it was about a famous, notorious fraudster, a gentleman named Frank Abagnale, who later turned his life around and now does all kinds of great things. In the movie, though it wasn't in his book, he passes, I think it was, the Louisiana State Bar Exam. And so the question is: well, he passed the bar exam, is he a lawyer? You'd have the same question of: well, you passed your USMLE Step 2 medical licensing exam, does that make you a doctor? The answer is, of course, no.</p> <p>And so we're seeing some very low-quality benchmarks being used to test models, and then we're seeing press releases: oh, this model passed the USMLE Step 2. But that's not what makes someone a physician, right? You don't get to operate on people, or to read chest X-rays, just because you passed a background-knowledge examination.</p> <p>And my goodness, those models have been trained on hundreds or thousands of medical textbooks. If they can't pass those medical licensing examinations, then they're missing the mark by a wide margin. So I think benchmarks are going to help us stay focused on what the right answer is and ensure that new technologies can meet it, but we need better benchmarks than we have right now. And that's actually something we're actively working on.
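</p> <p>The gold-standard benchmarking Dr. O'Connell describes can be sketched in a few lines. This is a minimal illustration, not any vendor's actual pipeline: the labels, the example "diabetes" task, and the model outputs below are hypothetical stand-ins for human-annotated clinical data. Reporting per-label precision and recall alongside accuracy matters because a single headline number can hide systematic misses.</p>

```python
def evaluate(gold, predictions):
    """Score model predictions against human-annotated gold labels.

    Returns overall accuracy plus per-label precision and recall,
    so one headline number can't hide systematic failure modes.
    """
    assert len(gold) == len(predictions), "benchmark and predictions must align"
    pairs = list(zip(gold, predictions))
    accuracy = sum(g == p for g, p in pairs) / len(pairs)

    per_label = {}
    for label in sorted(set(gold) | set(predictions)):
        tp = sum(1 for g, p in pairs if g == label and p == label)
        fp = sum(1 for g, p in pairs if g != label and p == label)
        fn = sum(1 for g, p in pairs if g == label and p != label)
        per_label[label] = {
            "precision": tp / (tp + fp) if tp + fp else 0.0,
            "recall": tp / (tp + fn) if tp + fn else 0.0,
        }
    return accuracy, per_label

# Hypothetical gold annotations for "does this note document diabetes?"
gold = ["diabetes", "no-diabetes", "diabetes", "no-diabetes", "diabetes"]
model_output = ["diabetes", "no-diabetes", "no-diabetes", "no-diabetes", "diabetes"]

accuracy, per_label = evaluate(gold, model_output)
# Perfect precision on "diabetes" here still masks a missed positive case,
# which is exactly why recall is reported separately.
```

<p>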
</p> <p>Dr. O'Connell, I'm grateful for your time today and excited to have learned a little more about AI and how it can benefit healthcare. Before we go, just one final question, and then I'll let you go, I promise. What advice would you give to fellow clinicians in leadership about staying ahead in an AI-enabled healthcare landscape?</p> <p>Sure. Number one, I think you shouldn't feel bad if you find that things are moving too quickly and you always feel like you have to catch up. I think what happened this week with DeepSeek was a great example of that. We've known about them for months now, but then the bombshell got released, everyone said "whoa," and you saw that reaction in the stock market as well.</p> <p>So don't feel bad; it's moving quickly for everyone. But the advice I have is: you have to be the gatekeeper. Remember that there are a lot of people at these companies with good intentions but limited or no clinical experience, and someone's opinion that their product is going to help patients doesn't mean it's actually going to work.</p> <p>So our healthcare leaders and healthcare organizations, institutions, payers, wherever they are, need to remain the gatekeepers to ensure that implementing new technologies doesn't happen too quickly or for the wrong reasons, and doesn't adversely affect patient care. They should be very secure in that.
</p> <p>Well, Dr. O'Connell, I love that answer. In a sense, your clinicians and leadership are grounded by that; it gives them something against which they can compare every opportunity and every new innovation: how does this impact patients, how will this benefit patients, how could this negatively impact patients? In that sense, I think clinicians and leadership have a bit of an advantage, because when you have that focus and maintain it, it's harder to get dissuaded and distracted by the other opportunities out there.</p> <p>Well, Dr. O'Connell, thank you so much for joining us today on the Clinicians Leadership Podcast, and we wish you the best.</p> <p>Thank you so much. It was my pleasure, Zach. Thank you so much for having me on.</p>