CareTalk: Healthcare. Unfiltered.
CareTalk: Healthcare. Unfiltered. is a weekly podcast that provides an incisive, no B.S. view of the US healthcare industry. Join co-hosts John Driscoll (President U.S. Healthcare and EVP, Walgreens Boots Alliance) and David Williams (President, Health Business Group) as they debate the latest in US healthcare news, business and policy. Visit us at www.CareTalkPodcast.com
The Trust Problem With Healthcare AI
How can patients and providers trust AI in healthcare if they don’t understand how it works?
In this clip from our episode "Making Healthcare Access Truly Borderless," HealthBiz Podcast host David Williams speaks with Dr. Sarah Matt, author of The Borderless Healthcare Revolution, about why explainability and trust matter when AI is used in care delivery.
Listen to the full episode here
🎙️⚕️ABOUT DR. SARAH MATT
Sarah Matt, MD, MBA, is a surgeon turned health technology strategist, author, and speaker. Her work focuses on how digital tools, from remote surgery to telemedicine to AI, can expand access to healthcare and eliminate the traditional boundaries that separate patients from care.
With over two decades of experience at the intersection of medicine and innovation, Dr. Matt has held leadership roles at Oracle Health, NextGen, and multiple health tech startups. She has designed and deployed systems that reach patients around the world, including hard-to-serve and underserved populations.
A practicing physician, Dr. Matt continues to treat patients in rural and charity-based settings, keeping her closely connected to the human side of healthcare access. She speaks widely at healthcare and technology conferences and has appeared on national panels about artificial intelligence, care delivery reform, and digital transformation.
A graduate of Cornell, SUNY Upstate Medical University, and UT Austin’s McCombs School of Business, she blends clinical acumen with deep technical knowledge to challenge the status quo and to reimagine what healthcare can look like when geography no longer dictates your care.
🎙️⚕️ABOUT HEALTH BIZ PODCAST
HealthBiz is a CareTalk podcast that delivers in-depth interviews on healthcare business, technology, and policy with entrepreneurs and CEOs. Host David E. Williams — president of the healthcare strategy consulting boutique Health Business Group — is also a board member, investor in private healthcare companies, and author of the Health Business Blog. Known for his strategic insights and sharp humor, David offers a refreshing break from the usual healthcare industry BS.
GET IN TOUCH
Follow CareTalk on LinkedIn
Become a CareTalk sponsor
Guest appearance requests
Visit us on the web
Subscribe to the CareTalk Newsletter
⚙️CareTalk: Healthcare. Unfiltered. is produced by Grippi Media Digital Marketing Consulting.
David: So what do you see in terms of people working alongside technologies like AI and robotics in care delivery? Is that something we should be enthusiastic about, or is it scary?
Sarah Matt: For me, it's exciting, but at the same time I've been building my whole career around this, so I've been seeing it come to bear over several years. So for me it's not scary. It's a calculated risk, and ultimately, if you understand what we're taking data from, what we're doing with it, and how it's going to improve things, it has built trust with me in certain circumstances. What I'd say for patients, and for providers for that matter, is that ultimately a lot of the AI is not what I'd call explainable. There's a lot of black boxes, and if you're a provider putting your license on the line to take care of patients, you're going to want to know where those tools are coming from, how they're being utilized, and where your liability starts and ends. As a patient, you're going to want information, and we need to show patients that by giving us their data, they gain value from it. As an example, I give Google Maps my information all the time. I'm sure you do too. And I find value in giving them that data. On the healthcare side, I don't think we've been providing true value to patients by utilizing their data just yet.
David: You know, you mentioned a black box. From some of the earlier times when AI was starting to be used, there's an awareness that some of the training data comes from biased data sets. So on the one hand you might naively say, hey, we're going to get rid of bias, because the AI could be objective in a way that maybe humans can't. But then people say, well, wait a minute, how did the AI learn? You could actually echo in the AI world what's in the real world, or even entrench it further. Where are we? How big a problem is algorithmic bias, and are we doing anything to correct it, or make it worse?
Sarah Matt: Right now, one of the things I think is interesting is that by creating these algorithms, we're finding our existing biases within the system. The data that's been collected has probably been collected in a fashion that maybe wasn't the best in the first place, but it was taken as, you know, this is our central database, this is great, we've done this. But people are biased. Systems are biased. Processes are biased. I think right now, because of the ability of these new technologies to use so much information, and to do it so quickly and change and evolve so quickly, folks are getting a little bit more anxious about the fact that they can't put their finger on how it's done. They can't put their finger on all the training data. What I'd suggest is that most of the data we've collected over the course of, we'll say, US history in terms of healthcare has probably been taken in fashions that weren't ideal. So if we can identify why things might be swayed one way or the other, I think that's a great first start, because it shows us which populations should be included now that we know they aren't, or that perhaps scoring or other diagnostics are not working well for specific populations. So: what are we using these for? Who are they working best for? What populations do we perhaps need more data on? Let's work to fill that gap.