CareTalk: Healthcare. Unfiltered.

Turning Clinical Data Chaos into Clarity w/ John Laursen


Healthcare is drowning in messy, inconsistent data, and IMO Health is helping clean it up so organizations can turn information into real clinical insight.

In this CareTalk Executive Feature episode, host David E. Williams speaks with John Laursen, Senior Vice President of Commercialization at IMO Health, about how to separate hype from real value in healthcare AI.


🎙️⚕️ABOUT JOHN LAURSEN
John Laursen is an accomplished Senior Vice President of Commercialization at IMO Health, bringing extensive experience in the healthcare industry. With a robust background in management consulting, he has honed his expertise in change management, performance improvement, and strategic planning.

John has held various leadership roles, including Vice President of Business Development and Chief Growth Officer, showcasing his ability to drive business success in complex environments. His educational foundation from Gustavus Adolphus College complements his professional journey, marked by a commitment to enhancing healthcare delivery. 

Outside of his professional life, John is an advocate for innovative healthcare solutions, reflecting a passion for improving patient outcomes. His career trajectory reflects a blend of strategic insight and operational excellence, making him a valuable asset to any healthcare-focused organization.


🎙️⚕️ABOUT CARETALK
CareTalk is a weekly podcast that provides an incisive, no B.S. view of the US healthcare industry. Join co-hosts John Driscoll (President U.S. Healthcare and EVP, Walgreens Boots Alliance) and David Williams (President, Health Business Group) as they debate the latest in US healthcare news, business and policy.

#healthcare #healthcarepodcast #healthcarebusiness #healthcarepolicy #healthinsurance #healthcaretechnology #healthcarefinance #caretalk #healthcareindustry #IMO #AIhealthcare


GET IN TOUCH
Follow CareTalk on LinkedIn
Become a CareTalk sponsor
Guest appearance requests
Visit us on the web
Subscribe to the CareTalk Newsletter

Support the show


⚙️CareTalk: Healthcare. Unfiltered. is produced by
Grippi Media Digital Marketing Consulting.

David:

Healthcare AI is evolving faster than ever, but in the rush to innovate, organizations risk trading accuracy and safety for speed. Call it the fast fashion era of AI. Welcome to CareTalk Executive Features, a series where we spotlight innovative companies and leaders working to advance the healthcare field. I'm David Williams, president of Health Business Group, and my guest today is John Laursen. He is SVP of Product Commercialization at IMO Health, a clinical data intelligence company improving how data is used in healthcare. John, welcome to CareTalk.

John:

Thank you, David. It's great to be here.

David:

You know, I love that term, fast fashion, but I'd like you to explain: what is fast fashion, what's the fast fashion era of AI, and why are you using that term these days?

John:

Yeah, I think we can all picture fast fashion, whether our daughters or sons are buying from H&M, and the speed that the fashion market creates to match the immediate trends. As I apply that to healthcare, I think healthcare has always had ebbs and flows of fast fashion: a lot of organizations can start to follow a trend really quickly, and I feel like we're in one of those moments today. What you often see is that a lot of those organizations then fall out of favor because they can't demonstrate meaningful value over a long period of time. The fundamentals of healthcare don't often shift that much, despite some of the frothiness you'll experience in one given year or a couple of years. So organizations that pop up and try to align to a near-term trend, but aren't fixing the underlying problems, really fit that fast fashion definition to me. I feel like in the AI space we're there again. We'll have more in the future, but we're back in a period where somebody can claim they do AI and that's enough to get in the market, but it's not enough to sustain you in the market.

David:

Got it. You know, we've had other moments of health tech hype in the past. Is there something different about this moment with AI, or is it just one in a series?

John:

I think the big thing that's different in this moment is that AI enables a lower barrier to entry into the market, both from a funding perspective and from a marketing perspective. Honestly, you and I, David, could stand up an AI healthcare tech company pretty quickly. The foundational and frontier models that are now available to everybody do give anyone a step up. What's the same as the other fast fashion periods, though, is that just because you can get in doesn't mean you can demonstrate meaningful value to the provider market. And again, the fundamentals of their funds flow haven't shifted, so if you're not aligned and having a deep impact there, you're still gonna go out of favor quickly.

David:

What sort of risks do you see arising when organizations, you know, rush to adopt AI as many are doing now, perhaps without that rigorous evaluation?

John:

I think, again, we're in that period where hype can outpace reality. Our provider customers at IMO, and we work with thousands of hospitals and clinicians across the country, are taking a very measured approach to AI, doing really concerted pilots to understand the impact, particularly in the clinical workflow but also in the billing workflow. So I don't see a rush to adopt; I see measured implementation and evaluation, because they know the risks. Healthcare has numerous edge cases and use cases that can bleed out, and if you don't get those right, you can have a very negative impact, both financially and clinically, on your organization. I think most organizations realize that and are gonna take a very measured approach to evaluating an AI partner.

David:

So when they do that, what are some of the top criteria that a provider or a payer should be using when they're evaluating an AI tool?

John:

Yeah, I think, again, having worked with health systems over the last 30 years at IMO, health systems are rational buyers. They're rational decision makers. They're gonna have to understand your value proposition in a way that fundamentally impacts their clinical delivery or financial models. That will always be the case, and if your use of AI accentuates that, great; if it doesn't impact that, then you're just screaming into the wind. So that's criteria number one. The second is that I see them really looking for a deep understanding of the clinical workflow and the data requirements necessary for healthcare. IMO is a data quality company; we have been working with organizations for 30 years to improve data capture and the use of their data in analytic models, so we're trusted in that space. But if you don't understand the nuance of billing codes and how that translates to the clinical terms used by clinicians, your AI model's not gonna function. So it's both that understanding of the data and that governance function. Probably the last thing is that clinicians are gonna want to see some human-in-the-loop function for a while now. So for those that are overpromising in terms of their ability to automate everything at this point, I would have some skepticism, because of those edge cases that are pretty wonky. You're still gonna need some review. You can claim that your AI solution eliminates 80 or 90 percent of the work, but if you don't show a meaningful way to let the clinician or a coder influence that, you may be viewed with skepticism.

David:

Yeah. I think in any of these decisions, organizations are looking at balancing different factors, and I'm thinking here of things like speed, accuracy, safety, compliance. How do you balance all those things when you're trying to do an AI initiative?

John:

Well, first, whenever we're starting to bring something new to market, we think not about whether it's AI driven or not, but about what fundamental value we're gonna deliver, what the superpower is that we wanna bring to market that we know we can do better than anybody else. If AI can help us deliver that, we're gonna use it. So that's the grounding. Then, in terms of balancing all those things when we bring something to the provider market, we rely on essentially our council of thousands of clinicians and health systems to guide us. Our customer advisory council guides us, and feedback from our customer success team guides us. And then we have a rigorous development partner platform where we bring those products to bear and pressure test them before they're GA'd out into the world. So I think we're just measured in evaluating the impact, gathering customer feedback, and then pressure testing that live with clinicians before we launch.

David:

Got it. So I get your caution about needing a human in the loop and not going to a hundred percent, but pretty much everybody is talking, one way or the other, about streamlining workflows. So if I'm sitting in the chair of a purchaser, how do I evaluate among the different vendors when everyone is telling me they're gonna streamline my workflow?

John:

Yeah. Again, when you're looking to streamline workflow, there's a lot of opportunity to do that in the morass of healthcare delivery. Some of those opportunities are gonna be more impactful financially than others, so that's the calculation you're gonna be making: where am I gonna spend money, but also the time and effort of my health system, and what am I going to streamline? The workflow reduction opportunity is pretty massive, though, so I do think in the near term we're gonna see a lot of focus on operational efficiency before we get into the clinical world. Most of that will be in the revenue cycle in the near term; I think there's a lot of opportunity there. And then we'll move into some of the other administrative functions of healthcare.

David:

So let's talk a little bit more about people, and I mean both human oversight in these processes where there's a lot of technology, AI in particular, and also, I know we're talking mostly about the administrative side to get started, but when we think about clinicians, some of whom have been quite distrustful of AI, how do we build trust there with the clinicians?

John:

Yeah. So let's maybe start on the clinician side first. You have a very skeptical user base there, and you need to recognize that when you're developing your AI application or AI workflow. At IMO, we've thought a lot about helping clinicians use their electronic health records; that's been one of our core value propositions over the years, helping them capture data in those EHRs. So we understand one key thing: you have to put information in front of the clinician that they can understand and contextualize before they use it to document. If you're building an AI workflow, you need to be thoughtful about what language you're going to use to present something in front of a physician. If you misinterpret, or even use language that may be technically correct but not accurate for that patient's clinical condition, you're gonna lose trust very quickly. So I would just caution all AI organizations to be thoughtful and nuanced about what you put in front of those clinicians, to help them gain trust that you understand clinically what's going on with that patient.

David:

Makes sense. And let's talk about some of the safeguards that can be put in place, because a lot of the critique of AI has been about things like bias it may display, some of that coming from the data sets it's trained on; hallucinations, which are a big topic; or even unsafe outputs that aren't exactly hallucinations. These things are all important in their own right, but also for building trust. How do you make sure to keep an eye on all those items?

John:

There's a concept of grounding that we would advocate most organizations think about with their AI models. Part of grounding is essentially finding a trusted content partner and saying, hey, I wanna bump this up against your assessment of the situation, whether it be through a RAG model or anything else. If you're looking for that trusted partner, and that's one role IMO plays in supporting AI vendors, you should be looking for somebody that has pressure-tested content. We have close to 750 physicians using our content every day, a deep editorial policy that has withstood the test of time and has been kept up to date, and about 65 clinicians and informaticists internally who are constantly pressure testing and checking our content as well. That's one reason the EHRs have relied on us to power documentation content. Those are good factors for AI vendors to look at when they're looking for a governor to put on some of those hallucinations or biases.
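To make the grounding idea concrete, here is a minimal sketch, not IMO's actual implementation, of bumping a model-suggested term against a trusted terminology set before it reaches a clinician. The terminology table, codes, and matching threshold are hypothetical, and the fuzzy match uses only Python's standard library:

```python
# Minimal grounding sketch (illustrative only, not IMO's implementation).
# TRUSTED_TERMS is a hypothetical stand-in for a clinically validated
# content source; a real system would sit behind a retrieval (RAG) layer.
from difflib import get_close_matches

TRUSTED_TERMS = {
    "type 2 diabetes mellitus with diabetic retinopathy": "E11.319",
    "type 2 diabetes mellitus without complications": "E11.9",
    "essential (primary) hypertension": "I10",
}

def ground_term(model_output: str, cutoff: float = 0.6):
    """Return the closest trusted term and its code, or None when nothing
    in the trusted set is close enough and the output should be routed
    to a clinician or coder for review."""
    matches = get_close_matches(model_output.lower(), TRUSTED_TERMS, n=1, cutoff=cutoff)
    if not matches:
        return None  # no grounded match -> human in the loop
    term = matches[0]
    return term, TRUSTED_TERMS[term]

if __name__ == "__main__":
    print(ground_term("Type 2 diabetes with retinopathy"))  # grounded match
    print(ground_term("quantum flux disorder"))             # None -> review
```

The point of the sketch is the shape of the check, not the matching method: anything the model proposes either resolves to pressure-tested content or gets flagged for a human.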

David:

There's a lot of discussion now about clinician shortages, particularly in primary care, and the burden that folks carry, the potential for burnout, or just mistakes and all sorts of challenges. AI can sometimes seem like one more thing that sounds good from above but isn't really gonna work. If we look at this objectively, is AI today in a place where it can realistically reduce clinician burden?

John:

I think we're just starting to get close to that. There is line of sight into key areas where burden reduction can happen. Look at some of the automation of patient engagement that clinicians often had to self-initiate; now that can be more of a triage role. Look at the rise of ambient listening and ambient transcription: it's no different than the transcription services we had years ago, but now it's automated and scaled in a different way. So I think we're starting to move in that direction. Think about it from the physician perspective, where there are shortages, and from the nursing perspective, where there's also massive opportunity, as both of those disciplines are gonna face shortages over time. So I think we're getting closer, we're getting more to the heart of some of the inefficiencies, and I do see AI having a role there over the next few years.

David:

I've been reading a lot about the huge amount of spending that's going on in AI, a lot of it on the vendor side: building data centers, buying chips, hiring people, and all that. I'm wondering whether the overall AI industry will see a return on investment, but looking here at the sector that you're serving, how should provider organizations think about measuring their return on investment? Should that be purely a financial yardstick, or clinical impact? What are the sorts of metrics that should be employed?

John:

Both, ideally. I think each health system will look at their book of business and their population of patients to determine how to flex between those appropriately, and ideally they're aligned in your value-based care model. But I think you can work on both in parallel, and both are being worked on in parallel. The revenue cycle and billing side will see more rapid achievement using AI. With some of the clinical interventions, again, if you mess up a use case there, or an esoteric presentation of a patient, the risks are higher than just getting denied by a payer. So those will take a little longer, and that's where you'll see more scrutiny from these health systems before they make big investments.

David:

On the administrative side, just to talk about revenue cycle for a minute: when computers were first introduced into healthcare, it started with financial purposes, and some physicians even wanted to use them to manage their stock portfolios before they got to the clinical side. So on the revenue cycle side, what's going on there? I can see how the AI is gonna be useful from a submission standpoint, but I could also see kind of an arms race or standoff between the providers and the payers, both using AI against one another. In some ways you could say, well, that's great, let the AIs go fight it out. But what's actually going on in that space at the moment? That may or may not be directly related to what you're doing, but.

John:

It is, I think. The translation of the clinical terminology used by physicians to those billing codes is one key role IMO has played for 30 years, so we have a meaningful role in the generation of claims and the downstream revenue cycle functions. There are certainly early entrants on the autonomous coding side that are leveraging AI and advanced machine learning models to drive autonomous coding functions, removing some of the effort, or at least focusing organizations' effort on higher-dollar, more complex coding. On the ambient side, you're seeing the interplay of the encounter and the note tied more directly to medical necessity and the billing regulations necessary to justify that invoice, and potentially to reduce a denial, or overturn a denial if it does come down from a payer. So you're starting to see the clinical or operational efficiency for a physician tied to the revenue cycle benefit, by automating the note generation while making sure it's compliant with the billable event as well. I see those as both exciting opportunities. On the flip side, yes, payers are gonna apply their own AI and ML models to look for opportunity. So, as it always has been, David, there'll continue to be an arms race here to make the funds flow continue to work.

David:

Let's talk about it from the patient perspective for a moment, 'cause we've talked about the clinical side and the administrative side. Does it feel any different to a patient? I sometimes think about how we talk a lot about the different payment models, value-based payment and so on, and whether a patient even knows they're involved in this. And how about on the AI side: does the appointment feel any different? Does the whole interaction seem any different? Are there measurable changes?

John:

Not yet, I don't think. In healthcare you've always had three different lenses: the billing lens and the payer lens that you just described around value-based care, the deep clinical lens, and the patient, interpersonal lens. All three of them have never really spoken well to each other, so you've had translation and frustration. IMO has often bridged that clinical-to-payer gap, and we actually support a lot of the patient translation as well; we have patient-friendly terminology that maps to billing codes and to the clinical terms. I think AI can play a role in translating to help patients understand the complexity that is inherent in the billing and revenue cycle process and the complexity on the clinical side. But again, that data governance and that underlying data infrastructure has to be well thought out, because if you have a patient that doesn't understand "breast cancer metastasized to pelvis, stage four," they just need to understand that it's breast cancer that has spread, right? You gotta be really thoughtful about that translation function, and that requires good data governance and good data modeling.

David:

John, what is your reaction to some of the early research on clinical accuracy using AI, whether it's a doctor using AI, a patient using AI themselves, or the doctor on their own? It's sometimes not very flattering to see what the results are, and I know there's been speculation about what it means. What's your take on this type of research?

John:

I think it's still fairly early in the game. On both sides, it's showing the potential of AI models demonstrated against very controlled data assets and in controlled settings, so it shows opportunity there. The frontier models have advanced very quickly, and that creates the ability for people to enter the market, as we talked about at the beginning. But I worked in hospitals for years, and if you go into a hospital, it can be organized chaos sometimes. Those models have not demonstrated that they can handle the complexity of one patient, let alone a unit, let alone a care team interacting with multiple patients at once. So I think there's still a long way to go before we can really assess the models' impact there.

David:

So you mentioned how AI is developing quickly, with some of these frontier models making big advances from month to month, certainly from year to year. At the same time, you've got decades of experience at IMO Health in applying structured data to improve care and support the clinician. What's the relationship between your expertise on the structured data side and making those AI models themselves more precise, more appropriate for use in healthcare?

John:

To me, we view that as one of our opportunities to have a meaningful impact on this moment at IMO. Those frontier and foundational models will improve, but the long tail of clinical terminology and clinical data will challenge them, I think in perpetuity. What IMO can offer is, like I said, a data asset that's been pressure tested and continues to be pressure tested billions of times a year, editorial policy and governance that is driven by clinicians, and constant human-in-the-loop pressure testing by our team and by those physicians out in the world. So for those that want to lean in and solve that last 80% that the foundational models can't, which is what will really help you build the trust to go to market with providers, I think that's what you should be looking for: data assets in that space to help ground your AI model.

David:

How much are you able to differentiate what you're doing from these fast fashion models that we discussed earlier? Is it easy for a customer to discern the difference?

John:

Yes. I mean, put it in front of a physician and let them document or talk the way they normally talk, the way they learned in residency and fellowship. If your model doesn't accurately reflect the language of a clinician, it's not gonna generate the trust. So when organizations use IMO, or we put products in front of them, clinicians go, oh yeah, that makes sense. When I translate a billing denial to help a physician understand why that coding may not necessarily work, they're like, oh, that makes sense to me. So I guess trust has always been the currency for adoption in healthcare, and I think it will remain so.

David:

I'd love to hear if you have an example or two to bring this down to the level of an individual patient or encounter, or even to a customer level. Any examples where your technology has really helped to improve outcomes, or even to give greater insight?

John:

Yeah, I think our work with some of our ambient partners is a great example of where we can support that. After the transcription is complete, IMO can help find, within that transcription, the potential reason for visit at the right level of specificity. If you and I were talking and I was your physician, David, we're gonna talk about diabetes, but we may not talk about it in a clinical sense: diabetes with retinopathy and insulin dependency. We're not gonna say that in a physician-patient interaction. But IMO, working with our trusted partners, can help understand the context of that patient's encounter and essentially say to the physician, it's not just diabetes, it's diabetes with retinopathy and insulin dependency. That not only has a clinical impact, because it moves to that patient's problem list and is stored as a chronic condition that needs to be managed at that level, but it has a pretty big billing impact on that patient encounter and that patient's overall acuity score. Those are examples where we're using our understanding of how clinicians have documented diabetes with retinopathy billions of times over the last 30 years, and in what context, to help that individual patient interaction be coded appropriately.
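As a toy illustration of that specificity refinement, here is a short sketch that scans an ambient transcript for context cues and proposes a more specific, codable term for the clinician to confirm. The refinement rules and phrasing below are hypothetical, not IMO's content or logic:

```python
# Toy specificity-refinement sketch (illustrative only; the rules and
# terms below are hypothetical, not IMO's terminology).
import re

# Ordered most-specific first: (base concept, required context cues, suggested term)
REFINEMENTS = [
    ("diabetes", {"retinopathy", "insulin"},
     "type 2 diabetes mellitus with retinopathy, insulin treated"),
    ("diabetes", set(), "type 2 diabetes mellitus"),
]

def suggest_term(transcript: str):
    """Return the most specific term whose base concept and all cues
    appear in the transcript, or None if nothing matches."""
    words = set(re.findall(r"[a-z]+", transcript.lower()))
    for base, cues, specific_term in REFINEMENTS:
        if base in words and cues <= words:
            return specific_term
    return None

if __name__ == "__main__":
    note = "Diabetes stable, retinopathy noted last visit, still on insulin."
    print(suggest_term(note))  # -> the more specific term, surfaced for review
```

In a real workflow the suggestion would stay a suggestion: the clinician confirms it before it lands on the problem list or drives a billing code.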

David:

A big area here that seems to be a chronic and unsolvable problem is the shortage of primary care clinicians. I've seen this being dealt with by having advanced practice nurses or physician assistants, or by trying to do telehealth. But more recently I've seen some partnerships on the AI side where, for patients that don't have an assigned physician, they'll do a kind of AI pre-visit. Are you involved in that? Do you have a perspective on it?

John:

We have organizations that we partner with that are exploring those types of clinical interventions. Again, I don't think the evaluation or the thoughtfulness that you need to deploy changes. This is probably on the farther edge of that spectrum of adoption. There's opportunity there to deal with the vast majority of maybe less acute instances, but then again, for those use cases that are more esoteric, you definitely need a thoughtful strategy to deal with them. So I think there's validity here, but I would treat it with the utmost caution.

David:

Yeah. I mean, my sense is that they're not comparing it with the ideal; they're comparing it with the absence of a primary care doctor. This patient doesn't have one and is gonna end up in the hospital, so the idea is you'll do the AI pre-visit, but then a physician somewhere is gonna review it and be in discussion. Where I was gonna go with that is to ask you: it's one thing when you don't have a doctor, but maybe that type of interaction and workflow is also good for the patient that does have a primary care physician, so that when you actually have the time to spend with them, they've really made the most of the technology upfront and they're well prepared.

John:

Yeah, I think there's a lot to think through, and if you think about the incentive alignment and how that generates trust in that patient-to-digital-physician interaction, that will be critical as this matures a little bit. If it's coming from your existing primary care, I think there could be a stronger trust dynamic. If you think about it coming from your payer or your insurance company as a kind of evaluation, there may be more skepticism. Both may have valid uses. So I think there's still a lot to figure out, not only on the tech side, but in how patients will trust this new interaction, this new mode of communication.

David:

So fast fashion is something that's here today, gone tomorrow, and these well-established provider organizations in particular are gonna be thoughtful about what they do in the long term. At the same time, they can't stand still. So I'm wondering what you're doing at IMO to help your partners scale responsibly across the overall ecosystem. What does that look like for you, your vision of it, and what do you do in pragmatic terms?

John:

Yeah, so where we play with most of our large health systems and physician practices is really ensuring high-quality data capture and data quality within their EHRs and the data repositories they're using for analytics. The first thing we're doing is making sure they understand the risk and benefit of working with vendors that are using AI and are gonna push structured data back into the EHR, and helping them understand what gets degraded if they don't do it well: what gets degraded clinically, in terms of what goes to that patient's problem list and is managed, and what gets degraded financially if you're not capturing it. Diabetes versus diabetes with retinopathy is a great example. So right now we're essentially helping them pressure test some of those AI models and think about the data quality that they're gonna push back into the EHR. For those that are concerned, we would welcome the opportunity to talk with them and say, hey, let's think about how we can make sure that data doesn't degrade. You can still get the operational efficiency, but we need to ensure high data quality comes out of that.

David:

This is a very sobering approach, and hopefully it will be picked up appropriately.

John:

It's required, and there's too much at stake not to. Put yourself in a chief financial officer's role: ambient provides a lot of value from a clinician efficiency standpoint, but if I'm starting to degrade my coded outcome, I either have to throw middle revenue cycle functions at it and spend more money there, or I'm gonna have to accept a lower reimbursement model. And that just can't fly.

David:

Yeah. No it doesn't.

John:

Yep.

David:

That's it for another episode of CareTalk Executive Features. My guest today has been John Laursen, SVP of Product Commercialization at IMO Health. We've been discussing what to look for when evaluating AI tools in healthcare to ensure accuracy and safety. I'm David Williams, president of Health Business Group. If you like what you heard, please subscribe on your favorite podcast platform. And thank you, John.

John:

Thanks, David. It's been a pleasure. Talk to you soon.