CareTalk: Healthcare. Unfiltered.

Is the AI Bubble Ready to Burst? w/ Ed Zitron



As AI becomes more integrated into our lives, many believe it will revolutionize nearly every industry, with healthcare being one of them.

Ed Zitron is not one of those people.

In this episode of CareTalk, David Williams speaks with Ed Zitron, CEO of EZPR, on why he believes AI is a bubble ready to burst and the ramifications of AI on everything from patient care to climate change.

This episode is brought to you by BetterHelp. Give online therapy a try at https://betterhelp.com/caretalk and get on your way to being your best self.

As a BetterHelp affiliate, we may receive compensation from BetterHelp if you purchase products or services through the links provided.

TOPICS
(0:24) Intro
(0:43) Sponsorship
(1:58) Should We Be Optimistic About AI?
(2:52) Will AI Investments Be Worth It?
(4:42) Why Are People Excited About AI?
(6:34) AI’s Role in Capitalism
(8:24) Are You the Customer or the Product?
(9:34) Is It AI or Just a Chatbot?
(11:47) Can AI Improve Healthcare Efficiency?
(14:55) Can AI Improve After-Appointment Care?
(17:29) The Ethical Issues with AI
(19:32) Will Nuclear Power Fix AI’s Eco-Damage?
(20:42) Will the AI Bubble Pop?
(23:11) Is There a Role for Regulators in AI?
(24:18) How Should the Public Think About AI?

🎙️⚕️ABOUT CARETALK
CareTalk is a weekly podcast that provides an incisive, no B.S. view of the US healthcare industry. Join co-hosts John Driscoll (President U.S. Healthcare and EVP, Walgreens Boots Alliance) and David Williams (President, Health Business Group) as they debate the latest in US healthcare news, business and policy.

🎙️⚕️ABOUT ED ZITRON
Ed Zitron is the CEO of EZPR, a prominent tech and business public relations agency that serves clients nationwide. He also authors the tech and culture newsletter "Where's Your Ed At" and has written two acclaimed books, This Is How You Pitch: How To Kick Ass In Your First Years of PR and Fire Your Publicist. Known for his expertise in the PR industry, Ed has been recognized four times as one of Insider's Top 50 Best Public Relations People in Tech. His work blends strategic communication with insights on the tech industry's evolving landscape.

🎙️⚕️ABOUT EZPR
EZPR was founded by Ed Zitron, an award-winning author, writer, and speaker with seven years of PR experience and over a decade as a reporter. The agency has successfully launched products and services for clients ranging from billion-dollar corporations to startups, securing both national and global exposure. EZPR excels at transforming nervous CEOs into top media sources and delivering measurable sales growth through media coverage. Known for consistently outperforming larger agencies, EZPR's primary focus is on achieving results that matter for their clients.

GET IN TOUCH
Become a CareTalk sponsor
Guest appearance requests
Visit us on the web
Subscribe to the CareTalk News

Support the show


CareTalk: Healthcare. Unfiltered. is produced by
Grippi Media.

AI seems like the future of everything. We're told it will transform healthcare by revolutionizing diagnosis and treatment, discovering new cures, and mastering administrative paperwork. But what if it's all a mirage that will end in disappointment or even disaster? Welcome to CareTalk, America's home for incisive debate about healthcare business and policy. I'm David Williams, president of Health Business Group. Today's guest, Ed Zitron, is CEO of EZPR, author of Where's Your Ed At, and host of iHeartMedia's Better Offline podcast. We'll get to Ed in a second. Masks are visible to everyone in October, but many of us have an invisible mask we wear all year long: at work, in social interactions, basically everywhere. But therapy can help you rediscover your true self so you no longer feel the need to hide behind a mask of any kind. Sure, masks are fun for Halloween, but we really shouldn't have to keep our emotions buried. That's where BetterHelp steps in. BetterHelp provides online therapy tailored to your life: convenient, flexible, and made to fit your individual needs. With just a few questions, you'll be matched with a licensed therapist who aligns with your preferences. And if you ever feel the need for a change, switching therapists is easy and comes at no extra cost. So whether you're managing stress, dealing with anxiety, or simply seeking personal growth, BetterHelp connects you with a professional who can guide you on your path to self-discovery and healing. Let BetterHelp help you take off that mask. Visit betterhelp.com/caretalk to get 10% off your first month. That's betterhelp.com/caretalk. Ed Zitron, welcome to CareTalk. Thanks for having me. Yeah, there's tremendous excitement about AI these days, but somehow you seem somewhat less enthusiastic. I wonder, why is that?
So if you look at what people like Sam Altman are promising, they're talking about this autonomous general intelligence, artificial general intelligence, whatever you call it: this idea that, even to your point, AI will be able to solve diseases and, to quote Sam Altman, solve physics, which is an insane thing unto itself. You have all of these promises, and then the actual reality is pretty much large language models, which have some utility but burn way more money than they'll ever make. OpenAI spends $2.35 to make a dollar. And so there are useful things that generative AI does, but they're vastly overwhelmed by both the hype and, well, the massive cost in many cases. It's maybe not shocking in and of itself that a technology in its early days is spending more money than is coming in. Someone would argue that's a sign of all the investment that's being made and that it's going to pan out, but you don't think so? I understand why people make this comparison, but the last two of these, by the way, that people have said that about, the metaverse and crypto, they've been completely wrong. My favorite reference here is a guy called Jim Covello from Goldman Sachs. He made the point about when people say it's like the early days of the internet: when the internet started, you needed these massive $64,000 Sun Microsystems servers to run it, and then the cost came down. The problem is that's not really how it went down. Yes, you needed these servers, but you didn't need anywhere near as many of them, and also the cost benefits were obvious immediately. E-commerce immediately reduced the cost of stores. It was just an immediate, obvious thing. And smartphones, for example: people say, well, in the early days of smartphones, people didn't think they were a big deal. Again, not really true. Covello makes this point in that report, it's called "Gen AI: too much spend for not enough return," I think.
And he makes the point that even with smartphones, there was this obvious roadmap going back to the very early 2000s saying when GPS sizes come down, when chip sizes come down, we will be able to create these devices, describing smartphones. He describes thousands of presentations in which this was laid out. No such roadmap exists for AI. And when I say AI here, I really mean generative AI, because that's the other thing: AI has been around for 10, 20, 30 years, depending on how you look at it. It actually goes back to the seventies, depending on the terminology you use. Generative AI, which is the latest boom, really is not going to do the kind of artificial intelligence that we're being sold today. If anything, I think the term AI for it is kind of a misnomer. Why are people so excited about it? I mean, to me, it seems that what happened is you've had AI around for a while, people have been talking about it, but then someone could go to ChatGPT and just say, give me a good-looking email that I can send to my boss to say he's a jackass, but where I don't look like I'm being bad. And then it kind of comes alive for them. Is that why people are so excited about it? I think it's a few things. I think you're right that people are using it and going, wow, it can write an email for me. And then there's a bunch of investor hype around it as well, because right now, if you look around outside of AI, there really aren't any hyper-growth markets left. We're kind of done. Smartphones will keep selling, but smartphones are kind of plateauing. App stores and all that, we've kind of tapped out on as well. Software as a service is already kind of reaching the limit. Sure, we'll find new things to sell, but there's not really a new business model waiting. And so you don't really have a new thing to point to.
And what's happened is people like Sam Altman of OpenAI have attached this dream of what AI could be to generative AI, large language models, which kind of resemble something that's sort of smart, but you really have to squint kind of hard. And they generate things probabilistically, meaning that if you ask one to write an email, it will say: based on the training data, the most likely series of words will be this. It doesn't know meaning. It doesn't know any of this. But nevertheless, this is a runaway marketing campaign. And it's also kind of exposing a big, big problem in the economy, bigger than AI, which is that the people running companies do not understand what their businesses do at all. They don't understand the underlying technologies and they don't understand what their workers do all day. So to them: wow, something can write emails for me. That's all they do all day. I go to meetings, I write emails. I don't code or do a real job. I just go places, and wow, I no longer have to write the three emails I write a day between the three lunches I take. Sounds good. You know, I love a lot of the things that you write. One of the terms that you've been using for some time that I love is the rot economy, and I have to say, that sounds bad. What is that? It's not good. So the rot economy is the sense that most of modern capitalism has been engineered around growth, growth at all costs. It isn't about, say, whether it's a good business, whether it will last the test of time. It's about: can it show quarter-by-quarter, year-over-year growth without fail? Because the markets love it. That's how the markets value companies. And it goes all the way back to Jack Welch of GE, who created stack ranking, which is when you chop off the bottom 10%, and the idea that layoffs can be something that's done for profit reasons rather than because of an existential threat to the company.
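The word-by-word probabilistic generation Zitron describes can be sketched as a toy example. Everything here is invented for illustration: the tiny vocabulary, the hand-written probability table, and the generate function. A real large language model learns distributions over tens of thousands of tokens rather than using a lookup table, but the core loop, repeatedly sampling "the most likely series of words" with no model of meaning, is the same idea:

```python
import random

# Toy next-word model: for each context word, the "training data" yields
# a probability distribution over possible next words. This table is
# entirely made up for illustration.
NEXT_WORD = {
    "<start>": [("dear", 0.6), ("hi", 0.4)],
    "dear":    [("boss,", 0.9), ("team,", 0.1)],
    "hi":      [("boss,", 0.5), ("there,", 0.5)],
    "boss,":   [("thanks", 1.0)],
    "team,":   [("thanks", 1.0)],
    "there,":  [("thanks", 1.0)],
    "thanks":  [("<end>", 1.0)],
}

def generate(seed=None):
    """Emit an 'email' one word at a time by sampling the distribution.

    No understanding is involved: each step just picks a likely next
    word given the previous one, exactly the squint-hard version of
    what a large language model does at far greater scale.
    """
    rng = random.Random(seed)
    word, out = "<start>", []
    while word != "<end>":
        choices, weights = zip(*NEXT_WORD[word])
        word = rng.choices(choices, weights=weights)[0]
        if word != "<end>":
            out.append(word)
    return " ".join(out)

print(generate(seed=0))  # e.g. "dear boss, thanks" or "hi there, thanks"
```

The point of the sketch is that the output can look fluent while the program provably has no notion of what an email, a boss, or thanks actually are.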
But what it means is that companies are in many cases building things not to solve a need, but to solve the need of the company to sell more stuff. So you look at Google, for example. I wrote an article, "The Man Who Killed Google Search," about this. Google deliberately made itself worse. It added more spam back into it. It obfuscated the way you see ads on the platform, so the customer had a worse experience, but Google got to show them more ads, which meant Google got to make more money. So Google was very happy with that. Look at Spotify. Spotify is like living in a minefield, except the mines are music you don't want to hear or buttons you don't want to press. User interface elements which feel confusing and kind of counterintuitive, like they're there to trick you, because they are. Look at Facebook, look at Instagram. They are built to get in the way of you seeing your friends, seeing your family, seeing the things you want to see. And that's because they must show quarter-over-quarter growth. And this hits everything. Once you see it, it's kind of hard to not see it anymore. It's like the arrow in the FedEx logo. Yeah. One of the things I like to do with Google and Facebook, when people make the sort of observations you're making, that the user experience is crappy, there's all these ads and things in the way, is to remind the user that they're not really the customer. They're the product. They're the raw material that's being fed into it. And that's the thing. You can do a decent business where you are the customer and the product. Facebook used to be a very profitable business where the person was the product. It was fine. It worked pretty well. Facebook up until about, I'd say, 2012, 2015 maybe, wasn't that bad. It was a positive-ish product, kind of evil, but it still didn't feel like it was actively fighting you every time you used it.
And that's really the thing. I think something tripped in 2019, 2020. Growth was slowing across the board. The pandemic happened, and all these companies in 2021 saw these crazy earnings and they were like, wow, we need this forever. And they got rot-poisoned. And now we're in this situation where you can't put the toothpaste back in the tube. So let's talk about healthcare. I'm almost hesitant to turn the topic to healthcare, because healthcare is almost always a bad discussion. You know, it's usually a bad experience for the patient, but at least there's some purpose behind trying to keep people healthy and then get them to be a little bit better if they're not. So let's talk about AI and healthcare. You wrote something recently about a partnership between Thrive Global and OpenAI. I know Arianna Huffington, who's excited about a lot of things, seems excited about this. Sam Altman too. So how do you look at it? Well, two con artists got together and made a new grift. I think it's nice to see they're still kicking. I'm not even going to discuss the details, because nothing exists. Arianna Huffington is a con artist. What does Thrive do? Thrive has been around forever. The only thing it's done is spent money, or paid her. She sits on boards. She does nothing. She does nothing at all. Sam Altman, same deal. He's a professional con artist. He is good at raising money for things that may or may not exist. Putting that aside, what Thrive is talking about there is basically a chatbot, a chatbot that can look at your data and do stuff. And they love doing these things. They love talking about them, because they seem theoretically reasonable, right? You can just connect a chatbot to some data and then it will work, right? That will be a personalized experience. Amazing. The thing is, if you think about it, like 11 different companies have promised this by now. You'll notice that none of them have launched it.
And that's because of a few things, chief of them interoperability. The connection between two data sets within health, you know, Epic, is difficult, to say the least. Can you imagine any kind of health data interoperating with anything else, like a chatbot? Hell no. With the amount of regulations that are around health data, it's probably not going to happen. But on top of that, there's something very craven about Thrive, and they're not the only people to do it. I don't like this push of trying to replace caregivers with chatbots. And the theory is, well, you get a 24/7 response. You get a 24/7 response from a chatbot, though. It's a chatbot. It doesn't do anything. And they're like, well, doctors are busy. This isn't going to really fix that. All it's going to do is give patients potentially bad data. I definitely understand how there's no great solution in saying you've got 24/7 access when it's just to a chatbot, not a doctor. If I think about it from the doctor's standpoint, though, they say: I'd like to spend more time with my patients, but I can't really do it. Can AI give me the opportunity to be more efficient or more helpful to my patients? Yeah, but how does putting something else in instead of the doctor fix that, is the thing. Now, if the answer is, okay, there are things that the doctor doesn't need to do, maybe. But think about it as a patient: when you go to the doctor, after waiting half an hour past the time your appointment was meant to start, you go and speak to a nurse, and then the nurse makes you wait another 10 minutes, and then the doctor arrives, and then you speak to the doctor for five minutes. This isn't an issue of doctors not having enough time with patients. This is an issue of administrative burdens never being moved. It's about overbooking offices. It's about bad administration, which isn't necessarily on the doctor's side.
I think that they want to do the AI thing to kind of fob off patients, as a means of doctors having to speak to them less. Because right now, and I say this as someone whose father ran part of the NHS in England, I won't pretend like there aren't waits there, but I've never felt rushed like I do in a doctor's office here. Nor do I, and that's the funny thing. I remember when I moved in 2008, Americans would say, well, you waited all the time at the doctor's office in England, right? I wait all the time here, every single time. But the solutions are, well, let's find more ways to get doctors away from patients, shall we? Let's handle those menial questions. The doctor doesn't answer menial questions anyway. Most patients can't even talk to their doctor without an appointment. Most patients have to go through intermediary after intermediary. So we want to add another intermediary? Great. How does that make patients' lives better? It doesn't. Ever. All it does is further distance medicine from patient. And it's frustrating to me, because people like Arianna Huffington, people like Sam Altman, they're like, well, it'll give you a healthcare assistant, it'll give you all these goals, blah, blah, blah. No, it won't. Stop pretending. It isn't doing any of this stuff. It's not built to do this stuff. It is a large language model at this point. And this whole idea of the agentic approach: I will have a healthcare agent that will be able to, I don't know, intelligently interface with a patient so the doctor doesn't have to spend as much time. Doctors are already not spending enough time with patients. That is the fact. I am not a medical expert. I am not a healthcare expert. But as a patient, and as a friend of multiple people with chronic illnesses,
I know for a fact that doctors throughout at least the West Coast, but I remember even when I lived on the East Coast, doctors are already not spending enough time with patients. The idea of fobbing them off to LLMs is disgusting to me. Let's take it from those doctors, then. One of the great things about healthcare and problem solving is you could solve 10 or 20 problems and there's still going to be a huge mess ahead of you. So instead of trying to solve the whole thing, let's say that for whatever reason a physician is in a situation where they can only have the five or 10 minutes with a patient, which is a typical thing. A lot of times as a patient when I go in, and I'm not a physician but I fairly well understand healthcare, I can't completely follow what the doctor told me. I either can't remember it or I didn't totally understand it. And the physician doesn't always realize that; they'll use a lot of terminology. Is AI useful for them to say, write a follow-up note: take this, what I was going to say, and put it in language the patient can understand? Why even go to med school at this point? You can't write a letter, cupcake? Is it too hard to write a letter? No, I get it. There are forms that aren't necessary, and I'm being somewhat glib. There are examples with that. Especially when it comes to healthcare plans, there are buttons you need to pick. If anything, for the actual use here, you kind of see a much more expensive version of it at a private clinic called Forward in San Francisco. What they have is, during the intake, someone listening. You talk and they start filling things in, and it pops up on a screen. It's kind of cool. I actually do imagine there would be a use of generative AI within that, for a more intelligent intake.
Something that asks questions customized to the patient: okay, this patient is this many years old, they have these problems. And they might say something offhandedly that the LLM catches and flags as something to check into. That is useful. Something that gets the administrative layer out of the way, but also makes it so that the patient is fully heard. Because the real problem with the five to 10 minutes is doctors aren't thinking about the patients particularly much. They are, and I'm not saying doctors don't care, I must be clear; they're overloaded, all caregivers are. But at the same time, the solutions being offered are very much: how do we get the doctor away from the patient, and how do we get the patient away from the doctor's office entirely? Which isn't good for the doctors either. No one likes that. It's just frustrating, because I am neither a medical expert nor an AI founder. Yeah. And I think within the last five minutes I've come up with a more useful use case than anything Arianna Huffington's pushing. And it gets to the grander point. These people aren't thinking of real problems. They're thinking of things they can sell, things they can market, and ways they can get headlines. And people are falling for it; every time, they just print whatever they say. What are some of the big ethical issues in AI? I mean, what part specifically? Other than, I mean, the environmental damage; the fact that you are basically boiling lakes to make this stuff happen, to make people generate a picture of Garfield with a gun; the fact that the training data is made up of copyrighted material, that people's work is being stolen; the fact that there is this big dream of replacing workers, even though it's not really going to happen based on the tech we have today; the fact that they're so excited about that, which isn't fun to watch.
And quite frankly, when it comes to health data, as I've been saying repeatedly, it seems like the big move that a lot of these companies are making is to push you away, to keep the patient and the doctor as far apart as possible, versus thinking of ways in which you can enrich the patient's relationship with the doctor. If you really only have 10 minutes, the reason I bring up the intake thing is because that's a way in which you can get more out of the patient. You can then have it digested and brought to the doctor, so the doctor can say, wait, you mentioned you had a pain in your right eyeball. That might be this, this, this; that might be vascular. Again, I'm not a doctor. And really, the ethical issues as well with people making companies in health connected to generative AI: I really hope they don't train on customer data. The only companies I work with don't do that. I don't like anything related to customer data being touched. Putting aside the legal side, the amount that you could reveal about a patient if it's fed into one of these models is really scary. There was, I think last year, a thing with ChatGPT spitting out people's personal cell numbers because of the way in which it ingested training data. And on top of this, do we really need to burn this much money on nothing? Is this really the place we want to put our cash? The planet is burning. We could be investing in climate tech. The problem is that climate tech isn't going to have a hundred-X multiplier for some VC that's already quite rich. So you mentioned boiling lakes in order to power the servers for this. I know AI uses a lot of electricity, and it seems that Google and Amazon perhaps are going to try to solve that problem by funding nuclear power plants. Should that take care of it? I mean, I'm not necessarily anti-nuclear power, so I don't mind that. What I mind is the fact that there are coal plants being reopened because of this.
I think in Virginia. The fact that we don't need this data center sprawl at all. The fact that we are ruining the climate with the emissions: I think Microsoft's are over 40% and going to blow past those targets, and there was a study that suggested it might be in the hundreds of percent. It's just really frustrating, because on top of being environmentally destructive, it's totally useless. This isn't helping society. This isn't the future. Things are not better as a result of this. There's nothing to point at and say, well, I know it burns the environment, but at least this. There is no "at least" here. It's nihilistic, honestly. Now, you talk about AI as maybe a bubble, and bubbles tend to inflate and then pop. What would that look like? Would there just be less attention and we go talk about something else, or is there real damage that would be done? Or would it just use less electricity and go back to a more mellow standard? It's less about the AI part and more about what happens next. So right now, and I kind of hinted at this earlier, there are no lands left to conquer. There are no hyper-growth markets left. We had the cloud boom, we had the smartphone boom. They tried with wearables. It didn't really work. They tried with crypto, a little bit, not that hard, but it didn't work. They tried with the metaverse, a little bit. Didn't work. So they came up with AI, with generative AI, and this is meant to make them a bunch of money, except it isn't. What will happen when the bubble pops, and it will be horrible for their stocks, don't get me wrong, but the next step is when the markets realize these companies have no other growth vehicle: Microsoft, Google, Apple to an extent as well, Oracle as well for data. I mean, there are countless others.
When the markets realize these companies can't grow forever, and it's kind of already happening, there's going to be an apocalypse with the tech stocks. They're going to see 30, 40% haircuts. And on top of that, recovering those losses is going to be really difficult when you don't have a narrative to sell of where those gains are going to come from. There are only so many people to sell stuff to. There are only so many businesses that you can sell to. And at some point, something's got to pop, and it's bad. It will be bad for a lot of people. I think startups are in a good position just because startups are obviously not exposed to the public markets. But again, investment has been primarily focused on AI when it comes to startups. It's a really rough situation, which is why I think they're keeping this thing inflated, because deflating it requires them to admit that things are bad. And also, no one's making any money off of this. It's such a small industry for the amount of money they're spending. I estimate, based on numbers I've seen, Microsoft's making two, three billion off of AI. That's like a year. That's not that much. And I worry about it. I'll be here to narrate it, obviously, but I worry about it. So we've been talking about the tech companies and their investors. What about regulators? Is there a role for regulators with AI, and are there any special considerations regulators should give on the healthcare side of AI? I think that there should be legislation that says you cannot train on customer data. That should be the number one first thing they do. It should be completely off the table, and it should be criminal if you do. I think there needs to be a big fat red line right at the beginning, just because if patient data starts getting into these models, it is bad for everybody involved. And there are companies using synthetic data, which works in small amounts. That's great. But patient data needs to not be there.
Also, I don't know how you regulate out the idea of replacing doctors with LLMs, but you should. You can use them for intake, but I don't think you should be able to use them for any actual medical advice. I don't think that should happen. I think it's immoral, and I think it shows a deep lack of respect for the patient. So we've been talking a lot about this from the expert standpoint: investors, regulators, tech companies and so on. How should the general public be thinking about AI? What you've been putting forward today is very different from what they'll be hearing on the radio or seeing in the newspaper or on television. Should people be asking questions, pushing back against AI in their workplace, on the internet, when they go to the doctor? What should the general public be doing? I think the main thing to focus on is utility. Always ask: why is this here? What does this do? And when someone gives you something vague, give them a specific question. Does this touch my data? What does this do? Why is this here? If someone is telling you what it might do, ask them what it does. Never, ever accept what it might do, because you are safe to assume it will never do that. And I think that people in the workplace and in doctors' offices are at the mercy of the people running them. But it is always fair to opt out, and it's always fair to ask as many questions as you want; and if they can't answer them, you should not touch the thing. Unless, I mean, you're at work, and then there's nothing you can really do about it. But the big thing is: don't listen to what it might do, ask what it does. Fair enough. Well, that's it for another episode of CareTalk. I've been speaking today with Ed Zitron. He's CEO of EZPR, author of Where's Your Ed At, and host of iHeartMedia's Better Offline podcast. I'm David Williams, president of Health Business Group.
If you like what you've heard, or even if you don't, please subscribe on your favorite service. And thank you, Ed. Thanks so much.
