In this episode, hosts Brad and Michael share the story of a tech-savvy plastic surgeon who embraced AI to streamline patient communication and lead generation. What started as a cutting-edge solution quickly turned into a legal and ethical dilemma. Discover the compliance pitfalls of using AI in health care, including the importance of transparency in patient communications and balancing technology with human oversight. Tune in to learn how to navigate AI in medical practices while maintaining trust and compliance.
Listen to the full episode using the player below, or by visiting one of the links below. Contact ByrdAdatto if you have any questions or would like to learn more.
Transcript
*The below transcript has been edited for readability.
Intro: [00:00:00] Welcome to Legal 123s with ByrdAdatto. Legal issues simplified through real client stories and real world experiences, creating simplicity in 3, 2, 1.
Brad: Welcome back to another episode of Legal 123s with ByrdAdatto. I’m your host, Brad Adatto, with my co-host, Michael Byrd.
Michael: Thanks, Brad. As a business and health care law firm, we meet a lot of interesting people and learn their amazing stories. This season’s theme is Compliance Fundamentals. We’re going to take real client stories, scrub their names, and build these stories around navigating compliance obstacles in the business of health care.
Brad: Yeah. And we’ll be talking about compliance a lot this season. Michael, for those that don’t know, what does compliance mean?
Michael: It’s a broad word that is used to describe kind of the whole bucket of laws that govern the practice of medicine or other health care practices. [00:01:00] And we’ve talked about this; health care is one of the most heavily regulated industries, and there are both state and federal laws that come into play. And so, compliance means that you are running your practice in compliance with these various laws.
Brad: And as a reminder, compliance is not stagnant.
Michael: Yes, there’s a misperception that compliance is a task where you check the box, you develop a policy or do a training or set your business up correctly, and then you’re done once you complete it. It’s the exact opposite: compliance is ever moving, and it becomes a way of life for a successful practice. Laws change, technology changes, employees change, enforcement changes, and all of this affects compliance for your business.
Brad: [00:02:00] Cool. Well, thanks for that summary, Michael. And before we get into today’s story, do you know about Skynet?
Michael: Brad, have you been watching the Terminator movies again?
Brad: Maybe. Why?
Michael: Well, sci-fi Brad keeps trying to show up in our shows, so that’s one. I mean, I can’t remember the last episode where you didn’t try to sneak Star Wars in. So for those that don’t know, Skynet is a fictional AI that rules over the ruined future of the Terminator franchise. And for those that don’t know what the Terminator is, it just means you’re not old like Brad, because it was a franchise that came into existence, what, three decades ago?
Brad: No, there have been other releases since then, and reboots since then. It’s still relevant and it’s still really scary stuff.
Michael: Okay, it’s just a movie, and I don’t think Skynet is something that we need to be worrying about. [00:03:00] You know, the “fi” in sci-fi means fiction.
Brad: Or is it? Did you know there is an actual program called Skynet in the real world? The National Security Agency uses this machine learning program to analyze communications data and track terrorist threats. Michael, what if it becomes self-aware like in the movies?
Michael: Well, I did hear about that Skynet-named program, but real-world Skynet is just a tool, Brad. It’s designed to identify specific threats, and it doesn’t have independent capabilities for having thoughts and taking its own actions. It’s kind of like if you compared a calculator to a supercomputer.
Brad: But technology is always advancing so quickly. What if it evolves beyond its programming? I mean, I read somewhere about how simple systems could potentially develop into [00:04:00] something even more sophisticated.
Michael: So it’s just going to magically start being able to learn on its own? Is that what you’re saying? I have to say, I got a little scared when I initially saw these crazy videos of the Elon Musk robots. I don’t know if you saw those, where these robots could spin a football and kick it. And they were doing all these things that were so realistic. And then I heard the videos themselves were created by AI, and so I have no idea what to believe anymore.
Brad: Exactly, because AI doesn’t want you to know what you’re supposed to know. So you have to admit it’s a little unsettling. It just takes one mistake, and the next thing you know, we wake up and find that these machines are actually running our lives.
Michael: You can either be scared or you can embrace it, Brad. You need to put your big boy pants on. There are so many amazing advancements from AI. Just think about your smartphone and think about our [00:05:00] assistant Siri, and how helpful Siri’s been through 18 full seasons of the Legal 123s with ByrdAdatto. So you need to stop focusing on this whole doomsday worst-case scenario and lean into it. Become an advocate.
Brad: Okay. It’s just a little bit hard to shake off after seeing all those robots wreak havoc on the screen.
Michael: Okay. Well, my prior boost didn’t work. Maybe just switch to comedies instead of these sci-fi movies.
Brad: So start watching movies that do not have killer robots?
Michael: Yes, Brad. I think that’s a good next step. We’re not anywhere close to this whole Skynet Terminator situation. Okay.
Brad: Well, here’s hoping the future stays bright and Terminator free. And for our AI overlords who are listening, I was just kidding. I’m sure you’re going to be very kind, especially to me and most of the human race.
Michael: Okay. Let’s jump into today’s story.
Brad: [00:06:00] All right, Michael, in today’s story we have a plastic surgeon who is in his early forties. He lives out on the West Coast. He actually earned an engineering degree in undergrad before going off to medical school. His practice focuses on women: mommy makeovers and breast and body contouring.
Michael: Okay. Does our doctor have a name?
Brad: Let’s just call him Dr. Arnold Schwarzenegger.
Michael: Well, that’s predictable. Yeah, good job, Brad.
Brad: Yeah. Well, now, based on Dr. Schwarzenegger’s undergrad degree in engineering, he was always interested in the latest and greatest medical devices and software to run his practice.
Michael: Did he minor in bodybuilding?
Brad: No, he did not. But Dr. Schwarzenegger had been using a patient coordinator, we’ll call her Sarah Connor, to help with his sales and lead generation, and he also had software for lead generation.
Michael: Well, it seems that we’re all in on the Terminator theme today. For those who don’t know, [00:07:00] Sarah Connor was also a character in the Terminator franchise. More importantly, Brad, I heard two vocabulary words that we need to focus in on. You said patient coordinator, and the second thing you said that caught my ear is software for lead generation. So let me start with the first one. For those who don’t know, a patient care coordinator in a plastic surgery practice helps manage the onboarding of a new patient or a potential new patient. They’re answering questions about the types of treatments that are available and the price point for such treatments. Kind of think of it like a blend of sales, education, and client experience. You know, they’re trying to help facilitate the [00:08:00] conversation of someone who’s trying to make a decision of whether to get a treatment or, most often, a surgery. And so, they are heavily involved in maintaining an open line of communication, both with the patient and with the provider on the other side of things. They’re kind of like a navigator, helping guide this potential patient through the surgery or treatment. So, Brad, talk a little bit about software for lead generation.
Brad: Yeah. I know we discussed the good faith exam last week, but as a reminder, the patient care coordinator does not replace the need for a good faith exam. It only helps with the number of questions the patient might have about a particular treatment. The concept, though, going back to the software for lead generation, is typically [00:09:00] a customer relationship management system, also known as a CRM (it’s practically tattooed on some of our good friends, our marketing friends; they love CRMs). Many companies have CRMs designed specifically for the health care industry, allowing health care providers to manage and track their potential patients, AKA leads, from various sources. That helps nurture the relationship and hopefully, ultimately, convert them into paying customers/patients. When done correctly, it’s really a great centralized platform that can talk to all these different elements in the medical industry.
Michael: Yeah. You said something at the beginning, too, that I think is insightful.
Brad: Can I just put that…?
Michael: But I want to expand on it and make it better if that’s okay.
Brad: Nevermind.
Michael: You talked about the good faith exam and the patient care coordinator not replacing it. And I think if you expand that, [00:10:00] generally speaking, when it’s done right, the patient care coordinator is not directly involved in the actual chain of care. It’s more this facilitator-type, navigator role that we’ve talked about. I did say done correctly, because we probably could do episodes where they are involved in the chain of care and it didn’t go well. Good context on the CRM. So we have this patient care coordinator that you mentioned. Dr. Schwarzenegger, did he use the CRM and patient care coordinator to help in his practice with this personal touch and this journey that we’re talking about?
Brad: Yep. He absolutely did. Gold star for Michael for understanding how a plastic surgery practice could work. However, Dr. Schwarzenegger was actually looking to AI to streamline his practice. The challenge was making the [00:11:00] AI sound less mechanical and more like a real person. And for those who don’t know, many women search the web for cosmetic treatments and consultations sometime between 8:00 and 10:00 PM. Typically that’s after dinner, when they’re having some downtime, and it’s also when most practices are closed. And according to a recent survey by the American Society of Plastic Surgeons, the most popular procedures they’re searching for are breast augmentations, rhinoplasty, and liposuction, which happened to be right in Dr. Schwarzenegger’s wheelhouse.
Michael: That makes a lot of sense. I’m guessing there’s usually a nice glass of wine involved: you get on the computer at eight o’clock and start thinking about what kind of treatments you might want to do to improve your appearance. Well, that all makes a lot of sense. And I could see how AI could really help as kind of a virtual patient coordinator during these off hours, especially with the initial inquiries. [00:12:00] But patients want a level of personalization that’s hard for AI to provide.
Brad: Yeah, exactly. And that’s what Dr. Schwarzenegger ran into. The AI on the market was just too clunky. It sounded robotic when it responded to questions, and people realized they really weren’t talking to a “real boy,” as he would put it. So he worked with a coder to develop his own CRM AI.
Michael: I wonder what the Vegas odds were that you were going to work in a Pinocchio reference. Well, that’s interesting. How did he make it more personal?
Brad: It took a lot of time and effort, but he actually programmed the CRM AI to respond to questions using his voice, the way he would respond to those types of questions. The patients thought they were actually interacting with Dr. Schwarzenegger himself. And he discovered people wanted responses immediately after reaching out, basically within a 30-minute period once they inquired. [00:13:00] They didn’t want to wait until the next day, or the next day his practice was open, for someone to read the emails and respond.
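As an aside for readers curious what that kind of setup can look like in practice, one common approach is to layer a style instruction and a handful of the physician’s own example answers on top of an off-the-shelf language model (so-called few-shot prompting). Below is a minimal, hypothetical sketch of that idea; the names (EXAMPLE_QA, build_prompt, call_model) are illustrative placeholders, not the actual system described in the episode.

```python
# Hypothetical sketch: assembling a "respond in the doctor's voice" prompt.
# This is not the practice's real system; call_model stands in for whatever
# language-model API the coder actually used.

EXAMPLE_QA = [
    # A few answers written by the physician, used purely as style examples.
    ("How long is recovery after a mommy makeover?",
     "Most of my patients are back to light activity in about two weeks, but we tailor it to you."),
    ("Do you offer financing?",
     "We do. My coordinator can walk you through the options at your consult."),
]

def build_prompt(patient_question: str) -> str:
    """Combine a style instruction with example answers so drafts sound like the physician."""
    lines = [
        "You are drafting a reply for a plastic surgery practice.",
        "Match the tone of the example answers. Do not diagnose or recommend a procedure;",
        "invite the patient to schedule an in-person consultation instead.",
        "",
    ]
    for question, answer in EXAMPLE_QA:
        lines.append(f"Patient: {question}\nDoctor: {answer}\n")
    lines.append(f"Patient: {patient_question}\nDoctor:")
    return "\n".join(lines)

def call_model(prompt: str) -> str:
    """Placeholder for the model call; returns a stub draft for illustration."""
    return "[draft reply generated by the model goes here]"

if __name__ == "__main__":
    draft = call_model(build_prompt("Am I a good candidate for liposuction?"))
    print(draft)  # In a compliant workflow, this draft would still be reviewed before sending.
```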
Michael: Wow. That’s crazy. Okay. So AI has the potential to aid in understanding patient needs, but what about the actual diagnosis? That’s a tricky area without a doctor’s consultation.
Brad: That’s where things got complicated. The AI was so good at suggesting potential procedures that it almost bypassed the need for any initial medical consultation. In fact, the AI was so good that patients were actually putting deposits down for surgeries before they ever saw Dr. Schwarzenegger. And with this newfound CRM AI, Dr. Schwarzenegger decided he didn’t need the handholding he used to have with that patient coordinator anymore, because the AI could replace her.
Michael: Well, I’m betting that didn’t sit well with Sarah Connor, and I don’t know that you want to get on the wrong side of her.
Brad: No, you don’t. I’ve seen her in action. Sarah Connor, the longtime coordinator, was furious. [00:14:00] She felt replaced, and she felt that the close connections she had with the patients were being devalued in favor of this AI. In her outrage, she may have reached out to a few patients to let them know that they had been communicating with AI and not the good doctor.
Michael: Well, let me start with saying, yes, I can empathize. Everyone has this fear of being replaced by a robot, and it sounds like she was
Brad: Yeah, I saw the movie.
Michael: Okay. Well, going to the next step, which is taking it to the patients: I’m guessing that was probably a shock for the patients to learn that they weren’t talking to or hearing from Dr. Schwarzenegger. Were they upset?
Brad: They were; some were quite upset. They felt that their privacy may have been compromised and questioned who was actually diagnosing them. Did Dr. Schwarzenegger really understand their actual needs? And it brought up some serious concerns about, obviously, doctor-patient trust and transparency. [00:15:00]
Michael: I can understand why. Curious, when did we get involved in this story?
Brad: Very late in the game. Probably, if we’re watching the Terminator series, much later into the series. Dr. Schwarzenegger had received a few letters from plaintiff attorneys expressing their clients’ discomfort – that would be the word I’d use; they might have used other words – when they learned they had been communicating with an AI and not Dr. Schwarzenegger. They were asking for refunds for medical treatments, even though it appears from our conversations that they had good results.
Michael: Well, I can say this, and you alluded to it: we have a lot of plaintiff’s attorney buddies, and I’ve never heard them use the word discomfort, and we’ve seen their letters before. But I get the picture. Why don’t we do this? Let’s go to break, and on the other side, talk about some lessons to be learned about this AI, and then I would [00:16:00] love to hear more about Dr. Schwarzenegger’s story.
Access+: Many business owners use legal counsel as a last resort, rather than as a proactive tool that can further their success. Why? For most, it’s the fear of unknown legal costs. ByrdAdatto’s Access+ program makes it possible for you to get the ongoing legal assistance you need for one predictable monthly fee that gives you unlimited phone and email access to the legal team so you can receive feedback on legal concerns as they arise. Access+, a smarter, simpler way to access legal services. To find out more, visit byrdadatto.com today.
Brad: Welcome back to Legal 123s with ByrdAdatto. I’m your host, Brad Adatto, with my co-host, Michael Byrd. And Michael, this season our theme is Compliance Fundamentals. We just heard a pretty interesting story. Maybe you can give audience members a quick little recap.
Michael: Okay. I always feel like I’m getting tested, like I have to pass an exam on whether I was listening to you.
Brad: No, we’re worried about your age, so this is an age test.
Michael: [00:17:00] You got to have my back if I miss something. So we have a plastic surgeon in his forties in California named Dr. Arnold Schwarzenegger. He has an engineering background from undergrad and a lot of interest in the computer sciences. He has a successful practice. He has a patient coordinator, Sarah Connor, who has been with him for quite some time. And the problem he was trying to solve, or maybe the curiosity that led him down a road, was that he was interested in implementing AI into his practice. He found that what was out there was a little clunky, so he designed his own system, and he designed it so well that he was able to train it to talk the way he talks and interact with the patients making these inquiries. He had found they were making inquiries during off hours in the evenings, [00:18:00] and this AI chatbot was able to interact and engage. It actually did so well that deposits were being made, and Sarah Connor’s job was no longer needed.
Brad: She was terminated.
Michael: Thank you for that. And so, she didn’t like that, notified patients, and patients started getting angry. And that’s where we left off at the commercial. Did I nail it?
Brad: You did great. You’re passing, Michael. And before we get back into Dr. Schwarzenegger’s story, this story reminds me of an article in the New York Times about MyChart using AI to draft responses to patients’ messages. Apparently many health systems are using this without actually disclosing it to patients.
Michael: I did. I saw that as well. And it does raise some serious legal issues. The main concern is this whole patient autonomy [00:19:00] and informed consent issue. Patients have a right to know if their doctor’s response was partly or wholly written by AI. There’s a breach of trust that happens, and it can be considered deceptive. There’s a lot of risk that goes into implementing these tools without clients or patients being aware of it.
Brad: Yeah. And the legal implications of AI in health care can seem overwhelming, even to seasoned attorneys, like people that have gray hair and who have been practicing 90 years.
Michael: Well, and it’s new to everyone, including us, and it’s a rapidly evolving field. And so while the benefits, the potential benefits are huge, the legal risks are significant, and trying to take all this new technology and apply it to old laws can be tricky.
Brad: You said there’s good and bad, so let’s start with the pros. [00:20:00] What are some of the advantages of using AI in medicine?
Michael: AI can significantly improve efficiency. Think about it like faster diagnoses, better scheduling for appointments, and more efficient administrative tasks. It can also improve quality of care. AI can analyze vast amounts of data to identify patterns and make predictions that a human might miss. And it can aid in personalized medicine by helping tailor treatments to individual patients’ unique genetic makeup and health histories.
Brad: All sounds amazing. But as we said earlier, there’s some downsides.
Michael: Well, the first con is that it’s a robot. It’s not a human.
Brad: Not a human.
Michael: Yes.
Brad: Not a real boy.
Michael: There’s a huge [00:21:00] liability issue because, you know, something’s going to go wrong, and the big question mark is, “Well, who bears the risk?” You can’t sue a robot or a computer. Or maybe you can, I don’t know. But if an AI makes a wrong diagnosis or suggests inappropriate treatment, who is responsible? Is it the doctor, the hospital that’s involved in the care, the AI developer, or all of them? The legal framework is still being developed and will probably take years to develop as case law gets established through “things gone wrong” type lawsuits. I also think you should touch, Brad, on patient privacy concerns.
Brad: Yeah. AI systems rely on a massive amount of patient data, and ensuring the privacy and security of this data is going to be crucial. It’s already highly regulated under HIPAA and other state privacy laws on data security. A data [00:22:00] breach could have devastating consequences. There’s also the potential for the algorithm to create biases. If the data used to train the AI is biased, it will actually perpetuate, even amplify, those biases in the results, which obviously raises some serious ethical issues. My head is actually hurting thinking about this. Mike, why don’t you talk about transparency?
Michael: Yeah. We touched on this a little bit earlier. It’s a major concern. Patients have a right to know if AI is being used in their health care and if it’s affecting their treatment, and a lack of transparency could be considered deceptive. And we talked about compliance having state and federal implications. Even at the state level, no matter what is specifically written about this, every medical board has a catchall for unprofessional conduct that is a blanket for things like this when they don’t quite fit elsewhere but the [00:23:00] patient is not fully aware of something important, a data point like this. And so, again, what do you do with informed consent when you have AI that’s involved in the decision making?
Brad: So with all that being said, and we’re focusing on our physicians here, what should these doctors be doing?
Michael: They should do what you did at the beginning with the whole Skynet conversation and just become terrified.
Brad: Become AI overlord?
Michael: Well, maybe that, but not run scared. They definitely need to proceed cautiously. We must be aware of the current regulations and any new laws, which there will be, regarding the use of AI in health care. And again, we need transparency in our policies about the use of AI in our practice, both for a practice’s employees and its patients, and we need to make an effort to ensure the patients are fully informed. We definitely, at least the way the laws are written right now, have to prioritize human oversight, using AI as a tool to assist physicians but not to replace them in the chain of care. And then finally, we must ensure rigorous quality control measures are in place to mitigate the risks associated with AI-generated recommendations.
Brad: And to your point, the benefits are there, but you really have to tread very carefully as you enter the AI world.
Michael: Yeah. And taking a step back, I mentioned this briefly a moment ago: the practical answer is, if AI is used as a tool and the doctor is still the person in charge of the care, then you can navigate [00:25:00] a lot of these risks, because there are all sorts of tools already used in the delivery of care. So if your AI is generating a response to a patient question, but the doctor is reviewing it and signing off on it before it goes out, that is different than a patient just getting a random email straight from a computer that a doctor has never seen. So AI is a powerful tool with huge potential, but there is a responsibility in how you proceed, embracing compliance with all the laws and regulations, including the transparency that we’ve talked about. And really the biggest point is focusing on patient safety: when you’re looking at this kind of cutting-edge stuff, how do we ensure that patients aren’t harmed by whatever [00:26:00] these advancements are? So Brad, you’ve been asking me a lot of questions.
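For readers who want to picture the "doctor reviews and signs off" workflow Michael describes, here is a minimal sketch of a human-in-the-loop review queue. The names (Draft, queue_ai_draft, physician_review, send_approved) are hypothetical and for illustration only; this is not any particular vendor's product or the practice's actual system.

```python
# Hypothetical sketch: AI-generated drafts are held in a review queue and nothing
# goes to a patient until the physician has approved (and optionally edited) it.

from dataclasses import dataclass
from typing import List

@dataclass
class Draft:
    patient_id: str
    question: str
    ai_text: str
    approved: bool = False
    final_text: str = ""

review_queue: List[Draft] = []

def queue_ai_draft(patient_id: str, question: str, ai_text: str) -> None:
    """Store the AI draft for review instead of sending it straight to the patient."""
    review_queue.append(Draft(patient_id, question, ai_text))

def physician_review(draft: Draft, approved: bool, edited_text: str = "") -> None:
    """The doctor reviews every draft and may edit it; only approved text can go out."""
    draft.approved = approved
    draft.final_text = edited_text or draft.ai_text

def send_approved() -> None:
    """Send only physician-approved replies (stand-in for a secure messaging channel)."""
    for draft in review_queue:
        if draft.approved:
            print(f"Sending physician-approved reply to patient {draft.patient_id}")

if __name__ == "__main__":
    queue_ai_draft("p-001", "Am I a candidate for a breast augmentation?",
                   "Thanks for reaching out. Let's set up a consultation to talk it through.")
    physician_review(review_queue[0], approved=True)
    send_approved()
```

The design point is simply that the AI never has a direct line to the patient; a licensed physician sits between the draft and the send button.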
Brad: Yeah, I’m enjoying that by the way.
Michael: Yeah. I’m going to fire back at you. Talk a little bit about potential liability. If the AI gives inaccurate medical advice, who’s responsible? Is it the doctor, the hospital, the AI developer?
Brad: You know, that will be the million, if not billion, dollar question. It’s a complicated issue with no easy answer. Likely, at this moment in time, the doctor ultimately bears the burden of being responsible for the patient care, because that’s the person who’s in charge of it. That’s the person who they went to go see. So in the immediate term, that’s definitely who it would be. But could there be arguments made, if it’s a hospital system that’s employing the doctor and he has to use it, or maybe the AI developer, that they should share responsibility? That’s a very difficult question right now. And as you said, it will probably go through the court system before we start figuring this stuff out. The lack of transparency [00:27:00] really makes it difficult to determine exactly who’s at fault.
Michael: Yeah, I agree with you. I mean, the doctor’s license is probably the key from a liability-to-the-patient perspective. But to your point, that doctor has probably got some sort of software development contract with the AI developer, and is there some sort of indemnification if what happened is the fault of the AI? It’s fascinating, right? Well, what happened with Dr. Schwarzenegger?
Brad: We spent some time negotiating refunds with some of those angry patients and discussed policies on the use of the AI and how it was being used to communicate. We highlighted the need to ensure that the practice was disclosing the use of AI to the patients to be more transparent. Additionally, even though he had built it, we recommended that Dr. Schwarzenegger, and maybe some outside experts, conduct a review of the accuracy and reliability of the AI. And finally, we discussed [00:28:00] how vital it was for the practice not to solely rely on AI for these complex medical decisions. As you said earlier, having human oversight is going to be essential for the protection of his license.
Michael: Yeah. And when we think about how Dr. Schwarzenegger used this, it sounded like the main interface from a patient’s perspective, or a potential patient’s, was a chatbot on the website. And so, legally speaking, there’s a question of whether they’re even a patient at that point. Because you have someone who’s having their wine and is curious, and the risk is lower at that inquiry stage because they’re not a patient and they’re not subject to patient privacy rights. There are other rights that consumers have, but from that perspective, it is lower risk. Yet, what makes it murky is that you’re going to have existing patients going onto the website to come back for more. [00:29:00] And so if you have that relationship and they’re sharing their personal information with your chatbot, how are you safeguarding that? And really the key is for your CRM system to be HIPAA compliant to be on the right side of compliance. What are your final thoughts?
Brad: Yeah, with using AI in medicine, there’s a lot to consider besides, obviously, what vendor to use. It’s evolving quickly, as we’ve been talking about the entire show, and practices really need to stay informed about the legal and, obviously, ethical implications of AI in health care. First, practices should adopt AI policies; you said that earlier. Second, they should regularly update these policies and, obviously, train, train, train. It’s a delicate balance. Dr. Schwarzenegger’s experiment shows the potential, but it also highlights the ethical considerations that come with AI, especially in health care. Michael, your final thoughts?
Michael: Ignoring the advancement of AI is not a [00:30:00] realistic option. Practices that do not embrace AI may get left behind. At the same time, AI is not a doctor. It doesn’t have a license to treat patients. And so, at least the way the law is currently written, the risk goes to the doctor as it relates to patient care. The doctor must render the ultimate medical judgment on the patients.
Brad: Awesome. Well, on next Wednesday’s show we’ll continue to focus on compliance fundamentals when we discuss a story that made national headlines about when and what triggers a doctor-patient relationship. Thanks again for joining us today. And remember, if you like this episode, please subscribe, make sure to give us a five-star rating, and share with your friends.
Michael: You can also sign up for the ByrdAdatto newsletter by going to our website at byrdadatto.com.
Outro: ByrdAdatto is providing this podcast as a public service. This podcast is for educational purposes only. This podcast does not constitute legal advice, nor does it [00:31:00] establish an attorney-client relationship. Reference to any specific product or entity does not constitute an endorsement or recommendation by ByrdAdatto. The views expressed by guests are their own, and their appearance on the program does not imply an endorsement of them or any entity they represent. Please consult with an attorney on your legal issues.