Listen on the Almanac, and find us and subscribe on Apple Podcasts or Spotify.
Transcript
Morgan Leafe, MD, MHA: Keep asking questions and seeking out information rather than making assumptions.
Milini Mingo, MPA, CHCP: Hello and welcome back to the Alliance Podcast: Continuing Conversations. I'm Milini Mingo, and we are recording live from the Alliance 2025 Annual Conference in Orlando, Florida. I'm excited to sit down with the conference presenters, Núria Negrao, Alexandra Howson, Morgan Leafe and Sara Fagerlie, who, just earlier today, presented the session titled “Navigating the Ethical and Legal Landscape of AI in Healthcare”. Núria, Alex, Morgan and Sara, welcome to the podcast.
Núria Negrao, PhD; Alexandra Howson, PhD, CHCP, FACEHP, E-RYT; Morgan Leafe, MD, MHA; and Sara Fagerlie, PhD, CHCP: Thanks.
MM: All right, so we do know that AI is a pretty dynamic topic, especially when you're speaking about it in the context of healthcare education. What particularly drew you, each of you, to explore the intersection of AI and CE within the health professions, and most importantly, why do you think that this is so important to our community?
Núria Negrao, PhD: So, I think I can start. Hi everyone, I'm Núria. What drew me into this is, I work as a medical writer, so I was immediately interested to see, like, how could technology help me in my work, to make me more efficient. And then I started, like, having bigger dreams and thinking how can it not only make me more efficient, but actually make my work better and help me do things that before I never had the chance of doing. So that is what drew me in.
Alexandra Howson, PhD, CHCP, FACEHP, E-RYT: Hi, this is Alex. So, like Núria, I'm also a medical writer, and the thing that really drew me in was the potential for creativity. I know that one of the things that we talk about a lot is how AI can potentially dampen creativity and make everything homogenous. But, you know, I'm an independent writer. I work from home, solo, in an office. I don't have somebody to kind of bounce ideas off on a regular, you know, on a frequent basis. So, I think, certainly at the beginning, I saw AI as an opportunity to have that kind of peer on my shoulder, that research assistant to bounce questions off and kind of get ideas for how I might think about, you know, approaching certain types of work.
Sara Fagerlie, PhD, CHCP: Hi, this is Sara. Thank you for having us. For me, just last year, I made a commitment to learn more about AI, and I do content development, so how that can fit into those pieces. And as I started educating myself and learning for myself, I started thinking about the larger framework. And you know, there's a lot of implications, ethical, legal and social implications, and we had a great session on bias yesterday, and so I just started thinking about those things, and I talked to this great group and we, you know, wanted to explore it in the session.
Morgan Leafe, MD, MHA: Hi, this is Morgan, and like the rest of the team here, I'm also a medical writer. I also happen to be a physician by training, and I have a master's degree in informatics, and I'm board-certified in the physician world as a clinical informaticist. So, I've always had an interest in the intersection between technology and healthcare and AI, what we now think of as traditional AI, and ways to implement that into the healthcare world. So, when generative AI came along, I became, of course, very interested in this new technology. ‘What are the implications in healthcare?’ was first and foremost on my mind, and I do write a number of blogs about that, and it sort of naturally led into, ‘okay, well, there's great implications in the healthcare world. It's very vast. What about in our world? What about when I have my CME hat on? How can we be using this to both perform our roles better and bring better education to the physicians and the patients that we serve?’
MM: Thank you. So, speaking of that, let's talk about your session that you had today. Can you give us an overview of the key ethical and legal considerations discussed during that session, particularly in regard to plagiarism and transparency in AI tools?
AH: Well, I can start with plagiarism, if I may. You know, plagiarism is word theft. It comes from the Latin word plagiarius. And you know, as humans, when we are working with content, if we are lifting content that somebody else has created and using it in our own content without attributing it to the original creator, we're plagiarizing. And there are lots of different ways to plagiarize. We can paraphrase poorly, and, you know, for a lot of people, it's an inadvertent thing. They're not conscious that they're doing it, and sometimes it's very blatant. Now, if we can do all of those things, we know that AI can do those things too. We know that when we ask generative AI tools to generate text, we don't know where that text is coming from, and so we don't have the source, we can't check it, we don't know where it's stitching all that content from, and so there's always the potential there that what we are receiving is already plagiarized. So, I think that's one thing to think about in terms of plagiarism. You know, there are workarounds. There are things that we can do to combat that, in terms of, you know, writing our own content up front and asking generative AI for ideas on how to improve or reorganize, but we need to kind of check that too. And there also are some tools that will retrieve sources for you. They're not always true and accurate, but at least that kind of gives you another step up the ladder, as it were, for checking your sources. So, you know, plagiarism is always a potential in the work that we do anyway, and I think it is probably compounded a little bit with AI.
SF: And just talking a little bit about transparency. You know, there are many levels of transparency, but just thinking about disclosing when you're using AI in particular, that's an action we can take now. So that's something we talked about during the session: you can choose to be transparent now. Now, what does that look like? You know, many organizations have to decide what that looks like. How deeply do I go in my referencing? And that's not just thinking about your internal organization, but also, you know, any vendors or faculty or freelancers that you work with, you know, how are you disclosing that? So, I think transparency is the one thing that's very actionable right now. There's a lot of ethical and legal gray area right now, but as far as transparency, that's actionable; we just have to decide what framework that goes into, you know, how to put that out there.
MM: And privacy is also a major focus, especially in healthcare. So how did the session address protecting sensitive patient data when working with AI?
AH: So, you know, we did talk about patient data a little bit, and certainly that's one of the themes that came up from the group, because patient data is governed by HIPAA, and so if you're using patient data at all, you always want to be conscious about that privacy. We did talk a little bit more about learner data and privacy precautions in relation to learner data, because one of the things that we hear about from our peers in the field is using generative AI tools to analyze outcomes data, which involves uploading information about learners into some kind of system. And so, you know, we had a very productive conversation, I think, around what can we do to make sure that learner data isn't exposed to privacy violations? You know, one of the things that we talked about involved, you know, de-identifying and anonymizing data. That may or may not be enough, depending on the kind of tool that you're using, depending on your internal organization and what their policies are, depending on your own gut feeling about whether that's the right thing to do, and that's where ethics really comes in, because ethics is about practice. It's about our personal orientation to the world, versus kind of legal frameworks where there's a little more prohibition and permission. So, as Sara was saying, that's the transparency part. We have to kind of act that out. We also talked about the potential for asking learners how they feel about, or if they give their permission for, uploading their comments, say comments in response to open-ended survey questions. How do they feel about that? Do they give permission? Because that's at the heart of consent.
MM: Also, one of your session's objectives focused on strategies for responsibly integrating AI into CE initiatives. So, can you share some examples of AI tools that effectively balance innovation with strong data protection, and have you incorporated any of these tools into your work?
NN: Okay, hi, so this is Núria back again. So, I think what we can think about, when we're thinking about tools that balance innovation with data protection, is you can think about using what we call open tools, so tools that are on the internet out of the box, I'm talking like ChatGPT, etc., where whatever you're inputting is going to their servers, being transformed, thought about on their servers, and coming back to you, versus closed tools, or tools that you have in house, right? And also, with the open tools, you can use the free version, or you can use tools that say clearly, like for the longest time, Gemini said clearly that we will train on your data, Claude said clearly we will not train on your data, and with ChatGPT, it depended, right? So, I think there's a need to be a little bit more careful in choosing your tools depending on what is the data that you're inputting, right? So maybe something that you really want to safeguard, the same way you would go and put it in a safe deposit box, when we're talking about digital assets, maybe that you don't put on anything that's going to go on the internet. And if you want to use AI for that, that means that you need to have a server, an in-house server, not connected to the internet, and you can use AI for that. I am aware that that takes AI out of the hands of some companies, smaller companies or individuals that maybe don't have the money to have Nvidia chips in their house, so those people, maybe they don't use AI for that type of data. And then you have the difference between the tools like Claude that say that they will not train, and then, like, okay, but do you trust them? So, you need to decide, how important is this piece of data, and can I use it? And the other thing I want to say is that Word is not the best place to do accounting. Excel is the best place to do accounting. Same with AI tools. Not all AI tools are the same. Some are more for doing data analysis specifically; because we talk a lot in our area about qualitative data analysis, there are tools that are built for qualitative data analysis. I would go to those tools first for qualitative data analysis, rather than go to a tool that is not built for that. All right, so I think fitness of tool is also something that people need to consider. As a medical writer, there are tools that are better for researching, for example, Perplexity; for example, the very new tool from Stanford, Co-STORM; Google just came out with Gemini Deep Research. Those are good for researching. They are built to go on the internet, find things and bring them back, versus tools that are better for deep thinking, for reasoning, for strategies. So that would be like an o3 or o1, which is different from 4o. And these people, they really need to come up with better names for their models. And then tools that are better for writing, and for different types of writing, there are different tools. So, a tool for purpose is something that people need to start thinking about. And instead of having, like, a Swiss army knife, maybe start thinking about specific tools for purpose, like we use Word for one thing, Excel for one thing, Adobe for one thing; we don't use Word for everything.
MM: That's a great point. Basically, use critical thinking as far as what AI tool you would use based on the actual project or whatever you're working on. And I love your analogy regarding the safe deposit box as well. So, you mentioned earlier that you had interactive group activities. What were some of the most compelling insights, the memorable comments or impactful questions that stood out to you all from the actual audience? What was that thing that made you say, ‘wow’?
ML: So, we did two different group activities during our talk. This is Morgan, by the way. And what I really appreciated wasn't so much a ‘wow’ comment, but really seeing the wheels turning in people's heads. So, we did a second group activity about designing an AI guideline for your institution, and people were tossing out really great concepts that are very important for guidelines, and they maybe needed a little assistance with, okay, ‘how would you frame that so that it can be some type of rule, recommendation, guideline for an organization?’ So, for example: well, we don't want data that's not de-identified, with patient information, put into a generative AI tool. Very reasonable. So how can we frame that so that it's something that works for everybody in your organization? So, watching people work through the steps that you need to develop your AI guidelines, I think, felt very rewarding for us, and I hope also for the audience.
AH: Yeah, I was just going to add to what Morgan said there. I mean, for me, I completely agree. I think it was really interesting to see, because at the beginning of the session, people were pretty quiet, but, you know, they were able to start thinking about how they would apply what we were talking about to their particular circumstances. And one of the things that really kind of came out for me is that overview, that kind of global overview of risk mitigation, and all the people, and Núria talked about this as well, that you need to have on board within an organization in order to make sure that there's a kind of streamlined AI adoption process. So, I think it was really interesting to hear that people are starting to think about that in a way that, you know, they probably weren't six, nine months ago.
MM: And speaking of that, you did have your participants focus on creating AI guidelines. What tips would you offer listeners looking to develop similar guidelines, and most importantly, what would you consider the most important first step?
NN: Okay, so tips for designing guidelines. So, I think the first or critical step is you need to, like, kind of know the baseline: know, in your organization, is anyone using it, and how are they using it? So know what is already being used and how it is being used. And then I think you need to go step by step: can you use AI tools for text generation? Can you use it for image generation? Can you use it for video generation, etc.? And can you use it only for internal stuff? Can you use it for external stuff? Which tools exactly? For which steps exactly? Do you need to say if you've used the tool? So, for example, if you send me an email and I tell Copilot or Google Gemini to reply saying this, do I need to tell you that I used it, do I need to put ‘NB: used AI’ to reply to your email? Yes or no. What if I'm talking to a client? Can I use AI to email a client, yes or no? And then, what steps are you going to take for risk mitigation? What steps are you going to take, like, before you input the data and once it comes out? What steps are you going to take for validation, to make sure that everything that is produced at the end of the day you are okay with? So there's the principle of responsibility. The AI is not responsible; as the humans, we are responsible. So, it is okay to use tools, but we are ultimately the responsible ones for the output, so we need to think about, if I am responsible for the output, what are the safeguards that I will put in place to make sure that I am happy with what comes out? So those are the things, to really think deeply and go step by step. It seems boring, but I think people need to really go step by step, use case by use case. These are our rules. These are our principles. This is why.
SF: And this is Sara, just to add a little bit to that, just thinking about companies or organizations that are just starting to maybe formulate a task force. I think AI varies a little bit, so you still need your legal, your IT, you know, just for acquisition and making sure everything's on board, but the people who know how it's going to be implemented are the people who are using it. So, IT is not going to know how AI fits into every job a little bit differently. So just thinking about that and making sure you've got the right people, you know, on your task force, or your committees, or however you're doing it, to start that, to get them in the right space.
MM: Love that, awesome. So, your session was very informative. I had the pleasure of actually attending, and it made me want to learn more. So what resources or tools would you recommend for listeners to deepen their understanding of the ethical and legal aspects of AI in healthcare, beyond your session?
NN: So, in healthcare specifically, I haven't found, like, one proper source that talks about the legal aspects of AI in healthcare, one source that I can point to. I think what people need to do is, if it's podcasts that you like, if it's news articles that you like, if it's LinkedIn people that you like, what I would say is find the people that are invested in AI and talking about AI and follow them, and you can maybe subscribe to some newsletters or something like that. I think a lot of people who are talking about AI all the time, they do go into these deeper thoughts. Off the top of my mind right now, someone that I think everyone who works in education should be following is Ethan Mollick, because he is in business, yes, but his thing is, how can we use AI in education, and how can we teach people to use AI appropriately? And he does go into some of, what are the ethical considerations of all of this? A second person that I would recommend for people to follow is Paul Roetzer from the Marketing AI Institute. He's in marketing, yes, but he thinks very deeply about the implications of AI and the use cases in business, and he's very focused on AI literacy. So that's another good person to follow. And then for healthcare and AI in general, I would recommend people follow Dr. Andree Bates. She's from the UK, and she has been working with pharma for, like, more than 15 years, or 20 years, I think even, on how to integrate AI into pharma and into healthcare, and she has, like, real use case scenarios that we don't usually hear talked about. Dr. Andree Bates has, like, really cool ideas. So, I think if you want a little bit more innovation, follow her. I also follow a person who is more on the protecting-privacy side of things, Louiza Jarovsky, and she's a legal person who is interested in privacy and the internet. So, she wrote a lot about cookies and things like that on the internet, and now she's transitioned to AI. But for exactly what we did here, I didn't find any one person to follow; maybe my colleagues did.
AH: No, I would only add, not in terms of people to follow, but on the content side of things, the Copyright Clearance Center actually blogs quite frequently around the implications for medical communications. And so that's definitely something, you know, I would recommend following, signing up for.
SF: And I would add that, you know, we need to understand how clinicians are using AI as well. You know, not just how we use AI in our everyday jobs, but the people we're educating, how they are using AI in their jobs. And so there are now more journals; the New England Journal of Medicine has an NEJM AI journal now. So I think reviewing the literature is also an important piece now.
ML: I don't have any specific sources to recommend necessarily, but I think it's important, besides seeking out sources, which is important, podcasts, newsletters, to learn more. I see a lot of people not asking questions about AI and generative AI, and instead making assumptions about what it can and can't do, what is legal, what is not legal, what is ethical, what is not ethical, and I think it's really helpful, especially as we know these tools are going to keep changing rapidly. So, an answer you get to a question today, you might not get the same answer a year from now. So keep asking questions and seeking out information rather than making assumptions.
MM: Thank you. So, as we bring our conversation to a close, I'd like to hear your thoughts on the future. How do you see AI shaping our field over the next maybe three to five years?
ML: This is Morgan. I think that the use of AI is going to be incredibly normalized over the next three to five years. And we may look back on this time and say, isn’t it silly that we had so many conversations about that? The way we now say, don't use AI, don't use generative AI, or kind of treat it as a dirty way to cheat at something rather than a tool to enhance your work; a few years from now, saying don't use AI will sound like saying don't use Google to look something up sounds now. It's just going to seem silly. It's certainly growing. It's absolutely going to become better, and I see it becoming very normalized and part of our everyday work, much more so than it is now.
SF: I would agree with that; it's going to impact our industry tremendously. That said, I think we'll be using it a lot. You know, the impact on jobs, the impact on, we don't know those things right now, but what I would guess is we won't have a lot of answers to some of the harms, like in Dr. Shepherd's talk, and those are things we really need to start thinking about, because as it's moving and growing, the other piece is harder, right, to address. And so my guess is we'll be using it more, not having a lot of answers, and that's a little frightening. So just sort of trying to really think about that, especially us as CPD providers.
AH: I agree. This is Alex. I do think there's a real potential for content homogenization. And so, to reiterate Morgan's point around critical thinking and really thinking deeply about how we're using these tools and what we're satisfied with, and being really careful to monitor that quality is not being compromised, because I think, you know, one of the things that we see in the marketing field, you know, as Núria kind of flagged up, is that there is a lot of content that is just so homogenous, you know, everybody's saying the same thing, and we definitely don't want to get to that point, because I think one of the things that makes our field unique is that we all have creative and unique ways of developing education.
NN: So I'm going to ask that you ask me again in five years, because I think this is a fool's errand. I don't know. Imagine AI does something really dumb and they really stop the train, you know, and burn all the servers; we're not going to be using AI then. Is there a possibility that AI does something really dumb and they burn the servers? Yes, there is a possibility. I think it's a slim possibility, but I do think that there is a possibility, and I mean, if you talk to all these people that are developing the tool, the scientists that are doing this, they put it at, like, a catastrophic risk of, like, 10%, right? 15%? A 10% catastrophic risk, like, that’s a big risk; for less than 1%, women cut off their breasts, you know, because they have the BRCA1, BRCA2 genes. So, like, a 10% risk is a high risk. So, that is one possibility. Barring that, I also don't know how AI will be used; it is really hard to predict. So, for example, with other technologies, when they came out, we used to predict that this is how people are going to be using the technology, and then now, like, many years later, we see that is not how people use the technology. So you see a lot of people talking about how AI will go and buy your ticket for you, decide where you go on vacation and buy your ticket for you. Maybe, maybe not. We don't know, right? But I do agree with everyone else that I think AI will become the water. It will be embedded in so many things, and we will not even notice, if things don't go the catastrophic way, and it will be more and more embedded in how we do things. And I also think that we will calm down. We will figure out what the best ways are to use it. In the same way that, I don't know, like, I was alive in ‘97 when Word came out, and everyone used WordArt to create their Word documents, and in PowerPoint we had, like, all the animations. You would go dizzy, and now no one does that. Like, I think in less than five years, people stopped using WordArt and they stopped using all the animations in PowerPoint. I know this is a little bit different, but I do think that, expect a lot of silliness in the beginning and then people calm down.
MM: Yes, as you've said in your session, this is the Wild West, so you never know what's going to happen in five years. But thank you all for attempting to look into the proverbial crystal ball. And on that note, I want to thank all the listeners for tuning in to this special episode of the Alliance Podcast: Continuing Conversations, recorded live from the Alliance 2025 Annual Conference. And a huge thank you to our guests, Núria Negrao, Alexandra Howson, Morgan Leafe and Sara Fagerlie, for sharing their insights on the ethical and legal implications of AI in healthcare. As we've learned today, the integration of AI into healthcare education is both a challenge and an opportunity, one that requires us to stay informed, proactive and committed to ethical practices. We hope that today's conversation inspires you to explore more of AI's potential, and to do that in a responsible way within your work. If you're interested in learning more, please check out the resources mentioned during this discussion and stay connected with us, the Alliance, for updates on future conferences and educational opportunities. Until next time, I'm Milini Mingo. Thanks for listening, and keep the conversation going.