Transcript of Episode 57 – Live From #Alliance25: Next-Generation Adaptive Learning in Continuing Education: The Impact of AI-Synthetic Humans
Tuesday, April 8, 2025


By: Andrea Zimmerman, EdD, CHCP; and Bretten Gordeau

Listen on the Almanac, and find us and subscribe on Apple Podcasts or Spotify.

Transcript

Bretten Gordeau: Because it's not going to fully replace us. It can replace some tasks, but it can be an integral part of education for us, a great assistant.

Andrea Zimmerman, EdD, CHCP: Well, hello and welcome back to the Alliance Podcast, Continuing Conversations. I'm Andrea Zimmerman, and I am the senior director for accreditation and compliance at HMP Global, and I'm also a member of the Podcast Task Force for the Alliance Almanac. I'm thrilled to join you live from the Alliance 2025 Annual Conference in Orlando, Florida. This year's conference explores groundbreaking advancements like AI-supported synthetic humans that are transforming healthcare continuing education by creating realistic, engaging and scalable learning experiences. Now I'm sitting down with conference presenter Bretten Gordeau, who will be presenting the session, "Next-Generation Adaptive Learning in Continuing Education: The Impact of AI-Synthetic Humans." Brett, welcome to the Alliance Podcast.

BG: Hi. Thanks for having me.

AZ: Let's start off with CE transformation and AI. These are two key topics that this year's conference is exploring. What initially drew you to speak about these topics, and why do you think they resonate so strongly within our community today?

BG: So I think that's a great lead-in for this discussion, and also to set up the session. You know, what drew me to want to speak about transformation and AI is really the potential between these two forces and how they will redefine how we learn and grow, especially in and around confidence and competence. For the last 24 years, working as a CPD professional, I've always been drawn to disrupt, and I believe that there is change that is needed, right? These are opportunities to transform how we learn, and that's going to be necessitated as the future unfolds for us. And when we think about CE in general, and CME, it's always been about helping professionals stay educated, stay competent, stay effective. But our traditional models have often struggled to keep up with the change, right? Information moves faster now, and we have different challenges within health systems now, because we see more and more corporate health systems, and those challenges affect the amount of time that we have to learn, too. And when we think about this, what works for one learner doesn't necessarily work for another, right? So we create negative training or education, and then what happens is we create more gaps. In terms of that, when we think about AI, it isn't a buzzword; it's reality now, it's reality for us. We'll talk a little bit more about that, but it is a force multiplier, and AI doesn't just accelerate learning. It also very much personalizes it, and I think that's part of the key: in transformation, we have to think about personalization, right? It moves beyond a one-size-fits-all model. It creates that adaptive, tailored experience, and it meets learners exactly where they are. So it's that just-in-time learning that we talk about, but more importantly, it's also real-time gap analysis and having something adapt to you, to the way that we speak and the way we respond. In terms of practice, we know that it's not robotic. It isn't simple with a patient, and it can change from one patient to another, and what you expect is not always what you get; you have to expect the unexpected from them. And if we can adapt to that, we become better educators and better clinicians, and we have better health outcomes. So that excites me, because I think all of this transformation is obviously resonating in our community right now, and AI is opening those doors for innovations like synthetic humans and hyper-realistic avatars that are ultimately going to revolutionize training. Imagine for a minute a clinician practicing a delicate patient conversation, and that AI-synthetic human responds in real time with emotions and body language, and even cultural nuances that we may not find in a standardized patient, or in patients that we may or may not ever see, in terms of that complex diagnosis. But also, for a patient, utilizing these means having someone there who is relatable and ties directly back into the physician training. And when we think about having something a patient could speak to, that is empathetic and speaks to them the way they'd like to be spoken to, it doesn't even need to be a human avatar; it can be a cartoon character for children.
And if we think about the medical jargon, trying to explain something to a child is not easy, but what if a character did that? So those are the potentials. Simply put, it's moving fast, and we're at the point in education where we have to be as dynamic as possible.

AZ: Thank you. I think that covered several of my questions. But it also sounds like you're really opening the door to talk about individualized learning. Can you speak to how this would inform individualized learning and address some of those gaps?

BG: So, when we start thinking about how AI works, right? It doesn't work in the sense that we're going from one question to another; it can be dynamic. If we take it down one path, that opens up another door. So let's think about it like a "choose your own adventure." There isn't necessarily a right or wrong answer, because with a patient there isn't necessarily a right or wrong answer or question. So when we start thinking about adaptation in that sense: if you're speaking to a synthetic human in a particular tone, it may respond to you in a negative tone if yours is negative, or a positive tone if yours is positive. So there's that adaptation from your bedside manner, for example, to the response of the patient, which is important. You see that with standardized patients in typical simulations and facilities, or when you're using standardized patients at CME events. So the adaptation aspect is so important, because it's able to look at your gaps, see how you respond, and then adjust itself to you and correct you, if needed.
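
To make the "choose your own adventure" idea concrete, here is a minimal sketch of a tone-adaptive, branching patient scenario. It is illustrative only and is not the platform described in the session; the scenario graph, the keyword-based tone heuristic, and the responses are all invented for the example (a real system would use a dedicated affect model rather than keyword matching).

```python
# Minimal sketch of a tone-adaptive, branching patient scenario (illustrative only).
# The scenario graph, tone heuristic, and responses are invented for this example.

NEGATIVE_CUES = {"just", "hurry", "whatever", "listen", "now"}
POSITIVE_CUES = {"please", "thanks", "together", "understand", "help"}

def classify_tone(utterance):
    """Crude keyword-based stand-in for a real sentiment/affect model."""
    words = set(utterance.lower().split())
    score = len(words & POSITIVE_CUES) - len(words & NEGATIVE_CUES)
    return "positive" if score >= 0 else "negative"

# Each node branches on the learner's tone rather than a single "correct" answer.
SCENARIO = {
    "intro": {
        "positive": ("Patient relaxes and shares more symptom history.", "history"),
        "negative": ("Patient becomes guarded and gives short answers.", "guarded"),
    },
    "guarded": {
        "positive": ("Patient re-engages after you acknowledge their concern.", "history"),
        "negative": ("Patient asks to end the visit. Feedback: revisit rapport-building.", None),
    },
    "history": {
        "positive": ("Patient agrees to discuss treatment options.", None),
        "negative": ("Patient hesitates about the plan. Feedback: check shared decision-making.", None),
    },
}

def step(state, learner_utterance):
    """Advance the scenario one turn based on the learner's tone."""
    tone = classify_tone(learner_utterance)
    response, next_state = SCENARIO[state][tone]
    return response, next_state

if __name__ == "__main__":
    state = "intro"
    for line in ["Just tell me what's wrong, I'm in a hurry.",
                 "I'm sorry, let's slow down. Please help me understand."]:
        response, state = step(state, line)
        print(response)
        if state is None:
            break
```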

AZ: Well, you mentioned that the synthetic humans are highly scalable and accessible, which is a benefit for remote and underserved areas in accessing high-quality education. Can you share an example of how synthetic humans have contributed to healthcare continuing education access?

BG: So when we start thinking about rural or underserved areas, these aren't widely used there yet. You know, we're in the infancy of this at this point. We have used synthetic humans for about three years now, and we have slowly moved to introduce the more interactive models, where you're talking to them in real time. That development has occurred over several years to get to the point where it's at now. But when we start thinking about that development and moving toward rural and underserved areas, that's actively underway. The tools, like I said, are relatively new, so it's going to take time to scale them, and it's going to take time for people to get used to using them. You know, we're not creating the Terminator here. It's not the Terminator, but it is a useful tool to interact with. So the promise is pretty clear. Imagine a future where community health workers in a remote region can log into virtual training platforms and interact with synthetic humans that are tailored to specific or local health challenges, like managing an infectious disease outbreak, for example, or addressing maternal or child health, or anything else that may be dynamic, because you can update this content so quickly, changing it in minutes. That becomes a way to keep content and learning from being stagnant and to make it active. So again, that goes back to the agility of it. For us, in what we call the "cipher eight," which is the synthetic human, it adapts culturally and linguistically; they speak about 30 languages in real time, and their knowledge is embedded for them to believe exactly who they are, much like we do. We have belief systems, and that's how they are: they have belief systems. So ensuring that the education is both relevant and impactful is important, and as the technology refines, the focus on creating solutions for accessibility and scalability will reveal itself, right? Because this is going to become exponential. We're not even to the tip of the spear yet. We're at the bottom of the spear. We still have to rise. So it's going to take some time.
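
As a rough illustration of the "update content in minutes" and embedded belief-system points, here is one way a synthetic-patient persona could be represented as editable data. The field names, class name, and sample values are invented for this sketch and are not the configuration schema of any specific platform.

```python
# Illustrative sketch only: a synthetic-patient persona as editable data, so the
# language, local health context, and embedded "belief system" can be swapped quickly.
from dataclasses import dataclass, field

@dataclass
class SyntheticPatientPersona:
    name: str
    language: str                      # e.g. "en", "es", "sw"
    locale: str                        # community the scenario is tailored to
    health_context: list               # local challenges the case should surface
    beliefs: dict = field(default_factory=dict)  # embedded belief system

    def briefing(self):
        """Render the persona as a scenario briefing for the learner."""
        context = ", ".join(self.health_context)
        return (f"{self.name} ({self.locale}, speaks {self.language}); "
                f"key local issues: {context}; beliefs: {self.beliefs}")

# Updating the case for a different region is an edit to data, not a rebuild.
rural_case = SyntheticPatientPersona(
    name="Amina",
    language="sw",
    locale="remote district clinic",
    health_context=["maternal health follow-up", "infectious disease outbreak"],
    beliefs={"care_preference": "prefers guidance from community health workers"},
)
print(rural_case.briefing())
```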

AZ: That's hard to believe, but I'm excited to hear about all of this. So you mentioned empathy, which is not something that we necessarily think about in regards to AI. You've mentioned that synthetic humans can also serve as empathetic AI assistants. How do they do this, and what impact does it have, not only on the learner but also on the patient?

BG: So when we start thinking about empathy, it's probably something that, if we looked at surveys, we might find is waning at this point in time, right? And I think that's because practice moves so quickly. So when we think about the application currently, you think about chatbots. Chatbots are being used as AI assistants, right? You can go and utilize them for various aspects, whether that's cross-referencing information for the patient or for the doctor or the clinician. But they're very limited, because it's just a text-based interaction. There may be some voice, and they're built on large language models, usually from the larger corporations. So think about driving those chatbots into a version that is more human-like, giving it a human form, one that not only talks but also listens, observes and responds with life-like empathy. That's important. It's hard to get that from text. You can get some of it, but think about emotional context in a voice, an expression on the face, or a pause just as it listens, and then it proceeds with its empathetic response. This is the vision, particularly behind synthetic humans: bringing AI to life in a way that feels natural, personal and engaging, deeply engaging, for that matter. In terms of that, we focused on mimicking human behavior and the psychology of communication, right? How do we communicate? What does that look like? And how human can we make that? Because it ultimately needs to be authentic, and it has to be empathetic. So we use visual recognition as a part of that, interpreting nonverbal cues like we do naturally and adjusting tone and language to respond correctly, or empathetically. When we think about learners, it creates a safe environment to practice real-world interactions. We're not practicing on our patients anymore; we're getting a chance to simulate that ahead of time. For example, a healthcare professional can work with a synthetic patient on managing diabetes. They receive immediate feedback to build both their clinical skills and their emotional intelligence, which is important, because the synthetic human may not respond the way that you wanted them to respond or thought they would respond. You might find that they're resistant to particular treatment strategies. So how do I use shared decision-making to drive them to the proper decision? In terms of patients, it can be a powerful connection for patient care, because that same synthetic human, with that same database, if you as a physician share that synthetic patient with them, can serve as a digital health coach. It provides patients with consistent, clear and empathetic guidance without the confusing medical jargon we mentioned earlier. But more importantly, from the outcomes perspective, we can see how the patient interacted, and on the training side, how the clinician interacted. And if you can tie those together, well, then you're really going to start identifying some gaps.
So this ensures for us an alignment between what clinicians are learning and what patients are experiencing, which ultimately fosters trust and improves health outcomes, and I think it also creates a seamless journey for the patient.
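
To illustrate the idea of interpreting nonverbal cues and adjusting tone, here is a minimal sketch of an "empathy layer" that maps detected cues to a response style before the verbal reply is generated. The cue labels, rules, and function names are invented for this example; a real system would rely on dedicated vision and affect models rather than this lookup table.

```python
# Illustrative sketch of an "empathy layer": observed nonverbal cues -> response style.
# Cue labels and rules are invented; real systems would use vision/affect models.

from dataclasses import dataclass

@dataclass
class NonverbalCues:
    facial_affect: str      # e.g. "distressed", "neutral", "engaged"
    long_pause: bool        # the speaker paused before answering
    speech_rate: str        # "slow", "normal", "fast"

def choose_response_style(cues):
    """Pick tone, pacing, and an opening move from observed cues."""
    if cues.facial_affect == "distressed" or cues.long_pause:
        return {"tone": "warm", "pace": "slow",
                "opening": "acknowledge the emotion before giving information"}
    if cues.speech_rate == "fast":
        return {"tone": "calm", "pace": "measured",
                "opening": "summarize what was heard to slow the exchange down"}
    return {"tone": "neutral", "pace": "normal",
            "opening": "continue the clinical discussion"}

style = choose_response_style(NonverbalCues("distressed", long_pause=True, speech_rate="slow"))
print(style)  # this style would then condition the synthetic human's spoken reply
```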

AZ: Thanks. For those who may be unfamiliar or just starting to explore this technology, can you explain what AI-supported synthetic humans are? How are they currently being used in healthcare continuing education? You've already touched on that a little bit, but please go in depth.

BG: Sure. Let's talk a little bit about AI, for example. Most people, what they understand about AI is large language models: OpenAI and ChatGPT, for example. You hear the words Llama and Orca and all these other wonderful names, names like Claude. Large language models are designed to create a human-like conversation, right, as best as they can. But that's one kind of AI. There are many kinds of AI. And when we think about that, to create a synthetic human, it takes a dance of AIs, because you're talking about emotion engines, you're talking about movement engines, and all of those tie to the language engine that drives a response. And then on top of it, you may have an empathy engine, or you may have a data set.

AZ: Are these engines what you're referring to with multiple types of AI?

BG: Yes, they would be that dance, an orchestration. And that's what drives this experience, because when you just look at a language model, it's purely conversational. When we think about synthetic humans, it's all of these different pieces working together, much like our own neurologic system, right? Driving movement, driving thought, driving reaction, all simultaneously in seconds. So the AI-supported synthetic humans need to be life-like: the visuals, the speech, the pattern of their speech, the rise and fall of their emotion and their behavior. We create something that is very realistic, that is authentic. And what you find with the AI-synthetic humans is that they are very realistic, and you can create a relationship with them. As strange as that sounds, it happens. We have seen that, because this program and all of the synthetic humans were developed for the Department of Defense, and we have seen this. We have seen reactions of fear from individuals, and we see interactions where they're building trust. The focus on that side is cultural competency teaching and judgment training. In terms of how we could leverage AI synthetics in healthcare, we do think about cultural competency. We think about health literacy. We think about someone who moves from one practice to a mixed environment: they come from a very suburban area, but now they've gone to a mixed environment, with urban, suburban and rural patients, where the health literacy is different. Well, how do I deal with that when I previously had a very health-literate group? In terms of that innovation, there's also the complexity of patient care; we think about comorbid illness. So the opportunity there for us is to be able to sharpen clinical skills and ultimately help learners build their confidence in a very safe environment, where you can mess up and it isn't high stakes like someone's life or the treatment of a disease. And for me, what I've seen is that now the accessibility of all forms of AI is in everybody's hands. So the one thing I say is, I encourage everybody to learn as much as they can, because this is coming. We're seeing it, and have seen it for some time now. But going back to my point, when you think about the ability to train clinicians, or to work with patients directly and speak to them in the way that they need to be spoken to, and to look at that cultural training and diversity training and inclusion training, it's vastly important.
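
As a rough sketch of the "dance of AIs" idea, here is one way an orchestrator could route a single conversational turn through separate emotion, language, and movement engines. Every engine below is a stub with invented behavior; in practice each would be its own model or service, and the function names are not from any specific product.

```python
# Illustrative sketch: several engines coordinated by one orchestrator per turn.
# All engines are invented stubs standing in for separate models or services.

def emotion_engine(utterance):
    """Stub affect classifier: decide the synthetic human's emotional state."""
    return "concerned" if "pain" in utterance.lower() else "calm"

def language_engine(utterance, emotion):
    """Stub dialogue model: produce the verbal reply, conditioned on emotion."""
    prefix = "I'm sorry to hear that." if emotion == "concerned" else "Thanks for sharing."
    return f"{prefix} Can you tell me more about what you're experiencing?"

def movement_engine(emotion):
    """Stub animation layer: pick body language to match the emotional state."""
    return {"expression": emotion, "gesture": "lean_forward" if emotion == "concerned" else "nod"}

def orchestrate_turn(learner_utterance):
    """One conversational turn: emotion -> language -> movement, returned together."""
    emotion = emotion_engine(learner_utterance)
    return {
        "speech": language_engine(learner_utterance, emotion),
        "animation": movement_engine(emotion),
    }

print(orchestrate_turn("The pain in my chest comes and goes."))
```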

AZ: It is. So you've mentioned that synthetic humans enhance clinical decision-making and patient care in CE and patient education activities, which is also one of your session objectives. Can you share a practical example of how you've experienced this in your own work?

BG: Sure. So this past year, and let me go back a little bit. We initially introduced the use of a synthetic patient in our opioid REMS activity with Purdue University and Archimedics; we actually won an Innovation Award for it, and that was several years ago, right after the pandemic. So that was our first instance of using one. At that point, it wasn't interactive, so it's a different kind of AI. It's not an interactive AI at that point; it's pre-recorded. We design it and then it presents itself. But we were able to work with that and do question and answer. Well, this year, we moved on to slowly introduce the fully interactive version, where it has emotion and empathy. Now, in that multi-cancer diagnosis activity, we did keep the voice pretty synthetic, because we want to introduce this slowly. We don't want people to be shocked by it. But in the session, it is the actual version, where it sounds very non-robotic, very human. So with the multi-cancer diagnosis, what we did was have the moderator interact live with the synthetic patient, back and forth, as a patient case. The advantage to that was, if we had used a real patient, we might have someone who's nervous, and I think we've all experienced that, right? When we're working with a real patient, they're very nervous. You can empathize with that, and you don't always stay within your learning objectives. If it's pre-recorded, you have to go back and do a lot of editing; if it's a live activity, whether that's at a conference or broadcast live, that could be a challenge. So it allowed us to stay on course with the learning objectives and present challenges, to simulate and discuss diagnostic testing and shared decision-making, and to address some of the concerns about a new technology and what would be beneficial for the patient in terms of that shared decision-making process. It also sets up how the patient may respond with concerns or preferences, and then, in real time, we're able to present clinical factors. So it ultimately reinforced the importance of clear communication with the patient in the case.

AZ: This is a little off script, but can you talk about the decision you made to keep the voice synthetic? Because it seems like that would not make the person comfortable.

BG: Well, why we chose to use more of a synthetic voice was because we don't really have many AI guidelines right now, right? With the AI committee here at the Alliance, and the recommendations that we've all made: when you're introducing a new technology, and because they look so life-like, you don't want someone to believe something that it's not. At the point that we launched that activity, and in the other activities, we've been very clear they're AI, and they've been well received. And the outcomes are tremendous with them, which is excellent, because you show that there is a trust factor that's built with the learner. So for us, it was that choice to just slowly introduce it. Upcoming activities will use a human voice, but we'll continue to have a tag that says, "This is AI, don't get confused." And when it gets to the point, hopefully this year, where learners will interact directly with them, of course they'll have a warning, but we want them to be immersed and feel as if they're having a real patient experience.

AZ: So, let's dig into that a little bit. Your session featured live interactions with a real-time AI-supported synthetic human. What do you expect participants to take away from these interactions? And for those inspired to implement AI and synthetic human technology in their own work, how do you recommend they start?

BG: Well, in terms of the first question, I would really love to see people be inspired, because that's the exciting part. Really, for 24 years, like I said, we have disrupted, and with our Innovation Awards we've always tried to push the boundaries to do something that is effective, that gets learners excited, but that is also, for anyone in the industry, an example of pushing those boundaries. So that's what I hope: they'll be inspired to push boundaries and to learn, and really to understand how this technology can unlock the potential for better clinical decision-making and patient care. Because if we get better outcomes, and we get better data at the end of the day, we can build better education. And for those also curious about implementing AI and synthetic humans in their work, start by focusing on one application, right? It could be as simple as using a chatbot, and then move up into the world of synthetics. As we said, this is in its infancy, but it's highly effective. We have good proof that it works, good evidence. For someone starting out, stay focused: look at the scenarios, potentially the patient education; that's a great way to start exploring the potential of it. So start small and focused. And I would also say that, in terms of where they learn from or where they gather their information, look at the journals, because there's a lot of noise about AI right now. A lot of noise, and it's draining people because they don't know how to use it; they're fearful of it. But if we concentrate on evidence-based information, that makes it a lot easier to get past the noise that's out there.
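
For readers wanting to "start small," here is a minimal sketch of the kind of simple chatbot one might begin with: a keyword-matched FAQ assistant for a CE activity. The questions, answers, and matching rule are invented for illustration; a production chatbot would typically call a hosted large language model instead of this lookup, but the overall shape (recognize the intent, answer, fall back to a human) is the same.

```python
# Minimal "start small" sketch: a keyword-matched FAQ assistant for a CE activity.
# Content and matching rule are invented; a real deployment would likely sit on an LLM.

FAQ = {
    ("credit", "credits", "cme"): "This activity offers CME credit; claim it after the post-test.",
    ("objectives", "objective"): "Learning objectives are listed on the activity landing page.",
    ("faculty", "speaker"): "Faculty disclosures appear at the start of the activity.",
}

def answer(question):
    """Return a canned answer if any keyword matches, otherwise escalate to staff."""
    words = set(question.lower().split())
    for keywords, reply in FAQ.items():
        if words & set(keywords):
            return reply
    return "I'm not sure; a staff member will follow up with you."

if __name__ == "__main__":
    for q in ["How do I claim CME credits?", "Where are the learning objectives?"]:
        print(f"Q: {q}\nA: {answer(q)}\n")
```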

AZ: To really reduce it down, because there is a lot. There are a lot of sessions at the conference this year, and I expect that'll continue to grow.

BG: Absolutely, and I will tell you, even for me, and I'm very technology-focused, I had to cut through the noise and turn to the research articles and really look. And I think there are some pretty good articles that have been presented recently. And I think the strides that we're making here, even at the Alliance, will make it a lot easier for people to access it, to understand it and use it, and to know which tools to utilize.

AZ: Well, I hope that gives comfort to some of our listeners out there who maybe are a little newer to this, that even our expert is having to trim down what he's looking at. So, why should healthcare CPD professionals consider implementing AI and synthetic human technology into their educational strategies? You already mentioned better outcomes; can you say a little bit more about that, and how can they continue to learn about these advancements beyond your session?

BG: Sure. When we think about agility, let's look at using a standardized patient, or a real patient. Unlike those traditional standardized patients, synthetic humans offer the ability to simulate dozens of patient scenarios at any given time, right? We can change those up with diverse backgrounds, enabling learners to practice cultural competency, like we discussed, handle sensitive topics, or refine inclusive communication. Using a synthetic human also provides consistent, scalable solutions to address those real-world challenges in a safe and controlled environment. When we start thinking about these advancements, we can dive into emerging case studies, attend workshops or conferences that use these as a utility, and connect with peers to explore AI-driven education and outcomes, and we can start looking at those groups and collecting data faster. Typically, think of the amount of time it takes us to analyze outcomes; this can be done in minutes, literally minutes, by an AI. By the time we're done with a session, we know the results of the session. So you're able to take that information into your next activity, or you're able to adjust; if it's a live conference series, for example, you adjust that content in real time for the next city, or you could adjust it in that moment.
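
To illustrate what "results in minutes" could look like mechanically, here is a small sketch that aggregates pre/post responses as soon as a session closes. The field names, sample data, and metrics are invented for this example; a real pipeline might also hand free-text comments to a language model for thematic summarization, which is omitted here.

```python
# Illustrative sketch of near-real-time outcomes analysis for a live session.
# Sample data and field names are invented for the example.

from statistics import mean

# Each record: learner id, pre-test score, post-test score, commitment-to-change (1-5)
responses = [
    {"learner": "a1", "pre": 55, "post": 80, "commit_to_change": 4},
    {"learner": "a2", "pre": 60, "post": 75, "commit_to_change": 5},
    {"learner": "a3", "pre": 70, "post": 72, "commit_to_change": 3},
]

def summarize(records):
    """Return headline outcome metrics for the session just completed."""
    return {
        "n": len(records),
        "mean_pre": mean(r["pre"] for r in records),
        "mean_post": mean(r["post"] for r in records),
        "mean_gain": mean(r["post"] - r["pre"] for r in records),
        "pct_committed": 100 * sum(r["commit_to_change"] >= 4 for r in records) / len(records),
    }

print(summarize(responses))  # available minutes after the session, informing the next city
```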

AZ: You have to have some very adaptive faculty to do that.

BG: Yes, you do. Hopefully they would be, but the great thing about AI today is that I could probably generate some slides for them.

AZ: That's true. Okay. Well, Brett, thank you for joining me in today's discussion. As we wrap up, we have to ask, where do you see AI and adaptive learning heading in the next five years within continuing education in the health professions?

BG: Yeah, I think that question kind of sets you up, because we think about how things are moving so fast. I've talked about this several times at different conferences over the last year or so; I talk about AI years. And AI years are not human years. Technology moves fast, but AI years are ridiculous. So when we think about it, a model that we may have built, or a cipher, three months ago is now 120 AI years old, because it advances so quickly. So just in five years, we could probably see synthetic humans being an integral part of medical education, being diagnostic assistants, or empathetic patient assistants, or even care managers. I think that these are utilities to assist. I think that we have to be very careful about full autonomy. I don't think that you can ever get rid of the human element. I think that is absolutely important, because you do need to validate your data sets. You do need to make sure that the information coming across is accurate. And just quickly, to address one thing we didn't touch on: a lot of questions come up around the hallucinations that AI creates. If you use small, curated data sets, you tend to move away from that, right? And that's why it's also very important that no matter how good this ever gets, we still have to have a human touch in it. In terms of summing up the future, we're moving toward a future where education needs to be, and is going to be required to be, dynamic, deeply personalized, agile and directly tied to improving human health. A future where technology doesn't just teach, but transforms. And I think that's where we all want to go, and that's what we're talking about at this conference; that's our theme. So if we look at transformation, again, carefully, we should probably do that, because it's not going to fully replace us. It can replace some tasks, but it can be an integral part of education for us, a great assistant. So in terms of that, I think the future is very bright, and hopefully we'll all embrace these technologies carefully. Carefully, carefully is the key word.
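
To make the "small, curated data set" point concrete, here is a rough sketch of answering only from an approved set of source passages and escalating anything unsupported to a human reviewer instead of letting a model improvise. The corpus, retrieval rule, and escalation flag are invented for this example; real systems would typically use embedding-based retrieval with a language model constrained to the retrieved text.

```python
# Illustrative sketch: answer only from curated sources; escalate anything unsupported.
# The corpus and matching rule are invented stand-ins for semantic retrieval.

CURATED_SOURCES = {
    "dosing": "Follow the activity's approved dosing reference; adjust for renal function.",
    "screening": "Screening intervals follow the guideline summary distributed in the activity.",
}

def grounded_answer(question):
    """Return (answer, needs_human_review). Unsupported questions are escalated."""
    q = question.lower()
    for topic, passage in CURATED_SOURCES.items():
        if topic in q:                     # crude stand-in for semantic retrieval
            return passage, False
    return "No curated source covers this; routing to a clinician reviewer.", True

answer, escalate = grounded_answer("What screening interval should I recommend?")
print(answer, "| human review needed:", escalate)
```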

AZ: Well, there are a lot of sessions out there about AI, and ethics especially, so that is very important to think about. As we look into the future, it's clear that AI and synthetic humans will play a transformative role in healthcare education. This technology not only expands access, but also redefines how we learn and teach. Thank you for joining us on this journey into the future of continuing education in the health professions. For more on AI-synthetic humans and their applications, visit the Alliance website or explore resources shared during the conference. Thank you again, and thanks, Brett.

BG: Thank you, thanks for having me.
