Listen on the Almanac or Anchor, and find us and subscribe on Apple Podcasts or Spotify.
Transcript
Raja V Akunuru: I wanted to be that healthy skeptic and a healthy adopter, where I could really test the boundaries, but then also do it for something that is deeply valuable to the community.
Beth Ryan Townsend: Hello and welcome back to the Alliance Podcast, Continuing Conversations. I'm Beth Ryan Townsend, a member of the Almanac Editorial Board. Outside of my volunteer role with the Alliance, I serve as the director of compliance and programs for Continuing Education Company, an independent, nonprofit 501(c)(3) continuing medical education organization. We are counting down the days to the Alliance 2025 Annual Conference, taking place January 8 to 11 in Orlando, Florida. We can't wait to gather and see how we can expand perspectives and inspire possibilities together. To preview one of the sessions on the Alliance 2025 program, Raja V Akunuru is joining me today on the podcast. Raja is a colleague of mine on the Almanac, and he and his co-presenters will lead the session "Generative AI in Action: Five-Step Framework to Improve Faculty Disclosure and Management Process." AI is constantly evolving and remains a popular topic within healthcare CPD, so I'm particularly excited about this conversation. Raja, welcome to the Alliance Podcast.
RVA: Hi, Beth, thank you for having me.
BRT: I know from our work together on the Editorial Board that you have a strong interest and expertise in generative AI and how it applies to healthcare and continuing education. How did you realize this interest, and can you share a high level overview of the work you've done with Gen AI so far?
RVA: Absolutely. So my tryst with generative AI was more informal, and it actually started off with a poem that I had to write for my 8-year-old. So I have two daughters. One is an 8-year-old, and the other is a 2-and-a-half-year-old. The 2-and-a-half-year-old couldn't care less about generative AI or any other technology. The 8-year-old, however, was tasked with writing a poem in her school. So she then approached me. I am not a linguist by any means; she's much better at poetry and all of those things. But then she was asking me for some ideas about, you know, some of the things that I read, that I'm passionate about, and some examples of poems that I might, you know, just share casually, just so that she gets some inspiration. So I shared a few, but then I had heard so much about generative AI that I wanted to try something in a more casual, stress-free environment, where I won't be judged and I don't have to worry about security, privacy, legal implications, anything. So I thought this was the perfect platform to do it. And I took a totally different theme, describing a city, Philadelphia, which is where I'm from, and I asked generative AI to write a poem describing how wonderful the city is and how historic Philadelphia is. And with a couple more iterations, I think I got the hang of how to use it. And then I shared it with my daughter. She was floored, and she was all praises for me. I did tell her at the end that I probably don't deserve the entire credit. But then I think what was really interesting is that it inspired her to think about her own theme that was assigned in school, and she started working on some poems. She didn't use generative AI, but I think I was able to help her get that creative freedom in a stress-free, you know, low-stakes environment. Probably high-stakes for her, low-stakes for me, I think.
BRT: Yeah, that's a great point, that it can be a spark like that.
RVA: For sure, and that was kind of my introduction. And then slowly and steadily, I started expanding generative AI into multiple themes. So throughout the last few years, I've dabbled in probably most areas in CPD, starting from marketing, then course landing pages, theming, a little bit of HTML, a little bit of outcomes reporting, and then, of course, the faculty management, so a little bit of everything. Hopefully it's not jack-of-all-trades, but maybe experimenter-of-all at this point.
BRT: Well, great. I did want to ask you, because your session presents a five-step solution framework for improving faculty disclosure management: How did you develop this idea of using Gen AI in faculty disclosure management? And if you could also just share: Have you tested it, and what specific challenges does this framework address within the current disclosure process?
RVA: Absolutely. So oftentimes, what I hear with generative AI is that people are either too passionate about it or too skeptical about it. I come from a tech background, and so I wanted to hit that happy medium between taking it for granted and not trusting it at all. I wanted to be that healthy skeptic and a healthy adopter, where I could really test the boundaries, but then also do it for something that is deeply valuable to the community. So when I was reviewing different use cases, I always heard from the community that faculty disclosures are a huge time sink, for multiple reasons. One, every accredited provider has to go through that process, so it's a must-have. But at the same time, the use case, or the scenario itself, has a lot of challenges in terms of doing it the right way and doing it the more efficient way. So in my mind, those were the two big drivers. One, is this a big enough problem to solve? And two, is this something that I believed could be addressed, at least to some extent, with generative AI? I'd call this my own two-step litmus test, and faculty management did pass both of these gates. Yeah, that was my initial introduction to testing it out. And then...
BRT: I would upvote both of those, by the way, for sure.
RVA: I'm glad to hear that, Beth, because the community was ripe with super great feedback about all of these things, and I was able to test it with a couple of providers. So we will be sharing that at the Alliance. But a little bit of a teaser, you know, not to reveal the plot, so to speak: we tested it out with a real medical specialty association that is planning some of their live events, and one of our co-presenters has developed some prompts, and he's actually running them by their staff, who are using them to do some portion of faculty disclosure management.
BRT: Wonderful. Oh, gosh, I can't wait to hear more about that. Well, I know one of the session's learning objectives is to leverage AI tools to achieve optimal CE outcomes without jeopardizing compliance or regulatory requirements. And we know that healthcare is a highly regulated field, so how does the framework you're presenting help healthcare CPD professionals navigate those complexities?
RVA: Great, great question. And I do believe, not only for this particular use case, that compliance and regulatory concerns are probably one of the biggest barriers to generative AI adoption. That particular item was one of our pillars in going through the analysis. You know, you could get a good output, but if we cannot be confident about the regulatory aspect of it, it's not going to cut it. So the way we developed the framework is, first, we analyzed the tool in its entirety, and we tried to find out what kinds of data the tool is storing. And then we went with the approach of, you know, share only what you need and pretty much delete everything else. So when we tried to play with the faculty disclosure data, we wanted to start with the minimal amount of information we could feed into the tool, so that we don't have to stress too much about, if my IT team finds out, or if my regulatory team finds out, am I going to be in a soup? Because you don't want to be in a soup. You don't want to solve one problem and create a bigger problem. That was one of the big drivers, where we said, start with that minimalistic data approach, then test the output, and then start adding in more data points, so that you don't feel like you are giving much more than what you absolutely need to. I wouldn't say it solves every possible use case, but I think that healthy approach is to go in with as little as you need. You know, oftentimes in copywriting, they say, just imagine every word is going to cost you $100. Similarly, if you assume that every additional column you're providing to this Gen AI tool is going to cost you $10,000 or more, that mental framework at least helps us understand, "Okay, do I need three columns, or do I need four columns?"
Get that analysis done before you take the kitchen-sink approach, where you feed everything into the tool and then get stressed about somebody asking you about the privacy of it.
BRT: Good point. Well, I know you'll be demonstrating the faculty disclosure management use case to illustrate the benefits of AI. How do you see generative AI transforming this specific process, and what are the long-term impacts it could have on the efficiency in CE programs?
RVA: I'm probably going to say this multiple times, Beth, but I think it's worth noting that the tool has a lot of value as long as we know the parameters within which we need to work. So I'm going to assume that those boundaries are set, and we'll talk about how to set those boundaries, and how to make sure, organizationally, that you review those boundaries as the first step. So I'm going to assume that groundwork is done; it is imperative before incorporating any generative AI use case. That's the basic, like, step zero. Once we assume that is in place, I think, for a tool or a process like faculty management, the generative AI tool can have a huge benefit in terms of reducing the redundant steps that people tend to do, redundant not necessarily from a value perspective. So for example, if you have 100 faculty members, you have to look at faculty relationships for all 100 faculty members, but within that analysis, you're actually doing a sequence of steps 100 times. And the goal behind the tool is, if not all the steps, can the generative AI tool help me maybe reduce 30% of my steps? And if you get 30% efficiency over 100 faculty members, that saving is multiplied 100 times. Even though it's 30%, the multiplier is what gives the value. And so we believe that the tool, or the framework that we're going to present, along with the boundaries that I discussed, is going to bring in those operational efficiencies. And even if you're a small provider who has fewer faculty members, the purpose behind this session is hopefully to understand the framework, understand the boundaries, understand what kinds of use cases make the most sense, and then adapt it to your scenario.
Maybe faculty management is not a big pain point for you, but maybe the framework will allow you to look at another process that is a huge pain point, perhaps because you have a small office or, you know, a resource-constrained CPD department, which we know is the lay of the land. I've yet to meet a CPD provider who said, "I have a little more staff than I can afford at this point." It's never going to happen. So that's the hope.
BRT: I like what you said there, Raja, about how that can save a good amount of time. And when you think about it over the number of programs that people have, you're right: anything to aid efficiencies is definitely worth looking at. So I'm excited to hear more. And I know it sounds like you and your co-presenters are going to put on some live demos, analysis and hands-on exercises for the attendees of your session. Could you give some hints about that? And why is it important for learners to test and discuss the prompts and outputs in this type of environment?
RVA: Absolutely. I'm sure we all have been positively influenced by so many AI sessions at so many conferences. When I say positively influenced, I mean either adopting it or being aware of the risks. In my opinion, both of these are good outcomes, because you know what the tool can or cannot do. So from a session perspective, we really want to give the attendees a small snippet of a faculty disclosure use case. We're going to prepare a dummy data set, and then pretty much keep the floor open for people to try it out on a generative AI tool. We may have some free versions of some tools available. We're still figuring out the logistics, but we do want to have a few versions of the tool available. If someone has a paid version, they're more than welcome to test it out, but our hope is to at least have some options where people can immediately test what they learned, and then, I think, see the variety of options and results that the tool is going to give. Because more often than not, we all know that the prompt output is not always going to be the same every single time. There is variability, and that's kind of what makes the process nice and challenging. But we wanted the attendees to get that real feel of how the output could change based on how you're prompting the system. And rather than having a didactic session where we just have a monologue, we thought we'd have the attendees do a little bit of prompting, no judgment. You know, we are all learning here, and between now and January, who knows what new tool is going to come up. We are all learning; it's a safe environment to play. And we thought just giving them a small exercise would be a great starting point, because we realized we only have an hour, so at least a small one might hopefully whet the appetite.
BRT: Yeah, and I suspect what you're going to find from that exercise is that you're not eliminating human review either; it's going to just emphasize that that's still going to be absolutely necessary. But yeah, I'm curious to see what comes of that. Well, what's one key takeaway you hope attendees will gain from your session, and how can they apply what they learn when they get back to their office?
RVA: I think the biggest takeaway we are hoping to give is that generative AI, when you apply it within the context of a problem, if you apply it to the right problem to the right degree, can work wonders. Those two parameters are really important: identify the right problem and identify the right extent to which you apply it. And hopefully, with a use case as ubiquitous as faculty disclosures, they'll learn a little bit about ways to prompt, and how different kinds of prompting could help them achieve a slightly different outcome. We do want to talk about not-ideal prompts as well, because we all love to talk about ideal prompting, but we've dabbled in this use case enough to find that, you know, some of the prompts didn't give us good output, and we thought it would be good to let the attendees know why certain prompts were not optimal for the solution. So even though we'll get deep into the faculty management use case, our hope is that the five-step framework we're going to talk through will allow the attendees to feel inspired to try it out: select one small use case they haven't tried, or if they've already tried it, take it one step further, see whatever results they get, and share them with the community. We're not saying it's a panacea for every possible use case; we know it can't be. The hope here really is some actionable takeaways, using some good prompts, and some efficiency, just to get that fear factor out. Because I think when you really get the fear factor out and you know the boundaries that you're working within, your creative freedom gets a lot more effective. So our hope is we'll minimize the fear factor by talking through those boundaries in the session.
BRT: Yeah, and just thinking from previous Alliances I've attended, your audience is probably going to run the gamut of experience with disclosure management, so I think it's going to be helpful, honestly, both for the new people and for people who have been doing it for a long time.
RVA: I'm glad you share the same sentiment there; it means a lot.
BRT: Yeah, absolutely. Well, Raja, thank you so much for joining me in today's discussion. And before we go, I just want to make sure: Do you have any final words of wisdom to share with our listeners?
RVA: Firstly, thanks again for having me, Beth. It's been a great conversation. I will leave the audience with a good analogy that I think describes generative AI, or any technology. The analogy I really like is: think about Gen AI as a scalpel. A scalpel could be used for different purposes; it really depends on who uses it and what kind of outcome it provides. A scalpel, if it's used by a surgeon, can save lives. If it's used by someone with bad intentions, it could hurt people. You just can't blame the scalpel; you have to actually look at the person who used it and how they used it. So in my mind, that analogy struck me quite a bit: it's easy to blame the tool, but once you learn what it can or cannot do, and its limitations, you can get a better bang for your buck once you understand the boundaries of the tool.
BRT: Yeah, absolutely. Knowledge is power, and I love analogies, so thank you for leaving us with that thought. Well, again, wonderful, Raja. I'm really looking forward to seeing you and all our listeners at the conference, again January 8 through 11. And listeners, be sure to mark your calendars for Thursday, January 9 at 3:45 p.m.; that's when Raja is presenting this session, and we look forward to seeing you there. Thanks so much!
RVA: Thank you so much again!