Listen on the Almanac or Anchor, and find us and subscribe on Apple Podcasts, Google Podcasts or Spotify.
Transcript
Don Harting, MA, MS, ELS, CHCP: Hello and welcome back to the Alliance Podcast, Continuing Conversations. My name is Don Harting. I'm a member of the Alliance PEERS section. I call myself a CME writer, and by that I mean I develop needs assessments and instructional content for continuing education in the health professions. Generative artificial intelligence, or GenAI, is making quite the impression across many industries, including our own. It's been a popular discussion topic among members of the Alliance PEERS section, so much so that we're collaborating with the Almanac and the Alliance Podcast to host a mini-series focused fully on generative AI. We've invited three experts and enthusiasts to cover its use in developing needs assessments and business proposals; that was the first episode. Developing content; that was the second episode. And in this, our third episode, we're going to be focusing on using GenAI in producing and analyzing outcome reports. We heard previously from Andy Crim and Alex Howson, in that order. Today, in the final installment of this mini-series, we’re sitting down with my friend Greg Salinas, who is president of CE Outcomes. Greg will share with us how AI can be used, or not, in producing and analyzing outcome reports. Greg, welcome to the Alliance Podcast.
Greg Salinas, Ph.D., FACEhp: Thanks, Don. Happy to be here.
DH: Greg, could you please tell our listeners a bit about who you are, what you do in your day job, and how you became interested in the general field of CME/CPD in the first place?
GS: Sure. Well, I got started in my career in academia, actually. I came out of university with a Ph.D. in molecular pharmacology, worked in a lab for a few years, decided I didn't really like it and started to look for other things. My background is mostly in neurophysiology: brain receptors, how we learn, how we interact with the world. I started to look around for something else in that field and got introduced to the CE Outcomes group that happened to be in my town. So I started working there as an analyst and really got started in understanding the CME world and all the outcomes. We like to say often in our industry that no one's really trained for CME, and no one's really trained in outcomes of CME specifically; to a degree, we're all kind of making it up as we go. So it was a good fit for me, I liked it. My day-to-day work has changed since then. I'm mostly involved in helping groups understand their outcomes, and my day-to-day life is a little different from day to day. So whether I'm working with providers of education on understanding their target audience, understanding how people learn, understanding where they go for their education, or understanding how they're putting that education into practice, there are different ways to do that. That’s kind of what I do.
DH: OK, super. Thanks. And then getting more specific on the GenAI piece. Can you tell us a little bit, if there's a story about how you became, how you got started using GenAI, in your outcomes work, to the extent that you are?
GS: This is kind of interesting, because I think this weekend was the one-year anniversary of ChatGPT. That's really where we started to become interested in this, probably as everyone did, probably in the spring of this year, as it started to really become, ‘Oh, hey, this is the next new thing. This is on the horizon, look what this thing can do. What can it do to help us in our day-to-day jobs?’ Even being at the AIS Conference this year, there were people talking about GenAI. There were a few sessions that I agreed and disagreed with on how they use it. I'm sometimes a little frustrated with people saying, ‘Oh, well, you can just use GenAI for everything, you can just ask it to do something, and it'll do it.’ And it's like, well, have you actually tried it? Have you tried doing that? It doesn't work like that. So we kind of came back from that meeting, and we were talking about that, and then started trying some things, and yeah, it doesn't really do exactly what you want there. So what can it do? I think we're kind of on that burgeoning horizon where there are a lot of things that can be done and will start to be done in the future, but maybe we're not quite there yet.
DH: OK. Before we get down to the micro level, let's stay at the macro level just a little bit longer. And Greg, could you please tell us a little bit more about the importance of outcomes reporting and analysis generally within the world of accredited CME/CPD, and the work of all of us who find ourselves so active in this niche?
GS: Yeah, so I've been in this industry a little more than 15 years now, and it certainly is different than where it was when I first started. It seems like for years, CME/CPD has kind of had this Field of Dreams model where, you know, you build education, people will come and they'll learn, and by definition, education is good and productive. That's great, and I do agree with that. However, it's hard to know exactly what is happening, and all education isn't necessarily built alike. I generally think that people who are involved with outcomes should be appreciated more. They're often the low person on the totem pole, sometimes, putting out these reports and trying to understand what's going on out there. But how else do you know that the education you're putting together is working in the real world? How do you know how to translate the success of education to key stakeholders within your organization? These are the things that the people who help develop and work on these outcomes reports can do.
DH: OK, great. In a recent email exchange that you and I had, you shared your perspective that AI in outcomes work is currently lacking. You did say that there are a few things in the qualitative realm that it does OK; if I remember, it had something to do with filtering or summarizing. But it seems to fall short in the quantitative realm. Can you say a little bit more about what you had in mind?
GS: Yeah. So when we think about GenAI, I think we typically start with programs like ChatGPT, and ChatGPT almost has two tiers: you've got the free version and you've got the paid version. And those tiers are very different in how they work with outcomes. If you're just working with the free OpenAI ChatGPT, I think it's three point something now, I don't know, maybe 3.5, it'll do some qualitative work for you. You can put in, say, a list of items, maybe a list of responses to something like, 'What kinds of education do you want in the future?' or 'How would you approach something?' It can basically sort those responses by theme. But you've got to remember that this isn't really more intelligent than word association. It knows kind of what words go together, so it can do a little bit of that, but it has trouble making connections between themes. For example, you might give a group of learners a case and have them work through the differential diagnosis, like how would you diagnose this patient, and it'll list a bunch of things, or theme them together from that list. But it doesn't necessarily know that, say, Crohn's disease and ulcerative colitis are both also IBD; it has a hard time making that connection. So you have to kind of go through and make sure that it's appropriately themed. Now, if you start working on the top tier, the paid-for tier, it does a little bit more. The free tier, again, doesn't do any type of quantitative analysis. It won't look at your data. It says it will, it says it does a great job at it, but then you ask, for instance, 'Well, let me upload a file,' and you can't upload a file. If you pay for it, then it can do a little bit there. So I think there are some things there. It's just to be aware that the free basic version won't do any quantitative analysis.
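For anyone who wants to experiment with this kind of theming programmatically rather than pasting responses into the chat window, here is a minimal sketch. It assumes the openai Python package and an API key; the model name, prompt wording and sample responses are illustrative only, and as Greg notes, the resulting groupings still need a human check.

```python
# Minimal sketch: asking an LLM to theme a list of free-text responses.
# Assumes the openai package (pip install openai) and OPENAI_API_KEY set in
# the environment. Model name, prompt and responses are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

responses = [
    "More case-based education on IBD management",
    "How to differentiate Crohn's disease from ulcerative colitis",
    "Shared decision-making with patients starting biologics",
]

prompt = (
    "Group the following learner responses into 3-5 themes and list which "
    "responses fall under each theme:\n\n"
    + "\n".join(f"- {r}" for r in responses)
)

completion = client.chat.completions.create(
    model="gpt-4o-mini",  # any available chat model
    messages=[{"role": "user", "content": prompt}],
)

print(completion.choices[0].message.content)
# Treat the output as a starting point, not a finished analysis: check that
# related concepts (e.g., Crohn's disease and ulcerative colitis both being
# forms of IBD) were grouped the way a content expert would group them.
```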
DH: OK. What about the issue of feeding an artificial intelligence platform with data that could be considered proprietary?
GS: I think there are some features within some of those tools, ChatGPT, et cetera, that say, ‘Please don't use my data for anything.’ But anytime you're uploading any data, you have to make sure that there's no identifiable information, and be aware that what you're putting out there could possibly go to open source, it could possibly just be put on the web. So you have to be OK with having that data out there. If it's something you're just publishing on a website anyway, then it's fine. But just be aware that I wouldn't say it is completely confidential.
DH: OK. So it sounds like, I mean, an inference that I would draw for myself there is, if I'm working for a client, I certainly need to have the client's permission to upload raw data to an artificial intelligence platform for analysis before I do so.
GS: Yeah, I think so. And you know, I'm not familiar with every particular application. I think that for the most part, if you can click that option that says, ‘Don't use my data,’ that helps. But you never really know; I certainly don't read every user agreement to know exactly how they're using all that data. So I would be very careful about that.
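As a practical precaution along these lines, one simple step is to strip obvious identifiers from a file before it ever leaves your machine. A minimal pandas sketch; the file name and column names here are hypothetical examples, not a prescribed workflow.

```python
# Minimal sketch: drop obvious identifiers before sharing data with any
# external AI tool. File and column names are hypothetical.
import pandas as pd

df = pd.read_excel("survey_export.xlsx")  # hypothetical file name

identifier_columns = ["name", "email", "npi_number", "ip_address"]
deidentified = df.drop(columns=[c for c in identifier_columns if c in df.columns])

deidentified.to_excel("survey_deidentified.xlsx", index=False)
```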
Alliance Ad Break: Being an Alliance member has its perks. From discounts to industry leading events like the Alliance Annual Conference to members-only access to the Alliance Communities, the Alliance is where healthcare CPD professionals come to learn. Visit acehp.org to join today.
DH: Well, I think you've just answered or started to answer my next question, which is, what are some of the lessons you've learned so far using GenAI for outcomes work? Any mistakes you might have made, or frustrations you've endured? Or other lessons that you've learned that you'd like to share with our listeners to this episode?
GS: Yeah, so we've talked a little bit about what's available with the free versions, and the free versions are good to try things out. Again, I think it does a great job with qualitative data and maybe summarizing some things. It does an OK job with making themes out of lists and organizing some themes. There might be a little challenge if you take those and you want to do something with them; for instance, you want to classify them as a new variable and see whether there are different groups that respond to these variables differently. It's hard sometimes to get the data out of there and back into your systems. There are always hallucinations with these GenAIs, where it starts going down a road and you're like, what's happening there? I've noticed that especially with references. You ask for maybe some references around a topic, and oftentimes, at least with the free versions, those aren't real. You have to double-check everything there. Sometimes I think in the theme categorization there are some hallucinations, too. So for me, at least with the free versions, I feel like I've got to check that everything is accurate. And if I have to check that everything is accurate, then I could have just done that work in the first place, and it probably would have taken me less time. So I think we're getting there, but I don't completely trust everything that's coming out of it, at least to be comfortable enough in outcomes right now. Right now, I think AI is very powerful, and I think there's a lot of hype. I don't know that there's a lot it can do that an experienced data scientist couldn't do quicker, easier and more competently. The one other thing I just want to mention is that sometimes too much data can shut it down completely. Even in the paid version, we've put up a good-sized data set, and it's not so much the rows, but if it has more than, say, 10 or so columns of different kinds of variables, sometimes it's just like, ‘Well, I can't do anything with this.’ So you've got to be very careful about that, too, and about how you're putting the data in there, so it doesn't completely shut down or lock up.
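One workaround for the "too many columns" problem Greg describes is to break a wide file into narrower slices before uploading. A rough sketch, assuming pandas; the file name and chunk size are illustrative assumptions.

```python
# Rough sketch: split a wide data set into chunks of ~10 columns so each
# upload stays small enough for the tool to handle. File name is hypothetical.
import pandas as pd

df = pd.read_excel("full_outcomes_dataset.xlsx")

chunk_size = 10
for i in range(0, df.shape[1], chunk_size):
    chunk = df.iloc[:, i:i + chunk_size]
    chunk.to_excel(f"outcomes_chunk_{i // chunk_size + 1}.xlsx", index=False)
    # Upload each smaller file separately, then combine the interpretations
    # yourself -- and double-check anything the tool reports back.
```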
DH: And when you're feeding it that data, are you feeding it an Excel spreadsheet or a Word document? What are you feeding it?
GS: So it depends. An Excel spreadsheet has worked. I think if you're working with other types of systems, say you've got SurveyMonkey, you probably need to put it into an Excel sheet; I'm not really sure how it interfaces with some of that. I know it doesn't really work well with SPSS, and I'm not sure how it works with Word either.
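If your data lives in one of those other systems, flattening it into an Excel or CSV file first is usually the safest route. A small sketch, assuming pandas (and the pyreadstat package for the SPSS example); the file names are hypothetical.

```python
# Small sketch: flatten exports from other systems into Excel before handing
# anything to a GenAI tool. File names are hypothetical.
import pandas as pd

# SurveyMonkey exports typically come down as CSV or Excel already:
sm = pd.read_csv("surveymonkey_export.csv")

# SPSS .sav files can be converted too (requires the pyreadstat package):
sav = pd.read_spss("outcomes_data.sav")

sm.to_excel("surveymonkey_flat.xlsx", index=False)
sav.to_excel("spss_flat.xlsx", index=False)
```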
DH: OK, are there any specific platforms that you'd recommend to people at this point for outcomes work, either in text or in graphics? Because we all know that, you know, graphics are a very important aspect of a lot of the outcome reports we see today.
GS: Yeah, so there are a few things that I think are interesting. Again, I don't know that we're completely there yet. But graphics, I'm very interested in graphics. I'm very interested in, if you have a story to tell, having a good graphic there, and that's sometimes hard to find, whether it's an icon or a picture of a type of patient, perhaps. I'm kind of playing around with this a little bit in outcomes, too. We have a lot of educational programs right now that are maybe a bit softer, right? A bit more on soft skills, about communication, about shared decision-making, et cetera. And then we do a pre-/post-survey where we say, maybe you have a patient case, ‘What do you do with this patient?’ And it's text-based, and it doesn't seem like a clinician would type an answer about how they're approaching something the same way they would talk about it in a clinic where they're talking to somebody, right? So if you're teaching somebody those communication skills, and then ask them to answer multiple-choice questions about whether or not they can do those communication skills, that doesn't line up to me. But say you have an animated something, or a patient that you present to them who comes to you with a question; you can hear it from their mouth, or you can see a picture of someone. You might have a different response to that than you would to a paragraph of text. So I'm interested in using some of these GenAI tools to do almost simulated-patient types of assessments, where you can have a script. You could have a pre-/post- question: you have this patient in front of you, they've got this problem, they ask you this question, you click on it and it plays the question. How would you respond to that question? And then you can even get the GenAI to help analyze some of that. It's open-ended, it's qualitative. How many people use this term, or this turn of phrase, that you're really looking for? So I think there's a lot of possibility in that, where you don't necessarily have to go out and hire an actor; some groups don't have the ability to do that, or it takes too long, or we don't have a studio, but you can maybe do something like this with some of the GenAI techniques. There's something like HeyGen, which allows text-to-video recording. Honestly, it's a little creepy, but I can even upload a picture of my face and then upload a script, and it will talk like I'm talking, just from a script. So there are some things there that are kind of cool, and you can do some interesting things. And it's not just the cases; maybe I want to have a little talking head of myself explain an outcomes report. We have 30 seconds; let's do an outcomes report explanation. And I'm notoriously bad, I mean, we can tell right here, I kind of go back and forth and jump around. But if I had a script and I made the GenAI do it, it would go a lot faster, and I wouldn't have to take 50 different takes to say a minute without slurring or skipping words. So I think something like that is great. And you can do a lot for free, do under-a-minute recordings for free with that. So I think there's a lot of capability there.
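The "how many people used this term or turn of phrase" part of that analysis is also something you can verify with a few lines of ordinary code, as a check on whatever a GenAI tool reports. A minimal sketch assuming pandas; the phrases and responses are made-up examples.

```python
# Minimal sketch: count how many open-ended responses contain target phrases,
# as a sanity check on an AI-generated qualitative summary. Phrases and
# responses are made-up examples.
import pandas as pd

responses = pd.Series([
    "I would ask what matters most to the patient before choosing a therapy",
    "Review the options and let the patient decide",
    "Start the guideline-recommended first-line agent",
])

target_phrases = ["shared decision", "what matters most", "patient decide"]

for phrase in target_phrases:
    count = responses.str.contains(phrase, case=False).sum()
    print(f"'{phrase}': {count} of {len(responses)} responses")
```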
DH: Greg, for what it's worth, I think you're doing a great job.
GS: Well, I appreciate it.
DH: You are really bringing our podcast listeners up to speed. You just mentioned a new platform I'd never heard of. I think you called it HeyGen?
GS: Yeah. HeyGen. H-E-Y-G-E-N. HeyGen. There are a few more. There's one that I've been playing with recently called Elicit AI, E-L-I-C-I-T AI, which is a much better way to do lit review searches than ChatGPT, which we talked about.
DH: Oh, no. Say it isn’t so. No, no, no.
GS: So what it does is, it's a smarter way to pull in articles. You can just type in, ‘I want articles about this; give them to me.’ And it pulls in a lot of articles. You have to read them, you have to go through them. But it does summarize the articles for you, and probably better than their own abstracts do.
DH: So, tell me this, is there a platform that can find out the punchline to this joke? Here's the joke. How many CME writers does it take to change a light bulb?
GS: I think there are too many answers to that. Give me your best one.
DH: One to screw in the bulb and the other nine to review the literature on change.
GS: Yeah, I think that kind of gets around the problems that we're having with ChatGPT, just getting all these weird references that didn't have anything to do with anything. It seems exactly set up for this. And I know there are ways now with ChatGPT, too, to almost create your own ChatGPT. There are ways to kind of teach it: I need to do something, so let's make our own ChatGPT that handles that specific thing. I know Andy Crim has been playing with that lately. The future is now, right? So everybody's going to come up with new things that will do that, and I think there are all these little applications, and again, I'm not familiar with all of them, but they can help a little bit here and there. Microsoft is also working on their own AI, what is it, Copilot, which is going to start being introduced into Excel, into Word. And they say it's going to do things like, 'I want to know the means and standard deviations for this particular column.' You just type that in: 'I want to know the means and standard deviations for Column D,' and it'll give it to you. So you don't have to be an expert in this. You don't have to code, right? You don't have to know the formula to get that. You can do it if you know it; that's easy enough. But to be able to say in words, to express in a conversation, what you want, and have it spit that out, that's the goal. I think that's the goal with all of this, because what I look for in the future is, we talked about how you just need a good data scientist, but we don't all have access to that. The democratization, democratization, can I say that word? I don't even know if that's a word.
DH: Yes, it's definitely a word.
GS: The democratization of data analysis. I think this is what we're hoping to get to: that someone can say, ‘I just want to know; give me a summary of what's out there.’ So again, for someone like me, who is in data analysis, it's a little scary. But the ability to talk to a computer and get it to tell you in plain words what's going on with a set of data is pretty powerful.
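The "Column D" example Greg mentions a moment earlier already translates to a couple of lines in pandas; what Copilot promises is to let you ask for it in plain English instead. A minimal sketch with a hypothetical file and column name.

```python
# Minimal sketch: the "means and standard deviations for Column D" request,
# done directly in pandas. File and column names are hypothetical.
import pandas as pd

df = pd.read_excel("outcomes_data.xlsx")

column = df["confidence_post"]        # "Column D" in the spreadsheet
print("Mean:", column.mean())
print("Standard deviation:", column.std())

# Or summarize every numeric column at once:
print(df.describe())
```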
DH: Agreed, agreed. So Greg, can you tell us which GenAI platforms you find yourself using the most these days?
GS: Yeah, that's a great question, because I've mentioned a few and I've played around a little bit. I think, overall, none of them are quite there yet, at least for the things that we do and how deep we like to get into the data. We mentioned Microsoft Copilot; I think that one has possibly the highest potential, because it's in the same kind of work framework that we're currently using. I think anything that doesn't add a new site to go to, but works within our current work environment, is going to be the most powerful, at least for me. That's kind of how I see it. I'm getting older, I like to do things the way I do them. So anything that's incorporated into the types of workflows that I'm currently using is great for me. Again, I mentioned SPSS; it's an IBM product, and they haven't announced anything yet as far as AI that I'm aware of, though I'm sure it's within what they're trying to get out. We use a lot of Qualtrics for data collection. Again, they haven't talked about AI too much, but there are probably some different things that they're doing behind the scenes that I don't know about. So I imagine that I will have a lot more that I'll be working with, perhaps in a year or two. But right now, I think none of them are quite there yet.
DH: OK. If I hear you correctly, in your regular work day, nine-to-five world, you already use SPSS a lot and you use Qualtrics a lot. Did I hear you accurately saying that?
GS: As well as the typical Microsoft suite.
DH: You probably use a whole bunch of Excel, I'm guessing. So a GenAI-assisted, GenAI-enhanced version of SPSS, or a GenAI-enhanced version of Qualtrics, or something like that, would be of great interest to you, because it would help you keep doing what you're doing and not force you to go back and learn a whole new set of tools at this point.
GS: Right. Because what I like to do, and this is the thing I do, is when I use ChatGPT, I ask it a question and it kind of spits out an answer, but I like to put that answer back into my data set and play with it a little bit more. Again, I'm a very hands-on data guy. So anything that works within the current workflow works for me, because I like to add new variables, I like to play around with variables, I like to do different cuts, ask it to do a regression on this, you know. So again, I'm very hands-on. Now, someone who's not as hands-on might just be like, 'I just need the means, the standard deviation, the effect size, I'm out.' And if that works for you, that's great. It just doesn't work for me.
DH: OK. All right. Well, thank you, that helps me understand a little bit better, because I can imagine how, if you get an answer from an artificial intelligence engine, you're not going to want to stop there. You're going to want to keep probing, you're going to want to ask it a follow-up question, kind of like we've been doing in this interview, ask a follow-up question and drill down and drill down. And for that, having that artificial intelligence platform integrate seamlessly with software programs you're already using, like you mentioned, SPSS and Qualtrics, that's really going to make your workflow simpler and make your life simpler.
GS: And that's the hope. Well, we'll see.
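For readers curious what the hands-on loop Greg describes might look like in code, here is a rough sketch of adding a derived variable to a data set and running a simple regression on it, using pandas and scipy. The file, variable names and relationship tested are all made-up assumptions.

```python
# Rough sketch of the hands-on loop: derive a new variable, then run a simple
# regression on it. File and variable names are made up for illustration.
import pandas as pd
from scipy import stats

df = pd.read_excel("outcomes_data.xlsx")
df = df.dropna(subset=["knowledge_pre", "knowledge_post", "minutes_in_activity"])

# New variable: change in knowledge score from pre to post.
df["knowledge_change"] = df["knowledge_post"] - df["knowledge_pre"]

# Does time spent in the activity predict knowledge change?
result = stats.linregress(df["minutes_in_activity"], df["knowledge_change"])
print(f"slope={result.slope:.3f}, r={result.rvalue:.3f}, p={result.pvalue:.4f}")
```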
DH: I think you've already started to answer my next question, when you were talking about customizing ChatGPT for a mission related to ours, or customizing other platforms to work best in a field related to ours, and you also mentioned Microsoft's Copilot program. But here's my question. Looking ahead, you know, three or four years in the future, what, if anything, excites you the most about GenAI and its applications for reporting and analyzing outcomes?
GS: I think that's exactly it. I think being able to do data interpretation and analysis without knowing how to code, without spending a few years in a master's program. Being able to at least know what you want to know, and get that out of a data set, is one of the most powerful things that I think AI can do: do all that coding in the background for you, and be able to possibly put together reports based on that, the way you want them to be put together. One of the big things in CME is sometimes we want to know what certain subsets of data look like. For instance, you have a large data set, and it's like, OK, well, just show me the physicians; what do the physicians do? And there are ways to do that, obviously, in data analysis and coding: you've got to go in there, you've got to make the groups, you've got to then do the sorting, you've got to do all that. But if you can just say, 'OK, I just want to look at oncology nurses versus oncology pharmacists,' and it just does it without me having to go in there and do that data coding, that's going to be the next big thing in our work. So, all the data cleaning, all the statistics, like, what's the best statistic for what I want to do? Just tell me. I don't need to go look it up and read five different opinions about using t-tests, or why to use a p-value; just tell me what to do, and I'll do it. Oh, actually, no, you do it, and then I'll look at it, and I'll interpret it and find out what's interesting, what's informative. It will always require a human element. I can't see, at least in the next decade or so, being able to completely rely on this. But to be given all the information, and then I can decide what's important. I can decide what's important to stakeholders, I can put together a report. You can do the hard work, AI, but I can decide what goes in it. That's what's exciting to me.
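Until that plain-English version arrives, the subsetting Greg describes looks something like this in pandas and scipy. The profession labels, column names and file name are hypothetical, and the unpaired t-test is just one reasonable choice of comparison, not a recommendation from the interview.

```python
# Sketch of the subset comparison Greg describes: oncology nurses versus
# oncology pharmacists on a post-activity measure. Names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_excel("outcomes_data.xlsx")

nurses = df[df["profession"] == "Oncology nurse"]
pharmacists = df[df["profession"] == "Oncology pharmacist"]

# Counts, means and standard deviations by group:
print(df.groupby("profession")["post_score"].agg(["count", "mean", "std"]))

# A simple two-group comparison (unpaired t-test) on the post scores:
t_stat, p_value = stats.ttest_ind(
    nurses["post_score"], pharmacists["post_score"], nan_policy="omit"
)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```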
DH: Well, I want to just mention here one way that you've already expanded my horizons and helped me imagine how CME/CPD is going to be different in the future with GenAI, and that is this idea of potentially developing an animated patient case scenario using some of the image production capabilities of artificial intelligence. Perhaps with that we're actually getting into content development here, as opposed to outcomes analysis, and technically crossing the boundary of the topic we were supposed to talk about in this interview. But in any case, the idea of using these image generation engines to develop artwork and illustrations, including potentially patients for interactive case scenarios, that's really great. Thank you, Greg, for that.
GS: Well, I would say that it's also the outcomes. If it isn't content generation, you need to have something similar to that to assess how that content worked, so you can have that within the outcomes. You know, you can have a very similar case, make it a little different. But I think it's all connected: outcomes is content.
DH: Yeah. Especially since we're in this world of pre to post, right? You can't change too many variables pre to post; you have to have the content be pretty similar in order to have a meaningful measurement of change, right? I mean, I'm thinking. So let's switch gears and talk about what worries us. What scares us? What keeps us up at night? What worries you the most about GenAI in the outcomes space, coming forward here in the next few years?
GS: A couple of things. So if we're talking about animation, if we're talking about images, well, obviously that image and that animation have to come from somewhere. Did it come from an artist whose materials were just uploaded, unknown to them? At some point, where did those rights come from? How does that work? How do we make sure that art is attributed to where it came from? I don't want that to go away. I don't want people to be out of work and out of creative work because they're being supplanted.
DH: Before we go any further, let me just thank you. Thank you, Greg, for saying that. Because as a creative professional, that means a lot to us. So keep going.
GS: So AI, as it stands right now, cannot be creative. So all these things that are creative, it's getting them from something. What is it getting them from? I think we need to be open about that, and certainly there are open-access types of things, there is clip art, there are things that have a license to be used. I just want to make sure it's that, and not something else, that's being used. I think that is worrisome. Obviously the capabilities for what we want to do with AI aren't there. They're not there right now, at least, especially in the free versions, for the average user. When we're talking about Copilot, I don't know what they said, something like 20 to 50 dollars per month per user, in addition to whatever else you're paying for Word and all those things. So that's not something everybody can just kind of pick up and start using. This is a major investment. It might even be more than that; that might be low, just the intro price to get you hooked. So that's something just to be aware of: right now, we're not there. Even if AI is generating analysis and reporting, someone has to make sense of it. Someone has to interpret that data. Someone has to determine what's important, what's clinically relevant. I know there's a lot even in clinical medicine talking about, well, let's just use AI instead of a primary care physician or something like that. That's an extreme example, but it's something that people have talked about, and it's like, sure, if you want to follow the guideline exactly, step by step, yeah, maybe AI could do that. But who would be responsible if that goes wrong? There are always these risky cases and edge cases that don't really line up with what you want to do. So I wouldn't rely on anything OpenAI tells me to do as far as data interpretation without really looking at it.
DH: Well, you just mentioned a key word in my world anyway, you use the ‘r’ word, which is responsibility. And it's kind of hard to hold a computer responsible in the same way that you might hold an individual human being or a corporation responsible.
GS: Let's talk about a few more worries, because outcomes already has a template problem. Sometimes you get these outcomes reports and they're in this template, and that's OK; you know, it's an overworked outcomes person who keeps putting these into templates. But does that really tell you how well an education activity worked? Is all education created the same, that you can use this one template? And then you have AI doing it, and that's just going to be another template. So it goes back to that interpretation and finding what's meaningful. Lastly, I think we have to think about publications, because any kind of outcomes work you do has the potential to be published. And there are so many different regulations from some of the manuscripts, some of the journals, about how you use AI in a publication. So just be aware: if it's something that you're considering publishing, be aware that there are probably some regulations around how you're using these GenAIs.
DH: That's a really great point. It's a really great point. Greg, there's been a question in the back of my mind that I wanted to pose to you in your capacity as President of CE Outcomes and in your capacity as an employer. And that is, you know, employers spend a lot of time investing in their workforces, bringing people on, training them, retaining them. And my question at this point is, what messages are you sending to your workforce to reassure them that they'll still have a job in a year or two, despite the many capabilities of GenAI?
GS: Currently, I'm not worried about that at all. I think that the way GenAI is right now, it's going to be good, even if it's not quite there yet, at data analysis and kind of cranking out numbers. What it hasn't shown me, and I don't really see the capacity for this, is data interpretation: knowing what to do with data, knowing what to recommend, having that background in education and that familiarity with the space to say, based on this data, here's what you need to do, or here's that next step in outcomes. I see GenAI almost like pulling in a summer intern, where they have the ability to do what is asked of them. They don't really know the space very well, but they know how to do a particular task, and they don't know how to do too much more than that. They can do what you ask, they can get you back some information, but they probably aren't going to go off by themselves and create something and come back to you with it. So as far as our staff goes, they're involved as much as I am in trying to understand the GenAI space and how it helps their jobs, too. So if we can make their jobs faster and make their jobs easier, and cut out some of that busy work and allow them to spend their time thinking and interpreting, that's only going to help us in the long run.
DH: OK, great. Thank you.
Alliance Ad Break: Like what you hear on the Alliance podcast? Visit almanac.acehp.org to read the latest continuing professional development news and insights. Visit today to get informed and inspired.
DH: Well, Greg, thank you so much for joining me for today's discussion. Do you have any final words of wisdom or caution to share with our audience?
GS: I think I've hit the caution pretty hard. But it seems to me that it's only going to get more useful, and it's not going away. So I think we just have to embrace it in the data world, in the outcomes world, and use the tools that we have to use. Again, 15 years ago, when I first started, just looking at some of those PowerPoints back then, they were completely different from what they are now. You think of it as just a white slide, and what can we do differently? How we're presenting the data and how we're telling the story of outcomes are completely different now. How will GenAI help us in the next 15 years? I can't even guess. I'm sure that our stories and the way we're telling these stories are going to be completely different.
DH: Great. Well, thank you. Thank you, Greg. And thank you to our listeners for joining us for this podcast mini-series on generative AI and its uses across the healthcare CPD space. For even more education on GenAI and healthcare CPD, be sure to register to join us for the Alliance 2024 Annual Conference taking place February 5 through 8 in New Orleans, Louisiana. You can visit acehp.org/annual-conference to secure your spot and view the full program, including sessions from Greg and myself. And Greg, why don't you just say a few words about this session that you're going to be offering?
GS: Sure, so we are starting with a preconference session. Myself and Andy Bowser are going to be talking about kind of the same things we just talked about today on this podcast, but we're going to be doing a little bit more hands-on work. So we're going to walk people through what prompts to use, how do we get an outcomes report from data, what types of things can we do, and what kinds of things we probably shouldn't do. In addition, we have a few more sessions during the conference. We have a basic outcomes session, like how do you take data right from an Excel sheet to a very basic report, and then we have a more advanced outcomes session as well, where we're going to be talking about some more detailed things for those advanced users. In addition, myself and Brian McGowan will be talking about some OSP updates and where we should go for outcomes standardization in the next five years.
DH: Wow. OK, sounds like you're gonna be busy down in New Orleans. That's great. Well, thank you very much for that overview of what we can expect in regard to outcomes down in New Orleans. And I guess we'll see you there.
GS: Will do.