Listen on the Almanac or Anchor, and find us and subscribe on Apple Podcasts, Google Podcasts or Spotify.
Transcript
Don Harting, MA, MS, ELS, CHCP: Hello and welcome back to the Alliance Podcast, Continuing Conversations. My name is Don Harting, and I belong to the Alliance member section called PEERS, which stands for Professionals With Educational Expertise, Resources and Services. In my day job I work as a medical writer. I specialize in needs assessments and grant proposals for continuing education in the health professions, mostly hematology and oncology. As many listeners already know, generative artificial intelligence, or GenAI, is making an impression across many industries, including ours. GenAI has become such a popular topic of discussion among members of the PEERS section that we are now collaborating with the Almanac and the Alliance Podcast staff to host this mini-series. We’ve invited experts to cover the use of GenAI in three specific areas: first, developing needs assessments and business proposals; second, developing instructional content; and third, producing and analyzing outcomes reports. It also bears mentioning that the Alliance is developing an every-member survey on how we use GenAI in our daily work. We’ll have more to say about that later in this episode.
Today, I have the pleasure of introducing our listeners to my first guest, Andrew Crim. Andy has spoken on several previous occasions about his interest in computer science and artificial intelligence. He’ll be speaking with us about using GenAI in needs assessments and business proposals.
Andy, welcome to the Alliance Podcast.
Andy Crim, M.Ed., CHCP, FACEhp: Thank you, Don. I'm very happy to be here.
DH: Well, thank you for joining us. Could you please tell our listeners a bit about who you are, what you do in your day job, your involvement with the Alliance and how you became interested in the field of CME and CPD in the first place?
AC: Sure, Don. By day I am the director of education and professional development for the American College of Osteopathic Obstetricians and Gynecologists. I've been here for about five years, trying to innovate what we do for continuing education. Prior to that, I was executive director of professional continuing education at the University of North Texas Health Science Center for about 22 years. So most of my career has been spent in adult continuing education in the health professions. Last year, I was elected to the board of the Alliance, where I'm proud to serve. I got into continuing education by accident. Right out of college, I was hired as a grant writer for the county hospital, and a little under two years after I started I was recruited to work in the CME office at the Health Science Center because I knew how to write grants. Then I got busy doing this education stuff, realized that I liked it and have been doing it ever since.
DH: Well, I knew we had something in common when you mentioned you enjoy writing grants.
AC: I never said I enjoy writing grants.
DH: I understand you're a bit of a self-described computer geek. If that's accurate, can you tell us a little bit more about how you became interested in generative AI?
AC: So I've been drawn to computers since I was in middle school, when I got my first TI-99/4A and played around and learned how to program a little on it. I've always wanted to do more than what my peers could do on a computer, and sometimes I succeed and sometimes I don't. When ChatGPT was released last year, I heard about it through several TikTok postings that I had just started watching. For some reason, my feed started filling up with this new thing called ChatGPT. People were posting about all the things it could do and I'm like, ‘Nah, certainly it can't do that, and I bet if it could, it would be really cool.’ So I created an account, logged on. And sure enough, it could do most of what they said it could do, or at least some aspect of it. It wasn't long after that that I devoted a lot of time to playing with it. And that's what I consider it: playing, because it's fun. There's a rewarding factor for me when I can make it do something new, something I haven't done previously or seen done before. So it's fun, and that's why I play with it so often. It wasn't long after that that I really took an interest. I opened a paid account, started sharing with other people what I was doing and it just kind of took off from there.
DH: I understand you're running a little experiment using GenAI to help with developing a needs assessment that you included in an actual grant proposal. Can you tell us a bit more about that experiment? And where things stand with that proposal right now?
AC: Yeah, I'll start with where things stand with that proposal. It was denied. Unfortunately.
DH: That's real life there.
AC: It's real life. I am not surprised. It was a topic I don't have a lot of expertise in; I just knew a little about it. But it was a topic that did align well with our overall organizational needs assessments, so that wasn't an issue. But I let ChatGPT do most of the writing. I cleaned it up, tightened it up a little bit on my own, which probably was a mistake. I should probably have done most of the writing. Things I've learned since then, I'll share in a moment. But yeah, after careful consideration, the grant was denied. And I don't blame them. I went back and read it, and I probably would have denied it myself. But it was a fun process, and something I needed to try. I could probably do it better next time, or figure out a better way to work GenAI into the process.
DH: Well, that's certainly how we learn, right? I mean, it is taking a risk, and not all the risks that we take work out. So okay, well, thanks for that follow-up. I believe you mentioned in a recent interview on the Write Medicine podcast that you've been able to greatly reduce the amount of time you spend developing needs assessments. If I recall, specifically, you said something about how you used to be able to churn one out in a week, and now maybe you can develop three or four in the space of a week. If that's true, can you tell me a little bit about which parts of the process seem to be, I'll call it speedupable, with artificial intelligence, and which parts really resist any attempts to speed them up?
AC: Yeah, the parts that resist most attempts to speed them up are the ones that require true technical expertise, the human touch behind a needs assessment. It's been my premise, since I've been using GenAI, that instead of replacing the expertise of humans, it's going to require more. That means what comes out of GenAI is going to have to be verified. It's going to have to be fact-checked. You're going to have to make sure that it reads like every other part of what you're doing. And that requires expertise. Don, as a writer, you know not everybody writes the same. When you have something being written by a committee, it all comes in different voices, written in different ways. It's a tremendous amount of work to put those things together and make it sound like it's coming from one voice.
DH: Let me repeat that back to you just briefly to be sure that I heard you, and also that our listeners heard you. When you get a document that is generated by several different people, written in several different voices (which is a term from grammar we learned in high school, meaning the ‘you’s, the ‘we’s, the ‘he’s, the ‘they’s), and they are perhaps also using different reference manager systems and coming from different sources, it takes a lot of work to smooth it all out so that it seems like it comes from a single source. Did I hear you correctly?
AC: You heard me correctly. Now, one of the challenges that can be overcome using GenAI is that you can take such a document, written by different people or with pieces written by different people, upload it, especially now with ChatGPT, and say, ‘Reanalyze this, write this in a consistent voice, write this in third person, rewrite all the references to be APA style.’ And it will start working on that. It's better to do it in smaller sections; that way you don't mess it up. Then upload the previous section and say, ‘This is the style I want you to use for the next section.’ That's one area that can be facilitated through this and make things a lot faster. You still, as an expert, have to go through it and make sure it captured everything and that the intent of the document hasn't changed. That's one area where GenAI can be used to speed things up. The other is to just start exploring what some of the gaps are. If you run into a topic that you're not familiar with, you just start exploring gaps. When I do that, I start searching the web: What are some of the gaps related to this, this, this or this? And I start making notes to myself. Or I'll go to PubMed and start searching for gaps in the literature and seeing what they are. I'll look at the abstracts, copy the citations down and maybe copy the abstracts. All your notes from that you can upload into ChatGPT and say, ‘These are the gaps that I'm finding related to this condition. I need you to take everything I'm finding, look for overlaps, analyze it and write what would be a preliminary needs assessment for writing a grant on this.’ You would put some additional conditions on it; you want it structured the way you want. And it will start writing that. Now, that's not what you're going to use. That's not what you're going to submit to anybody that has a grant request.
But that's going to get you started, and it might get you about 80% of the way there. You can do a preliminary search in an afternoon, load it up into ChatGPT and have a preliminary, basic needs assessment a few seconds after you've uploaded. That'll get you about 80% of the way to really writing a needs assessment. Then, as the expert, Don, it's up to you to really delve deeper into those sections that ChatGPT puts out and really verify: Does a gap exist there? What are the references? Did it provide me references? Because ChatGPT can do that. Sometimes it makes them up. Sometimes it pulls bad references or old references. So you have to verify those and look to see if what it's pulling is still current. That's where that human expertise comes in. But having that start, some people call it cheating. I don't call it cheating at all. By the same token, a word processor would be cheating compared to a clay tablet. It's an efficiency tool. And that's all it is. It's something to get you started and something to move you forward.
DH: Great. Okay, thanks. What are some lessons learned so far that you've come up with using GenAI in the CME CPD space? I'm talking about mistakes made or frustrations that you've endured or any lessons that you'd like to share with our listeners.
AC: The mistake is not being specific. One of the mistakes is not being specific enough about what you ask GenAI to produce. And that's called prompt engineering. It sounds technical; it's really not. It's just a set of instructions that you ask a GenAI system to follow to produce something for you. Prompts can be short, simple things. Today I was on a phone call, and I was talking about one of our recent activities when somebody wanted to know what the net promoter score was. I had all the numbers in front of me, but because I was on the phone and we were taking notes, I just copied and pasted those into ChatGPT and said, ‘This is net promoter score data. Calculate the NPS.’ I clicked start, continued on my call, looked over and said, ‘The NPS is 85.’ And they're like, ‘How did you get that?’ And I'm like, ‘I just calculated it, no problem.’ So that's an example of a very simple prompt, right? Not even a complete sentence. It knew what I wanted. It knows what an NPS score is; I've asked it for NPS scores in the past. It's not a big deal. When you're talking about something as complicated as a needs assessment, or any type of writing that you're wanting to expand on later, you want to get a lot more specific. Your prompt needs to know who your audience is. You want to give it a persona first and say, ‘You are a world-class medical writer. You write the best medical needs assessments or clinical needs assessments for grants of anybody that you know, so take pride.’
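For listeners who want to check the arithmetic behind that NPS example, the standard formula is simple: respondents scoring 9 or 10 are promoters, 0 through 6 are detractors, and the NPS is the percentage of promoters minus the percentage of detractors. A minimal sketch in Python; the response data below are hypothetical, not the numbers from Andy's call:

```python
def net_promoter_score(scores):
    """Compute a Net Promoter Score from 0-10 survey responses.

    Promoters score 9-10, detractors 0-6; passives (7-8) count
    toward the total but neither add nor subtract.
    """
    if not scores:
        raise ValueError("need at least one score")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical survey responses for illustration:
responses = [10, 9, 9, 10, 8, 9, 10, 9, 9, 6]
print(net_promoter_score(responses))  # prints 70 (80% promoters - 10% detractors)
```

This is exactly the kind of rote computation a short, incomplete-sentence prompt can hand off, since the model already knows the convention.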
DH: So it helps to brag to ChatGPT.
AC: You know, there was a study out last week, and Ethan Mollick referenced it on his LinkedIn page. I can't remember who published the study. But when you get into appealing to ChatGPT's pride, it produces more effective and better results for you. So you would say something like, ‘You write the best needs assessments of anybody you know, so take pride in what you produce for me.’ Someone even put, ‘My job depends on this.’ It sounds silly, but these are the kinds of variables that it's looking at as it's producing. So you give it a persona. Then you want to say, ‘I want this to be suitable for a medical journal or for medical literature, so write it in the style of medical literature.’ So now it knows, ‘Okay, I'm not writing a haiku, I'm writing something very, very precise, very technical.’ It's adopting that style. And then you start giving it the different parameters of what you're looking for. It needs to have current references from the medical or public health literature. It needs to identify four gaps in clinical practice related to x. Each one of those gaps needs to be explained with references and citations. There need to be transition paragraphs in between each one. And then summarize everything at the end with a conclusion statement. So as you're telling it these things, it's saying, ‘Okay, I need to do this. I need to reference these sets of data from my database. I can ignore everybody who wanted a recipe for crack chicken that night.’ And you hit start, and it will use those very specific parameters you gave it to get as close to what it thinks you want as it can, and then you start revisiting and revising it.
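The persona-then-parameters structure Andy walks through can be captured as a reusable template. The sketch below is one illustrative way to assemble such a prompt in Python; the wording, function name and fields are assumptions for demonstration, not a prescribed format:

```python
def build_needs_assessment_prompt(condition, n_gaps=4, style="medical literature"):
    """Assemble a structured GenAI prompt: persona first, then the
    output style, then the concrete parameters of the deliverable."""
    return "\n".join([
        # Persona, with the 'pride' appeal from the study Andy mentions.
        "You are a world-class medical writer. You write the best clinical "
        "needs assessments of anybody you know, so take pride in your work.",
        # Style: tells the model it is writing precise technical prose.
        f"Write in the style of {style}, suitable for a grant proposal.",
        # Parameters: the specific requirements for the output.
        f"Identify {n_gaps} gaps in clinical practice related to {condition}.",
        "Explain each gap with current references and citations.",
        "Include transition paragraphs between the gaps.",
        "Summarize everything at the end with a conclusion statement.",
    ])

prompt = build_needs_assessment_prompt("heart failure management")
```

The resulting text would then be pasted into whichever GenAI system you use, and, as Andy stresses, the output verified and revised by a human expert.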
Alliance Ad Break: Being an Alliance member has its perks. From discounts to industry leading events like the Alliance Annual Conference to members-only access to the Alliance Communities, the Alliance is where healthcare CPD professionals come to learn. Visit acehp.org to join today.
DH: Okay. Wow, that's great. I have several follow-up questions I want to ask based on that. But before I do, you may recall that before we hit the record button, you mentioned a couple of stories related to AI, one of which was an article you submitted to the Almanac, and the other a self-study you were working on. Would you mind, for the sake of our listeners, briefly recapping each of those?
AC: Yeah, so after the Alliance conference in February of this year. And here's the shameless plug: if you haven't registered for the Alliance conference next year, please do so. There's going to be a lot of AI content in it, and it's in New Orleans around Mardi Gras time, so you'll really want to be there. But after the Alliance meeting this year, I came back and thought, ‘I wonder if AI can . . .’ and keep in mind, ChatGPT was just a couple of months old at the time, I guess about three months old. I said, ‘I wonder if it can actually write an article.’ So I looked up what's required to write and submit for the Almanac. I put more than a couple of prompts into ChatGPT and said, ‘I need an article around these prompts.’ And it wrote the article in roughly 15 to 20 minutes. It didn't take long at all. The article was completely written by ChatGPT. The Alliance editorial staff edited it just for content and tone, just a little bit, not much. But the big reveal was that it was written by ChatGPT. It's an article about ChatGPT, written by ChatGPT. We revealed that at the end, and we provided a list of the prompts that were used to create the different sections of the article. But it was published. I don't remember exactly what month, but it was in the first part of the year.
DH: Okay, thanks. I’ll be sure to look for that. Thanks.
AC: And the funny thing is, some of it is static, but because it was written in the first part of this year, a lot of it's out of date, because these systems are evolving at a speed that is almost impossible to comprehend. The new tools that are coming out, the features they offer and the precision with which they respond to prompts are just getting better and better. In fact, one of the quotes I like to use, I wish it were mine, but it's Sam Altman's. Make sure I'm not talking about the Ponzi scheme guy. It's Altman, the guy who runs OpenAI. And the quote was, ‘This is the worst version of AI you'll ever use.’
DH: Because they just keep getting better, right?
AC: Because they just keep getting better. And we can't say that about all our technologies. I mean, some people would say Facebook 10 years ago was the best version, because it didn't have as many ads and you just saw your friends' feeds, and now it's worse. But AI is just getting better and better. The second example you refer to came as we were preparing to submit our self-study for accreditation to the ACCME. It's an involved process. As anybody who's done it knows, you've got to solicit a lot of information from stakeholders and collect all of those data into a single voice, as we said earlier. One of the ways I used ChatGPT in my self-study was not to write the sections. I wrote the sections myself; I know my program better than anyone else. But I used ChatGPT's analytical and assessment functions to my benefit. I uploaded the criteria that I wanted to look at. Then I went to the ACCME website and downloaded the surveyor tools, which are available for anybody to download. I downloaded the surveyor tool for the criteria, uploaded that, and then uploaded the section for the criteria that I wanted to assess. And I asked ChatGPT, ‘Read and understand the criteria. Look at the surveyor tool. Now, critically assess what I've uploaded for my section. As a surveyor, provide areas of weakness and opportunities for strength.’ Just to strengthen it. And it would read it, and it provided a wonderful list. One of the things that shocked me: if you're familiar with the criteria, they're numbered, with bullet points under the numbers, and you want to make sure you answer everything. I had completely skipped a bullet point in one of the criteria. I didn't realize that I did it. ChatGPT found it, and it said, ‘It doesn't look like you address this in your self-study.’ At first I thought it was making stuff up. Then I went back and my jaw hit the floor, because I was like, ‘I completely missed that.’
DH: That's a great example. It actually also adds value to this podcast series, because we’re talking about just needs assessment and content development and outcomes. We hadn't even moved on to compliance and accreditation issues. But here's a beautiful example of it. That's a great example. Thank you, Andy.
AC: You're welcome. And it's an area I'm going to be using more and more. And it can do that for needs assessments. You come up with your final needs assessment, and then you put it back into ChatGPT with the same or a similar prompt, saying, ‘You're a world-class grant reviewer. Please critically assess this, based on what you know about this condition and the way grants should be written, and provide opportunities to strengthen it.’
DH: Okay. I hope our listeners are taking notes as you speak, Andy, because there are a lot of specific applications and takeaways, I can imagine, in what you've gone over so far. One of the things that strikes me as I learn more about AI, and actually as I was reviewing an early version of this survey that the Alliance is going to be sending out to members in the next few weeks or months, is the proliferation of platforms. I mean, it's not just ChatGPT. It's Bing and it's Bard. And it's OpenEvidence. And it's Claude and all these other platforms. And those are just, I think, the text ones. And then there are some image-based artificial intelligence engines as well. So my question for you, coming back to the issue of needs assessments and business proposals: Are there any specific platforms that you've used that you would recommend for people who are developing these assessments and doing business development work? Or conversely, any that you might recommend we stay away from?
AC: This is going to sound like a silly answer, but use the one you're familiar with. They each have their benefits. They each have their drawbacks. Up until yesterday, or the day before, ChatGPT was limited to training data that ended in 2021. That's expanded now to April 2023. So it's very current.
DH: Which version are you talking about there?
AC: ChatGPT Plus, the paid version, is now trained on data up through April 2023. The free version is still, I believe, November ‘21, or September ‘21, or September ‘22. But they're updating it as they advance the technology. Claude does some things that no other system can do. Right now, Claude uses a completely different kind of generative model; it's a constitutional model. And it can process almost a book's worth of information. You can upload entire PDFs and books that fit, and quiz it on those. It's pretty amazing.
DH: Funny, you should mention Claude, because I was just getting my feet wet using Claude earlier today. And I have my own little story to tell about that when the time seems right. Keep going. I'm very interested in your perspective on the various platforms.
AC: I was working with somebody the other day who wanted critical feedback on his dissertation and wanted AI to look for repetition and redundancies in it. I recommended he upload it into Claude and have it do that. He did, and it found some things, and it really strengthened his dissertation. He was really happy about that.
DH: But I did have just one little story I wanted to share, because I was starting to get my feet wet with Claude. I used it today to do some preliminary research in a pretty, I'll call it abstruse, esoteric section of medicine, which is the use of an enzyme called asparaginase as a therapy to help patients with acute lymphocytic leukemia, or ALL. I asked Claude, ‘What is the evidence for asparaginase?’ Specifically, the kind of asparaginase that comes from a microorganism called, I can't remember it. I'm better at writing than I am at pronouncing. But anyway, Claude, to its credit (and I already had some background in this field, because I'd written on it in the past), spun out pretty quickly maybe five or six paragraphs of text about asparaginase and the various forms and flavors of asparaginase being used in acute lymphocytic leukemia. But everything it was saying about asparaginase was positive. It was basically only saying good things about these various asparaginases. And so, while I was pretty pleased with the accuracy of what it had spit out, I noticed kind of a bias in favor of positivity. So I questioned Claude on this. I said, ‘Claude, thank you very much. This is very helpful. But it seems like you're only saying good things about the asparaginases. Are there any drawbacks? Anything less than great?’ And Claude wrote back and said, ‘Thank you so much, I really appreciate this feedback. I want to become more fair and balanced in my summaries. Therefore, I would like to tell you this, this, this, this and this about the asparaginases that are really not that great.’ For example, one particular one has a very high cost. For another, there's very little real-world data. For another, there's very little experience. And then I can't remember the other two or three drawbacks of this particular flavor of asparaginase.
So I thought that was really interesting, that you really need to kind of probe and ask and maybe not necessarily be satisfied with the first or even the second answer that you get.
AC: I’ve started using the phrase ‘critically assess’ in some of the things I ask GenAI to do, and I find I get a more balanced, true assessment of what I'm asking. Google started late on the AI scene and was kind of clunky at first, but they've really advanced. Google Bard has gotten a lot better than it was a year ago, or less than a year ago. I guess February is when Google Bard launched; it launched when we were at the Alliance meeting. And it had a terrible launch, because the first thing it showed on screen was an incorrect fact. It's gotten a lot better. And Google has done a really good job of integrating it within its entire suite of productivity tools and its email. One of the cool things about Google Bard is it will fact-check itself now. And Google Bard is free to use; that's one of the great things about it. You can ask it to generate something, and at the bottom of the screen it will say, ‘Let's fact-check that. Click here to fact-check.’ And it will do a Google search to fact-check what it has just generated and highlight areas where there might be a mismatch between reality and AI hallucination, which I think is a powerful tool.
DH: For sure, and much needed, according to what I've been reading and hearing.
AC: Yeah, much needed. Bing is overall positive, because it's essentially GPT-4. Microsoft paid $10 billion to OpenAI to use the technology, and they're getting their money's worth. They're integrating it as Copilot throughout Windows and throughout a lot of their applications, and it's proving useful. If you want to use GPT-4 but don't want to pay the $20 a month for ChatGPT (and I don't know why anybody wouldn't want to pay it, because once you get used to it, you don't want to go back), you can try it for free with Bing, because it uses the same model as the paid version of ChatGPT. The image generators are fantastic. OpenAI now has DALL-E 3 available in ChatGPT, and it can produce amazing images. And I'll show you. I can't show you. This is a podcast.
DH: That was a great thought, though. Maybe in the next version, you know, version 2.0, it'll be a visual podcast, I guess they call it.
AC: ChatGPT’s license says you're the owner of the images it creates, and you can use them in the ways that you want. Now, those licenses change frequently, and I recommend people look at them before using images in any way, to make sure they're still compliant. But right now, mostly for noncommercial use, you can use images that are created by these platforms. Bing is using the same engine. Adobe has a fantastic set of AI tools built into its products. Generative fill in Adobe Photoshop is amazing: you can take a small image, say generative fill, and it creates this huge image out of just a small sample and makes it look amazingly real. Background removers. Let's see, there's Midjourney. There's Stable Diffusion. They're all fantastic. They all do wonderful, wonderful things.
Alliance Ad Break: Like what you hear on the Alliance podcast? Visit almanac.acehp.org to read the latest continuing professional development news and insights. Visit today to get informed and inspired.
DH: Okay, thanks. Thank you for that broad-spectrum introduction to some of the platforms you've been working with. Looking ahead, and bringing our focus back specifically to this issue of needs assessment and business development: What excites you the most about the potential application of generative AI to needs assessment?
AC: I think it's the personalization that generative AI offers. The ability to go in and learn how Don writes, to learn how Don thinks, because Don has trained it on his writing, on his thinking process. He's uploaded examples of his work as examples of what he wants it to mimic. And then to come up with a prompt and have something come out that looks like that, that's 80% of the way there, so that you don't have to spend three days working on it. It comes out in an afternoon, and you can then spend the rest of your time, whether it be that afternoon or another, editing and polishing it and making it as useful as you want it to be. I think the personalization of GenAI is amazing. The versatility of GenAI, with just the examples we've talked about on this podcast, has been everything from generating images to computing net promoter scores to coming up with a critical assessment of really disparate documents, and then assembling different things together into one unified voice. I think that's a tremendous benefit of using a system like this. A lot of people will ask, ‘Can this find me the latest gaps in this specific disease state?’ And yeah, it can probably do that. But Google can probably find that for you too. Where a system like this benefits you is in taking all these notes that you find from different places, all these abstracts, all these different references, and then coming up with a cohesive document based on them. It automates what we do in a very manual process, and a lot of times it can do that much more efficiently than we could, and then it allows us to focus our expertise where our expertise needs to stay.
DH: Right. Okay. Well, gosh, we thought this might take 20 minutes, and here we've been on the call for 40. I think we need to start drawing this conversation to a close, though even as I do, I think about so many other follow-up questions I'd love to ask you, potentially in a future episode, if you would perhaps be so kind. As we wrap up for today, I just want to thank you for joining us for this discussion, and to thank you, on behalf of the Alliance members who are going to be listening, for sharing your expertise and your experience with us. Do you have any final words of wisdom before we sign off?
AC: I do. Very quick. In the latest CMEPalooza presentation that Brian McGowan and I did, there was a poll. On that poll, only about 25% of the participants had knowingly used generative AI to help with their daily work. And that really kind of surprised me, because it's ubiquitous. It's everywhere now. So my words of wisdom: if you're at all interested, and even if you're not, get in and figure it out. Play around with it; just get your feet wet on a free version. You're not going to mess up, you're not going to do anything wrong, nobody's going to steal your data. Just go in and figure out what it can and can't do, even if you have no plans of using it down the road. You need to know what it can do, because knowing what it can do is the greatest step forward in advancing it and making it happen.
DH: Well, thank you, Andy. Thank you so much. And on behalf of our listeners, thank you for sharing your experience and expertise with us. I look forward to seeing you in person at the Alliance in New Orleans in February. And a message to our listeners: be sure to tune in to the next installment of the Alliance Podcast mini-series on generative AI, when we'll be featuring an interview with Alexandra Howson, my medical writing colleague who lives on the West Coast. We'll be discussing AI's use in developing content, which is one of Alex's specialties. Also, for even more education on generative AI in healthcare CPD, register to join us for the Alliance 2024 Annual Conference, taking place February 5 through 8 in New Orleans. Visit acehp.org/annual-conference to secure your spot and view the full program, including sessions from Andy and myself. We'll see you there.