Almanac - Insights and Applications for the Healthcare CPD Community
Powered by
Alliance for Continuing Education in the Health Professions
Transcript of Episode 61 – Data Analytics for Impact
Thursday, July 31, 2025


By: Katie Robinson, PhD, CHCP; Wendy Cerenzia, MS; Andrea Zimmerman, EdD, CHCP

Listen on the Almanac, or find us and subscribe on Apple Podcasts or Spotify.

Transcript

Andrea Zimmerman, EdD, CHCP: In a world overwhelmed by information, how do we sift through the noise to find what truly matters? Today we're diving into the power of data analytics, not just to measure impact, but to drive it. Whether you're a data enthusiast or data-wary, this conversation will spark ideas on transforming how we use data in continuing education. Hello, and welcome back to the Alliance Podcast: Continuing Conversations. I'm Andrea Zimmerman, and I'm thrilled to be your host for today's episode on a topic that's both timely and foundational: data analytics for impact. Joining me are Katie Robinson and Wendy Cerenzia. Katie Robinson is the editor-in-chief and chair of the Almanac Editorial Board and VP of instructional design and analytics at Vindico Medical Education. Wendy is the CEO of CE Outcomes LLC. Together, Katie and Wendy bring a wealth of experience in using data not only to assess performance, but to drive meaningful educational change. Katie and Wendy, welcome to the Alliance Podcast.

Wendy Cerenzia: Thank you.

Katie Robinson, PhD, CHCP: Thanks. It's great to be here.

AZ: What role does data play in advancing the goals of continuing education or continuing professional development?

KR: That is an excellent question, and pretty much the reason why we are all here today. You know, I think that in CE, we sort of live in this unique world compared to other departments within medical affairs companies, and we really have the unique opportunity to understand our learners at a different level, partially because our accrediting bodies make us understand our learners at a deeper level, but also just because it's part of the education that we do. A lot of the education that we do is making sure we're asking the right questions to really understand where our learners' needs are, both in terms of how they're practicing and what they know, and maybe, you know, what challenges they continue to face. And because we can do that, we can give them exactly what they need, and a learner who is getting what they need from our education is much more likely to come back for more education, which is a wonderful thing. So, thinking about treatment guidelines, for example: it might be that guidelines get updated every year or every two years, and learners just don't know that they're getting updated, so we could do some education where we're not only educating on what the guidelines are, but then we can maybe take it a step further and give them some realistic case scenarios where they're really being challenged to apply the guidelines in sort of a safe setting, not a real patient, but in a clinical learning experience that we create for them. That's from an educational standpoint of content, but we can also use our data to see what kind of educational formats they like. Maybe you're seeing that a specific subset of learners, whether it be younger clinicians or maybe a specific specialty, is seeking education that's all done on a mobile device versus on a computer. Or maybe they're accessing education through your social networks versus traditional email. These are all types of things that, if you're looking at the data, it's always trying to tell you something, and then it can just help you better refine your education going forward. So, going back to the point that there are certain things we have to do in the CME world because our accrediting bodies require us to, we can always take it a step further and learn beyond what we're just required to learn, right? Can I get at what my learners want to know? Can I get at what my learners like in terms of educational formats? In doing so, it's just going to help you continue to develop education that's more refined and more what your learners want.

AZ: Thank you. Wendy, do you have anything that you would like to add?

WC: Yeah, I completely agree with everything that Katie's saying. I think every time you have the opportunity to engage with your learner through data, getting that data from your learner is an opportunity to improve your education. So, listening to that learner, not only are those required things that you have to gather for certain accreditation or outcomes reporting needs, but it's the opportunity to really understand what it is that they're looking for and how you can best meet their needs. So I couldn't agree more. You know, look for those opportunities to make sure that the data you're getting is the most meaningful, not only for your outcomes reporting, but for your future education development, for continuing programs into the future.

AZ: I would love to dive into that a little bit. So, Wendy, how do you utilize data in your day-to-day operations, and does it also carry forward and influence your life outside of work, too?

WC: So, data is everything that we do. CE Outcomes sits in kind of a unique position in that we conduct outcome surveys related to individual programs, but we also do a lot of research and outreach to clinician populations, understanding their educational needs. What are they looking for? What formats of education? So everything that we do is really a data-based decision. Beyond just that, I mean, data in everyday life: it makes me question every piece of data that is out there. How was it gathered? What was the sample? Those questions have me digging deeper into the data. Data revolves around every decision being made, whether it's a business decision or an educational decision, so really assess that data and make sure it is going to give you the best insights that you can take forward.

AZ: Katie, can you share an example of a time in which analytics created a positive or a negative impact in your work?

KR: Yeah. So I do like to start off by saying that there's no such thing as bad data. That's really something that I try to communicate to everybody within my company. They're like, 'Oh, we did so bad! Nobody did this,' or 'Everybody failed this,' and I'm like, that doesn't mean anything. It's just your data trying to tell you something. You know, there was a time when we had very few people fill out our program evaluation, and it really just started to make us think, oh, maybe we're asking at the wrong time. Maybe if we're giving learners a QR code before they even get out of the room, then maybe they'll do it before they leave the room. So just because somebody didn't do something, it's not just, okay, everything's bad. You can really learn from that. There's always a learning opportunity in something that you might at first consider bad data, but it's really just your data trying to tell you something. But I think the positive side of it is the stuff that excites us, the stuff that makes us all do what we do, because we're seeing positive impact in, you know, clinician knowledge or patient health. Not all of our programs are designed to measure changes in patient health, but certainly that is the end goal. So one thing that really stood out to me recently is, at Vindico, we're trying to go beyond this framework of Moore's pyramid of outcomes, which is just: how many people came? Did they like what they saw? Did they learn foundational knowledge? Right, and taking it all the way up to patient health. But we really like to think about other ways of measuring change. So one thing that we did really recently was in obesity. In obesity management, there are a lot of negative biases that clinicians might not even be aware of, just sort of these latent biases they have in terms of obesity being a lifestyle decision, which aren't necessarily true things, but just as society has evolved, this is how the management of patients with obesity has sort of been construed. So we did a program that was really just designed to make clinicians aware of these latent biases that they may or may not have, and then have them self-reflect on certain statements about what they felt about the management of patients with obesity before and after the education. And we really saw some positive changes in terms of clinicians feeling more that obesity was a disease and less of a lifestyle decision. Those types of things, I think, are very powerful, and might not necessarily be measuring change in behaviors or change in practice. But you have a clinician who now recognizes obesity as a disease. You know that they're going to go into clinic the next day and have much more empathetic conversations with their patients, which is just going to make patient satisfaction go up and possibly have a positive impact on disease control.

AZ: It's a great example.

KR: Yeah. Wendy, do you have any other examples of positive impact? I mean, I'm sure we could talk about this for hours.

WC: First of all, I couldn't agree more about negative data. There is no negative data; every data opportunity is just that, an opportunity to dig deeper and to understand that data more. So, you know, if something doesn't come out the way that you would expect, that just gives you the opportunity to dig in. Okay, who were the learners? Let's understand who's getting it wrong, who's getting it right, and just keep digging to understand that story better. So I love that that has been translated into changes in how you would field a survey, or when you would field a survey, Katie. And your example of obesity: we've also done quite a bit of research in that area, and it's so impactful to see that education as a whole makes a difference, not only in those attitudes, but clinicians do go and change their practice the next day. We get to talk to clinicians daily and have those interactions about what they have implemented into their practice, and it is the reason we do what we do. You see that they are treating their patients differently; those patients are getting better care because of the education. So on the positive side, there are tons of stories about how impactful education is, but really being able to tell that story, I think, is a key that we'll dig into in just a few minutes.

AZ: Thanks. So are either of you using AI tools in your analytics work? And beyond that, maybe also to inform data collection methods, writing your survey instruments or when you distribute the survey? Are you using any of that in your work with data? And if so, how is that helping you uncover insights? Or do you have any lessons learned from that?

WC: Carefully, I would say, is how we're implementing it. We had started testing ChatGPT for some open-ended coding several years ago. We literally ran tests of what ChatGPT would give us versus what our manual approach to coding data would give us: how comparable were those across a series of different projects? Not great. There was a lack of clinical nuance, I would say, in terms of how open-ended responses related to a specific clinical question were coming out, and so we kind of paused on using it for open-ended coding. That being said, I think more recently ChatGPT and Copilot are doing a better job than they used to. We're still very careful in how we apply them, particularly with any data analysis. Where I've seen it helpful is with some of those questions around, why is my data not coming out as I would expect? So if maybe those pre/post test scores came out unexpectedly, you can put that question in and say, what am I missing here? How may this have been misinterpreted? And sometimes it gives you a little bit more nuance to then either go back to the clinical expert that you're working with, or dig in a little bit more to some of those questions. So I think there are some really creative ways to think about not only the survey questions, but also if you're getting tripped up on 'I'm not wording this question or this distractor how I should be.' That's a place where it's not asking ChatGPT for content; it's asking for refinement in ways that may be helpful, again, in making sure your questions are crafted in a way that isn't going to give away the right answer because it's too long, for example.
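
For those curious what the kind of comparison Wendy describes could look like in practice, here is a minimal, hypothetical sketch in Python. It assumes you already have a spreadsheet where each open-ended response carries both a manually assigned code and an AI-assigned code; the file and column names are invented for illustration, and the sketch simply measures how often the two coders agree.

# Hypothetical sketch: comparing AI-assigned codes against manual codes
# for the same open-ended survey responses. File and column names are
# illustrative, not from the episode.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

df = pd.read_csv("coded_responses.csv")  # one row per open-ended response

# Simple percent agreement plus a chance-corrected agreement statistic.
agreement = (df["manual_code"] == df["ai_code"]).mean()
kappa = cohen_kappa_score(df["manual_code"], df["ai_code"])
print(f"Agreement: {agreement:.0%}, Cohen's kappa: {kappa:.2f}")

# Pull out the disagreements so a human reviewer can check for the kind of
# missing clinical nuance Wendy mentions.
disagreements = df.loc[df["manual_code"] != df["ai_code"],
                       ["response_text", "manual_code", "ai_code"]]
print(disagreements.head())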

AZ: Yeah, that's really great, to use it to troubleshoot some issues that maybe you aren't seeing. Katie, do you have anything you'd like to add?

KR: Yeah, that's definitely a new thing we've done at Vindico, using it sort of for question validation: not question creation, but validation. It's definitely been very helpful and sort of eye-opening on things that we wouldn't have necessarily thought about. I think our AI journey for data analysis is sort of similar to Wendy's as well; we started about a year and a half ago. You were talking about coding, Wendy. I've never tried that specifically, but for our qualitative data, we are using it more for something like a sentiment analysis. A lot of times this is for patient data, right? So patients said this, and there are a hundred social media comments, or there are a hundred survey responses: can you do a sentiment analysis on these? And it'll tell you, oh, 17% concerned, 18% empathy. So it's actually very, very good at that, and we validated it against our own manual way and across other methods as well. What I would love, and I'm still sort of struggling with, is using ChatGPT or another AI tool for more quantitative data analysis. I think our struggle there is very real in that nothing we do from the data analytics side is templated. So it's not like column C is always going to be knowledge question one, right? It's never like that. So our data output is really where we're struggling with that. But when we initially started about a year and a half ago, things were very, very messy. Now ChatGPT has improved quite a bit, and it's just a little bit messy, to the point where we can use it for a quick glance. So say I wanted to separate my data into two cohorts just to see if there was a trend. I could use ChatGPT quickly enough to identify at a glance whether I thought there was going to be a trend there. Maybe there is, maybe there isn't; if there isn't, okay, I didn't just spend a half hour doing it manually. I just got an at-a-glance sort of look. Another thing that I've used ChatGPT for a lot is Excel formulas. So it could be something very complicated, like, oh, I want to compare data in column B, but only for those that answered C in column L. It's very quick to not do the analysis, but give me the formula to just copy and paste and do that. So that has been a time saver for me: not necessarily doing the analysis, but providing me the tool to do the analysis a little bit quicker.
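
As a concrete illustration of the quick cohort split Katie mentions, here is a small, hypothetical sketch in Python rather than Excel or ChatGPT. The file and column names are invented for the example, and the closing comment shows the kind of one-line Excel formula the same question might translate into.

# Hypothetical sketch of an at-a-glance cohort comparison; the file and
# column names are illustrative only.
import pandas as pd

df = pd.read_csv("activity_responses.csv")  # one row per learner

# Compare the average pre-to-post change in score between cohorts,
# e.g. split by specialty, just to see whether a trend is worth pursuing.
df["change"] = df["post_score"] - df["pre_score"]
summary = (df.groupby("specialty")["change"]
             .agg(["count", "mean"])
             .rename(columns={"count": "learners", "mean": "avg_change"}))
print(summary)

# A rough Excel equivalent of the same conditional comparison, assuming
# specialty sits in column D and the score change in column E:
# =AVERAGEIF(D:D, "Cardiology", E:E)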

WC: Yeah, I think that has been a huge thing, just going in and saying, I want to run these things; how do I do it? If you're not really in depth, in tune with Excel, for those of us who aren't, it really gives you those keys to be able to run whatever you want to run without having to spend all the time on the tutorial in Excel.

KR: Exactly, or watching a YouTube video.

WC: Yeah.

AZ: Exactly. Well, so you allude to just the sheer amount of data that you get from various surveys or focus groups, or however you're collecting your data. How do you navigate the information fatigue, whether it's quantitative or qualitative? Are there any practical tools or resources that you would recommend that help you do your job?

KR: Practical tools and resources, that's a challenging one. But I can tell you that in terms of navigating the fatigue, I like to think about my data telling me one of two things: it can tell me impact, or it can tell me persisting challenges. And when I go to tell an outcomes story, that's sort of what I think of. So there may have been a lot of questions within my activity that are related to, you know, showing persisting challenges, but maybe that's a separate thing, for one story. Maybe I just want to focus on impact, because I saw some really cool shifts, like we talked about with attitudes and biases, or I saw some really good shifts in planned changes to practice. I will try, in my mind, to separate those things, because then I feel like I'm looking at smaller sets of data, and I don't necessarily need to fit all of it into one overwhelming story. I can just put it into separate silos that serve two different purposes: one, to show the impact of my education, or two, to show where my education needs to go in the future. For practical tools, you know, I think, just learn from yourself and learn from others. Right? We're all in this space, we all know each other in this world, so go online and do a CME program that wasn't related to any of the education that you provided, or, if you're at a meeting, go attend CE programs and just see how other people are doing things, and try to figure out how you can best apply that to your own work. I think that a lot of times, when we do what we do over and over and over again, we sort of get stuck in a cycle of asking things the same way all the time. And maybe if you just asked a question a little bit differently, it might take you somewhere new, and it might allow you to find out something new about your learners, or do something more efficiently.

WC: And I was just going to say, too, I think one of the things that we always talk about within our company when you're looking at a survey question from the beginning is: what is this data going to tell me? How am I going to use this data? If you can't answer that question, why are you asking the question? Because it's really easy to create more and more and more questions that then turn into more and more data, which then creates not only fatigue when you're doing the output, but fatigue on the survey taker. It's really challenging to get clinicians engaged in any response to any pre/post, any survey whatsoever. So make sure that every single question is that best opportunity to get the data that's going to be the most meaningful to contribute to that story. That's what I always say: look at that question, and if you can't say what this data is going to do to tell your story at the end, then the question is either wrong or shouldn't be included at all.

KR: I love that. That's great advice.

AZ: Yeah, that leads right into a question I was gonna ask, which is how you determine which data are valuable? Do either of you have anything to add there beyond what you've just said?

WC: I would just say, in addition to thinking about what the question is going to tell me, really think about how the questions in your survey are going to weave together to tell the story. So if you have multiple objectives that you're trying to assess through a survey to demonstrate the outcomes of the program, maybe a knowledge objective and a competence objective, all of these things are going to weave together in a clinically meaningful story. So thinking about those things: if you've asked a barriers question, is it going to support one of the questions that you asked about implementing a change into practice? Is it going to support the shift from knowledge into that competence change? So I think, just think about how those questions weave together. If you can do that in the beginning, it's really aligned to the goals of the education. So if that education was designed to close a gap, a knowledge gap, a competence gap, a clinical practice gap, that should then weave through to your story. So again, thinking from the beginning about your outcomes, and how that needs to be thought of throughout the program, with the right target audience and the right question, will help you at the end really pull together your story.

KR: Another thing I would sort of add: I started my job at Vindico only writing needs assessments. That was about 10 years ago, and as outcomes evolved, there was more and more need for a dedicated outcomes team. So we then had the needs assessment team and the outcomes team. But since I had experience in writing needs assessments, I was like, 'Hey, these things are so related, we need an instructional design and analytics team.' And so now I just tell all the people on my team who are writing the needs assessments: you guys write the best outcomes questions, because you guys are the ones who are like, 'I really wish I had a piece of data that showed this evidence of need.' And that's what we're thinking about when we're putting questions into a program. Those are your juicy questions that the ACCME doesn't require you to ask, right? But those are the ones that are really going to show you what your learners need in terms of content and in terms of format. So if you have somebody at your company who might not be involved in outcomes but is very much involved in writing agendas and doing needs assessments, get them involved. They probably have a lot of good ideas.

WC: And it's really that pull-through. Because if it's not considering what that educational needs assessment described and demonstrated in the beginning, and your outcomes aren't related to that, it's two different stories, right? You set up the need, you said, 'Here's why we need education.' We did the education, we designed it great, but then the outcomes don't align to that. So really tying all of the pieces together is what is going to give you that data that's like, 'Aha! Now it all makes sense.'

KR: Yep, for sure.

AZ: Yeah, along with that, thinking about whatever type of project you have, is there a best time to collect data? Is it before, during or after? We talk about outcomes, but how are you gauging that? And when are you collecting the data?

KR: Yeah, I mean, I can start. I'm sure Wendy and I have the same thing to say about this. But it really depends on what you're trying to show, right? So if you're trying to show baseline anything, baseline knowledge, baseline practice patterns, baseline perceptions, that all needs to be asked before. Whether it's asked before, you know, if it's a live event, it's asked at registration, that's fine; it's before, and it's assessing baseline need. If you then want to show impact, you need to ask those same questions after the education. It is possible, and a lot of times, I think, a strategy in education and adult learning is to integrate learning moments or engagement opportunities for learners to engage with the content. So that could be a question at a live meeting, and it's possible to do that there, or within an enduring activity, and use it as baseline data if you're doing it before that part of the education. So if you want to assess baseline knowledge of guidelines, or baseline use of guidelines, you can do that in the education; it serves as a way to bring your learner back to the content. But you haven't gotten to that part of the content yet, so it's still new to them, and I do think that's a powerful tool. There's a lot of data out there to show that integrating opportunities for your learners to engage with the content does draw them back in. I think we can all relate to that ourselves in any sort of education that we've gone to. Wendy, do you have anything to add to that?
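
To make the baseline-versus-post idea Katie walks through a bit more tangible, here is a minimal, hypothetical Python sketch. It assumes you have matched pre- and post-education responses for each learner; the file and column names are made up for illustration, and the sketch reports the shift in percent correct plus a simple paired test on total scores.

# Hypothetical sketch of a matched pre/post comparison; column names are
# illustrative. Each *_correct column holds 1 (correct) or 0 (incorrect).
import pandas as pd
from scipy.stats import wilcoxon  # paired, nonparametric comparison

df = pd.read_csv("pre_post_responses.csv")  # one row per matched learner

pre_pct = df["pre_correct"].mean() * 100
post_pct = df["post_correct"].mean() * 100
print(f"Correct at baseline: {pre_pct:.0f}%; after education: {post_pct:.0f}%")

# For total scores across several questions, a paired signed-rank test gives
# a rough sense of whether the shift is more than noise.
stat, p_value = wilcoxon(df["pre_total"], df["post_total"])
print(f"Wilcoxon signed-rank p-value: {p_value:.3f}")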

WC: Well, you started it perfectly with 'it depends.' We joke internally, because every single response that I give, about a project, about a design, about a slide deck, is 'it depends.' What's the goal that you're trying to accomplish? Then determine when that best time to engage and get that data is going to be, and it really does depend on the type of education and how you're engaging the learner. You don't want to be disruptive, but you do need to get that data if you are trying to show a pre/post, for example. So there are different ways to think about it: like Katie said, engaging during the activity, and then even simple ways if it's not possible or not functional to get that pre data but you still need to show change. Think about a control group; think about other ways to show that comparator. So again, 'it depends' is kind of the key phrase there.

KR: Another thing I just want to add, sort of related to this: I know that a lot of times we talk about, and even talked about at the beginning of this podcast, fatigue and worrying about burdening your learners with too many questions. But I do think it's founded in adult learning, right? You're asking some of these questions for learners to answer before they engage in content, and it's priming them to understand what they need to learn. So I think a lot of times people are like, 'Oh, not too many questions.' But the flip side of that is, you're helping the learner understand what they're going to learn and what they're going to need to think about. And again, I think we've all been there in any education that we've done: you're like, 'Oh, they're going to ask me this. I better pay attention when it gets that far.' Right? So just thinking about it that way helps me not feel so bad, if there's a question I really want to know the answer to, about throwing one more in there.

WC: And again, creating those questions in a way that's going to be engaging, you know, presenting patient case scenarios, gets learners really in the mode of thinking: okay, if I were in practice and I'm seeing this complicated patient, how am I going to really approach diagnosis, testing, those sorts of things? And so there are ways to create those opportunities, like Katie said, to prime the learner and get them thinking in ways that open their mind differently. We have a lot of case-based scenarios in our surveys, and ours aren't necessarily related to or incorporated into an educational program, and we get comments at the end like, 'I didn't know I didn't know that,' or 'Oh, I need more information on that.' I mean, that is the light bulb. They're getting it. They're going to now go seek information on something that you raised a question in their mind about. So doing that ahead of an educational session, I mean, there you go: I know I don't know it, now let me pay a little more attention.

AZ: Yeah, that's a great example. Well, after you collect all of this data, how do you translate it into a compelling story that not only supports your CE efforts, but that you can also relay to various stakeholders, like any grantors that you may have, or learners? Or, if you've identified additional gaps, how do you translate that into something that they can really take and do something with?

WC: Yeah, I think the balance is between a data dump and oversimplification of your data, and it really lies in between. We talk a lot in the community about infographics and different data display elements, and while those are all key to helping your audience digest the data, I kind of think of it as putting that story together and layering in the data. So start with that overall summary, then provide enough depth that there's not skepticism around the data, that you're really providing enough detail around who the sample was, who the learners were. So I think of it as a layered approach. Give enough for those who want to dig in, but don't dump so much that you're going to overwhelm somebody who's just coming in and needs, 'What are the key takeaways?' Okay, 'Well, now I want to know more.' But start with, here are the results and here are the key insights.

KR: Yeah, that's awesome. I was just going to say something very similar to Wendy, but she also just took us back to high school learning, and that's sort of how I approach it. In high school, you learn about your five-paragraph essay, right? Start with your thesis statement: what is it that you want to show, and find three pieces of data that support it. Oftentimes we have 15 pieces of data that support it. But, like Wendy said, we don't want to overcomplicate, right? Pick out the few that really tell your story, and then the rest can be supplemental and add to it. But don't overcomplicate if you don't have to.

WC: And we say that, I mean, we're data people. We have to sometimes just come up out of it, because we'll get so in the weeds of, oh, but this particular segment of the learners did so-and-so, but we're so granular that it's not, at the end of the day, contributing to the overall story. So I think, just like you're going to ask of every question, 'What is this going to tell me?', anytime you run a sub-analysis, anytime you perform something additional in that data or dig deeper: okay, that's great, but what is that really contributing to the overall story? Because you can get deep, and it doesn't always have a lot of meaning.

AZ: Thanks so much, Wendy and Katie, thank you for joining us in today's conversation. As we wrap up, what's one takeaway listeners can take action on today to become stronger stewards of data? Do you have any advice that you would give providers looking to maximize the impact of their analytics?

KR: Yeah. So the first thing I would say is, don't confine yourself to the traditional measures that we're asked to report back to our accrediting bodies. I think Wendy talked a lot about asking very valuable questions that can be more related to attitudes and perceptions, you know, these Likert rating scales with a statement and 'To what level do you agree?' These kinds of questions can be very valuable, and they're also very easy to write. So make sure that you're not overcomplicating it, right? If there's something you really want to know, because you are doing a needs assessment and you really wish you had your own data to back up a point, then write a question about it; it doesn't have to be complicated. And then the second thing, which we mentioned, is that there's no such thing as bad data. Really, your data is just trying to tell you something, so try to listen. How about you, Wendy?

WC: I would just say, don't be overwhelmed or intimidated by data. This can be a very tricky industry, because you're talking about clinical data, you're talking about outcomes data, you're talking about educational data. All of these things, I think, can sometimes be very intimidating to those who are trying to elevate what they are doing. Look at your questions. Make sure you understand why you're asking the questions. Make sure you understand why the answer choices are the answer choices. Understand how each question relates to the goal of the education and why that education was put on. Make sure the question is going to relate to the audience that you're asking it of. If you do all of those things, you have set yourself up for a successful survey and successful data analysis. And again, just think about those pieces when you're putting that story together. Can you simply tell that story? If you were going to present and you had two minutes, can you clearly articulate what came out of that data? So I think, really simplify is what I would say.

AZ: That's great advice, Wendy. Thanks for tuning in to Continuing Conversations. Whether you're just starting out with data analytics or deep in the weeds, we hope today's episode sparked ideas to help you make data work harder and smarter for your learners.
