Have you ever seen a Rube Goldberg machine in action?
These intricate systems are built from a chain of stages, each triggering a reaction that pushes the action forward toward one particular outcome. When a Rube Goldberg machine gets going, it’s hard to tear your eyes away until the outcome has been achieved. You can find examples of Rube Goldberg machines on YouTube, but similar systems exist naturally, too, as Katie Lucero, the vice president of audience, analytics, outcomes and insights at Medscape Education, recently discovered.
Over the course of a study she conducted, Lucero examined learners’ knowledge, competence, confidence and commitment to change in the context of continuing healthcare education. She found that improvements in one area tend to trigger improvements in the others.
“Results suggested a causal relationship among these variables,” Lucero explains.
Lucero’s study involved multiple-choice questions, case-based questions and Likert-type rating questions. Over the course of the study and the educational offering, she collected and analyzed learners’ responses to these questions and surveys. This analysis allowed Lucero to identify patterns in respondents’ answers, including evidence of a causal relationship among the four outcomes.
Lucero underscores the importance of an educational plan that produces data capable of being evaluated and analyzed. Educators who want that kind of data need to think carefully about the plan from the very beginning.
“In general, you really need to think about a study design first, beyond just what kind of questions you might ask,” Lucero says. “You have to think about the methodological and the measurement plan.”
The measurement plan is determined in part by the learning objectives. If educators want to improve clinicians’ knowledge, competence, confidence and commitment to change, as in Lucero’s study, then there must be some way to evaluate whether those outcomes have been achieved. For Lucero, multiple-choice questions were the best way to measure how the education affected learners, but they aren’t the only way. For educators with more time to comb through the data, open-ended questions are a strong choice for measuring competence and commitment to change.
“Open-ended questions provide a deeper level of understanding about whether someone makes the right decision, or the best decision for the right reason; they also provide a point of additional learning and self-reflection,” Lucero says.
She cautions educators to consider their own timelines and availability when creating the measurement plan. Just because open-ended questions may reveal more about why learners made certain decisions doesn’t mean they are automatically the best way to measure learning outcomes.
“In our field, we have to do assessments quickly and efficiently. Open-ended responses are tougher to analyze at scale,” Lucero points out.
After educators reflect on desired learning outcomes and create measurement and education plans, they can start collecting data from learners. These data should not only be used to assess impact but also analyzed to inform future education, Lucero says.
For example, if learners are answering multiple-choice questions incorrectly and reporting low confidence in the subject matter, educators should reflect on their offering and brainstorm ways to improve it. Lucero urges educators to ask themselves a series of questions when responses indicate low knowledge, competence or confidence.
“You should do some self-reflection and ask, ‘Could we have done something differently? Could we have had better or different content? Could we improve the way we wrote our questions to inform the next offering?’” Lucero says.
If, after this reflection and a look at the data, educators believe the questions were well written, they should consider a new way to convey the information. If the initial education was mainly didactic and text-based, then subsequent education should be more interactive and case-based, Lucero suggests.
On the other hand, if learners are scoring well on the multiple-choice questions and reporting high confidence, educators can design more specific follow-up sessions.
“If your education was primarily on the typical presentation of a disease, then you might want to go into the more nuanced cases so that learners feel like they’ve received a comprehensive view of how to treat the disease,” Lucero says.
In an even more nuanced scenario, if learners score high on knowledge but report lower confidence, Lucero says, “You may want to think of ways to increase confidence — how can you give the learners practice implementing their knowledge?”
The most important thing is to give learners’ responses real attention and consideration. After all, analyzing those responses is the best way to ascertain whether an educational offering achieved its goals. And, as Lucero points out, the responses should be used to set the stage for future learning.
“You have to use that information to fuel what you want to do next,” Lucero says.
Annabel Steele is an editorial senior associate for the Almanac.