Methods of Measurement: Competence and Performance
Wednesday, October 13, 2021

By: Annabel Steele, Alliance Staff

Competence and performance are essential terms to understand and apply in education. A simplified, non-clinical example is the professional golfer. Golfers spend many hours on the putting green learning to read its slope and speed so they can put the ball in the hole; those hours of practice build competence. There is no concrete score to show how they are doing during practice, so other methods must be used to evaluate those sessions.

Performance, however, is the golfer competing in a sanctioned tournament, the ultimate stage. The stakes are high: there is professional standing to be improved or tarnished, money to be won or lost, and sponsors to be satisfied. The skills-based behaviors practiced over all those hours are now tested in the golfer's professional setting. That is a true demonstration of performance.

The same is true for clinicians, says Anne Symons, CHCP, FACEHP. Symons differentiates between competence and performance: competence is analogous to the golfer's practice, while performance is analogous to the professional golfer's round on the course.

“Competence is developing skills and strategies — what you know how to do and what you can show how to do,” she says, referring to the definition from the Moore, Green, Gallis outcomes model. “Performance is about the skills and strategies that a clinician implements when treating patients or in the practice of medicine.”

According to Symons, it is of the utmost importance to understand the correct definitions for these terms and know how to properly measure them in an educational context. After all, understanding the terms and evaluating the outcomes after an educational offering can help instructional designers improve their courses and maintain accreditation.

There are two major ways to evaluate competence and performance in the continuing healthcare education setting. The first is objective, or fact-based and data-driven, measurement. The second is subjective, or self-reported, measurement.

Objective Measurement

“One of the techniques commonly used to objectively measure competence is putting the learner into a skills lab, where they’re actually using the skills and being observed,” Symons says. In this setting, students can demonstrate the skills and strategies gained from the educational offering by practicing on cadavers or mannequins.

Another way to objectively measure competence is case-based learning. This method can be applied to just one learner or a group of learners. In case-based learning, the student “gets an opportunity to review a case and use the knowledge they’ve gained from the educational activity to develop skills and strategies to diagnose and treat the patient,” Symons says.

Unlike competence, performance takes place in the actual clinical setting. Objective measurement of performance, therefore, takes other forms. One method of objective performance measurement is to conduct a chart audit, which provides “actual data to tell us what the physician did when they were seeing patients,” Symons says.

Another way to objectively measure performance involves a longer-term examination of quality data. Under this strategy, a quality improvement team would assess something like surgical site infection rates before the education takes place, then assess it again quarterly after the education takes place. This allows educators to measure the effectiveness of the educational offering.
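As one illustration of that strategy, the sketch below compares a baseline surgical site infection rate with rates from the quarters following the education. The numbers, variable names and quarterly cadence are assumptions made for illustration only, not data or a method drawn from the article.

```python
# Minimal sketch: comparing a baseline quality measure with quarterly
# post-education values. The infection rates below are made-up numbers
# used only to show the shape of the before/after comparison.

baseline_ssi_rate = 0.042  # assumed surgical site infection rate before the education
quarterly_ssi_rates = [0.039, 0.035, 0.031, 0.030]  # assumed Q1-Q4 rates after the education

for quarter, rate in enumerate(quarterly_ssi_rates, start=1):
    change = (rate - baseline_ssi_rate) / baseline_ssi_rate * 100
    print(f"Q{quarter}: {rate:.1%} SSI rate ({change:+.1f}% vs. baseline)")
```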

Subjective Measurement

The second way to evaluate competence and performance in the continuing healthcare education setting is subjective measurement. Subjective measurement always involves self-reporting.

One subjective measurement technique involves asking the learner to fill out an evaluation form rating how well they can perform the skill before the educational activity, then again after the activity. This assessment sheds light on the learner's self-reported change in competence.
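For educators who want to summarize those pre/post ratings quantitatively, a simple tally of per-learner change and the share of learners who improved may be all that is needed. The sketch below is a minimal, hypothetical example; the field names and the 1-5 rating scale are assumptions, not part of any standard evaluation form.

```python
# Minimal sketch: summarizing self-reported pre/post competence ratings.
# Assumes each learner rated their ability on a 1-5 scale before and after
# the activity; field names and scale are illustrative, not a standard.

from statistics import mean

responses = [
    {"learner": "A", "pre": 2, "post": 4},
    {"learner": "B", "pre": 3, "post": 3},
    {"learner": "C", "pre": 1, "post": 4},
]

changes = [r["post"] - r["pre"] for r in responses]
improved = sum(1 for c in changes if c > 0)

print(f"Mean change in self-rated competence: {mean(changes):.2f}")
print(f"Learners reporting improvement: {improved}/{len(responses)}")
```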

Similarly, surveys can be distributed three and six months after the education to ask clinicians whether they are applying the skills gained during the education in the patient care setting. “They come back to us with their self-reported performance with answers such as, ‘I was able to do this 100% of the time with my patients,’ or ‘I intended to implement this in the patient care setting, but I could not,’” Symons says. “Then we ask them to tell us what barriers to implementation they had.”

An Important Consideration

Before educators can even start to measure outcomes from their educational interventions, whether objectively or subjectively, there is one major factor to consider, according to Symons.

“Question which outcomes model you are going to use,” Symons says. “Will you be using the Moore, Green, Gallis model? Or are you going to use the ACCME model?”

This is important because the two models diverge slightly on one point. Under the Moore, Green, Gallis outcomes model, simulation is used to measure competence. But the Accreditation Council for Continuing Medical Education (ACCME) allows for performance to be measured via simulation, “provided that the simulation includes assessment of the learner’s or learners’ actions, behaviors and skills,” according to a FAQ on the ACCME website.

Symons, describing this as “a gray area for us in the profession,” urges educators to know which outcomes model they will use before they start to measure the impact of the education. Doing so will ensure educators know what they are measuring and how to evaluate the clinicians. It also will help the educators’ organizations secure and maintain their accreditation status.

“It’s important to ask which model we are using,” Symons concludes.


Annabel Steele is an editorial senior associate for the Almanac.

Keywords: Measurement and Evaluation

