Worst Practices for Writing CME Needs Assessments: Results from a Survey of Practitioners

By Donald Harting, MA, MS, ELS, CHCP1; and Andrew Bowser, ELS, CHCP2

1Harting Communications LLC, Downingtown, PA; 2iconCME, Narberth, PA

Editor’s note: 

This original research article examines ways in which to improve the quality and clarity of Needs Assessment (NA) reports. It is being published simultaneously by the Alliance for Continuing Education in the Health Professions (Alliance) in the Almanac and by the American Medical Writers Association (AMWA) in the AMWA Journal. To provide context and interpretation of the research results pertinent to their respective readers, both publications have solicited Commentaries from their audience to accompany the main article.  

Original Research

Abstract: Background and Aims: Needs assessments (NAs) are commonly developed to identify gaps in the knowledge, competence, performance and confidence of healthcare providers and to guide the development of continuing education activities designed to remedy these deficiencies. Although best practices of NA development have been thoroughly described, little work has been done to evaluate poor or unprofessional practices that may compromise their value or validity. We sought to describe these practices with a survey primarily targeted toward individuals who develop NAs.

Methods: Respondents to an annual survey were prompted to describe unprofessional or poor practices that they had observed in NAs developed by other writers. Responses were categorized by 2 independent reviewers.

Results: A total of 104 individuals submitted responses to the survey. Of those, 67 included write-in responses describing poor practices. The most common poor practices were related to sources and referencing (19 responses), whereas other commonly cited poor practices included irrelevance or poor focus; organization, coherence and readability; and plagiarism, fabrication or bias. Specific quotations from write-in responses are provided in this article.

Conclusion: Despite available resources that outline and teach best practices in writing CME NAs, writers continue to struggle with referencing, organization, coherence and readability. This may present an opportunity for the industry to consider new best practices that would encourage standardization and eliminate some of the poor practices described here.

We have been conducting a multiyear research project aimed at identifying best practices in writing and editing needs assessments (NAs) for continuing education in the health professions, including continuing medical education (CME). Compliance criteria promulgated by the Accreditation Council for Continuing Medical Education (ACCME) require all accredited CME providers to design educational activities to address deficits in knowledge, competence or performance that underlie professional practice gaps.1 Needs assessments are widely used by publishing and education companies within the accredited CME system to identify these deficits. For example, an assessment of need typically appears as a single section within a larger request for commercial support submitted to one or more pharmaceutical companies (Figure 1). The task of developing the NA often falls to an in-house or freelance medical writer, whereas proposal assembly, editing and submission are usually handled by staff employees. As commercial support increases within the ACCME system (Figure 2), so does the number of NAs required to support a growing number of funding requests. Needs assessments can vary in length from less than a page to more than 10 pages depending on the number of gaps, the quantity of supporting evidence and the resources available.


Figure 1. Components of a typical CME grant proposal. CME, continuing medical education.


Figure 2. Growth of commercial support for accredited CME. CME, continuing medical education. (Source: Accreditation Council for Continuing Medical Education)

Our research into best practices in NA development originated in 2011 with a small pilot study analyzing a convenience sample of NAs written by various authors and collected from several sources, including a roundtable conducted at a freelance writers conference hosted by the Delaware Valley Chapter of the American Medical Writers Association (AMWA). A considerable amount of variation was noted in the sources of evidence used in these NAs, how the evidence was presented and how it was cited. Unwarranted variation in a healthcare-related process can be a sign of poor quality2,3 and, as stated in the professional literature,4 effective continuing education begins with a high-quality NA (Figure 3). Thus, we sought to explore this variation further in surveys targeted to writers of NAs.

These surveys have been conducted annually since 2014 for a total of five surveys to date. We have previously published posters,5-8 workshop slide decks,9,10 a journal article11 and a downloadable tutorial12 disseminating best practices. In 2018, for the first time, survey respondents were invited to describe any poor or unprofessional practices they had noticed in NAs written by others. This article presents our first discussion of “worst practices,” based on analysis of those write-in responses.


Figure 3. The continuing education cycle. (Graphic courtesy of AMWA Journal.)


Methods

The fifth annual survey of best practices for writing CME NAs was developed in SurveyMonkey and promoted to fellow members of AMWA and the Alliance for Continuing Education in the Health Professions (ACEHP), mostly via Twitter and LinkedIn, between October 5 and 19, 2018. The survey link was also sent via email to previous years’ respondents and to anyone else within the authors’ professional networks who had written at least several NAs and expressed interest in the past year. In addition, AMWA, the Delaware Valley Chapter of AMWA, and the Mid-Atlantic Alliance for CME helped promote the survey to their members.

The first two questions of the survey provided us with demographic data, and most of the other questions were designed to capture data on best practices of NA development. One open-ended question was included to elicit responses on worst practices: “What unprofessional or poor practices, if any, have you noticed while reviewing needs assessments written by others that might be appropriate for future survey research?” Responses to this question were entered into a spreadsheet and provided to two reviewers: a past president of ACEHP (Robert L. Addleton, EdD [Reviewer 1]) and the current president of AMWA (Cynthia L. Kryder, MS [Reviewer 2]). The reviewers were unknown to each other and worked separately to sort the 67 responses into categories defined for them in advance (Table 1). In cases in which a survey respondent combined ≥2 poor practices into a single response, the reviewers were instructed to select the category that best described the most salient problem.
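When two reviewers independently sort the same responses into predefined categories, their agreement can be quantified with a chance-corrected statistic such as Cohen's kappa. The following sketch is illustrative only; the category labels and data are hypothetical, not the study's actual results.

```python
# Sketch: chance-corrected agreement (Cohen's kappa) between two
# reviewers who each assigned one category per response.
# The labels below are hypothetical examples, not study data.
from collections import Counter

def cohens_kappa(labels1, labels2):
    assert len(labels1) == len(labels2)
    n = len(labels1)
    # Observed proportion of responses on which the reviewers agree.
    observed = sum(a == b for a, b in zip(labels1, labels2)) / n
    # Expected agreement if each reviewer assigned categories at
    # random according to their own observed category frequencies.
    c1, c2 = Counter(labels1), Counter(labels2)
    expected = sum(c1[c] * c2[c] for c in set(c1) | set(c2)) / n**2
    return (observed - expected) / (1 - expected)

r1 = ["Referencing", "Grammar", "Plagiarism", "Referencing", "Focus"]
r2 = ["Referencing", "Grammar", "Plagiarism", "Focus", "Focus"]
print(round(cohens_kappa(r1, r2), 2))  # → 0.74
```

A kappa near 1 indicates agreement well beyond chance; with only two reviewers and a single question, such a statistic would still carry the interpretive caveats the authors note in their limitations.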


Results

A total of 104 survey responses were received. Respondents were roughly balanced between freelances (50%) and staff employees (44%), with freelances holding a slight plurality (Figure 4).


Figure 4. Breakdown of freelances compared with staff employees among survey respondents.

More than half of respondents (61%; N=102) had written at least 26 NAs in their careers; 45% had written more than 50. A total of 67 responses described poor practices. Some responses were simple, focusing on a single poor practice, such as “gap statements that are not supported by evidence.” Other responses combined multiple poor practices, such as “poor grammar/formatting, poor narrative structure, too much industry influence, lack of educational outcomes data or heavy reliance on outcomes data at the expense of current science.” The spreadsheet with all 67 responses can be found here. Results as sorted by the two reviewers are shown in Table 1.

Table 1. Worst Practices Sorted into Categories

Category | Reviewer 1 | Reviewer 2
Grammar Issues | |
Sources and Referencing | |
Outdated Information | |
Organization, Coherence and Readability | |
Plagiarism, Fabrication and Bias | |
Irrelevance or Poor Focus | |
Other | |

The following verbatim responses are illustrative:

  1. “Atrocious grammar!” (Grammar Issues)
  2. “Insufficient references,” “Lack of citation,” and “Not having strong enough support for gaps in education” (Sources and Referencing)
  3. “Cites outdated research or fails to acknowledge new developments that discredit previous findings” (Outdated Information)
  4. “Poor writing skills, e.g., organization, crafting sentences” (Organization, Coherence and Readability)
  5. “Plagiarism,” “Spinning the NA to favor the potential grantor’s product,” “Making up faculty quotes, making up outcomes data” (Plagiarism, Fabrication and Bias)
  6. “Data dump that does not get to the actual gaps in clinical practice or why there is an unmet need” (Irrelevance or Poor Focus)
  7. “Lack of examination of clinician attitudes/beliefs” (Other)


Discussion

This article briefly describes our first-ever analysis of worst practices in NA development in the five-plus years we have been researching this topic. The most commonly cited problem category was Sources and Referencing. In four of the five previous years’ surveys, respondents reported that the medical literature review is the most essential source of evidence in the NA. Because the heart of any literature review is the reference list, deficits in sources and referencing may suggest that an inexperienced, rushed or sloppy writer had trouble finding valid data, identifying sources or compiling the reference list in a clear and orderly manner. These are all problems the reader may notice if an editor does not identify and correct them. Conversely, even skilled and experienced medical writers may underperform if given too little lead time, low-quality templates to follow or vague editorial direction.

In either case, a skilled and meticulous researcher who is able to work quickly and effectively with a client to overcome obstacles, identify a bona fide educational need and marshal detailed evidence to support it adds great value to the process. For this reason, a face-to-face workshop or online exercise aimed at mastering the skill of conducting a high-quality literature review may be helpful. Based on survey respondents’ comments, this workshop could also include instruction on proper appraisal of clinical study results, along with tips for extracting relevant data and using them to support statements of educational need. A separate workshop, aimed at assigning editors, medical directors and other individuals who hire freelance writers, may also be useful; this session might include instruction on ways to work more effectively with freelance medical writers, and topics might include facilitating two-way communication, setting reasonable deadlines, providing editorial direction and support and incorporating evidence-based best practices into proprietary templates.

Reports of plagiarism, fabrication and commercial bias are troubling, given past efforts by the ACCME, the US Congress, the Josiah Macy Jr. Foundation, the Institute of Medicine and other stakeholders to protect the integrity of continuing education in the health professions.13 Independence is the cornerstone of accredited continuing education; without it, clinicians lose their ability to teach and learn free from commercial influence.14 Thus, in light of our findings, it would appear that the ACCME’s current effort to revisit this issue is necessary and timely. Spot-checking of a random sample of CME NAs would be illuminating and represents a potential further avenue for research.

There are several limitations to this study. First, this was not a random sample of writers. The respondent pool may have been biased toward members of the investigators’ professional networks, most of whom live in the eastern United States. Unlike in the prior year, the 2018 survey link was not promoted by ACEHP staff to members nationwide, so the respondent pool may also be biased in favor of AMWA members. Second, the fact that reviewers could only assign a single category to a lengthy response containing ≥2 poor practices introduced an extra measure of subjectivity to the analysis. Third, the fact that only two reviewers were recruited makes it difficult to interpret the significance of the remarkable similarity of their analyses. Fourth, the survey contained only a single question about poor practices; as a result, we obtained descriptive examples of poor practices but did not delve deeper to identify reasons behind the poor practices or inquire about ways to address them. Fifth and finally, we obtained secondhand observations of problems noted by respondents at some point in the past; accuracy would have been greater if we had audited a random sample of NAs.

These limitations notwithstanding, some inferences may be drawn. Both reviewers were invited to submit comments. In a note accompanying her review, Ms. Kryder wrote, “Despite available resources and hands-on workshops that describe best practices in writing CME needs assessments, these survey data show that writers continue to struggle with referencing, organization, coherence and readability. This presents an opportunity for the CME industry to adopt a structured template that writers can use when developing needs assessments, similar to templates already in use in the regulatory writing setting. Such standardization may eliminate some of the poor practices…and enable writers to more successfully develop an organized and readable narrative that identifies educational gaps that are clearly supported by evidence.”

Author contact: don@hartingcom.com, abowser@gmail.com


References

1. Criterion 2. Accreditation Council for Continuing Medical Education website. https://www.accme.org/accreditation-rules/accreditation-criteria/criterion-2. Published December 2018. Accessed May 5, 2019.

2. Sipkoff, M. 9 Ways to reduce unwarranted variation. Managed Care. https://www.managedcaremag.com/archives/2003/11/9-ways-reduce-unwarranted-variation. Published November 1, 2003. Accessed March 9, 2019.

3. Ferguson, J. Reducing unwarranted variation in healthcare clears the way for outcomes improvement. Health Catalyst. https://www.healthcatalyst.com/Reducing-Variation-in-Healthcare-to-Boost-Improvement. Published February 7, 2017. Accessed March 9, 2019.

4. Queeney DS. Assessing Needs in Continuing Education: An Essential Tool for Quality Improvement. 1st ed. San Francisco, CA: Jossey-Bass; 1995.

5. Harting D, Bowser AD. Best practices for writing CME needs assessments: 2018 survey results. Poster presented at annual meeting of the Alliance for Continuing Education in the Health Professions; January 2019; National Harbor, MD. https://drive.google.com/file/d/1A523OTLZaI_TT-jfrkIwSZxw6CyJEShv/view. Accessed March 9, 2019.

6. Harting D, Bowser A. Best practices for writing CME needs assessments: 2017 survey results. Poster presented during annual meeting of the American Medical Writers Association; November 2018; Washington, DC. https://drive.google.com/file/d/1N7kO_StGYmgTwfkaxY_7mfoR6O96TCzw/view. Accessed March 9, 2019.

7. Harting D, Molnar-Kimber K. Best practices for writing CME needs assessments. Poster presented during annual meeting of the Mid-Atlantic Alliance for CME; November 2016; Allentown, PA. https://drive.google.com/file/d/0B5BtKN9Nc3UYNDVvWnkwX1VzODQ/view. Accessed March 9, 2019.

8. Harting D, Vakil R. Survey of best practices for writing and editing CME needs assessments. Poster presented during annual meeting of the Alliance for Continuing Education in the Health Professions; January 2016; National Harbor, MD. http://b54356eed47bd9a61d65-b3471301acd784ff429c3eefa6028222.r94.cf1.rackcdn.com/214935-2400px.png. Accessed March 9, 2019.

9. Harting D, Molnar-Kimber K, Turner N. Best practices for writing and editing CME needs assessments. Slide deck presented as a workshop during the 2017 annual meeting of the American Medical Writers Association; 2017; Orlando, FL. https://drive.google.com/file/d/1izrYzrnZVu_8cn0DUjrkVfJ5vxbKdezO/view. Accessed March 9, 2019.

10. Vakil R, Harting D. Best practices for writing and editing needs assessments. Slide deck presented as a workshop during the 2015 annual meeting of the Alliance for Continuing Education in the Health Professions; 2015; Dallas, TX. https://drive.google.com/file/d/0B8IUqmckAV_oc1pDYVBhQW5vVUE/view. Accessed March 9, 2019.

11. Harting D, Turner N. Best practices for writing and editing CME needs assessments: 2014 and 2015 results from surveys of practitioners. AMWA Journal. 2016;31(3):128-131. https://drive.google.com/file/d/1dICNxF03kZQJyNBoV8GxtCVlEk_40K0w/view. Accessed March 9, 2019.

12. Vakil R, Harting D. Best practices for writing CME needs assessments. Downloadable tutorial. Rockville, MD: American Medical Writers Association; 2015. https://drive.google.com/file/d/0B5BtKN9Nc3UYclRmZHU1U1lzYVE/view. Accessed March 9, 2019.

13. Institute of Medicine. Redesigning Continuing Education in the Health Professions. Washington DC: National Academies Press; 2010. https://www.nap.edu/catalog/12704/redesigning-continuing-education-in-the-health-professions. Accessed May 6, 2019.

14. Accreditation Council for CME opens call for feedback about protecting the integrity and independence of accredited continuing education. Chicago, IL: Accreditation Council for Continuing Medical Education; January 22, 2019. http://www.accme.org/news-releases/my-SCS-feedback. Accessed March 7, 2019.


Needs Assessments – CE Provider Commentary

By Theodore Bruno, MD, FACEHP / The France Foundation, Old Lyme, CT

In their most recent survey (above), Harting and Bowser asked participants an intriguing open-ended question about poor or unprofessional practices they had noticed in needs assessments written by others. The two independent reviewers, each of whom sorted the 67 responses into the predefined categories, were very well aligned. The authors do note that the observations are all secondhand, recalled from some point in the past. Additionally, the qualifications of the writers whose work is in question are unknown.

Survey respondents noted a number of concerns that reflect overall poor writing skills (grammar, organization, coherence, readability, sources and referencing); perhaps the writers who responded perceive their own writing skills to be better than those of others they have seen. Other concerns seem to stem from a lack of understanding of the content area (irrelevance or poor focus, along with outdated information); although this can easily happen given the rapidly expanding body of medical information, it is not an excuse. Although the specifics are lacking, the more concerning comments are of course those referring to plagiarism, fabrication and bias.

The authors have noted some key limitations of this survey and its results. Another consideration: shouldn’t those who read the needs assessments, the commercial supporters, be the ones to assess their quality? Although there has been some discussion on panels that included supporters at meetings of the Alliance for Continuing Education in the Health Professions (Alliance), the authors have not cited any published surveys of the supporter audience. Since 2009, the Alliance member section, the Industry Alliance for Continuing Education (IACE), has conducted an annual Benchmarking Survey to gain insight into trends and standards in commercial support of independent medical education, with results reported by its Benchmarking Working Group (BWG). To date, that survey has not extensively assessed the quality of needs assessments within grants (personal communication with a BWG member). It would be valuable to see such a comparison survey conducted within this member section of the Alliance. Until then, comments are welcome from commercial supporters who sit on grants committees and read many needs assessments and grants.

In their discussion, the authors suggest that educational opportunities for writers and others on how to conduct a high-quality literature review might be useful. The needs assessment is only one part of an overall grant and must align with the other parts, including the overall goals and objectives of the initiative, the educational tactics being recommended and what will be assessed as part of the outcomes plan.

As the authors have noted, the most disturbing results of this survey are the suggestions of plagiarism, fabrication and commercial bias. What is not clear about these worst-case examples is when they occurred: were they noted many years ago or more recently? If recent, this is of even greater concern given today’s emphasis on ensuring independence from commercial interests. I think it is also necessary to stress that ultimate responsibility for any proposed continuing education initiative (including the needs assessment and the entire grant) rests with the accredited provider. Even if the work is contracted out to another entity or person, the provider needs to ensure proper understanding, maintain processes for editorial review and provide oversight of this important step in the process. This is well documented in the Accreditation Council for Continuing Medical Education (ACCME) accreditation criteria that address the needs, formats and independence of all certified educational activities.

The ideas raised by Harting and Bowser about the need for a structured template, as well as for spot checks of continuing education needs assessments (and perhaps entire grants), are interesting to consider. However, just like the practice of medicine, a well-crafted needs assessment involves both an art and a science. It takes practice, time and feedback. Moreover, similar to clinical practice, what we lack is feedback and coaching on our needs assessments.

Author contact: tbruno@francefoundation.com
