
At the Alliance Industry Summit lightning session, "Leveraging AI to Translate Faculty Insights for IME," presenters examined how qualitative feedback from faculty and learners can be systematically analyzed to inform educational planning. This article summarizes key takeaways from the session, focusing on how artificial intelligence can support the analytical review of faculty discussions, learner-submitted questions and post-activity debriefs. While such feedback often contains signals of unmet educational needs, these signals are frequently diffuse, implicit and embedded within narrative commentary, limiting their direct utility without a structured analytic approach.
In independent medical education (IME) planning, qualitative feedback is often summarized descriptively rather than subjected to systematic analysis designed to identify and prioritize educational gaps. As AI tools are incorporated into qualitative workflows, the importance of clearly defined analytic frameworks and educational intent becomes more pronounced. Without these elements, AI-assisted analyses may yield surface-level thematic summaries with limited planning utility. The key consideration, therefore, is not AI capability, but how IME professionals structure its use to support analytic rigor and accountability.
The session presenters agreed that IME professionals could use AI as an analytical assistant to organize and synthesize faculty discussion points and learner-submitted questions for course planning and development, rather than as a content author or an educational decision-maker. The presenters also emphasized that, while the output of AI tools reflects the educational intent established by planners, it remains the responsibility of professionals, not automated systems, to interpret, prioritize and validate educational needs.
Prompt Design as an Educational Method
A key insight from the session was that prompt design acts as an educational translation tool. In IME, prompts do not merely instruct a tool; they encode analytic purpose, context and planning relevance prior to AI interaction. When prompts lack structure or intent, AI outputs tend to default to high-level summaries. Although often accurate, these outputs fail to distinguish between descriptive themes and planning-relevant educational gaps. The session showed that effective prompt design requires clearly defining:
- The analytic task
- The qualitative data source and educational context
- The intended use of the output
- The desired structure and tone of the response
This approach does not create new analytical work; instead, it formalizes existing educational judgment so that AI can assist with qualitative synthesis in a consistent, reviewable and planning-ready manner.
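As a concrete illustration, the minimal Python sketch below assembles the four elements above into a single structured prompt. The session did not prescribe specific prompt language, so the task wording, data description, intended use and formatting requirements shown here are hypothetical placeholders that an IME team would replace with its own.

```python
# Minimal sketch (hypothetical wording): assembling the four prompt
# elements named above into one structured, reviewable prompt.

ANALYTIC_TASK = (
    "Identify recurring, planning-relevant educational gaps in the "
    "qualitative feedback below. Distinguish gaps from general themes."
)

DATA_CONTEXT = (
    "Source: verbatim faculty debrief notes collected after a live "
    "CME symposium; the educational setting is postgraduate clinicians."
)

INTENDED_USE = (
    "The output will inform an internal needs assessment and will be "
    "reviewed and validated by IME professionals before any planning use."
)

OUTPUT_SPEC = (
    "Return a numbered list. For each gap, give a one-line label, a short "
    "rationale, and one supporting quote. Use a neutral, analytic tone."
)

def build_prompt(feedback_text: str) -> str:
    """Combine the four encoded elements with the raw feedback."""
    return "\n\n".join([
        f"Task: {ANALYTIC_TASK}",
        f"Data and context: {DATA_CONTEXT}",
        f"Intended use: {INTENDED_USE}",
        f"Output requirements: {OUTPUT_SPEC}",
        f"Feedback to analyze:\n{feedback_text}",
    ])

if __name__ == "__main__":
    print(build_prompt("Faculty noted repeated learner confusion about ..."))
```

Because each element lives in its own named constant, the encoded intent can be reviewed, versioned and reused across activities rather than rewritten ad hoc for every analysis.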
Applying the INSIGHT Framework
To address concerns about efficiency and overreliance on AI-generated clarification, the session used the INSIGHT framework to structure early encoding of educational intent and reduce later reinterpretation. INSIGHT is a repeatable methodology for translating qualitative educational judgment into AI-executable prompts:
- Instruction – Clearly define the AI’s task.
- Narrative – Specify the data source and educational setting.
- Structure – Define output format, length, and organization to support planning review.
- Illustration – Request concrete themes or examples rather than abstract summaries.
- Guidance Role – Clarify the purpose of the output.
- Human Tone – Set a professional, analytic tone appropriate for internal educational planning.
By explicitly encoding educational intent across these dimensions, INSIGHT enables AI outputs that are interpretable, reviewable, and suitable for defensible IME planning decisions without requiring AI to infer purpose or generate its own analytic direction.
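To show how INSIGHT might be operationalized in practice, the sketch below encodes its dimensions as fields of a small Python dataclass that renders into a prompt. This is an illustrative assumption, not a template from the session: the field contents are hypothetical, and the structure simply mirrors the framework described above.

```python
# Minimal sketch: the INSIGHT dimensions as an explicit, reviewable
# prompt template. All example wording is hypothetical.
from dataclasses import dataclass

@dataclass
class InsightPrompt:
    instruction: str    # I: the AI's task
    narrative: str      # N: data source and educational setting
    structure: str      # S: output format, length, organization
    illustration: str   # I: concrete themes/examples, not abstractions
    guidance_role: str  # G: purpose of the output
    human_tone: str     # H/T: professional, analytic tone

    def render(self) -> str:
        """Render the encoded educational intent as one prompt."""
        parts = [
            ("Instruction", self.instruction),
            ("Narrative", self.narrative),
            ("Structure", self.structure),
            ("Illustration", self.illustration),
            ("Guidance role", self.guidance_role),
            ("Tone", self.human_tone),
        ]
        return "\n".join(f"{label}: {text}" for label, text in parts)

# Hypothetical usage for a post-activity faculty debrief:
prompt = InsightPrompt(
    instruction="Synthesize the debrief notes into candidate educational gaps.",
    narrative="Notes come from faculty after a live oncology CME activity.",
    structure="A table with columns: gap, supporting evidence, confidence.",
    illustration="Quote specific comments; avoid abstract theme labels.",
    guidance_role="Drafts input for a human-led needs assessment review.",
    human_tone="Neutral and analytic, for internal planning use only.",
).render()
print(prompt)
```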
Why AI Should Not Generate Its Own Prompts
One of the most common questions in AI-enabled workflows is whether AI can develop its own prompts to improve clarity and efficiency. In an accredited IME planning process, however, adding another layer of ambiguity to the analysis may introduce more confusion than clarity. When a system is allowed to develop its own prompts, it must make assumptions about the educational purpose, audience and standards of education (i.e., learning objectives, outcomes framework, professional norms), which it cannot accurately infer.
As a result, AI-generated prompts typically produce general summaries rather than meaningful analyses. If data developed in earlier workflow stages later require clarification or reinterpretation, the analytical burden shifts downstream, and the resulting misalignment increases the risk of invalidating the needs assessment or the justification for the grant award.
The session emphasized that genuine efficiency is achieved through deliberate human involvement at two points:
- Upstream, by encoding educational intent into structured prompts.
- Downstream, by reviewing AI outputs for accuracy, contextual fidelity, and planning appropriateness.
Human Review as a Validity Safeguard
Even well-designed prompts do not remove the need for professional review of AI output. A major emphasis of the session was that human validation is more than a formal requirement: it is an essential safety net that provides quality assurance and accountability in education, ensuring that AI-synthesized insights accurately represent the input of both faculty and learners and are applicable to needs assessments, educational planning and outcomes evaluation.
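One way to make this downstream review step concrete and auditable is a structured sign-off record that a reviewer completes before any synthesized finding is used in planning. The sketch below is an illustrative assumption, not a method described in the session; the field names and review criteria mirror the accuracy, contextual fidelity and planning-appropriateness checks discussed above.

```python
# Minimal sketch (illustrative only): a structured sign-off record for
# downstream human review of AI-synthesized findings.
from dataclasses import dataclass

@dataclass
class ReviewRecord:
    finding: str                 # the AI-synthesized insight under review
    accurate: bool               # faithful to the source feedback?
    contextually_faithful: bool  # consistent with the educational setting?
    planning_appropriate: bool   # usable for needs assessment and planning?
    reviewer: str = ""
    notes: str = ""

    def approved(self) -> bool:
        """Only fully validated findings move into planning documents."""
        return (self.accurate
                and self.contextually_faithful
                and self.planning_appropriate)

record = ReviewRecord(
    finding="Faculty report learner uncertainty about guideline-directed dosing.",
    accurate=True,
    contextually_faithful=True,
    planning_appropriate=True,
    reviewer="IME planner",
)
print(record.approved())  # True
```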
Training and Skill Development
The session framed prompt design not as a standalone technical skill, but as an applied professional competency that develops through structured use within existing IME workflows. This framing is consistent with established guidance on human-AI interaction, which emphasizes that effective AI use depends on clear task definition, contextual grounding, and iterative human review rather than on technical automation alone.¹
It also aligns with guidance from the Alliance AI Committee, which underscores the importance of human oversight, domain expertise, and critical evaluation when integrating generative AI into continuing healthcare education.² Together, these sources support the session's conclusion that IME professionals develop prompt design skills most effectively through embedded, practice-based application within routine planning and evaluation activities.
References
1. Amershi S, Weld D, Vorvoreanu M, et al. Guidelines for Human-AI Interaction. Proc CHI Conf Hum Factors Comput Syst. 2019. doi:10.1145/3290605.3300233
2. Alliance AI Committee. AI in Continuing Healthcare Education: An Update from the Alliance AI Committee. Almanac of Continuing Education in Health Professions. https://almanac.acehp.org/Podcasts/Podcasts-Article/ai-in-continuing-healthcare-education-an-update-from-the-alliance-ai-committee
Disclosure: OpenAI’s ChatGPT was used during the preparation of this manuscript to assist with language refinement, formatting, and clarity of expression. All information was generated, reviewed, and validated by the authors, who take full responsibility for the accuracy, integrity and originality of the work.

Leen Alyaseen, PharmD, MBA, is a second-year healthcare education and outcomes research fellow at Decera Clinical Education.

Sarah A. Nisly, PharmD, MEd, BCPS, FCCP, is senior vice president, outcomes and insights, at Decera Clinical Education.