Prompting With Purpose
Wednesday, July 9, 2025


By: Andrew Crim, M.Ed., CHCP, FACEHP; and Núria Waddington Negrão, PhD

Introduction: The Rise of Advanced Reasoning Models (ARMs) in Generative AI

In early 2025, leading AI providers such as OpenAI, Google and Anthropic introduced new generative AI models adept at advanced reasoning. These Advanced Reasoning Models (ARMs) fundamentally differ from earlier Large Language Models (LLMs). While LLMs primarily predicted the next word based on surface patterns (akin to sophisticated auto-complete), ARMs engage in a deeper process: they analyze the entire prompt, create an internal plan or outline, verify constraints and then generate a response. This planning phase enhances factual accuracy, maintains structure and allows the model to adhere to complex instructions, such as citation rules or regulatory limits. Consequently, ARMs produce more complex, near-final-quality output, significantly reducing subsequent editing time.

Why Prompting Needs to Change for ARMs

The advent of ARMs necessitates new prompting techniques. Previous LLMs performed best with short, direct prompts (e.g., "Write three learning objectives on obesity") and with iterative, "chain-of-thought" prompting, in which users elicited step-by-step reasoning and refined results through dialogue. This worked because older models relied on pattern matching and statistical prediction. ARMs, however, benefit from prompts that align with their internal planning process.

Examples of Current ARMs (as of writing):

At the time of writing, ARMs are reserved for paid plans on most LLM platforms, but this is likely to change. Paid plans often support uploading files for the “Content Dump” phase. Some of the most popular ARMs include:

  • OpenAI’s o3 (with or without deep research), and o3-mini/o4-mini
  • Google Gemini 2.5 Flash/Pro (with or without deep research)
  • Anthropic Claude 3.7/4 Sonnet
  • Perplexity Pro-Search
  • Microsoft Copilot “Think Deeper” feature

The ‘Goal → Return Format → Warnings → Content Dump’ Framework

Ben Hylak, founder of the AI company Dawn, developed the “Goal → Return Format → Warnings → Content Dump” model, which OpenAI President Greg Brockman reposted, highlighting it as a way to improve prompt effectiveness. Each component serves a distinct function in a prompt.

Goal

  • Purpose: One clear task expressed in one sentence.
  • Best practices: Focus on the outcome, not the method or tool. Clearly articulate what you want to accomplish, using a single action verb; avoid "and."
  • Example: Draft three measurable learning objectives for an obesity workshop.

Return Format

  • Purpose: Blueprint for structure, length and tone; this is where the finished product is described.
  • Best practices: Specify headings, lists, tables and word limits. Provide examples if layout is critical. Align with style guides.
  • Example: Bullet list; 15 words max each; use higher-order Bloom's verbs; bold each verb; vary sentence length.

Warnings

  • Purpose: Non-negotiables and guardrails; protects against known or anticipated issues.
  • Best practices: List each warning on a new line. Include reading level, inclusion criteria or policy pointers (e.g., ACCME, HIPAA).
  • Example: No trade names. Grade-10 reading level. Cite references in AMA style.

Content Dump

  • Purpose: Full reference material; gives the model everything it needs without adding too much or omitting important information.
  • Best practices: Paste or attach full data (not summaries). Label uploads or sections clearly (e.g., "Survey Data," "Guideline PDF"). Maintain logical source order.
  • Example: 2023 AHA guideline excerpt; participant gap survey (CSV); prior workshop objectives (for reference).
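
Taken together, the four components form a fill-in-the-blank template. For teams that assemble prompts programmatically, the short Python sketch below shows one possible way to hold and render the components in the model-native order. The class and field names are our own illustration (not part of any published tool), and the example values come from the table above:

from dataclasses import dataclass

@dataclass
class FrameworkPrompt:
    """Illustrative container for the four framework components."""
    goal: str           # one clear task, one sentence
    return_format: str  # structure, length and tone of the finished product
    warnings: str       # non-negotiables, one per line
    content_dump: str   # full, clearly labeled reference material

    def render(self) -> str:
        # Keep the model-native order: Goal -> Return Format -> Warnings -> Content Dump.
        return "\n\n".join([
            "Goal: " + self.goal,
            "Return format: " + self.return_format,
            "Warnings:\n" + self.warnings,
            "Content dump:\n" + self.content_dump,
        ])

prompt = FrameworkPrompt(
    goal="Draft three measurable learning objectives for an obesity workshop.",
    return_format="Bullet list; 15 words max each; use higher-order Bloom's verbs.",
    warnings="No trade names.\nGrade-10 reading level.\nCite references in AMA style.",
    content_dump="Survey Data: [paste raw survey results]\nGuideline excerpt: [paste text]",
).render()
print(prompt)

Because each component is a separate field, a single Warnings line can be tweaked without touching the rest, which supports the experimentation loop described below.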

Why This Framework Outperforms Previous Prompting Frameworks

  1. Model-Native Sequence: “Goal → Format → Warnings → Content Dump” mirrors ARMs’ processing path, which is “plan → shape → constrain → fill”. This closer alignment improves accuracy.
  2. First-pass Compliance: Embedding warnings minimizes errors like off-label references or privacy issues from the start, reducing iterative correction.
  3. Transparent Audit Trail: Storing the full prompt alongside the output creates a clear audit trail for teams and leaders.
  4. Simplified Collaboration: Provides a structured template for team members or faculty to contribute notes, improving draft quality when their input is used in the Content Dump.
  5. Continuous Improvement Loop: The modular structure allows for easy experimentation (e.g., tweaking Warnings) and measurement, fitting well with PDSA cycles.

As with any prompting strategy and AI-generated output, it is crucial to review ARM output for accuracy, completeness and validity. This framework turns prompting from trial and error into a repeatable, efficient workflow, yielding better first drafts faster, but it is not infallible.
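
For teams that script this workflow, the assembled prompt can be sent to an ARM in a single call. Below is a minimal sketch assuming the OpenAI Python SDK (openai >= 1.0) and an o-series reasoning model; the model name is illustrative, and other providers' SDKs follow the same request-response pattern:

from openai import OpenAI

def run_prompt(prompt: str, model: str = "o3-mini") -> str:
    """Send an assembled framework prompt to a reasoning model and return its text."""
    client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set
    response = client.chat.completions.create(
        model=model,  # illustrative; substitute whichever ARM your plan provides
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

Saving the prompt string and the returned text together is what creates the transparent audit trail described in point 3 above.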

Below are practical examples of the “Goal → Return Format → Warnings → Content Dump” prompting format, followed by an exercise to practice and grow competence in prompting.

Practical CME/CPD-focused Use Cases (Examples Using the Framework Structure)

  1. Needs Assessment Synthesis
    • Goal: Create a two-page narrative linking learner gaps to proposed interventions.
    • Return Format: H1: Executive Summary (≤120 words); H2: Key Gaps (bullets); H3: Evidence Table (CSV-ready); H4: Proposed Activities (bullets).
    • Warnings: Define all abbreviations. Use AMA citation style.
    • Content Dump: Raw survey data, PubMed abstracts, ACCME Criterion map.
    • Benefit: Structures evidence for easy transfer to grant portals.
  2. Faculty Slide Drafts
    • Goal: Draft slide text for an eight-slide deck on transfusion thresholds in obstetrics.
    • Return Format: Markdown list using "Slide X:" headers; 25-word limit per slide.
    • Warnings: No brand names; follow American College of Osteopathic Obstetricians and Gynecologists (ACOOG) terminology; include in-slide citations.
    • Content Dump: Latest ACOG bulletin, figure captions, recent RCT summaries.
    • Benefit: Provides concise slide text that respects space limits, freeing faculty to focus on review, validation, visuals and narrative.
  3. Multiple-choice Question (MCQ) Generation
    • Goal: Produce ten NBME-style multiple-choice questions covering postpartum hemorrhage. (Always ask for more questions than you need so you can select the best options.)
    • Return Format: Numbered list; stem ≤30 words; four options; answer key in parentheses; rationale for each correct and incorrect response.
    • Warnings: Avoid negative phrasing; align difficulty to USMLE Step 2.
    • Content Dump: Learning objectives, topic outline, relevant presentations, abstracts or articles, question writing guide.
    • Benefit: Ensures questions adhere to NBME guidelines, focusing review on psychometrics and content accuracy.
  4. Outcome Report Narratives
    • Goal: Draft an outcomes section for a final grant report on a virtual anesthesia course.
    • Return Format: 250-word narrative; line chart code block (pre-/post-score trend); bullet list of top behavior changes.
    • Warnings: De-identify quotes; reference sample size.
    • Content Dump: Pre-/post-scores (CSV), open-text reflections, attendance metrics.
    • Benefit: Can generate code (e.g., Matplotlib) for charts, streamlining reporting (a chart sketch follows this list).
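
For the outcomes example above, the requested "line chart code block" typically resembles the following. This is a minimal Matplotlib sketch using made-up scores, shown only so reviewers know what to expect; replace the lists with values from your own pre-/post-score CSV:

import matplotlib.pyplot as plt

# Hypothetical pre-/post-test means per assessment item; replace with your CSV data.
items = ["Item 1", "Item 2", "Item 3", "Item 4"]
pre_scores = [52, 61, 48, 70]
post_scores = [78, 83, 74, 88]

fig, ax = plt.subplots()
ax.plot(items, pre_scores, marker="o", label="Pre-test")
ax.plot(items, post_scores, marker="o", label="Post-test")
ax.set_ylabel("Mean score (%)")
ax.set_title("Pre-/post-score trend")
ax.legend()
fig.savefig("score_trend.png", dpi=150)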

Embedding the Framework Into Daily Workflow

  1. Template Repository: Store blank prompt skeletons (tagged by use case) in shared drives, LMS or project management tools.
  2. Process Checkpoints: Add a "Prompt QA" step to production checklists before design/final review.
  3. Faculty Onboarding: Demonstrate the framework during speaker orientation; provide a one-page cheat sheet.
  4. Version Control: Archive prompt-response pairs with metadata (date, author, activity code) for easy retrieval (a minimal archiving sketch follows this list).
  5. Continuous Metrics: Track edit time, compliance errors and feedback quarterly to refine templates.  
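
The Version Control step can be as lightweight as writing each prompt-response pair to a JSON file. The sketch below is one possible approach; the folder name, file-naming scheme and metadata fields are illustrative and should follow your organization's records policy:

import json
from datetime import date
from pathlib import Path

def archive_prompt(prompt: str, response: str, author: str, activity_code: str,
                   folder: str = "prompt_archive") -> Path:
    """Save a prompt-response pair with metadata for later retrieval."""
    record = {
        "date": date.today().isoformat(),
        "author": author,
        "activity_code": activity_code,
        "prompt": prompt,
        "response": response,
    }
    Path(folder).mkdir(exist_ok=True)
    out = Path(folder) / (activity_code + "_" + record["date"] + ".json")
    out.write_text(json.dumps(record, indent=2))
    return out

# Illustrative values; use your own author names and activity codes.
archive_prompt("Goal: Draft three learning objectives...", "model output here",
               author="A. Author", activity_code="OB2025-14")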

Practice Points

  • Focus on one Goal per prompt.
  • Detail the Return Format precisely (length, layout, style, tone, etc.).
  • List Warnings clearly, one per line.
  • Provide a rich, well-labeled Content Dump.
  • Save prompts and outputs together for team learning and improvement.

Your Turn: Apply the Framework

  1. Select a Project: Choose a current task (e.g., drafting objectives for a new module).
  2. Write the Prompt:
    • Goal: e.g., "Draft four competency-based objectives for the telehealth module."
    • Return Format: e.g., "Bullet list; max 20 words each; use measurable verbs."
    • Warnings: e.g., "No promotional language; use inclusive terminology."
    • Content Dump: Attach or paste relevant gap analysis notes, competencies list, prior objectives.
  3. Run and Review: Generate the output and compare it to previous manual efforts regarding accuracy, tone and time needed for edits.
  4. Share and Refine: Discuss the results with your team and identify how tweaking prompt components improved the outcome.
  5. Help Others: Consider sharing your results and experience on the Alliance’s AI Community Board!

This article is provided by members of the Alliance AI Committee. AI was used for a grammar and flow consistency check.

Andrew Crim, M.Ed., CHCP, FACEHP, is the director, education and professional development, for the American College of Osteopathic Obstetricians and Gynecologists.


Núria Waddington Negrão, PhD, is a medical writer and AI consultant, who also serves as the chair of the Alliance AI Committee.

Keywords: Adult Learning and Educational Design
