
The widespread adoption of telehealth has underscored the need for healthcare providers to develop virtual patient interaction skills. To address this need, a training program using AI-simulated patient encounters was developed, enabling providers to practice and refine these competencies (4Med+ Corporation — Course Link). Learners received structured feedback at each step, focused on actionable improvements such as using empathetic language, simplifying medical jargon and verifying patient understanding with techniques like Teach-Back, which has been shown to improve communication skills and patient engagement in telehealth settings [1].
This approach aligns with recent findings that emphasize the importance of integrating telehealth training into medical education curricula [2]. For example, incorporating Tele-Objective Structured Clinical Examinations (TeleOSCEs) has been shown to effectively enhance telehealth competencies among medical trainees [3].

The Learning Journey
Seventeen healthcare professional learners completed this structured simulation focused on telehealth communication skills. The program emphasized core competencies of virtual patient interaction: establishing rapport, gathering information and documenting in real time through a mock virtual platform.
The program utilized Xuron's AI-driven simulation platform to evaluate learner competency and provide feedback. Each learner's transcript was analyzed using a standardized rubric, enabling consistent, quick and reliable assessment across learners — an approach that would be difficult to standardize with human evaluators. Learners received a score (Likert scale 1–5) and feedback for each step of the simulation, providing detailed insights into their telehealth communication skills. While a human review was not performed to validate the AI's scoring in this implementation due to scale and resource considerations, Xuron is currently conducting formal validation studies comparing AI assessments with expert human evaluators across various clinical scenarios to further refine the platform's feedback mechanisms.
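As a rough illustration of how rubric-based transcript scoring can be organized (a minimal sketch, not Xuron's actual implementation — all criterion names and scores below are hypothetical), each simulation step holds a set of criteria rated on the 1–5 Likert scale, from which per-step and overall averages are computed:

```python
# Hypothetical sketch of rubric-based scoring; the criteria and score
# values are illustrative only, not Xuron's actual rubric.
from statistics import mean

# Likert scores (1-5) assigned to each criterion within a simulation step.
rubric_scores = {
    "Step 1: Initial Greeting and Consent": {
        "obtains verbal consent": 5,
        "confirms patient comfort with platform": 4,
        "personalized greeting": 4,
    },
    "Step 4: Diagnostic Summary": {
        "explains diagnosis in plain language": 3,
        "states diagnostic rationale": 3,
        "avoids unexplained jargon": 4,
    },
}

def step_averages(scores):
    """Average the criterion scores within each step, rounded to one decimal."""
    return {step: round(mean(criteria.values()), 1)
            for step, criteria in scores.items()}

def overall_average(scores):
    """Average of the per-step averages across all steps."""
    return round(mean(step_averages(scores).values()), 2)

print(step_averages(rubric_scores))
print(overall_average(rubric_scores))
```

Aggregating this way — criteria into step averages, step averages into an overall score — mirrors how the step-level scores and the 3.7 overall average reported below can be summarized for each learner.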
Using a case of poison ivy exposure as the clinical scenario, learners completed six steps common to telehealth encounters:
- Establishing the virtual visit (obtaining consent, ensuring patient comfort).
- Gathering visual information and history.
- Conducting pain assessment.
- Summarizing findings.
- Explaining treatment plans.
- Effectively closing the visit.
The simulation steps and learning objectives (table below) were developed by subject matter experts in collaboration with Xuron. The course designers established these specific steps and evaluation criteria to align with telehealth communication best practices, while also creating detailed instructions for the AI system on how to evaluate learner performance.
What We Learned
Analysis of the results identified both strengths and areas for improvement. Learners demonstrated competency in technical aspects of telehealth, with all correctly identifying the need for verbal consent and establishing initial rapport (average score 4.1 out of 5). They also showed proficiency in conducting systematic assessments, scoring 4.4 out of 5 in pain and symptom evaluation.
The data also revealed specific challenges. Diagnostic communication scored 3.3 based on the standard rubric, with several notable patterns. Many learners gave basic summaries without detailed explanations of the diagnosis, potentially limiting the virtual patient’s understanding and engagement. Others did not explain their diagnostic reasoning or used medical terminology without adequate explanation, which is contrary to patient communication best practices. This lack of depth can hinder patient comprehension, reduce trust and impair decision-making, underscoring the need for clear, tailored and empathetic communication strategies.
Treatment planning received the lowest average score (3.0). Common deficiencies included unclear explanation of prednisone tapering instructions, limited discussion of potential side effects and insufficient guidance about warning signs requiring follow-up care. Many learners also moved through the treatment discussion without adequately checking patient comprehension.
Table: Average Scores and Step-by-step Challenges
Overall Average Score Across All Steps and Learners: 3.7

Step 1: Initial Greeting and Consent
Key challenges:
- Confirming patient comfort: some learners did not verify the patient's comfort with the telehealth platform.
- Misidentification of patient name: occasional errors in addressing the patient correctly.
- Personalization of greeting: inconsistent use of warm, personalized introductions.
Learner feedback summary:
- Verify the patient's comfort with the telehealth platform to build rapport.
- Ensure accurate identification by using the correct name.
- Use warm and personalized introductions to foster connection.

Step 2: Acknowledge Concerns and Visual Assessment
Key challenges:
- Thoroughness in questioning: limited follow-up questions on symptom progression and severity changes.
- Empathy in acknowledgment: some learners acknowledged the concern briefly but lacked empathetic language.
- Missed review of images: in some cases, learners did not adequately reference the reviewed images.
Learner feedback summary:
- Use empathetic language to acknowledge patient concerns.
- Ask thorough follow-up questions about symptom progression and severity changes.
- Refer explicitly to visual data like images to build patient confidence.

Step 3: Pain and Itching Evaluation
Key challenges:
- Differentiating pain and itching: occasional confusion between addressing pain vs. itching separately.
- Clarifying severity scale: some learners needed to guide the patient more on using the 1–10 scale.
- Validation of patient's discomfort: infrequent instances where learners missed acknowledging the discomfort.
Learner feedback summary:
- Clearly differentiate between pain and itching during assessment.
- Guide the patient effectively on using severity scales.
- Acknowledge the patient's discomfort to enhance rapport and diagnostic reliability.

Step 4: Diagnostic Summary
Key challenges:
- Inconsistent diagnostic detail: several learners provided basic summaries without detailing the diagnosis.
- Explanation of diagnostic rationale: learners often did not explain the reasoning behind the diagnosis.
- Avoiding medical jargon: some learners used complex language without simplifying for patient comprehension.
Learner feedback summary:
- Provide detailed diagnostic summaries to avoid patient confusion.
- Explain the rationale behind diagnoses to build patient trust.
- Avoid medical jargon; use patient-friendly language for better understanding.

Step 5: Prescription and Follow-up Plan
Key challenges:
- Clarity in taper instructions: many learners gave unclear instructions on the prednisone taper.
- Side effects explanation: limited discussion of potential side effects.
- Follow-up steps and warning signs: inconsistent guidance on follow-up steps and symptoms that should prompt further care.
Learner feedback summary:
- Provide clear and detailed tapering instructions for medications.
- Discuss potential medication side effects comprehensively.
- Outline specific follow-up steps and warning signs to ensure patient safety.

Step 6: Final Confirmation and Closing
Key challenges:
- Reviewing key points: learners often closed without summarizing key points.
- Confirming patient understanding: not all learners explicitly confirmed that the patient understood the follow-up steps.
- Warm and supportive tone: some learners ended the visit without a friendly or supportive closing.
Learner feedback summary:
- Summarize key points before closing the interaction.
- Explicitly confirm the patient's understanding of follow-up steps.
- Conclude with a warm, supportive tone to leave a positive impression and ensure retention.
Post-training assessment showed that 82.4% of participants reported feeling "Very" or "Extremely" confident in conducting telehealth visits. This confidence reflects targeted feedback that corrected deficiencies and reinforced best practices, helping learners build effective, evidence-based communication skills. Furthermore, learners indicated intentions to modify their practice: 76.5% planned to incorporate shared decision-making techniques, and 70.6% intended to enhance their active listening skills.
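With 17 participants, the reported percentages are consistent with whole-learner counts — a quick arithmetic check (the counts of 14, 13 and 12 learners are inferred from the percentages, not stated in the source):

```python
# Sanity-check: reported survey percentages match whole-learner counts
# out of 17 participants (counts inferred, not reported in the source).
n_learners = 17

def pct(count, total=n_learners):
    """Percentage of learners, rounded to one decimal place."""
    return round(100 * count / total, 1)

print(pct(14))  # "Very"/"Extremely" confident       -> 82.4
print(pct(13))  # plan shared decision-making        -> 76.5
print(pct(12))  # plan to enhance active listening   -> 70.6
```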
Real-world Barriers
Qualitative feedback identified five categories of real-world barriers in the simulated training activity:
- Physical assessment limitations around hands-on examinations and vital signs
- Communication and comprehension challenges with virtual interactions
- Technical barriers including connectivity and image quality
- Documentation efficiency while maintaining patient engagement
- Varying levels of patient technology literacy and language barriers
Looking Ahead
The simulation offered learners structured practice in telehealth delivery, fostering the development of practical strategies for virtual care. Future training should address challenges such as efficient documentation, technical limitations, and clear communication of diagnoses and treatment plans. By focusing on fundamental virtual patient interaction skills, this program provides a framework for preparing healthcare providers for telehealth delivery.
Preliminary results indicate that structured training can enhance the skills necessary for effective virtual care platforms. This perspective is supported by research highlighting the importance of digital communication skills in telehealth training [4]. Additionally, interventions like the Teach-Back method have been shown to improve communication skills in telehealth settings, leading to better patient understanding and engagement [1].
The detailed learner analysis presented in this study would have been possible before AI-powered virtual patients, but only with major practical challenges. Traditional methods using human actors as patients would require many staff hours for both acting and evaluation, making the approach too expensive and time-consuming for large groups of learners. Achieving consistent scoring from multiple human evaluators would be difficult even with extensive training. AI-enabled assessment delivers comparable analysis with far fewer resources, allowing consistent evaluation across many learners in a way that would not be practical using traditional methods alone.
Acknowledgement: Many thanks to Caroline O. Pardo, PhD, CHCP, FACEHP, for her guidance on interpretation of the research results and implications to practice.
References
1. Morony S, Weir K, Duncan G, Biggs J, Nutbeam D, McCaffery KJ. Enhancing communication skills for telehealth: development and implementation of a Teach-Back intervention for a national maternal and child health helpline in Australia. BMC Health Serv Res. 2018;18:162. doi:10.1186/s12913-018-2956-6
2. Murphy B. How the telehealth boom is changing physician training. American Medical Association. Available at: https://www.ama-assn.org/practice-management/digital/how-telehealth-boom-changing-physician-training
3. Bajra R, Srinivasan M, Torres EC, Rydel T, Schillinger E. Training future clinicians in telehealth competencies: outcomes of a telehealth curriculum and teleOSCEs at an academic medical center. Front Med. 2023;10:1222181. doi:10.3389/fmed.2023.1222181
4. HHS.gov. Types of trainings for telehealth. Available at: https://telehealth.hhs.gov/providers/best-practice-guides/telehealth-training-and-workforce-development/types-of-trainings-for-telehealth

Boris Rozenfeld, MD, has over 10 years of experience in medical education and currently serves as director of MedEd at Xuron, specializing in AI-powered conversational simulations for teaching interpersonal, communication, and clinical reasoning skills.

Ian Nott is the CEO of Xuron, leveraging nearly a decade of experience in spatial computing and software development to lead innovations in AI-powered virtual human simulations for medical training.

Wendy Whitmore has more than 20 years of experience in the healthcare and information technology industries. She currently acts as the chief learning officer for the 4Medplus Corporation.