Journal of Continuing Education in the Health Professions (02/21/25) Maslej, Marta M.; Donner, Kayle; Thakur, Anupam; et al.
When evaluating open-ended feedback from continuing professional development training at a psychiatric hospital, large language models (LLMs) may be useful because they can capture context, new research shows. For the study, researchers assessed natural language processing methods on survey responses from staff participants. The survey asked how participants intended to use the training and whether there was other information they wanted to share. For the "intent to use" responses, topic modeling failed to differentiate content between topics because the answers were short and lacked diversity. For the "open-ended feedback" responses, an LLM-based clustering approach "generated meaningful clusters characterized by semantically similar words for both responses," the authors report, indicating that the LLM can distinguish between answers that use similar words for different topics. Future research could investigate other LLM-based methods, or how such methods perform on other datasets and types of feedback.
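The article does not detail the study's clustering pipeline, but the general idea behind LLM-based clustering of free-text feedback is to embed each response as a vector and then group similar vectors. A minimal sketch, with a placeholder `embed` function standing in for a real LLM embedding call (the study's actual model and tooling are not specified):

```python
import math
import random

def embed(text):
    """Placeholder for an LLM embedding call (hypothetical; a real
    pipeline would query an embedding model). Here, a normalized
    bag-of-letters vector is used purely so the sketch runs end to end."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def kmeans(vectors, k, iters=20, seed=0):
    """Minimal k-means over unit vectors (cosine similarity ~ dot product).
    Returns a cluster label for each input vector."""
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            best = max(range(k),
                       key=lambda i: sum(a * b for a, b in zip(v, centroids[i])))
            clusters[best].append(v)
        for i, members in enumerate(clusters):
            if members:  # keep old centroid if a cluster empties out
                dim = len(members[0])
                mean = [sum(m[d] for m in members) / len(members) for d in range(dim)]
                norm = math.sqrt(sum(x * x for x in mean)) or 1.0
                centroids[i] = [x / norm for x in mean]
    return [max(range(k),
                key=lambda i: sum(a * b for a, b in zip(v, centroids[i])))
            for v in vectors]

# Illustrative survey responses (invented for this sketch, not from the study)
responses = [
    "The training will help me support patients better",
    "I plan to apply these skills with patients",
    "More sessions on documentation would be useful",
    "Please offer additional documentation workshops",
]
labels = kmeans([embed(r) for r in responses], k=2)
```

With real LLM embeddings, responses that use similar words for different topics land in different clusters because the embeddings encode context, which is the advantage the authors attribute to the LLM-based approach over topic modeling on short, low-diversity answers.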