Research by: Rod D. Roscoe, Renu Balyan, Danielle S. McNamara, Michelle Banawan, & Dean Schillinger
Modern communication between health care professionals and patients increasingly relies upon secure messages (SMs) exchanged through an electronic patient portal. Despite the convenience of secure messaging, challenges include gaps between physician and patient expertise along with the asynchronous nature of such communication. Importantly, less readable SMs from physicians (e.g., overly complicated messages) may result in patient confusion, non-adherence, and ultimately poorer health outcomes. Patient-physician communication is a critical element of healthcare with substantial and well-documented impacts on patient health. Physicians’ communications to patients influence both proximal outcomes (e.g., patient comprehension, satisfaction, and trust) and intermediate outcomes (e.g., self-care skills and increased adherence to treatment plans) that are associated with improved overall health. Consequently, clear communication also substantially impacts health equity and accessibility by enabling a broader and more diverse range of individuals to fully participate in the healthcare process.
In this study, we synthesize work on patient-physician electronic communication, message readability assessments, and feedback to explore the potential for automated strategy feedback to improve the readability of physicians’ SMs to patients. In a digital medium (e.g., email), written SMs are composed and then electronically transmitted between parties (e.g., via a secure messaging patient portal). In this environment, it is feasible to incorporate automated tools that “read” and analyze electronic messages before they are delivered to a patient. Computational algorithms can assess the readability and complexity of a physician SM, which in turn might guide strategies for how the SM might be improved (e.g., adding details to very short messages or reducing message complexity). These dynamic assessments and feedback also have the potential to iteratively nudge physicians toward authoring more readable SMs over time. Specifically, physicians may gradually learn how to compose less complex SMs from the outset, thus attaining more optimal levels of readability faster and with less feedback.
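The assess-then-nudge loop described above can be sketched in code. This is an illustrative sketch only: the `complexity_of` estimator below is a crude hypothetical placeholder (average word length), and the 0.5 threshold is an assumption, not a value from the study.

```python
# Hypothetical sketch of automated strategy feedback on a draft SM.
# complexity_of() and the 0.5 threshold are illustrative assumptions.
from typing import Optional


def complexity_of(message: str) -> float:
    """Placeholder complexity estimate in [0, 1] (crude proxy: word length)."""
    words = message.split()
    if not words:
        return 0.0
    avg_len = sum(len(w) for w in words) / len(words)
    return min(avg_len / 10.0, 1.0)


def strategy_feedback(message: str, threshold: float = 0.5) -> Optional[str]:
    """Return a revision hint if the draft looks too complex, else None."""
    if complexity_of(message) >= threshold:
        return ("This message may be hard for the patient to read. "
                "Consider shorter sentences and more familiar words.")
    return None  # draft is delivered without intervention


draft = "Commence pharmacological prophylaxis contemporaneously with ambulation."
hint = strategy_feedback(draft)
```

In a deployed portal, such a check would run when the physician clicks "send," so the hint arrives while revision is still possible rather than after delivery.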
The current research is guided by questions about the impact of automated strategy feedback on the complexity and readability of physicians’ emails to simulated patients via a simulated patient portal. To assess physicians’ SM readability, we implement a modified version of the Crossley et al. (2020) algorithm. We first consider whether physicians demonstrate sensitivity to the literacy level of their patients (RQ1). This is an important preliminary question because it speaks to whether physicians are already able and willing to adapt their messaging based on patient needs.
Subsequent research questions focus on the specific effects of feedback. We first explore whether complexity feedback on potentially problematic messages results in an overall decrease in message complexity (RQ2). We hypothesize that strategy feedback will help physicians revise their messages, but the effect on any one message might be small. Next, we evaluate trends across patients and over time to assess whether physicians’ overall message complexity decreases over multiple interactions with a patient and the feedback system (RQ3) and over interactions with multiple patients (RQ4). We expect that the effects of feedback may be cumulative, such that trends within and across patients demonstrate physician adaptivity or learning more clearly than individual revisions. Finally, we consider how physicians’ perceptions of their efforts and attitudes toward the strategy feedback influence their responses (RQ5). Based upon research on feedback uptake (e.g., Roscoe et al., 2017), we hypothesize that more favorable perceptions of the system will be associated with increased responsiveness to feedback (i.e., generating SMs of lower response complexity).
In this within-subjects study, all participating physicians responded to stimuli associated with the same six simulated patient scenarios. When physicians first logged into the experiment, they were introduced to the study, tasks, and interface, and then completed a brief background questionnaire. Afterward, physicians interacted with a simulated SM system to respond to stimulus messages associated with six distinct patient scenarios (i.e., Patient 1, Patient 2, and so on). Each patient scenario began with a brief description of the patient. Physicians then received four stimulus messages per patient. Participating physicians were required to author and send an original response (i.e., SM) to the patient after each stimulus message; physicians could not proceed until they had submitted a response. Physicians did not have access to pre-generated “smart phrases” or “smart texts” to insert in their messages, and patients were always the intended audience of physicians’ SMs. Importantly, patient scenarios did not include any information about patients’ literacy level—as in the real world, physicians could only attempt to infer patient health literacy based on message content.
The messaging portal provided strategy feedback for how physician responses might be improved (e.g., adding details to very short messages or reducing message complexity). The complexity estimation algorithm implemented in the current study included 21 NLP-based indices. This subset of measures was selected from the original 85 indices based on normality, absence of multicollinearity, and robust effect sizes of relationships to human ratings. Natural language processing indices were employed to train multiple machine learning models via LDA, eXtreme Gradient Boosting tree (XGBTree), and random forest methods. Although not reported here for brevity, tests of model fit and accuracy indicated that the random forest model was the most accurate (i.e., accuracy of 62%). Algorithm reliability and validity were also best for responses of 50 words or more, a threshold shorter than the typical responses observed in studies of physician messaging in real-world systems. The output of the complexity estimation algorithm is a probability that a given physician message is complex. Thus, complexity estimates could range from 0.00 (very unlikely to be complex) to 1.00 (very likely to be complex). Importantly, this estimate was an assessment of likely complexity, not a “score” or a judgment of quality.
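The random-forest stage of such a pipeline can be sketched as follows. The study's actual 21 indices, training data, and hyperparameters are not reproduced here, so the features and labels below are randomly generated stand-ins; only the overall shape (21-dimensional index vectors in, a complexity probability out) reflects the description above.

```python
# Illustrative sketch: random forest mapping 21 NLP indices to a
# complexity probability. Data and settings are placeholder assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in for 21 NLP-based indices computed per physician message
# (e.g., word familiarity, sentence length, cohesion measures).
X_train = rng.normal(size=(200, 21))
# Stand-in for human "complex" (1) vs. "not complex" (0) ratings.
y_train = rng.integers(0, 2, size=200)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# For a new message, predict_proba yields an estimate in [0, 1]: the
# likelihood the message is complex, not a quality score.
new_message_indices = rng.normal(size=(1, 21))
p_complex = model.predict_proba(new_message_indices)[0, 1]
```

The probabilistic output is what makes threshold-based feedback possible: a portal can intervene only when `p_complex` exceeds some cutoff, rather than flagging every message.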
Analyses of changes in SM complexity revealed that automated strategy feedback indeed helped physicians compose and refine more readable messages. Although the effects for any individual SM were slight, the cumulative effects within and across patient scenarios showed trends of decreasing complexity. These findings provide a proof-of-concept and prototype for how automated strategy feedback might be incorporated into SM systems to help physicians communicate more effectively with their patients. Prior research indicates that feedback and coaching are important to physicians’ communication skill development, but such efforts are time-intensive and require expert mentors. Moreover, such feedback is necessarily after-the-fact because coaches and physicians must review the interactions retrospectively (e.g., via video recordings). Automated approaches thus enable immediate and real-time feedback or coaching. Although the current study did not test an adaptive instructional technology (e.g., intelligent tutoring systems), results suggested that physicians may have learned to craft more readable SMs over successive trials. This outcome makes sense given that our automated strategy feedback approach took inspiration from robust educational findings about formative and automated feedback. For example, automated writing evaluation (AWE) systems extract and analyze linguistic data from written responses in order to provide holistic scores, reveal features of writing, and communicate formative feedback for revising and future writing. Such systems have been shown to help students improve their writing proficiency. Consequently, with regard to patient-provider communication, there may be potential for computer-based systems to contribute to training or coaching future healthcare providers on how to communicate with patients.
That is, instruction and tutorials might first teach about health communication best practices, and then an automated system could guide mindful practice via realistic scenarios along with automated strategy feedback.
To cite this article: Roscoe, R. D., Balyan, R., McNamara, D. S., Banawan, M., & Schillinger, D. (2023). Automated strategy feedback can improve the readability of physicians’ electronic communications to simulated patients. International Journal of Human-Computer Studies, 176, 103059. https://doi.org/10.1016/j.ijhcs.2023.103059
To access this article: https://doi.org/10.1016/j.ijhcs.2023.103059
About the Journal
The International Journal of Human-Computer Studies publishes research on the design and use of interactive computer technology.
Chartered Association of Business Schools Academic Journal Guide 2021: ABS 2
Scimago Journal & Country Rank: SJR h-index 138; SJR 2021: 1.27
Scopus: CiteScore 9.2
Australian Business Deans Council Journal List: Rating B
Journal Citation Reports (Clarivate): JCI 2021: 1.20; Impact Factor: 4.866