There is increased interest in AI and machine learning for healthcare. The main data source used to train these algorithms is the electronic health record (EHR), which has increasingly been used to predict future medical needs, disease progression, and illness. While this technology holds promise, scholars and healthcare professionals must address potential bias at the source of data creation: physicians and nurses. These medical professionals record data based on their perceptions and assessments of patients, which are imbued with social biases. While differences in treatment decisions by patient race and gender have been studied, fewer works examine differences in the records themselves, which would be the primary source of information, and of potential bias, in an algorithmically driven healthcare process. Contributing to previous research that has found disparities in pre-existing, de-identified medical records, I provide new insights with an original video vignette survey experiment designed to isolate race and gender as the key characteristics prompting differences in the content and sentiment of clinical notes. The survey is administered to healthcare providers (MD, DO, PA, RN, NP) who practice in the U.S. Preliminary results suggest disparities by patient race in the length and content of clinical notes. White women receive the most detailed notes on average, while Black women receive the least. Notes for white patients were more likely to include information about mental health concerns and potential barriers to care. This corresponded to lower subsequent ratings of mental health for white patients and a higher likelihood that providers expressed concern or wanted more information about mental health and social supports for white patients.
These racially biased differences in clinical notes, and their connection to different follow-up pathways, raise concern that algorithms trained on these data, and the providers who rely on them, could disproportionately neglect or underestimate the needs of Black patients.