INTRODUCTION
“Benefit navigators” — caseworkers, call center specialists, and community health workers who help clients identify and enroll in public assistance programs — play a critical role in bridging access to the safety net. To provide accurate advice, navigators must understand program eligibility, documentation requirements, and application processes. Program rules are often complex, however, leaving navigators with less time to interact with clients and at greater risk of cognitive overload, burnout, and turnover — which can exacerbate the stress and hardship felt by clients seeking assistance.
This study aims to estimate the effect of generative artificial intelligence (GenAI) on navigators’ experience, particularly regarding administrative burden. In collaboration with a California-based community nonprofit and Georgetown University, we developed a chatbot that ingests guidelines from multiple programs (including CalFresh, WIC, EITC, Medi-Cal, and Social Security benefits) and provides tailored, plain-language responses with direct-quote citations to navigators working directly with clients.
Our session’s goals are to describe preliminary evaluation findings, share design and implementation insights, and discuss implications of leveraging GenAI to enhance policy design and service delivery, particularly for vulnerable populations.
METHODS
A quasi-experimental design with pre-post testing was used to evaluate the chatbot’s effect, comparing 32 chatbot-assisted navigators (treatment) with 30 “status quo” navigators who did not use the chatbot (comparison). The study received IRB approval from Georgetown University and was conducted from March to June 2025.
Navigators completed pre- and post-surveys. The Administrative Burden Scale (Jilke et al., 2024) was used to assess navigators’ experience with learning, compliance, and psychological costs while advising clients on benefit programs. Navigators also reported their knowledge of benefit programs, perception of and prior experience with GenAI tools, client volume/workload, tenure in years, and information-finding strategies.
RESULTS
A total of 55 navigators (29 from treatment and 26 from comparison) completed the pre-survey (89% response rate). Preliminary analysis of pre-survey data found that, on average, navigators served 17 clients weekly and spent 48% of their time interacting with clients. Most navigators expressed neutral-to-positive views of GenAI tools, and 62% expressed favorable views toward chatbots.
Navigators frequently faced complex benefit-related questions when advising clients, with 44% encountering them weekly and 20% daily. More than one-third (36%) of navigators reported that these questions were “somewhat” or “very difficult” to answer. A large share (42%) of navigators felt moderate-to-extreme frustration about navigating benefit programs, and 51% expended “high” to “very high” mental effort to support clients. Regarding information-finding strategies, most navigators used multiple methods, including searching online (98%), consulting colleagues (95%), and checking program manuals (85%).
Post-survey data collection will be completed by June 2025. With linked pre-post survey data, we will conduct further descriptive and inferential analyses to estimate the chatbot’s effect on navigators’ experience, along with qualitative analysis of chatbot logs using an inductive approach.
DISCUSSION
Our anticipated findings have several policy implications. If the chatbot reduces navigators’ administrative burden, it could increase navigator efficiency, mitigate burnout and turnover, enhance client experiences, and expand access to benefit programs. Our pilot could serve as a scalable blueprint for others seeking to design ethical, human-centered GenAI tools in the public sector.