
Search

Browse By Day

Browse By Time

Browse By Person

Browse By Area

Browse By Session Type
Search Tips
ASC Home

Sign In


X (Twitter)
AI has been increasingly implemented as a plugin in online housing and rental platforms (e.g., Zillow and Redfin). It is unclear however, how AI forms its conclusions and recommendations about where to search for housing and whether crime influences AI suggestions. We hypothesize AI responses will engage in racially coded language by discussing the presence of crime as a reason to avoid Blacker neighborhoods. We also hypothesize that AI will mention concerns of crime and disorder more often for White homeseekers than minority homeseekers. To investigate potential biases in AI-generated housing recommendations, we will employ an audit study design using OpenAI's GPT-4 language model. Our prompt will frame the user as a homeseeker looking to move to a new city and systematically vary the demographic characteristics of the hypothetical homeseeker to examine how AI recommendations differ across social groups and neighborhood characteristics. We will use computational text analysis tools, including topic modeling and word embeddings, to identify racialized themes, paying particular attention to the discussion of crime. Preliminary results for Chicago suggest that ChatGPT mentions concerns for safety and crime more often for hypothetical White homeseekers than for Black or Hispanic homeseekers.