The deployment of predictive AI in education risks automating and reinforcing existing socioeconomic and structural inequities. Using a dataset of secondary school student performance from Portugal, this study applies a Context Engineering framework to shift the focus from generalized accuracy in predicting the final grade (G3) to achieving equitable outcomes through responsible design. Since the dataset lacks explicit racial data, we operationalize Context Engineering by prioritizing variables that proxy structural disadvantage, including parental education (Medu, Fedu), occupation (Mjob, Fjob), geographical location (address), and structural support (schoolsup, paid). Our approach advocates context-aware stratification: disaggregating model performance across high-risk subgroups (e.g., rural students with low parental education). Furthermore, we propose that the most responsible application is upstream prediction, forecasting G3 using only demographic and contextual features, thereby identifying students at risk before academic underperformance appears in the period grades (G1, G2). We demonstrate how Context Engineering guides the design toward minimizing the missed detection of students needing intervention. This work provides a concrete methodology for transforming theoretical fairness goals into actionable steps, ensuring educational AI systems are built for equity and intervention.
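To make the intended pipeline concrete, the sketch below illustrates the approach in Python on the public UCI Student Performance data: it predicts at-risk status from the contextual proxies alone (no G1 or G2) and reports recall, the complement of the missed-detection rate, both overall and for the rural, low-parental-education subgroup. The file name, passing threshold (G3 < 10), classifier, and subgroup cut-offs are illustrative assumptions rather than the authors' exact protocol.

```python
# Minimal sketch: upstream prediction + context-aware stratification.
# Assumes the UCI Student Performance file "student-por.csv" (semicolon-
# separated); the G3 < 10 at-risk threshold, the logistic regression model,
# and the Medu <= 2 subgroup cut-off are illustrative assumptions.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

df = pd.read_csv("student-por.csv", sep=";")

# Upstream prediction: only demographic/contextual proxies, no G1/G2.
context_features = ["Medu", "Fedu", "Mjob", "Fjob", "address", "schoolsup", "paid"]
categorical = ["Mjob", "Fjob", "address", "schoolsup", "paid"]

X = df[context_features]
y = (df["G3"] < 10).astype(int)  # 1 = at risk on the final grade

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# class_weight="balanced" biases the classifier toward fewer missed detections.
model = Pipeline([
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), categorical)],
        remainder="passthrough",
    )),
    ("clf", LogisticRegression(max_iter=1000, class_weight="balanced")),
])
model.fit(X_train, y_train)
pred = model.predict(X_test)

# Context-aware stratification: disaggregate recall for a high-risk subgroup
# (rural address, low parental education) alongside the overall figure.
test = X_test.copy()
test["y_true"], test["y_pred"] = y_test.values, pred
rural_low_edu = (test["address"] == "R") & (test["Medu"] <= 2)

print("Overall recall:", recall_score(test["y_true"], test["y_pred"]))
print("Rural, low-Medu recall:",
      recall_score(test.loc[rural_low_edu, "y_true"],
                   test.loc[rural_low_edu, "y_pred"]))
```

In this framing, the comparison of subgroup recall against overall recall is the operational check for equity: a gap indicates that at-risk students in the disadvantaged stratum are being missed more often, which is the failure mode the design aims to minimize.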