Objective
This study profiles users of Answer.AI, a mobile AI-tutoring app with more than 6 million student accounts as of September 2024, focusing on whether its U.S. high school users (grades 9–12) differ from national norms. Using a January 2025 in-app survey (14,725 valid U.S. responses), we compare users' demographics, schooling contexts, and advanced coursework participation to federal benchmark datasets, illuminating which students rely on AI when human tutoring is scarce.
Theoretical Framework
Drawing from digital equity research (Warschauer, 2004) and the resource-substitution hypothesis (Rideout & Katz, 2016), this study challenges assumptions that under-resourced students lag in tech adoption. When human tutors or well-resourced schools are unavailable, low-SES and minoritized learners may turn to ubiquitous, low-cost technologies—like free AI tutors—as adaptive tools to close academic gaps.
Methods
We conducted a cross-sectional analysis of data collected via a self-administered, incentivized in-app survey. After filtering for U.S. grades 9–12, responses were weighted by grade level. We calculated proportional distributions (e.g., race/ethnicity, school type, advanced coursework), descriptively compared them to benchmarks, and tested differences using two-tailed z-tests. In parallel, we plan to use NLP (BERTopic) to analyze open-ended responses about AI’s academic value.
Our primary data include nine survey questions on demographics, school context, AP/IB/Dual-Enrollment/Honors participation, and perceived learning impact. Comparison data were drawn from the NCES Digest of Education Statistics (2019, Table 203.60), the 2024 EdChoice Schooling in America survey, and High School & Beyond 2009–2013 transcripts. While Answer.AI lacks objective performance metrics such as GPA, self-reports and qualitative feedback offer insight into perceived academic value.
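The benchmark comparison described above can be sketched as a one-sample, two-tailed z-test of a sample proportion against a fixed national benchmark. The sketch below is illustrative only: the sample size and proportions are drawn from the figures reported in this abstract, and the exact weighting scheme and test specification the authors used may differ.

```python
import math

def one_sample_z_test(p_hat: float, p0: float, n: int):
    """Two-tailed one-sample z-test of a proportion p_hat (sample, size n)
    against a fixed benchmark proportion p0 assumed known without error."""
    se = math.sqrt(p0 * (1 - p0) / n)  # standard error under the null H0: p = p0
    z = (p_hat - p0) / se
    # Two-tailed p-value from the standard normal CDF via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative values from the abstract (not the study's actual computation):
# 32.6% Black users in the app sample vs. a 14.6% national benchmark,
# using the overall valid-response count as a stand-in for the subgroup n.
z, p = one_sample_z_test(p_hat=0.326, p0=0.146, n=14725)
print(f"z = {z:.2f}, p = {p:.3g}")
```

Treating the federal benchmark as a fixed population value (rather than a second sample) is what makes the one-sample form appropriate here; a two-sample test would instead pool variance across both groups.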
Results
Answer.AI’s user base is notably more diverse than the national high school population: 7% American Indian/Alaska Native (<1% nationally), 2.2% Pacific Islander (0.4%), 32.6% Black (14.6%), and 22% White (43.9%).
Homeschoolers comprise 7.8% of users vs. 5% nationally, suggesting strong uptake in nontraditional settings.
Advanced-course engagement exceeds national benchmarks across all groups; for example, 59% of Black users report such enrollment vs. 23% nationally (noting broader course definitions in the survey). Open-ended survey responses will be analyzed to understand how users describe AI tutoring to peers, shedding light on perceived benefits and limitations among students with limited offline support.
Significance
Rather than illustrating a new AI divide, findings suggest students with fewer traditional supports are early adopters of AI tutors. This raises deeper questions: not just who uses AI, but why they must. Structural gaps driving this reliance—and the educational adequacy of AI tools—warrant greater attention.