Individual Submission Summary

AI Justice for Social Justice in LLM-Powered Online Learning.

Wed, March 26, 1:15 to 2:30pm, Palmer House, Floor: 3rd Floor, Salon 4

Proposal

The integration of large language models (LLMs) into online learning platforms holds great promise for enhancing educational accessibility and personalization, particularly in underrepresented regions such as Sub-Saharan Africa (SSA). Without deliberate design and careful implementation, however, these technologies can inadvertently reinforce existing inequalities. This paper explores the concept of AI justice as a framework for ensuring that LLM-powered online learning systems are not only effective but also equitable.

AI justice involves the intentional development of AI systems that prioritize fairness, inclusivity, and the mitigation of biases that could disadvantage marginalized populations. To ground justice in LLM applications, the study draws on theories of decolonized knowledge production, participatory design, and culturally relevant pedagogy, contextualizing them within the socio-economic realities of SSA. It emphasizes the importance of designing LLMs with built-in safeguards against bias, such as bias detection algorithms, transparent data sources, and inclusive training datasets that reflect the diversity of SSA learners. The study further advocates a participatory approach in which local educators and learners are involved in the AI development process, ensuring that the technology is aligned with their specific needs and contexts.

The research employs a mixed-methods approach, combining a systematic literature review with case studies of current LLM applications in SSA and participatory workshops involving local stakeholders. Data collected through qualitative interviews, focus groups, and content analysis of educational platforms reveal both the potential and the pitfalls of LLMs in this context.

Findings highlight that while LLMs can offer personalized and scalable educational solutions, they often fail to account for the nuanced cultural and linguistic diversity of SSA, leading to the exclusion or misrepresentation of local knowledge systems. To address these issues, the paper proposes a set of technical guidelines and ethical considerations for developing LLMs that uphold social justice. These include the implementation of fairness audits during the development phase, the use of diverse and representative training data, and the deployment of continuous monitoring systems to detect and correct biases in real time. By embedding these practices into the design and operation of LLMs, educational technologies can better serve the diverse needs of SSA learners, ensuring that advancements in AI contribute to, rather than detract from, educational equity.
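The fairness audits proposed above can be made concrete in code. The following sketch is illustrative only, not drawn from the paper: it computes a demographic parity gap, one common audit metric, over hypothetical learner outcomes. The group labels, data, and tolerance threshold are all assumptions for the sake of the example.

```python
# Illustrative sketch of one possible fairness-audit metric:
# the demographic parity difference, i.e. the largest gap in
# positive-outcome rates between learner groups. All data below
# are hypothetical.

def demographic_parity_difference(outcomes, groups):
    """Return the max gap in positive-outcome rates across groups.

    outcomes: list of 0/1 results (e.g., 1 = learner was recommended
              a resource); groups: parallel list of group labels.
    """
    counts = {}
    for outcome, group in zip(outcomes, groups):
        positives, total = counts.get(group, (0, 0))
        counts[group] = (positives + outcome, total + 1)
    rates = [positives / total for positives, total in counts.values()]
    return max(rates) - min(rates)

# Hypothetical audit run: flag the system if the gap exceeds a
# chosen tolerance (here 0.2, an arbitrary example value).
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(outcomes, groups)
print(f"demographic parity gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50
if gap > 0.2:
    print("audit flag: outcome rates differ substantially across groups")
```

In a deployed system, such a check would run continuously over live interaction logs rather than a fixed sample, in line with the real-time monitoring the paper calls for.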

In conclusion, this paper contributes to the field of comparative and international education by offering a technologically informed perspective on the intersection of AI and social justice in online learning. It challenges the notion of one-size-fits-all AI solutions, advocating for a more localized and justice-oriented approach. The findings provide actionable insights for researchers, developers, and policymakers aiming to create more equitable and contextually relevant online learning environments, particularly in resource-constrained settings like SSA.
