Recent advancements in artificial intelligence technologies, especially large language models (LLMs), have opened new avenues for creating diverse and inclusive content in digital media spaces. This study explores potential applications of these technologies in Korean digital media to foster cultural diversity awareness and promote anti-racist narratives.
South Korea, traditionally viewed as ethnically homogeneous, has become increasingly multicultural since the 2000s, with over 1 million foreign residents. However, the media and news discourse about 'multicultural children' often perpetuates 'everyday racism' in Korean society (Lee & Cho, 2021; Lee, 2021). There is a notable lack of research addressing anti-racism or multicultural education for children within Korean media.
Even though Korea has increasingly become a multicultural society, children's exposure to racial diversity remains primarily mediated through visual content in the media rather than through direct experiences or interactions with diverse individuals in their daily lives. The Internet Usage Survey (MOE, 2022) shows that 93% of Koreans aged three and older use the internet, including 92% of children aged 3 to 9 and 99.4% of teenagers. Young children spend nearly 5 hours online daily, while teenagers average 8 hours, exceeding WHO's recommended screen time. This extensive media use significantly shapes children's perceptions of race and diversity, making media a key influence on their understanding of multiculturalism.
Our study integrates three key theoretical frameworks: anti-racist critical theory, AI ethics, and developmental psychology. Through this lens, we examine how large language models trained on data created in diverse cultural contexts can yield valuable tools for anti-racist education that challenge the inherent biases in children's media. Drawing on Noble's (2018) concept of "algorithms of oppression," we explore both the potential benefits and risks of using AI technologies to craft anti-racist narratives.
This research uses a mixed-methods approach that includes a literature review, a theoretical analysis of the potential of large language models for applying anti-racist educational principles, and computational experiments that create sample content to assess the theory's validity. The study seeks to bridge the gap between emerging AI technologies and their potential social justice applications in the Korean context, while also offering hypotheses that can be tested in the wider global children's media discourse.
The study's theoretical framework is further elaborated through the lens of Critical Race Theory in Education, as articulated by Ladson-Billings and Tate (1995). They argue that "The voice of people of color is required for a complete analysis of the educational system" (p. 58). This perspective is particularly challenging in the Korean context, given the country's position as a global cultural exporter and its relatively recent shift towards multiculturalism. The research aims to explore how AI technologies can amplify diverse voices ethically within children's media.
The research methodology comprises three components:
Literature review spanning computationally assisted narrative content in children's media, anti-racist education, and ethical AI development
Theoretical analysis of the potential of large language models, including GPT-family models, for drafting anti-racist narratives via retrieval-augmented generation, drawing on the computational linguistics and computer vision literature and using licensed, culturally diverse narratives
Computational experiments using LLMs to create sample anti-racist narratives and visual content, followed by algorithmic analysis
This exploration will follow a multi-phase approach. We will first collect a dataset of popular Korean children's media, educational materials about multiculturalism, and anti-racist narratives from various cultural contexts. Using a large language model with retrieval-augmented generation, we will supply ethically sourced content to ground the model in creating anti-racist content tailored both to the reader's age and to Korean cultural sensitivities. In the next phase, we will create a series of short stories that address aspects of cultural diversity and anti-racism while meeting a series of pre-determined tests. The final phase of the study will test the model against a variety of case studies that could be replicated in real-world contexts and produce narratives that translate readily into digital media contexts.
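The retrieval-augmented generation step described above could be sketched as follows. This is a minimal illustration, not the study's implementation: the corpus entries, the term-overlap retriever, and the prompt template are all hypothetical stand-ins (a production system would use dense embeddings over the licensed narrative corpus and pass the prompt to an LLM).

```python
from collections import Counter

# Hypothetical corpus standing in for the licensed, ethically sourced
# narratives described in the study; real documents would replace these.
CORPUS = {
    "story_a": "a child from vietnam joins a korean classroom and makes friends",
    "story_b": "a folk tale about a tiger and a magpie in the mountains",
    "story_c": "two children from different cultures cook a meal together",
}

def tokenize(text):
    return text.lower().split()

def score(query, doc):
    # Simple term-overlap score between query and document; a real
    # retriever would rank by embedding similarity instead.
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    return sum(min(q[t], d[t]) for t in q)

def retrieve(query, corpus, k=2):
    # Return the ids of the k best-matching source passages.
    ranked = sorted(corpus.items(), key=lambda kv: score(query, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

def build_prompt(query, corpus, age_band="6-9"):
    # Retrieved passages ground the generation request in sourced
    # material; the age band mirrors the study's age-based tailoring.
    context = "\n".join(corpus[d] for d in retrieve(query, corpus))
    return (f"Using only the source passages below, write a short story "
            f"for ages {age_band} that models inclusion.\n"
            f"Sources:\n{context}\nRequest: {query}")

prompt = build_prompt("children from different cultures", CORPUS)
```

Grounding generation in retrieved, vetted passages is what lets the approach constrain the model to ethically sourced content rather than its unvetted training distribution.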
This study examines the theoretical application of artificial intelligence techniques, particularly large language models (LLMs), for creating anti-racist narratives in Korean children's digital media. The research explores how these technologies can enhance children's understanding and acceptance of cultural diversity, especially in a context where their exposure to racial diversity is mediated primarily through digital content rather than direct personal experiences. The study also addresses key ethical considerations and risks associated with AI-driven content creation, such as the potential reinforcement of biases or cultural insensitivity. Furthermore, it emphasizes the importance of establishing clear guidelines and best practices for the ethical use of AI technologies, promoting inclusivity while safeguarding against unintended biases in the development of culturally sensitive narratives.