Since the debut of OpenAI’s ChatGPT, generative AI has captured educators’ attention. Initial fears, particularly about cheating, are now often giving way to innovation across the pedagogical spectrum. AI’s impressive power to edit and write means that students (and others) rely upon bots as sources of efficiency and authority. Chatbots function in a wide array of languages. While the largest datasets in major large language models are composed primarily of English texts, issues with trusting AI’s advice apply when writing across the language spectrum.
For many non-native speakers, along with native speakers with less confident writing skills, AI’s prowess in editing and composition is undoubtedly proving invaluable. At the same time, there are drawbacks. Most frequently cited are problems of bias and hallucination. Less often noticed are cases in which AI-driven grammar and style applications such as Microsoft Editor offer recommendations that are at best odd and at worst simply wrong. Such challenges arise particularly with words or grammatical constructions that are more sophisticated or subtle. Native speakers who are strong writers may well notice when advice is problematic and ignore it. However, less fluent speakers or those with less confident writing skills are tempted to assume AI’s advice is sound, sometimes resulting in unfortunate errors.
Today’s discussion will illustrate the sorts of problematic advice that AI conjures up, situating the challenge in the context of global learners who rely upon AI for accurate guidance.