The impressive fluent text generation demonstrated by ChatGPT and other Large Language Models (LLMs) includes the ability to adjust the style and complexity of the generated text according to user specifications. This opens possibilities for generating educational texts at different levels of complexity, which might become an aid to teachers. It also implies that users (i.e., students) may utilize such technologies themselves, sometimes for academic misconduct. In this strand of research, we ask how well LLMs can generate essays at a prescribed level of complexity, and to what extent AI detectors can distinguish AI-generated texts from human-written texts at different levels of text complexity. For practical investigation, we focus on informational texts across a variety of topics. In this study, text complexity is expressed in U.S. grade levels and estimated with the TextEvaluator automated system. For AI detection, we utilize detectors developed at ETS.
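As a rough illustration of the pipeline described above, the sketch below generates an essay at a requested grade level, estimates its complexity, and runs an AI detector. The function names (generate_essay, estimate_grade_level, detect_ai_generated) and their stub bodies are hypothetical placeholders, not the actual ChatGPT prompts, TextEvaluator system, or ETS detectors used in the study.

```python
# Hypothetical sketch of the study pipeline: generate an informational essay
# at a target U.S. grade level, estimate its complexity, and run an AI detector.
# All three components below are placeholders for the real systems.

from dataclasses import dataclass


@dataclass
class EssayResult:
    topic: str
    target_grade: int
    estimated_grade: float   # placeholder for a TextEvaluator-style estimate
    ai_score: float          # placeholder for an AI-detector score


def generate_essay(topic: str, target_grade: int) -> str:
    """Placeholder for prompting an LLM (e.g., ChatGPT) for a graded essay."""
    prompt = (
        f"Write an informational essay about {topic} "
        f"at a U.S. grade {target_grade} reading level."
    )
    # In the real pipeline this prompt would be sent to an LLM API.
    return f"[essay text generated from prompt: {prompt}]"


def estimate_grade_level(text: str) -> float:
    """Placeholder for an automated complexity estimator such as TextEvaluator."""
    return 8.0  # dummy value


def detect_ai_generated(text: str) -> float:
    """Placeholder for an AI detector; returns a score for 'AI-generated'."""
    return 0.5  # dummy value


def run_condition(topic: str, target_grade: int) -> EssayResult:
    essay = generate_essay(topic, target_grade)
    return EssayResult(
        topic=topic,
        target_grade=target_grade,
        estimated_grade=estimate_grade_level(essay),
        ai_score=detect_ai_generated(essay),
    )


if __name__ == "__main__":
    for grade in (4, 8, 12):
        print(run_condition("photosynthesis", grade))
```

In the actual study, each condition would be repeated across topics and grade levels, and the estimated grade levels and detector scores compared against the prescribed targets and against human-written texts.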