1 Introduction
With the continuous deepening of educational informatization and the rapid development of intelligent technologies such as the Internet, big data, and cloud computing, online education has gained large-scale popularity. Catalyzed in particular by the COVID-19 pandemic, online education has become a new trend in educational reform because it lowers the barriers to education imposed by time and space constraints and significantly increases opportunities for public participation in learning (Shen Yixin et al., 2020). However, in a new online educational scenario with more learning participants, richer learning resources, and more diverse learning formats, how to provide personalized learning support for learners is a pressing issue in online education. Unlike face-to-face learning, personalized learning guidance for online learners (in the form of exercise test reports and learning resource recommendations) does not come mainly from teachers; it relies more on computer-aided applications (such as intelligent correction and adaptive testing tools) embedded in online education systems such as MOOCs and intelligent tutoring systems (Martin & Lazendic, 2017). In practice, the main basis on which these intelligent systems provide personalized learning guidance is the learning state reflected by the evaluation of exercises in the online learning system. The comprehensiveness and accuracy with which online education exercises diagnose learners' learning states is therefore a crucial indicator of the reliability of personalized learning guidance, and it is important to investigate the design and examination features of learning state assessment in current online education exercises, on which such guidance is based.
At present, research on online education exercises lies mainly in the field of computer science: it collects data such as exercise text, quantity, type, difficulty, discrimination, and knowledge concepts to build a fine-grained portrait of each exercise (Frederiksen et al., 1993; Mao et al., 2021; Q. Liu, 2021), and models these features together with learners' answer data to achieve an accurate diagnosis of learners' learning states (Fei Wang et al., 2020). However, relevant studies focus mainly on accurately portraying learners' knowledge state and lack measurement of changes in cognitive state during the learning process. From the perspective of cognitive learning theory, learning is a process in which knowledge internalization and cognitive ability develop together (Anderson et al., 2001), but current cognitive-theoretic research on exercises is largely limited to textbooks and has rarely been applied to online education.
This research follows the framework of cognitive theory and focuses on the design and examination features of online education exercises in both the knowledge and cognitive dimensions, aiming to provide a new perspective for the evaluation of online education exercises and to promote personalized learning in online education.
2 Conceptual framework
This study followed Anderson's taxonomy of learning objectives, presented in A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives (Anderson et al., 2001). The taxonomy evaluates learning objectives along a "knowledge dimension" and a "cognitive process dimension," because learning activities often involve a variety of thinking skills and many kinds of knowledge. The knowledge dimension classifies four types of knowledge that learners may acquire or construct: factual, conceptual, procedural, and metacognitive knowledge. The cognitive process dimension is a continuum of increasing cognitive complexity spanning six categories (remember, understand, apply, analyze, evaluate, and create), ordered by thinking skill.
3 Methodology
3.1 Instrument development
To measure the design and examination features of the exercises in the knowledge and cognitive process dimensions, we developed an exercise classification and labeling tool based on Anderson's taxonomy of learning objectives and an extensive review of relevant studies. The tool contains four knowledge categories (factual, conceptual, procedural, and metacognitive), six cognitive process categories (remember, understand, apply, analyze, evaluate, and create), and the corresponding classification criteria.
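The two-dimensional scheme can be sketched as a simple data structure. The category names below come from Anderson et al. (2001); the function and field names are illustrative assumptions, not part of our actual labeling tool:

```python
# Illustrative sketch of the two-dimensional labeling scheme.
# Category names follow Anderson et al. (2001); the layout is hypothetical.
KNOWLEDGE_DIMENSION = ["factual", "conceptual", "procedural", "metacognitive"]
COGNITIVE_DIMENSION = ["remember", "understand", "apply",
                       "analyze", "evaluate", "create"]

def label_exercise(knowledge: str, cognitive: str) -> dict:
    """Attach a (knowledge, cognitive process) label pair to one exercise,
    rejecting categories outside the taxonomy."""
    if knowledge not in KNOWLEDGE_DIMENSION:
        raise ValueError(f"unknown knowledge category: {knowledge}")
    if cognitive not in COGNITIVE_DIMENSION:
        raise ValueError(f"unknown cognitive category: {cognitive}")
    return {"knowledge": knowledge, "cognitive": cognitive}
```

Each exercise thus receives exactly one cell in the 4 × 6 taxonomy grid, which is what makes the later distribution statistics possible.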
3.2 Data
The original exercise data for this study were obtained from XuetangX (a MOOC platform established by Tsinghua University in 2013) and comprise 9 Chinese courses in science and engineering as well as the humanities, with 1,441 Chinese exercises (including text, options, question types, etc.).
To ensure the reliability of the labeling, we invited undergraduate, master's, and doctoral students from Tsinghua University with backgrounds in the relevant disciplines to label the exercises. To improve the accuracy of the manual labeling process, we used a back-to-back multi-annotator method (two people independently label the same exercise) and carried out a consistency test on the dataset.
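A consistency test for back-to-back double annotation is commonly computed as Cohen's kappa, which corrects raw agreement for agreement expected by chance. The pure-Python sketch below is illustrative of that standard measure and does not reproduce our exact procedure:

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labeling the same exercises.

    labels_a / labels_b: equal-length sequences of category labels,
    one entry per exercise, one sequence per annotator.
    """
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of exercises labeled identically.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: product of each annotator's marginal frequencies.
    ca, cb = Counter(labels_a), Counter(labels_b)
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1 - expected)
```

Kappa is 1.0 for perfect agreement and 0.0 when agreement is no better than chance; disagreements between the two annotators would then be resolved before the exercise enters the final dataset.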
4 Initial Findings
In our study, we conducted descriptive statistics and a heterogeneity analysis on the labeled exercise dataset. The initial findings offer some insights into our research questions.
Overall, the distribution of online education exercises in the knowledge dimension is dominated by conceptual knowledge (60%), and no exercises are designed for metacognitive knowledge. The distribution in the cognitive process dimension is concentrated in lower-order cognitive skills (remember, understand, and apply account for over 70% in total). These results suggest that current online education exercises are insufficient to test learners' metacognitive knowledge and higher-order cognitive abilities. The heterogeneity analysis revealed that the distribution of exercises in the knowledge and cognitive process dimensions differs significantly across disciplines. In the knowledge dimension, the proportion of exercises involving procedural knowledge is much higher in science and engineering than in the humanities and social sciences, while the pattern for factual knowledge is reversed; in the cognitive process dimension, science and engineering exercises focus on applying and analyzing knowledge, while humanities and social sciences exercises focus on remembering and understanding it. This suggests that, to ensure the evaluation of learners' learning states is comprehensive and accurate in both dimensions, online education exercises should be designed according to disciplinary characteristics, and our study provides some evidence for this as a reference.
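A heterogeneity analysis of label distributions across disciplines is typically carried out with a Pearson chi-square test over a discipline-by-category contingency table. The sketch below computes the chi-square statistic in pure Python; the counts in the test are invented for illustration and are not our actual data:

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for a contingency table.

    table: list of rows (e.g. one row per discipline group), each row a
    list of exercise counts per taxonomy category. Larger values indicate
    stronger deviation from identical category distributions across rows.
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under the homogeneity hypothesis.
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat
```

The statistic would then be compared against the chi-square distribution with (rows − 1) × (columns − 1) degrees of freedom to judge significance; in practice a library routine such as `scipy.stats.chi2_contingency` performs both steps.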