For decades, self-report measures based on questionnaires have been widely used in educational research to study implicit and complex constructs such as motivation, emotion, and cognitive and metacognitive learning strategies. However, the existence of potential biases in such self-report instruments might cast doubt on the validity of the measured constructs. The emergence of trace data from digital learning environments has sparked a controversial debate on how we measure learning. On the one hand, trace data might be perceived as "objective" measures that are independent of any biases. On the other hand, there is mixed evidence of how trace data are compatible with existing learning constructs, which have traditionally been measured with self-reports. This study investigates the strengths and weaknesses of different types of data when designing predictive models of academic performance based on computer-generated trace data and survey data. We investigate two types of bias in self-report surveys: response styles (i.e., a tendency to use the rating scale in a certain systematic way that is unrelated to the content of the items) and overconfidence (i.e., the difference between performance predicted from survey responses and performance on a prior knowledge test). We found that response style bias accounts for a modest to substantial amount of variation in the outcomes of the several self-report instruments, as well as in the course performance data. The effect of overconfidence bias is limited. It is only the trace data, notably those of process type, that stand out as independent of these response style patterns. Given that empirical models in education typically aim to explain the outcomes of learning processes or the relationships between antecedents of these learning outcomes, our analyses suggest that the bias present in surveys adds predictive power in the explanation of performance data and other questionnaire data.

Citation: Tempelaar D, Rienties B, Nguyen Q (2020) Subjective data, objective data and the role of bias in predictive modelling: Lessons from a dispositional learning analytics application. PLoS ONE 15(6):

Editor: Vitomir Kovanovic, University of South Australia, AUSTRALIA

Received: February ; Accepted: ; Published: June 12, 2020

Copyright: © 2020 Tempelaar et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: The data, the MPlus and SPSS codes, and the main components of the output are archived in DANS, the Data Archiving and Networked Services of the NWO, the Dutch organization for scientific research. The final version of this archive, labelled Tempelaar, D, 2020, "Replication Data for PlosOne 2020 manuscript Tempelaar ea", has received a unique handle in DataverseNL.

Funding: The author(s) received no specific funding for this work.

Competing interests: The authors have declared that no competing interests exist.

SBBG, or the 'Snapshot, Bookend, Between-Groups' paradigm, is the unflattering description by Winne and Nesbit of the current state of affairs in building and estimating educational models.