Comparison of Different Unidimensional-CAT Algorithms Measuring Students' Language Abilities: Post-hoc Simulation Study

dc.contributor.author: Ozdemir, Burhanettin
dc.date.accessioned: 2024-12-24T19:24:00Z
dc.date.available: 2024-12-24T19:24:00Z
dc.date.issued: 2016
dc.department: Siirt Üniversitesi
dc.description: ICEEPSY 2016: 7th International Conference on Education and Educational Psychology -- OCT 10-15, 2016 -- Rhodes, GREECE
dc.description.abstract: The purpose of this study is to examine the applicability of Computerized Adaptive Testing (CAT) for English Proficiency Tests (EPT) and to determine the most suitable unidimensional CAT algorithm for measuring the language ability of university students. In addition, the results of the CAT designs were compared to those of the original paper-and-pencil format of the EPT. For this purpose, a real data set was used to create the item pool. To determine the best CAT algorithm for the EPT, three different theta estimation methods, three Fisher-information-based item selection methods, four Kullback-Leibler-divergence-based item selection methods, and three termination methods were used. In total, 63 different conditions were considered, and their results were compared with respect to the SEM, the average number of administered items, reliability coefficients, and RMSD values between full-bank theta and estimated CAT theta. Results indicated that the choice of theta estimation method, item selection method, and termination rule had a substantial effect on the SEM of the estimated theta, the average number of administered items, and the RMSD values. The average number of administered items decreased to fewer than 11 when the precision criterion for terminating the analysis was set to .30. Overall, the EAP estimation method with Fixed Pointwise Kullback-Leibler (FP-KL) item selection and a precision-based stopping rule (0.20) yielded more consistent results with smaller RMSD and SEM. Results indicated that the post-hoc CAT simulation for the EPT provided ability estimates with higher reliability and fewer items than the corresponding paper-and-pencil format. (C) 2016 Published by Future Academy www.FutureAcademy.org.uk
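The CAT design described in the abstract combines item selection, theta estimation, and a precision-based stopping rule. A minimal sketch of one such loop is given below, assuming a simulated 2PL item pool, maximum Fisher-information item selection, EAP estimation on a quadrature grid, and termination when the SEM drops below 0.30. The item pool, parameter values, and all function names here are illustrative assumptions, not the EPT item pool or the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical 2PL item pool: discrimination a, difficulty b (illustrative only).
n_items = 200
a = rng.uniform(0.8, 2.0, n_items)
b = rng.normal(0.0, 1.0, n_items)

grid = np.linspace(-4, 4, 81)      # quadrature points for EAP
prior = np.exp(-0.5 * grid**2)     # standard-normal prior (unnormalised)

def p_correct(theta, j):
    """2PL probability of a correct response to item j at ability theta."""
    return 1.0 / (1.0 + np.exp(-a[j] * (theta - b[j])))

def eap(responses, items):
    """EAP ability estimate and posterior SD (the SEM) over the grid."""
    post = prior.copy()
    for u, j in zip(responses, items):
        p = p_correct(grid, j)
        post *= p**u * (1.0 - p)**(1 - u)
    post /= post.sum()
    theta = np.sum(grid * post)
    sem = np.sqrt(np.sum((grid - theta)**2 * post))
    return theta, sem

def fisher_info(theta, j):
    """Fisher information of 2PL item j at theta."""
    p = p_correct(theta, j)
    return a[j]**2 * p * (1.0 - p)

def run_cat(true_theta, sem_target=0.30, max_items=50):
    """One simulated examinee: maximum-information selection, EAP scoring,
    precision-based stopping (SEM <= sem_target)."""
    used, resp = [], []
    theta, sem = 0.0, np.inf
    while sem > sem_target and len(used) < max_items:
        info = np.array([fisher_info(theta, j) if j not in used else -1.0
                         for j in range(n_items)])
        j = int(np.argmax(info))                       # most informative unused item
        u = int(rng.random() < p_correct(true_theta, j))  # simulated response
        used.append(j)
        resp.append(u)
        theta, sem = eap(resp, used)
    return theta, sem, len(used)

theta_hat, sem, n_used = run_cat(true_theta=0.5)
print(theta_hat, sem, n_used)
```

Tightening the SEM target (e.g. from 0.30 to 0.20, as in the study's preferred condition) makes the loop administer more items in exchange for a more precise estimate, which is the trade-off the abstract reports.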
dc.identifier.doi: 10.15405/epsbs.2016.11.42
dc.identifier.endpage: 416
dc.identifier.issn: 2357-1330
dc.identifier.startpage: 403
dc.identifier.uri: https://doi.org/10.15405/epsbs.2016.11.42
dc.identifier.uri: https://hdl.handle.net/20.500.12604/5809
dc.identifier.volume: 16
dc.identifier.wos: WOS:000390872300042
dc.identifier.wosquality: N/A
dc.indekslendigikaynak: Web of Science
dc.language.iso: en
dc.publisher: Future Acad
dc.relation.ispartof: ICEEPSY 2016 - 7th International Conference on Education and Educational Psychology
dc.relation.publicationcategory: Conference Item - International - Institutional Faculty Member
dc.rights: info:eu-repo/semantics/openAccess
dc.snmz: KA_20241222
dc.subject: Computerized Adaptive Testing
dc.subject: Language Testing
dc.title: Comparison of Different Unidimensional-CAT Algorithms Measuring Students' Language Abilities: Post-hoc Simulation Study
dc.type: Conference Object
