Please use this identifier to cite or link to this item: http://earsiv.odu.edu.tr:8080/xmlui/handle/11489/3486
Full metadata record
DC Field | Value | Language
dc.contributor.author | Akin-Arikan, Cigdem | -
dc.contributor.author | Gelbal, Selahattin | -
dc.date.accessioned | 2023-01-06T11:10:21Z | -
dc.date.available | 2023-01-06T11:10:21Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | Akin-Arikan, C., & Gelbal, S. (2021). A Comparison of Kernel Equating and Item Response Theory Equating Methods. Eurasian Journal of Educational Research, (93), 179-198. doi:10.14689/ejer.2021.93.9 | en_US
dc.identifier.isbn | 1302-597X | -
dc.identifier.isbn | 2528-8911 | -
dc.identifier.uri | http://dx.doi.org/10.14689/ejer.2021.93.9 | -
dc.identifier.uri | https://www.webofscience.com/wos/woscc/full-record/WOS:000658924500009 | -
dc.identifier.uri | http://earsiv.odu.edu.tr:8080/xmlui/handle/11489/3486 | -
dc.description | WoS Categories: Education & Educational Research; Web of Science Index: Emerging Sources Citation Index (ESCI); Research Areas: Education & Educational Research; Open Access Designations: gold | en_US
dc.description.abstract | Purpose: This study aims to compare the performance of Item Response Theory (IRT) equating and kernel equating (KE) methods based on equating error (RMSD) and the standard error of equating (SEE) under the anchor-item nonequivalent groups design. Method: Within this scope, a set of conditions, including ability distribution, type of anchor items (internal/external), ratio of anchor items, and spread of anchor item difficulty, was manipulated, yielding 24 simulation conditions. Findings: The results showed that ability distribution, type of anchor items, ratio of anchor items, and spread of anchor item difficulty all affected the performance of the equating methods. Kernel chained equating (KE CE) was less affected by differences in group mean ability. Moreover, as the average ability difference between groups increased, the high range of the score scale yielded higher standard errors for the KE methods, while the medium-to-high range of the score scale yielded higher standard errors for IRT equating. Using external anchor items led to lower SEE and RMSD than using internal anchor items, and both errors decreased as the ratio of anchor items increased. When internal anchor items were used and the groups had similar average ability, mini and midi anchor tests gave similar results; with external anchor items, the midi anchor test performed better as the average ability difference between groups increased. At the extremes of the score scale, IRT equating produced lower errors. Implications for Research and Practice: KE methods can be used when IRT assumptions are not met. (C) 2021 Ani Publishing Ltd. All rights reserved. (A conventional formulation of the RMSD and SEE criteria is sketched after this record.) | en_US
dc.language.iso | eng | en_US
dc.publisher | ANI YAYINCILIK BAKANLIKLAR | en_US
dc.relation.isversionof | 10.14689/ejer.2021.93.9 | en_US
dc.rights | info:eu-repo/semantics/openAccess | en_US
dc.subject | LINKING; TESTS | en_US
dc.subject | Equating; kernel; IRT; error | en_US
dc.title | A Comparison of Kernel Equating and Item Response Theory Equating Methods | en_US
dc.type | article | en_US
dc.relation.journal | EURASIAN JOURNAL OF EDUCATIONAL RESEARCH | en_US
dc.contributor.department | Ordu Üniversitesi | en_US
dc.identifier.issue | 93 | en_US
dc.identifier.startpage | 179 | en_US
dc.identifier.endpage | 198 | en_US
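
For orientation, the two evaluation criteria named in the abstract (RMSD and SEE) are conventionally defined as follows in equating simulation studies. This is a common formulation assumed here for reference, not necessarily the exact expressions used in the article: over R replications, let \hat{e}_{Y,r}(x_i) be the equated score at raw score point x_i in replication r, \bar{e}_Y(x_i) the mean of those estimates across replications, and e_Y^{\mathrm{crit}}(x_i) the criterion (population) equating function.

\[
\mathrm{SEE}(x_i) = \sqrt{\frac{1}{R}\sum_{r=1}^{R}\left(\hat{e}_{Y,r}(x_i) - \bar{e}_Y(x_i)\right)^2},
\qquad
\mathrm{RMSD}(x_i) = \sqrt{\frac{1}{R}\sum_{r=1}^{R}\left(\hat{e}_{Y,r}(x_i) - e_Y^{\mathrm{crit}}(x_i)\right)^2}.
\]

Under this formulation, SEE captures only the random (sampling) error of the equating at each score point, while RMSD also absorbs systematic deviation from the criterion equating; lower values of both indicate more accurate equating.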
Appears in Collections: Eğitim Bilimleri

Files in This Item:
There are no files associated with this item.

