The study’s population comprised the responses of all 12,784 students who took the 2013 BGCSE Agriculture examination. A descriptive survey design was employed with a representative sample of 530 students. The students’ responses were analyzed using factor analysis and item response theory (IRT) models (1PL, 2PL, and 3PL) to estimate the psychometric parameters of the forty test items, together with a dimensionality analysis and a chi-square test of fit for each item under the three IRT models.
The items were analyzed with the specialized software packages XCALIBRE 4 and ITEMAN 4. The results showed that the test was not unidimensional. Both methods flagged 13 items (32.5%) as problematic and 27 items (67.5%) as good, and none of the 40 items fit the 1PL model.
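The unidimensionality screen can be illustrated with a short sketch. The study used factor analysis; the version below is a simpler, common heuristic (the ratio of the first to the second eigenvalue of the inter-item correlation matrix) applied to a simulated 0/1 response matrix, and it uses Pearson rather than the tetrachoric correlations usually preferred for dichotomous items. All data, names, and thresholds here are illustrative assumptions, not the study’s.

```python
import numpy as np

def unidimensionality_check(responses):
    """Eigenvalue screen for unidimensionality.

    responses: (n_examinees, n_items) matrix of 0/1 scores.
    Returns the eigenvalues of the inter-item correlation matrix in
    descending order and the ratio of the first to the second; a large
    ratio (rules of thumb of 3-4 are common) is often read as
    "essentially unidimensional". Pearson correlations are used here
    instead of tetrachoric correlations, a simplification for 0/1 data.
    """
    corr = np.corrcoef(responses, rowvar=False)  # item-by-item correlations
    eigvals = np.linalg.eigvalsh(corr)[::-1]     # eigvalsh returns ascending
    return eigvals, eigvals[0] / eigvals[1]

# Illustrative run on simulated 1PL data (hypothetical, not the study's data)
rng = np.random.default_rng(0)
theta = rng.normal(size=(530, 1))                # latent abilities
b = rng.normal(size=(1, 40))                     # item difficulties
p = 1.0 / (1.0 + np.exp(-(theta - b)))           # 1PL success probabilities
X = (rng.random((530, 40)) < p).astype(int)
eigvals, ratio = unidimensionality_check(X)
print(f"first/second eigenvalue ratio: {ratio:.2f}")
```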
In addition, strong correlations were found between the item statistics derived from the CTT and IRT frameworks (r = -0.985 and r = 0.801). One item fit the 2PL model and eight items fit the 3PL model. The decline in students’ performance in the Botswana General Certificate of Secondary Education (BGCSE) examinations is a troubling trend that has drawn the attention of parents, educators, policy makers, and the government, and it motivated this study’s evaluation of the 2013 BGCSE Agriculture examination, the tool used to assess national standards, with respect to its dimensionality. Published in the Global Journal of Educational Research.
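A minimal sketch of the CTT side of such a comparison: item difficulty as the proportion correct and discrimination as the corrected point-biserial, correlated against IRT difficulty (b) values. In practice the IRT estimates would come from calibration software such as XCALIBRE or BILOG-MG; here simulated data and the true generating b values stand in for them, so the values below are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import pearsonr

def ctt_item_stats(X):
    """CTT statistics from a 0/1 response matrix X (persons x items):
    difficulty is the proportion correct; discrimination is the corrected
    point-biserial (item score vs. total score with that item removed)."""
    difficulty = X.mean(axis=0)
    total = X.sum(axis=1)
    discrimination = np.array(
        [np.corrcoef(X[:, j], total - X[:, j])[0, 1] for j in range(X.shape[1])]
    )
    return difficulty, discrimination

# Simulated 1PL responses; the true generating b values stand in for the
# IRT difficulty estimates that calibration software would produce.
rng = np.random.default_rng(1)
theta = rng.normal(size=(1000, 1))
b = np.linspace(-2.0, 2.0, 40)                   # "true" IRT difficulties
X = (rng.random((1000, 40)) < 1.0 / (1.0 + np.exp(-(theta - b)))).astype(int)

difficulty, discrimination = ctt_item_stats(X)
r, p_value = pearsonr(difficulty, b)
# r comes out strongly negative: easy items (high proportion correct)
# have low IRT difficulty, matching the direction of the reported r = -0.985.
print(f"r(CTT difficulty, IRT b) = {r:.3f}")
```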
Item response theory (IRT) provides a framework for modeling and analyzing item response data, and modeling at the item level gives IRT advantages over classical test theory. The fit of an item’s response pattern to an IRT model is an essential requirement that must be evaluated before a model is put to further use and in order to identify the items that best match the data. This study examined item-level diagnostic statistics and model-data fit for the one- and two-parameter logistic models using BILOG-MG V3.0 and IRTPRO V3.0.
An ex-post facto design was used. The population comprised the 11,538 candidates who took the 2014 Type L Unified Tertiary Matriculation Examination (UTME) Mathematics paper in Akwa Ibom State, Nigeria, from which 5,192 responses (45%) were selected by stratified random sampling, and the candidates’ responses were calibrated with the two software packages. Two research questions were posed to guide the study. Pearson’s χ² and the S-χ² statistic were used as indices of item fit.
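A much-simplified sketch of the Pearson-type item-fit idea behind those statistics: group examinees by ability, then compare observed and model-predicted numbers of correct responses per group. The operational S-χ² implemented in IRTPRO groups examinees and computes expected frequencies differently, so this is an assumed, stripped-down version on simulated data, not the software’s algorithm.

```python
import numpy as np

def icc_2pl(theta, a, b):
    """2PL item characteristic curve: P(correct | theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def pearson_item_fit(scores, theta, a, b, n_groups=10):
    """Simplified Pearson chi-square item fit: sort examinees into ability
    groups and compare observed vs. model-expected correct counts per group.
    Compare the result to a chi-square distribution with (n_groups - m)
    degrees of freedom, m being the number of estimated item parameters."""
    order = np.argsort(theta)
    chi2 = 0.0
    for grp in np.array_split(order, n_groups):
        n = len(grp)
        observed = scores[grp].sum()
        expected = icc_2pl(theta[grp], a, b).sum()
        chi2 += (observed - expected) ** 2 / (expected * (1.0 - expected / n))
    return chi2

# Illustrative run: data simulated from the same 2PL the fit is tested
# against, so the statistic should look unremarkable (hypothetical values).
rng = np.random.default_rng(2)
theta = rng.normal(size=5000)
a_true, b_true = 1.2, 0.3
scores = (rng.random(5000) < icc_2pl(theta, a_true, b_true)).astype(int)
print(f"chi-square = {pearson_item_fit(scores, theta, a_true, b_true):.2f}")
```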
Comparative Analyses of Classical Test Theory and Item Response Theory Based Item Parameter Estimates of the Senior School Certificate Mathematics Examination. European Scientific Journal, 2016. The study compared Classical Test Theory (CTT) and Item Response Theory (IRT) estimates of item difficulty and discrimination derived from the performance of examinees on the Senior School Certificate Examination (SSCE) in Mathematics, with the aim of providing an empirical basis for an informed decision about the usefulness of the two psychometric frameworks.
An ex-post facto design was used. A sample of 6,000 examinees was drawn from the 35,262 who sat the NECO SSCE Mathematics Paper 1 of May/June 2008 in Osun State, Nigeria; the instrument was that paper’s 60 multiple-choice items. Three sampling strategies (random, gender, and ability) were employed to examine the examinees’ scores under the CTT and IRT measurement frameworks. BILOG-MG3 was used to estimate the IRT item parameter indices, and SPSS 20 was used to examine the relationships between the CTT and IRT item parameters.
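BILOG-MG estimates item parameters by marginal maximum likelihood, integrating over the ability distribution. The sketch below is a deliberately simpler stand-in: maximum-likelihood calibration of a single 2PL item treating abilities as known, on simulated data. The sample size, starting values, and bounds are illustrative assumptions, and this is not BILOG’s algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, theta, scores):
    """Negative Bernoulli log-likelihood of one item under the 2PL,
    treating the abilities theta as known. This is a simplification:
    BILOG-MG uses marginal maximum likelihood, integrating ability out."""
    a, b = params
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    p = np.clip(p, 1e-9, 1.0 - 1e-9)             # numerical safety
    return -np.sum(scores * np.log(p) + (1 - scores) * np.log(1 - p))

# Single-item calibration on simulated data (hypothetical values)
rng = np.random.default_rng(3)
theta = rng.normal(size=6000)                    # sample size echoes the study
true_a, true_b = 1.0, -0.5
scores = (rng.random(6000) < 1.0 / (1.0 + np.exp(-true_a * (theta - true_b)))).astype(int)

fit = minimize(neg_log_lik, x0=[1.0, 0.0], args=(theta, scores),
               method="L-BFGS-B", bounds=[(0.1, 4.0), (-4.0, 4.0)])
print(f"estimated a = {fit.x[0]:.2f}, b = {fit.x[1]:.2f}")
```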
The objective of a further study was to analyze a dichotomously scored mathematics course final examination within the framework of item response theory, fitting a two-parameter logistic (2PL) model and determining examinee ability and common errors with the aid of various software packages.
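Ability determination under the 2PL can be sketched as a one-dimensional maximum-likelihood problem once the item parameters are calibrated. The item parameters and response vector below are hypothetical, and operational programs often report EAP or MAP rather than ML estimates.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def theta_mle(scores, a, b):
    """Maximum-likelihood ability estimate for one examinee under the 2PL,
    given calibrated item parameters a and b (one entry per item). The MLE
    runs to the search bound for all-correct or all-wrong response patterns,
    one reason operational programs often report EAP or MAP instead."""
    def neg_log_lik(theta):
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        p = np.clip(p, 1e-9, 1.0 - 1e-9)
        return -np.sum(scores * np.log(p) + (1 - scores) * np.log(1 - p))
    return minimize_scalar(neg_log_lik, bounds=(-4.0, 4.0), method="bounded").x

# Hypothetical calibrated item parameters and one examinee's 0/1 responses
a = np.array([0.8, 1.2, 1.0, 1.5, 0.9])
b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])
scores = np.array([1, 1, 1, 0, 0])
print(f"estimated ability: {theta_mle(scores, a, b):.2f}")
```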