
A Method Of Agreement Evaluation For Binary Data Based On AC1

Posted on: 2019-02-18
Degree: Master
Type: Thesis
Country: China
Candidate: J W Zhang
Full Text: PDF
GTID: 2370330548489002
Subject: Epidemiology and Health Statistics
Abstract/Summary:
Background: Assessing agreement is important in medical research. Inter-rater reliability quantifies the agreement among different raters, or different measurement methods, applied to the same subjects; intra-rater reliability quantifies the agreement of the same rater, or the same measurement method, applied to the same subjects on repeated occasions. Since Scott proposed the π coefficient in 1955, many agreement coefficients have been proposed: Cohen (1960) and later Fleiss put forward the kappa coefficient, Holley and Guilford (1964) proposed the G coefficient, Gwet (2008) proposed the first-order agreement coefficient (AC1), and Isabella Locatelli (2016) proposed the intraclass odds ratio. However, all of them have limitations, the best known being the kappa paradoxes (a numerical illustration is given after this abstract).

Objective: To overcome the above limitations and achieve higher accuracy and reliability, we propose an alternative statistic for inter-rater reliability, referred to as the coefficient of evaluating agreement (CEA). CEA adjusts for chance agreement and does not depend on the marginal distribution of the trait prevalence, providing another choice for assessing agreement.

Methods: Building on the theory of AC1, we explored the relationships among the sensitivity, specificity, and trait prevalence of different measurement methods (different raters) to estimate the population trait prevalence and the chance agreement, and we used the relationships among the various probabilities and their ranges of values to estimate the overall agreement, examining how the trait prevalence and the chance agreement influence it. To verify the accuracy and stability of CEA, the simulation work had three parts. First, for different trait prevalences and different probabilities that the raters rate at random, we simulated the bias of CEA relative to the true agreement T. Second, Monte Carlo simulations of the bias and variance of the kappa, AC1, and CEA coefficients relative to the true agreement were performed (a simplified sketch of this comparison also follows the abstract), and real-data examples were then analyzed. Finally, the relationship of the kappa, AC1, and CEA coefficients with sensitivity, specificity, and trait prevalence was simulated under combinations of different sensitivities and specificities.

Results: The first part of the simulation shows that, as the trait prevalence moves from 0.5 toward either endpoint, the bias of CEA relative to the true agreement T gradually decreases under the different combinations of ra and rb, and this bias remains below 0.1 throughout. The second part shows that, as the trait prevalence approaches 1, the bias of kappa relative to the true agreement T increases gradually, while the bias of the AC1 and CEA coefficients decreases gradually; under all simulation conditions, the accuracy of CEA is higher than that of kappa and AC1. The third part shows that kappa is symmetric about a trait prevalence of 0.5, whereas AC1 and CEA are not affected by the trait prevalence; when the raters' sensitivity and specificity are equal, the CEA coefficient is more consistent with the true agreement T under the combinations of different sensitivities and specificities.

Conclusion: Based on the simulation results and the analysis of real data, the proposed CEA is a reliable method for evaluating agreement, and it provides an additional choice for the evaluation of inter-rater consistency.
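The kappa paradox mentioned in the Background can be made concrete with a small numerical sketch. The Python snippet below is not taken from the thesis (the CEA formula is not reproduced in this abstract); it computes Cohen's kappa and Gwet's AC1 for two raters on binary data from their standard published formulas: kappa corrects the observed agreement by the product of the raters' marginal probabilities, while AC1 corrects it by 2π(1-π), where π is the mean marginal prevalence of the positive category.

```python
# Cohen's kappa and Gwet's AC1 for two raters on binary (2x2) data.
# n_ij = number of subjects rater A put in category i and rater B in
# category j (1 = positive, 0 = negative).

def kappa_and_ac1(n11, n10, n01, n00):
    n = n11 + n10 + n01 + n00
    po = (n11 + n00) / n                 # observed agreement
    pa1 = (n11 + n10) / n                # rater A's marginal P(positive)
    pb1 = (n11 + n01) / n                # rater B's marginal P(positive)
    # Cohen's kappa: chance agreement from the product of the marginals
    pe_kappa = pa1 * pb1 + (1 - pa1) * (1 - pb1)
    kappa = (po - pe_kappa) / (1 - pe_kappa)
    # Gwet's AC1: chance agreement 2*pi*(1-pi) from the mean prevalence
    pi = (pa1 + pb1) / 2
    pe_ac1 = 2 * pi * (1 - pi)
    ac1 = (po - pe_ac1) / (1 - pe_ac1)
    return kappa, ac1

# A classic "kappa paradox" table: the raters agree on 90 of 100
# subjects, but the trait is rare, so kappa collapses while AC1 stays high.
kappa, ac1 = kappa_and_ac1(n11=2, n10=5, n01=5, n00=88)
print(f"observed agreement = {(2 + 88) / 100:.2f}")
print(f"kappa = {kappa:.3f}, AC1 = {ac1:.3f}")
```

Running the snippet gives an observed agreement of 0.90 but kappa ≈ 0.23, while AC1 ≈ 0.89; this divergence at extreme prevalence is the paradox the abstract refers to.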
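The second simulation described in the Methods compares the bias of kappa, AC1, and CEA against a true agreement T. The exact design and the CEA formula are given only in the full thesis, so the sketch below is an assumed, simplified reconstruction of the kappa/AC1 part only: two conditionally independent raters with a common sensitivity and specificity rate Bernoulli(prevalence) subjects, and T is taken, purely as a placeholder (the abstract does not define it), to be the probability that a single rater classifies a subject correctly. It reuses kappa_and_ac1 from the previous sketch.

```python
import random

def simulate_bias(prev, se, sp, n_subjects=100, n_reps=2000, seed=1):
    """Monte-Carlo bias of kappa and AC1 against an assumed true agreement T.

    Each subject's true status is Bernoulli(prev); two raters classify it
    independently with sensitivity `se` and specificity `sp`.  T is taken
    here, as a placeholder only, to be p*Se + (1-p)*Sp, the probability
    that one rater classifies a subject correctly.
    """
    rng = random.Random(seed)
    t_true = prev * se + (1 - prev) * sp
    bias_k = bias_a = 0.0
    for _ in range(n_reps):
        n11 = n10 = n01 = n00 = 0
        for _ in range(n_subjects):
            d = rng.random() < prev                    # true trait status
            ra = rng.random() < (se if d else 1 - sp)  # rater A's call
            rb = rng.random() < (se if d else 1 - sp)  # rater B's call
            if ra and rb:   n11 += 1
            elif ra:        n10 += 1
            elif rb:        n01 += 1
            else:           n00 += 1
        k, a = kappa_and_ac1(n11, n10, n01, n00)
        bias_k += (k - t_true) / n_reps
        bias_a += (a - t_true) / n_reps
    return bias_k, bias_a

for prev in (0.5, 0.7, 0.9, 0.95):
    bk, ba = simulate_bias(prev, se=0.9, sp=0.9)
    print(f"prevalence {prev:.2f}: kappa bias {bk:+.3f}, AC1 bias {ba:+.3f}")
```

Under these assumed settings the kappa bias grows in magnitude as the prevalence approaches 1 while the AC1 bias shrinks, which matches the qualitative pattern reported for kappa and AC1 in the Results.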
Keywords/Search Tags: agreement, Kappa coefficient, AC1 coefficient, categorical variable