Examining the Association between Estimated Prevalence and Diagnostic Test Accuracy using Directed Acyclic Graphs
By: Yang Lu, Robert Platt, Nandini Dendukuri
Potential Business Impact:
Improves how the accuracy of diagnostic tests is judged when results are combined across studies.
There have been reports of correlation between estimates of prevalence and test accuracy across studies included in diagnostic meta-analyses. It has been hypothesized that this unexpected association arises because of certain biases commonly found in diagnostic accuracy studies, but a theoretical explanation has not been investigated systematically. In this work, we introduce directed acyclic graphs to illustrate common structures of bias in diagnostic test accuracy studies and to define the resulting data-generating mechanism behind a diagnostic meta-analysis. Using simulation studies, we examine how these common biases can produce a correlation between estimates of prevalence and index test accuracy, and what factors influence its magnitude and direction. We found that an association arises either in the absence of a perfect reference test or in the presence of a covariate that both induces a spectrum effect and is associated with prevalence (i.e., a confounder). We also show that the association between prevalence and accuracy can be removed by appropriate statistical methods. When evaluating risk of bias in diagnostic meta-analyses, an observed association between estimates of prevalence and accuracy should be explored to understand its source and, where possible, adjusted for through latent or observed variables.
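As an illustrative sketch (not the authors' simulation code), the Python snippet below demonstrates one mechanism described in the abstract: when an imperfect reference standard is treated as if it were a gold standard, naive estimates of prevalence and index-test accuracy become correlated across studies whose true prevalence varies. All numerical values (test sensitivities and specificities, the prevalence range, study sizes) are hypothetical assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical (assumed) test properties.
SENS_INDEX, SPEC_INDEX = 0.90, 0.95   # true accuracy of the index test
SENS_REF,   SPEC_REF   = 0.85, 0.90   # imperfect reference standard

n_studies = 50
n_per_study = 1000

est_prev, app_sens, app_spec = [], [], []

for _ in range(n_studies):
    true_prev = rng.uniform(0.05, 0.60)          # prevalence differs across studies
    disease = rng.random(n_per_study) < true_prev

    # Index test and reference are conditionally independent given disease status.
    index_pos = np.where(disease,
                         rng.random(n_per_study) < SENS_INDEX,
                         rng.random(n_per_study) < 1 - SPEC_INDEX)
    ref_pos = np.where(disease,
                       rng.random(n_per_study) < SENS_REF,
                       rng.random(n_per_study) < 1 - SPEC_REF)

    # Naive estimates that treat the imperfect reference as the truth.
    est_prev.append(ref_pos.mean())                  # apparent prevalence
    app_sens.append(index_pos[ref_pos].mean())       # apparent sensitivity
    app_spec.append((~index_pos[~ref_pos]).mean())   # apparent specificity

print("corr(prevalence, apparent sensitivity):",
      round(np.corrcoef(est_prev, app_sens)[0, 1], 2))
print("corr(prevalence, apparent specificity):",
      round(np.corrcoef(est_prev, app_spec)[0, 1], 2))
```

Under these assumed values, apparent sensitivity tends to rise and apparent specificity to fall as the estimated prevalence increases, which mirrors the kind of cross-study association the paper investigates; the exact correlations depend on the parameter choices above.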
Similar Papers
Correcting for partial verification bias in diagnostic accuracy studies: A tutorial using R
Applications
Corrects accuracy estimates when only some patients receive the reference test.
Selecting valid adjustment sets with uncertain causal graphs
Statistics Theory
Identifies which variables to adjust for when the causal graph is uncertain.
Prevalence estimation in infectious diseases with imperfect tests: A comparison of Frequentist and Bayesian Logistic Regression methods with misclassification correction
Methodology
Corrects disease prevalence estimates obtained from imperfect tests.