# Data Analysis

| | | Gold Standard | | |
|---|---|---|---|---|
| | | **Positive** | **Negative** | **Total** |
| **SussStat** | Positive | 90 | 40 | 130 |
| | Negative | 10 | 860 | 870 |
| | Total | 100 | 900 | 1000 |

You are now ready to compute some statistics that will tell you how well SussStat performs compared to the gold standard of clinical diagnosis. Sensitivity and specificity are commonly used measures of the validity of a screening test (Aschengrau & Seage, pp. 421-422). Validity is the ability of a test to correctly categorize persons into their true disease status.

The measures of positive predictive value (PPV) and negative predictive value (NPV) describe how well a positive screening test result predicts presence or absence of a disease in a particular population. The PPV and NPV are measures of a screening program's feasibility (Aschengrau & Seage, p. 423).

2. Calculate the sensitivity, specificity, PPV, and NPV.

a. Sensitivity = 0.90

b. Specificity = 0.96

c. PPV = 0.69

d. NPV = 0.988
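These four measures follow directly from the 2x2 table; a minimal Python sketch of the arithmetic, with the cell counts taken from the table above:

```python
# Cell counts from the SussStat vs. clinical gold-standard 2x2 table
tp, fp = 90, 40    # SussStat positive row: true positives, false positives
fn, tn = 10, 860   # SussStat negative row: false negatives, true negatives

sensitivity = tp / (tp + fn)   # 90 / 100: positives among true cases
specificity = tn / (tn + fp)   # 860 / 900: negatives among true non-cases
ppv = tp / (tp + fp)           # 90 / 130: true cases among test positives
npv = tn / (tn + fn)           # 860 / 870: non-cases among test negatives

print(f"sensitivity = {sensitivity:.2f}")   # 0.90
print(f"specificity = {specificity:.2f}")   # 0.96
print(f"PPV = {ppv:.2f}")                   # 0.69
print(f"NPV = {npv:.4f}")                   # 0.9885
```

Note that sensitivity and specificity use the column totals (true disease status), while PPV and NPV use the row totals (test results).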

3. Which is the best interpretation of the sensitivity of SussStat?

- The probability that SussStat correctly categorized an individual as not having Susser Syndrome.
- Of those who tested positive on the SussStat test, the percent of persons that developed Susser Syndrome.
- The probability that SussStat correctly categorized an individual as having Susser Syndrome.

4. Which is the best interpretation of the specificity of SussStat?

- The probability of obtaining a false negative if you truly do not have Susser Syndrome.
- Of those who never develop Susser Syndrome, the percent that tested negative on the SussStat assay.
- The probability that SussStat correctly categorized an individual as having Susser Syndrome.

5. How could you interpret the positive predictive value (PPV)?

- Of those persons who developed Susser Syndrome, the percent that tested positive on the SussStat test.
- The probability of not developing Susser Syndrome given a negative SussStat test result.
- Of those who tested positive, the percent that develop Susser Syndrome.

You are concerned that there are too many false positives when the manufacturer-suggested cutpoint of DNA adducts is used to define pre-clinical SS, and you want to see if you can reduce their number by changing the cutpoint, or criterion of positivity, of the SussStat test. The interactive exercise demonstrates how raising or lowering the cutpoint changes the measures you calculated for SussStat.


6. What is a consequence if SussStat has a low sensitivity?

- a. You will miss the opportunity to correctly diagnose and treat people with Susser Syndrome.
- b. You will treat too many people who don't actually have the disease, which is costly and stressful to the subjects and also puts people at risk from possible side effects.
- c. Both a and b.

You are happy with the screening test's ability to identify Susser Syndrome, but you are now considering which groups in Epiville your screening program should target.

7. Using the sensitivity and specificity measures you calculated in Question 2 above, calculate what the PPV would be if you screened the total population of Epiville, where you estimate the prevalence of SS to be only 1%. (Hint: draw a new 2x2 table and write the new column totals for the clinician gold standard using the new "true" population prevalence of 1%, given a total N = 1000. Then use the sensitivity and specificity proportions you calculated previously to fill in the rest of the 2x2 table.) NOTE: USE ONLY THE SENSITIVITY AND SPECIFICITY VALUES ROUNDED TO 2 DECIMAL PLACES THAT APPEAR IN THE ANSWER TO QUESTION 2.

Answer = 0.18
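The same arithmetic can be sketched in Python, using the rounded sensitivity (0.90) and specificity (0.96) from Question 2 and the hypothetical N = 1000 from the hint:

```python
# PPV at 1% prevalence, holding sensitivity and specificity fixed
n = 1000
prevalence = 0.01
sensitivity, specificity = 0.90, 0.96

diseased = n * prevalence             # 10 true cases (new column total)
non_diseased = n - diseased           # 990 true non-cases
tp = sensitivity * diseased           # 9 true positives
fp = (1 - specificity) * non_diseased # 39.6 false positives

ppv = tp / (tp + fp)                  # 9 / 48.6 ≈ 0.185, i.e. about 0.18
print(f"PPV at 1% prevalence = {ppv:.3f}")
```

Compare this to the PPV of 0.69 at the original prevalence of 10% (100/1000): the same test, applied to a lower-prevalence population, yields far more false positives per true positive.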

You make note of the fact that as the prevalence of the disease decreases, the PPV of the screening test decreases. You recommend to the EDOH that a screening program be introduced among workers of the Glop Industries since they have a higher prevalence of the disease, and therefore SussStat will be most effective in that group because it will detect a larger proportion of actual cases among individuals with positive results (Aschengrau & Seage, p. 424).

The measures you calculated above describe the validity of the SussStat test. In contrast, reliability is the ability of a test to give the same result on repeated testing, i.e., consistency (Aschengrau & Seage, p. 419). Reliability can also be computed to describe the extent to which two tests agree with each other. A common measure of reliability is the kappa statistic.
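As a sketch of how the kappa statistic works, here is Cohen's kappa computed for two tests applied to the same subjects. The counts below are illustrative only, not from the exercise: kappa compares the observed agreement to the agreement expected by chance from the marginal totals.

```python
# Cohen's kappa for agreement between two tests (hypothetical counts)
a, b = 20, 10   # test1+/test2+, test1+/test2-
c, d = 10, 60   # test1-/test2+, test1-/test2-
n = a + b + c + d

p_observed = (a + d) / n  # proportion of subjects on which the tests agree
# Chance agreement from the marginal totals of each test
p_expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2

kappa = (p_observed - p_expected) / (1 - p_expected)
print(f"kappa = {kappa:.2f}")   # 0.52: moderate agreement beyond chance
```

A kappa of 1 indicates perfect agreement, 0 indicates agreement no better than chance, and negative values indicate agreement worse than chance.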