Building a Better Syndromic Surveillance System: the NYC Experience

Transcription

Robert Mathes, Jessica Sell, Anthony Tam, Alison Levin-Rector, Ramona Lall, Thom
Commissioner
Bureau of Communicable Disease, New York City Department of Health and Mental Hygiene
Bureau of Epidemiology Services, New York City Department of Health and Mental Hygiene

What we did
Evaluated alternative statistical methods for event detection.

Why we did it
Since our syndromic system was developed, many new statistical methods have been proposed for event detection.

Syndromic Surveillance in NYC

System               Key field         Daily volume            Coverage                                     Geographic resolution
ED                   Chief complaint   11,000                  51/53 EDs (~98% of visits)                   Hospital, home zip
EMS                  Call-type         3,600                   FDNY (~95%)                                  Zip
Pharmacy             Drug name         34,000 OTC; 10,000 Rx   350/2,600 (~15%) OTC; 315/2,500 (~15%) Rx    Store zip
School nurse visits  Reason for visit  13,000                  1,000/1,600 (~75%)                           School location
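Each of these data streams reduces to a daily count time series per syndrome and area, which an aberration-detection algorithm then monitors for unusual spikes. As background for the detection methods discussed in this talk, here is a minimal sketch of an EARS-C2-style control-chart detector in Python; all counts are synthetic, the 7-day baseline / 2-day lag / threshold-of-3 parameters follow the common C2 convention, and this is not the NYC DOHMH implementation.

```python
# Minimal sketch of an EARS-C2-style aberration detector on a daily
# syndrome count series (synthetic data, not the NYC system).
# C2 compares today's count with the mean/SD of a 7-day baseline that
# ends 2 days before today, and alerts when the standardized excess
# exceeds 3.
from statistics import mean, stdev

def c2_alerts(counts, baseline=7, lag=2, threshold=3.0):
    """Return indices of days whose C2 statistic exceeds the threshold."""
    alerts = []
    for t in range(baseline + lag, len(counts)):
        window = counts[t - lag - baseline:t - lag]
        mu, sd = mean(window), stdev(window)
        c2 = (counts[t] - mu) / max(sd, 1.0)  # guard against zero SD
        if c2 > threshold:
            alerts.append(t)
    return alerts

# Flat baseline around 140-150 visits/day with a one-day spike on day 20.
series = [145, 150, 140, 148, 152, 143, 147, 149, 144, 146,
          151, 142, 148, 145, 150, 143, 147, 146, 149, 144,
          460, 145, 148]
print(c2_alerts(series))  # → [20]
```

The 2-day lag keeps the most recent days, which may themselves be part of an emerging outbreak, out of the baseline used to estimate "normal" counts.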
We focused on our ED system.

An overview of what we did
Goal: implement the method that is best at finding a localized cluster or spike of syndrome visits.
1. Review alternative methods published in the literature
2. Select candidate methods for further analysis
3. Evaluate candidate methods against each other and our current system
4. Apply the best-performing method(s)

The methods we currently use
Temporal: temporal scan statistic (SaTScan), EARS
Spatial: spatial scan statistic (SaTScan)

The methods we chose for analysis
Temporal: ARIMA, CUSUM, Holt-Winters exponential smoother, modified EARS C2, generalized linear model (GLM), temporal scan statistic (SaTScan)
Spatio-temporal: generalized linear mixed model (GLMM), Bayesian regression, space-time permutation (SaTScan), spatial scan statistic (SaTScan)

How we assigned these methods
Analysts (MPH and PhD level) volunteered for methods. Some methods we knew well (GLMM); for others we took training (Bayesian, ARIMA).

How we built these methods
We developed the best model we could for each method, using SAS, R, and WinBUGS.

How we are evaluating
Two approaches to evaluating the methods:
1. Performance-based: how well the methods detect outbreaks
2. Practice-based: the expense and ease of use of the system

Setting up the evaluation
Developed a series of evaluation datasets (n=180):
- A baseline of real syndromic data
- Spiked with simulated outbreaks (e.g., a 1-day outbreak of 315 cases)
A synthetic dataset was also created to set alert thresholds. We then ran the methods (GLMM, space-time permutation, etc.) on the evaluation datasets.

How we are evaluating
How well do methods detect outbreaks? We measured:
- Sensitivity
- Specificity
- Positive predictive value (PPV)
- Timeliness
- Coherence

Results

Results – summary of temporal methods

                 Sensitivity    Specificity    PPV             Timeliness     Coherence
                 <=5    >5      <=5    >5      <=5     >5      <=5    >5      <=5    >5
ARIMA            0.02   0.16    0.99   0.99    <0.01   0.02    0.33   0.27    0.02   0.05
GLM              0.01   0.32    0.99   0.99    <0.01   0.07    0.20   0.63    0.01   0.12
Holt-Winters     0.02   0.23    0.99   0.99    <0.01   0.04    0.33   0.52    0.02   0.10
EARS C2          0.02   0.32    0.97   0.97    <0.01   0.02    0.37   0.51    0.02   0.10
CUSUM            0.10   0.50    0.84   0.84    <0.01   0.02    0.76   0.70    0.09   0.33
Scan statistic*  0.61   0.90    0.52   0.52    <0.01   0.02    0.95   0.78    0.51   0.60

*p<0.01. Alert threshold = 0.01. Diarrhea – baseline count: 145.
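The sensitivity, specificity, and PPV figures reported above are standard confusion-matrix rates, aggregated over simulated outbreak days. As a minimal illustration of how such figures come out of an evaluation run (synthetic numbers, not the NYC evaluation code, which pooled results across 180 spiked datasets):

```python
# Sketch: computing sensitivity, specificity, and PPV for an outbreak
# detector from per-day truth and alert flags (synthetic example).

def detection_metrics(outbreak_days, alert_days, n_days):
    """Confusion-matrix rates, treating each simulated day as one trial."""
    tp = len(outbreak_days & alert_days)   # alerted on an outbreak day
    fp = len(alert_days - outbreak_days)   # alerted on a quiet day
    fn = len(outbreak_days - alert_days)   # missed an outbreak day
    tn = n_days - tp - fp - fn             # correctly quiet
    return {
        "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
        "specificity": tn / (tn + fp) if tn + fp else float("nan"),
        "ppv":         tp / (tp + fp) if tp + fp else float("nan"),
    }

# 100 simulated days: outbreaks seeded on days 30 and 70; the detector
# alerts on day 30 (true positive) and on days 10 and 55 (false positives).
m = detection_metrics({30, 70}, {10, 30, 55}, 100)
print(m)  # sensitivity 0.5, specificity 96/98, PPV 1/3
```

A low PPV, as seen throughout the tables here, means exactly this situation: most alerts fall on days with no seeded outbreak, so any single signal is unlikely to be a real event.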
Results – summary of temporal methods (continued)
- The scan statistic had the highest sensitivity and picked up outbreaks the fastest.
- The methods did not pick up small outbreaks.
- PPV is low, meaning a signal is unlikely to be an outbreak.

Results – summary of spatial methods

                        Sensitivity    Specificity    PPV             Timeliness     Coherence
                        <=5    >5      <=5    >5      <=5     >5      <=5    >5      <=5    >5
GLMM                    0.37   0.91    0.52   0.52    <0.01   0.01    0.96   0.97    0.27   1
Spatial scan statistic  0.01   0.13    0.97   0.97    <0.01   <0.01   0.01   0.47    <0.01  0.03
Space-time permutation  0.17   0.54    0.64   0.64    <0.01   <0.01   0.73   0.89    0.13   0.26
Bayesian regression     0.01   0.44    0.86   0.86    <0.01   0.02    0.72   0.41    0.01   0.14

Diarrhea – baseline count: 145.

Results – summary of spatial methods, by outbreak type

                        Single-day spike              Point source                       Propagated
                        Sens   Spec   PPV     Coh     Sens   Spec   PPV     Time  Coh    Sens   Spec   PPV     Time  Coh
GLMM                    0.46   0.52   <0.01   0.35    0.78   0.52   0.01    0.96  0.34   0.81   0.52   0.02    0.87  0.35
Spatial scan statistic  0.01   0.97   <0.01   0.01    0.09   0.97   <0.01   0.35  0.02   0.19   0.67   <0.01   0.30  0.01
Space-time permutation  0.17   0.64   <0.01   0.14    0.42   0.64   <0.01   0.79  0.22   0.69   0.64   0.01    0.81  0.28
Bayesian regression     0.04   0.86   <0.01   0.03    0.34   0.86   0.01    0.63  0.11   0.46   0.86   0.03    0.71  0.12

Diarrhea – baseline count: 145.

- GLMM and space-time permutation had the highest sensitivity and identified outbreaks the fastest.
- The spatial methods also had low PPV.

Practice-based evaluation
Expense:
- Software
- Hardware
- Programming time
- Run time
Ease of use:
- Programming skills needed
- Analytical skills needed
- Interpretability of output

Practice-based evaluation
SAS was mostly used for the temporal methods; R was used for the spatial methods. All methods ran on standard HP desktops (except the Bayesian method).

Practice-based evaluation
Easy: EARS C2, CUSUM
Moderate: ARIMA, GLM, temporal scan statistic, Holt-Winters
Difficult: GLMM, Bayesian, spatial scan statistic, space-time permutation
For example, the GLM took about a week to build and code; the Bayesian model took about 10 weeks.

What we determined
We selected the temporal scan statistic, GLMM, and space-time permutation based on their performance and relative ease of use.
Next step: run these prospectively in parallel with our current system and evaluate further.

Limitations
1. We didn't evaluate all published methods.
2. Our results may be biased given that there are unlabeled outbreaks in the baseline.
3. Results may be better or worse with different model inputs and parameters.
4. The models we chose for further evaluation were subjectively chosen.

Lessons learned
- Double the amount of time you think it's going to take.
- It is challenging to come up with one summary measure to estimate performance.
- The application and implementation of advanced statistical methods is challenging.
- Should methods-oriented projects like this be housed in a local health department, or would it be better to partner with a local university?

Questions? Thank You!!!

Acknowledgments: Jim Buehler, Howard Burkom, Jim Hadler, Aaron Kite-Powell, Martin Kulldorff, Jose Lojo, Tom Matte, Marc Paladini, Dan Sosin, and the Alfred P. Sloan Foundation.