Software to predict hospital readmissions may lack accuracy

April 15, 2014
by Richard R. Rogoski

Software designed to predict the risk of hospital readmissions was less accurate than manual review at identifying potentially preventable readmissions, according to a study published in BMC Medical Informatics and Decision Making.

Researchers manually reviewed 459 all-cause readmissions at 18 Kaiser Permanente Northern California hospitals to determine potential preventability. The manual review process included a chart review tool; interviews with patients, their families, and treating providers; a nurse reviewer; and physician evaluation of the findings, with preventability rated on a five-point scale.

Then they reassessed the same readmissions using 3M’s Potentially Preventable Readmission (PPR) software and compared both the sensitivity and specificity of the results.

As reported by the researchers, "Using manual review as reference, the sensitivity of PPR was 85 percent. In other words, it identified 85 percent of the potentially preventable readmissions that were identified by manual review. The specificity of PPR was 28 percent; it correctly classified 28 percent of the non-potentially preventable readmissions identified by manual review. Of the 232 cases identified as not potentially preventable by manual review, PPR identified 72 percent as potentially preventable."
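The reported figures can be reproduced from a standard confusion matrix. The sketch below is illustrative only: the individual cell counts are approximations reconstructed from the article's percentages (459 total readmissions, 232 judged not potentially preventable by manual review), not values published by the study.

```python
# Sensitivity/specificity of an automated classifier measured against
# manual review as the reference standard.

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of reference-positive cases the classifier flags."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of reference-negative cases the classifier clears."""
    return tn / (tn + fp)

# Manual review: 227 potentially preventable, 232 not (459 total).
# Cell counts below are approximations derived from the reported rates.
tp, fn = 193, 34    # PPR flagged ~85% of the preventable cases
tn, fp = 65, 167    # PPR cleared ~28% of the non-preventable cases

print(f"sensitivity: {sensitivity(tp, fn):.0%}")  # 85%
print(f"specificity: {specificity(tn, fp):.0%}")  # 28%
```

High sensitivity with low specificity means the software flags most truly preventable readmissions but also mislabels most non-preventable ones, which is the mismatch the researchers describe.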

While the researchers said more study is needed, including evaluation of other software solutions, they concluded: "Concordance between methods was not high enough to replace manual review with automated classification as the primary method of identifying preventable 30-day, all-cause readmission for quality improvement purposes."