
Bloodstream Infections after a Hand Hygiene Initiative


Methods


Our hypothesis was that the intervention changed the monthly rates of infection. We did not specify a direction for this change, so all hypothesis tests are 2-sided. The following analysis plan was developed a priori, and no post hoc tests were made. The plan was agreed upon at a meeting of the project steering group that involved the project's chief investigators and representatives from every state and territory. The results were discussed with representatives from each jurisdictional health department before publication.

Data


Data on healthcare-associated Staphylococcus aureus bloodstream (SAB) infections are routinely collected by Australian hospitals and are reported both to their state or territory health authority and nationally for performance monitoring. The hospitals chosen were the 5 largest public hospitals (by number of acute care beds) in New South Wales, Victoria, Queensland, Western Australia, and South Australia; the 3 largest public hospitals in Tasmania; and the single main public hospital in each of the Northern Territory and the Australian Capital Territory. This gave 30 hospitals. We then selected the next-largest 20 public hospitals throughout Australia, giving 50 hospitals in total. We requested all available monthly data for the 50 hospitals. Data on multiple infection types were available, but we examine only healthcare-associated SAB infections in this analysis, because the steering group judged that this infection had the most consistent data collection protocol (including definitions). SAB infection was defined using the nationally agreed definition endorsed by the Australian Commission on Safety and Quality in Health Care. Both methicillin-resistant S. aureus and methicillin-susceptible S. aureus were included.

The dates of available data differed between jurisdictions (Figure 1). All states and territories had data for the preintervention and postintervention periods except the Northern Territory, which provided no data, and Victoria, which provided data with too many missing values to be usable. These 2 jurisdictions were therefore excluded.






Figure 1.



Available Staphylococcus aureus bloodstream infection data over time by hospital and state. In total, there were 2,304 months (192 years) of available data. ACT, Australian Capital Territory; NSW, New South Wales; QLD, Queensland; RBWH, Royal Brisbane and Women's Hospital; SA, South Australia; Tas, Tasmania; WA, Western Australia.





The National Hand Hygiene Initiative (NHHI) was implemented at different times across the country. Because collection of auditing data formed the basis of the intervention, we used the first report of auditing data for each hospital as the start of the intervention.

The data used here were provided to us by individual hospitals (sometimes via the state bodies). We verified the data quality and checked the infection definitions used. The study was approved by the appropriate human research ethics committees in each state and territory, and the release of data was additionally approved through the research governance processes appropriate to each hospital. The study was also approved by the Queensland University of Technology human research ethics committee.

Study Design


We used a before-and-after quasi-experimental design by comparing the infection rates after the intervention with those before, while controlling for other potential changes over time (see below for details). Similar designs include an interrupted time series, change-point estimation, segmented regression, and stepped-wedge design.

We ran the analyses separately in each state, because the intervention was implemented on a state-by-state basis, with overall coordination at both a state and national level. There were also important differences between states in terms of average infection rates and preexisting hand hygiene campaigns and infection prevention policies. Hence, it was thought likely that the effect of the intervention would vary by state.

Statistical Methods


We examined the change in infection rates after the intervention. Discussions with the project steering group led us to believe that the change in infection rates could have a number of different patterns. For example, the intervention may have gradually reduced rates from month to month in a linear way, or it may have caused an abrupt lowering in rates. There may have been a delay between the start of the intervention and its impact on rates because of learning time and the time taken for the intervention to reach all parts of the hospital. There may also have been a delayed increase in rates after the initial impact of intervention wore off. To capture these possibilities, we examined 12 possible changes over time (Figure 2). Models A and D adhere to the null hypothesis that the intervention had no impact on rates. Models K and L allow a potential delayed increase in rates once the intervention effect has worn off.






Figure 2.



The 12 models used to capture the mean change in infection rates after the intervention. The dashed line is the time of intervention, and the dotted line is the time of the delayed change.





The regression model for the counts of infections in hospital i in month t was as follows:




Yi,t ~ Poisson(μi,t),   i = 1, …, M;  t = 1, …, ni,

log(μi,t) = log(ni,t) + αi + δm(t) + ci,t,   (1)

with ci,t the change in rates due to the intervention (taking one of the forms in Figure 2) and m(t) the calendar month of time t,
where M is the total number of hospitals, ni is the number of months observed in hospital i, and ni,t is the number of bed-days in hospital i in month t. A Poisson distribution is the most appropriate for modeling counts. The offset, log(ni,t), divides the mean count, μi,t, by the bed-day denominator, ni,t, which we standardized to rates per 10,000 bed-days (Table 1). Including the bed-day denominator helped control for changes over time, such as long-term trends in increasing hospital use and seasonal changes in hospital admissions.
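As an illustration of the log-link with a bed-days offset, the sketch below computes an expected monthly count from hypothetical parameter values (the function name and all numbers are ours for illustration, not estimates from the study):

```python
import math

def expected_count(bed_days, alpha_i, delta_month=0.0, change=0.0):
    """Mean count mu under the log-linear Poisson model described above:
    log(mu) = log(bed_days) + alpha_i + delta_month + change.
    All argument values used here are illustrative."""
    return bed_days * math.exp(alpha_i + delta_month + change)

# A hospital-month with 25,000 bed-days and a hypothetical baseline rate
# of 1 infection per 10,000 bed-days:
alpha_i = math.log(1 / 10_000)
print(round(expected_count(25_000, alpha_i), 6))  # 2.5 expected infections
```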

A change in denominator reporting over time could create a spurious change in infection rates. We asked the infection control practitioner in each hospital about any changes, and either none were reported or, where changes did occur, the data we received were retrospectively standardized. We plotted the denominators in each hospital over time to look for sudden changes that would indicate a change in denominator, and none were found.
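The visual check for sudden denominator changes can be mimicked numerically. This crude helper (our own, with an arbitrary 30% threshold) flags large month-to-month jumps in bed-days:

```python
def sudden_changes(denoms, threshold=0.3):
    """Return the indices of months whose bed-day denominator jumps by more
    than `threshold` (a made-up 30% default) relative to the prior month --
    a rough stand-in for the plotted check described in the text."""
    flags = []
    for t in range(1, len(denoms)):
        if abs(denoms[t] - denoms[t - 1]) / denoms[t - 1] > threshold:
            flags.append(t)
    return flags

# Hypothetical series: a doubling in the final month is flagged.
print(sudden_changes([10_000, 10_200, 9_900, 20_000]))  # [3]
```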

We controlled for any seasonal patterns in infection rates using a categorical variable for month (δ). We used a random intercept in each hospital (αi) to control for differences in the average infection rates between hospitals. We were not interested in differences in infection rates between hospitals but were instead interested in the within-hospital change due to the intervention and the average within-hospital change per state.

We examined a step change due to the intervention (model B in Figure 2) by modeling the change in equation (1) as:




ci,t = β I(t ≥ Ti),   (2)
where Ti is the time the intervention was introduced in hospital i and I(·) is an indicator function equal to 1 when its argument is true and 0 otherwise. This assumes that the rates changed immediately at the time of the intervention and remained consistently changed at all times thereafter.
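The step-change term described above can be sketched as a tiny helper; β here is an assumed log-rate change corresponding to a made-up 20% reduction:

```python
import math

def step_change(t, T_i, beta):
    """Step-change term: 0 before the intervention month T_i, and beta
    (a change on the log-rate scale) from T_i onward. Names follow the
    text; the values below are illustrative."""
    return beta if t >= T_i else 0.0

beta = math.log(0.8)  # assumed 20% rate reduction
# Multiplicative effect on the rate, before and after month T_i = 10:
print(round(math.exp(step_change(5, T_i=10, beta=beta)), 2))   # 1.0
print(round(math.exp(step_change(12, T_i=10, beta=beta)), 2))  # 0.8
```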

In another model, we assumed a linear change due to the intervention (model C in Figure 2) using the following:




ci,t = β (t − Ti) I(t ≥ Ti),   (3)
We examined a possible delayed intervention effect (eg, model G in Figure 2). This is plausible because it may take time for the changes promoted by the intervention to become standard practice. We examined delays of 1–6 months from the start of the intervention.

We examined a second change some time after the intervention (eg, model L in Figure 2). This second change could happen if the impact of the intervention on staff behavior wanes with time. We assumed that this second change happened sometime between 2 and 12 months after the intervention.

We examined a linear change in rates before the introduction of the intervention (eg, model F in Figure 2). This is important because, if rates were already decreasing, then the effect of the intervention is the additional change in rates after accounting for the previous linear decrease.
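The remaining change shapes described above (a linear post-intervention change, a delayed step, and a pre-existing trend with an extra step) can be sketched as simple functions of month t; all names and values are illustrative:

```python
def linear_change(t, T_i, beta):
    """Linear post-intervention change: slope beta per month after T_i."""
    return beta * (t - T_i) if t >= T_i else 0.0

def delayed_step(t, T_i, delay, beta):
    """Step change starting `delay` months (1-6 in the text) after T_i."""
    return beta if t >= T_i + delay else 0.0

def trend_plus_step(t, T_i, gamma, beta):
    """Pre-existing linear trend gamma with an additional step beta at T_i,
    so the intervention effect is the change beyond the prior trend."""
    return gamma * t + (beta if t >= T_i else 0.0)

# Hypothetical slope of -0.02 per month, 6 months after the intervention:
print(round(linear_change(16, T_i=10, beta=-0.02), 2))  # -0.12
```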

We selected the best model from the 12 using the Akaike information criterion (AIC). The AIC trades off model complexity against goodness of fit: it equals minus twice the log-likelihood (goodness of fit) plus twice the number of parameters (model complexity). The smaller the AIC, the better the model, but a difference of 2 or less is not considered important. We used the following steps to choose the best model: (1) of the 12 models, find the model with the lowest AIC (AICbest); (2) of the remaining 11 models, find the model with the next lowest AIC (AICnext); (3) if AICnext − AICbest ≤ 2 and the next model has fewer parameters, use the next model (principle of parsimony). The order of model simplicity, as determined by the number of parameters, is {A}0 < {B, C, D, G}1 < {E, F, I, J}2 < {H, K, L}3, where the subscripts give the number of parameters used to model the change in infection rates. Therefore, model A is simpler than models B and C, and models B and C are equally complex (each with 1 extra parameter). An example of the model selection process is shown in Figure 3.

An advantage of using the AIC is that the best-fitting model is chosen regardless of the statistical significance (or otherwise) of any change in infection rates. The AIC is a useful statistic for quantifying the evidence for a set of competing models, and it has been used in a wide variety of model selection problems, including choosing between competing sets of independent variables, prediction models, and covariance structures.
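The AIC-with-parsimony rule can be sketched in a few lines; the dictionary below reuses the AIC values from the Figure 3 example (the function and data structure are ours):

```python
def pick_model(models):
    """AIC selection with the parsimony rule from the text: among models
    within 2 AIC units of the minimum, prefer the one with the fewest
    parameters (ties broken by lower AIC).
    `models` maps name -> (AIC, number_of_parameters)."""
    aic_min = min(aic for aic, _ in models.values())
    close = {n: v for n, v in models.items() if v[0] - aic_min <= 2}
    return min(close, key=lambda n: (close[n][1], close[n][0]))

# The Figure 3 example: G and C have the lowest AICs, but A is within
# 2 units of both and has one fewer parameter, so A is chosen.
print(pick_model({"G": (1468.6, 1), "C": (1468.9, 1), "A": (1469.8, 0)}))  # A
```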






Figure 3.



Example of finding the best model for the change over time in infection rates using the Akaike information criterion (AIC). The Y-axis shows the AIC (the lower, the better), and the X-axis shows the delay (in months) of the intervention effect. The letters correspond to the models in Figure 2. Model G, with a change at 6 months, has the lowest AIC (1,468.6). Model C has the next lowest (1,468.9). Model A is the most parsimonious model: its AIC is 1,469.8, within 2 of the AICs for models C and G, but model A has 1 fewer parameter.





For the best model in each state, we estimated the percentage change in infection rates after the intervention, together with 95% confidence intervals (CIs). All analyses were conducted in R version 3.0.1 (http://www.r-project.org).
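Converting a log-scale coefficient into a percentage change with a 95% CI can be sketched as follows; the coefficient and standard error below are made up for illustration, not results from the study:

```python
import math

def percent_change(beta, se):
    """Percentage change in the infection rate implied by a log-scale
    coefficient beta, with a Wald 95% CI: 100 * (exp(beta) - 1).
    Returns (estimate, lower, upper), each rounded to 1 decimal place."""
    est, lo, hi = beta, beta - 1.96 * se, beta + 1.96 * se
    return tuple(round(100 * (math.exp(b) - 1), 1) for b in (est, lo, hi))

# Hypothetical coefficient of -0.22 (SE 0.08):
print(percent_change(-0.22, 0.08))
```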

Verifying the Model


We tested the residuals of the best model to check for autocorrelation that would violate the assumption of independence; none was evident. We examined a histogram of the residuals to check that they were approximately unimodal with no outliers, and this was the case for each model. We checked the dispersion parameter to verify that an overdispersed Poisson was not required.
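The two model checks can be sketched with generic statistics (a lag-1 autocorrelation and a Pearson dispersion ratio); these are simple stand-ins, not the authors' exact diagnostics:

```python
from statistics import fmean

def lag1_autocorr(resid):
    """Lag-1 autocorrelation of model residuals; values near 0 support
    the independence assumption."""
    m = fmean(resid)
    num = sum((a - m) * (b - m) for a, b in zip(resid, resid[1:]))
    den = sum((a - m) ** 2 for a in resid)
    return num / den

def pearson_dispersion(counts, fitted):
    """Pearson chi-square dispersion: near 1 for a well-fitting Poisson
    model; clearly above 1 suggests an overdispersed Poisson is needed.
    (The degrees-of-freedom correction here is schematic.)"""
    chi2 = sum((y - mu) ** 2 / mu for y, mu in zip(counts, fitted))
    return chi2 / (len(counts) - 1)

# A strongly alternating residual series gives a large negative lag-1 value:
print(round(lag1_autocorr([1, -1, 1, -1, 1, -1]), 3))
```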

Post Hoc Power Calculations


For those states where no statistically significant difference in rates was found, we used a post hoc power calculation to estimate the chance of a false-negative error. We estimated power by simulation: we retained the observed baseline data and simulated postintervention rates with plausible reductions based on the reductions observed in the other states. We assumed that the change in rates followed the pattern of either model B or C (Figure 2). We used 1,000 simulations per state.
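A minimal sketch of such a power simulation is below, under stated assumptions: made-up baseline monthly means, Poisson counts, and a simple two-sample rate comparison in place of the authors' full regression-based procedure:

```python
import math
import random

def poisson(rng, lam):
    """Poisson sampler (Knuth's method; adequate for small means)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def rate_p_value(x, y):
    """Two-sided p-value for equal rates of two Poisson totals with equal
    exposure, via the conditional binomial and a normal approximation."""
    n = x + y
    if n == 0:
        return 1.0
    z = abs(x - n / 2) / math.sqrt(n / 4)
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def simulated_power(monthly_means, reduction, n_sims=1000, alpha=0.05, seed=1):
    """Simulated power: draw baseline counts from the given monthly means,
    draw postintervention counts with rates reduced by `reduction`, and
    count how often the difference is detected at level alpha."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        pre = sum(poisson(rng, m) for m in monthly_means)
        post = sum(poisson(rng, m * (1 - reduction)) for m in monthly_means)
        if rate_p_value(pre, post) < alpha:
            hits += 1
    return hits / n_sims

# Hypothetical example: 36 baseline months averaging 2.5 infections each;
# power to detect an assumed 50% reduction over an equal follow-up period.
print(simulated_power([2.5] * 36, reduction=0.5, n_sims=200))
```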


