Harming through protection?

Source: NEJM

Editor’s Note: On February 15, 2008, after this article had gone to press, the Office for Human Research Protections (OHRP) issued a statement (www.hhs.gov/ohrp/news/recentnews.html#20080215) expressing its new conclusion that Michigan hospitals may continue to implement the checklist developed by Pronovost et al. “without falling under regulations governing human subjects research,” since it “is now being used . . . solely for clinical purposes, not medical research or experimentation.” OHRP further stated that in the research phase, the project “would likely have been eligible for both expedited IRB review and a waiver of the informed consent requirement.”

About 80,000 catheter-related bloodstream infections occur in U.S. intensive care units (ICUs) each year, causing as many as 28,000 deaths and costing the health care system as much as $2.3 billion. If there were procedures that could prevent these infections, wouldn’t we encourage hospitals to introduce them? And wouldn’t we encourage the development, testing, and dissemination of strategies that would get clinicians to use them? Apparently not, judging from the experience of Peter Pronovost and other Johns Hopkins investigators who helped 103 ICUs in 67 Michigan hospitals carry out a highly successful infection-control effort,1 only to run into major problems with federal regulators.

The case demonstrates how some regulations meant to protect people are so poorly designed that they risk harming people instead. The regulations enforced by the Office for Human Research Protections (OHRP) were created in response to harms caused by subjecting people to dangerous research without their knowledge and consent. The regulatory system helps to ensure that research risks are not excessive, confidentiality is protected, and potential subjects are informed about risks and agree to participate. Unfortunately, the system has become complex and rigid and often imposes overly severe restrictions on beneficial activities that present little or no risk.

The Pronovost effort was part of a quality and safety initiative sponsored by the Michigan Hospital Association, with funding from the Agency for Healthcare Research and Quality (AHRQ). The intervention was designed to improve ICU care by promoting the use of five procedures recommended by the Centers for Disease Control and Prevention: washing hands, using full-barrier infection precautions during catheter insertion, cleaning the patient’s skin with disinfectant, avoiding the femoral site if possible, and removing unnecessary catheters. The hospitals designated the clinicians who would lead the teams and provided the necessary supplies. The investigators provided an education program for the team leaders, who educated their colleagues about the procedures and introduced checklists to ensure their use. Infection-control practitioners in each hospital gave the teams feedback on infection rates in their ICUs.

The investigators studied the effect on infection rates and found that they fell substantially and remained low. They also combined the infection-rate data with publicly available hospital-level data to look for patterns related to hospital size and teaching status (they didn’t find any). In this work, they used infection data at the ICU level only; they did not study the performance of individual clinicians or the effect of individual patient or provider characteristics on infection rates.

After the report by Pronovost et al. was published,1 the OHRP received a written complaint alleging that the project violated federal regulations. The OHRP investigated and required Johns Hopkins to take corrective action. The basis of this finding was the OHRP’s disagreement with the conclusion of a Johns Hopkins institutional review board (IRB) that the project did not require full IRB review or informed consent.

The fact that a sophisticated IRB interpreted the regulations differently from the OHRP is a bad sign in itself. You know you are in the presence of dysfunctional regulations when people can’t easily tell what they are supposed to do. Currently, uncertainty about how the OHRP will interpret the term “human-subjects research” and apply the regulations in specific situations causes great concern among people engaged in data-guided activities in health care, since guessing wrong may result in bad publicity and severe sanctions.

Moreover, the requirements imposed in the name of protection often seem burdensome and irrational. In this case, the intervention merely promoted safe and proven procedures, yet the OHRP ruled that since the effect on infection rates was being studied, the activity required full IRB review and informed consent from all patients and providers.

If certain stringent conditions are met, human-subjects researchers may obtain a waiver of informed consent. After the OHRP required the Hopkins IRB to review the project as human-subjects research, the board granted such a waiver. The OHRP had also ruled that the university had failed to ensure that all collaborating institutions were complying with the regulations. Each participating hospital should have received approval from its own IRB or another IRB willing to accept the responsibility of review and oversight. This requirement adds substantial complexity and cost to a study and could sink it altogether.

In my view, the project was a combination of quality improvement and research on organizations, not human-subjects research, and the regulations did not apply. The project was not designed to use ICU patients as human subjects to test a new, possibly risky method of preventing infections; rather, it was designed to promote clinicians’ use of procedures already shown to be safe and effective for the purpose. Each hospital engaged in a classic quality-improvement activity in which team members worked together to introduce best practices and make them routine, with quantitative feedback on outcomes being intrinsic to the process. Such activities should not require IRB review. Since the activity did not increase patients’ risk above the level inherent in ICU care and patient confidentiality was protected, there was no ethical requirement for specific informed consent from patients. Indeed, it is hard to see why anyone would think it necessary or appropriate to ask ICU patients whether they wanted to opt out of a hospital’s effort to ensure the use of proven precautions against deadly infections — or why anyone would think that clinicians should have the right to opt out rather than an ethical obligation to participate.

Did the situation change because hospitals shared their experiences with each other? Since no identifiable patient or clinician information was shared, I don’t think so. Did the fact that quality-improvement experts educated the teams about the best practices change the situation? I don’t think so; bringing in consultants to conduct training activities is normal managerial practice. Did the fact that these experts studied and reported the results change the situation? The investigators were asking whether the hospitals produced and sustained a reduction in ICU infection rates. From one perspective, this was simply an evaluation of the quality-improvement activity; from another, it might be considered research, but the object of study was the performance of organizations.

Of course, the complexity of the regulations leaves room for different interpretations. Moreover, small changes in the facts of the situation can make a large difference in the regulatory burden imposed, even when they make no difference in the risk to patients — a fact underscored by the OHRP’s 11 detailed decision-making charts summarizing the regulations.2 But technical debates about the meaning of “research” and “human subject” miss the most important point: if we want our health care system to engage in data-guided improvement activities that prevent deaths, reduce pain and suffering, and save money, we shouldn’t make it so difficult to do so.

In a public statement on this case,3 the OHRP has indicated that institutions can freely implement practices they think will improve care as long as they don’t investigate whether improvement actually occurs. A hospital can introduce a checklist system without IRB review and informed consent, but if it decides to build in a systematic, data-based evaluation of the checklist’s impact, it is subject to the full weight of the regulations for human-subjects protection.

Obviously, collaborative research and improvement activities require supervision. AHRQ, the state hospital association, hospital managers, and local staff members should all evaluate such projects before taking them on, with a primary focus on their effect on patients’ well-being. This kind of supervision must be in place and working well regardless of whether an activity qualifies as human-subjects research.4,5

The extra layer of bureaucratic complexity embodied in the current regulations makes using data to guide change in health care more difficult and expensive, and it’s more likely to harm than to help. It’s time to modify or reinterpret the regulations so that they protect people from risky research without discouraging low-risk, data-guided activities designed to make our health care system work better.

No potential conflict of interest relevant to this article was reported.
Source Information

Dr. Baily is an associate for ethics and health policy at the Hastings Center, Garrison, NY.


  1. Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med 2006;355:2725-2732.
  2. Office for Human Research Protections, U.S. Department of Health and Human Services. Human subject regulations decision charts, September 24, 2004. (Accessed February 1, 2008, at http://www.hhs.gov/ohrp/humansubjects/guidance/decisioncharts.htm.)
  3. Office for Human Research Protections, U.S. Department of Health and Human Services. OHRP statement regarding the New York Times op-ed entitled “A Lifesaving Checklist.” (Accessed February 1, 2008, at http://www.hhs.gov/ohrp/news/recentnews.html#20080115.)
  4. Baily MA, Bottrell M, Lynn J, Jennings B. The ethics of using QI methods to improve health care quality and safety. Hastings Cent Rep 2006;36:S1-S40.
  5. Lynn J, Baily MA, Bottrell M, et al. The ethics of using quality improvement methods in health care. Ann Intern Med 2007;146:666-673.