Harming through prevention?

Source: NEJM

Editor’s Note: On February 15, 2008, after this article had gone to press, the Office for Human Research Protections (OHRP) issued a statement (www.hhs.gov/ohrp/news/recentnews.html#20080215) expressing its new conclusion that Michigan hospitals may continue to implement the checklist developed by Pronovost et al. “without falling under regulations governing human subjects research,” since it “is now being used . . . solely for clinical purposes, not medical research or experimentation.” OHRP further stated that in the research phase, the project “would likely have been eligible for both expedited IRB review and a waiver of the informed consent requirement.”

About 80,000 catheter-related bloodstream infections occur in U.S. intensive care units (ICUs) each year, causing as many as 28,000 deaths and costing the health care system as much as $2.3 billion. If there were procedures that could prevent these infections, wouldn’t we encourage hospitals to introduce them? And wouldn’t we encourage the development, testing, and dissemination of strategies that would get clinicians to use them? Apparently not, judging from the experience of Peter Pronovost and other Johns Hopkins investigators who helped 103 ICUs in 67 Michigan hospitals carry out a highly successful infection-control effort,1 only to run into major problems with federal regulators.

The case demonstrates how some regulations meant to protect people are so poorly designed that they risk harming people instead. The regulations enforced by the Office for Human Research Protections (OHRP) were created in response to harms caused by subjecting people to dangerous research without their knowledge and consent. The regulatory system helps to ensure that research risks are not excessive, confidentiality is protected, and potential subjects are informed about risks and agree to participate. Unfortunately, the system has become complex and rigid and often imposes overly severe restrictions on beneficial activities that present little or no risk.

The Pronovost effort was part of a quality and safety initiative sponsored by the Michigan Hospital Association, with funding from the Agency for Healthcare Research and Quality (AHRQ). The intervention was designed to improve ICU care by promoting the use of five procedures recommended by the Centers for Disease Control and Prevention: washing hands, using full-barrier infection precautions during catheter insertion, cleaning the patient’s skin with disinfectant, avoiding the femoral site if possible, and removing unnecessary catheters. The hospitals designated the clinicians who would lead the teams and provided the necessary supplies. The investigators provided an education program for the team leaders, who educated their colleagues about the procedures and introduced checklists to ensure their use. Infection-control practitioners in each hospital gave the teams feedback on infection rates in their ICUs.

The investigators studied the effect on infection rates and found that they fell substantially and remained low. They also combined the infection-rate data with publicly available hospital-level data to look for patterns related to hospital size and teaching status (they didn’t find any). In this work, they used infection data at the ICU level only; they did not study the performance of individual clinicians or the effect of individual patient or provider characteristics on infection rates.

After the report by Pronovost et al. was published,1 the OHRP received a written complaint alleging that the project violated federal regulations. The OHRP investigated and required Johns Hopkins to take corrective action. The basis of this finding was the OHRP’s disagreement with the conclusion of a Johns Hopkins institutional review board (IRB) that the project did not require full IRB review or informed consent.

The fact that a sophisticated IRB interpreted the regulations differently from the OHRP is a bad sign in itself. You know you are in the presence of dysfunctional regulations when people can’t easily tell what they are supposed to do. Currently, uncertainty about how the OHRP will interpret the term “human-subjects research” and apply the regulations in specific situations causes great concern among people engaged in data-guided activities in health care, since guessing wrong may result in bad publicity and severe sanctions.

Moreover, the requirements imposed in the name of protection often seem burdensome and irrational. In this case, the intervention merely promoted safe and proven procedures, yet the OHRP ruled that since the effect on infection rates was being studied, the activity required full IRB review and informed consent from all patients and providers.

If certain stringent conditions are met, human-subjects researchers may obtain a waiver of informed consent. After the OHRP required the Hopkins IRB to review the project as human-subjects research, the board granted such a waiver. The OHRP had also ruled that the university had failed to ensure that all collaborating institutions were complying with the regulations. Each participating hospital should have received approval from its own IRB or another IRB willing to accept the responsibility of review and oversight. This requirement adds substantial complexity and cost to a study and could sink it altogether.

In my view, the project was a combination of quality improvement and research on organizations, not human-subjects research, and the regulations did not apply. The project was not designed to use ICU patients as human subjects to test a new, possibly risky method of preventing infections; rather, it was designed to promote clinicians’ use of procedures already shown to be safe and effective for the purpose. Each hospital engaged in a classic quality-improvement activity in which team members worked together to introduce best practices and make them routine, with quantitative feedback on outcomes being intrinsic to the process. Such activities should not require IRB review. Since the activity did not increase patients’ risk above the level inherent in ICU care and patient confidentiality was protected, there was no ethical requirement for specific informed consent from patients. Indeed, it is hard to see why anyone would think it necessary or appropriate to ask ICU patients whether they wanted to opt out of a hospital’s effort to ensure the use of proven precautions against deadly infections — or why anyone would think that clinicians should have the right to opt out rather than an ethical obligation to participate.

Did the situation change because hospitals shared their experiences with each other? Since no identifiable patient or clinician information was shared, I don’t think so. Did the fact that quality-improvement experts educated the teams about the best practices change the situation? I don’t think so; bringing in consultants to conduct training activities is normal managerial practice. Did the fact that these experts studied and reported the results change the situation? The investigators were asking whether the hospitals produced and sustained a reduction in ICU infection rates. From one perspective, this was simply an evaluation of the quality-improvement activity; from another, it might be considered research, but the object of study was the performance of organizations.

Of course, the complexity of the regulations leaves room for different interpretations. Moreover, small changes in the facts of the situation can make a large difference in the regulatory burden imposed, even when they make no difference in the risk to patients — a fact underscored by the OHRP’s 11 detailed decision-making charts summarizing the regulations.2 But technical debates about the meaning of “research” and “human subject” miss the most important point: if we want our health care system to engage in data-guided improvement activities that prevent deaths, reduce pain and suffering, and save money, we shouldn’t make it so difficult to do so.

In a public statement on this case,3 the OHRP has indicated that institutions can freely implement practices they think will improve care as long as they don’t investigate whether improvement actually occurs. A hospital can introduce a checklist system without IRB review and informed consent, but if it decides to build in a systematic, data-based evaluation of the checklist’s impact, it is subject to the full weight of the regulations for human-subjects protection.

Obviously, collaborative research and improvement activities require supervision. AHRQ, the state hospital association, hospital managers, and local staff members should all evaluate such projects before taking them on, with a primary focus on their effect on patients’ well-being. This kind of supervision must be in place and working well regardless of whether an activity qualifies as human-subjects research.4,5

The extra layer of bureaucratic complexity embodied in the current regulations makes using data to guide change in health care more difficult and expensive, and it’s more likely to harm than to help. It’s time to modify or reinterpret the regulations so that they protect people from risky research without discouraging low-risk, data-guided activities designed to make our health care system work better.

No potential conflict of interest relevant to this article was reported.
Source Information

Dr. Baily is an associate for ethics and health policy at the Hastings Center, Garrison, NY.


  1. Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med 2006;355:2725-2732.
  2. Office for Human Research Protections, U.S. Department of Health and Human Services. Human subject regulations decision charts, September 24, 2004. (Accessed February 1, 2008, at http://www.hhs.gov/ohrp/humansubjects/guidance/decisioncharts.htm.)
  3. Office for Human Research Protections, U.S. Department of Health and Human Services. OHRP statement regarding the New York Times op-ed entitled “A Lifesaving Checklist.” (Accessed February 1, 2008, at http://www.hhs.gov/ohrp/news/recentnews.html#20080115.)
  4. Baily MA, Bottrell M, Lynn J, Jennings B. The ethics of using QI methods to improve health care quality and safety. Hastings Cent Rep 2006;36:S1-S40.
  5. Lynn J, Baily MA, Bottrell M, et al. The ethics of using quality improvement methods in health care. Ann Intern Med 2007;146:666-673.

Meta-analysis: an analysis for its comprehension

Monograph by Dr. C. Rafael Avilés Merens, Dr. C. Melvyn Morales Morejón, Lic. Augusto Sao Avilés, and Lic. Rubén Cañedo Andalia – December 28, 2005

A bibliographic search on meta-analytic methodology was carried out in several databases, including Medline and the Science Citation Index. For the Medline search, the descriptor Meta-analysis was used as indicated in MeSH, the thesaurus of the U.S. National Library of Medicine. Related terms used by the Institute for Scientific Information were also employed, along with synonyms and quasi-synonyms derived from search strategies tested by the authors themselves and from exchanges with other authors working on the topic. Various techniques were used to locate and subsequently retrieve the information. The authors' participation in invisible colleges was an essential source of up-to-date and, on more than a few occasions, unpublished information.
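As an illustration of the search strategy the authors describe — a MeSH descriptor combined with free-text synonyms and quasi-synonyms — here is a minimal sketch of how such a Medline/PubMed query string might be assembled. The helper name and the synonym list are hypothetical; the field tags `[mh]` (MeSH Terms) and `[tiab]` (Title/Abstract) are standard PubMed search syntax.

```python
def build_medline_query(descriptor, synonyms):
    """Combine a MeSH descriptor with title/abstract synonyms using OR.

    The descriptor is tagged [mh] (MeSH Terms); each free-text synonym
    is tagged [tiab] (Title/Abstract), so the query retrieves records
    indexed under the descriptor as well as records that merely mention
    a synonym in the title or abstract.
    """
    terms = [f'"{descriptor}"[mh]']
    terms += [f'"{s}"[tiab]' for s in synonyms]
    return " OR ".join(terms)

query = build_medline_query(
    "Meta-Analysis",
    ["meta-analysis", "metaanalysis", "quantitative review"],
)
print(query)
# "Meta-Analysis"[mh] OR "meta-analysis"[tiab] OR "metaanalysis"[tiab] OR "quantitative review"[tiab]
```

The resulting string can be pasted directly into the PubMed search box; broadening with title/abstract synonyms is the usual way to catch recent records not yet assigned MeSH headings.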


1 – Useful approaches for understanding meta-analyses
2 – Methods
3 – Background
4 – Qualitative and quantitative reviews
5 – Meta-analysis: definition
6 – Classification
7 – Stages
8 – Selection criteria
9 – Methodological characteristics – moderator variables
10 – Substantive characteristics
11 – Extrinsic characteristics
12 – Final considerations
13 – Annex. Documentary and informational processing: interrelations and distinctions
14 – Annex. Meta-analysis by stages
15 – Annex. Problems pertinent to each stage of a review
16 – Annex. Report proposed by authors, editors, and critics of meta-analysis
17 – Annex. Quality-control protocol for the presentation of results
18 – Annex. Main types of bias in the preparation of the review
19 – Annex. Statistics in the meta-analytic review
20 – Bibliographic references

Following the death of a patient at Hospital Fernández

Serious irregularities reported in clinical trials at public hospitals
Clarín, 05/12/2007
The Buenos Aires city audit revealed that only 18% of trials meet all the formal requirements.

The death of a patient enrolled in a clinical trial at Hospital Fernández appears to be part of a broader pattern of lack of oversight and illegality in the public hospitals. So indicates a scathing report by the Audit Office of the City of Buenos Aires, which examined the system of trial protocols in the city's hospitals and found serious irregularities.

Clinical trials conducted without the patient's consent, without the required documentation, or without the mandatory insurance in case something goes wrong: these are some of the conclusions of the report released yesterday by Radio Mitre, based on a survey by the city audit office of seven city hospitals during the first half of this year.

The hospitals examined were the Argerich, the Durand, the Pirovano, the Pedro Elizalde children's hospital, the María Curie, the Muñiz, and the Udaondo. None came out well: complete protocol documentation was found in only 18 percent of cases. The constant across all the hospitals was that most trials lacked approval certificates from the hospital itself and even from ANMAT, the agency of the national Ministry of Health responsible for authorizing them. This could give rise to criminal complaints against the investigators and hospital administrators, since clinical trials may be conducted only with ANMAT authorization. The law also requires that trials be backed by an insurance policy to cover any harm that may occur. That was the case in only 26 percent of the trials.

The audit also delved into one of the most controversial aspects of these studies: their funding, most of which comes from foreign pharmaceutical companies. Those companies pay the physicians to conduct the studies, and the City "invites" them to make a contribution to the hospital. According to the audit, physicians receive an average of 18,000 pesos for each patient enrolled in a clinical trial, while the hospitals receive barely 1,083 pesos in donations. In other words, little is left for the hospitals, even though they put up their prestige and their infrastructure.

A total of 184 protocols were analyzed for the study, and it was found that in 14 percent of them (25 cases) the patients' authorization had not even been requested. National and city legislation requires that patients know exactly what trial they will undergo and what its risks are.

The patient Eduardo D'Angelillo died this past January 14, a month after undergoing a trial of this kind. The audit concludes: "Supervision, monitoring, and evaluation of the research are lacking."

Thanks to Martin Cañas for bringing this article to our attention.