
We are not alone in this: it happens throughout Latin America. As cold comfort ("misery loves company"), here is an interesting video about the state of research in Peru.
the Declaration of Helsinki. World Medical Association. Ethical Principles for Medical Research Involving Human Subjects, Sixth Revision, October 2008 (World Medical Association Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects).
In the sidebar you can access an excellent document on the basic principles of clinical research by Joan Ramon Laporte, of the Institut Català de Farmacologia: http://www.icf.uab.es/llibre/llibre.htm
Source: NEJM
Editor’s Note: On February 15, 2008, after this article had gone to press, the Office for Human Research Protections (OHRP) issued a statement (www.hhs.gov/ohrp/news/recentnews.html#20080215) expressing its new conclusion that Michigan hospitals may continue to implement the checklist developed by Pronovost et al. “without falling under regulations governing human subjects research,” since it “is now being used . . . solely for clinical purposes, not medical research or experimentation.” OHRP further stated that in the research phase, the project “would likely have been eligible for both expedited IRB review and a waiver of the informed consent requirement.”
About 80,000 catheter-related bloodstream infections occur in U.S. intensive care units (ICUs) each year, causing as many as 28,000 deaths and costing the health care system as much as $2.3 billion. If there were procedures that could prevent these infections, wouldn’t we encourage hospitals to introduce them? And wouldn’t we encourage the development, testing, and dissemination of strategies that would get clinicians to use them? Apparently not, judging from the experience of Peter Pronovost and other Johns Hopkins investigators who helped 103 ICUs in 67 Michigan hospitals carry out a highly successful infection-control effort,1 only to run into major problems with federal regulators.
The case demonstrates how some regulations meant to protect people are so poorly designed that they risk harming people instead. The regulations enforced by the Office for Human Research Protections (OHRP) were created in response to harms caused by subjecting people to dangerous research without their knowledge and consent. The regulatory system helps to ensure that research risks are not excessive, confidentiality is protected, and potential subjects are informed about risks and agree to participate. Unfortunately, the system has become complex and rigid and often imposes overly severe restrictions on beneficial activities that present little or no risk.
The Pronovost effort was part of a quality and safety initiative sponsored by the Michigan Hospital Association, with funding from the Agency for Healthcare Research and Quality (AHRQ). The intervention was designed to improve ICU care by promoting the use of five procedures recommended by the Centers for Disease Control and Prevention: washing hands, using full-barrier infection precautions during catheter insertion, cleaning the patient’s skin with disinfectant, avoiding the femoral site if possible, and removing unnecessary catheters. The hospitals designated the clinicians who would lead the teams and provided the necessary supplies. The investigators provided an education program for the team leaders, who educated their colleagues about the procedures and introduced checklists to ensure their use. Infection-control practitioners in each hospital gave the teams feedback on infection rates in their ICUs.
The investigators studied the effect on infection rates and found that they fell substantially and remained low. They also combined the infection-rate data with publicly available hospital-level data to look for patterns related to hospital size and teaching status (they didn’t find any). In this work, they used infection data at the ICU level only; they did not study the performance of individual clinicians or the effect of individual patient or provider characteristics on infection rates.
After the report by Pronovost et al. was published,1 the OHRP received a written complaint alleging that the project violated federal regulations. The OHRP investigated and required Johns Hopkins to take corrective action. The basis of this finding was the OHRP’s disagreement with the conclusion of a Johns Hopkins institutional review board (IRB) that the project did not require full IRB review or informed consent.
The fact that a sophisticated IRB interpreted the regulations differently from the OHRP is a bad sign in itself. You know you are in the presence of dysfunctional regulations when people can’t easily tell what they are supposed to do. Currently, uncertainty about how the OHRP will interpret the term “human-subjects research” and apply the regulations in specific situations causes great concern among people engaged in data-guided activities in health care, since guessing wrong may result in bad publicity and severe sanctions.
Moreover, the requirements imposed in the name of protection often seem burdensome and irrational. In this case, the intervention merely promoted safe and proven procedures, yet the OHRP ruled that since the effect on infection rates was being studied, the activity required full IRB review and informed consent from all patients and providers.
If certain stringent conditions are met, human-subjects researchers may obtain a waiver of informed consent. After the OHRP required the Hopkins IRB to review the project as human-subjects research, the board granted such a waiver. The OHRP had also ruled that the university had failed to ensure that all collaborating institutions were complying with the regulations. Each participating hospital should have received approval from its own IRB or another IRB willing to accept the responsibility of review and oversight. This requirement adds substantial complexity and cost to a study and could sink it altogether.
In my view, the project was a combination of quality improvement and research on organizations, not human-subjects research, and the regulations did not apply. The project was not designed to use ICU patients as human subjects to test a new, possibly risky method of preventing infections; rather, it was designed to promote clinicians’ use of procedures already shown to be safe and effective for the purpose. Each hospital engaged in a classic quality-improvement activity in which team members worked together to introduce best practices and make them routine, with quantitative feedback on outcomes being intrinsic to the process. Such activities should not require IRB review. Since the activity did not increase patients’ risk above the level inherent in ICU care and patient confidentiality was protected, there was no ethical requirement for specific informed consent from patients. Indeed, it is hard to see why anyone would think it necessary or appropriate to ask ICU patients whether they wanted to opt out of a hospital’s effort to ensure the use of proven precautions against deadly infections — or why anyone would think that clinicians should have the right to opt out rather than an ethical obligation to participate.
Did the situation change because hospitals shared their experiences with each other? Since no identifiable patient or clinician information was shared, I don’t think so. Did the fact that quality-improvement experts educated the teams about the best practices change the situation? I don’t think so; bringing in consultants to conduct training activities is normal managerial practice. Did the fact that these experts studied and reported the results change the situation? The investigators were asking whether the hospitals produced and sustained a reduction in ICU infection rates. From one perspective, this was simply an evaluation of the quality-improvement activity; from another, it might be considered research, but the object of study was the performance of organizations.
Of course, the complexity of the regulations leaves room for different interpretations. Moreover, small changes in the facts of the situation can make a large difference in the regulatory burden imposed, even when they make no difference in the risk to patients — a fact underscored by the OHRP’s 11 detailed decision-making charts summarizing the regulations.2 But technical debates about the meaning of “research” and “human subject” miss the most important point: if we want our health care system to engage in data-guided improvement activities that prevent deaths, reduce pain and suffering, and save money, we shouldn’t make it so difficult to do so.
In a public statement on this case,3 the OHRP has indicated that institutions can freely implement practices they think will improve care as long as they don’t investigate whether improvement actually occurs. A hospital can introduce a checklist system without IRB review and informed consent, but if it decides to build in a systematic, data-based evaluation of the checklist’s impact, it is subject to the full weight of the regulations for human-subjects protection.
Obviously, collaborative research and improvement activities require supervision. AHRQ, the state hospital association, hospital managers, and local staff members should all evaluate such projects before taking them on, with a primary focus on their effect on patients’ well-being. This kind of supervision must be in place and working well regardless of whether an activity qualifies as human-subjects research.4,5
The extra layer of bureaucratic complexity embodied in the current regulations makes using data to guide change in health care more difficult and expensive, and it’s more likely to harm than to help. It’s time to modify or reinterpret the regulations so that they protect people from risky research without discouraging low-risk, data-guided activities designed to make our health care system work better.
No potential conflict of interest relevant to this article was reported.
Source Information
Dr. Baily is an associate for ethics and health policy at the Hastings Center, Garrison, NY.
References
MARÍA VALERIO
MADRID.- It is a growing, and worrying, trend. Researchers at the University of Texas (USA) have analyzed seven million papers published in scientific journals in an attempt to identify duplicates: either because the authors themselves submit the same work to several publications, or because the papers share an excessive 'similarity' with other studies.
The authors of this analysis, Mounir Errami and Harold Garner, argue that such practices distort the scientific literature, artificially inflate some researchers' publication records, and force journals to spend large amounts of time and resources trying to detect this kind of 'fraud'.
In some cases, they add, similarities between two texts may be justified, for example properly attributed quotations or updates reporting new data from a clinical trial. In most cases, however, these specialists say the issue is plagiarism, or even self-plagiarism of earlier documents (repeated or duplicated articles). The conclusions of their search appear this week in a commentary in the journal 'Nature'.
"Copying oneself (self-plagiarism) is not the same as copying other authors," José Alonso, editorial director of the Elsevier group in Spain, one of the leading scientific publishing groups, tells elmundo.es. "Although both are bad practice, the former is more debatable and in some cases can be justified." As he points out, "we start from the premise of offering the reader original, novel research, not previously published, or based on already published articles but offering new aspects of interest. The alternative wastes readers' time and needlessly consumes editorial and human resources."
Among the solutions the Texas experts propose for controlling these practices is the use of new computer programs and software capable of analyzing multiple texts in search of suspicious similarities. Above all, though, they propose deterring researchers by publicly exposing these 'errors'.
That is what they have done so far with the 70,000 abstracts (summaries of the key data of a scientific paper) they have found duplicated in their search of MedLine (one of the main medical databases). Through a web page dubbed Déjà Vu, any Internet user can check for themselves which papers have been published more than once.
To refine their conclusions, the next step includes a 'manual' analysis of these copies to look for a justification for the duplication (for example, a version translated into another language), even contacting the authors involved if necessary. In fact, they report, the review they have begun has already triggered investigations by some scientific journals, whose rules expressly prohibit submitting articles simultaneously to other editors.
In the case of the journals published by Elsevier, including Medicina Clínica, one of the most prominent in Spain, Alonso stresses that the rules state that only original articles (neither previously published nor simultaneously submitted to other journals) will be evaluated. "In this respect, we follow the rules of the International Committee of Medical Journal Editors. That does not prevent authors from submitting their work to several publications at once in practice, and that is impossible to detect."
On this point, the 'Nature' commentators acknowledge that their review is still under way and their data must be interpreted with caution, but they insist that this "ethically questionable" practice has been on the rise since 1975. They also concede that many cases likely slip past any control mechanism, given the enormous volume of scientific literature published worldwide each year.
Alonso notes that all journals published by Elsevier-Doyma España have "a computerized article registration system: the title, the authors, and their affiliation are recorded, along with the manuscript content as an attached file." For each newly registered article, this technology compares the title and authors against the database, and an algorithm detects similarities. "If any exist, they are reviewed in detail," he explains.
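A screening step like the one Alonso describes can be sketched in a few lines. This is only a toy illustration, not Elsevier's actual system nor the engine behind Déjà Vu; the function names, the word-level comparison, and the 0.8 threshold are all assumptions chosen for the example:

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Word-level similarity ratio between two texts (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower().split(), b.lower().split()).ratio()

def flag_duplicates(abstracts, threshold=0.8):
    """Compare every pair of registered abstracts and return the
    suspiciously similar pairs, which would then go to human review."""
    ids = sorted(abstracts)
    flagged = []
    for i, x in enumerate(ids):
        for y in ids[i + 1:]:
            score = similarity(abstracts[x], abstracts[y])
            if score >= threshold:
                flagged.append((x, y, round(score, 2)))
    return flagged
```

A real pipeline would, as Alonso describes, first compare cheap metadata (title, authors) against the database and only then score manuscript text, since an all-pairs comparison does not scale to millions of records.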
When these controls detect a duplication, an internal protocol is followed. "First the author in question is notified and given the chance to offer whatever explanations he considers appropriate. Initially one should not accuse the author of plagiarism or duplication, but rather warn him that a series of coincidences has been detected," explains the group's editorial director. If the explanations are unsatisfactory and the article is indexed in a medical database, the journal publishes a retraction and withdraws it. "In some detected cases we take the opportunity to publish an editorial reminding authors of all these ethical considerations, to reinforce these concepts from an educational standpoint," José Alonso concludes.
Source: elmundo.es
by Carlos Fernández Oropesa