Stopping clinical trials early for benefit: too good to be true?


Stopping Trials Early for Benefit — Too Good to Be True?

Be skeptical of results from trials stopped early for benefit.

The practice of stopping randomized clinical trials (RCTs) early for benefit overestimates treatment effects, accrues inadequate data on adverse events, and can lead to misguided treatment recommendations. The number of trials that were stopped early for benefit has increased markedly since 1990 (JAMA 2005; 294:2203). Motives for stopping a trial early include a perceived ethical obligation to offer the treatment to participants and lower trial costs. However, might investigators have less noble motives?

In this systematic review, Italian investigators analyzed 25 RCTs in which new anticancer treatments were tested and that were stopped early for benefit. In 22 studies, the interim endpoint that drove trial truncation was the same endpoint that was proposed for the final analysis. However, the sample sizes used to generate interim efficacy results varied greatly; five trials were stopped when only 43% or less of the planned sample size was reached, and in another five trials, researchers did not provide this information. Of 14 trials published in the last 3 years, 11 were used to support applications for marketing approval by the FDA or a European counterpart. Notably, six trials did not involve data and safety monitoring committees.

Comment: These results affirm the growing phenomenon of trials stopped early for benefit. Furthermore, the authors assert that most of these trial results were used for drug approval, which suggests a commercial motive for stopping early. These findings, coupled with concerns about scientific validity, indicate that clinicians should be skeptical of results of trials stopped early for benefit.

Paul S. Mueller, MD, MPH, FACP

Published in Journal Watch General Medicine May 1, 2008

Citation(s):

Trotta F et al. Stopping a trial early in oncology: For patients or for industry? Ann Oncol 2008 Apr 9; [e-pub ahead of print]. (http://dx.doi.org/doi:10.1093/annonc/mdn042)

Meta-DiSc


Meta-DiSc

Download

Meta-DiSc is a freely distributed program for performing meta-analysis of studies evaluating diagnostic and screening tests. Click the left mouse button to download a PDF document describing the statistical methods implemented.

Meta-DiSc is distributed free of charge for unrestricted use and is offered as is. The authors accept no responsibility for any failure it may cause on the computers where it is installed.

We would appreciate it being cited as:
Zamora J, Abraira V, Muriel A, Khan KS, Coomarasamy A. Meta-DiSc: a software for meta-analysis of test accuracy data. BMC Medical Research Methodology 2006, 6:31.

The program was developed by the team of the Clinical Biostatistics Unit of the Hospital Ramón y Cajal in Madrid (Spain), within research projects FIS PI02/0954 and PI04/1055 and with partial funding from cooperative research network G03/090, "Desarrollo de metodologías para la aplicación y gestión del conocimiento en la práctica clínica."

If you are interested in this program, please keep an eye out for updates by checking the version number.

The program runs on Windows 95, 98, Me, NT, 2000, and XP. To install it, click the left mouse button to download the file metadisc_es.msi, save it to your computer's disk, and run it. The only installation option is the folder where the program will be installed, which by default is "C:\Archivos de programa\Meta-DiSc\".

To uninstall it, open the Windows Control Panel and choose the Add/Remove Programs utility.

Source: Biostatistics Unit, Hospital Ramón y Cajal.

Pediatric clinical trials require external oversight

Safety in clinical trials is always a fundamental concern, but it becomes essential when the trial involves children. Yet only 2 percent of these experiments have independent safety monitoring committees.
DM London 27/03/2008

Only two percent of pediatric clinical trials have independent safety monitoring committees, which allow adverse reactions to be detected early, according to a review published in the latest issue of Acta Paediatrica.

A team of researchers from the Department of Child Health at the University of Nottingham, United Kingdom, led by Helen Sammons, carried out a detailed analysis of 739 drug trials conducted worldwide between 1996 and 2002, monitoring the rate of adverse reactions and the level of safety oversight in the studies.

About three quarters of the trials (74 percent) described in some way how safety would be monitored during the study, but only thirteen (2 percent) had independent committees. "We were surprised by the low level of this kind of safety oversight that we found, and we believe pharmaceutical companies need to incorporate these bodies into their trials, above all those involving children," says Sammons, who adds that "clinical trials in the pediatric population must unquestionably continue; they are vital because they increase the chances of avoiding adverse effects before drugs reach the market."

Other findings
The researchers also found that seven out of ten trials caused adverse events, which were serious in 20 percent of cases, although not necessarily linked to the administration of drugs. Adverse drug reactions were reported in only 37 percent of the trials and were moderate or severe in 11 percent of cases. Of all the trials analyzed, only six were stopped early because drug toxicity had been reported; all of them had independent safety committees.

The study counted as adverse reactions hemorrhages, high blood pressure, seizures, psychosis, suicide, severe kidney failure, and death. Deaths occurred in 11 percent of the studies, but most were unrelated to the drugs.

The trials studied came from many countries, including Argentina, Belgium, Canada, Chile, China, France, India, Israel, Italy, Japan, the Netherlands, South Africa, Sweden, Taiwan, Thailand, Turkey, the United Kingdom, and the United States.

(Acta Paediatrica 2008;97(4):474-477)

Source: Diario Médico

Clinical trials


Assessing the quality of reports of randomized clinical trials: is blinding necessary?

Jadad AR, Moore RA, Carroll D, Jenkinson C, Reynolds DJ, Gavaghan DJ, McQuay HJ.

Oxford Regional Pain Relief Unit, University of Oxford, UK.

It has been suggested that the quality of clinical trials should be assessed by blinded raters to limit the risk of introducing bias into meta-analyses and systematic reviews, and into the peer-review process. There is very little evidence in the literature to substantiate this. This study describes the development of an instrument to assess the quality of reports of randomized clinical trials (RCTs) in pain research and its use to determine the effect of rater blinding on the assessments of quality. A multidisciplinary panel of six judges produced an initial version of the instrument. Fourteen raters from three different backgrounds assessed the quality of 36 research reports in pain research, selected from three different samples. Seven were allocated randomly to perform the assessments under blind conditions. The final version of the instrument included three items. These items were scored consistently by all the raters regardless of background and could discriminate between reports from the different samples. Blind assessments produced significantly lower and more consistent scores than open assessments. The implications of this finding for systematic reviews, meta-analytic research and the peer-review process are discussed.


PMID: 8721797 [PubMed – indexed for MEDLINE]

Quality of a study: the Jadad scale


Quality of Study:
A numerical score between 0-5 is assigned as a rough measure of study design/reporting quality (0 being weakest and 5 being strongest). This number is based on a well-established, validated scale developed by Jadad et al. (Jadad AR, Moore RA, Carroll D, et al. Assessing the quality of reports of randomized clinical trials: is blinding necessary? Controlled Clinical Trials 1996;17[1]:1-12). This calculation does not account for all study elements that may be used to assess quality (other aspects of study design/reporting are addressed in the “Evidence Discussion” sections of monographs).

  • A Jadad score is calculated using the seven items in the table below. The first five items are indications of good quality, and each counts as one point towards an overall quality score. The final two items indicate poor quality, and a point is subtracted for each if its criteria are met. The range of possible scores is 0 to 5.

Jadad Score Calculation

  • Was the study described as randomized (this includes words such as randomly, random, and randomization)? (0/1)
  • Was the method used to generate the sequence of randomization described and appropriate (table of random numbers, computer-generated, etc.)? (0/1)
  • Was the study described as double blind? (0/1)
  • Was the method of double blinding described and appropriate (identical placebo, active placebo, dummy, etc.)? (0/1)
  • Was there a description of withdrawals and dropouts? (0/1)
  • Deduct one point if the method used to generate the sequence of randomization was described and it was inappropriate (patients were allocated alternately, or according to date of birth, hospital number, etc.). (0/−1)
  • Deduct one point if the study was described as double blind but the method of blinding was inappropriate (e.g., comparison of tablet vs. injection with no double dummy). (0/−1)

P = pending verification.
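The seven-item calculation above can be sketched in code. This is a hedged illustration: the function and argument names are inventions for clarity; only the seven items and the published 0-5 range come from the scale itself.

```python
# Sketch of the Jadad score calculation; argument names are illustrative.

def jadad_score(randomized,
                randomization_method_appropriate,
                double_blind,
                blinding_method_appropriate,
                withdrawals_described,
                randomization_method_inappropriate,
                blinding_method_inappropriate):
    """Return a Jadad quality score clamped to the published 0-5 range."""
    score = 0
    # Five indications of good quality: one point each.
    score += 1 if randomized else 0
    score += 1 if randomization_method_appropriate else 0
    score += 1 if double_blind else 0
    score += 1 if blinding_method_appropriate else 0
    score += 1 if withdrawals_described else 0
    # Two indications of poor quality: subtract one point each.
    score -= 1 if randomization_method_inappropriate else 0
    score -= 1 if blinding_method_inappropriate else 0
    return max(0, min(5, score))

# A report with all five positive items and neither deduction scores 5.
print(jadad_score(True, True, True, True, True, False, False))
```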

Magnitude of Benefit:
This summarizes how strong a benefit is: small, medium, large, or none. If results are not statistically significant, "NA" for "not applicable" is entered. To define small, medium, and large benefits consistently across different studies and monographs, Natural Standard defines the magnitude of benefit in terms of the standard deviation (SD) of the outcome measure. Specifically, the benefit is considered:

  • Large: if >1 SD
  • Medium: if 0.5 to 0.9 SD
  • Small: if 0.2 to 0.4 SD

In many cases, studies do not report the standard deviation of the change in the outcome measure. However, the change expressed in standard-deviation units (also known as the effect size) can be calculated: subtract the mean (or mean difference) in the placebo/control group from the mean (or mean difference) in the treatment group, and divide that quantity by the pooled standard deviation (Effect size = [Mean Treatment − Mean Placebo]/SDp).
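The arithmetic above can be sketched as follows. This is a minimal sketch, not the monograph's own code: the pooled-SD formula is the standard two-group formula, and treating the gaps between the source's bands (0.4-0.5 and 0.9-1.0) as belonging to the lower band is an assumption.

```python
# Effect size (Cohen's d style) with a pooled standard deviation.
from math import sqrt

def pooled_sd(sd_t, n_t, sd_c, n_c):
    """Pooled standard deviation of two groups (standard formula)."""
    return sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                / (n_t + n_c - 2))

def effect_size(mean_treatment, mean_placebo, sd_pooled):
    """Effect size = (Mean Treatment - Mean Placebo) / SDp."""
    return (mean_treatment - mean_placebo) / sd_pooled

def magnitude(d):
    """Map an effect size onto the source's small/medium/large bands."""
    d = abs(d)
    if d > 1.0:
        return "large"   # >1 SD
    if d >= 0.5:
        return "medium"  # 0.5 to 0.9 SD (band gap treated as medium)
    if d >= 0.2:
        return "small"   # 0.2 to 0.4 SD (band gap treated as small)
    return "none"

# Hypothetical numbers: a 1-point improvement over placebo with a
# pooled SD of 2 gives an effect size of 0.5, i.e. a "medium" benefit.
d = effect_size(9.0, 8.0, pooled_sd(2.0, 50, 2.0, 50))
```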

Oral anticoagulants


The tolerability and safety of oral anticoagulants in the elderly are worse under usual conditions of use than in clinical trials

Hylek EM, Evans-Molina C, Shea C, Henault LE, Regan S. Major Hemorrhage and Tolerability of Warfarin in the First Year of Therapy Among Elderly Patients With Atrial Fibrillation. Circulation 2007; 115: 2689-2696.

Introduction

Oral anticoagulant treatment is underused in patients with atrial fibrillation. One contributing factor is clinicians' fear of the medication's adverse effects, especially hemorrhage. The rate of adverse effects observed in clinical trials is low, but it is unknown whether the same rate applies when the drugs are used under real-world conditions.

Objective

To study the tolerability of oral anticoagulant treatment in elderly patients.

Study profile

Type of study: Cohort study

Setting: Community

Methods

The study enrolled patients ≥65 years of age with atrial fibrillation (AF) documented on ECG who had started oral anticoagulant treatment indicated and managed by the hospital where the study was carried out. They were followed for the year immediately after the start of treatment. Major hemorrhages (fatal, requiring transfusion of ≥2 units of packed red cells, or in a critical location), treatment interruptions, and the reasons for them were recorded. In addition, the INR at the time of the event, other factors that might have contributed to the hemorrhage, risk factors for stroke and for hemorrhage, and concurrent medications were recorded.

Results

The study included 472 patients; 53% were men and 54% were older than 75 years. In 59% of cases anticoagulation was indicated for a first episode of AF. Ninety percent of patients >80 years had a high risk of stroke (CHADS2 score ≥2) and 5% a high risk of hemorrhage. Forty percent were taking aspirin at a dose of 80 mg/day. Follow-up was complete for 100% of participants. At one year, 65% were still taking anticoagulants, 28% had stopped treatment, and 3% had died of causes unrelated to treatment.

The rate of major hemorrhage was 7.2 per 100 patient-years. The risk of hemorrhage was higher in patients >80 years than in younger patients (Fig. 1).
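A rate per 100 patient-years is an incidence rate: events divided by accumulated follow-up time, scaled by 100. A minimal sketch, using hypothetical counts (the study's raw event and follow-up totals are not given here):

```python
# Incidence rate expressed per 100 patient-years.
def rate_per_100_patient_years(events, patient_years):
    return 100 * events / patient_years

# Hypothetical example: 26 major hemorrhages over 360 patient-years
# of follow-up gives a rate of about 7.2 per 100 patient-years.
rate = rate_per_100_patient_years(26, 360)
```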

As expected, the risk of hemorrhage increased with the INR and was higher in the first 3 months of treatment (Fig. 2). It was also higher in patients with a CHADS2 score ≥3 (a scale for assessing stroke risk).

Twenty-eight percent of patients had stopped anticoagulants within the first year. The main reason in those <80 years was restoration of sinus rhythm, whereas in those >80 years it was doubts about the safety of the treatment. In the older group, discontinuations were much more frequent in the first 3 months, converging with the younger group thereafter.

Conclusions

The authors conclude that the hemorrhage rate in elderly patients is higher than that published in clinical trials carried out in younger populations, which may be an obstacle to starting this treatment.

Conflicts of interest

Several of the authors have received fees from various pharmaceutical companies for various services. Funded by the Robert Wood Johnson Foundation Generalist Physician Faculty Scholars Program.

Comment

Oral anticoagulants are a group of drugs whose efficacy has been amply demonstrated in the secondary prevention of venous thromboembolism, in the prevention of systemic thromboembolism in patients with prosthetic valves or atrial fibrillation, and in the prevention of stroke, reinfarction, and death in patients with acute myocardial infarction. However, several studies have shown that many patients in whom they would theoretically be indicated do not receive them. This is partly because they are difficult drugs to manage: they have a narrow therapeutic window, response varies greatly between individuals, and they interact with numerous foods and drugs, so in some cases they are not prescribed for fear of their adverse effects.

The hemorrhage rate observed in this study is clearly higher than that of published clinical trials, especially in people >80 years. The indication for anticoagulant treatment in this population group is a matter of debate. Although in all studies the risk of hemorrhage increases with age, some authors attribute this to associated factors, so that, all else being equal, the risk would be no higher than in younger patients.

This study was carried out in a single center, so further studies in this age group would be desirable to establish more precisely the risk of anticoagulant treatment in the elderly.

Bibliography

  1. Holbrook AM, Pereira JA, Labiris R, McDonald H, Douketis JD, et al. Systematic overview of warfarin and its drug and food interactions. Arch Intern Med 2005; 165: 1095-1106.
  2. Choudhry NK, Anderson GM, Laupacis A, Ross-Degnan D, Normand S-LT, Soumerai SB. Impact of adverse events on prescribing warfarin in patients with atrial fibrillation: matched pair analysis. BMJ 2006; 332: 141-145.
  3. Bungard TJ, Ghali WA, Teo KK, McAlister FA, Tsuyuki RT. Why do patients with atrial fibrillation not receive warfarin? Arch Intern Med 2000; 160: 41-46.

Author

Manuel Iglesias Rodal. E-mail: mrodal@menta.net.

Harming through prevention?


Source: NEJM

Editor’s Note: On February 15, 2008, after this article had gone to press, the Office for Human Research Protections (OHRP) issued a statement (www.hhs.gov/ohrp/news/recentnews.html#20080215) expressing its new conclusion that Michigan hospitals may continue to implement the checklist developed by Pronovost et al. “without falling under regulations governing human subjects research,” since it “is now being used . . . solely for clinical purposes, not medical research or experimentation.” OHRP further stated that in the research phase, the project “would likely have been eligible for both expedited IRB review and a waiver of the informed consent requirement.”

About 80,000 catheter-related bloodstream infections occur in U.S. intensive care units (ICUs) each year, causing as many as 28,000 deaths and costing the health care system as much as $2.3 billion. If there were procedures that could prevent these infections, wouldn’t we encourage hospitals to introduce them? And wouldn’t we encourage the development, testing, and dissemination of strategies that would get clinicians to use them? Apparently not, judging from the experience of Peter Pronovost and other Johns Hopkins investigators who helped 103 ICUs in 67 Michigan hospitals carry out a highly successful infection-control effort,1 only to run into major problems with federal regulators.

The case demonstrates how some regulations meant to protect people are so poorly designed that they risk harming people instead. The regulations enforced by the Office for Human Research Protections (OHRP) were created in response to harms caused by subjecting people to dangerous research without their knowledge and consent. The regulatory system helps to ensure that research risks are not excessive, confidentiality is protected, and potential subjects are informed about risks and agree to participate. Unfortunately, the system has become complex and rigid and often imposes overly severe restrictions on beneficial activities that present little or no risk.

The Pronovost effort was part of a quality and safety initiative sponsored by the Michigan Hospital Association, with funding from the Agency for Healthcare Research and Quality (AHRQ). The intervention was designed to improve ICU care by promoting the use of five procedures recommended by the Centers for Disease Control and Prevention: washing hands, using full-barrier infection precautions during catheter insertion, cleaning the patient’s skin with disinfectant, avoiding the femoral site if possible, and removing unnecessary catheters. The hospitals designated the clinicians who would lead the teams and provided the necessary supplies. The investigators provided an education program for the team leaders, who educated their colleagues about the procedures and introduced checklists to ensure their use. Infection-control practitioners in each hospital gave the teams feedback on infection rates in their ICUs.

The investigators studied the effect on infection rates and found that they fell substantially and remained low. They also combined the infection-rate data with publicly available hospital-level data to look for patterns related to hospital size and teaching status (they didn’t find any). In this work, they used infection data at the ICU level only; they did not study the performance of individual clinicians or the effect of individual patient or provider characteristics on infection rates.

After the report by Pronovost et al. was published,1 the OHRP received a written complaint alleging that the project violated federal regulations. The OHRP investigated and required Johns Hopkins to take corrective action. The basis of this finding was the OHRP’s disagreement with the conclusion of a Johns Hopkins institutional review board (IRB) that the project did not require full IRB review or informed consent.

The fact that a sophisticated IRB interpreted the regulations differently from the OHRP is a bad sign in itself. You know you are in the presence of dysfunctional regulations when people can’t easily tell what they are supposed to do. Currently, uncertainty about how the OHRP will interpret the term “human-subjects research” and apply the regulations in specific situations causes great concern among people engaged in data-guided activities in health care, since guessing wrong may result in bad publicity and severe sanctions.

Moreover, the requirements imposed in the name of protection often seem burdensome and irrational. In this case, the intervention merely promoted safe and proven procedures, yet the OHRP ruled that since the effect on infection rates was being studied, the activity required full IRB review and informed consent from all patients and providers.

If certain stringent conditions are met, human-subjects researchers may obtain a waiver of informed consent. After the OHRP required the Hopkins IRB to review the project as human-subjects research, the board granted such a waiver. The OHRP had also ruled that the university had failed to ensure that all collaborating institutions were complying with the regulations. Each participating hospital should have received approval from its own IRB or another IRB willing to accept the responsibility of review and oversight. This requirement adds substantial complexity and cost to a study and could sink it altogether.

In my view, the project was a combination of quality improvement and research on organizations, not human-subjects research, and the regulations did not apply. The project was not designed to use ICU patients as human subjects to test a new, possibly risky method of preventing infections; rather, it was designed to promote clinicians’ use of procedures already shown to be safe and effective for the purpose. Each hospital engaged in a classic quality-improvement activity in which team members worked together to introduce best practices and make them routine, with quantitative feedback on outcomes being intrinsic to the process. Such activities should not require IRB review. Since the activity did not increase patients’ risk above the level inherent in ICU care and patient confidentiality was protected, there was no ethical requirement for specific informed consent from patients. Indeed, it is hard to see why anyone would think it necessary or appropriate to ask ICU patients whether they wanted to opt out of a hospital’s effort to ensure the use of proven precautions against deadly infections — or why anyone would think that clinicians should have the right to opt out rather than an ethical obligation to participate.

Did the situation change because hospitals shared their experiences with each other? Since no identifiable patient or clinician information was shared, I don’t think so. Did the fact that quality-improvement experts educated the teams about the best practices change the situation? I don’t think so; bringing in consultants to conduct training activities is normal managerial practice. Did the fact that these experts studied and reported the results change the situation? The investigators were asking whether the hospitals produced and sustained a reduction in ICU infection rates. From one perspective, this was simply an evaluation of the quality-improvement activity; from another, it might be considered research, but the object of study was the performance of organizations.

Of course, the complexity of the regulations leaves room for different interpretations. Moreover, small changes in the facts of the situation can make a large difference in the regulatory burden imposed, even when they make no difference in the risk to patients — a fact underscored by the OHRP’s 11 detailed decision-making charts summarizing the regulations.2 But technical debates about the meaning of “research” and “human subject” miss the most important point: if we want our health care system to engage in data-guided improvement activities that prevent deaths, reduce pain and suffering, and save money, we shouldn’t make it so difficult to do so.

In a public statement on this case,3 the OHRP has indicated that institutions can freely implement practices they think will improve care as long as they don’t investigate whether improvement actually occurs. A hospital can introduce a checklist system without IRB review and informed consent, but if it decides to build in a systematic, data-based evaluation of the checklist’s impact, it is subject to the full weight of the regulations for human-subjects protection.

Obviously, collaborative research and improvement activities require supervision. AHRQ, the state hospital association, hospital managers, and local staff members should all evaluate such projects before taking them on, with a primary focus on their effect on patients’ well-being. This kind of supervision must be in place and working well regardless of whether an activity qualifies as human-subjects research.4,5

The extra layer of bureaucratic complexity embodied in the current regulations makes using data to guide change in health care more difficult and expensive, and it’s more likely to harm than to help. It’s time to modify or reinterpret the regulations so that they protect people from risky research without discouraging low-risk, data-guided activities designed to make our health care system work better.

No potential conflict of interest relevant to this article was reported.
Source Information

Dr. Baily is an associate for ethics and health policy at the Hastings Center, Garrison, NY.

References

  1. Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med 2006;355:2725-2732.
  2. Office for Human Research Protections, U.S. Department of Health and Human Services. Human subject regulations decision charts, September 24, 2004. (Accessed February 1, 2008, at http://www.hhs.gov/ohrp/humansubjects/guidance/decisioncharts.htm.)
  3. Office for Human Research Protections, U.S. Department of Health and Human Services. OHRP statement regarding the New York Times op-ed entitled "A Lifesaving Checklist." (Accessed February 1, 2008, at http://www.hhs.gov/ohrp/news/recentnews.html#20080115.)
  4. Baily MA, Bottrell M, Lynn J, Jennings B. The ethics of using QI methods to improve health care quality and safety. Hastings Cent Rep 2006;36:S1-S40.
  5. Lynn J, Baily MA, Bottrell M, et al. The ethics of using quality improvement methods in health care. Ann Intern Med 2007;146:666-673.

Clinical trials are not always representative of the general population


Published: December 25, 2007

The randomized clinical trial, widely considered the most reliable biomedical research method, can have significant drawbacks, a new study suggests, because patients included may not be representative of the broader population.

The scientists, writing in the December issue of The Annals of Surgical Oncology, reviewed 29 clinical trials of surgical procedures in prostate, colon, breast and lung cancer involving 13,991 patients. Although 62 percent of those cancers occur in people over 65, just 27 percent of the participants in the trials were that old. Although patients younger than 55 account for 16 percent of cancer cases, they made up 44 percent of the participants. More than 86 percent of the participants were white, and fewer than 8 percent African-American.

Breast cancer accounted for 30 percent of cancer cases, yet nearly 75 percent of trial participants had that disease. Although prostate cancer accounted for 27 percent of the cancers, fewer than 2 percent of the patients were in prostate cancer studies.

In colon and lung cancer trials, women were less likely to be enrolled than men, and at all study sites, the rates of participation in trials were extremely low, from 0.04 to 1.7 percent.

Dr. John H. Stewart IV, the lead author and an assistant professor of surgery at Wake Forest University, said the disparities could call the results into question. “Our ability to generalize the findings of surgical trials,” he said, “is directly dependent on having equitable participation in trials by underrepresented groups.”

Escepticemia: According to a study


Gonzalo Casino

Medicine seen from the Internet and passed through the healthy filter of skepticism.

According to a study

On the totum revolutum of research and made-to-measure studies

Nothing seems to back the truthfulness of a message as much as the endorsement of a study. The stock phrase "according to a study" is common currency in health journalism and, increasingly, in advertising for products in which health is a selling point (unfortunately, journalism and advertising often mix and blur together). The word "study" has such broad shoulders and such a wide gullet that it can equally denote a second-rate survey or a rigorous scientific investigation, a trivial statistical analysis or a clinical trial. The truth is that vaguely alluding to "a study" says nothing unless the essential details of that work are added. And this imprecision and calculated ambiguity is exploited, obviously, by the shoddiest pieces of work, which are not only used to advertise the supposed benefits of a product but also find an echo in some journalistic reports, to the greater misfortune and confusion of the consumer.

This situation is especially striking with food products. The importance of diet for health, the demonization of some nutrients (cholesterol, to go no further) and the sanctification of others (vitamins, for example), and the obsession with calories and excess weight, among other circumstances, are fertile ground for food manufacturers to strive to attach the "healthy" label to their products (some, like the winemakers in the United States, have succeeded). There are studies to support the benefits of foods as disparate as wine and walnuts, breakfast cereals and carbonated mineral water. And everything suggests that if a company or lobby is willing to fund "a study," it will always be possible to obtain a health message favorable to its commercial interests. But the current inflation of made-to-measure studies is becoming toxic, above all because the message that reaches the public rarely weighs the scientific evidence, and under the lure of "a study" all kinds of recommendations and claims are proclaimed, many of them unconfirmed.

One of the latest recommendations, obviously based on a study, is to eat ice cream for its richness in protein and its action against stress. The study in question invokes nutritional, sensory, and well-being reasons. But what fresh or processed food could not likewise be recommended for some nutritional, sensory, or well-being reason? The truth is that all foods are advisable and healthy simply by virtue of being food, and making isolated recommendations without considering the context of a diet makes no sense and is not very different from plain advertising. Unfortunately, physicians lack the time and the nutritional knowledge needed to dismantle so many fallacies hidden behind the word "study" and, in passing, to help patient-consumers make informed decisions.

Statistics in Medicine — Reporting of Subgroup Analyses in Clinical Trials


Medical research relies on clinical trials to assess therapeutic benefits. Because of the effort and cost involved in these studies, investigators frequently use analyses of subgroups of study participants to extract as much information as possible. Such analyses, which assess the heterogeneity of treatment effects in subgroups of patients, may provide useful information for the care of patients and for future research. However, subgroup analyses also introduce analytic challenges and can lead to overstated and misleading results.1,2,3,4,5,6,7 This report outlines the challenges associated with conducting and reporting subgroup analyses, and it sets forth guidelines for their use in the Journal. Although this report focuses on the reporting of clinical trials, many of the issues discussed also apply to observational studies.

Subgroup Analyses and Related Concepts

Subgroup Analysis

By “subgroup analysis,” we mean any evaluation of treatment effects for a specific end point in subgroups of patients defined by baseline characteristics. The end point may be a measure of treatment efficacy or safety. For a given end point, the treatment effect — a comparison between the treatment groups — is typically measured by a relative risk, odds ratio, or arithmetic difference. The research question usually posed is this: Do the treatment effects vary among the levels of a baseline factor?
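The three effect measures named above follow directly from a 2×2 table of events by treatment arm. As a minimal sketch (the counts below are hypothetical, not from any cited trial):

```python
# Treatment-effect measures from a hypothetical 2x2 table:
# 40/500 events in the treatment arm, 60/500 in the control arm.
events_t, n_t = 40, 500
events_c, n_c = 60, 500

risk_t = events_t / n_t            # 0.08
risk_c = events_c / n_c            # 0.12

relative_risk = risk_t / risk_c                                    # ~0.667
odds_ratio = (events_t / (n_t - events_t)) / (events_c / (n_c - events_c))
risk_difference = risk_t - risk_c  # the "arithmetic difference", -0.04
```

Note that the three measures are not interchangeable: subgroup effects can look homogeneous on one scale (say, relative risk) yet heterogeneous on another (say, risk difference), which is why the presence or absence of interaction is tied to the chosen measure.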

A subgroup analysis is sometimes undertaken to assess treatment effects for a specific patient characteristic; this assessment is often listed as a primary or secondary study objective. For example, Sacks et al.8 conducted a placebo-controlled trial in which the reduction in the incidence of coronary events with the use of pravastatin was examined in a diverse population of persons who had survived a myocardial infarction. In subgroup analyses, the investigators further examined whether the efficacy of pravastatin relative to placebo in preventing coronary events varied according to the patients’ baseline low-density lipoprotein (LDL) levels.

Subgroup analyses are also undertaken to investigate the consistency of the trial conclusions among different subpopulations defined by each of multiple baseline characteristics of the patients. For example, Jackson et al.9 reported the outcomes of a study in which 36,282 postmenopausal women 50 to 79 years of age were randomly assigned to receive 1000 mg of elemental calcium with 400 IU of vitamin D3 daily or placebo. Fractures, the primary outcome, were ascertained over an average follow-up period of 7.0 years; bone density was a secondary outcome. Overall, no treatment effect was found for the primary outcome; that is, the active treatment was not shown to prevent fractures. The effect of calcium plus vitamin D supplementation relative to placebo on the risk of each of four fracture outcomes was further analyzed for consistency in subgroups defined by 15 characteristics of the participants.

Heterogeneity and Statistical Interactions

The heterogeneity of treatment effects across the levels of a baseline variable refers to the circumstance in which the treatment effects vary across the levels of the baseline characteristic. Heterogeneity is sometimes further classified as being either quantitative or qualitative. In the first case, one treatment is always better than the other, but by various degrees, whereas in the second case, one treatment is better than the other for one subgroup of patients and worse than the other for another subgroup of patients. Such variation, also called “effect modification,” is typically expressed in a statistical model as an interaction term or terms between the treatment group and the baseline variable. The presence or absence of interaction is specific to the measure of the treatment effect.

The appropriate statistical method for assessing the heterogeneity of treatment effects among the levels of a baseline variable begins with a statistical test for interaction.10,11,12,13 For example, Sacks et al.8 showed the heterogeneity in pravastatin efficacy by reporting a statistically significant (P=0.03) result of testing for the interaction between the treatment and baseline LDL level when the measure of the treatment effect was the relative risk. Many trials lack the power to detect heterogeneity in treatment effect; thus, the inability to find significant interactions does not show that the treatment effect seen overall necessarily applies to all subjects. A common mistake is to claim heterogeneity on the basis of separate tests of treatment effects within each of the levels of the baseline variable.6,7,14 For example, testing the hypothesis that there is no treatment effect in women and then testing it separately in men does not address the question of whether treatment differences vary according to sex. Another common error is to claim heterogeneity on the basis of the observed treatment-effect sizes within each subgroup, ignoring the uncertainty of these estimates.
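The distinction drawn above, between testing for interaction and testing each subgroup separately, can be made concrete. The sketch below (all counts hypothetical, not from the cited trials) implements a simple large-sample test for interaction on the log relative-risk scale: the two subgroup log relative risks are compared directly, using the delta-method approximation for their standard errors.

```python
import math

def log_rr(events_t, n_t, events_c, n_c):
    """Log relative risk and its standard error (delta-method approximation)."""
    value = math.log((events_t / n_t) / (events_c / n_c))
    se = math.sqrt(1/events_t - 1/n_t + 1/events_c - 1/n_c)
    return value, se

def interaction_z(subgroup_a, subgroup_b):
    """Z-statistic for heterogeneity: the difference of the two subgroup
    log relative risks, divided by the standard error of that difference."""
    la, sa = log_rr(*subgroup_a)
    lb, sb = log_rr(*subgroup_b)
    return (la - lb) / math.sqrt(sa**2 + sb**2)

# Hypothetical counts per subgroup: (events_t, n_t, events_c, n_c)
women = (30, 1000, 60, 1000)   # RR = 0.50 among women
men = (55, 1000, 60, 1000)     # RR ~ 0.92 among men
z = interaction_z(women, men)  # |z| > 1.96 suggests heterogeneity at 0.05
```

Testing women and men separately would answer a different question (is there an effect within each subgroup?) and, as the text notes, cannot by itself establish that the two effects differ.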

Multiplicity

It is common practice to conduct a subgroup analysis for each of several — and often many — baseline characteristics, for each of several end points, or for both. For example, the analysis by Jackson and colleagues9 of the effect of calcium plus vitamin D supplementation relative to placebo on the risk of each of four fracture outcomes for 15 participant characteristics resulted in a total of 60 subgroup analyses.

When multiple subgroup analyses are performed, the probability of a false positive finding can be substantial.7 For example, if the null hypothesis is true for each of 10 independent tests for interaction at the 0.05 significance level, the chance of at least one false positive result exceeds 40%. Thus, one must be cautious in the interpretation of such results. There are several methods for addressing multiplicity that are based on the use of more stringent criteria for statistical significance than the customary P<0.05.7,15 A less formal approach for addressing multiplicity is to note the number of nominally significant interaction tests that would be expected to occur by chance alone. For example, after noting that 60 subgroup analyses were planned, Jackson et al.9 pointed out that “Up to three statistically significant interaction tests (P<0.05) would be expected on the basis of chance alone,” and then they incorporated this consideration in their interpretation of the results.
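The arithmetic behind both figures in this paragraph, the "exceeds 40%" family-wise error and the "three by chance" expectation of Jackson et al., is elementary:

```python
# Probability of at least one false positive among k independent tests,
# each at significance level alpha, when every null hypothesis is true.
def familywise_error(k, alpha=0.05):
    return 1 - (1 - alpha) ** k

p10 = familywise_error(10)        # ~0.401, i.e. "exceeds 40%"

# The informal expectation cited for 60 planned subgroup analyses:
expected_chance_hits = 60 * 0.05  # about 3 nominally significant tests
```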

Prespecified Analysis versus Post Hoc Analysis

A prespecified subgroup analysis is one that is planned and documented before any examination of the data, preferably in the study protocol. This analysis includes specification of the end point, the baseline characteristic, and the statistical method used to test for an interaction. For example, the Heart Outcomes Prevention Evaluation 2 investigators16 conducted a study involving 5522 patients with vascular disease or diabetes to assess the effect of homocysteine lowering with folic acid and B vitamins on the risk of a major cardiovascular event. The primary outcome was a composite of death from cardiovascular causes, myocardial infarction, and stroke. In the Methods section of their article, the authors noted that “Prespecified subgroup analyses involving Cox models were used to evaluate outcomes in patients from regions with folate fortification of food and regions without folate fortification, according to the baseline plasma homocysteine level and the baseline serum creatinine level.” Post hoc analyses refer to those in which the hypotheses being tested are not specified before any examination of the data. Such analyses are of particular concern because it is often unclear how many were undertaken and whether some were motivated by inspection of the data. However, both prespecified and post hoc subgroup analyses are subject to inflated false positive rates arising from multiple testing. Investigators should avoid the tendency to prespecify many subgroup analyses in the mistaken belief that these analyses are free of the multiplicity problem.

Subgroup Analyses in the Journal — Assessment of Reporting Practices

As part of internal quality-control activities at the Journal, we assessed the completeness and quality of subgroup analyses reported in the Journal during the period from July 1, 2005, through June 30, 2006. A detailed description of the study methods can be found in the Supplementary Appendix, available with the full text of this article at http://www.nejm.org. In this report, we describe the clarity and completeness of subgroup-analysis reporting, evaluate the authors’ interpretation and justification of the results of subgroup analyses, and recommend guidelines for reporting subgroup analyses.

Among the original articles published in the Journal during the period from July 1, 2005, through June 30, 2006, a total of 95 articles reported primary outcome results from randomized clinical trials. Among these 95 articles, 93 reported results from one clinical trial; the remaining 2 articles reported results from two trials. Thus, results from 97 trials were reported, from which subgroup analyses were reported for 59 trials (61%). Table 1 summarizes the characteristics of the trials. We found that larger trials and multicenter trials were significantly more likely to report subgroup analyses than smaller trials and single-center trials, respectively. With the use of multivariate logistic-regression models, when ranked according to the number of participants enrolled in a trial and compared with trials with the fewest participants, the odds ratio for reporting subgroup analyses for the second quartile was 1.38 (95% confidence interval [CI], 0.45 to 4.20), for the third quartile was 1.98 (95% CI, 0.62 to 6.24), and for the fourth quartile was 8.90 (95% CI, 2.10 to 37.78) (P=0.02, trend test). The odds ratio for reporting subgroup analyses in multicenter trials as compared with single-center trials was 4.33 (95% CI, 1.56 to 12.16).

Table 1. Characteristics and Predictors of Reporting Subgroup Analyses in 97 Clinical Trials.

Among the 59 trials that reported subgroup analyses, these analyses were mentioned in the Methods section for 21 trials (36%), in the Results section for 57 trials (97%), and in the Discussion section for 37 trials (63%); subgroup analyses were reported in both the text and a figure or table for 39 trials (66%). Other characteristics of the reports are shown in Figure 1. In general, we were unable to determine the number of subgroup analyses conducted; we attempted to count the number of subgroup analyses reported in the article and found that this number was unclear in nine articles (15%). For example, Lees et al.17 reported that "We explored analyses of numerous other subgroups to assess the effect of baseline prognostic factors or coexisting conditions on the treatment effect but found no evidence of nominal significance for any biologically likely factor." For four of these nine articles, we were able to determine that at least eight subgroup analyses were reported. In 40 trials (68%), it was unclear whether any of the subgroup analyses were prespecified or post hoc, and in 3 others (5%) it was unclear whether some were prespecified or post hoc. Interaction tests were reported to have been used to assess the heterogeneity of treatment effects for all subgroup analyses in only 16 trials (27%), and they were reported to be used for some, but not all, subgroup analyses in 11 trials (19%).

Figure 1. Reporting of Subgroup Analyses from 59 Clinical Trials. The specific reporting characteristics examined in this quality-improvement exercise are indicated in each panel. CI denotes confidence interval.

We assessed whether information was provided about treatment effects within the levels of each subgroup variable (Figure 1). In 25 trials (42%), information about treatment effects was reported consistently for all of the reported subgroup analyses, and in 13 trials (22%), nothing was reported. Investigators in 15 trials (25%), all using superiority designs,10 claimed heterogeneity of treatment effects between at least one subject subgroup and the overall study population (see Table 1 of the Supplementary Appendix). For 4 of these 15 trials, this claim was based on a nominally significant interaction test, and for 4 others it was based on within-subgroup comparisons only. In the remaining seven trials, significant results of interaction tests were reported for some but not all subgroup analyses. When heterogeneity in the treatment effect was reported, for two trials (13%), investigators offered caution about multiplicity, and for four trials (27%), investigators noted the heterogeneity in the Abstract section.

Analysis of Our Findings and Guidelines for Reporting Subgroups

In the 1-year period studied, the reporting of subgroup analyses was neither uniform nor complete. Because the design of future clinical trials can depend on the results of subgroup analyses, uniformity in reporting would strengthen the foundation on which such research is built. Furthermore, uniformity of reporting will be of value in the interval between recognition of a potential subgroup effect and the availability of adequate data on which to base clinical decisions.

Problems in the reporting of subgroup analyses are not new.1,2,3,4,5,6,18 Assmann et al.2 reported shortcomings of subgroup analyses in a review of the results of 50 trials published in 1997 in four leading medical journals. More recently, Hernández et al.4 reviewed the results of 63 cardiovascular trials published in 2002 and 2004 and noted the same problems. To improve the quality of reports of parallel-group randomized trials, the Consolidated Standards of Reporting Trials statement was proposed in the mid-1990s and revised in 2001.19 Although there has been considerable discussion of the potential problems associated with subgroup analysis and recommendations on when and how subgroup analyses should be conducted and reported,19,20 our analysis of recent articles shows that problems and ambiguities persist in articles published in the Journal. For example, we found that in about two thirds of the published trials, it was unclear whether any of the reported subgroup analyses were prespecified or post hoc. In more than half of the trials, it was unclear whether interaction tests were used, and in about one third of the trials, within-level results were not presented in a consistent way.

When properly planned, reported, and interpreted, subgroup analyses can provide valuable information. With the availability of Web supplements, the opportunity exists to present more detailed information about the results of a trial. The purpose of the guidelines (see Guidelines for Reporting Subgroup Analysis) is to encourage more clear and complete reporting of subgroup analyses. In some settings, a trial is conducted with a subgroup analysis as one of the primary objectives. These guidelines are directly applicable to the reporting of subgroup analyses in the primary publication of a clinical trial when the subgroup analyses are not among the primary objectives. In other settings, including observational studies, we encourage complete and thorough reporting of the subgroup analyses in the spirit of the guidelines listed.

The editors and statistical consultants of the Journal consider these guidelines to be important in the reporting of subgroup analyses. The goal is to provide transparency in the statistical methods used in order to increase the clarity and completeness of the information reported. As always, these are guidelines and not rules; additions and exemptions can be made as long as there is a clear case for such action.

Guidelines for Reporting Subgroup Analysis.

In the Abstract:

Present subgroup results in the Abstract only if the subgroup analyses were based on a primary study outcome, if they were prespecified, and if they were interpreted in light of the totality of prespecified subgroup analyses undertaken.

In the Methods section:

Indicate the number of prespecified subgroup analyses that were performed and the number of prespecified subgroup analyses that are reported. Distinguish a specific subgroup analysis of special interest, such as that in the article by Sacks et al.,8 from the multiple subgroup analyses typically done to assess the consistency of a treatment effect among various patient characteristics, such as those in the article by Jackson et al.9 For each reported analysis, indicate the end point that was assessed and the statistical method that was used to assess the heterogeneity of treatment differences.

Indicate the number of post hoc subgroup analyses that were performed and the number of post hoc subgroup analyses that are reported. For each reported analysis, indicate the end point that was assessed and the statistical method used to assess the heterogeneity of treatment differences. Detailed descriptions may require a supplementary appendix.

Indicate the potential effect on type I errors (false positives) due to multiple subgroup analyses and how this effect is addressed. If formal adjustments for multiplicity were used, describe them; if no formal adjustment was made, indicate the magnitude of the problem informally, as done by Jackson et al.9

In the Results section:

When possible, base analyses of the heterogeneity of treatment effects on tests for interaction, and present them along with effect estimates (including confidence intervals) within each level of each baseline covariate analyzed. A forest plot21,22 is an effective method for presenting this information.

In the Discussion section:

Avoid overinterpretation of subgroup differences. Be properly cautious in appraising their credibility, acknowledge the limitations, and provide supporting or contradictory data from other studies, if any.

No potential conflict of interest relevant to this article was reported.

We thank Doug Altman, John Bailar, Colin Begg, Mohan Beltangady, Marc Buyse, David DeMets, Stephen Evans, Thomas Fleming, David Harrington, Joe Heyse, David Hoaglin, Michael Hughes, John Ioannidis, Curtis Meinert, James Neaton, Robert O’Neill, Ross Prentice, Stuart Pocock, Robert Temple, Janet Wittes, and Marvin Zelen for their helpful comments.

References

  1. Yusuf S, Wittes J, Probstfield J, Tyroler HA. Analysis and interpretation of treatment effects in subgroups of patients in randomized clinical trials. JAMA 1991;266:93-98.
  2. Assmann SF, Pocock SJ, Enos LE, Kasten LE. Subgroup analysis and other (mis)uses of baseline data in clinical trials. Lancet 2000;355:1064-1069.
  3. Pocock SJ, Assmann SF, Enos LE, Kasten LE. Subgroup analysis, covariate adjustment and baseline comparisons in clinical trial reporting: current practice and problems. Stat Med 2002;21:2917-2930.
  4. Hernández A, Boersma E, Murray G, Habbema J, Steyerberg E. Subgroup analyses in therapeutic cardiovascular clinical trials: are most of them misleading? Am Heart J 2006;151:257-264.
  5. Parker AB, Naylor CD. Subgroups, treatment effects, and baseline risks: some lessons from major cardiovascular trials. Am Heart J 2000;139:952-961.
  6. Rothwell PM. Subgroup analysis in randomised controlled trials: importance, indications, and interpretation. Lancet 2005;365:176-186.
  7. Lagakos SW. The challenge of subgroup analyses — reporting without distorting. N Engl J Med 2006;354:1667-1669. [Erratum, N Engl J Med 2006;355:533.]
  8. Sacks FM, Pfeffer MA, Moye LA, et al. The effect of pravastatin on coronary events after myocardial infarction in patients with average cholesterol levels. N Engl J Med 1996;335:1001-1009.
  9. Jackson RD, LaCroix AZ, Gass M, et al. Calcium plus vitamin D supplementation and the risk of fractures. N Engl J Med 2006;354:669-683. [Erratum, N Engl J Med 2006;354:1102.]
  10. Pocock SJ. Clinical trials: a practical approach. Chichester, England: John Wiley, 1983.
  11. Halperin M, Ware JH, Byar DP, et al. Testing for interaction in an IxJxK contingency table. Biometrika 1977;64:271-275.
  12. Simon R. Patient subsets and variation in therapeutic efficacy. Br J Clin Pharmacol 1982;14:473-482.
  13. Gail M, Simon R. Testing for qualitative interactions between treatment effects and patient subsets. Biometrics 1985;41:361-372.
  14. Brookes ST, Whitely E, Egger M, Smith GD, Mulheran PA, Peters T. Subgroup analyses in randomized trials: risks of subgroup-specific analyses; power and sample size for the interaction test. J Clin Epidemiol 2004;57:229-236.
  15. Bailar JC III, Mosteller F, eds. Medical uses of statistics. 2nd ed. Waltham, MA: NEJM Books, 1992.
  16. Lonn E, Yusuf S, Arnold MJ, et al. Homocysteine lowering with folic acid and B vitamins in vascular disease. N Engl J Med 2006;354:1567-1577. [Erratum, N Engl J Med 2006;355:746.]
  17. Lees KR, Zivin JA, Ashwood T, et al. NXY-059 for acute ischemic stroke. N Engl J Med 2006;354:588-600.
  18. Al-Marzouki S, Roberts I, Marshall T, Evans S. The effect of scientific misconduct on the results of clinical trials: a Delphi survey. Contemp Clin Trials 2005;26:331-337.
  19. Moher D, Schulz KF, Altman DG, et al. The CONSORT Statement: revised recommendations for improving the quality of reports of parallel-group randomized trials. (Accessed November 1, 2007, at http://www.consort-statement.org/.)
  20. International Conference on Harmonisation (ICH). Guidance for industry: E9 statistical principles for clinical trials. Rockville, MD: Food and Drug Administration, September 1998. (Accessed November 1, 2007, at http://www.fda.gov/cder/guidance/ICH_E9-fnl.PDF.)
  21. Cuzick J. Forest plots and the interpretation of subgroups. Lancet 2005;365:1308.
  22. Wactawski-Wende J, Kotchen JM, Anderson GL, et al. Calcium plus vitamin D supplementation and the risk of colorectal cancer. N Engl J Med 2006;354:684-696.

Publication Bias or Funding Bias?

By ERIC NAGOURNEY

Published: November 13, 2007

Inhalers are an effective treatment for asthma and other respiratory diseases, but they can have adverse side effects. The conclusions of studies on these effects apparently depend in part on who pays for the study.

A review of more than 500 studies has found that independently backed studies of the inhalers, formally known as inhaled corticosteroids, are up to four times as likely to find adverse effects as studies paid for by drug companies. The paper appears in the Oct. 22 issue of The Archives of Internal Medicine.

Even randomized clinical trials — the “gold standard” for clinical research — were two and a half times as likely to find adverse effects if a drug company did not pay for the work. Moreover, when authors of studies with drug company financing did report adverse events, they were less likely than authors of independently backed studies to interpret them as clinically significant. But when the researchers did a statistical analysis that eliminated the effect of study design, the disparities were no longer apparent. This suggests that design features chosen before the study begins might lead to a certain kind of finding about adverse effects.

Reviews of drug-company backed studies of other drugs have found similar results.

Many medical journals now require authors to disclose their financial relationships. The senior author of the review, Dr. F. Javier Nieto, professor of population and health studies at the University of Wisconsin, recommended requiring “that the disclosure is made in the abstract, right up front.”

The rest of the article is at the NYT. Not a word wasted.

Pediatría Basada en la Evidencia (Evidence-Based Pediatrics)


Adolescents and Drugs: A Challenge for Health Professionals
Hidalgo Vicario MI, Redondo Romero A. Evid Pediatr. 2007;3:60.

The Protective Role of Breastfeeding Against Childhood Infections: A Critical Analysis of the Study Methodology
Paricio Talayero JM. Evid Pediatr. 2007;3:61.

Critically Appraised Articles

The exclusion criteria used in clinical trials are not always justified and limit the generalizability of their results
Martin Muñoz P, Fernández Rodríguez M. Evid Pediatr. 2007; 3:62.

In children with adenoid hypertrophy, topical corticosteroids may be useful, although further studies are needed to confirm their efficacy
Fernández Rodriguez M, Martín Muñoz P. Evid Pediatr. 2007; 3:63.

Pregnant women in Spain take folic acid for the primary prevention of congenital defects at the wrong time and at very high doses
Gonzalez de Dios J, Ochoa Sangrador C. Evid Pediatr. 2007; 3:64.

Are formulas for estimating children's weight valid in the emergency department?
Suwezda A, Melamud A, Matamoros R. Evid Pediatr. 2007; 3:65.

Advanced maternal and paternal age: possible risk factors for autism
Esparza Olcina MJ, Buñuel Álvarez JC. Evid Pediatr. 2007; 3:66.

Lack of time and training are the main difficulties health professionals face in identifying substance use among young people
De la Rosa Morales V, González Rodríguez MP. Evid Pediatr. 2007; 3:67.

Breastfeeding reduces the risk of hospital admission for gastroenteritis and lower respiratory tract infection in developed countries
Olivares Grohnert M, Buñuel Álvarez JC. Evid Pediatr. 2007;3:68.

Schoolboys who bully other children at 8 years of age, and who have associated psychiatric symptoms, are at greater risk of committing offenses in late adolescence
Chalco Orrego JP, Bada Mancilla CA, Rojas Galarza RA. Evid Pediatr. 2007;3:69.

A numerical indicator summarizing national alcohol-control policies correlates well with alcohol consumption
Carvajal Encina F, Balaguer A. Evid Pediatr. 2007;3:70.

Cryptorchidism should be treated before puberty to prevent testicular cancer
Orejón de Luna G, Aizpurua Galdeano P. Evid Pediatr. 2007;3:71.

Intrasphincteric injection of botulinum toxin is as effective as internal sphincter myectomy in the treatment of chronic idiopathic constipation
Bonillo Perales A, Ibañez Pradas V. Evid Pediatr. 2007; 3:72.

In patients with mild-to-moderate bronchiolitis and no risk factors, chest radiography has little clinical utility
Ochoa Sangrador C, Castro Rodríguez JA. Evid Pediatr. 2007; 3:73.

Invasive pneumococcal disease: increased incidence of non-vaccine serotypes after universal vaccination of Alaska Native children
Ruiz-Canela Caceres J, Juanes de Toledo B. Evid Pediatr. 2007; 3:74.

Early probiotic supplementation in very-low-birth-weight preterm infants may reduce the risk of necrotizing enterocolitis
Ibáñez Pradas V, García Vera C. Evid Pediatr. 2007; 3:75.

In asymptomatic newborns, pulse oximetry has limited sensitivity for the diagnosis of congenital heart disease, so it appears to be of little use as a screening method
Aparicio Sánchez JL, Puebla Molina. Evid Pediatr. 2007; 3:76.

A quadrivalent human papillomavirus vaccine prevents high-grade cervical lesions associated with serotypes 16 and 18 in young women without prior infection
Orejón de Luna G, Ochoa Sangrador C. Evid Pediatr. 2007;3:77.

Very-low-birth-weight infants born in centers with higher-level, higher-volume neonatal units have lower mortality
Bernaola Aponte G, Aparicio Sánchez JL. Evid Pediatr. 2007;3:78.

Osteoarticular infections in children: should we think of Kingella kingae first?
Carreazo Pariasca NY, Cuervo Valdés JJ. Evid Pediatr. 2007;3:79.

In children with a first uncomplicated urinary tract infection and a normal prenatal renal ultrasound, a follow-up renal ultrasound may not be necessary
González de Dios J, Perdikidis Olivieri L. Evid Pediatr. 2007;3:80.

Fundamentals of Evidence-Based Medicine

Appraising Scientific Articles on Prognosis
González de Dios J, Ibáñez Pradas V, Modesto i Alapont V. Evid Pediatr. 2007;3:81

Evidence-Based Clinical Decision Making: From the Article to the Patient

In phimosis, treatment with topical corticosteroids is advisable before considering a surgical option
Orejón de Luna G, Fernández Rodríguez M. Evid Pediatr. 2007;3:82.

Artículos traducidos

Effects of the application of botulinum toxin type A on upper limb function in children with cerebral palsy: a systematic review. Authorized translation of: Reeuwijk A, van Schie PE, Becher JG, Kwakkel G. Effects of botulinum toxin type A on upper limb function in children with cerebral palsy: a systematic review. Clin Rehabil. 2006 May;20(5):375-387. University of York. Centre for Reviews and Dissemination (CRD). Database of Abstracts of Reviews of Effects (DARE) [accessed: 17-6-2007]. Available at: http://www.crd.york.ac.uk/CRDWeb/ShowRecord.asp?View=Full&ID=120060024
Barroso Espadero D. Evid Pediatr. 2007;3:83.
[HTM] [PDF]

Which radiological investigations should be performed to identify fractures in suspected child abuse? Authorized translation of: Kemp AM, Butler A, Morris S, Mann M, Kemp KW, Rolfe K, Sibert JR, Maguire S. Which radiological investigations should be performed to identify fractures in suspected child abuse? Clinical Radiology. 2006;61:723-36. University of York. Centre for Reviews and Dissemination (CRD). Database of Abstracts of Reviews of Effects (DARE) [accessed: 14-05-07]. Available at: http://www.crd.york.ac.uk/CRDWeb/ShowRecord.asp?ID=12006008408
Esparza Olcina MJ. Evid Pediatr. 2007;3:84.
[HTM] [PDF]

Strategies to get children to eat more fruit and vegetables: a systematic review. Authorized translation of: Knai C, Pomerleau J, Lock K, McKee M. Getting children to eat more fruit and vegetables: a systematic review. University of York. Centre for Reviews and Dissemination (CRD). Database of Abstracts of Reviews of Effects (DARE) [accessed: 28-7-2007]. Available at: http://www.crd.york.ac.uk/CRDWeb/ShowRecord.asp?View=Full&ID=12006001030
González Rodríguez MP. Evid Pediatr. 2007;3:85.
[HTM] [PDF]

Does universal childhood vaccination against influenza have an indirect benefit for the community? A review of the evidence. Authorized translation of: Jordan R, Connock M, Albon E, Fry-Smith A, Olowokure B, Hawker J, et al. Universal vaccination of children against influenza: are there indirect benefits to the community? A systematic review of the evidence. University of York. Centre for Reviews and Dissemination (CRD). Database of Abstracts of Reviews of Effects (DARE) [accessed: 28-6-2007]. Available at: http://www.crd.york.ac.uk/CRDWeb/ShowRecord.asp?View=Full&ID=12006001034
Aizpurua Galdeano MP. Evid Pediatr. 2007;3:86.
[HTM] [PDF]

Other selected articles not critically appraised

Other selected articles not critically appraised.
Grupo de Trabajo de Pediatría Basada en la Evidencia. Evid Pediatr. 2007;3:87.
[HTM]