Validity of Clinical Trials


Juan Gérvas: Validity of Clinical Trials

Evidencias en Pediatría.

Evidence-Based Medicine has several tools to support medical decision-making, but its most characteristic element is the clinical trial, whose design is regarded as the “gold standard” for the evaluation of therapeutic interventions.
The randomized double-blind clinical trial is the benchmark of scientific quality in support of clinical medical work and in the organization of health services. Indeed, many regard the randomized double-blind clinical trial as an experimental study, even though it is no more than the adaptation to the health field of an early twentieth-century design from the field of Pedagogy, a “quasi-experimental” study design1.
History offers distinguished precedents of proposals for clinical trials; the best known is that of the Italian poet Petrarch2, as critical of physicians as a later poet, the Spaniard Quevedo. Petrarch wrote to his friend Boccaccio, in the middle of the fourteenth century: “if a hundred people, or a thousand, all of the same age and general constitution, accustomed to the same food, had all fallen victim to the same disease at the same time, and half of them followed the prescriptions of our contemporary physicians while the other half were guided by their natural instinct and common sense, but without doctors of any kind, I have no doubt which group would fare better”. As can be seen, a proposal biased against physicians, and one that found little echo.
Resonance was likewise denied the studies of the Frenchman Pierre Louis, who at the beginning of the nineteenth century demonstrated the uselessness and danger of bloodletting in the treatment of pneumonia.
In 1948 the first clinical trial properly so called was published, on the efficacy of streptomycin in tuberculosis3,4. Behind this new movement were many critical young Englishmen, among them a confirmed bachelor and heavy smoker, by his own account in his auto-obituary, Archie Cochrane5. The beneficial influence of his ideas6 reaches the present day, as the Cochrane Centres and the Cochrane Library demonstrate, although not without some shadows7. In this paper I review several questions concerning sample selection in clinical trials, a basic issue when assessing the results that are later used in Evidence-Based Medicine.

Introductory Course in Clinical Research. Chapter 5: Sample Selection: Sampling Techniques and Sample Size

T. Seoane(a), J.L.R. Martín(b), E. Martín-Sánchez(c), S. Lurueña-Segovia(d), F.J. Alonso Moreno(e)

(a) Clinical Research Area, Fundación para la Investigación Sanitaria en Castilla-La Mancha (FISCAM), Toledo.
(b) Clinical Research Area, FISCAM, Toledo; Applied Research Unit, Hospital Nacional de Parapléjicos, Toledo.
(c) Clinical Research Area, FISCAM, Toledo.
(d) Clinical Research Area, FISCAM, Toledo; FENNSI Group, Fundación Hospital Nacional de Parapléjicos, Toledo.
(e) Centro de Salud Sillería, Toledo; Research Lead, Semergen.

To carry out a research project we must obtain data from the target population, defined as the set of elements about which we wish to know some characteristic. In some studies every element of the population can actually be measured, which is possible only when the population is not very large and all of its elements are accessible. Usually, however, studying the entire population is unfeasible, since the empirical work required is costly and demands a great deal of time and resources.

To obtain reliable results it is not necessary to collect data from every element of the population; it suffices to measure the variables in a subset of elements called the sample. The study will have the necessary validity and reliability if this subset is representative of the target population and the results can be extrapolated to it.

There are different techniques or procedures for selecting the sample, depending on the time available, the economic resources, and the nature of the population elements. The set of these techniques is called sampling.
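As an illustration of the sampling techniques the course refers to, here is a minimal sketch of two common probability-sampling methods, simple random and systematic sampling. The patient registry and sample sizes are invented for the example; this is not code from the original article.

```python
import random

def simple_random_sample(population, n, seed=None):
    """Draw n elements without replacement; every subset of size n is equally likely."""
    rng = random.Random(seed)
    return rng.sample(population, n)

def systematic_sample(population, n):
    """Take every k-th element after a random start, with k = N // n."""
    k = len(population) // n
    start = random.randrange(k)
    return population[start::k][:n]

# Hypothetical registry of 500 patients, identified 1..500
patients = list(range(1, 501))
print(simple_random_sample(patients, 10, seed=42))
print(systematic_sample(patients, 10))
```

Seeding the generator makes the random selection reproducible, which is useful when a sampling frame must be auditable.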

The necessary sample size must be defined at the design stage of the study; its calculation is related to problems studied by Statistical Inference, which make it possible to draw scientifically valid conclusions about the population.
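For a concrete instance of such a calculation, the classic formula for the sample size needed to estimate a proportion is n = z²·p(1−p)/d². The sketch below implements it with an optional finite-population correction; the default values (p = 0.5 worst case, 5% margin, 95% confidence) are illustrative assumptions, not figures from the article.

```python
import math

def sample_size_proportion(p=0.5, d=0.05, z=1.96, N=None):
    """Minimum n to estimate a proportion p with margin of error d
    at ~95% confidence (z = 1.96); optional finite-population correction."""
    n = z**2 * p * (1 - p) / d**2
    if N is not None:
        n = n / (1 + (n - 1) / N)  # finite population correction
    return math.ceil(n)

print(sample_size_proportion())        # -> 385 (worst case, infinite population)
print(sample_size_proportion(N=2000))  # -> 323 (same precision, population of 2000)
```

Note how the correction shrinks the required sample when the target population itself is small.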

Keywords: population, sample, sampling, sample size, estimation.

Source: Rev. SEMERGEN, Wednesday 1 August 2007. Volume 33, Number 07, pp. 356-361.

Dictionary for Clinical Trials


Book Description:

As a result of the expansion of pharmaceutical medicine there is an ever-increasing need for educational resources. The Dictionary for Clinical Trials, Second Edition comprehensively explains the 3,000 words and short phrases commonly used when designing, running, analysing and reporting clinical trials.
This book is a quick, pocket reference tool to understand the common and less well-used terms within the discipline of clinical trials, and provides an alternative to the textbooks available. Terms are heavily cross-referenced, which helps the reader to understand how terms fit into the broad picture of clinical trials.
  • Wide-ranging, brief, pragmatic explanations of clinical trial terminology
  • Scope includes medical, statistical, epidemiological, ethical, regulatory and data management terminology
  • Thoroughly revised and expanded: 280 more terms than the First Edition, with reference to Cochrane included

# Publisher: Wiley
# Number Of Pages: 262
# Publication Date: 2007-06-11
# ISBN / ASIN: 0470058161
# EAN: 9780470058169


Clinical Trials


Assessing the quality of reports of randomized clinical trials: is blinding necessary?

Jadad AR, Moore RA, Carroll D, Jenkinson C, Reynolds DJ, Gavaghan DJ, McQuay HJ.

Oxford Regional Pain Relief Unit, University of Oxford, UK.

It has been suggested that the quality of clinical trials should be assessed by blinded raters to limit the risk of introducing bias into meta-analyses and systematic reviews, and into the peer-review process. There is very little evidence in the literature to substantiate this. This study describes the development of an instrument to assess the quality of reports of randomized clinical trials (RCTs) in pain research and its use to determine the effect of rater blinding on the assessments of quality. A multidisciplinary panel of six judges produced an initial version of the instrument. Fourteen raters from three different backgrounds assessed the quality of 36 research reports in pain research, selected from three different samples. Seven were allocated randomly to perform the assessments under blind conditions. The final version of the instrument included three items. These items were scored consistently by all the raters regardless of background and could discriminate between reports from the different samples. Blind assessments produced significantly lower and more consistent scores than open assessments. The implications of this finding for systematic reviews, meta-analytic research and the peer-review process are discussed.

PMID: 8721797 [PubMed – indexed for MEDLINE]

Quality of a Study: the Jadad Scale


Quality of Study:
A numerical score between 0-5 is assigned as a rough measure of study design/reporting quality (0 being weakest and 5 being strongest). This number is based on a well-established, validated scale developed by Jadad et al. (Jadad AR, Moore RA, Carroll D, et al. Assessing the quality of reports of randomized clinical trials: is blinding necessary? Controlled Clinical Trials 1996;17[1]:1-12). This calculation does not account for all study elements that may be used to assess quality (other aspects of study design/reporting are addressed in the “Evidence Discussion” sections of monographs).

  • A Jadad score is calculated using the seven items in the table below. The first five items are indications of good quality, and each counts as one point towards an overall quality score. The final two items indicate poor quality, and a point is subtracted for each if its criteria are met. The range of possible scores is 0 to 5.

Jadad Score Calculation (Item / Score)

1. Was the study described as randomized (this includes words such as randomly, random, and randomization)? (0/1)
2. Was the method used to generate the sequence of randomization described and appropriate (table of random numbers, computer-generated, etc.)? (0/1)
3. Was the study described as double blind? (0/1)
4. Was the method of double blinding described and appropriate (identical placebo, active placebo, dummy, etc.)? (0/1)
5. Was there a description of withdrawals and dropouts? (0/1)
6. Deduct one point if the method used to generate the sequence of randomization was described and it was inappropriate (patients were allocated alternately, or according to date of birth, hospital number, etc.). (0/-1)
7. Deduct one point if the study was described as double blind but the method of blinding was inappropriate (e.g., comparison of tablet vs. injection with no double dummy). (0/-1)

P = pending verification.
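The seven scoring items above reduce to a short function. The sketch below follows the rules as stated (five additive items, two deductions, final score clamped to 0-5); the function name and boolean interface are our own, not part of the Jadad publication.

```python
def jadad_score(randomized, randomization_described_ok, double_blind,
                blinding_described_ok, withdrawals_described,
                randomization_inappropriate=False,
                blinding_inappropriate=False):
    """Compute a Jadad score (0-5) from the seven items in the table above."""
    # Items 1-5: one point each for indications of good quality
    score = sum([randomized, randomization_described_ok, double_blind,
                 blinding_described_ok, withdrawals_described])
    # Items 6-7: subtract a point for each indication of poor quality
    score -= randomization_inappropriate + blinding_inappropriate
    return max(0, min(5, score))

# A trial described as randomized and double blind, both methods appropriate,
# with withdrawals reported:
print(jadad_score(True, True, True, True, True))  # -> 5
```

Because Python treats booleans as integers, the additions and deductions map directly onto the table's 0/1 and 0/-1 entries.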

Magnitude of Benefit:
This summarizes how strong a benefit is: small, medium, large, or none. If the results are not statistically significant, “NA” for “not applicable” is entered. To define small, medium, and large benefits consistently across different studies and monographs, Natural Standard defines the magnitude of benefit in terms of the standard deviation (SD) of the outcome measure. Specifically, the benefit is considered:

  • Large: if >1 SD
  • Medium: if 0.5 to 0.9 SD
  • Small: if 0.2 to 0.4 SD

In many cases, studies do not report the standard deviation of the change in the outcome measure. However, the standardized change (also known as the effect size) can be calculated: subtract the mean (or mean difference) in the placebo/control group from the mean (or mean difference) in the treatment group, and divide that quantity by the pooled standard deviation (effect size = [Mean Treatment - Mean Placebo]/SDp).
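The effect-size formula and the magnitude labels above can be checked with a few lines of code. In this sketch the group means, standard deviations and sizes are invented, and the threshold mapping is our reading of the published ranges (treating 0.5-1.0 SD as medium and anything over 1.0 SD as large, to close the gaps between the stated bands).

```python
import math

def pooled_sd(sd_t, n_t, sd_c, n_c):
    """Pooled standard deviation of two groups."""
    return math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                     / (n_t + n_c - 2))

def effect_size(mean_t, mean_c, sd_p):
    """Effect size = (Mean Treatment - Mean Placebo) / pooled SD."""
    return (mean_t - mean_c) / sd_p

def magnitude(es):
    """Map |effect size| to the magnitude-of-benefit labels above."""
    es = abs(es)
    if es > 1.0:
        return "large"
    if es >= 0.5:
        return "medium"
    if es >= 0.2:
        return "small"
    return "none"

# Invented example: treatment mean 72, control mean 65, SDs 10 and 12, n = 50 each
sd_p = pooled_sd(10.0, 50, 12.0, 50)
es = effect_size(72.0, 65.0, sd_p)
print(round(es, 2), magnitude(es))  # -> 0.63 medium
```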

Introductory Course in Clinical Research. Chapter 4: The Clinical Trial. Quality Methodology and Bioethics

E. Martín-Sánchez(a), J.L.R. Martín(b), T. Seoane(c), S. Lurueña-Segovia(d), F.J. Alonso Moreno(e)

(a) Clinical Research Area, Fundación para la Investigación Sanitaria en Castilla-La Mancha (FISCAM), Toledo, Spain.
(b) Clinical Research Area, FISCAM, Toledo, Spain; Applied Research Unit, Hospital Nacional de Parapléjicos, Toledo, Spain.
(c) Clinical Research Area, FISCAM, Toledo, Spain; FENNSI Group, Fundación Hospital Nacional de Parapléjicos, Toledo, Spain.
(d) Clinical Research Area, FISCAM, Toledo, Spain.
(e) Centro de Salud Sillería, Toledo, Spain; Research Lead, Semergen.

Analytic studies make it possible to examine and verify causal hypotheses, and clinical trials in particular provide the highest level of evidence for testing such hypotheses. A randomized clinical trial (RCT) is a planned experiment in which two or more preventive, curative, or rehabilitative interventions are compared prospectively, assigned individually and at random to a group of patients, in order to study the effect of those interventions in humans. Carrying one out requires attention to a series of methodological aspects: selecting the sample of subjects according to appropriate selection criteria, randomly assigning subjects to the different intervention groups, choosing the control group, masking or blinding some or all of those taking part in the study, and describing losses and dropouts so that the data can be analysed correctly. Because these studies are conducted on humans, they must meet ethical and legal requirements that protect the participants, which is why obtaining informed consent, as well as a favourable report from a Clinical Research Ethics Committee, is indispensable.

Keywords: experimental studies, clinical trials, Primary Care.
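The random, individual assignment of subjects to intervention groups described in the abstract can be sketched in a few lines. This balanced-allocation example (shuffle the patients, then alternate group labels so group sizes stay equal) is illustrative only; the function name and the eight-patient example are not from the article.

```python
import random

def randomize(patient_ids, groups=("treatment", "control"), seed=None):
    """Randomly allocate each patient to a group: shuffle the patients,
    then cycle through the group labels so sizes stay balanced."""
    rng = random.Random(seed)
    ids = list(patient_ids)
    rng.shuffle(ids)
    return {pid: groups[i % len(groups)] for i, pid in enumerate(ids)}

# Hypothetical example: eight patients allocated to two arms
alloc = randomize(range(1, 9), seed=7)
for pid in sorted(alloc):
    print(pid, alloc[pid])
```

In a real trial the allocation sequence would be generated before recruitment and concealed from the investigators enrolling patients, which is precisely what Jadad items 2 and 6 assess.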

Harming through prevention?


Source: NEJM

Editor’s Note: On February 15, 2008, after this article had gone to press, the Office for Human Research Protections (OHRP) issued a statement (www.hhs.gov/ohrp/news/recentnews.html#20080215) expressing its new conclusion that Michigan hospitals may continue to implement the checklist developed by Pronovost et al. “without falling under regulations governing human subjects research,” since it “is now being used . . . solely for clinical purposes, not medical research or experimentation.” OHRP further stated that in the research phase, the project “would likely have been eligible for both expedited IRB review and a waiver of the informed consent requirement.”

About 80,000 catheter-related bloodstream infections occur in U.S. intensive care units (ICUs) each year, causing as many as 28,000 deaths and costing the health care system as much as $2.3 billion. If there were procedures that could prevent these infections, wouldn’t we encourage hospitals to introduce them? And wouldn’t we encourage the development, testing, and dissemination of strategies that would get clinicians to use them? Apparently not, judging from the experience of Peter Pronovost and other Johns Hopkins investigators who helped 103 ICUs in 67 Michigan hospitals carry out a highly successful infection-control effort,1 only to run into major problems with federal regulators.

The case demonstrates how some regulations meant to protect people are so poorly designed that they risk harming people instead. The regulations enforced by the Office for Human Research Protections (OHRP) were created in response to harms caused by subjecting people to dangerous research without their knowledge and consent. The regulatory system helps to ensure that research risks are not excessive, confidentiality is protected, and potential subjects are informed about risks and agree to participate. Unfortunately, the system has become complex and rigid and often imposes overly severe restrictions on beneficial activities that present little or no risk.

The Pronovost effort was part of a quality and safety initiative sponsored by the Michigan Hospital Association, with funding from the Agency for Healthcare Research and Quality (AHRQ). The intervention was designed to improve ICU care by promoting the use of five procedures recommended by the Centers for Disease Control and Prevention: washing hands, using full-barrier infection precautions during catheter insertion, cleaning the patient’s skin with disinfectant, avoiding the femoral site if possible, and removing unnecessary catheters. The hospitals designated the clinicians who would lead the teams and provided the necessary supplies. The investigators provided an education program for the team leaders, who educated their colleagues about the procedures and introduced checklists to ensure their use. Infection-control practitioners in each hospital gave the teams feedback on infection rates in their ICUs.

The investigators studied the effect on infection rates and found that they fell substantially and remained low. They also combined the infection-rate data with publicly available hospital-level data to look for patterns related to hospital size and teaching status (they didn’t find any). In this work, they used infection data at the ICU level only; they did not study the performance of individual clinicians or the effect of individual patient or provider characteristics on infection rates.

After the report by Pronovost et al. was published,1 the OHRP received a written complaint alleging that the project violated federal regulations. The OHRP investigated and required Johns Hopkins to take corrective action. The basis of this finding was the OHRP’s disagreement with the conclusion of a Johns Hopkins institutional review board (IRB) that the project did not require full IRB review or informed consent.

The fact that a sophisticated IRB interpreted the regulations differently from the OHRP is a bad sign in itself. You know you are in the presence of dysfunctional regulations when people can’t easily tell what they are supposed to do. Currently, uncertainty about how the OHRP will interpret the term “human-subjects research” and apply the regulations in specific situations causes great concern among people engaged in data-guided activities in health care, since guessing wrong may result in bad publicity and severe sanctions.

Moreover, the requirements imposed in the name of protection often seem burdensome and irrational. In this case, the intervention merely promoted safe and proven procedures, yet the OHRP ruled that since the effect on infection rates was being studied, the activity required full IRB review and informed consent from all patients and providers.

If certain stringent conditions are met, human-subjects researchers may obtain a waiver of informed consent. After the OHRP required the Hopkins IRB to review the project as human-subjects research, the board granted such a waiver. The OHRP had also ruled that the university had failed to ensure that all collaborating institutions were complying with the regulations. Each participating hospital should have received approval from its own IRB or another IRB willing to accept the responsibility of review and oversight. This requirement adds substantial complexity and cost to a study and could sink it altogether.

In my view, the project was a combination of quality improvement and research on organizations, not human-subjects research, and the regulations did not apply. The project was not designed to use ICU patients as human subjects to test a new, possibly risky method of preventing infections; rather, it was designed to promote clinicians’ use of procedures already shown to be safe and effective for the purpose. Each hospital engaged in a classic quality-improvement activity in which team members worked together to introduce best practices and make them routine, with quantitative feedback on outcomes being intrinsic to the process. Such activities should not require IRB review. Since the activity did not increase patients’ risk above the level inherent in ICU care and patient confidentiality was protected, there was no ethical requirement for specific informed consent from patients. Indeed, it is hard to see why anyone would think it necessary or appropriate to ask ICU patients whether they wanted to opt out of a hospital’s effort to ensure the use of proven precautions against deadly infections — or why anyone would think that clinicians should have the right to opt out rather than an ethical obligation to participate.

Did the situation change because hospitals shared their experiences with each other? Since no identifiable patient or clinician information was shared, I don’t think so. Did the fact that quality-improvement experts educated the teams about the best practices change the situation? I don’t think so; bringing in consultants to conduct training activities is normal managerial practice. Did the fact that these experts studied and reported the results change the situation? The investigators were asking whether the hospitals produced and sustained a reduction in ICU infection rates. From one perspective, this was simply an evaluation of the quality-improvement activity; from another, it might be considered research, but the object of study was the performance of organizations.

Of course, the complexity of the regulations leaves room for different interpretations. Moreover, small changes in the facts of the situation can make a large difference in the regulatory burden imposed, even when they make no difference in the risk to patients — a fact underscored by the OHRP’s 11 detailed decision-making charts summarizing the regulations.2 But technical debates about the meaning of “research” and “human subject” miss the most important point: if we want our health care system to engage in data-guided improvement activities that prevent deaths, reduce pain and suffering, and save money, we shouldn’t make it so difficult to do so.

In a public statement on this case,3 the OHRP has indicated that institutions can freely implement practices they think will improve care as long as they don’t investigate whether improvement actually occurs. A hospital can introduce a checklist system without IRB review and informed consent, but if it decides to build in a systematic, data-based evaluation of the checklist’s impact, it is subject to the full weight of the regulations for human-subjects protection.

Obviously, collaborative research and improvement activities require supervision. AHRQ, the state hospital association, hospital managers, and local staff members should all evaluate such projects before taking them on, with a primary focus on their effect on patients’ well-being. This kind of supervision must be in place and working well regardless of whether an activity qualifies as human-subjects research.4,5

The extra layer of bureaucratic complexity embodied in the current regulations makes using data to guide change in health care more difficult and expensive, and it’s more likely to harm than to help. It’s time to modify or reinterpret the regulations so that they protect people from risky research without discouraging low-risk, data-guided activities designed to make our health care system work better.

No potential conflict of interest relevant to this article was reported.
Source Information

Dr. Baily is an associate for ethics and health policy at the Hastings Center, Garrison, NY.

References

  1. Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med 2006;355:2725-2732.
  2. Office for Human Research Protections, U.S. Department of Health and Human Services. Human subject regulations decision charts, September 24, 2004. (Accessed February 1, 2008, at http://www.hhs.gov/ohrp/humansubjects/guidance/decisioncharts.htm.)
  3. Office for Human Research Protections, U.S. Department of Health and Human Services. OHRP statement regarding the New York Times op-ed entitled “A Lifesaving Checklist.” (Accessed February 1, 2008, at http://www.hhs.gov/ohrp/news/recentnews.html#20080115.)
  4. Baily MA, Bottrell M, Lynn J, Jennings B. The ethics of using QI methods to improve health care quality and safety. Hastings Cent Rep 2006;36:S1-40.
  5. Lynn J, Baily MA, Bottrell M, et al. The ethics of using quality improvement methods in health care. Ann Intern Med 2007;146:666-673.

Clinical trials are not always representative of the general population


Published: December 25, 2007

The randomized clinical trial, widely considered the most reliable biomedical research method, can have significant drawbacks, a new study suggests, because patients included may not be representative of the broader population.

The scientists, writing in the December issue of The Annals of Surgical Oncology, reviewed 29 clinical trials of surgical procedures in prostate, colon, breast and lung cancer involving 13,991 patients. Although 62 percent of those cancers occur in people over 65, just 27 percent of the participants in the trials were that old. Although patients younger than 55 account for 16 percent of cancer cases, they made up 44 percent of the participants. More than 86 percent of the participants were white, and fewer than 8 percent African-American.

Thirty percent of the cases were breast cancers, but nearly 75 percent of the participants had that disease. Although prostate cancer accounted for 27 percent of the cancers, fewer than 2 percent of the patients were in prostate cancer studies.

In colon and lung cancer trials, women were less likely to be enrolled than men, and at all study sites the rates of participation in trials were extremely low, from 0.04 to 1.7 percent.

Dr. John H. Stewart IV, the lead author and an assistant professor of surgery at Wake Forest University, said the disparities could call the results into question. “Our ability to generalize the findings of surgical trials,” he said, “is directly dependent on having equitable participation in trials by underrepresented groups.”

Meta-analysis: an analysis for its understanding


A monograph by Dr. C. Rafael Avilés Merens, Dr. C. Melvyn Morales Morejón, Lic. Augusto Sao Avilés and Lic. Rubén Cañedo Andalia, 28 December 2005

A literature search on meta-analytic methodology was carried out in several databases, Medline and the Science Citation Index among them. For the Medline search, the descriptor Meta-analysis was used, as indicated in MeSH, the thesaurus of the United States National Library of Medicine. Related terms used by the Institute of Scientific Information were also employed, as well as synonyms or quasi-synonyms obtained from the strategies tested by the authors themselves in the various searches performed and from exchanges with other authors working on the subject. Various techniques were used to locate and subsequently retrieve the information. The authors' participation in invisible colleges constituted an essential source of up-to-date and, not infrequently, unpublished information.


1 – Useful approaches for understanding meta-analyses
2 – Methods
3 – Background
4 – Qualitative and quantitative reviews
5 – Meta-analysis: definition
6 – Classification
7 – Stages
8 – Selection criteria
9 – Methodological characteristics – moderator variables
10 – Substantive characteristics
11 – Extrinsic characteristics
12 – Final considerations
13 – Annex. Documentary and informational processing: interrelations and distinctions
14 – Annex. Meta-analysis by stages
15 – Annex. Problems pertaining to each stage of a review
16 – Annex. The report proposed by authors, editors and critics of meta-analysis
17 – Annex. A quality-control protocol for the presentation of results
18 – Annex. Main types of bias in preparing the review
19 – Annex. Statistics in the meta-analytic review
20 – References

Useful approaches for understanding meta-analyses


A distinctive feature of the development achieved by humanity in its cognitive, research and decision-making processes in the face of informational uncertainty is the ever-increasing speed of information transmission, which generates:

  • Information overload.
  • Informational pollution and hyperinflation (with its attendant “infoxication”).
  • Uneven quality of the information, whether published or unpublished.
  • An accumulation of information and knowledge.

Together, within the framework of the so-called Information Society, this creates a formidable challenge: extracting the relevant knowledge from the existing information.

In this regard, and in the clinical setting, CD Mulrow observed: “In this era of proliferation and abundance of publications …, the personal capacity to read and absorb information remains the same. Reducing the great mass of information to chewable pieces is essential for digestion”.1

And further on declared: “We need systematic reviews to integrate all the valid information efficiently and provide a basis for making decisions rationally”.2

“Nowadays, power is determined not by the possession of large volumes of information, but by possessing information of value, that is, information that has been evaluated and analysed and is accurate, relevant, reliable, simple and valid. Ignorance of the existence of valuable information, or of how to obtain it, and likewise information that arrives late, far from giving an organization power, can lead it down markedly wrong paths”.3

Both the use of new information and communication technologies, whose direct result is access to volumes of information exceeding individuals' capacity for analysis and assimilation, and the fact that information is not assembled (and presented) in a form useful to decision-makers (at any level) through adequate synthesis, evaluation and summary of the available options, have driven the search for methods to analyse, synthesize and synergistically integrate the retrieved information.4

Quantitative, systematic and meta-analytic reviews constitute a relevant and significant response to this situation within the framework of health care.

Meta-analyses do not represent a merely quantitative change: by accumulating, evaluating and integrating the available information, they generate a qualitative change in the existing body of knowledge about a given object of study.
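At its simplest, the quantitative integration a meta-analysis performs is inverse-variance weighting: each study's estimate is weighted by the reciprocal of its variance, so more precise studies count for more. The following fixed-effect sketch uses three invented study estimates; it is an illustration of the principle, not the monograph's own method.

```python
def fixed_effect_pool(estimates, variances):
    """Inverse-variance fixed-effect pooling: weight each study by
    1/variance; the pooled estimate is the weighted mean, and the
    pooled variance is the reciprocal of the total weight."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var

# Three hypothetical studies reporting the same effect measure:
pooled, var = fixed_effect_pool([0.40, 0.55, 0.30], [0.04, 0.09, 0.02])
print(round(pooled, 3))  # -> 0.361
```

Note that the pooled estimate sits closest to the most precise study (variance 0.02), which is exactly the qualitative point: integration is not a simple average of the literature.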

The spread of meta-analytic research takes place within the worldwide trend that grants the development of science and technology a decisive role in achieving the well-being of society, as a way of solving present and future problems and advancing society as a whole.
