Risk factors and interventions with statistically significant tiny effects

George CM Siontis1 and John PA Ioannidis1,2,*


1Clinical Trials and Evidence-Based Medicine Unit and the Clinical and Molecular Epidemiology Unit, Department of Hygiene and Epidemiology, University of Ioannina School of Medicine, Ioannina, Greece and 2Stanford Prevention Research Center, Department of Medicine, Stanford University School of Medicine, Stanford, USA
*Corresponding author. Stanford Prevention Research Center, Department of Medicine, Stanford University School of Medicine, Stanford, CA 94305, USA. E-mail: jioannid@stanford.edu
Accepted May 19, 2011.
Abstract


Background Large studies may identify postulated risk factors and interventions with very small effect sizes. We aimed to assess empirically a large number of statistically significant relative risks (RRs) of tiny magnitude and their interpretation by investigators.


Methods RRs in the range between 0.95 and 1.05 were identified in abstracts of articles of cohort studies; articles published in NEJM, JAMA or Lancet; and Cochrane reviews. For each eligible tiny effect and the respective study, we recorded information on study design, participants, risk factor/intervention, outcome, effect estimates, P-values and interpretation by study investigators. We also calculated the probability that each effect lies outside specific intervals around the null (RR interval 0.97–1.03, 0.95–1.05, 0.90–1.10).


Results We evaluated 51 eligible tiny effects (median sample size 112 786 for risk factors and 36 021 for interventions). Most (37/51) appeared in articles published in 2006–10. The effects pertained to nutrition (n = 19), genetic and other biomarkers (n = 8), correlates of health care (n = 8) and diverse other topics (n = 16) of clinical or public health importance and mostly referred to major clinical outcomes. A total of 15 of the 51 effects were >80% likely to lie outside the RR interval 0.97–1.03, but only 8 were >40% likely to lie outside the RR interval 0.95–1.05 and none was >1.7% likely to lie outside the RR interval 0.90–1.10. The authors discussed at least one concern for 23 effects (small magnitude n = 19, residual confounding n = 11, selection bias n = 1). No concerns were expressed for 28 effects.


Conclusions Statistically significant tiny effects for risk factors and interventions of clinical or public health importance are becoming more common in the literature. Cautious interpretation is warranted, since most of these effects could be eliminated by even minimal biases, and their importance is uncertain.
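The probability calculation described in the Methods can be sketched in a few lines. This is our reconstruction under stated assumptions, not the authors' code: it treats log(RR) as normally distributed, recovers the standard error from the reported 95% CI, and sums the two tail probabilities outside the chosen interval. The example numbers are invented.

```python
# Hedged sketch of the interval-probability calculation described in the
# Methods; assumes log(RR) is normal, with SE recovered from the 95% CI.
from math import log
from statistics import NormalDist

def prob_outside(rr, ci_low, ci_high, lo=0.97, hi=1.03):
    """P(true RR lies outside [lo, hi]) given the point estimate and 95% CI."""
    se = (log(ci_high) - log(ci_low)) / (2 * 1.96)  # SE of log(RR)
    dist = NormalDist(mu=log(rr), sigma=se)
    return dist.cdf(log(lo)) + (1 - dist.cdf(log(hi)))

# Illustrative values only: RR 1.04 (95% CI 1.01 to 1.07)
print(round(prob_outside(1.04, 1.01, 1.07), 3))
```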


More about Tamiflu



By Michael Smith, North American Correspondent, MedPage Today
Published: January 17, 2012
Reviewed by Robert Jasmer, MD; Associate Clinical Professor of Medicine, University of California, San Francisco.
A new review of the influenza drug oseltamivir (Tamiflu) has raised questions about both the efficacy of the medication and the commitment of its maker to supply enough data for claims about the drug to be evaluated by independent experts.

It also raises questions about the entire process of systematic review.

Researchers led by Tom Jefferson, MD, of the Cochrane Collaboration, pored over 15 published studies and nearly 30,000 pages of “clinical study reports.”

But, they reported, the clinical study information – data previously shared only with regulators – was only a part of what internal evidence suggested was available.

And many published studies had to be excluded because of missing or contradictory data, Jefferson and colleagues reported.

Action Points  


  • Explain that a new review of an important flu drug has raised questions about the medication and the entire process of systematic review.
  • Point out that the review of oseltamivir showed that there was no evidence of effect on hospital admissions.
The drug’s maker, Switzerland-based Roche, had promised after a previous Cochrane review to make all of its data available for “legitimate analyses.” After a request for the data, Jefferson and colleagues reported, the company sent them 3,195 pages covering 10 treatment trials of the drug.
But, three of the reviewers noted in a parallel report in BMJ, the tables of contents suggested that the data were incomplete.
“What we’re seeing is largely Chapter One and Chapter Two of reports that usually have four or five chapters,” according to the BMJ article’s lead author, Peter Doshi, PhD, of Johns Hopkins University.
Roche did not immediately respond to a telephoned request for comment.
Requests for More Data
The researchers then asked the European Medicines Agency (EMA) for the data, under a Freedom of Information request, and obtained a further 25,453 pages, covering 19 trials.
But that data, too, was incomplete, they said, although the agency said it was all that was available.
The FDA is thought to have the complete reports, but has not yet responded to requests for them, the researchers reported.
Regulatory agencies such as the EMA and FDA routinely see the large clinical study reports, Jefferson and colleagues said in BMJ, but systematic reviewers and the general medical public do not.
“While regulators and systematic reviewers may assess the same clinical trials, the data they look at differs substantially,” they said.
The Cochrane group has been trying for several years to put together a clear-cut systematic review of the evidence on antivirals aimed at flu.
In 2006, the group concluded that the evidence showed that oseltamivir reduced the complications of the flu. But that conclusion was challenged on the basis that a key piece of data was flawed.
An updated review in 2009 – throwing out the flawed study — concluded there wasn’t enough evidence to show that the drug had any effect on complications.
For this analysis, the Cochrane reviewers had originally intended to perform a systematic review of both of the approved neuraminidase inhibitors, oseltamivir and zanamivir (Relenza), using the clinical study reports to supplement published trials.
In the end, they decided that for oseltamivir, they needed more detail in order to perform the review in its entirety. But, they reported, some conclusions could be drawn from published data on the 15 trials and from 16,000 pages of clinical study reports that were available before their deadline.
They also decided to postpone analysis of zanamivir (for which they had 10 trials) because the drug’s maker, GlaxoSmithKline, offered individual patient data which they wanted time to analyze.
The oseltamivir analysis showed:
  • The time to first alleviation of symptoms in people with influenza-like illness was a median of 160 hours in the placebo groups and about 21 hours shorter in those treated with oseltamivir. The difference, evaluated in five studies, was significant at P<0.001.
  • There was no evidence of effect on hospital admissions: In seven studies, the odds ratio was 0.95, with a 95% confidence interval from 0.57 to 1.61, which was nonsignificant at P=0.86.
  • A post-protocol analysis of eight studies showed that oseltamivir patients were less likely to be diagnosed with influenza.
  • The data “lacked sufficient detail to credibly assess” any effect on influenza complications and viral transmission.
Data Discrepancies Found
But discrepancies between the published trial data and the clinical study reports “led us to lose confidence in the journal reports,” Doshi and colleagues wrote in BMJ.
For example, they noted that one journal report clearly said there were no drug-related serious adverse events, but the clinical study report listed three that were possibly related to oseltamivir.
As well, the sheer scope of the clinical study reports meant that much was left out of journal reports. One 2010 study, on safety and pharmacokinetics of oseltamivir at standard and high dosages, took up seven journal pages and 8,545 pages of the clinical study report.
But the researchers were also shaken, they said, by the “fragility” of some of their assumptions.
For instance, they found that the clinical study reports showed that in many trials, the placebo contained two chemicals not found in the oseltamivir capsules.
“We could find no explanation for why these ingredients were only in the placebo,” they wrote in BMJ, “and Roche did not answer our request for more information on the placebo content.”
Jefferson and colleagues also reported they found disparities in the numbers of influenza-infected people reported to be present in the treatment versus control groups of oseltamivir trials.
One possible explanation, they noted, is that oseltamivir affects antibody production – even though the manufacturer says it does not.
Gaps in Knowledge Remain
That question is profoundly important, Doshi told MedPage Today, because it may offer clues to how the drug works – one of the gaps in knowledge about oseltamivir.
“You can’t make good therapeutic decisions if you don’t know how the drug works,” he said – information that he and his colleagues suspect may be buried in the mass of missing data.
It’s also important, he said, because public health agencies have been making decisions to stockpile oseltamivir without a clear understanding of the facts.
Essentially, he said, those decisions have been based on the flawed study – a Roche-supported meta-analysis – that was thrown out of the 2009 Cochrane review.
“They’re taking the drug manufacturer’s word at face value,” he said.
The results seem unlikely to resolve conflicts over the medical value of the drug, which is a major cash cow for Roche, adding some $3.4 billion to the company’s bottom line in 2009 alone, according to Deborah Cohen, investigations editor of BMJ.
In an accompanying article, Cohen said that “clinicians can be forgiven for being confused about what the evidence on oseltamivir says.”
She noted that the European Centre for Disease Prevention and Control, the CDC, and the World Health Organization “differ in their conclusions about what the drug does.”
As well, those conclusions are often contradicted by claims on the drug labels – themselves allowed by regulators, Cohen argued.
The Cochrane reviewers reported grant support from the U.K. National Institute for Health Research and Jefferson and Doshi reported they had no recent financial links with industry.
Cohen is employed by BMJ.

Evidence-Based Medicine in the EMR Era



Jennifer Frankovich, M.D., Christopher A. Longhurst, M.D., and Scott M. Sutherland, M.D.
November 2, 2011 (10.1056/NEJMp1108726)

Many physicians take great pride in the practice of evidence-based medicine. Modern medical education emphasizes the value of the randomized, controlled trial, and we learn early on not to rely on anecdotal evidence. But the application of such superior evidence, however admirable the ambition, can be constrained by trials’ strict inclusion and exclusion criteria — or the complete absence of a relevant trial. For those of us practicing pediatric medicine, this reality is all too familiar. In such situations, we are used to relying on evidence at Levels III through V — expert opinion — or resorting to anecdotal evidence. What should we do, though, when there aren’t even meager data available and we don’t have a single anecdote on which to draw?
We recently found ourselves in such a situation as we admitted to our service a 13-year-old girl with systemic lupus erythematosus (SLE). Our patient’s presentation was complicated by nephrotic-range proteinuria, antiphospholipid antibodies, and pancreatitis. Although anticoagulation is not standard practice for children with SLE even when they’re critically ill, these additional factors put our patient at potential risk for thrombosis, and we considered anticoagulation. However, we were unable to find studies pertaining to anticoagulation in our patient’s situation and were therefore reluctant to pursue that course, given the risk of bleeding. A survey of our pediatric rheumatology colleagues — a review of our collective Level V evidence, so to speak — was equally fruitless and failed to produce a consensus.
Without clear evidence to guide us and needing to make a decision swiftly, we turned to a new approach, using the data captured in our institution’s electronic medical record (EMR) and an innovative research data warehouse. The platform, called the Stanford Translational Research Integrated Database Environment (STRIDE), acquires and stores all patient data contained in the EMR at our hospital and provides immediate advanced text searching capability.1 Through STRIDE, we could rapidly review data on an SLE cohort that included pediatric patients with SLE cared for by clinicians in our division between October 2004 and July 2009. This “electronic cohort” was originally created for use in studying complications associated with pediatric SLE and exists under a protocol approved by our institutional review board.
Of the 98 patients in our pediatric lupus cohort, 10 patients developed thrombosis, documented in the EMR, while they were acutely ill. The prevalence was higher among patients who had persistent nephrotic-range proteinuria and pancreatitis (see table, “Results of Electronic Search of Patient Medical Records [for a Cohort of 98 Pediatric Patients with Lupus] Focused on Risk Factors for Thrombosis Relevant to Our 13-Year-Old Patient with Systemic Lupus Erythematosus”). As compared with our patients with lupus who did not have these risk factors, the relative risk of thrombosis was 14.7 (95% confidence interval [CI], 3.3 to 96) among patients with persistent nephrosis and 11.8 (95% CI, 3.8 to 27) among those with pancreatitis. This automated cohort review was conducted in less than 4 hours by a single clinician. On the basis of this real-time, informatics-enabled data analysis, we made the decision to give our patient anticoagulants within 24 hours after admission.
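As a side note for readers who want the arithmetic behind an estimate like this, here is a minimal sketch of a cohort relative risk with a Katz log-method 95% CI. The counts are hypothetical, since the article does not report the underlying 2x2 cells, and this is not the authors' STRIDE code.

```python
import math

def relative_risk(events_exp, n_exp, events_unexp, n_unexp, z=1.96):
    """Cohort relative risk with a Katz log-method 95% CI."""
    p1, p0 = events_exp / n_exp, events_unexp / n_unexp
    rr = p1 / p0
    # Standard error of log(RR), Katz method
    se = math.sqrt(1/events_exp - 1/n_exp + 1/events_unexp - 1/n_unexp)
    return (rr,
            math.exp(math.log(rr) - z * se),
            math.exp(math.log(rr) + z * se))

# Hypothetical counts for illustration only (not the paper's data):
# 5/12 thromboses with persistent nephrosis vs 5/86 without.
print(relative_risk(5, 12, 5, 86))
```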
Our case is but one example of a situation in which the existing literature is insufficient to guide the clinical care of a patient. But it illustrates a novel process that is likely to become much more standard with the widespread adoption of EMRs and more sophisticated informatics tools. Although many other groups have highlighted the secondary use of EMR data for clinical research,2,3 we have now seen how the same approach can be used to guide real-time clinical decisions. The rapid electronic chart review and analysis were not only feasible, but also more helpful and accurate than physician recollection and pooled colleague opinion. Such real-time availability of data to guide decision making has already transformed other industries,4 and the growing prevalence of EMRs along with the development of sophisticated tools for real-time analysis of deidentified data sets will no doubt advance the use of this data-driven approach to health care delivery. We look forward to a future in which health information systems help physicians learn from every patient at every visit and close the feedback loop for clinical decision making in real time.
Did we make the correct decision for our patient? Thrombosis did not develop, and the patient did not have any sequelae related to her anticoagulation; truthfully, though, we may never really know. We will, however, know that we made the decision on the basis of the best data available — acting, as the fictional detective Nero Wolfe would say, “in the light of experience as guided by intelligence.”5 In the practice of medicine, one can’t do better than that.
Disclosure forms provided by the authors are available with the full text of this article at NEJM.org.
This article (10.1056/NEJMp1108726) was published on November 2, 2011, at NEJM.org.

SOURCE INFORMATION

From the Division of Rheumatology (J.F.), the Division of Systems Medicine (C.A.L.), and the Division of Nephrology (S.M.S.), Department of Pediatrics, Stanford University School of Medicine, Palo Alto, CA.

REFERENCES

  1. Lowe HJ, Ferris TA, Hernandez PM, Weber SC. STRIDE — an integrated standards-based translational research informatics platform. AMIA Annu Symp Proc 2009;14:391-395.
  2. Prokosch HU, Ganslandt T. Perspectives for medical informatics: reusing the electronic medical record for clinical research. Methods Inf Med 2009;48:38-44.
  3. Gunn PW, Hansen ML, Kaelber DC. Underdiagnosis of pediatric hypertension — an example of a new era of clinical research enabled by electronic medical records. AMIA Annu Symp Proc 2007;11:966.
  4. Halevy A, Norvig P, Pereira F. The unreasonable effectiveness of data. IEEE Intelligent Systems 2009 Mar/Apr:8-12.
  5. Stout R. In the Best Families. New York: Viking Press, 1950:71.

The Prostrate Placebo




Science Based Medicine

I seem to be writing a lot about the urinary tract this month. Just coincidence, I assure you. As I slide into old age, medical issues that were once only of cursory interest for a young whippersnapper have increasing potential to be directly applicable to grumpy old geezers. Like benign prostatic hypertrophy (BPH). I am heading into an age where I may have to start paying attention to my prostate (not prostrate, as it is so often pronounced, although an infection of the former certainly can make you the latter), so articles that in former days I would have ignored, I read. JAMA this month has what should be the nail in the coffin of saw palmetto, demonstrating that the herb has no efficacy in the treatment of symptoms of BPH: Effect of increasing doses of saw palmetto extract on lower urinary tract symptoms: a randomized trial.
It demonstrated that, compared to placebo, saw palmetto did nothing. There have been multiple studies in the past with more or less the usual arc of clinical studies of CAM products: better-designed trials showing decreasing efficacy, until excellent studies show no effect. There is the usual meta-analysis or two, where all the suboptimal studies are lumped together, the authors bemoan the quality of the data, and proceed to draw conclusions from the garbage anyway. GIGO.
The NEJM study from 2006 demonstrated that saw palmetto was no better than placebo, but it was suggested that perhaps the dose of saw palmetto was not high enough or that the patients were not treated long enough to demonstrate an effect, and the JAMA study hoped to remedy that defect. There is, as is often the case, no good reason to suspect that saw palmetto would benefit or harm the prostate. Like many herbal preparations, it had widespread uses back in the day, when I had an onion tied to my belt, which was the style at the time. You couldn’t get white onions, because of the war. The only thing you could get was those big yellow ones... but I digress:
“It is also an expectorant, and controls irritation of mucous tissues. It has proved useful in irritative cough, chronic bronchial coughs, whooping-cough, laryngitis, acute and chronic, acute catarrh, asthma, tubercular laryngitis, and in the cough of phthisis pulmonalis. Upon the digestive organs it acts kindly, improving the appetite, digestion, and assimilation. However, its most pronounced effects appear to be those exerted upon the urino-genital tracts of both male and female, and upon all the organs concerned in reproduction. It is said to enlarge wasted organs, as the breasts, ovaries, and testicles, while the paradoxical claim is also made that it reduces hypertrophy of the prostate. Possibly this may be explained by claiming that it tends toward the production of a normal condition, reducing parts when unhealthily enlarged, and increasing them when atrophied.”
At the turn of the century Edwin M Hale, MD and homeopath, wrote a treatise on the topic, extolling its benefits on the prostate and other organs. You will be happy to know that if you have testicular atrophy from being an old masturbator, saw palmetto will help. For no good reason I can find, it became popular only for BPH. As best I can determine from the internet, there was a natural medicine fad in the early 1900s, and saw palmetto became part of the fad. No clinical trials were responsible for the use. And, like acupuncture and homeopathy, there are many explanations for an efficacy that does not exist.
The JAMA study followed 369 men for 72 weeks. They received placebo or saw palmetto twice a day, and at weeks 24 and 48 the dose of each was increased.
They were followed for subjective complaints with the AUASI score, a 7-question self-administered questionnaire. Well validated as a tool for BPH symptoms, it relies overmuch on memory and is subject to wishful thinking on the part of the test taker. I doubt I could ever accurately remember my urinary patterns over the prior month without writing them down.
There were also objective endpoints like peak urine flow, PSA levels, and post-void residual. Makes me wonder again what they want done when the radio advertisement says ‘Void where prohibited by law.’ Would saw palmetto make that easier? When it came to the subjective measurements, there were slight, and similar, improvements in both groups. Objective, anatomic and physiologic endpoints were not affected. No surprise. So much for the powerful placebo.
Adverse effects were the same in both groups, with the only significant difference that the saw palmetto group had more physical injury and trauma. Was this the dreaded nocebo effect, or the random badness that occurs as a result of life? Probably the latter.
Based on the JAMA and NEJM trials, it is reasonable to conclude that saw palmetto has no efficacy in the treatment of symptoms due to BPH.
More interesting is what this article says about the so called placebo effect. This is yet another article that demonstrates that for hard endpoints, altering abnormal physiology or anatomy, placebo does nothing. I bet if we did brain scans of these patients they would show changes when the patient took the medications, and to that I would yawn. Do anything to anyone, give a placebo, tickle their feet, there will be changes in the brain. And while in some studies, increasing placebo amounts and frequency leads to increasing effects, in this study an increase in placebo dose led to no improvement in subjective outcomes.
More real world data to suggest that there are no real placebo effects.
Of course, I have bias. I have spent 30 years in acute care hospitals. My patients have derangements of anatomy and physiology that, if not corrected or at least ameliorated, lead to death or permanent morbidity. Placebo isn’t going to cure endocarditis, stop a gastric ulcer bleed, or reverse a stroke. And even if the patient feels better from the therapeutic relationship, if the anatomic/pathophysiologic abnormalities continue unabated, the patient is toast.
I am not even certain it can be said that placebos cure gastric ulcers. There is little on the natural history of ulcers in the flexible endoscopy age. The only reference I could find suggests that patients who had ulcers found with x-ray screening (not a reliable way to diagnose ulcers, and one that probably under-represented the incidence) and who were not treated had a 24% cure rate at 6 months and a 29% relapse rate at 24 months. Most of the placebo trials followed patients for around 4 weeks and had a higher cure rate in the placebo wing than seen in the natural history report, but the two are not directly comparable. Given the propensity of untreated ulcers to come and go and the unreliability of symptoms for diagnosis, unless there was a study that had a treatment, a placebo, and a no-intervention arm, I do not think it is reasonable to conclude that placebos ‘cure’ ulcers. Especially given the NEJM review that suggested that placebo is usually no more effective than a no-treatment/waiting arm.
Perhaps it is me. I do have some intellectual blind spots, like the anthropic principle. Every time I come across it in a cosmology book, I think that it is inane. I lack the imagination, or perhaps I am not stoned enough, to recognize its significance. So too with the placebo effect.
Placebo effects are probably more like quantum mechanics. The single-slit experiment gives key insights into the fundamental nature of reality, but in the macroscopic world of day-to-day life my electrons move about just fine to heat my house and run my computer. No need to worry about probability functions; I can throw potatoes at a slit all day and never see an interference pattern. So too with the placebo effect. Most of the practical effect is lost in the noise of the complexity of illness, especially in the acute care hospital where I spend most of my time.
the take-home message for clinicians, for physicians, for all health professionals is that their words, behaviors, attitudes are very important, and move a lot of molecules in the patient’s brain. So, what they say, what they do in routine clinical practice is very, very important, because the brain of the patient changes sometimes… there is a reduction in anxiety; but we know that there is a real change…in the patient’s brain which is due to… the ‘ritual of the therapeutic act.’
I do not disagree with that. I consciously try to accentuate just those interactions with every patient, because I know my job as a physician is more than ‘Me find bug, Me kill bug. Me go home’. But I do not think it is important for modifying any disease process I am involved with. Grooming each other has salubrious effects in monkeys, and as best I can tell, the placebo is no more than evolutionarily advanced nit picking.
Large swaths of the world rely on native healers and the only tool in their armamentarium is the “ritual of the therapeutic act.” And across the world and throughout time, people have suffered and died in droves. You may argue that is not a fair comparison, people suffered from poor hygiene, no vaccines, malnutrition and no health infrastructure. But the US has a group whose health care is only placebo, relying entirely on the ritual of the therapeutic act, and despite being surrounded by the benefits of western societal infrastructure, they die faster and younger: Christian Scientists.
At the end of the day, the practice of medicine is a practical endeavor. I am a builder, not an architect. I have to try to make my patients better objectively and subjectively, and the placebo is a tool that has little utility in my toolbox. When my prostate grows to the size of a tennis ball, I am going to go looking for a therapy that will shrink it, not fool me into thinking I can write my name in the snow a little better.

All Trial Data Must Be Disclosed: Rogawski Explains



Over the past few years, there has been controversy over clinical trial results that remain unreported. This has stoked concern, for instance, when data may provide information about side effects. At the same time, other researchers may be precluded from learning clues needed to proceed on related drug development. These issues also pertain to studies of drugs that are never commercialized. In a recent paper in Science Translational Medicine, a pair of academics argue that “translational medicine cannot approach its full potential if negative drug developments are unpublished” (here is the abstract). And they cite an ethical duty for insisting on disclosure. We spoke with Michael Rogawski, one of the co-authors and chair of the Department of Neurology at the UC Davis School of Medicine, about the need to disclose trial data. This is an excerpt…
Pharmalot: So why raise this issue and why now?
Rogawski: When I participated in a translational working group of the International League Against Epilepsy on how to encourage the development of more effective epilepsy therapies, I realized that negative clinical data were critically important in assessing the predictiveness of animal models. Then, sometime later, when I was writing a review article, I asked a company for the clinical trial results on a product they had abandoned. I let them know that I hoped they would publish their trial results, as even negative studies provide important scientific information, and the patients who participated in the trials expect that the information derived from their participation will benefit mankind. The terse answer was that the company “does not intend to publish the results of the epilepsy trial.”
So, this is a problem that has concerned me for some time, but we’re now at a critical moment where the NIH has the opportunity to require sponsors to post the results of clinical trials on the ClinicalTrials.gov web site. A new law that many people may not be aware of requires the results for most drug and device trials to be posted on ClinicalTrials.gov. However, there is a loophole that exempts trials of products that are still in development and if they are never approved, the results don’t need to be posted. A provision in FDAAA (the Food and Drug Administration Amendments Act of 2007) Section 801 allows HHS to require results reporting for clinical trials for drugs and devices not approved by the FDA. In the law, Congress gave the NIH the ability to formulate regulations that require sponsors to do this, to post results on this web site, even for drugs and devices not approved by the FDA. They’re going to publish draft regulations by the end of the year and then there will be a public comment period.
As I dug into this, I began to realize that there were quite a lot of changes going on in this area. The FDAMA (the FDA Modernization Act of 1997) required anyone doing a clinical trial to post an announcement on ClinicalTrials.gov. The original intent was to help patients find relevant clinical trials. Then, when concerns were raised about selective reporting, ClinicalTrials.gov was seen as a way to keep track of all the trials that have been done. More recently, they have made a requirement to post basic results. I don’t think a lot of people realize that. But if you go to the web site, there are few clinical trials with results actually there, as this is all so new. Even drug companies seem to be confused about the requirements.
Going into the future, the results from all clinical trials for approved products will have to be up there. But that raises the question about publication. I guess some people thought this would be sufficient. But it’s not. Our position is it’s still necessary to publish results of clinical trials in the peer-reviewed literature.

Pharmalot: Why is that?
Rogawski: The results data in ClinicalTrials.gov is simply presented in tables. There’s no presentation of detailed analysis or an interpretation as in a journal publication. And there’s minimal review. ClinicalTrials.gov could become the primary repository for the results of clinical trials of drugs and devices. And some people may feel that publication in a peer-reviewed article is no longer needed.
We’re arguing that you still need to write up the results.
Pharmalot: So you’re saying some results are located on ClinicalTrials.gov, but not all?
Rogawski: There’s a whole separate tab on every entry for results. If no study results are available, it will indicate that. For terminated products, you may never see the results published. There’s a loophole in the law that allows them (sponsors) to be exempt from posting the basic results when the product hasn’t been approved.
Pharmalot: What are these exemptions?
Rogawski: There are a couple of ways a study can be exempt. One way is to study the drug for another indication. Delayed submission is permitted when the sponsor is seeking a new use. The other way is if a drug or device is never approved. In principle, although it’s not stated this way, if the sponsor never submits the particular product for FDA approval, then they don’t have to post the basic results. The current law only applies to agents approved by the FDA. Let’s say the sponsor submits an NDA and the agency gives a complete response letter saying they have to do X, Y and Z, and the sponsor never does that. They decide it would be too expensive to do further clinical trials and they decide to abandon the product. Then they also don’t have to put data on ClinicalTrials.gov.
What we maintain is that it’s unethical to do that. It’s a basic principle of clinical research – it’s espoused in the fundamental ethics of clinical trials, such as in the Declaration of Helsinki, which states that “authors have a duty to make publicly available the results of research on humans and are accountable for the completeness and accuracy of their reports… Negative and inconclusive, as well as positive, results should be published or otherwise made publicly available.”
Pharmalot: And so you’re hoping the HHS will close this loophole, as you call it.
Rogawski: We’re hoping HHS will use its authority under FDAAA 801 to require sponsors to report results for any trial registered with ClinicalTrials.gov, even for any product that is abandoned… There’s potentially useful information in any clinical trial, and if it’s not available, it diminishes public knowledge. It may even place patients in later trials at risk. There could be some data showing this particular product causes green spots. It would be important to know this the next time somebody considers developing a drug that acts by a similar molecular mechanism.
Pharmalot: But a drug maker may argue that intellectual property is at stake, whether or not they choose to continue development, because disclosure may somehow give a rival an edge, right?
Rogawski: HHS may decide to balance the interests of pharmaceutical companies against the public interest. They may decide it’s commercially damaging to require companies to publish the results of trials with abandoned products. We think it’s a fallacious argument. We understand that many sponsors consider data proprietary… They put all this money into it and may want to continue work on the product later on or try for a different indication, and they may perceive that negative trial data would impair that. And they may consider it a waste of time and money to write stuff up when it isn’t going to go anywhere… We think it’s incorrect, but I guess companies may think that way.
Pharmalot: You mentioned you believe all trial results should be published as well. How does this tie in?
Rogawski: I’ve noticed that it’s often the case that sponsors do not publish results when they’re abandoning the product. In my own area, which is the development of anti-epileptic drugs, we have a problem. We use animal screening models to identify drugs, but we don’t know how valid the models are, because very little information is publicly available about the cases where a drug was effective in the models but not in clinical trials.
That’s what stimulated my concern. We make the assumption that these animal models are highly predictive, but it could be a flawed assumption because we don’t have the full set of data. There could be situations where drugs don’t work in the clinical trials. I do know of situations like that, being in this field for as long as I have, but we don’t know why there was a failure: was it lack of efficacy, did the sponsor run out of money, or were there idiosyncratic reactions?
So I also believe the companies should voluntarily publish negative results in the peer-reviewed literature. I believe they have an ethical responsibility to do that, but ethics is not the same as the law. In my view, many companies are not acting ethically by not reporting trial results.
Source: Pharmalot


  

Investigating over-the-counter oral analgesics






There is good evidence supporting the efficacy of standard doses of aspirin, paracetamol, ibuprofen, naproxen, and diclofenac, all of which are available as over-the-counter (OTC) medicines in some parts of the world. There is no good evidence for most branded combination products, though it is likely that additional analgesic effect is produced by codeine. Combinations of ibuprofen and paracetamol appear to be particularly effective.


Background


A wide variety of over-the-counter (OTC) analgesics are available to buy, but the amount of high-quality information about these treatments is limited. We set out to find evidence for the efficacy of a range of OTC analgesics, available in various parts of the world, in standard acute pain trials. Specifically, we were looking for single-dose data from 4-6 hour trials in post-operative pain models, reporting standard outcomes.
Clinical trials measuring the efficacy of analgesics in acute pain have been standardised over many years. Trials have to be randomised and double blind. Typically, in the first few hours or days after an operation, patients develop pain that is moderate to severe in intensity, and will then be given the test analgesic or placebo. Pain is measured using standard pain intensity scales immediately before the intervention, and then using pain intensity and pain relief scales over the following 4 to 6 hours for shorter acting drugs. Pain relief of half the maximum possible pain relief or better (at least 50% pain relief) is typically regarded as a clinically useful outcome. For patients given rescue medication it is usual for no additional pain measurements to be made, and for all subsequent measures to be recorded as initial pain intensity or baseline (zero) pain relief (baseline observation carried forward). This process ensures that analgesia from the rescue medication is not wrongly ascribed to the test intervention. In some trials the last observation is carried forward, which gives an inflated response for the test intervention compared to placebo, but the effect has been shown to be negligible over four to six hours.
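The baseline-observation-carried-forward rule just described is simple enough to sketch. The function below is a hypothetical illustration, not code from any of the reviewed trials: once rescue medication is taken, later pain-relief scores revert to the baseline of zero relief.

```python
def bocf(pain_relief, rescue_hour=None):
    """Baseline observation carried forward for hourly pain-relief scores.

    pain_relief : hourly pain-relief scores, where 0 means no relief
    rescue_hour : 1-based hour at which rescue medication was taken, or None
    """
    if rescue_hour is None:
        return list(pain_relief)
    # Relief after rescue is credited to the rescue drug, not the test
    # intervention, so carry the baseline (zero relief) forward from there.
    kept = list(pain_relief[:rescue_hour - 1])
    return kept + [0] * (len(pain_relief) - len(kept))

# Patient rescued at hour 3: scores from hour 3 onward revert to baseline.
print(bocf([0, 5, 8, 9, 9, 9], rescue_hour=3))  # [0, 5, 0, 0, 0, 0]
```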
Single dose trials in acute pain are commonly short in duration, rarely lasting longer than 12 hours, allowing no reliable conclusions to be drawn about safety. To show that the analgesic is working it is necessary to use placebo. There are clear ethical considerations in doing this. These ethical considerations are answered by using acute pain situations where the pain is expected to go away, and by providing additional analgesia, commonly called rescue analgesia, if the pain has not diminished after about an hour. This is reasonable, because not all participants given an analgesic will have significant pain relief. Approximately 18% of participants given placebo will have significant pain relief, and up to 50% may have inadequate analgesia with active medicines.


Systematic review and methods

For references to methods used, refer to Moore et al., Bandolier’s Little Book of Pain, Oxford University Press, 2006.
We searched PubMed, the Cochrane Central Library, and our own in-house databases in pain research for any double-blind, randomised controlled trials reporting pain relief, pain intensity, or patient global evaluation of efficacy as outcomes over 4-6 hours for a single-dose analgesic versus placebo. The search terms used included both trade names and generic names of the individual analgesic constituents, including combinations where appropriate. It is not likely that all OTC analgesics have been included, since sources for OTC analgesic names and availability are not easy to come by, and may change from time to time. OTC analgesic combinations, in particular, may change. The approach, therefore, was to work with the drug combinations, and doses of those combinations, that appeared to be current in 2009.
From these trials we extracted outcome data, including pain relief measured as a TOTPAR (total pain relief) at 4 or 6 hours, and pain intensity measured as a SPID (summed pain intensity difference) at 4 or 6 hours. Mean TOTPAR or SPID values, for both the active analgesic and placebo, were then converted to %maxTOTPAR or %maxSPID by division into the calculated maximum value. The proportion of participants in each treatment group who achieved at least 50%maxTOTPAR was calculated using verified equations, and these proportions converted into the number of participants achieving at least 50%maxTOTPAR by multiplying by the total number of participants in the treatment group. Information on the number of participants with at least 50%maxTOTPAR for active treatment and placebo was then used to calculate relative benefit (RB) and number-needed-to-treat-to-benefit (NNT).
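The last step, deriving relative benefit and NNT from responder counts, is standard arithmetic and can be sketched as follows. It assumes the responder counts have already been derived via the verified equations mentioned above (not reproduced here); the function name and the example numbers are ours, for illustration only.

```python
import math

def rb_and_nnt(r_act, n_act, r_ctl, n_ctl, z=1.96):
    """Relative benefit and NNT (with 95% CIs) from responder counts.

    r/n = number achieving at least 50%maxTOTPAR / group size.
    """
    p1, p0 = r_act / n_act, r_ctl / n_ctl
    rb = p1 / p0
    se_log_rb = math.sqrt(1/r_act - 1/n_act + 1/r_ctl - 1/n_ctl)
    rb_ci = (math.exp(math.log(rb) - z * se_log_rb),
             math.exp(math.log(rb) + z * se_log_rb))
    diff = p1 - p0                           # absolute benefit increase
    se_diff = math.sqrt(p1*(1-p1)/n_act + p0*(1-p0)/n_ctl)
    nnt = 1 / diff
    nnt_ci = (1 / (diff + z * se_diff), 1 / (diff - z * se_diff))
    return rb, rb_ci, nnt, nnt_ci

# Illustrative numbers of the same order as the pooled ibuprofen 400 mg
# data in Table 2 (54% responders with active vs 14% with placebo):
print(rb_and_nnt(1748, 3238, 453, 3237))
```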

Results

One hundred and twenty-five RCTs were retrieved that matched the search criteria. After closer scrutiny, six head-to-head comparative trials were excluded due to lack of a placebo control, and two trials were excluded due to lack of analysable data. The remaining one hundred and seventeen trials were randomised, double-blind and placebo-controlled and were included in the efficacy analysis. The studies involved a mixture of dental pain and episiotomy pain.
The overall standard and quantity of data available was poor, particularly for studies specifically using the trade-name over-the-counter analgesics. To compensate for this we have included data on the equivalent-dose generic-named analgesics and their combinations. For some of the test analgesics (Anadin Extra, Askit, Codis, Disprin, Disprin Extra, Panadeine 15, Paracodol, Paramol, Pentalgin H, Sedalgin-neo, Solpadeine Max) no useable data could be found. In many cases, particularly those combination analgesics including codeine, this was due to differences in the doses of the constituent analgesics used in the available trials as compared with the over-the-counter versions. In general, over-the-counter analgesics containing codeine tended to use significantly lower doses of codeine and higher doses of other constituents, presumably to minimise codeine-related side effects. Information on combinations of paracetamol and ibuprofen is included since these newer combinations are likely to appear as OTC analgesics in several parts of the world. Table 1 gives information about the included studies.

Table 1: Details of available data

| Drug | Details of available data | References of included studies |
|---|---|---|
| Anadin Extra | We found no trials comparing Anadin Extra (or a generic combination analgesic containing paracetamol, aspirin and caffeine in similar doses) to placebo | N/A |
| Askit | We found no trials comparing Askit (or a generic combination analgesic containing aspirin, caffeine and aloxiprin in similar doses) to placebo | N/A |
| Aspirine | We found two trials (Forbes et al. 1990; Rubin et al. 1984) comparing a generic combination of aspirin and caffeine (ASA 650 mg/caffeine 65 mg in Forbes et al.; ASA 800 mg/caffeine 65 mg in Rubin et al.) against placebo. Both trials were relatively small and used different pain types: Forbes et al. (n=141) dental and Rubin et al. (n=230) episiotomy. The results reflect this, with Forbes et al. reporting 27% of patients achieving 50% pain relief on active treatment vs 1% on placebo, while Rubin et al. report 86% vs 48% | Forbes JA. Pharmacotherapy 1990;10(6):387-93; Rubin A. J Int Med Res 1984;12(6):338-45 |
| Aspro Clear | We found seven trials in a Cochrane review of single-dose oral aspirin for acute pain (Edwards et al. 2000, currently undergoing in-house update) comparing aspirin in any formulation (ASA 1000 mg) against placebo | Edwards JE. Cochrane Database Syst Rev 2000;(2):CD002067 |
| Codis | We found no trials comparing Codis (or a generic combination analgesic containing aspirin and codeine in similar doses) to placebo | N/A |
| Cuprofen Plus | We found two trials (Cater et al. 1985; Norman et al. 1985) comparing a generic combination of ibuprofen and codeine (IBU 400 mg/COD 30 mg) against placebo. Both trials reported pain following episiotomy, with similar results | Cater M. Clin Ther 1985;7(4):442-7; Norman SL. Clin Ther 1985;7(5):549-54 |
| Disprin | We found no trials comparing Disprin (or a generic formulation of aspirin in a similar dose) to placebo | N/A |
| Disprin Extra | We found no trials comparing Disprin Extra (or a generic combination of aspirin and paracetamol in similar doses) to placebo | N/A |
| Feminax Ultra | We found five trials in an up-to-date Cochrane review of single-dose oral naproxen for acute pain (Derry et al. 2009) comparing naproxen or naproxen sodium (NAPROX 500 mg or NAPROX SODIUM 550 mg) against placebo | Derry C. Cochrane Database Syst Rev 2009 Jan 21;(1):CD004234 |
| Mersyndol | We found one trial (Margarone et al. 1995) comparing Mersyndol against placebo. The trial reported pain following dental surgery | Margarone JE. Clin Pharmacol Ther 1995 Oct;58(4):453-8 |
| Nurofen | We found 61 trials in an up-to-date Cochrane review of single-dose oral ibuprofen for acute pain (Derry et al. 2009) comparing a generic formulation of ibuprofen (IBU 400 mg) against placebo | Derry C. Cochrane Database Syst Rev 2009 Jul 8;(3):CD001548 |
| Panadeine 15 | We found no trials comparing Panadeine 15 (or a generic combination of paracetamol and codeine in similar doses) to placebo | N/A |
| Panadol | We found 28 trials in an up-to-date Cochrane review of single-dose oral paracetamol for acute pain (Toms et al. 2008) comparing a generic formulation of paracetamol (PARA 1000 mg) against placebo | Toms L. Cochrane Database Syst Rev 2008 Oct 8;(4):CD004602 |
| Panadol Extra | We found one trial (Winter et al. 1983) comparing a generic combination of paracetamol and caffeine (PARA 1000 mg/CAF 130 mg) against placebo. The trial reported pain following dental surgery | Winter L Jr. Current Therapeutic Research 1983 Jan;33(1):115-122 |
| Paracodol | We found no trials comparing Paracodol (or a generic combination of paracetamol and codeine in similar doses) to placebo | N/A |
| Paramol | We found no trials comparing Paramol (or a generic combination of paracetamol and dihydrocodeine tartrate in similar doses) to placebo | N/A |
| Pentalgin H | We found no trials comparing Pentalgin H (or a generic combination of naproxen, codeine, caffeine, dipyrone and phenobarbital in similar doses) to placebo | N/A |
| Saridon | We found one trial (Kiersch et al. 2002) comparing Saridon against placebo. The trial reported pain following dental surgery | Kiersch TA. Curr Med Res Opin 2002;18(1):18-25 |
| Sedalgin-neo | We found no trials comparing Sedalgin-neo (or a generic combination of paracetamol, caffeine, codeine, dipyrone and phenobarbital in similar doses) to placebo | N/A |
| Solpadeine Max | We found no trials comparing Solpadeine Max (or a generic combination of paracetamol and codeine in similar doses) to placebo | N/A |
| Solpadeine Plus | We found one trial (Cooper et al. 1986) comparing a generic combination of paracetamol, codeine and caffeine (PARA 1000 mg/COD 16 mg/CAF 30 mg) against placebo. The trial reported pain following dental surgery | Cooper SA. Anesth Prog 1986 May-Jun;33(3):139-42 |
| Voltarol | We found four trials in an up-to-date Cochrane review of single-dose oral diclofenac for acute pain (Derry et al. 2009) comparing all generic formulations of diclofenac (DICLO 25 mg) against placebo | Derry P. Cochrane Database Syst Rev 2009 Apr 15;(2):CD004768 |

Table 2 summarises the data available for each of the analgesics, along with its calculated relative benefit (RB) and number-needed-to-treat-to-benefit (NNT). Para = paracetamol, Asa = aspirin, Caf = caffeine, Cod = codeine, Naprox = naproxen, Diclo = diclofenac, Ibu = ibuprofen. Individual trial rows are listed under the corresponding product.

| Drug | Constituents (mg) | Trials | Patients | % with active | % with control | RB (95% CI) | NNT (95% CI) |
|---|---|---|---|---|---|---|---|
| Anadin Extra | Para 400 + Asa 600 + Caf 90 | 0 | | | | | |
| Askit | Asa 530 + Caf 110 + aloxiprin 140 | 0 | | | | | |
| Aspirine | Asa 650 + Caf 65 | 2 | 371 | 65 | 28 | 2.3 (1.8–3.0) | 2.7 (2.2–3.7) |
| – Forbes 1990 | | | 141 | 17 | 0 | 39.8 (2.4–648) | 3.9 (2.7–6.7) |
| – Rubin 1984 | | | 230 | 86 | 48 | 1.8 (1.5–2.2) | 2.6 (2.0–3.7) |
| Aspro Clear | Asa 1000 | 7 | 679 | 43 | 16 | 2.6 (2.0–3.5) | 3.7 (3.0–5.0) |
| Codis | Asa 1000 + Cod base 16 | 0 | | | | | |
| Cuprofen Plus | Ibu 400 + Cod base 20 | 2 | 167 | 55 | 31 | 1.8 (1.2–2.6) | 4.1 (2.6–10.3) |
| – Norman 1985 | | | 74 | 53 | 29 | 1.8 (1.0–3.3) | 4.2 (2.2–48.5) |
| – Cater 1985 | | | 93 | 57 | 32 | 1.8 (1.1–2.9) | 4.1 (2.3–19.8) |
| Disprin | Asa 900 | 0 | | | | | |
| Disprin Extra | Asa 600 + Para 400 | 0 | | | | | |
| Feminax Ultra | Naprox 500 | 9 | 784 | 52 | 15 | 3.4 (2.7–4.4) | 2.7 (2.3–3.2) |
| Mersyndol | Para 1000 + Cod base 15 + doxylamine succinate 10 | 1 | 76 | 21 | 8 | 2.7 (0.8–9.3) | Not calculated |
| – Margarone 1995 | | | 76 | 21 | 8 | 2.7 (0.8–9.3) | 7.6 |
| Nurofen | Ibu 400 | 61 | 6475 | 54 | 14 | 4.0 (3.6–4.4) | 2.5 (2.4–2.6) |
| Panadeine 15 | Para 1000 + Cod base 23 | 0 | | | | | |
| Panadol | Para 1000 | 28 | 3232 | 46 | 18 | 2.5 (2.2–2.9) | 3.6 (3.2–4.0) |
| Panadol Extra | Para 1000 + Caf 130 | 1 | 81 | 48 | 22 | 2.2 (1.1–4.2) | 3.9 (2.2–18.0) |
| – Winter 1983 | | | 81 | 48 | 22 | 2.2 (1.1–4.2) | 3.9 (2.2–18.0) |
| Paracodol | Para 1000 + Cod base 13 | 0 | | | | | |
| Paramol | Para 1000 + dihydrocodeine tartrate 15 | 0 | | | | | |
| Pentalgin H | Naprox 100 + Cod base 8 + Caf 50 + dipyrone 300 + phenobarbital 15 | 0 | | | | | |
| Saridon | Para 500 + Caf 100 + propyphenazone 300 | 1 | 301 | 23 | 2 | 9.2 (1.3–64.5) | 4.9 (3.6–7.4) |
| Sedalgin-neo | Para 600 + Caf 100 + Cod base 20 + dipyrone 300 + phenobarbital 30 | 0 | | | | | |
| Solpadeine Max | Para 1000 + Cod base 20 | 0 | | | | | |
| Solpadeine Plus | Para 1000 + Caf 60 + Cod base 13 | 1 | 61 | 29 | 4 | 6.2 (0.9–45.0) | 4.2 (2.5–14.2) |
| – Cooper 1986 | | | 61 | 29 | 4 | 6.2 (0.9–45.0) | 4.2 (2.5–14.2) |
| Voltarol | Diclo 25 | 4 | 502 | 53 | 15 | 3.6 (2.6–5.0) | 2.6 (2.2–3.3) |
| None | Ibu 100 + Para 250 | 2 | 175 | 73 | 10 | 7.6 (4.2–14) | 1.6 (1.3–1.9) |
| None | Ibu 200 + Para 500 | 2 | 280 | 74 | 10 | 7.7 (2.2–14) | 1.6 (1.4–1.8) |
| None | Ibu 400 + Para 1000 | 2 | 320 | 75 | 10 | 7.9 (4.3–14) | 1.5 (1.4–1.7) |

Table 3 shows a sub-analysis of only those trials involving dental pain.

| Drug | Constituents (mg) | Trials | Patients | % with active | % with control | RB (95% CI) | NNT (95% CI) |
|---|---|---|---|---|---|---|---|
| Anadin Extra | Para 400 + Asa 600 + Caf 90 | 0 | | | | | |
| Askit | Asa 530 + Caf 110 + aloxiprin 140 | 0 | | | | | |
| Aspirine | Asa 650 + Caf 65 | 1 | 141 | 17 | 0 | 39.8 (2.4–648) | 3.9 (2.7–6.7) |
| Aspro Clear | Asa 1000 | 3 | 345 | 32 | 11 | 2.9 (1.8–4.8) | 4.7 (3.4–7.6) |
| Codis | Asa 1000 + Cod base 16 | 0 | | | | | |
| Cuprofen Plus | Ibu 400 + Cod base 20 | 0 | | | | | |
| Disprin | Asa 900 | 0 | | | | | |
| Disprin Extra | Asa 600 + Para 400 | 0 | | | | | |
| Feminax Ultra | Naprox 500 | 5 | 402 | 62 | 7 | 8.9 (5.3–14.9) | 1.8 (1.6–2.1) |
| Mersyndol | Para 1000 + Cod base 15 + doxylamine succinate 10 | 1 | 76 | 21 | 8 | 2.7 (0.8–9.3) | Not calculated |
| Nurofen | Ibu 400 | 49 | 5428 | 55 | 12 | 4.7 (4.2–5.2) | 2.3 (2.2–2.4) |
| Panadeine 15 | Para 1000 + Cod base 23 | 0 | | | | | |
| Panadol | Para 1000 | 18 | 2171 | 40 | 9 | 4.4 (3.5–5.5) | 3.3 (3.0–3.7) |
| Panadol Extra | Para 1000 + Caf 130 | 1 | 81 | 48 | 22 | 2.2 (1.1–4.2) | 3.9 (2.2–18.0) |
| Paracodol | Para 1000 + Cod base 13 | 0 | | | | | |
| Paramol | Para 1000 + dihydrocodeine tartrate 15 | 0 | | | | | |
| Pentalgin H | Naprox 100 + Cod base 8 + Caf 50 + dipyrone 300 + phenobarbital 15 | 0 | | | | | |
| Saridon | Para 500 + Caf 100 + propyphenazone 300 | 1 | 301 | 23 | 2 | 9.2 (1.3–64.5) | 4.9 (3.6–7.4) |
| Sedalgin-neo | Para 600 + Caf 100 + Cod base 20 + dipyrone 300 + phenobarbital 30 | 0 | | | | | |
| Solpadeine Max | Para 1000 + Cod base 20 | 0 | | | | | |
| Solpadeine Plus | Para 1000 + Caf 60 + Cod base 13 | 1 | 61 | 29 | 4 | 6.2 (0.9–45.0) | 4.2 (2.5–14.2) |
| Voltarol | Diclo 25 | 3 | 398 | 51 | 11 | 4.6 (3.1–7.1) | 2.5 (2.1–3.2) |
| None | Ibu 100 + Para 250 | 2 | 175 | 73 | 10 | 7.6 (4.2–14) | 1.6 (1.3–1.9) |
| None | Ibu 200 + Para 500 | 2 | 280 | 74 | 10 | 7.7 (2.2–14) | 1.6 (1.4–1.8) |
| None | Ibu 400 + Para 1000 | 2 | 320 | 75 | 10 | 7.9 (4.3–14) | 1.5 (1.4–1.7) |

To summarise the findings of our investigation we produced comparative figures (Figures 1 and 2 for all data and just dental studies, respectively) showing the NNTs and their 95% confidence intervals for each analgesic where calculable.

Figure 1: NNTs for all available data

Figure 2: NNTs for dental studies only

Comment

There are two main issues when looking at the evidence of acute pain efficacy of OTC analgesics. The first is the dearth of evidence in the public domain for some of these products. The second is what we are able to make of what evidence we have.

Dearth of evidence

Most of the OTC analgesics, including combination analgesics, were developed decades ago, as long ago as the 1950s, in times when trials were performed for registration purposes. Publication was infrequent. A good example is a review of 30 trials involving about 10,000 patients examining the analgesic efficacy of caffeine in combination with analgesics published in JAMA in 1984 [1]. Most of the data was unpublished then, and has remained unpublished subsequently. We know more about OTC drugs like paracetamol and ibuprofen from trials in which they have been used as active comparators than trials in which they themselves have been tested [2].
The dearth of evidence is not, therefore, surprising. It is, however, frustrating. For several OTC analgesics we have no reliable data, and for others the data available are inadequate – leading to very wide confidence intervals in Figures 1 and 2. This is a shame, because OTC analgesics, properly used, are effective for many people.
The case for analgesic combinations can also be developed using evidence from closely related studies. A case in point is the combination of paracetamol and codeine, where the relatively small amount of information for some dose combinations is bolstered with evidence from other dose combinations [3].

What can we make of the evidence we have

The best evidence we have is for ibuprofen 400 mg (Nurofen), paracetamol 1000 mg (Panadol), naproxen 500 mg (Feminax Ultra), diclofenac 25 mg (Voltarol), aspirin 1000 mg (Aspro Clear), and ibuprofen + paracetamol in combination, though the evidence is likely not to have come from testing of any particular product. All of these analgesics have usefully low NNTs in the range of about 2-4, or somewhat lower for the ibuprofen/paracetamol combinations.
The evidence for combination analgesics is less clear, with predominantly no trials, or too few trials and patients available to make any judgement. This is a shame, because there is evidence elsewhere [3, for example] that combinations of analgesics can produce very good results, as seen here with combinations of ibuprofen and paracetamol.
Consumers can make up their own minds whether the expense of branded analgesics is worth it compared to the often much lower cost of unbranded – though that is a UK view, and certainly analgesics like paracetamol and ibuprofen are available in quantity and at low cost in the USA.

References

  1. Laska EM, Sunshine A, Mueller F, Elvers WB, Siegel C, Rubin A. Caffeine as an analgesic adjuvant. JAMA 1984;251:1711-8.
  2. Barden J, Derry S, McQuay HJ, Moore RA. Bias from industry trial funding? A framework, a suggested approach, and a negative result. Pain 2006;121:207-18.
  3. Smith LA, Moore RA, McQuay HJ, Gavaghan D. Using evidence from different sources: an example using paracetamol 1000 mg plus codeine 60 mg. BMC Med Res Methodol 2001 Jan 10;1:1.


The man behind those curves we all know so well


Source: Rafa Bravo

Via: Primum Non Nocere
Paul Meier, who was among the most influential biostatisticians of his generation and helped bring mathematical rigor to medical research in the years after World War II, died Aug. 7 at his home in Manhattan.
Well, the problem of analysing observations left incomplete because they are scattered over time was an old one. Charlie Winsor was working on it, and he came to Princeton and talked with Tukey about it. [Joseph] Berkson of the Mayo Clinic had written a paper on it, but had not estimated the variance. Someone asked me how to do it, and I said, “Oh, that's very hard: you have to do this and that and . . .”

Then one of my colleagues showed me a paper by Major Greenwood that really opened my eyes, and he told me what Winsor had done, which opened them even further.

That is how Paul Meier answered, in an interview published in the journal Clinical Trials, when asked about his famous article, one of the most cited in history, which described the well-known Kaplan-Meier survival-curve method whose plot appears in practically every clinical trial. The interview goes on to recount how he was pushed into joining forces with Kaplan to write the paper that was eventually published: “As I was working on it, I wrote to Tukey about the problem, and he told me that Kaplan, another of his students, was doing something similar… After talking with the editor of JASA, I gulped, and I suppose Kaplan did too; we set to work hard, and at almost the same time I solved a problem he could not solve, and he solved one that I could not.”
What are Kaplan-Meier survival curves?
When the variable of interest is the time until an event occurs, it is analysed with a set of statistical techniques known as “survival analysis”. At first these were used mainly to analyse time to patient death (survival), hence the name, but the event can be death or any other harmful or beneficial outcome.
When we measure data on the time until an event occurs, we can run into several kinds of problems:
  1. At the end of the observation period, not all patients will have experienced the event under study.
  2. Patients enrol throughout the observation period, so the last to join are observed for a shorter time than those who entered at the start, and the probability that the event occurs to them is therefore lower.
  3. Some patients are lost for various reasons, and it has not been possible to determine their status.
  4. At the end of the study there will be patients who have not experienced the event.
Within survival analysis, the Kaplan-Meier method is characterised by recalculating “survival” every time an event occurs, and it rests on something obvious: to survive a year you must survive each of that year's days. For each day we therefore compute the proportion of events observed on that day. For each instant in time, survival is calculated as the survival at the previous instant multiplied by the survival rate at that instant. As the (original) table shows, the Kaplan-Meier procedure estimates the survival probability for each time period t, except the first, as a compound conditional probability.
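In symbols, S(t_i) = S(t_{i-1}) * (1 - d_i/n_i), where d_i is the number of events at time t_i and n_i is the number still at risk just before it. A minimal sketch of this product-limit calculation follows; the data and function are invented for illustration and are not from Kaplan and Meier's paper.

```python
from collections import Counter

def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimate.

    times  : follow-up time for each subject
    events : 1 if the event occurred at that time, 0 if censored
    Returns (t, S(t)) at each distinct event time.
    """
    deaths = Counter(t for t, e in zip(times, events) if e == 1)
    s, curve = 1.0, []
    for t in sorted(deaths):
        n_at_risk = sum(1 for tt in times if tt >= t)  # at risk just before t
        s *= 1 - deaths[t] / n_at_risk                 # conditional survival
        curve.append((t, round(s, 3)))
    return curve

# Seven subjects; event indicator 0 marks a censored follow-up time
print(kaplan_meier([2, 3, 3, 5, 8, 8, 9], [1, 1, 0, 1, 1, 0, 0]))
```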
The method also produces a plot, like the one below, which we all recognise, and which, the next time we see it, we will associate with that bespectacled, studious-looking man who has just passed away. R.I.P.
