US seeks new review of easier-to-spread bird flu



Source: http://www.cbsnews.com

WASHINGTON — A scientist who created an easier-to-spread version of the bird flu said his work isn’t as risky as people fear. The U.S. government is asking its biosecurity advisers to reconsider if the research should be made public.

Bird flu only occasionally sickens people, mostly after close contact with infected poultry, but it can be deadly when it does. Scientists have long feared it might mutate to spread more easily and thus spark a pandemic. Researchers in the Netherlands and Wisconsin were studying how that might happen when they created bird flu strains that at least some mammals — ferrets — can spread by coughing or sneezing.

The work triggered international controversy. U.S. health officials urged the details be kept secret so would-be terrorists couldn’t copy the strains, and critics worried that a lab accident might allow deadly viruses to escape.

But contrary to public perceptions, the airborne bird flu didn’t kill the ferrets, Dr. Ron Fouchier of the Netherlands’ Erasmus University told a meeting of U.S. scientists Wednesday. In fact, he said those previously exposed to regular flu were protected from severe disease.

Fouchier said publishing the research would help other scientists monitor the so-called H5N1 bird flu for similar mutations in the wild, and to test vaccines and treatments.

A federal biosecurity panel first sounded the alarm about the research, concerned about the easier mammal-to-mammal spread. The U.S. is asking that panel to conduct another review of the two laboratories’ work, Dr. Anthony Fauci of the National Institutes of Health said Wednesday. He said the board should hear some new data that came to light at a recent closed-door meeting of the World Health Organization, where international flu experts concluded the research eventually should be published.


Impact of facilitating physician access to relevant medical literature on outcomes of hospitalised internal medicine patients: a randomised controlled trial





Editor's Choice

  1. Ariel Izcovich
  2. Carlos Gonzalez Malla
  3. Martín Miguel Diaz
  4. Matías Manzotti
  5. Hugo Norberto Catalano

Author affiliation: Medical Clinical Service, Internal Medicine Department, School of Medicine, Hospital Alemán de Buenos Aires, Buenos Aires, Argentina
Correspondence to: Ariel Izcovich, Pueyrredon 1640, Zip code C1118AAT, Ciudad Autónoma de Buenos Aires, Buenos Aires, Argentina; hambp2008@gmail.com
  • Accepted 5 July 2011

Abstract

Introduction There is limited high-quality evidence regarding the usefulness of bibliographic assistance in improving clinically important outcomes in hospitalised patients. This study was designed to evaluate the impact of providing attending physicians with bibliographic information to assist them in answering medical questions that arise during daily clinical practice.
Methods All patients admitted to the Internal Medicine ward of Hospital Aleman in Buenos Aires between March and August 2010 were randomly assigned to one of two groups: intervention or control. Throughout this period, the medical questions that arose during morning rounds were identified. Bibliographic research was conducted to answer only those questions that emerged during the discussion of patients assigned to the intervention group. The compiled information was sent via e-mail to all members of the medical team.
Results 809 patients were included in the study: 407 were randomly assigned to a search-supported group and 402 to a control group. There was no significant difference in death or transfer to an intensive care unit (ICU) (RR 1.09 (95% CI 0.7 to 1.6)), rehospitalisation (RR 1.0 (95% CI 0.7 to 1.3)) or length of hospitalisation (6.5 vs 6.0 days, p=0.25). The subgroup of search-supported physicians’ patients (n=31), whose attending physicians received hand-delivered information, had a significantly lower risk of death or transfer to an ICU compared with the control group (0% vs 13.7%, p=0.03).
Conclusions The impact of bibliographic assistance on clinically important outcomes could not be demonstrated in this study. However, the results suggest that some interventions, such as delivering information by hand, might be beneficial in a subgroup of inpatients.

Introduction

Searching for information in books or consulting a specialist has traditionally been the way to answer questions that arise during patient care.1 The growing accessibility of information allows physicians who are skilled in critical appraisal to answer such questions on the basis of high-quality evidence. Search resources and critical appraisal skills are therefore essential tools for providing the best evidence-based patient care.
In a systematic review by Dawes and Sampson,2 different reasons are mentioned as to why most physicians do not use bibliographic searching to answer daily questions. Some of these reasons are limited time to perform the research, lack of training in critical appraisal of the information found and low expectations for finding a relevant and direct answer to questions (ie, useful for patient’s care, unbiased and easily accessible).3
Bibliographic assistance could be a useful tool for answering medical questions that arise during patient care, not only for its academic importance, but also for improving patient-important outcomes.
There is limited high-quality evidence regarding the impact of bibliographic assistance on clinically important outcomes for hospitalised patients. The vast majority of the evidence comes from uncontrolled studies that have a high risk of biased results.4
We conducted a randomised study to determine the proportion of admitted patients who generate questions for attending physicians, to compare the outcome of these patients with those who do not generate questions and to evaluate the usefulness (in terms of important patient outcomes) of facilitating information access for physicians who work with admitted patients on internal medicine services.

Context

In 2009, intending to increase the use of EBM resources in daily medical practice, the general internal medicine unit of the Hospital Aleman de Buenos Aires hired a specialist in internal medicine who was trained and skilled in evidence-based medicine. His work consisted of identifying and answering medical questions that arose mainly during daily morning reports. A review of this process found that 80% of the questions identified could be successfully answered, 60% of them on the basis of high-quality evidence. Most of the identified questions were about treatment or prognosis and could be classified as haematology, oncology, infectious diseases or cardiopulmonary questions.5
A survey answered by resident and staff physicians working on the internal medicine ward showed that 72% of the publications delivered provided useful information and that 42% motivated changes in medical practice for at least one of the participating physicians.6

Methods

From March 2010 through August 2010, all patients admitted to the general internal medicine ward of the Hospital Aleman de Buenos Aires were randomly assigned to an intervention (search-supported) group or a control group in a 1:1 ratio by flipping a coin at the time of admission.
Morning report is held daily on the General Internal Medicine Service. It is attended by resident physicians, staff physicians and heads of wards, and new admissions and other inpatients are discussed.
During the course of this study, a physician specialised in internal medicine, trained and skilled in evidence-based medicine and funded by the Internal Medicine Service, identified the medical questions that arose during morning reports. Such questions were either explicitly formulated by staff or resident physicians or inferred by the physician responsible for collecting them. Questions were collected using the PICOT structure (Population/Problem, Intervention, Comparison, Outcome, Type of design that would answer the question)7 in order to gather key words for the literature search. In some cases, the questions were answered immediately by someone present in the session, frequently using electronic resources such as UpToDate; those questions were excluded from this study.
The same physician who collected the questions also searched the literature for evidence. He answered only those questions obtained from the discussions of the inpatients that were assigned to the intervention (search-supported) arm and not those obtained from inpatients assigned to the control arm. The literature search was carried out once the morning report was over and it was considered finished 12 h after it started. The sources used in this search were Cochrane Library, PubMed and Lilacs.
The literature found was sent by e-mail to the whole medical team, including those physicians directly responsible for the care of the patient who had prompted the question. Emails were sent daily, from Monday to Thursday, and they included a brief summary of the literature found to address each of the questions answered that day, a critical appraisal of the papers based on the User’s Guides8 and the papers themselves attached in PDF format. In some cases, the literature was printed out and delivered by hand, directly to the professionals involved. The following criteria were used to decide to deliver information by hand: (1) one of the physicians directly involved in the care of the patient who prompted the question requested the information in hardcopy or (2) the physician who searched for the information considered that the literature could alter the diagnostic and therapeutic strategy for the patient who prompted the question. Follow-up of the patients included in the study was carried out prospectively, from 1 March 2010 to 15 August 2010, through spreadsheets that were prepared every day by the resident physicians working on the Service.
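As a rough illustration only (the field names and the encoding of the hand-delivery rule are hypothetical, not taken from the study), the question log and delivery decision described above might be represented as follows in Python:

# Hypothetical sketch of the PICOT question log and hand-delivery rule
# described in the Methods; names and structure are illustrative only.
from dataclasses import dataclass

@dataclass
class PICOTQuestion:
    population: str    # P: population/problem
    intervention: str  # I: intervention
    comparison: str    # C: comparison
    outcome: str       # O: outcome
    design: str        # T: type of design that would answer the question
    arm: str           # "search-supported" or "control"

def delivery_method(hardcopy_requested: bool, may_change_management: bool) -> str:
    # Hand delivery if a treating physician asked for hardcopy, or if the
    # searching physician judged the literature could alter the diagnostic
    # or therapeutic strategy; otherwise the daily e-mail (Monday to Thursday).
    return "hand" if (hardcopy_requested or may_change_management) else "e-mail"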
The process of identifying and answering medical questions described for the intervention arm has been usual practice on the internal medicine service of the Hospital Aleman de Buenos Aires since 2009, and it has been carried out since then by the same physician who performed it during the study.
The outcomes considered were a composite of in-hospital death or transfer to an intensive care unit (ICU), death, transfer to an ICU, length in days of hospital stay and rehospitalisation during the course of the study.
The primary analyses compared the randomised groups (patients assigned to the intervention arm versus patients assigned to the control arm) and the non-randomised groups (patients who prompted questions versus patients who did not prompt questions).
We also planned a priori subgroup analyses among patients who prompted at least one question, particularly regarding patients whose attending physicians received information delivered by hand.
Normally distributed numerical variables were compared with Student’s t tests, and dichotomous variables were assessed using RRs and absolute risks, with χ2 tests with continuity correction or Fisher’s exact test, as applicable. Tests of significance were two-tailed, and a p value of less than 0.05 or a 95% CI excluding 1 was considered significant. All calculations were performed using Stata v11.0 software. A post hoc power analysis with an α error of 0.05 and a two-tailed test for each outcome was performed using G*Power 3 software (http://www.psycho.uni-duesseldorf.de/abteilungen/aap/gpower3/). The power of the study was 16% for length of stay, 5% for death or ICU transfer and 5% for rehospitalisation.
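To make the analysis plan concrete, the sketch below shows how comparisons of this kind can be reproduced with Python and SciPy rather than the Stata and G*Power tools used in the study; all counts and values are placeholders, not data from the paper.

# Minimal sketch of the statistical comparisons described above (Python/SciPy).
# All counts and arrays are placeholders, not the study data.
import numpy as np
from scipy import stats

def relative_risk(events_a, n_a, events_b, n_b, z=1.96):
    """Relative risk with a Wald-type 95% CI computed on the log scale."""
    p_a, p_b = events_a / n_a, events_b / n_b
    rr = p_a / p_b
    se_log_rr = np.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    return rr, (np.exp(np.log(rr) - z * se_log_rr), np.exp(np.log(rr) + z * se_log_rr))

# Dichotomous outcome (e.g. death or ICU transfer): chi-square test with
# continuity correction, or Fisher's exact test when expected counts are small.
table = np.array([[40, 367],   # placeholder events / non-events, intervention arm
                  [36, 366]])  # placeholder events / non-events, control arm
chi2, p_chi2, dof, expected = stats.chi2_contingency(table, correction=True)
odds_ratio, p_fisher = stats.fisher_exact(table)
rr, ci_95 = relative_risk(40, 407, 36, 402)

# Normally distributed numerical outcome (e.g. length of stay): Student's t test.
los_intervention = np.array([5.0, 7.5, 6.0, 9.0, 4.5])  # placeholder values
los_control = np.array([6.0, 5.5, 7.0, 8.5, 3.5])       # placeholder values
t_stat, p_t = stats.ttest_ind(los_intervention, los_control)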

Findings

A total of 809 patients were included in the study: 407 were randomised to the search-supported arm and 402 to the control arm (figure 1).
Figure 1: Group assignment and questions identified/answered.
The average age of the patients included in the study was 66 years (95% CI 65 to 67 years old). Baseline characteristics were similar in both arms (table 1).
Table 1: Baseline characteristics.
Of all the patients included in the study, 151 (19%, 95% CI 15% to 21%) prompted at least one question: 78 (19%, 95% CI 15% to 23%) randomised to the search-supported arm and 73 (18%, 95% CI 14% to 21%) randomised to the control arm. Most of the questions identified were about treatment or prognosis (table 1). The average number of questions per patient was 1.2. The total number of collected questions was 188.
The questions prompted by 77 of the 78 question-generating patients randomised to the search-supported arm were satisfactorily answered. For 31 of these patients, the information was delivered by hand to their attending physicians (see figure 1).
The combined outcome of death or transfer to an ICU occurred for 76 patients (9.3%, 95% CI 7.3% to 11.4%). The number of readmissions during the course of the study was 135 (16.6%, 95% CI 14.1% to 19.2%). The average length of stay for the study population was 6.3 days (95% CI 5.8 to 6.7 days).
The patients who generated questions had an increase in the risk of being transferred to an ICU (RR 2.0, 95% CI 1.1 to 3.9) and had significantly longer hospital stays, compared with those who did not (7.7 vs 6.0 days, p=0.004). The risk of death (RR 1.5, 95% CI 0.9 to 2.5) and rehospitalisation (RR 1.03, 95% CI 0.7 to 1.5) were not different between those who did and did not generate questions.
The comparisons between search-supported and control groups were not statistically significant (table 2).
Table 2: Primary/subgroup analysis.
Among patients who prompted questions, the subgroup of patients (31 patients) whose questions were answered and whose attending physicians received hand-delivered information had a significantly lower risk of death or transfer to an ICU compared with the control group (0 vs 10 deaths or transfers, 0% vs 13.7%, p=0.03). There were no significant differences between these groups in ICU transfer (0 vs 8 (11%), p=0.1), death (0 vs 2 (2.7%), p=1.0), rehospitalisation (5 vs 14 rehospitalisations, 16.1% vs 19.1%, RR 0.8, 95% CI 0.3 to 2.1, p=0.7) or days of hospitalisation (5.5 vs 6.8, p=0.3).
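As a back-of-the-envelope check (not part of the original analysis), the 2×2 table implied by the reported figures, 0 of 31 patients in the hand-delivery subgroup versus roughly 10 of the 73 question-prompting controls (13.7% of 73), can be run through Fisher's exact test; the control counts are inferred from the percentages in the text rather than taken from the paper's tables.

# Illustrative check only: Fisher's exact test on counts inferred from the
# reported percentages (0/31 vs ~10/73); not the authors' own calculation.
from scipy import stats

table = [[0, 31],    # hand-delivery subgroup: events, non-events
         [10, 63]]   # question-prompting controls: events, non-events
odds_ratio, p_value = stats.fisher_exact(table)
print(round(p_value, 3))  # approximately 0.03, consistent with the reported p=0.03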

Discussion

To our knowledge, this is the first randomised study that attempts to measure the impact of bibliographic assistance by physicians on clinically important outcomes for admitted patients.
Many questions arise for physicians during patient care. Ely et al9 identified 1101 questions that emerged during patient visits over a period of 732 h in the primary care ambulatory setting, which averages approximately three questions every 2 h. Sackett and Straus10 identified 98 questions generated during the care of 166 hospitalised patients in a period of 30 days.
In this study, we found that one in five patients generated at least one clinical question. Those questions were identified during case discussion and not during patient visits. This could have led to a difference in the number and complexity of questions compared with the previously mentioned studies. It is important to note that numerous questions were dismissed because they were resolved immediately using resources like UpToDate. This could be the explanation for the difference between the number of questions reported herein and by Sackett and Straus.10
In this study, we found that patients who generated questions had a twofold increase in the risk of being transferred to an ICU and had a significantly longer hospital stay compared with those who did not. We have not been able to find similar studies; therefore we could not compare our results with other experiences. Although it was not explored in the present study, one explanation for the increase in the risk of being transferred to an ICU and the longer hospital stay of those patients who generated questions could be that these patients had more complex pathologies than those who did not generate questions.
A systematic review by Weightman and Williamson4 that included 28 studies and evaluated the impact of bibliographic assistance on different outcomes found that most of the studies evaluated changes in physicians’ decisions (surrogate outcomes) and only six studies reported outcomes that could have been considered as clinically relevant. All of these studies had observational designs.
Two randomised controlled trials11 12 evaluated the usefulness of bibliographic assistance on changing physicians’ attitudes towards searching for information and their satisfaction but did not measure important patient outcomes. Banks et al13 in a case-control study evaluated the impact of facilitating information access for physicians who cared for hospitalised patients on the length of hospital stay. We intended to measure important patient outcomes using a randomised design attempting to reduce the risk of bias.
The primary analysis did not show benefits of the intervention on the outcomes evaluated. This could be explained, as shown by the post hoc power analysis, by the limited number of patients enrolled. It is now clear that demonstrating the benefits of this type of intervention on those outcomes would require a larger sample.
The published results regarding the impact of bibliographic assistance on medical practice are heterogeneous.4 Some studies, like the one performed by Banks et al,13 have demonstrated benefits on important patient outcomes, suggesting that facilitating information access for physicians who take care of hospitalised patients could result in a shorter hospital stay.
In their review, Bryant and Gray14 state that the literature on this subject is not only limited but also based on trials with small sample sizes and high heterogeneity in quality, data analysis, reporting of results and design. This makes it difficult to apply the results to daily practice.
Hence, considering the low methodological quality of the studies and the inconsistency of their results, we can conclude that the existing evidence regarding the impact of bibliographic assistance on important patient outcomes is poor. In this context, this study attempts to add to that evidence with a design that reduces the risk of biased results. Nevertheless, our study has some weaknesses that could make interpretation of its results difficult. First, its statistical power is too limited either to demonstrate the benefits of the intervention or to conclude that this was a truly negative study. Second, we did not measure whether the information provided to the attending physicians was actually used in patient care; these data could have helped explain why the two groups did not differ in outcomes. Finally, only questions that arose during morning reports from Mondays through Thursdays were answered, and the resulting delay in identifying and answering questions that might have arisen from Fridays to Sundays could have reduced the impact of the intervention.
In the primary analysis, our results did not reach statistical significance. However, we found that bibliographic assistance might have an impact on the subgroup of patients whose attending physicians received hand-delivered information; this intervention seemed to decrease the rate of transfer to ICUs and in-hospital mortality. According to the criteria proposed by Sun et al,15 it is unlikely that the differences found in this subgroup are due to a real effect of the intervention, since the subgroup variable was determined after randomisation and the number of events in one group was 0, making it difficult to estimate the effect appropriately. We believe that the differences observed in this subgroup probably stem from differences in prognosis and not from the effects of the intervention. Nevertheless, the hypothesis that there is a subgroup of patients who could particularly benefit from some type of bibliographic assistance should be considered when designing future studies.

Conclusion

Admitted patients frequently raise questions that require dedicated resources to be answered. Those patients who generate questions seem to have a higher risk of being transferred to an ICU and a longer hospital stay. The usefulness of bibliographic assistance in changing patient-important outcomes could not be demonstrated in this study. The results suggest the existence of a subgroup of patients who might benefit from interventions such as delivering information by hand; this hypothesis could be tested in future studies.

Acknowledgments

We appreciate Melissa Kucharczyk’s contribution to the translation of this article into English.

Footnotes

  • Competing interests None.

References


Why are most published research findings false?


There is increasing concern that most currently published research findings are false. The probability that a research claim is true may depend on study power and bias, on the number of other studies addressing the same question and, most importantly, on the ratio of true to no relationships among the relationships investigated in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes and analytical modes; when financial and other interests and prejudices are greater; and when more teams are involved in a scientific field chasing statistical significance. Simulations show that, for most study designs and settings, it is more likely for a research finding to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. This essay discusses the implications of these problems for the conduct and interpretation of research.
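For reference, the central quantity in Ioannidis's framework is the post-study probability that a claimed relationship is true (the positive predictive value, PPV), written in terms of the pre-study odds R that a tested relationship is true, the type I error rate α and the type II error rate β; the expression below omits the paper's additional bias term.

\mathrm{PPV} = \frac{(1 - \beta)\,R}{R - \beta R + \alpha}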
The full article can be found at http://dx.doi.org/10.1371/journal.pmed.0020124

Why Most Published Research Findings Are False, by John P. A. Ioannidis

Ioannidis JPA. Why Most Published Research Findings Are False. PLoS Med. 2005 August;2(8):e124

A cure for the disease of hate



BMJ 2011; 343:d5715 doi: 10.1136/bmj.d5715 (Published 14 September 2011)

Cite this as: BMJ 2011; 343:d5715
  • Views & Reviews
  • Review of the Week


  1. Iain McClure, consultant child and adolescent psychiatrist, Royal Hospital for Sick Children, Edinburgh
  1. imcclure@nhs.net
A Gazan doctor working in Israel describes his life and extraordinary tragedy, with a determination that good must come from bad. Iain McClure recommends his book to all doctors
On 16 January 2009 three Palestinian sisters were killed when an Israeli tank fired two shells into their bedroom. They were the daughters of Dr Izzeldin Abuelaish, a Palestinian gynaecologist, who, uniquely for a Gazan doctor, held a consultant post in an Israeli hospital. Abuelaish’s book, I Shall Not Hate, is an account of his life up to this momentous event and movingly explains his remarkable reaction. In essence, Abuelaish, who likens hate to disease and communication to cure, has drawn on his medical experience to seek a new approach to the resolution of apparently insoluble conflict.
For the three weeks prior to January 2009 the Israeli Defense Forces had been pursuing an incursion into the Gaza Strip to eradicate Qassam rocket attacks into Israel. The Israeli government had prevented Israeli or foreign journalists broadcasting from within Gaza during the operation. However, Abuelaish, a well-known public figure in Gaza, had …

Reevaluating Studies: A CRO & A Coincidence?


Published by Ed Silverman, via Pharmalot

Last week, the FDA announced that any clinical tests conducted between April 2005 and June 2010 by a contract research organization called Cetero Research may have to be reevaluated because two FDA inspections and an outside audit found falsified data and manipulated samples. In explaining its move, the agency maintained there were “significant instances” of misconduct.
The agency says Cetero failed to conduct an adequate internal investigation to determine the extent and impact of the violations, and did not take sufficient steps to assure data integrity during those five years. And so drugmakers must check their databases for trials that were used to support New Drug Applications and Abbreviated New Drug Applications – and may have to repeat or confirm results (back story).
For its part, Cetero subsequently issued a statement saying the CRO initiated its own internal investigation of its Houston bioanalytical laboratory in 2009 after discovering six chemists had misreported the date that samples were extracted prior to analysis. They did this to seek overtime pay for hours when they did not actually work. But the CRO insists reports were filed with the FDA and agency feedback was sought, although none was received. Cetero clients were also contacted.
“We learned in roughly June 2009 and that’s the time at which we self-reported the findings to the FDA,” Cetero CEO Troy McCall tells us. “We requested a meeting at the time that we originally provided them with our preliminary findings and during the course of our 18-month investigation. We were providing them with regular updates…They received all the information we provided to them on an interim basis.”
“We took these issues so seriously that we not only terminated the employees who were responsible for these actions, but we also replaced management and ultimately the site leadership,” McCall continues. “…That was probably another half dozen or so people.” He reiterated that all of the terminated employees were based in the Houston facility.
There is, however, an interesting tidbit concerning some other former and current Cetero employees – several of them have had experience suitable for dealing with the recent troubles. How so? They also once worked for MDS Pharma Services, another CRO that is now owned by INC Research and, notably, had rather similar problems with the validity and accuracy of test results. In fact, in January 2007, the FDA notified drugmakers to reevaluate pharmacokinetic studies that were conducted for them by MDS from 2000 through 2004 (read here).
Which former and current Cetero employees worked at MDS? And when? Well, there was Jerry Merritt, who was MDS senior vice president and general manager from 2000 to 2006, when he left to run Cetero, although he was succeeded as CEO early last year by McCall. And Murray Ducharme, the chief scientific officer at Cetero, was previously an MDS vice president from 2000 to 2006, although his Cetero bio neglects to mention this (look here).
Then there was John Capicchioni, who was the MDS senior vice president of business development from 1994 to 2006, when he joined Cetero as vice president of business development, although his bio also overlooks time spent at MDS (see this).
There was also Herb Smith, who was MDS senior director of quality assurance from 1999 to 2007, when he became VP of quality assurance at Cetero, although he retired in April. Finally, there is April Johnson, who worked at MDS as a marketing manager from 1999 to 2005 and as a marketing director of early clinical research and bioanalysis from 2005 to 2007, when she joined Cetero as VP, business relationship management (see this). And, yes, her bio fails to mention MDS.
In other words, several current and former members of the Cetero managerial team arrived from another CRO that experienced similar breakdowns affecting the validity of bioequivalence data, which caused regulators to question the ability to conduct a proper audit (read a sample letter the FDA sent to drugmakers with pending ANDAs).
The problems at MDS, by the way, factored into growing concern a few years ago about oversight of CROs. The rise in the number of clinical trials prompted a corresponding growth in the number of such companies, along with complaints about quality and competency. In February 2007, for instance, Gilead reported that the FDA had found “certain irregularities” in studies conducted by MDS (read this).
We asked Cetero for comment about the experience some of its executives brought with them from MDS. Johnson, who also acts as the Cetero spokesperson, declined to comment.

Ethical and Practical Issues Associated with Aggregating Databases


The goal of “personalized medicine” relies upon defining the genetic variation responsible for disease susceptibility and response to therapy [1]. For most common human diseases, the contribution of a single sequence variant to disease susceptibility is typically small, and can only be detected with data from large numbers of people [2]. Practically, this necessitates collaboration among investigators who either have DNA and phenotypic information previously collected, or have access to populations from which to recruit participants. It also requires that data be shared among the collaborators. Modern bioinformatics platforms have the capacity to combine datasets and store them for re-analysis. This is scientifically advantageous since it makes possible studies with enhanced validity in a cost-effective fashion. However, this data storage can complicate the already vexing practical, scientific, and ethical issues associated with gene and tissue banks. Research participants’ data may have been collected without authorization that meets today’s standards for informed consent. Research participants may not have consented to participation in genetics research in general, to inclusion in genetics databases specifically, or to use of their samples in genetic analyses that were unanticipated, unknown, or nonexistent at the time samples were collected [3]. Participants who consented to the collection of their data for use in a particular study, or inclusion in a particular database, may not have consented to “secondary uses” of those data for unrelated research, or use by other investigators or third parties [4]. There is concern that institutional review boards (IRBs) or similar bodies will not approve of the formation of aggregated databases or will limit the types of studies that can be done with them, even if those studies are believed by others to be appropriate, since there is a lack of consensus about how to deal with re-use of data in this manner… Read more in PLoS Medicine.