Ask the Authors: Ebola in Well-Resourced Settings

Posted by Karen Buckley • November 20th, 2014

The physicians who treated patients with Ebola in Atlanta and Hamburg are now answering your questions on the NEJM Group Open Forum.

Two recent NEJM Brief Reports provide detailed clinical information about three patients with Ebola virus disease who were transferred from West Africa to the United States or Germany in the midst of their illness. While most cases occur in areas where tragically few resources are available to care for affected patients, these reports afford us the opportunity to observe the course of illness in a well-resourced health care setting. The cases highlight the importance of intensive fluid management during the course of the illness. Authors of both reports are answering questions about what this means for treating patients as the epidemic continues and more cases present to well-resourced settings.

The NEJM Group Open Forum is publicly available for all to view, but in order to comment you must register with Medstro and be a physician. This discussion is open until Wednesday, November 26.

Join the discussion now!

Mortality in Type 1 Diabetes

Posted by Joshua Allen-Dicker • November 19th, 2014

There are moments during every physician’s day when she or he gives medical advice based on well-established evidence: “The data show that starting medication A for this disease will reduce the risk of death by 20%.” There are also moments when she or he may give advice just because it seems like the right thing to do, though evidence may be lacking: “It makes sense that using medication B might help in the treatment of this disease.” Sometimes advice based on common sense or medical tradition turns out to be misguided (e.g., bed rest for back pain, niacin for atherosclerotic vascular disease). And sometimes advice that makes sense is spot-on correct, as shown by a paper published in this week’s issue of NEJM, “Glycemic Control and Excess Mortality in Type 1 Diabetes Mellitus,” by Lind and colleagues.

According to a recent Centers for Disease Control and Prevention report, each year over 18,000 people in the United States are diagnosed with type 1 diabetes (T1D). People with T1D are at increased risk for both microvascular complications (e.g., neuropathy, nephropathy) and macrovascular complications (e.g., coronary disease, stroke), as well as the morbidity associated with these conditions. By that same intuition, it might make sense that better glycemic control, as measured by a lower hemoglobin A1c (HbA1c) level, would be associated with improved outcomes. However, unlike prior research that has demonstrated a clear association between lower HbA1c levels and fewer microvascular complications, the relation between glycemic control and mortality has remained less well defined.

Lind and colleagues describe a prospective cohort study of patients with T1D who were enrolled in Sweden’s National Diabetes Registry. For each person with T1D, the study included five matched controls from Sweden’s general population. Participants were followed from enrollment until death or study completion. Outcome data collected for all participants included date of death (if it occurred) and relevant associated diagnoses. Data collected specifically for participants with T1D included albuminuria status, kidney function, and updated mean HbA1c level. Cox regression models were used to compare outcomes between persons with T1D and the matched controls.

Between 1996 and the end of 2011, 33,915 patients with T1D and 169,249 controls were enrolled in the study. In subsequent analyses, Lind et al. found that persons with poor glycemic control (HbA1c ≥9.7%) had 8 to 10 times the risk of death of the matched controls, as well as significantly higher mortality than persons with T1D whose glycemic control was on target.

At first glance, the results of Lind et al. are no surprise: better glycemic control is associated with better outcomes in T1D. However, the study also provides humbling data. First, while the excess mortality in T1D appears to be modifiable and dependent on adequate glycemic control, our progress in improving T1D outcomes may have stalled over the last two decades: in a comparison of study time periods (1998-2004 vs. 2005-2011), there was no significant improvement in excess mortality risk for the T1D population. Additionally, the authors found that even persons with T1D whose glycemic control was on target (updated mean HbA1c ≤6.9%) still had twice the risk of death of the control population.

As clinicians, we may be left wondering: what can we do to help improve outcomes in T1D? We know there has been a historical gap between guidelines and the quality of care patients actually receive: only 13-15% of persons with T1D reach their HbA1c goal. Similar data exist for other diabetes quality metrics. Innovation in patient engagement, quality-improvement projects around guideline adherence, and identification of additional outcome metrics may be appropriate starting places for our collective efforts.

On top of this, we should ask: if appropriately controlled T1D still carries an increased risk of death, what are we missing? Continued research on insulin-replacement strategies (e.g., the bionic pancreas) and on mitigating the end-organ effects of diabetes is needed.

After reading Lind et al., we may feel inclined to congratulate ourselves: our clinical intuition was correct after all. However, by strengthening the known association between glycemic control and mortality, Lind and colleagues have sounded an important warning: clinicians and researchers still have much progress to make in improving our understanding of T1D and the quality of care we provide each day.

Dual Antiplatelet Therapy after Drug-eluting Stents

Posted by Chana Sacks • November 16th, 2014

“Well, doc, it’s been a year!  Now what?”

You first met your patient 12 months ago when he presented to the emergency department having a heart attack.  He was rushed to the cardiac catheterization lab, where a drug-eluting stent was placed to open the blocked coronary artery responsible for his crushing chest pain.

He has done well since and has been following up with you regularly in Cardiology Clinic. He has had no further episodes of chest pain, nor any bleeding complications from the dual antiplatelet therapy of aspirin and clopidogrel that you prescribed to prevent thrombosis of his new stent. Starting the day after his heart attack, and at every follow-up visit since, you stressed the importance of taking these two medications for at least one year. At the one-year mark, you told him, you would discuss whether or not to continue this regimen. He presents today ready for that conversation.

As you begin to discuss the risks and benefits of continuing these two antiplatelet drugs, you find yourself in that uncomfortable data-free zone. After placement of a drug-eluting stent, the guidelines are clear that dual antiplatelet therapy should be continued for six months to one year. Beyond one year, however, the risks and benefits have remained uncertain.

In this week’s NEJM, Mauri and colleagues report the results of the DAPT trial, which was designed to fill this void. The data from this study suggest that an additional 18 months of dual antiplatelet therapy results in improved cardiovascular outcomes but also leads to an increased risk of bleeding.

The study included 10,000 participants who, like your patient, had received an FDA-approved drug-eluting stent, had completed 1 year of dual antiplatelet therapy without ischemic events, repeat revascularization, or major bleeding, and had shown a high degree of adherence during that first year of therapy. Participants were randomized to an additional 18 months of either continued dual antiplatelet therapy with aspirin plus a thienopyridine (clopidogrel or prasugrel) or aspirin alone. The co-primary endpoints were the incidence of stent thrombosis and of major adverse cardiovascular and cerebrovascular events (MACCE), a composite of death, MI, or stroke.

The results: continued dual antiplatelet therapy reduced the rates of both stent thrombosis (0.4% vs. 1.4%; hazard ratio, 0.29; 95% CI, 0.17-0.48; P<0.001) and MACCE (4.3% vs. 5.9%; hazard ratio, 0.71; 95% CI, 0.59-0.85; P<0.001). Moderate or severe bleeding was more frequent in the dual antiplatelet therapy arm (2.5% vs. 1.6%, P=0.001).
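One way to make these percentages concrete is to convert them into numbers needed to treat (or harm) over the 18-month randomized treatment period. The sketch below is purely illustrative arithmetic from the rounded event rates quoted above, not part of the trial's own analysis:

```python
import math

def number_needed(control_pct: float, treated_pct: float) -> int:
    """Number needed to treat (or harm) from two event rates given in percent.

    Works in whole events per 10,000 patients to avoid floating-point
    surprises, then rounds up, as is conventional for NNT/NNH.
    """
    diff = round(abs(control_pct - treated_pct) * 100)  # events per 10,000
    return math.ceil(10_000 / diff)

# Rounded event rates from the DAPT results quoted above.
nnt_stent_thrombosis = number_needed(1.4, 0.4)  # treat ~100 patients to prevent 1 stent thrombosis
nnt_macce = number_needed(5.9, 4.3)             # treat ~63 patients to prevent 1 MACCE event
nnh_bleeding = number_needed(2.5, 1.6)          # ~1 extra moderate/severe bleed per ~112 treated
print(nnt_stent_thrombosis, nnt_macce, nnh_bleeding)  # 100 63 112
```

Seen this way, the trade-off the trial quantifies is roughly one additional serious bleed for every one to two ischemic events prevented, which is why the decision remains individualized.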

Surprisingly, at the final 33-month follow-up, all-cause mortality was higher in the dual antiplatelet therapy group: 2.3%, as compared with 1.8% in the placebo arm (hazard ratio, 1.36; P=0.04). This difference was driven by a higher rate of noncardiovascular death, due to increased bleeding from trauma and to cancer-related deaths. Importantly, while some cancer deaths were attributable to bleeding, the excess in cancer-related deaths appears largely to reflect the chance assignment of more patients with cancer to the dual antiplatelet therapy arm.

NEJM Deputy Editor Dr. John Jarcho describes how these new data might inform clinical practice: “The DAPT trial highlights the pros and cons of prolonged dual antiplatelet therapy. It suggests that the best approach may be to individualize patient management. Those at higher risk of atherothrombosis might benefit from prolonged treatment, while those at higher risk of bleeding might be best advised to stop at one year.”

To your patient and the decision about his medications: this study doesn’t offer a clear directive. So you talk to him about that. After a discussion of the risks and benefits, and your best attempt to share what we do know and what we don’t, you arrive at a decision to continue his aspirin and clopidogrel for another 18 months. At the 18-month mark, you tell him, you will discuss whether or not to continue the regimen.

As he heads out of your office, you start thinking about the data-free zone you will face a year and a half from now, when he comes back to your office again wondering, “Well, doc, it’s been 18 months! Now what?”

For more on the DAPT trial, watch the 2-minute video summary and read the accompanying editorial.

The α-Thalassemias

Posted by Carla Rothaus • November 14th, 2014

More than 100 varieties of α-thalassemia have been identified. Their geographic distribution and the challenges associated with screening, diagnosis, and management suggest that α-thalassemias should have a higher priority on global public health agendas.  A new review article on this topic comes from the University of Oxford’s Drs. Frédéric Piel and David Weatherall.

The α-thalassemias represent a global health problem with a growing burden. A refined knowledge of the molecular basis of α-thalassemia will be fully relevant from a public health perspective only if it is complemented by detailed epidemiologic data. To ensure appropriate care of patients and the sustainability of health care systems, more effort must be put into obtaining evidence-based estimates of affected populations, providing resources for the prevention, control, and management of the thalassemias, and performing cost-effectiveness analyses.

Clinical Pearls

What are the α-thalassemias and how are they classified?

The thalassemias are the most common human monogenic diseases. These inherited disorders of hemoglobin synthesis are characterized by reduced production of the globin chains of hemoglobin. Worldwide, the most important forms are the α- and β-thalassemias, which affect production of the α-globin and β-globin chains, respectively. Normal adult hemoglobin consists of pairs of α and β chains (α2β2), and fetal hemoglobin has two α chains and two γ chains (α2γ2). The genes for the α chains and γ chains are duplicated (αα/αα, γγ/γγ), whereas the β chains are encoded by a single gene locus (β/β). In the fetus, defective production of α chains is reflected by the presence of excess γ chains, which form γ4 tetramers, called hemoglobin Bart’s; in adults, excess β chains form β4 tetramers, called hemoglobin H (HbH). Because of their very high oxygen affinity, neither tetramer can transport oxygen, and, in the case of HbH, its instability leads to the production of inclusion bodies in the red cells and a variable degree of hemolytic anemia.

More than 100 genetic forms of α-thalassemia have thus far been identified, with phenotypes ranging from asymptomatic to lethal. On the basis of the number of α-globin genes lost by deletion or totally or partially inactivated by point mutations, the α-thalassemias are classified into two main subgroups: α+-thalassemia (formerly called α-thalassemia 2), in which one of the pair of genes is deleted or inactivated by a point mutation (-α/αα or ααND/αα, with ND denoting nondeletion), and α0-thalassemia (formerly called α-thalassemia 1), in which both α-globin genes on the same chromosome are deleted (–/αα). Clinically relevant forms of α-thalassemia usually involve α0-thalassemia, either coinherited with α+-thalassemia (-α/– or ααND/–), resulting in HbH disease, or inherited from both parents, resulting in hemoglobin Bart’s hydrops fetalis (–/–), which is lethal in utero or soon after birth.

Figure 1. Phenotype-Genotype Relationship in α-Thalassemia.

What are the clinical features of the hemoglobin Bart’s hydrops fetalis syndrome and HbH disease?

Fetuses affected by hemoglobin Bart’s hydrops fetalis succumb to severe hypoxia either early in gestation or during the third trimester. The hemoglobin Bart’s hydrops fetalis syndrome is often accompanied by a variety of congenital malformations and maternal complications, including severe anemia of pregnancy, preeclampsia, polyhydramnios, and extreme difficulty in delivery of both the fetus and the hugely enlarged placenta. HbH disease is often considered to be a relatively mild disorder. Studies have nevertheless highlighted clinically severe phenotypes, notably in nondeletional variants of the disease. In fact, HbH disease is characterized by a wide range of phenotypic characteristics. The form that results from deletions (-α/–) usually follows a relatively mild course, with moderate anemia and splenomegaly. Aside from episodes of intercurrent infection, this form of HbH disease does not require blood transfusions. However, the variety that results from the interactions of a nondeletional α-globin gene mutation together with α0-thalassemia (ααND/–) follows a much more severe course.

Morning Report Questions

Q: How is hemoglobin Bart’s hydrops fetalis diagnosed?

A: Prenatal diagnosis is required to identify fetuses affected by hemoglobin Bart’s hydrops fetalis and to reduce the risks to the mothers. The decision to consider such a diagnosis usually follows the finding of hypochromic microcytic red cells in both parents, in association with a normal hemoglobin A2 level; this combination would rule out β-thalassemia, which usually involves an elevated hemoglobin A2 level. Iron deficiency also has to be ruled out. When facilities for rapid DNA diagnosis are available, the hematologic examination is followed by confirmation of the presence of α0-thalassemia in the parents. The fetal diagnosis is usually made early in pregnancy by means of chorionic-villus sampling, although fetal anemia may also be diagnosed later during gestation by quantitation of the peak systolic velocity in the middle cerebral artery. Various alternative methods of preimplantation and preconception genetic diagnosis or prenatal diagnosis — for example, analysis of maternal blood for fetal DNA and identification of fetal cells in maternal blood by staining with antibodies against globin chains — are still at relatively early stages of study.

Q: Describe the geographic distribution of α-thalassemia.

A: Evidence that α-thalassemia is highly protective against severe malaria is well established. As a result of this selective advantage, heterozygous α-thalassemia has reached high frequencies throughout all tropical and subtropical regions, including most of Southeast Asia, the Mediterranean area, the Indian subcontinent, the Middle East, and Africa. In conjunction with large-scale global population movements in recent decades, α-thalassemia has spread to many other parts of the world, including northern Europe and North America. This phenomenon is best illustrated by the implementation in 1998 of a universal screening program for α-thalassemia in California. After the immigration of large numbers of people from the Philippines and other Southeast Asian countries, the incidence of α-thalassemia syndromes in California between January 1998 and June 2006 was 11.1 cases per 100,000 persons screened, with 406 cases of HbH disease and 5 cases of hemoglobin Bart’s hydrops fetalis.

Figure 2. Geographic Distribution of α-Thalassemia, Hemoglobin Bart’s Hydrops Fetalis, and HbH Disease.

Fevers, Chest Pain, and Substance-Use Disorder

Posted by Carla Rothaus • November 14th, 2014

In the latest Case Record of the Massachusetts General Hospital, a 31-year-old woman with substance-use disorder was admitted to this hospital because of fevers and chest pain. CT of the chest revealed multiple thick-walled nodular opacities throughout both lungs. Diagnostic tests were performed, and management decisions were made.

Between 2007 and 2009 in the United States, heroin use increased by almost 80%, and approximately 0.5% of the population had an opioid-use disorder (about 650,000 users).

Clinical Pearls

What are common and uncommon pathogens seen in injection-drug users?

Among persons who use injection drugs, infectious complications are the leading cause of hospitalization and in-hospital death. The most common causes of fever among injection-drug users are skin and soft-tissue infections, pulmonary infections, and endocarditis, and the most common pathogens are S. aureus and streptococci. However, outbreaks of less common organisms have been linked to particular injection practices. Licking needles before injection increases the risk of infection with oral anaerobes, such as Eikenella. Using tap water as a solvent increases the frequency of infection with gram-negative bacteria, such as Pseudomonas. Candida infections have been associated with the use of lemon juice to dissolve basic substances, such as crack cocaine, before injection.

What factors are linked to opioid abuse?

The majority of first-time heroin users (81%) have previously misused prescription drugs, especially narcotic pain medications. Among patients with non-cancer-related chronic pain who have been exposed to long-term opioid therapy, there are relatively high rates of drug misuse (11.5%), addiction (3.7% among persons with a history of substance-use disorder and 0.2% among persons without a history of substance-use disorder), and illicit drug use (14.5%). Most patients who are taking narcotic pain medications do not abuse them, but patients who are treated with opioids should be monitored to ensure the appropriate use of these agents, and persons with a history of substance-use disorder should be followed very closely.

Table 2. DSM-5 Diagnostic Criteria for Opioid-Use Disorder.

Morning Report Questions

Q: What are effective treatments for opioid-use disorder?

A: The opioid agonists methadone and buprenorphine are among the most effective treatments for opioid-use disorder. Either medication, when administered as maintenance treatment, can decrease opioid use and drug-related hospitalizations and improve health, quality of life, and social functioning. In addition, treatment with an opioid agonist markedly reduces the likelihood of heroin overdose and death, which is particularly important for hospitalized patients because the risk of drug-related death for a patient with a history of substance-use disorder is nearly 10 times as high in the first month after hospital discharge as it is in subsequent months. Patients with a history of injection-drug use are among the patients who are most likely to be discharged against medical advice, and such discharges, as compared with medically approved discharges, result in higher rates of readmission, longer lengths of stay, and increased cost and in twice the risk of death within 30 days after discharge. Treating opioid withdrawal with opioid agonists and proactively addressing substance use have been shown to decrease the rate of discharge against medical advice.

Q: How does maintenance therapy with an opioid agonist compare to drug detoxification therapy?

A: A majority of patients relapse to opioid use after treatment has been tapered and discontinued. The evidence regarding therapy with an opioid agonist supports maintenance treatment, not detoxification or a tapered course. As with other medications for chronic diseases, the benefits, at least in the short term, last only while the patient is taking the medication. Maintenance therapy results in higher rates of treatment retention and lower rates of heroin use than does detoxification, even when the detoxification is prolonged and is accompanied by psychosocial support and aftercare.

NEJM Group Open Forum

Posted by Karen Buckley • November 13th, 2014

NEJM Group has joined with Medstro, a social professional network for physicians, to create the NEJM Group Open Forum, a series of live discussions intended to generate active conversation around important (and sometimes controversial) ideas. We’ve brought together authors, experts in the field, and many of your peers to discuss these three topics, now active and ready for your contribution.

Join the conversation >>

Can you implement handoff improvements in your own hospital? Ask the authors of the recent I-PASS study in NEJM how they reduced medical errors by 23% and preventable adverse events by 30%, without more time spent.

Meet Sandeep Jauhar, MD, PhD, cardiologist and author of “Doctored” and “Intern,” and first in the series of “Fascinating Physicians,” brought to you by the NEJM CareerCenter.

And, with ABIM recertification pass rates falling, how can you be sure you are prepared? What could be contributing to the increasingly poor performance? Join experts and your peers in this new discussion from NEJM Knowledge+.

We’ll be adding new discussions over the next two months. The NEJM Group Open Forum is publicly available for all to view, but in order to comment you must register with Medstro (a one-minute process) and be a physician.

We hope you will participate in these and future discussions!

Vaccination and Pneumococcal Disease in South Africa

Posted by Brian Honeyman • November 12th, 2014

Since the beginning of medical school I, like most of us, have been bombarded by medical questions from friends, family, and even brand-new acquaintances. If I had a dollar for every rash that I’ve been asked to “take a quick look at,” my loans would be a bit less daunting. One question often posed by friends with young children: how important are vaccines, really? In a time when the public is inundated with information of questionable scientific merit, it becomes critical for all physicians-in-training to understand this commonly discussed issue of vaccines and their effectiveness in diverse populations. In this week’s NEJM, von Gottberg and colleagues offer an opportunity to examine one nation’s experience with a pneumococcal vaccine program.

The largest number of deaths from pneumococcal disease among children under 5 years of age occurs in Africa. Additionally, HIV-infected children under 1 year of age have a 20-fold higher rate of hospitalization associated with invasive pneumococcal disease (IPD) than HIV-uninfected children. In 2009, South Africa adopted a 7-valent pneumococcal conjugate vaccine (PCV) and, in 2011, replaced it with a 13-valent PCV. The country reached 82% completion of the 3-dose PCV series (doses at 6 and 14 weeks of age, with a booster at 9 months) among children under 12 months old. While these vaccines have a demonstrated effect in resource-rich countries, this study provides a multi-year examination of the effectiveness of PCV in a predominantly low-resource population with higher rates of pneumococcal disease and carriage and high HIV prevalence. Over the study period, rates of HIV infection in infants under 2 months old decreased from 9.8% in 2008 to 2.8% in 2011.

This study compared annual IPD incidence before and after the vaccine rollout (2005-2008 vs. 2011-2012) in all age groups, with results further stratified by HIV status. The years 2009-2010 were excluded to account for the gradual rollout of the PCV vaccination program. The investigators used nationwide laboratory surveillance data gathered from 2005 through 2012 at 459 hospitals in South Africa. A total of 35,192 cases of IPD, defined as Streptococcus pneumoniae cultured from a normally sterile-site specimen, were identified and serotyped, and antimicrobial minimum inhibitory concentrations (MICs) were determined.

The investigators found that, across all ages, rates of IPD decreased by 40% between 2008 and 2012, with a nonsignificant 6% increase in IPD caused by non-vaccine serotypes. In the high-risk group of children under 2 years of age, the incidence of all-serotype IPD decreased by 69% over the study period. Similar reductions in disease were seen when the results were stratified by HIV status. Specifically, among HIV-uninfected children under 2 years of age, the incidence of vaccine-serotype IPD decreased by 85%, while non-vaccine-serotype disease increased by 33%; results were comparable in HIV-infected children. Significant reductions in vaccine-serotype IPD were also observed in the 25-44-year age group regardless of HIV status, driven mostly by reductions in PCV-7-serotype disease: 57% in HIV-infected and 59% in HIV-uninfected persons. Other age groups showed reductions in IPD that did not reach statistical significance.

Across all age groups, rates of antimicrobial-resistant pneumococcal disease also decreased. Rates (per 100,000) of penicillin-nonsusceptible IPD decreased by 57% (from 4.3 to 1.9), ceftriaxone-nonsusceptible IPD by 58% (from 0.8 to 0.3), and multidrug-resistant IPD by 52% (from 2.0 to 1.0).
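A reader recomputing these percent changes from the rates as printed will get slightly different numbers, because the published percentages were calculated from unrounded rates. A minimal sketch of the arithmetic, using the rounded values quoted above:

```python
def percent_change(before: float, after: float) -> float:
    """Percent change between two incidence rates (negative = decrease)."""
    return (after - before) / before * 100

# Rounded rates per 100,000 quoted above; recomputed changes differ a little
# from the published figures (-57%, -58%, -52%), which used unrounded rates.
print(round(percent_change(4.3, 1.9)))  # -56  penicillin-nonsusceptible IPD
print(round(percent_change(0.8, 0.3)))  # -62  ceftriaxone-nonsusceptible IPD
print(round(percent_change(2.0, 1.0)))  # -50  multidrug-resistant IPD
```

The small discrepancies are a rounding artifact, not an inconsistency in the surveillance data.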

NEJM Deputy Editor Lindsey Baden, MD, notes: “Pneumococcus remains a tremendous cause of severe illness especially in children and immunocompromised patients. Improving the pneumococcal vaccine and its deployment can significantly decrease disease from pneumococcus and it can also apply immunologic selective pressure on this bacterium, as seen in the decrease in the circulation of antibiotic resistant strains.”

These data demonstrate the effectiveness of the recent vaccination program in South Africa in preventing IPD. Further surveillance will be needed to determine the long-term impact of non-vaccine-serotype disease.

Brian Honeyman is a 4th year medical student at Boston University and recently completed an elective course at the editorial offices of the New England Journal of Medicine.

Lung-Cancer Screening

Posted by Carla Rothaus • November 7th, 2014

A large randomized trial showed that low-dose CT screening reduced the risk of lung-cancer death by 20% among long-time smokers. Recent guidelines support consideration of screening but with attention to the possibility of false positive results and associated risks. Read the latest Clinical Practice review on this topic.

Despite advances in diagnosis, staging, and treatment, only 18% of patients with lung cancer are still alive 5 years after diagnosis. Clinicians, scientists, and advocates have long sought a safe and effective screening test to identify lung cancer during its preclinical phase, when it is presumed to be more amenable to curative treatment.

Clinical Pearls

What was the NLST and what were its findings?

The NLST included more than 50,000 persons enrolled at 33 U.S. centers and has thus far provided the strongest evidence regarding the potential benefits of lung-cancer screening. Participants were 55 to 74 years of age, with a smoking history of at least 30 pack-years (former smokers had to have quit within the previous 15 years); they were randomly assigned to three rounds of annual screening with low-dose CT [computed tomography] or chest radiography. The NLST showed a 20% reduction in lung-cancer mortality with low-dose CT versus chest radiography (247 vs. 309 deaths per 100,000 patient-years of follow-up). In absolute terms, this translated to approximately 3 fewer deaths from lung cancer per 1000 high-risk persons who underwent low-dose CT screening.
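The relation between the relative and absolute effects quoted here is simple arithmetic; a minimal sketch reproducing the 20% figure from the stated mortality rates (illustrative only, not the trial's own analysis):

```python
# Lung-cancer deaths per 100,000 person-years in the NLST, as quoted above.
low_dose_ct = 247
chest_xray = 309

relative_risk = low_dose_ct / chest_xray
relative_reduction_pct = round((1 - relative_risk) * 100)  # 20% relative reduction
absolute_reduction = chest_xray - low_dose_ct              # 62 fewer deaths per 100,000 person-years

print(relative_reduction_pct, absolute_reduction)  # 20 62
```

The contrast between the 20% relative reduction and the modest absolute reduction is why the same trial result can sound either dramatic or marginal depending on how it is framed.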

How common were false positive findings and complications of invasive testing in the NLST?

False positive findings were common with low-dose CT, but complications of invasive testing were not. Across all three rounds of screening, 39% of the participants in the low-dose CT group had at least one positive result; more than 95% of these findings were falsely positive. Most patients with positive screening-test results required follow-up imaging. After three rounds of screening, a minority of participants underwent invasive tissue sampling by means of needle biopsy (2%), bronchoscopy (4%), or surgery (4%). Relatively few of the surgeries (24%) were performed in patients with benign nodules, but most of the nonsurgical biopsies (73%) revealed benign findings and therefore were potentially avoidable. Among participants with a positive screening-test result in the low-dose CT group, 1% had at least one complication related to invasive testing, but only 20% of these complications occurred among participants who did not have lung cancer.

Table 1. Potential Benefits and Harms of Three Rounds of Annual Screening with Low-Dose CT, as Compared with Chest Radiography or No Screening.

Morning Report Questions

Q: What uncertainties do the authors highlight about lung-cancer screening with low-dose CT?

A: Several important questions about low-dose CT screening remain unanswered, and screening continues to be controversial. A key controversy is whether the NLST results are applicable to the Medicare population in the United States. A related question is how to optimize the selection of candidates for screening. The potential benefits of screening are greatest in persons who are at the highest risk for death from lung cancer. Although limiting screening to persons at highest risk represents the most efficient approach to screening, extending eligibility criteria to include those at lesser risk will inevitably prevent a greater number of lung-cancer deaths, albeit less efficiently. The potential harms of screening warrant additional consideration. Patients at high risk for procedure-related complications and those with limited life expectancy owing to chronic illness have less to gain from screening than those at low risk and those without chronic illness, respectively. Ultimately, the trade-offs will need to be weighed by patients and their physicians. To facilitate a personalized approach, models have been developed that estimate individualized risks of lung-cancer death and predict complications of needle biopsy and lung-cancer surgery, although further studies are needed to determine which models perform best. There is also uncertainty about whether the relatively low risks of invasive testing for benign conditions and procedure-related complications observed in the NLST can be replicated in community-based practice.

Table 2. Guidelines for Lung-Cancer Screening with Low-Dose CT.

Q: Is there evidence that lung-cancer screening programs will reduce rates of smoking?

A: Data from randomized, controlled trials of low-dose CT screening are sparse and inconsistent, and thus far they do not answer the question of whether participation in a screening program improves rates of smoking cessation. A possible unintended consequence of screening is that some current smokers with negative results on low-dose CT will be falsely reassured that they do not have lung cancer and will therefore continue to smoke. Several studies have shown that rates of smoking cessation are higher among persons with positive screening-test results than among those with negative results.

A Chilly Fever

Posted by Carla Rothaus • November 7th, 2014

A 30-year-old graduate student presented with fevers associated with shaking chills and severe headaches. He had been well until 1 week before presentation, when he began to have daily fevers, with temperatures as high as 39.4°C. Any fever in a patient who has had possible exposure to malaria should prompt consideration of this diagnosis.

Clinical Pearls

What is the annual incidence of malaria in the United States?

In the United States, the annual incidence of malaria is approximately 1500 cases. In 2010, a total of 1691 cases were reported to the Centers for Disease Control and Prevention (CDC), the largest number reported since 1980; P. falciparum, P. vivax, P. malariae, and P. ovale were identified in 58%, 19%, 2%, and 2% of cases, respectively.

How do malaria and babesiosis differ in appearance on a peripheral blood smear?

Intraerythrocytic parasites are seen in both malaria and babesiosis. Plasmodia metabolize heme to form an intracellular crystallized pigment, hemozoin. Although hemozoin is not invariably identified in cases of malaria, its presence reliably distinguishes malaria infection from babesia infection. Malaria parasites can be distinguished from Babesia microti by the presence of recognizable gametocytes (characteristically banana-shaped in Plasmodium falciparum and round, with a granular appearance, in nonfalciparum species). In addition, intracellular vacuoles and extracellular merozoites are unusual in malaria but common in babesiosis, and the classic “Maltese cross” (a tetrad of parasites budding at right angles) is unique to babesia species.

Morning Report Questions

Q: Which malaria species can remain dormant in the liver?

A: In the case of P. vivax and P. ovale, some sporozoites (immature malaria parasites) do not replicate immediately when they invade hepatocytes but remain dormant (as hypnozoites) for prolonged periods. The average time to relapse is approximately 9 months, but it can range from weeks to years. The interval to relapse depends on the strain (earlier with tropical strains and later with temperate strains), the initial inoculum, and host factors (e.g., febrile illnesses can trigger relapse associated with P. vivax). None of the commonly used prophylactic agents (chloroquine, mefloquine, doxycycline, or atovaquone-proguanil) eliminate hypnozoites. Primaquine, the only effective drug against dormant hypnozoites, has not been approved by the Food and Drug Administration for primary prophylaxis, but the CDC endorses its use for prophylaxis in Latin American countries where P. vivax predominates, because the drug can prevent both primary attacks and relapses caused by all species that are a source of malarial infection.

Q: How is acute or recurrent P. vivax infection treated?

A: In patients with acute or recurrent malaria infection, treatment depends on the species and the resistance status in the area where the infection was acquired. P. falciparum is resistant to chloroquine in most regions in which it is endemic and resistant to mefloquine in parts of Southeast Asia. In contrast, nonfalciparum malaria parasites do not have substantial resistance to mefloquine, and the distribution of chloroquine-resistant P. vivax malaria is limited, occurring primarily in Indonesia and Papua New Guinea. After treatment is initiated, peripheral-blood smears should be obtained daily for 4 days (parasitemia is typically eliminated by day 4), on days 7 and 28 to confirm eradication, and whenever symptoms recur, since recurrence suggests treatment failure. In areas other than those with known chloroquine resistance, chloroquine, followed by a 14-day course of primaquine to prevent subsequent relapses, remains the standard treatment for P. vivax parasitemia. Given the risk of hemolysis in patients with glucose-6-phosphate dehydrogenase (G6PD) deficiency who receive primaquine, potential recipients should be tested for G6PD deficiency. Among patients with a contraindication to primaquine therapy, treatment with chloroquine alone carries a 20% risk of relapse; extended chloroquine prophylaxis can be offered to patients who have frequent relapses.

The Good Word: Improving Patient Handoffs

Posted by Rena Xu • November 5th, 2014

Starting at six in the evening, the surgery residents at my hospital gather for sign-out.  This is when residents from the day shift hand over care of their patients to those working overnight.  Sign-out takes place in the residents’ lounge — a room furnished with computers, couches, and a makeshift ping-pong table — and tends to be an informal affair.  Multiple conversations happen at once, and interruptions are not uncommon, whether by phone call or passerby or stray ping-pong ball.

If you ask any of the residents, they’ll tell you this system works just fine.  But as shift changes have become more frequent in recent years to accommodate work hour restrictions, there is concern that the growing number of handoffs may be leading to medical errors.  As in a game of “telephone,” information can get distorted each time it is relayed.  If reducing the quantity of handoffs isn’t an option, is there a way to improve the quality?

A few years ago, a group of researchers developed a tool called I-PASS that attempted to standardize the sign-out process.  I-PASS, which stands for Illness severity, Patient summary, Action list, Situation awareness and contingency plans, and Synthesis by receiver, acts as a checklist — a way to summarize a patient’s history and care plan and “employ closed loop communication” to ensure that the receiver understands.

The investigators built a handoff-improvement program around this tool and tested it in a pediatrics residency program, measuring the rate of medical errors among residents before and after they received I-PASS training.  They found that implementation of the program reduced miscommunications and preventable errors.  Encouraged by these results, they tweaked the tool and expanded the study to a total of nine pediatrics programs.  The results of this multicenter study, published this week in NEJM, again show that the I-PASS tool is effective: medical errors decreased by 23%, preventable adverse events decreased by 30%, and critical information was included more frequently in written and verbal handoffs.  And, importantly, handoffs weren’t any more time-consuming than before.

“Our study shows that the risk of handoff-related errors can be significantly reduced,” the authors concluded. “Implementing handoff-improvement programs such as the I-PASS Handoff Bundle may potentiate the effectiveness of work-hour reductions, because doing both together may concurrently reduce both fatigue and handoff-related errors.”

This may be why, within a week of starting residency earlier this year, my co-interns and I underwent mandatory I-PASS training.  The three-hour session involved signing out fictional patients to each other – “Mrs. So-and-So is a ‘watcher’,” we’d say to practice using the I-PASS terminology, glancing at the cheat sheets we’d been given.

The session was helpful, but in the months that followed, sign-out reverted to old habits.  “I don’t think we need it,” the night shift resident said when I suggested one evening that we incorporate I-PASS into our sign-out.  We decided to give it a try anyway.  At first, things went smoothly; the cheat sheet was a nice reminder of points to cover.  Soon, though, I found myself taking shortcuts.  Was it really necessary to summarize the events leading up to a patient’s admission (the “Patient summary” step), when all I needed the night resident to do was make sure the patient urinated?  And did he really need to repeat information (“Synthesis by receiver”) that I’d given him literally seconds before?

When surgical checklists were first proposed, they were criticized as being cumbersome and unnecessary.  It turned out using them could reduce errors in the operating room.  Still, proof of efficacy wasn’t enough to spur adoption; the surgical culture also had to change.  People had to want to change.

The same might be said for handoffs.  In the I-PASS study, an intervention “bundle” involved not only resident training but also faculty engagement and a process- and culture-change campaign.  The intervention period lasted a full six months, and post-intervention outcomes were measured over an additional six months.  Sign-outs were taped, and residents were followed around by research assistants for 8 to 12 hours at a time, their every activity recorded.  By Hawthorne effect alone (people modifying their behavior because they know they’re being observed), it’s plausible that study participants were more compliant with the I-PASS methodology than they might have been otherwise.  Outside the study environment, getting residents to embrace I-PASS as their own is more challenging.

Continuity of care in an era of many handoffs does not have to be a futile aspiration.  The I-PASS study suggests that thoughtfully implemented interventions can make care transitions safer.  That may require, among other things, redefining the sign-out ritual.  It’s easier said than done, to be sure, but it’s also worth trying — especially for those of us who are just starting our medical careers and learning how to care safely and meaningfully for others.

For more on I-PASS, join the conversation on the new NEJM Group Open Forum, powered by Medstro, a social professional network for physicians.  We’re piloting a series of live interactive discussions among authors, experts, and fascinating physicians on topics that include the intricacies of modern medicine, cutting-edge research, career development, and more. Look for new discussions, coming soon.