Coming Home: Medicine from the Frontlines of Indian Country

Posted by Ken Bernard • August 10th, 2015

Editor’s note: This is the first in a series of posts by Dr. Ken Bernard on his experience with the Indian Health Service. 

Boozhoo, or “Hello.” I start in the language of my ancestors, the Anishinaabe, which means “Original People.” This summer I will be starting my career as an emergency physician and returning to the “rez” (reservation) after a thirteen-year hiatus in the Northeast completing college, medical school, and an emergency medicine residency. What is bringing me back? To answer that, I must start at the beginning.

My life, like many here in the U.S., began with a series of numbers: Apgar scores of 9 and 9, weight 8 lbs. 3 oz., length 24 inches. There is also a curious ⅜. This last fraction is unique to the roughly five million American Indians and Alaska Natives (AIAN) from over 566 federally recognized tribes that currently live in the U.S. It is recorded on my Certificate of Degree of Indian or Alaska Native Blood, which is conferred by the Bureau of Indian Affairs and grants my enrollment in the Turtle Mountain Band of Chippewa Indians. Essentially, it is recognition by the federal government of my lineage and the degree of my “Indianness,” so to speak.

However, this little fraction has huge political, socioeconomic, historical, and cultural implications for Native populations. We are the only race whose membership is defined, regulated, and granted by the federal government. Personally, this number has been somewhat confusing. Nationally I am American, culturally I am Native, and ethnically I am a hodgepodge of Turtle Mountain Chippewa and European ancestry. Many consider me to look ethnically ambiguous — my patients often ask if I am Latino, Italian, or Greek. Furthermore, this number, which ties me to my home and heritage, took me farther away from both than I could have ever imagined to gain the knowledge and skills needed to fulfill my dream of becoming a physician. But I want to move away from the discussion of blood quantum for a moment to discuss a few other unsettling numbers that have been looming in my mind over the past few years.

As a researcher of health disparities and quality of care, I have run into figures such as 200%, 300%, and 50%: compared with the rest of the U.S. population, these are the relative likelihoods that AIAN people will be impoverished, be unemployed, or obtain a college degree, respectively. Forty-three percent represents the annual budget shortfall of the Indian Health Service (IHS), and $3,348 is its per capita health expenditure, which is lower than that of any other federally funded program, including the federal prison system.

Or how about 20%? This is the annual physician vacancy rate within the IHS, which persists despite generous scholarship and loan repayment programs like the one I was grateful to earn. Inequities in social determinants of health and health care access, in addition to generational trauma, cultural erosion, and racial discrimination, have contributed to other disturbing facts. AIAN adults can expect to die five years earlier than their fellow U.S. citizens and have a 50% higher age-adjusted mortality rate from all causes. And AIAN neonatal mortality and deaths of young AIAN adults from intentional and unintentional injuries are 200% higher than among their U.S. counterparts.

So now back to my “3/8thness” and my calling to serve Native people through the IHS. What do I do with the other 5/8ths or the leftovers? Why doesn’t anyone feel the need to define the 5/8ths of me that have been deemed “not Indian”? Well, that is easy. One dinner with my family and me would demonstrate to anyone observing that I am all in, 100% Indian, with no proportion of me that has ever wanted to be anything but a physician who serves Native people.

Our dinner ritual has been practiced for generations. The adolescents and healthy adults prepare and serve the food, first to elders, then to the ill or frail, followed by expectant mothers or moms with infants, and honored guests.

Next, the food is laid out on the table and a line naturally forms with the youngest first. The prep and service team eats last, once all others are comfortable and full. This small, quiet ritual is an important expression of our dedication to one another. It is a sign of respect and gratitude to those elders who have worked so hard to make sure we have had a chance to thrive. It is recognition that we are only as strong as our weakest members, who deserve our compassion, love, and care.

So really, my call to service had nothing to do with fractions or percentages, but with the values that lead most of us to medicine: service, compassion, justice, altruism, community, dedication, and love. As a researcher and physician, I have come to appreciate that we need to look past numbers to understand the context and experience of individual patients. As a person and proud member of the Turtle Mountain Band of Chippewa, I have never paid much attention to this notion of 3/8ths, but instead look to the foundation of cultural values and beliefs taught to me by elders and my community, which have brought me back to serve the Navajo and Hopi Nations at the Tuba City Regional Health Center in northern Arizona.

I am excited to be able to share my experiences during this journey. I hope to not only raise awareness about the challenges on the front lines of medicine in Indian Country, but also encourage others to follow their passion and calling.

Chronic Cough

Posted by Carla Rothaus • August 7th, 2015

In a new Clinical Problem-Solving article, a 63-year-old nonsmoking man presented with a 2-year history of dry cough. Videos showing transthoracic echocardiograms obtained before and during the course of the patient’s treatment are available.

Whipple’s disease may be difficult to recognize; its rarity and nonspecific clinical features (overlapping with those of many chronic inflammatory diseases) often result in a delayed diagnosis (by 2 to 6 years after the onset of symptoms, according to some case series).

Clinical Pearls

What causes Whipple’s disease?

Whipple’s disease is a chronic infectious disease that affects multiple organ systems; it is a rare disease, with a reported annual incidence of less than 1 case per 1,000,000 people. It is caused by Tropheryma whipplei, a ubiquitous environmental organism. T. whipplei is thought to be acquired by fecal-oral transmission and is found in sewage-plant effluent.

What is the most common profile of a patient diagnosed with Whipple’s disease?

The disease appears to occur more frequently in persons of European ancestry than in persons of other ancestries and more frequently in men than in women (male:female ratio, 4:1). It most often manifests in middle age, although cases occur across the age spectrum.

Morning Report Questions

Q: What are some of the typical manifestations of Whipple’s disease?

A: Whipple’s disease has a broad spectrum of symptoms and signs; typical manifestations (and their frequency in case series) include weight loss (in 92% of patients), diarrhea (in 76%), arthralgia (in 67%), abdominal pain (in 55%), fever (in 38%), supranuclear ophthalmoplegia (in 32%), headache (in 10%), anemia (in 85%), lymphadenopathy (in 60%), endocarditis (in 30%, and usually culture-negative), and pulmonary involvement (in 30 to 40%, approximately half of whom have cough). The typical presentation is a prodrome of arthritis, followed by persistent diarrhea and weight loss; arthritis can precede the gastrointestinal symptoms by many years. Although Whipple’s disease is uncommon, chronic infection with T. whipplei should be considered in any patient with persistent unexplained arthralgias or gastrointestinal or systemic symptoms.

Q: What is the diagnostic test of choice for Whipple’s disease and how is it treated?

A: The diagnostic test of choice is upper gastrointestinal endoscopy with biopsies (although other tissue sites may be sampled). Identification of the 16S ribosomal RNA gene with the use of a polymerase-chain-reaction assay (e.g., from feces or saliva) has high sensitivity and specificity for the organism but should not be used in isolation for diagnosis. Culture is not useful in diagnosis, because it may take months for the organism to grow and requires special techniques; serologic testing is nonspecific. In the absence of randomized trials, treatment recommendations are guided largely by case series. First-line treatment typically involves 2 weeks of ceftriaxone, followed by at least 1 year of trimethoprim-sulfamethoxazole. For patients who have a sulfa allergy or desire only oral therapy, doxycycline plus hydroxychloroquine may be substituted for ceftriaxone, trimethoprim-sulfamethoxazole, or both (with response rates in case series similar to those reported with standard therapy).

Figure 2. Pathological Findings.

Pregnancy Complicated by Venous Thrombosis

Posted by Carla Rothaus • August 7th, 2015

A new Clinical Practice article provides an overview of venous thrombosis in pregnancy. Low-molecular-weight heparins are generally preferred for the treatment of venous thromboembolism in pregnant women. Coumarins are contraindicated in pregnancy but can be used after delivery.

Although the absolute incidence of venous thromboembolism in pregnancy is low (1 or 2 cases per 1000 pregnancies), this risk is approximately five times as high as the risk among women who are not pregnant. These risks reflect the venous stasis and procoagulant changes in coagulation and fibrinolysis, which are considered to be part of physiologic preparation for the hemostatic challenge of delivery.

Clinical Pearls

Are there any distinguishing clinical features of deep-venous thrombosis associated with pregnancy, and when does venous thromboembolism associated with pregnancy most often occur?

The clinical diagnosis of venous thrombosis is unreliable in pregnancy. Suggestive symptoms and signs, such as leg swelling and dyspnea, may be difficult to differentiate from the physiologic changes of pregnancy. As compared with deep-vein thrombosis in nonpregnant persons, deep-vein thrombosis in pregnant women occurs more frequently in the left leg (85%, vs. 55% in the left leg among nonpregnant persons) and is more often proximal (72% in the iliofemoral veins, vs. 9% in the iliofemoral veins among nonpregnant persons), with a greater risk of embolic complications and the post-thrombotic syndrome. Thrombotic events occur throughout pregnancy, with more than half occurring before 20 weeks of gestation. The risk increases further in the puerperium (the 6-week period after delivery), probably owing to endothelial damage to the pelvic vessels that occurs during delivery. Recent data indicate that an increased relative risk (but low absolute risk) persists until 12 weeks after delivery. However, approximately 80% of postpartum thromboembolic events occur in the first 3 weeks after delivery.

What is the recommended evaluation for suspected venous thromboembolism occurring during pregnancy?

Suspected deep-vein thrombosis is best assessed by means of compression duplex ultrasonographic examination, including examination of the iliofemoral region. In women with a negative result on ultrasonography in whom clinical suspicion of deep-vein thrombosis is high, it may be prudent to repeat the test after 3 to 7 days. In cases in which iliocaval venous thrombosis is suspected but ultrasonography cannot detect a thrombus, magnetic resonance or conventional x-ray venography may be considered. Chest radiographic findings are normal in the majority of cases of pulmonary embolism, but they can show pulmonary features that point to an alternative diagnosis or nonspecific features of pulmonary embolism such as atelectasis or regional oligemia. Since deep-vein thrombosis is often present in patients with pulmonary embolism, ultrasonographic venography is useful in patients who have possible symptoms or signs of deep-vein thrombosis. If deep-vein thrombosis is detected, further radiologic studies do not have to be performed to confirm a pulmonary embolism. In women with normal findings on chest radiography, ventilation-perfusion lung scanning is often recommended, since it has a high negative predictive value, owing to the low prevalence of coexisting pulmonary problems that can result in indeterminate or false positive results. Moreover, the ventilation component can often be omitted, thereby minimizing the dose of radiation to the fetus. Whereas computed tomographic (CT) pulmonary angiography (CTPA), with its high sensitivity and specificity, is usually the first-line test to detect pulmonary embolism in nonpregnant patients, it is used less often in pregnant women.

Morning Report Questions

Q: What is the recommended treatment for venous thromboembolism associated with pregnancy?

A: Anticoagulation in pregnancy typically involves unfractionated heparin or low-molecular-weight heparin, which do not cross the placenta or enter breast milk. In contrast, vitamin K antagonists such as warfarin are contraindicated in pregnancy, since they cross the placenta and their use is associated with embryopathy, central nervous system abnormalities, pregnancy loss, and fetal anticoagulation with possible bleeding. Low-molecular-weight heparins have largely replaced unfractionated heparin for the management of venous thromboembolism in pregnancy. Typical agents include dalteparin (at a dose of 200 IU per kilogram of body weight daily or 100 IU per kilogram twice daily), enoxaparin (1.5 mg per kilogram daily or 1 mg per kilogram twice daily), and tinzaparin (175 units per kilogram daily). Data are limited regarding the use of fondaparinux in pregnancy. Oral direct thrombin inhibitors such as dabigatran and anti-factor Xa inhibitors such as rivaroxaban should generally be avoided during pregnancy. These agents may cross the placenta with possible adverse fetal effects. Thrombolysis in pregnancy is reserved for massive life-threatening pulmonary embolism with hemodynamic compromise or for proximal deep-vein thrombosis that is threatening leg viability. Caval filters are sometimes used in women who have recurrent pulmonary embolisms despite adequate anticoagulation or in whom anticoagulation is contraindicated, or in women in whom acute deep-vein thrombosis has developed close to the time of delivery.
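
The regimens above are all weight-based, so the dose arithmetic is simple. The sketch below, in Python, shows how those per-kilogram regimens translate into per-administration doses; the per-kilogram figures come from the paragraph above, while the example 80-kg weight and the script itself are hypothetical illustrations and not part of the article (real prescribing also depends on renal function, monitoring where used, and local protocols).

```python
# Illustrative only: weight-based LMWH regimens quoted in the article above.
# The example patient weight is hypothetical; actual prescribing requires
# clinical judgment (renal function, monitoring, local protocols).

REGIMENS = {
    "dalteparin (once daily)": (200, "IU"),     # 200 IU/kg once daily
    "dalteparin (twice daily)": (100, "IU"),    # 100 IU/kg twice daily
    "enoxaparin (once daily)": (1.5, "mg"),     # 1.5 mg/kg once daily
    "enoxaparin (twice daily)": (1.0, "mg"),    # 1 mg/kg twice daily
    "tinzaparin (once daily)": (175, "units"),  # 175 units/kg once daily
}

def weight_based_dose(weight_kg: float) -> dict:
    """Return the per-administration dose for each regimen at a given weight."""
    return {name: (per_kg * weight_kg, unit) for name, (per_kg, unit) in REGIMENS.items()}

if __name__ == "__main__":
    weight = 80.0  # hypothetical patient weight in kilograms
    for name, (dose, unit) in weight_based_dose(weight).items():
        print(f"{name}: {dose:g} {unit} per dose")
```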

Q: What adjustments are made to heparin therapy with the approach of labor and delivery?

A: Women should be advised to discontinue injections of heparin if labor starts or is suspected. Neuraxial anesthesia is usually deferred until at least 24 hours after the last dose, given a small risk of epidural hematoma associated with administration of neuraxial anesthesia before that time. After delivery, low-molecular-weight heparin should not be administered for at least 4 hours after spinal anesthesia or removal of an epidural catheter. After delivery, anticoagulant treatment is continued for at least 6 weeks, with a minimum total duration of 3 months.

Table 1. Summary of Recommendations for Which There Is Consensus, and Uncertainties and Variations in Guidelines, for the Management of Venous Thromboembolism in Pregnancy.

Idarucizumab for Dabigatran Reversal — The RE-VERSE AD Trial

Posted by Joshua Allen-Dicker • August 5th, 2015

Despite the increasing frequency of direct oral anticoagulant use, some clinicians may remain uncertain about their safety. Direct oral anticoagulants are easier to use, and for certain patients they may have a decreased risk of bleeding. But there is one major concern: how to rapidly reverse the effects of direct oral anticoagulants when there is an urgent indication for hemostasis. In this week’s issue of NEJM, Pollack et al. present early results that may ease physician doubts about using this new class of drugs.

RE-VERSE AD (Reversal Effects of Idarucizumab on Active Dabigatran) is an ongoing prospective cohort study on the use of idarucizumab to reverse the effects of the oral thrombin inhibitor dabigatran. Idarucizumab is a monoclonal antibody fragment that acts by binding dabigatran with very high affinity. This multicenter trial is enrolling two classes of patients taking dabigatran: (1) those who develop life-threatening bleeding judged to require anticoagulation reversal, and (2) those who require an urgent invasive procedure for which normal hemostasis is required.

All trial participants received 5 grams of intravenous idarucizumab. Blood samples were drawn at regular intervals to assess for percentage reversal of dabigatran, calculated via use of dilute thrombin time or ecarin clotting time. Adverse events and clinical endpoints relating to bleeding and hemodynamic stability were followed as well. While the study authors aim to enroll up to 300 patients, this week they report on interim results from the first 90 patients enrolled from June 2014 to February 2015.
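
The post notes that percentage reversal was calculated from the dilute thrombin time or ecarin clotting time but does not spell out the formula. As a hedged illustration only, the sketch below shows one common way such a reversal percentage can be derived, by comparing the pre-dose elevation of the clotting test above the upper limit of normal with its post-dose elevation; the function, variable names, and sample values are assumptions for demonstration, not the RE-VERSE AD protocol's published methods.

```python
# Illustrative sketch: percentage reversal of an anticoagulant effect inferred
# from a clotting test (e.g., dilute thrombin time). The formula and sample
# values are assumptions for demonstration, not the trial's published methods.

def percentage_reversal(pre_dose: float, post_dose: float, upper_limit_normal: float) -> float:
    """Fraction of the pre-dose elevation above normal that has been eliminated."""
    elevation_before = pre_dose - upper_limit_normal
    if elevation_before <= 0:
        return 100.0  # test was not prolonged to begin with
    elevation_after = max(post_dose - upper_limit_normal, 0.0)
    return 100.0 * (1.0 - elevation_after / elevation_before)

if __name__ == "__main__":
    # Hypothetical dilute thrombin times, in seconds.
    print(percentage_reversal(pre_dose=58.0, post_dose=40.0, upper_limit_normal=34.0))  # 75.0
```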

RE-VERSE AD found that the effects of dabigatran were completely reversed in 88 to 98% of anticoagulated patients receiving idarucizumab. For patients admitted with bleeding, the median time to cessation of bleeding was 11.4 hours. For those undergoing urgent procedures, 92% were reported to have normal intraoperative hemostasis after receiving idarucizumab. There were five thrombotic events.

When we think about how to treat the next patient on dabigatran who presents with gastrointestinal hemorrhage, RE-VERSE AD provides us with two points to consider. First, the clinical breakthrough: we may soon have a drug that has been shown to effectively reverse the effects of dabigatran. Second, the unalterable reality of studying this breakthrough: because of ethical considerations, RE-VERSE AD does not include a control group.

RE-VERSE AD provides us with groundbreaking results that have a high potential to inform clinical decision making, but with an asterisk. In the accompanying editorial, Dr. Ken Bauer expresses cautious optimism about the convincing effectiveness and safety profile of idarucizumab. However, to ensure ideal hemostatic outcomes as well as high-value idarucizumab utilization, he advocates for the development of clear clinical pathways and wider use of appropriate laboratory measurements of direct oral anticoagulant activity.

While continued work in this area may be needed, physicians considering prescribing dabigatran can begin to feel reassured that when serious bleeding occurs, an antibody that quickly reverses dabigatran’s effects may soon be available.

Chagas’ Disease

Posted by Carla Rothaus • July 31st, 2015

The infection of millions of people with the protozoan parasite Trypanosoma cruzi is a public health concern. A new review article explains how treatments for Chagas’ disease remain marginally effective, and cardiac and gastrointestinal complications dominate the clinical picture.

Chagas’ disease is caused by the protozoan parasite Trypanosoma cruzi, which is transmitted when the infected feces of the triatomine vector are inoculated through a bite site or through an intact mucous membrane of the mammalian host. Vectorborne transmission is limited to areas of North America, Central America, and South America. Both in endemic and in nonendemic areas, other infection routes include transfusion, organ and bone marrow transplantation, and congenital transmission. Infection is lifelong in the absence of effective treatment.

Clinical Pearls

How prevalent is Chagas’ disease?

Latin America has made substantial progress toward the control of Chagas’ disease. The estimated global prevalence of T. cruzi infection declined from 18 million in 1991, when the first regional control initiative began, to 5.7 million in 2010. Serologic screening for T. cruzi is conducted in most blood banks in endemic Latin American countries and the United States, and some countries have systematic screening for congenital Chagas’ disease. Nevertheless, Chagas’ disease remains the most important parasitic disease in the Western Hemisphere, with an estimated disease burden, as measured by disability-adjusted life-years, that is 7.5 times as great as that of malaria.

Figure 1. Life Cycle of Trypanosoma cruzi.

Figure 2. Estimated Prevalences of T. cruzi Infection.

• What cardiac and gastrointestinal sequelae are associated with chronic Trypanosoma cruzi infection?

The vast majority of acute infections are never detected. In persons who survive the acute phase, the cell-mediated immune response controls parasite replication, symptoms resolve spontaneously, and patent parasitemia disappears in 4 to 8 weeks. Persons then pass into the chronic phase of T. cruzi infection. Most persons remain asymptomatic but are infected for life. It is estimated that 20 to 30% of infected persons have progression over the course of years to decades to chronic Chagas’ cardiomyopathy. The earliest signs are typically conduction-system defects, especially right bundle-branch block or left anterior fascicular block. Chagas’ cardiomyopathy is highly arrhythmogenic and is characterized by sinus and junctional bradycardias, atrial fibrillation or flutter, atrioventricular blocks, and nonsustained or sustained ventricular tachycardia. Affected patients eventually have progression to dilated cardiomyopathy and congestive heart failure. Gastrointestinal Chagas’ disease is less frequent than Chagas’ cardiomyopathy; it predominantly affects the esophagus, the colon, or both and results from damage to intramural neurons. The manifestations of esophageal disease range from asymptomatic motility disorders and mild achalasia to severe megaesophagus.

Morning Report Questions

Q: How is Chagas’ disease diagnosed?

A: In the acute phase, motile trypomastigotes can be detected by means of microscopic examination of fresh anticoagulated blood or buffy coat. Parasites may also be visualized on blood smears stained with Giemsa or other stains and can be grown with the use of hemoculture in a specialized medium. Polymerase chain reaction (PCR) is a sensitive diagnostic tool in the acute phase and is the best test for early detection of infection in the recipient of an organ from an infected donor or after accidental exposure. The diagnosis of chronic infection relies on IgG serologic testing, most commonly with the use of an enzyme-linked immunosorbent assay (ELISA) or immunofluorescent antibody assay (IFA). No single assay for chronic T. cruzi infection has high enough sensitivity and specificity to be used alone; positive results of two tests, preferably based on different antigens (e.g., whole-parasite lysate and recombinant antigens), are required for confirmation. The sensitivity of PCR in the chronic phase of Chagas’ disease is highly variable and depends on specimen volume and processing, population characteristics, and PCR primers and methods. Negative PCR results do not prove that infection is absent.

Figure 3. Phases of Trypanosoma cruzi Infection.

Table 1. Diagnosis and Management of Chagas’ Disease.

Q: What therapies are available for Chagas’ disease, and who is a candidate for treatment?

A: Nifurtimox and benznidazole, the only drugs with proven efficacy against T. cruzi infection, are not currently approved by the Food and Drug Administration but can be obtained from the Centers for Disease Control and Prevention (CDC) and used under investigational protocols. Benznidazole, a nitroimidazole derivative, is considered to be the first-line treatment, on the basis of a better side-effect profile than nifurtimox, as well as a more extensive evidence base for efficacy. Until the 1990s, only the acute phase of the infection was thought to be responsive to antiparasitic therapy. However, in the 1990s, two placebo-controlled trials of benznidazole involving children with chronic T. cruzi infection showed cure rates of approximately 60%, on the basis of conversion to negative serologic test results 3 to 4 years after treatment. Follow-up studies have suggested that the earlier in life children are treated, the higher the rate of conversion from positive to negative results of serologic assays (negative seroconversion). Over the past 15 years, there has been a growing movement toward broader treatment of chronically infected adults, including those with early cardiomyopathy. Most experts now believe that the majority of patients with chronic T. cruzi infection should be offered treatment, with exclusion criteria such as an upper age limit of 50 or 55 years and the presence of advanced irreversible cardiomyopathy. This change in standards of practice is based in part on nonrandomized, nonblinded longitudinal studies that have shown significantly decreased progression of cardiomyopathy and a trend toward decreased mortality among adults treated with benznidazole, as compared with untreated patients.

Table 2. Dosage Regimens and Frequencies of Adverse Effects Associated with Benznidazole and Nifurtimox Use.

Community-Acquired Pneumonia

Posted by Carla Rothaus • July 31st, 2015

The etiology of community-acquired pneumonia requiring hospitalization in adults is evolving, in light of vaccine deployment and new diagnostic tests. A new Original Article defines pathogens potentially causing pneumonia. In a majority of cases, no pathogen was identified.

Community-acquired pneumonia is a leading infectious cause of hospitalization and death among U.S. adults. The last U.S. population-based incidence estimates of hospitalization due to community-acquired pneumonia were made in the 1990s, before the availability of the pneumococcal conjugate vaccine and more sensitive molecular and antigen-based laboratory diagnostic tests. Thus, contemporary population-based etiologic studies involving U.S. adults with pneumonia are needed. The Centers for Disease Control and Prevention (CDC) Etiology of Pneumonia in the Community (EPIC) study was a large, contemporary, prospective, population-based study of community-acquired pneumonia in hospitalized adults in the United States.

Clinical Pearls

• What is the annual incidence of community-acquired pneumonia requiring hospitalization in the United States?

In the EPIC study, among 2320 adults with radiographic evidence of pneumonia, 2061 (89%) were enrolled from July 1, 2010, to June 30, 2012. The annual incidence of community-acquired pneumonia requiring hospitalization was 24.8 cases (95% confidence interval, 23.5 to 26.1) per 10,000 adults. Overall, and for each pathogen, the incidence rose with increasing age. The estimated incidences of hospitalization for pneumonia among adults 50 to 64 years of age, 65 to 79 years of age, and 80 years of age or older were approximately 4, 9, and 25 times as high, respectively, as the incidence among adults 18 to 49 years of age.

Figure 3. Estimated Annual Pathogen-Specific Incidence Rates of Community-Acquired Pneumonia Requiring Hospitalization, According to Age Group.

Table 2. Estimated Annual Incidence Rates of Hospitalization for Community-Acquired Pneumonia, According to Year of Study, Study Site, Age Group, and Pathogen Detected.

• How often is the causative pathogen identified in an adult hospitalized for community-acquired pneumonia?

The EPIC study found that despite efforts to use more sensitive and specific diagnostic methods than had been available previously, pathogens were detected in only 38% of adults. Possible reasons for so few detections include an inability to obtain lower respiratory tract specimens, antibiotic use before specimen collection, insensitive diagnostic tests for known pathogens, a lack of testing for other recognized pathogens (e.g., Coxiella), unidentified pathogens, and possible noninfectious causes (e.g., aspiration pneumonitis). The low pathogen-detection yield among adults who were hospitalized for pneumonia highlights the need for more sensitive diagnostic methods and innovative discovery of pathogens.

Morning Report Questions

Q: What are the most commonly identified pathogens in adults hospitalized with community-acquired pneumonia?

A: In the EPIC study, diagnostic results for both bacteria and viruses were available for 2259 adults (97%) with radiographic evidence of pneumonia. A pathogen was detected in 853 of these patients (38%). Viruses were detected in 27% and bacteria in 14%. Human rhinovirus, influenza virus, and Streptococcus pneumoniae were the most commonly detected pathogens, with the highest burden occurring among older adults. Human rhinovirus was the most commonly detected virus in patients with pneumonia but was rarely detected in asymptomatic controls, a finding similar to that in other studies. Although the understanding of human rhinovirus remains incomplete, these data suggest a role for human rhinovirus in adult pneumonia. Influenza virus was the second most common pathogen detected, despite mild influenza seasons during the study. The incidences of influenza and of Streptococcus pneumoniae were almost 5 times as high among adults 65 years of age or older than among younger adults, and the incidence of human rhinovirus was almost 10 times as high among adults 65 years of age or older than among younger adults. The incidence of influenza virus was almost twice that of any other pathogen (except for human rhinovirus) among adults 80 years of age or older, which underscores the need for improvements in influenza-vaccine uptake and effectiveness. Together, human metapneumovirus, respiratory syncytial virus, parainfluenza viruses, coronaviruses, and adenovirus were detected in 13% of the patients, a proportion similar to those found in other polymerase chain reaction (PCR)-based etiologic studies of pneumonia in adults (11 to 28%). The study adds to the growing evidence of the contribution of viruses to hospitalizations of adults, highlighting the usefulness of molecular methods for the detection of respiratory pathogens.

Figure 2. Pathogen Detection among U.S. Adults with Community-Acquired Pneumonia Requiring Hospitalization, 2010-2012.

Q: Are certain pathogens more commonly detected in cases of community-acquired pneumonia requiring admission to the intensive care unit (ICU)?

A: Three pathogens were detected more commonly in patients in the ICU than in patients not in the ICU: S. pneumoniae (8% vs. 4%), Staphylococcus aureus (5% vs. 1%), and Enterobacteriaceae (3% vs. 1%) (P<0.001 for all comparisons).

Effect of Therapeutic Hypothermia in Deceased Organ Donors on Delayed Graft Function

Posted by Andrea Merrill • July 29th, 2015

While many think of surgery as messy, gory, or even medieval at times, there are certain operations that are simply beautiful to observe and perform, requiring both elegance and finesse. For me, kidney transplants fall into this category, demanding gentle handling of the arterial, venous, and ureteral anastomoses by the surgeon with the help of special magnifying lenses (surgical loupes).

The first time I had the privilege to participate as a surgeon in a kidney transplant was also my most memorable. Mr. X was in his 50s with end-stage renal disease from IgA nephropathy. He had been on dialysis and waiting for a kidney transplant for several years. He was finally called in, late one Sunday night, to receive the kidney that would unchain him from the thrice-weekly dialysis sessions essential to keep him alive. I briefly introduced myself, performed a routine physical exam, and then pulled out a consent form for him to sign before surgery. He would be getting a kidney from a donor after neurologic determination of death (formerly referred to as a brain-dead donor). In fact, it was a kidney I had helped harvest earlier that morning, from a brave adolescent girl who had suffered brain death from what had become fatal status asthmaticus. There was a pervasive somber mood in the operating room that morning as we removed her liver and kidneys, moving quickly to reduce cold ischemia time.

Although the grief of that morning was still present several hours later, there I was offering life, or at least an improved quality of life, to this grateful man. The operation went well, and I was allowed to carefully sew the delicate veno-venous anastomosis. I rotated off service the next day but continued to follow my patient from afar. Two days later, I was dismayed to find out that, despite our careful surgical dexterity, he had experienced delayed graft function requiring dialysis. His new kidney eventually began functioning, but only after several rounds of dialysis and an extended postoperative course.

Unfortunately, delayed graft function is not uncommon in patients who receive kidneys from deceased donors. Given the long wait-list times and shortage of organs, many patients opt to receive kidneys from deceased donors after neurologic determination of death, from donors after cardiac death and, in some cases, from expanded-criteria donors (any donor > 60 years of age, or any donor > 50 years of age with 2 of the following: a history of hypertension, a creatinine level > 1.5 mg/dL, or death resulting from a stroke). Efforts to improve outcomes in kidney transplantation from deceased donors are needed, given the ever-growing number of patients awaiting a transplant.

An article by Niemann et al. in this week’s issue of NEJM details the results of a randomized controlled trial that studied the effect of therapeutic hypothermia on delayed graft function in patients who received kidneys from donors after neurologic determination of death.  Current organ donor management protocols require normothermia during harvesting of organs; however, retrospective studies of hypothermia in cardiac arrest patients have demonstrated renal protection. Thus, the authors of this study hypothesized that hypothermia in organ donors might improve outcomes in deceased donor transplants.

The investigators randomized donors in two organ donation service areas (in California and Nevada). Donors were assigned by computer randomization to either mild hypothermia (34-35°C) or normothermia (36.5-37.5°C), stratified according to organ procurement organization, standard- or expanded-criteria donor status, and whether or not they had received therapeutic hypothermia prior to neurologic death. The primary outcome was delayed graft function, defined as the need for dialysis during the first week after transplantation.

In total, 370 donors were randomized: 180 were assigned to hypothermia and 190 to normothermia. A total of 583 kidneys were transplanted, 290 from the hypothermia group and 293 from the normothermia group. Baseline donor characteristics were similar between the two groups. Additionally, recipient characteristics known to affect outcomes were balanced between the two groups. The trial was stopped early because of overwhelming efficacy after a preplanned interim analysis.

Delayed graft function occurred in about 40% of transplants in the normothermia group, in contrast to just over 28% in the hypothermia group (a statistically significant difference, p=0.008). The primary efficacy analysis used a multivariable logistic-regression model and found that hypothermia significantly reduced the risk of delayed graft function by about 40%.

This difference was more pronounced in a preplanned subgroup analysis of expanded-criteria donors. Among recipients of expanded-criteria donor kidneys, delayed graft function occurred in about 30% of the hypothermia group compared with just over 50% of the normothermia group, representing a risk reduction of about 70%. While there was also less delayed graft function in the standard-criteria hypothermia group, the difference was not statistically significant. Only a small number of recipients (11) received dual kidneys, but there was a significant advantage in those who received kidneys from the hypothermia group (0% delayed graft function vs. about 80%). Adverse events were minimal, with no differences between the two arms.

The accompanying editorial by Jochmans and Watson notes several concerns and limitations. While immediate outcomes are improved, longer-term outcomes such as acute rejection or graft survival were not studied. Additionally, outcomes of the other organs transplanted (livers and pancreases) were not reported. Regardless of these limitations, the Niemann et al. study is important, and the editorialists applaud the authors for a potentially simple, cheap, and non-technological intervention in the organ donor “that can have dramatic therapeutic effects” in the recipient. One hopes that mild hypothermia is a new step toward improving outcomes in transplants for every Mr. and Ms. X on the organ waiting list.

 

Irradiation in Early-Stage Breast Cancer

Posted by Carla Rothaus • July 24th, 2015

Women with breast cancer who were undergoing breast-conserving surgery were randomly assigned to receive whole-breast irradiation with or without regional nodal irradiation. At 10 years, disease-free survival was improved in the nodal-irradiation group, but overall survival was not.

Many women with early-stage breast cancer undergo breast-conserving surgery followed by whole-breast irradiation, which reduces the rate of local recurrence. Radiotherapy to the chest wall and regional lymph nodes, termed regional nodal irradiation, is commonly used after mastectomy in women with node-positive breast cancer who are treated with adjuvant systemic therapy; it reduces locoregional and distant recurrence and improves overall survival. An unanswered question is whether the addition of regional nodal irradiation to whole-breast irradiation after breast-conserving surgery has the same effect.

Clinical Pearls

Does the addition of regional nodal irradiation to whole-breast irradiation after breast-conserving surgery prolong survival in early-stage breast cancer?

In the study by Whelan et al., eligible patients were women with invasive carcinoma of the breast who were treated with breast-conserving surgery and sentinel-lymph-node biopsy or axillary-node dissection and who had positive axillary lymph nodes or negative axillary nodes with high-risk features. The study randomly assigned women to undergo either whole-breast irradiation plus regional nodal irradiation (including internal mammary, supraclavicular, and axillary lymph nodes) (nodal-irradiation group) or whole-breast irradiation alone (control group). There was no significant between-group difference in overall survival, with 10-year rates of survival of 82.8% in the nodal-irradiation group and 81.8% in the control group (hazard ratio, 0.91; 95% confidence interval [CI], 0.72 to 1.13; P=0.38). Moreover, no significant difference was detected in breast-cancer mortality, with 10-year rates of 10.3% in the nodal-irradiation group and 12.3% in the control group (hazard ratio, 0.80; 95% CI, 0.61 to 1.05; P=0.11).

Table 1. Characteristics of the Patients at Baseline.

Figure 1. 10-Year Kaplan-Meier Estimates of Survival.

Does the addition of regional nodal irradiation to whole-breast irradiation after breast-conserving surgery prolong disease-free survival in early-stage breast cancer?

In the Whelan study, the rate of disease-free survival was higher in the nodal-irradiation group than in the control group, with 10-year rates of 82.0% and 77.0%, respectively (hazard ratio, 0.76; 95% CI, 0.61 to 0.94; P=0.01). The 10-year rates of isolated locoregional disease-free survival were 95.2% in the nodal-irradiation group and 92.2% in the control group (hazard ratio, 0.59; 95% CI, 0.39 to 0.88; P=0.009). The rates of distant disease-free survival at 10 years were 86.3% in the nodal-irradiation group and 82.4% in the control group (hazard ratio, 0.76; 95% CI, 0.60 to 0.97; P=0.03).

Figure 2. Disease-free Survival at 10 Years, According to Subgroup.

Table 2. Disease Recurrence or Death.

Morning Report Questions

Q: Are there increases in the rates of adverse events with the addition of regional nodal irradiation in early-stage breast cancer?

A: For acute events (those occurring 3 months or less after the completion of radiation), significant increases in the rates of radiation dermatitis and pneumonitis were reported in the nodal-irradiation group. For delayed events (those occurring more than 3 months after the completion of radiation), there were significant increases in the rates of lymphedema, telangiectasia of the skin, and subcutaneous fibrosis in the nodal-irradiation group. No increases in rates of brachial neuropathy, cardiac disease, or second cancers were observed in the nodal-irradiation group, but the period of follow-up was not sufficiently long to rule out a difference in the rate of second cancers.

Table 3. Adverse Events of Grade 2 or Higher.

Q: Were there any subgroups in the Whelan study that appeared to particularly benefit from the addition of regional nodal irradiation?

A: Although subgroup analyses were prespecified, they were generally not adequately powered to assess the benefit of treatment in different subgroups. Furthermore, the P values of the subgroup analyses were not adjusted for multiple testing. Patients with ER-negative or PR-negative tumors appeared to benefit more from regional nodal irradiation than those with ER-positive or PR-positive tumors. Although this effect was not observed in previous trials of postmastectomy radiation therapy, it supports the hypothesis that further research on the molecular characterization of the primary tumor may identify patients who are more likely to benefit from regional nodal irradiation. Since the number of node-negative patients in this trial was relatively small, the application of the study results to node-negative patients is unclear.

Gallbladder Disease

Posted by Carla Rothaus • July 24th, 2015

A new review summarizes recent innovations in the approaches to gallbladder disease, including laparoscopic cholecystectomy, cholecystectomy with natural orifice transluminal endoscopic surgery, percutaneous cholecystostomy, and peroral endoscopic gallbladder drainage.

Cholecystectomy is a well-established and frequently performed procedure. The demand for safer and less-invasive interventions continues to promote innovations in the management of gallbladder disease.

Clinical Pearls

• Describe laparoscopic approaches that are less invasive than standard laparoscopic cholecystectomy.

In single-incision laparoscopic cholecystectomy, one large, transumbilical, multi-instrument port is used instead of four incisions, leaving only a periumbilical scar. Theoretical but unproven advantages include improved cosmesis and reductions in postoperative pain, recovery time, and wound-related adverse events. Another less invasive technique, mini-laparoscopy, involves the use of access ports and instruments with a small diameter (2 to 5 mm). The cosmetic results are better than with standard laparoscopic cholecystectomy, but randomized trials have not shown other advantages. Single-incision laparoscopic and mini-laparoscopic cholecystectomy have failed to gain widespread acceptance because the techniques are more challenging to learn, and the procedures prolong operative time and increase costs.

Figure 1. Comparison of Access-Port Locations and Diameters for Conventional Laparoscopic Cholecystectomy, Mini-Laparoscopic Cholecystectomy, and Single-Incision Laparoscopic Cholecystectomy.

• What is the advantage of natural orifice transluminal endoscopic surgery (NOTES) cholecystectomy as compared with the laparoscopic approach?

NOTES is a technique in which surgery is performed through a naturally existing orifice and does not leave a cutaneous scar. It is typically performed by means of transgastric or transvaginal access with the use of flexible or rigid endoscopes, alone or in combination with limited laparoscopic access (which is known as hybrid NOTES). A major advantage of NOTES over laparoscopic approaches is the fact that removal of the resected gallbladder does not require an incision in the abdominal wall, which can be a source of postoperative pain and complications in wound healing. The procedure has been performed only a few thousand times, most often through the transvaginal route in patients without acute cholecystitis. Outcomes have been similar to those achieved with laparoscopic cholecystectomy, although NOTES is associated with a better aesthetic outcome, a shorter recovery time, and less pain. NOTES requires special equipment and is technically very difficult. Consequently, adoption of this technique has been limited to a few select medical centers.

Figure 2. Sites of Entry for Cholecystectomy with Natural Orifice Transluminal Endoscopic Surgery (NOTES).

Morning Report Questions

Q: Is there an alternative to percutaneous drainage of the gallbladder for patients who are not surgical candidates?

A: Transpapillary drainage of the gallbladder, which was first reported more than 25 years ago, follows the standard procedure for cannulation of the bile duct with the use of endoscopic retrograde cholangiography. A guidewire is advanced through the cystic duct and into the gallbladder. One end of a pigtail stent is deployed within the gallbladder, and the other end is either brought out through a nasobiliary catheter that exits through the nose or left to drain internally within the duodenum (double pigtail stent). The procedure is helpful in patients with symptomatic cholelithiasis who are not good candidates for percutaneous therapy or surgery, particularly those with advanced liver disease, ascites, or coagulopathy. The use of transpapillary drainage of the gallbladder is limited by the technical difficulty of advancing a guidewire from a retrograde position through the cystic duct, which is often long, narrow, and tortuous and is sometimes occluded by an impacted gallstone. In addition, the cystic duct can accommodate only small-caliber plastic stents (5 to 7 French), which are prone to occlusion with biofilm.

Q: Is transpapillary drainage the only alternative to percutaneous cholecystostomy?

A: The most recent alternative to percutaneous cholecystostomy is transmural endoscopic ultrasound-guided gallbladder drainage, which was described in 2007. The gallbladder is usually closely apposed to the gastrointestinal tract and is conspicuous on endosonography. The use of Doppler imaging allows the endoscopist to avoid vessels while introducing the needle into the gallbladder. A guidewire is then positioned within the gallbladder, which allows for the deployment of transnasal drainage catheters or internal stents. Although assessments of endoscopic ultrasound-guided gallbladder drainage have been limited to small studies conducted at expert centers, the procedure has been effective in the treatment of more than 95% of high-risk surgical patients who have acute cholecystitis. The development of endoscopic transmural access to the gallbladder introduces new questions, such as whether the stent can be easily removed, as well as whether the stent should be removed and, if so, when it should be removed.

Figure 3. Peroral Endoscopic Approaches to Gallbladder Drainage.

Table 2. Advantages and Disadvantages of Interventional Approaches to Symptomatic Gallbladder Disease.

Figure 4. Procedures for the Treatment of Symptomatic Gallbladder Disease, Stratified According to Patient Operative Status and Disease Severity.

 

Take the Fluids and Electrolytes Challenge

Posted by Jennifer Zeis • July 22nd, 2015

A 28-year-old man presents with diabetic ketoacidosis after an influenza-like illness. Lab values include: sodium 144 mmol/L, potassium 5.7 mmol/L, chloride 98 mmol/L, sodium bicarbonate 13 mmol/L, creatinine 1.5 mg/dL, BUN 30 mg/dL, glucose 702 mg/dL, and venous pH 7.2. What is the best strategy to support this patient?
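
Before weighing in, many readers will run a bit of bedside arithmetic on these labs. The snippet below applies two standard formulas, the serum anion gap and the glucose-corrected sodium, to the values listed above; the formulas are conventional, but the snippet itself is illustrative and not part of the review article, and the sodium correction factor (1.6 vs. 2.4 mmol/L per 100 mg/dL of glucose above 100) varies between sources.

```python
# Bedside arithmetic for the case above (illustrative; not from the review article).

def anion_gap(na: float, cl: float, hco3: float) -> float:
    """Serum anion gap in mmol/L: Na - (Cl + HCO3)."""
    return na - (cl + hco3)

def corrected_sodium(na: float, glucose_mg_dl: float, factor: float = 1.6) -> float:
    """Sodium corrected for hyperglycemia; the factor (1.6 vs. 2.4) varies by source."""
    return na + factor * max(glucose_mg_dl - 100.0, 0.0) / 100.0

if __name__ == "__main__":
    na, cl, hco3, glucose = 144.0, 98.0, 13.0, 702.0
    print(f"anion gap: {anion_gap(na, cl, hco3):.0f} mmol/L")                        # 33, markedly elevated
    print(f"corrected Na (1.6 factor): {corrected_sodium(na, glucose):.1f} mmol/L")  # ~153.6
```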

Take the poll and comment now. On Twitter use #NEJMCases.

Find the answers in the review article, “Electrolyte and Acid-Base Disturbances in Patients with Diabetes Mellitus,” to be published on August 6.