Chagas’ Disease

Posted by Carla Rothaus • July 31st, 2015

The infection of millions of people with the protozoan parasite Trypanosoma cruzi is a public health concern. A new review article explains that treatments for Chagas’ disease remain only marginally effective and that cardiac and gastrointestinal complications dominate the clinical picture.

Chagas’ disease is caused by the protozoan parasite Trypanosoma cruzi, which is transmitted when the infected feces of the triatomine vector are inoculated through a bite site or through an intact mucous membrane of the mammalian host. Vectorborne transmission is limited to areas of North America, Central America, and South America. Both in endemic and in nonendemic areas, other infection routes include transfusion, organ and bone marrow transplantation, and congenital transmission. Infection is lifelong in the absence of effective treatment.

Clinical Pearls

How prevalent is Chagas’ disease?

Latin America has made substantial progress toward the control of Chagas’ disease. The estimated global prevalence of T. cruzi infection declined from 18 million in 1991, when the first regional control initiative began, to 5.7 million in 2010. Serologic screening for T. cruzi is conducted in most blood banks in endemic Latin American countries and the United States, and some countries have systematic screening for congenital Chagas’ disease. Nevertheless, Chagas’ disease remains the most important parasitic disease in the Western Hemisphere, with an estimated disease burden, as measured by disability-adjusted life-years, that is 7.5 times as great as that of malaria.

Figure 1. Life Cycle of Trypanosoma cruzi.

Figure 2. Estimated Prevalences of T. cruzi Infection.

What cardiac and gastrointestinal sequelae are associated with chronic Trypanosoma cruzi infection?

The vast majority of acute infections are never detected. In persons who survive the acute phase, the cell-mediated immune response controls parasite replication, symptoms resolve spontaneously, and patent parasitemia disappears in 4 to 8 weeks. Persons then pass into the chronic phase of T. cruzi infection. Most remain asymptomatic but are infected for life. An estimated 20 to 30% of infected persons progress, over the course of years to decades, to chronic Chagas’ cardiomyopathy. The earliest signs are typically conduction-system defects, especially right bundle-branch block or left anterior fascicular block. Chagas’ cardiomyopathy is highly arrhythmogenic and is characterized by sinus and junctional bradycardias, atrial fibrillation or flutter, atrioventricular blocks, and nonsustained or sustained ventricular tachycardia. Affected patients eventually have progression to dilated cardiomyopathy and congestive heart failure. Gastrointestinal Chagas’ disease is less frequent than Chagas’ cardiomyopathy; it predominantly affects the esophagus, colon, or both and results from damage to intramural neurons. The manifestations of esophageal disease range from asymptomatic motility disorders and mild achalasia to severe megaesophagus.

Morning Report Questions

Q: How is Chagas’ disease diagnosed?

A: In the acute phase, motile trypomastigotes can be detected by means of microscopic examination of fresh anticoagulated blood or buffy coat. Parasites may also be visualized on blood smears stained with Giemsa or other stains and can be grown with the use of hemoculture in a specialized medium. Polymerase chain reaction (PCR) is a sensitive diagnostic tool in the acute phase and is the best test for early detection of infection in the recipient of an organ from an infected donor or after accidental exposure. The diagnosis of chronic infection relies on IgG serologic testing, most commonly with the use of an enzyme-linked immunosorbent assay (ELISA) or immunofluorescent antibody assay (IFA). No single assay for chronic T. cruzi infection has high enough sensitivity and specificity to be used alone; positive results of two tests, preferably based on different antigens (e.g., whole-parasite lysate and recombinant antigens), are required for confirmation. The sensitivity of PCR in the chronic phase of Chagas’ disease is highly variable and depends on specimen volume and processing, population characteristics, and PCR primers and methods. Negative PCR results do not prove that infection is absent.

Figure 3. Phases of Trypanosoma cruzi Infection.


Table 1. Diagnosis and Management of Chagas’ Disease.

Q: What therapies are available for Chagas’ disease, and who is a candidate for treatment?

A: Nifurtimox and benznidazole, the only drugs with proven efficacy against T. cruzi infection, are not currently approved by the Food and Drug Administration but can be obtained from the CDC and used under investigational protocols. Benznidazole, a nitroimidazole derivative, is considered to be the first-line treatment, on the basis of a better side-effect profile than nifurtimox, as well as a more extensive evidence base for efficacy. Until the 1990s, only the acute phase of the infection was thought to be responsive to antiparasitic therapy. However, in the 1990s, two placebo-controlled trials of benznidazole involving children with chronic T. cruzi infection showed cure rates of approximately 60%, on the basis of conversion to negative serologic test results 3 to 4 years after treatment. Follow-up studies have suggested that the earlier in life children are treated, the higher the rate of conversion from positive to negative results of serologic assays (negative seroconversion). Over the past 15 years, there has been a growing movement toward broader treatment of chronically infected adults, including those with early cardiomyopathy. Most experts now believe that the majority of patients with chronic T. cruzi infection should be offered treatment, with exclusion criteria such as an upper age limit of 50 or 55 years and the presence of advanced irreversible cardiomyopathy. This change in standards of practice is based in part on nonrandomized, nonblinded longitudinal studies that have shown significantly decreased progression of cardiomyopathy and a trend to decreased mortality among adults treated with benznidazole, as compared with untreated patients.

Table 2. Dosage Regimens and Frequencies of Adverse Effects Associated with Benznidazole and Nifurtimox Use.

Community-Acquired Pneumonia

Posted by Carla Rothaus • July 31st, 2015

The etiology of community-acquired pneumonia requiring hospitalization in adults is evolving in light of vaccine deployment and new diagnostic tests. A new Original Article defines the pathogens potentially causing pneumonia. In a majority of cases, no pathogen was identified.

Community-acquired pneumonia is a leading infectious cause of hospitalization and death among U.S. adults. The last U.S. population-based incidence estimates of hospitalization due to community-acquired pneumonia were made in the 1990s, before the availability of the pneumococcal conjugate vaccine and more sensitive molecular and antigen-based laboratory diagnostic tests. Thus, contemporary population-based etiologic studies involving U.S. adults with pneumonia are needed. The Centers for Disease Control and Prevention (CDC) Etiology of Pneumonia in the Community (EPIC) study was a large, contemporary, prospective, population-based study of community-acquired pneumonia in hospitalized adults in the United States.

Clinical Pearls

• What is the annual incidence of community-acquired pneumonia requiring hospitalization in the United States?

In the EPIC study, among 2320 adults with radiographic evidence of pneumonia, 2061 (89%) were enrolled from July 1, 2010, to June 30, 2012. The annual incidence of community-acquired pneumonia requiring hospitalization was 24.8 cases (95% confidence interval, 23.5 to 26.1) per 10,000 adults. Overall and for each pathogen, the incidence rose with increasing age. The estimated incidences of hospitalization for pneumonia among adults 50 to 64 years of age, 65 to 79 years of age, and 80 years of age or older were approximately 4, 9, and 25 times as high, respectively, as the incidence among adults 18 to 49 years of age.

Figure 3. Estimated Annual Pathogen-Specific Incidence Rates of Community-Acquired Pneumonia Requiring Hospitalization, According to Age Group.

Table 2. Estimated Annual Incidence Rates of Hospitalization for Community-Acquired Pneumonia, According to Year of Study, Study Site, Age Group, and Pathogen Detected.

• How often is the causative pathogen identified in an adult hospitalized for community-acquired pneumonia?

The EPIC study found that despite efforts to use more sensitive and specific diagnostic methods than had been available previously, pathogens were detected in only 38% of adults. Possible reasons for so few detections include an inability to obtain lower respiratory tract specimens, antibiotic use before specimen collection, insensitive diagnostic tests for known pathogens, a lack of testing for other recognized pathogens (e.g., Coxiella), unidentified pathogens, and possible noninfectious causes (e.g., aspiration pneumonitis). The low pathogen-detection yield among adults who were hospitalized for pneumonia highlights the need for more sensitive diagnostic methods and innovative discovery of pathogens.

Morning Report Questions

Q: What are the most commonly identified pathogens in adults hospitalized with community-acquired pneumonia?

A: In the EPIC study, diagnostic results for both bacteria and viruses were available for 2259 adults (97%) with radiographic evidence of pneumonia. A pathogen was detected in 853 of these patients (38%). Viruses were detected in 27% and bacteria in 14%. Human rhinovirus, influenza virus, and Streptococcus pneumoniae were the most commonly detected pathogens, with the highest burden occurring among older adults. Human rhinovirus was the most commonly detected virus in patients with pneumonia but was rarely detected in asymptomatic controls, a finding similar to that in other studies. Although the understanding of human rhinovirus remains incomplete, these data suggest a role for human rhinovirus in adult pneumonia. Influenza virus was the second most common pathogen detected, despite mild influenza seasons during the study. The incidences of influenza and of S. pneumoniae were almost 5 times as high among adults 65 years of age or older than among younger adults, and the incidence of human rhinovirus was almost 10 times as high among adults 65 years of age or older than among younger adults. The incidence of influenza virus was almost twice that of any other pathogen (except for human rhinovirus) among adults 80 years of age or older, which underscores the need for improvements in influenza-vaccine uptake and effectiveness. Together, human metapneumovirus, respiratory syncytial virus, parainfluenza viruses, coronaviruses, and adenovirus were detected in 13% of the patients, a proportion similar to those found in other polymerase chain reaction (PCR)-based etiologic studies of pneumonia in adults (11 to 28%). The study adds to the growing evidence of the contribution of viruses to hospitalizations of adults, highlighting the usefulness of molecular methods for the detection of respiratory pathogens.
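As an aside (not part of the original article summary), the quoted counts are easy to sanity-check, and the fact that the virus and bacteria percentages sum to more than the overall 38% is consistent with a small group of patients with both detected:

```python
# Sanity check on the detection figures quoted above (numbers taken from
# the post; the percentages are rounded).
tested = 2259    # adults with both bacterial and viral results available
detected = 853   # patients in whom any pathogen was detected

pct_detected = round(100 * detected / tested)
print(pct_detected)  # 38, matching the reported 38%

# Viruses (27%) plus bacteria (14%) sum to more than 38% because the
# categories overlap: some patients had both a virus and a bacterium.
print(27 + 14 - pct_detected)  # 3 -> roughly 3% with co-detections
```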

Figure 2. Pathogen Detection among U.S. Adults with Community-Acquired Pneumonia Requiring Hospitalization, 2010-2012.

Q: Are certain pathogens more commonly detected in cases of community-acquired pneumonia requiring admission to the intensive care unit (ICU)?

A: Three pathogens were detected more commonly in patients in the ICU than in patients not in the ICU: S. pneumoniae (8% vs. 4%), Staphylococcus aureus (5% vs. 1%), and Enterobacteriaceae (3% vs. 1%) (P<0.001 for all comparisons).

Outcomes of Therapeutic Hypothermia in Deceased Organ Donors on Delayed Graft Function

Posted by Andrea Merrill • July 29th, 2015

While many think of surgery as messy, gory, or even medieval at times, there are certain operations that are simply beautiful to observe and perform, requiring both elegance and finesse. For me, kidney transplants fall into this category, demanding gentle handling of the arterial, venous, and ureteral anastomoses by the surgeon with the help of special magnifying lenses (surgical loupes).

The first time I had the privilege to participate as a surgeon in a kidney transplant was also my most memorable. Mr. X was in his 50s with end-stage renal disease from IgA nephropathy. He had been on dialysis and waiting for a kidney transplant for several years. He was finally called in, late one Sunday night, to receive the kidney that would unchain him from the thrice-weekly dialysis sessions essential to keep him alive. I briefly introduced myself, performed a routine physical exam and then pulled out a consent form for him to sign before surgery. He would be getting a kidney from a donor after neurologic determination of death (formerly called a brain-dead donor). In fact, it was a kidney I had helped harvest earlier that morning, from a brave adolescent girl who had suffered brain death from what had become fatal status asthmaticus. There was a pervasive somber mood in the operating room that morning as we removed her liver and kidneys, moving quickly to reduce cold ischemia time.

Although the grief of that morning was still present several hours later, there I was offering life, or at least an improved quality of life, to this grateful man. The operation went well, and I was allowed to carefully sew the delicate veno-venous anastomosis. I rotated off service the next day but continued to follow my patient from afar. Two days later, I was dismayed to find out that, despite our careful surgical dexterity, he had experienced delayed graft function requiring dialysis. His new kidney eventually began functioning, but only after several rounds of dialysis and an extended postoperative course.

Unfortunately, delayed graft function is not uncommon in patients who receive kidneys from deceased donors. Given the long wait-list times and shortage of organs, many patients opt to receive kidneys from deceased donors after neurologic determination of death, from donors after cardiac death, and, in some cases, from extended-criteria donors (any donor older than 60 years of age, or older than 50 with two of the following: a history of hypertension, a serum creatinine level greater than 1.5 mg/dL, or death resulting from a stroke). Efforts to improve outcomes in kidney transplantation from deceased donors are needed, given the ever-growing number of patients awaiting a transplant.

An article by Niemann et al. in this week’s issue of NEJM details the results of a randomized controlled trial that studied the effect of therapeutic hypothermia on delayed graft function in patients who received kidneys from donors after neurologic determination of death.  Current organ donor management protocols require normothermia during harvesting of organs; however, retrospective studies of hypothermia in cardiac arrest patients have demonstrated renal protection. Thus, the authors of this study hypothesized that hypothermia in organ donors might improve outcomes in deceased donor transplants.

The investigators randomized donors in two organ donation service areas, in California and Nevada. Donors were assigned by computer randomization to either mild hypothermia (34-35°C) or normothermia (36.5-37.5°C), stratified according to organ procurement organization, standard- or expanded-criteria donor status, and whether or not they had received therapeutic hypothermia prior to neurologic death. The primary outcome was delayed graft function, defined as the need for dialysis during the first week after transplantation.

In total, 370 donors were randomized: 180 assigned to hypothermia and 190 to normothermia. A total of 583 kidneys were transplanted, 290 from the hypothermia group and 293 from the normothermia group. Baseline donor characteristics were similar between the two groups, and recipient characteristics known to affect outcomes were also balanced. The trial was stopped early because of overwhelming efficacy following a preplanned interim analysis.

Delayed graft function occurred in about 40% of transplants in the normothermia group, as compared with just over 28% in the hypothermia group (a statistically significant difference, P=0.008). The primary efficacy analysis used a multivariable logistic-regression model and found that hypothermia significantly reduced the odds of delayed graft function by about 40%.
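As a back-of-the-envelope check (not part of the post), the quoted rates imply an unadjusted odds ratio close to the roughly 40% reduction reported; the trial's own estimate came from a multivariable logistic model, so this is only a rough consistency check:

```python
# Unadjusted odds ratio for delayed graft function, computed from the
# rounded rates quoted in the post (~40% normothermia vs. ~28% hypothermia).
p_normothermia = 0.40
p_hypothermia = 0.28

odds_normothermia = p_normothermia / (1 - p_normothermia)
odds_hypothermia = p_hypothermia / (1 - p_hypothermia)
odds_ratio = odds_hypothermia / odds_normothermia

print(round(odds_ratio, 2))      # 0.58
print(round(1 - odds_ratio, 2))  # 0.42, close to the ~40% reduction cited
```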

This difference was more pronounced in the preplanned subgroup analysis of extended-criteria donors. Among recipients of extended-criteria donor kidneys, delayed graft function occurred in about 30% of the hypothermia group, as compared with just over 50% of the normothermia group, representing a reduction in the odds of delayed graft function of about 70%. While there was also less delayed graft function in the standard-criteria hypothermia group, the difference was not statistically significant. Only a small number of recipients (11) received dual kidneys, but there was a significant advantage among those who received kidneys from the hypothermia group (0% delayed graft function vs. about 80%). There were minimal adverse events and no differences between the two arms.

The accompanying editorial by Jochmans and Watson notes several concerns and limitations. While immediate outcomes are improved, longer-term outcomes such as acute rejection or graft survival were not studied. Additionally, outcomes of the other organs transplanted (livers and pancreases) were not reported. Regardless of these limitations, the Niemann et al. study is important, and the editorialists applaud the authors for a potentially simple, cheap, and non-technological intervention in the organ donor “that can have dramatic therapeutic effects” in the recipient. One hopes that mild hypothermia is a new step toward improving transplant outcomes for every Mr. and Ms. X on the organ waiting list.

 

Irradiation in Early-Stage Breast Cancer

Posted by Carla Rothaus • July 24th, 2015

Women with breast cancer who were undergoing breast-conserving surgery were assigned to receive whole-breast irradiation with or without regional nodal irradiation. At 10 years, disease-free survival was improved in the nodal-irradiation group, but overall survival was not.

Many women with early-stage breast cancer undergo breast-conserving surgery followed by whole-breast irradiation, which reduces the rate of local recurrence. Radiotherapy to the chest wall and regional lymph nodes, termed regional nodal irradiation, which is commonly used after mastectomy in women with node-positive breast cancer who are treated with adjuvant systemic therapy, reduces locoregional and distant recurrence and improves overall survival. An unanswered question is whether the addition of regional nodal irradiation to whole-breast irradiation after breast-conserving surgery has the same effect.

Clinical Pearls

Does the addition of regional nodal irradiation to whole-breast irradiation after breast-conserving surgery prolong survival in early-stage breast cancer?

In the study by Whelan et al., eligible patients were women with invasive carcinoma of the breast who were treated with breast-conserving surgery and sentinel-lymph-node biopsy or axillary-node dissection and who had positive axillary lymph nodes or negative axillary nodes with high-risk features. The study randomly assigned women to undergo either whole-breast irradiation plus regional nodal irradiation (including internal mammary, supraclavicular, and axillary lymph nodes) (nodal-irradiation group) or whole-breast irradiation alone (control group). There was no significant between-group difference in overall survival, with 10-year rates of survival of 82.8% in the nodal-irradiation group and 81.8% in the control group (hazard ratio, 0.91; 95% confidence interval [CI], 0.72 to 1.13; P=0.38). Moreover, no significant difference was detected in breast-cancer mortality, with 10-year rates of 10.3% in the nodal-irradiation group and 12.3% in the control group (hazard ratio, 0.80; 95% CI, 0.61 to 1.05; P=0.11).

Table 1. Characteristics of the Patients at Baseline.

Figure 1. 10-Year Kaplan-Meier Estimates of Survival.

Does the addition of regional nodal irradiation to whole-breast irradiation after breast-conserving surgery prolong disease-free survival in early-stage breast cancer?

In the Whelan study, the rate of disease-free survival was higher in the nodal-irradiation group than in the control group, with 10-year rates of 82.0% and 77.0%, respectively (hazard ratio, 0.76; 95% CI, 0.61 to 0.94; P=0.01). The 10-year rates of isolated locoregional disease-free survival were 95.2% in the nodal-irradiation group and 92.2% in the control group (hazard ratio, 0.59; 95% CI, 0.39 to 0.88; P=0.009). The rates of distant disease-free survival at 10 years were 86.3% in the nodal-irradiation group and 82.4% in the control group (hazard ratio, 0.76; 95% CI, 0.60 to 0.97; P=0.03).

Figure 2. Disease-free Survival at 10 Years, According to Subgroup.

Table 2. Disease Recurrence or Death.

Morning Report Questions

Q: Are there increases in the rates of adverse events with the addition of regional nodal irradiation in early-stage breast cancer?

A: For acute events (those occurring 3 months or less after the completion of radiation), significant increases in the rates of radiation dermatitis and pneumonitis were reported in the nodal-irradiation group. For delayed events (those occurring more than 3 months after the completion of radiation), there were significant increases in the rates of lymphedema, telangiectasia of the skin, and subcutaneous fibrosis in the nodal-irradiation group. No increases in rates of brachial neuropathy, cardiac disease, or second cancers were observed in the nodal-irradiation group, but the period of follow-up was not sufficiently long to rule out a difference in the rate of second cancers.

Table 3. Adverse Events of Grade 2 or Higher.

Q: Were there any subgroups in the Whelan study that appeared to particularly benefit from the addition of regional nodal irradiation?

A: Although subgroup analyses were prespecified, they were generally not adequately powered to assess the benefit of treatment in different subgroups. Furthermore, the P values of the subgroup analyses were not adjusted for multiple testing. Patients with ER-negative or PR-negative tumors appeared to benefit more from regional nodal irradiation than those with ER-positive or PR-positive tumors. Although this effect was not observed in previous trials of postmastectomy radiation therapy, it supports the hypothesis that further research on the molecular characterization of the primary tumor may identify patients who are more likely to benefit from regional nodal irradiation. Since the number of node-negative patients in this trial was relatively small, the application of the study results to node-negative patients is unclear.

Gallbladder Disease

Posted by Carla Rothaus • July 24th, 2015

A new review summarizes recent innovations in the approaches to gallbladder disease, including laparoscopic cholecystectomy, cholecystectomy with natural orifice transluminal endoscopic surgery, percutaneous cholecystostomy, and peroral endoscopic gallbladder drainage.

Cholecystectomy is a well-established and frequently performed procedure. The demand for safer and less-invasive interventions continues to promote innovations in the management of gallbladder disease.

Clinical Pearls

• Describe laparoscopic approaches that are less invasive than standard laparoscopic cholecystectomy.

In single-incision laparoscopic cholecystectomy, one large, transumbilical, multi-instrument port is used instead of four incisions, leaving only a periumbilical scar. Theoretical but unproven advantages include improved cosmesis and reductions in postoperative pain, recovery time, and wound-related adverse events. Another less invasive technique, mini-laparoscopy, involves the use of access ports and instruments with a small diameter (2 to 5 mm). The cosmetic results are better than with standard laparoscopic cholecystectomy, but randomized trials have not shown other advantages. Single-incision laparoscopic and mini-laparoscopic cholecystectomy have failed to gain widespread acceptance because the techniques are more challenging to learn, and the procedures prolong operative time and increase costs.

Figure 1. Comparison of Access-Port Locations and Diameters for Conventional Laparoscopic Cholecystectomy, Mini-Laparoscopic Cholecystectomy, and Single-Incision Laparoscopic Cholecystectomy.

• What is the advantage of natural orifice transluminal endoscopic surgery (NOTES) cholecystectomy as compared to the laparoscopic approach?

NOTES is a technique in which surgery is performed through a naturally existing orifice and does not leave a cutaneous scar. It is typically performed by means of transgastric or transvaginal access with the use of flexible or rigid endoscopes, alone or in combination with limited laparoscopic access (which is known as hybrid NOTES). A major advantage of NOTES over laparoscopic approaches is the fact that removal of the resected gallbladder does not require an incision in the abdominal wall, which can be a source of postoperative pain and complications in wound healing. The procedure has been performed only a few thousand times, most often through the transvaginal route in patients without acute cholecystitis. Outcomes have been similar to those achieved with laparoscopic cholecystectomy, although NOTES is associated with a better aesthetic outcome, a shorter recovery time, and less pain. NOTES requires special equipment and is technically very difficult. Consequently, adoption of this technique has been limited to a few select medical centers.

Figure 2. Sites of Entry for Cholecystectomy with Natural Orifice Transluminal Endoscopic Surgery (NOTES).

Morning Report Questions

Q: Is there an alternative to percutaneous drainage of the gallbladder for patients who are not surgical candidates?

A: Transpapillary drainage of the gallbladder, which was first reported more than 25 years ago, follows the standard procedure for cannulation of the bile duct with the use of endoscopic retrograde cholangiography. A guidewire is advanced through the cystic duct and into the gallbladder. One end of a pigtail stent is deployed within the gallbladder, and the other end is either brought out through a nasobiliary catheter that exits through the nose or left to drain internally within the duodenum (double pigtail stent). The procedure is helpful in patients with symptomatic cholelithiasis who are not good candidates for percutaneous therapy or surgery, particularly those with advanced liver disease, ascites, or coagulopathy. The use of transpapillary drainage of the gallbladder is limited by the technical difficulty of advancing a guidewire from a retrograde position through the cystic duct, which is often long, narrow, and tortuous and is sometimes occluded by an impacted gallstone. In addition, the cystic duct can accommodate only small-caliber plastic stents (5 to 7 French), which are prone to occlusion with biofilm.

Q: Is transpapillary drainage the only alternative to percutaneous cholecystostomy?

A: The most recent alternative to percutaneous cholecystostomy is transmural endoscopic ultrasound-guided gallbladder drainage, which was described in 2007. The gallbladder is usually closely apposed to the gastrointestinal tract and is conspicuous on endosonography. The use of Doppler imaging allows the endoscopist to avoid vessels while introducing the needle into the gallbladder. A guidewire is then positioned within the gallbladder, which allows for the deployment of transnasal drainage catheters or internal stents. Although assessments of endoscopic ultrasound-guided gallbladder drainage have been limited to small studies conducted at expert centers, the procedure has been effective in the treatment of more than 95% of high-risk surgical patients who have acute cholecystitis. The development of endoscopic transmural access to the gallbladder introduces new questions, such as whether the stent can be easily removed, as well as whether the stent should be removed and, if so, when it should be removed.

Figure 3. Peroral Endoscopic Approaches to Gallbladder Drainage.

Table 2. Advantages and Disadvantages of Interventional Approaches to Symptomatic Gallbladder Disease.

Figure 4. Procedures for the Treatment of Symptomatic Gallbladder Disease, Stratified According to Patient Operative Status and Disease Severity.

 

Take the Fluids and Electrolytes Challenge

Posted by Jennifer Zeis • July 22nd, 2015

A 28-year-old man presents with diabetic ketoacidosis after an influenza-like illness. Lab values include: sodium 144 mmol/L, potassium 5.7 mmol/L, chloride 98 mmol/L, bicarbonate 13 mmol/L, creatinine 1.5 mg/dL, BUN 30 mg/dL, glucose 702 mg/dL, and venous pH 7.2. What is the best strategy to support this patient?
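The post leaves the answer to the review article, but the listed values support a quick bedside calculation. The formulas below are the standard anion-gap and glucose-corrected-sodium rules, an assumption of this sketch rather than anything stated in the post:

```python
# Anion gap and glucose-corrected sodium for the case values above.
# Standard bedside formulas (assumed, not from the post):
#   anion gap    = Na - (Cl + HCO3)
#   corrected Na = measured Na + 1.6 * (glucose - 100) / 100
sodium, chloride, bicarbonate = 144, 98, 13  # mmol/L
glucose = 702                                # mg/dL

anion_gap = sodium - (chloride + bicarbonate)
corrected_sodium = sodium + 1.6 * (glucose - 100) / 100

print(anion_gap)                   # 33 (normal roughly 8-12): elevated
print(round(corrected_sodium, 1))  # 153.6: hypernatremia after correction
```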

Take the poll and comment now. On Twitter use #NEJMCases.

Find the answers in the review article, “Electrolyte and Acid-Base Disturbances in Patients with Diabetes Mellitus,” to be published on August 6. 

Now on the NEJM Group Open Forum

Posted by Jennifer Zeis • July 21st, 2015

Here’s an update with the latest on the NEJM Group Open Forum.

The NEJM CareerCenter, in conjunction with Medstro and the American Physician Scientist Association, today opens an exploration of topics related to physician-scientist careers. Take the opportunity to chat with an esteemed panel from the National Institutes of Health, Brigham and Women’s Hospital, and the Mayo Clinic, as well as innovators from the TEDMED community and biotechnology.

The discussion series will focus on the following topics.

July 21, 2015: Physician-Scientists Shaping American Healthcare

July 28, 2015: Physician-Scientist Training: Current Options and Future Changes

August 4, 2015: How funding may impact the future of the Physician-Scientist

August 11, 2015: Successful Physician-Scientists mentor-mentee relationships

August 18, 2015: Physician-Scientist Transitions: How to take those next steps.

August 25, 2015: Role of Scientific Organizations in Training and Career

Also opening today, we bring together experts to discuss and share how we are implementing high value care strategies in daily practice, whether that’s in the hospital, on rounds, during Grand Rounds, or in the clinic. We’ll also discuss the place high value care has in the future of American health care.

The NEJM Group Open Forum is a pilot project we are running in collaboration with Medstro, a social professional network for physicians. Free registration is required; social login is available through Facebook, Twitter, Google, and LinkedIn.

You’re welcome to join any (or all!) of these discussions. Read the questions and answers posted so far; and like, share, and comment to become a part of the conversation.

Heparin-Induced Thrombocytopenia

Posted by Carla Rothaus • July 17th, 2015

A new Clinical Practice article provides an overview of heparin-induced thrombocytopenia (HIT). HIT is characterized by a fall in the platelet count of more than 50%, beginning 5 to 10 days after the start of heparin, and by hypercoagulability. Platelet factor 4–heparin antibody testing has a high negative, but low positive, predictive value. Treatment involves therapeutic-dose anticoagulation.

In contrast to other conditions caused by enhanced consumption, impaired production, or destruction of platelets, which lead to bleeding complications, immune-mediated heparin-induced thrombocytopenia (HIT) does not induce bleeding but rather results in a paradoxical prothrombotic state. Thromboembolic complications develop in approximately 50% of patients with confirmed HIT. Venous thrombosis of the large vessels of the lower limbs and pulmonary embolism are the most frequent complications, followed by peripheral arterial thrombosis and then stroke; myocardial infarction is uncommon.

Clinical Pearls

Who is most at risk for HIT?

HIT occurs in approximately 1 in 5000 hospitalized patients. The risk of HIT depends on the type of heparin and the patient population. The incidence is up to 10 times as high among patients receiving unfractionated heparin as it is among those receiving low-molecular-weight heparin, and HIT occurs more frequently among patients who have had major surgery than among those who have had minor surgery or are receiving medical therapy. HIT is rare in obstetrical patients, although in contexts other than pregnancy, women are at slightly higher risk than men.

Figure 1. Pathogenesis of Heparin-Induced Thrombocytopenia.

What is the typical time course for the onset of HIT, and what is the magnitude of the decrease in platelet count?

The onset of HIT characteristically occurs between 5 and 10 days after heparin is started, both in patients who receive heparin for the first time and in patients with reexposure. However, there are exceptions. In persons who have received heparin within the previous 90 days (especially within the previous 30 days), there may be persistent circulating anti-platelet factor 4 (PF4)–heparin antibodies, and HIT can start abruptly on reexposure to heparin (rapid-onset HIT); in this case, HIT is sometimes complicated by an anaphylactoid reaction within 30 minutes after a heparin bolus. The fall in platelet count in HIT occurs rapidly (over a period of 1 to 3 days) and is assessed relative to the highest platelet count after the start of heparin. The typical nadir is 40,000 to 80,000 platelets per cubic millimeter, but the count may remain in the normal range (e.g., a decline from 500,000 to 200,000 per cubic millimeter). In less than 10% of patients, the decrease in platelet count is less pronounced (30 to 50% of the highest preceding value). Rarely, the platelet count may fall below 20,000 per cubic millimeter, especially when HIT is associated with other causes of thrombocytopenia, such as consumptive coagulopathy.

Morning Report Questions

Q: How should a patient at risk for or with suspected HIT be evaluated?

A: Although monitoring of platelet counts facilitates the recognition of HIT, it is difficult to justify in many patients, especially outpatients. Monitoring should be considered when the risk of HIT is relatively high (>1%), such as among patients who have undergone cardiac surgery and those receiving unfractionated heparin after major surgery (other than heparin received for intraoperative flushes or catheter-related flushes). Scoring systems can be helpful in estimating the probability of HIT. A widely used scoring system is the 4T score, which evaluates four indicators: the relative platelet-count fall, the timing of the onset of the platelet-count fall, the presence or absence of thrombosis, and the likelihood of another cause, with scores on the individual components ranging from 0 to 2 and higher scores indicating a higher likelihood of HIT. For those whose score is intermediate or high, laboratory tests are needed to rule out HIT. Anti-PF4-heparin enzyme immunoassays have an excellent negative predictive value (98 to 99%) but a low positive predictive value, owing to the detection of clinically insignificant anti-PF4-heparin antibodies. Diagnostic accuracy for HIT is improved with the use of both an anti-PF4-heparin enzyme immunoassay and a functional test (e.g., a platelet-activation assay).

Figure 2. Timing of HIT and Rationale for Platelet Count Monitoring at Various Time Points.

Table 1. 4T Scoring System for Evaluating the Pretest Probability of Heparin-Induced Thrombocytopenia.

Figure 3. Diagnosis of HIT.
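The 4T arithmetic described above is simple enough to sketch in code. The following Python snippet is a hypothetical illustration (not a clinical tool): it sums the four component scores, each 0 to 2 as in Table 1, and maps the total to the commonly used low (0-3), intermediate (4-5), and high (6-8) pretest-probability bands.

```python
def four_t_probability(thrombocytopenia, timing, thrombosis, other_causes):
    """Sum the four 4T component scores (each 0, 1, or 2) and map the
    total to a pretest-probability category for HIT."""
    for score in (thrombocytopenia, timing, thrombosis, other_causes):
        if score not in (0, 1, 2):
            raise ValueError("each 4T component is scored 0, 1, or 2")
    total = thrombocytopenia + timing + thrombosis + other_causes
    if total <= 3:
        category = "low"
    elif total <= 5:
        category = "intermediate"
    else:
        category = "high"
    return total, category

# Example: clear >50% platelet fall (2), onset at day 5-10 (2),
# no thrombosis (0), one possible alternative cause (1):
print(four_t_probability(2, 2, 0, 1))  # (5, 'intermediate')
```

As the article notes, an intermediate or high score is the trigger for laboratory testing; a low score makes HIT unlikely and testing is generally not needed.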

Q: What is the appropriate management for HIT?

A: Key interventions in patients with highly suspected or confirmed acute HIT are the prompt cessation of heparin (if still being administered) and the initiation of an alternative anticoagulant at a therapeutic dose. Prophylactic-dose anticoagulation is insufficient to compensate for massive thrombin generation, even if the patient has no apparent thrombosis. Vitamin K antagonists (e.g., warfarin and phenprocoumon) must not be given until HIT has abated (e.g., the platelet count has increased to >150,000 per cubic millimeter at a stable plateau for 2 consecutive days), because they increase the risk of venous limb gangrene and limb loss by decreasing the level of protein C. Two drugs are approved for the treatment of HIT — the direct thrombin inhibitor argatroban (in the United States, Canada, the European Union, and Australia) and the antithrombin-dependent factor Xa inhibitor danaparoid (in Canada, the European Union, and Australia). Argatroban is frequently used in critically ill patients. It has a relatively short half-life, which is independent of renal function, but it requires intravenous administration. Fondaparinux and bivalirudin are also used in this context, although they have not been approved by the Food and Drug Administration for this indication. Prophylactic platelet transfusions should be avoided in patients with HIT. The risk of bleeding is very low, and such transfusions can increase the risk of thrombosis.

A Man with Sore Throat and Myalgias

Posted by Carla Rothaus • July 17th, 2015

In the latest Case Record of the Massachusetts General Hospital, a 20-year-old man presented with fever and a pericardial effusion. Five weeks earlier, sore throat, fever, malaise, and myalgias had developed. Broad-spectrum antibiotic therapy was administered, without improvement. A diagnosis was made.

A number of case reports indicate an association between tamponade and adult-onset Still’s disease. One case report describes a patient with adult-onset Still’s disease and reversible constrictive pericarditis.

Clinical Pearls

What clinical and laboratory findings are associated with adult-onset Still’s disease?

Adult-onset Still’s disease is classically characterized by four cardinal features: spiking fever, an evanescent salmon-pink maculopapular rash, arthritis, and a white-cell count greater than 10,000 per cubic millimeter with a predominance of neutrophilic polymorphonuclear cells. The rash associated with adult-onset Still’s disease is nonpruritic; it occurs with fever and commonly involves the trunk, arms, and legs. Persistent, atypical rashes may also occur. The arthralgias and arthritis associated with adult-onset Still’s disease generally involve the larger joints (i.e., knees, wrists, elbows, ankles, and shoulders). In addition, many affected patients present with sore throat, lymphadenopathy, anemia, and abnormal results of liver-function tests, and approximately one quarter of patients have pleuritis or pericarditis. Most patients have markedly elevated ferritin levels. Pericardial effusion occurs in approximately 4% of patients with adult-onset Still’s disease, and pleural effusion occurs in approximately 18%.

Figure 1. Clinical Photograph.

Does adult-onset Still’s disease have more than one clinical phenotype?

Adult-onset Still’s disease has two clinical phenotypes — a systemic form, which is predominantly characterized by systemic symptoms (e.g., high fever and rash), and a chronic form, which is predominantly characterized by arthritis that may become deforming. The systemic form is associated with a better prognosis and may be monophasic.

Morning Report Questions

Q: Are there diagnostic laboratory tests or specific histologic findings for adult-onset Still’s disease?

A: In patients with adult-onset Still’s disease, synovial or pleuropericardial fluids are sterile, inflammatory exudates. In addition, skin-biopsy or lymph-node-biopsy specimens have no specific histologic appearance. A low fraction of glycosylated ferritin may have specificity for adult-onset Still’s disease, but a test for this is not widely available. No laboratory tests are specific for adult-onset Still’s disease, and therefore the diagnosis of this disorder relies on the presence of a constellation of clinical findings. Because adult-onset Still’s disease remains a diagnosis of exclusion, there is often a considerable delay in establishing the diagnosis. Differentiating between adult-onset Still’s disease and acute rheumatic fever has long been a clinical challenge because of their overlapping clinical features. Patients with adult-onset Still’s disease may have a higher maximum temperature and greater temperature swings than do patients with acute rheumatic fever. The Yamaguchi criteria are the most validated diagnostic criteria for adult-onset Still’s disease, although the Yamaguchi criteria are considered valid only in the absence of infection.

Table 3. Diagnostic Criteria for Adult-Onset Still’s Disease and Acute Rheumatic Fever.

Q: How is adult-onset Still’s disease treated?

A: Adult-onset Still’s disease is characterized by inflammation, and interleukin-1 appears to be the dominant mediator. Biologic agents that block interleukin-1 activity are highly effective in the treatment of persons with adult-onset Still’s disease. However, evidence for the treatment of adult-onset Still’s disease is based on small studies and case series. For mild disease, glucocorticoids and nonsteroidal antiinflammatory drugs may be effective. For moderate disease, glucocorticoids are typically combined with methotrexate for chronic forms or with biologic agents for systemic forms. For severe disease, evidence is limited.

Take the New Case Challenge!

Posted by Karen Buckley • July 16th, 2015

A 28-year-old pregnant woman was admitted in the summer with fever, headache, and fatigue. She reported neck stiffness, earache, intermittent contractions, and a possible erythematous rash on her shin. What is the most likely diagnosis?

Vote and comment now on NEJM.org. The answer will appear within the full text of the Case Record of the Massachusetts General Hospital published on July 30. 

On Twitter use #NEJMCases.