Daratumumab for Myeloma

Posted by • October 6th, 2016



The incorporation of proteasome inhibitors and immunomodulatory drugs into the standard of care has improved outcomes in patients with multiple myeloma over the past 10 years, but most patients still eventually have a relapse. In the October 6, 2016, issue of the New England Journal of Medicine, Dimopoulos et al. report the results of a prespecified interim analysis of a phase 3 trial of daratumumab, lenalidomide, and dexamethasone in patients with relapsed or refractory myeloma. The addition of daratumumab to lenalidomide and dexamethasone resulted in a superior response rate and longer progression-free survival, as compared with lenalidomide and dexamethasone alone, at the cost of more frequent neutropenia and infusion-related reactions.

Clinical Pearl

• What is daratumumab?

Daratumumab, a human IgGκ monoclonal antibody that targets CD38, has shown substantial single-agent efficacy and a manageable safety profile in phase 1–2 studies involving patients with heavily pretreated relapsed or refractory multiple myeloma, with reported overall response rates of 29% and 36%. On the basis of these findings, daratumumab monotherapy (at a dose of 16 mg per kilogram of body weight) was approved by the Food and Drug Administration and the European Medicines Agency for these patients.

Clinical Pearl

• What are the mechanisms of action of daratumumab?

The mechanisms of action of daratumumab comprise immune-mediated effects, including complement-dependent and antibody-dependent cell-mediated cytotoxic effects, antibody-dependent cellular phagocytosis, and apoptosis by means of cross-linking. Moreover, daratumumab may have a role in immunomodulation by means of depletion of CD38-positive regulatory immune suppressor cells, which leads to a greater clonal expansion of T cells in patients who have a response than in those who do not.

Morning Report Questions

Q: Does the addition of daratumumab to lenalidomide and dexamethasone improve progression-free survival in patients with relapsed or refractory myeloma?

A: In the study by Dimopoulos et al., the primary end point was progression-free survival. The results showed that the addition of daratumumab to lenalidomide and dexamethasone significantly prolonged progression-free survival and was associated with a 63% lower risk of disease progression or death than lenalidomide and dexamethasone alone among patients with multiple myeloma who had previously received one or more lines of therapy. The treatment effect of daratumumab was consistent regardless of previous exposure to lenalidomide (a limitation being the relatively small number of patients with such exposure) and across all subgroups, including patients 65 years of age or older, those with disease that was refractory to proteasome inhibitors or to the most recent line of therapy, those with International Staging System stage III disease, and those with previous exposure to a proteasome inhibitor or an immunomodulatory drug. The treatment benefit associated with daratumumab was also similar among patients with one, two, or three previous lines of therapy.
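
As a simple arithmetic note (derived from the figures above rather than stated separately), a 63% lower risk of disease progression or death corresponds to a hazard ratio of approximately

\[ \mathrm{HR} \approx 1 - 0.63 = 0.37 \]

for daratumumab plus lenalidomide and dexamethasone as compared with lenalidomide and dexamethasone alone.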

Q: What were some of the notable adverse events associated with daratumumab in the study by Dimopoulos et al.?

A: In the study by Dimopoulos et al., the most common adverse events of grade 3 or 4 during treatment were neutropenia (in 51.9% of the patients in the daratumumab group vs. 37.0% of those in the control group), thrombocytopenia (in 12.7% vs. 13.5%), and anemia (in 12.4% vs. 19.6%). Daratumumab-associated infusion-related reactions occurred in 47.7% of the patients and were mostly of grade 1 or 2.

The Health Effects of Electronic Cigarettes

Posted by • October 6th, 2016



Although the sale of e-cigarettes is prohibited in some countries, it is legal in most, including the United States, where the FDA recently finalized rules for the regulation of e-cigarettes as a tobacco product. The U.S. market for e-cigarettes is now estimated to be worth $1.5 billion, a figure that is projected to grow by 24.2% per year through 2018. Global sales are predicted to reach $10 billion by 2017. The use of electronic cigarettes is growing, and some hope that they will replace what is felt to be the more dangerous nicotine-delivery system, the conventional cigarette. However, data on the long-term safety of e-cigarettes are still accumulating; a new Review Article summarizes what is currently known about their health effects.

Clinical Pearl

• What are the components of an electronic cigarette?

E-cigarettes, also known as electronic nicotine-delivery systems, are devices that produce an aerosol by heating a liquid that contains a solvent (vegetable glycerin, propylene glycol, or a mixture of these), one or more flavorings, and nicotine, although the nicotine may be omitted. The evaporation of the liquid at the heating element is followed by rapid cooling to form an aerosol. E-cigarette aerosol is directly inhaled (or “vaped”) by the user through a mouthpiece. Each device includes a battery, a reservoir that contains the liquid, and a vaporization chamber with a heating element. The composition of the aerosol that is generated depends on the ingredients of the liquid, the electrical characteristics of the heating element, the temperature reached, and the characteristics of the wick. The constituents of the aerosol generated by e-cigarettes and inhaled by the user are more directly relevant to health than the ingredients of e-cigarette liquids.



Clinical Pearl

• Who is using e-cigarettes?

In 2010, a total of 1.8% of U.S. adults reported having used an e-cigarette at some time, a rate that rose to 13.0% by 2013; reports of “current use” increased from 0.3% to 6.8% during this period. Although tobacco smokers were among those most likely to be current users of e-cigarettes, a third of current e-cigarette users had never smoked tobacco or were former tobacco smokers. Of particular concern regarding public health has been the increasing experimentation with and use of e-cigarettes among persons younger than 18 years of age.

Morning Report Questions

Q: Do e-cigarettes help tobacco smokers quit smoking?

A: The efficacy of e-cigarettes as a smoking-cessation intervention remains uncertain owing to the limited data available from randomized trials. Furthermore, it is difficult to extrapolate the results of studies that used first-generation e-cigarettes to second- and third-generation devices that are more satisfying to users because of changes in aerosol characteristics, nicotine delivery, and the variety of flavors. Recent meta-analyses that have combined data from randomized trials and observational cohort studies have not shed further light on the efficacy of e-cigarettes as a smoking-cessation aid.

Q: Are the flavorings that are added to e-cigarette liquid considered harmless?

A: In 2014, there were an estimated 466 brands and 7764 unique flavors of e-cigarette products; this heterogeneity complicates research on potential health effects. Although some studies suggest that using e-cigarettes may be less dangerous than smoking conventional cigarettes, more needs to be learned. A particular challenge in this regard is the striking diversity of the flavorings in e-cigarette liquids, since the effects on health of the aerosol constituents produced by these flavorings are unknown. Many of the liquid flavorings in e-cigarettes are aldehydes, which in some cases are present in concentrations sufficient to pose risks owing to the irritant characteristics of these compounds. Sweet-flavored e-cigarette liquids often contain diacetyl, acetyl propionyl, or both. These flavorings are approved for use in foods but have been associated with respiratory disease when inhaled during manufacturing processes. Some e-cigarette liquids are flavored with tobacco extracts, and these may contain tobacco-specific nitrosamines, nitrates, and phenol, although in far lower concentrations than those found in tobacco products.

Learning More About Living Longer After Myocardial Infarction

Posted by • October 5th, 2016



To practice medicine is often to be inundated with metrics. Institutions may spend immense resources collecting data on readmission rates, time to treatment, and adherence to therapy. We know that appropriate measurement is key to health care improvement, improved rankings, and reimbursement. However, for many of these metrics, the connection between short-term performance and long-term patient outcomes remains unproven. This may prompt the overwhelmed and pragmatic provider to ask, “Have we even picked the right metric?” In this week’s NEJM, Bucholz et al. share results that may help answer this question in patients with acute myocardial infarction (AMI).

The investigators utilized data from the Cooperative Cardiovascular Project (CCP) to assess Medicare hospitalizations for AMI between February 1994 and July 1995. For each hospital involved, they calculated a risk-standardized 30-day mortality rate, and then divided all hospitals into quintiles based on their 30-day mortality performance. Hospitals with the highest mortality rates were categorized as “lowest-performing” and those with the lowest mortality rates were categorized as “highest-performing.” Then, using Medicare enrollment data from 1994 to 2012, the authors identified dates of death for patients included in the initial CCP database, allowing them to estimate life expectancy for each of the hospital quintiles.
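
To make the quintile step concrete, below is a minimal sketch, not the authors' code, of how hospitals could be grouped once a risk-standardized 30-day mortality rate (RSMR) has been computed for each one. The hospital identifiers, RSMR values, and column names are hypothetical; in the actual study, the risk-standardized rates were calculated from patient-level CCP data with risk-adjustment models rather than taken from a simple table like this.

```python
# Illustrative sketch only: assign hospitals to performance quintiles by their
# risk-standardized 30-day mortality rate (RSMR). All values are hypothetical.
import pandas as pd

hospitals = pd.DataFrame({
    "hospital_id": ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J"],
    "rsmr_30d":    [0.14, 0.16, 0.17, 0.18, 0.19, 0.20, 0.21, 0.22, 0.24, 0.27],
})

# Quintile 1 = lowest mortality ("highest-performing");
# quintile 5 = highest mortality ("lowest-performing").
hospitals["quintile"] = pd.qcut(hospitals["rsmr_30d"], 5, labels=[1, 2, 3, 4, 5])
print(hospitals.sort_values("rsmr_30d"))
```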

Analysis of these data indicated that AMI patients admitted to the lowest-performing hospitals (those with the highest 30-day mortality) had the lowest 17-year life expectancy, whereas AMI patients admitted to the highest-performing hospitals (those with the lowest 30-day mortality) had the highest estimated life expectancy. After adjustment for patient-specific factors and hospital-specific clinical factors, AMI patients treated at high-performing hospitals lived on average 0.74 to 1.14 years longer than patients treated at low-performing hospitals. This difference in life expectancy remained statistically significant even when accounting for a hospital’s case mix.

What impact should these findings have on our AMI practice? The results suggest that early, better care resulting in improved short-term mortality may actually create a persistent benefit to patients. They also reinforce the importance of 30-day mortality in assessing the quality and impact of AMI care being administered at our hospitals.

Unfortunately, Bucholz et al. cannot provide us with recommendations on specific interventions or processes in which to invest resources to improve AMI 30-day mortality. In fact, the authors note, prior research has yet to consistently demonstrate a tight link between specific AMI process measurements and AMI patient outcomes, and in their own analysis, adjustment for treatment variables such as use of aspirin and beta-blockers did not fully account for the observed differences in outcome.

Moving forward, these results are an excellent starting point for the next phase of quality improvement in cardiac care. We should continue to critically assess the utility of current metrics, building on those that create value (such as 30-day mortality) and eliminating those that only add to paperwork without providing benefit to patients. As the authors suggest, future AMI improvement efforts could also focus on areas we are less adept at measuring: hospital culture, organizational structure, and collaboration. Regulators, administrators, and front-line providers should begin to think critically about how to leverage this information to improve patient care.

What are you and/or your institution doing to improve care for acute myocardial infarction? When you assess the quality of care you are providing, what metrics are most/least important to you?

Influenza Vaccination

Posted by • September 29th, 2016



The effects of influenza traditionally have been assessed by comparing hospitalizations and deaths during an influenza season with a baseline model. These calculations suggest that seasonal influenza epidemics in the United States are responsible for between 55,000 and 431,000 hospitalizations due to pneumonia and influenza each year and as many as 49,000 deaths. The highest levels of influenza-attributable hospitalizations and deaths tend to occur in years in which H3N2 viruses predominate. Influenza vaccines confer considerable but incomplete protection and are recommended for everyone. As discussed in a new Clinical Practice article, the Advisory Committee on Immunization Practices does not endorse a specific vaccine but recommends against the use of the live attenuated vaccine in the United States during the 2016–2017 season.

Clinical Pearl

• At what time of the year are decisions made regarding the composition of each annual influenza vaccine?

The specific influenza viruses that will be included in the vaccine each year are determined by worldwide surveillance and antigenic characterization of human viral isolates by the Global Influenza Surveillance and Response System of the World Health Organization. Currently, the production process requires that these decisions be made in February to allow time for vaccines to be produced and distributed in the Northern Hemisphere the following fall.

Clinical Pearl

• Is the inactivated quadrivalent formulation of the influenza vaccine more effective than the inactivated trivalent formulation?

Since 1977, inactivated vaccines have contained three components — a recent H1N1 virus, an H3N2 virus, and an influenza B virus — in a so-called trivalent formulation (IIV3). Since approximately 1980, two antigenically distinct lineages of influenza B virus have cocirculated, and many inactivated vaccines now include both B lineages in a quadrivalent formulation (IIV4). Studies have shown that the addition of the fourth component does not interfere with the immune response to the other three components, but direct evidence of enhanced protection from IIV4, as compared with IIV3 formulations, is lacking.



Morning Report Questions

Q: Is the high-dose influenza vaccine more effective than the standard-dose vaccine?

A: Antibodies against the viral attachment protein hemagglutinin (HA) prevent entry of the virus into cells, neutralize virus in vitro, and are associated with protection in clinical studies. The serum HA-inhibition (HAI) assay is the primary means of assessing serum antibody responses to standard influenza vaccines. Higher levels of HAI antibodies are associated with increased protection against influenza, but no absolute value of antibodies uniformly predicts protection. Although the dose–response curve for IIVs is rather flat, administration of increased doses of HA protein does result in levels of postvaccination serum HAI antibodies that are higher than those with lower doses. In one very large, randomized, comparative trial, a vaccine containing approximately four times the standard dose of HA was shown to provide significantly greater protection than the standard-dose vaccine against laboratory-confirmed influenza in persons who were 65 years of age or older (incidence of influenza, 1.9% in the standard-dose group vs. 1.4% in the high-dose group). The enhanced protective effect was primarily against H3N2 viruses, the subtype with the greatest effect on older adults. Some, but not all, postmarketing studies of this high-dose vaccine (IIV3-HD) have confirmed the enhanced effectiveness of high-dose vaccine in older persons. Serious adverse events have not been more frequent with the high-dose vaccine than with the standard-dose vaccine, but pain at the injection site has been reported more often (36% vs. 24%).
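
As a back-of-the-envelope check using the rounded attack rates quoted above (a derived approximation, not a figure reported in the trial), the relative efficacy of the high-dose vaccine as compared with the standard-dose vaccine is roughly

\[ 1 - \frac{1.4\%}{1.9\%} \approx 0.26, \]

that is, on the order of one quarter fewer laboratory-confirmed cases; the trial’s published estimate differs slightly because these rates are rounded.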

Q: Have any recommendations been issued to date regarding vaccination for the 2016–2017 influenza season?

A: One live attenuated influenza vaccine (LAIV4) is licensed in the United States. LAIV is administered intranasally, and the limited replication of the vaccine viruses in the upper respiratory tract induces immunity against influenza. Observational studies have recently called into question the effectiveness of LAIV. Analysis of data collected from 2010 through 2014 showed similar levels of effectiveness of LAIV and IIV against H3N2 and B viruses, but a decreased effectiveness of LAIV, especially against H1N1 viruses in the 2010–2011 and 2013–2014 seasons. Preliminary data for 2015–2016 have also suggested minimal effectiveness of LAIV, and the Advisory Committee on Immunization Practices (ACIP) has recommended that LAIV not be used in the 2016–2017 vaccination season.

Azithromycin Prophylaxis for Cesarean Delivery

Posted by • September 29th, 2016



Cesarean delivery is the most common major surgical procedure and is associated with a rate of surgical-site infection (including endometritis and wound infection) that is 5 to 10 times the rate for vaginal delivery. Tita et al. assessed whether the addition of azithromycin to standard antibiotic prophylaxis before skin incision would reduce the incidence of infection after cesarean section among women who were undergoing nonelective cesarean delivery during labor or after membrane rupture. In this new Original Article involving women who received standard antibiotic prophylaxis for nonelective cesarean section, the risk of infection after surgery was lower with the addition of azithromycin than with placebo.

Clinical Pearl

• How does pregnancy-associated infection rank as a cause of maternal death in the United States?

Globally, pregnancy-associated infection is a major cause of maternal death and is the fourth most common cause in the United States.

Clinical Pearl

• How often do postoperative infections occur after nonelective cesarean delivery?

Despite routine use of antibiotic prophylaxis (commonly, a cephalosporin given before skin incision), infection after cesarean section remains an important concern, particularly among women who undergo nonelective procedures (i.e., unscheduled cesarean section during labor, after membrane rupture, or for maternal or fetal emergencies). As many as 60 to 70% of all cesarean deliveries are nonelective; postoperative infections occur in up to 12% of women undergoing nonelective cesarean delivery with standard preincision prophylaxis.

Morning Report Questions

Q: Does the addition of azithromycin to standard antibiotic prophylaxis reduce the frequency of infection after nonelective cesarean section?

A: In the study by Tita et al., the authors found that the addition of azithromycin to standard antibiotic prophylaxis significantly reduced the frequency of infection after nonelective cesarean section. The primary outcome was a composite of endometritis, wound infection, or other infections (abdominopelvic abscess, maternal sepsis, pelvic septic thrombophlebitis, pyelonephritis, pneumonia, or meningitis) occurring up to 6 weeks after surgery. The primary composite outcome occurred in 62 women (6.1%) who received azithromycin and in 119 (12.0%) who received placebo (relative risk, 0.51; 95% confidence interval [CI], 0.38 to 0.68; P<0.001). The use of azithromycin was associated with significantly lower rates of endometritis (3.8% vs. 6.1%; relative risk, 0.62; 95% CI, 0.42 to 0.92; P=0.02) and wound infections (2.4% vs. 6.6%; relative risk, 0.35; 95% CI, 0.22 to 0.56; P<0.001). The risks of other infections were low and did not differ significantly between groups.
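
As a quick arithmetic check, the reported relative risk for the primary outcome follows directly from the two event rates:

\[ \mathrm{RR} = \frac{6.1\%}{12.0\%} \approx 0.51. \]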



Q: Does the addition of azithromycin to standard antibiotic prophylaxis for nonelective cesarean delivery increase the risk of serious neonatal complications? 

A: In the study by Tita et al., there was no significant between-group difference in a secondary neonatal composite outcome that included neonatal death and serious neonatal complications (14.3% vs. 13.6%, P=0.63).

Drug-Eluting or Bare-Metal Stents for Coronary Artery Disease

Posted by • September 26th, 2016



Mr. Patrick is a 59-year-old man admitted to the hospital with a non-ST-elevation myocardial infarction (NSTEMI). You start him on aspirin, a P2Y12 inhibitor, a statin, and heparin, and discuss the need for cardiac catheterization and possible stent placement. Mr. Patrick has heard that there are different types of stents available and wants to know which one is best. How do you answer Mr. Patrick’s question?

Percutaneous coronary intervention (PCI) began with balloon angioplasty and has since evolved with the development of bare-metal stents, followed by first-generation drug-eluting stents and now second-generation drug-eluting stents. Second-generation drug-eluting stents have been shown to be associated with a lower risk of stent restenosis than first-generation drug-eluting stents. However, the newest drug-eluting stents and bare-metal stents have been compared only in small studies with limited generalizability and in meta-analyses. Additionally, bare-metal stent technology has improved since its development three decades ago, with changes in metal composition and thinner struts. A large randomized trial comparing these updated bare-metal stents with contemporary drug-eluting stents in the era of medical therapy with antiplatelet agents and statins is needed.

In the Norwegian Coronary Stent Trial (NORSTENT), published in this week’s NEJM, investigators screened all patients at all eight PCI centers in Norway from 2008 to 2011 and enrolled those with lesions in native coronary arteries or coronary-artery grafts. After patients with prior stent placement, limited life expectancy, or contraindications to long-term dual antiplatelet therapy were excluded, 9013 patients were randomly assigned to receive either drug-eluting stents or bare-metal stents. Patients in both groups received aspirin (75 mg daily) indefinitely and clopidogrel (75 mg daily for 9 months) after PCI.

At 6 years of follow-up, the rate of the primary composite outcome of death from any cause or nonfatal spontaneous myocardial infarction did not differ between the drug-eluting stent group and the bare-metal stent group (16.6% vs. 17.1%, P=0.66). The rate of any revascularization, a secondary end point, was lower in the drug-eluting stent arm (16.5% vs. 19.8%, P<0.001). Rates of definite stent thrombosis were 0.8% and 1.2%, respectively (P=0.0498). Measures of quality of life and disease-specific health status on the Seattle Angina Questionnaire did not differ between the groups.
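
To put the revascularization difference in more familiar clinical terms (a derived figure, not one reported in the trial), the absolute difference of 3.3 percentage points corresponds to a number needed to treat of roughly

\[ \mathrm{NNT} \approx \frac{1}{0.198 - 0.165} \approx 30, \]

that is, about 30 patients would need to receive a drug-eluting stent rather than a bare-metal stent to prevent one additional revascularization over 6 years of follow-up.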

In an accompanying editorial, Dr. Eric Bates from the Division of Cardiovascular Diseases at University of Michigan Medical Center writes, “The outcomes with second-generation drug-eluting stents make them preferred in most clinical situations, and recent recommendations for shorter-duration dual-antiplatelet therapy make that choice even more attractive.” But he adds, “Nevertheless, the use of bare-metal stents remains an important option for PCI in some patients,” citing previous studies showing low restenosis rates in patients with large-vessel diameters, and in those who cannot take dual anti-platelet therapy for an extended period of time due to bleeding, cost, anticipated surgery, or need for anticoagulation for another indication.

The NORSTENT study answers important questions about outcomes for patients with modern coronary artery stents. The results represent a large number of patients from across an entire country, allowing for greater generalizability than results of prior trials. John Jarcho, deputy editor at NEJM, noted that, “NORSTENT confirmed that the principal advantage of drug-eluting stents compared to bare-metal stents is the lower rate of subsequent revascularization. It also found that rates of stent thrombosis, a major concern with first-generation drug-eluting stents, are low with both newer-generation drug-eluting stents and bare-metal stents.”

Reversal of Factor Xa Inhibitor-Associated Acute Major Bleeding

Posted by • September 23rd, 2016



Use of anticoagulants

It is estimated that slightly more than 1 in 7 strokes is due to atrial fibrillation. The use of anticoagulants reduces this risk of thromboembolism. In recent years, a number of direct oral anticoagulants (such as apixaban, rivaroxaban, and edoxaban) that inhibit activated coagulation factor X have been approved for stroke prevention in patients with atrial fibrillation, prevention and treatment of venous thromboembolism, and management of acute coronary syndrome. These drugs have more predictable pharmacokinetics than warfarin and obviate the need for prothrombin-time monitoring and multiple dose adjustments. However, many patients and clinicians have expressed concern that reversal agents are not available and have therefore been hesitant to adopt such drugs, despite data from clinical trials showing that factor Xa inhibitors are at least as efficacious as warfarin for the prevention and treatment of thromboembolic complications, with a lower risk of intracranial bleeding (see N Engl J Med 2011;365:981; 2010;363:2487; 2013;369:799; 2011;365:883; 2010;363:2499; 2008;358:2765; 2013;369:2093; and 2013;369:1406).

What is the ANNEXA-4 study about?

In a multicenter, open-label, single-group study, investigators evaluated 67 patients who presented with acute major bleeding within 18 hours after the administration of a factor Xa inhibitor. The patients were treated with andexanet, a recombinant modified human factor Xa decoy protein that has been shown to reverse the inhibition of factor Xa in healthy volunteers. After receiving a bolus of andexanet followed by a 2-hour infusion of the drug, the patients were evaluated for pharmacodynamic outcomes (changes in the reduction of anti-factor Xa activity over time) and clinical outcomes (hemostasis efficacy and safety).

Who participated in the study?

The mean age of the patients was 77 years, and the predominant indication for anticoagulation was atrial fibrillation (70%). Of the 67 patients, most had received rivaroxaban (n=32) or apixaban (n=31). The sites of acute major bleeding were primarily gastrointestinal (49%) and intracranial (42%). The time from presentation to the andexanet bolus was about 5 hours.

What were the results?

After the administration of andexanet bolus, the median anti-factor Xa activity was reduced by 89% among patients receiving rivaroxaban (95% CI, 58-94) and by 93% among those receiving apixaban (95% CI, 87-94). The reduction of anti-factor Xa activity was stable during the 2-hour infusion. Four hours after the conclusion of the infusion, the median anti-factor Xa activity was reduced by 39% (rivaroxaban) and 30% (apixaban), as compared to baseline measurements.

Twelve hours after the andexanet infusion, clinical hemostasis effectiveness was judged to be excellent or good in about 80% of patients. However, thrombotic events occurred in about 18% of patients at 30-day follow-up.

So, should I worry about safety? Does this potentially change my practice?

In an accompanying editorial, Beverley Hunt (St Thomas’ Hospital, UK) and Marcel Levi (University of Amsterdam, Netherlands) state, “It is impossible to know whether andexanet had an intrinsic prothrombotic effect or whether the high rate of thrombosis was related to the absence of an antithrombotic agent in a high-risk situation, since the presence of major bleeding alone is associated with an increased subsequent rate of venous thromboembolism.”

The editorialists believe that the actual need for an antidote is likely to be small: “Because the half-lives of direct oral anticoagulants are shorter than that of warfarin, the effects of the drugs wear off quickly, and unlike the case with warfarin, stopping the drug may be all that is required in most scenarios.”

What is my takeaway?

In patients with acute major bleeding associated with the use of a factor Xa inhibitor, an initial bolus followed by a 2-hour infusion of andexanet reduced anti-factor Xa activity, with effective hemostasis in the vast majority of patients. It is reassuring to have an effective antidote to the factor Xa inhibitors. Additional work will be necessary to sort out the optimal duration of reversal in patients with an underlying increased clotting risk.

Primary Sclerosing Cholangitis

Posted by • September 22nd, 2016



Primary sclerosing cholangitis is an idiopathic, heterogeneous, cholestatic liver disease that is characterized by persistent, progressive biliary inflammation and fibrosis. There is no effective medical therapy for this condition. End-stage liver disease necessitating liver transplantation may ultimately develop in affected patients. This new Review Article summarizes the pathogenesis and management of this condition.

Clinical Pearl

• How does primary sclerosing cholangitis typically present, and how is it diagnosed?

There are several subtypes of primary sclerosing cholangitis. The classic subtype, which involves the entire biliary tree, is present in approximately 90% of patients with primary sclerosing cholangitis. Primary sclerosing cholangitis is insidious; about half the patients with this condition do not have symptoms but receive a diagnosis after liver-function tests are found to be abnormal. When symptoms are present, abdominal pain (in 20% of patients), pruritus (in 10%), jaundice (in 6%), and fatigue (in 6%) predominate. Diagnostic criteria include an increased serum alkaline phosphatase level that persists for more than 6 months, cholangiographic findings of bile-duct strictures detected by means of either magnetic resonance cholangiopancreatography (MRCP) or endoscopic retrograde cholangiopancreatography (ERCP), and exclusion of causes of secondary sclerosing cholangitis. A liver biopsy is not necessary for diagnosis unless small-duct primary sclerosing cholangitis or an overlap with autoimmune hepatitis is suspected.



Clinical Pearl

• What conditions are associated with primary sclerosing cholangitis?

A variety of coexisting conditions are associated with primary sclerosing cholangitis. Because inflammatory bowel disease (ulcerative colitis more often than Crohn’s disease) occurs in most patients with primary sclerosing cholangitis, colonoscopy is warranted in all patients who have received a new diagnosis. The risk of colon cancer among patients with primary sclerosing cholangitis and concomitant inflammatory bowel disease is four times as high as the risk among patients with inflammatory bowel disease alone and 10 times as high as the risk in the general population. Gallbladder disease (stones, polyps, and cancer) is common in patients with primary sclerosing cholangitis. In developed countries, primary sclerosing cholangitis is the most common risk factor for cholangiocarcinoma. Indeed, the risk of cholangiocarcinoma among patients with primary sclerosing cholangitis is 400 times as high as the risk in the general population.

Morning Report Questions

Q: Do treatment guidelines recommend ursodeoxycholic acid for primary sclerosing cholangitis?

A: Ursodeoxycholic acid has been widely studied as a therapy for primary sclerosing cholangitis. In one randomized, double-blind, placebo-controlled trial, patients who received ursodeoxycholic acid had decreased levels of serum liver enzymes, but they did not have higher rates of survival than the rates among patients who received placebo. In a randomized, double-blind, placebo-controlled trial, the risk of the primary end point (death, liver transplantation, minimal listing criteria for liver transplantation, cirrhosis, esophageal or gastric varices, and cholangiocarcinoma) was 2.3 times higher among patients who received high-dose ursodeoxycholic acid (at a dose of 25 mg per kilogram of body weight) than among those who received placebo (P<0.01). Thus, treatment guidelines for primary sclerosing cholangitis are conflicting: the American Association for the Study of Liver Diseases and the American College of Gastroenterology do not support the use of ursodeoxycholic acid, whereas the European Association for the Study of the Liver endorses the use of moderate doses (13 to 15 mg per kilogram). Several new treatments are being assessed in ongoing clinical trials.

Q: What percentage of patients with primary sclerosing cholangitis will eventually require liver transplantation?

A: Because of the progressive nature of primary sclerosing cholangitis, approximately 40% of patients with this disease will ultimately require liver transplantation. In fact, primary sclerosing cholangitis was the indication for approximately 6% of liver transplantations performed in the United States from 1988 through 2015. After liver transplantation for primary sclerosing cholangitis, the 1-year survival rate is approximately 85% and the 5-year survival rate is approximately 72%. Nevertheless, the disorder may recur in approximately 25% of patients after transplantation.

Craniectomy for Traumatic Intracranial Hypertension

Posted by • September 22nd, 2016



Intracranial hypertension after traumatic brain injury (TBI) is associated with an increased risk of death in most case series. The monitoring of intracranial pressure and the administration of interventions to lower intracranial pressure are routinely used in patients with TBI, despite the lack of level 1 evidence. Hutchinson et al. conducted the Randomised Evaluation of Surgery with Craniectomy for Uncontrollable Elevation of Intracranial Pressure (RESCUEicp) trial to assess the effectiveness of craniectomy as a last-tier intervention in patients with TBI and refractory intracranial hypertension. Evidence from the new Original Article shows that decompressive craniectomy resulted in lower mortality but higher rates of vegetative state and severe disability than continued medical management.

Clinical Pearl

• What different types of craniectomy have been used to treat traumatic intracranial hypertension?

Decompressive craniectomy is a surgical procedure in which a large section of the skull is removed and the underlying dura mater is opened. Primary decompressive craniectomy refers to leaving a large bone flap out after the evacuation of an intracranial hematoma in the early phase after a TBI. A secondary decompressive craniectomy is used as part of tiered therapeutic protocols that are frequently used in intensive care units in order to control raised intracranial pressure and to ensure adequate cerebral perfusion pressure after TBI.

Clinical Pearl

• What has been learned from randomized trials that assessed craniectomy as an early intervention for traumatic intracranial hypertension?

In the Decompressive Craniectomy (DECRA) trial, patients who had an intracranial pressure of more than 20 mm Hg for more than 15 minutes (continuously or intermittently) within a 1-hour period, despite optimized first-tier interventions, were randomly assigned to early bifrontal decompressive craniectomy and standard care or to standard care alone. The authors found that decompressive craniectomy was associated with more unfavorable outcomes than standard care alone.

Morning Report Questions

Q: What are the eight outcome categories that make up the Extended Glasgow Outcome Scale (GOS-E)?

A: In the study by Hutchinson et al., the primary-outcome measure was assessed with the use of the Extended Glasgow Outcome Scale (GOS-E) at 6 months after randomization. The GOS-E is a global outcome scale assessing functional independence, work, social and leisure activities, and personal relationships. Its eight outcome categories are as follows: death, vegetative state (unable to obey commands), lower severe disability (dependent on others for care), upper severe disability (independent at home), lower moderate disability (independent at home and outside the home but with some physical or mental disability), upper moderate disability (independent at home and outside the home but with some physical or mental disability, with less disruption than lower moderate disability), lower good recovery (able to resume normal activities with some injury-related problems), and upper good recovery (no problems).

Q: What clinical outcomes are associated with craniectomy performed as a last-tier intervention for refractory traumatic intracranial hypertension? 

A: In the RESCUEicp trial, the authors found that the rate of death at 6 months was 26.9% in the surgical group and 48.9% in the medical group. The rate of vegetative state was 8.5% versus 2.1%; the rate of lower severe disability (dependent on others for care), 21.9% versus 14.4%; the rate of upper severe disability (independent at home), 15.4% versus 8.0%; the rate of moderate disability, 23.4% versus 19.7%; and the rate of good recovery, 4.0% versus 6.9%. At 12 months after randomization, 30.4% of the patients in the surgical group had died, as compared with 52.0% in the medical group. The rate of vegetative state was 6.2% in the surgical group versus 1.7% in the medical group; the rate of lower severe disability, 18.0% versus 14.0%; the rate of upper severe disability, 13.4% versus 3.9%; the rate of moderate disability, 22.2% versus 20.1%; and the rate of good recovery, 9.8% versus 8.4%.



A 31-Year-Old Woman with Infertility

Posted by • September 15th, 2016


Tuberculous endometrial granulomas take time to become caseated; women of reproductive age, who regularly shed their endometrial lining, may shed the lining before caseation has had the opportunity to develop. In older women, who have longer cycles or do not have cycles, caseating granulomas in the endometrium are more likely to develop. A 31-year-old Nepalese woman presented with primary infertility. Two cycles of in vitro fertilization had been unsuccessful. A hysterosalpingogram showed abnormal narrowing and outpouching of the distal fallopian tubes. Additional diagnostic procedures were performed in a new Case Record.

Clinical Pearl

• When granulomas are found in the endometrium, what is the most likely diagnosis?

When granulomas are found in the endometrium, tuberculosis (most commonly due to Mycobacterium tuberculosis) must be considered to be the most likely cause.


Clinical Pearl

• The endometrium is affected in what percentage of patients with genital tuberculosis?

The endometrium is affected in 50 to 75% of patients with genital tuberculosis. The infection is thought to spread through the blood, or possibly through the lymphatics, from the site of primary infection to the fallopian tubes, and from there it seeds the endometrium through direct drainage.

Morning Report Questions

Q: What changes may be seen on hysterosalpingography in women with genital tuberculosis? 

A: Salpingitis isthmica nodosa–like changes and tubal occlusion have been found on hysterosalpingography in women with genital tuberculosis. The involvement of the genital tract can be protean. Loss of tubal epithelial architecture due to infection (which results in a “pipe stem” appearance) and the presence of intraluminal filling defects possibly due to granulomas (which result in a “leopard skin” pattern) are suggestive of genital tuberculosis.


Q: How is genital tuberculosis diagnosed and treated?

A: Genitourinary tuberculosis is most often paucibacillary, and culture and other diagnostic tests (including nucleic acid testing) provide higher sensitivity than do special stains for acid-fast organisms. Short-course chemotherapy for a duration of at least 6 to 9 months has been associated with a risk of recurrent disease of less than 10%. Despite the administration of appropriate treatment, overall rates of live birth are still lower among women who have had genitourinary tuberculosis than in the general population.