Postherpetic Neuralgia

Posted by Sara Fazio • October 17th, 2014

Postherpetic neuralgia is more common with older age. Recommended treatments include topical agents (lidocaine or capsaicin) and systemic agents (in particular, gabapentin, pregabalin, or tricyclic antidepressants), but their efficacy tends to be suboptimal. The latest Clinical Practice article covers this topic and comes from the University of Bristol’s Dr. Robert Johnson and Imperial College London’s Dr. Andrew Rice.

Postherpetic neuralgia is the most frequent chronic complication of herpes zoster and the most common neuropathic pain resulting from infection.

Clinical Pearls

What are the epidemiology of and risk factors for postherpetic neuralgia?

Postherpetic neuralgia is conventionally defined as dermatomal pain persisting at least 90 days after the appearance of the acute herpes zoster rash. The incidence and prevalence of postherpetic neuralgia vary depending on the definition used, but approximately a fifth of patients with herpes zoster report some pain at 3 months after the onset of symptoms, and 15% report pain at 2 years. Analysis of data from the United Kingdom General Practice Research Database showed that the incidence of postherpetic neuralgia (as defined by pain at 3 months) rose from 8% at 50 to 54 years of age to 21% at 80 to 84 years of age. Risk factors for postherpetic neuralgia include older age and greater severity of the prodrome, rash, and pain during the acute phase. The incidence is also increased among persons with chronic diseases such as respiratory disease and diabetes, and it may be increased among immunocompromised patients, although the evidence is sparse and inconsistent.

What is the typical clinical presentation and appropriate evaluation of a patient with postherpetic neuralgia?

Although a history of herpes zoster often cannot be confirmed with absolute certainty, the disorder has a characteristic clinical presentation, and thus postherpetic neuralgia rarely presents a diagnostic challenge. Clinical assessment of the patient with postherpetic neuralgia should follow the general principles of assessment of patients with peripheral neuropathic pain. Features of pain and associated sensory perturbations (e.g., numbness, itching, and paresthesias) should be assessed. Pain associated with postherpetic neuralgia occurs in three broad categories: spontaneous pain that is ongoing (e.g., continuous burning pain), paroxysmal shooting or electric shock-like pains, and evoked sensations that are pathologic amplifications of responses to light touch and other innocuous stimuli (mechanical allodynia) or to noxious stimuli (mechanical hyperalgesia). The physical examination should include a comparison of sensory function in the affected dermatome with that on the contralateral side. Loss of sensory function in response to both mechanical and thermal stimuli is common in patients with postherpetic neuralgia, as are pathologic sensory amplifications (e.g., allodynia and hyperalgesia). In most cases, no additional evaluation is needed beyond the history taking (with concomitant disease and medications noted) and physical examination.

Morning Report Questions

Q: What is the appropriate treatment for postherpetic neuralgia?

A: Topical therapy alone is reasonable to consider as first-line treatment for mild pain. It is sometimes used in combination with systemic drugs when pain is moderate or severe, although data are lacking from randomized trials comparing combination topical and systemic therapy with either therapy alone. Patches containing 5% lidocaine are approved for the treatment of postherpetic neuralgia in Europe and the United States. However, evidence in support of their efficacy is limited. There is evidence to support the use of tricyclic antidepressants (off-label use) and the antiepileptic drugs gabapentin and pregabalin (Food and Drug Administration-approved) for the treatment of postherpetic neuralgia. Opioids, including tramadol, should generally be considered as third-line drugs for postherpetic neuralgia after consultation with a specialist and should be prescribed only with appropriate goals and close monitoring.

Q: What is the evidence for the effectiveness of preventive therapy for postherpetic neuralgia?

A: Placebo-controlled trials of antiviral drugs for acute herpes zoster have shown that they reduce the severity of acute pain and rash, hasten rash resolution, and reduce the duration of pain. These trials were not designed to assess the subsequent incidence of postherpetic neuralgia. Two randomized trials have shown that the addition of systemic glucocorticoids to antiviral drugs during the acute phase of herpes zoster does not reduce the incidence of postherpetic neuralgia. In one placebo-controlled trial, low-dose amitriptyline, started soon after the diagnosis of herpes zoster and continued for 90 days, significantly reduced the incidence of pain at 6 months. Further studies are required to confirm this finding. The only well-documented means of preventing postherpetic neuralgia is the prevention of herpes zoster. A live attenuated VZV [varicella-zoster virus] vaccine has been available since 2006; it was initially licensed for immunocompetent persons 60 years of age or older but now is approved for persons 50 years of age or older. In a randomized trial in the older age group, its use reduced the incidence of herpes zoster by 51% and the incidence of postherpetic neuralgia by 66%. In patients 70 years of age or older as compared with those 60 to 69 years of age, the vaccine was less effective in reducing the risk of herpes zoster (38% reduction) but conferred similar protection against postherpetic neuralgia (67% reduction).

Chronic Sore Throat and a Tonsillar Mass

Posted by Sara Fazio • October 17th, 2014

In the latest Case Record of the Massachusetts General Hospital, a 78-year-old woman with rheumatoid arthritis was admitted to the Massachusetts Eye and Ear Infirmary because of a chronic sore throat, odynophagia, and a tonsillar mass. A diagnostic procedure was performed.

Histoplasma capsulatum is commonly found in soil in certain regions of the United States and South America, and 50 to 80% of people living in regions where the fungus is endemic have evidence of prior exposure.

Clinical Pearls

What is the differential diagnosis of acute vesicular pharyngitis?

Acute vesicular pharyngitis may be caused by coxsackievirus (and occasionally other enteroviruses), herpes simplex virus (HSV), and varicella-zoster virus (VZV). Coxsackievirus causes a bilateral pharyngitis that primarily affects young children and resolves in a few days. Primary HSV-associated stomatitis may be severe in immunocompromised patients and is bilateral. Recurrent HSV may produce atypical lesions in immunocompromised patients and may involve the pharynx and larynx; the lesions are usually bilateral. Patients with rheumatoid arthritis are at increased risk for herpes zoster, and patients with a zoster rash affecting the second or third divisions of cranial nerve V may have an accompanying ipsilateral pharyngitis or laryngitis, although this is rare. VZV pharyngitis or laryngitis may also develop without a concurrent facial zoster rash, but in these cases, one or more cranial neuropathies are almost always present.

Who is at greatest risk for histoplasmosis and disseminated histoplasmosis?

In the United States, histoplasmosis infections occur mainly in the regions of the Mississippi River Valley and Ohio River Valley, although a study of the geographic distribution of infection among older adults showed that 12% of cases occur in areas where the fungus is not endemic, including New England. Histoplasmosis is initially asymptomatic or produces a mild acute respiratory illness that resolves. Disseminated disease develops in 0.05% of patients, most of whom are immunocompromised. Dissemination occurs either soon after the primary infection or reinfection or after reactivation of previously unrecognized latent disease.

Morning Report Questions

Q: How does anti-tumor necrosis factor-alpha (TNF-alpha) therapy alter the risk of histoplasmosis?

A: Anti-TNF-alpha therapy increases the risk of histoplasmosis, and histoplasmosis is the most common invasive fungal infection in patients receiving these medications. In patients who are receiving TNF-alpha inhibitors, histoplasmosis is three times more common than tuberculosis, according to one report, and is associated with a mortality of 20%, with deaths often due to a delay in diagnosis and treatment. The risk of disease is higher among patients who are receiving infliximab than among those who are receiving etanercept.

Q: What are the features of oropharyngeal histoplasmosis?

A: Disseminated histoplasmosis may involve the throat, with the most common sites of involvement being the buccal mucosa, tongue, and palate; the larynx may also be involved. The lesions are often painful, ulcerated, and indurated, with heaped-up borders, and may mimic cancer. Oral and laryngeal lesions may be present simultaneously. Oropharyngeal or laryngeal lesions may be the only signs of disseminated disease, and fever occurs in only one third of patients with such disease.

Malpractice Reform and Emergency Department Care

Posted by Chana Sacks • October 15th, 2014

A 67-year-old woman presents to your Emergency Department (ED) with a headache for the last 48 hours. She describes herself as a “headachy” person since her late teens, but this headache is particularly bad: throbbing, and associated with nausea and photophobia. She is afebrile without neck stiffness. Your thorough neurologic exam reveals no focal deficits.

You form your differential diagnosis and debate your next step: do you send the patient home with a diagnosis of a migraine, a prescription, and a plan for close follow-up with her primary care physician? Do you pursue imaging with a CT scan or an MRI to rule out a more insidious cause of her symptoms?

You run the case – and your nagging uncertainty – by one of your colleagues.  “Just get the scan,” she advises.  She then tells you that one time a doctor she knows didn’t get an MRI on a patient with a headache who turned out to have a brain tumor.  He is still embroiled in that lawsuit, she says, her voice trailing off as she walks away.

You have heard many physicians give voice to this line of thinking: yes, we may be ordering some unnecessary tests, but we practice medicine in an exceptionally litigious U.S. society. We have to protect ourselves.

In a Special Article published in the NEJM this week, Waxman and colleagues examine whether this fear of malpractice lawsuits truly motivates physicians’ practices, resulting in extra tests and added costs. Perhaps surprisingly, they conclude that it does not.

The authors used the real-life experiment provided by Georgia, Texas, and South Carolina to investigate this question. Between 2003 and 2005, these states each passed legislation changing the malpractice standard for emergency care to gross negligence, which the authors note is “widely considered to be a very high bar for plaintiffs.”

The investigators assessed the effect of this legislation that largely eliminated the threat of lawsuits by examining three outcomes: use of CT or MRI, admission to the hospital, and total ED charges per visit. In prior survey studies, emergency physicians had identified ordering CT/MRI imaging and deciding to admit a patient to the hospital as common, costly “defensive maneuvers” often motivated by fear of malpractice lawsuits.  Using a random sample of Medicare claims from 1997-2011, they examined the outcomes in the three states that passed the malpractice reforms – before and after the change in legislation – and in ten control states.

Their findings: in none of the three states was malpractice reform associated with a reduction in CT/MRI ordering or in the rates of hospital admission. In Texas and South Carolina, there was also no reduction in per-visit ED charges. In Georgia, malpractice reform was associated with a 3.6% reduction in charges (95% CI, −6.2% to −0.9%; P=0.010). The authors conclude, “these strongly protective laws caused little (if any) change in practice intensity among physicians caring for Medicare patients in emergency departments.”

NEJM Deputy Editor Mary Beth Hamel commented, “The authors’ rigorous analyses suggest that legislation designed to reduce the risk of malpractice did not change emergency room clinicians’ decisions to order tests and admit patients to the hospital. It is not clear if the negative findings reflect the misperception that ‘defensive medicine’ is a substantial driver of health care costs or the intractability of the problem.”

To your patient and her headache – do you order an imaging test? Maybe. This study offers no guidance about when and whether ordering an MRI or choosing to admit a patient to the hospital is medically appropriate. However, if the authors’ conclusions are correct, that feeling driving you to pursue the additional test is probably not your fear of a lawsuit.

What is that force, then, that pushes physicians to order expensive tests? Perhaps it’s the uncomfortable uncertainty inherent in medicine. Maybe insecurity about the sensitivity of the physical exam. Or the fear of missing a rare or life-threatening diagnosis. Almost certainly, it is an amalgam of factors.  However, this study suggests that it is a force that malpractice reform – at least as enacted in these states – is unlikely to ameliorate.

Acid-Base Disturbances

Posted by Sara Fazio • October 10th, 2014

Acid–base homeostasis is fundamental for maintaining life. The first article in the new Disorders of Fluids and Electrolytes series reviews a stepwise method for the physiological approach to evaluation of acid–base status.

Internal acid-base homeostasis is fundamental for maintaining life. Accurate and timely interpretation of an acid-base disorder can be lifesaving, but establishment of a correct diagnosis may be challenging.

Clinical Pearls

What are some uses and limitations of the anion gap?

Lactic acidosis accounts for about half of the high anion gap cases and is often due to shock or tissue hypoxia. The anion gap, however, is a relatively insensitive reflection of lactic acidosis — roughly half the patients with serum lactate levels between 3.0 and 5.0 mmol per liter have an anion gap within the reference range. With a sensitivity and specificity below 80% in identifying elevated lactate levels, the anion gap cannot replace a serum lactate measurement. Nevertheless, lactate levels are not routinely drawn or always rapidly available, and a high anion gap can alert the physician that further evaluation is necessary. In addition, the anion gap should always be adjusted for the albumin concentration, because this weak acid may account for up to 75% of the anion gap. Without correction for hypoalbuminemia, the anion gap can fail to detect the presence of a clinically significant increase in anions (>5 mmol per liter) in more than 50% of cases. For every 1 g per deciliter decrement in serum albumin concentration, the calculated anion gap should be raised by approximately 2.3 to 2.5 mmol per liter.
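
To make the albumin adjustment concrete, here is a minimal sketch in Python. The conventional gap formula ([Na+] − [Cl−] − [HCO3−]) and the normal albumin reference of 4.0 g per deciliter are not spelled out in the article, so treat them as illustrative assumptions; the 2.3-to-2.5 mmol per liter correction factor is the one quoted above, and the patient values are invented.

```python
def anion_gap(na: float, cl: float, hco3: float) -> float:
    """Conventional anion gap (mmol/L): [Na+] - ([Cl-] + [HCO3-]).
    This standard formula is assumed; the article does not state it."""
    return na - (cl + hco3)


def albumin_corrected_anion_gap(ag: float, albumin_g_per_dl: float,
                                normal_albumin: float = 4.0,
                                factor: float = 2.5) -> float:
    """Raise the calculated gap by ~2.3-2.5 mmol/L for every 1 g/dL
    decrement in serum albumin. The 4.0 g/dL reference is an assumption."""
    return ag + factor * max(0.0, normal_albumin - albumin_g_per_dl)


# Hypothetical values: Na 138, Cl 105, HCO3 21, albumin 2.0 g/dL
ag = anion_gap(138, 105, 21)                      # 12 mmol/L: apparently normal
corrected = albumin_corrected_anion_gap(ag, 2.0)  # 17 mmol/L: an elevated gap
```

In this hypothetical hypoalbuminemic patient, the uncorrected gap looks unremarkable while the corrected gap reveals a clinically significant increase in anions.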

Table 1. Primary Acid-Base Disturbances with a Secondary (“Compensatory”) Response.

What are the characteristics of a normal anion-gap (hyperchloremic) acidosis?

Chloride plays a central role in intracellular and extracellular acid-base regulation. A normal anion-gap acidosis will be found when the decrease in bicarbonate ions corresponds with an increase in chloride ions to retain electroneutrality, also called a hyperchloremic metabolic acidosis. This type of acidosis occurs from gastrointestinal loss of bicarbonate (e.g., because of diarrhea or ureteral diversion), from renal loss of bicarbonate that may occur in defective urinary acidification by the renal tubules (renal tubular acidosis), or in early renal failure when acid excretion is impaired. Hospital-acquired hyperchloremic acidosis is usually caused by the infusion of large volumes of normal saline (0.9%). Hyperchloremic acidosis should lead to increased renal excretion of ammonium, and measurement of urinary ammonium can therefore be used to differentiate between renal and extrarenal causes of normal anion-gap acidosis. However, since urinary ammonium is seldom measured, the urinary anion gap and urinary osmolal gap are often used as surrogate measures of excretion of urinary ammonium. The urine anion gap ([Na+] + [K+] − [Cl−]) is usually negative in normal anion-gap acidosis, but it will become positive when excretion of urinary ammonium (NH4+) (as ammonium chloride [NH4Cl]) is impaired, as in renal failure, distal renal tubular acidosis, or hypoaldosteronism.
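
As a rough sketch of how this surrogate is used, the following Python fragment computes the urine anion gap from spot urine electrolytes and applies the interpretation described above; the electrolyte values are invented for illustration.

```python
def urine_anion_gap(u_na: float, u_k: float, u_cl: float) -> float:
    """Urine anion gap (mmol/L): ([Na+] + [K+]) - [Cl-]."""
    return (u_na + u_k) - u_cl


# Extrarenal cause (e.g., diarrhea): abundant urinary NH4Cl raises urine Cl-,
# so the gap is negative (hypothetical values).
print(urine_anion_gap(u_na=20, u_k=30, u_cl=70))   # -20

# Renal cause (e.g., distal renal tubular acidosis): NH4+ excretion is
# impaired, urine Cl- is relatively low, and the gap turns positive.
print(urine_anion_gap(u_na=40, u_k=30, u_cl=50))   # +20
```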

Morning Report Questions

Q: What is a useful approach to the analysis and treatment of a metabolic alkalosis?

A: The normal kidney is highly efficient at excreting large amounts of bicarbonate, and accordingly, the generation of metabolic alkalosis requires both an increase in alkali and impairment in renal excretion of bicarbonate. Gastric fluid loss and diuretic use account for the majority of cases of metabolic alkalosis. By measuring chloride in urine, one can distinguish between chloride-responsive and chloride-resistant metabolic alkalosis. If the kidneys perceive a reduced “effective circulating volume,” they avidly reabsorb filtered sodium, bicarbonate, and chloride, largely through activation of the renin-angiotensin-aldosterone system, thus reducing the concentration of urinary chloride. A (spot sample) urinary chloride concentration of less than 25 mmol per liter is reflective of chloride-responsive metabolic alkalosis. Administration of fluids with sodium chloride (usually with potassium chloride) restores effective arterial volume, replenishes potassium ions, or both, with correction of the metabolic alkalosis. Metabolic alkalosis with a urinary chloride concentration of more than 40 mmol per liter is mainly caused by inappropriate renal excretion of sodium chloride, often reflecting mineralocorticoid excess or severe hypokalemia (potassium concentration <2 mmol per liter). The administration of sodium chloride does not correct this type of metabolic alkalosis, which, for that reason, is called “chloride-resistant.” Diuretic-induced metabolic alkalosis is an exception because the concentration of chloride in urine may increase initially, until the diuretic effect wanes, after which the concentration of chloride in the urine will fall below 25 mmol per liter.
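
The spot urinary chloride thresholds above translate into a simple triage rule. This sketch encodes them directly, with the caveat (also noted above) that recent diuretic use can put a chloride-responsive alkalosis transiently into the high-chloride range; the wording of the return values is mine, not the article's.

```python
def classify_metabolic_alkalosis(urine_cl_mmol_per_l: float) -> str:
    """Classify metabolic alkalosis by spot urinary chloride,
    using the <25 and >40 mmol/L cutoffs quoted in the text."""
    if urine_cl_mmol_per_l < 25:
        return "chloride-responsive: expect correction with NaCl (+/- KCl)"
    if urine_cl_mmol_per_l > 40:
        return ("chloride-resistant: consider mineralocorticoid excess or "
                "severe hypokalemia; NaCl will not correct it")
    return "indeterminate (25-40 mmol/L): reassess, e.g., for a waning diuretic effect"
```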

Figure 2. Assessment of Alkalosis.

Q: How is the “delta anion gap” helpful in the evaluation of mixed metabolic acid-base disorders?

A: In high anion-gap metabolic acidosis, the magnitude of the increase in the anion gap (delta AG) is related to the decrease in bicarbonate ions (delta[HCO3−]). To diagnose a high anion-gap acidosis with concomitant metabolic alkalosis or normal anion-gap acidosis, the so-called delta-delta may be used. The delta gap is the comparison between the increase (delta) in the anion gap above its upper reference value (e.g., 12 mmol per liter) and the decrease (delta) in the concentration of bicarbonate ions below its lower reference value (e.g., 24 mmol per liter). In ketoacidosis, there is a 1:1 correlation between the rise in the anion gap and the fall in the concentration of bicarbonate. In lactic acidosis, the decrease in the concentration of bicarbonate is 0.6 times the increase in the anion gap (e.g., if the anion gap rises by 10 mmol per liter, the concentration of bicarbonate should decrease by about 6.0 mmol per liter). This difference is probably due to the lower renal clearance of lactate than of keto-anions. Hydrogen buffering in cells and bone takes time to reach completion; accordingly, the ratio may be close to 1:1 with “very acute” lactic acidosis (as with seizures or exercise to exhaustion). If delta AG − delta[HCO3−] (in ketoacidosis) or 0.6 × delta AG − delta[HCO3−] (in lactic acidosis) equals 0±5 mmol per liter, a simple anion-gap metabolic acidosis is present. A difference greater than 5 mmol per liter suggests a concomitant metabolic alkalosis, and a difference less than −5 mmol per liter indicates a concomitant normal anion-gap metabolic acidosis.
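
Because this arithmetic is easy to get backwards at the bedside, here is the delta-delta rule restated as a small Python function, using the reference values (12 and 24 mmol per liter) and the 0.6 scaling for lactic acidosis given above; the example values are hypothetical.

```python
def delta_delta(ag: float, hco3: float, lactic: bool = False,
                ag_ref: float = 12.0, hco3_ref: float = 24.0) -> str:
    """Delta-gap assessment: compare the AG rise above its upper reference
    value with the HCO3- fall below its lower reference value. In lactic
    acidosis the expected HCO3- fall is only ~0.6 x the AG rise."""
    delta_ag = ag - ag_ref
    delta_hco3 = hco3_ref - hco3
    diff = (0.6 if lactic else 1.0) * delta_ag - delta_hco3
    if diff > 5:
        return "concomitant metabolic alkalosis"
    if diff < -5:
        return "concomitant normal anion-gap metabolic acidosis"
    return "simple anion-gap metabolic acidosis"


# Hypothetical ketoacidosis: AG 24, HCO3 6 -> diff = 12 - 18 = -6,
# suggesting an additional normal anion-gap acidosis (e.g., from large-volume
# normal saline, as described above).
print(delta_delta(ag=24, hco3=6))
```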

Blood-Pressure Lowering and Glucose Control in Type 2 Diabetes

Posted by Sara Fazio • October 10th, 2014

In a follow-up study of patients with type 2 diabetes, mortality benefits in those originally assigned to antihypertensive therapy were evident at the end of follow-up, but in-trial glucose differences did not result in long-term benefits in mortality or macrovascular events.

Post-trial follow-up studies of patients with diabetes have previously reported long-term beneficial effects of earlier periods of intensive glucose control, but not blood-pressure lowering, on a range of outcomes, including mortality and macrovascular events.

Clinical Pearls

What were the primary results of the Action in Diabetes and Vascular Disease: Preterax and Diamicron Modified Release Controlled Evaluation (ADVANCE) trial?

The Action in Diabetes and Vascular Disease: Preterax and Diamicron Modified Release Controlled Evaluation (ADVANCE) trial assessed the effects of routine blood-pressure lowering and intensive glucose control in a broad cross section of patients with type 2 diabetes. Routine administration of a single-pill (fixed-dose) combination of perindopril and indapamide was associated with a reduction in the risk of the primary composite end point of major macrovascular or microvascular events. Reductions in the risks of death from any cause, cardiovascular death, and nephropathy were also observed. Intensive glucose control was associated with a reduction in the risk of the primary composite end point of major macrovascular or microvascular events, owing primarily to a reduction in the incidence of new or worsening nephropathy. This benefit for nephropathy included a reduction of end-stage renal disease but not of renal death. No clear protective or harmful effects of intensive glucose control on death from any cause or major macrovascular events were identified.

What were the results of this post-trial follow-up of the ADVANCE Observational Study in the group assigned to a 4.5-year period of blood-pressure-lowering treatment with perindopril-indapamide?

Consistent with the in-trial finding of a significant risk reduction of 14% for death from any cause for those assigned to perindopril-indapamide therapy (hazard ratio, 0.86; 95% CI, 0.75 to 0.98; P=0.03), there was a significant but attenuated cumulative benefit for death from any cause to the end of overall follow-up (hazard ratio, 0.91; 95% CI, 0.84 to 0.99; P=0.03). There was no cumulative benefit of perindopril-indapamide for major macrovascular events, and the hazard ratios for this composite outcome were similar at the end of the in-trial period and at the end of the overall follow-up period, though not significant at either time. The in-trial reduction in the risk of cardiovascular death recorded for those assigned to perindopril-indapamide (hazard ratio, 0.82; 95% CI, 0.68 to 0.98; P=0.03) was attenuated but remained significant at the end of the overall follow-up period. There were no cumulative benefits for any other secondary outcome, including major clinical microvascular events.

Table 2. Primary and Secondary Outcomes during the Randomized Trial and Overall in the Blood-Pressure-Lowering Cohort and the Glucose-Control Cohort.

Figure 1. Cumulative Incidence of Events, According to Blood-Pressure-Lowering Study Group.

Figure 2. Hazard Ratios for Events, According to Blood-Pressure-Lowering Study Group.

Morning Report Questions

Q: What were the results of post-trial follow-up in the group who had received intensive glucose control?

A: Consistent with in-trial findings, there were no cumulative benefits of intensive glucose control for either death from any cause or major macrovascular events. There were no cumulative benefits for major clinical microvascular events or severe diabetes-related eye disease. There was a significant cumulative benefit with respect to end-stage renal disease (hazard ratio, 0.54; 95% CI, 0.34 to 0.85; P=0.007), although relatively few events were recorded. There was no cumulative benefit for renal death. There were no cumulative benefits for any other secondary outcome, including cardiovascular death, myocardial infarction, and stroke.

Table 2. Primary and Secondary Outcomes during the Randomized Trial and Overall in the Blood-Pressure-Lowering Cohort and the Glucose-Control Cohort.

Figure 3. Cumulative Incidence of Events, According to Glucose-Control Study Group.

Figure 4. Hazard Ratios for Events, According to Glucose-Control Study Group.

Q: What do the authors postulate as the reason for the divergent outcomes of this study compared to other glucose control studies in diabetes?

A: The Epidemiology of Diabetes Interventions and Complications (EDIC) study, an extension of the Diabetes Control and Complications Trial (DCCT), and the United Kingdom Prospective Diabetes Study (UKPDS) post-trial follow-up reported the emergence of long-term beneficial effects of earlier periods of intensive glucose control on macrovascular events and death. This study did not observe any such long-term benefits after post-trial follow-up. In explaining the divergent outcomes, the authors note that, first, the younger patients with type 1 diabetes (in the DCCT-EDIC) or with newly diagnosed type 2 diabetes (in the UKPDS) may have been more likely to achieve long-term benefits from glucose lowering than the older patients with established disease included in this study. Second, there were differences between the studies in the in-trial levels of blood glucose, as reflected in the levels of glycated hemoglobin: the between-group difference averaged 0.67% over a period of 5 years in the ADVANCE trial but was much larger in the DCCT (2.0% over a mean of 6.5 years during the trial) and slightly larger in the UKPDS (0.9% over a median of 10 years during the trial). Third, post-trial follow-up in this study (5 years) was shorter than that in the DCCT-EDIC and the UKPDS (both >10 years) and may have been insufficient for benefits to emerge. Fourth, it is possible that more widespread use of effective background preventive therapy in the ADVANCE trial masked the long-term effects. Finally, competing risk, which is a greater issue for older patients than for younger patients, may not have allowed the full effects of the glucose intervention to be observed in this study.

No Effect of Transfusion Threshold on Sepsis Survival

Posted by Rupa Kanapathipillai • October 8th, 2014

Yet who would have thought the old man to have had so much blood in him? (Macbeth Act V, Scene 1)

Our fascination with blood dates back to the dawn of time. Poets, writers and philosophers have waxed lyrical about it. Blood as a life source, elixir and contagion – it is an entity that has been described as everything from sacred to poison. Before the 20th century, removing blood (blood-letting) was thought to be therapeutic. With the advent of blood typing, transfusion became possible and widely applied. But when you transfuse a critically ill patient, how much blood is enough?

For centuries, the role of blood transfusions in the management of unwell patients has been debated. The recommendations for blood transfusion in critical care remain complex. Both the TRICC and FOCUS trials, previously published in the NEJM, support the use of a more restrictive transfusion threshold in a broad population of ICU patients and in high-risk patients after hip surgery, respectively. In this week’s NEJM, Holst et al. provide further insight into hemoglobin thresholds for transfusion in patients with septic shock.

The trial was a multicenter, parallel-group study conducted between December 2011 and December 2013 across 32 intensive care units in Denmark, Sweden, Norway and Finland. 1224 patients were screened who were 18 years of age or older, met criteria for septic shock, and had hemoglobin concentrations of 9 g per deciliter or below. 1000 patients were randomized to one of two blood-transfusion thresholds: 503 patients to 9 g per deciliter and 497 to 7 g per deciliter. Randomization was stratified by site and the presence or absence of active hematological malignancy.

Patients received single units of cross-matched, pre-storage leuko-reduced red blood cells when blood concentrations of hemoglobin decreased below the 9 g or 7 g per deciliter transfusion threshold. Hemoglobin concentrations were reassessed within 3 hours after the end of each transfusion or before another transfusion was begun. Patients received the intervention for the entire ICU stay, to a maximum of 90 days after randomization. During and after discharge from the ICU, the decision to transfuse at the allocated threshold was left to the discretion of the attending doctor.

The primary endpoint was mortality 90 days after randomization; secondary outcomes included the use of life support (vasopressor or inotropic therapy, mechanical ventilation, or renal replacement therapy) at days 5, 14, and 28 after randomization; the number of patients with serious adverse reactions or ischemic events; the percentage of days alive without vasopressor support, mechanical ventilation, or renal replacement therapy; and the percentage of days alive and out of hospital in the 90 days after randomization.

During the study period, 1545 blood transfusions were given in the lower Hgb-threshold group and 3088 transfusions were given in the higher Hgb-threshold group (p<0.0001). The median cumulative number of blood transfusions after randomization was 1 unit (interquartile range, 0 to 3) in the lower Hgb-threshold group and 4 units (interquartile range, 2 to 7) in the higher Hgb-threshold group (p<0.0001).

No significant difference in 90-day mortality was found between the two transfusion thresholds: 216 patients (43.0%) in the lower Hgb-threshold group and 223 patients (45.0%) in the higher Hgb-threshold group died (relative risk, 0.94; 95% CI, 0.78 to 1.09; P=0.44). Results were similar after adjustment for baseline risk factors. No heterogeneity in the effect of transfusion threshold on 90-day mortality was seen in patients with chronic cardiovascular disease, patients 70 years of age or older, or those with a Simplified Acute Physiology Score II above 53 at baseline. The numbers of ischemic events in the ICU were similar in the two groups (7% in the lower Hgb-threshold group and 8% in the higher Hgb-threshold group). The use of life support at days 5, 14, and 28 was also similar between the two intervention groups, as were the percentages of days alive without vasopressor or inotropic therapy, mechanical ventilation, or renal replacement therapy and the percentage of days alive and out of hospital.

“It is noteworthy that outcomes were not adversely affected by using a 7 gm/dL transfusion trigger and about half the blood was required than was needed with the 9 gm/dL threshold,” Deputy Editor Dan Longo commented.

Importantly, patients with acute myocardial infarction were excluded from both this study and the FOCUS trial; the safety of lower transfusion thresholds in these patients remains unclear.

How will the results of this study change your clinical practice?  What transfusion thresholds are currently utilized in your ICU?  What is the quality of the evidence supporting these recommendations?

View the New Quick Take on Teen Contraception

Posted by Karen Buckley • October 6th, 2014

Your 17-year-old patient tells you she’s terrified of becoming pregnant. How can you help her?

Each year more than 700,000 teens in the United States become pregnant. If we could eliminate some of the barriers to teens having access to contraception, would the pregnancy rates fall? The latest NEJM Quick Take, a brief animation narrated by Editor-in-Chief Jeffrey Drazen, summarizes new findings from a prospective cohort study in which teens were provided free contraception and educated about reversible methods. These teens had low rates of pregnancy, birth, and abortion, as compared with national rates for sexually experienced teens.

View all previous NEJM Quick Takes, including Benefits and Risks of Salt Consumption, Distracted Driving and Risks of Road Crashes, and Nut Consumption and Mortality.

Against the Grain

Posted by Sara Fazio • October 3rd, 2014

In the latest Clinical Problem-Solving article, a 42-year-old man with a history of coronary artery disease presented to the emergency department with left-upper-quadrant abdominal pain that radiated to his back and along the subcostal margin. He also reported substernal chest pressure similar to his usual angina.

Elevated lipase and amylase levels can be indicative of pancreatic inflammation, but elevations are also seen with other intraabdominal conditions, such as cholecystitis, bowel obstruction, or celiac disease. Certain drugs, such as opiates and cholinergic agents, can also spuriously raise lipase and amylase levels.

Clinical Pearls

What is the epidemiology and typical presentation of celiac disease?

Celiac disease is an autoimmune disorder that affects the small bowel and that is triggered by ingested gluten from barley, rye, and wheat. The disease has both intestinal and extraintestinal clinical manifestations. The intestinal symptoms occur in 40 to 50% of adults, a prevalence that is less than that in children, and include abdominal pain, diarrhea, and other nonspecific abdominal symptoms; a mild elevation in aminotransferase levels is reported in 15 to 20% of patients with celiac disease. Celiac disease is associated with an increase by a factor of three in the risk of pancreatitis.

What are the extraintestinal manifestations of celiac disease?

Celiac disease is also manifested outside the gastrointestinal tract. Rashes (e.g., dermatitis herpetiformis), arthralgias, neurologic and psychiatric symptoms, fatigue, and infertility can be presenting manifestations. Patients can also present with sequelae of malabsorption, including weight loss, iron-deficiency anemia, and osteoporosis or osteomalacia due to calcium and vitamin D malabsorption. Celiac disease can be associated with other autoimmune conditions, such as type 1 diabetes, autoimmune thyroiditis, and hepatitis. Some retrospective studies, but not others, have shown an increased risk of incident ischemic heart disease, a finding that has been postulated to be associated with chronic inflammation.

Morning Report Questions

Q: What is the epidemiology and prevalence of celiac disease?

A: The prevalence of celiac disease in screening studies is 0.5 to 1%; the disease is seen in all populations for which gluten is part of the diet, although the prevalence varies depending on the population studied. The HLA class II genes HLA-DQ2 or, much less commonly, HLA-DQ8 are expressed in the majority of patients with celiac disease. Although men and women have a similar prevalence of celiac disease in population-based screening studies, the disease is diagnosed more frequently in women than in men.

Q: How is the diagnosis of celiac disease made?

A: The diagnosis of celiac disease is usually made on the basis of serologic screening, followed by a confirmatory small-bowel biopsy. The serologic test of choice is the IgA anti-tissue transglutaminase antibody assay, which is highly standardized, specific (94%), and sensitive (97%). Measurement of IgG anti-tissue transglutaminase antibodies or deamidated gliadin peptide IgG antibodies can be performed in persons who are IgA-deficient. IgA antiendomysial antibodies are highly specific, but testing is expensive and operator-dependent. Measurement of antigliadin antibodies is no longer recommended for diagnosis owing to low diagnostic accuracy. Positive serologic testing in adults should be followed by a small-bowel biopsy to assess the severity of the small-bowel involvement and to ensure that the serologic test results are not falsely positive. Findings on biopsy range from near-normal villous architecture with prominent intraepithelial lymphocytosis to complete villous atrophy.
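
The recommendation to confirm positive serologic results with biopsy follows directly from the arithmetic of testing at low prevalence. As a back-of-the-envelope check (my calculation, not the article's), combining the quoted 97% sensitivity and 94% specificity with the 0.5 to 1% screening prevalence mentioned above gives a low positive predictive value:

```python
def positive_predictive_value(sens: float, spec: float, prevalence: float) -> float:
    """PPV = true positives / all positives, for a given pretest prevalence."""
    true_pos = sens * prevalence
    false_pos = (1.0 - spec) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)


# At 1% prevalence, only ~14% of positive anti-tissue transglutaminase
# results would be true positives -- one reason a confirmatory small-bowel
# biopsy is advised after a positive serologic test.
print(round(positive_predictive_value(0.97, 0.94, 0.01), 2))  # 0.14
```

In practice the pretest probability is higher when testing is prompted by symptoms, but the calculation illustrates why a positive screen alone is not diagnostic.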

Microcytic Anemia

Posted by Sara Fazio • October 3rd, 2014

A new review discusses diagnosis and treatment of thalassemia, anemia of inflammation, and iron-deficiency anemia, highlighting recent findings.  The article includes an interactive graphic that shows various types of red cells that are observed in microcytic anemias and other conditions.

The microcytic anemias are those characterized by the production of red cells that are smaller than normal. The small size of these cells is due to decreased production of hemoglobin, the predominant constituent of red cells. The causes of microcytic anemia are a lack of globin product (thalassemia), restricted iron delivery to the heme group of hemoglobin (anemia of inflammation), a lack of iron delivery to the heme group (iron-deficiency anemia), and defects in the synthesis of the heme group (sideroblastic anemias).

Clinical Pearls

What is the mechanism of microcytosis in inflammatory states?

Inflammatory states are often accompanied by microcytic anemia. The cause of this anemia is twofold. First, renal production of erythropoietin is suppressed by inflammatory cytokines, resulting in decreased red-cell production. Second, lack of iron availability for developing red cells can lead to microcytosis. The lack of iron is largely due to the protein hepcidin, an acute-phase reactant that leads to both reduced iron absorption and reduced release of iron from body stores. The protein ferroportin mediates cellular efflux of iron. Hepcidin binds to and down-regulates ferroportin, thereby blocking iron absorbed by enterocytes from entering the circulation and also preventing the release of iron from its body stores to developing red cells.

Figure 2. Mechanism of Anemia of Inflammation.

Which persons are at greatest risk for iron-deficiency anemia?

Owing to obligate iron loss through menses, women are at greater risk for iron deficiency than men. Iron loss in all women averages 1 to 3 mg per day, and dietary intake is often inadequate to maintain a positive iron balance. Pregnancy adds to demands for iron, with requirements increasing to 6 mg per day by the end of pregnancy. Athletes are another group at risk for iron deficiency. Gastrointestinal blood loss is one source of iron loss in athletes, and exercise-induced hemolysis leads to urinary iron losses. Decreased absorption of iron has also been implicated as a cause of iron deficiency, because levels of hepcidin are often elevated in athletes owing to training-induced inflammation. Obesity and its surgical treatment are also risk factors for iron deficiency. Obese patients are often iron-deficient, with increased hepcidin levels being implicated in decreased absorption. After bariatric surgery, the incidence of iron deficiency can be as high as 50%.

Morning Report Questions

Q: How is iron-deficiency anemia diagnosed?

A: For the diagnosis of iron deficiency, many tests have been proposed over the years, but the serum ferritin assay is currently the most efficient and cost-effective test, given the shortcomings of other tests. The mean corpuscular volume is low with severe iron deficiency, but coexisting conditions such as liver disease may blunt the decrease in red-cell size. An increased total iron-binding capacity is specific for iron deficiency, but because total iron-binding capacity is lowered by inflammation, aging, and poor nutrition, its sensitivity is low. Iron saturation is low with both iron-deficiency anemia and anemia of inflammation. Serum levels of soluble transferrin receptor will be elevated in patients with iron deficiency, and this is not affected by inflammation. However, levels can be increased in patients with any condition associated with an increased red-cell mass, such as hemolytic anemias, and in patients with chronic lymphocytic leukemia. Bone marrow iron staining is the most accurate means of diagnosing iron-deficiency anemia, but this is an invasive and expensive procedure. Even in the setting of chronic inflammation, it is rare for a patient with iron deficiency to have a ferritin level of more than 100 ng per milliliter.

Q: What is the appropriate treatment for iron-deficiency anemia?

A: Traditionally, ferrous sulfate (325 mg [65 mg of elemental iron] orally three times a day) has been prescribed for the treatment of iron deficiency. Several trials suggest that lower doses of iron, such as 15 to 20 mg of elemental iron daily, can be as effective as higher doses and have fewer side effects. The reason may be that enterocyte iron absorption appears to be saturable; one dose of iron can block absorption of further doses. Consuming the iron with meat protein can also increase iron absorption. Calcium and fiber can decrease iron absorption, but this can be overcome by taking vitamin C. A potent inhibitor of iron absorption is tea, which can reduce absorption by 90%. Coffee may also decrease iron absorption but not to the degree that tea does. With regard to dietary iron, the rate of absorption of iron from heme sources is 10 times as high as that of iron from nonheme sources. There are many oral iron preparations, but no one compound appears to be superior to another. A pragmatic approach to oral iron replacement is to start with a daily 325-mg pill of ferrous sulfate, taken with a meal that contains meat. Avoiding tea and coffee and taking vitamin C (500 units) with the iron pill once daily will also help absorption. If ferrous sulfate has unacceptable side effects, ferrous gluconate at a daily dose of 325 mg (35 mg of elemental iron) can be tried. The reticulocyte count should rise in 1 week, and the hemoglobin level starts rising by the second week of therapy. Iron therapy should be continued until iron stores are replete.
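
The dosing arithmetic in this answer is easy to tabulate. The sketch below derives daily elemental iron from the per-tablet figures quoted above (65 mg for ferrous sulfate, 35 mg for ferrous gluconate); the percentages in the comments are derived from those figures, and any other preparation would need its own looked-up value.

```python
# Elemental iron (mg) per 325-mg tablet, from the figures quoted above.
ELEMENTAL_IRON_MG_PER_TABLET = {
    "ferrous sulfate": 65,    # ~20% of the 325-mg salt
    "ferrous gluconate": 35,  # ~11% of the 325-mg salt
}


def daily_elemental_iron(preparation: str, tablets_per_day: int) -> int:
    """Total daily elemental iron (mg) for 325-mg tablets of a given salt."""
    return ELEMENTAL_IRON_MG_PER_TABLET[preparation] * tablets_per_day


# The traditional three-times-daily ferrous sulfate regimen delivers 195 mg
# of elemental iron per day, roughly ten times the 15-20 mg daily dose that
# the trials cited above suggest may be just as effective.
print(daily_elemental_iron("ferrous sulfate", 3))  # 195
```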

Eliminating Barriers to Teen Contraception

Posted by John Staples • October 1st, 2014

Unplanned pregnancy can be a lucrative topic for Hollywood, with movies like Precious, Boyhood, Juno and Knocked Up collectively making hundreds of millions of dollars. Yet what’s profitable for producers comes at great socioeconomic cost to teen mothers and their children. Watching a movie about this public health problem isn’t likely to help. Is there another option?

In this week’s NEJM, Dr. Gina M. Secura (Washington University, St. Louis) and colleagues describe the results of the Contraceptive CHOICE Project, a program that promotes long-acting reversible contraceptive methods in order to reduce unintended pregnancy in the St. Louis region. After obtaining parental consent, teen participants were provided with screening for sexually transmitted infections, contraceptive counseling, and their choice of reversible contraception – all at no cost.

Participants were then followed by biannual telephone interviews for 2 to 3 years. The investigators found an annual average of 34 pregnancies, 19 live births, and 10 induced abortions for every 1,000 teens in the CHOICE cohort. In comparison, sexually experienced U.S. teens have 159 pregnancies, 94 births, and 42 abortions per 1,000 teens. The authors conclude that programs that remove barriers to contraception may be of substantial public health importance in the United States.

“Teen pregnancy rates are much higher in the United States than they are in many other developed countries,” says cardiologist and NEJM Executive Editor Dr. Gregory Curfman. “The contraceptive mandate with the Affordable Care Act requires coverage of contraception without a copayment, and this study suggests that such coverage may be an effective way to prevent teen pregnancy.”

Lower rates of teen pregnancy? That sounds like a box office preview of something we’d all like to see.

These findings are also summarized in a short animation. Watch it now!