Against the Grain

Posted by Sara Fazio • October 3rd, 2014

In the latest Clinical Problem-Solving article, a 42-year-old man with a history of coronary artery disease presented to the emergency department with left-upper-quadrant abdominal pain that radiated to his back and along the subcostal margin. He also reported substernal chest pressure similar to his usual angina.

Elevated lipase and amylase levels can be indicative of pancreatic inflammation, but elevations are also seen with other intraabdominal conditions, such as cholecystitis, bowel obstruction, or celiac disease. Certain drugs, such as opiates and cholinergic agents, can also spuriously raise lipase and amylase levels.

Clinical Pearls

What is the epidemiology and typical presentation of celiac disease?

Celiac disease is an autoimmune disorder that affects the small bowel and that is triggered by ingested gluten from barley, rye, and wheat. The disease has both intestinal and extraintestinal clinical manifestations. The intestinal symptoms occur in 40 to 50% of adults, a prevalence that is less than that in children, and include abdominal pain, diarrhea, and other nonspecific abdominal symptoms; a mild elevation in aminotransferase levels is reported in 15 to 20% of patients with celiac disease. Celiac disease is associated with an increase by a factor of three in the risk of pancreatitis.

What are the extraintestinal manifestations of celiac disease?

Celiac disease is also manifested outside the gastrointestinal tract. Rashes (e.g., dermatitis herpetiformis), arthralgias, neurologic and psychiatric symptoms, fatigue, and infertility can be presenting manifestations. Patients can also present with sequelae of malabsorption, including weight loss, iron-deficiency anemia, and osteoporosis or osteomalacia due to calcium and vitamin D  malabsorption. Celiac disease can be associated with other autoimmune conditions, such as type 1 diabetes, autoimmune thyroiditis, and hepatitis. Some retrospective studies, but not others, have shown an increased risk of incident ischemic heart disease, a finding that has been postulated to be associated with chronic inflammation.

Morning Report Questions

Q: What is the epidemiology and prevalence of celiac disease?

A: The prevalence of celiac disease in screening studies is 0.5 to 1%; the disease is seen in all populations for which gluten is part of the diet, although the prevalence varies depending on the population studied. The HLA class II genes HLA-DQ2 or, much less commonly, HLA-DQ8 are expressed in the majority of patients with celiac disease. Although men and women have a similar prevalence of celiac disease in population-based screening studies, the disease is diagnosed more frequently in women than in men.

Q: How is the diagnosis of celiac disease made?

A: The diagnosis of celiac disease is usually made on the basis of serologic screening, followed by a confirmatory small-bowel biopsy. The serologic test of choice is the IgA anti-tissue transglutaminase antibody assay, which is highly standardized, specific (94%), and sensitive (97%). Measurement of IgG anti-tissue transglutaminase antibodies or deamidated gliadin peptide IgG antibodies can be performed in persons who are IgA-deficient. IgA antiendomysial antibodies are highly specific, but testing is expensive and operator-dependent. Measurement of antigliadin antibodies is no longer recommended for diagnosis owing to low diagnostic accuracy. Positive serologic testing in adults should be followed by a small-bowel biopsy to assess the severity of the small-bowel involvement and to ensure that the serologic test results are not falsely positive. Findings on biopsy range from near-normal villous architecture with prominent intraepithelial lymphocytosis to complete villous atrophy.
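The test characteristics quoted above make the case for biopsy confirmation concrete. As a purely illustrative sketch (not part of the article), plugging the quoted sensitivity and specificity into Bayes' rule shows what a positive screen means at the roughly 1% prevalence seen in screening studies:

```python
# Illustrative only: Bayes' rule applied to the figures quoted above for the
# IgA anti-tissue transglutaminase assay (sensitivity 97%, specificity 94%)
# at an assumed screening prevalence of 1%.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(disease | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

ppv = positive_predictive_value(0.97, 0.94, 0.01)
print(f"{ppv:.0%}")  # about 14%; most positive screens are false positives
```

In other words, at screening prevalence only about one in seven positive serologic tests reflects true disease, which is one reason positive serologic testing in adults is followed by a confirmatory small-bowel biopsy.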

Microcytic Anemia

Posted by Sara Fazio • October 3rd, 2014

A new review discusses diagnosis and treatment of thalassemia, anemia of inflammation, and iron-deficiency anemia, highlighting recent findings.  The article includes an interactive graphic that shows various types of red cells that are observed in microcytic anemias and other conditions.

The microcytic anemias are those characterized by the production of red cells that are smaller than normal. The small size of these cells is due to decreased production of hemoglobin, the predominant constituent of red cells. The causes of microcytic anemia are a lack of globin product (thalassemia), restricted iron delivery to the heme group of hemoglobin (anemia of inflammation), a lack of iron delivery to the heme group (iron-deficiency anemia), and defects in the synthesis of the heme group (sideroblastic anemias).

Clinical Pearls

What is the mechanism of microcytosis in inflammatory states?

Inflammatory states are often accompanied by microcytic anemia. The cause of this anemia is twofold. First, renal production of erythropoietin is suppressed by inflammatory cytokines, resulting in decreased red-cell production. Second, lack of iron availability for developing red cells can lead to microcytosis. The lack of iron is largely due to the protein hepcidin, an acute-phase reactant that leads to both reduced iron absorption and reduced release of iron from body stores. The protein ferroportin mediates cellular efflux of iron. Hepcidin binds to and down-regulates ferroportin, thereby blocking iron absorbed by enterocytes from entering the circulation and also preventing the release of iron from its body stores to developing red cells.

Figure 2. Mechanism of Anemia of Inflammation.

Which persons are at greatest risk for iron-deficiency anemia?

Owing to obligate iron loss through menses, women are at greater risk for iron deficiency than men. Iron loss in menstruating women averages 1 to 3 mg per day, and dietary intake is often inadequate to maintain a positive iron balance. Pregnancy adds to demands for iron, with requirements increasing to 6 mg per day by the end of pregnancy. Athletes are another group at risk for iron deficiency. Gastrointestinal blood loss is one source of iron loss in athletes, and exercise-induced hemolysis leads to urinary iron losses. Decreased absorption of iron has also been implicated as a cause of iron deficiency, because levels of hepcidin are often elevated in athletes owing to training-induced inflammation. Obesity and its surgical treatment are also risk factors for iron deficiency. Obese patients are often iron-deficient, with increased hepcidin levels being implicated in decreased absorption. After bariatric surgery, the incidence of iron deficiency can be as high as 50%.

Morning Report Questions

Q: How is iron-deficiency anemia diagnosed?

A: For the diagnosis of iron deficiency, many tests have been proposed over the years, but the serum ferritin assay is currently the most efficient and cost-effective test, given the shortcomings of other tests. The mean corpuscular volume is low with severe iron deficiency, but coexisting conditions such as liver disease may blunt the decrease in red-cell size. An increased total iron-binding capacity is specific for iron deficiency, but because total iron-binding capacity is lowered by inflammation, aging, and poor nutrition, its sensitivity is low. Iron saturation is low with both iron-deficiency anemia and anemia of inflammation. Serum levels of soluble transferrin receptor will be elevated in patients with iron deficiency, and this is not affected by inflammation. However, levels can be increased in patients with any condition associated with an increased red-cell mass, such as hemolytic anemias, and in patients with chronic lymphocytic leukemia. Bone marrow iron staining is the most accurate means of diagnosing iron-deficiency anemia, but this is an invasive and expensive procedure. Even in the setting of chronic inflammation, it is rare for a patient with iron deficiency to have a ferritin level of more than 100 ng per milliliter.
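As a rough sketch (not a clinical tool), the ferritin-first logic described above can be made concrete. The saturation formula is the standard one (serum iron divided by total iron-binding capacity), and the 100-ng-per-milliliter ferritin cutoff is the one quoted in the text; the example values are hypothetical:

```python
# Minimal sketch of the iron-study arithmetic discussed above.
# Cutoffs and example values are illustrative, not diagnostic criteria.

def transferrin_saturation(serum_iron_ug_dl, tibc_ug_dl):
    """Iron saturation (%): serum iron / total iron-binding capacity x 100.
    Low in both iron-deficiency anemia and anemia of inflammation."""
    return 100 * serum_iron_ug_dl / tibc_ug_dl

def iron_deficiency_unlikely(ferritin_ng_ml, cutoff=100):
    """Per the text, a ferritin level above ~100 ng/mL makes iron deficiency
    rare, even in the setting of chronic inflammation."""
    return ferritin_ng_ml > cutoff

print(transferrin_saturation(30, 450))  # about 6.7%, a low saturation
print(iron_deficiency_unlikely(150))    # True
```

The sketch also illustrates why saturation alone cannot separate the two diagnoses: it is low in both, whereas the ferritin level retains discriminating value.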

Q: What is the appropriate treatment for iron-deficiency anemia?

A: Traditionally, ferrous sulfate (325 mg [65 mg of elemental iron] orally three times a day) has been prescribed for the treatment of iron deficiency. Several trials suggest that lower doses of iron, such as 15 to 20 mg of elemental iron daily, can be as effective as higher doses and have fewer side effects. The reason may be that enterocyte iron absorption appears to be saturable; one dose of iron can block absorption of further doses. Consuming the iron with meat protein can also increase iron absorption. Calcium and fiber can decrease iron absorption, but this can be overcome by taking vitamin C. A potent inhibitor of iron absorption is tea, which can reduce absorption by 90%. Coffee may also decrease iron absorption but not to the degree that tea does. With regard to dietary iron, the rate of absorption of iron from heme sources is 10 times as high as that of iron from nonheme sources. There are many oral iron preparations, but no one compound appears to be superior to another. A pragmatic approach to oral iron replacement is to start with a daily 325-mg pill of ferrous sulfate, taken with a meal that contains meat. Avoiding tea and coffee and taking vitamin C (500 mg) with the iron pill once daily will also help absorption. If ferrous sulfate has unacceptable side effects, ferrous gluconate at a daily dose of 325 mg (35 mg of elemental iron) can be tried. The reticulocyte count should rise in 1 week, and the hemoglobin level starts rising by the second week of therapy. Iron therapy should be continued until iron stores are replete.
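The dose arithmetic above is easy to check. A small sketch, with elemental-iron fractions derived from the doses quoted in the text (illustrative only, not prescribing guidance):

```python
# Elemental-iron arithmetic from the figures above: ferrous sulfate is about
# 20% elemental iron (325 mg -> 65 mg), ferrous gluconate about 11%
# (325 mg -> 35 mg). Fractions come from the doses quoted in the text.

ELEMENTAL_FRACTION = {
    "ferrous sulfate": 65 / 325,    # 0.20
    "ferrous gluconate": 35 / 325,  # ~0.108
}

def elemental_iron_mg(salt, salt_dose_mg):
    return round(salt_dose_mg * ELEMENTAL_FRACTION[salt], 1)

print(elemental_iron_mg("ferrous sulfate", 325))    # 65.0
print(elemental_iron_mg("ferrous gluconate", 325))  # 35.0
```

This is why a switch from sulfate to gluconate at the same 325-mg tablet size roughly halves the elemental-iron dose, which is part of why it is better tolerated.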

Eliminating Barriers to Teen Contraception

Posted by John Staples • October 1st, 2014

Unplanned pregnancy can be a lucrative topic for Hollywood, with movies like Precious, Boyhood, Juno and Knocked Up collectively making hundreds of millions of dollars. Yet what’s profitable for producers comes at great socioeconomic cost to teen mothers and their children. Watching a movie about this public health problem isn’t likely to help. Is there another option?

In this week’s NEJM, Dr. Gina M. Secura (Washington University, St. Louis) and colleagues describe the results of the Contraceptive CHOICE Project, a program that promotes long-acting reversible contraceptive methods in order to reduce unintended pregnancy in the St. Louis region. After obtaining parental consent, teen participants were provided with screening for sexually transmitted infections, contraceptive counseling, and their choice of reversible contraception – all at no cost.

Participants were then followed by biannual telephone interviews for 2 to 3 years. The investigators found an annual average of 34 pregnancies, 19 live births, and 10 induced abortions for every 1,000 teens in the CHOICE cohort. In comparison, sexually experienced U.S. teens have 159 pregnancies, 94 births, and 42 abortions per 1,000 teens. The authors conclude that programs that remove barriers to contraception may be of substantial public health importance in the United States.

“Teen pregnancy rates are much higher in the United States than they are in many other developed countries,” says cardiologist and NEJM Executive Editor Dr. Gregory Curfman. “The contraceptive mandate with the Affordable Care Act requires coverage of contraception without a copayment, and this study suggests that such coverage may be an effective way to prevent teen pregnancy.”

Lower rates of teen pregnancy? That sounds like a box office preview of something we’d all like to see.

These findings are also summarized in a short animation. Watch it now!

Take the Fluids and Electrolytes Challenge

Posted by Karen Buckley • September 30th, 2014

A 22-year-old woman has received 6 liters of isotonic saline and is awaiting transfer to the operating room for stabilization of injuries suffered in a car accident. The lab values include blood pH 7.28, PaCO2 39 mm Hg, sodium 135 mmol per liter, potassium 3.8 mmol per liter, chloride 115 mmol per liter, and bicarbonate 18 mmol per liter. What strategy would provide the best support for this patient?
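A hint for working through the numbers: the anion gap, calculated as serum sodium minus the sum of chloride and bicarbonate, is a standard first step in characterizing a metabolic acidosis. A quick illustrative sketch (not an answer key for the challenge):

```python
# Anion-gap arithmetic for the vignette's laboratory values; a sketch for
# illustration, not medical advice.

def anion_gap(na, cl, hco3):
    """All values in mmol per liter; a normal gap is roughly 8 to 12."""
    return na - (cl + hco3)

gap = anion_gap(na=135, cl=115, hco3=18)
print(gap)  # 2
```

A gap at or below the normal range despite acidemia (pH 7.28) points toward a hyperchloremic, normal-anion-gap process, which fits the large volume of isotonic saline this patient received.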

Read a brief case description, then vote and comment about what the diagnosis may be and what diagnostic tests will prove useful.  Follow the conversation on Facebook and Twitter with #NEJMCases.

The answer will be published on October 9 alongside the first article in the new Disorders of Fluids and Electrolytes review series, Physiological Approach to Assessment of Acid-Base Disturbances.  Look for a new article each month.

29-Year-Old Man with Diarrhea, Nausea, and Weight Loss

Posted by Sara Fazio • September 26th, 2014

A 29-year-old man was seen in the walk-in clinic because of diarrhea of 1 year’s duration and weight loss. Initial laboratory values included elevated hepatic aminotransferase levels and a ferritin level of 1716 ng per milliliter. A diagnostic procedure was performed.

Common causes of cirrhosis include chronic alcohol use, viral hepatitis (either HBV or HCV), nonalcoholic steatohepatitis, and hereditary hemochromatosis.

Clinical Pearls

What are the clinical features associated with autoimmune hepatitis?

Patients with autoimmune hepatitis may present with fatigue, lethargy, anorexia, nausea, abdominal pain, itching, and arthralgia of small joints. Diarrhea is not a common symptom of this illness. The International Autoimmune Hepatitis Group proposed a scoring system intended to help estimate the probability of this illness. Elements that make autoimmune hepatitis more likely include: female sex, a low ratio of alkaline phosphatase to aspartate aminotransferase, an elevated IgG level, presence of antinuclear antibody, negative viral tests, no history of drug use, and no history of substantial alcohol use. The presence of eosinophilia has been associated with autoimmune hepatitis, and an elevated ferritin level may occur with autoimmune hepatitis. Autoimmune hepatitis often occurs in Italian and Chinese populations. Autoimmune disorders are common among first-degree relatives of children with autoimmune hepatitis.

Table 2. Scoring Systems to Differentiate Autoimmune Hepatitis from Wilson’s Disease.

How does Wilson’s disease typically present, and what tests are useful in making a diagnosis?

Patients with Wilson’s disease present with a range of hepatic manifestations, including persistently elevated serum aminotransferase levels, chronic hepatitis, cirrhosis, or fulminant hepatic failure. Wilson’s disease is considered in the differential diagnosis when there is coexisting liver disease and a neuropsychiatric disorder. A 2008 guideline for the diagnosis and management of Wilson’s disease suggests that a diagnosis of Wilson’s disease can be established by the presence of Kayser-Fleischer rings, a ceruloplasmin level of less than 20 mg per deciliter, and a 24-hour urinary copper level of greater than 40 micrograms. A liver biopsy is the next study that should be performed in patients in whom the diagnosis is being considered.

Morning Report Questions

Q: What is the most reliable diagnostic test for Wilson’s disease?

A: The most reliable diagnostic test for Wilson’s disease is copper quantitation in tissue. Values greater than 250 micrograms per gram of dry weight have 83.3% sensitivity and 98.6% specificity for Wilson’s disease; this diagnostic threshold is incorporated into practice guidelines of the American Association for the Study of Liver Diseases. A value greater than 1000 micrograms per gram of dry weight is considered virtually diagnostic of Wilson’s disease. It is important to note that hepatocytic copper deposition in Wilson’s disease may be uneven, and needle biopsies are often associated with sampling error; therefore, histochemical staining may be unreliable and yield false negative results. In addition, the most commonly used stains (rhodanine and rubeanic acid) mainly detect copper concentrated in lysosomes, a finding that is seen in the late stages of the disease. In the early stages, excess copper is diffusely distributed in the cytoplasm and is not detected by these stains. Thus, a negative histochemical stain for copper does not rule out the diagnosis of Wilson’s disease.

Q: How is Wilson’s disease treated?

A: Management of Wilson’s disease is determined by the clinical presentation of the patient. Asymptomatic patients, without signs of  hepatic or neurologic disease, can be treated with zinc. Zinc acts in the enterocyte to induce metallothionein, an endogenous metal chelator. For patients with evidence of neurologic or hepatic involvement, chelation therapy with penicillamine or trientine is indicated. The choice of chelator is influenced by the presence of neurologic disease. Patients with neurologic manifestations who are initially treated with trientine have higher rates of neurologic deterioration than patients with neurologic manifestations who are initially treated with penicillamine; thus, penicillamine may be preferred in this group. However, penicillamine is associated with high rates of adverse events leading to discontinuation of therapy, as compared with trientine. Therefore, in patients without neurologic involvement, trientine is a reasonable initial therapy. For patients with decompensated liver disease that is unresponsive to chelation therapy or for patients with fulminant hepatic failure, referral for evaluation for liver transplantation is warranted.

Depression in the Elderly

Posted by Sara Fazio • September 26th, 2014

Late-life depression (major depressive disorder in adults 60 years of age or older) is often associated with coexisting medical illness or cognitive impairment. Either pharmacotherapy (with SSRIs as the initial choice) or psychotherapy may be used as first-line therapy.

Late-life depression is the occurrence of major depressive disorder in adults 60 years of age or older. Major depressive disorder occurs in up to 5% of community-dwelling older adults, and 8 to 16% of older adults have clinically significant depressive symptoms. Rates of major depressive disorder rise with increasing medical morbidity, with reported rates of 5 to 10% in primary care and as high as 37% after critical care hospitalizations.

Clinical Pearls

How do patients with late-onset depression compare with those with an initial diagnosis earlier in life?

Patients with late-life depression are heterogeneous in terms of clinical history and coexisting medical conditions. As compared with older adults reporting an initial depressive episode early in life, those with late-onset depression are more likely to have neurologic abnormalities, including deficits on neuropsychological tests and age-related changes on neuroimaging that are greater than normal; they are also at higher risk for subsequent dementia. Such observations informed the hypothesis that vascular disease may contribute to depression in some older adults. Low mood may be less common in older adults with depression than in younger adults with the disorder, whereas irritability, anxiety, and somatic symptoms may be more common. Psychosocial stressors such as the death of a loved one may trigger a depressive episode, although transient reactions to major losses can resemble depression.

Table 1. DSM-5 Diagnostic Criteria for Major Depressive Disorder. 

What is the recommended initial evaluation in elderly patients suspected of having depression?

Recommended laboratory tests include blood counts to test for anemia and measurement of the glucose level, as well as measurement of thyrotropin, since hypothyroidism can mimic depressive symptoms. Measurement of serum levels of vitamin B12 and folate is also commonly recommended, because the prevalence of vitamin B12 deficiency increases with age, and low levels of vitamin B12 and folate may contribute to depression. Cognitive screening (e.g., with the Mini-Mental State Examination) is warranted in persons reporting memory problems and may reveal deficits in visuospatial processing or memory even if the total score is in the normal range. Neuropsychological testing may help identify early dementia, but because acute depression negatively affects performance, testing should be postponed until depressive symptoms diminish.

Morning Report Questions

Q: What is considered the first-line pharmacologic treatment for late-life depression?

A: Owing to their favorable adverse-event profiles and low cost, selective serotonin-reuptake inhibitors (SSRIs) are first-line treatments for late-life depression. In some randomized, controlled trials, but not others, SSRIs such as sertraline, fluoxetine, and paroxetine have been more effective than placebo in reducing depressive symptoms and increasing rates of remission of depression. Serotonin-norepinephrine reuptake inhibitors (SNRIs) are commonly used as second-line agents when remission is not obtained with SSRIs. In small studies, venlafaxine did not show greater efficacy than placebo, but a larger, placebo-controlled trial of duloxetine showed significant improvements in late-life depression (response rate, 37% vs. 19%; remission rate, 27% vs. 15%). As observed in trials involving younger adults, randomized trials involving older adults have not shown significant differences between the benefits of SSRIs and those of SNRIs, although adverse effects may be more frequent with SNRIs. Tricyclic antidepressants have efficacy similar to that of SSRIs in the treatment of late-life depression but are less commonly used owing to their greater side effects.

Table 3. Antidepressants Commonly Used to Treat Late-Life Depression.

Q: Is there a role for brain stimulation in treating late-life depression?

A: Electroconvulsive therapy (ECT) is the most effective treatment for severely depressed patients, including elderly patients. Although antidepressant medication is first-line therapy, ECT should be considered in patients if they are suicidal, have not had a response to antidepressant pharmacotherapy, have a deteriorating physical condition, or have depression-related disability that threatens their ability to live independently. Available data from open-label trials, typically involving persons who had not had a response to antidepressants, suggest remission rates of 70 to 90% with ECT, although remission rates in community samples may be lower (30 to 50%). Common side effects include postictal confusion with both anterograde and retrograde amnesia; current administration techniques, such as unilateral electrode placement with a brief pulse, substantially reduce this risk, and cognitive symptoms typically resolve after the completion of ECT. Persons with cardiovascular or neurologic disease are at increased risk for ECT-related memory problems. Transcranial magnetic stimulation is a newer treatment for depression that uses a focal electromagnetic field generated by a coil held over the scalp, most commonly positioned over the left prefrontal cortex. Sessions are scheduled five times a week over a period of 4 to 6 weeks. This treatment does not require anesthesia and does not have cognitive side effects. However, a meta-analysis of six trials comparing transcranial magnetic stimulation with ECT showed that ECT has higher remission rates. Although a large multisite trial did not show that age was a significant predictor of response, other studies have suggested that depressed older adults may not have as robust a response as younger adults.

Ethical Challenges in Treating Family and Friends

Posted by Karen Buckley • September 25th, 2014

What do you do when:

  • A neighbor wants a prescription to refill her daughter’s albuterol inhaler before her soccer game — which starts in two hours?
  • A colleague asks for a prescription for fluoxetine because he has been feeling depressed since his divorce?
  • Your father-in-law has been admitted to the hospital where you work after a car accident; he is in substantial pain and needs more morphine, but the resident does not respond to your pages?

Take the poll now, and read the new Sounding Board article in which a group of authors from the University of Michigan discuss the ethical issues that arise when physicians provide informal care to family members or friends. They argue that physicians should refrain from providing medical care to family and close friends, except in urgent situations where no other options are available. Do you agree?

Anti–Interleukin-5 Monoclonal Antibody to Treat Severe Eosinophilic Asthma

Posted by Daniela Lamas • September 24th, 2014

Most asthma patients who come to your pulmonary office get better. With a regimen of inhaled therapies and possibly a short course of oral corticosteroids, the wheeze and cough and shortness of breath remit. But the patient you are seeing in clinic today is still struggling.

She’s tried every inhaler at its maximum dose and has still been taking oral corticosteroids for more than six months now. Each year, she’s in the hospital at least three times with a severe exacerbation of her symptoms and an elevated eosinophil count in her peripheral blood. Reaching the limits of your therapeutic armamentarium, you are considering referring her for bronchial thermoplasty, but otherwise you’re not sure what else to offer.

Now, two studies published in this week’s NEJM lend promising evidence to a new therapy that could benefit your patient – a monoclonal antibody, mepolizumab, that targets one aspect of the inflammatory cascade leading to asthma symptoms.

Mepolizumab works by binding to IL-5 – a cytokine that recruits eosinophils from the bone marrow. As blood and sputum eosinophilia correlate with worsened asthma control, it stands to reason that a drug that decreases eosinophilia might improve asthma control. But how to tell which patients with severe asthma might benefit? Initial studies failed to find a benefit across the entire spectrum of those with severe asthma, but smaller studies demonstrated better outcomes in patients with asthma characterized by elevated levels of eosinophils. The pair of studies in this week’s NEJM further characterizes how to administer this drug and to what group of patients.

In one of the studies, Hector Ortega and colleagues enrolled 576 patients with severe asthma. All had had more than two exacerbations of their asthma in the prior year despite high-dose inhaled corticosteroids, as well as an elevated blood eosinophil count (≥300 cells per microliter). Patients were randomly assigned to receive intravenous mepolizumab, subcutaneous mepolizumab, or placebo every four weeks for 32 weeks. They found that both routes of administration decreased the frequency of exacerbations by about one-half and improved measures of quality of life and asthma control.

The companion study, by Elisabeth Bel and colleagues, looks specifically at a subset of patients with glucocorticoid-dependent severe asthma and persistently elevated eosinophil levels. Patients received monthly subcutaneous mepolizumab or placebo for 20 weeks. Those who were randomly assigned to mepolizumab were able to reduce their glucocorticoid dose by 50% and, despite the reduced steroid dose, had fewer exacerbations and reported improved asthma control. Both studies showed the drug to have adverse-event rates similar to those with placebo.

In an editorial, Parameswaran Nair, a pulmonologist and asthma researcher, writes that these studies leave open some key questions, specifically whether tracking sputum eosinophil levels might improve drug dosing, and what the most effective route and frequency of administration might be. He also notes that even the placebo group in the Ortega trial saw a significant decrease in exacerbation rates (although the drop was smaller than in those taking the study drug). Perhaps, Nair writes, for some patients, improving adherence might negate the need for a costly therapy like mepolizumab.

Despite these concerns, he notes that anti-interleukin 5 therapy offers “an important advance in our ability to care for patients with severe eosinophilic asthma, particularly as a method of decreasing exacerbations in patients who are dependent on daily use of oral glucocorticoids.” He concludes, “it is reasonable to consider anti–interleukin-5 therapy for patients with severe asthma who are receiving high doses of systemic glucocorticoids and who continue to have an elevated eosinophil count in sputum or blood regardless of their atopic status.”

When it comes to your patient, then, first make sure she’s truly been adherent to her prescribed medications. If she has been, she will need to wait for regulatory approval, as mepolizumab has not yet been approved for use in asthma.



60-Year-Old Woman with Syncope

Posted by Sara Fazio • September 19th, 2014

In the latest Case Record of the Massachusetts General Hospital, a 60-year-old woman was seen in the emergency department after a syncopal episode. Oxygen saturation was 71% while she was breathing ambient air. A focused cardiac ultrasound revealed right-sided heart strain and McConnell’s sign. Additional diagnostic procedures were performed.

A FOCUS (focused cardiac ultrasound) examination, also referred to as clinician-performed ultrasonography or point-of-care ultrasonography, provides time-sensitive information that may narrow the differential diagnosis, inform resuscitation strategies, and guide treatment of patients with cardiovascular disease. The purpose of a FOCUS examination is to look for any evidence of pericardial effusion, assess global cardiac function and relative chamber size, and guide emergency procedures. A FOCUS examination is intended to serve as a complement to comprehensive echocardiography and is now considered an essential part of training in emergency medicine.

Clinical Pearls

What is “clot in transit”?

Clot in transit is a dangerous manifestation of pulmonary embolism that is seen in approximately 4% of cases, although this may be an underestimate. The mortality associated with pulmonary embolism with clot in transit is high (27 to 45%), with nearly all deaths occurring in the first 24 hours, so rapid and aggressive treatment is essential. With clot in transit, anticoagulation alone is unlikely to be sufficient because it is associated with a higher mortality (38%) than thrombolysis or surgery.

What is the role of systemic thrombolysis versus catheter-directed thrombolysis in massive pulmonary embolism?

In cases of massive, hemodynamically unstable pulmonary embolism and in the presence of clot in transit, systemic thrombolysis is associated with improved survival, as compared with anticoagulation alone. Thrombolytic therapy can be delivered locally through a catheter inserted into a pulmonary artery. The main advantage of a catheter-directed approach is that a low dose of the thrombolytic agent is typically used in someone with a higher risk of bleeding. However, in a patient with suspected clot in transit, passing a catheter through the right atrium is risky and could disrupt the clot.

Morning Report Questions

Q: What is the role of aspiration versus surgical thrombectomy in the treatment of pulmonary embolism?

A: Aspiration thrombectomy is a relatively new technique that allows clinicians to remove a large volume of thrombus from the right side of the heart or the proximal pulmonary artery. This procedure requires a venotomy and a perfusion team, but it is less invasive than an open surgical thrombectomy. Although there is a paucity of data describing its use in patients with a pulmonary embolism and clot in transit, the procedure is typically well tolerated. A major risk of this approach is that aspiration can fragment a fragile clot and lead to further embolization. In a patient with a patent foramen ovale, a fragmented clot could release emboli into the systemic circulation. Open surgical thrombectomy allows for rapid removal of clots from both the pulmonary arteries and the right side of the heart. The procedure requires a median sternotomy and cardiopulmonary bypass, but improvements in technique and patient selection have greatly increased survival over the past two decades.

Q: What is the duration of anticoagulation and evaluation recommended in a patient with an unprovoked pulmonary embolus?

A: Current guidelines suggest lifelong treatment in patients who have had unprovoked thrombosis. Routine screening according to the patient’s age, symptoms, and sex is usually advocated, but a more extensive evaluation is generally not warranted. For patients who have had an unprovoked event and who are reluctant to take anticoagulants on a lifelong basis, the risk of recurrence can be predicted by measuring the D-dimer level while the anticoagulant is withheld. Deep venous thrombosis and pulmonary embolism tend to recur in the same form as the initial clinical manifestation, a pattern that should also be considered before discontinuing anticoagulation.

Ultrasonography versus CT for Suspected Nephrolithiasis

Posted by Sara Fazio • September 19th, 2014

Patients with suspected nephrolithiasis were randomly assigned to initial ultrasonography performed by an emergency physician, initial ultrasonography performed by a radiologist, or initial CT. Ultrasonography was associated with lower cumulative radiation exposure, with no significant difference in complications.

Pain from nephrolithiasis is a common reason for emergency department visits in the United States. Abdominal computed tomography (CT) has become the most common initial imaging test for suspected nephrolithiasis because of its high sensitivity for the diagnosis of urinary stone disease. However, CT entails exposure to ionizing radiation with attendant long-term cancer risk, is associated with a high rate of incidental findings that can lead to inappropriate follow-up referral and treatment, and contributes to growing annual care costs for acute nephrolithiasis, which are currently approximately $2 billion in the United States.

Clinical Pearls

What were the study results with respect to high-risk diagnoses with complications?

High-risk diagnoses with complications during the first 30 days after randomization were recorded in 11 patients (0.4%) — 6 patients (0.7%) assigned to point-of-care ultrasonography, 3 (0.3%) assigned to radiology ultrasonography, and 2 (0.2%) assigned to CT — with no significant difference according to study group (P=0.30).

Table 3. Primary and Secondary Study Outcomes According to Study Group. 

What were the results with respect to radiation exposure and serious adverse events between groups?

Over the course of the 6-month study period, the average cumulative radiation exposures were significantly lower in patients assigned to point-of-care ultrasonography and radiology ultrasonography than in those assigned to CT (10.1 mSv and 9.3 mSv, respectively, vs. 17.2 mSv; P<0.001). This difference is attributable to the imaging performed at the baseline emergency department visit. There were no significant differences among the study groups in the number of patients with serious adverse events: 113 of 908 patients (12.4%) assigned to point-of-care ultrasonography, 96 of 893 (10.8%) assigned to radiology ultrasonography, and 107 of 958 (11.2%) assigned to CT (P=0.50). A total of 466 serious adverse events occurred in these 316 patients; 426 (91.4%) were hospitalizations during the follow-up period, and 123 (26.4%) involved surgical treatment or complications of urinary stone disease.
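The group-wise event rates above follow directly from the raw counts in the passage; a minimal Python check (the group labels are mine, the counts are from the text):

```python
# Recompute the serious-adverse-event rates from the counts reported in the
# trial: (patients with at least one event, patients in group).
groups = {
    "point-of-care ultrasonography": (113, 908),
    "radiology ultrasonography": (96, 893),
    "CT": (107, 958),
}

def event_rate_pct(events: int, n: int) -> float:
    """Percentage of patients with at least one serious adverse event."""
    return round(100 * events / n, 1)

rates = {name: event_rate_pct(e, n) for name, (e, n) in groups.items()}
# Reproduces the reported 12.4%, 10.8%, and 11.2%.
```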

Table 3. Primary and Secondary Study Outcomes According to Study Group.

Morning Report Questions

Q: Did the length of stay in the emergency department differ between groups?

A: The median length of stay in the emergency department was 6.3 hours in the point-of-care ultrasonography group, 7.0 hours in the radiology ultrasonography group, and 6.4 hours in the CT group (P<0.001 for the comparison of radiology ultrasonography with each of the other two groups). No significant differences were observed among the groups with respect to the proportion of patients who had a return visit to the emergency department within 7 or 30 days or who were admitted to the hospital within 7, 30, or 180 days or with respect to self-reported pain scores at any assessment.

Table 3. Primary and Secondary Study Outcomes According to Study Group. 

Q: In this study, in an analysis based on the first imaging test performed, what were the sensitivity and specificity of point-of-care ultrasonography, radiology ultrasonography, and CT in the diagnosis of suspected nephrolithiasis?

A: An analysis of diagnostic accuracy for nephrolithiasis that was performed on the basis of the result of the first imaging test patients underwent showed that ultrasonography had lower sensitivity and higher specificity than CT: the sensitivity was 54% (95% confidence interval [CI], 48 to 60) for point-of-care ultrasonography, 57% (95% CI, 51 to 64) for radiology ultrasonography, and 88% (95% CI, 84 to 92) for CT (P<0.001), and the specificity was 71% (95% CI, 67 to 75), 73% (95% CI, 69 to 77), and 58% (95% CI, 55 to 62), respectively (P<0.001). There were no significant differences in results between those with and those without complete follow-up.
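For readers less familiar with the diagnostic-accuracy terms in this answer, sensitivity and specificity are simple ratios over a 2x2 table. The sketch below uses hypothetical cell counts chosen only to reproduce the point-of-care ultrasonography figures (54% sensitivity, 71% specificity); the trial's raw counts are not given in the text.

```python
# Minimal sketch of the definitions behind the reported accuracy figures.
# The 2x2 counts below are hypothetical (illustration only), not trial data.

def sensitivity(tp: int, fn: int) -> float:
    """Proportion of patients with a stone whose imaging was positive."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Proportion of patients without a stone whose imaging was negative."""
    return tn / (tn + fp)

# Hypothetical confusion-matrix cells for point-of-care ultrasonography:
tp, fn = 54, 46   # stone present: true positives, false negatives
tn, fp = 71, 29   # stone absent: true negatives, false positives

print(sensitivity(tp, fn))  # 0.54
print(specificity(tn, fp))  # 0.71
```

The trade-off the answer describes falls out of these ratios: CT's higher sensitivity means fewer missed stones, while ultrasonography's higher specificity means fewer false-positive diagnoses.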