Advanced Dementia

Posted by Carla Rothaus • June 26th, 2015

Advanced dementia is a leading cause of death in the United States. A new Clinical Practice article covers treatment decisions guided by the goals of care — comfort is usually the primary goal, and tube feeding is not recommended.

In 2014, Alzheimer’s disease affected approximately 5 million persons in the United States, a number that is projected to increase to approximately 14 million by 2050.

Clinical Pearls

- What are the features of advanced dementia?

The features of advanced dementia include profound memory deficits (e.g., inability to recognize family members), minimal verbal abilities, inability to ambulate independently, inability to perform any activities of daily living, and urinary and fecal incontinence.

- Are there barriers to hospice care in the United States for patients with advanced dementia?

Eligibility guidelines for the Medicare hospice benefit require that patients with dementia have an expected survival of less than 6 months, as assessed by their reaching stage 7c on the Functional Assessment Staging tool (a scale ranging from stage 1 to stage 7f, with stage 7f indicating the most severe dementia) and having had one of six specified complications in the past year. However, these eligibility guidelines do not accurately predict survival. Although hospice enrollment of patients with dementia has increased over the past decade, many barriers to accessing hospice care persist, particularly the requirement of a life expectancy of less than 6 months. Given the challenge of predicting life expectancy among patients with advanced dementia, access to palliative care should be determined on the basis of a desire for comfort care rather than prognostic estimates.

Table 1. Hospice Guidelines for Estimating Survival of Less Than 6 Months in a Patient with Dementia.

Morning Report Questions

Q: What are some of the concerns regarding current management of patients with advanced dementia, especially when comfort is the goal of treatment?

A: Infections are very common in patients with advanced dementia. The Study of Pathogen Resistance and Exposure to Antimicrobials in Dementia (SPREAD), which prospectively followed 362 nursing home residents with advanced dementia, showed that in a 12-month period, two thirds were suspected to have infections, most commonly of the urinary or respiratory tract. In SPREAD, 75% of suspected infections were treated with antimicrobials, but less than half of all treated infections, and only 19% of treated urinary tract infections, met minimal clinical criteria for the initiation of antimicrobials. An estimated 75% of hospitalizations may be medically unnecessary or discordant with patients’ preferences, and are thus avoidable. The goal of care for most patients is comfort, and hospitalization seldom promotes that goal, except in rare cases, such as the treatment of hip fractures or when palliative care is unavailable. Daily medications should align with the goals of care, and drugs of questionable benefit should be discontinued. In 2008, an expert panel declared that the use of certain medications is inappropriate (i.e., not clinically beneficial) in patients with advanced dementia for whom comfort is the goal. Cross-sectional analyses of a nationwide pharmacy database showed that 54% of nursing home residents with advanced dementia were prescribed at least one of those medications. The most commonly prescribed inappropriate medications were cholinesterase inhibitors (36%), memantine (25%), and statins (22%). Medications of questionable benefit accounted for 35% of the mean 90-day medication expenditures for the nursing home residents with advanced dementia to whom they were prescribed.

Q: What is the recommended approach to the care of patients with advanced dementia?

A: Advance care planning is a cornerstone of the care of patients with advanced dementia. Providers should educate health care proxies about the disease trajectory (i.e., the final stage of an incurable disease) and expected clinical complications (e.g., eating problems and infections). Providers should also counsel proxies about the basic tenet of surrogate decision making: first consider any written or oral advance directives previously expressed by the patient, and then, before acute problems arise, choose treatment options that align with those directives (e.g., a do-not-hospitalize order), ideally avoiding treatments that are inconsistent with the patient’s wishes. In the absence of clear directives, proxies will have to either exercise substituted judgment according to what they think the patient would want or make a decision based on the patient’s best interests. Some observational studies showed that patients with advanced dementia who had advance directives had better palliative care outcomes (e.g., less tube feeding, fewer hospitalizations, and greater enrollment in hospice) than those without advance directives. Treatment decisions for patients with advanced dementia should be guided by the goals of care; providers and patients’ health care proxies must share in the decision making.

A Newborn Girl with Hyperbilirubinemia

Posted by Carla Rothaus • June 26th, 2015

In the latest Case Record of the Massachusetts General Hospital, a newborn girl was transferred to this hospital because of hypotension, coagulopathy, anemia, and hyperbilirubinemia. Generalized edema, anuria, and respiratory distress developed, and the trachea was intubated. Diagnostic procedures were performed.

Neonatal hemochromatosis is the most common cause of neonatal liver failure and the leading indication for liver transplantation in infants. It is characterized by progressive iron deposition during the fetal period, predominantly targeting the liver, pancreas, heart, and thyroid and salivary glands but sparing the reticuloendothelial system.

Clinical Pearls

- Is neonatal hemochromatosis a genetic disease?

Neonatal hemochromatosis was considered for decades to be part of the hemochromatosis family and to have a genetic cause. Despite multiple attempts, no candidate genes were identified. Also known as gestational alloimmune liver disease, neonatal hemochromatosis is now recognized to be a congenital alloimmune hepatitis and is defined as the association of severe neonatal liver disease with iron deposition (siderosis) in extrahepatic tissue. Neonatal hemochromatosis is associated with a high recurrence rate (80 to 92%) in subsequent pregnancies, a pattern that cannot be explained by genetic inheritance but is consistent with an alloimmune pathogenesis.

- What are typical clinical features associated with this disease?

Extensive liver injury is typically present at birth, and some signs, such as placental edema, oligohydramnios, intrauterine growth retardation, prematurity, and stillbirth, can be detected antenatally. Hypoalbuminemia, hypoglycemia, coagulopathy, a low fibrinogen level, thrombocytopenia, and eventual multiorgan failure are the hallmarks of the disease. Low aminotransferase levels at birth are consistent with a long-standing antenatal process.

Morning Report Questions

Q: What diagnostic tests are obtained when clinical and laboratory findings suggest a diagnosis of neonatal hemochromatosis?

A: Gradient-echo MRI has become the standard noninvasive diagnostic procedure for neonatal hemochromatosis. All newborns have a relatively large amount of iron deposited in the liver because of prenatal maternal transfer; therefore, to make the diagnosis of neonatal hemochromatosis, abnormal iron storage in the pancreas, which is not seen in healthy newborns, must also be established. T1-weighted and T2-weighted MRI images can be helpful in detecting iron deposits in the liver, pancreas, and thyroid gland. The presence of iron deposits in biopsy specimens of affected organs has become the standard in establishing the diagnosis. Since marked coagulopathy makes a liver biopsy exceedingly difficult to perform, biopsy of the minor salivary gland offers an excellent alternative.

Biopsy of the minor salivary gland is a useful method for detecting evidence of extrahepatic hemosiderosis and is a highly sensitive and specific test for neonatal hemochromatosis.

Figure 1. MRI Scans of the Liver.

Figure 2. Biopsy Specimens.

Q: What treatment options are available, and what survival rates are associated with this disease?

A: Therapy for neonatal hemochromatosis includes treatment for liver failure with antioxidant cocktails (including vitamin E, N-acetylcysteine, prostaglandins, and selenium), fresh-frozen plasma, and cryoprecipitate. Infusions of intravenous immune globulin (IVIG) and exchange transfusion have also been suggested. Exchange transfusion is performed to remove any maternal alloantibodies remaining in the fetal circulation, and IVIG is administered to displace specific reactive IgG antibodies that are bound to target antigens and to bind with circulating complement. Favorable outcomes among patients with neonatal hemochromatosis have been described; however, the prognosis remains seriously guarded, and the disease is associated with an overall survival of 36%. In a large case series, the survival rate was 51% among patients who had undergone a liver transplantation and 22% among those who had not undergone a transplantation.

New Interactive Medical Case: Test Your Skills

Posted by Karen Buckley • June 24th, 2015

Approximately 10 minutes after being stung on the right lower leg by a yellow jacket (a type of wasp), a 45-year-old man began to feel lightheaded and nauseated.  He called emergency medical services, and paramedics arrived 10 minutes later.  They found him to be alert but anxious, with scattered areas of erythema on his trunk and a small, localized area of tenderness, swelling and erythema at the site of the sting.  His blood pressure was 70/45 mm Hg, and his heart rate was 108 beats per minute.

Test your diagnostic and therapeutic skills with this new Interactive Medical Case. Receive feedback on your choices and learn more about the condition and optimal treatment steps.

Browse previous Interactive Medical Cases. Try one or all 37 cases and earn CME credit or MOC points now!

Ischemic Optic Neuropathies

Posted by Carla Rothaus • June 19th, 2015

A new review article covers the diagnosis, pathophysiological features, and prognosis of ischemic optic neuropathy (ION), a relatively common cause of visual loss in older patients, including visual loss after cardiac surgery. It must be distinguished from inflammatory optic neuritis.

ION refers to all ischemic causes of optic neuropathy. ION is classified as anterior ION or posterior ION depending on the segment of optic nerve that is affected. Anterior ION accounts for 90% of ION cases. Anterior ION and posterior ION are further categorized into nonarteritic or arteritic. The term arteritic refers to ION caused by small-vessel vasculitis, most often giant-cell arteritis.

Clinical Pearls

- What is the clinical presentation of nonarteritic anterior ischemic optic neuropathy, and how is it diagnosed?

Nonarteritic anterior ION is manifested as isolated, sudden, painless, monocular vision loss with edema of the optic disc. Progressive worsening of vision over a period of a few days or a few weeks is not uncommon, presumably related to worsening ischemia in the context of a local compartment syndrome associated with the disc edema. The severity of vision loss varies from normal visual acuity with visual-field defects to profound vision loss. The diagnosis of acute nonarteritic anterior ION is primarily clinical and relies on demonstration of vision loss with a relative afferent pupillary defect and edema of the optic disc, which consists of the optic-nerve head. A crucial finding on examination is the presence of a small, crowded optic-nerve head with a small physiological cup. This small cup-to-disc ratio defines a “disc at risk.” Although this finding is difficult to see during the acute phase of nonarteritic anterior ION when the optic disc is swollen, examination of the normal eye should show a disc at risk. Imaging of the optic nerve is typically normal in patients with nonarteritic anterior ION.

Figure 1. Blood Supply to the Optic Nerve and Anatomy of the Optic-Nerve Head.

- What causes nonarteritic anterior ischemic optic neuropathy, and can it be successfully treated?

Although nonarteritic anterior ION results from disease of the small vessels supplying the anterior portion of the optic nerve, its exact cause remains unknown. A disc at risk is essential for the development of nonarteritic anterior ION. Other optic-nerve anomalies resulting in crowding of the optic-nerve head, such as optic-nerve drusen and papilledema, may also confer a predisposition to nonarteritic anterior ION. The absence of a disc at risk in a patient with presumed nonarteritic anterior ION should raise the possibility of arteritic anterior ION or another cause of optic neuropathy. There is no established treatment for nonarteritic anterior ION such as there is for the arteritic type of anterior ION. Thus, the most important management concerns are distinguishing nonarteritic anterior ION from arteritic anterior ION and detecting and controlling vascular risk factors in cases of nonarteritic anterior ION. Most proposed therapeutic interventions in nonarteritic anterior ION are based on the presumed mechanism and cascade of events. Although multiple therapies have been attempted, most have not been adequately studied, and animal models of nonarteritic anterior ION have emerged only in the past several years. Given the paucity of data regarding the exact pathophysiology of nonarteritic anterior ION and its treatment, the maxim “first, do no harm” is most important in the management of this devastating optic neuropathy.

Figure 2. Presumed Pathophysiology of Nonarteritic Anterior ION and Potential Treatment Strategies.

Figure 3. Nonarteritic Anterior ION in the Context of a Disc at Risk.

Morning Report Questions

Q: How do the clinical findings of posterior ischemic optic neuropathy differ from those of anterior ischemic optic neuropathy, and is the diagnostic evaluation the same for both?

A: When the posterior portion of the optic nerve is ischemic, there is no visible disc edema and the term “posterior ION” is used. Nonarteritic posterior ION is exceedingly rare, as compared with nonarteritic anterior ION. The typical presentation of nonarteritic posterior ION is isolated, painless, sudden loss of vision in one eye, with a relative afferent pupillary defect and a normal-appearing optic-nerve head. As expected with any optic neuropathy, optic-disc pallor develops 4 to 6 weeks later. The clinical diagnosis of nonarteritic posterior ION is difficult and remains a diagnosis of exclusion, with other causes of posterior optic neuropathy (e.g., inflammatory and compressive causes) ruled out by high-quality MRI of the brain and orbits with contrast and with fat suppression and by an extensive workup for underlying systemic inflammatory disorders. Giant-cell arteritis is the most common cause of posterior ION and must be considered in every patient older than 50 years of age who has posterior ION.

Q: What clinical findings may help to distinguish arteritic from nonarteritic anterior ischemic optic neuropathy?

A: The clinical presentation of arteritic ION is similar to that of nonarteritic ION, but several “red flags” should raise clinical suspicion for arteritic ION. Systemic symptoms of giant-cell arteritis may precede visual loss by months; however, about 25% of patients with biopsy-confirmed giant-cell arteritis present with isolated ION without any systemic symptoms (so-called occult giant-cell arteritis). The degree of visual loss is often more severe in arteritic anterior ION than in nonarteritic anterior ION. In one study, 54% of the patients with arteritic anterior ION were unable to count fingers as compared with 26% of the patients with nonarteritic anterior ION. Untreated arteritic ION becomes bilateral in days to weeks in at least 50% of cases. The affected swollen optic nerve is often pale immediately in giant-cell arteritis, whereas pallor is delayed in nonarteritic anterior ION. The finding of associated retinal or choroidal ischemia in addition to ION is highly suggestive of giant-cell arteritis. Finally, a disc at risk is not necessary for arteritic anterior ION; the absence of a crowded optic disc in the second eye of a patient with anterior ION should make the diagnosis of nonarteritic anterior ION unlikely and should increase the probability of arteritic anterior ION.

A Man with Chest Pain and Shortness of Breath

Posted by Carla Rothaus • June 19th, 2015

In the latest Case Record of the Massachusetts General Hospital, a 71-year-old man presented with sudden chest pain, diaphoresis, shortness of breath, and hypotension. An electrocardiogram showed new ST-segment elevations. Ten days earlier, an implantable cardioverter–defibrillator (ICD) had been placed. Diagnostic procedures were performed.

Complications of ICD placement are well described, and ICD lead migration or dislodgment occurs within a few days after implantation in approximately 0.14 to 1.2% of patients. Early clinical signs of lead perforation can be subtle and nonspecific, so a rapid and focused evaluation is required, even in the absence of signs of tamponade on physical examination.

Clinical Pearls

- Are there known risk factors for perforation of an ICD lead?

A few risk factors for lead perforation, including female sex and a low body-mass index, have been described. Some data suggest that myocardial fibrosis, which is frequently observed in patients with ischemic cardiomyopathy, may be protective against perforation. Ventricular hypertrophy and diabetes, both of which are associated with fibrosis, may be associated with reduced rates of perforation.

- What is the typical time course for ICD lead dislodgment, and what risk does it carry for a related major adverse event?

The overall incidence of ICD lead dislodgment is highest in the first weeks after implantation, before myocardial fibrosis occurs at the insertion site. Perforations that occur 1 month or more after implantation are rare but have been reported. In nearly 11% of patients with lead dislodgment, another related major adverse event (e.g., cardiac perforation and tamponade, pneumothorax, or cardiac arrest) or in-hospital death occurs.

Morning Report Questions

Q: Is lead perforation challenging to diagnose?

A: During the diagnostic evaluation of a patient who has had any recent medical or surgical procedure, the clinician should consider and rule out periprocedural complications. The manifestations of ICD lead migration are protean and may be surprisingly subtle. However, the events after a lead perforation may evolve rapidly, and a normal overall examination or ultrasound examination at any one point in time cannot rule out a perforation. To make a diagnosis of lead perforation, a high index of suspicion is required, and the diagnostic strategy must expeditiously rule out other lethal possibilities, including aortic dissection and pulmonary embolism. It is important to note that chest radiography is not very sensitive in the detection of lead migration. It is also important to remember that changes in lead measurements can reveal lead migration even in the absence of definitive imaging findings.

Figure 3. CT Images of the Chest.

Q: How should you manage perforation of an implantable cardioverter-defibrillator lead?

A: Lead perforation must be addressed promptly, because it can precipitate life-threatening cardiac tamponade within minutes. The key issue in the management of a lead perforation is to be prepared for decompensation or a disaster at the time the lead is extracted. Extraction of a migrated lead is performed in the operating room; the patient should receive general anesthesia and be monitored with transesophageal echocardiography. In the majority of cases, a lead associated with a perforation can be withdrawn without substantial bleeding into the pericardium. Even when the presence of the ventricular lead tip outside the myocardial wall is confirmed by CT scan, management of lead migration cannot be based on imaging findings alone. Careful correlation between the imaging findings and repeat device interrogation is required before a treatment strategy can be formed. It is possible to see a lead tip located beyond the myocardial border on CT without finding any evidence of change in lead measurements or pericardial effusion; this is frequently termed an asymptomatic lead perforation and does not necessarily require revision of the lead.

Permissive Underfeeding in the ICU

Posted by Rena Xu • June 17th, 2015

Nutrition among critically ill patients is widely considered important, but the ideal caloric targets remain a subject of debate.  Some believe higher caloric intake is helpful and can reduce mortality; others argue the exact opposite, pointing to studies linking caloric restriction to lower morbidity, as long as protein intake is adequate. This debate has prompted investigation of an underfeeding strategy as a way to reduce mortality in critically ill patients.

The Permissive Underfeeding versus Target Enteral Feeding in Adult Critically Ill Patients (PermiT) trial enrolled close to 900 critically ill adults at seven centers in Saudi Arabia and Canada. These patients were randomized either to standard feeding (70 to 100% of calculated caloric requirements) or to permissive underfeeding (40 to 60% of caloric requirements) for up to two weeks. The various centers delivered enteral feeding according to their own protocols; calculated caloric intake also accounted for calories from parenteral nutrition, intravenous dextrose, and propofol (1.1 kcal per milliliter). Protein intake was kept the same in the two groups, with the underfeeding group receiving additional protein supplements as well as saline or water to match the protein amount and volume received by the standard feeding group. The primary outcome was 90-day mortality. The investigators predicted an 8% absolute risk reduction in favor of the underfeeding group.
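The caloric accounting described above can be sketched in a few lines. This is a minimal illustration, not the trial's actual protocol code; the patient values and the 3.4 kcal/g figure for intravenous dextrose are assumptions, while the 1.1 kcal/mL propofol factor comes from the trial description.

```python
# Sketch of PermiT-style caloric accounting: total intake counts enteral
# formula plus calories from parenteral nutrition, IV dextrose, and propofol.
# Patient values below are hypothetical.

PROPOFOL_KCAL_PER_ML = 1.1   # stated in the trial description
DEXTROSE_KCAL_PER_G = 3.4    # standard value for IV dextrose (assumption here)

def total_kcal(enteral_kcal, parenteral_kcal, dextrose_g, propofol_ml):
    """Sum all caloric sources counted toward the daily target."""
    return (enteral_kcal
            + parenteral_kcal
            + dextrose_g * DEXTROSE_KCAL_PER_G
            + propofol_ml * PROPOFOL_KCAL_PER_ML)

def percent_of_requirement(intake_kcal, requirement_kcal):
    return 100 * intake_kcal / requirement_kcal

# Hypothetical sedated patient with a 2000-kcal/day calculated requirement:
intake = total_kcal(enteral_kcal=700, parenteral_kcal=0,
                    dextrose_g=50, propofol_ml=100)
pct = percent_of_requirement(intake, 2000)  # lands in the 40-60% band
```

Note how a sedated patient's propofol infusion alone can contribute on the order of a hundred kilocalories a day, which is why the protocol counted it.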

As intended, patients in the underfeeding group consumed fewer calories than those in the standard feeding group (average intake, 46% vs. 71% of daily requirements). But the study found no difference in 90-day mortality between the two groups (29% in the standard group and 27% in the underfeeding group; P=0.58). There were also no differences between the groups in a number of secondary outcomes, including length of ICU stay, in-hospital mortality, 28-day mortality, and 180-day mortality. Further, on the basis of limited subgroup analyses, the study did not identify any subpopulations with differences in mortality between the two strategies.

“The collective results of our study and the two previous trials add to a growing body of research that suggests that standard feeding goals in critically ill patients do not improve clinical outcomes,” the authors write.

While underfeeding did not demonstrate a mortality benefit in this study, the authors note that the study was powered to detect an 8% absolute risk reduction, which means smaller treatment effects cannot be ruled out. They also observe that some of the enrolled patients, particularly in the standard feeding group, failed to reach their target caloric intake, which would have narrowed the gap in caloric intake between the two groups. Finally, less than 15% of ICU patients who were screened for the study were ultimately enrolled, suggesting the need for caution before generalizing these results to other critically ill patients.
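As a rough sanity check on the power assumption, the standard normal-approximation sample-size formula for comparing two proportions, with a hypothetical 30% control-group mortality, an 8-percentage-point absolute reduction, and 80% power (the baseline rate and power level are assumptions, not taken from the article), yields an enrollment in the neighborhood of the trial's roughly 900 patients:

```python
# Normal-approximation sample size per group for a two-sided comparison of
# two proportions. Used here only as an illustrative back-of-envelope check;
# the 30% baseline and 80% power are assumptions.
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_b = NormalDist().inv_cdf(power)          # power quantile
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

n = n_per_group(0.30, 0.22)  # 8-percentage-point absolute risk reduction
total = 2 * n                # close to the ~900 patients enrolled
```

Under these assumptions the formula gives roughly 470 patients per group, which is consistent with the authors' caution that effects smaller than 8% could not be detected at this enrollment.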

How do you determine nutrition goals for critically ill patients?  Have you seen a role for permissive underfeeding in your management of certain patient populations? 

Breast-Cancer Screening

Posted by Carla Rothaus • June 12th, 2015

The International Agency for Research on Cancer (IARC) has updated its 2002 guidelines on screening for breast cancer, drawing on data from studies completed in the past 15 years.

In November 2014, experts from 16 countries met at the IARC to assess the cancer-preventive and adverse effects of different methods of screening for breast cancer. In preparation for the meeting, the IARC scientific staff performed searches of the openly available scientific literature according to topics listed in an agreed-upon table of contents. The full report is presented in volume 15 of the IARC Handbooks of Cancer Prevention.

Clinical Pearls

- What data are available to assess the effectiveness of contemporary mammographic screening?

The IARC working group recognized that the relevance of randomized, controlled trials conducted more than 20 years ago should be questioned, given the large-scale improvements since then in both mammographic equipment and treatments for breast cancer. More recent, high-quality observational studies were considered to provide the most robust data with which to evaluate the effectiveness of mammographic screening. The working group gave the greatest weight to cohort studies with long follow-up periods and the most robust designs, which included those that accounted for lead time, minimized temporal and geographic differences between screened and unscreened participants, and controlled for individual differences that may have been related to the primary outcome. Analyses of invitations to screenings (rather than actual attendance) were considered to provide the strongest evidence of screening effectiveness, since they approximate the circumstances of an intention-to-treat analysis in a trial.

- Is there evidence of a reduction in breast cancer mortality with mammographic screening?

Some 20 cohort and 20 case-control studies, all conducted in the developed world (Australia, Canada, Europe, or the United States) were considered by the IARC working group to be informative for evaluating the effectiveness of mammographic screening programs, according to invitation or actual attendance, mostly at 2-year intervals. Most incidence-based cohort mortality studies, whether conducted in women invited to attend screening or women who attended screening, reported a clear reduction in breast-cancer mortality, although some estimates pertaining to women invited to attend were not statistically significant. Women 50 to 69 years of age who were invited to attend mammographic screening had, on average, a 23% reduction in the risk of death from breast cancer; women who attended mammographic screening had a higher reduction in risk, estimated at about 40%. Case-control studies that provided analyses according to invitation to screening were largely in agreement with these results.

Morning Report Questions

Q: Is there benefit to mammographic screening of women 70 to 74 years of age, and is there a benefit for those 40 to 44 years of age?

A: In the IARC analysis, a substantial reduction in the risk of death from breast cancer was consistently observed in women 70 to 74 years of age who were invited to or who attended mammographic screening in several incidence-based cohort mortality studies. Fewer studies assessed the effectiveness of screening in women 40 to 44 or 45 to 49 years of age who were invited to attend or who attended mammographic screening, and the reduction in risk in these studies was generally less pronounced. Overall, the available data did not allow for establishment of the most appropriate screening interval.

Table 1. Evaluation of Evidence Regarding the Beneficial and Adverse Effects of Different Methods of Screening for Breast Cancer in the General Population and in High-Risk Women.

Q: What harms are associated with mammographic screening?

A: Estimates of the cumulative risk of false-positive results differ between organized programs and opportunistic screening. For organized programs, the estimated cumulative risk is about 20% for a woman who has 10 screens between the ages of 50 and 70 years. Less than 5% of all false-positive screens resulted in an invasive procedure. There is an ongoing debate about the preferred method for estimating over-diagnosis. After a thorough review of the available literature, the working group concluded that the most appropriate estimation of over-diagnosis is represented by the difference in the cumulative probabilities of breast-cancer detection in screened and unscreened women, after allowing for sufficient lead time. The Euroscreen Working Group calculated a summary estimate of over-diagnosis of 6.5% (range, 1 to 10%) on the basis of data from European studies that adjusted for both lead time and contemporaneous trends in incidence. The estimated cumulative risk of death from breast cancer due to radiation from mammographic screening is 1 to 10 per 100,000 women, depending on age and the frequency and duration of screening; this is smaller by a factor of at least 100 than the estimated number of deaths from breast cancer prevented by mammographic screening over a wide range of ages. After a careful evaluation of the balance between the benefits and adverse effects of mammographic screening, the working group concluded that there is a net benefit from inviting women 50 to 69 years of age to receive screening.
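The roughly 20% cumulative false-positive figure over 10 screens can be related to a per-screen rate with a simple sketch. The independence of screening rounds is a simplification, and the per-screen rate below is back-calculated for illustration, not reported by the IARC.

```python
# Back-of-envelope link between a per-screen false-positive rate and the
# cumulative risk over repeated screens, assuming (as a simplification)
# that screening rounds are independent.

def cumulative_fp_risk(per_screen_rate, n_screens):
    """Probability of at least one false positive over n_screens."""
    return 1 - (1 - per_screen_rate) ** n_screens

# A per-screen rate near 2.2% reproduces the ~20% cumulative figure
# for 10 biennial screens between ages 50 and 70:
risk = cumulative_fp_risk(0.022, 10)
```

The same complement-rule arithmetic explains why cumulative risk grows much faster than intuition based on a single screen's false-positive rate would suggest.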

A Woman with Decreased Vision and Diplopia

Posted by Carla Rothaus • June 12th, 2015

In the latest Case Record of the Massachusetts General Hospital, a 41-year-old woman presented with decreased visual acuity in the left eye and diplopia. MRI of the head and orbits revealed abnormal soft tissue in the left sphenoid sinus and orbital apex, extending to the left cavernous sinus. A diagnostic procedure was performed.

Lymphoma of the orbit is typically painless and has an indolent course, and thus the presence of pain and subacute progression of symptoms may suggest a different diagnosis or a more aggressive type of lymphoma.

Clinical Pearls

- What conditions may predispose to the development of orbital cellulitis?

Orbital cellulitis commonly results from bacterial infection, most often as an extension of ethmoid or frontal sinusitis, but it may also result from cutaneous trauma, dental abscess, or dacryocystitis.

The organisms most commonly associated with orbital cellulitis are streptococcal and staphylococcal species.

- What diseases are included in the differential diagnosis of an inflammatory process involving the orbit?

Inflammatory disease of the orbit is common, and causes include idiopathic orbital inflammation, IgG4-related orbital inflammation, sarcoidosis, granulomatosis with polyangiitis, and proliferative disorders of histiocytes. Idiopathic orbital inflammation, which is by far the most common of these diseases, was previously known as orbital pseudotumor and refers to inflammation involving any structure of the orbit. Specific descriptive nomenclature includes dacryoadenitis, scleritis, and myositis, although many cases involve diffuse infiltration of the orbital fat. This usually painful condition often results in visible periorbital inflammation and may occasionally extend to involve the paranasal sinuses or dura. The presence of pain is often clinically useful in making the diagnosis, but the absence of pain can be misleading. IgG4-related disease is an orbital inflammatory disorder that is less common than idiopathic orbital inflammation. Patients with IgG4-related disease have clinical and radiographic presentations that are similar to those of patients with idiopathic orbital inflammation, but IgG4-related disease is more likely to be bilateral and associated with an inflammatory disorder of another organ system. Sarcoidosis is a granulomatous disease that may involve the lungs, liver, spleen, eyes, and orbit. Orbital sarcoidosis most often involves the lacrimal glands but may involve other orbital structures and extend through apical foramina to the surrounding structures.

Morning Report Questions

Q: What clinical and imaging features characterize lymphoid tumors involving the orbit?

A: Lymphoid tumors are common infiltrative orbital cancers and range from the most common variety, indolent mucosa-associated lymphoid-tissue lymphomas, to rarer, more aggressive varieties.

Lymphoma may involve any orbital structure — commonly including the lacrimal gland, extraocular muscle, or fat — and may be part of a systemic process. B-cell lymphomas are the most common type to involve the orbit and tend to be unilateral, painless, and slow-growing. On radiography, lymphoma has an infiltrative pattern, with molding to the surrounding structures.

Q: Is CD30 expression a common feature of diffuse large B-cell lymphoma?

A: Diffuse large B-cell lymphoma represents a group of biologically heterogeneous cancers that may be divided into morphologic, genetic, and immunophenotypic subgroups and that include certain specific disease entities. Most cases do not fulfill diagnostic criteria for one of the specific disease entities and are classified as diffuse large B-cell lymphoma (not otherwise specified). CD30 expression is seen in only 14% of cases of diffuse large B-cell lymphoma, and CD30-positive cases have been reported to be associated with a superior 5-year overall and progression-free survival, as compared with CD30-negative cases, a difference that is maintained in both germinal-center and nongerminal-center subgroups. Gene-expression profiling studies have shown a distinct profile, suggesting that CD30-positive cases may represent a distinct subgroup of diffuse large B-cell lymphoma.

Early CPR in Out-of-Hospital Cardiac Arrests — Outcomes and Evaluation of a Mobile-Dispatch System

Posted by Andrea Merrill • June 10th, 2015

The first time I ever performed CPR was on my 19th birthday.  My official title was “summer employee,” a minimum wage job that encompassed a variety of menial but necessary tasks in the emergency department of a busy rural hospital.  One of the accompanying benefits of my job was the chance to learn CPR, and by the summer’s end, by my birthday, I had finally received my CPR certification.

Now, 12 years later, some details are a little hazy, but I remember that the patient was in her 80s and had gone into cardiac arrest at home. The dispatched emergency medical services team had been providing chest compressions for 14 minutes by the time she arrived at the emergency department. I remember how quickly my arms tired, despite an adrenaline surge and my racing heart, as I watched my compressions on the telemetry monitor to confirm that the force was adequate. But, most of all, I remember feeling sad and defeated when we could not save her. Afterward, the emergency physician tried to comfort me, telling me that our efforts were likely futile from the start: most patients who come into the hospital after an outside cardiac arrest do not survive.

Since that fateful afternoon, I have performed CPR many more times, usually with similarly disappointing outcomes. CPR guidelines have changed twice since my initial certification and now focus on fast, hard compressions, often suggested, ironically, to be given to the beat of "Stayin' Alive." However, despite widespread CPR training, patients who develop cardiac arrest outside the hospital generally continue to have poor outcomes, spurring debate on the utility and value of CPR training for nonmedical professionals.

This debate is the subject of two articles from the Center for Resuscitation Science in Sweden, appearing in this week's NEJM, that examine the potential benefits of early bystander CPR in out-of-hospital cardiac arrest. In the first study, Hasselqvist-Ax et al. used the Swedish Cardiac Arrest Registry in a retrospective analysis of outcomes from over 30,000 cases of witnessed cardiac arrest that occurred from 1990 to 2011. About half of the patients received CPR from a bystander before the arrival of emergency medical services, while the other half did not.

Although overall survival at 30 days was poor, survival among those who received bystander CPR was more than double that among those who did not (10.5% vs. 4.0%, p<0.001). These findings remained robust when the authors controlled for multiple potential confounding factors, such as sex, age, cardiac etiology, initial heart rhythm, and location of cardiac arrest (in the home versus in a public place; Table 2 in the original article). In a preplanned subgroup analysis, however, there was a positive interaction for male sex and for arrest outside the home: these two subgroups derived a more marked benefit from bystander CPR than the other subgroups (in contrast to my first CPR patient).
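As a back-of-envelope check, the reported 30-day survival figures can be compared directly; note that this sketch uses only the two proportions quoted in the post, not the registry's actual group denominators:

```python
# Illustrative arithmetic on the survival proportions reported by
# Hasselqvist-Ax et al.; the figures are taken from the post, and no
# patient-level data are used.
cpr_survival = 0.105     # 30-day survival with bystander CPR
no_cpr_survival = 0.040  # 30-day survival without bystander CPR

relative_survival = cpr_survival / no_cpr_survival      # roughly 2.6x
absolute_difference = cpr_survival - no_cpr_survival    # roughly 6.5 points

print(f"relative: {relative_survival:.2f}x, absolute: {absolute_difference:.1%}")
```

The relative figure (more than 2.5 times the survival) is what makes the headline claim of "more than double"; the absolute difference of about 6.5 percentage points is the more sobering number.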

A few characteristics of the bystander-CPR group are of interest. First, men outnumbered women, and patients in this group were more likely to have ventricular fibrillation as their initial ECG rhythm. The bystander-CPR group also had a shorter collapse-to-call time for EMS (by 1 minute), suggesting that people trained in CPR recognize a medical emergency faster and therefore call emergency services sooner. Despite the faster call, however, this group had a longer time from call to EMS arrival (by 2 minutes) and a longer time from collapse to initiation of defibrillation (by 2 minutes). Thus, early bystander CPR yields a survival benefit robust enough to outweigh delays in the arrival of emergency medical services and in the initiation of defibrillation.

The second study, by Svensson and colleagues, evaluated the utility of a mobile phone positioning system to dispatch CPR-trained lay volunteers to out-of-hospital cardiac arrests in an effort to increase the rate of out-of-hospital CPR actually delivered.

Svensson et al. performed a community-based, randomized, controlled trial in the county of Stockholm, Sweden, from April 2012 to December 2013. A total of 5,989 CPR-trained lay volunteers were recruited at the beginning of the study. When an out-of-hospital cardiac arrest was called in to the dispatcher between 6 a.m. and 11 p.m., the mobile-phone positioning system was launched to locate any trained lay volunteers within 500 meters of the incident. Randomization was performed in a 1:1 ratio by the positioning system. In cases assigned to the intervention group, a voice call, a text message, and a web link to a map showing the location were sent to all volunteers within the 500-meter radius. The dispatcher and all investigators remained blinded until the final analysis was completed. The primary outcome was initiation of bystander CPR before the arrival of EMS; secondary outcomes included return of spontaneous circulation and 30-day survival.

A total of 667 out-of-hospital cardiac arrests were randomized and evaluated (46% in the intervention group and 54% in the control group), and baseline characteristics were similar in the two groups. In the intervention group (mobile-phone dispatch system), 61.6% of patients received bystander CPR, as compared with 47.8% in the control group, a difference of about 14 percentage points (p<0.001). However, about 700 out-of-hospital cardiac arrests were never randomized by the dispatcher; accounting for them would cut the effect of the intervention roughly in half, to a difference of only about 7 percentage points across the whole population of persons having an out-of-hospital cardiac arrest.
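The "dilution" of the effect can be sketched with simple arithmetic; the count of 700 unrandomized arrests is an approximation quoted in the post, and the calculation assumes those arrests saw no benefit from the system:

```python
# Rough intention-to-treat dilution of the bystander-CPR effect in the
# Svensson et al. trial. Counts are approximations from the post, not
# the trial's exact denominators.
randomized = 667
not_randomized = 700  # approximate number missed by the dispatcher

# Observed difference in bystander-CPR rates, intervention vs. control.
observed_difference = 0.616 - 0.478  # about 13.8 percentage points

# If the unrandomized arrests derived no benefit, the population-wide
# effect scales by the randomized fraction.
diluted = observed_difference * randomized / (randomized + not_randomized)
print(f"{diluted:.1%}")  # → 6.7%, roughly the 7% cited in the post
```

Because only about half of eligible arrests were actually randomized, the headline 14-point difference roughly halves when the whole population is considered.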

Despite the increase in bystander CPR in the intervention group, there was no significant difference in return of spontaneous circulation or 30-day survival between the two groups, which the authors postulate reflects the trial's not being adequately powered for those outcomes. Moreover, only 65% of the lay volunteers contacted by the mobile-phone dispatch attempted to reach the scene, and among all cases for which calls were dispatched, a lay volunteer initiated CPR before trained personnel arrived in only 13% (see Table 3 of the study).

The technology to dispatch trained laypersons to a nearby cardiac arrest victim has the potential to increase bystander-CPR rates if implemented on a wider scale, but many obstacles may be encountered, including legal, social, and economic ones. One might speculate that the greatest benefit would occur in rural communities, where there may be a considerable delay before emergency medical services can reach the scene, or in senior communities, where many cardiac arrest events occur at home without a CPR-trained and capable layperson nearby. Maybe my first patient 12 years ago would have benefited from such a system, or at least have had a slightly better chance of survival once she reached the hospital, under the care of the emergency room team and my 19-year-old hands.

The authors of these studies are available through June 19th to answer your questions on the NEJM Group Open Forum.


Vasopressin Antagonists

Posted by Carla Rothaus • June 5th, 2015

A new review in the Disorders of Fluids and Electrolytes series summarizes the salient discoveries that culminated in the development of vasopressin antagonists, focusing on their actions, side effects, emerging safety concerns, and important gaps in data. The review also considers how and when to use these agents.

Ample evidence is available to implicate vasopressin, a small polypeptide that is synthesized in the hypothalamus and secreted from the posterior pituitary, in the pathogenesis of many hyponatremic disorders. As the most common electrolyte disorder, hyponatremia is consistently associated with increased mortality and morbidity. The treatment of hyponatremia has been plagued by a paucity of controlled studies and by a lack of reliable and safe approaches. Therefore, the regulatory approval of vasopressin antagonists represents a milestone in the field.

Clinical Pearls

- Describe the vasopressin receptors, and the mechanism of action of vasopressin antagonists.

The V1A receptor is found in liver, smooth muscle, myocardium, brain, and platelets; the V1B receptor is involved in the secretion of corticotropin in the anterior pituitary. The V2 receptor is located primarily in the basolateral membrane of collecting-duct cells. The binding of the hormone to the V2 receptor on the basolateral membrane of the principal cell of the collecting duct activates adenylyl cyclase and generates cyclic AMP (cAMP) from adenosine triphosphate (ATP). In turn, cAMP activates protein kinase A, which phosphorylates aquaporin water channels (AQP2) and induces them to relocate to the luminal membrane. This promotes the reabsorption of water from tubular fluid to blood, rendering the tubular fluid more concentrated (increased osmolality). In the presence of a vasopressin antagonist, the signaling pathway is not activated. As a consequence, the water permeability of the cell remains low and water is not reabsorbed, causing the excretion of dilute urine (decreased osmolality) and thereby increasing the level of sodium in the blood compartment.

Figure 1. Binding of Vasopressin to Its Receptor and Location of Antagonist.

Figure 2. Cellular Effects of Vasopressin and Consequences of Vasopressin Antagonism.

- What vasopressin antagonists are approved for use in the United States?

Tolvaptan and conivaptan (with the latter blocking both the V1A and V2 receptors) have garnered approval for the treatment of euvolemic and hypervolemic hyponatremia in the United States. Conivaptan and tolvaptan have differing affinities for the vasopressin receptor. The relative inhibition of the two receptors (V2:V1 selectivity ratio) is much greater with tolvaptan (by a factor of 29) than with conivaptan (by a factor of 5.7). Thus, conivaptan is a nonselective vasopressin inhibitor, whereas tolvaptan is a more selective V2 inhibitor. Each of the drugs has a half-life that ranges from 6 to 10 hours and has activity that peaks several hours after administration. Both increase urine flow and the excretion of electrolyte-free water, without substantial changes in sodium or potassium excretion, leading to their designation as aquaretic agents.

Table 2. Inhibitory Constants and Pharmacokinetics of Two Vasopressin Antagonists.

Morning Report Questions

Q: When should vasopressin antagonists be avoided?

A: Dependence on the excretion of free water makes the response to vasopressin antagonists too slow to benefit patients with hyponatremia who have severe cerebral symptoms. Such patients require a prompt decrease in the volume of brain water, which is best achieved with hypertonic saline. Similarly, patients with hypovolemic hyponatremia require volume repletion to halt nonosmotic release of vasopressin. Furthermore, V1A-receptor antagonists can cause hypotension in such patients. Neither drug is effective in patients with advanced chronic kidney disease (stage 4 or 5).

Q: Is there consensus regarding the use of vasopressin antagonists?

A: Despite the paucity of data, panels have put forth recommendations for the treatment of hyponatremia. Of the available guidelines, two have garnered the most attention. The first set of guidelines was prepared by an expert panel that was supported by the manufacturer of tolvaptan; the second set, the European Clinical Practice Guideline, was developed by members of three medical societies with an interest in hyponatremia, without support from the pharmaceutical industry. The two panels have divergent recommendations regarding the use of vasopressin antagonists. The European guidelines do not recommend the use of vasopressin antagonists in patients with euvolemia who have SIADH [syndrome of inappropriate secretion of antidiuretic hormone] and recommend against their use in patients with heart failure, advising water restriction and wider use of urea in these patients instead. In contrast, the expert panel recommends that vasopressin antagonists be used in patients with SIADH when water restriction fails and states that vasopressin antagonists are “a viable option along with loop diuretics” in patients with heart failure. The European panelists express concern regarding the neurologic consequences of overcorrection, the risk of hepatotoxicity, and the lack of data supporting a survival benefit. The recommendations of the expert panel also have merit, particularly since none of the alternative approaches have been subjected to the rigors of a regulatory process requiring randomized, controlled trials, nor have any received the approval of a regulatory agency.

Table 3. Recommendations for the Use of Vaptans in the Treatment of Hyponatremia.