A Man with Fever and Rash

Posted by Carla Rothaus • July 2nd, 2015

In the latest Clinical Problem-Solving article, a 67-year-old man with hairy-cell leukemia presented to the clinic after 3 days of fevers, night sweats, arthralgias, and an erythematous vesicular-appearing rash on his back. He had not had headache, shortness of breath, bleeding episodes, vomiting, or diarrhea.

When caring for an immunocompromised patient, the clinician must continually reevaluate the differential diagnosis if the patient has not had the expected response to therapy, bearing in mind that multiple, concurrent disease processes may be present.

Clinical Pearls

What type of lymphocyte is primarily affected in hairy-cell leukemia?

Hairy-cell leukemia is primarily a disorder of B cells, although T cells are also impaired.

What type of hematologic neoplasm is most commonly associated with a neutrophilic dermatosis?

A neutrophilic dermatosis can be seen with virtually any type of hematologic neoplasm, although it is most often associated with acute myeloid leukemia.

Morning Report Questions

Q: What is Sweet’s syndrome?

A: Sweet’s syndrome is an acute, febrile, neutrophilic dermatosis that can be associated with drugs, infection, inflammatory bowel disease, pregnancy, cancer, and many other illnesses. The syndrome is characterized by painful inflammatory nodules, papules, and plaques and is often accompanied by malaise, arthralgias, myalgias, and headaches. Dermal edema may lead to a pseudovesicular pattern. The pathogenesis of Sweet’s syndrome is not well understood, but it may be a hypersensitivity reaction in which the body’s response to an infection, cancer, or other illness stimulates the production of cytokines, including granulocyte colony-stimulating factor. This process eventually activates neutrophils and promotes abnormal migration of these neutrophils into dermal tissues.

Figure 1. Rash on the Patient’s Back at Presentation.

Q: How is Sweet’s syndrome diagnosed?

A: A commonly used diagnostic algorithm for Sweet’s syndrome requires confirmation of two major criteria and at least two of four minor criteria. The major criteria are the abrupt onset of painful or tender erythematous plaques or nodules and histologic findings that reveal a dense neutrophilic infiltration in the dermis without leukocytoclastic vasculitis. The minor criteria are malaise and fever, with a temperature higher than 100.4 degrees F (38 degrees C); association with an underlying cancer, inflammatory disease, pregnancy, vaccine administration, or nonspecific infection; substantial response to treatment with systemic glucocorticoids or second-line agents such as dapsone, colchicine, or potassium iodide; and three of the following abnormal laboratory values: erythrocyte sedimentation rate greater than 20 mm per hour, an elevated level of C-reactive protein, leukocytosis in which the white-cell count is greater than 8000 per cubic millimeter, and a percentage of neutrophils in the differential count that is greater than 70% as determined on a peripheral-blood smear.
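Because the algorithm reduces to a rule check (both major criteria plus at least two of the four minor criteria, one of which itself requires three of four laboratory abnormalities), it can be expressed compactly. The sketch below is illustrative only; the field names and the helper function are hypothetical, and the thresholds are transcribed from the criteria above.

```python
# Illustrative sketch of the Sweet's syndrome diagnostic algorithm described above:
# both major criteria plus at least two of the four minor criteria must be met.
# All field names are hypothetical; thresholds follow the text above.

def meets_sweets_criteria(p):
    major = [
        p["abrupt_painful_plaques_or_nodules"],              # major criterion 1
        p["dense_dermal_neutrophils_without_vasculitis"],    # major criterion 2
    ]

    lab_flags = [
        p["esr_mm_per_hr"] > 20,
        p["crp_elevated"],
        p["wbc_per_mm3"] > 8000,
        p["neutrophil_pct"] > 70,
    ]

    minor = [
        p["malaise_and_fever"] and p["temp_c"] > 38.0,       # fever > 38 C (100.4 F)
        p["associated_condition"],     # cancer, inflammatory disease, pregnancy, vaccine, infection
        p["responds_to_glucocorticoids_or_second_line"],     # e.g., dapsone, colchicine, potassium iodide
        sum(lab_flags) >= 3,                                 # 3 of the 4 abnormal laboratory values
    ]

    return all(major) and sum(minor) >= 2


example = {
    "abrupt_painful_plaques_or_nodules": True,
    "dense_dermal_neutrophils_without_vasculitis": True,
    "malaise_and_fever": True, "temp_c": 38.6,
    "associated_condition": True,
    "responds_to_glucocorticoids_or_second_line": False,
    "esr_mm_per_hr": 45, "crp_elevated": True,
    "wbc_per_mm3": 12500, "neutrophil_pct": 82,
}
print(meets_sweets_criteria(example))  # True: both major criteria and 3 of 4 minor criteria met
```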

Figure 3. Skin Biopsy.

 

 

Potassium Homeostasis

Posted by Carla Rothaus • July 2nd, 2015

The plasma potassium level is normally maintained within narrow limits by multiple mechanisms. The latest article in the Fluids and Electrolytes series reviews the mechanisms that regulate potassium homeostasis and describes the important role that the circadian clock exerts on these processes.

The plasma potassium level is normally maintained within narrow limits (typically, 3.5 to 5.0 mmol per liter) by multiple mechanisms that collectively make up potassium homeostasis. Such strict regulation is essential for a broad array of vital physiologic processes. The importance of potassium homeostasis is underscored by the well-recognized finding that patients with hypokalemia or hyperkalemia have an increased rate of death from any cause. In addition, derangements of potassium homeostasis have been associated with pathophysiologic processes, such as progression of cardiac and kidney disease and interstitial fibrosis.

Clinical Pearls

What systems regulate the balance between potassium intake and renal potassium excretion?

External potassium homeostasis matches renal potassium excretion to potassium intake, after accounting for extrarenal potassium losses and correction of any potassium deficits. External potassium balance involves three control systems: two can be categorized as “reactive,” whereas the third is considered to be “predictive.” A negative-feedback system reacts to changes in the plasma potassium level and regulates the potassium balance: potassium excretion increases in response to increases in the plasma potassium level, leading to a decrease in the plasma level. A reactive feed-forward system that responds to potassium intake in a manner that is independent of changes in the systemic plasma potassium level has also been recognized. A predictive system appears to modulate the effect of the reactive systems, enhancing physiologic mechanisms at the time of day when food intake characteristically occurs — typically, during the day in humans and at night in nocturnal rodents. This predictive system is driven by a circadian oscillator in the suprachiasmatic nucleus of the brain and is entrained to the ambient light-dark cycle. The central oscillator (clock) entrains intracellular clocks in the kidney that generate the cyclic changes in excretion. When food intake is evenly distributed over 24 hours, and physical activity and ambient light are held constant, this system produces a cyclic variation in potassium excretion.
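As a purely illustrative toy model (not a physiologic simulation, and with arbitrary parameter values), the sketch below shows how a reactive feedback term, a reactive feed-forward term, and a predictive circadian modulation could combine to shape excretion over a day.

```python
# Toy illustration (not a physiologic model) of the three control systems described
# above: reactive negative feedback on plasma K+, reactive feed-forward tied to intake,
# and a predictive circadian modulation. All parameters are arbitrary.
import math

SETPOINT = 4.2        # plasma K+, mmol/L (illustrative)
ECF_VOLUME = 14.0     # liters of extracellular fluid (illustrative)

def excretion_rate(plasma_k, intake_rate, hour):
    feedback     = 2.0 * (plasma_k - SETPOINT)                          # reacts to the plasma level
    feed_forward = 0.5 * intake_rate                                    # reacts to intake itself
    circadian    = 1.0 + 0.5 * math.sin(2 * math.pi * (hour - 6) / 24)  # predictive, peaks midday
    return max(0.0, circadian * (1.0 + feedback + feed_forward))        # mmol/h

plasma_k, dt = SETPOINT, 0.1
for step in range(int(48 / dt)):                     # simulate 48 hours
    hour = (step * dt) % 24
    intake = 10.0 if 7 <= hour <= 19 else 0.0        # daytime eating, mmol/h
    dk = (intake - excretion_rate(plasma_k, intake, hour)) * dt / ECF_VOLUME
    plasma_k += dk
    if step % 60 == 0:                               # print every 6 simulated hours
        print(f"hour {step*dt:5.1f}  plasma K+ {plasma_k:4.2f} mmol/L")
```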

Figure 3. Circadian Rhythm of Urinary Potassium Excretion in Humans during Two Levels of Potassium Intake.

What is “internal potassium homeostasis”?

Internal potassium homeostasis is the maintenance of an asymmetric distribution of total body potassium between the intracellular and extracellular fluid (approximately 98% intracellular and only about 2% extracellular). This distribution reflects a balance between active cellular uptake by sodium-potassium adenosine triphosphatase, an enzyme that pumps sodium out of cells while pumping potassium into cells (the sodium-potassium pump rate), and passive potassium efflux (the leak rate). Little increase in the plasma potassium level occurs during potassium absorption from the gut in normal persons, owing to potassium excretion by the kidney and potassium sequestration by the liver and muscle. Insulin, catecholamines, and mineralocorticoids stimulate potassium uptake into muscle and other tissues. Between meals, the plasma potassium level is nearly constant, as potassium excretion is balanced by the release of sequestered intracellular potassium.
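A back-of-the-envelope calculation shows why this buffering matters; all of the numbers below (body weight, total body potassium, extracellular volume, meal content) are common textbook-style assumptions rather than figures from the article.

```python
# Back-of-the-envelope sketch of the ~98%/2% intracellular/extracellular split described
# above. All numeric assumptions (body weight, total body K+, ECF volume, meal content)
# are illustrative, not taken from the article.
total_body_k_mmol = 70 * 50                     # ~50 mmol/kg for a 70-kg adult (assumption)
extracellular_k   = 0.02 * total_body_k_mmol    # ~2% of total, about 70 mmol
ecf_volume_liters = 14.0                        # rough extracellular fluid volume (assumption)
meal_k_mmol       = 40.0                        # potassium in a potassium-rich meal (assumption)

rise_if_unbuffered = meal_k_mmol / ecf_volume_liters
print(f"Extracellular potassium pool: ~{extracellular_k:.0f} mmol")
print(f"Plasma K+ rise if the meal stayed extracellular: ~{rise_if_unbuffered:.1f} mmol/L")
# ~2.9 mmol/L -- far above the normal 3.5-5.0 mmol/L band, which is why insulin- and
# catecholamine-stimulated cellular uptake and renal excretion must buffer each meal.
```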

Figure 1. Overview of Potassium Homeostasis.

Morning Report Questions

Q: What factors influence renal potassium handling?

A: The healthy kidney has a robust capacity to excrete potassium, and under normal conditions, most persons can ingest very large quantities of potassium (400 mmol per day or more) without clinically significant hyperkalemia. Potassium that is filtered at the glomerulus is largely reabsorbed in the proximal tubule and the loop of Henle. Consequently, the rate of renal potassium excretion is determined mainly by the difference between potassium secretion and potassium reabsorption in the cortical distal nephron and collecting duct. Both of these processes are regulated — potassium ingestion stimulates potassium secretion and inhibits potassium reabsorption. Factors that regulate potassium secretion and reabsorption can be divided into those that serve to preserve potassium balance (homeostatic) and those that affect potassium excretion without intrinsically acting to preserve potassium balance (contra-homeostatic). Examples of the latter include flow rate in the renal tubular lumen and the luminal sodium level. The acid-base balance also affects potassium excretion. The predominant effect of acidosis is to inhibit potassium clearance, whereas the predominant effect of alkalosis is to stimulate potassium clearance.
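A rough filtered-load calculation makes the point that filtration is not the regulated step; the glomerular filtration rate and intake values below are textbook-style assumptions, not figures from the article.

```python
# Rough filtered-load arithmetic illustrating why distal secretion/reabsorption, not
# filtration, sets renal potassium excretion. GFR and intake values are textbook-style
# assumptions, not figures from the article.
gfr_l_per_day   = 180.0      # glomerular filtration rate (assumption)
plasma_k_mmol_l = 4.0        # mid-normal plasma potassium
daily_intake    = 90.0       # typical dietary potassium, mmol/day (assumption)

filtered_load = gfr_l_per_day * plasma_k_mmol_l    # ~720 mmol/day reaches the tubule
excreted      = daily_intake                       # at steady state, output roughly matches intake
fraction_reabsorbed_upstream = 1 - excreted / filtered_load

print(f"Filtered potassium load: {filtered_load:.0f} mmol/day")
print(f"Fraction reclaimed before the distal nephron: ~{fraction_reabsorbed_upstream:.0%}")
# Because most of the filtered load is reabsorbed in the proximal tubule and loop of Henle,
# the amount finally excreted is tuned mainly by secretion minus reabsorption in the
# cortical distal nephron and collecting duct, as described above.
```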

Table 1. Factors Regulating Potassium Secretion and Potassium Reabsorption.

Figure 2. Model of the Major Cell Types of the Cortical Collecting Duct.

Q: How are the central and peripheral biological clocks synchronized?

A: In vertebrates, a central clock in the suprachiasmatic nucleus of the brain and peripheral clocks that are present in virtually all cells regulate circadian rhythms. Among the many physiologic functions in humans that show circadian rhythms, few are more consistent and stable than the circadian rhythm of urinary potassium excretion. The timing signals from the central clock to the peripheral clocks remain uncertain, but adrenal corticosteroids and agents from other loci have been proposed or identified. Although the action of cortisol in promoting potassium excretion would suggest a direct (nonclock) hormonal effect, studies by Moore-Ede and colleagues indicate that cortisol serves as a clock synchronizer. Aldosterone also affects certain circadian clocks and, in particular, acutely induces the expression of period circadian clock 1 (PER1) in the kidney.

Advanced Dementia

Posted by Carla Rothaus • June 26th, 2015

Advanced dementia is a leading cause of death in the United States. A new Clinical Practice article covers treatment decisions guided by the goals of care — comfort is usually the primary goal, and tube feeding is not recommended.

In 2014, Alzheimer’s disease affected approximately 5 million persons in the United States, a number that is projected to increase to approximately 14 million by 2050.

Clinical Pearls

- What are the features of advanced dementia?

The features of advanced dementia include profound memory deficits (e.g., inability to recognize family members), minimal verbal abilities, inability to ambulate independently, inability to perform any activities of daily living, and urinary and fecal incontinence.

- Are there barriers to hospice care in the United States for patients with advanced dementia?

Eligibility guidelines for the Medicare hospice benefit require that patients with dementia have an expected survival of less than 6 months, as assessed by their reaching stage 7c on the Functional Assessment Staging tool (a scale ranging from stage 1 to stage 7f, with stage 7f indicating the most severe dementia) and having had one of six specified complications in the past year. However, these eligibility guidelines do not accurately predict survival. Although hospice enrollment of patients with dementia has increased over the past decade, many barriers to accessing hospice care persist, particularly the requirement of a life expectancy of less than 6 months. Given the challenge of predicting life expectancy among patients with advanced dementia, access to palliative care should be determined on the basis of a desire for comfort care rather than on prognostic estimates.

Table 1. Hospice Guidelines for Estimating Survival of Less Than 6 Months in a Patient with Dementia.

Morning Report Questions

Q: What are some of the concerns regarding current management of patients with advanced dementia, especially when comfort is the goal of treatment?

A: Infections are very common in patients with advanced dementia. The Study of Pathogen Resistance and Exposure to Antimicrobials in Dementia (SPREAD), which prospectively followed 362 nursing home residents with advanced dementia, showed that in a 12-month period, two thirds were suspected to have infections, most commonly of the urinary or respiratory tract. In SPREAD, 75% of suspected infections were treated with antimicrobials, but less than half of all treated infections and only 19% of treated urinary tract infections met minimal clinical criteria for the initiation of antimicrobials. An estimated 75% of hospitalizations may be medically unnecessary or discordant with patients’ preferences and thus avoidable. The goal of care for most patients is comfort, and hospitalization seldom promotes that goal, except in rare cases, such as the treatment of hip fractures or when palliative care is unavailable. Daily medications should align with the goals of care, and drugs of questionable benefit should be discontinued. In 2008, an expert panel declared that the use of certain medications is inappropriate (i.e., not clinically beneficial) in patients with advanced dementia for whom comfort is the goal. Cross-sectional analyses of a nationwide pharmacy database showed that 54% of nursing home residents with advanced dementia were prescribed at least one of those medications. Of all the inappropriate medications prescribed, the most common were cholinesterase inhibitors (36%), memantine (25%), and statins (22%). Medications of questionable benefit accounted for 35% of the mean 90-day medication expenditures for the nursing home residents with advanced dementia to whom they were prescribed.

Q: What is the recommended approach to the care of patients with advanced dementia?

A: Advance care planning is a cornerstone of the care of patients with advanced dementia. Providers should educate health care proxies about the disease trajectory (i.e., the final stage of an incurable disease) and expected clinical complications (e.g., eating problems and infections). Providers should also counsel proxies about the basic tenet of surrogate decision making, which is to first consider written or oral advance directives previously expressed by the patient and then choose treatment options that align with those directives (e.g., a do-not-hospitalize order) before acute problems arise, ideally avoiding treatments that are inconsistent with the patient’s wishes. In the absence of clear directives, proxies must either exercise substituted judgment according to what they think the patient would want or make a decision based on the patient’s best interests. Observational studies have shown that patients with advanced dementia who had advance directives had better palliative care outcomes (e.g., less tube feeding, fewer hospitalizations, and greater enrollment in hospice) than those without advance directives. Treatment decisions for patients with advanced dementia should be guided by the goals of care, with providers and patients’ health care proxies sharing in the decision making.

A Newborn Girl with Hyperbilirubinemia

Posted by Carla Rothaus • June 26th, 2015

In the latest Case Record of the Massachusetts General Hospital, a newborn girl was transferred to this hospital because of hypotension, coagulopathy, anemia, and hyperbilirubinemia. Generalized edema, anuria, and respiratory distress developed, and the trachea was intubated. Diagnostic procedures were performed.

Neonatal hemochromatosis is the most common cause of neonatal liver failure and the leading indication for liver transplantation in infants. It is characterized by progressive iron deposition during the fetal period, predominantly targeting the liver, pancreas, heart, and thyroid and salivary glands but sparing the reticuloendothelial system.

Clinical Pearls

- Is neonatal hemochromatosis a genetic disease?

Neonatal hemochromatosis was considered for decades to be part of the hemochromatosis family and to have a genetic cause. Despite multiple attempts, no candidate genes were identified. Also known as gestational alloimmune liver disease, neonatal hemochromatosis is now recognized to be a congenital alloimmune hepatitis and is defined as the association of severe neonatal liver disease with iron deposition (siderosis) in extrahepatic tissue. Neonatal hemochromatosis is associated with a high recurrence rate (80 to 92%) in subsequent pregnancies, a pattern that cannot be explained by genetic inheritance but is consistent with an alloimmune pathogenesis.

- What are typical clinical features associated with this disease?

Extensive liver injury is typically present at birth, and some signs — such as placental edema, oligohydramnios, intrauterine growth retardation, prematurity, and stillbirth — can be detected antenatally. Hypoalbuminemia, hypoglycemia, coagulopathy, a low fibrinogen level, thrombocytopenia, and eventual multiorgan failure are the hallmarks of the disease. Low aminotransferase levels at birth are consistent with a long-standing antenatal process.

Morning Report Questions

Q: What diagnostic tests are obtained when clinical and laboratory findings suggest a diagnosis of neonatal hemochromatosis?

A: Gradient-echo MRI has become the standard noninvasive diagnostic procedure for neonatal hemochromatosis. All newborns have a relatively large amount of iron deposited in the liver because of prenatal maternal transfer; therefore, to make the diagnosis of neonatal hemochromatosis, abnormal iron storage in the pancreas, which is not seen in healthy newborns, must also be established. T1-weighted and T2-weighted MRI images can be helpful in detecting iron deposits in the liver, pancreas, and thyroid gland. The presence of iron deposits in biopsy specimens of affected organs has become the standard in establishing the diagnosis. Since marked coagulopathy makes a liver biopsy exceedingly difficult to perform, biopsy of the minor salivary gland offers an excellent alternative.

Biopsy of the minor salivary gland is a useful method for detecting evidence of extrahepatic hemosiderosis and is a highly sensitive and specific test for neonatal hemochromatosis.

Figure 1. MRI Scans of the Liver.

Figure 2. Biopsy Specimens.

Q: What treatment options are available, and what survival rates are associated with this disease?

A: Therapy for neonatal hemochromatosis includes treatment for liver failure with antioxidant cocktails (including vitamin E, N-acetylcysteine, prostaglandins, and selenium), fresh-frozen plasma, and cryoprecipitate. Infusions of intravenous immune globulin (IVIG) and exchange transfusion have also been suggested. Exchange transfusion is performed to remove any maternal alloantibodies remaining in the fetal circulation, and IVIG is administered to displace specific reactive IgG antibodies that are bound to target antigens and to bind with circulating complement. Favorable outcomes among patients with neonatal hemochromatosis have been described; however, the prognosis remains seriously guarded, and the disease is associated with an overall survival of 36%. In a large case series, the survival rate was 51% among patients who had undergone a liver transplantation and 22% among those who had not undergone a transplantation.

New Interactive Medical Case: Test Your Skills

Posted by Karen Buckley • June 24th, 2015

Approximately 10 minutes after being stung on the right lower leg by a yellow jacket (a type of wasp), a 45-year-old man began to feel lightheaded and nauseated. He called emergency medical services, and paramedics arrived 10 minutes later. They found him to be alert but anxious, with scattered areas of erythema on his trunk and a small, localized area of tenderness, swelling, and erythema at the site of the sting. His blood pressure was 70/45 mm Hg, and his heart rate was 108 beats per minute.

Test your diagnostic and therapeutic skills with this new Interactive Medical Case on NEJM.org.  Receive feedback on your choices and learn more about the condition and optimal treatment steps.

Browse previous Interactive Medical Cases. Try one or all 37 cases and earn CME credit or MOC points now!

Ischemic Optic Neuropathies

Posted by Carla Rothaus • June 19th, 2015

A new review article covers the diagnosis, pathophysiological features, and prognosis of ischemic optic neuropathy, a relatively common cause of visual loss in older patients, including visual loss after cardiac surgery. It must be distinguished from inflammatory optic neuritis.

Ischemic optic neuropathy (ION) refers to all ischemic causes of optic neuropathy. ION is classified as anterior ION or posterior ION, depending on the segment of the optic nerve that is affected. Anterior ION accounts for 90% of ION cases. Anterior ION and posterior ION are further categorized as nonarteritic or arteritic. The term arteritic refers to ION caused by small-vessel vasculitis, most often giant-cell arteritis.

Clinical Pearls

- What is the clinical presentation of nonarteritic anterior ischemic optic neuropathy, and how is it diagnosed?

Nonarteritic anterior ION is manifested as isolated, sudden, painless, monocular vision loss with edema of the optic disc. Progressive worsening of vision over a period of a few days or a few weeks is not uncommon, presumably related to worsening ischemia in the context of a local compartment syndrome associated with the disc edema. The severity of vision loss varies from normal visual acuity with visual-field defects to profound vision loss. The diagnosis of acute nonarteritic anterior ION is primarily clinical and relies on demonstration of vision loss with a relative afferent pupillary defect and edema of the optic disc, which consists of the optic-nerve head. A crucial finding on examination is the presence of a small, crowded optic-nerve head with a small physiological cup. This small cup-to-disc ratio defines a “disc at risk.” Although this finding is difficult to see during the acute phase of nonarteritic anterior ION when the optic disc is swollen, examination of the normal eye should show a disc at risk. Imaging of the optic nerve is typically normal in patients with nonarteritic anterior ION.

Figure 1. Blood Supply to the Optic Nerve and Anatomy of the Optic-Nerve Head.

- What causes nonarteritic anterior ischemic optic neuropathy, and can it be successfully treated?

Although nonarteritic anterior ION results from disease of the small vessels supplying the anterior portion of the optic nerve, its exact cause remains unknown. A disc at risk is essential for the development of nonarteritic anterior ION. Other optic-nerve anomalies resulting in crowding of the optic-nerve head, such as optic-nerve drusen and papilledema, may also confer a predisposition to nonarteritic anterior ION. The absence of a disc at risk in a patient with presumed nonarteritic anterior ION should raise the possibility of arteritic anterior ION or another cause of optic neuropathy. Unlike arteritic anterior ION, nonarteritic anterior ION has no established treatment. Thus, the most important management concerns are distinguishing nonarteritic anterior ION from arteritic anterior ION and detecting and controlling vascular risk factors in cases of nonarteritic anterior ION. Most proposed therapeutic interventions in nonarteritic anterior ION are based on the presumed mechanism and cascade of events. Although multiple therapies have been attempted, most have not been adequately studied, and animal models of nonarteritic anterior ION have emerged only in the past several years. Given the paucity of data regarding the exact pathophysiology of nonarteritic anterior ION and its treatment, the maxim “first, do no harm” is most important in the management of this devastating optic neuropathy.

Figure 2. Presumed Pathophysiology of Nonarteritic Anterior ION and Potential Treatment Strategies.

Figure 3. Nonarteritic Anterior ION in the Context of a Disc at Risk.

Morning Report Questions

Q: How do the clinical findings of posterior ischemic optic neuropathy differ from those of anterior ischemic optic neuropathy, and is the diagnostic evaluation the same for both?

A: When the posterior portion of the optic nerve is ischemic, there is no visible disc edema and the term “posterior ION” is used. Nonarteritic posterior ION is exceedingly rare, as compared with nonarteritic anterior ION. The typical presentation of nonarteritic posterior ION is isolated, painless, sudden loss of vision in one eye, with a relative afferent pupillary defect and a normal-appearing optic-nerve head. As expected with any optic neuropathy, optic-disc pallor develops 4 to 6 weeks later. The clinical diagnosis of nonarteritic posterior ION is difficult and remains a diagnosis of exclusion, with other causes of posterior optic neuropathy (e.g., inflammatory and compressive causes) ruled out by high-quality MRI of the brain and orbits with contrast and with fat suppression and by an extensive workup for underlying systemic inflammatory disorders. Giant-cell arteritis is the most common cause of posterior ION, and must be considered in every patient older than 50 years of age who has posterior ION.

Q: What clinical findings may help to distinguish arteritic from nonarteritic anterior ischemic optic neuropathy?

A: The clinical presentation of arteritic ION is similar to that of nonarteritic ION, but several “red flags” should raise clinical suspicion for arteritic ION. Systemic symptoms of giant-cell arteritis may precede visual loss by months; however, about 25% of patients with biopsy-confirmed giant-cell arteritis present with isolated ION without any systemic symptoms (so-called occult giant-cell arteritis). The degree of visual loss is often more severe in arteritic anterior ION than in nonarteritic anterior ION. In one study, 54% of the patients with arteritic anterior ION were unable to count fingers as compared with 26% of the patients with nonarteritic anterior ION. Untreated arteritic ION becomes bilateral in days to weeks in at least 50% of cases. The affected swollen optic nerve is often pale immediately in giant-cell arteritis, whereas pallor is delayed in nonarteritic anterior ION. The finding of associated retinal or choroidal ischemia in addition to ION is highly suggestive of giant-cell arteritis. Finally, a disc at risk is not necessary for arteritic anterior ION; the absence of a crowded optic disc in the second eye of a patient with anterior ION should make the diagnosis of nonarteritic anterior ION unlikely and should increase the probability of arteritic anterior ION.

A Man with Chest Pain and Shortness of Breath

Posted by Carla Rothaus • June 19th, 2015

In the latest Case Record of the Massachusetts General Hospital, a 71-year-old man presented with sudden chest pain, diaphoresis, shortness of breath, and hypotension. An electrocardiogram showed new ST-segment elevations. Ten days earlier, an implantable cardioverter–defibrillator (ICD) had been placed. Diagnostic procedures were performed.

Complications of ICD placement are well described, and ICD lead migration or dislodgment occurs within a few days after implantation in approximately 0.14 to 1.2% of patients. Early clinical signs of lead perforation can be subtle and nonspecific, so a rapid and focused evaluation is required, even in the absence of signs of tamponade on physical examination.

Clinical Pearls

- Are there known risk factors for perforation of an ICD lead?

A few risk factors for lead perforation, including female sex and a low body-mass index, have been described. Some data suggest that myocardial fibrosis, which is frequently observed in patients with ischemic cardiomyopathy, may be protective against perforation. Ventricular hypertrophy and diabetes, both of which are associated with fibrosis, may be associated with reduced rates of perforation.

- What is the typical time course for ICD lead dislodgment, and what risk does it carry for a related major adverse event?

The overall incidence of ICD lead dislodgment is highest in the first weeks after implantation, before myocardial fibrosis occurs at the insertion site. Perforations that occur 1 month or more after implantation are rare but have been reported. In nearly 11% of patients with lead dislodgment, another related major adverse event (e.g., cardiac perforation and tamponade, pneumothorax, or cardiac arrest) or in-hospital death occurs.

Morning Report Questions

Q: Is lead perforation challenging to diagnose?

A: During the diagnostic evaluation of a patient who has had any recent medical or surgical procedure, the clinician should consider and rule out periprocedural complications. The manifestations of ICD lead migration are protean and may be surprisingly subtle. However, the events after a lead perforation may evolve rapidly, and a normal overall examination or ultrasound examination at any one point in time cannot rule out a perforation. To make a diagnosis of lead perforation, a high index of suspicion is required, and the diagnostic strategy must expeditiously rule out other lethal possibilities, including aortic dissection and pulmonary embolism. It is important to note that chest radiography is not very sensitive in the detection of lead migration. It is also important to remember that changes in lead measurements can reveal lead migration even in the absence of definitive imaging findings.

Figure 3. CT Images of the Chest.

Q: How should you manage perforation of an implantable cardioverter-defibrillator lead?

A: Lead perforation must be addressed promptly, because it can precipitate life-threatening cardiac tamponade within minutes. The key issue in the management of a lead perforation is to be prepared for decompensation or a disaster at the time the lead is extracted. Extraction of a migrated lead is performed in the operating room; the patient should receive general anesthesia and be monitored with transesophageal echocardiography. In the majority of cases, a lead associated with a perforation can be withdrawn without substantial bleeding into the pericardium. Even when the presence of the ventricular lead tip outside the myocardial wall is confirmed by CT scan, management of lead migration cannot be based on imaging findings alone. Careful correlation between the imaging findings and repeat device interrogation is required before a treatment strategy can be formed. It is possible to see a lead tip located beyond the myocardial border on CT without finding any evidence of change in lead measurements or pericardial effusion; this is frequently termed an asymptomatic lead perforation and does not necessarily require revision of the lead.

Permissive Underfeeding in the ICU

Posted by Rena Xu • June 17th, 2015

Nutrition among critically ill patients is widely considered important, but the ideal caloric targets remain a subject of debate.  Some believe higher caloric intake is helpful and can reduce mortality; others argue the exact opposite, pointing to studies linking caloric restriction to lower morbidity, as long as protein intake is adequate. This debate has prompted investigation of an underfeeding strategy as a way to reduce mortality in critically ill patients.

The Permissive Underfeeding versus Target Enteral Feeding in Adult Critically Ill Patients (PermiT) trial enrolled close to 900 critically ill adults at seven centers in Saudi Arabia and Canada. These patients were randomized either to standard feeding (70-100% of calculated caloric requirements) or to permissive underfeeding (40-60% of caloric requirements) for up to two weeks. The various centers delivered enteral feeding according to their own protocols; calculations of caloric intake also accounted for calories from parenteral nutrition, intravenous dextrose, and propofol (1.1 kcal per milliliter). Protein intake was kept the same for the two groups, with the underfeeding group receiving additional protein supplements as well as saline or water to match the protein amount and volume received by the standard feeding group. The primary outcome was 90-day mortality. The investigators predicted an 8% absolute risk reduction in favor of the underfeeding group.
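As a rough illustration of that caloric bookkeeping, the sketch below tallies daily calories from all sources and expresses them as a percentage of the calculated requirement. The function name and the patient values are hypothetical; only the 1.1 kcal per milliliter propofol figure comes from the trial description.

```python
# Illustrative bookkeeping of daily caloric intake as a percentage of the calculated
# requirement, counting non-feed sources as in the PermiT protocol description above
# (propofol at 1.1 kcal/mL). The requirement and the daily inputs are hypothetical values.
PROPOFOL_KCAL_PER_ML = 1.1

def percent_of_target(requirement_kcal, enteral_kcal, parenteral_kcal,
                      dextrose_kcal, propofol_ml):
    total = (enteral_kcal + parenteral_kcal + dextrose_kcal
             + propofol_ml * PROPOFOL_KCAL_PER_ML)
    return 100.0 * total / requirement_kcal

# Hypothetical patient with a calculated requirement of 2000 kcal/day:
pct = percent_of_target(requirement_kcal=2000, enteral_kcal=800,
                        parenteral_kcal=0, dextrose_kcal=100, propofol_ml=60)
print(f"{pct:.0f}% of calculated requirement")   # ~48%, within the 40-60% underfeeding band
```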

As intended, patients in the underfeeding group consumed fewer calories than those in the standard feeding group (average intake, 46% vs. 71% of daily requirements). But the study found no difference in 90-day mortality between the two groups (29% in the standard group vs. 27% in the underfeeding group; P=0.58). There were also no differences between the groups in a number of secondary outcomes, including length of ICU stay, in-hospital mortality, 28-day mortality, and 180-day mortality. Further, based on limited subgroup analyses, the study did not identify any subpopulations with differences in mortality between the two strategies.

“The collective results of our study and the two previous trials add to a growing body of research that suggests that standard feeding goals in critically ill patients do not improve clinical outcomes,” the authors write.

While underfeeding did not demonstrate a mortality benefit in this study, the authors note that the study was powered to detect an 8% absolute risk reduction, which means smaller treatment effects cannot be ruled out. They also observe that some of the enrolled patients, particularly in the standard feeding group, failed to reach their target caloric intake, which would have narrowed the gap in caloric intake between the two groups. Finally, less than 15% of the ICU patients who were screened for the study were ultimately enrolled, suggesting the need for caution before generalizing these results to other critically ill patients.
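For context, that power assumption can be roughly reconstructed with the standard two-proportion sample-size formula. The sketch below is our illustration, not the trial's published calculation; the assumed baseline mortality of about 29% simply mirrors the observed standard-feeding arm.

```python
# Rough two-proportion sample-size check: how many patients are needed to detect an
# 8-percentage-point absolute risk reduction with 80% power at alpha = 0.05?
# The ~29% baseline mortality mirrors the observed standard-feeding arm; this is an
# illustrative reconstruction, not the trial's published power calculation.
from scipy.stats import norm

p1, p2 = 0.29, 0.21                 # assumed mortality with standard feeding vs. underfeeding
alpha, power = 0.05, 0.80
z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)

n_per_group = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
print(f"~{n_per_group:.0f} per group, ~{2 * n_per_group:.0f} total")   # roughly 450 / 900
```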

How do you determine nutrition goals for critically ill patients?  Have you seen a role for permissive underfeeding in your management of certain patient populations? 

Breast-Cancer Screening

Posted by Carla Rothaus • June 12th, 2015

The International Agency for Research on Cancer (IARC) has updated its 2002 guidelines on screening for breast cancer, drawing on data from studies completed in the past 15 years.

In November 2014, experts from 16 countries met at the International Agency for Research on Cancer (IARC) to assess the cancer-preventive and adverse effects of different methods of screening for breast cancer. In preparation for the meeting, the IARC scientific staff performed searches of the openly available scientific literature according to topics listed in an agreed-upon table of contents. The full report is presented in volume 15 of the IARC Handbooks of Cancer Prevention.

Clinical Pearls

- What data are available to assess the effectiveness of contemporary mammographic screening?

The IARC working group recognized that the relevance of randomized, controlled trials conducted more than 20 years ago should be questioned, given the large-scale improvements since then in both mammographic equipment and treatments for breast cancer. More recent, high-quality observational studies were considered to provide the most robust data with which to evaluate the effectiveness of mammographic screening. The working group gave the greatest weight to cohort studies with long follow-up periods and the most robust designs, which included those that accounted for lead time, minimized temporal and geographic differences between screened and unscreened participants, and controlled for individual differences that may have been related to the primary outcome. Analyses of invitations to screenings (rather than actual attendance) were considered to provide the strongest evidence of screening effectiveness, since they approximate the circumstances of an intention-to-treat analysis in a trial.

- Is there evidence of a reduction in breast cancer mortality with mammographic screening?

Some 20 cohort and 20 case-control studies, all conducted in the developed world (Australia, Canada, Europe, or the United States), were considered by the IARC working group to be informative for evaluating the effectiveness of mammographic screening programs, according to invitation or actual attendance, mostly at 2-year intervals. Most incidence-based cohort mortality studies, whether conducted in women invited to attend screening or women who attended screening, reported a clear reduction in breast-cancer mortality, although some estimates pertaining to women invited to attend were not statistically significant. Women 50 to 69 years of age who were invited to attend mammographic screening had, on average, a 23% reduction in the risk of death from breast cancer; women who attended mammographic screening had a greater reduction in risk, estimated at about 40%. Case-control studies that provided analyses according to invitation to screening were largely in agreement with these results.

Morning Report Questions

Q: Is there benefit to mammographic screening of women 70 to 74 years of age, and is there a benefit for those 40 to 44 years of age?

A: In the IARC analysis, a substantial reduction in the risk of death from breast cancer was consistently observed in women 70 to 74 years of age who were invited to or who attended mammographic screening in several incidence-based cohort mortality studies. Fewer studies assessed the effectiveness of screening in women 40 to 44 or 45 to 49 years of age who were invited to attend or who attended mammographic screening, and the reduction in risk in these studies was generally less pronounced. Overall, the available data did not allow for establishment of the most appropriate screening interval.

Table 1. Evaluation of Evidence Regarding the Beneficial and Adverse Effects of Different Methods of Screening for Breast Cancer in the General Population and in High-Risk Women.

Q: What harms are associated with mammographic screening?

A: Estimates of the cumulative risk of false positive results differ between organized programs and opportunistic screening. For organized programs, the estimated cumulative risk is about 20% for a woman who has 10 screens between the ages of 50 and 70 years. Less than 5% of all false positive screens resulted in an invasive procedure. There is an ongoing debate about the preferred method for estimating over-diagnosis. After a thorough review of the available literature, the working group concluded that the most appropriate estimate of over-diagnosis is the difference in the cumulative probabilities of breast-cancer detection in screened and unscreened women, after allowing for sufficient lead time. The Euroscreen Working Group calculated a summary estimate of over-diagnosis of 6.5% (range, 1 to 10%) on the basis of data from European studies that adjusted for both lead time and contemporaneous trends in incidence. The estimated cumulative risk of death from breast cancer due to radiation from mammographic screening is 1 to 10 per 100,000 women, depending on age and the frequency and duration of screening. This risk is smaller by a factor of at least 100 than the estimated number of deaths from breast cancer prevented by mammographic screening across a wide range of ages. After a careful evaluation of the balance between the benefits and adverse effects of mammographic screening, the working group concluded that there is a net benefit from inviting women 50 to 69 years of age to receive screening.
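The roughly 20% cumulative false-positive figure is about what independent per-screen false-positive probabilities compound to over 10 rounds. The sketch below back-solves an illustrative per-screen rate; it is not a number quoted in the IARC report.

```python
# Back-of-the-envelope check on the ~20% cumulative false-positive risk over 10 screening
# rounds, assuming independent screens. The per-screen false-positive rate is back-solved
# for illustration; it is not a figure quoted in the IARC report.
per_screen_fp = 0.022      # assumed per-screen false-positive probability
n_screens = 10             # biennial screening from age 50 to 70

cumulative_fp = 1 - (1 - per_screen_fp) ** n_screens
print(f"Cumulative false-positive risk over {n_screens} screens: ~{cumulative_fp:.0%}")  # ~20%
```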

A Woman with Decreased Vision and Diplopia

Posted by Carla Rothaus • June 12th, 2015

In the latest Case Record of the Massachusetts General Hospital, a 41-year-old woman presented with decreased visual acuity in the left eye and diplopia. MRI of the head and orbits revealed abnormal soft tissue in the left sphenoid sinus and orbital apex, extending to the left cavernous sinus. A diagnostic procedure was performed.

Lymphoma of the orbit is typically painless and has an indolent course, and thus the presence of pain and subacute progression of symptoms may suggest a different diagnosis or a more aggressive type of lymphoma.

Clinical Pearls

- What conditions may predispose to the development of orbital cellulitis?

Orbital cellulitis commonly results from bacterial infection, most often as an extension of ethmoid or frontal sinusitis, but it may also result from cutaneous trauma, dental abscess, or dacryocystitis.

The organisms most commonly associated with orbital cellulitis are streptococcal and staphylococcal species.

- What diseases are included in the differential diagnosis of an inflammatory process involving the orbit?

Inflammatory disease of the orbit is common, and causes include idiopathic orbital inflammation, IgG4-related orbital inflammation, sarcoidosis, granulomatosis with polyangiitis, and proliferative disorders of histiocytes. Idiopathic orbital inflammation, which is by far the most common of these diseases, was previously known as orbital pseudotumor and refers to inflammation involving any structure of the orbit. Specific descriptive nomenclature includes dacryoadenitis, scleritis, and myositis, although many cases involve diffuse infiltration of the orbital fat. This usually painful condition often results in visible periorbital inflammation and may occasionally extend to involve the paranasal sinuses or dura. The presence of pain is often clinically useful in making the diagnosis, but the absence of pain can be misleading. IgG4-related disease is an orbital inflammatory disorder that is less common than idiopathic orbital inflammation. Patients with IgG4-related disease have clinical and radiographic presentations that are similar to those of patients with idiopathic orbital inflammation, but IgG4-related disease is more likely to be bilateral and associated with an inflammatory disorder of another organ system. Sarcoidosis is a granulomatous disease that may involve the lungs, liver, spleen, eyes, and orbit. Orbital sarcoidosis most often involves the lacrimal glands but may involve other orbital structures and extend through apical foramina to the surrounding structures.

Morning Report Questions

Q: What clinical and imaging features characterize lymphoid tumors involving the orbit?

A: Lymphoid tumors are common infiltrative orbital cancers and range from the most common variety, indolent mucosa-associated lymphoid-tissue lymphomas, to more rare, aggressive varieties.

Lymphoma may involve any orbital structure — commonly including the lacrimal gland, extraocular muscle, or fat — and may be part of a systemic process. B-cell lymphomas are the most common type to involve the orbit and tend to be unilateral, painless, and slow-growing. On radiography, lymphoma has an infiltrative pattern, with molding to the surrounding structures.

Q: Is CD30 expression a common feature of diffuse large B-cell lymphoma?

A: Diffuse large B-cell lymphoma represents a group of biologically heterogeneous cancers that may be divided into morphologic, genetic, and immunophenotypic subgroups and that include certain specific disease entities. Most cases do not fulfill diagnostic criteria for one of the specific disease entities and are classified as diffuse large B-cell lymphoma (not otherwise specified). CD30 expression is seen in only 14% of cases of diffuse large B-cell lymphoma, and CD30-positive cases have been reported to be associated with a superior 5-year overall and progression-free survival, as compared with CD30-negative cases, a difference that is maintained in both germinal-center and nongerminal-center subgroups. Gene-expression profiling studies have shown a distinct profile, suggesting that CD30-positive cases may represent a distinct subgroup of diffuse large B-cell lymphoma.