Lung-Cancer Screening

Posted by Carla Rothaus • November 7th, 2014

A large randomized trial showed that low-dose CT screening reduced the risk of lung-cancer death by 20% among long-time smokers. Recent guidelines support consideration of screening but with attention to the possibility of false-positive results and associated risks. Read the latest Clinical Practice review on this topic.

Despite advances in diagnosis, staging, and treatment, only 18% of patients with lung cancer are still alive 5 years after diagnosis. Clinicians, scientists, and advocates have long sought a safe and effective screening test to identify lung cancer during its preclinical phase, when it is presumed to be more amenable to curative treatment.

Clinical Pearls

What was the NLST and what were its findings?

The NLST (National Lung Screening Trial) included more than 50,000 persons enrolled at 33 U.S. centers and has thus far provided the strongest evidence regarding the potential benefits of lung-cancer screening. Participants were 55 to 74 years of age, with a smoking history of at least 30 pack-years (former smokers had to have quit within the previous 15 years); they were randomly assigned to three rounds of annual screening with low-dose CT [computed tomography] or chest radiography. The NLST showed a 20% reduction in lung-cancer mortality with low-dose CT versus chest radiography (247 vs. 309 deaths per 100,000 patient-years of follow-up). In absolute terms, this translated to approximately 3 fewer deaths from lung cancer per 1000 high-risk persons who underwent low-dose CT screening.
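To make the arithmetic behind those two figures explicit, here is a minimal Python sketch of how a 20% relative reduction and an absolute reduction of roughly 3 deaths per 1000 both follow from the mortality rates quoted above. The 5-year average follow-up used to convert the rate difference into deaths per 1000 screened persons is an assumed figure for illustration, not one taken from the trial report.

```python
# Relative vs. absolute risk reduction, from the NLST rates quoted above.
# The 5-year average follow-up is an assumption for illustration only.

rate_ct = 247 / 100_000   # lung-cancer deaths per patient-year, low-dose CT
rate_cxr = 309 / 100_000  # lung-cancer deaths per patient-year, radiography

relative_risk_reduction = 1 - rate_ct / rate_cxr   # ~0.20, i.e., 20%
rate_difference = rate_cxr - rate_ct               # deaths averted per patient-year

followup_years = 5                                 # assumed, for illustration
deaths_averted_per_1000 = rate_difference * followup_years * 1000  # ~3

print(f"Relative risk reduction: {relative_risk_reduction:.0%}")
print(f"Absolute reduction: ~{deaths_averted_per_1000:.1f} deaths per 1000 screened")
```

The same 20% relative reduction would yield a smaller absolute benefit in a lower-risk population, which is why the selection of screening candidates matters so much in the discussion that follows.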

How common were false-positive findings and complications of invasive testing in the NLST?

False-positive findings were common with low-dose CT, but complications of invasive testing were not. Across all three rounds of screening, 39% of the participants in the low-dose CT group had at least one positive result; more than 95% of these findings were falsely positive. Most patients with positive screening-test results required follow-up imaging. After three rounds of screening, a minority of participants underwent invasive tissue sampling by means of needle biopsy (2%), bronchoscopy (4%), or surgery (4%). Relatively few of the surgeries (24%) were performed in patients with benign nodules, but most of the nonsurgical biopsies (73%) revealed benign findings and therefore were potentially avoidable. Among participants with a positive screening-test result in the low-dose CT group, 1% had at least one complication related to invasive testing, but only 20% of these complications occurred among participants who did not have lung cancer.
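Those two percentages also imply a very low positive predictive value for any individual positive screen. A back-of-the-envelope sketch, treating the “>95%” figure as exactly 95% for illustration:

```python
# Positive predictive value (PPV) implied by the figures quoted above, in a
# hypothetical cohort of 1000 screened persons. Treating ">95% false
# positive" as exactly 95% is a simplifying assumption.

cohort = 1000
positive_fraction = 0.39        # at least one positive result over 3 rounds
false_positive_fraction = 0.95  # fraction of positive results that were false

positives = cohort * positive_fraction                      # 390 people
true_positives = positives * (1 - false_positive_fraction)  # ~20 people

ppv = true_positives / positives                            # ~0.05
print(f"{positives:.0f} positives, ~{true_positives:.0f} true cancers, PPV ≈ {ppv:.0%}")
```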

Table 1. Potential Benefits and Harms of Three Rounds of Annual Screening with Low-Dose CT, as Compared with Chest Radiography or No Screening.

Morning Report Questions

Q: What uncertainties do the authors highlight about lung-cancer screening with low-dose CT?

A: Several important questions about low-dose CT screening remain unanswered, and screening continues to be controversial. A key controversy is whether the NLST results are applicable to the Medicare population in the United States. A related question is how to optimize the selection of candidates for screening. The potential benefits of screening are greatest in persons who are at the highest risk for death from lung cancer. Although limiting screening to persons at highest risk is the most efficient approach, extending eligibility criteria to include those at lesser risk will inevitably prevent a greater number of lung-cancer deaths, albeit less efficiently. The potential harms of screening warrant additional consideration. Patients at high risk for procedure-related complications and those with limited life expectancy owing to chronic illness have less to gain from screening than those at low risk and those without chronic illness, respectively. Ultimately, the trade-offs will need to be weighed by patients and their physicians. To facilitate a personalized approach, models have been developed that estimate individualized risks of lung-cancer death and predict complications of needle biopsy and lung-cancer surgery, although further studies are needed to determine which models perform best. There is also uncertainty about whether the relatively low risks of invasive testing for benign conditions and of procedure-related complications observed in the NLST can be replicated in community-based practice.

Table 2. Guidelines for Lung-Cancer Screening with Low-Dose CT.

Q: Is there evidence that lung-cancer screening programs will reduce rates of smoking?

A: Data from randomized, controlled trials of low-dose CT screening are sparse and inconsistent, and they are thus far inconclusive as to whether participation in a screening program improves rates of smoking cessation. A possible unintended consequence of screening is that some current smokers with negative results on low-dose CT will be falsely reassured that they do not have lung cancer and will therefore continue to smoke. Several studies have shown that rates of smoking cessation are higher among persons with positive screening-test results than among those with negative results.

A Chilly Fever

Posted by Carla Rothaus • November 7th, 2014

A 30-year-old graduate student presented with fevers associated with shaking chills and severe headaches. He had been well until 1 week before presentation, when he began to have daily fevers, with temperatures as high as 39.4°C. Any fever in a patient who has had possible exposure to malaria should prompt consideration of this diagnosis.

Clinical Pearls

What is the annual incidence of malaria in the United States?

In the United States, the annual incidence of malaria is approximately 1500 cases. In 2010, a total of 1691 cases were reported to the Centers for Disease Control and Prevention (CDC), the largest number reported since 1980; P. falciparum, P. vivax, P. malariae, and P. ovale were identified in 58%, 19%, 2%, and 2% of cases, respectively.

How do malaria and babesiosis differ in appearance on a peripheral blood smear?

Intraerythrocytic parasites are seen in both malaria and babesiosis. Plasmodia metabolize heme to form an intracellular crystallized pigment, hemozoin. Although hemozoin is not invariably identified in cases of malaria, its presence reliably distinguishes malaria infection from babesia infection. Malaria parasites can be distinguished from B. [Babesia] microti by the presence of recognizable gametocytes (characteristically banana-shaped in Plasmodium falciparum and round, with a granular appearance, in nonfalciparum species). In addition, intracellular vacuoles and extracellular merozoites are unusual in malaria but common in babesiosis, and the classic “Maltese cross” (a tetrad of parasites budding at right angles) is unique to babesia species.

Morning Report Questions

Q: Which malaria species can remain dormant in the liver?

A: In the case of P. vivax and P. ovale, some sporozoites (immature malaria parasites) do not replicate immediately when they invade hepatocytes but remain dormant (as hypnozoites) for prolonged periods. The average time to relapse is approximately 9 months, but it can range from weeks to years. The interval to relapse depends on the strain (earlier with tropical strains and later with temperate strains), the initial inoculum, and host factors (e.g., febrile illnesses can trigger relapse associated with P. vivax). None of the commonly used prophylactic agents (chloroquine, mefloquine, doxycycline, or atovaquone-proguanil) eliminates hypnozoites. Primaquine, the only effective drug against dormant hypnozoites, has not been approved by the Food and Drug Administration for primary prophylaxis, but the CDC endorses its use for prophylaxis in Latin American countries where P. vivax predominates, because the drug can prevent both primary attacks and relapses caused by all species that are a source of malarial infection.

Q: How is acute or recurrent P. vivax infection treated?

A: In patients with acute or recurrent malaria infection, treatment depends on the species and the resistance status in the area where the infection was acquired. P. falciparum is resistant to chloroquine in most regions in which it is endemic and resistant to mefloquine in parts of Southeast Asia. In contrast, nonfalciparum malaria parasites do not have substantial resistance to mefloquine, and the distribution of chloroquine-resistant P. vivax malaria is limited, occurring primarily in Indonesia and Papua New Guinea. After treatment is initiated, peripheral-blood smears should be obtained daily for 4 days (parasitemia is typically eliminated by day 4), on days 7 and 28 to confirm eradication, and whenever symptoms recur, since recurrence suggests treatment failure. In areas other than those with known chloroquine resistance, chloroquine, followed by a 14-day course of primaquine to prevent subsequent relapses, remains the standard treatment for P. vivax parasitemia. Given the risk of hemolysis in patients with glucose-6-phosphate dehydrogenase (G6PD) deficiency who receive treatment with primaquine, potential recipients should be tested for G6PD deficiency. Among patients with a contraindication to primaquine therapy, treatment with chloroquine alone carries a 20% risk of relapse; extended chloroquine prophylaxis can be offered to patients who have frequent relapses.

The Good Word: Improving Patient Handoffs

Posted by Rena Xu • November 5th, 2014

Starting at six in the evening, the surgery residents at my hospital gather for sign-out.  This is when residents from the day shift hand over care of their patients to those working overnight.  Sign-out takes place in the residents’ lounge — a room furnished with computers, couches, and a makeshift ping-pong table — and tends to be an informal affair.  Multiple conversations happen at once, and interruptions are not uncommon, whether by phone call or passerby or stray ping-pong ball.

If you ask any of the residents, they’ll tell you this system works just fine.  But as shift changes have become more frequent in recent years to accommodate work hour restrictions, there is concern that the growing number of handoffs may be leading to medical errors.  As in a game of “telephone,” information can get distorted each time it is relayed.  If reducing the quantity of handoffs isn’t an option, is there a way to improve the quality?

A few years ago, a group of researchers developed a tool called I-PASS that attempted to standardize the sign-out process.  I-PASS, which stands for Illness severity, Patient summary, Action list, Situation awareness and contingency plans, and Synthesis by receiver, acts as a checklist — a way to summarize a patient’s history and care plan and “employ closed loop communication” to ensure that the receiver understands.

The investigators built a handoff-improvement program around this tool and tested it in a pediatrics residency program, measuring the rate of medical errors among residents before and after they received I-PASS training.  They found that implementation of the program reduced miscommunications and preventable errors.  Encouraged by these results, they tweaked the tool and expanded it to a total of nine pediatrics programs.  The results of this multi-center study, published this week in NEJM, again show that the I-PASS tool is effective: medical errors decreased by 23%, preventable adverse events decreased by 30%, and critical information was included more frequently in written and verbal handoffs.  And, importantly, handoffs weren’t any more time-consuming than before.

“Our study shows that the risk of handoff-related errors can be significantly reduced,” the authors concluded. “Implementing handoff-improvement programs such as the I-PASS Handoff Bundle may potentiate the effectiveness of work-hour reductions, because doing both together may concurrently reduce both fatigue and handoff-related errors.”

This may be why, within a week of starting residency earlier this year, my co-interns and I underwent mandatory I-PASS training.  The three-hour session involved signing out fictional patients to each other — “Mrs. So-and-So is a ‘watcher’,” we’d say to practice using the I-PASS terminology, glancing at the cheat sheets we’d been given.

The session was helpful, but in the months that followed, sign-out reverted to old habits.  “I don’t think we need it,” the night shift resident said when I suggested one evening that we incorporate I-PASS into our sign-out.  We decided to give it a try anyway.  At first, things went smoothly; the cheat sheet was a nice reminder of points to cover.  Soon, though, I found myself taking shortcuts.  Was it really necessary to summarize the events leading up to a patient’s admission (the “Patient summary” step), when all I needed the night resident to do was make sure the patient urinated?  And did he really need to repeat information (“Synthesis by receiver”) that I’d given him literally seconds before?

When surgical checklists were first proposed, they were criticized as being cumbersome and unnecessary.  It turned out using them could reduce errors in the operating room.  Still, proof of efficacy wasn’t enough to spur adoption; the surgical culture also had to change.  People had to want to change.

The same might be said for handoffs.  In the I-PASS study, an intervention “bundle” involved not only resident training but also faculty engagement and a process- and culture-change campaign.  The intervention period lasted a full six months, and post-intervention outcomes were measured over an additional six months.  Sign-outs were taped, and residents were followed around by research assistants for 8 to 12 hours at a time, their every activity recorded.  By Hawthorne effect alone (people modifying their behavior because they know they’re being observed), it’s plausible that study participants were more compliant with the I-PASS methodology than they might have been otherwise.  Outside the study environment, getting residents to embrace I-PASS as their own is more challenging.

Continuity of care in an era of many handoffs does not have to be a futile aspiration.  The I-PASS study suggests that thoughtfully implemented interventions can make care transitions safer.  That may require, among other things, redefining the sign-out ritual.  It’s easier said than done, to be sure, but it’s also worth trying — especially for those of us who are just starting our medical careers and learning how to care safely and meaningfully for others.

For more on I-PASS, join the conversation on the new NEJM Group Open Forum, powered by Medstro, a social professional network for physicians.  We’re piloting a series of live interactive discussions among authors, experts, and fascinating physicians on topics including the intricacies of modern medicine, cutting-edge research, career development, and more.  Look for new discussions, coming soon.

Simvastatin in the Acute Respiratory Distress Syndrome

Posted by Sara Fazio • October 31st, 2014

In a recent study, patients with acute respiratory distress syndrome who were not receiving statins were assigned to receive simvastatin or placebo. At 28 days, there were no significant between-group differences in survival or in the number of ventilator-free days.

The acute respiratory distress syndrome (ARDS) is a common, devastating clinical syndrome characterized by life-threatening respiratory failure requiring mechanical ventilation and by multiple organ failure. In ARDS there is an uncontrolled inflammatory response that causes alveolar damage, with exudation of protein-rich pulmonary-edema fluid into the alveolar space, resulting in respiratory failure.

Clinical Pearls

What is the basis of interest in statins as a possible treatment for ARDS?

The inhibition of 3-hydroxy-3-methylglutaryl coenzyme A (HMG-CoA) reductase with statins has been shown to modify a number of the underlying mechanisms implicated in the development of ARDS. Statins decrease inflammation and histologic evidence of lung injury in murine models of ARDS. Simvastatin reduced pulmonary and systemic inflammatory responses in a human model of ARDS induced by lipopolysaccharide inhalation. In addition, in a small, single-center, randomized, placebo-controlled study, simvastatin ameliorated nonpulmonary organ dysfunction and was safe.

What were the outcomes of this study, which compared simvastatin to placebo for the treatment of ARDS?

The primary outcome, the number of ventilator-free days, did not differ significantly between the two study groups (12.6±9.9 days with simvastatin and 11.5±10.4 days with placebo; mean difference, 1.1 days [95% CI, −0.6 to 2.8]; P=0.21). The change from baseline to day 28 in the oxygenation index did not differ significantly between the two groups, nor did the Sequential Organ Failure Assessment (SOFA) score. There were no significant differences in the number of days free of nonpulmonary organ failure or in mortality at 28 days. Mortality at ICU discharge or hospital discharge was also not significantly different between the two groups.

Table 2. Main Clinical Outcomes.

Figure 2. Probabilities of Survival and Breathing without Assistance from Randomization to Day 28, According to Whether Patients Received Simvastatin or Placebo.

Morning Report Questions

Q: What were the study results with respect to simvastatin’s safety in this clinical setting?

A: Overall, adverse events related to the study drug were significantly more common in the simvastatin group than in the placebo group. The majority of the adverse events were related to elevated creatine kinase and hepatic aminotransferase levels. The numbers of serious adverse events (other than those reported as trial outcomes, such as death) were similar in the two groups. There was no significant between-group difference in the proportion of patients with nonpulmonary organ dysfunction, as measured by a SOFA score of less than 2 for each organ.

Q: How do study results compare with those of the recent Statins for Acutely Injured Lungs from Sepsis (SAILS) study?

A: The recent SAILS study, which involved patients with sepsis-associated ARDS, showed that rosuvastatin did not improve clinical outcomes, as compared with placebo, and was associated with fewer days free of renal and hepatic failure. The authors note that the data from the current study and the SAILS trial show that neither a lipophilic statin (simvastatin) nor a hydrophilic statin (rosuvastatin) is effective in the treatment of ARDS.

Focal Seizures and Progressive Weakness

Posted by Sara Fazio • October 31st, 2014

In the latest Case Record of the Massachusetts General Hospital, a 7-year-old boy was evaluated because of focal seizures, twitching of the right arm and the right side of the face, and progressive weakness. Imaging revealed progressive left cortical atrophy and a focal lesion in the left parietal cortex. A diagnostic procedure was performed.

Rasmussen’s encephalitis, which was first described in 1958 by Dr. Theodore Rasmussen, is a progressive neurologic disease of unknown cause.

Clinical Pearls

What are the manifestations of epilepsia partialis continua?

Epilepsia partialis continua is defined as almost continuous regular or irregular muscular clonic twitching affecting a limited part of the body. Consciousness is typically preserved, and the twitching most commonly involves the face, arms, or both. According to the definition of epilepsia partialis continua, the twitching must last at least 1 hour (but may last hours or years) and the twitches must occur at least once every 10 seconds, either in isolation or in clusters at a frequency of 1 to 2 Hz.

What is the differential diagnosis of epilepsia partialis continua?

The differential diagnosis for epilepsia partialis continua is divided into nonprogressive and progressive causes. Nonprogressive causes include vascular causes, metabolic causes, neoplasm, infectious or immunologic causes, cortical dysplasia, mitochondrial causes, perinatal central nervous system injury, and cryptogenic causes. A progressive cause is Rasmussen’s encephalitis. Much of the literature on epilepsia partialis continua focuses on adults, but a recent study involving 51 children with epilepsia partialis continua showed that Rasmussen’s encephalitis was the most common cause, with other common causes including immune or inflammatory processes (e.g., acute and subacute encephalitis and subacute sclerosing panencephalitis), metabolic disorders (e.g., mitochondrial disease and neuronal ceroid lipofuscinosis), cortical malformations, and vascular causes.

Table 1. Differential Diagnosis of Epilepsia Partialis Continua.

Morning Report Questions

Q: What is Rasmussen’s encephalitis?

A: Rasmussen’s encephalitis is a progressive neurologic disease of unknown cause. Patients typically present in childhood with focal-onset seizures, which then progress over a period of months to refractory epilepsy with progressive hemiparesis. Epilepsia partialis continua develops in approximately 50 to 90% of persons with Rasmussen’s encephalitis, and fixed hemiparesis typically occurs within 2 to 3 years after the onset of seizures. Seizures are typically refractory to medication, but glucocorticoids and intravenous immune globulin can be effective in controlling seizures. Various other immunosuppressive medications have been tried, but the most effective therapy remains hemispherectomy. Hemispheric procedures are associated with a high rate of success in stopping seizures and also halting progression of the disease.

Table 2. Diagnostic Criteria for Rasmussen’s Encephalitis.

Q: What techniques are available for disconnection of the affected hemispheres in Rasmussen’s encephalitis, and what are the functional outcomes?

A: Over time, a number of techniques have been developed, ranging from complete anatomical hemispherectomy to newer, less extensive procedures that disconnect the affected hemisphere from the opposite hemisphere and from the remainder of the nervous system. With these disconnection procedures, the affected hemisphere may continue to generate seizure activity, but the discharges cannot propagate and do not produce symptoms. The less extensive procedures are associated with less blood loss and fewer long-term complications but with a slightly lower likelihood of seizure control. Hemispherectomy and hemispheric disconnection can each result in varying degrees of contralateral weakness, language and other cognitive dysfunction, and hemianopsia, depending on the degree to which such functions already have been or can be subserved by the contralateral hemisphere. After a patient has undergone anatomical hemispherectomy or hemispheric disconnection, the long-term outcome typically includes spastic hemiplegia with return of ambulation (patients can walk but have a spastic-type limp), development of a “helper” arm without fine motor manipulative abilities (patients can use the arm to help lift objects but cannot perform fine motor functions with the hand), permanent hemianopsia, and various degrees of language and cognitive function (depending on preoperative status and age at surgery).

Feed Me: Early Nutritional Support in Intensive Care

Posted by Rena Xu • October 29th, 2014

What is the best way to feed a critically ill patient?  Nutrition can be delivered either parenterally — directly into the veins — or enterally, e.g., via a tube that runs from the nose to the stomach.  Both routes have well-reported adverse consequences, along with potential benefits.  It’s commonly believed that, if given the option, you should go with enteral feeding — in addition to being less invasive and more physiologically intuitive, it’s been associated with lower rates of infection and other complications.

But what if the adverse consequences that have hampered parenteral nutrition in the past aren’t a reflection of faulty strategy per se, but rather of poor execution?  Rowan and her colleagues in the UK have hypothesized that, with advancements in feeding technology and better management of vascular access, parenteral nutrition may now be superior to enteral feeding, as it’s more likely to ensure delivery of the intended nutrition.

To test this theory, they conducted the CALORIES trial, enrolling 2400 adult patients with unexpected admissions to 33 intensive care units across England. The patients were randomized to receive nutritional support either parenterally (via a central venous catheter) or enterally (via a nasogastric or nasojejunal tube), initiated within 36 hours of admission and used exclusively for five days or until complete transition to oral feeding, discharge from the ICU, or death.

The results, published recently in NEJM, suggest there may not be a clear winner. All-cause mortality at 30 days, the primary outcome, was similar between the two groups: roughly a third of patients died (33.1% in the parenteral group, and 34.2% in the enteral group; relative risk 0.97, P=0.57).  Patients in the parenteral group were less likely to become hypoglycemic than patients in the enteral group (3.7% vs. 6.2% of patients; absolute risk reduction 2.5%; P=0.006).  They also had lower rates of vomiting (8.4% vs. 16.2%; absolute risk reduction 7.8%; P<0.001).  For all other secondary outcomes — including the rate of infection, length of ICU and overall hospital stays, and 90-day survival — no significant difference was found between the two groups.  The rate of adverse events was similar as well (4.9% in the parenteral group, and 4.8% in the enteral group; P=1.00).
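One way to put those absolute risk reductions in clinical terms is the number needed to treat (NNT), the reciprocal of the absolute risk reduction. The trial itself does not report NNTs, so the following sketch is illustrative arithmetic on the percentages quoted above:

```python
# Number needed to treat (NNT = 1 / absolute risk reduction), computed from
# the CALORIES secondary outcomes quoted above. Illustrative only; the trial
# does not report NNTs.

def nnt(risk_enteral: float, risk_parenteral: float) -> float:
    """Patients who must receive parenteral rather than enteral nutrition
    for one additional patient to avoid the outcome."""
    return 1 / (risk_enteral - risk_parenteral)

print(f"Hypoglycemia: NNT ≈ {nnt(0.062, 0.037):.0f}")  # 1 / 0.025 = 40
print(f"Vomiting:     NNT ≈ {nnt(0.162, 0.084):.0f}")  # 1 / 0.078 ≈ 13
```

By this reading, roughly 40 patients would need to be fed parenterally rather than enterally to avert one episode of hypoglycemia, and about 13 to avert one episode of vomiting.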

“The reported increase in infectious complications that have been associated with the parenteral route was not observed,” the authors underscored.  This could in part reflect improvements in the formulation, delivery, and monitoring of parenteral nutrition — in other words, better execution as compared to older studies.

But implementation issues may still be undermining the effectiveness of parenteral nutrition.  The majority of patients in both study groups failed to achieve their targeted caloric intake (25 kcal per kilogram per day).  This finding was consistent with the results of previous studies, but still somewhat surprising: parenteral nutrition is supposed to be more reliable at guaranteeing delivery.  The authors enumerated various logistical constraints that may have contributed to the shortfall, concluding, “There are substantial practical and organizational impediments for both routes of delivery, at least during an initial 5-day period.”

The CALORIES trial didn’t find parenteral nutrition to be superior to enteral nutrition as the investigators had hypothesized.  Even by demonstrating comparable outcomes across the two routes, however, the study invites debate.  The most important takeaway lesson from the study may be that for the critically ill patient, getting adequate nutrition early on — by any route — is hard.  As for whether route matters, and to what extent, it’s likely still too soon to tell.

What is your approach to nutritional supplementation for patients who require intensive care?  When enteral and parenteral nutrition are both available, what influences your decision to use one versus the other? How will the results of the CALORIES trial affect your practice?


60-Year-Old Man with Bone Pain

Posted by Sara Fazio • October 24th, 2014

In the latest Case Record of the Massachusetts General Hospital, a 60-year-old man was seen in the outpatient cancer center because of bone pain that had lasted for 2 months and the presence of lytic bone lesions on imaging studies. Biopsy specimens of bone marrow and bone lesions showed increased mast cells. A diagnostic procedure was performed.

In general, bone lesions can be divided into two major types according to their radiologic appearance. Lytic lesions have a characteristic “moth-eaten” appearance on imaging studies, which is caused by the juxtaposition of degraded bone and unaffected, calcified bone. The process of bone degradation is mediated by osteoclasts. In contrast, blastic bone lesions reflect increased bone formation, a process mediated by increased osteoblastic activity.

Clinical Pearls

What tumors most commonly metastasize to bone, and what are their radiologic characteristics?

The presence of multiple widespread bone lesions suggests a metastatic tumor; most common are lung or prostate cancer, renal-cell carcinoma, and melanoma. Tumors that metastasize to the bone often have characteristic biologic and radiologic features; prostate cancer, carcinoid tumors, small-cell lung cancer, Hodgkin’s lymphoma, and medulloblastoma often cause osteoblastic lesions, whereas renal-cell carcinoma, non-small-cell lung cancer, thyroid cancer, melanoma, and lymphomas predominantly cause osteolytic lesions. Many metastatic tumors, particularly sarcomas and cancers of breast and gastrointestinal origin, may cause both lytic and blastic lesions, so the radiologic appearance of any given tumor can vary.

Table 1. Partial Differential Diagnosis of Bone Lesions.

What are the clinical features of plasma-cell (multiple) myeloma?

Plasma-cell myeloma is characterized by an increase in clonal plasma cells in the bone marrow and the presence of a monoclonal paraprotein in the serum, as well as by an abnormal calcium level and associated hematologic, renal, and bone abnormalities, including lytic bone lesions. A rare variant, nonsecretory myeloma, may occur, in which lytic bone lesions are present but a monoclonal paraprotein is not detected.

Morning Report Questions

Q: What are the characteristics of an epithelioid hemangioendothelioma, and how may it be treated?

A: Epithelioid hemangioendothelioma is a rare malignant tumor that affects fewer than 300 patients per year in the United States and accounts for approximately 1% of all vascular neoplasms. Clinically, it is treated as a low-to-intermediate-grade angiosarcoma because, as compared with high-grade angiosarcomas, metastasis is less likely to develop, disease progression or time to relapse is slower, and survival is longer, even in cases of advanced disease. Most epithelioid hemangioendotheliomas follow an indolent clinical course, but it is estimated that approximately 15% of patients with epithelioid hemangioendothelioma die of the disease. Because epithelioid hemangioendothelioma is a neoplasm of vascular origin, therapies that target angiogenesis in this disease were thought to be promising. Epithelioid hemangioendothelioma is known to express a wide variety of ligands and receptors for vascular endothelial growth factor (VEGF) isoforms. Several published case reports have shown a clinical benefit of the VEGF inhibitor bevacizumab. Other antiangiogenic agents have also been reported to show evidence of activity when they are administered as single agents. These agents include sunitinib, thalidomide, lenalidomide, and interferon-α.

Q: How are painful bone metastases best managed?

A: In the management of painful spine metastases, the most important initial step is correctly identifying the cause of pain. Common causes of back pain associated with bone metastases include mechanical instability, tumor-related inflammation, and nerve-root involvement, alone or in combination. Before initiating therapy for pain relief, cord compression requiring urgent surgical intervention must be ruled out. Augmentation, such as vertebroplasty and kyphoplasty, is minimally invasive and involves the percutaneous injection of acrylic cement (such as methyl methacrylate) with the use of imaging guidance. Randomized trials of augmentation involving patients with osteoporotic fractures have not shown any benefit.  However, in selected cases of cancer-related fractures, augmentation has been associated with rapid and clinically significant pain relief. Pain that results from tumor-related inflammation (so-called biologic pain) is often unrelenting, subacute or chronic in onset, and responsive to glucocorticoids. Systemic therapy may offer pain relief in particularly responsive types of cancer. Radiation therapy is a common palliative strategy that offers partial pain relief in 50 to 80% of patients with tumor-related inflammation and complete pain relief in 20 to 40% of patients. Pain relief often does not occur until several weeks after completion of radiation therapy.

Community-Acquired Pneumonia

Posted by Sara Fazio • October 24th, 2014

Community-acquired pneumonia is a commonly diagnosed illness in which no causative organism is identified in half the cases. Application of molecular diagnostic techniques has the potential to lead to more targeted therapy in the face of increasing antibiotic resistance. A new review article looks at this topic.

Community-acquired pneumonia (CAP) is a syndrome in which acute infection of the lungs develops in persons who have not been hospitalized recently and have not had regular exposure to the health care system.

Clinical Pearls

What are the most common causes of CAP?

Although pneumococcus remains the most commonly identified cause of CAP, the frequency with which it is implicated has declined, and it is now detected in only about 10 to 15% of inpatient cases in the United States. Other bacteria that cause CAP include Haemophilus influenzae, Staphylococcus aureus, Moraxella catarrhalis, Pseudomonas aeruginosa, and other gram-negative bacilli. Patients with chronic obstructive pulmonary disease (COPD) are at increased risk for CAP caused by H. influenzae and Mor. catarrhalis. P. aeruginosa and other gram-negative bacilli also cause CAP in persons who have COPD or bronchiectasis, especially in those taking glucocorticoids. There is a wide variation in the reported incidence of CAP caused by Mycoplasma pneumoniae and Chlamydophila pneumoniae (so-called atypical bacterial causes of CAP), depending in part on the diagnostic techniques that are used. During influenza outbreaks, the circulating influenza virus becomes the principal cause of CAP that is serious enough to require hospitalization, with secondary bacterial infection as a major contributor.

Table 1. Infectious and Noninfectious Causes of a Syndrome Consistent with Community-Acquired Pneumonia (CAP) Leading to Hospital Admission.

What evaluation do the authors recommend to determine the cause of community-acquired pneumonia in a hospitalized patient?

In hospitalized patients with CAP, the authors favor obtaining Gram’s staining and culture of sputum, blood cultures, testing for legionella and pneumococcal urinary antigens, and multiplex PCR assays for Myc. pneumoniae, Chl. pneumoniae, and respiratory viruses, as well as other testing as indicated in patients with specific risk factors or exposures. A low serum procalcitonin concentration (<0.1 μg per liter) can help to support a decision to withhold or discontinue antibiotics. Results on Gram’s staining and culture of sputum are positive in more than 80% of cases of pneumococcal pneumonia when a good-quality specimen (>10 inflammatory cells per epithelial cell) can be obtained before, or within 6 to 12 hours after, the initiation of antibiotics. Blood cultures are positive in about 20 to 25% of inpatients with pneumococcal pneumonia but in fewer cases of pneumonia caused by H. influenzae or P. aeruginosa and only rarely in cases caused by Mor. catarrhalis.

Morning Report Questions

Q: What are the guidelines for treating community-acquired pneumonia in outpatients and inpatients?

A: For outpatients without coexisting illnesses or recent use of antimicrobial agents, IDSA/ATS [Infectious Diseases Society of America and the American Thoracic Society] guidelines recommend the administration of a macrolide (provided that <25% of pneumococci in the community have high-level macrolide resistance) or doxycycline. For outpatients with coexisting illnesses or recent use of antimicrobial agents, the guidelines recommend the use of levofloxacin or moxifloxacin alone or a beta-lactam (e.g., amoxicillin-clavulanate) plus a macrolide. The authors argue, however, that a beta-lactam may be favored as empirical therapy for CAP in outpatients, since most clinicians do not know the level of pneumococcal resistance in their communities, and Str. pneumoniae is more susceptible to penicillins than to macrolides or doxycycline. Even though the prevalence of Str. pneumoniae as a cause of CAP has decreased, they raise concern about treating a patient with a macrolide or doxycycline to which 15 to 30% of strains of Str. pneumoniae are resistant. For patients with CAP who require hospitalization and in whom no cause of infection is immediately apparent, IDSA/ATS guidelines recommend empirical therapy with either a beta-lactam plus a macrolide or a quinolone alone.

Q: What is the appropriate duration of antibiotic therapy for community-acquired pneumonia?

A: Early in the antibiotic era, pneumonia was treated for about 5 days; the standard duration of treatment later evolved to 5 to 7 days. A meta-analysis of studies comparing treatment durations of 7 days or less with durations of 8 days or more showed no differences in outcomes, and prospective studies have shown that 5 days of therapy are as effective as 10 days and 3 days are as effective as 8. Nevertheless, practitioners have gradually increased the duration of treatment for CAP to 10 to 14 days. The authors argue that a responsible approach to balancing antibiotic stewardship with concern about insufficient antibiotic therapy would be to limit treatment to 5 to 7 days, especially in outpatients or in inpatients who have a prompt response to therapy. Pneumonia that is caused by Staph. aureus or gram-negative bacilli tends to be destructive, and concern that small abscesses may be present has led clinicians to use more prolonged therapy, depending on the presence or absence of coexisting illnesses and the response to therapy.

Less is not more for TB treatment

Posted by Rachel Wolfson • October 22nd, 2014

Shorter regimens fail to demonstrate non-inferiority to the standard tuberculosis treatment plans

One third of the world’s population is currently infected with tuberculosis (TB), and, in 2012, there were 1.3 million TB-related deaths (Centers for Disease Control and Prevention). Moreover, in 2012, 450,000 people worldwide developed multi-drug resistant TB (MDR TB), which is resistant to at least rifampin and isoniazid, two of the first-line antimicrobials for TB (World Health Organization). Although many cases of MDR TB are acquired from patients already harboring resistant organisms, drug resistance can also be induced by an incorrect treatment regimen or by failure to complete a full antibiotic course. Maintaining compliance is particularly challenging in TB, because the standard treatment course lasts at least 6 months. Finding new treatment approaches that shorten the duration could help decrease the development of drug resistance and lower costs.

In this week’s NEJM, three groups published phase III trials investigating the efficacy of using fluoroquinolones in combination with other anti-TB drugs to treat patients with a four-month regimen, two months shorter than the standard of care. Merle et al. performed a randomized, open-label, controlled trial in which they compared the standard six-month treatment regimen (isoniazid, rifampin, pyrazinamide, and ethambutol) to a four-month regimen that replaced ethambutol with gatifloxacin, a fourth-generation fluoroquinolone. The trial enrolled just over 1800 patients across five African countries, and, despite the positive results from phase II trials and mouse studies, the four-month treatment regimen failed to demonstrate non-inferiority, with a higher rate of TB recurrence than the standard treatment (14.6% vs. 7.1%). Gillespie et al. found similar results when they randomized just over 1900 patients across nine countries to either one of two four-month regimens or the standard of care. In one of these shorter regimens, they used a combination of rifampin, isoniazid, pyrazinamide, and moxifloxacin, a fluoroquinolone, while, in the other, they used ethambutol in place of isoniazid. Both moxifloxacin-containing regimens failed to show non-inferiority compared to the control arm.
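“Failed to demonstrate non-inferiority” has a precise statistical meaning: the shorter regimen counts as non-inferior only if the upper bound of the confidence interval for how much worse it performs stays below a prespecified margin. Here is a minimal sketch of that logic applied to the recurrence rates quoted above; the 6-percentage-point margin, the per-arm sample sizes, and the simple Wald interval are assumptions for illustration, not the trials’ actual design parameters.

```python
import math

# Non-inferiority check sketch: the experimental regimen fails if the upper
# bound of the 95% CI for (experimental risk - control risk) exceeds the
# prespecified margin. Margin, per-arm sizes, and the Wald interval are
# illustrative assumptions, not the published trials' design parameters.

def noninferior(p_exp, p_ctrl, n_exp, n_ctrl, margin, z=1.96):
    diff = p_exp - p_ctrl
    se = math.sqrt(p_exp * (1 - p_exp) / n_exp + p_ctrl * (1 - p_ctrl) / n_ctrl)
    upper = diff + z * se  # upper bound of the 95% Wald CI for the difference
    return upper < margin, diff, upper

# Recurrence: 14.6% on the 4-month regimen vs. 7.1% on the standard regimen
ok, diff, upper = noninferior(0.146, 0.071, n_exp=900, n_ctrl=900, margin=0.06)
print(f"difference = {diff:.1%}, upper 95% bound = {upper:.1%}, non-inferior: {ok}")
# difference = 7.5%, upper bound ≈ 10.4% > 6% margin, so non-inferiority is not shown
```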

Finally, Jindani et al. performed a randomized controlled trial in which they enrolled just over 800 patients across four African countries. Compared to the control six-month regimen, they tested two treatment plans for non-inferiority: one four-month and one six-month regimen in which isoniazid was replaced by moxifloxacin. Similar to the other two trials, the four-month treatment plan also failed to show non-inferiority. The six-month moxifloxacin-containing treatment plan, in which treatment in the final four months was administered weekly, was as effective as the standard daily regimen. While this regimen does not decrease the overall treatment duration, it does decrease the frequency at which patients need to take medications. This advance may increase treatment adherence and is a step towards helping to decrease the development of MDR TB.

While these trials are steps in the right direction, the failure of these three trials to show non-inferiority highlights a major challenge in the field. Even though fluoroquinolones were effective at decreasing the treatment time in mouse models infected with TB, the discrepancy in human trials demonstrates again that differences in biology between mice and humans are major hurdles in the drug discovery pipeline. As Digby Warner, PhD, and Valerie Mizrahi, PhD, commented in an accompanying editorial, more effort will need to be focused on how to more effectively develop and test new therapies so that fewer drugs and regimens fail in phase III and IV trials.

Postherpetic Neuralgia

Posted by Sara Fazio • October 17th, 2014

Postherpetic neuralgia is more common with older age. Recommended treatments include topical agents (lidocaine or capsaicin) and systemic agents (in particular, gabapentin, pregabalin, or tricyclic antidepressants), but their efficacy tends to be suboptimal.  The latest Clinical Practice article is on this topic, and comes from University of Bristol’s Dr. Robert Johnson and Imperial College London’s Dr. Andrew Rice.

Postherpetic neuralgia is the most frequent chronic complication of herpes zoster and the most common neuropathic pain resulting from infection.

Clinical Pearls

What are the epidemiology of and risk factors for postherpetic neuralgia?

Postherpetic neuralgia is conventionally defined as dermatomal pain persisting at least 90 days after the appearance of the acute herpes zoster rash. The incidence and prevalence of postherpetic neuralgia vary depending on the definition used, but approximately a fifth of patients with herpes zoster report some pain at 3 months after the onset of symptoms, and 15% report pain at 2 years. Analysis of data from the United Kingdom General Practice Research Database showed that the incidence of postherpetic neuralgia (as defined by pain at 3 months) rose from 8% at 50 to 54 years of age to 21% at 80 to 84 years of age. Risk factors for postherpetic neuralgia include older age and greater severity of the prodrome, rash, and pain during the acute phase. The incidence is also increased among persons with chronic diseases such as respiratory disease and diabetes, and it may be increased among immunocompromised patients, although the evidence is sparse and inconsistent.

What is the typical clinical presentation and appropriate evaluation of a patient with postherpetic neuralgia?

Although a history of herpes zoster often cannot be confirmed with absolute certainty, the disorder has a characteristic clinical presentation, and thus postherpetic neuralgia rarely presents a diagnostic challenge. Clinical assessment of the patient with postherpetic neuralgia should follow the general principles of assessment of patients with peripheral neuropathic pain. Features of pain and associated sensory perturbations (e.g., numbness, itching, and paresthesias) should be assessed. Pain associated with postherpetic neuralgia occurs in three broad categories: spontaneous pain that is ongoing (e.g., continuous burning pain), paroxysmal shooting or electric shock-like pains, and evoked sensations that are pathologic amplifications of responses to light touch and other innocuous stimuli (mechanical allodynia) or to noxious stimuli (mechanical hyperalgesia). The physical examination should include a comparison of sensory function in the affected dermatome with that on the contralateral side. Loss of sensory function in response to both mechanical and thermal stimuli is common in patients with postherpetic neuralgia, as are pathologic sensory amplifications (e.g., allodynia and hyperalgesia). In most cases, no additional evaluation is needed beyond the history taking (with concomitant disease and medications noted) and physical examination.

Morning Report Questions

Q: What is the appropriate treatment for postherpetic neuralgia?

A: Topical therapy alone is reasonable to consider as first-line treatment for mild pain. It is sometimes used in combination with systemic drugs when pain is moderate or severe, although data are lacking from randomized trials comparing combination topical and systemic therapy with either therapy alone. Patches containing 5% lidocaine are approved for the treatment of postherpetic neuralgia in Europe and the United States. However, evidence in support of their efficacy is limited. There is evidence to support the use of tricyclic antidepressants (off-label use) and the antiepileptic drugs gabapentin and pregabalin (Food and Drug Administration-approved) for the treatment of postherpetic neuralgia. Opioids, including tramadol, should generally be considered as third-line drugs for postherpetic neuralgia after consultation with a specialist and should be prescribed only with appropriate goals and close monitoring.

Q: What is the evidence for the effectiveness of preventive therapy for postherpetic neuralgia?

A: Placebo-controlled trials of antiviral drugs for acute herpes zoster have shown that they reduce the severity of acute pain and rash, hasten rash resolution, and reduce the duration of pain. These trials were not designed to assess the subsequent incidence of postherpetic neuralgia. Two randomized trials have shown that the addition of systemic glucocorticoids to antiviral drugs during the acute phase of herpes zoster does not reduce the incidence of postherpetic neuralgia. In one placebo-controlled trial, low-dose amitriptyline, started soon after the diagnosis of herpes zoster and continued for 90 days, significantly reduced the incidence of pain at 6 months. Further studies are required to confirm this finding. The only well-documented means of preventing postherpetic neuralgia is the prevention of herpes zoster. A live attenuated VZV [varicella-zoster virus] vaccine has been available since 2006; it was initially licensed for immunocompetent persons 60 years of age or older but now is approved for persons 50 years of age or older. In a randomized trial in the older age group, its use reduced the incidence of herpes zoster by 51% and the incidence of postherpetic neuralgia by 66%. In patients 70 years of age or older as compared with those 60 to 69 years of age, the vaccine was less effective in reducing the risk of herpes zoster (38% reduction) but conferred similar protection against postherpetic neuralgia (67% reduction).