Herniated Lumbar Intervertebral Disk

Posted by Carla Rothaus • May 6th, 2016

“Sciatica” refers to pain in a sciatic-nerve distribution, but this term is sometimes used indiscriminately to describe back and leg pain. Lumbar “radiculopathy” more specifically refers to pain with possible motor and sensory disturbance in a nerve-root distribution. After lumbar stenosis, spondylolisthesis, and fracture have been ruled out, approximately 85% of patients with sciatica are found to have a herniated intervertebral disk. In two studies of surgery for sciatica, at least 95% of herniated disks were at the L4–L5 or L5–S1 levels.

Pain from disk herniation, the leading cause of sciatica, usually resolves within several weeks with conservative therapy. In patients who have had sciatica for 6 weeks, pain relief is faster with surgery than with conservative therapy; however, outcomes are similar at 1 year. A new Clinical Practice summarizes.

Clinical Pearl

• When should a patient with a suspected herniated lumbar disk undergo magnetic resonance imaging (MRI)?

Computed tomography (CT) or MRI can confirm a clinical diagnosis of a herniated disk. Early MRI is indicated in patients with progressive or severe deficits (e.g., involving multiple nerve roots) or clinical findings that suggest an underlying tumor or infection. Otherwise, CT or MRI is necessary only in a patient whose condition has not improved over 4 to 6 weeks with conservative treatment and who may be a candidate for epidural glucocorticoid injections or surgery. Disk herniation does not necessarily cause pain; MRI commonly shows herniated disks in asymptomatic persons, and the prevalence of herniated disks increases with age. Thus, symptoms may be misattributed to incidental MRI findings.

Clinical Pearl

• What is the likelihood that pain associated with a herniated lumbar disk will improve without surgery?

The natural history of herniated lumbar disks is generally favorable, but patients with this condition have a slower recovery than those with nonspecific back pain. In one study involving patients with a herniated disk and no indication for immediate surgery, 87% who received only oral analgesics had decreased pain at 3 months. Even in randomized trials that enrolled patients with persistent sciatica, the condition of most patients who did not undergo surgery improved.

Figure 1. Testing for Compromise of a Lumbar Nerve Root.

Figure 2. CT and MRI Terminology for Herniated Disks.

Morning Report Questions

Q: What is a general approach to the management of a herniated lumbar disk? 

A: Cohort studies suggest that the condition of many patients with a herniated lumbar disk improves in 6 weeks; thus, conservative therapy is generally recommended for 6 weeks in the absence of a major neurologic deficit. There is no evidence that conservative treatments change the natural history of disk herniation, but some offer slight relief of symptoms. In patients with acute disk herniation, avoidance of prolonged inactivity in order to prevent debilitation is important. The use of epidural glucocorticoid injections in patients with herniated disks has increased rapidly in recent years, although these injections are used on an off-label basis. A systematic review showed that patients with radiculopathy who received epidural glucocorticoid injections had slightly better pain relief (by 7.5 points on a 100-point scale) and functional improvement at 2 weeks than patients who received placebo. There were no significant advantages at later follow-up and no effect on long-term rates of surgery. Unless patients have major neurologic deficits, surgery is generally appropriate only in those who have nerve-root compression that is confirmed on CT or MRI, a corresponding sciatica syndrome, and no response to 6 weeks of conservative therapy. The major benefit of surgery is that relief of sciatica is faster than relief with conservative therapy, but, on average, there is a smaller advantage of surgery with respect to the magnitude of relief of back pain. Most, although not all, trials showed no significant advantage of surgery over conservative treatment with respect to relief of sciatica at 1 to 4 years of follow-up. Given these results, either surgery or conservative treatment may be a reasonable option, depending on the patient’s preferences for immediate pain relief, how averse the patient is to surgical risks, and other considerations.

Q: How common are complications when a patient undergoes lumbar diskectomy?

A: Procedural complications of lumbar diskectomy are less common than procedural complications of other types of spine surgery. A registry study indicated that an estimated 0.6 deaths per 1000 procedures had occurred at 60 days after the procedure. New or worsening neurologic deficits occur in 1 to 3% of patients, direct nerve-root injury occurs in 1 to 2%, and wound complications (e.g., infection, dehiscence, and seroma) occur in 1 to 2%. Incidental durotomy, which occurs in approximately 3% of patients, is associated with increases in the duration of surgery, blood loss during surgery, and the length of inpatient stay, as well as potential long-term effects such as headache. All tissues at the surgical site heal with some scarring, which contracts and binds nerves to surrounding structures. Normally, each nerve root glides a few millimeters in its neuroforamen with each walking step. Stretch on tethered nerves may be one source of chronic postsurgical pain.

Scaling the ALPS — Antiarrhythmic Drugs in Out-of-Hospital Cardiac Arrest

Posted by Bhavna Seth, M.D. • May 4th, 2016

Imagine you are out for an evening jog when a young man, who is running ahead of you, collapses. You rush over, and a rapid assessment suggests that he is unresponsive, has no pulse, and is not breathing. You start chest compressions and a bystander calls 911. EMS arrives soon; however, after 5 cycles of chest compressions and several attempts at defibrillation for a rhythm of ventricular fibrillation, there is still no response. The ACLS algorithm next prompts the team to choose an antiarrhythmic: amiodarone or lidocaine? The team pulls amiodarone, and his rhythm reverts. What led to that choice? Would lidocaine have had the same effect?

The pre-hospital period has been identified as critical to obtaining favorable outcomes in acute cardiovascular events. Currently, there are Class IIb recommendations from the American Heart Association for the use of antiarrhythmic medication, typically intravenous amiodarone and lidocaine, in cases of cardiac arrest. These agents may be considered for ventricular fibrillation or pulseless ventricular tachycardia unresponsive to CPR, defibrillation, or vasopressor treatment. However, evidence regarding the effectiveness of these agents remains inconclusive. In this week’s NEJM, Kudenchuk et al. try to answer the question of whether antiarrhythmic agents confer a survival benefit in out-of-hospital cardiac arrest.

This randomized, double-blind, placebo-controlled trial compared parenteral amiodarone, lidocaine, and saline placebo, each added to standard care, in 3026 adults with nontraumatic out-of-hospital cardiac arrest and shock-refractory ventricular fibrillation or pulseless ventricular tachycardia. The primary outcome of interest was survival to hospital discharge, and the secondary outcome of interest was favorable neurologic function at discharge, defined as a modified Rankin score of 3 or less. The study outcomes were evaluated in a modified intention-to-treat (or efficacy) population and an intention-to-treat (or safety) population.

The study did not find a statistically significant difference in either outcome: neither amiodarone nor lidocaine resulted in significantly higher survival to hospital discharge than saline placebo (24.4%, 23.7%, and 21.0%, respectively; the primary outcome), nor did either drug result in a significant difference in favorable neurologic function at discharge (the secondary outcome). However, both antiarrhythmic drugs showed significant benefit over placebo on certain measures: fewer shocks were administered after the first dose of the trial drug, fewer patients received rhythm-control medications during hospitalization, and fewer patients required CPR during hospitalization.

Although the study is well designed, the investigators had little control over certain aspects of it, such as standardization of in-hospital care. The frequency of coronary catheterization, therapeutic hypothermia, and withdrawal of life-sustaining treatment did not differ between groups. The trial used only one administration strategy; other designs, such as those with active-treatment crossover, might produce different results. Pre-hospital trials also pose unique practical, operational, ethical, and analytical challenges. Selection bias could have influenced trial enrollment, though instances of exclusion were numerically small (6.3%). A potential confounder in clinical CPR trials is the relatively ‘open’ inclusion and exclusion criteria, under which many cases with no chance of survival are enrolled; for instance, asystolic patients who do not convert to ventricular tachycardia or ventricular fibrillation. Thus, more benefit may be seen in certain subpopulations. However, it is difficult to implement strict criteria in cardiac-arrest studies, because the decision to randomize must be made instantaneously, without much clinical data.

The study suggests a possible clinical benefit of antiarrhythmic administration, with respect to fewer shocks and less need for in-hospital antiarrhythmic drugs and CPR, which warrants further evaluation. Additionally, the accompanying editorial by Dr. Joglar and Dr. Page speculates that the antiarrhythmics may have been administered too late to ameliorate the significant metabolic consequences of prolonged arrest; it is unknown whether earlier administration would confer additional benefit. The study raises several interesting hypotheses, which need to be explored to further expand the evidence available to support or refute current resuscitation practices.

NEJM Deputy Editor Dr. John Jarcho notes that there has been very little previous research directly comparing antiarrhythmic drug strategies in out-of-hospital cardiac arrest. “We believe that these data are very likely to influence both guidelines and clinical practice in the management of this patient population. We also consider it important to encourage the effort to conduct clinical trials in this difficult setting.”

Violence against Health Care Workers

Posted by Carla Rothaus • April 28th, 2016

Health care workplace violence is an underreported, ubiquitous, and persistent problem that has been tolerated and largely ignored. According to the Joint Commission, a major accrediting body for health care organizations, institutions that were once considered to be safe havens are now confronting “steadily increasing rates of crime, including violent crimes such as assault, rape, and homicide.” Although metal detectors may theoretically mitigate violence in the health care workplace, there is no concrete evidence to support this expectation.

Violence against health care professionals in the workplace is underreported and understudied. Additional data are needed to understand steps that might be taken to reduce the risk. A new Review Article summarizes.

Table 1. Types of Workplace Violence.

Clinical Pearl

• Which one of the four categories of workplace violence is most common in the health care setting?

Experts have classified workplace violence into four types on the basis of the relationship between the perpetrator and the workplace itself (Table 1). Most common in the health care setting is a situation in which the perpetrator has a legitimate relationship with the business and becomes violent while being served by the business (categorized as a type II assault). The highest number of such assaults in U.S. workplaces each year is directed against health care workers. These episodes are characterized by verbal or physical assaults perpetrated by patients and visitors against providers.

Clinical Pearl

• Are there evidence-based approaches to preventing health care workplace violence?

Most studies on workplace violence have been designed to quantify the problem, and few have described research on experimental methods to prevent such violence. The most recent critical review of the literature, in 2000, identified 137 studies that described strategies to reduce workplace violence. Of these studies, 41 suggested specific interventions, but none provided empirical data showing whether or how such strategies worked. Only 9 studies, all of which were health care–related, reported data on interventions. However, the review concluded that each of these 9 studies used weak methods, had inconclusive results, and relied on flawed experimental designs. A review of the nursing literature reached similar conclusions: all the studies showed that after training, nurses had increased confidence and knowledge about risk factors, but no change was seen in the incidence of violence perpetrated by patients. There is a lack of high-quality research, and existing training does not appear to reduce rates of workplace violence.

Morning Report Questions

Q: Are certain health care workers or certain health care settings particularly vulnerable to workplace violence? 

A: Certain hospital environments are more prone to type II workplace violence than are other settings. The emergency department and psychiatric wards are the most violent, and well-studied, hospital environments. Since rates of assault correlate with patient-contact time, nurses and nursing aides are victimized at the highest rates. Emergency department nurses reported the highest rates, with 100% reporting verbal assault and 82.1% reporting physical assault during the previous year. Physicians are also frequent targets of type II workplace violence; approximately one quarter of emergency medicine physicians reported being targets of physical assault in the previous year. All employees who work in inpatient psychiatric environments are at higher risk for targeted violence than are other health care workers. Rates of workplace violence against physicians in psychiatric settings may be even higher than those in emergency department settings, with 40% of psychiatrists reporting physical assault in one study.

Q: What is known about perpetrators of health care workplace violence, and what are some potential solutions? 

A: The characteristic that is most common among perpetrators of workplace violence is altered mental status associated with dementia, delirium, substance intoxication, or decompensated mental illness. In some studies, researchers have postulated that patients with a previous history of violence are at increased risk for committing violence toward staff members; however, this association remains unproven. Among strategies for individual workers that have been proposed to reduce workplace violence are training in aggression de-escalation techniques and training in self-defense. Recommendations for target hardening of infrastructure include the installation of fences, security cameras, and metal detectors and the hiring of guards. Perhaps most important are recommendations that health care organizations revise their policies in order to improve staffing levels during busy periods to reduce crowding and wait times, decrease worker turnover, and provide adequate security and mental health personnel on site. The importance of recognizing verbal assault as a form of workplace violence cannot be overlooked, since verbal assault has been shown to be a risk factor for battery. The “broken windows” principle, a criminal-justice theory that apathy toward low-level crimes creates a neighborhood conducive to more serious crime, also applies to workplace violence. When verbal abuse and low-level battery are tolerated, more serious forms of violence are invited.

Aphasia during a Transatlantic Flight

Posted by Carla Rothaus • April 28th, 2016

In more than 60% of patients with ischemic stroke, the cause is readily established and is most often atherosclerosis or heart disease. However, in a young patient with no traditional vascular risk factors and a large clot burden, the search can be broadened to include, at minimum, thrombophilia, arterial dissection, paradoxical embolism, and unusual arteriopathies.

A 49-year-old woman was brought to the ED 2 hours after the onset of hemiplegia and aphasia during a transatlantic flight. Examination revealed evidence of acute ischemic stroke. Additional diagnostic studies were performed. A new Case Records of the Massachusetts General Hospital summarizes.

Clinical Pearl

• What is the Cincinnati Prehospital Stroke Scale?

The Cincinnati Prehospital Stroke Scale, which screens for three symptoms (facial droop, arm drift, and speech disturbance) that may occur with stroke, is based on the larger National Institutes of Health Stroke Scale (NIHSS). One point is designated for each abnormal finding, and a score higher than 0 triggers a stroke alert. The Cincinnati Prehospital Stroke Scale has shown excellent correlation between prehospital providers’ suspicion of stroke and physicians’ confirmation of stroke.
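The scoring rule just described is simple enough to sketch in code; this is a minimal illustration of the scale's logic (one point per abnormal finding, any score above 0 triggers an alert), with function and parameter names of my own invention, not a clinical tool:

```python
# Minimal sketch of the Cincinnati Prehospital Stroke Scale logic described
# above: one point per abnormal finding, and any score greater than 0
# triggers a stroke alert. Names are illustrative, not from the scale itself.

def cincinnati_score(facial_droop: bool, arm_drift: bool, speech_abnormal: bool) -> int:
    """Return the number of abnormal findings (0 to 3)."""
    return sum([facial_droop, arm_drift, speech_abnormal])

def stroke_alert(facial_droop: bool, arm_drift: bool, speech_abnormal: bool) -> bool:
    """A score higher than 0 triggers a stroke alert."""
    return cincinnati_score(facial_droop, arm_drift, speech_abnormal) > 0

print(stroke_alert(facial_droop=False, arm_drift=True, speech_abnormal=False))  # True
```

A single abnormal finding is enough to trigger the alert, which reflects the scale's role as a sensitive prehospital screen rather than a definitive diagnosis.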

Clinical Pearl

• What is the May–Thurner syndrome?

The May–Thurner syndrome is a common anatomical anomaly in which the right common iliac artery, a muscular structure, extrinsically compresses the thin-walled left common iliac vein. This anatomical feature of the May–Thurner syndrome was identified in up to 25% of an asymptomatic population; none of these patients had unilateral edema or a history of deep venous thrombosis of the legs. Among patients with cryptogenic stroke and patent foramen ovale, the prevalence of the May–Thurner syndrome is 6.3%.

Figure 2. Additional Imaging Studies.

Morning Report Questions

Q: What do guidelines indicate for the prevention of recurrent stroke in patients with an ischemic stroke and patent foramen ovale?

A: For patients with ischemic stroke and patent foramen ovale and without definitive evidence of deep venous thrombosis, guidelines indicate that the current data are insufficient to establish whether anticoagulation is equivalent or superior to aspirin for the prevention of recurrent stroke, and available data do not support a benefit of patent foramen ovale closure. In a recent meta-analysis including a total of 4251 patients, those who had stroke with patent foramen ovale did not have a higher risk of either recurrent stroke or the combination of stroke and transient ischemic attack than did those who had stroke without patent foramen ovale. However, for patients who have both a patent foramen ovale and a presumed venous source of embolism, anticoagulation is generally indicated. When anticoagulation is contraindicated, because of an increased risk of intracranial or systemic hemorrhage, then placement of an inferior vena cava filter is a reasonable option.

Q: What is the risk of venous thromboembolism in patients with cancer, and what may influence the risk associated with renal cancer?

A: The rate of venous thromboembolism among patients with cancer varies according to tumor factors (including tumor type, the presence or absence of metastasis, and the anatomical features of the tumor) and the patients’ baseline risk factors and previous and current therapies. Approximately 20% of first venous thromboembolic events occur in patients with cancer; the risk of venous thromboembolism among patients with cancer is 4 to 7 times as high as the risk among those without cancer. Patients with renal-cell carcinoma, particularly those with metastases, often present with elevated levels of fibrinogen, d-dimer, and fibrin monomers. Immunofluorescence staining with the use of antibodies specific for fibrin and factors VII and X has shown positivity around intravascular and extravascular nodules of renal-cell carcinoma tumor cells, thus raising the possibility that tumor cells activate coagulation locally.

Transcatheter or Surgical Aortic-Valve Replacement in Intermediate-Risk Patients

Posted by Andrea Merrill • April 27th, 2016

I’ve sometimes wondered if I’m embarking on the field of surgery in the wrong era.  As a medical student, and now as a resident, the big open operations have always seemed to be the most exhilarating and rewarding.  It always seems more thrilling to have your hands deep in a patient’s abdomen or chest than lightly grasping and maneuvering small laparoscopic instruments or delicate guidewires.  However, as technology evolves, so must medicine and surgery, and every specialty seems to be jumping on the bandwagon, including cardiac surgery.  It always amazed me that one of the “easiest” and shortest operations in cardiac surgery, an open aortic-valve replacement, still required placing a patient on cardiopulmonary bypass through a large sternotomy.  More recently, though, transcatheter aortic-valve replacement (TAVR) has emerged as a less invasive alternative to open aortic-valve surgery for patients with severe aortic stenosis.  Clinical trials initially tested TAVR in patients at high risk for open cardiac surgery.  However, many patients who are referred for aortic-valve replacement are at low to intermediate risk.  With the goal of expanding the criteria for TAVR candidates, Leon et al. published the results of the PARTNER 2 trial in this week’s NEJM, comparing surgical aortic-valve replacement with TAVR in intermediate-risk patients.

A total of 2032 patients with severe aortic stenosis and intermediate surgical risk (according to Society of Thoracic Surgeons criteria) were enrolled in the trial at 57 centers in the United States and Canada.  Before randomization, patients were evaluated to assess whether they were eligible for transfemoral access (if the femoral artery was accessible) or transthoracic access (i.e., transapical or transaortic, through an incision in the chest, if the femoral artery was not readily accessible) for the TAVR procedure, and were stratified into a transfemoral or transthoracic cohort accordingly.  Each cohort was then randomized in a 1:1 ratio to either TAVR or surgical aortic-valve replacement.

The primary endpoint was a composite of death from any cause or disabling stroke at 2 years, with the hypothesis that TAVR would be noninferior to surgery at a prespecified noninferiority margin of 1.2 for the upper bound of the hazard ratio. Secondary endpoints included improvement in aortic-valve areas and the frequency of acute kidney injury, severe bleeding, new-onset atrial fibrillation, and paravalvular aortic regurgitation.

Among the 2032 randomized patients, 1011 were assigned to TAVR and 1021 to surgery. In the TAVR group, 236 patients (23%) had transthoracic access; the rest were treated transfemorally. There was no significant difference between the two treatment arms in the primary endpoint of death from any cause or disabling stroke at 2 years (19.3% with TAVR vs. 21.1% with surgery; hazard ratio for TAVR, 0.89; 95% confidence interval [CI], 0.73 to 1.09; P=0.25).  The risk ratio for the primary endpoint at 2 years was 0.92 (95% CI, 0.77 to 1.09), which met the prespecified noninferiority criterion.  There was no difference between the two groups in the individual components of the primary endpoint.  In an underpowered subgroup analysis, the transfemoral TAVR cohort had reduced rates of death from any cause or disabling stroke compared with surgery, but there was no difference in the transthoracic cohort.
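The noninferiority criterion reported here reduces to a simple comparison: the upper bound of the 95% confidence interval for the risk ratio must fall below the prespecified margin of 1.2. A minimal sketch of that check (the function name is my own):

```python
# Sketch of the noninferiority check described above: the new treatment is
# declared noninferior if the upper bound of the 95% CI for the risk ratio
# (or hazard ratio) falls below the prespecified margin (1.2 in PARTNER 2).

def is_noninferior(ci_upper: float, margin: float = 1.2) -> bool:
    """True if the CI upper bound stays below the noninferiority margin."""
    return ci_upper < margin

# Values reported in the trial: risk ratio 0.92, 95% CI 0.77 to 1.09.
print(is_noninferior(1.09))  # True: 1.09 < 1.2, so the criterion is met
```

Note that noninferiority depends on the interval's upper bound, not the point estimate; a risk ratio below 1.0 with a wide interval crossing 1.2 would still fail the criterion.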

At 30 days, major vascular complications were more frequent in TAVR patients, but life-threatening bleeding, acute kidney injury, and new-onset atrial fibrillation were more frequent in surgery patients.

Improvement in aortic-valve areas and gradients was significantly greater in the TAVR group at both 30 days and 2 years.  However, the frequency and severity of paravalvular aortic regurgitation was greater after TAVR than surgery.

With these encouraging results, Dr. Neil Moat asks in his accompanying editorial, “Will TAVR become the predominant method for treating severe aortic stenosis?”  He seems to answer in the affirmative but reminds the reader of a few caveats.  He notes that although these patients are technically at “intermediate” risk for complications from open surgery according to Society of Thoracic Surgeons criteria, they are actually still within the highest quartile of risk, which is important to remember when assessing patients with severe aortic stenosis.  Another important limitation is that small stented valves were used in the surgery group, a practice that has been shown to lead to poorer outcomes, leaving room for improvement in the future.  Additionally, cost-effectiveness was not addressed in this trial, and while TAVR has been shown to be cost-effective in previous trials in high-risk groups, the same may not hold true in low-risk patients.  Finally, he concludes, “As with many trials involving new technologies, the findings have to be interpreted with the understanding that the technology (in both groups) has advanced since the design of the trial.”  Both TAVR and surgical technologies have advanced since the start of this trial, which makes it harder to generalize the results to current patients.

Alas, this is likely not the last we will hear of the debate, as the Treatment of Severe, Symptomatic Aortic Stenosis in Intermediate Risk Subjects Who Need Aortic Valve Replacement (SURTAVI) trial (ClinicalTrials.gov number, NCT01586910) has just finished recruitment, and further studies involving low-risk patients are likely to emerge.

Don’t miss the NEJM Quick Take video summary on this study.

Surgery in Patients with Ischemic Cardiomyopathy

Posted by Carla Rothaus • April 22nd, 2016

The original Surgical Treatment for Ischemic Heart Failure (STICH) trial by Velazquez et al. was designed to test the hypothesis that coronary-artery bypass grafting (CABG) plus guideline-directed medical therapy for coronary artery disease, heart failure, and left ventricular dysfunction would improve survival over that with medical therapy alone. The authors now report the results of the STICH Extension Study (STICHES), which was conducted to evaluate the long-term (10-year) effects of CABG in patients with ischemic cardiomyopathy.

Among patients with ischemic cardiomyopathy, coronary-artery bypass grafting added to medical therapy led to significantly lower rates of death from any cause and of cardiovascular death over 10 years than did medical therapy alone. A new Original Article summarizes.

Clinical Pearl

• Why is there a need for contemporary trials that compare coronary-artery bypass grafting with medical therapy for patients with coronary artery disease and heart failure?

Landmark clinical trials have established CABG as an effective treatment for patients with disabling angina symptoms. In these trials, CABG was associated with longer survival than was medical therapy alone among the subgroups with more extensive coronary artery disease and worse left ventricular dysfunction. However, the trials were conducted more than 40 years ago, before the availability of current guideline-based medical therapy for coronary artery disease and heart failure, and they did not include patients with severe left ventricular dysfunction; only 4% of participants had symptomatic heart failure. More recently, an increasing proportion of patients with heart failure and left ventricular dysfunction are referred for CABG.

Clinical Pearl

• What were the results of the STICH trial at a median follow-up of 56 months?

In the analysis of data from that trial, at a median follow-up of 56 months, there was no significant difference between the CABG group and the medical-therapy group in the rate of death from any cause, although the rates of death from cardiovascular causes and of death from any cause or hospitalization for cardiovascular causes were lower among patients in the CABG group.

Morning Report Questions

Q: Is CABG associated with a long-term survival benefit as compared to medical therapy alone in patients with coronary artery disease and left ventricular dysfunction?

A: In the trial by Velazquez et al. involving patients with heart failure, left ventricular dysfunction, and coronary artery disease, the rate of death from any cause over 10 years was lower by 16% (an 8-percentage-point absolute difference in the 10-year Kaplan–Meier rates) among patients who underwent CABG in addition to receiving medical therapy than among those who received medical therapy alone. Overall, CABG was associated with an incremental median survival benefit of nearly 18 months and prevention of one death due to any cause for every 14 patients treated and of one death due to a cardiovascular cause for every 11 patients treated. The authors previously reported that CABG was associated with a risk of death within the initial 30 days after randomization that was triple the risk with medical therapy alone, with similar differences in risk up to the second year of follow-up, before a significant benefit began to accrue after 2 years. Thus, it appears that the operative risk associated with CABG is offset by a durable effect that translates into increasing clinical benefit to at least 10 years.
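The number-needed-to-treat figures quoted above follow from standard arithmetic: the NNT is the reciprocal of the absolute risk reduction, conventionally rounded up to the next whole patient. A generic sketch (the event rates in the example are illustrative, chosen so that a difference of about 7 percentage points yields the NNT of 14 cited above):

```python
import math

# Generic number-needed-to-treat calculation: NNT = 1 / absolute risk
# reduction (ARR), conventionally rounded up to the next whole patient.
# The event rates below are illustrative, not the trial's exact figures.

def nnt(rate_control: float, rate_treatment: float) -> int:
    """Number needed to treat to prevent one event, rounded up."""
    arr = rate_control - rate_treatment  # absolute risk reduction
    if arr <= 0:
        raise ValueError("no risk reduction; NNT is undefined")
    return math.ceil(1 / arr)

# An absolute difference of about 7.2 percentage points gives an NNT of 14.
print(nnt(0.661, 0.589))  # 14
```

Because the NNT is a reciprocal, it is very sensitive to small changes in the absolute risk reduction: a difference of 8 percentage points would give an NNT of 13, while 7 percentage points gives 15.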

Figure 2. Kaplan–Meier Estimates of the Rates of Death from Any Cause, Death from Cardiovascular Causes, and Death from Any Cause or Hospitalization for Cardiovascular Causes.

Table 2. Primary and Secondary Outcomes.

Q: What are some of the factors that are felt to have contributed to the effectiveness of CABG in the Velazquez study?

A: Substantial declines in risk-adjusted mortality associated with CABG have occurred since the 1970s, when the landmark trials comparing CABG and medical therapy were performed. Improvements in myocardial protection techniques, surgical skill, and perioperative care, coupled with the near-universal use of the left internal mammary artery (LIMA) conduit, are probably responsible. Among the patients randomly assigned to undergo CABG, 91.0% of patients in STICH received a LIMA graft, as compared with 9.9% of patients in the early CABG trials. Although there are limited data on the long-term patency of LIMA or saphenous-vein grafts in patients at high risk for death or complications, like those enrolled in STICH, evidence from studies involving lower-risk patients supports the superior 1-year angiographic results with the LIMA. In addition, the high rate of use of statins, which have been shown to reduce the rate of vein-graft failure, is likely to have contributed to the durable effect of CABG and the low rates of repeat revascularization observed in this group.

A Boy with a Breast Mass

Posted by Carla Rothaus • April 22nd, 2016

When evaluating a boy with breast enlargement, diagnostic considerations include gynecomastia, benign breast lesions, and cancer.

An 8-year-old boy presented with a mass in the right breast that had been present for 18 months and had enlarged during the previous 6 months. On examination, a firm, mobile mass (2 cm by 2 cm) was present under the right areola. Diagnostic procedures were performed. A new Case Records of the Massachusetts General Hospital summarizes.

Clinical Pearl

• What are some of the drugs or herbal compounds that may cause gynecomastia?

Drugs and herbal compounds that cause gynecomastia include estrogens and estrogen-like compounds (e.g., tea tree and lavender oils and weak estrogens and antiandrogens that are found in certain lotions, soaps, and shampoos), drugs that inhibit testosterone secretion or action (e.g., ketoconazole, spironolactone, androgen-receptor blockers, metronidazole, cimetidine, and alkylating agents), drugs that increase endogenous estrogen production (e.g., fertility-inducing therapies), and other agents that operate through unclear mechanisms (e.g., marijuana, heroin, growth hormone, calcium-channel blockers, isoniazid, and tricyclic antidepressants).

Clinical Pearl

• What are some of the causes of gynecomastia associated with decreased testosterone secretion and high gonadotropin levels?

Causes of decreased testosterone secretion include conditions associated with hypergonadotropic hypogonadism (high gonadotropin levels), such as anorchia, testicular trauma, mumps orchitis, use of alkylating agents, radiation to the testes, and disorders of testosterone biosynthesis. Klinefelter’s syndrome is another cause of hypergonadotropic hypogonadism; when elevated gonadotropin levels are found in a patient with gynecomastia, a karyotype analysis should be performed to rule out this sex-chromosome disorder.

Morning Report Questions

Q: What is the most common primary breast malignancy in children?

A: Primary breast cancer in children is exceedingly rare; only 0.1 to 0.3% of all breast cancers occur in the pediatric age group. Approximately 80% of primary breast cancers in children are secretory carcinomas, an unusual histologic tumor type among adults with breast cancer. Other primary breast cancers that occur in children, such as medullary and inflammatory carcinomas, are less common than secretory carcinomas and behave more aggressively. In addition, metastases from other primary cancers may cause breast masses; in fact, this occurs more commonly than does primary breast cancer in pediatric patients.

Figure 1. Ultrasound Images of the Breasts.

Q: What are some of the features of secretory carcinoma of the breast?

A: Invasive secretory carcinoma is a rare variant of invasive ductal carcinoma that accounts for less than 0.15% of all diagnosed breast cancers. Secretory carcinomas frequently harbor the t(12;15)(p13;q25) translocation and ETV6-NTRK3 gene fusion. Secretory carcinomas are slow-growing, and although approximately 10% of affected patients have nodal involvement at the time of diagnosis, the prognosis is typically good. The available literature on secretory carcinoma suggests that local excision is the preferred initial treatment in children.

Figure 3. Excisional-Biopsy Specimen of the Mass in the Right Breast.

Déjà Voodoo: Readmission or Observation after the Affordable Care Act

Posted by Rena Xu • April 20th, 2016

The hospital where I work has one of the busiest emergency departments in Boston. Patients come in with everything you might imagine, from heart attacks to rabbit bites. A number of these patients, after being evaluated and treated, can be discharged home from the emergency department; others need to be admitted for further management. For still others, the plan isn’t immediately clear. Let’s say a patient has pain from a kidney stone. Can he try to pass the stone, or does he need intravenous medications and surgery? Such a patient might benefit from a short period of observation, in a so-called observation unit. If his pain improves, he could potentially return home, avoiding an admission.

While there are many appropriate uses for an observation unit, critics have raised the possibility that hospitals may be preferentially observing patients, rather than admitting them, to make their readmission rates appear more favorable. The Affordable Care Act, which was passed in 2010, introduced an initiative to reduce costly readmissions for patients recently discharged from the hospital. The Hospital Readmissions Reduction Program targeted a handful of “high-yield” conditions — heart failure, acute myocardial infarction, pneumonia, chronic obstructive pulmonary disease, and knee and hip replacements. Readmission rates, which had begun to decline before the legislation went into effect, declined at an even greater pace afterward. Was this because hospitals were using observation units to “game” the system?

To test this theory, researchers from the Department of Health and Human Services and Brigham and Women’s Hospital in Boston looked at how readmission rates and the use of observation units changed after the passage of the ACA, both for targeted and non-targeted conditions. They analyzed data from more than 3,000 hospitals across the country, over three different time intervals: 2007 to April 2010, when the ACA was passed; 2010 to 2012, as the Hospital Readmissions Reduction Program was being implemented; and 2012 to 2015.

The data, published recently in NEJM, offered several notable insights. Readmission rates fell most dramatically in the spring of 2010, shortly after the ACA was passed. They declined further over the longer term (2012-15), but at a slower rate. And while readmissions decreased for both targeted and non-targeted conditions after the ACA, the slope of decline was greater for the conditions that were targeted.

Meanwhile, the use of observation units increased steadily over the entire study period, from 2007 to 2015. This was true for both targeted and non-targeted conditions. There was not a noticeable uptick in use, for either group of conditions, associated with the passage of the ACA. Nor was there a correlation between changes in the use of observation units and changes in readmission rates (P=0.07).

“It seems likely that the upward trend in observation-service use may be attributable to factors that are largely unrelated to the Hospital Readmissions Reduction Program, such as confusion over whether an inpatient stay would be deemed inappropriate by Medicare recovery audit contractors,” the authors remarked.

They concluded, “Our analysis does not support the hypothesis that increases in observation stays can account in any important way for the reduction in readmissions.”

Trends of association, of course, cannot establish causality. Changes in practice may reflect concurrent changes in insurance policies, coverage patterns, and hospital workflow. Furthermore, readmission rates are only one metric by which to measure quality of care and cost effectiveness. While they fall outside the scope of this paper, trends in hospital length-of-stay, overall spending, and long-term patient outcomes are also important to consider when interpreting these findings and assessing the influence of policy on practice.

And while the growing use of observation units remains incompletely understood, to a physician, their value is apparent. When I am reluctant to send a patient home, observation units offer a middle ground between discharge and admission. Their rising popularity may simply reflect the need for more nuanced disposition options as medicine becomes increasingly complex.

Surgery for Lumbar Spinal Stenosis

Posted by Carla Rothaus • April 14th, 2016

Lumbar spinal stenosis has become the most common indication for spinal surgery, and studies have shown that surgical treatment in selected patients is more successful than conservative alternatives. The aim of the Swedish Spinal Stenosis Study was to investigate whether fusion surgery as an adjunct to decompression surgery resulted in better clinical outcomes at 2 years than decompression surgery alone. Inclusion criteria included patients with spinal stenosis at one or two adjacent lumbar vertebral levels, with or without degenerative spondylolisthesis.

In this randomized, controlled trial comparing decompression surgery alone with decompression surgery plus fusion surgery for patients with lumbar spinal stenosis, there was no significant between-group difference in clinical outcomes at 2 and 5 years. A new Original Article summarizes.

Clinical Pearl

• In the United States, how often is spinal fusion added to decompression laminectomy for treatment of lumbar spinal stenosis?

As the use of surgery to treat lumbar spinal stenosis has increased during the past decades, so has the complexity of the surgical procedures. Thus, decompression of the neural structures via laminectomy has increasingly been supplemented with lumbar fusion, with the intention of minimizing a potential risk of future instability and deformity. In recent years, approximately half the patients in the United States who have received surgical treatment for lumbar spinal stenosis have undergone fusion surgery.

Clinical Pearl

• Is the presence of spondylolisthesis an indication for adjunctive spinal fusion in patients undergoing laminectomy for lumbar spinal stenosis?

Degenerative spondylolisthesis, a condition in which one vertebra has shifted forward in relation to the vertebra below it, can be seen on radiographs in some patients who have lumbar spinal stenosis. The presence of degenerative spondylolisthesis has often been considered to be a sign of instability, although there is no consensus on the definition of that term. Some studies have suggested that there may be a risk of iatrogenic slip or an increased degree of spondylolisthesis after decompression surgery in patients with degenerative spondylolisthesis. However, the possible clinical consequences of a slipped vertebra have been under debate for decades. The natural course of untreated degenerative spondylolisthesis has been reported to be benign and has not been correlated with progression of slip or clinical symptoms. Furthermore, few studies support the widespread use of fusion surgery in patients with lumbar spinal stenosis, regardless of the presence of spondylolisthesis. Despite this lack of evidence, surgeons often use a combination of decompression surgery and fusion surgery as a means of avoiding possible postoperative instability and restenosis. In the United States, 96% of patients with degenerative spondylolisthesis undergo fusion surgery as an adjunct to decompression surgery.

Morning Report Questions

Q: Is there a clinical benefit to adding fusion surgery to decompression surgery in patients with lumbar spinal stenosis?

A: The trial by Försth et al., which included 247 patients with lumbar spinal stenosis, with or without degenerative spondylolisthesis, revealed no clinical benefit of adding fusion surgery to decompression surgery at 2 years after surgery. The primary outcome was the score on the Oswestry Disability Index (ODI), which ranges from 0 to 100, with higher scores indicating more severe disability. There was no significant difference between the two treatment groups in the primary outcome; the mean ODI score at 2 years was 27 in the fusion group and 24 in the decompression-alone group (P=0.24). For the primary outcome, the authors found no significant interaction between type of treatment and presence of spondylolisthesis (P=0.33 for interaction). There was also no significant difference between the treatment groups in the results of the 6-minute walk test at 2 years (397 m in the fusion group and 405 m in the decompression-alone group, P=0.72). Approximately two thirds of the patients in the trial had more than 5 years of follow-up, and among those patients the lack of superiority of decompression plus fusion seemed to persist at 5 years.

Table 3. Outcomes in the Per-Protocol Population.

Q: How does the cost of adjunctive spinal fusion compare to that of decompression surgery alone?

A: In the trial by Försth et al., the addition of fusion surgery to decompression surgery significantly increased direct hospital costs, including the costs of surgery and the in-hospital stay, but did not increase indirect costs at 2 years. As compared with decompression plus fusion, the use of decompression surgery alone not only is associated with a lower treatment cost per patient but also can save resources by releasing surgical capacity as a consequence of shorter operating time and hospitalization.

Table 4. Resource Use.

Fusion for Lumbar Spinal Stenosis?

Posted by Joshua Allen-Dicker • April 13th, 2016

Back pain does not respect traditional boundaries in healthcare.  Patients with back pain are present in our emergency rooms, our minute clinics, our surgical subspecialty offices, and our inpatient units.  As such, many of us—orthopedist or internist, rheumatologist or advanced practitioner—have had to think about advising the patient with lumbar spinal stenosis or lumbar spondylolisthesis, two of the most common reasons for spinal surgery.  This week’s NEJM includes two studies, the Swedish Spinal Stenosis Study (SSSS) and the Spinal Laminectomy versus Instrumented Pedicle Screw (SLIP) trial, that may prompt us to revisit our views of surgical treatment options for symptomatic lumbar spinal stenosis.

SSSS enrolled patients with symptomatic lumbar spinal stenosis (a narrowing of the spinal canal) with or without degenerative spondylolisthesis (slippage of one vertebral body over another).  Patients were randomized to receive either decompression via laminectomy (removal of the lamina from a vertebral body) or decompression via laminectomy plus spinal fusion (removal of the lamina from a vertebral body followed by the coupling of adjacent vertebral bodies).  The primary outcome was a 2-year follow-up assessment of disability via the Oswestry Disability Index, a disease-specific outcome measure.  Data on costs were also collected.

Comparing the 124 patients randomized to the laminectomy alone group with the 123 patients randomized to the laminectomy plus fusion group, the SSSS authors found no significant difference in the primary outcome (P=0.24).  This also held true for the subset of patients with spondylolisthesis (P=0.11).  The authors did note significantly higher costs for those in the laminectomy plus fusion group than for those in the laminectomy alone group.

The SLIP trial enrolled patients with lumbar spinal stenosis and degenerative spondylolisthesis.  Patients were randomized to receive laminectomy alone or laminectomy plus fusion.  The primary outcome was physical health-related quality of life via the SF-36 PCS questionnaire, performed at 2-year follow-up.  Data on operative blood loss, length of operation and length of stay were also collected.

Comparing the 35 patients randomized to the laminectomy alone group with the 31 patients randomized to the laminectomy plus fusion group, the SLIP authors found a significant mean treatment effect of 5.7 points on the SF-36 PCS questionnaire in favor of laminectomy plus fusion (the authors had prespecified a minimal clinically important difference of 5 points).  There was no significant difference on the Oswestry Disability Index, a secondary outcome measure.  However, laminectomy plus fusion was associated with significantly greater blood loss, longer operations, and longer hospital stays.
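The logic behind calling this effect "clinically important" can be made explicit with a minimal sketch (the function name and structure are ours, not from the trial; the numbers are those reported above):

```python
# Minimal sketch: judging an observed mean treatment effect against a
# pre-specified minimal clinically important difference (MCID), as in
# the SLIP trial's SF-36 PCS comparison. Illustrative helper, not trial code.

def meets_mcid(mean_effect: float, mcid: float) -> bool:
    """Return True if the observed mean effect meets or exceeds the MCID."""
    return mean_effect >= mcid

# SLIP: mean treatment effect of 5.7 points favoring laminectomy plus
# fusion, against a pre-specified MCID of 5 points.
print(meets_mcid(5.7, 5.0))  # True: modest, but above the threshold
```

The point of an MCID is that statistical significance alone does not establish that patients would notice the difference; here the 5.7-point effect just clears the 5-point bar, which is why the benefit is described as modest.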

How should SSSS and SLIP change our approach to the next patient we see with lumbar spinal stenosis-related back pain? After rapid increases in the utilization of spinal fusion without supportive high-grade evidence, some clinicians may be looking for an affirmation of current practices, and others a reappraisal. SSSS and SLIP provide a little of both:

From a clinical outcomes standpoint, SSSS fails to show a benefit for laminectomy plus fusion over laminectomy alone in patients with lumbar spinal stenosis, as well as in the subgroup of patients who also had degenerative spondylolisthesis.  On the other hand, SLIP, which examined only patients with lumbar stenosis and degenerative spondylolisthesis, showed a modest benefit in physical health-related quality of life for those receiving laminectomy plus fusion.

Despite this, both studies revealed significant drawbacks to the laminectomy plus fusion approach, which was associated with higher costs, greater blood loss, longer operations, and longer hospital stays than laminectomy alone.

When taking a risk-benefit or value-driven approach to the laminectomy plus fusion debate, SSSS and SLIP do not provide a clear winner.  In an accompanying editorial, Drs. Wilco Peul and Wouter Moojen of Leiden University Medical Center in the Netherlands conclude: “Both studies demonstrate clearly that for most stenosis patients surgery should be limited to decompression in absence of overt instability. Evidence from these two trials suggest that fusion in stenosis is no longer best practice and its use should be restricted…”

SSSS and SLIP highlight the need for us to develop better methods of identifying those patients who will benefit most from the laminectomy plus fusion approach, and those who will do best with laminectomy alone.  This is no easy task.  In the meantime, these studies reinforce the importance of prudence, restraint, and personalization of our discussions with patients—in every healthcare venue—regarding treatment options for lower back pain.

Don’t miss the NEJM Quick Take video summary of this study.