Rate Control versus Rhythm Control for Atrial Fibrillation after Cardiac Surgery

Posted by Ravi Parikh, M.D., M.P.P. • May 18th, 2016

Whether you are a resident on the cardiology, surgery, or general medicine service, encountering patients with atrial fibrillation is common. Many patients, particularly after cardiac surgery, go in and out of atrial fibrillation so often that the residents caring for them eventually ignore the blinking lights and loud alarms from the telemetry machines. However, atrial fibrillation has adverse consequences: postoperative atrial fibrillation and its sequelae cost the U.S. health care system up to $1 billion each year. While anticoagulation can reduce the risk of the dreaded complication of stroke, it is also important to treat atrial fibrillation to prevent structural consequences of rapid ventricular response, such as tachycardia-mediated cardiomyopathy. Two strategies – controlling the heart rate (rate control) and converting the patient’s heart to sinus rhythm (rhythm control) – are used as medical treatments for atrial fibrillation.

Taking a historical perspective, a slew of antiarrhythmic drugs came onto the market in the 1980s and 1990s, when large clinical trials demonstrated the substantial stroke risk conferred by atrial fibrillation. The logic was simple: if atrial fibrillation led to devastating consequences, it made sense to try to restore sinus rhythm. However, antiarrhythmic drugs came with potentially dangerous side effects. Some practitioners thought that decreasing the risk of rapid ventricular response through rate control might confer a similar benefit without the potentially harmful side effects of rhythm control. The landmark Atrial Fibrillation Follow-up Investigation of Rhythm Management (AFFIRM) trial, published in NEJM in 2002, found that in approximately 4000 nonsurgical patients with prior atrial fibrillation, rhythm control offered no survival advantage over rate control but resulted in more hospitalizations and more adverse events. The smaller Rate Control versus Electrical Cardioversion for Persistent Atrial Fibrillation (RACE) trial, also published in NEJM in 2002, reported similar findings in 522 patients with persistent atrial fibrillation.

However, as in most of medicine, it is difficult to generalize. This is particularly true after cardiac surgery, when up to 50% of patients develop atrial fibrillation. While a joint 2014 American College of Cardiology (ACC), American Heart Association (AHA), and Heart Rhythm Society (HRS) guideline recommended rate control with beta-blockers for postoperative atrial fibrillation, considerable variation in practice continues.

In this week’s NEJM, Gillinov et al. report the results of a multicenter (23 centers) randomized controlled trial designed to answer this question for patients with postoperative atrial fibrillation. Investigators enrolled patients who were undergoing elective cardiac surgery for either coronary artery disease or valvular disease and who developed persistent or recurrent postoperative atrial fibrillation. Patients with a history of prior atrial fibrillation were excluded. From 2014 to 2015, 523 patients with postoperative atrial fibrillation were randomized to either a rate-control or a rhythm-control strategy. The rate-control arm received medications to slow the heart rate to a target of less than 100 beats per minute. The rhythm-control arm received a standard loading dose of amiodarone for pharmacologic rhythm control, with cardioversion if atrial fibrillation persisted for 24-48 hours after randomization. Patients in either arm whose atrial fibrillation lasted longer than 24-48 hours after randomization received anticoagulation.
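For readers who like to see protocol logic spelled out, here is a minimal Python sketch of the two arms as just described. It is an illustration under stated assumptions, not the trial protocol: the function name and the hard 24- and 48-hour cutoffs are mine (the protocol describes a fuzzier 24-48-hour window).

```python
def manage_postop_afib(strategy, heart_rate_bpm, hours_in_afib):
    """Sketch of the trial's two arms as summarized above (illustrative only).

    Rate control targets a heart rate below 100 beats per minute; rhythm
    control starts with an amiodarone load, with cardioversion for persistent
    atrial fibrillation. Both arms anticoagulate for prolonged fibrillation.
    """
    plan = []
    if strategy == "rate":
        if heart_rate_bpm >= 100:
            plan.append("titrate rate-slowing medication (target < 100 beats/min)")
    elif strategy == "rhythm":
        plan.append("amiodarone loading dose")
        if hours_in_afib >= 24:  # protocol window is 24-48 h; 24 is an assumption
            plan.append("cardioversion for persistent atrial fibrillation")
    else:
        raise ValueError("strategy must be 'rate' or 'rhythm'")
    if hours_in_afib >= 48:  # both arms: anticoagulate for persistent fibrillation
        plan.append("start anticoagulation")
    return plan

print(manage_postop_afib("rhythm", heart_rate_bpm=130, hours_in_afib=30))
# ['amiodarone loading dose', 'cardioversion for persistent atrial fibrillation']
```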

The investigators found that atrial fibrillation was present after cardiac surgery in 30% of patients – and in 50% of patients who had undergone combined coronary-artery bypass grafting and valve surgery. Patients (mean age, 68.8 years) assigned to rate control and those assigned to rhythm control had similar total numbers of hospital days (6.4 vs. 7.0; P=0.76), similar lengths of stay for the index hospitalization (5.5 vs. 5.8 days; P=0.88), and similar rates of rehospitalization (P=0.88). At 60 days, 93.2% of patients in the rate-control group were in sinus rhythm, compared with 97.9% of patients in the rhythm-control group (P=0.02). There were no significant differences between the rate-control and rhythm-control groups in the overall rates of serious adverse events, strokes, or serious bleeding. Similar numbers of patients in both arms received anticoagulation prior to discharge; although rate control resulted in slower conversion to sinus rhythm, the duration of anticoagulation was similar in the two arms (median, 45 days).

There are several limitations to the study, including the high rates of crossover between treatment strategies, as well as treatment discontinuation in the two arms. However, the study’s greatest strength lies in its numbers: it is a large (N=523) randomized trial of postoperative atrial fibrillation management. The findings of Gillinov et al. suggest that although rhythm control may offer a faster return to sinus rhythm, it comes with the potential side effects of antiarrhythmic drugs (in this case, amiodarone) and no shorter duration of anticoagulation. Since rhythm-control strategies are complex and offered no definitive clinical benefit, many would interpret the study’s results as clearly favoring rate control. Hugh Calkins, M.D., Director of the Cardiac Arrhythmia Service at the Johns Hopkins Hospital, writes in an accompanying editorial: “The all too frequent knee-jerk reaction to cardiovert patients with postoperative atrial fibrillation with hopes of reducing stroke risk, the need for anticoagulation, and the duration of hospitalization seems hard to justify given the well-known risks and costs associated with such interventions and the absence of clear beneficial effects of the rhythm control strategy.” The Gillinov et al. study clarifies the risks and benefits of either treatment strategy in postoperative atrial fibrillation and argues for shared decision-making with patients prior to heart surgery.

A Woman with Psychosis

Posted by Carla Rothaus • May 13th, 2016

The combination of malabsorption and autoimmunity strongly suggests the possibility of celiac disease, which is not always associated with gastrointestinal symptoms. Although neurologic and psychiatric symptoms of celiac disease are not widely recognized, they have been reported.

Examination of a 37-year-old woman with adult-onset psychosis revealed weight loss, a thyroid nodule, anemia, and micronutrient deficiencies. Diagnostic tests were performed. A new Case Records of the Massachusetts General Hospital summarizes.

Clinical Pearl

• What historical or clinical findings should make one consider celiac disease?

Celiac disease should be considered in patients with gastrointestinal symptoms and in patients with extraintestinal problems, such as iron-deficiency anemia that is unresponsive to treatment, arthritis, or elevated levels of liver enzymes; it should also be considered in high-risk patients, including those with a family history of celiac disease, those with type 1 diabetes mellitus, and those with autoimmune thyroid disease.

Clinical Pearl

• What serologic tests are useful when celiac disease is suspected?

Serologic tests — including tests for IgA tissue transglutaminase antibodies, endomysial antibodies, and deamidated gliadin peptide antibodies — help to identify persons who may benefit from duodenal biopsy. General consensus regarding these studies is that the test for IgA tissue transglutaminase antibodies is the most reliable and cost-effective.

Morning Report Questions

Q: What is the appropriate management when test results for IgA tissue transglutaminase antibodies are abnormal? 

A: Patients with abnormal test results for IgA tissue transglutaminase antibodies or those in whom celiac disease is highly suspected must be referred to a gastroenterologist for confirmatory testing with endoscopy and biopsy. The diagnostic standard for celiac disease is a biopsy of the small intestine. The patient must remain on a gluten-containing diet for testing to be accurate. A trial of a gluten-free diet is not appropriate unless celiac disease has been ruled out; once the patient is on a gluten-free diet, a clinician cannot distinguish between celiac disease and other gluten-related disorders, including nonceliac gluten sensitivity, because both can have extraintestinal manifestations. If a patient has initiated a gluten-free diet without having had the proper testing, a genetic test may be helpful to identify whether there is a need to reintroduce gluten for confirmatory testing for celiac disease.
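The pathway above reduces to a small decision rule. Here is a minimal sketch assuming boolean inputs of my own naming; it is an illustration of the described workup, not a clinical tool.

```python
def celiac_next_step(ttg_iga_abnormal, on_gluten_free_diet, celiac_ruled_out=False):
    """Return the next diagnostic step in the pathway summarized above."""
    if on_gluten_free_diet and not celiac_ruled_out:
        # Serology and biopsy are accurate only on a gluten-containing diet;
        # genetic testing can show whether gluten must be reintroduced first.
        return "genetic testing to decide whether gluten must be reintroduced"
    if ttg_iga_abnormal:
        return "refer to gastroenterology for endoscopy and small-bowel biopsy"
    return "celiac disease not confirmed; consider other causes"

print(celiac_next_step(ttg_iga_abnormal=True, on_gluten_free_diet=False))
# refer to gastroenterology for endoscopy and small-bowel biopsy
```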

Figure 1. Initial Biopsy Specimen of the Duodenum.

Figure 2. Subsequent Biopsy Specimen of the Duodenum (Hematoxylin and Eosin).

Q: Is there a connection between celiac disease and neuropsychiatric disease?

A: Celiac disease was classically described as a gastrointestinal condition that almost exclusively affects white children. Now celiac disease is described as an autoimmune disorder that can affect persons of any age or race and can involve any tissue or organ of the body. The typical gastrointestinal symptoms (diarrhea, bloating, failure to thrive, and weight loss) are readily explained by the underlying intestinal damage caused by the autoimmune attack that occurs after the ingestion of gluten, but the many extraintestinal symptoms that patients with celiac disease often have are more difficult to explain. New insights into the pathogenesis of celiac disease suggest that it is truly a systemic disease that can spread from the intestine to any tissue or organ of the body. One of the most intriguing yet controversial clinical presentations of celiac disease involves the nervous system as a preferred target. Patients with celiac disease often have chronic headache, short-term memory loss, irritability, anxiety, and depression, and more rarely have seizures, ataxia, autism, attention-deficit disorders, and psychosis. Although we do not have a definitive explanation of the way in which an inflammatory process in the intestine affects the brain, there is growing evidence of close functional and organic interactions between these two systems, typically described as the gut–brain axis.

Table 1. Extraintestinal Manifestations of Celiac Disease.

Table 2. Neuropsychiatric Symptoms Associated with Celiac Disease.

Caregivers of Critically Ill Patients

Posted by Carla Rothaus • May 13th, 2016

Unpaid caregivers (typically family or close friends) are essential to the sustainability of North American health care systems, because their unpaid labor accounts for an estimated $27 billion annually in Canada and $642 billion in the United States. Although caregiver assistance can be beneficial for patients, such care may have negative consequences for caregivers, including poor health-related quality of life, emotional distress, a subjective sense of burden, and symptoms of post-traumatic stress disorder. Cameron et al. used hospital data and self-administered questionnaires to collect information on caregiver and patient characteristics in a prospective cohort of caregivers of patients who had received mechanical ventilation for a minimum of 7 days in the ICU and had survived to discharge. The objectives were to describe health outcomes in caregivers, identify subgroups of caregivers with distinct health trajectories, and identify variables associated with poor caregiver outcomes.

Investigators evaluated the caregivers of patients who had received mechanical ventilation for at least 7 days in an ICU. Although there was a large burden of depressive symptoms soon after discharge, the burden diminished in magnitude, in most caregivers, during the subsequent year. A new Original Article summarizes.

Clinical Pearl

• Is physical health affected to the same degree as mental health in caregivers who provide care for an ill patient at home?

In the study by Cameron et al., the caregivers’ mean age was 53 years, 70% were women, and 61% were caring for a spouse. Caregivers of patients who had received mechanical ventilation in the ICU for at least 7 days were at risk for poor mental health outcomes, whereas their physical health was similar to population norms.

Clinical Pearl

• To what extent do the specific characteristics of patients affect the health outcomes of those who care for them?

Cameron et al. found that caregiver outcomes did not appear to be related to patient demographic and clinical characteristics or to changes in patient functional and psychological outcomes over time. Multivariable mixed-effects models did not identify any patient characteristics that were significantly associated with caregiver outcomes during the 1-year follow-up. Together, these analyses suggest that patients’ severity of illness, functional abilities, cognitive status, and neuropsychological well-being were not associated with caregiver outcomes.

Morning Report Questions

Q: How substantial is the risk of clinical depression among caregivers of critically ill patients? 

A: Findings of the study by Cameron et al. suggest that a substantial percentage of caregivers may be at risk for clinical depression. Previous research has shown that the Center for Epidemiologic Studies Depression (CES-D) scale is a good screening tool for clinical depression. In the Cameron study, 43% of caregivers had a score of more than 15 on the CES-D scale at 12 months after the patients for whom they were caring were discharged from the ICU, suggesting persistent symptoms of clinical depression. This rate is substantially higher than the rate in the Canadian adult population (12%) and is also higher than the rate observed in a large sample of caregivers of persons with dementia (32%). Although the mental status of caregivers before patients’ critical illness events was not known, depressive symptoms did decrease over time for most caregivers except within a subgroup who had more severe symptoms than the rest of the sample at all time points.
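As a toy illustration of the screening cutoff described above, the snippet below flags hypothetical caregiver scores above 15 on the CES-D scale; the sample scores are invented for the example.

```python
# A CES-D score of more than 15 is the cutoff used in the study to suggest
# clinically significant depressive symptoms.
CESD_CUTOFF = 15

caregiver_scores_12mo = [10, 22, 16, 8, 31]  # hypothetical scores at 12 months
at_risk = [s for s in caregiver_scores_12mo if s > CESD_CUTOFF]
print(f"{len(at_risk)} of {len(caregiver_scores_12mo)} caregivers above the cutoff")
# 3 of 5 caregivers above the cutoff
```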

Figure 1. Caregiver Outcomes during the First Year after Patient Discharge from an Intensive Care Unit (ICU).

Q: What characteristics of caregivers of critically ill patients are associated with better health outcomes in the caregivers? 

A: In the Cameron study, caregivers had better health outcomes when they were older, were caring for a spouse, had higher income, and had better social support and sense of control and when caregiving had less of a negative effect on their everyday lives. The findings are consistent with previous pilot data from Choi et al., who identified two trajectory groups with respect to depressive symptoms and identified caregiver characteristics (younger age, female sex, financial difficulty, and poor health behaviors) but no patient characteristics (diagnosis at admission, ICU length of stay, age, illness severity, and abilities to perform activities and instrumental activities of daily living) that were associated with poor caregiver outcomes.

Table 1. Characteristics of the Caregivers at Baseline.

COPD Is Not the Whole Story

Posted by Rachel Wolfson • May 11th, 2016

Many diagnostic guidelines use black-or-white parameters: either patients meet the criteria and have the disease, or they don’t. While guidelines like this can be useful for developing clear definitions, in practice many patients fall within a gray area. The current diagnosis of chronic obstructive pulmonary disease (COPD), for example, relies on spirometry results that show airflow limitation (i.e., a ratio of FEV1/FVC<0.70). Some smokers who do not meet this criterion, however, report symptoms suggestive of COPD and in some cases are even being treated for COPD.

To better define this potential patient population, Woodruff et al. report the results of an observational study, SPIROMICS, in this week’s NEJM. The authors enrolled over 2,000 ever-smokers (current and former smokers) and never-smokers (non-smoking controls) into the study. They then subdivided the ever-smokers into those with FEV1/FVC<0.70 (i.e., those meeting the spirometric definition of COPD) and those with preserved spirometry (FEV1/FVC≥0.70, i.e., without a diagnosis of COPD). Using the COPD Assessment Test (CAT), they found that 50% of ever-smokers with preserved spirometry had symptoms of COPD, a prevalence somewhat lower than that among ever-smokers with COPD (65%, p<0.001) but higher than that among never-smokers (16%, p<0.001). Further, they followed these patients over time and found that symptomatic ever-smokers with preserved spirometry had higher rates of exacerbations, reduced exercise tolerance, and higher rates of airway-wall thickening on chest imaging in comparison with asymptomatic ever-smokers and never-smoking controls.
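The study’s groupings can be captured in a few lines of Python. One caveat: the post does not state the CAT cutoff SPIROMICS used to define “symptomatic”; the value of 10 below is the conventional threshold and should be read as an assumption, as should the function name and example values.

```python
def classify_smoker(fev1, fvc, cat_score, cat_cutoff=10):
    """Classify an ever-smoker by the definitions discussed above.

    Airflow limitation (spirometric COPD) is FEV1/FVC < 0.70; "symptomatic"
    is defined here by a CAT score at or above a conventional cutoff of 10
    (an assumption; the study's exact cutoff is not given in the post).
    """
    ratio = fev1 / fvc
    spirometry = "COPD" if ratio < 0.70 else "preserved spirometry"
    symptoms = "symptomatic" if cat_score >= cat_cutoff else "asymptomatic"
    return f"{spirometry}, {symptoms}"

# A hypothetical smoker: FEV1/FVC = 0.80 (preserved) but CAT score of 14
print(classify_smoker(fev1=2.8, fvc=3.5, cat_score=14))
# preserved spirometry, symptomatic
```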

These data are an initial foray into defining a new population of patients who do not meet the criteria for a COPD diagnosis but who nonetheless suffer from symptoms suggestive of COPD related to their smoking history. Now that this population has been provisionally identified, Leonardo Fabbri, M.D., posits in an accompanying editorial that more work must be done to define treatments for this group. This means running more clinical trials to find treatments that can help the patients who fall in the gray area.

Don’t miss the NEJM Quick Take video summary on this study.

Eye of the Beholder

Posted by Carla Rothaus • May 6th, 2016

Dermatomyositis, polymyositis, and necrotizing autoimmune myopathy all cause proximal muscle weakness. Proximal weakness is often progressive, with patients reporting difficulty in raising their arms above their head, climbing stairs, or standing from a seated position. Clinically, dermatomyositis is distinguished from polymyositis and necrotizing autoimmune myopathy by its distinctive dermal findings.

A 47-year-old man presented to an urgent care ambulatory clinic with a 3-day history of swelling around his left eye and a sensation of tightness in his throat. It had become difficult for him to swallow solids, and he felt as though food was sticking in his throat. A new Clinical Problem-Solving summarizes.

Clinical Pearl

• What are the characteristic dermal findings associated with dermatomyositis?

Dermatomyositis is distinguished from polymyositis and necrotizing autoimmune myopathy by its distinctive dermal findings, including Gottron’s papules (violaceous papules on the dorsum of the metacarpophalangeal and interphalangeal joints), Gottron’s sign (nonpalpable macules over the extensor surfaces of joints), heliotrope rash (a violaceous rash involving the periorbital skin), V sign (also known as shawl sign), in which there are macular erythematous lesions over the anterior chest and back, and holster sign, in which these lesions appear over the upper lateral thighs. Patients may have dilated nail-fold capillaries (which can be visualized with a dermatoscope) and ragged or thickened cuticles. In rare instances, the characteristic cutaneous features of dermatomyositis develop without muscle involvement, in a condition that is designated amyopathic dermatomyositis or dermatomyositis sine myositis.

Clinical Pearl

• What laboratory findings may be seen in dermatomyositis?

Elevation in creatine kinase levels is characteristic of all the inflammatory myopathies, with levels rising up to 50 times as high as the upper limit of the normal range in patients with dermatomyositis. Myositis-specific antibodies are often measured, since findings can have prognostic significance. The most common such antibody is anti–Jo-1, which is present in up to 20% of patients with polymyositis or dermatomyositis. The anti–Jo-1 antibody is associated with the antisynthetase syndrome, a constellation of arthritis, Raynaud’s phenomenon, mechanic’s hands (in which there is cracking along the distal tip and edges of the fingers), and interstitial lung disease, which is associated with a poorer prognosis in patients with an inflammatory myopathy.

Morning Report Questions

Q: What is the nature of the cancer risk in patients with dermatomyositis? 

A: A reported 15 to 25% of patients with dermatomyositis have had prior cancer or have concurrent cancer, or cancer will develop in them. However, there does not appear to be a correlation between dermatomyositis and a particular type of cancer; in most patients, the diagnosis of dermatomyositis comes after the diagnosis of cancer. There are no consensus guidelines for cancer screening in patients with inflammatory myositis. It is prudent to ensure age-appropriate cancer screening; some experts recommend chest, abdominal, and pelvic imaging for further assessment in patients without a known cancer, although data are lacking in regard to the associated cost-effectiveness and outcomes. Anti–transcriptional intermediary factor 1γ (also referred to as anti-p155) and anti–nuclear matrix protein 2 are associated with an increased risk of cancer in adults with dermatomyositis; the former, which is included in the myositis-specific antibody panel at most laboratories, has been reported to have a positive predictive value of 78% and a specificity of 89%.
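To make the predictive-value figures concrete, the sketch below computes a positive predictive value from sensitivity, specificity, and prevalence via Bayes’ rule. Only the specificity (0.89) and the 15 to 25% prevalence range come from the text; the sensitivity of 0.70 is an assumed input for illustration.

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value from sensitivity, specificity, and prevalence."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# With the reported specificity of 0.89, an assumed sensitivity of 0.70, and a
# cancer prevalence of 0.20 (the upper end of the 15-25% range quoted above),
# the PPV works out to about 0.61. The reported 78% reflects the studied
# population's characteristics rather than these illustrative inputs.
print(round(ppv(0.70, 0.89, 0.20), 2))  # 0.61
```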

Q: What medications are used for the initial treatment of dermatomyositis? 

A: Although there is a paucity of controlled trials on the treatment of dermatomyositis, glucocorticoids are the mainstay of treatment for dermatomyositis, polymyositis, and autoimmune necrotizing myositis — largely on the basis of clinical experience. Oral prednisone is often started at a dose of 0.75 to 1 mg per kilogram of body weight, with intravenous glucocorticoids reserved for severe or progressive cases; doses are tapered gradually, primarily on the basis of the patient’s symptoms. Additional immunosuppressants are often added as glucocorticoid-sparing agents — most often methotrexate or azathioprine, although mycophenolate and cyclosporine have also been used.
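The weight-based dosing translates into a trivial calculation; here is a sketch (the function name and example weight are mine):

```python
def prednisone_starting_dose_mg(weight_kg, low=0.75, high=1.0):
    """Starting oral prednisone range of 0.75-1 mg per kg, as described above."""
    return weight_kg * low, weight_kg * high

low_mg, high_mg = prednisone_starting_dose_mg(70)
print(f"70-kg patient: {low_mg:.1f}-{high_mg:.0f} mg daily")  # 52.5-70 mg daily
```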

Herniated Lumbar Intervertebral Disk

Posted by Carla Rothaus • May 6th, 2016

“Sciatica” refers to pain in a sciatic-nerve distribution, but this term is sometimes used indiscriminately to describe back and leg pain. Lumbar “radiculopathy” more specifically refers to pain with possible motor and sensory disturbance in a nerve-root distribution. After lumbar stenosis, spondylolisthesis, and fracture have been ruled out, approximately 85% of patients with sciatica are found to have a herniated intervertebral disk. In two studies of surgery for sciatica, at least 95% of herniated disks were at the L4–L5 or L5–S1 levels.

Pain from disk herniation, the leading cause of sciatica, usually resolves within several weeks with conservative therapy. In patients with sciatica for 6 weeks, pain relief is faster with surgery than with conservative therapy; however, outcomes are similar at 1 year. A new Clinical Practice summarizes.

Clinical Pearl

• When should a patient with a suspected herniated lumbar disk undergo magnetic resonance imaging (MRI)?

Computed tomography (CT) or MRI can confirm a clinical diagnosis of a herniated disk. Early MRI is indicated in patients with progressive or severe deficits (e.g., deficits involving multiple nerve roots) or clinical findings that suggest an underlying tumor or infection. Otherwise, CT or MRI is necessary only in a patient whose condition has not improved over 4 to 6 weeks of conservative treatment and who may be a candidate for epidural glucocorticoid injections or surgery. Disk herniation does not necessarily cause pain; MRI commonly shows herniated disks in asymptomatic persons, and the prevalence of herniated disks increases with age. Thus, symptoms may be misattributed to incidental MRI findings.
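The imaging logic above amounts to a short decision rule. Here is a hedged sketch, with function and argument names of my own devising; it illustrates the described triage, nothing more.

```python
def order_spine_imaging(severe_or_progressive_deficit=False,
                        tumor_or_infection_suspected=False,
                        weeks_of_conservative_therapy=0,
                        candidate_for_injection_or_surgery=False):
    """Sketch of the imaging indications summarized above (illustrative only)."""
    if severe_or_progressive_deficit or tumor_or_infection_suspected:
        return "early MRI"
    # Otherwise, image only after 4-6 weeks of failed conservative therapy in a
    # patient who may be a candidate for epidural injection or surgery.
    if weeks_of_conservative_therapy >= 4 and candidate_for_injection_or_surgery:
        return "CT or MRI to guide injection or surgery"
    return "no imaging yet; continue conservative therapy"

print(order_spine_imaging(weeks_of_conservative_therapy=6,
                          candidate_for_injection_or_surgery=True))
# CT or MRI to guide injection or surgery
```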

Clinical Pearl

• What is the likelihood that pain associated with a herniated lumbar disk will improve without surgery?

The natural history of herniated lumbar disks is generally favorable, but patients with this condition have a slower recovery than those with nonspecific back pain. In one study involving patients with a herniated disk and no indication for immediate surgery, 87% who received only oral analgesics had decreased pain at 3 months. Even in randomized trials that enrolled patients with persistent sciatica, the condition of most patients who did not undergo surgery improved.

Figure 1. Testing for Compromise of a Lumbar Nerve Root.

Figure 2. CT and MRI Terminology for Herniated Disks.

Morning Report Questions

Q: What is a general approach to the management of a herniated lumbar disk? 

A: Cohort studies suggest that the condition of many patients with a herniated lumbar disk improves in 6 weeks; thus, conservative therapy is generally recommended for 6 weeks in the absence of a major neurologic deficit. There is no evidence that conservative treatments change the natural history of disk herniation, but some offer slight relief of symptoms. In patients with acute disk herniation, avoidance of prolonged inactivity in order to prevent debilitation is important. The use of epidural glucocorticoid injections in patients with herniated disks has increased rapidly in recent years, although these injections are used on an off-label basis. A systematic review showed that patients with radiculopathy who received epidural glucocorticoid injections had slightly better pain relief (by 7.5 points on a 100-point scale) and functional improvement at 2 weeks than patients who received placebo. There were no significant advantages at later follow-up and no effect on long-term rates of surgery. Unless patients have major neurologic deficits, surgery is generally appropriate only in those who have nerve-root compression that is confirmed on CT or MRI, a corresponding sciatica syndrome, and no response to 6 weeks of conservative therapy. The major benefit of surgery is that relief of sciatica is faster than relief with conservative therapy, but, on average, there is a smaller advantage of surgery with respect to the magnitude of relief of back pain. Most, although not all, trials showed no significant advantage of surgery over conservative treatment with respect to relief of sciatica at 1 to 4 years of follow-up. Given these results, either surgery or conservative treatment may be a reasonable option, depending on the patient’s preferences for immediate pain relief, how averse the patient is to surgical risks, and other considerations.

Q: How common are complications when a patient undergoes lumbar diskectomy?

A: Procedural complications of lumbar diskectomy are less common than procedural complications of other types of spine surgery. A registry study indicated that an estimated 0.6 deaths per 1000 procedures had occurred at 60 days after the procedure. New or worsening neurologic deficits occur in 1 to 3% of patients, direct nerve-root injury occurs in 1 to 2%, and wound complications (e.g., infection, dehiscence, and seroma) occur in 1 to 2%. Incidental durotomy, which occurs in approximately 3% of patients, is associated with increases in the duration of surgery, blood loss during surgery, and the length of inpatient stay, as well as potential long-term effects such as headache. All tissues at the surgical site heal with some scarring, which contracts and binds nerves to surrounding structures. Normally, each nerve root glides a few millimeters in its neuroforamen with each walking step. Stretch on tethered nerves may be one source of chronic postsurgical pain.

Scaling the ALPS — Antiarrhythmic Drugs in Out-of-Hospital Cardiac Arrest

Posted by Bhavna Seth, M.D. • May 4th, 2016

Imagine you are out for an evening jog when a young man running ahead of you collapses. You rush over, and a rapid assessment suggests that he is unresponsive, has no pulse, and is not breathing. You start chest compressions and a bystander calls 911. EMS arrives soon; however, after 5 cycles of chest compressions and several attempts at defibrillation for a rhythm of ventricular fibrillation, there is still no response. The ACLS algorithm next prompts the team to choose an antiarrhythmic: amiodarone or lidocaine? The team pulls amiodarone, and his rhythm reverts. What led to that choice? Would lidocaine have had the same effect?

The pre-hospital period has been identified as critical to obtaining favorable outcomes in acute cardiovascular events. Currently, there are Class IIb recommendations from the American Heart Association for the use of antiarrhythmic medication, typically intravenous amiodarone or lidocaine, in cases of cardiac arrest. These agents may be considered for ventricular fibrillation or pulseless ventricular tachycardia unresponsive to CPR, defibrillation, or vasopressor treatment. However, evidence regarding the effectiveness of these agents remains inconclusive. In this week’s NEJM, Kudenchuk et al. try to answer the question of whether antiarrhythmic agents confer a survival benefit in out-of-hospital cardiac arrest.

This randomized, double-blind, placebo-controlled trial compared parenteral amiodarone, lidocaine, and saline placebo, each given along with standard care, in 3026 adults with nontraumatic out-of-hospital cardiac arrest and shock-refractory ventricular fibrillation or pulseless ventricular tachycardia. The primary outcome of interest was survival to hospital discharge, and the secondary outcome was favorable neurologic function at discharge, defined as a modified Rankin score of 3 or less. The study outcomes were evaluated in a modified intention-to-treat (or efficacy) population and an intention-to-treat (or safety) population.

The study did not find a statistically significant difference in either outcome: neither amiodarone nor lidocaine resulted in higher survival to hospital discharge than saline placebo (24.4%, 23.7%, and 21.0%, respectively), nor did either drug result in a significant difference in favorable neurologic function at discharge. However, both antiarrhythmic drugs showed significant benefit over placebo on certain measures: fewer shocks were administered after the first dose of the trial drug, fewer patients received rhythm-control medications during hospitalization, and fewer patients required CPR during hospitalization.
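To see why those survival rates fall short of statistical significance, the sketch below computes the absolute risk differences with Wald 95% confidence intervals. The per-arm denominators of 1000 (splitting the 3026 enrollees roughly evenly) are an assumption for illustration, not the trial’s exact counts; under that assumption, both intervals cross zero.

```python
from math import sqrt

# Reported survival-to-discharge rates; per-arm n of ~1000 is an assumption.
arms = {"amiodarone": (0.244, 1000), "lidocaine": (0.237, 1000), "placebo": (0.210, 1000)}

def risk_difference_ci(p1, n1, p0, n0, z=1.96):
    """Wald 95% confidence interval for a difference in two proportions."""
    diff = p1 - p0
    se = sqrt(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)
    return diff, diff - z * se, diff + z * se

p0, n0 = arms["placebo"]
for drug in ("amiodarone", "lidocaine"):
    p1, n1 = arms[drug]
    d, lo, hi = risk_difference_ci(p1, n1, p0, n0)
    print(f"{drug} vs placebo: {d*100:+.1f} points "
          f"(95% CI {lo*100:+.1f} to {hi*100:+.1f})")
# amiodarone vs placebo: +3.4 points (95% CI -0.3 to +7.1)
# lidocaine vs placebo:  +2.7 points (95% CI -0.9 to +6.3)
```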

Although the study is well designed, the investigators had little control over certain aspects of care, such as standardization of treatment in hospitals. The frequency of coronary catheterization, therapeutic hypothermia, and withdrawal of life-sustaining treatment did not differ between groups. The trial also used only one drug-administration strategy; other approaches, such as active-treatment crossover, might produce different results. Pre-hospital trials, however, pose unique practical, operational, ethical, and analytical challenges. Selection bias could have influenced trial enrollment, though the proportion of exclusions was small (6.3%). A potential confounder in clinical CPR trials is their relatively ‘open’ inclusion and exclusion criteria, under which many patients with no chance of survival are enrolled (for instance, asystolic patients who never convert to ventricular tachycardia or ventricular fibrillation). Thus, more benefit may be seen in certain subpopulations. However, it is difficult to implement strict criteria in cardiac arrest studies, because the decision to randomize must be made instantaneously, without much clinical data.

The study suggests a possible clinical benefit of antiarrhythmic administration, with respect to fewer shocks and less need for in-hospital antiarrhythmics and CPR, which warrants further evaluation. Additionally, the accompanying editorial by Dr. Joglar and Dr. Page speculates that the antiarrhythmics may have been administered too late to ameliorate the significant metabolic consequences of prolonged arrest; it is unknown whether earlier administration would confer additional benefit. The study raises several interesting hypotheses, which need to be explored to further expand the evidence available to support or refute current resuscitation practices.

NEJM Deputy Editor Dr. John Jarcho notes that there has been very little previous research directly comparing antiarrhythmic drug strategies in out-of-hospital cardiac arrest. “We believe that these data are very likely to influence both guidelines and clinical practice in the management of this patient population. We also consider it important to encourage the effort to conduct clinical trials in this difficult setting.”

Violence against Health Care Workers

Posted by Carla Rothaus • April 28th, 2016

Health care workplace violence is an underreported, ubiquitous, and persistent problem that has been tolerated and largely ignored. According to the Joint Commission, a major accrediting body for health care organizations, institutions that were once considered to be safe havens are now confronting “steadily increasing rates of crime, including violent crimes such as assault, rape, and homicide.” Although metal detectors may theoretically mitigate violence in the health care workplace, there is no concrete evidence to support this expectation.

Violence against health care professionals in the workplace is underreported and understudied. Additional data are needed to understand steps that might be taken to reduce the risk. A new Review Article summarizes.

Table 1. Types of Workplace Violence.

Clinical Pearl

• Which one of the four categories of workplace violence is most common in the health care setting?

Experts have classified workplace violence into four types on the basis of the relationship between the perpetrator and the workplace itself (Table 1). Most common in the health care setting is a situation in which the perpetrator has a legitimate relationship with the business and becomes violent while being served by it (categorized as a type II assault). Health care workers are the targets of more such assaults each year than workers in any other U.S. industry. These episodes consist of verbal or physical assaults perpetrated by patients and visitors against providers.

Clinical Pearl

• Are there evidence-based approaches to preventing health care workplace violence?

Most studies on workplace violence have been designed to quantify the problem, and few have described research on experimental methods to prevent such violence. The most recent critical review of the literature, published in 2000, identified 137 studies that described strategies to reduce workplace violence. Of these studies, 41 suggested specific interventions, but none provided empirical data showing whether or how such strategies worked. Only 9 studies, all of which were health care–related, reported data on interventions. Even so, the review concluded that each of these 9 studies used weak methods, had flawed experimental designs, and produced inconclusive results. A review of the nursing literature reached similar conclusions: all the studies showed that after training, nurses had increased confidence and knowledge about risk factors, but no change was seen in the incidence of violence perpetrated by patients. There is a lack of high-quality research, and existing training does not appear to reduce rates of workplace violence.

Morning Report Questions

Q: Are certain health care workers or certain health care settings particularly vulnerable to workplace violence? 

A: Certain hospital environments are more prone to type II workplace violence than are other settings. The emergency department and psychiatric wards are the most violent, and well-studied, hospital environments. Since rates of assault correlate with patient-contact time, nurses and nursing aides are victimized at the highest rates. Emergency department nurses reported the highest rates, with 100% reporting verbal assault and 82.1% reporting physical assault during the previous year. Physicians are also frequent targets of type II workplace violence; approximately one quarter of emergency medicine physicians reported being targets of physical assault in the previous year. All employees who work in inpatient psychiatric environments are at higher risk for targeted violence than are other health care workers. Rates of workplace violence against physicians in psychiatric settings may be even higher than those in emergency department settings, with 40% of psychiatrists reporting physical assault in one study.

Q: What is known about perpetrators of health care workplace violence, and what are some potential solutions? 

A: The characteristic that is most common among perpetrators of workplace violence is altered mental status associated with dementia, delirium, substance intoxication, or decompensated mental illness. In some studies, researchers have postulated that patients with a previous history of violence are at increased risk for committing violence toward staff members; however, this association remains unproven. Among strategies for individual workers that have been proposed to reduce workplace violence are training in aggression de-escalation techniques and training in self-defense. Recommendations for target hardening of infrastructure include the installation of fences, security cameras, and metal detectors and the hiring of guards. Perhaps most important are recommendations that health care organizations revise their policies in order to improve staffing levels during busy periods to reduce crowding and wait times, decrease worker turnover, and provide adequate security and mental health personnel on site. The importance of recognizing verbal assault as a form of workplace violence cannot be overlooked, since verbal assault has been shown to be a risk factor for battery. The “broken windows” principle, a criminal-justice theory that apathy toward low-level crimes creates a neighborhood conducive to more serious crime, also applies to workplace violence. When verbal abuse and low-level battery are tolerated, more serious forms of violence are invited.

Aphasia during a Transatlantic Flight

Posted by Carla Rothaus • April 28th, 2016

In more than 60% of patients with ischemic stroke, the cause is readily established and is most often atherosclerosis or heart disease. However, in a young patient with no traditional vascular risk factors and a large clot burden, the search can be broadened to include, at minimum, thrombophilia, arterial dissection, paradoxical embolism, and unusual arteriopathies.

A 49-year-old woman was brought to the ED 2 hours after the onset of hemiplegia and aphasia during a transatlantic flight. Examination revealed evidence of acute ischemic stroke. Additional diagnostic studies were performed. A new Case Records of the Massachusetts General Hospital summarizes.

Clinical Pearl

• What is the Cincinnati Prehospital Stroke Scale?

The Cincinnati Prehospital Stroke Scale, which screens for three symptoms (facial droop, arm drift, and speech disturbance) that may occur with stroke, is based on the larger National Institutes of Health Stroke Scale (NIHSS). One point is designated for each abnormal finding, and a score higher than 0 triggers a stroke alert. The Cincinnati Prehospital Stroke Scale has shown excellent correlation between prehospital providers’ suspicion of stroke and physicians’ confirmation of stroke.
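The scale is simple enough to encode directly. Here is a minimal sketch of the scoring rule described above; the function name is mine.

```python
def cincinnati_prehospital_stroke_scale(facial_droop, arm_drift, abnormal_speech):
    """One point per abnormal finding; a score above 0 triggers a stroke alert."""
    score = sum([facial_droop, arm_drift, abnormal_speech])
    return score, score > 0

score, alert = cincinnati_prehospital_stroke_scale(facial_droop=False,
                                                   arm_drift=True,
                                                   abnormal_speech=True)
print(f"score={score}, stroke alert={alert}")  # score=2, stroke alert=True
```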

Clinical Pearl

• What is the May–Thurner syndrome?

The May–Thurner syndrome is a common anatomical anomaly in which the right common iliac artery, a muscular structure, extrinsically compresses the thin-walled left common iliac vein. This anatomical feature of the May–Thurner syndrome was identified in up to 25% of an asymptomatic population; none of these patients had unilateral edema or a history of deep venous thrombosis of the legs. Among patients with cryptogenic stroke and patent foramen ovale, the prevalence of the May–Thurner syndrome is 6.3%.

Figure 2. Additional Imaging Studies.

Morning Report Questions

Q: What do guidelines indicate for the prevention of recurrent stroke in patients with an ischemic stroke and patent foramen ovale?

A: For patients with ischemic stroke and patent foramen ovale and without definitive evidence of deep venous thrombosis, guidelines indicate that the current data are insufficient to establish whether anticoagulation is equivalent or superior to aspirin for the prevention of recurrent stroke, and available data do not support a benefit of patent foramen ovale closure. In a recent meta-analysis including a total of 4251 patients, those who had stroke with patent foramen ovale did not have a higher risk of either recurrent stroke or the combination of stroke and transient ischemic attack than did those who had stroke without patent foramen ovale. However, for patients who have both a patent foramen ovale and a presumed venous source of embolism, anticoagulation is generally indicated. When anticoagulation is contraindicated, because of an increased risk of intracranial or systemic hemorrhage, then placement of an inferior vena cava filter is a reasonable option.

Q: What is the risk of venous thromboembolism in patients with cancer, and what may influence the risk associated with renal cancer?

A: The rate of venous thromboembolism among patients with cancer varies according to tumor factors (including tumor type, the presence or absence of metastasis, and the anatomical features of the tumor) and the patients’ baseline risk factors and previous and current therapies. Approximately 20% of first venous thromboembolic events occur in patients with cancer; the risk of venous thromboembolism among patients with cancer is 4 to 7 times as high as the risk among those without cancer. Patients with renal-cell carcinoma, particularly those with metastases, often present with elevated levels of fibrinogen, d-dimer, and fibrin monomers. Immunofluorescence staining with the use of antibodies specific for fibrin and factors VII and X has shown positivity around intravascular and extravascular nodules of renal-cell carcinoma tumor cells, thus raising the possibility that tumor cells activate coagulation locally.

Transcatheter or Surgical Aortic-Valve Replacement in Intermediate-Risk Patients

Posted by Andrea Merrill • April 27th, 2016

I’ve sometimes wondered if I’m embarking on the field of surgery in the wrong era. As a medical student, and now as a resident, the big open operations have always seemed the most exhilarating and rewarding. It always seems more thrilling to have your hands deep in a patient’s abdomen or chest than lightly grasping and maneuvering small laparoscopic instruments or delicate guidewires. However, as technology evolves, so must medicine and surgery, and every specialty seems to be jumping on the bandwagon, including cardiac surgery. It always amazed me that one of the “easiest” and shortest operations in cardiac surgery, an open aortic-valve replacement, still required placing a patient on cardiopulmonary bypass through a large sternotomy. More recently, though, transcatheter aortic-valve replacement (TAVR) has emerged as a less invasive alternative to open aortic-valve surgery for patients with severe aortic stenosis. Clinical trials initially tested TAVR in patients at high risk for open cardiac surgery. However, many patients who are referred for aortic-valve replacement are at low to intermediate risk. With the goal of expanding the criteria for TAVR candidates, Leon et al. published the results of the PARTNER 2 trial in this week’s NEJM, comparing surgical aortic-valve replacement with TAVR in intermediate-risk patients.

A total of 2032 patients with severe aortic stenosis and intermediate surgical risk (according to Society of Thoracic Surgeons criteria) were enrolled in the trial at 57 centers in the United States and Canada. Before randomization, patients were evaluated to determine whether they were eligible for transfemoral access (if the femoral artery was accessible) or transthoracic access (i.e., transapical or transaortic, through an incision in the chest, if the femoral artery was not readily accessible) for the TAVR procedure, and were stratified into a transfemoral or transthoracic cohort accordingly. Each cohort was then randomized in a 1:1 fashion to either TAVR or surgical aortic-valve replacement.

The primary endpoint was a composite of death from any cause or disabling stroke at 2 years, with the hypothesis that TAVR would be noninferior to surgery at a prespecified noninferiority margin of 1.2 for the upper bound of the confidence interval for the hazard ratio. Secondary endpoints included aortic-valve areas, frequency of acute kidney injury, severe bleeding, new-onset atrial fibrillation, and paravalvular aortic regurgitation.

Among the 2032 randomized patients, 1011 were assigned to TAVR and 1021 to surgery. In the TAVR group, 236 patients (23%) had transthoracic access, while the rest were accessed transfemorally. There was no significant difference between the two treatment arms in the primary endpoint of death from any cause or disabling stroke at 2 years (19.3% with TAVR vs. 21.1% with surgery; hazard ratio for TAVR, 0.89; 95% confidence interval [CI], 0.73 to 1.09; P=0.25). The risk ratio for the primary endpoint at 2 years was 0.92 (95% CI, 0.77 to 1.09), which met the prespecified noninferiority criterion. The two groups also did not differ in the individual components of the primary endpoint. In an underpowered subgroup analysis, the transfemoral-TAVR cohort had lower rates of death from any cause or disabling stroke than surgery, whereas there was no difference in the transthoracic cohort.
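The noninferiority test reduces to comparing the upper bound of the confidence interval with the prespecified margin. Here is a minimal sketch using the reported numbers; the function name is mine.

```python
def meets_noninferiority(ci_upper_bound, margin=1.2):
    """TAVR is declared noninferior when the upper bound of the confidence
    interval for the hazard (or risk) ratio stays below the margin of 1.2."""
    return ci_upper_bound < margin

# Reported 2-year results: HR 0.89 (95% CI, 0.73-1.09); RR 0.92 (95% CI, 0.77-1.09)
print(meets_noninferiority(1.09))  # True -> noninferiority criterion met
```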

At 30 days, major vascular complications were more frequent in TAVR patients, whereas life-threatening bleeding, acute kidney injury, and new-onset atrial fibrillation were more frequent in surgery patients.

Improvement in aortic-valve areas and gradients was significantly greater in the TAVR group at both 30 days and 2 years.  However, the frequency and severity of paravalvular aortic regurgitation was greater after TAVR than surgery.

With these encouraging results, Dr. Neil Moat asks in his accompanying editorial, “Will TAVR become the predominant method for treating severe aortic stenosis?” He seems to answer in the affirmative but reminds the reader of a few caveats. He notes that although these patients are technically at “intermediate” risk for complications from open surgery according to Society of Thoracic Surgeons criteria, they still fall within the highest quartile of risk, which is important to remember when assessing patients with severe aortic stenosis. Another important limitation is that small stented valves were used in the surgery group, a practice that has been shown to lead to poorer outcomes, leaving room for improvement in the future. Additionally, cost-effectiveness was not addressed in this trial; although TAVR has been shown to be cost-effective in previous trials involving high-risk groups, the same may not hold true in lower-risk patients. Finally, he concludes that, “As with many trials involving new technologies, the findings have to be interpreted with the understanding that the technology (in both groups) has advanced since the design of the trial.” Both TAVR and surgical technologies have advanced since the start of this trial, which makes it harder to generalize the results to current patients.

Alas, this is likely not the last we will hear of the debate: the Treatment of Severe, Symptomatic Aortic Stenosis in Intermediate Risk Subjects Who Need Aortic Valve Replacement (SURTAVI) trial (ClinicalTrials.gov number, NCT01586910) has just finished recruitment, and further studies involving low-risk patients are likely to emerge.

Don’t miss the NEJM Quick Take video summary on this study.