Abdominal Aortic Aneurysms

Posted by Carla Rothaus • November 28th, 2014

Rupture of an abdominal aortic aneurysm is associated with a high risk of death. Endovascular repair results in lower perioperative morbidity and mortality than open repair, but the two methods have similar long-term mortality. The latest Clinical Practice review on this topic comes from Dr. K. Craig Kent at the University of Wisconsin School of Medicine and Public Health.

Rupture of an abdominal aortic aneurysm is often lethal; overall mortality is 85 to 90%, in large part because many patients die before reaching the hospital. Even of those persons who reach the hospital, only 50 to 70% survive. Thus, the goal is to identify and treat aneurysms before they rupture.

Clinical Pearls

What are the risk factors for abdominal aortic aneurysm?

Nonmodifiable risk factors for abdominal aortic aneurysm include older age, male sex, and a family history of the disorder. Starting at 50 years of age for men and 60 to 70 years of age for women, the incidence of aneurysms increases significantly with each decade. The risk of abdominal aortic aneurysm is approximately four times as high among men as among women and four times as high among people with a family history of the disorder as among those without a family history. Smoking is the strongest modifiable risk factor. Other, less prominent risk factors for abdominal aortic aneurysm include hypertension, an elevated cholesterol level, obesity, and preexisting atherosclerotic occlusive disease.

What are the recommendations for screening for abdominal aortic aneurysms?

Ultrasonography is the primary method used for screening and is highly sensitive (95%) and specific (100%). CT scanning and magnetic resonance imaging (MRI) are expensive, incur risks (radiation exposure from CT and risks associated with intravenous contrast material), and should not be used for screening but rather reserved for preinterventional planning. The current recommendations of the U.S. Preventive Services Task Force are a one-time screening in men 65 to 75 years of age who have ever smoked (grade B recommendation) and selective screening in men 65 to 75 years of age who have never smoked (grade C recommendation). Medicare also covers screening for patients with a family history of abdominal aortic aneurysm. Data from nonrandomized studies suggest that there may be subgroups of women who benefit from screening; however, this finding has not been prospectively validated.
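Those test characteristics translate directly into expected screening yield. Here is a minimal back-of-the-envelope sketch in Python using the sensitivity and specificity quoted above; the 5% prevalence and the cohort size are hypothetical values chosen for illustration, not figures from the review.

```python
# Expected screening yield from the sensitivity (95%) and specificity (100%)
# quoted above. The 5% prevalence and the 10,000-person cohort are
# hypothetical values for illustration, not figures from the review.

def screening_yield(sensitivity, specificity, prevalence, n_screened):
    """Expected true positives, false positives, and missed cases."""
    diseased = prevalence * n_screened
    true_pos = sensitivity * diseased
    false_neg = diseased - true_pos
    false_pos = (1 - specificity) * (n_screened - diseased)
    return true_pos, false_pos, false_neg

tp, fp, fn = screening_yield(0.95, 1.00, 0.05, 10_000)
print(f"Per 10,000 screened: {tp:.0f} detected, {fp:.0f} false positives, {fn:.0f} missed")
# -> Per 10,000 screened: 475 detected, 0 false positives, 25 missed
```

With perfect specificity, a positive ultrasound study is essentially diagnostic; the cost of screening, under these assumptions, is the small fraction of aneurysms that go undetected.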

Morning Report Questions

Q: What are the indications for surgical repair of an abdominal aortic aneurysm?

A: Under most circumstances, aneurysms should not be prophylactically repaired unless they are at least 5.5 cm in diameter. Nevertheless, there are occasions when repair of small aneurysms should be considered. Symptomatic aneurysms should be immediately repaired. Pain in the abdomen, back, or flank is the most common symptom, but aneurysms can produce many other symptoms or signs (e.g., hematuria or gastrointestinal hemorrhage). The rate of growth is another important predictor of rupture; aneurysms that expand by more than 0.5 cm in diameter over a period of 6 months should be considered for repair regardless of the absolute size. The observations that aneurysms rupture at a smaller size in women than in men and that women have higher rupture-related mortality than men have led some experts to recommend a diameter of 5.0 cm as the threshold for elective intervention in women. Other factors that are associated with an increased risk of rupture and may prompt repair at a threshold of less than 5.5 cm include the presence of a saccular aneurysm (most aneurysms are fusiform) and a family history of abdominal aortic aneurysm.

Table 1. Annual Risk of Rupture of Abdominal Aortic Aneurysms.

Q: What surgical techniques are available for repair of an abdominal aortic aneurysm?

A: Two approaches to repairing aneurysms are currently available: open repair (performed since the 1950s) and endovascular repair (first performed in 1987). Endovascular repair, a less invasive approach, involves the intraluminal introduction of a covered stent through the femoral and iliac arteries; the stent functions as a sleeve that passes through the aneurysm sac, anchoring in the normal aorta above the aneurysm and in the iliac arteries below the aneurysm. To be eligible for endovascular repair, a patient must have appropriate anatomy, including iliac vessels that are of sufficient size to allow introduction of the graft and an aortic neck above the aneurysm that allows the proximal graft to be anchored without covering the renal arteries. Thus, with existing techniques, there are some infrarenal aneurysms that are not amenable to endovascular repair. The use of endovascular repair has grown steadily in the United States, and this procedure is currently performed in more than 75% of patients undergoing surgical intervention for abdominal aortic aneurysm, with a portion of the remaining patients having unsuitable anatomy. Endovascular repair confers an initial survival benefit; however, this benefit disappears over a period of 1 to 3 years. Endovascular repair and open repair are associated with similar mortality over the long term (8 to 10 years).

Figure 1. Techniques Available for Repair of Abdominal Aortic Aneurysms.

Figure 2. Annual Proportion of Elective Endovascular and Open Repairs for Abdominal Aortic Aneurysms in the United States, 2000-2012.

Atenolol versus Losartan in Children and Young Adults with Marfan’s Syndrome

Posted by Daniela Lamas • November 25th, 2014

When the French pediatrician Antoine Bernard-Jean Marfan first described the syndrome that would bear his name in 1896, doctors knew little about the management and prognosis of the connective tissue disorder.

In the century since that first description, what was once a fatal syndrome due to the risk of aortic dissection is now a condition that can be managed with proper care and follow-up. Two decades ago, a trial demonstrated that patients who were given beta blockade with propranolol had a lower rate of aortic enlargement than those given placebo, ushering in the beta blocker as a key component of therapy for those with Marfan’s.

More recently, basic science research into the pathogenesis of the disease has led to a new management strategy using angiotensin receptor blockade in lieu of beta blockade. Now, a study in this week’s issue of NEJM adds to this body of research, suggesting that one treatment modality might be no better than the other.

The science behind this trial dates back to the early 1990s, when researchers discovered that mutations in the gene encoding a protein called fibrillin-1 were responsible for Marfan's syndrome. Fibrillin-1 is a structural component of elastic fibers in connective tissue and also influences cell signaling by binding transforming growth factor β (TGF-β). Animal studies have provided evidence that excessive TGF-β signaling produces the clinical findings of Marfan's, including aortic dilatation. This finding led to the question: could angiotensin-receptor blockers (ARBs), which inhibit TGF-β signaling, reduce aortic dilatation in Marfan's?

This hypothesis first led to mouse studies, which did in fact show a reduced rate of aortic enlargement in mice with Marfan's syndrome that received losartan, an ARB, as compared with those given beta blockers or placebo. A small series of patients and then two clinical studies comparing a beta blocker-ARB combination with beta blockade alone came to similar conclusions.

With this background, R.V. Lacro and colleagues set out to determine whether losartan is more effective than beta blockade in slowing aortic-root enlargement in Marfan's syndrome. The investigators enrolled just over 600 patients with Marfan's, ranging in age from 6 months to 25 years. The study participants were randomly assigned to receive either atenolol or losartan and were followed over a 3-year period for the primary outcome, the rate of aortic-root enlargement. Investigators also monitored rates of aortic-root surgery, aortic dissection, and adverse events.

Their findings were surprising, given the strong rationale for treatment with an ARB in this population. In all of the endpoints studied, the investigators found that losartan performed no better than atenolol. There was no significant difference between groups in the rate of aortic-root enlargement, nor in rates of surgery, dissection or death.

What to take from this study? In an accompanying editorial, Juan Bowen and Heidi Connelly discuss certain limitations of the study design that might have masked a true benefit of losartan for Marfan's syndrome, and they urge physicians, instead of casting aside losartan as a therapeutic option, to "wait and see." First, the editorialists note, the study did not have a placebo group and thus cannot address the question of whether beta blockade and angiotensin-receptor blockade might be equally effective. The study also did not include a group assigned to combined beta blockade and angiotensin-receptor blockade, which could have demonstrated synergistic effects. Additionally, the losartan dose might not have been sufficient to meet its goal of suppressing TGF-β signaling. Finally, the enrolled patients had "advanced aortic disease," as gauged by their aortic-root dilatation scores at young ages. Perhaps, they note, blocking TGF-β could work more effectively at an earlier disease stage.

Despite the study’s limitations, Bowen and Connelly write that they expect these results to stimulate debate and further research into how best to treat patients with Marfan’s. They conclude, “Each step forward gives hope to those living with Marfan’s syndrome as they strive to live healthier, longer, and more productive lives.”

Testicular Cancer

Posted by Carla Rothaus • November 21st, 2014

The treatment of testicular cancer is a success story in oncology. With available methods, 95% of men with this condition can be cured. Emphasis is shifting toward maintaining high cure rates and reducing or effectively managing late effects of treatment.  A new review article on this topic comes from Drs. Nasser Hanna and Lawrence Einhorn at the Indiana University School of Medicine.

Fifty years ago, a diagnosis of metastatic testicular cancer meant a 90% chance of death within 1 year. Today, a cure is expected in 95% of all patients who have received a diagnosis of testicular cancer and in 80% of patients with metastatic disease.

Clinical Pearls

Describe the epidemiology and clinical presentation of testicular cancer.

In the United States, the incidence of testicular cancer, which is highest among whites and lowest among blacks, has increased steadily over the past 20 years. In some parts of northern Europe, the incidence has doubled, and in Denmark and Norway, 1% of men will receive a diagnosis of testicular cancer during their lifetime. Genetic and environmental factors appear to play a role in this increase in incidence. The risk of testicular cancer is 8 to 10 times as high among brothers of a person with testicular cancer, and 4 to 6 times as high among sons, as among men without an affected family member. Genetic disorders, including Down's syndrome and the testicular dysgenesis syndrome, are also associated with increased risks of testicular cancer. Cryptorchidism, which occurs in 2 to 5% of boys born at term, is the best-characterized risk factor for testicular cancer. The timing of orchiopexy influences the future risk of testicular cancer. However, 90% of persons with testicular cancer do not have a history of cryptorchidism. Recent investigations have shed light on the malignant transformation of normal gonocytes into germ-cell tumors. Germ-cell tumors appear to develop as a result of a tumorigenic event in utero that leads to a precursor lesion classified as intratubular germ-cell neoplasia. Most patients with testicular cancer receive a diagnosis when the disease is in stage I and present with a testicular mass. Less frequently, patients report back pain (secondary to enlarged retroperitoneal lymph nodes) or symptoms of metastatic disease, including cough, hemoptysis, pain, and headaches.

Table 1. Staging and Risk Stratification of Germ-Cell Tumors.

What is the treatment for Stage I and Stage II seminoma?

Most patients with clinical stage I seminoma are cured with orchiectomy. Adjuvant radiation therapy was standard treatment for many years and was instrumental in achieving cures before the advent of effective chemotherapy. Over the past 20 years, the dose and field of radiation have been considerably reduced, and in many instances radiotherapy has been eliminated altogether. Most patients today are treated with active surveillance, although some still receive radiation therapy consisting of 20 Gy to the ipsilateral retroperitoneal lymph nodes (sometimes including the inguinal lymph nodes, depending on whether the patient had undergone prior surgery involving the inguinal, pelvic, or scrotal areas) or adjuvant carboplatin therapy. More relapses are associated with surveillance than with radiotherapy or chemotherapy (20% vs. 4%), but long-term survival is nearly 100%, irrespective of the initial option chosen. For some patients with low-volume stage II seminoma (disease confined to the retroperitoneal lymph nodes, with the lymph nodes <3 cm in diameter), 30 to 36 Gy of radiation to the paraaortic and ipsilateral iliac lymph nodes remains a standard treatment. In other patients, the preferred treatment is chemotherapy with bleomycin, etoposide, and cisplatin (also known as BEP) for three cycles or etoposide and cisplatin for four cycles. Chemotherapy is preferred for patients with bulkier disease, since the rate of relapse is higher with radiotherapy alone. Cures are achieved in 98% of patients.

Table 2. Treatment Options for Stage I Seminoma.

Morning Report Questions

Q: How are the nonseminiferous germ-cell tumors managed?

A: Most patients with a nonseminomatous germ-cell tumor present with clinical stage I disease. Treatment options after orchiectomy include active surveillance, nerve-sparing retroperitoneal lymph-node dissection, and adjuvant BEP for one or two cycles; each of these options is associated with a long-term cure rate of 99%. Patients are characterized as high risk (relapse rate of 50% with surveillance) or low risk (relapse rate of 15% with surveillance) according to the presence or absence of lymphovascular invasion. Patients with a low-volume stage II nonseminomatous germ-cell tumor (disease confined to the retroperitoneal lymph nodes, with the lymph nodes <3 cm in diameter) and normal beta-hCG [beta human chorionic gonadotropin] and AFP [alpha-fetoprotein] levels after orchiectomy are generally treated with retroperitoneal lymph-node dissection, although care must be individualized. Patients with higher-volume stage II disease or increasing levels of markers should receive chemotherapy (BEP for three cycles or etoposide and cisplatin for four cycles). Cures are achieved in 95 to 99% of patients.

Table 3. Treatment Options after Orchiectomy for Stage I Nonseminomatous Germ-Cell Tumor.

Q: What are long-term risks of treatment for a patient with testicular cancer?

A: Since most patients will survive after a diagnosis of testicular cancer, clinicians must be vigilant to reduce the long-term risks of therapy and limit unnecessary morbidity and early mortality. Therapeutic radiation has been recognized as a risk factor for secondary cancers. However, studies also implicate chemotherapy in the risk of cancers of the kidney, thyroid, soft tissue, bladder, stomach, and pancreas, as well as in the risk of lymphoma and leukemia. Survivors of testicular cancer are also at risk for late relapse of disease (defined as relapse >2 years after remission), as well as for the metabolic syndrome; cardiovascular disease; infertility; neurotoxic, nephrotoxic, and pulmonary toxic effects; Raynaud's phenomenon; psychosocial disorders; and hypogonadism, which may confer a predisposition to sexual dysfunction, fatigue, depression, and osteoporosis. Retrograde ejaculation may develop postoperatively in men who have undergone retroperitoneal lymph-node dissection. The most comprehensive study to date is under way to understand the genetic susceptibility to the long-term toxic effects of platinum-based chemotherapy in survivors of testicular cancer.

Glycemic Control in Type 1 Diabetes

Posted by Carla Rothaus • November 21st, 2014

In a new study, patients with type 1 diabetes and a glycated hemoglobin level of 6.9% or lower (≤52 mmol per mole) were found to have a risk of death from any cause or from cardiovascular causes that was twice as high as that for matched controls.

The excess risks of death from any cause and from cardiovascular causes in patients with diabetes who have varying degrees of glycemic control, as compared with the risks in the general population, have not been evaluated. This study undertook such an evaluation using the Swedish National Diabetes Register, which includes information on glycemic control for most adults with type 1 diabetes in Sweden.

Clinical Pearls

Why is glycemic control important for patients with type 1 diabetes?

Type 1 diabetes is associated with a substantially increased risk of premature death as compared with that in the general population. Among persons with diabetes who are younger than 30 years of age, excess mortality is largely explained by acute complications of diabetes, including diabetic ketoacidosis and hypoglycemia; cardiovascular disease is the main cause of death later in life. Improving glycemic control in patients with type 1 diabetes substantially reduces their risk of microvascular complications and cardiovascular disease. Accordingly, diabetes treatment guidelines emphasize good glycemic control, which is indicated by the glycated hemoglobin level, a measure of the mean glycemic level during the preceding 2 to 3 months. A target level of less than 7.0% (53 mmol per mole) is generally recommended and is considered to be associated with a lower risk of diabetic complications than higher levels. Even so, in two national registries, only 13 to 15% of patients with type 1 diabetes met this target, whereas more than 20% had very poor glycemic control (a glycated hemoglobin level >8.8%, or ≥73 mmol per mole).
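The paired units quoted throughout this summary are related by the standard NGSP-IFCC master equation, NGSP (%) = 0.0915 × IFCC (mmol per mole) + 2.15. The short Python sketch below, added here purely for convenience, reproduces the values given above.

```python
# Reproducing the paired HbA1c units quoted in this summary with the standard
# NGSP-IFCC master equation: NGSP (%) = 0.0915 * IFCC (mmol/mol) + 2.15.

def ngsp_to_ifcc(percent):
    """Convert glycated hemoglobin from NGSP (%) to IFCC (mmol per mole)."""
    return (percent - 2.15) / 0.0915

for pct in (6.9, 7.0, 8.8, 9.7):
    print(f"{pct}% -> {ngsp_to_ifcc(pct):.0f} mmol per mole")
# 6.9% -> 52, 7.0% -> 53, 8.8% -> 73, 9.7% -> 83
```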

What is the risk of all-cause and cardiovascular mortality at different levels of HbA1c?

This nationwide Swedish study of 33,915 patients with type 1 diabetes and 169,249 controls matched for age and sex shows that for patients with type 1 diabetes who had on-target glycemic control, the risk of death from any cause and the risk of death from cardiovascular causes were still more than twice the risks in the general population. Analyses of outcomes within the group of patients with diabetes showed that the risk of death from any cause and the risk of death from cardiovascular causes increased incrementally with higher updated mean glycated hemoglobin levels. The hazard ratio for death from any cause among patients with diabetes was 2.36 (95% CI, 1.97 to 2.83) at an updated mean glycated hemoglobin level of 6.9% or lower and increased to 8.51 (95% CI, 7.24 to 10.01) for a level of 9.7% or higher (≥83 mmol per mole). For death from cardiovascular causes, the corresponding hazard ratios ranged from 2.92 (95% CI, 2.07 to 4.13) to 10.46 (95% CI, 7.62 to 14.37).

Table 2. Mortality among Patients with Type 1 Diabetes as Compared with Controls According to Baseline Level of Glycated Hemoglobin.

Table 3. Adjusted Hazard Ratios for Death from Any Cause and Death from Cardiovascular Causes among Patients with Type 1 Diabetes versus Controls, According to Time-Updated Mean Glycated Hemoglobin Level and Renal Disease Status, Model 3.

Morning Report Questions

Q: How does the risk of all-cause or cardiovascular mortality differ according to sex or change over time?

A: As compared with men, women with type 1 diabetes had a significantly greater excess risk of death from cardiovascular disease but not of death from any cause. The excess risk of death associated with diabetes did not diminish over time, with increases during the last 7 calendar years of the study (2005 through 2011) that were similar to those during the first 7 years (1998 through 2004).

Figure 1. Hazard Ratios for Death from Any Cause and for Death from Cardiovascular Causes According to Age and Sex among Patients with Type 1 Diabetes versus Controls.

Q: Is there an explanation for the increased risk of all-cause and cardiovascular mortality among patients with type 1 diabetes who have a glycated hemoglobin level of 6.9% or lower?

A: Unlike patients with type 2 diabetes, those with type 1 diabetes generally do not have excess rates of obesity, hypertension, or hypercholesterolemia; thus, the increased risks of death from any cause and of death from cardiovascular causes among patients with type 1 diabetes who have good glycemic control are unexplained. In this study, beginning with the year 2005, patients with type 1 diabetes were four to five times as likely as controls to receive a prescription for statins or renin-angiotensin-aldosterone system inhibitors. Thus, the omission of currently recommended cardioprotective treatment cannot explain the remaining excess risk of death; determination of the underlying reasons will require further research.

Ask the Authors: Ebola in Well-Resourced Settings

Posted by Karen Buckley • November 20th, 2014

The physicians who treated patients with Ebola in Atlanta and Hamburg are now answering your questions on the NEJM Group Open Forum.

Two recent NEJM Brief Reports provide detailed clinical information about three patients with Ebola virus disease who were transferred from West Africa to the United States or Germany in the midst of their illness. While most cases occur in areas where tragically few resources are available to care for affected patients, these reports afford us the opportunity to observe the course of illness in a well-resourced health care setting. The cases highlight the importance of intensive fluid management during the course of the illness. Authors of both reports are answering questions about what this means for treating patients as the epidemic continues and more cases present to well-resourced settings.

The NEJM Group Open Forum is publicly available for all to view, but in order to comment you must register with Medstro and be a physician. This discussion is open until Wednesday, November 26.

Join the discussion now!

Mortality in Type 1 Diabetes

Posted by Joshua Allen-Dicker • November 19th, 2014

There are moments during every physician's day when she or he gives medical advice based on well-established evidence – "The data show that starting medication A for this disease will reduce the risk of death by 20%." There are also moments when she or he may give advice just because it seems like the right thing to do, though evidence may be lacking – "It makes sense that using medication B might help in the treatment of this disease." Sometimes advice based on common sense or medical tradition turns out to be misguided (e.g., bed rest for back pain, niacin for atherosclerotic vascular disease). And sometimes advice that makes sense is spot-on correct, as shown by a paper published in this week's issue of NEJM, "Glycemic Control and Excess Mortality in Type 1 Diabetes Mellitus," by Lind and colleagues.

According to a recent Centers for Disease Control and Prevention report, each year over 18,000 people in the United States are diagnosed with type 1 diabetes (T1D). People with T1D are at increased risk for both microvascular complications (e.g., neuropathy, nephropathy) and macrovascular complications (e.g., coronary disease, stroke), as well as morbidity associated with these conditions. As above, it might make sense that better glycemic control, as measured by a lower hemoglobin A1c level, would be associated with improved outcomes for these disease states. However, unlike prior research that has demonstrated a clear association between lower HbA1c levels and improved outcomes from microvascular complications, the relation between mortality and glycemic control has remained less well defined.

Lind and colleagues describe a prospective cohort study of patients with T1D who were enrolled in Sweden's National Diabetes Registry. For each person with T1D, the study also included 5 controls from Sweden's general population, matched for age and sex. Participants were followed from enrollment until death or study completion. Outcome data collected for all participants included date of death (if it occurred) and relevant associated diagnoses. Data collected specifically for participants with T1D included albuminuria status, kidney function, and updated mean HbA1c level. Cox regression models were used to compare outcomes between persons with T1D and the matched controls.
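As a schematic illustration of that last step (not the authors' code, and with hypothetical toy data, since the registry data are not public), a Cox proportional-hazards fit of mortality against T1D status might look like the following sketch, using the Python lifelines package.

```python
# Schematic sketch of the kind of Cox regression described above, using the
# `lifelines` package. Not the authors' code; the toy data are hypothetical
# stand-ins for the (non-public) Swedish registry variables.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "years": [5.2, 3.1, 8.4, 7.7, 2.0, 6.3],  # follow-up until death or censoring
    "died":  [0,   1,   0,   0,   1,   0],    # 1 = death observed
    "t1d":   [1,   1,   1,   0,   0,   0],    # 1 = type 1 diabetes, 0 = matched control
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="died")
cph.print_summary()  # exp(coef) for `t1d` estimates the hazard ratio vs. controls
```

In the published analysis, hazard ratios were additionally reported according to time-updated mean glycated hemoglobin level and renal disease status, across successive adjustment models.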

Between 1998 and the end of 2011, 33,915 patients with T1D and 169,249 controls were enrolled in the study. In their subsequent analyses of these populations, Lind et al. found that persons with poor glycemic control (HbA1c ≥9.7%) had 8 to 10 times the risk of death as compared with the control population, and significantly higher mortality than persons with T1D who had appropriate glycemic control.

At first glance, the results of Lind et al. are no surprise – better glycemic control can improve outcomes in T1D. However, Lind et al. also provide us with humbling data. First, while the risk of death in T1D appears to be modifiable and dependent on adequate glycemic control, our progress in improving T1D outcomes may have stalled over the last two decades. A comparison of the two study periods (1998-2004 vs. 2005-2011) showed no significant improvement in the excess mortality risk for the T1D population. Additionally, the authors found that even when the glycemic control of persons with T1D was appropriate (updated mean HbA1c ≤6.9%), they still had twice the risk of death as compared with the control population.

As clinicians, we may be left wondering: what can we do to help improve outcomes in T1D? We know there has been a historical gap between guidelines and the actual quality of care patients receive – only 13-15% of persons with T1D reach their HbA1c goal. Similar data exist for other diabetes quality metrics. Innovation in patient engagement, quality-improvement projects around guideline adherence, and identification of additional outcome metrics may be appropriate starting places for our collective efforts.

On top of this, we should ask: if appropriately controlled T1D still carries an increased risk of death, what are we missing? Continued research on insulin-replacement strategies (e.g., the bionic pancreas) and on mitigating the end-organ effects of diabetes is needed.

After reading Lind et al., we may feel inclined to congratulate ourselves – our clinical intuition was correct after all. However, by strengthening the known association between glycemic control and mortality, Lind and colleagues have sounded an important warning: clinicians and researchers still have much progress to make in improving our understanding of T1D and the quality of care we provide each day.

Dual Antiplatelet Therapy after Drug-eluting Stents

Posted by Chana Sacks • November 16th, 2014

“Well, doc, it’s been a year!  Now what?”

You first met your patient 12 months ago when he presented to the emergency department having a heart attack.  He was rushed to the cardiac catheterization lab, where a drug-eluting stent was placed to open the blocked coronary artery responsible for his crushing chest pain.

He has done well since and has been following up with you regularly in Cardiology Clinic. He has had no further episodes of chest pain, nor any bleeding complications from the dual antiplatelet therapy of aspirin and clopidogrel that you prescribed to prevent thrombosis of his new stent. Starting the day after his heart attack – and at every follow-up visit since – you stressed the importance of taking these two medications for at least one year. At the one-year mark, you told him, you would discuss whether or not to continue this regimen. He presents today ready for that conversation.

As you begin to discuss the risks and benefits of continuing these two antiplatelet drugs, you find yourself in that uncomfortable data-free zone. After placement of a drug-eluting stent, the guidelines are clear that dual antiplatelet therapy should be continued for six months to one year. Beyond one year, however, the risks and benefits have remained uncertain.

In this week’s NEJM, Mauri and colleagues report the results of the DAPT trial, which was designed to fill this void. The data from this study suggest that an additional 18 months of dual antiplatelet therapy results in improved cardiovascular outcomes but also leads to an increased risk of bleeding.

The study included 10,000 participants who – like your patient – had received an FDA-approved drug-eluting stent, had already completed 1 year of treatment with dual antiplatelet therapy without ischemic events, repeat revascularizations, or major bleeding, and who had shown a high degree of adherence to the first year of dual antiplatelet therapy.  Participants were randomized to receive an additional 18 months of either continued dual antiplatelet therapy with aspirin plus a thienopyridine (either clopidogrel or prasugrel) or aspirin alone. The co-primary endpoints were the incidence of stent thrombosis and major adverse cardiovascular and cerebrovascular events (MACCE) – a composite of death, MI, or stroke.

The results: continued dual antiplatelet therapy reduced the rates of both stent thrombosis (0.4% vs. 1.4%, hazard ratio 0.29, 95% CI 0.17-0.48, P<0.001) and MACCE (4.3% vs. 5.9%, hazard ratio 0.71, 95% CI 0.59-0.85, P<0.001). Moderate or severe bleeding was higher in the dual antiplatelet therapy arm (2.5% vs. 1.6%, P=0.001).
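One way to make those rates concrete is to convert them into numbers needed to treat or harm over the 18 months of extended therapy. This NNT/NNH framing is a back-of-the-envelope illustration added here, not an analysis reported in the trial.

```python
# Numbers needed to treat (or harm), computed from the event rates above,
# expressed per 1000 patients to keep the arithmetic exact. Illustrative
# framing only, not an analysis reported in the DAPT trial.
import math

def nnt(ctrl_per_1000, tx_per_1000):
    """1000 / absolute risk difference, rounded up to a whole patient."""
    return math.ceil(1000 / abs(ctrl_per_1000 - tx_per_1000))

print("Stent thrombosis, NNT:", nnt(14, 4))   # 1.4% vs. 0.4% -> 100
print("MACCE, NNT:           ", nnt(59, 43))  # 5.9% vs. 4.3% -> 63
print("Bleeding, NNH:        ", nnt(16, 25))  # 1.6% vs. 2.5% -> 112
```

Roughly, then, continuing dual therapy in about 60 to 100 patients prevents one ischemic event, at the cost of one additional moderate or severe bleed for every 112 or so patients treated.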

Surprisingly, at the final 33-month follow-up, all-cause mortality was higher in the dual antiplatelet therapy group: 2.3%, as compared with 1.8% in the placebo arm (hazard ratio 1.36, P=0.04). This difference was driven by a higher rate of non-cardiovascular death, due to increased bleeding from trauma and to cancer-related deaths. Importantly, while there were some cancer deaths attributable to bleeding, this last finding seems to reflect the chance randomization of more patients with cancer to the dual antiplatelet therapy arm.

NEJM Deputy Editor Dr. John Jarcho describes how these new data might inform clinical practice: "The DAPT trial highlights the pros and cons of prolonged dual antiplatelet therapy. It suggests that the best approach may be to individualize patient management. Those at higher risk of atherothrombosis might benefit from prolonged treatment, while those at higher risk of bleeding might be best advised to stop at one year."

To your patient and the decision about his medications: this study doesn’t offer a clear directive.  So you talk to him about that. After a discussion of the risks and benefits, and your best attempt to share what we do – and what we don’t – know, you arrive at a decision to continue his aspirin and clopidogrel for another 18 months.  At the 18-month mark, you tell him, you will discuss whether or not to continue the regimen.

As he heads out of your office, you start thinking about the data-free zone you will face a year and a half from now, when he comes back to your office again wondering, “Well, doc, it’s been 18 months! Now what?”

For more on the DAPT trial, watch the 2-minute video summary and read the accompanying editorial.

The α-Thalassemias

Posted by Carla Rothaus • November 14th, 2014

More than 100 varieties of α-thalassemia have been identified. Their geographic distribution and the challenges associated with screening, diagnosis, and management suggest that α-thalassemias should have a higher priority on global public health agendas.  A new review article on this topic comes from the University of Oxford’s Drs. Frédéric Piel and David Weatherall.

The α-thalassemias represent a global health problem with a growing burden. A refined knowledge of the molecular basis of α-thalassemia will be fully relevant from a public health perspective only if it is complemented by detailed epidemiologic data. To ensure appropriate care of patients and the sustainability of health care systems, more effort must be put into obtaining evidence-based estimates of affected populations, providing resources for the prevention, control, and management of the thalassemias, and performing cost-effectiveness analyses.

Clinical Pearls

What are the α-thalassemias and how are they classified?

The thalassemias are the most common human monogenic diseases. These inherited disorders of hemoglobin synthesis are characterized by reduced production of the globin chains of hemoglobin. Worldwide, the most important forms are the α- and β-thalassemias, which affect production of the α-globin and β-globin chains, respectively. Normal adult hemoglobin consists of pairs of α and β chains (α2β2), and fetal hemoglobin has two α chains and two γ chains (α2γ2). The genes for the α chains and γ chains are duplicated (αα/αα, γγ/γγ), whereas the β chains are encoded by a single gene locus (β/β). In the fetus, defective production of α chains is reflected by the presence of excess γ chains, which form γ4 tetramers, called hemoglobin Bart's; in adults, excess β chains form β4 tetramers, called hemoglobin H (HbH). Because of their very high oxygen affinity, neither tetramer can transport oxygen, and the instability of HbH leads to the production of inclusion bodies in the red cells and a variable degree of hemolytic anemia. More than 100 genetic forms of α-thalassemia have been identified thus far, with phenotypes ranging from asymptomatic to lethal. On the basis of the number of α-globin genes lost by deletion or totally or partially inactivated by point mutations, the α-thalassemias are classified into two main subgroups: α+-thalassemia (formerly called α-thalassemia 2), in which one gene of the pair is deleted or inactivated by a point mutation (-α/αα or ααND/αα, with ND denoting nondeletion), and α0-thalassemia (formerly called α-thalassemia 1), in which both α-globin genes on the same chromosome are deleted (--/αα). Clinically relevant forms of α-thalassemia usually involve α0-thalassemia, either coinherited with α+-thalassemia (-α/-- or ααND/--), resulting in HbH disease, or inherited from both parents, resulting in hemoglobin Bart's hydrops fetalis (--/--), which is lethal in utero or soon after birth.
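That classification amounts to a small genotype-to-phenotype lookup, restated below as a Python table with plain-ASCII stand-ins for the article's notation ("a" for a functional α-globin gene, "-" for a deleted gene, "ND" for a nondeletional mutation). It is a simplification for illustration, not a diagnostic tool.

```python
# The genotype-phenotype classification described above, as a lookup table.
# "a" = functional alpha-globin gene, "-" = deleted gene, "ND" = nondeletional
# point mutation. A simplification for illustration, not a diagnostic tool.
ALPHA_THAL = {
    "aa/aa":   "normal (four functional alpha-globin genes)",
    "-a/aa":   "alpha(+)-thalassemia, heterozygous",
    "-a/-a":   "alpha(+)-thalassemia, homozygous",
    "--/aa":   "alpha(0)-thalassemia, heterozygous",
    "-a/--":   "HbH disease (deletional; usually milder course)",
    "aaND/--": "HbH disease (nondeletional; more severe course)",
    "--/--":   "hemoglobin Bart's hydrops fetalis (lethal)",
}

for genotype, phenotype in ALPHA_THAL.items():
    print(f"{genotype:8} -> {phenotype}")
```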

Figure 1. Phenotype-Genotype Relationship in α-Thalassemia.

What are the clinical features of the hemoglobin Bart's hydrops fetalis syndrome and HbH disease?

Fetuses affected by hemoglobin Bart's hydrops fetalis succumb to severe hypoxia either early in gestation or during the third trimester. The hemoglobin Bart's hydrops fetalis syndrome is often accompanied by a variety of congenital malformations and maternal complications, including severe anemia of pregnancy, preeclampsia, polyhydramnios, and extreme difficulty in delivery of both the fetus and the hugely enlarged placenta. HbH disease is often considered to be a relatively mild disorder. Studies have nevertheless highlighted clinically severe phenotypes, notably in nondeletional variants of the disease. In fact, HbH disease is characterized by a wide range of phenotypic characteristics. The form that results from deletions (-α/--) usually follows a relatively mild course, with moderate anemia and splenomegaly. Aside from episodes of intercurrent infection, this form of HbH disease does not require blood transfusions. However, the variety that results from the interaction of a nondeletional α-globin gene mutation with α0-thalassemia (ααND/--) follows a much more severe course.

Morning Report Questions

Q: How is hemoglobin Bart’s hydrops fetalis diagnosed?

A: Prenatal diagnosis is required to identify fetuses affected by hemoglobin Bart's hydrops fetalis and to reduce the risks to the mothers. The decision to consider such a diagnosis usually follows the finding of hypochromic microcytic red cells in both parents, in association with a normal hemoglobin A2 level; this combination would rule out β-thalassemia, which usually involves an elevated hemoglobin A2 level. Iron deficiency also has to be ruled out. When facilities for rapid DNA diagnosis are available, the hematologic examination is followed by confirmation of the presence of α0-thalassemia in the parents. The fetal diagnosis is usually made early in pregnancy by means of chorionic-villus sampling, although fetal anemia may also be diagnosed later during gestation by quantitation of the peak systolic velocity in the middle cerebral artery. Various alternative methods of preimplantation and preconception genetic diagnosis or prenatal diagnosis — for example, analysis of maternal blood for fetal DNA and identification of fetal cells in maternal blood by staining with antibodies against globin chains — are still at relatively early stages of study.

Q: Describe the geographic distribution of α-thalassemia.

A: Evidence that α-thalassemia is highly protective against severe malaria is well established. As a result of this selective advantage, heterozygous α-thalassemia has reached high frequencies throughout all tropical and subtropical regions, including most of Southeast Asia, the Mediterranean area, the Indian subcontinent, the Middle East, and Africa. In conjunction with large-scale global population movements in recent decades, α-thalassemia has spread to many other parts of the world, including northern Europe and North America. This phenomenon is best illustrated by the implementation in 1998 of a universal screening program for α-thalassemia in California. After the immigration of large numbers of people from the Philippines and other Southeast Asian countries, the incidence of α-thalassemia syndromes in California between January 1998 and June 2006 was 11.1 cases per 100,000 persons screened, with 406 cases of HbH disease and 5 cases of hemoglobin Bart’s hydrops fetalis.

Figure 2. Geographic Distribution of α-Thalassemia, Hemoglobin Bart’s Hydrops Fetalis, and HbH Disease.

Fevers, Chest Pain, and Substance-Use Disorder

Posted by Carla Rothaus • November 14th, 2014

In the latest Case Record of the Massachusetts General Hospital, a 31-year-old woman with substance-use disorder was admitted to this hospital because of fevers and chest pain. CT of the chest revealed multiple thick-walled nodular opacities throughout both lungs. Diagnostic tests were performed, and management decisions were made.

Between 2007 and 2009 in the United States, heroin use increased by almost 80%, and approximately 0.5% of the population had an opioid-use disorder (about 650,000 users).

Clinical Pearls

What are common and uncommon pathogens seen in injection-drug users?

Among persons who use injection drugs, infectious complications are the leading cause of hospitalization and in-hospital death. The most common causes of fever among injection-drug users are skin and soft-tissue infections, pulmonary infections, and endocarditis, and the most common pathogens are Staphylococcus aureus and streptococci. However, outbreaks of less common organisms have been linked to particular injection practices. Licking needles before injection increases the risk of infection with oral anaerobes, such as eikenella. Using tap water as a solvent increases the frequency of infection with gram-negative bacteria, such as pseudomonas. Candida infections have been associated with the use of lemon juice to dissolve basic substances, such as crack cocaine, before injection.

What factors are linked to opioid abuse?

The majority of first-time heroin users (81%) have previously misused prescription drugs, especially narcotic pain medications. Among patients with non-cancer-related chronic pain who have been exposed to long-term opioid therapy, there are relatively high rates of drug misuse (11.5%), addiction (3.7% among persons with a history of substance-use disorder and 0.2% among persons without a history of substance-use disorder), and illicit drug use (14.5%). Most patients who are taking narcotic pain medications do not abuse them, but patients who are treated with opioids should be monitored to ensure the appropriate use of these agents, and persons with a history of substance-use disorder should be followed very closely.

Table 2. DSM-5 Diagnostic Criteria for Opioid-Use Disorder.

Morning Report Questions

Q: What are effective treatments for opioid-use disorder?

A: The opioid agonists methadone and buprenorphine are among the most effective treatments for opioid-use disorder. Either medication, when administered as maintenance treatment, can decrease opioid use and drug-related hospitalizations and improve health, quality of life, and social functioning. In addition, treatment with an opioid agonist markedly reduces the likelihood of heroin overdose and death, which is particularly important for hospitalized patients because the risk of drug-related death for a patient with a history of substance-use disorder is nearly 10 times as high in the first month after hospital discharge as it is in subsequent months. Patients with a history of injection-drug use are among the patients who are most likely to be discharged against medical advice, and such discharges, as compared with medically approved discharges, result in higher rates of readmission, longer lengths of stay, and increased cost and in twice the risk of death within 30 days after discharge. Treating opioid withdrawal with opioid agonists and proactively addressing substance use have been shown to decrease the rate of discharge against medical advice.

Q: How does maintenance therapy with an opioid agonist compare to drug detoxification therapy?

A: A majority of patients relapse to opioid use after treatment has been tapered and discontinued. The evidence regarding therapy with an opioid agonist supports maintenance treatment, not detoxification or a tapered course. As with other medications for chronic diseases, the benefits, at least in the short term, last only while the patient is taking the medication. Maintenance therapy results in higher rates of treatment retention and lower rates of heroin use than does detoxification, even when the detoxification is prolonged and is accompanied by psychosocial support and aftercare.

NEJM Group Open Forum

Posted by Karen Buckley • November 13th, 2014

NEJM Group has joined with Medstro, a social professional network for physicians, to create the NEJM Group Open Forum, a series of live discussions intended to generate active conversation around important–and sometimes controversial–ideas. We’ve brought together authors, experts in the field, and many of your peers to discuss these three topics, now active and ready for your contribution.

Join the conversation >>

Can you implement handoff improvements in your own hospital? Ask the authors of the recent I-PASS study in NEJM how they reduced medical errors by 23% and preventable adverse events by 30%, with no increase in time spent.

Meet Sandeep Jauhar, MD, PhD, cardiologist and author of “Doctored” and “Intern,” and first in the series of “Fascinating Physicians,” brought to you by the NEJM CareerCenter.

And, with ABIM recertification pass rates falling, how can you be sure you are prepared? What could be contributing to the increasingly poor performance? Join experts and your peers in this new discussion from NEJM Knowledge+.

We’ll be adding new discussions over the next two months. The NEJM Group Open Forum is publicly available for all to view, but in order to comment you must register with Medstro – a one-minute process – and be a physician.

We hope you will participate in these and future discussions!