Screening for Colorectal Neoplasia

Posted by • January 6th, 2017

The percentage of U.S. residents who are up to date with colorectal cancer screening has not increased appreciably since 2010 and remains at approximately 60%. The National Colorectal Cancer Roundtable has established a goal of 80% adherence to colorectal cancer screening by the year 2018. To achieve the highest level of adherence, it may be best to offer patients a choice of screening strategies, because the “best” strategy is the one that they will adhere to consistently.

Clinical Pearl

• What interventions may increase patient participation in colorectal cancer screening?

Various interventions used in randomized, controlled trials have been shown to increase patient participation in screening; such interventions include sending patients invitations from their primary care provider, sending reminder letters and making telephone calls, and mailing fecal occult blood test kits to patients’ homes. The most successful programs use patient navigators to reduce logistic barriers, address cultural issues, and encourage participants to undergo screening; the use of patient navigators is especially important in underserved populations.

Clinical Pearl

• How can the benefit of colorectal cancer screening be maximized?

Maximizing the benefit of colorectal cancer screening requires a programmatic approach to implementing screening strategies. The quality of a screening program should be measured by its ability to identify patients who are due for screening, provide access to screening, assess adherence to the screening test and to follow-up colonoscopy if a noncolonoscopy screening test is positive, document test outcomes and disseminate accurate follow-up recommendations, identify patients with a negative test to follow them for repeat screening at the appropriate intervals, and provide timely surgery for cancers.

Morning Report Questions

Q: Are there individual patient factors other than a personal or family history of colonic neoplasia that influence colorectal cancer screening recommendations? 

A: Additional factors that might influence colorectal cancer screening strategies include race, lifestyle factors, and aspirin use. For example, among black men and women, the rates of death from colorectal cancer are 28.4 and 18.9 per 100,000 population, respectively; among white men and women, the corresponding rates are 18.7 and 13.2 per 100,000 population. Obesity, tobacco smoking, low physical activity, high intake of alcohol, high intake of red or processed meat, and low intake of fruits and vegetables are associated with an increased risk of colorectal cancer, whereas regular use of aspirin has been associated with a reduced risk. However, none of these factors is currently used to differentiate the screening strategy, the age at which screening is initiated, or surveillance intervals.

Q: At what patient age should colorectal cancer screening cease?

A: Although the risk of colorectal cancer increases with age, the competing risk of death from other diseases and the risk of serious complications from colonoscopy also increase with age. Several national organizations recommend that screening for patients between 76 and 85 years of age should be tailored on the basis of the presence of coexisting illnesses and that screening should be stopped after patients reach 85 years of age. A microsimulation model suggested that the intensity of prior screening and the individual risk of colorectal cancer should also be considered in determining the age at which to stop screening. Patients without a notable coexisting illness who are at average or higher risk for colorectal cancer and have had no prior screening would be expected to benefit from screening into their 80s.

A Woman with Progressive Loss of Language

Posted by • January 6th, 2017

The anterior temporal lobe is an area of the brain that is critically involved in object naming and word comprehension. Multiple lines of evidence suggest that the left anterior temporal lobe is specialized for word comprehension (recognition), whereas the right anterior temporal lobe may serve a similar function for objects and faces.

Clinical Pearl

• What underlying pathologic processes are associated with primary progressive aphasia?

In patients with primary progressive aphasia, the underlying pathologic process is usually frontotemporal lobar degeneration or Alzheimer’s disease. Approximately 60% of cases of primary progressive aphasia are associated with frontotemporal lobar degeneration, and 40% are associated with Alzheimer’s disease.

Clinical Pearl

• How does primary progressive aphasia differ from typical dementia?

In contrast to typical dementias that occur in late life, primary progressive aphasia most commonly starts before 65 years of age and is not associated with memory loss. There are three variants of primary progressive aphasia: agrammatic, logopenic, and semantic.

Morning Report Questions

Q: Describe some of the features of the three variants of primary progressive aphasia.

A: The agrammatic variant is characterized by the construction of grammatically incorrect sentences and a loss of fluency in the setting of preserved word comprehension. The logopenic variant is characterized by impairment of word finding, poor language repetition, and fluctuating fluency in the setting of preserved grammar and word comprehension. The semantic variant is characterized by impairment of object naming and word comprehension in the setting of preserved fluency, repetition, and grammar. Pauses for word finding and impaired object naming can occur in each of the variants. Each variant of primary progressive aphasia is associated with a different anatomical site of peak atrophy in the left-hemisphere language network: the inferior frontal gyrus (Broca’s area) in the agrammatic variant, the temporoparietal junction (Wernicke’s area) in the logopenic variant, and the anterior temporal lobe in the semantic variant.

Q: What type of neuronal deposits are associated with the semantic variant of primary progressive aphasia caused by frontotemporal lobar degeneration?

A: There is a strong correlation between the semantic variant of primary progressive aphasia and a type of frontotemporal lobar degeneration that is linked to the presence of abnormal deposits of TAR DNA-binding protein 43 (TDP-43), an RNA-binding protein with a wide range of targets. TDP-43–associated frontotemporal lobar degeneration is further classified into subgroups defined according to the pattern of inclusions: type A is associated with the presence of many neuronal cytoplasmic inclusions and short dystrophic neurites, type B with the presence of some neuronal cytoplasmic inclusions and rare dystrophic neurites, and type C with the presence of rare neuronal cytoplasmic inclusions and long dystrophic neurites. Although the clinicopathological correlations are not exact, type C lesions often occur with the semantic variant of primary progressive aphasia, semantic dementia, or the behavioral variant of frontotemporal dementia.

Zika Virus Infection in Pregnant Women in Rio de Janeiro

Posted by • December 15th, 2016

Zika virus (ZIKV) is a flavivirus that was recently introduced into Brazil. Brasil et al. enrolled pregnant women in whom a rash had developed within the previous 5 days and tested blood and urine specimens for ZIKV by reverse-transcriptase–polymerase-chain-reaction assays. The authors followed the women prospectively to obtain data on pregnancy and infant outcomes. This final report, published as a new Original Article, updates preliminary data on Zika virus infection among pregnant women in Rio de Janeiro: ZIKV infection during pregnancy was associated with fetal death, fetal growth restriction, and central nervous system abnormalities.

Clinical Pearl

• Are there any clinical features that may be more common among ZIKV-positive women than ZIKV-negative women?

In the study by Brasil et al., a descending macular or maculopapular rash was the most common type of exanthem noted in ZIKV-positive women. The maculopapular rash was seen far more frequently in ZIKV-positive women than in ZIKV-negative women (P=0.02). The other prevalent finding was pruritus, which was seen in 90% of ZIKV-positive women in the study. Conjunctival injection was present in 58% of ZIKV-positive women and in a smaller percentage (40%) of ZIKV-negative women (P=0.03), which suggests that this symptom is a more specific clinical feature of ZIKV infection.

Clinical Pearl

• Does the time window for adverse outcomes in utero due to Zika virus infection occur only in early pregnancy?

With rubella, the time window for adverse outcomes in utero occurs in the first 16 weeks of pregnancy. In contrast, with ZIKV, the time window appears to be throughout pregnancy. ZIKV pathogenicity was evident in the Brasil study cohort even in the presence of a “control” group that was affected by chikungunya virus, which is also linked to adverse pregnancy outcomes, particularly fetal loss. Adverse outcomes after ZIKV infection occurred regardless of the timing of maternal infection; adverse outcomes occurred in 55% of pregnancies in which the mother was infected in the first trimester (11 of 20 ZIKV-infected pregnancies), in 52% of those in which the mother was infected in the second trimester (37 of 71 ZIKV-infected pregnancies), and in 29% of those in which the mother was infected in the last trimester of pregnancy (10 of 34 ZIKV-infected pregnancies).
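
As a quick arithmetic check, the trimester-specific percentages quoted above follow directly from the raw counts; here is a minimal Python sketch using only the numbers in the preceding paragraph:

# Adverse pregnancy outcomes by trimester of maternal ZIKV infection,
# with (events, total) counts as reported in the text above.
counts = {
    "first trimester": (11, 20),
    "second trimester": (37, 71),
    "third trimester": (10, 34),
}

for trimester, (events, total) in counts.items():
    print(f"{trimester}: {events}/{total} = {100 * events / total:.0f}%")
# first trimester: 11/20 = 55%
# second trimester: 37/71 = 52%
# third trimester: 10/34 = 29%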

Morning Report Questions

Q: How did pregnancy outcomes compare between ZIKV-positive and ZIKV-negative women in the study by Brasil et al.?

A: Despite the high rate of adverse outcomes in the control group of pregnant women with other infectious illnesses, the findings in the ZIKV-positive group were far more striking. Among 125 pregnancies in ZIKV-positive women, 58 adverse pregnancy outcomes were noted (46.4%); in contrast, 7 of the 61 pregnancies (11.5%) in the ZIKV-negative cohort resulted in adverse outcomes (P<0.001). Among 117 live births in the ZIKV-positive cohort, 49 infants (42%) were found to have abnormalities on clinical examination, imaging, or both; in contrast, among 57 live births in the ZIKV-negative cohort, 3 infants (5%) had such abnormalities (P<0.001). ZIKV-positive women were nearly 10 times as likely as ZIKV-negative women to have emergency cesarean sections performed owing to fetal distress (23.5% vs. 2.5%, P=0.003). Infants born to ZIKV-positive mothers were also nearly four times as likely to need critical care assistance immediately after birth (a finding that is reflective of fetal distress) as infants who had not been exposed to ZIKV (21% vs. 6%, P=0.01). There was no significant difference in the rate of fetal loss between ZIKV-positive mothers and ZIKV-negative mothers (7.2% and 6.6%, respectively; P=1.0).
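
The headline comparison of adverse outcomes can be sanity-checked with a standard 2×2 exact test on the counts quoted above; the SciPy sketch below is purely illustrative and is not necessarily the analysis the investigators performed:

from scipy.stats import fisher_exact

# Rows: ZIKV-positive, ZIKV-negative; columns: adverse outcome, no adverse
# outcome. Counts come from the paragraph above (58 of 125 vs. 7 of 61).
table = [[58, 125 - 58],
         [7, 61 - 7]]

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, P = {p_value:.2g}")
# The P value comes out far below 0.001, consistent with the reported P<0.001.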

Q: Was microcephaly the most common abnormality observed after ZIKV infection in the study by Brasil et al.?

A: Four infants in the ZIKV-positive group (3.4%) were noted to have microcephaly at birth; two were small-for-gestational-age infants with proportionate microcephaly (i.e., the head size is small but is proportional to the weight and length of the infant), and two had disproportionate microcephaly (i.e., the head size is small relative to the weight and length of the infant). None of the infants in the control group had microcephaly. Although microcephaly has been widely discussed in relation to ZIKV infection, it is important to note that other findings such as cerebral calcifications and fetal growth restriction were present more frequently.

Dupilumab versus Placebo in Atopic Dermatitis

Posted by • December 15th, 2016

For patients with moderate-to-severe atopic dermatitis, topical therapies have limited efficacy, and systemic treatments are associated with substantial toxic effects. Thus, there is an unmet need for effective and safe long-term medications for these patients. Simpson et al. reported the results of two phase 3 trials of dupilumab monotherapy (SOLO 1 and SOLO 2) in adults with moderate-to-severe atopic dermatitis whose disease was inadequately controlled by topical treatment or for whom topical treatment was medically inadvisable. In these placebo-controlled trials, dupilumab, a human monoclonal antibody against interleukin-4 receptor alpha, was effective in controlling the signs and symptoms of atopic dermatitis. A new Original Article explains.

Clinical Pearl

• What are some of the features of atopic dermatitis?

Atopic dermatitis is a chronic, relapsing inflammatory skin disease that is characterized by the up-regulation of type 2 immune responses (including those involving type 2 helper T cells), an impaired skin barrier, and increased Staphylococcus aureus colonization. In patients with moderate-to-severe atopic dermatitis, skin lesions can encompass a large body-surface area and are frequently accompanied by intense, persistent pruritus, which leads to sleep deprivation, symptoms of anxiety or depression, and a poor quality of life.

Clinical Pearl

• Why might dupilumab be an effective therapy for atopic dermatitis?

Dupilumab is a fully human monoclonal antibody that binds specifically to the shared alpha chain subunit of the interleukin-4 and interleukin-13 receptors, thereby inhibiting the signaling of interleukin-4 and interleukin-13, which are type 2 inflammatory cytokines that may be important drivers of atopic or allergic diseases such as atopic dermatitis and asthma. In support of this premise, early-phase trials of dupilumab showed efficacy in patients with atopic dermatitis, those with asthma, and those with chronic sinusitis with nasal polyposis — all of which are conditions that have type 2 immunologic signatures.

Morning Report Questions

Q: Does dupilumab ameliorate the signs and symptoms of atopic dermatitis as compared to placebo?

A: In the SOLO trials, patients were randomly assigned in a 1:1:1 ratio to receive, for 16 weeks, weekly subcutaneous injections of dupilumab (300 mg) or placebo, or the same dose of dupilumab every other week alternating with placebo. In SOLO 1 and SOLO 2, both dupilumab regimens outperformed placebo over 16 weeks of treatment across multiple outcome measures reflecting objective signs of atopic dermatitis, subjective symptoms (e.g., pruritus), important aspects of mental health (i.e., anxiety and depression), and quality of life. The mean efficacy results were similar for the two dupilumab regimens. SOLO 1 and SOLO 2 were designed to provide replication of results, and the patient populations and results were highly consistent in the two trials.

Q: What were some of the adverse events noted during the 16-week treatment period in the SOLO trials?

A: The most common adverse events in the two trials were exacerbations of atopic dermatitis, injection-site reactions, and nasopharyngitis. The incidence of nasopharyngitis was generally balanced across dupilumab and placebo groups. Exacerbations of atopic dermatitis and most types of skin infections were more common in the placebo groups. Injection-site reactions and conjunctivitis were more frequent in patients receiving dupilumab than in those receiving placebo.

In Coronary Artery Disease, Can Nurture Override Nature?

Posted by • December 14th, 2016

Mr. Locke, a 48-year-old man with prehypertension, comes to your office for a routine visit. He has a strong family history of coronary artery disease (CAD): his father and two brothers had myocardial infarctions in their 50s. His BMI is 31 kg/m², and although he is a nonsmoker, he does not exercise routinely. You suggest that exercising and focusing on a healthy diet might reduce his risk of CAD, but he wants to know by how much. Is his fate sealed by his family history, or could a healthier lifestyle help him reduce his risk of CAD?

Previous observational studies have identified factors associated with an increased risk of CAD, including family history and such lifestyle factors as smoking, obesity, an unhealthy diet, and lack of physical activity. But how much of family history is genetic versus behavioral? How much can lifestyle choices modulate genetic risk? These gene-environment interactions have previously been difficult to quantify, but increasingly the collection and study of genetic data have contributed new insights.

In a study entitled “Genetic Risk, Adherence to a Healthy Lifestyle, and Coronary Disease,” published in this week’s issue of NEJM, investigators examined gene-environment interactions in three large prospective cohorts — the Atherosclerosis Risk in Communities (ARIC) study, the Women’s Genome Health Study (WGHS), and the Malmö Diet and Cancer Study (MDCS) — as well as the cross-sectional BioImage study. To determine genetic risk, the authors derived a polygenic risk score from single-nucleotide polymorphisms (SNPs) previously associated with CAD in genome-wide association studies and divided the patients into quintiles from lowest to highest risk. They also collected data on four healthy lifestyle factors (no current smoking, no obesity, physical activity at least once weekly, and a healthy diet pattern) and divided patients into three lifestyle categories: favorable (3 or 4 factors), intermediate (2 factors), and unfavorable (0 or 1 factor).
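
In outline, this two-axis stratification is simple to compute; the Python sketch below is schematic, with randomly generated weights and allele dosages standing in for the study’s actual SNP panel and cohort:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: per-SNP effect-size weights from genome-wide
# association studies and 0/1/2 risk-allele dosages per participant
# (the sizes here are illustrative, not the study's).
n_people, n_snps = 1000, 50
weights = rng.normal(0.05, 0.02, n_snps)
dosages = rng.integers(0, 3, (n_people, n_snps))

# Polygenic risk score: weighted sum of risk-allele dosages per person.
prs = dosages @ weights

# Quintiles of genetic risk, as in the study (1 = lowest, 5 = highest).
quintile = np.digitize(prs, np.quantile(prs, [0.2, 0.4, 0.6, 0.8])) + 1

# Lifestyle category from the count of healthy factors (no current smoking,
# no obesity, weekly physical activity, healthy diet pattern).
def lifestyle_category(n_healthy_factors: int) -> str:
    if n_healthy_factors >= 3:
        return "favorable"
    if n_healthy_factors == 2:
        return "intermediate"
    return "unfavorable"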

As expected, among more than 50,000 patients from the three prospective cohorts, the risk of CAD increased from the lowest to the highest quintile of genetic risk (hazard ratio, 1.91 for participants at high vs. low genetic risk). A family history of CAD was a surrogate, although an imperfect one, for genetic risk. Likewise, the risk of coronary events was higher in participants with an unfavorable lifestyle than in those with a favorable one.

In this study, the investigators set out to understand not only the roles of genes and environment in CAD risk, but also the interaction between genetic and lifestyle risk. To this end, they assessed the effect of lifestyle on the risk of CAD within each category of genetic risk and found that adherence to a favorable lifestyle was beneficial across all genetic risk groups: the relative risk of coronary events was 45% lower with a favorable lifestyle than with an unfavorable lifestyle among patients at low genetic risk, 47% lower among those at intermediate genetic risk, and 46% lower among those at high genetic risk. Conversely, the benefit of having a low genetic risk of CAD was offset by an unfavorable lifestyle. The full results are represented visually in the paper’s bar graphs. Similar results were found in the cross-sectional BioImage study, in which coronary-artery calcification, rather than clinical outcomes, served as a marker of subclinical coronary burden.

This study provides the most precise estimates to date of the relative contributions of genetics and lifestyle to the risk of CAD. In a large cohort with robust genetic data, genetic risk and lifestyle behaviors were each independently associated with CAD risk, but a favorable or unfavorable lifestyle appeared to attenuate or amplify, respectively, the risk conferred by genes.

We do not routinely perform genetic testing on patients to calculate genetic risk scores, so our ability to use these specific risk estimates for counseling is limited. However, this study reinforces the beneficial effect of lifestyle modification on CAD risk, regardless of one’s genetic load. Public health efforts and individual patient counseling should continue to stress the importance of a healthy lifestyle, including smoking cessation, weight reduction, physical activity, and a healthy diet, in reducing the risk of CAD.

Stents or Bypass Surgery for Left Main Coronary Artery Disease

Posted by • December 8th, 2016

EXCEL (Evaluation of XIENCE versus Coronary Artery Bypass Surgery for Effectiveness of Left Main Revascularization) was an international, open-label, multicenter, randomized trial that compared everolimus-eluting stents with coronary-artery bypass grafting (CABG) in patients with left main coronary artery disease. A new Original Article explains how, at 3 years, percutaneous coronary intervention (PCI) was noninferior to CABG with respect to the rate of death, stroke, or myocardial infarction.

Clinical Pearl

• How are patients with obstructive left main coronary artery disease usually treated?

Left main coronary artery disease is associated with high morbidity and mortality owing to the large amount of myocardium at risk. European and U.S. guidelines recommend that most patients with left main coronary artery disease undergo CABG.

Clinical Pearl

• In what subgroup of patients with left main coronary artery disease might percutaneous coronary intervention (PCI) be an acceptable alternative to CABG?

Randomized trials have suggested that PCI with drug-eluting stents might be an acceptable alternative for selected patients with left main coronary disease. Specifically, in the subgroup of patients with left main coronary disease in the Synergy between PCI with Taxus and Cardiac Surgery (SYNTAX) trial, the rate of a composite of death, stroke, myocardial infarction, or unplanned revascularization at 5 years was similar among patients treated with paclitaxel-eluting stents and those treated with CABG. However, the outcomes of PCI were acceptable only in the patients with coronary artery disease of low or intermediate anatomical complexity, a hypothesis-generating subgroup observation that motivated the EXCEL trial.

Morning Report Questions

Q: Is PCI noninferior to CABG for left main coronary artery disease of low or intermediate anatomical complexity?

A: In the EXCEL trial involving patients with left main coronary artery disease and low or intermediate SYNTAX scores, PCI with everolimus-eluting stents was noninferior to CABG with respect to the primary composite end point of death, stroke, or myocardial infarction at 3 years. The primary composite end-point event of death, stroke, or myocardial infarction at 3 years occurred in 15.4% of the patients in the PCI group and in 14.7% of the patients in the CABG group (difference, 0.7 percentage points; upper 97.5% confidence limit, 4.0 percentage points; P=0.02 for noninferiority; hazard ratio, 1.00; 95% confidence interval [CI], 0.79 to 1.26; P=0.98 for superiority). The relative treatment effect for the primary end point was consistent across prespecified subgroups, including the subgroup defined according to the presence versus absence of diabetes.
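
The noninferiority logic can be made concrete with a quick calculation from the percentages quoted above; in this Python sketch, the group sizes are round figures and the 4.2-percentage-point margin is assumed to be the trial’s prespecified value, so the output is illustrative rather than a reproduction of the published analysis:

from math import sqrt

# Approximate 3-year event proportions and group sizes.
p_pci, n_pci = 0.154, 948
p_cabg, n_cabg = 0.147, 957

diff = p_pci - p_cabg
se = sqrt(p_pci * (1 - p_pci) / n_pci + p_cabg * (1 - p_cabg) / n_cabg)
upper_bound = diff + 1.96 * se  # upper limit of the one-sided 97.5% CI

margin = 0.042  # assumed noninferiority margin (4.2 percentage points)
print(f"difference = {diff:.3f}, upper bound = {upper_bound:.3f}")
# difference = 0.007, upper bound = 0.039: the bound falls below the margin,
# so PCI is declared noninferior to CABG for the primary end point.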

Q: What changes in practice since the time of the SYNTAX trial might improve outcomes with PCI?

A: Since the time that the SYNTAX trial was conducted, changes in practice have occurred that would be expected to improve outcomes with PCI. In the EXCEL trial, the authors used everolimus-eluting stents almost exclusively; these stents are associated with a low rate of stent thrombosis. In addition, intravascular ultrasonographic imaging guidance was used in nearly 80% of the patients in the PCI group in the EXCEL trial, a practice that has been associated with higher event-free survival after left main coronary-artery stenting.

A Woman with Leukocytosis

Posted by • December 8th, 2016

Primary causes of neutrophilia encompass benign, congenital, and familial syndromes. Another category of primary neutrophilias includes those associated with clonal bone marrow diseases. An 86-year-old woman was seen at the hospital because of fatigue, night sweats, leukocytosis, and splenomegaly. Review of the peripheral-blood smear revealed neutrophilia without dysplastic features, immature forms, or monocytosis. A diagnostic procedure was performed in a new Case Record.

Clinical Pearl

• Is chronic neutrophilic leukemia (CNL) common?

CNL, which was first described approximately a century ago, is rare, with only a few hundred cases reported in the literature. The diagnosis is often made incidentally, but constitutional symptoms such as sweats, fatigue, weight loss, and abdominal symptoms related to splenomegaly can be present.

Clinical Pearl

• What are the World Health Organization (WHO) diagnostic criteria for CNL?

The diagnosis of CNL, a condition that was first recognized as a distinct entity in the 2001 WHO classification guidelines, requires all of the following:

• unexplained peripheral-blood leukocytosis (≥25×10⁹ white cells per liter), with neutrophils and bands comprising at least 80% of white cells, immature granulocytes comprising less than 10% of white cells, myeloblasts comprising less than 1% of white cells, an absolute monocyte count of less than 1×10⁹ cells per liter, and an absence of granulocytic dysplasia;
• a hypercellular bone marrow with increased granulocytic forms, less than 5% myeloblasts, and normal neutrophil maturation;
• splenomegaly;
• exclusion of BCR-ABL1, PDGFRA, PDGFRB, and FGFR1 genetic abnormalities; and
• exclusion of the diagnoses of polycythemia vera, primary myelofibrosis, and essential thrombocythemia.
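
Because the peripheral-blood portion of these criteria is a set of numeric thresholds, it can be encoded as a simple check; the Python sketch below is a teaching aid only. It covers just the blood-count thresholds, and the marrow, splenomegaly, and genetic-exclusion criteria still apply:

def meets_cnl_blood_criteria(
    wbc: float,                  # total white cells, x10^9 per liter
    neutrophil_band_pct: float,  # neutrophils plus bands, % of white cells
    immature_gran_pct: float,    # immature granulocytes, % of white cells
    myeloblast_pct: float,       # myeloblasts, % of white cells
    monocytes: float,            # absolute monocyte count, x10^9 per liter
) -> bool:
    return (
        wbc >= 25
        and neutrophil_band_pct >= 80
        and immature_gran_pct < 10
        and myeloblast_pct < 1
        and monocytes < 1
    )

# Example: WBC 30, 85% neutrophils plus bands, 5% immature granulocytes,
# 0% myeloblasts, monocytes 0.5 -> meets the blood-count thresholds.
print(meets_cnl_blood_criteria(30, 85, 5, 0, 0.5))  # True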

Morning Report Questions

Q: What mutations are associated with CNL?

A: In the past several years, it has been shown that up to 90% of patients with CNL harbor a CSF3R (colony-stimulating factor 3 receptor) mutation; the presence of this mutation has been added to the revised 2016 WHO list of diagnostic criteria for CNL. The genetic profile of CNL may include other abnormalities, such as SETBP1, ASXL1, and TET2 mutations. Whether these abnormalities affect the pathogenesis, evolution, or prognosis of CNL is currently not clear. One study suggested that concurrent ASXL1 mutations may be prognostically detrimental.

Q: How is CNL treated, and how does the location of a CSF3R mutation influence the choice of therapy?

A: The most effective therapeutic approach to CNL has not been well defined. Oral agents, including hydroxyurea and busulfan, have been used to control leukocytosis and splenomegaly and to improve quality of life. CSF3R mutations occur at two distinct regions: the membrane proximal region and the cytoplasmic tail, which can be truncated by nonsense and frameshift mutations. Mutations in the membrane proximal region result in dysregulation of JAK family kinases, whereas truncation mutations of the cytoplasmic tail result in dysregulation of SRC family–TNK2 kinases. Membrane proximal mutations may confer sensitivity to JAK kinase inhibitors such as ruxolitinib, whereas truncation mutations confer sensitivity to the multikinase inhibitor dasatinib but not to JAK kinase inhibitors.
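
The mutation-site-to-therapy reasoning above is essentially a two-row lookup table; the Python sketch below condenses it as a mnemonic (the drug names are the examples given in the text, and this is not treatment guidance):

# Map the CSF3R mutation region to the dysregulated kinase pathway and the
# inhibitor class it may confer sensitivity to, per the text above.
CSF3R_MUTATION_EFFECTS = {
    "membrane proximal": {
        "dysregulated_kinases": "JAK family",
        "candidate_inhibitor": "ruxolitinib (JAK kinase inhibitor)",
    },
    "cytoplasmic tail truncation": {
        "dysregulated_kinases": "SRC family-TNK2",
        "candidate_inhibitor": "dasatinib (multikinase inhibitor)",
    },
}

print(CSF3R_MUTATION_EFFECTS["membrane proximal"]["candidate_inhibitor"])
# ruxolitinib (JAK kinase inhibitor)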

Tranexamic Acid in Patients Undergoing Coronary-Artery Surgery

Posted by • December 7th, 2016

On the first day of my second year of residency, I showed up to the cardiac surgery intensive care unit (CSICU) terrified of what awaited me. It wasn’t uncommon for patients recovering from cardiac surgery to be on three different vasopressors while intubated, with four chest tubes and pacing wires. Even after a year of general surgery residency, the CSICU was a bit of a foreign land to me. One thing I learned quickly, though, was that bleeding after cardiac surgery was relatively common.

Many patients required blood transfusions and occasionally had to return to the operating room because of ongoing bleeding. In an effort to reduce postoperative bleeding, transfusion, and reoperation, most cardiac surgery units administer antifibrinolytic agents such as tranexamic acid, a lysine analogue. Although studies have shown that tranexamic acid can reduce blood loss and the need for transfusion after cardiac surgery, it also has a prothrombotic effect, and the risk of myocardial infarction, stroke, or other complications is unclear. Furthermore, some studies have shown an increased risk of seizures with tranexamic acid, potentially due to cerebral infarction.

A recent multicenter, double-blind, randomized, controlled study by Myles et al., published in NEJM, investigates the risks and benefits of intraoperative tranexamic acid infusion in cardiac surgery. In the trial’s 2-by-2 factorial design, patients at increased surgical risk undergoing coronary-artery surgery (either on- or off-pump and with or without concomitant valve procedures) were randomized to receive aspirin or placebo (results reported in a separate study) and tranexamic acid or placebo. The primary outcome was a composite of death and thrombotic complications (nonfatal myocardial infarction, stroke, pulmonary embolism, renal failure, or bowel infarction) within 30 days after surgery. Secondary outcomes included the individual components of the composite primary outcome as well as reoperation due to major hemorrhage or cardiac tamponade, transfusion, and seizures (an outcome that was added later in the study and was based on clinical diagnosis).

Among the 2311 patients randomized to receive tranexamic acid and the 2320 patients who received placebo, baseline characteristics and comorbidities were evenly distributed between groups. The composite primary outcome was reported in 17% of patients in the tranexamic acid group and 18% of patients in the placebo group (relative risk, 0.92; 95% confidence interval [CI], 0.81 to 1.05; P = 0.22). This result was not affected by whether or not patients received aspirin. Rates of the individual components of the composite outcome were similar between groups.

Tranexamic acid was associated with less bleeding and blood loss, fewer patients transfused, fewer units of blood products transfused (4331 vs. 7994, P<0.001), and less major hemorrhage or cardiac tamponade requiring reoperation (1.4% vs. 2.8%, P=0.001). However, postoperative seizures were more common in the tranexamic acid group (15 vs. 2 patients; relative risk, 7.62; 95% CI, 1.77 to 68.71; P=0.002 by Fisher’s exact test). In a post-hoc analysis, seizure was associated with poorer outcomes, including stroke (relative risk, 21.88; 95% CI, 10.06 to 47.58; P<0.001) and death (relative risk, 9.52; 95% CI, 2.53 to 35.90; P=0.02). Several years into the study, the dose of tranexamic acid administered was halved after other studies suggested a possible dose-related risk of seizure. The dose reduction did not affect the risk of seizure (although the study was underpowered for this analysis).
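
As a rough check on the seizure signal, the raw counts above can be run through the same kind of exact test the authors cite; this SciPy sketch is illustrative and will not exactly reproduce the published, possibly adjusted, estimates:

from scipy.stats import fisher_exact

seizures_txa, n_txa = 15, 2311
seizures_placebo, n_placebo = 2, 2320

# Crude relative risk from the raw counts.
rr = (seizures_txa / n_txa) / (seizures_placebo / n_placebo)

_, p = fisher_exact([[seizures_txa, n_txa - seizures_txa],
                     [seizures_placebo, n_placebo - seizures_placebo]])
print(f"relative risk ~ {rr:.1f}, Fisher P = {p:.3f}")
# Roughly 7.5 and P of about 0.002, in line with the reported values.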

Overall, tranexamic acid seems to reduce bleeding complications after cardiac surgery, most impressively by reducing the number of transfusions required by 47%. In addition, tranexamic acid did not increase the risk of death, myocardial infarction, or stroke. However, this study corroborates data linking tranexamic acid to higher risk of seizure.

How will this study change current practice in cardiac surgery? Will cardiac surgery units that do not currently use tranexamic acid change their practice? NEJM Deputy Editor John Jarcho comments, “Surgeons really don’t like bleeding, for good reasons. I think they will see these findings as a good reason to use antifibrinolytic therapy, if they weren’t using it previously.”

Resident Burnout

Posted by • December 1st, 2016

This podcast is the second in a series that reflects medicine’s most pressing issues through the eyes of residents. “The House” provides residents with a forum to share their stories from the bedside, where they are learning far more than the lessons of clinical medicine.

Lisa Rosenbaum is a cardiologist at Brigham and Women’s Hospital in Boston, an instructor at Harvard Medical School, and National Correspondent for the New England Journal of Medicine.
Dan Weisberg is a resident in Internal Medicine and Primary Care at Brigham and Women’s Hospital and Harvard Medical School.

Postpartum Depression

Posted by • December 1st, 2016

Untreated postpartum depression is common and affects the health of the woman, infant, and family. Pregnant women should receive information about the signs and symptoms of postpartum depression and its effects. Treatment depends on the severity of symptoms and the level of functional impairment and can include social support, psychological therapy, and pharmacotherapy (generally an SSRI as first-line treatment). A new Clinical Practice article explains further.

Clinical Pearl

• What are some of the risk factors for postpartum depression?

The strongest risk factor for postpartum depression is a history of mood and anxiety problems and, in particular, untreated depression and anxiety during pregnancy. The rapid decline in the level of reproductive hormones after childbirth probably contributes to the development of depression in susceptible women, although the specific pathogenesis of postpartum depression is unknown; in addition to hormonal changes, proposed contributors include genetic factors and social factors including low social support, marital difficulties, violence involving the intimate partner, previous abuse, and negative life events.

Clinical Pearl

• What is the natural course of postpartum depression?

The natural course of postpartum depression is variable. Although it may resolve spontaneously within weeks after its onset, approximately 20% of women with postpartum depression still have depression beyond the first year after delivery, and 13% after 2 years; approximately 40% of women will have a relapse either during subsequent pregnancies or on other occasions unrelated to pregnancy.

Morning Report Questions

Q: How would you evaluate a woman for possible postpartum depression? 

A: The best method for detecting postpartum depression remains controversial. Administration of the 10-item Edinburgh Postnatal Depression Scale (EPDS) is recommended by both the American College of Obstetricians and Gynecologists and the American Academy of Pediatrics as a method of identifying possible postpartum depression. The U.S. Agency for Healthcare Research and Quality suggests that serial testing, beginning with the use of a sensitive, two-question screening tool relating to feelings of depression or hopelessness and of a lack of interest or pleasure in activities, followed by the use of a second, more specific instrument for women who give a positive answer to either screening question, may be a reasonable strategy to reduce both false positive and false negative results. The evaluation of women with possible postpartum depression requires careful history taking to ascertain the diagnosis, identify coexisting psychiatric disorders, and manage contributing medical and psychosocial issues. During the process of history taking, special attention should be given to a personal or family history of depression, postpartum psychosis, or bipolar disorder, especially if depression or bipolar disorder had been associated with pregnancy. Coexisting anxiety and obsessive–compulsive symptoms are common among women with postpartum depression and should be investigated further. Women should be asked about social support as well as substance abuse and violence involving an intimate partner. An examination to assess mental status should be conducted, as well as a physical examination if symptoms suggest a medical cause. Laboratory investigations should be performed as indicated; measurement of hemoglobin and thyroid-stimulating hormone levels is generally recommended.
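
Stripped of clinical detail, the serial strategy suggested above is a two-stage decision rule; in the Python sketch below, the function names and the EPDS cutoff of 13 are illustrative assumptions rather than values specified in the article:

def needs_full_epds(feels_down_or_hopeless: bool,
                    little_interest_or_pleasure: bool) -> bool:
    # Stage 1: two sensitive screening questions; either positive answer
    # triggers the more specific instrument.
    return feels_down_or_hopeless or little_interest_or_pleasure

def screens_positive(epds_score: int, cutoff: int = 13) -> bool:
    # Stage 2: the 10-item Edinburgh Postnatal Depression Scale; a score at
    # or above the cutoff prompts full clinical evaluation.
    return epds_score >= cutoff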

Q: What are some of the antidepressants that are used in women with postpartum depression who are breast-feeding?

A: Although data on long-term child development are limited, in most cases breast-feeding need not be discouraged among women who are taking an antidepressant medication. Selective serotonin reuptake inhibitors (SSRIs) pass into breast milk at a dose that is less than 10% of the maternal dose, and drugs in this class are generally considered to be compatible with breast-feeding of healthy, full-term infants. Despite some variability among SSRIs in their passage into breast milk, switching the antidepressant medication because of lactation is not usually recommended for women who had previously been receiving effective treatment with a given agent, owing to the risk of a relapse of depression. Serotonin–norepinephrine reuptake inhibitors (SNRIs) or mirtazapine are commonly used either when SSRIs are ineffective or when a woman has previously had a positive response to these agents, since available data also suggest minimal passage into breast milk. Data on safety for these agents remain limited, however, since fewer than 50 cases have been reported in which women who were breast-feeding were taking either SNRIs or mirtazapine.