Zika Virus Infection in Pregnant Women in Rio de Janeiro

Posted by • December 15th, 2016



Zika virus (ZIKV) is a flavivirus that was recently introduced into Brazil. Brasil et al. enrolled pregnant women in whom a rash had developed within the previous 5 days and tested blood and urine specimens for ZIKV by reverse-transcriptase–polymerase-chain-reaction assays. The authors followed women prospectively to obtain data on pregnancy and infant outcomes. This final report, published as a new Original Article, updates preliminary data on Zika virus infection among pregnant women in Rio de Janeiro. ZIKV infection during pregnancy was associated with fetal death, fetal growth restriction, and central nervous system abnormalities.

Clinical Pearl

• Are there any clinical features that may be more common among ZIKV-positive women than ZIKV-negative women?

In the study by Brasil et al., a descending macular or maculopapular rash was the most common type of exanthem noted in ZIKV-positive women. The maculopapular rash was seen far more frequently in ZIKV-positive women than in ZIKV-negative women (P=0.02). The other prevalent finding was pruritus, which was seen in 90% of ZIKV-positive women in the study. Conjunctival injection was present in 58% of ZIKV-positive women and in a smaller percentage (40%) of ZIKV-negative women (P=0.03), which suggests that this symptom is a more specific clinical feature of ZIKV infection.

Clinical Pearl

• Is the time window for adverse outcomes in utero due to Zika virus infection limited to early pregnancy?

With rubella, the time window for adverse outcomes in utero occurs in the first 16 weeks of pregnancy. In contrast, with ZIKV, the time window appears to span the entire pregnancy. ZIKV pathogenicity was evident in the Brasil study cohort even in the presence of a “control” group that was affected by chikungunya virus, which is also linked to adverse pregnancy outcomes, particularly fetal loss. Adverse outcomes after ZIKV infection occurred regardless of the timing of maternal infection: in 55% of pregnancies in which the mother was infected in the first trimester (11 of 20 ZIKV-infected pregnancies), in 52% of those with infection in the second trimester (37 of 71), and in 29% of those with infection in the third trimester (10 of 34).



Morning Report Questions

Q: How did pregnancy outcomes compare between ZIKV-positive and ZIKV-negative women in the study by Brasil et al.?

A: Despite the high rate of adverse outcomes in the control group of pregnant women with other infectious illnesses, the findings in the ZIKV-positive group were far more striking. Among 125 pregnancies in ZIKV-positive women, 58 adverse pregnancy outcomes were noted (46.4%); in contrast, 7 of the 61 pregnancies (11.5%) in the ZIKV-negative cohort resulted in adverse outcomes (P<0.001). Among 117 live births in the ZIKV-positive cohort, 49 infants (42%) were found to have abnormalities on clinical examination, imaging, or both; in contrast, among 57 live births in the ZIKV-negative cohort, 3 infants (5%) had such abnormalities (P<0.001). ZIKV-positive women were nearly 10 times as likely as ZIKV-negative women to have emergency cesarean sections performed owing to fetal distress (23.5% vs. 2.5%, P=0.003). Infants born to ZIKV-positive mothers were also nearly four times as likely to need critical care assistance immediately after birth (a finding that is reflective of fetal distress) as infants who had not been exposed to ZIKV (21% vs. 6%, P=0.01). There was no significant difference in the rate of fetal loss between ZIKV-positive mothers and ZIKV-negative mothers (7.2% and 6.6%, respectively; P=1.0).
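The group comparisons above are ratios of simple proportions. As an illustrative sketch (not the study's analysis code), the quoted counts of 58 of 125 ZIKV-positive and 7 of 61 ZIKV-negative pregnancies with adverse outcomes imply the following relative risk:

```python
# Illustrative sketch, not the study's analysis code: the relative risk
# implied by the raw counts quoted above.

def relative_risk(events_a, n_a, events_b, n_b):
    """Ratio of the event proportion in group A to that in group B."""
    return (events_a / n_a) / (events_b / n_b)

rate_pos, rate_neg = 58 / 125, 7 / 61
rr = relative_risk(58, 125, 7, 61)
print(f"{rate_pos:.1%} vs. {rate_neg:.1%}; relative risk = {rr:.2f}")
# prints "46.4% vs. 11.5%; relative risk = 4.04"
```

The sketch covers only the point estimate; the P values quoted in the text would come from significance tests (e.g., Fisher's exact test) on the same counts.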

Q: Was microcephaly the most common abnormality observed after ZIKV infection in the study by Brasil et al.?

A: Four infants in the ZIKV-positive group (3.4%) were noted to have microcephaly at birth; two were small-for-gestational-age infants with proportionate microcephaly (i.e., the head size is small but is proportional to the weight and length of the infant), and two had disproportionate microcephaly (i.e., the head size is small relative to the weight and length of the infant). None of the infants in the control group had microcephaly. Although microcephaly has been widely discussed in relation to ZIKV infection, it is important to note that other findings such as cerebral calcifications and fetal growth restriction were present more frequently.

Dupilumab versus Placebo in Atopic Dermatitis

Posted by • December 15th, 2016



For patients with moderate-to-severe atopic dermatitis, topical therapies have limited efficacy, and systemic treatments are associated with substantial toxic effects. Thus, there is an unmet need for effective and safe long-term medications for these patients. Simpson et al. reported the results of two phase 3 trials of dupilumab monotherapy (SOLO 1 and SOLO 2) in adults with moderate-to-severe atopic dermatitis whose disease was inadequately controlled by topical treatment or for whom topical treatment was medically inadvisable. In these placebo-controlled trials, dupilumab, a human monoclonal antibody against interleukin-4 receptor alpha, was effective in controlling the signs and symptoms of atopic dermatitis. A new Original Article explains.

Clinical Pearl

• What are some of the features of atopic dermatitis?

Atopic dermatitis is a chronic, relapsing inflammatory skin disease that is characterized by the up-regulation of type 2 immune responses (including those involving type 2 helper T cells), an impaired skin barrier, and increased Staphylococcus aureus colonization. In patients with moderate-to-severe atopic dermatitis, skin lesions can encompass a large body-surface area and are frequently accompanied by intense, persistent pruritus, which leads to sleep deprivation, symptoms of anxiety or depression, and a poor quality of life.

Clinical Pearl

• Why might dupilumab be an effective therapy for atopic dermatitis?

Dupilumab is a fully human monoclonal antibody that binds specifically to the shared alpha chain subunit of the interleukin-4 and interleukin-13 receptors, thereby inhibiting the signaling of interleukin-4 and interleukin-13, which are type 2 inflammatory cytokines that may be important drivers of atopic or allergic diseases such as atopic dermatitis and asthma. In support of this premise, early-phase trials of dupilumab showed efficacy in patients with atopic dermatitis, those with asthma, and those with chronic sinusitis with nasal polyposis — all of which are conditions that have type 2 immunologic signatures.

Morning Report Questions

Q: Does dupilumab ameliorate the signs and symptoms of atopic dermatitis as compared with placebo?

A: In the SOLO trials, patients were randomly assigned in a 1:1:1 ratio to receive, for 16 weeks, weekly subcutaneous injections of dupilumab (300 mg) or placebo, or the same dose of dupilumab every other week alternating with placebo. In SOLO 1 and SOLO 2, both dupilumab regimens outperformed placebo over 16 weeks of treatment across multiple outcome measures that reflected objective signs of atopic dermatitis, subjective symptoms (e.g., pruritus), important aspects of mental health (i.e., anxiety and depression), and quality of life. The mean efficacy results were similar for the two dupilumab regimens. SOLO 1 and SOLO 2 were designed to provide replication of results, and the patient populations and results were highly consistent in the two trials.

Q: What were some of the adverse events noted during the 16-week treatment period in the SOLO trials?

A: The most common adverse events in the two trials were exacerbations of atopic dermatitis, injection-site reactions, and nasopharyngitis. The incidence of nasopharyngitis was generally balanced across dupilumab and placebo groups. Exacerbations of atopic dermatitis and most types of skin infections were more common in the placebo groups. Injection-site reactions and conjunctivitis were more frequent in patients receiving dupilumab than in those receiving placebo.

In Coronary Artery Disease, Can Nurture Override Nature?

Posted by • December 14th, 2016



Mr. Locke, a 48-year-old man with prehypertension, comes to your office for a routine visit. He has a strong family history of coronary artery disease (CAD); his father and two brothers had myocardial infarctions in their 50s. His BMI is 31 kg/m², and although he is a nonsmoker, he does not exercise routinely. You suggest that exercising and focusing on a healthy diet might reduce his risk of CAD, but he wants to know: by how much? Is his fate sealed by his family history, or could a healthier lifestyle help him reduce his risk of CAD?

Previous observational studies have identified factors associated with an increased risk of CAD, including family history as well as lifestyle factors such as smoking, obesity, an unhealthy diet, and lack of physical activity. But how much of family history is genetic versus behavioral? How much can lifestyle choices modulate genetic risk? These gene–environment interactions have previously been difficult to quantify, but increasingly the collection and study of genetic data have contributed new insights.

In a study entitled “Genetic Risk, Adherence to a Healthy Lifestyle, and Coronary Disease” published in this week’s issue of NEJM, investigators examined gene-environment interactions in three large prospective cohorts — the Atherosclerosis Risk in Communities (ARIC) study, the Women’s Genome Health Study (WGHS), and the Malmö Diet and Cancer Study (MDCS) — as well as the Bio-Image cross-sectional study. To determine genetic risk, the authors derived a polygenic risk score using single-nucleotide polymorphisms (SNPs) previously associated with CAD in genome-wide association studies, and divided the patients into quintiles from lowest to highest risk. They also collected data on four healthy lifestyle factors (no current smoking, no obesity, physical activity at least once weekly, and a healthy diet pattern) and divided patients into three lifestyle risk categories: favorable (3-4 factors), intermediate (2 factors), and unfavorable (0 or 1 factor).
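The two scores described above can be sketched in a few lines. This is an illustration only, not the investigators' code; the SNP identifiers and effect-size weights are hypothetical placeholders:

```python
# Illustrative sketch of the study's two risk measures; not the
# investigators' code. SNP names and weights below are hypothetical.

def polygenic_score(genotypes, weights):
    """Weighted sum of risk-allele counts (0, 1, or 2 per SNP)."""
    return sum(weights[snp] * count for snp, count in genotypes.items())

def lifestyle_category(no_smoking, no_obesity, weekly_exercise, healthy_diet):
    """Favorable (3-4 factors), intermediate (2), unfavorable (0 or 1)."""
    n = sum([no_smoking, no_obesity, weekly_exercise, healthy_diet])
    if n >= 3:
        return "favorable"
    return "intermediate" if n == 2 else "unfavorable"

weights = {"rs_hypothetical_1": 0.12, "rs_hypothetical_2": 0.08}
genotypes = {"rs_hypothetical_1": 2, "rs_hypothetical_2": 1}
print(round(polygenic_score(genotypes, weights), 2))  # 0.32
print(lifestyle_category(True, True, False, False))   # intermediate
```

In the study itself, the continuous polygenic score was then divided into quintiles from lowest to highest genetic risk.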

As expected, among more than 50,000 participants from the three prospective cohorts, the risk of CAD increased from the lowest to the highest genetic-risk quintile (hazard ratio, 1.91 among participants with high vs. low genetic risk). A family history of CAD was an imperfect surrogate for genetic risk. Likewise, the risk of coronary events was higher in those with an unfavorable lifestyle than in those with a favorable one.

In this study, the investigators set out to understand not only the role of genes and environment on CAD risk, but also the interaction between genetic and lifestyle risk. To this end, they assessed the effect of lifestyle on the risk of CAD within each category of genetic risk and found that adherence to a favorable lifestyle was beneficial across all genetic risk groups: The relative risk associated with a favorable lifestyle (versus an unfavorable lifestyle) was 45% lower in patients with low genetic risk, 47% lower in those at intermediate genetic risk, and 46% lower in those at high genetic risk. Similarly, they found that the benefit of having a low genetic risk of CAD was offset by an unfavorable lifestyle. The full results are represented visually in the paper, as in the bar graph above. Similar results were found in the cross-sectional Bio-Image study using coronary artery calcification, rather than clinical outcomes, as a marker of subclinical coronary burden.

This study provides the most precise estimates to date of the relative contributions of genetics and lifestyle to the risk of CAD. In a large cohort with robust genetic data, genetic risk and lifestyle behaviors were each independently associated with CAD risk, but a favorable or unfavorable lifestyle appeared to modulate genetic risk for better or worse.

We do not routinely perform genetic testing on patients to calculate genetic risk scores. Therefore, our ability to use these specific risk assessments for counseling is limited. However, this study reinforces the beneficial effect of lifestyle modification on CAD risk, regardless of one's genetic load. Public health efforts and individual patient counseling should continue to stress the importance of a healthy lifestyle, including smoking cessation, weight reduction, physical activity, and a healthy diet, in reducing the risk of CAD.

Stents or Bypass Surgery for Left Main Coronary Artery Disease

Posted by • December 8th, 2016



EXCEL (Evaluation of XIENCE versus Coronary Artery Bypass Surgery for Effectiveness of Left Main Revascularization) was an international, open-label, multicenter randomized trial that compared everolimus-eluting stents with coronary-artery bypass grafting (CABG) in patients with left main coronary artery disease. A new Original Article reports that, at 3 years, percutaneous coronary intervention (PCI) was noninferior to CABG with respect to the rate of death, stroke, or myocardial infarction.

Clinical Pearl

• How are patients with obstructive left main coronary artery disease usually treated?

Left main coronary artery disease is associated with high morbidity and mortality owing to the large amount of myocardium at risk. European and U.S. guidelines recommend that most patients with left main coronary artery disease undergo CABG.

Clinical Pearl

• In what subgroup of patients with left main coronary artery disease might percutaneous coronary intervention (PCI) be an acceptable alternative to CABG?

Randomized trials have suggested that PCI with drug-eluting stents might be an acceptable alternative for selected patients with left main coronary disease. Specifically, in the subgroup of patients with left main coronary disease in the Synergy between PCI with Taxus and Cardiac Surgery (SYNTAX) trial, the rate of a composite of death, stroke, myocardial infarction, or unplanned revascularization at 5 years was similar among patients treated with paclitaxel-eluting stents and those treated with CABG. However, the outcomes of PCI were acceptable only in the patients with coronary artery disease of low or intermediate anatomical complexity, a hypothesis-generating subgroup observation that motivated the EXCEL trial.

Morning Report Questions

Q: Is PCI noninferior to CABG for left main coronary artery disease of low or intermediate anatomical complexity?

A: In the EXCEL trial involving patients with left main coronary artery disease and low or intermediate SYNTAX scores, PCI with everolimus-eluting stents was noninferior to CABG with respect to the primary composite end point of death, stroke, or myocardial infarction at 3 years. The primary composite end-point event of death, stroke, or myocardial infarction at 3 years occurred in 15.4% of the patients in the PCI group and in 14.7% of the patients in the CABG group (difference, 0.7 percentage points; upper 97.5% confidence limit, 4.0 percentage points; P=0.02 for noninferiority; hazard ratio, 1.00; 95% confidence interval [CI], 0.79 to 1.26; P=0.98 for superiority). The relative treatment effect for the primary end point was consistent across prespecified subgroups, including the subgroup defined according to the presence versus absence of diabetes.
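The noninferiority comparison reported above can be made concrete with a small sketch. This is an illustration, not trial code; the 4.2-percentage-point margin reflects my reading of the trial's published design and should be verified against the paper:

```python
# Sketch of the noninferiority logic described above; not trial code.
# The margin is an assumption for illustration (EXCEL's prespecified
# margin was, to my knowledge, 4.2 percentage points; verify this).

def noninferior(upper_confidence_limit, margin):
    """PCI is noninferior to CABG if the upper confidence limit of the
    risk difference (PCI minus CABG) stays below the margin."""
    return upper_confidence_limit < margin

# Upper 97.5% confidence limit of 4.0 points against a 4.2-point margin:
print(noninferior(4.0, margin=4.2))  # True: the criterion is met
```

Note that noninferiority is a one-sided claim about the upper confidence limit; the separate P=0.98 for superiority reflects the near-identical point estimates (hazard ratio, 1.00).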



Q: What changes in practice since the time of the SYNTAX trial might improve outcomes with PCI?

A: Since the time that the SYNTAX trial was conducted, changes in practice have occurred that would be expected to improve outcomes with PCI. In the EXCEL trial, the authors used everolimus-eluting stents almost exclusively; these stents are associated with a low rate of stent thrombosis. In addition, intravascular ultrasonographic imaging guidance was used in nearly 80% of the patients in the PCI group in the EXCEL trial, a practice that has been associated with higher event-free survival after left main coronary-artery stenting.

A Woman with Leukocytosis

Posted by • December 8th, 2016



Primary causes of neutrophilia encompass benign, congenital, and familial syndromes. Another category of primary neutrophilias includes those associated with clonal bone marrow diseases. An 86-year-old woman was seen at the hospital because of fatigue, night sweats, leukocytosis, and splenomegaly. Review of the peripheral-blood smear revealed neutrophilia without dysplastic features, immature forms, or monocytosis. A diagnostic procedure was performed in a new Case Record.

Clinical Pearl

• Is chronic neutrophilic leukemia (CNL) common?

CNL, which was first described approximately a century ago, is rare, with only a few hundred cases reported in the literature. The diagnosis is often made incidentally, but constitutional symptoms such as sweats, fatigue, and weight loss, as well as abdominal symptoms related to splenomegaly, can be present.

Clinical Pearl

• What are the World Health Organization (WHO) diagnostic criteria for CNL?

The diagnosis of CNL, a condition that was first recognized as a distinct entity in the 2001 WHO classification guidelines, requires the presence of unexplained peripheral-blood leukocytosis (≥25×10⁹ white cells per liter), with neutrophils and bands comprising at least 80% of white cells, immature granulocytes comprising less than 10% of white cells, myeloblasts comprising less than 1% of white cells, an absolute monocyte count of less than 1×10⁹ cells per liter, and absence of granulocytic dysplasia; a hypercellular bone marrow with increased granulocytic forms, less than 5% myeloblasts, and normal neutrophil maturation; splenomegaly; exclusion of BCR-ABL1, PDGFRA, PDGFRB, and FGFR1 genetic abnormalities; and exclusion of the diagnoses of polycythemia vera, primary myelofibrosis, and essential thrombocythemia.
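For readers who prefer a checklist, the peripheral-blood and clinical thresholds above can be encoded directly. This is an illustrative sketch with field names of my own; the bone marrow criteria in the text are omitted for brevity:

```python
# Illustrative encoding of the 2001 WHO thresholds listed above; field
# names are my own, and counts are in units of 10^9 per liter. The bone
# marrow criteria are omitted for brevity.

def meets_cnl_criteria(wbc, neutrophils_bands_pct, immature_gran_pct,
                       myeloblast_pct, monocytes, splenomegaly,
                       genetic_mimics_excluded, other_mpn_excluded):
    return (wbc >= 25                      # leukocytosis >= 25 x 10^9/L
            and neutrophils_bands_pct >= 80
            and immature_gran_pct < 10
            and myeloblast_pct < 1
            and monocytes < 1              # absolute count < 1 x 10^9/L
            and splenomegaly
            and genetic_mimics_excluded    # BCR-ABL1, PDGFRA/B, FGFR1
            and other_mpn_excluded)        # PV, PMF, ET excluded

# A profile meeting every encoded criterion:
print(meets_cnl_criteria(40, 85, 5, 0.5, 0.4, True, True, True))  # True
```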



Morning Report Questions

Q: What mutations are associated with CNL?

A: In the past several years, it has been shown that up to 90% of patients with CNL harbor a CSF3R (colony-stimulating factor 3 receptor) mutation; the presence of this mutation has been added to the revised 2016 WHO list of diagnostic criteria for CNL. The genetic profile of CNL may include other abnormalities, such as SETBP1, ASXL1, and TET2 mutations. Whether these abnormalities affect the pathogenesis, evolution, or prognosis of CNL is currently not clear. One study suggested that concurrent ASXL1 mutations may be prognostically detrimental.

Q: How is CNL treated, and how does the location of a CSF3R mutation influence the choice of therapy?

A: The most effective therapeutic approach to CNL has not been well defined. Oral agents, including hydroxyurea and busulfan, have been used to control leukocytosis and splenomegaly and improve quality of life. CSF3R mutations occur at two distinct regions: the membrane proximal region and the cytoplasmic tail, which can be truncated by nonsense or frameshift mutations. Mutations occurring at the former region result in dysregulation of JAK family kinases, whereas mutations occurring at the latter region result in dysregulation of SRC family–TNK2 kinases. Membrane proximal mutations may confer sensitivity to JAK kinase inhibitors such as ruxolitinib, whereas truncation mutations confer sensitivity to the multikinase inhibitor dasatinib but not to JAK kinase inhibitors.

Tranexamic Acid in Patients Undergoing Coronary-Artery Surgery

Posted by • December 7th, 2016



On the first day of my second year of residency, I showed up to the cardiac surgery intensive care unit (CSICU) terrified of what awaited me. It wasn't uncommon for patients recovering from cardiac surgery to be on three different vasopressors while intubated, with four chest tubes and pacing wires. Even after a year of general surgery residency, the CSICU was a bit of a foreign land to me. One thing I learned quickly, though, was that bleeding after cardiac surgery was relatively common.

Many patients required blood transfusions and occasionally had to return to the operating room because of ongoing bleeding. In an effort to reduce postoperative bleeding, transfusion, and reoperation, most cardiac surgery units administer antifibrinolytic agents such as tranexamic acid, a lysine analogue. Although studies have shown that tranexamic acid can reduce blood loss and transfusion after cardiac surgery, it also has a prothrombotic effect, and the risk of myocardial infarction, stroke, or other complications is unclear. Furthermore, some studies have shown an increased risk of seizures with tranexamic acid, potentially due to cerebral infarction.

A recent multicenter, double-blind, randomized, controlled trial by Myles et al. published in NEJM investigates the risks and benefits of intraoperative tranexamic acid infusion in cardiac surgery. In the trial's 2-by-2 factorial design, patients at increased surgical risk who were undergoing coronary-artery surgery (either on- or off-pump, with or without concomitant valve procedures) were randomized to receive aspirin or placebo (results reported in a separate study) and tranexamic acid or placebo. The primary outcome was a composite of death and thrombotic complications (nonfatal myocardial infarction, stroke, pulmonary embolism, renal failure, or bowel infarction) within 30 days after surgery. Secondary outcomes included the individual components of the composite primary outcome as well as reoperation due to major hemorrhage or cardiac tamponade, transfusion, and seizures (an outcome that was added later in the study and was based on clinical diagnosis).

Among the 2311 patients randomized to receive tranexamic acid and the 2320 patients randomized to receive placebo, baseline characteristics and comorbidities were evenly distributed between groups. The composite primary outcome was reported in 17% of patients in the tranexamic acid group and 18% of patients in the placebo group (relative risk, 0.92; 95% confidence interval [CI], 0.81 to 1.05; P=0.22). This result was not affected by whether or not patients received aspirin. Rates of the individual components of the composite outcome were similar between groups.

Tranexamic acid was associated with lower rates of bleeding, blood loss, number of patients transfused, number of blood products transfused (4331 vs. 7994, P<0.001), and major hemorrhage or cardiac tamponade requiring reoperation (1.4% vs. 2.8%, P=0.001). However, postoperative seizures were more common in the tranexamic acid group (15 vs. 2 patients; relative risk, 7.62; 95% CI, 1.77 to 68.71; P=0.002 by Fisher's exact test). In a post hoc analysis, seizure was associated with poorer outcomes, such as stroke (relative risk, 21.88; 95% CI, 10.06 to 47.58; P<0.001) and death (relative risk, 9.52; 95% CI, 2.53 to 35.90; P=0.02). Several years into the study, the dose of tranexamic acid was halved after other studies showed a possible dose-related risk of seizure. The dose reduction did not affect the risk of seizure (although the study was underpowered for this analysis).

Overall, tranexamic acid appears to reduce bleeding complications after cardiac surgery, most impressively by reducing the number of blood products transfused by nearly half. In addition, tranexamic acid did not increase the risk of death, myocardial infarction, or stroke. However, this study corroborates data linking tranexamic acid to a higher risk of seizure.

How will this study change current practice in cardiac surgery? Will cardiac surgery units that do not currently use tranexamic acid change their practice? NEJM Deputy Editor John Jarcho comments, “Surgeons really don’t like bleeding, for good reasons. I think they will see these findings as a good reason to use antifibrinolytic therapy, if they weren’t using it previously.”

Resident Burnout

Posted by • December 1st, 2016

This podcast is the second in a series that reflects medicine’s most pressing issues through the eyes of residents. “The House” provides residents with a forum to share their stories from the bedside, where they are learning far more than the lessons of clinical medicine.

Lisa Rosenbaum is a cardiologist at Brigham and Women’s Hospital in Boston, an instructor at Harvard Medical School, and National Correspondent for the New England Journal of Medicine.
Dan Weisberg is a resident in Internal Medicine and Primary Care at Brigham and Women’s Hospital and Harvard Medical School.

Postpartum Depression

Posted by • December 1st, 2016



Untreated postpartum depression is common and affects the health of the woman, the infant, and the family. Pregnant women should receive information about the signs and symptoms of postpartum depression and its effects. Treatment depends on the severity of symptoms and the level of functional impairment and can include social support, psychological therapy, and pharmacotherapy (generally a selective serotonin reuptake inhibitor [SSRI] as first-line treatment). A new Clinical Practice article explains further.

Clinical Pearl

• What are some of the risk factors for postpartum depression?

The strongest risk factor for postpartum depression is a history of mood and anxiety problems and, in particular, untreated depression and anxiety during pregnancy. The rapid decline in the level of reproductive hormones after childbirth probably contributes to the development of depression in susceptible women, although the specific pathogenesis of postpartum depression is unknown; in addition to hormonal changes, proposed contributors include genetic factors and social factors including low social support, marital difficulties, violence involving the intimate partner, previous abuse, and negative life events.

Clinical Pearl

• What is the natural course of postpartum depression?

The natural course of postpartum depression is variable. Although it may resolve spontaneously within weeks after its onset, approximately 20% of women with postpartum depression still have depression beyond the first year after delivery, and 13% after 2 years; approximately 40% of women will have a relapse either during subsequent pregnancies or on other occasions unrelated to pregnancy.

Morning Report Questions

Q: How would you evaluate a woman for possible postpartum depression? 

A: The best method for detecting postpartum depression remains controversial. Administration of the 10-item Edinburgh Postnatal Depression Scale (EPDS) is recommended by both the American College of Obstetricians and Gynecologists and the American Academy of Pediatrics as a method of identifying possible postpartum depression. The U.S. Agency for Healthcare Research and Quality suggests that serial testing, beginning with the use of a sensitive, two-question screening tool relating to feelings of depression or hopelessness and of a lack of interest or pleasure in activities, followed by the use of a second, more specific instrument for women who give a positive answer to either screening question, may be a reasonable strategy to reduce both false positive and false negative results. The evaluation of women with possible postpartum depression requires careful history taking to ascertain the diagnosis, identify coexisting psychiatric disorders, and manage contributing medical and psychosocial issues. During the process of history taking, special attention should be given to a personal or family history of depression, postpartum psychosis, or bipolar disorder, especially if depression or bipolar disorder had been associated with pregnancy. Coexisting anxiety and obsessive–compulsive symptoms are common among women with postpartum depression and should be investigated further. Women should be asked about social support as well as substance abuse and violence involving an intimate partner. An examination to assess mental status should be conducted, as well as a physical examination if symptoms suggest a medical cause. Laboratory investigations should be performed as indicated; measurement of hemoglobin and thyroid-stimulating hormone levels is generally recommended.
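The serial-testing strategy can be sketched as a simple two-stage flow. This is an illustration only, not an official instrument, and the EPDS cutoff of 13 is a commonly cited threshold that I am assuming here rather than a value taken from this article:

```python
# Sketch of the serial-screening strategy described above; not an
# official screening instrument. The EPDS cutoff of 13 is a commonly
# cited threshold and is an assumption, not taken from this article.

def serial_screen(feels_down_or_hopeless, lost_interest_or_pleasure,
                  epds_score=None, cutoff=13):
    """Two-question pre-screen, then EPDS only for positive answers."""
    if not (feels_down_or_hopeless or lost_interest_or_pleasure):
        return "screen negative"
    if epds_score is None:
        return "administer EPDS"
    return "possible depression" if epds_score >= cutoff else "screen negative"

print(serial_screen(False, False))     # screen negative
print(serial_screen(True, False))      # administer EPDS
print(serial_screen(True, False, 15))  # possible depression
```

The two-stage design mirrors the rationale in the text: the sensitive pre-screen limits false negatives, and the more specific second instrument limits false positives.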



Q: What are some of the antidepressants that are used in women with postpartum depression who are breast-feeding?

A: Although data on long-term child development are limited, in most cases breast-feeding need not be discouraged among women who are taking an antidepressant medication. Selective serotonin reuptake inhibitors (SSRIs) pass into breast milk at a dose that is less than 10% of the maternal dose, and drugs in this class are generally considered to be compatible with breast-feeding of healthy, full-term infants. Despite some variability among SSRIs in their passage into breast milk, switching antidepressants because of lactation is not usually recommended for women who had previously been receiving effective treatment with a given agent, owing to the risk of a relapse of depression. Serotonin–norepinephrine reuptake inhibitors (SNRIs) and mirtazapine are commonly used either when SSRIs are ineffective or when a woman has previously had a positive response to these agents, since available data also suggest minimal passage into breast milk. Data on the safety of these agents remain limited, however, since fewer than 50 cases have been reported in which breast-feeding women were taking either an SNRI or mirtazapine.

Niraparib Maintenance Therapy in Ovarian Cancer

Posted by • December 1st, 2016



Ovarian cancer is a leading cause of death from gynecologic cancers worldwide. A randomized, placebo-controlled, phase 3 trial conducted by Mirza et al. evaluated the efficacy and safety of niraparib versus placebo as maintenance treatment in a broad population of patients with platinum-sensitive, recurrent ovarian cancer. Among these patients, the use of niraparib, a PARP inhibitor, was associated with a significantly longer duration of progression-free survival than placebo, with moderate bone marrow toxicity. A new Original Article summarizes.

Clinical Pearl

• What is the usual pattern of response to platinum and taxane treatment in patients with advanced ovarian cancer?

Despite a high initial response rate to platinum and taxane treatment in patients with advanced ovarian cancer, the effectiveness of these treatments diminishes over time, and most patients have a relapse. Platinum retreatment is used in patients with presumed platinum sensitivity, but with diminishing effectiveness and a cumulative increase in toxicity.

Clinical Pearl

• To what class of drugs does niraparib belong?

Niraparib is a highly selective inhibitor of poly (adenosine diphosphate [ADP]–ribose) polymerase (PARP) 1/2, nuclear proteins that detect DNA damage and promote its repair. Clinical studies have evaluated PARP inhibitors in patients with recurrent ovarian cancer, including those with germline BRCA mutations, platinum-sensitive disease, or both.

Morning Report Questions

Q: Does niraparib maintenance therapy prolong progression-free survival in patients with platinum-sensitive, recurrent ovarian cancer?

A: The trial by Mirza et al. enrolled two independent cohorts on the basis of the presence or absence of a germline BRCA mutation (gBRCA cohort and non-gBRCA cohort). Before the database lock, tumor testing of archived tissue samples was performed with the use of a central laboratory DNA-based test to define the population of patients in the non-gBRCA cohort in whom tumors were found to have homologous recombination deficiency (HRD). Such patients were included in the non-gBRCA HRD-positive subgroup. (Decreased rates of homologous recombination have been found to cause inefficient DNA repair.) The three predefined primary efficacy populations were the gBRCA cohort, the HRD-positive subgroup of the non-gBRCA cohort, and the overall non-gBRCA cohort. The authors found that niraparib had a positive effect among patients with platinum-sensitive recurrent ovarian cancer. The duration of progression-free survival in patients with platinum-sensitive, recurrent ovarian cancer was significantly longer in the niraparib group than in the placebo group, regardless of the presence or absence of gBRCA mutations or HRD status.


Click to enlarge

Q: What adverse events were associated with niraparib in the trial by Mirza et al.?

A: In the trial by Mirza et al., grade 3 or 4 hematologic events that were observed in at least 10% of patients receiving niraparib were thrombocytopenia (in 33.8%), anemia (in 25.3%), and neutropenia (in 19.6%). Treatment discontinuations because of these events were infrequent. Most of the hematologic laboratory abnormalities occurred within the first three treatment cycles; after dose adjustment on the basis of an individual adverse-event profile, the incidence of grade 3 or 4 thrombocytopenia, neutropenia, or fatigue was infrequent beyond cycle 3.

Reduction in HIV Transmission with Dapivirine Vaginal Ring – Is it Enough?

Posted by • November 30th, 2016


Click to enlarge

While you are working for one month in a health center in South Africa, a 19-year-old woman comes to the clinic and asks for your advice: She is HIV-negative and is unsure of her partner’s HIV status. Is there anything she can do to protect herself from acquiring HIV?

Preexposure prophylaxis (PrEP) with oral tenofovir–emtricitabine is an emerging approach to preventing HIV transmission, with some data in men who have sex with men suggesting effectiveness. However, the efficacy of PrEP in women has been inconsistent, especially in young women in sub-Saharan Africa. Studies of vaginal tenofovir gel and oral tenofovir–emtricitabine in women have failed to consistently demonstrate a reduction in the risk of HIV acquisition. The limited efficacy was attributed to poor adherence and possibly to lower concentrations of tenofovir in the female genital tract.

Two studies appearing in this week’s NEJM highlight a new approach to prophylaxis against HIV. Both the Ring study and the Aspire study were randomized, double-blind, placebo-controlled trials conducted in sub-Saharan Africa. These trials examined the effectiveness of a monthly self-inserted vaginal ring that contains sustained-release dapivirine, a non-nucleoside reverse-transcriptase inhibitor, in healthy, nonpregnant, HIV-uninfected women aged 18 to 45 years who were engaged in regular sexual activity. All participants received treatment for sexually transmitted infections and a standard HIV prevention package (including regular HIV testing, counseling, and condoms).

The primary outcome in both studies was new HIV infection. In both studies, the HIV incidence rate was lower in the dapivirine group than in the placebo group. In the Ring study, incidence rates were 4.1 vs. 6.1 per 100 person-years (relative reduction, 31%), and in the Aspire study, the rates were 3.3 vs. 4.5 per 100 person-years (relative reduction, 27%). In the Aspire study, the ring did not significantly reduce HIV incidence in women ≤21 years old, but it was associated with a 61% relative reduction in women >21 years old. Adherence to treatment was good in both studies, as assessed by prespecified plasma dapivirine concentrations and residual dapivirine levels in returned rings.
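The relative reductions above follow directly from the reported incidence rates. As a minimal sketch (the published figures come from the trials’ statistical models, so the crude rate ratios below only approximate them):

```python
def relative_reduction(rate_treated: float, rate_placebo: float) -> float:
    """Return 1 - (incidence rate ratio), as a fraction.

    Rates are per 100 person-years; the units cancel in the ratio.
    """
    return 1.0 - rate_treated / rate_placebo

# Ring study: 4.1 vs. 6.1 infections per 100 person-years
ring = relative_reduction(4.1, 6.1)
# Aspire study: 3.3 vs. 4.5 infections per 100 person-years
aspire = relative_reduction(3.3, 4.5)

# Crude estimates, close to the published 31% and 27%
print(f"Ring:   {ring:.0%}")
print(f"Aspire: {aspire:.0%}")
```

Because these are crude ratios of incidence rates rather than model-based estimates, small discrepancies from the published percentages are expected.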

In prior studies, poor adherence to PrEP therapies was hypothesized to be the primary reason for treatment failure. The finding from these two studies that the dapivirine ring reduced HIV acquisition by only approximately 30% suggests that the ring itself had limited efficacy or that the adherence thresholds selected by the investigators were insufficient to identify poor adherence. “Preventing HIV transmission is an important individual and public health goal and new approaches are needed. What to do with this modest effect is a challenging question for the field,” says Dr. Lindsey Baden, Deputy Editor at the NEJM. Additionally, the reduced efficacy in younger women complicates the assessment of the ring’s benefit. In an accompanying editorial, Dr. Adaora A. Adimora from the Institute for Global Health and Infectious Diseases at the University of North Carolina School of Medicine concludes, “The past few years have yielded substantial progress in strategies for the prevention of HIV infection. Nevertheless, considerable work will be required to achieve safe, effective, affordable HIV prevention for all women at risk.”