“Control is the thread that runs through all this. With every new advance, there are more decision points. When you have decisions, you have to take responsibility, and we’re always afraid of making a mistake. This is really a debate amongst ourselves over how much control we want to have.”
R. Alta Charo
In August 2017, a group of biologists at Oregon Health and Science University in Portland published a paper in the peer-reviewed journal Nature that shook the global science and biotech communities. In it, the biologists, led by Shoukhrat Mitalipov, reported that they had carried out the first-ever experiment in the U.S. to edit the genes of a human embryo. They did so in an attempt to use the cutting-edge genetic engineering technology known as CRISPR to correct a mutation associated with the heritable condition hypertrophic cardiomyopathy, a common cause of heart failure in young people.
Whether they succeeded is a matter of debate. Mitalipov reported that his technique fixed the mutation in 70 percent of embryos used in the trial, which were made in a lab using sperm from a man known to carry the mutation. In subsequent papers, other scientists pushed back, arguing that the study didn’t adequately investigate the possibility that the technique could have caused large unintended “off-target” alterations in the genome, with the potential to wreak untold havoc on the hypothetical future human in Mitalipov’s petri dish.
Either way, the study, which followed three earlier, smaller-scale Chinese forays into human embryo editing, broke important new ground in how far scientists are willing and able to go in tinkering with the genome of an unborn human. It was the closest we’ve gotten yet to the highly controversial possibility of customized “designer babies.”
To be clear: Neither a dystopian hellscape in which meticulously engineered superhumans rule over the teeming unmodified hordes, nor even the more credible prospect of one-percenters purchasing the ideal hair and eye color for their little future Harvard neurosurgeons, is visible on the horizon. The embryos in Mitalipov’s study were destroyed after a few days, as required by federal law.
If anything, Mitalipov’s research underscored just how far science has yet to go in order to exercise control over a single gene—let alone the vast expanse of the human genome, where even simple physical traits like eye color are often controlled by many interacting genes. For example, it seemed that when the experiment worked, it was not for the right reasons: CRISPR, a bacterial system that directs a programmable DNA-cutting enzyme to a precise spot in the genome, was meant to find and snip out the damaged DNA on the paternal copy of the embryo’s genome, then insert an artificial replacement. Instead, it replaced the damaged genes with a replica of the undamaged maternal copy. The final product wasn’t so much “designer” as borrowed from Mom’s closet.
Still, the experiment would have been impossible just a few years ago. In the last half decade, human genetic engineering has become one of the most booming pursuits in science, thanks to a series of high-profile breakthroughs that allowed scientists to use CRISPR, and a few other related new technologies, to study and edit DNA in a manner that’s simpler, faster, cheaper, and more effective than previously possible.
Cutting-edge human genetic engineering opens a highly promising avenue for research on a vast range of illnesses, including diabetes, sickle-cell anemia, Alzheimer’s, autism, muscular dystrophy, hemophilia, cancer, and even H.I.V. It offers the possibility that a person living with such a condition could have their DNA altered to alleviate it—and the much more controversial possibility that a human embryo could be inoculated or even enhanced in some way before birth, a change that would be passed to that person’s offspring in perpetuity.
The technology, especially heritable edits to embryos, raises a number of ethical red flags. Concerned parties include religious groups, politicians, physicians, legal experts, people who suffer from genetic conditions, and even, or perhaps especially, the scientists responsible for developing the technology in the first place. Who are we to alter God’s, or nature’s, design? The same genes that today cause sickle-cell anemia once conferred malaria resistance to our ancestors in Africa: How can we take the risk of making a change when its consequences may be unknowable for generations? Can we deprive an unborn person of the right to choose whether they want their DNA tweaked? Where is the line between treating illness and artificially enhancing our physiology? How might this technology exacerbate existing socio-economic inequalities and injustices?
Just a few years ago, it was easy to kick these questions down the road because the science was still in the realm of science fiction. A panoply of laws and agreements around the world over the last few decades have held that heritable human gene editing should be off the table until we know more about it. But now, for the first time, the science is suddenly real. And it’s forcing a new public debate that promises to challenge the very notion of what it means to be human.
“Control is the thread that runs through all this,” says R. Alta Charo, a bioethicist at the University of Wisconsin Law School and a leading expert on the ethics of human genetic engineering. “With every new advance, there are more decision points. When you have decisions, you have to take responsibility, and we’re always afraid of making a mistake. This is really a debate amongst ourselves over how much control we want to have.”
Evolution and Revolution
DNA is the chemical code inside each cell of all living things that dictates every aspect of how an organism looks and functions. It’s written on a pair of twisted strands—the double helix—with an alphabet of four chemical units, called nucleotides, represented by the letters A, T, C, and G. The human genome is about three billion nucleotide pairs long.
This massive code is broken into chunks, known as genes. In humans, genes are around 10,000 nucleotide pairs long on average. Each contains some unit of specific information, like a sentence within a book. As in a James Joyce novel, many sentences make sense only in the context of the whole book. And while many are crucial to the overall meaning, some may be meaningless red herrings.
As cells divide, DNA makes copies of itself, and in that process, typos can occur—accidental changes, additions, or deletions of letters that can garble the meaning of a sentence, potentially throwing off the whole book. Since humanity’s earliest days, millennia before American biologist James Watson and English physicist Francis Crick uncovered the basic architecture of DNA in 1953, we have sought ways to fix these aberrations—or tweak them to our benefit. The selective breeding of crops and animals, whether we understood the science or not, is simply a tedious and imprecise approach to genetic modification. But a few decades ago, scientists began to invent far more precise methods.
In the early 1970s, scientists developed techniques for snipping a strand of DNA from one organism and transplanting it into another, first in an experiment on bacteria and later on mice. Over the next two decades, the genetic engineering frontier spread to pharmaceutical drugs and crops, especially following a 1980 Supreme Court decision that allowed for the patenting of genetically engineered organisms and the U.S. Food and Drug Administration’s 1994 approval of the “Flavr Savr” tomato, the first “GMO” food.
This burgeoning technology spooked a lot of people. So, in 1975, a group of more than 100 leading scientists and ethicists gathered for a conference at Asilomar State Beach in California to set some ground rules. It was the first time the ethics of genetic engineering were formally addressed. The attendees produced a set of guidelines that called for researchers to scrupulously prevent modified DNA from escaping their labs, to proceed with great caution, and to actively engage the general public with their work. “Designer babies,” which at the time seemed light-years away, were not on the agenda.
Meanwhile, on a parallel track, reproductive medicine was making gains that raised similar ethical quandaries about our capacity to play God. In 1974, four years before the first successful in-vitro fertilization, the same James Watson mentioned above infamously warned Congress that IVF would cause “all hell [to] break loose, politically and morally, all over the world.” But when that didn’t happen, and as the science became more familiar, ethical opposition generally thawed.
“The technologies we use and the way in which we think about them kind of co-evolve,” says Peter Mills, assistant director of the Nuffield Council on Bioethics in London, a leading ethics watchdog. “In the ’70s, people were horrified by IVF and uncertain about where it might lead. Working out the science has led to an evolution of social norms.”
The first high-profile foray into human gene editing was fraught with controversy. In 1981, a highly respected research physician at the University of California Los Angeles named Martin Cline was disciplined by the National Institutes of Health (NIH) and had several major grants revoked after it was revealed that he had given two patients carrying a hereditary blood disease an infusion of bone marrow cells with altered DNA without the approval of the university’s ethics board.
However, similar to the progress of reproductive medicine, gains in genetic science contributed to a gradual relaxation of ethical opposition. By 1990, in an experiment similar to Cline’s but with better oversight, researchers at the National Cancer Institute and the NIH demonstrated how “gene therapy” could be used to treat melanoma and a rare immunodeficiency disease. The idea of gene therapy was not to alter a person’s DNA, but to augment it by removing a selection of a person’s cells, treating them with artificial, curative DNA, and inserting them back into the patient. To return to the analogy of a book, it’s a bit like adding footnotes to clarify confusing passages. For much of the ’90s, gene therapy was the cutting edge of human genetic research. It was also the field in which many of the key players who would go on to develop CRISPR initially cut their teeth.
But the research hit a wall in 1999 when 18-year-old Jesse Gelsinger, who suffered from a rare metabolic disorder, died from a massive immune reaction and multiple organ failure after he was treated with gene therapy in a University of Pennsylvania trial.
“That was a very sobering moment for everybody in the field,” MIT biochemist Feng Zhang recently recounted to The Atlantic.
Nevertheless, within the next several years, Zhang would be among the elite group of scientists to stumble across the foundations of a new technology that would go far beyond the limits of gene therapy—and open a Pandora’s box of new ethical challenges. According to Michael Specter’s 2015 story on gene editing for The New Yorker, in the mid-2000s, an offhand comment from a colleague inspired Zhang to read up on CRISPR, a then obscure cluster of DNA found in some species of bacteria that stores fragments of the DNA of invading viruses and uses them to guide an enzyme that dices up those viruses when they return—something like an immune system. He found a trickle of research indicating that it might be possible to tap CRISPR’s innate ability to navigate a genome and use it as a delivery vehicle and installation crew for genetic treatments.
In 2012, a team led by University of California Berkeley biochemist Jennifer Doudna and French microbiologist Emmanuelle Charpentier became the first to show that CRISPR could be used to edit purified DNA. Zhang and his colleagues followed up a year later with a study showing more specifically that CRISPR could work in human cells. From there, it was off to the races.
Since Doudna’s and Zhang’s papers came out, the number of peer-reviewed articles published per year on the subject of CRISPR has grown more than 15-fold, according to an analysis by University of Washington biochemist Ian Haydon. The technology is being used to research a wide range of human diseases; to detect Zika virus at low cost; to improve production of rice, pork, mushrooms, bananas, and other foods; to produce malaria-resistant mosquitoes; and for many other applications.
In September 2018, researchers from an American biotech company announced the preliminary results of the first-ever clinical trial to edit the genes of a living person inside their actual body, using a CRISPR-like technique called zinc-finger nucleases. The participants were a 45-year-old man from Phoenix, Arizona, named Brian Madeux, along with four other unnamed patients afflicted with Hunter syndrome, a rare genetic disorder that causes abnormalities in organ development and is often fatal by the time a carrier reaches their teens. The results of the trial were encouraging: an average 50 percent reduction in the complex sugars, excreted in urine, that are an indicator of the disease (although it’s too early to tell what exactly caused the reductions or whether they will translate to an alleviation of symptoms).
All of this insight into the potential benefits of CRISPR has also revealed some important risks, including a recent finding that cells with edited DNA could turn cancerous and evidence that between one and five percent of CRISPR edits happen in the wrong place on the genome, with as-yet-unknown consequences.
But in most of these experiments, the risk is not overwhelming. Take Brian Madeux: In the worst possible case, if the treatment somehow proved fatal, the fallout would be contained to him and his peers, who participated voluntarily, with full knowledge of the risks. “I’m old and having Hunter’s has done a lot of damage to my body,” Madeux told the Associated Press. “I’m actually pretty lucky I’ve lived this long.”
Which brings us back to Mitalipov’s embryo study in Oregon. The risks of edits to DNA in reproductive cells, what scientists call “germline” editing, differ fundamentally from “somatic” edits inside a living organism. If a person were allowed to be born from edited germline cells, the changes would be permanent and passed to all that person’s descendants. Germline editing presents a messy, entirely novel ethical problem without a clear solution. For now, and for the foreseeable future, a clinical procedure in the U.S. that sought to implant edited embryos in a woman to carry them through birth would require a human tissue-transfer authorization from the Food and Drug Administration, which is barred by law from even considering any clinical germline trials.
“No scientist has taken the next step, which is to implant [an embryo into a woman], and you wouldn’t find an ethics committee that would let them do so. There is still a line in the sand,” says I. Glenn Cohen, a bioethicist at Harvard Law School. “For some people, even the Mitalipov study crossed a red line.”
One of the first people to raise serious concerns about heritable DNA editing was none other than Jennifer Doudna. In her interview with Specter, Doudna recounted a harrowing nightmare in which a colleague invited her to explain her research to a powerful friend. The friend turned out to be Hitler, with a pig’s face, scribbling detailed notes.
“That dream has haunted me from that day,” she said. “Because suppose somebody like Hitler had access to this—we can only imagine the kind of horrible uses he could put it to.”
In early 2015, Doudna organized a small conference in Napa, California, with about a dozen of the country’s leading biochemists and bioethicists to discuss how human genetic engineering might be done ethically—a kind of updated Asilomar Conference for the CRISPR era that could grapple with familiar ethical questions in the context of new science.
“When things become technically feasible,” says University of Wisconsin’s Charo, who attended the conference, “you have to confront whether you’re willing to lose the benefit because of some principled objection. [CRISPR] raises a host of questions that have never been answered before, and the goal was to think about this now, before it becomes a major part of the social scene.”
The attendees were in agreement about the technology’s transformative potential in curing human disease and in other nonhuman applications. But, Charo says, “When it came to the germline stuff, it was clear that it wasn’t ready for primetime. There were too many uncertainties and the risks were way too high.”
The attendees produced an essay for the journal Science in April 2015 that “strongly discouraged…any attempts at germline genome modification for clinical application in humans.” That essay touched off a global ethical reckoning that is still unfolding today. As the science surges ahead, uncertainty and disagreements abound. Yet there’s wide agreement among ethicists that society needs to catch up to the science before the opportunity to press the brakes is lost.
“The time to address these moral questions is when they’re coming over the horizon,” Nuffield Council on Bioethics’s Mills says. “Not when they’re standing on the threshold and knocking on the door.”
“If we begin to look deeper at the flexibility of our genome, the concept that human rights are dependent on a static vision of the human genome begins to fall apart.”
R. Alta Charo
The Evolutionary Rat Race
In December 2015, a second, much larger conference convened in Washington, D.C., with participants from around the globe—what Charo calls “the first chance to have a mega meeting talking about the ethics and research and policy” of human genetic engineering. Again, the attendees leaned in favor of continuing somatic genome editing research but putting a hold on clinical trials of heritable edits.
Meanwhile, the Science essay had sparked the interest of National Academy of Sciences president Ralph Cicerone, an award-winning climatologist. Cicerone approved Charo to lead a group to “gather the best evidence about what’s real and what’s highly speculative” in human gene editing, to produce a document that would essentially form the U.S. scientific community’s official position on the ethics of human gene editing. That 300-page report was released in February 2017 (after Cicerone had been succeeded by former Science editor-in-chief Marcia McNutt).
The report was remarkable in that it concluded that clinical trials of heritable edits could at some point be allowed to proceed—“but only following much more research aimed at meeting existing risk/benefit standards for authorizing clinical trials and even then, only for compelling reasons and under strict oversight.” It was the first time a major scientific committee had made such a statement.
“We were the first to say that as a matter of principle, germline editing is not wrong,” Charo says.
That finding sparked backlash from some quarters. The Council for Responsible Genetics, a Cambridge, Massachusetts-based nonprofit chaired by the distinguished Tufts University bioethicist Sheldon Krimsky, published a column calling it “a radical and dangerous departure from the long-standing international consensus that interventions in the human germline should remain off limits.”
Still, the report identifies a number of key ethical challenges in need of much greater research and public consensus before clinical germline trials could go forward, several of which were endorsed by a subsequent editorial in Nature.
The first is the potential impact on future generations. Without a long-term, multigenerational experiment, it may be impossible to say for certain that germline editing is safe and free of side-effects that outweigh its benefits. It’s not clear how researchers could compel the great-grandchildren of a person born from an edited embryo to participate in a lifelong study of their health. It’s a bit of a catch-22: In order to study whether these experiments should proceed, you may have to carry one out. Moreover, those great-grandchildren may have been denied a basic human right to decide whether they want their “natural” genes meddled with.
“Once the embryo is transferred, there’s nothing you can do and the die is cast,” Mills says.
For many people and some religious groups, the issue is more fundamental: The genome is inviolable, the divine architecture of humanity. Meddling with it might undermine some unique quality that makes us human, an argument Charo calls “species integrity.” But the steady progress of genetic research makes that quality increasingly hard to pin down.
With or without technology, human DNA is constantly changing. Mutations occur every time a cell divides; genes are constantly bombarded with chemicals and radiation. Our genome contains significant chunks of the DNA of other species, including Neanderthals and ancient viruses. It evolves in each generation in response to the countless selective pressures that determine who has children with whom. Our bodies are filled with foreign microbes, many essential for our life.
“One of the really interesting things is that if we begin to look deeper at the flexibility of our genome, the concept that human rights are dependent on a static vision of the human genome begins to fall apart,” Charo says.
The report also raises issues of social injustice and inequality: How could heritable gene editing be used, intentionally or not, to exacerbate class, economic, or other divisions predicated on physical characteristics or health? Who would have access to this technology? Would people with genetic diseases be even more marginalized than they are today? Could heritable gene editing become a terrifying new avenue for racist eugenics, fulfilling Doudna’s nightmare? Could, as Harvard’s Cohen suggested in a 2015 essay, the falling barriers to genetic enhancement “create a kind of evolutionary rat race where people must enhance merely not to fall behind”?
Ultimately, the report concludes that these qualms “do not necessarily mean that society should forswear any human intervention at all.” But it does draw a line at the same issue raised by Cohen: “Enhancement.” There should be no clinical trials at this time, the report finds, “for purposes other than treatment or prevention of disease or disability.” In other words, the report essentially rules out the ethical permissibility of “designer babies.”
That view seems to accord with public opinion. The report contains a roundup of the results of nearly three dozen public opinion polls on human gene editing carried out by various scientists and media outfits over the last three decades. On nearly all questions related to using human gene editing to treat or prevent disease, a majority respond in favor. But on questions related to improving appearance and intelligence, public support plummets into a minority. In one 2016 Harvard poll, 26 percent of respondents said it should be legal to change the genes of unborn babies to reduce their risk of disease while only 11 percent said the same about changes affecting intelligence and appearance.
But beyond clear-cut enhancements like hair color and intelligence, the distinction between treatment and enhancement remains far from clear.
“Immunity to HIV, for example. Is that a treatment or an enhancement?” Cohen asks. “It would be strange to have the view that a treatment for HIV is permissible, but an enhancement to prevent it isn’t permissible.”
This debate took its latest step forward this summer, when the U.K.’s Nuffield Council on Bioethics issued its own report that, in contrast to the National Academies report, left the door to “designer babies” open by not explicitly opposing enhancement. According to Mills, who helped author the report, the council made a deliberate decision to exclude the word “enhancement” entirely.
“The terminology doesn’t help you distinguish what are good or bad moral reasons for bringing about people with certain characteristics,” he explains.
Instead, the report proposes two guiding principles: that any application of germline editing be done to “secure the welfare” of future people and “be consistent with social justice.” These are left somewhat vague by design, Mills says, so that they can be applied on a highly case-specific basis, in response to whatever future scientific opportunities arise. But Mills rejects the suggestion that the Nuffield Council is ready to forge ahead on “designer babies” today. If anything, he says, recent advances in cell biology have only underscored the extent to which the social sciences, not to mention public policy and the law, have much more work to do before a clinical germline trial could go forward—if it ever will.
“There hasn’t been anything like enough research,” he says. “We still don’t know that we’ll ever get to having a technique at the level where it could be used.”
If and when it does, Charo recommends looking to history for comfort. From IVF, to genetic disease screening, to Roe v. Wade, to the ill-advised “Nobel Prize Sperm Bank,” any time the vision of “designer babies,” or any undue human interference in our progeny, has reared its head, society has generally opted to defer to nature as much as possible, she says. Heritable gene editing is likely to follow that precedent.
“We’ve been so worried that human nature is so craven that we’ll use technology to design our children in a way that is unaccepting of variation,” she says. “Even though we see over and over that that is not what people have actually done.”
In any case, she notes, “We don’t even know how to define intelligence! We can’t hope to create it ourselves.”
Crossing the Line
There was a line in the sand. And then a young scientist in China named He Jiankui crossed it.
At the end of 2018, just after the original version of this article went to print, Dr. He announced that he had, for the first time ever, overseen the birth of twin girls born from embryos with modified DNA. These were the world’s first “designer babies,” equipped with genes edited in an attempt to make them resistant to HIV. The announcement sparked an immediate backlash from biologists around the world, for all the reasons you can read about in this story, and because it later became apparent that Dr. He may have been supported in his work by researchers at American universities, where so-called “germline editing” has never been allowed.
In March, a group of leading CRISPR scientists, including MIT’s Feng Zhang, co-authored an opinion column for the journal Nature in which they called for a global moratorium on clinical trials of heritable genome editing—essentially demanding that the line in the sand be legally enforced, at least until a broader agreement is reached on how to move forward ethically.
“Clinical germline editing should not proceed for any application without broad societal consensus on the appropriateness of altering a fundamental aspect of humanity for a particular purpose,” they wrote. “The introduction of genetic modifications into future generations could have permanent and possibly harmful effects on the species.”
Whether their call will be heeded remains unclear. The World Health Organization, the U.S. National Academy of Sciences, the Chinese Academy of Sciences, and other scientific authorities are scrambling to assemble international committees to weigh the future of human gene editing, in the hope of establishing the first global standards for governments and universities to follow. In any case, the future of “designer babies” is here, and the questions discussed above are more urgent than ever.