The not-so-young parents sat in the office of their socio-genetic consultant, an occupation that emerged in the late 2030s, with at least one practitioner in every affluent fertility clinic. They faced what had become a fairly typical choice: Twelve viable embryos had been created in their latest round of in vitro fertilization. Anxiously, they pored over the trait scores the clinic had provided. Eight of the 16-cell morulae were fairly easy to eliminate because they carried higher-than-average risks of cardiovascular problems, schizophrenia, or both. That left four potential babies from which to choose. One was going to be significantly shorter than the parents and his older sibling. Another was a girl, and since this was their second child, they wanted a boy to complement their darling Rita, now entering the terrible twos. Besides, this girl had a greater than one-in-four chance of being infertile. Because this was likely to be their last child, due to advancing age, they wanted to maximize the chances they would someday enjoy grandchildren.

That left two male embryos. These embryos scored almost identically on disease risks, height, and body mass index. Where they differed was in the realm of brain development. One scored a predicted IQ of 180 and the other a “mere” 150. A generation earlier, a 150 IQ would have been high enough to assure an economically secure life in a number of occupations. But with the advent of voluntary artificial selection, a score of 150 was merely above average. By the mid-2040s, it took a score of 170 or more to ensure your little one would grow up to become a knowledge leader.

At the same time, the merger of 23andMe—the largest genetics database in the world—and InterActiveCorp (owner of Tinder and OKCupid), and their subsequent integration with Facebook, meant that not only were embryos being selected for implantation based on their future abilities and deficits, but that people were also screening potential spouses based on genotype. Rather than just screening for non-smokers, why not screen for non-smokers who are genotypically likely to pass that trait on to one’s potential offspring?

But there was a catch. There was always a catch. The science of reprogenetics—self-chosen, self-directed eugenics—had come far over the years, but it still could not escape the reality of evolutionary tradeoffs, such as the increased likelihood of disease when one maximized a particular trait while ignoring the others. Or the social tradeoffs—the high-risk, high-reward economy for reprogenetic individuals, where a few IQ points could make all the difference between success and failure, or where stretching genetic potential to achieve those cognitive heights might lead to a collapse in non-cognitive skills, such as impulse control or empathy.

Against this backdrop, the embryo predicted to have the higher IQ also had an eight-fold greater chance of being severely myopic to the point of uncorrectable blindness—every parent’s worst nightmare. The fact that the genetic relationship between intelligence and focal length had been known for decades did not seem to figure in the mania for maximizing IQ.1 Nor the fact that the correlation worked through genes that controlled eye and brain size, leading to some very odd-looking, high-IQ kids.2 (And, of course, anecdotally, the correlation between glasses and IQ has been the stuff of jokes for as long as ground lenses have existed.)

Parents were lured by slick marketing campaigns that promised educational environments fine-tuned to a child’s particular combination of genotypes.

The early proponents of reprogenetics failed to take into account the basic genetic force of pleiotropy: the same genes have not one phenotypic effect but many. Greater genetic potential for height also meant a higher risk score for cardiovascular disease. Cancer risk and Alzheimer’s probability were inversely correlated—and not only because if one killed you, you were probably spared the other, but because a good ability to regenerate cells (read: neurons) also meant that one’s cells were more poised to reproduce out of control (read: cancer).3 As generations of poets and painters could have attested, the genome score for creativity was highly correlated with that for major depression.

But nowhere was the correlation among predictive scores more powerful—and perhaps in hindsight none should have been more obvious—than the strong relationship between IQ and Asperger’s risk.4 According to a highly controversial paper from 2038, each additional 10 points over 120 also meant a doubling in the risk of being neurologically atypical. Because the predictive power of genotyping had improved so dramatically, the environmental component of outcomes had withered in a reflexive loop. In the early decades of the 21st century, IQ was, on average, only two-thirds genetic and one-third environmental in origin by young adulthood.5 But measuring the genetic component became a self-fulfilling prophecy. That is, only kids with high-IQ genotypes were admitted to the best schools, regardless of their test scores. (It was generally assumed that IQ was measured with much error early in life anyway, so genes were a much better proxy for ultimate, adult cognitive functioning.) This pre-birth tracking meant that environmental inputs—which were of course still necessary—were perfectly predicted by the genetic distribution. The result was a heritability of 100 percent for the traits most important to society—namely IQ and (lack of) ADHD, given the need to focus for long periods on intellectually demanding, creative work while machines took care of most other tasks.
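
The arithmetic of that reflexive loop is easy to see in a toy simulation (a minimal sketch with invented, standardized numbers, not data from any study). Once environmental inputs are allocated purely on the basis of genotype, the variance they once contributed collapses into the genetic share, and estimated heritability approaches 100 percent:

```python
# A minimal sketch of the reflexive loop: when environments are assigned from
# genotype, estimated heritability rises toward 1.0. All numbers illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
g = rng.normal(size=n)                     # genetic potential (standardized)

# Early 21st century: environment independent of genotype -> IQ ~ 2/3 genetic.
e = rng.normal(size=n)
iq_then = np.sqrt(2 / 3) * g + np.sqrt(1 / 3) * e

# Pre-birth tracking: environmental inputs perfectly predicted by genotype.
iq_later = np.sqrt(2 / 3) * g + np.sqrt(1 / 3) * g

for label, iq in [("independent environments", iq_then),
                  ("genotype-tracked environments", iq_later)]:
    h2 = np.corrcoef(g, iq)[0, 1] ** 2     # share of variance tied to genes
    print(f"{label}: estimated heritability = {h2:.2f}")
```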

Who can say when this form of prenatal tracking started? Back in 2013, a Science paper constructed a polygenic score to predict education.6 At first, that paper, despite its prominent publication venue, did not attract all that much attention. That was fine with the authors, who were quite happy to fly under the radar with their feat: generating a single number based on someone’s DNA that was correlated, albeit only weakly, not only with how far they would go in school, but also with associated phenotypes (outcomes) like cognitive ability—the euphemism for IQ still in use during the early 2000s.

The approach to constructing a polygenic score—or PGS—was relatively straightforward: Gather up as many respondents as possible, pooling any and all studies that contained genetic information on their subjects as well as the same outcome measure. Education level was recorded not only in social science surveys (which were increasingly collecting genetic data through saliva samples) but also in medical studies that were ostensibly focused on other, disease-related outcomes but which often reported the education levels of the sample.

That Science paper pooled 126,000 people from 36 different studies across the Western world. At each measured locus—that is, at each base pair—one computed the average difference in education level among people who carried zero, one, or two copies of the reference (typically the rarer) nucleotide—A, T, G, or C. The difference was probably on the order of a thousandth of a year of education, if that, or a hundredth of an IQ point. But do that a million times over for each measured variant among the 30 million or so that display variation within the 3 billion total base pairs in our genome, and, as they say, soon you are talking about real money.
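
In code, that recipe is little more than a batch of tiny regressions followed by a weighted sum. Below is a deliberately simplified sketch in Python on simulated data (the sample sizes, allele frequencies, and effect sizes are invented, and real analyses adjust for covariates and population structure). Note how little of the outcome the resulting score explains out of sample, foreshadowing the “missing heritability” discussed below:

```python
# A toy polygenic score: estimate one tiny regression slope per locus in a
# "discovery" sample, then score new people by summing weighted allele counts.
# All quantities are simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_disc, n_target, n_loci = 20_000, 10_000, 1_000

freqs = rng.uniform(0.05, 0.5, size=n_loci)        # reference-allele frequencies
true_effects = rng.normal(0, 0.01, size=n_loci)    # tiny per-locus effects

def sample(n):
    g = rng.binomial(2, freqs, size=(n, n_loci))   # allele counts: 0, 1, or 2
    y = g @ true_effects + rng.normal(0, 1, size=n)  # years of schooling
    return g, y

g_disc, y_disc = sample(n_disc)
g_tgt, y_tgt = sample(n_target)

# Step 1 ("discovery"): the weight for each locus is the slope of education
# on allele count at that locus alone.
gc = g_disc - g_disc.mean(axis=0)
weights = (gc * y_disc[:, None]).sum(axis=0) / (gc ** 2).sum(axis=0)

# Step 2: a person's polygenic score is the weighted sum of allele counts.
pgs = g_tgt @ weights

r2 = np.corrcoef(pgs, y_tgt)[0, 1] ** 2
print(f"Variance in schooling explained out of sample: {r2:.1%}")
```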

That was the beauty of the PGS approach. Researchers had spent the prior decade or two pursuing the folly of looking for the magic allele that would be the silver bullet. Now they could admit that for complex traits like IQ or height or, in fact, most outcomes people care about in their children, there was unlikely to be a single Mendelian gene that explained human difference the way one does for diseases like Huntington’s, sickle cell, or Tay-Sachs.

That said, from a scientific perspective, the Science paper on education was not earth-shattering: polygenic scores had already been constructed for many other, less controversial phenotypes: height and body mass index, birth weight, diabetes, cardiovascular disease, schizophrenia, Alzheimer’s, and smoking behavior—just to name some of the major ones. Further muting the immediate impact of the score’s construction was the fact that—at first—it predicted only 3 percent or so of the variation in years of schooling or IQ. Three percent was less than one-tenth of the variation in the bell curve of intelligence that was reasonably thought to be of genetic origin.

By the mid-2040s, it took a score of 170 or more to ensure your little one would grow up to become a knowledge leader.

Instead of setting off a stampede to fertility clinics to thaw and test embryos, the lower predictive power of the scores in the first couple of decades of the century set off a scientific quest to find the “missing” heritability—that is, the genetic dark matter where the other, estimated 37 percentage points of the genetic effect on education lay (or the unmeasured 72 percentage points of IQ’s genetic basis). With larger samples of respondents and better measurement of genetic variants by genotyping chips that were improving at a rate faster than Moore’s law in computing (doubling in capacity every six to nine months rather than the 18-month cycle postulated for semiconductors), dark-horse theories for missing heritability (such as Lamarckian, epigenetic transmission of environmental shocks) were soon slain, and the amount of genetic dark matter quickly dwindled to nothing.

Clinicians and the wider public hardly noticed at first. Instead, they had been enthralled by a technology called CRISPR/Cas9 that was lighting up the press with talk of soon-to-be-awarded Nobel prizes. CRISPR adapts a defense system from bacteria and archaea, deploying the Cas9 endonuclease (a protein that cuts DNA) to cleave one or both strands of DNA, excising a small section in the process (the part one would like to remove or replace). Donor DNA (supplied by the scientist) is then incorporated as the strand or strands are repaired. As the 21st century rolled on, the gene-editing system did indeed have a huge impact on human disease and well-being: It not only effectively eliminated all single-gene birth defects from the human population of the developed world, it also turned cancer into a chronic (if still painful) condition and improved the food yields and nutritional value of staple crops once opposition to GMOs was politically overcome.

But once parents started “enhancing” their germ lines through the editing of their ova and sperm cells, the science hit a political wall. The changes these intrepid reprogenetic practitioners were making were generally harmless: switching from brown eyes to blue, from attached earlobes to detached, or from dark hair to light. That, in fact, was the limit of what could be done by editing a single gene or a small number of genes, since most dimensions along which humans varied—everything from height to extraversion to metabolism—were highly polygenic, resulting from the sum total of millions of small effects spread across the 23 pairs of human chromosomes.

A breakthrough came when the amount of DNA that technicians could reliably amplify from an early-stage embryo during preimplantation genetic diagnosis reached a level that allowed for full genome sequencing—examination of each base pair—rather than just a bird’s-eye view of the chromosomes to assess major damage such as duplications or deletions, which had been the practice until about 2020. Once the code for the embryos was unlocked, it was a simple matter of running the results through a spreadsheet of weights to end up with predicted phenotypes.
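
In practice, “running the results through a spreadsheet of weights” is nothing fancier than a matrix product, as in this minimal sketch (the variant IDs, traits, and weight values are hypothetical placeholders):

```python
# Predicted phenotypes from a sequenced genotype: multiply allele counts by a
# per-variant, per-trait weight table. All IDs and numbers are hypothetical.
import numpy as np

variants = ["rs0001", "rs0002", "rs0003"]      # stand-ins for millions of sites
traits = ["height", "IQ", "myopia risk"]

# weights[i, j]: effect of one copy of the reference allele at variant i
# on trait j, in standardized units
weights = np.array([[ 0.10, 0.02, 0.00],
                    [-0.05, 0.04, 0.03],
                    [ 0.00, 0.01, 0.08]])

embryo = np.array([2, 1, 0])                   # allele counts from sequencing

for trait, score in zip(traits, embryo @ weights):
    print(f"{trait}: {score:+.2f}")
```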

MADE IN THE DNA: Once the genetic code for embryos had been unlocked in the 2030s, it was a simple matter of running the results through a spreadsheet of weights to end up with desired traits. Image credit: Wichy

The practice quickly spread down the socioeconomic ladder as employees demanded that their health insurance pay for this sort of screening, and eventually such coverage was mandated by law.

Of course, just as there were always people who refused to sign up for Facebook, the naturalist, alternative subcultures—those who mated “blindly”—also grew and thrived. Little did these subaltern geno-resisters know that their DNA was surreptitiously being used by the reprogeneticists to test the statistical models and refine the polygenic scores. First, the simulations required the greater variation provided by natural reproduction; as more and more parents practiced reprogenetics, minimizing the presence of certain DNA variants and maximizing that of others, there was little variation left upon which to test for potential cross-gene interactions. Second, and more important, since the majority of the population now sorted and invested in their children based on their genetic scores, the effects of those scores within that population became circular and self-fulfilling, adding little to their underlying predictive power. To achieve greater efficiency through improved genotyping accuracy, one needed an environmental landscape that still followed its own logic, orthogonal to genotype.

As genetic variation had been steadily purged from society, humans had become like a mono-culture staple crop.

The social world soon bent to this new auto-evolutionary reality: Not only did admissions testing for schools give way to genetic screening, but the educational system fragmented into stratified niches based on specific combinations of genetically based traits: programs for those who were neurotypical and high on athletic ability, and others for those high on both motor skill and the autism spectrum (a rather rarer category). There were jobs that required ADHD and those that shunned it. All in the name of greater economic efficiency.

The result was the declining cohesion of anything that could be called a family or household unit. One might think that greater parental control over the genotypes of their offspring would produce families that specialized—in sports or in verbal ability—and then created a domestic culture geared toward fostering that tendency or skill. In reality, sibling differences were accentuated to an extent never before seen.

Most socio-genetic consultants suggested to parents that they maximize similarity to their older child, to avoid the problem of parenting two vastly different genotypes in the same household. Genetic differences, they explained, are magnified within families as parents consciously or unconsciously try to provide the investments and environments that each child needs to achieve their genetic potential. Small differences in IQ or athletic ability that might be part of random measurement error, or swamped by environmental differences when comparing two kids from different backgrounds with the same genotypic score, become outsized in their effects when seen against the control group of a sibling. And this becomes, like so many social dynamics, a self-fulfilling prophecy that leads to the ironic situation of greater differences within families than between them.

This, too, could have been predicted by early 21st-century science papers, some of which showed that the effect of the polygenic score for cognitive ability was stronger within families than between them.7 That is, the genetic scores predicted the differences between siblings better than they predicted the differences between randomly selected individuals with different parents. A sibling whose education score exceeds his brother’s by a given margin is likely to complete, on average, a half-year more schooling. But compare two strangers whose scores differ by the same margin, and the average difference in schooling is only one-third of a year. So rather than acting as a buffer against the harsh winds of capitalism, mitigating inequalities with an internal logic of altruism, the household now acted as a pure sorting machine, accentuating rather than attenuating small differences in what economists call “endowment.” The result of parental control, then, was nothing less than the total reconfiguration of the family unit.
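
One mechanism that would produce this pattern (an assumption for illustration, not the cited paper’s model) is a sibling “contrast effect,” in which parental investment amplifies the gap between siblings. Here is a short simulation, with coefficients invented to roughly reproduce the half-year versus one-third-year figures above:

```python
# Within- vs between-family effects of an education score under a hypothetical
# sibling contrast effect. Coefficients are tuned to mimic the text's figures.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

s1 = rng.normal(size=n)           # education score, sibling 1
s2 = rng.normal(size=n)           # education score, sibling 2

direct, contrast = 1 / 6, 1 / 6   # hypothetical direct and contrast effects

# Each child's schooling reflects their own score plus how they compare with
# their sibling (parents magnify the within-household gap).
school1 = direct * s1 + contrast * (s1 - s2) + rng.normal(size=n)
school2 = direct * s2 + contrast * (s2 - s1) + rng.normal(size=n)

def slope(dx, dy):
    # OLS slope of dy on dx through the origin (differences are mean zero)
    return (dx * dy).sum() / (dx ** 2).sum()

# Siblings compared with each other vs. everyone re-paired with a stranger.
perm = rng.permutation(n)
print(f"within-family:  {slope(s1 - s2, school1 - school2):.2f} years per unit")
print(f"between-family: {slope(s1 - s2[perm], school1 - school2[perm]):.2f} years per unit")
```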

One effect of this reconfiguration of socialization was the rise of highly specialized boarding schools that seemed to cater to younger and younger children each year. The interaction between a parent’s genetic score and that of her offspring had the potential to magnify effects (for better or worse), so parents were lured by slick marketing campaigns that promised educational environments fine-tuned to a child’s particular combination of genotypes. Adding to the frenzy for the “right” placement—making the competition for private preschool slots in places like Manhattan and San Francisco in the early 2000s look like open enrollment—was research showing that not only did a parent’s genotype blunt or accentuate the genotypic effects for a child, but the entire genotypic environment mattered. How your genotype for behavior played out depended on the distribution of genotypes around you.

But unlike the tall poppy or the differently colored wildflower, which attracts the pollinator and gains an advantage in the game of reproduction by being different, people did not benefit from standing out: The research showed positive genotypic peer effects; that is, being around those with your same predicted phenotype was good for you—there was strength in genetic numbers. The end result was micro-sorting in schools and in almost all aspects of social life, including marriage and occupation. In the 2000s, doctors and lawyers had the highest rates of in-marriage: more than 25 percent of doctors and lawyers were married to other doctors and lawyers. By the 2040s, doctors and lawyers married within their fields at a 90 percent rate, and it was hard to find any profession that was below 80 percent.

Micro-sorting should not have been able to last, as the entire point of sexual reproduction, evolutionary biologists long told us, was to introduce genetic variation into the population. Compared to asexually reproducing species, mating species—which inefficiently pass on half as many genes as they would by merely cloning themselves—are more robust to genetic drift and environmental challenges. Recombination during the production of sperm and eggs (meiosis) means that advantageous alleles can be pooled and deleterious ones purged from surviving offspring. In this way, sex speeds up selection. This, of course, was what allowed the entire reprogenetic endeavor to be realized so rapidly in the first place.

Most critically, all this convergence on ideal genotypes had immunological consequences. As genetic variation had been steadily purged from society, humans had become like a mono-culture staple crop, unable to resist parasites that could evolve more quickly than their hosts thanks to their short generation times. People should have known better from past experience with the Irish potato famine, or any number of collapsed societies. But without random mating to stir up genomes and provide herd immunity, they not only had to live in social bubbles; they ended up having to live in immunological ones as well—literally separated by impermeable membranes so as not to pass on strains of microbes that might be fatal. By managing their own genomes, they were forced to control their microbiomes and environmental exposures.

In the clinic, the socio-genetic consultant suggested to the not-so-young parents who wanted a boy that they pick the embryo most similar to their daughter, regardless of which might be the most successful. If the siblings were immunologically similar, they could interact without fear of literally killing each other. Alas, the consultant’s advice was not heeded. The parents opted for the 180.

Dalton Conley is visiting professor of sociology at Princeton University, where he is on leave from his faculty position at New York University. He holds Ph.D.s in sociology (1996) and in biology (2014), and is co-author of the forthcoming book, Genotocracy? DNA, Inequality and Society.

References

1. Cohn, S.J., Cohn, C.M., & Jensen, A.R. Myopia and intelligence: a pleiotropic relationship? Human Genetics 80, 53-58 (1988).

2. Miller, E.M. On the correlation of myopia and intelligence. Genetic, Social, and General Psychology Monographs 118, 361-383 (1992).

3. Driver, J.A., et al. Inverse association between cancer and Alzheimer’s disease: results from the Framingham Heart Study. BMJ 344, e1442 (2012).

4. Hayashi, M., Kato, M., Igarashi, K., & Kashima, H. Superior fluid intelligence in children with Asperger’s disorder. Brain and Cognition 66, 306-310 (2008).

5. Haworth, C.M., et al. The heritability of general cognitive ability increases linearly from childhood to young adulthood. Molecular Psychiatry 15, 1112-1120 (2010).

6. Rietveld, C.A., et al. GWAS of 126,559 individuals identifies genetic variants associated with educational attainment. Science 340, 1467-1471 (2013).

7. Conley, D., et al. Is the effect of parental education on offspring biased or moderated by genotype? Sociological Science 2, 82-105 (2015).
