
Sunday, July 10, 2016

What is a "gene" and how do genes work according to Siddhartha Mukherjee?

It's difficult to explain fundamental concepts of biology to the average person. That's why I'm so interested in Siddhartha Mukherjee's book "The Gene: an intimate history." It's a #1 bestseller so he must be doing something right.

My working definition of a gene is based on a blog post from several years ago [What Is a Gene?].
A gene is a DNA sequence that is transcribed to produce a functional product.
This covers two types of genes: those that eventually produce proteins (polypeptides); and those that produce functional noncoding RNAs. This distinction is important when discussing what's in our genome.
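
For readers who like to see the definition spelled out, here's a minimal sketch of it as a simple data model. The class and field names are my own illustration (nothing standard), but they capture the point: it's the functional product, protein or RNA, that makes a transcribed sequence a gene.

```python
from dataclasses import dataclass
from enum import Enum

class Product(Enum):
    POLYPEPTIDE = "protein"           # mRNA that gets translated
    NONCODING_RNA = "functional RNA"  # tRNA, rRNA, snoRNA, lncRNA, etc.

@dataclass
class Gene:
    name: str
    dna_sequence: str   # the transcribed sequence (placeholder below)
    product: Product    # the functional product is what defines the gene

# Both kinds count as genes under the working definition above;
# transcribed DNA with no functional product, and untranscribed DNA, do not.
genes = [
    Gene("TPI1", "...", Product.POLYPEPTIDE),      # a protein-coding gene
    Gene("RNU6-1", "...", Product.NONCODING_RNA),  # the U6 snRNA gene
]
```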

Monday, July 04, 2016

Paradigm shifting at the Royal Society meeting in November

Suzan Mazur has been making a name for herself by promoting the overthrow of modern evolutionary theory. She began with a lot of hype about the Altenberg 16 back in 2008 and continued with a series of interviews of prominent evolutionary biologists.

Now she's focused on the upcoming meeting in November as another attempt to shift paradigms [see New Trends in Evolutionary Biology: The Program]. She's not entirely wrong. Many of the people involved in that meeting see themselves as paradigm shifters.

TED-Ed misrepresents epigenetics

TED-Ed is the educational arm of TED. Here's what TED says about itself and about TED-Ed ...
TED believes passionately that ideas have the power to change attitudes, lives, and ultimately, the world. This underlying philosophy is the driving force behind all of TED’s endeavors, including the TED Conferences, TEDx, TED Books, the TED Fellows Program, and the TED Open Translation Project. With this philosophy in mind, and with the intention of supporting teachers and sparking the curiosity of learners around the world, TED-Ed was launched in 2012.

TED-Ed is TED’s youth and education initiative. TED-Ed’s mission is to spark and celebrate the ideas of teachers and students around the world. Everything we do supports learning — from producing a growing library of original animated videos, to providing an international platform for teachers to create their own interactive lessons, to helping curious students around the globe bring TED to their schools and gain presentation literacy skills, to celebrating innovative leadership within TED-Ed’s global network of over 250,000 teachers. TED-Ed has grown from an idea worth spreading into an award-winning education platform that serves millions of teachers and students around the world every week.

Sunday, July 03, 2016

The scientific literature is becoming more complex

A recent paper by Cordero et al. (2016) looked at the biological scientific literature in 1993, 2003, and 2013. They found that the average publishable unit (APU) almost doubled in twenty years. There were substantial increases in the number of tables & figures and the number of panels per figure. The number of pages increased as did the number of references and the number of authors.

I agree that papers are becoming more complex and more difficult for the average scientist to understand, especially for those outside the specific field of study. The authors of this study point out a number of problems with this increase. I'd like to highlight one of them.

With respect to the number of authors, they say,
Concomitantly, with the increase in information density we note a significant increase in the number of authors per article that also correlated with the average IF of the journal. Since the famous de Solla Price predictions [38], trends toward an increasing number of authors per publication have been widely documented [23,39–44]. Such a trend of increasing collaboration could be explained by the causes suggested above for the growth of information density. The costs associated with the generation of cutting-edge scientific information, the funding restrictions, and the associated risks in scientific publishing in a “winner-take-all” reward system [45] may motivate scientists to team-up, pool resources and fractionate the risks through co-authoring. Also, the increasing complexity of scientific research has resulted in greater specialization of scientists [46], which in turn suggests that the inclusion of additional techniques requires the recruitment of additional investigators to provide that data and thus serve as co-authors. This trend could have both positive and negative consequences. Increased interaction between scientists in diverse fields could translate into greater communication and the possibility for advances at the interfaces of different disciplines. On the other hand, an increase in the number of authors, some of whom bring highly specialized knowledge, could result in reduced supervision of larger groups, and less responsibility per author for the final product and reduced integration of data.
I think the major consequence is the lack of responsibility of individual authors in a multi-author study. With increased specialization, there are fewer and fewer authors who see the big picture and who are capable of integrating the results from several subspecialties. The fact that the studies include work from several highly specialized techniques that only a few people understand also makes it harder for the average reader to evaluate the paper.

It's likely, in my opinion, that many of the authors on the paper don't fully understand the techniques being used by their colleagues. This is a big change from the science I grew up with.

Cordero et al. are worried about the possibility of fraud.
The growth in authors brings with it the concerns about the possibility that as more authors are added, there is an increased likelihood of some individuals with reduced integrity and capable of misconduct joining the group. In this regard, we note that the inclusion of one individual who has been accused of misconduct in numerous studies has led to dozens of retractions of scientific publications.
This is a very real danger but I think that outright fraud is not a significant worry. What concerns me more is the tendency to gloss over the limitations and possible misinterpretations of complex data analyses. The specialist who performs these analyses probably doesn't intend to misrepresent or exaggerate the significance of the result; it's just that they have become so used to using a particular technique (i.e. a software package) that they have forgotten those limitations. They don't communicate them to their colleagues who, because they don't understand the technique, don't realize there's a problem.

Cordero et al. summarize their results ....
In summary, our study documents a change in the literature of the biological sciences toward publications with more data over time. The causes for these trends are complex and probably include increasing experimental options and changes to the culture of science. At first glance, this data could be interpreted as a cultural change opposite to data fragmentation practices. However, it is also possible that an increase in publication density can still occur over a ‘salami slicing’ culture if the publication unit to be segregated is larger to begin with, as the result of technological improvements and increasing numbers of scientific authors. The benefits and debits of this trend for the scientific process are uncertain at this time but it is clear that there have been major changes to the nature of scientific publications in the past two decades that are likely to have major repercussions in all aspects of the scientific enterprise.
I think they're on to something.


Cordero, R. J., de León-Rodriguez, C. M., Alvarado-Torres, J. K., Rodriguez, A. R., and Casadevall, A. (2016) Life Science’s Average Publishable Unit (APU) Has Increased over the Past Two Decades. PLoS ONE, 11(6), e0156983. [doi: 10.1371/journal.pone.0156983]

Friday, July 01, 2016

How to read the scientific literature?

Science addressed the problem of How to (seriously) read a scientific paper by asking a group of Ph.D. students, post-docs, and scientists how they read the scientific literature. None of the answers will surprise you. The general theme is that you read the abstract to see if the work is relevant then skim the figures and the conclusions before buckling down to slog through the entire paper.


None of the respondents addresses the most serious problems, such as trying to figure out what the researchers actually did when you have no clue how they did it. Nor do they address the serious issue of misleading conclusions and faulty logic.

I asked on Facebook whether we could teach undergraduates to read the primary scientific literature. I'm skeptical since I believe it takes a great deal of experience to be able to profitably read recent scientific papers and it takes a great deal of knowledge of fundamental concepts and principles. We know from experience that many professional scientists can be taken in by papers that are published in the scientific literature. Arseniclife is one example and the ENCODE papers published in September 2012 are another. If professional scientists can be fooled, how are we going to teach undergraduates to be skeptical?

Thursday, June 30, 2016

Do Intelligent Design Creationists still think junk DNA refutes ID?

I'm curious about whether Intelligent Design Creationists still think their prediction about junk DNA has been confirmed.


Here's what Stephen Meyer wrote in Darwin's Doubt (p. 400).
The noncoding regions of the genome were assumed to be nonfunctional detritus of the trial-and-error mutational process—the same process that produced the functional code in the genome. As a result, these noncoding regions were deemed "junk DNA," including by no less a scientific luminary than Francis Crick.

Because intelligent design asserts that an intelligent cause produced the genome, design advocates have long predicted that most of the nonprotein-coding sequences in the genome should perform some biological function, even if they do not direct protein synthesis. Design theorists do not deny that mutational processes might have degraded some previously functional DNA, but we have predicted that the functional DNA (the signal) should dwarf the nonfunctional DNA (the noise), and not the reverse. As William Dembski, a leading design proponent, predicted in 1998, "On an evolutionary view we expect a lot of useless DNA. If, on the other hand, organisms are designed, we expect DNA, as much as possible, to exhibit function."
I'm trying to write about this in my book and I want to be as fair as possible.

Do most ID proponents still believe this is an important prediction from ID theory?

Do most ID proponents still think that most of the human genome is functional?


Tuesday, June 28, 2016

New Trends in Evolutionary Biology: The Program

I'm going to London next November to attend The Royal Society conference on New trends in evolutionary biology: biological, philosophical and social science perspectives. This is where all the scientists who want to change evolution will be gathering to spout their claims.

Developments in evolutionary biology and adjacent fields have produced calls for revision of the standard theory of evolution, although the issues involved remain hotly contested. This meeting will present these developments and arguments in a form that will encourage cross-disciplinary discussion and, in particular, involve the humanities and social sciences in order to provide further analytical perspectives and explore the social and philosophical implications.
The program has been published. Here's the list of speakers ...

Gerd B. Müller
The extended evolutionary synthesis

Douglas Futuyma
The evolutionary synthesis today: extend or amend?

Sonia Sultan
Re-conceiving the genotype: developmental plasticity

Russell Lande
Evolution of phenotypic plasticity

Tobias Uller
Heredity and evolutionary theory

John Dupré
The ontology of evolutionary process

Paul Brakefield
Can the way development works bias the path taken by evolution?

Kevin Laland
Niche construction

James Shapiro
Biological action in read-write genome evolution

Paul Griffiths
Genetics/epigenetics in development/evolution

Eva Jablonka
Epigenetic inheritance

Greg Hurst
Symbionts in evolution

Denis Noble
Evolution viewed from medicine and physiology

Andy Gardner
Anthropomorphism in evolutionary biology

Sir Patrick Bateson
The active role of the organism in evolution

Karola Stotz
Developmental niche construction

Tim Lewens
A science of human nature

Agustín Fuentes
Human niche, human behaviour, human nature

Andrew Whiten
The second inheritance system: the extension of biology through culture

Susan Antón
Human evolution, niche construction and plasticity

Melinda Zeder
Domestication as a model system for evolutionary biology

I didn't know that Paul Griffiths and Karola Stotz were going. It's a bit surprising that they would associate with some of these views. I'm glad that Douglas Futuyma will be there to represent the voice of reason. He seems to be one of the few speakers who understands modern evolutionary theory.

There are still a few spots available, according to the organizers. Sign up quickly.

The meeting is at Carlton House Terrace, which is just a few blocks from Trafalgar Square and a short walk down The Mall to Buckingham Palace where the Corgis live.


Wednesday, June 15, 2016

What does a person's genome reveal about their ethnicity and their appearance?

If you knew someone's complete genome sequence, could you tell where they came from and their ethnic background (race)? The answer is confusing, according to Siddhartha Mukherjee in his latest book "The Gene: an intimate history." It appears to be "yes," but Mukherjee then denies that knowing where someone came from tells us anything about their genome or their phenotype. He writes the following on page 342.

... the genetic diversity within any racial group dominates the diversity between racial groups. This degree of intraracial variability makes "race" a poor surrogate for nearly any feature: in a genetic sense, an African man from Nigeria is so "different" from another man from Namibia that it makes little sense to lump them into the same category.

For race and genetics, then, the genome is strictly a one-way street. You can use the genome to predict where X or Y came from. But knowing where A or B came from, you can predict little about the person's genome. Or: every genome carries a signature of an individual's ancestry—but an individual's racial ancestry predicts little about the person's genome. You can sequence DNA from an African-American man and conclude that his ancestors came from Sierra Leone or Nigeria. But if you encounter a man whose great-grandparents came from Nigeria or Sierra Leone, you can say little about the features of this particular man. The geneticist goes home happy; the racist returns empty-handed.
I find this view very strange. Imagine that you were an anthropologist who was an expert on humans and human evolution. Imagine you were told that there's a woman in the next room whose eight great-grandparents all came from Japan. According to Mukherjee, such a scientist could not predict anything about the features of that woman. Does that make any sense?

I suspect this is just a convoluted way of reconciling science with political correctness.

Steven Monroe Lipkin has a different view. He's a medical geneticist who recently published a book with Jon R. Luoma titled "The Age of Genomes: tales from the front lines of genetic medicine." Here's how they explain it on page 6.
Many ethnic groups carry distinct signatures. For example, from a genome sequence you can usually tell if an individual is African-American, Caucasian, Asian, Satnami, or Ashkenazi Jew, even if you've never laid eyes on the patient. A well-regarded research scientist whom I had never met made his genome sequence publically available as part of a research study. I remember scrolling through his genetic variant files and trying, more successfully than I had expected, to guess what he would look like before I peeked at his webpage photo. The personal genome is more than skin deep.
This makes more sense to me. If you know what to look for—and Steven Monroe Lipkin certainly does—then many of the features of a particular person can be deduced from their genome sequence. And if you know which variants are more common in certain ethnic groups then you can certainly predict what a person might look like just by knowing where their ancestors came from.

What's wrong with that?
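
Here's a toy illustration of why that works. The SNP labels and allele frequencies below are invented for the example (they are not real data), but the logic is the standard one for ancestry-informative markers: alleles whose frequencies differ between populations quickly shift the likelihood toward one ancestry or the other, and the same frequency differences let you guess which variants a person from a given population is likely to carry.

```python
# Toy ancestry-informative-marker calculation. The SNP names, populations,
# and allele frequencies are invented; real panels use thousands of markers.
freqs = {
    "snp1": {"pop_A": 0.90, "pop_B": 0.10},
    "snp2": {"pop_A": 0.80, "pop_B": 0.20},
    "snp3": {"pop_A": 0.85, "pop_B": 0.15},
}

def likelihood(pop, genotypes):
    """P(observed genotypes | population), assuming Hardy-Weinberg
    proportions and independent markers. A genotype is the count
    (0, 1, or 2) of the tracked allele."""
    total = 1.0
    for snp, count in genotypes.items():
        p = freqs[snp][pop]
        total *= {2: p * p, 1: 2 * p * (1 - p), 0: (1 - p) ** 2}[count]
    return total

person = {"snp1": 2, "snp2": 2, "snp3": 1}   # hypothetical genotypes
for pop in ("pop_A", "pop_B"):
    print(f"{pop}: likelihood {likelihood(pop, person):.2e}")
# Even three made-up markers give odds of roughly 1000:1 in favour of pop_A.
```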


Monday, June 06, 2016

Can scientists describe what they're doing to a fifth grader?

I'm working on a review of "The Gene" by Siddhartha Mukherjee. It raises a huge number of issues about science writing and the conflict between producing a bestseller and educating the public about science.

As part of the research for that blog post I've been reading all the reviews of his book and I came across an interview with Mukherjee on the Smithsonian website [Siddhartha Mukherjee Follows Up Biography of Cancer With “An Intimate History” of Genetics].

Here's an interesting answer to an important question ...

Sunday, June 05, 2016

Evolution according to "New Scientist"

A recent editorial in the magazine New Scientist caught my eye. The title is "Long Live Evolution" and it offers support for "new ideas" about evolution. The online version is titled Darwin’s beautiful theory must itself be allowed to evolve. The author is not identified; I assume it's one of the editors.

Here's the opening paragraph ...
Nothing in evolution makes sense except in the light of population genetics.
Michael Lynch (2007)

Darwin's great theory must itself be allowed to evolve

THE theory of evolution is a splendid thing: an elegant and utterly logical explanation for how natural selection solves the problems of survival and creates the enormous diversity of life we see in the world around us.
There is no such thing as "THE" theory of evolution. Evolutionary theory is complex. It covers several mechanisms (natural selection, random genetic drift) and its core is population genetics—something that was unknown in Darwin's time.
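
To see what "more than natural selection" looks like in practice, here's a toy Wright-Fisher sketch (my own illustration, with arbitrary parameters): a perfectly neutral allele changes in frequency every generation and is eventually lost or fixed by random genetic drift alone, with no selection anywhere in the model.

```python
import random

def neutral_drift(pop_size=100, freq=0.5, max_gen=10_000):
    """Wright-Fisher drift of a neutral allele in a diploid population.
    Each generation, 2N gene copies are drawn binomially from the
    current allele frequency -- there is no selection in the model."""
    for gen in range(max_gen):
        if freq in (0.0, 1.0):            # allele lost or fixed
            return gen, freq
        copies = sum(random.random() < freq for _ in range(2 * pop_size))
        freq = copies / (2 * pop_size)
    return max_gen, freq

random.seed(1)
for trial in range(5):
    gen, final = neutral_drift()
    outcome = "fixed" if final == 1.0 else "lost" if final == 0.0 else "segregating"
    print(f"trial {trial}: allele {outcome} after {gen} generations")
```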

We know that Darwin’s hypothesis of natural selection ... was correct, but we also know that there are more causes of evolution than Darwin realized ...
Douglas Futuyma (2009)

The New Scientist editor is describing the theory of natural selection but he/she even gets that wrong because most of life's diversity is probably NOT due to natural selection.

The irony here is that New Scientist then goes on to say ...
That brings to the fore areas that are not part of the canon of evolutionary theory: epigenetics, for example, which studies how organisms are affected by changes in the ways in which genes are expressed, rather than in the genes themselves.

Attempts to incorporate such elements into evolutionary theory have not always been welcomed, however. That is understandable, given how successful the theory has been without them. Occam’s razor applies: do not add complications unless they are absolutely necessary.

But another motivating factor is undoubtedly the fear that if scientists themselves are seen to suggest that even small details of the theory of evolution could be improved upon, its detractors will seize upon them with avidity. This is a well-founded fear: it happens all the time, with well-funded and highly visible front organisations distorting scientific discussion to create the false impression of disagreement about the basics of evolutionary theory.

It is a fear scientists need to overcome, lest the admirable defence of truth mutates into defensiveness and rigidity. It is one thing to counter reactionaries who reject evolution; it is quite another to be dismissive of or even hostile to scientists who have new ideas to offer.
I recommend that the editors of New Scientist purchase and read any introductory textbook on evolution before they write any more silly editorials. They will learn that "Darwin's great theory" has already been changed beyond anything that Darwin would have recognized. The fact that the editors of a prominent science magazine don't understand evolution is an example of one of the main problems that have led to so much confusion today over recent attempts to extend evolutionary theory.

If science journalists are going to write about whether epigenetics should be part of evolutionary theory, then they had better do their homework before criticizing prominent evolutionary biologists for being afraid of changing even "small details" of modern evolutionary theory. I suggest they start by reviewing some "small details" like Neutral Theory, random genetic drift, hierarchical theory, species selection, punctuated equilibria, sympatric speciation, group selection, directed mutation, cladistics, kin selection, selfish genes, endosymbiosis, and a host of other aspects of evolution that have been vigorously debated in the scientific literature over the past century.

Maybe after doing their homework they will realize that prominent evolutionary biologists who question epigenetics are not doing it because they fear change ... they're doing it because "epigenetics" has been debated for fifty years and it has little to do with modern evolutionary theory. Maybe the science journalists will also realize that proponents of the "extended evolutionary synthesis" are just as ignorant of modern evolutionary theory as the journalists themselves were before doing their homework.

The editorial ends with ...
Evolution is true. But it is also a living, breathing idea that must not be allowed to ossify into a dogma of the kind that it has done so much to sweep away.
Ironically, the most common "dogma" is the false idea that evolutionary theory hasn't changed since Darwin's time and the editor of New Scientist is a prime example of this kind of ossification.



Tuesday, May 24, 2016

University of Toronto press release distorts conclusions of RNA paper

My colleague, Ben Blencowe, just published a paper ...

Sharma, E., Sterne-Weiler, T., O’Hanlon, D., and Blencowe, B.J. (2016) Global Mapping of Human RNA-RNA Interactions. Molecular Cell, [doi: 10.1016/j.molcel.2016.04.030]

ABSTRACT (Summary)

The majority of the human genome is transcribed into non-coding (nc)RNAs that lack known biological functions or else are only partially characterized. Numerous characterized ncRNAs function via base pairing with target RNA sequences to direct their biological activities, which include critical roles in RNA processing, modification, turnover, and translation. To define roles for ncRNAs, we have developed a method enabling the global-scale mapping of RNA-RNA duplexes crosslinked in vivo, ‘‘LIGation of interacting RNA followed by high-throughput sequencing’’ (LIGR-seq). Applying this method in human cells reveals a remarkable landscape of RNA-RNA interactions involving all major classes of ncRNA and mRNA. LIGR-seq data reveal unexpected interactions between small nucleolar (sno) RNAs and mRNAs, including those involving the orphan C/D box snoRNA, SNORD83B, that control steady-state levels of its target mRNAs. LIGR-seq thus represents a powerful approach for illuminating the functions of the myriad of uncharacterized RNAs that act via base-pairing interactions.
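
The details of the method are in the paper, but the underlying logic can be cartooned in a few lines: a chimeric read whose two halves map to different RNAs is evidence that those RNAs were base-paired when they were crosslinked and ligated, and recurrent pairs are candidate interactions. The read assignments below are invented placeholders, not LIGR-seq data, and this is emphatically not the authors' pipeline.

```python
from collections import Counter

# Invented placeholder assignments: each chimeric read reduced to the pair
# of RNAs its two halves map to. Real data would come from mapping millions
# of sequencing reads.
chimeric_reads = [
    ("SNORD83B", "mRNA_X"),
    ("SNORD83B", "mRNA_X"),
    ("SNORD83B", "mRNA_Y"),
    ("U4_snRNA", "U6_snRNA"),
]

# Tally read support for each RNA-RNA pair.
interactions = Counter(tuple(sorted(pair)) for pair in chimeric_reads)
for (rna_a, rna_b), n in interactions.most_common():
    print(f"{rna_a} -- {rna_b}: {n} supporting reads")
```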

Monday, May 16, 2016

Tim Minchin's "Storm," the animated movie, and another not-so-good Minchin cartoon

I've mentioned this before but it bears repeating. If you haven't listened to "Storm" then you are in for a treat because now you can listen AND watch. If you've heard it before, then hear it again. The message never gets old.


A word of caution. Minchin may be very good at recognizing pseudoscience and quacks but he can be a bit gullible when listening to scientists. He was completely taken in by the ENCODE hype back in 2012. This cartoon is also narrated by Tim Minchin but it's not so good.



Monday, May 09, 2016

Research for a book

I'm on sabbatical this term, working on a possible book whose working title is "What's in Your Genome?: 90% of your genome is junk."

Here are some of the most important books I've read (or re-read) in the past few months.


I've also read a lot of papers and scribbled notes on what's important and what's not. The most difficult part about keeping up with the scientific literature is organizing it in some meaningful way so you can quickly find it again if you need to—something I do just about every day.

Everyone has their own method. What works for me is to keep an electronic reference with key words and links to a file folder on a particular topic. (I use EndNote.) Here are the folders with all the papers I've been reading in the past few months.


I don't know how other authors behave but for me the most difficult thing about writing a book is organizing my thoughts and planning how to present them in the most effective manner. I tend to write too much on too many topics so the initial drafts usually have to be pared down considerably. Keeping that in mind, what are YOUR favorite topics?


John Oliver teaches us to be skeptical of scientific publications

We all know that the purpose of education should be to teach students how to think critically. We're not doing a very good job. Take biochemistry, for example. We spend a lot of time transferring information from lecture notes to student notes and then examining students on whether the transfer has worked. We think that teaching students to read the primary literature will make them better scientists when, in fact, teaching them to be skeptical of the primary literature is what's really necessary.

The ENCODE fiasco is just one of many examples where the scientific literature got it wrong. We need to make sure that our students appreciate the important parts of science; namely, the necessity of repeating experiments and the value of scientific consensus. Our students, and many of my colleagues, are prone to hype and promotion just like everyone else but that's exactly what critical thinking is supposed to avoid. And it's exactly what proper science—no matter how you define it—is designed to overcome.

If students and scientists are having trouble with these concepts, imagine how difficult it is for the general public. How are they supposed to know that not every "breakthrough" is a real breakthrough and not every new study is correct?

John Oliver did an excellent job of explaining the problem on a recent (May 8, 2016) episode of Last Week Tonight. Watch it. It's worth 20 minutes of your time. The last bit on "Todd Talks" is classic.




Monday, May 02, 2016

The Encyclopedia of Evolutionary Biology revisits junk DNA

The Encyclopedia of Evolutionary Biology is a four volume set of articles by leading evolutionary biologists. An online version is available at ScienceDirect. Many universities will have free access.

I was interested in what they had to say about junk DNA and the evolution of large complex genomes. The only article that directly addressed the topic was "Noncoding DNA Evolution: Junk DNA Revisited" by Michael Z. Ludwig of the Department of Ecology and Evolution at the University of Chicago. Ludwig is a Research Associate (Assistant Professor) who works with Martin Kreitman on "Developmental regulation of gene expression and the genetic basis for evolution of regulatory DNA."

As you could guess from the title of the article, Michael Ludwig divides the genome into two fractions: protein-coding genes and noncoding DNA. The fact that organismal complexity doesn't correlate with the number of (protein-coding) genes is a problem that requires an explanation, according to Ludwig. He assumes that the term "junk DNA" was used in the past to account for our lack of knowledge about noncoding DNA.
Eukaryotic genomes mostly consist of DNA that is not translated into protein sequence. However, noncoding DNA (ncDNA) has been little studied relative to proteins. The lack of knowledge about its functional significance has led to hypotheses that much nongenic DNA is useless "junk" (Ohno, 1972) or that it exists only to replicate itself (Doolittle and Sapienza, 1980; Orgel and Crick, 1980).
Ludwig says that we now know some of the functions of non-coding DNA and one of them is regulation of gene expression.
These regulatory sequences are distributed among selfish transposons and middle or short repetitive DNAs. The genome is an extremely complex machine; functionally as well as structurally it is generally not possible to disentangle the regulatory function from the junk selfish activity. The idea of junk DNA needs to be revisited.
Of course we all know about regulatory sequences. We've known about this function of non-coding DNA for half a century. The question that interests us is not whether non-coding DNA has a function but whether a large proportion of noncoding DNA is junk.

Ludwig seems to be arguing that a significant fraction of the mammalian genome is devoted to regulation. He doesn't ever specify what this fraction is but apparently it's large enough to "revisit" junk DNA.

The biggest obstacle to his thesis is the fact that only 8% of the human genome is conserved (Rands et al., 2014). Ludwig says that 1% of the genome is coding DNA and 7% "has a functional regulatory gene expression role" according to the Rands et al. study. This is somewhat misleading since Rands et al. specifically mention that not all of this conserved DNA will be regulatory.

All of this is consistent with a definition of function specifying that it must be under negative selection (i.e. conserved). It leads to the conclusion that about 90% of the human genome is junk. That doesn't require a re-evaluation of junk.

In order to "revisit" junk DNA, the proponents of the "complex machine" view of evolution must come up with plausible reasons why lack of sequence conservation does not rule out function. Ludwig offers up the standard rationales ...
  1. Some ultra-conserved sequences don't seem to have a function and this "shows that the extent of sequence conservation is not a good predictor of the functional importance of a sequence."
  2. The amount of conserved sequence depends on the alignment and alignment is difficult.
  3. About 40%-70% of the noncoding DNA in Drosophila melanogaster is under functional constraint within the species but not between D. melanogaster and D. simulans. Therefore, some large fraction of functional regulatory sequences might only be conserved in the human lineage and it won't show up in comparisons between species. (Does this explain onions?)
The idea here is that there is rapid turnover of functional DNA binding sites required for regulation but the overall fraction of DNA devoted to regulation remains large. This explains why there doesn't seem to be a correlation between the amount of conserved DNA and the amount that can possibly be devoted to regulating gene expression. The argument implies that much more than 7% of the genome is required for regulation. The amount has to be >50% or so in order to justify overthrowing the concept of junk DNA.

That's a ridiculous number, but so is 7%. Imagine that "only" 7% of the genome is functionally involved in regulating expression of the protein-coding genes. That's 224 million base pairs of DNA or approximately 10 thousand base pairs of cis-regulatory elements (CREs) for every protein-coding gene.
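
Here's the back-of-the-envelope arithmetic, assuming a haploid genome of about 3.2 billion base pairs and roughly 20,000 protein-coding genes:

```python
# Back-of-the-envelope arithmetic, assuming a ~3.2 Gb haploid genome
# and ~20,000 protein-coding genes.
genome_bp = 3.2e9
protein_coding_genes = 20_000

regulatory_bp = 0.07 * genome_bp   # the 7% "regulatory" fraction
print(f"{regulatory_bp / 1e6:.0f} million bp of regulatory DNA")           # ~224 million
print(f"{regulatory_bp / protein_coding_genes:,.0f} bp of CREs per gene")  # ~11,200

# Compare the conservation-based estimate: ~8.2% of the genome is constrained
# (Rands et al., 2014), leaving roughly 90% with no detectable sequence-level
# sign of function.
print(f"{100 - 8.2:.0f}% of the genome shows no detectable constraint")
```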

There is no evidence whatsoever that even this amount (7%) of DNA is required for regulation but Ludwig would like to think that the actual amount is much greater. The lack of sequence conservation is dismissed by assuming rapid turnover that preserves function and/or stabilizing selection on polymorphic sequences.

The problem here is that Ludwig is constructing a just-so evolutionary story to explain something that doesn't require an explanation. If there's no evidence that a large fraction of the genome is required for regulation then there's no problem that needs explaining. Ludwig does not tell us why he believes that most of our genome is required for regulation. Maybe it's because of ENCODE?

Since this is published in the Encyclopedia of Evolutionary Biology, I assume that this sort of evolutionary argument resonates with many evolutionary biologists. That's sad.


Rands, C. M., Meader, S., Ponting, C. P., and Lunter, G. (2014) 8.2% of the Human Genome Is Constrained: Variation in Rates of Turnover across Functional Element Classes in the Human Lineage. PLoS Genetics, 10(7), e1004525. [doi: 10.1371/journal.pgen.1004525]