Bill Dembski and the case of the unsupported assertion
While ID 'scientists' vociferously object to being labeled creationists, they share one notable feature with the creation scientists of the 80s: their frequent use of discredited sources. In a 1983 PBS special, Duane Gish of the Institute for Creation Research (ICR), a YEC institution, claimed that certain human proteins were more similar to bullfrog proteins than to their chimpanzee homologues, a claim that would be nearly inexplicable if our current understanding of evolution were correct. Yet despite countless public and private requests spanning the last 20 years, Gish has never provided a source for that claim, nor retracted it (see here for more).

A few years ago, Bill Dembski claimed that there was evidence of a biochemical system for which any slight modification would destroy not only the system's current function, but any possible function of that system whatsoever. He concluded that such a system could not have evolved through 'Darwinian' evolution, because of the supposed lack of functional intermediates between the current system and any hypothetical precursor. However, as I document in this post, not only is Dembski's claim unsupported by his lone source, Dembski himself has admitted as much, and yet he continues to assert the claim, even strengthening it in recent writings. One of those writings was even published in the IDists' "peer-reviewed" journal PCID, despite the editors' knowledge that the claim was unsupported. The intention of this blog entry is to add yet another example to the growing list of shoddy scholarship in IDist writings.
The intelligent design argument basically consists of two parts. The first is the claim that evolution cannot explain the origin of certain features of life. The second is that "intelligence" can, and therefore serves as a better explanation for the origin of those features. In his 1996 book Darwin's Black Box, Michael Behe made the first half of the argument regarding irreducibly complex (IC) biochemical systems, which are supposedly so complex that they defy evolutionary explanation. Because Behe felt there were no "detailed, testable models" explaining the origin of IC systems, he concluded that no such explanations were possible. While detailed, testable models for the origin of IC systems existed prior to DBB, and even more have emerged since its publication, IDists reject those explanations as not being detailed or testable enough for their satisfaction (see the Talkdesign article ID demystified for a more thorough analysis). Even if we took this argument at face value, it would still suffer from a major oversight: just because there is no known natural mechanism for the origin of IC systems now doesn't mean one won't be found in the future. In other words, absence of evidence is not evidence of absence. Commonly referred to as the appeal to ignorance, this fallacy is extremely important to the intelligent design argument because there is no evidentiary support for intelligent design (like, say, evidence of a designer). However, it is very easy for even a beginning biology grad student to propose a hypothetical model for the origin of an IC system. What IDists needed to keep this argument from being fallacious was a way to eliminate not only known natural mechanisms, but unknown ones as well. In describing his October 2002 article, "The Logical Underpinnings of ID", Bill Dembski made just such a claim:
I also note that there can be cases where all material mechanisms (known and unknown) can be precluded decisively.
The relevant section is on page 20, where he writes:
But there is now mounting evidence of biological systems for which any slight modification does not merely destroy the system’s existing function, but also destroys the possibility of any function of the system whatsoever (Axe, 2000). For such systems, neither direct nor indirect Darwinian pathways could account for them. In that case we would be dealing with an in-principle argument showing not merely that no known material mechanism is capable of accounting for the system but also that any unknown material mechanism is incapable of accounting for it as well.
Wow, that's a bold claim. If true, it would be pretty damning evidence against evolution. However, Dembski offers only a single citation to back it up: a 2000 Journal of Molecular Biology paper by Douglas Axe entitled "Extreme Functional Sensitivity to Conservative Amino Acid Changes on Enzyme Exteriors". Does this article really support Dembski's claim? Not even close.
Summary of Axe 2000

In the article, Axe reports the results of a series of mutagenesis experiments, focusing primarily on the TEM-1 gene, which encodes a bacterial beta-lactamase that confers resistance to the antibiotics penicillin and ampicillin. Axe made conservative amino acid substitutions to residues on the exterior of this protein. Conservative substitutions are ones that exchange an amino acid for another with the same basic shape and charge (e.g. leucine to isoleucine, arginine to lysine); the short sketch after this summary illustrates the idea. Additionally, because those residues lie outside the active site, they are not expected to affect the protein's activity. Evolutionary theory would therefore predict that they could switch to other, similar amino acids via neutral evolution. Axe tested this prediction by making an increasing number of conservative substitutions to see how many the protein could tolerate while still retaining function. He made four groups of substitutions (blaM, blaY, blaG, and blaB), with 10 substitutions in each group. Here are the results, in his own words:
The single-group substitutions in blaM, blaY, and blaG affect function only mildly, yet these substitutions result in >99% inactivation when combined.
Only the combination of all four groups resulted in the complete inactivation of the protein’s activity. Based on these experiments, Axe concluded that the exterior residues of a protein are more sensitive to conservative substitutions than previously thought.
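To make the idea of a conservative substitution concrete, here is a minimal sketch in Python; the class groupings are a standard textbook simplification, not the criteria Axe actually used to select his substitutions, and the function names are mine.

```python
# Rough physicochemical classes of the 20 amino acids (one-letter codes).
# This is a textbook-style simplification for illustration only, not the
# scheme used in Axe (2000).
AA_CLASSES = {
    "aliphatic/hydrophobic": set("AVLIM"),
    "aromatic":              set("FWY"),
    "positively charged":    set("KRH"),
    "negatively charged":    set("DE"),
    "polar uncharged":       set("STNQC"),
    "special":               set("GP"),
}

def aa_class(residue: str) -> str:
    """Return the rough class of a one-letter amino acid code."""
    for name, members in AA_CLASSES.items():
        if residue.upper() in members:
            return name
    raise ValueError(f"unknown residue: {residue}")

def is_conservative(original: str, replacement: str) -> bool:
    """A substitution is 'conservative' if both residues fall in the same class."""
    return aa_class(original) == aa_class(replacement)

# The two examples mentioned above:
print(is_conservative("L", "I"))  # leucine -> isoleucine: True
print(is_conservative("R", "K"))  # arginine -> lysine:    True
# A non-conservative change for contrast (charged -> hydrophobic):
print(is_conservative("D", "L"))  # aspartate -> leucine:  False
```

These are precisely the swaps least expected to matter, which is why Axe used them to probe how much neutral drift a protein's exterior can tolerate while still retaining function.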
Let’s compare this to Dembski’s interpretation:
any slight modification does not merely destroy the system’s existing function, but also destroys the possibility of any function of the system whatsoever.
On at least four levels Dembski’s conclusion regarding the Axe paper is completely and utterly wrong.
- Any slight modification destroys the function of the system. (note that I’m paraphrasing Dembski)
All of the single-group substitution mutants retained their beta-lactamase activity. Only when the groups were combined into triple- and quadruple-group mutants was activity nearly or completely abolished. So clearly there were modifications that did not destroy the function of the system.
- Any slight modification destroys the function of the system.
As previously mentioned, at least 30 substitutions were required to reduce activity by more than 99%, and 40 to abolish it completely. This amounts to about 20% of the exterior residues, or 10% of the total protein, which can hardly be considered "slight" by any definition of the word. One substitution would be slight; 30 to 40 would not. This is not just a semantic quibble: the changes that occur during gradual, 'Darwinian' evolution happen one substitution at a time (except in cases of recombination and exon shuffling).
- Any slight modification does not merely destroy the system’s existing function, but also destroys the possibility of any function of the system whatsoever.
I don’t know how Dembski can claim that the mutations destroyed other functions of the system, since Axe never tested for other functions. This is basically an appeal to ignorance. However, as it turns out, another group analyzed mutations in the active site of the exact same gene (TEM-1) and found that certain “slight modifications” drastically reduced the original function of the system (penicillin and ampicillin resistance), but increased a separate, distinct function (cephalosporin resistance). In their words:
When purified, mutant enzymes had increased activity against cephalosporin antibiotics but lost both thermodynamic stability and kinetic activity against their ancestral targets, penicillins.
(It should be noted that these researchers studied mutations within the active site, whereas Axe made mutations on the exterior of the protein.) So contrary to Dembski's claim, mutations that result in the loss of the original function do not necessarily cause the loss of any possible function. This is significant because a major contention of Dembski's is that functional protein sequences are completely isolated from one another, such that random mutation cannot convert one functional protein into another without passing through a non-functional state. However, recent research has shown that mutations can often enhance a new, secondary activity of a protein without seriously hindering its original function, a phenomenon known as promiscuity. Steve Reuland has blogged this topic here.
- Any slight modification does not merely destroy the system’s existing function, but also destroys the possibility of any function of the system whatsoever.
This point is a little confusing, but if you read Dembski's claim as a whole, you can infer what I think he is really getting at. Some mutations can indeed completely destroy a protein's function. Nonsense mutations, whereby a codon is mutated into a stop codon, creating a truncated protein, are one example; even so, many truncated proteins still retain some function. Frameshift mutations, in which a nucleotide is inserted into or deleted from the gene, altering its reading frame, more often than not lead to a nonfunctional protein (for an analogy, try shifting your hands one key to the right and typing a sentence; the short sketch after this list shows the genetic equivalent). However, I think what Dembski is trying to claim is a third possibility: that certain single amino acid substitutions can drastically affect the stability of the protein. If, at a key residue, a charged amino acid is substituted for a nonpolar one, or vice versa, the protein may be unable to fold properly, rendering it useless. In that case, you could say the mutation destroyed all possibility of function. However, this is a rare event (I can't think of an example), and it is certainly not demonstrated in the Axe paper. For Dembski's claim to be accurate, he would need a protein in which every mutation catastrophically disrupts stability. Axe never found a single catastrophic mutation in TEM-1, but plenty of non-catastrophic ones.
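To see why a frameshift is usually so destructive, here is a minimal sketch; the mini-gene sequence is invented purely for illustration. Inserting a single nucleotide shifts every downstream codon, so the translated product bears almost no resemblance to the original and here even runs into a premature stop codon.

```python
# Toy demonstration of a frameshift mutation using the standard genetic code.
# The DNA sequence is invented purely for illustration; '*' marks a stop codon.
BASES = "TCAG"
AMINO_ACIDS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODONS = [a + b + c for a in BASES for b in BASES for c in BASES]
CODON_TABLE = dict(zip(CODONS, AMINO_ACIDS))

def translate(dna: str) -> str:
    """Translate a DNA string codon by codon, ignoring any incomplete trailing codon."""
    return "".join(CODON_TABLE[dna[i:i + 3]]
                   for i in range(0, len(dna) - len(dna) % 3, 3))

original = "ATGGCTGAAGTTCTGAAAGGT"               # an invented mini-gene
frameshift = original[:3] + "C" + original[3:]   # insert one nucleotide after the start codon

print(translate(original))    # MAEVLKG  -- the original "protein"
print(translate(frameshift))  # MR*SSER  -- every codon after the insertion is different
```

Contrast that with a single conservative substitution, which changes only one residue and, as Axe's single-group results show, typically leaves function intact.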
Each of these errors by itself is enough to derail Dembski's conclusion, and each of the four elements of his claim is absolutely necessary for him to conclude "not merely that no known material mechanism is capable of accounting for the system but also that any unknown material mechanism is incapable of accounting for it as well". Taken together, these errors demonstrate that either Dembski really doesn't understand the evidence, or he is willfully misinterpreting it. While Dembski is not a biologist, and frequently makes claims regarding biology that display a fundamental lack of understanding of the field (see this thread for an example), I'm more inclined toward the latter. In fairness to Axe, his research wasn't even about Darwinian evolution or the origin of new protein functions. Rather, he focused on the amount of "drift" a protein's exterior can endure while still retaining function, which is an issue of neutral evolution. In fact, when asked directly whether his work supports Dembski's conclusions, Axe remained neutral on the topic, despite the fact that his research was supported in part by funding from the Discovery Institute (DI) and that he was a senior fellow of the DI. Here are his own words, as quoted in Forrest and Gross's Creationism's Trojan Horse [1]:
These three statements summarize my position:
- I remain open-minded with respect to the possibility that a sound argument can be made for intelligent design in biology.
- I have not attempted to make such an argument in any publications.
- Since I understand that Bill Dembski has referred to my work in making such an argument, I shall remain open to the possibility that my published findings may support such an inference until I have had a chance to see his argument.
See Trojan Horse if you want to read more about Axe’s association with the DI.
Calling Dembski's bluff

When Dembski first announced his "Logical Underpinnings of ID" article on the ISCID forum in November 2002, ID critics immediately pounced on him for making such an unsupported claim. After being hounded repeatedly, Dembski finally conceded:
I met with Douglas Axe at the recent RAPID conference and I've had a preview of where his research is going for some time (at least since the summer of 2000), so in reading his JMB paper, I'm anticipating where's he's going. **I agree that the JMB paper does not resolve the issues we are debating.** That's why I put it in terms of "preliminary indications."
(boldface added for emphasis)
So essentially Dembski agreed that his claim was unsupported by the Axe paper. Is it ethical to make an assertion using a reference that you know does not support it?
Ironically, one ISCID forum participant, "charlie_d", commented:
Hopefully, this very basic mistake will now cease to find its way into ID literature.
Unfortunately, he was very much mistaken.
Nearly a year later, in September 2003, Dembski mentioned the Axe paper again, this time in an FAQ entitled "Three Frequently Asked Questions About Intelligent Design", where he cited it as an example of peer-reviewed research supporting intelligent design.
This work shows that certain enzymes are extremely sensitive to perturbation. Perturbation in this case does not simply diminish existing function or alter function, but removes all possibility of function. This implies that neo-Darwinian theory has no purchase on these systems.
Not only does Dembski not soften his interpretation of the paper, he no longer bothers to qualify it as a “preliminary indication”. He did remove the “any slight modification” phrase, replacing it with the more nebulous “perturbation”. However, I don’t think readers will assume that perturbation means “30 to 40 amino acid substitutions”, especially considering that “neo-Darwinian theory” does not propose that proteins evolve 30 to 40 mutations at a time.
In January 2004, Dembski submitted another article for publication in ISCID's online journal PCID. This article, titled "Irreducible Complexity Revisited", purported to clarify the IC argument with greater scientific detail. Notice this passage, from page 34:
Moreover, recent work on the extreme functional sensitivity of proteins provides strong evidence that certain classes of proteins are in principle unevolvable by gradual means (and thus a fortiori by the Darwinian mechanism) because small perturbations of these proteins destroy all conceivable biological function (and not merely existing biological function).31
Yes, reference 31 is Axe 2000. Rather than tone the comment down, Dembski apparently decided to bump his claim up a notch or two. Now “preliminary indications” has evolved into “strong evidence”, and “perturbations” into “small perturbations”. Again Dembski implies that 30 amino acid substitutions is “gradual”. So even though Dembski himself admitted over a year ago that this claim was unsupported by Axe 2000, he felt no obligation to leave it out of future writings. Why correct when you can just reassert?
Articles submitted for publication in PCID often have threads created for them on the ISCID chat forum, to allow readers the opportunity to comment. I was so disgusted by this article (sadly, the repeated unsupported assertion wasn't even its biggest problem; see Mark Perakh's PT post for more) that I challenged the PCID editors (Micah Sparacio and John Bracht) in that ISCID thread to correct it.
there is at least one instance here where an erroneous interpretation, by dembski, of an article he cited that was corrected by ID critics on this very forum, was repeated again in this article. . ..
i think micah and john need to do some serious soul-searching and decide if this is the kind of material they want ISCID to be known for. i realize that ISCID doesn’t reject many articles, especially those submitted by their big-wigs, but if the editors of PCID want to continue to call their journal “peer-reviewed”, then they need to take responsibility for the material they present.
They certainly saw my challenge, because Micah offered this in response:
as a note, an article published in the Archive is not necessarily published in PCID. Dembski’s article, featured in this thread, has not yet been chosen for an issue of PCID.
According to their review standards, articles are first placed in the archive as a draft, where they are then subject to review by one or more of the ISCID fellows. Only after they are reviewed are they eligible for publication in PCID. Also of note is this requirement: “articles need to meet basic scholarly standards”. Sure enough, the article was published in the next issue of PCID. Was the statement citing the Axe paper removed? Nope.
Moreover, recent work on the extreme functional sensitivity of proteins provides strong evidence that certain classes of proteins are in principle unevolvable by gradual means (and thus a fortiori by the Darwinian mechanism) because small perturbations of these proteins destroy all conceivable biological function (and not merely existing biological function).31
So clearly, correctly supporting statements central to the primary focus of the article is not considered a “basic scholarly standard” for PCID.
Accountability

On another thread, created solely to discuss Axe's paper and in which PCID editor John Bracht participated, "charlie_d" asked John this question:
what do you think about instituting internal peer-review panels?
While John did not reply, another ISCID heavyweight, Paul Nelson, said this:
I think it’s a good idea. Something more informal than Charlie’s suggestion – in the way of critical peer review – is already going on (e.g., at the recent RAPID meeting), but as the ID research community matures, it’s going to need robust internal quality controls.
So here we have an example of the robust internal quality controls displayed by the ISCID fellows. Ironically, I also made this statement when “IC Revisited” was first made public:
i think it’s safe to assume that in the coming months, this article will be added to the pile of references ID proponents cite when their ideas are questioned by school boards or the press. the next time someone brings up a specific critique of IC, some DI rep like dembski will dismiss their critique as having been answered in this article. we’ve seen it before, we’ll see it again. the purpose of all of this is to give the appearance of a controversy.
And in that same issue of PCID:
Irreducible Complexity Reduced
Dembski shows [in “Irreducible Complexity Revisited”] that this common refrain [that “Behe’s arguments have somehow been refuted”] is inaccurate at best, and that the intervening period since the publication of Darwin’s Black Box has only underscored the acute lack of any meaningful explanation for the existence of irreducibly complex systems on the basis of Darwinian principles.
Is this really the quality of scholarship we can expect from PCID? I hope that either Dembski or the PCID editors offer a correction, or at least a defense of Dembski’s continued use of this unsupported assertion. If nothing else, he should at least stop using it, since he himself admits that the source does not support his claim. While I doubt any of this will happen, at least we have a documented example of the poor scholarship that IDists display in their works.
Footnotes:
1. Forrest and Gross, Creationism's Trojan Horse, 2003, pages 40-42.
(edited 2/16 to add a trackback to Mark Perakh's post on IC Revisited)