Unanswered Criticism of Dembski's Specified Complexity
In 2003, Micah Sparacio collected a set of common criticisms of Dembski’s Specified Complexity. Since that initial collection, little seems to have been done to address the various criticisms raised. I have collected the ones I find particularly interesting, as they show the many and varied problems with the concept of specified complexity. C11 especially seems applicable to my argument that CSI is merely an unnecessarily complex way of stating that we do not yet understand how something with a function in biology may have arisen.
This is a work in progress, as I will be linking the claims to other relevant materials.
C7 [yersinia]: Dembski’s original formulation of the CSI (complex specified information, i.e. specified complexity) argument was to:
(1) rule out chance causes (which can only produce small amounts of “specified information” – the definition of this is difficult, but we can think of it as, e.g., “functional DNA changes with a probability of random occurrence greater than 10^-150”),
(2) rule out regular causes (which can only transmit “specified information”, not increase it), and
(3) If (1) and (2) were successful, conclude design.
However, this failed to rule out the very important possibility of variation + natural selection, a combination of chance & regularity, which could randomly generate small amounts of specified information via chance and then preserve them (on average) via selection. Thus even hundreds or thousands of bits of SI (at some point we pass the 10^-150 random-generation-probability limit and reach CSI) could be generated by gradual accumulation.
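The accumulation mechanism is easy to demonstrate with a toy model. The sketch below is a Dawkins-style “weasel” program – a hypothetical illustration of cumulative selection, not anyone’s published calculation: random variation occasionally improves a string by chance, and selection preserves the improvements, so matching characters (“specified” bits) accumulate far faster than a pure-chance search could find them.

```python
import random

# Toy model of variation + selection (Dawkins-style "weasel"):
# chance proposes small changes, selection keeps the best ones.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(s, rate=0.04):
    """Each character has a small chance of being randomized."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def matches(s):
    """Number of characters agreeing with the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def evolve(pop_size=100, seed=0):
    """Run mutation + selection until the target is reached."""
    random.seed(seed)
    current = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while current != TARGET:
        # Selection: keep the best of the parent and its mutated offspring.
        candidates = [mutate(current) for _ in range(pop_size)] + [current]
        current = max(candidates, key=matches)
        generations += 1
    return generations

# Pure chance needs ~27^28 (~10^40) draws to hit the 28-character target;
# cumulative selection typically finishes in well under a thousand
# generations of 100 offspring each.
print(evolve())
```

The point is not that this models biology, but that a chance + regularity process routinely produces outcomes whose probability of arising *in one step of pure chance* is far below the universal bound.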
To patch this hole, Dembski turned to Behe’s concept of Irreducible Complexity. The beginnings of this are seen in his book Intelligent Design, and IC is emphasized further in No Free Lunch. IMO Dembski’s SC argument is in fact entirely dependent on the IC argument.
So C7 is: Dembski’s SC argument boils down to Behe’s IC argument, thus the SC argument adds nothing to the debate.
C8 [yersinia] is that the IC argument has been subject to a number of severe criticisms, especially regarding indirect evolutionary pathways. In recent articles Dembski seems to have been hedging his bets by saying that even if convincing evolutionary pathways to IC/SC were found (to his satisfaction), SC would still somehow imply design via obscure means (front-loading the fitness function, though conceiving this in practical ecological terms is difficult).
Summary of C8: Dembski keeping an emergency backup design scenario, in case it turns out that IC/SC can evolve, renders the SC→ID argument unfalsifiable even in principle.
C9 [Alix Nenuphar]: Misuse of an inductive argument by the assertion of no false positives.
As I understand it, specified complexity as used in the filter guarantees no false positives. But the argument is inductive in nature, i.e. it relies on the possibility of sweeping the field of all chance, regularity, and chance+regularity scenarios, without examining each in detail beyond what is required to assign a probability to the scenario. Nothing in this process guarantees that some highly unlikely natural scenario might not in fact occur and be misidentified by the filter.
C10 [Erik]: One thing that can be said in favour of the definition of “specified complexity” is that it is detailed. To check if the definition is satisfied, one must specify a sample space, the set of hypotheses to be eliminated, the event under study, the specification, the value of the rejection function everywhere on the sample space, and background knowledge which “explicitly and univocally” identifies the rejection function. In practice, Dembski does not take his own definition seriously and in none of his examples has he provided the details needed to verify that the definition is satisfied. It is symptomatic that Dembski failed to specify any of these details in his analysis of the flagellum.
C11 [Erik]: The term “specified complexity” is a redundant, obfuscatory middle-man that serves no non-rhetorical purpose (it is apparently the name of the state of affairs that someone has successfully eliminated a set of non-ID hypotheses using the Explanatory Filter). It adds nothing to the actual argument, but it invites equivocation with other concepts with the same name (e.g. Paul Davies’s concept) and with intuitive concepts of “complexity” that lack any a priori connection to specified complexity. Dembski also seems to equivocate between specified complexity w.r.t. a uniform probability assumption and specified complexity w.r.t. all known natural causes.
C12 [Erik]: I have not checked all the relevant publications, but to the best of my knowledge at most one person has been able to apply Dembski’s concepts and methods to a real example, namely Dembski himself. It has been something like five years since the methods were first formulated, and only one real application (the flagellum calculation) has been published. That no one except its creator has been able to apply the method and concepts, not even to simpler non-trivial real-world cases than the origin of flagella, is clear testament to its lack of scientific utility in its current state.
C13 [Erik]: The form of the Explanatory Filter gives ID a free ride by asking us to accept a general “ID hypothesis” without evaluating the merits, or lack thereof, of this hypothesis. It also assumes the existence of a sharp dividing line separating non-intelligence and intelligence. Hypotheses involving intelligence are to be lumped into the general ID hypothesis and protected from being subject to critical evaluations of their merits. This assumption is made without a definition of “intelligence”.
C14 [Erik]: The definition of the concept of “specification” is so subjective that specifications, like the appeal of a painting, are in the eye of the beholder. To establish that something is a “specification”, all you do (and can do!) is assert that you have background knowledge that allows you to explicitly and univocally identify a superset of the event in question without recourse to the event, and hope that the rest of the world believes you.
C15 [Erik]: The Universal Probability Bound is a reasonable estimate iff the definitions are strictly adhered to and intelligence is not as magical as Dembski assumes. This means, among other things, that one must be sure to specify the rejection function on the entire sample space. Since the definitions are not strictly adhered to in practice, there is no reason to think that the UPB is an underestimate of the appropriate probability. In Dembski’s terminology, vagueness translates to lots of specificational resources. Regarding intelligence, we must assume that the intelligent agent that applies Dembski’s method is not sufficiently magical and creative to (e.g.) come up with a specification for every observed event, whatever it is. If intelligent agents can escape the implications of the NFL theorems for learning/inference and optimization, and do things that no natural causes can, then what prevents them from inventing a (non-trivial) specification for every event they investigate?
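For reference, the Universal Probability Bound discussed in C15 comes from Dembski’s own back-of-the-envelope product in The Design Inference: the number of elementary particles in the observable universe, times a maximal rate of state transitions, times a generous duration. A quick check of that arithmetic (using his published figures) and of the corresponding “500 bits” threshold:

```python
import math

# Dembski's Universal Probability Bound, from his own figures:
# an upper bound on the number of specified events that could
# occur anywhere in the history of the observable universe.
particles = 10**80            # elementary particles in the observable universe
transitions_per_sec = 10**45  # ~inverse Planck time
seconds = 10**25              # his generous allowance of time

max_events = particles * transitions_per_sec * seconds  # 10^150
upb = 1 / max_events                                    # 10^-150

bits = math.log2(max_events)  # the oft-quoted "500 bits" threshold
print(f"max_events = 10^{len(str(max_events)) - 1}, ~{bits:.1f} bits")
```

So an event must be specified *and* have probability below 10^-150 (roughly 498 bits) before the filter rules out chance – which is why C15’s point about vague specifications inflating the specificational resources matters: the bound is only meaningful if the definitions are applied strictly.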
C19 [Gedanken]:
Take a case in which the prior probability is extremely low that a designer could effect the potential “design” being observed. (By this I do not mean that this is a generally usable method for evaluating cases; rather, I am stipulating that in this case the prior probability can be known. I do not mean that such a prior probability can regularly be known.) Also assume that there is a rather high probability that something was missed in the steps of analyzing chance and necessity in the explanatory filter. (In other words, the “argument from ignorance” aspect may actually involve an important case that the observer is ignorant of, and in this case that probability is high.) In such a case the Bayesian posterior probability that the “designer did it” is often lower than the posterior probability that the missed case is the explanation. Now consider cases in which the prior probability is unknown (a basic assumption of the normal application of the “explanatory filter”): the reasonableness of the EF depends on the actual prior probability, even though it is unknown. If one has certain religious reasons, for example, for holding differing views of that prior probability, then the result changes based on those views. The EF is not an objective methodology, and its “reliability” differs depending on precisely that prior probability.
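Gedanken’s point can be put in numbers. The sketch below uses purely hypothetical priors and a simple two-hypothesis Bayes calculation (the EF itself never models priors; the function name and figures are mine, for illustration only):

```python
# Hypothetical illustration of C19: the verdict "design did it" depends on
# priors the Explanatory Filter never models.
def posterior_design(p_design, p_missed, lik_design=1.0, lik_missed=1.0):
    """P(design | the EF eliminated all *known* chance/necessity hypotheses),
    compared against the single rival 'an unknown natural cause was missed'."""
    num = p_design * lik_design
    return num / (num + p_missed * lik_missed)

# Case 1: very low prior that a capable designer exists, modest prior that
# some natural pathway was overlooked -> "missed cause" dominates.
print(posterior_design(p_design=1e-6, p_missed=0.1))  # ~1e-5

# Case 2: different (e.g. religiously motivated) priors flip the conclusion.
print(posterior_design(p_design=0.5, p_missed=0.1))   # ~0.83
```

Same filter output, opposite conclusions – the difference is entirely in the unexamined priors, which is exactly the charge of non-objectivity.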
Imagine that a supporter–let’s call him Bob–of Dembski’s Explanatory Filter is looking at two 150-digit liquid crystal displays. Bob knows from reliable sources that LCD 1 displays a 150-digit random number, which is either drawn according to a uniform probability distribution or chosen directly by a human. A new number is drawn, by one of the methods, every day at 12 and displayed on LCD 1 starting one hour later. LCD 2 displays the same number starting 30 minutes after it was drawn, but Bob doesn’t know that. Thus, if Bob looks at the two displays as LCD 1 is updated with today’s number, he will observe that it is identical to that displayed on LCD 2. He might be tempted to use LCD 2 as a specification for the outcome of LCD 1 and conclude, using the Explanatory Filter, that the number was chosen by a human, even if it was generated randomly. (For a more creative and amusing example, see Sobel’s review of “The Design Inference”.)
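Bob’s predicament is mechanical enough to simulate. In the sketch below, `explanatory_filter` is my stand-in for the filter’s final elimination step (specified event with probability below the UPB ⇒ infer design), not Dembski’s full procedure; because LCD 2 secretly mirrors LCD 1, the “specification” always matches and the filter fires on a purely random number:

```python
import random

# Sketch of Bob's situation: LCD 2 mirrors LCD 1, so using it as a
# "specification" guarantees a match and a (false) design inference.
def draw_number(digits=150):
    """A 150-digit number drawn uniformly at random -- pure chance."""
    return "".join(random.choice("0123456789") for _ in range(digits))

def explanatory_filter(event, specification, upb=10**-150):
    """Stand-in for the EF's last step: infer design if the event fits an
    independent-looking specification and falls below the probability bound."""
    p_chance = 10.0 ** -len(event)  # uniform over 150-digit strings
    return event == specification and p_chance <= upb

todays_number = draw_number()               # generated by pure chance
lcd1, lcd2 = todays_number, todays_number   # LCD 2 secretly mirrors LCD 1

print(explanatory_filter(lcd1, lcd2))  # True: a false positive every day
```

The false positive is guaranteed not by any flaw in the arithmetic but by the hidden causal dependence between event and “specification” – which is what C22 below takes up.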
C22 [Erik]: The requirement that the items of knowledge determining the specification and the event to be specified must be statistically independent is either practically impossible to verify or ineffective at ensuring Dembski’s claim of no false positives. It is possible to interpret the condition strictly, so that it serves its theoretical purpose well. If the condition is interpreted in an objective fashion, so that (e.g.) Bob above (or the reader of Sobel’s review) could be faulted if he applied the Explanatory Filter without first having made sure that the numbers of LCD 1 and LCD 2 are uncorrelated before using the latter as a specification, then it is difficult to verify the validity of specifications in practice. On the other hand, if the independence criterion is interpreted to be less demanding, then it does not ensure that false positives cannot occur (with a non-negligible frequency).