Where's the beef, Paul?
Fifteen months ago Paul Nelson made available a “discussion paper” on Ontogenetic Depth to serve as background for an ISCID chat. In that paper he claimed that
The ontogenetic depth of a handful of extant animals (from the model systems of developmental biology) is known with precision.
That was fifteen months ago. In the chat itself Nelson cut and pasted the same claim from the background paper, and said further that
But, as I said at the beginning, the start of any scientific answer begins with correctly understanding the problem. Ontogenetic depth helps to do that. This is what any candidate theory of animal origins has to explain. It is not itself an explanation, but a description.
So somehow or other, ontogenetic depth poses a problem for biology to explain, but we’re not told what the problem is: we are not given the metric Nelson uses to describe it. That’s of no particular help in “correctly understanding the problem.”
In a thread discussing that chat, one that ran for more than seven months, Nelson never described how to measure ontogenetic depth, nor did he say what the values asserted to be “known with precision” actually are. To be fair, that thread wandered off into ‘how could common descent be refuted?’ questions, and ontogenetic depth pretty much disappeared from view.
More recently, seven weeks ago in the comments following PZ Myers’ entry here on the Thumb, Nelson wrote
Quick note – I’m drafting an omnibus reply (to points raised here and in Shalizi’s commentary), with title and epigraph from a Rolling Stones song. I’ll post it tomorrow.
and then a week later,
I’m lecturing at the University of Maine (Orono) today, but will try to post the reply when I return to Chicago tomorrow. It’s pretty long: I think I’ll put it up at ISCID and link from here.
I think that after this history of unfulfilled promises we are entitled to conclude that Ontogenetic Depth is a fantasy. In spite of repeated requests, we have never seen the two critical pieces of information necessary to evaluate Ontogenetic Depth as a purported new metric:
- how to measure/calculate/estimate it; and
- systematic validation and calibration data.
I think it’s past time to call Nelson on this. Ontogenetic Depth most closely resembles the Explanatory Filter in this respect. Dembski and his colleagues repeatedly claim that there is a “scientific” methodology for detecting design in biological systems, and that claim is used as support in their quest to inject ID into the teaching of biology in public schools. For example, in DeWolf, Meyer, and deForrest’s Intelligent Design in Public School Science Curricula: A Legal Guidebook (here), we read
Mathematician William Dembski has, for instance, published an important work on the theoretical underpinnings for detecting design. In The Design Inference: Eliminating Chance Through Small Probabilities (Cambridge University Press, 1998) he shows how design is empirically detectable and therefore properly a part of science.
They repeat that claim in their Utah Law Review article, available at the same URL:
The [Explanatory] filter outlines a formal method by which scientists (as well as ordinary people) decide among three different types of explanations: chance, necessity, and design.(76) His [Dembski’s] “explanatory filter” constitutes, in effect, a scientific method for detecting the effects of intelligence.(77) (p. 61)
Yet like Ontogenetic Depth, that “scientific method for detecting the effects of intelligence” has never been validated or calibrated. More damning, as far as one can tell from their publications (peer reviewed or otherwise), ID “theorists” have never actually used it. It is an empty promise, vaporware, as is the promise of Ontogenetic Depth.
So, with apologies to Clara Peller, Where’s the beef, Paul?
RBH