Would you like some salt with that meta-analysis?
I have a vague recollection of reading a short article by a psychologist, in which the author and his colleague separately performed meta-analyses on certain parapsychological data. Meta-analysis involves, among other things, rating the studies under discussion as to their quality. The psychologist gave low ratings to many studies to which his colleague gave high ratings, and vice versa. The result: the psychologist’s colleague concluded that the data supported the existence of whatever phenomenon they were studying, whereas the psychologist concluded the opposite. In other words, each rated the studies subjectively, and each meta-analysis was more or less stacked toward the conclusion its author wanted or expected. If I remember correctly, the psychologist concluded that meta-analyses must not be particularly useful.
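To make that mechanism concrete, here is a minimal sketch with wholly hypothetical numbers of my own invention (none of it comes from the article I half remember): the same six study effects, weighted by two raters’ subjective quality scores, pool to effects of opposite sign.

```python
import numpy as np

# Hypothetical effect sizes for six studies of the same phenomenon.
effects = np.array([0.40, 0.35, -0.30, -0.25, 0.50, -0.45])

# Two raters score the quality of the very same studies differently:
# rater A trusts the positive studies, rater B trusts the negative ones.
quality_a = np.array([0.9, 0.8, 0.2, 0.1, 0.9, 0.2])
quality_b = np.array([0.2, 0.1, 0.9, 0.8, 0.1, 0.9])

for name, quality in (("rater A", quality_a), ("rater B", quality_b)):
    pooled = np.average(effects, weights=quality)  # quality-weighted mean effect
    print(f"{name}: pooled effect = {pooled:+.3f}")
# rater A: pooled effect = +0.295
# rater B: pooled effect = -0.237
```

Same data, different quality ratings, opposite conclusions.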
I cannot find that article, if it exists, but I suspect that the author was Ray Hyman. Professor Hyman discusses meta-analyses and parapsychology in a longer article here. In that article, Prof. Hyman notes that he and the statistician Jessica Utts evaluated a certain data set regarding parapsychology and came to opposite conclusions.
Notably, Prof. Hyman once performed a meta-analysis on the original ganzfeld experiments (never mind what those experiments involved), and concluded, in essence, that the experiments had been performed poorly. The parapsychologist Charles Honorton famously performed his own meta-analysis and drew the opposite conclusion. As Prof. Hyman notes, he and Mr. Honorton obtained results consistent with their preconceptions. They agreed that the database had enough problems that they could fairly draw no firm conclusions. The ganzfeld analyses failed because the two analysts could not agree on the quality of the studies. Other meta-analyses fail, for example, because of what is often called the file-drawer effect: unsuccessful experiments are not published but rather are left in the file drawer.
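The file-drawer effect is easy to demonstrate numerically. Here is a minimal sketch, under assumptions of my own (200 simulated studies, a true effect of exactly zero, and a rule that only positive, statistically significant results get published); it illustrates publication bias in general, not any particular study discussed above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_studies, n_subjects = 200, 30

published = []
for _ in range(n_studies):
    # Each study samples from a population whose true effect is exactly zero.
    sample = rng.normal(loc=0.0, scale=1.0, size=n_subjects)
    t, p = stats.ttest_1samp(sample, popmean=0.0)
    if p < 0.05 and t > 0:  # only "successful" studies escape the file drawer
        published.append(sample.mean())

print(f"{len(published)} of {n_studies} studies published")
print(f"naive pooled effect over published studies: {np.mean(published):.3f}")
```

Pooling only what leaves the drawer yields a solidly positive effect, even though the true effect is zero.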
I have just related almost everything I knew about meta-analyses until the other day, when “The metawars” by Jop de Vrieze appeared in Science magazine. Now I know that meta-analyses are burgeoning because they are relatively inexpensive to perform, yet they are often inconclusive, partly because of how researchers choose or rate the studies they include and how they try to correct for the file-drawer effect.
The Science paper is long, and I do not want to recapitulate it. It appears, though, that meta-analysts agree that, if they cannot make meta-analyses objective, at least they can make them transparent, so that they may be criticized. Others argue that protocols should be published in advance of the meta-analysis and that, in particularly controversial cases, “rival researchers” should get together and set up a meta-analysis of their own, if they cannot perform wholly new studies and analyze them. Mr. De Vrieze describes a protocol in which researchers at 23 different laboratories ran the same standardized experiment, whose results were then combined in a single meta-analysis. The pooled effect was very close to zero and settled a long-running debate as to whether self-control can be depleted (as muscles can be fatigued).
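Combining results from many laboratories running the same preregistered experiment typically comes down to an inverse-variance weighted average, the so-called fixed-effect estimate. The sketch below uses made-up per-lab numbers of my own; it is not the actual data from the 23 laboratories.

```python
import numpy as np

# Hypothetical per-laboratory effect sizes and standard errors.
effects = np.array([0.08, -0.03, 0.05, 0.00, -0.06, 0.02])
ses     = np.array([0.10,  0.09, 0.12, 0.08,  0.11, 0.10])

weights   = 1.0 / ses**2                      # precision (inverse-variance) weights
pooled    = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect: {pooled:.3f}, 95% CI: "
      f"[{pooled - 1.96 * pooled_se:.3f}, {pooled + 1.96 * pooled_se:.3f}]")
```

With per-lab effects scattered around zero, the pooled estimate lands near zero with a confidence interval straddling it, which is just what the self-control replication reportedly found.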
As for me, I will accept the results of all meta-analyses that conform to my preconceptions and take the rest with a grain of salt. On second thought, maybe I had better take them all with a grain of salt.