Published Research? Bah, Humbug!
A reader has drawn our attention to a thought-provoking article by John P. A. Ioannidis in the November 2005 edition of PLoS Medicine, entitled "Why Most Published Research Findings Are False."
The official blog of www.daubertontheweb.com
2 Comments:
Doesn't this imply that Daubert isn't strict enough when it comes to plaintiffs' evidence? After all, the problem the paper identifies is a surplus of false positives.
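For readers who want the arithmetic behind that "surplus of false positives," Ioannidis's core point can be sketched in a few lines. The numbers below are illustrative assumptions, not figures from the paper: if only a modest fraction of tested hypotheses are true, a meaningful share of statistically "significant" findings will be false positives.

```python
# Illustrative sketch of the false-positive arithmetic (assumed numbers,
# not taken from Ioannidis's paper).
def positive_predictive_value(prior, power, alpha):
    """Share of 'significant' findings that reflect a real effect."""
    true_positives = prior * power          # true hypotheses correctly detected
    false_positives = (1 - prior) * alpha   # false hypotheses wrongly "confirmed"
    return true_positives / (true_positives + false_positives)

# Suppose 1 in 10 tested hypotheses is actually true, studies run at
# 80% power, and the significance threshold is the conventional 0.05.
ppv = positive_predictive_value(prior=0.1, power=0.8, alpha=0.05)
print(round(ppv, 2))  # 0.64: roughly a third of positive findings are false
```

On these (hypothetical) inputs, about 36% of published positive results would be false, even before accounting for the biases and flexible analyses the article also discusses.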
I think you are correct, Ted, that if the article's theses were taken as true, they would tend to dilute the strength of any inference drawn from published studies that reject the null hypothesis and find some positive effect (say, an association between a bodily insult and some health outcome). Suppose you also believed that "Daubert" should dictate the exclusion of evidence at rates varying inversely with the strength of the inference the evidence is offered to support (as you may, Ted). Then yes, it might follow that "Daubert" should be less accepting, in general, of positive peer-reviewed results. That would still leave the question whether the baseline level of acceptance was ever too high to begin with; perhaps judicial rulings have anticipated, sub silentio, the critiques in this article and discounted admissibility accordingly.
To me, though, there seem to be at least three problems with the proposed train of reasoning. First, the abstract proposition that many published results are "false" does not, by itself, identify any methodological fallacy or empirical error in a specific instance. It is more like a modern variant of radical Cartesian doubt: it may warrant increased skepticism about the certainty of our knowledge in general, but it does little, by itself, to separate reliable from unreliable knowledge in particular cases.
Second, and relatedly, there are different practical consequences that might be seen as flowing from the overall skeptical conclusion. Some might react by feeling that published studies contribute less than previously supposed to carrying a plaintiff's burden of persuasion and that expert ascriptions of causation should be admitted into evidence less frequently. But others might feel that Daubert's preference for published peer-reviewed literature over other foundations for expert testimony should be relaxed, if this barometer of reliability is less trustworthy than previously supposed.
Third, it might be felt that the law of evidence should busy itself, at most, with weeding out expert testimony that suffers from some identifiable and significant logical or methodological vice, leaving it to juries to decide what weight to assign to testimony that rests on rationally defensible argument. If a form of evidence is really shown to be less consistently accurate than previously believed, the conclusion to draw isn't necessarily that juries shouldn't hear it. The conclusion might be that juries should scrutinize that form of evidence more carefully than before.