Ted Frank on Expert Batting Averages
Over at overlawyered.com, Ted Frank has chimed in, in his reliably thoughtful way, in response to my post of August 30 on expert batting averages.
He is right to suggest that statistical analysis of expert testimony may raise more questions than it answers, and to observe that it is often more illuminating, in searching for explanations, to evaluate each expert's testimony in light of its context in each particular case.
Is Frank also correct to hypothesize that forensic experts and criminalists do better than experts in civil cases partly because prosecutions are state-funded endeavors? Probably so. Moreover, challenges to civil experts by corporate defendants are often well-subsidized too, and that may bring admissibility rates down in civil cases. It is also possible that statistical comparisons are especially suspect here, because different kinds of experts tend to testify in criminal versus civil proceedings. In short, the overall numbers probably tell us very little about the reasons actually driving case outcomes. In the end, the numbers only point to potential issues, and one must then look deeper.
Still, if one does look deeper and surveys the actual federal appellate decisions, it is hard to shake the impression that on the whole, Daubert scrutiny is less exacting in the criminal context. This could be partly because persons facing incarceration are less likely than litigants facing civil judgments to forgo weak appellate arguments, which would make for a certain tendency to give Daubert arguments short appellate shrift in criminal cases. But my own sense is that a de facto double standard may also be in play, and to me, that possibility warrants more systematic study than it has so far received, in a nation that prides itself on the fairness of its criminal justice system, at least by comparison with regimes elsewhere. It is even possible that in the end, certain kinds of evidence commonly offered in criminal proceedings should in fact be evaluated under different standards from the ones generally employed in civil litigation, if only because different kinds of evidence may require different treatment -- but probably, if we're going to do that, we should be explicit about what the different standards really are.
Frank also ponders various possible reasons why the chances of expert admissibility in civil cases should closely approximate those of a coin flip, and ventures that in the end this may be a coincidental artifact, perhaps partly attributable to a sample drawn from appellate dispositions. Again, he may be right. More people should be doing statistical work here. True though Frank's observation is that statistical analysis can't easily capture the play of doctrinal considerations in individual cases, it can at least afford clues about trends that may warrant closer investigation. As for the fifty percent civil batting average itself, other students of the problem, working from samples of district court decisions rather than appellate ones, have come up with roughly comparable figures. For example, researchers working under the auspices of the Federal Judicial Center reported in 2002 on a 1998 survey of all active federal district judges, of whom 41% said they limited or excluded expert evidence in their most recent civil trial. And a 2001 Rand study, which analyzed 399 district court opinions issued between January 1980 and June 1999, found, if I'm reading it right, that in civil cases where the reliability of expert evidence was addressed, post-Daubert district courts were finding it unreliable at rates roughly ranging between 60% and 70% (controlling for case type, substantive area of evidence, and appellate circuit).
So if the FJC is right, the prospects for civil experts are not so bleak as my coin-flip numbers had suggested. And if Rand is right, the prospects are even bleaker. The data from both studies, of course, mostly predated the Supreme Court's 1999 decision in Kumho Tire.
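To make Frank's coincidence hypothesis a bit more concrete, here is a minimal sketch, in Python, of the sort of back-of-the-envelope check one might run: given a sample of appellate dispositions with an observed admissibility rate near fifty percent, how wide is the uncertainty around that figure, and can it be distinguished from a true coin flip? The counts below are invented purely for illustration and are not drawn from my own tally or from the FJC or Rand studies.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% by default)."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half, center + half

def binom_two_sided_p(successes, n, p0=0.5):
    """Exact two-sided binomial test against a null proportion p0."""
    observed = math.comb(n, successes) * p0**successes * (1 - p0)**(n - successes)
    total = 0.0
    for k in range(n + 1):
        prob = math.comb(n, k) * p0**k * (1 - p0)**(n - k)
        if prob <= observed + 1e-12:  # sum all outcomes at least as unlikely as the one observed
            total += prob
    return min(total, 1.0)

# Hypothetical figures, for illustration only: 104 rulings favorable to the
# expert out of 200 appellate dispositions.
admitted, n = 104, 200
low, high = wilson_interval(admitted, n)
print(f"observed rate: {admitted / n:.2f}")
print(f"95% interval: {low:.2f} to {high:.2f}")
print(f"p-value against a true coin flip: {binom_two_sided_p(admitted, n):.2f}")
```

With these invented numbers, the interval runs from roughly 45% to 59% and the test cannot tell the observed rate apart from a coin flip -- which is only to illustrate Frank's point that an apparent fifty-fifty figure, standing alone, may not mean very much.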
P.S. to Ted Frank: Thanks for drawing the Berlyn decision to my attention. That particular fish had somehow eluded my net.