United States v. Rhodes

Decision Date: 17 January 2023
Docket Number: 3:19-cr-00333-MC
Parties: UNITED STATES OF AMERICA, v. RONALD CLAYTON RHODES, a/k/a “Big Fly,” LORENZO LARON JONES, a/k/a “Low Down,” Defendants.
Court: U.S. District Court, District of Oregon
OPINION AND ORDER

Michael J. McShane United States District Judge.

On August 4-5, 2022, this Court presided over an evidentiary hearing regarding Defendant Rhodes's Motion for Daubert Hearing Regarding Admissibility of Toolmark Comparison Evidence, ECF 290, and Defendant Jones's related Motion to Limit the Presentation of Ballistics Comparison Evidence by the Government, ECF 291. For the reasons set forth below, Defendants' motions related to ballistic or toolmark comparison evidence, ECF 290 and ECF 291, are DENIED.

STANDARDS

Federal Rule of Evidence 702 governs the admissibility of expert testimony. It provides that:

A witness who is qualified as an expert by knowledge, skill, experience, training, or education may testify in the form of an opinion or otherwise if:
(a) the expert's scientific, technical, or other specialized knowledge will help the trier of fact to understand the evidence or to determine a fact in issue;
(b) the testimony is based on sufficient facts or data;
(c) the testimony is the product of reliable principles and methods; and
(d) the expert has reliably applied the principles and methods to the facts of the case.

“Under Daubert[1] and its progeny, including Daubert II[2], a district court's inquiry into admissibility is a flexible one.” City of Pomona v. SQM N. Am. Corp., 750 F.3d 1036, 1043 (9th Cir. 2014) (citing Alaska Rent-A-Car, Inc. v. Avis Budget Grp., Inc., 738 F.3d 960, 969 (9th Cir. 2013)). The trial court serves as “a gatekeeper, not a fact finder.” Primiano v. Cook, 598 F.3d 558, 565 (9th Cir. 2010) (internal quotation marks and citation omitted). The court “screen[s] the jury from unreliable nonsense opinions” but does not “exclude opinions merely because they are impeachable.” Alaska Rent-A-Car, 738 F.3d at 969-70. “The district court is not tasked with deciding whether the expert is right or wrong, just whether his testimony has substance such that it would be helpful to a jury.” Id.

Before admitting expert testimony into evidence, district court judges must determine whether the evidence is reliable and relevant under Rule 702. Wendell v. GlaxoSmithKline LLC, 858 F.3d 1227, 1232 (9th Cir. 2017). “Expert opinion testimony is relevant if the knowledge underlying it has a valid connection to the pertinent inquiry. And it is reliable if the knowledge underlying it has a reliable basis in the knowledge and experience of the relevant discipline.” Primiano, 598 F.3d at 565 (internal quotation marks and citation omitted). “Reliable expert testimony need only be relevant, and need not establish every element that the plaintiff must prove, in order to be admissible.” Id.

Determinations of the reliability of scientific expert testimony are guided by the factors outlined by the Supreme Court in Daubert. See Daubert, 509 U.S. at 593-95 (outlining the nonexclusive factors of (1) general acceptance in the scientific community, (2) peer review and publication, (3) testability, and (4) error rate). Courts have recognized that this inquiry is flexible, and that these factors “neither necessarily nor exclusively appl[y] to all experts or in every case.” Kumho Tire Co., Ltd. v. Carmichael, 526 U.S. 137, 141 (1999). The court must consider whether the rate of error is sufficiently low, though the Ninth Circuit has said a methodology “need not be flawless in order to be admissible.” United States v. Prime, 431 F.3d 1147, 1153 (9th Cir. 2005). Finally, another “significant fact” is whether the expert is testifying “about matters growing naturally and directly out of research they have conducted independent of the litigation, or whether they have developed their opinions expressly for purposes of testifying.” Daubert II, 43 F.3d at 1317.

The Daubert standard of reliability applies not only to scientific testimony but also to testimony based on technical or other specialized knowledge. Kumho Tire Co., Ltd., 526 U.S. at 141. But the Supreme Court also made clear that “the law grants a district court broad latitude when it decides how to determine reliability,” and that the Daubert factors “neither necessarily nor exclusively appl[y] to all experts or in every case.” Id. (emphasis in original); see also id. at 151 (explaining that certain factors may be more or less relevant). The Ninth Circuit has explained that when applying Daubert to non-scientific expert testimony, the district court “may consider the specific factors identified where they are reasonable measures of the reliability of proffered expert testimony,” but the court is not bound to “mechanically apply the Daubert factors.” United States v. Hankey, 203 F.3d 1160, 1168 (9th Cir. 2000) (citation omitted).

DISCUSSION

Defendants challenge the testimony of three Government experts with regard to firearm toolmark comparison evidence: Leland Samuelson and Shawn Malikowski, firearms and toolmark experts with the Oregon State Police (“OSP”) Forensic Laboratory; and Erich Smith, a forensic examiner with the FBI Laboratory. ECF 272 at 1. Having reviewed the parties' briefing and considered the testimony and argument made at the Daubert hearing, this Court determines that the testimony of all three experts is admissible.

1. Testability

Courts, including those that have excluded toolmark comparison evidence, have repeatedly found toolmark identification to be testable. See United States v. Johnson, Case No. (S5) 16 Cr. 281 (PGG), 2019 WL 1130258, at *15 (S.D.N.Y. Mar. 11, 2019) (“There appears to be little dispute that toolmark identification is testable as a general matter.”); United States v. Ashburn, 88 F.Supp.3d 239, 245 (E.D.N.Y. 2015) (“The AFTE methodology has been repeatedly tested.”); United States v. Tibbs, Case No. 2016-CF1-19431, 2019 WL 4359486, at *7 (D.C. Super. Sep. 5, 2019) (“Although the NRC and PCAST reports have levied significant criticisms against firearms and toolmark analysis, courts have found that such reports do not affect the method's testability....[Toolmark analysis] can be, and ha[s] been, tested.”).

This Court acknowledges that at least one judge in this District has found that toolmark analysis was not testable for Daubert purposes. See United States v. Adams, 444 F.Supp.3d 1248, 1260-64 (D. Or. 2020). In Adams, the court found that the proffered expert had failed to provide evidence that his toolmark analysis could be replicated. Id. at 1260 (“I do not, however, find that the AFTE comparison testing methodology, as described by [the proffered expert], is replicable.”). As a result, the court found in that case that the government did not satisfy the testability prong of Daubert. Id. at 1264. But this Court does not read Adams to say that the AFTE methodology or toolmark comparison is never testable as a matter of law; rather, the expert in Adams “could not define th[e] baseline in any objective way, nor could he explain the role it played in the actual comparison he made in this case.” Id. at 1261.

Here, on the other hand, Erich Smith, a Physical Scientist Forensic Examiner for the FBI, provided extensive testimony not only on AFTE methodology, but also on the standards and training programs instituted at the FBI labs.[3] As part of that training, agents “look at thousands of comparisons of known matches and known non-matches.” Aug. 4, 2022 Tr. 14. Trainees appear before three oral boards, with “competency tests built into them,” testing the trainees' knowledge and ability in comparing toolmark evidence. Id. at Tr. 15. After two years, the trainee may become a “qualified examiner.” Qualified examiners take yearly proficiency tests in firearms identification and toolmark identification. Id. at 17-18. These tests, performed by a third-party testing agency, evaluate both the examiner and the quality assurance system of the lab. The tests are treated exactly like ordinary evidence. Id. at 18. As the tests have “a known ground truth,” the examiner receives a full report on the accuracy of the results. Id. at 19.

In addition to these yearly proficiency tests, Smith testified to his personal involvement in approximately 25 validation studies over the past 25 years focusing on firearm toolmark examiner accuracy. Id. at 27-28. Many of these studies, most of which were “black box” studies, were specifically designed to test how often examiners incorrectly identified a toolmark as coming from a particular firearm; i.e., how often examiners returned a false positive. The Court agrees with the great weight of authority finding AFTE methodology can, and has been, tested. See United States v. Chavez, Case No. 15-CR-00285-LHK-1, 2021 WL 5882466, at *2 (N.D. Cal. Dec. 13, 2021) (listing cases and noting that the “fact that numerous studies have been conducted testing the validity and accuracy of the AFTE method” strongly suggests the method has and can be tested).

Additionally, this Court does not measure replicability in the way proposed in Adams. In Adams, the court held that, “because [the AFTE methodology] cannot be explained in a way that would allow an uninitiated person to perform the same test in the same way that [the expert] did,” the methodology was not replicable or testable. Adams, 444 F.Supp.3d at 1264. This focus on an uninitiated person is, in the Court's view, misplaced. “Under Daubert's testability factor, the primary requirement is that ‘[s]omeone else using the same data and methods . . . be able to replicate the result[s].'” City of Pomona v. SQM N. Am. Corp., 750 F.3d 1036, 1047 (9th Cir. 2014) (citation omitted). Even if the expert in Adams...
