“I do not know why originalists and other legalists are not AI enthusiasts.”
That’s from Richard Posner’s How Judges Think, suggesting that legalists, who believe judges should do nothing more than apply clear, staid rules, ought to be fans of moving the judiciary toward some kind of artificial intelligence. It’s hard to tell whether he’s being coy when he says he doesn’t know why originalists wouldn’t be fans of AI, because his own somewhat jaded pragmatism would suggest that originalists are only selectively legalistic, and consider themselves legalists chiefly because legalism tends to lead to outcomes they favor – that is, the ends of the process matter more than the means.
In any case, it is interesting to consider whether something like the Bayesian diagnosis of evidence-based medicine could be applied to the law. That form of diagnosis uses our best statistical knowledge about symptoms, worked through a kind of dialogue tree, to guess at the root disease. Could something similar be devised for the law? In criminal trials, for example, statistical knowledge could be applied about the reliability of various forms of evidence, and about how multiple forms of evidence interact with each other statistically. (The reliability of eyewitnesses is a fascinatingly troublesome case, especially because jurors may be applying false heuristics in judging their accuracy: in two separate studies, recall of peripheral details about the crime was positively correlated with false identifications in criminal lineups, yet jurors believe the opposite to be the case.) This might be the basis of a useful tool for judges and juries to consult, just as humans make use of computers in advanced chess. Applications in constitutional law seem a little dicier. Something to consider, as in the eyewitness case above, is how people correct for cognitive errors once they learn of them. Studies suggest that people do not correct for bias consistently, and often over-adjust, producing an equally powerful error in the opposite direction.
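To make the Bayesian idea concrete, here is a minimal sketch of how pieces of evidence of varying reliability might be combined, with entirely made-up numbers and a naive assumption (made only for illustration) that the pieces are conditionally independent; real evidence often interacts, which is exactly the complication the paragraph above points at.

```python
def posterior_probability(prior: float, likelihood_ratios: list[float]) -> float:
    """Update a prior probability of guilt given independent pieces of evidence.

    Each piece of evidence is summarized as a likelihood ratio:
    P(evidence | guilty) / P(evidence | innocent).
    Posterior odds = prior odds * product of the likelihood ratios.
    """
    odds = prior / (1.0 - prior)          # convert probability to odds
    for lr in likelihood_ratios:
        odds *= lr                        # Bayesian update, one item at a time
    return odds / (1.0 + odds)            # convert odds back to probability

# Hypothetical case: a 10% prior, an eyewitness identification treated as
# only weakly diagnostic (LR = 2), and a stronger forensic match (LR = 20).
p = posterior_probability(0.10, [2.0, 20.0])
print(round(p, 3))  # 0.816
```

Note how the weakly diagnostic eyewitness contributes far less than the forensic match; a tool like this would at least force the relative weights of evidence to be stated explicitly rather than left to juror intuition.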
Photo: by afsart