

Sep. 29, 2017

Crime and algorithms


Daniel Grunfeld

Executive Vice Dean for Strategy and Partnerships, Pardee RAND Graduate School

300 S Grand Ave
Los Angeles, CA 90071

Email: grunfeld@rand.org

Previously, Dan served in leadership roles at two international law firms, as deputy chief of staff for policy for former Los Angeles Mayor Antonio Villaraigosa, and as president/CEO of Public Counsel.

GRUNFELD AT LARGE

We live in a world of algorithms. We love in a time of algorithms. We invest our hard-earned dollars by means of algorithms. We use algorithms to navigate our cars while listening to music selected by algorithms. You get the point.

Data collection, and our reliance on it, have evolved extremely rapidly. The resulting algorithms have proved invaluable for organizing, evaluating and utilizing information. It’s a rare day when our lives are not impacted by them.

Network analysis algorithms give you your Google search results, Facebook news feed and Netflix and Hulu recommendations. Routing and matching algorithms guide your Uber through real-time traffic to the most efficient route. According to the Pew Research Center, one in 20 Americans met their future spouse or committed partner through online dating compatibility algorithms. And pity the poor would-be traveler who scouts a hotel online, has second thoughts and then cannot escape, seemingly forever, the ongoing targeted online advertising blasts from any and all Hawaiian getaways.

These successes, made possible, in part, by what can appear to be the removal of human error from the decision-making process, have led many to view algorithms as flawless, almost magical solutions — offering what my RAND colleagues, Osonde Osoba and William Welser IV, describe as an “aura of objectivity and infallibility” in their recent paper, “An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence.” Because “numbers do not lie,” we assume that if we adhere to the algorithm, we will benefit from the best and least biased results.

But how do individuals’ rights come into play when data pertaining to their lives is compiled to build algorithms, and the resulting tools are then applied to judge them?

Certain algorithms built with machine learning tools tend to mimic human biases; in some respects, an algorithm’s repetitive nature simply replicates those biases at scale. Those shortcomings can have far-reaching consequences. Imperfect travel routes or poor romantic matching are frustrating and annoying. But they are trivial when compared to algorithms’ impact on weightier matters, like crime and punishment.

In the justice system, algorithms are now commonly relied upon to help determine an individual’s sentence, whether parole is granted or, on a broader basis, how police resources are assigned to a given area. Many states, for example, now use the Correctional Offender Management Profiling for Alternative Sanctions, or COMPAS, risk assessment to estimate the likelihood of recidivism. It and similar tools were developed to improve accuracy and efficiency in setting bail by applying algorithms, built from data on previous arrestees, to predict the likelihood that someone will commit another crime.

That didn’t sit so well with Eric Loomis when he was arrested in 2013 for driving a stolen vehicle connected with a drive-by shooting in Wisconsin. After the COMPAS risk assessment was applied to his case, Loomis was sentenced to six years in prison. See State v. Loomis, 881 N.W.2d 749 (Wis. 2016). He appealed his sentence, claiming his due process rights were violated. COMPAS’ proprietary nature, he maintained, prevented him from challenging the assessment’s accuracy. In addition, he claimed COMPAS’ methodology yielded racially discriminatory effects.

The Wisconsin Supreme Court heard Loomis’ claim last year in State v. Loomis. The court highlighted several significant concerns about the COMPAS risk assessment algorithms. Because COMPAS is proprietary, judges, lawyers and defendants did not know how certain risk factors were weighted in the algorithm’s scoring system, making it harder to identify and correct mistakes. In addition, studies showed that risk assessment scores can disproportionately classify minorities as more likely to reoffend. The underlying data also originated from a national sample, making it less relevant for the Wisconsin population. And the algorithm was designed not for sentencing, but for determining treatment options and conditions of parole.

The court ultimately ruled against Loomis, finding that the sentencing judge had given the assessment “little or no weight” in his decision, and that Loomis would have received the same sentence even without it. However, the Wisconsin Supreme Court also circumscribed future use of COMPAS. Under the ruling, presentence investigation reports that utilize COMPAS must include written instructions cautioning sentencing judges that risk scores may not be used to determine whether an offender is incarcerated or to determine the severity of the sentence. Nor can risk scores be the determinative factor in deciding whether the offender can be supervised safely in the community.

Ideally, algorithms provide objective, data-driven measures that help guide decision-making. And some algorithms do precisely that, decreasing bias in the decision-making process. But as the Loomis court recognized, others do not. So how can mathematical equations be biased? The problem starts with the information collected.

“With limited human direction,” write Osoba and Welser, “an artificial agent is only as good as the data it learns from. Automated learning on inherently biased data leads to biased results.” For example, arrest rates filtered by ZIP code may contain racial bias. In many communities, residents of traditionally African-American neighborhoods have statistically been more likely to be arrested because of racially biased policing and other factors. Defendants who reside in one of those neighborhoods may therefore be more likely to be scored as flight risks or as greater risks of recidivism, simply based on where they live.
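To make that mechanism concrete, here is a stripped-down sketch in Python. It is not COMPAS or any real risk tool, and every number in it is invented for illustration; it simply shows how two neighborhoods with identical underlying offense rates can end up with very different learned “risk scores” when the data records arrests rather than behavior.

import random

random.seed(0)

# Invented numbers: both ZIP codes have the same true offense rate,
# but offenses in zip_A are far more likely to result in an arrest record.
TRUE_OFFENSE_RATE = 0.10
ARREST_GIVEN_OFFENSE = {"zip_A": 0.90, "zip_B": 0.30}

def observed_arrest_rate(zip_code, residents=100_000):
    """Fraction of simulated residents who end up with an arrest on record."""
    arrests = 0
    for _ in range(residents):
        offended = random.random() < TRUE_OFFENSE_RATE
        if offended and random.random() < ARREST_GIVEN_OFFENSE[zip_code]:
            arrests += 1
    return arrests / residents

# A naive model that treats a ZIP code's historical arrest rate as its
# "risk score" learns the policing disparity, not the underlying behavior.
for zip_code in ("zip_A", "zip_B"):
    print(zip_code, "learned risk score:", round(observed_arrest_rate(zip_code), 3))
# Typical output: zip_A about 0.09, zip_B about 0.03, a threefold gap
# between two neighborhoods whose residents behave identically.

The model never sees the policing disparity directly; it only sees the arrest records the disparity produced, which is exactly the problem Osoba and Welser describe.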

Perhaps even more troubling is algorithms’ iterative nature. Some algorithms “learn” from themselves by running repeatedly, analyzing the results, reapplying that analysis and compounding it anew. This process creates numerous models, which are then analyzed and tested as more data becomes available. If even a small amount of bias is introduced, the algorithm can replicate and exacerbate it. “It is not immediately clear that the algorithm’s recommendations will look strange or weird, and it’s not clear that the effects will impact certain groups unfairly,” explains Welser. “What looks like a 1 percent to 2 percent difference initially can lead to larger problems over time, and there isn’t a clear trail of breadcrumbs to see what went wrong.”
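The compounding Welser describes can be shown with an equally artificial toy model, assumed here purely for demonstration: scores steer where attention and data collection go, the new records retrain the scores, and a two-point gap widens round after round even though nothing about the underlying behavior changes.

# Toy feedback loop (an invented model, not any deployed system). Areas A and B
# behave identically; only the starting scores differ by two points.
def run_feedback(score_a=0.51, score_b=0.49, rounds=8, learning_rate=0.5):
    for round_number in range(1, rounds + 1):
        # Attention (patrols, audits) goes more than proportionally to the
        # higher-scoring area, so recorded incidents track attention.
        weight_a, weight_b = score_a ** 2, score_b ** 2
        recorded_share_a = weight_a / (weight_a + weight_b)
        # "Retraining": nudge each score toward the share of incidents recorded.
        score_a += learning_rate * (recorded_share_a - score_a)
        score_b += learning_rate * ((1 - recorded_share_a) - score_b)
        print(f"round {round_number}: A={score_a:.3f}  B={score_b:.3f}")

run_feedback()
# The initial two-point gap grows every round, and no single step in the loop
# looks obviously wrong: there is no clear trail of breadcrumbs.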

The explosion in artificial intelligence applications has increased the risk. “With the speed at which artificial intelligence is being introduced, there is not enough time for the people who check for bias. We see the AI boom, and we expect that the bias checks are booming as well,” says Osoba. “In reality, AI use is moving too fast, and the bias is often not being properly accounted for.”

Combating these biases is not an easy task. However, some progress is underway. The algorithmic fairness field is growing. Data scientists are increasingly developing technical approaches to certifying and correcting disparate impact in machine learning algorithms. But these tactics alone will likely not suffice. Consumers of algorithm-utilizing tools, who make critical decisions that impact fundamental rights, need to understand the strengths and limitations of those tools and take them into account. Justice’s complexity lies in balancing individual and societal demands of punishment, deterrence, proportionality, fairness, empathy and many other factors. Algorithms can help shed light on these complicated goals, but should not be the determinative factors in decisions that impact vital rights.
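One simple check drawn from that field, offered here only as an example, is the disparate impact ratio associated with the so-called 80 percent rule: compare the rates at which two groups receive a favorable outcome, and treat a ratio below 0.8 as a flag worth investigating, not as proof of bias by itself. The numbers below are hypothetical.

def disparate_impact_ratio(favorable_a, total_a, favorable_b, total_b):
    """Ratio of favorable-outcome rates, lower rate over higher rate."""
    rate_a = favorable_a / total_a
    rate_b = favorable_b / total_b
    low, high = sorted((rate_a, rate_b))
    return low / high

# Hypothetical audit: 240 of 400 group-A defendants were scored "low risk,"
# versus 180 of 400 group-B defendants.
ratio = disparate_impact_ratio(240, 400, 180, 400)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.75, below the 0.8 threshold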

There is a cost to be paid for the myriad benefits society derives from the use of algorithms. An occasional “date from hell,” which feels like a jail sentence, is an acceptable cost. Imposing on a defendant an actual prison sentence that fails to meet constitutional standards is not.
