Second Opinion: Why an algorithm can never truly be ‘fair’ (Los Angeles Times)

Late last year, the Justice Department joined the growing list of agencies to discover that algorithms don’t heed good intentions. An algorithm known as PATTERN placed tens of thousands of federal prisoners into risk categories that could make them eligible for early release. The rest is sadly predictable: Like so many other computerized gatekeepers making life-altering decisions, from presentencing to résumé screening to assessments of healthcare need, PATTERN seems to be unfair, in this case to Black, Asian and Latino inmates.

A common explanation for these misfires is that humans, not equations, are the root of the problem. Algorithms mimic the data they are given. If those data reflect humanity’s sexism, racism and oppressive tendencies, those biases will be baked into the algorithm’s predictions.

But there is more to it. Even if all the shortcomings of humanity were stripped away, equity would still be an elusive goal for algorithms, for reasons that have more to do with mathematical impossibility than backward ideologies. In recent years, a growing body of research on algorithmic fairness has revealed fundamental, and insurmountable, limits: when two groups differ in how often the predicted outcome actually occurs, no imperfect predictor can satisfy every common definition of fairness at once. The research has deep implications for any decision maker, human or machine.
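To make that impossibility concrete, here is a minimal numeric sketch in Python. The base rates, scores and group names are hypothetical, chosen only for illustration; they are not figures from the article or from PATTERN. The logic follows published impossibility results (Kleinberg, Mullainathan and Raghavan, 2016; Chouldechova, 2017): a risk score can be equally well calibrated for two groups, yet if the groups’ underlying rates differ, its error rates cannot match.

```python
# Minimal sketch of the fairness impossibility result (Kleinberg et al.
# 2016; Chouldechova 2017). All numbers are hypothetical illustrations.

def error_rates(p_high, base_rate, s_low=0.2, s_high=0.8, threshold=0.5):
    """Two-score model: a share p_high of the group receives risk score
    s_high, the rest receive s_low. 'Calibrated' means each score equals
    the true probability of the outcome for the people who receive it."""
    # The score mixture must reproduce the group's base rate; otherwise
    # the score is not calibrated for this group.
    assert abs(p_high * s_high + (1 - p_high) * s_low - base_rate) < 1e-9
    buckets = [(p_high, s_high), (1 - p_high, s_low)]
    # False positives: people flagged (score above threshold) who would
    # not have had the outcome.
    flagged_negatives = sum(w * (1 - s) for w, s in buckets if s > threshold)
    # False negatives: people not flagged who would have had the outcome.
    missed_positives = sum(w * s for w, s in buckets if s <= threshold)
    return flagged_negatives / (1 - base_rate), missed_positives / base_rate

# Two hypothetical groups scored by the same calibrated instrument but
# with different base rates; p_high is solved from the calibration
# constraint base_rate = p_high * 0.8 + (1 - p_high) * 0.2.
for name, base_rate in [("Group A", 0.3), ("Group B", 0.6)]:
    p_high = (base_rate - 0.2) / 0.6
    fpr, fnr = error_rates(p_high, base_rate)
    print(f"{name}: base rate {base_rate:.0%}, FPR {fpr:.1%}, FNR {fnr:.1%}")
```

Running this prints a false positive rate of roughly 4.8% for Group A against 33.3% for Group B, and false negative rates of 55.6% against 11.1%. The score means exactly the same thing for both groups, yet the mistakes fall unevenly, which is the bind the research describes: you can equalize calibration or error rates, not both.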

::

Imagine two physicians. Dr. A graduated from a prestigious medical school, is up on all the latest research and carefully tailors her approach to each patient’s needs. Dr. B takes on …


Read Full Article at www.latimes.com

