Fairness isn’t so much about “being fair” as it is about “becoming less unfair.” Fairness isn’t an absolute; we all have our own (and highly biased) notions of fairness. On some level, our inner child is always saying: “But that’s not fair.” We know humans are biased, and it’s only in our wildest fantasies that judges and other officials who administer justice somehow manage to escape the human condition. Given that, what role does software have to play in improving our lot? Can a bad algorithm be better than a flawed human? And if so, where does that lead us in our quest for justice and fairness?
While we talk about AI being inscrutable, in reality it’s humans who are inscrutable. In Discrimination in the Age of Algorithms, Jon Kleinberg et al. argue that algorithms, however unfair they may be, can at least be audited rigorously; humans can’t. If we ask a human judge, bank officer, or job interviewer why they made a particular decision, we’ll probably get an answer, but we’ll never know whether that answer reflects the real reason behind the decision. People often don’t know why they make a decision, and even when someone attempts an honest explanation, we never know whether there are underlying biases and prejudices they aren’t aware of. Everybody thinks they’re “fair,” and few people will admit to prejudice. With an algorithm, you can at least audit the data that was used to train it and test the results it gives you. A male manager will rarely tell you that he doesn’t like working with women or that he doesn’t trust people of color. Algorithms don’t have those underlying and unacknowledged agendas; the agendas are in the training data, hiding in plain sight if only we search for them. We have the tools we need to make AI transparent: perhaps not explainable, but we can expose bias, whether it’s hiding in the training data or in the algorithm itself.
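Even a very simple audit can surface the kind of pattern a model will happily learn. Here’s a minimal sketch, assuming a hypothetical training table with a protected-attribute column and an outcome label; the column names and toy rows are invented for illustration:

```python
# Minimal training-data audit: compare positive-outcome rates across groups.
# The column names ("group", "label") and the toy rows are hypothetical;
# a real audit would load the actual training set and protected attributes.
import pandas as pd

data = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "label": [1,   1,   0,   1,   0,   1,   0,   0],
})

# Rate of positive labels per group. A large gap doesn't prove the data is
# unfair, but it flags exactly the kind of pattern a model will reproduce
# unless someone looks for it.
rates = data.groupby("group")["label"].mean()
print(rates)
print("gap between groups:", rates.max() - rates.min())
```

The same comparison can be run on a model’s outputs instead of its training labels, which is often where disparities the data only hints at show up clearly.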
Auditing can reveal when an algorithm has reached its limits. Julia Dressel and Hany Farid, studying the COMPAS software used to inform bail and sentencing decisions, found that it was no more accurate at predicting recidivism than untrained volunteers recruited online. Even more striking, they built a simple classifier that matched COMPAS’s accuracy using only two features, the defendant’s age and number of prior convictions, rather than the 137 features that COMPAS uses. Their interpretation was that there are limits to prediction, beyond which a richer set of features doesn’t add any signal. Commenting on this result, Sharad Goel offers a different interpretation: “judges in the real world have access to far more information than the volunteers…including witness testimonies, statements from attorneys, and more. Paradoxically, that informational overload can lead to worse results by allowing human biases to kick in.” In this interpretation, data overload enables unfairness in humans. With an algorithm, it’s possible to audit the data and limit the number of features if that’s what it takes to improve accuracy. You can’t do that with humans; you can’t limit their exposure to extraneous data and experiences that may bias them.
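For a sense of how little machinery such a two-feature model needs, here is a sketch in the spirit of Dressel and Farid’s simple classifier: a logistic regression on age and number of priors. The data below is synthetic and purely illustrative; their study used the actual COMPAS records.

```python
# A two-feature classifier in the spirit of Dressel and Farid's result:
# logistic regression on age and number of prior convictions. The data is
# synthetic; this illustrates the model's simplicity, it does not reproduce
# their study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
age = rng.integers(18, 70, size=n)
priors = rng.poisson(2.0, size=n)

# Synthetic labels: more priors and younger age raise the (synthetic) risk.
logit = -0.5 + 0.35 * priors - 0.04 * (age - 35)
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([age, priors])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```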
Understanding the biases present in training data isn’t easy or simple. As Kleinberg points out, properly auditing a model requires collecting data about protected classes; it’s difficult to tell whether a model shows racial or gender bias without data about race and gender, and we frequently avoid collecting that data. In another paper, Kleinberg and his co-authors show that there are many ways to define fairness, and that some of those definitions are mathematically incompatible with each other. But understanding model bias is possible, and if it’s possible, then it should also be possible to build AI systems that are at least as fair as humans, if not fairer.
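To make that incompatibility concrete, here is one compressed statement of the trade-off Kleinberg and his co-authors prove, in my own notation (the paper’s formulation is more general): for a risk score S, an outcome Y, and two groups a and b, three natural fairness conditions cannot all hold at once unless the groups have equal base rates or the score is perfect.

```latex
% Three fairness conditions for a risk score S, outcome Y, and groups a, b
% (notation mine; see Kleinberg et al., "Inherent Trade-Offs in the Fair
% Determination of Risk Scores," for the precise statement).
\begin{align*}
\text{calibration within groups:}\quad
  & \Pr(Y = 1 \mid S = s,\ \text{group} = g) = s \quad \text{for } g \in \{a, b\} \\
\text{balance for the positive class:}\quad
  & \mathbb{E}[S \mid Y = 1,\ a] = \mathbb{E}[S \mid Y = 1,\ b] \\
\text{balance for the negative class:}\quad
  & \mathbb{E}[S \mid Y = 0,\ a] = \mathbb{E}[S \mid Y = 0,\ b]
\end{align*}
% All three hold simultaneously only if Pr(Y = 1 | a) = Pr(Y = 1 | b)
% (equal base rates) or the score predicts Y perfectly.
```

When base rates differ across groups, which is the common case, any real system has to choose which of these conditions to give up.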
This process is similar to the 19th-century concept of the “hermeneutic circle.” A literary text is inseparable from its culture; we can’t understand the text without understanding the culture, nor can we understand the culture without understanding the texts it produced. A model is inseparable from the data that was used to train it, but analyzing the output of the model can help us understand the data, which in turn enables us to better understand the behavior of the model. To philosophers of the 19th century, the hermeneutic circle implied gradually spiraling inward: a better historical understanding of the culture that produced a text enables a better understanding of the texts that culture produced, which in turn enables further progress in understanding the culture, and so on. We approach understanding asymptotically.
I’m bringing up this bit of 19th-century intellectual history because the hermeneutic circle is, if nothing else, an attempt to describe a non-trivial iterative process for answering difficult questions. It’s a more subtle and self-reflective process than “fail forward fast” or even gradient descent. And awareness of the process is important. AI won’t bring us an epiphany in which our tools suddenly set aside years of biased and prejudiced history. That’s what we thought when we “moved fast and broke things”: we thought we could uncritically invent our way out of a host of social ills. That didn’t happen. If we can get on a path toward doing better, we are doing well. And that path certainly entails a more complex understanding of how to make progress. We shouldn’t ask our AI tools to be fair; instead, we should ask them to be less unfair, and we should be willing to iterate until we see improvement. If we can make progress through several possibly painful iterations, we approach the center.
The hermeneutic circle also reminds us that understanding comes from looking at both the particular and the general: the text and the context. That is particularly important when we’re dealing with data and with AI. It is very easy for human subjects to become abstractions: rows in a database that are assigned a score, like the probability of committing a crime. When we don’t resist that temptation, when we allow ourselves to be ruled by abstractions rather than remembering that our abstractions represent people, we will never be “fair”: we’ve lost track of what fair means. It’s impossible to be fair to a table in a database. Fairness is always about individuals.
We’re right to be skeptical of this process. First, European thought has been plagued by the notion that European culture is the goal of human history. “Move fast and break things” is just another version of that delusion: we’re smart, we’re technocrats, we’re the culmination of history, of course we’ll get it right. If our understanding of “fairness” degenerates into an affirmation of what we already are, we’re in trouble. It’s dangerous to put too much faith in our ability to perform audits and develop metrics: it’s easy to game the system, and it’s easy to trick yourself into believing you’ve achieved something you haven’t. I’m encouraged, though, by the idea that the hermeneutic circle is a way of getting things right by being slightly less wrong. It’s a framework that demands humility and dialog. For that dialog to work, it must take into account the present and the past, the individual and the collective data, the disenfranchised and the enfranchised.
Second, we have to avoid turning the pursuit of fairness into a game: a circle where you’re endlessly chasing your tail. It’s easy to celebrate the process of circling while forgetting that the goal isn’t publishing papers and doing experiments. It’s easy to say, “We’re not making any progress, and we probably can’t make any progress, but at least our salaries are being paid and we’re doing interesting work.” It’s easy to play the circle game when it can be proven that different definitions of fairness are incompatible, or when contemplating the enormous number of dimensions in which one might want to be fair. And we will have to admit that fairness is not an absolute concept graven on stone tablets, but one that is fundamentally situational.
It was easy for the humanistic project of interpretation to lose itself in the circle game because it never had tools like audits and metrics. It could never measure whether it was getting closer to its goal, and when you can’t measure your progress, it’s easy to get lost. But we can measure disenfranchisement, and we can ensure that marginalized people are included in our conversations, so that we understand what being “fair” means to people who are outside the system. As Cathy O’Neil has suggested, we can perform audits of black-box systems. We can accept that fairness will always be elusive and aspirational, and use that knowledge to build appeal and redress into our systems. We can’t let the ideal of perfect fairness become an excuse for inaction. We can make incremental progress toward building a world that’s better for all of us.
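What might the simplest black-box audit look like? One purely illustrative flavor, not O’Neil’s methodology specifically: query the system with matched inputs that differ only in a protected attribute and compare the outputs. The scoring function below is a hypothetical stand-in for a system we can query but not inspect.

```python
# Paired-input probe of a black-box scorer: change only the protected
# attribute and see whether the score moves. The score() function is a
# hypothetical stand-in for a deployed system we can query but not inspect.
def score(applicant: dict) -> float:
    # Pretend this is a remote API call to the black box.
    base = 0.3 + 0.1 * applicant["priors"]
    penalty = 0.15 if applicant["group"] == "B" else 0.0
    return min(1.0, base + penalty)

applicant = {"priors": 2, "group": "A"}
counterfactual = dict(applicant, group="B")

gap = score(counterfactual) - score(applicant)
print(f"score gap when only the group changes: {gap:.2f}")
```

In practice an auditor would probe with many such pairs and examine the distribution of gaps, but the principle is the same: it’s the system’s behavior, not its internals, that gets audited.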
We’ll never finish that project of incremental improvement, in part because the issues we’re tracking will always be changing, and our old problems will mutate to plague us in new ways. We’ll never be done because we will have to deal with messy questions like what “fair” means in any given context, and those contexts will change constantly. But we can make progress: having taken one step, we’ll be in a position to see the next.