Author: Sebastian Benthall, Research Scientist at NYU Steinhardt and PhD Candidate at the UC Berkeley School of Information.
In February 2018, the inaugural FAT* conference was held in New York City:
The FAT* Conference 2018 is a two-day event that brings together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems. This inaugural conference builds on the success of prior workshops like FAT/ML, FAT/Rec, DAT, Ethics in NLP, and others.
FAT stands for “Fairness, Accountability, Transparency”, and the asterisk, pronounced “star”, is a wildcard character, which indicates that the conference ranges more widely than the earlier workshops it succeeds, such as FAT/ML (ML meaning “machine learning”) and FAT/Rec (Rec meaning “recommender systems”). You might conclude from the amount of geekery in the title and history of the conference that FAT* is a computer science conference.
You would be half right. Other details reveal that the conference has a different, broader agenda. It was held at New York University’s Law School, and many of the committee chairs are law professors, not computer science professors. The first keynote speaker, Latanya Sweeney, argued that technology is the new policy, as more and more decisions are delegated to automated systems. The responsibility of governance, it seems, is falling to the creators of artificial intelligence. The keynote speaker on the second day was Prof. Deborah Hellman, who provided a philosophical argument for why discrimination is morally wrong. This opened into a conversation with computer scientist Cynthia Dwork about the relationship between random fate and justice. The other speakers in the program grappled, in one way or another, with the problem of how to responsibly wield technological power over society.
It was a successful conference, and it holds great promise as a venue for future work. It has this promise because it has been set up to expand intellectually beyond the confines of the current discourse around accountability and automation. This post is about the tensions within FAT* that make it intellectually dynamic. FAT* reflects the conditions of a particular historical, cultural, and economic moment. The contention of this post is that the community involved in the conference has the opportunity to transcend that moment if it confronts that moment’s contradictions head-on through praxis.
One significant tendency among the research at FAT* was the mathematization of ethics, exemplified by Menon and Williamson’s “The cost of fairness in binary classification” (2018), winner of a best paper award at the conference. Many researchers come to FAT* to translate ethical injunctions, and the tradeoffs between them, into mathematical expressions. This striking intellectual endeavor sits at the center of controversies between the humanities and the sciences that have been going on for decades and continue today.
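To make that translation concrete, here is a schematic rendering (my sketch, using demographic parity as a stand-in fairness constraint; not Menon and Williamson’s exact formulation): the “cost of fairness” is the gap in risk between the best unconstrained classifier and the best classifier constrained to be fair.

```latex
% A schematic rendering, not the paper's exact notation. R(f) is the
% classification risk of classifier f, A is a protected attribute, and
% \epsilon is a tolerance on the fairness constraint (demographic
% parity is used here as one example definition among many).
\[
f^{*} = \arg\min_{f} R(f),
\qquad
f^{*}_{\epsilon} = \arg\min_{f} R(f)
\ \text{ s.t. }\
\bigl|\Pr(f(X)=1 \mid A=a) - \Pr(f(X)=1 \mid A=b)\bigr| \le \epsilon
\]
\[
\text{cost of fairness} = R(f^{*}_{\epsilon}) - R(f^{*})
\]
```

The ethical injunction (“treat groups alike”) becomes a constraint; the tradeoff becomes a measurable quantity.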
As has long been recognized in the foundational theory of computer science, computational algorithms are powerful because they are logically equivalent to the processes of mathematical proof. Algorithms, in the technical sense of the term, can be no more and no less powerful than mathematics itself. It has long been a concern that a world controlled by algorithms would be an amoral one; in his 1947 book Eclipse of Reason, Max Horkheimer argued that the increasing use of formal reason (which includes mathematics and computation) for pragmatic purposes would lead to a world dominated by industrial power that was indifferent to human moral considerations of what is right or good. Hannah Arendt, in The Human Condition (1959), wrote about the power of scientists who spoke in obscure mathematical language and were therefore beyond the scrutiny of democratic politics. Because mathematics is universal, it is unable to express political interests, which arise from people’s real, particular situations.
We live in a strikingly different time from the mid-20th century. Ethical concerns with the role of algorithms in society have been brought to trained computer scientists, and their natural and correct inclination has been to determine the mathematical form of the concern. Many of these scholars would sincerely like to design a better system.
Perhaps disappointingly, many of the great discoveries in the foundations of computing are impossibility results: the Halting Problem, the No Free Lunch theorem, and so on. It is no different in the field of fairness in machine learning. What computer scientists have discovered is that life isn’t, and can’t be, fair, because “fairness” has many different definitions (twenty-one at last count) that are mutually incompatible (Hardt et al., 2016; Kleinberg et al., 2016). Because there are inherent tradeoffs between different conceptions of fairness, and any one definition will allocate outcomes differently for different kinds of people, the question of what fairness is has now been exposed as an inherently political question with no compelling scientific answer.
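The incompatibility is easy to see in a toy simulation (the construction is mine, not drawn from the cited papers): risk scores that are perfectly calibrated within each group still yield unequal error rates whenever the groups have different base rates.

```python
import numpy as np

# Toy sketch: two groups with different base rates of the true outcome.
# Scores are calibrated by construction, P(Y = 1 | score = s) = s, yet
# thresholding them produces unequal error rates across the groups --
# an instance of the incompatibility results of Kleinberg et al. (2016)
# and Hardt et al. (2016).

rng = np.random.default_rng(0)

def error_rates(n, alpha, beta, threshold=0.5):
    scores = rng.beta(alpha, beta, size=n)  # the group's score distribution
    y = rng.random(n) < scores              # calibrated: P(Y=1 | s) = s
    yhat = scores >= threshold              # binary decision rule
    fpr = yhat[~y].mean()                   # false positive rate
    fnr = (~yhat[y]).mean()                 # false negative rate
    return fpr, fnr

fpr_a, fnr_a = error_rates(100_000, alpha=3, beta=2)  # base rate ~0.60
fpr_b, fnr_b = error_rates(100_000, alpha=3, beta=7)  # base rate ~0.30

print(f"group A: FPR={fpr_a:.2f}  FNR={fnr_a:.2f}")
print(f"group B: FPR={fpr_b:.2f}  FNR={fnr_b:.2f}")
# Both groups see calibrated scores, but equalized odds fails: choosing
# one definition of fairness forecloses the others.
```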
Naturally, computer scientists are not the first to discover this. What has happened is that it is their turn to discover this eternal truth, because in this historical moment computer science is the scientific discipline most emblematic of power. This is because the richest and most powerful companies, the ones almost everybody depends on daily, are technology companies, and these companies project the image that their success is due mainly to the scientific genius of their early employees and the quality of the technology at their operational core.
The problem is that computer science as a scientific discipline has very little to do with why large technology companies have so much power and sometimes abuse that power. These companies are much more than their engineers; they also include designers, product managers, salespeople, public relations people, and of course executives and shareholders. As sociotechnical organizations, they are most responsive to the profit motive, government regulations, and consumer behavior. Even if being fair were technically possible, they would still be businesses with very non-technical reasons for being unfair or unaccountable.
Perhaps because these large companies are so powerful, few of the papers at the conference critiqued them directly. Instead, the focus was often on the software systems used by municipal governments. These were insightful and important papers. Barabas et al.’s paper questioned the assumptions motivating much of the inquiry around “fairness in machine learning” by delving into the history and ideology of actuarial risk assessment in criminal sentencing. Chouldechova et al.’s case study of the workings of a child maltreatment hotline (winner of a best paper award) was a realistic and balanced study of the challenges of operating an algorithmic risk assessment system in municipal social services. At its best, FAT* didn’t look much like a computer science conference at all, even when the speakers and authors had computer science training. At its best, FAT* was grappling towards something new.
Some of this grappling is awkward. Buolamwini and Gebru presented a technically and politically interesting study of how commercially available facial recognition technologies underperform on women, on darker-skinned people, and intersectionally on darker-skinned women. In addition to presenting their results, the speakers proudly described how some of the facial recognition companies responded to their article by improving the accuracy of their technology. For some at the conference, this was a victory for fairer representation and for the accountability of a facial recognition technology that was otherwise built to favor lighter-skinned men. But others found it difficult to celebrate the improved effectiveness of a technology for automated surveillance. Out of context, it is impossible to know whether this technology does good or ill to those wearing the faces it recognizes. What was presented as a form of activism against repressive or marginalizing political forces may just as well have been playing into their hands.
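The evaluation method behind the study is simple to sketch (the column names and miniature dataset below are hypothetical, not the paper’s data): disaggregate accuracy by intersectional subgroup instead of reporting a single aggregate number.

```python
import pandas as pd

# A minimal sketch of disaggregated evaluation in the spirit of
# "Gender Shades". The data here is invented for illustration only.
results = pd.DataFrame({
    "gender":    ["female", "female", "female", "male", "male", "male"],
    "skin_type": ["darker", "darker", "lighter", "darker", "lighter", "lighter"],
    "correct":   [0, 0, 1, 1, 1, 1],  # whether the classifier was right
})

print("aggregate accuracy:", results["correct"].mean())            # looks fine
print(results.groupby(["gender", "skin_type"])["correct"].mean())  # it isn't
```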
This political ambiguity was glossed over, not resolved. And therein lay the crux of the political problem at the heart of FAT*: it is full of well-intentioned people trying to discover technical band-aids for what are actually systemic social and economic problems. Their intentions and their technical contributions are both laudable. But there was something ideologically fishy going on, a fishiness reflective of a broader historical moment. Nancy Fraser (2016) has written about the phenomenon of progressive neoliberalism, an ideology that sounds like an oxymoron but in fact reflects the alliance between the innovation sector and identity-based activist movements. Fraser argues that progressive neoliberalism was a hegemonic force until very recently. This year’s FAT*, with its mainly progressive sense of Fairness and Accountability and its arguably neoliberal emphasis on computational solutions, was a throwback to what for many at the conference was a happier political time. I hope that next year’s conference takes a cue from Fraser and is more critical of the zeitgeist.
For now, as a form of activism that changes things for the better, this year’s conference largely fell short, because it would not address the systemic elephants in the room. A dialectical sublation is necessary and imminent. For the conference to accomplish this effectively, it may need to add another letter to its name, representing another value. Michael Veale has suggested adding an “R”, for reflexivity, perhaps a nod to a cherished value of the critical qualitative scholars who are clearly welcome in the room. However, if the conference is to realize its highest potential, it should add a “J”, for justice, and see what the bright minds of computer science think of that.
References
Arendt, Hannah. The Human Condition. Doubleday, 1959.
Barabas, Chelsea, et al. “Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment.” arXiv preprint arXiv:1712.08238 (2017).
Buolamwini, Joy, and Timnit Gebru. “Gender shades: Intersectional accuracy disparities in commercial gender classification.” Conference on Fairness, Accountability and Transparency. 2018.
Chouldechova, Alexandra, et al. “A case study of algorithm-assisted decision making in child maltreatment hotline screening decisions.” Conference on Fairness, Accountability and Transparency. 2018.
Fraser, Nancy. “Progressive neoliberalism versus reactionary populism: A choice that feminists should refuse.” NORA-Nordic Journal of Feminist and Gender Research 24.4 (2016): 281-284.
Hardt, Moritz, Eric Price, and Nati Srebro. “Equality of opportunity in supervised learning.” Advances in Neural Information Processing Systems. 2016.
Hellman, Deborah. “Indirect Discrimination and the Duty to Avoid Compounding Injustice.” (2017).
Horkheimer, Max. Eclipse of Reason. 1947. New York: Continuum, 1974.
Kleinberg, Jon, Sendhil Mullainathan, and Manish Raghavan. “Inherent trade-offs in the fair determination of risk scores.” arXiv preprint arXiv:1609.05807 (2016).
Menon, Aditya Krishna, and Robert C. Williamson. “The cost of fairness in binary classification.” Conference on Fairness, Accountability and Transparency. 2018.