As of 2021 there is no shortage of articles on bias in AI models, but that wasn’t always the case. Still, most of this reporting (especially in mainstream sources) stops short of the analysis the topic really deserves. On this page we showcase some of the more thorough and nuanced engagements with these issues.
The video below is an excellent conversation with two people engaged in raising awareness and creating change in new technologies. Latanya Sweeney (Professor of Government and Technology in Residence, Department of Government, Harvard University) and Joy Buolamwini (founder of the Algorithmic Justice League and research assistant at the MIT Media Lab) share the experiences that interrupted their work and, in turn, led them to finding and fixing bias in machine learning.
The Algorithmic Justice League
…is an initiative started by Joy Buolamwini (featured in the video above) that connects people to these issues. You can report bias you witness in a new technology, even something as small as a social media filter, as well as dig into a huge library of related articles, research, and advocacy.
Excavating AI: The Politics of Images in Machine Learning Training Sets
More about our theme of “defining with data”.
This 2016 story spawned dozens of follow-up articles in the mainstream press and helped bring the issue to a wider audience. It examines “risk assessment” tools then being used within the prison industrial complex.
Note: Most of what we look at in this course falls under the umbrella of “supervised” machine learning, where the data is labeled (manually or automatically). However, this paper, from January 2021, reports that bias permeates data in deep learning frameworks even without explicit labels.
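To make the note above concrete, here is a minimal sketch of what “labeled data” means in supervised learning. The features and labels are invented for illustration (not from any dataset discussed on this page), and scikit-learn is just one common choice of library; the point is only that every training example carries a human- or machine-assigned label, and that labeling step is one place bias can enter.

```python
# A minimal sketch of supervised learning: each example is paired with a label.
# The data here is hypothetical; in practice labels come from annotators or
# automated pipelines, and those choices can encode bias.
from sklearn.linear_model import LogisticRegression

# Toy feature vectors and their labels (the "supervision").
X = [[0.1, 0.9], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1]]
y = ["cat", "dog", "cat", "dog"]  # one label per example

model = LogisticRegression()
model.fit(X, y)                       # learn a mapping from features to labels
print(model.predict([[0.15, 0.85]]))  # -> ['cat']
```

The paper linked above concerns the complementary case: deep learning setups where no such explicit `y` exists, yet bias still surfaces in what the model learns from the data itself.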