As of 2021, New York has proposed a law requiring companies that use AI screening software in their hiring to disclose the use of those tools. The proposal would also require companies that sell such tools to complete a “bias audit” and make the records public. Regulation like this has been a long time coming: dozens of city, state, and federal lawmakers are grappling with the often difficult task of regulating machine learning.

In this module we look at the crux of most criticisms of current AI: bias. While bias is probably the most frustrating issue in the rollout of widely used “AI,” we believe there is something just as troublesome that is discussed far less often: the data we use to define our categories. After working through this module you should be able to: