By Natalie Lung
Algorithms play an increasingly prominent role in journalism, but we’ve been slow to scrutinize those algorithms like we would any other source.
ProPublica senior reporter Julia Angwin, Stanford computer science Ph.D. candidate Sam Corbett-Davies, and Philip Merrill College of Journalism assistant professor Nick Diakopoulos shared their experience in evaluating algorithmic fairness at the 2017 CAR Conference.
Angwin stressed the importance of independently testing algorithms, citing a recidivism prediction model she investigated. Algorithms first published by a company or an academic in a white paper, she said, come under almost no scrutiny from the jurisdictions that buy them; authorities often put out their own analysis of a recidivism algorithm only after years of using it.
She also emphasized the need to examine outcomes, not formulas, when evaluating algorithms. Even if companies or institutions share their algorithms with you, as they did in the recidivism investigation, that might not be enough. Transparency "may be necessary, but not a sufficient condition" for examining an algorithm, Angwin said.
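Outcome-based auditing of the kind Angwin describes often boils down to comparing error rates across groups in the algorithm's real-world decisions. The sketch below is a minimal, hypothetical illustration of that idea; the records, group labels, and field layout are invented for the example and are not from ProPublica's actual analysis.

```python
# A minimal sketch of outcome-based auditing: compare false positive
# rates across groups in a set of risk-score decisions.
# All records here are invented for illustration.

from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, True), ("A", False, False),
    ("B", False, False), ("B", True, True), ("B", False, True),
]

def false_positive_rate(rows):
    """Share of people who did NOT reoffend but were labeled high risk."""
    negatives = [r for r in rows if not r[2]]  # did not reoffend
    if not negatives:
        return None
    flagged = sum(1 for r in negatives if r[1])  # labeled high risk anyway
    return flagged / len(negatives)

# Group the decisions and compare error rates between groups.
by_group = defaultdict(list)
for rec in records:
    by_group[rec[0]].append(rec)

rates = {group: false_positive_rate(rows) for group, rows in by_group.items()}
print(rates)  # a large gap between groups is the audit's red flag
```

The point of the exercise is that this kind of check needs only the algorithm's inputs and outcomes, not its formula, which is exactly why outcome data can matter more than source-code transparency.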
Corbett-Davies, whose research applies machine learning and statistics to questions of politics and policy, said he found problems with the recidivism algorithm, pointing to the lack of transparency around its training data and to several potentially dubious input features. Still, he urged journalists not to give up on algorithms, because they are more consistent than human decision-makers such as judges.
Diakopoulos introduced “Algorithm Tips,” a database of 5,000 leads on algorithms found on .gov domains, which he developed with his team at the University of Maryland. The team vetted every link against several dimensions of newsworthiness, in hopes of helping more journalists get started with algorithmic accountability reporting.