Fairness Awareness in ML models

December 4, 2019

48-minute Meetup presentation: identifying when and where bias can be introduced during the ML process, and how to minimize it

Below is a talk I gave at the ML Fribourg meetup on November 27, 2019, based on a 2017 paper by Brian d'Alessandro, Cathy O'Neil and Tom Lagatta entitled Conscientious Classification: A Data Scientist's Guide to Discrimination-Aware Classification.

The talk led to a fruitful discussion (not recorded) about the need for data normalization, the risk involved in augmenting the loss function with a fairness regularizer, algorithm aversion, what incentives companies might have to introduce more fairness, the need for regulation, and more.
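To make the fairness-regularizer idea concrete, here is a minimal sketch of what such an augmented loss could look like. Everything below is illustrative and not from the talk or the paper: the function name, the choice of logistic loss, and the demographic-parity penalty (squared gap between the mean predicted scores of two groups) are all my own assumptions.

```python
import numpy as np

def fairness_regularized_loss(w, X, y, group, lam=1.0):
    """Illustrative sketch: logistic loss plus a demographic-parity penalty.

    The penalty is the squared difference between the mean predicted
    score of the two groups (``group`` is a 0/1 array). ``lam`` trades
    accuracy against fairness; this is one simple choice among many.
    """
    scores = 1.0 / (1.0 + np.exp(-X @ w))  # sigmoid predictions
    eps = 1e-12                            # avoid log(0)
    log_loss = -np.mean(y * np.log(scores + eps)
                        + (1 - y) * np.log(1 - scores + eps))
    gap = scores[group == 1].mean() - scores[group == 0].mean()
    return log_loss + lam * gap ** 2

# Tiny synthetic example with random features and groups
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X[:, 0] > 0).astype(float)
group = (rng.random(100) > 0.5).astype(int)
w = np.zeros(3)  # with zero weights, all predictions are 0.5

loss = fairness_regularized_loss(w, X, y, group)
```

The risk raised in the discussion is visible even in this toy: the regularizer pushes the group means of the scores together, which can mask rather than remove the underlying bias, and a poorly chosen `lam` can degrade accuracy without making outcomes meaningfully fairer.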

You can download the slides here. They include additional links and a summary of the topics raised during the discussion.
