Something is rotten at the heart of artificial intelligence. Machine learning algorithms that spot patterns in huge datasets hold promise for everything from recommending whether someone should be released on bail to estimating the likelihood of a driver having a car crash, and thus the cost of their insurance.
But these algorithms also risk being discriminatory by basing their recommendations on categories like someone’s sex, sexuality, or race. So far, attempts to de-bias these algorithms have failed.
But a new approach by Niki…