David Hansson’s verdict that the Apple Card algorithm is sexist lit up Twitter earlier this month. But the existence of biased/sexist/racist algorithms is not a new discovery; dozens of scholars have written about the hazards of letting AI mine data patterns for everything from job applicant screening to data-driven policing. Still, Goldman Sachs weakly argued that the company had no discriminatory intent in its credit limit determination process because it does not have information about applicants’ gender or marital status. This is an example of arguing for “fairness through unawareness”. But research shows that excluding sensitive attributes (gender, marital status, race, etc.) does not automatically render the algorithm unbiased.
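To make the "fairness through unawareness" point concrete, here is a minimal sketch with entirely made-up data (not Apple's or Goldman Sachs' actual model) of how a model that never sees gender can still reproduce a gender gap through a correlated proxy feature:

```python
# Hypothetical illustration: dropping the sensitive attribute does not
# remove the bias baked into the training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

gender = rng.integers(0, 2, n)                    # 0 = male, 1 = female; never shown to the model
proxy = gender + rng.normal(0, 0.3, n)            # e.g. a spending-pattern feature correlated with gender
income = rng.normal(60, 15, n)                    # legitimate feature, identical distribution for both groups

# Historical approvals encode a gender gap even at equal income
approved = (income / 100 + 0.8 * (1 - gender) + rng.normal(0, 0.2, n)) > 0.9

X = np.column_stack([income, proxy])              # gender itself is excluded: "fairness through unawareness"
model = LogisticRegression().fit(X, approved)

pred = model.predict(X)
print("approval rate, men:  ", pred[gender == 0].mean())
print("approval rate, women:", pred[gender == 1].mean())
# The gap persists: the model has learned to use the proxy as a stand-in for gender.
```

Excluding the sensitive column removes it from the model's inputs, not from the patterns in the historical data the model is trained on.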
The more troubling takeaway from this event was this: David and Jamie Hansson could not get anyone to give them a clear reason for the credit decision outcome. They heard variations of “credit limits are determined by an algorithm”, “we do not know your gender or marital status during the Apple Card process”, “it’s just the algorithm”, etc.
This kind of deep failure of accountability could become increasingly common as opaque algorithms are used for more kinds of decision making (what we have called “automation bias”). The Hanssons’ case highlighted a toxic combination: The companies relied on a “black box” algorithm with no capability to produce an explanation, and then abdicated all responsibility for the decision outcomes.
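By contrast, even a very simple scoring model can be made to produce a per-decision explanation. The sketch below, with hypothetical feature names, weights, and applicant values, ranks each feature's contribution to the score, which is the kind of concrete answer the Hanssons never received:

```python
# Minimal, self-contained sketch of per-applicant "reason codes" for a
# hypothetical linear credit-scoring model; all numbers are made up.
import numpy as np

feature_names = ["income", "credit_utilization", "account_age_years"]
weights = np.array([0.6, -1.2, 0.3])       # hypothetical model coefficients
applicant = np.array([0.45, 0.90, 0.10])   # hypothetical normalized feature values

contributions = weights * applicant
for idx in np.argsort(contributions):      # most score-lowering factors listed first
    print(f"{feature_names[idx]:20s} contribution: {contributions[idx]:+.2f}")
# Gives the applicant concrete reasons (e.g. high credit utilization)
# instead of an unexplained credit limit.
```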
Osonde A. Osoba
This is another, older story that I was reminded of while reading about TikTok and the glorification of its algorithms. It is a clear example of how algorithms can fail spectacularly when they are deployed without an understanding of their decision processes and without careful validation of the input data and the results. In this particular case the biased algorithm was uncovered early on, but one has to wonder how many others are running unnoticed in the background, with no one to question their outcomes.