AI systems are increasingly being used to both augment and replace human decision-makers of all kinds: an HR representative screening a job applicant, a judge assessing a prisoner’s bail request, or an immigration agent issuing a visa, for example. Concerns about fairness and transparency in such systems have led to legislation demanding that decision-making AI be explainable, both to users and to those ultimately affected by algorithmic decisions. This positions explainability as a crucial – yet often ill-understood – epistemic interface between humans and intelligent machines. Drawing on interviews with members of Element AI’s explainability team, I show how explainability experts frame and encode notions of trust, accountability, and bias as they produce models intended to make an AI system’s decision-making process more legible and actionable to humans. I argue that for AI systems to be ethical and trustworthy, system designers and regulators must better account both for the differences between human and machine forms of intelligence and for the subtle conceptual shifts that accompany the development of AI systems capable of autonomously making decisions about us.
This event is free and open to all. No registration necessary.
Moot Court Room