Suggested time: 5 min
In our exercise, we designed fraud detection classifiers. We proposed input features that could be used to distinguish fraudulent from legitimate transactions. We also discussed how we might validate whether our classifiers work, who benefits when they work, and who may be harmed when they make a mistake.
In our discussion we kept returning to the importance of data: how we might get access to the input features, and how we can know whether a classifier's prediction was actually correct. Perhaps the most important kind of data is human judgment: are we satisfied with the classifier's predictive accuracy? Prediction is difficult, of course, but are the classifier's predictions better than a random guess? Is the classifier making systematic mistakes? And are those mistakes systematically harming specific stakeholders, whether individuals or groups?
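The question of whether a classifier beats a random guess is subtler than it sounds, because fraud is rare. A minimal sketch (with made-up, simulated labels, not data from the exercise) shows why raw accuracy can mislead: a trivial baseline that never flags fraud scores very high while catching nothing.

```python
import random

random.seed(0)

# Hypothetical ground-truth labels: 1 = fraud, 0 = legitimate.
# Fraud is rare, so the classes are heavily imbalanced (about 2% fraud).
labels = [1 if random.random() < 0.02 else 0 for _ in range(10_000)]

def accuracy(preds, truth):
    """Fraction of predictions that match the true labels."""
    return sum(p == y for p, y in zip(preds, truth)) / len(truth)

# Baseline 1: random guessing -- accuracy hovers around 50%.
random_preds = [random.choice([0, 1]) for _ in labels]

# Baseline 2: always predict "legitimate" -- accuracy near 98%,
# yet this classifier catches zero fraudulent transactions.
majority_preds = [0] * len(labels)

print(f"random guess accuracy:  {accuracy(random_preds, labels):.3f}")
print(f"always-legit accuracy:  {accuracy(majority_preds, labels):.3f}")
```

A classifier can therefore "beat a random guess" on accuracy while still being useless, or worse, while concentrating its mistakes on particular stakeholders. This is why the exercise asks not just how often the classifier is right, but who is affected when it is wrong.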
Module 3 will discuss how different stakeholders and their values play a role in human-AI interactions.
Hearing a reflection from everyone in a meeting is a great way to round out your understanding of the learning materials and to identify ways to improve future meetings. Take a few minutes to write out 1–2 sentence responses to the following questions, or take turns answering them out loud:
What is one thing you…
Read the short article Algorithmic Stakeholders: An Ethical Matrix for AI by Cathy O’Neil.