Wrap Up & Looking Forward

Suggested time: 5 min

In our exercise, we designed fraud detection classifiers. We proposed input features that could distinguish fraudulent transactions from legitimate ones. We also discussed how we might validate whether our classifiers work, who benefits when they work, and who may be harmed when they make a mistake.

Our discussion kept returning to the importance of data - how we might get access to the input features, and how we can know whether a classifier's prediction was actually correct. Perhaps the most important kind of data is human judgment: are we satisfied with the classifier's predictive accuracy? Prediction is difficult, of course, but are the classifier's predictions better than a random guess? Is the classifier making systematic mistakes? And do those mistakes systematically harm specific stakeholders - individuals or groups?
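Two of the checks above can be made concrete with a few lines of code. The sketch below uses a small, entirely hypothetical set of labeled transactions to show (1) how overall accuracy compares to a random-guess baseline and (2) how error rates can be broken down by group to look for systematic harm. The data, group labels, and numbers are illustrative assumptions, not results from the exercise.

```python
# Hypothetical labeled transactions: (true_label, predicted_label, group),
# where 1 = fraudulent and 0 = legitimate. Groups "A" and "B" stand in for
# any two stakeholder populations we might compare.
transactions = [
    (1, 1, "A"), (0, 0, "A"), (0, 1, "A"), (1, 1, "A"),
    (0, 0, "B"), (1, 0, "B"), (0, 0, "B"), (1, 0, "B"),
]

# Check 1: is overall accuracy better than a 50/50 random guess?
correct = sum(1 for true, pred, _ in transactions if true == pred)
accuracy = correct / len(transactions)
print(f"accuracy: {accuracy:.2f} (random-guess baseline: 0.50)")

# Check 2: error rate per group. A large gap between groups would
# suggest the classifier's mistakes fall more heavily on one of them.
groups = {}
for true, pred, group in transactions:
    total, errors = groups.get(group, (0, 0))
    groups[group] = (total + 1, errors + (1 if true != pred else 0))

for group, (total, errors) in sorted(groups.items()):
    print(f"group {group}: error rate {errors / total:.2f}")
```

In this toy data the classifier beats the coin-flip baseline overall, yet its errors are twice as common for group "B" as for group "A" - exactly the kind of systematic pattern the questions above are meant to surface.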

Module 3 will discuss how different stakeholders and their values play a role in human-AI interactions.


Closing reflections

Hearing a reflection from everyone in a meeting is a great way to round out your understanding of the learning materials and identify ways to improve future meetings. Take a few minutes to write out 1–2 sentence responses to the following questions or take turns answering them out loud:

What is one thing you…

  1. Learned in today’s meeting?
  2. Liked about today’s meeting?
  3. Wish you could have changed about today’s meeting?
  4. Are confused about, or want to learn about AI in a future meeting?


Preview for Module 3:

Read the short article Algorithmic Stakeholders: An Ethical Matrix for AI by Cathy O’Neil.

