Suggested time: 5 minutes
In recent years, algorithmic tools have proliferated rapidly across the public and private sectors as a way to improve processes and increase efficiency. Algorithmic decision-making systems (ADS) are systems that use algorithms to make decisions in a specific context, such as finance, employment, healthcare, and education.
While these algorithmic tools have the ability to greatly improve society, they also have the potential to cause great harm. As a case study, Amazon once built and deployed an automated resume screening and hiring system, only to discover later that the system was biased against hiring women. In another instance, algorithmic systems threatened global economic stability by causing the 2010 Flash Crash, in which erroneous decisions made by complex algorithmic trading systems caused the Dow Jones to lose $1 trillion in value in 36 minutes. Both of these issues were caused, at least in part, by a lack of transparency into the underlying algorithmic system.
In this course, we discuss guidelines, best practices, and recommendations on algorithmic transparency to avoid potential risks and harm. Our approach makes the stakeholders of algorithmic decision-making the primary focus—that is, the individuals or groups (both internal and external to an organization) that are impacted by an algorithmic system.
The content of this course is based on the Algorithmic Transparency Playbook, a free guide to algorithmic transparency published by the New York University Center for Responsible AI.
The main objective of this course is to teach you how to create transparency for the algorithms in your organization.
Along the way, you will learn about many different elements of algorithmic transparency, and by the end of this course you should be comfortable answering the following questions:
This course is intended for a range of audiences: