A Stakeholder-first Approach to Creating Transparency for Your Organization's Algorithms
This course is currently under construction.
Welcome to 2033, the year when AI, while not yet sentient, can finally be considered responsible. Only systems that work well, improve efficiency, and are fair, law-abiding, and transparent are in use today. It’s AI nirvana. You ask yourself: “How did we get here?”
You may have played a major role! As more organizations use algorithmic systems, there is a need for practitioners, industry leaders, managers, and executives to take part in making AI responsible. In this course, we provide a playbook, detailing how to influence change and implement algorithmic transparency for your organization’s algorithmic systems.
This course is based on the Algorithmic Transparency Playbook, a free guide to algorithmic transparency published by the New York University Center for Responsible AI.
Those who complete this course will learn everything they need to know about algorithmic transparency, and how to influence change within their organization toward more open and accountable systems. The course also includes a case study game where you will explore the tension between key stakeholders vying for and against algorithmic transparency!
To get started, proceed to the first module, All-About-Transparency.
This course is published in the 2023 ACM CHI Conference on Human Factors in Computing Systems proceedings. A precursor to this work was also published in Data & Policy.
This research was supported in part by NSF Awards No. 1934464, 1922658, 1916505, 1928614, and 2129076.