Evaluating Automated Decision Systems

Suggested time: 35 min

As we saw in the video, Automated Decision Systems (ADS) aid or replace human decision-making. The decisions these systems make take many forms: predicting, classifying, optimizing, identifying, recommending, scoring, and judging. Depending on where and how these decisions are made, these systems can affect our opportunities, safety, rights, and behavior.


Read Scenario (5 min)

In this activity we explore a local NYC example of an ADS: the Administration for Children’s Services (ACS) Child Welfare Child Risk and Safety Assessments tool. ACS uses this system to evaluate potential child neglect and abuse cases for the risk of child death or injury. The data the system uses often comes from multiple sources, including human services and law enforcement agencies. The system is not designed to make the final decision on child placement, but rather to advise a case worker on whether a reported case of potential child abuse or neglect should be investigated or reviewed further. We will learn about this system and then debate its purpose and impacts.

Read the following fictional scenario involving the ACS’s Child Welfare Child Risk and Safety Assessment:

Nicole's Story:

You are doing well, looking after your three-year-old daughter Nicole on your own with support from your parents and extended family. One night, after a long day at work, you come home, feed and play with Nicole, and put her to bed. You then take a hot bath, put on some headphones, and relax for half an hour - but when you get out of the bath and check on Nicole, you find she is not in her bed.

A neighbor finds Nicole - barefoot, cold, and lost - trying to find her Nana's house (where you have walked with her many times). The worried neighbor settles Nicole, who is very upset, and returns her to your care. On returning home, the neighbor calls Child Welfare Services, and the intake worker decides to recommend an investigation.

The next day, you pick Nicole up from daycare and are very embarrassed to hear that a case worker has visited the daycare center to make inquiries about you and your family situation. The case worker also visits your home and informs you that you are under investigation, which involves checking for any previous welfare or criminal records, and that the outcome could take some weeks.

Now imagine the same scenario, but where Child Welfare Services has introduced a computer tool to help workers make more informed decisions about whether or not to recommend an investigation.

When your neighbor calls Child Welfare Services, a computer tool runs a statistical analysis of historical data, including family case history, public health data, criminal justice records, and the history of other calls and how they turned out. The result is a risk score that predicts the level of risk to the child, from 1 (low risk) to 10 (severe risk). (A hypothetical sketch of how such a score might be computed appears after the scenario.)

Since you have no history with Child Welfare Services and no criminal record, Nicole's risk score is assessed as low and does not trigger an investigation.

It is now nine years later and Nicole is 12. She is going through some difficult times - she has fallen in with a bad crowd, and one afternoon you ground her because of bad behavior. Late that evening she jumps out of her bedroom window and runs away to a friend's place. Police are concerned to see her walking alone late at night and bring her home.

You explain what has happened, but they are required to inform Child Welfare Services.

The next day you receive a visit from a social worker, who says that the computer tool they are using scored Nicole as high risk. This is because your new partner has a previous domestic violence investigation on his record, and combined with your family's existing record in the Child Welfare Services system, the tool flags a possible pattern.
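
To ground the discussion, here is a minimal, purely hypothetical sketch (in Python) of how a tool like the one in the scenario might combine historical records into a 1-to-10 risk score. Every feature name, weight, and threshold below is invented for this activity; nothing here reflects the actual ACS assessment.

    from dataclasses import dataclass

    @dataclass
    class CaseRecord:
        # Hypothetical features, invented for this activity.
        prior_welfare_reports: int     # earlier calls about this family
        household_investigations: int  # e.g. a partner's past domestic violence investigation
        prior_call_outcome_rate: float # fraction of similar past calls that led to findings (0.0 to 1.0)
        child_age: int

    def risk_score(case: CaseRecord) -> int:
        """Map a case record to a score from 1 (low risk) to 10 (severe risk).

        Hypothetical linear weighting; a deployed tool would more likely use a
        statistical model trained on historical outcome data.
        """
        raw = (
            2.0 * case.prior_welfare_reports
            + 3.0 * case.household_investigations
            + 4.0 * case.prior_call_outcome_rate
            + (1.0 if case.child_age < 5 else 0.0)  # weight very young children higher
        )
        return max(1, min(10, round(raw)))  # clamp into the 1-10 band

    # Nicole at age 3: no history with the system, so the score comes out low.
    print(risk_score(CaseRecord(0, 0, 0.0, 3)))   # -> 1

    # Nicole at age 12: the logged earlier call, the new partner's record, and
    # the outcomes of similar past calls all push the score up, even though none
    # of these facts describe the current incident.
    print(risk_score(CaseRecord(1, 1, 0.4, 12)))  # -> 7

Notice that everything raising the score at age 12 comes from past records rather than from the incident itself - a pattern worth returning to in the discussion below.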


Group Discussion (10 min)

  • Any high-level thoughts or concerns about the scenario?
  • Does the algorithm change how ACS works? What issues might arise from how the algorithm works?
  • How would you approach conversations with technologists and decision-makers to convince them to improve the design and use of the tool?
