Module #1 - Everyday Algorithms

Module #1 draws from Competency 1:

  • Competency 1: The ability to recognize AI

The AI literacy paper defines competency #1 as a disambiguation task:

Figuring out what AI is can be even more complex for individuals without a technical background, as AI is often overblown and conflated with other areas of computing in popular media. Many people think that AI is synonymous with robotics [57,138,145], and artifacts that do not achieve human-level intelligence are often discounted as being “not AI” (a phenomena referred to as the superhuman human fallacy [18]). AI is also often obscured on commonly used platforms—as a result, many users do not realize when they are interacting with AI [10,54,55,73]. The ability to recognize AI (Competency 1 (Recognizing AI)) is a critical skill necessary for informed interactions with AI.

The paper points to the importance of providing definitions of AI to accomplish this competency:

Established definitions of AI can aid learners in understanding what AI is.

Of course, the paper acknowledges that definitions of AI are wide-ranging and often conflicting. Therefore, designers should pick a definition that suits the format and context of the learning environment (which is what we have done). The module opens with a warm-up that allows the group to define AI. The goal of this ice-breaker is two-fold:

  1. it should reveal the wide and conflicting visions of AI from the group;
  2. it will begin the work of socializing the group to provide the relational scaffolding needed for a strong social constructivist environment.

One key part of enabling people to recognize AI highlighted in the paper is the importance of understanding intelligence:

It is important for learners to be able to examine what it means to be intelligent.

The definition of intelligence depends on the approach and context:

Schank notes that definitions of intelligence can differ depending on the researcher and their approach to understanding AI [116]. He suggests that there are two main goals to AI research—to “build an intelligent machine” and to “find out about the nature of intelligence” [116]. He then proposes a set of traits that comprise general “intelligence”— communication, world knowledge, internal knowledge, intentionality, and creativity—emphasizing that the ability to learn is the most critical component of intelligence [116].

Thus, much like with defining AI, how intelligence should be articulated will depend on the learning context and goals. The paper suggests the following approach to explaining intelligence:

Activities like comparing AI devices [69] and AI vs. human abilities [145] have been used to promote this understanding.

The current implementation follows this suggestion in several ways. First, the video contrasts human intelligence (e.g., baking) and artificial intelligence (e.g., Roomba; IBM Deep Blue). The video then pivots to critically and cautiously question the level of artificial intelligence necessary for more complex situations like hiring.

The activity builds upon the contrast of intelligence set up in the video by framing algorithms as recipes and inviting participants to “cook up” an algorithm for soup. The activity employs a constructivist approach: it leverages a familiar experience (cooking) as a scaffold toward an unfamiliar experience (algorithm development) through social interaction (sharing and discussing the steps). By putting people in control of designing an algorithm and revealing the subjectivity of the design process, this activity is intended to provide the foundation for agency and a sense of closeness to AI — specifically by simplifying the complexity of algorithms into a form of “everyday intelligence” via cooking. Learners need to take away two points from this activity: they are performing a pseudo-coding process that models the computational thinking of computer scientists, and their “code” (like all code) is subjective, context-dependent, and goal-dependent.
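To illustrate what a participant's recipe might look like once translated into code, here is a hypothetical sketch (the function name, ingredients, and parameters are our own invented example, not part of the module materials). It shows how a soup recipe becomes an ordered sequence of steps, and how every parameter choice is a subjective design decision:

```python
# Hypothetical sketch of the "cook up an algorithm" activity: a soup
# recipe written as step-by-step code. The ingredient order, simmer
# time, and seasoning branch are all subjective choices -- another
# cook's "algorithm" for the same soup could differ at every step.

def make_soup(ingredients, simmer_minutes=20, salt_to_taste=True):
    """Return the ordered steps one cook might follow."""
    steps = ["heat pot", "add oil"]
    for item in ingredients:          # ingredient order is a design decision
        steps.append(f"add {item}")
    steps.append(f"simmer for {simmer_minutes} minutes")
    if salt_to_taste:                 # a conditional, taste-driven branch
        steps.append("taste and adjust salt")
    steps.append("serve")
    return steps

print(make_soup(["onion", "carrot", "lentils"]))
```

Comparing two participants' versions of such a function makes the subjectivity concrete: same goal (soup), different sequences, parameters, and conditions.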
