Missing Algorithm Affordances
Algorithms, especially those created using machine learning, are currently all the rage. Rarely a day goes by without a new article about some algorithm that promises to fundamentally change its target domain. However, for all of their technical sophistication, these algorithms fail to provide useful ways for humans to interact with them. They lack affordances that would allow users to understand them, or to understand how best to employ them to accomplish their goals. At best, most machine learning algorithms are Norman doors; at worst, they are masochists' teapots.
Affordances are opportunities for action that organisms can perceive with respect to objects in their environment. Specifically, Gibson defined an affordance as
“what the environment offers the animal, what it provides or furnishes, either for good or ill” (Gibson, 1979, p. 127)
An affordance is a characteristic of an object that makes certain types of behaviors possible. Affordances require the user to detect an object’s invariants (e.g., functional properties) relative to the user’s capabilities.
Affordances do not dictate user actions, but constrain what people interacting with an object must work with, or around, in order to complete a task. In this way, user actions are shaped by the object’s affordances. Thus, an affordance is the interaction between an object in the environment and its potential user. For example, to be graspable, an object must have surfaces separated by a distance less than the span of the user’s hand (Gibson, 1979, p. 133). It is the user’s understanding of an object’s affordances that enables the object to become a tool – to manipulate the affordance for a purpose, to detect how the tool can be used to accomplish a goal (e.g., using a bit to drill a hole). More generally, tools can be thought of as physical objects (e.g., lever, saw, backhoe, or exoskeleton) that can be manipulated to amplify the user’s sensorimotor abilities, or as external computational devices (e.g., abacus, calculator, algorithm, or automation) that improve an individual’s cognitive capabilities (e.g., calculation, categorization, sorting, or selection).
Tools can be used for the purpose for which they were made (i.e., conventional use; e.g., using a knife to cut vegetables) as well as in novel ways (e.g., using a butter knife as a screwdriver). Prior experience with a tool, either direct or through cultural communication, provides users with a mental model that enables them to manipulate it appropriately and to use it in a meaningful way. It is from this perspective that the concept of "intuitive" derives – simply seeing a tool or UI component is sufficient to obtain information about how to use it. However, there is no such thing as an intuitive UI, only learned and yet-to-be-learned affordances. Novel uses result from users exploiting the tool’s affordances to achieve their own goals rather than the tool’s engineered purpose.
Algorithms are the basis for some of the most important digital tools people use today (e.g., search engines, recommendation systems, or sorting tools). In its simplest form, an algorithm takes data in one form, transforms it, and outputs it in another form. From a user’s perspective, the algorithm is a “black box” – it is not possible for the user to know how the computation is completed. However, some boxes are blacker than others, and understanding how an algorithm reached its conclusions can be particularly difficult if it is based on machine learning. Some algorithms further hamstring user understanding and use by reducing high-dimensional data to a single-dimensional list (e.g., PageRank). If users have no way to perceive an algorithm’s affordances, and no knowledge of its invariants, how can they use it effectively as a tool?
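This dimensionality collapse can be made concrete with a minimal sketch. The function, feature names, and weights below are all illustrative (this is not PageRank itself); the point is that the user sees only the final ordered list, while the features and weights – the algorithm's invariants – remain hidden inside the box.

```python
# Hypothetical sketch: a ranking algorithm collapses each item's
# multi-feature description into a single score, then presents only
# the resulting ordered list. Feature names and weights are invented
# for illustration.

def rank(items, weights):
    """Reduce each item's feature vector to one score, then sort."""
    scored = []
    for name, features in items.items():
        score = sum(weights[k] * v for k, v in features.items())
        scored.append((score, name))
    # The user receives only this ordered list; the weights and
    # features that produced it are invisible to them.
    return [name for score, name in sorted(scored, reverse=True)]

items = {
    "page_a": {"links_in": 0.9, "freshness": 0.2},  # score 0.69
    "page_b": {"links_in": 0.4, "freshness": 0.8},  # score 0.52
}
weights = {"links_in": 0.7, "freshness": 0.3}
print(rank(items, weights))
```

Nothing in the output tells the user why page_a outranks page_b, or what changing either page would do to the ordering – the affordance is there, but unspecified.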
To paraphrase Parasuraman, algorithms fundamentally change the nature of the cognitive demands and responsibilities of the human decision makers – often in ways that were unintended or unanticipated by their developers. This is not to say all is lost. Algorithm developers can build affordances into their algorithms to help facilitate their use.
Acknowledge Human Engagement
For most tasks, the knowledge, data, and information processing capabilities will be distributed across human and machine agents. The goal needs to be to support coordination between users and algorithms. For the foreseeable future, humans will be on the "blame line", and therefore, need to be able to interject control inputs when necessary.
Open the Black Box
Algorithms need to provide some level of observability to their users – an affordance they can perceive. The goal is to give the user some insight into the algorithm's process. One way to do this would be to provide information about the algorithm’s training set: its provenance, size, variability, and any known limitations. This provides a global rationale for the algorithm’s output. Reporting the algorithm’s past performance supplies a second global observability metric.
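A minimal sketch of this global observability might package the training-set description and performance history alongside the model. Every field name and value below is an invented placeholder, not an existing standard schema:

```python
# Hypothetical sketch of global observability: expose training-set
# provenance and past performance so the user can inspect them.
# All fields and values are illustrative.

TRAINING_SET_CARD = {
    "provenance": "user interaction logs, 2015-2018 (illustrative)",
    "size": 1_200_000,
    "known_limitations": ["underrepresents mobile users"],
}

PERFORMANCE_HISTORY = {
    "last_quarter_accuracy": 0.87,
    "evaluated_on": "held-out sample of 50,000 cases",
}

def describe(card, history):
    """Render the global rationale as user-readable text."""
    lines = [f"Trained on {card['size']:,} cases ({card['provenance']})"]
    lines += [f"Known limitation: {lim}" for lim in card["known_limitations"]]
    lines.append(f"Recent accuracy: {history['last_quarter_accuracy']:.0%} "
                 f"({history['evaluated_on']})")
    return "\n".join(lines)

print(describe(TRAINING_SET_CARD, PERFORMANCE_HISTORY))
```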
In order to provide a more local rationale, the algorithm could include explanations for individual outputs. For example, a recommendation algorithm could identify which features led to the selection of a particular movie. Another local approach would be to provide a confidence rating for each output. Like human decision makers, algorithms have response criteria – letting users know whether an output exceeded the response criterion by a large or small amount allows them to interact with the algorithm more effectively.
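Both local affordances can be sketched together: a per-output explanation naming the dominant feature, plus the margin by which the score cleared the response criterion. The feature names, movie title, and the 0.5 threshold are all assumed for illustration:

```python
# Hypothetical sketch of local observability: each recommendation
# carries the feature that drove it and how far its score cleared
# the response criterion. Names and the threshold are illustrative.

RESPONSE_CRITERION = 0.5

def explain(item, feature_contributions):
    """Return a recommendation with its rationale and confidence margin."""
    score = sum(feature_contributions.values())
    top = max(feature_contributions, key=feature_contributions.get)
    return {
        "item": item,
        "recommended": score >= RESPONSE_CRITERION,
        "top_feature": top,                          # local rationale
        "margin": round(score - RESPONSE_CRITERION, 2),  # confidence
    }

result = explain("The Martian",
                 {"liked_sci_fi": 0.4, "same_director": 0.25})
print(result)
```

A small margin tells the user the recommendation barely cleared the criterion and deserves more scrutiny; a large one signals a confident output.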
Insert Knobs and Dials
Algorithm developers need to provide operators with control points for directing input, transformation, and output processes. On the input side, this could include the addition of new cases to the training data set, or mechanisms for providing feedback on the utility of algorithm outputs. Transformation controls could allow changes to feature weighting or to the approach used to build the algorithm (e.g., unsupervised learning, regression, classification). Finally, with respect to outputs, developers could provide ways for users to personalize or customize algorithm outputs to better accomplish their tasks – tailoring them to their goals and needs.
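One way these three kinds of control points could sit together on a single algorithm is sketched below. The class, method names, features, and weights are invented for illustration; they are not an existing library's API:

```python
# Hypothetical sketch of user-facing control points over input,
# transformation, and output. All names are illustrative.

class ControllableRanker:
    def __init__(self, weights):
        self.weights = dict(weights)   # transformation knob
        self.feedback = []             # input-side knob
        self.hidden = set()            # output-side knob

    def add_feedback(self, item, useful):
        """Input control: record user feedback on output utility."""
        self.feedback.append((item, useful))

    def set_weight(self, feature, value):
        """Transformation control: re-weight a feature."""
        self.weights[feature] = value

    def hide(self, item):
        """Output control: personalize by suppressing an item."""
        self.hidden.add(item)

    def rank(self, items):
        visible = (i for i in items if i["name"] not in self.hidden)
        scored = sorted(
            visible,
            key=lambda i: sum(self.weights.get(k, 0) * v
                              for k, v in i["features"].items()),
            reverse=True,
        )
        return [i["name"] for i in scored]

r = ControllableRanker({"relevance": 1.0, "recency": 0.5})
items = [
    {"name": "a", "features": {"relevance": 0.2, "recency": 0.9}},
    {"name": "b", "features": {"relevance": 0.8, "recency": 0.1}},
]
print(r.rank(items))          # b scores 0.85, a scores 0.65
r.set_weight("recency", 2.0)  # user turns the recency knob
print(r.rank(items))          # now a scores 2.0, b scores 1.0
```

The key design point is that the same knobs the developer used to tune the algorithm become perceivable, manipulable affordances for the end user.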
While building affordances into algorithms will not be easy, it is not an insurmountable problem. Studies have shown that when the algorithms used by an automation are highly complex, dissimilar from the human’s strategies, or not understood by the human, the automation’s outputs are ignored (Adelman et al., 1998). To take full advantage of our algorithmic tools, we must make an algorithm’s opportunities for action observable and controllable by its users. Without these improvements, algorithms won’t realize their potential across industries for improving forecasting and predictive analytics.
Important point. But I would argue that algorithms do have unique affordances; the problem is that these affordances are not well specified. This is a problem of information, representation, or interface design. The illustration shows the affordance of graspability: the affordance is present whether the stick is visible or not, but it will not be realized if the stick is not visible. Similarly, algorithms have affordances, but those affordances will not be realized unless they are specified to the user. This would typically be accomplished through the design of interface representations.