Measuring Cognitive Load as a software engineering team

This article is also featured on Medium.

Cognitive Load is a concept popularised by John Sweller (see Cognitive Load During Problem Solving: Effects on Learning).

In a previous article, Sweller’s “forward-working strategies” and programming, I examined Sweller’s theories about “forward-working strategies” — the part of his analysis which leads to a categorisation of different types of load.

However, I also noted that his characterisation of working strategies isn’t well suited to how we actually work as software engineers. His theories about Cognitive Load are thus hard to validate within our context.

In particular, the distinction between “germane” and “extraneous” load is problematic when dealing with, for example, understanding a codebase: the boundary between code and its environment is by no means black-and-white. The same can be said of “infrastructure” in general.

Nevertheless, industry experts such as Skelton and Pais (Team Topologies, 2019) bluntly state that Cognitive Load is a problem for teams. Seasoned software engineers also learn to avoid unnecessary complicatedness via heuristics such as:

  • “avoid premature optimisation”
  • “KISS” (keep it simple)
  • “avoid over-engineering”

There is consensus that we need to set our brains up for success when working with software.

As part of continuous learning, it’s worth monitoring how context-switching affects a team. As part of regular retrospective analysis, it’s worth doing a quick temperature check to validate team health in this area.

Below is a picture of a simple spreadsheet — an example of how you could do this in a rudimentary fashion. Note that each team needs to customise the metrics to suit its own context, and commit to a version which works for them (ideally for at least six months at a time).

[Image: an example spreadsheet for a team’s context-switching temperature check]
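Since the original spreadsheet image isn’t reproduced here, a minimal sketch of the same idea in code might look like the following. The engineer names, context names, and the threshold are all invented for illustration — a real team would substitute its own metrics:

```python
# Hypothetical temperature-check data: for each engineer, the distinct
# contexts (codebases, domains, toolchains) they touched this iteration.
contexts_touched = {
    "engineer_a": ["billing-api", "infra", "mobile-app"],
    "engineer_b": ["billing-api"],
    "engineer_c": ["infra", "data-pipeline"],
}

def temperature_check(data, threshold=2):
    """Flag anyone juggling more contexts than the agreed threshold."""
    return {name: len(ctxs) for name, ctxs in data.items() if len(ctxs) > threshold}

print(temperature_check(contexts_touched))  # → {'engineer_a': 3}
```

A spreadsheet does the same job perfectly well; the point is simply to make the number of concurrent contexts per person visible at each retrospective.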

I recommend using this in a similar fashion to the Kanban heuristic of limiting WIP (work-in-progress). The team sets a limit beyond which it will no longer take on additional work if that work involves switching to a new context. This forces us to plan ahead, deciding which contexts we will focus on within the next iteration.
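The WIP-style rule above can be sketched as a simple acceptance check. The limit value and the set of active contexts here are assumptions for illustration, not prescriptions:

```python
# Minimal sketch of a context WIP limit, analogous to Kanban WIP limits.
# Both the limit and the in-flight contexts are hypothetical examples.
CONTEXT_LIMIT = 3

active_contexts = {"billing-api", "infra"}

def can_accept(work_context, active=active_contexts, limit=CONTEXT_LIMIT):
    """Accept new work only if its context is already active,
    or there is still room under the team's context limit."""
    return work_context in active or len(active) < limit
```

Work in an already-active context is always acceptable; work requiring a new context is refused once the team is at its limit, which is exactly the planning pressure the heuristic is meant to create.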


More articles by Andrew Gibson are available.