Context and Cognition - Simon's Scissors
Third in a series on context and decisive action with AI
“Human rational behavior is shaped by a scissors whose blades are the structure of task environments and the computational capabilities of the actor.” — Herbert Simon, “Invariants of Human Behavior,” Annual Review of Psychology, 41, 1990, pp. 1–19
Context and cognition are the gateway to trusted AI. Bringing both into full focus will produce AI that delivers on its promise of social and economic benefit. Many of us now have hands-on experience with Generative AI, and the common thread in those experiences is that dazzle leads to disappointment in use cases requiring contextual understanding. Getting ChatGPT, Claude, or other models to meet our exact contextual requirements takes work. In the aftermath of the MIT article on the high failure rate of early AI projects, most commentary focused on the need for a deeper understanding of application requirements and better training of the implementers. Simon's Scissors provides a powerful rubric for that analysis.
The Context blade focuses on the task environment. For the intended application, is there a clear understanding of the total environmental requirements? An example illustrates the point. Early in the GenAI wave, I spoke with a creative at a design agency. Their experience was that early ideation was inspiring, but attempts to drive the system to do exactly what was needed caused mounting frustration. Extracting detailed, in-context performance in that last mile runs against the fundamental design of GenAI, which is to generate plausible ideas. For this task, GenAI was highly productive at the ideation stage but fell short in the final steps of tailoring to the customer's exact desires and needs. This is often the case with the current state of AI: superior at ideation and plausible solutions, weak at ‘last mile’ details.
The Cognition blade shifts focus to the actor. It draws attention both to the user's capabilities and to the depth of understanding of the application developer. Many, including GenAI's evangelists, set expectations for users that go well beyond the systems' current capabilities. Anthropomorphism (the Eliza effect) leads the average user to project models of reasoning and understanding that do not exist in today's models.
In a prior post on this topic, I mentioned that for the past 90 days I have experimented with, and pushed the boundaries of, Claude Code and Cursor.io. Software development is an excellent domain for studying AI impact because coding requires broad knowledge of approaches and techniques along with a high degree of contextualization. For years, community sharing sites like StackOverflow met that need. AI coding platforms mark a profound shift in how shared experience is accessed, and AI assistants radically uplift the productivity of software developers.
Software also has to work, and the final stages of fit and finish matter. Current coding environments demand significant awareness and management of context, as well as cognition in understanding the specific user's skill level. If you push them, the models themselves will tell you their limitations. AI coding environments require you, the user, to maintain context. That is not self-evident, nor is it part of the popular narrative. The following statements were generated by Cursor.io.
Given that understanding, AI coding environments offer extremely powerful assistance. They are not, however, a substitute for human software developers on complex coding projects. Many of these barriers will be removed over time. Simon’s Scissors provides a highly useful rubric for moving from promise to practical utility.
Both human cognition and task context were sidelined in the rush to commercialize Generative AI. If human cognition and context management are necessary to translate generative value into practical application, what value is the AI system itself delivering? Notably, this is the view of many very bright members of Generation Z, many of whom do not buy the premise that the current approach will lead to sustainable value. I leave that thought intentionally dangling because it will take time to see the implications of the next generation's impact on the future of AI. Their voice and impact cannot be ignored. Today's prophets of the end of coding will have to face the fact that the current class of brilliant minds will shape the future in ways we cannot predict.
AI is the science of self-learning and intelligent systems. It is far more than statistical generation of text fragments that may be plausibly true after human filtering. The need for contextual adaptation was captured in DARPA’s framing of the waves of AI, in a report published a few years before the Generative AI boom.
The next wave is happening right now. Prompt engineering (see my prior post on this) is contextual adaptation. Alignment is contextual adaptation. Explanation is contextual adaptation.
For those who follow me, you know that collective intelligence, context, and cognition are central to the work we are doing at CrowdSmart. The shortcomings of the current generation of AI are manageable when supplemented with human supervision. Though often overshadowed by the hype, this is already essential practice: Reinforcement Learning from Human Feedback is required to make models effective. The breakthroughs are to be embraced and extended through human collaboration.
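To make the RLHF point concrete: the reward-modeling step of RLHF is commonly trained with a Bradley-Terry style pairwise loss that rewards the model for scoring the human-preferred response higher. The sketch below is a deliberately simplified illustration with made-up scores, not any particular lab's implementation:

```python
import math

def preference_loss(r_chosen, r_rejected):
    """Bradley-Terry style pairwise loss used in RLHF reward modeling:
    -log sigmoid(r_chosen - r_rejected). The loss is small when the
    reward model scores the human-preferred response higher."""
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Hypothetical reward-model scores for two candidate responses.
good = preference_loss(2.0, 0.5)  # preferred response scored higher -> small loss
bad = preference_loss(0.5, 2.0)   # preferred response scored lower -> large loss
print(good, bad)
```

The gradient of this loss is what nudges the reward model, and ultimately the policy, toward human judgments; this is human cognition entering the loop as training signal.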
Generative Collective Intelligence (GCI) embraces this by framing AI as a social phenomenon, integrating human cognition and contextualization processes into the mix. GCI learns AI models from the process of human deliberation as tasks are aligned to shared understanding. Applying new adaptive learning architectures that learn from these processes of contextualization and cognition is the obvious next step, and it is the next wave of AI.
It is out of scope for this piece to go into technical detail, but work on adaptive learning architectures has a long parallel history. Partially Observable Markov Decision Processes (POMDPs), though long established, are now moving to center stage. We published on using adaptive learning with transformer models seven years ago. The parallel work done by our team and many others is ready for prime time.
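To give a flavor of why POMDPs fit the contextualization problem: the core operation is a Bayesian belief update, in which the agent maintains a probability distribution over hidden states and revises it after each observation. Below is a minimal sketch with a hypothetical two-state model (is the user's intent understood or not?) and made-up probabilities, purely for illustration:

```python
import numpy as np

# Toy POMDP with hidden states: 0 = intent understood, 1 = intent misunderstood.
# All numbers are hypothetical. One action, so T is a single transition matrix.
T = np.array([[0.9, 0.1],   # T[s, s'] = P(next state s' | current state s)
              [0.2, 0.8]])
O = np.array([[0.8, 0.2],   # O[s', o] = P(observation o | next state s')
              [0.3, 0.7]])

def belief_update(b, obs):
    """Bayes-filter update: b'(s') is proportional to O(obs|s') * sum_s T(s,s') b(s)."""
    predicted = b @ T             # predict the next-state distribution
    unnorm = predicted * O[:, obs]  # weight by likelihood of the observation
    return unnorm / unnorm.sum()  # renormalize to a probability distribution

b = np.array([0.5, 0.5])          # start maximally uncertain
for obs in [0, 0, 1]:             # a hypothetical observation sequence
    b = belief_update(b, obs)
print(b)
```

The point of the sketch: the agent never observes the true state directly; it acts on an evolving belief. That is a natural formal home for systems that must track a user's context through an interaction rather than assume it is fully given up front.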
Generative Collective Intelligence brings Simon's Scissors into the current moment as a powerful rubric for discerning where AI is best applied. It can be done in a live interactive environment, with humans and AI agents collectively reasoning together. The future of AI is social and in the service of humanity.