Computing, cognition and the future of knowing

How humans and machines are forging a new age of understanding


It’s not surprising that the public’s imagination has been ignited by artificial intelligence since the term was first coined in 1955. In the ensuing 60 years, we have been alternately captivated by its promise, wary of its potential for abuse, and frustrated by its slow development.


But like so many advanced technologies that were conceived before their time, artificial intelligence has come to be widely misunderstood: co-opted by Hollywood, mischaracterized by the media, portrayed as everything from savior to scourge of humanity. Those of us engaged in serious information science and in its application in the real world of business and society understand the enormous potential of intelligent systems. The future of such technology – which we believe will be cognitive, not “artificial” – has very different characteristics from those generally attributed to AI, spawning different kinds of technological, scientific, and societal challenges and opportunities, with different requirements for governance, policy, and management.


Cognitive computing refers to systems that learn at scale, reason with purpose, and interact with humans naturally. Most important, rather than being explicitly programmed, they learn and reason from their interactions with us and from their experiences with their environment. They are made possible by advances in a number of scientific fields over the past half-century, and are different in important ways from the information systems that preceded them. Those systems have been deterministic; cognitive systems are probabilistic. They generate not just answers to numerical problems, but hypotheses, reasoned arguments, and recommendations about more complex – and meaningful – bodies of data.
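The contrast between deterministic and probabilistic systems can be made concrete with a small sketch. The toy Python program below is purely illustrative: all names, weights, and data in it are hypothetical, invented for this example, and do not reflect IBM's systems or any real model. It places a conventional lookup, which always returns the same fixed answer, alongside a routine that weighs evidence and returns ranked hypotheses with confidence scores.

```python
# A minimal, hypothetical sketch of the deterministic/probabilistic contrast
# described above. The names, weights, and data below are invented for
# illustration only; they do not represent IBM's systems or any real model.

def deterministic_answer(question):
    """Conventional system: the same input always maps to one
    pre-programmed answer, or to no answer at all."""
    answers = {"capital of france": "Paris"}
    return answers.get(question.strip().lower(), "no answer")


def ranked_hypotheses(evidence):
    """Cognitive-style system (sketch): weigh pieces of evidence and
    return several candidate answers, each with a confidence score,
    instead of a single hard-coded result."""
    # Hypothetical weights standing in for what a real system would
    # learn from data and from interaction with its users.
    scores = {
        "hypothesis A": 0.6 * evidence.get("signal_x", 0.0) + 0.4 * evidence.get("signal_y", 0.0),
        "hypothesis B": 0.8 * evidence.get("signal_y", 0.0),
        "hypothesis C": 0.3 * evidence.get("signal_x", 0.0),
    }
    total = sum(scores.values()) or 1.0  # avoid division by zero
    return sorted(
        ((name, score / total) for name, score in scores.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )


if __name__ == "__main__":
    print(deterministic_answer("Capital of France"))              # always "Paris"
    print(ranked_hypotheses({"signal_x": 0.9, "signal_y": 0.4}))  # ranked, with confidences
```

The point of the sketch is only the shape of the output: a single fixed answer versus a ranked set of hypotheses whose confidences can be revised as new evidence arrives.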


What’s more, cognitive systems can make sense of the 80 percent of the world’s data that computer scientists call “unstructured.” This enables them to keep pace with the volume, complexity, and unpredictability of information and systems in the modern world.


None of this involves either sentience or autonomy on the part of machines. Rather, it consists of augmenting the human ability to understand – and act upon – the complex systems of our society. This augmented intelligence is the necessary next step in our ability to harness technology in the pursuit of knowledge, to further our expertise, and to improve the human condition. That is why it represents not just a new technology, but the dawn of a new era of technology, business, and society: The Cognitive Era.


The success of cognitive computing will not be measured by Turing tests or a computer’s ability to mimic humans. It will be measured in more practical ways, like return on investment, new market opportunities, diseases cured and lives saved. Here at IBM, we have been working on the foundations of cognitive computing technology for decades, combining more than a dozen disciplines of advanced computer science with 100 years of business expertise. Now, we are seeing first-hand its potential to transform businesses, governments, and society. We have seen it turn big data from obstacle to opportunity, help physicians make early diagnoses of childhood diseases, and suggest creative solutions for building smarter cities. And we believe that this technology represents our best – perhaps our only – chance to help tackle some of the most enduring systemic issues facing our planet, from cancer to climate change to an increasingly complex global economy.
