Artificial Intelligence: Improving learning by forgetting


The evolutionary process has equipped humans to manage information sensed from the environment efficiently. To this end, our brain performs the key functions of storage and retrieval, and converts raw data into generalised form through the process of learning. Our mental faculties continuously retrieve content from memory, and we bring it to bear to achieve our objectives. Because information in all its forms is critical to our sustenance, we have historically regarded it as among our most powerful possessions.

Whereas our senses and brain add information to our mental repository, forgetfulness does the opposite by suppressing content in memory. We are understandably disappointed when we cannot recall expunged knowledge, especially when it could have helped us achieve our goals. Because forgetting results in information loss, it has traditionally been viewed negatively.

Paradoxically, however, these views on information and forgetfulness have been upended by modern research in psychology and artificial intelligence. Firstly, although information is generally valuable, it is useful only up to a threshold, beyond which it becomes detrimental. Secondly, forgetfulness improves our intellect by preventing the mind from becoming overloaded with complex data. In short, if the content in our mind were to expand unchecked, it would become congested and degrade our cognitive functions. Forgetting halts this uncontrolled growth of information and prevents the brain from being overwhelmed by the continuous flow of data from the environment.

Even though information has great utility, its effectiveness recedes drastically if acquired without limit. This is best captured by the law of diminishing marginal returns in economics. For example, when we are thirsty, the greatest gratification comes from the first glass of water. Thereafter, the relative satisfaction subsides and reaches a tipping point beyond which any further ingestion could become an agonising experience. A seemingly positive and healthy action thus becomes burdensome if its use goes unchecked. Information follows the same law of diminishing marginal utility: it starts to cause more harm than good when expanded beyond a certain level.

Interestingly, the decline in cognitive function caused by unregulated knowledge expansion was memorably described in a landmark research paper by Shaul Markovitch and Paul Scott. The investigators created an AI-based system that solved basic toy problems and saved their solutions. The main purpose of the storage was to provide a ready answer whenever a newly encountered problem resembled a previously addressed instance. As the program tackled further puzzles, its efficiency initially improved thanks to the cache of worked-out solutions. However, as the solution repository grew unchecked, performance degraded as the program began to sink under the weight of its accumulated knowledge. Past a tipping point, it was faster to solve problems from first principles than to search for a ready-made answer in an increasingly large solution store.
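As a rough illustration of this effect (my own sketch, not the paper's actual implementation), consider a solver whose cache is searched linearly: every stored solution makes every future lookup a little more expensive, so past some cache size a lookup can cost more than simply re-solving the problem.

```python
def solve_from_scratch(problem):
    """Stand-in for solving by first principles: roughly constant cost."""
    return sum(problem)  # toy "solution" to a toy problem


class CachingSolver:
    """Caches every solution and searches the cache with a linear scan,
    so the per-lookup cost grows with the number of stored solutions."""

    def __init__(self):
        self.cache = []       # list of (problem, solution) pairs
        self.comparisons = 0  # counts cache-scan work, a proxy for lookup cost

    def solve(self, problem):
        # The larger the cache, the more comparisons each miss requires.
        for stored_problem, solution in self.cache:
            self.comparisons += 1
            if stored_problem == problem:
                return solution
        solution = solve_from_scratch(problem)
        self.cache.append((problem, solution))
        return solution
```

Feeding such a solver a stream of mostly novel problems makes its per-problem scan cost grow linearly with everything it has ever solved, which is the degradation the paper observed in an unbounded store.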

At some stage, the researchers introduced forgetfulness into the system by discarding stored solutions that were rarely used. This change stabilised the program and drastically improved its overall performance. The most striking finding, however, was that even random forgetfulness, implemented by arbitrarily removing stored solutions, was enough to outperform a system that never forgot!
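A minimal sketch of this idea follows, with illustrative names and a simple usage-count eviction policy of my own choosing; the paper's mechanism differed in detail, but the principle of bounding the store by discarding sparingly used entries is the same.

```python
class ForgetfulSolver:
    """Bounded solution store: once the cache exceeds its capacity,
    the least-used entry is forgotten. (A hypothetical sketch, not the
    Markovitch and Scott implementation.)"""

    def __init__(self, capacity=50):
        self.capacity = capacity
        self.cache = {}       # problem -> solution
        self.use_counts = {}  # problem -> number of times retrieved

    def solve(self, problem, solver):
        if problem in self.cache:
            self.use_counts[problem] += 1
            return self.cache[problem]
        solution = solver(problem)
        self.cache[problem] = solution
        self.use_counts[problem] = 1
        if len(self.cache) > self.capacity:
            self._forget()
        return solution

    def _forget(self):
        # Discard the sparingly used entry. The paper's striking random
        # variant would instead pick the victim arbitrarily.
        victim = min(self.use_counts, key=self.use_counts.get)
        del self.cache[victim]
        del self.use_counts[victim]
```

Because the cache never grows past its capacity, lookup cost stays bounded no matter how many problems the system has seen, which is what restored the program's stability in the study.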

In general, there are several ways in which an overabundance of information degrades the intellect. Firstly, retrieval slows down as the volume of stored content grows uncontrollably. Secondly, superfluous information causes confusion, since any access to an overcrowded memory fetches irrelevant data alongside the appropriate content. Thirdly, excessive information impairs learning by reducing the ability to form useful generalisations: when noisy data cluttering the mind is not suppressed, it ends up being used as building blocks for poor generalisations.

Hermann Ebbinghaus, the late nineteenth-century psychologist, showed that forgetting follows a decreasing power-law trend. This implies that, before settling down, the strength of recall from memory falls sharply with each passing day. On the surface this steep decline seems worrisome, but a consistent stream of data from the environment regularly replenishes our mental storage. We can therefore postulate that high levels of cognition require an equilibrium between the acquisition and the discarding of knowledge. On one hand, the pruning of content by forgetting protects us from the harmful effects of uncontrolled data expansion; on the other, the information lost to forgetfulness is continuously replenished by fresh input from the environment. Loss and gain thus remain in balance, and any major deviation results in diminished intellectual capability.
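The shape of such a forgetting curve can be sketched with a simple power-law retention function. The exponent below is illustrative, not Ebbinghaus's fitted value; the point is only that the decay is steep early on and then flattens.

```python
def retention(t_days, beta=0.5):
    """Power-law forgetting curve: recall strength decays as (1 + t)^(-beta).

    At t = 0 retention is 1.0 (perfect recall); beta controls how fast it
    falls. The value 0.5 is an assumed illustrative exponent."""
    return (1.0 + t_days) ** (-beta)
```

Evaluating this curve shows the pattern the text describes: the drop between day 0 and day 1 is far larger than the drop between day 7 and day 8, after which recall strength settles toward a slowly declining tail.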

Despite the proven role of forgetfulness in enhancing human cognition and learning, the world of data science has been slow to adopt methods that prune information. A few noteworthy exceptions show potential, but current approaches are far from comprehensive. For example, winnowing discards certain data elements, which improves a machine learning model's outcomes by focusing it on more relevant features. Similarly, many dynamic algorithms weight recent information more highly, improving learning accuracy by deprioritising older data and concentrating on freshly received input.
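Recency weighting of this kind is commonly implemented as an exponentially weighted average, a minimal sketch of which is shown below; the decay factor is an assumed hyperparameter, not a value from any particular system.

```python
def exponentially_weighted_mean(values, decay=0.9):
    """Recency-weighted average: each older observation is down-weighted by
    a constant decay factor, so fresh input dominates the estimate.

    A decay of 0.9 (illustrative) means an observation one step older
    counts for 90% as much as its successor."""
    weighted_sum = 0.0
    weight_total = 0.0
    weight = 1.0
    for v in reversed(values):  # the most recent value gets weight 1.0
        weighted_sum += weight * v
        weight_total += weight
        weight *= decay
    return weighted_sum / weight_total
```

In effect, old data is gradually "forgotten" rather than deleted outright, which lets the estimate track a changing environment instead of being anchored by stale history.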

As businesses adopt big data at scale, the volume of information within organisations is expected to grow at an unprecedented rate. Although smart algorithms are being deployed to draw out meaningful trends, the sheer vastness of the data is a major bottleneck to improving their outcomes. In the hope of generating better analytics, these intelligent programs are stretched to process enormous quantities of data as organisations leverage advances in cloud storage technology. Irrespective of how advanced the algorithm, feeding it large volumes of unregulated information perpetuates deficient learning and stagnant results. The key lesson from human cognition is that information growth must be held in equilibrium to achieve optimal outcomes. With that in perspective, it is imperative that current AI systems embrace content pruning and forgetfulness techniques in order to meet the organisational push for actionable insights from mammoth amounts of data.
