Oh, Doesn't Google Wish!

I recently came across another of the multitude of popular-media articles written by technology journalists who, sorry to say, know very little about the complex subjects they report on. The article, titled "Google's AI is no smarter than a 6-year-old, study says," was supposed to report on a very unscientific study in which researchers ranked AI capabilities by administering IQ tests.

First of all, we have had calculators that perform computations no human can match since at least the 16th century (if not before), and one can hardly call these devices intelligent machines. Playing Chess or Go is also not the threshold we are trying to pass. These games have a huge search space (Go vastly more than Chess), but they are finite, and so they can be conquered by machines with more computing power: however large the search space is, these are in the end finite problems, and more powerful hardware can, in brute-force fashion, compute a score for potentially every board configuration. Nor is the intelligence we are seeking related in any way to the kind of pattern recognition that non-linguistic species perform (image and sound recognition, for example; many animal species surpass humans in their visual and auditory capabilities!). If these pattern-recognition capabilities were relevant to reasoning and language skills, then we would have seen animals speaking and reasoning by now; clearly, these capabilities do not in any way shed light on human intelligence!

Human intelligence is about the ingenious way humans "frame" the knowledge they possess: they can effortlessly and instantly set millions of facts aside and focus on the problem or subject matter at hand, change this frame in some mysterious way in a fraction of a second, and shift focus to another fragment of their knowledge structures as soon as the context (the subject matter) changes. It is no coincidence that this is also related to the infinite number of thoughts that humans can form (and understand), performing complex chains of reasoning along the way to resolve various kinds of scope, reference, lexical, structural, and other ambiguities. It is this specific capability that eludes us: our capability of forming (and interpreting) a potentially infinite number of thoughts, efficiently accessing the relevant "frames" of background knowledge needed in a specific context while ignoring the rest.

I once told a colleague of mine that there is one possible way for amateurs working in NLP/NLU to start appreciating the complexity of the problem. Perhaps the field should be called "Natural Thought Understanding," since language = thought. NLP is not about the text we see on the outside, but about the huge amount of text that is never explicitly stated in ordinary discourse. We leave that text out because we humans usually assume that what is left out is available to all of us (as shared 'commonsense' knowledge) and need not be explicitly stated; thus much of language 'understanding' is about uncovering that missing text. And this is what specifically intrigued me about the article ("Google's AI is no smarter than a 6-year-old"). A 6-year-old? Hold on!

A 6-year-old knows that elephants don't fly, that mountains don't dream, that every human has a mother and a father; that when hearing 'John enjoyed the sandwich' what is meant is that John enjoyed eating (not making, buying, etc.) the sandwich; that 'it' in "the ball will not fit in the suitcase because it is too big" refers to the ball (and not the suitcase); that "John has a beautiful red car" is more natural to say than "John has a red beautiful car"; that "a gold bracelet" refers to a bracelet made (at least partially) of gold, but "a gold mine" is not a mine made of gold. They know that John doesn't use his kids as toppings for his pizza when they hear "John likes having pizza with his kids" (unlike the case of "John likes having pizza with pineapple"), and they know that "a house" in "John visited a house on every street in his village" actually refers to many houses, since it is not likely that the same house is on every street, and they know and they know and they know ...
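The ball/suitcase example above is an instance of what the literature calls a Winograd-schema-style test: two sentences that differ by a single word flip the referent of "it," so no surface-level heuristic can resolve both. As a minimal sketch (hypothetical data and a deliberately naive baseline, not any system discussed in the article), here is how such a pair can be encoded, and why a purely positional heuristic must fail on one member of each pair:

```python
# Winograd-schema-style pair: identical surface form except for one word
# ("big" vs. "small"), yet the correct referent of "it" flips.
SCHEMAS = [
    {
        "sentence": "The ball will not fit in the suitcase because it is too big.",
        "pronoun": "it",
        "candidates": ["the ball", "the suitcase"],
        "answer": "the ball",
    },
    {
        "sentence": "The ball will not fit in the suitcase because it is too small.",
        "pronoun": "it",
        "candidates": ["the ball", "the suitcase"],
        "answer": "the suitcase",
    },
]

def naive_resolver(schema):
    """A purely positional baseline: always pick the candidate nearest
    to the pronoun. Since both sentences have identical structure, any
    position-based rule gives the same answer for both, and is therefore
    guaranteed to get exactly one member of the pair wrong."""
    return schema["candidates"][-1]

correct = sum(naive_resolver(s) == s["answer"] for s in SCHEMAS)
print(f"naive baseline: {correct} of {len(SCHEMAS)} correct")
```

The point of the sketch is the guarantee in the comment: resolving both sentences requires world knowledge (big things don't fit in smaller containers), which is exactly the kind of unstated commonsense text the article argues is missing.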

Google's AI is no smarter than a 6-year-old? Don't they wish! In fact, I am certain they wish their "AI" were smarter than a 2-year-old. I am also certain they wish -- really, really wish -- that they were at least on the right path to getting there!

Very intriguing and insightful perspectives. Given these perspectives, what do you think are reasonable domains for the new data-intensive AI paradigm? I, for example, think of non-stationary statistical processes as another problem. Also, with Shannon's information theory, are we missing something about source data, namely whether its self-information can reduce uncertainty? Much to contemplate for understanding the limits.


Nice article, thank you. I particularly liked your discussion of knowledge frames, and how they change every fraction of a second. In my view a frame is a memory set activated, or close to being activated, by immediate circumstance. This memory state, and corresponding functional neural network set, changes across hours, minutes, seconds, and split seconds.... Thoughts are also memory combinations, which tend to be more strongly activated, and often trigger word sequences. I agree thoughts are essentially infinite.

Relevant input, Walid. Best, Olfert


More articles by Walid Saba
