The Bridge
What?
Strong AI is essentially about bridging the gap between natural and artificial intelligence. In saying that, do I mean we are still far from doing so, even with artificial intelligence reaching such great heights that machines can read and recognise faces, make comments of their own, and win prizes for poetry?
Yes, I do.
Even with all that neural networks and deep learning can handle, we still require separate terms to define natural and artificial intelligence. And the reason is quite simple. Artificial intelligence, no matter how closely it may be made to resemble human intelligence, is never allowed to be imperfect. Let's face it, it is our imperfections that make us human. A mere "fx991MS" calculator can perform arithmetic on numbers in the billions, whereas a human mind struggles to get a two- or three-digit mental multiplication right. For years, we have been building machines only to make up for our own shortcomings, rather than actually trying to give them life. Maybe it is time to change that, if we are to dream, anywhere in the near future, of taking AI to the next level.
Maybe it's time to consider treating machines like humans if we are to remotely expect them to behave like humans. Because, let's face it, the Turing Test no longer suffices as a test for AI, being significantly relative and subjective in nature. It is time for us to change the way we treat our machines. For example, we ought to try letting them be born free, like a little child, and then leaving them at the mercy of the external environment, from which they would learn, imbibe and evolve, as a normal person is expected to do. One of the best gifts we can present AI with is the ability to learn without help from us. Google Brain is attempting something of the sort, combining open-ended research with systems engineering and Google-scale computing; and milestones like IBM's Deep Blue, the first machine to beat a reigning world chess champion under regular time controls, show how far careful engineering alone can go. Still, the question remains: will machines ever truly learn to learn?
Perhaps synonymous with natural intelligence is human error. It is our errors, as I said, that comprise our shortcomings but also make us so extraordinary, because we have been given the ability to learn from our mistakes. This mechanism makes it unnecessary for us to rely on an external teaching entity; although such entities do come in handy, like a class teacher or a quick YouTube video, most of what we need to learn throughout our lives, be it a life skill or a special expertise, we are capable of teaching ourselves. That is what forms the soul of our evolution. It is because our cave-dwelling ancestors learnt that they needed more comfort that we sit here today, researching futuristic AI. Giving machines a chance to make mistakes, therefore, could lead to self-evolving machines that no longer need human support to reach their next level.
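To make the idea concrete, here is a minimal sketch of trial-and-error learning: an agent that occasionally makes "mistakes" (explores) and updates its own estimates from the outcomes, with no external teacher. Everything here — the two options, their payoff probabilities, the exploration rate — is an illustrative assumption, not a description of any real system.

```python
import random

# Hidden payoff probabilities; the agent never sees these directly.
PAYOFF = {"left": 0.3, "right": 0.8}

def run(episodes=5000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    value = {"left": 0.0, "right": 0.0}   # agent's running estimates
    count = {"left": 0, "right": 0}
    for _ in range(episodes):
        # Mostly exploit the best-known option; sometimes "err" and explore.
        if rng.random() < epsilon:
            arm = rng.choice(["left", "right"])
        else:
            arm = max(value, key=value.get)
        reward = 1.0 if rng.random() < PAYOFF[arm] else 0.0
        count[arm] += 1
        # Incremental average: learn from the outcome, good or bad.
        value[arm] += (reward - value[arm]) / count[arm]
    return value

print(run())
```

The deliberate mistakes are what let the agent discover that "right" pays better; with no exploration it could lock onto its first guess forever.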
How?
Digital genes promise such a future. Conventional electronic IC chips integrated into our AI friends do not seem to hold much promise for further growth.
The concept of computer genomes has long been in existence. Imagine being able to frame and condense all the wonders of human DNA into a computer chip, letting loose the process of self-evolution in the artificial world.
The processes of DNA replication and digital protein synthesis could be closely followed. That would be imitation in a far deeper sense than the Turing Test demands, if you ask me.
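A toy version of such a digital genome can be sketched as a genetic algorithm: bit-string "genomes" replicate with occasional copying errors, and selection keeps the fittest. The encoding and the fitness function here (simply counting 1-bits) are stand-in assumptions, since no concrete scheme is specified above.

```python
import random

GENOME_LEN = 32
MUTATION_RATE = 0.02   # chance of a copying error per bit
POP_SIZE = 40

def fitness(genome):
    # Toy fitness: number of 1-bits in the genome.
    return sum(genome)

def replicate(genome, rng):
    # DNA-style replication: copy each "base", sometimes imperfectly.
    return [bit ^ (1 if rng.random() < MUTATION_RATE else 0) for bit in genome]

def evolve(generations=200, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(generations):
        # Selection: the fitter half survives and replicates.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: POP_SIZE // 2]
        pop = survivors + [replicate(g, rng) for g in survivors]
    return max(fitness(g) for g in pop)

print(evolve())
```

The point of the sketch is that the replication errors are not a defect: without mutation, the population could never discover genomes fitter than those it started with.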
When?
AI has come a long way, but we cannot reliably say it has reached its goals until, someday, the news tells of robots being criticised for their wrong decisions, robots fighting for their rights, or robots getting emotional over their national pride. We seek to eliminate human faults in them, not realising that faults are the key to their going that one step further.