The Technology of the Future
Thirty years ago today (yikes) I defended my Master's thesis. The date is easy to recall as it matches the birthday of a family member. My research was focused on analyzing the weights of trained neural networks to interpret their behavior in terms of symbolic rules. The old joke then was "AI is the technology of the future... and always will be", and there were several "AI winters", periods of disappointment and reduced funding interest, both before and after my graduate school years. We're clearly in a very hot period now.
Even back then the black box nature of a trained network was a concern. Here's an excerpt from the introduction of my thesis:
But for all the benefits neural networks offer, they suffer from a significant disadvantage not shared by their symbolic, rule-based counterparts: they cannot explain their decisions. Networks are often described as opaque; one can't easily look inside them to ascertain how they produce their results. Without a thorough understanding of network behavior, confidence in a system's results is lowered, and transfer of learned knowledge to other processing systems – including humans – is precluded.
The formalism I used for interpreting network behavior was "n-of-m rules", a simple example of which is the majority voter function Y = 2 of (A, B, C). These rules are easy for people to understand, yet still quite powerful. They're capable of describing behaviors intermediate to standard Boolean OR (n = 1) and AND (n = m) functions, and the intermediate behaviors reflect a limited form of two-level logic. (To see why this is true, note that the expression for Y given above is equivalent to AB + BC + AC.)
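The equivalence claimed above is easy to check exhaustively. Here's a minimal sketch (the helper name `n_of_m` is mine, not from the thesis) that compares the 2-of-(A, B, C) rule against the two-level expression AB + BC + AC over all eight input combinations:

```python
from itertools import product

def n_of_m(n, inputs):
    """An n-of-m rule fires when at least n of its m inputs are true."""
    return sum(inputs) >= n

# Compare the majority voter Y = 2 of (A, B, C) against AB + BC + AC.
for a, b, c in product([False, True], repeat=3):
    majority = n_of_m(2, (a, b, c))
    two_level = (a and b) or (b and c) or (a and c)  # AB + BC + AC
    assert majority == two_level

print("2-of-(A, B, C) matches AB + BC + AC on all 8 input combinations")
```

Setting n = 1 in the same helper gives ordinary OR, and n = 3 gives AND, which is the sense in which n-of-m rules interpolate between the two.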
It turns out that even for modest values of m, the set of possible n-of-m functions is very large. A neuron with, say, 10 inputs, could learn one of nearly 400,000 such functions, while for a neuron with 20 inputs the figure is over 46 billion. That's an amazing amount of flexibility for just one neuron. It's far too many possibilities to feasibly assess, and one of the things I showed was how to dramatically compress the search space without any risk of missing the best match. For the 10-input example the algorithm only needed to consider 30 possibilities, and for the 20-input case, only 110. That's quite a big reduction, and it made the analysis super efficient. I coded up all the math and tested the approach on random functions as well as some of the standard machine learning datasets of the time. One of the latter was related to gene sequencing, another was for breast cancer diagnosis.
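One counting scheme that reproduces those figures assumes each of the neuron's inputs may appear in a rule as a positive literal, as a negated literal, or not at all. (That's my reconstruction of the count, not a formula quoted from the thesis.) A sketch:

```python
from math import comb

def count_n_of_m(num_inputs):
    """Count candidate n-of-m rules for a neuron with the given number of
    inputs, assuming each input can be used as-is, negated, or omitted:
    choose k of the inputs, pick a sign for each (2**k ways), then pick
    a threshold n from 1..k."""
    return sum(comb(num_inputs, k) * 2**k * k
               for k in range(1, num_inputs + 1))

print(count_n_of_m(10))  # 393660 -- "nearly 400,000"
print(count_n_of_m(20))  # 46490458680 -- "over 46 billion"
```

The sum also has the closed form 2m·3^(m−1), which is a quick way to sanity-check the two printed values.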
It was good work at the time, and it resulted in several published papers, one of which is still available here, but it's all but irrelevant given the size of today's networks, with billions of weights distributed across over 100 hidden layers of neurons. Of course, the size and complexity of current networks makes the opacity problem even worse, and has contributed to serious concerns about how much is entrusted to AI, even as performance reaches new heights and more applications are identified every day. There is work being done on interpretable AI, explainable AI, and ethical AI. It is a story that is in the news regularly and is still very much being written.
It wasn't long after my fellowship ended that another AI winter set in. But machine learning of various sorts has intersected with electronic design and test solutions for a while now. It was in some of the speech recognition capabilities we explored for the Infiniium and InfiniiVision products. (And for which Mike Karin and I received a patent. 🙂) Keysight has neural-network-derived transistor models incorporated into its design software offering, and some of its data analytics solutions make use of AI approaches. The page shown above has some good information on different facets of the topic, and I have to believe more will be coming over time. When added to the amazing amount of natural (human) intelligence in the company's products and solutions, it makes for some pretty powerful capabilities.
The technology of the future is indeed here today, with all of its promise, risk, and continuing opportunities for improvement.