Accelerating the Adoption of AI Models - Part 1

In collaboration with industry partners, Peter Darveau, P.Eng., has been contributing to a project to develop an open-source machine learning (ML) model for identifying and analyzing root causes in plant operations. The project sits at stage 2 of a five-stage AI maturity framework. Hexagon Technology in Oakville, Ontario educates about and develops transformative AI solutions that bridge human and machine collaboration. In this post, we discuss how to optimize a controller's model using machine learning, reducing the time-consuming troubleshooting and analysis that come with legacy controllers and software.

Control systems in place today work mostly on binary logic (1s and 0s) or some basic form of linear control incorporating a proportional gain and possibly an offset. In some cases, such as process control, the rate of change of the output relative to the rate of change of the input is also factored in to stabilize an otherwise unstable output. While the various operations and sequences perform their respective tasks well independently of each other, the behaviour of the controller or system as a whole is ignored. Advances in neural network theory, IIoT and computing technology offer new ways of getting more out of legacy controllers.

What is the advantage of looking at the controller or system as a whole? We can imagine a box with many inputs and outputs. By performing statistical analysis on what the controller controls, we can visualize how the controller is performing and make incremental parameter adjustments to get the desired output performance. This is essentially what Statistical Process Control (SPC) does. But doing it this way relies on an external control method, and it in no way gives us an optimized model of all the control sequences that accounts for the sensitivities of the outputs to all the inputs.
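To make the SPC idea concrete, here is a minimal sketch of the classic 3-sigma control-limit calculation applied to a stream of controller output readings. The sample values and function name are hypothetical, chosen only for illustration:

```python
import statistics

def control_limits(samples):
    """Compute Shewhart-style 3-sigma control limits for a set of
    controller output samples: mean +/- 3 * standard deviation."""
    mean = statistics.mean(samples)
    sigma = statistics.pstdev(samples)
    return mean - 3 * sigma, mean + 3 * sigma

# Hypothetical readings of a controlled process variable
readings = [50.1, 49.8, 50.3, 50.0, 49.9, 50.2, 49.7, 50.1]
lcl, ucl = control_limits(readings)

# Points outside the limits would flag the process as out of control
out_of_control = [r for r in readings if not (lcl <= r <= ucl)]
```

Note that this monitors the box from the outside only; it says nothing about how the internal control sequences map inputs to outputs.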

Developing an optimized model of the controller is only beneficial if we can optimize the box with some method of error correction; that is, optimize the box so that the differences (errors) between what is expected at the outputs and what is observed are minimized. This is where neural networks come into play.
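A common way to quantify that output error is the mean squared error, which neural network training typically minimizes. A minimal sketch, with made-up expected and observed values:

```python
def mse(expected, observed):
    """Mean squared error between expected and observed outputs."""
    return sum((e - o) ** 2 for e, o in zip(expected, observed)) / len(expected)

# Hypothetical expected vs. observed controller outputs
expected = [1.0, 2.0, 3.0]
observed = [1.1, 1.9, 3.2]
err = mse(expected, observed)
```

Optimization then means adjusting the box's internal parameters until this error is as small as possible.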

It can be shown that a series of control sequences can be represented as a neural network model, and that this model is related to the most iconic model of all: linear regression, given by the familiar algebraic formula y = mx + b, a straight line on an x-y plane. By cascading the output from each linear regression layer (function) to the next, we obtain a simple network.
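This cascading can be sketched in a few lines of Python. Each "layer" here is just the straight-line formula y = mx + b, with hypothetical slope and intercept values, and the network feeds one layer's output into the next:

```python
def linear(m, b):
    """One linear-regression 'layer': y = m*x + b."""
    return lambda x: m * x + b

# Hypothetical layer parameters
layer1 = linear(2.0, 1.0)   # y = 2x + 1
layer2 = linear(0.5, -3.0)  # y = 0.5x - 3

def network(x):
    """Cascade: feed layer1's output into layer2."""
    return layer2(layer1(x))
```

For example, network(4.0) passes 4.0 through layer1 to get 9.0, then through layer2 to get 1.5.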

[Image: the straight-line equation y = mx + b redrawn as a simple network of connected nodes]

By adding some nomenclature to keep track of how nodes are connected to other nodes, we turn a straight-line equation into a mathematical network. The connections represent the weights given to each relationship, and the diagram flows in one direction, from inputs to outputs. Adding more functionality creates a more complex network, as shown below:

[Image: a more complex network with additional nodes and weighted connections]

It is important to note that, in this example, no matter how we analyze or modify the model, the output(s) will always behave according to a straight line. Mathematically, this holds true for any network built in a similar fashion, and it explains why the model represented by all the control sequences running in a typical controller, or in a more complex system involving multiple controllers, cannot be optimized at all. We now have a clear and actionable problem statement to proceed further.
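The claim that stacked linear layers always reduce to a straight line can be verified directly. A minimal sketch, using the same hypothetical slopes and intercepts as above, collapses two cascaded linear layers into a single equivalent one:

```python
def linear(m, b):
    """A purely linear layer: y = m*x + b."""
    return lambda x: m * x + b

def compose(m1, b1, m2, b2):
    """Slope and intercept of layer2(layer1(x)):
    m2*(m1*x + b1) + b2 = (m2*m1)*x + (m2*b1 + b2)."""
    return m2 * m1, m2 * b1 + b2

# Hypothetical layer parameters
layer1 = linear(2.0, 1.0)
layer2 = linear(0.5, -3.0)

m, b = compose(2.0, 1.0, 0.5, -3.0)
collapsed = linear(m, b)
# collapsed(x) equals layer2(layer1(x)) for every x:
# the two-layer network is just another straight line.
```

Repeating the composition for any number of layers yields the same result, which is why depth alone buys nothing without non-linearity.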

In a future post, I will explain the barriers to optimization with this network and how the problem can be resolved by introducing an activation function to create an adaptive neural network. It is this function that shapes the outputs into a non-linear (curved) behaviour, allowing the network to minimize error by factoring in the node relationships and the sensitivity of the outputs to changes in the inputs.
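As a small preview of that idea, here is a sketch of the same two-layer cascade with a sigmoid activation inserted after each linear step. The parameters are hypothetical; the point is only that the activation breaks the straight-line behaviour:

```python
import math

def sigmoid(z):
    """Classic S-shaped activation function."""
    return 1.0 / (1.0 + math.exp(-z))

def layer(m, b, x):
    """Linear step followed by a sigmoid activation."""
    return sigmoid(m * x + b)

def network(x):
    # With activations between layers the output is no longer
    # a straight line in x, so the network can model curves.
    return layer(0.5, -3.0, layer(2.0, 1.0, x))
```

Evaluating network at evenly spaced inputs gives unevenly spaced outputs, which a purely linear network could never do.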

AI systems perform at or above human level on many specialized tasks, including tasks that were never possible with written rules or conventional software, such as recognizing and categorizing millions of states and patterns. Advances in computing technology, such as Intel's Xeon processors, the Movidius NCS and the oneAPI open framework, make this possible.

The key to AI maturity is envisioning what the end state could look like and seeking support to chart a clear path from the current state to that vision. At Hexagon Technology, we are inspired by the potential of AI. We have also been privileged to go on this transformational journey with business leaders, helping them understand the promise of AI to create the future of industry, services, customer experiences and the environment.

Thanks,

Peter


