Speed and the optics of process efficiency
The rapidly increasing popularity of Agentic AI has once again raised our interest in process automation. A decade ago, robotic process automation (RPA) had a similar impact on system modernization concepts and strategies. But it is rare to see this interest coupled with a genuine attempt at reducing waste and optimizing the performance and efficiency of a process.
The usual discussion is about automating current processes as-is or with minor enhancements. The reasoning, it seems, is that by replacing the human actor with advanced automation, we will achieve results quicker. Increasing the speed at which we achieve the current results appears to be more important to us than improving the quality of the result or the efficiency of the process relative to the ultimate outcome and its value.
Is increasing the speed and frequency of sub-optimal performance and an inefficient process an indication of success? Seeing results sooner rather than later may create a perception of efficiency even if the actual results are sub-optimal. There could be situations where the longer it takes to see the results, the greater our perception of inefficiency. We may even accept a less optimal result if it is delivered much quicker.
But things may not be as simple if we bring hyper automation to such processes and accept less optimal results on a vast scale, even if they are delivered quickly. Automation will amplify the timeliness of results at scale, but it may come at the cost of amplified inefficiency. How many of us have seen a dashboard that captures and measures the waste and sub-optimal performance of processes, whether prevented or incurred, or even the root causes of both problems?
What should be the priority? Making process automation an overarching intent and goal in itself, or making it a strategic component of the overarching goal of process reengineering and continuous improvement? I prefer the latter: when it comes to process automation, removing waste and optimizing performance before automating the process increases the benefits of automation exponentially, while minimizing the hidden negative consequences, whether we measure them or not.
So, before we launch into hyper automation initiatives, we should try to reengineer the process so that speed is not everything and the quality of results and optimal performance are treated as equal success factors. From a technological point of view, I like Agentic AI as much as I like RPA. Which is to say, not very much. Especially when compared to continuous reengineering of processes that achieve high-quality results in the most efficient ways relative to the intended system outcomes and stakeholder value.
Of course, my views are subject to change once there are ways to measure the efficiency or inefficiency of Agentic AI and RPA just as reliably as there are ways to measure human efficiency and performance. Until then, being skeptical of the actual benefits of hyper automation in the context of business processes is neither wrong nor obstructionist.