The Purpose of Security in Process Automation

In process automation, security is not an end in itself. Its primary purpose is to preserve operational integrity against unauthorized or malicious influence, such that the process, the automation functions, the operators, and the physical installation remain capable of performing in accordance with design intent and operational intent. In engineering terms, this means preserving the conditions under which the process remains observable, controllable, and operable. Business continuity follows from this purpose because business performance depends on the installation remaining capable of sustained, acceptable, and intended operation. Once operational integrity is lost, continuity is typically degraded through production loss, off-spec operation, equipment stress, forced shutdown, or unsafe conditions.

Confidentiality also matters, but it is not a universal first principle. It becomes relevant when loss of information can increase operational exposure, damage competitive or strategic position, distort market-sensitive value, or create legal or regulatory harm.

This starting point matters because much of the language still used in cybersecurity begins with the Confidentiality, Integrity, and Availability (CIA) triad. Some practitioners attempt to adapt that model to process automation by appending terms such as safety, reliability, or resilience. That does not resolve the problem: it mixes properties of information and digital systems with performance characteristics of the wider automation and process environment. The result is a model that is harder to apply without being any more technically accurate.

The same limitation appears when International Electrotechnical Commission 62443 (IEC 62443) is treated as sufficient on its own. IEC 62443 is valuable, but it is intentionally industry-agnostic and primarily structured around the security of the Industrial Automation and Control System (IACS) as a system, including risk assessment, zones and conduits, and system or component requirements. In the process industry, that leaves a sector-specific gap because the decisive engineering problem is not only protection of the digital architecture, but preservation of control and safety performance throughout the process lifecycle. International Society of Automation Technical Report 84.00.09 (ISA-TR84.00.09) addresses that gap more directly by building on the safety lifecycle defined in American National Standards Institute and International Society of Automation Standard 61511 (ANSI/ISA-61511), integrating cybersecurity into that lifecycle, and explicitly addressing process safety controls, alarms, interlocks, and the process sector.

From an engineering perspective, the central question is whether unauthorized or malicious influence can degrade the functions required to keep the process within design intent and operational intent. The distinction in ISA-TR84.00.09 between Equipment under Control and the System under Consideration is useful, but it is still not sufficient for process automation risk analysis. Equipment-based views remain too close to the physical asset model, while system-under-consideration and zone-based views remain too close to the architecture model. In practice, cyber-physical consequences arise through disruption of automation functions, not through equipment categories alone. Those functions may be distributed across multiple assets, and multiple distinct functions may be hosted on the same asset. For that reason, the most appropriate object of analysis is the control function and its dependencies. A Control Function Catalog therefore provides a stronger analytical basis than either an equipment-only view or a purely architecture-centric view. It separates logical automation functions from the physical installation, captures shared services and communication dependencies, and allows consequences to be assessed at the point where control is actually created, modified, and lost.
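
To make the catalog idea concrete, the sketch below shows one possible shape for catalog entries. It is a minimal illustration: the class, the field names, and the example functions (including tag names such as "DCS controller C-01" and "PT-101") are assumptions invented for this sketch, not a format defined by ISA-TR84.00.09 or any other standard.

```python
from dataclasses import dataclass
from enum import Enum

class Condition(Enum):
    """Engineering conditions that control functions help preserve."""
    OBSERVABILITY = "observability"
    CONTROLLABILITY = "controllability"
    OPERABILITY = "operability"
    PROCESS_SAFETY = "process safety"

@dataclass
class ControlFunction:
    """One logical automation function, decoupled from the assets that host it."""
    name: str
    hosted_on: list[str]        # a single function may span several assets
    depends_on: list[str]       # shared services, networks, other functions
    preserves: list[Condition]  # which engineering conditions it supports
    loss_consequence: str       # process-level effect if lost or manipulated

# A catalog is simply the set of functions. Note that one asset can host
# several functions, and one function can span several assets.
catalog = [
    ControlFunction(
        name="reactor pressure control loop",
        hosted_on=["DCS controller C-01", "PT-101", "PV-101"],
        depends_on=["control network segment A", "instrument power supply"],
        preserves=[Condition.CONTROLLABILITY, Condition.OPERABILITY],
        loss_consequence="pressure excursion toward the relief set point",
    ),
    ControlFunction(
        name="high-pressure trip interlock",
        hosted_on=["SIS logic solver", "PT-102", "XV-102"],
        depends_on=["instrument power supply"],
        preserves=[Condition.PROCESS_SAFETY],
        loss_consequence="loss of independent overpressure protection",
    ),
]
```

The essential design choice is that the unit of analysis is the function, with assets and dependencies attached to it, rather than the asset with functions attached.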

The engineering question is therefore not simply whether equipment is protected or whether zones and conduits are well designed. It is whether the relevant functions remain capable of preserving observability, controllability, operability, and the effectiveness of independent safety protections under real operating conditions.

For that reason, the purpose of security in process automation cannot be defined primarily in terms of protecting digital assets in isolation. The digital system matters, but it exists to monitor, coordinate, optimize, and constrain a physical process. If security is framed only around the architecture of the digital system, it will miss a critical part of the problem, especially where the attacker’s objective is not merely unauthorized access or digital disruption, but interference with the production system itself in order to cause environmental damage, financial loss, equipment damage, prolonged loss of production, or even fatalities. This is particularly important in scenarios driven by sabotage or terrorist intent, where the digital intrusion is only a means to achieve physical consequences.

An architecture-centric approach may protect networks, servers, authentication paths, and communications, yet still fail to address whether the automation strategy, control dependencies, operating windows, safeguard philosophy, and physical installation remain robust against deliberate interference. This is the limitation of an architecture-centric view when used in isolation. It focuses on system components, communication paths, zones, conduits, and access boundaries. These elements are necessary, but they are not sufficient. A process installation does not become secure merely because its digital architecture is segmented, hardened, and well administered. It becomes secure only when those protections are embedded in a broader model that asks a more fundamental question: does the installation remain capable of keeping the process within intended and acceptable conditions when parts of the digital system fail, degrade, or are deliberately manipulated?

That question also requires acceptance of a second reality: digital security measures may fail or be bypassed. The installation should therefore be designed to provide the highest achievable degree of resilience under those conditions. In the process industry, this means not only preventing compromise, but also ensuring that control functions, operating limits, independent safeguards, and the physical installation continue to reduce the probability of escalation and limit the severity of consequences when prevention is no longer effective. That is a control-centric question.

A control-centric model starts from function and consequence. It begins with the automation purpose of the system and the physical realities it governs. It asks which functions are essential to maintaining control, which measurements are critical for situational awareness, which actions can move the process toward equipment damage or loss of containment, which dependencies can be exploited to alter outcomes, and which safeguards remain capable of limiting escalation if the primary automation layer is compromised. It also examines whether process design, control philosophy, alarm strategy, operator support, and the physical installation provide sufficient robustness once prevention has failed.
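
Continuing the illustrative catalog sketch above, those questions can be expressed as simple queries over functions and their dependencies. The helpers below are hypothetical; they show only the shape of the analysis, not a complete method.

```python
def functions_exposed_via(dependency: str,
                          catalog: list[ControlFunction]) -> list[ControlFunction]:
    """Which functions can be degraded by manipulating one shared dependency?"""
    return [f for f in catalog if dependency in f.depends_on]

def remaining_safeguards(compromised_assets: set[str],
                         catalog: list[ControlFunction]) -> list[ControlFunction]:
    """Which safety functions remain credible once given assets are compromised?"""
    return [
        f for f in catalog
        if Condition.PROCESS_SAFETY in f.preserves
        and not any(a in compromised_assets for a in f.hosted_on)
    ]

# A shared dependency concentrates consequence across otherwise separate functions:
for f in functions_exposed_via("instrument power supply", catalog):
    print(f.name, "->", f.loss_consequence)
```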

This is where observability, controllability, and operability become essential. They are engineering conditions that determine whether the process can still be governed.

Observability is the condition in which the actual state and progression of the process can still be determined with sufficient accuracy and timeliness. It depends on trustworthy measurements, alarm behavior, status indications, sequence information, and operator visibility of relevant process conditions.

Controllability is the condition in which the process can still be influenced in the intended manner through the available control functions, interventions, and safeguards. It depends on the integrity of commands, logic, sequencing, permissives, interlocks, final control actions, and the ability of operators and automated functions to impose the required response.

Operability is the condition in which the process can still be run in a stable, manageable, and acceptable way within its intended operating envelope. It depends on the interaction between process design, automation, procedures, safeguards, human decision-making, and physical process behavior. In a process engineering context, operability includes the ability to operate within production requirements, equipment limitations, and safety constraints.
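
As an aside: observability and controllability are used here as broad engineering conditions, but the terms have precise counterparts in linear systems theory. The Kalman rank conditions below are standard control-theory results, included only as an anchor for readers who know the formalism; they are not drawn from the standards discussed here. For a linear system with dynamics x' = Ax + Bu and measurements y = Cx, where the state vector has dimension n:

```latex
% Controllability: some input can drive the state anywhere in state space
\operatorname{rank}\begin{bmatrix} B & AB & A^{2}B & \cdots & A^{n-1}B \end{bmatrix} = n

% Observability: the state can be reconstructed from the measured outputs
\operatorname{rank}\begin{bmatrix} C \\ CA \\ CA^{2} \\ \vdots \\ CA^{n-1} \end{bmatrix} = n
```

Read through this lens, malicious influence attacks the conditions directly: corrupting the measurement path degrades observability even when every sensor stays online, and blocking or falsifying the command path degrades controllability even when every actuator stays powered.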

Process safety is directly linked to these conditions. It is partly embedded in operability, because a process that cannot be operated safely within its intended limits is no longer truly operable. But process safety is not exhausted by operability. It also depends on independent protective layers and safeguards that must remain effective when normal operation is disturbed or lost. Security therefore has to be linked not only to normal operation, but also to the continued effectiveness and independence of these process safety protections.

Operational integrity is the broader state that exists when observability, controllability, operability, and the effectiveness of process safety protections are preserved together. If any of these conditions is materially degraded, the process may still appear to be running, but it is no longer operating with full integrity.
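
Stated as a predicate (a sketch reusing the Condition enum from the earlier catalog example), the point is that operational integrity is a conjunction, not a single availability flag:

```python
def operational_integrity(status: dict[Condition, bool]) -> bool:
    """Integrity holds only when all four conditions hold together."""
    return all(status.get(c, False) for c in Condition)

# Online and producing, yet misleading the operator: not operating with integrity.
print(operational_integrity({
    Condition.OBSERVABILITY: False,   # displays show manipulated values
    Condition.CONTROLLABILITY: True,
    Condition.OPERABILITY: True,
    Condition.PROCESS_SAFETY: True,
}))  # -> False
```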

Availability therefore requires careful treatment. Availability matters, but it is not the highest objective. It is one supporting condition for operational integrity. A controller that remains online while executing altered logic, accepting manipulated inputs, or presenting misleading information may satisfy a narrow availability measure while undermining actual control of the process. A running system is not necessarily an observable, controllable, or operable system. Nor is it necessarily a safe one.

The same applies to integrity in the narrow information-security sense. Information integrity matters because process automation depends on trustworthy data, commands, configurations, and event histories. But the deeper issue is whether corrupted information degrades observability, weakens controllability, reduces operability, or impairs process safety in ways the organization does not detect or cannot stop in time. The hazard lies not in the corrupted bit itself, but in what the corrupted information causes the system, the operator, or the installation to do.

The hierarchy of objectives is therefore clear. First, security must preserve operational integrity. Second, security must support business continuity. Third, security must protect confidentiality where it has real operational, commercial, market, legal, or regulatory consequences.

Business continuity belongs in second place because continuity without correctness and safety has little engineering value. The objective is not merely to keep production moving or to restart quickly. The objective is to sustain or restore operations in a condition that remains safe, stable, acceptable, and under control. In process engineering terms, this requires restoration of observability, controllability, operability, and the continued effectiveness of independent safety barriers.

Confidentiality belongs in the model as a context-dependent objective. In process automation it is rarely the first concern, but it can still be decisive. Some confidential information, once exposed, makes the installation easier to attack. Other information has commercial, strategic, or market-sensitive value. These concerns are valid, but they remain secondary to the central question of whether the process remains within intended bounds.

A stronger model therefore starts from the condition that must be preserved: operational integrity. In process automation this means preserving the ability of the installation to keep the process within design intent and operational intent through adequate observability, controllability, operability, and effective independent safety protections. The analysis should start with the physical process and the control functions that govern, protect, and support it. From there it should identify the dependencies that sustain those functions, the ways those dependencies can be degraded or manipulated, the abnormal conditions that may follow, and the safety layers that limit escalation when prevention fails. Cybersecurity architecture remains important in this model, but only as one supporting layer within a broader function-, consequence-, and process-based framework.

This has a direct implication for process safety. Process safety can no longer treat hazardous deviations as arising only from accidental causes, random failures, or non-malicious human error. It must also consider intentional causes. Malicious influence can produce different hazard patterns because functions may be disabled, manipulated, or misled in coordinated, simultaneous, or deliberately sequenced ways designed to defeat normal control and protection assumptions. The hazard potential is therefore not defined only by whether a component can fail, but also by whether multiple functions can be driven to fail together, suppressed in a critical order, or made unavailable at the moment they are needed.
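
Reusing the earlier hypothetical catalog and helpers, the difference between random failure and deliberate sequencing can be sketched in a few lines: the attacker removes the safeguard first and then manipulates control, so the two losses coincide by design rather than by chance.

```python
# Step 1: suppress the independent safeguard first.
compromised = {"SIS logic solver"}

# Step 2: only then manipulate the basic control layer.
compromised |= {"DCS controller C-01"}

# Random-failure analysis treats these as independent events; a deliberately
# sequenced attack makes the safeguard absent exactly when it is needed.
if not remaining_safeguards(compromised, catalog):
    print("No independent protection remains while control is being manipulated.")
```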

Conclusion

The purpose of security in process automation is not simply to protect networks, endpoints, data, or systems. Its purpose is to preserve operational integrity against unauthorized or malicious influence, including forms of interference intended to disrupt or damage the production system and thereby cause environmental harm, financial loss, equipment damage, prolonged production loss, or fatalities. In engineering terms, this means preserving the conditions under which the process remains observable, controllable, and operable, while ensuring that independent safety protections remain effective when normal operation is disturbed or lost.

Cybersecurity architecture is necessary, but it is not sufficient. Its value depends on whether it helps preserve operational integrity and whether the installation remains resilient when digital protections are bypassed or fail. In the process industry, security and process safety meet in cyber-physical risk. That is the point at which malicious digital influence, degraded control performance, weakened protection, and physical process consequences have to be understood as one connected problem. For that reason, cyber-physical risk analysis should be regarded as essential in any serious process industry risk governance approach. Neither a stand-alone cybersecurity risk analysis nor a traditional process hazard analysis is sufficient on its own, because neither addresses the combined problem of malicious cause, functional disruption, process deviation, safeguard response, and hazardous consequence in an integrated way.
