Four Additional Principles for Cybersecurity
We all know about CIAA, but extensive experience working with cybersecurity systems has produced four additional principles that should be followed when constructing security for systems. These are: security must be rooted in hardware, the full software stack must be protected layer by layer, components must be designed with an autonomous posture, and security must be designed in from the start. These four principles are closely related and depend on each other to some extent. Added to the original CIAA principles, they create a set of rules for building highly secure systems.
Security must be rooted in hardware: The premise here is that hardware is immutable and operates by inviolable physical principles, while software is endlessly malleable. This contrast is what makes hardware, and not software, a trustworthy foundation for security.
The ease with which software can be changed is one of the reasons that large and complex software systems can be constructed so readily. Developers extend and change software as needed with relatively little effort. Contrast this with manufacturing physical devices, which requires tooling that is difficult and expensive to change after the design goes into production. Because software changes are so easy to make, software designs are often changed even after they go into production. However, only changes made by the development team are expected in production software. In a spirit similar to that of physical devices, once a software system is finished and put into production, the expectation is that the code will not change after it leaves the development team; it will therefore have a permanence and robustness much like hardware, operating repeatedly as expected.
Once in possession of a physical device, one can make changes to the device that the designers never intended. This is the heart of so-called "hardware hacks". Similarly, software can be changed in ways the designers did not intend. But software alone cannot alter the underlying hardware of a computing device in any way, intentionally or incidentally. Changing hardware requires physical action, so without physical possession of a device, or some way to physically affect it, no changes to the hardware can be made. Unlike hardware, software can be changed by other software, and remotely through network access to computing devices. One merely needs to introduce new values into the memory where the software runs or the storage where it is persisted, and voilà, the software is changed. This is why software is "soft" and hardware is "hard".
One way to detect a change to software is to add a second software element that tests the first to see whether it has been modified from the version shipped by the development team. Subverting this method is easily accomplished: the attacker simply modifies the second element first, so that it fails to detect any modification of the first. This game can be extended endlessly by adding further layers of checking software, and defeated just as endlessly by disabling the outermost checker first and proceeding inward to the actual target. Hardware, on the other hand, can always be implicitly trusted to act in its predictable way, and only in that way. By rooting security in hardware, the system gains the benefits of hardware's immutable and inviolable properties, and all security actions anywhere in the software are transitively based on the assurance of the hardware.
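As an illustration of this argument (not any particular product's mechanism), the following Python sketch shows why a software-only integrity check collapses when the checker itself can be rewritten, and how pinning a single digest in immutable hardware, simulated here as a constant the software cannot modify, restores the guarantee. All names and image contents are hypothetical.

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical layered images: the checker holds the expected digest of the app.
app = b"application code v1.0"
checker = b"checker code, expects: " + digest(app).encode()

# An attacker who can rewrite memory modifies the checker first, then the
# app, and a purely software chain of checks detects nothing.
tampered_app = b"application code v1.0 + implant"
tampered_checker = b"checker code, expects: " + digest(tampered_app).encode()

# A digest pinned in immutable hardware (simulated as a constant recorded at
# provisioning time) still catches tampering with the checker itself.
HARDWARE_PINNED = digest(checker)

def verify_chain(checker_img: bytes, app_img: bytes) -> bool:
    if digest(checker_img) != HARDWARE_PINNED:
        return False                 # the checker itself was modified
    expected = checker_img.split(b"expects: ")[1].decode()
    return digest(app_img) == expected

print(verify_chain(checker, app))                    # True
print(verify_chain(tampered_checker, tampered_app))  # False
```

The point of the sketch is the asymmetry: every value an attacker can overwrite is useless as an anchor, so the chain must terminate in something software cannot reach.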
Here, hardware means true hardware: made solely of electronic components and operating within its design constraints. Some components referred to as hardware are actually complex systems that include kinds of software. Modern CPUs that run "microcode" are labeled as hardware, but by the definition used here the microcode is software, so security must start below that level. The same applies to firmware, which runs on many microcontroller-based devices: firmware is a form of software, so security must start below that layer. Hardware need not mean a complex device like a CPU. Indeed, the hardware serving as a root of trust should be very elementary. Examples include a discrete-component buffer, a system register, a protected area of memory, and perhaps up to something as complex as a TPM chip. Complex components such as CPUs are known to have been compromised at the hardware level with hidden modifications that allow unintended behavior. The vulnerability of such hardware to malicious modifications or implants stems directly from its complexity, which allows unintended modifications to be hidden from regular production inspection and test processes. Localizing the hardware basis for cybersecurity in a small component allows the component to be examined and tested for proper behavior, and limits an attacker's ability to hide malicious modifications in such simple hardware.
The full stack must be protected: Modern systems are built of layers of software operating on top of the hardware, from low layers such as microcode and firmware, through hypervisors and operating systems, to application frameworks and the application itself. Each layer of software depends on the layer below for needed functionality, and must therefore trust the layer below to provide the correct functionality, operating in the expected manner, for the overall system to operate correctly. If each software layer can be trusted to provide its expected behavior, then the entire system should also provide its expected behavior and be trusted. This dependency on the layer below also means that the layer above cannot corrupt the layer below: if the layer above becomes corrupted or compromised, the layer below is unaffected, except that it may be used in a manner different than expected. Correct behavior means, above all, that the provided functionality will do what it says it will do, and only what it says it will do. For simple functions, like the math operation sin(x), this behavior is very well defined: sin(x) returns the mathematical sine of whatever value is passed as the argument. In principle, all functions should be as well defined as math functions, so that uncertain outcomes are not allowed and guarantees of operation can be asserted for them. A higher layer of software relies on the aggregate of all lower-layer functions, and only on that aggregate, to fulfill its operations. Therefore, if all the lower-level functions can be trusted to operate as expected, then so can the higher-layer functionality.
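The layer-by-layer trust described above is commonly established with a "measure then run" chain, in the style of a TPM platform configuration register: each layer's image is hashed into a running measurement before that layer executes, so any modified layer changes the final value. The Python sketch below is a simplified illustration under assumed conditions; the layer names are hypothetical, and in a real system the measurement would be held in hardware rather than a variable.

```python
import hashlib

def extend(measurement: bytes, image: bytes) -> bytes:
    # TPM-style extend: the new value folds the image's hash into the old
    # value, so no later layer can erase an earlier layer's measurement.
    return hashlib.sha256(measurement + hashlib.sha256(image).digest()).digest()

def measure_stack(images):
    m = b"\x00" * 32            # root value, held in hardware in a real system
    for image in images:        # lowest layer first, measured before it runs
        m = extend(m, image)
    return m

# Hypothetical layer images, lowest layer first.
good = [b"firmware v2", b"hypervisor v5", b"os v11", b"app v1.3"]
bad  = [b"firmware v2", b"hypervisor v5", b"os v11 + rootkit", b"app v1.3"]

GOLDEN = measure_stack(good)            # recorded once for the known-good stack
assert measure_stack(good) == GOLDEN    # the same stack reproduces the value
assert measure_stack(bad) != GOLDEN     # any changed layer changes the result
```

Because each extend depends on all previous measurements, a compromised upper layer cannot rewrite the record of the layers beneath it, which mirrors the rule that a layer above must not be able to corrupt the layer below.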
Autonomous posture: Building components with an autonomous posture is a core element of good software and systems engineering, and it provides essential security benefits as well. Autonomous posture means that components are loosely coupled: they interact with one another, but their essential functionality is not impaired if another component fails or misbehaves. It is the opposite of a tightly coupled, or brittle, system, in which each component relies on the others for its operation and any failure of one component causes another to work improperly or stop working altogether. Applied to cybersecurity, an autonomous posture helps isolate a compromised component from the other components in a system: unintended behavior in the component compromised by the attacker will not affect the essential behavior of the others. Furthermore, the use of secure communications isolates components from each other in terms of cybersecurity, so an attacker who wishes to expand beyond the initially compromised component must compromise each additional component individually, as if starting a new attack each time.
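A minimal Python sketch of the loose-coupling idea, with hypothetical component and function names: the reporting component treats its peer's enrichment service as optional, isolating the call so that a failed or misbehaving peer cannot impair the essential function.

```python
# Autonomous posture sketch: the reporting component keeps performing its
# essential function even when a peer component fails, by isolating the
# peer call and falling back to its own data.
def enrich_failing(reading):
    raise ConnectionError("peer component is down")

def enrich_ok(reading):
    return {**reading, "label": "calibrated"}

def report(reading, enrich):
    try:
        return enrich(reading)        # optional functionality from a peer
    except Exception:
        return reading                # essential function still succeeds

r = {"sensor": 7, "value": 42.0}
assert report(r, enrich_ok) == {"sensor": 7, "value": 42.0, "label": "calibrated"}
assert report(r, enrich_failing) == r     # peer failure does not impair it
```

In a tightly coupled design, the exception from the peer would propagate and take the reporting component down with it; the isolation boundary is what makes the posture autonomous.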
Security must be designed in from the start: The requirements described by the three principles above (rooted in hardware, trusted layers of software, and autonomous posture) must be designed into an application or system from the start. Only complete application of these principles can ensure the trustworthiness of a system; piecemeal use, where most of the software is trusted but some layers are not, voids the entire method. Modern applications are typically large and highly complicated, often with hundreds of classes, thousands of methods or functions, and hundreds of thousands of lines of code. Each application was designed in a certain manner to achieve a desired goal, and that design becomes integral to the identity and operation of the application. So much so, that any significant change to the design also significantly changes the application or system, and may require a redevelopment of the entire system. Imposing the above principles is tantamount to making basic, integral design changes to an application; in the process, the application will change significantly to achieve the goals stated above. Such changes are generally out of the question for large, established applications and systems because of the great cost of implementing them and the side effects on other systems and components that interface with the modified system.
Instead, an attempt is often made to patch or wrap an existing application or system with the new approach, leaving the original largely intact. The hope is that most of the benefits of the new techniques will be realized while the impact and costs are minimized. Such efforts can never be completely true to the intent and execution of the required principles. The result is a system with some of the characteristics of the new approach, but not one that comprehensively complies with it. In terms of cybersecurity, the system thus produced will have many omissions and behaviors that diverge from its desired goals. The patched system may contain fewer vulnerabilities than the original, but its cybersecurity will not come close to what would have been possible had the system been designed with the above principles as part of its base requirements. In a sense, cybersecurity is an all-or-nothing proposition, which demands the maximal effort to secure a system, simply because the next vulnerability is completely unknown.
At Cognoscenti Systems we have applied the above principles in designing our SecureSieve cybersecurity technology and our ControlMQ secure communications product for controls in order to achieve the highest level of network security for controls available. Find out more at: www.cognoscentisystems.com