For decades, industrial control security relied on a simple assumption:
If the system was isolated, it was secure.
That assumption no longer holds.
Modern control systems are connected, virtualized, software-defined, and increasingly multi-vendor. Open Process Automation (OPA) accelerates this reality — by design. But openness also forces a hard truth into the spotlight:
Security cannot be an add-on. It must be a continuously enforced system property.
This is where many discussions about open control architectures stall — not because OPA weakens security, but because it exposes how fragile legacy security models actually are.
Why Perimeter-Based Security Is Failing Industrial Systems
Traditional OT security models focus heavily on the perimeter:
- Firewalls
- Network segmentation
- Air gaps (real or imagined)
While these controls still matter, they do not address what happens inside the system once a device is compromised.
As Julie Smith of DuPont explained when discussing end-user realities:
“Cybersecurity is becoming more important, but it’s becoming more difficult — and in most cases it’s still an afterthought.”
— Julie Smith, DuPont (Why End Users Are Driving the Open Process Automation Standard, ~13:57)
Perimeter defenses assume that internal devices remain trustworthy. But modern attacks target exactly that assumption:
- Malware modifies application binaries
- Unauthorized configuration changes persist silently
- Compromised devices continue operating “normally” until damage is done
OPA does not create this problem — it forces it to be addressed.
OPA’s Security Premise: Assume Breach, Design for Resilience
One of the most important — and often misunderstood — aspects of OPA is that it assumes:
- Systems will evolve
- Software will change
- Components will be replaced
- And eventually, something will fail or be compromised
This mindset aligns closely with modern IT security practices but is still relatively new in OT environments.
As Don Bartusiak of ExxonMobil noted:
“The systems we’re designing now must assume ubiquitous connectivity and the cyber risks that come with it.”
— Don Bartusiak, ExxonMobil (Why End Users Are Driving the Open Process Automation Standard, ~06:45)
OPA’s answer is not stronger walls — it is continuous verification.
What “Trust Verification” Means in Practice
In secure open control systems, trust is not a static attribute. It is a condition that must be constantly revalidated.
A practical trust-verification model includes:
1. Hardware-Based Root of Trust
Devices establish an immutable baseline using hardware security elements, such as Trusted Platform Modules (TPMs), that cryptographically verify:
- Boot state
- Firmware integrity
- Operating system authenticity
If the baseline changes unexpectedly, the system knows immediately.
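The baseline check above can be sketched in miniature. The snippet below is illustrative Python, not TPM code: a real measured boot extends hardware PCR registers, whereas here stand-in byte strings and SHA-256 digests model the firmware, bootloader, and kernel measurements.

```python
import hashlib

# Known-good measurements captured when the device was commissioned
# (the byte strings are placeholders for real boot-stage images).
KNOWN_GOOD_BASELINE = {
    "firmware": hashlib.sha256(b"firmware-image-v1").hexdigest(),
    "bootloader": hashlib.sha256(b"bootloader-v1").hexdigest(),
    "os_kernel": hashlib.sha256(b"kernel-v1").hexdigest(),
}

def measure(stage_images: dict[str, bytes]) -> dict[str, str]:
    """Hash each boot stage, as a hardware root of trust would at power-on."""
    return {stage: hashlib.sha256(image).hexdigest()
            for stage, image in stage_images.items()}

def verify_baseline(measured: dict[str, str],
                    baseline: dict[str, str]) -> list[str]:
    """Return the stages whose measurement deviates from the baseline."""
    return [stage for stage, digest in baseline.items()
            if measured.get(stage) != digest]

# A device booting unmodified images matches the baseline...
clean = measure({"firmware": b"firmware-image-v1",
                 "bootloader": b"bootloader-v1",
                 "os_kernel": b"kernel-v1"})
assert verify_baseline(clean, KNOWN_GOOD_BASELINE) == []

# ...while a tampered kernel is flagged immediately.
tampered = measure({"firmware": b"firmware-image-v1",
                    "bootloader": b"bootloader-v1",
                    "os_kernel": b"kernel-v1-MODIFIED"})
print(verify_baseline(tampered, KNOWN_GOOD_BASELINE))  # ['os_kernel']
```

The essential property is that the baseline is established outside the software being verified, so malware that alters a boot stage cannot also alter the record of what that stage should hash to.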
2. Continuous Attestation
Rather than verifying devices only at startup, secure systems continuously revalidate cryptographic signatures at runtime.
This allows detection of:
- Unauthorized file changes
- Malware insertion
- Configuration drift
As demonstrated in CPLANE-led trust verification demos:
“As long as that signature remains the same, we know that device is valid and has not been tampered with.”
— Trust Verification of Industrial Devices, ~01:03
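As a rough illustration of one attestation cycle (plain Python, with hypothetical file names standing in for real application binaries and configuration), the check reduces to re-hashing the monitored artifacts and comparing the result against the signature enrolled at deployment:

```python
import hashlib
import tempfile
from pathlib import Path

def fingerprint(paths: list[Path]) -> str:
    """Fold the hashes of all monitored artifacts into one composite signature."""
    h = hashlib.sha256()
    for p in sorted(paths):
        h.update(p.name.encode())   # bind each hash to its file name
        h.update(p.read_bytes())
    return h.hexdigest()

def attest(paths: list[Path], enrolled: str) -> bool:
    """One attestation cycle: passes while the live signature matches enrollment."""
    return fingerprint(paths) == enrolled

with tempfile.TemporaryDirectory() as d:
    binary = Path(d) / "controller.bin"   # hypothetical application binary
    config = Path(d) / "loop.cfg"         # hypothetical configuration file
    binary.write_bytes(b"\x7fELF...control-app")
    config.write_text("pid_gain=1.4")

    enrolled = fingerprint([binary, config])        # captured at deployment
    ok_before = attest([binary, config], enrolled)  # True: nothing changed

    config.write_text("pid_gain=9.9")               # silent configuration change
    ok_after = attest([binary, config], enrolled)   # False: drift detected

print(ok_before, ok_after)  # True False
```

Run on a schedule, the same comparison that validates boot-time integrity catches unauthorized file changes, malware insertion, and configuration drift the moment the next cycle executes.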
3. Automated Containment
Detection alone is not enough. When trust is violated, the system must respond faster than a human can.
This includes:
- Isolating compromised devices
- Preserving system continuity by shifting workloads
- Enabling forensic analysis without halting operations
In practice, this turns cyber incidents into controlled events, not catastrophic failures.
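A minimal sketch of that containment loop, assuming invented node names and a deliberately naive round-robin placement policy (a real orchestrator would respect redundancy groups, I/O bindings, and capacity):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A compute node hosting control workloads (names are hypothetical)."""
    name: str
    trusted: bool = True
    isolated: bool = False
    workloads: list[str] = field(default_factory=list)

def contain(nodes: list[Node], compromised: Node) -> list[str]:
    """Isolate a node that failed attestation and migrate its workloads."""
    compromised.trusted = False
    compromised.isolated = True          # e.g. revoke its network access
    events = [f"ISOLATED {compromised.name}"]
    healthy = [n for n in nodes if n.trusted and not n.isolated]
    for i, workload in enumerate(compromised.workloads):
        target = healthy[i % len(healthy)]   # naive round-robin placement
        target.workloads.append(workload)
        events.append(f"MOVED {workload} -> {target.name}")
    compromised.workloads.clear()        # node stays powered for forensics
    return events

a = Node("dcn-a", workloads=["pid-loop-101", "alarm-mgr"])
b = Node("dcn-b")
c = Node("dcn-c")
log = contain([a, b, c], a)
print(log)
```

Note that the compromised node is quarantined rather than shut down: operations continue on healthy nodes while the isolated device remains available for forensic analysis.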
Why Security Must Live at the System Layer
One of the key lessons from OPA pilots is that security cannot be delegated solely to:
- Individual devices
- Application vendors
- Or network teams
Security must be orchestrated at the system layer, where:
- Device state
- Application behavior
- Network connectivity
- And operational intent
can all be correlated in real time.
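As a toy illustration of why that correlation matters (all device names and signal values below are invented), a system-layer view can flag a device whose attestation passes but whose network behavior deviates, which no single subsystem would catch on its own:

```python
# Expected state for a healthy, in-service device across all four signal sources.
EXPECTED = {"attestation": "pass", "app_behavior": "nominal",
            "network": "expected-peers", "intent": "in-service"}

# Per-device observations reported independently by each subsystem.
signals = {
    "dcn-a": {"attestation": "pass", "app_behavior": "nominal",
              "network": "expected-peers", "intent": "in-service"},
    "dcn-b": {"attestation": "pass", "app_behavior": "nominal",
              "network": "unexpected-peer", "intent": "in-service"},
}

def correlate(signals: dict[str, dict[str, str]]) -> dict[str, list[str]]:
    """Flag each device whose observations deviate, and name the signals."""
    return {dev: [k for k, v in obs.items() if v != EXPECTED[k]]
            for dev, obs in signals.items() if obs != EXPECTED}

print(correlate(signals))  # {'dcn-b': ['network']}
```

Device `dcn-b` passes attestation, yet the correlated view still surfaces it: its network behavior contradicts its operational intent, a judgment only possible where all four signals meet.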
Brandon Williams summarized this challenge succinctly:
“Without a system-level view, you don’t have security — you have isolated controls.”
— Brandon Williams, Next Generation IT/OT Convergence Pilot, ~08:00
This is especially critical in multi-vendor environments, where no single component has full visibility into system behavior.
Security Without Locking Down Innovation
A common fear among operators is that stronger security will slow down change.
In legacy systems, that fear is justified:
- Updates are disruptive
- Security patches are risky
- Changes are deferred for years
OPA flips this dynamic by decoupling software from hardware and enforcing security through verification instead of restriction.
As a result:
- Components can be upgraded independently
- Security controls move with the system as it evolves
- Innovation becomes safer, not riskier
This is how IT systems operate today — and why they can evolve rapidly without collapsing under their own complexity.
CPLANE’s Role: Making Security Operable, Not Theoretical
CPLANE does not replace device-level security or vendor responsibilities. Instead, it provides the missing layer that turns security architecture into operational reality.
That includes:
- Unified visibility into system trust state
- Automated responses to trust violations
- Evidence-based security events tied to system behavior
- A clear operational narrative during incidents
This is what allows open systems to meet — and often exceed — the security posture of proprietary platforms.
Security Is the Price of Admission to Open Control
End users are not choosing between openness and security. They are demanding both.
OPA establishes the architectural foundation for secure, interoperable systems. Platforms like CPLANE exist to ensure that foundation can actually be operated — day after day, under real-world conditions.
Security-by-design is not optional in open control systems.
It is the price of admission to production.
