Evolution toward a Data-Centric Enterprise Application Framework
Historic Challenges of Enterprise Integration
Historically, enterprise integration and manufacturing system architectures were often constrained by a project-centric mindset and the deployment of large-footprint, monolithic applications. Each integration effort was typically approached as a standalone project with narrowly defined objectives, budgets, and timelines—often aligned to capital projects or specific operational pain points rather than a long-term digital strategy. This led to fragmented solutions, custom point-to-point integrations, and a lack of shared data context across systems. Compounding this issue was the reliance on heavyweight applications, such as traditional MES or ERP systems, that required significant infrastructure, long implementation cycles, and rigid interfaces. These systems were not designed for flexibility or rapid iteration, making it difficult to adapt to changing business needs or technological advances. As a result, manufacturers found themselves locked into brittle architectures that were expensive to scale, hard to modernize, and ill-suited for real-time data flow or enterprise-wide visibility. This legacy approach stands in contrast to today’s need for composable, service-oriented architectures that prioritize agility, data interoperability, and continuous improvement over static, one-time deployments.
Foundational – The Data-Centric Platform
The modern shift toward data-centric architecture represents a fundamental reorientation in both mindset and method. In this new paradigm, data becomes the primary asset, and applications are viewed as consumers, producers, and enhancers of data, rather than the central pillars around which systems are built. This approach inverts the traditional logic of designing systems around application functionality and instead begins by defining the lifecycle, structure, ownership, and governance of data itself. The goal is to make high-quality, contextualized data available, accessible, and actionable across the enterprise, regardless of where or how it is generated.
Placing data at the center of the architecture unlocks a number of strategic benefits. First, it fosters composability and modularity, allowing organizations to select or replace applications based on how well they interoperate with the enterprise’s data fabric, rather than how well they conform to a rigid suite. Second, it encourages shared context and semantic consistency, ensuring that different stakeholders and systems interpret data the same way—an essential foundation for digital twins, AI/ML, and enterprise analytics. Third, a data-centric model supports decentralized innovation: teams can build services, reports, or automation independently, so long as they adhere to shared data contracts or models. This reduces dependency on central IT and shortens delivery cycles. Finally, it aligns with the principles of continuous improvement and agility—data flows are designed to evolve, scale, and adapt in response to business needs, not the other way around.
Critically, this mindset shift requires organizations to adopt architectural practices like unified namespace, event-driven messaging, and semantic modeling using standards such as ISA-95, OPC UA, and MIMOSA CCOM. It also demands robust data governance, with clear roles for data stewardship, lineage, and quality management. By putting data first, manufacturers are no longer constrained by the capabilities or limitations of any one application. Instead, they create a dynamic ecosystem where applications are interchangeable tools in service of a larger, continuously improving data strategy—making the enterprise more resilient, interoperable, and capable of adapting to disruption and opportunity alike.
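To make the semantic-modeling idea concrete, here is a minimal sketch of how an ISA-95 style equipment hierarchy can be mapped onto a unified-namespace topic path. The class name, hierarchy levels, and the "acme/plant1" example values are illustrative assumptions, not part of any standard's API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Isa95Path:
    """ISA-95 style equipment hierarchy: enterprise / site / area / line / cell."""
    enterprise: str
    site: str
    area: str
    line: str
    cell: str

    def to_topic(self) -> str:
        # Map the hierarchy onto a unified-namespace topic path, so every
        # published signal carries its full operational context.
        return "/".join([self.enterprise, self.site, self.area, self.line, self.cell])

# Hypothetical plant hierarchy, used only for illustration.
mixer = Isa95Path("acme", "plant1", "packaging", "line3", "mixer01")
topic = mixer.to_topic() + "/temperature"
print(topic)  # acme/plant1/packaging/line3/mixer01/temperature
```

Because the topic path itself encodes the model, any consumer that understands the hierarchy can interpret a signal without a point-to-point agreement with its producer.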
Adoption and Transition
Transitioning from a legacy, application-centric architecture to a data-centric paradigm is no trivial undertaking. It requires not just a change in mindset, but the deployment of new architectural components purpose-built to bridge the old and the new. At the heart of this transition is a data-centric application or platform that serves as the connective tissue across the enterprise. This application acts as a data operations hub, designed to aggregate, contextualize, and broker data from a wide variety of disparate systems in real time, using both industrial and enterprise-grade protocols.
To fulfill this role, the platform must provide broad connectivity capabilities. It should support industrial protocols like OPC UA, Modbus, EtherNet/IP, and PROFINET, as well as modern messaging protocols like MQTT with Sparkplug B and Kafka, enabling publish/subscribe communication across OT and IT layers. For integrating with business systems, the platform must also expose and consume RESTful APIs, SOAP interfaces, and cloud-native connectors. This allows it to function as a bridge across technological generations—from legacy PLCs to cloud analytics platforms—ensuring that all relevant data sources are connected to the enterprise’s digital backbone.
However, connectivity alone is not sufficient. The platform must also provide robust edge capabilities, particularly for distributed or remote operations. This includes features like store-and-forward buffering, which ensures data integrity during network outages, as well as data pre-processing at the edge, reducing bandwidth usage and latency. Additionally, the platform must offer security by design, with support for encrypted communication (TLS), role-based access control, secure device provisioning, and network segmentation. This ensures that even as the architecture becomes more interconnected, it remains resilient against cybersecurity threats and compliant with industry regulations.
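The store-and-forward behavior described above can be sketched in a few lines: readings are buffered locally while the link is down and replayed in their original order on reconnect. This is a simplified in-memory model (real edge agents persist the buffer to disk), and all names are illustrative:

```python
from collections import deque

class StoreAndForward:
    """Buffer readings locally during an outage; flush in order on reconnect."""

    def __init__(self, publish):
        self.publish = publish       # callable that sends one message upstream
        self.online = True
        self.buffer = deque()        # FIFO, so replay preserves original order

    def send(self, msg):
        if self.online:
            self.publish(msg)
        else:
            self.buffer.append(msg)  # hold the reading during the outage

    def reconnect(self):
        self.online = True
        while self.buffer:           # drain oldest-first
            self.publish(self.buffer.popleft())

sent = []
sf = StoreAndForward(sent.append)
sf.send("t=1")
sf.online = False                    # simulate a network outage
sf.send("t=2")
sf.send("t=3")
sf.reconnect()                       # buffered readings replay in order
print(sent)  # ['t=1', 't=2', 't=3']
```

The consumer sees an unbroken, correctly ordered stream, which is exactly the data-integrity guarantee the edge layer is responsible for.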
Architecturally, this results in a layered but flat architecture: at the base are the distributed edge nodes responsible for local data acquisition and buffering; above them is a publish/subscribe layer (e.g., MQTT broker, unified namespace) that facilitates decoupled communication; above that, a semantic modeling layer contextualizes raw signals into meaningful objects and operations (often using ISA-95 object models); and finally, at the top are consumers—MES, ERP, analytics tools, and AI models—that consume data as services rather than through tightly coupled interfaces.
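The decoupling provided by the publish/subscribe layer can be shown with a minimal in-process broker. This is a toy sketch, not a real MQTT broker, and the topic string is a hypothetical example; the point is that producer and consumer reference only a topic, never each other:

```python
from collections import defaultdict

class Broker:
    """Minimal in-process publish/subscribe hub. Producers and consumers
    are coupled only to topic names, mirroring the role of the UNS layer."""

    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subs[topic].append(handler)

    def publish(self, topic, value):
        for handler in self.subs[topic]:
            handler(value)

broker = Broker()
seen = []

# An analytics "consumer" subscribes without knowing which edge node
# produces the data.
broker.subscribe("plant1/line3/mixer01/temperature", seen.append)

# An edge "producer" publishes without knowing who is listening.
broker.publish("plant1/line3/mixer01/temperature", 72.4)
print(seen)  # [72.4]
```

Swapping out either side (a new MES consumer, a replacement PLC gateway) requires no change to the other, which is the core argument for the flat, broker-mediated layer.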
This new architecture is not only scalable and future-proof, but also extensible—allowing enterprises to continuously integrate new devices, systems, and use cases without disrupting existing operations. Most importantly, it shifts the enterprise focus from “what application do I need?” to “what data do I have, and how can I leverage it?”—unlocking the full value of digital transformation and enabling faster innovation, better decision-making, and enhanced operational performance.
From Here to the Future
A data-centric application can indeed be seen as a natural evolution of the traditional message bus architecture—but with important enhancements in scope, capability, and intent. Historically, enterprise integration evolved from point-to-point connections, which were brittle and difficult to scale, to message bus systems (such as ESBs or industrial service buses) that centralized communication, handled protocol translation, and added basic orchestration and routing. These buses introduced structure and consistency but remained largely focused on transport and transformation of messages, rather than on the broader concept of data as a first-class asset.
The next evolutionary step: a data-centric integration platform that not only manages connectivity and messaging, but also becomes the governor of enterprise data semantics. This application does more than just move messages—it understands the context, structure, and meaning of data as defined by a unified enterprise model, often rooted in standards like ISA-95, OPC UA information models, or industry-specific ontologies. It doesn’t merely pass data through—it curates, enriches, and publishes it into a structured, interoperable namespace or data fabric that can serve the entire organization.
In this sense, the data-centric platform is semantically aware, event-driven, and role-sensitive. It enforces the architecture’s canonical models and ensures that data from disparate sources conforms to shared definitions and business rules before it is consumed. This enables real-time interoperability between systems that were never designed to work together and supports a loosely coupled, yet tightly governed architecture. Moreover, by incorporating capabilities like edge deployment, store-and-forward reliability, and standardized APIs, the platform extends the concept of the message bus beyond the data center—into the cloud, to the edge, and across the supply chain.
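The "curate and enforce before publishing" behavior can be sketched as a small gate that validates incoming readings against a shared contract and normalizes units before they enter the namespace. The field names, the Fahrenheit-to-Celsius rule, and the contract itself are hypothetical, standing in for whatever canonical model an organization defines:

```python
# Hypothetical canonical model: every reading must carry these fields,
# with temperature normalized to Celsius before it enters the namespace.
CANONICAL_FIELDS = {"asset_id", "timestamp", "value", "unit"}

def enforce_canonical(reading: dict) -> dict:
    """Validate a raw reading against the shared contract and normalize units."""
    missing = CANONICAL_FIELDS - reading.keys()
    if missing:
        raise ValueError(f"reading violates canonical model, missing: {missing}")
    out = dict(reading)                 # never mutate the source message
    if out["unit"] == "F":              # enrich/normalize, don't just pass through
        out["value"] = round((out["value"] - 32) * 5 / 9, 2)
        out["unit"] = "C"
    return out

raw = {"asset_id": "mixer01", "timestamp": 1700000000, "value": 212.0, "unit": "F"}
print(enforce_canonical(raw)["value"])  # 100.0
```

A non-conforming message is rejected at the gate rather than propagated, which is what lets downstream consumers trust the namespace without re-validating every source themselves.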
This new class of application represents a significant advancement over traditional message bus systems. It retains the integration and messaging backbone functions of the bus but layers in data governance, semantic enforcement, distributed deployment, and publish/subscribe scalability. It is the keystone of a truly data-centric enterprise architecture, where the emphasis is no longer on moving messages between applications, but on enabling trusted, contextualized, and reusable data across the entire digital ecosystem.
Great post, Chris, I mostly agree. My only concern is the centralized approach to data governance, control, and semantic definition: it is difficult to expand across a whole enterprise, let alone to other enterprises; it creates a single point of failure, which means complex redundancy; and it requires multitenant mastery. Governance needs to be broken down according to actual local knowledge, culture, language, and technology to support seamless, possibly unplanned, intra- and inter-enterprise processes. That requires business-driven data flows rather than flows hard-wired by IT projects; concepts documented by users through domain-adapted ontologies that capture and embed knowledge; and real-time translation across semantic domains.