AI Misconception

How many of you are feeling overwhelmed by the overload of AI information, claims, and product offerings? I am, and I'm fairly skilled at navigating technology. There are over 72,000 purveyors of AI tools in North America alone, and that number is growing exponentially. The common thread is that they are, for the most part, leveraging public LLMs. In fact, if you are using ChatGPT, Grok, Claude, Copilot, Gemini, (fill in the blank), then you are making use of public LLMs yourself.

Claims are being made that must be evaluated before making any commitment to the tech. One AI provider claims to map messy data in brownfield plants, and they make it sound simple. If all their system does is "read" reports and screens, it has no more capability than the individuals who were reading those same screens and reports before the tech was deployed. In fact, it usually means the data reflected in those screens and reports was incomplete for analysis, and more data is needed from the shop floor. Consider what that actually involves:

  • Older PLCs, CNCs, and SCADA systems often use non-standard or vendor-locked protocols (e.g., proprietary serial, old Modbus variants). AI systems typically rely on clean, structured, real-time data; connecting to decades-old controllers may require expensive protocol converters or reverse engineering (see the sketch after this list).
  • Legacy equipment rarely has consistent, semantic data labeling (ISA-95/UNS models). AI needs contextualized inputs (e.g., machine state, job, operator, material) to produce reliable predictions, and retrofitting semantic layers is complex and labor-intensive.
  • Many brownfield plants lack centralized historians or proper time-stamped logs. AI models require large, high-quality historical datasets; collecting and aligning decades of unstructured or missing data is often infeasible. Messy data mapping is not a quick fix.
  • Brownfield plants often rely on operators’ handwritten notes, spreadsheets, or tribal knowledge. AI cannot invent missing variables (e.g., downtime reasons, tooling life, job context) if they were never captured digitally.
  • To make AI actionable, you often must add IoT sensors (vibration, temperature, power draw), edge gateways, and network upgrades, which contradicts the claim that AI requires no “complete rebuild.”
  • AI models need continuous retraining as processes drift. Plants without strong data governance and analytics teams struggle to sustain them.
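
To make the protocol-bridging point concrete, here is a minimal sketch, assuming a Modbus TCP gateway has already been installed in front of the legacy controller and that the Python pymodbus library is available. The IP address, register number, and scaling factor are illustrative assumptions only; a proprietary serial protocol would need a hardware converter before even this much is possible.

```python
# Minimal sketch: polling one register from a legacy PLC through a
# hypothetical Modbus TCP gateway. IP address, register number, and
# scaling are illustrative assumptions, not values from a real plant.
from pymodbus.client import ModbusTcpClient

GATEWAY_IP = "192.168.0.50"     # assumed address of the protocol converter
SPINDLE_TEMP_REGISTER = 40      # assumed register; legacy maps are often undocumented

client = ModbusTcpClient(GATEWAY_IP, port=502)
if not client.connect():
    raise ConnectionError("Gateway unreachable: no data, no AI")

result = client.read_holding_registers(SPINDLE_TEMP_REGISTER, count=1)
if result.isError():
    client.close()
    raise IOError("Read failed: decades-old devices drop requests routinely")

# The raw integer means nothing until someone supplies the scaling,
# units, and context the old controller never carried.
raw_value = result.registers[0]
spindle_temp_c = raw_value / 10.0  # assumed fixed-point scaling
print(f"Raw register {raw_value} -> {spindle_temp_c} °C (assumed units)")

client.close()
```

Even this tiny read depends on a converter being installed, the register map being documented, and the scaling being known. None of that comes free with "just add AI."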

Such vendor claims oversell AI as a drop-in fix for brownfield manufacturing. In reality, successful AI deployment requires data infrastructure upgrades, protocol bridging, historian development, semantic modeling, network hardening, and workforce training. “Reading screens” and “mapping messy data quickly” sound appealing but are fragile, insecure, and insufficient for reliable, scalable decision support in decades-old plants.

“Just add AI to your old factory — it will map messy data, read screens, and make decisions fast. No rebuild needed.”

It sounds great. It’s not real.

Why this fails in practice:

  • Legacy systems use proprietary, outdated protocols and lack clean, contextualized data. AI needs structure — not random tags and missing logs (see the sketch after this list).
  • Screen scraping is fragile and risky. HMIs change, OCR breaks, and unsupported Windows XP/7 boxes become security holes.
  • Messy data isn’t a quick fix. Much lives in paper logs or operators’ heads. You often need new sensors, gateways, and historians — a “rebuild” by another name.
  • People & process matter. Without IT/OT modernization, secure networks, and operator trust, AI won’t stick.
  • ROI disappoints when retrofitting costs rival partial modernization.
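
As a sketch of the "structure, not random tags" point in the first bullet above: raw readings only become usable model inputs once they are wrapped in the contextual fields an ISA-95/UNS-style model expects. The topic hierarchy and field names below are illustrative assumptions, not a formal schema.

```python
# Sketch: turning anonymous legacy tags into a contextualized payload.
# Hierarchy and field names are illustrative, not a formal ISA-95 schema.
import json
from datetime import datetime, timezone

# What a legacy system typically exposes: bare tag names and integers.
raw_tags = {"N7:40": 873, "N7:41": 1, "N7:42": 12450}

# What an AI model actually needs: who, where, what, when, and units.
contextualized = {
    "topic": "acme/plant1/machining/line3/cnc07/spindle",  # UNS-style path (assumed)
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "machine_state": "RUNNING" if raw_tags["N7:41"] == 1 else "STOPPED",
    "spindle_temp_c": raw_tags["N7:40"] / 10.0,  # assumed scaling
    "part_count": raw_tags["N7:42"],
    "job_id": None,       # never captured digitally; AI cannot invent it
    "operator_id": None,  # lives in a paper log, if anywhere
}

print(json.dumps(contextualized, indent=2))
```

Every None in that payload is a variable someone has to start capturing before any model can use it; no algorithm backfills what was never recorded.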

Reality: AI succeeds in brownfield plants only when paired with data infrastructure upgrades, sensor retrofits, unified namespaces, and strong change management.

AI is powerful — but not a magic overlay for decades-old factories.
