Traditional DevOps Build Phase

Why CI Is Powerful… and Painfully Manual Before AI

The Build phase has always been the beating heart of the DevOps lifecycle, where ideas first become working software. But before AI, this phase was also one of the most labor-intensive, error-prone, and coordination-heavy parts of delivering enterprise applications. And for metadata-driven platforms like Salesforce, ServiceNow, and SAP, the Build phase wasn’t just about writing code—it was about orchestrating metadata (MD) changes, configuration, automation workflows, UX composition, object definitions, permissions, and integration mappings.

And yet, for decades, most DevOps frameworks treated metadata as an afterthought or a secondary artifact, even though it often represents 80%+ of the functional surface area in a low-code platform. Traditional DevOps tooling wasn’t designed to understand metadata deeply, and humans had to compensate—by applying tribal knowledge, manually reconciling dependencies, and cross-checking changes across the entire configuration landscape. Before AI, that manual effort defined the Build phase.

Why CI Was Essential, but Also Heavyweight

Continuous Integration (CI) emerged as a best practice to prevent the “integration hell” that used to appear when developers merged weeks of isolated changes into a shared repository. The mantra is simple: Integrate early. Integrate often. Test continuously.

This worked exceptionally well for pure-code systems. But for metadata platforms, CI was more complex because:

• Metadata changed faster than code.

• Metadata had implicit dependencies not easily enforced by tooling.

• Metadata from multiple developers often overlapped.

• The same metadata asset could represent both data model and behavior.

CI was indispensable, but succeeding at it required continuous vigilance—checking what changed, who changed it, why it was changed, and what it might break downstream. Before AI, all of that was manual.

Metadata Matters as Much as Code—Sometimes More

In modern enterprise DevOps, especially on platforms like Salesforce, metadata must be treated as a first-class citizen. The Build phase isn’t just compiling code or packaging files. It’s ensuring that:

• Field-level changes won’t break existing automations.

• Workflow, Flow, or Apex changes don’t reintroduce recursion or conflicts.

• Permission sets reflect new data access needs.

• Page layouts and Lightning Web Components (LWCs) align with new UX requirements.

• Integrations still serialize and deserialize data in expected formats.

Historically, this required substantial SME knowledge, cross-team communication, and gut-feel assessments. Even with CI tools, the developer had to predict which metadata assets would interact… and how.

AI changes all of this. But before we get to that future, it’s important to understand the full weight of what CI teams used to carry.

Tests in Build Phase

Even though many organizations view “testing” as a separate phase, the most successful teams understood that CI was only as strong as the tests run during the Build. Can you say “Shift Left”? Traditional Build best practices included:

1. Linting Tools

Developers used linters (ESLint, PMD, Checkstyle, Apex PMD, Prettier, etc.) to catch syntax errors, unsafe patterns, style inconsistencies, and performance red flags. But linters required meticulous configuration—and constant updating—to keep pace with platform changes.
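To make the idea concrete, here is a minimal sketch of the kind of rule a linter encodes, written in Python with two hypothetical rules for illustration: a hardcoded Salesforce record ID and a leftover debug statement. Real linters ship hundreds of such rules, plus autofixes.

```python
import re

# Toy lint rules of the kind ESLint or PMD encode: (rule name, pattern).
# Both rules are illustrative, not taken from any real ruleset.
RULES = [
    # A quoted 15- or 18-character alphanumeric string, the shape of a
    # hardcoded Salesforce record ID.
    ("hardcoded-record-id", re.compile(r"['\"][a-zA-Z0-9]{15}(?:[a-zA-Z0-9]{3})?['\"]")),
    # A debug statement left in committed code.
    ("leftover-debug", re.compile(r"\bSystem\.debug\s*\(")),
]

def lint(source: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_name) for every violation found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in RULES:
            if pattern.search(line):
                findings.append((lineno, name))
    return findings
```

The hard part was never the mechanism; it was curating and updating the ruleset as the platform evolved.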

2. Unit Tests

Unit tests ensured small blocks of code behaved as expected, but Salesforce and other metadata platforms made this complex:

• Metadata behavior often replaced “functions” in code.

• Triggered automation could fire unexpectedly.

• Test data setup required deep system knowledge.

Writing unit tests was essential—but slow, brittle, and hard to maintain.
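A common mitigation for brittle test-data setup was the factory pattern: centralize the setup knowledge in one place so a platform change breaks one factory, not every test. A minimal sketch in Python, using a hypothetical tier-discount rule standing in for platform automation:

```python
import unittest

def apply_tier_discount(amount: float, tier: str) -> float:
    """Hypothetical pricing rule of the sort a trigger or Flow might implement."""
    rates = {"gold": 0.20, "silver": 0.10}
    return round(amount * (1 - rates.get(tier, 0.0)), 2)

def make_account(tier="silver", amount=100.0):
    """Test-data factory: on a real platform this is where required fields,
    record types, and trigger side effects would be handled once."""
    return {"tier": tier, "amount": amount}

class DiscountTests(unittest.TestCase):
    def test_gold_discount(self):
        acct = make_account(tier="gold")
        self.assertEqual(apply_tier_discount(acct["amount"], acct["tier"]), 80.0)

    def test_unknown_tier_pays_full_price(self):
        acct = make_account(tier="bronze")
        self.assertEqual(apply_tier_discount(acct["amount"], acct["tier"]), 100.0)
```

On a real org, the factory is where the "deep system knowledge" lived, and keeping it current was a maintenance burden in its own right.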

3. Static Security Testing

Static scanners identified:

• Unsafe library usage

• Cross-site scripting risks

• SOQL/SQL injection

• Insecure object/field access patterns

• Hardcoded secrets

But scanners depended on manually maintained rulesets, and teams had to triage large volumes of alerts.
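The triage problem falls out of the mechanism itself. A toy scanner in Python (the patterns and severities below are illustrative, not a real ruleset) shows why: every rule fires independently, and someone has to sort the pile:

```python
import re

# Toy static-analysis rules: (name, severity, pattern). Illustrative only.
RULES = [
    ("hardcoded-secret", "high", re.compile(r"(?i)(password|secret|api_key)\s*=\s*['\"]")),
    # Dynamic SOQL built by string concatenation: an injection risk.
    ("soql-injection", "high", re.compile(r"Database\.query\([^)]*\+")),
    ("debug-statement", "low", re.compile(r"\bSystem\.debug\b")),
]

def scan(source: str) -> list[tuple[str, str]]:
    findings = [(name, sev) for name, sev, pat in RULES if pat.search(source)]
    # Triage: surface high-severity findings first, which mirrors the manual
    # alert-sorting work teams did before AI-assisted prioritization.
    return sorted(findings, key=lambda f: f[1] != "high")
```

Before AI, the sort key above was a human being reading hundreds of alerts per build.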

4. Functional Tests Within the Build

Some teams pushed functional tests earlier into CI:

• Selenium or browser-based UI flows

• Cucumber or BDD scenarios

• API regression tests

These were extremely sensitive to metadata changes. A renamed field or updated layout could break dozens of tests. Human effort was needed to diagnose whether a test failure represented a real bug or just a metadata ripple effect. AI and tools like Copado Robotic Testing are now beginning to rewrite all of this. But the pre-AI Build phase demanded armies of test engineers—and long feedback cycles.
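One cheap defense teams did push into CI was a contract check on integration payloads, so a renamed or dropped field failed the build instead of a downstream system. A minimal sketch, with a hypothetical serializer and field contract:

```python
import json

# Field contract assumed by a downstream integration (hypothetical).
EXPECTED_FIELDS = {"id", "name", "tier"}

def serialize_account(record: dict) -> str:
    """Hypothetical integration serializer: only contract fields are emitted."""
    return json.dumps({k: record[k] for k in sorted(EXPECTED_FIELDS)})

def check_contract(payload: str) -> bool:
    """Regression check: a renamed or dropped field breaks this immediately,
    turning a silent integration failure into a CI failure."""
    return set(json.loads(payload)) == EXPECTED_FIELDS
```

The check is trivial; the manual work was knowing which of hundreds of payloads a given metadata change could touch.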

Open Source in the Build Phase

Powerful, but Labor Intensive

Open source is the backbone of modern development. But in the Build phase, it introduced several challenges—especially pre-AI:

1. Manual Dependency Management

Developers had to track library versions, ensure compatibility, manually triage deprecation warnings, and update transitive dependencies (often dozens deep). Every update risked breaking something.
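"Dozens deep" is not an exaggeration: the audit surface is the transitive closure of the dependency graph, not the handful of libraries a team declared directly. A sketch of that walk, over a made-up graph:

```python
from collections import deque

def transitive_deps(graph: dict[str, list[str]], root: str) -> set[str]:
    """Breadth-first walk over a declared-dependency graph, collecting every
    transitive dependency: the full set a developer had to audit by hand."""
    seen, queue = set(), deque(graph.get(root, []))
    while queue:
        dep = queue.popleft()
        if dep not in seen:
            seen.add(dep)
            queue.extend(graph.get(dep, []))
    return seen
```

Declaring one library and inheriting its whole subtree is exactly how a single `npm install` balloons into hundreds of packages to keep current.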

2. Library Version Drift

Different teams, repos, or micro-services often used different versions of the same library. Over time, codebases bloated, inconsistent behavior emerged, and attack surfaces increased.
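Detecting drift is mechanically simple once the manifests are collected, which is why it was such a frustrating manual chore. A sketch, using made-up repo and library names:

```python
def find_drift(manifests: dict[str, dict[str, str]]) -> dict[str, dict[str, str]]:
    """Given repo -> {library: pinned version}, report every library that is
    pinned to more than one version across repos."""
    versions: dict[str, dict[str, str]] = {}
    for repo, deps in manifests.items():
        for lib, ver in deps.items():
            versions.setdefault(lib, {})[repo] = ver
    # Keep only libraries where the pinned versions disagree.
    return {lib: pins for lib, pins in versions.items() if len(set(pins.values())) > 1}
```

The hard part was never this report; it was gathering manifests from every team and then negotiating who upgrades first.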

3. Inherited Security Vulnerabilities

Open source is only secure if maintained. In reality, developers reused old versions for convenience, teams patched vulnerabilities inconsistently, and legacy code quietly accumulated Common Vulnerabilities and Exposures (CVEs).

Before AI, manually reviewing open-source dependencies was a constant grind. Now, AI can standardize library versions, automatically update them, and even rewrite legacy code to conform to newer APIs. But in traditional DevOps, humans owned that work.

Dependency & Impact Analysis: Planning Was Not Enough

We’ve already emphasized dependency and impact analysis in the Planning phase. But in reality, the Build phase introduced new truths:

• Plans were often incomplete.

• Hidden dependencies only emerged once work began.

• Metadata edits revealed downstream automation impacts.

• Developers changed more than originally estimated.

• Other teams were modifying overlapping assets simultaneously.

Impact analysis had to be performed again—this time on the actual changes, not the proposed ones. Before AI, this meant:

• Diff analysis

• Checking dependency graphs manually

• Reviewing automation chains

• Consulting SMEs

• Re-analyzing integration touch-points

• Inspecting permission and security implications

This is where AI shines today, but before AI, this step consumed hours—sometimes days.
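At its core, the manual exercise was a reverse-dependency walk: start from the asset that actually changed and follow every "who references this?" edge. A sketch with a hypothetical field-to-Flow-to-trigger chain:

```python
def downstream_impact(dependents: dict[str, set[str]], changed: str) -> set[str]:
    """dependents maps an asset to the assets that reference it. Walk the
    chain to find everything a single change can ripple into."""
    impacted, stack = set(), [changed]
    while stack:
        asset = stack.pop()
        for dep in dependents.get(asset, ()):
            if dep not in impacted:
                impacted.add(dep)
                stack.append(dep)
    return impacted
```

The algorithm is trivial; the pre-AI pain was that the `dependents` map itself lived in SMEs' heads and had to be reconstructed by hand for every change.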

The Merge Problem: Coordination Across Teams

Perhaps the most underestimated challenge in the Build phase, especially on metadata platforms, was merge conflicts.

Why merge conflicts were so common:

• Multiple developers edited the same metadata files.

• Layouts, flows, permission sets, and automation overlapped.

• Branching strategies weren’t metadata-aware.

• Git wasn’t originally designed for XML-heavy, interdependent MD assets.

Resolving conflicts required:

• Deep expertise in platform semantics

• Manual inspection of XML or JSON

• Conversations with other developers

• Re-testing everything after resolution

Some enterprises dedicated full-time engineers just to merge conflict resolution. AI is now extremely good at reconciling metadata and code changes. But before AI? It was a slow, frustrating, high-risk part of the Build process.
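The root cause is that Git merges lines of XML while the conflict is really about platform semantics. A sketch of the alternative, parsing a simplified profile into field permissions and doing a three-way merge on meaning rather than text (the XML shape below is simplified for illustration):

```python
import xml.etree.ElementTree as ET

def field_permissions(profile_xml: str) -> dict[str, bool]:
    """Parse a simplified profile XML into field -> editable, so two branches
    can be compared semantically instead of line by line."""
    root = ET.fromstring(profile_xml)
    return {
        fp.findtext("field"): fp.findtext("editable") == "true"
        for fp in root.iter("fieldPermissions")
    }

def semantic_merge(base: dict, ours: dict, theirs: dict):
    """Three-way merge on parsed permissions: a field is a conflict only if
    both sides changed it to different values. (Removals appear as None.)"""
    merged, conflicts = dict(base), []
    for field in set(base) | set(ours) | set(theirs):
        b, o, t = base.get(field), ours.get(field), theirs.get(field)
        if o == t:
            merged[field] = o          # both sides agree
        elif o == b:
            merged[field] = t          # only theirs changed it
        elif t == b:
            merged[field] = o          # only ours changed it
        else:
            conflicts.append(field)    # genuine semantic conflict
    return merged, conflicts
```

A line-based merge flags a conflict whenever both branches touch the same XML region; the semantic merge above only flags fields where the two branches actually disagree, which is the distinction metadata-aware tooling (and now AI) exploits.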

Environment Drift

Dev, QA, and staging often diverged from production. This made Build validation harder because:

• Tests passed in one org but failed in another.

• Metadata dependencies existed in one environment but not another.

• Configuration drift introduced false negatives and false positives.

AI will be able to detect and reconcile drift automatically, but historically, this was manual detective work.

This also brings up the importance of back deployments. In a long-lived sandbox environment, it is critical that changes made later in the pipeline are moved quickly and regularly back to the earlier stages, including the developer environments.
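The detective work reduces to comparing metadata snapshots of two orgs and classifying the differences. A sketch, hashing each component's body so snapshots stay cheap to compare (component names are made up):

```python
import hashlib

def snapshot(org: dict[str, str]) -> dict[str, str]:
    """Hash each metadata component's body, giving a cheap fingerprint per org."""
    return {name: hashlib.sha256(body.encode()).hexdigest() for name, body in org.items()}

def diff_orgs(source: dict[str, str], target: dict[str, str]) -> dict[str, list[str]]:
    """Classify drift between two org snapshots: components missing from the
    target, extra in the target, or present in both but different."""
    return {
        "missing": sorted(set(source) - set(target)),
        "extra": sorted(set(target) - set(source)),
        "changed": sorted(n for n in set(source) & set(target) if source[n] != target[n]),
    }
```

The same comparison, run from production back toward dev, is what tells you which changes a back deployment needs to carry.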

Packaging & Release Readiness

Before code could move into Deploy, teams had to:

• Package metadata correctly

• Validate dependencies

• Order deployment steps manually

• Align sequencing with platform quirks

This was especially difficult on Salesforce, where deployment order matters and relationships between metadata assets aren’t always explicit.
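When the relationships *are* known, ordering deployment steps is a topological sort: every component's prerequisites deploy before it. A sketch using Python's standard library (the object/field/layout chain is a made-up example):

```python
from graphlib import TopologicalSorter

def deployment_order(depends_on: dict[str, set[str]]) -> list[str]:
    """depends_on maps a component to the components that must deploy first.
    graphlib raises CycleError if the metadata graph is circular."""
    return list(TopologicalSorter(depends_on).static_order())
```

The pre-AI difficulty was the input, not the sort: on Salesforce many of those edges are implicit, so humans reconstructed `depends_on` from experience and failed deployments.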

Conclusion

The Pre-AI Build Phase Was Powerful—but Burdened by Manual Effort

CI was revolutionary. It reduced integration failures, improved quality, and forced discipline into the software delivery lifecycle. But on metadata-driven platforms:

• Metadata mattered as much as code

• Tests were essential but brittle

• Open source required constant manual care

• Merge conflicts consumed huge cycles

• Dependency and impact analysis happened twice

• Environment drift was a hidden tax

• Packaging was non-trivial

• Coordination across teams determined success or chaos

AI isn’t just improving this phase—it is reshaping it. But to appreciate its value, we must understand just how much weight the Build phase once placed on human shoulders.
