Why Do Enterprise IT Projects Slip and Fail? The Hidden Causes in the QA Process
When Complex Architecture Eats Up Your Testing Time
“Why are there so many defects in production if we’ve already tested the system?” “We’re delayed again because of testing, and we really can’t push the go-live any further.” “On paper we have a testing process, but in practice it just doesn’t work…”
Do these questions sound familiar?
In an average enterprise IT environment, architecture today is no longer about “a monolithic ERP and one or two interfaces”. At most large companies, the reality is microservices, API-first architectures, hybrid cloud (on-prem + AWS/Azure/GCP), dozens of integrations (ERP, CRM, billing, logistics, identity, mobile app, web portal) and 3–8 vendors working in parallel.
This doesn’t just make the system larger – it changes it qualitatively as well. Complexity grows in a network-like way: every new interface multiplies the number of potential failure points.
What’s the problem with this?
Fundamentally nothing – these modern architectures bring many advantages: fast business responsiveness, flexibility and vendor independence, scalability and more room for business experimentation. The problem starts when the new system is being introduced or upgraded, but the company’s testing practice – if such a thing exists at all – is still optimised for the “old world”. From this, the familiar phenomena follow almost automatically: integration defects in production, regression issues after every release, “if there’s time left” testing and chronic uncertainty around go-live decisions.
At TestIT we often see exactly these kinds of bottlenecks significantly impacting the success of IT projects. That’s why we’ll walk through in more detail why enterprise IT projects typically slip, fail and become more expensive due to hidden weaknesses in the testing process – and what direction it makes sense to move in if you want to change this, or at least avoid the most common traps.
The Reasons That Contribute to IT Project Delays
First Reason: Many Applications, Many Interfaces, Little Testing Time
In enterprise environments it’s a very typical problem that several dozen business applications communicate with each other, with potentially hundreds of interfaces (API, batch, message queue) in operation, while mixed teams (in-house, nearshore, offshore, vendors) develop in parallel. At the same time, release cycles are getting shorter: quarterly or half-yearly releases are often replaced by monthly or even bi-weekly ones.
According to recent editions of the World Quality Report (Capgemini–Sogeti–Micro Focus), 60–70% of organisations feel that system complexity is growing faster than QA capacity and competence. This shows clearly in project scheduling: scope increases, integrations multiply, development delays have to be “swallowed” somehow, but the go-live date remains fixed because there is no business room to move it.
And what draws the short straw in these situations? The time allocated for testing. The planned six weeks of testing become three, often still on paper “with the same scope”. Out of those three weeks, one week is lost to integration firefighting and environment problems, one week to UAT, and in the best case one week remains for regression. This is obviously insufficient in an architecture with 100+ interfaces. No wonder Gartner and Forrester analyses show that in enterprise environments 60–80% of defects are integration and regression-related: the issues are not primarily in the individual modules themselves, but in how they fit together.
Second Reason: Expecting Knowledge from People Instead of from the Process
In practice, critical testing knowledge usually lives in the heads of a few key people, and the testing process is implicitly built on top of them. As long as this knowledge is not properly structured and documented, testing will remain difficult and fragile. Both ISTQB guidance and ISO/IEC/IEEE 29119 emphasise that testing works in the long run only if the process is repeatable, not if it is used as a form of firefighting.
In practice, however, the following picture is very common:
1. 3–5 key people truly understand the system and the business processes.
2. A significant part of the test cases and experience exists “in their heads” or in scattered Excel sheets and old Confluence pages.
3. Documentation of end-to-end processes is incomplete or outdated.
“Judy knows how to test this end-to-end, she’s been doing it for the last nine years.” In practice, Judy may indeed have been sufficient for testing so far. But for testing a complex system she definitely won’t be enough anymore, and it would be very risky to entrust this task to her alone. Not to mention the hidden dangers of the attitude “We don’t have time to document, the release has to go out first.”
What’s the problem with this? That in such a setup you simply can’t say precisely what has actually been tested, to what depth, and what risk remains at go-live.
Third Reason: Lack of In-house Testing Competence and Capacity
Unfortunately, QA is still treated in many places as a kind of “last-minute check” rather than as strategic quality management, and the testing perspective on strategic IT decisions often takes a back seat.
And when something goes wrong, defects “bounce around” between the parties, and there is no one to hold QA together as a whole.
Fourth Reason: Continuous Release Pressure, Short Time-to-Market
IT is now a core business component in every industry. For banks it’s the mobile app and online banking experience, for telcos the self-service portal, for manufacturers the digital supply chain and partner portals, and in B2B the API ecosystem. Management is right to demand fast releases and fast ROI – but we’re often reluctant to price in the time required for quality.
Publications in IEEE Software and several industry studies consistently show that defects discovered in production cost many times more to fix than those caught during development and testing.
Despite this, in the final weeks you often hear: “Just this once let’s skip the full regression, it’s only a small change anyway.” And these “small” changes are exactly what lead to serious production defects that render entire processes unusable.
So Where Exactly Does the IT Project Process Actually Break Down?
From the above it’s clear what’s missing – let’s list these once more:
✘ No Testing Strategy and No Unified, Enterprise-level Testing Methodology
Most organisations have some kind of project-level test plan, UAT checklist or perhaps a regression Excel sheet. What’s missing is a unified, enterprise-level testing strategy that clearly defines common rules, quality levels and expectations for every project.
If there is no central strategy for this, every project creates its own rulebook, with differing quality levels and expectations. In two consecutive projects, the meaning of “we’ve tested it sufficiently” can be completely different, and management makes every go-live decision “in semi-darkness”.
✘ No End-to-end Designed Regression Test Set
Regression testing typically swings between two extremes: either “we’d like to retest everything” (which is impossible in time), or “we’ll only check the directly affected module” (which is an illusion in an integrated ecosystem).
The reality should be a prioritised, risk-based E2E regression set that focuses on the business-critical processes, covers the integration points between systems, and is maintained from release to release.
ISTQB and the World Quality Report both point this out – most organisations appear not to have a formalised, maintained regression catalogue. As long as this is missing, every release is a kind of “Russian roulette”. Testing is half-improvised, and at every major incident the question “Why didn’t we catch this?” inevitably resurfaces.
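A prioritised, risk-based regression set can be sketched in a few lines: score each case by failure likelihood and business impact, then fill the available time budget with the highest-risk cases first. This is a minimal illustration with made-up case names and scores, not TestIT’s actual methodology:

```python
# Illustrative sketch of risk-based regression selection under a time budget.
# Case names, scores and durations below are invented for the example.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    failure_probability: int  # 1 (rare) .. 5 (frequent), e.g. from defect history
    business_impact: int      # 1 (minor) .. 5 (process-stopping)
    duration_min: int         # execution time in minutes

    @property
    def risk(self) -> int:
        return self.failure_probability * self.business_impact

def select_regression_set(cases: list, budget_min: int) -> list:
    """Pick the highest-risk cases that still fit into the available testing time."""
    selected, remaining = [], budget_min
    for case in sorted(cases, key=lambda c: c.risk, reverse=True):
        if case.duration_min <= remaining:
            selected.append(case)
            remaining -= case.duration_min
    return selected

catalogue = [
    TestCase("E2E order-to-invoice", 4, 5, 120),
    TestCase("CRM sync smoke", 2, 3, 30),
    TestCase("Portal login regression", 3, 5, 45),
    TestCase("Batch billing run", 5, 5, 180),
]

for case in select_regression_set(catalogue, budget_min=240):
    print(case.name, case.risk)
```

A real catalogue would of course derive probability and impact from defect history and business input rather than gut feeling, but the principle – a transparent, repeatable selection instead of deadline-driven bargaining – is the point.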
✘ No Real Place for Testing Within the Project
Projects are time-boxed and scope rarely shrinks. If testing is not firmly embedded into the project management framework (through gates and quality criteria), test time becomes the soft-scope element – this is what gets cut first. Testing happens in an ad hoc way, if there is time left.
A typical run of events: development slips, the go-live date holds, and testing is compressed into whatever time is left. Decisions about what to skip are then not based on structured risk analysis, but on quick, political, deadline-driven bargaining. The outcome is unexpected defects in production, firefighting after the fact, and all of this capped by a loss of trust towards IT.
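Embedding testing through gates and quality criteria can be made concrete with an automated check that blocks a release when measurable criteria fail. The metric names and thresholds below are illustrative assumptions, not a prescribed standard:

```python
# Minimal sketch of an automated release quality gate.
# Metric names and thresholds are illustrative assumptions.
def quality_gate(metrics: dict) -> list:
    """Return the list of gate violations; an empty list means the release may proceed."""
    rules = {
        "regression_pass_rate": ("min", 0.98),  # share of regression cases passing
        "open_critical_defects": ("max", 0),    # blocking defects must be zero
        "e2e_coverage": ("min", 0.80),          # share of critical E2E flows executed
    }
    violations = []
    for name, (kind, limit) in rules.items():
        value = metrics[name]
        if (kind == "min" and value < limit) or (kind == "max" and value > limit):
            violations.append(f"{name}={value} fails the {kind} {limit} rule")
    return violations

release_metrics = {
    "regression_pass_rate": 0.95,
    "open_critical_defects": 2,
    "e2e_coverage": 0.83,
}
problems = quality_gate(release_metrics)
print("release blocked" if problems else "release approved")  # → release blocked
```

The value of such a gate is not the code but the agreement behind it: once the criteria are explicit and measured, “let’s skip regression just this once” becomes a visible, documented risk decision instead of a hallway bargain.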
So What Is Needed? End-to-end Test Management and QA Governance
In an enterprise environment, test management is a profession in its own right.
It’s not about “getting a few testers”, but about planning, directing and measuring quality at a system level, within a well-defined framework.
Among other things, this means planning test activities at portfolio level, defining measurable quality criteria and gates, and coordinating the QA work of in-house teams and vendors within one framework.
Test Automation Where It Brings Real Value
When planning test automation, the primary questions are what is worth automating, at what level (unit, API, UI, E2E), and for what purpose (integration, full regression, smoke, non-functional).
Test automation is particularly effective for stable, frequently repeated checks – above all smoke testing and the regression testing of business-critical flows.
Most test automation efforts drown in the maintenance of UI scripts, and only a small subset of organisations manage to leverage automation at a strategic, E2E level. The way out is to let automation start from test management, aligned with business priorities, and based on realistic ROI calculations. It’s not the technology but the strategy that decides what pays off.
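As one example of automating below the UI, an API-level contract check is typically far cheaper to maintain than a UI script, because it breaks only when the interface itself changes. The endpoint, fields and stubbed client here are hypothetical:

```python
# Hedged sketch: an API-level contract/smoke check instead of a brittle UI script.
# The /orders endpoint, its fields and the stubbed client are hypothetical.
import json

def check_order_api(get_json) -> list:
    """Validate a hypothetical /orders/{id} response against its expected contract."""
    failures = []
    body = get_json("/orders/42")
    for field, expected_type in [("id", int), ("status", str), ("items", list)]:
        if not isinstance(body.get(field), expected_type):
            failures.append(f"field '{field}' missing or wrong type")
    if body.get("status") not in {"OPEN", "SHIPPED", "CLOSED"}:
        failures.append("unexpected status value")
    return failures

# Stubbed transport so the sketch runs without a live system;
# in reality this would be an HTTP call against a test environment.
def fake_get_json(path):
    return json.loads('{"id": 42, "status": "OPEN", "items": []}')

print(check_order_api(fake_get_json))  # → []
```

Checks like this slot naturally into a regression or smoke suite: they run in seconds, survive UI redesigns, and catch exactly the integration-level breakage that dominates enterprise defect statistics.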
AI-assisted Testing – an Emerging but Already Tangible Trend
AI/ML-based testing support can show up in many forms: predictive defect detection, automatic test case generation, test scripts that adapt to UI changes, log analysis and anomaly detection. According to Gartner and Forrester forecasts, a significant part of QA tools will gain such capabilities over the next 3–5 years.
This can help maintain and optimise regression sets faster, identify untested areas and reduce manual, repetitive QA work. But it is important to understand that this does not replace the foundations. If there is no testing strategy, no E2E regression catalogue, no clear QA governance, AI will only support “smart chaos”. Real value appears where a stable QA process already exists, and AI accelerates it instead of trying to substitute it.
If You’re Unsure About the Effectiveness of Your Own Testing Process, Walk Through This Checklist…
Those who have already lived through a few painful production incidents or heavily delayed releases may start wondering how well their current testing process really fits their own enterprise environment.
From the inside, it is often difficult to give an honest, objective answer. This is why it’s worth bringing in an external, independent QA perspective that doesn’t look at things from a developer or (only) a vendor angle, but from a corporate and business risk point of view, to assess where the current testing process really stands and where the biggest quality risks lie.
How Can TestIT Help?
At TestIT, with the above approach we provide test management and QA consulting services that address exactly these gaps.
All this with many years of solid enterprise experience behind us: we have already assessed the systems of numerous market-leading companies and successfully carried out software testing and test management tasks for their complex IT landscapes.
FAQ
1. For What Size of Organisation Is It Worth Establishing a Separate Test Management / QA Governance Function?
It is definitely justified where multiple critical business systems (ERP, CRM, billing, core banking, logistics etc.) run in parallel, several interdependent projects and releases are running each year, and multiple vendors are working at the same time. In such an environment it’s worth giving QA its own “legs”: a dedicated test manager, QA lead and a clear governance framework.
2. How Should We Start If Until Now We Have Mostly Done Manual, Ad Hoc Testing?
You don’t have to reform everything at once. In practice it works well to start small: document the most critical end-to-end processes, build a prioritised regression set around them, and then extend governance and automation step by step.
3. Do We Really Need an External QA Partner If We Have Our Own IT Team?
Not for everything, but in certain situations it can have a very good return. An external partner brings an independent view, proven methodology and extra capacity exactly where the internal team is stretched.
The goal is not for the external partner to take over QA, but to strengthen and elevate internal quality assurance.