Design Assurance

Ten design failures that appear in almost every infrastructure programme

Across 21 years of reviewing network, security and cloud architectures in financial services, biopharmaceutical, healthcare, media and automotive programmes, the same failures appear with remarkable consistency. None of them are exotic. All of them are avoidable. All of them would have been caught by independent review before implementation began.

1. The design is reviewed only by the team that wrote it

This is the most common and most costly failure. Internal review is not independent review. The same assumptions that shaped the design shape its review. The same vendor relationships that influenced the architecture influence its assessment. A design review conducted by the delivery team is not a review — it is a confirmation.

2. Scalability is designed for today, not tomorrow

Most HLDs are sized for current load. Few seriously model the traffic, user growth or data volumes that will arrive 12 to 18 months after go-live. When those loads arrive, the architecture cannot accommodate them without significant rework — rework that is far more expensive than designing for scale from the start.

3. Security is added after the design is complete

Security as a layer bolted onto a completed design is not security architecture. Firewall rules, access controls and monitoring added at the end of the design process cannot compensate for an architecture that was not designed with security as a first principle. In regulated environments — financial services, healthcare, biopharmaceutical — this is not just a design problem. It is a compliance problem.

4. The LLD does not deliver what the HLD promised

High-Level Designs get approved. Low-Level Designs get written months later, by different people, under different constraints. By the time implementation begins, the LLD has often diverged significantly from the approved HLD. This divergence is rarely documented. It is rarely formally approved. And it is almost never communicated to the business stakeholders who signed off the original design.

5. Single points of failure inside resilient-looking architectures

A design can have dual power, dual uplinks, redundant firewalls and still have a single point of failure at the integration or dependency layer. Resilience designed at the component level but not at the system level creates architectures that look robust on a diagram but fail in ways that the diagram does not predict. Finding these failures before go-live is precisely what independent review is for.

6. Vendor recommendations reflect the integrator's interests, not the client's

This is not always intentional. But the recommended architecture almost always leads to the product the delivery partner already knows, already stocks, and already makes the most margin on. Independent, vendor-neutral review asks the question the delivery team cannot ask objectively: is this the right solution for this requirement, or the most convenient solution for this integrator?

7. Compliance requirements are noted but not designed for

Regulatory obligations appear in the requirements section of the HLD. They are acknowledged, listed and referenced. Then the architecture proceeds to ignore them. The design notes the requirement and moves on, assuming it will be addressed somewhere else. In financial services, healthcare and biopharmaceutical environments, this approach leads directly to audit findings, remediation programmes and — in the worst cases — regulatory action.

8. The design has no formal sign-off trail

Sign-off exists on paper. Evidence of genuine scrutiny does not. When something goes wrong after go-live and the question is asked — who reviewed this design, when, and what did they find — the honest answer is often that the review was cursory, the sign-off was procedural, and nobody was actually accountable for the technical quality of what was approved.

9. Assumptions are stated as facts

HLDs regularly present sizing, performance and resilience assumptions as design decisions without validation against actual requirements or evidence. "The system will support 10,000 concurrent users" appears as a design statement rather than as an assumption that requires testing. When the assumption turns out to be wrong — as assumptions frequently do — the entire design may need to be revisited.

10. Nobody has mapped the design to the business outcomes

Business stakeholders approved a set of outcomes. The technical design delivers something adjacent to those outcomes. The gap between what was promised and what was designed is real, documented in neither the HLD nor the LLD, and it surfaces only when the business expects the system to do something the architecture cannot support.

What these failures have in common

None of the ten failures listed here require exotic solutions. They require independent scrutiny at the right point in the programme — before design decisions become procurement decisions, and before procurement decisions become implementation reality. The cost of catching them early is a fraction of the cost of discovering them late. That is the entire case for independent design assurance.

  • None of these failures are caused by incompetent engineers. They are caused by incentive structures, time pressure and the absence of independent scrutiny. Delivery teams have every incentive to get designs approved. They have limited incentive to find reasons why they should not be.
  • All of them are visible to an independent reviewer. An experienced eye that has no stake in the outcome, no relationship with the vendor and no pressure to hit a milestone will find these issues in a structured review. Every time.
  • All of them are significantly cheaper to fix before implementation than after. A design change costs a document revision. The same change after go-live costs rework, downtime, remediation and — frequently — programme recovery.

Want to know if your programme has any of these?
Independent design review identifies these issues before they become programme problems. All conversations are confidential.
Request a Design Review