
Designing for Failure

Systems Architecture · SRE · Reliability

Every system fails. The difference between a minor incident and a catastrophic outage isn't whether failure happens, but whether you designed for it.

Most teams build systems with an implicit assumption: if we write good code and configure things correctly, the system will work. This assumption is technically true and practically useless. The question isn't whether your system works under ideal conditions. It's whether it degrades gracefully when conditions aren't ideal.

The Failure Spectrum

Failures exist on a spectrum. On one end, you have the failures you anticipated: a database connection times out, a service returns an error, a disk fills up. These are the failures you've written handlers for, the ones covered by your monitoring alerts.

On the other end are the failures you didn't anticipate: a leap second causes a time calculation bug, a certificate expires over a holiday weekend, a configuration change in one service cascades into a system-wide outage. These are the failures that wake people up at 3 AM.

The goal isn't to predict every possible failure. That's impossible. The goal is to build systems that remain controllable when the unexpected happens.

Principles of Failure-Aware Design

1. Explicit Failure Modes

Every component should have documented failure modes. Not just "it might crash" but specifically: What happens when this service is unavailable? What happens when it's slow? What happens when it returns corrupted data?

Document these modes and their expected impact. A database failure might be recoverable in seconds with a replica failover. A cache failure might cause temporary performance degradation. An authentication service failure might block all user access. These have different severity levels and require different response strategies.
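
One lightweight way to keep this documentation from going stale is to treat it as data the team can review and build alerts around, rather than a wiki page. A minimal sketch in Python; the dependency names, scenarios, and severity levels are illustrative assumptions, not a prescribed schema:

    from dataclasses import dataclass
    from enum import Enum

    class Severity(Enum):
        DEGRADED = "degraded"        # slower, or missing nice-to-have features
        PARTIAL_OUTAGE = "partial"   # some critical paths blocked
        FULL_OUTAGE = "full"         # all user access blocked

    @dataclass
    class FailureMode:
        dependency: str
        scenario: str        # "unavailable", "slow", "returns corrupt data", ...
        impact: Severity
        recovery: str        # what restores service, and roughly how fast

    FAILURE_MODES = [
        FailureMode("primary-db", "unavailable", Severity.DEGRADED,
                    "replica failover, seconds"),
        FailureMode("cache", "unavailable", Severity.DEGRADED,
                    "serve from origin until the cache warms"),
        FailureMode("auth-service", "unavailable", Severity.FULL_OUTAGE,
                    "no fallback yet; blocks all user access"),
    ]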

2. Bounded Blast Radius

When something fails, the failure should be contained: a failure in your recommendation engine shouldn't take down your checkout flow, and a spike in traffic from one customer shouldn't degrade service for everyone else.
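
One common containment mechanism is a bulkhead: a hard cap on how much of a shared resource any single caller can consume. A rough sketch, assuming a per-tenant concurrency limit and a placeholder do_work handler rather than any particular framework:

    import threading

    class TenantBulkhead:
        """Caps in-flight work per tenant so one customer can't starve the rest."""

        def __init__(self, max_in_flight: int = 10):
            self.max_in_flight = max_in_flight
            self._lock = threading.Lock()
            self._in_flight: dict[str, int] = {}

        def try_acquire(self, tenant_id: str) -> bool:
            with self._lock:
                current = self._in_flight.get(tenant_id, 0)
                if current >= self.max_in_flight:
                    return False                  # shed the request, don't queue it
                self._in_flight[tenant_id] = current + 1
                return True

        def release(self, tenant_id: str) -> None:
            with self._lock:
                self._in_flight[tenant_id] -= 1

    bulkhead = TenantBulkhead(max_in_flight=10)

    def do_work(tenant_id: str) -> str:
        return f"handled request for {tenant_id}"   # placeholder business logic

    def handle_request(tenant_id: str) -> str:
        if not bulkhead.try_acquire(tenant_id):
            return "429 Too Many Requests"          # only this tenant is limited
        try:
            return do_work(tenant_id)
        finally:
            bulkhead.release(tenant_id)

Shedding the excess load is a deliberate choice here: queueing it would just move the blast radius into latency for everyone.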

3. Graceful Degradation

Systems should have defined states between "fully operational" and "completely down." When your personalization service fails, show generic recommendations. When your analytics pipeline backs up, queue events rather than blocking user actions. When your search index is stale, serve cached results with a freshness indicator.

The key is deciding these degraded behaviors in advance, not improvising during an incident. Every critical dependency should have a fallback path, even if that fallback is "return an error message that tells the user exactly what's happening."
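
As a concrete sketch of a pre-decided fallback, here is roughly what the personalization example could look like. The service URL, timeout, response shape, and generic list are assumptions for illustration:

    import requests

    GENERIC_RECOMMENDATIONS = ["bestsellers", "new-arrivals", "staff-picks"]

    def get_recommendations(user_id: str) -> dict:
        try:
            resp = requests.get(
                f"https://personalization.internal/users/{user_id}/recs",
                timeout=0.2,   # fail fast; the page shouldn't block on a nice-to-have
            )
            resp.raise_for_status()
            return {"items": resp.json()["items"], "degraded": False}
        except requests.RequestException:
            # The degraded behavior was decided in advance, not during an incident.
            return {"items": GENERIC_RECOMMENDATIONS, "degraded": True}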

4. Observable Failure States

You can't manage what you can't see. Every failure mode should be observable: when a fallback activates, when requests are shed, when a queue starts backing up, it should show up in your metrics and alerts rather than only in user complaints.

Observability isn't about collecting data. It's about being able to answer questions you didn't know you'd need to ask.
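
In practice that starts with making degraded paths emit signals. A small sketch using prometheus_client as one option; the metric name and label are assumptions:

    from prometheus_client import Counter

    FALLBACK_ACTIVATIONS = Counter(
        "fallback_activations_total",
        "Times a degraded fallback path was served instead of the primary",
        ["dependency"],
    )

    def record_fallback(dependency: str) -> None:
        # Call this wherever a fallback is taken, e.g. when serving generic
        # recommendations because the personalization service timed out.
        FALLBACK_ACTIVATIONS.labels(dependency=dependency).inc()

An alert on the rate of that counter tells you a dependency is degraded before your users do.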

The Testing Gap

Most testing verifies that systems work correctly. Very little testing verifies that systems fail correctly. This is the testing gap.

Chaos engineering addresses this by intentionally introducing failures: killing processes, injecting latency, corrupting network packets. But chaos engineering is only valuable if you know what correct failure behavior looks like. Before you break things, define what "graceful degradation" means for each failure scenario.
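
One way to close the gap is to write tests that assert the degraded behavior, not just the happy path: inject the failure, then check that the response matches the graceful degradation you defined. A pytest-style sketch with a stand-in dependency; the function under test and its fallback are placeholders:

    def fetch_profile(client) -> dict:
        """Function under test: falls back to an anonymous profile on failure."""
        try:
            return client.get_profile()
        except ConnectionError:
            return {"name": "guest", "degraded": True}

    class BrokenClient:
        """Injected failure: every call to the dependency raises."""
        def get_profile(self):
            raise ConnectionError("simulated dependency outage")

    def test_profile_degrades_gracefully():
        result = fetch_profile(BrokenClient())
        assert result["degraded"] is True     # the degraded state is explicit
        assert result["name"] == "guest"      # defined fallback, not an exception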

Start small. Pick one critical path through your system. Map every external dependency. For each dependency, answer: What happens if this fails? Is that acceptable? If not, what's the mitigation?
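
The answers are worth writing down somewhere a machine can check. A sketch of that audit as data, with made-up dependencies for a hypothetical checkout path:

    CHECKOUT_DEPENDENCIES = {
        "payment-gateway": {
            "on_failure": "checkout blocked",
            "acceptable": False,
            "mitigation": "retry against a secondary payment provider",
        },
        "recommendation-engine": {
            "on_failure": "no cross-sell suggestions shown",
            "acceptable": True,
            "mitigation": None,
        },
        "inventory-service": {
            "on_failure": "risk of overselling stock",
            "acceptable": False,
            "mitigation": None,   # not yet designed
        },
    }

    # Every unacceptable failure without a mitigation is the work queue.
    todo = [name for name, dep in CHECKOUT_DEPENDENCIES.items()
            if not dep["acceptable"] and dep["mitigation"] is None]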

Recovery as a Feature

Recovery shouldn't require heroics. If recovering from a failure depends on someone remembering the right sequence of manual steps, your system is fragile regardless of how rarely it fails.

Design recovery into the system itself: automated failover, restarts that don't need manual cleanup, state that can be rebuilt without a human in the loop.

The best incident response is the one that happens automatically, with humans notified after the fact rather than required for resolution.
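
A minimal sketch of that idea: a supervision loop that restarts an unhealthy worker on its own and tells humans afterwards. The health check, restart, and notification hooks are placeholders for whatever your platform provides:

    import time

    def is_healthy() -> bool:
        # Placeholder: probe a health endpoint or check heartbeat freshness.
        return True

    def restart_worker() -> None:
        # Placeholder: ask the orchestrator or init system to replace the worker.
        pass

    def notify(message: str) -> None:
        # Placeholder: post to the incident channel; informational, not a page.
        print(message)

    def supervise(poll_interval_seconds: int = 30) -> None:
        while True:
            if not is_healthy():
                restart_worker()   # recovery happens first, automatically
                notify("worker restarted automatically after a failed health check")
            time.sleep(poll_interval_seconds)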

Starting Point

If you're not sure where to begin, start here:

  1. List your system's critical paths (the user journeys that must work)
  2. For each critical path, identify every dependency
  3. For each dependency, document what happens if it fails
  4. For each unacceptable failure mode, design a mitigation
  5. Test the mitigations regularly

This isn't a one-time exercise. Systems evolve. Dependencies change. New failure modes emerge. Designing for failure is an ongoing practice, not a project with an end date.

The best time to think about failure is before it happens. The second best time is now.

Systems don't fail because engineers are careless. They fail because failure wasn't treated as a first-class concern. The teams that build reliable systems aren't the ones who avoid failure. They're the ones who make failure boring.