AI Engineer Melbourne
Knowledge Base
Software Engineering · Intermediate · 11 min

Why AI Coding Tools May Not Move Your Delivery Metrics

Coding is rarely the bottleneck. Find the real one before you tool the wrong stage.

Introduction

Most "AI gives 10x productivity" stories assume coding is the bottleneck. For mature organisations it almost never is. You roll out AI coding tools, people feel faster, but lead time, deployment frequency, and change-fail rate barely move. Theory of Constraints gives you a frame: find the actual bottleneck (handoffs, review queues, environment provisioning, decision latency) and target AI where it moves the system, not just the keyboard.

Why this matters

  • AI tool spend without ROI is a board-level question waiting to happen.
  • Local optimisation (one engineer codes faster) doesn't guarantee system optimisation.
  • The real bottlenecks are usually organisational, not technical.
  • Identifying the right constraint determines whether your AI investment pays off.

Core concepts

1. The five focusing steps

Identify the constraint, exploit it, subordinate everything else to it, elevate it, then go again. AI tooling helps at "elevate", but only if the constraint is the right one.

2. Common bottlenecks in mature orgs

Code review queues, environment provisioning, requirement clarification, cross-team coordination, security review, on-call load. Coding speed is rarely top of the list.

3. Local vs. system improvement

A 50% speedup at a non-bottleneck stage produces zero system improvement and may make queues worse upstream of the real constraint.
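This is just pipeline arithmetic: system throughput is capped by the slowest stage. A minimal sketch with hypothetical stage capacities shows a 50% coding speedup moving nothing:

```python
# Hypothetical stage capacities, in work items per day.
stages = {"code": 10, "review": 4, "deploy": 8}

def system_throughput(stages):
    # The pipeline can move no faster than its slowest stage.
    return min(stages.values())

before = system_throughput(stages)

# AI tooling speeds up coding by 50%...
stages["code"] = 15
after = system_throughput(stages)

print(before, after)  # both 4: still limited by "review"
```

The extra coding capacity just lengthens the queue in front of review; throughput only rises when the review stage itself is elevated.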

4. Measuring system flow

DORA metrics (lead time, deployment frequency, MTTR, change-fail rate) reveal system flow. Track them before and after every AI rollout.
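As a rough sketch of what "track them" means in practice, the four metrics can be derived from a deployment log. The log format and numbers below are hypothetical:

```python
from datetime import datetime
from statistics import median

# Hypothetical log: (commit_time, deploy_time, caused_incident, restore_hours)
deploys = [
    (datetime(2024, 5, 1, 9), datetime(2024, 5, 3, 9), False, 0),
    (datetime(2024, 5, 2, 9), datetime(2024, 5, 5, 9), True, 4),
    (datetime(2024, 5, 6, 9), datetime(2024, 5, 7, 9), False, 0),
]

def dora(deploys, window_days=30):
    # Lead time: commit to running in production, in hours.
    lead_times = [(d - c).total_seconds() / 3600 for c, d, _, _ in deploys]
    restore_hours = [r for _, _, failed, r in deploys if failed]
    return {
        "lead_time_hours_median": median(lead_times),
        "deploys_per_week": len(deploys) / window_days * 7,
        "change_fail_rate": len(restore_hours) / len(deploys),
        "mttr_hours": sum(restore_hours) / len(restore_hours) if restore_hours else 0.0,
    }

print(dora(deploys))
```

Snapshot these numbers before any AI rollout; without the baseline, "it feels faster" is the only evidence you will ever have.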

Practical patterns

Value stream map first

Map the path from idea to production; measure wait time at each stage. The longest wait is your constraint.
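Once wait times are measured, finding the constraint is a one-liner. A minimal sketch, with hypothetical stage names and hours:

```python
# Hypothetical wait times (hours) observed between stages, idea to production.
waits = {
    "backlog -> in progress": 72,
    "in progress -> review": 6,
    "review -> approved": 96,
    "approved -> deployed": 18,
}

def constraint(waits):
    # The transition with the longest wait is the current constraint.
    return max(waits, key=waits.get)

print(constraint(waits))  # "review -> approved"
```

Here the review queue, not coding speed, is where 96 of the 192 total hours sit, so that is where an AI investment should land first.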

Target AI at the constraint

If code review is the queue, apply AI to review summarisation. If environment provisioning is, apply it there. If requirement clarification is, apply it to spec drafting.

Pre/post DORA tracking

Roll out tools as experiments: state a hypothesis up front about which DORA metric should move and by how much, then check whether it did.
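A minimal sketch of that hypothesis check, with hypothetical pre/post snapshots and a hypothetical 10% improvement threshold:

```python
# Hypothetical DORA snapshots around an AI review-summarisation rollout.
before = {"lead_time_hours": 90, "deploys_per_week": 3.0, "change_fail_rate": 0.15}
after = {"lead_time_hours": 60, "deploys_per_week": 3.2, "change_fail_rate": 0.14}

def hypothesis_held(before, after, metric, min_improvement=0.10):
    """Did the named metric improve by at least min_improvement (fraction)?
    Lower is better for lead time and change-fail rate."""
    lower_is_better = metric in {"lead_time_hours", "change_fail_rate"}
    delta = (before[metric] - after[metric]) / before[metric]
    if not lower_is_better:
        delta = -delta
    return delta >= min_improvement

print(hypothesis_held(before, after, "lead_time_hours"))  # True: 33% reduction
```

If the predicted metric did not move, treat the rollout as a failed experiment and revisit the constraint hypothesis rather than buying more seats.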

Beware feeling vs. flow

Survey-based productivity gains often don't show up in flow metrics. Trust the metrics.

Pitfalls to avoid

  • Buying AI coding tools because "everyone is" without a constraint hypothesis.
  • Measuring tool adoption rather than outcome change.
  • Letting AI tools generate more code that has to flow through an unchanged review queue: you make the queue worse.
  • Mistaking individual feel-good for organisational throughput.

Key takeaways

  1. Identify the constraint first; tool it second.
  2. Always measure DORA before and after.
  3. Most mature orgs find the constraint is not "writing code."
  4. AI tools can elevate any stage; choose the stage deliberately.
