AI Isn’t Increasing Output—Because We Haven’t Changed How Systems Are Defined

There’s a growing expectation that AI should be driving significant gains in software production.

In most organizations, that hasn’t materialized.

What we see instead are marginal, uneven gains.

This is often framed as a limitation of the technology.

It isn’t.

It’s a limitation of how systems are defined—and how organizations are structured around them.

We Introduced a System Generator… and Kept Feeding It Fragments

Large language models are capable of generating coherent systems, not merely completing fragments of them.

But they require something specific to operate effectively:

a coherent, constrained definition of what is allowed to exist

Most organizations do not provide this.

Instead, they operate on fragments of systems that were never clearly defined.

So the model is reduced to:

completing fragments inside an undefined system

That guarantees limited gains.

The Missing Layer: Defining What Can Exist

Before implementation, there must be clarity about what is allowed to exist and what is not.

Not as documentation.

As a clear definition of the system itself, including:

how it is allowed to grow

In this model, everything built is a valid instance of the system's definition.

If something new is required, the definition itself must change first.

There is no ad hoc insertion.

No one-off patterns.

No silent exceptions.

This matters because AI does not create coherence on its own. It amplifies the structure it is given. If the structure is fragmented, the result is fragmented. If the structure is clear, bounded, and internally consistent, the result can be realized with far less effort than most organizations are used to.
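As an illustration only (the domain, names, and kinds here are hypothetical, not drawn from this paper), a "constrained definition of what is allowed to exist" can be as small as a closed registry of entity kinds and the relations permitted between them:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SystemDefinition:
    """A closed description of what is allowed to exist."""
    kinds: frozenset        # entity kinds the system permits
    relations: frozenset    # (source, target) pairs allowed to connect

    def allows_kind(self, kind: str) -> bool:
        return kind in self.kinds

    def allows_relation(self, source: str, target: str) -> bool:
        return (source, target) in self.relations


# A deliberately small, hypothetical definition: three kinds, two relations.
DEFINITION = SystemDefinition(
    kinds=frozenset({"Customer", "Order", "Invoice"}),
    relations=frozenset({("Customer", "Order"), ("Order", "Invoice")}),
)


def add_entity(kind: str) -> str:
    """Reject anything outside the definition instead of silently extending it."""
    if not DEFINITION.allows_kind(kind):
        raise ValueError(f"'{kind}' is not part of the defined system")
    return f"{kind} accepted"


print(add_entity("Order"))   # inside the definition
# add_entity("Shipment")     # would raise: the definition must change first
```

The point of the sketch is not the data structure but the posture: anything not declared in the definition is rejected at the boundary, rather than silently absorbed.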

Why This Changes Everything for AI

A model cannot reason effectively over an undefined space.

When given fragments of an undefined space, it produces fragments in return.

When given a constrained system, it produces valid extensions of that system.

At that point, the model is no longer assisting.

It is operating within a defined system and completing it.

Why Most Organizations Aren’t Seeing Gains

Because nothing fundamental has changed.

Organizations have kept the same fragmented definitions, processes, and structures.

They have simply inserted AI into them.

So they get the same fragmented output, produced slightly faster.

Which makes the gains appear marginal.

The organizations that resolve this will not improve incrementally.

They will operate with a fundamentally different cost and speed structure.

The Constraint Most Teams Avoid

This model requires discipline.

When something does not fit, you stop and ask:

“Is this a valid extension of the system, or does the system need to change?”

That may require changing the system's definition itself.

This is where human judgment is required.

This is where a meeting of the minds may finally be necessary.

But it happens rarely, not continuously.

That distinction matters. The goal is not to eliminate review. It is to reserve review for moments that actually alter the shape of the system, rather than using meetings as a substitute for a system that was never clearly defined.
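One hypothetical way to make that distinction operational (the function and the definition below are illustrative, not the paper's prescription): classify each proposed change against the existing definition, and escalate to human review only when accepting it would alter the system's shape.

```python
def classify_change(definition: dict, new_kind: str, new_relations: list) -> str:
    """Return 'extend' when a proposed change fits the existing definition,
    or 'review' when accepting it would change the shape of the system."""
    if new_kind not in definition["kinds"]:
        return "review"
    for source, target in new_relations:
        if (source, target) not in definition["relations"]:
            return "review"
    return "extend"


# Hypothetical definition: two kinds, one permitted relation.
definition = {
    "kinds": {"Customer", "Order"},
    "relations": {("Customer", "Order")},
}

print(classify_change(definition, "Order", [("Customer", "Order")]))  # extend
print(classify_change(definition, "Shipment", []))                    # review
```

Everything classified as "extend" can proceed without a meeting; only "review" cases reach a human, which is what keeps review rare rather than continuous.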

The Organizational Implication Most Want to Avoid

If systems are defined this way:

production is no longer driven by headcount

You do not need more layers of management, coordination, and translation.

Those structures often exist to compensate for the absence of a clearly defined system.

Large engineering organizations are often a symptom of unclear systems, not a requirement for building them.

When the system is clear, the question becomes what capacity is actually needed to extend it.

The answer is not more engineers.

The answer is not more meetings.

The answer is a small number of people who can define the system clearly at the domain level, plus a small number of engineers where targeted implementation is still necessary.

That is where the staffing conversation changes. If AI is actually allowed to operate inside a defined system, organizations must be willing to reorganize around that reality instead of pretending the existing staffing model remains optimal.

What You Actually Need

Not more engineers.

Not more management layers.

Not more translation between business intent and technical interpretation.

You need people who can define systems clearly at the domain level.

Call them what you like.

The title is less important than the function.

The function is this:

the ability to precisely describe what should exist, how it fits, and how it is allowed to evolve

Everything else becomes secondary.

The Executive Shift

In this model, leadership engagement changes.

Executives are no longer dependent on layers of translation between business intent and technical interpretation.

Instead, they can directly shape and extend systems at the level of business intent.

Not because they are writing code—

but because the system is defined in a way that allows intent to be expressed directly and realized within constraints.

This is the real executive implication of AI, and it is still widely misunderstood.

The long-term opportunity is not simply that engineers get faster.

It is that the distance between business intent and working system can collapse—provided the system has been defined clearly enough that valid growth is possible without reinvention each time.

When that happens, leadership is no longer asking engineering to interpret an idea from scratch. Leadership is operating inside an already-defined model of the business and extending it deliberately.

A Concrete Contrast

Typical approach: an idea is interpreted, split into fragments, and assembled piece by piece, with coherence enforced after the fact.

Constrained system approach: the idea is expressed as a valid extension of the defined system and realized within its constraints.

The difference is not speed.

It is coherence at the point of creation.

The Uncomfortable Reality

Most organizations are optimized for assembling fragments through coordination.

They are not optimized for defining systems clearly enough that those systems can simply be realized.

So AI is forced to operate inside a model it was never meant for.

The Real Question

The question is not:

“How do we make engineers more productive with AI?”

It is:

“Are we willing to define systems clearly enough—and constrain them enough—that they can be realized instead of assembled?”

Until that changes, production gains will remain incremental.

After it changes, they will not.


This paper is part of a broader set of observations on system design and AI-driven production.

Additional papers: Layer Thirteen Papers