AI Isn’t Increasing Output—Because We Haven’t Changed How Systems Are Defined
There’s a growing expectation that AI should be driving significant gains in software production.
In most organizations, those gains haven't materialized.
What we see instead:
- modest improvements
- inconsistent results
- no clear step-change in output
This is often framed as a limitation of the technology.
It isn’t.
It’s a limitation of how systems are defined—and how organizations are structured around them.
We Introduced a System Generator… and Kept Feeding It Fragments
Large language models are capable of:
- assembling systems
- exploring solution spaces
- generating substantial portions of implementation
But they require something specific to operate effectively:
a coherent, constrained definition of what is allowed to exist
Most organizations do not provide this.
Instead, they operate on:
- tickets
- feature requests
- loosely defined requirements
So the model is reduced to:
completing fragments inside an undefined system
That guarantees limited gains.
The Missing Layer: Defining What Can Exist
Before implementation, there must be clarity around:
- what capabilities the system supports
- how those capabilities behave
- how they interact
- what must remain true regardless of change
Not as documentation.
As a clear definition of the system itself, including:
how it is allowed to grow
In this model:
- new features are not invented from scratch
- they are introduced as compatible additions to a defined system
If something new is required:
- it must conform to the system
- or the system itself is deliberately extended
There is no ad hoc insertion.
No one-off patterns.
No silent exceptions.
This matters because AI does not create coherence on its own. It amplifies the structure it is given. If the structure is fragmented, the result is fragmented. If the structure is clear, bounded, and internally consistent, the result can be realized with far less effort than most organizations are used to.
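One way to make this concrete is a minimal sketch in Python. Everything here is hypothetical and illustrative, not a reference to any existing framework: `Capability`, `SystemDefinition`, and the Order/Invoice vocabulary simply model the idea that additions must conform to an explicit definition of what is allowed to exist, or be rejected.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    name: str
    inputs: tuple   # domain types the capability consumes
    outputs: tuple  # domain types it produces

class SystemDefinition:
    """Holds what is allowed to exist and checks every addition."""

    def __init__(self, allowed_types):
        self.allowed_types = set(allowed_types)  # the constrained vocabulary
        self.capabilities = {}

    def extend(self, cap):
        # A new capability must speak the system's existing vocabulary...
        unknown = (set(cap.inputs) | set(cap.outputs)) - self.allowed_types
        if unknown:
            # ...otherwise it is rejected, never silently inserted.
            raise ValueError(f"{cap.name} introduces undefined types: {unknown}")
        self.capabilities[cap.name] = cap

system = SystemDefinition(allowed_types={"Order", "Invoice", "Payment"})

# A compatible addition to the defined system:
system.extend(Capability("bill", inputs=("Order",), outputs=("Invoice",)))

# An ad hoc addition is stopped at the boundary:
try:
    system.extend(Capability("ship", inputs=("Order",), outputs=("Shipment",)))
except ValueError as err:
    print(err)
```

The design choice is that the check lives in the system definition itself, not in review meetings: a feature either fits the declared vocabulary or forces a deliberate decision about changing it.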
Why This Changes Everything for AI
A model cannot reason effectively over an undefined space.
When given:
- inconsistent patterns
- unclear boundaries
- evolving rules
it produces:
- plausible outputs
- but no systemic coherence
When given a constrained system:
- the space of valid solutions is reduced
- consistency becomes a property of the system
- large portions of implementation can be realized directly
At that point, the model is no longer assisting.
It is operating within a defined system and completing it.
Why Most Organizations Aren’t Seeing Gains
Because nothing fundamental has changed.
Organizations have:
- kept the same structures
- kept the same workflows
- kept the same definitions of work
They have simply inserted AI into them.
So they get:
- faster fragments
- more raw output
- the same coordination overhead
- the same rework
Which makes the gains appear marginal.
The organizations that resolve this will not improve incrementally.
They will operate with a fundamentally different cost and speed structure.
The Constraint Most Teams Avoid
This model requires discipline.
When something does not fit:
- you do not force it in
- you do not create a one-off solution
You stop and ask:
“Is this a valid extension of the system, or does the system need to change?”
That may require:
- architectural adjustment
- alignment across stakeholders
- deliberate introduction of a new compatible form
This is where human judgment is required.
This is where a meeting of the minds may finally be necessary.
But it happens rarely, not continuously.
That distinction matters. The goal is not to eliminate review. It is to reserve review for moments that actually alter the shape of the system, rather than using meetings as a substitute for a system that was never clearly defined.
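The "stop and ask" rule above can be sketched in a few lines, assuming a hypothetical vocabulary of allowed domain types (the names, including `Shipment`, are illustrative). The point of the sketch is that the system's shape changes only through one explicit step, never as a side effect of shipping a feature.

```python
# The current definition of what is allowed to exist.
vocabulary = {"Order", "Invoice"}

def conforms(required_types):
    """A proposed addition is valid only if it stays inside the vocabulary."""
    return set(required_types) <= vocabulary

def extend_vocabulary(new_types):
    # The rare, deliberate moment: architectural adjustment happens here,
    # with stakeholder alignment, not inline in feature work.
    vocabulary.update(new_types)

needed = {"Order", "Shipment"}
if not conforms(needed):
    # Stop: this is not a valid extension yet.
    # Either the feature changes, or the system is deliberately extended.
    extend_vocabulary({"Shipment"})

assert conforms(needed)
```

In practice `extend_vocabulary` would be the gated, reviewed path; `conforms` runs continuously and cheaply. That asymmetry is what lets review happen rarely instead of constantly.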
The Organizational Implication Most Want to Avoid
If systems are defined this way:
production is no longer driven by headcount
You do not need:
- hundreds of engineers
- large coordination layers
- constant meetings to align fragmented work
Those structures often exist to compensate for:
- unclear systems
- inconsistent definitions
- ad hoc growth
Large engineering organizations are often a symptom of unclear systems, not a requirement for building them.
When the system is clear:
- fewer people are needed to define it
- fewer people are needed to realize it
- coordination overhead drops significantly
The answer is not more engineers.
The answer is not more meetings.
The answer is a small number of people who can define the system clearly at the domain level, plus a small number of engineers where targeted implementation is still necessary.
That is where the staffing conversation changes. If AI is actually allowed to operate inside a defined system, organizations must be willing to reorganize around that reality instead of pretending the existing staffing model remains optimal.
What You Actually Need
Not more engineers.
Not more management layers.
Not more translation between business intent and technical interpretation.
You need:
- a small number of individuals who can clearly define systems in domain terms
- a constrained model of how those systems can grow
- targeted implementation where necessary
Call them:
- domain architects
- system definers
- capability designers
The title is less important than the function.
The function is this:
the ability to precisely describe what should exist, how it fits, and how it is allowed to evolve
Everything else becomes secondary.
The Executive Shift
In this model, leadership engagement changes.
Executives are no longer dependent on:
- translating intent through multiple layers
- waiting for interpretation and implementation cycles
- funding ever-larger teams to compensate for structural ambiguity
Instead, they can directly shape and extend systems at the level of:
- capabilities
- relationships
- system behavior
Not because they are writing code—
but because the system is defined in a way that allows intent to be expressed directly and realized within constraints.
This is the real executive implication of AI, and it is still widely misunderstood.
The long-term opportunity is not simply that engineers get faster.
It is that the distance between business intent and working system can collapse—provided the system has been defined clearly enough that valid growth is possible without reinvention each time.
When that happens, leadership is no longer asking engineering to interpret an idea from scratch. Leadership is operating inside an already-defined model of the business and extending it deliberately.
A Concrete Contrast
Typical approach:
- idea → tickets → engineering → integration → rework
- AI accelerates parts of this, but not the whole
Constrained system approach:
- define system capabilities and rules
- introduce new capability as a valid extension
- AI realizes large portions within that structure
- refinement replaces rework
The difference is not speed.
It is coherence at the point of creation.
The Uncomfortable Reality
Most organizations are optimized for:
- managing people
- coordinating work
- delivering incrementally
They are not optimized for:
- defining systems clearly
- constraining how those systems grow
- allowing those systems to be realized directly
So AI is forced to operate inside a model it was never meant for.
The Real Question
The question is not:
“How do we make engineers more productive with AI?”
It is:
“Are we willing to define systems clearly enough—and constrain them enough—that they can be realized instead of assembled?”
Until that changes, production gains will remain incremental.
After it changes, they will not.
This paper is part of a broader set of observations on system design and AI-driven production.
Additional papers: Layer Thirteen Papers