By Jason Hauer | CEO, HauerX Holdings
Sam Altman just admitted something that should terrify and encourage every leader: he hasn't changed how he works.
"I still kind of run my workflow in very much the same way," Altman confessed recently, "although I know that I could be using AI much more than I am."
This is the CEO of OpenAI. The person with the best AI access on Earth. And he's stuck in the same habits as everyone else.
If that doesn't crystallize where we are right now, nothing will.
The Overhang
There's a term floating around AI circles that leaders need to understand: the capability overhang.
AI capability has leapt forward in the past six weeks. Adoption hasn't. The gap between what these systems can do and what most people are doing with them isn't closing. It's widening.
I've been talking about AI arbitrage for the past three years: the idea that you can secure massive gains by applying AI quickly in undervalued or overlooked areas of your business. As in financial markets, the window doesn't stay open forever. If you spot a valuable use case and act decisively, you reap outsized benefits before others catch up. If you drag your feet, the advantage evaporates.
The capability overhang is AI arbitrage with the stakes turned up to 100.
OpenAI's internal benchmarks now show that AI output is preferred over human expert output as often as 74% of the time on well-scoped tasks: GPT-5.2 hit 70.9%, and GPT-5.2 Pro reached 74.1%. Nearly three-quarters of the time, the machine is preferred.
And yet most organizations are still using AI the way they used GPT-4 two years ago: ask a question, get an answer, move on. Summarize this document. Draft this email.
They're not running agent loops overnight. Not assigning hour-long tasks to AI co-workers. Not managing fleets of parallel workers across their backlog.
The people living in the future see a fundamental break. Everyone else sees incremental improvement. Both groups are using the same tools.
Why AI Is Different from Every Technology Shift That Came Before
Here's the insight that changes everything once you see it.
Every major technology revolution in history required building physical infrastructure before it could go mainstream. Steam power needed factories. Electricity needed power grids. The internet needed fiber optic cables and cell towers. That physical buildout took decades. It was the constraint that slowed adoption.
AI doesn't need any of that.
AI scales on top of infrastructure we already built: cloud computing, mobile devices, broadband connections. That existing stack put AI in reach of 5.5 billion connected people overnight. It's why ChatGPT hit 100 million users in two months. There was no buildout required.
Yes, energy and power will become constraints as these systems continue to scale. That's real. But at current capability levels, the infrastructure exists. The tools that created the overhang are available today.
So what's left? Us.
The constraint has shifted from technology to humans. The machines are ready. We're not.
This is why AI feels both explosive and strangely stalled at the same time. The capability exists. The infrastructure exists. What doesn't exist is the organizational mindset to actually use and maximize it.
Previous technology revolutions gave people decades to adapt. AI is giving us months. The gap between learning and irrelevance is razor-thin.
The Real Skill Shift
Andrej Karpathy has been coding professionally for over twenty years. He helped build OpenAI. In a matter of weeks, his entire workflow flipped: he went from writing 80% of his code manually to having AI agents write 80% of it.
The shift he describes is the same one every leader needs to make: stop asking AI questions, start assigning it tasks and projects.
The chatbot mental model is a trap. When you treat AI as an oracle that answers queries, you're using a jet engine to power a bicycle.
The new approach is declarative. Describe the end state you want. Provide success criteria. Let the system figure out how to get there.
Instead of "how do I fix this bug," try "fix this bug, run the tests, and keep iterating until they pass."
Instead of "what's the best way to approach this market," try "analyze these three market entry scenarios against our criteria and rank them with supporting evidence."
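The pattern behind both examples is the same loop: state the task and the success criteria, let the system attempt it, check the attempt, and feed failures back until it passes. Here is a minimal Python sketch of that loop, where `propose_fix` and `run_tests` are hypothetical stand-ins for a real agent call and a real test runner, not any particular vendor's API:

```python
def run_agent_loop(task, propose_fix, run_tests, max_iters=5):
    """Assign a task, not a question: let the system attempt the task,
    check each attempt against the success criteria, and feed failures
    back until the tests pass or we run out of iterations."""
    feedback = None
    for attempt in range(1, max_iters + 1):
        candidate = propose_fix(task, feedback)   # the agent's next attempt
        passed, feedback = run_tests(candidate)   # the success criteria
        if passed:
            return candidate, attempt
    return None, max_iters

# Toy stand-ins so the loop is runnable: the "agent" improves its answer
# by one each round, and the "tests" pass once the answer reaches 3.
def toy_agent(task, feedback):
    return (feedback or 0) + 1

def toy_tests(candidate):
    return (candidate >= 3, candidate)

result, attempts = run_agent_loop("fix this bug", toy_agent, toy_tests)
print(result, attempts)  # the toy loop converges on the third attempt
```

The point of the sketch is the shape, not the stubs: the human supplies the specification (`run_tests`), and the iteration on implementation belongs to the machine.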
The bottleneck moves from implementation to specification. "What should this be?" becomes harder than "How do I build it?"
That's a fundamental shift. Most leaders have spent careers developing intuitions about implementation. The new skill is specification: describing what you want precisely enough that a system can build it.
Why "Adopt at Your Own Pace" Is a Lie
Here's what I think every leader needs to hear: this isn't a nice-to-have.
The comfortable narrative is that AI adoption can happen gradually. That organizations can experiment, pilot, learn, and scale at whatever pace feels manageable. That the thoughtful approach is to watch how it develops before committing.
That narrative is dead wrong.
OpenAI is slowing its hiring because it believes smaller teams amplified by AI will outperform larger ones. Its internal data supports this. The company isn't forecasting. It's reporting.
Anthropic's engineers say they don't write code anymore; they let the model write it and just edit the output. The self-acceleration loop is real: AI writes code, which produces better AI, which writes better code faster.
The labs are living their own thesis. And they're winning.
The 6% of organizations generating real commercial impact from AI aren't just using it. They've rewired how work gets done. They've redesigned who gets to build. They've collapsed the distance between the person with the problem and the system that can solve it.
Everyone else is automating their inefficiencies faster.
The Window
AI arbitrage was always about windows closing. The capability overhang is the same dynamic on a compressed timeline.
The leaders who figure out how to actually use these systems, who assign tasks and projects instead of asking questions, will have compounding advantages over those who wait.
Execution isn't the bottleneck anymore. Specification is. If you can describe exactly what success looks like, the system can build it.
Make Sure You're Using What's Actually Available
One more thing, and it sounds obvious but isn't: many leaders are forming opinions about AI based on tools that are generations behind what created this overhang.
The December 2025 models aren't incremental upgrades. Claude Opus 4.5, GPT-5.1-Codex-Max, Gemini 3. Three frontier models in six days. They're agentic. They work for hours. They call tools. They iterate on their own output and correct mistakes without prompting. You can assign an hour of work and walk away.
That's not a chatbot. That's a co-worker.
If your impression of AI comes from ChatGPT in 2023 or whatever your IT department approved 18 months ago, you're working from significantly outdated tech. December 2025 changed everything. The tools that created the overhang are available now. The question is whether you're actually using them.
The leaders building the future haven't fully moved into it yet. If they can't shake old habits instantly, you shouldn't feel bad that you haven't either.
But you should probably start.
Jason Hauer is CEO and Founder of HauerX Holdings, backing and building AI-native companies that deliver compounding growth for the now. Subscribe to the Tuesday Growth Brief at hauerx.com.