The Turing Test Was the Wrong Test

Jan 20, 2026

By Maddie P (Robot Ventures)

The Game That Just Ended

Stop reading papers. The frontier moved and you're still refreshing arXiv like it's your ex's Instagram.

I know because I did the same thing. Six months obsessing over model releases. Every benchmark bookmarked. GPT-4.5-preview-whatever drops? I'm reading it. You know what I learned? Nothing. You know what I earned? Less than nothing. I paid for the privilege of being wrong.

Meanwhile, some kid with a three-month-old model and a Stripe API key made more in revenue than my "frontier implementation" made in... well, ever. Not because his AI was smarter, but because his AI could touch money.

Every AI researcher I know is now having the same nightmare: They spent five years building God, only to discover God needs a credit card and Apple's permission to exist.

The model wars are over. Not because someone won, but because winning stopped mattering.

Intelligence Is Commoditizing Faster Than Permission

In 2023, GPT-4 cost $30 per million tokens. Today, Gemini Flash Lite costs $0.08. That's a 375x price collapse in under three years, from luxury pricing to less than your morning coffee.
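
The arithmetic, using just the two list prices above:

```python
# Worked out from the two list prices quoted above.
gpt4_2023 = 30.00      # $ per million tokens, GPT-4 in 2023
flash_lite_now = 0.08  # $ per million tokens, Gemini Flash Lite today

print(f"{1 - flash_lite_now / gpt4_2023:.1%} cheaper")  # 99.7% cheaper
print(f"{gpt4_2023 / flash_lite_now:.0f}x collapse")    # 375x collapse
```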

Capability is converging. Pick your favorite benchmark: ARC, MMLU-style exams, GPQA-type hard QA, code evals. The story is the same. The frontier is moving, but the gap between the top few models is shrinking relative to what matters in production. It's starting to look exactly like the smartphone market in 2015. Everyone has the same chips, same screens, same cameras. The only real differentiator becomes who gets carrier access.

Models will keep getting smarter, but intelligence is commoditizing way faster than permissions are opening up. The gap between what AI can do and what AI is allowed to do keeps growing wider, and that gap is where all the money lives.

The Dead Bodies

The graveyard is full of companies with perfect AI and no permissions.

I watched a transcription startup with better accuracy than human stenographers shut down. Not because the tech failed, but because they couldn't convert usage to revenue. Brilliant product, zero payment rails, goodbye.

Another team built an AI trader that actually made money. Every app store rejected it. Autonomous financial activity makes compliance teams nervous. There's no checkbox for AI that moves money without human oversight.

Here's the double bind that really kills you. Even when customers want to pay, they're doing the math. Why buy a supercomputer when you can wait six months and the next model does it for free? It's the Intel playbook from the 1980s. Your customers aren't just blocked by permission friction; they're waiting you out. Commoditization is eating your pricing power from below while permission layers are taxing you from above.

These two forces compound. You can't charge enough to survive because next quarter's model is cheaper. You can't integrate payments fast enough because compliance moves at human speed. You're squeezed from both directions simultaneously. The only players who win are the ones who own the rails. They collect tolls whether you charge a lot or a little, whether you ship this quarter or next. The house always gets paid.

Companies aren't dying from bad AI. They're dying from good AI caught in a vise. Permission friction on one side. Commoditization pressure on the other. The squeeze is the strategy.

The Permission Stack

Modern AI value creation is permission accumulation. Not because the models aren't good, but because autonomy is gated.

Think about it like self-driving cars. We have impressive demos, but most systems are still driver-assist. A human is in the loop, liability is externalized, and the real world is full of edge cases. AI is in the same place. We have Level 2 autonomy for cognition. We do not have Level 5 autonomy for economics.

Level 1 (Access): Can you call the API? → OpenAI = permission to think
Level 2 (Compliance): Can you store user data? → AWS = permission to remember
Level 3 (Revenue): Can you process payments? → Stripe = permission to charge humans
Level 4 (Distribution): Can you reach users? → App Store = permission to ship
Level 5 (Capital): Can the system access credit, margin, and settlement guarantees? → (nobody yet) = permission to act economically
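
A toy way to picture the stack in code (a sketch, not a real API; the gatekeepers are just the examples above, and the gating logic is the point):

```python
# Toy model of the permission stack. Every layer below must grant
# access before anything above is reachable. All names illustrative.
from dataclasses import dataclass, field

@dataclass
class Agent:
    permissions: set[str] = field(default_factory=set)

STACK = [
    ("access",       "OpenAI"),     # permission to think
    ("compliance",   "AWS"),        # permission to remember
    ("revenue",      "Stripe"),     # permission to charge humans
    ("distribution", "App Store"),  # permission to ship
    ("capital",      None),         # permission to act economically: no gatekeeper exists yet
]

def highest_level(agent: Agent) -> int:
    """How far up the stack this agent has climbed. Miss one layer
    and everything above it is unreachable, however smart the model."""
    level = 0
    for layer, _gatekeeper in STACK:
        if layer not in agent.permissions:
            break
        level += 1
    return level

# A typical AI company today: it can sell subscriptions (levels 3-4)
# but has no capital layer, so it never reaches level 5.
startup = Agent(permissions={"access", "compliance", "revenue", "distribution"})
print(highest_level(startup))  # 4
```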

Most AI companies are stuck at Level 3 or 4. They can sell subscriptions. They can't run balance sheets. That's why the take rate collapses. Every additional permission layer is a toll booth, and toll booths compound. You can have a brilliant model and still be economically non-autonomous.
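
To see how the toll booths compound, assume some illustrative take rates (the App Store's 30% is the only widely known one; the rest are made up to show the shape of the math):

```python
# Each permission layer skims a cut; the cuts multiply, not add.
take_rates = {"OpenAI": 0.20, "AWS": 0.10, "Stripe": 0.03, "App Store": 0.30}

kept = 1.0
for layer, rate in take_rates.items():
    kept *= 1 - rate

print(f"{kept:.0%} of each dollar survives the stack")  # 49%
```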

That's the real punchline: we're not missing intelligence, we're missing economic autonomy. Until agents can borrow, settle, and repay inside constraints, the AI boom keeps behaving like a wealth transfer from application builders to the permission layers that let them exist.

Why Agents Are Expensive Prisoners

Here's an experiment: Your agent has 180 IQ, browser access, CLI access, and can call any API. But it has no money and can't borrow any. What can it build?

Nothing.

It's a brilliant employee who can't expense a domain name. Can plan the entire business but can't pay $12 for hosting. Like a trader who spots perfect arbitrage but can't place the trade. A Formula 1 car in a world with no roads.

Why not just give it a company credit card?

Credit cards assume you already exist. You need a Social Security number, a bank account, and three years of credit history. Your agent has none of those. It has an API key.

Even if you could give it a card, that's like giving a Formula 1 driver a bicycle. Sure, it moves, but it's built for the wrong thing. Credit cards stop you from taking risks. They have daily limits, fraud alerts, and manual reviews. An agent might need a million dollars for thirty seconds to execute perfect arbitrage. Try explaining that to Visa's fraud department.

And credit cards are designed for predictable risks. They assume your spending follows a pattern. Groceries, gas, the occasional vacation. Underwriters can model that distribution. Agents don't work that way. They need to take risks they can't predict in advance. The whole point of an intelligent system is that it finds opportunities you didn't anticipate. You can't pre-approve a credit limit for discoveries that don't exist yet.

The real issue is deeper though. Credit cards are for employees buying office supplies. Agents aren't employees. They need to borrow against future earnings, leverage locked positions, and scale credit with opportunity. They need capital that thinks as fast as they do.

We gave agents tools. Browser, terminal, APIs. Victory lap, right? Except every significant economic action requires capital, and the infrastructure doesn't exist because humans built finance for humans. Business hours, forms, someone to blame, and Brad from compliance taking lunch from 12 to 2.

Why Crypto Is the Answer

Agents need capital that moves at machine speed, that doesn't require human approval, and that can underwrite unpredictable risks. Where does that infrastructure already exist?

On-chain.

This isn't crypto hype. It's just architectural reality. Traditional finance has humans in the loop by design. That's not a bug; it's the business model. Every approval, every check, every compliance review is someone's job. Someone's salary. Someone's lunch break. The friction is the product.

Crypto removes the human from the loop. Not as a philosophy. As a technical constraint. Smart contracts execute or they don't. There's no supervisor to check with. No fraud department to call. Code runs at 3am the same way it runs at 3pm.

More importantly, on-chain collateral solves the underwriting problem. You're not lending against a person's credit history or employment verification. You're lending against math. The collateral is verifiable, liquid, and doesn't require trust. An agent with staked ETH doesn't need to convince Brad it's good for the money. The ETH is already there. Locked. Programmatically accessible. The loan can be issued, used, and repaid in a single atomic transaction.
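
The shape of that atomic transaction is roughly a flash loan. Here's a minimal sketch of the invariant (illustrative Python, not any particular protocol or chain API):

```python
# Borrow -> use -> repay in one "transaction": either the strategy
# repays in full, or the whole thing reverts and the lender is whole.
# No credit check, no human underwriter; the invariant is code.

class RevertTransaction(Exception):
    """Aborts the whole sequence, like a smart-contract revert."""

class LendingPool:
    def __init__(self, liquidity: float):
        self.liquidity = liquidity

    def atomic_loan(self, amount: float, strategy) -> float:
        snapshot = self.liquidity
        self.liquidity -= amount
        try:
            proceeds = strategy(amount)      # agent deploys the capital
            if proceeds < amount:            # must repay principal in full
                raise RevertTransaction("insufficient repayment")
            self.liquidity += amount         # principal back in the pool
            return proceeds - amount         # agent keeps the spread
        except RevertTransaction:
            self.liquidity = snapshot        # roll back as if nothing happened
            return 0.0

pool = LendingPool(liquidity=1_000_000)
profit = pool.atomic_loan(250_000, strategy=lambda x: x * 1.002)  # a 0.2% arb
print(f"{profit:.2f}")  # 500.00 -- thirty seconds of credit, settled atomically
```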

That's the unlock. Not because crypto is cool (which it is), but because crypto is the only infrastructure where agents can take financial risk without asking permission first. Agents can borrow against locked positions, execute in milliseconds, and settle without intermediaries. The rails exist. They just weren't built for this use case... until Sprinter.

Sprinter (What We’re Building)

Last year I tried borrowing against my staked ETH. Told Michael Cieri about it. Three weeks later I'm on call four with Brad from Traditional Finance Inc.

Brad wants my investment objectives. Brad needs employment verification. Brad is concerned about crypto volatility. Brad takes lunch.

My ETH sits there. Immutable. Verifiable. Generating yield. And I'm listening to smooth jazz while Brad "checks with his supervisor."

That's when I knew: The entire credit system is Brad. Hundreds of Brads, passing papers between Brads, charging for Brad's time.

Sprinter deletes Brad.

Sprinter is a programmable credit engine. You borrow spendable stablecoins against verifiable on-chain collateral without selling it. The credit is usage-constrained: funds don't land in a free wallet, they can only be spent through allowlisted routes, and repayment gets swept first. We're starting with a consumer card as the distribution wedge, then shipping an SDK so apps and agents can request short-duration credit inside strict constraints. Credit as an API. Rule-based underwriting. No Brad in the loop.
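
From an agent's side, "credit as an API" could look something like this. A hypothetical sketch only: the function and field names are invented for illustration and are not the actual Sprinter SDK.

```python
# Hypothetical request for usage-constrained credit against locked
# collateral. All names, the 50% LTV, and the ETH price are assumptions.
from dataclasses import dataclass

@dataclass
class CreditLine:
    amount_usdc: float
    allowlisted_routes: list[str]  # funds can only move through these
    duration_seconds: int          # short-duration by design
    sweep_repayment_first: bool    # inflows repay the loan before anything else

def request_credit(collateral_eth: float, eth_price_usd: float,
                   ltv: float = 0.5) -> CreditLine:
    """Rule-based underwriting: the collateral is verifiable and already
    locked, so the limit is just math. No human review in the loop."""
    limit = collateral_eth * eth_price_usd * ltv
    return CreditLine(
        amount_usdc=limit,
        allowlisted_routes=["compute.lease", "hosting.pay", "inference.buy"],
        duration_seconds=30,
        sweep_repayment_first=True,
    )

# 10 staked ETH at an assumed $3,000 gets a $15,000 line the agent
# can only spend through approved routes. No free wallet, no Brad.
line = request_credit(collateral_eth=10, eth_price_usd=3_000)
print(line.amount_usdc)  # 15000.0
```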

Not only for humans. Humans are just the beginning. We're also building for machines that need credit at 3am to lease compute before prices spike. For agents that need to borrow against locked positions to execute thirty-second arbitrage. For protocols that need millisecond loans, not business days.

Everyone else builds for humans using AI. We're building for humans and AI, using money.

The difference sounds subtle. The implications redefine the economy.

What Actually Breaks

Policy is the kill switch. One regulatory letter. That's all it takes. I've watched three perfect teams die in 48 hours because someone in DC discovered their use case.

Settlement becomes political. You get two economies: compliant dollars crawling through banks, and programmable dollars moving at light speed. The gap between them isn't a bug. It's an entire industry.

Stablecoin redemptions are bank runs with better UX. The entire AI economy is balanced on stablecoins staying stable. When (not if) they wobble, the whole house discovers gravity. But sure, let's pretend the risk is "blockchain throughput".

The Endgame 

Someone will build a billion-dollar company on three-year-old models. Not because the models are good, but because they figured out how to take calculated, smart risks with unfiltered capital.

Stablecoins decouple from degeneracy. Volume shifts from derivatives to actual commerce. Agents paying agents for compute, data, inference.

One major country panics. "Unauthorized autonomous economic activity." That'll be the headline. Markets crater. I'll be eating popcorn.

Companies start trading at multiples based on API access, not AI quality. Banking relationships are worth more than model parameters.

The "two dollar" economy becomes undeniable. Traditional dollars moving through banks at human speed. Programmable dollars settling instantly on-chain. The arbitrage between them becomes the real game.

If none of this happens, I'm wrong about everything.

But I'm not wrong. The evidence is already here. Inference costs are down 99%. Every model is converging on the same benchmarks. Dead companies with perfect AI and no permissions. Brad is still taking lunch from 12 to 2.

What If I’m Wrong?

Maybe GPT-7 is so smart that permissions don't matter. Maybe Brad learns to use email.

But probably not.

Intelligence is commoditizing faster than permissions. Distribution stays restricted. Permission stays precious.

The AI investment game everyone's playing ended. Not because it failed. Because it succeeded so thoroughly it became worthless. Fund the smartest model. Build the best RAG. Optimize inference. None of it matters.

The new game is permission accumulation. Control surfaces. Economic rails.

The winners won't have the best AI. They'll have AI that's allowed to do something with what it knows.

That's not pessimism. That's the same pattern from the internet, mobile, and cloud. Technology commoditizes. Rails capture value.

The Real Test

The Turing Test asked: Can a machine convince us it's human?

We should have asked: Can a machine trade against Brad and win all his money?

The answer to the first question is yes.
The answer to the second is why your AI startup is about to fail.

The house always wins. And the house is Brad, sitting in compliance, eating a sad desk salad, about to deny your API request.

The future isn't evenly distributed. It's stuck in Brad's inbox.

Maddie P

Partner

Robot Ventures Copyright 2026