Human Language Model
Execution Got Cheap. Taste Didn't.
TL;DR: AI made execution cheap. You can build ten things by Friday. The hard part is knowing which seven to delete on Saturday. The bottleneck moved from code to taste – and taste doesn’t scale.
A few weeks ago I built two interactive tools for my website. Over a weekend. I can’t code – I just talk to machines for a living.
Check ’em out here:
Two years ago, that sentence would’ve been either delusional or expensive. Today it’s Tuesday.
This is the part where most people write about how AI is democratizing creation, how the barriers to execution are falling, how anyone can build anything now. And they’re right. But they’re describing the easy half.
If someone brings you ten ideas now, your first instinct is: let’s try all ten. Not prioritize. Not pick three. All ten, by Friday. But if you can build ten things by Friday, how do you know which three to ship?
The Filter is Gone
Execution used to be the filter. Most ideas died in the gap between “wouldn’t it be cool if” and “actually making the thing.” That gap was expensive – it cost time, money, specialized knowledge. The gap did your editing for you. If an idea wasn’t worth the pain of building it, it simply didn’t get built.
That filter is gone.
When execution is essentially free, bad ideas survive longer. They get prototyped, polished, shipped. They look professional. They work. They sit in the app store like well-dressed strangers nobody invited.
The bottleneck didn’t disappear. It moved. From code to taste.
McKinsey estimates that generative AI could automate 60-70% of employee work activities. GitHub reports that developers using Copilot complete tasks 55% faster. But no study has quantified the value of knowing which task to skip entirely. That’s taste, not throughput.
Andrej Karpathy called it years ago: “English is the hottest new programming language.”
People heard that and thought he was talking about prompting. He wasn’t. The skill that matters isn’t speaking to the machine, but articulating (to yourself, your team, a model) what’s worth doing and why.
Something to Say
You can have tools that handle editing, voice-matching, distribution. The system does what you built it to do.
But it doesn’t know which essay to write. It doesn’t know when an idea is a thread and when it’s a 3,000-word piece. It doesn’t know when I’m writing to figure out what I think versus writing because I think I should post something this week.
That distinction – between having something to say and having to say something – no system bridges.
More output means more decisions, and taste becomes load-bearing. And taste, unlike execution, doesn’t scale.
He Kills Better
Taste is what’s left after you’ve consumed enough good and bad work to have opinions you can’t fully explain.
It’s what makes a senior editor worth ten junior editors: not because he writes better, but because he kills better.
The org chart is inverting. The people who used to be overhead – the editor who cuts 40% of the draft, the creative director who says “no” eleven times before saying “yes,” the PM who buries six features to ship one – they’re not overhead anymore. They’re the product.
“Learn to code” became “learn to prompt,” which will become “learn to [whatever].” Always one dialect behind the thing that actually matters.

My LinkedIn bio has said “human language model” for over a year. When I wrote it, I thought I was being clever. The joke was about prompting – about being the person who talks to the machine.
But it’s not about talking to machines.
It’s about the thing that makes human language human: wanting something specific and saying so in a way that makes people care.
Machines are fluent now. What they’re not is opinionated.
That’s the job.
More on taste and AI: Forget Esperanto | Don’t Mid-Curve It | Atoms vs Abstractions
Frequently Asked Questions
What is the Human Language Model?
It’s a framework for thinking about what becomes scarce when AI makes execution cheap. If anyone can build, write, or generate at near-zero cost, the bottleneck shifts from “can you make it?” to “should you make it?” The human language model is the person who knows what to build, what to kill, and when to stop.
What is taste in the context of AI?
Taste is the ability to make judgment calls no model can make for you: which of ten features to ship, which draft to publish, when to stop iterating. It comes from consuming enough good and bad work to have opinions you can’t fully articulate. AI handles execution. Taste handles direction.
Will AI replace creative jobs?
The execution-heavy ones, yes. If your job is producing output (writing copy, generating images, coding features), AI compresses that cost toward zero. If your job is deciding which output matters, that role gets more valuable. The editor who cuts 40%, the creative director who says “no” eleven times. The distinction is production vs. curation.
What skills matter in the AI age?
Curation over creation. Knowing what to build, what to kill, and when to stop. Andrej Karpathy said “English is the hottest new programming language.” But the real skill isn’t prompting. It’s having something specific to say and knowing why it matters.
Last updated: May 2026.