You're the Villager
“The uninformed majority will always lose to the informed minority.”
There’s a game called “Werewolf.” Some of you know this game by the name of “Mafia.”
If you don’t know this game — maybe you had a childhood, outdoor hobbies, friends who touched grass — here’s the thirty-second version: everyone sits in a circle. Each person is secretly assigned a role — either a werewolf (the informed minority) or a villager (the uninformed majority). Every night, the werewolves secretly pick someone to eliminate. Every day, the surviving players debate and vote on who to execute. The werewolves know exactly who each other are. The villagers know nothing. The villagers outnumber the werewolves by a lot.
The villagers lose anyway. Nearly every single round.
They don’t lose because they’re stupid. They don’t lose because they’re weak. They lose because they’re playing with incomplete information against people who have all of it.
That’s not a hot take. That’s not a thought-leadership post with a threadboi emoji. That’s just how information asymmetry works — and it’s been working that way since before Machiavelli wrote it down, before Sun Tzu made it a whole thing, before every intelligence apparatus in human history figured out that knowing what the other side doesn’t know is worth more than almost anything else you can have.
Hold that thought. We’re going to need it.
The Usual Suspects
There’s a whole taxonomy of people right now with Very Strong Opinions about AI. You’ve met every single one of them. You might be one of them. I’ve been at least three of them at various points, so I’m not up here on a pedestal — I’m up here because I fell off it enough times to have something worth saying.
The Panic Merchant. AI is taking our jobs. The robots are here. The craft is dead, the sky is falling, update your LinkedIn. They post articles. They repost other people’s articles. They have approximately four open PRs that have been “in review” for three weeks and absolutely elite opinions about the macroeconomic impact of large language models on the global labor market. Participation trophy. Next.
The LinkedIn Oracle. “I used Claude to rewrite our entire engineering strategy in eleven minutes. Here’s what I learned as a leader, a human being, and a father of three. A thread 🧵 (1/34).” Seventeen thousand impressions. Zero shipped. The Oracle has discovered that the machine produces confident-sounding text on demand and has mistaken this for wisdom. It is not wisdom. It is a very articulate nothingburger dressed in a blazer. The Oracle is going to be fine, actually, because the Oracle — historically — isn’t very well known for ever having done much real work to begin with.
The Vibe Coder. Paste error. Accept fix. Paste error. Accept fix. Ship. The Vibe Coder is extremely productive in the same way a dog chasing its tail is extremely active. Lots of motion. Occasionally something works. They have no idea why it works, they will have no idea why it breaks, and when it breaks at 2am they will paste the stack trace into the chat window and pray to a god that doesn’t exist for some form of divine absolution. They are building a career on a foundation of unexplained diffs and blind faith, and honestly? It’s going to be genuinely sad when it collapses. Not sad enough that I won’t say I told you so, but still.
The Principled Refuser. Grizzled. Experienced. Often actually quite competent and good at their job — which makes this one the most painful to watch, like seeing a great athlete refuse to use modern training because they came up doing it the hard way and goddammit that matters. “I’ve been writing systems code for fifteen years. I understand the fundamentals. These tools hallucinate, they produce bloated garbage, I don’t learn anything from them, and frankly the whole thing is a hype cycle.”
Here’s the thing: some of that is true. LLMs do hallucinate. They do produce bloated garbage sometimes. The hype is real and it is exhausting. But the Principled Refuser has made a category error — they’ve confused the quality of the output with the value of the interaction. They’re grading the tool like a junior engineer instead of using it like a sparring partner. They’ve built a philosophically airtight coffin and they are very comfortable inside it. Zero drafts. Full marks for internal consistency. Enjoy the box.
The Non-Learner. The final form. The ghost haunting all the others. They might use the tools or they might not — genuinely doesn’t matter either way. Six months pass. A year passes. They know the same things they knew before any of this existed. The tool moved through their hands like water through a fist. They extracted output. They absorbed zero. The Non-Learner is the most insidious because from the outside they look fine. Productive, even. Right up until they’re not, at which point they will have absolutely no idea what hit them.
Here’s what all five of these people have in common:
They’re all the same villager. Just at different points in the same game.
What’s Actually Happening
I want to get substantive for a second, because this isn’t just a vibe — there’s a real structural thing going on that’s worth naming.
Benjamin Bloom, back in 1956, gave us a taxonomy of learning that holds up embarrassingly well nearly seventy years later — in its revised form: remembering, understanding, applying, analyzing, evaluating, creating. The bottom of the stack is trivia. The top is synthesis. Most people, when they use an LLM, are camping out at the bottom two rungs — asking it to remember things and explain things. Which is fine. Useful, even. But it’s essentially using a Formula 1 car to go to the grocery store. Good for the groceries. Tragedy for the car.
The Dreyfus model of skill acquisition describes how people move from novice to expert: novices need rules, competent practitioners start seeing context, experts operate on intuition built from years of internalized pattern recognition. LLMs can shortcut the bottom of that stack — they can hand you rules, context, patterns on demand — but only if you’re doing the work to internalize what you’re receiving. If you’re just passing outputs downstream without engaging, you stay a novice forever. Except now you’re a novice with a very fast copy-paste reflex, which is somehow worse because at least the old-school novice knew they were a novice.
Then there’s Robert Bjork’s research on “desirable difficulties” — the annoying finding that harder learning is stickier learning. The frictionless instant answer feels like learning. It registers neurologically about as well as watching someone else do a pushup. The people actually getting sharper from these tools are introducing friction deliberately: asking follow-up questions, pushing back, asking the model to steelman the opposite position, treating every output as the opening move of a conversation instead of a verdict.
The Vibe Coder has removed all friction. The Non-Learner never introduced any. They both feel productive right up until they face a problem the model hasn’t seen before — which, increasingly, is the only kind of problem that actually commands a premium.
Anti-Pattern Learning: The One Nobody’s Talking About
There’s a mode of learning that barely shows up in the AI discourse, and I think it’s the most underrated thing happening right now — not learning from what works, but learning deliberately from what doesn’t work, and why. Anti-pattern learning.
LLMs are, quietly, one of the best anti-pattern learning tools ever built. Most people just don’t know how to use them that way.
Here’s what it looks like: the model gives you something wrong. Or subtly wrong. Or right-but-fragile — the kind of answer that passes code review and explodes in staging. Most people patch it and move on. The informed minority stops and asks: why was this wrong? What assumption led here? What would have had to be true for this to be the right answer?
That question is worth more than the fix. Every time.
Security research has known this forever. You don’t build secure systems by reading specs — you study how systems failed, why the attacker’s model of the system differed from the defender’s, where the gap was and how it got exploited. Post-mortems exist because engineers figured out that failure is a more efficient teacher than success, if you’re paying the right kind of attention.
LLMs fail in fascinatingly patterned ways. They confabulate confidently. They over-fit to the shape of your question. They take the path of least surprise. They have systematic blind spots that are quite legible once you know what you’re looking for. Every one of those failure modes is a lesson — about the tool, about the problem domain, about the assumptions baked into your prompt, and about gaps in your own mental model.
The anti-pattern learner sees a hallucination and thinks: interesting. What did I ask that invited this? The Non-Learner sees a hallucination and thinks: ugh, these things are unreliable — which is itself, ironically, a kind of confabulation. The tools are reliable. Reliably patterned in their failures. The signal is there if you want it.
The uncomfortable part: anti-pattern learning requires you to sit with the failure for a minute instead of moving on. It requires asking not just “why was the model wrong” but “why didn’t I catch it faster, and what does that say about what I actually understand.” That’s friction. Most people hate friction. The informed minority has learned to treat it like a gift.
(There’s a dedicated follow-up coming specifically on anti-pattern learning with LLMs — how to structure it deliberately, what failure modes to watch for, how to build it into your workflow. Consider this the trailer.)
The Actual Asymmetry
Here’s where it lands.
I’ve been building stuff with the help of some form of AI almost every day for the better part of three years. Infrastructure frameworks, Kubernetes platforms, geospatial analytics pipelines, documentation I’d have avoided for weeks, architecture decisions at midnight when the rest of my team didn’t exist because I am the rest of my team. Solo operator, approximately one of me.
I’ve made every mistake I’m talking about in this post. I was a Non-Learner before I caught myself being one. I had my Vibe Coder era. I broke shit, wondered why, pasted the error, accepted the fix, and then broke shit again slightly differently. All along the way, I’ve had that Principled Refuser voice in my head telling me I should just know this already.
The thing I’ve eventually come to accept is simple:
Every time I used the tool to get something done, I got some…thing done. Was it a good something? Was it a bad something? Didn’t matter. It was done.
Every time I used the tool to understand something, I got something good done and I got a little harder to replace.
That’s the whole game.
Not “fix this bug.” “Fix this bug, explain what was actually broken, explain why the fix works, and tell me what I’d need to know to have written it myself.” Not “write this function.” “Write it, then show me where it breaks, then walk me through three other ways to solve it and why you chose this one.” The AI is not a vending machine. It’s a sparring partner. It should be your skeptical colleague who happens to have read everything that’s ever been recorded in letters and words, and doesn’t care about your feelings.
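The difference between the vending-machine request and the sparring-partner request is just prompt shape, and it’s easy to make concrete. A minimal sketch — `ask` here would be whatever chat call you already use; the hypothetical `sparring_prompt` helper is the only thing that matters:

```python
# Sketch: the same task phrased as a vending-machine request vs. a
# sparring-partner request. `sparring_prompt` is a hypothetical helper,
# not any real library's API — only the prompt shape is the point.

def sparring_prompt(task: str) -> str:
    """Wrap a bare task in the follow-up demands that force understanding."""
    return "\n".join([
        task,
        "Then explain what was actually broken and why your fix works.",
        "Show me the inputs or conditions under which your fix breaks.",
        "Walk me through two alternative approaches and why you rejected them.",
        "Finally: what would I need to know to have written this myself?",
    ])

# Vending machine: one-shot, answer consumed, nothing retained.
vending = "Fix this bug."

# Sparring partner: same task, plus the friction that makes it stick.
sparring = sparring_prompt("Fix this bug.")

print(sparring)
```

Same tool, same bug, same five seconds to type. One of these prompts leaves a residue in your head; the other leaves a diff in your repo.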
One of those patterns compounds.
The other one doesn’t.
Compounding knowledge versus compounding dependence. Take any two people with the same tools and the same hours. Twelve months later, wildly diverging trajectories. One of them is sharper, faster, and genuinely harder to compete with than they were before any of this existed. The other can’t work without the tool and can’t work well with it either, because they never built the judgment to direct it, interrogate it, or catch it when it’s confidently, elegantly, completely wrong.
The werewolves are compounding knowledge.
The villagers are compounding dependence.
And here’s the thing that should make the panic merchants and the principled refusers uncomfortable: the villagers think the tool is the werewolf. AI taking jobs. AI replacing engineers. AI ending the craft. Every hot take, every anxious LinkedIn post, every senior engineer crossing their arms at the back of the room — they’ve all decided the threat is the machine.
The machine is not the werewolf.
The person sitting across the standup from you who decided to actually learn from the thing while you were busy constructing elaborate principled objections to it? That’s the werewolf. And they’re not malicious about it. They’re just paying attention in a way you aren’t — yet.
Here’s the part that still blows my mind a little.
For less than the cost of a Big Mac Combo + 10-piece McNuggets, you have a thinking machine at your fingertips that has read more than any human who has ever lived. Every paper, every RFC, every poem written by Rumi, every post-mortem published by Google, every decision that led to the invention of the internet, every obscure kernel mailing list thread from 2009 that turns out to be the only documentation for the thing that’s breaking your system right now. It’s sitting there. Waiting for you to ask.
The information asymmetry that defined every power structure in human history — the one that let the informed minority win every single round — is gone. Dissolved. Equalized. For the first time ever, the villagers have access to the same information the werewolves do.
And most of them are using it to rewrite emails no one’s going to read, or asking what temperature to cook a f*cking chicken breast at for the fourth goddamn time this year.
Nobody’s coming for your job. Your job is being taken by someone who had access to the exact same tools you did and treated them like a curriculum instead of a calculator.
Twenty bucks a month buys you access to all of it. The only question is whether you actually learn from it — or keep treating it like a bottomless vending machine for answers you’ll forget by Friday.
Don’t be a villager. Be a werewolf. Werewolves stay well-informed and well-fed.
P.S. — This article was written with the help of Claude. And yes, I actually learned something while writing it. Practice what you preach.