the last cohort
The 500,000 PhDs who might be the final generation to develop economically valuable expertise in their domains
I've been thinking about a psychological experiment we're all unknowingly participating in. Some 500,000 PhDs are currently earning $100-150 per hour teaching AI systems exactly how to think like them. When they finish, there might not be a next generation of experts who can command a premium for that expertise.
They're participating in the liquidation of their own profession's scarcity value.
This isn't another "AI will automate jobs" piece. This is about something stranger: humans actively engineering their own intellectual obsolescence, pulled along by the very economic incentives that make the work attractive. We're trapped in a collective action problem where individually rational choices add up to collective obsolescence.
The Handshake Scale
Handshake spent a decade connecting 20 million students to employers. Then frontier AI labs realized what Handshake really possessed: half a million PhDs who could systematically find where AI models break. Early reports suggest this pivot hit $50 million in four months.
The numbers tell a story:
Basic labeling started at $5/hour. Mostly automated now.
RLHF feedback started at $40/hour. Dropped 50% in 18 months.
PhD-level fault finding commands $100-150/hour today.
Each rung gets automated as humans climb to the next level of abstraction. Today's fault-finders become tomorrow's eval-writers, then next week's automated benchmarks.
We're not just seeing knowledge work get automated. We're watching humans actively train the systems that will price them out. Each PhD thinks: "If I don't do this, someone else will, and I need the income." Classic prisoner's dilemma dynamics.
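To make the incentive trap concrete, here's a minimal sketch with made-up payoffs (the numbers are illustrative, not measured, and the whole framing is a simplification): whatever everyone else does, each individual expert is better off selling their expertise to the labs, even though everyone selling leaves the group collectively worse off.

```python
# A toy payoff matrix for the incentive trap described above.
# Payoffs are illustrative, not empirical. Each expert chooses to
# "train" (sell expertise to AI labs) or "abstain".
# Keys: (my choice, what everyone else does) -> my payoff.
payoffs = {
    ("train",   "train"):    1,  # I earn the hourly rate now, but scarcity value erodes for everyone
    ("train",   "abstain"):  3,  # I earn the premium while expertise stays scarce
    ("abstain", "train"):   -1,  # I forgo income and my field gets automated anyway
    ("abstain", "abstain"):  2,  # scarcity preserved, but no individual premium
}

for others in ("train", "abstain"):
    best = max(("train", "abstain"), key=lambda me: payoffs[(me, others)])
    print(f"If others {others}, my best response is to {best}")

# "train" is the best response either way (a dominant strategy),
# even though (abstain, abstain) beats (train, train) collectively.
```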
My Internal Monologue
I should be transparent about something uncomfortable. I built an AI stack for content generation and data analysis that runs with zero human intervention. I spend hours daily in conversation with AI systems for deep thought processes, often preferring them to human conversation for certain types of intellectual exploration.
My content pipeline: AI researches, synthesizes insights, generates drafts, analyzes engagement data, suggests improvements. I've become a curator of machine output rather than a primary creator. The dilemma is real because it works better than my solo efforts ever did.
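For the curious, here's roughly what a pipeline like that looks like as a sketch. The function names are placeholders and `generate()` stands in for whatever model API you'd wire up; this is the shape of the loop, not my actual implementation.

```python
# A simplified sketch of an AI-first content pipeline.
# generate() is a placeholder for any LLM API call; all names here are hypothetical.

def generate(prompt: str) -> str:
    """Placeholder for a call to whatever model powers the pipeline."""
    raise NotImplementedError("wire this to your model of choice")

def content_pipeline(topic: str, engagement_history: list[dict]) -> str:
    research = generate(f"Research recent developments on: {topic}")
    insights = generate(f"Synthesize the non-obvious insights from:\n{research}")
    draft    = generate(f"Write a newsletter draft around these insights:\n{insights}")
    critique = generate(
        f"Given past engagement data {engagement_history}, "
        f"suggest improvements to this draft:\n{draft}"
    )
    # The only human step left is reviewing and approving what comes out the other end.
    return generate(f"Revise the draft applying this critique:\n{critique}\n\n{draft}")
```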
But there's something even stranger happening. I learn from AI conversations, develop insights through machine interaction, then use those insights to train better AI systems. The knowledge flows in circles. When I reach a breakthrough understanding of human psychology by talking to Claude, then use that insight to improve my prompts or train other models, where did the knowledge originate? With me, or with the AI I learned it from?
We're entering a true bootstrap paradox where human and machine knowledge become so intertwined we can't trace the origins of ideas anymore.
The First Cause of Knowledge
The conversation behind what you're reading right now will likely be used to train future AI models. Think about the implications: I'm learning from Claude as I write this. Claude learns from our interaction. Future models get trained on our exchange. Someone reads this newsletter, has insights, talks to those future models, develops new ideas, writes new content that trains even newer models.
Where does knowledge begin and end in this cycle?
Traditional human innovation worked linearly. Someone observed something, had an insight, tested it, shared it. We could trace intellectual lineage. Darwin built on Malthus. Einstein built on Maxwell. Newton famously said he stood "on the shoulders of giants"; he could identify his giants.
But when human insights emerge from AI conversations that were trained on previous human insights that came from earlier AI conversations... the lineage becomes circular. There's no clear "first cause" of knowledge anymore.
This matters psychologically because humans take enormous pride in discovery and innovation. Being the first to figure something out, to create something genuinely new, to push the boundary of human understanding: these experiences provide deep meaning and identity.
What happens to the concept of "discovery" when all knowledge becomes recursive between humans and machines? When you can't tell if your breakthrough insight originated from your mind or emerged from the AI training data that included countless previous human insights?
We're approaching a world where innovation itself becomes a collaborative process between human and artificial intelligence, but the collaboration is so tightly woven that individual contribution becomes impossible to isolate. The very idea of intellectual property, of being "the person who discovered X," starts to dissolve.
I have friends who are investors in companies that, if successful, will displace entire categories of work that people build their daily sense of security around. The returns look promising. The moral complexity keeps me awake.
Sisyphus Strikes Again!
We often compare human adaptation to Sisyphus pushing his boulder up the mountain eternally. But there's a crucial detail we miss: Sisyphus had agency in how he responded to his fate. Camus argued we must imagine Sisyphus happy because he chose his attitude toward the meaningless task.
But what happens when the boulder rolls itself up the mountain?
Previous technological transitions gave us decades to adapt. Agricultural to industrial: roughly 150 years. Industrial to knowledge economy: about 100 years. Humans had time to develop new meaning-making frameworks gradually. Children grew up with slowly changing expectations. Social institutions adapted step by step.
The AI transition is happening in years, maybe months for some domains. We might not have time to psychologically adapt before the old frameworks collapse.
Am I Still A Scientist?
I talked to a materials science PhD recently. She spent eight years developing specialized knowledge about crystalline structures. Now she trains AI systems that outperform her analysis in minutes. She earns more than she ever did in academia, but she's confused about her identity.
"Am I still a scientist," she asked, "or am I a data entry clerk for machines?"
This hits at something deeper than economic displacement. Humans derive enormous psychological value from being good at things, from developing mastery, from feeling useful.
When your 20 years of expertise becomes a $20/month API call, what happens to your sense of self-worth?
We're potentially witnessing the last cohort of humans who will ever develop deep, economically valuable expertise in many fields. The psychological implications are staggering.
When millions of experts face this same identity crisis simultaneously, individual psychology becomes collective sociology. These personal questions about worth and purpose aggregate into something larger: a complete reorganization of how society structures itself around knowledge and capability.
The Great Stratification
Post-automation, we're likely heading toward a new kind of social structure:
The Prompt Aristocracy: Those who own AI systems and infrastructure. They don't need expertise because they have access to all expertise.
The Context Class: Humans who provide real-world grounding. Nurses who understand patient psychology. Managers who handle interpersonal dynamics. People whose value comes from embodied, situated knowledge that's hard to digitize.
The Evaluation Brains: Maybe 5,000 people globally who design the frameworks that AI systems optimize toward. They become the architects of artificial intelligence objectives.
The Displaced Masses: Everyone whose expertise got compressed into algorithms.
I worry about the children growing up in this transition. Traditional educational pathways start to look pointless. Why learn math when AI calculates better? Why study history when AI has perfect recall? Why develop writing skills when AI writes better?
This could create a generation that never experiences the satisfaction of mastery, never develops confidence through competence.
The Meaning Crisis
Previous generations found new sources of meaning as old ones disappeared. Factory workers found purpose in craftsmanship and solidarity. Knowledge workers found meaning in expertise and problem-solving.
But each transition also had massive psychological casualties we often forget. The displacement from rural to urban life created enormous mental health crises. Early industrial workers faced depression, alcoholism and social breakdown at unprecedented scales. The meaning-making took generations.
This time feels different. We're not just automating specific tasks; we're potentially automating cognition itself, the very faculty we use to create meaning. When even the meaning-making process becomes automated, where does the next frontier come from?
Moving Grounds
Parents questioning whether their children should pursue traditional academic paths when AI will surpass them within months of graduation.
Programmers questioning their identity as GitHub Copilot handles routine coding. Writers grappling with AI that produces "good enough" content. Doctors uncomfortable with AI diagnostic systems that outperform them.
When productive work disappears, humans might create artificial scarcity around social validation: extreme status competitions around increasingly niche human activities.
The generation building AI systems retires wealthy. Their children inherit a world where traditional achievement paths are closed. The resentment could be explosive.
The Adaptation Question
Humans are fundamentally meaning-making creatures. We don't discover purpose; we create it. Historically, we've demonstrated remarkable adaptability across massive transitions.
Maybe new meanings will emerge:
Embodied experience and physical presence that AI can't replicate
Moral agency and bearing responsibility for consequential decisions
Aesthetic creation and curation of human experience (I recently saw an ad on an OTT platform and a friend asked if it looked AI-generated. I said "probably," and told him this is going to become very common from here on, considering the kind of companies I've funded. He didn't seem very happy.)
Relational expertise (the intricate work of understanding and supporting other humans)
Choosing what deserves our attention when survival and productivity are solved
But adaptation takes time. Psychological adjustment happens slowly while technological change accelerates. The transition costs could be brutal: a lost generation caught between old and new meaning systems, massive increases in depression and anxiety, social upheaval as traditional status hierarchies collapse.
Is It Really A Problem?
If humans are meaning-making creatures who always find purpose, maybe the AI transition is necessary. Maybe we needed to exhaust the meaning-potential of knowledge work before discovering what deeper forms of human flourishing look like.
But that's a terrifying bet to make on behalf of billions of people who didn't consent to this experiment.
The question isn't whether humans will adapt. We probably will. The question is whether we can minimize the psychological casualties during the transition.
Think About These Things
For the PhDs training AI: How do you think about the psychological implications of encoding your expertise into systems that will price you out? What responsibility do we bear for the second- and third-order effects?
For parents: How do you guide children toward unknowable futures? What do you tell them about developing expertise when expertise itself might become obsolete?
For investors and builders: How do we balance individual economic opportunity with collective psychological welfare? Should we slow down? Speed up? Change direction entirely?
For everyone: When did you last do something that couldn't be automated? How did it feel? What does that tell you about what kinds of meaning might persist?
I don't have answers, just deeper questions. And maybe that's exactly the kind of work humans will specialize in when machines handle everything else: holding complexity, wrestling with uncertainty, asking questions that don't have algorithmic solutions.
The bootstrap trap might be inescapable. But our response to it isn't predetermined. Unlike Sisyphus, we still have choices about how we navigate this transition.
What we choose now shapes what human flourishing looks like on the other side.
We're still so early in understanding these dynamics. The next few years will teach us more about human psychology than we learned in the previous century.
Stay curious.
Kalyani