The Meaning Problem

December 2024

The dominant narrative around AI follows a familiar script: either we achieve AGI and enter an era of unprecedented abundance, or we face existential catastrophe. Sam Altman speaks of universal basic compute and a world where scarcity becomes obsolete. Elon Musk muses that we'll need to "find meaning" once robots do everything. The doomers counter with extinction scenarios. Both camps share an assumption I find dangerously naive: that the transition will be legible—a clear before and after.

I believe the harder problem isn't AGI arriving. It's the slow, ambiguous erosion of what makes work meaningful over the next two decades, and our complete unpreparedness for it.

Having spent years in AI research—publishing on mechanistic interpretability, building evaluation frameworks, watching models go from party tricks to genuine reasoning—I've developed a different vantage point. The discourse is pathologically myopic. Every few months, a new model drops, benchmarks get saturated, and Twitter declares we're months from AGI. Then the hype fades, limitations surface, and the cycle resets. This eternal recurrence obscures what's actually happening: not a sudden singularity, but a gradual hollowing out.

Consider what's already underway. Junior coding tasks that once took days now take minutes. The literature reviews that built research intuition are becoming button clicks. Entry-level legal work, financial analysis, customer support—each is being compressed: not eliminated, but stripped of the parts that made it formative. We're not replacing jobs; we're removing the rungs of the ladder.

The techno-optimists dismiss this with abstractions. "People will adapt." "New jobs will emerge." "We'll pursue art and leisure." But this reflects a profound misunderstanding of human psychology. Work isn't just economic exchange—it's identity, structure, community, and purpose. The retirement literature is instructive: people who lose work without replacing it with equally demanding pursuits often decline rapidly, not from poverty, but from purposelessness. Now imagine this at civilizational scale, affecting not retirees but twenty-five-year-olds who never got the chance to build mastery at anything.

What I believe, and what few others seem to take seriously, is that we need to be building the infrastructure for meaning now, before the displacement accelerates. This isn't about slowing AI development. It's about recognizing that the "what do humans do" question isn't a philosophical afterthought to be solved post-AGI—it's the central challenge of the next generation, and it requires as much ingenuity as alignment research or scaling laws.

The irony is that I've chosen to stay in research despite knowing my own work might be automated. I do it because the process of understanding—the struggle, the dead ends, the moments of clarity—is where meaning lives. That experience can't be handed to you by a model that already knows the answer. The question isn't whether AI will be capable. It's whether we'll still have the opportunity to become capable ourselves.