10 Survival Rules for Ordinary People in the AI Era

You don’t need more time. You need to protect your best time, and use it to do what only you can do.

Attendees: about sixty people—founders, engineers, product managers, investors, recent graduates, and a few who call themselves “people who came to listen before they thought things through.”

Main speaker: Alan Walker, a Silicon Valley serial entrepreneur who has personally lived through three technology cycles; these days he drinks only black coffee.

Timing: April 2026, one week after the release of Project Glasswing.

Not a methodology. Not workplace tactics.

It’s about, in a species-level turning point, how to keep yourself alive—and then live well.

Opening · ALAN WALKER

“Someone messaged me before coming and asked, ‘Alan, AI is here. Do ordinary people still have a chance?’ I didn’t reply, because the question itself is wrong.

In 1440, before Gutenberg’s printing press appeared, what was the most valuable profession in Europe? Copyists. In the monasteries, a senior copyist held status equivalent to a senior engineer today. They controlled the production and circulation of knowledge. After the printing press appeared, some of them disappeared. Others became editors, publishers, authors, and teachers. They didn’t vanish; they migrated.

Every person in this room today is a descendant of that batch of copyists. Your ancestors weren’t wiped out by the printing press, which is why you can sit here and ask this question today. The people who can sit here and ask this question are already among the luckiest in human history. The problem isn’t “whether there’s a chance.” The problem is whether you’re willing to see clearly where the opportunity is.

Today, I’ll give you ten rules. No fluff. Every one of them I’ve thought through myself.”

LAW I · Your opponent isn’t AI—it’s the people who know how to use AI

What gets eliminated isn’t a job. It’s the people who believe, “This has nothing to do with me.”

Let me start with a counterintuitive fact: in any technological revolution, it isn’t jobs that get destroyed; it’s the people who refuse to learn who get left behind. This isn’t motivational talk; it’s recorded history. In 1900, more than twenty million horses did transportation work in the U.S. When cars arrived, horse trainers disappeared, but machinists, gas station attendants, highway engineers, auto insurance actuaries, and traffic police all came into existence. Net gain, not net loss.

In 1997, Deep Blue beat Kasparov at chess, and everyone assumed chess careers would die. In 2005, a freestyle “centaur chess” tournament produced a famous result: a pair of amateur players with ordinary PCs beat teams of grandmasters armed with powerful chess engines. The winner wasn’t the strongest human, and it wasn’t the strongest machine. It was the team that best knew how to coordinate with the machine. That conclusion applies to every industry in 2026. Don’t change a single word.

ALAN · On-site

Your competitor today isn’t Claude, GPT, or Gemini. It’s the person sitting next to you, already working with these tools while you’re still stuck wondering, “Is this thing legit?” The adoption curve of a new technology has never treated everyone equally. In the first five years after the printing press arrived, the group that mastered it early defined the knowledge-production landscape for the next two hundred years. Today’s window may be much shorter than five years.

It’s not AI replacing you. It’s the people who can use AI replacing you. Those two sentences sound the same, but they lead to completely different response strategies.

LAW II · AI can’t steal the pits you’ve fallen into

Large language models can learn how to walk through all the knowledge that has been written down. They can’t walk through the part that hasn’t been written down—and that part is what truly makes you valuable.

In 1966, the philosopher Michael Polanyi published a slim book of about a hundred pages, The Tacit Dimension (Polanyi 1966). Its core proposition is a single sentence: “We know more than we can tell.” He gave an example: you can recognize a face, but you can’t tell me how you recognized it. That ability lives in your nervous system. It can’t be turned into language, so it can’t be taught, and it can’t be copied.

The essence of a large language model is the extreme compression and retrieval of knowledge that humans have already expressed. It absorbs everything that’s been written down: textbooks, papers, code, conversations. But there’s a kind of knowledge it can’t reach: the judgment you built up through eighteen failed projects; the hunch you get after seeing a certain situation three times; your sense for human nature after you’ve muddled through in a certain industry. These things have never been written into any document. They exist in the form of neural circuits in your brain, only triggered by experience. They can’t be transmitted through language.

So the experiences you think are useless are actually your real moat in the AI era. Those twists and turns you took, the traps you stepped on, the calls you got wrong—these are forming a kind of scarce asset that AI can’t touch. The prerequisite is that you intentionally systematize them: write them down, explain them, and teach them to others.

ALAN · On-site

I know someone who’s been in the food and beverage business for eighteen years. He can’t use Excel, he can’t write code, and his Mandarin is a bit awkward. But in the thirty minutes before a new store opens, he can walk through it once and tell you which dish will have problems today, which employee is off today, and roughly what tonight’s table-turn rate will be. How does he know? He can’t explain it clearly. But that “can’t explain it clearly” is worth millions. AI can generate a complete F&B management handbook, but it doesn’t have the eighteen years of pits he fell into.

Systematize the pits you’ve fallen into. Turn your failure cases into language. This isn’t writing a memoir—it’s forging the most underestimated moat in the AI era.

LAW III · Depth is proof; cross-domain is your weapon

In any single field, AI can be “good enough.” What it can’t do is stack the underlying logic of two domains together, and see a third possibility.

Economics has a concept called “comparative advantage” (Ricardo 1817): you don’t need to be stronger than everyone at everything; you only need to be more efficient than others at some combination. Applied to today, the source of comparative advantage has shifted from single skills to cross-domain combinations. Your biology background plus your financial intuition plus your product thinking forms a perspective that AI can’t reproduce from any single set of training data.

Innovations that truly change the landscape almost never happen within a discipline. They happen at the boundaries. Mendel was a monk; he used statistics to study peas, laying the foundation for genetics. Shannon was a mathematician; he used the concept of entropy from thermodynamics to understand communication and created information theory. Jobs was a practitioner of Zen and an aesthetician; he welded humanities and engineering together, defining consumer technology. In an era where AI can rapidly cover any single domain, the ability to connect across domains is one of humanity’s last cognitive advantages.

› Find your deepest domain—this is the anchor. Without it, everything else is just floating weeds.

› Intentionally build enough knowledge across two or three adjacent or opposing fields. You don’t need mastery.

› Train “connection intuition”: can the underlying logic of one domain explain phenomena in another domain?

› Let AI help you retrieve; you do the connections—this is division of labor, not competition.

ALAN · On-site

The most formidable investor I’ve seen isn’t the one strongest in finance. It’s the one with solid finance, a real hands-on feel for technology, insight into human nature, and a memory for history. Those four dimensions combined are something AI can’t replicate today, because the core of insight is integration, and integration requires having been hit by different systems in the real world, not pattern-matching retrieved from training data. Your complicated experience is territory AI can’t colonize, at least not yet.

Only depth without width makes you a single well. With cross-domain, you become a net. AI is water; it will flow into all wells, but the net is something you build yourself.

LAW IV · In the AI era, attention is the only thing that’s truly scarce

AI pushes the cost of producing information toward zero. That means information itself approaches zero value, while its scarce complement, focused attention, becomes this era’s hardest currency.

Herbert Simon predicted today’s reality in a single line written in 1971, decades before the web existed (Simon 1971): “a wealth of information creates a poverty of attention.” He was applying the most basic economic logic: once something becomes extremely abundant, its own value drops, while the value of its scarce complement rises.

Today, AI produces more content every day than humanity produced in the previous several centuries. Your brain hasn’t upgraded; your total attention is fixed. Wherever you allocate attention, you’re casting a vote for which capabilities you cultivate. Someone who drifts through fragmented information three hours a day isn’t wasting time; he’s actively downgrading his cognitive system into a consumption terminal that can only receive, not produce; only react, not think.

Here’s a counterintuitive conclusion: in the AI era, deep reading ability is scarcer and more valuable than programming ability. AI can write code, retrieve information, and generate reports. It can’t replace you in truly understanding a book and integrating it into your own judgment system. A person who can stay focused for a long time, think independently, and make autonomous judgments is a collaborator in front of AI. A person who only consumes fragments is an AI consumption terminal. Terminals don’t need to think; terminals only need to receive.

ALAN · On-site

I have a test: find a book you consider important, sit down and read it for two hours without touching your phone. If you can’t do it, your attention has already been colonized. This isn’t a moral judgment; it’s an assessment of cognitive ability. In an era where AI levels the playing field for everyone’s productivity, people who can maintain deep focus are cognitive aristocrats—not because they’re smarter, but because they protect the things most people have already given up.

Protect your attention, and you protect your cognitive sovereignty. Give up your attention, and you voluntarily downgrade yourself into an AI consumption terminal—not an AI collaborator.

LAW V · Trust is the only thing AI can’t mass-produce

AI can generate your resume, mimic your writing style, and forge your voice. It can’t forge the trust you’ve accumulated by delivering, again and again, in real relationships.

What is trust at its core? From a game-theory perspective, trust is the result of repeated games (Axelrod 1984): across enough interactions, each side validates that the probability of the other “doing what they say” is high enough that both are willing to lower their defensive costs and move into a more efficient mode of cooperation. This process can’t be compressed, can’t be forged, and can’t be mass-produced, because at its core it is a record of fulfillment over time.

When AI can generate any content and simulate any style, real human interpersonal credibility will experience a paradoxical appreciation. The more AI proliferates, the scarcer—and the more valuable—“a real person, and actually reliable” becomes. Your reputation is the only anti-forgery label you have in the AI era.

One layer deeper: trust isn’t just “you do what you say.” Trust is “other people are willing to place their uncertainty on you.” When someone hands you something whose outcome they don’t know, it’s not because they’re sure you can pull it off—it’s because they believe you’ll give it your all, provide honest feedback, and won’t disappear. This kind of trust relationship is a private contract AI can’t enter. It’s offline, emotional, and built over history.

ALAN · On-site

I know someone with no top-school background and no big-name company experience. His English is awkward and not fluent. The only thing he has is this: over the past fifteen years, not a single promise he made has gone unfulfilled. Now, when he sends a message, fifty people reply to him first. In the AI era, this is called signal penetration: in a world where AI creates infinite noise, his signal is clear. None of those fifty people treat him that way because his resume looks good.

Every time you fulfill a commitment, you’re making the most valuable investment in the AI era. Every time you miss, you’re destroying assets that AI can’t help rebuild for you.

LAW VI · Answers depreciate. Good questions appreciate

AI can answer any question within three seconds. It doesn’t know which questions are worth asking. That “not knowing” is where you stand.

For three hundred years, the entire human education system has been training one thing: answering standard questions. Exams test answers, interviews test problem-solving, performance reviews test output. The system’s underlying assumption is that questions are fixed and answers are scarce. AI overturns that assumption completely: answers are no longer scarce, and good questions become scarce.

Einstein is said to have remarked that, given an hour to solve a life-or-death problem, he would spend fifty-five minutes defining the problem and five minutes finding the solution (attributed to Einstein). In 2026, the meaning of those words changes: the five minutes can be outsourced to AI. The fifty-five minutes is the part only you can do.

What is a good question? A good question has three characteristics: first, it helps you see something you couldn’t see before; second, it makes the other side of the conversation re-examine their assumptions; third, it opens up a new space of possibilities instead of narrowing the boundary of an already-existing answer. To cultivate this ability, you rely on lots of reading, lots of conversations, and switching back and forth between different systems—until you develop an instinctive distrust of what you once took for granted.

ALAN · On-site

In the AI era, the most competitive way to work looks like this: you start AI with a good question, AI generates ten answers, and then, with an even better question, you extract the eleventh: the direction AI itself didn’t think of. In this loop, you are the director and AI is the actor. If you only receive AI’s output, you’re the audience. Audiences don’t get paid; directors do. The world will always be short of good directors, and it will never be short of audiences.

Learning how to ask questions is more valuable than learning how to answer—because AI can answer everything, but it doesn’t know what to ask. That “not knowing” is your territory.

LAW VII · Find the place where it’s valuable “because there are people”

Not all efficiency is worth optimizing. There’s a kind of value that becomes increasingly expensive precisely because it’s inefficient and because it requires real people.

In 1899, Veblen described a strange class of goods (Veblen 1899): the higher the price, the greater the demand, because the high price is itself part of the value. Today, human participation is becoming a Veblen attribute of certain services: they’re valuable because real people are involved, and the scarcer that involvement becomes, the more valuable it is.

Think about it: how many times more valuable is a doctor’s judgment who truly understands your situation compared to an AI-generated diagnosis report? How irreplaceable is a friend who sits across from you when you’re at your hardest versus any AI companion app? How fundamentally different is a decision-maker who can make calls face-to-face and bear the consequences on the spot, compared to an AI-optimized recommendation memo? What these scenarios have in common is that the presence of a person is itself part of the value—and an inseparable part.

From the perspective of human evolution, this isn’t surprising. Humans are hyper-social animals; our nervous systems are built to respond to real human presence. Oxytocin, mirror neurons, the facial-expression recognition system: these mechanisms don’t fire for AI. When an AI tells you, “I understand your feelings,” your limbic system knows it’s fake, even if your rational brain is temporarily convinced. Human-to-human presence satisfies a biological need that nothing digital can replace.

ALAN · On-site

I predict an industry that will surge against the trend in the AI era: hospice and end-of-life care. Not because AI can’t provide information or companionship, but because nobody wants to face a screen at the last moment of their life. This is an extreme case of “human premium,” but it illustrates a general rule: find those areas that feel emptier the more automated they become—that’s your opportunity. The more efficient and colder a place is, the more valuable human warmth becomes.

Ask yourself: if AI did all of this, what would customers lose? The “thing they lose” is your permanent moat.

LAW VIII · Uncertainty isn’t your enemy—it’s your final advantage

Evolution never rewards the strongest. It rewards whatever survives change the longest. People who can keep acting in highly uncertain environments are the real winners of the AI era.

In Antifragile, Nassim Taleb proposed a framework that changed my worldview (Taleb 2012): the world contains three types of systems. Fragile systems break under stress; robust systems hold steady under stress; antifragile systems get stronger under stress. Nature, he argued, doesn’t reward robustness; it rewards antifragility. Muscles grow under load. The immune system strengthens through exposure. Economies advance through creative destruction.

Uncertainty in the AI era is structural—it won’t disappear. Every few months, new models emerge, new ability boundaries appear, and new industries get reshaped. This isn’t temporary chaos; it’s a new steady state. You can’t predict the next hand. What you can do is train yourself to be able to act, learn, and maintain a sense of direction even when you don’t know what the next hand will be.

There’s an even deeper truth: uncertainty is the last weapon ordinary people have against big institutions. In a world of certainty, large companies, big governments, and big capital have absolute advantages—they have resources, scale, and moats. But in fast-changing uncertain environments, their scale becomes a burden, their processes turn into shackles, and their history becomes baggage. Meanwhile, you—an individual who can make decisions within 72 hours and pivot completely within a week—have a kind of flexibility that big institutions can never replicate.

ALAN · On-site

More specifically: make small bets, iterate quickly, and don’t go all-in on any single judgment. Build a life structure that can absorb mistakes, not one that must be correct forever. Keep the cost of failure within what you can bear, and raise your learning speed to the highest level you can sustain. You can’t predict which industry AI will disrupt next. But you can train yourself so that on the day it does, you feel excitement, not panic. Big institutions fear uncertainty because they’re too heavy to move. You’re light, so you can pivot. This is your last structural advantage. Don’t waste it on anxiety.

Uncertainty is ordinary people’s only structural advantage against big institutions. Big institutions fear it; you should love it.

LAW IX · Keep producing—turn your cognition into a public asset

AI enables everyone to “produce content.” But content and opinions are two different things. People with unique viewpoints who express them consistently will generate exponential visibility amid AI noise.

Economics has a concept called “network effects,” summarized in Metcalfe’s law: the value of a network is proportional to the square of its number of nodes. Your public expression is your node in the human knowledge network; every article, every speech, every viewpoint increases your connections. And a node’s value comes from its uniqueness, not from sheer quantity.

Before AI drove the cost of producing content toward zero, the scarce thing was the capability to produce. Now the scarce thing is unique opinions worth trusting. Anyone can use AI to generate an “AI survival guide,” but not everyone can write an article that leaves readers thinking, “This person has truly lived in the real world.” The latter requires real experience, independent judgment, and sustained thinking, three things AI can’t do on your behalf.

A more fundamental logic: if you don’t output, you don’t exist. In the digital age, to exist is to be seen, and being seen is what lets value flow toward you. A person with many good ideas who never expresses them is, in the world’s information stream, indistinguishable from a person who knows nothing: both are invisible. Turning your cognition into a public asset is the most underestimated compounding behavior of the AI era.

ALAN · On-site

I know someone who works in factory management in a second-tier city. He doesn’t have a top-school background and doesn’t have a flashy resume. Three years ago, he started writing online about his real experience in factory operations—not a methodology, but raw failure cases and the conclusions he drew from them. Today he has 200,000 readers. Three factories proactively contact him for consultation. Publishers also want to work with him on a book. He didn’t get smarter. He just took what used to be inside his head and put it into the world. When the world sees it, value starts flowing toward him. If you don’t output, the world doesn’t know you exist.

Put the things in your head into the world. Not to perform, but to let the world know you exist—and to let value know where to find you.

LAW X · Manage your energy, not your time

Time management is industrial-era logic: factories need steady output, so you trade time for product. The AI era needs bursts of creative cognition, so you manage energy, not time.

The industrial era’s core assumption is that output is a linear function of time: work eight hours, produce eight hours of value. That logic works on assembly lines, because assembly-line work is linear, stackable, and doesn’t require peak states. Creative work isn’t linear. Two hours in peak condition can produce what twenty hours of fatigue can’t.

Cognitive science backs this up (Kahneman 2011): our higher-order cognitive functions, deep analysis, creative connection, complex judgment, depend on a highly active prefrontal cortex. That state is extremely energy-intensive and available only in a limited daily window. Most people spend that most expensive window handling email, scrolling social media, and sitting through low-quality meetings, then use the fatigued remainder for the work that requires deep thinking, and finally complain that they lack efficiency and creativity.

In the AI era, this mistake becomes even more deadly. Because AI can handle all tasks with low cognitive costs—information retrieval, formatting, data summarization, and standard writing. What it can’t replace is the judgment, insights, connections, and creativity produced in your peak cognitive state. If you give your peak time to low-value tasks, you’re using the most expensive resource to do the cheapest work, while leaving the work that needs you most to your worst state.

ALAN · Closing remarks for the full hall

Every morning, I have about three hours of peak state. In those three hours, I don’t check messages, I don’t hold meetings, and I don’t answer emails. I do one thing: think about the single most important question of the day. Everything else, including much of the actual work, I handle with AI or leave for the afternoon. This isn’t laziness; it’s rational allocation. How much are your three most expensive hours worth? It depends on what you use them for, and after AI, the answer becomes even more extreme: use them correctly and your peak output is ten times an ordinary person’s; use them wrong and your low point is no different from AI’s. Asimov wrote the Three Laws of Robotics to set boundaries for machines. Today I’m giving you these ten laws to help people reclaim their place. Your place is at your peak, not on the assembly line.

You don’t need more time. You need to protect your best time, and use it to do what only you can do.

“AI isn’t your ceiling—it’s your lever.

Your place is in your peak, not on the assembly line.”

I Your opponent is never AI—it’s the person who knows how to use AI

II AI can’t steal the pits you’ve fallen into

III Depth is proof; cross-domain is your weapon

IV Attention is the only truly scarce thing in the AI era

V Trust is the only thing AI can’t mass-produce

VI Answers depreciate. Good questions appreciate

VII Find the place where it’s valuable “because there are people”

VIII Uncertainty isn’t your enemy—it’s your final advantage

IX Keep producing—turn your cognition into a public asset

X Manage your energy, not your time

-Melly
