AI, why does it also need to sleep?

Smart people know when they should rest.

Written by: Tang Yitao

Edited by: Jingyu

Source: Geek Park

On March 31, 2026, Anthropic accidentally leaked 510k lines of source code for Claude Code into the public npm registry due to a packaging mistake. Within a few hours, the code was mirrored to GitHub and could never be taken back.

There was a lot of leaked content. Security researchers and competitors each took what they needed. But among all the unreleased features, one name drew widespread discussion—autoDream, meaning automated dreaming.

autoDream is part of a background resident system called KAIROS (Ancient Greek, meaning “the right moment”).

KAIROS continuously observes and records while the user works, maintaining daily logs (it has a bit of a "lobster" vibe). autoDream starts only after the user shuts down the computer: it organizes the memories accumulated during the day, resolves contradictions, and turns vague observations into definite facts.

Together they form a complete cycle: KAIROS when awake, autoDream when asleep. Anthropic's engineers had, in effect, built a sleep schedule for an AI.

Over the past two years, the hottest narrative in the AI industry has been Agents: autonomously running, never stopping. This has been treated as a core advantage of AI over humans.

But the company that pushes Agent capabilities deepest has, in its own code, set a time for AI to rest.

Why?

The cost of never stopping

An AI that never stops will hit a wall.

Every large language model has a “context window,” with a physical limit on how much information it can handle at any given moment. When an Agent runs continuously, project history, user preferences, and conversation records keep piling up. Once it passes a critical point, the model begins to forget early instructions, contradict itself, and fabricate facts.

The technical community calls this “context decay.”

Many Agent approaches are crude: cram all history into the context window and expect the model to figure out what’s important. The more information there is, the worse the performance becomes.
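The failure mode, and the simplest mitigation, can be sketched in a few lines of Python. This is a minimal illustration, not code from any real agent framework; the function names and the token budget are my own, and token counting is faked with word counts:

```python
# Sketch: naive "cram everything in" context vs. a fixed token budget.
CONTEXT_BUDGET = 50  # pretend the model can only attend to 50 "tokens"

def naive_context(history):
    """Concatenate everything; grows without bound."""
    return " ".join(history)

def budgeted_context(history, budget=CONTEXT_BUDGET):
    """Keep only the most recent turns that fit in the budget."""
    kept, used = [], 0
    for turn in reversed(history):
        cost = len(turn.split())  # crude stand-in for token counting
        if used + cost > budget:
            break
        kept.append(turn)
        used += cost
    return " ".join(reversed(kept))

history = [f"turn {i}: observation number {i}" for i in range(40)]
print(len(naive_context(history).split()))     # 200, and growing with every turn
print(len(budgeted_context(history).split()))  # 50, capped at the budget
```

Truncation like this keeps the window from overflowing, but it throws information away blindly, which is exactly why something smarter than a sliding window is needed.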

The human brain hits the same wall.

Everything you experience during the day is quickly written into the "hippocampus," a temporary store with limited capacity: think of it as a whiteboard. True long-term memory lives in the "neocortex," which has ample capacity but writes slowly.

The core job of human sleep is to wipe the crowded whiteboard, transferring the useful information onto the hard drive.

Björn Rasch, whose laboratory at the University of Zurich studies sleep and memory, calls this process "active systems consolidation."

Sleep-deprivation experiments show the same thing again and again: a brain that never stops does not become more efficient. Memory degrades first, then attention, and finally even basic judgment collapses.

Natural selection is extremely harsh on inefficient behavior, but sleep has not been eliminated. From fruit flies to whales, almost all animals with nervous systems sleep. Dolphins evolved “half-brain sleep,” where the left and right brain rest in turns—they’d rather invent an entirely new way of sleeping than give up sleep itself.

Scene of orcas, beluga whales, and bottlenose dolphins resting at the bottom of a pool|Image source: National Library of Medicine (United States)

The two systems face the same set of constraints: limited capacity for immediate processing, and historical experience that grows without bound.

Two exams

In biology there’s a concept called convergent evolution: species that are distantly related, yet independently evolve similar solutions to similar environmental pressures. The most classic example is eyes.

Octopuses and humans both have camera-like eyes. A lens focuses light onto the retina; a ring-shaped iris controls how much light enters. The overall structure is almost identical.

Comparison of the eye structures of octopuses and humans|Image source: OctoNation

But octopuses are mollusks and humans are vertebrates. Their last common ancestor lived more than 500 million years ago, when the Earth had no complex visual organs at all. Two completely independent evolutionary pathways reached almost the same endpoint, because to convert light efficiently into a sharp image, the designs physics allows come down to essentially one, the camera eye: a lens to focus, a light-sensitive surface to receive the image, and an aperture to regulate incoming light. None of the three can be missing.

The relationship between autoDream and human sleep is likely of this kind: under similar constraints, the two types of systems may converge on similar structures.

The most striking point of similarity is that both must go offline.

autoDream cannot run while the user is working. It launches as a forked subprocess, fully isolated from the main thread, with tightly restricted tool permissions.

Humans face the same problem, but the solution is more thorough: memory is moved from the hippocampus (temporary storage) to the neocortex (long-term storage), and it requires a set of brain electrical rhythms that appear only during sleep.

Most crucial is the hippocampal sharp-wave ripple, which packages the memory fragments encoded during the day and ships them, one by one, to the cerebral cortex. Slow oscillations in the cortex and sleep spindles from the thalamus provide precise timing for the whole transfer.

These rhythms cannot form in a waking state; external stimuli would disrupt them. So you do not sleep simply because you are sleepy. Your brain has to close the front door before it can open the back door.

Put differently: within the same time window, taking in information and organizing its structure compete for the same resources; they are not complementary.

Active systems consolidation model during sleep. A (data transfer): During deep sleep (slow-wave sleep), memories newly written into the “hippocampus” (temporary storage) are replayed repeatedly, gradually being transferred and consolidated into the “neocortex” (long-term storage). B (transfer protocol): This data transfer process depends on highly synchronized “dialog” between the two regions. The cerebral cortex emits slow electrical waves (red line) as the main timekeeping signal. Driven by the wave peaks, the hippocampus packages memory fragments into high-frequency signals (the sharp-wave ripples at the green line), and perfectly coordinates them with the carrier waves emitted by the thalamus (spindles at the blue line). It’s like precisely inserting high-frequency memory data into the gaps of the transmission channel, ensuring that information is synchronously uploaded to the cerebral cortex.|Image source: National Library of Medicine (United States)

The other shared strategy is not to store everything verbatim, but to edit.

After autoDream starts, it does not keep every log. It first reads existing memories to establish what is already known, then scans KAIROS's daily logs, focusing on what deviates from prior beliefs: observations that contradict yesterday's record, or that turn out to be more complex than previously thought, are recorded first.
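The "edit, don't archive" pass can be sketched as a diff against prior beliefs. A minimal sketch with invented field names, not the leaked code:

```python
# Sketch: keep only observations that deviate from what is already believed.
known = {
    "build_tool": "make",
    "test_runner": "pytest",
}

daily_log = [
    ("build_tool", "make"),    # matches prior belief: discard
    ("test_runner", "tox"),    # contradicts yesterday: record first
    ("deploy_target", "k8s"),  # never seen before: record
]

def edit_memories(known, daily_log):
    updates = []
    for key, value in daily_log:
        if key not in known:
            updates.append((key, value, "new"))
        elif known[key] != value:
            updates.append((key, value, "contradiction"))
        # observations that match existing memory are dropped entirely
    return updates

print(edit_memories(known, daily_log))
# [('test_runner', 'tox', 'contradiction'), ('deploy_target', 'k8s', 'new')]
```

Only surprises survive the pass; everything the system already knew is thrown away rather than stored twice.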

After organization, the memories are stored in a three-layer index: a lightweight pointer layer is always loaded; topic files are loaded on demand; the complete history is never loaded directly. And facts that can be found directly in the project code (for example, which file defines a certain function) are not written into memory at all.
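The three-layer layout described above might look something like the following. All class and method names here are my own illustration of the idea, not the actual leaked structure:

```python
# Sketch: a three-layer memory index.
class MemoryIndex:
    def __init__(self):
        self.pointers = {}      # layer 1, always loaded: topic -> one-line summary
        self._topics = {}       # layer 2, loaded on demand: topic -> details
        self._raw_history = []  # layer 3, append-only; never read back wholesale

    def remember(self, topic, summary, details):
        self.pointers[topic] = summary
        self._topics[topic] = details
        self._raw_history.append((topic, details))

    def context_header(self):
        """What every session sees: pointers only, cheap to carry around."""
        return [f"{t}: {s}" for t, s in self.pointers.items()]

    def expand(self, topic):
        """Pull in a topic file only when it becomes relevant."""
        return self._topics.get(topic)

idx = MemoryIndex()
idx.remember("auth", "uses JWT, rotated weekly", "full notes on the auth flow...")
print(idx.context_header())  # ['auth: uses JWT, rotated weekly']
print(idx.expand("auth"))    # 'full notes on the auth flow...'
```

The point of the layering is that the always-loaded part stays tiny no matter how much history accumulates underneath it.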

What the human brain does during sleep is almost the same thing.

A study by Erin J. Wamsley, a lecturer at Harvard Medical School, shows that sleep preferentially consolidates unusual information: things that surprised you, things tied to emotional swings, and things related to problems still unsolved. Meanwhile, masses of repetitive, featureless daily detail are discarded, leaving only abstract rules. You may not remember exactly what you saw on yesterday's commute, but you remember the route perfectly.

Interestingly, there is one place where the two systems choose differently. The memories autoDream produces are explicitly labeled in the code as "hint" rather than "truth." Every time an agent uses one, it must re-verify that it still holds, because the system knows its own summaries may be wrong.
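A memory that carries its own doubt might be modeled like this. A hypothetical sketch: the `Hint` type, the `verify` callback, and the toy `codebase` dictionary are all stand-ins I invented for illustration:

```python
# Sketch: memories stored as hints that must be re-checked before use.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Hint:
    claim: str
    verify: Callable[[], bool]  # a hint is never truth; it carries its own check

def use_memory(hint):
    if hint.verify():
        return f"acting on: {hint.claim}"
    return f"stale, re-deriving: {hint.claim}"

codebase = {"parse_config": "src/config.py"}  # stand-in for the live repo

hint = Hint(
    claim="parse_config lives in src/config.py",
    verify=lambda: codebase.get("parse_config") == "src/config.py",
)
print(use_memory(hint))  # acting on: parse_config lives in src/config.py

codebase["parse_config"] = "src/settings.py"  # the world moved on
print(use_memory(hint))  # stale, re-deriving: parse_config lives in src/config.py
```

The cheap verification step is what lets the agent trust its memory by default without ever being trapped by it.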

Humans have no such mechanism. That is why eyewitnesses in court so often get it wrong. They are not lying on purpose; memory is reassembled on the fly from scattered fragments in the brain, and error is the norm.

Evolution probably had no need to install an uncertainty label in the human brain. In a primitive environment that demands fast reactions, trusting memory means acting immediately; doubting memory means hesitating, and hesitation can be fatal.

But for an AI that repeatedly makes knowledge-based decisions, the cost of verification is low, and blind confidence is actually dangerous.

Two scenarios, two different sets of answers.

Smarter laziness

In evolutionary biology, convergent evolution means two independent routes, with no direct information exchange, reaching the same endpoint. There’s no copying in nature, but engineers can read papers.

When Anthropic designed this sleep mechanism, did they do it because they hit a physical wall like the human brain, or because they referenced neuroscience from the start?

There are no neuroscience references in the leaked code, and even the name autoDream sounds more like a programmer's joke. The stronger driver is more likely the engineering constraints themselves: context has a hard ceiling; long runs accumulate noise; and organizing memory online pollutes the main thread's reasoning. They were solving an engineering problem; bio-inspiration was never the goal.

What truly shapes the answer is the compressive force of the constraints themselves.

Over the past two years, the AI industry's definition of "stronger intelligence" has almost always pointed in one direction: bigger models, longer context, faster reasoning, 24/7 uninterrupted operation. The direction is always "more."

The existence of autoDream suggests a different proposition: a smart agent might actually be lazier.

An intelligent agent that never stops to organize itself won’t become smarter and smarter—it will only become more and more chaotic.

In hundreds of millions of years of evolution, the human brain reached a seemingly clumsy conclusion: intelligence must have a rhythm. Wakefulness is for perceiving the world; sleep is for understanding the world. When an AI company, while solving an engineering problem, independently arrives at the same conclusion, it may imply this:

Intelligence has some basic costs you can’t get around.

Perhaps an AI that never sleeps isn’t “stronger.” It’s just an AI that hasn’t yet realized it needs to sleep.
