AI "Stealing Jobs" as Silicon Valley Sees Mass Layoffs! A Front-line Engineer Exposes the Truth: AI Efficiency Is Severely Overestimated, and Humans Forced to Become Reviewers Face Ten Times the Workload

In 2026, layoffs in Silicon Valley’s tech industry have continued to make headlines. In January, Amazon confirmed it would cut about 16,000 jobs; in February, fintech company Block laid off nearly half its staff; in March, Meta was reportedly planning to cut 16,000 jobs.

Anxiety over AI replacing white-collar workers is sweeping through the workplace.

Against this backdrop, software engineer Siddhant Khare of Ona published an article titled “AI Fatigue Is Real, Yet No One Talks About It,” sparking widespread discussion among media and readers worldwide.

He highlighted the huge gap between AI’s practical applications and its idealized vision. He believes that the efficiency gains brought by AI are overestimated, while workers are falling into “AI fatigue.”

Recently, Siddhant Khare told a reporter from Daily Economic News (hereafter NBD) in an exclusive interview that, as a developer of AI infrastructure, he recommends changing how people use AI to avoid getting trapped in cycles of generation, review, re-generation, and re-review.

Siddhant Khare. Photo source: the interviewee’s social media account

“With AI, people’s workload is now ten times what it used to be”

NBD: What do you think are the main causes of “AI fatigue”?

Siddhant Khare: “AI fatigue” is fundamentally a structural problem. AI has increased the efficiency of generating code, copywriting, documents, and other content by several times, but the review and validation stages haven’t kept pace. People remain the bottleneck in the entire workflow, handling ten times the workload they used to.

It’s like a factory that replaces its stamping machine with one ten times faster while keeping a single quality inspector at the end of the line. Output surges, but the inspector’s capacity is unchanged: the defect rate per item stays the same, the pile of work to inspect grows tenfold, and the person bearing all the review pressure eventually burns out.
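The factory analogy can be made concrete with a toy calculation (the numbers here are illustrative, not from the interview): once the producer runs faster than the reviewer, unreviewed work accumulates linearly and never drains.

```python
# Toy model of the reviewer bottleneck: a producer that got 10x faster
# feeding a single reviewer whose throughput is unchanged.
def backlog_after(hours, produce_per_hour, review_per_hour):
    """Items still waiting for review after `hours`, starting from an empty queue."""
    backlog = 0
    for _ in range(hours):
        backlog += produce_per_hour               # new items generated this hour
        backlog -= min(backlog, review_per_hour)  # reviewer drains what they can
    return backlog

# Before AI: production and review balanced at 5 items/hour -> no backlog.
print(backlog_after(8, 5, 5))    # 0
# After AI: production is 10x faster, review capacity unchanged.
print(backlog_after(8, 50, 5))   # 360 items queued after one 8-hour day
```

The point of the sketch is that the backlog grows by the *difference* of the two rates every hour, so no amount of reviewer diligence closes the gap; only raising review throughput (or throttling production) does.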

In knowledge work, AI has automated production but not automated review. Most managers are unaware of this issue. They only see surface data—more code delivered, more documents produced, more emails sent, and flashy reports—while ignoring the physical and mental exhaustion of employees.

NBD: People thought AI would boost efficiency, so why has the workload actually increased significantly?

Siddhant Khare: The productivity boost from AI hasn’t translated into more free time for employees. Instead, companies have used it to raise expectations and set a higher “qualification bar” for work.

Before AI, a software engineer might submit 20 pull requests (PRs) a week, which was normal. With AI assistance, their theoretical capacity might rise to 50, and companies set 50 as the new standard.

All content generated by AI still requires human review. As an open-source maintainer, I feel this deeply. I used to handle 20-25 code PRs weekly; now that number has skyrocketed to over a hundred, most of which are AI-generated, and I must review each one carefully.

“Using AI coding tools actually decreases work efficiency by 19%”

NBD: Which aspects of AI’s value are most likely to be overestimated? And which costs are underestimated?

Siddhant Khare: The most common overestimation is the speed of AI deployment and immediate efficiency gains. Many companies fall into the misconception that simply equipping employees with AI tools will lead to a productivity leap within weeks, but the actual data shows the opposite.

A comprehensive survey by the developer productivity platform DX, covering over 450 companies and 120,000 developers, found that even with 93% of developers using AI coding tools, actual productivity only increased by about 10%, and further breakthroughs are hard to achieve.

The results from METR, a model evaluation and risk research organization, are even more stark: developers using AI coding tools saw a 19% decrease in actual work efficiency, even though they subjectively felt their speed increased by 24%.

Most companies underestimate the cost of human review of AI-generated content. Few account for the time-consuming, labor-intensive review process in their overall work costs. They also overlook employee job identity and morale. When most work is done by AI, employees who once gained pride from their professional skills may start feeling like mere quality inspectors on an assembly line. This sense of role disparity is hard to quantify but can lead to talent attrition.

“Reviewing AI is more exhausting than doing it yourself”

NBD: Many white-collar workers now worry that using AI is actually training AI to replace them. Is this concern justified? Which roles are most vulnerable, and which are harder to replace?

Siddhant Khare: Most ordinary employees are not directly training large AI models. When using tools like ChatGPT or Copilot daily, the input they provide isn’t automatically used to train the next generation of models. Most enterprise user agreements explicitly prohibit such data use. The idea that “I’m training AI to replace myself” isn’t technically accurate.

The real impact of AI on the workplace isn’t mass job replacement but rather redefining roles, significantly increasing work intensity, and shifting core tasks. Jobs that produce standardized, repetitive, low-complexity output (first-draft copy, basic data entry, simple code generation, templated reports) are most vulnerable, as long as the output only needs to be “good enough.”

Roles that require holistic understanding, aesthetic judgment, and independent decision-making—like system architecture, product strategy, business negotiations, or creative content planning—are much harder to automate.

Most workers are in the middle ground. Their jobs won’t disappear overnight, but they will need to adapt.

NBD: How do you see the core value of employees changing?

Siddhant Khare: The shift is already happening, but most companies’ performance evaluation systems haven’t caught up yet.

In the future, the best engineers won’t be those who write the most code or produce the most output, but those who can quickly assess whether an AI solution fits into the overall system and whether the approach makes sense. This judgment depends on long-term industry experience and a broad system perspective, not just prompt optimization.

Employee value is shifting from quantity of output to quality of judgment; from speed of execution to depth of thinking. The most irreplaceable employees will be those who can accurately judge right from wrong and provide clear, rational reasoning—judgment is now the core value.

“Fatigue stems from AI’s inherent uncertainty”

NBD: Compared to previous waves of automation, why does AI cause more fatigue?

Siddhant Khare: The main reason is that previous automation tools were deterministic, while AI is full of uncertainty.

Earlier tools produced the same output for the same instructions and inputs, and errors surfaced immediately. AI, by contrast, can generate completely different content from the same prompt, and even its errors are often convincing and confusing. AI’s mistakes are highly covert: the code may run, the copy may read smoothly, the report may be well formatted, yet a particular page or line can hide factual inaccuracies, logical flaws, or fabricated data.

These silent errors demand constant vigilance, which is mentally taxing over time. And because AI closely imitates human styles of expression, reviewing its output requires cognitive effort comparable to original creation.

NBD: If AI outputs can’t be fully trusted but need to be scaled, how can we bridge this “trust gap”?

Siddhant Khare: Unfortunately, most companies rely on the worst approach—treating human review as the sole quality control point.

Good companies establish a system I call “backpressure.” Simply put, before AI-generated content reaches human review, an automated feedback mechanism intercepts most obvious errors, reducing the review burden.

“The most important work often doesn’t require AI”

NBD: How should ordinary white-collar workers properly interact with AI amid the workload and mental fatigue it brings?

Siddhant Khare: I recommend three strategies.

First, avoid using AI for tasks where “thinking itself is valuable.” For example, developing strategic plans relies on thinking, not typing. Skipping thinking by directly using AI diminishes your work’s value. AI is better suited for repetitive tasks where results matter more than the process.

Second, set clear boundaries for review time. If you spend more than two hours daily reviewing AI outputs, your workflow is problematic. It might be due to unclear prompts, insufficient context, lax rules, or lack of automated checks. Never treat “unlimited review of all AI outputs” as normal.

Third, protect your deep work time. AI traps people in cycles of generation, review, re-generation, and re-review, constantly breaking concentration. Deliberately carve out periods where you don’t use AI at all. The most important work often doesn’t depend on prompts but on independent thinking.

NBD: How should those already dependent on AI change their habits?

Siddhant Khare: The first step is to change how you use AI.

Many people instinctively open ChatGPT when facing problems, without first thinking independently.

You must reverse this order. First, think independently and clarify your goals, then decide whether to use AI. Often, a blank sheet of paper and twenty minutes of deep independent thinking yield better results than immediately prompting a model.

The core of AI anxiety is losing a sense of control. When AI keeps generating and suggesting, you feel like a passive executor. Once you regain the ability to decide when and how to use AI, your sense of control returns, anxiety diminishes, and you can truly escape AI fatigue.

This article is sourced from Daily Economic News.
