ABNB

Airbnb Price

Closed
ABNB
$127.76
+$1.75 (+1.38%)

*Data last updated: 2026-04-08 07:15 (UTC+8)

As of 2026-04-08 07:15, Airbnb (ABNB) is priced at $127.76, with a total market cap of $74.92B, a P/E ratio of 33.13, and a dividend yield of 0.00%. Today the stock fluctuated between $123.46 and $128.39; the current price is 3.48% above the day's low and 0.49% below the day's high, on a trading volume of 2.66M shares. Over the past 52 weeks, ABNB has traded between $110.44 and $143.87, and the current price is 11.19% below the 52-week high.
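Those range percentages follow directly from the quoted prices. A minimal sketch of the arithmetic (variable names are mine, not from any Gate data feed):

```python
# Distance of the current price from the day's low/high and the 52-week high,
# using the figures quoted above (illustrative only).
day_low, day_high, price = 123.46, 128.39, 127.76
week52_high = 143.87

pct_above_low = (price - day_low) / day_low * 100        # ~3.48%
pct_below_high = (day_high - price) / day_high * 100     # ~0.49%
pct_below_52w = (week52_high - price) / week52_high * 100  # ~11.2% (quoted as 11.19%)

print(f"{pct_above_low:.2f}% above the day's low")
print(f"{pct_below_high:.2f}% below the day's high")
print(f"{pct_below_52w:.2f}% below the 52-week high")
```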

ABNB Key Stats

Yesterday's Close: $126.81
Market Cap: $74.92B
Volume: 2.66M
P/E Ratio: 33.13
Dividend Yield (TTM): 0.00%
Diluted EPS (TTM): 4.09
Net Income (FY): $2.51B
Revenue (FY): $12.24B
Earnings Date: 2026-05-07
EPS Estimate: 0.30
Revenue Estimate: $2.61B
Shares Outstanding: 590.81M
Beta (1Y): 1.16

About ABNB

Airbnb, Inc., together with its subsidiaries, operates a platform that enables hosts to offer stays and experiences to guests worldwide. The company's marketplace model connects hosts and guests online or through mobile devices to book spaces and experiences. It primarily offers private rooms, primary homes, or vacation homes. The company was formerly known as AirBed & Breakfast, Inc. and changed its name to Airbnb, Inc. in November 2010. Airbnb, Inc. was founded in 2007 and is headquartered in San Francisco, California.
Sector: Consumer Cyclical
Industry: Travel Services
CEO: Brian Chesky
Headquarters: San Francisco, CA, US
Official Website: https://www.airbnb.com
Employees (FY): 8.20K
Revenue per Employee (1Y): $1.49M
Net Income per Employee: $306.21K
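The two per-employee figures line up with the FY totals above if they are simply totals divided by headcount, which appears to be how they are computed. A quick check:

```python
# Per-employee metrics derived from the FY totals quoted above.
revenue_fy = 12.24e9      # $12.24B
net_income_fy = 2.51e9    # $2.51B
employees = 8_200         # 8.20K

print(f"Revenue per employee:    ${revenue_fy / employees:,.0f}")    # ~$1.49M
print(f"Net income per employee: ${net_income_fy / employees:,.0f}")  # ~$306K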

Airbnb (ABNB) FAQ

What's the stock price of Airbnb (ABNB) today?

Airbnb (ABNB) is currently trading at $127.76, with a 24h change of +1.38%. The 52-week trading range is $110.44–$143.87.

What are the 52-week high and low prices for Airbnb (ABNB)?

Over the past 52 weeks, Airbnb (ABNB) has traded between a low of $110.44 and a high of $143.87. At $127.76, the current price is about 11.19% below the 52-week high and 15.68% above the 52-week low.

What is the price-to-earnings (P/E) ratio of Airbnb (ABNB)? What does it indicate?

Airbnb (ABNB) has a P/E ratio of 33.13, meaning the market is paying about $33.13 for each $1 of the company's annual earnings. A high P/E can reflect expectations of strong earnings growth, while a low one can suggest the stock is cheap relative to current profits (or that growth expectations are modest); it is most useful when compared against industry peers and the company's own history.

What is the market cap of Airbnb (ABNB)?

Airbnb (ABNB) has a market cap of $74.92B. Market cap is the share price multiplied by shares outstanding; with 590.81M shares, the quoted figure corresponds to the previous close of $126.81.

What is the most recent quarterly earnings per share (EPS) for Airbnb (ABNB)?

Airbnb's diluted EPS over the trailing twelve months (TTM) is 4.09. For the next earnings report, scheduled for 2026-05-07, the consensus EPS estimate is 0.30; the most recent single-quarter figure is not shown on this page.

Should you buy or sell Airbnb (ABNB) now?

This page does not provide buy or sell recommendations. Stock prices are volatile and past performance is not a reliable indicator of future results; weigh your investment experience, financial situation, objectives, and risk tolerance, do your own research, and consult an independent financial adviser where appropriate (see the Risk Warning below).

What factors can affect the stock price of Airbnb (ABNB)?

ABNB's price responds to company-specific news such as quarterly earnings and guidance, booking and travel-demand trends, competition, and regulation of short-term rentals in major markets, as well as broader forces such as interest rates, consumer spending, and overall market sentiment.

How to buy Airbnb (ABNB) stock?

ABNB is listed on Nasdaq, so it can be bought through any brokerage that offers U.S.-listed equities: open and fund an account, search for the ticker ABNB, and place an order. Note that service availability varies by jurisdiction (see the Disclaimer below).

Risk Warning

The stock market involves a high level of risk and price volatility. The value of your investment may increase or decrease, and you may not recover the full amount invested. Past performance is not a reliable indicator of future results. Before making any investment decisions, you should carefully assess your investment experience, financial situation, investment objectives, and risk tolerance, and conduct your own research. Where appropriate, consult an independent financial adviser.

Disclaimer

The content on this page is provided for informational purposes only and does not constitute investment advice, financial advice, or trading recommendations. Gate shall not be held liable for any loss or damage resulting from such financial decisions. Further, take note that Gate may not be able to provide full service in certain markets and jurisdictions, including but not limited to the United States of America, Canada, Iran, and Cuba. For more information on Restricted Locations, please refer to the User Agreement.

Hot Posts About Airbnb (ABNB)

CodeZeroBasis

4 hours ago
The AI benchmark race has a winner. It just isn't you.

Every few months, a new model drops and a new leaderboard reshuffles. Labs compete to out-reason, out-code, and out-answer each other on tests designed to measure machine intelligence. The coverage follows. So does the funding.

What gets less attention is whether any of this is inevitable. The benchmarks, the arms race, the framing of AI as either salvation or catastrophe — these are choices, not laws of physics. They reflect what the industry decided to optimize for, and what it decided to fund. Technology that will take decades to pan out in ordinary, useful ways doesn't raise billions this quarter. Extreme narratives do.

Some researchers think the goal is simply wrong. Not that AI isn't important, but that important doesn't have to mean unprecedented. The printing press changed the world. So did electricity. Both did it gradually, through messy adoption, giving societies time to respond. If AI follows that pattern, the right questions aren't about superintelligence. They're about who benefits, who gets harmed, and whether the tools we're building actually work for the people using them. Plenty of researchers have been asking those questions from very different directions. Here are three of them.

**Useful, not general**

Ruchir Puri has been building AI at IBM $IBM since before most people had heard of machine learning. He watched Watson beat the world's best Jeopardy players in 2011. He's watched several cycles of hype crest and recede since. When the current wave arrived, he had a simple test for it: is it useful? Not impressive. Not general. Useful. "I don't really care about artificial general intelligence," he says. "I care about the useful part of it."

That framing puts him at odds with much of the industry's self-image. The labs racing toward AGI are optimizing for breadth, building systems that can do anything, answer anything, reason about anything. Puri thinks that's the wrong target, and he has a benchmark he'd like to see the industry actually try to reach. The human brain lives in 1,200 cubic centimeters, consumes 20 watts, the energy of a light bulb, and, as Puri points out, runs on sandwiches. A single Nvidia GPU consumes 1,200 watts, 60 times more than the entire brain, and you need thousands of them in a giant data center to do anything meaningful. If the brain is the benchmark, the industry isn't close to efficient. It's going in the wrong direction.

His alternative is what he calls hybrid architecture: small, medium, and large models working together, each assigned to the task it handles best. A large frontier model does the complex reasoning and planning. Smaller, purpose-built models handle execution. A task as simple as drafting an email doesn't need a system trained on half the internet. It needs something fast, cheap, and focused. Every nine months or so, Puri notes, the small model of the previous generation becomes roughly equivalent to what was considered large. Intelligence is getting cheaper. The question is whether anyone is building for that reality.

The approach has real-world backing. Airbnb $ABNB uses smaller models to resolve a significant portion of customer service issues faster than its human representatives can. Meta $META doesn't use its biggest models to deliver ads; it distills that knowledge into smaller ones built for that task alone.
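The routing idea Puri describes is easy to sketch. A minimal, hypothetical example, where the model names and the complexity heuristic are illustrative assumptions and not IBM's implementation:

```python
# Minimal sketch of a hybrid model router: cheap, purpose-built models handle
# routine tasks; a large frontier model handles complex reasoning.
# Model names and the complexity heuristic are illustrative assumptions.

SMALL_MODEL = "small-email-drafter"   # fast, cheap, narrow
LARGE_MODEL = "large-frontier-model"  # slow, expensive, general

def estimate_complexity(task: str) -> float:
    """Toy heuristic: longer, multi-step requests score higher."""
    steps = task.count(".") + task.count(" then ")
    return min(1.0, len(task) / 500 + 0.2 * steps)

def route(task: str) -> str:
    """Send routine work to the small model, hard work to the large one."""
    model = LARGE_MODEL if estimate_complexity(task) > 0.5 else SMALL_MODEL
    return f"[{model}] would handle: {task!r}"

print(route("Draft a short thank-you email to a customer."))
print(route("Plan a multi-region cloud migration, then produce a rollout "
            "schedule with risk analysis for each dependent service. "
            "Then summarize the tradeoffs for executives."))
```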
The pattern is consistent enough that researchers have started calling it a knowledge assembly line: data flows in, specialized models handle discrete steps, something useful comes out the other end. IBM has been building that assembly line longer than most. A hybrid agent combining models from several companies has shown a 45% productivity improvement across a large engineering workforce. Systems running on smaller, purpose-built models now help the engineers who keep 84% of the world's financial transactions processing get the right information at the right time.

These aren't flashy applications. They're also not failing. None of them requires a system that can write poetry or solve your kid's math homework. They require something narrower and, for that reason, more trustworthy. A model trained to do one thing well knows when a question falls outside its scope. It says so. That calibrated uncertainty, knowing what you don't know, is something the big frontier models still struggle with. "I want to build agents and systems for those processes," Puri says. "Not something that answers two million things."

**Tools, not agents**

Ben Shneiderman has a simple test for whether an AI system is well designed. Does the person using it feel like they did something, or does it feel like something was done for them? The distinction matters more than it sounds. Shneiderman, a computer scientist at the University of Maryland who helped lay the foundations for modern interface design, has spent decades arguing that the goal of technology should be to amplify human ability, not replace it. Good tools build what he calls user self-efficacy, the confidence that comes from knowing you can do something yourself. Bad ones quietly transfer that agency somewhere else.

He thinks most of the AI industry is building bad tools, and he thinks the agentic turn makes it worse. The pitch for AI agents is that they act on your behalf, handling tasks end to end without your involvement. To Shneiderman, that's not a feature. It's the problem. When something goes wrong, and it will, who is responsible? When something goes right, who learned anything?

The trap he's been fighting against for a long time has a name. Anthropomorphism, the impulse to make technology seem human, is what keeps winning, and what keeps failing. In the 1970s, banks experimented with ATMs that greeted customers with "How can I help you?" and gave themselves names like Tilly the Teller and Harvey the World Banker. They were replaced by machines that showed you three options: balance, cash, deposit. Utilization shot up. Citibank had 50% higher usage than its competitors. People didn't want a synthetic relationship. They wanted to get their money.

The same pattern has repeated across decades, through Microsoft Bob, the AI pin from Humane, and waves of humanoid robots. Each time, the anthropomorphic version fails and gets replaced by something more tool-like. Shneiderman calls it a zombie idea: it doesn't die, it just keeps coming back. What's different now is scale and sophistication. The current generation of AI is genuinely impressive, he acknowledges, startlingly so. But impressive and useful aren't the same thing, and systems designed to seem human, to say I, to simulate relationship, are optimizing for the wrong quality. The question he wants designers to ask is simpler: does this give people more power, or less? "There is no I in AI," he says. "Or at least, there shouldn't be."
**People, not benchmarks**

Karen Panetta has a simple answer for why AI development looks the way it does. Follow the money. Panetta, a professor of electrical and computer engineering at Tufts University and an IEEE fellow, studies AI ethics and has a clear view of where the technology should be going: assistive pets for Alzheimer's patients, adaptive learning tools for children with different cognitive styles, smart home monitoring for elderly people aging in place. The technology to do this well, she says, largely exists. The investment doesn't. "The humans don't care about benchmarks," she says. "They care about, does it work when I buy it, and is it going to really make my life easier?"

The problem is that the people who would benefit most from well-designed assistive AI are also the least compelling pitch to a venture capitalist. A system that transforms manufacturing processes, reduces workplace injuries, and cuts healthcare costs for a company's employees has an obvious return. A robotic companion that keeps an Alzheimer's patient calm and connected requires a different kind of math entirely. So the money goes where the money goes, and the populations with the most to gain keep waiting.

What's changed, Panetta says, is that the expensive engineering problems are finally being solved at scale. Sensors are cheaper. Batteries are lighter. Wireless protocols are ubiquitous. The same investment that built industrial robots for factory floors has quietly made consumer robotics viable in a way it wasn't five years ago. The path from warehouse to living room is shorter than it looks.

But she has a concern that the excitement around that transition tends to skip over. Physical robots have natural constraints. You know the force limits. You know the kinematics. You can anticipate, simulate, and design around how they'll fail. Generative AI doesn't come with those guarantees. It's non-deterministic. It hallucinates. Nobody has fully mapped what happens when you put it inside a system that is physically present in the home of someone with dementia, or a child who can't identify when something has gone wrong.

She's seen what happens when a sensor gets dirty and a robot loses its spatial awareness. She's thought about what it means to build something that learns intimate details about a person's life, their routines, their cognitive state, their moments of confusion, and then acts on that information autonomously. The fail-safes, she says, haven't kept up. "I'm not worried about the robot," she says. "I'm worried about the AI."
CodeZeroBasis

5 hours ago
American AI has started speaking in the booming baritone of national purpose. But it's doing a lot of flag-waving for an industry that keeps letting Chinese models into the building.

The U.S.' patriotic sales pitch is everywhere now — "global AI dominance," "national mission," "strategic race," "democratic" values, and all the usual chest-thumping language that the AI industry has started borrowing from Washington. But behind the red, white, and blue branding, developers and platforms keep making a different calculation: Chinese models are good, cheap, open, and increasingly hard to avoid. While the public face of AI in the U.S. still looks comfortably domestic, more Chinese technology keeps slipping into the guts of the machine — the coding tools, the cloud marketplaces, and the parts of the stack most people never see. The stars-and-stripes rhetoric is getting harder to square. Patriotic branding is easy. Patriotic procurement is where things can get ugly.

Washington has already been warned that this growing migration isn't some niche side plot for engineers with tabs open on Hugging Face. In mid-March, the U.S.-China Economic and Security Review Commission warned that Chinese open-weight models have become hard to wave away. The report said that China has gone "all in" on open-source AI, that widespread adoption is feeding faster iteration, and that the result is creating "alternative pathways to AI leadership." The open ecosystem, the report said, "enables China to innovate close to the frontier despite significant compute constraints" — and now "Chinese labs have narrowed performance gaps with top Western large language models." That's a lot of fancy bureaucrat language for a very simple problem: the U.S. keeps grandstanding about a national mission while China keeps shipping a product that travels well.

China's open approach has essentially created a feedback loop where adoption drives iteration and then more adoption — a "self-reinforcing competitive advantage," as the USCC said; some estimates now put Chinese open-source models inside around 80% of U.S. AI startups. Stanford HAI's DigiChina brief says that Chinese-made open-weight models are now "unavoidable" in the competitive AI landscape and are increasingly being adopted in the U.S. Washington is selling sovereignty. The market is buying whatever works.

**Chinese models are already getting into the stack**

The easiest way to miss what's happening is to stare at the consumer apps and congratulate yourself on spotting the obvious. On that surface, the U.S. still gets to feel nice and sovereign. SSRS said this month that 52% of Americans use AI platforms weekly, with ChatGPT at 36%, Gemini at 26%, and Copilot at 14%. Similarweb's U.S. rankings still lean heavily American, too, putting ChatGPT, Gemini, Claude, Grok, and OpenAI in the top five. The storefront looks domestic enough to keep the branding neat and the nerves calm.

The more consequential shift is happening backstage, where engineers pick base models, companies choose tooling, and procurement decisions turn into architecture before anybody bothers to call them strategy. According to Hugging Face, China has surpassed the U.S. in both monthly and overall downloads on its platform, with Chinese models accounting for 41% of downloads over the past year.
Stanford HAI's DigiChina brief says that between August 2024 and August 2025, Chinese open-model developers made up 17.1% of all Hugging Face downloads, slightly ahead of U.S. developers at 15.8%. Last week, seven of the 10 most popular models on OpenRouter were Chinese. OpenRouter's 100 trillion-token study found that Chinese open-source models rose from a negligible base in late 2024 to nearly 30% of total usage in some weeks, averaging about 13% of weekly token volume over the year it studied. DeepSeek was the single largest open-source contributor by volume on the platform, with Qwen ranked second. The work itself is changing, too. OpenRouter says Chinese open models are no longer mainly for roleplay and hobbyist messing around; programming and technology together now make up a combined 39% of Chinese open-source use on the platform.

Cursor, one of the hottest American AI companies around, admitted this month that its Composer 2 coding model was, in a licensed partnership, built on top of Moonshot AI's Kimi K2.5 before layering on its own training. Moonshot, one of China's most promising AI startups, is based in Beijing — and valued at around $18 billion, more than quadrupling its value in three months. "Seeing our model integrated effectively through Cursor's continued pretraining & high-compute RL training is the open model ecosystem we love to support," Moonshot wrote on X. Cursor executives said that Kimi performed best in the company's evaluations, and Business Insider reported that the resulting product came in at about one-tenth the cost of Anthropic's Opus 4.6.

Companies ranging from Airbnb $ABNB to Siemens have openly used Chinese models. So AI startup darlings and established companies alike are increasingly passing over expensive proprietary U.S. models in favor of lower-cost Chinese ones that have closed much of the performance gap. The market has started treating model nationality as secondary — and largely irrelevant — to whether the thing works well, ships fast, and costs less.

**"Open" has become a geopolitical business model**

The White House itself has said that open-source and open-weight systems matter because startups need flexibility and because companies with sensitive data can't always ship to a closed-model vendor. That's true. That's also exactly why Chinese open models have become such a headache for the American AI nationalism story. The U.S. government's recognition arrives after years where American AI prestige became bound up with closed APIs, elite model subscriptions, and the idea that the best systems should be tightly controlled by a handful of companies. That approach may still win at the very frontier, but it's less obviously suited to winning the layer underneath, where developers pick and choose what they can actually afford to use.

Beijing has increasingly framed open-weight AI as part of a broader diplomatic and commercial pitch — a model of shared technological development contrasted against U.S. export controls, supply-chain restrictions, and closed systems. Open models as a soft-power product. They tell countries that Chinese AI is modifiable and not locked behind an American API tollbooth. Stanford researchers have warned that broad adoption of Chinese open-weight models could reshape global "reliance patterns," creating new technological dependencies even when the model weights themselves are downloadable.
Alibaba's Qwen family has built the largest model ecosystem on Hugging Face, with more than 113,000 derivative models, or more than 200,000 if you count everything tagged Qwen — surpassing Meta's Llama in cumulative downloads on the platform. RAND found in January that traffic to China-based LLMs had jumped 460% in two months and that Chinese models' global market share had risen from 3% to 13% over that stretch. RAND also said Chinese models — such as DeepSeek, Qwen, and Zhipu's ChatGLM — can run at about one-sixth to one-fourth the cost of U.S. rivals. That's a nasty combination for any American company trying to sell patriotic virtue at premium pricing.

The old story had America building the tools and the rest of the world renting access. The newer one has Chinese labs becoming the substrate for tools that may still wear American branding on the surface. More than a dozen Chinese organizations are openly releasing powerful models. Hugging Face says the number of repositories from popular Chinese organizations exploded in 2025, with ByteDance and Tencent sharply increasing releases and firms that once leaned closed moving toward open releases.

China has been shipping a coherent theory of spread. The U.S. has been shipping a mixed economy of premium closed models, open-weight branding, and internal arguments about what "open" even means. The U.S.' open field is split among open-weight branding, genuinely open research, lightweight portable families, and agent-focused stacks — see: Meta's open-weight-but-restricted Llama, Ai2's genuinely open OLMo line, Google's lighter Gemma family, NVIDIA's agentic stack — which makes the ecosystem stronger in spots but less unified as a doctrine.

Even China's own market has started treating openness less as an ideology than as a go-to-market plan. In February, Baidu — long one of the loudest defenders of closed models — said it would make its next-generation Ernie model open-source, a major strategic reversal. DeepSeek had upended the sector, and Baidu's CEO said opening things up would help the technology spread faster. "Open" in this race increasingly means scalable distribution, faster adoption, and broader developer lock-in.

**U.S. cloud giants are normalizing Chinese models**

It would be one thing if Chinese open models were still living out on the internet as vaguely exotic artifacts for hobbyists. In that case, the patriotism problem would be manageable. But they aren't. The hyperscalers have brought them inside.

Amazon Bedrock says it supports more than 100 foundation models, including DeepSeek, Moonshot AI, MiniMax, and OpenAI. AWS has also rolled out specific DeepSeek and Qwen offerings, and its marketing around DeepSeek is enterprise-grade security, unified infrastructure, and customer data that "is not shared with model providers." Microsoft $MSFT is doing the same thing in a tidier corporate dialect. Azure Foundry's catalog includes DeepSeek and Moonshot's Kimi among the models sold directly by Azure, and Microsoft's own Foundry updates have touted Kimi's reasoning chops as part of the platform's expanding lineup. Foreign model in, respectable enterprise product out. The geopolitical edge gets sanded down by procurement convenience, unified billing, and the general corporate desire to pretend every uncomfortable choice is merely a feature.
A Chinese open model inside an American cloud, billed on an American invoice, wrapped in American enterprise controls, stops looking like a geopolitical event and starts looking like procurement.

Google Cloud's Vertex AI has gone down the same road. Its DeepSeek docs say the models are available as fully managed, serverless APIs, and Google explicitly recommends pairing DeepSeek R1 with Model Armor for production safety. Elsewhere in Vertex AI, Google lists open models with global endpoint support that include DeepSeek, Kimi, MiniMax, Qwen, and GLM right alongside OpenAI's gpt-oss models. Any geopolitical edge gets sanded down by the product design itself: same console, same endpoint logic, same managed-service vocabulary, same enterprise reassurances.

Nvidia $NVDA lists DeepSeek in its model catalog. Databricks has joined the party, too. This month, it put Qwen3-Embedding-0.6B into public preview for retrieval and agent workloads, pitching it as a state-of-the-art multilingual embedding model optimized for vector search and AI agents. That's how dependencies settle in. One team adopts it for search. Another team plugs it into agents. A few quarters later, the strategic problem has release notes and a renewal cycle.

There are two different China problems hiding in the AI story. One is the Chinese-hosted app problem. DeepSeek's privacy policy says it directly collects, processes, and stores personal data in the People's Republic of China. The other is the Chinese-origin model problem — weights and model families that get pulled into U.S. clouds, U.S. products, and U.S. workflows.

A "national" project starts looking a lot less national when its most useful parts keep showing up from somewhere else. American AI wants the pageantry of sovereignty and the convenience of a global shopping aisle. It wants Washington to treat it like a national champion and developers to treat every foreign model like a harmless bargain. But markets are funny that way. They keep buying what works.

Running an open model locally or on trusted infrastructure can mitigate some data and governance risks. That's why the hyperscalers matter here. They turn a politically fraught dependency into something that feels manageable and corporate. The result is that many enterprise buyers can have Chinese model performance without the unnerving part of feeling as though they are leaving the American stack.

That leaves the U.S. in a strange position. It still has enormous advantages in chips, cloud infrastructure, capital markets, and top-end frontier labs. But the country's political language around AI keeps assuming that technical leadership will naturally translate into downstream loyalty. It won't. Not in open models — and not in software generally. Developers are promiscuous. Procurement teams are unsentimental. Cloud platforms are agnostic right up until the invoice clears. If Washington wants "American values" to matter in AI purchasing, it'll need more than speeches about bias and dominance. It'll need American models that are open enough, cheap enough, and ubiquitous enough that choosing them doesn't feel like a patriotic sacrifice. Right now, the market seems increasingly unwilling to pay that premium.
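The local-deployment option the post mentions is easy to sketch with the Hugging Face transformers library. A minimal example, where the model ID is just one example open-weight checkpoint (any comparable one works; this is not a recommendation of a specific model):

```python
# Minimal sketch: running an open-weight model on local infrastructure,
# so prompts and outputs never leave your own machine.
# Requires: pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # example open-weight checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Chat-style prompt via the model's own chat template.
messages = [{"role": "user", "content": "Summarize why open-weight models matter."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```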