AI Efficiency: the New Measure of Intelligence
Mini summary: a clear strategic takeaway emerged from HUMANX in San Francisco: in AI, the limit is not only the quality of the models but also the available compute. For this reason, energy efficiency, hardware-software co-design, inference, and proprietary data are becoming decisive factors for businesses and infrastructure.
In the debate on artificial intelligence, AI efficiency is becoming a central criterion. At HUMANX, the key takeaway is concrete: compute is constrained by physical, economic, and energy factors. As a result, getting more results with fewer resources becomes the main lever for continuing to scale.
The thesis is unequivocal: if available compute is constrained, then “efficiency = intelligence”. In other words, efficiency is not only an optimization topic—it is a direct multiplier of AI’s potential.
This perspective matters for companies, developers, and investors. In fact, it connects the evolution of models to infrastructure, energy costs, system design, and the economic sustainability of deployment.
The four drivers that make AI grow
According to the analysis presented at HUMANX, AI’s evolution is driven by four main factors: training, post-training, deployment, and agents.
Training builds the model’s core capabilities. Post-training refines its behavior and improves practical usefulness. Deployment turns the model into an accessible, scalable system. Finally, agents represent another leap: not only do they generate outputs, they also execute tasks, orchestrate tools, and operate in more autonomous flows.
However, all four of these levels require computational resources. When compute becomes scarce or costly, every advancement depends on the ability to use the available infrastructure more effectively.
AI Efficiency and Compute: the real bottleneck
One of the most incisive formulations to emerge from the talk is “compute = intelligence.” This framing helps explain the sector’s current phase: the quality of AI does not depend solely on model architecture, but also on the amount of computation you can mobilize in a sustainable way.
Compute, however, is not infinite. It is limited by costs, hardware availability, design timelines, physical constraints, and above all energy consumption. Therefore, the competitive advantage does not only go to those with more resources, but also to those who design more efficient systems.
In practice, it’s not enough to chase bigger models. You have to understand where to allocate compute, what to accelerate, which workloads to optimize, and which trade-offs to accept between quality, latency, and cost.
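As a rough illustration of that allocation problem, here is a minimal Python sketch, with entirely hypothetical configuration names, quality scores, latencies, and costs, that picks the deployment option offering the most quality per dollar while staying under a latency ceiling:

```python
# Hypothetical deployment options: quality score, latency (ms), cost per 1K requests ($).
CONFIGS = {
    "large_model_gpu":  {"quality": 0.92, "latency_ms": 450, "cost": 4.00},
    "medium_model_gpu": {"quality": 0.86, "latency_ms": 180, "cost": 1.20},
    "small_model_cpu":  {"quality": 0.74, "latency_ms": 90,  "cost": 0.15},
}

LATENCY_BUDGET_MS = 300   # hard constraint from the product side (assumed)

def best_config(configs: dict, latency_budget_ms: float) -> str:
    """Pick the option with the best quality-per-dollar among those that meet the latency budget."""
    feasible = {name: c for name, c in configs.items()
                if c["latency_ms"] <= latency_budget_ms}
    if not feasible:
        raise ValueError("no configuration meets the latency budget")
    return max(feasible, key=lambda name: feasible[name]["quality"] / feasible[name]["cost"])

print(best_config(CONFIGS, LATENCY_BUDGET_MS))   # -> "small_model_cpu" with these toy numbers
```

The point of the sketch is not the specific numbers, which are invented, but the shape of the decision: compute allocation is a constrained optimization, not a race to the largest model.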
AI Efficiency and Energy: why the constraint is structural
Among all the limits, energy is identified as the most important. The proposed definition is very concrete: a computer is, essentially, a device that turns energy into computation.
This observation shifts the conversation from software to infrastructure. Every increase in AI capacity requires electrical power, cooling, chip efficiency, thermal management, and the economic sustainability of data centers.
If energy is the fundamental constraint, improving energy efficiency is equivalent to increasing effective computational capacity. Consequently, competition in AI will not only be about model benchmark scores, but also about watts consumed per unit of useful work, inference cost, computational density, and the ability to maintain economic margins in production.
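A back-of-the-envelope calculation, using hypothetical figures, makes the point concrete: under a fixed power budget, better energy efficiency translates directly into more effective capacity.

```python
# Fixed power budget for an inference cluster, in watts (hypothetical: 1 MW).
POWER_BUDGET_W = 1_000_000

# Hypothetical energy cost per generated token, in joules, for two systems.
JOULES_PER_TOKEN = {"baseline": 2.0, "optimized": 0.5}

for name, j_per_tok in JOULES_PER_TOKEN.items():
    # 1 watt = 1 joule/second, so tokens/second = watts / (joules per token).
    tokens_per_second = POWER_BUDGET_W / j_per_tok
    print(f"{name}: {tokens_per_second:,.0f} tokens/s under the same 1 MW budget")

# A 4x improvement in joules per token means 4x the throughput without adding a single watt.
```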
AI Efficiency and Co-Design: hardware and software together
The proposed response to this constraint is co-design, meaning the co-design of the entire technology stack: transistors, hardware architectures, algorithms, compilers, frameworks, libraries, and datasets.
The message is clear: it is not enough to build faster computers; you need to understand what to accelerate. In a context where the software ecosystem changes rapidly, with cycles cited on the order of six months, designing hardware without an integrated view of the software risks producing inefficiencies, or systems poorly aligned with real workloads.
This point is crucial for investors as well. Infrastructure decisions have long time horizons, while AI software evolves within 6–12 month windows. That’s why co-design becomes a strategic discipline: it reduces the risk of building technical capabilities that are already partially obsolete by the time they reach the market.
The shift from training to inference changes priorities
Another key point concerns the sector’s changing focus. If the first phase of the AI race was dominated by training, today attention is shifting toward inference, deployment, and scalability in production.
This is an important paradigm shift. In training, the main goal is to maximize the model’s capabilities. In inference, by contrast, quality, latency, and cost matter together.
This is where many companies run into AI’s economic reality. Offering a useful service is not enough—you have to deliver it under sustainable conditions.
The talk also highlights a concrete risk: scaling too early, or without adequate optimization, can mean scaling toward failure. For businesses, the suggested sequence is more cautious: first verify product-market fit, then refine efficiency and unit economics, and finally extend operational scale.
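To make the “scaling toward failure” risk tangible, here is a minimal sketch of the kind of unit-economics check a team might run before expanding capacity. All figures (tokens per request, inference cost, revenue, margin threshold) are invented for illustration:

```python
# Hypothetical per-request economics for an AI-powered feature.
TOKENS_PER_REQUEST  = 1_500
COST_PER_1K_TOKENS  = 0.002    # $ of inference cost per 1K tokens (assumed)
REVENUE_PER_REQUEST = 0.01     # $ the product earns per served request (assumed)
MIN_GROSS_MARGIN    = 0.40     # minimum acceptable margin before expanding capacity (assumed)

cost_per_request = TOKENS_PER_REQUEST / 1_000 * COST_PER_1K_TOKENS
gross_margin = (REVENUE_PER_REQUEST - cost_per_request) / REVENUE_PER_REQUEST

print(f"cost/request = ${cost_per_request:.4f}, gross margin = {gross_margin:.0%}")
if gross_margin < MIN_GROSS_MARGIN:
    print("optimize inference (batching, quantization, smaller model) before scaling")
else:
    print("unit economics hold: scaling multiplies margin, not losses")
```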
More complex models and an open ecosystem
The technical trajectory does not suggest simplification. On the contrary, model complexity is increasing. Among the examples cited is Mixture of Experts, an architecture that splits the model into specialized expert sub-networks and activates only a few of them per input, so that capacity can grow without a proportional rise in compute per token.
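To give a feel for the idea, here is a toy Mixture-of-Experts forward pass in Python/NumPy: the router scores all experts, but only the top-k actually run, so per-token compute scales with k while total capacity scales with the number of experts. This is a didactic sketch with arbitrary sizes, not how any production MoE is implemented.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N_EXPERTS, TOP_K = 16, 8, 2                        # toy sizes, chosen arbitrarily

# Each "expert" is just a small linear layer in this sketch.
experts = [rng.standard_normal((D, D)) * 0.1 for _ in range(N_EXPERTS)]
gate_w = rng.standard_normal((D, N_EXPERTS)) * 0.1    # router (gating) weights

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route a single token `x` to its top-k experts and mix their outputs."""
    logits = x @ gate_w                                # one score per expert
    top = np.argsort(logits)[-TOP_K:]                  # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                           # softmax over the selected experts only
    # Only TOP_K of N_EXPERTS experts run: compute scales with k, capacity with N.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D)
print(moe_forward(token).shape)                        # -> (16,)
```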
In this context, open models play an important role. Nemotron is cited as an example of an open model that helps both to build internal understanding of the technology and to strengthen the community.
For companies, this approach can help better understand architectural trade-offs, deployment patterns, and ecosystem dynamics—without being fully dependent on closed systems.
However, it’s important to clarify a limitation of the picture that emerged: no quantitative benchmarks or detailed empirical data on performance, consumption, or comparative advantages were provided. For this reason, the value of the message remains primarily strategic and directional.
Proprietary data is the real competitive advantage
One of the most relevant points for the enterprise world concerns competitive advantage. The stated position is explicit: the real “moat” is not the model itself, but proprietary data, knowledge of users, and real observed behavior over time.
This message downplays the idea of the model as an exclusive asset. If models become increasingly accessible, replicable, or integrable, the differentiator shifts toward what a competitor cannot easily copy: proprietary datasets, operational context, internal workflows, user feedback, and the ability to translate this information into better products.
For businesses, then, investment priorities change: not only AI licenses and access to advanced models, but also data governance, source quality, integration with enterprise systems, and protection of internal knowledge.
The risk of a single technological bet
The talk also raises a theme of strategic risk. In theory, a company might want to spread its resources across many technological trajectories. In practice, limited resources, development timelines, and infrastructure constraints reduce the ability to place “10 bets” simultaneously.
This exposes a typical problem of phases of technological transition: choosing a direction is necessary, but it can be risky. Betting too heavily on a single architecture, a single vendor, or a single market hypothesis can leave the organization exposed if the sector changes rapidly.
That’s why modular approaches, flexible stacks, and strategies that preserve margins for adaptation become important. In an industry that moves quickly, architectural resilience matters almost as much as raw performance.
Millions of specialized models and hybrid local-cloud AI
One of the most interesting scenarios outlined is a future not dominated by a single universal model, but by millions of specialized models for companies, use cases, and vertical industries.
This perspective has strong industrial logic. Different applications require different trade-offs among accuracy, speed, cost, privacy, and domain knowledge. A generalist model may remain the starting point, but operational value shifts toward models adapted to real-world context.
At the same time, privacy and local AI push toward hybrid architectures, with part of the processing done on-device or on-premise and part in the cloud. For regulated or sensitive sectors, this combination can become more than just a technological option—it can become a requirement.
The implication is clear: the AI infrastructure of the future will need to be distributed, not monolithic.
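As a sketch of what such a distributed setup might look like in code, routing can be as simple as a rule that keeps sensitive or latency-critical requests on-device and sends the rest to the cloud. The policy, thresholds, and fields below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    contains_pii: bool        # e.g. personal or otherwise regulated data
    latency_budget_ms: int    # how long the caller can wait

# Hypothetical capability split: the local model is smaller but private and fast.
LOCAL_MAX_LATENCY_MS = 200

def route(req: Request) -> str:
    """Decide where a request runs in a hybrid local/cloud deployment (illustrative policy)."""
    if req.contains_pii:
        return "on_device"    # regulated data never leaves the device
    if req.latency_budget_ms <= LOCAL_MAX_LATENCY_MS:
        return "on_device"    # tight latency budget: avoid the network round trip
    return "cloud"            # heavier reasoning goes to the larger cloud model

print(route(Request("summarize my medical record", contains_pii=True, latency_budget_ms=800)))
print(route(Request("draft a market analysis", contains_pii=False, latency_budget_ms=3000)))
```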
Beyond language: the frontier of spatial intelligence
AI development will not stop at language. The next frontier indicated is spatial intelligence: systems capable not only of understanding text, but of perceiving space, reasoning about the physical world, and acting in real environments.
This step expands AI’s scope toward robotics, multimodal perception, navigation, physical interaction, and agents capable of connecting observation and action.
Here too, the infrastructure theme remains central. The closer the system is to the real world, the more critical latency, efficiency, reliability, and local execution capability become.
For now, the picture presented remains forward-looking and is not backed by concrete announcements or detailed experimental results. However, the strategic direction is clear: the next phase of AI will require less emphasis on language generation alone and more integration between perception, reasoning, and action.
What changes for enterprises, infrastructure, and strategy
The overall message that emerged at HUMANX is that AI is entering a more mature and more selective phase. The availability of powerful models does not remove real constraints: compute, energy, inference costs, stack complexity, and the speed of technological change.
For companies, this means the difference won’t be made just by adopting AI, but by the quality with which it is designed, deployed, and economically sustained.
As a result, co-design, energy efficiency, inference management, intelligent use of proprietary data, and architectural flexibility become decisive elements.
In summary
The analysis that emerged at HUMANX proposes a precise thesis: in AI, the limit is not only the model, but the available compute and the energy needed to use it.
That’s why AI efficiency becomes a strategic variable. It matters for infrastructure, for costs, for scalability, and for economic sustainability.
In this scenario, inference, co-design, proprietary data, and flexible architectures become the key factors of the next competitive phase of artificial intelligence.