NVIDIA AI Hardware: the co-design dilemma of software that changes every six months
Mini summary: NVIDIA says that designing hardware for artificial intelligence requires co-design across the entire stack. The talk at the Humax X conference in San Francisco highlighted three points: the co-evolution of chips and software, the risk inherent in choosing what to accelerate, and the role of Nemotron as an open project for reading AI trends.
In the opening talk of the Humax X conference in San Francisco, a question central to the industry emerged: how does NVIDIA design AI hardware in a software landscape that changes radically every six months?
For NVIDIA, this is not a theoretical question. As explained during the talk, it has been the core of the company's work for over 30 years. In AI, after all, models, frameworks, libraries, and deployment approaches evolve rapidly, which is why a narrow view focused only on the chip is not enough.
Instead, a strategy is needed that coordinates hardware and software across the entire technological stack. This is the main thesis that emerged from the speech.
NVIDIA AI hardware and co-design across the entire stack
NVIDIA's answer is co-design: the joint design of hardware and software. It does not concern just one layer of the infrastructure; it involves transistors, chips, computing architectures, compilers, libraries, software frameworks, datasets, AI algorithms, and networking.
In industrial terms, efficiency does not come only from the power of silicon. It also depends on the ability to align all the components that turn a model into a system that is truly executable, optimizable, and deployable at scale.
Consequently, the competitive advantage does not come only from building advanced hardware. It also comes from the ability to evolve it together with the software that will use it.
NVIDIA AI hardware: the strategic decision is choosing what to accelerate
One of the most relevant passages from the speech concerns the selection of priorities. Designing AI hardware does not only mean increasing performance in a generic sense. It means deciding which problems to accelerate, which technologies to prioritize, and which direction to consider more likely for the future evolution of artificial intelligence.
This choice carries a high risk. If the market and research move in a different direction from the one expected, investment in a particular architecture or specific optimizations can lose value very quickly.
According to what emerged in the talk, NVIDIA adopts a highly concentrated strategy. The company does not aim for broad diversification; instead, it concentrates resources in a specific direction. The formula reported in the speech is clear-cut: either the project succeeds, or it fails completely.
For industry professionals, this point is crucial. Hardware design for AI is no longer only an engineering matter. It is also an exercise in strategic allocation of capital, talent, and development time.
Why concentrating risk is not just a gamble
At first glance, a non-diversified strategy can seem overly exposed. However, NVIDIA argues that the co-evolution between software and hardware reduces part of this risk.
If developers, frameworks, and application systems progressively align with the hardware architectural choices, a mutually reinforcing effect is created. In other words, hardware influences software, and software reinforces the hardware’s relevance.
This mechanism is particularly important in AI, where compilers, libraries, and frameworks can decisively determine the actual adoption of a platform. That's why co-design is not only about improving performance, but also about building an ecosystem trajectory.
Nemotron: open models for understanding where AI is headed
Nemotron fits into this picture as a key project for understanding AI's evolution and guiding future hardware design. According to the speech, the idea is to develop open models in order to observe industry and research directions more effectively.
A relevant point is that Nemotron’s models are then made public. This aspect has a dual value. On the one hand, it expands the availability of open tools. On the other hand, it allows NVIDIA to maintain a more direct connection with emerging technical trends.
In practical terms, Nemotron is presented as a strategic sensor as well as a technology initiative. It’s not only a model project. It’s also a way to read in advance which workloads, architectures, and inference patterns could become central in the next AI cycle.
From models to complete systems for inference and deployment
Another significant passage concerns the shift in priorities in the AI industry. According to the talk, the focus is moving from creating models alone to building complete systems for inference and large-scale deployment.
This is an important transition. In the initial phase of the current AI boom, much of the debate centered on training capacity and the size of models. Today, however, economic value is increasingly tied to the ability to put those models into production, make them run reliably, control latency and costs, and integrate them into distributed infrastructures.
This shift has direct implications for hardware, networking, and system software. Inference at scale requires a different balance than training: energy efficiency, orchestration, library optimization, data-traffic management, and operational integration become decisive factors.
For engineers and companies, the message is clear: future competitive advantage will not depend only on the quality of the model, but on the quality of the system that makes it usable in production.
What this strategy implies for the tech sector
NVIDIA’s talk describes a vision of AI that is becoming less fragmented. Chips, software, open models, toolchains, and network infrastructure are treated as parts of a single industrial architecture.
For hardware manufacturers, this raises the bar for competitive complexity. It’s no longer enough to design excellent components. They need to be integrated into a coherent ecosystem. For software developers, instead, it means working increasingly close to the constraints and opportunities of the infrastructure layer.
For the AI community, finally, projects like Nemotron show how open model development can also serve a strategic function of technological guidance.
However, there is an informational limit. The speech did not provide quantitative data on performance, roadmaps, or the progress of the projects mentioned, nor did it include independent voices or external criticism. It should also be noted that the conference name appears inconsistently, both as Humax X and as HUMANX.
In summary
NVIDIA says that designing hardware for AI does not mean chasing software. It means co-evolving with it across the entire technological stack.
According to the speech, this strategy is based on three pillars: co-design, a concentrated choice of priorities, and the use of open projects such as Nemotron to anticipate trends.
The final message is unambiguous: in AI, value does not depend only on the chip or the model, but on the complete system that brings together hardware, software, and scaled deployment.