Search results for "DGX"
04:02

Jen-Hsun Huang: Personal AI computer has entered full production.

Jin10 Data reported on May 19 that NVIDIA CEO Jen-Hsun Huang said the company's personal AI computer, DGX Spark, has entered full production and will be ready within a few weeks. NVIDIA has also partnered with global computer manufacturers to launch the "AI-first" DGX personal computing system, which features NVIDIA's GB10 superchip and Tensor Cores. The company also launched DGX…
01:00

NVIDIA H200 orders will begin shipping in Q3; the B100 is expected in the first half of next year.

Jinshi Data reported on July 3 that the upstream chips for NVIDIA's H200 AI GPU entered mass production in late Q2, with volume deliveries expected after Q3. However, the launch of NVIDIA's Blackwell platform has been pulled forward by at least one to two quarters, dampening end customers' willingness to purchase the H200. Supply-chain sources note that pending customer orders are still mostly concentrated on the HGX-architecture H100, with the H200 accounting for a limited share. The H200 units to be mass-produced and delivered in Q3 will mainly go to NVIDIA DGX…
08:19
To help customers make more efficient use of their AI computing resources, NVIDIA has reached a definitive agreement to acquire Run:ai, a Kubernetes-based workload management and orchestration software provider, for an undisclosed amount. Nvidia plans to keep Run:ai's current business model in place for the time being, and Run:ai employees will join Nvidia's R&D center in Israel. NVIDIA DGX and DGX Cloud customers will gain access to the capabilities Run:ai provides for their AI workloads, especially large language model deployments. Today, Run:ai's solutions are integrated with products such as NVIDIA DGX, NVIDIA DGX SuperPOD, NVIDIA Base Command, NGC containers, and NVIDIA AI Enterprise software. Israeli media outlet Calcalist expects the acquisition to be worth about $700 million.
08:06

Nvidia's acquisition of GPU orchestration software provider Run:ai

On April 25, it was reported that NVIDIA has reached a definitive agreement to acquire Run:ai, a Kubernetes-based workload management and orchestration software provider; the acquisition amount has not been disclosed. Nvidia plans to retain Run:ai's current business model for the time being, with Run:ai employees joining Nvidia's R&D center in Israel. In addition, NVIDIA DGX and DGX Cloud customers will gain access to the capabilities Run:ai provides for their AI workloads, especially large language model deployments. Run:ai's solutions are already integrated with products such as NVIDIA DGX, NVIDIA DGX SuperPOD, NVIDIA Base Command, NGC containers, and NVIDIA AI Enterprise software.
07:58
Coin Jiejie reported on April 25 that Nvidia has reached a definitive agreement to acquire Run:ai, a provider of Kubernetes-based workload management and orchestration software; the acquisition amount has not been disclosed. Nvidia plans to retain Run:ai's current business model for the time being, with Run:ai employees joining Nvidia's R&D center in Israel. In addition, NVIDIA DGX and DGX Cloud customers will gain access to the capabilities Run:ai provides for their AI workloads, especially large language model deployments. Run:ai's solutions are already integrated with products such as NVIDIA DGX, NVIDIA DGX SuperPOD, NVIDIA Base Command, NGC containers, and NVIDIA AI Enterprise software.
07:25
PANews reported on April 25 that, according to Tech In Asia, American chipmaker Nvidia said in a statement that it will acquire Israeli artificial intelligence startup Run:ai. While Nvidia did not disclose the value of the acquisition, Israeli media outlet Calcalist expects the figure to be around $700 million. Run:ai develops software that helps businesses manage their computing needs more efficiently. It runs on Kubernetes, the open-source system originally designed by Google that automates software deployment, scaling, and management. In a statement, Nvidia cited "increasingly complex" customer AI deployments as the rationale behind the deal. With the acquisition, the chipmaker aims to make it easier for customers to access and manage GPU solutions, expecting better GPU utilization, improved GPU infrastructure management, and greater flexibility with open-architecture systems. Nvidia plans to keep Run:ai's current business model in place for the time being, with Run:ai employees joining the chipmaker's R&D center in Israel, which has about 3,000 employees. Run:ai's upcoming offerings will also be included in Nvidia's DGX Cloud, an AI platform that partners with top cloud providers to serve enterprise developers.
02:56
Jinshi Data reported on January 9 that, according to Nvidia, biotech company Amgen will build AI models to analyze one of the world's largest human datasets; the models will be trained on Nvidia's DGX SuperPOD, a full data center platform.
06:26
Webmaster's Home reported on December 11 that Dell Technologies announced it will help customers achieve faster AI and generative AI performance by introducing new enterprise data storage technology and validating it against Nvidia DGX SuperPOD AI infrastructure. To address the need for high performance and efficiency in AI storage, Dell's distributed file storage system PowerScale has introduced new advancements. With PowerScale OneFS software enhancements, organizations can prepare, train, optimize, and run inference on AI models faster. The new PowerScale all-flash storage system, built on the latest generation of Dell PowerEdge servers, will deliver up to 2x faster streaming read and write performance. PowerScale will also add new intelligent scaling capabilities that improve the performance of individual compute nodes and enhance GPU utilization, enabling faster storage throughput for AI training, checkpointing, and inference.
07:33
According to the "Science and Technology Innovation Board Daily", on November 21 local time Nvidia announced a partnership with Genentech, a subsidiary of Roche, on AI-platform research to accelerate drug discovery and development. The two companies will build AI models on NVIDIA DGX Cloud. Genentech also plans to use NVIDIA BioNeMo, a generative AI platform for drug discovery that lets biotechs customize models at scale, and to integrate the BioNeMo cloud application programming interface directly into its computational drug discovery workflows.
04:22
BABBITT, Nov. 7 (GLOBE NEWSWIRE) -- Nvidia announced that it is partnering with communications services provider Amdocs to optimize large language models (LLMs) to accelerate the adoption of generative AI in the $1.7 trillion telecommunications industry. The two companies will customize enterprise-grade LLMs running on NVIDIA accelerated computing as part of the Amdocs amAIz framework, enabling communications service providers to efficiently deploy generative AI use cases across businesses ranging from customer experience to network configuration. Amdocs will use NVIDIA DGX Cloud AI supercomputing and NVIDIA AI Enterprise software to support flexible adoption strategies and help ensure that service providers can use generative AI applications simply and securely. According to reports, Amdocs' customers include more than 350 of the world's leading telecommunications and media companies, including 27 of the world's top 30 service providers.
08:55
Webmaster's Home reported on October 20 that Nvidia has launched its most advanced DGX Cloud supercomputer on Oracle Cloud, providing powerful graphics processing units for workloads such as generative AI. This cloud-hosted AI supercomputing service gives customers everything they need to quickly train generative AI and other applications; it is based on Nvidia's DGX platform and offers access to multiple AI frameworks and pre-trained models. New York University has used Nvidia's DGX Cloud AI as the foundation for its AI Plus initiative, which drives education and research in areas including cybersecurity, weather forecasting, and health data analytics.
10:56
Jinshi reported on September 8 that Nvidia and Reliance Group have reached a partnership to advance artificial intelligence in India. The two parties will jointly develop foundational large language AI models, with Reliance gaining access to DGX Cloud.
01:34
According to the "Kechuang Board Daily" report on August 23, computing power leasing is a model in which computing resources are rented through cloud computing service providers. Song Jiaji, an analyst at Guosheng Securities, believes cloud computing power adopts a "divide the whole into parts" approach that empowers all parties in the industrial chain and is sustainable. On the demand side, as the computing power required by large AIGC models soars, the potential of the AI computing power rental market grows. On the supply side, computing power manufacturers have long cooperated with cloud platforms, and with the launch of NVIDIA DGX Cloud, AI cloud computing power has entered a new stage. The core resources in computing power leasing are funds, customers, and operations and maintenance capabilities, so three types of enterprises are best positioned: those with a first-mover advantage and orders in hand, those able to operate and maintain AI computing power, and those with strong funding.
09:15

DeepL deploys NVIDIA DGX SuperPOD to expand its LLM capabilities

Germany-based neural machine translation service DeepL announced that it has deployed NVIDIA's AI data center infrastructure platform DGX SuperPOD in one of its data centers in Sweden to expand its LLM capabilities. According to DeepL, this is "the first commercial deployment of this magnitude in Europe". Benchmarks show a performance of 21.85 PFlop/s, which ranks 26th globally and 8th in Europe. Consisting of 68 NVIDIA DGX H100 systems, the NVIDIA DGX SuperPOD will help DeepL train large language models faster and develop new AI communication tools for global markets.
15:16
According to a report by The Decoder on August 2, DeepL, a German-based neural machine translation service, announced that it has deployed Nvidia's AI data center infrastructure platform DGX SuperPOD in one of its data centers in Sweden to expand its LLM capabilities. According to DeepL, this is "the first commercial deployment of this magnitude in Europe". Benchmarks show a performance of 21.85 PFlop/s, which ranks 26th globally and 8th in Europe. Consisting of 68 NVIDIA DGX H100 systems, the NVIDIA DGX SuperPOD will help DeepL train large language models faster and develop new AI communication tools for global markets.
03:32
According to a report by the "Kechuang Board Daily" on July 3, Raymond James analyst Srini Pajjuri said the "AI-as-a-service" offering of Nvidia's DGX Cloud is a major long-term opportunity and is expected to become a large-scale business. Nvidia has AI models that companies can leverage and customize into large language models containing company-specific data. The analyst also noted that Nvidia will launch more AI-related products later this year. After speaking with Simona Jankowski, the company's head of investor relations and strategic finance, he believes the adoption of generative AI will continue to drive the company's strong growth in the coming quarters.
13:34
Odaily Planet Daily reported that on May 28, Eastern Time, NVIDIA founder and CEO Jensen Huang announced in his NVIDIA Computex 2023 keynote that the generative AI engine NVIDIA DGX GH200 is now in mass production. According to NVIDIA's website, the DGX GH200 is a new AI supercomputer that fully connects 256 NVIDIA Grace Hopper superchips into a single GPU and supports training of trillion-parameter AI models. It can handle large-scale recommendation systems, generative AI, and graph analytics, and provides linear scalability for giant AI models. "There is no need to store data across many modules. With DGX GH200 it is easier to train large language models and deep learning recommendation systems," Huang said. (China Securities Network)
05:31

Nvidia launches more artificial intelligence products to capitalize on the AI boom

Nvidia CEO Jensen Huang announced a batch of new AI-related products and services, hoping to further capitalize on this craze. The newly launched products range widely, including an AI supercomputer platform called DGX GH200, which will help tech companies create a "successor" to ChatGPT, Huang said. Microsoft, Meta Platforms and Google are expected to be the first users of the device. Nvidia will work with WPP Group to use AI and virtual worlds to reduce the cost of ad production. In addition, the company is preparing to release a network service designed to speed up the transmission of information in data centers. The company even intends to change the way people interact with video games: A service called Nvidia ACE for Games will use AI to animate background characters, giving them more personality. The flurry of products underscores Nvidia's transformation from a maker of computer graphics chips to a company at the center of an AI boom.