TradingKey - NVIDIA CEO Jen-Hsun Huang announced a series of new products at CES 2025. TradingKey analyst Frank said that NVIDIA's high moat, built on the CUDA ecosystem, has been further strengthened.
The International Consumer Electronics Show 2025 (CES 2025) is being held in Las Vegas from January 7 to 10 and is expected to attract more than 130,000 attendees. A day before the show, NVIDIA CEO Jen-Hsun Huang arrived to deliver a keynote speech.
“The industry is witnessing revolutionary changes at every level of technology, from manually coded software tools running on CPUs to advanced machine learning capable of creating and optimizing neural networks on GPUs,” said Jen-Hsun Huang.
NVIDIA was expected to make important product announcements at the show, including an AI desktop computer, the Thor automotive processor, AI agents, and the Llama Nemotron family of language foundation models.
“During the speech, Huang highlighted the development and progress of the NVIDIA NIM system and the Llama Nemotron language model,” TradingKey analyst Frank said.
According to Jen-Hsun Huang, NVIDIA NIM (NVIDIA Inference Microservices) can turn simple 3D objects into tools that guide AI image generation.
NVIDIA NIM provides developers with self-hosted, GPU-accelerated inference microservices, allowing organizations to run AI models in the cloud, in data centers, on workstations, and in other environments. It is an efficient and flexible solution with broad application prospects in autonomous driving, smart manufacturing, and medicine, Frank noted.
“The significance of the Llama Nemotron model lies mainly in its progressive advances in language-modeling technology, with Jen-Hsun Huang particularly citing improvements in accuracy and efficiency,” Frank said.
The Llama Nemotron language foundation models come in three tiers: Nano, Super, and Ultra.
“NVIDIA's high moat based on the CUDA ecosystem is further strengthened. The development of NIM and Llama Nemotron relies heavily on CUDA, and the platform also provides extensive resources for enterprise users and developers. Through CUDA's parallel computing power, NIM can run large-scale AI models efficiently on GPUs, significantly improving inference speed and efficiency. Combining Llama Nemotron with the CUDA stack makes both the training and inference of AI models more efficient and flexible. Llama Nemotron also provides a standardized API, which lets developers integrate it into existing AI applications and optimize it with CUDA's computing power. All of this deepens the view that NVIDIA is an integrator of a hardware and software application ecosystem,” Frank further stated.
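To illustrate the standardized API mentioned above: NIM microservices expose an OpenAI-compatible HTTP interface, so a developer can target a self-hosted endpoint the same way as any OpenAI-style service. The sketch below only builds the request payload; the endpoint URL and model name are illustrative assumptions, not official identifiers.

```python
import json

# Assumed self-hosted NIM endpoint (port and path are illustrative).
NIM_ENDPOINT = "http://localhost:8000/v1/chat/completions"
# Hypothetical model identifier for a Llama Nemotron tier.
MODEL = "llama-nemotron-super"

def build_chat_request(prompt: str) -> dict:
    """Build an OpenAI-style chat-completion payload for a NIM endpoint."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }

# A developer would POST this JSON body to NIM_ENDPOINT.
payload = build_chat_request("Summarize NVIDIA's CES 2025 keynote.")
print(json.dumps(payload, indent=2))
```

Because the interface follows the familiar chat-completions shape, existing AI applications can switch to a NIM-hosted model largely by changing the endpoint and model name.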