Transforming Nvidia into a full-stack computing company

Nvidia is using its GTC conference, which kicked off today, to show how it is building a full stack of technologies that puts AI and immersive experiences within reach of any company with the ambition to use them.

In a press pre-briefing, Nvidia executives outlined a vision for the emerging omniverse. In addition to designing the future of virtual worlds, Nvidia wants to populate them with real-time avatars that use natural language AI to serve real-world customers. Meanwhile, virtual cars driving on virtual roads can help perfect the designs of real autonomous vehicles, virtual robots can be used to train better physical ones, and pizza delivery can get faster. Nvidia plans to provide the GPU chips, software, and cloud services to support all of these use cases.

All of this is set to be announced during CEO Jensen Huang’s keynote address at 9 a.m. CET.

“You will see Nvidia transform into a full-stack computing company,” said Deepu Talla, VP and general manager for embedded and edge computing. For example, a restaurant could set up a virtual waiter at a kiosk (perhaps on or next to your table), presented as a real-time avatar that can hold a conversation, read a frown on your face, and make recommendations from the menu. The Nvidia products supporting this vision include Metropolis for computer vision, Riva for conversational AI, the Merlin recommender system, and the Omniverse, which ties it all together.

Nvidia has created a “Unified Compute Framework” that treats AI models as microservices that can run together or in a distributed, hybrid architecture, Talla said.
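
As a rough illustration of that microservice idea (this sketch is ours, not the Unified Compute Framework API; the service, port, and `sentiment_model` stand-in are all hypothetical), each model can sit behind its own small service, so models can be composed on one machine or distributed across many:

```python
# A minimal sketch (stdlib only) of AI models as microservices.
# This is our illustration, not the Unified Compute Framework API.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def sentiment_model(text):
    """Hypothetical stand-in for a real AI model."""
    return {"label": "positive" if "good" in text.lower() else "neutral"}

class ModelService(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the request body and run the wrapped model on it.
        body = self.rfile.read(int(self.headers["Content-Length"]))
        result = sentiment_model(json.loads(body)["text"])
        payload = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # Other model services would run on other ports or other hosts,
    # which is what allows a distributed, hybrid deployment.
    HTTPServer(("localhost", 8080), ModelService).serve_forever()
```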

The Omniverse, Nvidia’s concept for interoperable “metaverse” virtual worlds, is based on Universal Scene Description (USD), a specification originally developed by Pixar. “We think of USD as the HTML of 3D,” says Richard Kerris, VP of the Omniverse platform. While it may not be governed by an organization like the W3C, a consortium of companies is working to push USD forward, he said. For example, USD recently added a rigid-body physics model that Nvidia worked on with Apple and Pixar.
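
To give a feel for the format, here is Pixar’s canonical “hello world” for USD’s Python bindings, a minimal sketch assuming the open source `usd-core` package is installed (this is plain USD, not any Omniverse-specific API):

```python
# Minimal USD scene, following Pixar's "hello world" tutorial.
# Requires: pip install usd-core
from pxr import Usd, UsdGeom

# Create a new stage -- the USD equivalent of a document.
stage = Usd.Stage.CreateNew("hello_world.usda")

# Define a transform prim with a sphere beneath it.
UsdGeom.Xform.Define(stage, "/hello")
sphere = UsdGeom.Sphere.Define(stage, "/hello/world")

# Set an attribute and save the human-readable .usda file.
sphere.GetRadiusAttr().Set(2.0)
stage.GetRootLayer().Save()
```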

With Apple’s involvement, it is now possible to scan an object with an iPhone and import it into the Omniverse, Kerris said. One of today’s announcements is the availability of an enterprise version of Omniverse, which starts at $9,000 per year.

Talla promised that an enterprise version of Riva will hit the market in the first quarter of 2022, while a free version will remain available for small businesses and individual developers. Riva’s conversational AI has improved to the point that it can generate synthetic speech in any voice from just 30 minutes of training data, he said, in addition to providing “world-class speech recognition” in seven languages. Early adopters include a large insurance company and RingCentral, the cloud telephony and unified-communications-as-a-service company.

Robots and autonomous vehicles

Nvidia is not only about the virtual world; it also offers the Jetson robotics platform, which combines its GPUs with Arm CPUs. The Jetson AGX Orin, planned for the first quarter of 2022, promises six times the computing power of the previous Xavier edition in the same form factor. At 200 trillion operations per second, the Jetson AGX Orin is, as Nvidia puts it, a GPU-enabled server that fits in the palm of your hand.

But the work on physical robots also connects back to the Omniverse. Nvidia recently announced a toolkit to integrate the open source Robot Operating System (ROS) with Isaac Sim, its simulation environment for robotics applications. Generating synthetic data with Isaac makes it possible to test virtual instantiations of robots in simulated worlds, Talla said. “Training robots in the physical realm is really difficult. It’s much cheaper, safer, and faster to do this in simulation,” he added. And because the data is synthetic, the labeling step of training a machine learning model can be skipped: the system already knows what every object in the virtual world is.
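
A toy sketch makes clear why labeling comes for free with synthetic data (this is our illustration, not the Isaac Sim API; the object classes and scene structure are hypothetical): the generator places every object itself, so the ground truth is known by construction.

```python
# Toy synthetic-data generator: because we place each object ourselves,
# the class and bounding box are exact -- no human annotation needed.
import random

CLASSES = ["box", "pallet", "forklift"]  # hypothetical object classes

def generate_sample(image_size=640):
    """Return a synthetic 'scene' plus its free ground-truth labels."""
    labels = []
    for _ in range(random.randint(1, 5)):
        cls = random.choice(CLASSES)
        x = random.randint(0, image_size - 64)
        y = random.randint(0, image_size - 64)
        w, h = random.randint(32, 64), random.randint(32, 64)
        labels.append({"class": cls, "bbox": (x, y, w, h)})
    scene = {"size": image_size, "objects": labels}  # stand-in for rendered pixels
    return scene, labels

scene, labels = generate_sample()
print(f"{len(labels)} auto-labeled objects:", labels)
```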

For similar reasons, Nvidia and its automaker partners first test autonomous vehicle designs in the Omniverse before putting them on the road. “We use Omniverse for simulations to train and test the vehicles to ensure their safety,” said Danny Shapiro, VP of automotive at Nvidia. Using synthetic data to test autonomous driving software against simulated road conditions saves time and money, simplifies problems like labeling objects in the scene, and ultimately translates to how the vehicle behaves in real-world conditions, he said.

Supply chain, pizza delivery and business

Meanwhile, Nvidia is working to make its technologies more accessible to companies that don’t build robots but have practical business problems that AI can help solve. “Companies looking to use AI to automate supply chain planning, cybersecurity, and conversational AI now have new frameworks to get them started,” said Justin Boitano, VP and general manager of enterprise and edge computing.

To optimize supply chains, Nvidia offers its ReOpt framework of accelerated logistics and operations algorithms. ReOpt is particularly aimed at last-mile delivery; as part of a partnership with Domino’s Pizza, for example, it optimizes how many pizzas a driver should deliver to a given list of addresses in a single trip, Boitano said. “Delivering pizza to satisfied customers in a time- and cost-effective way is a great example of where the power of accelerated computing lies, because every minute you spend calculating what to do is a minute you lose to actually deliver these pizzas to the customers.”
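
For a sense of the underlying problem, here is a deliberately naive nearest-neighbor routing heuristic (a toy of ours; ReOpt’s actual GPU-accelerated solvers are far more sophisticated, and the addresses below are made up):

```python
# Greedy nearest-neighbor heuristic for a single delivery route.
# A toy illustration of last-mile routing, not the ReOpt API.
import math

def route(depot, stops):
    """Visit stops by always driving to the closest remaining address."""
    order, current, remaining = [], depot, list(stops)
    while remaining:
        nxt = min(remaining, key=lambda s: math.dist(current, s))
        order.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return order

# Hypothetical delivery addresses as (x, y) coordinates.
print(route((0, 0), [(4, 4), (1, 0), (2, 3), (0, 5)]))
```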

To advance cybersecurity, Nvidia is introducing DOCA 1.2, a more cloud-ready update to its SDK for programming Nvidia DPUs to isolate and control traffic in the data center, along with Morpheus, an AI-powered zero-trust application framework. Morpheus models any combination of interactions between applications and users to understand what normal behavior looks like, making it possible to flag or block abnormal behavior on the network, Boitano said. DOCA 1.2 is slated for release on November 30, while an early access version of Morpheus is available now.
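
The core idea can be sketched in a few lines (our simplification, not the Morpheus API; the baseline numbers and threshold are invented): learn a statistical picture of normal interaction volume, then flag what deviates from it.

```python
# Minimal sketch of baseline-and-deviation anomaly flagging,
# the idea behind Morpheus-style detection -- not its actual API.
import statistics

# Hypothetical requests-per-minute counts for one application/user pair.
baseline = [52, 48, 50, 55, 47, 51, 49, 53]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(observed, threshold=3.0):
    """Flag observations more than `threshold` std devs from the baseline."""
    return abs(observed - mean) / stdev > threshold

for rpm in (54, 180):  # a normal minute, then a suspicious spike
    print(rpm, "anomalous?", is_anomalous(rpm))
```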

Cybersecurity providers working with Nvidia include Palo Alto Networks and Fortinet.

Another framework, NeMo Megatron, is aimed at companies looking to build large language models with potentially trillions of parameters. The software is designed to run on 20-node Nvidia DGX SuperPOD server arrays.
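
Some back-of-the-envelope arithmetic (ours, not Nvidia’s) shows why models at that scale force multi-node hardware: in half-precision, the weights of a trillion-parameter model alone run to terabytes.

```python
# Rough sizing arithmetic for a trillion-parameter language model.
# Our own estimate to illustrate the scale, not an Nvidia figure.
params = 1e12                 # one trillion parameters
bytes_per_param = 2           # fp16/bf16 weights

weights_tb = params * bytes_per_param / 1e12
print(f"Weights alone: {weights_tb:.0f} TB")          # ~2 TB

gpu_memory_gb = 80            # e.g., one 80 GB data-center GPU
gpus_for_weights = params * bytes_per_param / (gpu_memory_gb * 1e9)
print(f"GPUs just to hold the weights: {gpus_for_weights:.0f}")  # ~25
# Training also needs gradients and optimizer state, multiplying
# the memory footprint several times over -- hence multi-node pods.
```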

To make all of this more accessible, Nvidia has partnered with Equinix to make preconfigured instances of the technology available in data centers around the world – two in Asia, three in the US, and four in Europe.

Health care and drug research

The last area covered in the pre-briefings was healthcare, where Nvidia cited progress in the fight against cancer in collaboration with Memorial Sloan Kettering, and in fine-tuning radiation therapies for children in partnership with St. Jude Children’s Research Hospital. Nvidia technologies are also finding their way into surgical robotics.

Healthcare has the highest average annual growth in data volume of any industry at 36%, with hospitals generating 50 petabytes of data per year and plenty of opportunities to make better use of that data, said Kimberly Powell, VP and general manager of healthcare.

With Clara Holoscan, its medical device AI platform available on November 15, Nvidia offers “an all-in-one computing infrastructure for the scalable, software-defined processing of streaming data from medical devices,” Powell said.

Meanwhile, Nvidia is trying to overcome the simulation bottleneck in drug discovery: understanding of the molecular biology behind how drug candidates bind to proteins is advancing rapidly, but some simulations that scientists would like to run have been too computationally intensive to be practical. Nvidia found that by switching techniques, using physics modeling based on density functional theory rather than costlier quantum methods, it could achieve 1,000 times better performance in simulating the formation and breaking of molecular bonds, Powell said. A simulation that would previously have taken three months can now run in about three hours on a single GPU.
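
That claim is internally consistent, as a quick check (our arithmetic, not Nvidia’s) shows:

```python
# Sanity-checking the claimed speedup: three months at a 1,000x
# acceleration lands in the single-digit-hour range Powell described.
months_hours = 3 * 30 * 24      # ~2,160 hours in three months
speedup = 1_000
print(f"{months_hours / speedup:.1f} hours")  # ~2.2 hours, i.e. "about three"
```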
