Nvidia is presenting new technologies for the industrial metaverse at SIGGRAPH and releasing tools for AI graphics.
SIGGRAPH (Special Interest Group on Computer Graphics and Interactive Techniques) is an annual conference on computer graphics that has been held since 1974. At this year’s event, Nvidia founder and CEO Jensen Huang will show how the company aims to bring together AI, computer graphics and the (industrial) metaverse.
“The combination of AI and computer graphics will power the Metaverse, the next evolution of the Internet,” said Huang at the beginning of the Nvidia presentation.
Nvidia’s presentation showed different elements of this project in four sections: the new Nvidia Omniverse Avatar Cloud Engine (ACE), plans for expanding the Universal Scene Description (USD) standard, which is intended to serve as the “language of the metaverse”, comprehensive extensions for Nvidia’s Omniverse, and tools for AI graphics.
A metaverse needs a standard way to describe all content within 3D worlds, according to Rev Lebaredian, vice president for Omniverse and simulation technology at Nvidia. The company believes Pixar’s Universal Scene Description (USD) will be the standard scene description for the next internet era, Lebaredian added, comparing USD to HTML on the 2D web.
USD is an open source framework developed by Pixar for exchanging 3D computer graphics data and is used in many industries such as architecture, design, robotics and CAD.
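As a rough illustration of what working with USD looks like in practice, the following sketch uses Pixar’s open source Python API (the pxr module, available for example via the usd-core package) to author and save a minimal scene; the file name and prim paths are placeholder examples.

```python
# Minimal sketch: authoring a tiny USD scene with Pixar's open source Python API.
# Assumes the `usd-core` package (pip install usd-core) provides the `pxr` module.
from pxr import Usd, UsdGeom

# Create a new stage (the container for the scene description).
stage = Usd.Stage.CreateNew("hello_world.usda")  # placeholder file name

# Define a transform prim and a cube beneath it, much like nesting elements in HTML.
world = UsdGeom.Xform.Define(stage, "/World")
cube = UsdGeom.Cube.Define(stage, "/World/Cube")
cube.GetSizeAttr().Set(2.0)  # author an attribute on the cube prim

# Mark the default prim and save the layer to disk as human-readable .usda text.
stage.SetDefaultPrim(world.GetPrim())
stage.GetRootLayer().Save()
```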
Nvidia is therefore pursuing a multi-year strategy with Pixar, Adobe, Autodesk, Siemens and a number of other leading companies to expand USD’s capabilities beyond visual effects. USD is to be developed to better support industrial metaverse applications in architecture, engineering, manufacturing, scientific computing, robotics, and industrial digital twins.
“Our next milestones aim to make USD powerful for virtual worlds and industrial digital twins in real-time,” says Lebaredian. Nvidia also wants to help develop support for international character sets, geospatial coordinates, and real-time streaming of IoT data.
Nvidia relies on networked Omniverse
Nvidia also showed numerous improvements for Nvidia’s Omniverse. Huang described Omniverse as “a USD platform, a toolkit for developing metaverse applications, and a compute engine for running virtual worlds.” The new release includes several updated core technologies and more connections to popular tools.
These connections, called Omniverse Connectors, are currently in development for Unity, Blender, Autodesk Alias, Siemens JT, SimScale, the Open Geospatial Consortium and others; beta versions for PTC Creo, Visual Components and SideFX Houdini are now available. Siemens Xcelerator is also part of the Omniverse network and is intended to enable digital twins for more industrial customers.
Nvidia wants to increasingly use AI to speed up and improve computer graphics. Neural networks can be used at various points in the 3D rendering pipeline. | Image: Nvidia
Omniverse also gets neural graphics functions from Nvidia, including Instant NeRF for quickly creating 3D objects and scenes from 2D images, and GauGAN360, a further development of GauGAN that generates 8K, 360-degree panoramas.
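The core idea behind NeRF tools like Instant NeRF can be sketched in a few lines: a neural network maps 3D positions to color and density, and images are rendered by integrating samples along camera rays, so training can push the rendered pixels toward the input photos. The following PyTorch sketch illustrates that principle only and is not Nvidia’s implementation; the network size and sample counts are arbitrary choices.

```python
# Conceptual sketch of the NeRF idea behind tools like Instant NeRF (not Nvidia's code):
# an MLP maps 3D points to color and density, and pixels are rendered by integrating
# samples along camera rays.
import torch
import torch.nn as nn

class TinyRadianceField(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # outputs RGB (3) + density (1)
        )

    def forward(self, xyz):
        out = self.net(xyz)
        rgb = torch.sigmoid(out[..., :3])   # colors in [0, 1]
        sigma = torch.relu(out[..., 3:])    # non-negative density
        return rgb, sigma

def render_rays(field, origins, directions, near=0.0, far=1.0, n_samples=32):
    """Alpha-composite samples along each ray (simplified volume rendering)."""
    t = torch.linspace(near, far, n_samples)                      # sample depths
    points = origins[:, None, :] + t[None, :, None] * directions[:, None, :]
    rgb, sigma = field(points)                                    # query the field
    delta = (far - near) / n_samples
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * delta)           # opacity per sample
    trans = torch.cumprod(torch.cat(
        [torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1), dim=-1)[:, :-1]
    weights = alpha * trans                                       # contribution per sample
    return (weights[..., None] * rgb).sum(dim=1)                  # composited pixel colors

# Training would compare rendered pixels against the input photos with an MSE loss.
```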
An extension for Nvidia’s Modulus, a machine learning framework, also enables developers to create AI-based physics simulations that are up to 100 times faster.
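The underlying approach is to train a neural surrogate that respects the governing physical equations, so that evaluating the model later is much cheaper than running a classical solver. The sketch below shows the physics-informed training idea for a toy ODE in plain PyTorch; it does not use the Modulus API, and the equation and network size are purely illustrative.

```python
# Illustrative sketch of physics-informed training (plain PyTorch, not the Modulus API):
# a network u(t) is trained to satisfy the ODE du/dt = -u with u(0) = 1, using autograd
# to evaluate the residual of the governing equation at random collocation points.
import torch
import torch.nn as nn

u = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(u.parameters(), lr=1e-3)

for step in range(2000):
    t = torch.rand(256, 1, requires_grad=True)           # collocation points in [0, 1]
    u_t = u(t)
    du_dt = torch.autograd.grad(u_t.sum(), t, create_graph=True)[0]
    residual = du_dt + u_t                                # du/dt = -u  =>  residual -> 0
    ic = (u(torch.zeros(1, 1)) - 1.0) ** 2                # initial condition u(0) = 1
    loss = residual.pow(2).mean() + ic.mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Once trained, evaluating the surrogate is a single forward pass, which is where
# large speedups over repeated classical solver runs come from.
```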
Nearly a dozen partners will showcase Omniverse capabilities at SIGGRAPH, including hardware, software and cloud service providers from AWS and Adobe to Dell, Epic and Microsoft.
In a demo, Industrial Light & Magic showed how the company’s teams use the AI search tool Omniverse DeepSearch to query huge asset databases with natural language and get results even when the search terms are missing from the metadata.
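Conceptually, this kind of natural language search maps assets and queries into a shared embedding space and retrieves nearest neighbors, so matches do not depend on exact metadata tags. The snippet below sketches that idea with an off-the-shelf open source text encoder; it is not the DeepSearch API, and the asset descriptions are made-up examples.

```python
# Rough sketch of embedding-based retrieval, the idea behind natural language asset
# search (NOT the DeepSearch API): assets and queries are mapped into a shared vector
# space, so matches work even without exact metadata tags.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # example open source text encoder

# Hypothetical asset descriptions; in practice these could come from captions,
# file names or image encoders rather than hand-written metadata.
assets = [
    "rusty metal barrel with dents",
    "small wooden fishing boat",
    "futuristic hover bike with neon lights",
]
asset_vecs = model.encode(assets, convert_to_tensor=True)

query_vec = model.encode("old damaged container", convert_to_tensor=True)
scores = util.cos_sim(query_vec, asset_vecs)[0]   # cosine similarity per asset
best = scores.argmax().item()
print(assets[best], float(scores[best]))          # most similar asset for the query
```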
Nvidia delivers tools for AI graphics and improves the visual effects simulation library OpenVDB
One of the essential pillars for the emerging metaverse is neural graphics, according to Nvidia. In neural graphics (or neural rendering), neural networks are used to speed up and enhance computer graphics.
“Neural graphics bridges AI and graphics, paving the way for a future graphics pipeline that can learn from data,” said Sanja Fidler, Vice President of AI at Nvidia. “Neural graphics will redefine the way virtual worlds are created, simulated and experienced by users.”
Instant-NGP can learn gigapixel images, SDFs, NeRFs and Neural Volumes in a few seconds. | Video: Nvidia
At SIGGRAPH, Nvidia is showing 16 research papers on the topic, including Instant NGP, a tool for various applications of neural rendering. This and other AI tools are now available in Kaolin Wisp, a PyTorch library for neural fields research.
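A key ingredient of Instant NGP is its multiresolution hash encoding: 3D coordinates are hashed into small trainable feature tables at several resolutions, and the concatenated features feed a tiny MLP, which is what makes training in seconds possible. The simplified PyTorch sketch below illustrates the idea (without interpolation and with an arbitrary hash function); it is not the Kaolin Wisp or tiny-cuda-nn implementation, and all sizes are illustrative.

```python
# Simplified sketch of a multiresolution hash encoding in the spirit of Instant NGP
# (not the Kaolin Wisp / tiny-cuda-nn implementation): coordinates are hashed into
# small trainable feature tables at several resolutions and decoded by a tiny MLP.
import torch
import torch.nn as nn

class HashEncoding(nn.Module):
    def __init__(self, n_levels=4, table_size=2**14, feat_dim=2, base_res=16):
        super().__init__()
        self.resolutions = [base_res * (2 ** i) for i in range(n_levels)]
        self.tables = nn.ParameterList(
            [nn.Parameter(torch.randn(table_size, feat_dim) * 1e-2) for _ in range(n_levels)]
        )
        self.primes = torch.tensor([1, 2654435761, 805459861])  # spatial hash constants

    def forward(self, xyz):  # xyz in [0, 1]^3, shape (N, 3)
        feats = []
        for res, table in zip(self.resolutions, self.tables):
            grid = (xyz * res).long()                     # nearest grid vertex (no interpolation)
            h = (grid * self.primes).sum(dim=-1) % table.shape[0]
            feats.append(table[h])                        # look up trainable features
        return torch.cat(feats, dim=-1)

encoder = HashEncoding()
mlp = nn.Sequential(nn.Linear(4 * 2, 64), nn.ReLU(), nn.Linear(64, 4))  # tiny decoder MLP
features = encoder(torch.rand(1024, 3))
out = mlp(features)   # e.g. color + density per point, as in the NeRF sketch above
```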
You can find out more about AI graphics in our DEEP MINDS Podcast #8 with Nvidia researcher Thomas Müller.
Nvidia also announced NeuralVDB, an evolution of the OpenVDB open source standard. For the past decade, OpenVDB has won Academy Awards as a core technology used in the visual effects industry, according to Nvidia.
It has since expanded beyond the entertainment industry to industrial and scientific use cases where sparse volumetric data plays a role, such as industrial design and robotics.
Last year, Nvidia already showed NanoVDB, which added GPU support that can accelerate calculations many times over. NeuralVDB builds on this work and uses machine learning to create compact neural representations that reduce memory requirements by up to 100 times. This enables real-time interaction with extremely large and complex data sets.
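The basic trick can be illustrated with a back-of-the-envelope sketch: fit a small neural network to reproduce a voxel volume, so that storage shrinks from millions of voxel values to a few thousand network weights. The PyTorch example below uses a random placeholder volume and arbitrary sizes purely to show the storage arithmetic; it is not Nvidia’s NeuralVDB implementation.

```python
# Back-of-the-envelope sketch of neural volume compression (not Nvidia's NeuralVDB):
# a small MLP is fit to reproduce a voxel grid, so storage shrinks from millions of
# voxel values to a few thousand network weights. The random grid is only a stand-in.
import torch
import torch.nn as nn

res = 128
grid = torch.rand(res, res, res)                 # placeholder for a VDB density volume

mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(mlp.parameters(), lr=1e-3)

for step in range(1000):
    idx = torch.randint(0, res, (4096, 3))                        # random voxel coordinates
    target = grid[idx[:, 0], idx[:, 1], idx[:, 2]].unsqueeze(-1)
    pred = mlp(idx.float() / res)                                 # normalized coordinates in [0, 1]
    loss = (pred - target).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

n_voxels = res ** 3
n_params = sum(p.numel() for p in mlp.parameters())
print(f"{n_voxels} voxels represented by {n_params} parameters "
      f"(~{n_voxels / n_params:.0f}x fewer values to store)")
```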