Nvidia has released GET3D (Generate Explicit Textured 3D), an AI model intended to help creators working in the company’s Omniverse platform, as well as other developers, simplify the process of creating 3D models.
As the company explains, GET3D was trained on Nvidia A100 Tensor Core GPUs in just two days, using around one million 2D images of 3D shapes captured from various camera angles. According to the company, GET3D can generate up to 20 objects per second on a single GPU.
GET3D is capable of generating models of characters, buildings, vehicles, and other 3D objects “with high-fidelity textures and complex geometric details”. These objects are created in the same formats used by popular graphics applications, allowing users to import them into 3D rendering tools immediately.
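To illustrate what “the same formats used by popular graphics applications” means in practice, here is a minimal sketch of reading a triangle mesh in the Wavefront OBJ format, one common interchange format that most 3D tools accept. The tiny inline mesh is a hypothetical stand-in for a generated asset, not actual GET3D output:

```python
# Minimal sketch: parsing a mesh in Wavefront OBJ format.
# OBJ stores vertex positions on "v" lines and faces on "f" lines,
# where faces reference vertices by 1-based index.

OBJ_TEXT = """\
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
f 1 2 3
"""

def parse_obj(text):
    """Return (vertices, faces) parsed from a minimal OBJ string."""
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":        # vertex position: x y z
            vertices.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "f":      # face: 1-based indices, possibly "v/vt/vn"
            faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, faces

vertices, faces = parse_obj(OBJ_TEXT)
print(len(vertices), len(faces))  # 3 1
```

Because the format is plain text and widely supported, an asset generated this way can be dropped straight into a game engine or renderer without conversion.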
This should make it much easier for developers to populate dense virtual worlds for games and the metaverse. Nvidia also mentioned robotics and architecture as other possible applications for its new tool.
In addition, another Nvidia AI tool, StyleGAN-NADA, can apply different styles to an object using text prompts: set a car on fire, turn a building into a haunted house, or give any animal a tiger’s coloring.
The algorithm can detect subtle differences between similar objects: from the vehicle class it can generate cars, trucks, racing cars, and vans; from animal classes it can generate foxes, rhinos, horses, and bears. The range of outputs expands as the model is trained on more data.
According to Nvidia, future versions of GET3D will train on real objects rather than hand-crafted models.