NVIDIA is a leading technology company renowned for its innovation in graphics processing units (GPUs). The firm plans to present around 20 research papers in the field of artificial intelligence (AI) that advance generative AI and neural graphics. These papers were developed by NVIDIA researchers in collaboration with over a dozen universities in the U.S., Europe, and Israel. The premier computer graphics conference, SIGGRAPH 2023, will host these papers from August 6 to 10 in Los Angeles.
What Is SIGGRAPH 2023?
SIGGRAPH is an annual conference organized by ACM SIGGRAPH that combines academic presentations with an industry trade show. It is one of the most influential venues for academic publications in computer graphics. NVIDIA’s papers cover generative AI models, neural rendering, and more.
This year’s SIGGRAPH will feature presentations on AI-generated visual detail and photorealistic 3D head-and-shoulders models, to name just a couple. These developments will make it easier for businesses and developers to quickly create synthetic data to populate the virtual environments used to train robots and autonomous vehicles.
NVIDIA’s Generative AI Models: Transforming Text Into Images
Generative AI tools that transform text into images are powerful for creating storyboards and concept art for films, video games, simulation applications, and 3D virtual worlds. They can convert a prompt like “children’s toys” into nearly infinite visuals, inspiring creators with images of stuffed animals, blocks, or puzzles. However, artists often have a particular theme in mind. Two SIGGRAPH papers, developed by researchers from Tel Aviv University and NVIDIA, enable this level of specificity in the generative AI’s output. They allow users to provide example images that the model quickly learns from, cutting the personalization process from minutes to about 11 seconds on a single NVIDIA A100 Tensor Core GPU, more than 60x faster than previous personalization approaches.
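To make the core idea concrete, here is a minimal toy sketch of few-shot personalization, not the method from the papers: a stand-in generator is kept frozen and only a small concept embedding is optimized against the user’s example images. The model, sizes, and training loop below are purely illustrative assumptions.

```python
# Toy sketch of few-shot personalization (illustrative only, not NVIDIA's method).
# A frozen "generator" stands in for a large text-to-image model; we optimize
# just a small concept embedding against a few example images.
import torch
import torch.nn as nn

torch.manual_seed(0)

EMBED_DIM, IMG_PIXELS = 64, 32 * 32 * 3

# Frozen stand-in for a pretrained text-to-image generator.
generator = nn.Sequential(
    nn.Linear(EMBED_DIM, 256), nn.ReLU(), nn.Linear(256, IMG_PIXELS)
)
for p in generator.parameters():
    p.requires_grad_(False)

# A few user-provided example images (random tensors here).
examples = torch.rand(4, IMG_PIXELS)

# The only trainable quantity: an embedding for the new concept.
concept = nn.Parameter(torch.randn(EMBED_DIM) * 0.01)
opt = torch.optim.Adam([concept], lr=1e-2)

for step in range(200):
    opt.zero_grad()
    rendered = generator(concept).expand_as(examples)
    loss = nn.functional.mse_loss(rendered, examples)
    loss.backward()
    opt.step()

print(f"final reconstruction loss: {loss.item():.4f}")
```

Because only the small concept vector is updated while the generator stays frozen, the per-concept training cost is tiny compared with fine-tuning an entire model, which is the intuition behind the kind of speedup the researchers report.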
Transforming 2D Images & Videos Into 3D Representations
The next stage after creating concept art for a virtual world is to render the environment and populate it with 3D characters and objects. NVIDIA Research is developing AI techniques that can automatically convert 2D photos and videos into 3D representations, speeding up this laborious conversion and rendering process. In one example, researchers from the University of California, San Diego, created technology that can generate and render a photorealistic 3D head-and-shoulders model from a single 2D portrait, a breakthrough that brings 3D avatar creation and 3D video conferencing to a whole new level.
Bringing Lifelike Motion to 3D Characters
NVIDIA and Stanford University have worked together to give 3D characters lifelike movement. The researchers developed an AI system that can learn a variety of tennis strokes from 2D video recordings of real tennis matches and apply those motions to 3D characters. The computer-generated tennis players can sustain extended rallies on a virtual court and even hit the ball to target positions with precision. This research showcases the potential of AI for creating lifelike movement in virtual environments.
AI-Powered Hair Grooming
Once the AI generates a 3D character, artists can add further layers of realistic detail, such as hair, which is a complex computational challenge for animators. The NVIDIA team developed a method that can simulate tens of thousands of hairs in high resolution and in real time using neural physics, an AI technique in which a neural network learns to predict how objects move in the real world. The team’s approach to accurately simulating full-scale hair is tailored to modern GPUs and offers a significant performance leap over state-of-the-art CPU-based solvers.
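As a rough illustration of the neural-physics idea, and not the paper’s actual model, the sketch below trains a small network to predict the next positions of hair-strand vertices from their current positions and velocities. The “reference solver” here is a trivial gravity-only stand-in for the expensive simulator that would normally supply training data.

```python
# Toy neural-physics sketch (hypothetical, not NVIDIA's hair solver):
# a small MLP learns to step hair-strand vertex states forward in time.
import torch
import torch.nn as nn

torch.manual_seed(0)
N_VERTS, DT = 16, 1.0 / 60.0          # vertices per strand, timestep
STATE_DIM = N_VERTS * 6               # xyz position + xyz velocity per vertex

net = nn.Sequential(
    nn.Linear(STATE_DIM, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, N_VERTS * 3),      # predicted position offsets
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def reference_step(pos, vel):
    """Cheap stand-in for an expensive ground-truth solver: gravity only."""
    gravity = torch.tensor([0.0, -9.8, 0.0])
    new_vel = vel + DT * gravity
    return pos + DT * new_vel, new_vel

for step in range(500):
    pos = torch.rand(64, N_VERTS, 3)            # batch of strands
    vel = torch.randn(64, N_VERTS, 3) * 0.1
    target_pos, _ = reference_step(pos, vel)

    state = torch.cat([pos, vel], dim=-1).reshape(64, STATE_DIM)
    pred_pos = pos + net(state).reshape(64, N_VERTS, 3)

    loss = nn.functional.mse_loss(pred_pos, target_pos)
    opt.zero_grad(); loss.backward(); opt.step()

print(f"final one-step prediction error: {loss.item():.6f}")
```

At runtime, replacing a heavyweight solver with a single forward pass through a network like this is what makes GPU-friendly, real-time simulation of full heads of hair plausible.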
NVIDIA’s Research on Real-Time Rendering With AI
Real-time rendering simulates the physics of light as it bounces through a virtual scene. Recent NVIDIA research demonstrates how AI models for textures, materials, and volumes can deliver film-quality, photorealistic imagery for video games and digital twins in real time. In one SIGGRAPH paper, NVIDIA will demonstrate neural texture compression, which can provide up to 16x more texture detail without using additional GPU memory and can significantly improve the realism of 3D scenes.
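NVIDIA has not published its implementation details here, but the general shape of neural texture compression can be sketched as follows: a low-resolution latent feature grid plus a tiny decoder network stands in for a full-resolution texture, and texels are reconstructed on demand from UV coordinates. The grid size, channel counts, and decoder below are illustrative assumptions, and the fitting step that would train them against the original texture is omitted.

```python
# Illustrative sketch of a neural texture decoder (assumed design,
# not NVIDIA's published method): a small latent grid plus a tiny MLP
# reconstructs texel colors from UV coordinates.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Compressed representation: an 8-channel latent grid at 1/8 the resolution
# of a 1024x1024 RGB texture, plus a tiny decoder MLP.
latent = nn.Parameter(torch.randn(1, 8, 128, 128) * 0.1)
decoder = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 3))

def sample_texture(uv):
    """Decode RGB colors for UV coordinates in [0, 1], shape (N, 2)."""
    grid = uv.view(1, -1, 1, 2) * 2.0 - 1.0                   # to [-1, 1] for grid_sample
    feats = F.grid_sample(latent, grid, align_corners=True)   # (1, 8, N, 1)
    feats = feats.squeeze(0).squeeze(-1).permute(1, 0)        # (N, 8)
    return torch.sigmoid(decoder(feats))                      # (N, 3) RGB

# Example query: decode 4096 random texels.
colors = sample_texture(torch.rand(4096, 2))
print(colors.shape)  # torch.Size([4096, 3])

# Rough storage comparison (per-element counts, ignoring the tiny decoder):
full_tex = 1024 * 1024 * 3
compressed = 128 * 128 * 8
print(f"latent grid stores ~{full_tex / compressed:.0f}x fewer values")
```

The appeal of this kind of design is that the decoder runs per texel at shading time, so more apparent detail can be recovered from far less stored data, which is the trade-off the paper’s headline numbers describe.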
NVIDIA’s advancements in AI and computer graphics are poised to transform gaming, film-making, and robotics. The research papers to be presented at SIGGRAPH 2023 demonstrate NVIDIA’s commitment to innovation and its continued effort to push the boundaries of what is possible. These breakthroughs will likely pave the way for many new developments in AI, graphics, and beyond.
Other Advancements by NVIDIA Researchers
Apart from the aforementioned advancements, NVIDIA researchers have also developed other AI techniques that will be presented at SIGGRAPH 2023. These include inverse rendering, which can transform still images into 3D objects, and neural physics models that can simulate complex 3D elements with stunning realism using AI.
Moreover, AI-powered tools like NVIDIA Omniverse and NVIDIA Picasso will benefit greatly from these research advancements. NVIDIA Omniverse is a platform for building and operating metaverse applications, while NVIDIA Picasso is a foundry for custom generative AI models for visual design.
The presentations at SIGGRAPH 2023 will highlight how far NVIDIA’s innovation in AI and computer graphics has come. Over the years, NVIDIA graphics research has helped bring film-style rendering to games, most recently in the world’s first path-traced AAA title, ‘Cyberpunk 2077 Ray Tracing: Overdrive Mode.’
What Does This Mean For the Future of AI and Graphics?
The developments in the field of AI and computer graphics by NVIDIA researchers are nothing short of groundbreaking. The advancements showcased at SIGGRAPH 2023 will not only benefit the gaming industry but also have several potential applications in robotics, film-making, and beyond.
NVIDIA’s innovations in AI technology and generative AI models could revolutionize the future of concept art and storyboarding. Furthermore, the ability to generate photorealistic 3D representations from a single 2D portrait will open up new possibilities in virtual conferencing, remote collaboration, and more.
Our Say
With the growing importance of computer-generated environments across fields, the ability to rapidly generate synthetic data and virtual worlds for training robots and autonomous vehicles will prove invaluable. NVIDIA’s research advancements in this area will help organizations save time, resources, and money while also improving the accuracy and efficiency of their virtual environments.
In conclusion, NVIDIA’s research papers at SIGGRAPH 2023 demonstrate the company’s commitment to innovation in AI and computer graphics. These advancements have the potential to change the way we think about gaming, film-making, and robotics. As technology continues to evolve, NVIDIA’s research will continue to push boundaries and deliver even more valuable products and services.