Dominoes Anyone? SIGGRAPH Attendees Go Head-to-Head with Isaac-trained Robot in VR and Real Life

Computers can beat humans at chess. They can categorize images with superhuman accuracy. Now we’re showing how to create machines smart enough to safely interact with us in our daily lives.

We’re challenging attendees at the SIGGRAPH 2017 professional graphics conference this week in Los Angeles to sit across a table and take turns slapping down dominoes with us.

Robots for the Rest of Us

Don’t be shy; the idea isn’t to create a robot that never loses.

It’s to use our first hands-on demo of Isaac, a robot trained with our NVIDIA Isaac Lab robot simulator, to show how simulation — and virtual reality — can help robots learn the much more nuanced task of interacting with people.

Skills like pouring a cup of coffee, caring for the elderly, performing surgery, or playing a game of dominoes are the key to putting robotics to work in the lives of the world’s more than 7 billion people.

They’re skills too subtle to be taught by programmers banging out line after line of code. Our demo at SIGGRAPH shows how AI can be used to achieve this.

Hands on with Isaac and Project Holodeck

Our demo lets you get hands on with two technologies we announced earlier this year at our GPU Technology Conference.

NVIDIA Isaac is an AI-enabled robot that has been trained using a powerful simulation environment called the Isaac Lab.

Project Holodeck — a collaborative and physically accurate virtual reality environment — enables humans to enter a simulation and interact with robots the same way they will in real life.

You’ll be able to see how these two technologies work together by interacting with Isaac in two ways.

You’ll be able to go head-to-head with Isaac in the physical world on the show floor. And you’ll be able to strap on a VR headset and enter a simulation via Project Holodeck.

Deep learning and computer vision have been combined to teach a robot to sense and respond to human presence, to identify the state of play, to understand the game’s legal moves, and to determine which tile to select and how to place it.

The key: a pair of neural networks that help Isaac not only understand the game, but understand how to put that understanding to work when interacting with humans.

Using classification methods, the first neural network will identify the state of play from captured images of the domino tiles and determine the legal moves available in the game.

That data will then be passed to a second neural network, which uses reinforcement learning to determine which tile to select and how to place it.
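To make the two-network split concrete, here is a rough, hypothetical sketch of how such a perception-plus-policy pipeline can fit together — not NVIDIA’s actual implementation. The layer sizes, the board-state encoding, and names like `TileClassifier` and `MovePolicy` are all assumptions for illustration.

```python
# Hypothetical sketch of a perception + policy pipeline for a domino-playing robot.
# Layer sizes, the board-state encoding, and all names are illustrative only.
import torch
import torch.nn as nn

NUM_TILE_CLASSES = 28          # double-six domino set
STATE_DIM = 64                 # assumed encoding of the board state
MAX_CANDIDATE_MOVES = 56       # assumed upper bound on (tile, placement) pairs

class TileClassifier(nn.Module):
    """First network: classifies which tile appears in a cropped camera image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, NUM_TILE_CLASSES)

    def forward(self, crop):                  # crop: (B, 3, H, W)
        return self.head(self.features(crop).flatten(1))

class MovePolicy(nn.Module):
    """Second network: scores candidate moves given the current board state."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, MAX_CANDIDATE_MOVES),
        )

    def forward(self, state, legal_mask):     # legal_mask: 1 for legal moves, 0 otherwise
        scores = self.net(state)
        return scores.masked_fill(legal_mask == 0, float("-inf"))

# Toy end-to-end pass: perceive a tile, then pick the highest-scoring legal move.
classifier, policy = TileClassifier(), MovePolicy()
tile_logits = classifier(torch.rand(1, 3, 64, 64))
state = torch.rand(1, STATE_DIM)
legal = torch.zeros(1, MAX_CANDIDATE_MOVES); legal[0, :10] = 1   # pretend 10 moves are legal
best_move = policy(state, legal).argmax(dim=-1)
```

In a real system the policy network would be trained by reinforcement learning in simulation; the masking step simply guarantees that only legal moves can ever be selected.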

Once Isaac is trained in the Isaac Lab environment, that knowledge can be deployed and transferred between the physical and virtual realms.

By developing and training robots in a simulated world and then working with those robots in a virtual reality environment like Project Holodeck, researchers can deploy them to the real world in a way that is safer, faster, and more cost-effective.

Learn More

Learn more about NVIDIA Isaac here.

Experience Project Holodeck


Beyond robots, Holodeck is an ideal platform for game developers, content creators and designers who want to visualize and interact with large, complex, photoreal models in a shared VR space.

NVIDIA Project Holodeck is a highly realistic, multi-user VR environment that makes it easy for developers to import and interact with high-quality models, including iterating on and tuning a robot and its test methodology.

In addition to Isaac, NVIDIA is featuring the first hands-on demo at SIGGRAPH of the Koenigsegg Regera supercar in Holodeck. The Koenigsegg virtual car model is represented by more than 50 million polygons.

Trade show attendees will be able to change the color of the model, apply a clipping sphere to view hidden parts, and virtually explode the model to visualize complex assemblies.

The experience is highly collaborative, with participants in different physical rooms seeing and talking to each other in a shared virtual space.

The Holodeck early access beta program will be available to the public starting September 2017. Sign up here for updates on Project Holodeck.


NVIDIA Unleashes the Future of Live 360 Storytelling

To bring more immersive VR to more people, NVIDIA is releasing the VRWorks 360 Video SDK to enable production houses to live stream high-quality, 360 degree, stereo video to their audiences.

360 degree stereo stitching is a leap forward for the live-production and live-event industries. NVIDIA VRWorks accelerates the entire stereo stitching process while maintaining the highest level of image quality.

The VRWorks SDK enables production studios, camera makers and app developers to integrate 360-degree stereo stitching into their existing workflows for live and post production. Z CAM’s new V1 Pro is the first professional 360-degree VR camera to fully integrate the VRWorks SDK.

“We have clients across a wide range of industries, from travel through sports, who want high quality, 360 degree video,” said Chris Grainger, CEO of Grainger VR. “This allows filmmakers to push the boundaries of live storytelling.”​

“I love it. What is amazing about this live capability is the sense of depth, which is a lot more immersive than monoscopic video,” said Kevin Alderwiereldt, co-founder of YumeVR.

Z CAM, one of the earliest companies to bring a professional live VR camera to the masses, will be the first to integrate the NVIDIA VRWorks 360 degree Video SDK into the new V1 Pro and their WonderStitch and WonderLive stitching applications.

“We’re excited to be the first 360 degree professional VR camera to bring live, stereo stitching to the market,” said Jason Zhang, Founder & Chairman of Z CAM. “With VRWorks, we make 360 degree video more accessible to both content creators and consumers alike. We will usher in a new era of never-before-seen VR experiences.”

Available for download on Aug. 7, the VRWorks 360 Video SDK lets developers capture, stitch, and stream 360-degree videos. Supporting both mono and stereo 360-degree workflows in real time and post production, VRWorks delivers:

  • Real-time and offline stitching from 4K camera rigs.
  • GPU-accelerated video decode, calibration, stitching, and encode.
  • 360 projection onto cube-map and panorama.
  • Support for GPUDirect for Video for low latency video ingest.
  • Support for up to 32 video streams.
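To give a feel for the stages listed above — decode, calibrate, stitch and encode — here is a heavily simplified sketch of a 360 stitching loop. It deliberately does not use the VRWorks API; it uses generic OpenCV/NumPy calls, and the rig layout, panorama size and calibration data are placeholders rather than anything from a real camera.

```python
# Simplified illustration of a 360 stitching loop (decode -> warp -> blend -> output).
# This is NOT the VRWorks API; it uses generic OpenCV/NumPy calls, and the remap
# tables would normally come from a real rig calibration rather than random data.
import cv2
import numpy as np

PANO_W, PANO_H = 2048, 1024           # equirectangular output size (assumed)
NUM_CAMERAS = 4                        # assumed rig layout

def load_calibration(cam_idx):
    """Stand-in for real calibration: per-camera remap tables and blend weights."""
    map_x = np.random.uniform(0, 1920, (PANO_H, PANO_W)).astype(np.float32)
    map_y = np.random.uniform(0, 1080, (PANO_H, PANO_W)).astype(np.float32)
    weight = np.random.uniform(0, 1, (PANO_H, PANO_W, 1)).astype(np.float32)
    return map_x, map_y, weight

def stitch_frame(camera_frames, calibrations):
    """Warp each camera frame into the panorama and feather-blend the overlaps."""
    accum = np.zeros((PANO_H, PANO_W, 3), np.float32)
    weight_sum = np.zeros((PANO_H, PANO_W, 1), np.float32)
    for frame, (map_x, map_y, weight) in zip(camera_frames, calibrations):
        warped = cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)
        accum += warped.astype(np.float32) * weight
        weight_sum += weight
    return (accum / np.maximum(weight_sum, 1e-6)).astype(np.uint8)

calibrations = [load_calibration(i) for i in range(NUM_CAMERAS)]
frames = [np.zeros((1080, 1920, 3), np.uint8) for _ in range(NUM_CAMERAS)]
panorama = stitch_frame(frames, calibrations)   # (1024, 2048, 3) equirectangular image
```

The point of GPU acceleration in a production pipeline is that every stage in this loop — decoding the camera feeds, the per-pixel warps, the blending and the final encode — can run in parallel on the GPU fast enough for live streaming.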

Trade show attendees will be able to see the VRWorks live 360 degree stereo video stitching in action. Just visit the NVIDIA booth #403 at the SIGGRAPH Conference in Los Angeles July 30 – August 3.

The demo will feature live 360 stereo streaming from the show floor, using the Z CAM V1 Pro and P6000 graphics cards.

Download the VRWorks 360 SDK here. Download Z CAM WonderLive and WonderStitch with VRWorks 360 video stitching here.


Titan Xp Enables New Levels of Performance for Creative Professionals

TITAN Xp is the most powerful GPU you can put in your PC, and now we’re enabling even better performance with our latest TITAN drivers.

We built TITAN Xp for people who design and create — and, of course, play games. And it’s always getting better.

Our latest driver — available today — delivers 3x more performance in applications like Maya to help you create and design faster than ever.

You can now also use TITAN Xp with more PCs than ever before, including thin & light laptops, and with external GPU chassis from a wide range of manufacturers, including Asus, HP, PowerColor and Razer.

And when you’re done, if you want to rip through Battlefield 1 at 100 frames per second, no problem.

Best part: TITAN Xp is available worldwide. It’s available from Nvidia.com in the United States, Europe, Australia, and Russia; from JD.com in China; from PC Home in Taiwan; and from 11st in South Korea.


A Whole New Game: NVIDIA Research Brings AI to Computer Graphics

The same GPUs that put games on your screen could soon be used to harness the power of AI to help game and film makers move faster, spend less and create richer experiences.

At SIGGRAPH 2017 this week, NVIDIA is showcasing research that makes it far easier to animate realistic human faces, simulate how light interacts with surfaces in a scene and render realistic images more quickly.

NVIDIA is combining our expertise in AI with our long history in computer graphics to advance 3D graphics for games, virtual reality, movies and product design.

Forward Facing

Game studios create animated faces by recording video of actors performing every line of dialogue for every character in a game. They use software to turn that video into a digital double of the actor, which later becomes the animated face.

Existing software requires artists to spend hundreds of hours revising these digital faces to more closely match the real actors. It’s tedious work for artists and costly for studios, and it’s hard to change once it’s done.

Reducing the amount of labor involved in creating facial animation would let game artists add more character dialogue and additional supporting characters, as well as give them the flexibility to quickly iterate on script changes.

Remedy Entertainment — best known for games like Quantum Break, Max Payne and Alan Wake — approached NVIDIA Research with an idea to help them produce realistic facial animation for digital doubles with less effort and at lower cost.

Using AI, researchers automated the task of converting live actor performances (left) to computer game virtual characters (right).

Artificially Intelligent Game Faces

Using Remedy’s vast store of animation data, NVIDIA GPUs, and deep learning, NVIDIA researchers Samuli Laine, Tero Karras, Timo Aila, and Jaakko Lehtinen trained a neural network to produce facial animations directly from actor videos.

Instead of having to perform labor-intensive data conversion and touch-up for hours of actor videos, NVIDIA’s solution requires only five minutes of training data. The trained network automatically generates all facial animation needed for an entire game from a simple video stream. NVIDIA’s AI solution produces animation that is more consistent than existing methods while retaining the same fidelity.
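As a hedged sketch of what video-driven facial animation looks like in general — not the published NVIDIA/Remedy system — a convolutional network can regress per-frame facial rig parameters, such as blendshape weights, from cropped face frames, supervised by the studio’s existing animation data. The rig size, crop resolution and training details below are assumptions.

```python
# Illustrative sketch of video-driven facial animation as supervised regression:
# a CNN maps a cropped face frame to facial rig parameters (e.g. blendshape weights).
# Architecture, rig size, and data loading are assumptions, not the published system.
import torch
import torch.nn as nn

NUM_RIG_PARAMS = 160   # assumed size of the facial rig / blendshape vector

class Video2Rig(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.regressor = nn.Linear(128, NUM_RIG_PARAMS)

    def forward(self, frames):                     # frames: (B, 3, H, W)
        return self.regressor(self.encoder(frames).flatten(1))

model = Video2Rig()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One training step on placeholder data standing in for (video frame, rig pose) pairs
# produced by the studio's existing capture and animation pipeline.
frames = torch.rand(8, 3, 128, 128)
target_rig = torch.rand(8, NUM_RIG_PARAMS)
loss = loss_fn(model(frames), target_rig)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```

The audio-driven variant described below follows the same pattern, with an audio feature encoder standing in for the image encoder.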

The research team then pushed further, training a system to generate realistic facial animation using only audio. With this tool, game studios will be able to add more supporting game characters, create live animated avatars, and more easily produce games in multiple languages.

Toward a New Era in Gaming 

Antti Herva, lead character technical artist at Remedy, said that over time, the new methods will let the studio build larger, richer game worlds with more characters than are now possible. Already, the studio is creating high-quality facial animation in much less time than in the past.

“Based on the NVIDIA Research work we’ve seen in AI-driven facial animation, we’re convinced AI will revolutionize content creation,” said Herva. “Complex facial animation for digital doubles like that in Quantum Break can take several man-years to create. After working with NVIDIA to build video- and audio-driven deep neural networks for facial animation, we can reduce that time by 80 percent in large scale projects and free our artists to focus on other tasks.”

Creating Images with AI

AI also holds promise for rendering 3D graphics, the process that turns digital worlds into the life-like images you see on the screen. Film makers and designers use a technique called ray tracing to simulate light reflecting from surfaces in the virtual scene. NVIDIA is using AI to improve both ray tracing and rasterization, a less costly rendering technique used in computer games.

Although ray tracing generates highly realistic images, simulating millions of virtual light rays for each image carries a large computational cost. Partially computed images appear noisy, like a photograph taken in extremely low light.

To denoise the resulting image, researchers used deep learning with GPUs to predict final, rendered images from partly finished results. Led by Chakravarty R. Alla Chaitanya, an NVIDIA research intern from McGill University, the research team created an AI solution that generates high-quality images from noisier, more approximate input images in a fraction of the time compared to existing methods.
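A minimal sketch of this kind of learned denoiser follows, framed as image-to-image regression from a low-sample-count render to its converged reference. The published work also feeds auxiliary render buffers and uses a recurrent architecture; those details are omitted here, and all sizes are assumptions.

```python
# Minimal sketch of learned Monte Carlo denoising as image-to-image regression:
# predict the converged render from a low-sample-count, noisy render. The published
# system is recurrent and also consumes auxiliary buffers (normals, albedo, depth);
# this sketch keeps only the core idea, and all sizes are assumptions.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, noisy):                 # noisy: (B, 3, H, W), low-sample render
        return self.decoder(self.encoder(noisy))

model = TinyDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Placeholder training pair: a partially converged "noisy" render and its reference.
noisy = torch.rand(2, 3, 256, 256)
reference = torch.rand(2, 3, 256, 256)
loss = nn.functional.l1_loss(model(noisy), reference)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```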

This work is more than a research project. It’ll soon be a product. Today we announced the NVIDIA OptiX 5.0 software development kit, the latest version of our ray tracing engine. OptiX 5.0, which incorporates the NVIDIA Research AI denoising technology, will be available at no cost to registered developers in November.

AI Smooths out Rough Edges

NVIDIA researchers used AI to tackle a problem in computer game rendering known as aliasing. Anti-aliasing is another way to reduce noise — in this case, the jagged edges in partially rendered images. Called “jaggies,” these are staircase-like lines that appear where smooth lines should be. (See left inset in image below.)

NVIDIA researchers Marco Salvi and Anjul Patney trained a neural network to recognize these artifacts and replace those pixels with smooth anti-aliased pixels. The AI-based technology produces sharper images than existing algorithms.

The left inset shows an aliased image that is jaggy and pixelated. NVIDIA’s AI anti-aliasing algorithm produced the larger image and inset on the right by learning the mapping from aliased to anti-aliased images. Image courtesy of Epic Games.
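One hedged way to picture what such a network learns from: training pairs can be built by point-sampling a heavily supersampled frame to get the jaggy input, and averaging the same frame down to get the smooth target. The sketch below fakes the “renders” with random arrays; only the pairing step matters.

```python
# Illustrative construction of training pairs for learned anti-aliasing:
# the aliased input is a 1-sample-per-pixel frame, the target is a supersampled
# frame box-filtered down to the same resolution. Real pairs would come from a
# game engine; random arrays stand in for renders here.
import numpy as np

H, W, SS = 270, 480, 4                       # output size and supersampling factor (assumed)

supersampled = np.random.rand(H * SS, W * SS, 3).astype(np.float32)    # stand-in render
aliased = supersampled[::SS, ::SS]                                      # point-sampled: jaggy input
antialiased = supersampled.reshape(H, SS, W, SS, 3).mean(axis=(1, 3))   # box filter: smooth target

# (aliased, antialiased) is one training pair for an image-to-image network,
# similar in spirit to the denoiser sketched earlier.
assert aliased.shape == antialiased.shape == (H, W, 3)
```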

How AI Traces the Right Rays

NVIDIA is developing more efficient methods to trace virtual light rays. Computers sample the paths of many light rays to generate a photo-realistic image. The problem is that not all of those light paths contribute to the final image.

Researchers Ken Daum and Alex Keller used machine learning to guide the choice of light paths. They accomplished this by connecting the mathematics of tracing light rays to the AI concept of reinforcement learning.

Their solution learns to distinguish the “useful” paths — those most likely to connect lights with virtual cameras — from paths that don’t contribute to the image.

Simulating light reflections in this virtual scene — shown without denoising — is challenging because the only light comes through the narrowly opened door. NVIDIA’s AI-guided light simulation delivers up to 10x faster image synthesis by reducing the required number of virtual light rays.
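A heavily simplified sketch of the reinforcement-learning idea, assuming a tabular, Q-learning-style update: the renderer keeps a per-region table of how much light each outgoing direction has historically contributed, and samples new ray directions in proportion to that learned value instead of uniformly. The discretization, update rule and `trace_ray` stand-in below are illustrative, not the researchers’ formulation.

```python
# Toy illustration of reinforcement-learned path guiding: a Q-table over
# (spatial cell, outgoing direction bin) is updated with the radiance each sampled
# direction returns, and future rays are drawn in proportion to the learned values.
# The discretization, update rule, and "trace_ray" stand-in are all illustrative.
import numpy as np

rng = np.random.default_rng(0)
NUM_CELLS, NUM_DIRECTIONS = 64, 32       # assumed scene/direction discretization
ALPHA = 0.3                               # learning rate for the value update

q_table = np.ones((NUM_CELLS, NUM_DIRECTIONS))   # optimistic init: explore everywhere

def trace_ray(cell, direction):
    """Stand-in for the renderer: returns the radiance found along this direction."""
    return float(rng.random() < 0.05) * rng.random()   # most directions find no light

for _ in range(10_000):
    cell = rng.integers(NUM_CELLS)
    # Sample a direction in proportion to the value learned so far (guided sampling).
    probs = q_table[cell] / q_table[cell].sum()
    direction = rng.choice(NUM_DIRECTIONS, p=probs)
    radiance = trace_ray(cell, direction)
    # Move the stored value toward the observed contribution (Q-learning-style update).
    q_table[cell, direction] += ALPHA * (radiance - q_table[cell, direction])

# Directions that actually reach light sources end up receiving most of the samples.
```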

NVIDIA AI Research at SIGGRAPH

At SIGGRAPH, you can learn more about how AI is changing computer graphics by visiting us at Booth #403 starting Tuesday, and by attending NVIDIA’s SIGGRAPH AI research talks:

Tuesday,  Aug. 1

Wednesday, Aug. 2

Thursday, Aug. 3

 

The image at the top of this blog appears in a SIGGRAPH paper by NVIDIA researchers who used artificial intelligence to accelerate image synthesis by converting the partially-rendered image (left) to the final image (right).
