Researchers Make Movies of the Brain with CUDA

When colleagues told Sally Epstein they sped up image processing by three orders of magnitude for a client’s brain-on-a-chip technology, she responded like any trained scientist. Go back and check your work, the biomedical engineering Ph.D. told them.

Yet it was true. The handful of researchers at Cambridge Consultants had devised a basket of techniques to process an image on GPUs in an NVIDIA DGX-1 system in 300 milliseconds, a 3,000x boost over the 18 minutes the task took on an Intel Core i9 CPU.

The achievement makes it possible for researchers to essentially watch movies of neurons firing in real time using the brain-on-a-chip technology from NETRI, a French startup.

“Animal studies revolutionized medicine. This is the next step in testing for areas like discovering new drugs,” said Epstein, head of Strategic Technology at Cambridge Consultants, which develops products and technologies for a wide variety of established companies and startups such as NETRI.

The startup designs chips that sport 3D microfluidic channels to host neural tissues and a CMOS camera sensor with polarizing filters to detect individual neurons firing. It hopes its precision imaging can speed the development of novel treatments for neurological disorders such as Alzheimer’s disease.

Facing a Computational Bottleneck

NETRI’s chips generate 100-megapixel images at up to 1,000 frames per second — the equivalent of a hundred 4K gaming systems running at 120fps. Besides spawning tons of data, they use highly complex math.

As a result, processing a single second of a recording took NETRI 12 days, an unacceptable delay. So, the startup turned to Cambridge Consultants to bust through the bottleneck.

“Our track record in scientific and biological imaging turned out to be very relevant,” said Monty Barlow, Director of Strategic Technology at Cambridge Consultants. And when NETRI heard about the 3,000x boost, “they trusted us even though we didn’t trust ourselves at first,” he quipped.

Leveraging Math, Algorithms and GPUs

A handful of specialists at Cambridge Consultants delivered the 3,000x speedup using multiple techniques. For example, math and algorithm experts employed a mix of Gaussian filters, multivariate calculus and other tools to eliminate redundant tasks and reduce peak RAM requirements.

Software developers migrated NETRI’s Python code to CuPy to take advantage of the massive parallelism of NVIDIA’s CUDA software. And hardware specialists optimized the code to fit into GPU memory, eliminating unnecessary data transfers inside the DGX-1.
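
To make that migration concrete, here is a minimal sketch of the kind of NumPy-to-CuPy port described above. The function, filter choice and parameters are illustrative assumptions, not NETRI’s actual pipeline:

```python
# Illustrative sketch of a NumPy-to-CuPy port for one per-frame filtering step.
# The pipeline and parameters below are assumptions, not NETRI's actual code.
import numpy as np
import cupy as cp
from cupyx.scipy.ndimage import gaussian_filter

def process_frame_gpu(frame_host: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Denoise one frame on the GPU and return the result to host memory."""
    frame_dev = cp.asarray(frame_host, dtype=cp.float32)  # one host-to-device copy
    filtered = gaussian_filter(frame_dev, sigma)           # runs as CUDA kernels
    filtered -= filtered.mean()                            # later math stays on the GPU
    return cp.asnumpy(filtered)                            # one device-to-host copy
```

Keeping intermediate arrays resident on the GPU between steps, as in this sketch, is what avoids the kind of unnecessary data transfers the hardware specialists eliminated.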

The CUDA profiler helped find bottlenecks in NETRI’s code and alternatives to resolve them. “NVIDIA gave us the tools to execute this work efficiently — it happened within a week with a core team of four researchers and a few specialists,” said Epstein.

Looking ahead, Cambridge Consultants expects to find further speedups for the code using the DGX-1 that could enable real-time manipulation of neurons using a laser. Researchers also aim to explore NVIDIA IndeX software to help visualize neural activity.

The work with NETRI is one of several DGX-1 applications at the company. It also hosts a Digital Greenhouse for AI research. Last year, it used the DGX-1 to create a low-cost but highly accurate tool for monitoring tuberculosis.


NVIDIA Gives COVID-19 Researchers Free Access to Parabricks

When a crisis hits, we all pitch in with what we have. In response to the current pandemic, NVIDIA is sharing tools with researchers that can accelerate their race to understand the novel coronavirus and help inform a response.

Starting today, NVIDIA will provide a free 90-day license to Parabricks to any researcher in the worldwide effort to fight the novel coronavirus. Based on the well-known Genome Analysis Toolkit, Parabricks uses GPUs to accelerate the analysis of sequence data by as much as 50x.

We recognize this pandemic is evolving, so we’ll monitor the situation and extend the offer as needed.

If you have access to NVIDIA GPUs, fill out this form to request a Parabricks license.

For researchers working with Oxford Nanopore long-read data, a repository of GPU-accelerated tools is available on GitHub. In addition, the following applications already have NVIDIA GPU acceleration built in: Medaka, Racon, Raven, Reticulatus, Unicycler.

Researchers are sequencing both the novel coronavirus and the genomes of people afflicted with COVID-19 to understand, among other things, the spread of the disease and who is most affected. But analyzing genomic sequences takes time and computing muscle.

Accelerating science has long been part of NVIDIA’s core mission. The Parabricks team joined NVIDIA in December, providing the latest tool for that work. It can reduce the time for variant calling on a whole human genome from days to less than an hour on a single server.

Given the unprecedented spread of the pandemic, getting results in hours versus days could have an extraordinary impact on understanding the virus’s evolution and the development of vaccines.

NVIDIA is inviting our family of partners to join us in matching this urgent effort to assist the research community. We’re in discussions with cloud service providers and supercomputing centers to provide compute resources and access to Parabricks on their platforms.

We’ll update this blog with links to others who can provide cloud-based access to NVIDIA GPUs and this software as those sources become available.



Italy Forges AI Future in Partnership with NVIDIA

Italy is well known for its architecture, culture and cuisine. Soon, its contributions to AI may be just as renowned.

Taking a step in that direction, a national research organization forged a three-year collaboration with NVIDIA. Together they aim to accelerate AI research and commercial adoption across the country.

Leading the charge for Italy is CINI, the National Inter-University Consortium for Informatics that includes a faculty of more than 1,300 professors in various computing fields across 39 public universities.

CINI’s National Laboratory of Artificial Intelligence and Intelligent Systems (AIIS) is spearheading the effort as part of its goal to expand Italy’s ecosystem for both academic research and commercial applications of AI.

“Leveraging NVIDIA’s expertise to build systems specifically designed for the creation of AI will help secure Italy’s position as a top player in AI education, research and industry adoption,” said Rita Cucchiara, a professor of computer engineering and science and director of AIIS.

National effort begins in Modena

The joint initiative aims to train students, nurture startups and spread adoption of the latest AI technology throughout Italy. As a first step, the partners will create a local hub at the University of Modena and Reggio Emilia (Unimore) for the global NVIDIA AI Technology Center.

The partnership marks an important expansion of NVIDIA’s work with the university, whose roots date back to the medieval period.

In December, the company supported research on a novel way to automate the process of describing actions in a video. A team of four researchers at Unimore and one from Milan-based AI startup Metaliquid developed an AI model that achieved up to a 16 percent relative improvement compared to prior solutions. In a final stage of the project, NVIDIA helped researchers analyze their network’s topology to optimize training it on an NVIDIA DGX-1 system.

In July, Unimore and NVIDIA collaborated on an event for AI startups. Unimore’s AImageLab hosted the event, which included representatives of NVIDIA’s Inception program, an initiative to nurture AI startups with access to the company’s technology and ecosystem.

The collaboration comes at a time when the AImageLab, host for the new NVIDIA hub, is already making its mark in areas such as machine vision and medical imaging.

Winning kudos in image recognition

In September, two world-class research events singled out the AImageLab for recognition. One team from the lab won a best paper award at the International Conference on Computer Analysis of Images and Patterns. Another came third out of 64 research groups in an international competition using AI to classify skin lesions.

The Modena hub becomes the latest of more than 12 collaborations with countries worldwide for the NVIDIA AI Technology Center. NVAITC maintains an open database of research and tools developed with and for its partners.

Overall, the new collaboration “will bring together NVIDIA and CINI in our shared mission to enable, support and inform Italy’s AI ecosystem for research, industry and society,” said Simon See, senior director of NVAITC.


Life Observed: Nobel Winner Sees Biology’s Future with GPUs

Five years ago, when Eric Betzig got the call that he’d won a Nobel Prize for inventing a microscope that could see features as small as 20 nanometers, he was already working on a new one.

The new device captures the equivalent of 3D video of living cells — and now it’s using NVIDIA GPUs and software to see the results.

Betzig’s collaborator at the University of California at Berkeley, Srigokul Upadhyayula (aka Gokul), helped refine the so-called Lattice Light Sheet Microscopy (LLSM) system. It generated 600 terabytes of data while exploring part of the visual cortex of a mouse in work published earlier this year in Science magazine. A 1.3TB slice of that effort was on display at NVIDIA’s booth at last week’s SC19 supercomputing show.

Attendees got a glimpse of how tomorrow’s scientists may unravel medical mysteries. Researchers, for example, can use LLSM to watch how protein coverings on nerve axons degrade as diseases such as multiple sclerosis take hold.

Future of Biology: Direct Visualization

“It’s our belief we will never understand complex living systems by breaking them into parts,” Betzig said of methods such as biochemistry and genomics. “Only optical microscopes can look at living systems and gather information we need to truly understand the dynamics of life, the mobility of cells and tissues, how cancer cells migrate. These are things we can now directly observe.

“The future of biology is direct visualization of living things rather than piecing together information gleaned by very indirect means,” he added.

It Takes a Cluster — and More

Such work comes with heavy computing demands. Generating the 600TB dataset for the Science paper “monopolized our institution’s computing cluster for days and weeks,” said Betzig.

“These microscopes produce beautifully rich data we often cannot visualize because the vast majority of it sits in hard drives, completely useless,” he said. “With NVIDIA, we are finding ways to start looking at it.”

The SC19 demo — a multi-channel visualization of a preserved slice of mouse cortex — ran remotely on six NVIDIA DGX-1 servers, each packing eight NVIDIA V100 Tensor Core GPUs. The systems are part of an NVIDIA SATURNV cluster located near its headquarters in Santa Clara, Calif.

Berkeley researchers gave SC19 attendees a look inside the visual cortex of a mouse — visualized using NVIDIA IndeX.

The key ingredient for the demo and future visualizations is NVIDIA IndeX software, an SDK that allows scientists and researchers to see and interact in real time with massive 3D datasets.

Version 2.1 of IndeX debuted at SC19, sporting a host of new features, including GPUDirect Storage, as well as support for Arm and IBM POWER9 processors.

After seeing their first demos of what IndeX can do, the research team installed it on a cluster at UC Berkeley that uses a dozen NVIDIA TITAN RTX GPUs and four V100 Tensor Core GPUs. “We could see this had incredible potential,” Gokul said.

Closing a Big-Data Gap

The horizon holds plenty of mountains to climb. The Lattice scope generates as much as 3TB of data an hour, so visualizations are still often done on data that must be laboriously pre-processed and saved offline.

“In a perfect world, we’d have all the information for analysis as we get the data from the scope, not a month or six months later,” said Gokul. The time between collecting and visualizing data can stretch from weeks to months, but “we need to tune parameters to react to data as we’re collecting it” to make the scope truly useful for biologists, he added.

NVIDIA IndeX software, running on its increasingly powerful GPUs, helps narrow that gap.

In the future, the team aims to apply the latest deep learning techniques, but this too presents heady challenges. “There are no robust AI models to deploy for this work today,” Gokul said.

Making the data available to AI specialists who could craft AI models would require shipping crates of hard drives on an airplane, a slow and expensive proposition. That’s because the most recent work produced over half a petabyte of data, but cloud services often limit uploads and downloads to a terabyte or so per day.

Betzig and Gokul are talking with researchers at cloud giants about new options, and they’re exploring new ways to leverage the power of GPUs because the potential of their work is so great.

Coping with Ups and Downs

“Humans are visual animals,” said Betzig. “When most people I know think about a hypothesis, they create mental visual models.

“The beautiful thing about microscopy is you can take a model in your head with all its biases and immediately compare it to the reality of living biological images. This capability already has and will continue to reveal surprises,” he said.

The work brings big ups and downs. Winning a Nobel Prize “was a shock,” Betzig said. “It kind of felt like getting hit by a bus. You feel like your life is settled and then something happens to change you in ways you wouldn’t expect — it has good and bad sides to it.”

Likewise, “in the last several years working with Gokul, every microscope had its limits that led us to the next one. You take five or six steps up to a plateau of success and then there is a disappointment,” he said.

In the partnership with NVIDIA, “we get to learn what we may have missed,” he added. “It’s a chance for us to reassess things, to understand the GPU from folks who designed the architecture, to see how we can merge our problem sets with new solutions.”

Note: The picture at top shows Berkeley researchers Eric Betzig, Ruixian Gao and Srigokul Upadhyayula with the Lattice Light Sheet microscope.


AI’s Latest Adventure Turns Pets into GANimals

Imagine your Labrador’s smile on a lion or your feline’s finicky smirk on a tiger. Such a leap is easy for humans to perform, with our memories full of images. But the same task has been a tough challenge for computers — until the GANimal.

A team of NVIDIA researchers has defined new AI techniques that give computers enough smarts to see a picture of one animal and recreate its expression and pose on the face of any other creature. The work is powered in part by generative adversarial networks (GANs), an emerging AI technique that pits one neural network against another.

You can try it for yourself with the GANimal app. Input an image of your dog or cat and see its expression and pose reflected on dozens of breeds and species from an African hunting dog and Egyptian cat to a Shih-Tzu, snow leopard or sloth bear.

I tried it, using a picture of my son’s dog, Duke, a mixed-breed mutt who resembles a Golden Lab. My fave — a dark-eyed lynx wearing Duke’s dorky smile.

There’s potential for serious applications, too. Someday moviemakers may film dogs doing stunts and use AI to map their movements onto, say, less tractable tigers.

The team reports its work this week in a paper at the International Conference on Computer Vision (ICCV) in Seoul. The event is one of three seminal conferences for researchers in the field of computer vision.

Their paper describes what the researchers call FUNIT, “a Few-shot, UNsupervised Image-to-image Translation algorithm that works on previously unseen target classes that are specified, at test time, only by a few example images.”

“Most GAN-based image translation networks are trained to solve a single task. For example, translate horses to zebras,” said Ming-Yu Liu, a lead computer-vision researcher on the NVIDIA team behind FUNIT.

“In this case, we train a network to jointly solve many translation tasks where each task is about translating a random source animal to a random target animal by leveraging a few example images of the target animal,” Liu explained. “Through practicing solving different translation tasks, eventually the network learns to generalize to translate known animals to previously unseen animals.”

Before this work, network models for image translation had to be trained using many images of the target animal. Now, one picture of Rover does the trick, in part thanks to a training function that includes many different image translation tasks the team adds to the GAN process.
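
The test-time recipe can be sketched in a few lines of PyTorch-style code. The encoder and decoder modules below are placeholders standing in for the published FUNIT architecture, not NVIDIA’s released implementation:

```python
# Sketch of few-shot translation at test time, following the FUNIT idea.
# content_enc, class_enc and decoder are placeholder modules, not NVIDIA's code.
import torch

def translate_few_shot(content_img, target_examples, content_enc, class_enc, decoder):
    """content_img: 1xCxHxW source animal (e.g., your dog).
    target_examples: KxCxHxW images of a previously unseen target animal, K small."""
    with torch.no_grad():
        content_code = content_enc(content_img)             # pose and expression
        class_codes = class_enc(target_examples)            # one embedding per example
        class_code = class_codes.mean(dim=0, keepdim=True)  # average over the few shots
        return decoder(content_code, class_code)            # target animal, source pose
```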

The work is the next step in Liu’s overarching goal of finding ways to code human-like imagination into neural networks. “This is how we make progress in technology and society by solving new kinds of problems,” said Liu.

The team — which includes seven of NVIDIA’s more than 200 researchers — wants to expand the new FUNIT tool to include more kinds of images at higher resolutions. They are already testing it with images of flowers and food.

Liu’s work in GANs hit the spotlight earlier this year with GauGAN, an AI tool that turns anyone’s doodles into photorealistic works of art.

The GauGAN tool has already been used to create more than a million images. Try it for yourself on the AI Playground.

At the ICCV event, Liu will present a total of four papers in three talks and one poster session. He’ll also chair a paper session and present at a tutorial on how to program the Tensor Cores in NVIDIA’s latest GPUs.


Pixel-Perfect Perception: How AI Helps Autonomous Vehicles See Outside the Box

Editor’s note: This is the latest post in our NVIDIA DRIVE Labs series, which takes an engineering-focused look at individual autonomous vehicle challenges and how NVIDIA DRIVE addresses them. Catch up on all of our automotive posts here.

A self-driving car’s view of the world often includes bounding boxes — cars, pedestrians and stop signs neatly encased in red and green rectangles.

In the real world, however, not everything fits in a box.

For highly complex driving scenarios, such as a construction zone marked by traffic cones, a sofa chair or other road debris in the middle of the highway, or a pedestrian unloading a moving van with cargo sticking out the back, it’s helpful for the vehicle’s perception software to provide a more detailed understanding of its surroundings.

Such fine-grained results can be obtained by segmenting image content with pixel-level accuracy, an approach known as panoptic segmentation.

With panoptic segmentation, the image can be accurately parsed for both semantic content (which pixels represent cars vs. pedestrians vs. drivable space), as well as instance content (which pixels represent the same car vs. different car objects).
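
In code, such output is often folded into a single panoptic ID map that carries both signals per pixel. The encoding below is a common convention for doing so, not necessarily the scheme NVIDIA’s DNN uses internally:

```python
# Sketch: merge per-pixel semantic labels and instance IDs into one panoptic map.
# The class_id * 1000 + instance_id encoding is a common convention, assumed here.
import numpy as np

def to_panoptic(semantic: np.ndarray, instance: np.ndarray) -> np.ndarray:
    """semantic: HxW class IDs (car, pedestrian, drivable space, ...).
    instance: HxW per-object IDs, 0 for regions with no distinct instances."""
    return semantic.astype(np.int32) * 1000 + instance.astype(np.int32)
```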

Planning and control modules can use panoptic segmentation results from the perception system to better inform autonomous driving decisions. For example, the detailed object shape and silhouette information helps improve object tracking, resulting in a more accurate input for both steering and acceleration. It can also be used in conjunction with dense (pixel-level) distance-to-object estimation methods to help enable high-resolution 3D depth estimation of a scene.

Single DNN Approach

NVIDIA’s approach achieves pixel-level semantic and instance segmentation of a camera image using a single, multi-task learning deep neural network. This approach enables us to train a panoptic segmentation DNN that understands the scene as a whole versus piecewise.

It’s also efficient. Just one end-to-end DNN can extract all this rich perception information while achieving per-frame inference times of approximately 5ms on our embedded in-car NVIDIA DRIVE AGX platform. This is much faster than state-of-the-art segmentation methods.

DRIVE AGX makes it possible to simultaneously run the panoptic segmentation DNN along with many other DNNs, perception functions, localization, and planning and control software in real time.

Panoptic segmentation DNN output from in-car inference on embedded AGX platform. Top: predicted objects and object classes (blue = cars; green = drivable space; red = pedestrians). Bottom: predicted object-class instances along with computed bounding boxes (shown in different colors and instance IDs).

As shown above, the DNN is able to segment a scene into several object classes, as well as detect different instances of these object classes, as shown with the unique colors and numbers in the bottom panel.

On-Point Training and Perception

The rich pixel-level information provided by each frame also reduces training data volume requirements. Specifically, because more pixels per training image represent useful information, the DNN is able to learn using fewer training images.

Moreover, based on the pixel-level detection results and post-processing, we’re also able to compute the bounding box for each object detection. All the perception advantages offered by pixel-level segmentation require substantial processing power, which is why we developed the powerful NVIDIA DRIVE AGX Xavier.
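
As an illustration of that post-processing step, a bounding box can be read directly off an instance’s pixel mask. This is a generic sketch, not DRIVE code:

```python
# Sketch: derive an axis-aligned bounding box from one instance's pixel mask.
import numpy as np

def mask_to_bbox(instance_map: np.ndarray, instance_id: int):
    """Return (x_min, y_min, x_max, y_max) for the pixels labeled instance_id."""
    ys, xs = np.nonzero(instance_map == instance_id)
    if xs.size == 0:
        return None  # this instance does not appear in the frame
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```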

As a result, the pixel-level details panoptic segmentation provides make it possible to better perceive the visual richness of the real world in support of safe and reliable autonomous driving.


Over the Moon: NVIDIA RTX-Powered Apollo 11 Spectacle Lands at SIGGRAPH

With one small step, you can take a giant leap into real-time rendering at SIGGRAPH 2019 this week in Los Angeles.

To celebrate the 50th anniversary of the Apollo 11 moon landing, NVIDIA is bringing an RTX-powered interactive demo that takes attendees on a trip to the moon and back.

Visitors to the NVIDIA booth will immediately recognize a familiar, historic scene on display: a photoreal, rendered recreation of the moon landing with an astronaut standing next to the Apollo 11 lander. But, stepping closer, they’ll notice the astronaut starts to move — and it’s copying their exact movements.

A single camera is set up in the booth to capture the person’s poses and match their movements to the 3D-rendered astronaut. Using pose estimation technology designed by NVIDIA Research, the interactive demo doesn’t require any special suits, multiple cameras or a depth sensor — it just requires a willing participant and an off-the-shelf webcam.

With real-time ray tracing to render every detail, combined with Omniverse — our open collaboration platform that streamlines 2D and 3D product pipelines — we’re able to project a photoreal 3D image of attendees as if they were an astronaut exploring a real moonscape that is literally out of this world.

Moonwalk Using NVIDIA’s AI-Enhanced Technology

Previously, capturing someone’s precise movements was a difficult task due to variations in appearance, overlapping objects and complex body poses. Studios would require a large production involving multiple cameras, full-body suits with sensors, and painstaking calibration to accurately detect body movements and generate 3D recreations.

Our NVIDIA Research team has developed a state-of-the-art method that reconstructs the 3D human body motion and position from a single 2D video feed. With pose estimation technology, the Tensor Cores in RTX GPUs speed up the AI inference to understand the person’s movements. That information is then translated and sent to the Omniverse renderer to match the precise movements to the 3D astronaut.
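
The demo’s data flow can be pictured as the loop below. The pose_model and renderer_client objects and their methods are hypothetical stand-ins, since the demo’s code isn’t public; only the webcam capture uses a real API (OpenCV):

```python
# Hypothetical sketch of the demo's capture -> pose inference -> render loop.
# pose_model.estimate_pose_3d and renderer_client.send_pose are made-up names.
import cv2  # off-the-shelf webcam capture

def run_demo_loop(pose_model, renderer_client):
    cap = cv2.VideoCapture(0)  # a single ordinary webcam, no depth sensor
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        joints_3d = pose_model.estimate_pose_3d(frame)  # GPU inference on one 2D image
        renderer_client.send_pose(joints_3d)            # drive the 3D astronaut rig
    cap.release()
```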

So the moment someone steps into the NVIDIA booth, they’re virtually transported into space to relive the historic moon-landing moment.

From the moon rocks strewn about to the Apollo 11 lander, the realistic details in the scene are made possible through NVIDIA RTX technology. It creates stunning levels of realism while simulating how the sun’s rays interact with the moon’s surface in real time. The interplay of light and shadow gives incredible new perspectives on the moon landing.

Learn More About NVIDIA RTX

NVIDIA RTX brings a new level of realism to computer graphics using real-time ray tracing and AI acceleration. Major studios and software application companies are choosing RTX because it provides impressive speeds for photorealistic rendering, helping designers create high-quality visuals similar to the ones built for our moon-landing demo.

Artists can also experience faster content creation workflows and GPU-powered rendering in the data center with NVIDIA RTX Server, which delivers cinematic-quality graphics enhanced by RTX ray tracing. With NVIDIA RTX Server, designers can accelerate the rendering pipeline and explore more options while creating higher quality graphics.

Take a trip to the moon powered by RTX technology in NVIDIA booths 1303 and 1313 at SIGGRAPH. See demos of the latest RTX-accelerated renderers and apps, and check out the upcoming sessions on our SIGGRAPH schedule page.


A Pigment of Your Imagination: Over Half-Million Images Created with GauGAN AI Art Tool

From amateur doodlers to leading digital artists, creators are coming out in droves to produce masterpieces with NVIDIA’s most popular research demo: GauGAN.

The AI painting web app — which turns rough sketches into stunning, photorealistic scenes — was built to demonstrate NVIDIA research on harnessing generative adversarial networks.

More than 500,000 images have been created with GauGAN since the beta version was made publicly available just over a month ago on the NVIDIA AI Playground.

Art directors and concept artists from top film studios and video game companies are among the creative professionals already harnessing GauGAN as a tool to prototype ideas and make rapid changes to synthetic scenes.

“GauGAN popped on the scene and interrupted my notion of what I might be able to use to inspire me,” said Colie Wertz, a concept artist and modeler whose credits include Star Wars, Transformers and Avengers movies. “It’s not something I ever imagined having at my disposal.”

Wertz, using a GauGAN landscape as a foundation, recently created an otherworldly ship design shared on social media.

AI Work of Art: Senior concept artist Colie Wertz created this ship design with a GauGAN landscape as a foundation.

“Real-time updates to my environments with a few brush strokes is mind-bending. It’s like instant mood,” said Wertz, who uses NVIDIA RTX GPUs for his creative work. “This is forcing me to reinvestigate how I approach a concept design.”

Attendees of this week’s SIGGRAPH conference can experience GauGAN for themselves in the NVIDIA booth, where it’s running on an HP RTX workstation powered by NVIDIA Quadro RTX GPUs that feature Tensor Cores. NVIDIA researchers will also present GauGAN during a live event at the prestigious computer graphics show.

Users can share their GauGAN creations on Twitter with #SIGGRAPH2019, #GauGAN and @NVIDIADesign to enter our AI art contest, judged by Wertz. The winner will receive an NVIDIA Quadro RTX 6000 GPU.

Unleash Your AI Artist

GauGAN, named for post-Impressionist painter Paul Gauguin, creates photorealistic images from segmentation maps, which are labeled sketches that depict the layout of a scene.

People can use paintbrush and paint bucket tools to design their own landscapes with labels including river, grass, rock and cloud. A style transfer algorithm allows creators to apply filters, modifying the color composition of a generated image, or turning it from a photorealistic scene to a painting.
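
Under the hood, such a sketch is just an image of integer class labels. Here is a minimal illustration of building one and preparing it for a GAN generator; the label IDs and the one-hot interface are assumptions for illustration, not the exact API of the released code:

```python
# Sketch: a GauGAN-style segmentation map is an image of integer labels.
# Label IDs and the generator's expected input format are assumed here.
import torch

SKY, MOUNTAIN, GRASS, RIVER = 0, 1, 2, 3
NUM_CLASSES = 4

def make_label_map(height=256, width=256):
    label = torch.full((height, width), SKY, dtype=torch.long)
    label[96:144, :] = MOUNTAIN       # "paint bucket" a band of mountains
    label[144:, :] = GRASS
    label[200:, 64:192] = RIVER       # a river running toward the viewer
    return label

def to_generator_input(label):
    # Generators of this kind typically consume a one-hot encoding of the labels.
    one_hot = torch.nn.functional.one_hot(label, NUM_CLASSES)
    return one_hot.permute(2, 0, 1).unsqueeze(0).float()  # 1 x C x H x W
```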

“As researchers working on image synthesis, we’re always pursuing new techniques to create images with higher fidelity and higher resolution,” said NVIDIA researcher Ming-Yu Liu. “That was our original goal for the project.”

But when the demo was introduced at our GPU Technology Conference in Silicon Valley, it took on a life of its own. Attendees flocked to a tablet on the show floor where they could try it out for themselves, creating stunning scenes of everything from sun-drenched ocean landscapes to idyllic mountain ranges shrouded by clouds.

The latest iteration of the app, on display at SIGGRAPH, lets users upload their own filters to layer onto their masterpieces — adopting the lighting of a perfect sunset photo or emulating the style of a favorite painter.

They can even upload their own landscape images. The AI will convert source images into a segmentation map, which can then be used as a foundation for the user’s artwork.

“We want to make an impact with our research,” Liu said. “This work creates a channel for people to express their creativity and create works of art they wouldn’t be able to do without AI. It’s enabling them to make their imagination come true.”

While the researchers anticipated that game developers, landscape designers and urban planners would benefit from this technology, interest in GauGAN has been far more widespread — including from a healthcare organization exploring its use as a therapeutic, stress-mitigating tool for patients.

AI That Captures the Imagination 

Developed using the PyTorch deep learning framework, the neural network behind GauGAN was trained on a million images using the NVIDIA DGX-1 deep learning system. The demo shown at GTC ran on an NVIDIA TITAN RTX GPU, while the web app is hosted on NVIDIA GPUs through Amazon Web Services.

Liu developed the deep neural network and accompanying app along with researchers Taesung Park, Ting-Chun Wang and Jun-Yan Zhu.

The team has publicly released source code for the neural network behind GauGAN, making it available for non-commercial use by other developers to experiment with and build their own applications.

GauGAN is available on the NVIDIA AI Playground for visitors to experience the demo firsthand.


SIGGRAPH Showcases Amazing NVIDIA Research Breakthroughs, NVIDIA Wins Best in Show Award

Get ready to dig in this week.

SIGGRAPH is here and we’re helping graphics professionals, researchers, developers and students of all kinds take advantage of the latest advances in graphics, including new possibilities in real-time ray tracing, AI, and augmented reality.

SIGGRAPH is the most important computer graphics conference in the world, and our research team and collaborators from top universities and many industries are here with us.

At the top of the list: ray tracing, using NVIDIA’s RTX platform, which fuses ray tracing, deep learning and rasterization. We’re directly involved in 34 of 50 ray tracing-related technical sessions this week — far more than any other company. And our talks are drawing luminaries from around the industry, with four technical Academy Award winners participating in NVIDIA-sponsored sessions.

Beyond the technical sessions, we’ll be showcasing new developer tools and giving attendees a first-hand look at some of our most exciting work. One great example is NVIDIA GauGAN, an interactive paint program that uses GANs (generative adversarial networks) to create works of art from simple brush strokes. Now everybody can be an artist.

Never been to the moon? A stunning new demo virtually transports visitors to the Apollo 11 landing site using never-before-shown AI pose estimation that captures their body movements in real time. This all became possible by combining NVIDIA Omniverse technology, AI and RTX ray tracing.

The story behind all these stories: our 200-person-strong NVIDIA Research team, spread across 11 locations worldwide. The group embodies NVIDIA’s commitment to bringing innovative new ideas to customers in everything from machine learning, computer vision and self-driving cars to robotics, graphics, computer architecture and programming systems.

A Host of Papers, Talks, Tutorials

We’ll be leading or participating in six SIGGRAPH courses that detail various facets of the next-generation graphics technologies we’ve played a leading role in bringing to market.

These courses touch on everything from an introduction to real-time ray tracing, the use of the NVIDIA OptiX API, Monte Carlo and quasi-Monte Carlo sampling techniques, the latest in path tracing techniques, open problems in real-time rendering, and the future of ray tracing as a whole.

The common denominator: RTX. The real-time ray-tracing capabilities RTX unleashes offer far more realistic lighting effects than traditional real-time rendering techniques.

We’re also sponsoring seven courses on topics ranging from deep learning for content creation and real-time rendering to GPU ray tracing for film and design.

And we’re presenting technical papers that detail how our latest near-eye AR display demo works, and that take the next leap in denoising Monte Carlo rendering using convolutional neural networks — a cornerstone of AI — to greatly reduce the time required to generate realistic images.

The Eyes Have It: Prescription-Embedded AR Display Wins Best in Show Award

You’ll be able to get hands-on with our latest technology in SIGGRAPH’s Emerging Technologies area. That’s where we have a pair of wearable augmented reality display technologies you need to see, especially if you don’t see very well without regular eyeglasses.

The first, “Prescription AR,” is a prescription-embedded AR display that won a SIGGRAPH Best in Show Emerging Technology award Monday.

The display is many times thinner and lighter and has a wider field of view than current-generation AR devices. Virtual objects appear throughout the natural field of view instead of clustered in the center, and if you wear corrective optics, your prescription is built right into the display. This is much closer to the goal of comfortable, practical and socially acceptable AR displays than anything currently available.

 

The second research demonstration, “Foveated AR,” is a headset that adapts to your gaze in real time using deep learning. It adjusts the resolution of the images it displays and their focal depth to match wherever you are looking and gives both sharper images and a wider field of view than any previous AR display.

To do this, it combines two different displays per eye: a high-resolution display with a small field of view that presents images to the portion of the human retina where visual acuity is highest, and a low-resolution display for peripheral vision. The result is a high-quality visual experience with reduced power and computation.

TITAN RTX Giveaway

Finally, NVIDIA is thanking the student volunteer community at SIGGRAPH with a daily giveaway of a TITAN RTX GPU while the exhibit hall is open. These students are the future of one of the world’s most vibrant professional communities, a community we’re privileged to be a part of.


DRIVE Labs: How We’re Building Path Perception for Autonomous Vehicles

Editor’s note: No one developer or company has yet succeeded in creating a fully autonomous vehicle. But we’re getting closer. With this new DRIVE Labs blog series, we’ll take an engineering-focused look at each individual open challenge — from perceiving paths to handling intersections — and how the NVIDIA DRIVE AV Software team is mastering it to create safe and robust self-driving software.

MISSION: Building Path Perception Confidence via Diversity and Redundancy

APPROACH: Path Perception Ensemble

Having confidence in a self-driving car’s ability to use data to perceive and choose the correct drivable path while the car is driving is critical. We call this path perception confidence.

For Level 2+ systems, such as the NVIDIA DRIVE AP2X platform, evaluating path perception confidence in real time translates to knowing when autonomous operation is safe and when control should be transitioned to the human driver.

To put our path perception confidence to the test, we set out to complete a fully autonomous drive around a 50-mile-long highway loop in Silicon Valley with zero disengagements. This meant handling highway interchanges, lane changes, avoiding unintended exits, and staying in lanes even under high road curvature or with limited lane markings. All these maneuvers and more had to be performed in a way that was smooth and comfortable for the car’s human occupants.

The key challenge was in the real-time nature of the test. In offline testing — such as analysis of pre-recorded footage — a path perception signal can always be compared to a perfect reference. However, when the signal runs live in the car, we don’t have the benefit of live ground-truth data.

Consequently, in a live test, if the car drives on just one path perception signal, there’s no way to obtain real-time correctness on confidence. Moreover, if the sole path perception input fails, the autonomous driving functionality might disengage. Even if it doesn’t disengage, the result could be reduced comfort and smoothness of the executed maneuver.

From Individual Networks to an Ensemble

To build real-time confidence, we introduced diversity and redundancy into our path perception software.

We did this by combining several different path perception signals, including the outputs of three different deep neural networks and, as an option, a high-definition map. The fact that the signals are all different brings diversity; the fact that they all do the same thing — perceive drivable paths — creates redundancy.

The path perception signals produced by the various DNNs are largely independent. That’s because the DNNs are all different in terms of training data, encoding, model architecture and training outputs.

Example of a high-confidence path perception ensemble result for left, ego vehicle, and right lane center paths. High-confidence result visualized by thick green center path lines. The solid white lines denote lane line predictions and are also computed by the ensemble.

For example, our LaneNet DNN is trained to predict lane lines, while our PathNet DNN is trained to predict edges that define drivable paths regardless of the presence or absence of lane lines. And our PilotNet DNN is trained to predict driving center paths based on trajectories driven by human drivers.

We combined the different path perception outputs using an ensemble technique. It’s a machine learning method that combines several base models and produces an optimal predictive model.

Through agreement/disagreement analysis of the different path perception signals, we built and measured path perception confidence live in the car, and obtained a higher quality overall result.

This analysis is demonstrated in our visualization. When the signal components strongly agree, the thick line denoting center path prediction for a given lane is green; when they disagree, it turns red.
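
One simple way to picture that analysis: sample each source’s center-path prediction at the same distances ahead, average them, and flag low confidence when any source strays too far from the consensus. The averaging and threshold below are illustrative, not the actual ensemble logic in DRIVE software:

```python
# Sketch: fuse several center-path predictions and score their agreement.
# The simple mean and the 0.5 m threshold are illustrative assumptions.
import numpy as np

def fuse_center_paths(paths, agree_threshold_m=0.5):
    """paths: list of (N, 2) arrays of (x, y) points, one per DNN or map source,
    all sampled at the same N longitudinal distances ahead of the ego vehicle."""
    stacked = np.stack(paths)                               # S x N x 2
    fused = stacked.mean(axis=0)                            # ensemble center path
    spread = np.linalg.norm(stacked - fused, axis=2).max()  # worst disagreement, meters
    confident = bool(spread < agree_threshold_m)            # green if agree, red if not
    return fused, confident
```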

Since our approach is based on diversity, a system-level failure is statistically less likely, which is tremendously beneficial from a safety perspective.

DRIVE with Confidence

The path perception confidence we built using diversity and redundancy enabled us to evaluate all potential paths, including center path and lane line predictions for ego/left/right lanes, lane changes/splits/merges, and obstacle-to-lane assignment.

During the drive, multiple path perception DNNs were running in-car alongside obstacle perception and tracking functionality. The need to simultaneously run these tasks underscores the practical importance of high performance computing in autonomous vehicle safety.

This software functionality — termed the path perception ensemble — will be shipping in the NVIDIA DRIVE Software 9.0 release. Learn more at https://developer.nvidia.com/drive/drive-perception.
