Coming to a Desktop Near You: The Future of Self-Driving

There has never been a better time to learn how AI will transform the way people, goods and services move.

During GTC Digital, anyone can experience the latest developments in AI technology for free, from the comfort of their own home. Hundreds of expert talks and training sessions covering autonomous vehicles, robotics, healthcare, finance and more will soon be available at the click of a button.

Beginning March 25, GTC Digital attendees can tune in to sessions hosted by autonomous driving leaders from Ford, Toyota, Zoox and more, as well as receive virtual training from NVIDIA experts on developing AI for self-driving.

Check out what’s in store for the NVIDIA autonomous vehicle ecosystem.

Learn from Leaders

GTC Digital talks let AI experts and developers delve into their latest work, sharing key insights on how to deploy intelligent vehicle technology.

This year’s automotive speakers are covering the latest topics in self-driving development, including AI training, simulation and software.

  • Neda Cvijetic, senior manager of Autonomous Vehicles at NVIDIA, will apply an engineering focus to widely acknowledged autonomous vehicle challenges, and explain how NVIDIA is tackling them. This session will air live with a question-and-answer segment to follow.
  • Clement Farabet, vice president of AI Infrastructure at NVIDIA, will discuss NVIDIA’s end-to-end AI platform for developing NVIDIA DRIVE software. This live talk will cover how to scale the infrastructure to train self-driving deep neural networks and include a Q&A session.
  • Tokihiko Akita, project research fellow, Toyota Technical Institute, will detail how deep neural networks can be used for autonomous vehicle object recognition and tracking in adverse weather with millimeter-wave radar sensors.
  • Nikita Jaipuria, research scientist for computer vision and machine learning, and Rohan Bhasin, research engineer, both at Ford Motor Company, will discuss in their April 2 session how to leverage generative adversarial networks to create synthetic data for autonomous vehicle training and validation.
  • Zejia Zheng and Jeff Pyke, software engineers at Zoox, will outline the Zoox TensorRT conversion pipeline to optimize deep neural network deployment on high-performance NVIDIA GPUs.

Virtual Hands-On Training

Developers will also get a chance to dive deeper into autonomy with NVIDIA Deep Learning Institute courses as well as interact virtually with experts in autonomous vehicle development.

DLI will offer a variety of instructor-led training sessions addressing the biggest developer challenges in areas such as autonomous vehicles, manufacturing and robotics. Receive intensive instruction on topics such as autonomous vehicle perception and sensor integration in our live sessions this April.

Get detailed answers to all your development questions in our Connect with Experts sessions on intelligent cockpits, autonomous driving software development and validation, and more with NVIDIA’s in-house specialists.

Register to access these sessions and more for free and receive the latest updates on GTC Digital.


Driving Progress: NVIDIA Leads Autonomous Vehicle Industry Report

As the industry begins to deploy autonomous driving technology, NVIDIA is leading the way in delivering a complete end-to-end solution, including data center infrastructure, software toolkits, libraries and frameworks, as well as high-performance, energy-efficient compute for safer, more efficient transportation.

Every year, advisory firm Navigant Research releases a report on the state of the autonomous vehicle industry. This year, the company broke out its in-depth research into “Automated Vehicle Compute Platforms” and “Automated Driving Vehicles” leaderboards, ranking the major players in each category.

In the 2020 Automated Vehicle Compute Platforms report, NVIDIA led the list of companies developing AV platforms to power the AI that will replace the human driver.

Automated Vehicle Compute Platforms Leaderboard

“NVIDIA is a proven company with a long history of producing really strong hardware and software,” said Sam Abuelsamid, principal research analyst at Navigant and author of both reports. “Having a platform that produces ever-increasing performance and power efficiency is crucial for the entire industry.”

While NVIDIA provides solutions for developing autonomous vehicles, the Automated Driving Vehicles report covers the companies building such vehicles for production. Manufacturers using these compute platforms include automakers, tier 1 suppliers, robotaxi companies and software startups that are part of the NVIDIA DRIVE ecosystem.

These reports are just one piece of the larger autonomous vehicle development picture. While they don’t contain every detail of the current state of self-driving systems, they provide valuable insight into how the industry approaches this transformative technology.

Computing the Way Forward

The Navigant leaderboard uses a detailed methodology in determining both the companies it covers and how they rank.

The Automated Vehicle Compute Platforms report looks at those who are developing compute platforms and have silicon designs that are in at least sample production or have plans to be soon. These systems must have fail-operational capability to ensure safety when there’s no human backup at the wheel.

The report evaluates companies using factors such as vision, technology, go-to-market strategy and partners. NVIDIA’s top rating is based on leading performance across this wide range of factors.

NVIDIA Strategy and Execution Scores

With the high-performance, energy-efficient DRIVE AGX platform, NVIDIA topped the inaugural leaderboard report for its progress in the space. The platform’s scalable architecture and the open, modular NVIDIA DRIVE software make it easy for autonomous vehicle manufacturers to scale DRIVE solutions to meet their individual production plans.

“Without powerful, energy-efficient compute and the reliability and robustness to handle the automotive environment, none of the rest would be possible,” Abuelsamid said. “Looking at the current compute platforms, NVIDIA is the clear leader.”

Compute is a critical element to autonomous vehicles. It’s what makes it possible for a vehicle to process in real time the terabytes of data coming in from camera, radar and lidar sensors. It also enables self-driving software to run multiple deep neural networks simultaneously, achieving the diversity and redundancy in operation that is critical for safety. Furthermore, by ensuring a compute platform has enough performance headroom, developers can continuously add features and improve the system over time.

Empowering Our Ecosystem

A second report from Navigant, the Automated Driving Vehicles report, ranks companies developing Level 4 autonomous driving systems — which don’t require a human backup in defined conditions — for passenger vehicles.

The leaderboard includes announced members of the NVIDIA DRIVE ecosystem such as Baidu, Daimler-Bosch, Ford, Toyota, Volkswagen, Volvo, Voyage, Yandex and Zoox, as well as many other partners who have yet to be disclosed, all developing autonomous vehicles using NVIDIA end-to-end AI solutions.

Automated Driving Vehicles Leaderboard

The report notes many of the challenges the industry faces, such as comprehensive validation and production costs. NVIDIA delivers autonomous vehicle development tools from the cloud to the car to help companies address these issues.

NVIDIA is unique in providing solutions both for the vehicle and for the data center infrastructure behind the AV industry. With NVIDIA DGX systems and advanced AI learning tools, developers can efficiently train the deep neural networks that run in the vehicle on petabytes of data in the data center. These DNNs can then be tested and validated in the virtual world on the same hardware they would run on in the vehicle using the cloud-based, bit-accurate DRIVE Constellation simulation platform.

The ability to build a seamless workflow with compatible tools can greatly improve efficiency in development, according to Abuelsamid.

“As this technology evolves and matures, developers are going through many iterations of hardware and software,” he said. “Having an ecosystem that allows you to develop at all different levels, from in-vehicle, to data center, to simulation, and transfer knowledge you’ve gained from one to the other is very important.”

By providing our ecosystem with these industry-leading end-to-end solutions, we’re working to bring safer, more efficient transportation to roads around the world, sooner.

Navigant Research provides syndicated and custom research services across a range of technologies encompassing the energy ecosystem. Mobility reports, including the Automated Driving Leaderboard, the Automated Vehicle Compute Platform leaderboard and the Automated Vehicles Forecast, are available here.


Laser Focused: How Multi-View LidarNet Presents Rich Perspective for Self-Driving Cars

Editor’s note: This is the latest post in our NVIDIA DRIVE Labs series, which takes an engineering-focused look at individual autonomous vehicle challenges and how NVIDIA DRIVE addresses them. Catch up on all our automotive posts here.

Lidar can give autonomous vehicles laser focus.

By bouncing laser signals off the surrounding environment, these sensors can enable a self-driving car to construct a detailed and accurate 3D picture of what’s around it.

However, traditional methods for processing lidar data pose significant challenges. These include limitations in the ability to detect and classify different types of objects, scenes and weather conditions, as well as limitations in performance and robustness.

In this DRIVE Labs episode, we introduce our multi-view LidarNet deep neural network, which uses multiple perspectives, or views, of the scene around the car to overcome the traditional limitations of lidar-based processing.

AI-Powered Solutions

AI in the form of DNN-based approaches has become the go-to solution to address traditional lidar perception challenges.

One AI method uses lidar DNNs that perform top‐down or “bird’s eye view” (BEV) object detection on lidar point cloud data. The 3D coordinates of each data point are reprojected via orthographic projection into the view of a virtual camera positioned at some height above the scene, similar to a bird flying overhead.
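To make the projection concrete, here’s a minimal sketch of rasterizing a point cloud into a BEV grid, assuming each cell simply keeps the height of its tallest lidar return. The ranges, cell size and per-cell feature are illustrative placeholders, not the DRIVE implementation.

```python
import numpy as np

def lidar_to_bev(points, x_range=(0.0, 80.0), y_range=(-40.0, 40.0), cell=0.1):
    """Rasterize an (N, 3) lidar point cloud into a top-down (BEV) grid.

    Each cell stores the maximum point height -- a simple stand-in for the
    per-cell features a BEV lidar DNN might consume. Illustrative only.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]

    # Keep only points inside the region of interest around the ego vehicle.
    mask = (x >= x_range[0]) & (x < x_range[1]) & (y >= y_range[0]) & (y < y_range[1])
    x, y, z = x[mask], y[mask], z[mask]

    # Orthographic projection: drop z for indexing and bin x/y into grid cells.
    cols = ((x - x_range[0]) / cell).astype(np.int32)
    rows = ((y - y_range[0]) / cell).astype(np.int32)

    n_cols = int((x_range[1] - x_range[0]) / cell)
    n_rows = int((y_range[1] - y_range[0]) / cell)
    bev = np.full((n_rows, n_cols), -np.inf, dtype=np.float32)

    # Max-pool point heights into their cells.
    np.maximum.at(bev, (rows, cols), z)
    bev[np.isinf(bev)] = 0.0
    return bev
```

The resulting 2D grid is what makes fast 2D convolutions applicable in the first place.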

BEV lidar DNNs use 2D convolutions in their layers to detect dynamic objects such as cars, trucks, buses, pedestrians, cyclists, and other road users. 2D convolutions work fast, so they are well-suited for use in real-time autonomous driving applications.

However, this approach can get tricky when objects look alike top-down. For example, in BEV, pedestrians or bikes may appear similar to objects like poles, tree trunks or bushes, resulting in perception errors.

Another AI method uses 3D lidar point cloud data as input to a DNN that uses 3D convolutions in its layers to detect objects. This improves accuracy since a DNN can detect objects using their 3D shapes. However, 3D convolutional DNN processing of lidar point clouds is difficult to run in real-time for autonomous driving applications.

Enter Multi-View LidarNet

To overcome the limitations of both of these AI-based approaches, we developed our multi‐view LidarNet DNN, which acts in two stages. The first stage extracts semantic information about the scene using lidar scan data in perspective view (Figure 1). This “unwraps” a 360-degree surround lidar range scan so it looks as though the entire panorama is in front of the self-driving car.
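The “unwrapping” can be pictured as a spherical projection: each point’s azimuth maps to an image column and its elevation to a row, turning the 360-degree scan into a range image. The sketch below uses placeholder sensor parameters and is only meant to illustrate the idea, not LidarNet’s actual preprocessing.

```python
import numpy as np

def lidar_to_range_image(points, fov_up_deg=15.0, fov_down_deg=-25.0, h=64, w=2048):
    """Project an (N, 3) lidar point cloud into an h x w range image.

    Azimuth maps to columns and elevation to rows, 'unwrapping' the full
    360-degree scan into a perspective-style panorama. The field of view
    and resolution are assumed values for illustration.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points[:, :3], axis=1) + 1e-6

    azimuth = np.arctan2(y, x)        # [-pi, pi], angle around the vehicle
    elevation = np.arcsin(z / r)      # angle above/below the sensor plane

    fov_up = np.deg2rad(fov_up_deg)
    fov_down = np.deg2rad(fov_down_deg)
    fov = fov_up - fov_down

    # Normalize angles to pixel coordinates.
    cols = np.clip(((azimuth + np.pi) / (2.0 * np.pi) * w).astype(np.int32), 0, w - 1)
    rows = np.clip(((fov_up - elevation) / fov * h).astype(np.int32), 0, h - 1)

    range_image = np.zeros((h, w), dtype=np.float32)
    range_image[rows, cols] = r       # keep the last return that lands in a pixel
    return range_image
```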

This first-stage semantic segmentation approach performs very well for predicting object classes. This is because the DNN can better observe object shapes in perspective view (for example, the shape of a walking human).

The first stage segments the scene into both dynamic objects of different classes, such as cars, trucks, buses, pedestrians, cyclists and motorcyclists, and static road scene components, such as the road surface, sidewalks, buildings, trees and traffic signs.

Figure 1. Multi-view LidarNet perspective view.

 

Figure 2. Multi-view LidarNet top-down bird’s eye view (BEV).

The semantic segmentation output of LidarNet’s first stage is then projected into BEV and combined with height data at each location, which is obtained from the lidar point cloud. The resulting output is applied as input to the second stage (Figure 2).

The second stage DNN is trained on BEV-labeled data to predict top-down 2D bounding boxes around objects identified by the first stage. This stage also uses semantic and height information to extract object instances. This is easier in BEV since objects are not occluding each other in this view.
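Conceptually, the second stage sees both what each BEV cell is (the projected class prediction) and how tall it is (the lidar height). Below is a rough sketch of assembling such an input, with an assumed channel layout rather than the production network’s actual format.

```python
import numpy as np

def build_second_stage_input(semantic_bev, height_bev, num_classes=8):
    """Stack first-stage class predictions (projected into BEV) with per-cell
    lidar height into a multi-channel tensor for the second-stage BEV DNN.

    `semantic_bev` is an (H, W) array of integer class IDs, `height_bev` an
    (H, W) array of heights. The class count and channel order are assumptions
    made for illustration.
    """
    h, w = semantic_bev.shape

    # One-hot encode the semantic class of each BEV cell.
    one_hot = np.zeros((num_classes, h, w), dtype=np.float32)
    rows, cols = np.indices((h, w))
    one_hot[semantic_bev, rows, cols] = 1.0

    # Append the height channel so the DNN sees both semantics and geometry.
    return np.concatenate([one_hot, height_bev[None, :, :].astype(np.float32)], axis=0)
```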

The result of chaining these two DNN stages together is a lidar DNN that consumes only lidar data. It uses end-to-end deep learning to output a rich semantic segmentation of the scene, complete with 2D bounding boxes for objects. By using such methods, it can detect vulnerable road users, such as motorcyclists, bicyclists, and pedestrians, with high accuracy and completeness. Additionally, the DNN is very efficient — inference runs at 7ms per lidar scan on the NVIDIA DRIVE AGX platform.

In addition to multi-view LidarNet, our lidar processing software stack includes a lidar object tracker. The tracker is a computer vision-based post-processing system that uses the BEV 2D bounding box information and lidar point geometry to compute 3D bounding boxes for each object instance. The tracker also helps stabilize per-frame DNN misdetections and, along with a low-level lidar processor, computes geometric fences that represent hard physical boundaries that a car should avoid.

This combination of AI-based and traditional computer vision-based methods increases the robustness of our lidar perception software stack. Moreover, the rich perception information provided by lidar perception can be combined with camera and radar detections to design even more robust Level 4 to Level 5 autonomous systems.


Hail Yeah! How Robotaxis Will Change the Way We Move

From pedicabs to yellow cabs, hailing rides has been a decades-long convenience. App-based services like Uber and Lyft have made on-demand travel even faster and easier.

With the advent of autonomous vehicles, ride-hailing promises to raise the bar on safety and efficiency. Known as robotaxis, these shared vehicles are purpose-built for transporting groups of people along optimized routes, without a human driver at the wheel.

The potential for a shared autonomous mobility industry is enormous. Financial services company UBS estimates that robotaxis could create a $2 trillion market globally over the next decade, with each vehicle generating as much as $27,000 annually.

In dense urban environments, like New York City, experts project that a taxi fleet converted to entirely autonomous vehicles could cut down commutes in some areas from 40 minutes to 15.

On top of the economic and efficiency benefits, autonomous vehicles are never distracted or drowsy. And they can run 24 hours a day, seven days a week, expanding mobility access to more communities.

To transform everyday transportation, the NVIDIA DRIVE ecosystem is gearing up with a new wave of electric, autonomous vehicles.

Up for Cabs

Building a new kind of vehicle from square one requires a fresh perspective. That’s why a crop of startups and technology companies have begun to invest in the idea of a shared car without a steering wheel or pedals.

Already transporting riders in retirement communities in Florida and San Jose, Calif., Voyage is deploying low-speed autonomous vehicles with the goal of widely expanding safe mobility. The company is using DRIVE AGX to operate its SafeStop supercharged automatic braking system in its current fleet of vehicles.

Optimus Ride is a Boston-based self-driving technology company developing systems for geo-fenced environments — pre-defined areas of operation, like a city center or shipping yard.

Its electric, autonomous vehicles run on the high-performance, energy-efficient NVIDIA DRIVE platform, and were the first such vehicles to run in NYC as part of a pilot launched in August.

Optimus Ride

Leveraging the performance of NVIDIA DRIVE AGX Pegasus, which can achieve up to 320 trillion operations per second, smart mobility startup WeRide is developing Level 4 autonomous vehicles to provide accessible transportation to a wide range of passengers.

Starting from scratch, self-driving startup and DRIVE ecosystem member Zoox is developing a purpose-built vehicle for on-demand, autonomous transportation. Its robotaxi encompasses a futuristic vision of everyday mobility, able to drive in both directions.

Zoox says it plans to launch its zero-emissions vehicle for testing this year, followed by an autonomous taxi service.

At GTC China in December, ride-hailing giant DiDi Chuxing announced it was developing Level 4 autonomous vehicles for its mobility services using NVIDIA DRIVE and AI technology. Delivering 10 billion passenger trips per year, DiDi is working toward the safe, large-scale application of autonomous driving technology.

DiDi

Sharing Expertise for Shared Mobility

When it comes to industry-changing innovations, sometimes two (or three) heads are better than one.

Global automakers, suppliers and startups are also working to solve the question of shared autonomous mobility, collaborating on their own visions of the robotaxi of the future.

In December, Mercedes-Benz parent company Daimler and global supplier Bosch launched the first phase of their autonomous ride-hailing pilot in San Jose. The app-based service shuttles customers in an automated Mercedes-Benz S-Class monitored by a safety driver.

The companies are collaborating with NVIDIA to eventually launch a robotaxi powered by NVIDIA DRIVE AGX Pegasus.

Daimler

Across the pond, autonomous vehicle solution provider AutoX and Swedish electric vehicle manufacturer NEVS are working to deploy robotaxis in Europe by the end of this year.

The companies, which came together through the NVIDIA DRIVE ecosystem, are developing an electric autonomous vehicle based on NEVS’ mobility-focused concept and powered by NVIDIA DRIVE. The goal of this collaboration is to bring these safe and efficient technologies to everyday transportation around the world.

Startup Pony.AI is also collaborating with global automakers such as Toyota and Hyundai, developing a robotaxi fleet with the NVIDIA DRIVE AGX platform at its core.

As the NVIDIA DRIVE ecosystem pushes into the next decade of autonomous transportation, safer, more convenient rides will soon be just a push of a button away. At GTC 2020, attendees will get a glimpse of just where this future is going — register today with code CMAUTO for a 20 percent discount.


Look Under the Hood of Self-Driving Development at GTC 2020

The progress of self-driving cars can be seen in test vehicles on the road. But the major mechanics for autonomous driving development are making tracks in the data center.

Training, testing and validating self-driving technology requires enormous amounts of data, which must be managed by a robust hardware and software infrastructure. Companies around the world are turning to high-performance, energy-efficient GPU technology to build the AI infrastructure needed to put autonomous driving deep neural networks (DNNs) through their paces.

At next month’s GPU Technology Conference in San Jose, Calif., automakers, suppliers, startups and safety experts will discuss how they’re tackling the infrastructure component of autonomous vehicle development.

By attending sessions on topics such as DNN training, data creation and validation in simulation, attendees can learn the end-to-end process of building a self-driving car in the data center.

Mastering Learning Curves

Without a human at the wheel, autonomous vehicles rely on a wide range of DNNs that perceive the surrounding environment. To recognize everything from pedestrians to street signs and traffic lights, these networks require exhaustive training on mountains of driving data.

Tesla has delivered nearly half a million vehicles with AI-assisted driving capabilities worldwide. These vehicles gather data while continuously receiving the latest models through over-the-air updates.

At GTC, Tim Zaman, machine learning infrastructure engineering manager at Tesla, will share how the automaker built and maintains a low-maintenance, efficient and lightning-fast, yet user-friendly, machine-learning infrastructure that its engineers rely on to develop Tesla Autopilot.

As more test vehicles outfitted with sensors drive on public roads, the pool of training data can grow by terabytes. Ke Li, software engineer at Pony.ai, will talk about how the self-driving startup is building a GPU-centric infrastructure that can process the increasingly heavy sensor data more efficiently, scale with future advances in GPU compute power, and can integrate with other heterogeneous compute platforms.

For NVIDIA’s own autonomous vehicle development, we’ve built a scalable infrastructure to train self-driving DNNs. Clement Farabet, vice president of AI Infrastructure at NVIDIA, will discuss Project MagLev, an internal end-to-end AI platform for developing NVIDIA DRIVE software.

The session will cover how MagLev enables autonomous AI designers to iterate training of new DNN designs across thousands of GPU systems and validate the behavior of these designs over multi-petabyte-scale datasets.

Virtual Test Tracks

Before autonomous vehicles are widely deployed on public roads, they must be proven safe for all possible conditions the car could encounter — including rare and dangerous scenarios.

Simulation in the data center presents a powerful solution to what has otherwise been an insurmountable obstacle. By tapping into the virtual world, developers can safely and accurately test and validate autonomous driving hardware and software without leaving the office.

Zvi Greenstein, general manager at NVIDIA, will give an overview of the NVIDIA DRIVE Constellation VR simulation platform, a cloud-based solution that enables hardware-in-the-loop testing and large-scale deployment in data centers. The session will cover how NVIDIA DRIVE Constellation is used to validate safe autonomous driving and how companies can partner with NVIDIA and join the DRIVE Constellation ecosystem.

Having data as diverse and random as the real world is also a major challenge when it comes to validation. Nikita Jaipuria and Rohan Bhasin, research engineers at Ford, will discuss how to generate photorealistic synthetic data using generative adversarial networks (GANs). These simulated images can be used to represent a wide variety of situations for comprehensive autonomous vehicle testing.

Regulators and third-party safety agencies are also using simulation technology to evaluate self-driving cars. Stefan Merkl, mobility regional manager at TÜV SÜD America, Inc., will outline the agency’s universal framework to help navigate patchwork local regulations, providing a unified method for the assessment of automated vehicles.

In addition to these sessions, GTC attendees will hear the latest NVIDIA news and experience demos and hands-on training for a comprehensive view of the infrastructure needed to build the car of the future. Register before Feb. 13 to take advantage of early rates and receive 20% off with code CMAUTO.


NVIDIA’s Neda Cvijetic Explains the Science Behind Self-Driving Cars

What John Madden was to pro football, Neda Cvijetic is to autonomous vehicles. No one’s better at explaining the action, in real time, than Cvijetic.

Cvijetic, senior manager of autonomous vehicles at NVIDIA, drives our NVIDIA DRIVE Labs series of videos and blogs breaking down the science behind autonomous vehicles.

A Serbian-American electrical engineer, Cvijetic seems destined for this role. She literally grew up in the shadow of Nikola Tesla. His statue in Belgrade stood across the street from her childhood home.

On this week’s AI Podcast, Cvijetic spoke to host Rick Merritt about what’s driving autonomous vehicles. She also shared her perspective on how both broad initiatives and day-to-day actions can promote diversity in AI.

 Key Points From This Episode:

  • Autonomous vehicles use three key techniques: perception, localization, and control and planning.
  • Each self-driving car runs on dozens of deep neural networks, which are each trained on thousands of hours of real-world driving data and on NVIDIA DRIVE Constellation, which provides extensive testing in virtual reality before the car even hits the road.
  • Autonomous vehicles drive safely because diversity and redundancy are designed into their systems. Multiple cameras with overlapping fields of view, radar, and more provide a wealth of perception data for the highest level of accuracy.

Tweetables:

“I want every driver out there to feel that they understand AI, to understand how AI works in self-driving cars, and feel empowered by that understanding” — Neda Cvijetic [1:56]

“The NVIDIA DRIVE simulator seeks to create some of these corner cases that might take years to actually observe” — Neda Cvijetic [10:36]

You Might Also Like

Deep Learning 101: Will Ramey, NVIDIA Senior Manager for GPU Computing

If you’ve ever wanted a guided tour through the history of AI, this is the episode for you. NVIDIA’s Will Ramey covers the big bang of AI and the concepts that are now defining the industry.

How AI Turns Kiddie Cars Into Fast and Frugal Autonomous Racers

Take brains, a few hundred bones and a pink Barbie jeep. What have you got? For inventive hackers, a new sport filled with f-words — fast, furious, frugal. Founder of the Power Racing Series Jim Burke talks about why he’s bringing autonomous vehicles to a racing event built on the backs of $500 kiddie cars.

AutoX’s Professor X on the State of Automotive Autonomy

Jianxiong Xiao, CEO of startup AutoX, is speeding towards fully autonomous vehicles, as defined by the National Highway Traffic Safety Administration. Ranging from Level 0, or no automation, to Level 5, full autonomy, Xiao is pursuing Level 4 — a car that can perform all driving functions under certain conditions.

 

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

  

Make Our Podcast Better

Have a few minutes to spare? Fill out this short listener survey. Your answers will help us make a better podcast.


Man Meets Machine: Autonomous Driving Gets the Human Touch at CES 2020

Autonomous driving technology aims to eliminate the human at the wheel. However, the latest plans for the car of the future envision a greater integration of human-machine interaction throughout the rest of the vehicle.

At CES 2020, companies showed how they’re working toward safer and more efficient transportation. Drawing inspiration from creative concepts in other industries, and even trying out new areas of technological expertise, NVIDIA DRIVE partners showcased new ideas for human and machine harmony in the coming decade.

Quite a Concept

Drawing gearheads and technophiles alike, the Vision-S concept car was one of the most buzzed-about vehicles on the show floor. The electric vehicle incorporates cutting-edge imaging and sensor technology for autonomous driving as well as crystal-clear graphics for an intelligent cockpit experience. The most innovative feature of all? It wasn’t built by an automaker.

Courtesy of Sony

Electronics and entertainment company Sony worked with NVIDIA and other autotech companies to build its first ever vehicle prototype. Designed to showcase its latest sensor and AI infotainment technology, the sleek vehicle attracted crowds throughout the week.

The panoramic dashboard screen provides driving information, communication and entertainment, all effortlessly controlled by fluid gestures. Screens on the back of the front seats as well as speakers built into the headrests ensure every passenger can have a personalized experience.

The car’s hardware is designed for over-the-air updates, improving autonomous driving capabilities as well as adapting to the human driver and passengers’ preferences over time.

Though Sony isn’t building the Vision-S for production, the car’s concept for autonomy and seamless user experience provides a window into a highly intelligent transportation future.

Driving Instinct 

Mercedes-Benz, the automaker behind the first production AI cockpit, plans to take its revolutionary MBUX infotainment technology, powered by NVIDIA, even further.

During the CES opening keynote, Mercedes CEO Ola Kallenius said the next frontier for AI-powered infotainment is truly intuitive gesture control. The automaker’s vision for the future was then demonstrated in the Vision AVTR concept vehicle, designed for the upcoming sequels to the blockbuster film Avatar.

The hallmark feature of the Vision AVTR is the center console control element that replaces a typical steering wheel. It’s activated by human touch, using biometrics to control the car.

Courtesy of Mercedes-Benz

The concept illustrates Mercedes’ long-term vision of facilitating more natural interactions between the driver and the vehicle. And given the proliferation of the MBUX infotainment system — which is now in nearly 20 Mercedes models — this future may not be too far away.

AIs on the Road

CES attendees also experienced the latest innovations in autonomous driving firsthand from the NVIDIA DRIVE ecosystem.

Robotaxi company Yandex ferried conference goers around Las Vegas neighborhoods in a completely driverless ride. Powered by NVIDIA technology, the prototype vehicle reached speeds up to 45 miles per hour without any human intervention.

Courtesy of Yandex

Yandex has been rapidly expanding in its mission to provide safe autonomous transportation to the public. Since last CES, the company has driven 1.5 million autonomous miles and provided more than 5,000 robotaxi rides with no human driver at the wheel.

Supplier Faurecia Clarion showed attendees how it’s working to alleviate the stress of parking with its autonomous valet system. Using the high-performance, energy-efficient DRIVE AGX platform, the advanced driver assistance system seamlessly navigated a parking lot.

NVIDIA DRIVE ecosystem member Luminar brought the industry closer to widespread self-driving car deployment with the introduction of its Hydra lidar sensor. Designed for production Level 3 and Level 4 autonomous driving, the sensor is powered by NVIDIA Xavier and can detect and classify objects up to 250 meters away.

Courtesy of Luminar

Analog Devices uses the NVIDIA DRIVE platform to help autonomous vehicles see and understand the world around them. The company demoed its imaging radar point cloud at CES, using NVIDIA DRIVE AGX Xavier to process raw data from an imaging radar sensor into a perception point cloud.

With these latest developments in production technology as well as a cohesive vision for future AI-powered transportation, the age of self-driving is just a (human) touch away.


What Is Active Learning?

Reading one book on a particular subject won’t make you an expert. Nor will reading multiple books containing similar material. Truly mastering a skill or area of knowledge requires lots of information coming from a diversity of sources.

The same is true for autonomous driving and other AI-powered technologies.

The deep neural networks responsible for self-driving functions require exhaustive training, both on situations they’re likely to encounter during daily trips and on unusual ones they’ll hopefully never come across. The key to success is making sure they’re trained on the right data.

What’s the right data? Situations that are new or uncertain. No repeating the same scenarios over and over.

Active learning is a training data selection method for machine learning that automatically finds this diverse data. It builds better datasets in a fraction of the time it would take for humans to curate.

It works by employing a trained model to go through collected data, flagging frames it’s having trouble recognizing. These frames are then labeled by humans and added to the training data, increasing the model’s accuracy for situations like perceiving objects in tough conditions.

Finding the Needle in the Data Haystack

The amount of data needed to train an autonomous vehicle is enormous. Experts at RAND estimate that vehicles need 11 billion miles of driving to perform just 20 percent better than a human. This translates to more than 500 years of nonstop driving in the real world with a fleet of 100 cars.
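A quick back-of-the-envelope check shows how that estimate translates into centuries of driving. The average speed here is an assumption, but any plausible value gives the same order of magnitude.

```python
# Rough sanity check of the RAND figure cited above.
MILES_NEEDED = 11_000_000_000   # estimated miles of driving required
FLEET_SIZE = 100                # test cars driving nonstop, in parallel
AVG_SPEED_MPH = 25              # assumed average speed (placeholder)

hours_per_car = MILES_NEEDED / FLEET_SIZE / AVG_SPEED_MPH
years = hours_per_car / (24 * 365)
print(f"~{years:.0f} years of nonstop driving for a 100-car fleet")  # roughly 500 years
```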

And not just any driving data will do. Effective training data must contain diverse and challenging conditions to ensure the car can drive safely.

If humans were to annotate this validation data to find these scenarios, the 100-car fleet driving just eight hours a day would require more than 1 million labelers to manage frames from all the cameras on the vehicle — a gargantuan effort. In addition to the labor cost, the compute and storage resources needed to train DNNs on this data would be infeasible.

The combination of data annotation and curation poses a major challenge to autonomous vehicle development. By applying AI to this process, it’s possible to cut down on the time and cost spent on training, while also increasing the accuracy of the networks.

Why Active Learning

There are three common methods to selecting autonomous driving DNN training data. Random sampling extracts frames from a pool of data at uniform intervals, capturing the most common scenarios but likely leaving out rare patterns.

Metadata-based sampling uses basic tags (for example, rain, night) to select data, making it easy to find commonly encountered difficult situations, but missing unique frames that aren’t easily classified, like a tractor trailer or man on stilts crossing the road.

Not all data is created equal. Example of a common highway scene (top left) vs. some unusual driving scenarios (top right: cyclist doing a wheelie at night, bottom left: truck towing trailer towing quad, bottom right: pedestrian on jumping stilts).

Finally, manual curation uses metadata tags combined with visual browsing by human annotators — a time-consuming task that can be error-prone and difficult to scale.

Active learning makes it possible to automate the selection process while choosing valuable data points. It starts by training a dedicated DNN on already-labeled data. The network then sorts through unlabeled data, selecting frames that it doesn’t recognize, thereby finding data that would be challenging to the autonomous vehicle algorithm.

That data is then reviewed and labeled by human annotators, and added to the training data pool.
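In code terms, one round of this loop might look like the sketch below, where model.predict and label_fn are hypothetical stand-ins for the detection network and the human annotation step. It illustrates the uncertainty-based selection idea, not NVIDIA’s production pipeline.

```python
def active_learning_round(model, unlabeled_frames, label_fn, budget=1000):
    """Select the frames the model is least confident about, send them for
    human labeling, and return the newly labeled examples.

    `model.predict(frame)` is assumed to return detections with a `.score`
    confidence; `label_fn(frame)` stands in for human annotation. Sketch only.
    """
    scored = []
    for frame in unlabeled_frames:
        detections = model.predict(frame)
        # Use the least-confident detection as a simple per-frame uncertainty score.
        confidence = min((d.score for d in detections), default=0.0)
        scored.append((confidence, frame))

    # The lowest-confidence frames are the ones the model struggles with.
    scored.sort(key=lambda item: item[0])
    selected = [frame for _, frame in scored[:budget]]

    # Human annotators label the selected frames; the results join the training pool.
    return [(frame, label_fn(frame)) for frame in selected]
```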

Active learning has already shown it can improve the detection accuracy of self-driving DNNs over manual curation. In our own research, we’ve found that the increase in precision when training with active learning data can be 3x for pedestrian detection and 4.4x for bicycle detection relative to the increase for data selected manually.

Advanced training methods like active learning, as well as transfer learning and federated learning, are most effective when run on a robust, scalable AI infrastructure. This makes it possible to manage massive amounts of data in parallel, shortening the development cycle.

NVIDIA will be providing developers access to these training tools as well as our rich library of autonomous driving deep neural networks on the NVIDIA GPU Cloud container registry.


NVIDIA DRIVE Ecosystem Charges into Next Decade of AI

Top transportation companies are using NVIDIA DRIVE to lead the way into the coming era of autonomous mobility.

Electric vehicle makers, mapping companies and mobility providers announced at the GPU Technology Conference in Suzhou, China, that they’re leveraging NVIDIA DRIVE in the development of their self-driving solutions.

By joining the DRIVE ecosystem, each of these companies can contribute industry experience and expertise to a worldwide community dedicated to delivering safer and more efficient transportation.

“NVIDIA created an open platform so the industry can team up together to realize this autonomous future,” NVIDIA CEO Jensen Huang said during his GTC keynote. “The rich ecosystem we’ve developed is a testament to the openness of this platform.”

Driving Autonomy Forward

DiDi Chuxing, the world’s leading mobile transportation platform, announced that it will use NVIDIA AI technologies to bring Level 4 autonomous vehicles and intelligent ride-hailing services to market. The company, which provides more than 30 million rides a day, will use the NVIDIA DRIVE platform and NVIDIA AI data center solutions to develop its fleets and provide services in the DiDi Cloud.

As part of the centralized AI processing of DiDi’s autonomous vehicles, NVIDIA DRIVE enables data to be fused from all types of sensors (cameras, lidar, radar, etc.) using numerous deep neural networks to understand the 360-degree environment surrounding the car and plan a safe path forward.

To train these DNNs, DiDi will use NVIDIA GPU data center servers. For cloud computing, DiDi will also build an AI infrastructure and launch virtual GPU cloud servers for computing, rendering and gaming.

DiDi autonomous vehicle on display at GTC.

Automakers, truck manufacturers and software startups on the GTC show floor displayed the significant progress they’ve achieved on the NVIDIA DRIVE platform. Autonomous pilots from Momenta and WeRide continue to grow in sophistication, while autonomous trucking company TuSimple expands its highway trucking routes.

Next-generation production vehicles from Xpeng and Karma Automotive will leverage the DRIVE AGX platform for AI-powered driver assistance systems.

Karma Automotive is developing AI-assisted driving on the NVIDIA DRIVE AGX platform.

Triangulating Success with HD Mapmakers

At GTC, Amap and Kuandeng announced that their high-definition maps are now compatible with DRIVE Localization, an open, scalable platform that enables autonomous vehicles to localize themselves within centimeters on roads worldwide.

Localization makes it possible for a self-driving car to pinpoint its location so it can understand its surroundings and establish a sense of the road and lane structures. This enables it to plan lane changes ahead of what’s immediately visible and determine lane paths even when markings aren’t clear.

DRIVE Localization makes that centimeter-level positioning possible by matching semantic landmarks in the vehicle’s environment with features from HD maps by companies like Amap and Kuandeng to determine exactly where it is in real time.
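As a toy illustration of the matching idea (not the DRIVE Localization algorithm itself), a coarse position estimate can be nudged toward the map by comparing landmarks observed around the vehicle with their surveyed positions in the HD map:

```python
import numpy as np

def refine_position(observed_landmarks, map_landmarks, initial_pose_xy):
    """Estimate a position correction from landmark correspondences.

    `observed_landmarks` (N, 2) are landmark positions implied by the initial
    pose guess; `map_landmarks` (M, 2) are surveyed HD-map positions. The
    nearest-neighbor / mean-offset scheme here is purely illustrative.
    """
    corrections = []
    for obs in observed_landmarks:
        # Match each observation to the closest map landmark.
        dists = np.linalg.norm(map_landmarks - obs, axis=1)
        corrections.append(map_landmarks[np.argmin(dists)] - obs)

    # The average offset is applied as a correction to the pose estimate.
    offset = np.mean(corrections, axis=0)
    return np.asarray(initial_pose_xy, dtype=np.float64) + offset
```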

With more than 100 million daily active users of its maps services, Amap is one of the leading mapping companies in China. It has already collected HD map data on more than 320,000 km of roadways and is working with automakers such as General Motors, Geely and FAW to provide commercial maps.

Kuandeng is dedicated to providing critical infrastructure data services for autonomous vehicles. Its HD map solution and high-precision localization product provide core technical and data support for automakers, suppliers and startups. Kuandeng is also building a cloud-based, crowdsourced platform for real-time HD map updates, creating a closed loop that keeps map data current.

Mapping the world with such a high level of precision is a virtually impossible task for one company to accomplish alone. By partnering with the top regional mapmakers around the world, NVIDIA is helping develop highly accurate and comprehensive maps for autonomous vehicle navigation and localization.

“HD maps make it possible to pinpoint an autonomous vehicle’s location with centimeter-level accuracy,” said Frank Jiang, vice general manager of the Automotive Business Department at Amap. “By making our maps compatible with NVIDIA DRIVE Localization, we can enable highly precise localization for autonomous vehicles.”

By growing the NVIDIA DRIVE ecosystem with leading companies around the world, we’re working to deliver safer, more efficient transportation to global roads, sooner.


As AI Universe Keeps Expanding, NVIDIA CEO Lays Out Plan to Accelerate All of It

With the AI revolution spreading across industries everywhere, NVIDIA founder and CEO Jensen Huang took the stage Wednesday to unveil the latest technology for speeding its mass adoption.

His talk — to more than 6,000 scientists, engineers and entrepreneurs gathered for this week’s GPU Technology Conference in Suzhou, two hours west of Shanghai — touched on advancements in AI deployment, as well as NVIDIA’s work in the automotive, gaming, and healthcare industries.

“We build computers for the Einsteins, Leonardo da Vincis and Michelangelos of our time,” Huang told the crowd, which overflowed into the aisles. “We build these computers for all of you.”

Huang explained that demand is surging for technology that can accelerate the delivery of AI services of all kinds. And NVIDIA’s deep learning platform — which the company updated Wednesday with new inferencing software — promises to be the fastest, most efficient way to deliver these services.

It’s the latest example of how NVIDIA achieves spectacular speedups by applying a combination of GPUs optimized for parallel computation, work across the entire computing stack, and algorithm and ecosystem expertise in focused vertical markets.

“It is accepted now that GPU accelerated computing is the path forward as Moore’s law has ended,” Huang said.

Real-Time Conversational AI

The biggest news: groundbreaking new inference software enabling smarter, real-time conversational AI.

NVIDIA TensorRT 7 — the seventh generation of the company’s inference software development kit — features a new deep learning compiler designed to automatically optimize and accelerate the increasingly complex recurrent and transformer-based neural networks needed for complex new applications, such as AI speech.

This speeds up the components of conversational AI by 10x compared with CPUs, driving latency below the 300-millisecond threshold considered necessary for real-time interactions.

“To have the ability to understand your intention, make recommendations, do searches and queries for you, and summarize what they’ve learned to a text to speech system… that loop is now possible,” Huang said. “It is now possible to achieve very natural, very rich, conversational AI in real time.”

Real-Time Recommendations: Baidu and Alibaba

Another challenge for AI: driving a new generation of powerful systems, known as recommender systems, able to connect individuals with what they’re looking for in a world where the options available to them are spiraling exponentially.

“The era of search has ended: if I put out a trillion, billion, million things and they’re changing all the time, how can you find anything?” Huang asked. “The era of search is over. The era of recommendations has arrived.”

Baidu — one of the world’s largest search companies – is harnessing NVIDIA technology to power advanced recommendation engines.

“It solves this problem of taking this enormous amount of data, and filtering it through this recommendation system so you only see 10 things,” Huang said.

With GPUs, Baidu can now train the models that power its recommender systems 10x faster, reducing costs, and, over the long term, increasing the accuracy of its models, improving the quality of its recommendations.

Another example of such systems’ power: Alibaba, which relies on NVIDIA technology to help power the recommendation engines behind the success of Singles Day.

This new shopping festival, which takes place on Nov. 11 — or 11.11 — generated $38 billion in sales last month. That’s up by nearly a quarter from last year’s $31 billion, and more than double the online sales on Black Friday and Cyber Monday combined.

Helping to drive its success are recommender systems that display items that match user preferences, improving the click-through rate — which is closely watched in the e-commerce industry as a key sales driver. Its systems need to run in real-time and at an incredible scale, something that’s only possible with GPUs.

“Deep learning inference is wonderful for deep recommender systems and these recommender systems will be the engine for the Internet,” Huang said. “Everything we do in the future, everything we do now, passes through a recommender system.”

Accelerating Automotive Innovations

Huang also announced NVIDIA will provide the transportation industry with source access to its NVIDIA DRIVE deep neural networks (DNNs) for autonomous vehicle development.

NVIDIA DRIVE has become a de facto standard for AV development, used broadly by automakers, truck manufacturers, robotaxi companies, software companies and universities.

Now, NVIDIA is providing source access to its pre-trained AI models and training code for AV developers. Using a suite of NVIDIA AI tools, the ecosystem can freely extend and customize the models to increase the robustness and capabilities of their self-driving systems.

In addition to providing source access to the DNNs, Huang announced the availability of a suite of advanced tools so developers can customize and enhance NVIDIA’s DNNs utilizing their own data sets and target feature sets. These tools allow the training of DNNs utilizing active learning, federated learning and transfer learning, Huang said.

Huang also announced NVIDIA DRIVE AGX Orin, the world’s highest-performance, most advanced system-on-a-chip. It delivers 7x the performance and 3x the efficiency per watt of Xavier, NVIDIA’s previous-generation automotive SoC. Orin — which will be available for customer production runs in 2022 — boasts 17 billion transistors and 12 CPU cores, and is capable of over 200 trillion operations per second.

Orin will be woven into a stack of products — all running a single architecture and compatible with software developed on Xavier — able to scale from simple Level 2 autonomy all the way up to full Level 5 autonomy.

And Huang announced that DiDi — the world’s largest ride-hailing company — will adopt NVIDIA DRIVE to bring robotaxis and intelligent ride-hailing services to market.

“We believe everything that moves will be autonomous some day,” Huang said. “This is not the work of one company, this is the work of one industry, and we’ve created an open platform so we can all team up together to realize this autonomous future.”

Game On

Adding to NVIDIA’s growing footprint in cloud gaming, Huang announced a collaboration with Tencent Games in cloud gaming.

“We are going to extend the wonderful experience of PC gaming to all the computers that are underpowered today, the opportunity is quite extraordinary,” Huang said. “We can extend PC gaming to the other 800 million gamers in the world.”

NVIDIA’s technology will power Tencent Games’ START cloud gaming service, which began testing earlier this year. START gives gamers access to AAA games on underpowered devices anytime, anywhere.

Huang also announced that six leading game developers will join the ranks of game developers around the world who have been using the real-time ray tracing capabilities of NVIDIA’s GeForce RTX to transform the image quality and lighting effects of their upcoming titles.

Ray tracing is a graphics rendering technique that brings real-time, cinematic-quality rendering to content creators and game developers. NVIDIA GeForce RTX GPUs contain specialized processor cores designed to accelerate ray tracing so the visual effects in games can be rendered in real time.

The upcoming games include a mix of blockbusters, new franchises, triple-A titles and indie fare — all using real-time ray tracing to bring ultra-realistic lighting models to their gameplay.

They include Boundary, from Surgical Scalpels Studios; Convallaria, from LoongForce; F.I.S.T., from Shanghai TiGames; an unnamed project from miHoYo; Ring of Elysium, from Tencent; and Xuan Yuan Sword VII, from Softstar.

Accelerating Medical Advances, 5G

This year, Huang said, NVIDIA has added two major new applications to CUDA – 5G vRAN and genomic processing. With each, NVIDIA is supported by world leaders in their respective industries – Ericsson in telecommunications and BGI in genomics.

Since the first human genome was sequenced in 2003, the cost of whole genome sequencing has steadily shrunk, far outstripping the pace of Moore’s law. That’s led to an explosion of genomic data, with the total amount of sequence data doubling every seven months.

“The ability to sequence the human genome in its totality is incredibly powerful,” Huang said.

To put this data to work — and unlock the promise of truly personalized medicine — Huang announced that NVIDIA is working with Beijing Genomics Institute.

BGI is using NVIDIA V100 GPUs and software from Parabricks — an Ann Arbor, Michigan-based startup acquired by NVIDIA earlier this month — to build the highest throughput genome sequencer yet, potentially driving down the cost of genomics-based personalized medicine.

“It took 15 years to sequence the human genome for the first time,” Huang said. “It is now possible to sequence 16 whole genomes per day.”

Huang also announced the availability of the NVIDIA Parabricks Genomic Analysis Toolkit, and its availability on NGC, NVIDIA’s hub for GPU-optimized software for deep learning, machine learning, and high-performance computing.

Accelerated Robotics with NVIDIA Isaac

As the talk wound to a close, Huang announced a new version of NVIDIA’s Isaac software development kit. The Isaac SDK achieves an important milestone in establishing a unified robotic development platform — enabling AI, simulation and manipulation capabilities.

The showstopper: Leonardo, a robotic arm with exquisite articulation created by NVIDIA researchers in Seattle, that not only performed a sophisticated task — recognizing and rearranging four colored cubes — but responded almost tenderly to the actions of the people around it in real time. It purred out a deep squeak, seemingly out of a Steven Spielberg movie.

As the audience watched, the robotic arm gently plucked a yellow block from Huang’s hand and set it down. It then went on to rearrange four colored blocks, gently stacking them with fine precision.

The feat was the result of sophisticated simulation and training that allowed the robot to learn in virtual worlds before being put to work in the real world. “And this is how we’re going to create robots in the future,” Huang said.

Accelerating Everything

Huang finished his talk by recapping NVIDIA’s sprawling accelerated computing story, one that spans ray tracing, cloud gaming, recommendation systems, real-time conversational AI, 5G, genomics analysis, autonomous vehicles, robotics and more.

“I want to thank you for your collaboration to make accelerated computing amazing and thank you for coming to GTC,” Huang said.
