Coming to a Desktop Near You: The Future of Self-Driving

There has never been a better time to learn how AI will transform the way people, goods and services move.

During GTC Digital, anyone can experience the latest developments in AI technology for free, from the comfort of their own home. Hundreds of expert talks and training sessions covering autonomous vehicles, robotics, healthcare, finance and more will soon be available at the click of a button.

Beginning March 25, GTC Digital attendees can tune in to sessions hosted by autonomous driving leaders from Ford, Toyota, Zoox and more, as well as receive virtual training from NVIDIA experts on developing AI for self-driving.

Check out what’s in store for the NVIDIA autonomous vehicle ecosystem.

Learn from Leaders

GTC Digital talks let AI experts and developers delve into their latest work, sharing key insights on how to deploy intelligent vehicle technology.

This year’s automotive speakers are covering the latest topics in self-driving development, including AI training, simulation and software.

  • Neda Cvijetic, senior manager of Autonomous Vehicles at NVIDIA, will apply an engineering focus to widely acknowledged autonomous vehicle challenges, and explain how NVIDIA is tackling them. This session will air live with a question-and-answer segment to follow.
  • Clement Farabet, vice president of AI Infrastructure at NVIDIA, will discuss NVIDIA’s end-to-end AI platform for developing NVIDIA DRIVE software. This live talk will cover how to scale the infrastructure to train self-driving deep neural networks and include a Q&A session.
  • Tokihiko Akita, project research fellow, Toyota Technological Institute, will detail how deep neural networks can be used for autonomous vehicle object recognition and tracking in adverse weather with millimeter-wave radar sensors.
  • Nikita Jaipuria, research scientist for computer vision and machine learning, and Rohan Bhasin, research engineer, both at Ford Motor Company, will discuss in their April 2 session how to leverage generative adversarial networks to create synthetic data for autonomous vehicle training and validation.
  • Zejia Zheng and Jeff Pyke, software engineers at Zoox, will outline the Zoox TensorRT conversion pipeline to optimize deep neural network deployment on high-performance NVIDIA GPUs.

Virtual Hands-On Training

Developers will also get a chance to dive deeper into autonomy with NVIDIA Deep Learning Institute courses as well as interact virtually with experts in autonomous vehicle development.

DLI will offer a variety of instructor-led training sessions addressing the biggest developer challenges in areas such as autonomous vehicles, manufacturing and robotics. Receive intensive instruction on topics such as autonomous vehicle perception and sensor integration in our live sessions this April.

Get detailed answers to all your development questions from NVIDIA’s in-house specialists in our Connect with Experts sessions, covering intelligent cockpits, autonomous driving software development and validation, and more.

Register to access these sessions and more for free and receive the latest updates on GTC Digital.


Driving Progress: NVIDIA Leads Autonomous Vehicle Industry Report

As the industry begins to deploy autonomous driving technology, NVIDIA is leading the way in delivering a complete end-to-end solution, including data center infrastructure, software toolkits, libraries and frameworks, as well as high-performance, energy-efficient compute for safer, more efficient transportation.

Every year, advisory firm Navigant Research releases a report on the state of the autonomous vehicle industry. This year, the company broke out its in-depth research into “Automated Vehicle Compute Platforms” and “Automated Driving Vehicles” leaderboards, ranking the major players in each category.

In the 2020 Automated Vehicle Compute Platforms report, NVIDIA led the list of companies developing AV platforms to power the AI that will replace the human driver.

Automated Vehicle Compute Platforms Leaderboard

“NVIDIA is a proven company with a long history of producing really strong hardware and software,” said Sam Abuelsamid, principal research analyst at Navigant and author of both reports. “Having a platform that produces ever-increasing performance and power efficiency is crucial for the entire industry.”

While NVIDIA provides solutions for developing autonomous vehicles, the Automated Driving Vehicles report covers the companies building such vehicles for production. Manufacturers using these compute platforms include automakers, tier 1 suppliers, robotaxi companies and software startups that are part of the NVIDIA DRIVE ecosystem.

These reports are just one piece of the larger autonomous vehicle development picture. While they don’t contain every detail of the current state of self-driving systems, they provide valuable insight into how the industry approaches this transformative technology.

Computing the Way Forward

The Navigant leaderboard uses a detailed methodology in determining both the companies it covers and how they rank.

The Automated Vehicle Compute Platforms report looks at companies developing compute platforms whose silicon designs are at least in sample production or soon will be. These systems must have fail-operational capability to ensure safety when there’s no human backup at the wheel.

The report evaluates companies using factors such as vision, technology, go-to-market strategy and partners. NVIDIA’s top rating is based on leading performance across this wide range of factors.

NVIDIA Strategy and Execution Scores

With the high-performance, energy-efficient DRIVE AGX platform, NVIDIA topped the inaugural leaderboard report for its progress in the space. The platform’s scalable architecture and the open, modular NVIDIA DRIVE software make it easy for autonomous vehicle manufacturers to scale DRIVE solutions to meet their individual production plans.

“Without powerful, energy-efficient compute and the reliability and robustness to handle the automotive environment, none of the rest would be possible,” Abuelsamid said. “Looking at the current compute platforms, NVIDIA is the clear leader.”

Compute is a critical element to autonomous vehicles. It’s what makes it possible for a vehicle to process in real time the terabytes of data coming in from camera, radar and lidar sensors. It also enables self-driving software to run multiple deep neural networks simultaneously, achieving the diversity and redundancy in operation that is critical for safety. Furthermore, by ensuring a compute platform has enough performance headroom, developers can continuously add features and improve the system over time.

Empowering Our Ecosystem

A second report from Navigant, the Automated Driving Vehicles report, ranks companies developing Level 4 autonomous driving systems — which don’t require a human backup in defined conditions — for passenger vehicles.

The leaderboard includes announced members of the NVIDIA DRIVE ecosystem such as Baidu, Daimler-Bosch, Ford, Toyota, Volkswagen, Volvo, Voyage, Yandex and Zoox, as well as many other partners who have yet to be disclosed, all developing autonomous vehicles using NVIDIA end-to-end AI solutions.

Automated Driving Vehicles Leaderboard

The report notes many of the challenges the industry faces, such as comprehensive validation and production costs. NVIDIA delivers autonomous vehicle development tools from the cloud to the car to help companies address these issues.

NVIDIA is unique in providing solutions both for the vehicle and for the data center infrastructure the AV industry depends on. With NVIDIA DGX systems and advanced AI learning tools, developers can use petabytes of data in the data center to efficiently train the deep neural networks that run in the vehicle. These DNNs can then be tested and validated in the virtual world, on the same hardware they would run on in the vehicle, using the cloud-based, bit-accurate DRIVE Constellation simulation platform.

The ability to build a seamless workflow with compatible tools can greatly improve efficiency in development, according to Abuelsamid.

“As this technology evolves and matures, developers are going through many iterations of hardware and software,” he said. “Having an ecosystem that allows you to develop at all different levels, from in-vehicle, to data center, to simulation, and transfer knowledge you’ve gained from one to the other is very important.”

By providing our ecosystem with these industry-leading end-to-end solutions, we’re working to bring safer, more efficient transportation to roads around the world, sooner.

Navigant Research provides syndicated and custom research services across a range of technologies encompassing the energy ecosystem. Mobility reports, including the Automated Driving Leaderboard, the Automated Vehicle Compute Platform leaderboard and the Automated Vehicles Forecast, are available here.


Laser Focused: How Multi-View LidarNet Presents Rich Perspective for Self-Driving Cars

Editor’s note: This is the latest post in our NVIDIA DRIVE Labs series, which takes an engineering-focused look at individual autonomous vehicle challenges and how NVIDIA DRIVE addresses them. Catch up on all our automotive posts here.

Lidar can give autonomous vehicles laser focus.

By bouncing laser signals off the surrounding environment, these sensors can enable a self-driving car to construct a detailed and accurate 3D picture of what’s around it.

However, traditional methods for processing lidar data pose significant challenges. These include limitations in the ability to detect and classify different types of objects, scenes and weather conditions, as well as limitations in performance and robustness.

In this DRIVE Labs episode, we introduce our multi-view LidarNet deep neural network, which uses multiple perspectives, or views, of the scene around the car to overcome the traditional limitations of lidar-based processing.

AI-Powered Solutions

AI in the form of DNN-based approaches has become the go-to solution to address traditional lidar perception challenges.

One AI method uses lidar DNNs that perform top‐down or “bird’s eye view” (BEV) object detection on lidar point cloud data. The 3D coordinates of each data point are reprojected, via orthographic projection, into the view of a virtual camera positioned at some height above the scene, much like a bird looking down from overhead.

BEV lidar DNNs use 2D convolutions in their layers to detect dynamic objects such as cars, trucks, buses, pedestrians, cyclists, and other road users. 2D convolutions work fast, so they are well-suited for use in real-time autonomous driving applications.
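To make the projection concrete, here is a minimal sketch of how a lidar point cloud might be binned into a BEV grid before being fed to a 2D-convolutional detector. The grid extent, resolution and per-cell features are illustrative assumptions, not details of any production pipeline.

```python
import numpy as np

def points_to_bev(points, x_range=(0.0, 80.0), y_range=(-40.0, 40.0),
                  resolution=0.25):
    """Bin lidar points (N x 4 array: x, y, z, intensity) into a BEV grid.

    Returns an (H, W, 3) array holding occupancy, max height and summed
    intensity per cell. Illustrative encoding only.
    """
    h = int((y_range[1] - y_range[0]) / resolution)
    w = int((x_range[1] - x_range[0]) / resolution)
    bev = np.zeros((h, w, 3), dtype=np.float32)

    # Keep only points that fall inside the grid.
    keep = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[keep]

    # Orthographic ("bird's eye") projection: x and y pick the cell; z becomes a feature.
    col = ((pts[:, 0] - x_range[0]) / resolution).astype(np.int32)
    row = ((pts[:, 1] - y_range[0]) / resolution).astype(np.int32)

    for r, c, p in zip(row, col, pts):
        bev[r, c, 0] = 1.0                         # occupancy
        bev[r, c, 1] = max(bev[r, c, 1], p[2])     # max height in the cell
        bev[r, c, 2] += p[3]                       # summed intensity
    return bev
```

A BEV detector then treats the resulting (H, W, 3) tensor like an ordinary image, applying 2D convolutions across it.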

However, this approach can get tricky when objects look alike from the top down. For example, in BEV, pedestrians or bikes may appear similar to poles, tree trunks or bushes, resulting in perception errors.

Another AI method uses 3D lidar point cloud data as input to a DNN that uses 3D convolutions in its layers to detect objects. This improves accuracy since a DNN can detect objects using their 3D shapes. However, 3D convolutional DNN processing of lidar point clouds is difficult to run in real-time for autonomous driving applications.

Enter Multi-View LidarNet

To overcome the limitations of both of these AI-based approaches, we developed our multi‐view LidarNet DNN, which acts in two stages. The first stage extracts semantic information about the scene using lidar scan data in perspective view (Figure 1). This “unwraps” a 360-degree surround lidar range scan so it looks as though the entire panorama is in front of the self-driving car.

This first-stage semantic segmentation approach performs very well for predicting object classes. This is because the DNN can better observe object shapes in perspective view (for example, the shape of a walking human).

The first stage segments the scene into both dynamic objects of different classes, such as cars, trucks, buses, pedestrians, cyclists and motorcyclists, and static road scene components, such as the road surface, sidewalks, buildings, trees and traffic signs.
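The “unwrapping” step can be pictured as a spherical projection that maps each point’s azimuth and elevation to the column and row of a panoramic range image. The sketch below is a minimal illustration of that idea; the image size and vertical field of view are assumed values, not the network’s actual input format.

```python
import numpy as np

def scan_to_range_image(points, height=64, width=2048,
                        fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project a lidar point cloud (N x 3 array: x, y, z) into a 2D range image.

    Each pixel stores the range of the point that lands there, producing the
    "unwrapped" panoramic view a segmentation DNN can consume.
    (Illustrative spherical projection; parameters are assumptions.)
    """
    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)
    fov = fov_up - fov_down

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                            # azimuth, -pi..pi
    pitch = np.arcsin(z / np.maximum(rng, 1e-6))      # elevation

    # Map azimuth to columns and elevation to rows.
    u = ((yaw + np.pi) / (2.0 * np.pi)) * width
    v = (1.0 - (pitch - fov_down) / fov) * height
    u = np.clip(u.astype(np.int32), 0, width - 1)
    v = np.clip(v.astype(np.int32), 0, height - 1)

    image = np.zeros((height, width), dtype=np.float32)
    image[v, u] = rng                                 # later points overwrite earlier ones
    return image
```

A first-stage segmentation network would then assign a class to every pixel of an image like this, and therefore to every lidar point behind it.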

Figure 1. Multi-view LidarNet perspective view.

 

Figure 2. Multi-view LidarNet top-down bird’s eye view (BEV).

The semantic segmentation output of LidarNet’s first stage is then projected into BEV and combined with height data at each location, which is obtained from the lidar point cloud. The resulting output is applied as input to the second stage (Figure 2).

The second stage DNN is trained on BEV-labeled data to predict top-down 2D bounding boxes around objects identified by the first stage. This stage also uses semantic and height information to extract object instances. This is easier in BEV since objects are not occluding each other in this view.

The result of chaining these two DNN stages together is a lidar DNN that consumes only lidar data. It uses end-to-end deep learning to output a rich semantic segmentation of the scene, complete with 2D bounding boxes for objects. By using such methods, it can detect vulnerable road users, such as motorcyclists, bicyclists, and pedestrians, with high accuracy and completeness. Additionally, the DNN is very efficient: inference runs at 7 ms per lidar scan on the NVIDIA DRIVE AGX platform.

In addition to multi-view LidarNet, our lidar processing software stack includes a lidar object tracker. The tracker is a computer vision-based post-processing system that uses the BEV 2D bounding box information and lidar point geometry to compute 3D bounding boxes for each object instance. The tracker also helps stabilize per-frame DNN misdetections and, along with a low-level lidar processor, computes geometric fences that represent hard physical boundaries that a car should avoid.
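As a rough illustration of how a BEV 2D box plus raw point geometry can yield a 3D box, the sketch below gathers the points inside a box footprint and uses their height spread. It is a simplified stand-in for the ideas described above, not NVIDIA’s tracker.

```python
import numpy as np

def lift_bev_box_to_3d(points, box_xy, box_size, yaw):
    """Turn a BEV 2D box into a 3D box using the lidar points inside it.

    points:   N x 3 array of lidar points (x, y, z)
    box_xy:   (cx, cy) center of the box in BEV
    box_size: (length, width) of the box footprint
    yaw:      heading of the box in radians
    Returns (cx, cy, cz, length, width, height, yaw), or None if empty.
    (Simplified illustration of the post-processing idea.)
    """
    cx, cy = box_xy
    length, width = box_size

    # Rotate points into the box frame, then test against the footprint.
    dx, dy = points[:, 0] - cx, points[:, 1] - cy
    lx = dx * np.cos(-yaw) - dy * np.sin(-yaw)
    ly = dx * np.sin(-yaw) + dy * np.cos(-yaw)
    inside = (np.abs(lx) <= length / 2) & (np.abs(ly) <= width / 2)

    z = points[inside, 2]
    if z.size == 0:
        return None                       # no supporting points for this box
    z_min, z_max = z.min(), z.max()
    return (cx, cy, (z_min + z_max) / 2, length, width, z_max - z_min, yaw)
```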

This combination of AI-based and traditional computer vision-based methods increases the robustness of our lidar perception software stack. Moreover, the rich perception information provided by lidar perception can be combined with camera and radar detections to design even more robust Level 4 to Level 5 autonomous systems.


Hail Yeah! How Robotaxis Will Change the Way We Move

From pedicabs to yellow cabs, hailing rides has been a decades-long convenience. App-based services like Uber and Lyft have made on-demand travel even faster and easier.

With the advent of autonomous vehicles, ride-hailing promises to raise the bar on safety and efficiency. Known as robotaxis, these shared vehicles are purpose-built for transporting groups of people along optimized routes, without a human driver at the wheel.

The potential for a shared autonomous mobility industry is enormous. Financial services company UBS estimates that robotaxis could create a $2 trillion market globally over the next decade, with each vehicle generating as much as $27,000 annually.

In dense urban environments, like New York City, experts project that a taxi fleet converted to entirely autonomous vehicles could cut down commutes in some areas from 40 minutes to 15.

On top of the economic and efficiency benefits, autonomous vehicles are never distracted or drowsy. And they can run 24 hours a day, seven days a week, expanding mobility access to more communities.

To transform everyday transportation, the NVIDIA DRIVE ecosystem is gearing up with a new wave of electric, autonomous vehicles.

Up for Cabs

Building a new kind of vehicle from square one requires a fresh perspective. That’s why a crop of startups and technology companies have begun to invest in the idea of a shared car without a steering wheel or pedals.

Already transporting riders in retirement communities in Florida and San Jose, Calif., Voyage is deploying low-speed autonomous vehicles with the goal of widely expanding safe mobility. The company uses DRIVE AGX to run SafeStop, its supercharged automatic braking system, in its current fleet of vehicles.

Optimus Ride is a Boston-based self-driving technology company developing systems for geo-fenced environments — pre-defined areas of operation, like a city center or shipping yard.

Its electric, autonomous vehicles run on the high-performance, energy-efficient NVIDIA DRIVE platform, and were the first such vehicles to run in NYC as part of a pilot launched in August.

Optimus Ride

Leveraging the performance of NVIDIA DRIVE AGX Pegasus, which can achieve up to 320 trillion operations per second, smart mobility startup WeRide is developing Level 4 autonomous vehicles to provide accessible transportation to a wide range of passengers.

Starting from scratch, self-driving startup and DRIVE ecosystem member Zoox is developing a purpose-built vehicle for on-demand, autonomous transportation. Its robotaxi embodies a futuristic vision of everyday mobility and is able to drive in either direction.

Zoox says it plans to launch its zero-emissions vehicle for testing this year, followed by an autonomous taxi service.

At GTC China in December, ride-hailing giant Didi Chuxing announced it was developing Level 4 autonomous vehicles for its mobility services using NVIDIA DRIVE and AI technology. Delivering 10 billion passenger trips per year, DiDi is working toward the safe, large-scale application of autonomous driving technology.

DiDi

Sharing Expertise for Shared Mobility

When it comes to industry-changing innovations, sometimes two (or three) heads are better than one.

Global automakers, suppliers and startups are also working to solve the challenge of shared autonomous mobility, collaborating on their own visions of the robotaxi of the future.

In December, Mercedes-Benz parent company Daimler and global supplier Bosch launched the first phase of their autonomous ride-hailing pilot in San Jose. The app-based service shuttles customers in an automated Mercedes-Benz S-Class monitored by a safety driver.

The companies are collaborating with NVIDIA to eventually launch a robotaxi powered by NVIDIA DRIVE AGX Pegasus.

Daimler

Across the pond, autonomous vehicle solution provider AutoX and Swedish electric vehicle manufacturer NEVS are working to deploy robotaxis in Europe by the end of this year.

The companies, which came together through the NVIDIA DRIVE ecosystem, are developing an electric autonomous vehicle based on NEVS’ mobility-focused concept and powered by NVIDIA DRIVE. The goal of this collaboration is to bring these safe and efficient technologies to everyday transportation around the world.

Startup Pony.AI is also collaborating with global automakers such as Toyota and Hyundai, developing a robotaxi fleet with the NVIDIA DRIVE AGX platform at its core.

As the NVIDIA DRIVE ecosystem pushes into the next decade of autonomous transportation, safer, more convenient rides will soon be just a push of a button away. At GTC 2020, attendees will get a glimpse of just where this future is going. Register today with code CMAUTO for a 20 percent discount.


Look Under the Hood of Self-Driving Development at GTC 2020

The progress of self-driving cars can be seen in test vehicles on the road. But the major mechanics for autonomous driving development are making tracks in the data center.

Training, testing and validating self-driving technology requires enormous amounts of data, which must be managed by a robust hardware and software infrastructure. Companies around the world are turning to high-performance, energy-efficient GPU technology to build the AI infrastructure needed to put autonomous driving deep neural networks (DNNs) through their paces.

At next month’s GPU Technology Conference in San Jose, Calif., automakers, suppliers, startups and safety experts will discuss how they’re tackling the infrastructure component of autonomous vehicle development.

By attending sessions on topics such as DNN training, data creation and validation in simulation, attendees can learn the end-to-end process of building a self-driving car in the data center.

Mastering Learning Curves

Without a human at the wheel, autonomous vehicles rely on a wide range of DNNs that perceive the surrounding environment. To recognize everything from pedestrians to street signs and traffic lights, these networks require exhaustive training on mountains of driving data.

Tesla has delivered nearly half a million vehicles with AI-assisted driving capabilities worldwide. These vehicles gather data while continuously receiving the latest models through over-the-air updates.

At GTC, Tim Zaman, machine learning infrastructure engineering manager at Tesla, will share how the automaker built and maintains a low-maintenance, efficient and lightning-fast, yet user-friendly, machine-learning infrastructure that its engineers rely on to develop Tesla Autopilot.

As more test vehicles outfitted with sensors drive on public roads, the pool of training data can grow by terabytes. Ke Li, software engineer at Pony.ai, will talk about how the self-driving startup is building a GPU-centric infrastructure that can process increasingly heavy sensor data more efficiently, scale with future advances in GPU compute power, and integrate with other heterogeneous compute platforms.

For NVIDIA’s own autonomous vehicle development, we’ve built a scalable infrastructure to train self-driving DNNs. Clement Farabet, vice president of AI Infrastructure at NVIDIA, will discuss Project MagLev, an internal end-to-end AI platform for developing NVIDIA DRIVE software.

The session will cover how MagLev enables autonomous AI designers to iterate training of new DNN designs across thousands of GPU systems and validate the behavior of these designs over multi-petabyte-scale datasets.

Virtual Test Tracks

Before autonomous vehicles are widely deployed on public roads, they must be proven safe for all possible conditions the car could encounter — including rare and dangerous scenarios.

Simulation in the data center presents a powerful solution to what has otherwise been an insurmountable obstacle. By tapping into the virtual world, developers can safely and accurately test and validate autonomous driving hardware and software without leaving the office.

Zvi Greenstein, general manager at NVIDIA, will give an overview of the NVIDIA DRIVE Constellation VR simulation platform, a cloud-based solution that enables hardware-in-the-loop testing and large-scale deployment in data centers. The session will cover how NVIDIA DRIVE Constellation is used to validate safe autonomous driving and how companies can partner with NVIDIA and join the DRIVE Constellation ecosystem.

Having data as diverse and random as the real world is also a major challenge when it comes to validation. Nikita Jaipuria and Rohan Bhasin, research engineers at Ford, will discuss how to generate photorealistic synthetic data using generative adversarial networks (GANs). These simulated images can be used to represent a wide variety of situations for comprehensive autonomous vehicle testing.

Regulators and third-party safety agencies are also using simulation technology to evaluate self-driving cars. Stefan Merkl, mobility regional manager at TÜV SÜD America, Inc., will outline the agency’s universal framework to help navigate patchwork local regulations, providing a unified method for the assessment of automated vehicles.

In addition to these sessions, GTC attendees will hear the latest NVIDIA news and experience demos and hands-on training for a comprehensive view of the infrastructure needed to build the car of the future. Register before Feb. 13 to take advantage of early rates and receive 20% off with code CMAUTO.


Man Meets Machine: Autonomous Driving Gets the Human Touch at CES 2020

Autonomous driving technology aims to eliminate the human at the wheel. However, the latest plans for the car of the future envision a greater integration of human-machine interaction throughout the rest of the vehicle.

At CES 2020, companies showed how they’re working toward safer and more efficient transportation. Drawing inspiration from creative concepts in other industries, and even trying out new areas of technological expertise, NVIDIA DRIVE partners showcased new ideas for human and machine harmony in the coming decade.

Quite a Concept

Drawing gearheads and technophiles alike, the Vision-S concept car was one of the most buzzed-about vehicles on the show floor. The electric vehicle incorporates cutting-edge imaging and sensor technology for autonomous driving as well as crystal-clear graphics for an intelligent cockpit experience. The most innovative feature of all? It wasn’t built by an automaker.

Courtesy of Sony

Electronics and entertainment company Sony worked with NVIDIA and other autotech companies to build its first ever vehicle prototype. Designed to showcase its latest sensor and AI infotainment technology, the sleek vehicle attracted crowds throughout the week.

The panoramic dashboard screen provides driving information, communication and entertainment, all effortlessly controlled by fluid gestures. Screens on the back of the front seats as well as speakers built into the headrests ensure every passenger can have a personalized experience.

The car’s hardware is designed for over-the-air updates, improving autonomous driving capabilities as well as adapting to the human driver and passengers’ preferences over time.

Though Sony isn’t building the Vision-S for production, the car’s concept for autonomy and seamless user experience provides a window into a highly intelligent transportation future.

Driving Instinct 

Mercedes-Benz, the automaker behind the first production AI cockpit, plans to take its revolutionary MBUX infotainment technology, powered by NVIDIA, even further.

During the CES opening keynote, Mercedes CEO Ola Kallenius said the next frontier for AI-powered infotainment is truly intuitive gesture control. The automaker’s vision for the future was then demonstrated in the Vision AVTR concept vehicle, designed for the upcoming sequels to the blockbuster film Avatar.

The hallmark feature of the Vision AVTR is the center console control element that replaces a typical steering wheel. It’s activated by human touch, using biometrics to control the car.

Courtesy of Mercedes-Benz

The concept illustrates Mercedes’ long-term vision of facilitating more natural interactions between the driver and the vehicle. And given the proliferation of the MBUX infotainment system — which is now in nearly 20 Mercedes models — this future may not be too far away.

AIs on the Road

CES attendees also experienced the latest innovations in autonomous driving firsthand from the NVIDIA DRIVE ecosystem.

Robotaxi company Yandex ferried conference goers around Las Vegas neighborhoods in a completely driverless ride. Powered by NVIDIA technology, the prototype vehicle reached speeds up to 45 miles per hour without any human intervention.

Courtesy of Yandex

Yandex has been rapidly expanding in its mission to provide safe autonomous transportation to the public. Since last CES, the company has driven 1.5 million autonomous miles and provided more than 5,000 robotaxi rides with no human driver at the wheel.

Supplier Faurecia Clarion showed attendees how it’s working to alleviate the stress of parking with its autonomous valet system. Using the high-performance, energy-efficient DRIVE AGX platform, the advanced driver assistance system seamlessly navigated a parking lot.

NVIDIA DRIVE ecosystem member Luminar brought the industry closer to widespread self-driving car deployment with the introduction of its Hydra lidar sensor. Designed for production Level 3 and Level 4 autonomous driving, the sensor is powered by NVIDIA Xavier and can detect and classify objects up to 250 meters away.

Courtesy of Luminar

Analog Devices uses the NVIDIA DRIVE platform to help autonomous vehicles see and understand the world around them. The company demoed its imaging radar point cloud at CES, using NVIDIA DRIVE AGX Xavier to process raw data from an imaging radar sensor into a perception point cloud.

With these latest developments in production technology as well as a cohesive vision for future AI-powered transportation, the age of self-driving is just a (human) touch away.


What Is Active Learning?

Reading one book on a particular subject won’t make you an expert. Nor will reading multiple books containing similar material. Truly mastering a skill or area of knowledge requires lots of information coming from a diversity of sources.

The same is true for autonomous driving and other AI-powered technologies.

The deep neural networks responsible for self-driving functions require exhaustive training, both in situations they’re likely to encounter during daily trips and in unusual ones they’ll hopefully never come across. The key to success is making sure they’re trained on the right data.

What’s the right data? Situations that are new or uncertain. No repeating the same scenarios over and over.

Active learning is a training data selection method for machine learning that automatically finds this diverse data. It builds better datasets in a fraction of the time it would take for humans to curate.

It works by employing a trained model to go through collected data and flag frames it’s having trouble recognizing. These frames are then labeled by humans and added to the training data. This increases the model’s accuracy in situations such as perceiving objects in tough conditions.
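As a minimal sketch of that selection step, assume a detector whose predictions carry confidence scores; frames with low average confidence (or no detections at all) get queued for human labeling. The scoring rule, threshold and model interface below are assumptions for illustration only.

```python
def select_frames_for_labeling(unlabeled_frames, model,
                               uncertainty_threshold=0.35, budget=1000):
    """Pick the frames the current model is least sure about.

    `model.predict(frame)` is assumed to return a list of detections, each
    with a `score` attribute; a frame's uncertainty here is 1 - mean score.
    (Illustrative acquisition function, not a production implementation.)
    """
    scored = []
    for frame in unlabeled_frames:
        detections = model.predict(frame)
        if not detections:
            uncertainty = 1.0   # nothing recognized: treat as maximally uncertain
        else:
            mean_conf = sum(d.score for d in detections) / len(detections)
            uncertainty = 1.0 - mean_conf
        if uncertainty >= uncertainty_threshold:
            scored.append((uncertainty, frame))

    # Send the most uncertain frames (up to the labeling budget) to annotators.
    scored.sort(key=lambda item: item[0], reverse=True)
    return [frame for _, frame in scored[:budget]]
```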

Finding the Needle in the Data Haystack

The amount of data needed to train an autonomous vehicle is enormous. Experts at RAND estimate that vehicles need 11 billion miles of driving to perform just 20 percent better than a human. This translates to more than 500 years of nonstop driving in the real world with a fleet of 100 cars.
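A quick back-of-the-envelope check of that figure, assuming a fleet average speed of 25 mph (the speed is our assumption, not part of the RAND estimate):

```python
miles_needed = 11_000_000_000      # RAND estimate
fleet_size = 100                   # cars driving nonstop
avg_speed_mph = 25                 # assumed average speed

hours_per_car = miles_needed / fleet_size / avg_speed_mph   # 4.4 million hours
years_per_car = hours_per_car / (24 * 365)                  # ~502 years
print(round(years_per_car))        # -> 502
```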

And not just any driving data will do. Effective training data must contain diverse and challenging conditions to ensure the car can drive safely.

If humans were to annotate this validation data to find these scenarios, the 100-car fleet driving just eight hours a day would require more than 1 million labelers to manage frames from all the cameras on the vehicle — a gargantuan effort. In addition to the labor cost, the compute and storage resources needed to train DNNs on this data would be infeasible.
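One way to get a feel for that labeler estimate is the rough calculation below; the camera count, frame rate and per-labeler throughput are purely illustrative assumptions.

```python
cars = 100
hours_per_day = 8
cameras_per_car = 5                  # assumed sensor count
fps = 30                             # assumed camera frame rate
frames_per_labeler_per_day = 400     # assumed throughput for detailed annotation

frames_per_day = cars * hours_per_day * 3600 * cameras_per_car * fps
labelers_needed = frames_per_day / frames_per_labeler_per_day
print(f"{frames_per_day:,} frames/day -> ~{labelers_needed:,.0f} labelers")
# roughly 432,000,000 frames/day -> ~1,080,000 labelers
```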

The combination of data annotation and curation poses a major challenge to autonomous vehicle development. By applying AI to this process, it’s possible to cut down on the time and cost spent on training, while also increasing the accuracy of the networks.

Why Active Learning

There are three common methods for selecting autonomous driving DNN training data. Random sampling extracts frames from a pool of data at uniform intervals, capturing the most common scenarios but likely leaving out rare patterns.

Metadata-based sampling uses basic tags (for example, rain, night) to select data, making it easy to find commonly encountered difficult situations, but missing unique frames that aren’t easily classified, like a tractor trailer or man on stilts crossing the road.

Caption: Not all data is created equal. Example of a common highway scene (top left) vs. some unusual driving scenarios (top right: cyclist doing a wheelie at night, bottom left: truck towing trailer towing quad, bottom right: pedestrian on jumping stilts).

Finally, manual curation uses metadata tags combined with visual browsing by human annotators — a time-consuming task that can be error-prone and difficult to scale.

Active learning makes it possible to automate the selection process while choosing valuable data points. It starts by training a dedicated DNN on already-labeled data. The network then sorts through unlabeled data, selecting frames that it doesn’t recognize, thereby finding data that would be challenging to the autonomous vehicle algorithm.

That data is then reviewed and labeled by human annotators, and added to the training data pool.

Active learning has already shown it can improve the detection accuracy of self-driving DNNs over manual curation. In our own research, we’ve found that the increase in precision when training with active learning data can be 3x for pedestrian detection and 4.4x for bicycle detection relative to the increase for data selected manually.

Advanced training methods like active learning, as well as transfer learning and federated learning, are most effective when run on a robust, scalable AI infrastructure. This makes it possible to manage massive amounts of data in parallel, shortening the development cycle.

NVIDIA will be providing developers access to these training tools as well as our rich library of autonomous driving deep neural networks on the NVIDIA GPU Cloud container registry.


In the News: Mobileye’s Autonomous Industry Leadership

Mobileye 2020 CES Booth 6

» Download all images (ZIP, 72 MB)

Amid heavy debate about the future of autonomous vehicles (AVs), Intel’s Mobileye has maintained a clear and composed vision for the road ahead. From its early and sustained effort to deliver AV safety standards to its under-the-hood tour of its camera-only self-driving cars, the company is leading by example to deliver the transformational potential of AVs. Projecting significant and sustained revenue growth for its business over the next decade, and announcing new deals for advanced driver-assistance systems (ADAS) and driverless mobility-as-a-service (MaaS), Mobileye is demonstrating how its strategy is helping it achieve global scale and bringing the company closer to becoming a complete mobility provider.

More: Autonomous Driving at Intel | Mobileye News

Read on to see what experts have to say about Mobileye’s growing momentum and the company’s vision for realizing an AV future:

Intel’s Mobileye has a plan to dominate self-driving – and it might work (Ars Technica): “Mobileye doesn’t have Elon Musk’s star power or Google’s billions. But it has something that’s arguably even more important: a dominant position in today’s market for advanced driver-assistance systems (ADAS). … In a speech at the Consumer Electronics Show, Mobileye CEO Amnon Shashua made clear just how big of a strategic advantage this is.”

Mobileye, Intel’s fastest-growing business, explains its big bet on robotaxis (ZDNet): “Two years after Intel acquired Mobileye for $15 [billion], the maker of autonomous vehicle technology is exceeding expectations. It plans to grow its business with robotaxis and data monetization.”

Self-driving supplier Mobileye on timeline, costs, regulations for autonomous vehicles roll out (CNBC): “If more cars will be autonomous, more lives would be saved. A computer will do a better job than a human, eventually,” [Shashua] said. “It will even rival the cost of public transportation. So all that we know about transportation will change if we can make it work.”

Mobileye expands its robotaxi footprint with a new deal in South Korea (TechCrunch): “Mobileye announced an agreement to test and eventually deploy a robotaxi service in Daegu City, South Korea, the latest example of the company’s strategy to expand beyond its traditional business of supplying automakers with computer vision technology that power advanced driver-assistance systems.”

Emphasis on cameras over lidar on autonomous vehicles sets Mobileye apart from competition, CEO says (Bloomberg): “I think our biggest landmark is the fact that you can have two revolutions. One revolution is lifesaving. Now the more autonomous cars on the road, the more lives [that] will be saved. And the second revolution is a revolution in transportation – the fact that you can offer mobility at prices that rival public transportation.”

Intel’s Mobileye demos autonomous car equipped only with cameras, no other sensors (Reuters): “Intel Corp released a video of its Mobileye autonomous car navigating the streets of Jerusalem for about 20 minutes with the help of 12 on-board cameras and, unusually, no other sensors.”

Watch Mobileye’s self-driving car drive through Jerusalem using only cameras (The Verge): “At the Consumer Electronics Show in Las Vegas this week, [Mobileye] demonstrated how one of its autonomous test vehicles navigated the complex streets of Jerusalem using cameras only.”

Now “vidar” is a thing (Axios): “Mobileye CEO Amnon Shashua introduced auto industry followers at CES to new terminology this week: ‘Vidar’ is a computer vision system he claims can match expensive laser-based lidar solutions using only camera sensors.”

Intel grows bullish on autonomous future (Automotive News): “The company’s approach to developing both driverless and driver-assistance features is something of an outlier in the technology realm.”

Intel’s CFO talks about the AMD threat, chip profits and the future of AI (Barron’s): “Mobileye … is both about great hardware technology, but also extraordinary software. We’re investing heavily, and it’s getting adopted so rapidly, that it’s actually an attractive return in the near term and very attractive in the long term,” Intel CFO George Davis said. “If you look at the design wins in level two and level three [autonomous driving], Mobileye is leading across the world.”


2020 CES: Mobileye Raises the Bar

Equipped with a business strategy unlike any other automotive supplier and growth ambitions to become a complete mobility provider, Mobileye President and CEO Prof. Amnon Shashua called on the industry to be transparent with its autonomous driving technology and then showed a 23-minute unedited, uninterrupted drive of an autonomous vehicle (AV) using camera-only sensors.

More: All Mobileye and Intel News from CES 2020 | Mobileye News | Mobileye’s Computer Vision (Event Replay) | Autonomous Driving at Intel

During his annual CES address, Shashua provided an under-the-hood tour of Mobileye’s leading computer vision technology, showed how the company’s mapping strategy is helping it achieve global scale, and introduced new deals for advanced driver-assistance systems (ADAS) and driverless mobility-as-a-service (MaaS). Scroll down for a synopsis of Mobileye’s updates from #CES2020.

Under the Hood with Mobileye: In his annual CES address, Intel Senior Vice President and Mobileye CEO Prof. Amnon Shashua called for more transparency in technology to enable the future of autonomous driving. In front of a captivated audience in Las Vegas, Shashua went under the hood of Mobileye’s computer vision, presenting new details behind the company’s latest technology advancements to demonstrate the innovative approach it is taking to make autonomy a reality. For the first time, Shashua discussed “VIDAR,” Mobileye’s unique solution for achieving outputs akin to lidar using only camera sensors. In addition, he detailed how Mobileye achieves pixel-level scene segmentation that can be used to detect tiny fragments of road users such as wheelchairs, open vehicle doors and more, as well as the ways in which Mobileye technology turns two-dimensional sensors into 3D understanding. Highlighting the progress and purpose of Mobileye’s drive to full autonomy, Shashua’s address showcased exactly how the company will lead the industry in realizing autonomous driving. » Watch full presentation | » Download speaker presentation

Camera-Driven AV Navigates Jerusalem: Mobileye is developing two truly redundant sensing systems: one with surround-view cameras alone and the other with radars and lidars. In this unedited video demonstrating the camera-only technology in Jerusalem, you can see Mobileye’s car successfully navigate a complex driving environment heavy with pedestrians, unguarded intersections, delivery vehicles and more. This is the everyday capability of Mobileye’s technology. » Watch video on YouTube

Mobileye EyeQ numbers infographic

Mobileye in Numbers: At CES 2020, Mobileye revealed new growth metrics demonstrating the continued strength of Intel’s fastest-growing business, including more than 54 million EyeQ chips shipped to date. 2019 was another record year for the company, with sales close to $1 billion driven by significant growth in the ADAS market. Mobileye’s future business is expanding greatly with forays into data monetization and the nascent robotaxi market. » Click for full-size infographic

Mobileye Maps Las Vegas: Using its crowd-sourced Road Experience Management™ (REM™) technology, Mobileye created a demonstration high-definition map of more than 400 km (248 miles) of Las Vegas roads from over 16,000 drives. Map creation of Nevada-area roads took less than 24 hours. This map provides centimeter-level accuracy for thousands of on-road and near-road objects, including 60,000 signs, 20,000 poles and more than 1,500 km of lane centerlines. The near-real-time capability of REM coupled with the extremely low-bandwidth data upload (approximately 10 kilobits/km) from millions of Mobileye-equipped passenger cars makes this technology highly scalable and practical for both advanced ADAS (L2+) solutions and full AVs including driverless mobility-as-a-service (MaaS) fleets. » Download video: “2020 CES: Mobileye Maps Las Vegas (B-Roll)”

Mobileye 2020 CES Shashua Presentation

Mobileye in China: Mobileye announced a new agreement with SAIC, a leading Chinese OEM, to use Mobileye’s REM mapping technology to map China for L2+ ADAS deployment while paving the way for autonomous vehicles in the country. The deployment of the mapping solution in China presents opportunities for additional OEM partners to enter the Chinese market with map-related features. China is the first country to benefit from the four Mobileye strategic product categories. With the addition of the SAIC agreement, Mobileye’s China footprint now includes L2+ ADAS, mapping (a first for China), MaaS and consumer AVs. Caption: Professor Amnon Shashua speaks at CES on Tuesday, Jan. 7, 2020. (Credit: Walden Kirsch/Intel Corporation) » Click for full image

Mobileye 2020 CES Booth 3 1

Mobileye Expands Driverless to South Korea: Mobileye and the leaders of Daegu Metropolitan City, South Korea, announced an agreement to establish a long-term cooperation to test and deploy robotaxi-based mobility solutions powered by Mobileye’s autonomous vehicle technology. Mobileye will integrate its industry-leading self-driving system into vehicles to enable a driverless MaaS operation. The agreement with Daegu City, one of South Korea’s largest metropolitan areas, extends Mobileye’s global MaaS footprint. Combined with Mobileye’s previously announced robotaxi-based mobility services agreements, the new deal shows how Mobileye is quickly scaling its autonomous MaaS ambitions globally. Caption: Jack Weast, Mobileye vice president and Intel senior principal engineer, gives a tour of the Mobileye booth at CES 2020.  (Credit: Walden Kirsch/Intel Corporation) » Click for full image


2020 CES: Mobileye’s Global Ambitions Take Shape with New Deals in China, South Korea

What’s New: With sales close to $1 billion in 2019 and expected to rise by double digits this year, Mobileye’s global ambitions in advanced driver-assistance systems (ADAS) and autonomous mobility-as-a-service (MaaS) came into sharper focus with two agreements announced today. SAIC, a leading Chinese OEM, plans to use Mobileye’s REM mapping technology to map China for L2+ ADAS deployment while paving the way for autonomous vehicles in the country. And the leaders of Daegu Metropolitan City, South Korea, agreed to establish a long-term cooperation to deploy MaaS based on Mobileye’s self-driving system.

“These two new agreements build our global footprint in both MaaS and ADAS and demonstrate our commitment to true global leadership toward full autonomy.”
–Prof. Amnon Shashua, Mobileye president & CEO and Intel senior vice president

Why It Matters: The two deals show how Mobileye, an Intel Company, is executing on its multiprong strategy toward full autonomy, which includes mapping, ADAS, MaaS and consumer AVs. The agreements build on other recent announcements, including: an agreement with RATP in partnership with the city of Paris to bring robotaxis to France; a collaboration with NIO to manufacture Mobileye’s self-driving system and sell consumer AVs based on that system, and to supply robotaxis exclusively to Mobileye for China and other markets; a joint venture with UniGroup in China for use of map data; and a joint venture with Volkswagen and Champion Motors to operate an autonomous ride-hailing fleet in Israel.

Based on third-party data, Mobileye estimates the autonomous MaaS total addressable market (TAM) at $160 billion by 2030. Mobileye’s ADAS leadership, uniquely scalable mapping tools and global robotaxi-based mobility ambitions have been designed to address this massive opportunity.

China is the first country to benefit from the four Mobileye strategic product categories. With the addition of the SAIC agreement, Mobileye’s China footprint now includes L2+ ADAS, mapping (a first for China), MaaS and consumer AVs.

How the SAIC Agreement Works: SAIC and Mobileye have signed an agreement to use Mobileye’s Road Experience Management™ (REM™) mapping technology on SAIC vehicles via SAIC’s licensed map subsidiary (Heading). SAIC vehicles will contribute to Mobileye’s RoadBook by gathering information on China’s roadways, creating a high-definition map of the country that can be used by vehicles with L2+ and higher levels of autonomy. The deployment of the mapping solution in China presents opportunities for additional OEM partners to enter the Chinese market with map-related features.

The SAIC agreement marks Mobileye’s first design win with a major Chinese automaker to harvest road data while also utilizing Mobileye’s REM mapping technology to enable L2+ in passenger vehicles.

SAIC joins other Mobileye OEM partners around the world in collecting road data to enable a global real-time high-definition map. It is the first Chinese OEM to use Mobileye’s REM technology to offer sharper ADAS capabilities and accelerate the development of autonomous driving in China.

How the Daegu Metropolitan City Agreement Works: Mobileye and Daegu City will collaborate to test and deploy robotaxi-based mobility solutions powered by Mobileye’s autonomous vehicle technology. Mobileye will integrate its industry-leading self-driving system into vehicles to enable a driverless MaaS operation. Daegu Metropolitan City partners will ensure the regulatory framework supports the establishment of robotaxi fleet operation.

The agreement with Daegu City, one of South Korea’s largest metropolitan areas, extends Mobileye’s global MaaS footprint. Combined with Mobileye’s previously announced robotaxi-based mobility services agreements, the new deal shows how Mobileye is quickly scaling its autonomous MaaS ambitions globally. No other MaaS provider has declared a global MaaS footprint that rivals Mobileye’s strategy and go-to-market plan.

How Mobileye’s Strategy Differs: Leaders on the road to full autonomy must successfully navigate the phases of both ADAS and MaaS before the consumer AV industry can take shape. Doing this requires a simple, scalable mapping solution, such as Mobileye’s REM. With its eye toward full autonomy, Mobileye addresses these critical aspects of the autonomous revolution.

REM technology: Because it relies on crowd sourcing and low-bandwidth uploads, Mobileye REM technology is a fast and cost-effective way to create high-definition maps that can be utilized for enhanced ADAS such as L2+, as well as higher levels of autonomy for future self-driving cars. Mobileye’s REM map data has significant value beyond the automotive industry and can bring insights to businesses in new market segments, such as smart cities. SAIC is the latest OEM to turn passenger cars into harvesting vehicles that will contribute to the global RoadBook.

Robotaxis: Mobileye’s strategy for deploying robotaxis covers the specification, development and integration of all five value layers of the robotaxi market: self-driving systems, self-driving vehicles, fleet operations, mobility intelligence, and rider experience and services. Mobileye’s approach is cost-effective, allowing the company to scale global operations more quickly than competitors and thereby capture a greater share of the aforementioned $160 billion global robotaxi opportunity, which is a significant step on the way to the fully autonomous future. Mobileye’s unique approach of scaling globally with a more economical solution, coupled with its superior technology, enables the company to lead MaaS and consumer-AV development at scale well ahead of the market.

More Context: Autonomous Driving at Intel | Intel and Mobileye at 2020 CES | Mobileye News
