More Space, Less Jam: Transportation Agency Uses NVIDIA DRIVE for Federal Highway Pilot

It could be just a fender bender or an unforeseen rain shower, but a few seconds of disruption can translate to extra minutes or even hours of mind-numbing highway traffic.

But how much of this congestion could be avoided with AI at the wheel?

That’s what the Contra Costa Transportation Authority (CCTA) is working to determine in one of three federally funded automated driving system pilots it will run over the next few years. Using vehicles retrofitted with the NVIDIA DRIVE AGX Pegasus platform, the agency will estimate just how much intelligent transportation can improve the efficiency of everyday commutes.

“As the population grows, there are more demands on roadways and continuing to widen them is just not sustainable,” said Randy Iwasaki, executive director of the CCTA. “We need to find better ways to move people, and autonomous vehicle technology is one way to do that.”

The CCTA was one of eight awardees – and the only local agency – of the Automated Driving System Demonstration Grants Program from the U.S. Department of Transportation, which aims to test the safe integration of self-driving cars onto U.S. roads.

The Bay Area agency is using the funds for the highway pilot, as well as two other projects to develop robotaxis equipped with self-docking wheelchair technology and test autonomous shuttles for a local retirement community.

A More Intelligent Interstate

From the 101 to the 405, California is known for its constantly congested highways. In Contra Costa, Interstate 680 is one of those high-traffic corridors, funneling many of the area’s 120,000 daily commuters. This pilot will explore how the Highway Capacity Manual – which sets assumptions for modeling freeway capacity – can be updated to incorporate future automated vehicle technology.
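
To make the capacity question concrete, here’s a minimal sketch of the kind of calculation a capacity-manual update involves: per-lane throughput approximated as 3,600 seconds divided by the average headway between vehicles. The headway values below are illustrative assumptions, not figures from the CCTA pilot.

```python
# Hypothetical illustration: how tighter following gaps could raise per-lane throughput.
# Headway values are assumptions for illustration, not data from the CCTA pilot.

def lane_capacity_vph(avg_headway_s: float) -> float:
    """Approximate vehicles per hour per lane as 3600 s divided by average headway."""
    return 3600.0 / avg_headway_s

human_headway_s = 1.8      # assumed typical human-driver following gap, in seconds
automated_headway_s = 1.2  # assumed gap with tighter, automated speed control

human_capacity = lane_capacity_vph(human_headway_s)          # ~2,000 veh/hour/lane
automated_capacity = lane_capacity_vph(automated_headway_s)  # ~3,000 veh/hour/lane

print(f"Human-driven: {human_capacity:.0f} veh/h/lane")
print(f"Automated:    {automated_capacity:.0f} veh/h/lane")
print(f"Capacity gain: {automated_capacity / human_capacity - 1:.0%}")
```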

Iwasaki estimates that half of California’s congestion is recurrent, meaning demand for roadways is higher than supply. The other half is non-recurrent, attributable to factors such as weather, special events like concerts or parades, and accidents. Eliminating human driver error, which the National Highway Traffic Safety Administration estimates causes 94 percent of traffic accidents, would make the system more efficient and reliable.

Autonomous vehicles don’t get distracted or drowsy, two of the biggest causes of human error behind the wheel. They also use redundant and diverse sensors, as well as high-definition maps, to detect obstacles and plan for the road ahead much farther than a human driver can.

These attributes make it easier to maintain constant speeds and leave room for vehicles to merge in and out of traffic, making for a smoother daily commute.

Driving Confidence

The CCTA will be using a fleet of autonomous test vehicles retrofitted with sensors and NVIDIA DRIVE AGX to gauge how much this technology can improve highway capacity.

The NVIDIA DRIVE AGX Pegasus AI compute platform uses the power of two Xavier systems-on-a-chip and two NVIDIA Turing architecture GPUs to achieve an unprecedented 320 trillion operations per second of supercomputing performance. The platform is designed and built for Level 4 and Level 5 autonomous systems, including robotaxis.
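
As a back-of-the-envelope check on that 320 TOPS figure, the sketch below splits it across the platform’s processors. The 30 TOPS per Xavier SoC is cited later in this series; the per-GPU share is inferred from the arithmetic, not an official specification.

```python
# Back-of-the-envelope arithmetic for the 320-TOPS figure.
# Xavier's 30 TOPS is cited in the Xavier safety post in this series;
# the per-GPU share is inferred here, not an official NVIDIA specification.

xavier_tops = 30           # per Xavier SoC
num_xaviers = 2
total_platform_tops = 320

gpu_share = total_platform_tops - num_xaviers * xavier_tops  # 260 TOPS from the GPUs
per_gpu_tops = gpu_share / 2                                 # ~130 TOPS per Turing GPU (inferred)

print(f"Xavier SoCs contribute {num_xaviers * xavier_tops} TOPS")
print(f"Each Turing GPU contributes roughly {per_gpu_tops:.0f} TOPS (inferred)")
```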


Iwasaki said the agency tapped NVIDIA for this pilot because the company’s vision matches its own: to solve real problems that haven’t been solved before, using proactive safety measures every step of the way.

With half of adult drivers reporting they’re fearful of self-driving technology, this approach to autonomous vehicles is critical to gaining public acceptance, he said.

“We need to get the word out that this technology is safer and let them know who’s behind making sure it’s safer,” Iwasaki said.


In a Class of Its Own: New Mercedes-Benz S-Class Sports Next-Gen AI Cockpit, Powered by NVIDIA

The Mercedes-Benz S-Class has always combined the best in engineering with a legendary heritage of craftsmanship. Now, the flagship sedan is adding intelligence to the mix, fusing AI with the embodiment of automotive luxury.

At a world premiere event, the legendary premium automaker debuted the redesigned flagship S-Class sedan. It features the all-new MBUX AI cockpit system, with an augmented reality head-up display, AI voice assistant and rich interactive graphics to enable every passenger in the vehicle, not just the driver, to enjoy personalized, intelligent features.

“This S-Class is going to be the most intelligent Mercedes ever,” said Mercedes-Benz CEO Ola Källenius during the virtual launch.

Like its predecessor, the next-gen MBUX system runs on the high-performance, energy-efficient compute of NVIDIA GPUs for instantaneous AI processing and sharp graphics.

“Mercedes-Benz is a perfect match for NVIDIA, because our mission is to use AI to solve problems no ordinary computers can,” said NVIDIA founder and CEO Jensen Huang, who took the new S-Class for a spin during the launch. “The technology in this car is remarkable.”

Jensen was featured alongside Grammy Award-winning artist Alicia Keys and Formula One driver Lewis Hamilton at the premiere event, each showcasing the latest innovations of the premium sedan.

Watch NVIDIA founder and CEO Jensen Huang take the all new Mercedes-Benz S-Class for a spin.

The S-Class’s new intelligent system represents a significant step toward a software-defined, autonomous future. As more automated and self-driving features are integrated into the car, driver and passengers alike will be able to enjoy the same entertainment and productivity features, experiencing a personalized ride no matter where they’re seated.

Unparalleled Performance

AI cockpits orchestrate crucial safety and convenience features, constantly learning so they can keep delivering joy to the customer.

“For decades, the magic moment in car manufacturing was when the chassis received its engine,” Källenius said. “Today, there’s another magic moment that is incredibly important — the ‘marriage’ of the car’s body and its brain — the all-new head unit with the next-level MBUX system.”

A vehicle’s cockpit typically requires a collection of electronic control units and switches to perform basic functions, such as powering entertainment or adjusting temperature. Leveraging NVIDIA technology, Mercedes-Benz was able to consolidate these components into an AI platform — removing 27 switches and buttons — to simplify the architecture while creating more space to add new features.

And the S-Class’s new compute headroom is as massive as its legroom. With NVIDIA at the helm, the premium sedan contains about the same computing power as 60 average vehicles. A single chip each controls the 3D instrument cluster, the infotainment system and the rear-seat displays.

“There’s more computing power packed into this car than any car, ever — three powerful computer chips with NVIDIA GPUs,” Jensen said. “Those three computer chips represent the brain and the nervous system of this car.”

Effortless Convenience

The new MBUX system makes the cutting edge in graphics, passenger detection and natural language processing seem effortless.

The S-Class features five large, brilliant displays, including a 12.8-inch central infotainment screen with OLED technology, making vehicle and comfort controls even more user-friendly for every passenger. The new 3D driver display gives a spatial view at the touch of a button, providing a realistic view of the car in its surroundings.

The system delivers even more security, enabling fingerprint, face and voice recognition, alongside a traditional PIN to access personal features. Its cameras can detect if a passenger is about to exit into oncoming traffic and warn them before they open the door. The same technology is used to monitor whether a child seat is correctly attached and if the driver is paying attention to the road.

MBUX can also carry on more of a conversation. It can answer a wider range of questions, some without the key phrase “Hey Mercedes,” and can interact in 27 languages, including Thai and Czech.

These futuristic functions are the result of over 30 million lines of code written by hundreds of engineers, who are continuously developing new and innovative ways for customers to enjoy their drive.

“These engineers are practically in your garage and they’re constantly working on the software, improving it, enhancing it, creating more features, and will update it over the air,” Jensen said. “Your car can now get better and better over time.”


Safe Travels: Voyage Intros Ambulance-Grade, Self-Cleaning Driverless Vehicle Powered by NVIDIA DRIVE

Self-driving cars continue to amaze passengers as a truly transformative technology. However, in the time of COVID-19, a self-cleaning car may be even more appealing.

Robotaxi startup Voyage introduced its third-generation vehicle, the G3, this week. The autonomous vehicle, a Chrysler Pacifica Hybrid minivan retrofitted with self-driving technology, is the company’s first designed to operate without a driver and is equipped with an ambulance-grade ultraviolet light disinfection system to keep passengers healthy.

The new vehicles use the NVIDIA DRIVE AGX Pegasus compute platform to enable the startup’s self-driving AI for robust perception and planning. The automotive-grade platform puts safety at the core of Voyage’s autonomous fleet.

Given the enclosed space and the proximity of the driver and passengers, ride-hailing currently poses a major risk in a COVID-19 world. By pairing a disinfection system with driverless technology, Voyage is ensuring self-driving cars continue to develop as a safer, more efficient option for everyday mobility.

The G3 vehicle uses an ultraviolet-C system from automotive supplier GHSP to destroy pathogens in the vehicle between rides. UV-C works by inactivating a pathogen’s DNA, blocking its reproductive cycle. It’s been proven to be up to 99.9 percent effective and is commonly used to sterilize ambulances and hospital rooms.
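
For a rough sense of how such disinfection cycles are sized, the sketch below applies the standard dose relationship: dose equals irradiance multiplied by exposure time. The irradiance and target-dose values are illustrative assumptions, not specifications of the GHSP system in the G3.

```python
# Hypothetical sketch of the dose math behind UV-C disinfection cycles.
# dose (mJ/cm^2) = irradiance (mW/cm^2) x exposure time (s).
# The irradiance and target dose below are illustrative assumptions,
# not specifications of the GHSP system in the Voyage G3.

def required_exposure_s(target_dose_mj_cm2: float, irradiance_mw_cm2: float) -> float:
    """Seconds of exposure needed to reach a target UV-C dose at a given irradiance."""
    return target_dose_mj_cm2 / irradiance_mw_cm2

target_dose = 10.0   # assumed dose for a high log-reduction on a given pathogen
irradiance = 0.05    # assumed irradiance at cabin surfaces, in mW/cm^2

exposure = required_exposure_s(target_dose, irradiance)
print(f"Required exposure: {exposure:.0f} s (~{exposure / 60:.1f} min)")
```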

The G3 is production-ready and currently being tested on public roads in San Jose, Calif., with production vehicles slated to arrive next year.

G3 Compute Horsepower Takes Off with DRIVE AGX Pegasus

Voyage has been using the NVIDIA DRIVE AGX platform in its previous-generation vehicles to power its Shield automatic emergency braking system.

With the G3, the startup is unleashing the 320 TOPS of performance from NVIDIA DRIVE AGX Pegasus to process sensor data and run diverse and redundant deep neural networks simultaneously for driverless operation. Voyage’s onboard computers are automotive grade and safety certified, built to handle the harsh vehicle environment for safe daily operation.

NVIDIA DRIVE AGX Pegasus delivers the compute necessary for level 4 and level 5 autonomous driving.

DRIVE AGX Pegasus is built on two NVIDIA Xavier systems-on-a-chip. Xavier is the first SoC built for autonomous machines and was recently determined by global safety agency TÜV SÜD to meet all applicable requirements of ISO 26262. This stringent assessment means it meets the strictest standard for functional safety.

Xavier’s safety architecture combined with the AI compute horsepower of the DRIVE AGX Pegasus platform delivers the robustness and performance necessary for the G3’s fully autonomous capabilities.

Moving Forward as the World Shelters in Place

As the COVID-19 pandemic continues to limit the way people live and work, transportation must adapt to keep the world moving.

In addition to the UV-C lights, Voyage has also equipped the car with HEPA-certified air filters to ensure safe airflow inside the car. The startup uses its own employees to manage and operate the fleet, enacting strict contact tracing and temperature checks to help minimize virus spread.

The Voyage G3 is equipped with a UV-C light system to disinfect the vehicle between rides.

While these measures are in place to specifically protect against the COVID-19 virus, they demonstrate the importance of an autonomous vehicle as a place where passengers can feel safe. No matter the condition of the world, autonomous transportation translates to a worry-free voyage, every time.


Fleet Dreams Are Made of These: TuSimple and Navistar to Build Autonomous Trucks Powered by NVIDIA DRIVE

Self-driving trucks are coming to an interstate near you.

Autonomous trucking startup TuSimple and truck maker Navistar recently announced they will build self-driving semi trucks powered by the NVIDIA DRIVE AGX platform. The collaboration is one of the first to develop autonomous trucks, which are set to begin production in 2024.

Over the past decade, self-driving truck developers have relied on traditional trucks retrofitted with the sensors, hardware and software necessary for autonomous driving. Building these trucks from the ground up, however, allows companies to tailor them to the needs of a self-driving system, as well as take advantage of the infrastructure of a mass-production truck manufacturer.

This transition is the first step from research to widespread deployment, said Chuck Price, chief product officer at TuSimple.

“Our technology, developed in partnership with NVIDIA, is ready to go to production with Navistar,” Price said. “This is a significant turning point for the industry.”

Tailor-Made Trucks

Developing a truck to drive on its own takes more than a software upgrade.

Autonomous driving relies on redundant and diverse deep neural networks, all running simultaneously to handle perception, planning and actuation. This requires massive amounts of compute.
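
To illustrate the “running simultaneously” part, here is a minimal PyTorch sketch that dispatches several independent networks on one GPU using CUDA streams. The tiny stand-in CNNs are placeholders, not the production perception, planning or actuation networks described here.

```python
# Minimal sketch of running several independent DNNs concurrently on one GPU with
# CUDA streams. The tiny CNNs are placeholders, not production driving networks.

import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

def tiny_cnn(out_channels: int) -> nn.Module:
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, out_channels, 3, padding=1),
    ).to(device).eval()

# Three "diverse and redundant" stand-in networks.
nets = [tiny_cnn(c) for c in (8, 12, 16)]
streams = [torch.cuda.Stream() for _ in nets] if device == "cuda" else None
frame = torch.randn(1, 3, 360, 640, device=device)  # one camera frame (placeholder)

outputs = []
with torch.no_grad():
    if streams:
        for net, stream in zip(nets, streams):
            stream.wait_stream(torch.cuda.current_stream())  # input is ready before launch
            with torch.cuda.stream(stream):
                outputs.append(net(frame))
        torch.cuda.synchronize()  # wait for all streams before using the results
    else:
        outputs = [net(frame) for net in nets]

print([tuple(o.shape) for o in outputs])
```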

The NVIDIA DRIVE AGX platform delivers high-performance, energy-efficient compute to enable AI-powered and autonomous driving capabilities. TuSimple has been using the platform in its test vehicles and pilots, such as its partnership with the United States Postal Service.

Building dedicated autonomous trucks makes it possible for TuSimple and Navistar to develop a centralized architecture optimized for the power and performance of the NVIDIA DRIVE AGX platform. The platform is also automotive grade, meaning it is built to withstand the wear and tear of years of driving on interstate highways.

Invaluable Infrastructure

In addition to a customized architecture, developing an autonomous truck in partnership with a manufacturer opens up valuable infrastructure.

Truck makers like Navistar provide nationwide support for their fleets, with local service centers and vehicle tracking. This network is crucial for deploying self-driving trucks that will criss-cross the country on long-haul routes, providing seamless and convenient service to maintain efficiency.

TuSimple is also building out an HD map network of the nation’s highways for the routes its vehicles will travel. Combined with the widespread fleet management network, this infrastructure makes its autonomous trucks appealing to a wide variety of partners — UPS, U.S. Xpress, Penske Truck Leasing and food service supply chain company McLane Inc., a Berkshire Hathaway company, have all signed on to this autonomous freight network.

And backed by the performance of NVIDIA DRIVE AGX, these vehicles will continue to improve, delivering safer, more efficient logistics across the country.

“We’re really excited as we move into production to have a partner like NVIDIA with us the whole way,” Price said.


All the Right Moves: How PredictionNet Helps Self-Driving Cars Anticipate Future Traffic Trajectories

Driving requires the ability to predict the future. Every time a car suddenly cuts into a lane or multiple cars arrive at the same intersection, drivers must predict how others will act in order to proceed safely.

While humans rely on cues from other drivers and personal experience to read these situations, self-driving cars can use AI to anticipate traffic patterns and maneuver safely in a complex environment.

We have trained the PredictionNet deep neural network to understand the driving environment around a car in top-down or bird’s-eye view, and to predict the future trajectories of road users based on both live perception and map data.

PredictionNet analyzes past movements of all road agents, such as cars, buses, trucks, bicycles and pedestrians, to predict their future movements. The DNN looks into the past to take in previous road user positions, and also takes in positions of fixed objects and landmarks on the scene, such as traffic lights, traffic signs and lane line markings provided by the map.

Based on these inputs, which are rasterized in top-down view, the DNN predicts road user trajectories into the future, as shown in figure 1.
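
To make the rasterization step concrete, here is an illustrative sketch that paints agent positions and map points into a bird’s-eye-view grid. The grid size, resolution and channel layout are assumptions for illustration, not PredictionNet’s actual preprocessing.

```python
# Illustrative sketch of rasterizing road agents into a top-down grid, the kind of
# bird's-eye-view input described above. Grid size, resolution and channel layout
# are assumptions, not PredictionNet's actual preprocessing.

import numpy as np

GRID = 256          # grid is GRID x GRID cells
METERS_PER_CELL = 0.5
ORIGIN = GRID // 2  # ego vehicle at the center of the grid

def world_to_cell(x_m: float, y_m: float):
    """Map ego-centric metric coordinates to raster cell indices."""
    col = int(round(ORIGIN + x_m / METERS_PER_CELL))
    row = int(round(ORIGIN - y_m / METERS_PER_CELL))
    return row, col

def rasterize(agents, lane_points):
    """Channel 0: dynamic agents; channel 1: static map elements (lane markings)."""
    raster = np.zeros((2, GRID, GRID), dtype=np.float32)
    for channel, points in ((0, agents), (1, lane_points)):
        for x, y in points:
            r, c = world_to_cell(x, y)
            if 0 <= r < GRID and 0 <= c < GRID:
                raster[channel, r, c] = 1.0
    return raster

# Example: two vehicles ahead of the ego car plus a sampled lane line.
agents = [(2.0, 15.0), (-3.5, 30.0)]
lane = [(0.0, float(y)) for y in range(0, 60, 2)]
print(rasterize(agents, lane).shape)  # (2, 256, 256)
```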

Predicting the future has inherent uncertainty. PredictionNet captures this by also providing the prediction statistics of the future trajectory predicted for each road user, as also shown in figure 1.

Figure 1. PredictionNet results visualized in top-down view. Gray lines denote the map, dotted white lines represent vehicle trajectories predicted by the DNN, while white boxes represent ground truth trajectory data. The colorized clouds represent the probability distributions for predicted vehicle trajectories, with warmer colors representing points that are closer in time to the present, and cooler colors representing points further in the future.

A Top-Down Convolutional RNN-Based Approach

Previous approaches to trajectory prediction for self-driving cars have leveraged imitation learning and generative models that sample future trajectories, as well as convolutional and recurrent neural networks that process perception inputs to predict future motion.

For PredictionNet, we adopt an RNN-based architecture that uses two-dimensional convolutions. This structure scales to arbitrary input sizes, including the number of road users and the length of the prediction horizon.

As is typically the case with any RNN, different time steps are fed into the DNN sequentially. Each time step is represented by a top-down view image that shows the vehicle surroundings at that time, including both dynamic obstacles observed via live perception, and fixed landmarks provided by a map.

This top-down view image is processed by a set of 2D convolutions before being passed to the RNN. In the current implementation, PredictionNet is able to confidently predict one to five seconds into the future, depending on the complexity of the scene (for example, highway versus urban).
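
The sketch below captures the general shape of that pipeline in PyTorch: 2D convolutions over each top-down frame, with a convolutional recurrent state carried across time steps. The layer sizes and the simple ConvGRU-style update are assumptions for illustration, not PredictionNet’s actual architecture.

```python
# Minimal PyTorch sketch: 2D convolutions per top-down frame, with a convolutional
# recurrent state carried across time. Layer sizes and the simple recurrent update
# are assumptions, not PredictionNet's architecture.

import torch
import torch.nn as nn

class ConvRNNCell(nn.Module):
    def __init__(self, in_ch: int, hidden_ch: int):
        super().__init__()
        self.update = nn.Conv2d(in_ch + hidden_ch, hidden_ch, 3, padding=1)

    def forward(self, x, h):
        return torch.tanh(self.update(torch.cat([x, h], dim=1)))

class TopDownPredictor(nn.Module):
    def __init__(self, in_ch=2, hidden_ch=32, out_ch=2):
        super().__init__()
        self.hidden_ch = hidden_ch
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.rnn = ConvRNNCell(32, hidden_ch)
        self.head = nn.Conv2d(hidden_ch, out_ch, 1)  # e.g., per-cell occupancy / offset maps

    def forward(self, frames):                        # frames: (batch, time, ch, H, W)
        b, t, c, height, width = frames.shape
        h = frames.new_zeros(b, self.hidden_ch, height, width)
        for i in range(t):                            # feed time steps sequentially
            h = self.rnn(self.encoder(frames[:, i]), h)
        return self.head(h)                           # prediction over the future horizon

model = TopDownPredictor()
seq = torch.randn(1, 6, 2, 128, 128)                  # 6 past top-down frames (placeholder)
print(model(seq).shape)                                # torch.Size([1, 2, 128, 128])
```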

The PredictionNet model also lends itself to a highly efficient runtime implementation in the TensorRT deep learning inference SDK, with 10 ms end-to-end inference times achieved on an NVIDIA TITAN RTX GPU.
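
For readers who want to reproduce that kind of measurement, below is a generic latency harness with proper GPU synchronization. It is a measurement sketch only; the 10 ms figure above was achieved with a TensorRT deployment on a TITAN RTX, not with this eager-mode PyTorch loop.

```python
# Generic way to measure end-to-end GPU inference latency with proper synchronization.
# This is a measurement sketch; the 10 ms figure in the post comes from a TensorRT
# deployment on a TITAN RTX, not from eager-mode PyTorch.

import time
import torch

def measure_latency_ms(model, example_input, warmup=10, iters=100):
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):                      # warm up kernels and allocator
            model(example_input)
        if example_input.is_cuda:
            torch.cuda.synchronize()                 # don't time queued async work
        start = time.perf_counter()
        for _ in range(iters):
            model(example_input)
        if example_input.is_cuda:
            torch.cuda.synchronize()
        return (time.perf_counter() - start) / iters * 1000.0

# Example usage with the TopDownPredictor sketch above (placeholder input shape):
# latency = measure_latency_ms(TopDownPredictor().cuda(),
#                              torch.randn(1, 6, 2, 128, 128).cuda())
# print(f"{latency:.1f} ms per inference")
```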

Scalable Results

Results thus far have shown PredictionNet to be highly promising for several complex traffic scenarios. For example, the DNN can predict which cars will proceed straight through an intersection versus which will turn. It’s also able to correctly predict the car’s behavior in highway merging scenarios.

We have also observed that PredictionNet is able to learn velocities and accelerations of vehicles on the scene. This enables it to correctly predict speeds of both fast-moving and fully stopped vehicles, as well as to predict stop-and-go traffic patterns.

PredictionNet is trained on highly accurate lidar data to achieve higher prediction accuracy. However, the inference-time perception input to the DNN can be based on any sensor input combination (that is, camera, radar or lidar data) without retraining. This also means that the DNN’s prediction capabilities can be leveraged for various sensor configurations and levels of autonomy, from level 2+ systems all the way to level 4/level 5.

PredictionNet’s ability to anticipate behavior in real time can be used to create an interactive training environment for reinforcement learning-based planning and control policies for features such as automatic cruise control, lane changes or intersection handling.

By using PredictionNet to simulate how other road users will react to an autonomous vehicle’s behavior based on real-world experiences, we can train a safer, more robust and more courteous AI driver.
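
Conceptually, that training loop looks like the sketch below: a learned predictor steps the other agents forward while a planning policy is evaluated against them. All names are placeholders, and the predictor is stubbed with a constant-velocity model so the sketch stays self-contained; this is not NVIDIA’s training environment.

```python
# Conceptual sketch of using a learned trajectory predictor as the "world" inside a
# reinforcement learning loop for planning policies. All names are placeholders;
# the predictor is stubbed with a constant-velocity model for self-containment.

import numpy as np

class PredictorWorld:
    """Steps non-ego agents forward; a real setup would call the trajectory DNN here."""

    def __init__(self, agent_states):
        self.agents = np.array(agent_states, dtype=np.float32)  # rows: [x, y, vx, vy]

    def step(self, dt=0.1):
        # Stand-in for the DNN's predicted motion, conditioned on the ego's latest action.
        self.agents[:, :2] += self.agents[:, 2:] * dt
        return self.agents.copy()

def run_episode(policy, steps=50):
    world = PredictorWorld([[5.0, 20.0, 0.0, -2.0], [-3.0, 35.0, 0.0, -1.5]])
    ego = np.array([0.0, 0.0, 0.0, 8.0], dtype=np.float32)  # [x, y, vx, vy]
    total_reward = 0.0
    for _ in range(steps):
        action = policy(ego, world.agents)            # e.g., [lateral accel, longitudinal accel]
        ego[2:] += action * 0.1
        ego[:2] += ego[2:] * 0.1
        others = world.step()
        # Reward forward progress, penalize getting too close to any predicted agent.
        min_gap = np.min(np.linalg.norm(others[:, :2] - ego[:2], axis=1))
        total_reward += ego[3] * 0.1 - (5.0 if min_gap < 2.0 else 0.0)
    return total_reward

print(run_episode(lambda ego, others: np.zeros(2, dtype=np.float32)))
```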


Driving the Future: What Is an AI Cockpit?

From Knight Rider’s KITT to Iron Man’s JARVIS, intelligent copilots have been a staple of forward-looking pop culture.

Advancements in AI and high-performance processors are turning these sci-fi concepts into reality. But what, exactly, is an AI cockpit, and how will it change the way we move?

AI is enabling a range of new software-defined, in-vehicle capabilities across the transportation industry. With centralized, high-performance compute, automakers can now build vehicles that become smarter over time.

A vehicle’s cockpit typically requires a collection of electronic control units and switches to perform basic functions, such as powering entertainment or adjusting temperature. Consolidating these components with an AI platform such as NVIDIA DRIVE AGX simplifies the architecture while creating more compute headroom to add new features. In addition, NVIDIA DRIVE IX provides an open and extensible software framework for a software-defined cockpit experience.

Mercedes-Benz released the first such intelligent cockpit, the MBUX AI system, powered by NVIDIA technology, in 2018. The system is currently in more than 20 Mercedes-Benz models, with the second generation debuting in the upcoming S-Class.

The second-generation MBUX system is set to debut in the Mercedes-Benz S-Class.

MBUX and other such AI cockpits orchestrate crucial safety and convenience features much more smoothly than the traditional vehicle architecture. They centralize compute for streamlined functions, and they’re constantly learning. By regularly delivering new features, they extend the joy of ownership throughout the life of the vehicle.

Always Alert

But safety is the foremost benefit of AI in the vehicle. AI acts as an extra set of eyes on the 360-degree environment surrounding the vehicle, as well as an intelligent guardian for drivers and passengers inside.

One key feature is driver monitoring. As automated driving functions become more commonplace across vehicle fleets, it’s critical to ensure the human at the wheel is alert and paying attention.

AI cockpits use interior cameras to monitor whether the driver is paying attention to the road.

Using interior-facing cameras, AI-powered driver monitoring can track driver activity, head position and facial movements to analyze whether the driver is paying attention, drowsy or distracted. The system can then alert the driver, bringing attention back to the road.
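
As a generic illustration of one drowsiness cue such a system might use, the sketch below computes the eye aspect ratio from facial landmarks and raises an alert after sustained eye closure. Production driver monitoring such as DRIVE IX relies on DNNs over interior camera streams; this heuristic and its thresholds are assumptions.

```python
# Generic illustration of one common drowsiness cue: the eye aspect ratio (EAR)
# computed from facial landmarks. The threshold and frame count are assumptions;
# this is not NVIDIA's driver monitoring DNN.

import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: 6 landmark points (x, y) around one eye, ordered p1..p6."""
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

EAR_THRESHOLD = 0.21      # assumed "eyes nearly closed" threshold
CLOSED_FRAMES_ALERT = 15  # consecutive frames of closure before alerting

def update_alert(history, ear):
    history.append(ear < EAR_THRESHOLD)
    recent = history[-CLOSED_FRAMES_ALERT:]
    return len(recent) == CLOSED_FRAMES_ALERT and all(recent)

# Example with a synthetic "open" eye; a real pipeline would feed landmarks per frame.
open_eye = np.array([[0, 0], [2, 3], [4, 3], [6, 0], [4, -3], [2, -3]], dtype=float)
history = []
alert = False
for _ in range(20):
    alert = update_alert(history, eye_aspect_ratio(open_eye))
print("drowsiness alert:", alert)  # False: the eye stays well above the threshold
```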

This system can also help keep those inside and outside the vehicle safe and alert. By sensing whether a passenger is about to exit a car and using exterior sensors to monitor the outside environment, AI can warn of oncoming traffic or pedestrians and bikers potentially in the path of the opening door.

It also acts as a guardian in emergency situations. If a passenger is not sitting properly in their seat, the system can prevent an airbag activation that would harm rather than help them. It can also use AI to detect the presence of children or pets left behind in the vehicle, helping prevent heat stroke.

An AI cockpit is always on the lookout for a vehicle’s occupants, adding an extra level of safety with full cabin monitoring so they can enjoy the ride.

Constant Convenience

In addition to safety, AI helps make the daily drive easier and more enjoyable.

With crystal-clear graphics, drivers can receive information about their route, as well as what the sensors on the car see, quickly and easily. Augmented reality heads-up displays and virtual reality views of the vehicle’s surroundings deliver the most important data (such as parking assistance, directions, speed and oncoming obstacles) without disrupting the driver’s line of sight.

These visualizations help build trust in the driver assistance system as well as understanding of its capabilities and limitations for a safer and more effective driving experience.

Using natural language processing, drivers can control vehicle settings without taking their eyes off the road. Conversational AI enables easy access to search queries, like finding the best coffee shops or sushi restaurants along a given route. The same system that monitors driver attention can also interpret gesture controls, providing another way for drivers to communicate with the cockpit without having to divert their gaze.

Natural language processing makes it possible to access vehicle controls without taking your eyes off the road.

These technologies can also be used to personalize the driving experience. Biometric user authentication and voice recognition allow the car to identify who is driving, and adjust settings and preferences accordingly.

AI cockpits are being integrated into more models every year, making them smarter and safer while constantly adding new features. High-performance, energy-efficient AI compute platforms consolidate in-car systems into a centralized architecture, enabling the open NVIDIA DRIVE IX software platform to meet future cockpit needs.

What used to be fanciful fiction will soon be part of our daily driving routine.


Mercedes-Benz, NVIDIA Partner to Build the World’s Most Advanced, Software-Defined Vehicles

The automaker responsible for the world’s first car is now the first to deliver software-defined vehicles across its entire fleet, powered by NVIDIA.

In a live-streamed media event from Stuttgart and Silicon Valley, Mercedes-Benz CEO Ola Källenius and NVIDIA founder and CEO Jensen Huang announced today that the automaker will launch software-defined, intelligent vehicles using end-to-end NVIDIA technology.

“This is the biggest partnership of its kind in the transportation industry,” Huang said. “We’re breaking ground on several different fronts, from technology to business models, and I can’t imagine a better company to do this with than Mercedes-Benz.”

Starting in 2024, every next-generation Mercedes-Benz vehicle will include this first-of-its-kind software-defined computing architecture, which comprises the most powerful computer, system software and applications for consumers, marking the turning point at which traditional vehicles become high-performance, updateable computing devices.

These revolutionary vehicles are enabled by NVIDIA DRIVE AGX Orin, with multiple processing engines for high-performance, energy-efficient compute and AI, and are equipped with surround sensors. Primary features will include the ability to drive regular routes from address to address autonomously and Level 4 automated parking, in addition to countless safety and convenience applications. These capabilities will keep getting better as AI technology advances at a lightning rate.

“At no time in history has so much computing power been put into a car,” Huang said.

“This is a field that has so many opportunities, it will make driving much safer on the road to fully autonomous driving,” said Källenius. “By using an architecture from NVIDIA that makes it possible to update through the life of the vehicle, there will be endless possibilities.”

Centralizing and unifying computing in the car will make it easier to integrate and update advanced software features as they are developed. Just like a mobile phone, which periodically gets software updates, these software-defined vehicles will be able to do the same.

With over-the-air updates, vehicles can constantly receive the latest autonomous driving and intelligent cockpit features, increasing value and extending the joy of ownership with each software upgrade.

Huang said: “The definition of a car will change forever. No longer will the best moment of your car be at the point of sale. The Mercedes-Benz partnership is the starting point, and behind it are thousands of engineers. They’re your own personal software and research lab, who will stay with you over the life of the car.

“This will revolutionize the way cars are sold and enjoyed.”

State of the Art at Any Age

Breakthroughs in AI and computing are opening future fleets to dramatic new functionality, fundamentally transforming the vehicle architecture for the first time in decades.

And this transformation is occurring rapidly. The market for AI hardware, software and in-car services is worth about $5 billion today. As these capabilities extend to a fleet of roughly 100 million vehicles, that market has the potential to grow to $700 billion.

Like all modern computing devices, these intelligent vehicles are supported by a large team of AI and software engineers, dedicated to improving the performance and capability of the car as technology advances.

This software-defined architecture is also opening up new business models for automakers. Mercedes-Benz can add capabilities and services over the air at any time, not only for the original customer but for every subsequent owner, throughout the life of the car.

“Mercedes-Benz customers who love Mercedes-Benz will have a new way of enjoying their car,” said Huang. “This is a level of scale that has never been achieved before.”

An Amped Up Architecture

Mercedes’ next generation of automotive innovation will be developed with AI from end to end.

This development begins in the data center. Both companies will work together to validate intelligent new autonomous driving experiences using NVIDIA DRIVE Infrastructure solutions.

DRIVE Infrastructure encompasses the complete data center hardware, software and workflows needed to develop and validate autonomous driving technology, from raw data collection through validation. It provides the building blocks required for DNN development and training, as well as the validation, replay and testing in simulation needed to enable a safe autonomous driving experience.

NVIDIA’s complete DRIVE Software stack (including DriveWorks, Perception, Mapping and Planning) will run on Mercedes-Benz’ new, centralized compute architecture using the NVIDIA DRIVE AGX Orin platform. This next-generation AI compute platform delivers 200 trillion operations per second (TOPS) for the entire Mercedes-Benz lineup, from entry level to high end.

By developing an architecturally coherent and programmable fleet, Mercedes-Benz will again redefine what it is to be a mobility company, nurturing a growing installed base to offer software upgradeable applications and subscription services for the entire life of its upcoming fleet.


Checking the Rearview Mirror: NVIDIA DRIVE Labs Looks Back at Year of Self-Driving Software Development

The NVIDIA DRIVE Labs video series provides an inside look at how self-driving software is developed. One year and 20 episodes later, it’s clear there’s nearly endless ground to cover.

The series dives into topics ranging from 360-degree perception to panoptic segmentation, and even predicting the future. Autonomous vehicles are one of the great computing challenges of our time, and we’re approaching software development one building block at a time.

DRIVE Labs is meant to inform and educate. Whether you’re just beginning to learn about this transformative technology or have been working on it for a decade, the series is a window into what we at NVIDIA view as the most important development challenges and how we’re approaching them for safer, more efficient transportation.

Here’s a brief look at what we’ve covered this past year, and how we’re planning for the road ahead.

A Cross-Section of Perception Networks

Before a vehicle plans a path and executes a driving decision, it must be able to see and understand the entire environment around it.

DRIVE Labs has detailed a variety of the deep neural networks responsible for vehicle perception. Our approach relies on redundant and diverse DNNs — our models cover a variety of capabilities, like detecting intersections, traffic lights and traffic signs, and understanding intersection structure. They’re also capable of multiple tasks, like spotting parking spaces or detecting whether sensors are obstructed.

These DNNs do more than draw bounding boxes around pedestrians and traffic signals. They can break down images pixel by pixel for enhanced accuracy, and even track those pixels through time for precise positioning information.

For nighttime driving, AutoHighBeamNet enables automated vehicle headlight control, while our active learning approach improves pedestrian detection in the dark.

DNNs also make it possible to extract 3D distances from 2D camera images for accurate motion planning.
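
For intuition, the classical pinhole-camera baseline for the same problem is shown below: range estimated from an object’s apparent size in the image. This is a geometric sketch, not the DNN approach described in the series, and the focal length and vehicle height are illustrative assumptions.

```python
# Classical pinhole-camera baseline for estimating range to an object from its
# apparent size in a 2D image. Focal length and vehicle height are illustrative
# assumptions; this is not the DNN approach described in the post.

def range_from_bbox_height(focal_px: float, real_height_m: float, bbox_height_px: float) -> float:
    """distance ~= focal_length_in_pixels * real_height / pixel_height."""
    return focal_px * real_height_m / bbox_height_px

focal_px = 1400.0        # assumed focal length of a front camera, in pixels
car_height_m = 1.5       # assumed height of a typical passenger car
bbox_height_px = 70.0    # height of the detected bounding box in the image

print(f"Estimated range: {range_from_bbox_height(focal_px, car_height_m, bbox_height_px):.1f} m")
# -> roughly 30 m for these illustrative numbers
```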

And our perception capabilities operate all around the vehicle. With surround camera object tracking and surround camera-radar fusion, we ensure there are no perception blind spots.

Predicting the Road Ahead

In addition to perceiving their environment, autonomous vehicles must be able to understand how other road actors behave to plan a safe path forward.

With recurrent neural networks, DRIVE Labs has shown how a self-driving car can use past insights about an object’s motion to compute future motion predictions.

Our Safety Force Field collision avoidance software adds diversity and redundancy to planning and control software. It constantly runs in the background to double-check controls from the primary system and veto any action that it deems to be unsafe.
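
The sketch below shows the general idea of such a monitor in simplified form: a planned command is double-checked against a textbook stopping-distance comparison and vetoed if the margin is unsafe. The formula and margins are generic assumptions, not the actual Safety Force Field math.

```python
# Simplified sketch of a collision-avoidance monitor that double-checks a planned
# command and vetoes it when a braking-distance check fails. The formula and margins
# are a textbook stopping-distance check, not the actual Safety Force Field math.

def stopping_distance(speed_mps: float, decel_mps2: float, reaction_s: float = 0.5) -> float:
    return speed_mps * reaction_s + speed_mps ** 2 / (2.0 * decel_mps2)

def safety_monitor(planned_accel: float, ego_speed: float, gap_to_lead: float,
                   lead_speed: float, max_decel: float = 6.0) -> float:
    """Return the planned acceleration, or a hard-braking veto if the gap is unsafe."""
    worst_case_gap = (gap_to_lead
                      + stopping_distance(lead_speed, max_decel)
                      - stopping_distance(ego_speed, max_decel))
    if worst_case_gap < 2.0:            # assumed minimum safe margin, in meters
        return -max_decel               # veto: command maximum braking
    return planned_accel                # otherwise pass the primary command through

# Example: the primary planner wants to keep accelerating while closing on a slow lead car.
print(safety_monitor(planned_accel=1.0, ego_speed=30.0, gap_to_lead=25.0, lead_speed=10.0))
# -> -6.0, i.e., the monitor vetoes the command and brakes
```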

The DNNs and software components are just a sampling of the development that goes into an autonomous vehicle. This monumental challenge requires rigorous training and testing, both in the data center and the vehicle. And as transportation continues to change, the vehicle software must be able to adapt.

We’ll explore these topics and more in upcoming DRIVE Labs episodes. As we continue to advance self-driving car software development, we’ll share those insights with you.


At a Crossroads: How AI Helps Autonomous Vehicles Understand Intersections

Editor’s note: This is the latest post in our NVIDIA DRIVE Labs series. With this series, we’re taking an engineering-focused look at individual autonomous vehicle challenges and how the NVIDIA DRIVE AV Software team is mastering them. Catch up on our earlier posts here.

Intersections are common roadway features, whether four-way stops in a neighborhood or traffic-light-filled exchanges on busy multi-lane thoroughfares.

Given the frequency, variety and risk associated with intersections — more than 50 percent of serious accidents in the U.S. happen at or near them — it’s critical that an autonomous vehicle be able to accurately navigate them.

Handling intersections autonomously presents a complex set of challenges for self-driving cars. These include the ability to stop accurately at an intersection wait line or crosswalk, correctly process and interpret right-of-way rules in various scenarios, and determine and execute the correct path for a variety of maneuvers, such as proceeding straight through the intersection or making an unprotected turn.

Earlier in the DRIVE Labs series, we demonstrated how we detect intersections, traffic lights and traffic signs with the WaitNet DNN, and how we classify traffic light state and traffic sign type with the LightNet and SignNet DNNs. In this episode, we go further to show how NVIDIA uses AI to perceive the variety of intersection structures that an autonomous vehicle could encounter on a daily drive.

Manual Mapmaking

Previous methods have relied on high-definition 3D semantic maps of an intersection and its surrounding area to understand the intersection structure and create paths to navigate safely.

Creating such a map requires heavy human labeling to hand-encode all potentially relevant intersection structure features, such as where the intersection entry/exit lines and dividers are located, where any traffic lights or signs are, and how many lanes there are in each direction. The more complex the intersection scenario, the more heavily the map needs to be manually annotated.

An important practical limitation of this approach is its lack of scalability. Every intersection in the world would need to be manually labeled before an autonomous vehicle could navigate it, creating highly impractical data collection, labeling and cost challenges.

Another challenge lies in temporary conditions, such as construction zones. Because of the temporary nature of these scenarios, writing them into and out of a map can be highly complex.

In contrast, our approach is analogous to how humans drive. Humans use live perception rather than maps to understand intersection structure and navigate intersections.

A Structured Approach to Intersections

Our algorithm extends the capability of our WaitNet DNN to predict intersection structure as a collection of points we call “joints,” which are analogous to joints in a human body. Just as the actuation of human limbs is achieved through connections between our joints, in our approach, actuation of an autonomous vehicle through an intersection may be achieved by connecting the intersection structure joints into a path for the vehicle to follow.

Figure 1 illustrates intersection structure prediction using our DNN-based method. As shown, we can detect and classify intersection structure features into different classes, such as intersection entry and exit points for both the ego car and other vehicles on the scene, as well as pedestrian crossing entries and exits.

Figure 1. Prediction of intersection structure. Red = intersection entry wait line for ego car; Yellow = intersection entry wait line for other cars; Green = intersection exit line. In this figure, the green lines indicate all the possible ways the ego car could exit the intersection if arriving at it from the left-most lane – specifically, it could continue driving straight, take a left turn, or make a U-turn.

Rather than segment the contours of an image, our DNN is able to differentiate intersection entry and exit points for different lanes. Another key benefit of our approach is that the intersection structure prediction is robust to occlusions and partial occlusions, and it’s able to predict both painted and inferred intersection structure lines.

The intersection key points in figure 1 can also be connected into paths for navigating the intersection. By connecting intersection entry and exit points, the DNN can predict paths and trajectories that represent the ego car’s possible movements.
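
As a toy illustration of turning such joints into candidate paths, the sketch below connects an ego entry point to each predicted exit point with a simple quadratic Bezier curve. The joint coordinates and the choice of curve are assumptions for illustration, not the production path-planning method.

```python
# Illustrative sketch of turning predicted intersection "joints" into candidate paths:
# connect the ego car's entry point to each predicted exit point with a quadratic
# Bezier curve. Joint coordinates and curve choice are assumptions for illustration.

import numpy as np

def bezier_path(entry, control, exit_, n=20):
    """Quadratic Bezier between entry and exit points, sampled at n points."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    entry, control, exit_ = map(np.asarray, (entry, control, exit_))
    return (1 - t) ** 2 * entry + 2 * (1 - t) * t * control + t ** 2 * exit_

ego_entry = (0.0, 0.0)
exits = {
    "straight": (0.0, 30.0),
    "left_turn": (-15.0, 15.0),
    "u_turn": (-4.0, 0.0),
}

candidate_paths = {
    name: bezier_path(ego_entry, control=(0.0, 12.0), exit_=pt) for name, pt in exits.items()
}
for name, path in candidate_paths.items():
    print(name, path.shape)  # each candidate is a (20, 2) polyline through the intersection
```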

Our live perception approach enables scalability for handling various types of intersections without the burden of manually labeling each intersection individually. It can also be combined with map information, where high-quality data is available, to create diversity and redundancy for complex intersection handling.

Our DNN-based intersection structure perception capability will become available to developers in an upcoming DRIVE Software release as an additional head of our WaitNet DNN. To learn more about our DNN models, visit our DRIVE Perception page.


NVIDIA Xavier Achieves Industry First with Expert Safety Assessment

Attaining the highest levels of safety takes years of hard engineering work and investment. Now, autonomous vehicle developers can achieve it with a single system-on-a-chip.

The NVIDIA Xavier SoC passed the final assessment for safety product approval by TÜV SÜD, one of the most knowledgeable and stringent safety assessment bodies in the industry.

TÜV SÜD has determined that the chip meets ISO 26262 random hardware integrity of ASIL C and a systematic capability of ASIL D for process — the strictest standard for functional safety.

NVIDIA Xavier, the world’s first processor for autonomous driving, is the most complex SoC the safety agency has assessed in its 150-year history.

As part of a three-step approach, TÜV SÜD has previously assessed the process to develop Xavier as well as the SoC’s architecture. This current assessment completes the last step to show that Xavier SoC meets all applicable requirements of ISO 26262.

NVIDIA is working with the entire industry to ensure the safe deployment of autonomous vehicles. It participates in standardization and regulation bodies worldwide, including the International Organization for Standardization (ISO), the Society of Automotive Engineers (SAE), the Institute of Electrical and Electronics Engineers (IEEE), the United Nations Economic Commission for Europe (UNECE), the National Highway Traffic Safety Administration (NHTSA), the Association for Standardization of Automation and Measuring Systems (ASAM) and the European Association of Automotive Suppliers (CLEPA).

By working with these groups — and having our technology reviewed by them — we’re able to share our expertise while also delivering a robust AI computing system for the entire industry.

It Takes a Village

The TÜV SÜD assessment spans multiple disciplines in Xavier’s development. The audit reviewed 1,400 internal work products across a range of cross-functional teams, all contributing to the most complex SoC ever assessed.

Xavier contains 9 billion transistors to process vast amounts of data, as well as thousands of safety mechanisms to address random hardware failures. Its MIPI CSI-2 and Gigabit Ethernet high-speed I/O connects Xavier to the largest array of lidar, radar and camera sensors of any chip ever built.

Inside the SoC, six types of processors — ISP (image signal processor), VPU (video processing unit), PVA (programmable vision accelerator), DLA (deep learning accelerator), CUDA GPU, and CPU — process 30 trillion operations per second.

Using any one of these components separately would require significant investment to achieve the same safety functionality as the complete Xavier SoC. By choosing Xavier, autonomous vehicle developers can meet the highest levels of safety with a single processor.

Raising the Bar 

This milestone also marks Xavier as one of the first processors to meet the requirements of the latest ISO 26262 standard.

ISO 26262 is the definitive global standard for automotive functional safety — a system’s ability to avoid, identify and manage failures. In 2018, the organization released the second edition of these standards to adapt to new vehicle technologies.

The standards cover the hardware itself as well as the processes that surround it — ensuring a product has been developed in a way that mitigates potential systematic and random hardware faults. That is, SoC development must not only avoid failures whenever possible, but also detect and respond to them when they cannot be avoided.

Under these standards, Xavier has been determined to meet the requirements for random hardware integrity of ASIL C and a systematic capability of ASIL D — the highest degree of safety integrity. ASIL refers to a component’s automotive safety integrity level, which classifies its ability to mitigate the risk of hazards on a scale from A to D, with A representing the lowest degree and D the highest.

By meeting these requirements, Xavier has demonstrated the ability to achieve the necessary complexity for high-performance compute, while also maintaining functional safety.

Completing this assessment is just the start of NVIDIA’s journey to deliver safer and more efficient transportation. We continue to raise the bar for AI compute, ensuring safety in development and execution at every step.
