More Space, Less Jam: Transportation Agency Uses NVIDIA DRIVE for Federal Highway Pilot

It could be just a fender bender or an unforeseen rain shower, but a few seconds of disruption can translate to extra minutes or even hours of mind-numbing highway traffic.

But how much of this congestion could be avoided with AI at the wheel?

That’s what the Contra Costa Transportation Authority is working to determine in one of three federally funded automated driving system pilots in the next few years. Using vehicles retrofitted with the NVIDIA DRIVE AGX Pegasus platform, the agency will estimate just how much intelligent transportation can improve the efficiency of everyday commutes.

“As the population grows, there are more demands on roadways and continuing to widen them is just not sustainable,” said Randy Iwasaki, executive director of the CCTA. “We need to find better ways to move people, and autonomous vehicle technology is one way to do that.”

The CCTA was one of eight awardees – and the only local agency – of the Automated Driving System Demonstration Grants Program from the U.S. Department of Transportation, which aims to test the safe integration of self-driving cars into U.S. roads.

The Bay Area agency is using the funds for the highway pilot, as well as two other projects to develop robotaxis equipped with self-docking wheelchair technology and test autonomous shuttles for a local retirement community.

A More Intelligent Interstate

From the 101 to the 405, California is known for its constantly congested highways. In Contra Costa, Interstate 680 is one of those high-traffic corridors, funneling many of the area’s 120,000 daily commuters. This pilot will explore how the Highway Capacity Manual – which sets assumptions for modeling freeway capacity – can be updated to incorporate future automated vehicle technology.

Iwasaki estimates that half of California’s congestion is recurrent, meaning demand for roadways is higher than supply. The other half is non-recurrent and can be attributed to things like weather, special events — such as concerts or parades — and accidents. By eliminating human driver error, which the National Highway Traffic Safety Administration estimates to be the cause of 94 percent of traffic accidents, autonomous vehicles can make the road system more efficient and reliable.

Autonomous vehicles don’t get distracted or drowsy, two of the biggest causes of human driving error. They also use redundant and diverse sensors, along with high-definition maps, to perceive the road ahead and plan much farther in advance than a human driver can.

These attributes make it easier to maintain constant speeds as well as space for vehicles to merge in and out of traffic for a smoother daily commute.

Driving Confidence

The CCTA will be using a fleet of autonomous test vehicles retrofitted with sensors and NVIDIA DRIVE AGX to gauge how much this technology can improve highway capacity.

The NVIDIA DRIVE AGX Pegasus AI compute platform uses the power of two Xavier systems-on-a-chip and two NVIDIA Turing architecture GPUs to achieve an unprecedented 320 trillion operations per second of supercomputing performance. The platform is designed and built for Level 4 and Level 5 autonomous systems, including robotaxis.


Iwasaki said the agency tapped NVIDIA for this pilot because the company’s vision matches its own: to solve real problems that haven’t been solved before, using proactive safety measures every step of the way.

With half of adult drivers reporting they’re fearful of self-driving technology, this approach to autonomous vehicles is critical to gaining public acceptance, he said.

“We need to get the word out that this technology is safer and let them know who’s behind making sure it’s safer,” Iwasaki said.


In a Class of Its Own: New Mercedes-Benz S-Class Sports Next-Gen AI Cockpit, Powered by NVIDIA

The Mercedes-Benz S-Class has always combined the best in engineering with a legendary heritage of craftsmanship. Now, the flagship sedan is adding intelligence to the mix, fusing AI with the embodiment of automotive luxury.

At a world premiere event, the legendary premium automaker debuted the redesigned flagship S-Class sedan. It features the all-new MBUX AI cockpit system, with an augmented reality head-up display, AI voice assistant and rich interactive graphics to enable every passenger in the vehicle, not just the driver, to enjoy personalized, intelligent features.

“This S-Class is going to be the most intelligent Mercedes ever,” said Mercedes-Benz CEO Ola Källenius during the virtual launch.

Like its predecessor, the next-gen MBUX system runs on the high-performance, energy-efficient compute of NVIDIA GPUs for instantaneous AI processing and sharp graphics.

“Mercedes-Benz is a perfect match for NVIDIA, because our mission is to use AI to solve problems no ordinary computers can,” said NVIDIA founder and CEO Jensen Huang, who took the new S-Class for a spin during the launch. “The technology in this car is remarkable.”

Jensen was featured alongside Grammy award-winning artist Alicia Keys and Formula One driver Lewis Hamilton at the premiere event, each showcasing the latest innovations of the premium sedan.

Watch NVIDIA founder and CEO Jensen Huang take the all new Mercedes-Benz S-Class for a spin.

The S-Class’s new intelligent system represents a significant step toward a software-defined, autonomous future. When more automated and self-driving features are integrated into the car, the driver and passengers alike can enjoy the same entertainment and productivity features, experiencing a personalized ride, no matter where they’re seated.

Unparalleled Performance

AI cockpits orchestrate crucial safety and convenience features, constantly learning to continuously deliver joy to the customer.

“For decades, the magic moment in car manufacturing was when the chassis received its engine,” Källenius said. “Today, there’s another magic moment that is incredibly important — the ‘marriage’ of the car’s body and its brain — the all-new head unit with the next-level MBUX-system.”

A vehicle’s cockpit typically requires a collection of electronic control units and switches to perform basic functions, such as powering entertainment or adjusting temperature. Leveraging NVIDIA technology, Mercedes-Benz was able to consolidate these components into an AI platform — removing 27 switches and buttons — to simplify the architecture while creating more space to add new features.

And the S-Class’s new compute headroom is as massive as its legroom. With NVIDIA at the helm, the premium sedan contains about the same computing power as 60 average vehicles, with a single chip each controlling the 3D cluster, infotainment and rear-seat displays.

“There’s more computing power packed into this car than any car, ever — three powerful computer chips with NVIDIA GPUs,” Jensen said. “Those three computer chips represent the brain and the nervous system of this car.”

Effortless Convenience

The new MBUX system makes the cutting edge in graphics, passenger detection and natural language processing seem effortless.

The S-Class features five large, brilliant screens, including a 12.8-inch central infotainment display with OLED technology, making vehicle and comfort controls even more user-friendly for every passenger. The new 3D driver display gives a spatial view at the touch of a button, providing a realistic view of the car in its surroundings.

The system delivers even more security, enabling fingerprint, face and voice recognition, alongside a traditional PIN to access personal features. Its cameras can detect if a passenger is about to exit into oncoming traffic and warn them before they open the door. The same technology is used to monitor whether a child seat is correctly attached and if the driver is paying attention to the road.

MBUX can also carry on more natural conversations. It can answer a wider range of questions, some without the key phrase “Hey Mercedes,” and can interact in 27 languages, including Thai and Czech.

These futuristic functions are the result of over 30 million lines of code written by hundreds of engineers, who are continuously developing new and innovative ways for customers to enjoy their drive.

“These engineers are practically in your garage and they’re constantly working on the software, improving it, enhancing it, creating more features, and will update it over the air,” Jensen said. “Your car can now get better and better over time.”


Safe Travels: Voyage Intros Ambulance-Grade, Self-Cleaning Driverless Vehicle Powered by NVIDIA DRIVE

Self-driving cars continue to amaze passengers as a truly transformative technology. However, in the time of COVID-19, a self-cleaning car may be even more appealing.

Robotaxi startup Voyage introduced its third-generation vehicle, the G3, this week. The  autonomous vehicle, a Chrysler Pacifica Hybrid minivan retrofitted with self-driving technology, is the company’s first designed to operate without a driver and is equipped with an ambulance-grade ultraviolet light disinfectant system to keep passengers healthy.

The new vehicles use the NVIDIA DRIVE AGX Pegasus compute platform to enable the startup’s self-driving AI for robust perception and planning. The automotive-grade platform delivers safety to the core of Voyage’s autonomous fleet.

Given the enclosed space and the proximity of the driver and passengers, ride-hailing currently poses a major risk in a COVID-19 world. By implementing a disinfecting system alongside driverless technology, Voyage is helping ensure self-driving cars continue to develop as a safer, more efficient option for everyday mobility.

The G3 vehicle uses an ultraviolet-C system from automotive supplier GHSP to destroy pathogens in the vehicle between rides. UV-C works by inactivating a pathogen’s DNA, blocking its reproductive cycle. It’s been proven to be up to 99.9 percent effective and is commonly used to sterilize ambulances and hospital rooms.

The G3 is production-ready and currently testing on public roads in San Jose, Calif., with production vehicles planned to come out next year.

G3 Compute Horsepower Takes Off with DRIVE AGX Pegasus

Voyage has been using the NVIDIA DRIVE AGX platform in its previous-generation vehicles to power its Shield automatic emergency braking system.

With the G3, the startup is unleashing the 320 TOPS of performance from NVIDIA DRIVE AGX Pegasus to process sensor data and run diverse and redundant deep neural networks simultaneously for driverless operation. Voyage’s onboard computers are automotive grade and safety certified, built to handle the harsh vehicle environment for safe daily operation.

NVIDIA DRIVE AGX Pegasus delivers the compute necessary for level 4 and level 5 autonomous driving.

DRIVE AGX Pegasus is built on two NVIDIA Xavier systems-on-a-chip. Xavier is the first SoC built for autonomous machines and was recently determined by global safety agency TÜV SÜD to meet all applicable requirements of ISO 26262. This stringent assessment means it meets the strictest standard for functional safety.

Xavier’s safety architecture combined with the AI compute horsepower of the DRIVE AGX Pegasus platform delivers the robustness and performance necessary for the G3’s fully autonomous capabilities.

Moving Forward as the World Shelters in Place

As the COVID-19 pandemic continues to limit the way people live and work, transportation must adapt to keep the world moving.

In addition to the UV-C lights, Voyage has also equipped the car with HEPA-certified air filters to ensure safe airflow inside the car. The startup uses its own employees to manage and operate the fleet, enacting strict contact tracing and temperature checks to help minimize virus spread.

The Voyage G3 is equipped with a UV-C light system to disinfect the vehicle between rides.

While these measures are in place to specifically protect against the COVID-19 virus, they demonstrate the importance of an autonomous vehicle as a place where passengers can feel safe. No matter the condition of the world, autonomous transportation translates to a worry-free voyage, every time.


AI Goes Uptown: A Tour of Smart Cities Around the Globe 

There are as many ways to define a smart city as there are cities on the road to being smart.

From London and Singapore to Seat Pleasant, Maryland, they vary widely, but most share some common characteristics.

Every city wants to be smart about being a great place to live. So, many embrace broad initiatives for connecting their citizens to the latest 5G and fiber optic networks, expanding digital literacy and services.

Most agree that a big part of being smart means using technology to make their cities more self-aware, automated and efficient.

That’s why a smart city is typically a kind of municipal Internet of Things — a network of cameras and sensors that can see, hear and even smell. These sensors, especially video cameras, generate massive amounts of data that can serve many civic purposes like helping traffic flow smoothly.

Cities around the globe are turning to AI to sift through that data in real time for actionable insights. And, increasingly, smart cities build realistic 3D simulations of themselves, digital twins to test out ideas of what they might look like in the future.

“We define a smart city as a place applying advanced technology to improve the quality of life for people who live in it,” said Sokwoo Rhee, who’s worked on more than 200 smart city projects in 25 countries as an associate director for cyber-physical systems innovation at the U.S. National Institute of Standards and Technology.

U.S., London Issue Smart Cities Guidebooks

At NIST, Rhee oversees work on a guide for building smart cities. Eventually it will include reports on issues and case studies in more than two dozen areas from public safety to water management systems.

Across the pond, London describes its smart city efforts in a 60-page document that details many ambitious goals. Like smart cities from Dubai to San Jose in Silicon Valley, it’s a metro-sized work in progress.

An image from the Smart London guide.

“We are far from the ideal at the moment with a multitude of systems and a multitude of vendors making the smart city still somewhat complex and fragmented,” said Andrew Hudson-Smith, who is chair of digital urban systems at The Centre for Advanced Spatial Analysis at University College London and sits on a board that oversees London’s smart city efforts.

Living Labs for AI

In a way, smart cities are both kitchen sinks and living labs of technology.

They host everything from air-quality monitoring systems to repositories of data cleared for use in shared AI projects. The London Datastore, for example, already contains more than 700 publicly available datasets.

One market researcher tracks a basket of 13 broad areas that define a smart city, from smart streetlights to connected garbage cans. A smart-parking vendor in Stockholm took into account 24 factors — including the number of Wi-Fi hotspots and electric-vehicle charging stations — in its 2019 ranking of the world’s 100 smartest cities. (Its top five were all in Scandinavia.)

“It’s hard to pin it down to a limited set of technologies because everything finds its way into smart cities,” said Dominique Bonte, a managing director at market watcher ABI Research. Among popular use cases, he called out demand-response systems as “a huge application for AI because handling fluctuating demand for electricity and other services is a complex problem.”

Sweden’s EasyPark lists 24 factors that define a smart city.

Because it’s broad, it’s also big. Market watchers at Navigant Research expect the global market for smart-city gear to grow from $97.4 billion in annual revenue in 2019 to $265.4 billion by 2028 at a compound annual growth rate of 11.8 percent.

It’s still early days. In a January 2019 survey of nearly 40 U.S. local and state government managers, more than 80 percent thought a municipal Internet of Things would have a significant impact on their operations, but most were still in a planning phase and fewer than 10 percent had active projects.

Most smart cities are still under construction, according to a NIST survey.

“Smart cities mean many things to many people,” said Saurabh Jain, product manager of Metropolis, NVIDIA’s GPU software stack for vertical markets such as smart cities.

“Our focus is on building what we call the AI City with the real jobs that can be done today with deep learning, tapping into the massive video and sensor datasets cities generate,” he said.

For example, Verizon deployed video nodes based on the NVIDIA Jetson TX1 on existing streetlights in Boston and Sacramento to analyze and improve traffic flow, enhance pedestrian safety and optimize parking.

“Rollout is happening fast across the globe and cities are expanding their lighting infrastructure to become a smart-city platform … helping to create efficiency savings and a new variety of citizen services,” said David Tucker, head of product management in the Smart Communities Group at Verizon in a 2018 article.

Smart Streetlights for Smart Cities

Streetlights will be an important part of the furniture of tomorrow’s smart city.

So far, only a few hundred are outfitted with various mixes of sensors and Wi-Fi and cellular base stations. The big wave is yet to come, as the world’s estimated 360 million streetlight poles slowly upgrade to energy-saving LED lights.

A European take on a smart streetlight.

In a related effort, the city of Bellevue, Washington, tested a computer vision system from Microsoft Research to improve traffic safety and reduce congestion. Researchers at the University of Wollongong recently described similar work using NVIDIA Jetson TX2 modules to track the flow of vehicles and pedestrians in Liverpool, Australia.

Airports, retail stores and warehouses are already using smart cameras and AI to run operations more efficiently. They are defining a new class of edge computing networks that smart cities can leverage.

For example, Seattle-Tacoma International Airport (SEA) will roll out an AI system from startup Assaia that uses NVIDIA GPUs to shorten the time it takes to turn around flights.

“Video analytics is crucial in providing full visibility over turnaround activities as well as improving safety,” said an SEA manager in a May report.

Nashville, Zurich Explore the Simulated City

Some smart cities are building digital twins, 3D simulations that serve many purposes.

For example, both Zurich and Nashville will someday let citizens and city officials don goggles at virtual town halls to see simulated impacts of proposed developments.

“The more immersive and fun an experience, the more you increase engagement,” said Dominik Tarolli, director of smart cities at Esri, which is supplying simulation software that runs on NVIDIA GPUs for both cities.

Cities as far apart in geography and population as Singapore and Rennes, France, built digital twins using a service from Dassault Systèmes.

“We recently signed a partnership with Hong Kong and presented examples for a walkability study that required a 3D simulation of the city,” said Simon Huffeteau, a vice president working on smart cities for Dassault.

Europe Keeps an AI on Traffic

Many smart cities get started with traffic control. London uses digital signs to post speed limits that change to optimize traffic flow. It also uses license-plate recognition to charge tolls for entering a low-emission zone in the city center.

Cities in Belgium and France are considering similar systems.

“We think in the future cities will ban the most polluting vehicles to encourage people to use public transportation or buy electric vehicles,” said Bonte of ABI Research. “Singapore is testing autonomous shuttles on a 5.7-mile stretch of its streets,” he added.

Nearby, Jakarta uses a traffic-monitoring system from Nodeflux, a member of NVIDIA’s Inception program that nurtures AI startups. The software taps AI and the nearly 8,000 cameras already in place around Jakarta to recognize license plates of vehicles with unpaid taxes.

The system is one of more than 100 third-party applications that run on Metropolis, NVIDIA’s application framework for the Internet of Things.

Unsnarling Traffic in Israel and Kansas City

Traffic was the seminal app for a smart-city effort in Kansas City that started in 2015 with a $15 million smart streetcar. Today, residents can call up digital dashboards detailing current traffic conditions around town.

And in Israel, the city of Ashdod deployed AI software from viisights. The software analyzes patterns in a traffic monitoring system powered by NVIDIA Metropolis to help keep citizens safe.

NVIDIA created the AI City Challenge to advance work on deep learning as a tool to unsnarl traffic. Now in its fourth year, it draws nearly 1,000 researchers competing in more than 300 teams that include members from multiple city and state traffic agencies.

The event spawned CityFlow, one of the world’s largest datasets for applying AI to traffic management. It consists of more than three hours of synchronized high-definition videos from 40 cameras at 10 intersections, creating 200,000 annotated bounding boxes around vehicles captured from different angles under various conditions.

Drones to the Rescue in Maryland

You don’t have to be a big city with lots of money to be smart. Seat Pleasant, Maryland, a Washington, D.C., suburb of less than 5,000 people, launched a digital hub for city services in August 2017.

Since then, it has installed intelligent lighting, connected waste cans, home health monitors and video analytics to save money, improve traffic safety and reduce crime. It’s also become the first U.S. city to use drones for public safety, including plans for life-saving delivery of emergency medicines.

The idea got its start when Mayor Eugene Grant, searching for ways to recover from the 2008 economic downturn, attended an event on innovation villages.

“Seat Pleasant would like to be a voice for small cities in America where 80 percent have less than 10,000 residents,” said Grant. “Look at these cities as test beds of innovation … living labs,” he added.

Mayor Grant of Seat Pleasant aims to set an example of how small towns can become smart cities.

Rhee of NIST agrees. “I’m seeing a lot of projects embracing a broadening set of emerging technologies, making smart cities like incubation programs for new businesses like air taxis and autonomous vehicles that can benefit citizens,” he said, noting that even rural communities will get into the act.

Simulating a New Generation of Smart Cities

When the work is done, go to the movies. Hollywood might provide a picture of the next horizon in the same way it sparked some of the current work.

Esri’s tools are used to simulate cities for movies as well as the real world.

Flicks including Blade Runner 2049, Cars, Guardians of the Galaxy and Zootopia used a program called CityEngine from startup Procedural that enables a rule-based approach to constructing simulated cities.

Their work caught the eye of Esri, which acquired the company and bundled its program with its ArcGIS Urban planning tool, now a staple for hundreds of real cities worldwide.

“Games and movies make people expect more immersive experiences, and that requires more computing,” said Tarolli, a co-founder of Procedural and now Esri’s point person on smart cities.


Fleet Dreams Are Made of These: TuSimple and Navistar to Build Autonomous Trucks Powered by NVIDIA DRIVE

Self-driving trucks are coming to an interstate near you.

Autonomous trucking startup TuSimple and truck maker Navistar recently announced they will build self-driving semi trucks, powered by the NVIDIA DRIVE AGX platform. The collaboration is one of the first aimed at developing purpose-built autonomous trucks, with production set to begin in 2024.

Over the past decade, self-driving truck developers have relied on traditional trucks retrofitted with the sensors, hardware and software necessary for autonomous driving. Building these trucks from the ground up, however, allows companies to tailor them to the needs of a self-driving system as well as take advantage of the infrastructure of a mass production truck manufacturer.

This transition is the first step from research to widespread deployment, said Chuck Price, chief product officer at TuSimple.

“Our technology, developed in partnership with NVIDIA, is ready to go to production with Navistar,” Price said. “This is a significant turning point for the industry.”

Tailor-Made Trucks

Developing a truck to drive on its own takes more than a software upgrade.

Autonomous driving relies on redundant and diverse deep neural networks, all running simultaneously to handle perception, planning and actuation. This requires massive amounts of compute.

The NVIDIA DRIVE AGX platform delivers high-performance, energy-efficient compute to enable AI-powered and autonomous driving capabilities. TuSimple has been using the platform in its test vehicles and pilots, such as its partnership with the United States Postal Service.

Building dedicated autonomous trucks makes it possible for TuSimple and Navistar to develop a centralized architecture optimized for the power and performance of the NVIDIA DRIVE AGX platform. The platform is also automotive grade, meaning it is built to withstand the wear and tear of years driving on interstate highways.

Invaluable Infrastructure

In addition to a customized architecture, developing an autonomous truck in partnership with a manufacturer opens up valuable infrastructure.

Truck makers like Navistar provide nationwide support for their fleets, with local service centers and vehicle tracking. This network is crucial for deploying self-driving trucks that will crisscross the country on long-haul routes, providing the seamless, convenient service needed to maintain efficiency.

TuSimple is also building out an HD map network of the nation’s highways for the routes its vehicles will travel. Combined with the widespread fleet management network, this infrastructure makes its autonomous trucks appealing to a wide variety of partners — UPS, U.S. Xpress, Penske Truck Leasing and food service supply chain company McLane Inc., a Berkshire Hathaway company, have all signed on to this autonomous freight network.

And backed by the performance of NVIDIA DRIVE AGX, these vehicles will continue to improve, delivering safer, more efficient logistics across the country.

“We’re really excited as we move into production to have a partner like NVIDIA with us the whole way,” Price said.


All the Right Moves: How PredictionNet Helps Self-Driving Cars Anticipate Future Traffic Trajectories

Driving requires the ability to predict the future. Every time a car suddenly cuts into a lane or multiple cars arrive at the same intersection, drivers must predict how others will act in order to proceed safely.

While humans rely on driver cues and personal experience to read these situations, self-driving cars can use AI to anticipate traffic patterns and safely maneuver in a complex environment.

We have trained the PredictionNet deep neural network to understand the driving environment around a car in top-down or bird’s-eye view, and to predict the future trajectories of road users based on both live perception and map data.

PredictionNet analyzes the past movements of all road agents, such as cars, buses, trucks, bicycles and pedestrians, to predict their future movements. The DNN takes in previous road user positions, along with the positions of fixed objects and landmarks in the scene, such as traffic lights, traffic signs and lane line markings provided by the map.

Based on these inputs, which are rasterized in top-down view, the DNN predicts road user trajectories into the future, as shown in figure 1.

Predicting the future carries inherent uncertainty. PredictionNet captures this by also providing statistics for each road user’s predicted future trajectory, as shown in figure 1.

Figure 1. PredictionNet results visualized in top-down view. Gray lines denote the map, dotted white lines represent vehicle trajectories predicted by the DNN, while white boxes represent ground truth trajectory data. The colorized clouds represent the probability distributions for predicted vehicle trajectories, with warmer colors representing points that are closer in time to the present, and cooler colors representing points further in the future.

A Top-Down Convolutional RNN-Based Approach

Previous approaches to trajectory prediction for self-driving cars have leveraged imitation learning and generative models that sample future trajectories, as well as convolutional and recurrent neural networks that process perception inputs to predict where road users will go.

For PredictionNet, we adopt an RNN-based architecture that uses two-dimensional convolutions. This structure is highly scalable for arbitrary input sizes, including the number of road users and prediction horizons.

As is typically the case with any RNN, different time steps are fed into the DNN sequentially. Each time step is represented by a top-down view image that shows the vehicle surroundings at that time, including both dynamic obstacles observed via live perception, and fixed landmarks provided by a map.
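To make the idea of a rasterized top-down input concrete, the sketch below paints dynamic agents and map landmarks into an ego-centered, multi-channel grid that 2D convolutions can consume. It is only an illustration of the general technique; the channel layout, grid resolution and helper names are assumptions, not details of NVIDIA’s implementation.

```python
# Hedged sketch: rasterize ego-relative agent and map positions into a 2-channel grid.
import numpy as np

def rasterize(agent_xy, map_xy, grid=256, span_m=100.0):
    """agent_xy, map_xy: iterables of (x, y) ego-relative positions in meters,
    with x pointing forward and y pointing left."""
    raster = np.zeros((2, grid, grid), dtype=np.float32)
    scale = grid / span_m                          # pixels per meter

    def to_px(x, y):
        # The ego vehicle sits at the center of the grid, facing "up".
        return int(grid / 2 - x * scale), int(grid / 2 - y * scale)

    for x, y in agent_xy:                          # channel 0: dynamic road users
        r, c = to_px(x, y)
        if 2 <= r < grid - 2 and 2 <= c < grid - 2:
            raster[0, r - 2:r + 3, c - 2:c + 3] = 1.0
    for x, y in map_xy:                            # channel 1: lane lines, signs, lights
        r, c = to_px(x, y)
        if 0 <= r < grid and 0 <= c < grid:
            raster[1, r, c] = 1.0
    return raster

# Usage sketch: a car 20 m ahead, another 8 m ahead and 3 m to the left,
# plus sampled points along a lane line 1.75 m to the left.
frame = rasterize([(20.0, 0.0), (8.0, 3.0)], [(d, 1.75) for d in range(0, 50)])
```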

This top-down view image is processed by a set of 2D convolutions before being passed to the RNN. In the current implementation, PredictionNet is able to confidently predict one to five seconds into the future, depending on the complexity of the scene (for example, highway versus urban).
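The following PyTorch sketch shows one way such a convolutional RNN could be wired together: a small 2D conv encoder per time step, a convolutional GRU cell that carries state across time, and an output head that emits an occupancy score plus a mean and log-variance for predicted motion, which is one simple way to expose trajectory uncertainty. Layer sizes, the ConvGRU cell and the output parameterization are illustrative assumptions rather than the actual PredictionNet architecture; the two input channels match the rasterization sketch above.

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """A convolutional GRU cell that keeps its hidden state as a 2D feature map."""
    def __init__(self, in_ch, hidden_ch, k=3):
        super().__init__()
        pad = k // 2
        self.gates = nn.Conv2d(in_ch + hidden_ch, 2 * hidden_ch, k, padding=pad)
        self.cand = nn.Conv2d(in_ch + hidden_ch, hidden_ch, k, padding=pad)
        self.hidden_ch = hidden_ch

    def forward(self, x, h):
        if h is None:
            h = x.new_zeros(x.size(0), self.hidden_ch, x.size(2), x.size(3))
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], 1))).chunk(2, 1)
        n = torch.tanh(self.cand(torch.cat([x, r * h], 1)))
        return (1 - z) * h + z * n

class TrajectoryPredictor(nn.Module):
    def __init__(self, in_ch=2, feat=32, hidden=64):
        super().__init__()
        # 2D conv encoder applied to each rasterized top-down frame
        # (dynamic agents and map landmarks stacked as channels).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.rnn = ConvGRUCell(feat, hidden)
        # Per-cell output: occupancy logit plus mean and log-variance of the
        # predicted (x, y) displacement, a simple way to expose uncertainty.
        self.head = nn.Conv2d(hidden, 1 + 2 + 2, 1)

    def forward(self, frames, future_steps=10):
        # frames: (batch, time, channels, height, width) history, oldest first.
        h, enc = None, None
        for t in range(frames.size(1)):
            enc = self.encoder(frames[:, t])
            h = self.rnn(enc, h)
        blank = torch.zeros_like(enc)              # no new observations in the future
        preds = []
        for _ in range(future_steps):              # roll the recurrent state forward
            h = self.rnn(blank, h)
            preds.append(self.head(h))
        # (batch, future_steps, 5, height/2, width/2)
        return torch.stack(preds, dim=1)

# Usage sketch: eight past frames of a 2-channel 256x256 raster, ten future steps.
model = TrajectoryPredictor()
out = model(torch.randn(1, 8, 2, 256, 256), future_steps=10)
```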

The PredictionNet model also lends itself to a highly efficient runtime implementation in the TensorRT deep learning inference SDK, with 10 ms end-to-end inference times achieved on an NVIDIA TITAN RTX GPU.
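One common route to a TensorRT deployment is to export the trained network to ONNX and then build an optimized engine with trtexec, the benchmarking tool that ships with TensorRT. The sketch below shows that path under stated assumptions; file names, shapes and build flags are illustrative, and NVIDIA’s production pipeline may differ.

```python
# Hedged sketch: export the model from the sketch above to ONNX for TensorRT.
import torch

model = TrajectoryPredictor().eval()               # class defined in the sketch above
dummy = torch.randn(1, 8, 2, 256, 256)             # (batch, time, channels, H, W)
torch.onnx.export(model, dummy, "predictionnet_sketch.onnx", opset_version=13)

# Then, on a machine with TensorRT installed, build and time an FP16 engine:
#   trtexec --onnx=predictionnet_sketch.onnx --fp16 --saveEngine=predictionnet_sketch.plan
```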

Scalable Results

Results thus far have shown PredictionNet to be highly promising for several complex traffic scenarios. For example, the DNN can predict which cars will proceed straight through an intersection versus which will turn. It’s also able to correctly predict the car’s behavior in highway merging scenarios.

We have also observed that PredictionNet is able to learn velocities and accelerations of vehicles on the scene. This enables it to correctly predict speeds of both fast-moving and fully stopped vehicles, as well as to predict stop-and-go traffic patterns.

PredictionNet is trained on highly accurate lidar data to achieve higher prediction accuracy. However, the inference-time perception input to the DNN can be based on any sensor input combination (that is, camera, radar or lidar data) without retraining. This also means that the DNN’s prediction capabilities can be leveraged for various sensor configurations and levels of autonomy, from level 2+ systems all the way to level 4/level 5.

PredictionNet’s ability to anticipate behavior in real time can be used to create an interactive training environment for reinforcement learning-based planning and control policies for features such as automatic cruise control, lane changes or intersection handling.

By using PredictionNet to simulate how other road users will react to an autonomous vehicle’s behavior based on real-world experiences, we can train a safer, more robust and courteous AI driver.


Driving the Future: What Is an AI Cockpit?

From Knight Rider’s KITT to Iron Man’s JARVIS, intelligent copilots have been a staple of forward-looking pop culture.

Advancements in AI and high-performance processors are turning these sci-fi concepts into reality. But what, exactly, is an AI cockpit, and how will it change the way we move?

AI is enabling a range of new software-defined, in-vehicle capabilities across the transportation industry. With centralized, high-performance compute, automakers can now build vehicles that become smarter over time.

A vehicle’s cockpit typically requires a collection of electronic control units and switches to perform basic functions, such as powering entertainment or adjusting temperature. Consolidating these components with an AI platform such as NVIDIA DRIVE AGX simplifies the architecture while creating more compute headroom to add new features. In addition, NVIDIA DRIVE IX provides an open and extensible software framework for a software-defined cockpit experience.

Mercedes-Benz released the first such intelligent cockpit, the MBUX AI system, powered by NVIDIA technology, in 2018. The system is currently in more than 20 Mercedes-Benz models, with the second generation debuting in the upcoming S-Class.

The second-generation MBUX system is set to debut in the Mercedes-Benz S-Class.

MBUX and other such AI cockpits orchestrate crucial safety and convenience features much more smoothly than the traditional vehicle architecture. They centralize compute for streamlined functions, and they’re constantly learning. By regularly delivering new features, they extend the joy of ownership throughout the life of the vehicle.

Always Alert

But safety is the foremost benefit of AI in the vehicle. AI acts as an extra set of eyes on the 360-degree environment surrounding the vehicle, as well as an intelligent guardian for drivers and passengers inside.

One key feature is driver monitoring. As automated driving functions become more commonplace across vehicle fleets, it’s critical to ensure the human at the wheel is alert and paying attention.

AI cockpits use interior cameras to monitor whether the driver is paying attention to the road.

Using interior-facing cameras, AI-powered driver monitoring can track driver activity, head position and facial movements to analyze whether the driver is paying attention, drowsy or distracted. The system can then alert the driver, bringing attention back to the road.
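As a rough illustration of the alerting logic that could sit on top of such a perception model, the sketch below accumulates per-frame head-pose and eye-openness estimates over a short window and raises an alert when thresholds are exceeded. The signal names, thresholds and alert strings are hypothetical assumptions and are not part of NVIDIA DRIVE IX.

```python
# Hypothetical driver-monitoring alert logic over per-frame perception outputs.
from collections import deque
from dataclasses import dataclass
from typing import Optional

@dataclass
class CabinFrame:
    head_yaw_deg: float   # 0 means the driver is facing the road
    eye_openness: float   # 1.0 fully open, 0.0 closed

class DriverMonitor:
    def __init__(self, fps=30, window_s=2.0,
                 max_yaw_deg=30.0, min_eye_openness=0.3, trigger_ratio=0.7):
        self.history = deque(maxlen=int(fps * window_s))
        self.max_yaw_deg = max_yaw_deg
        self.min_eye_openness = min_eye_openness
        self.trigger_ratio = trigger_ratio

    def update(self, frame: CabinFrame) -> Optional[str]:
        """Returns an alert string if the driver looks drowsy or distracted."""
        self.history.append(frame)
        if len(self.history) < self.history.maxlen:
            return None                                   # not enough evidence yet
        n = len(self.history)
        drowsy = sum(f.eye_openness < self.min_eye_openness for f in self.history)
        distracted = sum(abs(f.head_yaw_deg) > self.max_yaw_deg for f in self.history)
        if drowsy / n > self.trigger_ratio:
            return "drowsiness_alert"                     # e.g., chime plus seat vibration
        if distracted / n > self.trigger_ratio:
            return "distraction_alert"                    # e.g., visual warning in the cluster
        return None

# Usage sketch: feed one frame of (hypothetical) perception output per camera frame.
monitor = DriverMonitor()
alert = monitor.update(CabinFrame(head_yaw_deg=45.0, eye_openness=0.9))
```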

This system can also help keep those inside and outside the vehicle safe and alert. By sensing whether a passenger is about to exit a car and using exterior sensors to monitor the outside environment, AI can warn of oncoming traffic or pedestrians and bikers potentially in the path of the opening door.

It also acts as a guardian in emergency situations. If a passenger is not sitting properly in their seat, the system can prevent an airbag activation that would harm rather than help them. It can also use AI to detect the presence of children or pets left behind in the vehicle, helping prevent heat stroke.

An AI cockpit is always on the lookout for a vehicle’s occupants, adding an extra level of safety with full cabin monitoring so they can enjoy the ride.

Constant Convenience

In addition to safety, AI helps make the daily drive easier and more enjoyable.

With crystal-clear graphics, drivers can receive information about their route, as well as what the sensors on the car see, quickly and easily. Augmented reality heads-up displays and virtual reality views of the vehicle’s surroundings deliver the most important data (such as parking assistance, directions, speed and oncoming obstacles) without disrupting the driver’s line of sight.

These visualizations help build trust in the driver assistance system as well as understanding of its capabilities and limitations for a safer and more effective driving experience.

Using natural language processing, drivers can control vehicle settings without taking their eyes off the road. Conversational AI enables easy access to search queries, like finding the best coffee shops or sushi restaurants along a given route. The same system that monitors driver attention can also interpret gesture controls, providing another way for drivers to communicate with the cockpit without having to divert their gaze.

Natural language processing makes it possible to access vehicle controls without taking your eyes off the road.

These technologies can also be used to personalize the driving experience. Biometric user authentication and voice recognition allow the car to identify who is driving, and adjust settings and preferences accordingly.

AI cockpits are being integrated into more models every year, making them smarter and safer while constantly adding new features. High-performance, energy-efficient AI compute platforms consolidate in-car systems into a centralized architecture, enabling the open NVIDIA DRIVE IX software platform to meet future cockpit needs.

What used to be fanciful fiction will soon be part of our daily driving routine.


Mobileye and Ford Announce High-Volume Agreement for ADAS in Global Vehicles

Mobileye, an Intel company, and Ford Motor Company are collaborating on cutting-edge driver-assistance systems across Ford’s global product lineup.

As the chosen supplier of vision-sensing technology for Ford’s advanced driver-assistance systems (ADAS), Mobileye will provide its EyeQ® family of devices, together with vision-processing software, to support Level 1 and Level 2 ADAS in Ford vehicles globally.

More: Autonomous Driving at Mobileye (Press Kit) | Mobileye Advanced Driver-Assistance Systems (Fact Sheet)

“It is a privilege to extend and expand our long-standing collaboration with a company that is so committed to safety on behalf of its global customer base,” said Professor Amnon Shashua, president and CEO of Mobileye. “We look forward to working closely together to bring these functionalities to market in the full Ford product lineup.”

Working together, Ford and Mobileye have agreed to the following:

  • Ford and Mobileye will offer better camera-based detection capabilities for ADAS, including improved forward-collision warning; vehicle, pedestrian and cyclist detection; plus lane-keeping features.
  • Mobileye will provide its suite of EyeQ sensing technology to support available Ford Co-Pilot360 Technology ADAS features, such as the Lane-Keeping System, auto high-beam headlamps, Pre-Collision Assist with Automatic Emergency Braking, and Adaptive Cruise Control with Stop-and-Go and Lane-Centering.
  • Ford will display Mobileye’s name in vehicles through the inclusion of its logo in the automaker’s SYNC® ADAS communication displays, making customers aware that some Ford Co-Pilot360 Technology features use sensing capabilities provided by Mobileye.

Read the full news release on Ford’s website: Ford and Mobileye Expand Relationship to Offer Better Camera-Based Collision Avoidance in Global Vehicles


Mobileye Starts Testing Self-Driving Vehicles in Germany

In July 2020, Mobileye announced that Germany’s independent technical service provider, TÜV Süd, had awarded it an automated vehicle testing permit. It allows the company to drive its test vehicles in real-world traffic on all German roads at speeds up to 130 kilometers per hour. Mobileye is starting testing in Munich and also plans testing in other parts of Germany. (Credit: Mobileye)
What’s New: Mobileye, an Intel company, received an automated vehicle (AV) testing permit recommendation from the independent technical service provider TÜV SÜD. As one of the leading experts in the field of safe and secure automated driving, TÜV SÜD enabled Mobileye to obtain approval from German authorities by validating the vehicle and functional safety concepts of Mobileye’s AV test vehicle. This allows Mobileye to perform AV testing anywhere in Germany, including urban and rural areas as well as the Autobahn at regular driving speed of up to 130 kilometers per hour. The AV testing in Germany in real-world traffic is starting now in and around Munich.

“Mobileye is eager to show the world our best-in-class self-driving vehicle technology and safety solutions as we get closer to making safe, affordable self-driving mobility solutions and consumer vehicles a reality. The new AV Permit provides us an opportunity to instill even more confidence in autonomous driving with future riders, global automakers and international transportation agencies. We thank TÜV SÜD for their trusted collaboration as we expand our AV testing to public roads in Germany.”
– Johann Jungwirth, vice president, Mobility-as-a-Service (MaaS), Mobileye

Why It Matters: Mobileye is one of the first non-OEM companies to receive a permit to test AVs on open roads in Germany. Until now, AV test drives in Germany have primarily taken place in closed and simulated environments. The basis for the independent vehicle assessment by TÜV SÜD in Germany builds on Mobileye’s existing program in place in Israel, where it has tested AVs for several years.

“With the TÜV SÜD AV-permit we bring in our broad expertise as a neutral and independent third party on the way to safe and secure automated mobility of the future,” said Patrick Fruth, CEO Division Mobility, TÜV SÜD. “Our demanding assessment framework and test procedure considers state-of-the-art approaches to safety and combines physical real-world tests and scenario-based simulations.”

With the ability to test automated vehicles with a safety operator on public roads in Germany, Mobileye is taking another significant step toward the goal of a driverless future. On the heels of Mobileye’s acquisition of Moovit, a leading MaaS solutions company, as well as recent collaborations to test and deploy self-driving vehicles in France, Japan, Korea and Israel, the new testing permit strengthens Mobileye’s growing global leadership position as a provider of AV technology as well as complete mobility solutions.

How It Works: The new permit will allow Mobileye to demonstrate to the global automotive industry and partners the safety, functionality and scalability of its unique self-driving system (SDS) for MaaS and consumer autonomous vehicles. The Mobileye SDS comprises the industry’s most advanced vision sensing technology, True Redundancy with two independent perception sub-systems, crowd-sourced mapping in the form of Road Experience Management™ (REM™) and its pioneering Responsibility-Sensitive Safety (RSS) driving policy.
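RSS formalizes common-sense driving rules as explicit formulas. As one example, the sketch below implements the published RSS minimum safe longitudinal following distance between an ego vehicle and the car ahead (Shalev-Shwartz, Shammah and Shashua, “On a Formal Model of Safe and Scalable Self-driving Cars”). The parameter values are illustrative assumptions, and this is not Mobileye’s production code.

```python
# Hedged illustration of the RSS safe longitudinal distance rule.
def rss_safe_longitudinal_distance(
    v_rear: float,                 # rear (ego) vehicle speed, m/s
    v_front: float,                # lead vehicle speed, m/s
    response_time: float = 0.5,    # rho: ego response time, s (illustrative)
    a_accel_max: float = 3.0,      # max ego acceleration during response, m/s^2
    b_brake_min: float = 4.0,      # min braking the ego commits to, m/s^2
    b_brake_max: float = 8.0,      # max braking the lead car may apply, m/s^2
) -> float:
    """Distance below which the ego can no longer guarantee avoiding a rear-end
    collision, even by braking at b_brake_min after the response time."""
    v_after_response = v_rear + response_time * a_accel_max
    d = (
        v_rear * response_time
        + 0.5 * a_accel_max * response_time ** 2
        + v_after_response ** 2 / (2 * b_brake_min)
        - v_front ** 2 / (2 * b_brake_max)
    )
    return max(d, 0.0)

# Example: ego at 30 m/s behind a lead car doing 25 m/s -> roughly 100 m.
print(rss_safe_longitudinal_distance(30.0, 25.0))
```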

Although the first tests of AVs using Mobileye’s SDS will be completed in Munich, the company plans to also perform AV testing in other parts of Germany. In addition, Mobileye expects to scale open-road testing in other countries before the end of 2020.

To obtain the authorization, Mobileye-powered AV test vehicles underwent a series of rigorous safety tests, and the company provided comprehensive technical documentation. The application also included a detailed hazard analysis, vehicle safety and functional safety concepts, and proof that the cars can be safely integrated into public road traffic – an assessment that was made possible using Mobileye’s RSS.


More Context: As Mobileye begins self-driving vehicle testing in Germany, Mobileye and Moovit will start demonstrating full end-to-end ride hailing mobility services based on Moovit’s mobility platform and apps using Mobileye’s AVs. Intel is pursuing the goal of continuing to develop pioneering technologies together with Mobileye and Moovit that will make roads safer for all road users while also improving mobility access for all.

In addition to the development of market-ready technologies, an important prerequisite is the worldwide mapping of roads. Mobileye has already successfully laid the foundations with REM. In cooperation with various automobile manufacturers, data from 25 million vehicles is expected to be collected by 2025. Mobileye is creating high-definition maps of the worldwide road infrastructure as the basis for safe autonomous driving. Millions of kilometers of roads across the globe are mapped every day with the REM technology.

Together, Intel, Mobileye and Moovit are driving forward the implementation of their mobility-as-a-service strategy. This strategy offers society and individuals solutions to today’s major social costs of transportation. The goal is to make mobility safe, accessible, clean, affordable and convenient, so that people can travel efficiently, flexibly and smartly from Point A to Point B. All means of transport — from public transport to car and bike sharing services to ride hailing and ride sharing with self-driving vehicles — will be bundled within one service offering of Moovit and Mobileye, smartly managed by Moovit’s mobility intelligence platform. The advantages are manifold: traffic congestion is minimized, emissions are reduced, and people are given equal and affordable access to mobility — an approach that is a top priority at Intel.

Even More Context: Mobileye Autonomous Driving (Press Kit) | Intel acquires Moovit to accelerate Mobileye’s Mobility-as-a-Service offering | Infographic: The Passenger Economy | Navigating the Winding Road Toward Driverless Mobility


Mobileye and WILLER Partner on Self-Driving Mobility Solutions for Japan, Southeast Asia

JERUSALEM, Israel, and OSAKA, Japan, July 8, 2020 – Mobileye, an Intel Company, and WILLER, one of the largest transportation operators in Japan, Taiwan and the Southeast Asian region, today announced a strategic collaboration to launch an autonomous robotaxi service in Japan and markets across Southeast Asia, including Taiwan. Beginning in Japan, the companies will collaborate on the testing and deployment of autonomous transportation solutions based on Mobileye’s automated vehicle (AV) technology.

More: Autonomous Driving at Intel | Mobileye News

“Our new collaboration with WILLER brings a meaningful addition to Mobileye’s growing global network of transit and mobility ecosystem partners,” said Prof. Amnon Shashua, Intel senior vice president and president and CEO of Mobileye. “We look forward to collaborating with WILLER as we work together for new mobility in the region by bringing self-driving mobility services to Japan, Taiwan and ASEAN markets.”

“Collaboration with Mobileye is highly valuable for WILLER and a big step moving forward to realize our vision of innovating transportation services: travel anytime and anywhere by anybody,” said Shigetaka Murase, founder and CEO of WILLER. “Innovation of transportation will lead to a smarter, safer and more sustainable society where people enjoy higher quality of life.”

Together, Mobileye and WILLER are seeking to commercialize self-driving taxis and autonomous on-demand shared shuttles in Japan, while leveraging each other’s strengths. Mobileye will supply autonomous vehicles integrating its self-driving system, while WILLER will offer services tailored to each region and its users, help ensure the regulatory framework is in place, and provide mobility services and solutions for fleet operation companies.

The two companies aim to begin testing robotaxis on public roads in Japan in 2021, with plans to launch fully self-driving ride-hailing and ride-sharing mobility services in 2023, while exploring opportunities for similar services in Taiwan and other Southeast Asian markets.

For Mobileye, the collaboration with WILLER advances the company’s global mobility-as-a-service (MaaS) ambitions. Since announcing its intention to become a complete mobility provider, Mobileye has begun a series of collaborations with cities, transportation agencies and mobility technology companies to develop and deploy self-driving mobility solutions in key markets. The agreement with WILLER builds on Mobileye’s existing MaaS partnerships. Examples include the agreement with Daegu Metropolitan City, South Korea, to deploy robotaxis based on Mobileye’s self-driving system, and the joint venture with Volkswagen and Champion Motors to operate an autonomous ride-hailing fleet in Israel. The collaboration with WILLER greatly expands and strengthens the company’s global MaaS ambition.

WILLER aims to unify user experiences across countries in the region; it released a MaaS app in 2019 and enabled a QR-code-based payment system this year. WILLER has partnered with Kuo-Kuang Motor Transportation, the largest bus operator in Taiwan, and Mai Linh, the largest taxi company in Vietnam, and has invested in Car Club, a car-sharing service provider in Singapore. WILLER also partners with 150 local transportation providers in Japan. On top of these partnerships, WILLER will provide self-driving ride-hailing and ride-sharing services in the region, delivering the best customer ride experiences together with Mobileye.

The collaboration between WILLER and Mobileye will add a new transportation mode to the existing range of transportation services, including highway buses, railways and car-sharing. Adding self-driving vehicles, on-demand features and sharing services will improve customer ride experiences and address social challenges such as traffic accidents, congestion and, especially, the shortage of drivers and the challenges resulting from Japan’s aging society. Together Mobileye and WILLER will accelerate the social benefits of self-driving transportation solutions that contribute to higher quality of daily lives, making society smarter, safer and more sustainable.

About Mobileye
Mobileye is the global leader in the development of computer vision and machine learning, data analysis, localization and mapping for advanced driver-assistance systems and automated driving. Mobileye’s technology helps keep people safer on the road, reduces the risks of traffic accidents, saves lives and aims to revolutionize the driving experience by enabling autonomous driving. Mobileye’s proprietary software algorithms and EyeQ® chips perform detailed interpretations of the visual field in order to anticipate possible collisions with other vehicles, pedestrians, cyclists, animals, debris and other obstacles. Mobileye’s products are also able to detect roadway markings such as lanes, road boundaries, barriers and similar items; identify and read traffic signs, directional signs and traffic lights; create a RoadBook™ of localized drivable paths and visual landmarks using REM™; and provide mapping for autonomous driving. More information is available in Mobileye’s press kit.

About WILLER
WILLER was established in 1994 to provide society- and community-centric transportation services. WILLER pursues cutting-edge technology and marketing strategies to improve customers’ ride experiences and create innovative value for society and local communities. In Japan, WILLER has the largest intercity bus network, operates a railway in Kyoto and runs unique restaurant buses that offer local cuisine area by area. Besides Japan, WILLER operates car-sharing services in Singapore and ride-hailing taxis in Vietnam.
