Coming to a Desktop Near You: The Future of Self-Driving

There has never been a better time to learn how AI will transform the way people, goods and services move.

During GTC Digital, anyone can experience the latest developments in AI technology for free, from the comfort of their own home. Hundreds of expert talks and training sessions covering autonomous vehicles, robotics, healthcare, finance and more will soon be available at the click of a button.

Beginning March 25, GTC Digital attendees can tune in to sessions hosted by autonomous driving leaders from Ford, Toyota, Zoox and more, as well as receive virtual training from NVIDIA experts on developing AI for self-driving.

Check out what’s in store for the NVIDIA autonomous vehicle ecosystem.

Learn from Leaders

GTC Digital talks let AI experts and developers delve into their latest work, sharing key insights on how to deploy intelligent vehicle technology.

This year’s automotive speakers are covering the latest topics in self-driving development, including AI training, simulation and software.

  • Neda Cvijetic, senior manager of Autonomous Vehicles at NVIDIA, will apply an engineering focus to widely acknowledged autonomous vehicle challenges, and explain how NVIDIA is tackling them. This session will air live with a question-and-answer segment to follow.
  • Clement Farabet, vice president of AI Infrastructure at NVIDIA, will discuss NVIDIA’s end-to-end AI platform for developing NVIDIA DRIVE software. This live talk will cover how to scale the infrastructure to train self-driving deep neural networks and include a Q&A session.
  • Tokihiko Akita, project research fellow, Toyota Technical Institute, will detail how deep neural networks can be used for autonomous vehicle object recognition and tracking in adverse weather with millimeter-wave radar sensors.
  • Nikita Jaipuria, research scientist for computer vision and machine learning, and Rohan Bhasin, research engineer, both at Ford Motor Company, will discuss in their April 2 session how to leverage generative adversarial networks to create synthetic data for autonomous vehicle training and validation.
  • Zejia Zheng and Jeff Pyke, software engineers at Zoox, will outline the Zoox TensorRT conversion pipeline to optimize deep neural network deployment on high-performance NVIDIA GPUs.

Virtual Hands-On Training

Developers will also get a chance to dive deeper into autonomy with NVIDIA Deep Learning Institute courses as well as interact virtually with experts in autonomous vehicle development.

DLI will offer a variety of instructor-led training sessions addressing the biggest developer challenges in areas such as autonomous vehicles, manufacturing and robotics. Receive intensive instruction on topics such as autonomous vehicle perception and sensor integration in our live sessions this April.

Get detailed answers to your development questions from NVIDIA’s in-house specialists in our Connect with Experts sessions on intelligent cockpits, autonomous driving software development and validation, and more.

Register to access these sessions and more for free and receive the latest updates on GTC Digital.

Driving Progress: NVIDIA Leads Autonomous Vehicle Industry Report

As the industry begins to deploy autonomous driving technology, NVIDIA is leading the way in delivering a complete end-to-end solution, including data center infrastructure, software toolkits, libraries and frameworks, as well as high-performance, energy-efficient compute for safer, more efficient transportation.

Every year, advisory firm Navigant Research releases a report on the state of the autonomous vehicle industry. This year, the company broke out its in-depth research into “Automated Vehicle Compute Platforms” and “Automated Driving Vehicles” leaderboards, ranking the major players in each category.

In the 2020 Automated Vehicle Compute Platforms report, NVIDIA led the list of companies developing AV platforms to power the AI that will replace the human driver.

Automated Vehicle Compute Platforms Leaderboard

“NVIDIA is a proven company with a long history of producing really strong hardware and software,” said Sam Abuelsamid, principal research analyst at Navigant and author of both reports. “Having a platform that produces ever-increasing performance and power efficiency is crucial for the entire industry.”

While NVIDIA provides solutions for developing autonomous vehicles, the Automated Driving Vehicles report covers those who are building such vehicles for production. Manufacturers using these compute platforms include automakers, tier 1 suppliers, robotaxi companies and software startups that are part of the NVIDIA DRIVE ecosystem.

These reports are just one piece of the larger autonomous vehicle development picture. While they don’t contain every detail of the current state of self-driving systems, they provide valuable insight into how the industry approaches this transformative technology.

Computing the Way Forward

The Navigant leaderboard uses a detailed methodology in determining both the companies it covers and how they rank.

The Automated Vehicle Compute Platforms report looks at those who are developing compute platforms and have silicon designs that are in at least sample production or have plans to be soon. These systems must have fail-operational capability to ensure safety when there’s no human backup at the wheel.

The report evaluates companies using factors such as vision, technology, go-to-market strategy and partners. NVIDIA’s top rating is based on leading performance across this wide range of factors.

NVIDIA Strategy and Execution Scores

With the high-performance, energy-efficient DRIVE AGX platform, NVIDIA topped the inaugural leaderboard report for its progress in the space. The platform’s scalable architecture and the open, modular NVIDIA DRIVE software make it easy for autonomous vehicle manufacturers to scale DRIVE solutions to meet their individual production plans.

“Without powerful, energy-efficient compute and the reliability and robustness to handle the automotive environment, none of the rest would be possible,” Abuelsamid said. “Looking at the current compute platforms, NVIDIA is the clear leader.”

Compute is a critical element to autonomous vehicles. It’s what makes it possible for a vehicle to process in real time the terabytes of data coming in from camera, radar and lidar sensors. It also enables self-driving software to run multiple deep neural networks simultaneously, achieving the diversity and redundancy in operation that is critical for safety. Furthermore, by ensuring a compute platform has enough performance headroom, developers can continuously add features and improve the system over time.

Empowering Our Ecosystem

A second report from Navigant, the Automated Driving Vehicles report, ranks companies developing Level 4 autonomous driving systems — which don’t require a human backup in defined conditions — for passenger vehicles.

The leaderboard includes announced members of the NVIDIA DRIVE ecosystem such as Baidu, Daimler-Bosch, Ford, Toyota, Volkswagen, Volvo, Voyage, Yandex and Zoox, as well as many other partners who have yet to be disclosed, all developing autonomous vehicles using NVIDIA end-to-end AI solutions.

Automated Driving Vehicles Leaderboard

The report notes many of the challenges the industry faces, such as comprehensive validation and production costs. NVIDIA delivers autonomous vehicle development tools from the cloud to the car to help companies address these issues.

NVIDIA is unique in providing solutions both for the vehicle and for the data center infrastructure the AV industry relies on. With NVIDIA DGX systems and advanced AI learning tools, developers can efficiently train the deep neural networks that run in the vehicle on petabytes of data in the data center. These DNNs can then be tested and validated in the virtual world on the same hardware they would run on in the vehicle using the cloud-based, bit-accurate DRIVE Constellation simulation platform.

The ability to build a seamless workflow with compatible tools can greatly improve efficiency in development, according to Abuelsamid.

“As this technology evolves and matures, developers are going through many iterations of hardware and software,” he said. “Having an ecosystem that allows you to develop at all different levels, from in-vehicle, to data center, to simulation, and transfer knowledge you’ve gained from one to the other is very important.”

By providing our ecosystem with these industry-leading end-to-end solutions, we’re working to bring safer, more efficient transportation to roads around the world, sooner.

Navigant Research provides syndicated and custom research services across a range of technologies encompassing the energy ecosystem. Mobility reports, including the Automated Driving Leaderboard, the Automated Vehicle Compute Platform leaderboard and the Automated Vehicles Forecast, are available here.

Laser Focused: How Multi-View LidarNet Presents Rich Perspective for Self-Driving Cars

Editor’s note: This is the latest post in our NVIDIA DRIVE Labs series, which takes an engineering-focused look at individual autonomous vehicle challenges and how NVIDIA DRIVE addresses them. Catch up on all our automotive posts, here.

Lidar can give autonomous vehicles laser focus.

By bouncing laser signals off the surrounding environment, these sensors can enable a self-driving car to construct a detailed and accurate 3D picture of what’s around it.

However, traditional methods for processing lidar data pose significant challenges. These include limitations in the ability to detect and classify different types of objects, scenes and weather conditions, as well as limitations in performance and robustness.

In this DRIVE Labs episode, we introduce our multi-view LidarNet deep neural network, which uses multiple perspectives, or views, of the scene around the car to overcome the traditional limitations of lidar-based processing.

AI-Powered Solutions

AI in the form of DNN-based approaches has become the go-to solution to address traditional lidar perception challenges.

One AI method uses lidar DNNs that perform top-down or “bird’s eye view” (BEV) object detection on lidar point cloud data. The 3D coordinates of each data point are reprojected, via orthogonal projection, into the view of a virtual camera positioned at some height above the scene, similar to a bird flying overhead.
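
To make that projection concrete, here is a minimal sketch (an illustration for readers, not NVIDIA’s implementation) of how a lidar point cloud might be rasterized into a BEV image with an occupancy channel and a height channel; the grid extent and cell size are assumed values.

```python
import numpy as np

def lidar_to_bev(points, x_range=(-50.0, 50.0), y_range=(-50.0, 50.0), cell=0.1):
    """Rasterize an (N, 3) lidar point cloud into a top-down occupancy/height grid.

    points  : array of (x, y, z) coordinates in the vehicle frame, in meters.
    x_range : forward/backward extent of the grid, in meters (illustrative).
    y_range : left/right extent of the grid, in meters (illustrative).
    cell    : size of one BEV cell, in meters (illustrative).
    """
    # Keep only points inside the chosen field of view.
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[mask]

    # Orthogonal projection: drop z for the cell index, keep it as a feature.
    cols = ((pts[:, 0] - x_range[0]) / cell).astype(np.int32)
    rows = ((pts[:, 1] - y_range[0]) / cell).astype(np.int32)

    h = int((y_range[1] - y_range[0]) / cell)
    w = int((x_range[1] - x_range[0]) / cell)
    occupancy = np.zeros((h, w), dtype=np.float32)
    height = np.full((h, w), -np.inf, dtype=np.float32)

    occupancy[rows, cols] = 1.0                       # cell contains at least one return
    np.maximum.at(height, (rows, cols), pts[:, 2])    # tallest return per cell

    height[height == -np.inf] = 0.0                   # empty cells get height 0
    return np.stack([occupancy, height])              # (2, H, W) image for a 2D-conv DNN
```

A BEV DNN then treats the stacked channels like an ordinary image, which is what makes fast 2D convolutions applicable.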

BEV lidar DNNs use 2D convolutions in their layers to detect dynamic objects such as cars, trucks, buses, pedestrians, cyclists, and other road users. 2D convolutions work fast, so they are well-suited for use in real-time autonomous driving applications.

However, this approach can get tricky when objects look alike top-down. For example, in BEV, pedestrians or bikes may appear similar to objects like poles, tree trunks or bushes, resulting in perception errors.

Another AI method uses 3D lidar point cloud data as input to a DNN that uses 3D convolutions in its layers to detect objects. This improves accuracy since a DNN can detect objects using their 3D shapes. However, 3D convolutional DNN processing of lidar point clouds is difficult to run in real-time for autonomous driving applications.

Enter Multi-View LidarNet

To overcome the limitations of both of these AI-based approaches, we developed our multi‐view LidarNet DNN, which acts in two stages. The first stage extracts semantic information about the scene using lidar scan data in perspective view (Figure 1). This “unwraps” a 360-degree surround lidar range scan so it looks as though the entire panorama is in front of the self-driving car.
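
As a rough illustration of that “unwrapping,” the sketch below converts a lidar scan into a panoramic range image indexed by azimuth and elevation; the image dimensions and vertical field of view are assumptions for illustration, not LidarNet’s actual configuration.

```python
import numpy as np

def scan_to_range_image(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) lidar scan into an (h, w) panoramic range image.

    Each column corresponds to an azimuth angle around the vehicle and each row
    to an elevation angle, so the full 360-degree sweep appears as if it were
    laid out in front of the car. The angular limits here are illustrative.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1) + 1e-8

    yaw = np.arctan2(y, x)                     # azimuth, in [-pi, pi]
    pitch = np.arcsin(z / depth)               # elevation

    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    fov = fov_up_r - fov_down_r

    # Normalize angles to pixel coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * w          # column: azimuth
    v = (fov_up_r - pitch) / fov * h           # row: elevation

    u = np.clip(u, 0, w - 1).astype(np.int32)
    v = np.clip(v, 0, h - 1).astype(np.int32)

    image = np.zeros((h, w), dtype=np.float32)
    image[v, u] = depth                        # simple overwrite; real pipelines keep the closest return
    return image
```

Each pixel of this panorama corresponds to a lidar return, and it is this image that the first stage segments.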

This first-stage semantic segmentation approach performs very well for predicting object classes. This is because the DNN can better observe object shapes in perspective view (for example, the shape of a walking human).

The first stage segments the scene into both dynamic objects of different classes, such as cars, trucks, buses, pedestrians, cyclists and motorcyclists, and static road scene components, such as the road surface, sidewalks, buildings, trees and traffic signs.

Figure 1. Multi-view LidarNet perspective view.

 

Figure 2. Multi-view LidarNet top-down bird’s eye view (BEV).

The semantic segmentation output of LidarNet’s first stage is then projected into BEV and combined with height data at each location, which is obtained from the lidar point cloud. The resulting output is applied as input to the second stage (Figure 2).
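
A hedged sketch of that hand-off might look like the following, where per-point class labels from the first stage and per-point heights are binned into a BEV tensor; the channel layout and grid parameters are illustrative assumptions rather than LidarNet’s actual design.

```python
import numpy as np

def build_stage2_input(points, labels, num_classes, x_range=(-50.0, 50.0),
                       y_range=(-50.0, 50.0), cell=0.1):
    """Combine per-point semantic labels with height into a BEV tensor.

    points : (N, 3) lidar points, already labeled by the first-stage DNN.
    labels : (N,) integer class id per point (car, pedestrian, road, ...).
    Returns a (num_classes + 1, H, W) array: one one-hot semantic channel per
    class plus a max-height channel, the kind of input a BEV detector can use.
    """
    h = int((y_range[1] - y_range[0]) / cell)
    w = int((x_range[1] - x_range[0]) / cell)
    bev = np.zeros((num_classes + 1, h, w), dtype=np.float32)

    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts, cls = points[mask], labels[mask]

    cols = ((pts[:, 0] - x_range[0]) / cell).astype(np.int32)
    rows = ((pts[:, 1] - y_range[0]) / cell).astype(np.int32)

    bev[cls, rows, cols] = 1.0                                 # semantic one-hot channels
    np.maximum.at(bev[num_classes], (rows, cols), pts[:, 2])   # height channel (assumes z measured up from the ground plane)
    return bev
```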

The second stage DNN is trained on BEV-labeled data to predict top-down 2D bounding boxes around objects identified by the first stage. This stage also uses semantic and height information to extract object instances. This is easier in BEV since objects are not occluding each other in this view.

The result of chaining these two DNN stages together is a lidar DNN that consumes only lidar data. It uses end-to-end deep learning to output a rich semantic segmentation of the scene, complete with 2D bounding boxes for objects. By using such methods, it can detect vulnerable road users, such as motorcyclists, bicyclists, and pedestrians, with high accuracy and completeness. Additionally, the DNN is very efficient — inference runs at 7ms per lidar scan on the NVIDIA DRIVE AGX platform.

In addition to multi-view LidarNet, our lidar processing software stack includes a lidar object tracker. The tracker is a computer vision-based post-processing system that uses the BEV 2D bounding box information and lidar point geometry to compute 3D bounding boxes for each object instance. The tracker also helps stabilize per-frame DNN misdetections and, along with a low-level lidar processor, computes geometric fences that represent hard physical boundaries that a car should avoid.
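
To show how BEV boxes and raw point geometry can be combined into 3D boxes, here is a simplified, assumed approach that lifts an axis-aligned BEV detection into 3D using the heights of the lidar points it encloses; the production tracker is considerably more sophisticated.

```python
import numpy as np

def lift_bev_box_to_3d(points, bev_box):
    """Turn a top-down 2D box into a 3D box using the enclosed lidar points.

    points  : (N, 3) lidar points in the vehicle frame.
    bev_box : (x_min, x_max, y_min, y_max) of an axis-aligned BEV detection.
    Returns center (x, y, z) and size (length, width, height), or None if no
    points fall inside the box.
    """
    x_min, x_max, y_min, y_max = bev_box
    inside = ((points[:, 0] >= x_min) & (points[:, 0] <= x_max) &
              (points[:, 1] >= y_min) & (points[:, 1] <= y_max))
    if not inside.any():
        return None

    z = points[inside, 2]
    z_min, z_max = z.min(), z.max()

    center = ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0, (z_min + z_max) / 2.0)
    size = (x_max - x_min, y_max - y_min, z_max - z_min)
    return center, size
```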

This combination of AI-based and traditional computer vision-based methods increases the robustness of our lidar perception software stack. Moreover, the rich perception information provided by lidar perception can be combined with camera and radar detections to design even more robust Level 4 to Level 5 autonomous systems.

Hail Yeah! How Robotaxis Will Change the Way We Move

From pedicabs to yellow cabs, hailing rides has been a decades-long convenience. App-based services like Uber and Lyft have made on-demand travel even faster and easier.

With the advent of autonomous vehicles, ride-hailing promises to raise the bar on safety and efficiency. Known as robotaxis, these shared vehicles are purpose-built for transporting groups of people along optimized routes, without a human driver at the wheel.

The potential for a shared autonomous mobility industry is enormous. Financial services company UBS estimates that robotaxis could create a $2 trillion market globally over the next decade, with each vehicle generating as much as $27,000 annually.

In dense urban environments, like New York City, experts project that a taxi fleet converted to entirely autonomous vehicles could cut down commutes in some areas from 40 minutes to 15.

On top of the economic and efficiency benefits, autonomous vehicles are never distracted or drowsy. And they can run 24 hours a day, seven days a week, expanding mobility access to more communities.

To transform everyday transportation, the NVIDIA DRIVE ecosystem is gearing up with a new wave of electric, autonomous vehicles.

Up for Cabs

Building a new kind of vehicle from square one requires a fresh perspective. That’s why a crop of startups and technology companies have begun to invest in the idea of a shared car without a steering wheel or pedals.

Already transporting riders in retirement communities in Florida and San Jose, Calif., Voyage is deploying low-speed autonomous vehicles with the goal of widely expanding safe mobility. The company is using DRIVE AGX to run SafeStop, its supercharged automatic braking system, in its current fleet of vehicles.

Optimus Ride is a Boston-based self-driving technology company developing systems for geo-fenced environments — pre-defined areas of operation, like a city center or shipping yard.

Its electric, autonomous vehicles run on the high-performance, energy-efficient NVIDIA DRIVE platform, and were the first such vehicles to run in NYC as part of a pilot launched in August.

Optimus Ride

Leveraging the performance of NVIDIA DRIVE AGX Pegasus, which can achieve up to 320 trillion operations per second, smart mobility startup WeRide is developing Level 4 autonomous vehicles to provide accessible transportation to a wide range of passengers.

Starting from scratch, self-driving startup and DRIVE ecosystem member Zoox is developing a purpose-built vehicle for on-demand, autonomous transportation. Its robotaxi embodies a futuristic vision of everyday mobility and is able to drive in both directions.

Zoox says it plans to launch its zero-emissions vehicle for testing this year, followed by an autonomous taxi service.

At GTC China in December, ride-hailing giant DiDi Chuxing announced it was developing Level 4 autonomous vehicles for its mobility services using NVIDIA DRIVE and AI technology. Delivering 10 billion passenger trips per year, DiDi is working toward the safe, large-scale application of autonomous driving technology.

DiDi

Sharing Expertise for Shared Mobility

When it comes to industry-changing innovations, sometimes two (or three) heads are better than one.

Global automakers, suppliers and startups are also working to solve the question of shared autonomous mobility, collaborating on their own visions of the robotaxi of the future.

In December, Mercedes-Benz parent company Daimler and global supplier Bosch launched the first phase of their autonomous ride-hailing pilot in San Jose. The app-based service shuttles customers in an automated Mercedes-Benz S-Class monitored by a safety driver.

The companies are collaborating with NVIDIA to eventually launch a robotaxi powered by NVIDIA DRIVE AGX Pegasus.

Daimler

Across the pond, autonomous vehicle solution provider AutoX and Swedish electric vehicle manufacturer NEVS are working to deploy robotaxis in Europe by the end of this year.

The companies, which came together through the NVIDIA DRIVE ecosystem, are developing an electric autonomous vehicle based on NEVS’ mobility-focused concept and powered by NVIDIA DRIVE. The goal of this collaboration is to bring these safe and efficient technologies to everyday transportation around the world.

Startup Pony.AI is also collaborating with global automakers such as Toyota and Hyundai, developing a robotaxi fleet with the NVIDIA DRIVE AGX platform at its core.

As the NVIDIA DRIVE ecosystem pushes into the next decade of autonomous transportation, safer, more convenient rides will soon be just a push of a button away. At GTC 2020, attendees will get a glimpse of just where this future is going — register today with code CMAUTO for a 20 percent discount.

Look Under the Hood of Self-Driving Development at GTC 2020

The progress of self-driving cars can be seen in test vehicles on the road. But the major mechanics for autonomous driving development are making tracks in the data center.

Training, testing and validating self-driving technology requires enormous amounts of data, which must be managed by a robust hardware and software infrastructure. Companies around the world are turning to high-performance, energy-efficient GPU technology to build the AI infrastructure needed to put autonomous driving deep neural networks (DNNs) through their paces.

At next month’s GPU Technology Conference in San Jose, Calif., automakers, suppliers, startups and safety experts will discuss how they’re tackling the infrastructure component of autonomous vehicle development.

By attending sessions on topics such as DNN training, data creation and validation in simulation, attendees can learn the end-to-end process of building a self-driving car in the data center.

Mastering Learning Curves

Without a human at the wheel, autonomous vehicles rely on a wide range of DNNs that perceive the surrounding environment. To recognize everything from pedestrians to street signs and traffic lights, these networks require exhaustive training on mountains of driving data.

Tesla has delivered nearly half a million vehicles with AI-assisted driving capabilities worldwide. These vehicles gather data while continuously receiving the latest models through over-the-air updates.

At GTC, Tim Zaman, machine learning infrastructure engineering manager at Tesla, will share how the automaker built and maintains a low-maintenance, efficient and lightning-fast, yet user-friendly, machine-learning infrastructure that its engineers rely on to develop Tesla Autopilot.

As more test vehicles outfitted with sensors drive on public roads, the pool of training data can grow by terabytes. Ke Li, software engineer at Pony.ai, will talk about how the self-driving startup is building a GPU-centric infrastructure that can process increasingly heavy sensor data more efficiently, scale with future advances in GPU compute power and integrate with other heterogeneous compute platforms.

For NVIDIA’s own autonomous vehicle development, we’ve built a scalable infrastructure to train self-driving DNNs. Clement Farabet, vice president of AI Infrastructure at NVIDIA, will discuss Project MagLev, an internal end-to-end AI platform for developing NVIDIA DRIVE software.

The session will cover how MagLev enables autonomous AI designers to iterate training of new DNN designs across thousands of GPU systems and validate the behavior of these designs over multi-petabyte-scale datasets.

Virtual Test Tracks

Before autonomous vehicles are widely deployed on public roads, they must be proven safe for all possible conditions the car could encounter — including rare and dangerous scenarios.

Simulation in the data center presents a powerful solution to what has otherwise been an insurmountable obstacle. By tapping into the virtual world, developers can safely and accurately test and validate autonomous driving hardware and software without leaving the office.

Zvi Greenstein, general manager at NVIDIA, will give an overview of the NVIDIA DRIVE Constellation VR simulation platform, a cloud-based solution that enables hardware-in-the-loop testing and large-scale deployment in data centers. The session will cover how NVIDIA DRIVE Constellation is used to validate safe autonomous driving and how companies can partner with NVIDIA and join the DRIVE Constellation ecosystem.

Having data as diverse and random as the real world is also a major challenge when it comes to validation. Nikita Jaipuria and Rohan Bhasin, research engineers at Ford, will discuss how to generate photorealistic synthetic data using generative adversarial networks (GANs). These simulated images can be used to represent a wide variety of situations for comprehensive autonomous vehicle testing.
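
For readers unfamiliar with the underlying technique, here is a deliberately tiny GAN training step in PyTorch; the fully connected networks and toy image size are placeholder assumptions, not the architecture Ford uses for photorealistic driving scenes.

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 32 * 32 * 3   # toy sizes for illustration only

# A real synthetic-data GAN would use convolutional generators and discriminators.
generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, img_dim), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                              nn.Linear(256, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images):
    """One adversarial update; real_images is a (batch, img_dim) tensor."""
    batch = real_images.size(0)
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise)

    # Discriminator: push real toward 1, fake toward 0.
    opt_d.zero_grad()
    d_loss = (bce(discriminator(real_images), torch.ones(batch, 1)) +
              bce(discriminator(fake_images.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator call fakes real.
    opt_g.zero_grad()
    g_loss = bce(discriminator(fake_images), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

Once trained on real driving imagery, the generator can be sampled to produce additional synthetic examples that broaden a training or validation dataset.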

Regulators and third-party safety agencies are also using simulation technology to evaluate self-driving cars. Stefan Merkl, mobility regional manager at TÜV SÜD America, Inc., will outline the agency’s universal framework to help navigate patchwork local regulations, providing a unified method for the assessment of automated vehicles.

In addition to these sessions, GTC attendees will hear the latest NVIDIA news and experience demos and hands-on training for a comprehensive view of the infrastructure needed to build the car of the future. Register before Feb. 13 to take advantage of early rates and receive 20% off with code CMAUTO.

Man Meets Machine: Autonomous Driving Gets the Human Touch at CES 2020

Autonomous driving technology aims to eliminate the human at the wheel. However, the latest plans for the car of the future envision a greater integration of human-machine interaction throughout the rest of the vehicle.

At CES 2020, companies showed how they’re working toward safer and more efficient transportation. Drawing inspiration from creative concepts in other industries, and even trying out new areas of technological expertise, NVIDIA DRIVE partners showcased new ideas for human and machine harmony in the coming decade.

Quite a Concept

Drawing gearheads and technophiles alike, the Vision-S concept car was one of the most buzzed-about vehicles on the show floor. The electric vehicle incorporates cutting-edge imaging and sensor technology for autonomous driving as well as crystal-clear graphics for an intelligent cockpit experience. The most innovative feature of all? It wasn’t built by an automaker.

Courtesy of Sony

Electronics and entertainment company Sony worked with NVIDIA and other autotech companies to build its first-ever vehicle prototype. Designed to showcase its latest sensor and AI infotainment technology, the sleek vehicle attracted crowds throughout the week.

The panoramic dashboard screen provides driving information, communication and entertainment, all effortlessly controlled by fluid gestures. Screens on the back of the front seats as well as speakers built into the headrests ensure every passenger can have a personalized experience.

The car’s hardware is designed for over-the-air updates, improving autonomous driving capabilities as well as adapting to the human driver and passengers’ preferences over time.

Though Sony isn’t building the Vision-S for production, the car’s concept for autonomy and seamless user experience provides a window into a highly intelligent transportation future.

Driving Instinct 

Mercedes-Benz, the automaker behind the first production AI cockpit, plans to take its revolutionary MBUX infotainment technology, powered by NVIDIA, even further.

During the CES opening keynote, Mercedes CEO Ola Kallenius said the next frontier for AI-powered infotainment is truly intuitive gesture control. The automaker’s vision for the future was then demonstrated in the Vision AVTR concept vehicle, designed for the upcoming sequels to the blockbuster film Avatar.

The hallmark feature of the Vision AVTR is the center console control element that replaces a typical steering wheel. It’s activated by human touch, using biometrics to control the car.

Courtesy of Mercedes-Benz

The concept illustrates Mercedes’ long-term vision of facilitating more natural interactions between the driver and the vehicle. And given the proliferation of the MBUX infotainment system — which is now in nearly 20 Mercedes models — this future may not be too far away.

AIs on the Road

CES attendees also experienced the latest innovations in autonomous driving firsthand from the NVIDIA DRIVE ecosystem.

Robotaxi company Yandex ferried conference goers around Las Vegas neighborhoods in a completely driverless ride. Powered by NVIDIA technology, the prototype vehicle reached speeds up to 45 miles per hour without any human intervention.

Courtesy of Yandex

Yandex has been rapidly expanding in its mission to provide safe autonomous transportation to the public. Since last CES, the company has driven 1.5 million autonomous miles and provided more than 5,000 robotaxi rides with no human driver at the wheel.

Supplier Faurecia Clarion showed attendees how it’s working to alleviate the stress of parking with its autonomous valet system. Using the high-performance, energy-efficient DRIVE AGX platform, the advanced driver assistance system seamlessly navigated a parking lot.

NVIDIA DRIVE ecosystem member Luminar brought the industry closer to widespread self-driving car deployment with the introduction of its Hydra lidar sensor. Designed for production Level 3 and Level 4 autonomous driving, the sensor is powered by NVIDIA Xavier and can detect and classify objects up to 250 meters away.

Courtesy of Luminar

Analog Devices uses the NVIDIA DRIVE platform to help autonomous vehicles see and understand the world around them. The company demoed its imaging radar point cloud at CES, using NVIDIA DRIVE AGX Xavier to process raw data from an imaging radar sensor into a perception point cloud.

With these latest developments in production technology as well as a cohesive vision for future AI-powered transportation, the age of self-driving is just a (human) touch away.

NVIDIA Brings the Future into Focus at CES 2020

CES 2020 will be bursting with vivid visual entertainment and smart everything, powered, in part, by NVIDIA and its partners.

Attendees packing the annual techfest will experience the latest additions to GeForce, the world’s most powerful PC gaming platform and the first to deliver ray tracing. They’ll see powerful displays and laptops, ultra-realistic game titles and capabilities offering new levels of game play.

NVIDIA’s Vegas headliners include three firsts — a 360Hz esports display, the first 14-inch laptops, and all-in-one PCs delivering the graphics realism of ray tracing.

The same GPU technologies powering next-gen gaming are also spawning an age of autonomous machines. CES 2020 will be alive with robots such as Toyota’s new T-HR3, thanks to advances in the NVIDIA Isaac platform. And the newly minted DRIVE AGX Orin promises 7x performance gains for future autonomous vehicles.

Together, they’re weaving an AI-powered Internet of Things from the cloud to the network’s edge that will touch everything from entertainment to healthcare and transportation.

A 2020 Vision for Play

NVIDIA’s new G-SYNC display for esports gamers delivers a breakthrough at 360Hz, projecting a vision of game play that’s more vivid than ever. NVIDIA and ASUS this week unveiled the ASUS ROG 360, the world’s fastest display, powered by NVIDIA G-SYNC. Its 360Hz refresh rate in a 24.5-inch form factor lets esports and competitive gamers keep every pixel of action in their field of view during the heat of competition.

The 24.5-inch ASUS ROG Swift sports a 360Hz refresh rate.

Keeping the picture crisp, Acer, Asus and LG are expanding support for G-SYNC. First introduced in 2013, G-SYNC is best known for its innovative Variable Refresh Rate technology that eliminates screen tearing by synchronizing the refresh rate of the display with the GPU’s frame rate.

In 2019, LG became the first TV manufacturer to offer NVIDIA G-SYNC compatibility, bringing the must-have gaming feature to select OLED TV models. Thirteen new models for 2020 will provide a flawless gaming experience on the big screen, without screen tearing or other distracting visual artifacts.

In addition, Acer and Asus are showcasing two upcoming G-SYNC ULTIMATE displays. They feature the latest full-array direct backlight technology with 1,400 nits brightness, significantly increasing display contrast for darker blacks and more vibrant colors. Gamers will enjoy the fast response time and ultra-low lag of these displays running at up to 144Hz at 4K.

Game On, RTX On

The best gaming monitors need awesome content to shine. So today, Bethesda turned on ray tracing in Wolfenstein: Youngblood, bringing a new level of realism to the popular title. An update that sports ray-traced reflections and DLSS is available as a free downloadable patch starting today for gamers with a GeForce RTX GPU.

Bethesda joins the world’s leading publishers who are embracing ray tracing as the next big thing in their top franchises. Call of Duty Modern Warfare and Control — IGN’s Game of the Year — both feature incredible real-time ray-tracing effects.

VR arrives at CES 2020 with new headsets, games and innovations.

NVIDIA’s new rendering technique, Variable Rate Super Sampling, in the latest Game Ready Driver improves image quality in VR games. It uses Variable Rate Shading, part of the NVIDIA Turing architecture, to dynamically apply up to 8x supersampling to the center, or foveal region, of the VR headset, enhancing image quality where it matters most while delivering stellar performance.

In addition, Game Ready Drivers now make it possible to set the max frame rate a 3D application or game can render to save power and reduce system latency. They enable the best gaming experience by keeping a G-SYNC display within the range where the technology shines.

Creators’ Visions Coming into Focus

A total of 14 hardware OEMs introduced new RTX Studio systems at CES 2020. Combined with NVIDIA Studio Drivers, they’re powering more than 55 creative and design apps with RTX-accelerated ray tracing and AI.

HP launched the ENVY 32 All-in-One with GeForce RTX graphics, configurable with up to GeForce RTX 2080. Acer has three new systems from its ConceptD line. And ten other system builders across North America, Europe and China all now have RTX Studio offerings.

These RTX Studio systems adhere to stringent hardware and software requirements to empower creativity at the speed of imagination. They also ship with NVIDIA’s Studio Drivers, providing the ultimate performance and stability for creative applications.

Robots Ring in the New Year

The GPU technology that powers games is also driving AI, accelerating the development of a host of autonomous vehicles and robots at CES 2020.

Toyota’s new T-HR3 humanoid partner robot will have a Vegas debut at its booth (LVCC, North Hall, Booth 6919). A human operator wearing a VR headset controls the system using augmented video and perception data fed from an NVIDIA Jetson AGX Xavier computer in the robot.

Toyota’s T-HR3 makes its Vegas debut at CES 2020.

Attendees can try out the autonomous wheelchair from WHILL, which won a CES 2019 Innovation of the Year award, powered by a Jetson TX2. Sunflower Labs will demo its new home security robot, also packing a Jetson TX2. Other NVIDIA-powered systems at CES include a delivery robot from Postmates and an inspection snake robot from Sarcos.

The Isaac software development kit marks a milestone in establishing a unified AI robotic development platform we call NVIDIA Isaac, an open environment for mapping, model training, simulation and computing. It includes a variety of camera-based perception deep neural networks for functions such as object detection, 3D pose estimation and 2D human pose estimation.

This release also introduces Isaac Sim, which lets developers train on simulated robots and deploy their lessons to real ones, promising to greatly accelerate robotic development, especially for environments such as large logistics operations. Isaac Sim will add early-access availability for manipulation later this month.

Driving an Era of Autonomous Vehicles

This marks a new decade of automotive performance, defined by AI compute rather than horsepower. It will spread autonomous capabilities across today’s $10 trillion transportation industry. The transformation will require dramatically more compute performance to handle exponential growth in AI models being developed to ensure autonomous vehicles are both functional and safe.

NVIDIA DRIVE AV, an end-to-end, software-defined platform for AVs, delivers just that. It includes a development flow, data center infrastructure, an in-vehicle computer and the highest quality pre-trained AI models that can be adapted by OEMs.

Last month, NVIDIA announced the latest piece of that platform, DRIVE AGX Orin, a highly advanced software-defined platform for autonomous vehicles.

The platform is powered by a new system-on-a-chip called Orin, which achieves 200 TOPS — nearly 7x the performance of the previous generation SoC Xavier. It’s designed to handle the large number of applications and DNNs that run simultaneously in autonomous vehicles, while achieving systematic safety standards such as ISO 26262 ASIL-D.

NVIDIA is now providing access to its pre-trained DNNs and cutting-edge training processes on the NGC container registry. With industry-leading networks and advanced learning techniques such as active learning, transfer learning and federated learning, developers can turbocharge the development of custom applications.

Working Together

NVIDIA’s AI ecosystem of innovators is spread across the CES 2020 show floor, including more than 100 members of Inception, a company program that nurtures cutting-edge startups that are revolutionizing industries with AI.

Among established leaders, Mercedes-Benz, an NVIDIA DRIVE customer, will open the show Monday night with a keynote on the future of intelligent transportation. And GeForce partners will crank up the gaming excitement in demos across the event.

NVIDIA DRIVE Ecosystem Charges into Next Decade of AI

Top transportation companies are using NVIDIA DRIVE to lead the way into the coming era of autonomous mobility.

Electric vehicle makers, mapping companies and mobility providers announced at the GPU Technology Conference in Suzhou, China, that they’re leveraging NVIDIA DRIVE in the development of their self-driving solutions.

By joining the DRIVE ecosystem, each of these companies can contribute industry experience and expertise to a worldwide community dedicated to delivering safer and more efficient transportation.

“NVIDIA created an open platform so the industry can team up together to realize this autonomous future,” NVIDIA CEO Jensen Huang said during his GTC keynote. “The rich ecosystem we’ve developed is a testament to the openness of this platform.”

Driving Autonomy Forward

DiDi Chuxing, the world’s leading mobile transportation platform, announced that it will use NVIDIA AI technologies to bring Level 4 autonomous vehicles and intelligent ride-hailing services to market. The company, which provides more than 30 million rides a day, will use the NVIDIA DRIVE platform and NVIDIA AI data center solutions to develop its fleets and provide services in the DiDi Cloud.

As part of the centralized AI processing of DiDi’s autonomous vehicles, NVIDIA DRIVE enables data to be fused from all types of sensors (cameras, lidar, radar, etc.) using numerous deep neural networks to understand the 360-degree environment surrounding the car and plan a safe path forward.

To train these DNNs, DiDi will use NVIDIA GPU data center servers. For cloud computing, DiDi will also build an AI infrastructure and launch virtual GPU cloud servers for computing, rendering and gaming.

DiDi autonomous vehicle on display at GTC.

Automakers, truck manufacturers and software startups on the GTC show floor displayed the significant progress they’ve achieved on the NVIDIA DRIVE platform. Autonomous pilots from Momenta and WeRide continue to grow in sophistication, while autonomous trucking company TuSimple expands its highway trucking routes.

Next-generation production vehicles from Xpeng and Karma Automotive will leverage the DRIVE AGX platform for AI-powered driver assistance systems.

Karma Automotive is developing AI-assisted driving on the NVIDIA DRIVE AGX platform.

Triangulating Success with HD Mapmakers

At GTC, Amap and Kuandeng announced that their high-definition maps are now compatible with DRIVE Localization, an open, scalable platform that enables autonomous vehicles to localize themselves within centimeters on roads worldwide.

Localization makes it possible for a self-driving car to pinpoint its location so it can understand its surroundings and establish a sense of the road and lane structures. This enables it to plan lane changes ahead of what’s immediately visible and determine lane paths even when markings aren’t clear.

DRIVE Localization makes that centimeter-level positioning possible by matching semantic landmarks in the vehicle’s environment with features from HD maps by companies like Amap and Kuandeng to determine exactly where it is in real time.
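
To give a feel for what matching landmarks against an HD map involves, here is a small, purely illustrative sketch that recovers a 2D pose correction from already-associated landmark pairs using a closed-form least-squares alignment; DRIVE Localization itself fuses many more cues, so treat this only as a conceptual example.

```python
import numpy as np

def estimate_pose_correction(observed, mapped):
    """Estimate the 2D rotation and translation that aligns observed landmarks
    (in the vehicle frame) with the same landmarks from an HD map.

    observed, mapped : (N, 2) arrays of matched landmark positions.
    Returns (R, t) such that R @ observed[i] + t is approximately mapped[i].
    This is the classic closed-form rigid alignment (Kabsch/Procrustes in 2D).
    """
    obs_c = observed - observed.mean(axis=0)
    map_c = mapped - mapped.mean(axis=0)

    # Cross-covariance and its SVD give the optimal rotation.
    H = obs_c.T @ map_c
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T

    t = mapped.mean(axis=0) - R @ observed.mean(axis=0)
    return R, t
```

Applying the recovered rotation and translation to the vehicle’s current pose estimate snaps it onto the map, which is where the centimeter-level positioning comes from.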

With more than 100 million daily active users of its maps services, Amap is one of the leading mapping companies in China. It has already collected HD map data on more than 320,000 km of roadways and is working with automakers such as General Motors, Geely and FAW to provide commercial maps.

Kuandeng is dedicated to providing critical infrastructure data services for autonomous vehicles. Its HD map solution and high-precision localization product provide core technical and data support for automakers, suppliers and startups. Kuandeng is also building a cloud-based, crowdsourced platform for real-time HD map updates, establishing a closed-loop data feedback system.

Mapping the world with such a high level of precision is a virtually impossible task for one company to accomplish alone. By partnering with the top regional mapmakers around the world, NVIDIA is helping develop highly accurate and comprehensive maps for autonomous vehicle navigation and localization.

“HD maps make it possible to pinpoint an autonomous vehicle’s location with centimeter-level accuracy,” said Frank Jiang, vice general manager of the Automotive Business Department at Amap. “By making our maps compatible with NVIDIA DRIVE Localization, we can enable highly precise localization for autonomous vehicles.”

By growing the NVIDIA DRIVE ecosystem with leading companies around the world, we’re working to deliver safer, more efficient transportation to global roads, sooner.

Introducing NVIDIA DRIVE AGX Orin: Vehicle Performance for the AI Era

A performance car isn’t judged on horsepower alone. Acceleration, braking, cornering speed and suspension are among the features that define how a car will fare on the road.

The same is true for the processors inside an autonomous vehicle. While processing speed (measured in trillions of operations per second, or TOPS) is a key performance indicator, it’s the architecture, the programmability and the AI software stack running on it that determine excellence.

Today, NVIDIA is ushering in a new era of AI computing and software for autonomous driving.

At the GPU Technology Conference, we introduced NVIDIA DRIVE AGX Orin, a highly advanced software-defined platform for autonomous vehicles. The platform is powered by a new system-on-a-chip (SoC) called Orin, which consists of 17 billion transistors and is the result of four years of R&D investment. It’s an SoC born out of the data center.

Orin achieves 200 TOPS — nearly 7x the performance of the previous-generation SoC Xavier — and is designed to handle the large number of applications and deep neural networks that run simultaneously in autonomous vehicles and robots, while achieving systematic safety standards such as ISO 26262 ASIL-D.

Orin is software compatible with Xavier, allowing customers to leverage their existing development investments. It’s also scalable — with a range of configurations able to deliver everything from Level 2+ AI-assisted driving up to Level 5 fully driverless operation.

Now with access to NVIDIA’s pre-trained DNNs and cutting-edge training processes such as active learning, transfer learning and federated learning, the NVIDIA DRIVE ecosystem can turbocharge the development of custom applications.

A wide variety of pre-trained DNNs are available on the NVIDIA GPU Cloud.

Taking the Pedal to the Metal on Inference

Cars and trucks of the future will require an optimized AI architecture not only for autonomous driving, but also for intelligent vehicle features like speech and driver monitoring. We’ll converse with the car and it’ll respond: answering questions, directing us to destinations, warning us of road conditions. It will be our co-pilot and guardian, able to take over and drive autonomously, monitor our alertness and safeguard us.

Xavier, our current-generation SoC, has proven to be the best-performing processor for inference on the market. In this year’s MLPerf assessment, it ranked as the highest performer under edge-focused scenarios among commercially available edge and mobile SoCs.

Leveraging this industry-topping architecture, Orin will continue to lead the way for edge computing. It’s designed specifically to handle the large number of applications and DNNs that must run simultaneously for safe autonomous vehicle operation.

Leveling Up

With the introduction of DRIVE AGX Orin, we’re dramatically raising the bar for vehicles starting production in 2022. Combining our industry-leading unified architecture and the open DRIVE Software stack, NVIDIA is redefining vehicle performance for the age of autonomy.

2020 Vision: See the Future of Transportation at GTC

The next decade of autonomous transportation is just starting to come into focus. GTC attendees will be among the first to see what’s new for safer, more efficient mobility.

The NVIDIA GPU Technology Conference in San Jose, Calif., draws autonomous vehicle developers, researchers, press and analysts from around the world. The annual event is a forum to exhibit and discuss innovations in AI-powered transportation, as well as to form lasting connections across the industry.

This March, NVIDIA CEO Jensen Huang will kick off GTC with a keynote address on the rapid growth of AI computing, mapping out how GPU-powered technology is changing the automotive industry in addition to healthcare, robotics, retail and more.

Attendees will get the opportunity to dive deeper into these topics in dedicated sessions, as well as experience them firsthand on the exhibit floor. NVIDIA experts will also be onsite for hands-on training and networking events.

Expert AI Sight

GTC talks let AI experts and developers delve into their latest work, sharing key insights on how to deploy intelligent vehicle technology. The automotive speakers for GTC 2020 represent nearly every facet of the industry — including automakers, suppliers, startups and universities — and will cover a diversity of topics sure to satisfy any autonomous driving interest.

TuSimple founder and president Xiaodi Hou hosts a session on autonomous trucking at GTC 2019.

Here’s a brief look at some of the 40+ automotive sessions at GTC 2020:

  • Nikita Jaipuria, research scientist for computer vision and machine learning, and Rohan Bhasin, research engineer, both at Ford Motor Company, will discuss how to leverage generative adversarial networks to create synthetic data for autonomous vehicle training and validation.
  • Mate Rimac, CEO of Rimac, will outline how AI is transforming performance vehicles, from advanced driver assistance to intelligent coaching to bringing a video game-like experience to the track.
  • Wadim Kehl, research scientist, and Arjun Bhargava, machine learning engineer, both at Toyota Research Institute, will detail how they’ve combined PyTorch with NVIDIA TensorRT to strike the delicate balance of algorithm complexity, data management and energy efficiency needed for safe self-driving operation.
  • Neda Cvijetic, senior manager of Autonomous Vehicles at NVIDIA, will apply an engineering focus to widely acknowledged autonomous vehicle challenges, including drivable path perception and handling intersections, and then explain how NVIDIA is tackling them.

Zoom In 

Developers at GTC will also get a chance to dive deeper into autonomy with NVIDIA Deep Learning Institute courses as well as interact with experts in autonomous vehicle development.

Throughout the week, DLI will offer more than 60 instructor-led training sessions, 30 self-paced courses and six full-day workshops addressing the biggest developer challenges in areas such as autonomous vehicles, manufacturing and robotics.

Stuck in a self-driving rut? Visit our Connect with Experts sessions on intelligent cockpits, vehicle perception, autonomous driving software development and more to put your questions to NVIDIA’s in-house specialists.

Not Just a Vision

The GTC exhibit hall will be the place to see and interact with autonomous driving technology firsthand.

At the NVIDIA booth, attendees can watch NVIDIA DRIVE in action, including its various deep neural networks, simulation and DRIVE AGX platforms. The dedicated autonomous vehicle zone will feature the latest vehicles from the DRIVE ecosystem, which includes automakers, suppliers, robotaxi companies, truckmakers, sensor companies and software startups.

Autonomous driving software company AutoX showcased its self-driving prototype to attendees at GTC 2019.

GTC goers will get the exclusive chance to experience all this as well as learn about AI developments in industries such as healthcare, energy, finance and gaming with a variety of networking opportunities throughout the week.

Register in 2019 for GTC to take advantage of early bird pricing — the first 1,000 people to sign up for conference passes will get priority access to Huang’s keynote.

Mark your calendar for March 2020 and see for yourself how AI is changing the next decade of autonomous transportation.
