Amid CES, NVIDIA Packs Flying, Driving, Gaming Tech News into a Single Week

Flying, driving, gaming, racing… amid the first-ever virtual Consumer Electronics Show this week, NVIDIA-powered technologies spilled out in all directions.

In automotive, Chinese automakers SAIC and NIO announced they’ll use NVIDIA DRIVE in future vehicles.

In gaming, NVIDIA on Tuesday led off a slew of gaming announcements by revealing the affordable new RTX 3060 GPU and detailing the arrival of more than 70 30-series GPUs for gamers and creatives.

In robotics, the Skydio X2 drone has received the CES 2021 Best of Innovation Award for Drones and Unmanned Systems.

And in, well, a category all its own, the Indy Autonomous Challenge, unveiled Thursday, will pit college teams against each other for a $1.5 million prize, racing sleek, swift vehicles equipped with the ADLINK DLP-8000 robot controller, powered by NVIDIA GPUs.

This week’s announcements were just the latest examples of how NVIDIA is driving AI and innovation into every aspect of our lives.

Game On

Bringing more gaming capabilities to millions more gamers, NVIDIA on Tuesday announced that more than 70 new laptops will feature GeForce RTX 30 Series Laptop GPUs and unveiled the NVIDIA GeForce RTX 3060 graphics card for desktops, priced at just $329.

All are powered by the award-winning NVIDIA Ampere GPU architecture, the second generation of RTX with enhanced Ray Tracing Cores, Tensor Cores, and new streaming multiprocessors.

NVIDIA also announced RTX support coming to Call of Duty: Warzone and Square Enix’s new IP, Outriders, while Five Nights at Freddy’s: Security Breach and F.I.S.T.: Forged in Shadow Torch will be adding RTX ray tracing and DLSS.

The games are just the latest to support the real-time ray tracing and AI-based DLSS (deep learning super sampling) technologies, known together as RTX, which NVIDIA introduced two years ago.

The announcements were among the highlights of a streamed presentation from Jeff Fisher, senior vice president of NVIDIA’s GeForce business.

Amid the unprecedented challenges of 2020, “millions of people tuned into gaming — to play, create and connect with one another,” Fisher said. “More than ever, gaming has become an integral part of our lives.”

Hitting the Road

In automotive, two Chinese automakers announced they’ll be relying on NVIDIA DRIVE technologies.

Just as CES was starting, electric car startup NIO announced a supercomputer to power its automated and autonomous driving features, with NVIDIA DRIVE Orin at its core.

The computer, known as Adam, achieves over 1,000 trillion operations per second of performance with the redundancy and diversity necessary for safe autonomous driving.

The Orin-powered supercomputer will debut in NIO’s flagship ET7 sedan, scheduled for production in 2022, and every NIO model to follow.

And on Thursday, SAIC, China’s largest automaker, announced it’s joining forces with online retail giant Alibaba to unveil a new premium EV brand, dubbed IM for “intelligence in motion.”

The long-range electric vehicles will feature AI capabilities powered by the high-performance, energy-efficient NVIDIA DRIVE Orin compute platform.

The news comes as EV startups in China have skyrocketed in popularity, with NVIDIA working with NIO, Li Auto and Xpeng to bolster the growth of new-energy vehicles.

Taking to the Skies

Meanwhile, Skydio, the leading U.S. drone manufacturer and world leader in autonomous flight, today announced it received the CES 2021 Best of Innovation Award for Drones and Unmanned Systems for the Skydio X2.

Skydio’s new autonomous drone offers enterprise and public sector customers up to 35 minutes of autonomous flight time.

Packing six 4K cameras and powered by the NVIDIA Jetson TX2 mobile supercomputer, it’s built for situational awareness, asset inspection and security patrol.

AI, Computational Advances Ring In New Era for Healthcare

We’re at a pivotal moment to unlock a new, AI-accelerated era of discovery and medicine, says Kimberly Powell, NVIDIA’s vice president of healthcare.

Speaking today at the J.P. Morgan Healthcare conference, held virtually, Powell outlined how AI and accelerated computing are enabling scientists to take advantage of the boom in biomedical data to power faster research breakthroughs and better patient care.

Understanding disease and discovering therapies is our greatest human endeavor, she said — and the trillion-dollar drug discovery industry illustrates just how complex a challenge it is.

How AI Can Drive Down Drug Discovery Costs

The typical drug discovery process takes about a decade, costs $2 billion and suffers a 90 percent failure rate during clinical development. But the rise of digital data in healthcare in recent years presents an opportunity to improve those statistics with AI.

“We can produce today more biomedical data in about three months than the entire 300-year history of healthcare,” she said. “And so this is now becoming a problem that no human really can synthesize that level of data, and we need to call upon artificial intelligence.”  

Powell called AI “the most powerful technology force of our time. It’s software that writes software that no humans can.”

But AI works best when it’s domain specific, combining data and algorithms tailored to a specific field like radiology, pathology or patient monitoring. The NVIDIA Clara application framework bridges this gap by providing researchers and clinicians the tools for GPU-accelerated AI in medical imaging, genomics, drug discovery and smart hospitals.

Downloads of NVIDIA Clara grew 5x last year, Powell shared, with developers taking up our new platforms for conversational AI and federated learning.

Healthcare Ecosystem Rallies Around AI

She noted that amid the COVID-19 pandemic, momentum around AI for healthcare has accelerated, with startups estimated to have raised well over $5 billion in 2020. More than 1,000 healthcare startups are in the NVIDIA Inception accelerator program, up 4x since 2017. And over 20,000 AI healthcare papers were submitted last year to PubMed, showing exponential growth over the past decade.

Leading research institutions like the University of California, San Francisco, are using NVIDIA GPUs to power their work in cryo-electron microscopy, a technique used to study the structure of molecules — such as the spike proteins on the COVID-19 virus — and accelerate drug and vaccine discovery.

And pharmaceutical companies, including GlaxoSmithKline, and major healthcare systems, like the U.K.’s National Health Service, will harness the Cambridge-1 supercomputer — an NVIDIA DGX SuperPOD system and the U.K.’s fastest AI supercomputer — to solve large-scale problems and improve patient care, diagnosis and delivery of critical medicines and vaccines.

Software-Defined Instruments Link AI Innovation and Medical Practice

Powell sees software-defined instruments — devices that can be regularly updated to reflect the latest scientific understanding and AI algorithms — as key to connecting the latest research breakthroughs with the practice of medicine.

“Artificial intelligence, like the practice of medicine, is constantly learning. We want to learn from the data, we want to learn from the changing environment,” Powell said.

By making medical instruments software-defined, tools like smart cameras for patient monitoring or AI-guided ultrasound systems can not only be developed in the first place, she said, but also retain their value and improve over time.

U.K.-based sequencing company Oxford Nanopore Technologies is a leader in software-defined instruments, deploying a new generation of DNA sequencing technology across an electronics-based platform. Its nanopore sequencing devices have been used in more than 50 countries to sequence and track new variants of the virus that causes COVID-19, as well as for large-scale genomic analyses to study the biology of cancer.

The company uses NVIDIA GPUs to power several of its instruments, from the handheld MinION Mk1C device to its ultra-high throughput PromethION, which can produce more than three human genomes’ worth of sequence data in a single run. To power the next generation of PromethION, Oxford Nanopore is adopting NVIDIA DGX Station, enabling its real-time sequencing technology to pair with rapid and highly accurate genomic analyses.

For years, the company has been using AI to improve the accuracy of basecalling, the process of determining the order of a molecule’s DNA bases from tiny electrical signals that pass through a nanoscale hole, or nanopore.
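
For readers curious what that looks like in code, here is a minimal PyTorch sketch of the general idea: a small 1D convolutional network that turns a window of raw nanopore current samples into per-position probabilities over the four bases, plus a blank for CTC-style decoding. It is an illustration only, not Oxford Nanopore’s production basecaller, and every layer size and name is an assumption.

```python
# Illustrative only -- not Oxford Nanopore's basecaller. A toy 1D CNN that maps
# raw nanopore current samples to per-timestep probabilities over {A, C, G, T, blank},
# suitable for CTC-style training. All sizes are arbitrary assumptions.
import torch
import torch.nn as nn

class ToyBasecaller(nn.Module):
    def __init__(self, n_classes=5):  # 4 bases + CTC blank
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=9, stride=3, padding=4), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=9, stride=2, padding=4), nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.classifier = nn.Conv1d(128, n_classes, kernel_size=1)

    def forward(self, signal):            # signal: (batch, 1, samples)
        features = self.encoder(signal)   # downsampled feature map
        logits = self.classifier(features)
        return logits.log_softmax(dim=1)  # per-timestep class log-probabilities

model = ToyBasecaller()
raw_chunk = torch.randn(8, 1, 4096)       # batch of raw signal windows
log_probs = model(raw_chunk)              # (8, 5, T') ready for CTC loss/decoding
```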

This technology “truly touches on the entire practice of medicine,” Powell said, whether COVID epidemiology or in human genetics and long read sequencing. “Through deep learning, their base calling model is able to reach an overall accuracy of 98.3 percent, and AI-driven single nucleotide variant calling gets them to 99.9 percent accuracy.”

Path Forward for AI-Powered Healthcare

AI-powered breakthroughs like these have grown in significance amid the pandemic, said Powell.

“The tremendous focus of AI on a single problem in 2020, like COVID-19, really showed us that with that tremendous focus, we can see every piece and part that can benefit from artificial intelligence,” she said. “What we’ve discovered over the last 12 months is only going to propel us further in the future. Everything we’ve learned is applicable for every future drug discovery program there is.”

Across fields as diverse as genome analysis, computational drug discovery and clinical diagnostics, healthcare heavyweights are making strides with GPU-accelerated AI. Hear more about it on Jan. 13 at 11 a.m. Pacific, when Powell joins a Washington Post Live conversation on AI in healthcare.

Subscribe to NVIDIA healthcare news here.

Sparkles in the Rough: NVIDIA’s Video Gems from a Hardscrabble 2020

Much of 2020 may look best in the rearview mirror, but the year also held many moments of outstanding work, gems worth hitting the rewind button to see again.

So, here’s a countdown — roughly in order of ascending popularity — of 10 favorite NVIDIA videos that hit YouTube in 2020. With two exceptions for videos that deserve a wide audience, all got at least 200,000 views and most, but not all, can be found on the NVIDIA YouTube channel.

#10 Coronavirus Gets a Close-Up

The pandemic was clearly the story of the year.

We celebrated the work of many healthcare providers and researchers pushing science forward to combat it, including the team that won a prestigious Gordon Bell award for using high performance computing and AI to see how the coronavirus works, something they explained in detail in their own video here.

In another one of the many responses to COVID-19, the Folding@Home project received donations of time on more than 200,000 NVIDIA GPUs to study the coronavirus. Using NVIDIA Omniverse, we created a visualization (described below) of data they amassed on their virtual exascale computer.

#9 Cruising into a Ray-Traced Future

Despite the challenging times, many companies continued to deliver top-notch work. For example, Autodesk VRED 2021 showed the shape of things to come in automotive design.

The demo below displays the power of ray tracing and AI to deliver realistic 3D visualizations in real time using RTX technology, snagging nearly a quarter million views. (Note: There’s no audio on this one, just amazing images.)

#8 A Test Drive in the Latest Mercedes

Just for fun — yes, even 2020 included fun — we look back at NVIDIA CEO Jensen Huang taking a spin in the latest Mercedes-Benz S-Class as part of the world premiere of the flagship sedan. He shared the honors with Grammy Award-winning artist Alicia Keys and Formula One champion Lewis Hamilton.

The S-Class uses AI to deliver intelligent features like a voice assistant personalized for each driver. An engineer and car enthusiast at heart, Huang gave kudos to the work of the hundreds of engineers who delivered a vehicle that, with over-the-air software updates, will keep getting better and better.

#7 Playing Marbles After Dark

The NVIDIA Omniverse team pointed the way to a future of photorealistic games and simulations rendered in real time. They showed how a distributed team of engineers and artists can integrate multiple tools to play more than a million polygons smoothly with ray-traced lighting at 1440p on a single GeForce RTX 3090.

The mesmerizing video captured the eyeballs of nearly half a million viewers.

#6 An AI Platform for the Rest of Us

Great things sometimes come in small packages. In October, we debuted the DGX Station A100, a supercomputer that plugs into a standard wall socket to let data scientists do world-class work in AI. More than 400,000 folks tuned in.

#5 Seeing Virtual Meetings Through a New AI

With online gatherings the new norm, NVIDIA Maxine attracted a lot of eyeballs. More than 800,000 viewers tuned into this demo of how we’re using generative adversarial networks to lower the bandwidth and turn up the quality of video conferencing.

#4 What’s Jensen Been Cooking?

Our most energy-efficient video of 2020 was a bit of a tease. It lasted less than 30 seconds, but Jensen Huang’s preview of the first NVIDIA Ampere architecture GPU drew nearly a million viewers.

#3 Voila, Jensen Whips Up the First Kitchen Keynote

In the days of the Great Depression, vacuum tubes flickered with fireside chats. The 2020 pandemic spawned a slew of digital events, with GTC among the first of them.

In May, Jensen recorded the first kitchen keynote in his California home. In a playlist of nine virtual courses, he served a smorgasbord where the NVIDIA A100 GPU was an entrée surrounded by software side dishes that included frameworks for conversational AI (Jarvis) and recommendation systems (Merlin). The first chapter alone attracted more than 300,000 views.

And we did it all again in October when we featured the first DPU, its DOCA software and a framework to accelerate drug discovery.

#2 Delivering Enterprise AI in a Box

The DGX A100 emerged as one of the favorite dishes from our May kitchen keynote. The 5-petaflops system packs AI training, inference and analytics for any data center.

Some 1.3 million viewers clicked to get a virtual tour of the eight A100 GPUs and 200 Gbit/second InfiniBand links inside it.

#1 Enough of All This Hard Work, Let’s Have Fun!

By September it was high time to break away from a porcupine of a year. With the GeForce RTX 30 Series GPUs, we rolled out engines to create lush new worlds for those whose go-to escape is gaming.

The launch video, viewed more than 1.5 million times, begins with a brief tour of the history of computer games. Good days remembered, good days to come.

For Dessert: Two Bytes of Chocolate

We’ll end 2020, happily, with two special mentions.

Our most watched video of the year was a blistering five-minute clip of game play on DOOM Eternal running all out on a GeForce RTX 3080 in 4K.

And perhaps our sweetest feel good moment of 2020 was delivered by an NVIDIA engineer, Bryce Denney, who hacked a way to let choirs sing together safely in the pandemic. Play it again, Bryce!

 

Inception to the Rule: AI Startups Thrive Amid Tough 2020

2020 served up a global pandemic that roiled the economy. Yet the startup ecosystem has managed to thrive and even flourish amid the tumult. That may be no coincidence.

Crisis breeds opportunity. And nowhere has that been more prevalent than with startups using AI, machine learning and data science to address a worldwide medical emergency and the upending of typical workplace practices.

This is also reflected in NVIDIA Inception, our program to nurture startups transforming industries with AI and data science. Here are a few highlights from a tremendous year for the program and the members it’s designed to propel toward growth and success.

Increased membership:

  • Inception hit a record 7,000 members — that’s up 25 percent on the year.
  • IT services, healthcare, and media and entertainment were the top three segments, reflecting the global pandemic’s impact on remote work, medicine and home-based entertainment.
  • Early-stage and seed-stage startups continue to lead the rate of joining NVIDIA Inception. This has been a consistent trend over recent years.

Startups ramp up: 

  • 100+ Inception startups reached the program’s Premier level, which unlocks increased marketing support, engineering access and exposure to senior customer contacts.
  • Developers from Inception startups enrolled in more than 2,000 sessions with the NVIDIA Deep Learning Institute, which offers hands-on training and workshops.
  • GPU Ventures, the venture capital arm of NVIDIA Inception, made investments in three startup companies — Plotly, Artisight and Rescale.

Deepening partnerships: 

  • NVIDIA Inception added Oracle’s Oracle for Startups program to its list of accelerator partners, which already includes AWS Activate and Microsoft for Startups, as well as a variety of regional programs. These tie-ups open the door for startups to access free cloud credits, new marketing channels, expanded customer networks, and other benefits across programs.
  • The NVIDIA Inception Alliance for Healthcare launched earlier this month, starting with healthcare leaders GE Healthcare and Nuance, to provide a clear go-to-market path for medical imaging startups.

At its core, NVIDIA Inception is about forging connections for prime AI startups, finding new paths for them to pursue success, and providing them with the tools or resources to take their business to the next level.

Read more about NVIDIA Inception partners on our blog and learn more about the program at https://www.nvidia.com/en-us/deep-learning-ai/startups/.

Scotland’s Rural College Makes Moo-ves Against Bovine Tuberculosis with AI

Each morning millions of bleary-eyed people pour milk into their bowls of cereal or cups of coffee without a second thought as to where that beverage came from.

Few will consider the processes in place to maintain the health of the animals involved in milk production and to ensure that the final product is fit for consumption.

For cattle farmers, few things can sour their efforts like bovine tuberculosis (bTB), a chronic, slow-progressing and debilitating disease. bTB presents significant economic and welfare challenges to the worldwide cattle sector.

Applying GPU-accelerated AI and data science, Scotland’s Rural College (SRUC), headquartered in Edinburgh, recently spearheaded groundbreaking research into how bTB can be monitored and treated more effectively and efficiently.

Bovine Tuberculosis

Caused by bacteria, bTB is highly infectious among cattle and transmissible to other animals and humans.

It also causes substantial financial strain through involuntary culling, animal movement restrictions, and the cost of control and eradication programs. In countries where mandatory eradication programs are not in place for bTB carriers, the disease also carries considerable public health implications.

As bTB is a slow-developing disease, it’s rare for cattle to show any signs of infection until the disease has progressed to its later stages.

To monitor the health of herds, cattle need to receive regular diagnostic tests. Currently, the standard is the single intradermal comparative cervical tuberculin (SICCT) skin test. These tests are time consuming, labor intensive and only correctly identify an infected animal about 50-80 percent of the time.

Milking It

SRUC’s research brought to light a new method of monitoring bTB through mid-infrared (MIR) analysis of milk samples that were already being collected as part of regular quality control checks.

First, the bTB phenotype (the observable characteristics of an infected animal) was created using data relating to traditional SICCT skin-test results, culture status, whether a cow was slaughtered, and whether any bTB-caused lesions were observed. Information from each of these categories was combined to create a binary phenotype, with 0 representing healthy cows and 1 representing bTB-affected cows.
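
As a rough illustration of that labeling step, the sketch below combines several evidence columns into a single binary phenotype with pandas. The column names and the combination rule are hypothetical, not SRUC’s published definition.

```python
# Illustrative sketch of building a binary bTB phenotype from several evidence
# sources; column names and the combination rule are hypothetical.
import pandas as pd

records = pd.DataFrame({
    "cow_id":        [101, 102, 103],
    "sicct_reactor": [0, 1, 0],   # failed the SICCT skin test
    "culture_pos":   [0, 0, 0],   # bTB culture-positive
    "slaughtered":   [0, 1, 0],   # removed after a positive result
    "lesions":       [0, 1, 0],   # bTB-caused lesions observed post-mortem
})

evidence = ["sicct_reactor", "culture_pos", "slaughtered", "lesions"]
records["btb_phenotype"] = (records[evidence].sum(axis=1) > 0).astype(int)
# 0 = healthy, 1 = bTB-affected
print(records[["cow_id", "btb_phenotype"]])
```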

Contemporaneous individual milk MIR data was collected as part of monthly routine milk recording, matched to bTB status of individual animals on the SICCT test date, and converted into 53×20-pixel images. These were used to train a deep convolutional neural network on an NVIDIA DGX Station that was able to identify particular high-level features indicative of bTB infection.
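
The sketch below shows what that setup can look like in PyTorch: a MIR spectrum reshaped into a 53×20-pixel “image” (1,060 values) and passed through a small convolutional classifier. The architecture, layer sizes and training details are illustrative assumptions rather than SRUC’s actual model.

```python
# Illustrative sketch: treat a cow's 1,060-point MIR spectrum as a 53x20 "image"
# and classify bTB status with a small CNN. Architecture and names are hypothetical.
import torch
import torch.nn as nn

class MIRClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 53x20 -> 26x10
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 26x10 -> 13x5
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 13 * 5, 64), nn.ReLU(),
            nn.Linear(64, 1),                     # logit for P(bTB-affected)
        )

    def forward(self, x):
        return self.head(self.features(x))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = MIRClassifier().to(device)

spectra = torch.randn(32, 1, 53, 20, device=device)        # batch of MIR "images"
labels = torch.randint(0, 2, (32, 1), device=device).float()
loss = nn.BCEWithLogitsLoss()(model(spectra), labels)
loss.backward()                                             # one training step's gradients
```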

SRUC’s models were able to identify which cows would be expected to fail the SICCT skin test, with an accuracy of 95 percent and a corresponding sensitivity and specificity of 0.96 and 0.94, respectively.

To process the millions of data points used for training their bTB prediction models, the team at SRUC needed a computing system that was fast, stable and secure. Using an NVIDIA DGX Station, models that had previously needed months of work now could be developed in a matter of days. And with RAPIDS data science software on top, the team further accelerated their research and started developing deep learning models in just a few hours.

“By running our models on NVIDIA DGX Station with RAPIDS, we were able to speed up the time it took to develop models at least tenfold,” said Professor Mike Coffey, leader of the Animal Breeding Team and head of EGENES at SRUC. “Speeding up this process means that we’ll be able to get meaningful solutions for combating bTB into the hands of farmers faster and vastly improve how bTB is handled nationwide.”

Moo-ving Forward

Using routinely collected milk samples for the early identification of bTB-infected cows represents an innovative, low-cost and, importantly, noninvasive tool that has the potential to contribute substantially to the push to eradicate bTB in the U.K. and beyond.

Such a tool would enable farmers to get access to crucial information much faster than currently possible. And this would enable farmers to make more efficient and informed decisions that significantly increase the health and welfare of their animals, as well as reduce costs to the farm, government and taxpayer.

The success of predicting bTB status with deep learning also opens up the possibility to calibrate MIR analysis for other diseases, such as paratuberculosis (Johne’s disease), to help improve cattle welfare further.

NVIDIA Chief Scientist Highlights New AI Research in GTC Keynote

NVIDIA researchers are defining ways to make faster AI chips in systems with greater bandwidth that are easier to program, said Bill Dally, NVIDIA’s chief scientist, in a keynote released today for a virtual GTC China event.

He described three projects as examples of how the 200-person research team he leads is working to stoke Huang’s Law — the prediction named for NVIDIA CEO Jensen Huang that GPUs will double AI performance every year.

“If we really want to improve computer performance, Huang’s Law is the metric that matters, and I expect it to continue for the foreseeable future,” said Dally, who helped direct research at NVIDIA in AI, ray tracing and fast interconnects.

NVIDIA has more than doubled performance of GPUs on AI inference every year.

An Ultra-Efficient Accelerator

Toward that end, NVIDIA researchers created a tool called MAGNet that generated an AI inference accelerator that hit 100 tera-operations per watt in a simulation. That’s more than an order of magnitude greater efficiency than today’s commercial chips.

MAGNet uses new techniques to orchestrate the flow of information through a device in ways that minimize the data movement that burns most of the energy in today’s chips. The research prototype is implemented as a modular set of tiles so it can scale flexibly.

A separate effort seeks to replace today’s electrical links inside systems with faster optical ones.

Firing on All Photons

“We can see our way to doubling the speed of our NVLink [that connects GPUs] and maybe doubling it again, but eventually electrical signaling runs out of gas,” said Dally, who holds more than 120 patents and chaired the computer science department at Stanford before joining NVIDIA in 2009.

The team is collaborating with researchers at Columbia University on ways to harness techniques telecom providers use in their core networks to merge dozens of signals onto a single optical fiber.

Called dense wavelength division multiplexing, it holds the potential to pack multiple terabits per second into links that fit into a single millimeter of space on the side of a chip, more than 10x the density of today’s interconnects.

Besides faster throughput, the optical links enable denser systems. For example, Dally showed a mockup (below) of a future NVIDIA DGX system with more than 160 GPUs.

Optical links help pack dozens of GPUs in a system.

In software, NVIDIA’s researchers have prototyped a new programming system called Legate. It lets developers take a program written for a single GPU and run it on a system of any size — even a giant supercomputer like Selene that packs thousands of GPUs.

Legate couples a new form of programming shorthand with accelerated software libraries and an advanced runtime environment called Legion. It’s already being put to the test at U.S. national labs.
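
To give a flavor of the programming model, here is a minimal sketch of what a Legate-style program can look like: ordinary NumPy-style array code with no mention of devices or nodes, leaving distribution to the runtime. The `legate.numpy` module path follows NVIDIA’s early Legate examples and should be treated as an assumption; everything else is plain NumPy semantics.

```python
# A minimal sketch of the Legate idea: write ordinary NumPy-style code once and
# let the runtime scale it from one GPU to many. The module path "legate.numpy"
# follows NVIDIA's early Legate examples and is an assumption here.
import legate.numpy as np   # drop-in for "import numpy as np"

# A simple Jacobi-style stencil iteration on a large grid. Nothing in the code
# mentions devices or nodes -- the Legion runtime partitions the arrays and work.
grid = np.zeros((4096, 4096))
grid[0, :] = 1.0                      # boundary condition

for _ in range(100):
    interior = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                       grid[1:-1, :-2] + grid[1:-1, 2:])
    grid[1:-1, 1:-1] = interior

print(float(grid.mean()))
```

The same script, unchanged, is the kind of program Legate aims to scale from a single workstation GPU to a supercomputer like Selene.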

Rendering a Vivid Future

The three research projects make up just one part of Dally’s keynote, which describes NVIDIA’s domain-specific platforms for a variety of industries such as healthcare, self-driving cars and robotics. He also delves into data science, AI and graphics.

“In a few generations our products will produce amazing images in real time using path tracing with physically based rendering, and we’ll be able to generate whole scenes with AI,” said Dally.

He showed the first public demonstration (below) that combines NVIDIA’s conversational AI framework called Jarvis with GauGAN, a tool that uses generative adversarial networks to create beautiful landscapes from simple sketches. The demo lets users instantly generate photorealistic landscapes using simple voice commands.

In an interview between recording sessions for the keynote, Dally expressed particular pride for the team’s pioneering work in several areas.

“All our current ray tracing started in NVIDIA Research with prototypes that got our product teams excited. And in 2011, I assigned [NVIDIA researcher] Bryan Catanzaro to work with [Stanford professor] Andrew Ng on a project that became cuDNN, software that kicked off much of our work in deep learning,” he said.

A First Foothold in Networking

Dally also spearheaded a collaboration that led to the first prototypes of NVLink and NVSwitch, interconnects that link GPUs running inside some of the world’s largest supercomputers today.

“The product teams grabbed the work out of our hands before we were ready to let go of it, and now we’re considered one of the most advanced networking companies,” he said.

With his passion for technology, Dally said he often feels like a kid in a candy store. He may hop from helping a group with an AI accelerator one day to helping another team sort through a complex problem in robotics the next.

“I have one of the most fun jobs in the company if not in the world because I get to help shape the future,” he said.

The keynote is just one of more than 220 sessions at GTC China. All the sessions are free and most are conducted in Mandarin.

Panel, Startup Showcase at GTC China

Following the keynote, a panel of senior NVIDIA executives will discuss how the company’s technologies in AI, data science, healthcare and other fields are being adopted in China.

The event also includes a showcase of a dozen top startups in China, hosted by NVIDIA Inception, an acceleration program for AI and data science startups.

Companies participating in GTC China include Alibaba, AWS, Baidu, ByteDance, China Telecom, Dell Technologies, Didi, H3C, Inspur, Kuaishou, Lenovo, Microsoft, Ping An, Tencent, Tsinghua University and Xiaomi.

Stuttgart Supercomputing Center Shifts into AI Gear

Stuttgart’s supercomputer center has been cruising down the autobahn of high performance computing like a well-torqued coupe, and now it’s making a pitstop for some AI fuel.

Germany’s High-Performance Computing Center Stuttgart (HLRS), one of Europe’s largest supercomputing centers, has tripled the size of its staff and increased its revenues from industry collaborations 20x since Michael Resch became director in 2002. In the past year, much of the growth has come from interest in AI.

With demand for machine learning on the rise, HLRS signed a deal to add 192 NVIDIA Ampere architecture GPUs, linked on an NVIDIA Mellanox InfiniBand network, to its Hawk supercomputer, which is based on an Apollo system from Hewlett Packard Enterprise.

Hawk Flies to New Heights

The GPUs will propel what’s already ranked as the world’s 16th largest system to new heights. In preparation for the expansion, researchers are gearing up AI projects that range from predicting the path of the COVID-19 pandemic to the science behind building better cars and planes.

“Humans can create huge simulations, but we can’t always understand all the data — the big advantage of AI is it can work through the data and see its consequences,” said Resch, who also serves as a professor at the University of Stuttgart with a background in engineering, computer science and math.

The center made its first big leap into AI last year when it installed a Cray CS-Storm system with more than 60 NVIDIA GPUs. It is already running AI programs that analyze market data for Mercedes-Benz, investment portfolios for a large German bank and a music database for a local broadcaster.

“It turned out to be an extremely popular system because there’s a growing community of people who understand AI has a benefit for them,” Resch said of the system now running at near capacity. “By the middle of this year it was clear we had to expand to cover our growing AI requirements,” he added.

The New Math: HPC+AI

The future for the Stuttgart center, and the HPC community generally, is about hybrid computing where CPUs and GPUs work together, often to advance HPC simulations with AI.

“Combining the two is a golden bullet that propels us into a better future for understanding problems,” he said.

For example, one researcher at the University of Stuttgart will use data from as many as 2 billion simulations to train neural networks that can quickly and economically evaluate metal alloys. The AI model it spawns could run on a PC and help companies producing sheet metal choose the best alloys for, say, a car door.

“This is extremely helpful in situations where experimentation is difficult or costly,” he said.

And it’s an apropos app for the center situated in the same city that’s home to the headquarters of both Mercedes and Porsche.
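
For a sense of the simulation-trained surrogate approach described above, here is a minimal scikit-learn sketch that fits a small neural network to precomputed simulation results and then evaluates new candidates almost instantly. The features, targets and sizes are invented for illustration and bear no relation to the Stuttgart group’s actual models.

```python
# Illustrative surrogate model: learn a cheap approximation of an expensive
# alloy simulation from precomputed (composition, process) -> property data.
# Features, targets and sizes are invented for illustration only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 50_000                                   # stand-in for simulation results
X = rng.uniform(size=(n, 6))                 # e.g. alloying fractions, temperatures
y = X @ rng.uniform(size=6) + 0.1 * np.sin(10 * X[:, 0])   # fake simulated property

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=200, random_state=0)
surrogate.fit(X_train, y_train)

# Once trained, scoring a candidate alloy takes a fraction of a second on a PC,
# instead of a fresh simulation run.
print("R^2 on held-out simulations:", surrogate.score(X_test, y_test))
```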

In the Flow with Machine Learning

A separate project in fluid dynamics will take a similar approach.

A group from the university will train neural networks on data from highly accurate simulations to create an AI model that can improve analysis of turbulence. It’s a critical topic for companies such as Airbus that are collaborating with HLRS on efforts to mine the aerospace giant’s data on airflow.

The Stuttgart center also aims to use AI as part of a European research project to predict when hospital beds could fill up in intensive-care units amid the pandemic. The project started before the coronavirus hit, but it accelerated in the wake of COVID-19.

Tracking the Pandemic with AI

One of the project’s goals is to give policy makers a four-week window to respond before hospitals would reach their capacity.

“It’s a critical question with so many people dying — we’ve seen scenarios in places like Italy, New York and Wuhan where ICUs filled up in the first weeks of the pandemic,” Resch said.

“So, we will conduct simulations and predictions of the outlook for the pandemic over the next weeks and months, and GPUs will be extremely helpful for that,” he added.

It’s perhaps the highest profile of many apps now in the pipeline for the GPU-enhanced engine that will propel Stuttgart’s researchers further down the road on their journey into AI.

How GPUs Can Democratize Deep Reinforcement Learning for Robotics Development

It can take a puppy weeks to learn that certain kinds of behaviors will result in a yummy treat, extra cuddles or a belly rub — and that other behaviors won’t. With a system of positive reinforcement, a pet pooch will in time anticipate that chasing squirrels is less likely to be rewarded than staying by their human’s side.

Deep reinforcement learning, a technique used to train AI models for robotics and complex strategy problems, works off the same principle.

In reinforcement learning, a software agent interacts with a real or virtual environment, relying on feedback from rewards to learn the best way to achieve its goal. Like the brain of a puppy in training, a reinforcement learning model uses information it’s observed about the environment and its rewards, and determines which action the agent should take next.

To date, most researchers have relied on a combination of CPUs and GPUs to run reinforcement learning models. This means different parts of the computer tackle different steps of the process — including simulating the environment, calculating rewards, choosing what action to take next, actually taking action, and then learning from the experience.

But switching back and forth between CPU cores and powerful GPUs is by nature inefficient, requiring data to be transferred from one part of the system’s memory to another at multiple points during the reinforcement learning training process. It’s like a student who has to carry a tall stack of books and notes from classroom to classroom, plus the library, before grasping a new concept.

With Isaac Gym, NVIDIA developers have made it possible to instead run the entire reinforcement learning pipeline on GPUs — enabling significant speedups and reducing the hardware resources needed to develop these models.

Here’s what this breakthrough means for the deep reinforcement learning process, and how much acceleration it can bring developers.

Reinforcement Learning on GPUs: Simulation to Action 

When training a reinforcement learning model for a robotics task — like a humanoid robot that walks up and down stairs — it’s much faster, safer and easier to use a simulated environment than the physical world. In a simulation, developers can create a sea of virtual robots that can quickly rack up thousands of hours of experience at a task.

If tested solely in the real world, a robot in training could fall down, bump into or mishandle objects — causing potential damage to its own machinery, the object it’s interacting with or its surroundings. Testing in simulation provides the reinforcement learning model a space to practice and work out the kinks, giving it a head start when shifting to the real world.

In a typical system today, the NVIDIA PhysX simulation engine runs this experience-gathering phase of the reinforcement learning process on NVIDIA GPUs. But for other steps of the training application, developers have traditionally still used CPUs.

Traditional deep reinforcement learning uses a combination of CPU and GPU computing resources, requiring significant data transfers back and forth.

A key part of reinforcement learning training is conducting what’s known as the forward pass: First, the system simulates the environment, records a set of observations about the state of the world and calculates a reward for how well the agent did.

The recorded observations become the input to a deep learning “policy” network, which chooses an action for the agent to take. Both the observations and the rewards are stored for use later in the training cycle.

Finally, the action is sent back to the simulator so that the rest of the environment can be updated in response.

After several rounds of these forward passes, the reinforcement learning model takes a look back, evaluating whether the actions it chose were effective or not. This information is used to update the policy network, and the cycle begins again with the improved model.
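
Compressed into code, the loop described above might look like the PyTorch sketch below, with the vectorized simulator left as a stub. It is meant to show the data flow of the forward pass and the policy update, not Isaac Gym’s actual API.

```python
# Compressed sketch of the forward-pass/update loop described above. The
# vectorized environment is a stand-in stub, not Isaac Gym's API; in an
# end-to-end GPU pipeline all of these tensors stay resident on the GPU.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
num_envs, obs_dim, act_dim = 4096, 48, 12

policy = nn.Sequential(nn.Linear(obs_dim, 256), nn.ELU(),
                       nn.Linear(256, act_dim)).to(device)
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)

def simulate(actions):
    """Stub for stepping thousands of simulated environments in parallel."""
    obs = torch.randn(num_envs, obs_dim, device=device)
    rewards = -actions.pow(2).mean(dim=-1)          # placeholder reward signal
    return obs, rewards

obs = torch.randn(num_envs, obs_dim, device=device)
for update in range(10):
    log_probs, rewards_buf = [], []
    for step in range(16):                          # several forward passes
        mean = policy(obs)                          # policy network picks actions
        dist = torch.distributions.Normal(mean, 1.0)
        actions = dist.sample()
        obs, rewards = simulate(actions)            # environment responds
        log_probs.append(dist.log_prob(actions).sum(-1))
        rewards_buf.append(rewards)                 # stored for the look-back
    # Look back over the rollout and nudge the policy toward higher-reward actions.
    returns = torch.stack(rewards_buf)
    loss = -(torch.stack(log_probs) * (returns - returns.mean())).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In an end-to-end GPU pipeline, the simulator, the stored observations and rewards, and the policy network all live in GPU memory, so none of these tensors ever make the round trip to the CPU.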

GPU Acceleration with Isaac Gym 

To eliminate the overhead of transferring data back and forth from CPU to GPU during this reinforcement learning training cycle, NVIDIA researchers have developed an approach to run every step of the process on GPUs. This is Isaac Gym, an end-to-end training environment, which includes the PhysX simulation engine and a PyTorch tensor-based API.

Isaac Gym makes it possible for a developer to run tens of thousands of environments simultaneously on a single GPU. That means experiments that previously required a data center with thousands of CPU cores can in some cases be trained on a single workstation.

NVIDIA Isaac Gym runs entire reinforcement learning pipelines on GPUs, enabling significant speedups.

Decreasing the amount of hardware required makes reinforcement learning more accessible to individual researchers who don’t have access to large data center resources. It can also make the process a lot faster.

A simple reinforcement learning model tasked with getting a humanoid robot to walk can be trained in just a few minutes with Isaac Gym. But the impact of end-to-end GPU acceleration is most useful for more challenging tasks, like teaching a complex robot hand to manipulate a cube into a specific position.

This problem requires significant dexterity by the robot, and a simulation environment that involves domain randomization, a mechanism that allows the learned policy to more easily transfer to a real-world robot.

Research by OpenAI tackled this task with a cluster of more than 6,000 CPU cores plus multiple NVIDIA Tensor Core GPUs — and required about 30 hours of training for the reinforcement learning model to succeed at the task 20 times in a row using a feed-forward network model.

Using just one NVIDIA A100 GPU with Isaac Gym, NVIDIA developers were able to achieve the same level of success in around 10 hours — a single GPU outperforming an entire cluster by a factor of 3x.

To learn more about Isaac Gym, visit our developer news center.

The video above shows a cube manipulation task trained by Isaac Gym on a single NVIDIA A100 GPU and rendered in NVIDIA Omniverse.

Talk Stars: Israeli AI Startup Brings Fluency to Natural Language Understanding

Whether talking with banks, cell phone providers or insurance companies, customers often encounter AI-powered voice interfaces to direct their calls to the right department.

But these interfaces typically are limited to understanding certain keywords. Onvego, an Israel-based startup, is working to make these systems understand what you say, no matter how you say it.

Before starting Onvego, the company’s founders created a mobile speech apps platform to assist blind people. Now they’re creating pre-built AI models for such use cases as accepting or requesting payments, scheduling appointments or booking reservations.

Onvego is a member of NVIDIA Inception, a program that accelerates AI and data science startups with go-to-market support, expertise and technology.

The company’s AI enables enterprises to easily build their own conversational interfaces in 10 different languages, with more on the way. Its technology already powers Israel’s toll road payment systems, enabling drivers to pay with their voice.

“Say the customer said, ‘I want to pay my bill.’ The system has to understand what that means,” said Alon Buchnik, CTO and co-founder of Onvego. “Once it does, it sends that information back to the speech machine, where logic is applied.”

The system then walks the driver through the payment process. Onvego’s AI also powers two emergency road services providers in Israel, providing AI-powered answers to drivers in need.

“The speech machine understands exactly what the problem is,” said Buchnik. “It understands if it needs to send a tow truck or just a technician.”
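
As a toy illustration of the difference between keyword spotting and natural language understanding, the sketch below trains a tiny scikit-learn intent classifier on a handful of invented utterances. It has nothing to do with Onvego’s implementation; it simply shows why a learned model can handle phrasings a keyword list would miss.

```python
# Toy contrast between keyword matching and a learned intent classifier.
# Training utterances and intent labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "I want to pay my bill", "settle my toll charge", "can I pay what I owe",
    "my car broke down", "I need a tow truck", "the engine won't start",
]
intents = ["pay_bill", "pay_bill", "pay_bill",
           "roadside_help", "roadside_help", "roadside_help"]

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(utterances, intents)

# A keyword spotter looking only for "pay" misses this phrasing; the trained
# model can still recover the intent from overlapping words like "owe".
print(classifier.predict(["could you sort out what I owe for the motorway"]))
```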

In Search of Ubiquity

Road-related applications are just the tip of the iceberg for Onvego. The company envisions its technology being inside everything from coffee machines to elevators. Along those lines, it’s forged partnerships with GE, GM, Skoda, Amazon and numerous other companies.

For instance, Onvego’s AI is being incorporated into a line of elevators, enabling the manufacturer to provide a conversational voice interface for users.

With the COVID-19 pandemic raging around the globe, Buchnik believes the company’s no-touch technology, such as activating elevators by voice only, can deliver an added benefit by reducing transmission of the virus.

But Onvego’s most ambitious undertaking may be its call center technology. The company has developed an application, powered by NVIDIA GPUs, that’s designed to do the work of an entire call center operation.

It runs as a cloud-based service as well as an enterprise on-premises solution that can provide real-time natural language call center support for IoT devices at the network’s edge, even where there’s no internet connectivity.

GPUs at the Core

Buchnik said that while it would be possible for the Onvego call center application to answer 50 simultaneous calls without GPUs, “it would require a huge CPU” to do so. “For the GPU, it’s nothing,” he said.

Onvego also uses a CUDA decoder so developers can access decoding capabilities on the GPU.

Training of the company’s automatic speech recognition models occurs on NVIDIA GPU-powered instances from AWS or Azure, which Onvego acquired through NVIDIA Inception.

Aside from its efforts to expand the use of its technology, Onvego is focused on a standalone container for locations at the edge or completely disconnected from the network, which it plans to run on an NVIDIA Jetson Nano.

The idea of providing intelligent natural language interfaces to people wherever they’re needed is providing Buchnik and his team with all the motivation they need.

“This is our vision,” he said. “This is where we want to be.”

Buchnik credits the NVIDIA Inception program with providing the company access to top AI experts, technical resources and support, and a large marketing program with strong positioning in different market verticals.

By using the NVIDIA resources and platforms, Onvego is hoping to promote its intelligent voice solutions to markets and industries that it has not yet reached.

NVIDIA Chief Scientist Bill Dally to Keynote at GTC China

Bill Dally — one of the world’s foremost computer scientists and head of NVIDIA’s research efforts — will deliver the keynote address during GTC China, the latest event in the world’s premier conference series focused on AI, deep learning and high performance computing.

Registration is not required to view the keynote, which will take place on Dec. 14, at 6 p.m. Pacific time (Dec. 15, 10 a.m. China Standard time). GTC China is a free, online event, running Dec. 15-19.

Tens of thousands of attendees are expected to join the event, with thousands more tuning in to hear Dally speak on the latest innovations in AI, graphics, HPC, healthcare, edge computing and autonomous machines. He will also share new research in the areas of AI inference, silicon photonics, and GPU cluster acceleration.

In a career spanning nearly four decades, Dally has pioneered many of the fundamental technologies underlying today’s supercomputer and networking architectures. As head of NVIDIA Research, he leads a team of more than 200 around the globe who are inventing technologies for a wide variety of applications, including AI, HPC, graphics and networking.

Prior to joining NVIDIA as chief scientist and senior vice president of research in 2009, he chaired Stanford University’s computer science department.

Dally is a member of the National Academy of Engineering and a fellow of the American Academy of Arts & Sciences, the Institute of Electrical and Electronics Engineers and the Association for Computing Machinery. He’s written four textbooks, published more than 250 papers and holds over 120 patents, and has received the IEEE Seymour Cray Award, ACM Eckert-Mauchly Award and ACM Maurice Wilkes Award.

Following Dally’s keynote, four senior NVIDIA executives will describe how the company’s latest breakthroughs in AI, data science and healthcare are being adopted in China. The panel discussion will take place on Monday, Dec. 14, at 7:10 p.m. Pacific (Dec. 15 at 11:10 a.m. CST).

GTC China Highlights

GTC is the premier conference for developers to strengthen their skills on a wide range of technologies. It will include 220+ live and on-demand sessions and enable attendees to ask questions and interact with experts.

Many leading organizations will participate, including Alibaba, AWS, Baidu, ByteDance, China Telecom, Dell Technologies, Didi, Hewlett Packard Enterprise, Inspur, Kuaishou, Lenovo, Microsoft, Ping An, Tencent, Tsinghua University and Xiaomi.

Certified instructors will provide virtual training for hundreds of participants in the NVIDIA Deep Learning Institute. DLI seats are currently sold out.

NVIDIA Inception, an acceleration program for AI and data science startups, will host 12 leading Chinese startups in the NVIDIA Inception Startup Showcase. Attendees will have the opportunity to see presentations from the 12 CXOs, whose companies were selected by winning a vote among more than 40 participating startups.

For more details and to register for GTC China at no charge, visit www.nvidia.cn/gtc/.
