AI to Hit Mars, Blunt Coronavirus, Play at the London Symphony Orchestra

AI is the rocket fuel that will get us to Mars. It’s the vaccine that will save us on Earth. And it’s the people who aspire to make a dent in the universe.

Our latest “I Am AI” video, unveiled during NVIDIA CEO Jensen Huang’s keynote address at the GPU Technology Conference, pays tribute to the scientists, researchers, artists and many others making historic advances with AI.

To grasp AI’s global impact, consider: the technology is expected to generate $2.9 trillion worth of business value by 2021, according to Gartner.

It’s on course to classify 2 trillion galaxies to understand the universe’s origin, and to zero in on the molecular structure of the drugs needed to treat coronavirus and cancer.

As depicted in the latest video, AI has an artistic side, too. It can paint as well as Bob Ross. And its ability to assist in the creation of original compositions is worthy of the London Symphony Orchestra, which plays the accompanying theme music, a piece that started out written by a recurrent neural network.

AI is also capable of creating text-to-speech synthesis for narrating a short documentary. And that’s just what it did.

These fireworks and more are the story of I Am AI. Sixteen companies and research organizations are featured in the video. The action moves fast, so grab a bowl of popcorn, kick back and enjoy this tour of some of the highlights of AI in 2020.

Reaching Into Outer Space

Understanding the formation of the structure and the amount of matter in the universe requires observing and classifying celestial objects such as galaxies. With an estimated 2 trillion galaxies to examine in the observable universe, it’s what cosmologists call a “computational grand challenge.”

The recent Dark Energy Survey collected data from over 300 million galaxies. To study them with unprecedented precision, the Center for Artificial Intelligence Innovation at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign teamed up with the Argonne Leadership Computing Facility at the U.S. Department of Energy’s Argonne National Laboratory.

NCSA tapped the Galaxy Zoo project, a crowdsourced astronomy effort that labeled millions of galaxies observed by the Sloan Digital Sky Survey. Using that data, an AI model with 99.6 percent accuracy can now chew through unlabeled galaxies to ID them and accelerate scientific research.
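The NCSA model itself is a deep network trained on GPUs; as a purely illustrative sketch of the underlying supervised-learning pattern — train on labeled galaxies, then classify unlabeled ones — here is a minimal logistic regression on synthetic stand-in features (the features, classes and accuracy below are invented for the example):

```python
import numpy as np

# Hedged sketch of the supervised pattern described above: train on labeled
# examples (Galaxy Zoo-style), then classify unlabeled galaxies. The actual
# NCSA model is a deep CNN; this stand-in uses a tiny logistic regression on
# two synthetic "morphology" features.
rng = np.random.default_rng(0)

# Synthetic labeled set: class 0 (spiral-like) vs. class 1 (elliptical-like)
X = np.vstack([rng.normal(-1.0, 0.5, (200, 2)), rng.normal(1.0, 0.5, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

w, b = np.zeros(2), 0.0
for _ in range(500):                       # plain batch gradient descent
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# "Unlabeled" galaxies drawn from the class-0 distribution get classified
unlabeled = rng.normal(-1.0, 0.5, (50, 2))
pred = (sigmoid(unlabeled @ w + b) > 0.5).astype(int)
accuracy = float(np.mean(pred == 0))
```

The real task swaps the two synthetic features for image pixels and the logistic regression for a GPU-trained convolutional network, but the train-on-labels, predict-on-unlabeled loop is the same.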

With Mars targeted for human travel, scientists are seeking the safest path. In that effort, the NASA Solar Dynamics Observatory takes images of the sun every 1.3 seconds. And researchers have developed an algorithm that removes errors from the images, which are placed into a growing archive for analysis.

Using such data, NASA is tapping into NVIDIA GPUs to analyze solar surface flows so that it can build better models for predicting the weather in space. NASA also aims to identify origins of energetic particles in Earth’s orbit that could damage interplanetary spacecraft, jeopardizing trips to Mars.

Restoring Voice and Limb

Voiceitt — a Tel Aviv-based startup that’s developed signal processing, speech recognition technologies and deep neural nets — offers a synthesized voice for those whose speech has been distorted. The company’s app converts unintelligible speech into easily understood speech.

The University of North Carolina at Chapel Hill’s Neuromuscular Rehabilitation Engineering Laboratory and North Carolina State University’s Active Robotic Sensing (ARoS) Laboratory develop experimental robotic limbs used in the labs.

The two research units have been working on walking environment recognition, aiming to develop environmental adaptive controls for prostheses. They’ve been using CNNs for prediction running on NVIDIA GPUs. And they aren’t alone.

Helping in Pandemic

Whiteboard Coordinator remotely monitors the temperature of people entering buildings to minimize exposure to COVID-19. The Chicago-based startup provides temperature-screening rates of more than 2,000 people per hour at checkpoints. Whiteboard Coordinator and NVIDIA bring AI to the edge of healthcare with NVIDIA Clara Guardian, an application framework that simplifies the development and deployment of smart sensors. Another startup uses AI to inform neurologists about strokes much faster than traditional methods; with the onset of the pandemic, it moved to help combat the new virus with an app that alerts care teams to positive COVID-19 results.

Axial3D is a Belfast, Northern Ireland, startup that enlists AI to accelerate the production time of 3D-printed models for medical images used in planning surgeries. Having redirected its resources at COVID-19, the company is now supplying face shields and is among those building ventilators for the U.K.’s National Health Service. It has also begun 3D printing of swab kits for testing as well as valves for respirators. (Check out their on-demand webinar.)

Autonomizing Contactless Help

KiwiBot, a cheery-eyed food delivery bot from Berkeley, Calif., has expanded its route to include COVID-19 services. It’s autonomously delivering masks, sanitizers and other supplies with its robot-to-human service.

Masterpieces of Art, Compositions and Narration

Researchers from London-based startup Oxia Palus demonstrated in a paper, “Raiders of the Lost Art,” that AI could be used to recreate lost works of art that had been painted over. Beneath Picasso’s 1902 The Crouching Beggar lies a mountainous landscape that art curators believe is of Parc del Laberint d’Horta, near Barcelona.

They also know that Santiago Rusiñol painted Parc del Laberint d’Horta. Using a modified X-ray fluorescence image of The Crouching Beggar and Santiago Rusiñol’s Terraced Garden in Mallorca, the researchers applied neural style transfer, running on NVIDIA GPUs, to reconstruct the lost artwork, creating Rusiñol’s Parc del Laberint d’Horta.
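One building block of the neural style transfer technique the researchers applied is the Gram matrix: channel-to-channel correlations of a CNN layer’s feature maps, which capture texture independent of spatial layout. A minimal numpy sketch of that core computation, with random stand-in feature maps rather than the researchers’ actual pipeline:

```python
import numpy as np

# The style representation at the heart of neural style transfer: the Gram
# matrix of a CNN layer's feature maps. Feature maps here are random
# stand-ins, not activations from the actual artwork reconstruction.
def gram(features):
    """Gram matrix of a (C, H, W) feature map, normalized by its size."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)

def style_loss(generated, style):
    """Mean squared difference between the two Gram matrices."""
    return float(np.mean((gram(generated) - gram(style)) ** 2))

rng = np.random.default_rng(0)
style_feat = rng.normal(size=(8, 16, 16))   # e.g. one VGG layer's activations
gen_feat = rng.normal(size=(8, 16, 16))

loss = style_loss(gen_feat, style_feat)      # positive: the styles differ
assert style_loss(style_feat, style_feat) == 0.0
```

In full style transfer, this loss is minimized by gradient descent on the generated image itself, which is the GPU-heavy part of the job.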


For GTC a few years ago, Luxembourg-based AIVA AI composed the start — melodies and accompaniments — of what would become an original classical music piece meriting an orchestra. Since then, we’ve found it one.

Late last year, the London Symphony Orchestra agreed to play the moving piece, which was arranged for the occasion by musician John Paesano and was recorded at Abbey Road Studios.


NVIDIA alum Helen was our voice-over professional for videos and events for years. When she left the company, we thought about how we might continue the tradition. We turned to what we know: AI. But there weren’t publicly available models up to the task.

A team from NVIDIA’s Applied Deep Learning Research group published the answer to the problem: Flowtron: an Autoregressive Flow-based Generative Network for Text-to-Speech Synthesis. Licensing Helen’s voice, we trained the network on dozens of hours of it.

First, Helen produced multiple takes, guided by our creative director. Then our creative director was able to generate multiple takes from Flowtron and adjust parameters of the model to get the desired outcome. And what you hear is “Helen” speaking in the I Am AI video narration.

The post AI to Hit Mars, Blunt Coronavirus, Play at the London Symphony Orchestra appeared first on The Official NVIDIA Blog.

What’s a DPU?


Of course, you’re probably already familiar with the Central Processing Unit or CPU. Flexible and responsive, for many years CPUs were the sole programmable element in most computers.

More recently the GPU, or graphics processing unit, has taken a central role. Originally used to deliver rich, real-time graphics, GPUs have parallel processing capabilities that make them ideal for accelerated computing tasks of all kinds.

That’s made them the key to artificial intelligence, deep learning, and big data analytics applications.

Over the past decade, however, computing has broken out of the boxy confines of PCs and servers — with CPUs and GPUs powering sprawling new hyperscale data centers.

These data centers are knit together with a powerful new category of processors. The DPU, or data processing unit, has become the third member of the data-centric accelerated computing model. “This is going to represent one of the three major pillars of computing going forward,” NVIDIA CEO Jensen Huang said during a talk earlier this month.

“The CPU is for general purpose computing, the GPU is for accelerated computing and the DPU, which moves data around the data center, does data processing.”


So What Makes a DPU Different?

A DPU is a new class of programmable processor, a system on a chip, or SOC, that combines three key elements:

  • An industry-standard, high-performance, software-programmable, multi-core CPU, typically based on the widely used Arm architecture, tightly coupled to the other SOC components
  • A high-performance network interface capable of parsing, processing and efficiently transferring data at line rate — the speed of the rest of the network — to GPUs and CPUs
  • A rich set of flexible and programmable acceleration engines that offload and improve application performance for AI and machine learning, security, telecommunications and storage, among others

All these DPU capabilities are critical to enabling the isolated, bare-metal, cloud-native computing that will define the next generation of cloud-scale computing.

DPUs: Incorporated into SmartNICs

The DPU can be used as a stand-alone embedded processor, but it’s more often incorporated into a SmartNIC, a network interface controller that’s used as a key component in a next-generation server.

Other devices that claim to be DPUs miss significant elements of these three critical capabilities, which are fundamental to answering the question: What is a DPU?


For example, some vendors use proprietary processors that don’t benefit from the rich development and application infrastructure offered by the broad Arm CPU ecosystem.

Others claim to have DPUs but make the mistake of focusing solely on the embedded CPU to perform data path processing.

DPUs: A Focus on Data Processing

This isn’t competitive and doesn’t scale, because trying to beat the traditional x86 CPU with a brute force performance attack is a losing battle. If 100 Gigabit/sec packet processing brings an x86 to its knees, why would an embedded CPU perform better?
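The back-of-envelope arithmetic makes that point concrete. Assuming minimum-size Ethernet frames (64 bytes, plus 20 bytes of preamble and inter-frame gap on the wire) and a nominal 3 GHz core — an assumed figure, not a specific CPU — there are only about 20 clock cycles available per packet:

```python
# Back-of-envelope: why 100 Gb/s line-rate packet processing overwhelms a
# general-purpose CPU core. Minimum Ethernet frame is 64 bytes, plus 20
# bytes of preamble and inter-frame gap on the wire.
LINE_RATE_BPS = 100e9                       # 100 Gigabit/sec
WIRE_BYTES_PER_PACKET = 64 + 20             # frame + preamble/IFG
packets_per_sec = LINE_RATE_BPS / (WIRE_BYTES_PER_PACKET * 8)

CPU_HZ = 3e9                                # a nominal 3 GHz core
cycles_per_packet = CPU_HZ / packets_per_sec

print(f"{packets_per_sec / 1e6:.1f} Mpps")           # ~148.8 Mpps
print(f"{cycles_per_packet:.0f} cycles per packet")  # ~20 cycles
```

Twenty cycles is not enough to even take a cache miss, let alone run a virtual switch — which is why the data path belongs in dedicated acceleration hardware.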

Instead the network interface needs to be powerful and flexible enough to handle all network data path processing. The embedded CPU should be used for control path initialization and exception processing, nothing more.

At a minimum, there are 10 capabilities the network data path acceleration engines need to deliver:

  • Data packet parsing, matching, and manipulation to implement an open virtual switch (OVS)
  • RDMA data transport acceleration for Zero Touch RoCE
  • GPU-Direct accelerators to bypass the CPU and feed networked data directly to GPUs (both from storage and from other GPUs)
  • TCP acceleration including RSS, LRO, checksum, etc.
  • Network virtualization for VXLAN and Geneve overlays and VTEP offload
  • Traffic shaping “packet pacing” accelerator to enable multi-media streaming, content distribution networks, and the new 4K/8K Video over IP (RiverMax for ST 2110)
  • Precision timing accelerators for telco Cloud RAN such as 5T for 5G capabilities
  • Crypto acceleration for IPSEC and TLS, performed inline so all other accelerations remain in operation
  • Virtualization support for SR-IOV, VirtIO and para-virtualization
  • Secure Isolation: root of trust, secure boot, secure firmware upgrades, and authenticated containers and application life cycle management

These are just 10 of the acceleration and hardware capabilities that are critical to being able to answer yes to the question: “What is a DPU?”
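One item on the list above, VXLAN overlay processing, is something a DPU handles in hardware at line rate. Purely for illustration, here is the same 8-byte VXLAN header (RFC 7348) decoded in software — the VNI value is an arbitrary example:

```python
import struct

# VXLAN header (RFC 7348): 1 flag byte, 3 reserved bytes, a 24-bit VNI,
# and a final reserved byte -- 8 bytes in total.
def parse_vxlan(header):
    flags, vni_and_reserved = struct.unpack("!B3xI", header)
    if not flags & 0x08:                   # the I flag marks a valid VNI
        raise ValueError("VNI flag not set")
    return vni_and_reserved >> 8           # upper 24 bits hold the VNI

# A header carrying VNI 5000 (0x001388)
hdr = bytes([0x08, 0x00, 0x00, 0x00, 0x00, 0x13, 0x88, 0x00])
vni = parse_vxlan(hdr)                     # → 5000
```

A DPU performs this parse — plus the encapsulation, decapsulation and VTEP lookup around it — per packet, at the ~150 Mpps rates sketched earlier.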


Many so-called DPUs focus solely on delivering one or two of these functions.

The worst try to offload the data path onto proprietary processors.

While good for prototyping, this is a fool’s errand, because of the scale, scope and breadth of the data center.

Additional DPU-Related Resources

The post What’s a DPU? appeared first on The Official NVIDIA Blog.

May AI Help You? Square Takes Edge Off Conversational AI with GPUs

The next time a virtual assistant seems particularly thoughtful while rescheduling your appointment, you could thank it. Who knows, maybe it was built to learn from compliments. But you might actually have Gabor Angeli to thank.

The engineering manager and members of his team at Square Inc. published a paper on techniques for creating AI assistants that are sympathetic listeners. It described AI models that approach human performance in techniques like reflective listening — re-phrasing someone’s request so they feel heard.

These days his team is hard at work expanding Square Assistant from a virtual scheduler to a conversational AI engine driving all the company’s products.

“There is a huge surface area of conversations between buyers and sellers that we can and should help people navigate,” said Angeli, who will describe the work in a session available now with a free registration to GTC Digital.

Square, best known for its stylish payment terminals, offers small businesses a wide range of services from handling payroll to creating loyalty programs.

Hearing the Buzz on Conversational AI

A UC Berkeley professor’s intro to AI course lit a lasting fire in Angeli for natural-language processing more than a decade ago. He researched the emerging field in the university’s AI lab and eventually co-founded Eloquent, an NLP startup acquired by Square last May.

Six months later, Square Assistant was born as a virtual scheduler.

“We wanted to get something good but narrowly focused in front of customers quickly,” Angeli said. “We’re adding advanced features to Square Assistant now, and our aim is to get it into nearly everything we offer.”

Results so far are promising. Square Assistant can understand and provide help for 75 percent of customers’ questions, and it’s reducing appointment no-shows by 10 percent.

But to make NLP the talk of the town, the team faces knotty linguistic and technical challenges. For example, is “next Saturday” this coming one or the one after it?
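The “next Saturday” ambiguity is ultimately a policy choice the assistant has to make. Square’s actual resolution logic isn’t public, so the sketch below is purely illustrative: it treats “this Saturday” as the nearest upcoming one and “next Saturday” as the one a week later.

```python
from datetime import date, timedelta

# One illustrative policy for the ambiguity above: "this Saturday" is the
# nearest upcoming Saturday, and "next Saturday" skips one more week.
SATURDAY = 5                               # date.weekday(): Monday == 0

def this_saturday(today):
    days_ahead = (SATURDAY - today.weekday()) % 7 or 7
    return today + timedelta(days=days_ahead)

def next_saturday(today):
    return this_saturday(today) + timedelta(days=7)

today = date(2020, 3, 25)                  # a Wednesday
assert this_saturday(today) == date(2020, 3, 28)
assert next_saturday(today) == date(2020, 4, 4)
```

The hard part in production isn’t the date math — it’s deciding which policy matches what a given customer meant, which is where the learned models come in.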

What’s more, there’s a long tail of common customer queries. As the job description of Square Assistant expands from dozens to thousands of tasks, its neural network models expand and require more training.

“It’s exciting to see BERT [Bidirectional Encoder Representations from Transformers] do things we didn’t think were possible, like showing AI for reading comprehension. It amazes me this is possible, but these are much larger models that present challenges in the time it takes to train and deploy them,” he said.

GPUs Speed Up Inference, Training

Angeli’s team started training AI models at Eloquent on single NVIDIA GPUs running CUDA in desktop PCs. At Square, it uses dual-GPU desktops, supplemented by GPUs in the AWS cloud for large hyperparameter jobs.

In its tests, Square found inference jobs on average-size models run twice as fast on GPUs as on CPUs. Inference on large models such as RoBERTa runs 10x faster on the AWS GPU service than on CPUs.

The difference for training jobs is “even more stark,” he reported. “It’s hard to train a modern machine-learning model without a GPU. If we had to run deep learning on CPUs, we’d be a decade behind,” he added.

Faster training also helps motivate AI developers to iterate designs more often, resulting in better models, he said.

His team uses a mix of small, medium and large NLP models, applying pre-training tricks that proved their worth with computer vision apps. Long term, he believes engineers will find general models that work well across a broad range of tasks.

In the meantime, conversational AI is a three-legged race with developers like Angeli’s team crafting more efficient models as GPU architects design beefier chips.

“Half the work is in algorithm design, and half is in NVIDIA making hardware that’s more optimized for machine learning and runs bigger models,” he said.

The post May AI Help You? Square Takes Edge Off Conversational AI with GPUs appeared first on The Official NVIDIA Blog.

Working Remotely: Connecting to Your Office Workstation

With so many people working from home amid the COVID-19 outbreak, staying productive can be challenging.

At NVIDIA, some of us have RTX laptops and remote-working capabilities powered by our virtual GPU software via on-prem servers and the cloud. To help support the many other businesses with GPUs in their servers, we recently made vGPU licenses free for up to 500 users for 90 days to explore their virtualization options.

But many still require access to physical Quadro desktop workstations due to specific hardware configurations or data requirements. And we know this situation is hardly unique.

Many designers, engineers, artists and architects have Quadro RTX mobile workstations that are on par with their desktop counterparts, which helps them stay productive anywhere. However, a vast number of professionals don’t have access to their office-based workstations — with multiple high-end GPUs, large memory and storage, as well as applications and data.

These workstations are critical for keeping everything from family firms to multinational corporations going. The situation has forced IT teams to explore different ways of connecting remotely to office workstations.

Getting Started: Tools for Remote Connections

Several publicly available remote-working tools can help you get going quickly. For details on features and licensing, contact the respective providers.

Managing Access, Drivers and Reboots

Once you’re up and running, keep these considerations in mind:

Give yourself a safety net when working on a remote system 

There are times when your tools can stop working, so it’s a good idea to have a safety net. Always install a VNC server on the machine, no matter what remote access tool you use. It’s also a good idea to enable access to Microsoft Remote Desktop as another option. These run quietly in the background, but are ready if you need them in an emergency.

Updating your driver remotely

We recommend you use a VNC connection to upgrade your drivers. Changing the driver often changes the parts of the driver that the remote access tools are using, so you often lose the connection. VNC doesn’t hook into the driver at a low level, so it keeps working as the old driver is swapped out for the new one. Once the driver is updated, you can go back to your other remote access tools.

Rebooting your machine remotely

Normally you can reboot with the Windows menus. Give the system a few minutes to restart and then log back in. If your main remote-working tools have stopped functioning, try a VNC connection. You can also restart from a PowerShell window or command prompt on your local machine with the command: shutdown /r /t 0 /m \\[machine-name]
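The shutdown command above can also be wrapped in a small helper for scripting. The machine name below is a placeholder, and the actual call is left commented out since it only makes sense on a Windows machine with the right credentials:

```python
import subprocess

# Builds the Windows remote-restart command from the section above:
# /r = restart, /t 0 = no delay, /m = target machine (UNC name).
def restart_command(machine_name):
    return ["shutdown", "/r", "/t", "0", "/m", "\\\\" + machine_name]

cmd = restart_command("office-ws-01")      # "office-ws-01" is a placeholder
# subprocess.run(cmd, check=True)          # uncomment on Windows to run it
```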

App-Specific Resources

Several software makers with applications for professionals working in the manufacturing, architecture, and media and entertainment industries have provided instructions on using their applications from home.

Where to Get Help

Given the inherent variability in working from home, there’s no one-size-fits-all solution. If you run into technical issues and have questions, feel free to contact us. We’ll do our best to help.

The post Working Remotely: Connecting to Your Office Workstation appeared first on The Official NVIDIA Blog.

With DLSS 2.0, AI Continues to Revolutionize Gaming

From in-game physics to animation simulation to AI-assisted broadcasting, artificial intelligence is revolutionizing gaming.

DLSS 2.0, releasing this week in Control and MechWarrior 5: Mercenaries, represents another major advance for AI in gaming.

DLSS 2.0 — A Big Leap in AI Rendering

Powered by dedicated AI processors on GeForce RTX GPUs called Tensor Cores, DLSS 2.0 is an improved deep learning neural network that boosts frame rates while generating beautiful game images.

It gives gamers the performance headroom to maximize ray tracing settings and increase output resolutions.

DLSS 2.0 offers several key enhancements over the original version:

  • Superior Image Quality — DLSS 2.0 offers image quality comparable to native resolution while only having to render one quarter to one half of the pixels. It employs new temporal feedback techniques for sharper image details and improved stability from frame to frame.
  • Great Scaling Across All RTX GPUs and Resolutions — a new AI model more efficiently uses Tensor Cores to execute 2x faster than the original, improving frame rates and removing restrictions on supported GPUs, settings and resolutions.
  • One Network for All Games — The original DLSS required training the AI network for each new game. DLSS 2.0 trains using non-game-specific content, delivering a generalized network that works across games. This means faster game integrations, and ultimately more DLSS games.
  • Customizable Options — DLSS 2.0 offers users three image quality modes (Quality, Balanced and Performance) that control render resolution, with Performance mode now enabling up to a 4x super resolution. (i.e. 1080p → 4K). This means more user choice, and even bigger performance boosts.
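The one scaling factor the list above pins down exactly is Performance mode’s 4x super resolution (1080p → 4K), i.e. half the pixels on each axis. The Quality and Balanced factors aren’t specified here, so this small sketch covers Performance mode only:

```python
# DLSS 2.0 Performance mode renders at half the output resolution on each
# axis, then upscales: a 4x super resolution in total pixels.
def performance_render_res(out_w, out_h):
    return out_w // 2, out_h // 2

assert performance_render_res(3840, 2160) == (1920, 1080)   # 4K from a 1080p render
scale = (3840 * 2160) / (1920 * 1080)                        # 4.0x the pixels
```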

In addition to Control and MechWarrior 5, DLSS 2.0 has delivered big performance boosts to Deliver Us The Moon and Wolfenstein: Youngblood.

DLSS 2.0 is now also available to Unreal Engine 4 developers through the DLSS Developer Program that will accelerate deployment in one of the world’s most popular game engines.

RTX Momentum Builds Across the Ecosystem

DLSS is one of several major graphics innovations, including ray tracing and variable rate shading, introduced in 2018 with the launch of our NVIDIA RTX GPUs.

Since then, more than 15 million NVIDIA RTX GPUs have been sold. More than 30 major games have been released or announced with ray tracing or NVIDIA DLSS powered by NVIDIA RTX. And ray tracing has been adopted by all major APIs and game engines.

That momentum continues with Microsoft’s announcement last week of DirectX 12 Ultimate, as well as NVIDIA RTX Global Illumination SDK, new tool support for the Vulkan graphics API, and new Photoshop texture tool plugins.

DirectX 12 Ultimate

Last week, Microsoft unveiled DirectX 12 Ultimate, the latest version of the widely used DirectX graphics standard.

DirectX 12 Ultimate codifies NVIDIA RTX’s innovative technologies, including ray tracing, variable rate shading, mesh shading, and texture space shading, as the standard for multi-platform, next-gen games.

Game developers can take full advantage of all these technologies knowing they’ll be compatible with PCs equipped with NVIDIA RTX GPUs and Microsoft’s upcoming Xbox Series X console.

RTX Global Illumination SDK 

The NVIDIA RTX Global Illumination SDK, released Monday, gives developers a scalable solution to implement beautiful, ray-traced indirect lighting in games while still achieving performance targets.

The RTX GI SDK is supported on any DirectX raytracing-enabled GPU. It’s an ideal starting point to bring the benefits of ray tracing to more games.

Vulkan Game Developers Get New Tools

NVIDIA on Monday also added support for the Vulkan graphics API to two of its most popular game development tools.

Nsight Aftermath, which provides precise information on where GPU crashes occur and why, is now available for the first time for Vulkan.

NVIDIA will also provide Vulkan developers with GPU Trace, a low-level profiler in Nsight Graphics that provides hardware unit metrics and precise timing information.

NVIDIA Texture Tools Exporter

The NVIDIA Texture Tools Exporter allows users to create highly compressed texture files that stay small on disk and in memory.

It allows game and app developers to use higher quality textures in their applications, and provides users a smaller, faster, download size.

It’s available as a standalone tool or as an Adobe Photoshop plug-in for game developers and texture artists.

More Advancements for Gamers

Add it all up: DLSS 2.0, support for RTX technologies with DirectX 12 Ultimate, the introduction of the NVIDIA RTX Global Illumination SDK, support for the Vulkan graphics API in more NVIDIA developer tools, as well as more texture tools. The result is more advancements in the hands of more gamers, developers, and creators than ever before.

The post With DLSS 2.0, AI Continues to Revolutionize Gaming appeared first on The Official NVIDIA Blog.

Virtually Free GTC: 30,000 Developers and AI Researchers to Access Hundreds of Hours of No-Cost Sessions at GTC Digital

Just three weeks ago, we announced plans to take GTC online due to the COVID-19 crisis.

Since then, a small army of researchers, partners, customers and NVIDIA employees has worked remotely to produce GTC Digital, which kicks off this week.

GTC typically packs hundreds of hours of talks, presentations and conversations into a five-day event in San Jose.

Our goal with GTC Digital is to bring some of the best aspects of this event to a global audience, and make it accessible for months.

Hundreds of our speakers — among the most talented, experienced scientists and researchers in the world — agreed to participate. Apart from the instructor-led, hands-on workshops and training sessions, which require a nominal fee, we’re delighted to bring this content to the global community at no cost. And we’ve incorporated new platforms to facilitate interaction and engagement.

Accelerating Blender Cycles with NVIDIA RTX: Blender is an open-source 3D software package that comes with the Cycles renderer. Cycles is already a GPU-enabled path tracer, now supercharged with the latest generation of RTX GPUs. Furthering the rendering speed, RTX AI features such as the OptiX Denoiser infer rendering results for a truly interactive ray-tracing experience.

We provided refunds to those who purchased a GTC 2020 pass, and those tickets have been converted to GTC Digital passes. Passholders just need to log in with GTC 2020 credentials to get started. Anyone else can attend with free registration.

Most GTC Digital content is for a technical audience of data scientists, researchers and developers. But we also offer high-level talks and podcasts on various topics, including women in data science, AI for business and responsible AI.

What’s in Store at GTC Digital

The following activities will be virtual events that take place at a specific time (early registration recommended). Participants will be able to interact in real time with the presenters.

Training Sessions:

  • Seven full-day, instructor-led workshops, from March 25 to April 2, on data science, deep learning, CUDA, cybersecurity, AI for predictive maintenance, AI for anomaly detection, and autonomous vehicles. Each full-day workshop costs $79.
  • Fifteen 2-hour training sessions running April 6-10, on various topics, including autonomous vehicles, CUDA, conversational AI, data science, deep learning inference, intelligent video analytics, medical imaging, recommendation systems, deep learning training at scale, and using containers for HPC. Each two-hour instructor-led session costs $39. 

Live Webinars: Seventeen 1-hour sessions, from March 24-April 8, on various topics, including data science, conversational AI, edge computing, deep learning, IVA, autonomous machines and more. Live webinars will be converted to on-demand content and posted within 48 hours. Free. 

Connect with Experts: Thirty-eight 1-hour sessions, from March 25-April 10, where participants can chat one-on-one with NVIDIA experts to get answers in a virtual classroom. Topics include conversational AI, recommender systems, deep learning training and autonomous vehicle development. Free. 

The following activities will be available on demand:

Recorded Talks: More than 150 recorded presentations with experts from leading companies around the world, speaking on a variety of topics such as computer vision, edge computing, conversational AI, data science, CUDA, graphics and ray tracing, medical imaging, virtualization, weather modeling and more. Free. 

Tech Demos: We’ll feature amazing demo videos, narrated by experts, highlighting how NVIDIA GPUs are accelerating creative workflows, enabling analysis of massive datasets and helping advance research. Free. 

AI Podcast: Several half-hour interviews with leaders across AI and accelerated computing will be posted over the next four weeks. Among them: Kathie Baxter, of Salesforce, on responsible AI; Stanford Professor Margot Gerritsen on women in data science and how data science intersects with AI; Ryan Coffee, of the SLAC National Accelerator Lab, on how deep learning is advancing physics research; and Richard Loft, of the National Center of Atmospheric Research, on how AI is helping scientists better model climate change. Free.

Posters: A virtual gallery of 140+ posters from researchers around the world showing how they are solving unique problems with GPUs. Registrants will be able to contact and share feedback with researchers. Free. 

For the Einsteins and Da Vincis of Our Time

The world faces extraordinary challenges now, and the scientists, researchers and developers focused on solving them need extraordinary tools and technology. Our goal with GTC has always been to help the world’s leading developers — the Einsteins and Da Vincis of our time — solve difficult challenges with accelerated computing. And that’s still our goal with GTC Digital.

Whether you work for a small startup or a large enterprise, in the public or private sector, wherever you are, we encourage you to take part, and we look forward to hearing from you.

The post Virtually Free GTC: 30,000 Developers and AI Researchers to Access Hundreds of Hours of No-Cost Sessions at GTC Digital appeared first on The Official NVIDIA Blog.

Silicon Valley High Schooler Takes Top Award in Jetson Nano Competition

Over the phone, Andrew Bernas leaves the impression he’s a veteran Silicon Valley software engineer focused on worldwide social causes with a lot of heart.

He’s in fact a 16-year-old high school student, and he recently won first place in the AI for Social Good category of the NVIDIA-supported AI at the Edge Challenge.

On an online community of developers and hobbyists, he and others began competing in October, building AI projects using the NVIDIA Jetson Nano Developer Kit.

An Eagle Scout who leads conservation projects, he wanted to use AI to solve a big problem.

“I got the idea to use Jetson Nano processing power to compute a program to recognize handwritten and printed text to allow those who are visually impaired or disabled to have access to reading,” said Bernas, a junior at Palo Alto High School.

He was among 10 winners in the competition, which drew more than 2,500 registrants from 35 countries for a share of NVIDIA supercomputing prizes. Bernas used NVIDIA’s Getting Started With Jetson Nano Deep Learning Institute course to begin his project.

AI on Reading

Bernas’s winning entry, Reading Eye for the Blind with NVIDIA Jetson Nano, is a text-to-voice AI app and a device prototype to aid the visually impaired.

An estimated 285 million people worldwide are visually impaired — that is, have moderate to severe vision loss — and 39 million of them are blind, according to the World Health Organization.

His device, which can be seen in the video below, allows people to place books or handwritten text to be scanned by a camera and converted to voice.
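Bernas's code isn't published in the article, but the capture-recognize-speak flow it describes can be sketched as a simple pipeline. Everything below is hypothetical — the function names and stub stages are illustrative, and on a real Jetson Nano the recognize step would call an OCR model and the speak step a text-to-speech engine:

```python
# Hypothetical sketch of a camera-to-voice reading pipeline.
# The stage implementations are stubs; a real device would plug in
# a camera driver, an OCR model and a TTS engine.

def read_aloud(capture, recognize, speak):
    """Run one capture -> OCR -> speech cycle and return the text read."""
    image = capture()          # grab a frame from the camera
    text = recognize(image)    # OCR: image -> string
    if text.strip():           # speak only if something was recognized
        speak(text)
    return text

# Stub stages standing in for real hardware and models:
def fake_camera():
    return "page-image"

def fake_ocr(image):
    return "Chapter 1: The Journey Begins" if image == "page-image" else ""

spoken = []
read_aloud(fake_camera, fake_ocr, spoken.append)
```

Structuring the loop around injectable stages like this keeps the hardware-specific parts swappable, which is convenient when iterating on a prototype.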

“Part of the inspiration was for creating a solution for my grandmother and other people with vision loss,” said Bernas. “She’s very proud of me.”

DIY Jetson Education

Bernas enjoys do-it-yourself building. His living room stores some of the more than 20 radio-controlled planes and drones he has designed and built. He also competes on his school’s Science Olympiad team in electrical engineering, circuits, aeronautics and physics events.

Between high school courses, online programs and forums, Bernas has learned to use HTML, CSS, Python, C, C++, Java and JavaScript. For him, developing models using the NVIDIA Jetson Nano Developer Kit was a logical next step in his education for DIY building.

He plans to develop his text-to-voice prototype to include Hindi, Mandarin, Russian and Spanish. Meanwhile, he has his sights on AI for robotics and autonomy as a career path.

“Now that machine learning is so big, I’m planning to major in something engineering-related with programming and machine learning,” he said of his college plans.


NVIDIA Jetson Nano makes adding AI easier and more accessible to makers, self-taught developers and embedded tech enthusiasts. Learn more about Jetson Nano and view more community projects to get started.

The post Silicon Valley High Schooler Takes Top Award in Jetson Nano Competition appeared first on The Official NVIDIA Blog.

Farm to Frameworks: French Duo Fork Into Sustainable Farming with Robotics

What began as two classmates getting their hands dirty at a farm in the French Alps has hatched a Silicon Valley farming startup building robots in Detroit and operating in one of the world’s largest agriculture regions.

San Francisco-based FarmWise offers farmers an AI-driven robotic system for more sustainable farming methods, and to address a severe labor shortage. Its machines can remove weeds without pesticides — a holy grail for organic farmers in the Golden State and elsewhere.

FarmWise’s robotic farming machine runs 10 cameras, capturing images of the crops and weeds it passes over and feeding them through image recognition models. The machine sports five NVIDIA DRIVE GPUs to help it navigate and make split-second decisions on weed removal.

The company is operating in two of California’s agricultural belts, Salinas and San Luis Obispo, where its machines are deployed in farmers’ fields.

“We don’t use chemicals at all — we use blades and automate the process,” said Sebastien Boyer, the company’s CEO. “We’re working with a few of the largest vegetable growers in California.”

AI has played an increasing role in agriculture as researchers, startups and public companies alike are plowing it for environmental and business benefits.

FarmWise recently landed $14.5 million in Series A funding to further develop its machines.

Robotics for Weed Removal

It wasn’t an easy start. Boyer and Thomas Palomares, computer science classmates from France’s École Polytechnique, decided to work on Palomares’s family farm in the Alps to try applying big data to farming. Their initial goal was to help farmers use information to work more sustainably while also improving crop yields. It didn’t pan out as planned.

The two discovered farms lacked the equipment to support sustainable methods, so they shelved their idea and instead packed their bags for grad school in the U.S. After that, the friends came back to their concept but with a twist: using AI-driven robotic machinery.

“We decided to move our focus to robotics to build new types of agriculture machines that are better-suited to take advantage of data,” Boyer said. “Weed removal is our first application.”

In April, FarmWise began manufacturing its farm machines. It tapped custom automotive parts maker Roush, which serves Detroit and has built self-driving vehicle prototypes for the likes of Google.

FarmWise for Labor Shortage

Farm labor is in short supply. A California Farm Bureau Federation survey of more than 1,000 farmers found that 56 percent were unable to hire sufficient labor to tend their crops in the past five years.

Of those surveyed, 37 percent said they had to change their cultivation practices, including by reducing weeding and pruning. More than half were already using labor-saving technologies. Not to mention that weeding is often back-breaking work.

FarmWise helps fill this void. The company’s automated weeders can each do the labor of 10 workers, and they can work around the clock autonomously.

“We’re filling the gaps of missing people, and those tasks that aren’t getting done — and we’re offering an alternative to chemical herbicides,” said Boyer, adding that weed management is crucial for crop yields.

When farms can’t get back-breaking weeding covered, they turn to herbicides as an alternative. FarmWise can help to reduce that. Plus, there’s a financial incentive: medium-size California farms can expect to save as much as $500,000 a year on pesticides and other costs by using FarmWise, he said.

Training Autonomous Farming Machines

To help farmers, FarmWise’s AI recognizes the difference between weeds and crops, and its machines can make 25 cuts per second to remove weeds. Its NVIDIA GPU-powered image recognition networks recognize 10 different crops and can spot the weeds typical of California and Arizona.
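FarmWise hasn’t published its control code, but the stated rate of 25 cuts per second implies a hard real-time budget: at most 40 milliseconds to classify a detection and fire a blade. A hypothetical per-frame decision might look like the sketch below, where the crop list, labels and confidence threshold are all illustrative, not FarmWise’s:

```python
# Hypothetical per-frame weed-removal decision, sized to the article's
# stated rate of 25 cuts per second (a 40 ms budget per decision).

BUDGET_S = 1.0 / 25              # 0.04 s available per cut decision
CROPS = {"lettuce", "broccoli"}  # illustrative subset of the 10 crops

def decide(detection):
    """Return True if the blade should fire on this detection."""
    label, confidence = detection
    # Cut only confident detections that are not a known crop.
    return label not in CROPS and confidence >= 0.8

# One frame's (label, confidence) detections from the recognition model:
frame = [("lettuce", 0.97), ("pigweed", 0.91), ("purslane", 0.55)]
cuts = [d for d in frame if decide(d)]
```

The confidence gate errs on the side of not cutting: an uncertain detection is left alone rather than risking a crop, which matters when a wrong cut destroys yield.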

“As we operate on fields, we continuously capture data, label that data and use it to improve our algorithms,” said Boyer.

FarmWise’s weeding machines are geo-fenced by uploading maps of the fields. The onboard cameras can be used as an override for safety to stop the machines.
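The article doesn’t detail how the geofence check is implemented; a standard way to enforce one from an uploaded field map is a point-in-polygon test on the machine’s GPS fix, sketched here with the classic ray-casting algorithm and a made-up rectangular field boundary:

```python
# Hypothetical geofence check: is the machine's position inside the
# uploaded field boundary? Uses the classic ray-casting algorithm.

def inside(point, polygon):
    """Return True if the (x, y) point lies inside the polygon."""
    x, y = point
    n = len(polygon)
    crossings = 0
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count edges a rightward ray from the point would cross.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                crossings += 1
    return crossings % 2 == 1  # odd crossing count means inside

# Illustrative field boundary in local metres, not real survey data:
field = [(0, 0), (100, 0), (100, 60), (0, 60)]
```

A machine whose fix falls outside the polygon would halt, with the onboard cameras providing the independent safety override the article mentions.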

The 30-person company attracted recruits to its sustainable farming mission from SpaceX, Tesla, Cruise and Facebook as well as experts in farm machine design and operations, said Boyer.

Developing machines for farms, said Boyer, requires spending time in the field to understand the needs of farmers and translating their ideas into technology.

“We’re a group of engineers with very close ties to the farming community,” he said.

The post Farm to Frameworks: French Duo Fork Into Sustainable Farming with Robotics appeared first on The Official NVIDIA Blog.

AI Meets 5G at the Edge: The Innovation Center at MWC 2020 in Barcelona

AI is alive at the edge of the network, where it’s already transforming everything from car makers to supermarkets. And we’re just getting started.

NVIDIA’s AI Edge Innovation Center, a first for this year’s Mobile World Congress (MWC) in Barcelona, will put attendees at the intersection of AI, 5G and edge computing. There, they can hear about best practices for AI at the edge and get an update on how NVIDIA GPUs are paving the way to better, smarter 5G services.

It’s a story that’s moving fast.

AI was born in the cloud to process the vast amounts of data needed for jobs like recommending new products and optimizing news feeds. But most enterprises interact with their customers and products in the physical world at the edge of the network — in stores, warehouses and smart cities.

The need to sense, infer and act in real time as conditions change is driving the next wave of AI adoption at the edge. That’s why a growing list of forward-thinking companies are building their own AI capabilities using the NVIDIA EGX edge-computing platform.

Walmart, for example, built a smart supermarket it calls its Intelligent Retail Lab. Jakarta uses AI in a smart city application to manage its vehicle registration program. BMW and Procter & Gamble automate inspection of their products in smart factories. They all use NVIDIA EGX along with our Metropolis application framework for video and data analytics.

For conversational AI, the NVIDIA Jarvis developer kit enables voice assistants geared to run on embedded GPUs in smart cars or other systems. WeChat, the world’s most popular smartphone app, accelerates conversational AI using NVIDIA TensorRT software for inference.

All these software stacks ride on our CUDA-X libraries, tools, and technologies that run on an installed base of more than 500 million NVIDIA GPUs.

Carriers Make the Call

At MWC Los Angeles this year, NVIDIA founder and CEO Jensen Huang announced Aerial, software that rides on the EGX platform to let telecommunications companies harness the power of GPU acceleration.

Ericsson’s Fredrik Jejdling, executive vice president and head of business area Networks, joined NVIDIA CEO Jensen Huang on stage at MWC LA to announce their collaboration.

With Aerial, carriers can both increase the spectral efficiency of their virtualized 5G radio-access networks and offer new AI services for smart cities, smart factories, cloud gaming and more — all on the same computing platform.

In Barcelona, NVIDIA and partners including Ericsson will give an update on how Aerial will reshape the mobile edge network.

Verizon is already using NVIDIA GPUs at the edge to deliver real-time ray tracing for AR/VR applications over 5G networks.

It’s one of several ways telecom applications can be taken to the next level with GPU acceleration. Imagine having the ability to process complex AI jobs on the nearest base station with the speed and ease of making a cellular call.

Your Dance Card for Barcelona

For a few days in February, we will turn our innovation center — located at Fira de Barcelona, Hall 4 — into a virtual university on AI with 5G at the edge. Attendees will get a world-class deep dive on this strategic technology mashup and how companies are leveraging it to monetize 5G.

Sessions start Monday morning, Feb. 24, and include AI customer case studies in retail, manufacturing and smart cities. Afternoon talks will explore consumer applications such as cloud gaming, 5G-enabled cloud AR/VR and AI in live sports.

We’ve partnered with the organizers of MWC on applied AI sessions on Tuesday, Feb. 25. These presentations will cover topics like federated learning, an emerging technique for collaborating on the development and training of AI models while protecting data privacy.

Wednesday’s schedule features three roundtables where attendees can meet executives working at the intersection of AI, 5G and edge computing. The week also includes two instructor-led sessions from the NVIDIA Deep Learning Institute, which trains developers on best practices.

See Demos, Take a Meeting

For a hands-on experience, check out our lineup of demos based on the NVIDIA EGX platform. These will highlight applications such as object detection in a retail setting, ways to unsnarl traffic congestion in a smart city and our cloud-gaming service GeForce Now.

To learn more about the capabilities of AI, 5G and edge computing, check out the full agenda and book an appointment here.

The post AI Meets 5G at the Edge: The Innovation Center at MWC 2020 in Barcelona appeared first on The Official NVIDIA Blog.

BERT Does Europe: AI Language Model Learns German, Swedish

BERT is at work in Europe, tackling natural-language processing jobs in multiple industries and languages with help from NVIDIA’s products and partners.

The AI model formally known as Bidirectional Encoder Representations from Transformers debuted just last year as a state-of-the-art approach to machine learning for text. Though new, BERT is already finding use in avionics, finance, semiconductor and telecom companies on the continent, said developers optimizing it for German and Swedish.

“There are so many use cases for BERT because text is one of the most common data types companies have,” said Anders Arpteg, head of research for Peltarion, a Stockholm-based developer that aims to make the latest AI techniques such as BERT inexpensive and easy for companies to adopt.

Natural-language processing will outpace today’s AI work in computer vision because “text has way more apps than images — we started our company on that hypothesis,” said Milos Rusic, chief executive of deepset in Berlin. He called BERT “a revolution, a milestone we bet on.”

Deepset is working with PricewaterhouseCoopers to create a system that uses BERT to help strategists at a chip maker query piles of annual reports and market data for key insights. In another project, a manufacturing company is using NLP to search technical documents to speed maintenance of its products and predict needed repairs.

Peltarion, a member of NVIDIA’s Inception program that nurtures startups with access to its technology and ecosystem, packed support for BERT into its tools in November. It is already using NLP to help a large telecom company automate parts of its process for responding to product and service requests. And it’s using the technology to let a large market research company more easily query its database of surveys.

Work in Localization

Peltarion is collaborating with three other organizations on a three-year, government-backed project to optimize BERT for Swedish. Interestingly, a new model from Facebook called XLM-R suggests training on multiple languages at once could be more effective than optimizing for just one.

“In our initial results, XLM-R, which Facebook trained on 100 languages at once, outperformed a vanilla version of BERT trained for Swedish by a significant amount,” said Arpteg, whose team is preparing a paper on their analysis.

Nevertheless, the group hopes to have a first version of a strong Swedish BERT model ready before summer, said Arpteg, who headed an AI research group at Spotify before joining Peltarion three years ago.

An analysis by deepset of its German version of BERT.

In June, deepset released as open source a version of BERT optimized for German. Although its performance is only a couple of percentage points ahead of the original model, two winners of an annual NLP competition in Germany used the deepset model.

Right Tool for the Job

BERT also benefits from optimizations for specific tasks such as text classification, question answering and sentiment analysis, said Arpteg. Peltarion researchers plan to publish in 2020 the results of an analysis of the gains from tuning BERT for fields with their own vocabularies, such as medicine and law.

The question-answering task has become so strategic for deepset that it created Haystack, a version of its FARM transfer-learning framework built to handle the job.

In hardware, the latest NVIDIA GPUs are among the favorite tools both companies use to tame big NLP models. That’s not surprising given NVIDIA recently broke records lowering BERT training time.

“The vanilla BERT has 100 million parameters and XLM-R has 270 million,” said Arpteg, whose team recently purchased systems using NVIDIA Quadro and TITAN GPUs with up to 48GB of memory. It also has access to NVIDIA DGX-1 servers because “for training language models from scratch, we need these super-fast systems,” he said.

More memory is better, said Rusic, whose German BERT models weigh in at 400MB. Deepset taps into NVIDIA V100 Tensor Core GPUs on cloud services and uses another NVIDIA GPU locally.
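The model sizes quoted in this section follow from simple arithmetic: stored as 32-bit floats, a model’s raw weight footprint is roughly its parameter count times 4 bytes. A back-of-the-envelope check, assuming fp32 weights and the rounded parameter counts given above:

```python
# Rough fp32 weight footprint: parameters x 4 bytes, reported in MB.

def footprint_mb(params, bytes_per_param=4):
    """Approximate on-disk/in-memory size of a model's weights in MB."""
    return params * bytes_per_param / 1e6

bert_mb = footprint_mb(100_000_000)  # "vanilla BERT has 100 million parameters"
xlmr_mb = footprint_mb(270_000_000)  # XLM-R's 270 million parameters
```

The 100-million-parameter figure works out to about 400MB, matching the size Rusic cites for deepset’s German BERT models, while XLM-R at 270 million parameters needs roughly a gigabyte — which is why GPU memory capacity matters so much for this work.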

The post BERT Does Europe: AI Language Model Learns German, Swedish appeared first on The Official NVIDIA Blog.