Intel Highlighted Why NVIDIA Tensor Core GPUs Are Great for Inference

It’s not every day that one of the world’s leading tech companies highlights the benefits of your products.

Intel did just that last week, comparing the inference performance of two of their most expensive CPUs to NVIDIA GPUs.

To achieve the performance of a single mainstream NVIDIA V100 GPU, Intel combined two power-hungry, highest-end CPUs with an estimated price of $50,000-$100,000, according to Anandtech. Intel’s performance comparison also highlighted the clear advantage of NVIDIA T4 GPUs, which are built for inference. When compared to a single highest-end CPU, they’re not only faster but also 7x more energy-efficient and an order of magnitude more cost-efficient.

Inference performance is crucial, as AI-powered services are growing exponentially. Intel's latest Cascade Lake CPUs include new instructions that improve inference, making them the best CPUs for the task. Even so, they're hardly competitive with NVIDIA's deep learning-optimized Tensor Core GPUs.

Inference (also known as prediction), in simple terms, is the "pattern recognition" that a neural network does after being trained. It's where AI models provide intelligent capabilities in applications, like detecting fraud in financial transactions, conversing in natural language to search the internet, and using predictive analytics to fix manufacturing breakdowns before they happen.

While most AI inference today happens on CPUs, NVIDIA Tensor Core GPUs are rapidly being adopted across the full range of AI models. Tensor Cores, a breakthrough innovation, have transformed NVIDIA GPUs into highly efficient and versatile AI processors. They perform multi-precision calculations at high throughput, providing the optimal precision for diverse AI models, and are supported automatically in popular AI frameworks.

It’s why a growing list of consumer internet companies — Microsoft, Paypal, Pinterest, Snap and Twitter among them — are adopting GPUs for inference.

Compelling Value of Tensor Core GPUs for Computer Vision

First introduced with the NVIDIA Volta architecture, Tensor Core GPUs are now in their second generation with NVIDIA Turing. Tensor Cores perform extremely efficient computations for AI for a full range of precision — from 16-bit floating point with 32-bit accumulate to 8-bit and even 4-bit integer operations with 32-bit accumulate.

They’re designed to accelerate both AI training and inference, and are easily enabled using automatic mixed precision features in the TensorFlow and PyTorch frameworks. Developers can achieve 3x training speedups by adding just two lines of code to their TensorFlow projects.
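
As a rough illustration, assuming a TensorFlow 1.x training script (API names as of TF 1.14; details vary by framework version), those two lines amount to wrapping the optimizer so that eligible operations run in FP16 on Tensor Cores with FP32 accumulation and automatic loss scaling:

```python
import tensorflow as tf

# Build the usual optimizer...
opt = tf.train.AdamOptimizer(learning_rate=1e-3)

# ...then wrap it so eligible ops run in FP16 on Tensor Cores,
# with FP32 accumulation and automatic loss scaling.
opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt)
```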

For computer vision, as the table below shows, the NVIDIA T4 is faster, 7x more power-efficient and far more affordable when compared processor to processor. The NVIDIA V100, designed for AI training, is 2x faster and more than 2x as energy-efficient as a CPU on inference.

Table 1: Inference on ResNet-50.

|  | Two-Socket Intel Xeon 9282 | NVIDIA V100 (Volta) | NVIDIA T4 (Turing) |
| --- | --- | --- | --- |
| ResNet-50 Inference (images/sec) | 7,878 | 7,844 | 4,944 |
| # of Processors | 2 | 1 | 1 |
| Total Processor TDP | 800 W | 350 W | 70 W |
| Energy Efficiency (using TDP) | 10 img/sec/W | 22 img/sec/W | 71 img/sec/W |
| Performance per Processor (images/sec) | 3,939 | 7,844 | 4,944 |
| GPU Performance Advantage | 1.0 (baseline) | 2.0x | 1.3x |
| GPU Energy-Efficiency Advantage | 1.0 (baseline) | 2.3x | 7.2x |

Source: Intel Xeon performance; NVIDIA GPU performance
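
As a quick sanity check, the table's per-processor and efficiency ratios can be re-derived from its raw numbers (this is my arithmetic on the published figures, not an additional benchmark):

```python
# Derive the table's ratios from its raw numbers.
cpu_imgs, v100_imgs, t4_imgs = 7878, 7844, 4944   # images/sec
cpu_tdp, v100_tdp, t4_tdp = 800, 350, 70          # watts

cpu_per_proc = cpu_imgs / 2                        # 3,939 images/sec per CPU
print(v100_imgs / cpu_per_proc)                    # ~2.0x V100 performance advantage
print(t4_imgs / cpu_per_proc)                      # ~1.3x T4 performance advantage
print((t4_imgs / t4_tdp) / (cpu_imgs / cpu_tdp))   # ~7.2x T4 energy-efficiency advantage
```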

Compelling Value of Tensor Core GPUs for Understanding Natural Language

AI has been moving at a frenetic pace. This rapid progress is fueled by teams of AI researchers and data scientists who continue to innovate and create highly accurate and exponentially more complex AI models.

Over four years ago, computer vision was among the first applications where AI from Microsoft was able to perform at superhuman accuracy using models like ResNet-50. Today’s advanced models perform even more complex tasks like understanding language and speech at superhuman accuracy. BERT, a highly complex AI model open-sourced by Google last year, can now understand prose and answer questions with superhuman accuracy.

A measure of the complexity of AI models is the number of parameters they have. Parameters in an AI model are the variables that store information the model has learned. While ResNet-50 has 25 million parameters, BERT has 340 million, a 13x increase.
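
As an illustration of how such counts are obtained, the snippet below tallies the trainable parameters of the ResNet-50 implementation shipped with torchvision; applying the same helper to a BERT-Large implementation yields roughly 340 million.

```python
from torchvision.models import resnet50

def count_parameters(model):
    # Sum the element counts of all trainable weight tensors.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

print(count_parameters(resnet50()))  # roughly 25 million for ResNet-50
```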

On an advanced model like BERT, a single NVIDIA T4 GPU is 59x faster than a dual-socket CPU server and 240x more power-efficient.

Table 2: Inference on BERT. Workload: Fine-Tune Inference on BERT Large dataset.

|  | Dual Intel Xeon Gold 6240 | NVIDIA T4 (Turing) |
| --- | --- | --- |
| BERT Inference, Question-Answering (sentences/sec) | 2 | 118 |
| Processor TDP | 300 W (150 W x 2) | 70 W |
| Energy Efficiency (using TDP) | 0.007 sentences/sec/W | 1.7 sentences/sec/W |
| GPU Performance Advantage | 1.0 (baseline) | 59x |
| GPU Energy-Efficiency Advantage | 1.0 (baseline) | 240x |

CPU server: Dual-socket Xeon Gold 6240@2.6GHz; 384GB system RAM; FP32 precision; with Intel’s TF Docker container v. 1.13.1. Note: Batch-size 4 results yielded the best CPU score.

GPU results: T4: Dual-socket Xeon Gold 6240@2.6GHz; 384GB system RAM; mixed precision; CUDA 10.1.105; NCCL 2.4.3, cuDNN 7.5.0.56, cuBLAS 10.1.105; NVIDIA driver 418.67; on TensorFlow using automatic mixed precision and XLA compiler; batch-size 4 and sequence length 128 used for all platforms tested. 

Compelling Value of Tensor Core GPUs for Recommender Systems

Another key usage of AI is in recommendation systems, which are used to provide relevant content recommendations on video sharing sites, news feeds on social sites and product recommendations on e-commerce sites.

Neural collaborative filtering, or NCF, is a recommender system that uses the prior interactions of users with items to provide recommendations. When running inference on the NCF model that is part of the MLPerf 0.5 training benchmark, NVIDIA T4 brings 10x more performance and 20x higher energy efficiency than CPUs.
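
For readers unfamiliar with the model, here is a simplified sketch of neural collaborative filtering in PyTorch: user and item embeddings feed a small MLP that scores how likely a user is to interact with an item. It is an illustration, not the MLPerf reference implementation, and the MovieLens-scale sizes are only placeholders.

```python
import torch
import torch.nn as nn

class NCF(nn.Module):
    def __init__(self, num_users, num_items, dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)
        self.item_emb = nn.Embedding(num_items, dim)
        # The MLP scores the concatenated user/item representations.
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, user_ids, item_ids):
        x = torch.cat([self.user_emb(user_ids), self.item_emb(item_ids)], dim=-1)
        return torch.sigmoid(self.mlp(x)).squeeze(-1)

# Example: score a small batch of user/item pairs (sizes are illustrative).
model = NCF(num_users=138_000, num_items=27_000)
scores = model(torch.tensor([0, 1, 2]), torch.tensor([10, 20, 30]))
```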

Table 3: Inference on NCF.

|  | Single Intel Xeon Gold 6140 | NVIDIA T4 (Turing) |
| --- | --- | --- |
| Recommender Inference Throughput, MovieLens (thousands of samples/sec) | 2,860 | 27,800 |
| Processor TDP | 150 W | 70 W |
| Energy Efficiency (using TDP) | 19 samples/sec/W | 397 samples/sec/W |
| GPU Performance Advantage | 1.0 (baseline) | 10x |
| GPU Energy-Efficiency Advantage | 1.0 (baseline) | 20x |

CPU server: Single-socket Xeon Gold 6240@2.6GHz; 384GB system RAM; Used Intel Benchmark for NCF on TensorFlow with Intel’s TF Docker container version 1.13.1; FP32 precision. Note: Single-socket CPU config used for CPU tests as it yielded a better score than dual-socket.

GPU results: T4: Single-socket Xeon Gold 6140@2.3GHz; 384GB system RAM; CUDA 10.1.105; NCCL 2.4.3, cuDNN 7.5.0.56, cuBLAS 10.1.105; NVIDIA driver 418.40.04; on TensorFlow using automatic mixed precision and XLA compiler; batch-size: 2,048 for CPU, 1,048,576 for T4; precision: FP32 for CPU, mixed precision for T4. 

Unified Platform for AI Training and Inference

The use of AI models in applications is an iterative process designed to continuously improve their performance. Data scientist teams constantly update their models with new data and algorithms to improve accuracy. These models are then updated in applications by developers.

Updates can happen monthly, weekly and even on a daily basis. Having a single platform for both AI training and inference can dramatically simplify and accelerate this process of deploying and updating AI in applications.

NVIDIA’s data center GPU computing platform leads the industry in performance by a large margin for AI training, as demonstrated by the standard AI benchmark, MLPerf. And the NVIDIA platform provides compelling value for inference, as the data presented here attests. That value increases with the growing complexity and progress of modern AI.

To help fuel the rapid progress in AI, NVIDIA has deep engagements with the ecosystem and constantly optimizes software, including key frameworks like TensorFlow, Pytorch and MxNet as well as inference software like TensorRT and TensorRT Inference Server.

NVIDIA also regularly publishes pre-trained AI models for inference and model scripts for training models using your own data. All of this software is freely made available as containers, ready to download and run from NGC, NVIDIA’s hub for GPU-accelerated software.

Get the full story about our comprehensive AI platform.


ACR AI-LAB and NVIDIA Make AI in Hospitals Easy on IT, Accessible to Every Radiologist

For radiology to benefit from AI, hospital IT departments need easy, consistent and scalable ways to implement the technology. That means a return to a service-oriented architecture, where logical components are separated and can each scale individually, and an efficient use of the additional compute power these tools require.

AI is coming from dozens of vendors as well as internal innovation groups, and needs a place within the hospital network to thrive. That’s why NVIDIA and the American College of Radiology (ACR) have published a Hospital AI Reference Architecture Framework. It helps hospitals easily get started with AI initiatives.

A Cookbook to Make AI Easy

The Hospital AI Reference Architecture Framework was published at yesterday’s annual ACR meeting for public comment. This follows the recent launch of the ACR AI-LAB, which aims to standardize and democratize AI in radiology. The ACR AI-LAB uses infrastructure such as NVIDIA GPUs and the NVIDIA Clara AI toolkit, as well as GE Healthcare’s Edison platform, which helps bring AI from research into FDA-cleared smart devices.

The Hospital AI Reference Architecture Framework outlines how hospitals and researchers can easily get started with AI initiatives. It includes descriptions of the steps required to build and deploy AI systems, and provides guidance on the infrastructure needed for each step.

Hospital AI Architecture Framework (diagram)

To drive an effective AI program within a healthcare institution, there must first be an understanding of the workflows involved, the compute needs and the data required. That foundation rests on enabling better insights from patient data with easy-to-deploy compute at the edge.

Using a transfer client, seed models can be downloaded from a centralized model store. A clinical champion uses an annotation tool to locally create data that can be used for fine-tuning the seed model or training a new model. Then, using the training system with the annotated data, a localized model is instantiated. Finally, an inference engine is used to conduct validation and ultimately inference on data within the institution.
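
The sketch below restates that four-stage workflow as runnable placeholder Python. Every function name here (download_seed_model, annotate_locally and so on) is hypothetical; the names simply stand for the components described above, not real APIs.

```python
# Hypothetical sketch of the four AI-LAB workflow stages described above.
# All names and bodies are placeholders, not real APIs.

def download_seed_model(model_store_url):
    # Transfer client: fetch a pretrained "seed" model from the central store.
    return {"weights": "seed", "source": model_store_url}

def annotate_locally(studies):
    # Clinical champion labels local studies with the annotation tool.
    return [{"study": s, "label": "finding"} for s in studies]

def fine_tune(seed_model, annotations):
    # Training system: adapt the seed model to locally annotated data.
    return {**seed_model, "weights": "fine-tuned", "num_examples": len(annotations)}

def validate_and_deploy(model, holdout):
    # Inference engine: validate, then run inference inside the institution.
    print(f"validated {model['weights']} model on {len(holdout)} held-out studies")

local_studies = ["study-001", "study-002", "study-003"]
seed = download_seed_model("https://example-model-store.local")
labels = annotate_locally(local_studies[:2])
local_model = fine_tune(seed, labels)
validate_and_deploy(local_model, local_studies[2:])
```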

These four workflows sit atop AI compute infrastructure, which can be accelerated with NVIDIA GPU technology for best performance, alongside storage for models and annotated studies. These workflows tie back into other hospital systems such as PACS, where medical images are archived.

Three Magic Ingredients: Hospital Data, Clinical AI Workflows, AI Computing

Healthcare institutions don’t have to build the systems to deploy AI tools themselves.

This scalable architecture is designed to support and provide computing power to solutions from different sources. GE Healthcare's Edison platform now uses the inference capabilities of NVIDIA's TensorRT Inference Server (TRT-IS) to help AI run in an optimized way within GPU-powered software and medical devices. This integration makes it easier to deliver AI from multiple vendors into clinical workflows — and is the first example of the AI-LAB's efforts to help hospitals adopt solutions from different vendors.

Together, Edison with TRT-IS offers a ready-made device inferencing platform that is optimized for GPU-compliant AI, so models built anywhere can be deployed in an existing healthcare workflow.

Hospitals and researchers are empowered to embrace AI technologies without building their own standalone technology or yielding their data to the cloud, which has privacy implications.


By the Book: AI Making Millions of Ancient Japanese Texts More Accessible

Natural disasters aren't just threats to people and buildings; they can also erase history — by destroying rare archival documents. As a safeguard, scholars in Japan are digitizing the country's centuries-old paper records, typically by taking a scan or photo of each page.

But while this method preserves the content in digital form, it doesn’t mean researchers will be able to read it. Millions of physical books and documents were written in an obsolete script called Kuzushiji, legible to fewer than 10 percent of Japanese humanities professors.

“We end up with billions of images which will take researchers hundreds of years to look through,” said Tarin Clanuwat, researcher at Japan’s ROIS-DS Center for Open Data in the Humanities. “There is no easy way to access the information contained inside those images yet.”

Extracting the words on each page into machine-readable, searchable form takes an extra step: transcription, which can be done either by hand or through a computer vision method called optical character recognition, or OCR.

Clanuwat and her colleagues are developing a deep learning OCR system to transcribe Kuzushiji writing — used for most Japanese texts from the 8th century to the start of the 20th — into modern Kanji characters.

Clanuwat said GPUs are essential for both training and inference of the AI.

“Doing it without GPUs would have been inconceivable,” she said. “GPU not only helps speed up the work, but it makes this research possible.”

Parsing a Forgotten Script

Before the standardization of the Japanese language in 1900 and the advent of modern printing, Kuzushiji was widely used for books and other documents. Though millions of historical texts were written in the cursive script, just a few experts can read it today.

Only a tiny fraction of Kuzushiji texts have been converted to modern scripts — and it’s time-consuming and expensive for an expert to transcribe books by hand. With an AI-powered OCR system, Clanuwat hopes a larger body of work can be made readable and searchable by scholars.

She collaborated on the OCR system with Asanobu Kitamoto from her research organization and Japan’s National Institute of Informatics, and Alex Lamb of the Montreal Institute for Learning Algorithms. Their paper was accepted in 2018 to the Machine Learning for Creativity and Design workshop at the prestigious NeurIPS conference.

Using a labeled dataset of 17th to 19th century books from the National Institute of Japanese Literature, the researchers trained their deep learning model on NVIDIA GPUs, including the TITAN Xp. Training the model took about a week, Clanuwat said, but it "would be impossible" to train on a CPU.

Kuzushiji has thousands of characters, with many occurring so rarely in datasets that it is difficult for deep learning models to recognize them. Still, the average accuracy of the researchers’ KuroNet document recognition model is 85 percent — outperforming prior models.

The newest version of the neural network can recognize more than 2,000 characters. For easier documents with fewer than 300 character types, accuracy jumps to about 95 percent, Clanuwat said. “One of the hardest documents in our dataset is a dictionary, because it contains many rare and unusual words.”

One challenge the researchers faced was finding training data representative of the long history of Kuzushiji. The script changed over the hundreds of years it was used, while the training data came from the more recent Edo period.

Clanuwat hopes the deep learning model could expand access to Japanese classical literature, historical documents and climatology records to a wider audience.


Paige.AI Ramps Up Cancer Pathology Research Using NVIDIA Supercomputer

An accurate diagnosis is key to treating cancer — a disease that kills 600,000 people a year in the U.S. alone — and AI can help.

Common forms of the disease, like breast, lung and prostate cancer, can have good recovery rates when diagnosed early. But diagnosing the tumor, the work of pathologists, can be a very manual, challenging and time-consuming process.

Pathologists traditionally interpret dozens of slides per cancer case, searching for clues pointing to a cancer diagnosis. For example, there can be more than 60 slides for a single breast cancer case and, out of those, only a handful may contain important findings.

AI can help pathologists become more productive by accelerating and enhancing their workflow as they examine massive amounts of data. It gives the pathologists the tools to analyze images, provide insight based on previous cases and diagnose faster by pinpointing anomalies.

Paige.AI is applying AI to pathology to increase diagnostic accuracy and deliver better patient outcomes, starting with prostate and breast cancer. Earlier this year, Paige.AI was granted “Breakthrough Designation” by the U.S. Food and Drug Administration, the first such designation for AI in cancer diagnosis.

The FDA grants the designation for technologies that have the potential to provide for more effective diagnosis or treatment for life-threatening or irreversibly debilitating diseases, where timely availability is in the best interest of patients.

To find breakthroughs in cancer diagnosis, Paige.AI will access millions of pathology slides, providing the volume of data necessary to train and develop cutting-edge AI algorithms.

NVIDIA DGX-1 is proving to be an important research tool for many of the world's leading AI researchers.

To make sense of all this data, Paige.AI uses an AI supercomputer made up of 10 interconnected NVIDIA DGX-1 systems. The supercomputer has the enormous computing power of over 10 petaflops necessary to develop a clinical-grade model for pathology and, for the first time, bridge the gap from research to a clinical setting that benefits future patients.

One example of how NVIDIA’s technology is already being used is a recent study by Paige.AI that used seven NVIDIA DGX-1 systems to train neural networks on a new dataset to detect prostate cancer. The dataset consisted of 12,160 slides, two orders of magnitude larger than previous datasets in pathology. The researchers achieved near perfect accuracy on a test set consisting of 1,824 real-world slides without any manual image-annotation.

By minimizing the time pathologists spend processing data, AI can help them focus their time on analyzing it. This is especially critical given the short supply of pathologists.

According to The Lancet medical journal, there is a single pathologist for every million people in sub-Saharan Africa and one for every 130,000 people in China. In the United States, there is one for roughly every 20,000 people; however, studies predict that number will shrink to one for about every 30,000 people by 2030.

AI gives a big boost to computational pathology by enabling quantitative analysis of microscopic structures and cell biology. This advancement is made possible by combining novel image analysis, computer vision and machine learning techniques.

“With the help of NVIDIA technology, Paige.AI is able to train deep neural networks from hundreds of thousands of gigapixel images of whole slides. The result is clinical-grade artificial intelligence for pathology,” said Dr. Thomas Fuchs, co-founder and chief scientific officer at Paige.AI. “Our vision is to help pathologists improve the efficiency of their work, for researchers to generate new insights, and clinicians to improve patient care.”


Feature image credit: Dr. Cecil Fox, National Cancer Institute, via Wikimedia Commons.


Bird’s-AI View: Harnessing Drones to Improve Traffic Flow

Traffic. It’s one of the most commonly cited frustrations across the globe.

It consumed nearly 180 hours of productive time for the average U.K. driver last year. German drivers lost an average of 120 hours. U.S. drivers lost nearly 100 hours.

Because time is too precious to waste, RCE Systems — a Brno, Czech Republic-based startup and member of the NVIDIA Inception program — is taking its tech to the air to improve traffic flow.

Its DataFromSky platform combines trajectory analysis, computer vision and drones to ease congestion and improve road safety.

AI in the Sky

Traffic analysis has traditionally been based on video footage from fixed cameras, mounted at specific points along roads and highways.

This can severely limit the analysis of traffic, which is, by nature, constantly moving and changing.

Capturing video from a bird’s-eye perspective via drones allows RCE Systems to gain deeper insights into traffic.

Beyond monitoring objects captured on video, the DataFromSky platform interprets movements using AI to provide highly accurate telemetric data about every object in the traffic flow.

RCE Systems trains its deep neural networks using thousands of hours of video footage from around the globe, shot in various weather conditions. The training takes place on NVIDIA GPUs using Caffe and TensorFlow.

These specialized neural networks can then recognize objects of interest and continually track them in video footage.
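
RCE Systems' production networks and tracking logic are proprietary, so the sketch below only illustrates the general pattern: an off-the-shelf torchvision detector finds objects in each frame, and a simple nearest-neighbor association step strings detections into trajectories.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Off-the-shelf detector as a stand-in for RCE Systems' custom networks.
detector = fasterrcnn_resnet50_fpn(pretrained=True).eval()

def detect_objects(frame, score_threshold=0.8):
    """Return box centers for confident detections in one frame (CHW float tensor in [0, 1])."""
    with torch.no_grad():
        out = detector([frame])[0]
    boxes = out["boxes"][out["scores"] > score_threshold]
    return (boxes[:, :2] + boxes[:, 2:]) / 2  # (x, y) centers

def extend_trajectories(trajectories, centers, max_jump=30.0):
    """Greedily associate each new detection with the nearest existing track, in pixels."""
    for c in centers:
        best, best_dist = None, max_jump
        for track in trajectories:
            d = float(torch.dist(track[-1], c))
            if d < best_dist:
                best, best_dist = track, d
        if best is not None:
            best.append(c)       # continue an existing trajectory
        else:
            trajectories.append([c])  # start a new one
    return trajectories
```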

The data captured via this AI process is used in numerous research projects, enabling deeper analysis of object interaction and new behavioral models of drivers in specific traffic situations.

Ultimately, this kind of data will also be crucial for the development of autonomous vehicles.

Driving Impact

The DataFromSky platform is still in its early days, but its impact is already widespread.

RCE Systems is working on a system for analyzing safety at intersections, based on driver behavior. This includes detecting situations where accidents were narrowly avoided and then determining root causes.

A better understanding of these situations makes it possible to prevent them — easing traffic flow and avoiding vehicle damage as well as potential loss of life.

Toyota Europe used RCE Systems’ findings from the DataFromSky platform to create probabilistic models of driver behavior as well as deeper analysis of interactions with roundabouts.

Leidos used insights gathered by RCE Systems to calibrate traffic simulation models as part of its projects to examine narrowing freeway lanes and shoulders in Dallas, Seattle, San Antonio and Honolulu.

And the value of RCE Systems’ analysis is not limited to vehicles. The Technical University of Munich has used it to perform a behavioral study of cyclists and pedestrians.

Moving On

RCE Systems is looking to move to NVIDIA Jetson AGX Xavier to accelerate its AI-at-the-edge solution. The company is currently developing a "monitoring drone" capable of evaluating image data in flight, in real time.

It could one day replace a police helicopter during high-speed chases or act as a mobile surveillance system for property protection.


AI of the Storm: Deep Learning Analyzes Atmospheric Events on Saturn

Saturn is many times the size of Earth, so it’s only natural that its storms are more massive — lasting months, covering thousands of miles, and producing lightning bolts thousands of times more powerful.

While scientists have access to galaxies of data on these storms, its sheer volume leaves traditional methods inadequate for studying the planet’s weather systems in their entirety.

Now AI is being used to launch into that trove of information. Researchers from University College London and the University of Arizona are working with data collected by NASA’s Cassini spacecraft, which spent 13 years studying Saturn before disintegrating in the planet’s atmosphere in 2017.

A recently published Nature Astronomy paper describes how the scientists’ deep learning model can reveal previously undetected atmospheric features on Saturn, and provide a clearer view of the planet’s storm systems at a global level.

In addition to providing new insights about Saturn, the AI can shed light on the behavior of planets both within and beyond our solar system.

“We have these missions that go around planets for many years now, and much of this data basically sits in an archive and it’s not being looked at,” said Ingo Waldmann, deputy director of the University College London’s Centre for Space Exoplanet Data. “It’s been difficult so far to look at the bigger picture of this global dataset because people have been analyzing data by hand.”

The researchers used an NVIDIA V100 GPU, the most advanced data center GPU, for both training and inference of their neural networks.

Parting the Clouds

Scientists studying the atmosphere of other planets take one of two strategies, Waldmann says. Either they conduct a detailed manual analysis of a small region of interest, which could take a doctoral student years — or they simplify the data, resulting in rough, low-resolution findings.

“The physics is quite complicated, so the data analysis has been either quite old-fashioned or simplistic,” Waldmann said. “There’s a lot of science one can do by using big data approaches on old problems.”

Thanks to the Cassini satellite, researchers have terabytes of data available to them. Primarily using unsupervised learning, Waldmann and Caitlin Griffith, his co-author from the University of Arizona, trained their deep learning model on data from the satellite’s mapping spectrometer.

This data is commonly collected on planetary missions, Waldmann said, making it easy to apply their AI model to study other planets.

The researchers saw speedups of 30x when training their deep learning models on a single V100 GPU compared to CPU. They’re now transitioning to using clusters of multiple GPUs. For inference, Waldmann said the GPU was around twice as fast as using a CPU.

Using the AI model, the researchers were able to analyze a months-long electrical storm that churned through Saturn’s southern hemisphere in 2008. Scientists had previously detected a bright ammonia cloud from satellite images of the storm — a feature more commonly spotted on Jupiter, but rarely seen on Saturn.

Waldmann and Griffith's AI analyzed data from this months-long electrical storm on Saturn. The left image shows the planet in colors similar to how the human eye would see it, while the right image is color enhanced, making the storm stand out more clearly. (Image credit: NASA/JPL/Space Science Institute)

Waldmann and Griffith’s neural network found that the ammonia cloud visible by eye was just the tip of a “massive upwelling” of ammonia hidden under a thin layer of other clouds and gases.

“What you can see by eye is just the strongest bit of that ammonia feature,” Waldmann said. “It’s just the tip of the iceberg, literally. The rest is not visible by eye — but it’s definitely there.”

To Infinity and Beyond

For researchers like Waldmann, these findings are just the first step. Deep learning can provide planetary scientists for the first time with depth and breadth at once, producing detailed analyses that also cover vast geographic regions.

“It will tell you very quickly what the global picture is and how it all connects together,” said Waldmann. “Then researchers can go and look at individual spots that are interesting within a particular system, rather than blindly searching.”

A better understanding of Saturn’s atmosphere can help scientists analyze how our solar system behaves, and provide insights that can be extrapolated to planets around other stars.

Already, the researchers are extending their model to study features on Mars, Venus and Earth using transfer learning — which they were surprised to learn “works really well between planets.”

While Venus and Earth are almost identical in size, Venus has no global plate tectonics. In collaboration with the Observatoire de Paris, the team is starting a project to analyze Venus’s cloud structure and planetary surface to understand why the planet lacks tectonic plates.

Rather than atmospheric features, the researchers' Mars project focuses on studying the planet's surface. Data from the Mars Reconnaissance Orbiter can create a global analysis that scientists can use to deduce where ancient water was most likely present, and to determine where the next Mars rover should land.

The underlying pattern recognition algorithm can be extended even further, Waldmann said. On Earth, it can be repurposed to spot rogue fishing vessels to preserve protected environments. And across the solar system on Jupiter, a transfer learning approach can train an AI model to analyze how the planet’s storms change over time.

Waldmann says there’s relatively easy access to training data — creating an open field of opportunities for researchers.

“This is the beautiful thing about planetary science,” he said. “All of the data for all of the planets is publicly available.”

Main image, captured in 2011, shows the largest storm observed on Saturn by the Cassini spacecraft. (Image credit: NASA/JPL-Caltech/SSI)


Goodwill Farming: Startup Harvests AI to Reduce Herbicides

Jorge Heraud is an anomaly for a founder whose startup was recently acquired by a corporate giant: Instead of counting days to reap earn-outs, he’s sowing the company’s goodwill message.

That might have something to do with the mission. Blue River Technology, acquired by John Deere more than a year ago for $300 million, aims to reduce herbicide use in farms.

The effort has been a calling to like-minded talent in Silicon Valley who want to apply their technology know-how to more meaningful problems than the next hot app, said Heraud, who continues to serve as Blue River’s CEO.

“We’re using machine learning to make a positive impact on the world. We don’t see it as just a way of making a profit. It’s about solving problems that are worthy of solving — that attracts people to us,” he said.

Heraud and co-founder Lee Redden, who continues to serve as Blue River’s CTO, were attending Stanford University in 2011 when they decided to form the startup. Redden was pursuing graduate studies in computer vision and machine learning applied to robotics while Heraud was getting an executive MBA.

The duo’s work formed one of the early success stories of many for harnessing NVIDIA GPUs and computer vision to tackle complex industrial problems with big benefits to humanity.

“Growing food is one of the biggest and oldest industries — it doesn’t get bigger than that,” said Ryan Kottenstette, who invested in Blue River at Khosla Ventures.

Herbicide Spraying 2.0

As part of tractor giant John Deere, Blue River remains committed to herbicide reduction. The company is engaged in multiple pilots of its See & Spray smart agriculture technology.

Pulled behind tractors, its See & Spray machine is about 40 feet wide and covers 12 rows of crops. It has 30 mounted cameras that capture photos of plants every 50 milliseconds, which are processed by its 25 on-board Jetson AGX Xavier supercomputing modules.

As the tractor moves at about 7 miles per hour, according to Blue River, the Jetson Xavier modules running its image recognition algorithms must decide whether each plant in the images fed from the 30 cameras is a weed or a crop faster than the blink of an eye. That leaves just enough time for the See & Spray's robotic sprayer, which features 200 precision sprayers, to zap each weed individually with herbicide.
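
To see how tight that timing window is, here is a back-of-the-envelope calculation using only the figures quoted above (7 mph and a 50-millisecond capture interval); the arithmetic is mine, not Blue River's.

```python
# Rough latency budget for See & Spray, from the figures quoted above.
speed_mps = 7 * 0.44704          # 7 mph in meters per second (~3.13 m/s)
frame_interval_s = 0.050         # each camera captures a photo every 50 ms
travel_per_frame = speed_mps * frame_interval_s

# The machine moves only ~15.6 cm between frames, so detection,
# classification and nozzle actuation must all fit inside that window.
print(f"{travel_per_frame * 100:.1f} cm traveled per 50 ms frame")
```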

“We use Jetson to run inference on our machine learning algorithms and to decide on the fly if a plant is a crop or a weed, and spray only the weeds,” Heraud said.

GPUs Fertilize AgTech

Blue River has trained its convolutional neural networks on more than a million images and its See & Spray pilot machines keep feeding new data as they get used.

Capturing as many possible varieties of weeds in different stages of growth is critical to training the neural nets, which are processed on a “server closet full of GPUs” as well as on hundreds of GPUs at AWS, said Heraud.

Using cloud GPU instances, Blue River has been able to train networks much faster. “We have been able to solve hard problems and train in minutes instead of hours. It’s pretty cool what new possibilities are coming out,” he said.

Among them, Jetson Xavier’s compact design has enabled Blue River to move away from using PCs equipped with GPUs on board tractors. John Deere has ruggedized the Jetson Xavier modules, which offer some protection from the heat and dust of farms.

Business and Environment

Herbicides are expensive. A farmer spending a quarter-million dollars a year on herbicides was able to reduce that expense by 80 percent, Heraud said.

Blue River's See & Spray can take the place of conventional or aerial spraying of herbicides, which blankets entire crops with chemicals, a practice most countries are trying to reduce.

See & Spray can reduce the world’s herbicide use by roughly 2.5 billion pounds, an 80 percent reduction, which could have huge environmental benefits.

“It’s a tremendous reduction in the amount of chemicals. I think it’s very aligned with what customers want,” said Heraud.


Image credit: Blue River


Succeeding by Predicting Failure: AI Startup Using Factory-Based Sensors to Avert Shutdowns

In a world reliant on the power of machines, breakdowns can be problematic, sometimes catastrophic.

A system failure at an auto manufacturer can cost up to $1.3 million an hour. An offshore oil platform going offline can waste around $3.5 million a day.

But technical failures don’t just drain money. They also risk the safety of employees, put customer relations on the line and can threaten the environment.

To counter this, many firms implement predictive maintenance programs to detect equipment flaws before damage occurs. Traditional techniques rely on installing a large number of purpose-built sensors and measuring the performance of specific machines.

But this narrow, isolated view means that larger, holistic problems are often missed or root causes aren’t addressed. And this can lead to additional, preventable breakages further down the line.

Reliability Solutions is taking a different approach. The Krakow, Poland-based startup uses deep learning to derive insights from the huge amount of data already being collected on premises by the myriad sensors its clients have previously installed.

A member of the NVIDIA Inception program, Reliability Solutions is one of the first companies to take this approach and is already working with some big names, including energy provider Tauron and automakers Opel and Volkswagen.

Predicting Failure Efficiently and Effectively

Predictive maintenance aims to predict when equipment failure might occur in sufficient time to take preventative measures.

Reliability Solutions' approach to predictive maintenance uses deep neural networks powered by an NVIDIA Tesla P100 GPU cluster in the data center.

“By using deep learning, we can avoid the common pain points associated with traditional predictive maintenance models — high hardware costs, high engineering costs and long lead times,” explains Mateusz Marzec, CEO of Reliability Solutions. “With the power of NVIDIA GPUs, we can train our models, using terabytes of data, in a few hours.”

One of the largest energy companies in Europe turned to Reliability Solutions to build a predictive model that could detect the failure of a fluidized bed combustion boiler. These systems burn solid fuels to create energy at lower temperatures and with reduced sulfur emissions than would otherwise be possible.

As the entire network of boilers supplies approximately 50 TWh of electricity to over 5.5 million customers per year, any downtime has extensive consequences.

Reliability Solutions developed a predictive model based on 700GB of historical data collected from sensors already installed at the plant. It also utilized a full description of the events that had impacted the boiler over a three-year period, from 2013-2015. This data was used to train a series of deep neural networks on a cluster of NVIDIA GPUs.

When validated against operating data for 2016, the system predicted all of the failures with an accuracy level of 100 percent — and without any false positives. Every breakdown of the fluidized bed boiler was anticipated from between 2.5 and 17 hours before the actual breakdown took place. This would’ve given maintenance teams sufficient time to stop the malfunction, or at least minimize the damage caused.
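
As an illustration of the general technique (not Reliability Solutions' proprietary model), a failure predictor of this kind can be sketched as a recurrent network that reads a window of multivariate sensor readings and outputs the probability of a breakdown within a chosen lead-time horizon:

```python
import torch
import torch.nn as nn

class FailurePredictor(nn.Module):
    """Predicts the probability of failure within a lead-time window
    from a fixed-length window of multivariate sensor readings."""
    def __init__(self, num_sensors, hidden=128):
        super().__init__()
        self.encoder = nn.GRU(num_sensors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):               # x: (batch, time_steps, num_sensors)
        _, h = self.encoder(x)
        return torch.sigmoid(self.head(h[-1])).squeeze(-1)

# Example with made-up dimensions: 8 windows of 360 readings from 64 sensors.
model = FailurePredictor(num_sensors=64)
window = torch.randn(8, 360, 64)
failure_prob = model(window)            # one probability per window
```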

With the predictive maintenance module now fully incorporated, the company is making yearly savings of 4 million euros.

From Predictive to Prescriptive

Reliability Solutions is now turning its attention to developing prescriptive maintenance. This enables it not only to identify what will go wrong, and when, but also to suggest a recommended course of action.

This approach also applies to companies looking to optimize the performance of their machinery, rather than fix issues. In these cases, the prescriptive model can propose steps that will save companies money or reduce their CO2 emissions, for example.

Reliability Solutions is already working with one of the biggest chemical companies in central Europe to minimize resource consumption and maximize output by optimizing the configuration of their installation.

The startup built a deep neural network-based metamodel of the chemical installation and then validated the configuration in real life. They found that the metamodel had a 90 percent accuracy rate.

Using the prescriptive model, Reliability Solutions was able to reduce the company’s hydrogen consumption by more than 2 percent, which will save the company millions of euros each year.


Springing into Deep Learning: How AI Could Track Allergens on Every Block

As seasonal allergy sufferers will attest, the concentration of allergens in the air varies every few paces. A nearby blossoming tree or sudden gust of pollen-tinged wind can easily set off sneezing and watery eyes.

But concentrations of airborne allergens are reported city by city, at best.

A network of deep learning-powered devices could change that, enabling scientists to track pollen density block by block.

Researchers at the University of California, Los Angeles, have developed a portable AI device that identifies levels of five common allergens from pollen and mold spores with 94 percent accuracy, according to the team’s recent paper. That’s a 25 percent improvement over traditional machine learning methods.

Using NVIDIA GPUs for inference, the deep learning models can even be implemented in real time, said Aydogan Ozcan, associate director of the UCLA California NanoSystems Institute and senior author on the study. UCLA graduate student Yichen Wu is the paper’s first author.

Putting Traditional Sensing Methods Out to Pasture

Tiny biological particles including pollen, spores and microbes make their way into the human body with every breath. But it can be hard to tell just how many of these microscopic particles, called bioaerosols, are in a specific park or at a street corner.

Bioaerosols are typically collected by researchers using filters or spore traps, then stained and manually inspected in a laboratory — a half-century-old method that takes several hours to several days.

The UCLA researchers set out to improve that process by monitoring allergens directly in the field with a portable and cost-effective device, Ozcan said, “so that the time and labor cost involved in sending the sample, labeling and manual inspection can be avoided.”

Unlike traditional methods, their device automatically sucks in air, trapping it on a sticky surface illuminated by a laser. The laser creates a hologram of any particles, making the often-transparent allergens visible and capturable by an image sensor chip in the device.

The holographic image is then processed by two separate neural networks: one to clean up and crop the image to focus on the sections depicting biological particles, and another to classify the allergens.
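
In code, that two-stage design might look like the simplified sketch below; the tiny placeholder networks stand in for the paper's actual architectures and are meant only to show how the cleanup-and-crop stage feeds the classifier.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 6  # five target allergens plus a "background/other" class (illustrative)

class HologramCleaner(nn.Module):
    """Placeholder for the first network, which cleans up the hologram
    and localizes candidate bioaerosol particles."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, hologram):
        return self.net(hologram)  # refined image from which crops are taken

class ParticleClassifier(nn.Module):
    """Placeholder for the second network, which labels each cropped particle."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, NUM_CLASSES),
        )
    def forward(self, crops):
        return self.net(crops).softmax(dim=-1)

cleaner, classifier = HologramCleaner(), ParticleClassifier()
hologram = torch.randn(1, 1, 512, 512)          # stand-in for a captured hologram
refined = cleaner(hologram)
crop = refined[:, :, 100:164, 100:164]          # a 64x64 candidate particle region
probs = classifier(crop)                         # per-class probabilities for that particle
```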

Conventional machine learning algorithms achieve around 70 percent accuracy at classifying bioaerosols from holographic images. With deep learning, the researchers were able to boost that accuracy to an “unprecedented” 94 percent.

Using an NVIDIA GPU accelerates the training of the neural networks by hundreds of times, Wu said, and enables real-time testing, or inference.

A Blossoming Solution for Real-Time Analysis

While the version of the device described in the paper transmits the holograms to a remote server for the deep learning analysis, Wu said future versions of the device could have an embedded GPU to run AI models at the edge.

For scientists, the portable tool saves money and would enable them to gather data from distributed sensors at multiple locations, creating a real-time air-quality map with finer resolution. This map could be made available online to the general public — a useful tool as climate change makes allergy season longer and more severe.

Alternatively, the device itself — which weighs a little over a pound — could be used by individual allergy or asthma sufferers, allowing them to monitor the air quality around them anytime and access the data through a smartphone.

Since the device can be operated wirelessly, it also could be mounted on drones to monitor air quality in sites that are dangerous or difficult to access.

The researchers plan to expand the AI model to sense more classes of bioaerosols and other particles — and improve the physical device so it can conduct continuous sensing over several months.


Lights, Camera, AI: Cambridge Consultants Puts Deep Learning in Director’s Chair

AI is commonly associated with data. Less known is its artistic side — composing music scores, transforming doodles into photorealistic masterpieces, and dancing the night away.

Cambridge Consultants knows it well, having already demonstrated AI’s artistic prowess with Vincent AI, which turns your squiggles into art in one of seven styles resembling everything from moody J.M.W. Turner oil paintings to neon-hued pop art.

Last month, in collaboration with artist and animator Jo Lawrence, the U.K.-based consultancy brought a world first to the Collusion 2019 Showcase, an exhibition in Cambridge of interactive and immersive art exploring our relationship with new technologies.

Datacosm is an AI-driven animated film setting out our changing relationship with technology. What makes it special is that AI chooses the ending as the story unfolds based on the type of music played by a live pianist.

When Data Becomes Art

The Collusion 2019 Showcase celebrated the intersection of technology and art in a rare and thought-provoking manner.

Tasked with investigating the ever-intensifying and complex effects of emerging technology on culture and society, Lawrence and a select number of other artists set out to express their findings in their chosen medium.

Talking of how the film came to be, she explained, “Data can communicate, it can be grown, farmed, harvested, stored, distributed, consumed, corrupted and disseminated. Inspired, I developed ideas for a narrative animation exploring data-based themes using a combination of stop-motion animation of puppets and objects, pixilation and film.”

The result, Datacosm, tells the story of the movement of data from A to B, revealing the process of performing and making.

In the film, the top half of the screen shows the stage and animation as a combination of physical puppetry and digital production. The bottom half shows puppeteers working. Dividing the screen is a continuous block of code — bringing to the forefront the AI work being done behind the scenes.

AI developed by Cambridge Consultants — NVIDIA’s first deep learning service delivery partner in Europe — drove the final narrative of the film at the showcase, based on music supplied by a pianist.

As the music played, the AI identified the musical genre and changed the direction of the film by adding different layers of animation. Depending on what was played, one of four endings was shown.

AI Aficionado

The machine learning technology driving Datacosm, dubbed “the Aficionado,” can instantly identify a variety of music genres — from baroque and classical, to ragtime and jazz.

Trained using hundreds of hours of music on 16 NVIDIA GPUs, the Aficionado can even outperform humans and traditional coding in accurately identifying musical genres.
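
Cambridge Consultants hasn't published the Aficionado's internals, but a common way to build this kind of live genre recognizer is to turn a short audio window into a mel spectrogram and feed it to a small CNN, as in the untrained, purely illustrative sketch below (torchaudio is assumed; the genre list is only an example).

```python
import torch
import torch.nn as nn
import torchaudio

GENRES = ["baroque", "classical", "ragtime", "jazz"]  # illustrative label set

to_spec = torchaudio.transforms.MelSpectrogram(sample_rate=16_000, n_mels=64)

classifier = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, len(GENRES)),
)

def predict_genre(waveform):
    """waveform: (1, num_samples) mono audio from a few seconds of live piano."""
    spec = to_spec(waveform).unsqueeze(0)        # (1, 1, n_mels, time)
    logits = classifier(spec)
    return GENRES[int(logits.argmax(dim=-1))]

print(predict_genre(torch.randn(1, 16_000 * 4)))  # 4 seconds of (random) audio
```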

The project is just one of a number developed by Cambridge Consultants as part of its Digital Greenhouse initiative.

This purpose-built AI research facility is built around the NVIDIA DGX POD reference architecture with NetApp storage, known as ONTAP AI. It is designed for discovering, developing and testing machine learning approaches in a secure environment.

The cutting-edge research performed in the Digital Greenhouse is then used to solve the various challenges faced by Cambridge Consultants’ clients.

“Combining NVIDIA DGX POD with NetApp storage has enabled us to tackle the unprecedented demands on compute, storage, networking and facilities that these projects bring,” said Dominic Kelly, head of AI research at Cambridge Consultants, which employs a global team of over 850 engineers, designers and scientists. “The combination accelerates our AI research and provides the most efficient way of transferring technology from our lab to real deployments for our clients.

“The Collusion project has helped us explore innovative and highly sophisticated technologies, which hold world-changing potential and social impact. The project has been fascinating, helping us combine technical and artistic perspectives to create thought-provoking art that’s accessible to a broad audience,” Kelly added.

