Researchers Make Movies of the Brain with CUDA

When colleagues told Sally Epstein they sped up image processing by three orders of magnitude for a client’s brain-on-a-chip technology, she responded like any trained scientist. Go back and check your work, the biomedical engineering Ph.D. told them.

Yet it was true. The handful of researchers at Cambridge Consultants had devised a basket of techniques to process an image on GPUs in an NVIDIA DGX-1 system in 300 milliseconds, a 3,000x boost over the 18 minutes the task took on an Intel Core i9 CPU.

The achievement makes it possible for researchers to essentially watch movies of neurons firing in real time using the brain-on-a-chip technology from NETRI, a French startup.

“Animal studies revolutionized medicine. This is the next step in testing for areas like discovering new drugs,” said Epstein, head of Strategic Technology at Cambridge Consultants, which develops products and technologies for a wide variety of established companies and startups such as NETRI.

The startup designs chips that sport 3D microfluidic channels to host neural tissues and a CMOS camera sensor with polarizing filters to detect individual neurons firing. It hopes its precision imaging can speed the development of novel treatments for neurological disorders such as Alzheimer’s disease.

Facing a Computational Bottleneck

NETRI’s chips generate 100-megapixel images at up to 1,000 frames per second — the equivalent of a hundred 4K gaming systems running at 120fps. Besides spawning tons of data, the recordings require highly complex math to process.

As a result, processing a single second of a recording took NETRI 12 days, an unacceptable delay. So, the startup turned to Cambridge Consultants to bust through the bottleneck.

“Our track record in scientific and biological imaging turned out to be very relevant,” said Monty Barlow, Director of Strategic Technology at Cambridge Consultants. And when NETRI heard about the 3,000x boost, “they trusted us even though we didn’t trust ourselves at first,” he quipped.

Leveraging Math, Algorithms and GPUs

A handful of specialists at Cambridge Consultants delivered the 3,000x speedup using multiple techniques. For example, math and algorithm experts employed a mix of Gaussian filters, multivariate calculus and other tools to eliminate redundant tasks and reduce peak RAM requirements.

Software developers migrated NETRI’s Python code to CuPy to take advantage of the massive parallelism of NVIDIA’s CUDA software. And hardware specialists optimized the code to fit into GPU memory, eliminating unnecessary data transfers inside the DGX-1.
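
CuPy exposes a NumPy-compatible API, so much of such a migration amounts to swapping array libraries while keeping data resident on the GPU. Below is a minimal sketch of that pattern, not NETRI’s actual pipeline; the frame size, filter parameters and function names are illustrative assumptions.

```python
# Minimal sketch (not NETRI's code): a NumPy-style Gaussian-filter step moved
# to the GPU with CuPy. Frame size and sigma are illustrative assumptions.
import numpy as np
import cupy as cp
from cupyx.scipy import ndimage as cp_ndimage

def denoise_frame_gpu(frame_np: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Copy one frame to the GPU, filter it there, and copy the result back."""
    frame_gpu = cp.asarray(frame_np)                       # host -> device
    filtered_gpu = cp_ndimage.gaussian_filter(frame_gpu, sigma=sigma)
    return cp.asnumpy(filtered_gpu)                        # device -> host

if __name__ == "__main__":
    frame = np.random.rand(10_000, 10_000).astype(np.float32)  # ~100-megapixel frame
    print(denoise_frame_gpu(frame).shape)
```

In practice a real pipeline would keep intermediate arrays on the GPU across processing steps, since round-trip transfers of 100-megapixel frames quickly become exactly the kind of unnecessary data movement the team worked to eliminate.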

The CUDA profiler helped find bottlenecks in NETRI’s code and alternatives to resolve them. “NVIDIA gave us the tools to execute this work efficiently — it happened within a week with a core team of four researchers and a few specialists,” said Epstein.

Looking ahead, Cambridge Consultants expects to find further speedups for the code using the DGX-1 that could enable real-time manipulation of neurons using a laser. Researchers also aim to explore NVIDIA IndeX software to help visualize neural activity.

The work with NETRI is one of several DGX-1 applications at the company. It also hosts a Digital Greenhouse for AI research. Last year, it used the DGX-1 to create a low-cost but highly accurate tool for monitoring tuberculosis.

Italy Forges AI Future in Partnership with NVIDIA

Italy is well known for its architecture, culture and cuisine. Soon, its contributions to AI may be just as renowned.

Taking a step in that direction, a national research organization forged a three-year collaboration with NVIDIA. Together they aim to accelerate AI research and commercial adoption across the country.

Leading the charge for Italy is CINI, the National Inter-University Consortium for Informatics that includes a faculty of more than 1,300 professors in various computing fields across 39 public universities.

CINI’s National Laboratory of Artificial Intelligence and Intelligent Systems (AIIS) is spearheading the effort as part of its goal to expand Italy’s ecosystem for both academic research and commercial applications of AI.

“Leveraging NVIDIA’s expertise to build systems specifically designed for the creation of AI will help secure Italy’s position as a top player in AI education, research and industry adoption,” said Rita Cucchiara, a professor of computer engineering and science and director of AIIS.

National effort begins in Modena

The joint initiative aims to train students, nurture startups and spread adoption of the latest AI technology throughout Italy. As a first step, the partners will create a local hub at the University of Modena and Reggio Emilia (Unimore) for the global NVIDIA AI Technology Center.

The partnership marks an important expansion of NVIDIA’s work with the university whose roots date back to the medieval period.

In December, the company supported research on a novel way to automate the process of describing actions in a video. A team of four researchers at Unimore and one from Milan-based AI startup Metaliquid developed an AI model that achieved up to a 16 percent relative improvement compared with prior solutions. In a final stage of the project, NVIDIA helped the researchers analyze their network’s topology to optimize its training on an NVIDIA DGX-1 system.

In July, Unimore and NVIDIA collaborated on an event for AI startups. Unimore’s AImageLab hosted the event, which included representatives of NVIDIA’s Inception program, an initiative that nurtures AI startups with access to the company’s technology and ecosystem.

The collaboration comes at a time when the AImageLab, host for the new NVIDIA hub, is already making its mark in areas such as machine vision and medical imaging.

Winning kudos in image recognition

In September, two world-class research events singled out the AImageLab for recognition. One team from the lab won a best paper award at the International Conference on Computer Analysis of Images and Patterns. Another came third out of 64 research groups in an international competition using AI to classify skin lesions.

The Modena hub becomes the latest of more than a dozen collaborations worldwide for the NVIDIA AI Technology Center (NVAITC). NVAITC maintains an open database of research and tools developed with and for its partners.

Overall, the new collaboration “will bring together NVIDIA and CINI in our shared mission to enable, support and inform Italy’s AI ecosystem for research, industry and society,” said Simon See, senior director of NVAITC.

Life of Pie: How AI Delivers at Domino’s

Some like their pies with extra cheese, extra sauce or double pepperoni. Zack Fragoso’s passion is for pizza with plenty of data.

Fragoso, a data science and AI manager at pizza giant Domino’s, got his Ph.D. in occupational psychology, a field that employs statistics to sort through the vagaries of human behavior.

“I realized I liked the quant part of it,” said Fragoso, whose nimbleness with numbers led to consulting jobs in analytics for the police department and symphony orchestra in his hometown of Detroit before landing a management job on Domino’s expanding AI team.

The pizza maker “has grown our data science team exponentially over the last few years, driven by the impact we’ve had on translating analytics insights into action items for the business team.”

Making quick decisions is important when you need to deliver more than 3 billion pizzas a year — fast. So, Domino’s is exploring the use of AI for a host of applications, including more accurately predicting when an order will be ready.

Points for Pie, launched at last year’s Super Bowl, has been Domino’s highest-profile AI project to date. Customers snapped a smartphone picture of whatever pizza they were eating, and the company awarded loyalty points toward a free pizza.

“There was a lot of excitement for it in the organization, but no one was sure how to recognize purchases and award points,” Fragoso recalled.

“The data science team said this is a great AI application, so we built a model that classified pizza images. The response was overwhelmingly positive. We got a lot of press and massive redemptions, so people were using it,” he added.

Domino’s trained its model on an NVIDIA DGX system equipped with eight V100 Tensor Core GPUs, using more than 5,000 images, including pictures some customers sent in of plastic pizza dog toys. A survey sent in response to the pictures helped automate part of the labeling of this unique dataset, which is now considered a strategic corporate asset.
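
Domino’s hasn’t published its model architecture, but transfer learning from a pretrained backbone is a common way to build this kind of image classifier from a few thousand labeled examples. The sketch below is an illustration in PyTorch; the ResNet-50 backbone, class count and hyperparameters are assumptions, not the company’s actual code.

```python
# Hypothetical pizza-vs-not-pizza classifier via transfer learning in PyTorch.
# Backbone, class count and hyperparameters are assumptions, not Domino's model.
import torch
import torch.nn as nn
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)  # torchvision >= 0.13
for param in model.parameters():
    param.requires_grad = False                    # freeze the pretrained features
model.fc = nn.Linear(model.fc.in_features, 2)      # new head: pizza / not pizza
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# One illustrative training step on a dummy batch of preprocessed images.
images = torch.randn(8, 3, 224, 224, device=device)
labels = torch.randint(0, 2, (8,), device=device)
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```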

AI Knows When the Order Will Be Ready

More recently, Fragoso’s team hit another milestone, boosting accuracy from 75% to 95% for predictions of when an order will be ready. The so-called load-time model factors in variables such as how many managers and employees are working, the number and complexity of orders in the pipeline and current traffic conditions.

The improvement has been well received and could be the basis for future ways to advance operator efficiencies and customer experiences, thanks in part to NVIDIA GPUs.

“Domino’s does a very good job cataloging data in the stores, but until recently we lacked the hardware to build such a large model,” said Fragoso.

At first, it took three days to train the load-time model, too long to make its use practical.

“Once we had our DGX server, we could train an even more complicated model in less than an hour,” he said of the 72x speed-up. “That let us iterate very quickly, adding new data and improving the model, which is now in production in a version 3.0,” he added.
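
The article doesn’t describe the load-time model’s architecture, but a small network over tabular store and traffic features gives a sense of the shape of the problem. The sketch below is an illustration only; the feature set, synthetic target and architecture are assumptions, not Domino’s production model.

```python
# Hedged sketch of a load-time regressor over the kinds of features named
# above (staffing, order pipeline, traffic). Everything here is illustrative.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(              # tabular features -> minutes until ready
    nn.Linear(5, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Columns: managers, employees, orders in pipeline, order complexity, traffic index.
features = torch.rand(256, 5, device=device)
minutes = 10 + 20 * features[:, 2:3] + 5 * features[:, 4:5]   # synthetic target

for _ in range(200):                # quick illustrative training loop
    optimizer.zero_grad()
    loss = loss_fn(model(features), minutes)
    loss.backward()
    optimizer.step()
print(f"final MSE: {loss.item():.3f}")
```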

More AI in the Oven

The next big step for Fragoso’s team is tapping a bank of NVIDIA Turing T4 GPUs to accelerate AI inferencing for all Domino’s tasks that involve real-time predictions.

Some emerging use cases in the works are still considered secret ingredients at Domino’s. However, the data science team is exploring computer vision applications to make getting customers their pizza as quick and easy as possible.

“Model latency is extremely important, so we are building out an inference stack using T4s to host our AI models in production. We’ve already seen pretty extreme improvements with latency down from 50 milliseconds to sub-10ms,” he reported.

Separately, Domino’s recently tapped BlazingSQL, an open-source SQL engine that runs data science queries on GPUs. NVIDIA RAPIDS software eased the transition, supporting the APIs of a prior CPU-based tool while delivering better performance.

The combination delivers an average 10x speedup across use cases in the dataset-building stage of the AI workflow.

“In the past some of the data-cleaning and feature-engineering operations might have taken 24 hours, but now we do them in less than an hour,” he said.
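
RAPIDS cuDF provides a pandas-like DataFrame that lives in GPU memory, and BlazingSQL layers a SQL interface on top of the same GPU DataFrames. Here is a minimal cuDF sketch of the kind of feature-engineering step described above; the table schema and column names are assumptions, not Domino’s data.

```python
# Illustrative RAPIDS cuDF feature-engineering step; schema is an assumption.
import cudf

orders = cudf.DataFrame({
    "store_id":  [1, 1, 2, 2, 2],
    "items":     [2, 5, 1, 3, 4],
    "prep_mins": [7.5, 14.0, 5.0, 9.5, 12.0],
})

# GPU groupby/aggregation that would otherwise run on the CPU with pandas.
per_store = orders.groupby("store_id").agg({"items": "mean", "prep_mins": "mean"})
print(per_store.to_pandas())
```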

Try Out AI at NRF 2020

Domino’s is one of many forward-thinking companies using GPUs to bring AI to retail.

NVIDIA GPUs helped power Alibaba to $38 billion in revenue on Singles Day, the world’s largest shopping event. And the world’s largest retailer, Walmart, talked about its use of GPUs and NVIDIA RAPIDS at an event earlier this year.

Separately, IKEA uses AI software from NVIDIA partner Winnow to reduce food waste in its cafeterias.

You can learn more about best practices of using AI in retail at this week’s NRF 2020, the National Retail Federation’s annual event. NVIDIA and some of its 100+ retail partners will be on hand demonstrating our EGX edge computing platform, which scales AI to local environments where data is gathered — store aisles, checkout counters and warehouses.

The EGX platform’s real-time edge compute abilities can notify store associates to intervene during shrinkage, open new checkout counters when lines are getting long and deliver the best customer shopping experiences.

Book a meeting with NVIDIA at NRF here.

How AI Accelerates Blood Cell Analysis at Taiwan’s Largest Hospital

Blood tests tell doctors about the function of key organs, and can reveal countless medical conditions, including heart disease, anemia and cancer. At major hospitals, the number of blood cell images awaiting analysis can be overwhelming.

With over 10,000 beds and more than 8 million outpatient visits annually, Taiwan’s Chang Gung Memorial Hospital collects at least a million blood cell images each year. Its clinicians must be on hand 24/7, since blood analysis is key in the emergency department. To improve its efficiency and accuracy, the health care network — with seven hospitals across the island — is adopting deep learning tools developed on AI-Ready Infrastructure, or AIRI.

An integrated architecture from Pure Storage and NVIDIA, AIRI is based on the NVIDIA DGX POD reference design and powered by NVIDIA DGX-1 in combination with Pure Storage FlashBlade. The hospital’s AIRI solution is equipped with four NVIDIA DGX-1 systems, delivering over one petaflop of AI compute performance per system. Each DGX-1 integrates eight of the world’s fastest data center accelerators: the NVIDIA V100 Tensor Core GPU.

Chang Gung Memorial’s current blood cell analysis tools are capable of automatically identifying five main types of white blood cells, but still require doctors to manually identify other cell types, a time-consuming and expensive process.

Its deep learning model provides a more thorough analysis, classifying 18 types of blood cells from microscopy images with 99 percent accuracy. Having an AI tool that identifies a wide variety of blood cells also boosts doctors’ abilities to classify rare cell types, improving disease diagnosis. Using AI can help reduce clinician workloads without compromising on test quality.

To accelerate the training and inference of its deep learning models, the hospital relies on the integrated infrastructure design of AIRI, which incorporates best practices for compute, networking, storage, power and cooling.

AI Runs in This Hospital’s Blood

After a patient has blood drawn, Chang Gung Memorial uses automated tools to sample the blood, smear it on a glass microscope slide and stain it, so that red blood cells, white blood cells and platelets can be examined. The machine then captures an image of the slide, known as a blood film, so it can be analyzed by algorithms.

Using transfer learning, the hospital trained its convolutional neural networks on a dataset of more than 60,000 blood cell images on AIRI.

Using a server equipped with NVIDIA T4 GPUs for inference, the AI takes just two seconds to interpret a set of 25 images, more than a hundred times faster than the usual procedure, in which a team of three medical experts spends up to five minutes.
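
The hospital’s inference code isn’t public, but batched GPU inference over a set of microscope images follows a standard pattern. Below is a hedged PyTorch sketch in which the 18-class network, its weights and the preprocessing are all placeholders.

```python
# Hedged sketch of batched GPU inference for an 18-class blood-cell classifier;
# the network and preprocessing are placeholders, not the hospital's system.
import torch
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50(num_classes=18).to(device).eval()  # placeholder network

@torch.no_grad()
def classify_batch(images: torch.Tensor) -> torch.Tensor:
    """images: (N, 3, 224, 224) preprocessed cell crops -> predicted class ids."""
    logits = model(images.to(device))
    return logits.argmax(dim=1).cpu()

batch = torch.randn(25, 3, 224, 224)   # stand-in for a set of 25 microscope images
print(classify_batch(batch))
```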

In addition to providing faster blood test results, deep learning can reduce physician fatigue and enhance the quality of blood cell analysis.

“AI will improve the whole medical diagnosis process, especially the doctor-patient relationship, by solving two key problems: time constraints and human resource costs,” said Chang-Fu Kuo, director of the hospital’s Center for Artificial Intelligence in Medicine.

Some blood cell types are very rare, leading to an imbalance in the training dataset. To augment the number of example images for rare cell types and to improve the model’s performance, the researchers are experimenting with generative adversarial networks, or GANs.

The hospital is also using AIRI for fracture image identification, genomics and immunofluorescence projects. While the current AI tools focus on identifying medical conditions, future applications could be used for disease prediction.

How American Express Uses Deep Learning for Better Decision Making

Financial fraud is on the rise. As the number of global transactions increases and digital technology advances, the complexity and frequency of fraudulent schemes are keeping pace.

Security company McAfee estimated in a 2018 report that cybercrime annually costs the global economy some $600 billion, or 0.8 percent of global gross domestic product.

One of the most prevalent — and preventable — types of cybercrime is credit card fraud, which is exacerbated by the growth in online transactions.

That’s why American Express, a global financial services company, is developing deep learning generative and sequential models to prevent fraudulent transactions.

“The most strategically important use case for us is transactional fraud detection,” said Dmitry Efimov, vice president of machine learning research at American Express. “Developing techniques that more accurately identify and decline fraudulent purchase attempts helps us protect our customers and our merchants.”

Cashing into Big Data

The company’s effort spanned several teams that conducted research on using generative adversarial networks, or GANs, to create synthetic data based on sparsely populated segments.

In most financial fraud use cases, machine learning systems are built on historical transactional data. The systems use deep learning models to scan incoming payments in real time, identify patterns associated with fraudulent transactions and then flag anomalies.

In some instances, like new product launches, GANs can produce additional data to help train and develop more accurate deep learning models.

Given its global integrated network with tens of millions of customers and merchants, American Express deals with massive volumes of structured and unstructured data sets.

Using several hundred data features, including the time stamps of transactional data, the American Express teams found that sequential deep learning techniques, such as long short-term memory (LSTM) networks and temporal convolutional networks, can be adapted to transaction data to produce superior results compared with classical machine learning approaches.
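
American Express hasn’t published its model code, but an LSTM that scores the most recent transaction in a customer’s sequence illustrates the general approach. The feature count, sequence length and layer sizes below are assumptions.

```python
# Minimal sketch (assumptions throughout) of a sequential fraud model: an LSTM
# over a customer's recent transactions that scores the latest one.
import torch
import torch.nn as nn

class FraudLSTM(nn.Module):
    def __init__(self, n_features: int = 64, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, seq):                 # seq: (batch, time, n_features)
        out, _ = self.lstm(seq)
        return torch.sigmoid(self.head(out[:, -1]))  # fraud score for the latest event

model = FraudLSTM()
transactions = torch.randn(32, 20, 64)      # 32 customers x 20 recent transactions
fraud_scores = model(transactions)
print(fraud_scores.shape)                   # torch.Size([32, 1])
```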

The results have paid dividends.

“These techniques have a substantial impact on the customer experience, allowing American Express to improve speed of detection and prevent losses by automating the decision-making process,” Efimov said.

Closing the Deal with NVIDIA GPUs 

Because of the huge amount of customer and merchant data American Express works with, the company selected NVIDIA DGX-1 systems, which each contain eight NVIDIA V100 Tensor Core GPUs, to build models with both TensorFlow and PyTorch.

Its NVIDIA GPU-powered machine learning techniques are also used to forecast customer default rates and to assign credit limits.

“For our production environment, speed is extremely important with decisions made in a matter of milliseconds, so the best solution to use are NVIDIA GPUs,” said Efimov.

As the systems go into production in the next year, the teams plan on using the NVIDIA TensorRT platform for high-performance deep learning inference to deploy the models in real time, which will help improve American Express’ fraud and credit loss rates.

Efimov will be presenting his team’s work at the GPU Technology Conference in San Jose in March. To learn more about credit risk management use cases from American Express, register for GTC, the premier AI conference for insights, training and direct access to experts on the key topics in computing across industries.

NVIDIA and Partners Bring AI Supercomputing to Enterprises

Academia, hyperscalers and scientific researchers have been big beneficiaries of high performance computing and AI infrastructure. Yet businesses have largely been on the outside looking in.

No longer. NVIDIA DGX SuperPOD provides businesses with a proven design formula for building and running enterprise-grade AI infrastructure at extreme scale. The reference architecture gives businesses a prescription to follow, helping them avoid exhaustive, protracted design and deployment cycles and capital budget overruns.

Today, at SC19, we’re taking DGX SuperPOD a step further. It’s available as a consumable solution that now integrates with the leading names in data center IT — including DDN, IBM, Mellanox and NetApp — and is fulfilled through a network of qualified resellers. We’re also working with ScaleMatrix to bring self-contained data centers in a cabinet to the enterprise.

The Rise of the Supercomputing Enterprise

AI is an accelerant for gaining competitive advantage. It can open new markets and even address a business’s existential threats. Formerly untrainable models for use cases like natural language processing become solvable with massive infrastructure scale.

But leading-edge AI demands leadership-class infrastructure — and DGX SuperPOD offers extreme-scale multi-node training of even the most complex models, like BERT for conversational AI.

It consolidates often siloed pockets of AI and machine learning development into a centralized shared infrastructure, bringing together data science talent so projects can quickly go from concept to production at scale.

And it maximizes resource efficiency, avoiding stranded, underutilized assets and increasing a business’s return on its infrastructure investments.

Data Center Leaders Support NVIDIA DGX SuperPOD 

Several of our partners have completed the testing and validation of DGX SuperPOD in combination with their high-performance storage offerings and the Mellanox InfiniBand and Ethernet terabit-speed network fabric.

DGX SuperPOD with IBM Spectrum Storage

“Deploying faster with confidence is only one way our clients are realizing the benefits of the DGX SuperPOD reference architecture with IBM Storage,” said Douglas O’Flaherty, director of IBM Storage Product Marketing. “With comprehensive data pipeline support, they can start with an all-NVMe ESS 3000 flash solution and adapt quickly. With the software-defined flexibility of IBM Spectrum Scale, the DGX SuperPOD design easily scales, extends to public cloud, or integrates IBM Cloud Object Storage and IBM Spectrum Discover. Supported by the expertise of our business partners, we enhance data science productivity and organizational adoption of AI.”

DGX SuperPOD with DDN Storage

“Meeting the massive demands of emerging large-scale AI initiatives requires compute, networking and storage infrastructure that exceeds architectures historically available to most commercial organizations,” said James Coomer, senior vice president of products at DDN. “Through DDN’s extensive development work and testing with NVIDIA and their DGX SuperPOD, we have demonstrated that it is now possible to shorten supercomputing-like deployments from weeks to days and deliver infrastructure and capabilities that are also rock solid and easy to manage, monitor and support. When combined with DDN’s A3I data management solutions, NVIDIA DGX SuperPOD creates a real competitive advantage for customers looking to deploy AI at scale.”

DGX SuperPOD with NetApp

“Industries are gaining competitive advantage with high performance computing and AI infrastructure, but many are still hesitant to take the leap due to the time and cost of deployment,” said Robin Huber, vice president of E-Series at NetApp. “With the proven NVIDIA DGX SuperPOD design built on top of the award-winning NetApp EF600 all-flash array, customers can move past their hesitation and will be able to accelerate their time to value and insight while controlling their deployment costs.”

NVIDIA has built a global network of partners who’ve been qualified to sell and deploy DGX SuperPOD infrastructure:

  • In North America: World Wide Technology
  • In Europe, the Middle East and Africa: ATOS
  • In Asia: LTK and Azwell
  • In Japan: GDEP

To get started, read our solution brief and then reach out to your preferred DGX SuperPOD partner.

Scaling Supercomputing Infrastructure — Without a Data Center

Many organizations that need to scale supercomputing simply don’t have access to a data center that’s optimized for the unique demands of AI and HPC infrastructure. We’re partnering with ScaleMatrix, a DGX Ready Data Center Program partner, to bring self-contained data centers in a rack to the enterprise.

In addition to colocation services for DGX infrastructure, ScaleMatrix offers its Dynamic Density Control cabinet technology, which enables businesses to bypass the constraints of data center facilities. This lets enterprises deploy DGX POD and SuperPOD environments almost anywhere while delivering the power and technology of a state-of-the-art data center.

With self-contained cooling, fire suppression, various security options, shock mounting, extreme environment support and more, the DDC solution offered through our partner Microway removes the dependency on having a traditional data center for AI infrastructure.

Learn more about this offering here.

First Time’s the Charm: Sydney Startup Uses AI to Improve IVF Success Rate

In vitro fertilization, a common treatment for infertility, is a lengthy undertaking for prospective parents, involving ultrasounds, blood tests and injections of fertility medications. If the process doesn’t end up in a successful pregnancy — which is often the case — it can be a major emotional and financial blow.

Sydney-based healthcare startup Harrison.ai is using deep learning to improve the odds of success for thousands of IVF patients. Its AI model, IVY, is used by Virtus Health, a global provider of assisted reproductive services, to help doctors evaluate which embryo candidate has the best chance of implantation into the patient.

Founded by brothers Aengus and Dimitry Tran in 2017, Harrison.ai builds customized predictive algorithms that integrate into existing clinical workflows to inform critical healthcare decisions and improve patient outcomes.

Ten or more eggs can be harvested from a patient during a single cycle of IVF. The embryos are incubated in the lab for five days before the most promising candidate (or candidates) are implanted into the patient’s uterus. Yet, the success rate of implantation for five-day embryos is under 50 percent, and closer to 25 percent for women over the age of 40, according to the U.S. Centers for Disease Control and Prevention.

“In the past, people used to have to implant three or four embryos and hope one works,” said Aengus Tran, cofounder and medical AI director of Harrison.ai, a member of the NVIDIA Inception virtual accelerator program, which offers go-to-market support, expertise, and technology for AI startups revolutionizing industries. “But sometimes that works a little too well and patients end up with twins or triplets. It sounds cute, but it can be a dangerous pregnancy.”

Built using NVIDIA V100 Tensor Core GPUs on premises and in the cloud, IVY processes time-lapse video of fertilized eggs developing in the lab, predicting which are most likely to result in a positive outcome.

The goal: a single embryo transfer that leads to a single successful pregnancy.

Going Frame by Frame

Embryologists manually analyze time-lapse videos of embryo growth to pick the highest-quality candidates. It’s a subjective process, with no universal grading system and low agreement between experts. And with five days of footage for every embryo, it’s nearly impossible for doctors to look at every frame.

Harrison.ai’s IVY deep learning model analyzes the full five-day video feed from an embryoscope, helping it surpass the performance of AI tools that provide insights based on still images.

“Most of the visual AI tools we see these days are image recognition,” said Aengus. “But with an early multi-cell embryo, the development process matters a lot more than how it looks at the end of five days. The critical event could have happened days before, and the damage already done.”

The company trained its deep learning models on a dataset from Virtus Health including more than 10,000 human embryos from eight IVF labs across four countries. Instead of annotating each video with detailed morphological features of the embryos, the team classified each embryo with a single label: positive or negative outcome. A positive outcome meant that a patient’s six-week ultrasound showed a fetus with a heartbeat — a key predictor of successful live births.

In a recent study, IVY was able to predict which embryos would develop a heartbeat with 93 percent accuracy. Aengus and Dimitry say the tool could help standardize embryo selection by reducing disagreement among human readers.

To keep up with Harrison.ai’s growing training datasets, the team upgraded their GPU clusters from four GeForce cards to the NVIDIA DGX Station, the world’s fastest workstation for deep learning. Training on the Tensor Core GPUs allowed them to leverage mixed-precision computing, shrinking their training time by 4x.
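
Mixed-precision training of this kind is typically handled with automatic mixed precision, which runs most of the math in FP16 on Tensor Cores while keeping FP32 master weights. The sketch below shows that pattern with PyTorch AMP; the tiny 3D network and synthetic clips are stand-ins, not Harrison.ai’s model.

```python
# Hedged sketch of mixed-precision training with PyTorch automatic mixed
# precision (AMP); the network and data are placeholders.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Conv3d(1, 8, 3), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
                      nn.Flatten(), nn.Linear(8, 1)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))
loss_fn = nn.BCEWithLogitsLoss()

clips = torch.randn(2, 1, 16, 64, 64, device=device)   # tiny stand-in "videos"
labels = torch.rand(2, 1, device=device).round()        # positive/negative outcome

optimizer.zero_grad()
with torch.cuda.amp.autocast(enabled=(device == "cuda")):
    loss = loss_fn(model(clips), labels)                # FP16 where it is safe
scaler.scale(loss).backward()                           # scaled gradients in FP32
scaler.step(optimizer)
scaler.update()
```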

“It’s almost unreal to have that much power at your fingertips,” Aengus said. Using the DGX Station, Harrison.ai was able to boost productivity and improve their deep learning models by training with bigger datasets.

The company uses the deskside DGX Station for experimentation, research and development. For training their biggest datasets, they scale up to larger clusters of NVIDIA V100 GPUs in Amazon EC2 P3 cloud instances — relying on NGC containers to seamlessly shift their workflows from on-premises systems to the cloud.

IVY has been used in thousands of cases in Virtus Health clinics so far. Harrison.ai is also collaborating with Vitrolife, a major embryoscope manufacturer, to more smoothly integrate its neural networks into the clinical workflow.

While Harrison.ai’s first project is for IVF, the company is also developing tools for other healthcare applications.

Clearing the Air: NASA Scientists Use NVIDIA RAPIDS to Accelerate Pollution Forecasts

Air quality is a vastly underestimated problem, said NASA research scientist Christoph Keller in a talk at this week’s GTC DC, the Washington edition of NVIDIA’s GPU Technology Conference.

Nine in 10 people breathe polluted air, and millions of deaths a year are attributed to household or outdoor air pollution. Poor air also lowers crop yields, costing agriculture billions of dollars annually.

To better understand and forecast air quality, NASA researchers are developing a machine learning model that tracks global air pollution in real time. The model also provides forecasts up to five days in advance that can help government agencies and individuals make decisions.

Keller’s team is using NVIDIA V100 Tensor Core GPUs and NVIDIA RAPIDS data science software libraries to accelerate its machine learning algorithms. The trained model, which uses data from the NASA Center for Climate Simulation to model air pollution formation, can then be plugged into an existing full earth system model to provide global air quality simulations in half the time.

Algorithms Run Like the Wind on RAPIDS, NVIDIA DGX Systems

Satellites operated by NASA and other space agencies collect massive amounts of data about what’s happening on Earth, including detailed measurements of air quality.

This data is fed into NASA’s global air quality model, but the science involved is too complex to process fast enough for real-time insights. GPU-accelerated machine learning can change that, bringing scientists closer to detailed, live air quality maps.

“NASA’s global models quickly produce terabytes of data, and what we’d like to do is train the machine learning model on these huge datasets,” said Keller, who is part of the agency’s Goddard Space Flight Center, in an interview. “That’s where we quickly reached limitations with normal software and hardware, and where I turned to GPUs and RAPIDS software.”

NVIDIA developers collaborated with Keller to accelerate the training of his machine learning models using the cuDF and XGBoost software libraries. Running on three GPU-powered systems, including an NVIDIA DGX-1, the team was able to cut training time from almost a full working day to seconds, enabling faster iteration.

“Before, you would hit the button and wait six or seven hours to get the results. Even to make a small tweak, you’d have to resubmit it and wait again,” he said. “Speeding up the training cycle was a total game changer for developing the models.”
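
The RAPIDS workflow here is straightforward: keep the training table in GPU memory with cuDF and hand it to GPU-accelerated XGBoost. The sketch below shows that pattern with placeholder columns; the actual air-quality features and targets come from NASA’s chemistry model, not from random data.

```python
# Illustrative cuDF + XGBoost pattern; column names and target are placeholders.
import cudf
import numpy as np
import xgboost as xgb

n = 50_000
rng = np.random.default_rng(1)
table = cudf.DataFrame({
    "no2":   rng.random(n),
    "ozone": rng.random(n),
    "wind":  rng.random(n),
    "temp":  rng.random(n),
})
target = cudf.Series(rng.random(n))

dtrain = xgb.DMatrix(table, label=target)   # XGBoost accepts cuDF data directly
params = {"tree_method": "gpu_hist", "objective": "reg:squarederror"}
booster = xgb.train(params, dtrain, num_boost_round=200)
```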

The scientists’ air quality forecasts are publicly available through NASA, but the team also hopes it will be used by app developers, nonprofits and cities worldwide. Government groups including the Environmental Protection Agency, the State Department and the U.S. Army Public Health Center are also interested in the data as a way to track air quality and provide timely warnings of dangerous air.

These organizations can use NASA data and forecasts to build tools that explain to the public why the air is worse on a specific day, linking air quality index data to pollution episodes such as wildfires, industrial activities, weather or heavy traffic. Governments can also rely on the forecasts to quantify the impact of specific sources of emissions, like an individual power plant.

How to Streamline the Path to an AI Infrastructure

Organizations are looking to AI to improve processes, reduce costs and achieve better results. However, AI workloads pose a different set of challenges than business workloads supported by corporate IT.

If businesses don’t pay attention to the unique requirements of AI infrastructure, they can be faced with increased deployment time, higher costs and delayed insights.

NetApp and NVIDIA are helping organizations avoid these pitfalls with a new suite of solutions that make accessing and acquiring AI-ready infrastructure easier than ever – the ONTAP AI-Ready Data Center.

Significant portions of AI budgets are allocated to solving two problems: designing a well-functioning system and outfitting a data center to house it. Because of this, investment in AI infrastructure can be daunting.

The key to deploying faster and at lower costs is to simply eliminate these factors.

NetApp ONTAP AI is fully optimized and tested infrastructure for AI workloads. It’s powered by the revolutionary NVIDIA DGX-1 and DGX-2 AI systems paired with NetApp cloud-connected all-flash storage. With ONTAP AI, users get a complete AI infrastructure that is highly performant and ready-to-run.

Optimized Infrastructure Without the Overhead 

With the ONTAP AI-Ready Data Center from our network of world-class colocation services partners, every organization can get access to AI-ready infrastructure without the overhead of designing a system from off-the-shelf components.

As a full-stack solution composed of data center hardware and software from NVIDIA and NetApp, ONTAP AI-Ready Data Center is optimized and ready to use, providing results that are predictable and scalable without tying up valuable IT and AI resources in design and implementation.

The offering gives users flexibility in how they consume AI infrastructure and resources. They can choose to deploy ONTAP AI with colocation partners in two ways:

  • ONTAP AI-Ready Data Center — Customer-owned infrastructure deployed in state-of-the-art colocation facilities that eliminates the burden of creating and maintaining an AI-ready data center.
  • ONTAP AI Test Drive — Try out ONTAP AI from select providers without the overhead of shipping or installation. Run benchmarks, test with public datasets or run your own workloads before purchasing.

Colocated AI infrastructure offers a cost-effective financial model. Users can deploy faster, quicken the pace of insights and increase return on investment without funding large-scale capital investments for their own data centers.

For organizations that already house their business infrastructure at colocation facilities, bringing the AI infrastructure to where the data resides improves performance and reduces the time and cost of migrating data between sites.

The recently announced ONTAP AI offering from Flexential is a prime example of this powerful new approach to deploying AI infrastructure.

Cost-Effective, Optimized and Supported

Going forward, leading any industry will require AI leadership. To achieve this, organizations must have a solid plan for AI infrastructure and avoid getting mired in DIY solutions that are difficult to manage, even harder to support and aren’t optimized between components or across the data center.

ONTAP AI-Ready Data Center solves AI infrastructure problems fully and cost-effectively today. It allows organizations to provide infrastructure that is purpose-built for AI and to break the barriers of enterprise IT. Expect to see additional offerings to further the vision of “AI infrastructure as a service,” making consumption even easier.

If you’re attending NetApp Insight in Las Vegas this week, stop by and chat with the NVIDIA team at booth 706 and learn more about accelerated computing with NVIDIA DGX systems and NetApp ONTAP AI. You can read about the full NVIDIA presence at the show in this blog.

To learn more about ONTAP AI Test Drive, contact us.

AI Frame of Mind: Neural Networks Bring Speed, Consistency to Brain Scan Analysis

In the field of neuroimaging, two heads are better than one. So radiologists around the globe are exploring the use of AI tools to share their heavy workloads — and improve the consistency, speed and accuracy of brain scan analysis.

“We often refer to manual annotation as the gold standard for neuroimaging, when it’s actually probably not,” said Tim Wang, director of operations at the Sydney Neuroimaging Analysis Centre, or SNAC. “In many cases, AI provides a more consistent, less biased evaluation than manual classification or segmentation.”

An Australian company co-located with the University of Sydney’s Brain and Mind Centre, SNAC conducts neuroimaging research as well as commercial image analysis for clinical research trials. The center is building AI tools to automate laborious analysis tasks in their research workflow, like isolating brain images from head scans and segmenting brain lesions.

Additional algorithms are in development and being validated for clinical use. One compares how a patient’s brain volume and lesions change over time. Another flags critical brain scans, so radiologists can more quickly attend to urgent cases.

SNAC uses the NVIDIA DGX-1 and DGX Station, powered by NVIDIA V100 Tensor Core GPUs, as well as PC workstations with NVIDIA GeForce RTX 2080 Ti graphics cards. The researchers develop their algorithms using the NVIDIA Clara suite of medical imaging tools, along with cuDNN libraries and TensorRT inference software.

Brainstorming AI Solutions

When developing medicines, pharmaceutical companies conduct clinical trials to test how effective a new drug treatment is — often using brain imaging metrics such as brain atrophy rates and lesion changes as key indicators.

To ensure accurate and consistent measurements, pharma companies rely on centralized reading centers that evaluate trial participants’ brain scans in a blind analysis.

That’s where SNAC comes in. It analyzes patient MRI and CT scans acquired at clinical sites around the world. Its expertise in multicenter studies makes it well-positioned to develop AI tools that address challenges faced by radiologists and clinicians.

With a training dataset of more than 15,000 three-dimensional CT and MRI images, SNAC is building its deep learning algorithms using the PyTorch and TensorFlow frameworks.

One of the center’s AI models automates the time-consuming task of cleaning up MRI images to isolate the brain from other parts of the head, such as the venous sinuses and fluid-filled compartments around the brain. Using the NVIDIA DGX-1 system for inference, SNAC can speed up this process by at least 10x.

“That’s no small difference,” Wang said. “Previously, this would take our analysts 20 to 30 minutes with semi-automatic methods. Now, that’s down to 2 or 3 minutes of pure machine time, while performing better and more consistently than a human.”

Another tool tackles brain lesion analysis for multiple sclerosis cases. In research and clinical trials, image analysts typically segment brain lesions and determine their volume by manually examining scans — a process that takes up to 15 minutes.

AI can shrink the time needed to determine lesion volume to just 3 seconds. That makes it possible for these metrics to be used in clinical practice as well, where, due to time constraints, radiologists often simply eyeball scans to estimate lesion volumes.
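
Once a network has segmented the lesions, the volume measurement itself is simple arithmetic: count the voxels in the predicted mask and multiply by the voxel size. A minimal sketch, with a synthetic mask standing in for a real prediction:

```python
# Hedged sketch of the lesion-volume step: voxel count times voxel size.
# The mask and voxel spacing here are synthetic placeholders.
import torch

def lesion_volume_ml(mask: torch.Tensor, voxel_spacing_mm=(1.0, 1.0, 1.0)) -> float:
    """mask: (D, H, W) binary tensor; returns lesion volume in millilitres."""
    voxel_mm3 = float(voxel_spacing_mm[0] * voxel_spacing_mm[1] * voxel_spacing_mm[2])
    return mask.sum().item() * voxel_mm3 / 1000.0   # 1 mL = 1,000 mm^3

mask = (torch.rand(180, 256, 256) > 0.999).int()    # synthetic sparse "lesions"
print(f"{lesion_volume_ml(mask):.2f} mL")
```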

“By providing quantitative, individualized neuroimaging measurements, we can help streamline and add value to clinical radiology,” said Wang.

The center collaborates with I-MED, one of the largest imaging providers in the world, as well as the computational neuroscience team at the University of Sydney’s Brain and Mind Centre. The group also works closely with radiologists at major Australian hospitals to validate its algorithms.

SNAC plans to integrate its analysis tools with systems already used by clinicians, so that once a scan is taken, it’s automatically routed to a server and processed. The AI-evaluated scan is then passed on to radiologists’ viewers — giving them the analysis results without altering their workflow.

“Someone can develop a fantastic tool, but it’s hard to ask radiologists to use it by opening yet another application, or another browser on their workstations,” Wang said. “They don’t want to do that simply because they’re time poor, often punching through a very large volume of clinical scans a day.”

Main image shows a side-by-side comparison of multiple sclerosis lesion segmentation. Left image shows manual lesion segmentation, while right shows fully automated lesion segmentation. Image courtesy of Sydney Neuroimaging Analysis Center. 
