Japan’s Fastest Supercomputer Adopts NGC, Enabling Easy Access to Deep Learning Frameworks

From discovering drugs, to locating black holes, to finding safer nuclear energy sources, high performance computing systems around the world have enabled breakthroughs across all scientific domains.

Japan’s fastest supercomputer, ABCI, powered by NVIDIA Tensor Core GPUs, enables similar breakthroughs by taking advantage of AI. The system is the world’s first large-scale, open AI infrastructure, serving researchers, engineers and industrial users as they advance their science.

The software used to drive these advances is as critical as the servers the software runs on. However, installing an application on an HPC cluster is complex and time consuming. Researchers and engineers are unproductive as they wait to access the software, and their requests to have applications installed distract system admins from completing mission-critical tasks.

Containers — packages that bundle software with its dependencies — let users pull and run applications on a system without actually installing them. They’re a win-win for users and system admins.
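
To make that workflow concrete, here’s a minimal sketch of pulling and running an NGC framework container, driven from Python for illustration. The image tag and mounted paths are placeholders rather than an official NGC example, and the GPU flag assumes Docker 19.03+ with the NVIDIA container toolkit installed.

```python
# Minimal sketch: pull an NGC framework container and run a script inside it,
# without installing the framework on the host. Image tag and paths are
# illustrative placeholders.
import subprocess

IMAGE = "nvcr.io/nvidia/tensorflow:19.05-py3"   # hypothetical tag for illustration

# Pull the container once; later runs reuse the cached image.
subprocess.run(["docker", "pull", IMAGE], check=True)

# Run a training script inside the container with GPU access.
# ("--gpus all" assumes Docker 19.03+; older setups use "--runtime=nvidia".)
subprocess.run([
    "docker", "run", "--gpus", "all", "--rm",
    "-v", "/home/user/project:/workspace",      # mount the user's code and data
    IMAGE,
    "python", "/workspace/train.py",
], check=True)
```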

NGC: Driving Ease of Use of AI, Machine Learning and HPC Software

NGC offers over 50 GPU-optimized containers for deep learning frameworks, machine learning algorithms and HPC applications that run on both Docker and Singularity.

The HPC applications provide scalable performance on GPUs within and across nodes. NVIDIA continuously optimizes key deep learning frameworks and libraries, with updates released monthly. This provides users access to top performance for training and inference for all their AI projects.

ABCI Runs NGC Containers

Researchers and industrial users are taking advantage of ABCI to run AI-powered scientific workloads across domains, from nuclear physics to manufacturing. Others are taking advantage of the system’s distributed computing to push the limits on speeding AI training.

To achieve this, the right set of software and hardware tools must be in place, which is why ABCI has adopted NGC.

“Installing deep learning frameworks from the source is complicated and upgrading the software to keep up with the frequent releases is a resource drain,” said Hirotaka Ogawa, team leader of the Artificial Intelligence Research Center at AIST. “NGC allows us to support our users with the latest AI frameworks and the users enjoy the best performance they can achieve on NVIDIA GPUs.”

ABCI has turned to containers to address another user need — portability.

“Most of our users are from industrial segments who are looking for portability between their on-prem systems and ABCI,” said Ogawa. “Thanks to NGC and Singularity, the users can develop, test, and deploy at scale across different platforms. Our sampling data showed that NGC containers were used by 80 percent of the over 100,000 jobs that ran on Singularity.”

NGC Container Replicator Simplifies Ease of Use for System Admins and Users

System admins managing HPC systems at supercomputing centers and universities can now download and save NGC containers on their clusters. This gives users faster access to the software, reduces network traffic and saves storage space.

NVIDIA offers NGC Container Replicator, which automatically checks and downloads the latest versions of NGC containers.


Without lifting a finger, system admins can ensure that their users benefit from the superior performance and newest features from the latest software.

More Than Application Containers

In addition to deep learning containers, NGC hosts 60 pre-trained models and 17 model scripts for popular use cases like object detection, natural language processing and text-to-speech.

It’s much faster to tune a pre-trained model for a use case than to start from scratch. The pre-trained models allow researchers to quickly fine-tune a neural network or build on top of an already optimized network for specific use-case needs.
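
As a rough sketch of what that looks like in practice — using torchvision’s ImageNet-pretrained ResNet-50 as a stand-in for a model pulled from NGC — the example below freezes the backbone and trains only a new classification head on a dummy batch:

```python
# Minimal fine-tuning sketch: reuse pretrained weights, train only a new head.
# The pretrained backbone and the 10-class task are illustrative stand-ins.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(pretrained=True)          # start from learned weights
for param in model.parameters():
    param.requires_grad = False                    # freeze the backbone

model.fc = nn.Linear(model.fc.in_features, 10)     # new head for a 10-class task

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch; a real run would loop
# over a labeled dataset for the new use case.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```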

The model training scripts follow best practices, have state-of-the-art accuracy and deliver superior performance. They’re ideal for researchers and data scientists planning to build a network from scratch and customize it to their liking.

The models and scripts take advantage of mixed precision powered by NVIDIA Tensor Core GPUs to deliver up to 3x deep learning performance speedups over previous generations.

Take NGC for a Spin

NGC containers are built and tested to run on-prem and in the cloud. They also support hybrid as well as multi-cloud deployments. Visit ngc.nvidia.com, pull your application container on any GPU-powered system or major cloud instance, and see how easy it is to get up and running for your next research project.


An On-Ramp for AI: NVIDIA Expands NGC-Ready Portfolio and Support Services

Industries are doubling down on AI for competitive advantage.

Whether it’s identifying fraud in insurance claims or predicting customer demand in retail, business leaders are realizing the benefits of AI and scaling the technology across their organizations.

Enterprises need their data scientists and developers to have the right AI-powered solutions, right now. So IT leaders and system admins are tasked with identifying, purchasing and supporting infrastructure that can be set up fast, deliver maximum performance and have minimal downtime.

They also want to work with suppliers they are comfortable with and procure systems that easily integrate with current infrastructure.

To meet these needs, we created NVIDIA NGC-Ready, a server validation program that helps buyers quickly identify systems capable of running today’s demanding AI workloads in their data centers, in the cloud and at the edge.

NGC-Ready Helps Businesses Get in the AI Fast Lane

NGC-Ready systems increase user productivity by running the AI and machine learning software provided by the NGC container registry. NGC includes GPU-accelerated software that delivers optimal performance, easy access to the latest frameworks and pre-trained models with state-of-the-art accuracy for common use cases.

We’re announcing seven new NGC-Ready validated systems from six partners:

  • ASRock Rack 2U2G_C622
  • ASUS ESC 4000 G4
  • GIGABYTE G191-H44
  • QCT QuantaGrid D52BV-2U
  • QCT QuantaGrid D52G-4U
  • Tyan TN76-B7102
  • WiWynn SV310G3

These systems join a large and growing portfolio of NGC-Ready systems offered by the world’s leading OEMs.

Powered by NVIDIA T4 and NVIDIA V100 GPUs, NGC-Ready systems make it easy for system admins to pick solutions that are the best fit for their AI workloads. These systems are validated for functionality and deliver optimized performance of AI and machine learning workloads so data scientists and developers can quickly build their solutions.

And, because each NGC-Ready validated system has demonstrated its ability to run demanding AI inference workloads at the edge, every NVIDIA T4-powered NGC-Ready system is a part of the NVIDIA EGX edge computing platform announced today.

NGC Support Services Extended to New NGC-Ready Systems

With the growth in AI adoption, many IT teams are managing new types of workloads, software stacks and hardware for a diverse set of users. They need to address issues fast, but may lack the expertise.

NVIDIA’s NGC Support Services provide enterprise-grade support to ensure NGC-Ready systems run optimally and maximize system utilization and user productivity. The services are available for purchase as an option with all of the NGC-Ready systems announced today.

The support services cover L1-L3 issues and provide IT teams with direct access to NVIDIA subject-matter experts to quickly address software issues, minimize system downtime and maintain user productivity.

NGC Software Enables AI Everywhere: Data Center, Cloud and Edge

To effectively build AI applications and workflows, development software needs to run in the data center, in the cloud and on the edge. A common use case is to train models in the data center or cloud, where processing power is abundant, but then deploy the trained models for inference on the edge, where new data comes in for prediction.

For example, organizations are discovering the power of video analytics for use cases such as parking management, retail inventory replenishment, securing critical infrastructure and logistics management.

To help with this, NGC provides a comprehensive AI workflow that enables training where the data resides — on premises or in the cloud — with the NVIDIA Transfer Learning Toolkit. And NVIDIA’s DeepStream SDK enables inference at the edge, closer to where data is collected.
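
As a rough illustration of that hand-off from training to edge inference — independent of any particular SDK — the sketch below exports a placeholder PyTorch model to the portable ONNX format, which edge inference engines such as TensorRT or ONNX Runtime can consume.

```python
# Minimal sketch of handing a trained model off for edge inference: export it
# to ONNX so an inference engine on the edge device can load it. The tiny
# model here is a placeholder for a real trained network.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)   # the input shape the edge device will see
torch.onnx.export(model, dummy_input, "detector.onnx",
                  input_names=["image"], output_names=["scores"])
```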

Deploy NGC-Ready Systems Today On-Prem or at the Edge

Quickly deploy AI infrastructure and maximize user productivity either on-prem or at the edge with NGC-Ready systems and NGC Support Services today.


ACR AI-LAB and NVIDIA Make AI in Hospitals Easy on IT, Accessible to Every Radiologist

For radiology to benefit from AI, there need to be easy, consistent and scalable ways for hospital IT departments to implement the technology. It’s a return to a service-oriented architecture, where logical components are separated and can each scale individually, and an efficient use of the additional compute power these tools require.

AI is coming from dozens of vendors as well as internal innovation groups, and needs a place within the hospital network to thrive. That’s why NVIDIA and the American College of Radiology (ACR) have published a Hospital AI Reference Architecture Framework. It helps hospitals easily get started with AI initiatives.

A Cookbook to Make AI Easy

The Hospital AI Reference Architecture Framework was published at yesterday’s annual ACR meeting for public comment. This follows the recent launch of the ACR AI-LAB, which aims to standardize and democratize AI in radiology. The ACR AI-LAB uses infrastructure such as NVIDIA GPUs and the NVIDIA Clara AI toolkit, as well as GE Healthcare’s Edison platform, which helps bring AI from research into FDA-cleared smart devices.

The Hospital AI Reference Architecture Framework outlines how hospitals and researchers can easily get started with AI initiatives. It includes descriptions of the steps required to build and deploy AI systems, and provides guidance on the infrastructure needed for each step.

Hospital AI Architecture Framework

To drive an effective AI program within a healthcare institution, there must first be an understanding of the workflows involved, the compute needs and the data required. It comes from a foundation of enabling better insights from patient data with easy-to-deploy compute at the edge.

Using a transfer client, seed models can be downloaded from a centralized model store. A clinical champion uses an annotation tool to locally create data that can be used for fine-tuning the seed model or training a new model. Then, using the training system with the annotated data, a localized model is instantiated. Finally, an inference engine is used to conduct validation and ultimately inference on data within the institution.

These four workflows sit atop AI compute infrastructure, which can be accelerated with NVIDIA GPU technology for best performance, alongside storage for models and annotated studies. These workflows tie back into other hospital systems such as PACS, where medical images are archived.
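
As a loose, hypothetical illustration of those four workflows — seed model, local annotation, local training and in-house inference — here’s a toy PyTorch sketch; the tiny network and random tensors are stand-ins, not the ACR AI-LAB tooling itself.

```python
# Toy sketch of the four workflows above; the tiny network and random data are
# stand-ins for a real seed model and locally annotated studies.
import torch
import torch.nn as nn

# 1. Transfer client: in practice the seed weights come from the centralized
#    model store; a freshly initialized toy network stands in here.
model = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 2, 1))          # 2-class, per-pixel output

# 2. Annotation tool: locally labelled studies (dummy image/mask pairs here).
images = torch.randn(4, 1, 64, 64)
masks = torch.randint(0, 2, (4, 64, 64))

# 3. Training system: fine-tune the seed model on the local annotations.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
for _ in range(5):
    loss = nn.functional.cross_entropy(model(images), masks)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
torch.save(model.state_dict(), "localized_model.pt")

# 4. Inference engine: run the localized model on a new study inside the institution.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 1, 64, 64)).argmax(dim=1)
```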

Three Magic Ingredients: Hospital Data, Clinical AI Workflows, AI Computing

Healthcare institutions don’t have to build the systems to deploy AI tools themselves.

This scalable architecture is designed to support and provide computing power to solutions from different sources. GE Healthcare’s Edison platform now uses NVIDIA’s TRT-IS inference capabilities to help AI run in an optimized way within GPU-powered software and medical devices. This integration makes it easier to deliver AI from multiple vendors into clinical workflows — and is the first example of the AI-LAB’s efforts to help hospitals adopt solutions from different vendors.

Together, Edison and TRT-IS offer a ready-made device inferencing platform that is optimized for GPU-compliant AI, so models built anywhere can be deployed in an existing healthcare workflow.

Hospitals and researchers are empowered to embrace AI technologies without building their own standalone technology or yielding their data to the cloud, which has privacy implications.


NVIDIA and Red Hat Team to Accelerate Enterprise AI

For enterprises looking to get their GPU-accelerated AI and data science projects up and running more quickly, life just got easier.

At Red Hat Summit today, NVIDIA and Red Hat introduced the combination of NVIDIA’s GPU-accelerated computing platform and the just-announced Red Hat OpenShift 4 to speed on-premises Kubernetes deployments for AI and data science.

The result: Kubernetes management tasks that used to take an IT administrator the better part of a day can now be completed in under an hour.

More GPU Acceleration, Less Deployment Hassle

This collaboration comes at a time when enterprises are relying on AI and data science to turn their vast amounts of data into actionable intelligence.

But meaningful AI and data analytics work requires accelerating the full stack of enterprise IT software with GPU computing. Every layer of software — from NVIDIA drivers to container runtimes to application frameworks — needs to be optimized.

Our CUDA parallel computing architecture and CUDA-X acceleration libraries have been embraced by a community of more than 1.2 million developers for accelerating applications across a broad set of domains — from AI to high-performance computing to VDI.

And because NVIDIA’s common architecture runs on every computing device imaginable — from a laptop to the data center to the cloud — the investment in GPU-accelerated applications is easy to justify and just makes sense.

Accelerating AI and data science workloads is only the first step, however. Getting the optimized software stack deployed the right way in large-scale, GPU-accelerated data centers can be frustrating and time consuming for IT organizations. That’s where our work with Red Hat comes in.

Red Hat OpenShift is the leading enterprise-grade Kubernetes platform in the industry. Advancements in OpenShift 4 make it easier than ever to deploy Kubernetes across a cluster. Red Hat’s investment in Kubernetes Operators, in particular, reduces administrative complexity by automating many routine data center management and application lifecycle management tasks.

NVIDIA has been working on its own GPU operator to automate much of the work IT managers previously did through shell scripts, such as installing device drivers, ensuring the proper GPU container runtimes are present on all nodes in the data center and monitoring GPUs.

Thanks to our work with Red Hat, once the cluster is set up, you simply run the GPU operator to add the necessary dependencies to the worker nodes in the cluster. It’s just that easy. This can make it as simple for an organization to get its GPU-powered data center clusters up and running with OpenShift 4 as it is to spin up new cloud resources.
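
From the user’s side, the result is that a GPU becomes just another resource in the pod spec. Here’s a hedged sketch using the standard Kubernetes Python client; it assumes the worker nodes already expose the nvidia.com/gpu resource (which the GPU operator sets up), and the image tag, namespace and command are illustrative.

```python
# Minimal sketch: schedule a single-GPU job on a cluster whose nodes expose
# the nvidia.com/gpu resource. Image, namespace and command are illustrative.
from kubernetes import client, config

config.load_kube_config()                           # use the current kubeconfig context
api = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-training-job"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="nvcr.io/nvidia/pytorch:19.04-py3",   # hypothetical NGC image tag
                command=["python", "/workspace/train.py"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}           # request one GPU
                ),
            )
        ],
    ),
)

api.create_namespaced_pod(namespace="default", body=pod)
```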

Preview and Early Access Program

At Red Hat Summit, in booth 1039, we’re showing a preview of how easy it is to set up bare-metal GPU clusters with OpenShift and GPU operators.

Also, you won’t want to miss Red Hat Chief Technology Officer Chris Wright’s keynote on Thursday when NVIDIA Vice President of Compute Software Chris Lamb will join him on stage to demonstrate how our technologies work together and discuss our collaboration in further detail.

Red Hat and NVIDIA are inviting our joint customers to join a white-glove early access program. Customers who want to learn more or participate in the early access program can sign up at https://www.openshift.com/accelerated-ai.


Seeing Stars: Astronomers Turn to AI to Track Galaxies as New Telescopes Come Online

Good news: astronomers are getting new tools to let them see farther and better than ever before. The bad news: they’ll soon be getting more data than humans can handle.

To turn the vast quantities of data that will soon be pouring out of these instruments into world-changing scientific discoveries, Brant Robertson — currently a visiting professor at the Institute for Advanced Study in Princeton and an associate professor of astronomy at UC Santa Cruz — is turning to AI.

“Astronomy is on the cusp of a new data revolution,” he told a packed room at this week’s GPU Technology Conference in Silicon Valley.

Better Eyes on the Sky

Within a few years, the range of instruments available to the world’s star-gazers will give them once-unimagined capabilities. Measuring an enormous 6.5 meters across, the James Webb Space Telescope — which will be deployed by NASA, the U.S. space agency — will be sensitive enough to give us a peek back at galaxies formed just a few hundred million years after the Big Bang.

The Large Synoptic Survey Telescope gets less press, but it has astronomers equally excited. The telescope, largely funded by the U.S. National Science Foundation and the Department of Energy, and being built on a mountaintop in Chile, will give astronomers the ability to survey the entire southern sky every three nights. This will produce a massive amount of data — 10 terabytes a night.

The Large Synoptic Survey Telescope, on Cerro Pachón, in Chile, will give astronomers the ability to survey the entire southern sky every three nights when it is completed in 2020.

Finally, the Wide Field Infrared Survey Telescope will put an enormous digital camera into space. The satellite, which has origins in the U.S. spy satellite program, will feature a 288-megapixel multi-band near-infrared camera with a field of view 100 times larger than that of the Hubble.

‘Richly Complex’ Data

Together, these three instruments will generate vast quantities of “richly complex” data, Robertson said. “We want to take that information and learn as much as we can,” he said. “Both from individual pixels and by aggregating them together.”

It’s a task far too large for humans alone. To keep up, Robertson is turning to AI. Created by Ryan Hausen, a PhD student in UC Santa Cruz’s computer science department, Morpheus is a deep learning framework that classifies astronomical objects, such as galaxies, on a pixel-by-pixel basis from the raw data streaming out of telescopes such as the Hubble Space Telescope.

“In astronomy we really do care about the technological advances that people in this room are engineering,” Robertson told his audience at GTC.

Translation: in order to find new stars in outer space, this prominent astrophysicist is looking, first, to deep learning stars here on Earth for help.

Image credit: NASA. 


GTC 2019: Huang Kicks Off GTC, Focuses on NVIDIA Datacenter Momentum, Blue Chip Partners

NVIDIA’s message was unmistakable as it kicked off the 10th annual GPU Technology Conference: it’s doubling down on the datacenter.

Founder and CEO Jensen Huang delivered a sweeping opening keynote at San Jose State University, describing the company’s progress accelerating the sprawling datacenters that power the world’s most dynamic industries.

With a record GTC registered attendance of 9,000, he rolled out a spate of new technologies, detailed their broad adoption by industry leaders including Cisco, Dell, Hewlett Packard Enterprise and Lenovo, and highlighted how NVIDIA technologies are being put to work by some of the world’s biggest names, including Accenture, Amazon, Charter Spectrum, Microsoft and Toyota.

“The accelerated computing approach that we pioneered is really taking off,” said Huang, who exactly a week ago announced the company’s $6.9 billion acquisition of Mellanox, a leader in high-performance computing interconnect technology. “If you take a look at what we achieved last year, the momentum is absolutely clear.”

To be sure, Huang also detailed progress outside the data center, rolling out innovations targeting everything from robotics to pro graphics to the automotive industry.

Developers, Developers, Developers

The recurring theme, however, was how NVIDIA’s ability to couple software and silicon delivers the advances in computing power needed to transform torrents of data into insights and intelligence.

“Accelerated computing is not just about the chips,” Huang said. “Accelerated computing is a collaboration, a codesign, a continuous optimization between the architecture of the chip, the systems, the algorithm and the application.”

As a result, the GPU developer ecosystem is growing fast, Huang said. The number of developers has grown to more than 1.2 million from 800,000 last year; there are now 125 GPU-powered systems among the world’s 500 fastest supercomputers; and there are more than 600 applications powered by NVIDIA’s CUDA parallel computing platform.

Mellanox — whose interconnect technology helps power more than half of the world’s 500 fastest supercomputers — complements NVIDIA’s strength in datacenters and high-performance computing, Huang said, explaining why NVIDIA agreed to buy the company earlier this month.

Mellanox CEO Eyal Waldman, who joined Huang on stage, said: “We’re seeing a great growth in data, we’re seeing an exponential growth. The program-centric datacenter is changing into a data-centric datacenter, which means the data will flow and create the programs, rather than the programs creating the data.”

Bringing AI to Datacenters

These technologies are all finding their way into the world’s datacenters as enterprises build more powerful servers — “scaling up” with “capability” systems, as Huang called them — and network their servers more closely together than ever — “scaling out” with “capacity” systems — as businesses seek to turn data into a competitive advantage.

To help businesses move faster, Huang introduced CUDA-X AI, the world’s only end-to-end acceleration libraries for data science. CUDA-X AI arrives as businesses turn to AI — deep learning, machine learning and data analytics — to make data more useful, Huang explained.

The typical workflow for all these: data processing, feature determination, training, verification and deployment. CUDA-X AI unlocks the flexibility of our NVIDIA Tensor Core GPUs to uniquely address this end-to-end AI pipeline.

Matt Garman, vice president of computing services at Amazon Web Services, joined NVIDIA CEO Jensen Huang on stage Monday at GTC.

CUDA-X AI has been adopted by all the major cloud services, including Amazon Web Services, Google Cloud Platform, and Microsoft Azure. It’s been adopted by Charter, PayPal, SAS, and Walmart.

Huang also announced servers equipped with our NVIDIA T4 inferencing GPUs from all the world’s top computer and server makers. T4 will also be offered by Amazon Web Services.

“Think about not just the costs that they’re saving, but the most precious resource that these data scientists have — time and iterations,” said Matt Garman, vice president of computing services at Amazon Web Services.

Turing, RTX, and Omniverse

NVIDIA’s Turing GPU architecture — and its RTX real-time ray tracing technology — is also being widely adopted. Huang highlighted more than 20 partners supporting RTX, including Adobe, Autodesk, Dassault Systèmes, Pixar, Siemens, Unity, Unreal and Weta Digital.

And to support the fast-growing number of creative professionals working across an increasingly complex pipeline around the globe, Huang introduced Omniverse, which enables them to harness multiple applications to create and share scenes across different teams and from different locations. He described it as a collaboration tool — like Google Docs for 3D designers — who could be located anywhere in the world while working on the same project.

“We wanted to make a tool that made it possible for studios all around the world to collaborate,” Huang said. “Omniverse basically connects up all the designers in the studios, it works with every tool.”

To speed the work of graphics pros using these and other tools, Huang introduced the NVIDIA RTX Server, a reference architecture that will be delivered by top system vendors.

The massive power savings alone mean these machines don’t just accelerate your work, they pay for themselves. “I used to say ‘The more you buy the more you save,’ but I think I was wrong,” Huang said, with a smile. “RTX Servers are free.”

To accelerate data preparation, model training and visualization, Huang also introduced the NVIDIA-powered Data Science Workstation. Built with Quadro RTX GPUs and pre-installed with CUDA-X AI accelerated machine learning and deep learning software, these systems for data scientists are available from global workstation providers.

Bringing gaming technology to the datacenter as well, Huang announced the GeForce NOW Alliance. Built around specialized pods, each packing 1,280 GPUs in 10 racks, all interconnected with Mellanox high-speed interconnect technology, it expands NVIDIA’s GFN online gaming service through partnerships with global telecoms providers.

Together, GeForce NOW Alliance partners will scale GeForce NOW to serve millions more gamers, Huang said. SoftBank and LG Uplus will be among the first partners to deploy RTX cloud gaming servers in Japan and Korea later this year.

To underscore his announcement, he rolled a witty demo featuring characters in high-tech armor at a futuristic firing range, drawing broad applause from the audience. “Very few tech companies get to sit at the intersection of art and science, and it’s such a thrill to be here,” Huang said. “NVIDIA is the ILM of real-time computer graphics, and you can see it here.”

Robotics

Inviting makers to build on NVIDIA’s platform, Huang announced Jetson Nano. It’s a small, powerful CUDA-X AI computer that delivers 472 GFLOPS of compute performance for running modern AI workloads while consuming just 5 watts. It supports the same architecture and software powering America’s fastest supercomputers.

Jetson Nano will come in two flavors: a $99 devkit for makers, developers, learners and students, available now; and a $129 production-ready module for creating mass-market AI-powered edge systems, available in June 2019.

“Here’s the amazing thing about this little thing,” Huang said. “It’s 99 dollars — the whole computer — and if you use Raspberry Pi and you just don’t have enough computer performance, you just get yourself one of these, and it runs the entire CUDA-X AI stack.”

Huang also announced the general availability of the Isaac SDK, a toolbox that saves manufacturers, researchers and startups hundreds of hours by making it easier to add AI for perception, navigation and manipulation into next-generation robots.

Autonomous Vehicles

Huang finished his keynote with a flurry of automotive news.

He announced that NVIDIA is collaborating with Toyota, Toyota Research Institute-Advanced Development in Japan and Toyota Research Institute in the United States on the entire end-to-end workflow of developing, training and validating self-driving vehicles.

“Today we are announcing that the world’s largest car company is partnering with us from end to end,” Huang said.

The deal builds on an ongoing relationship with Toyota to utilize the DRIVE AGX Xavier AV computer, and expands the collaboration to new testing and validation using DRIVE Constellation — which is now available and allows automakers to simulate billions of miles of driving in all conditions.

And Huang announced Safety Force Field — a driving policy designed to shield self-driving cars from collisions, a sort of “cocoon” of safety.

“We have a computational method that detects the surrounding cars and predicts their natural path – knowing our own path – and computationally avoids traffic,” Huang said, adding that the open software has been validated in simulation and can be combined with any driving software.


NVIDIA Expands NGC Software Hub with Tools for Data Scientists to Build Optimized Solutions Faster

Whether advancing science, building self-driving cars or gathering business insight from mountains of data, data scientists, researchers and developers need powerful GPU compute. They also need the right software tools.

AI is complex and building models can be time consuming. So container technology plays a vital role in simplifying complex deployments and workflows.

At GTC 2019, we’ve supercharged NGC — a hub of essential software for deep learning, machine learning, HPC and more — with pre-trained AI models, model training scripts and industry-specific software stacks.

With these new tools, no matter your skill level, you can quickly and easily realize value with AI.

NGC Takes Care of the Plumbing, So You Can Focus on Doing Your Business

Data scientists’ time is expensive, and the compute resources they need to develop models are in high demand. If they spend hours or even days compiling a framework from source just to find errors, that’s a loss of productivity, revenue and competitive edge.

Thousands of data scientists and developers have pulled performance-optimized deep learning framework containers like TensorFlow and PyTorch, updated monthly, from NGC because they can bypass time-consuming and error-prone deployment steps and instead focus on building their solutions.

NGC lowers the barrier to entry for companies that want to engage in the latest trends in computing. And for those already engaged, it lets them deliver greater value, faster.

Accelerate AI Projects with Pre-Trained Models and Training Scripts

Many AI applications have common needs: classification, object detection, language translation, text-to-speech, recommender engines, sentiment analysis and more. When developing applications or services with these capabilities, it’s much faster to tune a pre-trained model for your use case than to start from scratch.

NGC’s new model registry provides data scientists and researchers with a repository of the most popular AI models, giving them a starting point to retrain, benchmark and rapidly build their AI applications.

NGC enterprise account holders also can upload, share and version their own models across their organizations and teams through a hosted private registry. The model registry is accessible through https://ngc.nvidia.com and a command line interface, so users can deploy it in a hybrid cloud environment and provide their organizations with controlled access to versioned models.

NGC also provides model training scripts with best practices that take advantage of mixed precision powered by the NVIDIA Tensor Cores that enable NVIDIA Turing and Volta GPUs to deliver up to 3x performance speedups in training and inference over previous generations.
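
For readers curious what mixed precision looks like in code, here’s a minimal PyTorch sketch using automatic mixed precision on a placeholder model; the NGC model scripts apply the same idea with production-ready defaults.

```python
# Minimal sketch of mixed-precision training with PyTorch automatic mixed
# precision; the model and data are placeholders, and a CUDA-capable GPU with
# Tensor Cores is assumed.
import torch
import torch.nn as nn

device = "cuda"
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()       # rescales the loss to avoid FP16 underflow

x = torch.randn(64, 1024, device=device)
y = torch.randint(0, 10, (64,), device=device)

optimizer.zero_grad()
with torch.cuda.amp.autocast():            # matmuls run in FP16 on Tensor Cores
    loss = loss_fn(model(x), y)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```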

By offering models and training scripts that have been tested for accuracy and convergence, NGC provides users with centralization and curation of the most important NVIDIA deep learning assets.


Training and Deployment Stacks for Medical Imaging and Smart Cities

An efficient workflow across industries starts from pre-trained models and then performs transfer learning training with new data. Next, it prunes and optimizes the network, and then deploys to edge devices for inference. The combination of these pre-trained models with transfer learning eliminates the high costs associated with large-scale data collection, labeling and training models from scratch, providing domain experts a jumpstart on their deep learning workflows.
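
The pruning step mentioned above can be as simple as zeroing out low-magnitude weights before export. Here’s a hedged sketch using PyTorch’s built-in pruning utilities on a toy layer — the industry-specific toolkits described below automate this kind of step, so this is only meant to show the idea.

```python
# Minimal sketch of magnitude pruning on one layer; real workflows prune and
# retrain a whole network before exporting it for edge deployment.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Conv2d(16, 32, kernel_size=3)

# Zero out the 30% of weights with the smallest magnitude.
prune.l1_unstructured(layer, name="weight", amount=0.3)

# Make the pruning permanent so the layer can be exported as-is.
prune.remove(layer, "weight")

sparsity = (layer.weight == 0).float().mean().item()
print(f"weight sparsity after pruning: {sparsity:.0%}")
```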

However, the details of the training, optimization and deployment differ dramatically by industry. NGC now provides industry-specific workflows for smart cities and medical imaging.

For smart cities, the NVIDIA Transfer Learning Toolkit for Streaming Analytics provides transfer learning tailored to intelligent video analytics workloads, such as object detection and classification from frames of camera video. Then the retrained, optimized and pruned models are deployed to NVIDIA Tesla or Jetson platforms through the NVIDIA DeepStream SDK for smart cities.

For medical imaging, the NVIDIA Clara Train SDK enables medical institutions to start with pre-trained models of MRI scans for organ segmentation and use transfer learning to improve those models based on datasets owned by that institution. Clara Train produces optimized models, which are then deployed using the NVIDIA Clara Deploy SDK to provide enhanced segmentation on new patient scans.

NGC-Ready Systems — Validated Platforms Optimized for AI Workloads

NGC-Ready systems, offered by top system manufacturers around the world, are validated by NVIDIA so data scientists and developers can quickly get their deep learning and machine learning workloads up and running optimally.

For maximum performance, systems are powered by NVIDIA V100 GPUs, with 640 Tensor Cores and up to 32GB of memory. For maximum utilization, systems are powered by the new NVIDIA T4 GPUs, which excel across the full range of accelerated workloads — machine learning, deep learning, virtual desktops and HPC. View a list of validated NGC-Ready systems.

Deploy AI Infrastructure with Confidence

The adoption of AI across industries has skyrocketed. This has led IT teams to support new types of workloads, software stacks and hardware for a diverse set of users. While the playing field has changed, the need to minimize system downtime, and keep users productive, remains critical.

To address this concern, we’ve introduced NVIDIA NGC Support Services, which provide enterprise-grade support to ensure NGC-Ready systems run optimally and maximize system utilization and user productivity. These new services provide IT teams with direct access to NVIDIA subject-matter experts to quickly address software issues and minimize system downtime.

NGC Support Services are available through sellers of NGC-Ready systems, with immediate availability from Cisco for its NGC-Ready validated NVIDIA V100 system, Cisco UCS C480 ML. HPE will offer the services for the HPE ProLiant DL380 Gen10 server as a validated NGC-Ready NVIDIA T4 server in June. Several other OEMs are expected to begin selling the services in the coming months.

Get Started with NGC Today

Pull and run the NGC containers and pre-trained models at no charge on GPU-powered systems or cloud instances at ngc.nvidia.com.


NVIDIA Announces CUDA-X AI SDK for GPU-Accelerated Data Science

Data scientists working in data analytics, machine learning and deep learning will get a massive speed boost with NVIDIA’s new CUDA-X AI libraries.

Unlocking the flexibility of Tensor Core GPUs, CUDA-X accelerates:

  • … data science from ingest of data, to ETL, to model training, to deployment.
  • … machine learning algorithms for regression, classification, clustering.
  • … every deep learning training framework and, with this release, automatically optimizes for NVIDIA Tensor Core GPUs.
  • … inference and large-scale Kubernetes deployment in the cloud.
  • … data science on PCs, workstations, supercomputers, and in cloud and enterprise data centers.
  • … data science in Amazon Web Services, Google Cloud and Microsoft Azure AI services.
  • … data science.

CUDA-X accelerates data science.

Introduced today at NVIDIA’s GPU Technology Conference, CUDA-X AI is the only end-to-end platform for the acceleration of data science.

CUDA-X AI arrives as businesses turn to AI — deep learning, machine learning and data analytics — to make data more useful.

The typical workflow for all these: data processing, feature determination, training, verification and deployment.

CUDA-X AI unlocks the flexibility of our NVIDIA Tensor Core GPUs to uniquely address this end-to-end AI pipeline.

Capable of speeding up machine learning and data science workloads by as much as 50x, CUDA-X AI consists of more than a dozen specialized acceleration libraries.

It’s already accelerating data analysis with cuDF; deep learning primitives with cuDNN; machine learning algorithms with cuML; and data processing with DALI, among others.

Together, these libraries accelerate every step in a typical AI workflow, whether it involves using deep learning to train speech and image recognition systems or data analytics to assess the risk profile of a mortgage portfolio.

Each step in these workflows requires processing large volumes of data, and each step benefits from GPU accelerated computing.
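
For a small taste of what that looks like in code, here’s a minimal sketch that keeps an entire (synthetic) pipeline on the GPU with two of the libraries above — cuDF for the dataframe work and cuML for the model; it assumes a machine with RAPIDS installed.

```python
# Minimal sketch of a GPU-resident pipeline: load and filter data with cuDF,
# then cluster it with cuML, without the data leaving the GPU. Synthetic data.
import numpy as np
import cudf
from cuml.cluster import KMeans

# cuDF mirrors much of the pandas API, but runs on the GPU.
df = cudf.DataFrame({
    "x": np.random.rand(100_000),
    "y": np.random.rand(100_000),
})

# A simple ETL step on the GPU.
df = df[df["x"] > 0.1]

# Fit a clustering model directly on the GPU dataframe.
kmeans = KMeans(n_clusters=8)
kmeans.fit(df)
print(kmeans.cluster_centers_)
```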

Broad Adoption

As a result, CUDA-X AI is relied on by top companies such as Charter, Microsoft, PayPal and Walmart.

It’s integrated into major deep learning frameworks such as TensorFlow, PyTorch and MXNet.

Major cloud service providers around the world use CUDA-X AI to speed up their cloud services.

And today eight of the world’s leading computer makers announced data science workstations and servers optimized to run NVIDIA’s CUDA-X AI libraries.

Available Everywhere

CUDA-X AI acceleration libraries are freely available as individual downloads or as containerized software stacks from the NVIDIA NGC software hub.

They can be deployed everywhere, including desktops, workstations, servers and on cloud computing platforms.

It’s integrated into all the data science workstations announced at GTC today. And all the NVIDIA T4 servers announced today are optimized to run CUDA-X AI.

Learn more at https://www.nvidia.com/en-us/technologies/cuda-x.


NVIDIA Sets Six Records in AI Performance

NVIDIA has set six AI performance records with today’s release of the industry’s first broad set of AI benchmarks.

Backed by Google, Intel, Baidu, NVIDIA and dozens more technology leaders, the new MLPerf benchmark suite measures a wide range of deep learning workloads. Aiming to serve as the industry’s first objective AI benchmark suite, it covers such areas as computer vision, language translation, personalized recommendations and reinforcement learning tasks.

NVIDIA achieved the best performance in all six of the MLPerf benchmarks it submitted results for. These cover a variety of workloads and infrastructure scales — ranging from 16 GPUs on one node to 640 GPUs across 80 nodes.

The six categories include image classification, object instance segmentation, object detection, non-recurrent translation, recurrent translation and recommendation systems. NVIDIA did not submit results for the seventh category, reinforcement learning, which does not yet take advantage of GPU acceleration.

A key benchmark on which NVIDIA technology performed particularly well was language translation, training the Transformer neural network in just 6.2 minutes. More details on all six submissions are available on the NVIDIA Developer news center.

NVIDIA engineers achieved their results on NVIDIA DGX systems, including NVIDIA DGX-2, the world’s most powerful AI system, featuring 16 fully connected V100 Tensor Core GPUs.

NVIDIA is the only company to have entered as many as six benchmarks, demonstrating the versatility of V100 Tensor Core GPUs for the wide variety of AI workloads deployed today.

“The new MLPerf benchmarks demonstrate the unmatched performance and versatility of NVIDIA’s Tensor Core GPUs,” said Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA. “Exceptionally affordable and available in every geography from every cloud service provider and every computer maker, our Tensor Core GPUs are helping developers around the world advance AI at every stage of development.”

State-of-the-Art AI Computing Requires Full Stack Innovation

Performance on complex and diverse computing workloads takes more than great chips. Accelerated computing is about more than an accelerator. It takes the full stack.

NVIDIA’s stack includes NVIDIA Tensor Cores, NVLink, NVSwitch, DGX systems, CUDA, cuDNN, NCCL, optimized deep learning framework containers and NVIDIA software development kits.

NVIDIA’s AI platform is also the most accessible and affordable. Tensor Core GPUs are available on every cloud and from every computer maker and in every geography.

The same power of Tensor Core GPUs is also available on the desktop, with the most powerful desktop GPU, NVIDIA TITAN RTX, costing only $2,500. When amortized over three years, this translates to just a few cents per hour.

And the software acceleration stacks are always updated on the NVIDIA GPU Cloud (NGC) cloud registry.

NVIDIA’s Record-Setting Platform Available Now on NGC

The software innovations and optimizations used to achieve NVIDIA’s industry-leading MLPerf performance are available free of charge in our latest NGC deep learning containers. Download them from the NGC container registry.

The containers include the complete software stack and the top AI frameworks, optimized by NVIDIA. Our 18.11 release of the NGC deep learning containers includes the exact software used to achieve our MLPerf results.

Developers can use them everywhere, at every stage of development:

  • For data scientists on desktops, the containers enable cutting-edge research with NVIDIA TITAN RTX GPUs.
  • For workgroups, the same containers run on NVIDIA DGX Station.
  • For enterprises, the containers accelerate the application of AI to their data in the cloud with NVIDIA GPU-accelerated instances from Alibaba Cloud, AWS, Baidu Cloud, Google Cloud Platform, IBM Cloud, Microsoft Azure, Oracle Cloud Infrastructure and Tencent Cloud.
  • For organizations building on-premise AI infrastructure, NVIDIA DGX systems and NGC-Ready systems from Atos, Cisco, Cray, Dell EMC, HP, HPE, Inspur, Lenovo, Sugon and Supermicro put AI to work.

To get started on your AI project, or to run your own MLPerf benchmark, download containers from the NGC container registry.


NVIDIA Extends PhysX for High Fidelity Simulations, Goes Open Source

NVIDIA PhysX, the most popular physics simulation engine on the planet, is going open source.

We’re doing this because physics simulation — long key to immersive games and entertainment — turns out to be more important than we ever thought.

Physics simulation dovetails with AI, robotics and computer vision, self-driving vehicles, and high-performance computing.

It’s foundational for so many different things that we’ve decided to provide it to the world in an open source fashion.

Meanwhile, we’re building on more than a decade of continuous investment in this area to simulate the world with ever greater fidelity, with ongoing research and development to meet the needs of those working in robotics and with autonomous vehicles.

Free, Open-Source, GPU-Accelerated

PhysX will now be the only free, open-source physics solution that takes advantage of GPU acceleration and can handle large virtual environments.

It will be available as open source starting Monday, Dec. 3, under the simple BSD-3 license.

PhysX solves some serious challenges.

  • In AI, researchers need synthetic data — artificial representations of the real world — to train data-hungry neural networks.
  • In robotics, researchers need to train robotic minds in environments that work like the real one.
  • For self-driving cars, PhysX allows vehicles to drive for millions of miles in simulators that duplicate real-world conditions.
  • In game development, canned animation doesn’t look organic and is time consuming to produce at a polished level.
  • In high-performance computing, physics simulations are being done on ever more powerful machines with ever greater levels of fidelity.

The list goes on.

PhysX SDK addresses these challenges with scalable, stable and accurate simulations. It’s widely compatible, and it’s now open source.

NVIDIA PhysX scales to large numbers of interacting bodies.

PhysX SDK is a scalable multi-platform game physics solution supporting a wide range of devices, from smartphones to high-end multicore CPUs and GPUs.

It’s already integrated into some of the most popular game engines, including Unreal Engine (versions 3 and 4) and Unity3D.

You can also find the full source code on GitHub. Dig in.
