Qure.ai Helps Clinicians Answer Questions from COVID-19 Lung Scans

Qure.ai, a Mumbai-based startup, has been developing AI tools to detect signs of disease from lung scans since 2016. So when COVID-19 began spreading worldwide, the company raced to retool its solution to address clinicians’ urgent needs.

In use in more than two dozen countries, Qure.ai’s chest X-ray tool, qXR, was trained on 2.5 million scans to detect lung abnormalities — signs of tumors, tuberculosis and a host of other conditions.

As the first COVID-specific datasets were released by countries with early outbreaks — such as China, South Korea and Iran — the company quickly incorporated those scans, enabling qXR to mark areas of interest on a chest X-ray image and provide a COVID-19 risk score.

“Clinicians around the world are looking for tools to aid critical decisions around COVID-19 cases — decisions like when a patient should be admitted to the hospital, or be moved to the ICU, or be intubated,” said Chiranjiv Singh, chief commercial officer of Qure.ai. “Those clinical decisions are better made when they have objective data. And that’s what our AI tools can provide.”

While doctors have data like temperature readings and oxygen levels on hand, AI can help quantify the impact on a patient’s lungs — making it easier for clinicians to triage potential COVID-19 cases where there’s a shortage of testing kits, or compare multiple chest X-rays to track the progression of disease.

In recent weeks, the company deployed the COVID-19 version of its tool at around 50 sites worldwide, including hospitals in the U.K., India, Italy and Mexico. Healthcare workers in Pakistan are using qXR in medical vans that actively track cases in the community.

A member of the NVIDIA Inception program, which provides resources to help startups scale faster, Qure.ai uses NVIDIA TITAN GPUs on premises, and V100 Tensor Core GPUs through Amazon Web Services for training and inference of its AI models. The startup is in the process of seeking FDA clearance for qXR, which has received the CE mark in Europe.

Capturing an Image of COVID-19

For coronavirus cases, chest X-rays are just one part of the picture — because not every case shows impact on the lungs. But due to the wide availability of X-ray machines, including portable bedside ones, they’ve quickly become the imaging modality of choice for hospitals admitting COVID-19 patients.

“Based on the literature to date, we know certain indicators of COVID-19 are visible in chest X-rays. We’re seeing what’s called ground-glass opacities and consolidation, and noticed that the virus tends to settle in both sides of the lung,” Singh said. “Our AI model applies a positive score to these factors and relevant findings, and a negative score to findings like calcifications and pleural effusion that suggest it’s not COVID.”

The qXR tool provides clinicians with one of four COVID-19 risk scores: high, medium, low or none. Within a minute, it also labels and quantifies lesions, providing an objective measurement of lung impact.
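Conceptually, the weighted-findings approach Singh describes can be sketched in a few lines. Everything below is invented for illustration — the finding names, weights and thresholds are not Qure.ai's, and the real qXR model is a trained neural network, not a hand-weighted rule:

```python
def covid_risk_score(findings):
    """Map chest X-ray findings (confidence 0-1) to a COVID-19 risk bucket.

    Illustrative only: weights and thresholds are made up; the real model
    learns these relationships from training data.
    """
    weights = {
        "ground_glass_opacity": 2.0,   # positive indicator
        "consolidation": 1.5,          # positive indicator
        "bilateral_involvement": 1.0,  # positive indicator
        "calcification": -1.5,         # suggests it's not COVID
        "pleural_effusion": -1.0,      # suggests it's not COVID
    }
    score = sum(w * findings.get(name, 0.0) for name, w in weights.items())
    if score >= 2.5:
        return "high"
    if score >= 1.5:
        return "medium"
    if score > 0.0:
        return "low"
    return "none"
```

In this toy version, strong ground-glass opacities appearing in both lungs push the score into the "high" bucket, while an effusion with no positive findings yields "none".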

By rapidly processing chest X-ray images, qXR is helping some doctors triage patients with COVID-19 symptoms while they wait for test results. Others are using the tool to monitor disease progression by comparing multiple scans taken of the same patient over time. For ease of use, qXR integrates with radiologists’ existing workflows, including the PACS imaging system.

“Workflow integration is key, as the more you can make your AI solution invisible and smoothly embedded into the healthcare workflow, the more it’ll be adopted and used,” Singh said.

While the first version of qXR with COVID-19 analysis was trained and validated on around 11,500 scans specific to the virus, the team has been adding a couple thousand additional scans to the dataset each week, improving accuracy as more data becomes available.

Singh credits the company’s ability to pivot quickly in part to the diverse dataset of chest X-rays it’s collected over the years. In total, Qure.ai has almost 8 million studies, spread evenly across North America, Europe, the Middle East and Asia, taken on equipment from a mix of manufacturers and in different healthcare settings.

“The volume and variety of data helps our AI model’s accuracy,” Singh said. “You don’t want something built on perfect, clean data from a single site or country, where the moment it goes to a new environment, it fails.”

From the Cloud to Clinicians’ Hands

The Bolton NHS Foundation Trust in the U.K. and San Raffaele University Hospital in Milan are among dozens of sites that have deployed qXR to help radiologists monitor COVID-19 disease progression in patients.

Most clients can get up and running with qXR within an hour, with deployment over the cloud. In an urgent environment like the current pandemic, this allows hospitals to move quickly, even when travel restrictions make live installations impossible. Hospital customers with on-premises data centers can choose to use their onsite compute resources for inference.

Qure.ai’s next step, Singh said, “is to get this tool in the hands of as many radiologists and other clinicians directly interacting with patients around the world as we can.”

The company has also developed a natural language processing tool, qScout, that uses a chatbot to handle regular check-ins with patients who feel they may have the virus or are recovering at home. Staying in contact with people in an outpatient setting helps monitor symptoms and track recovery, alerting healthcare workers when a patient may need to be admitted to the hospital, all without overburdening hospital infrastructure.

It took the team just six weeks to take qScout from a concept to its first customer: the Ministry of Health in Oman.

To learn more about Qure.ai, watch the recent COMPUTE4COVID webinar session, Healthcare AI Startups Against COVID-19. Visit our COVID page to explore how other startups are using AI and accelerated computing to fight the virus.

The post Qure.ai Helps Clinicians Answer Questions from COVID-19 Lung Scans appeared first on The Official NVIDIA Blog.

How NVIDIA EGX Is Forming Central Nervous System of Global Industries

Massive change across every industry is being driven by the rising adoption of IoT sensors, including cameras for seeing, microphones for hearing, and a range of other smart devices that help enterprises perceive and understand what’s happening in the physical world.

The amount of data being generated at the edge is growing exponentially. The only way to process this vast data in real time is by placing servers near the point of action and by harnessing the immense computational power of GPUs.

The enterprise data center of the future won’t have 10,000 servers in one location, but one or more servers across 10,000 different locations. They’ll be in office buildings, factories, warehouses, cell towers, schools, stores and banks. They’ll detect traffic jams and forest fires, route traffic safely and prevent crime.

By placing distributed servers where data streams in from hundreds of sensors, enterprises can use data centers at the edge to drive immediate action with AI. Processing data at the edge also mitigates privacy concerns and puts data sovereignty concerns to rest.

Edge servers lack the physical security infrastructure that enterprise IT takes for granted. And companies lack the budget to invest in roaming IT personnel to manage these remote systems. So edge servers need to be designed to be self-secure and easy to update, manage and deploy from afar.

Plus, AI systems need to be running all the time, with zero downtime.

We’ve built the NVIDIA EGX Edge AI platform to ensure security and resiliency on a global scale. By simplifying deployment and management, NVIDIA EGX allows always-on AI applications to automate the critical infrastructure of the future. The platform is a Kubernetes and container-native software platform that brings GPU-accelerated AI to everything from dual-socket x86 servers to Arm-based NVIDIA Jetson SoCs.

To date, more than 20 server vendors are building EGX-powered edge and micro-edge servers, including ADLINK, Advantech, Atos, AVerMedia, Cisco, Connect Tech, Dell Technologies, Diamond Systems, Fujitsu, Gigabyte, Hewlett Packard Enterprise, Inspur, Lenovo, Quanta Technologies and Supermicro. They’re joined by dozens of hybrid-cloud and network security partners in the NVIDIA edge ecosystem, such as Canonical, Check Point, Excelero, Guardicore, IBM, Nutanix, Palo Alto Networks, Rancher, Red Hat, VMware, Weka and Wind River.

There are also hundreds of AI applications and integrated solutions vendors building on NVIDIA EGX to deliver industry-specific offerings to enterprises across the globe.

Enterprises running AI need to protect not just customer data, but also the AI models that transform that data into actions. By combining an NVIDIA Mellanox SmartNIC, the industry standard for secure, high-performance networking, with our AI processors in the NVIDIA EGX A100 converged accelerator, we’re introducing fundamental new innovations for edge AI.

Enhanced Security and Performance

A secure, authenticated boot of the GPU and SmartNIC from Hardware Root-of-Trust ensures the device firmware and lifecycle are securely managed. Third-generation Tensor Core technology in the NVIDIA Ampere architecture brings industry-leading AI performance. Specific to EGX A100, the confidential AI enclave uses a new GPU security engine to load encrypted AI models and encrypt all AI outputs, further preventing the theft of valuable IP.

As the edge moves to encrypted high-resolution sensors, SmartNICs support in-line cryptographic acceleration at the line rate. This allows encrypted data feeds to be decrypted and sent directly to the GPU memory, bypassing the CPU and system memory.

The edge also requires a greater level of security to protect against threats from other devices on the network. With dynamically reconfigurable firewall offloads in hardware, SmartNICs efficiently deliver the first line of defense for hybrid-cloud, secure service mesh communications.

NVIDIA Mellanox’s time-triggered transport technology for telco (5T for 5G) ensures commercial off-the-shelf solutions can meet the most time-sensitive use cases for 5G vRAN with our NVIDIA Aerial SDK. This will lead to a new wave of CloudRAN in the telecommunications industry.

With an NVIDIA Ampere GPU and Mellanox ConnectX-6 D on one converged product, the EGX A100 delivers low-latency, high-throughput packet processing for security and virtual network functions.

Simplified Deployment, Management and Security at Scale

Through NGC, NVIDIA’s catalog of GPU-optimized containers, we provide industry application frameworks and domain-specific AI toolkits that simplify getting started and tuning AI applications for new edge environments. They can be used together or individually and open new possibilities for a variety of edge use cases.

And with the NGC private registry, applications can be signed before publication to ensure they haven’t been tampered with in transit, then authenticated before running at the edge. The NGC private registry also supports model versioning and encryption, so lightweight model updates can be delivered quickly and securely.
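The sign-before-publication, verify-before-run flow can be illustrated with a generic digest scheme. This is a stand-in, not NGC's actual signing mechanism: an HMAC over the artifact's hash, with a shared key, merely demonstrates the tamper-detection idea:

```python
import hashlib
import hmac


def sign_artifact(artifact: bytes, key: bytes) -> str:
    """Publisher side: produce a signature over the artifact's digest."""
    digest = hashlib.sha256(artifact).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()


def verify_artifact(artifact: bytes, signature: str, key: bytes) -> bool:
    """Edge side: recompute and compare before running the container."""
    return hmac.compare_digest(sign_artifact(artifact, key), signature)
```

Any modification of the artifact in transit changes its digest, so verification at the edge fails and the tampered container is never run.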

The future of edge computing requires secure, scalable, resilient, easy-to-manage fleets of AI-powered systems operating at the network edge. By bringing the combined acceleration of NVIDIA GPUs and NVIDIA Mellanox SmartNICs together with NVIDIA EGX, we’re building both the platform and the ecosystem to form the AI nervous system of every global industry.

The post How NVIDIA EGX Is Forming Central Nervous System of Global Industries appeared first on The Official NVIDIA Blog.

NVIDIA CloudXR Cuts the Cord for VR, Raises the Bar for AR

Power up your XR displays and 5G devices because NVIDIA is taking streaming to the next level.

With the announcement today of the NVIDIA CloudXR 1.0 software development kit, we’re bringing major advancements to streaming augmented reality, mixed reality and virtual reality content — collectively known as XR — over 5G, Wi-Fi and other high-performance networks.

With the NVIDIA CloudXR platform, any end device — including head-mounted displays (HMDs) and connected Windows and Android devices — can become a high-fidelity XR display capable of showcasing professional-quality graphics.

CloudXR is built on NVIDIA RTX GPUs and the CloudXR SDK to allow streaming of immersive AR, MR or VR experiences from anywhere, whether from the data center, the cloud or the edge. And with NVIDIA GPU virtualization software, CloudXR scales efficiently, allowing multiple users or tenants to securely share GPU resources.

Window to an XR World

From architecture to retail, NVIDIA CloudXR is bringing innovation to many industries as 5G networks roll out around the world. By streaming XR experiences from GPU-powered edge servers, companies can expand mobile access to graphics-intensive applications and content, enabling immersive, responsive XR experiences that can be enjoyed on a remote client.

Whether through an HMD, smartphone or tablet, NVIDIA CloudXR accelerates professional XR experiences to power design reviews, speed collaboration and heighten creative productivity.

Additionally, NVIDIA CloudXR early access partners including ZeroLight, The GRID Factory, Theia Interactive, Luxion KeyVR, ESI Group, PresenZ and PiXYZ have tested CloudXR with a suite of apps and are thrilled with the results. Their customers get access to the highest quality visuals, all from a lightweight mobile XR device.

Tethered or Not, Here XR and 5G Come

The NVIDIA CloudXR SDK is designed for telecommunications platforms, enterprise data centers, consumer platforms and next-generation display devices, delivering graphics-rich XR content.

The SDK consists of powerful tools and APIs packaged in three core components:

  1. CloudXR Server Driver: Server-side binaries and libraries
  2. CloudXR Client App: Operating system-specific sample application
  3. CloudXR Client SDK: OS-specific binaries and libraries

CloudXR fits seamlessly with NVIDIA RTX Servers to deliver the richest immersive experiences, and NVIDIA Quadro Virtual Workstation gives the flexibility to enable users to scale with demand.

NVIDIA Partners Bring Mobile XR to the World

NVIDIA is collaborating with Ericsson and Qualcomm Technologies to bring a unique 5G VR solution to market.

Qualcomm Technologies’ latest reference design HMD is powered by the Qualcomm Snapdragon XR2 Platform, the world’s first 5G-enabled XR platform, which drives all on-device processing workloads. Ericsson’s high-performance 5G network connects the HMD to the edge with high-speed, low-latency, reliable wireless connectivity.

Combining NVIDIA RTX graphics with CloudXR and GPU virtualization, Qualcomm Technologies’ Boundless XR client optimizations and Ericsson’s network have yielded an unparalleled ability to deliver boundless XR over 5G.

Learn more about the bundle, which is available now.

Drive XR to the Next Level

Additional information is available for telecommunications providers and developers who register for access to the CloudXR 1.0 SDK.

The post NVIDIA CloudXR Cuts the Cord for VR, Raises the Bar for AR appeared first on The Official NVIDIA Blog.

NVIDIA CEO Introduces NVIDIA Ampere Architecture, NVIDIA A100 GPU in News-Packed ‘Kitchen Keynote’

NVIDIA today set out a vision for the next generation of computing that shifts the focus of the global information economy from servers to a new class of powerful, flexible data centers.

In a keynote delivered in six simultaneously released episodes recorded from the kitchen of his California home, NVIDIA founder and CEO Jensen Huang discussed NVIDIA’s recent Mellanox acquisition, new products based on the company’s much-awaited NVIDIA Ampere GPU architecture and important new software technologies.

Original plans for the keynote to be delivered live at NVIDIA’s GPU Technology Conference in late March in San Jose were upended by the coronavirus pandemic.

Huang kicked off his keynote on a note of gratitude.

“I want to thank all of the brave men and women who are fighting on the front lines against COVID-19,” Huang said.

NVIDIA, Huang explained, is working with researchers and scientists to use GPUs and AI computing to treat, mitigate, contain and track the pandemic. Among those mentioned:

  • Oxford Nanopore Technologies has sequenced the virus genome in just seven hours.
  • Plotly is doing real-time infection rate tracing.
  • Oak Ridge National Laboratory and the Scripps Research Institute have screened a billion potential drug combinations in a day.
  • Structura Biotechnology, the University of Texas at Austin and the National Institutes of Health have reconstructed the 3D structure of the virus’s spike protein.

NVIDIA also announced updates to its NVIDIA Clara healthcare platform aimed at taking on COVID-19.

“Researchers and scientists applying NVIDIA accelerated computing to save lives is the perfect example of our company’s purpose — we build computers to solve problems normal computers cannot,” Huang said.

At the core of Huang’s talk was a vision for how data centers, the engine rooms of the modern global information economy, are changing, and how NVIDIA and Mellanox, acquired in a deal that closed last month, are together driving those changes.

“The data center is the new computing unit,” Huang said, adding that NVIDIA is accelerating performance gains from silicon, to the ways CPUs and GPUs connect, to the full software stack, and, ultimately, across entire data centers.

Systems Optimized for Data Center-Scale Computing

That starts with a new GPU architecture that’s optimized for this new kind of data center-scale computing, unifying AI training and inference, and making possible flexible, elastic acceleration.

NVIDIA A100, the first GPU based on the NVIDIA Ampere architecture, is in full production and shipping to customers worldwide, Huang announced. It provides the greatest generational performance leap of NVIDIA’s eight generations of GPUs and is also built for data analytics, scientific computing and cloud graphics.

Eighteen of the world’s leading service providers and systems builders are incorporating the A100, among them Alibaba Cloud, Amazon Web Services, Baidu Cloud, Cisco, Dell Technologies, Google Cloud, Hewlett Packard Enterprise, Microsoft Azure and Oracle.

The A100, and the NVIDIA Ampere architecture it’s built on, boosts performance by up to 20x over the previous generation, Huang said. He detailed five key features of A100, including:

  • More than 54 billion transistors, making it the world’s largest 7-nanometer processor.
  • Third-generation Tensor Cores with TF32, a new math format that accelerates single-precision AI training out of the box. NVIDIA’s widely used Tensor Cores are now more flexible, faster and easier to use, Huang explained.
  • Structural sparsity acceleration, a new efficiency technique harnessing the inherently sparse nature of AI math for higher performance.
  • Multi-instance GPU, or MIG, allowing a single A100 to be partitioned into as many as seven independent GPUs, each with its own resources.
  • Third-generation NVLink technology, doubling high-speed connectivity between GPUs, allowing A100 servers to act as one giant GPU.

The result of all this: 6x higher performance than NVIDIA’s previous generation Volta architecture for training and 7x higher performance for inference.
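The effect of the TF32 format above can be sketched in a few lines. TF32 keeps float32's 8-bit exponent but only 10 explicit mantissa bits (float32 has 23), so a value can be reduced to TF32 precision by dropping the 13 low mantissa bits. This toy version truncates rather than rounds, a simplification of the hardware's actual behavior:

```python
import struct


def to_tf32(x: float) -> float:
    """Reduce a float32 value to TF32 precision by truncating the
    13 least-significant mantissa bits (toy version; hardware rounds)."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits &= ~0x1FFF  # zero the 13 low mantissa bits
    return struct.unpack("<f", struct.pack("<I", bits))[0]
```

For example, `to_tf32(1.0)` is exactly 1.0, while nearby values collapse onto the same TF32 representative: the spacing between representable values near 1.0 grows from 2^-23 to 2^-10, which is the precision trade-off that buys the out-of-the-box training speedup.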

NVIDIA DGX A100 Packs 5 Petaflops of Performance

NVIDIA is also shipping a third generation of its NVIDIA DGX AI system based on NVIDIA A100 — the NVIDIA DGX A100 — the world’s first 5-petaflops server. And each DGX A100 can be divided into as many as 56 GPU instances, each running an independent application.

This allows a single server to either “scale up” to race through computationally intensive tasks such as AI training, or “scale out” for AI deployment, or inference, Huang said.

Among initial recipients of the system are the U.S. Department of Energy’s Argonne National Laboratory, which will use the cluster’s AI and computing power to better understand and fight COVID-19; the University of Florida; and the German Research Center for Artificial Intelligence.

A100 will also be available for cloud and partner server makers as HGX A100.

A data center powered by five DGX A100 systems for AI training and inference, drawing just 28 kilowatts of power and costing $1 million, can do the work of a typical data center with 50 DGX-1 systems for AI training and 600 CPU systems, consuming 630 kilowatts and costing over $11 million, Huang explained.
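Huang's comparison works out as follows; the figures are from the keynote, and the ratios are computed here:

```python
# Keynote figures: an Ampere-generation data center vs. the
# previous-generation setup it replaces.
ampere = {"systems": 5, "cost_usd_m": 1.0, "power_kw": 28}
legacy = {"systems": 50 + 600, "cost_usd_m": 11.0, "power_kw": 630}

cost_ratio = legacy["cost_usd_m"] / ampere["cost_usd_m"]    # 11x the cost
power_ratio = legacy["power_kw"] / ampere["power_kw"]       # 22.5x the power
print(f"~{cost_ratio:.0f}x cheaper, ~{power_ratio:.1f}x less power")
```

That is, the five-system Ampere configuration does the same work for roughly a tenth of the cost and less than a twentieth of the power draw.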

“The more you buy, the more you save,” Huang said, in his common keynote refrain.

Need more? Huang also announced the next-generation DGX SuperPOD. Powered by 140 DGX A100 systems and Mellanox networking technology, it offers 700 petaflops of AI performance, Huang said, the equivalent of one of the 20 fastest computers in the world.

NVIDIA is expanding its own data center with four DGX SuperPODs, adding 2.8 exaflops of AI computing power to its SATURNV internal supercomputer for a total capacity of 4.6 exaflops, making it the world’s fastest AI supercomputer.

Huang also announced the NVIDIA EGX A100, bringing powerful real-time cloud-computing capabilities to the edge. Its NVIDIA Ampere architecture GPU offers third-generation Tensor Cores and new security features. Thanks to its NVIDIA Mellanox ConnectX-6 SmartNIC, it also includes secure, lightning-fast networking capabilities.

Software for the Most Important Applications in the World Today

Huang also announced NVIDIA GPUs will power major software applications for accelerating three critical usages: managing big data, creating recommender systems and building real-time, conversational AI.

These new tools arrive as the effectiveness of machine learning has driven companies to collect more and more data. “That positive feedback is causing us to experience an exponential growth in the amount of data that is collected,” Huang said.

To help organizations of all kinds keep up, Huang announced support for NVIDIA GPU acceleration on Spark 3.0, describing the big data analytics engine as “one of the most important applications in the world today.”

Built on RAPIDS, Spark 3.0 shatters performance benchmarks for extracting, transforming and loading data, Huang said. It’s already helped Adobe Intelligent Services achieve a 90 percent compute cost reduction.

Key cloud analytics platforms — including Amazon SageMaker, Azure Machine Learning, Databricks, Google Cloud AI and Google Cloud Dataproc — will all accelerate with NVIDIA, Huang announced.

“We’re now prepared for a future where the amount of data will continue to grow exponentially from tens or hundreds of petabytes to exascale and beyond,” Huang said.

Huang also unveiled NVIDIA Merlin, an end-to-end framework for building next-generation recommender systems, which are fast becoming the engine of a more personalized internet. Merlin slashes the time needed to create a recommender system from a 100-terabyte dataset from four days to 20 minutes, Huang said.

And he detailed NVIDIA Jarvis, a new end-to-end platform for creating real-time, multimodal conversational AI that can draw upon the capabilities unleashed by NVIDIA’s AI platform.

Huang highlighted its capabilities with a demo that showed him interacting with a friendly AI, Misty, that understood and responded to a sophisticated series of questions about the weather in real time.

Huang also dug into NVIDIA’s swift progress in real-time ray tracing since NVIDIA RTX launched at SIGGRAPH in 2018, and he announced that NVIDIA Omniverse, which allows “different designers with different tools in different places doing different parts of the same design” to work together simultaneously, is now available to early access customers.

Autonomous Vehicles

Autonomous vehicles are one of the greatest computing challenges of our time, Huang said, an area where NVIDIA continues to push forward with NVIDIA DRIVE.

NVIDIA DRIVE will use the new Orin SoC with an embedded NVIDIA Ampere GPU to achieve the energy efficiency and performance to offer a 5-watt ADAS system for the front windshield as well as scale up to a 2,000 TOPS, level-5 robotaxi system.

Now automakers have a single computing architecture and single software stack to build AI into every one of their vehicles.

“It’s now possible for a carmaker to develop an entire fleet of cars with one architecture, leveraging the software development across their whole fleet,” Huang said.

The NVIDIA DRIVE ecosystem now encompasses cars, trucks, tier one automotive suppliers, next-generation mobility services, startups, mapping services, and simulation.

And Huang announced NVIDIA is adding NVIDIA DRIVE RC for managing entire fleets of autonomous vehicles to its suite of NVIDIA DRIVE technologies.


NVIDIA also continues to push forward with its NVIDIA Isaac software-defined robotics platform, announcing that BMW has selected NVIDIA Isaac robotics to power its next-generation factories.

BMW’s 30 factories around the globe build one vehicle every 56 seconds: that’s 40 different models, each with hundreds of different options, made from 30 million parts flowing in from nearly 2,000 suppliers around the world, Huang explained.

BMW joins a sprawling NVIDIA robotics global ecosystem that spans delivery services, retail, autonomous mobile robots, agriculture, services, logistics, manufacturing and healthcare.

In the future, factories will, effectively, be enormous robots. “All of the moving parts inside will be driven by artificial intelligence,” Huang said. “Every single mass-produced product in the future will be customized.”

The post NVIDIA CEO Introduces NVIDIA Ampere Architecture, NVIDIA A100 GPU in News-Packed ‘Kitchen Keynote’ appeared first on The Official NVIDIA Blog.

Boxing for Science: Researchers Accelerate Innovation with Containers

Researchers at the University of Minnesota Supercomputing Institute have turned to containers to keep a lid on the legions of small but significant software elements that high-performance computing and AI spawn.

MSI is a hub for HPC research among academic institutions across the state, using NVIDIA GPUs to accelerate more than 400 applications that range from understanding the genomics of cancer to modeling the impacts of climate change. Empowering thousands of users statewide across these diverse applications is no simple task.

Each application has its own complex set of ingredients. The hardware configuration, compiler and libraries one application requires may clash with what another needs.

System administrators can get overwhelmed by the need to upgrade, install and monitor each app. The experience can leave both admins and users drained in a hunt for the latest and greatest code.

To avoid these pitfalls and empower their users, MSI adopted containers that essentially bundle apps with the libraries, runtime engines and other software elements they require.

Containers Speed Up Application Deployment

With containers, MSI’s users can deploy their apps in a few minutes, without help from administrators.

“Containers are a tool that can increase portability and reproducibility of key elements of research workflows,” said Benjamin Lynch, associate director of Research Computing at MSI. “They play an important role in the rapidly changing software ecosystems like we see in AI/ML on NVIDIA GPUs.”

Because a container provides everything needed to run an app, a researcher who wants to test an application built in Ubuntu doesn’t have to worry about incompatibility when running on MSI’s CentOS cluster.

“Containers are a critical tool for encapsulating complex agro-environmental models into reproducible and easily parallelized workflows that other researchers can replicate,” said Bryan Runck, a geo-computing scientist at the University of Minnesota.

NGC: Hub for GPU-Optimized HPC & AI Software

MSI’s researchers chose NVIDIA’s NGC registry as their source for GPU-optimized HPC and AI containers. The NGC catalog sports containers that span everything from deep learning to visualizations, all tested and tuned to deliver maximum performance.

The containers are also tested for compatibility across multiple architectures, such as x86 and Arm, so system administrators can easily support diverse users.

When it comes to AI, NGC packs a large collection of pre-trained models and developer kits. Researchers can apply transfer learning to these models to create their own custom versions, reducing development time.
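The core idea of transfer learning — keep the pretrained feature extractor frozen and fit only a small new head on your own data — can be shown with a pure-Python stand-in. The extractor, dataset and hyperparameters below are all invented for illustration; a real workflow would fine-tune an NGC pre-trained model with a deep learning framework:

```python
import random

random.seed(0)


def features(x):
    """Frozen 'pretrained' feature extractor: its outputs are reused
    as-is and never updated during training."""
    return [x, x * x]


# Tiny custom dataset: y = 3*x^2 + 1 plus a little noise.
data = [(x, 3 * x * x + 1 + random.uniform(-0.1, 0.1))
        for x in [i / 10 for i in range(-10, 11)]]

# Train only the new head (weights w and bias b) with plain SGD.
w, b = [0.0, 0.0], 0.0
lr = 0.05
for _ in range(2000):
    for x, y in data:
        f = features(x)
        pred = sum(wi * fi for wi, fi in zip(w, f)) + b
        err = pred - y
        w = [wi - lr * err * fi for wi, fi in zip(w, f)]
        b -= lr * err
```

Because only the head is trained, the head's weights quickly recover the underlying relationship (here, roughly w = [0, 3] and b = 1) from a small dataset — the same reason transfer learning on a pre-trained model cuts development time.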

“Being able to run containerized applications on HPC platforms is an easy way to get started, and GPUs have reduced the computation time by more than 10x,” said Christina Poudyal, a data scientist with a focus on agriculture at the University of Minnesota.

HPC, AI Workloads Converging

The melding of HPC and AI applications is another driver for MSI’s adoption of containers. Both workloads leverage the parallel computing capabilities of MSI’s GPU-accelerated systems.

This convergence spawns work across disciplines.

“Application scientists are closely collaborating with computer scientists to fundamentally advance the way AI methods are using new sources of data and incorporating some of the physical processes that we already know,” said Jim Wilgenbusch, director of Research Computing at the university.

These multi-disciplinary teams partner with NVIDIA to optimize their workflows and algorithms. And they rely on containers updated, tested and stored in NGC to keep pace with rapid changes in AI software.

The post Boxing for Science: Researchers Accelerate Innovation with Containers appeared first on The Official NVIDIA Blog.

Sky-High Performance in an Instance: Quadro Virtual Workstations in the Cloud

With much of the world suddenly working from home, IT leaders need to scale resources, fast. Global cloud providers hosting Quadro Virtual Workstations can help them meet the challenge.

Many knowledge workers can work virtually with ease. But it’s more complex for people working with graphics-intensive applications across the architecture, design, manufacturing, media and entertainment, and oil and gas industries, as well as those working on scientific visualizations.

Some IT teams can repurpose on-premises GPU resources to support remote workers through Quadro Virtual Workstation software (Quadro vWS) hosted onsite. In fact, NVIDIA recently expanded its free, 90-day virtual GPU software evaluation from 128 to 500 licenses to help those with on-prem GPUs they can put to work.

For others, cloud providers such as Amazon Web Services and Google Cloud offer Quadro vWS instances to support remote, graphics-intensive work quickly without the need for any on-prem infrastructure. End-users only need a connected laptop or thin client.

These virtual workstations support the same NVIDIA Quadro drivers and features as the physical Quadro GPUs that professional artists and designers run in local workstations.

Spin Up Virtual Workstations Simply, Quickly, Easily

With Quadro vWS running in the cloud, IT departments can spin up a GPU-accelerated virtual workstation — or multiple systems — easily and in minutes. Cloud-based Quadro virtual workstations are simple to deploy and can be scaled up or down according to need, with IT only paying for what is used on an hourly basis.

Drivers and upgrades are managed by the cloud service provider, so IT can just plug in and let their teams get to work. Since Quadro vWS is certified for leading graphics software, IT can rest assured that applications will run smoothly.

Data stays secure in the cloud, while users gain advanced computer graphics features, including NVIDIA RTX for interactive, photorealistic rendering, AI-enhanced denoising, and rapid, responsive video and image processing.

AWS

AWS offers multiple options for customers to set up and provision virtual workstations that use the latest NVIDIA technology. These options include building a cloud-based virtual workstation from the ground up for graphics-intensive digital content creation work, such as for VFX or animation, as well as fully managed services that allow customers to instantly scale.

For those who want to build from the ground up, Amazon Elastic Compute Cloud (EC2) G4 instances now include Quadro vWS technology at no extra cost to help support customers who need powerful performance from the cloud. These instances deliver cost-effective and versatile GPU performance for graphics-intensive applications. Available worldwide, EC2 G4 instances provide support for NVIDIA RTX-enabled applications with NVIDIA T4 Tensor Core GPUs. In addition to the G4 instances, AWS continues to support G3 instances based on NVIDIA M60 GPUs.

For those who prefer the ease and agility of a managed service, Amazon AppStream 2.0 is a fully managed application streaming service that lets customers deliver Windows applications running on AWS instances to any end-user device. Amazon AppStream 2.0 provides two NVIDIA graphics instance families – Graphics Pro instances based on the EC2 G3 family and Graphics G4dn instances. Amazon WorkSpaces is a fully managed desktop-as-a-service offering that enables customers to easily set up and provide cloud desktops to enterprise users. WorkSpaces offers a Graphics Pro bundle that uses EC2 G3 instances.

Google Cloud

With performance similar to a native workstation, Google Cloud offers Quadro vWS on Windows Server or Ubuntu for users running high-performance simulations as well as rendering and design workloads from the cloud. Quadro vWS on Google Cloud is available globally and can be launched in minutes.

Now more than ever, cloud-based flexibility is a critical solution for millions of companies in every industry around the world. With Quadro virtual workstations from global cloud providers, accessed via PC-over-IP technology like Teradici Cloud Access Software for a secure, high-quality user experience, IT has a flexible solution for meeting these unexpected demands as well as user expectations.

If your company already has NVIDIA vGPU-certified servers and needs to support more knowledge workers through VDI or virtual servers for compute-intensive workloads, learn more about additional options for virtualizing NVIDIA performance in our post about vGPU software.

If your employees need access to workstations in the office, check out our post on connecting to remote workstations.

The post Sky-High Performance in an Instance: Quadro Virtual Workstations in the Cloud appeared first on The Official NVIDIA Blog.

A Taste for Acceleration: DoorDash Revs Up AI with GPUs

When it comes to bringing home the bacon — or sushi or quesadillas — DoorDash is shifting into high gear, thanks in part to AI.

The company got its start in 2013, offering deals such as delivering pad thai to Stanford University dorm rooms. Today with a phone tap, customers can order a meal from more than 310,000 vendors — including Chipotle, Walmart and Wingstop — across 4,000 cities in the U.S., Canada and Australia.

Part of its secret sauce is a digital logistics engine that connects its three-sided marketplace of merchants, customers and independent contractors the company calls Dashers. Each community taps into the platform for different reasons.

Using a mix of machine-learning models, the logistics engine serves personalized restaurant recommendations and delivery-time predictions to customers who want on-demand access to their local businesses. Meanwhile, it assigns Dashers to orders and sorts through trillions of options to find their optimal routes while calculating delivery prices dynamically.

The work requires a complex set of related algorithms embedded in numerous machine-learning models, crunching ever-changing data flows. To accelerate the process, DoorDash has turned to NVIDIA GPUs in the cloud to train its AI models.

Training in One-Tenth the Time

Moving from CPUs to GPUs for AI training netted DoorDash a 10x speed-up. Migrating from single to multiple GPUs accelerated its work another 3x, said Gary Ren, a machine-learning engineer at DoorDash who will describe the company’s approach to AI in an online talk at GTC Digital.

“Faster training means we get to try more models and parameters, which is super critical for us — faster is always better for training speeds,” Ren said.

“A 10x training speed-up means we spin up cloud clusters for a tenth the time, so we get a 10x reduction in computing costs. The impacts of trying 10x more parameters or models is trickier to quantify, but it gives us some multiple of increased overall business performance,” he added.

Making Great Recommendations

So far, DoorDash has publicly discussed one of its deep-learning applications: its recommendation engine, which has been in production for about two years. Recommendations are becoming more important as companies such as DoorDash realize consumers don’t always know what they’re looking for.

Potential customers may “hop on our app and explore their options so — given our huge number of merchants and consumers — recommending the right merchants can make a difference between getting an order or the customer going elsewhere,” he said.

Because its recommendation engine is so important, DoorDash continually fine-tunes it. For example, in its engineering blogs, the company describes how it crafts embedding vectors — n-dimensional representations of each merchant — to find nuanced similarities among vendors.
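Once merchants are represented as vectors, finding similar vendors reduces to comparing those vectors, typically with cosine similarity. The sketch below illustrates only that comparison step; the merchant names and 4-dimensional vectors are invented for illustration and have nothing to do with DoorDash's learned embeddings.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two equal-length vectors: 1.0 = identical direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 4-dimensional embeddings for three merchants.
thai_a = [0.9, 0.1, 0.3, 0.0]
thai_b = [0.8, 0.2, 0.4, 0.1]
pizza  = [0.1, 0.9, 0.0, 0.7]

# The two Thai restaurants should score as more similar to each other
# than either is to the pizzeria.
print(cosine_similarity(thai_a, thai_b))
print(cosine_similarity(thai_a, pizza))
```

In production such comparisons run over millions of merchant vectors, which is where approximate nearest-neighbor indexes and GPU acceleration come in.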

It has also adopted multi-level, multi-armed bandit algorithms, which let AI models simultaneously exploit choices customers have liked in the past and explore new possibilities.

Speaking of New Use Cases

While it optimizes its recommendation engine, DoorDash is exploring new AI use cases, too.

“There are several areas where conversations happen between consumers and dashers or support agents. Making those conversations quick and seamless is critical, and with improvements in NLP (natural-language processing) there’s definitely potential to use AI here, so we’re exploring some solutions,” Ren said.

NLP is one of several use cases that will drive future performance needs.

“We deal with data from the real world and it’s always changing. Every city has unique traffic patterns, special events and weather conditions that add variance — this complexity makes it a challenge to deliver predictions with high accuracy,” he said.

The company’s growing business presents other challenges, too, such as making recommendations for first-time customers and planning routes in new cities it enters.

“As we scale, those boundaries get pushed — our inference speeds are good enough today, but we’ll need to plan for the future,” he added.

The post A Taste for Acceleration: DoorDash Revs Up AI with GPUs appeared first on The Official NVIDIA Blog.

From ER Visit to AI Startup, CloudMedx Pursues Predictive Healthcare Models

Twice Sahar Arshad’s father-in-law went to an emergency room in Pakistan complaining of frequent headaches. Twice doctors sent him home with a diagnosis of allergies.

Turns out he was suffering from a subdural hematoma — bleeding inside the head. Following the second misdiagnosis, he went into a coma and required emergency brain surgery (and made a full recovery).

Arshad and her husband, Tashfeen Suleman — both computer scientists living in Bellevue, Wash., at the time — afterwards tried to get to the root of the inaccurate diagnoses. The hematoma turned out to be a side effect of a new medication Suleman’s father had been prescribed a couple weeks prior. And he lacked physical symptoms like slurred speech and difficulty walking, which would have prompted doctors to order a CT scan and detect the bleeding earlier.

Too Much Data, Too Little Time

It’s a common problem, Arshad and Suleman found. Physicians often have to rely on limited information, either because there’s insufficient data on a patient or because there’s not enough time to analyze large datasets.

The couple thought AI could help address this challenge. In late 2014, they co-founded CloudMedx, a Palo Alto-based startup that develops predictive healthcare models for health providers, insurers and patients.

A member of the NVIDIA Inception virtual accelerator program, CloudMedx is working with the University of California, San Francisco; Barrow Neurological Institute, a member of Dignity Health, a nonprofit healthcare organization; and some of the largest health insurers in the country.

Its AI models, trained using NVIDIA V100 Tensor Core GPUs through Amazon Web Services, can help automate medical coding, predict disease progression and determine the likelihood a patient may have a complication and need to be readmitted to the hospital within 30 days.

“What we’ve built is a natural language model that understands how different diseases, symptoms and medications are related to each other,” said Arshad, chief operating officer at CloudMedx. “If we’d had this tool in Tashfeen’s father’s case, it would have flagged the risk of internal head hemorrhaging and recommended obtaining a CT scan.”

Putting AI to Work on Risk Assessment

The CloudMedx team has developed a deep neural network that can process medical data to provide risk assessment scores, saving clinicians time and providing personalized insight for patients. It’s trained on a dataset of 54 million patient encounters.
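CloudMedx’s production system is a deep neural network trained on tens of millions of encounters, but the basic idea of a risk score — mapping weighted patient features to a probability — can be sketched with a single logistic unit. Every feature, weight and threshold below is invented for illustration; none comes from CloudMedx:

```python
import math

def risk_score(features, weights, bias):
    """Logistic risk: squash a weighted sum of patient features into (0, 1)."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical features: [age / 100, prior admissions, abnormal-lab flag]
weights = [1.2, 0.8, 1.5]   # illustrative, not learned values
bias = -2.0

low  = risk_score([0.40, 0, 0], weights, bias)  # younger, no history
high = risk_score([0.80, 3, 1], weights, bias)  # older, readmissions, abnormal labs

# Prints a low probability and a high probability, respectively.
print(round(low, 3), round(high, 3))
```

A deep network replaces the hand-picked weights with many learned layers, but the output — a calibrated probability a clinician can act on — has the same shape.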

In a study to evaluate its deep learning model, the clinical AI tool took a mock medical exam — and outperformed human doctors by 10 percent, on average. On their own, physicians scored between 68 and 81 percent. When taking the exam along with CloudMedx AI, they achieved a high score of 91 percent.

The startup’s AI models are used in multiple tools, including a coding analyzer that converts doctors’ notes into a series of medical codes that inform the billing process, as well as a clinical analyzer that evaluates a patient’s health records to generate risk assessments.

CloudMedx is collaborating with UCSF’s Division of Gastroenterology to stratify patients awaiting liver transplants based on risk, so that patients can be matched with donors before the tumor progresses too far for a transplant.

The company is also working with one of the largest health insurers in the U.S. to better identify congestive heart failure patients with a high risk of readmission to the hospital. With these insights, health providers can follow up more often with at-risk patients, reducing readmissions and potentially saving billions of dollars in treatment costs.

Predictive Analytics for Every Healthcare Player

Predictive analytics can even improve the operational side of healthcare, giving hospitals a heads-up when they might need additional beds or staff members to meet rising patient demand.

“It’s an expensive manual process to find additional resources and bring on extra nurses at the last minute,” Arshad said. “If hospitals are able to use AI tools for surge prediction, they can better plan resources ahead of time.”

In addition to providing new insights for health providers and payers, these tools process large amounts of medical data in a fraction of the time it would take humans.

CloudMedx has also developed an AI tool for patients. Available on the Medicare website to the program’s 53 million beneficiaries, the system helps users access their own claims data, correlates a person’s medical history with symptoms, and will soon also estimate treatment costs.

NVIDIA Inception Program

As a member of the NVIDIA Inception program, the CloudMedx team was able to reach out to NVIDIA developers and the company’s healthcare team for help with some of the challenges it faced when scaling up for cloud deployment.

Inception helps startups during critical stages of product development, prototyping and deployment with tools and expertise to help early-stage companies grow.

Both Suleman and Arshad have spoken at NVIDIA’s annual GPU Technology Conference, with Arshad participating in a Women@GTC healthcare panel last year. The conference has helped the team meet some of their customers, said Arshad, who’s also a finalist for Entrepreneur of the Year at the 2020 Women in IT Awards New York.

Check out the healthcare track for GTC, taking place in San Jose, March 22-26.

The post From ER Visit to AI Startup, CloudMedx Pursues Predictive Healthcare Models appeared first on The Official NVIDIA Blog.

AWS Outposts Station a GPU Garrison in Your Datacenter

All the goodness of GPU acceleration on Amazon Web Services can now also run inside your own data center.

AWS Outposts powered by NVIDIA T4 Tensor Core GPUs are generally available starting today. They bring cloud-based Amazon EC2 G4 instances inside your data center to meet user requirements for security and latency in a wide variety of AI and graphics applications.

With this new offering, AI is no longer a research project.

Most companies still keep their data inside their own walls because they see it as their core intellectual property. But for deep learning to transition from research into production, enterprises need the flexibility and ease of development the cloud offers — right beside their data. That’s a big part of what AWS Outposts with T4 GPUs now enables.

Enterprises can now install a fully managed, rack-scale appliance next to the large data lakes stored securely in their data centers.

AI Acceleration Across the Enterprise

To train neural networks, every layer of software needs to be optimized, from NVIDIA drivers to container runtimes and application frameworks. AWS services like Amazon SageMaker, Elastic MapReduce and many others built on custom Amazon Machine Images support model development, which starts with training on large datasets. With the introduction of NVIDIA-powered AWS Outposts, those services can now run securely in enterprise data centers.

The GPUs in Outposts accelerate deep learning as well as high performance computing and other GPU applications. They all can access software in NGC, NVIDIA’s hub for GPU-optimized software, which is stocked with applications, frameworks, libraries and SDKs that include pre-trained models.

For AI inference, the NVIDIA EGX edge-computing platform also runs on AWS Outposts and works with the AWS Elastic Kubernetes Service. Backed by the power of NVIDIA T4 GPUs, these services are capable of processing orders of magnitude more information than CPUs alone. They can quickly derive insights from vast amounts of data streamed in real time from sensors in an Internet of Things deployment whether it’s in manufacturing, healthcare, financial services, retail or any other industry.

On top of EGX, the NVIDIA Metropolis application framework provides building blocks for vision AI, geared for use in smart cities, retail, logistics and industrial inspection, as well as other AI and IoT use cases, now easily delivered on AWS Outposts.

Alternatively, the NVIDIA Clara application framework is tuned to bring AI to healthcare providers whether it’s for medical imaging, federated learning or AI-assisted data labeling.

The T4 GPU’s Turing architecture, combined with NVIDIA TensorRT software, accelerates the industry’s widest set of AI models. Its Tensor Cores support multi-precision computing that delivers up to 40x more inference performance than CPUs.
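Multi-precision inference trades a little numeric fidelity for large gains in throughput. The snippet below illustrates only the fidelity side of that trade, rounding a standard 64-bit Python float through IEEE 754 half precision (FP16) using the standard library’s `struct` module; it is a generic illustration and says nothing about T4 performance itself.

```python
import struct

def to_fp16(x):
    """Round a Python float (binary64) through IEEE 754 half precision (binary16)."""
    # Format character 'e' packs/unpacks a 16-bit half-precision float.
    return struct.unpack('<e', struct.pack('<e', x))[0]

print(to_fp16(1.0))   # exactly representable in FP16: 1.0
print(to_fp16(0.1))   # rounded: 0.0999755859375
```

For many inference workloads this small rounding error is invisible in model accuracy, which is why casting weights and activations to FP16 (or INT8) is such an effective speedup on Tensor Cores.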

Remote Graphics, Locally Hosted

Users of high-end graphics have choices, too. Remote designers, artists and technical professionals who need to access large datasets and models can now get both cloud convenience and GPU performance.

Graphics professionals can benefit from the same NVIDIA Quadro technology that powers most of the world’s professional workstations, not only on the public AWS cloud but also in their own internal cloud, now that AWS Outposts pack T4 GPUs.

Whether they’re working locally or in the cloud, Quadro users can access the same set of hundreds of graphics-intensive, GPU-accelerated third-party applications.

The Quadro Virtual Workstation AMI, available in AWS Marketplace, includes the same Quadro driver found on physical workstations. It supports hundreds of Quadro-certified applications such as Dassault Systèmes SOLIDWORKS and CATIA; Siemens NX; Autodesk AutoCAD and Maya; ESRI ArcGIS Pro; and ANSYS Fluent, Mechanical and Discovery Live.

Learn more about AWS and NVIDIA offerings and check out our booth 1237 and session talks at AWS re:Invent.

The post AWS Outposts Station a GPU Garrison in Your Datacenter appeared first on The Official NVIDIA Blog.

Smart into Art: NVIDIA SC19 Booth Turns Computer Scientists into Art at News-Filled Show

Back in the day, the annual SC supercomputing conference was filled with tabletops hung with research posters. Three decades on, the show’s Denver edition this week was a sea of sharp-angled booths, crowned with three-dimensional signage, promoting logos in a multitude of blues and reds.

But no spot on the SC19 show floor drew more of the show’s 14,000 attendees than NVIDIA’s booth, built around a broad, floor-to-ceiling triangle with 2,500 square feet of ultra-high-def LED screens. With a packed lecture hall on one side and HPC simulations playing on a second, it was the third wall that drew the most buzz.

Cycling through was a collection of AI-enhanced photos of several hundred GPU developers — grad students, CUDA pioneers, supercomputing rockstars — together with descriptions of their work.

Like accelerated computing’s answer to baseball cards, they were rendered into art using AI style transfer technology inspired by various painters — from the classicism of Vermeer to van Gogh’s impressionism to Paul Klee’s abstractions.

Meanwhile, NVIDIA sprinted through the show, kicking things off with a news-filled keynote by founder and CEO Jensen Huang, helping to power research behind the two finalists nominated for the Gordon Bell Prize, and joining in to celebrate its partner Mellanox.

And in its booth, 200 engineers took advantage of free AI training through the Deep Learning Institute, and leading researchers, packed in shoulder to shoulder, delivered dozens of tech talks.

Wall in the Family 

Piecing together the Developer Wall project took a dozen NVIDIANs scrambling for weeks in their spare time. The team of designers, technologists and marketers created an app where developers could enter some background, which was then paired with their photo after it was run through style filters at DeepArt.io, a German startup that’s part of NVIDIA’s Inception startup incubator.

“What we’re trying to do is showcase and celebrate the luminaries in our field. The amazing work they’ve done is the reason this show exists,” said Doug MacMillian, a developer evangelist who helped run the big wall initiative.

Behind him flashed an image of Jensen Huang, rendered as if painted by Cezanne. Alongside him was John Stone, the legendary HPC researcher at the University of Illinois, as if painted by Vincent van Gogh. Close by was Erik Lindahl, who heads the international GROMACS molecular simulation project, rendered as if right out of a Joan Miró painting. Paresh Kharya, a data center specialist at NVIDIA, looked like an abstracted sepia-tone circuit board.

Enabling the Best and Brightest 

That theme — how NVIDIA’s working to accelerate the work of people in an ever-growing array of industries — continued behind the scenes.

In a final rehearsal hours before Huang’s keynote, Ashley Korzun — a Ph.D. engineer who’s spent years working on the manned mission to Mars set for the 2030s — saw for the first time a demo visualizing her life’s work at the space agency.

As she stood on stage, she witnessed an event she’s spent years simulating purely with data – the fiery path that the Mars lander, a capsule the size of a two-story condo, will take as it slows in seven dramatic minutes from 12,000 miles an hour to gently stick its landing on the Red Planet.

“This is amazing,” she quietly said through tears. “I never thought I’d be able to visualize this.”

Flurry of News

Huang later took the stage and, in a sweeping two-hour keynote, set out a range of announcements showing how NVIDIA’s helping others do their life’s work.

Award-Winning Work

SC19 plays host to a series of awards throughout the show, and NVIDIA featured in a number of them.

Both finalists for the Gordon Bell Prize for outstanding achievement in high performance computing — the ultimate winner, ETH Zurich, as well as the University of Michigan — ran their work on Oak Ridge National Laboratory’s Summit supercomputer, powered by nearly 28,000 V100 GPUs.

NVIDIA’s founding chief scientist, David Kirk, received this year’s Seymour Cray Computer Engineering Award, for innovative contributions to HPC systems. He was recognized for his path-breaking work around development of the GPU.

And NVIDIA’s Vasily Volkov co-authored a seminal paper with UC Berkeley’s James Demmel 11 years ago that was recognized with the Test of Time Award for a work of lasting impact. The paper, which has resulted in a new way of thinking about and modeling algorithms on GPUs, has had nearly 1,000 citations.

Looking Further Ahead

If the SC show is about powering the future, no corner of the show was more forward looking than the annual Supercomputing Conference Student Cluster Competition.

This year, China’s Tsinghua University captured the crown. It beat out 15 other undergrad teams, using NVIDIA V100 Tensor Core GPUs in an immersive HPC challenge demonstrating the breadth of skills, technologies and science it takes to build, maintain and use supercomputers. Tsinghua also won the IO500 competition, while two other prizes went to Singapore’s Nanyang Technological University.

The teams came from markets including Germany, Latvia, Poland and Taiwan, in addition to China and Singapore.

Up Next: More Performance for the World’s Data Centers

NVIDIA’s frenetic week at SC19 ended with a look at what’s next, with Jensen joining Mellanox CEO Eyal Waldman on stage at an evening event hosted by the networking company, which NVIDIA agreed to acquire earlier this year.

Jensen and Eyal discussed how their partnership will enable the future of computing, with Jensen detailing the synergies between the companies. “Mellanox has an incredible vision,” Huang said. “In a couple of years we’re going to bring more compute performance to data centers than all of the compute since the beginning of time.”

The post Smart into Art: NVIDIA SC19 Booth Turns Computer Scientists into Art at News-Filled Show appeared first on The Official NVIDIA Blog.