Taking the Heat Off: AI Temperature Screening Aids Businesses Amid Pandemic

As businesses and schools consider reopening around the world, they’re taking safety precautions to mitigate the lingering threat of COVID-19 — often taking the temperature of each individual entering their facilities.

Fever is a common warning sign for the virus (and the seasonal flu), but manual temperature-taking with infrared thermometers takes time and requires workers stationed at a building’s entrances to collect temperature readings. AI solutions can speed the process and make it contactless, sending real-time alerts to facilities management teams when visitors with elevated temperatures are detected.

Central California-based IntelliSite Corp. and its recently acquired startup, Deep Vision AI, have developed a temperature screening application that can scan over 100 people a minute. Temperature readings are accurate within a tenth of a degree Celsius. And customers can get up and running with the app within a few hours, with an AI platform running on NVIDIA GPUs on premises or in the cloud for inference.

“Our software platform has multiple AI modules, including foot traffic counting and occupancy monitoring, as well as vehicle recognition,” said Agustin Caverzasi, co-founder of Deep Vision AI, and now president of IntelliSite’s AI business unit. “Adding temperature detection was a natural, easy step for us.”

The temperature screening tool has been deployed in several healthcare facilities and is being tested at U.S. airports, amusement parks and education facilities. Deep Vision is part of NVIDIA Inception, a program that helps startups working in AI and data science get to market faster.

“Deep Vision AI joined Inception at the very beginning, and our engineering and research teams received support with resources like GPUs for training,” Caverzasi said. “It was really helpful for our company’s initial development.”

COVID Risk or Coffee Cup? Building AI for Temperature Tracking

As the pandemic took hold, and social distancing became essential, Caverzasi’s team saw that the technology they’d spent years developing was more relevant than ever.

“The need to protect people from harmful viruses has never been greater,” he said. “With our preexisting AI modules, we can monitor in real time the occupancy levels in a store or a hospital’s waiting room, and trigger alerts before the maximum occupancy is reached in a given area.”

With governments and health organizations advising temperature checking, the startup applied its existing AI capabilities to thermal cameras for the first time. In doing so, the team had to fine-tune the model so it wouldn’t be fooled by false positives — for example, when a person shows up red on a thermal camera because of their cup of hot coffee.
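Deep Vision hasn’t published the details of that fine-tuning, but one straightforward guard against hot-object false positives is to read temperatures only inside detected face regions rather than flagging the hottest pixel in the frame. Below is a minimal Python sketch of the idea; the face detector is assumed to be supplied, and the alert threshold is an invented placeholder, not the company’s calibrated value.

```python
# Illustrative sketch only -- not Deep Vision AI's actual model or thresholds.
# Idea: measure temperature inside detected face regions, so a hot coffee cup
# elsewhere in the frame can't trigger a fever alert.
import numpy as np

FEVER_THRESHOLD_C = 38.0  # hypothetical alert threshold

def screen_frame(thermal_frame: np.ndarray, face_boxes):
    """Return (box, temperature) alerts for each detected face.

    thermal_frame: 2D array of per-pixel temperatures in degrees Celsius.
    face_boxes: (x, y, w, h) boxes from a face detector (assumed given).
    """
    alerts = []
    for (x, y, w, h) in face_boxes:
        face_region = thermal_frame[y:y + h, x:x + w]
        # Use a high percentile rather than the max to ignore stray hot pixels.
        face_temp = float(np.percentile(face_region, 95))
        if face_temp >= FEVER_THRESHOLD_C:
            alerts.append(((x, y, w, h), face_temp))
    return alerts
```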

This AI model is paired with one of IntelliSite’s IoT solutions, called human-based monitoring, or hBM. The hBM platform includes a hardware component: a mobile cart mounted with a thermal camera, monitor and Dell Precision tower workstation for inference. The temperature detection algorithms can now scan five people at the same time.

Double Quick: Faster, Easier Screening

The workstation uses the NVIDIA Quadro RTX 4000 GPU for real-time inference on thermal data from the live camera view. This reduces manual scanning time for healthcare customers by 80 percent and drops the total cost of conducting temperature scans by 70 percent.

Facilities using hBM can also choose to access data remotely and monitor multiple sites, using either an on-premises Dell PowerEdge R740 server with NVIDIA T4 Tensor Core GPUs, or GPU resources through the IntelliSite Cloud Engine.

If businesses and hospitals are also taking a second temperature measurement with a thermometer, these readings can be logged in the hBM system, which can maintain records for over a million screenings. Facilities managers can configure alerts via text message or email when high temperatures are detected.
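The article doesn’t describe hBM’s alerting internals, but the email half of that workflow can be sketched with Python’s standard library alone. The addresses and SMTP relay below are placeholders, and the message mirrors the FDA guidance on secondary checks.

```python
# Minimal sketch of a temperature alert email -- illustrative only, not
# IntelliSite's hBM implementation. Addresses and relay are hypothetical.
import smtplib
from email.message import EmailMessage

def send_temperature_alert(reading_c: float, station: str) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"Elevated temperature at {station}: {reading_c:.1f} C"
    msg["From"] = "hbm-alerts@example.com"     # hypothetical sender
    msg["To"] = "facilities@example.com"       # hypothetical recipient
    msg.set_content(
        f"Screening station {station} recorded {reading_c:.1f} C. "
        "Please confirm with a secondary, clinical-grade measurement."
    )
    with smtplib.SMTP("smtp.example.com") as server:  # hypothetical relay
        server.send_message(msg)
```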

The Deep Vision developer team, based in Córdoba, Argentina, also had to adapt their AI models that use regular camera data to detect people wearing face masks. They use the NVIDIA Metropolis application framework for smart cities, including the NVIDIA DeepStream SDK for intelligent video analytics and NVIDIA TensorRT to accelerate inference.
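The team’s code isn’t public, but a DeepStream pipeline of the shape described can be expressed in a few lines of Python. The plugin names (nvstreammux, nvinfer, nvdsosd) are standard DeepStream elements; the input file and the detector config file are hypothetical stand-ins.

```python
# Skeletal DeepStream pipeline sketch: decode a video, batch it, run a
# TensorRT-accelerated detector via nvinfer, and draw the results.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.parse_launch(
    "filesrc location=entrance.h264 ! h264parse ! nvv4l2decoder ! m.sink_0 "
    "nvstreammux name=m batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=mask_detector_config.txt ! "  # hypothetical config
    "nvvideoconvert ! nvdsosd ! fakesink"
)
pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```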

Deep Vision and IntelliSite next plan to integrate the temperature screening AI with facial recognition models, so customers can use the application for employee registration once their temperature has been checked.

IntelliSite is a member of the NVIDIA Clara Guardian ecosystem, bringing edge AI to healthcare facilities. Visit our COVID page to explore how other startups are using AI and accelerated computing to fight the pandemic.

FDA disclaimer: Thermal measurements are designed as a triage tool and should not be the sole means of diagnosing high-risk individuals for any viral threat. Elevated thermal readings should be confirmed with a secondary, clinical-grade evaluation tool. FDA recommends screening individuals one at a time, not in groups.

NVIDIA Ampere GPUs Come to Google Cloud at Speed of Light

The NVIDIA A100 Tensor Core GPU has landed on Google Cloud.

Available in alpha on Google Compute Engine just over a month after its introduction, A100 has come to the cloud faster than any NVIDIA GPU in history.

Today’s introduction of the Accelerator-Optimized VM (A2) instance family featuring A100 makes Google the first cloud service provider to offer the new NVIDIA GPU.

A100, which is built on the newly introduced NVIDIA Ampere architecture, delivers NVIDIA’s greatest generational leap ever. It boosts training and inference computing performance by 20x over its predecessors, providing tremendous speedups for workloads to power the AI revolution.

“Google Cloud customers often look to us to provide the latest hardware and software services to help them drive innovation on AI and scientific computing workloads,” said Manish Sainani, director of Product Management at Google Cloud. “With our new A2 VM family, we are proud to be the first major cloud provider to market with NVIDIA A100 GPUs, just as we were with NVIDIA T4 GPUs. We are excited to see what our customers will do with these new capabilities.”

In cloud data centers, A100 can power a broad range of compute-intensive applications, including AI training and inference, data analytics, scientific computing, genomics, edge video analytics, 5G services, and more.

Fast-growing, critical industries will be able to accelerate their discoveries with the breakthrough performance of A100 on Google Compute Engine. From scaling up AI training and scientific computing, to scaling out inference applications, to enabling real-time conversational AI, A100 accelerates complex and unpredictable workloads of all sizes running in the cloud. 

NVIDIA CUDA 11, coming to general availability soon, makes the new capabilities of NVIDIA A100 GPUs accessible to developers, including Tensor Cores, mixed-precision modes, multi-instance GPU, advanced memory management and standard C++/Fortran parallel language constructs.

Breakthrough A100 Performance in the Cloud for Every Size Workload

The new A2 VM instances can deliver different levels of performance to efficiently accelerate CUDA-enabled workloads across machine learning training and inference, data analytics and high-performance computing.

For large, demanding workloads, Google Compute Engine offers customers the a2-megagpu-16g instance, which comes with 16 A100 GPUs, offering a total of 640GB of GPU memory and 1.3TB of system memory — all connected through NVSwitch with up to 9.6TB/s of aggregate bandwidth.

For those with smaller workloads, Google Compute Engine is also offering A2 VMs in smaller configurations to match specific applications’ needs.

Google Cloud announced that additional NVIDIA A100 support is coming soon to Google Kubernetes Engine, Cloud AI Platform and other Google Cloud services. For more information, including technical details on the new A2 VM family and how to sign up for access, visit the Google Cloud blog.

NVIDIA Puts More Tools in Hands of Artists, Designers and Data Scientists Working Remotely

For many organizations, the coronavirus pandemic has created a permanent shift in how their employees work. From now on, they’ll have the option to collaborate at home or in the office.

NVIDIA is giving these millions of professionals around the world a boost with a new version of our virtual GPU software, vGPU July 2020. The software adds support for more workloads and is loaded with features that improve operational efficiencies for IT administrators.

GPU virtualization is key to offering everyone from designers to data scientists a flexible way to collaborate on projects that require advanced graphics and computing power, wherever they are.

Employee productivity was the primary concern among organizations addressing remote work due to the COVID-19 pandemic, according to recent research by IDC. When the market intelligence firm interviewed NVIDIA customers using GPU-accelerated virtual desktops, it found organizations with 500-1,000 users experienced a 13 percent increase in productivity, resulting in more than $1 million in annual savings.

According to Alex Herrera, an analyst with Jon Peddie Research/Cadalyst, “In a centralized computing environment with virtualized GPU technology, users no longer have to be tied to their physical workstations. As proven recently through remote work, companies can turn on a dime, enabling anywhere/anytime access to big data without compromising on performance.”

Expanded Support in the Data Center and Cloud with SUSE

NVIDIA has expanded hypervisor support by partnering with SUSE to bring vGPU support to the kernel-based virtual machine (KVM) platform in SUSE Linux Enterprise Server.

Initial offerings will be supported with NVIDIA vComputeServer software, enabling GPU virtualization for AI and data science workloads. This will expand hypervisor platform options for enterprises and cloud service providers that are seeing an increased need to support GPUs.

“Demand for accelerated computing has grown beyond specialized HPC environments into virtualized data centers,” said Brent Schroeder, global chief technology officer at SUSE. “To ensure the needs of business leaders are met, SUSE and NVIDIA have worked to simplify the use of NVIDIA virtual GPUs in SUSE Linux Enterprise Server. These efforts modernize the IT infrastructure and accelerate AI and ML workloads to enhance high-performance and time-sensitive workloads for SUSE customers everywhere.”

Added Support for Immersive Collaboration

NVIDIA CloudXR technology uses NVIDIA RTX and vGPU software to deliver VR and augmented reality across 5G and Wi-Fi networks. vGPU July 2020 adds 120Hz VSync support at resolutions up to 4K, giving CloudXR users an even smoother immersive experience on untethered devices. It creates a level of fidelity that’s indistinguishable from native tethered configurations.

“Streaming AR/VR over Wi-Fi or 5G enables organizations to truly take advantage of its benefits, enabling immersive training, product design and architecture and construction,” said Matt Coppinger, director of AR/VR at VMware. “We’re partnering with NVIDIA to more securely deliver AR and VR applications running on VMware vSphere and NVIDIA Quadro Virtual Workstation, streamed using NVIDIA CloudXR to VMware’s Project VXR client application running on standalone headsets.”

The latest release of vGPU enables a better user experience and manageability needed for demanding workloads like the recently debuted Omniverse AEC Experience, which combines Omniverse, a real-time collaboration platform, with RTX Server and NVIDIA Quadro Virtual Workstation software for the data center. The reference design supports up to two virtual workstations on an NVIDIA Quadro RTX GPU, running multiple workloads such as collaborative, computer-aided design while also providing real-time photorealistic rendering of the model.

With Quadro vWS, an Omniverse-enabled virtual workstation can be provisioned in minutes to new users, anywhere in the world. Users don’t need specialized client hardware, just an internet-connected device, laptop or tablet, and data remains highly secured in the data center.

Improved Operational Efficiency for IT Administrators

New features in vGPU July 2020 help enterprise IT admins and cloud service providers streamline management, boosting their operational efficiency.

This includes cross-branch support, where the host and guest vGPU software can be on different versions, easing upgrades and large deployments.

IT admins can move quicker to the latest hypervisor versions to pick up fixes, security patches and new features, while staggering deployments for end-user images.

Enterprise data centers running VMware vSphere will see improved operational efficiency by having the ability to manage vGPU-powered VMs with the latest release of VMware vRealize Operations.

As well, VMware recently added Distributed Resource Scheduler support for GPU-enabled VMs into vSphere. Now, vSphere 7 introduces a new feature called “Assignable Hardware,” which enhances initial placement so that a VM can be automatically “placed” on a host that has exactly the right GPU and profile available before powering it on.

For IT managing large deployments, this means reducing deployment time of new VMs to a few minutes, as opposed to a manual process that can take hours. As well, this feature works with VMware’s vSphere High Availability, so if a host fails for any reason, a GPU-enabled VM can be automatically restarted on another host with the right GPU resources.
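VMware hasn’t published pseudocode for Assignable Hardware, but the matching step it automates can be modeled in a toy form: a VM declares the GPU model and vGPU profile it needs, and the scheduler picks a host that still has a matching free device. The host names and profile strings below are invented for illustration.

```python
# Toy model of GPU-aware initial placement -- not VMware's implementation.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    free_gpus: list = field(default_factory=list)  # (gpu_model, vgpu_profile)

def place_vm(hosts, gpu_model: str, profile: str):
    """Return the first host that can satisfy the request, or None."""
    for host in hosts:
        slot = (gpu_model, profile)
        if slot in host.free_gpus:
            host.free_gpus.remove(slot)  # mark the device as assigned
            return host
    return None

hosts = [
    Host("esx-01", [("T4", "t4-8q")]),
    Host("esx-02", [("RTX6000", "rtx6000-12q"), ("T4", "t4-8q")]),
]
chosen = place_vm(hosts, "T4", "t4-8q")
print(chosen.name if chosen else "no capacity")  # -> esx-01
```

The same lookup, run again after a host failure, is essentially what lets vSphere High Availability restart a GPU-enabled VM on another host with the right resources.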

Availability

The NVIDIA vGPU July 2020 release is coming soon. Learn more at nvidia.com/virtualization and watch this video.

Schrödinger Transforming Drug Discovery with GPU-Powered Platform 

The pharmaceutical industry has grown accustomed to investing billions of dollars to bring drugs to market, only to watch 90 percent of them fail even before clinical trials.

The problem is, and always has been, there’s simply not enough compute power in the world to accurately assess the properties of all possible molecules, nor to support the extensive experimental efforts needed in drug discovery.

“There may be more potential drug compounds than there are atoms in the universe,” said Patrick Lorton, chief technology officer at Schrödinger, the New York-based developer of a physics-based software platform designed to model and compute the properties of novel molecules for the pharma and materials industries.

“If you look at a billion molecules and you say there’s no good drug here, it’s the same as looking at a drop of water in the ocean and saying fish don’t exist,” he said.

Fresh off a successful IPO earlier this year, Schrödinger has devoted decades to refining computational algorithms to accurately compute important properties of molecules. The company uses NVIDIA GPUs to generate and evaluate petabytes of data to accelerate drug discovery, a dramatic improvement over slow and expensive traditional lab work.

The company works with all 20 of the biggest biopharma companies in the world, several of which have standardized on Schrödinger’s platform as a key component of preclinical research.

The COVID-19 pandemic highlights the need for a more efficient and effective drug discovery process. To that end, the company has joined the global COVID R&D alliance to offer resources and collaborate. Recently, Google Cloud has also thrown its weight behind this alliance, donating over 16 million hours of NVIDIA GPU time to hunt for a cure.

“We hope to develop an antiviral therapeutic for SARS-CoV-2, the virus that causes COVID-19, in time to have treatments available for future waves of the pandemic,” Lorton said.

Advanced Simulation Software

The pharmaceutical industry has long depended on manually intensive physical processes to find new therapeutics. This allowed it to develop many important remedies over the last 50 years, but only through a laborious trial-and-error approach, Lorton said.

He makes the comparison to airplane manufacturers, which formerly carved airplane designs out of balsa wood and tested their drag coefficient in wind tunnels. They now rely on advanced simulation software that reduces the time and resources needed to test designs.

With the pharmaceutical industry traditionally using the equivalent of balsa, Schrödinger’s drug discovery platform has become a game changer.

“We’re trying to make preclinical drug discovery more efficient,” said Lorton. “This will enable the industry to treat more diseases and help more conditions.”

Exploring New Space

For more than a decade, every major pharmaceutical company has been using Schrödinger’s software, which can perform physics simulations down to the atomic level. For each potential drug candidate, Schrödinger uses recently developed physics-based computational approaches to evaluate as many as 3,000 possible compounds. This requires up to 12,000 GPU hours on high-performance computers.

Once the physics-based calculations are completed for the original set of randomly selected compounds, a layer of active learning is applied, making projections on the probable efficacy of a billion molecules.

Lorton said it currently takes four or five iterations to get a machine-learning algorithm accurate enough to be predictive, though even these projections are always double-checked with the physics-based methods before synthesizing any molecules in the lab.
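Schrödinger’s pipeline is proprietary, but the loop described above can be sketched schematically: score a random subset with physics, train a surrogate, project scores across the whole library, then re-verify the top picks with physics. In the Python sketch below, the fingerprint features, dataset sizes and run_physics_scoring stub are all placeholders for the real free-energy calculations.

```python
# Schematic active-learning loop, loosely following the workflow described
# in the article. Everything here is a stand-in: real runs use molecular
# fingerprints and GPU-based physics scoring, at vastly larger scale.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
library = rng.random((100_000, 64))  # stand-in for a billion-molecule library
true_w = rng.random(64)

def run_physics_scoring(features):
    # Placeholder for expensive physics-based binding-affinity calculations.
    return features @ true_w

# Score a small random subset with the expensive physics method first.
idx = rng.choice(len(library), size=3_000, replace=False)
X, y = library[idx], run_physics_scoring(library[idx])

for _ in range(4):  # roughly the 4-5 iterations mentioned above
    # Train a cheap ML surrogate on everything physics has scored so far.
    surrogate = RandomForestRegressor(n_estimators=50).fit(X, y)
    # Project scores across the whole library and pick the most promising.
    top = np.argsort(surrogate.predict(library))[-1_000:]
    # Double-check the picks with physics before trusting them.
    X = np.vstack([X, library[top]])
    y = np.concatenate([y, run_physics_scoring(library[top])])
```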

This software-based approach yields much faster results, but that’s only part of the value. It also greatly expands the scope of analysis, evaluating data that human beings never would have had time to address.

“The thing that is most compelling is exploring new space,” said Lorton. “It’s not just being cheaper. It’s being cheaper and finding things you would have otherwise not explored.”

For that reason, Schrödinger’s work focuses on modeling and simulation, and using the latest high performance computing resources to expand its discovery capabilities.

Bayer Proving Platform’s Value

One customer that’s been putting Schrödinger’s technology to use is Bayer AG. Schrödinger software has been helping Bayer scientists find lead structures for several drug discovery projects, ultimately contributing to clinical development candidates.

Recently, both companies agreed to co-develop a novel drug discovery platform to accelerate the process of estimating the binding affinity, synthesizability and other properties of small molecules.

Bayer can’t yet share any specific results that the platform has delivered, but Dr. Alexander Hillisch, the company’s head of computational drug design, said it’s had an impact on several active projects.

Dr. Hillisch said that the software is expected to speed up work and effectively widen Bayer’s drug-discovery capabilities. As a result, he believes it’s time for NVIDIA GPUs to get a lot more recognition within the industry.

In a typical drug discovery project, Bayer evaluates binding affinities and other properties of molecules such as absorption and metabolic stability. With Schrödinger software and NVIDIA GPUs, “we’re enumerating millions to billions of virtual compounds and are thus scanning the chemical space much more broadly than we did before, in order to identify novel lead compounds with favorable properties,” he said.

Dr. Hillisch also suggested that the impact of holistic digital drug discovery approaches can soon be judged. “We expect to know how substantial the impact of this scientific approach will be in the near future,” he said.

The drug design platform also will be part of Bayer’s work on COVID-19. The company spun off its antiviral research into a separate company in 2006, but it recently joined a European coronavirus research initiative to help identify novel compounds that could provide future treatment.

Tailor-Made Task for GPUs

Given the scope of Schrödinger’s task, Lorton made it clear that NVIDIA’s advances in developing a full-stack computing platform for HPC and AI that pushes the boundaries of performance have been as important to his company’s accomplishments as its painstaking algorithmic and scientific work.

“It could take thousands, or tens of thousands, or in some crazy case, even hundreds of thousands of dollars to synthesize and get the binding affinity of a drug molecule,” he said. “We can do it for a few dollars of compute costs on a GPU.”

Lorton said that if the company had started one of its physics calculations on a single CPU when it was founded in 1990, it would have taken until today to reach the conclusions that a single GPU can now deliver in less than an hour.
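The arithmetic behind that comparison is easy to check:

```python
# Back-of-envelope check of the claim above: a calculation started on one
# CPU in 1990 versus less than an hour on a single GPU today.
cpu_hours = (2020 - 1990) * 365 * 24  # ~262,800 hours over 30 years
gpu_hours = 1                         # "less than an hour"
print(f"implied speedup: roughly {cpu_hours // gpu_hours:,}x")  # ~262,800x
```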

Even with the many breakthroughs in compute speed on NVIDIA GPUs, Schrödinger’s discovery projects require thousands of NVIDIA T4 and V100 Tensor Core GPUs every day, both on premises and on the Google Cloud Platform. It’s this next level of compute, combined with continued investment in the underlying science, that the company hopes will change the way all drug discovery is done.

Taking AI to Market: NVIDIA and Arterys Bridge Gap Between Medical Researchers and Clinicians

Around the world, researchers in startups, academic institutions and online communities are developing AI models for healthcare. Getting these models from their hard drives and into clinical settings can be challenging, however.

Developers need feedback from healthcare practitioners on how their models can be optimized for the real world. So, San Francisco-based AI startup Arterys built a forum for these essential conversations between clinicians and researchers.

Called the Arterys Marketplace, and now integrated with the NVIDIA Clara Deploy SDK, the platform makes it easy for researchers to share medical imaging AI models with clinicians, who can try them on their own data.

“By integrating the NVIDIA Clara Deploy technology into our platform, anyone building an imaging AI workflow with the Clara SDK can take their pipeline online with a simple handoff to the Arterys team,” said Christian Ulstrup, product manager for Arterys Marketplace. “We’ve streamlined the process and are excited to make it easy for Clara developers to share their models.”

Researchers can submit medical imaging models in any stage of development — from AI tools for research use to apps with regulatory clearance. Once the model is posted on the public Marketplace site, anyone with an internet connection can test it by uploading a medical image through a web browser.

Models on Arterys Marketplace run on NVIDIA GPUs through Amazon Web Services for inference.

A member of both the NVIDIA Inception and AWS Activate programs, which collaborate to help startups get to market faster, Arterys was founded in 2011. The company builds clinical AI applications for medical imaging and launched the Arterys Marketplace at the RSNA 2019 medical conference.

It recently raised $28 million in funding to further develop the ecosystem of partners and clinical-grade AI solutions on its platform.

Several of the models now on the Arterys Marketplace are focused on COVID-19 screening from chest X-rays and CT images. Among them is a model jointly developed by NVIDIA’s medical imaging applied research team and clinicians and data scientists at the National Institutes of Health. Built in under three weeks using the NVIDIA Clara Train framework, the model can help researchers study the detection of COVID-19 from chest CT scans.

Building AI Pillar of the Community

While there’s been significant investment in developing AI models for healthcare in the last decade, the Arterys team found that it can still take years to get the tools into radiologists’ hands.

“There’s been a huge gap between the smart, passionate researchers building AI models for healthcare and the end users — radiologists and clinicians who can use these models in their workflow,” Ulstrup said. “We realized that no research institution, no startup was going to be able to do this alone.”

The Arterys Marketplace was created with simplicity in mind. Developers need only fill out a short form to submit an AI model for inclusion, and then can send the model to users as a URL — all for free.

For clinicians around the world, there’s no need to download and install an AI model. All that’s needed is an internet connection and a couple of medical images to upload for testing with the AI models. Users can choose whether or not their imaging data is shared with the researchers.

The images are analyzed with NVIDIA GPUs in the cloud, and results are emailed to the user within minutes. A Slack channel provides a forum for clinicians to provide feedback to researchers, so they can work together to improve the AI model.
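Arterys hasn’t published a public API spec for the Marketplace (the article describes a browser upload), but the upload-and-notify pattern it implements looks roughly like the Python sketch below, in which the endpoint URL, model ID and field names are all hypothetical.

```python
# Hypothetical sketch of the Marketplace's upload-and-notify pattern.
# The URL and fields are illustrative, not Arterys' real API.
import requests

MARKETPLACE_URL = "https://marketplace.example.com/api/v1/infer"  # hypothetical

with open("chest_xray.dcm", "rb") as f:
    response = requests.post(
        MARKETPLACE_URL,
        files={"image": f},
        data={"model_id": "covid19-cxr-demo",          # hypothetical model ID
              "notify_email": "clinician@example.com"},
        timeout=60,
    )
response.raise_for_status()
print(response.json())  # e.g. a job ID to poll while GPUs process the image
```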

“In healthcare, it can take years to get from an idea to seeing it implemented in clinical settings. We’re reducing that to weeks, if not days,” said Ulstrup. “It’s absurdly easy compared to what the process has been in the past.”

With a focus on open innovation and rapid iteration, Ulstrup says, the Arterys Marketplace aims to bring doctors into the product development cycle, helping researchers build better AI tools. By interacting with clinicians in different geographies, developers can improve their models’ ability to generalize across different medical equipment and imaging datasets.

Over a dozen AI models are on the Arterys Marketplace so far, with more than 300 developers, researchers, and startups joining the community discussion on Slack.

“Once models are hosted on the Arterys Marketplace, developers can send them to researchers anywhere in the world, who in turn can start dragging and dropping data in and getting results,” Ulstrup said. “We’re seeing discussion threads between researchers and clinicians on every continent, sharing screenshots and feedback — and then using that feedback to make the models even better.”

Check out the research-targeted AI COVID-19 Classification Pipeline developed by NVIDIA and NIH researchers on the Arterys Marketplace. To hear more from the Arterys team, register for the Startups4COVID webinar, taking place July 28.

Untold Studios Powers Creative Workflows from Home with NVIDIA Quadro in the Cloud

Studios, creative departments and production companies are searching for tools that make it easier to work from home. But one independent studio is staying ahead of the game as one of the first to operate completely in the cloud.

Untold Studios, based in the U.K., is home to 100 artists and producers. Their projects range from visual effects and advertisements, including the popular John Lewis & Partners Christmas ad featuring Excitable Edgar, a lovable computer graphics dragon, to the latest season of Netflix’s The Crown.

Most employees worked out of Untold Studios’ London office on Old Street, but the studio made the decision to use NVIDIA Quadro Virtual Workstations (Quadro vWS) running on AWS when it first opened for business.

So when the coronavirus outbreak occurred, Untold’s IT team had very little to change to enable employees to work from home.

Ramping Up with AWS

With the Amazon Elastic Compute Cloud (EC2) G4 instances powered by Quadro vWS and NVIDIA T4 Tensor Core GPUs, the artists at Untold Studios get all the creative power they need to work remotely. Employees can access professional graphics workstations to work on image rendering, content creation, video editing and simulation from anywhere, on any device.

“We opted to work from home before the government made it mandatory, as there was no technical reason that artists needed to be present in the studio,” said Sam Reid, head of technology at Untold Studios. “We ensured that all staff had a VPN connection, which allowed them to work from home with no change to their workflow. This made creativity within teams more fluid, allowing for wider conversations between broader teams across the globe.”

Being cloud-based, Untold Studios has the advantage of accessing its resources remotely. Through private VPN connections, the artists can connect to the MPAA-secure environment from home and access content in the same way as if they were physically in the studio.

Scalable for Graphics-Intensive Workflows

Untold Studios takes advantage of the scalability that comes from running in the cloud. When the team receives a big project, they’re not limited by hardware availability. Instead, Quadro vWS allows them to quickly add more virtual workstations as they hire contractors and freelance artists.

Untold Studios also harnesses the power of the cloud for rendering workflows. Since all the data is already hosted in the cloud on a highly scalable and resilient storage platform, the team doesn’t face the same restrictions as others who store their data on premises. The artists also don’t have to wait for lengthy transfers to the cloud and back when rendering.

Image courtesy of Untold Studios, from the John Lewis & Partners Christmas ad featuring the CG dragon Excitable Edgar.

Thanks to the cloud, Untold Studios can also easily upgrade to the latest technology, ensuring that artists’ creativity is not hindered by running on old systems. This makes it easier to run the latest software and content creation tools, get fast support and maintenance, and increase productivity.

“We were able to migrate painlessly from the NVIDIA Tesla M60 to the T4 GPU in just minutes by changing a single line of code,” said Reid. “Moving to T4 gave us improved performance with some apps and ended up being less expensive because we were able to provision artists with workstations that are more suitable for their workload.”
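Reid doesn’t say which line changed, but for an AWS deployment it would plausibly be the machine type in a provisioning script: for example, swapping a G3 (Tesla M60) instance type for a G4dn (T4) one. Below is a hedged boto3 sketch with a hypothetical instance ID; the instance must be stopped before its type can be changed.

```python
# One plausible form of the "single line" change: resizing an EC2 instance
# from a G3 (Tesla M60) type to a G4dn (T4) type. Illustrative only.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",          # hypothetical instance
    InstanceType={"Value": "g4dn.4xlarge"},    # was e.g. g3.4xlarge
)
```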

The Amazon EC2 G4 instances provide Untold Studios with the latest Quadro technology to accelerate creative workflows, reduce time-to-market, and improve design quality. Quadro Virtual Workstations on AWS G4 instances offer color and pixel accuracy, video and audio sync, support for dual HD monitors, and full Wacom support.

Untold Studios is also pioneering the use of real-time ray tracing, working with Epic Games’ Unreal Engine to enable this technology to be used in state-of-the-art commercials.

Learn more about Untold Studios and how NVIDIA can help you work virtually.

Featured blog image courtesy of Untold Studios. ©NETFLIX

Fighting COVID-19 in New Era of Scientific Computing

Scientists and researchers around the world are racing to find a cure for COVID-19.

That’s made the work of all those digitally gathered for this week’s high performance computing conference, ISC 2020 Digital, more vital than ever.

And the work of these researchers is broadening to encompass a wider range of approaches than ever.

The NVIDIA scientific computing platform plays a vital role, accelerating progress across this entire spectrum of approaches — from data analytics to simulation and visualization to AI to edge processing.

Some highlights:

  • In genomics, Oxford Nanopore Technologies was able to sequence the virus genome in just 7 hours using our GPUs.
  • In infection analysis and prediction, the NVIDIA RAPIDS team has GPU-accelerated Plotly’s Dash, a data visualization tool, enabling clearer insights into real-time infection rate analysis.
  • In structural biology, the U.S. National Institutes of Health and the University of Texas at Austin are using the GPU-accelerated software CryoSPARC to reconstruct the first 3D structure of the virus protein using cryogenic electron microscopy.
  • In treatment, NVIDIA worked with the National Institutes of Health and built an AI to accurately classify COVID-19 infection based on lung scans so efficient treatment plans can be devised.
  • In drug discovery, Oak Ridge National Laboratory ran the Scripps Research Institute’s AutoDock on the GPU-accelerated Summit supercomputer to screen a billion potential drug combinations in just 12 hours.
  • In robotics, startup Kiwi is building robots to deliver medical supplies autonomously.
  • And in edge detection, Whiteboard Coordinator Inc. built an AI system to automatically measure and screen elevated body temperatures, screening well over 2,000 healthcare workers per hour.

It’s truly inspirational to wake up every day and see the amazing effort going on around the world, and the role NVIDIA’s scientific computing platform plays in helping understand the virus and discover testing and treatment options to fight the COVID-19 pandemic.

The reason we’re able to play a role in so many efforts, across so many areas, is because of our strong focus on providing end-to-end workflows for the scientific computing community.

We’re able to provide these workflows because of our approach to full-stack innovation to accelerate all key application areas.

For data analytics, we accelerate key frameworks like Spark 3.0, RAPIDS and Dask. This acceleration is built using our domain-specific CUDA-X libraries for data analytics such as cuDF, cuML and cuGraph, along with I/O acceleration technologies from Magnum IO.
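As a minimal sketch of that pattern, the cuDF snippet below runs familiar pandas-style operations on the GPU; the CSV file and its columns are hypothetical.

```python
# cuDF sketch: pandas-like data analytics executed on the GPU.
import cudf

# Hypothetical dataset with columns: region, date, new_cases
cases = cudf.read_csv("infections.csv")
daily = (cases.groupby(["region", "date"], as_index=False)
              .agg({"new_cases": "sum"}))
# Rank the hardest-hit region-days -- computed entirely on the GPU.
print(daily.sort_values("new_cases", ascending=False).head(10))
```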

These libraries contain millions of lines of code and provide seamless acceleration to developers and users, whether they’re creating applications on desktops accelerated with our GPUs or running them in data centers, in edge computers, in supercomputers, or in the cloud.

Similarly, we accelerate over 700 HPC applications, including all the most widely used scientific applications.

NVIDIA accelerates all frameworks for AI, which has become crucial for tasks where the information is incomplete — where there are no first principles to work with, or where first-principles-based approaches are too slow.

And, thanks to our roots in visual computing, NVIDIA provides accelerated visualization solutions, so terabytes of data can be visualized.

NASA, for instance, used our acceleration stack to visualize the landing of the first manned mission to Mars, in what is the world’s largest real-time, interactive volumetric visualization (150TB).

Our deep domain libraries also provide a seamless performance boost to scientific computing users on their applications across the different generations of our architecture. Going from Volta to Ampere, for instance.

NVIDIA’s also making all our new and improved GPU-optimized scientific computing applications available through NGC for researchers to accelerate their time to insight.

Together, all of these pillars of scientific computing — simulation, AI, data analytics, edge streaming and visualization workflows — are key to tackling the challenges of today and tomorrow.

Programming the Modern Data Center: Cumulus Joins NVIDIA’s Networking Group

Starting today, the open, modern data center has a familiar partner with a new look and a broader reach.

Cumulus Networks is now officially a part of NVIDIA’s networking business unit formed earlier this year with the Mellanox acquisition. The combination gives users a choice of ingredients to power data centers that are accelerated, disaggregated and software-defined to meet the exponential growth in AI, cloud and high performance computing.

The data center is rapidly evolving into the new unit of computing. It’s built on a distributed network of compute and storage resources users need to program to address their changing workloads and expanding datasets.

That’s why Cumulus is a key part of NVIDIA’s networking vision. It provides a popular suite of networking software from operating systems to analytics that gives users choice in how they deploy and automate their data centers.

Driving the Open Road in Networking

Cumulus supports more than 2,000 customers using 130 hardware platforms that run Cumulus Linux, its operating system for network switches. Likewise, Mellanox has been giving users a choice of networking software under its Open Ethernet strategy, forged in 2013.

Since 2016, our ultrafast Mellanox Spectrum switches have shipped with Cumulus Linux and SONiC, the open source offering forged in Microsoft’s Azure cloud and managed by the Open Compute Project. And today, the ONIE environment Cumulus created is a software foundation for Mellanox’s switches.

In addition, Mellanox supports DENT, a distributed Linux software framework for retail and other enterprises at the edge of the network. And our Onyx operating system continues to expand, especially in Ethernet Storage Fabrics.

All of today’s major network operating systems, including Cumulus Linux, SONiC and DENT, are built on Free Range Routing (FRR), the open-source software for Ethernet routing that Cumulus helped create. NVIDIA will continue Cumulus’ work maintaining and advancing FRR for the Linux community as a key plank of open networking.

Our history of support for choice in networking software and hardware continues and expands. For example, our network analytics and management software, Cumulus NetQ, will continue to embrace and extend innovations for open networking.

A Programmable Network Orchestrates GPUs, DPUs 

Users need rich choices in networking software to orchestrate the evolving elements of the data center.

The rise of AI and data analytics has made GPU accelerators a key ingredient. Meanwhile, data processing units (DPUs) have emerged in SmartNICs like the Mellanox BlueField-2 to handle networking, security and storage tasks.

The combination of hardware and software provides unique opportunities. For example, analytics programs could identify issues that AI software could automatically address in self-healing networks.

It’s the kind of advance that requires companies that can innovate across the stack from chips to analytics in networking and accelerated computing with GPUs and DPUs.

Networking hardware and software must go hand in hand so AI, cloud and HPC workloads can run flexibly across any part of the entire data center. With the demand for this kind of elastic computing, there’s no going back to the past where each network system had its own proprietary programming environment.

That’s why we’re glad to welcome Cumulus today as part of NVIDIA. We look forward to the innovations we’ll deliver to customers together.

Qure.ai Helps Clinicians Answer Questions from COVID-19 Lung Scans

Qure.ai, a Mumbai-based startup, has been developing AI tools to detect signs of disease from lung scans since 2016. So when COVID-19 began spreading worldwide, the company raced to retool its solution to address clinicians’ urgent needs.

In use in more than two dozen countries, Qure.ai’s chest X-ray tool, qXR, was trained on 2.5 million scans to detect lung abnormalities — signs of tumors, tuberculosis and a host of other conditions.

As the first COVID-specific datasets were released by countries with early outbreaks — such as China, South Korea and Iran — the company quickly incorporated those scans, enabling qXR to mark areas of interest on a chest X-ray image and provide a COVID-19 risk score.

“Clinicians around the world are looking for tools to aid critical decisions around COVID-19 cases — decisions like when a patient should be admitted to the hospital, or be moved to the ICU, or be intubated,” said Chiranjiv Singh, chief commercial officer of Qure.ai. “Those clinical decisions are better made when they have objective data. And that’s what our AI tools can provide.”

While doctors have data like temperature readings and oxygen levels on hand, AI can help quantify the impact on a patient’s lungs — making it easier for clinicians to triage potential COVID-19 cases where there’s a shortage of testing kits, or compare multiple chest X-rays to track the progression of disease.

In recent weeks, the company deployed the COVID-19 version of its tool at around 50 sites worldwide, including hospitals in the U.K., India, Italy and Mexico. Healthcare workers in Pakistan are using qXR in medical vans that actively track cases in the community.

A member of the NVIDIA Inception program, which provides resources to help startups scale faster, Qure.ai uses NVIDIA TITAN GPUs on premises, and V100 Tensor Core GPUs through Amazon Web Services for training and inference of its AI models. The startup is in the process of seeking FDA clearance for qXR, which has received the CE mark in Europe.

Capturing an Image of COVID-19

For coronavirus cases, chest X-rays are just one part of the picture — because not every case shows impact on the lungs. But due to the wide availability of X-ray machines, including portable bedside ones, they’ve quickly become the imaging modality of choice for hospitals admitting COVID-19 patients.

“Based on the literature to date, we know certain indicators of COVID-19 are visible in chest X-rays. We’re seeing what’s called ground-glass opacities and consolidation, and noticed that the virus tends to settle in both sides of the lung,” Singh said. “Our AI model applies a positive score to these factors and relevant findings, and a negative score to findings like calcifications and pleural effusion that suggest it’s not COVID.”

The qXR tool provides clinicians with one of four COVID-19 risk scores: high, medium, low or none. Within a minute, it also labels and quantifies lesions, providing an objective measurement of lung impact.
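Qure.ai’s actual weights are proprietary, but the scoring scheme Singh describes can be illustrated with a toy function. The weights and cutoffs below are invented for illustration only.

```python
# Toy re-creation of the scoring logic described above -- the weights and
# cutoffs are invented and are not Qure.ai's.
POSITIVE = {"ground_glass_opacity": 2.0, "consolidation": 1.5, "bilateral": 1.0}
NEGATIVE = {"calcification": -2.0, "pleural_effusion": -1.5}

def covid_risk(findings: set) -> str:
    weights = {**POSITIVE, **NEGATIVE}
    score = sum(weights.get(f, 0.0) for f in findings)
    if score >= 3.0:
        return "high"
    if score >= 1.5:
        return "medium"
    if score > 0.0:
        return "low"
    return "none"

print(covid_risk({"ground_glass_opacity", "consolidation", "bilateral"}))  # high
print(covid_risk({"calcification"}))                                       # none
```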

By rapidly processing chest X-ray images, qXR is helping some doctors triage patients with COVID-19 symptoms while they wait for test results. Others are using the tool to monitor disease progression by comparing multiple scans taken of the same patient over time. For ease of use, qXR integrates with radiologists’ existing workflows, including the PACS imaging system.

“Workflow integration is key, as the more you can make your AI solution invisible and smoothly embedded into the healthcare workflow, the more it’ll be adopted and used,” Singh said.

While the first version of qXR with COVID-19 analysis was trained and validated on around 11,500 scans specific to the virus, the team has been adding a couple thousand additional scans to the dataset each week, improving accuracy as more data becomes available.

Singh credits the company’s ability to pivot quickly in part to the diverse dataset of chest X-rays it’s collected over the years. In total, Qure.ai has almost 8 million studies, spread evenly across North America, Europe, the Middle East and Asia, as well as a mix of studies taken on equipment from different manufacturers and in different healthcare settings.

“The volume and variety of data helps our AI model’s accuracy,” Singh said. “You don’t want something built on perfect, clean data from a single site or country, where the moment it goes to a new environment, it fails.”

From the Cloud to Clinicians’ Hands

The Bolton NHS Foundation Trust in the U.K. and San Raffaele University Hospital in Milan are among dozens of sites that have deployed qXR to help radiologists monitor COVID-19 disease progression in patients.

Most clients can get up and running with qXR within an hour, with deployment over the cloud. In an urgent environment like the current pandemic, this allows hospitals to move quickly, even when travel restrictions make live installations impossible. Hospital customers with on-premises data centers can choose to use their onsite compute resources for inference.

Qure.ai’s next step, Singh said, “is to get this tool in the hands of as many radiologists and other clinicians directly interacting with patients around the world as we can.”

The company has also developed a natural language processing tool, qScout, that uses a chatbot to handle regular check-ins with patients who feel they may have the virus or are recovering at home. Keeping in contact with people in an outpatient setting helps monitor symptoms, alerting healthcare workers when a patient may need to be admitted to the hospital and tracking recovery without overburdening hospital infrastructure.

It took the team just six weeks to take qScout from a concept to its first customer: the Ministry of Health in Oman.

To learn more about Qure.ai, watch the recent COMPUTE4COVID webinar session, Healthcare AI Startups Against COVID-19. Visit our COVID page to explore how other startups are using AI and accelerated computing to fight the virus.

How NVIDIA EGX Is Forming Central Nervous System of Global Industries

Massive change across every industry is being driven by the rising adoption of IoT sensors, including cameras for seeing, microphones for hearing, and a range of other smart devices that help enterprises perceive and understand what’s happening in the physical world.

The amount of data being generated at the edge is growing exponentially. The only way to process this vast data in real time is by placing servers near the point of action and by harnessing the immense computational power of GPUs.

The enterprise data center of the future won’t have 10,000 servers in one location, but one or more servers across 10,000 different locations. They’ll be in office buildings, factories, warehouses, cell towers, schools, stores and banks. They’ll detect traffic jams and forest fires, route traffic safely and prevent crime.

By placing a network of distributed servers where data is being streamed from hundreds of sensors, enterprises can use networks of data centers at the edge to drive immediate action with AI. Additionally, by processing data at the edge, privacy concerns are mitigated and data sovereignty concerns are put to rest.

Edge servers lack the physical security infrastructure that enterprise IT takes for granted. And companies lack the budget to invest in roaming IT personnel to manage these remote systems. So edge servers need to be designed to be self-secure and easy to update, manage and deploy from afar.

Plus, AI systems need to be running all the time, with zero downtime.

We’ve built the NVIDIA EGX Edge AI platform to ensure security and resiliency on a global scale. By simplifying deployment and management, NVIDIA EGX allows always-on AI applications to automate the critical infrastructure of the future. The platform is a Kubernetes and container-native software platform that brings GPU-accelerated AI to everything from dual-socket x86 servers to Arm-based NVIDIA Jetson SoCs.
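Because the platform is Kubernetes- and container-native, GPU workloads are scheduled the standard Kubernetes way: by requesting the nvidia.com/gpu resource. The sketch below uses the official Kubernetes Python client; the pod name and container image are illustrative.

```python
# Sketch of the container-native pattern EGX builds on: a Kubernetes pod
# requesting one GPU via the standard nvidia.com/gpu resource.
from kubernetes import client, config

config.load_kube_config()
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="edge-inference-demo"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[client.V1Container(
            name="inference",
            image="nvcr.io/nvidia/tensorrt:20.06-py3",  # example NGC container
            resources=client.V1ResourceRequirements(
                limits={"nvidia.com/gpu": "1"}),        # schedule onto a GPU node
        )],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```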

To date, there are over 20 server vendors building EGX-powered edge and micro-edge servers, including ADLINK, Advantech, Atos, AVerMedia, Cisco, Connect Tech, Dell Technologies, Diamond Systems, Fujitsu, Gigabyte, Hewlett Packard Enterprise, Inspur, Lenovo, Quanta Technologies and Supermicro. They’re joined by dozens of hybrid-cloud and network security partners in the NVIDIA edge ecosystem, such as Canonical, Check Point, Excelero, Guardicore, IBM, Nutanix, Palo Alto Networks, Rancher, Red Hat, VMware, Weka and Wind River.

There are also hundreds of AI applications and integrated solutions vendors building on NVIDIA EGX to deliver industry-specific offerings to enterprises across the globe.

Enterprises running AI need to protect not just customer data, but also the AI models that transform the data into actions. By combining an NVIDIA Mellanox SmartNIC, the industry standard for secure, high-performance networking, with our AI processors in the NVIDIA EGX A100, a converged accelerator, we’re introducing fundamental new innovations for edge AI.

Enhanced Security and Performance

A secure, authenticated boot of the GPU and SmartNIC from Hardware Root-of-Trust ensures the device firmware and lifecycle are securely managed. Third-generation Tensor Core technology in the NVIDIA Ampere architecture brings industry-leading AI performance. Specific to EGX A100, the confidential AI enclave uses a new GPU security engine to load encrypted AI models and encrypt all AI outputs, further preventing the theft of valuable IP.

As the edge moves to encrypted high-resolution sensors, SmartNICs support in-line cryptographic acceleration at line rate. This allows encrypted data feeds to be decrypted and sent directly to GPU memory, bypassing the CPU and system memory.

The edge also requires a greater level of security to protect against threats from other devices on the network. With dynamically reconfigurable firewall offloads in hardware, SmartNICs efficiently deliver the first line of defense for hybrid-cloud, secure service mesh communications.

NVIDIA Mellanox’s time-triggered transport technology for telco (5T for 5G) ensures commercial off-the-shelf solutions can meet the most time-sensitive use cases for 5G vRAN with our NVIDIA Aerial SDK. This will lead to a new wave of CloudRAN in the telecommunications industry.

With an NVIDIA Ampere GPU and a Mellanox ConnectX-6 Dx on one converged product, the EGX A100 delivers low-latency, high-throughput packet processing for security and virtual network functions.

Simplified Deployment, Management and Security at Scale

Through NGC, NVIDIA’s catalog of GPU-optimized containers, we provide industry application frameworks and domain-specific AI toolkits that simplify getting started and tuning AI applications for new edge environments. They can be used together or individually, and they open new possibilities for a variety of edge use cases.

And with the NGC private registry, applications can be signed before publication to ensure they haven’t been tampered with in transit, then authenticated before running at the edge. The NGC private registry also supports model versioning and encryption, so lightweight model updates can be delivered quickly and securely.

The future of edge computing requires secure, scalable, resilient, easy-to-manage fleets of AI-powered systems operating at the network edge. By bringing the combined acceleration of NVIDIA GPUs and NVIDIA Mellanox SmartNICs together with NVIDIA EGX, we’re building both the platform and the ecosystem to form the AI nervous system of every global industry.
