Qure.ai Helps Clinicians Answer Questions from COVID-19 Lung Scans

Qure.ai, a Mumbai-based startup, has been developing AI tools to detect signs of disease from lung scans since 2016. So when COVID-19 began spreading worldwide, the company raced to retool its solution to address clinicians’ urgent needs.

In use in more than two dozen countries, Qure.ai’s chest X-ray tool, qXR, was trained on 2.5 million scans to detect lung abnormalities — signs of tumors, tuberculosis and a host of other conditions.

As the first COVID-specific datasets were released by countries with early outbreaks — such as China, South Korea and Iran — the company quickly incorporated those scans, enabling qXR to mark areas of interest on a chest X-ray image and provide a COVID-19 risk score.

“Clinicians around the world are looking for tools to aid critical decisions around COVID-19 cases — decisions like when a patient should be admitted to the hospital, or be moved to the ICU, or be intubated,” said Chiranjiv Singh, chief commercial officer of Qure.ai. “Those clinical decisions are better made when they have objective data. And that’s what our AI tools can provide.”

While doctors have data like temperature readings and oxygen levels on hand, AI can help quantify the impact on a patient’s lungs — making it easier for clinicians to triage potential COVID-19 cases where there’s a shortage of testing kits, or compare multiple chest X-rays to track the progression of disease.

In recent weeks, the company deployed the COVID-19 version of its tool at around 50 sites worldwide, including hospitals in the U.K., India, Italy and Mexico. Healthcare workers in Pakistan are using qXR in medical vans that actively track cases in the community.

A member of the NVIDIA Inception program, which provides resources to help startups scale faster, Qure.ai uses NVIDIA TITAN GPUs on premises, and V100 Tensor Core GPUs through Amazon Web Services for training and inference of its AI models. The startup is in the process of seeking FDA clearance for qXR, which has received the CE mark in Europe.

Capturing an Image of COVID-19

For coronavirus cases, chest X-rays are just one part of the picture — because not every case shows impact on the lungs. But due to the wide availability of X-ray machines, including portable bedside ones, they’ve quickly become the imaging modality of choice for hospitals admitting COVID-19 patients.

“Based on the literature to date, we know certain indicators of COVID-19 are visible in chest X-rays. We’re seeing what’s called ground-glass opacities and consolidation, and noticed that the virus tends to settle in both sides of the lung,” Singh said. “Our AI model applies a positive score to these factors and relevant findings, and a negative score to findings like calcifications and pleural effusion that suggest it’s not COVID.”

The qXR tool provides clinicians with one of four COVID-19 risk scores: high, medium, low or none. Within a minute, it also labels and quantifies lesions, providing an objective measurement of lung impact.
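
To make the scoring idea concrete, here’s a minimal Python sketch of how radiological findings might be combined into a single risk bucket. The finding names, weights and thresholds are illustrative assumptions, not Qure.ai’s actual model, which is a trained neural network rather than a hand-weighted rule.

```python
# Hypothetical sketch of the scoring logic described above. Weights and
# thresholds are illustrative only -- not Qure.ai's actual model.

# Findings suggestive of COVID-19 add to the score; findings suggestive
# of other causes subtract from it.
FINDING_WEIGHTS = {
    "ground_glass_opacity": +0.4,
    "consolidation": +0.3,
    "bilateral_involvement": +0.3,
    "calcification": -0.4,
    "pleural_effusion": -0.3,
}

def covid19_risk_score(findings: dict) -> tuple:
    """Combine per-finding confidences (0..1) into a single risk bucket."""
    score = sum(FINDING_WEIGHTS[name] * conf
                for name, conf in findings.items()
                if name in FINDING_WEIGHTS)
    if score >= 0.6:
        bucket = "high"
    elif score >= 0.3:
        bucket = "medium"
    elif score > 0.0:
        bucket = "low"
    else:
        bucket = "none"
    return score, bucket

# Example: strong bilateral ground-glass opacities, little effusion.
print(covid19_risk_score({
    "ground_glass_opacity": 0.9,
    "bilateral_involvement": 0.8,
    "pleural_effusion": 0.1,
}))
```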

By rapidly processing chest X-ray images, qXR is helping some doctors triage patients with COVID-19 symptoms while they wait for test results. Others are using the tool to monitor disease progression by comparing multiple scans taken of the same patient over time. For ease of use, qXR integrates with radiologists’ existing workflows, including PACS imaging systems.

“Workflow integration is key, as the more you can make your AI solution invisible and smoothly embedded into the healthcare workflow, the more it’ll be adopted and used,” Singh said.

While the first version of qXR with COVID-19 analysis was trained and validated on around 11,500 scans specific to the virus, the team has been adding a couple thousand additional scans to the dataset each week, improving accuracy as more data becomes available.

Singh credits the company’s ability to pivot quickly in part to the diverse dataset of chest X-rays it’s collected over the years. In total, Qure.ai has almost 8 million studies, spread evenly across North America, Europe, the Middle East and Asia, and taken on equipment from different manufacturers in a variety of healthcare settings.

“The volume and variety of data helps our AI model’s accuracy,” Singh said. “You don’t want something built on perfect, clean data from a single site or country, where the moment it goes to a new environment, it fails.”

From the Cloud to Clinicians’ Hands

The Bolton NHS Foundation Trust in the U.K. and San Raffaele University Hospital in Milan are among dozens of sites that have deployed qXR to help radiologists monitor COVID-19 disease progression in patients.

Most clients can get up and running with qXR within an hour, with deployment over the cloud. In an urgent environment like the current pandemic, this allows hospitals to move quickly, even when travel restrictions make live installations impossible. Hospital customers with on-premises data centers can choose to use their onsite compute resources for inference.

Qure.ai’s next step, Singh said, “is to get this tool in the hands of as many radiologists and other clinicians directly interacting with patients around the world as we can.”

The company has also developed a natural language processing tool, qScout, that uses a chatbot to handle regular check-ins with patients who feel they may have the virus or are recovering at home. Keeping in contact with people in an outpatient setting helps healthcare workers monitor symptoms and track recovery without overburdening hospital infrastructure, alerting them when a patient may need to be admitted.

It took the team just six weeks to take qScout from a concept to its first customer: the Ministry of Health in Oman.

To learn more about Qure.ai, watch the recent COMPUTE4COVID webinar session, Healthcare AI Startups Against COVID-19. Visit our COVID page to explore how other startups are using AI and accelerated computing to fight the virus.

40 Years on, PAC-MAN Recreated with AI by NVIDIA Researchers

Forty years to the day since PAC-MAN first hit arcades in Japan and munched a path to global stardom, the retro classic has been reborn, courtesy of AI.

Trained on 50,000 episodes of the game, a powerful new AI model created by NVIDIA Research, called NVIDIA GameGAN, can generate a fully functional version of PAC-MAN — without an underlying game engine. That means that even without understanding a game’s fundamental rules, AI can recreate the game with convincing results.

GameGAN is the first neural network model that mimics a computer game engine by harnessing generative adversarial networks, or GANs. Made up of two competing neural networks, a generator and a discriminator, GAN-based models learn to create new content that’s convincing enough to pass for the original.
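
For readers unfamiliar with the setup, the sketch below shows that generator-discriminator tug of war in its simplest form: one adversarial training step in PyTorch on dummy image data. It is a teaching illustration under simplified assumptions, not the GameGAN architecture itself, which is far more elaborate.

```python
import torch
import torch.nn as nn

# A minimal generator/discriminator pair and one adversarial training
# step on dummy data -- a generic GAN, not the GameGAN architecture.
latent_dim, img_dim = 64, 28 * 28

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())         # generator
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))                          # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):
    b = real.size(0)
    fake = G(torch.randn(b, latent_dim))

    # Discriminator learns to score real frames 1 and generated frames 0.
    loss_d = bce(D(real), torch.ones(b, 1)) + \
             bce(D(fake.detach()), torch.zeros(b, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator learns to make fakes the discriminator scores as real.
    loss_g = bce(D(fake), torch.ones(b, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# One step on a batch of 16 stand-in "real" images scaled to [-1, 1].
print(train_step(torch.rand(16, img_dim) * 2 - 1))
```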

“This is the first research to emulate a game engine using GAN-based neural networks,” said Seung-Wook Kim, an NVIDIA researcher and lead author on the project. “We wanted to see whether the AI could learn the rules of an environment just by looking at the screenplay of an agent moving through the game. And it did.”

As an artificial agent plays the GAN-generated game, GameGAN responds to the agent’s actions, generating new frames of the game environment in real time. GameGAN can even generate game layouts it’s never seen before, if trained on screenplays from games with multiple levels or versions.

This capability could be used by game developers to automatically generate layouts for new game levels, as well as by AI researchers to more easily develop simulator systems for training autonomous machines.

“We were blown away when we saw the results, in disbelief that AI could recreate the iconic PAC-MAN experience without a game engine,” said Koichiro Tsutsumi from BANDAI NAMCO Research Inc., the research development company of the game’s publisher BANDAI NAMCO Entertainment Inc., which provided the PAC-MAN data to train GameGAN. “This research presents exciting possibilities to help game developers accelerate the creative process of developing new level layouts, characters and even games.”

We’ll be making our AI tribute to the game available later this year on AI Playground, where anyone can experience our research demos firsthand.

AI Goes Old School

PAC-MAN enthusiasts once had to take their coins to the nearest arcade to play the classic maze chase. Take a left at the pinball machine and continue straight past the air hockey, following the unmistakable soundtrack of PAC-MAN gobbling dots and avoiding ghosts Inky, Pinky, Blinky and Clyde.

In 1981 alone, Americans inserted billions of quarters to play 75,000 hours of coin-operated games like PAC-MAN. Over the decades since, the hit game has seen versions for PCs, gaming consoles and cell phones.

Game Changer: NVIDIA Researcher Seung-Wook Kim and his collaborators trained GameGAN on 50,000 episodes of PAC-MAN.

The GameGAN edition relies on neural networks, instead of a traditional game engine, to generate PAC-MAN’s environment. The AI keeps track of the virtual world, remembering what’s already been generated to maintain visual consistency from frame to frame.

No matter the game, the GAN can learn its rules simply by ingesting screen recordings and agent keystrokes from past gameplay. Game developers could use such a tool to automatically design new level layouts for existing games, using screenplay from the original levels as training data.

With data from BANDAI NAMCO Research, Kim and his collaborators at the NVIDIA AI Research Lab in Toronto used NVIDIA DGX systems to train the neural networks on the PAC-MAN episodes (a few million frames, in total) paired with data on the keystrokes of an AI agent playing the game.

The trained GameGAN model then generates static elements of the environment, like a consistent maze shape, dots and Power Pellets — plus moving elements like the enemy ghosts and PAC-MAN itself.

It learns key rules of the game, both simple and complex. Just like in the original game, PAC-MAN can’t walk through the maze walls. He eats up dots as he moves around, and when he consumes a Power Pellet, the ghosts turn blue and flee. When PAC-MAN exits the maze from one side, he’s teleported to the opposite end. If he runs into a ghost, the screen flashes and the game ends.

Since the model can disentangle the background from the moving characters, it’s possible to recast the game to take place in an outdoor hedge maze, or swap out PAC-MAN for your favorite emoji. Developers could use this capability to experiment with new character ideas or game themes.

It’s Not Just About Games

Autonomous robots are typically trained in a simulator, where the AI can learn the rules of an environment before interacting with objects in the real world. Creating a simulator is a time-consuming process for developers, who must code rules about how objects interact with one another and how light works within the environment.

Simulators are used to develop autonomous machines of all kinds, such as warehouse robots learning how to grasp and move objects around, or delivery robots that must navigate sidewalks to transport food or medicine.

GameGAN introduces the possibility that the work of writing a simulator for tasks like these could one day be replaced by simply training a neural network.

Suppose you install a camera on a car. It can record what the road environment looks like or what the driver is doing, like turning the steering wheel or hitting the accelerator. This data could be used to train a deep learning model that can predict what would happen in the real world if a human driver — or an autonomous car — took an action like slamming on the brakes.

“We could eventually have an AI that can learn to mimic the rules of driving, the laws of physics, just by watching videos and seeing agents take actions in an environment,” said Sanja Fidler, director of NVIDIA’s Toronto research lab. “GameGAN is the first step toward that.”
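
As a rough illustration of that idea, the sketch below trains a toy action-conditioned model to predict the next frame from the current frame and the agent’s action. That supervised core is the idea GameGAN builds on with adversarial training and a memory module; the network shape, dimensions and random data here are placeholder assumptions.

```python
import torch
import torch.nn as nn

# Toy action-conditioned next-frame predictor: given the current frame
# and the agent's action, predict the next frame. Dimensions and data
# are placeholders; GameGAN's real model is far richer.
n_actions, frame_dim, hidden = 4, 64 * 64, 512

class DynamicsModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(frame_dim + n_actions, hidden), nn.ReLU(),
            nn.Linear(hidden, frame_dim), nn.Sigmoid())

    def forward(self, frame, action):
        # One-hot encode the action and concatenate with the flat frame.
        a = nn.functional.one_hot(action, n_actions).float()
        return self.net(torch.cat([frame, a], dim=1))

model = DynamicsModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Train on (frame_t, action_t, frame_t+1) triples from recorded gameplay.
frame_t = torch.rand(8, frame_dim)
action_t = torch.randint(0, n_actions, (8,))
frame_next = torch.rand(8, frame_dim)

pred = model(frame_t, action_t)
loss = nn.functional.mse_loss(pred, frame_next)
opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```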

NVIDIA Research has more than 200 scientists around the globe, focused on areas such as AI, computer vision, self-driving cars, robotics and graphics.

The GameGAN paper is authored by Fidler, Kim, NVIDIA researcher Jonah Philion, University of Toronto student Yuhao Zhou and MIT professor Antonio Torralba. It will be presented at the prestigious Conference on Computer Vision and Pattern Recognition in June.

PAC-MAN™ & © BANDAI NAMCO Entertainment Inc.

Earth to AI: Three Startups Using Deep Learning for Environmental Monitoring

Sometimes it takes an elevated view to appreciate the big picture.

NASA’s iconic “Blue Marble,” taken in 1972, helped inspire the modern environmental movement by capturing the finite and fragile nature of Earth for the first time. Today, aerial imagery from satellites and drones powers a range of efforts to monitor and protect our planet — accelerated by AI and NVIDIA GPUs.

On the 50th anniversary of Earth Day, see how companies in the NVIDIA Inception program are using aerial imagery and AI to track global deforestation, monitor thawing permafrost in the Arctic and prevent natural gas leaks.

Inception is a virtual accelerator that equips startups in AI and data science with fundamental tools to support product development, prototyping and deployment.

Tracking Tree Loss: Orbital Insight

Millions of acres of forests are lost each year due to agriculture and illegal logging. Orbital Insight is working with the World Resources Institute to identify areas around the world where virgin rainforest is being replaced with new roads, buildings and palm oil plantations.

Image courtesy of Orbital Insight

Using over 600,000 five-meter resolution satellite images, the startup’s deep learning algorithm maps deforestation as part of Global Forest Watch’s initiative for real-time forest monitoring. AI can give researchers a head start, allowing them to spot signs of impending deforestation by extrapolating trends into the future — rather than relying on monthly alerts that come too late to prevent tree loss.
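
A per-pixel classifier is one common way to frame this kind of mapping. The toy segmentation network below labels each pixel of a satellite tile as forest or cleared; the architecture and labels are illustrative assumptions, since Orbital Insight’s production models and training data aren’t public.

```python
import torch
import torch.nn as nn

# Toy per-pixel "forest" vs. "cleared" classifier -- a stand-in for the
# kind of convolutional segmentation model described above.
class TinySegNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1))   # per-pixel class logits

    def forward(self, x):
        return self.net(x)

model = TinySegNet()
tile = torch.rand(1, 3, 256, 256)          # one RGB satellite tile
logits = model(tile)                       # shape (1, 2, 256, 256)
mask = logits.argmax(dim=1)                # 0 = forest, 1 = cleared
print("cleared pixels:", int(mask.sum()))
```

In practice such a model would be trained on labeled tiles, and its masks compared across time to flag newly cleared areas.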

The tool can also help companies assess the risk of deforestation in their supply chains. Commodities like palm oil have driven widespread deforestation in Southeast Asia, leading several producers to pledge to achieve zero net deforestation in their supply chains this year.

Based in Palo Alto, Calif., Orbital Insight uses convolutional neural networks to analyze satellite imagery and radar data for supply chain monitoring, real estate, mapping and infrastructure. Its geospatial AI algorithms are accelerated by NVIDIA GPUs through Amazon Web Services.

Using GPUs in the cloud allows the team to scale its usage up and down as needed, and enables a 100x inference speedup on huge satellite images, said Manuel Gonzalez-Rivero, senior computer vision scientist at the company.

AI on the Arctic: 3vGeomatics

One of the biggest burgeoning climate threats today is thawing permafrost. Mainly found in polar regions like the Canadian Arctic, permafrost is composed of ice, rock and sediment located under a layer of soil. Rich in organic material, the world’s permafrost is estimated to contain 1,500 billion tons of carbon — twice as much as in the Earth’s atmosphere.

Image by National Park Service Climate Change Response. Licensed from Wikimedia Commons under CC BY 2.0.

As much as 70 percent of permafrost could melt by 2100, releasing massive amounts of carbon into the atmosphere. Climate change-induced permafrost thaw also causes landslides and erosion that threaten communities and critical infrastructure.

Through a project for the Canadian Space Agency, Inception startup 3vGeomatics is using a satellite-based radar remote sensing technology called InSAR, or interferometric synthetic aperture radar, to monitor thawing permafrost across the Canadian Arctic.

Conducting analyses via an on-premises server with dozens of NVIDIA data center GPUs enables thousand-fold increases in processing speed of the radar satellite images, each of which contains billions of pixels and covers thousands of square kilometers.

“Before, it would take months to analyze satellite images and deliver results, only to tell our client that they had a landslide 5 weeks ago,” said Parwant Ghuman, chief technology officer of 3vGeomatics. “Leveraging GPUs enables us to deliver actionable intelligence about the risks they have today.”
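
The core InSAR computation is simple array math, which is part of why it maps so well to GPUs: multiply one complex radar image by the conjugate of another, and the phase of the product encodes ground motion between the two passes. The NumPy sketch below shows the idea on random placeholder data; a real pipeline like 3vGeomatics’ adds coregistration, phase unwrapping and atmospheric correction on top.

```python
import numpy as np

# Core InSAR step: per-pixel phase difference between two complex radar
# acquisitions of the same scene. Random data stands in for real images.
rng = np.random.default_rng(0)
shape = (1024, 1024)

# Two complex SAR images (amplitude + phase) from repeat passes.
scene1 = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
scene2 = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

# Interferogram: the angle of scene1 * conj(scene2) is the phase change
# between the two passes at every pixel.
interferogram = scene1 * np.conj(scene2)
phase = np.angle(interferogram)            # radians, wrapped to (-pi, pi]

# Line-of-sight displacement for a C-band radar (wavelength ~5.6 cm):
# one full phase cycle corresponds to half a wavelength of motion.
wavelength = 0.056
displacement = phase * wavelength / (4 * np.pi)
print(displacement.mean(), displacement.std())
```

On NVIDIA GPUs, the same array expressions can run largely unchanged by swapping NumPy for CuPy.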

Preventing Oil, Gas Leaks: Azavea

The U.S. oil and gas industry leaks an estimated 13 million metric tons of methane into the atmosphere each year — much of which is preventable. One of the leading sources is excavation damage caused by third parties, unaware that they’re digging over a natural gas pipeline.

Azavea, a Philadelphia-based startup that builds geospatial analytics tools for civic and social impact, is collaborating with aerial services company American Aerospace to detect construction over known pipelines — using NVIDIA GPUs for training as well as inference at the edge.

Image courtesy of American Aerospace

Neural networks deployed on planes or drones can detect visible construction vehicles and trucks on the ground, warning oil and gas companies of potential excavations that could damage pipelines.

“Currently, a pilot will just fly at low altitudes over known pipelines and look out the window to see if there’s any indication of construction vehicles,” said Rob Emanuele, vice president of research at Azavea. “Our work allows pilots to fly much more safely, higher off the ground, relying on AI to detect where construction vehicles are present.”

Developed with Raster Vision, Azavea’s open-source deep learning framework for aerial imagery, the AI algorithms are being tested using an NVIDIA RTX laptop in a small plane for real-time inference. Future deployments would instead use embedded GPUs on unmanned aircraft and drones.
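
As a rough stand-in for that onboard inference loop, the sketch below runs an off-the-shelf COCO-pretrained detector over a single frame and flags trucks. Azavea’s actual models are trained on aerial imagery with Raster Vision; the generic pretrained detector, confidence threshold and random frame here are placeholder assumptions.

```python
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights)

# Frame-by-frame vehicle spotting with an off-the-shelf COCO detector,
# standing in for Azavea's aerial-imagery models.
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
categories = weights.meta["categories"]      # COCO class names

frame = torch.rand(3, 512, 512)              # placeholder RGB frame in [0, 1]
with torch.no_grad():
    (detections,) = model([frame])

# Flag likely construction traffic near the pipeline corridor.
for box, label, score in zip(detections["boxes"],
                             detections["labels"],
                             detections["scores"]):
    name = categories[int(label)]
    if name == "truck" and score > 0.5:
        print(f"possible excavation activity: {name} at {box.tolist()}")
```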

Learn more about how GPU technology is driving applications with social impact, including environmental projects.

Main image credit: NASA

AI a ‘Once-in-a-Generation Opportunity,’ Says American College of Radiology Chair at GTC Digital

Radiologists could use a fresh pair of AIs.

“I see these as decision aids to really augment my performance as a human radiologist,” said Geraldine McGinty, chair of the American College of Radiology and chief strategy and contracting officer at Weill Cornell Medicine, in a GTC Digital talk. McGinty is a featured speaker among dozens of healthcare sessions at the virtual conference.

In the talk, McGinty shares her perspective on how healthcare organizations can harness the “once-in-a-generation opportunity” AI provides to improve the quality of care while driving down costs.

A radiologist specializing in breast imaging herself, she’s unfazed by headlines declaring that patients should want AI reading their next mammogram.

“Reading mammograms, I know I’m not going to see every breast cancer,” McGinty said. “I know that I’m going to have to call some patients back for additional imaging, or even a biopsy that will turn out not to be cancer.”

The goal of radiologists, she says, is to provide patients with “all the imaging that’s beneficial, and none that’s not.”

Photo courtesy of Geraldine McGinty

AI can help radiologists with that by reducing the variability among readers and hospitals when analyzing the same scans. It can also streamline their workflows, giving radiologists more time to talk to patients and to consider the factors beyond imaging scans that go into deciding treatments, such as pathology reports, genetic risk data and records of other conditions the patient may already have.

“The imaging findings of disease are rarely binary,” she said.

As a medical field that has always rapidly adopted technology — starting with X-rays and moving on to MRI and digital archiving systems like PACS — radiology is well-positioned to integrate AI tools, McGinty said.

“Artificial intelligence feels like a natural evolution of our love of technology in the service of our patients.”

But new tools need to be carefully validated, as doctors learned decades ago when radiation poisoning took the life of innovator Marie Curie. McGinty says radiologists need to better understand how deep learning models reach their conclusions, advocating for more explainability in AI algorithms.

Developing powerful AI algorithms is just the start. These tools should be equally available to all patients, McGinty pointed out, and must be trained on diverse datasets to help combat existing disparities such as breast cancer outcomes for women of color.

“It’s not just about having accurate systems,” she said. “We actually have to challenge ourselves to use them in a powerful way.”

Watch the full talk here, and check out the lineup of healthcare talks at GTC Digital, available to stream for free. 

Main image from The Medical Futurist. Licensed from Wikimedia Commons under CC-BY-4.0.

Mixing It Up: Saudi Researchers Accelerate Environmental Models with Mixed Precision

Scientists studying environmental variables — like sea surface temperature or wind speed — must strike a balance between the number of data points in their statistical models and the time it takes to run them.

A team of researchers at Saudi Arabia’s King Abdullah University of Science and Technology, known as KAUST, is using NVIDIA GPUs to strike a win-win deal for statisticians: large-scale, high-resolution regional models that run twice as fast. They presented their work, which helps scientists develop more detailed predictions, in a session at GTC Digital.

The software package they developed, ExaGeoStat, can handle data from millions of locations. It’s also accessible in the programming language R through the package ExaGeoStatR, making it easy for scientists using R to take advantage of GPU acceleration.

“Statisticians rely heavily on R, but previous software packages could only process limited data sizes, making it impractical to analyze large environmental datasets,” said Sameh Abdulah, a research scientist at the university. “Our goal is to enable scientists to run GPU-accelerated experiments from R, without needing a deep understanding of the underlying CUDA framework.”

Abdulah and his colleagues use a variety of NVIDIA data center GPUs, most recently adopting NVIDIA V100 Tensor Core GPUs to further speed up weather simulations using mixed-precision computing.

Predicting Weather, Come Rain or Shine 

Climate and weather models are complex and incredibly time-consuming simulations, taking up significant computational resources on supercomputers worldwide. ExaGeoStat helps statisticians find insights from these large datasets faster.

The application predicts measurements like temperature, soil moisture levels or wind speed for different locations within a region. For example, if the dataset shows that the temperature in Riyadh is 21 degrees Celsius, the application could estimate the temperature at the same point in time farther east in, say, Abu Dhabi.
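
Under the hood, this kind of spatial prediction is Gaussian process regression, also known as kriging: correlations between locations are modeled with a covariance function (ExaGeoStat uses the Matérn family), and the value at an unobserved site is a covariance-weighted combination of the observed values. The NumPy sketch below works through a three-station toy example with an exponential covariance, a special case of Matérn; the coordinates, temperatures and range parameter are made-up assumptions.

```python
import numpy as np

# Minimal kriging sketch: predict the value at an unobserved location
# from nearby observations. Toy data; ExaGeoStat does this at the scale
# of millions of locations.
def exp_cov(a, b, variance=1.0, range_km=500.0):
    """Exponential covariance between two sets of 2D coordinates."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return variance * np.exp(-d / range_km)

# Observed temperatures (deg C) at known coordinates (km grid).
obs_xy = np.array([[0.0, 0.0], [300.0, 50.0], [600.0, -40.0]])
obs_temp = np.array([21.0, 24.0, 27.0])
new_xy = np.array([[900.0, 0.0]])                # unobserved location

K = exp_cov(obs_xy, obs_xy) + 1e-6 * np.eye(3)   # obs/obs covariance
k_star = exp_cov(new_xy, obs_xy)                 # new/obs covariance

# Conditional (kriging) mean and variance, relative to the sample mean.
mu = obs_temp.mean()
w = k_star @ np.linalg.inv(K)
pred_mean = mu + w @ (obs_temp - mu)
pred_var = exp_cov(new_xy, new_xy) - w @ k_star.T
print(pred_mean.item(), pred_var.item())
```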

Abdulah and his colleagues are working to extend these predictions to not just different locations in a region, but also to different points in time — for instance, predicting the wind speed in Philadelphia next week based on data from New York City today.

The software reduces the system memory required to run predictions from large-scale simulations, enabling scientists to work with much larger meteorological datasets than previously possible. Larger datasets allow researchers to make estimations about more locations, increasing the geographic scope of their simulations.

The team runs models with data for a couple million locations, primarily focusing on datasets in the Middle East. They’ve also applied ExaGeoStat to soil moisture data from the Mississippi River Basin, and plan to model more environmental data for regions in the U.S.

Compared to using a CPU, the researchers saw a nearly 10x speedup — from 400 seconds to 45 — running one iteration of the model on a single NVIDIA GPU. A full simulation takes about 175 iterations to converge, so that per-iteration gain adds up to roughly two hours on the GPU versus nearly 20 hours on the CPU.

“Now, with V100 GPUs in our computing center, we’ll be able to accelerate our application even further,” Abdulah said. “While so far we’ve been using double precision and single precision, with Tensor Cores we can also start using half precision.”

Besides higher performance and faster completion times, mixed-precision algorithms also save energy, Abdulah says, by decreasing the amount of time and power consumption required to run the models.

Using a combination of single and double precision, the researchers achieve, on average, a 1.9x speedup of their algorithm on a system with an NVIDIA V100 GPU. He and his colleagues next plan to evaluate how much half-precision computing using NVIDIA Tensor Cores will further accelerate their application. To do so, they’ll use V100 GPUs at their university as well as on Oak Ridge National Laboratory’s Summit system, the world’s fastest supercomputer.
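
One classic way to cash in lower precision without giving up accuracy, in the spirit of what the team describes, is iterative refinement: factorize the covariance system once in float32, then repair the solution with cheap float64 residual corrections. The NumPy sketch below demonstrates the principle on a small synthetic system; ExaGeoStat’s actual implementation works tile by tile on GPUs with far larger matrices.

```python
import numpy as np

# Mixed-precision principle: solve K x = y with a float32 Cholesky
# factorization, then recover near-float64 accuracy via iterative
# refinement. Small synthetic system; the real code runs on GPUs.
rng = np.random.default_rng(1)
n = 500
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)                # well-conditioned SPD matrix
y = rng.standard_normal(n)

L = np.linalg.cholesky(K.astype(np.float32))   # factor in single precision

def solve32(rhs):
    """Forward/back substitution using the float32 factor."""
    z = np.linalg.solve(L, rhs.astype(np.float32))
    return np.linalg.solve(L.T, z).astype(np.float64)

x = solve32(y)
for _ in range(3):                         # refine the residual in float64
    r = y - K @ x
    x += solve32(r)

# Relative residual should approach float64 roundoff after refinement.
print(np.linalg.norm(y - K @ x) / np.linalg.norm(y))
```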

To learn more about Abdulah’s work, watch the full on-demand talk. His collaborators are Hatem Ltaief, David Keyes, Marc Genton and Ying Sun from the Extreme Computing Research Center and the statistics program at King Abdullah University of Science and Technology.

Main image shows wind speed data over the Middle East and the Arabian Sea. 
