AI in the Sky: Startup Uses Satellite Imagery to Assess Hurricane, Fire Damage

Technology can’t stop a hurricane in its tracks — but it can make it easier to rebuild in the wake of one.

Following a natural disaster, responders rush to provide aid and restore infrastructure, relying on limited information to make critical decisions about where resources should be deployed.

It can be challenging to analyze these situations from the ground. So San Francisco-based startup CrowdAI has turned to satellite and aerial imagery, extracting insights that help governments and companies send disaster response efforts where they’re most urgently needed.

“Over the last decade, a tremendous amount of satellite and drone data has come online,” said Devaki Raj, founder and CEO of CrowdAI, a member of the NVIDIA Inception virtual accelerator program. “We’re using recent advances in computer vision to unlock that data.”

A few hundred natural disasters occur each year around the globe. That’s a devastating number in terms of human impact, but it’s a fairly small sample size for a machine learning model being trained to assess disaster damage in disparate regions of the world.

While a major earthquake will occur on average around once a month, any given fault line may go a century or more without a strong seismic event. And though a few dozen hurricanes and typhoons strike each year, they cause destruction across vastly different landscapes.

An effective AI for this use case must be specific enough to identify roads and buildings, but generic enough to recognize what a home or road looks like in South Carolina as well as in Sumatra. Achieving this requires enough training data, consisting of diverse shots taken both before and after a disaster.

This was once a challenge, but Jigar Doshi, machine learning lead at CrowdAI, says, “as computer vision has matured, we don’t need that many samples of ‘before fire’ and ‘after fire’ to make great models to predict the impacts of disaster.”
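To make the before-and-after framing concrete, here is a minimal, hedged sketch of a two-branch ("siamese") network that compares a pre-disaster satellite tile with a post-disaster tile of the same location and classifies the damage level. The tile size, damage classes and architecture are illustrative assumptions, not CrowdAI's actual models.

```python
import tensorflow as tf

TILE = (128, 128, 3)      # size of one satellite image tile (assumption)
DAMAGE_CLASSES = 4        # e.g. none / minor / major / destroyed (assumption)

def tile_encoder() -> tf.keras.Model:
    """Shared CNN that turns one satellite tile into a feature vector."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=TILE),
        tf.keras.layers.Conv2D(32, 3, strides=2, activation="relu"),
        tf.keras.layers.Conv2D(64, 3, strides=2, activation="relu"),
        tf.keras.layers.Conv2D(128, 3, strides=2, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
    ])

encoder = tile_encoder()                       # same weights applied to both tiles
before = tf.keras.Input(shape=TILE, name="before")
after = tf.keras.Input(shape=TILE, name="after")
features = tf.keras.layers.Concatenate()([encoder(before), encoder(after)])
damage = tf.keras.layers.Dense(DAMAGE_CLASSES, activation="softmax")(features)
model = tf.keras.Model(inputs=[before, after], outputs=damage)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit([before_tiles, after_tiles], damage_labels, epochs=10)  # hypothetical arrays
```

A model like this would be trained on pairs of co-registered before/after tiles with damage labels, which is why diverse pre- and post-disaster imagery matters so much.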

In the aftermath of Hurricane Michael last year, CrowdAI worked with telecoms provider WOW! to assess building damage across Panama City, Fla. Analyzing satellite images of the city, provided by NOAA, helped the provider deploy on-the-ground operators to locations most severely hit by the storm and restore service faster.

When unaided by AI, responders must physically drive around the area and rely on phone calls from customers to determine which regions need assistance, a time-consuming and imprecise process.

Wildfire damage analysis courtesy of CrowdAI; image courtesy of DigitalGlobe's Open Data Program (2018).

The startup’s research was recognized last year by two of the world’s leading AI conferences: Doshi had a paper accepted at CVPR and presented a joint paper with Facebook at NeurIPS. And after last year’s Camp Fire in Butte County, Calif., the deadliest wildfire in the United States in a century, he and the CrowdAI team used open data to analyze damage.

CrowdAI uses NVIDIA GPUs in the cloud and onsite for training its algorithms and for inference. Depending on the inference environment used, the AI models can return insights with just a one-second lag.

“When a disaster happens, we aren’t training — we’re just doing inference,” said Raj. “That’s why we need this speed.”

CrowdAI is also exploring models that could forecast damage, as well as deep learning tools that go beyond satellite images to integrate wind, precipitation and social media data.

* Main image shows a Florida neighborhood after Hurricane Michael, overlaid with CrowdAI’s categorization of building damage. Analysis courtesy of CrowdAI and imagery courtesy of NOAA’s National Ocean Service.


Talk to Me: Deep Learning Identifies Depression in Speech Patterns

“Talk therapy” is often used by psychotherapists to help patients overcome depression or anxiety through conversation.

A research team at Massachusetts Institute of Technology is using deep learning to uncover what might be called “talk diagnosis” — detecting signs of depression by analyzing a patient’s speech.

The research could lead to effective, and inexpensive, diagnosis of serious mental health issues.

An estimated one in 15 adults in the U.S. reports having a bout of major depression in any given year, according to the National Institute of Mental Health. The condition can lead to serious disruptions in a person’s life, yet our understanding of it remains limited.

The techniques used to identify depression typically involve mental health experts asking direct questions and drawing educated conclusions.

In the future, these pointed assessments may be less necessary, according to lead researcher Tuka Alhanai, an MIT research assistant and Ph.D. candidate in computer science. She envisions her team’s work becoming part of the ongoing monitoring of individual mental health.

All About the Dataset

A key aspect of getting started with deep learning is getting good data.

That was a challenge for Alhanai when her team went to train its model. She was specifically looking for datasets of conversations in which some of the participants were depressed.

Eventually, she found one from the University of Southern California, which had teamed with German researchers on conducting interviews with a group of 180 people, 20 percent of whom had some signs of depression. The interviews consisted of 20 minutes of questions about where the subjects lived, who their friends were and whether they felt depressed.

Alhanai was emboldened by the researchers’ conclusion that depression can, in fact, be detected in speech patterns and vocabulary. But she wanted to take things a step further by removing the leading, predictive questions, and instead train a model to detect depression during normal, everyday conversation.

“There is significant signal in the data that will cue you to whether people have depression,” she said. “You listen to overall conversation and absorb the trajectory of the conversation and speech, and the larger context in which things are said.”

Alhanai and her team combined the processing power of a cluster of machines running more than 40 NVIDIA TITAN X GPUs with the TensorFlow, Keras and cuDNN deep learning libraries, and set to work training their model.

They fed it snippets of the interviews from the dataset, minus the obvious questions and references to depression, leaving the model to determine whether depression cues were present. They subsequently exposed the model to sections of conversation from a healthy person and a depressed person, and then told the model which one was which.

After enough cycles of this, the researchers would feed the model another section of conversation and ask it to determine whether there was an indication of possible depression. The team trained dozens of models this way, something Alhanai said would not have been possible without access to GPUs.
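The description above maps onto a standard sequence-classification setup. Below is a minimal sketch of that idea in Keras, assuming MFCC-style audio features extracted from each conversation segment; the feature type, sequence length and layer sizes are illustrative guesses, not the MIT team's architecture.

```python
import tensorflow as tf

NUM_FEATURES = 40   # e.g. 40 MFCC coefficients per audio frame (assumption)
MAX_FRAMES = 2000   # conversation segment padded/truncated to this length (assumption)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(MAX_FRAMES, NUM_FEATURES)),
    tf.keras.layers.Masking(mask_value=0.0),         # ignore zero-padded frames
    tf.keras.layers.LSTM(128, return_sequences=True),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of depression cues
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
# model.fit(segment_features, segment_labels, validation_split=0.1, epochs=20)
```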

Success Breeds Ambition

Ultimately, the training resulted in the team’s model identifying depression from normal conversation with more than 70 percent accuracy during inference — on par with mental health experts’ diagnosis — with each experiment occurring on a single TITAN X.

The team reported its findings in a paper presented at the recent Interspeech 2018 conference in Hyderabad, India, and is now primed to take the work to the next level.

“This work is very encouraging,” said Alhanai. “Let’s get these systems out there and have them do predictions for evaluation purposes — not to use clinically yet, but to collect more data and build more robustness.”

Naturally, Alhanai craves access to faster and more powerful GPUs so she can run more experiments with larger datasets. But her long-term view is to explore the impact that using deep learning to analyze communication — not just speech — can have in diagnosing and managing other mental health conditions.

“Any condition you can hear and feel in speech, or through other gestures, a machine should be able to determine,” she said. “It doesn’t matter what the signal is — it could be speech, it could be writing, it could be jaw movement, it could be muscle tension. It will be a very non-invasive way to monitor these things.”


Reading the Vital Signs: Leading Minds in Medicine to Discuss AI Progress in Healthcare

The buzz around AI in medicine is contagious. From radiology and drug discovery to disease risk prediction and patient care, deep learning is transforming healthcare from every angle.

You can learn about it all at the GPU Technology Conference, where healthcare innovators from industry, universities and medical institutions will gather to share how AI and GPUs are empowering doctors and researchers. GTC takes place March 17-21 in San Jose.

The conference's healthcare sessions feature presentations by renowned names in medicine, including speakers from four of the top five academic medical centers in the United States and five of the nation's top seven radiology departments.

A highlight of the week will be the Tuesday morning talk by luminary Eric Topol, founder and director of the Scripps Research Translational Institute. Topol will speak about the opportunities AI and deep learning present for clinicians, health systems and patients.

The talk will be followed by a signing for his forthcoming book, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again.

A Host of Opportunities to Learn

The week is packed with more than 40 healthcare sessions, covering topics as diverse as medical imaging, genomics and computational chemistry.

Throughout the week, attendees can join two-hour, instructor-led training sessions on AI and accelerated computing, including workshops specific to medical use cases.

Robust Presence on the Show Floor

Healthcare entrepreneurs in NVIDIA's Inception virtual accelerator program can be found at speaker sessions, booths, poster sessions and the Inception Theater on the show floor. You'll hear how companies including Proscia, Subtle Medical, Arterys and ImFusion are using AI for medical imaging, pathology and more.

Major healthcare institutions including Mass General Hospital will also be sharing how they use AI and GPUs to transform medicine and patient care.

For healthcare startups, there will be a meetup at the Inception Lounge on Wednesday, March 20, from 5-6 pm. And, of course, our booth will feature demos of GPU-powered healthcare applications and showcase the NVIDIA Clara platform for medical imaging applications.

Check out the full healthcare track at GTC, and register today.


AI Hotline: Startup Analyzes Emergency Calls to Identify Cardiac Arrest Victims

“911. What’s your emergency?”

Denmark-based startup Corti knows that the stakes are high when dialing emergency services. So it built an AI tool to provide immediate feedback and guidance to emergency call responders, helping them ask the right questions and quickly identify highly acute cases.

Its speech-recognition software, Corti AI, can cut down the number of undetected out-of-hospital cardiac arrests by almost half, and help responders more quickly dispatch emergency services.

Corti AI can detect cardiac arrest within 50 seconds of an emergency phone call, which is more than 10 seconds faster than dispatchers unaided by AI — and every second counts.

Cardiac arrest “is the highest critical diagnosis there is,” said Lars Maaloee, Corti’s co-founder and chief technology officer. When the heart stops beating, all organs — including the brain — are deprived of oxygen. Immediate CPR is critical for the patient to have any chance of survival. “Responders need to act extremely fast in sending the ambulance and instructing the bystander on what to do.”

If cardiac arrest treatment is delayed by more than 10 minutes, a victim’s chance of survival is less than 5 percent.

A study found that Corti AI identified cardiac arrest 95 percent of the time from emergency call audio, while emergency dispatchers in Copenhagen spotted 73 percent of cases.

Corti’s solution is currently deployed throughout the Copenhagen metropolitan area, covering nearly 2 million residents. A member of the NVIDIA Inception program, the startup was a finalist at last year’s GTC Europe Inception Awards.

Making the Right Call

On the desk of each Corti-enabled emergency dispatcher sits a white cylinder a few inches tall, similar to a miniature lampshade or Bluetooth speaker. Called the Orb, it connects to a responder’s telephone and captures audio during emergency calls.

Each Orb contains an NVIDIA Jetson TX2 module that connects to an emergency dispatcher’s phone line. (Image courtesy of Corti.)

Designed in collaboration with Danish lamp designer Tom Rossau, each Orb houses a powerful NVIDIA Jetson TX2 module. With the JetPack SDK, Corti can run multiple neural networks on the device, including a combination of CNNs and RNNs.

The Orb identifies relevant parts of the phone conversation, even looking for clues by sifting through background noise and non-verbal signals like breathing patterns.

Audio snippets are then sent from the Orb to Corti’s servers. Powered by NVIDIA GPUs, the servers rapidly send back insights that are displayed on the emergency responder’s computer screen in a desktop interface called Corti Triage.
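As a rough illustration of this split between on-device triage and server-side analysis, here is a hedged Python sketch: a small model scores short audio windows locally and forwards high-scoring snippets to a server. The model, threshold and endpoint are placeholders, not Corti's actual software.

```python
import numpy as np
import requests
import tensorflow as tf

SAMPLE_RATE = 8000        # telephone-quality audio (assumption)
WINDOW_SECONDS = 2.0      # analyze the call in short overlapping windows

def build_snippet_classifier() -> tf.keras.Model:
    """Tiny 1D CNN that scores each audio window as 'relevant' or not."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(int(SAMPLE_RATE * WINDOW_SECONDS), 1)),
        tf.keras.layers.Conv1D(16, 9, strides=4, activation="relu"),
        tf.keras.layers.Conv1D(32, 9, strides=4, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

def triage_call(audio: np.ndarray, model: tf.keras.Model,
                server_url: str = "https://example.invalid/analyze") -> None:
    """Slide over the call audio and forward high-scoring windows to the server."""
    win = int(SAMPLE_RATE * WINDOW_SECONDS)
    hop = win // 2
    for start in range(0, len(audio) - win + 1, hop):
        window = audio[start:start + win].astype(np.float32).reshape(1, win, 1)
        score = float(model.predict(window, verbose=0)[0, 0])
        if score > 0.8:   # placeholder threshold for "worth deeper analysis"
            requests.post(server_url, data=window.tobytes(), timeout=1.0)
```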

The Triage tool guides dispatchers during the call, suggesting questions to ask and alerting responders of possible serious conditions.

“Throughout the medical sector, there are a lot of decisions taken by a single person that can have major consequences,” said Maaloee. “Though the dispatchers are highly trained, it’s always possible to help them make better decisions.”

Though trained on data from past emergency calls, the AI can be easily customized to the protocol of a specific department. To help emergency call centers improve in the long term, a complementary software module called Corti Review analyzes data from each call and provides feedback to dispatchers and managers.

Corti is collaborating with the University of Copenhagen and the University of Washington to study the efficacy of its AI tools.

The startup is using transfer learning to train its algorithms to process languages other than English, and is expanding its services to other cities in Europe. Corti is also making its AI more flexible, so that the software can be used to analyze other types of medical conversations — such as a primary care physician’s interaction with a patient.


How UCSF Researchers Are Using AI on Some of Healthcare’s Toughest Problems

Hospitals produce huge volumes of medical data that, when handled with concern for privacy and security, could be used to re-examine everything from hospital administration to patient care. But this means finding careful ways to assemble this data, so it’s no longer locked in organizational silos.

A research team at the University of California, San Francisco, is demonstrating just how powerful the combination of deep learning and data can be — and the potential this research holds for improving the healthcare system.

“Medicine is still in the beginning stages of using deep learning and other AI technologies,” said Dexter Hadley, assistant professor of pediatrics, pathology and laboratory medicine at UCSF’s Bakar Computational Health Sciences Institute. “But the technology industry is starting to show how useful it can be.”

Hadley cited the example of work by Google, UCSF and other academic medical centers to develop methods that predict which patients are likely to be readmitted to the hospital more accurately than the hospitals’ current algorithms do.

“With deep learning and a little bit of insight, you can do a lot,” he said.

Hadley’s initial work identified Alzheimer’s disease every time it was present, more than six years before doctors were able to make a diagnosis.

Powerful First Steps

The effort started when Benjamin Franc, a former UCSF professor and radiologist who has since left the hospital, approached Hadley with a desire to demonstrate how tapping reservoirs of imaging data could lead to advances in diagnoses.

Franc zeroed in on a particular pool of shared data: the Alzheimer’s Disease Neuroimaging Initiative (ADNI), a major multisite study funded by the National Institutes of Health and focused on clinical trials to improve prevention and treatment of the disease.

The early work was promising: After training a deep learning algorithm using more than 2,000 brain images from 1,000 Alzheimer’s patients, the team achieved 100 percent sensitivity in detecting the disease an average of 75 months earlier than a final diagnosis. (More details on the findings can be found in the paper the team published earlier this year.)

This, Franc and Hadley believe, represents the tip of an iceberg of what may be possible.

“If we are to scale up molecular imaging to benefit medicine worldwide, we have to find ways to use all the information we have,” Franc said. “This is where AI can help.”

For the Alzheimer’s algorithm, both training and inference occurred on a six-core server running four NVIDIA TITAN X GPUs. Training was done using 90 percent of the ADNI images, while 10 percent were held out for validation. The team also made use of the TensorFlow and Keras deep learning libraries, as well as Google’s Inception neural network architecture for image classification and detection.
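For readers who want a concrete picture of that setup, here is a hedged Keras sketch of fine-tuning an ImageNet-pretrained Inception network on labeled 2D brain-scan images with a 90/10 train/validation split. The directory layout, image size and hyperparameters are assumptions for illustration, not the team's code.

```python
import tensorflow as tf

IMG_SIZE = (299, 299)   # InceptionV3's expected input resolution

train_ds, val_ds = tf.keras.utils.image_dataset_from_directory(
    "adni_slices/",        # hypothetical folder of labeled 2D slice images
    validation_split=0.1,  # 90 percent for training, 10 percent held out
    subset="both",
    seed=42,
    image_size=IMG_SIZE,
    batch_size=32,
)

base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", pooling="avg")
base.trainable = False     # reuse ImageNet features, train only the new head

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),   # scale pixels to [-1, 1]
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),      # Alzheimer's vs. control
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Recall(name="sensitivity")])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```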

The Data Is All There

In addition to being potentially game-changing, the Alzheimer’s findings have been a source of frustration for Hadley, whose mother has the disease. He says all the neuroimaging information needed to speed up diagnoses is there, but hard to access because of an industry-wide reluctance to share the information, largely out of concerns for patient privacy.

Healthcare providers, he says, are forgetting about the “portability” aspect of HIPAA, and that its regulations are designed to ensure that data is shared appropriately.

Hadley believes this is leading to unnecessary challenges for Alzheimer’s patients and their families.

“If we knew this six years ago, it would have been totally different,” said Hadley.

That’s what he believes makes the work he and Franc started so important. By showing what can be accomplished by pooling existing data, Hadley’s hoping deep learning can be the answer to early detection of numerous diseases.

He cited breast cancer as an example of a disease that could be diagnosed faster. He noted that if the data were shared, we could simulate retrospective multi-institutional clinical trials across millions of patients.

Fewer patients could be subjected to trials, and diagnoses might be sped up, if the data could be used more effectively and deep learning methods applied.

“Technology isn’t the limiting factor in medicine anymore — it’s politics and policy,” said Hadley. “If those bottlenecks are solved, the future is quite bright.”


What Is Transfer Learning?

You probably have a career. But hit the books for a graduate degree or take online certificate courses by night, and you could start a new career building on your past experience.

Transfer learning is the same idea. This deep learning technique enables developers to harness a neural network used for one task and apply it to another domain.

Take image recognition. Let’s say that you want to identify horses, but there aren’t any publicly available algorithms that do an adequate job. With transfer learning, you begin with an existing convolutional neural network commonly used for image recognition of other animals, and you tweak it to train with horses.

ResNet to the Rescue

Developers might start with ResNet-50 — a pre-trained deep learning model consisting of 50 layers — because it has a high accuracy level for identifying cats or dogs. Within the neural network are layers that are used to identify outlines, curves, lines and other identifying features of these animals. The layers required a lot of labeled training data, so using them saves a lot of time.

Those layers can be applied to the task of picking out some of the same features on horses. You might be able to identify eyes, ears, legs and outlines of horses with ResNet-50, but determining that the animal is a horse rather than a dog might require some additional training data.

With additional training on labeled horse images, more horse-specific features can be built into the model.

Transfer Learning Explained

Here’s how it works: First, you delete what’s known as the “loss output” layer, which is the final layer used to make predictions, and replace it with a new loss output layer for horse prediction. This loss output layer is a fine-tuning node that determines how training penalizes deviations between the labeled data and the predicted output.

Next, you take your smaller horse dataset and use it to train either the entire 50-layer network, just the last few layers, or only the new loss layer. By applying these transfer learning techniques, the new CNN’s output will be horse identification.
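Here is what those steps can look like in Keras, as a hedged sketch: load a pre-trained ResNet-50 without its prediction head, choose how much of it to retrain, attach a new output layer for horses and train on the smaller dataset. The dataset path and hyperparameters are illustrative assumptions.

```python
import tensorflow as tf

# 1. Load ResNet-50 pre-trained on ImageNet, minus its final prediction layer.
base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", pooling="avg")

# 2. Decide how much to retrain: freeze everything for head-only training...
base.trainable = False
# ...or optionally unfreeze the last few layers for deeper fine-tuning:
# for layer in base.layers[-10:]:
#     layer.trainable = True

# 3. Attach a new output layer for the horse vs. not-horse task.
inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(base(x, training=False))
model = tf.keras.Model(inputs, outputs)

# 4. Train on the small horse dataset (a hypothetical folder of labeled images).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "horses_vs_other/", image_size=(224, 224), batch_size=32)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```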

Word Up, Speech!

Transfer learning isn’t just for image recognition. Recurrent neural networks, often used in speech recognition, can take advantage of transfer learning as well. However, you’ll need two similar speech-related datasets, such as the million hours of speech used to train a pre-existing model and 10 hours of speech specific to the new task.

Similar to techniques used on a CNN, this new neural network’s loss layer is removed. Next, you might create two or more layers in its place that use your new speech data to help train the network and feed into a new loss layer for making predictions about speech.
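As a hedged sketch of that recipe, the snippet below loads a hypothetical pre-trained Keras speech model from disk, keeps everything up to its old prediction layer, and stacks new recurrent layers plus a new output layer on top. The model file, the number of output classes and the assumption that the reused layers emit a per-frame feature sequence are all illustrative, not tied to any specific pre-trained network.

```python
import tensorflow as tf

NUM_CLASSES = 29   # e.g. characters for a speech-to-text task (assumption)

# A hypothetical pre-trained acoustic model saved in Keras format.
pretrained = tf.keras.models.load_model("pretrained_speech_model.keras")

# Keep everything up to (but not including) the old prediction layer.
encoder = tf.keras.Model(inputs=pretrained.input,
                         outputs=pretrained.layers[-2].output)
encoder.trainable = False   # reuse the learned acoustic representation as-is

# New layers trained on the ~10 hours of task-specific speech.
inputs = tf.keras.Input(shape=encoder.input_shape[1:])
x = encoder(inputs)                              # assumed to be a per-frame sequence
x = tf.keras.layers.Bidirectional(
    tf.keras.layers.GRU(128, return_sequences=True))(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)  # new loss layer
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(new_speech_dataset, epochs=10)       # hypothetical small dataset
```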

Baidu’s Deep Speech neural network offers a jump start for speech-to-text models, for example, allowing an opportunity to use transfer learning to bake in special speech features.

Why Transfer Learning?

Transfer learning is useful when you have insufficient data for the new domain you want a neural network to handle, but a large pre-existing pool of data that can be transferred to your problem.

So you might have only 1,000 images of horses, but by tapping into an existing CNN such as ResNet, trained with more than 1 million images, you can gain a lot of low-level and mid-level feature definitions.

For developers and data scientists interested in accelerating their AI training workflow with transfer learning capabilities, the NVIDIA Transfer Learning Toolkit offers GPU-accelerated pre-trained models and functions to fine-tune your model for various domains such as intelligent video analytics and medical imaging.

And when it’s time for deployment, you can roll out your application with an end-to-end deep learning workflow using Transfer Learning Toolkit for IVA and medical imaging.

Plus, with just a few online courses, you could become your company’s expert — and launch yourself into an entirely new career path.



Munich Startup Uses AI to Take Medical Imaging to Another Dimension

Medical imaging provides a window into the human body, allowing us to see under the skin.

But to really understand what’s going on inside our bodies, doctors need 3D imagery. And there’s no time this would be more helpful than during surgery.

Now, ImFusion, a Munich-based startup and NVIDIA Inception program member, is taking medical imaging into the next dimension. It’s using AI to turn 2D ultrasound data into 3D images.

Making Surgical Procedures More Efficient

Computed tomography (CT) and magnetic resonance imaging (MRI) scans give insights into the anatomy and processes of the body. These enable doctors to make effective diagnoses and build comprehensive treatment plans.

But CTs and MRIs have limitations, including the fact that doctors can’t use them during surgery due to the large, complex machinery involved.

ImFusion’s software suite is set to change this.

It converts 2D ultrasound scans into 3D images using a series of AI algorithms, developed on NVIDIA GPUs.

ImFusion takes 2D ultrasound data and translates it into interactive 3D models.

A 2D ultrasound probe captures real-time images, which are then superimposed onto a previously obtained CT or MRI image. This can be performed during a surgical procedure, making it possible for doctors to have a single, comprehensive view — when they need it most.
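The core idea can be sketched in a few lines of NumPy, under heavy assumptions: given 2D frames whose 3D poses have already been estimated (by tracking or a learned model), each pixel is placed into a shared voxel grid to build a volume that can then be registered against a prior CT or MRI. Shapes, spacing and the pose source are illustrative; this is not ImFusion's implementation.

```python
import numpy as np

def compound_volume(frames, poses, grid_shape=(256, 256, 256), spacing=0.5):
    """frames: list of 2D arrays; poses: list of 4x4 frame-to-world matrices (mm)."""
    volume = np.zeros(grid_shape, dtype=np.float32)
    counts = np.zeros(grid_shape, dtype=np.float32)
    for frame, pose in zip(frames, poses):
        h, w = frame.shape
        # Pixel coordinates in the frame's own plane (z = 0), in millimetres.
        ys, xs = np.mgrid[0:h, 0:w]
        pts = np.stack([xs * spacing, ys * spacing,
                        np.zeros_like(xs, dtype=float),
                        np.ones_like(xs, dtype=float)], axis=-1).reshape(-1, 4)
        world = (pose @ pts.T).T[:, :3]               # transform into world space
        idx = np.round(world / spacing).astype(int)   # nearest voxel index
        ok = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
        idx, vals = idx[ok], frame.reshape(-1)[ok]
        np.add.at(volume, tuple(idx.T), vals)         # accumulate intensities
        np.add.at(counts, tuple(idx.T), 1.0)
    return volume / np.maximum(counts, 1.0)           # average overlapping samples
```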

“We’re providing all surgeons more information during surgery,” said Wolfgang Wein, CEO of ImFusion. “The power of deep learning and image processing increases the dimensionality of the data we have.”

The ImFusion framework is extremely flexible. Customers can choose the components best suited for their needs, including data processing, AI algorithms and visualization tools.

With the availability of a development kit and the standalone ImFusion Suite, the software is poised to enhance the work of clinicians, as well as engineers and researchers.

Giving Doctors 3D Vision

Hospitals, research institutes and companies are already using ImFusion’s algorithms to develop prototypes of new medical imaging devices and surgical robots.

Austria’s Piur imaging GmbH is working with the company to replace expensive and time-consuming scans with tomographic ultrasound imaging.

Its PIUR tUS system improves the clinical workflows for vascular, abdominal and neurological diagnostics and treatment. It’s the first system capable of enhancing any ultrasound device with tomographic capabilities. This includes 2D to 3D ultrasound reconstruction, real-time detection and segmentation of vessels, as well as multi-scan registration and stitching.

With ImFusion’s framework, piur imaging is able to provide more comprehensive insights for clinicians and patients to improve patient care.

Learn more about ImFusion’s work at GTC 2019, March 17-21 in Silicon Valley, at the session “Shaping the Future of Medical Ultrasound Imaging with Deep Learning and GPU Computing.”

Register for GTC here.


* The ImFusion Suite standalone software is primarily meant for visualization and prototyping purposes and, therefore, is currently not certified as a medical product. It has neither FDA approval nor does it bear a CE marking.


So You Think AI Can Dance: Watch This Japanese Dance Troupe Collaborate with AI on Stage

AI has shown off a broad repertoire of creative talents in recent years — composing music, writing stories, crafting meme captions. Now, the stage is set for AI models to take on a new venture in the performing arts — by stepping onto the dance floor.

In a Japanese production titled discrete figures, an AI dancer was projected on stage, joining a live dancer for a duet. The show also presented a partly trained neural network that used recordings of audience members dancing as input data.

Japanese digital art group Rhizomatiks, multimedia dance troupe Elevenplay and media artist Kyle McDonald collaborated on the work, which was performed last year in Canada, the United States, Japan and Spain.

AI Choreography, Behind the Scenes

An artist with a technical background, McDonald has worked with dancers since 2010 on visual experiences that “play with the boundary between real and virtual spaces,” he said. He’s been an adjunct professor at NYU’s Tisch School of the Arts and has a graduate degree in electronic arts from Rensselaer Polytechnic Institute.

For discrete figures, McDonald collected 2.5 hours of motion capture data from eight dancers improvising to a consistent beat. This training data was fed into a neural network called dance2dance, running on NVIDIA GPUs, which generated movements that can be rendered as 3D stick figures.
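As a rough illustration of this kind of motion modeling, here is a hedged Keras sketch: a recurrent network predicts the next pose from a window of previous poses, and generation rolls the model forward step by step. Joint count, window length and architecture are assumptions, not the dance2dance code.

```python
import numpy as np
import tensorflow as tf

NUM_JOINTS = 21           # pose = 3D position of each joint (assumption)
POSE_DIM = NUM_JOINTS * 3
WINDOW = 120              # condition on the last 120 frames of motion

def build_motion_model() -> tf.keras.Model:
    """Predicts the next pose from a window of previous poses."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(WINDOW, POSE_DIM)),
        tf.keras.layers.LSTM(512, return_sequences=True),
        tf.keras.layers.LSTM(512),
        tf.keras.layers.Dense(POSE_DIM),   # regress the next pose vector
    ])

def generate(model: tf.keras.Model, seed: np.ndarray, steps: int) -> np.ndarray:
    """Autoregressively roll the model forward from a seed motion window."""
    motion = list(seed)                                  # seed: (WINDOW, POSE_DIM)
    for _ in range(steps):
        context = np.array(motion[-WINDOW:])[None]       # (1, WINDOW, POSE_DIM)
        motion.append(model.predict(context, verbose=0)[0])
    return np.array(motion)
```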

The projected AI dancer shared the stage with Maruyama.

During a duet between the AI and troupe dancer Masako Maruyama, the AI was overlaid with a 3D body projected on stage, appearing to float next to Maruyama. It started out as a silvery outline before crystallizing into Maruyama’s doppelganger.

The projected figure first performed movements generated in advance by the neural network, then switched to moves set by the dance company’s choreographer, Mikiko. Maruyama and the AI dancer moved in unison until she left the stage, leaving the projection to transition back into its silvery outline and AI-generated choreography.

The interplay between human-created and AI-generated movement spoke to the main themes of discrete figures.

“I think the audience will start asking more questions about where human intentionality ends, and where automation begins,” said McDonald. “It seems appropriate to ask in a time when algorithms rule everything from what we consume to how we express ourselves.”

Step Up: Audience Members Hit the Dance Floor

Another scene in the production incorporated the pix2pixHD algorithm developed by NVIDIA Research — and called for audience participation.

For each performance, between the time the doors opened and the show began, the team had 16 audience members (of all ages and dance experience levels) step into a booth to record one minute of dance movement against a black backdrop. Each dance video was sent to a single NVIDIA GPU in the cloud for pose estimation and training the pix2pixHD algorithm from scratch.

After just 15 minutes of training, the algorithm generated an output video for a single synchronized dance. By the time the show began, McDonald had compiled these 16 AI-created videos into a montage set to music, which was shown during the performance on a projection screen.
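For a sense of the data preparation behind a scene like this, here is a hedged sketch: each audience clip is turned into paired training images (a rendered pose skeleton as input, the original frame as target) in the folder layout the pix2pixHD repository expects, after which its training script is launched. The pose-estimation callable is a placeholder, and the command-line flags should be checked against the pix2pixHD README.

```python
from pathlib import Path

import cv2  # OpenCV, for reading frames and drawing skeletons

def prepare_pairs(video_path: str, out_root: str, estimate_pose) -> None:
    """estimate_pose(frame) -> list of (x, y) joint positions; a hypothetical callable."""
    a_dir = Path(out_root, "train_A")   # network input: rendered pose skeletons
    b_dir = Path(out_root, "train_B")   # network target: real video frames
    a_dir.mkdir(parents=True, exist_ok=True)
    b_dir.mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        skeleton = frame * 0                              # black canvas, same size
        for x, y in estimate_pose(frame):
            cv2.circle(skeleton, (int(x), int(y)), 4, (255, 255, 255), -1)
        cv2.imwrite(str(a_dir / f"{i:06d}.png"), skeleton)
        cv2.imwrite(str(b_dir / f"{i:06d}.png"), frame)
        i += 1
    cap.release()

# Training would then be launched from the pix2pixHD repository, roughly:
#   python train.py --dataroot <out_root> --label_nc 0 --no_instance
# (check the flags against the repository's README before use)
```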

The production presented a montage of videos generated by a partially trained neural network. Recordings of audience members dancing were used as the input data. (Image courtesy of Kyle McDonald.)

Using audience recordings allowed participants to experience their “movement data being reimagined by the machine,” McDonald wrote in a Medium post describing the project. “It follows a recurring theme throughout the entire performance: seeing your humanity reflected in the machine, and vice versa.”

The scene also included a creative visualization of the neural network’s raw data using 3D point clouds and a skeleton model of the generated dancer.

“When we saw the unfinished look of pix2pixHD after only 15 minutes of training, we felt that part of the training process would be perfect for the stage,” said McDonald. “The first 15 minutes of training really captures some themes in the performance such as the emergence of AI and the messy boundaries between the digital and physical worlds.”


Luxembourg Government Signs Europe’s First National AI Collaboration with NVIDIA

With an area of just over 2,500 square kilometres and a total population about one-tenth the size of Madrid, Luxembourg is one of the smallest countries in Europe.

However, when it comes to advances in AI, Luxembourg is going big. The country’s government recently announced a joint initiative with NVIDIA to found a national AI laboratory aimed at solving the world’s greatest challenges.

Blue-Sky Thinking, Real-World Problems

From healthcare, finance and security to exploring space, Luxembourg’s new AI lab has a broad and ambitious remit.

The team behind the lab includes representatives from the University of Luxembourg’s High-Performance Computing Team, the Luxembourg Centre for Systems Biomedicine (LCSB) and Interdisciplinary Centre for Security, Reliability and Trust (SnT), and the Luxembourg Institute for Science and Technology (LIST).

NVIDIA will contribute engineering expertise, as well as the compute power and software required to accelerate the lab’s work.

“Luxembourg is nurturing a pan-European innovation ecosystem,” explained Prime Minister Xavier Bettel. “This cooperation with NVIDIA is big news for our local innovators, and our country is proud to be the first European country to create an AI partnership with NVIDIA.”

Prime Minister Xavier Bettel explains that the new lab will power AI innovation in Luxembourg.

Connecting Academia to Industry

As well as undertaking fundamental academic research, one of the AI lab’s founding principles is that its work meets the needs of industry and society. With this in mind, the lab is closely affiliated with Digital Luxembourg, the country’s initiative to position itself as a technological frontrunner on the global stage.

“Knowledge, innovation and an appetite to shape the future are highly valuable resources for Luxembourg,” said Fernand Reinig, CEO a.i. of LIST. “This initiative will bring together Luxembourg’s research community with the leading role NVIDIA plays in applying AI to a wide range of applications.”

“We’re particularly focused on domains where high performance computing and AI have the potential to deliver significant breakthroughs in the near term,” added Reinig. “This is not science fiction — we’re working on real problems like Industry 4.0, regulatory technology and autonomous vehicles.”

A Cross-Disciplinary Approach

One of the most powerful aspects of the AI lab is its emphasis on cross-pollination between disciplines, explained Stéphane Pallage, rector of the University of Luxembourg.

“Tackling real-world problems involves bringing together experts from across disciplines, not just the sciences but also law, the humanities and beyond,” said Pallage. “This collaboration provides a new context for this approach, and we are sure it will have a very positive impact on the economy and society.

“Whether this is in identifying new application areas or pushing ahead with existing work, from our use of drones for automated airplane inspections to the analysis of genomes and mobile health sensor data, we look forward to seeing the results,” he said.

Digital Luxembourg will also support the creation of educational materials to help researchers and professionals across industries apply AI in their work.

“From education and research to technology transfer and implementation, this collaboration with NVIDIA will boost every aspect of our national AI ecosystem,” said Reinig.


Taking Deep Learning to Heart: Startup Using AI to Improve Heart Disease Diagnosis

Cars need smog tests. Bridges need quality inspections. And humans need health screenings, especially for the heart.

That’s because heart disease is the leading cause of death worldwide. In the United States alone, it’s responsible for about one in four deaths each year.

With deep learning, heart disease diagnosis is becoming easier and more accessible — which in turn can improve treatment and patient outcomes.

Echocardiograms — ultrasound tests that generate images of the heart — are used to detect and manage heart disease cases. An echo, as it’s commonly called, is also used as an assessment tool for specific populations, such as chemotherapy patients, because of their increased risk of heart failure.

In the United States, more than 10 million echocardiograms are performed each year. They’re typically done by registered diagnostic cardiac sonographers who go through a two-year degree program to learn how to perform them.

Bay Labs hopes that its deep learning tool, EchoGPS, will allow a nurse or other non-specialist to run diagnostic-quality echocardiograms with just a few days of training.

“We know that getting access to high-quality echo can be challenging and there are many opportunities to do echo more often,” said Shara Senior, head of product at Bay Labs. “If we can empower more clinical team members with the ability to perform an echo, it would enhance the quality of patient care.”

The startup has raised more than $9 million since its founding in 2015. In 2017, it received a $125,000 prize at the Inception competition during NVIDIA’s annual GPU Technology Conference.

Transforming Echo Capture and Analysis

When performing an ultrasound scan, a sonographer positions a probe, called a transducer, on the patient’s body to capture different images of the heart. The deep learning technology in the EchoGPS software guides non-expert healthcare providers on how to maneuver the transducer to get optimal-quality images for diagnosis.

EchoGPS software helps medical professionals conduct high-quality echocardiograms, improving heart disease diagnosis. (Photo courtesy of Bay Labs.)

Expanding access to high-quality echocardiograms could improve heart disease diagnosis and treatment in both traditional healthcare systems and underserved settings — resulting in better outcomes for patients and cost savings for healthcare providers.

Bay Labs has also developed an echocardiogram analysis software suite, EchoMD. It automatically chooses the highest-quality images from an echocardiogram and provides an assessment of ejection fraction, one of the metrics of cardiac function that doctors use to inform their clinical decisions.

Using convolutional neural networks, the startup’s deep learning models are trained on a database of 1 million echocardiograms from different clinical partners, including the Minneapolis Heart Institute.

A key capability of Bay Labs’ technology is real-time image processing during ultrasound capture. “That’s really only possible using NVIDIA technology,” Senior said. “We’ve gone as far as to approach our hardware provider and convince them to embed GPUs in their future devices so we can take advantage of the processing power.”

The company uses NVIDIA GPUs for training its neural networks for EchoGPS and EchoMD. The EchoGPS software runs on an NVIDIA Quadro P3000 GPU embedded into an ultrasound machine for real-time inference.
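As a loose illustration of per-frame inference during capture, here is a hedged sketch in which a small CNN scores each incoming ultrasound frame and yields a guidance cue for the operator. The model, guidance labels and frame source are placeholders, not Bay Labs' EchoGPS software.

```python
import numpy as np
import tensorflow as tf

GUIDANCE = ["move probe left", "move probe right", "tilt up", "tilt down", "hold"]

def build_guidance_model(frame_shape=(224, 224, 1)) -> tf.keras.Model:
    """Small CNN that maps one ultrasound frame to a guidance cue."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=frame_shape),
        tf.keras.layers.Conv2D(16, 5, strides=2, activation="relu"),
        tf.keras.layers.Conv2D(32, 5, strides=2, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(len(GUIDANCE), activation="softmax"),
    ])

def guide(frames, model: tf.keras.Model):
    """frames: iterable of 224x224 grayscale frames (assumed already resized)."""
    for frame in frames:
        x = frame.astype(np.float32)[None, ..., None] / 255.0
        probs = model(x, training=False).numpy()[0]   # low-latency, per-frame inference
        yield GUIDANCE[int(np.argmax(probs))], float(probs.max())
```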

Deep Learning Echocardiograms in the Field

EchoMD received FDA clearance in June. Preparation for FDA clearance of EchoGPS is underway, with the tool currently available only for investigational use, and a major clinical trial in the offing.

The startup recently launched a clinical study with Northwestern Medicine, in Chicago, to determine whether certified medical assistants with no prior scanning experience can capture high-quality echocardiograms with EchoGPS. The study includes 1,200 patients over the age of 65, and will also test the EchoMD analysis suite for detecting certain types of heart disease.

The company will also bring EchoGPS to two district hospitals in Rwanda later this month, part of a mission with Team Heart — a local nonprofit medical organization — to expand sustainable cardiac care in the country.

In the East African country, Bay Labs will demo its ultrasound software, including an investigational algorithm to detect rheumatic heart disease. It’s a condition that affects 32 million children and young people worldwide.

Stemming from inadequately treated strep throat, this acquired heart disease can inflict serious damage and is estimated to cause 275,000 deaths per year globally.

“Rheumatic heart disease is rarely seen in developed countries. It is known as a disease of poverty,” Senior said. Risk factors for the disease are overcrowding, lack of nutrition and poor hygiene.

However, if detected early, rheumatic heart disease can be easily treated.
