Make Room for Robots at GTC 2019

AI will take center stage at the 10th annual GPU Technology Conference, a multi-day deep dive into the present and future of AI-powered autonomous machines.

GTC is the best place to get an inside look at how robots are transforming virtually every industry imaginable — from smart agriculture to DNA sequencing.

Join us March 17-21 at the San Jose Convention Center to connect with developers, researchers and roboticists to hear, see and experience the latest in AI innovations.

There’s an incredible lineup of speakers, hands-on labs, posters and more just waiting for you at the event.

Delivery Robots? Check. An Autonomous Strawberry Harvester? Got That, Too

Venture to the cutting edge of deep learning and autonomous machines. Start with our extensive list of speakers and sessions from some of the world’s most prominent organizations, including Kiwi Campus, MIT, Musashi Seimitsu, Postmates X, UC Berkeley and more.

Among the highlights: on Sunday, March 17, you can attend a full-day, hands-on Deep Learning Institute (DLI) workshop and learn from a certified instructor to earn a certificate.

Astounding Demos from Amazing Robots

We’ve rounded up an array of incredible robots that can collaborate, fabricate, inspect, pick, sort, navigate and deliver. Keep your eyes peeled for them in the GTC exhibition hall.

A few of the bots sure to blow you away:

  • Sarcos Robotics: Come see its robo-snake, which can perform underground tank inspections and more, autonomously maneuvering through complex and dangerous terrain.
  • Canvas Technology: Industrial carts are great for transporting things. They’re even better when they’re autonomous and can move easily within indoor factory and warehouse environments. See how Canvas is paving the way.
  • Meituan-Dianping: See how they’re using Jetson to power AI in autonomous delivery vehicles, one of which will be on display, to move meals from restaurants to hungry consumers.

Get a Leg Up in Robotics with Hands-on Jetson Labs

See, hear and learn from the amazing people, companies and machines all over the conference. Then put that inspiration to good use and dive into the code yourself in hands-on labs.

Whatever your comfort level — introductory courses or advanced robotics — we’ve got a lab for you.

Cool Tech, Hotter Hardware

NVIDIA Jetson is the world’s leading computing platform for AI at the edge. High-performance yet energy efficient, it’s ideal for compute-intensive embedded applications in industries like agriculture, logistics, retail and healthcare.

The Jetson Partner Pavilion at GTC will have 25+ ecosystem partners showcasing cameras, AI boxes/NVRs, robot reference platforms and more.

Featured partners to check out include ADLINK, BoulderAI, Crew Systems, Connect Tech, D3 Engineering, Foxconn, Leopard Imaging and Mobiliya.

Get exclusive pricing on Jetson products at GTC.

Once you get inspired by the ecosystem, get to work on an amazing innovation of your own. Jetson TX2 and Jetson AGX Xavier Developer Kits will be offered onsite at special show pricing of $299 and $899, respectively.

You never know what other surprises we’ll have in store, so register for GTC today.

The post Make Room for Robots at GTC 2019 appeared first on The Official NVIDIA Blog.

AI in the Sky: Startup Uses Satellite Imagery to Assess Hurricane, Fire Damage

Technology can’t stop a hurricane in its tracks — but it can make it easier to rebuild in the wake of one.

Following a natural disaster, responders rush to provide aid and restore infrastructure, relying on limited information to make critical decisions about where resources should be deployed.

It can be challenging to analyze these situations from the ground. So San Francisco-based startup CrowdAI has turned to satellite and aerial imagery, extracting insights that help governments and companies send disaster response efforts where they’re most urgently needed.

“Over the last decade, a tremendous amount of satellite and drone data has come online,” said Devaki Raj, founder and CEO of CrowdAI, a member of the NVIDIA Inception virtual accelerator program. “We’re using recent advances in computer vision to unlock that data.”

A few hundred natural disasters occur each year around the globe. That’s a devastating number in terms of human impact, but it’s a fairly small sample size for a machine learning model being trained to assess disaster damage in disparate regions of the world.

While a major earthquake will occur on average around once a month, any given fault line may go a century or more without a strong seismic event. And though a few dozen hurricanes and typhoons strike each year, they cause destruction across vastly different landscapes.

An effective AI for this use case must be specific enough to identify roads and buildings, but generic enough to recognize what a home or road looks like in South Carolina, as well as Sumatra. To achieve this requires finding enough training data, which consists of diverse shots taken both before and after a disaster.

This was once a challenge, but Jigar Doshi, machine learning lead at CrowdAI, says, “as computer vision has matured, we don’t need that many samples of ‘before fire’ and ‘after fire’ to make great models to predict the impacts of disaster.”
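CrowdAI’s production models are proprietary deep networks, but the before/after comparison at the heart of this approach can be illustrated with a toy sketch. The example below (NumPy only; the tile size, threshold and synthetic “scene” are arbitrary assumptions for illustration, not CrowdAI parameters) flags the image tiles that changed most between co-registered pre- and post-disaster shots:

```python
import numpy as np

def tile_change_scores(before, after, tile=4):
    """Mean absolute pixel change per tile, for co-registered before/after images.

    before, after: 2D float arrays (grayscale), same shape.
    Returns one change score per (tile x tile) block.
    """
    h, w = before.shape
    h, w = h - h % tile, w - w % tile  # crop to a whole number of tiles
    diff = np.abs(after[:h, :w] - before[:h, :w])
    return diff.reshape(h // tile, tile, w // tile, tile).mean(axis=(1, 3))

def damaged_tiles(before, after, tile=4, threshold=0.5):
    """Flag tiles whose mean change exceeds an (arbitrary) threshold."""
    return tile_change_scores(before, after, tile) > threshold

# Toy scene: an 8x8 image whose top-left quadrant changes after the event.
before = np.zeros((8, 8))
after = before.copy()
after[:4, :4] = 1.0  # simulated destruction

flags = damaged_tiles(before, after)
print(flags)  # only the top-left tile is flagged
```

A real system would replace raw pixel differences with features learned by a segmentation network trained on diverse geographies, but the flag-the-changed-regions structure follows the same pattern.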

In the aftermath of Hurricane Michael last year, CrowdAI worked with telecoms provider WOW! to assess building damage across Panama City, Fla. Analyzing satellite images of the city, provided by NOAA, helped the provider deploy on-the-ground operators to locations most severely hit by the storm and restore service faster.

When unaided by AI, the responders must physically drive around the area and rely on phone calls from customers to determine which regions need assistance — a time-consuming and imprecise process.

[Image: wildfire damage. Analysis courtesy of CrowdAI; imagery courtesy of DigitalGlobe’s 2018 Open Data Program.]

The startup’s research was recognized last year by two of the world’s leading AI conferences: Doshi had a paper accepted at CVPR and presented a joint paper with Facebook at NeurIPS. And after last year’s Camp Fire in Butte County, Calif., the deadliest wildfire in the United States in a century, he and the CrowdAI team used open data to analyze damage.

CrowdAI uses NVIDIA GPUs in the cloud and onsite for training its algorithms and for inference. Depending on the inference environment used, the AI models can return insights with just a one-second lag.

“When a disaster happens, we aren’t training — we’re just doing inference,” said Raj. “That’s why we need this speed.”

CrowdAI is also exploring models that could forecast damage, as well as deep learning tools that go beyond satellite images to integrate wind, precipitation and social media data.

* Main image shows a Florida neighborhood after Hurricane Michael, overlaid with CrowdAI’s categorization of building damage. Analysis courtesy of CrowdAI and imagery courtesy of NOAA’s National Ocean Service.

The post AI in the Sky: Startup Uses Satellite Imagery to Assess Hurricane, Fire Damage appeared first on The Official NVIDIA Blog.

Talk to Me: Deep Learning Identifies Depression in Speech Patterns

“Talk therapy” is often used by psychotherapists to help patients overcome depression or anxiety through conversation.

A research team at Massachusetts Institute of Technology is using deep learning to uncover what might be called “talk diagnosis” — detecting signs of depression by analyzing a patient’s speech.

The research could lead to effective, and inexpensive, diagnosis of serious mental health issues.

An estimated one in 15 adults in the U.S. reports having a bout of major depression in any given year, according to the National Institute of Mental Health. The condition can lead to serious disruptions in a person’s life, yet our understanding of it remains limited.

The techniques used to identify depression typically involve mental health experts asking direct questions and drawing educated conclusions.

In the future, these pointed assessments may be less necessary, according to lead researcher Tuka Alhanai, an MIT research assistant and Ph.D. candidate in computer science. She envisions her team’s work becoming part of the ongoing monitoring of individual mental health.

All About the Dataset

A key aspect of getting started with deep learning is getting good data.

That was a challenge for Alhanai when her team went to train its model. She was specifically looking for datasets of conversations in which some of the participants were depressed.

Eventually, she found one from the University of Southern California, which had teamed with German researchers to conduct interviews with a group of 180 people, 20 percent of whom had some signs of depression. The interviews consisted of 20 minutes of questions about where the subjects lived, who their friends were and whether they felt depressed.

Alhanai was emboldened by the researchers’ conclusion that depression can, in fact, be detected in speech patterns and vocabulary. But she wanted to take things a step further by removing the leading, predictive questions, and instead train a model to detect depression during normal, everyday conversation.

“There is significant signal in the data that will cue you to whether people have depression,” she said. “You listen to overall conversation and absorb the trajectory of the conversation and speech, and the larger context in which things are said.”

Alhanai and her team combined the processing power of a cluster of machines running more than 40 NVIDIA TITAN X GPUs with the TensorFlow, Keras and cuDNN deep learning libraries, and set to work training their model.

They fed it snippets of the interviews from the dataset, minus the obvious questions and references to depression, leaving the model to determine whether depression cues were present. They subsequently exposed the model to sections of conversation from a healthy person and a depressed person, and told the model which was which.

After enough cycles of this, the researchers would feed the model another section of conversation and ask it to determine whether there was an indication of possible depression. The team trained dozens of models this way, something Alhanai said would not have been possible without access to GPUs.
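At its core, the procedure described above is supervised binary classification over conversation segments. As a heavily simplified, hypothetical sketch (the MIT team trained neural networks with TensorFlow and Keras on GPUs; here a plain logistic regression on synthetic feature vectors stands in), the show-labeled-examples-then-ask loop looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in "features": each row summarizes one conversation
# segment (think pitch, pause length, word-choice statistics). Label 1
# marks segments from speakers showing signs of depression. All of this
# data is made up purely for illustration.
X = rng.normal(size=(200, 5))
w_true = np.array([1.5, -2.0, 0.5, 0.0, 1.0])
y = (X @ w_true + rng.normal(scale=0.5, size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training: repeatedly show labeled segments and nudge the weights toward
# the correct answer -- the "told the model which was which" step.
w = np.zeros(5)
for _ in range(500):
    p = sigmoid(X @ w)                   # current predictions
    w -= 0.1 * (X.T @ (p - y)) / len(y)  # gradient step on logistic loss

# Inference: ask the trained model about segments and measure agreement.
accuracy = float(((sigmoid(X @ w) > 0.5) == y).mean())
print(f"training accuracy: {accuracy:.2f}")
```

A neural sequence model over real acoustic and lexical features would replace both the synthetic data and the linear model; this sketch only illustrates the labeled-training-then-inference loop the article describes.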

Success Breeds Ambition

Ultimately, the training resulted in the team’s model identifying depression from normal conversation with more than 70 percent accuracy during inference — on par with mental health experts’ diagnosis — with each experiment occurring on a single TITAN X.

The team reported its findings in a paper presented at the recent Interspeech 2018 conference in Hyderabad, India, and is now primed to take the work to the next level.

“This work is very encouraging,” said Alhanai. “Let’s get these systems out there and have them do predictions for evaluation purposes — not to use clinically yet, but to collect more data and build more robustness.”

Naturally, Alhanai craves access to faster and more powerful GPUs so she can run more experiments with larger datasets. But her long-term view is to explore the impact that using deep learning to analyze communication — not just speech — can have in diagnosing and managing other mental health conditions.

“Any condition you can hear and feel in speech, or through other gestures, a machine should be able to determine,” she said. “It doesn’t matter what the signal is — it could be speech, it could be writing, it could be jaw movement, it could be muscle tension. It will be a very non-invasive way to monitor these things.”

The post Talk to Me: Deep Learning Identifies Depression in Speech Patterns appeared first on The Official NVIDIA Blog.

Reading the Vital Signs: Leading Minds in Medicine to Discuss AI Progress in Healthcare

The buzz around AI in medicine is contagious. From radiology and drug discovery to disease risk prediction and patient care, deep learning is transforming healthcare from every angle.

You can learn about it all at the GPU Technology Conference, where healthcare innovators from industry, universities and medical institutions will gather to share how AI and GPUs are empowering doctors and researchers. GTC takes place March 17-21 in San Jose.

The conference healthcare sessions feature presentations by renowned names in medicine — including from four of the top five academic medical centers in the United States, and from five of the nation’s top seven radiology departments.

A highlight of the week will be the Tuesday morning talk by luminary Eric Topol, founder and director of the Scripps Research Translational Institute. Topol will speak about the opportunities AI and deep learning present for clinicians, health systems and patients.

The talk will be followed by a signing for his forthcoming book, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again.

A Host of Opportunities to Learn

The week is packed with more than 40 healthcare sessions, covering topics as diverse as medical imaging, genomics and computational chemistry.

Throughout the week, attendees can join two-hour, instructor-led training sessions on AI and accelerated computing, including workshops specific to medical use cases.

Robust Presence on the Show Floor

Healthcare entrepreneurs in NVIDIA’s Inception virtual accelerator program can be found in speaker sessions, booths, poster sessions and the Inception Theater on the show floor. You’ll hear how companies including Proscia, Subtle Medical, Arterys and ImFusion are using AI for medical imaging, pathology and more.

Major healthcare institutions including Mass General Hospital will also be sharing how they use AI and GPUs to transform medicine and patient care.

For healthcare startups, there will be a meetup at the Inception Lounge on Wednesday, March 20, from 5-6 pm. And, of course, our booth will feature demos of GPU-powered healthcare applications and showcase the NVIDIA Clara platform for medical imaging applications.

Check out the full healthcare track at GTC, and register today.

The post Reading the Vital Signs: Leading Minds in Medicine to Discuss AI Progress in Healthcare appeared first on The Official NVIDIA Blog.

Get a Nuance Look at AI in Healthcare in the AI Podcast and at GTC

Nuance is a pioneer in voice recognition technology. You probably recognize its name from its work bringing AI to speech recognition and virtual assistant technology.

What you might not know is that the company has also begun using AI to chart the course of the healthcare industry, and to understand how physicians can use the technology to make people healthier and the work of doctors better.

To learn more on that, we invited Karen Holzberger, vice president and general manager of Nuance’s Healthcare diagnostic solutions business, onto the latest episode of our AI Podcast.

And if you’d like to hear more from Nuance, Raghu Vemula, vice president of research and development at the Healthcare Division of Nuance Communications, will be speaking as part of a panel on “Enabling medical imaging experts to bring AI to the clinic,” at next month’s GPU Technology Conference.

The session will focus on how developers from industry and institutions use the NVIDIA Clara platform to integrate AI into hospitals to bend the cost curve and improve patient outcomes.

The Clara platform is designed for developers to bring NVIDIA technology and expertise in high performance computing, AI and photorealistic rendering to the medical imaging industry.

Listen to the podcast with Holzberger below. And register to attend GTC here.

The post Get a Nuance Look at AI in Healthcare in the AI Podcast and at GTC appeared first on The Official NVIDIA Blog.