AI Podcast: Margot Gerritsen’s Got Binders Full of Women in Data Science — and She’s Serious

This week’s AI Podcast guest is a renaissance woman with a special passion for data science.

Margot Gerritsen is senior associate dean for educational affairs and professor of energy resources engineering at Stanford University. She’s the co-founder and co-director of the organization Women in Data Science (WiDS). And she’s the host of the WiDS podcast.

Gerritsen spoke to AI Podcast host Noah Kravitz about WiDS, the projects she’s overseeing at Stanford, and what she’s excited about in the current era of data science: the democratization of data.

Gerritsen sees today’s vast quantities of data, open source code and computational power as a “perfect storm” for groundbreaking analytical work.

Key Points From This Episode:

  • The idea for WiDS was born during a conversation at Stanford’s Coupa Cafe, in which Gerritsen lamented the lack of female speakers at technology conferences and was inspired to take action.
  • WiDS hosted its major technical conference at Stanford earlier this month. Conference sessions are available to watch for free. This event is traditionally followed by a series of over 150 regional events across the world through the month of March.

Tweetables:

“We wanted to create binders of women in data science so that we could help promote them, and that’s a very serious thing because we want to make sure that these women who are making outstanding contributions are being seen, and listened to.” — Margot Gerritsen [3:23]

“You know, when you can use your data skills and your modeling and simulation skills to come up with better policies — that’s the golden spot. That’s the best place to be.” — Margot Gerritsen [30:50]

You Might Also Like

AI4Good: Canadian Lab Empowers Women in Computer Science

Doina Precup, an associate professor at McGill University and research team lead at AI startup DeepMind, speaks about the AI4Good Lab she co-founded to give women more access to machine learning training.

Entrepreneur Brings GPUs to Fashion

Pinar Yanardag, a postdoctoral research associate at MIT Media Lab, talks about her innovative project at the MIT Media Lab that’s producing a host of AI-inspired creations, including AI fashion.

Pod Squad: Descript Uses AI to Make Managing Podcasts Quicker, Easier

Serial entrepreneur Andrew Mason is making podcast editing easier and more collaborative with his company, Descript Podcast Studio, which uses AI, natural language processing and automatic speech synthesis.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.


Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.


Keeping an Eye on AI: Building Ethical Technology at Salesforce

Kathy Baxter, the architect of the ethical AI practice at Salesforce, is helping her team and clients create more responsible technology. To do so, she supports employee education, the inclusion of safeguards in Salesforce technology, and collaboration with other companies to improve ethical AI across industries.

Baxter spoke with AI Podcast host Noah Kravitz about her role at the company, a position she helped create as the need for AI ethicists became apparent.

She’s helped construct practices such as release readiness planning, in which teams brainstorm any potential unintended negative consequences, along with ways to mitigate them.

In the future, Baxter predicts more global policies that will help companies define ethical AI and guide them in creating responsible technology.

Kathy Baxter, architect of ethical AI at Salesforce.

Key Points From This Episode:

  • There are several ways to correct bias in AI. These include editing the training data or the model itself (for example, not using race or gender as a factor); a small sketch of excluding such variables follows this list.
  • Einstein is Salesforce’s AI platform. The company implements in-app guidance through a feature called Einstein Discovery. One of its functions is to alert users when they might be using sensitive variables such as age, race or gender. Administrators can also select the variables they don’t want to include in their model, to avoid accidental bias.
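To make that variable-exclusion idea concrete, here is a minimal sketch of dropping sensitive columns before training a model. The dataset, column names and model are hypothetical stand-ins; this illustrates the general technique, not how Einstein Discovery implements it.

```python
# A minimal sketch of excluding sensitive variables before training.
# "train.csv" and all column names are hypothetical; this is a generic
# illustration, not Einstein Discovery's implementation.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

SENSITIVE = ["age", "race", "gender"]   # variables an admin chose to exclude

df = pd.read_csv("train.csv")                     # hypothetical dataset
y = df["converted"]                               # hypothetical label column
X = df.drop(columns=SENSITIVE + ["converted"])    # features minus sensitive ones

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")
```

Note that dropping columns alone doesn’t remove proxies for sensitive variables, which is one reason editing the training data itself also comes up in the episode.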

Tweetables:

“We have to understand that everything that we build and bring into society has an impact.” — Kathy Baxter [2:29]

“One of the magical things about AI is that we can become aware of biases that we might not have known even existed in our business processes in the first place.” — Kathy Baxter [10:18]

You Might Also Like

How Federated Learning Can Help Keep Data Private

Walter De Brouwer, CEO of Doc.ai — a company building a medical research platform that addresses the issue of data privacy with federated learning — talks about the complications of putting data to work in industries such as healthcare.

Good News About Fake News: AI Can Now Help Detect False Information

If only there was a way to filter the fake news from the real. Thanks to Vagelis Papalexakis, a professor of computer science at the University of California, Riverside, there is. He discusses his algorithm that can detect fake news with 75 percent accuracy.

Teaching Families to Embrace AI

Tara Chklovski is CEO and founder of Iridescent, a nonprofit that provides access to hands-on learning opportunities to prepare underrepresented children and adults for the future of work. She talks about Iridescent, the UN’s AI for Good Global Summit and the AI World Championship — part of the AI Family Challenge.

Tune In to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.


Make Our Podcast Better

Have a few minutes to spare? Fill out this short listener survey. Your answers will help us make a better podcast.


Get the Picture: For Latest in AI and Medical Imaging, Tune In to GTC Digital

Picture this: dozens of talks about AI in medical imaging, presented by experts from top radiology departments and academic medical centers around the world, all available free online.

That’s just a slice of GTC Digital, a vast library of live and on-demand webinars, training sessions and office hours from NVIDIA’s GPU Technology Conference.

Healthcare innovators across radiology, genomics, microscopy and more will share the latest AI and GPU-accelerated advancements in their fields through talks on GTC Digital.

Researchers in Sydney, Australia, are using AI to analyze brain scans. In Massachusetts, a researcher is segmenting the prostate gland from ultrasound images to help doctors fine-tune radiation doses. And in Munich, Germany, a company is streamlining radiology reports to foster real-time reporting.

Read more about these standout speakers advancing the use of deep learning in medical imaging worldwide below. And register for GTC Digital for free to see the whole healthcare lineup.

Mental Math: Australian Center Uses AI to Analyze Brain Scans

When studying neurodegenerative disease, quantifying brain tissue loss over time helps physicians and clinical trialists monitor disease progression. Radiologists typically inspect brain scans visually and classify the brain shrinkage as “moderate” or “severe” — a qualitative assessment. With accelerated computing, brain tissue loss can instead be measured precisely and quantitatively, and in far less time.

The Sydney Neuroimaging Analysis Centre conducts neuroimaging research as well as commercial image analysis for clinical research trials. At GTC Digital, SNAC will share how it uses NVIDIA GPUs to accelerate AI tools that automate laborious analysis tasks in its research workflow.

One model precisely isolates brain images from head scans, segmenting brain lesions for multiple sclerosis cases. The AI reduces the time to segment and determine the volume of brain lesions from up to 15 minutes for a manual examination down to just three seconds, regardless of the number or volume of lesions.
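For a sense of the arithmetic behind such quantitative measurements, here is a toy sketch that turns a binary segmentation mask into a lesion volume. The mask and voxel spacing are synthetic stand-ins, not SNAC’s pipeline.

```python
# Toy sketch: lesion volume from a segmentation mask is a voxel count times
# the physical voxel size. Mask shape and spacing here are invented.
import numpy as np

mask = np.zeros((256, 256, 64), dtype=bool)   # hypothetical binary lesion mask
mask[100:110, 120:135, 30:34] = True          # pretend the model marked a lesion

voxel_mm3 = 1.0 * 1.0 * 2.5                   # assumed voxel spacing in mm
lesion_volume_ml = mask.sum() * voxel_mm3 / 1000.0
print(f"lesion volume: {lesion_volume_ml:.2f} mL")   # -> 1.50 mL
```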

“NVIDIA GPUs and DGX systems are the core of our AI platform, and are transforming the delivery of clinical and research radiology with our AI innovation,” said Tim Wang, director of operations at SNAC. “We are particularly excited by the application of this technology to brain imaging.”

SNAC uses the NVIDIA Clara Train SDK’s AI-assisted annotation tools for model development and the NVIDIA Clara Deploy SDK for integration with clinical and research workflows. It’s also exploring the NVIDIA Clara platform as a tool for federated learning. The center relies on the NVIDIA DGX-1 server, NVIDIA DGX Station and GPU-powered PC workstations for both training and inference of its AI algorithms.

Harvard Researcher Applies AI to Prostate Cancer Therapy

Around one in nine men is diagnosed with prostate cancer at some point during his life. Medical imaging tools like ultrasound and MRI are key methods doctors use to check prostate health and plan for surgery and radiotherapy.

Davood Karimi, a research fellow at Harvard Medical School, is developing deep learning models to more quickly and accurately segment the prostate gland from ultrasound images — a difficult task because the boundaries of the prostate are often either not visible or blurry in ultrasound images.

“Accurate segmentation is necessary to make sure radiologists can deliver the needed radiation dose to the prostate, but avoid damaging critical nearby structures like the rectum or bladder,” he said.

In his GTC Digital talk, Karimi will do a deep dive into a research paper he presented at the prestigious MICCAI healthcare conference last year. Using an NVIDIA TITAN GPU, Karimi has accelerated neural network inference to under a second per scan, while improving accuracy over current segmentation techniques radiologists use.
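For readers curious how sub-second per-scan latency is typically measured, here is a hedged PyTorch sketch with a placeholder network; it shows only the timing pattern, not Karimi’s actual architecture.

```python
# Timing per-scan GPU inference. The one-layer network is a placeholder,
# not the paper's model; the pattern (warm-up, synchronize, time) is the point.
import time
import torch

model = torch.nn.Conv3d(1, 1, kernel_size=3, padding=1).cuda().eval()
scan = torch.randn(1, 1, 32, 128, 128, device="cuda")   # fake ultrasound volume

with torch.no_grad():
    model(scan)                    # warm-up pass triggers CUDA initialization
    torch.cuda.synchronize()
    start = time.perf_counter()
    model(scan)
    torch.cuda.synchronize()       # wait for the GPU before reading the clock
print(f"per-scan inference: {time.perf_counter() - start:.4f} s")
```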

German Company Streamlines Radiology Reports with NVIDIA Clara

Healthcare providers worldwide record their analyses of patient data, including medical images, into text-based reports. But no two radiologists or hospitals do it exactly the same.

Munich-based Smart Reporting GmbH aims to streamline and standardize the reporting workflow for radiologists. The company uses a structured reporting interface that organizes patient data and doctors’ notes into a consistent format.

Smart Reporting uses the NVIDIA Clara platform to segment prostate cancer lesions from medical images. This image annotation is loaded into a draft diagnosis report that radiologists can approve, edit or reject before generating a final report to provide to surgeons and other healthcare professionals.

A member of the NVIDIA Inception virtual accelerator program, Smart Reporting is working with major healthcare organizations including Siemens Healthineers.

“When we release a prototype for radiologists in the clinic, it’ll be essential to have almost real-time reporting,” said Dominik Noerenberg, the company’s chief medical officer. “We’re able to see that speedup running on multi-GPU containers in NGC.”

Noerenberg and Alvaro Sanchez, principal software engineer at Smart Reporting, will present a talk on the advantages of AI-enhanced radiology workflows at GTC Digital.

See the full lineup of healthcare talks on GTC Digital and register for free.

Main image shows a side-by-side comparison of brain segmentation. Left image shows manual segmentation, while right shows AI segmentation. Image courtesy of Sydney Neuroimaging Analysis Centre. 


DarwinAI Makes AI Applications More Efficient and Less of a ‘Black Box’ — with Its Own AI

Employees of DarwinAI, an artificial intelligence software startup based in Waterloo, Ontario, gather with company CEO Sheldon Fernandez (seated, center, in the jacket). Credit: DarwinAI

As a student pursuing a doctorate in systems design engineering at the University of Waterloo, Alexander Wong didn’t have enough money for the hardware he needed to run his experiments in computer vision. So he invented a technique to make neural network models smaller and faster.

“He was giving a presentation, and somebody said, ‘Hey, your doctorate work is cool, but you know the real secret sauce is the stuff that you created to do your doctorate work, right?’” recalls Sheldon Fernandez.

Fernandez is the CEO of DarwinAI, the Waterloo, Ontario-based startup now commercializing that secret sauce. Wong is the company’s chief scientist. And Intel is helping the company multiply the performance of its remarkable software, from the data center to edge applications.

“We use other forms of artificial intelligence to probe and understand a neural network in a fundamental way,” says Fernandez, describing DarwinAI’s playbook. “We build up a very sophisticated understanding of it, and then we use AI a second time to generate a new family of neural networks that’s as good as the original, a lot smaller and can be explained.”
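Generative Synthesis itself is proprietary, but ordinary magnitude pruning gives a rough feel for the “smaller network that behaves like the original” idea. The sketch below is a generic stand-in and emphatically not DarwinAI’s method.

```python
# Generic magnitude pruning -- NOT DarwinAI's proprietary Generative Synthesis,
# just a simple illustration of shrinking a trained network.
import torch
import torch.nn.utils.prune as prune

net = torch.nn.Sequential(torch.nn.Linear(784, 256), torch.nn.ReLU(),
                          torch.nn.Linear(256, 10))

for module in net.modules():
    if isinstance(module, torch.nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.9)  # zero 90% of weights

nonzero = sum(int(m.weight.count_nonzero()) for m in net.modules()
              if isinstance(m, torch.nn.Linear))
total = sum(m.weight.numel() for m in net.modules() if isinstance(m, torch.nn.Linear))
print(f"{nonzero}/{total} weights remain after pruning")
```

In practice, a pruned network would be fine-tuned afterward to recover accuracy; DarwinAI’s approach goes much further by generating a new network family outright.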

That last part is critical: A big challenge with AI, says Fernandez, is that “it’s a black box to its designers.” Without knowing how an AI application functions and makes decisions, developers struggle to improve performance or diagnose problems.

An automotive customer of DarwinAI, for instance, was troubleshooting an automated vehicle with a strange tendency to turn left when the sky was a particular shade of purple. DarwinAI’s solution — which it calls Generative Synthesis — helped the team recognize how the vehicle’s behavior was affected by training for certain turning scenarios that had been conducted in the Nevada desert, coincidentally when the sky was that purple hue (read DarwinAI’s recent deep dive on explainability).

Another way to think about Generative Synthesis, Fernandez explains, is to imagine an AI application that looked at a house designed by a human being, noted the architectural contours, and then designed a completely new one that was stronger and more reliable. “Because it’s AI, it sees efficiencies that would just never occur to a human mind,” Fernandez says. “That’s what we are doing with neural networks.” (A neural network is an approach to break down sophisticated tasks into a large number of simple computations.)

Intel is in the business of making AI not only accessible to everyone, but also faster and easier to use. Through the Intel AI Builders program, Intel has worked with DarwinAI to pair Generative Synthesis with the Intel® Distribution of OpenVINO™ toolkit and other Intel AI software components to achieve order-of-magnitude gains in performance.

In a recent case study, neural networks built using the Generative Synthesis platform coupled with Intel® Optimizations for TensorFlow were able to deliver up to 16.3 times and 9.6 times performance increases on two popular image recognition workloads (ResNet50 and NASNet, respectively) over baseline measurements for an Intel Xeon Platinum 8153 processor.

“Intel and DarwinAI frequently work together to optimize and accelerate artificial intelligence performance on a variety of Intel hardware,” says Wei Li, vice president and general manager of Machine Learning Performance at Intel.

The two companies’ tools are “very complementary,” Fernandez says. “You use our tool and get a really optimized neural network and then you use OpenVINO and the Intel tool sets to actually get it onto a device.”

This combination can deliver AI solutions that are simultaneously compact, accurate and tuned for the device where they are deployed, which is becoming critical with the rise of edge computing.

“AI at the edge is something we’re increasingly seeing,” says Fernandez. “We see the edge being one of the themes that is going to dominate the discussion in the next two, three years.”

In the shadow of coronavirus: Dominating all discussion right now is coronavirus. DarwinAI announced this week that “we have collaborated with researchers at the University of Waterloo’s VIP Lab to develop COVID-Net: a convolutional neural network for COVID-19 detection via chest radiography.” The company has made the source code and dataset openly available on GitHub. Read about Intel and coronavirus.

More Customer Stories: Intel Customer Spotlight on Intel.com | Customer Stories on Intel Newsroom


Silicon Valley High Schooler Takes Top Award in Hackster.io Jetson Nano Competition

Over the phone, Andrew Bernas gives the impression of a veteran Silicon Valley software engineer with a lot of heart, focused on worldwide social causes.

He’s in fact a 16-year-old high school student, and he recently won first place in the AI for Social Good category of the NVIDIA-supported AI at the Edge Challenge.

At Hackster.io — an online community of developers and hobbyists — he and others began competing in October, building AI projects using the NVIDIA Jetson Nano Developer Kit.

An Eagle Scout who leads conservation projects, he wanted to use AI to solve a big problem.

“I got the idea to use Jetson Nano processing power to compute a program to recognize handwritten and printed text to allow those who are visually impaired or disabled to have access to reading,” said Bernas, a junior at Palo Alto High School.

He was among 10 winners in the competition, which drew more than 2,500 registrants from 35 countries competing for a share of NVIDIA supercomputing prizes. Bernas used NVIDIA’s Getting Started With Jetson Nano Deep Learning Institute course to begin his project.

AI on Reading

Bernas’s winning entry, Reading Eye for the Blind with NVIDIA Jetson Nano, is a text-to-voice AI app and a device prototype to aid the visually impaired.

The number of visually impaired people worldwide, defined as those with moderate to severe vision loss, is estimated at 285 million, with 39 million of them blind, according to the World Health Organization.

His device, which can be seen in the video below, allows people to place books or handwritten text to be scanned by a camera and converted to voice.
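As a loose sketch of such a text-to-voice pipeline, the snippet below runs OCR on an image and speaks the result. The library choices (pytesseract, pyttsx3) and the file name are assumptions for illustration; the winning entry uses its own CNN and RNN models on the Jetson Nano.

```python
# Sketch of a scan-to-speech pipeline: OCR an image, then speak the text.
# pytesseract and pyttsx3 are illustrative choices, not the project's stack.
import pytesseract
import pyttsx3
from PIL import Image

text = pytesseract.image_to_string(Image.open("page.jpg"))  # hypothetical scan
print(text)

engine = pyttsx3.init()       # offline text-to-speech engine
engine.say(text)
engine.runAndWait()           # block until speech finishes
```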

“Part of the inspiration was for creating a solution for my grandmother and other people with vision loss,” said Bernas. “She’s very proud of me.”

DIY Jetson Education

Bernas enjoys do-it-yourself building. His living room stores some of the more than 20 radio-controlled planes and drones he has designed and built. He also competes on his school’s Science Olympiad team in electrical engineering, circuits, aeronautics and physics.

Between high school courses, online programs and forums, Bernas has learned to use HTML, CSS, Python, C, C++, Java and JavaScript. For him, developing models using the NVIDIA Jetson Nano Developer Kit was a logical next step in his education for DIY building.

He plans to develop his text-to-voice prototype to include Hindi, Mandarin, Russian and Spanish. Meanwhile, he has his sights on AI for robotics and autonomy as a career path.

“Now that machine learning is so big, I’m planning to major in something engineering-related with programming and machine learning,” he said of his college plans.

 

NVIDIA Jetson Nano makes adding AI easier and more accessible to makers, self-taught developers and embedded tech enthusiasts. Learn more about Jetson Nano and view more community projects to get started.


Intel Scales Neuromorphic Research System to 100 Million Neurons


What’s New: Today, Intel announced the readiness of Pohoiki Springs, its latest and most powerful neuromorphic research system, which provides the computational capacity of 100 million neurons. The cloud-based system will be made available to members of the Intel Neuromorphic Research Community (INRC), extending their neuromorphic work to solve larger, more complex problems.

“Pohoiki Springs scales up our Loihi neuromorphic research chip by more than 750 times, while operating at a power level of under 500 watts. The system enables our research partners to explore ways to accelerate workloads that run slowly today on conventional architectures, including high-performance computing (HPC) systems.”
–Mike Davies, director of Intel’s Neuromorphic Computing Lab

What It Is: Pohoiki Springs is a data center rack-mounted system and Intel’s largest neuromorphic computing system developed to date. It integrates 768 Loihi neuromorphic research chips inside a chassis the size of five standard servers.

Loihi processors take inspiration from the human brain. Like the brain, Loihi can process certain demanding workloads up to 1,000 times faster and 10,000 times more efficiently than conventional processors. Pohoiki Springs is the next step in scaling this architecture to assess its potential to solve not just artificial intelligence (AI) problems, but a wide range of computationally difficult problems. Intel researchers believe the extreme parallelism and asynchronous signaling of neuromorphic systems may provide significant performance gains at dramatically reduced power levels compared with the most advanced conventional computers available today.

What the Opportunity for Scale Is: In the natural world, even some of the smallest living organisms can solve remarkably hard computational problems. Many insects, for example, can visually track objects and navigate and avoid obstacles in real time, despite having brains with well under 1 million neurons.

Similarly, Intel’s smallest neuromorphic system, Kapoho Bay, comprises two Loihi chips with 262,000 neurons and supports a variety of real-time edge workloads. Intel and INRC researchers have demonstrated the ability for Loihi to recognize gestures in real time, read braille using novel artificial skin, orient direction using learned visual landmarks and learn new odor patterns, all while consuming tens of milliwatts of power. These small-scale examples have so far shown excellent scalability, with larger problems running faster and more efficiently on Loihi compared with conventional solutions. This mirrors the scalability of brains found in nature, from insect brains to human brains.

With 100 million neurons, Pohoiki Springs increases Loihi’s neural capacity to the size of a small mammal brain, a major step on the path to supporting much larger and more sophisticated neuromorphic workloads. The system lays the foundation for an autonomous, connected future, which will require new approaches to real-time, dynamic data processing.

How It Will Be Used: Intel’s neuromorphic systems, such as Pohoiki Springs, are still in the research phase and are not intended to replace conventional computing systems. Instead, they provide a tool for researchers to develop and characterize new neuro-inspired algorithms for real-time processing, problem solving, adaptation and learning.

INRC members will access and build applications on Pohoiki Springs via the cloud using Intel’s Nx SDK and community-contributed software components.

Examples of promising, highly scalable algorithms being developed for Loihi include:

  • Constraint satisfaction: Constraint satisfaction problems are everywhere in the real world, from the game of sudoku to airline scheduling to package delivery planning. They require evaluating a large number of potential solutions to identify the one or few that satisfy specific constraints. Loihi can accelerate such problems by exploring many different solutions in parallel at high speed; a small CPU-based sketch of this problem class follows this list.
  • Searching graphs and patterns: Every day, people search graph-based data structures to find optimal paths and closely matching patterns, for example to obtain driving directions or to recognize faces. Loihi has shown the ability to rapidly identify the shortest paths in graphs and perform approximate image searches.
  • Optimization problems: Neuromorphic architectures can be programmed so that their dynamic behavior over time mathematically optimizes specific objectives. This behavior may be applied to solve real-world optimization problems, such as maximizing the bandwidth of a wireless communication channel or allocating a stock portfolio to minimize risk at a target rate of return.
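To ground the constraint-satisfaction item above, here is a tiny map-coloring problem solved by brute force on a CPU. It illustrates the problem class only; a neuromorphic solver would explore candidate solutions in parallel rather than enumerating them one by one.

```python
# Toy constraint-satisfaction problem (map coloring), solved by brute force on
# a CPU. This shows the problem class, not a neuromorphic implementation.
from itertools import product

regions = ["A", "B", "C", "D"]
borders = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]  # adjacent pairs
colors = ["red", "green", "blue"]

def satisfies(assignment):
    # Constraint: no two bordering regions share a color.
    return all(assignment[x] != assignment[y] for x, y in borders)

solutions = [dict(zip(regions, combo))
             for combo in product(colors, repeat=len(regions))
             if satisfies(dict(zip(regions, combo)))]
print(f"{len(solutions)} valid colorings; first: {solutions[0]}")
```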

About Neuromorphic Computing: Traditional general-purpose processors, like CPUs and GPUs, are particularly skilled at tasks that are difficult for humans, such as highly precise mathematical calculations. But the role and applications of technology are expanding. From automation to AI and beyond, there is a rising need for computers to operate more like humans, processing unstructured and noisy data in real time, while adapting to change. This challenge motivates new and specialized architectures.

Neuromorphic computing is a complete rethinking of computer architecture from the bottom up. The goal is to apply the latest insights from neuroscience to create chips that function less like traditional computers and more like the human brain. Neuromorphic systems replicate the way neurons are organized, communicate and learn at the hardware level. Intel sees Loihi and future neuromorphic processors defining a new model of programmable computing to serve the world’s rising demand for pervasive, intelligent devices.

More Context: Neuromorphic Computing (Press Kit) | Intel Labs (Press Kit)


Computers That Smell: Intel’s Neuromorphic Chip Can Sniff Out Hazardous Chemicals

A close-up photo shows Loihi, Intel’s neuromorphic research chip. Pohoiki Beach, an Intel neuromorphic system introduced in July 2019, comprises 64 of these Loihi chips. (Credit: Tim Herman/Intel Corporation)

What’s New: In a joint paper published in Nature Machine Intelligence, researchers from Intel Labs and Cornell University demonstrated the ability of Intel’s neuromorphic research chip, Loihi, to learn and recognize hazardous chemicals in the presence of significant noise and occlusion. Loihi learned each odor with just a single sample, without disrupting its memory of previously learned scents. It demonstrated superior recognition accuracy compared with conventional state-of-the-art methods, including a deep learning solution that required 3,000 times more training samples per class to reach the same level of classification accuracy.

“We are developing neural algorithms on Loihi that mimic what happens in your brain when you smell something. This work is a prime example of contemporary research at the crossroads of neuroscience and artificial intelligence and demonstrates Loihi’s potential to provide important sensing capabilities that could benefit various industries.”
–Nabil Imam, senior research scientist in Intel’s Neuromorphic Computing Lab

Intel Labs’ Nabil Imam holds a Loihi neuromorphic test chip in his Santa Clara, California, neuromorphic computing lab. (Credit: Walden Kirsch/Intel Corporation)

About the Research: Using a neural algorithm derived from the architecture and dynamics of the brain’s olfactory circuits, researchers from Intel and Cornell trained Intel’s Loihi neuromorphic research chip to learn and recognize the scents of 10 hazardous chemicals. To do so, the team used a dataset consisting of the activity of 72 chemical sensors in response to these smells and configured the circuit diagram of biological olfaction on Loihi. The chip quickly learned the neural representation of each of the smells and recognized each odor, even when significantly occluded, demonstrating a promising future for the intersection of neuroscience and artificial intelligence.
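For intuition about one-shot learning from sensor data, here is a loose nearest-prototype baseline: store a single sample per odor and classify new readings by distance. The readings are synthetic, and this plain baseline is not the paper’s neuromorphic algorithm.

```python
# Loose illustration of one-shot recognition from chemical-sensor readings:
# one stored prototype per odor, classification by nearest prototype. A plain
# nearest-neighbor baseline, not the paper's neuromorphic algorithm.
import numpy as np

rng = np.random.default_rng(1)
N_SENSORS = 72                      # matches the dataset's 72 chemical sensors
prototypes = {f"chemical_{i}": rng.normal(size=N_SENSORS) for i in range(10)}

def classify(reading):
    # Return the odor whose stored prototype is closest to the reading.
    return min(prototypes, key=lambda k: np.linalg.norm(prototypes[k] - reading))

noisy = prototypes["chemical_3"] + rng.normal(scale=0.3, size=N_SENSORS)
print(classify(noisy))              # -> chemical_3 despite the added noise
```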

More Context: How a Computer Chip Can Smell Without a Nose | Nature Machine Intelligence | Neuromorphic Computing at Intel | Intel Labs


Namaste to AI: Yoga App Among 10 Winners of Hackster.io Jetson Competition

The developers behind MixPose, a yoga posture-recognizing application, aim to improve your downward dog and tree pose positions with a little nudge from AI.

MixPose enables yoga teachers to broadcast live-streaming instructional videos that include skeletal lines to help students better understand the angles of postures. It also enables students to capture their skeletal outlines and share them in live class settings for virtual feedback.

“Our goal is to create and enhance the connections between yoga instructors and students, and we believe using a Twitch-like streaming class is an innovative way to accomplish that,” said Peter Ma, 36, a co-founder of MixPose.

MixPose’s streaming video platform can be broadcast from a Jetson Nano. The live stream can then be viewed on Android TV and mobile phones.

On Tuesday, the group was among 10 teams awarded top honors in the AI at the Edge Challenge on Hackster.io, which launched in October. Winners competed for a share of NVIDIA supercomputing prizes, as well as a trip to our Silicon Valley headquarters.

Hackster.io is an online community of developers, engineers and hobbyists who work on hardware projects. To date, it’s seen more than 1.3 million members across 150 countries working on more than 21,000 open source projects and 240 company platforms.

MixPose, based in San Francisco, taps PoseNet pose estimation networks powered by Jetson Nano to run inference on yoga positions, allowing teachers and students to engage remotely based on the AI pose estimation. The team is developing networks for different yoga poses using the JetPack SDK, CUDA Toolkit and cuDNN.
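As a hint of the geometry behind those skeletal lines, here is a sketch that computes a joint angle from three 2D keypoints of the kind PoseNet outputs per frame. The coordinates are invented.

```python
# Sketch: a joint angle from three 2-D pose keypoints, the kind of geometry a
# skeletal overlay can surface. Coordinates are made up; PoseNet would supply
# them per frame.
import numpy as np

def joint_angle(a, b, c):
    """Angle at keypoint b (degrees) formed by segments b->a and b->c."""
    a, b, c = map(np.asarray, (a, b, c))
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

hip, knee, ankle = (210, 300), (230, 420), (225, 540)   # hypothetical pixels
print(f"knee angle: {joint_angle(hip, knee, ankle):.1f} degrees")
```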

Four Prized Categories

MixPose took first place in the Artificial Intelligence of Things (AIoT) category, one of four project areas in the competition, which drew 2,542 registrants from 35 countries and 79 completed submissions with code shared with the community.

MixPose demos its streaming app

The team also landed third place in AIoT for its Jetson Clean Water AI entry, which uses object detection to spot water contamination.

“It can determine whether the water is clean for drinking or not,” said 27-year-old MixPose co-founder Sarah Han.

Contest categories also included Autonomous Machines and Robotics, Intelligent Video Analytics and Smart Cities. First, second and third place winners in each took home awards.

RNNs for Reading

NVIDIA gave a single award in the AI for Social Good category, which Palo Alto High School junior Andrew Bernas took home for his Reading Eye for the Blind with NVIDIA Jetson Nano. It’s a text-to-voice device for the visually impaired that uses CNNs and RNNs.

“Part of the inspiration was creating a solution for my grandmother and other people with vision loss to be able to read,” said Bernas.

Andrew Bernas’ text-to-speech device for the visually impaired

AI Whacks Weeds

First-place winners also included a team from India behind the weed removal robot Nindamani, in the Autonomous Machines and Robotics category.

Nindamani’s AI-driven weed removal robot

Traffic Gets Moving

And a duo working on adaptive traffic controls took a top award in the Intelligent Video Analytics and Smart Cities category for networks that help improve traffic flow.

Chathuranga Liyanage and Sandali Jayaweera built AI-powered visual traffic aids for drivers.

 

NVIDIA Jetson Nano makes adding AI easier and more accessible to makers, self-taught developers and embedded tech enthusiasts. Learn more about Jetson Nano and view more projects to get started.


A Taste for Acceleration: DoorDash Revs Up AI with GPUs

When it comes to bringing home the bacon — or sushi or quesadillas — DoorDash is shifting into high gear, thanks in part to AI.

The company got its start in 2013, offering deals such as delivering pad thai to Stanford University dorm rooms. Today with a phone tap, customers can order a meal from more than 310,000 vendors — including Chipotle, Walmart and Wingstop — across 4,000 cities in the U.S., Canada and Australia.

Part of its secret sauce is a digital logistics engine that connects its three-sided marketplace of merchants, customers and independent contractors the company calls Dashers. Each community taps into the platform for different reasons.

Using a mix of machine-learning models, the logistics engine serves personalized restaurant recommendations and delivery-time predictions to customers who want on-demand access to their local businesses. Meanwhile, it assigns Dashers to orders and sorts through trillions of options to find their optimal routes while calculating delivery prices dynamically.
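As a toy illustration of the assignment side of such an engine, here is a greedy nearest-Dasher sketch. The names and coordinates are invented, and DoorDash’s real system optimizes over far richer signals than straight-line distance.

```python
# Toy greedy Dasher assignment by pickup distance. Names and coordinates are
# invented; a real dispatch engine weighs many more signals.
import math

dashers = {"d1": (37.77, -122.42), "d2": (37.80, -122.41)}
orders = {"o1": (37.78, -122.42), "o2": (37.79, -122.40)}

def dist(p, q):
    # Straight-line distance in degrees; a crude stand-in for travel time.
    return math.hypot(p[0] - q[0], p[1] - q[1])

assignments = {}
free = dict(dashers)
for order, loc in orders.items():
    best = min(free, key=lambda d: dist(free[d], loc))  # nearest free Dasher
    assignments[order] = best
    free.pop(best)
print(assignments)   # -> {'o1': 'd1', 'o2': 'd2'}
```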

The work requires a complex set of related algorithms embedded in numerous machine-learning models, crunching ever-changing data flows. To accelerate the process, DoorDash has turned to NVIDIA GPUs in the cloud to train its AI models.

Training in One-Tenth the Time

Moving from CPUs to GPUs for AI training netted DoorDash a 10x speed-up. Migrating from single to multiple GPUs accelerated its work another 3x, said Gary Ren, a machine-learning engineer at DoorDash who will describe the company’s approach to AI in an online talk at GTC Digital.

“Faster training means we get to try more models and parameters, which is super critical for us — faster is always better for training speeds,” Ren said.

“A 10x training speed-up means we spin up cloud clusters for a tenth the time, so we get a 10x reduction in computing costs. The impact of trying 10x more parameters or models is trickier to quantify, but it gives us some multiple of increased overall business performance,” he added.

Making Great Recommendations

So far, DoorDash has publicly discussed one of its deep-learning applications: its recommendation engine, which has been in production for about two years. Recommendations are becoming more important as companies such as DoorDash realize consumers don’t always know what they’re looking for.

Potential customers may “hop on our app and explore their options so — given our huge number of merchants and consumers — recommending the right merchants can make a difference between getting an order or the customer going elsewhere,” he said.

Because its recommendation engine is so important, DoorDash continually fine-tunes it. For example, in its engineering blogs, the company describes how it crafts n-dimensional embedding vectors for each merchant to find nuanced similarities among vendors.
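Here is a minimal sketch of the similarity step: cosine similarity between merchant embedding vectors. The vectors below are random stand-ins for embeddings that would be learned from order history.

```python
# Sketch: cosine similarity between merchant embeddings. The vectors here are
# random stand-ins; real embeddings would be learned from order history.
import numpy as np

rng = np.random.default_rng(0)
merchants = ["Taqueria Uno", "Thai Garden", "Noodle Bar"]   # hypothetical names
emb = {m: rng.normal(size=64) for m in merchants}           # 64-d embeddings

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

query = "Thai Garden"
ranked = sorted((m for m in merchants if m != query),
                key=lambda m: cosine(emb[query], emb[m]), reverse=True)
print(f"merchants most similar to {query}: {ranked}")
```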

It also adopts so-called multi-level, multi-armed bandit algorithms that let AI models simultaneously exploit choices customers have liked in the past and explore new possibilities.
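For a feel of that explore-exploit trade-off, here is a one-level epsilon-greedy bandit. The arm names and reward rates are invented, and the multi-level variants DoorDash describes layer additional structure on top of this basic idea.

```python
# Sketch: epsilon-greedy, the simplest multi-armed bandit. Arm names and
# reward probabilities are invented for illustration.
import random

arms = ["carousel_A", "carousel_B", "carousel_C"]   # hypothetical UI choices
true_rates = {"carousel_A": 0.10, "carousel_B": 0.30, "carousel_C": 0.20}
counts = {a: 0 for a in arms}
rewards = {a: 0.0 for a in arms}
EPSILON = 0.1   # fraction of traffic reserved for exploration

def choose():
    if random.random() < EPSILON:
        return random.choice(arms)   # explore a random arm
    # Exploit the arm with the best observed mean (unseen arms default to 0).
    return max(arms, key=lambda a: rewards[a] / counts[a] if counts[a] else 0.0)

for _ in range(10_000):              # simulated click feedback
    arm = choose()
    counts[arm] += 1
    rewards[arm] += 1.0 if random.random() < true_rates[arm] else 0.0

best = max(arms, key=lambda a: rewards[a] / max(counts[a], 1))
print(f"learned best arm: {best}")   # converges to carousel_B
```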

Speaking of New Use Cases

While it optimizes its recommendation engine, DoorDash is exploring new AI use cases, too.

“There are several areas where conversations happen between consumers and Dashers or support agents. Making those conversations quick and seamless is critical, and with improvements in NLP (natural-language processing) there’s definitely potential to use AI here, so we’re exploring some solutions,” Ren said.

NLP is one of several use cases that will drive future performance needs.

“We deal with data from the real world and it’s always changing. Every city has unique traffic patterns, special events and weather conditions that add variance — this complexity makes it a challenge to deliver predictions with high accuracy,” he said.

The company’s growing business presents other challenges, such as making recommendations for first-time customers and planning routes in new cities it enters.

“As we scale, those boundaries get pushed — our inference speeds are good enough today, but we’ll need to plan for the future,” he added.


Meet Your Match: AI Finds the Right Clinical Trial for Cancer Patients

Clinical trials need a matchmaker.

Healthcare researchers and pharmaceutical companies rely on trials to validate new, potentially life-saving therapies for cancer and other serious conditions. But fewer than 10 percent of cancer patients participate in clinical trials, and four out of five studies are delayed due to the challenges involved in recruiting participants.

For patients interested in participating in trials, there’s no easy way to determine which they’re eligible for. AI tool Ancora aims to improve the matchmaking process, using natural language processing models to pair patients with potential studies.

“This all started because my friend’s parent was diagnosed with stage 3 cancer,” said Danielle Ralic, founder and CEO of Intrepida, the Switzerland-based startup behind Ancora. “I knew there were trials out there, but when I tried to help them find options, it was so hard.”

The U.S. National Institutes of Health maintains a database of hundreds of thousands of clinical trials. Each study lists a detailed series of text-based requirements, known as inclusion and exclusion criteria, for trial participants.

While users can sort by condition and basic demographics, there may still be hundreds of studies to manually sort through — a time-consuming process of weeding through complex medical terminology.

Intrepida’s customized natural language processing models do the painstaking work of interpreting these text-heavy criteria for patients and physicians, processing new studies on NVIDIA GPUs. The studies listed in the Ancora tool are updated weekly. Users can fill out a simple, targeted questionnaire to shortlist suitable clinical trials and receive alerts for new potential studies.

“We assessed what 20 questions we could ask that can most effectively knock a patient’s list down from, for example, 250 possible trials to 10,” Ralic said. The platform also shows patients useful information to help decide on a trial, such as how the treatment will be administered, and if it’s been approved in the past to treat other conditions.
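As a sketch of that shortlisting step, the snippet below filters a toy trial list against questionnaire answers. The records and field names are invented; Ancora’s real system parses free-text eligibility criteria with NLP models.

```python
# Sketch: shortlisting trials with questionnaire answers. Trial records and
# field names are invented; the real system parses free-text criteria with NLP.
trials = [
    {"id": "NCT0001", "condition": "breast cancer", "min_age": 18, "stage": {2, 3}},
    {"id": "NCT0002", "condition": "breast cancer", "min_age": 50, "stage": {1, 2}},
    {"id": "NCT0003", "condition": "lung cancer",   "min_age": 18, "stage": {3, 4}},
]

answers = {"condition": "breast cancer", "age": 42, "stage": 3}

shortlist = [t["id"] for t in trials
             if t["condition"] == answers["condition"]
             and answers["age"] >= t["min_age"]
             and answers["stage"] in t["stage"]]
print(shortlist)   # -> ['NCT0001']
```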

Intrepida’s tool is currently available for breast and lung cancer patients. A physician version will soon be available to help doctors find trials for their patients. The company is a member of the NVIDIA Inception virtual accelerator program, which provides go-to-market support for AI startups — including NVIDIA Deep Learning Institute credits, marketing support and preferred pricing on hardware.

Finding the Perfect Match

Intrepida founder Danielle Ralic presented on AI and drug discovery at last year’s Annual Meeting of the Biophysical Society.

Though physicians are the primary way patients hear about clinical trials, fewer than a quarter of patients actually hear about trials as an option from their doctors, who have limited time and resources to keep track of existing trials.

Ralic recalls being surprised to meet a stage 4 cancer survivor while hiking in Patagonia and learning that the man had participated in a clinical trial for a breakthrough new drug.

“I asked him, how did you know about the trial? And he said he found out through a relative of his wife’s friend. That’s not how this should work,” Ralic said.

For physicians and patients, a better and more democratized way to discover clinical trials could lead to life-saving results. It could also speed up the research cycle by improving trial enrollment rates, helping pharmaceutical companies more quickly validate new drugs and bring them to market.

Ralic says that, as members of the NVIDIA Inception program, she and the Intrepida team were able to meet with other AI startups and with NVIDIA developers at the GPU Technology Conference held in Munich in 2018.

“We joined the program because, as a company that was working with NVIDIA GPUs already, we wanted to develop more sophisticated natural language models,” she said. “There’s been a lot to learn from NVIDIA team members and other Inception startups.”

Using NVIDIA GPUs has enabled Intrepida to shrink training time for one epoch from 20 minutes to just 12 seconds, a 100x speedup.

Diversifying the Data

A female startup founder in an industry that to date has been dominated by men, Ralic says more diversity is key to improving the healthcare industry as a whole — and especially clinical trials.

“Healthcare is holistic. It involves so many different types of people and knowledge,” she said. “Without a diversity of perspectives, we can never address the problems the healthcare industry has.”

The data backs her up. Clinical trial participants in the United States skew overwhelmingly white and male. The lack of diversity in trials can lead to critical errors in drug dosage.

For example, in 2013, the U.S. Food and Drug Administration mandated that doses for several sleeping aids be cut in half for women. Because women metabolize these drugs differently, the original doses increased their risk of getting in a car accident the morning after taking a sleeping pill.

“If we don’t have a diverse trial population, we won’t know whether a patient of a different gender or ethnicity will react differently to a new drug,” Ralic said. “If we did it right from the start, we could improve how we prescribe medicine to people, because we’re all different.”
