Germany’s BrighterAI Named Hottest Startup at GTC Europe

On the tenth day of the tenth month in Munich, there were ten.

Deeply ambitious AI startups, that is, going head to head at GTC Europe. They were vying for the title of Europe’s Hottest Startup in a series of lightning-fast pitches and Q&A with a panel of startup specialists at the show’s Inception Awards.

With some 400 tech execs, developers and academics looking on, Germany’s BrighterAI took the bragging rights, along with a tidy prize valued at about $200,000 in cash and an NVIDIA DGX Station personal AI supercomputer. At news of the victory, much of the company’s young 12-member team, who had made the 600-kilometer trip from Berlin, leapt from their seats hooting and pumping fists.

BrighterAI’s co-founder and CEO Marian Gläser had just three minutes to describe their work, which uses deep neural networks to enable businesses to store and process images and video in a way that complies with GDPR and other increasingly important privacy laws and practices. The company creates perception layers from camera inputs to anonymize images in a natural way, while also stripping out distortions from weather and other factors. He projected revenue from the two-year-old company to grow to $70 million in five years.

All 10 Inception finalists were focused on applying AI to specific vertical industries – healthcare, financial analysis, manufacturing optimization and call-center operations, among others.

In presenting the award to BrighterAI, Jensen Huang, who admitted to being far less polished when he founded NVIDIA 25 years ago, said that the next wave of computing will be focused on such companies.

“The revolution of the past was inventing computing tools, but the revolution of now is applying computing technology to solve the great challenges of humanity,” he said. “The last 35 years were about the computer industry, the next 35 years is about your industries.”

The finalists had been whittled down from a list of more than 140 entrants from the 1,600 European AI startups in NVIDIA’s Inception program, a virtual incubator for AI companies that in total has more than 3,000 members.

Other finalists included:

  • axial3D (Ireland): Produces medical 3D printing software to advance the standard and efficiency of surgical intervention.
  • ATLAN Space (Morocco): Makes drones smarter and enables them to monitor vast areas, identify risks and make smart decisions.
  • Axyon AI (Italy): Uses its deep learning platform to augment work in financial areas from credit risk and wealth management to churn-rate prediction and fraud detection.
  • Conundrum (Russia): Uses AI and machine learning to predict failures, malfunctions and quality issues in complex industrial equipment and processes.
  • Corti Labs (Denmark): Uses deep learning to help medical personnel in call centers and other settings to make critical decisions when time is of the essence. It provides accurate diagnostic support to emergency services, allowing patients to get the right treatment faster.
  • IPT (Germany): Deploys its software to enable engineers to combine their own experience, AI and data to optimize manufacturing processes.
  • RetinAI (Switzerland): Develops preventative treatment for such eye diseases as age-related macular degeneration, diabetic retinopathy and glaucoma.
  • Serket (Netherlands): Uses AI to assist farmers in tracking and monitoring the health of livestock. It’s named for the Egyptian goddess of nature, animals, medicine and magic.
  • TheraPanacea (France): Uses AI, high performance computing, physics-based simulation and medical imaging to improve the efficiency and accuracy of radiotherapy treatment planning.

Special thanks to our Inception Awards sponsors: Infineon, MD Elektronik and Qwant.


NVIDIA Launches GPU-Acceleration Platform for Data Science, Volvo Selects NVIDIA DRIVE

Big data is bigger than ever. Now, thanks to GPUs, it will be faster than ever, too.

NVIDIA founder and CEO Jensen Huang took the stage Wednesday in Munich to introduce RAPIDS, a platform for accelerating “big data, for big industry, for big companies, for deep learning,” as he told a packed house of more than 3,000 developers and executives gathered for the three-day GPU Technology Conference in Europe.

Already backed by Walmart, IBM, Oracle, Hewlett Packard Enterprise and some two dozen other partners, the open-source GPU-acceleration platform promises 50x speedups on the NVIDIA DGX-2 AI supercomputer compared with CPU-only systems, Huang said.

The result is an invaluable tool as companies in every industry look to harness big data for a competitive edge, Huang explained as he detailed how RAPIDS will turbo-charge the work of the world’s data scientists.

“We’re accelerating things by 1000x in the domains we focus on,” Huang said. “When we accelerate something 1000x in ten years, if your demand goes up 100 times your cost goes down by 10 times.”
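Unpacking the arithmetic behind that claim (a simplification, assuming cost scales with the compute time needed to serve demand):

\[ \text{relative cost} = \frac{\text{demand growth}}{\text{speedup}} = \frac{100}{1000} = \frac{1}{10} \]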

Over the course of a keynote packed with news and demos, Huang detailed how NVIDIA is bringing that 1000x acceleration to bear on challenges ranging from autonomous vehicles to robotics to medicine.

Among the highlights: Volvo Cars has selected the NVIDIA DRIVE AGX Xavier computer for its next generation of vehicles; King’s College London is adopting NVIDIA’s Clara medical platform; and startup Oxford Nanopore will use Xavier to build the world’s first handheld, low-cost, real-time DNA sequencer.

Big Gains for GPU Computing

Huang opened his talk by detailing the eye-popping numbers driving the adoption of accelerated computing — gains in computing power of 1,000x over the past 10 years.

“In ten years’ time, while Moore’s law has ended, our computing approach has resulted in a 1000x increase in computing performance,” Huang said. “It’s now recognized as the path forward.”

Huang also spoke about how NVIDIA’s new Turing architecture — launched in August — brings AI and computer graphics together.

Turing combines support for next-generation rasterization, real-time ray-tracing and AI to drive big performance gains in gaming with NVIDIA GeForce RTX GPUs, visual effects with new NVIDIA Quadro RTX pro graphics cards, and hyperscale data centers with the new NVIDIA Tesla T4 GPU, the world’s first universal deep learning accelerator.

One Small Step for Man…

With a stunning demo, Huang showcased how our latest NVIDIA RTX GPUs — which enable real-time ray-tracing for the first time — allowed our team to digitally rebuild the scene around one of the lunar landing’s iconic photographs, that of astronaut Buzz Aldrin clambering down the lunar module’s ladder.

The demonstration puts to rest the assertion that the photo can’t be real because Buzz Aldrin is lit too well as he climbs down to the surface of the moon while in the shadow of the lunar lander. Instead the simulation shows how the reflectivity of the surface of the moon accounts for exactly what’s seen in the controversial photo.

“This is the benefit of NVIDIA RTX, using this type of rendering technology we can simulate light physics and things are going to look the way things should look,” Huang said.

…One Giant Leap for Data Science

Bringing GPU computing back down to Earth, Huang announced a plan to accelerate the work of data scientists at the world’s largest enterprises.

RAPIDS open-source software gives data scientists facing complex challenges a giant performance boost. These challenges range from predicting credit card fraud to forecasting retail inventory and understanding customer buying behavior, Huang explained.

Analysts estimate the server market for data science and machine learning at $20 billion. Together with scientific analysis and deep learning, this pushes up the value of the high performance computing market to approximately $36 billion.

Developed over the past two years by NVIDIA engineers in close collaboration with key open-source contributors, RAPIDS offers a suite of open-source libraries for GPU-accelerated analytics, machine learning and, soon, data visualization.
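For readers who want a sense of what that suite looks like in practice, here is a minimal sketch of a RAPIDS-style workflow using the cuDF and cuML libraries. The CSV file, column names and model choice are illustrative assumptions, not part of NVIDIA’s announcement, and the exact APIs vary by release.

```python
import cudf
from cuml.ensemble import RandomForestClassifier

# Load a (hypothetical) transactions file straight into GPU memory,
# using cuDF's pandas-like API.
df = cudf.read_csv("transactions.csv")

# Split features from a made-up fraud label.
X = df.drop(columns=["is_fraud"]).astype("float32")
y = df["is_fraud"].astype("int32")

# Train a GPU-accelerated random forest with a scikit-learn-like API.
clf = RandomForestClassifier(n_estimators=100, max_depth=10)
clf.fit(X, y)

# Score the same frame; the results stay on the GPU as a cuDF column.
df["fraud_score"] = clf.predict(X)
print(df["fraud_score"].value_counts())
```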

RAPIDS has already won support from tech leaders such as Hewlett Packard Enterprise, IBM and Oracle as well as open-source pioneers such as Databricks and Anaconda, Huang said.

“We have integrated RAPIDS into basically the world’s data science ecosystem, and companies big and small, their researchers can get into machine learning using RAPIDS and be able to accelerate it and do it quickly, and if they want to take it as a way to get into deep learning, they can do so,” Huang said.

Bringing Data to Your Drive

Huang also outlined the strides NVIDIA is making with automakers, announcing that Swedish automaker Volvo has selected the NVIDIA DRIVE AGX Xavier computer for its vehicles, with production starting in the early 2020s.

DRIVE AGX Xavier — built around Xavier, the world’s most advanced SoC — is a highly integrated AI car computer that enables Volvo to streamline development of self-driving capabilities while reducing total cost of development and support.

The initial production release will deliver Level 2+ automated driving features, going beyond traditional advanced driver assistance systems. The companies are working together to develop automated driving capabilities, uniquely integrating 360-degree surround perception and a driver monitoring system.

The NVIDIA-based computing platform will enable Volvo to implement new connectivity services, energy management technology, in-car personalization options, and autonomous drive technology.

It’s a vision that’s backed by a growing number of automotive companies, with Huang announcing Wednesday that, in addition to Volvo Cars, Volvo Trucks, tier-one automotive components supplier Continental, and automotive technology companies Veoneer and Zenuity have all adopted NVIDIA DRIVE AGX.

Huang also showed the audience a video of how, this month, an autonomous NVIDIA test vehicle, nicknamed BB8, completed an 80-kilometer (50-mile) loop through busy Silicon Valley traffic without the safety driver needing to take control — even once.

Running on the NVIDIA DRIVE AGX Pegasus AI supercomputer, the car handled highway entrance and exits and numerous lane changes entirely on its own.

From Hospitals Serving Millions to Medicine Tailored Just for You

AI is also driving breakthroughs in healthcare, Huang explained, detailing how NVIDIA Clara will harness GPU computing for everything from medical scanning to robotic surgery.

He also announced a partnership with King’s College London to bring AI tools to radiology and deploy them across three hospitals serving 8 million patients in the U.K.

In addition, he announced that NVIDIA Clara AGX — which brings the power of Xavier to medical devices — has been selected by Oxford Nanopore to power its personal DNA sequencer MinION, which promises to drive down the cost and drive up the availability of medical care that’s tailored to a patient’s DNA.

A New Computing Era

Huang finished his talk by recapping the new NVIDIA platforms being rolled out — the Turing GPU architecture; the RAPIDS data science platform; and DRIVE AGX for autonomous machines of all kinds.

Then he left the audience with a stunning demo of a nameless hero being prepared for action by his robotic assistants — and returning to catch those robots bopping along to K.C. and the Sunshine Band before joining in the fun. Huang then came back on stage with a quick caveat.

“And I forgot to tell you everything was done in real time,” Huang said. “That was not a movie.”


First Mover: Germany’s DFKI to Deploy Europe’s Initial DGX-2 Supercomputer

DFKI, Germany’s leading research center for innovative commercial software technology based on AI, is the first group in Europe to adopt the NVIDIA DGX-2 AI supercomputer.

The research center will use the system to quickly analyze large-scale satellite and aerial imagery using image processing and deep neural network training, as well as for various deep learning experiments.

One experiment aims to develop new applications that will support rescuers in disaster-response scenarios by enabling them to make faster decisions. The resulting applications could help answer important questions, such as which areas a disaster has affected and whether infrastructure remains accessible during events such as floods.

Another highly topical research area is measuring and understanding convolutional neural networks (CNNs) by quantifying the amount of input they let in. This technology is breaking new ground in the area of neural network understanding, opening a new way to reason about, debug and interpret results.

DGX-2 integrates 16 NVIDIA Tesla V100 Tensor Core GPUs connected via NVIDIA NVSwitch, an AI network fabric that delivers throughput of 2.4TB per second.

“The analysis of big amounts of data — for example, large-scale aerial and satellite imagery — requires a powerful solution to process and train these deep neural networks,” said Andreas Dengel, head of the research department Smart Data & Knowledge Services at DFKI in Kaiserslautern. “The increased memory footprint of the DGX-2, enabled by the fully connected GPUs based on the NVSwitch architecture, will play a key role for us in improving the development of effective AI applications and expand the unique infrastructure of our Deep Learning Competence Center.”

Founded in 1988, DFKI previously used the NVIDIA DGX-1 for various projects, including its DeepEye project. To help estimate and forecast damage from natural disasters, the system trained multiple CNN models to extract relevant information from text, images and metadata on social media.

For more NVIDIA developments at #GTC18, follow @NVIDIAEU.

 


King’s College London, NVIDIA Build Gold Standard for AI Infrastructure in the Clinic

King’s College London, a leader in medical research, is Europe’s first clinical partner to adopt NVIDIA DGX-2 and the NVIDIA Clara platform. KCL is deploying NVIDIA AI solutions to rethink the practice of radiology and pathology in a quest to better serve 8 million patients in the U.K.’s National Health Service.

NVIDIA and KCL will co-locate researchers and engineers with clinicians from major London hospitals that are part of the citywide network of NHS trusts, including King’s College Hospital, Guy’s and St Thomas’, and South London and Maudsley. This trio of researchers, technologists and clinicians will accelerate the discovery of critical data strategies, target the right AI problems and speed deployment in the clinic.

“This is a huge opportunity to transform patient outcomes by applying the extraordinary capabilities of AI to ultimately make diagnoses earlier and more accurately than in the past,” said Professor Sebastien Ourselin, head of the School of Biomedical Engineering and Imaging Sciences at KCL. “This partnership will combine our expertise in medical imaging and health records with NVIDIA’s technology to improve patient care across the U.K.”

First up is unleashing the power of DGX-2 on the advanced imaging and analytics challenges at KCL. The DGX-2 system’s large memory and 2 petaflops of computing prowess make it perfect for training on large 3D datasets in minutes instead of days.

Training at scale is tricky, but the DGX-2 can enhance medical imaging AI tools like NiftyNet, a TensorFlow-based open-source convolutional neural network platform for research in medical image analysis and image-guided therapy developed at KCL.

If infrastructure and tools are at the heart of developing AI applications, data is the blood that makes the heart do something magical. Federated learning, the ability to learn from data that is not centralized, is one example.

Working with KCL’s clinical network to crack the technical and data governance issues of federated learning could lead to breakthroughs such as more precisely classifying stroke and neurological impairments to recommend the best treatment or automatic biomarker determination.
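Federated averaging is one common recipe for learning from data that never leaves each site: every hospital trains locally and only model updates are pooled. The sketch below is a generic NumPy illustration under that assumption — a toy linear model and synthetic per-hospital data, not KCL’s or NVIDIA’s implementation.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's gradient-descent pass on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # squared-error gradient
        w -= lr * grad
    return w

def federated_round(weights, sites):
    """Average locally updated weights, weighted by dataset size."""
    updates = [local_update(weights, X, y) for X, y in sites]
    sizes = np.array([len(y) for _, y in sites], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

# Three hospitals with private, synthetic datasets.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
sites = []
for n in (200, 500, 120):
    X = rng.normal(size=(n, 2))
    sites.append((X, X @ true_w + rng.normal(scale=0.1, size=n)))

w = np.zeros(2)
for _ in range(20):              # 20 communication rounds
    w = federated_round(w, sites)
print("learned weights:", w)     # approaches [2, -1]
```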

NVIDIA Clara is the computing platform for deploying breakthrough applications like these. Like its namesake, Clara Barton, the platform is meant to help people. It’s universal, scalable and accessible to the applications that need to run in clinical workflows.

From development to deployment, NVIDIA and KCL plan to streamline AI while building the necessary tools, infrastructure and best practices to empower the entire clinical ecosystem.


New Intel Vision Accelerator Solutions Speed Deep Learning and Artificial Intelligence on Edge Devices

What’s New: Today, Intel unveiled its family of Intel® Vision Accelerator Design Products targeted at artificial intelligence (AI) inference and analytics performance on edge devices, where data originates and is acted upon. The new acceleration solutions come in two forms: one that features an array of Intel® Movidius™ vision processors and one built on the high-performance Intel® Arria® 10 FPGA. The accelerator solutions build on the OpenVINO™ software toolkit that provides developers with improved neural network performance on a variety of Intel products and helps them further unlock cost-effective, real-time image analysis and intelligence within their Internet of Things (IoT) devices.

“Until recently, businesses have been struggling to implement deep learning technology. For transportation, smart cities, healthcare, retail and manufacturing industries, it takes specialized expertise, a broad range of form factors and scalable solutions to make this happen. Intel’s Vision Accelerator Design Products now offer businesses choice and flexibility to easily and affordably accelerate AI at the edge to drive real-time insights.”
–Jonathan Ballon, Intel vice president and general manager, Internet of Things Group

Why This Is Important: The need for intelligence on edge devices has never been greater. As deep learning approaches rapidly replace more traditional computer vision techniques, businesses can unlock rich data from digital video. With Intel Vision Accelerator Design Products, businesses can implement vision-based AI systems to collect and analyze data right on edge devices for real-time decision-making. Advanced edge computing capabilities help cut costs, drive new revenue streams and improve services.

What This Delivers: Combined with Intel Vision products such as Intel CPUs with integrated graphics, these new edge accelerator cards give businesses choice and flexibility in price, power and performance to meet specific requirements from camera to cloud. Intel’s Vision Accelerator Design Products will build upon growing industry adoption of the OpenVINO toolkit:

  • Smart, Safe Cities: With the OpenVINO toolkit, stadium security provider AxxonSoft* used existing installed-base hardware to achieve 9.6 times the performance on standard Intel® Core™ i7 processors and 3.1 times the performance on Intel® Xeon® Scalable processors in order to ensure the safety of 2 million visitors to the FIFA 2018 World Cup.*

Who Uses This: Leading companies such as Dell*, Honeywell* and QNAP* are planning products based on Intel Vision Accelerator Designs. Additional partners and customers — from equipment builders and solution developers to cloud service providers — support these products.

More Context: Intel’s Vision Accelerator Design Products | Customer Quotes | Video | Infographic

How This Works: Intel Vision Accelerator Design Products work by offloading AI inference workloads to purpose-built accelerator cards that feature either an array of Intel Movidius Vision Processing Units, or a high-performance Intel Arria 10 FPGA. Deep learning inference accelerators scale to the needs of businesses using Intel Vision solutions, whether they are adopting deep learning AI applications in the data center, in on-premise servers or inside edge devices. With the OpenVINO toolkit, developers can easily extend their investment in deep learning inference applications on Intel CPUs and integrated GPUs to these new accelerator designs, saving time and money.
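As a rough illustration of that offload model, here is a minimal sketch using the OpenVINO Python inference-engine API. The model files and the device name are placeholders, and the exact calls and supported device strings vary by OpenVINO release, so treat this as the shape of the workflow rather than Intel’s reference code.

```python
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()

# Load an IR-format network produced by the OpenVINO Model Optimizer.
# "model.xml" / "model.bin" are placeholder file names.
net = ie.read_network(model="model.xml", weights="model.bin")

# Pick a target device: "CPU" for Intel CPUs, "MYRIAD" or "HDDL" for
# Movidius VPU-based accelerator designs, or an FPGA plugin where
# available. The choice here is an assumption for illustration.
exec_net = ie.load_network(network=net, device_name="HDDL")

input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))

# Run inference on a dummy frame shaped like the network's input.
shape = net.input_info[input_name].input_data.shape   # e.g. [1, 3, H, W]
frame = np.zeros(shape, dtype=np.float32)
result = exec_net.infer({input_name: frame})
print(result[output_name].shape)
```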

The Small Print:

1Automated product quality data collected by Yumei using a JWIPC® model IX7 ruggedized, fanless edge compute node/industrial PC running an Intel® Core™ i7 CPU with integrated on-die GPU and the OpenVINO SDK, with 16GB of system memory, connected to a 5MP POE Basler* camera model acA 1920-40gc. Together these components, along with the Intel-developed computer vision and deep learning algorithms, provide Yumei factory workers information on product defects in near real time (within 100 milliseconds). Sample size >100,000 production units collected over 6 months in 2018.


Real-Time DNA Sequencing in the Palm of Your Hand

In the vast stretches of iron-red soil fields in much of Africa, cassava plays an essential role in the food supply and livelihoods of over 800 million people. Think rice in Asia and wheat in Europe.

So when two whitefly-borne viruses attack the plant, the consequences to farmers and consumers can be devastating. Timely diagnosis of the cassava pathogen hadn’t been possible. Instead, farmers who suspected their plants were diseased had to destroy their crops.

However, DNA information can help farmers identify which virus the plant has, so they can take the appropriate action.

UK startup Oxford Nanopore Technologies is speeding the discovery of pathogens with its MinION device, a portable, low-cost, real-time DNA and RNA sequencer. The MinION connects to the just-introduced MinIT hand-held AI supercomputer, powered by NVIDIA AGX, and runs sequence analysis in real time, anywhere — even in the field.

“MinIT is the perfect, powerful companion to MinION, the only portable real-time DNA sequencer,” said Gordon Sanghera, CEO of Oxford Nanopore. “As data streams faster from our sequencing devices, GPU technology like NVIDIA’s becomes central to preserving the real-time properties of our DNA sequencing technology.”

MinIT is now being shipped to scientists working across a variety of scientific disciplines. Many are working on healthcare applications, such as the rapid identification of infectious disease, cancer or other diseases. Others are focused on plant or environmental science, or even using the technology in education.

AI for Basecalling

Nanopore sequencing measures tiny ionic currents that pass through nanoscale holes called nanopores. It detects signal changes when DNA passes through these holes. This captured signal produces raw data that requires signal processing to determine the order of DNA bases – known as the “sequence.” This is called basecalling.

This analysis problem is a perfect match for AI, specifically recurrent neural networks. Compared with previous methods, RNNs deliver greater accuracy on time-series data of the kind Oxford Nanopore’s sequencers produce.
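To make that concrete, here is a toy sketch of the general recipe — a recurrent network over the raw current trace, trained with a CTC-style loss to emit bases. It is not Oxford Nanopore’s basecaller; the layer sizes, signal lengths and labels are illustrative assumptions, written in PyTorch.

```python
import torch
import torch.nn as nn

# Toy basecaller: raw nanopore current -> per-timestep base probabilities.
# Classes: CTC blank plus A, C, G, T. Dimensions are purely illustrative.
class ToyBasecaller(nn.Module):
    def __init__(self, hidden=128, num_classes=5):
        super().__init__()
        self.rnn = nn.LSTM(input_size=1, hidden_size=hidden, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, signal):            # signal: (batch, time, 1)
        feats, _ = self.rnn(signal)
        return self.head(feats)           # (batch, time, classes)

model = ToyBasecaller()
ctc = nn.CTCLoss(blank=0)

signal = torch.randn(8, 4000, 1)          # 8 reads, 4,000 current samples
targets = torch.randint(1, 5, (8, 400))   # fake base labels (1..4 = A,C,G,T)
input_lens = torch.full((8,), 4000, dtype=torch.long)
target_lens = torch.full((8,), 400, dtype=torch.long)

log_probs = model(signal).log_softmax(dim=-1)
# CTCLoss expects (time, batch, classes).
loss = ctc(log_probs.transpose(0, 1), targets, input_lens, target_lens)
loss.backward()
print("toy training loss:", float(loss))
```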

Accessibility + Speed = Breakthroughs

Since its launch in 2015, the MinION has put DNA sequencing in the hands of the most curious scientists, some of whom work in the most hard-to-reach places in the world. Examples are frequently posted to Oxford Nanopore’s Twitter feed, @nanopore.

The cassava case study is told in this brief, beautiful video.

Another, posted just this week, describes how MinIT and MinION have been used onboard a marine research vessel in Alaska to perform onboard scientific analyses of seawater. The work focuses on DNA analysis of microbial life in the sea, which helps the researchers understand the marine ecosystem, biodiversity in the ocean and how climate change can affect microorganisms there.

Focused on performance — both speed and accuracy — Oxford Nanopore uses NVIDIA GPUs for data analysis across all of its DNA sequencing devices. With MinIT on NVIDIA AGX, the company is approaching a 10x performance improvement over previous versions to help unlock real-time human and plant genomics. Its benchtop PromethION product is powered by NVIDIA Volta GPUs and can crank out a human genome for under $800.

Learn more about the MinION.


GTC DC: Learn How Washington Plans to Keep the U.S. in Front in the AI Race

The U.S. government spends about $4 trillion a year, and the question every taxpayer seems to ask is: How can we get more, while paying less?

The answer more and more leaders are turning to: AI.

That’s why thousands of agency leaders, congressional staffers, entrepreneurs, developers and media will attend our third annual GTC DC event October 22-24 at the Reagan Center in Washington. GTC DC has quickly become the largest AI event in Washington.

It’s research, not rhetoric, attendees will tell you, that makes D.C. an AI accelerator like no other. The conference is packed with representatives from federal agencies — among them, the National Science Foundation, the National Institutes of Health, and DARPA — that routinely marshal scientific efforts on a scale far beyond that of anywhere else in the world.

Unmatched Research Leadership

These efforts extend deep into the computing industry, with the federal government commissioning the construction of supercomputers with ever more stupendous capabilities. Summit and Sierra, a pair of GPU-powered machines completed this year, represent an investment of $325 million. Summit is easily the world’s fastest.

And while Washington’s leaders are transforming AI, AI is transforming the region’s economy into one of the nation’s most vibrant startup hubs, with 54 deals worth $544 million in the second quarter of 2018 — up 29 percent from the year-ago period, according to the latest PwC MoneyTree report.

Bringing Public, Private Sector Leaders Together

All of this makes GTC DC a one-of-a-kind gathering, bringing together leaders from the public and private sectors for panel discussions about AI policy, and 150 talks about applying AI to a wide range of applications, from healthcare to cybersecurity to self-driving cars and autonomous machines.

The event features two keynote talks, from U.S. Chief Information Officer Suzette Kent and NVIDIA VP of Accelerated Computing Ian Buck.

Other notable speakers include Heidi King of the National Highway Traffic Safety Administration; James Kurose of the NSF; Derek Kan from the DOT; Elizabeth Jones from the National Institutes of Health; Missye Brickell from the U.S. Senate Committee on Commerce, Science and Transportation; Bakul Patel from the FDA; and Melissa Froelich from the House Committee on Energy and Commerce.

Leaders from the public and private sectors will participate in panel discussions on policy issues for:

  • Artificial Intelligence and Autonomy for Humanitarian Assistance and Disaster Relief
  • American Leadership in AI Research
  • The Keys to Deploying Self-driving Cars
  • How AI Can Improve Citizen Services
  • AI for Healthcare
  • Transforming Agriculture with AI

This is your opportunity to join in the discussions around these — and other efforts — that the rest of the nation, and the world, will be seeing in the news months from now.

AI in Healthcare

Anchored by the National Institutes of Health, which spends more than $37 billion on research annually — making it the world’s largest funder of biomedical research — the DC area is home to a constellation of healthcare innovators, many of whom are flocking to GTC.

Luminaries such as Elizabeth Jones, acting director of radiology and imaging sciences at the National Institutes of Health; Bakul Patel, associate director at the U.S. Food and Drug Administration; and Agata Anthony, regulatory affairs executive at General Electric, will discuss how to bring AI out of labs and into clinics.

Other healthcare speakers include:

  • Daniel Jacobson, chief scientist for systems biology at Oak Ridge National Laboratory, who will talk about how Summit, the world’s fastest supercomputer, is being used to attack the opioid epidemic.
  • Faisal Mahmood, a postdoctoral fellow from Johns Hopkins University, who will talk about how a new generation of AI-generated images can be used to accelerate efforts to train up sophisticated new medical imaging systems.
  • Avantika Lal, a research scientist from NVIDIA’s deep learning genomics group, who will explain how deep learning can transform noisy, low-quality DNA sequencing data into clean, high-quality data.

AI in Cyber-Security and Law Enforcement

Cybersecurity is another industry where the DC area leads the way — the region employs more than 77,500 cybersecurity professionals. It’s also among the leaders in AI adoption. Among featured speakers covering the topic:

  • Booz Allen Hamilton Lead Data Scientist Rachel Allen will talk about how to secure sprawling commercial and government networks.
  • NVIDIA’s Bartley Richardson, senior data scientist for AI infrastructure, will talk about new machine learning approaches to cybersecurity threats.
  • And, if you’re a fan of CSI, Graphistry CEO Leo Meyerovich will talk about bringing the latest graphics technology to crime scene analytics.

AI for Safer Driving

The big-picture thinking at GTC DC extends to self-driving cars, too.

While carmakers continue to add more and more autonomy to their vehicles, policymakers are working on the infrastructure and regulatory changes that will make mass adoption of fully autonomous vehicles possible.

The highlight will be a panel discussion at GTC DC on deploying self-driving cars.

Among the speakers:

  • Melissa Froelich, a staffer from the U.S. House Committee on Energy and Commerce;
  • Audi director of Government Affairs Brad Stertz;
  • Bert Kaufman, head of corporate and regulatory affairs at Zoox; and
  • Finch Fulton, deputy assistant secretary for Transportation Policy at the U.S. Transportation Department.

Get ahead of the game – come to GTC DC to learn what the rest of the nation, and the world, will be seeing in the news months from now.


Noisy Data: Nigerian Startup Aims to Save Lives by Analyzing Infant Cries with AI

There’s more to an infant’s cry than meets the ear.

A Nigerian startup is seeking potentially life-saving information in those wails using AI.

It’s developing a deep learning model to enable hospitals in developing nations to use this data to improve treatment of birth asphyxia, one of the most common — and deadly — neonatal conditions.

The World Health Organization estimates that birth asphyxia, or the loss of oxygen to a newborn during birth, kills as many as 4 million infants every year, or more than one-third of all deaths of children under the age of five.

Charles Onu, founder of startup Ubenwa (which literally means "cry of a baby" in Igbo, a language spoken by millions of people in Nigeria and elsewhere), learned about asphyxia during his undergraduate studies several years ago. He wanted to apply his engineering education to solving the problem.

Onu later read about a piece of research from the 1970s that established a connection between voice and asphyxia, and he began wondering about the connection between the condition and a newborn’s cry.

“It was this work that led me into machine learning,” Onu said. “My natural inclination was to ask, ‘Is there something I can do about it?’”

The Road to Deep Learning

As Onu learned about machine learning, he realized that pattern identification was an approach that fit well with identifying subtle indicators in a newborn infant’s cry.

Eventually, he founded Ubenwa with engineering lead Innocent Udeogu, obtained a dataset of 1,400 infant crying examples from researchers in Mexico, and started developing a machine learning algorithm that would detect subtle distinctions in infant cries to identify the presence of asphyxia.

Ubenwa is based in Nigeria, but Onu works with a small team in Montreal, where he is a graduate research assistant at McGill University.

The NVIDIA TITAN X.

In 2015, Onu won the best paper award at the Neural Information Processing Systems Workshop on Machine Learning in Healthcare. The prize was an NVIDIA TITAN X.

Onu and Udeogu paired that GPU with the TensorFlow deep learning library to develop their models. The company is now looking to acquire more data from hospitals to further validate its models. Part of that effort involves scraping public websites such as YouTube for examples of crying infants.

On the Cusp of Impact

If its early machine learning algorithms are any predictor, Ubenwa is onto something: its first models achieved an 85 percent success rate in predicting the presence of asphyxia, an achievement detailed in a recent paper. This helped launch the company into the second round of IBM’s years-long Watson AI XPRIZE competition, which will award $5 million to the winner.

In developed nations, newborns are routinely screened for conditions such as asphyxia using blood drawn immediately after birth. Elsewhere, hospitals often lack the required equipment and expertise, and have undependable electricity.

Ubenwa plans to offer a mobile application that will substitute for clinical efforts in developing countries. Caregivers could use Ubenwa to record an infant’s cry, and the app would compare the amplitude and frequency patterns of that cry against its deep learning models to determine whether asphyxia is present.
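A pipeline like that could be sketched as follows, assuming MFCC-style spectral features and a small Keras classifier; the feature set, file path and network shape are hypothetical stand-ins, not Ubenwa’s published model.

```python
import numpy as np
import librosa
import tensorflow as tf

def cry_features(path, sr=16000, n_mfcc=20):
    """Summarize a cry recording as the mean and spread of its MFCCs,
    a simple way to capture amplitude and frequency patterns."""
    audio, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Tiny binary classifier: asphyxia present vs. not.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(40,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# X and y would come from labeled recordings such as the Mexican dataset
# mentioned above; random values stand in here so the sketch runs.
X = np.random.randn(256, 40).astype("float32")
y = np.random.randint(0, 2, size=(256, 1))
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[:1], verbose=0))
```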

The company hopes to begin field trials in January 2020, initially focusing on Nigeria and its population of about 200 million. Regulatory complexities aren’t inconsiderable, but Onu believes Ubenwa can have a global impact, and potentially enable other diagnoses to be gleaned from an infant’s cry.

“Sudden infant death syndrome and other conditions could be detected earlier,” he said.


Putting Biopsies Under AI Microscope: Pathology Startup Fuels Shift Away from Physical Slides

Hundreds of millions of tissue biopsies are performed worldwide each year — most of which are diagnosed as non-cancerous. But for the few days or weeks it takes a lab to provide a result, uncertainty weighs on patients.

“Patients suffer emotionally, and their cancer is progressing as the clock ticks,” said David West, CEO of digital pathology startup Proscia.

That turnaround time could soon drop dramatically. In recent years, the biopsy process has begun to digitize, with more and more pathologists looking at digital scans of body tissue instead of physical slides under a microscope.

Proscia, a member of our Inception virtual accelerator program, is hosting these digital biopsy specimens in the cloud. This makes specimen analysis borderless, with one hospital able to consult a pathologist in a different region. It also creates the opportunity for AI to assist experts as they analyze specimens and make their diagnoses.

“If you have the opportunity to read twice as many slides in the same amount of time, it’s an obvious win for the laboratories,” said West.

The Philadelphia-based company recently closed an $8.3 million Series A funding round, which will power its AI development and software deployment. And a feasibility study published last week demonstrated that Proscia’s deep learning software scores over 99 percent accuracy in classifying three common types of skin pathologies.

Biopsy Analysis, Behind the Scenes

Pathologists have the weighty task of examining lab samples of body tissue to determine if they’re cancerous or benign. But depending on the type and stage of disease, two pathologists looking at the same tissue may disagree on a diagnosis more than half the time, West says.

These experts are also overworked and in short supply globally. Laboratories around the world have too many slides and not enough people to read them.

China has one pathologist per 80,000 patients, said West. And while the United States has one per 25,000 patients, it’s facing a decline as many pathologists are reaching retirement age. Many other countries have so few pathologists that they are “on the precipice of a crisis,” according to West.

He projects that 80 to 90 percent of major laboratories will have switched their biopsy analysis from microscopes to scanners in the next five years. Proscia’s subscription-based software platform aims to help pathologists more efficiently analyze these digital biopsy specimens, assisted by AI.

The company uses a range of NVIDIA Tesla GPUs through Amazon Web Services to power its digital pathology software and AI development. The platform is currently being used worldwide by more than 4,000 pathologists, scientists and lab managers to manage biopsy data and workflows.

Proscia’s digital pathology and AI platform displays a heat map analysis of this H&E stained skin tissue image.

In December, Proscia will release its first deep learning module, DermAI. This tool will be able to analyze skin biopsies and is trained to recognize roughly 70 percent of the pathologies a typical dermatology lab sees. Three other modules are currently under development.

Proscia works with both labeled and unlabeled data from clinical partners to train its algorithms. The labeled dataset, created by expert pathologists, is tagged with the overall diagnosis as well as more granular labels for specific tissue formations within the image.
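A heat map like the one shown above is commonly produced by sliding a patch classifier across the whole-slide image and recording a score per patch. The sketch below illustrates that general idea — not necessarily Proscia’s method — with a placeholder Keras model and made-up patch sizes.

```python
import numpy as np
import tensorflow as tf

def heatmap(slide, model, patch=256, stride=256):
    """Score each patch of a whole-slide RGB image and return a 2D grid
    of probabilities. `slide` is an (H, W, 3) array scaled to [0, 1]."""
    h, w, _ = slide.shape
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    grid = np.zeros((rows, cols), dtype=np.float32)
    for i in range(rows):
        for j in range(cols):
            tile = slide[i*stride:i*stride+patch, j*stride:j*stride+patch]
            grid[i, j] = model.predict(tile[None, ...], verbose=0)[0, 0]
    return grid

# Placeholder model and image; a real system would load a trained network
# and a scanned, stain-normalized slide at much higher resolution.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(256, 256, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
slide = np.random.rand(1024, 1024, 3).astype("float32")
print(heatmap(slide, model))   # a 4x4 grid of patch scores
```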

While biopsies can be ordered at multiple stages of treatment, Proscia focuses on the initial diagnosis stage, when doctors are looking at tissue and making treatment decisions.

“The AI is checking those cases as a virtual second opinion behind the scenes,” said West. This could lower the chances of missing tricky-to-spot cancers like melanoma, and make diagnoses more consistent among pathologists.


AI Chatbot Offers Better Way to Search Maze of Company Info

Your company’s internal online directory of resources resembles a medieval labyrinth. Finding something like the in-house holiday schedule can entail hitting lots of walls.

Startup Jane.ai aims to help navigate around those headaches.

The St. Louis company is developing a chatbot to find company information, intending to cut back the pain of lengthy, often fruitless searches. You can talk to Jane like any colleague on your Slack, Skype, email, SMS text or webpage.

Jane’s name was chosen to feel as familiar as those of people in cubicles near you, so that communication with the chatbot would feel natural, according to the startup.

There’s a good shot this could help enterprises everywhere. That’s because Jane’s AI — developed on NVIDIA GPUs — can comb through apps, documents and other bits of information in databases to help people unearth company answers in moments.

Jane.ai was co-founded in early 2017 by David Karandish and Chris Sims, the former co-founders of Answers.com. The 45-person startup recently scored $8.4 million in a Series A funding round.

The company offers a cloud-based service, with subscriptions based on the number of users. So far, it has about a dozen customers, among them utilities, mortgage and financial companies, consumer packaged goods, and universities.

Jane’s Next Act?

Plans for Jane are to go beyond just answering queries. The chatbot can already be used to perform actions across a wide spectrum of enterprise apps and APIs, including scheduling a meeting, creating a ticket, searching your files, returning CRM account details and fetching data from a spreadsheet.

In the future, it might be used to offer proactive messages to take actions, such as reminding you to participate in open enrollment for benefits if it spots that you haven’t already.

Jane’s natural language processing was built on several different deep neural networks and machine learning algorithms to provide the results the company wanted, Karandish said.

Jane Likes Algorithms

“Some algorithms do fantastic under certain scenarios and not so on others — by running them together you get fantastic results in a relatively short period,” Karandish said.

The team at Jane trained its neural networks on NVIDIA GPUs on AWS. The natural language processing does topic clustering and text clustering to help develop pools of answers to questions, and then combines that with entity tagging, part-of-speech tagging and sentence similarity to zoom in on each user’s unique intent.
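Jane.ai hasn’t published its code, but the ingredients Karandish describes are standard. The sketch below is a generic illustration of how they can fit together, using spaCy and scikit-learn with made-up documents; none of it is the company’s actual pipeline.

```python
import spacy
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

nlp = spacy.load("en_core_web_sm")   # small English model; an assumption

docs = [
    "The holiday schedule is posted on the HR portal.",
    "Submit expense reports through the finance tool by Friday.",
    "Conference rooms can be booked from the facilities page.",
]

# 1) Cluster documents into rough topic pools of candidate answers.
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs)
topics = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# 2) Tag the incoming question: parts of speech and named entities.
question = "When is the company holiday schedule published?"
parsed = nlp(question)
print([(t.text, t.pos_) for t in parsed])
print([(e.text, e.label_) for e in parsed.ents])

# 3) Rank candidate answers by similarity to the question.
sims = cosine_similarity(vec.transform([question]), X)[0]
best = int(sims.argmax())
print(f"topic {topics[best]} -> {docs[best]} (score {sims[best]:.2f})")
```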

Jane has “a whole ensemble of different models that are running in real time in production on GPUs,” said Dave Costenaro, the company’s AI lead.

The post AI Chatbot Offers Better Way to Search Maze of Company Info appeared first on The Official NVIDIA Blog.