Top Healthcare Innovators Share AI Developments at GTC

Healthcare is under the microscope this year like never before. Hospitals are being asked to do more with less, and researchers are working around the clock to answer pressing questions.

NVIDIA’s GPU Technology Conference brings everything you need to know about the future of AI and HPC in healthcare together in one place.

Innovators across healthcare will come together at the event to share how they are using AI and GPUs to supercharge their medical devices and biomedical research.

Scores of on-demand talks and hands-on training sessions will focus on AI in medical imaging, genomics, drug discovery, medical instruments and smart hospitals.

And advancements powered by GPU acceleration in fields such as imaging, genomics and drug discovery, which are playing a vital role in COVID-19 research, will take center stage at the conference.

More than 120 healthcare sessions are taking place at GTC, which runs October 5-9, featuring demos, hands-on training, breakthrough research and more.

Turning Months into Minutes for Drug Discovery

AI and HPC are improving speed, accuracy and scalability for drug discovery. Companies and researchers are turning to AI to enhance current methods in the field. Molecular simulations such as docking, free energy perturbation (FEP) and molecular dynamics require a huge amount of computing power. At every phase of drug discovery, researchers are incorporating AI methods to accelerate the process.
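To see why these simulations demand so much compute, consider the core loop of molecular dynamics: every particle interacts with every other particle, and the system advances in tiny timesteps, repeated billions of times. Below is a deliberately simplified numpy sketch of one velocity-Verlet step with Lennard-Jones forces; production MD codes add neighbor lists, periodic boundaries, constraints and GPU kernels, but the quadratic pairwise force calculation shown here is the part GPUs accelerate so dramatically.

```python
# Toy molecular dynamics step (velocity Verlet with Lennard-Jones forces).
# A simplified numpy sketch, not production MD code.
import numpy as np

def lj_forces(pos, epsilon=1.0, sigma=1.0):
    """All-pairs Lennard-Jones forces: the O(N^2) cost that dominates MD."""
    diff = pos[:, None, :] - pos[None, :, :]      # (N, N, 3) displacements
    r2 = (diff ** 2).sum(-1)
    np.fill_diagonal(r2, np.inf)                  # no self-interaction
    inv_r6 = (sigma ** 2 / r2) ** 3
    # Force on i from j points along the displacement vector from j to i.
    coeff = 24 * epsilon * (2 * inv_r6 ** 2 - inv_r6) / r2
    return (coeff[..., None] * diff).sum(axis=1)  # (N, 3) net forces

def velocity_verlet(pos, vel, mass=1.0, dt=1e-3):
    """Advance positions and velocities by one timestep."""
    f = lj_forces(pos)
    vel_half = vel + 0.5 * dt * f / mass
    pos_new = pos + dt * vel_half
    vel_new = vel_half + 0.5 * dt * lj_forces(pos_new) / mass
    return pos_new, vel_new

pos = np.random.rand(64, 3) * 10.0   # 64 particles in a 10x10x10 box
vel = np.zeros((64, 3))
pos, vel = velocity_verlet(pos, vel)
```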

Here are some drug discovery sessions you won’t want to miss:

Architecting the Next Generation of Hospitals

AI can greatly improve hospital efficiency and prevent costs from ballooning. Autonomous robots can help with surgeries, deliver blankets to patients’ rooms and perform automatic check-ins. AI systems can search patient records, monitor blood pressure and oxygen saturation levels, flag thoracic radiology images that show pneumonia, take patient temperatures and notify staff immediately of changes.

Here are some sessions on smart hospitals you won’t want to miss:

Training AI for Medical Imaging

AI models are being developed at a rapid pace to optimize medical imaging analysis for both radiology and pathology. Get exposure to cutting-edge use cases for AI in medical imaging and learn how developers can use the NVIDIA Clara Imaging application framework to deploy their own AI applications.

Building robust AI requires massive amounts of data. In the past, hospitals and medical institutions have struggled to share and combine their local knowledge without compromising patient privacy, but federated learning is making this possible. The learning paradigm enables different clients to securely collaborate, train and contribute to a global model. Register for this session to learn more about federated learning and its use in COVID-19 AI model development from a panel of experts.
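As a concrete illustration of the paradigm, here is a minimal federated averaging (FedAvg) sketch in plain numpy. It is not the Clara federated learning implementation, which layers secure communication and privacy protections on top, but it shows the essential mechanic: only model weights travel between clients and the server, never the underlying patient data.

```python
# Minimal federated averaging (FedAvg) sketch in numpy.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Each client trains a tiny logistic-regression model on its own data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid
        grad = X.T @ (preds - y) / len(y)      # cross-entropy gradient
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """Server averages client updates, weighted by local dataset size.
    Raw patient data never leaves the client; only weights are shared."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    updates = [local_update(global_w, X, y) for X, y in clients]
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

rng = np.random.default_rng(0)
# Three "hospitals", each with private data for the same 4-feature task.
clients = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50)) for _ in range(3)]
global_w = np.zeros(4)
for _ in range(10):
    global_w = federated_round(global_w, clients)
```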

Must-see medical imaging sessions include:

Accelerating Genomic Analysis

Genomic data is foundational in making precision medicine a reality. As next-generation sequencing becomes more routine, large genomic datasets are becoming more prevalent. Transforming the sequencing data into genetic information is just the first step in a complicated, data-intensive workflow. With high performance computing, genomic analysis is being streamlined and accelerated to enable novel discoveries about the human genome.
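For a feel of what transforming sequencing data into genetic information involves at its simplest, here is a toy variant-calling sketch: pile up aligned reads over a reference and flag positions where the reads consistently disagree. Real pipelines, including GPU-accelerated ones, add quality scores, indel handling and diploid genotyping across billions of reads, but the pileup idea is the same.

```python
# Toy single-nucleotide variant caller: a deliberately simplified sketch.
from collections import Counter

reference = "ACGTACGTAC"
# (start position, read sequence) for reads already aligned to the reference
reads = [(0, "ACGTACGTAC"), (2, "GTTCGT"), (1, "CGTTCG"), (3, "TTCGTA")]

def call_snvs(reference, reads, min_depth=2, min_fraction=0.6):
    """Report positions where most reads disagree with the reference base."""
    pileup = [Counter() for _ in reference]
    for start, seq in reads:
        for offset, base in enumerate(seq):
            pileup[start + offset][base] += 1
    variants = []
    for pos, counts in enumerate(pileup):
        depth = sum(counts.values())
        if depth < min_depth:
            continue
        base, n = counts.most_common(1)[0]
        if base != reference[pos] and n / depth >= min_fraction:
            variants.append((pos, reference[pos], base, n, depth))
    return variants

print(call_snvs(reference, reads))  # position 4: reference A, reads say T
```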

Genomic sessions you won’t want to miss include:

The Best of MICCAI at GTC

This year’s GTC also brings attendees the best of MICCAI, a conference focused on cutting-edge deep learning research in medical imaging. Developers will have the opportunity to dive into the papers presented, connect with the researchers at a variety of networking opportunities, and watch on-demand training sessions from the first-ever MONAI Bootcamp hosted at MICCAI.

Game-Changing Healthcare Startups

Over 70 healthcare AI startups from the NVIDIA Inception program will showcase their latest breakthroughs at GTC. Get inspired by the AI- and HPC-powered technologies these startups are developing for personalized medicine and next-generation clinics.

Here are some Inception member-led talks not to miss:

Make New Connections, Share Ideas

GTC will have new ways to connect with fellow attendees who are blazing the trail for healthcare and biomedical innovation. Join a Dinner with Strangers conversation to network with peers on topics spanning drug discovery, medical imaging, genomics and intelligent instrument development. Or, book a Braindate to have a knowledge-sharing conversation on a topic of your choice with a small group or one-on-one.

Learn more about networking opportunities at GTC.

Brilliant Minds Never Turn Off

GTC will showcase the hard work and groundbreaking discoveries of developers, researchers, engineers, business leaders and technologists from around the world. Nowhere else can you access five days of continuous programming with regionally tailored content. This international event will unveil the future of healthcare technology, all in one place.

Check out the full healthcare session lineup at GTC, including talks from over 80 startups using AI to transform healthcare, and register for the event today.


Intel, Baidu Drive Intelligent Infrastructure Transformation

Baidu World 2020: Rui Wang (right), Intel vice president in the Sales and Marketing Group and PRC country manager, and Zhenyu Hou, corporate vice president of Baidu, discuss the two companies’ ongoing strategic collaboration to bring intelligent infrastructure and intelligent computing to Baidu and its customers.

What’s New: At Baidu World 2020, Intel announced a series of collaborations with Baidu in artificial intelligence (AI), 5G, data center and cloud computing infrastructure. Intel and Baidu executives discussed the trends of intelligent infrastructure and intelligent computing, and shared details on the two companies’ strategic vision to jointly drive the industry’s intelligent transformation within the cloud, network and edge computing environments.

“In China, the most important thing for developing an industry ecosystem is to truly take root in the local market and its users’ needs. With the full-speed development of ‘new infrastructure’ and 5G, China has entered the stage of accelerated development of the industrial internet. Intel and Baidu will collaborate comprehensively to create infinite possibilities for the future through continuous innovation, so that technology can enrich the lives of everyone.”
– Rui Wang, Intel vice president in the Sales & Marketing Group and PRC country manager

Why It Matters: Zhenyu Hou, corporate vice president of Baidu, said that Baidu and Intel are both extremely focused on technology innovation and have always been committed to promoting intelligent transformation through innovative technology exploration. In the wave of new infrastructure, Baidu continues to deepen its collaboration with Intel to seize opportunities in the AI industry and bring more value to the industry, society and individuals.

A Series of Recent Collaborations:

  • AI in the Cloud: Intel and Baidu have delivered technological innovations over the past decade, from search and AI to autonomous driving, 5G and cloud services. Recently, Baidu and Intel worked on customizing Intel® Xeon® Scalable processors to deliver optimized performance, thermal design power (TDP), temperature and feature sets within Baidu’s cloud infrastructure. With the latest 3rd generation Intel Xeon Scalable processor and its built-in BFloat16 instruction set (see the sketch following this list), Intel supports Baidu’s optimization of the PaddlePaddle framework to provide enhanced speech prediction services and multimedia processing support within the Baidu cloud, delivering highly optimized, highly efficient cloud management, operation and maintenance.
  • Next-gen server architecture: Intel and Baidu have designed and carried out the commercial deployment of next-generation 48V rack servers based on Intel Xeon Scalable processors to achieve higher rack power density, reduce power consumption and improve energy efficiency. The two companies are working to drive ecosystem maturity of 48V technology and promote its full adoption in the future based on the next-generation Xeon® Scalable processor (code-named Sapphire Rapids).
  • Networking: In an effort to improve virtualization and workload performance, while accelerating data processing speeds and reducing total cost of ownership (TCO) within Baidu’s infrastructure, Intel and Baidu are deploying Smart NIC (network interface card) innovations based on Intel® SoC FPGAs and the Intel® Ethernet 800 Series adapter with Application Device Queues (ADQ) technology. Smart NICs greatly increase port speed, optimize network load, enable large-scale data processing, and create an efficient and scalable bare metal and virtualization environment for the Baidu AI Cloud. Baidu Smart NICs are built on the latest Intel Ethernet 800 series, Intel Xeon D processors and Intel Arria® 10-based FPGAs.
  • Memory and storage: Intel and Baidu built a high-performance, ultra-scalable, unified user-space, single-node storage engine using Intel® Optane™ persistent memory and Intel Optane NVMe SSDs, enabling Baidu to configure multiple storage scenarios through one set of software.
  • 5G and edge computing: In the area of 5G and edge computing, Intel and Baidu have utilized their technology expertise and collaborated on a joint innovation using the capabilities of the OpenNESS (Open Network Edge Services Software) toolkit developed by Intel, and Baidu IME (Intelligent Mobile Edge), to help achieve a highly reliable edge compute solution with AI capabilities for low-latency applications.
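For readers unfamiliar with BFloat16, the format mentioned in the cloud bullet above, the numpy sketch below shows what the instruction set operates on: a 16-bit number that keeps float32’s full exponent range but truncates the mantissa, halving memory and bandwidth at a modest precision cost. The truncation here is a simplification; hardware typically rounds to nearest even.

```python
# What BFloat16 buys you: float32's exponent range with fewer mantissa bits.
# Assumption: round-toward-zero truncation, for illustration only.
import numpy as np

def to_bfloat16(x):
    """Truncate float32 values to bfloat16 by keeping the top 16 bits."""
    as_int = np.asarray(x, dtype=np.float32).view(np.uint32)
    truncated = as_int & 0xFFFF0000      # drop the low 16 mantissa bits
    return truncated.view(np.float32)

x = np.array([3.141592653589793, 1e38, 1e-38], dtype=np.float32)
print(to_bfloat16(x))   # ~3.140625; the extreme values keep their range
```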

What’s Next: Looking forward, Intel will continue to leverage its comprehensive data center portfolio to collaborate with Baidu on a variety of developments including:

  • Developing a future autonomous vehicle architecture platform and intelligent transportation vehicle-road computing architecture.
  • Exploring mobile edge computing to provide users with edge resources to connect Baidu AI operator services.
  • Expanding Baidu Smart Cloud in infrastructure technology.
  • Improving the optimization of Xeon Scalable processors and PaddlePaddle.
  • Bringing increased benefits to Baidu online AI businesses, thus creating world-changing technology that truly enriches lives.


Letter From Jensen: Creating a Premier Company for the Age of AI

NVIDIA founder and CEO Jensen Huang sent the following letter to NVIDIA employees today:

Hi everyone, 

Today, we announced that we have signed a definitive agreement to purchase Arm. 

Thirty years ago, a visionary team of computer scientists in Cambridge, U.K., invented a new CPU architecture optimized for energy-efficiency and a licensing business model that enables broad adoption. Engineers designed Arm CPUs into everything from smartphones and PCs to cloud data centers and supercomputers. An astounding 180 billion computers have been built with Arm — 22 billion last year alone. Arm has become the most popular CPU in the world.   

Simon Segars, its CEO, and the people of Arm have built a great company that has shaped the computer industry and nearly every technology market in the world. 

We are joining arms with Arm to create the leading computing company for the age of AI. AI is the most powerful technology force of our time. Learning from data, AI supercomputers can write software no human can. Amazingly, AI software can perceive its environment, infer the best plan, and act intelligently. This new form of software will expand computing to every corner of the globe. Someday, trillions of computers running AI will create a new internet — the internet-of-things — thousands of times bigger than today’s internet-of-people.   

Uniting NVIDIA’s AI computing with the vast reach of Arm’s CPU, we will engage the giant AI opportunity ahead and advance computing across the cloud, smartphones, PCs, self-driving cars, robotics, 5G, and IoT. 

NVIDIA will bring our world-leading AI technology to Arm’s ecosystem while expanding NVIDIA’s developer reach from 2 million to more than 15 million software programmers. 

Our R&D scale will turbocharge Arm’s roadmap pace and accelerate data center, edge AI, and IoT opportunities. 

Arm’s business model is brilliant. We will maintain its open-licensing model and customer neutrality, serving customers in any industry, across the world, and further expand Arm’s IP licensing portfolio with NVIDIA’s world-leading GPU and AI technology. 

Arm’s headquarters will remain in Cambridge and continue to be a cornerstone of the U.K. technology ecosystem. NVIDIA will retain the name and strong brand identity of Arm. Simon and his management team are excited to be joining NVIDIA.  

Arm gives us the critical mass to invest in the U.K. We will build a world-class AI research center in Cambridge — the university town of Isaac Newton and Alan Turing, for whom NVIDIA’s Turing GPUs and Isaac robotics platform were named. This NVIDIA research center will be the home of a state-of-the-art AI supercomputer powered by Arm CPUs. The computing infrastructure will be a major attraction for scientists from around the world doing groundbreaking research in healthcare, life sciences, robotics, self-driving cars, and other fields. This center will serve as our European hub to collaborate with universities, industrial partners, and startups. It will also be the NVIDIA Deep Learning Institute for Europe, where we teach the methods of applying this marvelous AI technology.  

The foundation built by Arm and NVIDIA employees has provided this fantastic opportunity to create the leading computing company for the age of AI. The possibilities of our combined companies are beyond exciting.   

I can’t wait. 

Jensen


NVIDIA and Arm to Create World-Class AI Research Center in Cambridge

Artificial intelligence is the most powerful technology force of our time. 

It is the automation of automation, where software writes software. While AI began in the data center, it is moving quickly to the edge — to stores, warehouses, hospitals, streets, and airports, where smart sensors connected to AI computers can speed checkouts, direct forklifts, orchestrate traffic, and save power. In time, there will be trillions of these small autonomous computers powered by AI, connected by massively powerful cloud data centers in every corner of the world.

But in many ways, the field is just getting started. That’s why we are excited to be creating a world-class AI laboratory in Cambridge, at the Arm headquarters: a Hadron collider or Hubble telescope, if you like, for artificial intelligence.  

NVIDIA, together with Arm, is uniquely positioned to launch this effort. NVIDIA is the leader in AI computing, while Arm is present across a vast ecosystem of edge devices, with more than 180 billion units shipped. With this newly announced combination, we are creating the leading computing company for the age of AI. 

Arm is an incredible company and it employs some of the greatest engineering minds in the world. But we believe we can make Arm even more incredible and take it to even higher levels. We want to propel it — and the U.K. — to global AI leadership.

We will create an open center of excellence in the area once home to giants like Isaac Newton and Alan Turing, for whom key NVIDIA technologies are named. Here, leading scientists, engineers and researchers from the U.K. and around the world will come to develop their ideas, collaborate and conduct their groundbreaking work in areas like healthcare, life sciences, self-driving cars and other fields. We want the U.K. to attract the best minds and talent from around the world. 

The center in Cambridge will include: 

  • An Arm/NVIDIA-based supercomputer. Expected to be one of the most powerful AI supercomputers in the world, this system will combine state-of-the-art Arm CPUs, NVIDIA’s most advanced GPU technology, and NVIDIA Mellanox DPUs, along with high-performance computing and AI software from NVIDIA and our many partners. For reference, the world’s fastest supercomputer, Fugaku in Japan, is Arm-based, and NVIDIA’s own supercomputer Selene is the seventh most powerful system in the world.  
  • Research Fellowships and Partnerships. In this center, NVIDIA will expand its research partnerships with academia and industry across the U.K., covering leading-edge work in healthcare, autonomous vehicles, robotics, data science and more. NVIDIA already has successful research partnerships with King’s College and Oxford. 
  • AI Training. NVIDIA’s education wing, the Deep Learning Institute, has trained more than 250,000 students on both fundamental and applied AI. NVIDIA will create an institute in Cambridge, and make our curriculum available throughout the U.K. This will provide both young people and mid-career workers with new AI skills, creating job opportunities and preparing the next generation of U.K. developers for AI leadership. 
  • Startup Accelerator. Much of the leading-edge work in AI is done by startups. NVIDIA Inception, a startup accelerator program, has more than 6,000 members — with more than 400 based in the U.K. NVIDIA will further its investment in this area by providing U.K. startups with access to the Arm supercomputer, connections to researchers from NVIDIA and partners, technical training and marketing promotion to help them grow. 
  • Industry Collaboration. The NVIDIA AI research facility will be an open hub for industry collaboration, providing a uniquely powerful center of excellence in Britain. NVIDIA’s industry partnerships include GSK, Oxford Nanopore and other leaders in their fields. From helping to fight COVID-19 to finding new energy sources, NVIDIA is already working with industry across the U.K. today — but we can and will do more. 

We are ambitious. We can’t wait to build on the foundations created by the talented minds of NVIDIA and Arm to make Cambridge the next great AI center for the world. 


Perfect Pairing: NVIDIA’s David Luebke on the Intersection of AI and Graphics

NVIDIA Research comprises more than 200 scientists around the world driving innovation across a range of industries. One of its central figures is David Luebke, who founded the team in 2006 and is now the company’s vice president of graphics research.

Luebke spoke with AI Podcast host Noah Kravitz about what he’s working on. He’s especially focused on the interaction between AI and graphics. Rather than viewing the two as conflicting endeavors, Luebke argues that AI and graphics go together “like peanut butter and jelly.”

NVIDIA Research proved that with StyleGAN2, the second iteration of the generative adversarial network StyleGAN. Trained on high-resolution images, StyleGAN2 takes numerical input and produces realistic portraits.

Images of comparable quality generated for films can take up to weeks to render for a single frame; the first version of StyleGAN takes only 24 milliseconds to produce an image.
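The pipeline behind this is conceptually simple: a latent vector of random numbers goes in, an image comes out. The PyTorch sketch below shows a minimal DCGAN-style generator to make that input-output relationship concrete; StyleGAN2’s actual architecture, with its mapping network and modulated convolutions, is far more elaborate.

```python
# Minimal DCGAN-style generator: numerical input in, image out.
# An illustrative sketch, not the StyleGAN2 architecture.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            # Project the latent vector to a 4x4 feature map, then upsample.
            nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),  # 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),    # 16x16
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),                           # 32x32 RGB
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

z = torch.randn(1, 128)          # the "numerical input"
image = TinyGenerator()(z)       # -> tensor of shape (1, 3, 32, 32)
```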

Luebke envisions the future of GANs as an even larger collaboration between AI and graphics. He predicts that GANs such as those used in StyleGAN will learn to produce the key elements of graphics: shapes, materials, illumination and even animation.

Key Points From This Episode:

  • AI is especially useful in graphics for replacing or augmenting components of the traditional computer graphics pipeline, from content creation to mesh generation to realistic character animation.
  • Luebke researches a range of topics, one of which is virtual and augmented reality. It was, in fact, what inspired him to pursue graphics research — learning about VR led him to switch majors from chemical engineering.
  • Displays are a major stumbling block in virtual and augmented reality, he says. He emphasizes that VR requires high frame rates, low latency and very high pixel density.

Tweetables:

“Artificial intelligence, deep neural networks — that is the future of computer graphics” — David Luebke [2:34]

“[AI], like a renaissance artist, puzzled out the rules of perspective and rotation” — David Luebke [16:08]

You Might Also Like

NVIDIA Research’s Aaron Lefohn on What’s Next at Intersection of AI and Computer Graphics

Real-time graphics technology, namely, GPUs, sparked the modern AI boom. Now modern AI, driven by GPUs, is remaking graphics. This episode’s guest is Aaron Lefohn, senior director of real-time rendering research at NVIDIA. Aaron’s international team of scientists played a key role in founding the field of AI computer graphics.

GauGAN Rocket Man: Conceptual Artist Uses AI Tools for Sci-Fi Modeling

Ever wondered what it takes to produce the complex imagery in films like Star Wars or Transformers? Here to explain the magic is Colie Wertz, a conceptual artist and modeler who works on film, television and video games. Wertz discusses his specialty of hard modeling, in which he produces digital models of objects with hard surfaces like vehicles, robots and computers.

Cycle of DOOM Now Complete: Researchers Use AI to Generate New Levels for Seminal Video Game

DOOM, of course, is foundational to 3D gaming. 3D gaming, of course, is foundational to GPUs. GPUs, of course, are foundational to deep learning, which is, now, thanks to a team of Italian researchers, two of whom we’re bringing to you with this podcast, being used to make new levels for … DOOM.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.


Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.


Vision of AI: Startup Helps Diabetic Retinopathy Patients Retain Their Sight

Every year, 60,000 people go blind from diabetic retinopathy, a condition caused by damage to the blood vessels in the eye, for which high blood sugar levels are a major risk factor.

Digital Diagnostics, a software-defined AI medical imaging company formerly known as IDx, is working to help those people retain their vision, using NVIDIA technology to do so.

The startup was founded a decade ago by Michael Abramoff, a retinal surgeon with a Ph.D. in computer science. While training as a surgeon, Abramoff often saw patients with diabetic retinopathy, or DR, that had progressed too far to be treated effectively, leading to permanent vision loss.

With the mission of increasing access to and quality of DR diagnosis, as well as decreasing its cost, Abramoff and his team have created an AI-based solution.

The company’s product, IDx-DR, takes images of the back of the eye, analyzes them and provides a diagnosis within minutes, referring the patient to a specialist for treatment if more than a mild case is detected.

The system is optimized on NVIDIA GPUs and its deep learning pipeline was built using the NVIDIA cuDNN library for high-performance GPU-accelerated operations. Training occurs using Amazon EC2 P3 instances featuring NVIDIA V100 Tensor Core GPUs and is based on images of DR cases confirmed by retinal specialists.
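As a rough illustration of what such a pipeline looks like, here is a minimal PyTorch training step of the kind described, with cuDNN autotuning enabled. The model, data and hyperparameters are placeholders, not Digital Diagnostics’ actual stack.

```python
# Sketch of a GPU-accelerated training step; PyTorch calls into the
# NVIDIA cuDNN library for its convolutions. All names are placeholders.
import torch
import torch.nn as nn
import torchvision

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
torch.backends.cudnn.benchmark = True   # let cuDNN pick the fastest conv kernels

# Stand-in classifier: fundus image -> referable / non-referable DR.
model = torchvision.models.resnet18(num_classes=2).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One supervised step on a batch of specialist-labeled retinal images."""
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch standing in for real fundus photographs (3x224x224).
loss = train_step(torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,)))
```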

IDx-DR enables diagnostic tests to be completed in easily accessible settings like drugstores or primary care providers’ offices, rather than only at ophthalmology clinics, said John Bertrand, CEO at Digital Diagnostics.

“Moving care to locations the patient is already visiting improves access and avoids extra visits that overwhelm specialty physician schedules,” he said. “Patients avoid an extra copay and don’t have to take time off work for a second appointment.”

Autonomous, Not Just Assistive

“There are lots of good AI products specifically created to assist physicians and increase the detection rate of finding an abnormality,” said Bertrand. “But to allow physicians to practice to the top of their license, and reduce the costs of these low-complexity tests, you need to use autonomous AI.”

IDx-DR is the first FDA-cleared autonomous AI system — meaning that while the FDA has cleared many AI-based applications, IDx-DR was the first that doesn’t require physician oversight.

Clinical trials using IDx-DR consisted of machine operators who didn’t have prior experience taking retinal photographs, simulating the way the product would be used in the real world, according to Bertrand.

“Anyone with a high school diploma can perform the exam,” he said.

The platform has been deployed in more than 20 sites across the U.S., including Blessing Health System, in Illinois, where family medicine doctor Tim Beth said, “Digital Diagnostics has done well in developing an algorithm that can detect the possibility of early disease. We would be missing patients if we didn’t use IDx-DR.”

In addition to DR, Digital Diagnostics has created prototypes for products that diagnose glaucoma and age-related macular degeneration. The company is also looking to provide solutions for healthcare issues beyond eye-related conditions, including those related to the skin, nose and throat.

Stay up to date with the latest healthcare news from NVIDIA.

Digital Diagnostics is a Premier member of NVIDIA Inception, a program that supports AI startups with go-to-market support, expertise and technology.


The Great AI Bake-Off: Recommendation Systems on the Rise

If you want to create a world-class recommendation system, follow this recipe from a global team of experts: Blend a big helping of GPU-accelerated AI with a dash of old-fashioned cleverness.

The proof was in the pudding for a team from NVIDIA that won this year’s ACM RecSys Challenge. The competition is a highlight of an annual gathering of more than 500 experts who present the latest research in recommendation systems, the engines that deliver personalized suggestions for everything from restaurants to real estate.

At the Sept. 22-26 online event, the team will describe its dish, already available as open source code. They’re also sharing lessons learned with colleagues who build NVIDIA products like RAPIDS and Merlin, so customers can enjoy the fruits of their labor.

In an effort to bring more people to the table, NVIDIA will donate the contest’s $15,000 cash prize to Black in AI, a nonprofit dedicated to mentoring the next generation of Black specialists in machine learning.

GPU Server Doles Out Recommendations

This year’s contest, sponsored by Twitter, asked researchers to comb through a dataset of 146 million tweets to predict which ones a user would like, reply to or retweet. The NVIDIA team’s work led a field of 34 competitors, thanks in part to a system with four NVIDIA V100 Tensor Core GPUs that cranked through hundreds of thousands of options.

Their numbers were eye-popping. GPU-accelerated software engineered features in less than a minute that had required nearly an hour on a CPU, a 500x speedup. The four-GPU system trained the team’s AI models 120x faster than a CPU. And GPUs gave the group’s end-to-end solution a 280x speedup compared to an initial implementation on a CPU.

“I’m still blown away when we pull off something like a 500x speedup in feature engineering,” said Even Oldridge, a Ph.D. in machine learning who in the past year quadrupled the size of his group that designs NVIDIA Merlin, a framework for recommendation systems.

GPUs and frameworks such as UCX provided up to 500x speedups compared to CPUs.

Competition Sparks Ideas for Software Upgrades  

The competition spawned work on data transformations that could enhance future versions of NVTabular, a Merlin library that eases the engineering of new features from the spreadsheet-like tables that are the basis of recommendation systems.

“We won in part because we could prototype fast,” said Benedikt Schifferer, one of three specialists in recommendation systems on the team that won the prize.

Schifferer also credits two existing tools. Dask, an open-source scheduling tool, let the team split memory-hungry jobs across multiple GPUs. And cuDF, part of NVIDIA’s RAPIDS framework for accelerated data science, let the group run the equivalent of the popular Pandas library on GPUs.

“Searching for features in the data using Pandas on CPUs took hours for each new feature,” said Chris Deotte, one of a handful of data scientists on the team who have earned the title Kaggle grandmaster for their prowess in competitions.

“When we converted our code to RAPIDS, we could explore features in minutes. It was life changing: we could search hundreds of features, and that eventually led to discoveries that won the competition,” said Deotte, one of only two grandmasters who hold that title in all four Kaggle categories.
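The appeal Deotte describes is that cuDF mirrors the pandas API, so a feature that takes hours to compute on a CPU becomes a GPU kernel with almost no code change. The sketch below shows the flavor of such feature engineering; the column names and input file are illustrative, not the actual challenge schema.

```python
# GPU feature engineering with cuDF, which mirrors the pandas API.
# Column names and the input file are hypothetical placeholders.
import cudf

tweets = cudf.read_parquet("tweets.parquet")

# Aggregate feature: each author's historical like rate and tweet count.
author_stats = (
    tweets.groupby("author_id")
          .agg({"liked": "mean", "tweet_id": "count"})
          .rename(columns={"liked": "author_like_rate",
                           "tweet_id": "author_tweet_count"})
          .reset_index()
)
features = tweets.merge(author_stats, on="author_id", how="left")
# With dask_cudf, the same code can be spread across multiple GPUs.
```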

More enhancements for recommendation systems are on the way. For example, customers can look forward to improvements in text handling on GPUs, a key data type for recommendation systems.

An Aha! Moment Fuels the Race

Deotte credits a colleague in Brazil, Gilberto Titericz, with an insight that drove the team forward.

“He tracked changes in Twitter followers over time, which turned out to be a feature that really fueled our accuracy — it was incredibly effective,” Deotte said.

“I saw patterns changing over time, so I made several plots of them,” said Titericz, who ranked as the top Kaggle grandmaster worldwide for a couple of years.

“When I saw a really great result, I thought I made a mistake, but I took a chance, submitted it and to my surprise it scored high on the leaderboard, so my intuition was right,” he added.

In the end, the team used a mix of complementary AI models designed by Titericz, Schifferer and a colleague in Japan, Kazuki Onodera, all based on XGBoost, an algorithm well suited for recommendation systems.
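For a sense of how such a model is wired up, here is a hedged XGBoost sketch: engineered features in, engagement probability out, trained on the GPU. The features, parameters and data are synthetic placeholders, not the winning configuration.

```python
# Illustrative XGBoost engagement model; all data and parameters are
# synthetic stand-ins, not the team's actual models.
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 20))   # engineered tweet/user features
y = rng.integers(0, 2, 10_000)      # did the user engage?

dtrain = xgb.DMatrix(X, label=y)
params = {
    "objective": "binary:logistic",
    "eval_metric": "logloss",
    "tree_method": "gpu_hist",   # train on the GPU (CUDA build of XGBoost)
    "max_depth": 8,
    "eta": 0.1,
}
model = xgb.train(params, dtrain, num_boost_round=200)
prob_engage = model.predict(xgb.DMatrix(X[:5]))   # engagement probabilities
```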

Several members of the team are part of an elite group of Kaggle grandmasters that NVIDIA founder and CEO Jensen Huang dubbed KGMON, a playful takeoff on Pokemon. The team won dozens of competitions in the last four years.

Recommenders Getting Traction in B2C

For many members, including team leader Jean-Francois Puget in southern France, it’s more than a 9-to-5 job.

“We spend nights and weekends in competitions, too, trying to be the best in the world,” said Puget, who earned his Ph.D. in machine learning two decades before deep learning took off commercially.

Now the technology is spreading fast.

This year’s ACM RecSys includes three dozen papers and talks from companies like Amazon and Netflix that helped establish the field with recommenders that help people find books and movies. Now, consumer companies of all stripes are getting into the act, including IKEA and Etsy, which are presenting at ACM RecSys this year.

“For the last three or four years, it’s more focused on delivering a personalized experience, really understanding what users want,” said Schifferer. It’s a cycle where “customers’ choices influence the training data, so some companies retrain their AI models every four hours, and some say they continuously train,” he added.

That’s why the team works hard to create frameworks like Merlin to make recommendation systems run easily and fast at scale on GPUs. Other members of NVIDIA’s winning team were Christof Henkel (Germany), Jiwei Liu and Bojan Tunguz (U.S.), Gabriel De Souza Pereira Moreira (Brazil) and Ahmet Erdem (Netherlands).

To get tips on how to design recommendation systems from the winning team, tune in to an online tutorial here on Friday, Sept. 25.


Intel AI Powers Samsung Medison’s Fetal Ultrasound Smart Workflow


What’s New: Samsung Medison and Intel are collaborating on new smart workflow solutions to improve obstetric measurements that contribute to maternal and fetal safety and can help save lives. Using an Intel® Core™ i3 processor, the Intel® Distribution of OpenVINO™ toolkit and OpenCV library, Samsung Medison’s BiometryAssist™ automates and simplifies fetal measurements, while LaborAssist™ automatically estimates the fetal angle of progression (AoP) during labor for a complete understanding of a patient’s birthing progress, without the need for invasive digital vaginal exams.

“Samsung Medison’s BiometryAssist is a semi-automated fetal biometry measurement system that automatically locates the region of interest and places a caliper for fetal biometry, demonstrating a success rate of 97% to 99% for each parameter1. Such high efficacy enables its use in the current clinical practice with high precision.”
–Professor Jayoung Kwon, MD PhD, Division of Maternal Fetal Medicine, Department of Obstetrics and Gynecology, Yonsei University College of Medicine, Yonsei University Health System in Seoul, Korea

Why It’s Needed: According to the World Health Organization, about 295,000 women died during and following pregnancy and childbirth in 2017, even as maternal mortality rates decreased. While every pregnancy and birth is unique, most maternal deaths are preventable. Research from the Perinatal Institute found that tracking fetal growth is essential for good prenatal care and can help prevent stillbirths when physicians are able to recognize growth restrictions.

“At Intel, we are focused on creating and enabling world-changing technology that enriches the lives of every person on Earth,” said Claire Celeste Carnes, strategic marketing director for Health and Life Sciences at Intel. “We are working with companies like Samsung Medison to adopt the latest technologies in ways that enhance patient safety and improve clinical workflows, in this case for the important and time-sensitive care provided during pregnancy and delivery.”

How It Works: BiometryAssist automates and standardizes fetal measurements in approximately 85 milliseconds with a single click, providing over 97% accuracy1. This allows doctors to spend more time talking with their patients while also standardizing fetal measurements, which have historically proved difficult to provide accurately. With BiometryAssist, physicians can quickly verify consistent measurements for high volumes of patients.
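To make OpenVINO’s role concrete, here is a sketch of how such a model might be served with the 2020-era Inference Engine Python API; the model files, input shape and device choice are hypothetical placeholders, not Samsung Medison’s actual deployment.

```python
# Sketch of OpenVINO inference (2020-era Inference Engine Python API).
# Model files and input shape below are hypothetical placeholders.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="biometry.xml", weights="biometry.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

input_name = next(iter(net.input_info))
ultrasound_frame = np.random.rand(1, 1, 512, 512).astype(np.float32)

# One synchronous inference call; OpenVINO's graph optimizations are what
# cut latency from ~480 ms to ~85 ms in Samsung's internal testing.
result = exec_net.infer({input_name: ultrasound_frame})
```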

“Samsung is working to improve the efficiency of new diagnostic features, as well as healthcare services, and the Intel Distribution of OpenVINO toolkit and OpenCV library have been a great ally in reaching these goals,” said Won-Chul Bang, corporate vice president and head of Product Strategy, Samsung Medison.

During labor, LaborAssist helps physicians estimate the fetal AoP and head direction. This enables both the physician and patient to understand fetal descent and the labor process, and to determine the best method for delivery. Delivery always carries risk, and slowing progress can lead to issues for the baby. Obtaining a more accurate, real-time picture of labor progression can help physicians determine the best mode of delivery and potentially reduce the number of unnecessary cesarean sections.
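For intuition about the measurement itself, the sketch below reduces the AoP to its underlying geometry: the angle at the inferior edge of the pubic symphysis between the symphysis’ long axis and a line toward the fetal skull. The landmark coordinates are made up; in LaborAssist they are localized automatically from the ultrasound image, and the clinical definition involves a tangent construction this toy omits.

```python
# Simplified geometry behind the angle of progression (AoP).
# Landmark coordinates are made-up placeholders for illustration.
import numpy as np

def angle_of_progression(symphysis_top, symphysis_bottom, skull_pt):
    """Angle (degrees) between two rays from the symphysis' inferior edge."""
    ray1 = np.asarray(symphysis_top) - np.asarray(symphysis_bottom)
    ray2 = np.asarray(skull_pt) - np.asarray(symphysis_bottom)
    cos_theta = ray1 @ ray2 / (np.linalg.norm(ray1) * np.linalg.norm(ray2))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

print(angle_of_progression((0, 10), (0, 0), (8, -5)))  # ~122 degrees
```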

“LaborAssist provides automatic measurement of the angle of progression as well as information pertaining to fetal head direction and estimated head station. So it is useful for explaining to the patient and her family how the labor is progressing, using ultrasound images which show the change of head station during labor. It is expected to be of great assistance in the assessment of labor progression and decision-making for delivery,” said Professor Min Jeong Oh, MD, PhD, Department of Obstetrics and Gynecology, Korea University Guro Hospital in Seoul, Korea.

BiometryAssist and LaborAssist are already in use in 80 countries, including the United States, Korea, Italy, France, Brazil and Russia. The solutions received Class 2 clearance from the FDA in 2020.

What’s Next: Intel and Samsung Medison will continue to collaborate to advance the state of the art in ultrasounds by accelerating AI and leveraging advanced technology in Samsung Medison’s next-generation ultrasound solutions, including Nerve Tracking, SW Beamforming and AI Module.

More Context: Samsung Automates Ultrasound Measurements to Improve Clinical Workflows (Case Study) | Artificial Intelligence at Intel | Intel Health and Life Sciences

Intel Customer Stories: Intel Customer Spotlight on Intel.com | Customer Stories on Intel Newsroom

The Small Print: 

1 Source: Internal Samsung testing. System configuration: Intel® Core™ i3-4100Q CPU @ 2.4 GHz, 8 GB memory; OS: 64-bit Windows 10. Inference time without OpenVINO enhancements was 480 milliseconds. Inference time with OpenVINO enhancements was 85 milliseconds.
