Go Robot, Go: AI Team at MiR Helps Factory Robots Find Their Way

Like so many software developers, Elias Sorensen has been studying AI. Now he and his 10-member team are teaching it to robots.

When the AI specialists at Mobile Industrial Robots, based in Odense, Denmark, are done, the first graduating class of autonomous machines will be on their way to factories and warehouses, powered by NVIDIA Jetson Xavier NX GPUs.

“The ultimate goal is to make the robots behave in ways humans understand, so it’s easier for humans to work alongside them. And Xavier NX is at the bleeding edge of what we are doing,” said Sorensen, who will provide an online talk about MiR’s work at GTC Digital.

MiR’s low-slung robots carry pallets weighing as much as 2,200 pounds. They sport lidar and proximity sensors, as well as multiple cameras the team is now linking to Jetson Xavier GPUs.

Inferencing Their Way Forward

The new digital brains will act as pilots. They’ll fuse sensor data to let the bots navigate around people, forklifts and other objects, dynamically re-mapping safety zones and changing speeds as needed.
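The kind of speed policy involved can be sketched in a few lines. The zone thresholds and speed curve below are illustrative assumptions for a sketch, not MiR's actual safety parameters:

```python
def pick_speed(obstacle_distance_m: float, max_speed_ms: float = 1.5) -> float:
    """Choose a travel speed from the distance to the nearest detected obstacle.

    Thresholds are illustrative, not MiR's actual safety parameters.
    """
    if obstacle_distance_m < 0.5:      # inside the protective stop zone
        return 0.0
    if obstacle_distance_m < 2.0:      # inside the slowdown zone
        # scale speed linearly between the two zone boundaries
        return max_speed_ms * (obstacle_distance_m - 0.5) / 1.5
    return max_speed_ms                # clear path: full speed

# Fuse the nearest reading across sensors, then pick a speed.
readings = {"lidar": 1.4, "camera": 1.1, "proximity": 3.0}
speed = pick_speed(min(readings.values()))
```

In a real robot this decision runs continuously against fused sensor data, so the safety zones effectively re-map themselves as people and forklifts move.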

The smart bots use NVIDIA’s DeepStream and TensorRT software to run AI inference jobs on Xavier NX, based on models trained on NVIDIA GPUs in the AWS cloud.

MiR chose Xavier for its combination of high performance at low power and price, as well as its wealth of software.

“Lowering the cost and power consumption for AI processing was really important for us,” said Sorensen. “We make small, battery-powered robots and our price is a major selling point for us.” He noted that MiR has deployed more than 3,000 robots to date to users such as Ford, Honeywell and Toyota.

The new autonomous models are working prototypes. The team is training their object-detection models in preparation for first pilot tests.

Jetson Nano Powers Remote Vision

It’s MiR’s first big AI product, but not its first ever. Since November, the company has shipped smart, standalone cameras powered by Jetson Nano GPUs.

The Nano-based cameras process video at 15 frames per second to detect objects. They’re networked with each other and other robots to enhance the robots’ vision and help them navigate.

Both the Nano cameras and Xavier-powered robots process all camera data locally, only sending navigation decisions over the network. “That’s a major benefit for such a small, but powerful module because many of our customers are very privacy minded,” Sorensen said.

MiR developed a tool its customers use to train the camera by simply showing it pictures of objects such as robots and forklifts. The ease of customizing the cameras is a big measure of the product’s success, he added.

AI Training with Simulations

The company hopes its smart robots will be equally easy to train for non-technical staff at customer sites.

But here the challenge is greater. Public roads have standard traffic signs, but every factory and warehouse is unique with different floor layouts, signs and types of pallets.

MiR’s AI team aims to create a simulation tool that places robots in a virtual work area that users can customize. Such a simulation could let users who are not AI specialists train their smart robots like they train their smart cameras today.

The company is currently investigating NVIDIA’s Isaac platform, which supports training through simulations.

MiR is outfitting its family of industrial robots for AI.

The journey into the era of autonomous machines is just starting for MiR. Its parent company, Teradyne, announced in February it is investing $36 million to create a hub for developing collaborative robots, aka co-bots, in Odense as part of a partnership with MiR’s sister company, Universal Robots.

Market watchers at ABI Research predict the co-bot market could expand to $12 billion by 2030. In 2018, Danish companies including MiR and Universal captured $995 million of that emerging market, according to Damvad, a Danish analyst firm.

With such potential and strong ingredients from companies like NVIDIA, “it’s a great time in the robotics industry,” Sorensen said.

The post Go Robot, Go: AI Team at MiR Helps Factory Robots Find Their Way appeared first on The Official NVIDIA Blog.

Get the Picture: For Latest in AI and Medical Imaging, Tune In to GTC Digital

Picture this: dozens of talks about AI in medical imaging, presented by experts from top radiology departments and academic medical centers around the world, all available free online.

That’s just a slice of GTC Digital, a vast library of live and on-demand webinars, training sessions and office hours from NVIDIA’s GPU Technology Conference.

Healthcare innovators across radiology, genomics, microscopy and more will share the latest AI and GPU-accelerated advancements in their fields through talks on GTC Digital.

Researchers in Sydney, Australia, are using AI to analyze brain scans. In Massachusetts, a researcher is segmenting the prostate gland from ultrasound images to help doctors fine-tune radiation doses. And in Munich, Germany, a startup is streamlining radiology reports to foster real-time reporting.

Read more about these standout speakers advancing the use of deep learning in medical imaging worldwide below. And register for GTC Digital for free to see the whole healthcare lineup.

Mental Math: Australian Center Uses AI to Analyze Brain Scans

When studying neurodegenerative disease, quantifying brain tissue loss over time helps physicians and clinical trialists monitor disease progression. Radiologists typically inspect brain scans visually and classify the brain shrinkage as “moderate” or “severe” — a qualitative assessment. With accelerated computing, brain tissue loss can instead be measured precisely and quantitatively, without losing time.

The Sydney Neuroimaging Analysis Centre conducts neuroimaging research as well as commercial image analysis for clinical research trials. At GTC Digital, SNAC will share how it uses NVIDIA GPUs to accelerate AI tools that automate laborious analysis tasks in its research workflow.

One model precisely isolates brain images from head scans, segmenting brain lesions for multiple sclerosis cases. The AI reduces the time to segment and determine the volume of brain lesions from up to 15 minutes for a manual examination down to just three seconds, regardless of the number or volume of lesions.
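The quantitative step is straightforward once a segmentation mask exists: count the labeled voxels and multiply by the voxel volume. A minimal sketch, where the mask layout and voxel size are illustrative rather than SNAC's actual pipeline:

```python
def lesion_volume_ml(mask, voxel_volume_mm3):
    """Total lesion volume from a binary 3D segmentation mask.

    `mask` is a nested list [slice][row][col] of 0/1 labels; the voxel
    size would come from the scan's acquisition parameters.
    """
    voxels = sum(v for sl in mask for row in sl for v in row)
    return voxels * voxel_volume_mm3 / 1000.0  # mm^3 -> millilitres

# Two 2x2 slices with three lesion voxels, 1 mm isotropic voxels.
mask = [[[1, 0], [1, 0]], [[0, 0], [0, 1]]]
print(lesion_volume_ml(mask, 1.0))  # 0.003 ml
```

Because the count is a single pass over the mask, the runtime is essentially independent of how many lesions there are, which is why the AI's three-second figure holds regardless of lesion number or volume.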

“NVIDIA GPUs and DGX systems are the core of our AI platform, and are transforming the delivery of clinical and research radiology with our AI innovation,” said Tim Wang, director of operations at SNAC. “We are particularly excited by the application of this technology to brain imaging.”

SNAC uses the NVIDIA Clara Train SDK’s AI-assisted annotation tools for model development and the NVIDIA Clara Deploy SDK for integration with clinical and research workflows. It’s also exploring the NVIDIA Clara platform as a tool for federated learning. The center relies on the NVIDIA DGX-1 server, NVIDIA DGX Station and GPU-powered PC workstations for both training and inference of its AI algorithms.

Harvard Researcher Applies AI to Prostate Cancer Therapy

Around one in nine men is diagnosed with prostate cancer at some point during his life. Medical imaging tools like ultrasound and MRI are key methods doctors use to check prostate health and plan for surgery and radiotherapy.

Davood Karimi, a research fellow at Harvard Medical School, is developing deep learning models to more quickly and accurately segment the prostate gland from ultrasound images — a difficult task because the boundaries of the prostate are often either not visible or blurry in ultrasound images.

“Accurate segmentation is necessary to make sure radiologists can deliver the needed radiation dose to the prostate, but avoid damaging critical nearby structures like the rectum or bladder,” he said.

In his GTC Digital talk, Karimi will do a deep dive into a research paper he presented at the prestigious MICCAI healthcare conference last year. Using an NVIDIA TITAN GPU, Karimi has accelerated neural network inference to under a second per scan, while improving accuracy over current segmentation techniques radiologists use.

German Company Streamlines Radiology Reports with NVIDIA Clara

Healthcare providers worldwide record their analyses of patient data, including medical images, into text-based reports. But no two radiologists or hospitals do it exactly the same.

Munich-based Smart Reporting GmbH aims to streamline and standardize the reporting workflow for radiologists. The company uses a structured reporting interface that organizes patient data and doctor’s notes into a consistent format.

Smart Reporting uses the NVIDIA Clara platform to segment prostate cancer lesions from medical images. This image annotation is loaded into a draft diagnosis report that radiologists can approve, edit or reject before generating a final report to provide to surgeons and other healthcare professionals.

A member of the NVIDIA Inception virtual accelerator program, Smart Reporting is working with major healthcare organizations including Siemens Healthineers.

“When we release a prototype for radiologists in the clinic, it’ll be essential to have almost real-time reporting,” said Dominik Noerenberg, the company’s chief medical officer. “We’re able to see that speedup running on multi-GPU containers in NGC.”

Noerenberg and Alvaro Sanchez, principal software engineer at Smart Reporting, will present a talk on the advantages of AI-enhanced radiology workflows at GTC Digital.

See the full lineup of healthcare talks on GTC Digital and register for free.

Main image shows a side-by-side comparison of brain segmentation. Left image shows manual segmentation, while right shows AI segmentation. Image courtesy of Sydney Neuroimaging Analysis Centre. 


Virtually Free GTC: 30,000 Developers and AI Researchers to Access Hundreds of Hours of No-Cost Sessions at GTC Digital

Just three weeks ago, we announced plans to take GTC online due to the COVID-19 crisis.

Since then, a small army of researchers, partners, customers and NVIDIA employees has worked remotely to produce GTC Digital, which kicks off this week.

GTC typically packs hundreds of hours of talks, presentations and conversations into a five-day event in San Jose.

Our goal with GTC Digital is to bring some of the best aspects of this event to a global audience, and make it accessible for months.

Hundreds of our speakers — among the most talented, experienced scientists and researchers in the world — agreed to participate. Apart from the instructor-led, hands-on workshops and training sessions, which require a nominal fee, we’re delighted to bring this content to the global community at no cost. And we’ve incorporated new platforms to facilitate interaction and engagement.

Accelerating Blender Cycles with NVIDIA RTX: Blender is an open-source 3D software package that comes with the Cycles renderer. Cycles is already a GPU-enabled path tracer, now supercharged with the latest generation of RTX GPUs. To further boost rendering speed, RTX AI features such as the OptiX Denoiser infer rendering results for a truly interactive ray-tracing experience.

We provided refunds to those who purchased a GTC 2020 pass, and those tickets have been converted to GTC Digital passes. Passholders just need to log in with GTC 2020 credentials to get started. Anyone else can attend with free registration.

Most GTC Digital content is for a technical audience of data scientists, researchers and developers. But we also offer high-level talks and podcasts on various topics, including women in data science, AI for business and responsible AI.

What’s in Store at GTC Digital

The following activities will be virtual events that take place at a specific time (early registration recommended). Participants will be able to interact in real time with the presenters.

Training Sessions:

  • Seven full-day, instructor-led workshops, from March 25 to April 2, on data science, deep learning, CUDA, cybersecurity, AI for predictive maintenance, AI for anomaly detection, and autonomous vehicles. Each full-day workshop costs $79.
  • Fifteen 2-hour training sessions running April 6-10, on various topics, including autonomous vehicles, CUDA, conversational AI, data science, deep learning inference, intelligent video analytics, medical imaging, recommendation systems, deep learning training at scale, and using containers for HPC. Each two-hour instructor-led session costs $39. 

Live Webinars: Seventeen 1-hour sessions, from March 24-April 8, on various topics, including data science, conversational AI, edge computing, deep learning, IVA, autonomous machines and more. Live webinars will be converted to on-demand content and posted within 48 hours. Free. 

Connect with Experts: Thirty-eight 1-hour sessions, from March 25-April 10, where participants can chat one-on-one with NVIDIA experts to get answers in a virtual classroom. Topics include conversational AI, recommender systems, deep learning training and autonomous vehicle development. Free. 

The following activities will be available on demand:

Recorded Talks: More than 150 recorded presentations with experts from leading companies around the world, speaking on a variety of topics such as computer vision, edge computing, conversational AI, data science, CUDA, graphics and ray tracing, medical imaging, virtualization, weather modeling and more. Free. 

Tech Demos: We’ll feature amazing demo videos, narrated by experts, highlighting how NVIDIA GPUs are accelerating creative workflows, enabling analysis of massive datasets and helping advance research. Free. 

AI Podcast: Several half-hour interviews with leaders across AI and accelerated computing will be posted over the next four weeks. Among them: Kathie Baxter, of Salesforce, on responsible AI; Stanford Professor Margot Gerritsen on women in data science and how data science intersects with AI; Ryan Coffee, of the SLAC National Accelerator Lab, on how deep learning is advancing physics research; and Richard Loft, of the National Center for Atmospheric Research, on how AI is helping scientists better model climate change. Free.

Posters: A virtual gallery of 140+ posters from researchers around the world showing how they are solving unique problems with GPUs. Registrants will be able to contact and share feedback with researchers. Free. 

For the Einsteins and Da Vincis of Our Time

The world faces extraordinary challenges now, and the scientists, researchers and developers focused on solving them need extraordinary tools and technology. Our goal with GTC has always been to help the world’s leading developers — the Einsteins and Da Vincis of our time — solve difficult challenges with accelerated computing. And that’s still our goal with GTC Digital.

Whether you work for a small startup or a large enterprise, in the public or private sector, wherever you are, we encourage you to take part, and we look forward to hearing from you.


A Taste for Acceleration: DoorDash Revs Up AI with GPUs

When it comes to bringing home the bacon — or sushi or quesadillas — DoorDash is shifting into high gear, thanks in part to AI.

The company got its start in 2013, offering deals such as delivering pad thai to Stanford University dorm rooms. Today with a phone tap, customers can order a meal from more than 310,000 vendors — including Chipotle, Walmart and Wingstop — across 4,000 cities in the U.S., Canada and Australia.

Part of its secret sauce is a digital logistics engine that connects its three-sided marketplace of merchants, customers and independent contractors the company calls Dashers. Each community taps into the platform for different reasons.

Using a mix of machine-learning models, the logistics engine serves personalized restaurant recommendations and delivery-time predictions to customers who want on-demand access to their local businesses. Meanwhile, it assigns Dashers to orders and sorts through trillions of options to find their optimal routes while calculating delivery prices dynamically.

The work requires a complex set of related algorithms embedded in numerous machine-learning models, crunching ever-changing data flows. To accelerate the process, DoorDash has turned to NVIDIA GPUs in the cloud to train its AI models.

Training in One-Tenth the Time

Moving from CPUs to GPUs for AI training netted DoorDash a 10x speed-up. Migrating from single to multiple GPUs accelerated its work another 3x, said Gary Ren, a machine-learning engineer at DoorDash who will describe the company’s approach to AI in an online talk at GTC Digital.

“Faster training means we get to try more models and parameters, which is super critical for us — faster is always better for training speeds,” Ren said.

“A 10x training speed-up means we spin up cloud clusters for a tenth the time, so we get a 10x reduction in computing costs. The impacts of trying 10x more parameters or models is trickier to quantify, but it gives us some multiple of increased overall business performance,” he added.

Making Great Recommendations

So far, DoorDash has publicly discussed one of its deep-learning applications: a recommendation engine that has been in production for about two years. Recommendations are becoming more important as companies such as DoorDash realize consumers don’t always know what they’re looking for.

Potential customers may “hop on our app and explore their options so — given our huge number of merchants and consumers — recommending the right merchants can make a difference between getting an order or the customer going elsewhere,” he said.

Because its recommendation engine is so important, DoorDash continually fine-tunes it. For example, in its engineering blogs, the company describes how it crafts n-dimensional embedding vectors for each merchant to find nuanced similarities among vendors.
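The underlying idea can be sketched with cosine similarity over toy vectors. The merchant names and 3-d embeddings below are invented for illustration; DoorDash's real vectors are learned and far higher-dimensional:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: similar cuisines end up near each other in the space.
merchants = {
    "Thai Palace":  [0.9, 0.1, 0.2],
    "Bangkok Bowl": [0.8, 0.2, 0.1],
    "Taco Town":    [0.1, 0.9, 0.3],
}
query = merchants["Thai Palace"]
ranked = sorted(
    (m for m in merchants if m != "Thai Palace"),
    key=lambda m: cosine_similarity(query, merchants[m]),
    reverse=True,
)
print(ranked[0])  # prints "Bangkok Bowl"
```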

It has also adopted multi-level, multi-armed bandit algorithms that let AI models simultaneously exploit choices customers have liked in the past and explore new possibilities.
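A minimal single-level epsilon-greedy bandit illustrates the exploit/explore trade-off. This is the textbook variant under invented reward rates, not DoorDash's multi-level algorithm:

```python
import random

class EpsilonGreedyBandit:
    """Epsilon-greedy multi-armed bandit: mostly exploit the best-known arm,
    occasionally explore a random one."""

    def __init__(self, arms, epsilon=0.1, seed=0):
        self.arms = list(arms)
        self.epsilon = epsilon
        self.counts = {a: 0 for a in self.arms}
        self.values = {a: 0.0 for a in self.arms}   # running mean reward per arm
        self.rng = random.Random(seed)

    def select(self):
        if self.rng.random() < self.epsilon:          # explore
            return self.rng.choice(self.arms)
        return max(self.arms, key=self.values.get)    # exploit best estimate

    def update(self, arm, reward):
        self.counts[arm] += 1
        n = self.counts[arm]
        self.values[arm] += (reward - self.values[arm]) / n  # incremental mean

# Simulate recommendations with made-up conversion rates per cuisine.
bandit = EpsilonGreedyBandit(["thai", "tacos", "pizza"], epsilon=0.1)
rates = {"thai": 0.3, "tacos": 0.1, "pizza": 0.05}
for _ in range(1000):
    arm = bandit.select()
    reward = 1.0 if bandit.rng.random() < rates[arm] else 0.0
    bandit.update(arm, reward)
print(max(bandit.values, key=bandit.values.get))
```

Epsilon controls the balance: most selections go to the arm with the best observed reward, while a small fraction keep sampling the others in case preferences shift.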

Speaking of New Use Cases

While it optimizes its recommendation engine, DoorDash is exploring new AI use cases, too.

“There are several areas where conversations happen between consumers and dashers or support agents. Making those conversations quick and seamless is critical, and with improvements in NLP (natural-language processing) there’s definitely potential to use AI here, so we’re exploring some solutions,” Ren said.

NLP is one of several use cases that will drive future performance needs.

“We deal with data from the real world and it’s always changing. Every city has unique traffic patterns, special events and weather conditions that add variance — this complexity makes it a challenge to deliver predictions with high accuracy,” he said.

Other challenges the company’s growing business presents are in making recommendations for first-time customers and planning routes in new cities it enters.

“As we scale, those boundaries get pushed — our inference speeds are good enough today, but we’ll need to plan for the future,” he added.


Meet Your Match: AI Finds the Right Clinical Trial for Cancer Patients

Clinical trials need a matchmaker.

Healthcare researchers and pharmaceutical companies rely on trials to validate new, potentially life-saving therapies for cancer and other serious conditions. But fewer than 10 percent of cancer patients participate in clinical trials, and four out of five studies are delayed due to the challenges involved in recruiting participants.

For patients interested in participating in trials, there’s no easy way to determine which they’re eligible for. AI tool Ancora aims to improve the matchmaking process, using natural language processing models to pair patients with potential studies.

“This all started because my friend’s parent was diagnosed with stage 3 cancer,” said Danielle Ralic, founder and CEO of Intrepida, the Switzerland-based startup behind Ancora. “I knew there were trials out there, but when I tried to help them find options, it was so hard.”

The U.S. National Institutes of Health maintains a database of hundreds of thousands of clinical trials. Each study lists a detailed series of text-based requirements, known as inclusion and exclusion criteria, for trial participants.

While users can sort by condition and basic demographics, there may still be hundreds of studies to manually sort through — a time-consuming process of weeding through complex medical terminology.

Intrepida’s customized natural language processing models do the painstaking work of interpreting these text-heavy criteria for patients and physicians, processing new studies on NVIDIA GPUs. The studies listed in the Ancora tool are updated weekly, and users can fill out a simple, targeted questionnaire to shortlist suitable clinical trials, and receive alerts for new potential studies.

“We assessed what 20 questions we could ask that can most effectively knock a patient’s list down from, for example, 250 possible trials to 10,” Ralic said. The platform also shows patients useful information to help decide on a trial, such as how the treatment will be administered, and if it’s been approved in the past to treat other conditions.
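The shortlisting step amounts to evaluating each trial's parsed criteria against the patient's answers. A toy sketch, in which the criteria fields and question keys are invented for illustration and are not Ancora's schema:

```python
def shortlist(trials, answers):
    """Keep the trials whose criteria are all satisfied by the patient's answers."""
    def eligible(trial):
        c = trial["criteria"]
        return (
            c["min_age"] <= answers["age"] <= c["max_age"]
            and answers["stage"] in c["stages"]
            and (not c["no_prior_chemo"] or not answers["prior_chemo"])
        )
    return [t["id"] for t in trials if eligible(t)]

# Hypothetical trials with NLP-extracted inclusion/exclusion criteria.
trials = [
    {"id": "NCT-A", "criteria": {"min_age": 18, "max_age": 75,
                                 "stages": [2, 3], "no_prior_chemo": True}},
    {"id": "NCT-B", "criteria": {"min_age": 18, "max_age": 99,
                                 "stages": [3, 4], "no_prior_chemo": False}},
]
answers = {"age": 62, "stage": 3, "prior_chemo": True}
print(shortlist(trials, answers))  # ['NCT-B']
```

Each answered question prunes every trial whose criteria it contradicts, which is why a well-chosen set of 20 questions can collapse hundreds of candidates to a handful.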

Intrepida’s tool is currently available for breast and lung cancer patients. A physician version will soon be available to help doctors find trials for their patients. The company is a member of the NVIDIA Inception virtual accelerator program, which provides go-to-market support for AI startups — including NVIDIA Deep Learning Institute credits, marketing support and preferred pricing on hardware.

Finding the Perfect Match

Intrepida founder Danielle Ralic presented on AI and drug discovery at last year’s Annual Meeting of the Biophysical Society.

Physicians are the primary way patients hear about clinical trials, yet fewer than a quarter of patients learn about trials as an option from their doctors, who have limited time and resources to keep track of existing trials.

Ralic recalls being surprised to meet a stage 4 cancer survivor while hiking in Patagonia, and finding out the man had participated in a clinical trial for a new breakthrough drug.

“I asked him, how did you know about the trial? And he said he found out through a relative of his wife’s friend. That’s not how this should work,” Ralic said.

For physicians and patients, a better and more democratized way to discover clinical trials could lead to life-saving results. It could also speed up the research cycle by improving trial enrollment rates, helping pharmaceutical companies more quickly validate new drugs and bring them to market.

As members of the NVIDIA Inception program, Ralic says she and the Intrepida team were able to meet with other AI startups and with NVIDIA developers at the GPU Technology Conference held in Munich in 2018.

“We joined the program because, as a company that was working with NVIDIA GPUs already, we wanted to develop more sophisticated natural language models,” she said. “There’s been a lot to learn from NVIDIA team members and other Inception startups.”

Using NVIDIA GPUs has enabled Intrepida to shrink training times for one epoch from 20 minutes to just 12 seconds.

Diversifying the Data

As a female startup founder in an industry that to date has been dominated by men, Ralic says more diversity is key to improving the healthcare industry as a whole, and especially clinical trials.

“Healthcare is holistic. It involves so many different types of people and knowledge,” she said. “Without a diversity of perspectives, we can never address the problems the healthcare industry has.”

The data backs her up. Clinical trial participants in the United States skew overwhelmingly white and male. The lack of diversity in trials can lead to critical errors in drug dosage.

For example, in 2013, the U.S. Food and Drug Administration mandated that doses for several sleeping aids be cut in half for women. Because women metabolize the drugs differently, the original dose increased their risk of getting in a car accident the morning after taking a sleeping pill.

“If we don’t have a diverse trial population, we won’t know whether a patient of a different gender or ethnicity will react differently to a new drug,” Ralic said. “If we did it right from the start, we could improve how we prescribe medicine to people, because we’re all different.”


Now That Everyone’s a Gamer, New Gaming Technologies Matter More Than Ever

Henry Cavill is a gamer. Keanu Reeves is a gamer. Gaming used to be a hobby for the few, appreciated by fewer still.

Now it’s woven into every aspect of popular culture, even as its global reach surpasses that of movies, sports or television.

And why not? Smartphones have brought gaming to billions. Consoles to hundreds of millions more. But it’s PC gaming — always big — that has become a cultural phenomenon, celebrated in esports arenas filled with tens of thousands of screaming fans — and a global audience of half a billion more online.

Esports alone will generate $1.1 billion in revenue this year.

With more than 2.7 billion gamers worldwide, there’s no place where gaming, as a cultural phenomenon, isn’t on the move — even as the technology underpinning it continues to accelerate.

The support for real-time ray tracing built into NVIDIA GeForce RTX GPUs, introduced in 2018, gives game developers control over light, shadows and reflections, once only available to top moviemakers.

Deep learning — born on the NVIDIA GPUs that got their start with gaming — now makes graphics sharper and games smarter.

Laptops from every major brand featuring NVIDIA Max-Q design and a new generation of desktop-class mobile GPUs deliver the performance of a console so gamers can play amazing games anywhere. As a result, gaming laptop revenue has surged 12x in six years.

And cloud gaming — thanks to our long, continued investment in our GeForce NOW service — makes high-quality games available on the next billion devices: underpowered PCs, Macs and smartphones.

NVIDIA’s led the way on all these next-generation technologies, which are now spilling out across the gaming industry and accelerating a cultural phenomenon that’s breaking out everywhere you look.

Lights, Camera, Action

Ray tracing — which models the way light moves around the real world — has long been a mainstay of movies. Films rely on powerful banks of servers, known as render farms, to shape each scene.

The hardware-based ray-tracing capabilities built into NVIDIA GeForce RTX GPUs brought this cinematic tool to gaming, letting developers create more immersive environments.

But the change goes beyond just realistic light, shadows and reflections.

Real-time ray tracing helps game developers move faster and gives gamers much more freedom. By modeling the way light moves, in real time, ray tracing promises to free developers from painstakingly “pre-baking” every scene.

Hardware-accelerated ray tracing also makes the task of producing that pre-computed lighting less onerous. GPUs accelerate that process by an order of magnitude over traditional approaches.

Just look to Microsoft’s Minecraft for an example.

Minecraft with RTX pours real-time ray tracing into the world-building game, transforming Minecraft’s blocky environment into a more cinematic world.

Modeling a few of the billions of possible configurations of the game would have been impossible just a few years ago.

Real-time ray tracing, however, means gamers get to play in a sandbox where the way a scene is lit reacts to the environment they create.

And we’re working to bring the benefits of ray tracing to as many gamers as possible, first.

We’ve worked to enable ray tracing on our RTX GPUs, and with Microsoft and the Khronos Group to advance software standards that make it possible far beyond PCs.

That work continues with the developers and publishers creating the next generation of games across all platforms.

Faster Games

Games are also getting faster. That’s because the modern AI revolution — grounded in deep learning — was born on the NVIDIA GPUs gamers rely on. And now that revolution is coming back to where it all started, gaming, to make games faster still.

Powered by Turing’s Tensor Cores, which perform lightning-fast deep neural network processing, GeForce RTX GPUs support Deep Learning Super Sampling (DLSS). DLSS applies deep learning and AI to rendering techniques, resulting in crisp, smooth edges on rendered objects in games.

NVIDIA GPUs are being used to power sophisticated deep learning algorithms that allow the style from a work of art to be applied to an entire scene, changing the look and feel of a gaming experience.

More’s coming. That’s because the more AI races ahead, the more of these capabilities can be deployed in games, thanks to the NVIDIA GPUs that power them.

Imagine game developers being able to tap into powerful AI voice-generation algorithms to create realistic and emotive voices from a script — making games available in all the globe’s major languages.

Or, even wilder, imagine a conversational AI able to generate that script — and react to your actions — on the fly. Or imagine a game’s non-player characters actually learning from players, making encounters constantly evolving and challenging.


Cloud Gaming

Cloud gaming brings state-of-the-art experiences to every platform without waiting.

It extends the highest quality gaming to the billion people who are playing on phones and tablets.

And it gives these gamers the same freedom as with a PC, so they can play the games they already enjoy on their laptops and PCs in more places.

It’s another way to access the games they already own — and that’s what we’ve done with GeForce NOW, which we opened to all last month.

Everything Else

There’s much more, of course.

Driven by a host of NVIDIA design innovations, gaming laptops continue to get thinner and lighter.

NVIDIA G-SYNC display technology — now ubiquitous in competitive gaming — eliminates tearing and stuttering, giving competitive gamers more accuracy, so they compete more effectively.

Variable rate shading, a technology pioneered by NVIDIA, increases rendering performance and quality by varying the shading rate for different regions of the frame. It will be coming to the next-generation Xbox.

NVIDIA has pioneered a host of technologies designed to reduce latency, or the lag, between a gamer’s input and what they see on their screens.

And equipped with a new generation of NVIDIA-pioneered game-streaming tools, gamers on Twitch and YouTube wield influence that rivals the celebrities who now flaunt their gaming credentials.

Features like these — and performance like no other — are why GeForce gamers will continue to enjoy the best experiences first on the most anticipated games, such as Cyberpunk 2077.

There’s no part of this ecosystem that we’re not working to support. Even the parts that aren’t cool … yet.

Gamer? Dig into our GeForce channel for all the latest news and deep-dive videos detailing the latest developments.

Image credit: DOTA 2 The International, Some Rights Reserved


The post Now That Everyone’s a Gamer, New Gaming Technologies Matter More Than Ever appeared first on The Official NVIDIA Blog.

Aiforia Paves Path for AI-Assisted Pathology

Pathology, the study and diagnosis of disease, is a growth industry.

As the global population ages and diseases such as cancer become more prevalent, demand for keen-eyed pathologists who can analyze medical images is on the rise. In the U.K. alone, about 300,000 tests are carried out daily by pathologists.

But there’s an acute personnel shortage, globally. In the U.S., there are only 5.7 pathologists for every 100,000 people. By 2030, this number is expected to drop to 3.7. In the U.K., a survey by the Royal College of Pathology showed that only 3 percent of histopathology departments had enough staff to meet demand. In some parts of Africa, there is only one pathologist for every 1.5 million people.

While pathologists are under increasing pressure to analyze as many samples as possible, patients are having to endure lengthy wait times to get their results.

Aiforia, a member of the NVIDIA Inception startup accelerator program, has created a set of AI-based tools to speed and improve pathology workflows — and a lot more.

The company, which has offices in Helsinki and Cambridge, Mass., enables tedious tasks to be automated and complex challenges to be solved by unveiling quantitative data from tissue samples.

This helps pathologists overcome common obstacles to the development of versatile, scalable processes for medical research, drug development and diagnostics.

“Today we can already support pathologists with AI-assisted analysis, but AI can do so much more than that,” said Kaisa Helminen, CEO of Aiforia. “With deep learning AI, we are able to extract more information from a patient tissue sample than what’s been previously possible due to limitations of the human eye.

“This way, we are able to promote new discoveries from morphological patterns and facilitate more accurate and more personalized treatment for the patient,” she said.

Hidden Figures

AI has made it possible to automate medical imaging tasks that have traditionally proved nearly impossible for the human eye to handle. And it can reveal information that was previously hidden in image data.

With Aiforia’s AI tool assisting the diagnostic process, pathologists can improve the efficiency, accuracy and reproducibility of their results.

Its cloud-based deep learning image analysis platform, Aiforia Create, allows for the rapid development of AI-powered image analysis algorithms, initially optimized for digital pathology applications.

Aiforia initially developed its platform with a focus on cancer, as well as neurological, infectious and lifestyle diseases, but is now expanding it to other medical imaging domains.

For those who want to develop algorithms for a specific task, Aiforia Create provides domain experts with unique self-service AI development tools.

Aiforia trains its image analysis AI models using convolutional neural networks on NVIDIA GPUs. These networks are able to learn, detect and quantify specific features of interest in medical images generated by microscope scanners, X-rays, MRI or CT scans.
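Aiforia’s production models are trained convolutional networks, which the article doesn’t publish. As a stand-in for the “detect and quantify” step alone, this sketch counts connected bright regions (say, stained cells) in an already-thresholded image mask — the data and threshold are made up:

```python
# Toy quantification step: count 4-connected regions of 1s (e.g., stained
# cells) in a binary mask, using an iterative flood fill.

def count_features(mask):
    """Count 4-connected regions of 1s in a 2D binary mask."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1          # a new, unvisited region
                stack = [(r, c)]
                while stack:        # flood-fill the whole region
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols \
                            and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x),
                                  (y, x + 1), (y, x - 1)]
    return count

tissue = [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 1, 0],
]
print(count_features(tissue))  # 3 distinct regions
```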

Its users can upload a handful of medical images at a time to the platform, which uses active learning techniques to increase the efficiency of annotating images for AI training.
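The core of active learning is simple to illustrate: have the current model score the unlabeled pool, then ask the expert to annotate only the images the model is least sure about. The scoring here is a placeholder — Aiforia’s actual selection strategy isn’t described in the article:

```python
# Minimal active-learning selection: rank unlabeled samples by the model's
# confidence (its top class probability) and pick the k least confident
# ones for expert annotation. The predictions below are invented.

def least_confident(predictions, k):
    """Return the k sample names whose top class probability is lowest."""
    ranked = sorted(predictions.items(), key=lambda kv: max(kv[1]))
    return [name for name, _ in ranked[:k]]

# Hypothetical per-image class probabilities from the current model.
preds = {
    "slide_01": [0.98, 0.02],  # confident -> skip
    "slide_02": [0.55, 0.45],  # uncertain -> worth annotating
    "slide_03": [0.51, 0.49],  # most uncertain
    "slide_04": [0.90, 0.10],
}
print(least_confident(preds, 2))  # ['slide_03', 'slide_02']
```

Each round of annotating the most informative samples lets the model improve with far fewer labeled images than labeling the pool exhaustively.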

Users don’t need to invest in local hardware or software — instead they can access the software services via an online platform, hosted on Microsoft Azure. The platform can be deployed instantly and scales up easily.

Unpacking Parkinson’s

Aiforia’s tools are also being used to improve the diagnosis of Parkinson’s disease, a debilitating neurological condition affecting around one in 500 people.

The disease is caused by a loss of nerve cells in a part of the brain called the substantia nigra. This loss causes a reduction in dopamine, a chemical that helps regulate the movement of the body.

Researchers are working to uncover what causes the loss of nerve cells in the first place. Doing this requires collecting unbiased estimates of brain cell (neuron) numbers, but this process is extremely labor-intensive, time-consuming and prone to human error.

University of Helsinki researchers collaborated with Aiforia to mitigate the challenges of traditional neuron-counting methods. They uploaded brain histology images to Aiforia Hub, then used the Aiforia Create tool to quantify the number of dopamine neurons in the samples.

Introducing the computerized counting of neurons improves the reproducibility of results, reduces the impact of human error and makes analysis more efficient.

“It’s been studied so many times that if you send the same microscope slide to five different pathologists, you get different results,” Helminen said. “Using AI can help bring consistency and reproducibility, where AI is acting as a tireless assistant or like a ‘second opinion’ for the expert.”

The study carried out at the University of Helsinki would typically have taken weeks or even months without AI. Using Aiforia’s tools, the research team cut its analysis time by 99 percent, freeing up time to further its work toward finding a cure for Parkinson’s.

Aiforia Create is sold with a research use status and is not intended for diagnostic procedures.

The post Aiforia Paves Path for AI-Assisted Pathology appeared first on The Official NVIDIA Blog.

From ER Visit to AI Startup, CloudMedx Pursues Predictive Healthcare Models

Twice Sahar Arshad’s father-in-law went to an emergency room in Pakistan complaining of frequent headaches. Twice doctors sent him home with a diagnosis of allergies.

Turns out he was suffering from a subdural hematoma — bleeding inside the head. Following the second misdiagnosis, he went into a coma and required emergency brain surgery. (He has since made a full recovery.)

Arshad and her husband, Tashfeen Suleman — both computer scientists living in Bellevue, Wash., at the time — afterwards tried to get to the root of the inaccurate diagnoses. The hematoma turned out to be a side effect of a new medication Suleman’s father had been prescribed a couple weeks prior. And he lacked physical symptoms like slurred speech and difficulty walking, which would have prompted doctors to order a CT scan and detect the bleeding earlier.

Too Much Data, Too Little Time

It’s a common problem, Arshad and Suleman found. Physicians often have to rely on limited information, either because there’s insufficient data on a patient or because there’s not enough time to analyze large datasets.

The couple thought AI could help address this challenge. In late 2014, they together founded CloudMedx, a Palo Alto-based startup that develops predictive healthcare models for health providers, insurers and patients.

A member of the NVIDIA Inception virtual accelerator program, CloudMedx is working with the University of California, San Francisco; Barrow Neurological Institute, a member of Dignity Health, a nonprofit healthcare organization; and some of the largest health insurers in the country.

Its AI models, trained using NVIDIA V100 Tensor Core GPUs through Amazon Web Services, can help automate medical coding, predict disease progression and determine the likelihood a patient may have a complication and need to be readmitted to the hospital within 30 days.

“What we’ve built is a natural language model that understands how different diseases, symptoms and medications are related to each other,” said Arshad, chief operating officer at CloudMedx. “If we’d had this tool in Tashfeen’s father’s case, it would have flagged the risk of internal head hemorrhaging and recommended obtaining a CT scan.”

Putting AI to Work on Risk Assessment

The CloudMedx team has developed a deep neural network that can process medical data to provide risk assessment scores, saving clinicians time and providing personalized insight for patients. It’s trained on a dataset of 54 million patient encounters.
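The article doesn’t publish CloudMedx’s model, so the following is only a schematic of how a risk score can be produced: weighted patient features squashed into the 0-to-1 range with a logistic function. The features, weights and bias are all invented for illustration:

```python
# Schematic risk score: a logistic function over weighted patient
# features. Feature names, weights and bias are hypothetical.
import math

WEIGHTS = {"age_over_65": 1.2, "prior_admissions": 0.9,
           "chf_diagnosis": 1.5, "on_anticoagulants": 0.6}
BIAS = -2.5  # baseline log-odds for a patient with no risk factors

def readmission_risk(patient):
    """Map a feature dict to a 30-day readmission risk in (0, 1)."""
    z = BIAS + sum(WEIGHTS[f] * v for f, v in patient.items())
    return 1 / (1 + math.exp(-z))  # logistic squashing

low = {"age_over_65": 0, "prior_admissions": 0,
       "chf_diagnosis": 0, "on_anticoagulants": 0}
high = {"age_over_65": 1, "prior_admissions": 2,
        "chf_diagnosis": 1, "on_anticoagulants": 1}
print(round(readmission_risk(low), 2), round(readmission_risk(high), 2))
```

A real clinical model would learn such weights from the training data rather than hand-set them, and would draw on far richer inputs.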

In a study to evaluate its deep learning model, the clinical AI tool took a mock medical exam — and outperformed human doctors by 10 percent, on average. On their own, physicians scored between 68 and 81 percent. When taking the exam along with CloudMedx AI, they achieved a high score of 91 percent.

The startup’s AI models are used in multiple tools, including a coding analyzer that converts doctor’s notes into a series of medical codes that inform the billing process, as well as a clinical analyzer that evaluates a patient’s health records to generate risk assessments.
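A deliberately simplified stand-in for the coding analyzer shows the input-output shape of the task: map phrases in a clinician’s note to billing codes. The real product uses trained language models rather than keyword lookup; the codes below are standard ICD-10 examples used only for illustration:

```python
# Toy coding analyzer: keyword lookup from note text to ICD-10 codes.
# CloudMedx's production system uses trained NLP models instead.

CODE_BOOK = {
    "type 2 diabetes": "E11.9",            # type 2 diabetes, no complications
    "hypertension": "I10",                 # essential hypertension
    "congestive heart failure": "I50.9",   # heart failure, unspecified
}

def extract_codes(note):
    """Return the sorted set of codes whose phrase appears in the note."""
    text = note.lower()
    return sorted({code for phrase, code in CODE_BOOK.items()
                   if phrase in text})

note = "Patient with hypertension and type 2 diabetes, stable on metformin."
print(extract_codes(note))  # ['E11.9', 'I10']
```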

CloudMedx is collaborating with UCSF’s Division of Gastroenterology to stratify patients awaiting liver transplants based on risk, so that patients can be matched with donors before their disease progresses too far for a transplant.

The company is also working with one of the largest health insurers in the U.S. to better identify congestive heart failure patients with a high risk of readmission to the hospital. With these insights, health providers can follow up more often with at-risk patients, reducing readmissions and potentially saving billions of dollars in treatment costs.

Predictive Analytics for Every Healthcare Player

Predictive analytics can even improve the operational side of healthcare, giving hospitals a heads-up when they might need additional beds or staff members to meet rising patient demand.

“It’s an expensive manual process to find additional resources and bring on extra nurses at the last minute,” Arshad said. “If hospitals are able to use AI tools for surge prediction, they can better plan resources ahead of time.”

In addition to providing new insights for health providers and payers, these tools process large amounts of medical data in a fraction of the time it would take humans.

CloudMedx has also developed an AI tool for patients. Available on the Medicare website to its 53 million patient beneficiaries, the system helps users access their own claims data, correlates a person’s medical history with symptoms, and will soon also estimate treatment costs.

NVIDIA Inception Program

As members of the NVIDIA Inception program, the CloudMedx team was able to reach out to NVIDIA developers and the company’s healthcare team for help with some of the challenges they faced when scaling up for cloud deployment.

Inception helps startups during critical stages of product development, prototyping and deployment with tools and expertise to help early-stage companies grow.

Both Suleman and Arshad have spoken at NVIDIA’s annual GPU Technology Conference, with Arshad participating in a Women@GTC healthcare panel last year. The conference has helped the team meet some of their customers, said Arshad, who’s also a finalist for Entrepreneur of the Year at the 2020 Women in IT Awards New York.

Check out the healthcare track for GTC, taking place in San Jose, March 22-26.

The post From ER Visit to AI Startup, CloudMedx Pursues Predictive Healthcare Models appeared first on The Official NVIDIA Blog.

Putting AI on Trials: Deep 6 Speeds Search for Clinical-Study Recruits

Bringing a new medical treatment to market is a slow, laborious process — and for a good reason: patient safety is the top priority.

But when recruiting patients to test promising treatments in clinical trials, the faster the better.

“Many people in medicine have ideas of how to improve healthcare,” said Wout Brusselaers, CEO of Pasadena, Calif.-based startup Deep 6 AI. “What’s stopping them is being able to demonstrate that their new process or new drug works, and is safe and effective on real patients. For that, they need the clinical trial process.”

Over the past decade, the number of cancer clinical trials has grown 17 percent a year, on average. But nearly a fifth of these studies fail to recruit enough participants who fit the trials’ sometimes very specific criteria, even after three years of searching — and the problem isn’t getting any simpler.

“In the age of precision medicine, clinical trial criteria are getting more challenging,” Brusselaers said. “When developing a drug that is targeting patients with a rare genetic mutation, you have to be able to find those specific patients.”

By analyzing medical records with AI, Deep 6 can identify a patient population for clinical trials within minutes, accelerating what’s traditionally a months-long process. Major cancer centers and pharmaceutical companies, including Cedars-Sinai Medical Center and Texas Medical Center, are using the AI tool. They’ve matched more than 100,000 patients to clinical trials so far.

The startup’s clinical trial acceleration software has specific tools to help hospitals recommend available trials to patients and to help pharmaceutical companies track and accelerate patient recruitment for their studies. Future versions of the software could also be made available for patients to browse trials.

A Match Made in AI

Deep 6 AI is a member of the NVIDIA Inception virtual accelerator program, which helps startups scale faster. The company uses an NVIDIA TITAN GPU to accelerate the development of its custom AI models that analyze patient data to identify and label clinical criteria relevant to trials.

“It’s more efficient and less expensive for us to develop our models on premises,” Brusselaers said. “We could turn around models right away and iterate faster, without having to wait to rerun the code.”

While the tool can be used for any diagnostic area or medical condition, Brusselaers says over a quarter of trials on the platform are oncology studies, followed closely by cardiology.

Trained on a combination of open-source databases and real-world data from Deep 6’s partners, the AI models first identify specific mentions of clinical terminology and medical codes in patient records with natural language processing.
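One wrinkle in that first step is that clinical text is full of negated mentions (“no evidence of…”), which must not be counted as findings. Deep 6’s actual models are trained NLP networks; this rule-based sketch, with an invented term list and negation cues, only illustrates the problem:

```python
# Toy clinical-term spotter with naive negation handling: a term counts
# only if no negation cue appears in the text shortly before it.
# Term list and negation cues are invented for illustration.

TERMS = ["atrial fibrillation", "diabetes", "stroke"]
NEGATIONS = ["no evidence of", "denies", "negative for"]

def find_mentions(text):
    """Return terms mentioned in the text without a preceding negation."""
    text = text.lower()
    found = []
    for term in TERMS:
        idx = text.find(term)
        if idx == -1:
            continue
        window = text[max(0, idx - 30):idx]  # the 30 chars before the term
        if not any(neg in window for neg in NEGATIONS):
            found.append(term)
    return found

record = ("History of atrial fibrillation. Patient denies stroke. "
          "No evidence of diabetes.")
print(find_mentions(record))  # ['atrial fibrillation']
```

Production systems handle negation scope, abbreviations and misspellings with learned models rather than fixed windows, but the filtering problem is the same.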

Additional neural networks analyze unstructured data like doctor’s notes and pathology reports to gather additional information about a patient’s symptoms, diagnoses and treatments — even detecting potential conditions not mentioned in the medical records.

Deep 6’s tool then creates a patient graph that represents the individual’s clinical profile. Doctors and researchers can easily match these graphs to develop trial cohorts, replacing a time-consuming, often unfruitful manual process.
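Once each profile is reduced to a set of clinical attributes, cohort matching reduces to set logic: a patient qualifies if they have every inclusion criterion and no exclusion criterion. The criteria and profiles below are invented, and real patient graphs carry far richer structure than flat attribute sets:

```python
# Toy cohort matching: set containment for inclusion criteria,
# set intersection for exclusion criteria. All data is hypothetical.

def matches(profile, inclusion, exclusion):
    """True if the profile holds all inclusion and no exclusion criteria."""
    return inclusion <= profile and not (exclusion & profile)

trial_inclusion = {"her2_positive", "age_18_plus"}
trial_exclusion = {"prior_chemo"}

patients = {
    "p1": {"her2_positive", "age_18_plus"},
    "p2": {"her2_positive", "age_18_plus", "prior_chemo"},  # excluded
    "p3": {"age_18_plus"},                                  # missing criterion
}
cohort = [p for p, attrs in patients.items()
          if matches(attrs, trial_inclusion, trial_exclusion)]
print(cohort)  # ['p1']
```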

Researchers at Los Angeles’ Cedars-Sinai Smidt Heart Institute — one of the startup’s clients — had enrolled just two participants for a new clinical trial after six months of recruitment effort. Using Deep 6 AI software, they found 16 qualified candidates in an hour.

Texas Medical Center, a collection of over 60 health institutions, is rolling out Deep 6 software across its network to replace the typical process of finding clinical trial candidates, which requires associates to manually flip through thick folders of medical records.

“It’s just a long slog to find patients for clinical trials,” said Bill McKeon, CEO of Texas Medical Center. Using Deep 6’s software tool “is just completely transforming.”

McKeon says in one case, it took six months to find a dozen eligible patients for a trial with traditional recruitment efforts. The same matching process through Deep 6’s software found 80 potential participants in minutes.

The post Putting AI on Trials: Deep 6 Speeds Search for Clinical-Study Recruits appeared first on The Official NVIDIA Blog.

An AI for Detail: Nanotronics Brings Deep Learning to Precision Manufacturing

Matthew Putman, this week’s guest on the AI Podcast, knows that the devil is in the details. That’s why he’s the co-founder and CEO of Nanotronics, a Brooklyn-based company providing precision manufacturing enhanced by AI, automation and 3D imaging.

He sat down with AI Podcast host Noah Kravitz to discuss how running deep learning networks in real-time on factory floors produces the best possible products, and how Nanotronics models and equipment are finding success in fields ranging from the semiconductor industry to genome sequencing.

Key Points From This Episode:

Nanotronics develops universal AI models that can be customized depending on individual customers’ processes and deployments.

The AI models that Nanotronics deploys at a customer site communicate directly from the GPU to the machine, bypassing the cloud, to ensure security and speed.

When the new Nanotronics factory is finished, the company will use its own deep learning models to ensure precision manufacturing as it constructs its equipment.


  • “It’s a great advantage to our customers to actually have a smaller footprint because we have a computationally driven system, rather than a system that requires a lot of very expensive large hardware” — Matthew Putman [7:14]
  • “We can adjust actual controls in real time to make corrective actions for any type of anomalies that occur. It’s not so important to us what the absolute value is on each of the stations, it’s that by the end, the product has the most reproducibility and highest quality possible” — Matthew Putman [8:47]

You Might Also Like

No More Trying Taxes: Intuit Uses AI for Smarter Finances

As tax season looms closer, listen to Intuit Senior Vice President and Chief Data Officer Ashok Srivastava as he explains how the personal finance giant utilizes AI to help customers.

UC Berkeley’s Pieter Abbeel on How Deep Learning Will Help Robots Learn

Pieter Abbeel, director of the Berkeley Robot Learning Lab and cofounder of deep learning and robotics company Covariant AI, discusses how AI is the key to producing more efficient and natural robots.

Astronomers Turn to AI as New Telescopes Come Online

As our view of the skies improves, astronomers are accumulating more data than they can process. Brant Robertson, visiting professor at the Institute for Advanced Study in Princeton, explains how AI can transform data into discoveries.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.


Make Our Podcast Better

Have a few minutes to spare? Fill out this short listener survey. Your answers will help us make a better podcast.

The post An AI for Detail: Nanotronics Brings Deep Learning to Precision Manufacturing appeared first on The Official NVIDIA Blog.