Putting AI on Trials: Deep 6 Speeds Search for Clinical-Study Recruits

Bringing a new medical treatment to market is a slow, laborious process — and for good reason: patient safety is the top priority.

But when recruiting patients to test promising treatments in clinical trials, the faster the better.

“Many people in medicine have ideas of how to improve healthcare,” said Wout Brusselaers, CEO of Pasadena, Calif.-based startup Deep 6 AI. “What’s stopping them is being able to demonstrate that their new process or new drug works, and is safe and effective on real patients. For that, they need the clinical trial process.”

Over the past decade, the number of cancer clinical trials has grown 17 percent a year, on average. But nearly a fifth of these studies fail to recruit enough participants who fit their sometimes highly specific criteria, even after three years of searching — and the problem isn’t getting any easier.

“In the age of precision medicine, clinical trial criteria are getting more challenging,” Brusselaers said. “When developing a drug that is targeting patients with a rare genetic mutation, you have to be able to find those specific patients.”

By analyzing medical records with AI, Deep 6 can identify a patient population for clinical trials within minutes, accelerating what’s traditionally a months-long process. Major cancer centers and pharmaceutical companies, including Cedars-Sinai Medical Center and Texas Medical Center, are using the AI tool. They’ve matched more than 100,000 patients to clinical trials so far.

The startup’s clinical trial acceleration software has specific tools to help hospitals recommend available trials to patients and to help pharmaceutical companies track and accelerate patient recruitment for their studies. Future versions of the software could also be made available for patients to browse trials.

A Match Made in AI

Deep 6 AI is a member of the NVIDIA Inception virtual accelerator program, which helps startups scale faster. The company uses an NVIDIA TITAN GPU to accelerate the development of its custom AI models that analyze patient data to identify and label clinical criteria relevant to trials.

“It’s more efficient and less expensive for us to develop our models on premises,” Brusselaers said. “We could turn around models right away and iterate faster, without having to wait to rerun the code.”

While the tool can be used for any diagnostic area or medical condition, Brusselaers says over a quarter of trials on the platform are oncology studies, followed closely by cardiology.

Trained on a combination of open-source databases and real-world data from Deep 6’s partners, the AI models first identify specific mentions of clinical terminology and medical codes in patient records with natural language processing.

Additional neural networks analyze unstructured data like doctors’ notes and pathology reports to gather further information about a patient’s symptoms, diagnoses and treatments — even detecting potential conditions not mentioned in the medical records.

Deep 6’s tool then creates a patient graph that represents the individual’s clinical profile. Doctors and researchers can easily match these graphs against trial criteria to build cohorts, replacing a time-consuming, often unfruitful manual process.
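
To get a feel for the idea, here is a minimal, hypothetical sketch of concept extraction and criteria matching in Python. The vocabulary, codes, note and matching rules are invented for illustration; this is not Deep 6’s actual model or pipeline.

```python
# Hypothetical illustration: extract clinical concepts from a free-text note,
# then match the resulting profile against a trial's inclusion/exclusion criteria.
# The vocabulary, codes and note are invented; this is not Deep 6's actual model.
import re

VOCAB = {
    "non-small cell lung cancer": "C34.90",
    "egfr mutation": "EGFR+",
    "metformin": "RX:metformin",
    "type 2 diabetes": "E11",
}

def extract_concepts(note):
    """Crude stand-in for NLP concept extraction over a doctor's note."""
    text = note.lower()
    return {code for phrase, code in VOCAB.items() if re.search(re.escape(phrase), text)}

def matches_trial(patient_concepts, required, excluded):
    """A patient qualifies if every inclusion criterion is present and no exclusion is."""
    return required <= patient_concepts and not (excluded & patient_concepts)

note = "68 y/o with non-small cell lung cancer; EGFR mutation noted on pathology. On metformin."
profile = extract_concepts(note)
print(matches_trial(profile, required={"C34.90", "EGFR+"}, excluded={"E11"}))  # True
```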

Researchers at Los Angeles’ Cedars-Sinai Smidt Heart Institute — one of the startup’s clients — had enrolled just two participants for a new clinical trial after six months of recruitment effort. Using Deep 6 AI software, they found 16 qualified candidates in an hour.

Texas Medical Center, a collection of over 60 health institutions, is rolling out Deep 6 software across its network to replace the typical process of finding clinical trial candidates, which requires associates to manually flip through thick folders of medical records.

“It’s just a long slog to find patients for clinical trials,” said Bill McKeon, CEO of Texas Medical Center. Using Deep 6’s software tool “is just completely transforming.”

McKeon says in one case, it took six months to find a dozen eligible patients for a trial with traditional recruitment efforts. The same matching process through Deep 6’s software found 80 potential participants in minutes.


An AI for Detail: Nanotronics Brings Deep Learning to Precision Manufacturing

Matthew Putman, this week’s guest on the AI Podcast, knows that the devil is in the details. That’s why he’s the co-founder and CEO of Nanotronics, a Brooklyn-based company providing precision manufacturing enhanced by AI, automation and 3D imaging.

He sat down with AI Podcast host Noah Kravitz to discuss how running deep learning networks in real time on factory floors produces the best possible products, and how Nanotronics’ models and equipment are finding success in fields ranging from the semiconductor industry to genome sequencing.

Key Points From This Episode:

  • Nanotronics develops universal AI models that can be customized for individual customers’ processes and deployments.
  • The AI models that Nanotronics deploys at a customer site communicate directly from the GPU to the machine, without going through the cloud, to ensure security and speed.
  • When the new Nanotronics factory is finished (pictured above), the company will use its own deep learning models to ensure precision manufacturing as it builds its equipment.

Tweetables:

  • “It’s a great advantage to our customers to actually have a smaller footprint because we have a computationally driven system, rather than a system that requires a lot of very expensive large hardware” — Matthew Putman [7:14]
  • “We can adjust actual controls in real time to make corrective actions for any type of anomalies that occur. It’s not so important to us what the absolute value is on each of the stations, it’s that by the end, the product has the most reproducibility and highest quality possible” — Matthew Putman [8:47]

You Might Also Like

No More Trying Taxes: Intuit Uses AI for Smarter Finances

As tax season looms closer, listen to Intuit Senior Vice President and Chief Data Officer Ashok Srivastava as he explains how the personal finance giant utilizes AI to help customers.

UC Berkeley’s Pieter Abbeel on How Deep Learning Will Help Robots Learn

Pieter Abbeel, director of the Berkeley Robot Learning Lab and cofounder of deep learning and robotics company Covariant AI, discusses how AI is the key to producing more efficient and natural robots.

Astronomers Turn to AI as New Telescopes Come Online

As our view of the skies improves, astronomers are accumulating more data than they can process. Brant Robertson, visiting professor at the Institute for Advanced Study in Princeton, explains how AI can transform data into discoveries.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

  

Make Our Podcast Better

Have a few minutes to spare? Fill out this short listener survey. Your answers will help us make a better podcast.


Meet Six Smart Robots at GTC 2020

The GPU Technology Conference is like a new Star Wars movie. There are always cool new robots scurrying about.

This year’s event in San Jose, March 22-26, is no exception, with at least six autonomous machines expected on the show floor. Like C-3PO and BB-8, each one is different.

Among what you’ll see at GTC 2020:

  • a robotic dog that sniffs out trouble in complex environments such as construction sites
  • a personal porter that lugs your stuff while it follows your footsteps
  • a man-sized bot that takes inventory quickly and accurately
  • a short, squat bot that hauls as much as 2,200 pounds across a warehouse
  • a delivery robot that navigates sidewalks to bring you dinner

“What I find interesting this year is just how much intelligence is being incorporated into autonomous machines to quickly ingest and act on data while navigating around unstructured environments that sometimes are not safe for humans,” said Amit Goel, senior product manager for autonomous machines at NVIDIA and robot wrangler for GTC 2020.

The ANYmal C from ANYbotics AG (pictured above), based in Zurich, is among the svelte navigators, detecting obstacles and finding its own shortest path forward thanks to its NVIDIA Jetson AGX Xavier. The four-legged bot can slip through passages just 23.6 inches wide and climb stairs as steep as 45 degrees on a factory floor to inspect industrial equipment with its depth, wide-angle and thermal cameras.

The Gita personal robot will demo hauling your stuff at GTC 2020.

The folks behind the Vespa scooter will show Gita, a personal robot that can carry up to 40 pounds of your gear for four hours on a charge. It runs computer vision algorithms on a Jetson TX2 to identify and follow its owner’s legs on any hard surface.

Say cheese. Bossa Nova Robotics will show its retail robot that can scan a 40-foot supermarket aisle in 60 seconds, capturing 4,000 images that it turns into inventory reports with help from its NVIDIA Turing architecture RTX GPU. Walmart plans to use the bots in at least 650 of its stores.

Mobile Industrial Robots A/S, based in Odense, Denmark, will give a talk at GTC about how it’s adding AI with Jetson Xavier to its pallet-toting robots to expand their work repertoire. On the show floor, it will demonstrate one of the robots from its MiR family that can carry payloads up to 2,200 pounds while using two 3D cameras and other sensors to navigate safely around people and objects in a warehouse.

From the other side of the globe, ExaWizards Inc. (Tokyo) will show its multimodal AI technology running on robotic arms from Japan’s Denso Robotics. It combines multiple sensors to learn human behaviors and perform jobs such as weighing a set portion of water.

Walmart will use the Bossa Nova robot to help automate inventory taking in at least 650 of its stores.

Rounding out the cast, the Serve delivery robot from Postmates will make a return engagement at GTC. It can carry 50 pounds of goods for 30 miles, using a Jetson AGX Xavier and Ouster lidar to navigate sidewalks like a polite pedestrian. In a talk, a Postmates engineer will share lessons learned in its early deployments.

Many of the latest systems reflect the trend toward collaborative robotics that NVIDIA CEO Jensen Huang demonstrated in a keynote in December. He showed ways humans can work with and teach robots directly, thanks to an updated NVIDIA Isaac developer kit, now part of NVIDIA’s end-to-end robotics offering, that also speeds development by using AI and simulation to train robots.

Just for fun, GTC also will host races of AI-powered DIY robotic cars, zipping around a track on the show floor at speeds approaching 50 mph. You can sign up here if you want to bring your own Jetson-powered robocar to the event.

We’re saving at least one surprise in robotics for those who attend. To get in on the action, register here for GTC 2020.


NVIDIA Awards $50,000 Fellowships to Ph.D. Students for GPU Computing Research

Our NVIDIA Graduate Fellowship Program recently awarded up to $50,000 each to five Ph.D. students involved in GPU computing research.

Now in its 19th year, the fellowship program supports graduate students doing GPU-based work. We selected this year’s fellows from more than 300 applicants from a host of countries.

The fellows’ work puts them at the forefront of GPU computing, including projects in deep learning, graphics, high performance computing and autonomous machines.

“Our fellowship recipients are among the most talented graduate students in the world,” said NVIDIA Chief Scientist Bill Dally. “They’re working on some of the most important problems in computer science, and we’re delighted to support their research.”

The NVIDIA Graduate Fellowship Program is open to applicants worldwide.

Our 2020-2021 fellows are:

  • Anqi Li, University of Washington — Bridging the gap between robotics research and applications by exploiting complementary tools from machine learning and control theory
  • Benedikt Bitterli, Dartmouth College — Principled forms of sample reuse that unlock more efficient ray-tracing techniques for offline and real-time rendering
  • Vinu Joseph, University of Utah — Optimizing deep neural networks for performance and scalability
  • Xueting Li, University of California, Merced — Self-supervised learning and relation learning between different visual elements
  • Yue Wang, Massachusetts Institute of Technology — Designing sensible deep learning modules that learn effective representations of 3D data

And our 2020-2021 finalists are:

  • Guandao Yang, Cornell University
  • Michael Lutter, Technical University of Darmstadt
  • Yuanming Hu, Massachusetts Institute of Technology
  • Yunzhu Li, Massachusetts Institute of Technology
  • Zackory Erickson, Georgia Institute of Technology


How Evo’s AI Keeps Fashion Forward

Imagine if fashion houses knew that teal blue was going to replace orange as the new black. Or if retailers knew that tie dye was going to be the wave to ride when swimsuit season rolls in this summer.

So far, there hasn’t been an efficient way of getting ahead of consumer and market trends like these. But Italy-based startup Evo is helping retailers and fashion houses get a jump on changing tastes and a whole lot more.

The company’s deep-learning pricing and supply chain systems, powered by NVIDIA GPUs, let organizations quickly respond to changes — whether in markets, weather, inventory, customers, or competitor moves — by recommending optimal pricing, inventory and promotions in stores.

Evo is also a member of the NVIDIA Inception program, a virtual accelerator that offers startups in AI and data science go-to-market support, expertise and technology assistance.

The AI Show Stopper

Evo was born from a Ph.D. thesis by its founder, Fabrizio Fantini, while he was at Harvard.

Fantini, now the company’s CEO, discovered algorithms that could outperform even the most complex and expensive commercial pricing systems in use at the time.

“Our research was shocking, as we measured an immediate 30 percent reduction in the average forecast error rate, and then continuous improvement thereafter,” Fantini said. “We realized that the ability to ingest more data, and to self-learn, was going to be of strategic importance to any player with any intention of remaining commercially viable.”

The software, developed in the I3P incubator at the Polytechnic University of Turin, examines patterns in fashion choices and extracts signals that anticipate market demand.

Last year, Evo’s systems managed goods worth over 10 billion euros from more than 2,000 retail stores. Its algorithms changed over 1 million prices and physically moved over 15 million items, while generating more than 100 million euros in additional profit for customers, according to the company.

Nearly three dozen companies, including grocers and other retailers, as well as fashion houses, have already benefited from these predictions.

“Our pilot clients showed a 10 percent improvement in margin within the first 12 months,” Fantini said. “And longer term, they achieved up to 5.5 points of EBITDA margin expansion, which was unprecedented.”

GPUs in Vogue 

Evo uses NVIDIA GPUs to run neural network models that transform data into predictive signals of market trends. This allows clients to make systematic and profitable decisions.

Using a combination of advanced machine learning methods and statistics, the system transforms products into “functional attributes,” such as type of sleeve or neckline, and into “style attributes,” such as the color or silhouette.

It works off a database that maps the social media activity, internet patterns and purchase behaviors of over 1.3 billion consumers, a sample the company considers representative of the global population.

Then the system uses multiple algorithms and approaches, including meta-modeling, to process market data that is tagged automatically based on the clients, prices, products and characteristics of a company’s main competitors.

This makes the data directly comparable across different companies and geographies, which is one of the key ingredients required for success.
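
Meta-modeling of this kind can be sketched as stacking: several base forecasters predict demand from product and price features, and a meta-model learns how to weigh their predictions. The example below uses scikit-learn on synthetic data; it illustrates the general technique, not Evo’s proprietary system.

```python
# Illustrative stacking (meta-modeling) sketch on synthetic demand data -- not Evo's system.
import numpy as np
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical features: [price, discount, week_of_year, style-attribute score]
X = rng.random((2000, 4))
y = 100 - 60 * X[:, 0] + 25 * X[:, 1] + 10 * np.sin(2 * np.pi * X[:, 2]) + rng.normal(0, 2, 2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("forest", RandomForestRegressor(n_estimators=100, random_state=0)),
        ("knn", KNeighborsRegressor(n_neighbors=10)),
    ],
    final_estimator=Ridge(),  # the meta-model that combines the base forecasts
)
stack.fit(X_train, y_train)
print(f"held-out R^2: {stack.score(X_test, y_test):.3f}")
```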

“It’s a bit like Google Translate,” said Fantini. “Learning from its corpus of translations to make each new request smarter, we use our growing body of data to help each new prediction become more accurate, but we work directly on transaction data rather than images, text or voice as others do.”

These insights help retailers understand how to manage their supply chains and how to plan pricing and production even when facing rapid changes in demand.

In the future, Evo plans to use AI to help design fashion collections and forecast trends at increasingly earlier stages.


Image by Pexels.


Washington Goes West: GTC 2020 Explores AI in Federal Government

The future of government will come into focus in Silicon Valley next month when experts from industry and government converge to discuss AI and high performance computing at the NVIDIA GPU Technology Conference.

Following on the success of GTC DC 2019, GTC 2020, taking place March 22-26 in San Jose, Calif., will bring together hundreds of researchers, government executives and national lab directors to discuss how agencies can improve citizen services and advance science with accelerated computing.

The show features more than 600 talks on subjects including AI in government, autonomous machines, cybersecurity and disaster relief. In addition to tech giants like Amazon, Google and Microsoft, the show attracts leaders from a variety of labs and agencies, including Lawrence Berkeley National Laboratory, NASA, NIST, NIH, the National Center for Atmospheric Research, Lockheed Martin, Oak Ridge National Lab, Raytheon, Booz Allen Hamilton and the U.S. Postal Service.

NVIDIA founder and CEO Jensen Huang will kick off the event with a keynote on March 23 to explain how AI is revolutionizing industries from healthcare to robotics.

Autonomous Everything

One of the hottest topics at GTC 2020 will be autonomous machines. This year, experts from Ford Motor Company, Amazon, Microsoft, Google and more will discuss everything autonomous, from precision manufacturing to self-driving cars to mobile robots. Highlights include:

  • Anthony Rizk, research engineer, and Jimmy Nassif, head of IT planning systems, at BMW Group, will discuss their autonomous transportation robot that’s being put to use in assembly and production sites.
  • Pieter Abbeel, director of the Berkeley Robot Learning Lab and co-director of the Berkeley Artificial Intelligence Lab, will summarize recent AI research breakthroughs in robotics and potential areas of progress going forward.
  • Cyra Richardson, general manager of hardware innovation and robotics at Microsoft, will present Microsoft and NVIDIA’s shared platform that’s accelerating time-to-market for robotics manufacturers.
  • Claire Delaunay, vice president of engineering at NVIDIA, will host a panel of industry experts to talk about the fourth industrial revolution, with a focus on autonomous robots.

Attendees who want to get started in robotics and embedded systems can also take part in Jetson Developer Days on March 25 and 26. Experts and community members from NVIDIA will be on hand to provide information on AI at the edge, medical imaging, intelligent video analytics and robotics.

Fireside Sessions and Cybersecurity

U.S. Rep. Jerry McNerney, of California, will be taking part in a fireside chat to discuss the current and upcoming governmental approach to AI, along with what policies enterprises should expect.

The issue of cybersecurity is also on the agenda. Sessions to look out for:

  • Jian Chang, staff algorithm expert, and Sanjian Chen, staff algorithm engineer, both at Alibaba Group, will present on the importance of protecting deep-learning models for specific uses from adversarial attacks, and propose methods of defense.
  • Marco Schreyer, a researcher at the University of St. Gallen, will identify how financial accounting machine learning models are vulnerable to adversarial attacks and present recent developments on the topic.
  • Bartley Richardson, AI infrastructure manager and senior cybersecurity data scientist at NVIDIA, will be joined by several colleagues in a “Connect with the Experts” session — an hour-long, intimate Q&A — in which attendees can learn more about GPU-accelerated anomaly detection, threat hunting and more.

AI for the Environment

Disaster relief and climate modeling are two of the many fields that federal agencies are interested in. GTC will feature several sessions on these topics, including:

  • David John Gagne, machine learning scientist at the National Center for Atmospheric Research, will illustrate how deep learning can aid in predicting hurricane intensity.
  • Amulya Vishwanath, product marketing lead, and Chintan Shah, product manager, at NVIDIA, will explain how to train, build and deploy intelligent vision applications to aid with faster disaster relief and logistics.
  • Oliver Fuhrer, senior director at Vulcan Inc., will present new methods for climate modeling based on GPU-accelerated HPC.

Many government agencies and contractors rely on high performance computing for scientific research. This year’s GTC will include an inaugural HPC Summit, where attendees will see the latest technology in simulation, space exploration, energy and more. Leaders from Schlumberger, NVIDIA, Mellanox and Oak Ridge National Labs will host breakout sessions with Summit attendees.

Hands-On Training

For those who want to sharpen their skills, the NVIDIA Deep Learning Institute will be hosting 60+ instructor-led training sessions and 30+ self-paced courses throughout the conference, and six full-day workshops on Sunday, March 22.

Day-long workshops include “Applications of AI for Anomaly Detection,” where participants will learn methods to identify network intrusions, cyberthreats, counterfeit financial transactions and more. And “Applications of AI for Predictive Maintenance” will focus on how machine learning can help avoid unplanned downtimes and predict outcomes — crucial to industries such as manufacturing, aerospace and energy.

Register here to experience the future of AI at GTC 2020.


From Point AI to Point TB: DeepTek Detects Tuberculosis from X-Rays

Tuberculosis is an issue close to home for Pune, India-based healthcare startup DeepTek. India has the world’s highest prevalence of the disease — accounting for over one-quarter of the 10 million new cases each year.

It’s a fitting first project for the company, whose founders hope to greatly improve global access to medical imaging diagnostics with an AI-powered radiology platform. DeepTek’s DxTB tool screens X-ray images for pulmonary TB, flagging cases for prioritized review by medical experts.

India aims to eradicate TB by 2025, five years before the United Nations’ global goal to end the epidemic by 2030. Chest X-ray imaging is the most sensitive screening tool for pulmonary TB, helping clinicians determine which patients should be referred for further lab testing. But two-thirds of people worldwide lack access to even basic radiology services, in part due to high costs and insufficient infrastructure.

“There’s a huge shortage of imaging experts available to read X-ray scans,” said Amit Kharat, CEO of the startup and a clinical radiologist. “Since radiologists’ time is often sought for more demanding investigations like CT or MRI scans, this is an important gap where AI can add value.”

DeepTek is a member of NVIDIA Inception, a virtual accelerator program that provides early-stage companies with fundamental tools, expertise and go-to-market support. The startup uses NVIDIA GPUs through Google Cloud and Amazon Web Services for training and inference of its deep learning algorithms.

Its DxTB tool has been used to analyze over 70,000 chest X-rays so far in partnership with the Greater Chennai Corporation’s TB Free Chennai Initiative, a project supported by the Clinton Health Access Initiative. The system is deployed in mobile vans equipped with digital X-ray machines to conduct TB screening for high-risk population groups.

A mobile TB clinic in Chennai, India.

As patients are screened in the mobile clinics, scans of the chest X-ray images are securely transmitted to the cloud for inference. The accelerated turnaround time allows doctors to triage cases and conduct additional tests right away, minimizing the number of patients who don’t follow up for further testing and treatment.

“Doing this job would have taken a month’s time. With AI, it’s now feasible to do it within hours,” Kharat said. “The goal is to make sure not a single patient is lost.”

DxTB can be deployed in the cloud or — where internet connections are weak or unavailable — as an edge service. Radiologists access the scans through a dashboard that enables them to review the studies and provide expert feedback and validation.

Patients detected as potentially TB-positive provide a sputum, or lung fluid, sample that undergoes a molecular test before doctors confirm the diagnosis and prescribe a medication program.

In addition to the mobile clinics, around 50 imaging centers and hospitals in India use DeepTek’s AI models. One hospital network will soon deploy the startup’s ICU Chest tool, which can diagnose a score of conditions relevant to intensive care patients.

DeepTek is also developing models to screen X-rays of the joints and spine; CT scans of the liver and brain; and brain MRI studies. The company’s in-house radiologists annotate scans by hand for training, validation and testing.

To further improve its deep learning network, the startup uses data augmentation and data normalization — and incorporates users’ radiology reports as feedback to refine and retrain the AI.
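
As a rough illustration of what augmentation and normalization can look like for X-ray images, here is a hypothetical torchvision pipeline; the specific transforms and values are assumptions for the sketch, not DeepTek’s actual settings.

```python
# Illustrative augmentation/normalization pipeline for chest X-rays using torchvision.
# Transform choices and parameters are assumptions, not DeepTek's configuration.
from PIL import Image
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),          # X-rays are single-channel
    transforms.RandomRotation(degrees=7),                  # small rotations mimic positioning variance
    transforms.RandomResizedCrop(224, scale=(0.9, 1.0)),   # mild crop/zoom jitter
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5], std=[0.25]),          # normalize pixel intensities
])

eval_transform = transforms.Compose([                      # no random augmentation at inference time
    transforms.Grayscale(num_output_channels=1),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5], std=[0.25]),
])

dummy = Image.new("L", (512, 512))                         # stand-in for a real chest X-ray
print(train_transform(dummy).shape)                        # torch.Size([1, 224, 224])
```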

DeepTek now processes nearly 25,000 imaging studies a month on its cloud platform.

“Building AI models is just one part of the story,” Kharat said. “The whole technology pipeline needs to integrate smoothly with the radiology workflow at hospitals, imaging centers and mobile clinics.”

Main image shows a calcified nodule in the lung’s right upper lobe, detected by DeepTek’s AI model.


AI-Listers: Oscar-Nominated Irishman, Avengers Set Stage for AI Visual Effects

This weekend’s Academy Awards show features a twice-nominated newcomer to the Oscars: AI-powered visual effects.

Two nominees in the visual effects category, The Irishman and Avengers: Endgame, used AI to push the boundaries between human actors and digital characters — de-aging the stars of The Irishman and bringing the infamous villain Thanos to life in Avengers.

Behind this groundbreaking, AI-enhanced storytelling are VFX studios Industrial Light & Magic and Digital Domain, which use NVIDIA Quadro RTX GPUs to accelerate production.

AI Time Machine

From World War II to a nursing home in the 2000s, and every decade in between, Netflix’s The Irishman tells the tale of hitman Frank Sheeran through scenes from different times in his life.

But all three leads in the film — Robert De Niro, Al Pacino and Joe Pesci — are in their 70s. A makeup department couldn’t realistically transform the actors back to their 20s and 30s. And director Martin Scorsese was against using the typical motion capture markers or other intrusive equipment that gets in the way of raw performances during filming.

To work within these constraints, ILM developed a new three-camera rig to capture the actors’ performances on set — using the director’s camera flanked by two infrared cameras to record 3D geometry and textures. The team also developed software called ILM Facefinder that used AI to sift through thousands of images of the actors’ past performances.

The tool located frames that matched the camera angle, framing, lighting and expression of the scene being rendered, giving ILM artists a relevant reference to compare against every frame in the shot. These visual references were used to refine digital doubles created for each actor, so they could be transformed into the target age for each specific scene in the film.
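
The general idea behind such a reference-finding tool can be sketched as embedding-based retrieval: a pretrained network embeds archive frames, and the nearest neighbors of the current shot are returned as candidate references. The sketch below is a hypothetical illustration, not ILM’s Facefinder code, and uses random tensors in place of real footage.

```python
# Illustrative frame retrieval by visual similarity -- the general idea, not ILM's software.
# A pretrained CNN embeds frames; nearest neighbors in embedding space are candidate references.
import torch
import torch.nn.functional as F
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()          # strip the classifier; keep the 512-d embedding
backbone.eval()

@torch.no_grad()
def embed(frames):
    """frames: (N, 3, 224, 224) normalized images -> (N, 512) unit-length embeddings."""
    return F.normalize(backbone(frames), dim=1)

# Hypothetical data: an archive of past-performance frames and one current shot.
archive = torch.randn(1000, 3, 224, 224)
query = torch.randn(1, 3, 224, 224)

similarity = embed(query) @ embed(archive).T              # cosine similarity against the archive
best = similarity.topk(k=5, dim=1).indices.squeeze(0)      # top-5 candidate reference frames
print(best.tolist())
```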

“AI and machine learning are becoming a part of everything we do in VFX,” said Pablo Helman, VFX supervisor on The Irishman at ILM. “Paired with the NVIDIA Quadro RTX GPUs powering our production pipeline, these technologies have us excited for what the next decade will bring.”

Building Better VFX Villains

The highest-grossing film of all time, Marvel’s Avengers: Endgame included over 2,500 visual effects shots. VFX teams at Digital Domain used machine learning to animate actor Josh Brolin’s performance onto the digital version of the film franchise’s villain, the mighty Thanos.

A machine learning system called Masquerade was developed to take low resolution scans of the actor’s performance and facial movements, and then accurately transfer his expressions onto the high-resolution mesh of Thanos’ face. The technology saves time for VFX artists who would otherwise have to painstakingly animate the subtle facial movements manually to generate a realistic, emoting digital human.

“Key to this process were immediate realistic rendered previews of the characters’ emotional performances, which was made possible using NVIDIA GPU technology,” said Darren Hendler, head of Digital Humans at Digital Domain. “We now use NVIDIA RTX technology to drive all of our real-time ray-traced digital human projects.”

RTX It in Post: Studios, Apps Adopt AI-Accelerated VFX 

ILM and Digital Domain are just two of a growing set of visual effects studios and apps adopting AI tools accelerated by NVIDIA RTX GPUs.

In HBO’s The Righteous Gemstones series, lead actor John Goodman looks 30 years younger than he is. This de-aging effect was achieved with Shapeshifter, custom software that uses AI to analyze facial motion — how the skin stretches and moves over muscle and bone.

VFX studio Gradient Effects used Shapeshifter to transform the actor’s face in a process that, using NVIDIA GPUs, took weeks instead of months.

Companies such as Adobe, Autodesk and Blackmagic Design have developed RTX-accelerated apps to tackle other visual effects challenges with AI, including live-action scene depth reclamation, color adjustment, relighting and retouching, speed warp motion estimation for retiming, and upscaling.

Netflix Greenlights AI-Powered Predictions 

Offscreen, streaming services such as Netflix use AI-powered recommendation engines to provide customers with personalized content based on their viewing history, or a similarity index that serves up content watched by people with similar viewing habits.

Netflix also customizes movie thumbnails to appeal to individual users, and uses AI to help optimize streaming quality at lower bandwidths. The company uses NVIDIA GPUs to accelerate its work with complex data models, enabling rapid iteration.

Rolling Out the Red Carpet at GTC 2020

Top studios including Lucasfilm’s ILMxLAB, Magnopus and Digital Domain will be speaking at NVIDIA’s GPU Technology Conference in San Jose, March 23-26.

Check out the lineup of media and entertainment talks and register to attend. Early pricing ends Feb. 13.

Feature image courtesy of Industrial Light & Magic. © 2019 NETFLIX

 


What Is AI Upscaling?

Putting on a pair of prescription glasses for the first time can feel like instantly snapping the world into focus.

Suddenly, trees have distinct leaves. Fine wrinkles and freckles show up on faces. Footnotes in books and even street names on roadside signs become legible.

Upscaling — converting lower resolution media to a higher resolution — offers a similar experience.

But with new AI upscaling techniques, the enhanced visuals look more crisp and realistic than ever.

Why Is Upscaling Necessary? 

One-third of television-owning households in the U.S. have a 4K TV, also known as ultra-high definition. But much of the content people watch on popular streaming services like YouTube, HBO and Netflix is only available at lower resolutions.

4K TVs can muddy visuals by having to stretch lower-resolution images to fit their screens. AI upscaling makes lower-resolution images fit with unrivaled crispness.

Standard-definition video, widely used in TVs until the 1990s, was just 480 pixels high. High-definition TVs bumped that up to 720 or 1080 pixels, which is still the most common resolution for content on TV and the web.

Owners of ultra-HD displays get the most out of their screens when watching 4K-mastered content. But when watching lower-resolution content, the video must be upscaled to fill out the entire display.

For example, 1080p images, known as full HD, have just a quarter of the pixels in 4K images. To display a 1080p shot from edge to edge on a 4K screen, the picture has to be stretched to match the TV’s pixels.

Upscaling is done by the streaming device being used — such as a smart TV or streaming media player. But typically, media players use basic upscaling algorithms that are unable to significantly improve high-definition content for 4K TVs.

What Is Basic Upscaling? 

Basic upscaling is the simplest way of stretching a lower resolution image onto a larger display. Pixels from the lower resolution image are copied and repeated to fill out all the pixels of the higher resolution display.

Filtering is applied to smooth the image and round out unwanted jagged edges that may become visible due to the stretching. The result is an image that fits on a 4K display, but can often appear muted or blurry.
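
A minimal sketch of this process with Pillow, using a blank frame as a stand-in for real video, looks like this. It illustrates the technique in general, not any particular player’s implementation.

```python
# Illustrative basic upscaling: pixel replication (nearest neighbor) stretches a 1080p
# frame to 4K, then a smoothing filter rounds out the jagged edges replication leaves.
from PIL import Image, ImageFilter

# 1920x1080 has 2,073,600 pixels; 3840x2160 has 8,294,400 -- exactly four times as many,
# so each 1080p pixel must cover a 2x2 block of the 4K display.
frame = Image.new("RGB", (1920, 1080))                             # stand-in for a decoded 1080p frame

replicated = frame.resize((3840, 2160), resample=Image.NEAREST)    # copy each pixel into a 2x2 block
smoothed = replicated.filter(ImageFilter.SMOOTH)                    # filtering softens jagged edges

print(replicated.size, smoothed.size)                               # (3840, 2160) (3840, 2160)
```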

What Is AI Upscaling? 

Traditional upscaling starts with a low-resolution image and tries to improve its visual quality at higher resolutions. AI upscaling takes a different approach: Given a low-resolution image, a deep learning model predicts a high-resolution image that would downscale to look like the original, low-resolution image.

To predict the upscaled images with high accuracy, a neural network model must be trained on countless images. The deployed AI model can then take low-resolution video and produce incredible sharpness and enhanced details no traditional scaler can recreate. Edges look sharper, hair looks scruffier and landscapes pop with striking clarity.
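
In rough terms, such a model can be trained on pairs created by downscaling high-resolution frames, learning to reconstruct the original from the smaller copy. The sketch below shows that idea with a toy PyTorch network; it is illustrative only, with assumed layer sizes, and is not the model that ships on any NVIDIA product.

```python
# Toy super-resolution sketch of the idea behind AI upscaling -- not NVIDIA's SHIELD model.
# The network predicts a 2x upscaled frame; training pairs are made by downscaling
# high-resolution crops, so the prediction should reconstruct the original HR crop.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySuperRes(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
        )
        self.shuffle = nn.PixelShuffle(scale)    # rearranges channels into a 2x larger image

    def forward(self, lr):
        return self.shuffle(self.body(lr))

model = TinySuperRes()
hr = torch.rand(4, 3, 128, 128)                              # hypothetical high-res training crops
lr = F.interpolate(hr, scale_factor=0.5, mode="bilinear")    # synthetic low-res inputs
loss = F.l1_loss(model(lr), hr)                               # output should reconstruct the HR crop
loss.backward()
print(float(loss))
```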


AI Upscaling on NVIDIA SHIELD TV

The NVIDIA SHIELD TV is the first streaming media player to feature AI upscaling. It can upscale 720p or 1080p HD content to 4K at up to 30 frames per second in real time.

Trained offline on a dataset of popular TV shows and movies, the model uses SHIELD’s NVIDIA Tegra X1+ processor for real-time inference. AI upscaling makes HD video content for top apps including HBO, Hulu, Netflix, Prime Video and YouTube appear sharper on 4K TVs, creating a more immersive viewing experience.

To see the difference between “basic upscaling” and “AI-enhanced upscaling” on SHIELD, click the image below and move the slider left and right.

NVIDIA SHIELD owners can toggle between basic and AI-enhanced modes in their device settings. A demo mode allows users to see a side-by-side comparison between regular content and AI-upscaled visuals. AI upscaling can be adjusted for high, medium or low detail enhancement — adjusting the confidence level of the neural network for detail prediction.

Learn more about upscaling on the NVIDIA SHIELD TV.


Spacing Out: How AI Provides Astronomers with Insights of Galactic Proportions

The gallery of galaxy images astronomers produce is multiplying faster than the number of selfies on a teen’s new smartphone.

Millions of these images have already been collected by astronomy surveys. But the volume is spiraling with projects like the recent Dark Energy Survey and upcoming Legacy Survey of Space and Time, which will capture billions more.

Volunteers flocked to a recent crowdsource project, Galaxy Zoo, to help classify over a million galaxy images from the Sloan Digital Sky Survey. But citizen science can carry astrophysics only so far.

“Galaxy Zoo was a very successful endeavor, but the rate at which next-generation surveys will gather data will make crowdsourcing methods no longer scalable,” said Asad Khan, a physics doctoral student at the University of Illinois at Urbana-Champaign. “This is where human-in-the-loop techniques present an approach to guide AI to data-driven discovery, including image classification.”

Using transfer learning from the popular image classification model Xception, Khan and his fellow researchers developed a neural network that categorizes galaxy images as elliptical or spiral with expert-level accuracy. Classifying galaxy shapes helps scientists determine how old the galaxies are. It can also help answer more complex questions about dark energy and how fast the universe is expanding.
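
In outline, this kind of transfer learning reuses Xception’s ImageNet features and trains only a small new classification head. The Keras sketch below illustrates the approach with assumed hyperparameters and placeholder datasets; it is not the researchers’ exact code.

```python
# Illustrative transfer learning from Xception for elliptical-vs-spiral classification.
# Hyperparameters, head design and data pipeline are assumptions, not the paper's setup.
import tensorflow as tf

base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3)
)
base.trainable = False   # first stage: keep ImageNet features frozen, train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # elliptical vs. spiral
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])

# train_ds / val_ds would be tf.data pipelines of labeled galaxy images, e.g. built with
# tf.keras.utils.image_dataset_from_directory("galaxies", image_size=(299, 299))
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```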

Automating elements of galaxy classification enables astrophysicists to spend less time on basic labeling and focus on more complex research questions.

The research — the first application of deep transfer learning for galaxy classification — was one of six projects featured at the Scientific Visualization and Data Analytics Showcase at SC19, the annual supercomputing trade show.

AI Wrinkle in Time

The researchers trained the deep learning network on around 35,000 galaxy images from the Sloan Digital Sky Survey. Using Argonne National Laboratory’s Cooley supercomputer, which is equipped with dozens of NVIDIA data center GPUs, the team accelerated neural network training from five hours to just eight minutes.

When tested on other images from the Sloan Digital Sky Survey, the AI achieved 99.8 percent accuracy for classifying images as either elliptical or spiral galaxies — an improvement compared to neural networks trained without transfer learning.

Using a single NVIDIA V100 Tensor Core GPU for inference, the team was able to classify 10,000 galaxies in under 30 seconds.

“We can already start using this network, or future versions of it, to start labeling the 300 million galaxies in the Dark Energy Survey,” Khan said. “With GPU-accelerated inference, we could classify all the images in no time at all.”

Khan and his team also developed a visualization to show how the neural network learned during training.

“Even if deep learning models can achieve impressive accuracy levels, when AI does make a mistake, we often don’t know why,” he said. “Visualizations like these can serve as a heuristic check on the network’s performance, providing more interpretability for science communities.”

The researchers next plan to study how the morphology of galaxies change with redshift, a phenomenon caused by the expansion of the universe.

Main image from the Sloan Digital Sky Survey, licensed from Wikimedia Commons under Creative Commons (CC BY 4.0).
