Putting AI on Trials: Deep 6 Speeds Search for Clinical-Study Recruits

Bringing a new medical treatment to market is a slow, laborious process — and for a good reason: patient safety is the top priority.

But when recruiting patients to test promising treatments in clinical trials, the faster the better.

“Many people in medicine have ideas of how to improve healthcare,” said Wout Brusselaers, CEO of Pasadena, Calif.-based startup Deep 6 AI. “What’s stopping them is being able to demonstrate that their new process or new drug works, and is safe and effective on real patients. For that, they need the clinical trial process.”

Over the past decade, the number of cancer clinical trials has grown 17 percent a year, on average. But nearly a fifth of these studies still fail to recruit enough participants who fit their sometimes highly specific criteria, even after three years of searching, and the problem isn’t getting any easier.

“In the age of precision medicine, clinical trial criteria are getting more challenging,” Brusselaers said. “When developing a drug that is targeting patients with a rare genetic mutation, you have to be able to find those specific patients.”

By analyzing medical records with AI, Deep 6 can identify a patient population for clinical trials within minutes, accelerating what’s traditionally a months-long process. Major cancer centers and pharmaceutical companies, including Cedars-Sinai Medical Center and Texas Medical Center, are using the AI tool. They’ve matched more than 100,000 patients to clinical trials so far.

The startup’s clinical trial acceleration software has specific tools to help hospitals recommend available trials to patients and to help pharmaceutical companies track and accelerate patient recruitment for their studies. Future versions of the software could also be made available for patients to browse trials.

A Match Made in AI

Deep 6 AI is a member of the NVIDIA Inception virtual accelerator program, which helps startups scale faster. The company uses an NVIDIA TITAN GPU to accelerate the development of its custom AI models that analyze patient data to identify and label clinical criteria relevant to trials.

“It’s more efficient and less expensive for us to develop our models on premises,” Brusselaers said. “We could turn around models right away and iterate faster, without having to wait to rerun the code.”

While the tool can be used for any diagnostic area or medical condition, Brusselaers says over a quarter of trials on the platform are oncology studies, followed closely by cardiology.

Trained on a combination of open-source databases and real-world data from Deep 6’s partners, the AI models first identify specific mentions of clinical terminology and medical codes in patient records with natural language processing.

Additional neural networks analyze unstructured data like doctor’s notes and pathology reports to gather additional information about a patient’s symptoms, diagnoses and treatments — even detecting potential conditions not mentioned in the medical records.

Deep 6’s tool then creates a patient graph that represents the individual’s clinical profile. Doctors and researchers can match these graphs against trial criteria to assemble cohorts, replacing a time-consuming, often unfruitful manual process.
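
Deep 6 hasn’t published its implementation, but the general pattern it describes — extract clinical mentions from free text, assemble them into a per-patient graph, then check that graph against a trial’s inclusion and exclusion criteria — can be sketched in a few lines. The term dictionary, concept codes and criteria below are illustrative assumptions, not the company’s actual models or data.

```python
# Illustrative sketch only: a dictionary lookup stands in for Deep 6's NLP models,
# and a toy graph plus criteria check stands in for its matching engine.
import re
import networkx as nx

CLINICAL_TERMS = {                      # hypothetical term -> concept code mapping
    "non-small cell lung cancer": "C34.90",
    "egfr mutation": "EGFR+",
    "metastasis": "C79.9",
}

def extract_concepts(note_text):
    """Return the concept codes mentioned in a free-text clinical note."""
    found, text = set(), note_text.lower()
    for term, code in CLINICAL_TERMS.items():
        if re.search(re.escape(term), text):
            found.add(code)
    return found

def build_patient_graph(patient_id, notes):
    """Link a patient node to every clinical concept found across their notes."""
    graph = nx.Graph()
    graph.add_node(patient_id, kind="patient")
    for note in notes:
        for code in extract_concepts(note):
            graph.add_node(code, kind="concept")
            graph.add_edge(patient_id, code)
    return graph

def matches_trial(graph, patient_id, required, excluded):
    """A patient matches if every inclusion code is present and no exclusion code is."""
    concepts = set(graph.neighbors(patient_id))
    return required <= concepts and not (excluded & concepts)

notes = ["Pt has non-small cell lung cancer, EGFR mutation confirmed on biopsy."]
g = build_patient_graph("patient-001", notes)
print(matches_trial(g, "patient-001", required={"C34.90", "EGFR+"}, excluded={"C79.9"}))
```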

Researchers at Los Angeles’ Cedars-Sinai Smidt Heart Institute — one of the startup’s clients — had enrolled just two participants for a new clinical trial after six months of recruitment effort. Using Deep 6 AI software, they found 16 qualified candidates in an hour.

Texas Medical Center, a collection of over 60 health institutions, is rolling out Deep 6 software across its network to replace the typical process of finding clinical trial candidates, which requires associates to manually flip through thick folders of medical records.

“It’s just a long slog to find patients for clinical trials,” said Bill McKeon, CEO of Texas Medical Center. Using Deep 6’s software tool “is just completely transforming.”

McKeon says in one case, it took six months to find a dozen eligible patients for a trial with traditional recruitment efforts. The same matching process through Deep 6’s software found 80 potential participants in minutes.


Meet Six Smart Robots at GTC 2020

The GPU Technology Conference is like a new Star Wars movie. There are always cool new robots scurrying about.

This year’s event in San Jose, March 22-26, is no exception, with at least six autonomous machines expected on the show floor. Like C-3PO and BB-8, each one is different.

Among what you’ll see at GTC 2020:

  • a robotic dog that sniffs out trouble in complex environments such as construction sites
  • a personal porter that lugs your stuff while it follows your footsteps
  • a man-sized bot that takes inventory quickly and accurately
  • a short, squat bot that hauls as much as 2,200 pounds across a warehouse
  • a delivery robot that navigates sidewalks to bring you dinner

“What I find interesting this year is just how much intelligence is being incorporated into autonomous machines to quickly ingest and act on data while navigating around unstructured environments that sometimes are not safe for humans,” said Amit Goel, senior product manager for autonomous machines at NVIDIA and robot wrangler for GTC 2020.

The ANYmal C from ANYbotics AG (pictured above), based in Zurich, is among the svelte navigators, detecting obstacles and finding its own shortest path forward thanks to its Jetson AGX Xavier GPU. The four-legged bot can slip through passages just 23.6 inches wide and climb stairs as steep as 45 degrees on a factory floor to inspect industrial equipment with its depth, wide-angle and thermal cameras.

The Gita personal robot will demo hauling your stuff at GTC 2020.

The folks behind the Vespa scooter will show Gita, a personal robot that can carry up to 40 pounds of your gear for four hours on a charge. It runs computer vision algorithms on a Jetson TX2 GPU to identify and follow its owner’s legs on any hard surfaces.

Say cheese. Bossa Nova Robotics will show its retail robot that can scan a 40-foot supermarket aisle in 60 seconds, capturing 4,000 images that it turns into inventory reports with help from its NVIDIA Turing architecture RTX GPU. Walmart plans to use the bots in at least 650 of its stores.

Mobile Industrial Robots A/S, based in Odense, Denmark, will give a talk at GTC about how it’s adding AI with Jetson Xavier to its pallet-toting robots to expand their work repertoire. On the show floor, it will demonstrate one of the robots from its MiR family that can carry payloads up to 2,200 pounds while using two 3D cameras and other sensors to navigate safely around people and objects in a warehouse.

From the other side of the globe, ExaWizards Inc. (Tokyo) will show its multimodal AI technology running on robotic arms from Japan’s Denso Robotics. It combines multiple sensors to learn human behaviors and perform jobs such as weighing a set portion of water.

Walmart will use the Bossa Nova robot to help automate inventory taking in at least 650 of its stores.

Rounding out the cast, the Serve delivery robot from Postmates will make a return engagement at GTC. It can carry 50 pounds of goods for 30 miles, using a Jetson AGX Xavier and Ouster lidar to navigate sidewalks like a polite pedestrian. In a talk, a Postmates engineer will share lessons learned in its early deployments.

Many of the latest systems reflect the trend toward collaborative robotics that NVIDIA CEO Jensen Huang demonstrated in a keynote in December. He showed ways humans can work with and teach robots directly, thanks to an updated NVIDIA Isaac developer kit, now part of NVIDIA’s end-to-end robotics offering, that also speeds development by using AI and simulation to train robots.

Just for fun, GTC also will host races of AI-powered DIY robotic cars, zipping around a track on the show floor at speeds approaching 50 mph. You can sign up here if you want to bring your own Jetson-powered robocar to the event.

We’re saving at least one surprise in robotics for those who attend. To get in on the action, register here for GTC 2020.


NVIDIA Awards $50,000 Fellowships to Ph.D. Students for GPU Computing Research

Our NVIDIA Graduate Fellowship Program recently awarded up to $50,000 each to five Ph.D. students involved in GPU computing research.

Now in its 19th year, the fellowship program supports graduate students doing GPU-based work. We selected this year’s fellows from more than 300 applicants from a host of countries.

The fellows’ work puts them at the forefront of GPU computing, including projects in deep learning, graphics, high performance computing and autonomous machines.

“Our fellowship recipients are among the most talented graduate students in the world,” said NVIDIA Chief Scientist Bill Dally. “They’re working on some of the most important problems in computer science, and we’re delighted to support their research.”

The NVIDIA Graduate Fellowship Program is open to applicants worldwide.

Our 2020-2021 fellows are:

  • Anqi Li, University of Washington — Bridging the gap between robotics research and applications by exploiting complementary tools from machine learning and control theory
  • Benedikt Bitterli, Dartmouth College — Principled forms of sample reuse that unlock more efficient ray-tracing techniques for offline and real-time rendering
  • Vinu Joseph, University of Utah — Optimizing deep neural networks for performance and scalability
  • Xueting Li, University of California, Merced — Self-supervised learning and relation learning between different visual elements
  • Yue Wang, Massachusetts Institute of Technology — Designing sensible deep learning modules that learn effective representations of 3D data

And our 2020-2021 finalists are:

  • Guandao Yang, Cornell University
  • Michael Lutter, Technical University of Darmstadt
  • Yuanming Hu, Massachusetts Institute of Technology
  • Yunzhu Li, Massachusetts Institute of Technology
  • Zackory Erickson, Georgia Institute of Technology


Washington Goes West: GTC 2020 Explores AI in Federal Government

The future of government will come into focus in Silicon Valley next month when experts from industry and government converge to discuss AI and high performance computing at the NVIDIA GPU Technology Conference.

Following on the success of GTC DC 2019, GTC 2020, taking place March 22-26 in San Jose, Calif., will bring together hundreds of researchers, government executives and national lab directors to discuss how agencies can improve citizen services and advance science with accelerated computing.

The show features more than 600 talks on subjects including AI in government, autonomous machines, cybersecurity and disaster relief. In addition to tech giants like Amazon, Google and Microsoft, the show attracts leaders from a variety of labs and agencies, including Lawrence Berkeley National Laboratory, NASA, NIST, NIH, the National Center for Atmospheric Research, Lockheed Martin, Oak Ridge National Lab, Raytheon, Booz Allen Hamilton and the U.S. Postal Service.

NVIDIA founder and CEO Jensen Huang will kick off the event with a keynote on March 23 to explain how AI is revolutionizing industries from healthcare to robotics.

Autonomous Everything

One of the hottest topics at GTC 2020 will be autonomous machines. This year, experts from Ford Motor Company, Amazon, Microsoft, Google and more will discuss everything autonomous, from precision manufacturing to self-driving cars to mobile robots. Highlights include:

  • Anthony Rizk, research engineer, and Jimmy Nassif, head of IT planning systems, at BMW Group, will discuss their autonomous transportation robot that’s being put to use in assembly and production sites.
  • Pieter Abbeel, director of the Berkeley Robot Learning Lab and co-director of the Berkeley Artificial Intelligence Lab, will summarize recent AI research breakthroughs in robotics and potential areas of progress going forward.
  • Cyra Richardson, general manager of hardware innovation and robotics at Microsoft, will present Microsoft and NVIDIA’s shared platform that’s accelerating time-to-market for robotics manufacturers.
  • Claire Delaunay, vice president of engineering at NVIDIA, will host a panel of industry experts to talk about the fourth industrial revolution, with a focus on autonomous robots.

Attendees who want to get started in robotics and embedded systems can also take part in Jetson Developer Days on March 25 and 26. Experts and community members from NVIDIA will be on hand to provide information on AI at the edge, medical imaging, intelligent video analytics and robotics.

Fireside Sessions and Cybersecurity

U.S. Rep. Jerry McNerney, of California, will be taking part in a fireside chat to discuss the current and upcoming governmental approach to AI, along with what policies enterprises should expect.

The issue of cybersecurity is also on the agenda. Sessions to look out for:

  • Jian Chang, staff algorithm expert, and Sanjian Chen, staff algorithm engineer, both at Alibaba Group, will present on the importance of protecting deep-learning models for specific uses from adversarial attacks, and propose methods of defense.
  • Marco Schreyer, a researcher at the University of St. Gallen, will identify how financial accounting machine learning models are vulnerable to adversarial attacks and present recent developments on the topic.
  • Bartley Richardson, AI infrastructure manager and senior cybersecurity data scientist at NVIDIA, will be joined by several colleagues in a “Connect with the Experts” session — an hour-long, intimate Q&A — in which attendees can learn more about GPU-accelerated anomaly detection, threat hunting and more.

AI for the Environment

Disaster relief and climate modeling are two of the many fields that federal agencies are interested in. GTC will feature several sessions on these topics, including:

  • David John Gagne, machine learning scientist at the National Center for Atmospheric Research, will illustrate how deep learning can aid in predicting hurricane intensity.
  • Amulya Vishwanath, product marketing lead, and Chintan Shah, product manager, at NVIDIA, will explain how to train, build and deploy intelligent vision applications to aid with faster disaster relief and logistics.
  • Oliver Fuhrer, senior director at Vulcan Inc., will present new methods for climate modeling based on GPU-accelerated HPC.

Many government agencies and contractors rely on high performance computing for scientific research. This year’s GTC will include an inaugural HPC Summit, where attendees will see the latest technology in simulation, space exploration, energy and more. Leaders from Schlumberger, NVIDIA, Mellanox and Oak Ridge National Laboratory will host breakout sessions with Summit attendees.

Hands-On Training

For those who want to sharpen their skills, the NVIDIA Deep Learning Institute will be hosting 60+ instructor-led training sessions and 30+ self-paced courses throughout the conference, and six full-day workshops on Sunday, March 22.

Day-long workshops include “Applications of AI for Anomaly Detection,” where participants will learn methods to identify network intrusions, cyberthreats, counterfeit financial transactions and more. And “Applications of AI for Predictive Maintenance” will focus on how machine learning can help avoid unplanned downtimes and predict outcomes — crucial to industries such as manufacturing, aerospace and energy.

Register here to experience the future of AI at GTC 2020.


From Point AI to Point TB: DeepTek Detects Tuberculosis from X-Rays

Tuberculosis is an issue close to home for Pune, India-based healthcare startup DeepTek. India has the world’s highest prevalence of the disease — accounting for over one-quarter of the 10 million new cases each year.

It’s a fitting first project for the company, whose founders hope to greatly improve global access to medical imaging diagnostics with an AI-powered radiology platform. DeepTek’s DxTB tool screens X-ray images for pulmonary TB, flagging cases for prioritized review by medical experts.

India aims to eradicate TB by 2025, five years before the United Nations’ global goal to end the epidemic by 2030. Chest X-ray imaging is the most sensitive screening tool for pulmonary TB, helping clinicians determine which patients should be referred for further lab testing. But two-thirds of people worldwide lack access to even basic radiology services, in part due to high costs and insufficient infrastructure.

“There’s a huge shortage of imaging experts available to read X-ray scans,” said Amit Kharat, CEO of the startup and a clinical radiologist. “Since radiologists’ time is often sought for more demanding investigations like CT or MRI scans, this is an important gap where AI can add value.”

DeepTek is a member of NVIDIA Inception, a virtual accelerator program that supports early-stage companies with fundamental tools, expertise and go-to-market support. The startup uses NVIDIA GPUs through Google Cloud and Amazon Web Services for training and inference of its deep learning algorithms.

Its DxTB tool has been used to analyze over 70,000 chest X-rays so far in partnership with the Greater Chennai Corporation’s TB Free Chennai Initiative, a project supported by the Clinton Health Access Initiative. The system is deployed in mobile vans equipped with digital X-ray machines to conduct TB screening for high-risk population groups.

A mobile TB clinic in Chennai, India.

As patients are screened in the mobile clinics, scans of the chest X-ray images are securely transmitted to the cloud for inference. The accelerated turnaround time allows doctors to triage cases and conduct additional tests right away, minimizing the number of patients who don’t follow up for further testing and treatment.
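
DeepTek hasn’t published the details of its architecture, but the triage step it describes — score each incoming chest X-ray with a classifier in the cloud, then surface the most suspicious studies to radiologists first — follows a common pattern. The sketch below uses a generic DenseNet-121 as a stand-in; the model, weights and scoring are placeholders, not DxTB’s actual pipeline.

```python
# Illustrative sketch only: a cloud-side triage loop for chest X-rays. A real
# deployment would load trained weights; the network here is just a stand-in.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

model = models.densenet121(weights=None)                  # stand-in classifier
model.classifier = nn.Linear(model.classifier.in_features, 1)  # single "TB suspicion" output
model.eval().to(device)

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),   # X-rays usually arrive single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def tb_suspicion(path):
    """Return a 0-1 suspicion score for one chest X-ray image file."""
    x = preprocess(Image.open(path)).unsqueeze(0).to(device)
    with torch.no_grad():
        return torch.sigmoid(model(x)).item()

def triage(paths):
    """Order studies so radiologists see the most suspicious scans first."""
    return sorted(paths, key=tb_suspicion, reverse=True)
```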

“Doing this job would have taken a month’s time. With AI, it’s now feasible to do it within hours,” Kharat said. “The goal is to make sure not a single patient is lost.”

 DxTB can be deployed in the cloud or — where internet connections are weak or unavailable — as an edge service. Radiologists access the scans through a dashboard that enables them to review the studies and provide expert feedback and validation.

Patients detected as potentially TB-positive provide a sputum, or lung fluid, sample that undergoes a molecular test before doctors confirm the diagnosis and prescribe a medication program.

In addition to the mobile clinics, around 50 imaging centers and hospitals in India use DeepTek’s AI models. One hospital network will soon deploy the startup’s ICU Chest tool, which can diagnose a score of conditions relevant to intensive care patients.

DeepTek is also developing models to screen X-rays of the joints and spine; CT scans of the liver and brain; and brain MRI studies. The company’s in-house radiologists annotate scans by hand for training, validation and testing.

To further improve its deep learning network, the startup uses data augmentation and data normalization — and incorporates users’ radiology reports as feedback to refine and retrain the AI.
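
The exact augmentation and normalization recipe DeepTek uses isn’t public, but the kinds of transforms typically applied to X-ray training data are standard enough to sketch. The values below are common defaults chosen for illustration, not the company’s settings.

```python
# Illustrative sketch only: typical augmentation and normalization for X-ray training.
from torchvision import transforms

train_transforms = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.RandomRotation(degrees=7),                   # patients aren't perfectly upright
    transforms.RandomResizedCrop(224, scale=(0.9, 1.0)),    # small framing variations
    transforms.ColorJitter(brightness=0.1, contrast=0.1),   # exposure varies between machines
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],        # ImageNet statistics, a common default
                         std=[0.229, 0.224, 0.225]),
])
```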

DeepTek now processes nearly 25,000 imaging studies a month on its cloud platform.

“Building AI models is just one part of the story,” Kharat said. “The whole technology pipeline needs to integrate smoothly with the radiology workflow at hospitals, imaging centers and mobile clinics.”

Main image shows a calcified nodule in the lung’s right upper lobe, detected by DeepTek’s AI model.


This Building Is in Beta: Startup’s AI Can Design and Control Buildings

Beta testing is a common practice for intangible products like software. Release an application, let customers bang on it, put bug fixes into the next version for download onto devices. Repeat.

For brick-and-mortar products like buildings, beta testing is unusual if not unheard of. But two Salt Lake City entrepreneurs now offer a system that evaluates buildings while in development.

PassiveLogic CEO Troy Harvey says this could solve a lot of problems before construction begins. “If you’re an engineer, it’s kind of a crazy idea to go and build the first one without beta testing it, and to just hope it works out,” he said.

Hive controller

Harvey and Jeremy Fillingim founded PassiveLogic in 2014 to build an AI platform that engineers and autonomously operates all the Internet of Things components of a building.

PassiveLogic’s Hive system — the startup calls it “brains for buildings” — is powered by the energy-sipping, AI-capable Jetson Nano module. The system can also be retrofitted into existing structures.

The Hive can make split-second decisions about controlling a building by merging data from multiple sensors with sensor fusion algorithms. It also enables automated interpretation of, and response to, dynamic situations, such as lights that brighten a room but also add heat, or automated window louvers that reduce glare but also cool a space.

New Era for IoT: Jetson

PassiveLogic’s software enables designers and architects to digitally map out the components for the system controls architecture. Contractors and architects can then run AI-driven simulations on IoT systems before starting construction. The simulations are run with neural networks to help optimize for areas such as energy efficiency and comfort.

PassiveLogic’s Swarm sensor

In addition to the Hive controller for edge computing, the system uses the startup’s half-dollar-size Swarm room sensors and compact Cell modules to connect into building components for hard-wired control.

PassiveLogic’s Cell module

“With the Jetson Nano, we’re getting all this computing power that we can put right at the edge, and so we can do all these things in AI with a real-time system,” said Harvey.

PassiveLogic’s pioneering application in building AI, edge computing and IoT comes as retailers, manufacturers, municipalities and scores of others are embracing NVIDIA GPU-driven edge computing for autonomy.

The company is a member of NVIDIA’s Inception program, which helps startups scale faster with networking opportunities, technical guidance on GPUs and access to training.

“NVIDIA Inception is offering technical guidance on the capabilities and implementation of Jetson as PassiveLogic prepares to fulfill our backlog of customer demand,” said Harvey. “The capabilities of the Jetson chip open up opportunities for our platform.”

Hive: AI Edge Computing 

PassiveLogic’s Hive controllers can bring AI to ordinary edge devices such as closed-circuit cameras, lighting, and heating and air conditioning systems. This enables image recognition applications for buildings with cameras, smarter temperature control and other benefits.

“It becomes a control hub for all of those sensors in the building and all of the controllable things,” said Harvey.

Hive can also factor in where people are concentrated in a building, based on data from its networked Swarm devices, which use Bluetooth mesh trilateration to locate occupants. It can then adjust temperature, lights or other systems for where people are located.
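
Locating an occupant from distances to fixed radio beacons is a textbook trilateration problem, and a minimal version can be solved with least squares. The anchor positions and distances below are made-up numbers for illustration, not data from PassiveLogic’s Swarm devices.

```python
# Illustrative sketch only: least-squares trilateration from distances to known anchors.
import numpy as np

def trilaterate(anchors, distances):
    """Estimate a 2D position from distances to three or more known anchor points."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Subtract the first anchor's equation from the rest to remove the quadratic terms:
    #   2 (a_i - a_0) . p = d_0^2 - d_i^2 + |a_i|^2 - |a_0|^2
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

# Hypothetical sensors at three room corners, distances in meters.
print(trilaterate([(0, 0), (6, 0), (0, 4)], [2.83, 4.47, 2.83]))  # roughly (2, 2)
```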

Digital Twin AI Simulations

The company’s Cell modules — hard-wired, software-defined input-output units — are used to bridge all the physical building connections into its Hive AI edge computing systems. As customers connect these building block-like modules together, they’re also laying the software foundation for what this autonomous system looks like.

PassiveLogic enables customers to digitally lay out building controls and set up simulations within its software platform on Hive. Customers can import CAD designs or sketch them out, including all of the features of a building that need to be monitored.

The AI engine understands at a physics level how building components work, and it can run simulations of building systems, taking complex interactions into account and making control decisions that optimize operation. Next, the Hive compares this optimal control path to actual sensor data, applies machine learning, and gets smarter about operating the building over time.
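
PassiveLogic’s physics engine is proprietary, but the loop it describes — simulate candidate control schedules against a physical model, pick the best one, then compare predictions with real sensor data and correct the model — can be sketched with a one-zone thermal model. Every constant and schedule below is an assumption for illustration, not the company’s implementation.

```python
# Illustrative sketch only: a one-zone thermal "digital twin" that chooses a heating
# schedule by simulation, then nudges its model toward what sensors actually report.
import numpy as np

class ZoneTwin:
    def __init__(self, heat_loss=0.08, heater_gain=0.5):
        self.heat_loss = heat_loss      # fraction of indoor/outdoor gap lost per step
        self.heater_gain = heater_gain  # degrees C added per step at full heater power

    def simulate(self, start_temp, outdoor, schedule):
        """Predict indoor temperature for a heater power schedule (values in 0..1)."""
        temps, t = [], start_temp
        for power in schedule:
            t += self.heater_gain * power - self.heat_loss * (t - outdoor)
            temps.append(t)
        return np.array(temps)

    def choose_schedule(self, start_temp, outdoor, candidates, setpoint=21.0):
        """Pick the schedule using the least energy while staying near the setpoint."""
        def cost(sched):
            temps = self.simulate(start_temp, outdoor, sched)
            return sum(sched) + 10.0 * np.mean(np.abs(temps - setpoint))
        return min(candidates, key=cost)

    def learn(self, predicted, measured, rate=0.05):
        """If the zone cools faster than predicted, raise the loss coefficient, and vice versa."""
        self.heat_loss -= rate * float(np.mean(measured - predicted))

twin = ZoneTwin()
candidates = [np.full(24, 0.3), np.full(24, 0.5), np.full(24, 0.7)]
best = twin.choose_schedule(start_temp=18.0, outdoor=5.0, candidates=candidates)
```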

Whether it’s an existing building getting an update or a design for a new one, customers can run simulations with Hive to see how they can improve energy consumption and comfort.

“Once you plug it in, you can learn onsite and actually build up a unique local-based training using deep learning and compare it with other buildings,” Harvey said.


AI-Listers: Oscar-Nominated Irishman, Avengers Set Stage for AI Visual Effects

This weekend’s Academy Awards show features a twice-nominated newcomer to the Oscars: AI-powered visual effects.

Two nominees in the visual effects category, The Irishman and Avengers: Endgame, used AI to push the boundaries between human actors and digital characters — de-aging the stars of The Irishman and bringing the infamous villain Thanos to life in Avengers.

Behind this groundbreaking, AI-enhanced storytelling are VFX studios Industrial Light & Magic and Digital Domain, which use NVIDIA Quadro RTX GPUs to accelerate production.

AI Time Machine

From World War II to a nursing home in the 2000s, and every decade in between, Netflix’s The Irishman tells the tale of hitman Frank Sheeran through scenes from different times in his life.

But all three leads in the film — Robert De Niro, Al Pacino and Joe Pesci — are in their 70s. A makeup department couldn’t realistically transform the actors back to their 20s and 30s. And director Martin Scorsese was against using the typical motion capture markers or other intrusive equipment that gets in the way of raw performances during filming.

To meet this requirement, ILM developed a new three-camera rig to capture the actors’ performances on set — using the director’s camera flanked by two infrared cameras to record 3D geometry and textures. The team also developed software called ILM Facefinder that used AI to sift through thousands of images of the actors’ past performances.

The tool located frames that matched the camera angle, framing, lighting and expression of the scene being rendered, giving ILM artists a relevant reference to compare against every frame in the shot. These visual references were used to refine digital doubles created for each actor, so they could be transformed into the target age for each specific scene in the film.
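
ILM hasn’t published how Facefinder works. One common way to implement that kind of reference search is to embed every archival frame with a pretrained network and return the nearest neighbors of the frame being rendered; the sketch below does exactly that with an off-the-shelf ResNet. It is a generic stand-in for reference-frame retrieval, not ILM’s tool.

```python
# Illustrative sketch only: nearest-neighbor search over archival frames using a
# generic pretrained embedding, as a stand-in for reference-frame retrieval.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()        # use the 2048-d pooled features as an embedding
backbone.eval()

prep = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(path):
    """Return a unit-norm embedding for one image file."""
    x = prep(Image.open(path).convert("RGB")).unsqueeze(0)
    return F.normalize(backbone(x), dim=1)

def closest_references(query_path, archive_paths, k=5):
    """Return the k archival frames most similar to the frame being rendered."""
    query = embed(query_path)
    archive = torch.cat([embed(p) for p in archive_paths])
    scores = (archive @ query.T).squeeze(1)          # cosine similarity of unit-norm vectors
    best = torch.topk(scores, k=min(k, len(archive_paths))).indices
    return [archive_paths[i] for i in best]
```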

“AI and machine learning are becoming a part of everything we do in VFX,” said Pablo Helman, VFX supervisor on The Irishman at ILM. “Paired with the NVIDIA Quadro RTX GPUs powering our production pipeline, these technologies have us excited for what the next decade will bring.”

Building Better VFX Villains

The highest-grossing film of all time, Marvel’s Avengers: Endgame included over 2,500 visual effects shots. VFX teams at Digital Domain used machine learning to animate actor Josh Brolin’s performance onto the digital version of the film franchise’s villain, the mighty Thanos.

A machine learning system called Masquerade was developed to take low resolution scans of the actor’s performance and facial movements, and then accurately transfer his expressions onto the high-resolution mesh of Thanos’ face. The technology saves time for VFX artists who would otherwise have to painstakingly animate the subtle facial movements manually to generate a realistic, emoting digital human.

“Key to this process were immediate realistic rendered previews of the characters’ emotional performances, which was made possible using NVIDIA GPU technology,” said Darren Hendler, head of Digital Humans at Digital Domain. “We now use NVIDIA RTX technology to drive all of our real-time ray-traced digital human projects.”

RTX It in Post: Studios, Apps Adopt AI-Accelerated VFX 

ILM and Digital Domain are just two of a growing set of visual effects studios and apps adopting AI tools accelerated by NVIDIA RTX GPUs.

In HBO’s The Righteous Gemstones series, lead actor John Goodman looks 30 years younger than he is. This de-aging effect was achieved with Shapeshifter, a custom software that uses AI to analyze face motion — how the skin stretches and moves over muscle and bone.

VFX studio Gradient Effects used Shapeshifter to transform the actor’s face in a process that, using NVIDIA GPUs, took weeks instead of months.

Companies such as Adobe, Autodesk and Blackmagic Design have developed RTX-accelerated apps to tackle other visual effects challenges with AI, including live-action scene depth reclamation, color adjustment, relighting and retouching, speed warp motion estimation for retiming and upscaling.

Netflix Greenlights AI-Powered Predictions 

Offscreen, streaming services such as Netflix use AI-powered recommendation engines to provide customers with personalized content based on their viewing history, or a similarity index that serves up content watched by people with similar viewing habits.

Netflix also customizes movie thumbnails to appeal to individual users, and uses AI to help optimize streaming quality at lower bandwidths. The company uses NVIDIA GPUs to accelerate its work with complex data models, enabling rapid iteration.
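
The “similarity index” idea — recommend what the most similar viewers watched — is classic collaborative filtering, and a toy version is easy to sketch. The watch matrix and titles below are invented; the code illustrates the general technique, not Netflix’s system.

```python
# Illustrative sketch only: user-user collaborative filtering on a tiny watch matrix.
import numpy as np

titles = ["Drama A", "Thriller B", "Docuseries C", "Comedy D"]
# Rows are users, columns are titles; 1 means the user watched it.
watched = np.array([
    [1, 1, 0, 0],   # user 0
    [1, 1, 1, 0],   # user 1, similar tastes to user 0
    [0, 0, 1, 1],   # user 2
])

def recommend(user, k=1):
    """Suggest unwatched titles, weighted by how similar other viewers are."""
    norms = np.linalg.norm(watched, axis=1)
    sims = (watched @ watched[user]) / (norms * norms[user] + 1e-9)  # cosine similarity
    sims[user] = 0.0                               # ignore the user's own row
    scores = sims @ watched                        # pool similar users' histories
    scores[watched[user] == 1] = -np.inf           # don't re-recommend what they've seen
    return [titles[i] for i in np.argsort(scores)[::-1][:k]]

print(recommend(0))   # expected: ["Docuseries C"]
```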

Rolling Out the Red Carpet at GTC 2020

Top studios including Lucasfilm’s ILMxLAB, Magnopus and Digital Domain will be speaking at NVIDIA’s GPU Technology Conference in San Jose, March 23-26.

Check out the lineup of media and entertainment talks and register to attend. Early pricing ends Feb. 13.

Feature image courtesy of Industrial Light & Magic. © 2019 NETFLIX

 


Trash-Talking AI Platform Schools You on Recycling

Get ready for trash-talking garbage cans.

Hassan Murad and Vivek Vyas have developed the world’s largest garbage dataset, dubbed WasteNet, and offer an AI-driven trash-sorting technology.

The Vancouver engineers’ startup, Intuitive AI, uses machine learning and computer vision to see what people are holding as they approach trash and recycling bins. Their product visually sorts the item on a display, nudging the user toward the right bin, and verbally ribs people for misses.

The split-second detection of the item using WasteNet’s nearly 1 million images is made possible by the compact supercomputing of the NVIDIA Jetson TX2 AI module.
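
WasteNet itself is proprietary, but the on-device loop — grab a camera frame, classify it on the Jetson’s GPU, map the class to a bin — follows the usual edge-inference pattern. The sketch below substitutes an off-the-shelf ImageNet model and an invented label-to-bin table; both are placeholders, not Intuitive AI’s system.

```python
# Illustrative sketch only: an edge-inference loop that classifies what a camera sees
# and maps it to a bin. The model and label-to-bin table are stand-ins for WasteNet.
import cv2
import torch
from torchvision import models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
weights = models.MobileNet_V2_Weights.DEFAULT
model = models.mobilenet_v2(weights=weights).eval().to(device)
LABELS = weights.meta["categories"]

prep = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

BIN_FOR_LABEL = {"water bottle": "recycling", "banana": "compost", "plastic bag": "landfill"}

def classify_frame(frame_bgr):
    """Return (label, suggested bin) for one camera frame."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    x = prep(rgb).unsqueeze(0).to(device)
    with torch.no_grad():
        label = LABELS[model(x).argmax(dim=1).item()]
    return label, BIN_FOR_LABEL.get(label, "ask the user")

cap = cv2.VideoCapture(0)                 # camera watching the bins
ok, frame = cap.read()
if ok:
    print(classify_frame(frame))
cap.release()
```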

Murad and Vyas call their AI recycling platform Oscar, like Sesame Street’s trash-can Muppet. “Oscar is a grouchy, trash-talking AI. It rewards people with praise if they recycle right and playfully shouts at them for doing it wrong,” said Murad.

Intuitive AI’s launch is timely. In 2018, China banned imports of contaminated plastic, paper and other scrap materials it had been accepting for recycling.

Since then, U.S. exports of plastic scraps to China have plummeted nearly 90 percent, according to the Institute of Scrap Recycling Industries. And recycling processors everywhere are scrambling to better sort trash to produce cleaner recyclables.

The startup is also a member of NVIDIA’s Inception program, which helps startups scale faster with networking opportunities, technical guidance and access to training.

“NVIDIA really helped us understand which processor to try out and what kind of results to expect and then provided a couple of them for us to test on for free,” said Murad.

The early-stage AI company is also part of a cohort at Next AI, a Canada-based incubator that guides promising startups. Next AI gives startups access to professors from the University of Toronto, Harvard and MIT, as well as leading figures in the tech industry.

In January, NVIDIA and Next AI forged a partnership to jointly support their growing ecosystems of startups, providing AI education, investment, technical guidance and mentorship.

Turning Trash Into Cash

Trash is a surging environmental problem worldwide. And it’s not just the Great Pacific Garbage Patch — the headline-grabbing mass of floating plastic bits that’s twice the size of Texas.

Now that China requires clean recyclables from exporters — with no more than 0.5 percent contamination — nations across the world are facing mounting landfills.

Intuitive AI aims to help cities cope with soaring costs from recycling collection companies, which have limited markets to sell tons of contaminated plastics and other materials.

“The way to make the recycling chain work is by obtaining cleaner sorted materials. And it begins with measurement and education at the source, so that waste management companies get cleaner recyclables they can sell to China, India or Indonesia, or not send it at all, because eventually we could be able to process it locally,” said Murad.

Garbage In, Garbage Out

Deploying image recognition to make trash-versus-recycling decisions isn’t easy. The founders discovered that objects in people’s hands are often 80 percent obscured from view. And there are thousands of different objects people might discard. They needed a huge dataset.

“It became quite clear to us we need to build the world’s largest garbage dataset, which we call WasteNet,” said Murad.

From deployments at malls, universities, airports and corporate campuses, Oscar has now demonstrated it can increase recycling by 300 percent.

WasteNet is a proprietary dataset. The founders declined to disclose the details of how they created such a massive dataset.

GPUs Versus CPUs

The startup’s system needs to work fast. After all, who wants to wait by a garbage bin? Initially, the founders tried every hardware option on the market for image recognition, said Murad, including Raspberry Pi and Intel’s Movidius.

But requiring people to wait up to six seconds to learn where to toss an item, the latency of those early hardware experiments, just wasn’t an option. Once they moved to NVIDIA GPUs, they were able to get results down to half a second.

“Using Jetson TX2, we are able to run AI on the edge and help people change the world in three seconds,” said Murad.


What Is AI Upscaling?

Putting on a pair of prescription glasses for the first time can feel like instantly snapping the world into focus.

Suddenly, trees have distinct leaves. Fine wrinkles and freckles show up on faces. Footnotes in books and even street names on roadside signs become legible.

Upscaling — converting lower resolution media to a higher resolution — offers a similar experience.

But with new AI upscaling techniques, the enhanced visuals look more crisp and realistic than ever.

Why Is Upscaling Necessary? 

One-third of television-owning households in the U.S. have a 4K TV, known as ultra-high definition. But much of the content people watch on popular streaming services like YouTube, HBO and Netflix is only available at lower resolutions.

4K TVs can muddy visuals by having to stretch lower-resolution images to fit their screen. AI upscaling makes lower-resolution images fit with unrivaled crispness.

Standard-definition video, widely used in TVs until the 1990s, was just 480 pixels high. High-definition TVs bumped that up to 720 or 1080 pixels, and HD remains the most common resolution for content on TV and the web.

Owners of ultra-HD displays get the most out of their screens when watching 4K-mastered content. But when watching lower-resolution content, the video must be upscaled to fill out the entire display.

For example, 1080p images, known as full HD, have just a quarter of the pixels in 4K images: 1920 x 1080 is about 2.1 million pixels, while 3840 x 2160 is about 8.3 million. To display a 1080p shot from edge to edge on a 4K screen, the picture has to be stretched to match the TV’s pixels.

Upscaling is done by the streaming device being used — such as a smart TV or streaming media player. But typically, media players use basic upscaling algorithms that are unable to significantly improve high-definition content for 4K TVs.

What Is Basic Upscaling? 

Basic upscaling is the simplest way of stretching a lower resolution image onto a larger display. Pixels from the lower resolution image are copied and repeated to fill out all the pixels of the higher resolution display.

Filtering is applied to smooth the image and round out unwanted jagged edges that may become visible due to the stretching. The result is an image that fits on a 4K display, but can often appear muted or blurry.
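
Pixel replication plus smoothing is simple enough to show concretely. The sketch below doubles an image’s resolution the “basic” way: repeat each pixel, then apply a light blur to soften the jagged edges. The file name, scale factor and blur radius are arbitrary choices for illustration.

```python
# Illustrative sketch only: "basic" upscaling by pixel replication plus smoothing.
import numpy as np
from PIL import Image, ImageFilter

def basic_upscale(path, factor=2):
    img = np.asarray(Image.open(path).convert("RGB"))
    # Copy every pixel `factor` times along both axes (nearest-neighbor stretching).
    stretched = img.repeat(factor, axis=0).repeat(factor, axis=1)
    # Light filtering to soften the blocky, jagged edges the stretching leaves behind.
    return Image.fromarray(stretched).filter(ImageFilter.GaussianBlur(radius=1))

basic_upscale("frame_1080p.png").save("frame_upscaled.png")  # hypothetical file names
```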

What Is AI Upscaling? 

Traditional upscaling starts with a low-resolution image and tries to improve its visual quality at higher resolutions. AI upscaling takes a different approach: Given a low-resolution image, a deep learning model predicts a high-resolution image that would downscale to look like the original, low-resolution image.

To predict the upscaled images with high accuracy, a neural network model must be trained on countless images. The deployed AI model can then take low-resolution video and produce incredible sharpness and enhanced details no traditional scaler can recreate. Edges look sharper, hair looks scruffier and landscapes pop with striking clarity.
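
The training idea — downscale high-resolution frames to make low/high pairs, then teach a network to invert the downscaling — can be sketched with a tiny super-resolution net. The architecture, loss and learning rate below are a generic example of this approach, not the model that runs on SHIELD.

```python
# Illustrative sketch only: training a tiny 2x super-resolution network on
# (downscaled, original) pairs. A generic example, not the SHIELD upscaler.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySR(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
        )
        self.shuffle = nn.PixelShuffle(scale)   # rearranges channels into a 2x larger image

    def forward(self, lr):
        return self.shuffle(self.body(lr))

model = TinySR()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(hr_batch):
    """hr_batch: high-res frames (N, 3, H, W); the low-res input is derived by downscaling."""
    lr_batch = F.interpolate(hr_batch, scale_factor=0.5, mode="bicubic", align_corners=False)
    loss = F.l1_loss(model(lr_batch), hr_batch)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# e.g. train_step(torch.rand(4, 3, 128, 128))  # random frames, just to show the shapes
```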


AI Upscaling on NVIDIA SHIELD TV

The NVIDIA SHIELD TV is the first streaming media player to feature AI upscaling. It can upscale 720p or 1080p HD content to 4K at up to 30 frames per second in real time.

Trained offline on a dataset of popular TV shows and movies, the model uses SHIELD’s NVIDIA Tegra X1+ processor for real-time inference. AI upscaling makes HD video content for top apps including HBO, Hulu, Netflix, Prime Video and YouTube appear sharper on 4K TVs, creating a more immersive viewing experience.


NVIDIA SHIELD owners can toggle between basic and AI-enhanced modes in their device settings. A demo mode allows users to see a side-by-side comparison between regular content and AI-upscaled visuals. AI upscaling can be adjusted for high, medium or low detail enhancement — adjusting the confidence level of the neural network for detail prediction.

Learn more about upscaling on the NVIDIA SHIELD TV.


Laika’s Oscar-nominated ‘Missing Link’ Comes to Life with Intel Technology

As moviemaking — and even the actors themselves — goes increasingly digital, Oregon-based studio Laika is a unique hybrid. Most movies today are live action with visual effects added later, or they’re fully digital. Laika starts with the century-old craft of stop motion — 24 handcrafted frames per second — and uses visual effects not only to clean up those frames but to add backgrounds and characters.

“We’re dedicated to pushing the boundaries and trying to expand what you can do in a stop motion film,” says Jeff Stringer, director of production technology at Laika. “We want to try and get as much as we can in-camera, but using visual effects allows us to scale it up and do more.”

That’s exactly what Laika did with its latest feature, “Missing Link,” the company’s fifth-straight movie to be nominated for an Academy Award for best animated feature, and its first to win a Golden Globe. “The scope of this movie is huge,” the film’s writer-director, Chris Butler, told the Los Angeles Times. According to Animation World Network, the computational requirements of the film’s digital backgrounds and characters topped a petabyte of storage, and rendering the entire movie took 112 million processor hours — or 12,785 years.

Like most motion picture content today, Laika rendered “Missing Link” on Intel® Xeon® Scalable processors. Intel and Laika engineers are working together to apply AI to further automate and speed the company’s intricate process. “Our biggest metric is, is the performance believable and beautiful?” Stringer asks. “Our ethos is to not let the craft limit the storytelling but try to push the craft as far as the story wants to go.”

Voting for the 2020 Academy Awards ends Tuesday, Feb. 4, and the Oscars will be awarded Sunday, Feb. 9.

More: Go behind the scenes and explore a special interactive booklet celebrating the world-class artists who brought “Missing Link” to life.

