Heads Up, Down Under: Sydney Suburb Enhances Livability with Traffic Analytics

With a new university campus nearby and an airport under construction, the city of Liverpool, Australia, 27 kilometers southwest of Sydney, is growing fast.

More than 30,000 people are expected to make a daily commute to its central business district. Liverpool needed to know the potential impact on traffic flow and on the movement of pedestrians, cyclists and vehicles.

The city already operates a network of closed-circuit television (CCTV) cameras to monitor safety and security. Each camera captures large amounts of video and data that, due to stringent privacy regulations, is mainly combed through after an incident has been reported.

The challenge before the city was to turn this massive dataset into information that could help it run more efficiently, handle an influx of commuters and keep the place liveable for residents — without compromising anyone’s privacy.

To achieve this goal, the city has partnered with the Digital Living Lab of the University of Wollongong. Part of Wollongong’s SMART Infrastructure Facility, the DLL has developed what it calls the Versatile Intelligent Video Analytics platform. VIVA, for short, unlocks data so that owners of CCTV networks can access real-time, privacy-compliant data to make better informed decisions.

VIVA is designed to convert existing infrastructure into edge-computing devices embedded with the latest AI. The platform’s state-of-the-art deep learning algorithms are developed at the DLL on the NVIDIA Metropolis platform. Its video analytics models are trained using transfer learning to adapt to each use case, optimized via NVIDIA TensorRT software and deployed on NVIDIA Jetson edge AI computers.
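
To make the TensorRT step concrete, here is a minimal sketch of how a trained detector, once exported to ONNX, might be built into an engine destined for a Jetson device. The file names are placeholders, and this is an illustration of the workflow rather than the DLL's actual build script.

```python
# Hedged sketch: build a TensorRT engine from an ONNX model for edge
# deployment. "detector.onnx" stands in for a model produced by
# transfer learning; this is not the DLL's actual pipeline.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("detector.onnx", "rb") as f:
    parser.parse(f.read())

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # half precision suits Jetson GPUs

engine = builder.build_serialized_network(network, config)
with open("detector.plan", "wb") as f:
    f.write(engine)  # copy this serialized engine to the Jetson
```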

“We designed VIVA to process video feeds as close as possible to the source, which is the camera,” said Johan Barthelemy, lecturer at the SMART Infrastructure Facility of the University of Wollongong. “Once a frame has been analyzed using a deep neural network, the outcome is transmitted and the current frame is discarded.”

Disposing of frames maintains privacy as no images are transmitted. It also reduces the bandwidth needed.
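
In outline, that analyze-transmit-discard loop might look like the sketch below, where detect() and transmit() are hypothetical stand-ins for the platform's neural network and messaging layer:

```python
# Minimal sketch of the privacy-preserving edge loop described above.
import cv2

def detect(frame):       # hypothetical stand-in for the deep network
    return [{"class": "pedestrian", "bbox": (0, 0, 10, 10)}]

def transmit(outcome):   # hypothetical stand-in for the uplink
    print(outcome)

cap = cv2.VideoCapture("rtsp://camera.local/stream")  # placeholder feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    transmit(detect(frame))  # only detection metadata leaves the device
    # the raw frame goes out of scope here: nothing is stored or sent
```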

Beyond city streets like Liverpool’s, VIVA has been adapted for a wide variety of applications, such as identifying and tracking wildlife; detecting culvert blockages for stormwater management and flash-flood early warning; and tracking people with thermal cameras to understand mobility behavior during heat waves. It can also distinguish between firefighters searching a building and other occupants, helping identify those who may need help evacuating.

Making Sense of Traffic Patterns

The research collaboration between SMART, Liverpool’s city council and its industry partners is intended to improve the efficiency, effectiveness and accessibility of a range of government services and facilities.

For pedestrians, the project aims to understand where they’re going, their preferred routes and which areas are congested. For cyclists, it’s about the routes they use and ways to improve bicycle usage. For vehicles, understanding movement and traffic patterns, where they stop, and where they park are key.

Understanding mobility within a city formerly required a fleet of costly, fixed sensors, according to Barthelemy. Different models were needed to count each type of traffic, and manual processes were used to understand how the different types interacted.

Using computer vision on the NVIDIA Jetson TX2 at the edge, the VIVA platform can count the different types of traffic and capture their trajectory and speed. Data is gathered using the city’s existing CCTV network, eliminating the need to invest in additional sensors.
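
As a small worked example of one derived metric, average speed can be computed from a tracked trajectory once image positions have been mapped to ground coordinates. The helper below is a sketch under that assumption, not VIVA code:

```python
# Average speed in km/h from (frame_index, x_meters, y_meters) samples.
def speed_kmh(track, fps=25.0):
    dist = sum(
        ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        for (_, x0, y0), (_, x1, y1) in zip(track, track[1:])
    )
    seconds = (track[-1][0] - track[0][0]) / fps
    return 3.6 * dist / seconds if seconds else 0.0

print(speed_kmh([(0, 0.0, 0.0), (50, 8.0, 6.0)]))  # 10 m in 2 s -> 18.0
```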

Patterns of movements and points of congestion are identified and predicted to help improve street and footpath layout and connectivity, traffic management and guided pathways. The data has been invaluable in helping Liverpool plan for the urban design and traffic management of its central business district.

Machine Learning Application Built Using NVIDIA Technologies

SMART trained the machine learning applications on its VIVA platform for Liverpool on four workstations powered by a variety of NVIDIA TITAN GPUs, as well as six workstations equipped with NVIDIA RTX GPUs to generate synthetic data and run experiments.

In addition to using open datasets such as Open Images, COCO and Pascal VOC for training, DLL created synthetic data via an in-house application based on the Unity Engine. Synthetic data lets the models learn from numerous scenarios that might not otherwise be present at any given time, like rainstorms or masses of cyclists.

“This synthetic data generation allowed us to generate 35,000-plus images per scenario of interest under different weather, time of day and lighting conditions,” said Barthelemy. “The synthetic data generation uses ray tracing to improve the realism of the generated images.”

Inferencing is done with NVIDIA Jetson Nano, NVIDIA Jetson TX2 and NVIDIA Jetson Xavier NX, depending on the use case and processing required.


Heart of the Matter: AI Helps Doctors Navigate Pandemic

A month after it got FDA approval, a startup’s first product was saving lives on the front lines of the battle against COVID-19.

Caption Health develops Caption AI, software for ultrasound systems. It uses deep learning to empower medical professionals, including those without prior ultrasound experience, to perform echocardiograms quickly and accurately.

The results are images of the heart, often worthy of an expert sonographer, that help doctors diagnose and treat critically ill patients.

The coronavirus pandemic provided plenty of opportunities to try out the first dozen systems. Two doctors who used the new tool shared their stories on the condition that they and their patients remain anonymous.

A 53-year-old diabetic woman with COVID-19 went into cardiac shock in a New York hospital. Without the images from Caption AI, it would have been difficult to clinch the diagnosis, said a doctor on the scene.

The system helped the physician identify heart problems in an 86-year-old man with the virus in the same hospital, helping doctors bring him back to health. It was another case among more than 200 in the facility that was effectively turned into a COVID-19 hospital in mid-March.

The Caption Health system made a tremendous impact for a staff spread thin, said the doctor. It would have been hard for a trained sonographer to keep up with the demand for heart exams, he added.

Heart Test Becomes Standard Procedure

Caption AI helped doctors in North Carolina determine that a 62-year-old man had COVID-19-related heart damage. Thanks, in part, to the ease of using the system, the hospital now performs echocardiograms for most patients with the virus.

At the height of the pandemic’s first wave, the hospital stationed ultrasound systems with Caption AI in COVID-19 wards. Rather than sending sonographers from unit to unit, the usual practice, staff stationed at the wards used the systems. The change reduced staff exposure to the virus and conserved precious protective gear.

Beyond the pandemic, the system will help hospitals provide urgent services while keeping a lid on rising costs, said a doctor at that hospital.

“AI-enabled machines will be the next big wave in taking care of patients wherever they are,” said Randy Martin, chief medical officer of Caption Health and emeritus professor of cardiology at Emory University.

Martin joined the startup about four years ago after meeting its founders, who shared expertise and passion for medicine and AI. Today their software “takes a user through 10 standard views of the heart, coaching them through some 90 fine movements experts make,” he said.

“We don’t intend to replace sonographers; we’re just expanding the use of portable ultrasound systems to the periphery for more early detection,” he added.

Coping with an Unexpected Demand Spike

In the early days of the pandemic, that expansion couldn’t come fast enough.

In late March, the startup exhausted its supply of the NVIDIA Quadro P3000 GPUs that run its AI software. Amid the global shutdown, it reached out to its supply chain.

“We are experiencing overwhelming demand for our product,” the company’s CEO wrote, after placing orders for 100 GPUs with a distributor.

Caption Health has systems currently in use at 11 hospitals. It expects to deploy Caption AI at a number of additional sites in the coming weeks.

GPUs at the Heart of Automated Heart Tests

The startup currently integrates its software in a portable ultrasound from Terason. It intends to partner with more ultrasound makers in the future. And it advises partners to embed GPUs in their future ultrasound equipment.

The Quadro P3000 in Caption AI runs real-time inference using deep convolutional neural networks. These networks guide operators in positioning the probe that captures images, then automatically choose the highest-quality heart images and interpret them to help doctors make informed decisions.
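
A highly simplified sketch of that flow follows; the callables are hypothetical stand-ins for Caption Health's proprietary networks, not the product's API:

```python
# Illustration only: keep the highest-quality frame per standard view.
def select_best_frames(stream, classify_view, quality_score, show_guidance):
    best = {}
    for frame in stream:
        view = classify_view(frame)    # which of the ~10 standard views
        score = quality_score(frame)   # CNN-based image-quality estimate
        show_guidance(frame)           # probe-positioning cues on screen
        if score > best.get(view, (0.0, None))[0]:
            best[view] = (score, frame)  # auto-select the best capture
    return {view: frame for view, (score, frame) in best.items()}
```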

The NVIDIA GPU also freed up four CPU cores, making space to process other tasks on the system, such as providing a smooth user experience.

The startup trained its AI models on a database of 1 million echocardiograms from clinical partners. An early study in partnership with Northwestern Medicine and the Minneapolis Heart Institute showed Caption AI helped eight registered nurses with no prior ultrasound experience capture highly accurate images on a wide variety of patients.

Inception Program Gives Startup Traction

Caption Health, formerly called Bay Labs, was founded in 2015 in Brisbane, Calif. It received a $125,000 prize at a 2017 GTC competition for members of NVIDIA’s Inception program, which gives startups access to technology, expertise and markets.

“Being part of the Inception program has provided us with increased recognition in the field of deep learning, a platform to share our AI innovations with healthcare and deep learning communities, and phenomenal support getting NVIDIA GPUs into our supply chain so we could deliver Caption AI,” said Charles Cadieu, co-founder and president of Caption Health.

Now that its tool has been tested in a pandemic, Caption Health looks forward to opportunities to help save lives across many ailments. The company aims to ride a trend toward more portable systems that extend availability and lower costs of diagnostic imaging.

“We hope to see our technology used everywhere from big hospitals to rural villages to examine people for a wide range of medical conditions,” said Cadieu.

To learn more about Caption Health and other companies like it, watch the webinar on healthcare startups against COVID-19.

Taking AI to Market: NVIDIA and Arterys Bridge Gap Between Medical Researchers and Clinicians

Around the world, researchers in startups, academic institutions and online communities are developing AI models for healthcare. Getting these models from their hard drives and into clinical settings can be challenging, however.

Developers need feedback from healthcare practitioners on how their models can be optimized for the real world. So, San Francisco-based AI startup Arterys built a forum for these essential conversations between clinicians and researchers.

Called the Arterys Marketplace, and now integrated with the NVIDIA Clara Deploy SDK, the platform makes it easy for researchers to share medical imaging AI models with clinicians, who can try them on their own data.

“By integrating the NVIDIA Clara Deploy technology into our platform, anyone building an imaging AI workflow with the Clara SDK can take their pipeline online with a simple handoff to the Arterys team,” said Christian Ulstrup, product manager for Arterys Marketplace. “We’ve streamlined the process and are excited to make it easy for Clara developers to share their models.”

Researchers can submit medical imaging models in any stage of development — from AI tools for research use to apps with regulatory clearance. Once the model is posted on the public Marketplace site, anyone with an internet connection can test it by uploading a medical image through a web browser.

Models on Arterys Marketplace run on NVIDIA GPUs through Amazon Web Services for inference.
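
From a clinician's side, testing a hosted model boils down to an image upload over HTTPS. The sketch below shows the general shape of such a request; the URL and field names are invented for illustration, not Arterys' actual API:

```python
# Hypothetical example of submitting an image to a hosted model.
import requests

with open("chest_xray.dcm", "rb") as f:  # placeholder test image
    resp = requests.post(
        "https://marketplace.example.com/models/covid-cxr/infer",
        files={"image": f},
    )
print(resp.json())  # e.g. {"finding": "...", "confidence": 0.93}
```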

A member of both the NVIDIA Inception and AWS Activate programs, which collaborate to help startups get to market faster, Arterys was founded in 2011. The company builds clinical AI applications for medical imaging and launched the Arterys Marketplace at the RSNA 2019 medical conference.

It recently raised $28 million in funding to further develop the ecosystem of partners and clinical-grade AI solutions on its platform.

Several of the models now on the Arterys Marketplace are focused on COVID-19 screening from chest X-rays and CT images. Among them is a model jointly developed by NVIDIA’s medical imaging applied research team and clinicians and data scientists at the National Institutes of Health. Built in under three weeks using the NVIDIA Clara Train framework, the model can help researchers study the detection of COVID-19 from chest CT scans.

Building AI Pillar of the Community

While there’s been significant investment in developing AI models for healthcare over the last decade, the Arterys team found that it can still take years to get the tools into radiologists’ hands.

“There’s been a huge gap between the smart, passionate researchers building AI models for healthcare and the end users — radiologists and clinicians who can use these models in their workflow,” Ulstrup said. “We realized that no research institution, no startup was going to be able to do this alone.”

The Arterys Marketplace was created with simplicity in mind. Developers need only fill out a short form to submit an AI model for inclusion, and then can send the model to users as a URL — all for free.

For clinicians around the world, there’s no need to download and install an AI model. All that’s needed is an internet connection and a couple of medical images to upload for testing. Users can choose whether or not their imaging data is shared with the researchers.

The images are analyzed with NVIDIA GPUs in the cloud, and results are emailed to the user within minutes. A Slack channel provides a forum for clinicians to provide feedback to researchers, so they can work together to improve the AI model.

“In healthcare, it can take years to get from an idea to seeing it implemented in clinical settings. We’re reducing that to weeks, if not days,” said Ulstrup. “It’s absurdly easy compared to what the process has been in the past.”

With a focus on open innovation and rapid iteration, Ulstrup says, the Arterys Marketplace aims to bring doctors into the product development cycle, helping researchers build better AI tools. By interacting with clinicians in different geographies, developers can improve their models’ ability to generalize across different medical equipment and imaging datasets.

Over a dozen AI models are on the Arterys Marketplace so far, with more than 300 developers, researchers, and startups joining the community discussion on Slack.

“Once models are hosted on the Arterys Marketplace, developers can send them to researchers anywhere in the world, who in turn can start dragging and dropping data in and getting results,” Ulstrup said. “We’re seeing discussion threads between researchers and clinicians on every continent, sharing screenshots and feedback — and then using that feedback to make the models even better.”

Check out the research-targeted AI COVID-19 Classification Pipeline developed by NVIDIA and NIH researchers on the Arterys Marketplace. To hear more from the Arterys team, register for the Startups4COVID webinar, taking place July 28.


Into the Woods: AI Startup Lets Foresters See the Wood for the Trees

AI startup Trefos is helping foresters see the wood for the trees.

Using custom lidar- and camera-mounted drones, the Philadelphia-based company collects data for high-resolution, 3D forest maps. The resulting metrics allow government agencies and the forestry industry to estimate the volume of timber and biomass in an area of forest, as well as the amount of carbon stored in the trees.

With this unprecedented detail, foresters can make more informed decisions when, for example, evaluating the need for controlled burns to clear biomass and reduce the risk of wildfires.

“Forests are often very dense, with a very repetitive layout,” said Steven Chen, founder and CEO of the startup, a member of the NVIDIA Inception program, which supports startups from product development to deployment. “We can use deep learning algorithms to detect trees, isolate them from the surrounding branches and vines, and use those as landmarks.”

Trained on NVIDIA GPUs, the deep learning algorithms detect trees from both camera images and lidar point clouds. AI can dramatically increase the amount of data foresters are able to collect, while delivering results much faster than traditional forest monitoring — where scientists physically walk through forest terrain to record metrics of interest, like the width of a tree trunk.
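
For a flavor of the lidar side, here is a toy sketch that counts candidate trunks by clustering a breast-height slice of a point cloud. It assumes a NumPy array of points in meters, uses random stand-in data, and is not Trefos's actual pipeline:

```python
# Toy trunk counting: cluster the x-y positions of points sampled
# near breast height (~1.4 m), where trunks appear as dense blobs.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
points = rng.uniform([0, 0, 0], [50, 50, 20], size=(100_000, 3))  # demo data

slice_ = points[(points[:, 2] > 1.2) & (points[:, 2] < 1.6)]
labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(slice_[:, :2])
n_trunks = len(set(labels)) - (1 if -1 in labels else 0)  # drop noise label
print(f"candidate trunks: {n_trunks}")
```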

“It’s an extremely time-consuming process, often walking through very dense forests with a tape measure,” Chen said. “It would take at least a day to survey 100 acres, while measuring less than 2 percent of the trees.”

By collecting data with drone-mounted lidar and camera sensors, the same 100 acres can be surveyed in just 30 minutes — while measuring every tree.

Getting AI Down to a Tree 

Chen began his career in finance, working as an agricultural-options trader. “There, I saw the importance of getting inventory data for forests,” he said.

So when he joined the University of Pennsylvania as a Ph.D. student in robotics, he began working on ways robotics and machine learning can help get a better picture of the layout and features of forests around the world. Much of the research behind Trefos originated in Chen’s work in the Vijay Kumar Lab, a robotics research group at UPenn.

Trefos’s custom-built drones can fly autonomously through both organized, planted forests and wild ones. Chen and his team are working with the New Jersey Forest Service, which allows Trefos’s drones to fly through state forests and provides perspective on the kinds of metrics that would be useful to foresters.

The company has collected and labeled all its own training data to ensure high quality and to maintain control over the properties being labeled — such as whether the algorithms should classify a tree and its branches as two separate elements or just one.

Some processing is done at the edge, helping autonomously fly the drone through forests. But the data collected for mapping is processed offline on NVIDIA hardware, including TITAN GPUs and RTX GPUs on desktop systems — plus the NVIDIA DGX Station and DGX-1 server for heavier computing workloads.

Its AI algorithms are developed using the TensorFlow deep learning framework. While the drone platform currently captures images at 1-megapixel resolution, Trefos is looking at 4K cameras for the deployed product.

Chen founded Trefos less than two years ago. The company has received a grant from the National Science Foundation’s Small Business Innovation Research program and is running pilot tests in forests across the U.S.


Fighting COVID-19 in New Era of Scientific Computing

Scientists and researchers around the world are racing to find a cure for COVID-19.

That’s made the work of all those digitally gathered for this week’s high performance computing conference, ISC 2020 Digital, more vital than ever.

And the work of these researchers is broadening to encompass a wider range of approaches than ever.

The NVIDIA scientific computing platform plays a vital role, accelerating progress across this entire spectrum of approaches — from data analytics to simulation and visualization to AI to edge processing.

Some highlights:

  • In genomics, Oxford Nanopore Technologies was able to sequence the virus genome in just 7 hours using our GPUs.
  • In infection analysis and prediction, the NVIDIA RAPIDS team has GPU-accelerated Plotly’s Dash, a data visualization tool, enabling clearer insights into real-time infection rate analysis.
  • In structural biology, the U.S. National Institutes of Health and the University of Texas, Austin, are using the GPU-accelerated software CryoSPARC to reconstruct the first 3D structure of the virus protein using cryogenic electron microscopy.
  • In treatment, NVIDIA worked with the National Institutes of Health to build an AI that accurately classifies COVID-19 infection based on lung scans, so efficient treatment plans can be devised.
  • In drug discovery, Oak Ridge National Laboratory ran the Scripps Research Institute’s AutoDock on the GPU-accelerated Summit supercomputer to screen a billion potential drug combinations in just 12 hours.
  • In robotics, startup Kiwi is building robots to deliver medical supplies autonomously.
  • And at the edge, Whiteboard Coordinator Inc. built an AI system that automatically detects elevated body temperatures, screening well over 2,000 healthcare workers per hour.

It’s truly inspirational to wake up every day and see the amazing effort going on around the world, and the role NVIDIA’s scientific computing platform plays in helping understand the virus and discover testing and treatment options to fight the COVID-19 pandemic.

The reason we’re able to play a role in so many efforts, across so many areas, is because of our strong focus on providing end-to-end workflows for the scientific computing community.

We’re able to provide these workflows because of our approach to full-stack innovation to accelerate all key application areas.

For data analytics, we accelerate the key frameworks like Spark 3.0, RAPIDS and Dask. This acceleration is built using our domain-specific CUDA-X libraries for data analytics, such as cuDF, cuML and cuGraph, along with I/O acceleration technologies from Magnum IO.
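
As a minimal illustration of that stack in action, a RAPIDS cuDF script reads almost exactly like its pandas equivalent while executing on the GPU; the file and column names below are placeholders:

```python
# Sketch: GPU-accelerated aggregation with RAPIDS cuDF.
import cudf

df = cudf.read_csv("cases.csv")                # loads directly to GPU memory
daily = df.groupby("date")["new_cases"].sum()  # GPU-side group-by
print(daily.sort_index().tail())               # most recent daily totals
```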

These libraries contain millions of lines of code and provide seamless acceleration to developers and users, whether they’re creating applications on the desktops accelerated with our GPUs or running them in data centers, in edge computers, in supercomputers, or in the cloud.

Similarly, we accelerate over 700 HPC applications, including all the most widely used scientific applications.

NVIDIA accelerates all frameworks for AI, which has become crucial for tasks where the information is incomplete — where there are no first principles to work with or the first principle-based approaches are too slow.

And, thanks to our roots in visual computing, NVIDIA provides accelerated visualization solutions, so terabytes of data can be visualized.

NASA, for instance, used our acceleration stack to visualize a simulated landing of the first crewed mission to Mars, in what is the world’s largest real-time, interactive volumetric visualization (150TB).

Our deep domain libraries also provide a seamless performance boost to scientific computing users across different generations of our architecture, from Volta to Ampere, for instance.

NVIDIA is also making all of its new and improved GPU-optimized scientific computing applications available through NGC for researchers to accelerate their time to insight.

Together, all of these pillars of scientific computing — simulation, AI, data analytics, edge streaming and visualization workflows — are key to tackling the challenges of today, and tomorrow.


Best AI Processor: NVIDIA Jetson Nano Wins 2020 Vision Product of the Year Award

The small but mighty NVIDIA Jetson Nano has added yet another accolade to the company’s growing collection of awards.

The Edge AI and Vision Alliance, a worldwide collection of companies creating and enabling applications for computer vision and edge AI, has given Jetson Nano its 2020 Vision Product of the Year Award for “Best AI Processor.”

Now in its third year, the Vision Product of the Year Awards were announced in five categories. The winning entries were chosen by an independent panel of judges based on innovation, impact on customers and the market, and competitive differentiation.

“Congratulations to NVIDIA on being selected for this prestigious award by our panel of independent judges,” said Jeff Bier, founder of the Edge AI and Vision Alliance. “NVIDIA is a pioneer in embedded computer vision and AI, and has sustained an impressive pace of innovation over many years.”

The NVIDIA Jetson Nano, launched last year, delivers powerful AI compute at the edge in a compact, easy-to-use platform with full software programmability. At just 70 x 45 mm, the Jetson Nano module is the smallest in the Jetson lineup.

But don’t let its credit-card-sized form factor fool you. With the performance and capabilities needed to run modern AI workloads fast, Jetson Nano delivers big when it comes to deploying AI at the edge across multiple industries — from robotics and smart cities to retail and healthcare.

Opening new possibilities for AI at the edge, Jetson Nano delivers up to 472 GFLOPS of accelerated computing and can run many modern neural networks in parallel.

It’s production-ready and supports all popular AI frameworks. This makes Jetson Nano ideal for developing AI-powered products such as IoT gateways, network video recorders, cameras, robots and optical inspection systems.
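
For a sense of what development on the module looks like, here is a short sketch based on the open-source jetson-inference library; the camera URI and model choice are illustrative:

```python
# Sketch: real-time object detection on Jetson Nano with jetson-inference.
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.videoSource("csi://0")      # MIPI CSI camera
display = jetson.utils.videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()
    detections = net.Detect(img)                  # runs on the Nano's GPU
    display.Render(img)
    display.SetStatus(f"{len(detections)} objects detected")
```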

The system-on-module is powered by an NVIDIA Maxwell GPU and supported by the NVIDIA JetPack SDK. That significantly expands the choices available to manufacturers, developers and educators who need embedded edge-computing options with the performance to support AI workloads but are constrained by size, weight, power budget or cost.

This comprehensive software stack makes AI deployment on autonomous machines fast, reduces complexity and speeds time to market.

NVIDIA Jetson is the leading AI-at-the-edge computing platform, with nearly half a million developers. With support for cloud-native technologies now available across the full Jetson lineup, manufacturers of intelligent machines and developers of AI applications can build and deploy high-quality, software-defined features on embedded and edge devices targeting a wide variety of applications.

Cloud-native support allows users to implement frequent improvements, improve accuracy and quickly deploy new algorithms throughout an application’s lifecycle, at scale, while minimizing downtime.

Learn more about why the Edge AI and Vision Alliance selected Jetson Nano for its Best AI Processor award.

New to the Jetson platform? Get started.


Insights Expedited: AI Inference Expands Its Scope and Speed

AI is spreading from agriculture to X-rays thanks to its uncanny ability to quickly infer smart choices based on mounds of data.

As the datasets and the neural networks analyzing them grow, users are increasingly turning to NVIDIA GPUs to accelerate AI inference.

To see inference at work, just look under the hood of widely used products from companies that are household names.

GE Research, for example, deploys GPU-accelerated AI models in the aviation, healthcare, power and transportation industries. They automate the inspection of factories, enable smart trains, monitor power stations and interpret medical images.

GE runs these AI models in data center servers on NVIDIA DGX systems with V100 Tensor Core GPUs and in edge-computing networks with Jetson AGX Xavier modules. The hardware runs NVIDIA’s TensorRT inference engine and its CUDA/cuDNN acceleration libraries for deep learning, as well as the NVIDIA JetPack toolkit for Jetson modules.

Video Apps, Contracts Embrace Inference

In the consumer market, two of the world’s most popular mobile video apps run AI inference on NVIDIA GPUs.

TikTok and its forerunner in China, Douyin, together hit 1 billion downloads globally in February 2019. ByteDance, the developer and host of the apps, sees a staggering 50 million new videos uploaded each day by its 400 million daily active users.

ByteDance runs TensorRT on thousands of NVIDIA T4 and P4 GPU servers so users can search and get recommendations about cool videos to watch. The company estimates it has saved millions of dollars using the NVIDIA products while slashing in half the latency of its online services.

In business, Deloitte uses AI inference in its dTrax software to help companies manage complex contracts. For instance, dTrax can find and update key passages in lengthy agreements when regulations change or when companies are planning a big acquisition.

Several companies around the world use dTrax today. The software — running on NVIDIA DGX-1 systems in data centers and AWS P3 instances in the cloud — won a smart-business award from the Financial Times in 2019.

Inference Runs 2-10x Faster on GPUs

Inference jobs on average-size models run twice as fast on GPUs as on CPUs, and on large models such as RoBERTa they run 10x faster, according to tests done by Square, a financial services company.

That’s why NVIDIA GPUs are key to its goal of spreading the use of its Square Assistant from a virtual scheduler to a conversational AI engine driving all the company’s products.

BMW Group just announced it’s developing five new types of robots using the NVIDIA Isaac robotics platform to enhance logistics in its car manufacturing plants. One of the new bots, powered by NVIDIA Jetson AGX Xavier, delivers up to 32 trillion operations per second of performance for computer vision tasks such as perception, pose estimation and path planning.

AI inference is happening inside cars, too. In late April, China’s Xpeng unveiled its P7 all-electric sports sedan, which uses NVIDIA DRIVE AGX Xavier to help deliver Level 3 automated driving features, running inference on data from a suite of sensors.

Inference performance on NVIDIA’s data center platform scaled nearly 50x in the last three years thanks in large part to the introduction of Tensor Cores and ongoing software optimizations in TensorRT and acceleration of AI frameworks such as PyTorch and TensorFlow.

Medical experts from around the world gave dozens of talks at GTC 2020 about use of AI in radiology, genomics, microscopy and other healthcare fields. In one talk, Geraldine McGinty, chair of the American College of Radiology, called AI a “once-in-a-generation opportunity” to improve the quality of care while driving down costs.

Down on the farm, a growing crop of startups is using AI to increase efficiency. For example, Rabbit Tractors, an NVIDIA Inception program member, uses Jetson Nano modules on multifunction robots that use camera and lidar data to infer their way along rows they need to seed, spray or harvest.

The list of companies with use cases for GPU-accelerated inference goes on. It includes fraud detection at American Express, industrial inspection for P&G and search engines for web giants.

Inference Gets Up to 7x Gains on A100

The potential for inference on GPUs is headed up and to the right.

The NVIDIA Ampere architecture gives inference up to a 7x speedup thanks to the multi-instance GPU feature. Support in the A100 GPUs for a new approach to sparsity in deep neural networks promises even further gains. It’s one of several new features of the architecture discussed in a technical overview of the A100 GPUs.
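
To illustrate the sparsity idea: A100 Tensor Cores accelerate a 2:4 structured pattern in which two of every four consecutive weights are zero. The snippet below shows the pruning pattern itself; it is a sketch of the concept, not NVIDIA's pruning tooling:

```python
# Zero the two smallest-magnitude weights in every group of four.
import numpy as np

def prune_2_4(weights):
    w = weights.copy().reshape(-1, 4)
    smallest = np.argsort(np.abs(w), axis=1)[:, :2]  # two per group
    np.put_along_axis(w, smallest, 0.0, axis=1)
    return w.reshape(weights.shape)

print(prune_2_4(np.array([0.9, -0.1, 0.4, 0.05, 1.2, 0.3, -0.02, 0.7])))
# -> [0.9 0.  0.4 0.  1.2 0.  0.  0.7]
```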

There are plenty of resources to discover where inference could go next and how to get started on the journey.

A webinar describes in more detail the potential for inference on the A100. Tutorials, more customer stories and a white paper on NVIDIA’s Triton Inference Server for deploying AI models at scale can all be found on a page dedicated to NVIDIA’s inference platform.

Users can find an inference runtime, optimization tools and code samples on the NVIDIA TensorRT page. Pre-trained models and containers that package up the code needed to get started are in the NGC software catalog.


Dialed In: Why Accelerated Analytics Is Game Changing for Telecoms

From rapidly evolving technologies to stiff competition, there’s nothing simple about the wireless industry.

Take 5G. Whether deciding where to locate a complex web of new infrastructure or analyzing performance and service levels, the common element in all of the challenges the telecom industry faces is data — petabytes of it.

Data flows within the industry are orders of magnitude greater than just a few years ago. And systems have become faster. This puts a premium on smart, quick decision-making.

Collecting and normalizing huge network datasets is just the start of the process. The data also has to be analyzed. To address these issues, wireless carriers like Verizon and data providers like Skyhook are turning to data analytics accelerated by the OmniSci platform and NVIDIA GPUs.

Accelerating Analytics

OmniSci, based in San Francisco, pioneered the concept of using the incredible parallel processing power of GPUs to interactively query and visualize massive datasets in milliseconds.

NVIDIA is a partner of and an investor in the company through NVIDIA Inception GPU Ventures. OmniSci is also a premier member of NVIDIA Inception, a virtual accelerator program that provides startups with fundamental tools, expertise and go-to-market support.

Composed of a lightning-fast SQL engine along with rendering and visualization systems, the OmniSci accelerated analytics platform allows users to run SQL queries, filter the results and chart them over a map near instantaneously. In the time it takes for traditional analytics tools to respond to a single query, the OmniSci platform allows users to get answers to questions as fast as they can formulate them.
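
For a feel of the developer-facing side, queries can also be issued from Python through pymapd, OmniSci's open-source client; the connection details and table below are invented for illustration:

```python
# Hedged sketch: run a SQL aggregation on OmniSci from Python.
from pymapd import connect

con = connect(user="admin", password="HyperInteractive",
              host="localhost", dbname="omnisci")
df = con.select_ipc(             # results arrive as a pandas DataFrame
    "SELECT tower_id, COUNT(*) AS sessions "
    "FROM network_logs "
    "WHERE signal_dbm < -100 "
    "GROUP BY tower_id ORDER BY sessions DESC LIMIT 10")
print(df)
```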

The extreme parallel processing speed of NVIDIA GPUs allows entire datasets to be explored — without indexing or pre-aggregation. Analysts can create dashboards composed of geo-point maps, geo-heat maps, choropleths and scatter plots, in addition to conventional line, bar and pie charts.

Even non-technical users can query and visualize millions of polygons, based on billions of rows of data, at their own pace. Enhancing the interface, the RAPIDS machine learning framework enables users to create predictive models based on existing data.

Multiple Applications

Ensuring the rollout of even coverage for wireless customers requires coordinating a huge number of new cellular base stations — in everything from cell towers to homes and businesses — as well as new distributed antenna systems for major indoor and outdoor facilities.

Wireless providers must also continually monitor and analyze network performance; surges and anomalies have to be identified and quickly addressed; and equipment must be constantly optimized to meet customer demands. Additionally, cybersecurity defenses require a never-ending cycle of management, reviews and upgrades.

Accelerated analytics helps wireless carriers solve many of these difficult operational problems. For network planning, GPUs offer much deeper analysis of market utilization and can spot gaps in daypart or geographic coverage. Log queries can be reviewed within moments, instead of hours, and help predict usage in specific geographic areas to better inform engineering or utilization planning decisions.

To ensure optimal service levels for customers, engineers are using GPU-accelerated analytics to better understand network demand by parameters such as daypart, service and type of data. They can review these metrics in any combination and at any level — nationwide, regionally or even at street level — with results plotted in fractions of a second.

On the business side, marketing and customer service personnel require improved ways to attract new customers and reduce subscriber churn. Where prepaid wireless is the norm, it’s vital to introduce services that generate incremental revenue while reducing turnover.

With OmniSci, these teams can review mobile and application data to identify opportunities for promotions or upselling and to reduce customer churn. Location, activity and hardware profiles can all be taken into account to improve ad targeting and campaign measurement.

Global Reach

Verizon, America’s largest telecom with over 150 million subscribers, uses the OmniSci platform to improve its network reliability.

Anomalies are identified in just moments, versus old methods that would take 45 to 60 minutes, leading to faster problem resolution. Verizon also uses OmniSci to uncover long-term trends and to facilitate the expansion of its Wi-Fi services into new venues such as sports stadiums.

Skyhook, a mobile positioning and location provider, uses OmniSci to cross-reference Wi-Fi, cellular and sensor data to provide precise information about users and devices. Retailers, to cite one example, use this intelligence to analyze store visits and shopper behavior patterns. The data also helps with customer acquisition, site selection and various investment opportunities.

Skyhook’s insights further aid in the creation of location-based experiences such as customized storytelling and venue orientation. When disasters strike, the company’s real-time knowledge base helps first responders understand complex damage scenarios and to move quickly to locations where help is needed most.

Rather than mustering a little more performance out of legacy systems, new challenges require new solutions. OmniSci and NVIDIA are helping telcos answer the call.


Robotics Reaps Rewards at ICRA: NVIDIA’s Dieter Fox Named RAS Pioneer

Thousands of researchers from around the globe will be gathering — virtually — next week for the IEEE International Conference on Robotics and Automation.

As a flagship conference on all things robotics, ICRA has become a renowned forum since its inception in 1984. This year, NVIDIA’s Dieter Fox will receive the RAS Pioneer Award, given by the IEEE Robotics and Automation Society.

Fox is the company’s senior director of robotics research and head of the NVIDIA Robotics Research Lab, in Seattle, as well as a professor at the University of Washington Paul G. Allen School of Computer Science & Engineering and head of the UW Robotics and State Estimation Lab. At the NVIDIA lab, Fox leads over 20 researchers and interns, fostering collaboration with the neighboring UW.

He’s receiving the RAS Pioneer Award “for pioneering contributions to probabilistic state estimation, RGB-D perception, machine learning in robotics, and bridging academic and industrial robotics research.”

“Being recognized with this award by my research colleagues and the IEEE society is an incredible honor,” Fox said. “I’m very grateful for the amazing collaborators and students I had the chance to work with during my career. I also appreciate that IEEE sees the importance of connecting academic and industrial research — I believe that bridging these areas allows us to make faster progress on the problems we really care about.”

Fox will also give a talk at the conference, where NVIDIA Research will present a total of 19 papers investigating a variety of topics in robotics.

Here’s a preview of some of the show-stopping NVIDIA research papers that were accepted at ICRA:

Robotics Work a Finalist for Best Paper Awards

“6-DOF Grasping for Target-Driven Object Manipulation in Clutter” is a finalist for both the Best Paper Award in Robot Manipulation and the Best Student Paper Award.

The paper delves into the challenging robotics problem of grasping in cluttered environments, which is a necessity in most real-world scenes, said Adithya Murali, one of the lead researchers and a graduate student at the Robotics Institute at Carnegie Mellon University. Much current research considers only planar grasping, in which a robot grasps from the top down rather than moving in more dimensions.

Arsalan Mousavian, another lead researcher on the paper and a senior research scientist at the NVIDIA Robotics Research Lab, explained that they performed this research in simulation. “We weren’t bound by any physical robot, which is time-consuming and very expensive,” he said.

Mousavian and his colleagues trained their algorithms on NVIDIA V100 Tensor Core GPUs, then tested them on NVIDIA TITAN GPUs. For this paper, the training data consisted of 750,000 simulated robot-object interactions, generated in less than half a day; the models were trained in a week. Once trained, the robot was able to robustly manipulate objects in the real world.

Replanning for Success

NVIDIA Research also considered how robots could plan to accomplish a wide variety of tasks in challenging environments, such as grasping an object that isn’t visible, in a paper called “Online Replanning in Belief Space for Partially Observable Task and Motion Problems.”

The approach makes a variety of tasks possible. Caelan Garrett, graduate student at MIT and a lead researcher on the paper, explained, “Our work is quite general in that we deal with tasks that involve not only picking and placing things in the environment, but also pouring things, cooking, trying to open doors and drawers.”

Garrett and his colleagues created an open-source algorithm, SS-Replan, that allows the robot to incorporate observations when making decisions, which it can adjust based on new observations it makes while trying to accomplish its goal.

They tested their work in NVIDIA Isaac Sim, a simulation environment used to develop, test and evaluate virtual robots, and on a real robot.

DexPilot: A Teleoperated Robotic Hand-Arm System

In another paper, NVIDIA researchers confronted the problem that current robotics algorithms don’t yet allow for a robot to complete precise tasks such as pulling a tea bag out of a drawer, removing a dollar bill from a wallet or unscrewing the lid off a jar autonomously.

In “DexPilot: Depth-Based Teleoperation of Dexterous Robotic Hand-Arm System,” NVIDIA researchers present a system in which a human can remotely operate a robotic system. DexPilot observes the human hand using cameras, and then uses neural networks to relay the motion to the robotic hand.

Whereas other systems require expensive equipment such as motion-capture systems, gloves and headsets, DexPilot achieves teleoperation through a combination of deep learning and optimization.

Once the data was collected, the system took 15 hours to train on a single GPU, according to NVIDIA researchers Ankur Handa and Karl Van Wyk, two of the paper’s authors. They and their colleagues used NVIDIA TITAN GPUs for their research.

Learn all about these papers and more at ICRA 2020.

The NVIDIA research team has more than 200 scientists around the globe, focused on areas such as AI, computer vision, self-driving cars, robotics and graphics. Learn more at www.nvidia.com/research.


Qure.ai Helps Clinicians Answer Questions from COVID-19 Lung Scans

Qure.ai, a Mumbai-based startup, has been developing AI tools to detect signs of disease from lung scans since 2016. So when COVID-19 began spreading worldwide, the company raced to retool its solution to address clinicians’ urgent needs.

In use in more than two dozen countries, Qure.ai’s chest X-ray tool, qXR, was trained on 2.5 million scans to detect lung abnormalities — signs of tumors, tuberculosis and a host of other conditions.

As the first COVID-specific datasets were released by countries with early outbreaks — such as China, South Korea and Iran — the company quickly incorporated those scans, enabling qXR to mark areas of interest on a chest X-ray image and provide a COVID-19 risk score.

“Clinicians around the world are looking for tools to aid critical decisions around COVID-19 cases — decisions like when a patient should be admitted to the hospital, or be moved to the ICU, or be intubated,” said Chiranjiv Singh, chief commercial officer of Qure.ai. “Those clinical decisions are better made when they have objective data. And that’s what our AI tools can provide.”

While doctors have data like temperature readings and oxygen levels on hand, AI can help quantify the impact on a patient’s lungs — making it easier for clinicians to triage potential COVID-19 cases where there’s a shortage of testing kits, or compare multiple chest X-rays to track the progression of disease.

In recent weeks, the company deployed the COVID-19 version of its tool in around 50 sites around the world, including hospitals in the U.K., India, Italy and Mexico. Healthcare workers in Pakistan are using qXR in medical vans that actively track cases in the community.

A member of the NVIDIA Inception program, which provides resources to help startups scale faster, Qure.ai uses NVIDIA TITAN GPUs on premises, and V100 Tensor Core GPUs through Amazon Web Services for training and inference of its AI models. The startup is in the process of seeking FDA clearance for qXR, which has received the CE mark in Europe.

Capturing an Image of COVID-19

For coronavirus cases, chest X-rays are just one part of the picture — because not every case shows impact on the lungs. But due to the wide availability of X-ray machines, including portable bedside ones, they’ve quickly become the imaging modality of choice for hospitals admitting COVID-19 patients.

“Based on the literature to date, we know certain indicators of COVID-19 are visible in chest X-rays. We’re seeing what’s called ground-glass opacities and consolidation, and noticed that the virus tends to settle in both sides of the lung,” Singh said. “Our AI model applies a positive score to these factors and relevant findings, and a negative score to findings like calcifications and pleural effusion that suggest it’s not COVID.”

The qXR tool provides clinicians with one of four COVID-19 risk scores: high, medium, low or none. Within a minute, it also labels and quantifies lesions, providing an objective measurement of lung impact.
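
As a purely illustrative sketch of how such findings could be combined into a four-level score (and emphatically not Qure.ai's published method), consider:

```python
# Toy scoring: findings that suggest COVID-19 add to the score;
# counter-indications like calcification and pleural effusion subtract.
def covid_risk(p_ggo, p_consolidation, bilateral, p_calcif, p_effusion):
    score = 0.5 * p_ggo + 0.4 * p_consolidation + (0.1 if bilateral else 0.0)
    score -= 0.3 * p_calcif + 0.3 * p_effusion
    if score <= 0.05:
        return "none"
    return "high" if score > 0.6 else "medium" if score > 0.3 else "low"

print(covid_risk(0.9, 0.7, True, 0.1, 0.0))  # -> high
```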

By rapidly processing chest X-ray images, qXR is helping some doctors triage patients with COVID-19 symptoms while they wait for test results. Others are using the tool to monitor disease progression by comparing multiple scans taken of the same patient over time. For ease of use, qXR integrates with radiologists’ existing workflows, including the PACS imaging system.

“Workflow integration is key, as the more you can make your AI solution invisible and smoothly embedded into the healthcare workflow, the more it’ll be adopted and used,” Singh said.

While the first version of qXR with COVID-19 analysis was trained and validated on around 11,500 scans specific to the virus, the team has been adding a couple thousand additional scans to the dataset each week, improving accuracy as more data becomes available.

Singh credits the company’s ability to pivot quickly in part to the diverse dataset of chest X-rays it’s collected over the years. In total, Qure.ai has almost 8 million studies, spread evenly across North America, Europe, the Middle East and Asia, as well as a mix of studies taken on equipment from different manufacturers and in different healthcare settings.

“The volume and variety of data helps our AI model’s accuracy,” Singh said. “You don’t want something built on perfect, clean data from a single site or country, where the moment it goes to a new environment, it fails.”

From the Cloud to Clinicians’ Hands

The Bolton NHS Foundation Trust in the U.K. and San Raffaele University Hospital in Milan are among dozens of sites that have deployed qXR to help radiologists monitor COVID-19 disease progression in patients.

Most clients can get up and running with qXR within an hour, with deployment over the cloud. In an urgent environment like the current pandemic, this allows hospitals to move quickly, even when travel restrictions make live installations impossible. Hospital customers with on-premises data centers can choose to use their onsite compute resources for inference.

Qure.ai’s next step, Singh said, “is to get this tool in the hands of as many radiologists and other clinicians directly interacting with patients around the world as we can.”

The company has also developed a natural language processing tool, qScout, which uses a chatbot to handle regular check-ins with patients who think they may have the virus or are recovering at home. Keeping in contact with people in an outpatient setting helps monitor symptoms, alerting healthcare workers when a patient may need to be admitted to the hospital and tracking recovery without overburdening hospital infrastructure.

It took the team just six weeks to take qScout from a concept to its first customer: the Ministry of Health in Oman.

To learn more about Qure.ai, watch the recent COMPUTE4COVID webinar session, Healthcare AI Startups Against COVID-19. Visit our COVID page to explore how other startups are using AI and accelerated computing to fight the virus.
