AI Goes Uptown: A Tour of Smart Cities Around the Globe 

There are as many ways to define a smart city as there are cities on the road to being smart.

From London and Singapore to Seat Pleasant, Maryland, they vary widely, but most share some common characteristics.

Every city wants to be smart about being a great place to live. So, many embrace broad initiatives for connecting their citizens to the latest 5G and fiber optic networks, expanding digital literacy and services.

Most agree that a big part of being smart means using technology to make their cities more self-aware, automated and efficient.

That’s why a smart city is typically a kind of municipal Internet of Things — a network of cameras and sensors that can see, hear and even smell. These sensors, especially video cameras, generate massive amounts of data that can serve many civic purposes like helping traffic flow smoothly.

Cities around the globe are turning to AI to sift through that data in real time for actionable insights. And, increasingly, smart cities build realistic 3D simulations of themselves, digital twins to test out ideas of what they might look like in the future.

“We define a smart city as a place applying advanced technology to improve the quality of life for people who live in it,” said Sokwoo Rhee, who’s worked on more than 200 smart city projects in 25 countries as an associate director for cyber-physical systems innovation at the U.S. National Institute of Standards and Technology.

U.S., London Issue Smart Cities Guidebooks

At NIST, Rhee oversees work on a guide for building smart cities. Eventually it will include reports on issues and case studies in more than two dozen areas from public safety to water management systems.

Across the pond, London describes its smart city efforts in a 60-page document that details many ambitious goals. Like smart cities from Dubai to San Jose in Silicon Valley, it’s a metro-sized work in progress.

An image from the Smart London guide.

“We are far from the ideal at the moment with a multitude of systems and a multitude of vendors making the smart city still somewhat complex and fragmented,” said Andrew Hudson-Smith, who is chair of digital urban systems at The Centre for Advanced Spatial Analysis at University College London and sits on a board that oversees London’s smart city efforts.

Living Labs for AI

In a way, smart cities are both kitchen sinks and living labs of technology.

They host everything from air-quality monitoring systems to repositories of data cleared for use in shared AI projects. The London Datastore, for example, already contains more than 700 publicly available datasets.

One market researcher tracks a basket of 13 broad areas that define a smart city, from smart streetlights to connected garbage cans. A smart-parking vendor in Stockholm took into account 24 factors — including the number of Wi-Fi hotspots and electric-vehicle charging stations — in its 2019 ranking of the world’s 100 smartest cities. (Its top five were all in Scandinavia.)

“It’s hard to pin it down to a limited set of technologies because everything finds its way into smart cities,” said Dominique Bonte, a managing director at market watcher ABI Research. Among popular use cases, he called out demand-response systems as “a huge application for AI because handling fluctuating demand for electricity and other services is a complex problem.”

Sweden’s EasyPark lists 24 factors that define a smart city.

Because the category is broad, the market is also big. Market watchers at Navigant Research expect the global market for smart-city gear to grow from $97.4 billion in annual revenue in 2019 to $265.4 billion by 2028 at a compound annual growth rate of 11.8 percent.
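Those figures are internally consistent; a quick Python check (a minimal sketch, not from the report) recovers the stated growth rate:

```python
# Sanity check on the Navigant figures quoted above: $97.4B in 2019
# growing to $265.4B by 2028 implies roughly the stated 11.8% CAGR.
start, end, years = 97.4, 265.4, 2028 - 2019
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # ~11.8%
```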

It’s still early days. In a January 2019 survey of nearly 40 U.S. local and state government managers, more than 80 percent thought a municipal Internet of Things would have a significant impact on their operations, but most were still in the planning phase and fewer than 10 percent had active projects.

Most smart cities are still under construction, according to a NIST survey.

“Smart cities mean many things to many people,” said Saurabh Jain, product manager of Metropolis, NVIDIA’s GPU software stack for vertical markets such as smart cities.

“Our focus is on building what we call the AI City with the real jobs that can be done today with deep learning, tapping into the massive video and sensor datasets cities generate,” he said.

For example, Verizon deployed video nodes using the NVIDIA Jetson TX1 on existing streetlights in Boston and Sacramento to analyze and improve traffic flow, enhance pedestrian safety and optimize parking.

“Rollout is happening fast across the globe and cities are expanding their lighting infrastructure to become a smart-city platform … helping to create efficiency savings and a new variety of citizen services,” said David Tucker, head of product management in the Smart Communities Group at Verizon in a 2018 article.

Smart Streetlights for Smart Cities

Streetlights will be an important part of the furniture of tomorrow’s smart city.

So far, only a few hundred are outfitted with various mixes of sensors and Wi-Fi and cellular base stations. The big wave is yet to come as the world’s estimated 360 million streetlight posts slowly upgrade to energy-saving LED lights.

A European take on a smart streetlight.

In a related effort, the city of Bellevue, Washington, tested a computer vision system from Microsoft Research to improve traffic safety and reduce congestion. Researchers at the University of Wollongong recently described similar work using NVIDIA Jetson TX2 modules to track the flow of vehicles and pedestrians in Liverpool, Australia.

Airports, retail stores and warehouses are already using smart cameras and AI to run operations more efficiently. They are defining a new class of edge computing networks that smart cities can leverage.

For example, Seattle-Tacoma International Airport (SEA) will roll out an AI system from startup Assaia that uses NVIDIA GPUs to speed the time to turn around flights.

“Video analytics is crucial in providing full visibility over turnaround activities as well as improving safety,” said an SEA manager in a May report.

Nashville, Zurich Explore the Simulated City

Some smart cities are building digital twins, 3D simulations that serve many purposes.

For example, both Zurich and Nashville will someday let citizens and city officials don goggles at virtual town halls to see simulated impacts of proposed developments.

“The more immersive and fun an experience, the more you increase engagement,” said Dominik Tarolli, director of smart cities at Esri, which is supplying simulation software that runs on NVIDIA GPUs for both cities.

Cities as far apart in geography and population as Singapore and Rennes, France, built digital twins using a service from Dassault Systèmes.

“We recently signed a partnership with Hong Kong and presented examples for a walkability study that required a 3D simulation of the city,” said Simon Huffeteau, a vice president working on smart cities for Dassault.

Europe Keeps an AI on Traffic

Many smart cities get started with traffic control. London uses digital signs to post speed limits that change to optimize traffic flow. It also uses license-plate recognition to charge tolls for entering a low-emission zone in the city center.

Cities in Belgium and France are considering similar systems.

“We think in the future cities will ban the most polluting vehicles to encourage people to use public transportation or buy electric vehicles,” said Bonte of ABI Research. “Singapore is testing autonomous shuttles on a 5.7-mile stretch of its streets,” he added.

Nearby, Jakarta uses a traffic-monitoring system from Nodeflux, a member of NVIDIA’s Inception program that nurtures AI startups. The software taps AI and the nearly 8,000 cameras already in place around Jakarta to recognize license plates of vehicles with unpaid taxes.

The system is one of more than 100 third-party applications that run on Metropolis, NVIDIA’s application framework for the Internet of Things.

Unsnarling Traffic in Israel and Kansas City

Traffic was the seminal app for a smart-city effort in Kansas City that started in 2015 with a $15 million smart streetcar. Today, residents can call up digital dashboards detailing current traffic conditions around town.

And in Israel, the city of Ashdod deployed AI software from viisights that identifies patterns in a traffic-monitoring system powered by NVIDIA Metropolis, helping keep citizens safe.

NVIDIA created the AI City Challenge to advance work on deep learning as a tool to unsnarl traffic. Now in its fourth year, it draws nearly 1,000 researchers competing in more than 300 teams that include members from multiple city and state traffic agencies.

The event spawned CityFlow, one of the world’s largest datasets for applying AI to traffic management. It consists of more than three hours of synchronized high-definition videos from 40 cameras at 10 intersections, creating 200,000 annotated bounding boxes around vehicles captured from different angles under various conditions.

Drones to the Rescue in Maryland

You don’t have to be a big city with lots of money to be smart. Seat Pleasant, Maryland, a Washington, D.C., suburb of fewer than 5,000 people, launched a digital hub for city services in August 2017.

Since then, it has installed intelligent lighting, connected waste cans, home health monitors and video analytics to save money, improve traffic safety and reduce crime. It has also become the first U.S. city to use drones for public safety, including plans for life-saving delivery of emergency medicines.

The idea got its start when Mayor Eugene Grant, searching for ways to recover from the 2008 economic downturn, attended an event on innovation villages.

“Seat Pleasant would like to be a voice for small cities in America where 80 percent have less than 10,000 residents,” said Grant. “Look at these cities as test beds of innovation … living labs,” he added.

Mayor Grant of Seat Pleasant aims to set an example of how small towns can become smart cities.

Rhee of NIST agrees. “I’m seeing a lot of projects embracing a broadening set of emerging technologies, making smart cities like incubation programs for new businesses like air taxis and autonomous vehicles that can benefit citizens,” he said, noting that even rural communities will get into the act.

Simulating a New Generation of Smart Cities

When the work is done, go to the movies. Hollywood might provide a picture of the next horizon in the same way it sparked some of the current work.

Esri’s tools are used to simulate cities for movies as well as the real world.

Flicks including Blade Runner 2049, Cars, Guardians of the Galaxy and Zootopia used a program called CityEngine from startup Procedural that enables a rule-based approach to constructing simulated cities.

Their work caught the eye of Esri, which acquired the company and bundled its program with its ArcGIS Urban planning tool, now a staple for hundreds of real cities worldwide.

“Games and movies make people expect more immersive experiences, and that requires more computing,” said Tarolli, a co-founder of Procedural and now Esri’s point person on smart cities.

The post AI Goes Uptown: A Tour of Smart Cities Around the Globe  appeared first on The Official NVIDIA Blog.

Stop the Bleeding: AI Startup Deep01 Helps Physicians Evaluate Brain Hemorrhage

During a stroke, a patient loses an estimated 1.9 million brain cells every minute, so interpreting a CT scan even one second faster is vital to protecting the patient’s health.

To save precious time, Taiwan-based medical imaging startup Deep01 has created an AI-based medical imaging software, called DeepCT, to evaluate acute intracerebral hemorrhage (ICH), a type of stroke. The system works with 95 percent accuracy in just 30 seconds per case — about 10 times faster than competing methods.

Founded in 2016, Deep01 is the first AI company in Asia to have FDA clearances in both the U.S. and Taiwan. It’s a member of NVIDIA Inception, a program that helps startups develop, prototype and deploy their AI or data science technology and get to market faster.

The startup recently raised around $3 million for DeepCT, which detects suspected areas of bleeding around the brain and annotates where they’re located on CT scans, notifying physicians of the results.

The software was trained using 60,000 medical images that displayed all types of acute ICH. Deep01 uses a self-developed deep learning framework that runs images and trains the model on NVIDIA GPUs.

“Working with NVIDIA’s robust AI computing hardware, in addition to software frameworks like TensorFlow and PyTorch, allows us to deliver excellent AI inference performance,” said David Chou, founder and CEO of the company.

Making Quick Diagnosis Accessible and Affordable

Strokes are the world’s second-most common cause of death. When stroke patients are ushered into the emergency room, doctors must quickly determine whether the brain is bleeding and what the next steps for treatment should be.

However, many hospitals lack enough manpower to perform such timely diagnoses, since only some emergency room doctors specialize in reading CT scans. Because of this, Deep01 was founded, according to Chou, with the mission of offering affordable AI-based solutions to medical institutions.

The 30-second speed with which DeepCT completes interpretation can help medical practitioners prioritize the patients in most urgent need for treatment.

Helpful for Facilities of All Types and Sizes

DeepCT has helped doctors evaluate more than 5,000 brain scans and is being used in nine medical institutions in Taiwan, ranging from small hospitals to large-scale medical centers.

“The lack of radiologists is a big issue even in large-scale medical centers like the one I work at, especially during late-night shifts when fewer staff are on duty,” said Tseng-Lung Yang, senior radiologist at Kaohsiung Veterans General Hospital in Taiwan.

Geng-Wang Liaw, an emergency physician at Yeezen General Hospital — a smaller facility in Taiwan — agreed that Deep01’s technology helps relieve physical and mental burdens for doctors.

“Doctors in the emergency room may misdiagnose a CT scan at times,” he said. “Deep01’s solution stands by as an assistant 24/7, to give doctors confidence and reduce the possibility for medical error.”

Beyond ICH, Deep01 is working to expand its technology to identify midline shift, a pathological finding that occurs when there’s increased pressure on the brain and is associated with higher mortality.


Keeping a Watchful AI: NASA Project Aims to Predict Space Weather Events

While a thunderstorm could knock out your neighborhood’s power for a few hours, a solar storm could knock out electricity grids across all of Earth, possibly taking weeks to recover from.

To try to predict solar storms — which are disturbances on the sun — and their potential effects on Earth, NASA’s Frontier Development Lab (FDL) is running what it calls a geoeffectiveness challenge.

It uses datasets of tracked changes in the magnetosphere — where the Earth’s magnetic field interacts with solar wind — to train AI-powered models that can detect patterns of space weather events and predict their Earth-related impacts.

The training of the models is optimized on NVIDIA GPUs available on Google Cloud, and data exploration is done on RAPIDS, NVIDIA’s open-source suite of software libraries built to execute data science and analytics pipelines entirely on GPUs.

Siddha Ganju, a solutions architect at NVIDIA who was named to Forbes’ 30 under 30 list in 2018, is advising NASA on the AI-related aspects of the challenge.

A deep learning expert, Ganju grew up going to hackathons. She says she’s always been fascinated by how an algorithm can read between the lines of code.

Now, she’s applying her knowledge to NVIDIA’s automotive and healthcare businesses, as well as to NASA’s AI technical steering committee. She’s also written a book on practical uses of deep learning, published last October.

Modeling Space Weather Impacts with AI

Ganju’s work with the FDL began in 2017, when its founder, James Parr, asked her to start advising the organization. Her current task, advising the geoeffectiveness challenge, seeks to use machine learning to characterize magnetic field perturbations and model the impact of space weather events.

In addition to solar storms, space weather events can include such activities as solar flares, which are sudden flashes of increased brightness on the sun, and solar wind, a stream of charged particles released from it.

Not all space weather events impact the Earth, said Ganju, but in case one does, we need to be prepared. For example, a single powerful solar storm could knock out our planet’s telephone networks.

“Even if we’re able to predict the impact of an event just 15 minutes in advance, that gives us enough time to sound the alarm and prepare for potential connectivity loss,” said Ganju. “This data can also be useful for satellites to communicate in a better way.”

Exploring Spatial and Temporal Patterns

Solar events can impact parts of the Earth differently due to a variety of factors, Ganju said. With the help of machine learning, the FDL is trying to find spatial and temporal patterns of the effects.

“The datasets we’re working with are huge, since magnetometers collect data on the changes of a magnetic field at a particular location every second,” said Ganju. “Parallel processing using RAPIDS really accelerates our exploration.”

In addition to Ganju, researchers Asti Bhatt, Mark Cheung and Ryan McGranaghan, as well as NASA’s Lika Guhathakurta, are advising the geoeffectiveness challenge team. Its members include Téo Bloch, Banafsheh Ferdousi, Panos Tigas and Vishal Upendran.

The researchers use RAPIDS to explore the data quickly. Then, using the PyTorch and TensorFlow software libraries, they train the models for experiments to identify how the latitude of a location, the atmosphere above it, or the way sun rays hit it affect the consequences of a space weather event.

They’re also studying whether an earthly impact happens immediately as the space event occurs, or if it has a delayed effect, as an impact could depend on time-related factors, such as the Earth’s revolutions around the sun or its rotation about its own axis.
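One simple way to hunt for such a delayed effect is to cross-correlate a driver signal, such as solar-wind readings, with a ground magnetometer response at different time lags. The sketch below is purely illustrative, using synthetic data rather than FDL’s actual method:

```python
# Illustrative lag detection (not FDL's actual pipeline): slide the
# response against the driver and keep the lag with the strongest overlap.
def lagged_correlation(driver, response, max_lag):
    """Return the lag (in samples) that maximizes the dot product."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(max_lag + 1):
        n = len(driver) - lag
        score = sum(driver[i] * response[i + lag] for i in range(n))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Synthetic example: the "response" is the driver delayed by 15 samples.
driver = [1.0 if 20 <= i < 30 else 0.0 for i in range(100)]
response = [0.0] * 15 + driver[:-15]
print(lagged_correlation(driver, response, max_lag=40))  # 15
```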

To detect such patterns, the team will continue to train the model and analyze data throughout the duration of FDL’s eight-week research sprint, which concludes later this month.

Other FDL projects participating in the sprint, according to Ganju, include the moon for good challenge, which aims to discover the best landing position on the moon. Another is the astronaut health challenge, which is investigating how high-radiation environments can affect an astronaut’s well-being.

The FDL is holding a virtual U.S. Space Science & AI showcase, on August 14, where the 2020 challenges will be presented. Register for the event here.

Feature image courtesy of NASA.


Non-Stop Shopping: Startup’s AI Lets Supermarkets Skip the Line

Eli Gorovici loves to take friends sailing on the Mediterranean. As the new pilot of Trigo, a Tel Aviv-based startup, he’s inviting the whole retail industry on a cruise to a future with AI.

“We aim to bring the e-commerce experience into the brick-and-mortar supermarket,” said Gorovici, who joined the company as its chief business officer in May.

The journey starts with the sort of shopping anyone who’s waited in a long checkout line has longed for.

You fill up your bags at the market and just walk out. Magically, the store knows what you bought, bills your account and sends you a digital receipt, all while preserving your privacy.

Trigo is building that experience and more. Its magic is an AI engine linked to cameras and a few weighted shelves for small items a shopper’s hand might completely cover.

With these sensors, Trigo builds a 3D model of the store. Neural networks recognize products customers put in their bags.

When shoppers leave, the system sends the grocer the tally and a number it randomly associated with them when they chose to swipe their smartphone as they entered the store. The grocer matches the number with a shopper’s account, charges it and sends off a digital bill.
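The anonymous-token handoff described above can be sketched in a few lines. The function names and data structures here are hypothetical, meant only to show how the store can track a basket without ever seeing the shopper’s account:

```python
import secrets

store_sessions = {}   # random token -> basket, held by the in-store system
grocer_accounts = {}  # random token -> account, held only by the grocer

def shopper_enters(account_id):
    token = secrets.token_hex(8)          # random ID assigned at the door
    store_sessions[token] = []            # the store sees only the token
    grocer_accounts[token] = account_id   # the grocer alone can map it back
    return token

def shopper_picks(token, product, price):
    store_sessions[token].append((product, price))

def shopper_leaves(token):
    tally = sum(price for _, price in store_sessions.pop(token))
    account = grocer_accounts.pop(token)  # grocer matches token to account
    return account, tally                 # charge it, send a digital bill

token = shopper_enters("alice@example.com")
shopper_picks(token, "pasta", 2.50)
shopper_picks(token, "sauce", 3.25)
print(shopper_leaves(token))  # ('alice@example.com', 5.75)
```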

And that’s just the start.

An Online Experience in the Aisles

Shoppers get the same personalized recommendation systems they’re used to seeing online.

“If I’m standing in front of pasta, I may see on my handset a related coupon or a nice Italian recipe tailored for me,” said Gorovici. “There’s so much you can do with data, it’s mind blowing.”

The system lets stores fine-tune their inventory management systems in real time. Typical shrinkage rates from shoplifting or human error could sink to nearly zero.

AI Turns Images into Insights

Making magic is hard work. Trigo’s system gathers a petabyte of video data a day for an average-size supermarket.

It uses as many as four neural networks to process that data at mind-melting rates of up to a few hundred frames per second. (By contrast, your TV displays high-definition movies at 60 fps.)
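A back-of-the-envelope calculation (a sketch, not from the article) shows what a petabyte a day implies for sustained ingest:

```python
# One petabyte of video per day works out to roughly 11.6 GB/s sustained.
petabyte = 1e15  # bytes
seconds_per_day = 24 * 60 * 60
rate_gb_s = petabyte / seconds_per_day / 1e9
print(f"{rate_gb_s:.1f} GB/s")  # ~11.6 GB/s
```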

Trigo used a dataset of up to 500,000 2D product images to train its neural networks. In daily operation, the system uses those models to run millions of inference tasks with help from NVIDIA TensorRT software.

The AI work requires plenty of processing muscle. A supermarket outside London testing the Trigo system uses servers in its back room with 40-50 NVIDIA RTX GPUs. To boost efficiency, Trigo plans to deliver edge servers using NVIDIA T4 Tensor Core GPUs and join the NVIDIA Metropolis ecosystem starting next year.

Trigo got early access to the T4 GPUs thanks to its participation in NVIDIA Inception, a program that gives AI startups traction with tools, expertise and go-to-market support. The program also aims to introduce Trigo to NVIDIA’s retail partners in Europe.

In 2021, Trigo aims to move some of the GPU processing to Google, Microsoft and other cloud services, keeping some latency- or privacy-sensitive uses inside the store. It’s the kind of distributed architecture businesses are just starting to adopt, thanks in part to edge computing systems such as NVIDIA’s EGX platform.

Big Supermarkets Plug into AI

Tesco, the largest grocer in the U.K., has plans to open its first market using Trigo’s system. “We’ve vetted the main players in the industry and Trigo is the best by a mile,” said Tesco CEO Dave Lewis.

Israel’s largest grocer, Shufersal, also is piloting Trigo’s system, as are other retailers around the world.

Trigo was founded in 2018 by brothers Michael and Daniel Gabay, leveraging tech and operational experience from their time in elite units of the Israeli military.

Seeking his next big opportunity in his field of video technology, Gorovici asked friends who were venture capitalists for advice. “They said Trigo was the future of retail,” Gorovici said.

Like sailing in the aqua-blue Mediterranean, AI in retail is a compelling opportunity.

“It’s a trillion-dollar market — grocery stores are among the biggest employers in the world. They are all being digitized, and selling more online now given the pandemic, so maybe this next stage of digital innovation for retail will now move even faster,” he said.

NVIDIA Breaks 16 AI Performance Records in Latest MLPerf Benchmarks

NVIDIA delivers the world’s fastest AI training performance among commercially available products, according to MLPerf benchmarks released today.

The A100 Tensor Core GPU demonstrated the fastest performance per accelerator on all eight MLPerf benchmarks. For overall fastest time to solution at scale, the DGX SuperPOD system, a massive cluster of DGX A100 systems connected with HDR InfiniBand, also set eight new performance milestones. The real winners are customers applying this performance today to transform their businesses faster and more cost effectively with AI.

This is the third consecutive and strongest showing for NVIDIA in training tests from MLPerf, an industry benchmarking group formed in May 2018. NVIDIA set six records in the first MLPerf training benchmarks in December 2018 and eight in July 2019.

NVIDIA set records in the category customers care about most: commercially available products. We ran tests using our latest NVIDIA Ampere architecture as well as our Volta architecture.

The NVIDIA DGX SuperPOD system set new milestones for AI training at scale.

NVIDIA was the only company to field commercially available products for all the tests. Most other submissions used the preview category for products that may not be available for several months or the research category for products not expected to be available for some time.

NVIDIA Ampere Ramps Up in Record Time

In addition to breaking performance records, the A100, the first processor based on the NVIDIA Ampere architecture, hit the market faster than any previous NVIDIA GPU. At launch, it powered NVIDIA’s third-generation DGX systems, and it became publicly available in a Google cloud service just six weeks later.

Also helping meet the strong demand for A100 are the world’s leading cloud providers, such as Amazon Web Services, Baidu Cloud, Microsoft Azure and Tencent Cloud, as well as dozens of major server makers, including Dell Technologies, Hewlett Packard Enterprise, Inspur and Supermicro.

Users across the globe are applying the A100 to tackle the most complex challenges in AI, data science and scientific computing.

Some are enabling a new wave of recommendation systems or conversational AI applications while others power the quest for treatments for COVID-19. All are enjoying the greatest generational performance leap in the history of NVIDIA GPUs.

The NVIDIA Ampere architecture swept all eight tests of commercially available accelerators.

A 4x Performance Gain in 1.5 Years

The latest results demonstrate NVIDIA’s focus on continuously evolving an AI platform that spans processors, networking, software and systems.

For example, the tests show that, at equivalent throughput rates, today’s DGX A100 system delivers up to 4x the performance of the system that used V100 GPUs in the first round of MLPerf training tests. Meanwhile, the original DGX-1 system based on NVIDIA V100 can now deliver up to 2x higher performance thanks to the latest software optimizations.

These gains came in less than two years from innovations across the AI platform. Today’s NVIDIA A100 GPUs — coupled with software updates for CUDA-X libraries — power expanding clusters built with Mellanox HDR 200Gb/s InfiniBand networking.

HDR InfiniBand enables extremely low latencies and high data throughput, while offering smart deep learning computing acceleration engines via the scalable hierarchical aggregation and reduction protocol (SHARP) technology.

NVIDIA evolves its AI performance with new GPUs, software upgrades and expanding system designs.

NVIDIA Shines in Recommendation Systems, Conversational AI, Reinforcement Learning

The MLPerf benchmarks — backed by organizations including Amazon, Baidu, Facebook, Google, Harvard, Intel, Microsoft and Stanford — constantly evolve to remain relevant as AI itself evolves.

The latest benchmarks featured two new tests and one substantially revised test, and NVIDIA excelled in all of them. One ranked performance in recommendation systems, an increasingly popular AI task; another tested conversational AI using BERT, one of the most complex neural network models in use today. Finally, the reinforcement learning test used Mini-go with the full-size 19×19 Go board and was the most complex test in this round, involving diverse operations from game play to training.

Customers using NVIDIA AI for conversational AI and recommendation systems.

Companies are already reaping the benefits of this performance on these strategic applications of AI.

Alibaba hit a $38 billion sales record on Singles Day in November, using NVIDIA GPUs to deliver more than 100x more queries/second on its recommendation systems than CPUs. For its part, conversational AI is becoming the talk of the town, driving business results in industries from finance to healthcare.

NVIDIA is delivering both the performance needed to run these powerful jobs and the ease of use to embrace them.

Software Paves Strategic Paths to AI

In May, NVIDIA announced two application frameworks, Jarvis for conversational AI and Merlin for recommendation systems. Merlin includes the HugeCTR framework for training that powered the latest MLPerf results.

These are part of a growing family of application frameworks for markets including automotive (NVIDIA DRIVE), healthcare (Clara), robotics (Isaac) and retail/smart cities (Metropolis).

NVIDIA application frameworks simplify enterprise AI from development to deployment.

DGX SuperPOD Architecture Delivers Speed at Scale

NVIDIA ran MLPerf tests for systems on Selene, an internal cluster based on the DGX SuperPOD, its public reference architecture for large-scale GPU clusters that can be deployed in weeks. That architecture extends the design principles and best practices used in the DGX POD to serve the most challenging problems in AI today.

Selene recently debuted on the TOP500 list as the fastest industrial system in the U.S. with more than an exaflop of AI performance. It’s also the world’s second most power-efficient system on the Green500 list.

Customers are already using these reference architectures to build DGX PODs and DGX SuperPODs of their own. They include HiPerGator, the fastest academic AI supercomputer in the U.S., which the University of Florida will feature as the cornerstone of its cross-curriculum AI initiative.

Meanwhile, a top supercomputing center, Argonne National Laboratory, is using DGX A100 to find ways to fight COVID-19. Argonne was the first of a half-dozen high performance computing centers to adopt A100 GPUs.

Many users have adopted NVIDIA DGX PODs.

DGX SuperPODs are already driving business results for companies like Continental in automotive, Lockheed Martin in aerospace and Microsoft in cloud-computing services.

These systems are all up and running thanks in part to a broad ecosystem supporting NVIDIA GPUs and DGX systems.

Strong MLPerf Showing by NVIDIA Ecosystem

Of the nine companies submitting results, seven submitted with NVIDIA GPUs including cloud service providers (Alibaba Cloud, Google Cloud, Tencent Cloud) and server makers (Dell, Fujitsu, and Inspur), highlighting the strength of NVIDIA’s ecosystem.

Many partners leveraged the NVIDIA AI platform for MLPerf submissions.

Many of these partners used containers on NGC, NVIDIA’s software hub, along with publicly available frameworks for their submissions.

The MLPerf partners represent part of an ecosystem of nearly two dozen cloud-service providers and OEMs with products or plans for online instances, servers and PCIe cards using NVIDIA A100 GPUs.

Test-Proven Software Available on NGC Today

Much of the same software NVIDIA and its partners used for the latest MLPerf benchmarks is available to customers today on NGC.

NGC is host to several GPU-optimized containers, software scripts, pre-trained models and SDKs. They empower data scientists and developers to accelerate their AI workflows across popular frameworks such as TensorFlow and PyTorch.

Organizations are embracing containers to save time getting to business results that matter. In the end, that’s the most important benchmark of all.

Artist’s rendering at top: NVIDIA’s new DGX SuperPOD, built in less than a month and featuring more than 2,000 NVIDIA A100 GPUs, swept every MLPerf benchmark category for at-scale performance among commercially available products. 

The post NVIDIA Breaks 16 AI Performance Records in Latest MLPerf Benchmarks appeared first on The Official NVIDIA Blog.

Banking on AI: RBC builds a DGX-powered private cloud

Royal Bank of Canada built a DGX-powered cloud and tied it to a strategic investment in AI. Despite headwinds from a global pandemic, the investment will further enable RBC to transform client experiences.

The voyage started in the fall of 2017. That’s when RBC, Canada’s largest bank with 17 million clients in 36 countries, created its dedicated research institute, Borealis AI. The institute is headquartered next to Toronto’s MaRS Discovery District, a global hub for machine-learning experts.

Borealis AI quickly attracted dozens of top researchers. That’s no surprise given the institute is led by the bank’s chief science officer, Foteini Agrafioti, a patent-holding serial entrepreneur and Ph.D. in electrical and computer engineering who co-chairs Canada’s AI advisory council.

The bank initially ran Borealis AI on a mix of systems. But as the group and the AI models it developed grew, it needed a larger, dedicated AI engine.

Brokering a Private AI Cloud for Banking

“I had the good fortune to help commission our first infrastructure for Borealis AI, but it wasn’t adequate to meet our evolving AI needs,” said Mike Tardiff, a senior vice president of tech infrastructure at RBC.

The team wanted a distributed AI system that would serve four locations, from Vancouver to Montreal, securely behind the bank’s firewall. It needed to scale as workloads grew and leverage the regular flow of AI innovations in open source software without requiring hardware upgrades to do so.

In short, the bank aimed to build a state-of-the-art private AI cloud. For its key planks, RBC chose six NVIDIA DGX systems and Red Hat’s OpenShift to orchestrate containers running on those systems.

“We see NVIDIA as a leader in AI infrastructure. We were already using its DGX systems and wanted to expand our AI capabilities, so it was an obvious choice,” said Tardiff.

AI Steers Bank Toward Smart Apps

RBC is already reporting solid results with the system despite commissioning it early this year in the face of the oncoming COVID-19 storm.

The private AI cloud can run thousands of simulations and analyze millions of data points in a fraction of the time that it could before, the bank says. As a result, it expects to transform the customer banking experience with a new generation of smart applications. And that’s just the beginning.

“For instance, in our capital markets business we are now able to train thousands of statistical models in parallel to cover this vast space of possibilities,” said Agrafioti, head of Borealis AI.

“This would be impossible without a distributed and fully automated environment. We can populate the entire cluster with a single click using the automated pipeline that this new solution has delivered,” she added.

The platform has already helped reduce client calls and resulted in faster delivery of new applications for RBC clients, thanks to the performance of GPUs combined with the automation of orchestrated containers.

RBC deployed Red Hat OpenShift in combination with DGX infrastructure to rapidly spin up AI compute instances in a fraction of the time it used to take.

OpenShift helps by creating an environment where users can run thousands of containers simultaneously, extracting datasets to train AI models and run them in production on DGX systems, said Yan Fisher, a global evangelist for emerging technologies at Red Hat.
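The orchestration Fisher describes boils down to containers that declare a GPU requirement and let the scheduler place them on DGX nodes. A minimal sketch of such a pod spec, assuming the standard `nvidia.com/gpu` resource exposed by NVIDIA’s Kubernetes device plugin; the pod name, image tag and training script here are hypothetical, not RBC’s actual configuration:

```yaml
# Hypothetical pod spec; nvidia.com/gpu is the resource name exposed by
# the NVIDIA device plugin on Kubernetes/OpenShift clusters.
apiVersion: v1
kind: Pod
metadata:
  name: train-job-example
spec:
  restartPolicy: Never
  containers:
  - name: trainer
    image: nvcr.io/nvidia/pytorch:20.06-py3   # an NGC framework container
    command: ["python", "train.py"]
    resources:
      limits:
        nvidia.com/gpu: 1   # schedule onto a node with a free GPU
```

Each of the thousands of containers Fisher mentions carries a declaration like this, which is what lets a single click fan a workload out across the cluster.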

OpenShift and NGC, NVIDIA’s software hub, let the companies support the bank remotely through the pandemic, he added.

“Building our AI infrastructure with NVIDIA DGX has given us in-house capabilities similar to what the Amazons and Googles of the world offer and we’ve achieved some significant savings in total cost of ownership,” said Tardiff.

He singled out the NVLink interconnect and NVIDIA’s support for enterprise networking standards, with maximum bandwidth and reduced latency, as key hardware assets. They let users quickly access multiple GPUs within and between systems across the data centers that host the bank’s AI cloud.

How a Bank with a Long History Stays Innovative

Though it’s 150 years old, RBC keeps in tune with the times by investing early in emerging technologies, as it did with Borealis AI.

“Innovation is in our DNA — we’re always looking at what’s coming around the corner and how we can operationalize it, and AI is a top strategic priority,” said Tardiff.

Although its main expertise is in banking, RBC has tech chops, too. During the COVID lockdown it managed to “pressure test” the latest systems, pushing them well beyond what they thought were their limits.

“We’re co-creating this vision of AI infrastructure with NVIDIA, and through this journey we’re raising the bar for AI innovation which everyone in the financial services industry can benefit from,” Tardiff said.

Visit NVIDIA’s financial services industry page to learn more.


Meet the Maker: ‘Smells Like ML’ Duo Nose Where It’s at with Machine Learning

Whether you want to know if your squats have the correct form, you’re at the mirror deciding how to dress and wondering what the weather’s like, or you keep losing track of your darts score, the Smells Like ML duo have you covered — in all senses.

This maker pair is using machine learning powered by NVIDIA Jetson’s edge AI capabilities to provide smart solutions to everyday problems.

About the Makers

Behind Smells Like ML are Terry Rodriguez and Salma Mayorquin, freelance machine learning consultants based in San Francisco. The business partners met as math majors in 2013 at UC Berkeley and have been working together ever since. The duo wondered how they could apply their knowledge in theoretical mathematics more generally. Robotics, IoT and computer vision projects, they found, are the answer.

Their Inspiration

The team name, Smells Like ML, stems from the idea that the nose is often used in literature to symbolize intuition. Rodriguez described their projects as “the ongoing process of building the intuition to understand and process data, and apply machine learning in ways that are helpful to everyday life.”

To create proofs of concept for their projects, they turned to the NVIDIA Jetson platform.

“The Jetson platform makes deploying machine learning applications really friendly even to those who don’t have much of a background in the area,” said Mayorquin.

Their Favorite Jetson Projects

Of Smells Like ML’s many projects using the Jetson platform, here are some highlights:

SpecMirror — Make eye contact with this AI-powered mirror, ask it a question and it searches the web to provide an answer. The smart assistant mirror can be easily integrated into your home. It processes sound and video input simultaneously, with the help of NVIDIA Jetson Xavier NX and NVIDIA DeepStream SDK.

ActionAI — Whether you’re squatting, spinning or loitering, this device classifies all kinds of human movement. It’s optimized by the Jetson Nano developer kit’s pose estimation inference capabilities. Upon detecting the type of movement someone displays, it annotates the results right back onto the video it was analyzing. ActionAI can be used to prototype any products that require human movement detection, such as a yoga app or an invisible keyboard.
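As an illustration of the kind of logic a pose-based movement classifier builds on, here is a minimal Python sketch that labels a single frame from already-extracted keypoints using a joint-angle heuristic. This is not ActionAI’s actual code; the function names and threshold are assumptions, and the real project runs pose-estimation inference on the Jetson Nano rather than taking keypoints as given.

```python
import math

# Illustrative sketch only: we assume keypoints already exist as (x, y)
# pixel coordinates and classify a single frame with a joint-angle rule.

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def classify_pose(hip, knee, ankle, squat_threshold=110.0):
    """Label a pose 'squat' when the knee angle is sharply bent."""
    return "squat" if joint_angle(hip, knee, ankle) < squat_threshold else "stand"

# Standing leg: hip, knee and ankle nearly collinear -> wide knee angle.
print(classify_pose((0, 0), (0, 50), (0, 100)))    # stand
# Deep squat: hip displaced behind the knee -> tight knee angle.
print(classify_pose((40, 40), (0, 50), (0, 100)))  # squat
```

A production system classifies sequences of such frames with a trained model rather than hand-set thresholds, but the per-frame geometry is the same raw material.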

Shoot Your Shot — Bring a little analytics to your dart game. This computer vision booth analyzes dart throws from multiple camera angles, and then scores, logs and even predicts the results. The application runs on a single Jetson Nano system on module.

Where to Learn More 

In June, Smells Like ML won second place in NVIDIA’s AI at the Edge competition in the intelligent video analytics category.

For more sensory overload, check out other cool projects from Smells Like ML.

Anyone can get started on a Jetson project. Learn how on the Jetson developers page.


Smart Hospitals: DARVIS Automates PPE Checks, Hospital Inventories Amid COVID Crisis

After an exhausting 12-hour shift caring for patients, it’s hard to blame frontline workers for forgetting to sing “Happy Birthday” twice to guarantee a full 30 seconds of proper hand-washing.

Though at times tedious, the process of confirming such detailed protective measures as the amount of time hospital employees spend sanitizing their hands, the cleaning status of a room, or the number of beds available is crucial to preventing the spread of infectious diseases such as COVID-19.

DARVIS, an AI company founded in San Francisco in 2015, automates tasks like these to make hospitals “smarter” and give hospital employees more time for patient care, as well as peace of mind for their own protection.

The company developed a COVID-19 infection-control compliance model within a month of the pandemic breaking out. It provides a structure to ensure that workers are wearing personal protective equipment and complying with hygiene protocols amidst the hectic pace of hospital operations, compounded by the pandemic. The system can also provide information on the availability of beds and other equipment.

Short for “Data Analytics Real-World Visual Information System,” DARVIS uses the NVIDIA Clara Guardian application framework, employing machine learning and advanced computer vision.

The system analyzes information processed by optical sensors, which act as the “eyes and ears” of the machine, and alerts users when a bed needs cleaning or a worker is missing a glove, among other contextual insights. All records are fully anonymized before feedback is delivered.
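A hedged sketch of what such a rule layer might look like, assuming an upstream vision model that emits the items detected on each tracked person; the class names and required-PPE list below are hypothetical, not DARVIS’s actual schema:

```python
# Minimal sketch of a compliance rule layer on top of vision detections.
# Item names and the required-PPE set are illustrative assumptions.

REQUIRED_PPE = {"mask", "gloves", "gown"}

def ppe_alerts(person_id, detected_items):
    """Return one anonymized alert per required PPE item not detected."""
    missing = REQUIRED_PPE - set(detected_items)
    return [f"person {person_id}: missing {item}" for item in sorted(missing)]

alerts = ppe_alerts("worker-7", ["mask", "gown"])
print(alerts)  # ['person worker-7: missing gloves']
```

Keeping the rules separate from the detector is what lets a system like this be retargeted quickly, as DARVIS did when it built its COVID-19 compliance model within a month.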

“It’s all about compliance,” said Jan-Philipp Mohr, co-founder and CEO of the company. “It’s not about surveilling workers, but giving them feedback where they could harm themselves. It’s for both worker protection and patient security.”

DARVIS is a member of NVIDIA Inception, a program that helps startups working in AI and data science accelerate their product development, prototyping and deployment.

The Smarter the Hospital, the Better

Automation in hospitals has always been critical to saving lives and increasing efficiency, said Paul Warren, vice president of Product and team lead for AI at DARVIS. However, the need for smart hospitals is all the more urgent in the midst of the COVID-19 crisis, he said.

“We talk to the frontline caregivers, the doctors, the nurses, the transport staff and figure out what part of their jobs is particularly repetitive, frustrating or complicated,” said Warren. “And if we can help automate that in real time, they’re able to do their job a lot more efficiently, which is ultimately good for improving patient outcomes.”

DARVIS can help save money as well as lives. Even before the COVID crisis, the U.S. Centers for Disease Control and Prevention estimated the annual direct medical costs of infectious diseases in U.S. hospitals to be around $45 billion, a cost bound to rise due to the global pandemic. By optimizing infection control practices and minimizing the spread of infectious disease, smart hospitals can decrease this burden, Mohr said.

To save costs and time needed to train and deploy their own devices, DARVIS uses PyTorch and TensorFlow optimized on NGC, NVIDIA’s registry of GPU-accelerated software containers.

“NVIDIA engineering efforts to optimize deep learning solutions is a game-changer for us,” said Warren. “NGC makes structuring and maintaining the infrastructure environment very easy for us.”

DARVIS’s current centralized approach involves deep learning techniques optimized on NVIDIA GPU-powered servers running on large workstations within the hospital’s data center.

As they onboard more users, the company plans to also use NVIDIA DeepStream SDK on edge AI embedded systems like NVIDIA Jetson Xavier NX to scale out and deploy at hospitals in a more decentralized manner, according to Mohr.

Same Technology, Numerous Possibilities

While DARVIS was initially focused on tracking beds and inventory, user feedback led to the expansion of its platform to different areas of need.

The same technology was developed to evaluate proper usage of PPE, to analyze worker compliance with infection control practices and to account for needed equipment in an operating room.

The team at DARVIS continues to research what’s possible with their device, as well as in the field of AI more generally, as they expand and deploy their product at hospitals around the world.

Watch DARVIS in action:

Learn more about NVIDIA’s healthcare-application framework on the NVIDIA Clara developers page.

Images courtesy of DARVIS, Inc.


Learning Life’s ABCs: AI Models Read Proteins to Fight COVID-19

Ahmed Elnaggar and Michael Heinzinger are helping computers read proteins as easily as you read this sentence.

The researchers are applying the latest AI models used to understand text to the field of bioinformatics. Their work could accelerate efforts to characterize living organisms like the coronavirus.

By the end of the year, they aim to launch a website where researchers can plug in a string of amino acids that describe a protein. Within seconds, it will provide some details of the protein’s 3D structure, a key to knowing how to treat it with a drug.

Today, researchers typically search databases to get this kind of information. But the databases are growing rapidly as more proteins are sequenced, so a search can take up to 100 times longer than the approach using AI, depending on the size of a protein’s amino acid string.

In cases where a particular protein hasn’t been seen before, a database search won’t provide any useful results — but AI can.

“Twelve of the 14 proteins associated with COVID-19 are similar to well validated proteins, but for the remaining two we have very little data — for such cases, our approach could help a lot,” said Heinzinger, a Ph.D. candidate in computational biology and bioinformatics.

While time-consuming, methods based on database searches have been 7-8 percent more accurate than previous AI methods. But using the latest models and datasets, Elnaggar and Heinzinger cut the accuracy gap in half, paving the way for a shift to using AI.

AI Models, GPUs Drive Biology Insights

“The speed at which these AI algorithms are improving makes me optimistic we can close this accuracy gap, and no field has such fast growth in datasets as computational biology, so combining these two things I think we will reach a new state of the art soon,” said Heinzinger.

“This work couldn’t have been done two years ago,” said Elnaggar, an AI specialist with a Ph.D. in transfer learning. “Without the combination of today’s bioinformatics data, new AI algorithms and the computing power from NVIDIA GPUs, it couldn’t be done,” he said.

Elnaggar and Heinzinger are team members in the Rostlab at the Technical University of Munich, which helped pioneer this field at the intersection of AI and biology. Burkhard Rost, who heads the lab, wrote a seminal paper in 1993 that set the direction.

The Semantics of Reading a Protein

The underlying concept is straightforward. Proteins, the building blocks of life, are made up of strings of amino acids that need to be interpreted sequentially, just like words in a sentence.
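In practice, “reading a protein like a sentence” starts with tokenization. Below is a minimal Python sketch of the preprocessing convention used by protein language models such as ProtBert: each residue becomes a word-like token, and rare or ambiguous residues (U, Z, O, B) are mapped to the unknown residue X. Exact conventions vary by model, so treat this as illustrative.

```python
import re

# Turn a raw amino acid string into a space-separated "sentence" so a
# language-model tokenizer treats each residue as one token.
def protein_to_sentence(sequence):
    cleaned = re.sub(r"[UZOB]", "X", sequence.upper())
    return " ".join(cleaned)

print(protein_to_sentence("MKTUV"))  # M K T X V
```

From there, the pipeline mirrors natural-language processing: the “sentence” is fed to a model like BERT, and the learned embeddings feed downstream predictions such as secondary structure.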

So, researchers like Rost started applying emerging work in natural-language processing to understand proteins. But in the 1990s they had very little data on proteins, and the AI models were still fairly crude.

Fast forward to today and a lot has changed.

Sequencing has become relatively fast and cheap, generating massive datasets. And thanks to modern GPUs, advanced AI models such as BERT can interpret language in some cases better than humans.

AI Models Grow 6x in Sophistication

The breakthroughs in natural-language processing have been particularly breathtaking. Just 18 months ago, Elnaggar and Heinzinger reported on work using a version of recurrent neural network models with 90 million parameters; this month their work leveraged Transformer models with 567 million parameters.

“Transformer models are hungry for compute power, so to do this work we used 5,616 GPUs on the Summit supercomputer and even then it took up to two days to train some of the models,” said Elnaggar.

Running the models on thousands of Summit’s nodes presented challenges.

Elnaggar tells a story familiar to those who work on supercomputers. He needed lots of patience to sync and manage files, storage, comms and their overheads at such a scale. He started small, working on a few nodes, and moved a step at a time.

Patient, stepwise work paid off in scaling complex AI algorithms across thousands of GPUs on the Summit supercomputer.

“The good news is we can now use our trained models to handle inference work in the lab using a single GPU,” he said.

Now Available: Pretrained AI Models

Their latest paper, published in July, characterizes the pros and cons of a handful of the latest AI models they used on various tasks. The work is funded with a grant from the COVID-19 High Performance Computing Consortium.

The duo also published the first versions of their pretrained models. “Given the pandemic, it’s better to have an early release,” rather than wait until the still ongoing project is completed, Elnaggar said.

“The proposed approach has the potential to revolutionize the way we analyze protein sequences,” said Heinzinger.

The work may not in itself bring the coronavirus down, but it is likely to establish a new and more efficient research platform to attack future viruses.

Collaborating Across Two Disciplines

The project highlights two of the soft lessons of science: Keep a keen eye on the horizon and share what’s working.

“Our progress mainly comes from advances in natural-language processing that we apply to our domain — why not take a good idea and apply it to something useful,” said Heinzinger, the computational biologist.

Elnaggar, the AI specialist, agreed. “We could only succeed because of this collaboration across different fields,” he said.

See more stories online of researchers advancing science to fight COVID-19.

The image at top shows language models trained without labelled samples picking up the signal of a protein sequence that is required for DNA binding.


Heart of the Matter: AI Helps Doctors Navigate Pandemic

A month after it got FDA approval, a startup’s first product was saving lives on the front lines of the battle against COVID-19.

Caption Health develops Caption AI, software for ultrasound systems. It uses deep learning to empower medical professionals, including those without prior ultrasound experience, to perform echocardiograms quickly and accurately.

The results are images of the heart often worthy of an expert sonographer that help doctors diagnose and treat critically ill patients.

The coronavirus pandemic provided plenty of opportunities to try out the first dozen systems. Two doctors who used the new tool shared their stories on the condition that they and their patients remain anonymous.

A 53-year-old diabetic woman with COVID-19 went into cardiac shock in a New York hospital. Without the images from Caption AI, it would have been difficult to clinch the diagnosis, said a doctor on the scene.

The system helped the physician identify heart problems in an 86-year-old man with the virus in the same hospital, helping doctors bring him back to health. It was another case among more than 200 in the facility that was effectively turned into a COVID-19 hospital in mid-March.

The Caption Health system made a tremendous impact for a staff spread thin, said the doctor. It would have been hard for a trained sonographer to keep up with the demand for heart exams, he added.

Heart Test Becomes Standard Procedure

Caption AI helped doctors in North Carolina determine that a 62-year-old man had COVID-19-related heart damage. Thanks, in part, to the ease of using the system, the hospital now performs echocardiograms for most patients with the virus.

At the height of the pandemic’s first wave, the hospital stationed ultrasound systems with Caption AI in COVID-19 wards. Rather than sending sonographers from unit to unit, the usual practice, staff stationed at the wards used the systems. The change reduced staff exposure to the virus and conserved precious protective gear.

Beyond the pandemic, the system will help hospitals provide urgent services while keeping a lid on rising costs, said a doctor at that hospital.

“AI-enabled machines will be the next big wave in taking care of patients wherever they are,” said Randy Martin, chief medical officer of Caption Health and emeritus professor of cardiology at Emory University.

Martin joined the startup about four years ago after meeting its founders, who shared expertise and passion for medicine and AI. Today their software “takes a user through 10 standard views of the heart, coaching them through some 90 fine movements experts make,” he said.

“We don’t intend to replace sonographers; we’re just expanding the use of portable ultrasound systems to the periphery for more early detection,” he added.

Coping with an Unexpected Demand Spike

In the early days of the pandemic, that expansion couldn’t come fast enough.

In late March, the startup exhausted its supply of the NVIDIA Quadro P3000 GPUs that run its AI software. As the global shutdown took hold, it reached out to its supply chain.

“We are experiencing overwhelming demand for our product,” the company’s CEO wrote, after placing orders for 100 GPUs with a distributor.

Caption Health has systems currently in use at 11 hospitals. It expects to deploy Caption AI at a number of additional sites in the coming weeks.

GPUs at the Heart of Automated Heart Tests

The startup currently integrates its software in a portable ultrasound from Terason. It intends to partner with more ultrasound makers in the future. And it advises partners to embed GPUs in their future ultrasound equipment.

The Quadro P3000 in Caption AI runs real-time inference tasks using deep convolutional neural networks. They provide operators guidance in positioning a probe that captures images. Then they automatically choose the highest-quality heart images and interpret them to help doctors make informed decisions.
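The frame-selection step can be pictured as a simple argmax over model-assigned quality scores. A toy Python sketch, using hypothetical names rather than Caption Health’s actual API:

```python
# Illustrative sketch of "keep the best frame": an upstream network scores
# each captured ultrasound frame, and we retain the highest-scoring one.

def best_frame(frames, quality_score):
    """Return the frame that scores highest under quality_score."""
    return max(frames, key=quality_score)

# Toy stand-in: frames are dicts carrying a precomputed quality score.
frames = [{"id": 1, "score": 0.42}, {"id": 2, "score": 0.91}, {"id": 3, "score": 0.77}]
print(best_frame(frames, lambda f: f["score"])["id"])  # 2
```

The hard part, of course, is the scoring network itself; the selection logic is deliberately simple so it can run in real time alongside guidance and interpretation.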

The NVIDIA GPU also freed up four CPU cores, making space to process other tasks on the system, such as providing a smooth user experience.

The startup trained its AI models on a database of 1 million echocardiograms from clinical partners. An early study in partnership with Northwestern Medicine and the Minneapolis Heart Institute showed Caption AI helped eight registered nurses with no prior ultrasound experience capture highly accurate images on a wide variety of patients.

Inception Program Gives Startup Traction

Caption Health, formerly called Bay Labs, was founded in 2015 in Brisbane, Calif. It received a $125,000 prize at a 2017 GTC competition for members of NVIDIA’s Inception program, which gives startups access to technology, expertise and markets.

“Being part of the Inception program has provided us with increased recognition in the field of deep learning, a platform to share our AI innovations with healthcare and deep learning communities, and phenomenal support getting NVIDIA GPUs into our supply chain so we could deliver Caption AI,” said Charles Cadieu, co-founder and president of Caption Health.

Now that its tool has been tested in a pandemic, Caption Health looks forward to opportunities to help save lives across many ailments. The company aims to ride a trend toward more portable systems that extend availability and lower costs of diagnostic imaging.

“We hope to see our technology used everywhere from big hospitals to rural villages to examine people for a wide range of medical conditions,” said Cadieu.

To learn more about Caption Health and other companies like it, watch the webinar on healthcare startups working against COVID-19.