AI Goes Uptown: A Tour of Smart Cities Around the Globe 

There are as many ways to define a smart city as there are cities on the road to being smart.

From London and Singapore to Seat Pleasant, Maryland, they vary widely. Most share some common characteristics.

Every city wants to be smart about being a great place to live. So, many embrace broad initiatives for connecting their citizens to the latest 5G and fiber optic networks, expanding digital literacy and services.

Most agree that a big part of being smart means using technology to make their cities more self-aware, automated and efficient.

That’s why a smart city is typically a kind of municipal Internet of Things — a network of cameras and sensors that can see, hear and even smell. These sensors, especially video cameras, generate massive amounts of data that can serve many civic purposes like helping traffic flow smoothly.

Cities around the globe are turning to AI to sift through that data in real time for actionable insights. And, increasingly, smart cities build realistic 3D simulations of themselves, digital twins to test out ideas of what they might look like in the future.

“We define a smart city as a place applying advanced technology to improve the quality of life for people who live in it,” said Sokwoo Rhee, who’s worked on more than 200 smart city projects in 25 countries as an associate director for cyber-physical systems innovation at the U.S. National Institute of Standards and Technology.

U.S., London Issue Smart Cities Guidebooks

At NIST, Rhee oversees work on a guide for building smart cities. Eventually it will include reports on issues and case studies in more than two dozen areas from public safety to water management systems.

Across the pond, London describes its smart city efforts in a 60-page document that details many ambitious goals. Like smart cities from Dubai to San Jose in Silicon Valley, it’s a metro-sized work in progress.

An image from the Smart London guide.

“We are far from the ideal at the moment with a multitude of systems and a multitude of vendors making the smart city still somewhat complex and fragmented,” said Andrew Hudson-Smith, who is chair of digital urban systems at The Centre for Advanced Spatial Analysis at University College London and sits on a board that oversees London’s smart city efforts.

Living Labs for AI

In a way, smart cities are both kitchen sinks and living labs of technology.

They host everything from air-quality monitoring systems to repositories of data cleared for use in shared AI projects. The London Datastore, for example, already contains more than 700 publicly available datasets.

One market researcher tracks a basket of 13 broad areas that define a smart city, from smart streetlights to connected garbage cans. A smart-parking vendor in Stockholm took into account 24 factors — including the number of Wi-Fi hotspots and electric-vehicle charging stations — in its 2019 ranking of the world’s 100 smartest cities. (Its top five were all in Scandinavia.)

“It’s hard to pin it down to a limited set of technologies because everything finds its way into smart cities,” said Dominique Bonte, a managing director at market watcher ABI Research. Among popular use cases, he called out demand-response systems as “a huge application for AI because handling fluctuating demand for electricity and other services is a complex problem.”

Sweden’s EasyPark lists 24 factors that define a smart city.

Because the category is broad, the market is also big. Market watchers at Navigant Research expect the global market for smart-city gear to grow from $97.4 billion in annual revenue in 2019 to $265.4 billion by 2028, a compound annual growth rate of 11.8 percent.

It’s still early days. In a January 2019 survey of nearly 40 U.S. local and state government managers, more than 80 percent said a municipal Internet of Things would have a significant impact on their operations, but most were still in a planning phase and fewer than 10 percent had active projects.

Most smart cities are still under construction, according to a NIST survey.

“Smart cities mean many things to many people,” said Saurabh Jain, product manager of Metropolis, NVIDIA’s GPU software stack for vertical markets such as smart cities.

“Our focus is on building what we call the AI City with the real jobs that can be done today with deep learning, tapping into the massive video and sensor datasets cities generate,” he said.

For example, Verizon deployed video nodes built on the NVIDIA Jetson TX1 on existing streetlights in Boston and Sacramento to analyze and improve traffic flow, enhance pedestrian safety and optimize parking.

“Rollout is happening fast across the globe and cities are expanding their lighting infrastructure to become a smart-city platform … helping to create efficiency savings and a new variety of citizen services,” said David Tucker, head of product management in the Smart Communities Group at Verizon in a 2018 article.

Smart Streetlights for Smart Cities

Streetlights will be an important part of the furniture of tomorrow’s smart city.

So far, only a few hundred are outfitted with various mixes of sensors and Wi-Fi and cellular base stations. The big wave is yet to come as the world’s estimated 360 million light poles slowly upgrade to energy-saving LED lamps.

A European take on a smart streetlight.

In a related effort, the city of Bellevue, Washington, tested a computer vision system from Microsoft Research to improve traffic safety and reduce congestion. Researchers at the University of Wollongong recently described similar work using NVIDIA Jetson TX2 modules to track the flow of vehicles and pedestrians in Liverpool, Australia.

Airports, retail stores and warehouses are already using smart cameras and AI to run operations more efficiently. They are defining a new class of edge computing networks that smart cities can leverage.

For example, Seattle-Tacoma International Airport (SEA) will roll out an AI system from startup Assaia that uses NVIDIA GPUs to speed the time to turn around flights.

“Video analytics is crucial in providing full visibility over turnaround activities as well as improving safety,” said an SEA manager in a May report.

Nashville, Zurich Explore the Simulated City

Some smart cities are building digital twins, 3D simulations that serve many purposes.

For example, both Zurich and Nashville will someday let citizens and city officials don goggles at virtual town halls to see simulated impacts of proposed developments.

“The more immersive and fun an experience, the more you increase engagement,” said Dominik Tarolli, director of smart cities at Esri, which is supplying simulation software that runs on NVIDIA GPUs for both cities.

Cities as far apart in geography and population as Singapore and Rennes, France, built digital twins using a service from Dassault Systèmes.

“We recently signed a partnership with Hong Kong and presented examples for a walkability study that required a 3D simulation of the city,” said Simon Huffeteau, a vice president working on smart cities for Dassault.

Europe Keeps an AI on Traffic

Many smart cities get started with traffic control. London uses digital signs to post speed limits that change to optimize traffic flow. It also uses license-plate recognition to charge tolls for entering a low-emission zone in the city center.

Cities in Belgium and France are considering similar systems.

“We think in the future cities will ban the most polluting vehicles to encourage people to use public transportation or buy electric vehicles,” said Bonte of ABI Research. “Singapore is testing autonomous shuttles on a 5.7-mile stretch of its streets,” he added.

Nearby, Jakarta uses a traffic-monitoring system from Nodeflux, a member of NVIDIA’s Inception program that nurtures AI startups. The software taps AI and the nearly 8,000 cameras already in place around Jakarta to recognize license plates of vehicles with unpaid taxes.
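The matching step at the end of such a pipeline can be conceptually simple. Here’s an illustrative Python sketch; the plate strings and registry are invented for the example and aren’t Nodeflux’s actual system:

```python
# Illustrative only: match recognized plates against a registry of
# vehicles with unpaid taxes. Plates and registry are made up.
unpaid_taxes = {"B1234XYZ", "B5678ABC"}  # hypothetical registry

camera_reads = [("cam_017", "B1234XYZ"), ("cam_102", "B9999QQQ")]

for camera, plate in camera_reads:
    if plate in unpaid_taxes:
        print(f"{camera}: vehicle {plate} flagged for unpaid taxes")
```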

The system is one of more than 100 third-party applications that run on Metropolis, NVIDIA’s application framework for the Internet of Things.

Unsnarling Traffic in Israel and Kansas City

Traffic was the seminal app for a smart-city effort in Kansas City that started in 2015 with a $15 million smart streetcar. Today, residents can call up digital dashboards detailing current traffic conditions around town.

And in Israel, the city of Ashdod deployed AI software from viisights that spots patterns in a traffic monitoring system powered by NVIDIA Metropolis to help ensure citizens’ safety.

NVIDIA created the AI City Challenge to advance work on deep learning as a tool to unsnarl traffic. Now in its fourth year, it draws nearly 1,000 researchers competing in more than 300 teams that include members from multiple city and state traffic agencies.

The event spawned CityFlow, one of the world’s largest datasets for applying AI to traffic management. It consists of more than three hours of synchronized high-definition video from 40 cameras at 10 intersections, with 200,000 annotated bounding boxes around vehicles captured from different angles under various conditions.
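To give a feel for working with annotations at that scale, here’s a small Python sketch that counts distinct vehicle tracks per camera. The CSV layout is hypothetical, not CityFlow’s actual schema:

```python
# Hypothetical annotation format: camera_id,frame,track_id,x,y,w,h
import csv
from collections import defaultdict

tracks = defaultdict(set)
with open("annotations.csv") as f:
    for camera_id, frame, track_id, *box in csv.reader(f):
        tracks[camera_id].add(track_id)

for camera_id, ids in sorted(tracks.items()):
    print(f"camera {camera_id}: {len(ids)} distinct vehicle tracks")
```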

Drones to the Rescue in Maryland

You don’t have to be a big city with lots of money to be smart. Seat Pleasant, Maryland, a Washington, D.C., suburb of fewer than 5,000 people, launched a digital hub for city services in August 2017.

Since then, it has installed intelligent lighting, connected waste cans, home health monitors and video analytics to save money, improve traffic safety and reduce crime. It’s also become the first U.S. city to use drones for public safety, with plans for life-saving delivery of emergency medicines.

The idea got its start when Mayor Eugene Grant, searching for ways to recover from the 2008 economic downturn, attended an event on innovation villages.

“Seat Pleasant would like to be a voice for small cities in America where 80 percent have less than 10,000 residents,” said Grant. “Look at these cities as test beds of innovation … living labs,” he added.

Mayor Grant of Seat Pleasant aims to set an example of how small towns can become smart cities.

Rhee of NIST agrees. “I’m seeing a lot of projects embracing a broadening set of emerging technologies, making smart cities like incubation programs for new businesses like air taxis and autonomous vehicles that can benefit citizens,” he said, noting that even rural communities will get into the act.

Simulating a New Generation of Smart Cities

When the work is done, go to the movies. Hollywood might provide a picture of the next horizon in the same way it sparked some of the current work.

Esri’s tools are used to simulate cities for movies as well as the real world.

Flicks including Blade Runner 2049, Cars, Guardians of the Galaxy and Zootopia used a program called City Engine from startup Procedural that enables a rule-based approach to constructing simulated cities.

Their work caught the eye of Esri, which acquired the company and bundled its program with its ArcGIS Urban planning tool, now a staple for hundreds of real cities worldwide.

“Games and movies make people expect more immersive experiences, and that requires more computing,” said Tarolli, a co-founder of Procedural and now Esri’s point person on smart cities.


Stop the Bleeding: AI Startup Deep01 Helps Physicians Evaluate Brain Hemorrhage

During a stroke, a patient loses an estimated 1.9 million brain cells every minute, so interpreting their CT scan even one second quicker is vital to maintaining their health.

To save precious time, Taiwan-based medical imaging startup Deep01 has created an AI-based medical imaging software, called DeepCT, to evaluate acute intracerebral hemorrhage (ICH), a type of stroke. The system works with 95 percent accuracy in just 30 seconds per case — about 10 times faster than competing methods.

Founded in 2016, Deep01 is the first AI company in Asia with regulatory clearances from both the U.S. FDA and Taiwan’s FDA. It’s a member of NVIDIA Inception, a program that helps startups develop, prototype and deploy their AI or data science technology and get to market faster.

The startup recently raised around $3 million for DeepCT, which detects suspected areas of bleeding around the brain and annotates where they’re located on CT scans, notifying physicians of the results.

The software was trained using 60,000 medical images that displayed all types of acute ICH. Deep01 uses a self-developed deep learning framework that processes images and trains models on NVIDIA GPUs.
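As a rough illustration of GPU inference at that speed, here’s a minimal PyTorch sketch that runs a trained classifier over the slices of one CT study and times it. The model file, input shapes and threshold are assumptions, not Deep01’s proprietary framework:

```python
# Illustrative sketch: flag CT slices with suspected bleeding.
import time
import torch

model = torch.jit.load("ich_classifier.pt").cuda().eval()  # hypothetical model file
scans = torch.rand(40, 1, 512, 512).cuda()  # stand-in for ~40 slices of one head CT

start = time.perf_counter()
with torch.no_grad():
    probs = torch.sigmoid(model(scans)).squeeze()  # per-slice bleed probability
torch.cuda.synchronize()  # wait for the GPU before reading the clock
elapsed = time.perf_counter() - start

flagged = (probs > 0.5).nonzero().flatten().tolist()
print(f"{elapsed:.2f}s for one study; slices flagged: {flagged}")
```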

“Working with NVIDIA’s robust AI computing hardware, in addition to software frameworks like TensorFlow and PyTorch, allows us to deliver excellent AI inference performance,” said David Chou, founder and CEO of the company.

Making Quick Diagnosis Accessible and Affordable

Strokes are the world’s second-most common cause of death. When stroke patients are ushered into the emergency room, doctors must quickly determine whether the brain is bleeding and what the next steps for treatment should be.

However, many hospitals lack the manpower to make such timely diagnoses, since only some emergency room doctors specialize in reading CT scans. That gap, according to Chou, is why Deep01 was founded, with the mission of offering affordable AI-based solutions to medical institutions.

DeepCT’s 30-second interpretation speed can help medical practitioners prioritize the patients in most urgent need of treatment.

Helpful for Facilities of All Types and Sizes

DeepCT has helped doctors evaluate more than 5,000 brain scans and is being used in nine medical institutions in Taiwan, ranging from small hospitals to large-scale medical centers.

“The lack of radiologists is a big issue even in large-scale medical centers like the one I work at, especially during late-night shifts when fewer staff are on duty,” said Tseng-Lung Yang, senior radiologist at Kaohsiung Veterans General Hospital in Taiwan.

Geng-Wang Liaw, an emergency physician at Yeezen General Hospital — a smaller facility in Taiwan — agreed that Deep01’s technology helps relieve physical and mental burdens for doctors.

“Doctors in the emergency room may misdiagnose a CT scan at times,” he said. “Deep01’s solution stands by as an assistant 24/7, to give doctors confidence and reduce the possibility for medical error.”

Beyond ICH, Deep01 is working to expand its technology to identify midline shift, a pathological finding that occurs when pressure builds inside the skull and that is associated with higher mortality.


AI Explains AI: Fiddler Develops Model Explainability for Transparency

Your online loan application just got declined without explanation. Welcome to the AI black box.

Businesses of all stripes turn to AI for computerized decisions driven by data. Yet consumers using applications with AI get left in the dark on how automated decisions work. And many people working within companies have no idea how to explain the inner workings of AI to customers.

Fiddler Labs wants to change that.

The San Francisco-based startup offers an explainable AI platform that enables companies to explain, monitor and analyze their AI products.

Explainable AI is a growing area of interest for enterprises because those outside of engineering often need to understand how their AI models work.

Using explainable AI, banks can provide reasons to customers for a loan’s rejection, based on data points fed to models, such as maxed credit cards or high debt-to-income ratios. Internally, marketers can strategize about customers and products by knowing more about the data points that drive them.

“This is bridging the gap between hardcore data scientists who are building the models and the business teams using these models to make decisions,” said Anusha Sethuraman, head of product marketing at Fiddler Labs.

Fiddler Labs is a member of NVIDIA Inception, a program that provides companies working in AI and data science with fundamental tools, expertise and marketing support, and helps them get to market faster.

What Is Explainable AI?

Explainable AI is a set of tools and techniques that help explore the math inside an AI model. It can map out the data inputs and their weighted values that were used to arrive at the data output of the model.

All of this, essentially, enables a layperson to study the sausage factory at work inside an otherwise opaque process. The result is that explainable AI can help deliver insights into how and why a model made a particular decision.
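A toy example makes the idea concrete. For a simple linear credit model, each input’s contribution to the decision is just its value times the learned weight; the feature names, weights and applicant below are illustrative, not any real lender’s model:

```python
# Illustrative linear model: explain a decline by per-feature contribution.
import numpy as np

features = ["credit_utilization", "debt_to_income", "years_of_history"]
weights = np.array([-2.1, -1.4, 0.8])    # learned coefficients (assumed)
bias = 1.5

applicant = np.array([0.95, 0.62, 1.0])  # maxed cards, high DTI, short history

contributions = weights * applicant
score = bias + contributions.sum()
print(f"approval score: {score:.2f}")    # below zero means declined

# Rank the reasons, most negative first, the way a rejection notice might.
for name, c in sorted(zip(features, contributions), key=lambda x: x[1]):
    print(f"  {name:>20}: {c:+.2f}")
```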

“There’s often a hurdle to get AI into production. Explainability is one of the things that we think can address this hurdle,” Sethuraman said.

With an ensemble of models often in use, building this is no easy job.

But Fiddler Labs CEO and co-founder Krishna Gade is up to the task. He previously led the team at Facebook that built the “Why am I seeing this post?” feature to help consumers and internal teams understand how its AI works in the Facebook news feed.

He and Amit Paka — a University of Minnesota classmate — joined forces and quit their jobs to start Fiddler Labs. Paka, the company’s chief product officer, was motivated by his experience at Samsung with shopping recommendation apps and the lack of understanding into how these AI recommendation models work.

Explainability for Transparency

Founded in 2018, Fiddler Labs offers explainability for greater transparency in businesses. It helps companies make better-informed business decisions through a combination of data, explainable AI and human oversight, according to Sethuraman.

Fiddler’s tech is used by Hired, a talent and job matchmaking site driven by AI. Fiddler provides real-time reporting on how Hired’s AI models are working. It can generate explanations on candidate assessments and provide bias monitoring feedback, allowing Hired to assess its AI.

Explainable AI needs to be quickly available for consumer fintech applications. That enables customer service representatives to explain automated financial decisions — like loan rejections and robo rates — and build trust with transparency about the process.

The algorithms used for explanations require hefty processing. Sethuraman said that Fiddler Labs taps into NVIDIA cloud GPUs to make this possible, saying CPUs aren’t up to the task.

“You can’t wait 30 seconds for the explanations — you want explanations within milliseconds on a lot of different things depending on the use cases,” Sethuraman said.

Visit NVIDIA’s financial services industry page to learn more.

Image credit: Emily Morter, via the Unsplash Photo Community. 


Hardhats and AI: Startup Navigates 3D Aerial Images for Inspections

Childhood buddies from back in South Africa, Nicholas Pilkington, Jono Millin and Mike Winn went off together to a nearby college, teamed up on a handful of startups and kept a pact: work on drones once a week.

That dedication is paying off. Their drone startup, based in San Francisco, is picking up interest worldwide and has landed $35 million in Series D funding.

It all came together in 2014, when the friends were accepted into the AngelPad accelerator program in Silicon Valley. They founded DroneDeploy there, enabling contractors to capture photos, maps, videos and high-fidelity panoramic images for remote inspections of job sites.

“We had this a-ha moment: Almost any industry can benefit from aerial imagery, so we set out to build the best drone software out there and make it easy for everyone,” said Pilkington, co-founder and CTO at DroneDeploy.

DroneDeploy’s AI software platform — it’s the navigational brains and eyes — is operating in more than 200 countries and handling more than 1 million flights a year.

Nailing Down Applications

DroneDeploy’s software has been adopted in construction, agriculture, forestry, search and rescue, inspection, conservation and mining.

In construction, DroneDeploy is used by one-quarter of the world’s 400 largest building contractors and six of the top 10 oil and gas companies, according to the company.

DroneDeploy was one of three startups that recently presented at an NVIDIA Inception Connect event held by Japanese insurer Sompo Holdings. For good reason: Startups are helping insurance and reinsurance firms become more competitive by analyzing portfolio risks with AI.

The NVIDIA Inception program nurtures startups with access to GPU guidance, Deep Learning Institute courses, networking and marketing opportunities.

Navigating Drone Software

DroneDeploy offers features like fast setup of autonomous flights, photogrammetry to take physical measurements and APIs for drone data.

In addition to supporting industry-leading drones and hardware, DroneDeploy operates an app ecosystem for partners to build apps using its drone data platform. John Deere, for example, offers an app for customers to upload aerial drone maps of their fields to their John Deere account so that they can plan flights based on the field data.

Split-second photogrammetry and 360-degree images, generated by DroneDeploy’s algorithms running on NVIDIA GPUs in the cloud, help deliver pioneering mapping and visibility.

AI on Safety, Cost and Time

Sending drones into high places instead of people can aid safety. The U.S. Occupational Safety and Health Administration last year reported that 22 people were killed in roofing-related accidents in the U.S.

Inspecting roofs and solar panels with drone technology can improve that safety record. It can also save on cost: The traditional alternative to having people on rooftops to perform these inspections is using helicopters.

Customers of the DroneDeploy platform can follow a quickly created map to carry out a sequence of inspections, guided by camera feeds run through image recognition algorithms.

Using drones, customers can speed up inspections by 80 percent, according to the company.  

“In areas like oil, gas and energy, it’s about zero-downtime inspections of facilities for operations and safety, which is a huge value driver for these customers,” said Pilkington.


Sand Safety: Startup’s Lifeguard AI Hits the Beach to Save Lives

A team in Israel is making a splash with AI.

It started when biz school buddies Netanel Eliav and Adam Bismut went looking for a world-changing problem to solve. The problem found them: Bismut visited the Dead Sea after a drowning and noticed a lack of tech for lifeguards, who scanned the area with age-old binoculars.

The two aspiring entrepreneurs — recent MBA graduates of Ben-Gurion University, in the country’s south — decided this was their problem to solve with AI.

“I have two little girls, and as a father, I know the feeling that parents have when their children are near the water,” said Eliav, the company’s CEO.

They founded Sightbit in 2018 with BGU classmates Gadi Kovler and Minna Shezaf to help lifeguards see dangerous conditions and prevent drownings.

The startup is seed funded by Cactus Capital, the venture arm of their alma mater.

Sightbit is now in pilot testing at Palmachim Beach, a popular escape for sunbathers and surfers in the Palmachim Kibbutz area along the Mediterranean Sea, south of Tel Aviv. The sand dune-lined destination, with its inviting, warm aquamarine waters, gets packed with thousands of daily summer visitors.

But it’s also a place known for deadly rip currents.

Danger Detectors

Sightbit has developed image detection to help lifeguards spot dangers in their work. In collaboration with the Israel Nature and Parks Authority, the Beersheba-based startup has installed three cameras at the lifeguard towers at Palmachim Beach that feed data into a single NVIDIA Jetson AGX. NVIDIA Metropolis is deployed for video analytics.

The system of danger detectors enables lifeguards to keep tabs on a computer monitor that flags potential safety concerns while they scan the beach.

Sightbit has developed models based on convolutional neural networks and image detection to provide lifeguards with views of potential dangers. Kovler, the company’s CTO, has trained the company’s danger detectors on tens of thousands of images, processed with NVIDIA GPUs in the cloud.

Training on the images wasn’t easy with sun glare on the ocean, weather conditions, crowds of people, and people partially submerged in the ocean, said Shezaf, the company’s CMO.

But Sightbit’s deep learning and proprietary algorithms have enabled it to identify children alone as well as clusters of people. This allows its system to flag children who have strayed from the pack.
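That flagging logic can be illustrated with a few lines of post-processing on a detector’s output. The detection format and distance threshold below are assumptions for the example, not Sightbit’s code:

```python
# Illustrative post-processing: flag any child with no adult nearby.
import math

detections = [
    {"cls": "adult", "x": 120, "y": 340},
    {"cls": "child", "x": 135, "y": 350},  # near an adult: fine
    {"cls": "child", "x": 610, "y": 90},   # isolated: flag it
]

RADIUS = 80  # pixels; would be calibrated per camera in practice

def alone(child, people):
    return not any(
        p["cls"] == "adult"
        and math.hypot(p["x"] - child["x"], p["y"] - child["y"]) <= RADIUS
        for p in people
    )

for d in detections:
    if d["cls"] == "child" and alone(d, detections):
        print(f"ALERT: unaccompanied child near ({d['x']}, {d['y']})")
```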

Rip Current Recognition

The system also harnesses optical flow algorithms to detect dangerous rip currents in the ocean, helping lifeguards keep people out of those zones. These algorithms estimate the speed of every object in an image, using partial differential equations to calculate motion vectors for every pixel.
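For a sense of how that works, here’s a sketch using OpenCV’s Farneback optical flow to flag regions of sustained seaward motion. The thresholds, and the assumption that upward in the frame means offshore, are illustrative; Sightbit’s production algorithms are proprietary:

```python
# Illustrative rip-current proxy: find large regions moving seaward.
import cv2
import numpy as np

cap = cv2.VideoCapture("beach_cam.mp4")  # hypothetical camera feed
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(
        prev, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    offshore = -flow[..., 1]  # negative vertical flow = up the frame = seaward here
    mask = offshore > 2.0     # pixels moving seaward faster than 2 px/frame
    if mask.mean() > 0.01:    # a sizeable seaward-moving region
        ys, xs = np.nonzero(mask)
        print(f"possible rip current near ({int(xs.mean())}, {int(ys.mean())})")
    prev = gray
```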

Lifeguards can get updates on ocean conditions so when they start work they have a sense of hazards present that day.

“We spoke with many lifeguards. The lifeguard is trying to avoid the next accident. Many people go too deep and get caught in the rip currents,” said Eliav.

Camera feeds from the lifeguard towers, processed on the single compact Jetson Xavier module running Metropolis, can offer split-second inference for alerts, tracking, statistics and risk analysis in real time.

The Israel Nature and Parks Authority is planning to have a structure built on the beach to house more cameras for automated safety, according to Sightbit.

COVID-19 Calls 

Palmachim Beach lifeguards have a lot to watch, especially now as people get out of their homes for fresh air after the region begins reopening from COVID-19-related closures.

As part of Sightbit’s beach safety developments, the company had been training its network to spot how far apart people were to help gauge child safety.

This work also directly applies to monitoring social distancing and has attracted the attention of potential customers seeking ways to slow the spread of COVID-19. The Sightbit platform can provide them crowding alerts when a public area is overcrowded and proximity alerts for when individuals are too close to each other, said Shezaf.

The startup has put in extra hours to work with those interested in its tech to help monitor areas for ways to reduce the spread of the pathogen.

“If you want to change the world, you need to do something that is going to affect people immediately without any focus on profit,” said Eliav.

 

Sightbit is a member of NVIDIA Inception, a virtual accelerator program that helps startups in AI and data science get to market faster.


Productivity in Full Display: Quadro View Helps Professionals Optimize Desktop Workspaces

It’s time to put your comfort zone on display.

NVIDIA Quadro View, available now, enhances workspaces and boosts productivity by enabling professionals to manage their displays and arrange their workspaces in a way that best suits them.

Workspaces are vital to productivity because they set the tone for how people work. More and more, professionals across industries are tackling demanding workflows that require working simultaneously across multiple applications and windows.

They’re using more ultra-widescreen monitors, curved displays and multi-monitor setups to see all they need in one view. They’re also increasingly personalizing their workspace layout — from their desk space to the windows on screen — for easy access to the things they need.

The latest application in the NVIDIA Quadro Experience platform, Quadro View helps streamline workflows with a suite of desktop management tools to deliver maximum flexibility and control over displays.

With Quadro View, included with Quadro drivers or downloadable as a standalone app, users can easily customize their desktop layout, gaining full control over their displays so they can work more efficiently.

A View Designed to Fit the Way You Work

Quadro Experience provides users a range of productivity tools to choose from, including screen capture and desktop recording, so they can simplify time-consuming tasks.

Quadro View, easily launched from Quadro Experience, lets multitaskers streamline productivity even further through features like tailored workspaces with easy navigation, compatibility with top software applications, and powerful window management and deployment tools for a personalized desktop experience.

“Adjusting and organizing windows in my screen is a boring and often time-consuming daily task that takes time away from my work, especially when something changes in my workflow,” said Luis Paulo F. Mesquita, global macro financial portfolio manager at Venturestar Capital Management. “Quadro View’s simple drag-and-drop process saves precious minutes in my morning so I can get up and running in no time.”

Quadro View provides advanced capabilities that allow users to:

  • Divide workspaces using display gridlines and arrange applications into regions on monitors, also known as window snapping.
  • Save profiles based on personal workflows to deploy preset desktop and application configurations.
  • Specify how windows operate on desktops or display devices with advanced window management.
  • Set up hotkeys to trigger actions and quickly access common functions.

Quadro View is available to download as a standalone application, and as a part of the latest Quadro Optimal Driver for Enterprise Release 450 U1, which includes advanced Quadro display features, added support for Windows 10 and new studio application updates.


AI to Hit Mars, Blunt Coronavirus, Play at the London Symphony Orchestra

AI is the rocket fuel that will get us to Mars. It’s the vaccine that will save us on Earth. And it’s the people who aspire to make a dent in the universe.

Our latest “I Am AI” video, unveiled during NVIDIA CEO Jensen Huang’s keynote address at the GPU Technology Conference, pays tribute to the scientists, researchers, artists and many others making historic advances with AI.

To grasp AI’s global impact, consider: the technology is expected to generate $2.9 trillion worth of business value by 2021, according to Gartner.

It’s on course to classify 2 trillion galaxies to understand the universe’s origin, and to zero in on the molecular structure of the drugs needed to treat coronavirus and cancer.

As depicted in the latest video, AI has an artistic side, too. It can paint as well as Bob Ross. And its ability to assist in the creation of original compositions is worthy of the London Symphony Orchestra, which plays the accompanying theme music, a piece that started out written by a recurrent neural network.

AI is also capable of creating text-to-speech synthesis for narrating a short documentary. And that’s just what it did.

These fireworks and more are the story of I Am AI. Sixteen companies and research organizations are featured in the video. The action moves fast, so grab a bowl of popcorn, kick back and enjoy this tour of some of the highlights of AI in 2020.

Reaching Into Outer Space

Understanding the formation of the structure and the amount of matter in the universe requires observing and classifying celestial objects such as galaxies. With an estimated 2 trillion galaxies to examine in the observable universe, it’s what cosmologists call a “computational grand challenge.”

The recent Dark Energy Survey collected data from over 300 million galaxies. To study them with unprecedented precision, the Center for Artificial Intelligence Innovation at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign teamed up with the Argonne Leadership Computing Facility at the U.S. Department of Energy’s Argonne National Laboratory.

NCSA tapped the Galaxy Zoo project, a crowdsourced astronomy effort that labeled millions of galaxies observed by the Sloan Digital Sky Survey. Using that data, an AI model with 99.6 percent accuracy can now chew through unlabeled galaxies to ID them and accelerate scientific research.
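For readers who want a feel for the approach, here’s a minimal PyTorch training sketch for a small galaxy-morphology classifier. The architecture, image sizes and random stand-in data are placeholders, not the NCSA team’s actual model:

```python
# Illustrative: train a tiny CNN to sort galaxy cutouts into two classes.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # e.g., spiral vs. elliptical
).cuda()

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch: 64 galaxy cutouts, 64x64 RGB, with crowdsourced labels.
images = torch.rand(64, 3, 64, 64).cuda()
labels = torch.randint(0, 2, (64,)).cuda()

for epoch in range(10):
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```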

With Mars targeted for human travel, scientists are seeking the safest path. In that effort, the NASA Solar Dynamics Observatory takes images of the sun every 1.3 seconds. And researchers have developed an algorithm that removes errors from the images, which are placed into a growing archive for analysis.

Using such data, NASA is tapping into NVIDIA GPUs to analyze solar surface flows so that it can build better models for predicting the weather in space. NASA also aims to identify origins of energetic particles in Earth’s orbit that could damage interplanetary spacecraft, jeopardizing trips to Mars.

Restoring Voice and Limb

Voiceitt — a Tel Aviv-based startup that’s developed signal processing, speech recognition technologies and deep neural nets — offers a synthesized voice for those whose speech has been distorted. The company’s app converts unintelligible speech into easily understood speech.

The University of North Carolina at Chapel Hill’s Neuromuscular Rehabilitation Engineering Laboratory and North Carolina State University’s Active Robotic Sensing (ARoS) Laboratory develop experimental robotic limbs used in the labs.

The two research units have been working on walking environment recognition, aiming to develop environmental adaptive controls for prostheses. They’ve been using CNNs for prediction running on NVIDIA GPUs. And they aren’t alone.

Helping in Pandemic

Whiteboard Coordinator remotely monitors the temperature of people entering buildings to minimize exposure to COVID-19. The Chicago-based startup provides temperature-screening rates of more than 2,000 people per hour at checkpoints. Whiteboard Coordinator and NVIDIA bring AI to the edge of healthcare with NVIDIA Clara Guardian, an application framework that simplifies the development and deployment of smart sensors.

Viz.ai uses AI to inform neurologists about strokes much faster than traditional methods. With the onset of the pandemic, Viz.ai moved to help combat the new virus with an app that alerts care teams to positive COVID-19 results.

Axial3D is a Belfast, Northern Ireland, startup that enlists AI to accelerate the production time of 3D-printed models for medical images used in planning surgeries. Having redirected its resources at COVID-19, the company is now supplying face shields and is among those building ventilators for the U.K.’s National Health Service. It has also begun 3D printing of swab kits for testing as well as valves for respirators. (Check out their on-demand webinar.)

Autonomizing Contactless Help

KiwiBot, a cheery-eyed food delivery bot from Berkeley, Calif., has added COVID-19 services to its rounds. It’s autonomously delivering masks, sanitizers and other supplies with its robot-to-human service.

Masterpieces of Art, Compositions and Narration

Researchers from London-based startup Oxia Palus demonstrated in a paper, “Raiders of the Lost Art,” that AI could be used to recreate lost works of art that had been painted over. Beneath Picasso’s 1902 The Crouching Beggar lies a mountainous landscape that art curators believe is of Parc del Laberint d’Horta, near Barcelona.

They also know that Santiago Rusiñol painted Parc del Laberint d’Horta. Using a modified X-ray fluorescence image of The Crouching Beggar and Santiago Rusiñol’s Terraced Garden in Mallorca, the researchers applied neural style transfer, running on NVIDIA GPUs, to reconstruct the lost artwork, creating Rusiñol’s Parc del Laberint d’Horta.
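The underlying technique is well documented in the research literature. Below is a compact sketch of Gatys-style neural style transfer on VGG-19 features, in the spirit of what the paper describes; the filenames, layer choices and loss weights are assumptions, not Oxia Palus’ actual pipeline:

```python
# Illustrative neural style transfer (Gatys et al.) on VGG-19 features.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

dev = "cuda" if torch.cuda.is_available() else "cpu"

def load(path):
    tf = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])
    return tf(Image.open(path).convert("RGB")).unsqueeze(0).to(dev)

content = load("xrf_hidden_landscape.png")   # hypothetical input images
style = load("rusinol_terraced_garden.png")

vgg = models.vgg19(pretrained=True).features.to(dev).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE, CONTENT = {0, 5, 10, 19, 28}, {21}    # conv layer indices

def feats(x):
    out = {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE or i in CONTENT:
            out[i] = x
    return out

def gram(f):                                 # style as feature correlations
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

target_c = feats(content)[21]
target_s = {i: gram(f) for i, f in feats(style).items() if i in STYLE}

img = content.clone().requires_grad_(True)   # start from the content image
opt = torch.optim.Adam([img], lr=0.02)

for step in range(300):
    opt.zero_grad()
    f = feats(img)
    loss = F.mse_loss(f[21], target_c) + 1e4 * sum(
        F.mse_loss(gram(f[i]), target_s[i]) for i in STYLE)
    loss.backward()
    opt.step()
```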

 

For GTC a few years ago, Luxembourg-based AIVA AI composed the start — melodies and accompaniments — of what would become an original classical music piece meriting an orchestra. Since then we’ve found it one.

Late last year, the London Symphony Orchestra agreed to play the moving piece, which was arranged for the occasion by musician John Paesano and was recorded at Abbey Road Studios.

 

NVIDIA alum Helen was our voice-over professional for videos and events for years. When she left the company, we thought about how we might continue the tradition. We turned to what we know: AI. But there weren’t publicly available models up to the task.

A team from NVIDIA’s Applied Deep Learning Research group published the answer to the problem: Flowtron: an Autoregressive Flow-based Generative Network for Text-to-Speech Synthesis. Licensing Helen’s voice, we trained the network on dozens of hours of it.

First, Helen produced multiple takes, guided by our creative director. Then our creative director was able to generate multiple takes from Flowtron and adjust parameters of the model to get the desired outcome. And what you hear is “Helen” speaking in the I Am AI video narration.


What’s a DPU?


Of course, you’re probably already familiar with the Central Processing Unit or CPU. Flexible and responsive, for many years CPUs were the sole programmable element in most computers.

More recently the GPU, or graphics processing unit, has taken a central role. Originally used to deliver rich, real-time graphics, GPUs have parallel processing capabilities that make them ideal for accelerated computing tasks of all kinds.

That’s made them the key to artificial intelligence, deep learning, and big data analytics applications.

Over the past decade, however, computing has broken out of the boxy confines of PCs and servers — with CPUs and GPUs powering sprawling new hyperscale data centers.

These data centers are knit together with a powerful new category of processors. The DPU, or data processing unit, has become the third member of the data-centric accelerated computing model. “This is going to represent one of the three major pillars of computing going forward,” NVIDIA CEO Jensen Huang said during a talk earlier this month.

“The CPU is for general purpose computing, the GPU is for accelerated computing and the DPU, which moves data around the data center, does data processing.”


So What Makes a DPU Different?

A DPU is a new class of programmable processor: a system on a chip, or SoC, that combines three key elements:

  • An industry-standard, high-performance, software-programmable, multi-core CPU, typically based on the widely used Arm architecture, tightly coupled to the other SoC components

  • A high-performance network interface capable of parsing, processing and efficiently transferring data at line rate, or the speed of the rest of the network, to GPUs and CPUs

  • A rich set of flexible and programmable acceleration engines that offload and improve application performance for AI and machine learning, security, telecommunications and storage, among others

All these DPU capabilities are critical to enabling the isolated, bare-metal, cloud-native computing that will define the next generation of cloud-scale computing.

DPUs: Incorporated into SmartNICs

The DPU can be used as a stand-alone embedded processor, but it’s more often incorporated into a SmartNIC, a network interface controller that’s used as a key component in a next-generation server.

Other devices that claim to be DPUs miss significant elements of these three critical capabilities, which are fundamental to answering the question: What is a DPU?


For example, some vendors use proprietary processors that don’t benefit from the rich development and application infrastructure offered by the broad Arm CPU ecosystem.

Others claim to have DPUs but make the mistake of focusing solely on the embedded CPU to perform data path processing.

DPUs: A Focus on Data Processing

This isn’t competitive and doesn’t scale, because trying to beat the traditional x86 CPU with a brute force performance attack is a losing battle. If 100 Gigabit/sec packet processing brings an x86 to its knees, why would an embedded CPU perform better?

Instead the network interface needs to be powerful and flexible enough to handle all network data path processing. The embedded CPU should be used for control path initialization and exception processing, nothing more.

At a minimum, there are 10 capabilities the network data path acceleration engines need to be able to deliver:

  • Data packet parsing, matching, and manipulation to implement an open virtual switch (OVS)
  • RDMA data transport acceleration for Zero Touch RoCE
  • GPU-Direct accelerators to bypass the CPU and feed networked data directly to GPUs (both from storage and from other GPUs)
  • TCP acceleration including RSS, LRO, checksum, etc.
  • Network virtualization for VXLAN and Geneve overlays and VTEP offload
  • Traffic shaping “packet pacing” accelerator to enable multi-media streaming, content distribution networks, and the new 4K/8K Video over IP (RiverMax for ST 2110)
  • Precision timing accelerators for telco Cloud RAN such as 5T for 5G capabilities
  • Crypto acceleration for IPSEC and TLS performed inline so all other accelerations are still operational
  • Virtualization support for SR-IOV, VirtIO and para-virtualization
  • Secure Isolation: root of trust, secure boot, secure firmware upgrades, and authenticated containers and application life cycle management

These are just 10 of the acceleration and hardware capabilities that are critical to being able to answer yes to the question: “What is a DPU?”

So what is a DPU? This is a DPU:

This is a DPU, also known as a data processing unit.

Many so-called DPUs focus solely on delivering one or two of these functions.

The worst try to offload the datapath in proprietary processors.

While good for prototyping, this is a fool’s errand, because of the scale, scope and breadth of the data center.



May AI Help You? Square Takes Edge Off Conversational AI with GPUs

The next time a virtual assistant seems particularly thoughtful when rescheduling your appointment, you could thank it. Who knows, maybe it was built to learn from compliments. But you might actually have Gabor Angeli to thank.

The engineering manager and members of his team at Square Inc. published a paper on techniques for creating AI assistants that are sympathetic listeners. It described AI models that approach human performance in techniques like reflective listening — re-phrasing someone’s request so they feel heard.

These days his team is hard at work expanding Square Assistant from a virtual scheduler to a conversational AI engine driving all the company’s products.

“There is a huge surface area of conversations between buyers and sellers that we can and should help people navigate,” said Angeli, who will describe the work in a session available now with a free registration to GTC Digital.

Square, best known for its stylish payment terminals, offers small businesses a wide range of services from handling payroll to creating loyalty programs.

Hearing the Buzz on Conversational AI

A UC Berkeley professor’s intro to AI course lit a lasting fire in Angeli for natural-language processing more than a decade ago. He researched the emerging field in the university’s AI lab and eventually co-founded Eloquent, an NLP startup acquired by Square last May.

Six months later, Square Assistant was born as a virtual scheduler.

“We wanted to get something good but narrowly focused in front of customers quickly,” Angeli said. “We’re adding advanced features to Square Assistant now, and our aim is to get it into nearly everything we offer.”

Results so far are promising. Square Assistant can understand and provide help for 75 percent of customers’ questions, and it’s reducing appointment no-shows by 10 percent.

But to make NLP the talk of the town, the team faces knotty linguistic and technical challenges. For example, is “next Saturday” this coming one or the one after it?
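Even that small question hides a design decision. A quick sketch of the date arithmetic, with both readings of “next,” shows why:

```python
# Two defensible readings of "next Saturday" from a Wednesday.
from datetime import date, timedelta

def next_weekday(today, weekday, skip_current_week=False):
    # weekday: Monday=0 ... Sunday=6
    days_ahead = (weekday - today.weekday()) % 7 or 7
    if skip_current_week:
        days_ahead += 7  # reading "next" as the week after this one
    return today + timedelta(days=days_ahead)

today = date(2020, 7, 8)  # a Wednesday
print(next_weekday(today, 5))                          # 2020-07-11
print(next_weekday(today, 5, skip_current_week=True))  # 2020-07-18
```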

What’s more, there’s a long tail of common customer queries. As the job description of Square Assistant expands from dozens to thousands of tasks, its neural network models expand and require more training.

“It’s exciting to see BERT [Bidirectional Encoder Representations from Transformers] do things we didn’t think were possible, like showing AI for reading comprehension. It amazes me this is possible, but these are much larger models that present challenges in the time it takes to train and deploy them,” he said.

GPUs Speed Up Inference, Training

Angeli’s team started training AI models at Eloquent on single NVIDIA GPUs running CUDA in desktop PCs. At Square, it uses desktops with dual GPUs, supplemented by large hyperparameter-tuning jobs run on GPUs in the AWS cloud service.

In its tests, Square found inference jobs on average-size models run twice as fast on GPUs as on CPUs. Inference on large models such as RoBERTa runs 10x faster on the AWS GPU service than on CPUs.
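A comparison like that is straightforward to run: load the same model on each device and time repeated forward passes. The sketch below uses PyTorch and the Hugging Face transformers library; the model choice and batch size are assumptions, not Square’s benchmark setup:

```python
# Rough GPU-vs-CPU inference timing for a transformer model.
import time
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-base")
texts = ["Can I move my appointment to next Saturday?"] * 32

def bench(device, runs=20):
    model = AutoModel.from_pretrained("roberta-base").to(device).eval()
    batch = tok(texts, return_tensors="pt", padding=True).to(device)
    with torch.no_grad():
        model(**batch)  # warm-up pass
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(runs):
            model(**batch)
        if device == "cuda":
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / runs

print(f"cpu: {bench('cpu'):.3f} s/batch")
if torch.cuda.is_available():
    print(f"gpu: {bench('cuda'):.3f} s/batch")
```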

The difference for training jobs is “even more stark,” he reported. “It’s hard to train a modern machine-learning model without a GPU. If we had to run deep learning on CPUs, we’d be a decade behind,” he added.

Faster training also helps motivate AI developers to iterate designs more often, resulting in better models, he said.

His team uses a mix of small, medium and large NLP models, applying pre-training tricks that proved their worth with computer vision apps. Long term, he believes engineers will find general models that work well across a broad range of tasks.

In the meantime, conversational AI is a three-legged race with developers like Angeli’s team crafting more efficient models as GPU architects design beefier chips.

“Half the work is in algorithm design, and half is in NVIDIA making hardware that’s more optimized for machine learning and runs bigger models,” he said.


Working Remotely: Connecting to Your Office Workstation

With so many people working from home amid the COVID-19 outbreak, staying productive can be challenging.

At NVIDIA, some of us have RTX laptops and remote-working capabilities powered by our virtual GPU software via on-prem servers and the cloud. To help support the many other businesses with GPUs in their servers, we recently made vGPU licenses free for up to 500 users for 90 days to explore their virtualization options.

But many still require access to physical Quadro desktop workstations due to specific hardware configurations or data requirements. And we know this situation is hardly unique.

Many designers, engineers, artists and architects have Quadro RTX mobile workstations that are on par with their desktop counterparts, which helps them stay productive anywhere. However, a vast number of professionals don’t have access to their office-based workstations — with multiple high-end GPUs, large memory and storage, as well as applications and data.

These workstations are critical for keeping everything from family firms to multinational corporations going. And this has forced IT teams to explore different ways to address the challenges of working from home by connecting remotely to an office workstation.

Getting Started: Tools for Remote Connections

The list below shows several publicly available remote-working tools that are helpful to get going quickly. For details on features and licensing, contact the respective providers.

Managing Access, Drivers and Reboots

Once you’re up and running, keep these considerations in mind:

Give yourself a safety net when working on a remote system 

There are times when your tools can stop working, so it’s a good idea to have a safety net. Always install a VNC server on the machine (https://www.tightvnc.com/, https://www.realvnc.com/en/ or others), no matter what remote access tool you use. It’s also a good idea to enable access to Microsoft Remote Desktop as another option. These run quietly in the background, but are ready if you need them in an emergency.

Updating your driver remotely

We recommend you use a VNC connection to upgrade your drivers. Changing the driver often changes the parts of the driver that remote access tools rely on, so you can lose the connection. VNC doesn’t hook into the driver at a low level, so it keeps working as the old driver is swapped out for the new one. Once the driver is updated, you can go back to your other remote access tools.

Rebooting your machine remotely

Normally you can reboot with the Windows menus. Give the system a few minutes to restart and then log back in. If your main remote-working tools have stopped functioning, try a VNC connection. You can also restart from a PowerShell window or command prompt on your local machine with the command: shutdown /r /t 0 /m \\[machine-name]
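If you script the reboot, a small polling loop saves you from guessing when the machine is back. Here’s a Python sketch assuming the shutdown command above and Windows ping syntax; the machine name is a placeholder:

```python
# Reboot a remote Windows machine, then wait until it answers pings again.
import subprocess
import time

HOST = "my-office-workstation"  # placeholder machine name

subprocess.run(["shutdown", "/r", "/t", "0", "/m", f"\\\\{HOST}"])
time.sleep(60)  # give it time to actually go down first

while subprocess.run(["ping", "-n", "1", HOST],
                     capture_output=True).returncode != 0:
    time.sleep(15)
print(f"{HOST} is back up; reconnect with your remote desktop tool")
```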

App-Specific Resources

Several software makers with applications for professionals working in the manufacturing, architecture, and media and entertainment industries have provided instructions on using their applications from home. Here are links to a few recent articles:

Where to Get Help

Given the inherent variability in working from home, there’s no one-size-fits-all solution. If you run into technical issues and have questions, feel free to contact us at desktop-remoting@nvidia.com. We’ll do our best to help.
