AI Goes Uptown: A Tour of Smart Cities Around the Globe 

There are as many ways to define a smart city as there are cities on the road to being smart.

From London and Singapore to Seat Pleasant, Maryland, they vary widely, but most share some common characteristics.

Every city wants to be smart about being a great place to live. So, many embrace broad initiatives for connecting their citizens to the latest 5G and fiber optic networks, expanding digital literacy and services.

Most agree that a big part of being smart means using technology to make their cities more self-aware, automated and efficient.

That’s why a smart city is typically a kind of municipal Internet of Things — a network of cameras and sensors that can see, hear and even smell. These sensors, especially video cameras, generate massive amounts of data that can serve many civic purposes like helping traffic flow smoothly.

Cities around the globe are turning to AI to sift through that data in real time for actionable insights. And, increasingly, smart cities build realistic 3D simulations of themselves, digital twins to test out ideas of what they might look like in the future.

“We define a smart city as a place applying advanced technology to improve the quality of life for people who live in it,” said Sokwoo Rhee, who’s worked on more than 200 smart city projects in 25 countries as an associate director for cyber-physical systems innovation at the U.S. National Institute of Standards and Technology.

U.S., London Issue Smart Cities Guidebooks

At NIST, Rhee oversees work on a guide for building smart cities. Eventually it will include reports on issues and case studies in more than two dozen areas from public safety to water management systems.

Across the pond, London describes its smart city efforts in a 60-page document that details many ambitious goals. Like smart cities from Dubai to San Jose in Silicon Valley, it’s a metro-sized work in progress.

An image from the Smart London guide.

“We are far from the ideal at the moment with a multitude of systems and a multitude of vendors making the smart city still somewhat complex and fragmented,” said Andrew Hudson-Smith, who is chair of digital urban systems at The Centre for Advanced Spatial Analysis at University College London and sits on a board that oversees London’s smart city efforts.

Living Labs for AI

In a way, smart cities are both kitchen sinks and living labs of technology.

They host everything from air-quality monitoring systems to repositories of data cleared for use in shared AI projects. The London Datastore, for example, already contains more than 700 publicly available datasets.

One market researcher tracks a basket of 13 broad areas that define a smart city, from smart streetlights to connected garbage cans. A smart-parking vendor in Stockholm took into account 24 factors — including the number of Wi-Fi hotspots and electric-vehicle charging stations — in its 2019 ranking of the world’s 100 smartest cities. (Its top five were all in Scandinavia.)

“It’s hard to pin it down to a limited set of technologies because everything finds its way into smart cities,” said Dominique Bonte, a managing director at market watcher ABI Research. Among popular use cases, he called out demand-response systems as “a huge application for AI because handling fluctuating demand for electricity and other services is a complex problem.”

Sweden’s EasyPark lists 24 factors that define a smart city.

Because it’s broad, it’s also big. Market watchers at Navigant Research expect the global market for smart-city gear to grow from $97.4 billion in annual revenue in 2019 to $265.4 billion by 2028 at a compound annual growth rate of 11.8 percent.

It’s still early days. In a January 2019 survey of nearly 40 U.S. local and state government managers, more than 80 percent said a municipal Internet of Things would have a significant impact on their operations, but most were still in the planning phase and fewer than 10 percent had active projects.

Most smart cities are still under construction, according to a NIST survey.

“Smart cities mean many things to many people,” said Saurabh Jain, product manager of Metropolis, NVIDIA’s GPU software stack for vertical markets such as smart cities.

“Our focus is on building what we call the AI City with the real jobs that can be done today with deep learning, tapping into the massive video and sensor datasets cities generate,” he said.

For example, Verizon deployed video nodes built on the NVIDIA Jetson TX1 on existing streetlights in Boston and Sacramento to analyze and improve traffic flow, enhance pedestrian safety and optimize parking.

“Rollout is happening fast across the globe and cities are expanding their lighting infrastructure to become a smart-city platform … helping to create efficiency savings and a new variety of citizen services,” said David Tucker, head of product management in the Smart Communities Group at Verizon in a 2018 article.

Smart Streetlights for Smart Cities

Streetlights will be an important part of the furniture of tomorrow’s smart city.

So far, only a few hundred are outfitted with various mixes of sensors and Wi-Fi and cellular base stations. The big wave is yet to come as the estimated 360 million lamp posts around the world slowly upgrade to energy-saving LED lights.

A European take on a smart streetlight.

In a related effort, the city of Bellevue, Washington, tested a computer vision system from Microsoft Research to improve traffic safety and reduce congestion. Researchers at the University of Wollongong recently described similar work using NVIDIA Jetson TX2 modules to track the flow of vehicles and pedestrians in Liverpool, Australia.

Airports, retail stores and warehouses are already using smart cameras and AI to run operations more efficiently. They are defining a new class of edge computing networks that smart cities can leverage.

For example, Seattle-Tacoma International Airport (SEA) will roll out an AI system from startup Assaia that uses NVIDIA GPUs to speed the time to turn around flights.

“Video analytics is crucial in providing full visibility over turnaround activities as well as improving safety,” said an SEA manager in a May report.

Nashville, Zurich Explore the Simulated City

Some smart cities are building digital twins, 3D simulations that serve many purposes.

For example, both Zurich and Nashville will someday let citizens and city officials don goggles at virtual town halls to see simulated impacts of proposed developments.

“The more immersive and fun an experience, the more you increase engagement,” said Dominik Tarolli, director of smart cities at Esri, which is supplying simulation software that runs on NVIDIA GPUs for both cities.

Cities as far apart in geography and population as Singapore and Rennes, France, built digital twins using a service from Dassault Systèmes.

“We recently signed a partnership with Hong Kong and presented examples for a walkability study that required a 3D simulation of the city,” said Simon Huffeteau, a vice president working on smart cities for Dassault.

Europe Keeps an AI on Traffic

Many smart cities get started with traffic control. London uses digital signs to post speed limits that change to optimize traffic flow. It also uses license-plate recognition to charge tolls for entering a low-emission zone in the city center.

Cities in Belgium and France are considering similar systems.

“We think in the future cities will ban the most polluting vehicles to encourage people to use public transportation or buy electric vehicles,” said Bonte of ABI Research. “Singapore is testing autonomous shuttles on a 5.7-mile stretch of its streets,” he added.

Nearby, Jakarta uses a traffic-monitoring system from Nodeflux, a member of NVIDIA’s Inception program that nurtures AI startups. The software taps AI and the nearly 8,000 cameras already in place around Jakarta to recognize license plates of vehicles with unpaid taxes.
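As a rough sketch of the matching step that follows plate recognition, here’s how recognized plates might be checked against a registry; the plate strings, normalization and registry below are invented for illustration, not Nodeflux’s actual system:

```python
# Tiny sketch of the match step after plate recognition: normalize the
# recognized string and look it up in a registry of flagged vehicles.
unpaid_taxes = {"B1234XYZ", "B9876ABC"}  # hypothetical registry of flagged plates

def check_plate(plate: str) -> bool:
    """Return True if a recognized plate should be flagged for follow-up."""
    return plate.replace(" ", "").upper() in unpaid_taxes

for plate in ["b 1234 xyz", "D5555QRS"]:
    print(plate, "->", "flag" if check_plate(plate) else "ok")
```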

The system is one of more than 100 third-party applications that run on Metropolis, NVIDIA’s application framework for the Internet of Things.

Unsnarling Traffic in Israel and Kansas City

Traffic was the seminal app for a smart-city effort in Kansas City that started in 2015 with a $15 million smart streetcar. Today, residents can call up digital dashboards detailing current traffic conditions around town.

And in Israel, the city of Ashdod deployed AI software from viisights. The software spots patterns in a traffic monitoring system powered by NVIDIA Metropolis, helping keep citizens safe.

NVIDIA created the AI City Challenge to advance work on deep learning as a tool to unsnarl traffic. Now in its fourth year, it draws nearly 1,000 researchers competing in more than 300 teams that include members from multiple city and state traffic agencies.

The event spawned CityFlow, one of the world’s largest datasets for applying AI to traffic management. It consists of more than three hours of synchronized high-definition videos from 40 cameras at 10 intersections, creating 200,000 annotated bounding boxes around vehicles captured from different angles under various conditions.

Drones to the Rescue in Maryland

You don’t have to be a big city with lots of money to be smart. Seat Pleasant, Maryland, a Washington, D.C., suburb of less than 5,000 people, launched a digital hub for city services in August 2017.

Since then, it has installed intelligent lighting, connected waste cans, home health monitors and video analytics to save money, improve traffic safety and reduce crime. It’s also become the first U.S. city to use drones for public safety, including plans for life-saving deliveries of emergency medicines.

The idea got its start when Mayor Eugene Grant, searching for ways to recover from the 2008 economic downturn, attended an event on innovation villages.

“Seat Pleasant would like to be a voice for small cities in America where 80 percent have less than 10,000 residents,” said Grant. “Look at these cities as test beds of innovation … living labs,” he added.

Mayor Grant of Seat Pleasant aims to set an example of how small towns can become smart cities.

Rhee of NIST agrees. “I’m seeing a lot of projects embracing a broadening set of emerging technologies, making smart cities like incubation programs for new businesses like air taxis and autonomous vehicles that can benefit citizens,” he said, noting that even rural communities will get into the act.

Simulating a New Generation of Smart Cities

When the work is done, go to the movies. Hollywood might provide a picture of the next horizon in the same way it sparked some of the current work.

Esri’s tools are used to simulate cities for movies as well as the real world.

Flicks including Blade Runner 2049, Cars, Guardians of the Galaxy and Zootopia used a program called City Engine from startup Procedural that enables a rule-based approach to constructing simulated cities.

Their work caught the eye of Esri, which acquired the company and bundled its program into the ArcGIS Urban planning tool, now a staple for hundreds of real cities worldwide.

“Games and movies make people expect more immersive experiences, and that requires more computing,” said Tarolli, a co-founder of Procedural and now Esri’s point person on smart cities.


Non-Stop Shopping: Startup’s AI Lets Supermarkets Skip the Line

Eli Gorovici loves to take friends sailing on the Mediterranean. As the new pilot of Trigo, a Tel Aviv-based startup, he’s inviting the whole retail industry on a cruise to a future with AI.

“We aim to bring the e-commerce experience into the brick-and-mortar supermarket,” said Gorovici, who joined the company as its chief business officer in May.

The journey starts with the sort of shopping anyone who’s waited in a long checkout line has longed for.

You fill up your bags at the market and just walk out. Magically, the store knows what you bought, bills your account and sends you a digital receipt, all while preserving your privacy.

Trigo is building that experience and more. Its magic is an AI engine linked to cameras and a few weight-sensing shelves for small items a shopper’s hand might completely cover.

With these sensors, Trigo builds a 3D model of the store. Neural networks recognize products customers put in their bags.

When shoppers leave, the system sends the grocer the tally and a number it randomly associated with them when they chose to swipe their smartphone as they entered the store. The grocer matches the number with a shopper’s account, charges it and sends off a digital bill.
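Here’s a minimal sketch of how such an anonymized handoff could work; the token scheme, class and account mapping below are assumptions for illustration, not Trigo’s actual protocol:

```python
import secrets

class CheckoutSession:
    """Illustrative anonymized checkout: the vision system sees only a
    random token, never the shopper's identity."""

    def __init__(self):
        self.token = secrets.token_hex(16)  # random ID issued at entry
        self.items = []                     # filled in by the vision system

    def add_item(self, sku, price):
        self.items.append((sku, price))

    def tally(self):
        return sum(price for _, price in self.items)

# The grocer, not the vision system, holds the token-to-account mapping,
# created when the shopper swipes a smartphone at the entrance.
accounts = {}

session = CheckoutSession()
accounts[session.token] = "customer-42"  # hypothetical account ID

session.add_item("pasta-500g", 1.99)
session.add_item("olive-oil-1l", 6.49)

# At exit, the system reports (token, tally); the grocer matches and bills.
print(accounts[session.token], "owes", round(session.tally(), 2))
```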

And that’s just the start.

An Online Experience in the Aisles

Shoppers get the same personalized recommendation systems they’re used to seeing online.

“If I’m standing in front of pasta, I may see on my handset a related coupon or a nice Italian recipe tailored for me,” said Gorovici. “There’s so much you can do with data, it’s mind blowing.”

The system lets stores fine-tune their inventory management systems in real time. Typical shrinkage rates from shoplifting or human error could sink to nearly zero.

AI Turns Images into Insights

Making magic is hard work. Trigo’s system gathers a petabyte of video data a day for an average-size supermarket.

It uses as many as four neural networks to process that data at mind-melting rates of up to a few hundred frames per second. (By contrast, your TV displays high-definition movies at 60 fps.)

Trigo used a dataset of up to 500,000 2D product images to train its neural networks. In daily operation, the system uses those models to run millions of inference tasks with help from NVIDIA TensorRT software.
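A common route to TensorRT deployment is exporting a trained network to ONNX, which TensorRT can then compile into an optimized engine. The sketch below shows that step with a stand-in ResNet classifier; the model choice, class count and file name are assumptions, since Trigo’s networks are proprietary:

```python
import torch
import torchvision.models as models

# A stand-in ResNet classifier; Trigo's actual networks are proprietary.
NUM_PRODUCTS = 500  # assumed number of product classes, for illustration

model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, NUM_PRODUCTS)
model.eval()

# Export to ONNX; TensorRT's parser can build an optimized inference
# engine from this file for high-frame-rate deployment on NVIDIA GPUs.
dummy = torch.randn(1, 3, 224, 224)  # one 224x224 RGB product crop
torch.onnx.export(model, dummy, "product_classifier.onnx",
                  input_names=["image"], output_names=["logits"])
```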

The AI work requires plenty of processing muscle. A supermarket outside London testing the Trigo system uses servers in its back room with 40-50 NVIDIA RTX GPUs. To boost efficiency, Trigo plans to deliver edge servers using NVIDIA T4 Tensor Core GPUs and join the NVIDIA Metropolis ecosystem starting next year.

Trigo got early access to the T4 GPUs thanks to its participation in NVIDIA Inception, a program that gives AI startups traction with tools, expertise and go-to-market support. The program also aims to introduce Trigo to NVIDIA’s retail partners in Europe.

In 2021, Trigo aims to move some of the GPU processing to Google, Microsoft and other cloud services, keeping some latency- or privacy-sensitive uses inside the store. It’s the kind of distributed architecture businesses are just starting to adopt, thanks in part to edge computing systems such as NVIDIA’s EGX platform.

Big Supermarkets Plug into AI

Tesco, the largest grocer in the U.K., has plans to open its first market using Trigo’s system. “We’ve vetted the main players in the industry and Trigo is the best by a mile,” said Tesco CEO Dave Lewis.

Israel’s largest grocer, Shufersal, also is piloting Trigo’s system, as are other retailers around the world.

Trigo was founded in 2018 by brothers Michael and Daniel Gabay, leveraging tech and operational experience from their time in elite units of the Israeli military.

Seeking his next big opportunity in his field of video technology, Gorovici asked friends who were venture capitalists for advice. “They said Trigo was the future of retail,” Gorovici said.

Like sailing in the aqua-blue Mediterranean, AI in retail is a compelling opportunity.

“It’s a trillion-dollar market — grocery stores are among the biggest employers in the world. They are all being digitized, and selling more online now given the pandemic, so maybe this next stage of digital innovation for retail will now move even faster,” he said.

Heads Up, Down Under: Sydney Suburb Enhances Livability with Traffic Analytics

With a new university campus nearby and an airport under construction, the city of Liverpool, Australia, 27 kilometers southwest of Sydney, is growing fast.

More than 30,000 people are expected to make a daily commute to its central business district. Liverpool needed to know the possible impact on traffic flow and on the movement of pedestrians, cyclists and vehicles.

The city already operates closed-circuit television cameras to monitor safety and security. Each camera captures large amounts of video that, due to stringent privacy regulations, is mainly combed through after an incident has been reported.

The challenge before the city was to turn this massive dataset into information that could help it run more efficiently, handle an influx of commuters and keep the place liveable for residents — without compromising anyone’s privacy.

To achieve this goal, the city has partnered with the Digital Living Lab of the University of Wollongong. Part of Wollongong’s SMART Infrastructure Facility, the DLL has developed what it calls the Versatile Intelligent Video Analytics platform. VIVA, for short, unlocks data so that owners of CCTV networks can access real-time, privacy-compliant data to make better informed decisions.

VIVA is designed to convert existing infrastructure into edge-computing devices embedded with the latest AI. The platform’s state-of-the-art deep learning algorithms are developed at DLL on the NVIDIA Metropolis platform. Their video analytics deep-learning models are trained using transfer learning to adapt to use cases, optimized via NVIDIA TensorRT software and deployed on NVIDIA Jetson edge AI computers.

“We designed VIVA to process video feeds as close as possible to the source, which is the camera,” said Johan Barthelemy, lecturer at the SMART Infrastructure Facility of the University of Wollongong. “Once a frame has been analyzed using a deep neural network, the outcome is transmitted and the current frame is discarded.”

Disposing of frames maintains privacy as no images are transmitted. It also reduces the bandwidth needed.
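In code, that process-then-discard pattern might look like the following sketch, where only aggregate counts ever leave the device; the camera URL and the placeholder detector are assumptions, not VIVA’s actual implementation:

```python
import cv2

def detect_objects(frame):
    """Hypothetical stand-in for VIVA's deep network: returns (label, bbox) pairs."""
    return []  # a real system would run a Jetson-optimized detector here

cap = cv2.VideoCapture("rtsp://camera.example/stream")  # assumed camera feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    counts = {}
    for label, _ in detect_objects(frame):
        counts[label] = counts.get(label, 0) + 1
    print(counts)  # stand-in for transmitting the outcome to the city
    del frame      # the frame itself is discarded; no images are stored or sent
cap.release()
```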

Beyond city streets like Liverpool’s, VIVA has been adapted for a wide variety of applications, such as identifying and tracking wildlife; detecting culvert blockages for stormwater management and flash-flood early warnings; and tracking people with thermal cameras to understand mobility behavior during heat waves. It can also distinguish between firefighters searching a building and other occupants, helping identify those who may need help to evacuate.

Making Sense of Traffic Patterns

The research collaboration between SMART, Liverpool’s city council and its industry partners is intended to improve the efficiency, effectiveness and accessibility of a range of government services and facilities.

For pedestrians, the project aims to understand where they’re going, their preferred routes and which areas are congested. For cyclists, it’s about the routes they use and ways to improve bicycle usage. For vehicles, understanding movement and traffic patterns, where they stop, and where they park are key.

Understanding mobility within a city formerly required a fleet of costly and fixed sensors, according to Barthelemy. Different models were needed to count specific types of traffic, and manual processes were used to understand how different types of traffic interacted with each other.

Using computer vision on the NVIDIA Jetson TX2 at the edge, the VIVA platform can count the different types of traffic and capture their trajectory and speed. Data is gathered using the city’s existing CCTV network, eliminating the need to invest in additional sensors.

Patterns of movements and points of congestion are identified and predicted to help improve street and footpath layout and connectivity, traffic management and guided pathways. The data has been invaluable in helping Liverpool plan for the urban design and traffic management of its central business district.

Machine Learning Application Built Using NVIDIA Technologies

SMART trained the machine learning applications on its VIVA platform for Liverpool on four workstations powered by a variety of NVIDIA TITAN GPUs, as well as six workstations equipped with NVIDIA RTX GPUs to generate synthetic data and run experiments.

In addition to using open databases such as Open Images, COCO and Pascal VOC for training, DLL created synthetic data via an in-house application based on the Unity engine. Synthetic data lets the project learn from numerous scenarios that might not otherwise be present at any given time, like rainstorms or masses of cyclists.

“This synthetic data generation allowed us to generate 35,000-plus images per scenario of interest under different weather, time of day and lighting conditions,” said Barthelemy. “The synthetic data generation uses ray tracing to improve the realism of the generated images.”
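Here’s a sketch of what that kind of scenario sampling might look like, with invented parameter names standing in for the Unity-based generator’s actual settings:

```python
import itertools
import random

# Invented scenario parameters standing in for the Unity generator's settings.
WEATHER = ["clear", "rain", "fog", "overcast"]
TIME_OF_DAY = ["dawn", "noon", "dusk", "night"]
LIGHTING = ["soft", "harsh", "backlit"]

def sample_scenarios(renders_per_combo):
    """Yield one render configuration per synthetic image."""
    for weather, tod, light in itertools.product(WEATHER, TIME_OF_DAY, LIGHTING):
        for _ in range(renders_per_combo):
            yield {
                "weather": weather,
                "time_of_day": tod,
                "lighting": light,
                "camera_jitter_deg": random.uniform(-2.0, 2.0),
                "seed": random.randrange(2**32),
            }

# 48 combinations x 730 renders each gives roughly 35,000 images.
params = list(sample_scenarios(730))
print(len(params), "render configurations")
```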

Inferencing is done with NVIDIA Jetson Nano, NVIDIA Jetson TX2 and NVIDIA Jetson Xavier NX, depending on the use case and processing required.


Sand Safety: Startup’s Lifeguard AI Hits the Beach to Save Lives

A team in Israel is making a splash with AI.

It started when business school buddies Netanel Eliav and Adam Bismut were looking for a problem to solve that would change the world. The problem found them: Bismut visited the Dead Sea after a drowning and noticed a lack of tech for lifeguards, who scanned the area with age-old binoculars.

The two aspiring entrepreneurs — recent MBA graduates of Ben-Gurion University, in the country’s south — decided this was their problem to solve with AI.

“I have two little girls, and as a father, I know the feeling that parents have when their children are near the water,” said Eliav, the company’s CEO.

They founded Sightbit in 2018 with BGU classmates Gadi Kovler and Minna Shezaf to help lifeguards see dangerous conditions and prevent drownings.

The startup is seed funded by Cactus Capital, the venture arm of their alma mater.

Sightbit is now in pilot testing at Palmachim Beach, a popular escape for sunbathers and surfers in the Palmachim Kibbutz area along the Mediterranean Sea, south of Tel Aviv. The sand dune-lined destination, with its inviting, warm aquamarine waters, gets packed with thousands of daily summer visitors.

But it’s also a place known for deadly rip currents.

Danger Detectors

Sightbit has developed image detection to spot dangers and aid lifeguards in their work. In collaboration with the Israel Nature and Parks Authority, the Beersheba-based startup has installed three cameras that feed data into a single NVIDIA Jetson AGX Xavier at the lifeguard towers at Palmachim Beach. NVIDIA Metropolis is deployed for video analytics.

The system of danger detectors enables lifeguards to keep tabs on a computer monitor that flags potential safety concerns while they scan the beach.

Sightbit has developed models based on convolutional neural networks and image detection to provide lifeguards with views of potential dangers. Kovler, the company’s CTO, has trained the company’s danger detectors on tens of thousands of images, processed with NVIDIA GPUs in the cloud.

Training on the images wasn’t easy with sun glare on the ocean, weather conditions, crowds of people, and people partially submerged in the ocean, said Shezaf, the company’s CMO.

But Sightbit’s deep learning and proprietary algorithms have enabled it to identify children alone as well as clusters of people. This allows its system to flag children who have strayed from the pack.

Rip Current Recognition

The system also harnesses optical flow algorithms to detect dangerous rip currents in the ocean, helping lifeguards keep people out of those zones. These algorithms make it possible to identify the speed of every object in an image, using partial differential equations to calculate acceleration vectors of every voxel in the image.
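For a sense of the mechanics, here’s a minimal sketch that flags unusually fast-moving water using OpenCV’s dense Farneback optical flow; the thresholds and file names are invented, and Sightbit’s proprietary algorithms surely differ:

```python
import cv2
import numpy as np

# Two consecutive grayscale frames from the shore camera (file names invented).
prev = cv2.cvtColor(cv2.imread("frame_0.jpg"), cv2.COLOR_BGR2GRAY)
curr = cv2.cvtColor(cv2.imread("frame_1.jpg"), cv2.COLOR_BGR2GRAY)

# Dense Farneback optical flow: one (dx, dy) motion vector per pixel.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)
speed = np.linalg.norm(flow, axis=2)  # per-pixel motion magnitude

fast = speed > np.percentile(speed, 99)  # unusually fast-moving regions
if fast.mean() > 0.005:                  # a meaningful patch of the frame
    print("possible rip current region; flag for lifeguard review")
```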

Lifeguards can get updates on ocean conditions, so when they start work they have a sense of the hazards present that day.

“We spoke with many lifeguards. The lifeguard is trying to avoid the next accident. Many people go too deep and get caught in the rip currents,” said Eliav.

Video from the cameras at lifeguard towers, processed on the single compact Jetson AGX Xavier running Metropolis, can deliver split-second inference for alerts, tracking, statistics and risk analysis in real time.

The Israel Nature and Parks Authority is planning to have a structure built on the beach to house more cameras for automated safety, according to Sightbit.

COVID-19 Calls 

Palmachim Beach lifeguards have a lot to watch, especially now, as people leave their homes for fresh air while the region reopens from COVID-19-related closures.

As part of Sightbit’s beach safety developments, the company had been training its network to spot how far apart people were to help gauge child safety.

This work also directly applies to monitoring social distancing and has attracted the attention of potential customers seeking ways to slow the spread of COVID-19. The Sightbit platform can provide crowding alerts when a public area is overcrowded and proximity alerts when individuals are too close to each other, said Shezaf.
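A simple sketch of how such proximity and crowding alerts can be derived from detector output; the box format, distance threshold and crowd limit are assumptions for illustration:

```python
import math

def centers(boxes):
    """boxes: list of (x1, y1, x2, y2) in ground-plane meters."""
    return [((x1 + x2) / 2, (y1 + y2) / 2) for x1, y1, x2, y2 in boxes]

def proximity_alerts(boxes, min_distance_m=2.0, max_people=50):
    """Return pairs of people standing too close, plus a crowding flag."""
    pts = centers(boxes)
    too_close = [
        (i, j)
        for i in range(len(pts)) for j in range(i + 1, len(pts))
        if math.dist(pts[i], pts[j]) < min_distance_m
    ]
    overcrowded = len(pts) > max_people
    return too_close, overcrowded

pairs, crowded = proximity_alerts([(0, 0, 1, 1), (1.2, 0, 2.2, 1), (30, 30, 31, 31)])
print("pairs too close:", pairs, "| overcrowded:", crowded)
```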

The startup has put in extra hours to work with those interested in its tech to help monitor areas for ways to reduce the spread of the pathogen.

“If you want to change the world, you need to do something that is going to affect people immediately without any focus on profit,” said Eliav.

 

Sightbit is a member of NVIDIA Inception, a virtual accelerator program that helps startups in AI and data science get to market faster.


What Is Edge Computing?

Edge computing and doughnuts share something in common: the closer they are to the consumer, the better.

A trip to the corner doughnut shop may be deliciously quick. But a box of doughnuts within reach of your desk is instant gratification.

The same holds true for edge computing. Send data to an AI application running in the cloud, and it delays answers. Send it to a nearby edge server, and it’s like grabbing directly from that pink box of glazed raised and rainbow sprinkles.

Chances are, you can get a small taste of edge computing right from your pocket: The latest smartphones live at the “edge” of telecom networks, processing smarter voice responses and snazzier photos.

Edge computing — a decades-old term — is the concept of capturing and processing data as close to the source as possible.

Edge computing requires placing processors at the points where gigabytes to terabytes of streaming data are being collected — autonomous vehicles, robots on factory floors, medical imaging machines at hospitals, cameras or checkout stations in retail stores — so autonomous applications can act on that data quickly.

The unveiling of 5G networks — expected to clock in at 10x the speed of 4G — only increases the possibilities for AI-enabled services, requiring further acceleration of edge computing.

From Smartphones to Smart Everything

New Google, Apple and Samsung smartphones pack more AI processing to better interpret users’ questions and polish images in milliseconds using computational photography.

Yet the vast sums of data streamed from Internet of Things devices are orders of magnitude greater than what people produce using smartphones.

A bonanza of connected cars, robots, drones, mobile devices, cameras and sensors for IoT, as well as medical imaging devices, has tipped the scales toward edge computing. The surge of data used in these compute-intensive workloads is demanding high-performance edge computing to deploy AI.

Split-second AI computations today require edge computing to reduce latency and bandwidth problems caused by shuttling data back and forth when processing on remote servers.

How Edge Computing Works

Data centers are centralized servers often situated where real estate and power are less expensive. Even on the zippiest fiber optic networks, data can’t travel faster than the speed of light. This physical distance between data and data centers causes latency.
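The arithmetic is simple but unforgiving. Light in fiber covers roughly 200,000 kilometers per second, so distance alone sets a floor on round-trip time before any processing happens:

```python
# Quick latency arithmetic: even at light speed in fiber, distance adds
# delay before a single byte is processed. Distances below are examples.
FIBER_KM_PER_S = 200_000  # roughly two-thirds the speed of light in a vacuum

def round_trip_ms(distance_km):
    """Minimum round-trip transit time, ignoring routing and processing."""
    return 2 * distance_km / FIBER_KM_PER_S * 1000

print(round_trip_ms(1500))  # distant data center: ~15 ms just in transit
print(round_trip_ms(1))     # nearby edge server: ~0.01 ms
```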

Edge computing closes the gap.

Edge computing can be run at multiple network nodes to literally close the distance between data and processing to reduce bottlenecks and accelerate applications.

At the periphery of networks, billions of IoT and mobile devices operate on small, embedded processors, which are ideal for basic applications like video.

That would be just fine if industries and municipalities across the world today weren’t applying AI to data from IoT devices. But they are, developing and running compute-intense models, and they require new approaches to traditional edge computing.

Fortune 500 companies and startups alike are adopting AI at the edge for municipalities. For example, cities are developing AI applications to relieve traffic jams and increase safety.

Verizon uses NVIDIA Metropolis, the IoT application framework that, combined with Jetson’s deep learning capabilities, can analyze multiple streams of video data to look for ways to improve traffic flow, enhance pedestrian safety, optimize parking in urban areas, and more.

Ontario, Canada-based startup Miovision Technologies uses deep neural networks to analyze data from its own cameras and from city infrastructure to optimize traffic lights and keep vehicles moving.

Miovision and others’ work in this space can be accelerated by edge computing from the NVIDIA Jetson compact supercomputing module and insights from NVIDIA Metropolis. The energy-efficient Jetson can handle multiple video feeds simultaneously for AI processes. The combination delivers an alternative to network bottlenecks and traffic jams.

Edge computing scales up, too. Industry application frameworks like NVIDIA Metropolis and AI applications from third parties run atop the NVIDIA EGX platform for optimal performance.

Benefits of Edge Computing for AI

Edge computing for AI has many benefits, like bringing AI computing to the environments where data is being generated across industries, including smart retail, healthcare, manufacturing, transportation and smart cities.

This shift in the computing landscape offers businesses new service opportunities and can unlock business efficiencies and cost-savings.

In place of traditional edge servers, the NVIDIA EGX platform offers compatibility across NVIDIA AI, from the Jetson line of supercomputing modules to full racks of NVIDIA T4 servers.

Businesses running edge computing for AI gain the flexibility of deploying low-latency AI applications on the tiny NVIDIA Jetson Nano, which takes just a few watts to deliver a half-trillion operations per second for such tasks as image recognition.

A rack of NVIDIA T4 servers delivers more than 10,000 trillion operations per second for the most demanding real-time speech recognition and other compute-heavy AI tasks.

Also, updates at the periphery of the AI-driven edge network are a snap. The EGX software stack runs on Linux and Kubernetes, allowing remote updates from the cloud or edge servers to continuously improve applications.

And NVIDIA EGX servers are tuned for CUDA-accelerated containers.

Enterprise Edge Computing and AI Services 

The world’s largest retailers are enlisting edge AI to become smart retailers. Intelligent video analytics, AI-powered inventory management, and customer and store analytics together offer improved margins and the opportunity to deliver better customer experiences.

Using the NVIDIA EGX platform, Walmart is able to process in real time the more than 1.6 terabytes of data it generates each second. It can use AI for a wide variety of tasks, such as automatically alerting associates to restock shelves, retrieve shopping carts or open up new checkout lanes.

Connected cameras numbering in the hundreds or more can feed AI image recognition models processed on site by NVIDIA EGX. Meanwhile, smaller networks of video feeds in remote locations can be handled by Jetson Nano, linking with EGX and NVIDIA AI in the cloud.

Store aisles can be monitored by fully autonomous robots with conversational AI, powered by Jetson AGX Xavier and running Isaac for SLAM navigation. All of this is compatible with EGX or NVIDIA AI in the cloud.

Whatever the application, NVIDIA T4 and Jetson GPUs at the edge provide a powerful combination for intelligent video analytics and machine learning applications.

Smart Devices to Sensor Fusion 

Factories, retailers, manufacturers and automakers are generating sensor data that can be used in a cross-referenced fashion to improve services.

This sensor fusion will enable retailers to deliver new services. Robots can use more than just voice and natural language processing models for conversational interactions. Those same bots can use video feeds to run on pose estimation models. Linking the voice and gesture sensor information can help robots better understand what products or directions customers are seeking.
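A toy sketch of that kind of fusion, combining a speech intent with a pointing gesture; the data structures, labels and confidence threshold are invented for illustration:

```python
# Illustrative fusion of two model outputs: a speech intent and a pointing
# direction. The structures below are assumptions, not a product API.
def fuse(intent, gesture, products_by_shelf):
    """Prefer the shelf the customer points at when speech is ambiguous."""
    if intent["confidence"] > 0.9:
        return intent["product"]
    return products_by_shelf.get(gesture["shelf_id"], "unknown")

intent = {"product": "pasta", "confidence": 0.55}  # from the NLP model
gesture = {"shelf_id": 12}                         # from pose estimation
print(fuse(intent, gesture, {12: "penne rigate"})) # -> "penne rigate"
```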

Sensor fusion could create new user experiences for automakers to adopt for competitive advantages as well. Automakers could use pose estimation models to understand where a driver is looking along with natural language models that understand a request that correlates to 7-Eleven locations on a car’s GPS map.

Snap your fingers, point to a 7-Eleven and say “pull over for doughnuts,” and you’re in for a ride to the future, with your autonomous vehicle inferring your destination, aided by sensor fusion and AI at the edge.

Edge Computing for Gaming

Gamers are notorious for demanding high-performance, low-latency computing power. High-quality cloud gaming at the edge ups the ante. Next-generation gaming applications involving virtual reality, augmented reality and AI are an even bigger challenge.

Telecommunications providers are using NVIDIA RTX Servers to bring cinematic-quality graphics, enhanced by ray tracing and AI, to gamers around the world. These servers power GeForce NOW, NVIDIA’s cloud gaming service, which transforms underpowered or incompatible hardware into powerful GeForce gaming PCs at the edge.

Taiwan Mobile, Korea’s LG U+, Japan’s SoftBank and Russia’s Rostelecom have all announced plans to roll out the service to their cloud gaming customers.

What Is AI Edge Computing as a Service?

With edge AI, telecommunications companies can develop next-generation services to offer their customers, providing new revenue streams.

Using NVIDIA EGX, telecom providers can analyze video camera feeds using image recognition models to help with everything from foot traffic to monitoring store shelves and deliveries.

For example, if a 7-Eleven’s display case ran out of doughnuts early on a Saturday morning, the store manager could receive an alert that it needs restocking.

So in the future, when you bite into a fresh doughnut, you might have edge computing to thank.

 

High-performance deep learning inferencing happens at the edge with the NVIDIA Jetson embedded computing platform, and through NVIDIA EGX platform servers and data centers with NVIDIA Tesla GPU accelerators.

Photo credit: Creative Commons Attribution 2.0 Generic license.


How NVIDIA Metropolis Is Paving the Way Toward Smarter Traffic

Traffic. Nobody likes it, but we all have to deal with it.

As the world’s cities grow more densely populated, scientists and entrepreneurs are looking for solutions to gridlock, pollution and the other byproducts of a world filled with cars.

Two sessions at the GPU Technology Conference earlier this month spoke to the role that data, deep learning and intelligent video analytics can play in easing traffic and improving quality of life for city dwellers the world over.

The Virtuous Cycle of Traffic

Kurtis McBride, CEO of Miovision Technologies, an IVA startup based in Ontario, Canada, spoke to a room full of developers about his company’s efforts — and its 40 percent year-over-year growth — to make traffic flow a little easier.

Miovision’s Open City platform gets data from existing city infrastructure and the company’s own video cameras, and applies AI to create insights from it.

For example, the company’s Smart Intersection optimizes traffic light timing to keep city buses moving more, and sitting at red lights less. The more efficiently bus lines run, the more likely residents are to opt for them instead of cars. Fewer cars on the roads means less traffic, lower emissions and more room for those buses to run their routes efficiently.
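One common transit-priority tactic is extending a green phase just long enough for an approaching bus to clear the intersection. The sketch below illustrates the idea with invented timings; Miovision’s actual optimization logic isn’t public:

```python
# Hedged sketch of transit signal priority: extend the green when a bus
# is close enough to benefit. All timings here are made up.
def green_time(base_s, bus_eta_s, max_extension_s=15):
    """Extend the green just enough for an approaching bus to clear."""
    if bus_eta_s is not None and bus_eta_s <= max_extension_s:
        return base_s + bus_eta_s
    return base_s

print(green_time(30, bus_eta_s=8))     # -> 38 s of green for the bus
print(green_time(30, bus_eta_s=None))  # -> 30 s, no bus detected
```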

That’s a virtuous cycle on its own, but it gets better. Miovision’s business model hinges on selling quality information to its clients. The company uses deep neural networks to analyze the raw data they get from cameras and city infrastructure. The more raw data they collect, the better they can train their networks. Well-trained networks yield better data for clients. And so it goes, another virtuous cycle.

As of GTC, Miovision had begun preliminary work with the NVIDIA Metropolis platform for analyzing video streams, and is particularly excited about how the Jetson TX2 AI supercomputer on a module will aid its work. The company is planning small-scale trials of Open City running on the Jetson TX2 in cities across North America. And the TX2’s improved energy efficiency has it looking at future solutions that could run entirely on the sun’s rays.

“With TX2, we’re within striking distance of being solar-powered,” McBride said, citing Jetson TX2’s ability to run on only 7 watts of power. “We’re probably a generation or two away from realizing that, but when we do, solar will bring deployment costs way down for municipalities.”

Cycling the Green Waves

Intelligent traffic flows aren’t just for cars and buses. Economist, mathematician and computer scientist Edward Zimmerman spoke to GTC attendees about his ongoing work using deep learning to create “green waves” for bicyclists in Germany, and beyond.

A green wave is that wondrous phenomenon of cruising through one green light after another, as though the traffic gods were smiling directly upon you during your commute. As Zimmerman explained, green waves are more science than fiction, resulting from timing patterns and algorithms often devised to give priority to cars and buses over cyclists and pedestrians in high-traffic areas.
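Those timing patterns boil down to offsets: stagger each signal’s green so it begins just as a rider traveling at a design speed arrives. A back-of-the-envelope sketch, with invented distances and speed:

```python
# Back-of-the-envelope green wave: offset each signal's green phase by
# the time a cyclist needs to reach it. Distances and speed are made up.
CYCLIST_SPEED_MPS = 5.0                # ~18 km/h, an assumed design speed
intersections_m = [0, 220, 480, 700]   # distances along the corridor

offsets_s = [d / CYCLIST_SPEED_MPS for d in intersections_m]
for d, t in zip(intersections_m, offsets_s):
    print(f"signal at {d:4d} m -> green begins {t:5.1f} s after the first")
```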

Self-professed “data guy” Zimmerman is working with GESIG, a Germany-based maker of signaling equipment, to develop low-cost systems that create on-demand green waves for urban cyclists. The project aims to use the GPU power and energy efficiency of the Jetson TX1 platform to identify cyclists through neural networks fed real-time data from cameras installed in traffic lights. The networks then analyze the data to identify opportunities to bestow green waves upon the bicycle riders — ideally doing so in harmony with mass transit vehicles already riding their own waves.

Like Miovision’s McBride, Zimmerman sees the NVIDIA Metropolis platform and Jetson TX2’s improved energy efficiency as a step towards solar-powered intelligence — in this case, in the shape of smart traffic lights powered entirely by the sun’s rays. A variant of the project was tested in the city of Bonn, Germany, with plans in the works to spread the green waves across Europe and beyond.
