AI Goes Uptown: A Tour of Smart Cities Around the Globe 

There are as many ways to define a smart city as there are cities on the road to being smart.

From London and Singapore to Seat Pleasant, Maryland, they vary widely, yet most share some common characteristics.

Every city wants to be smart about being a great place to live. So, many embrace broad initiatives for connecting their citizens to the latest 5G and fiber optic networks, expanding digital literacy and services.

Most agree that a big part of being smart means using technology to make their cities more self-aware, automated and efficient.

That’s why a smart city is typically a kind of municipal Internet of Things — a network of cameras and sensors that can see, hear and even smell. These sensors, especially video cameras, generate massive amounts of data that can serve many civic purposes like helping traffic flow smoothly.

Cities around the globe are turning to AI to sift through that data in real time for actionable insights. And, increasingly, smart cities build realistic 3D simulations of themselves, digital twins to test out ideas of what they might look like in the future.

“We define a smart city as a place applying advanced technology to improve the quality of life for people who live in it,” said Sokwoo Rhee, who’s worked on more than 200 smart city projects in 25 countries as an associate director for cyber-physical systems innovation at the U.S. National Institute of Standards and Technology.

U.S., London Issue Smart Cities Guidebooks

At NIST, Rhee oversees work on a guide for building smart cities. Eventually it will include reports on issues and case studies in more than two dozen areas, from public safety to water management systems.

Across the pond, London describes its smart city efforts in a 60-page document that details many ambitious goals. Like smart cities from Dubai to San Jose in Silicon Valley, it’s a metro-sized work in progress.

An image from the Smart London guide.

“We are far from the ideal at the moment with a multitude of systems and a multitude of vendors making the smart city still somewhat complex and fragmented,” said Andrew Hudson-Smith, who is chair of digital urban systems at The Centre for Advanced Spatial Analysis at University College London and sits on a board that oversees London’s smart city efforts.

Living Labs for AI

In a way, smart cities are both kitchen sinks and living labs of technology.

They host everything from air-quality monitoring systems to repositories of data cleared for use in shared AI projects. The London Datastore, for example, already contains more than 700 publicly available datasets.

One market researcher tracks a basket of 13 broad areas that define a smart city, from smart streetlights to connected garbage cans. A smart-parking vendor in Stockholm took into account 24 factors — including the number of Wi-Fi hotspots and electric-vehicle charging stations — in its 2019 ranking of the world’s 100 smartest cities. (Its top five were all in Scandinavia.)

“It’s hard to pin it down to a limited set of technologies because everything finds its way into smart cities,” said Dominique Bonte, a managing director at market watcher ABI Research. Among popular use cases, he called out demand-response systems as “a huge application for AI because handling fluctuating demand for electricity and other services is a complex problem.”

Sweden’s EasyPark lists 24 factors that define a smart city.

Because it’s broad, it’s also big. Market watchers at Navigant Research expect the global market for smart-city gear to grow from $97.4 billion in annual revenue in 2019 to $265.4 billion by 2028 at a compound annual growth rate of 11.8 percent.

It’s still early days. In a January 2019 survey of nearly 40 U.S. local and state government managers, more than 80 percent thought a municipal Internet of Things would have a significant impact on their operations, but most were still in the planning phase and fewer than 10 percent had active projects.

Most smart cities are still under construction, according to a NIST survey.

“Smart cities mean many things to many people,” said Saurabh Jain, product manager of Metropolis, NVIDIA’s GPU software stack for vertical markets such as smart cities.

“Our focus is on building what we call the AI City with the real jobs that can be done today with deep learning, tapping into the massive video and sensor datasets cities generate,” he said.

For example, Verizon deployed video nodes on existing streetlights in Boston and Sacramento, using the NVIDIA Jetson TX1 to analyze and improve traffic flow, enhance pedestrian safety and optimize parking.
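
As a rough illustration of the kind of job such a video node performs, here’s a minimal Python sketch of counting vehicles that cross a virtual line, assuming an upstream detector already supplies tracked detections per frame. The data format and line position are illustrative assumptions, not Verizon’s actual pipeline.

```python
# Sketch: count vehicles crossing a virtual line, given per-frame tracked
# detections from an upstream object detector. The detection format and
# line position are illustrative assumptions, not Verizon's pipeline.

LINE_Y = 400  # virtual counting line, in pixels (assumed camera geometry)

def count_crossings(frames):
    """frames: iterable of per-frame lists of (track_id, center_y) tuples."""
    last_y = {}  # track_id -> previous vertical position
    count = 0
    for detections in frames:
        for track_id, center_y in detections:
            prev = last_y.get(track_id)
            # Count each tracked vehicle once, when it first crosses the line.
            if prev is not None and prev < LINE_Y <= center_y:
                count += 1
            last_y[track_id] = center_y
    return count

# Example: one tracked vehicle moving down through the line.
print(count_crossings([[(7, 390)], [(7, 398)], [(7, 405)]]))  # -> 1
```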

“Rollout is happening fast across the globe and cities are expanding their lighting infrastructure to become a smart-city platform … helping to create efficiency savings and a new variety of citizen services,” said David Tucker, head of product management in the Smart Communities Group at Verizon in a 2018 article.

Smart Streetlights for Smart Cities

Streetlights will be an important part of the furniture of tomorrow’s smart city.

So far, only a few hundred are outfitted with various mixes of sensors and Wi-Fi and cellular base stations. The big wave is yet to come as the estimated 360 million light poles around the world slowly upgrade to energy-saving LED lights.

A European take on a smart streetlight.

In a related effort, the city of Bellevue, Washington, tested a computer vision system from Microsoft Research to improve traffic safety and reduce congestion. Researchers at the University of Wollongong recently described similar work using NVIDIA Jetson TX2 modules to track the flow of vehicles and pedestrians in Liverpool, Australia.

Airports, retail stores and warehouses are already using smart cameras and AI to run operations more efficiently. They are defining a new class of edge computing networks that smart cities can leverage.

For example, Seattle-Tacoma International Airport (SEA) will roll out an AI system from startup Assaia that uses NVIDIA GPUs to speed the turnaround of flights.

“Video analytics is crucial in providing full visibility over turnaround activities as well as improving safety,” said an SEA manager in a May report.

Nashville, Zurich Explore the Simulated City

Some smart cities are building digital twins, 3D simulations that serve many purposes.

For example, both Zurich and Nashville will someday let citizens and city officials don goggles at virtual town halls to see simulated impacts of proposed developments.

“The more immersive and fun an experience, the more you increase engagement,” said Dominik Tarolli, director of smart cities at Esri, which is supplying simulation software that runs on NVIDIA GPUs for both cities.

Cities as far apart in geography and population as Singapore and Rennes, France, built digital twins using a service from Dassault Systèmes.

“We recently signed a partnership with Hong Kong and presented examples for a walkability study that required a 3D simulation of the city,” said Simon Huffeteau, a vice president working on smart cities for Dassault.

Europe Keeps an AI on Traffic

Many smart cities get started with traffic control. London uses digital signs to post speed limits that change to optimize traffic flow. It also uses license-plate recognition to charge tolls for entering a low-emission zone in the city center.

Cities in Belgium and France are considering similar systems.

“We think in the future cities will ban the most polluting vehicles to encourage people to use public transportation or buy electric vehicles,” said Bonte of ABI Research. “Singapore is testing autonomous shuttles on a 5.7-mile stretch of its streets,” he added.

Nearby, Jakarta uses a traffic-monitoring system from Nodeflux, a member of NVIDIA’s Inception program that nurtures AI startups. The software taps AI and the nearly 8,000 cameras already in place around Jakarta to recognize license plates of vehicles with unpaid taxes.

The system is one of more than 100 third-party applications that run on Metropolis, NVIDIA’s application framework for the Internet of Things.

Unsnarling Traffic in Israel and Kansas City

Traffic was the seminal app for a smart-city effort in Kansas City that started in 2015 with a $15 million smart streetcar. Today, residents can call up digital dashboards detailing current traffic conditions around town.

And in Israel, the city of Ashdod deployed AI software from viisights that helps it understand patterns in a traffic monitoring system powered by NVIDIA Metropolis, improving safety for citizens.

NVIDIA created the AI City Challenge to advance work on deep learning as a tool to unsnarl traffic. Now in its fourth year, it draws nearly 1,000 researchers competing in more than 300 teams that include members from multiple city and state traffic agencies.

The event spawned CityFlow, one of the world’s largest datasets for applying AI to traffic management. It consists of more than three hours of synchronized high-definition videos from 40 cameras at 10 intersections, creating 200,000 annotated bounding boxes around vehicles captured from different angles under various conditions.
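
To give a feel for the dataset’s shape, here’s a hedged sketch of how such annotations might be represented and queried in Python. The field names are illustrative assumptions, not CityFlow’s actual schema.

```python
from dataclasses import dataclass

# Sketch of a CityFlow-style annotation record. Field names are assumed
# for illustration; the actual dataset schema may differ.

@dataclass
class BoxAnnotation:
    camera_id: int      # one of the 40 cameras
    intersection: int   # one of the 10 intersections
    frame: int
    vehicle_id: int     # identity shared across cameras
    x: int
    y: int
    w: int
    h: int              # bounding box in pixels

def views_of_vehicle(annotations, vehicle_id):
    """Gather every camera view of one vehicle -- the core of the
    multi-camera re-identification task the dataset supports."""
    return [a for a in annotations if a.vehicle_id == vehicle_id]
```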

Drones to the Rescue in Maryland

You don’t have to be a big city with lots of money to be smart. Seat Pleasant, Maryland, a Washington, D.C., suburb of fewer than 5,000 people, launched a digital hub for city services in August 2017.

Since then, it has installed intelligent lighting, connected waste cans, home health monitors and video analytics to save money, improve traffic safety and reduce crime. It’s also become the first U.S. city to use drones for public safety, including plans for life-saving delivery of emergency medicines.

The idea got its start when Mayor Eugene Grant, searching for ways to recover from the 2008 economic downturn, attended an event on innovation villages.

“Seat Pleasant would like to be a voice for small cities in America where 80 percent have fewer than 10,000 residents,” said Grant. “Look at these cities as test beds of innovation … living labs,” he added.

Mayor Grant of Seat Pleasant aims to set an example of how small towns can become smart cities.

Rhee of NIST agrees. “I’m seeing a lot of projects embracing a broadening set of emerging technologies, making smart cities like incubation programs for new businesses like air taxis and autonomous vehicles that can benefit citizens,” he said, noting that even rural communities will get into the act.

Simulating a New Generation of Smart Cities

When the work is done, go to the movies. Hollywood might provide a picture of the next horizon in the same way it sparked some of the current work.

Esri’s tools are used to simulate cities for movies as well as the real world.

Flicks including Blade Runner 2049, Cars, Guardians of the Galaxy and Zootopia used a program called City Engine from startup Procedural that enables a rule-based approach to constructing simulated cities.

Their work caught the eye of Esri, which acquired the company and bundled its program with its ArcGIS Urban planning tool, now a staple for hundreds of real cities worldwide.

“Games and movies make people expect more immersive experiences, and that requires more computing,” said Tarolli, a co-founder of Procedural and now Esri’s point person on smart cities.


Non-Stop Shopping: Startup’s AI Lets Supermarkets Skip the Line

Eli Gorovici loves to take friends sailing on the Mediterranean. As the new pilot of Trigo, a Tel Aviv-based startup, he’s inviting the whole retail industry on a cruise to a future with AI.

“We aim to bring the e-commerce experience into the brick-and-mortar supermarket,” said Gorovici, who joined the company as its chief business officer in May.

The journey starts with the sort of shopping anyone who’s waited in a long checkout line has longed for.

You fill up your bags at the market and just walk out. Magically, the store knows what you bought, bills your account and sends you a digital receipt, all while preserving your privacy.

Trigo is building that experience and more. Its magic is an AI engine linked to cameras and a few weight-sensing shelves for small items a shopper’s hand might completely cover.

With these sensors, Trigo builds a 3D model of the store. Neural networks recognize products customers put in their bags.

When shoppers leave, the system sends the grocer the tally and a number it randomly associated with them when they chose to swipe their smartphone as they entered the store. The grocer matches the number with a shopper’s account, charges it and sends off a digital bill.
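
Here’s a hedged sketch of that anonymized hand-off: the in-store system sees only a random token and a tally, and only the grocer can map the token back to an account. Trigo hasn’t published this code, so the names and structure are illustrative assumptions.

```python
import secrets

# Sketch of Trigo-style anonymized billing. The store system tracks only a
# random token; the grocer separately maps tokens to accounts. Names and
# structure are illustrative assumptions, not Trigo's published design.

class StoreSession:
    """Runs inside the store: knows the token and the tally, not the shopper."""
    def __init__(self):
        self.token = secrets.token_hex(16)  # random ID issued at entry
        self.tally = {}                     # product -> quantity

    def add_item(self, product):            # called by the vision system
        self.tally[product] = self.tally.get(product, 0) + 1

class Grocer:
    """Runs at the grocer: maps tokens to accounts and issues the bill."""
    def __init__(self):
        self.token_to_account = {}

    def register_entry(self, token, account):  # shopper swiped phone at entry
        self.token_to_account[token] = account

    def bill(self, token, tally, prices):
        account = self.token_to_account.pop(token)  # one-time mapping
        total = sum(prices[p] * n for p, n in tally.items())
        return account, total

# Usage: entry swipe, shopping, exit billing.
session, grocer = StoreSession(), Grocer()
grocer.register_entry(session.token, account="alice-123")
session.add_item("pasta")
print(grocer.bill(session.token, session.tally, prices={"pasta": 2.5}))
```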

And that’s just the start.

An Online Experience in the Aisles

Shoppers get the same personalized recommendation systems they’re used to seeing online.

“If I’m standing in front of pasta, I may see on my handset a related coupon or a nice Italian recipe tailored for me,” said Gorovici. “There’s so much you can do with data, it’s mind blowing.”

The system lets stores fine-tune their inventory management systems in real time. Typical shrinkage rates from shoplifting or human error could sink to nearly zero.

AI Turns Images into Insights

Making magic is hard work. Trigo’s system gathers a petabyte of video data a day for an average-size supermarket.

It uses as many as four neural networks to process that data at mind-melting rates of up to a few hundred frames per second. (By contrast, your TV displays high-definition movies at 60 fps.)

Trigo used a dataset of up to 500,000 2D product images to train its neural networks. In daily operation, the system uses those models to run millions of inference tasks with help from NVIDIA TensorRT software.
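
To make that workload concrete, here’s a minimal product-classification inference loop in PyTorch. Trigo’s production pipeline runs TensorRT-optimized engines, so treat this as an illustrative sketch of the task, not the company’s actual code.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Minimal product-classification inference sketch. Trigo's production
# pipeline uses TensorRT-optimized engines; this PyTorch loop only
# illustrates the shape of the workload, not the company's code.

model = models.resnet50(pretrained=True).eval().cuda()
preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def classify(image_path):
    """Return (class index, confidence) for one cropped product image."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0).cuda()
    probs = model(x).softmax(dim=1)
    conf, idx = probs.max(dim=1)
    return idx.item(), conf.item()

# Usage (assumed file name): idx, conf = classify("shelf_crop.jpg")
```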

The AI work requires plenty of processing muscle. A supermarket outside London testing the Trigo system uses servers in its back room with 40-50 NVIDIA RTX GPUs. To boost efficiency, Trigo plans to deliver edge servers using NVIDIA T4 Tensor Core GPUs and join the NVIDIA Metropolis ecosystem starting next year.

Trigo got early access to the T4 GPUs thanks to its participation in NVIDIA Inception, a program that gives AI startups traction with tools, expertise and go-to-market support. The program also aims to introduce Trigo to NVIDIA’s retail partners in Europe.

In 2021, Trigo aims to move some of the GPU processing to Google, Microsoft and other cloud services, keeping some latency- or privacy-sensitive uses inside the store. It’s the kind of distributed architecture businesses are just starting to adopt, thanks in part to edge computing systems such as NVIDIA’s EGX platform.

Big Supermarkets Plug into AI

Tesco, the largest grocer in the U.K., has plans to open its first market using Trigo’s system. “We’ve vetted the main players in the industry and Trigo is the best by a mile,” said Tesco CEO Dave Lewis.

Israel’s largest grocer, Shufersal, also is piloting Trigo’s system, as are other retailers around the world.

Trigo was founded in 2018 by brothers Michael and Daniel Gabay, leveraging tech and operational experience from their time in elite units of the Israeli military.

Seeking his next big opportunity in his field of video technology, Gorovici asked friends who were venture capitalists for advice. “They said Trigo was the future of retail,” Gorovici said.

Like sailing in the aqua-blue Mediterranean, AI in retail is a compelling opportunity.

“It’s a trillion-dollar market — grocery stores are among the biggest employers in the world. They are all being digitized, and selling more online now given the pandemic, so maybe this next stage of digital innovation for retail will now move even faster,” he said.

It’s Not Pocket Science: Undergrads at Hackathon Create App to Evaluate At-Home Physical Therapy Exercises

The four undergrads met for the first time at the Stanford TreeHacks hackathon, became close friends, and developed an AI-powered app to help physical therapy patients ensure correct posture for their at-home exercises — all within 36 hours.

Back in February, just before the lockdown, Shachi Champaneri, Lilliana de Souza, Riley Howk and Deepa Marti happened to sit across from each other at the event’s introductory session and almost immediately decided to form a team for the competition.

Together, they created PocketPT, an app that lets users know whether they’re completing a physical therapy exercise with the correct posture and form. It captured two prizes against a crowded field, and inspired them to continue using AI to help others.

The app’s AI model uses the NVIDIA Jetson Nano developer kit to detect a user doing the tree pose, a position known to increase shoulder muscle strength and improve balance. The Jetson Nano performs image classification, so the model can tell whether the pose is being done correctly, based on the 100+ training images the team took of themselves. Then, the app provides feedback to the user, letting them know whether they should adjust their form.
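
Here’s a minimal sketch of that feedback step, assuming a binary classifier (correct vs. incorrect pose) fine-tuned on the team’s photos. The model choice and messages are illustrative, not PocketPT’s actual code.

```python
import torch
import torchvision.models as models

# Sketch of PocketPT's feedback step: a binary image classifier
# (correct vs. incorrect tree pose) fine-tuned on ~100 photos.
# Model choice and messages are illustrative assumptions.

model = models.mobilenet_v2(pretrained=True)
model.classifier[1] = torch.nn.Linear(model.last_channel, 2)  # 2 classes
# (in the real app, fine-tuned weights would be loaded here)
model.eval()

@torch.no_grad()
def pose_feedback(frame):
    """frame: preprocessed 1x3x224x224 image tensor from the camera."""
    logits = model(frame)
    correct = logits.argmax(dim=1).item() == 1  # class 1 = correct pose
    return "Nice form!" if correct else "Adjust your form: check your balance."

print(pose_feedback(torch.randn(1, 3, 224, 224)))
```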

“It can be taxing for patients to go to the physical therapist often, both financially and physically,” said Howk.

Continuing exercises at home is a crucial part of recovery for physical therapy patients, but doing them incorrectly can actually hinder progress, she explained.

Bringing the Idea to Life

In the months leading up to the hackathon, Howk, a rising senior at the University of Alabama, was interning in Los Angeles, where a yoga studio is virtually on every corner. She’d arrived at the competition with the idea to create some kind of yoga app, but it wasn’t until the team came across the NVIDIA table at the hackathon’s sponsor fair that they realized the idea’s potential to expand and help those in need.

“A demo of the Jetson Nano displayed how the system can track bodily movement down to the joint,” said Marti, a rising sophomore at UC Davis. “That’s what sparked the possibility of making a physical therapy app, rather than limiting it to yoga.”

None of the team members had prior experience working with deep learning and computer vision, so they faced the challenge of learning how to implement the model in such a short period of time.

“The NVIDIA mentors were really helpful,” said Champaneri, a rising senior at UC Davis. “They put together a tutorial guide on how to use the Nano that gave us the right footing and outline to follow and implement the idea.”

Over the first night of the hackathon, the team took NVIDIA’s Deep Learning Institute course on getting started with AI on the Jetson Nano and grasped the basics of deep learning. The next morning, they began hacking and training the model with images of themselves displaying correct versus incorrect exercise poses.

Just 36 hours after the idea first emerged, PocketPT was born.

Winning More Than Just Awards

The most exciting part of the weekend was finding out the team had made it to final pitches, according to Howk. They presented their project in front of a crowd of 500 and later found out that it had won the two prizes.

The hackathon attracted 197 projects. Competing against 65 other projects in the Medical Access category — many of which used cloud or other platforms — their project took home the category’s grand prize. It was also chosen as “Best Use of Jetson Hack,” beating 11 other groups that borrowed a Jetson for their projects.

But the quartet is looking to do more with their app than win awards.

Because of the fast-paced nature of the hackathon, the team was only able to fully implement one pose in PocketPT, with others still in the works. However, they are committed to expanding the product and promoting their overall mission of making physical therapy easily accessible to all.

While the hackathon took place just before the COVID outbreak in the U.S., the team highlighted how their project seems to be all the more relevant now.

“We didn’t even realize we were developing something that would become the future, which is telemedicine,” said de Souza, a rising senior at Northwestern University. “We were creating an at-home version of PT, which is very much needed right now. It’s definitely worth our time to continue working on this project.”

Read about other Jetson projects on the Jetson community projects page and get acquainted with other developers on the Jetson forum page.

Learn how to get started on a Jetson project of your own on the Jetson developers page.


Meet the Maker: ‘Smells Like ML’ Duo Nose Where It’s at with Machine Learning

Whether you want to know if your squats have the correct form, you’re at the mirror deciding how to dress and wondering what the weather’s like, or you keep losing track of your darts score, the Smells Like ML duo have you covered — in all senses.

This maker pair is using machine learning powered by NVIDIA Jetson’s edge AI capabilities to provide smart solutions to everyday problems.

About the Makers

Behind Smells Like ML are Terry Rodriguez and Salma Mayorquin, freelance machine learning consultants based in San Francisco. The business partners met as math majors in 2013 at UC Berkeley and have been working together ever since. The duo wondered how they could apply their knowledge in theoretical mathematics more generally. Robotics, IoT and computer vision projects, they found, are the answer.

Their Inspiration

The team name, Smells Like ML, stems from the idea that the nose is often used in literature to symbolize intuition. Rodriguez described their projects as “the ongoing process of building the intuition to understand and process data, and apply machine learning in ways that are helpful to everyday life.”

To create proofs of concept for their projects, they turned to the NVIDIA Jetson platform.

“The Jetson platform makes deploying machine learning applications really friendly even to those who don’t have much of a background in the area,” said Mayorquin.

Their Favorite Jetson Projects

Of Smells Like ML’s many projects using the Jetson platform, here are some highlights:

SpecMirror — Make eye contact with this AI-powered mirror, ask it a question and it searches the web to provide an answer. The smart assistant mirror can be easily integrated into your home. It processes sound and video input simultaneously, with the help of NVIDIA Jetson Xavier NX and NVIDIA DeepStream SDK.

ActionAI — Whether you’re squatting, spinning or loitering, this device classifies all kinds of human movement. It’s optimized by the Jetson Nano developer kit’s pose estimation inference capabilities. Upon detecting the type of movement someone displays, it annotates the results right back onto the video it was analyzing. ActionAI can be used to prototype any products that require human movement detection, such as a yoga app or an invisible keyboard. (A rough sketch of this pattern appears after the project list.)

Shoot Your Shot — Bring a little analytics to your dart game. This computer vision booth analyzes dart throws from multiple camera angles, and then scores, logs and even predicts the results. The application runs on a single Jetson Nano system on module.
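
The pattern underlying a project like ActionAI, per-frame pose keypoints fed to a small sequence classifier, can be sketched as follows. The architecture and label set are illustrative assumptions, not the project’s exact model.

```python
import torch
import torch.nn as nn

# Sketch of the ActionAI pattern: pose estimation produces per-frame
# keypoints; a small sequence model classifies a window of them into an
# action. Architecture and labels are illustrative assumptions.

ACTIONS = ["squatting", "spinning", "loitering"]  # assumed label set

class ActionClassifier(nn.Module):
    def __init__(self, n_keypoints=18, hidden=64, n_actions=len(ACTIONS)):
        super().__init__()
        # Each frame is n_keypoints (x, y) pairs flattened to one vector.
        self.lstm = nn.LSTM(n_keypoints * 2, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, keypoint_seq):        # (batch, frames, n_keypoints*2)
        _, (h, _) = self.lstm(keypoint_seq)
        return self.head(h[-1])             # logits over actions

# A 30-frame window of keypoints from the pose estimator (random stand-in).
window = torch.randn(1, 30, 36)
print(ACTIONS[ActionClassifier()(window).argmax(dim=1).item()])
```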

Where to Learn More 

In June, Smells Like ML won second place in NVIDIA’s AI at the Edge Hackster.io competition in the intelligent video analytics category.

For more sensory overload, check out other cool projects from Smells Like ML.

Anyone can get started on a Jetson project. Learn how on the Jetson developers page.


Smart Hospitals: DARVIS Automates PPE Checks, Hospital Inventories Amid COVID Crisis

After an exhausting 12-hour shift caring for patients, it’s hard to blame frontline workers for forgetting to sing “Happy Birthday” twice to guarantee a full 30 seconds of proper hand-washing.

Though at times tedious, the process of confirming detailed protective measures, such as the amount of time hospital employees spend sanitizing their hands, the cleaning status of a room, or the number of beds available, is crucial to preventing the spread of infectious diseases such as COVID-19.

DARVIS, an AI company founded in San Francisco in 2015, automates tasks like these to make hospitals “smarter” and give hospital employees more time for patient care, as well as peace of mind for their own protection.

The company developed a COVID-19 infection-control compliance model within a month of the pandemic breaking out. It provides a structure to ensure that workers are wearing personal protective equipment and complying with hygiene protocols amidst the hectic pace of hospital operations, compounded by the pandemic. The system can also provide information on the availability of beds and other equipment.

Short for “Data Analytics Real-World Visual Information System,” DARVIS uses the NVIDIA Clara Guardian application framework, employing machine learning and advanced computer vision.

The system analyzes information processed by optical sensors, which act as the “eyes and ears” of the machine, and alerts users if a bed is clean or not, or if a worker is missing a glove, among other contextual insights. All records are fully anonymized once feedback is provided.

“It’s all about compliance,” said Jan-Philipp Mohr, co-founder and CEO of the company. “It’s not about surveilling workers, but giving them feedback where they could harm themselves. It’s for both worker protection and patient security.”

DARVIS is a member of NVIDIA Inception, a program that helps startups working in AI and data science accelerate their product development, prototyping and deployment.

The Smarter the Hospital, the Better

Automation in hospitals has always been critical to saving lives and increasing efficiency, said Paul Warren, vice president of Product and team lead for AI at DARVIS. However, the need for smart hospitals is all the more urgent in the midst of the COVID-19 crisis, he said.

“We talk to the frontline caregivers, the doctors, the nurses, the transport staff and figure out what part of their jobs is particularly repetitive, frustrating or complicated,” said Warren. “And if we can help automate that in real time, they’re able to do their job a lot more efficiently, which is ultimately good for improving patient outcomes.”

DARVIS can help save money as well as lives. Even before the COVID crisis, the U.S. Centers for Disease Control and Prevention estimated the annual direct medical costs of infectious diseases in U.S. hospitals to be around $45 billion, a cost bound to rise due to the global pandemic. By optimizing infection control practices and minimizing the spread of infectious disease, smart hospitals can decrease this burden, Mohr said.

To save costs and time needed to train and deploy their own devices, DARVIS uses PyTorch and TensorFlow optimized on NGC, NVIDIA’s registry of GPU-accelerated software containers.

“NVIDIA engineering efforts to optimize deep learning solutions is a game-changer for us,” said Warren. “NGC makes structuring and maintaining the infrastructure environment very easy for us.”

DARVIS’s current centralized approach involves deep learning techniques running on NVIDIA GPU-powered servers and large workstations within the hospital’s data center.

As they onboard more users, the company plans to also use NVIDIA DeepStream SDK on edge AI embedded systems like NVIDIA Jetson Xavier NX to scale out and deploy at hospitals in a more decentralized manner, according to Mohr.

Same Technology, Numerous Possibilities

While DARVIS was initially focused on tracking beds and inventory, user feedback led to the expansion of its platform to different areas of need.

The same technology was developed to evaluate proper usage of PPE, to analyze worker compliance with infection control practices and to account for needed equipment in an operating room.

The team at DARVIS continues to research what’s possible with their device, as well as in the field of AI more generally, as they expand and deploy their product at hospitals around the world.


Learn more about NVIDIA’s healthcare-application framework on the NVIDIA Clara developers page.

Images courtesy of DARVIS, Inc.


COVID Caught on Camera: Startup’s Sensors Keep Hospitals Safe

Andrew Gostine’s startup aims to make hospitals more efficient, but when the coronavirus hit Chicago he pivoted to keeping them safer, too.

Gostine is a critical-care anesthesiologist at Northwestern Medicine’s 105-bed Lake Forest hospital, caring for 60 COVID-19 patients. He’s also the CEO of Whiteboard Coordinator Inc., a startup that had a network of 400 cameras and other sensors deployed across Northwestern’s 10 hospitals before the pandemic.

After the virus arrived, “the hospital said it was having a hard time screening people coming in for COVID-19 using conventional temperature probes, and asked if we could help,” he said.

Ten days later, the startup had thermal cameras linked to its network installed at 31 entrances to the hospitals. They detect about a dozen cases of fever in the 6,000 people coming through the doors each day.

The approach reduced lines waiting to get in. It also cut the number of people the hospital needed to post at each door from four to one.

Digital Window Protects Caregivers

About the same time, Northwestern asked Whiteboard for “a digital window” into COVID-19 rooms. The hospital wanted to limit nurses’ exposure to the virus and reduce the need for protective gear that’s now in high demand.

Whiteboard’s HIPAA-compliant thermal camera system can measure temperature to within +/- 0.3 °C on up to 36 people per video frame at a distance of nine meters.
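
As a worked example of how that spec translates into screening logic, here’s a hedged sketch that flags anyone whose reading could plausibly reach a fever threshold once the measurement error is taken into account. The 38.0 °C cutoff and the err-on-the-side-of-caution policy are illustrative assumptions, not Whiteboard’s published logic.

```python
# Sketch of fever screening with the camera's stated +/- 0.3 degC error.
# The 38.0 degC threshold and cautious flagging policy are illustrative
# assumptions, not Whiteboard's published logic.

FEVER_C = 38.0
TOLERANCE_C = 0.3

def flag_fevers(readings):
    """readings: dict of person_id -> measured skin temperature in degC.
    Flags anyone whose true temperature could reach the fever threshold."""
    return [pid for pid, t in readings.items() if t + TOLERANCE_C >= FEVER_C]

# One frame of simultaneous readings (the camera handles up to 36).
print(flag_fevers({"p1": 36.9, "p2": 37.8}))  # -> ['p2']
```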

So, the startup deployed another 400 cameras sporting night vision and microphones across the 10 hospitals. They use Whiteboard’s network of NVIDIA GPUs to transcode the video streams so they can be viewed securely on any hospital display.

“Nurses tell us the remote viewing is phenomenal. They report going into rooms less and consumption of protective gear is down. Our next challenge is using our computer-vision capabilities to track inventory of protective gear in real time,” he said.

The thermal cameras and patient monitors link to 36 NVIDIA RTX 2080 Ti GPUs. They handle transcoding and other algorithms to deliver low-latency feeds at 20 frames/second.

The current COVID-19 applications don’t require AI, but deep learning is a core part of Whiteboard’s system. “Eighty percent of what we do is computer vision, but we can integrate different sensors for different problems,” Gostine said.

A Sensory-Friendly Guardian for Hospitals

Whiteboard’s system also supports Bluetooth and RFID sensors for a range of patient monitoring, inventory tracking, resource scheduling and security apps. One hospital increased the use of its operating rooms 27 percent while reducing its costs, thanks to the startup’s OR scheduling system. It currently runs on an NVIDIA Jetson TX2 and is being upgraded to Quadro 4000 GPUs.

For use cases such as fever and mask detection, Whiteboard also plans to adopt NVIDIA Clara Guardian, an application framework that simplifies the deployment in hospitals of smart sensors with multi-mode AI. It is among 18 companies currently supporting Clara Guardian, software that runs on the NVIDIA EGX platform for AI computing on edge servers and embedded devices.

The pandemic spawned orders from 100 hospitals for Whiteboard’s thermal cameras. The startup currently has at least one of its systems installed in a total of 22 hospitals.

“Our biggest problem is sourcing cameras and other hardware we need because supply chains are in disarray,” Gostine said.

Seeking Better Surgery Outcomes with AI

Once the pandemic passes, the startup aims to employ AI to improve outcomes of surgical techniques used in the operating room. Long term, Whiteboard’s value will come from its expanding AI algorithms and datasets, trained on NVIDIA V100 Tensor Core GPUs in Microsoft’s Azure service, he said.

It’s a big opportunity. Accenture predicts that by 2026 the top 10 AI healthcare apps will generate a $150 billion market, spanning areas such as robotic surgery, virtual nursing assistants and automated workflows.

The startup’s mission was born of Gostine’s personal passion for making hospitals more modern and efficient.

“When I got to med school, I was frustrated because it seemed we lagged behind the internet era I grew up in,” he said.

From Faxes to the Future

After graduating, he got an MBA and spent some time consulting with healthcare startups before his internship. Work with more than a dozen companies led to a position with a VC firm during his medical residency.

“It was like night and day. The venture world was thinking 10 years ahead, and I realized healthcare was really behind — we’re still using pagers and fax machines,” he said.

“There’s so much paperwork to get through every day, just so we can think about our patients. What we are doing at Whiteboard really stems from the frustrations I felt in my practice,” he added.

A series of chance encounters led him to three AI, software and medical experts who formed Whiteboard, a member of NVIDIA’s Inception program, which gives startups access to new technologies and other resources.

Whiteboard’s first product aimed to streamline OR scheduling, then it expanded into patient monitoring. Now the coronavirus has taken its networks all the way to the hospital’s front door.


Meet Six Smart Robots at GTC 2020

The GPU Technology Conference is like a new Star Wars movie. There are always cool new robots scurrying about.

This year’s event in San Jose, March 22-26, is no exception, with at least six autonomous machines expected on the show floor. Like C-3PO and BB-8, each one is different.

Among what you’ll see at GTC 2020:

  • a robotic dog that sniffs out trouble in complex environments such as construction sites
  • a personal porter that lugs your stuff while it follows your footsteps
  • a man-sized bot that takes inventory quickly and accurately
  • a short, squat bot that hauls as much as 2,200 pounds across a warehouse
  • a delivery robot that navigates sidewalks to bring you dinner

“What I find interesting this year is just how much intelligence is being incorporated into autonomous machines to quickly ingest and act on data while navigating around unstructured environments that sometimes are not safe for humans,” said Amit Goel, senior product manager for autonomous machines at NVIDIA and robot wrangler for GTC 2020.

The ANYmal C from ANYbotics AG (pictured above), based in Zurich, is among the svelte navigators, detecting obstacles and finding its own shortest path forward thanks to its NVIDIA Jetson AGX Xavier module. The four-legged bot can slip through passages just 23.6 inches wide and climb stairs as steep as 45 degrees on a factory floor to inspect industrial equipment with its depth, wide-angle and thermal cameras.

The Gita personal robot will demo hauling your stuff at GTC 2020.

The folks behind the Vespa scooter will show Gita, a personal robot that can carry up to 40 pounds of your gear for four hours on a charge. It runs computer vision algorithms on a Jetson TX2 GPU to identify and follow its owner’s legs on any hard surfaces.

Say cheese. Bossa Nova Robotics will show its retail robot that can scan a 40-foot supermarket aisle in 60 seconds, capturing 4,000 images that it turns into inventory reports with help from its NVIDIA Turing architecture RTX GPU. Walmart plans to use the bots in at least 650 of its stores.

Mobile Industrial Robots A/S, based in Odense, Denmark, will give a talk at GTC about how it’s adding AI with Jetson Xavier to its pallet-toting robots to expand their work repertoire. On the show floor, it will demonstrate one of the robots from its MiR family that can carry payloads up to 2,200 pounds while using two 3D cameras and other sensors to navigate safely around people and objects in a warehouse.

From the other side of the globe, ExaWizards Inc. (Tokyo) will show its multimodal AI technology running on robotic arms from Japan’s Denso Robotics. It combines multiple sensors to learn human behaviors and perform jobs such as weighing a set portion of water.

Walmart will use the Bossa Nova robot to help automate inventory taking in at least 650 of its stores.

Rounding out the cast, the Serve delivery robot from Postmates will make a return engagement at GTC. It can carry 50 pounds of goods for 30 miles, using a Jetson AGX Xavier and Ouster lidar to navigate sidewalks like a polite pedestrian. In a talk, a Postmates engineer will share lessons learned in its early deployments.

Many of the latest systems reflect the trend toward collaborative robotics that NVIDIA CEO Jensen Huang demonstrated in a keynote in December. He showed ways humans can work with and teach robots directly, thanks to an updated NVIDIA Isaac developer kit that also speeds development by using AI and simulations to train robots, now part of NVIDIA’s end-to-end offering in robotics.

Just for fun, GTC also will host races of AI-powered DIY robotic cars, zipping around a track on the show floor at speeds approaching 50 mph. You can sign up here if you want to bring your own Jetson-powered robocar to the event.

We’re saving at least one surprise in robotics for those who attend. To get in on the action, register here for GTC 2020.


Intel Offers Computer-Powered ‘Echolocation’ Tech and Artificial Intelligence Research at CVPR 2019

Intel is presenting a series of research papers at the Conference on Computer Vision and Pattern Recognition. The research has the potential to radically transform future technologies across industries, from industrial applications to healthcare to education. Photo from Intel’s 2018 CES booth. (Credit: Walden Kirsch/Intel Corporation)

What’s New: Intel is presenting a series of research papers that further the development of computer vision and pattern recognition software and have the potential to radically transform future technologies across industries – from industrial applications to healthcare to education. The Intel team presented their research, which leverages artificial intelligence (AI) and ecosystem-sensing technologies to build complete digital pictures of physical spaces, during the Conference on Computer Vision and Pattern Recognition (CVPR), the premier annual computer vision event in Long Beach, California, from June 16-20.

“Intel believes that technology – including the applications we’re showcasing at CVPR – can unlock new experiences that can transform the way we tackle problems across industries, from education to medicine to manufacturing. With advancements in computer vision technology, we can program our devices to help us identify hidden objects or even enable our machines to teach human behavioral norms.”
–Dr. Rich Uhlig, Intel Senior Fellow and managing director of Intel Labs

Some of the Intel research presented this week includes:

Seeing Around Corners with Sound

Title: Acoustic Non-Line-of-Sight Imaging by David B. Lindell (Intel Labs), Gordon Wetzstein (Stanford University), Vladlen Koltun (Intel Labs)

Why It Matters: In this paper, Intel demonstrates the ability to construct digital images and see around corners using acoustic echoes. Non-line-of-sight (NLOS) imaging technology enables unprecedented capabilities for applications including robotics and machine vision, remote sensing, autonomous vehicle navigation and medical imaging. This acoustic method can reconstruct hidden objects using inexpensive off-the-shelf hardware at longer distances with shorter exposure times compared with the leading alternative NLOS imaging technologies. In this solution, Intel demonstrates how a system of speakers can emit sound waves and leverage microphones that capture the timing of the returning echoes to inform reconstruction algorithms – inspired by seismic imaging – to build a digital picture of a physical object that is hidden from view. Watch the demonstration.

Abstract: Intel demonstrates a new approach to seeing around corners using acoustic echoes. The solution is orders of magnitude less expensive than alternative non-line-of-sight imaging technologies, which are based on optical imaging. The new technology is able to see farther and faster around corners than state-of-the-art optical methods.
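
The relation at the heart of the method is plain time-of-flight: an echo arriving t seconds after emission traveled a round trip of the speed of sound times t, placing the reflector at half that path length. Here’s a minimal sketch of that step; the paper’s full seismic-style reconstruction is far more involved.

```python
# Sketch of the time-of-flight relation underlying acoustic NLOS imaging:
# an echo arriving t seconds after emission traveled a round trip of
# speed_of_sound * t, so the reflector lies at half that path length.
# The paper's full seismic-style reconstruction is far more involved.

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degC

def echo_distance(arrival_time_s):
    """One-way distance to a reflector from an echo's round-trip time."""
    return SPEED_OF_SOUND * arrival_time_s / 2.0

# Echo timings from three microphones; each constrains the hidden object
# to lie on a surface of this radius around the speaker/mic pair.
for t in (0.012, 0.015, 0.019):
    print(f"{t * 1000:.0f} ms -> {echo_distance(t):.2f} m")
```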

Using ‘Applied Knowledge’ to Train Deep Neural Networks

Title: Deeply-supervised Knowledge Synergy for Advancing the Training of Deep Convolutional Neural Networks by Dawei Sun (Intel Labs), Anbang Yao (Intel Labs), Aojun Zhou (Intel Labs), Hao Zhao (Intel Labs)

Why It Matters: AI applications including facial recognition, image classification, object detection and semantic image segmentation leverage technologies inspired by biological neural structures, Deep Convolutional Neural Networks (CNNs), to process information and efficiently find answers. However, leading CNNs are a challenge to train, requiring a large number of hierarchically stacked parameters to operate, and the more complex they become, the longer they take to train and the more energy they consume. In this paper, Intel researchers present a new training scheme, called Deeply-supervised Knowledge Synergy, that creates “knowledge synergies” and essentially enables a CNN to transfer what it has learned across the layers of the network, improving training accuracy, noisy-data management and data recognition.

Abstract: Intel researchers present a novel training scheme, coined Deeply-supervised Knowledge Synergy (DKS), which can learn prevalent CNNs with much better performance against the currently mainstream scheme. In a sharp contrast to existing knowledge transfer designs dedicated to different CNN models, DKS shapes a new concept of knowledge transfer across different layers of a CNN. Through extensive experiments on public benchmarks, the models trained with DKS show much better performance compared with state-of-the-art training schemes.
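
For the general flavor, here’s a loose PyTorch sketch of deep supervision with pairwise knowledge transfer between classifier heads, following the paper’s description at a high level. The layer choices, temperature and loss weight are illustrative assumptions, not the exact DKS recipe.

```python
import torch.nn as nn
import torch.nn.functional as F

# Loose sketch of DKS-style training: auxiliary classifier heads on
# intermediate layers, trained on labels AND on pairwise distillation
# losses so knowledge flows between layers. Layer choices, temperature
# and loss weight are illustrative assumptions, not the paper's setup.

class DeeplySupervisedNet(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.aux_head = nn.Linear(32, n_classes)   # supervises block1
        self.main_head = nn.Linear(64, n_classes)  # final classifier

    def forward(self, x):
        f1 = self.block1(x)
        f2 = self.block2(f1)
        aux = self.aux_head(self.pool(f1).flatten(1))
        out = self.main_head(self.pool(f2).flatten(1))
        return aux, out

def dks_loss(aux, out, labels, temperature=4.0, synergy_weight=1.0):
    # Each head learns from the labels...
    supervised = F.cross_entropy(aux, labels) + F.cross_entropy(out, labels)
    # ...and the heads exchange softened predictions (pairwise distillation).
    p_aux = F.log_softmax(aux / temperature, dim=1)
    p_out = F.softmax(out / temperature, dim=1)
    synergy = F.kl_div(p_aux, p_out.detach(), reduction="batchmean")
    return supervised + synergy_weight * synergy
```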

Delivering Formative Feedback for Autistic Children’s Behavioral Therapy

Title: Interpretable Machine Learning for Generating Semantically Meaningful Formative Feedback by Nese Alyuz (Intel Labs) and Tevfik Metin Sezgin (Koc University)

Why It Matters: We express our emotional state through a range of expressive modalities, such as facial expressions, vocal cues or body gestures. However, children on the autism spectrum experience difficulties in expressing and recognizing emotions with the accuracy of their neurotypical peers. Research shows that children on the autism spectrum can be trained to recognize and express emotions if they are given supportive and constructive feedback. In particular, providing formative feedback, e.g., feedback given by an expert describing how they need to modify their behavior to improve their expressiveness, has been found valuable in rehabilitation. Unfortunately, generating such formative feedback requires constant supervision of an expert who assesses each instance of emotional display. In this paper, an interpretable machine learning framework is demonstrated to provide the foundation for a system that monitors emotional inputs and generates formative recommendations to modify human behavior in order to achieve the appropriate expressive display.

Abstract: In this paper, a system is introduced for automatic formative assessment integrated into an automatic emotion recognition setup. The system is built on an interpretable machine learning framework that identifies a behavior that needs to be modified to achieve a desired expressive display. We report experiments conducted on a children’s voice data set with expression variations, showing that the proposed mechanism generates formative feedback aligned with the expectations reported from a clinical perspective.

The First Large-Scale Benchmark for 3D Object Understanding

Title: PartNet: A Large-Scale Benchmark for Fine-grained and Hierarchical Part-level 3D Object Understanding by Kaichun Mo (Stanford University), Shilin Zhu (University of California San Diego), Angel X. Chang (Simon Fraser University), Li Yi (Stanford University), Subarna Tripathi (Intel AI Lab), Leonidas J. Guibas (Stanford University), Hao Su (University of California San Diego)

Why It Matters: Identifying objects and their parts is critical to how humans understand and interact with the world. For example, using a stove requires not only identifying the stove itself, but also its subcomponents: its burners, control knobs and more. This same capability is essential to many AI vision, graphics and robotics applications, including predicting object functionality, human-object interaction, simulation, shape editing and shape generation. This wide range of applications has spurred great demand for large 3D datasets with part annotations. However, existing 3D shape datasets provide part annotations for only a relatively small number of object instances, or provide only coarse, non-hierarchical part annotations, making them unsuitable for applications that require detailed part-level understanding. In other words, if we want AI to be able to make us a cup of tea, large new datasets are needed to better support the training of visual AI applications to parse and understand objects with many small details or with important subcomponents. More information on PartNet here.

Abstract: In this paper, Intel presents PartNet: a consistent, large-scale dataset of 3D objects annotated with fine-grained, instance-level and hierarchical 3D part information. Using our dataset, we establish three benchmarking tasks for evaluating 3D part recognition, and we benchmark four state-of-the-art 3D deep learning algorithms against the criteria. We then introduce a novel method for part instance segmentation and demonstrate its superior performance over existing methods.
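
The hierarchical annotations can be pictured as a tree of named parts. Here’s a minimal sketch of that structure; the field names are illustrative, not PartNet’s actual file format.

```python
from dataclasses import dataclass, field

# Sketch of PartNet-style hierarchical part annotation: every object is a
# tree of named parts down to fine-grained leaves. Field names are
# illustrative assumptions, not PartNet's actual file format.

@dataclass
class Part:
    name: str
    children: list = field(default_factory=list)

    def leaves(self):
        """Fine-grained parts, e.g. the targets for instance segmentation."""
        if not self.children:
            return [self.name]
        return [leaf for child in self.children for leaf in child.leaves()]

stove = Part("stove", [
    Part("body", [Part("burner"), Part("burner")]),
    Part("controls", [Part("knob"), Part("knob")]),
])
print(stove.leaves())  # -> ['burner', 'burner', 'knob', 'knob']
```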

More Context: Intel at CVPR | Intel Labs (Press Kit)


Images: Ten Years of Intel at NRF: Retail Transformation Has Only Just Begun


» Download all images (ZIP, 24 MB)

Photo 1: Pensa’s autonomous drone system, utilizing in-store servers with Intel architecture to power the analytics, uses computer vision and artificial intelligence to inform retailers of what is on shelves and what is missing – across all stores, everywhere, at any point in time. Intel is at NRF 2019 from Jan. 13-15 at the Javits Convention Center in New York (Booth #3437). (Credit: Intel Corporation).

Photo 2: Using cameras and artificial intelligence, NCR helps the bottom line by providing technology for retailers to improve the shopping experience while reducing shrink. Intel is at NRF 2019 from Jan. 13-15 at the Javits Convention Center in New York (Booth #3437). (Credit: Intel Corporation).

Photo 3: Created by Mood Media, WestRock, In-Store Screen and Intel, Smart Digital Shelving allows brick-and-mortar retailers to harness data to design more engaging customer experiences, which result in greater traffic conversion and basket size. Intel is at NRF 2019 from Jan. 13-15 at the Javits Convention Center in New York (Booth #3437). (Credit: Intel Corporation).

Photo 4: Kendu’s Interactive Archway allows retailers to highlight hero products in a store and demonstrates how a customer can engage with new products to learn more. Intel is at NRF 2019 from Jan. 13-15 at the Javits Convention Center in New York (Booth #3437). (Credit: Intel Corporation).

Photo 5: JD.com’s Smart Vending JD Go removes friction and provides product recommendations to customers. Intel is at NRF 2019 from Jan. 13-15 at the Javits Convention Center in New York (Booth #3437). (Credit: Intel Corporation).

Photo 6: CloudPick uses automated door access, weighing sensors, cameras and computer vision to create a frictionless experience for shoppers, and an efficient store for retailers. Intel is at NRF 2019 from Jan. 13-15 at the Javits Convention Center in New York (Booth #3437). (Credit: Intel Corporation).

Photo 7: The ASICS Interactive Shoe Display uses a touchscreen totem to allow consumers to control the wall-sized display as they scroll through the ASICS catalog to find more information about each shoe. Intel is at NRF 2019 from Jan. 13-15 at the Javits Convention Center in New York (Booth #3437). (Credit: Intel Corporation).

More: Ten Years of Intel at NRF: Retail Transformation Has Only Just Begun


Ten Years of Intel at NRF: Retail Transformation Has Only Just Begun

By Joe Jensen

Ten years ago, Intel formed the Retail Solutions Division at an opportune moment – the industry was ripe for disruption. At the time, the average store could barely make use of the information at its disposal. Because systems couldn’t talk to each other, nothing happened in real time. Retailers were giving up billions of dollars due to poor inventory management. And with little in the way of shopper analytics, they couldn’t easily understand their customer or personalize their in-store experience. In many, if not most, cases, shoppers walked into the store better equipped with both information and technology than the sales associates who were supposed to be helping them.

A decade later, on our 10th anniversary of participating at NRF, the fundamentals of retail haven’t changed. Retailers that stay relevant have always focused on experience, quality and curation. Consumers are now expecting those fundamentals in a different way, and we’re here to help. At Intel, our job is to be a catalyst for our customers and partners making that customer journey seamless – whether it’s curating immersive and personalized shopping experiences, fine-tuning inventory and supply chains, or driving operational efficiencies – so they can reinvest back into the customer experience.

More: Internet of Things News

As with every NRF show, Intel is proud to showcase a dynamic array of leading-edge technologies and best-in-class retail solutions – a rich and representative sample of the pioneering work we’re doing with our strong partner ecosystem. This year, Intel will announce the Open Retail Initiative. It is the first internet of things (IoT) open-source initiative that focuses on enabling retailers to unlock the power of data and insights within their businesses to scale and address market challenges.


» Download all images (ZIP, 24 MB)

Retailers are getting serious about increasing inventory accuracy and attacking on-shelf availability problems. Unveiled for the first time at Intel’s NRF 2019 booth, Pensa will show its breakthrough retail inventory visibility system aimed squarely at the trillion-dollar retail shelf “blind spot” problem. Utilizing in-store servers with Intel architecture to power analytics, Pensa combines artificial intelligence (AI) and drones to autonomously scan shelves and then alert retailers and brands as to what is actually on shelves – across all stores, everywhere, at any point in time.

Some of the most exciting innovations in retail technology are the changes you can’t see – the technologies that are allowing back-of-house automation so valuable staffing hours can be put back into customer-facing activities. Rubikloud utilizes a machine learning platform for retail to automate the mass promotions and merchandising process – the most expensive business process – delivering an optimal mix of promotion mechanics and accurate forecasts to reduce stock-outs and increase revenue.

AI is reinventing the retail landscape. No longer confined to the data center, AI has made its way into the store itself where retailers are using machine learning to streamline operations, improve supply chain and inventory management, fuel immersive experiences, and inform precision marketing. MeldCX/AOpen is the first AI-enabled, computer vision bulk product scale and labeling system powered by multiple Intel architectures, including Intel® Movidius™ Vision Processing Units. It allows bulk items to be purchased and determines the type and cost of an item, without a barcode – removing friction from the shopping experience by determining bag contents to save time, money and hassle.

Intel’s Open Retail Initiative removes barriers between innovators by connecting technologies and data through common, open-source frameworks. It promotes a free exchange of ideas within the retail industry to drive creative and technological advancement. Through collaborations within the EdgeX Foundry alliance and our ecosystem partners, starting with Canonical, Dell, Envirosell, HP, JD.com, JDA, Petrosoft, RetailNext, SAS, Shekel Brainweigh, SUSE, Toshiba Global Commerce Solutions, Verifone and VMware, Intel is removing barriers to technology adoption.

As Intel celebrates 10 years as a catalyst and key partner in building a bright future for retail, there’s nothing more exciting than imagining what’s ahead. The era of retail transformation has only just begun. Here’s to the next 10 years, and many more to come.

Joe Jensen is vice president of the Internet of Things Group and general manager of the Retail Solutions Division at Intel Corporation.

Intel at 2019 NRF: If you are interested in learning more and checking out our partner solutions and demos, visit our booth, #3437, or check out our demo fact sheet.
