AI Goes Uptown: A Tour of Smart Cities Around the Globe 

There are as many ways to define a smart city as there are cities on the road to being smart.

From London and Singapore to Seat Pleasant, Maryland, they vary widely, but most share some common characteristics.

Every city wants to be smart about being a great place to live. So, many embrace broad initiatives for connecting their citizens to the latest 5G and fiber optic networks, expanding digital literacy and services.

Most agree that a big part of being smart means using technology to make their cities more self-aware, automated and efficient.

That’s why a smart city is typically a kind of municipal Internet of Things — a network of cameras and sensors that can see, hear and even smell. These sensors, especially video cameras, generate massive amounts of data that can serve many civic purposes like helping traffic flow smoothly.

Cities around the globe are turning to AI to sift through that data in real time for actionable insights. And, increasingly, smart cities build realistic 3D simulations of themselves, digital twins to test out ideas of what they might look like in the future.

“We define a smart city as a place applying advanced technology to improve the quality of life for people who live in it,” said Sokwoo Rhee, who’s worked on more than 200 smart city projects in 25 countries as an associate director for cyber-physical systems innovation at the U.S. National Institute of Standards and Technology.

U.S., London Issue Smart Cities Guidebooks

At NIST, Rhee oversees work on a guide for building smart cities. Eventually it will include reports on issues and case studies in more than two dozen areas from public safety to water management systems.

Across the pond, London describes its smart city efforts in a 60-page document that details many ambitious goals. Like smart cities from Dubai to San Jose in Silicon Valley, it’s a metro-sized work in progress.

An image from the Smart London guide.

“We are far from the ideal at the moment with a multitude of systems and a multitude of vendors making the smart city still somewhat complex and fragmented,” said Andrew Hudson-Smith, who is chair of digital urban systems at The Centre for Advanced Spatial Analysis at University College London and sits on a board that oversees London’s smart city efforts.

Living Labs for AI

In a way, smart cities are both kitchen sinks and living labs of technology.

They host everything from air-quality monitoring systems to repositories of data cleared for use in shared AI projects. The London Datastore, for example, already contains more than 700 publicly available datasets.

One market researcher tracks a basket of 13 broad areas that define a smart city, from smart streetlights to connected garbage cans. A smart-parking vendor in Stockholm took into account 24 factors — including the number of Wi-Fi hotspots and electric-vehicle charging stations — in its 2019 ranking of the world’s 100 smartest cities. (Its top five were all in Scandinavia.)

“It’s hard to pin it down to a limited set of technologies because everything finds its way into smart cities,” said Dominique Bonte, a managing director at market watcher ABI Research. Among popular use cases, he called out demand-response systems as “a huge application for AI because handling fluctuating demand for electricity and other services is a complex problem.”

Sweden’s EasyPark lists 24 factors that define a smart city.

Because the category is so broad, the market is also big. Market watchers at Navigant Research expect the global market for smart-city gear to grow from $97.4 billion in annual revenue in 2019 to $265.4 billion by 2028, a compound annual growth rate of 11.8 percent.

It’s still early days. In a January 2019 survey of nearly 40 U.S. local and state government managers, more than 80 percent thought a municipal Internet of Things would have a significant impact on their operations, but most were still in a planning phase and fewer than 10 percent had active projects.

Most smart cities are still under construction, according to a NIST survey.

“Smart cities mean many things to many people,” said Saurabh Jain, product manager of Metropolis, NVIDIA’s GPU software stack for vertical markets such as smart cities.

“Our focus is on building what we call the AI City with the real jobs that can be done today with deep learning, tapping into the massive video and sensor datasets cities generate,” he said.

For example, Verizon deployed video nodes using the NVIDIA Jetson TX1 on existing streetlights in Boston and Sacramento to analyze and improve traffic flow, enhance pedestrian safety and optimize parking.

“Rollout is happening fast across the globe and cities are expanding their lighting infrastructure to become a smart-city platform … helping to create efficiency savings and a new variety of citizen services,” said David Tucker, head of product management in the Smart Communities Group at Verizon in a 2018 article.

Smart Streetlights for Smart Cities

Streetlights will be an important part of the furniture of tomorrow’s smart city.

So far, only a few hundred are outfitted with various mixes of sensors and Wi-Fi and cellular base stations. The big wave is yet to come as the estimated 360 million lamp posts around the world slowly upgrade to energy-saving LED lights.

A European take on a smart streetlight.

In a related effort, the city of Bellevue, Washington, tested a computer vision system from Microsoft Research to improve traffic safety and reduce congestion. Researchers at the University of Wollongong recently described similar work using NVIDIA Jetson TX2 modules to track the flow of vehicles and pedestrians in Liverpool, Australia.

Airports, retail stores and warehouses are already using smart cameras and AI to run operations more efficiently. They are defining a new class of edge computing networks that smart cities can leverage.

For example, Seattle-Tacoma International Airport (SEA) will roll out an AI system from startup Assaia that uses NVIDIA GPUs to speed the time to turn around flights.

“Video analytics is crucial in providing full visibility over turnaround activities as well as improving safety,” said an SEA manager in a May report.

Nashville, Zurich Explore the Simulated City

Some smart cities are building digital twins, 3D simulations that serve many purposes.

For example, both Zurich and Nashville will someday let citizens and city officials don goggles at virtual town halls to see simulated impacts of proposed developments.

“The more immersive and fun an experience, the more you increase engagement,” said Dominik Tarolli, director of smart cities at Esri, which is supplying simulation software that runs on NVIDIA GPUs for both cities.

Cities as far apart in geography and population as Singapore and Rennes, France, built digital twins using a service from Dassault Systèmes.

“We recently signed a partnership with Hong Kong and presented examples for a walkability study that required a 3D simulation of the city,” said Simon Huffeteau, a vice president working on smart cities for Dassault.

Europe Keeps an AI on Traffic

Many smart cities get started with traffic control. London uses digital signs to post speed limits that change to optimize traffic flow. It also uses license-plate recognition to charge tolls for entering a low-emission zone in the city center.

Cities in Belgium and France are considering similar systems.

“We think in the future cities will ban the most polluting vehicles to encourage people to use public transportation or buy electric vehicles,” said Bonte of ABI Research. “Singapore is testing autonomous shuttles on a 5.7-mile stretch of its streets,” he added.

Nearby, Jakarta uses a traffic-monitoring system from Nodeflux, a member of NVIDIA’s Inception program that nurtures AI startups. The software taps AI and the nearly 8,000 cameras already in place around Jakarta to recognize license plates of vehicles with unpaid taxes.

The system is one of more than 100 third-party applications that run on Metropolis, NVIDIA’s application framework for the Internet of Things.

Unsnarling Traffic in Israel and Kansas City

Traffic was the seminal app for a smart-city effort in Kansas City that started in 2015 with a $15 million smart streetcar. Today, residents can call up digital dashboards detailing current traffic conditions around town.

And in Israel, the city of Ashdod deployed AI software from viisights that helps spot behavior patterns in a traffic-monitoring system powered by NVIDIA Metropolis, improving safety for citizens.

NVIDIA created the AI City Challenge to advance work on deep learning as a tool to unsnarl traffic. Now in its fourth year, it draws nearly 1,000 researchers competing in more than 300 teams that include members from multiple city and state traffic agencies.

The event spawned CityFlow, one of the world’s largest datasets for applying AI to traffic management. It consists of more than three hours of synchronized high-definition videos from 40 cameras at 10 intersections, creating 200,000 annotated bounding boxes around vehicles captured from different angles under various conditions.
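Annotation formats vary, but a dataset like CityFlow reduces to camera-synchronized frames with per-vehicle boxes that can be matched across views. A minimal sketch in Python of how such records might be represented and queried; the field names here are illustrative, not CityFlow’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class BoxAnnotation:
    camera_id: int   # which of the 40 cameras captured this frame
    frame: int       # frame index within that camera's video
    vehicle_id: int  # persistent ID, so a vehicle can be re-identified across cameras
    x: int; y: int; w: int; h: int  # bounding box in pixels

def boxes_for_vehicle(annotations, vehicle_id):
    """Collect one vehicle's boxes across all cameras, e.g. for re-identification work."""
    return [a for a in annotations if a.vehicle_id == vehicle_id]

# Toy example: vehicle 42 seen from two different intersections.
anns = [
    BoxAnnotation(0, 10, 42, 100, 200, 80, 40),
    BoxAnnotation(3, 12, 42, 300, 150, 90, 45),
    BoxAnnotation(0, 10, 7, 500, 220, 60, 30),
]
print(len(boxes_for_vehicle(anns, 42)))  # 2
```

The per-vehicle ID is what makes cross-camera tracking tasks possible on top of simple per-frame detection.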

Drones to the Rescue in Maryland

You don’t have to be a big city with lots of money to be smart. Seat Pleasant, Maryland, a Washington, D.C., suburb of fewer than 5,000 people, launched a digital hub for city services in August 2017.

Since then, it has installed intelligent lighting, connected waste cans, home health monitors and video analytics to save money, improve traffic safety and reduce crime. It has also become the first U.S. city to use drones for public safety, including plans for life-saving delivery of emergency medicines.

The idea got its start when Mayor Eugene Grant, searching for ways to recover from the 2008 economic downturn, attended an event on innovation villages.

“Seat Pleasant would like to be a voice for small cities in America, where 80 percent have fewer than 10,000 residents,” said Grant. “Look at these cities as test beds of innovation … living labs,” he added.

Mayor Grant of Seat Pleasant aims to set an example of how small towns can become smart cities.

Rhee of NIST agrees. “I’m seeing a lot of projects embracing a broadening set of emerging technologies, making smart cities like incubation programs for new businesses like air taxis and autonomous vehicles that can benefit citizens,” he said, noting that even rural communities will get into the act.

Simulating a New Generation of Smart Cities

When the work is done, go to the movies. Hollywood might provide a picture of the next horizon in the same way it sparked some of the current work.

Esri’s tools are used to simulate cities for movies as well as the real world.

Flicks including Blade Runner 2049, Cars, Guardians of the Galaxy and Zootopia used CityEngine, a program from startup Procedural that enables a rule-based approach to constructing simulated cities.

Their work caught the eye of Esri, which acquired the company and bundled its program with its ArcGIS Urban planning tool, now a staple for hundreds of real cities worldwide.

“Games and movies make people expect more immersive experiences, and that requires more computing,” said Tarolli, a co-founder of Procedural and now Esri’s point person on smart cities.

The post AI Goes Uptown: A Tour of Smart Cities Around the Globe appeared first on The Official NVIDIA Blog.

Non-Stop Shopping: Startup’s AI Lets Supermarkets Skip the Line

Eli Gorovici loves to take friends sailing on the Mediterranean. As the new pilot of Trigo, a Tel Aviv-based startup, he’s inviting the whole retail industry on a cruise to a future with AI.

“We aim to bring the e-commerce experience into the brick-and-mortar supermarket,” said Gorovici, who joined the company as its chief business officer in May.

The journey starts with the sort of shopping anyone who’s waited in a long checkout line has longed for.

You fill up your bags at the market and just walk out. Magically, the store knows what you bought, bills your account and sends you a digital receipt, all while preserving your privacy.

Trigo is building that experience and more. Its magic is an AI engine linked to cameras and a few weighted shelves for small items a shopper’s hand might completely cover.

With these sensors, Trigo builds a 3D model of the store. Neural networks recognize products customers put in their bags.

When shoppers leave, the system sends the grocer the tally and a number it randomly associated with them when they chose to swipe their smartphone as they entered the store. The grocer matches the number with a shopper’s account, charges it and sends off a digital bill.
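The pseudonymous billing flow described above, where a random token stands in for the shopper inside the store, can be sketched roughly as follows. The class and method names are hypothetical; Trigo hasn’t published its actual protocol:

```python
import secrets

class Store:
    """Hypothetical sketch of a pseudonymous checkout flow."""
    def __init__(self):
        self.token_to_account = {}  # held by the grocer, not the vision system
        self.token_to_basket = {}   # all the vision system ever sees

    def enter(self, account_id):
        # Shopper swipes a phone at the gate; a random token stands in
        # for their identity everywhere inside the store.
        token = secrets.token_hex(16)
        self.token_to_account[token] = account_id
        self.token_to_basket[token] = []
        return token

    def observe_pick(self, token, product, price):
        # The cameras and shelves attribute picks to the token only.
        self.token_to_basket[token].append((product, price))

    def exit(self, token):
        # On exit, the grocer matches the token to an account and bills the tally.
        account = self.token_to_account.pop(token)
        basket = self.token_to_basket.pop(token)
        total = sum(price for _, price in basket)
        return account, total

store = Store()
t = store.enter("alice@example.com")
store.observe_pick(t, "pasta", 2.50)
store.observe_pick(t, "sauce", 3.25)
print(store.exit(t))  # ('alice@example.com', 5.75)
```

The key property is the separation of concerns: the vision pipeline never handles account identities, only the disposable token.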

And that’s just the start.

An Online Experience in the Aisles

Shoppers get the same personalized recommendation systems they’re used to seeing online.

“If I’m standing in front of pasta, I may see on my handset a related coupon or a nice Italian recipe tailored for me,” said Gorovici. “There’s so much you can do with data, it’s mind blowing.”

The system lets stores fine-tune their inventory management systems in real time. Typical shrinkage rates from shoplifting or human error could sink to nearly zero.

AI Turns Images into Insights

Making magic is hard work. Trigo’s system gathers a petabyte of video data a day for an average-size supermarket.

It uses as many as four neural networks to process that data at mind-melting rates of up to a few hundred frames per second. (By contrast, your TV displays high-definition movies at 60 fps.)

Trigo used a dataset of up to 500,000 2D product images to train its neural networks. In daily operation, the system uses those models to run millions of inference tasks with help from NVIDIA TensorRT software.

The AI work requires plenty of processing muscle. A supermarket outside London testing the Trigo system uses servers in its back room with 40-50 NVIDIA RTX GPUs. To boost efficiency, Trigo plans to deliver edge servers using NVIDIA T4 Tensor Core GPUs and join the NVIDIA Metropolis ecosystem starting next year.

Trigo got early access to the T4 GPUs thanks to its participation in NVIDIA Inception, a program that gives AI startups traction with tools, expertise and go-to-market support. The program also aims to introduce Trigo to NVIDIA’s retail partners in Europe.

In 2021, Trigo aims to move some of the GPU processing to Google, Microsoft and other cloud services, keeping some latency- or privacy-sensitive uses inside the store. It’s the kind of distributed architecture businesses are just starting to adopt, thanks in part to edge computing systems such as NVIDIA’s EGX platform.

Big Supermarkets Plug into AI

Tesco, the largest grocer in the U.K., has plans to open its first market using Trigo’s system. “We’ve vetted the main players in the industry and Trigo is the best by a mile,” said Tesco CEO Dave Lewis.

Israel’s largest grocer, Shufersal, also is piloting Trigo’s system, as are other retailers around the world.

Trigo was founded in 2018 by brothers Michael and Daniel Gabay, leveraging tech and operational experience from their time in elite units of the Israeli military.

Seeking his next big opportunity in his field of video technology, Gorovici asked friends who were venture capitalists for advice. “They said Trigo was the future of retail,” Gorovici said.

Like sailing in the aqua-blue Mediterranean, AI in retail is a compelling opportunity.

“It’s a trillion-dollar market — grocery stores are among the biggest employers in the world. They are all being digitized, and selling more online now given the pandemic, so maybe this next stage of digital innovation for retail will now move even faster,” he said.

Intel RealSense Technology Selected by RightHand Robotics to Revolutionize Automated Order Fulfillment



Due to the global pandemic, this year’s e-commerce sales are on the rise. Growth in online retail has placed increased pressure on warehouses to keep up with higher volumes of orders and with social-distancing protocols that have restricted the number of staff allowed on-site. Massachusetts-based RightHand Robotics addresses these challenges with its RightPick2 robot, powered by the Intel® RealSense™ D415 Depth Camera. The RightPick2 is an autonomous robotic piece-picking solution and labor multiplier that allows for rapid order fulfillment with little to no human contact.

More: Intel RealSense (Press Kit)

The Intel RealSense D415 provides each RightHand robot with the ability to discern objects and their locations in a bin, while avoiding collisions when pulling them out. The camera also provides the data that helps RightHand Robotics improve its platform over time. Depth images from the Intel RealSense D415 gathered over millions of individual picks help RightHand learn the best way for the robot to approach different shapes and classes of items.

Aided by the RightPick2 robot, a single warehouse worker can now manage a fleet of robots that pick and place thousands of SKUs, instead of searching warehouse aisles by hand. Each robot significantly reduces lead times by fulfilling orders accurately at high speed, ultimately enabling businesses to give customers what they need.

The RightHand Robotics solution is designed to make warehouses safer for employees amid the pandemic and help facilities reopen while adhering to distancing guidelines. With more warehouses rapidly adopting the digital warehouse model, robotic process automation fueled by Intel RealSense technology provides a way to more efficiently fulfill the growing demand.


The post Intel RealSense Technology Selected by RightHand Robotics to Revolutionize Automated Order Fulfillment appeared first on Intel Newsroom.

Amped Up: HPC Centers Ride A100 GPUs to Accelerate Science

Six supercomputer centers around the world are among the first to adopt the NVIDIA Ampere architecture. They’ll use it to bring science into the exascale era in fields from astrophysics to virus microbiology.

The high performance computing centers scattered across the U.S. and Germany will use a total of nearly 13,000 A100 GPUs.

Together these GPUs pack more than 250 petaflops in peak performance for simulations that use 64-bit floating point math. For AI inference jobs that use mixed precision math and leverage the A100 GPU’s support for sparsity, they deliver a whopping 8.07 exaflops.
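Those aggregate figures follow from the A100’s published per-GPU peaks: roughly 19.5 teraflops for FP64 Tensor Core math and 624 teraflops for mixed precision with sparsity. A quick sanity check in Python (the GPU count is approximate, since the article says only “nearly 13,000”):

```python
gpus = 12_900                  # "nearly 13,000" A100s across the six centers
fp64_tflops = 19.5             # A100 peak FP64 Tensor Core throughput, per GPU
sparse_tflops = 624.0          # A100 peak mixed precision with sparsity, per GPU

fp64_pflops = gpus * fp64_tflops / 1_000          # teraflops to petaflops
sparse_eflops = gpus * sparse_tflops / 1_000_000  # teraflops to exaflops

print(fp64_pflops)    # 251.55, i.e. "more than 250 petaflops"
print(sparse_eflops)  # 8.0496, close to the quoted 8.07 exaflops
```
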

Researchers will harness that horsepower to drive science forward in many dimensions. They plan to simulate larger models, train and deploy deeper networks, and pioneer an emerging hybrid field of AI-assisted simulations.

Argonne deployed one of the first NVIDIA DGX A100 systems. Photo courtesy of Argonne National Laboratory.

For example, Argonne’s researchers will seek a COVID-19 vaccine by simulating a key part of a protein spike on a coronavirus that’s made up of as many as 1.5 million atoms.

The molecule “is a beast, but the A100 lets us accelerate simulations of these subsystems so we can understand how this virus infects humans,” said Arvind Ramanathan, a computational biologist at Argonne National Laboratory, which will use a cluster of 24 NVIDIA DGX A100 systems.

In other efforts, “we will see substantial improvement in drug discovery by scanning millions and billions of drugs at a time. And we may see things we could never see before, like how two proteins bind to one another,” he said.

A100 Puts AI in the Scientific Loop

“Much of this work is hard to simulate on a computer, so we use AI to intelligently guide where and when we will sample next,” said Ramanathan.

It’s part of an emerging trend of scientists using AI to steer simulations. The GPUs then will speed up the time to process biological samples by “at least two orders of magnitude,” he added.

Across the country, the National Energy Research Scientific Computing Center (NERSC) is poised to become the largest of the first wave of A100 users. The center in Berkeley, Calif., is working with Hewlett Packard Enterprise to deploy 6,200 of the GPUs in Perlmutter, its pre-exascale system.

“Across NERSC’s science and algorithmic areas, we have increased performance by up to 5x when comparing a single V100 GPU to a KNL CPU node on our current-generation Cori system, and we expect even greater gains with the A100 on Perlmutter,” said Sudip Dosanjh, NERSC’s director.

Exascale Computing Team Works on Simulations, AI

A team dedicated to exascale computing at NERSC has defined nearly 30 projects for Perlmutter that use large-scale simulations, data analytics or deep learning. Some projects blend HPC with AI, such as one using reinforcement learning to control light source experiments. Another employs generative models to reproduce expensive simulations at high-energy physics detectors.

Two of NERSC’s HPC applications already prototyped use of the A100 GPU’s double-precision Tensor Cores. They’re seeing significant increases in performance over previous generation Volta GPUs.

Software optimized for the 10,000-way parallelism of Perlmutter’s GPUs will be ready to run on future exascale systems, Christopher Daley, an HPC performance engineer at NERSC, said in a talk at GTC Digital. NERSC supports nearly a thousand scientific applications in areas such as astrophysics, Earth science, fusion energy and genomics.

“On Perlmutter, we need compilers that support all the programming models our users need and expect — MPI, OpenMP, OpenACC, CUDA and optimized math libraries. The NVIDIA HPC SDK checks all of those boxes,” said Nicholas Wright, NERSC’s chief architect.

German Effort to Map the Brain

AI will be the focus of some of the first applications for the A100 on a new 70-petaflops system designed by France’s Atos for the Jülich Supercomputing Center in western Germany.

One, called Deep Rain, aims to make fast, short-term weather predictions, complementing traditional systems that use large, relatively slow simulations of the atmosphere. Another project plans to construct an atlas of fibers in the human brain, assembled with deep learning from thousands of high-resolution 2D brain images.

The new A100 system at Jülich also will help researchers push the edges of understanding the strong forces binding quarks, the sub-atomic building blocks of matter. At the macro scale, a climate science project will model the Earth’s surface and subsurface water flow.

“Many of these applications are constrained by memory,” said Dirk Pleiter, a theoretical physicist who manages a research team in applications-oriented technology development at Jülich. “So, what is extremely interesting for us is the increased memory footprint and memory bandwidth of the A100,” he said.

The new GPU’s ability to accelerate double-precision math by up to 2.5x is another feature researchers are keen to harness. “I’m confident when people realize the opportunities of more compute performance, they will have a strong incentive to use GPUs,” he added.

Data-Hungry System Likes Fast NVLink

Some 230 miles south of Jülich, the Karlsruhe Institute of Technology (KIT) is partnering with Lenovo to build a new 17-petaflops system that will pack 740 A100 GPUs on an NVIDIA Mellanox 200 Gbit/s InfiniBand network. It will tackle grand challenges that include:

  • Atmospheric simulations at the kilometer scale for climate science
  • Research to fight COVID-19, including support for Folding@home
  • Explorations of particle physics beyond the Higgs boson for the Large Hadron Collider
  • Research on next-generation materials that could replace lithium-ion batteries
  • AI applications in robotics, language processing and renewable energy

“We focus on data-intensive simulations and AI workflows, so we appreciate the third-generation NVLink connecting the new GPUs,” said Martin Frank, director of KIT’s supercomputing center and a professor of computational science and math.

“We also look forward to the multi-instance GPU feature that effectively gives us up to 28 GPUs per node instead of four — that will greatly benefit many of our applications,” he added. (Each A100 can be partitioned into as many as seven independent GPU instances, so a four-GPU node appears as up to 28.)

Just outside Munich, the computer center for the Max Planck Institute is creating with Lenovo a system called Raven-GPU, powered by 768 NVIDIA A100 GPUs. It will support work in fields like astrophysics, biology, theoretical chemistry and advanced materials science. The research institute aims to have Raven-GPU installed by the end of the year and is taking requests now for support porting applications to the A100.

Indiana System Counters Cybersecurity Threats

Finally, Indiana University is building Big Red 200, a 6-petaflops system expected to become the fastest university-owned supercomputer in the U.S. It will use 256 A100 GPUs.

Announced in June, it’s among the first academic centers to adopt the Cray Shasta technology from Hewlett Packard Enterprise that others will use in future exascale systems.

Big Red 200 will apply AI to counter cybersecurity threats. It also will tackle grand challenges in genetics to help enable personalized healthcare as well as work in climate modeling, physics and astronomy.

Photo at top: Shyh Wang Hall at UC Berkeley will be the home of NERSC’s Perlmutter supercomputer.

The post Amped Up: HPC Centers Ride A100 GPUs to Accelerate Science appeared first on The Official NVIDIA Blog.

Intel and MIC Announce Scale to Serve Program to Rapidly Expand Remote ICUs to 100 US Hospitals



What’s New: As part of Intel’s $50 million pandemic response, Intel and Medical Informatics Corp. (MIC) today announced the Scale to Serve Program to help hospitals rapidly install and scale MIC’s Sickbay™ platform. The platform is designed to help hospitals quickly expand intensive care unit (ICU) bed capacity and care for the most critical patients more efficiently, while reducing COVID-19 exposure for critical care providers, whose work puts them at higher risk. Through the Scale to Serve Program, Intel has agreed to fund the implementation fees and MIC will waive the first 90 days of software subscription licensing fees for the first 100 hospitals that qualify.

“Intel technology has a role to help accelerate the core capabilities our medical community requires to combat COVID-19. This is why we’re committed to applying our technology to helping protect our front-line healthcare providers who are providing care for ICU patients by accelerating the access to virtual patient monitoring solutions. The solution improves the efficiency of ICU patient care from anywhere while protecting the health of caregivers on the front line of this crisis.”
–Lisa Spelman, Intel corporate vice president and general manager of the Xeon and Memory Group

Why It Matters: Hospitals are facing challenges due to the pandemic: from ramping ICU capacity and staffing the front lines to getting providers access to the data they need to intervene effectively. Meeting those needs is hard because patient data is currently tied to medical devices at the bedside and locked in proprietary formats that don’t easily integrate with each other.

The Sickbay platform helps address these issues by unlocking and unifying this disparate data from the bedside to enable flexible, scalable remote monitoring from any web-enabled device, creating clinical distance and helping protect providers from exposure to COVID-19. The solution allows hospitals to turn any acute care bed into a monitored ICU bed in minutes and create flexible remote workflows, including the ability for one provider to remotely direct care for up to 100 patients in a single dashboard or “virtual ICU” (vICU), and to conduct virtual rounds from conference rooms, offices or home on any PC.

This flexibility helps hospitals rapidly expand staff capacity, including staff that may be in quarantine or coming out of retirement, to get more eyes on patients and save more lives, both patients’ and healthcare workers’. Remote views bring together data from multiple medical devices, including ventilators and cardiac monitors from different vendors, to ensure that all members of the care team have the data they need to take action.
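The vICU workflow described above, where vendor-agnostic ingestion feeds one provider’s view of up to 100 beds, can be sketched in miniature. Everything here (class names, vital signs, thresholds) is illustrative; Sickbay’s actual data model is proprietary:

```python
from dataclasses import dataclass, field

@dataclass
class BedFeed:
    bed_id: str
    heart_rate: int = 0  # latest reading from the bedside monitor
    spo2: int = 100      # blood-oxygen saturation, percent

@dataclass
class VirtualICU:
    """One provider's dashboard, aggregating many monitored beds."""
    beds: dict = field(default_factory=dict)

    def ingest(self, bed_id, heart_rate, spo2):
        # Vendor-agnostic ingestion: readings arrive normalized from
        # ventilators, cardiac monitors, etc., regardless of manufacturer.
        self.beds[bed_id] = BedFeed(bed_id, heart_rate, spo2)

    def flagged(self, max_hr=120, min_spo2=92):
        # Surface the beds that need attention, so a single clinician
        # can triage many patients from one view.
        return [b.bed_id for b in self.beds.values()
                if b.heart_rate > max_hr or b.spo2 < min_spo2]

vicu = VirtualICU()
vicu.ingest("ICU-3", heart_rate=85, spo2=97)
vicu.ingest("ICU-7", heart_rate=132, spo2=95)
vicu.ingest("ICU-9", heart_rate=78, spo2=88)
print(vicu.flagged())  # ['ICU-7', 'ICU-9']
```

The point of the design is the single normalized view: once disparate device feeds share one schema, scaling from one bed to a hundred is a display problem rather than an integration problem.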

“In healthcare today, less than 1% of the data being generated by patients makes it into the electronic health record, forcing clinicians to work without a complete picture. We unlock that data to allow for predictive analytics, remote patient monitoring, AI and machine learning applications that will ultimately help providers create a new standard of care tomorrow,” said Emma Fauss, chief executive officer, MIC. “What COVID has done is crystallize the immediate need around reducing interactions in the room and looking at resource limitations in a new way. How can you monitor more patients with less, without sacrificing the quality of care? It requires being able to monitor patients remotely, monitor multiple patients at once, and leverage data for patient trajectory, analytics and risk scoring.

“Scale to Serve embodies what MIC and Intel believe: We’re here to serve healthcare providers on the front line to do their job effectively.  We want to be there to support that mission and ultimately transform the standard of care.”

About Sickbay’s Early Implementation: In March 2020, Houston Methodist Hospital deployed Sickbay as part of its telehealth initiative to create a vICU. Within one day of go-live, Houston Methodist began to see COVID-19 patients. Due to Sickbay’s software-based design, Houston Methodist was able to turn on new COVID-19 ICU beds in minutes. The hospital’s staff currently has the ability to monitor up to 180 COVID-19 patients remotely from their on-site command center, conference rooms and offices. Other MIC clients are also beginning to expand capacity for COVID-19, including the University of Alabama at Birmingham Hospital.

“We plan to use Sickbay in a centralized monitoring area where a single physician or nurse can monitor patients remotely and leverage hierarchical categorization of patients. We also plan to give our physicians who may be at higher risk of infection the opportunity to provide highly important consultation, without having to be in the line of fire, by logging in from a remote site on a web-based, secure, HIPAA-compliant system,” said Dr. Dan Berkowitz, chair of the Department of Anesthesiology and Perioperative Medicine at University of Alabama at Birmingham Hospital. “Every time a physician goes into an ICU room, there’s a risk of exposure and a need to utilize PPE. We can reduce exposure risk by enabling the physicians to access the physiologic patient data remotely and save precious PPE.”


How It Works: MIC Sickbay is the only scalable, FDA-cleared clinical surveillance and analytics platform created for ICUs. The software-based monitoring and analytics platform is vendor-agnostic, integrating seamlessly with the medical devices hospitals already use. Sickbay runs on Intel® Xeon® processor-based platforms to deliver data visualization and analytics, and can be accessed from any connected PC, tablet or phone. Once deployed, Sickbay supports evidence-based care decisions by giving providers near real-time waveform data from connected devices in a single view, along with patient history for the entire length of stay, helping care teams automate trend-building and create patient-specific analytics at scale.

Hospitals interested in access can apply for the program on the MIC website.

What’s Next: Intel and MIC are talking to hospitals across the U.S. to enable them to quickly deploy Sickbay and protect their critical care providers.

More Context: ATA Spotlight on COVID-19: Scaling ICU Beds and Capabilities (Webinar)

More Customer Stories: Intel Customer Spotlight | Customer Stories on Intel Newsroom

intel tech supporting healthcare
» Click for full infographic

The post Intel and MIC Announce Scale to Serve Program to Rapidly Expand Remote ICUs to 100 US Hospitals appeared first on Intel Newsroom.

Using Artificial Intelligence to Save Coral Reefs

CORaiL 1

» Download all images (ZIP, 19 MB)

What’s New: Today, on Earth Day 2020, Accenture, Intel and the Sulubaaï Environmental Foundation announced Project: CORaiL, an artificial intelligence (AI)-powered solution to monitor, characterize and analyze coral reef resiliency. Project: CORaiL was deployed in May 2019 to the reef surrounding Pangatalan Island in the Philippines and has collected about 40,000 images, which have been used by researchers to gauge reef health in real time.

“Project: CORaiL is an incredible example of how AI and edge technology can be used to assist researchers with monitoring and restoring the coral reef. We are very proud to partner with Accenture and the Sulubaaï Environmental Foundation on this important effort to protect our planet.”
–Rose Schooler, Intel corporate vice president in the Sales and Marketing Group

Why It Matters: Coral reefs are among the world’s most diverse ecosystems, with more than 800 species of corals providing habitat and shelter for approximately 25% of global marine life. Coral reefs are also extremely beneficial to humans: They protect coastlines from tropical storms, provide food and income for 1 billion people, and generate $9.6 billion in tourism and recreation each year. But according to the United Nations Environment Programme, coral reefs are endangered and rapidly degrading due to overfishing, bottom trawling, warming temperatures and unsustainable coastal development.

“Artificial intelligence provides unprecedented opportunities to solve some of society’s most vexing problems,” said Jason Mitchell, a managing director in Accenture’s Communications, Media & Technology practice. “Our ecosystem of corporate and social partners for this ‘AI for social good’ project proves that there is strength in numbers to make a positive environmental impact.”

How It Works: The abundance and diversity of fish serve as an important indicator of overall reef health. Traditional coral reef monitoring involves human divers either collecting data directly underwater or manually capturing video footage and photos of the reef to be analyzed later. Those methods are widely trusted and employed, but they come with disadvantages: Divers can interfere with wildlife behavior and unintentionally skew survey results, and their time underwater is limited, often to around 30 minutes of photo and video capture.

Engineers from Accenture, Sulubaaï and Intel combined their expertise for Project: CORaiL with the goal of helping researchers restore and supplement the existing degraded reef in the Philippines. First, they built a Sulu-Reef Prosthesis, a concrete underwater platform designed by Sulubaaï to provide strong support for unstable coral fragments. The Sulu-Reef Prosthesis incorporates fragments of living coral within it that will grow and expand, providing a hybrid habitat for fish and marine life. Then, they strategically placed intelligent underwater video cameras, equipped with the Accenture Applied Intelligence Video Analytics Services Platform (VASP) to detect and photograph fish as they pass. VASP uses AI to count and classify the marine life, with the data then sent to a surface dashboard, where it provides analytics and trends to researchers in real time, enabling them to make data-driven decisions to protect the coral reef.
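The real VASP pipeline is not public, but the counting-and-classification step it performs can be illustrated with a toy post-processing function: given per-frame detections as (label, confidence) pairs, keep the confident ones and tally them per species for the surface dashboard. All names here are hypothetical:

```python
from collections import Counter

# Hypothetical post-processing step; VASP's actual APIs are not public.
# Given per-frame detections (class label, confidence score), aggregate
# counts above a confidence threshold for the surface dashboard.
def summarize_frame(detections, min_conf=0.5):
    counts = Counter(label for label, conf in detections if conf >= min_conf)
    return dict(counts)

frame = [("damselfish", 0.91), ("damselfish", 0.84),
         ("wrasse", 0.72), ("wrasse", 0.41)]
print(summarize_frame(frame))  # {'damselfish': 2, 'wrasse': 1}
```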

“The value of your data depends on how quickly you can glean insights to make decisions from it,” said Athina Kanioura, Accenture’s chief analytics officer and Accenture Applied Intelligence lead. “With the ability to do real-time analysis on streaming video, VASP enables us to tap into a rich data source — in effect doing ‘hands on’ monitoring without disrupting the underwater environment.”

Accenture’s VASP solution is powered by Intel® Xeon® processors, Intel® FPGA Programmable Acceleration Cards, Intel® Movidius™ VPU and the Intel® Distribution of OpenVINO™ toolkit.

What’s Next: Engineers are at work on the next-generation Project: CORaiL prototype, which will include an optimized convolutional neural network and a backup power supply. They are also considering infrared cameras, which enable nighttime video capture to create a complete picture of the coral ecosystem. Additional uses could include studying the migration rate of tropical fish to colder waters and monitoring intrusion in protected or restricted underwater areas.

More Context: Intel FPGA Acceleration Hub | Artificial Intelligence at Intel | Intel Reaches 1 Billion Gallons of Water Restored

More Customer Stories: Intel Customer Spotlight | Customer Stories on Intel Newsroom

The post Using Artificial Intelligence to Save Coral Reefs appeared first on Intel Newsroom.

DarwinAI Makes AI Applications More Efficient and Less of a ‘Black Box’ — with Its Own AI

Employees of DarwinAI, an artificial intelligence software startup based in Waterloo, Ontario, gather with company CEO Sheldon Fernandez (seated, center, in the jacket). Credit: DarwinAI

As a student pursuing a doctorate in systems design engineering at the University of Waterloo, Alexander Wong didn’t have enough money for the hardware he needed to run his experiments in computer vision. So he invented a technique to make neural network models smaller and faster.

“He was giving a presentation, and somebody said, ‘Hey, your doctorate work is cool, but you know the real secret sauce is the stuff that you created to do your doctorate work, right?’” recalls Sheldon Fernandez.

Fernandez is the CEO of DarwinAI, the Waterloo, Ontario-based startup now commercializing that secret sauce. Wong is the company’s chief scientist. And Intel is helping the company multiply the performance of its remarkable software, from the data center to edge applications.

“We use other forms of artificial intelligence to probe and understand a neural network in a fundamental way,” says Fernandez, describing DarwinAI’s playbook. “We build up a very sophisticated understanding of it, and then we use AI a second time to generate a new family of neural networks that’s as good as the original, a lot smaller and can be explained.”

That last part is critical: A big challenge with AI, says Fernandez, is that “it’s a black box to its designers.” Without knowing how an AI application functions and makes decisions, developers struggle to improve performance or diagnose problems.

An automotive customer of DarwinAI, for instance, was troubleshooting an automated vehicle with a strange tendency to turn left when the sky was a particular shade of purple. DarwinAI’s solution — which it calls Generative Synthesis — helped the team recognize how the vehicle’s behavior was affected by training for certain turning scenarios that had been conducted in the Nevada desert, coincidentally when the sky was that purple hue (read DarwinAI’s recent deep dive on explainability).

Another way to think about Generative Synthesis, Fernandez explains, is to imagine an AI application that looked at a house designed by a human being, noted the architectural contours, and then designed a completely new one that was stronger and more reliable. “Because it’s AI, it sees efficiencies that would just never occur to a human mind,” Fernandez says. “That’s what we are doing with neural networks.” (A neural network is an approach to break down sophisticated tasks into a large number of simple computations.)
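Generative Synthesis itself is proprietary, but the general idea that a trained network contains redundancy a tool can strip away can be illustrated with a much simpler, unrelated technique: magnitude pruning, which zeroes the smallest weights. This is an illustration of network compression in general, not DarwinAI’s method:

```python
import numpy as np

# Illustration only: simple magnitude pruning, NOT DarwinAI's proprietary
# Generative Synthesis. It shows the general idea that many weights in a
# trained network contribute little and can be removed.
def prune(weights: np.ndarray, keep_fraction: float) -> np.ndarray:
    k = int(weights.size * keep_fraction)
    threshold = np.sort(np.abs(weights).ravel())[-k]  # k-th largest magnitude
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
sparse = prune(w, keep_fraction=0.25)
print(np.count_nonzero(sparse))  # 4 of 16 weights survive
```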

Intel is in the business of making AI not only accessible to everyone, but also faster and easier to use. Through the Intel AI Builders program, Intel has worked with DarwinAI to pair Generative Synthesis with the Intel® Distribution of OpenVINO™ toolkit and other Intel AI software components to achieve order-of-magnitude gains in performance.

In a recent case study, neural networks built using the Generative Synthesis platform coupled with Intel® Optimizations for TensorFlow were able to deliver up to 16.3 times and 9.6 times performance increases on two popular image recognition workloads (ResNet50 and NASNet, respectively) over baseline measurements for an Intel Xeon Platinum 8153 processor.

“Intel and DarwinAI frequently work together to optimize and accelerate artificial intelligence performance on a variety of Intel hardware,” says Wei Li, vice president and general manager of Machine Learning Performance at Intel.

The two companies’ tools are “very complementary,” Fernandez says. “You use our tool and get a really optimized neural network and then you use OpenVINO and the Intel tool sets to actually get it onto a device.”

This combination can deliver AI solutions that are simultaneously compact, accurate and tuned for the device where they are deployed, which is becoming critical with the rise of edge computing.

“AI at the edge is something we’re increasingly seeing,” says Fernandez. “We see the edge being one of the themes that is going to dominate the discussion in the next two, three years.”

In the shadow of coronavirus: Dominating all discussion right now is coronavirus. DarwinAI announced this week that “we have collaborated with researchers at the University of Waterloo’s VIP Lab to develop COVID-Net: a convolutional neural network for COVID-19 detection via chest radiography.” The company has made the source code and dataset available by open source on GitHub. Read about Intel and coronavirus.

More Customer Stories: Intel Customer Spotlight | Customer Stories on Intel Newsroom

The post DarwinAI Makes AI Applications More Efficient and Less of a ‘Black Box’ — with Its Own AI appeared first on Intel Newsroom.

A Visit to The Sinclair, an All-Digital Tech Hotel

A visit to The Sinclair, Autograph Collection, in Fort Worth, Texas, is a step back in time. With the green marble entryway, the original elevator doors and the cigar boxes on display, it’s 1929 again and this is a new office building.

On check-in, the experience is anything but vintage.

Using a smartphone app, a hotel guest at The Sinclair can adjust the room temperature, the light settings in the bedroom and bathroom, the window shades, and even the shower temperature to the exact degree. And once The Sinclair’s technology is added in hotels around the world, guests can count on their preferred settings personalizing their next hotel room before they ever set foot in it.

More: Intel and Sinclair Holdings Build First All-Digital Hotel for a Greener and More Personal Experience | Tour the High-Tech Hotel of the Future (Today Show) | Intel partnered with the Sinclair Hotel to create the “hotel of the future” (Fortune) | Sinclair Hotel Spotlight | Internet of Things News

Intel’s internet of things (IoT) technology weaves through the DNA of The Sinclair, which markets itself as the first all-digital hotel. Features include in-room sensors, IoT gateways, dashboards, and IoT-based restaurant sinks and appliances. Together, they deliver everything from more sustainable and efficient building operations to in-room environments personalized for guests.

Intel-based technologies power the hotel, from the smart features to the reservation systems, point of sale, networking infrastructure, back office and guest services such as mobile key and wireless charging. Intel also provides NUCs to support the gateways, controllers, data aggregation and edge computing.

Sinclair Intel 1

» Download all images (ZIP, 116 MB)

Farukh Aslam, CEO and president of Sinclair Holdings LLC, came up with the idea for the smart hotel when he had issues with an LED lighting system in a different building. He wanted a lighting system that didn’t need to be wired by an electrician and didn’t require proprietary software. He chose VoltServer’s Power over Ethernet (PoE) technology to power Cisco switches. The system controls The Sinclair’s more than 2,000 light fixtures, minibars and automated window shades. Each device has its own IP address and can be remotely controlled by staff and guests. The Sinclair is the first hotel to use this PoE technology, which is projected to cut energy consumption by 40%.
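The Sinclair’s actual fixture API is not public, but the pattern described above, where each PoE device has its own IP address and accepts simple commands, might look like this hypothetical sketch (the `/api/state` endpoint and JSON body are invented for illustration):

```python
import json
from urllib import request

# Hypothetical control call; The Sinclair's real device API is not public.
def set_brightness(fixture_ip: str, level: int) -> None:
    """Send a brightness command (0-100) to one IP-addressable light fixture."""
    if not 0 <= level <= 100:
        raise ValueError("level must be 0-100")
    body = json.dumps({"brightness": level}).encode()
    req = request.Request(
        f"http://{fixture_ip}/api/state", data=body,
        headers={"Content-Type": "application/json"}, method="PUT")
    with request.urlopen(req, timeout=2) as resp:
        resp.read()

# e.g. set_brightness("10.20.4.117", 35)  # dim one guest-room fixture
```

Because every fixture, minibar and shade is individually addressable, both the guest app and the staff dashboard can reuse the same per-device command path.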

“We are great ecosystem builders and connectors, and so when we engage in a project like this, we bring that level of expertise to the table around who’s the best partner to bring in to solve your business problem,” says Stacey Shulman, chief innovation officer at Intel’s IoT Retail Solutions Division.

High-tech interactions with hotel guests don’t end in the room.

Cisco’s Meraki smart Wi-Fi cloud networking solution, with SAS data analytics integration, offers location-based analytics and personalized guest messaging. Guests who opt in receive location-based text messages as they move around the hotel. For example, when a guest passes the bar, they may be sent a coupon for a free drink.
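The Meraki/SAS integration details are not public, but the opt-in trigger logic can be sketched as a tiny hypothetical rule engine (zone names and offers are invented for illustration):

```python
from typing import Optional

# Hypothetical rule engine for opt-in, location-triggered offers.
OFFERS = {"bar": "Enjoy a complimentary drink on us!"}

def message_for(guest: dict, zone: str) -> Optional[str]:
    """Return the active offer for this zone, only for opted-in guests."""
    if not guest.get("opted_in"):
        return None
    return OFFERS.get(zone)

print(message_for({"name": "Ana", "opted_in": True}, "bar"))   # Enjoy a complimentary drink on us!
print(message_for({"name": "Ben", "opted_in": False}, "bar"))  # None
```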

This technology also makes the hotel staff more efficient and improves the quality of guest service. Mobile devices let hotel staff connect to reservation and property management software, so guests can check in or get their questions answered anywhere on the premises. Guests can also order food and drinks anywhere on the property through wireless point-of-sale systems. And for the meeting rooms, Intel Unite® wireless collaboration technology allows guests and meeting attendees to collaborate on content from any of their devices.

Intel has not only provided the IoT technology, but also acted as a matchmaker between The Sinclair and other technology partners. “I love the fact that Intel has a pool of partners and they put you all together. That has been amazing. It feels like you’re part of a family of innovators,” Aslam says.

This is just the beginning of IoT technology modernizing the hotel industry. Using the data collected by the IoT sensors and gateways, advanced analytics will provide the ultimate guest, associate and operational experience. Sending guests personalized messages, making buildings more environmentally friendly, and having guest preferences saved so that any hotel room feels like home will all be commonplace in the hotel of the future.

The post A Visit to The Sinclair, an All-Digital Tech Hotel appeared first on Intel Newsroom.

Laika’s Oscar-nominated ‘Missing Link’ Comes to Life with Intel Technology

As moviemaking — and even the actors themselves — goes increasingly digital, Laika studios in Oregon is a unique hybrid. Most movies today are live action with visual effects added later — or they’re fully digital. Laika starts with the century-old craft of stop motion — 24 handcrafted frames per second — and uses visual effects not only to clean up those frames but to add backgrounds and characters.

“We’re dedicated to pushing the boundaries and trying to expand what you can do in a stop motion film,” says Jeff Stringer, director of production technology at Laika. “We want to try and get as much as we can in-camera, but using visual effects allows us to scale it up and do more.”

That’s exactly what Laika did with its latest feature, “Missing Link,” the company’s fifth straight movie to be nominated for an Academy Award for best animated feature, and its first to win a Golden Globe. “The scope of this movie is huge,” the film’s writer-director, Chris Butler, told the Los Angeles Times. According to Animation World Network, the film’s digital backgrounds and characters required more than a petabyte of storage, and rendering the entire movie took 112 million processor hours — or 12,785 years on a single processor.

Like most motion picture content today, “Missing Link” was rendered on Intel® Xeon® Scalable processors. Intel and Laika engineers are working together to apply AI to further automate and speed the company’s painstaking process. “Our biggest metric is, is the performance believable and beautiful?” Stringer asks. “Our ethos is to not let the craft limit the storytelling but try to push the craft as far as the story wants to go.”

Voting for the 2020 Academy Awards ends Tuesday, Feb. 4, and the Oscars will be awarded Sunday, Feb. 9.

More: Go behind the scenes and explore a special interactive booklet celebrating the world-class artists who brought “Missing Link” to life. | All Intel Images

Laika Intel Newsroom 1

The post Laika’s Oscar-nominated ‘Missing Link’ Comes to Life with Intel Technology appeared first on Intel Newsroom.

Counting Antarctic Penguins with AI

penguin counting

Antarctica’s penguin populations are at serious risk. According to a 2019 study from the British Antarctic Survey, the world’s largest Emperor penguin colony has suffered unprecedented breeding failures for the past three years, is uniquely vulnerable to ongoing and projected climate change, and could virtually disappear by the year 2100. To study penguin populations, researchers first need to count them accurately. A new crowd-counting solution from Intel AI Builder member and data science company Gramener could enable researchers to use computer vision to count penguin populations faster and more accurately.

“Today, on Penguin Awareness Day, it’s important to understand the impact we are having on penguin populations in Antarctica,” said Naveen Gattu, COO and co-founder of Gramener. “We believe that AI has the power to help researchers identify what is causing their decline, and are proud to be using Intel AI technologies for applications of social impact. Our crowd counting solution has the potential to help us better understand penguin populations.”

Gramener used an image dataset of Antarctica’s penguin colonies from the Penguin Watch Project, which included images from over 40 locations. In partnership with Microsoft AI for Earth, Gramener researchers trained a deep learning model to count the penguins. The model uses a density-based counting approach to approximate the number of penguins in clusters of different sizes from the images. This solution has been repurposed and benchmarked on Intel® Xeon® Scalable processors and the Intel® Optimization for PyTorch for optimized performance.
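Gramener’s exact model is not public, but density-based counting in general works by having the network predict a density map whose integral equals the estimated count, which keeps tightly huddled, overlapping penguins countable. A toy illustration of the counting step:

```python
import numpy as np

# Sketch of the density-based counting idea described above; the actual
# Gramener model architecture is an assumption here. Instead of detecting
# each penguin individually, the network predicts a density map whose
# values sum to the estimated count.
def count_from_density(density_map: np.ndarray) -> float:
    """Estimated object count is the integral (sum) of the density map."""
    return float(density_map.sum())

# Toy density map for a 4x4 image patch: two blobs of total mass ~3 penguins.
density = np.array([
    [0.0, 0.2, 0.2, 0.0],
    [0.0, 0.5, 0.5, 0.0],
    [0.0, 0.0, 0.3, 0.3],
    [0.0, 0.0, 0.5, 0.5],
])
print(round(count_from_density(density)))  # 3
```

Because the count is a sum rather than a set of discrete boxes, this approach degrades gracefully when individuals overlap or cluster, exactly the failure mode of manual counts from camera traps.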

This solution could help researchers overcome challenges in manually counting penguins from camera traps, which can be tricky due to perspective distortion, penguins standing too close together or clustering, and diversity of camera angles.

More Context: Intel AI for Social Good | Artificial Intelligence at Intel | AI Models Poised to Save Penguins in Antarctica (Blog) | Gramener Image Recognition and Intel AI Saving Antarctic Penguins (Podcast)

The post Counting Antarctic Penguins with AI appeared first on Intel Newsroom.