Non-Stop Shopping: Startup’s AI Lets Supermarkets Skip the Line

Eli Gorovici loves to take friends sailing on the Mediterranean. As the new pilot of Trigo, a Tel Aviv-based startup, he’s inviting the whole retail industry on a cruise to a future with AI.

“We aim to bring the e-commerce experience into the brick-and-mortar supermarket,” said Gorovici, who joined the company as its chief business officer in May.

The journey starts with the sort of shopping anyone who’s waited in a long checkout line has longed for.

You fill up your bags at the market and just walk out. Magically, the store knows what you bought, bills your account and sends you a digital receipt, all while preserving your privacy.

Trigo is building that experience and more. Its magic is an AI engine linked to cameras and a few weighted shelves for small items a shopper’s hand might completely cover.

With these sensors, Trigo builds a 3D model of the store. Neural networks recognize products customers put in their bags.

When shoppers leave, the system sends the grocer the tally and a number it randomly associated with them when they chose to swipe their smartphone as they entered the store. The grocer matches the number with a shopper’s account, charges it and sends off a digital bill.
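The matching step amounts to a one-time random token issued at the door: the vision system only ever sees the token, while the grocer alone can map it back to an account. A minimal Python sketch of the idea (hypothetical; Trigo has not published its actual protocol, and all names here are invented):

```python
import secrets

class AnonymousCheckout:
    """Hypothetical sketch: the store's AI tracks shoppers only by a
    random token; the grocer alone can map tokens back to accounts."""

    def __init__(self):
        self.token_to_account = {}  # held by the grocer, not the vision system

    def enter(self, account_id):
        # Shopper swipes a smartphone; a one-time random token is issued.
        token = secrets.token_hex(8)
        self.token_to_account[token] = account_id
        return token

    def leave(self, token, tally):
        # The vision system reports only (token, tally); the grocer
        # resolves the token to an account and bills it.
        account = self.token_to_account.pop(token)
        return account, tally

store = AnonymousCheckout()
t = store.enter("alice@example.com")
account, bill = store.leave(t, 23.75)
```

Because the token is random and discarded after billing, the cameras never need to know who the shopper is.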

And that’s just the start.

An Online Experience in the Aisles

Shoppers get the same personalized recommendation systems they’re used to seeing online.

“If I’m standing in front of pasta, I may see on my handset a related coupon or a nice Italian recipe tailored for me,” said Gorovici. “There’s so much you can do with data, it’s mind blowing.”

The system lets stores fine-tune their inventory management systems in real time. Typical shrinkage rates from shoplifting or human error could sink to nearly zero.

AI Turns Images into Insights

Making magic is hard work. Trigo’s system gathers a petabyte of video data a day for an average-size supermarket.

It uses as many as four neural networks to process that data at mind-melting rates of up to a few hundred frames per second. (By contrast, your TV displays high-definition movies at 60 fps.)

Trigo trained its neural networks on a dataset of up to 500,000 2D product images. In daily operation, the system uses those models to run millions of inference tasks with help from NVIDIA TensorRT software.

The AI work requires plenty of processing muscle. A supermarket outside London testing the Trigo system uses servers in its back room with 40-50 NVIDIA RTX GPUs. To boost efficiency, Trigo plans to deliver edge servers using NVIDIA T4 Tensor Core GPUs and join the NVIDIA Metropolis ecosystem starting next year.

Trigo got early access to the T4 GPUs thanks to its participation in NVIDIA Inception, a program that gives AI startups traction with tools, expertise and go-to-market support. The program also aims to introduce Trigo to NVIDIA’s retail partners in Europe.

In 2021, Trigo aims to move some of the GPU processing to Google, Microsoft and other cloud services, keeping some latency- or privacy-sensitive uses inside the store. It’s the kind of distributed architecture businesses are just starting to adopt, thanks in part to edge computing systems such as NVIDIA’s EGX platform.

Big Supermarkets Plug into AI

Tesco, the largest grocer in the U.K., has plans to open its first market using Trigo’s system. “We’ve vetted the main players in the industry and Trigo is the best by a mile,” said Tesco CEO Dave Lewis.

Israel’s largest grocer, Shufersal, also is piloting Trigo’s system, as are other retailers around the world.

Trigo was founded in 2018 by brothers Michael and Daniel Gabay, leveraging tech and operational experience from their time in elite units of the Israeli military.

Seeking his next big opportunity in his field of video technology, Gorovici asked friends who were venture capitalists for advice. “They said Trigo was the future of retail,” Gorovici said.

Like sailing in the aqua-blue Mediterranean, AI in retail is a compelling opportunity.

“It’s a trillion-dollar market — grocery stores are among the biggest employers in the world. They are all being digitized, and selling more online now given the pandemic, so maybe this next stage of digital innovation for retail will now move even faster,” he said.

Revved Up Retail: Mercedes-Benz Consulting Optimizes Dealership Layout Using Modcam Store Analytics

Retailers are bringing the power of AI to their stores to better understand customer buying behavior and preferences and provide them a better experience.

AI startup Modcam, based in Sweden, uses smart sensors to provide detailed data on traffic through retail, showroom and office spaces. These sensors, powered by NVIDIA Jetson Nano modules, run AI algorithms on that data in real time at the edge.

This allows retailers of all sorts to securely extract valuable insights regarding customer buying preferences.

Mercedes-Benz Consulting is working with the startup to hit the accelerator on the next generation of the automotive retail experience. Just outside its headquarters in Stuttgart, Germany, the company has constructed an experimental showroom equipped with Modcam sensors to test different layouts and new in-store technologies.

Since cars are an occasional, big-ticket purchase for most people, much of an automaker’s retail success relies on the showroom experience. As car companies invest in autonomous and intelligent driving technologies, they’re also looking to their storefronts to deliver an easy-to-navigate, optimized layout that enhances the shopping experience.

With the help of Modcam’s AI algorithms for edge computing, Mercedes-Benz Consulting has gained valuable insight into consumer behavior in both the show floor and service areas.

Smart Shopping

Modcam’s intelligent AI analyzes how people move in spaces. This helps retailers determine patterns, like whether a certain layout or signage is effective, and identifies customer interest in products that they may linger over.
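Linger detection of this sort can be reduced, conceptually, to a dwell-time threshold over an anonymous track of positions. A toy sketch (a hypothetical simplification of Modcam's analytics; the track data and zone are invented):

```python
def dwell_seconds(track, zone):
    """Given (timestamp, x, y) samples for one anonymous track, return
    the total seconds spent inside a rectangular zone of interest."""
    x0, y0, x1, y1 = zone
    inside = 0.0
    for (t_a, xa, ya), (t_b, _, _) in zip(track, track[1:]):
        if x0 <= xa <= x1 and y0 <= ya <= y1:
            inside += t_b - t_a  # time until the next sample
    return inside

# One shopper's samples; a display table occupies the square (2..4, 2..4).
track = [(0.0, 1.0, 1.0), (5.0, 3.0, 3.0), (12.0, 3.2, 2.8), (15.0, 6.0, 1.0)]
seconds = dwell_seconds(track, zone=(2, 2, 4, 4))
lingered = seconds >= 10.0  # threshold marking genuine interest
```

Aggregating such events across many tracks is what lets a retailer compare layouts or signage quantitatively.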

It does so without collecting or storing private information. The deep neural networks running at the edge detect customers as people with non-identifying characteristics, and the smart sensors don’t store any of the images they analyze.

Modcam relies on the high-performance, energy-efficient Jetson Nano, with software optimized using the NVIDIA Metropolis application framework, to perform this real-time processing. This small yet powerful computer lets Modcam run multiple deep neural networks in parallel for applications such as object detection, segmentation and tracking — all in an easy-to-use platform that runs in as little as 5 watts.

“Our previous generation of sensors and processors wasn’t powerful enough, so we upgraded to the NVIDIA Jetson Nano to deliver a 60x increase in neural network performance demanded by our next-generation systems,” said Andreas Nordgren, chief operating officer at Modcam.

And with this high-performance, intelligent edge solution, Modcam is helping retailers around the world deliver more optimized merchandising and a better shopping experience.

Modcam is a member of NVIDIA Inception, a virtual accelerator program that supports early-stage companies with fundamental tools, expertise and go-to-market support.

AI-Powered Luxury

With the concept store near its headquarters, Mercedes-Benz Consulting can test different floor layouts as well as touchscreen promotions and signage to display product information.

By outfitting the store with Modcam’s edge AI system, the automaker is able to measure the success of these layouts and campaigns, determining how much traffic flows to which models and how customers interact with different store configurations.

The luxury automaker can then extend these learnings to its dealerships around the world to optimize the customer buying and service experience.

And with the help of real-time edge computing, they can iterate quickly to consistently provide the best possible in-store experience.

The post Revved Up Retail: Mercedes-Benz Consulting Optimizes Dealership Layout Using Modcam Store Analytics appeared first on The Official NVIDIA Blog.

Taking Point: IBM, NVIDIA Collaborate at the Network’s Edge

NVIDIA is expanding its long-standing collaboration with IBM to accelerate the deployment of edge networks. Businesses are deploying these networks around the world as they switch on IoT sensors and extract real-time insights from the masses of data they generate.

Today, IBM announced new solutions for edge computing including the IBM Edge Application Manager on the NVIDIA EGX platform. The combination provides world-class software management on the most powerful offering for accelerated computing and AI.

With this IBM solution, an IT manager can deploy a new application or AI model simultaneously to as many as 10,000 edge devices. The software automates the work of managing those elements through their lifecycle.

The IBM Edge Application Manager enables flexibility as well as scalability. It supports containers through Red Hat OpenShift, so jobs can start and stop as microservices, orchestrated by Kubernetes and deployed on devices running Docker.

These capabilities are now readily available for businesses around the globe moving to edge networks.

NVIDIA supplies the latest GPU-optimized containers through the NGC software catalog. Data scientists and DevOps teams can rely on solid support for the IBM Edge Application Manager no matter where on the network’s edge their data is created and processed.

Running jobs on their own edge networks, users can harvest insights faster with greater reliability and without the costs or vulnerabilities of sending their data to remote, centralized servers.

EGX: A Solid Foundation for the Edge

Factories, supermarkets, warehouses and other businesses are already reaping the benefits of edge computing.

The NVIDIA EGX platform supports application frameworks, such as NVIDIA Metropolis, enabling use cases for smart cities and intelligent video analytics. It also hosts NVIDIA Aerial, a software developer kit to enable virtual 5G radio access networks and services based on them.

NVIDIA’s EGX runs on everything from a full rack of NVIDIA T4 Tensor Core GPUs to Jetson Nano modules that fit in the palm of your hand. The platform debuted in October with Red Hat among its first partners.

The EGX ecosystem includes more than 100 technology companies worldwide, from startups to established software vendors, cloud service providers and global server and device manufacturers. IBM is now one of the largest collaborators for EGX.

IBM and NVIDIA have a long history in high performance computing. For example, we collaborated to build Summit, the most powerful supercomputer in the world.

Now we’re bringing that expertise to the edge of the network. It’s where billions of always-on IoT sensors will be connected by 5G and processed by AI in the next era of computing.


A Taste for Acceleration: DoorDash Revs Up AI with GPUs

When it comes to bringing home the bacon — or sushi or quesadillas — DoorDash is shifting into high gear, thanks in part to AI.

The company got its start in 2013, offering deals such as delivering pad thai to Stanford University dorm rooms. Today with a phone tap, customers can order a meal from more than 310,000 vendors — including Chipotle, Walmart and Wingstop — across 4,000 cities in the U.S., Canada and Australia.

Part of its secret sauce is a digital logistics engine that connects its three-sided marketplace of merchants, customers and independent contractors the company calls Dashers. Each community taps into the platform for different reasons.

Using a mix of machine-learning models, the logistics engine serves personalized restaurant recommendations and delivery-time predictions to customers who want on-demand access to their local businesses. Meanwhile, it assigns Dashers to orders and sorts through trillions of options to find their optimal routes while calculating delivery prices dynamically.

The work requires a complex set of related algorithms embedded in numerous machine-learning models, crunching ever-changing data flows. To accelerate the process, DoorDash has turned to NVIDIA GPUs in the cloud to train its AI models.

Training in One-Tenth the Time

Moving from CPUs to GPUs for AI training netted DoorDash a 10x speed-up. Migrating from single to multiple GPUs accelerated its work another 3x, said Gary Ren, a machine-learning engineer at DoorDash who will describe the company’s approach to AI in an online talk at GTC Digital.

“Faster training means we get to try more models and parameters, which is super critical for us — faster is always better for training speeds,” Ren said.

“A 10x training speed-up means we spin up cloud clusters for a tenth the time, so we get a 10x reduction in computing costs. The impact of trying 10x more parameters or models is trickier to quantify, but it gives us some multiple of increased overall business performance,” he added.

Making Great Recommendations

So far, DoorDash has discussed one of its deep-learning applications — its recommendation engine, which has been in production for about two years. Recommendations are becoming more important as companies such as DoorDash realize consumers don’t always know what they’re looking for.

Potential customers may “hop on our app and explore their options so — given our huge number of merchants and consumers — recommending the right merchants can make a difference between getting an order or the customer going elsewhere,” he said.

Because its recommendation engine is so important, DoorDash continually fine-tunes it. For example, in its engineering blogs, the company describes how it builds an n-dimensional embedding vector for each merchant to find nuanced similarities among vendors.
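Once each merchant is a vector, "similar vendors" becomes a nearest-neighbor lookup, typically by cosine similarity. A minimal sketch (the 3-dimensional vectors and merchant names below are invented for illustration; DoorDash's real embeddings are far higher-dimensional and learned from order data):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy merchant embeddings (illustrative only).
merchants = {
    "thai_palace":  [0.9, 0.1, 0.2],
    "bangkok_bowl": [0.8, 0.2, 0.3],
    "taco_town":    [0.1, 0.9, 0.4],
}

query = merchants["thai_palace"]
ranked = sorted(
    (m for m in merchants if m != "thai_palace"),
    key=lambda m: cosine_similarity(query, merchants[m]),
    reverse=True,
)
```

Here the other Thai restaurant ranks first, which is exactly the nuance an embedding is meant to capture.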

It has also adopted so-called multi-level, multi-armed bandit algorithms, which let AI models simultaneously exploit choices customers have liked in the past and explore new possibilities.
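The explore/exploit trade-off at the heart of any bandit algorithm can be shown with the simplest variant, epsilon-greedy (much simpler than the multi-level bandits the post mentions; the arm names and rewards are invented):

```python
import random

class EpsilonGreedyBandit:
    """With probability epsilon, explore a random arm; otherwise
    exploit the arm with the best observed average reward."""

    def __init__(self, arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {a: 0 for a in arms}
        self.values = {a: 0.0 for a in arms}

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))   # explore
        return max(self.values, key=self.values.get)  # exploit

    def update(self, arm, reward):
        # Incremental running average of observed rewards.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# epsilon=0 makes this demo deterministic; production would use a
# small positive epsilon so new merchants still get shown.
bandit = EpsilonGreedyBandit(["pizza", "sushi", "tacos"], epsilon=0.0)
bandit.update("sushi", 1.0)  # a click counts as reward 1
bandit.update("pizza", 0.0)
choice = bandit.select()
```

After one click on sushi, pure exploitation recommends sushi again; the exploration branch is what keeps the system from locking in too early.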

Speaking of New Use Cases

While it optimizes its recommendation engine, DoorDash is exploring new AI use cases, too.

“There are several areas where conversations happen between consumers and Dashers or support agents. Making those conversations quick and seamless is critical, and with improvements in NLP (natural-language processing) there’s definitely potential to use AI here, so we’re exploring some solutions,” Ren said.

NLP is one of several use cases that will drive future performance needs.

“We deal with data from the real world and it’s always changing. Every city has unique traffic patterns, special events and weather conditions that add variance — this complexity makes it a challenge to deliver predictions with high accuracy,” he said.

Other challenges the company’s growing business presents are in making recommendations for first-time customers and planning routes in new cities it enters.

“As we scale, those boundaries get pushed — our inference speeds are good enough today, but we’ll need to plan for the future,” he added.


How Evo’s AI Keeps Fashion Forward

Imagine if fashion houses knew that teal blue was going to replace orange as the new black. Or if retailers knew that tie dye was going to be the wave to ride when swimsuit season rolls in this summer.

So far, there hasn’t been an efficient way of getting ahead of consumer and market trends like these. But Italy-based startup Evo is helping retailers and fashion houses get a jump on changing tastes and a whole lot more.

The company’s deep-learning pricing and supply chain systems, powered by NVIDIA GPUs, let organizations quickly respond to changes — whether in markets, weather, inventory, customers, or competitor moves — by recommending optimal pricing, inventory and promotions in stores.

Evo is also a member of the NVIDIA Inception program, a virtual accelerator that offers startups in AI and data science go-to-market support, expertise and technology assistance.

The AI Show Stopper

Evo was born from a Ph.D. thesis by its founder, Fabrizio Fantini, while he was at Harvard.

Now the company’s CEO, Fantini discovered new algorithms that could outperform even the most complex and expensive commercial pricing systems in use at the time.

“Our research was shocking, as we measured an immediate 30 percent reduction in the average forecast error rate, and then continuous improvement thereafter,” Fantini said. “We realized that the ability to ingest more data, and to self-learn, was going to be of strategic importance to any player with any intention of remaining commercially viable.”

The software, developed in the I3P incubator at the Polytechnic University of Turin, examines patterns in fashion choices and draws data that anticipates market demand.

Last year, Evo’s systems managed goods worth over 10 billion euros from more than 2,000 retail stores. Its algorithms changed over 1 million prices and physically moved over 15 million items, while generating more than 100 million euros in additional profit for customers, according to the company.

Nearly three dozen companies, including grocers and other retailers, as well as fashion houses, have already benefited from these predictions.

“Our pilot clients showed a 10 percent improvement in margin within the first 12 months,” Fantini said. “And longer term, they achieved up to 5.5 points of EBITDA margin expansion, which was unprecedented.”

GPUs in Vogue 

Evo uses NVIDIA GPUs to run neural network models that transform data into predictive signals of market trends. This allows clients to make systematic and profitable decisions.

Using a combination of advanced machine learning methods and statistics, the system transforms products into “functional attributes,” such as type of sleeve or neckline, and into “style attributes,” such as the color or silhouette.

It works off a database that maps the social media activity, internet patterns and purchase behaviors of over 1.3 billion consumers, which the company describes as a representative sample of the world’s population.

Then the system uses multiple algorithms and approaches, including meta-modeling, to process market data that is tagged automatically based on the clients, prices, products and characteristics of a company’s main competitors.

This makes the data directly comparable across different companies and geographies, which is one of the key ingredients required for success.

“It’s a bit like Google Translate,” said Fantini. “Learning from its corpus of translations to make each new request smarter, we use our growing body of data to help each new prediction become more accurate, but we work directly on transaction data rather than images, text or voice as others do.”

These insights help retailers understand how to manage their supply chains and how to plan pricing and production even when facing rapid changes in demand.

In the future, Evo plans to use AI to help design fashion collections and forecast trends at increasingly earlier stages.



Life of Pie: How AI Delivers at Domino’s

Some like their pies with extra cheese, extra sauce or double pepperoni. Zack Fragoso’s passion is for pizza with plenty of data.

Fragoso, a data science and AI manager at pizza giant Domino’s, got his Ph.D. in occupational psychology, a field that employs statistics to sort through the vagaries of human behavior.

“I realized I liked the quant part of it,” said Fragoso, whose nimbleness with numbers led to consulting jobs in analytics for the police department and symphony orchestra in his hometown of Detroit before landing a management job on Domino’s expanding AI team.

The pizza maker “has grown our data science team exponentially over the last few years, driven by the impact we’ve had on translating analytics insights into action items for the business team.”

Making quick decisions is important when you need to deliver more than 3 billion pizzas a year — fast. So, Domino’s is exploring the use of AI for a host of applications, including more accurately predicting when an order will be ready.

Points for Pie, launched at last year’s Super Bowl, has been Domino’s highest-profile AI project to date. Customers snapped a smartphone picture of whatever pizza they were eating, and the company awarded them loyalty points toward a free pizza.

“There was a lot of excitement for it in the organization, but no one was sure how to recognize purchases and award points,” Fragoso recalled.

“The data science team said this is a great AI application, so we built a model that classified pizza images. The response was overwhelmingly positive. We got a lot of press and massive redemptions, so people were using it,” he added.

Domino’s trained its model on an NVIDIA DGX system equipped with eight V100 Tensor Core GPUs using more than 5,000 images, including pictures some customers sent in of plastic pizza dog toys. A survey sent in response to the pictures helped automate some of the job of labeling the unique dataset now considered a strategic corporate asset.

AI Knows When the Order Will Be Ready

More recently, Fragoso’s team hit another milestone, boosting accuracy from 75% to 95% for predictions of when an order will be ready. The so-called load-time model factors in variables such as how many managers and employees are working, the number and complexity of orders in the pipeline and current traffic conditions.

The improvement has been well received and could be the basis for future ways to advance operator efficiencies and customer experiences, thanks in part to NVIDIA GPUs.

“Domino’s does a very good job cataloging data in the stores, but until recently we lacked the hardware to build such a large model,” said Fragoso.

At first, it took three days to train the load-time model, too long to make its use practical.

“Once we had our DGX server, we could train an even more complicated model in less than an hour,” he said of the 72x speed-up. “That let us iterate very quickly, adding new data and improving the model, which is now in production as version 3.0,” he added.

More AI in the Oven

The next big step for Fragoso’s team is tapping a bank of NVIDIA Turing T4 GPUs to accelerate AI inferencing for all Domino’s tasks that involve real-time predictions.

Some emerging use cases in the works are still considered secret ingredients at Domino’s. However, the data science team is exploring computer vision applications to make getting customers their pizza as quick and easy as possible.

“Model latency is extremely important, so we are building out an inference stack using T4s to host our AI models in production. We’ve already seen pretty extreme improvements with latency down from 50 milliseconds to sub-10ms,” he reported.

Separately, Domino’s recently tapped BlazingSQL, open-source software to run data-science queries on GPUs. NVIDIA RAPIDS software eased the transition, supporting the APIs from a prior CPU-based tool while delivering better performance.

It’s delivering an average 10x speed-up across all use cases in the part of the AI process that involves building datasets.

“In the past some of the data-cleaning and feature-engineering operations might have taken 24 hours, but now we do them in less than an hour,” he said.

Try Out AI at NRF 2020

Domino’s is one of many forward-thinking companies using GPUs to bring AI to retail.

NVIDIA GPUs helped power Alibaba to $38 billion in revenue on Singles Day, the world’s largest shopping event. And the world’s largest retailer, Walmart, talked about its use of GPUs and NVIDIA RAPIDS at an event earlier this year.

Separately, IKEA uses AI software from NVIDIA partner Winnow to reduce food waste in its cafeterias.

You can learn more about best practices of using AI in retail at this week’s NRF 2020, the National Retail Federation’s annual event. NVIDIA and some of its 100+ retail partners will be on hand demonstrating our EGX edge computing platform, which scales AI to local environments where data is gathered — store aisles, checkout counters and warehouses.

The EGX platform’s real-time edge computing capabilities can alert store associates to intervene when shrinkage is occurring, open new checkout counters when lines grow long and help deliver the best customer shopping experiences.

Book a meeting with NVIDIA at NRF here.


Intel and AREA15 Bring Experiential Retail to Life in Las Vegas

Opening in April 2020, the AREA15 property in Las Vegas will include a flexible platform where Intel innovation will play an integral role. (Credit: The Vox Agency)

What’s New: Today, Intel announced a collaboration with AREA15, one of the first purpose-built experiential retail and entertainment complexes. To thrive in the digital age, traditional retailers and malls face a reinvent-or-die reality. AREA15 is tackling this issue by bringing live events, immersive experiences and activations, monumental art installations and groundbreaking technology to the retail environment.

“Today, only top retailers can afford to explore and implement experiential design in their stores. We believe immersive, authentically engaging and inspiring experiences in retail are not only possible, but should be accessible for all. Ecosystem collaboration is in Intel’s DNA. AREA15 will help provide scalable, world-class experiential retail solutions for retailers and brands of all sizes.”
–Joe Jensen, Intel vice president and general manager of the Retail, Banking, Hospitality and Education Division

Why It’s Important: Research shows that 81 percent of Generation Z prefer to shop in stores, and 73 percent like to discover new products in stores. This offers the opportunity to transform how a new generation of consumers chooses to interact with brands. Retailers and brands can’t afford to miss out on engaging this demographic, which is on track to become the largest generation of consumers by 2020 — responsible for $29 billion to $143 billion in direct spending.

The Intel Experience Incubation Hub will be a multiuse venue for innovation and collaboration. It will allow retail ecosystem partners to test new design concepts and technologies. (Credit: Design Distill)

The rise of the “experience economy,” fueled by rapid shifts in technology-enabled design and culture, has resulted in the business-critical need to understand customers — not only Gen Z — and use that data to design a real-time personal experience.

Intel’s Role: The alliance will initially focus on immersive experiential retail design with the launch of the Intel® Experience Incubation Hub, a multiuse venue for innovation and collaboration. It will allow retail ecosystem partners — from creatives to technologists — to test new design concepts and leading-edge technologies.

Opening in April, the AREA15 property in Las Vegas aims to be the gravitational center for the new experience economy, building a flexible platform where Intel innovation will play an integral role. AREA15’s technical and physical infrastructure will be modular, allowing for innovations coming out of the Experience Incubation Hub to be easily tested for proof of concept and scalability — within AREA15 and beyond — in a variety of forms, from pop-ups to short-term engagements to more permanent installations.

Early collaborators and experiences featured in the Experience Incubation Hub include Artist TRAV, Papinee, Pressure Point Creative, ThenWhat Inc. and Variant.

“AREA15 is a radical reimagining of retail, where visitors can expect to be authentically engaged and inspired in an otherworldly setting,” said Winston Fisher, CEO of AREA15 and partner in Fisher Brothers. “Experience design cannot be separated from technology — it is essential that the two are intertwined and co-developed. That’s where our collaboration with Intel comes in. Together, we’re raising the standard of experience design, and developing best practices for combining technology, art and commerce in exciting, unexpected ways.”

More Context: Intel at 2020 NRF: Intel Gives Retail the Edge

The post Intel and AREA15 Bring Experiential Retail to Life in Las Vegas appeared first on Intel Newsroom.

All the Way to 11: NVIDIA GPUs Accelerate 11.11, World’s Biggest Online Shopping Event

Putting AI to work on a massive scale, Alibaba recently harnessed NVIDIA GPUs to serve its customers on 11/11, the year’s largest shopping event.

Singles Day, as the Nov. 11 shopping event is also known, generated $38 billion in sales. That’s up nearly a quarter from last year’s $31 billion, and more than double online sales on Black Friday and Cyber Monday combined.

Singles Day — which has grown from $7 million a decade ago — illustrates the massive scale AI has reached in global online retail, where no player is bigger than Alibaba.

Each day, over 100 million shoppers comb through billions of available products on its site. Activity skyrockets on peak shopping days, when Alibaba’s systems field hundreds of thousands of queries a second.

And AI keeps things humming along, according to Lingjie Xu, Alibaba’s director of heterogeneous computing.

“To ensure these customers have a great user experience, we deploy state-of-the-art AI technology at massive scale using the NVIDIA accelerated computing platform, including T4 GPUs, cuBLAS, customized mixed precision and inference acceleration software,” he said.

“The platform’s intuitive search capabilities and reliable recommendations allow us to support a model six times more complex than in the past, which has driven a 10 percent improvement in click-through rate. Our largest model shows 100 times higher throughput with T4 compared to CPU,” he said.

One key application for Alibaba and other modern online retailers: recommender systems that display items that match user preferences, improving the click-through rate — which is closely watched in the e-commerce industry as a key sales driver.

Every small improvement in click-through rate directly impacts the user experience and revenues. A 10 percent improvement from advanced recommender models that can run in real time, and at incredible scale, is only possible with GPUs.

Alibaba’s teams employ NVIDIA GPUs to support a trio of optimization strategies around resource allocation, model quantization and graph transformation to increase throughput and responsiveness.

This has enabled NVIDIA T4 GPUs to accelerate Alibaba’s wide and deep recommendation model and deliver 780 queries per second. That’s a huge leap from CPU-based inference, which could only deliver three queries per second.
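Model quantization, one of the three strategies above, trades a little numeric precision for much higher inference throughput. A minimal symmetric INT8 quantize/dequantize sketch in pure Python (illustrative only; production systems like TensorRT use calibrated, often per-channel schemes):

```python
def quantize_int8(values):
    """Map floats onto the signed 8-bit range [-127, 127] with one
    shared scale factor derived from the largest magnitude."""
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.52, -1.27, 0.03, 0.98]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Each restored weight lands within one quantization step of the original.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

The payoff is that 8-bit integer math moves four times less data than 32-bit floats and maps onto fast integer tensor-core instructions, which is where much of the GPU throughput gain comes from.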

Alibaba has also deployed NVIDIA GPUs to accelerate its systems for automatic advertisement banner-generating, ad recommendation, imaging processing to help identify fake products, language translation, and speech recognition, among others. As the world’s third-largest cloud service provider, Alibaba Cloud provides a wide range of heterogeneous computing products capable of intelligent scheduling, automatic maintenance and real-time capacity expansion.

Alibaba’s far-sighted deployment of NVIDIA’s AI platform is a straw in the wind, indicating what more is to come in a burgeoning range of industries.

Just as its tools filter billions of products for millions of consumers, AI recommenders running on NVIDIA GPUs will find a place among countless other digital services — app stores, news feeds, restaurant guides and music services among them — keeping customers happy.

Learn more about NVIDIA’s AI inference platform.

The post All the Way to 11: NVIDIA GPUs Accelerate 11.11, World’s Biggest Online Shopping Event appeared first on The Official NVIDIA Blog.

Why AI Could Be the End of the Aisle for Shoplifters

Security guards. Closed circuit TV. Anti-theft tags and alarms.

Retailers are constantly battling shoplifters to protect store profits — up to 50 percent of which are lost to theft.

Now, they have a new weapon in their armory.

ThirdEye Labs, a London-based company and member of Inception, NVIDIA’s startup incubator, is combining off-the-shelf CCTV cameras with state-of-the-art AI algorithms to detect fraudulent activities in stores.

Caught AI Handed

Every year, U.S. retailers lose up to $32.25 billion due to theft.

In addition to those pocketing items straight from the shelves, it’s estimated that one percent of all customers who visit self-service checkouts steal. Sometimes it’s accidental — an item doesn’t scan through properly or the wrong type of pastry is selected from the bakery menu.

But some supermarket stealers are more slick — following schemes such as “the banana trick” (steaks scanned as potatoes) or “the switcheroo” (scanning the barcode of a cheaper item instead of a pricier purchase).

To date, retailers’ attempts to deter thieves have had little effect. Hiring more security personnel is expensive and creates an unpleasant shopping experience, while security alarms are evaded and self-service counters continue to be deceived.

ThirdEye Labs’ AI algorithms help security staff work more effectively and efficiently. Trained on NVIDIA GPUs, the company’s deep learning networks can detect specific indicators of fraudulent behavior from CCTV footage and then alert staff, who can take appropriate action on the spot.

“We chose to train our algorithms on NVIDIA GPUs as they are fast, reliable and effective,” said Raz Ghafoor, CEO and co-founder at ThirdEye Labs. “Without the power of these GPUs, our development time would have doubled.”

ThirdEye Labs’ AI software works with existing security infrastructure — no additional hardware is needed. None of the video footage used is recorded or stored anywhere, and the system doesn’t perform any facial recognition, making it GDPR compliant.

At stores where ThirdEye Labs’ system has been introduced at self-service checkouts, the AI technology analyzes every scan to detect non-scans, non-payments, substitute scanning and fraudulent refunds. After implementing ThirdEye Labs’ point-of-sale system, two stores caught 27 thieves in the act over the course of a month, up from virtually none before.
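The core logic of that check can be sketched simply: compare the items a vision model saw the shopper handle against the barcodes that were actually scanned, and flag the mismatches. The function name and data shapes below are illustrative assumptions, not ThirdEye Labs’ actual API.

```python
from collections import Counter

def detect_scan_discrepancies(picked_items, scanned_items):
    """Compare items a vision model saw the shopper pick up against
    the barcodes scanned at the self-checkout, and return the two
    kinds of mismatch the system would flag."""
    picked = Counter(picked_items)
    scanned = Counter(scanned_items)
    # Picked but never scanned -> possible non-scan.
    non_scans = picked - scanned
    # Scanned but never picked -> possible substitute scanning.
    substitutes = scanned - picked
    return dict(non_scans), dict(substitutes)

non_scans, substitutes = detect_scan_discrepancies(
    picked_items=["steak", "bananas", "milk"],
    scanned_items=["bananas", "bananas", "milk"],
)
# non_scans -> {"steak": 1}; substitutes -> {"bananas": 1}
```

The hard part in production is not this comparison but reliably recognizing which product was picked up from CCTV footage — that is where the GPU-trained deep learning networks come in.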

In the aisles, too, fraudulent behavior hasn’t gone unnoticed. ThirdEye Labs’ “In-Aisle Theft Detector” sends security guards push notifications every time someone picks up high-risk items, like champagne bottles or fresh meat. They can then decide whether or not to take action.

The service has saved stores tens of thousands of dollars in losses by helping security guards keep their eyes on the right person at the right time.

The Future of Convenient Shopping

ThirdEye Labs plans to expand its technology further to improve customer shopping experiences.

Its “Queue Detector” will predict when checkout lines are about to form, alerting staff so tills can be manned before the rush.

Its “Stock-out Detector” will help stores monitor their shelves and identify when stock is low. Empty shelves cost retailers an estimated three percent of their total revenue each year, so optimizing stock replenishment has big benefits for sellers as well as those looking to purchase.
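Once a vision model can estimate how many units remain on each shelf, the alerting step reduces to a threshold check. The sketch below is a hypothetical illustration of that final step; the function names and threshold are assumptions, not part of ThirdEye Labs’ product.

```python
def stockout_alerts(shelf_counts, min_facings=3):
    """Flag products whose estimated on-shelf count (from a vision
    model) has fallen below a restock threshold."""
    return [sku for sku, count in shelf_counts.items() if count < min_facings]

alerts = stockout_alerts({"cereal": 12, "champagne": 2, "fresh_meat": 0})
# -> ["champagne", "fresh_meat"]
```

As with theft detection, the difficult engineering lives upstream, in producing accurate `shelf_counts` from camera footage; the alerting logic itself is deliberately simple so staff can trust and act on it.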

Image credit: kc0uvb

The post Why AI Could Be the End of the Aisle for Shoplifters appeared first on The Official NVIDIA Blog.

Images: Ten Years of Intel at NRF: Retail Transformation Has Only Just Begun


» Download all images (ZIP, 24 MB)

Photo 1: Pensa’s autonomous drone system, which runs its analytics on in-store servers built on Intel architecture, uses computer vision and artificial intelligence to inform retailers of what is on shelves and what is missing — across all stores, everywhere, at any point in time. Intel is at NRF 2019 from Jan. 13-15 at the Javits Convention Center in New York (Booth #3437). (Credit: Intel Corporation)

Photo 2: Using cameras and artificial intelligence, NCR helps the bottom line by providing technology for retailers to improve the shopping experience while reducing shrink. (Credit: Intel Corporation)

Photo 3: Created by Mood Media, WestRock, In-Store Screen and Intel, Smart Digital Shelving allows brick-and-mortar retailers to harness data to design more engaging customer experiences, which result in greater traffic conversion and basket size. (Credit: Intel Corporation)

Photo 4: Kendu’s Interactive Archway allows retailers to highlight hero products in a store and demonstrates how a customer can engage with new products to learn more. (Credit: Intel Corporation)

Photo 5: JD.com’s Smart Vending JD Go removes friction and provides product recommendations to customers. (Credit: Intel Corporation)

Photo 6: CloudPick uses automated door access, weight sensors, cameras and computer vision to create a frictionless experience for shoppers, and an efficient store for retailers. (Credit: Intel Corporation)

Photo 7: The ASICS Interactive Shoe Display uses a touchscreen totem to allow consumers to control the wall-sized display as they scroll through the ASICS catalog to find more information about each shoe. (Credit: Intel Corporation)

More: Ten Years of Intel at NRF: Retail Transformation Has Only Just Begun

The post Images: Ten Years of Intel at NRF: Retail Transformation Has Only Just Begun appeared first on Intel Newsroom.