Startup’s AI Intersects With U.S. Traffic Lights for Better Flow, Safety

Thousands of U.S. traffic lights may soon be getting the green light on AI for safer streets.

That’s because startup CVEDIA has designed faster, more accurate vehicle and pedestrian detection to improve traffic flow and pedestrian safety for Cubic Transportation Systems. These new AI capabilities will be integrated into Cubic’s GRIDSMART Solution, a single-camera intersection detection and actuation system used across the United States.

Cubic needs computer vision models trained with specialized datasets for its new pedestrian safety and traffic systems. But curating data and training models from scratch takes months, so it partnered with CVEDIA for synthetic data and model development.

CVEDIA’s synthetic data technology accelerates the development of object detection and image classification networks. Adopting the NVIDIA Transfer Learning Toolkit has enabled the company to compress development time further. The traffic light implementation for smarter intersections is now being deployed at more than 6,000 intersections spanning 49 states.

NVIDIA today released Transfer Learning Toolkit version 3.0 into general availability.

“By using NVIDIA Transfer Learning Toolkit, we cut model training time in half and achieved the same level of model accuracy and throughput performance,” said Rodrigo Orph, CTIO and co-founder of CVEDIA.

Metropolis Boosts Infrastructure

CVEDIA develops applications using NVIDIA Metropolis and is a member of NVIDIA Inception, a virtual accelerator program that helps startups in AI and data science get to market faster.

NVIDIA Metropolis is an application framework for smart infrastructure. It provides powerful developer tools, including the DeepStream SDK, Transfer Learning Toolkit, pre-trained models on NGC, and NVIDIA TensorRT.

Transfer learning is a deep learning technique that enables developers to take an AI model pre-trained on one task and customize it for use in another domain. NVIDIA Transfer Learning Toolkit is used to build custom, production-quality models faster, with no coding required.
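
For readers curious what transfer learning looks like in code, here is a minimal, generic sketch in PyTorch. It is not the Transfer Learning Toolkit itself (which requires no coding), and the class count and training batch are hypothetical stand-ins: a backbone pre-trained on one task is frozen, and only a new classification head is retrained for the target domain.

```python
# Illustrative transfer learning in PyTorch (not the Transfer Learning Toolkit itself):
# reuse a backbone pre-trained on ImageNet and retrain only a new classification head.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 10  # hypothetical number of classes in the target domain

# Start from a network pre-trained on one task (ImageNet classification).
# The weights enum requires a recent torchvision release.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained feature extractor so only the new head is updated.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer to customize the model for the new domain.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on a (hypothetical) batch of labeled images.
images = torch.randn(8, 3, 224, 224)          # stand-in for real training images
labels = torch.randint(0, NUM_CLASSES, (8,))  # stand-in for real labels

optimizer.zero_grad()
logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```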

“Safety is the most fundamental need for all drivers and vulnerable road users traveling through intersections. CVEDIA’s AI and synthetic data expertise allow us to both augment our existing AI models and rapidly iterate for new applications,” said Jeff Price, Vice President and General Manager of Cubic Transportation Systems’ ITS unit.

Signaling Smarter Intersections

Cubic’s GRIDSMART Solution uses 360-degree cameras to optimize traffic flow by gathering and interpreting important traffic data. GRIDSMART empowers traffic engineers to adjust signal timing and traffic flow strategies, and enables real-time monitoring and visual assessment.

For this new system, CVEDIA is developing image classification and object detection models to follow the movement of vehicles, people, bicycles, pets and other safety concerns in intersections.

“Cubic wants to detect dangerous areas in an intersection and dangerous areas where a pedestrian might cross, and they want to better control traffic,” said Rodrigo Orph.

Access the NVIDIA Transfer Learning Toolkit, now in general availability.

Image courtesy of Aaron Sebastian on Unsplash.


What Is Synthetic Data?

Data is the new oil in today’s age of AI, but only a lucky few are sitting on a gusher. So, many are making their own fuel, one that’s both inexpensive and effective. It’s called synthetic data.

What Is Synthetic Data?

Synthetic data is annotated information that computer simulations or algorithms generate as an alternative to real-world data.

Put another way, synthetic data is created in digital worlds rather than collected from or measured in the real world.

It may be artificial, but synthetic data reflects real-world data, mathematically or statistically. Research demonstrates it can be as good or even better for training an AI model than data based on actual objects, events or people.

Synthetic data generated in NVIDIA DRIVE Sim on Omniverse
Users can generate synthetic data for autonomous vehicles using Python inside NVIDIA Omniverse.

That’s why developers of deep neural networks increasingly use synthetic data to train their models. Indeed, a 2019 survey of the field calls use of synthetic data “one of the most promising general techniques on the rise in modern deep learning, especially computer vision” that relies on unstructured data like images and video.

The 156-page report by Sergey I. Nikolenko of the Steklov Institute of Mathematics in St. Petersburg, Russia, cites 719 papers on synthetic data. Nikolenko concludes “synthetic data is essential for further development of deep learning … [and] many more potential use cases still remain” to be discovered.

The rise of synthetic data comes as AI pioneer Andrew Ng is calling for a broad shift to a more data-centric approach to machine learning. He’s rallying support for a benchmark or competition on data quality, which many claim represents 80 percent of the work in AI.

“Most benchmarks provide a fixed set of data and invite researchers to iterate on the code … perhaps it’s time to hold the code fixed and invite researchers to improve the data,” he wrote in his newsletter, The Batch.

Augmented and Anonymized Versus Synthetic Data

Most developers are already familiar with data augmentation, a technique that involves adding new data to an existing real-world dataset. For example, they might rotate or brighten an existing image to create a new one.

Given concerns and government policies about privacy, removing personal information from a dataset is an increasingly common practice. This is called data anonymization, and it’s especially popular for text, a kind of structured data used in industries like finance and healthcare.

Augmented and anonymized data are not typically considered synthetic data. However, it’s possible to create synthetic data using these techniques. For example, developers could blend two images of real-world cars to create a new synthetic image with two cars.
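
As a rough illustration of the difference, here is a minimal sketch using the Pillow imaging library. The first two operations are classic augmentation of a single real image; the final one blends two real images into a new, synthetic-style composite. The file names are hypothetical.

```python
# Minimal sketch with Pillow: classic augmentation (rotate, brighten) versus a simple
# blend of two real images into a new, synthetic-style composite.
# "car_a.jpg" and "car_b.jpg" are hypothetical file names.
from PIL import Image, ImageEnhance

car_a = Image.open("car_a.jpg").convert("RGB")
car_b = Image.open("car_b.jpg").convert("RGB").resize(car_a.size)

# Data augmentation: derive new samples from an existing real image.
rotated = car_a.rotate(15, expand=True)                  # rotate by 15 degrees
brighter = ImageEnhance.Brightness(car_a).enhance(1.3)   # brighten by 30 percent

# A simple synthetic composite: blend the two source images 50/50.
blended = Image.blend(car_a, car_b, alpha=0.5)

for name, img in [("rotated", rotated), ("brighter", brighter), ("blended", blended)]:
    img.save(f"{name}.jpg")
```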

Why Is Synthetic Data So Important?

Developers need large, carefully labeled datasets to train neural networks. More diverse training data generally makes for more accurate AI models.

The problem is that gathering and labeling datasets that may contain a few thousand to tens of millions of elements is time-consuming and often prohibitively expensive.

Enter synthetic data. A single image that could cost $6 from a labeling service can be artificially generated for six cents, estimates Paul Walborsky, who co-founded one of the first dedicated synthetic data services, AI.Reverie.

Cost savings are just the start. “Synthetic data is key in dealing with privacy issues and reducing bias by ensuring you have the data diversity to represent the real world,” Walborsky added.

Because synthetic datasets are automatically labeled and can deliberately include rare but crucial corner cases, they’re sometimes better than real-world data.

What’s the History of Synthetic Data?

Synthetic data has been around in one form or another for decades. It’s in computer games like flight simulators and scientific simulations of everything from atoms to galaxies.

Donald B. Rubin, a Harvard statistics professor, was helping branches of the U.S. government sort out issues such as a census undercount, especially of poor people, when he hit upon an idea. He described it in a 1993 paper often cited as the birth of synthetic data.

“I used the term synthetic data in that paper referring to multiple simulated datasets,” Rubin explained.

“Each one looks like it could have been created by the same process that created the actual dataset, but none of the datasets reveal any real data — this has a tremendous advantage when studying personal, confidential datasets,” he added.

Synthetic data example
Developers can expand synthetic datasets with alterations that provide more variety and better AI accuracy.

In the wake of the Big Bang of AI, the 2012 ImageNet competition in which a neural network recognized objects faster than a human could, researchers started hunting in earnest for synthetic data.

Within a couple years, “researchers were using rendered images in experiments, and it was paying off well enough that people started investing in products and tools to generate data with their 3D engines and content pipelines,” said Gavriel State, a senior director of simulation technology and AI at NVIDIA.

Ford, BMW Generate Synthetic Data

Banks, car makers, drones, factories, hospitals, retailers, robots and scientists use synthetic data today.

In a recent podcast, researchers from Ford described how they combine gaming engines and generative adversarial networks (GANs) to create synthetic data for AI training.

To optimize the process of how it makes cars, BMW created a virtual factory using NVIDIA Omniverse, a simulation platform that lets companies collaborate using multiple tools. The data BMW generates helps fine tune how assembly workers and robots work together to build cars efficiently.

Synthetic Data at the Hospital, Bank and Store

Healthcare providers in fields such as medical imaging use synthetic data to train AI models while protecting patient privacy. For example, startup Curai trained a diagnostic model on 400,000 simulated medical cases.

“GAN-based architectures for medical imaging, either generating synthetic data [or] adapting real data from other domains … will define the state of the art in the field for years to come,” said Nikolenko in his 2019 survey.

GANs are getting traction in finance, too. American Express studied ways to use GANs to create synthetic data, refining its AI models that detect fraud.

In retail, companies such as startup Caper use 3D simulations to take as few as five images of a product and create a synthetic dataset of a thousand images. Such datasets enable smart stores where customers grab what they need and go without waiting in a checkout line.

How Do You Create Synthetic Data?

“There are a bazillion techniques out there” to generate synthetic data, said State from NVIDIA. For example, variational autoencoders compress a dataset to make it compact, then use a decoder to spawn a related synthetic dataset.
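
Here is a minimal sketch, in PyTorch, of that encode-then-decode idea. The network sizes are arbitrary and training is omitted; the point is that once trained, new synthetic samples come from decoding random draws from the compact latent space.

```python
# Minimal variational autoencoder sketch in PyTorch: compress data to a small
# latent space, then sample from that space to generate related synthetic examples.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, data_dim=784, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU())
        self.to_mu = nn.Linear(128, latent_dim)       # mean of the latent code
        self.to_logvar = nn.Linear(128, latent_dim)   # log-variance of the latent code
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, data_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return self.decoder(z), mu, logvar

vae = TinyVAE()

# After training on real data (omitted here), new synthetic samples come from
# decoding random draws from the latent space.
with torch.no_grad():
    z = torch.randn(64, 16)        # 64 random latent codes
    synthetic = vae.decoder(z)     # 64 synthetic data points, shape (64, 784)
```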

While GANs are on the rise, especially in research, simulations remain a popular option for two reasons. They support a host of tools to segment and classify still and moving images, generating perfect labels. And they can quickly spawn versions of objects and environments with different colors, lighting, materials and poses.

This last capability delivers the synthetic data that’s crucial for domain randomization, a technique increasingly used to improve the accuracy of AI models.

Pro Tip: Use Domain Randomization

Domain randomization uses thousands of variations of an object and its environment so an AI model can more easily understand the general pattern. The video below shows how a smart warehouse uses domain randomization to train an AI-powered robot.

Domain randomization helps close the so-called domain gap, the difference between how a model performs on the data it was trained on and how it performs on the exact situation it encounters on a given day. That’s why NVIDIA is building domain randomization into the synthetic data generation tools in Omniverse, one part of the work described in a recent talk at GTC.
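
In schematic form, domain randomization amounts to sampling thousands of scene descriptions and handing each one to a renderer that produces an image with pixel-perfect labels. The sketch below generates only the descriptions; the parameter names and ranges are illustrative and not tied to Omniverse or any particular renderer.

```python
# Schematic domain randomization: sample thousands of scene variations so a model
# sees the same object under many colors, lighting conditions and poses.
# Parameter names and ranges are illustrative, not tied to any renderer.
import random

def random_scene(object_name):
    return {
        "object": object_name,
        "color_rgb": [random.random() for _ in range(3)],   # random surface color
        "light_intensity": random.uniform(100.0, 2000.0),   # arbitrary brightness units
        "light_angle_deg": random.uniform(0.0, 360.0),
        "camera_distance_m": random.uniform(0.5, 5.0),
        "object_yaw_deg": random.uniform(0.0, 360.0),
        "background": random.choice(["warehouse", "street", "plain"]),
    }

# In a real pipeline each description would be handed to a renderer that produces
# an image plus pixel-perfect labels; here we just generate the descriptions.
dataset_spec = [random_scene("forklift") for _ in range(10_000)]
print(dataset_spec[0])
```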

Such techniques are helping computer vision apps move from detecting and classifying objects in images to seeing and understanding activities in videos.

“The market is moving in this direction, but the technology is more complex. Synthetic data is even more valuable here because it lets you create fully annotated video frames,” said Walborsky of AI.Reverie.

Where Can I Get Synthetic Data?

Though the sector is only a few years old, more than 50 companies already provide synthetic data. Each has its own special sauce, often a focus on a particular vertical market or technique.

For example, a handful specialize in health care uses. A half dozen offer open source tools or datasets, including the Synthetic Data Vault, a set of libraries, projects and tutorials developed at MIT.

NVIDIA aims to work with a wide range of synthetic data and data-labeling services. Among its latest partners:

  • AI.Reverie in New York offers simulation environments with configurable sensors that let users collect their own datasets, and it has worked on large-scale projects in areas such as agriculture, smart cities, security and manufacturing.
  • Sky Engine, based in London, works on computer vision apps across markets and can help users design their own data-science workflow.
  • Israel-based Datagen creates synthetic datasets from simulations for a wide range of markets, including smart stores, robotics and interiors for cars and buildings.
  • CVEDIA includes Airbus, Honeywell and Siemens among users of its customizable tools for computer vision based on synthetic data.

Enabling a Marketplace with Omniverse

With Omniverse, NVIDIA aims to enable an expanding galaxy of designers and programmers interested in building or collaborating in virtual worlds across every industry. Synthetic data generation is one of many businesses the company expects will live there.

NVIDIA created Isaac Sim as an application in Omniverse for robotics. Users can train robots in this virtual world with synthetic data and domain randomization and deploy the resulting software on robots working in the real world.

Omniverse supports multiple applications for vertical markets such as NVIDIA DRIVE Sim for autonomous vehicles. It’s been letting developers test self-driving cars in the safety of a realistic simulation, generating useful datasets even in the midst of the pandemic.

These applications are among the latest examples of how simulations are fulfilling the promise of synthetic data for AI.


Tilling AI: Startup Digs into Autonomous Electric Tractors for Organics

Who knew AI would have an impact on organic produce markets?

Bakur Kvezereli did. Raised on an Eastern European organic tomato farm, he saw labor shortages, pesticide problems and rising energy costs. Years later, while studying at MIT in 2014, he realized that AI could help address these issues for farmers.

A few years later, he founded Ztractor, an autonomous electric tractor startup based in Palo Alto, Calif.

Ztractor offers tractors that can be configured to work on 135 different types of crops. They rely on the NVIDIA Jetson edge AI platform for computer vision tasks to help farms improve plant conditions, increase crop yields and achieve higher efficiency.

Going electric means Ztractor machines also don’t belch out plumes of black diesel exhaust.

The company is among a growing field of agriculture companies like Bilberry, FarmWise, SeeTree, Smart Ag and John Deere-owned Blue River adopting NVIDIA GPUs for training and inference. These companies are leading the way in reducing the use of herbicides and supporting organic farms.

‘Tesla Moment’ for Tractors

Kvezereli said that one of the insights he had while studying for an MBA at MIT was that industrial agricultural machines were poised for their own electric revolution.

The same dynamics that affect autos — pollution reduction mandates and clean vehicle purchase rebates — are facing agricultural equipment makers and their customers, he said.

“I realized that the tractor industry is on the cusp of its own Tesla moment, with a transition to electric machines and AI,” said Kvezereli. “The tractor companies will be shifting to electric.”

The worldwide tractor market was valued at just under $150 billion in 2020, according to market researcher Mordor Intelligence.

AI Supports Organic Farms

In many countries, farm labor is limited. In a California Farm Bureau Federation survey of more than 1,000 farmers, 56 percent reported being unable to hire adequate labor.

Ztractor offers three models to assist farmers. Each sports 67 sensors, six cameras and GPS. The tractors collect field data that’s fed into models run on the NVIDIA Jetson edge AI platform to provide insights about crop conditions.

Pests can be identified in milliseconds with convolutional neural networks, quantified and mapped to a particular zone, enabling faster treatment. If aphids are invading an organic farm, it might be time to release ladybugs to gobble them up. The alternative, relying on periodic boots-on-the-ground inspections, can lead to crop yield losses.

“You need to hire 30 percent more people to achieve the same competitive quality of organic tomatoes as non-organic,” said Kvezereli. “That’s where Ztractor comes in.”

Ztractor machines can also handle soil preparation tasks like tilling and disking. They can work with seeding and precision weed control equipment — like smart sprayers with AI-driven cameras — to help make up for labor shortages for these tasks.

The machines can run eight to 12 hours on a charge, depending on the unit.

Jetson-Driven Tractor Autonomy 

Farmers can set coordinates for the tractor’s path using satellite data or aerial images and the onboard GPS for geofencing the machines.
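
A geofence check of this kind can be as simple as testing whether the tractor’s GPS fix falls inside a polygon of boundary coordinates. The sketch below shows one common approach (a ray-casting point-in-polygon test) with made-up coordinates; it is an illustration, not Ztractor’s actual implementation.

```python
# Illustrative geofence check: keep a tractor inside a field boundary defined by
# GPS coordinates. Uses a standard ray-casting point-in-polygon test.
def inside_geofence(lat, lon, boundary):
    """Return True if (lat, lon) falls inside the polygon `boundary`."""
    inside = False
    n = len(boundary)
    for i in range(n):
        lat1, lon1 = boundary[i]
        lat2, lon2 = boundary[(i + 1) % n]
        crosses = (lon1 > lon) != (lon2 > lon)  # edge spans the point's longitude
        if crosses and lat < (lat2 - lat1) * (lon - lon1) / (lon2 - lon1) + lat1:
            inside = not inside
    return inside

field_boundary = [  # hypothetical corner points of a field near Gilroy, Calif.
    (36.998, -121.570), (36.998, -121.565),
    (36.994, -121.565), (36.994, -121.570),
]
print(inside_geofence(36.996, -121.567, field_boundary))  # True: inside the fence
print(inside_geofence(37.000, -121.567, field_boundary))  # False: outside the fence
```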

The tractors run the TrailNet model, processed on the NVIDIA Jetson Xavier, for real-time path planning. The neural network was trained on a custom dataset of about 500 images using cloud instances of NVIDIA GPUs.

Its plug-in electric system offers a 75 percent reduction in energy costs compared with diesel tractors, according to Kvezereli.

“There is a demand for zero-emission farming from both a public policy and customer perspective,” said Kvezereli.

Pilot tests of Ztractor’s system are underway at several organic garlic family farms based in Gilroy, Calif., just southeast of Silicon Valley.


Need for Speed: Researchers Switch on World’s Fastest AI Supercomputer

It will help piece together a 3D map of the universe, probe subatomic interactions for green energy sources and much more.

Perlmutter, officially dedicated today at the National Energy Research Scientific Computing Center (NERSC), is a supercomputer that will deliver nearly four exaflops of AI performance for more than 7,000 researchers.

That makes Perlmutter the fastest system on the planet on the 16- and 32-bit mixed-precision math AI uses. And that performance doesn’t even include a second phase coming later this year to the system based at Lawrence Berkeley National Lab.

More than two dozen applications are getting ready to be among the first to ride the 6,159 NVIDIA A100 Tensor Core GPUs in Perlmutter, the largest A100-powered system in the world. They aim to advance science in astrophysics, climate science and more.

A 3D Map of the Universe

In one project, the supercomputer will help assemble the largest 3D map of the visible universe to date. It will process data from the Dark Energy Spectroscopic Instrument (DESI), a kind of cosmic camera that can capture as many as 5,000 galaxies in a single exposure.

Researchers need the speed of Perlmutter’s GPUs to process dozens of exposures from one night in time to know where to point DESI the next night. Preparing a year’s worth of the data for publication would take weeks or months on prior systems, but Perlmutter should help them accomplish the task in as little as a few days.

“I’m really happy with the 20x speedups we’ve gotten on GPUs in our preparatory work,” said Rollin Thomas, a data architect at NERSC who’s helping researchers get their code ready for Perlmutter.

Perlmutter’s Persistence Pays Off

DESI’s map aims to shed light on dark energy, the mysterious physics behind the accelerating expansion of the universe. Dark energy was largely discovered through the 2011 Nobel Prize-winning work of Saul Perlmutter, a still-active astrophysicist at Berkeley Lab who will help dedicate the new supercomputer named for him.

“To me, Saul is an example of what people can do with the right combination of insatiable curiosity and a commitment to optimism,” said Thomas, who worked with Perlmutter on projects following up the Nobel-winning discovery.

Supercomputer Blends AI, HPC

A similar spirit fuels many projects that will run on NERSC’s new supercomputer. For example, work in materials science aims to discover atomic interactions that could point the way to better batteries and biofuels.

Traditional supercomputers can barely handle the math required to generate simulations of a few atoms over a few nanoseconds with programs such as Quantum Espresso. But by combining their highly accurate simulations with machine learning, scientists can study more atoms over longer stretches of time.

“In the past it was impossible to do fully atomistic simulations of big systems like battery interfaces, but now scientists plan to use Perlmutter to do just that,” said Brandon Cook, an applications performance specialist at NERSC who’s helping researchers launch such projects.

That’s where Tensor Cores in the A100 play a unique role. They accelerate both the double-precision floating point math for simulations and the mixed-precision calculations required for deep learning.

Similar work won NERSC recognition in November as a Gordon Bell finalist for its BerkeleyGW program using NVIDIA V100 GPUs. The extra muscle of the A100 promises to take such efforts to a new level, said Jack Deslippe, who led the project and oversees application performance at NERSC.

Software Helps Perlmutter Sing

Software is a strategic component of Perlmutter, too, said Deslippe, noting support for OpenMP and other popular programming models in the NVIDIA HPC SDK the system uses.

Separately, RAPIDS, open-source code for data science on GPUs, will speed the work of NERSC’s growing team of Python programmers. It proved its value in a project that analyzed all the network traffic on NERSC’s Cori supercomputer nearly 600x faster than prior efforts on CPUs.

“That convinced us RAPIDS will play a major part in accelerating scientific discovery through data,” said Thomas.
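
To give a feel for why that kind of speedup is possible, here is a hypothetical sketch in the RAPIDS cuDF style: the same pandas-like calls a Python programmer would write on a CPU, executed on the GPU instead. The file and column names are invented for illustration, and running it requires an NVIDIA GPU with RAPIDS installed.

```python
# Hypothetical cuDF sketch: aggregate network-flow logs on the GPU with a
# pandas-like API. File and column names are made up for illustration.
import cudf

flows = cudf.read_csv("network_flows.csv")  # assumed columns: src_ip, dst_ip, bytes

# Find the ten sources sending the most traffic, computed entirely on the GPU.
top_talkers = (
    flows.groupby("src_ip")["bytes"]
         .sum()
         .sort_values(ascending=False)
         .head(10)
)
print(top_talkers)
```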

Coping with COVID’s Challenges

Despite the pandemic, Perlmutter is on schedule. But the team had to rethink critical steps like how it ran hackathons for researchers working from home on code for the system’s exascale-class applications.

Meanwhile, engineers from Hewlett Packard Enterprise helped assemble phase 1 of the system, collaborating with NERSC staff who upgraded their facility to accommodate the new system. “We greatly appreciate the work of those people onsite bringing the system up, especially under all the special COVID protocols,” said Thomas.

At the virtual launch event, NVIDIA CEO Jensen Huang congratulated the Berkeley Lab crew on its plans to advance science with the supercomputer.

“Perlmutter’s ability to fuse AI and high performance computing will lead to breakthroughs in a broad range of fields from materials science and quantum physics to climate projections, biological research and more,” Huang said.

On Time for AI Supercomputing

The virtual ribbon cutting today represents a very real milestone.

“AI for science is a growth area at the U.S. Department of Energy, where proof of concepts are moving into production use cases in areas like particle physics, materials science and bioenergy,” said Wahid Bhimji, acting lead for NERSC’s data and analytics services group.

“People are exploring larger and larger neural-network models and there’s a demand for access to more powerful resources, so Perlmutter with its A100 GPUs, all-flash file system and streaming data capabilities is well timed to meet this need for AI,” he added.

Researchers who want to run their work on Perlmutter can submit a request for access to the system.


What Is Explainable AI?

Banks use AI to determine whether to extend credit, and how much, to customers. Radiology departments deploy AI to help distinguish between healthy tissue and tumors. And HR teams employ it to work out which of hundreds of resumes should be sent on to recruiters.

These are just a few examples of how AI is being adopted across industries. And with so much at stake, businesses and governments adopting AI and machine learning are increasingly being pressed to lift the veil on how their AI models make decisions.

Charles Elkan, a managing director at Goldman Sachs, offers a sharp analogy for much of the current state of AI, in which organizations debate its trustworthiness and how to overcome objections to AI systems:

“We don’t understand exactly how a bomb-sniffing dog does its job, but we place a lot of trust in the decisions they make.”

To reach a better understanding of how AI models come to their decisions, organizations are turning to explainable AI.

What Is Explainable AI?

Explainable AI, or XAI, is a set of tools and techniques used by organizations to help people better understand why a model makes certain decisions and how it works. XAI is: 

  • A set of best practices: It takes advantage of some of the best procedures and rules that data scientists have been using for years to help others understand how a model is trained. Knowing how, and on what data, a model was trained helps us understand when it does and doesn’t make sense to use that model. It also shines a light on what sources of bias the model might have been exposed to.
  • A set of design principles: Researchers are increasingly focused on simplifying the building of AI systems to make them inherently easier to understand.
  • A set of tools: As the systems get easier to understand, the training models can be further refined by incorporating those learnings into them — and by offering those learnings to others for incorporation into their models.

How Does Explainable AI Work?

While there’s still a great deal of debate over the standardization of XAI processes, a few key points resonate across industries implementing it:

  • Who do we have to explain the model to?
  • How accurate or precise an explanation do we need?
  • Do we need to explain the overall model or a particular decision?
Source: DARPA

Data scientists are focusing on all these questions, but explainability boils down to: What are we trying to explain?

Explaining the pedigree of the model:

  • How was the model trained?
  • What data was used?
  • How was the impact of any bias in the training data measured and mitigated?

These questions are the data science equivalent of explaining what school your surgeon went to — along with who their teachers were, what they studied and what grades they got. Getting this right is more about process and leaving a paper trail than it is about pure AI, but it’s critical to establishing trust in a model.

While explaining a model’s pedigree sounds fairly easy, it’s hard in practice, as many tools currently don’t support strong information-gathering. NVIDIA provides such information about its pretrained models. These are shared on the NGC catalog, a hub of GPU-optimized AI and high performance computing SDKs and models that quickly help businesses build their applications.
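
One lightweight way to keep that paper trail is to store a machine-readable pedigree record alongside the model artifact, answering the questions above. The sketch below is illustrative only; the field names and values are hypothetical, not a standard schema.

```python
# Illustrative "model pedigree" record: a machine-readable summary of how a model
# was trained, stored alongside the model artifact. Fields and values are examples.
import json

pedigree = {
    "model_name": "loan_default_classifier",      # hypothetical model
    "version": "1.4.0",
    "training_data": {
        "source": "internal_loans_2015_2020",     # hypothetical dataset name
        "rows": 1_250_000,
        "known_gaps": ["thin-file applicants underrepresented"],
    },
    "bias_checks": {
        "method": "disparate impact ratio by protected class",
        "mitigation": "reweighting of underrepresented groups",
    },
    "intended_use": "pre-screening, with human review of all declines",
}

with open("loan_default_classifier.pedigree.json", "w") as f:
    json.dump(pedigree, f, indent=2)
```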

Explaining the overall model:

Sometimes called model interpretability, this is an active area of research. Most model explanations fall into one of two camps:

In a technique sometimes called “proxy modeling,” simpler, more easily comprehended models like decision trees can be used to approximately describe the more detailed AI model. These explanations give a “sense” of the model overall, but the tradeoff between approximation and simplicity of the proxy model is still more art than science.

Proxy modeling is always an approximation and, even if applied well, it can create opportunities for real-life decisions to be very different from what’s expected from the proxy models.
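
A minimal scikit-learn sketch makes the idea concrete: fit a complex model, train a shallow decision tree to mimic its predictions, and measure how faithfully the proxy tracks the model it is meant to explain. The synthetic dataset here is a stand-in for real data.

```python
# Minimal proxy-modeling sketch with scikit-learn: approximate a complex model
# with a shallow decision tree and measure how faithfully the proxy mimics it.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)

complex_model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Train the proxy on the complex model's *predictions*, not the true labels.
proxy = DecisionTreeClassifier(max_depth=3, random_state=0)
proxy.fit(X, complex_model.predict(X))

# Fidelity: how often the simple tree agrees with the model it explains.
fidelity = accuracy_score(complex_model.predict(X), proxy.predict(X))
print(f"Proxy fidelity: {fidelity:.2%}")
print(export_text(proxy))  # the human-readable rules that serve as the explanation
```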

The second approach is “design for interpretability.” This limits the design and training options of the AI network in ways that attempt to assemble the overall network out of smaller parts that we force to have simpler behavior. This can lead to models that are still powerful, but with behavior that’s much easier to explain.

This isn’t as easy as it sounds, however, and it sacrifices some level of efficiency and accuracy by removing components and structures from the data scientist’s toolbox. This approach may also require significantly more computational power.

Why XAI Explains Individual Decisions Best

The best understood area of XAI is individual decision-making: why a person didn’t get approved for a loan, for instance.

Techniques with names like LIME and SHAP offer very literal mathematical answers to this question — and the results of that math can be presented to data scientists, managers, regulators and consumers. For some data — images, audio and text — similar results can be visualized through the use of “attention” in the models — forcing the model itself to show its work.

In the case of the Shapley values used in SHAP, the underlying technique rests on mathematical proofs, grounded in game theory work from the 1950s, that make it particularly attractive. There is active research in using these explanations of individual decisions to explain the model as a whole, mostly focusing on clustering and forcing various smoothness constraints on the underlying math.
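
For a sense of what that looks like in practice, here is a hedged sketch using the open-source shap package to attribute a single prediction to its input features. The toy dataset and model stand in for a real decision system, such as a loan-approval model.

```python
# Hedged sketch with the open-source `shap` package: attribute one prediction to
# its input features using Shapley-value estimates. Toy data stands in for a real
# decision system such as a loan-approval model.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)       # fast explainer for tree ensembles
shap_values = explainer.shap_values(X[:1])  # per-feature contributions for one case

print("Base value:", explainer.expected_value)
print("Feature contributions:", shap_values[0])
```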

The drawback to these techniques is that they’re somewhat computationally expensive. In addition, without significant effort during the training of the model, the results can be very sensitive to the input data values. Some also argue that because data scientists can only calculate approximate Shapley values, the attractive and provable features of these numbers are also only approximate — sharply reducing their value.

While healthy debate remains, it’s clear that by maintaining a proper model pedigree, adopting a model explainability method that provides clarity to senior leadership on the risks involved in the model, and monitoring actual outcomes with individual explanations, AI models can be built with clearly understood behaviors.

For a closer look at examples of XAI work, check out the talks presented by Wells Fargo and ScotiaBank at NVIDIA GTC21.


How Diversity Drives Innovation: Catch Up on Inclusion in AI with NVIDIA On-Demand

NVIDIA’s GPU Technology Conference is a hotbed for sharing groundbreaking innovations — making it the perfect forum for developers, students and professionals from underrepresented communities to discuss the challenges and opportunities surrounding AI.

Last month’s GTC virtually brought together tens of thousands of attendees from around the world, with more than 20,000 developers from emerging markets, hundreds of women speakers and a variety of session topics focused on diversity and inclusion in AI.

The event saw a 6x increase in female attendees compared with last fall’s event, a 6x jump in Black attendees and a 5x boost in Hispanic and Latino attendees. Dozens signed up for hands-on training from the NVIDIA Deep Learning Institute and joined networking sessions hosted by NVIDIA community resource groups in collaboration with organizations like Black in AI and LatinX in AI.

More than 1,500 sessions from GTC 2021 are now available for free replay on NVIDIA On-Demand — including panel discussions on AI literacy and efforts to grow the participation of underrepresented groups in science and engineering.

Advocating for AI Literacy Among Youth

In a session called “Are You Smarter Than a Fifth Grader Who Knows AI?,” STEM advocates Justin Shaifer and Maynard Okereke (known as Mr. Fascinate and the Hip Hop M.D., respectively) led a conversation about initiatives to help young people understand AI.

Given the ubiquity of AI technologies, being surrounded by it “is essentially just how they live,” said Jim Gibbs, CEO of the Pittsburgh-based startup Meter Feeder. “They just don’t know any different.”

But school curriculums often don’t teach young people how AI technologies work, how they’re developed or about AI ethics. So it’s important to help the next generation of developers prepare “to take advantage of all the new opportunities that there are going to be for people who are familiar with machine learning and artificial intelligence,” he said.

Panelist Lisa Abel-Palmieri, CEO of the Boys & Girls Clubs of Western Pennsylvania, described how her organization’s STEM instructors audited graduate-level AI classes at Carnegie Mellon University to inform a K-12 curriculum for children from historically marginalized communities. NVIDIA recently announced a three-year AI education partnership with the organization to create an AI Pathways Toolkit that Boys & Girls Clubs nationwide can deliver to students, particularly those from underserved and underrepresented communities.

And Babak Mostaghimi, assistant superintendent of Georgia’s Gwinnett County Public Schools, shared how his team helps students realize how AI is relevant to their daily experiences.

“We started really getting kids to understand that AI is already part of your everyday life,” he said. “And when kids realize that, it’s like, wait a minute, let me start asking questions like: Why does the algorithm behind something cause a certain video to pop up and not others?”

Watch the full session replay on NVIDIA On-Demand.

Diverse Participation Brings Unique Perspectives

Another panel, “Diversity Driving AI Innovation,” was led by Brennon Marcano, CEO of the National GEM Consortium, a nonprofit focused on diversifying representation in science and engineering.

Researchers and scientists from Apple, Amazon Web Services and the University of Utah shared their experiences working in AI, and the value that the perspectives of underrepresented groups can provide in the field.

“Your system on the outside is only as good as the data going in on the side,” said Marcano. “So if the data is homogeneous and not diverse, then the output suffers from that.”

But diversity of datasets isn’t the only problem, said Nashlie Sephus, a tech evangelist at Amazon Web Services AI who focuses on fairness and identifying biases. Another essential consideration is making sure developer teams are diverse.

“Just by having someone on the team with a diverse experience, a diverse perspective and background — it goes a long way. Teams and companies are now starting to realize that,” she said.

The panel described how developers can mitigate algorithmic bias, improve diversity on their teams and find strategies to fairly compensate focus groups who provide feedback on products.

“Whenever you are trying to create something in software that will face the world, the only way you can be precisely coupled to that world is to invite the world into that process,” said Rogelio Cardona-Rivera, assistant professor at the University of Utah. “There’s no way you will be able to be as precise if you leave diversity off the table.”

Watch the discussion here.

Learn more about diversity and inclusion at GTC, and watch additional session replays on NVIDIA On-Demand. Find the GTC keynote address by NVIDIA CEO Jensen Huang here.


AI Researcher Explains Deep Learning’s Collision Course with Particle Physics

For a particle physicist, the world’s biggest questions — how did the universe originate and what’s beyond it — can only be answered with help from the world’s smallest building blocks.

James Kahn, a consultant with German research platform Helmholtz AI and a collaborator on the global Belle II particle physics experiment, uses AI and the NVIDIA DGX A100 to understand the fundamental rules governing particle decay.

Kahn spoke with NVIDIA AI Podcast host Noah Kravitz about the specifics of how AI is accelerating particle physics.

He also touched on his work at Helmholtz AI. Kahn helps researchers in fields from medicine to earth sciences apply AI to the problems they’re solving. His wide-ranging career — from particle physicist to computer scientist — shows how AI accelerates every industry.

Key Points From This Episode:

  • The nature of particle physics research, which requires numerous simulations and constant adjustments, requires massive AI horsepower. Kahn’s team used the DGX A100 to reduce the time it takes to optimize simulations from a week to roughly a day.
  • The majority of Kahn’s work is global — at Helmholtz AI, he collaborates with researchers from Beijing to Tel Aviv, with projects located anywhere from the Southern Ocean to Spain. And at the Belle II experiment, Kahn is one of more than 1,000 researchers from 26 countries.

Tweetables:

“If you’re trying to simulate all the laws of physics, that’s a lot of simulations … that’s where these big, powerful machines come into play.” — James Kahn [6:02]

“AI is seeping into every aspect of research.” — James Kahn [16:37]

You Might Also Like:

Speed of Light: SLAC’s Ryan Coffee Talks Ultrafast Science

Particle physicist Ryan Coffee, senior staff scientist at the SLAC National Accelerator Laboratory, talks about how he is putting deep learning to work.

A Conversation About Go, Sci-Fi, Deep Learning and Computational Chemistry

Olexandr Isayev, an assistant professor at the UNC Eshelman School of Pharmacy at the University of North Carolina at Chapel Hill, explains how deep learning, abstract strategy board game Go, sci-fi and computational chemistry intersect.

How Deep Learning Can Accelerate the Quest for Cheap, Clean Fusion Energy

William Tang, principal research physicist at the Princeton Plasma Physics Laboratory, is one of the world’s foremost experts on how the science of fusion energy and HPC intersect. He talks about how he sees AI enabling the quest to deliver fusion energy.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

Tune in to the Apple Podcast Tune in to the Google Podcast Tune in to the Spotify Podcast

Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.


AI Slam Dunk: Startup’s Checkout-Free Stores Provide Stadiums Fast Refreshments

With live sports making a comeback, one thing remains a constant: Nobody likes to miss big plays while waiting in line for a cold drink or snack.

Zippin offers sports fans checkout-free refreshments, and it’s racking up wins among stadiums as well as retailers, hotels, apartments and offices. The startup, based in San Francisco, develops image-recognition models that run on the NVIDIA Jetson edge AI platform to help track customer purchases.

People can simply enter their credit card details into the company’s app, scan into a Zippin-driven store, grab a cold one and any snacks, and go. Their receipt is available in the app afterwards. Customers can also bypass the app and simply use a credit card to enter the stores and Zippin automatically keeps track of their purchases and charges them.

“We don’t want fans to be stuck waiting in line,” said Motilal Agrawal, co-founder and chief scientist at Zippin.

As sports and entertainment venues begin to reopen in limited capacities, Zippin’s grab-and-go stores are offering quicker shopping and better social distancing without checkout lines.

Zippin is a member of NVIDIA Inception, a virtual accelerator program that helps startups in AI and data science get to market faster. “The Inception team met with us, loaned us our first NVIDIA GPU and gave us guidance on NVIDIA SDKs for our application,” he said.

Streak of Stadiums

Zippin has launched in three stadiums so far, all in the U.S. It’s in negotiations to develop checkout-free shopping for several other major sports venues in the country.

In March, the San Antonio Spurs’ AT&T Center reopened with limited capacity for the NBA season, unveiling a Zippin-enabled Drink MKT beverage store. Basketball fans can scan in with the Zippin mobile app or use their credit card, grab drinks and go. Cameras and shelves with scales identify purchases to automatically charge customers.

The debut in San Antonio comes after Zippin came to Mile High Stadium, in Denver, in November, for limited capacity Broncos games. Before that, Zippin unveiled its first stadium, the Golden 1 Center, in Sacramento. It allows customers to purchase popcorn, draft beer and other snacks and drinks and is open for Sacramento Kings basketball games and concerts.

“Our mission is to accelerate the adoption of checkout-free stores, and sporting venues are the ideal location to benefit from our approach,” Agrawal said.

Zippin Store Advances  

In addition to stadiums, Zippin has launched stores within stores for grab-and-go food and beverages in Lojas Americanas, a large retail chain in Brazil.

In Russia, the startup has put a store within a store inside an Azbuka Vkusa supermarket in Moscow. Zippin is also in Japan, where it has a pilot store in Tokyo with convenience store chain Lawson in an office location, and another store within the Yokohama Techno Tower Hotel.

As an added benefit for retailers, Zippin’s platform can track products to help automate inventory management.

“We provide a retailer dashboard to see how much inventory there is for each individual item and which items have run low on stock. We can help to know exactly how much is in the store — all these detailed analytics are part of our offering,” Agrawal said.

Jetson Processing

Zippin relies on the NVIDIA Jetson AI platform for inference at 30 frames per second for its models, enabling split-second decisions on customer purchases. The application’s processing speed means it can keep up with a crowded store.

The company runs convolutional neural networks for product identification and store location identification to help track customer purchases. In Zippin’s retail implementations, stores also use smart shelves to determine whether a product was removed from or returned to a shelf.

The NVIDIA edge AI-driven platform can then process the shelf data and the video data together — sensor fusion — to determine almost instantly who grabbed what.

“It can deploy and work effectively on two out of three sensors (visual, weight and location) and then figure out the products on the fly, with training ongoing in action in deployment to improve the system,” said Agrawal.
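
In schematic terms, that fusion step boils down to reconciling what the cameras think was taken with how much weight left the shelf. The sketch below illustrates the idea with made-up product weights and confidence scores; it is not Zippin’s actual system.

```python
# Schematic sensor fusion: combine a camera-based guess with a shelf weight change
# to decide which product a shopper took. All numbers are made up for illustration;
# this is not Zippin's actual system.
PRODUCTS = {"cola_can": 355, "water_bottle": 500, "chips_bag": 70}  # weights in grams

def fuse(vision_scores, weight_delta_g, tolerance_g=25):
    """Pick the product whose known weight matches the shelf's weight drop,
    weighted by the vision model's confidence for that product."""
    best_item, best_score = None, 0.0
    for item, confidence in vision_scores.items():
        weight_match = abs(PRODUCTS[item] - weight_delta_g) <= tolerance_g
        score = confidence * (1.0 if weight_match else 0.2)  # downweight mismatches
        if score > best_score:
            best_item, best_score = item, score
    return best_item, best_score

# The vision model is unsure between two items; the shelf lost about 350 grams.
vision_scores = {"cola_can": 0.55, "water_bottle": 0.45}
item, score = fuse(vision_scores, weight_delta_g=348)
print(item, round(score, 2))  # cola_can 0.55
```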


Sharpening Its Edge: U.S. Postal Service Opens AI Apps on Edge Network

In 2019, the U.S. Postal Service needed to identify and track items in its torrent of more than 100 million pieces of daily mail.

A USPS AI architect had an idea. Ryan Simpson wanted to expand an image analysis system a postal team was developing into something much broader that could tackle this needle-in-a-haystack problem.

With edge AI servers strategically located at its processing centers, he believed USPS could analyze the billions of images each center generated. The resulting insights, expressed in a few key data points, could be shared quickly over the network.

The data scientist, half a dozen architects at NVIDIA and others designed the deep-learning models needed in a three-week sprint that felt like one long hackathon. The work was the genesis of the Edge Computing Infrastructure Program (ECIP, pronounced EE-sip), a distributed edge AI system that’s up and running on the NVIDIA EGX platform at USPS today.

An AI Platform at the Edge

It turns out edge AI is a kind of stage for many great performances. ECIP is already running a second app that acts like automated eyes, tracking items for a variety of business needs.

USPS camera gantry
Cameras mounted on the sorting machines capture addresses, barcodes and other data such as hazardous materials symbols. Courtesy of U.S. Postal Service.

“It used to take eight or 10 people several days to track down items, now it takes one or two people a couple hours,” said Todd Schimmel, the manager who oversees USPS systems including ECIP, which uses NVIDIA-Certified edge servers from Hewlett Packard Enterprise.

Another analysis was even more telling. It said a computer vision task that would have required two weeks on a network of servers with 800 CPUs can now get done in 20 minutes on the four NVIDIA V100 Tensor Core GPUs in one of the HPE Apollo 6500 servers.

Today, each edge server processes 20 terabytes of images a day from more than 1,000 mail processing machines. Open source software from NVIDIA, the Triton Inference Server, acts as the digital mailperson, delivering the AI models each of the 195 systems need — when and how they need them.
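
For developers unfamiliar with Triton, the sketch below shows roughly how a client application can request inference from a Triton server over HTTP using the open-source tritonclient Python package. The server address, model name and tensor names are placeholders, not the USPS configuration.

```python
# Hedged sketch of querying a Triton Inference Server with the `tritonclient`
# package over HTTP. The server URL, model name and tensor names are placeholders,
# not the USPS configuration.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# One 512x512 RGB mail image, batched; the expected layout depends on the model.
image = np.random.rand(1, 3, 512, 512).astype(np.float32)

inputs = httpclient.InferInput("input", list(image.shape), "FP32")
inputs.set_data_from_numpy(image)
outputs = httpclient.InferRequestedOutput("output")

response = client.infer(model_name="mailpiece_detector",
                        inputs=[inputs], outputs=[outputs])
print(response.as_numpy("output").shape)
```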

Next App for the Edge

USPS put out a request for what could be the next app for ECIP, one that uses optical character recognition (OCR) to streamline its imaging workflow.

“In the past, we would have bought new hardware, software — a whole infrastructure for OCR; or if we used a public cloud service, we’d have to get images to the cloud, which takes a lot of bandwidth and has significant costs when you’re talking about approximately a billion images,” said Schimmel.

Today, the new OCR use case will live as a deep learning model in a container on ECIP managed by Kubernetes and served by Triton.

The same systems software smoothed the initial deployment of ECIP in the early weeks of the pandemic. Operators rolled out containers to get the first systems running as others were being delivered, updating them as the full network of nearly 200 nodes was installed.

“The deployment was very streamlined,” Schimmel said. “We awarded the contract in September 2019, started deploying systems in February 2020 and finished most of the hardware by August — the USPS was very happy with that,” he added.

Triton Expedites Model Deliveries

Part of the software magic dust under ECIP’s hood, Triton automates the delivery of different AI models to different systems that may have different versions of GPUs and CPUs supporting different deep-learning frameworks. That saves a lot of time for edge AI systems like the ECIP network of almost 200 distributed servers.

NVIDIA DGX servers at USPS
AI algorithms were developed on NVIDIA DGX servers at a U.S. Postal Service Engineering facility. Courtesy of NVIDIA.

The app that checks for mail items alone requires coordinating the work of more than a half dozen deep-learning models, each checking for specific features. And operators expect to enhance the app with more models enabling more features in the future.

“The models we have deployed so far help manage the mail and the Postal Service — it helps us maintain our mission,” Schimmel said.

A Pipeline of Edge AI Apps

So far, departments across USPS from enterprise analytics to finance and marketing have spawned ideas for as many as 30 applications for ECIP. Schimmel hopes to get a few of them up and running this year.

One would automatically check if a package carries the right postage for its size, weight and destination. Another one would automatically decipher a damaged barcode and could be online as soon as this summer.

“This has a benefit for us and our customers, letting us know where a specific parcel is at — it’s not a silver bullet, but it will fill a gap and boost our performance,” he said.

The work is part of a broader effort at USPS to explore its digital footprint and unlock the value of its data in ways that benefit customers.

“We’re at the very beginning of our journey with edge AI. Every day, people in our organization are thinking of new ways to apply machine learning to new facets of robotics, data processing and image handling,” he said.

Learn more about the benefits of edge computing and the NVIDIA EGX platform, as well as how NVIDIA’s edge AI solutions are transforming every industry.

Pictured at top: Postal Service employees perform spot checks to ensure packages are properly handled and sorted. Courtesy of U.S. Postal Service.


Putting the AI in Retail: Walmart’s Grant Gelven on Prediction Analytics at Supercenter Scale

With only one U.S. state without a Walmart supercenter — and over 4,600 stores across the country — the retail giant’s prediction analytics work with data on an enormous scale.

Grant Gelven, a machine learning engineer at Walmart Global Tech, joined NVIDIA AI Podcast host Noah Kravitz for the latest episode of the AI Podcast.

Gelven spoke about the big data and machine learning methods making it possible to improve everything from the customer experience to stocking to item pricing.

Gelven’s most recent project has been a dynamic pricing system, which reduces excess food waste by pricing perishable goods at a cost that ensures they’ll be sold. This improves suppliers’ ability to deliver the correct volume of items, the customers’ ability to purchase, and lessens the company’s impact on the environment.

The models that Gelven’s team works on are extremely large, with hundreds of millions of parameters. They’re impossible to run without GPUs, which help accelerate dataset preparation and training.

The improvements that machine learning has made to Walmart’s retail predictions reach further than streamlining business operations. Gelven points out that it has ultimately helped customers worldwide get the essential goods they need by allowing enterprises to react to crises and changing market conditions.

Key Points From This Episode:

  • Gelven’s goal for enterprise AI and machine learning models isn’t just to solve single use case problems, but to improve the entire customer experience through a complex system of thousands of models working simultaneously.
  • Five years ago, the time from concept to model to operations took roughly a year. Gelven explains that GPU acceleration, open-source software, and various other new tools have drastically reduced deployment times.

Tweetables:

“Solving these prediction problems really means we have to be able to make predictions about hundreds of millions of distinct units that are distributed all over the country.” — Grant Gelven [3:17]

“To give customers exactly what they need when they need it, I think is probably one of the most important things that a business or service provider can do.” — Grant Gelven [16:11]

You Might Also Like:

Focal Systems Brings AI to Grocery Stores

CEO Francois Chaubard explains how Focal Systems is applying deep learning and computer vision to automate portions of retail stores to streamline store operations and get customers in and out more efficiently.

Credit Check: Capital One’s Kyle Nicholson on Modern Machine Learning in Finance

Kyle Nicholson, a senior software engineer at Capital One, talks about how modern machine learning techniques have become a key tool for financial and credit analysis.

HP’s Jared Dame on How AI, Data Science Are Driving Demand for Powerful New Workstations

Jared Dame, Z by HP’s director of business development and strategy for AI, data science and edge technologies, speaks about the role HP’s workstations play in cutting-edge AI and data science.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

Tune in to the Apple Podcast Tune in to the Google Podcast Tune in to the Spotify Podcast

Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.
