Driverless Ed: Students Advance Self-Driving Research at ‘Formula Student Germany’ Competition

Insulin, quantum theory and the Nash equilibrium are just a few of the landmark discoveries to which university student researchers have made pivotal contributions.

Autonomous vehicles — one of the biggest transportation breakthroughs of the past century — are no different. In fact, much of today’s self-driving technology was born out of university challenges funded by the U.S. Defense Advanced Research Projects Agency (better known as DARPA) in 2005 and 2007.

The Formula Student Germany (FSG) Driverless competition, which took place earlier this month at the Hockenheimring in Germany, is one of the arenas in which students have the chance to test and demonstrate their skills in autonomous vehicle development.

Now in its second year, the competition is a growing part of the FSG international design event, which draws 4,000 students from 25 countries. To advance their designs, five of the 17 FSG Driverless student teams — including the winning group from ETH Zurich — chose to engineer their autonomous vehicles on powerful NVIDIA GPUs.

“The FSG Driverless challenge is another step where we have the chance to learn a lot about the latest technologies in the field of autonomous driving,” said Tu Pham, chief technical officer of the Technical University of Darmstadt’s racing team. “By using NVIDIA GPUs for our computer vision neural networks, we experienced a huge increase in performance — and we haven’t even come close to its computational limits.”

All-Around Competition

The teams competing in this year’s FSG Driverless competition were tasked with designing and deploying a self-driving car from the ground up.

The student teams must independently create a concept and business plan for their vehicle, build it and undergo intense technical inspection as well as “scrutineering” — rigorous oversight from the competition’s officials. Then, the teams must test their vehicle in several disciplines on the racetrack during the FSG event week.

The driverless competition takes place alongside combustion and electric vehicle competitions, which also require teams to design, engineer and compete with their own innovations.

In the week leading up to the final challenges, each team’s technical design, manufacturing and cost plans are closely evaluated. Those results are then combined with scores in the static — design, business plan, strategy — and dynamic, or racetrack, disciplines. The teams with the highest overall scores achieve top placement.

A GPU-Powered Finish

As development progressed from blueprints to the racetrack, students said NVIDIA GPUs made the process of advancing their deep learning algorithms seamless.

“As our goal as a first-year team was to complete all dynamic events with similar lap-times as last year’s winner, we needed to aim high,” said Mathias Backsaether, chief driverless engineer of the Norwegian University of Science and Technology’s racing team. “For this, we used NVIDIA, facilitating easy testing of our autonomous software.”

The work these teams accomplished in building and deploying a driverless vehicle will contribute to the overall development of this groundbreaking technology. And NVIDIA will continue to partner with university researchers around the globe, helping students make their designs and innovations a reality, and showcasing their findings on the global stage.


Intel Hosts NASA Frontier Development Lab Demo Day for 2018 Research Presentations

A NASA Frontier Development Lab photo shows a crater on the moon. (Credit: NASA Frontier Development Lab)

What’s New: Today Intel hosts the NASA Frontier Development Lab (FDL)* Event Horizon 2018: AI+Space Challenge Review Event (view live webcast) in Santa Clara, California. It concludes an eight-week research program that applies artificial intelligence (AI) technologies to challenges faced in space exploration. For the third year, NASA Ames Research Center*, the SETI Institute* and participating program partners have supported ongoing research into interdisciplinary AI approaches that pair the latest hardware with advanced machine learning tools.

“Artificial intelligence is expected to significantly influence the future of the space industry and power the solutions that create, use and analyze the massive amounts of data generated. The NASA FDL summer program represents an incredible opportunity to take AI applications and implement them across different challenges facing space science and exploration today.”
– Amir Khosrowshahi, vice president and chief technology officer, Artificial Intelligence Products Group, Intel

Why It’s Important: Through its work with FDL, Intel is addressing critical knowledge gaps by using AI to further space exploration and solve problems that can affect life on Earth.

New tools in artificial intelligence are already enabling a new paradigm in robotics, data acquisition and analysis, while also driving down the barriers to entry for scientific discovery. FDL’s research program participants have implemented AI to predict solar activity, map lunar poles, build 3D shape models of potentially hazardous asteroids, discover uncategorized meteor showers and determine the efficacy of asteroid mitigation strategies.

“This is an exciting time for space science. We have this wonderful new toolbox of AI technologies that allow us to not only optimize and automate, but better predict Space phenomena – and ultimately derive a better understanding,” said James Parr, FDL director.

The Challenge: Since 2017, Intel has been a key partner to FDL, contributing computing resources and AI and data science mentorship. Intel sponsored two Space Resources teams, which used the Intel® Xeon® platform for inference and training, as well as the knowledge of Intel principal engineers:

  • Space Resources Team 1:  Autonomous route planning and cooperative platforms to coordinate routes between a group of lunar rovers and a base station, allowing the rovers to autonomously cooperate in order to complete a mission.
  • Space Resources Team 2: Localization – merging orbital maps with surface perspective imagery to allow NASA engineers to locate a rover on the lunar surface using only imagery. This is necessary since there is no GPS in space. A rover using the team’s algorithm will be able to precisely locate itself by uploading a 360-degree view of its surroundings as four images.
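As a rough illustration of the matching idea, here is a minimal, hypothetical sketch that compares a descriptor computed from a rover’s surround view against precomputed descriptors for cells of an orbital map and returns the best-matching cell. Every name, shape and number is invented for illustration; this is not the team’s algorithm or data.

```python
# Illustrative only: a toy version of image-based localization.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical precomputed descriptors for a 100 x 100 grid of orbital-map cells
map_descriptors = rng.normal(size=(100, 100, 64))

def localize(rover_descriptor, map_descriptors):
    """Return the (row, col) map cell whose descriptor best matches the rover's view."""
    map_norm = map_descriptors / np.linalg.norm(map_descriptors, axis=-1, keepdims=True)
    rover_norm = rover_descriptor / np.linalg.norm(rover_descriptor)
    similarity = map_norm @ rover_norm          # cosine similarity per map cell
    return np.unravel_index(np.argmax(similarity), similarity.shape)

# Pretend the rover sits in cell (42, 7): its view descriptor is a noisy copy of that cell's
true_cell = (42, 7)
rover_descriptor = map_descriptors[true_cell] + rng.normal(scale=0.1, size=64)
print(localize(rover_descriptor, map_descriptors))  # expected: (42, 7)
```

In practice the descriptors would come from learned image features rather than random vectors, but the matching step follows the same pattern.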

More Challenges: Additional challenges presented during the event include:

  • Astrobiology Challenge 1: Understanding What is Universally Possible for Life
  • Astrobiology Challenge 2: From Biohints to Evidence of Life: Possible Metabolisms within Extraterrestrial Environmental Substrates
  • Exoplanet Challenge: Increase the Efficacy and Yield of Exoplanet Detection from TESS and Codify the Process of AI-Derived Discovery
  • Space Weather Challenge 1: Improve Ionospheric Models Using Global Navigation Satellite System Signal Data
  • Space Weather Challenge 2: Predicting Solar Spectral Irradiance from SDO/AIA Observations

What’s Next: At the conclusion of the projects, FDL will open-source research and algorithms, allowing the AI and space communities to leverage work from the eight teams in future space missions.


Intel and Philips Accelerate Deep Learning Inference on CPUs in Key Medical Imaging Uses

What’s New: Using Intel® Xeon® Scalable processors and the OpenVINO™ toolkit, Intel and Philips* tested two healthcare use cases for deep learning inference models: one on X-rays of bones for bone-age-prediction modeling, the other on CT scans of lungs for lung segmentation. In these tests, Intel and Philips achieved a speed improvement of 188 times for the bone-age-prediction model, and a 38 times speed improvement for the lung-segmentation model over the baseline measurements.

“Intel Xeon Scalable processors appear to be the right solution for this type of AI workload. Our customers can use their existing hardware to its maximum potential, while still aiming to achieve quality output resolution at exceptional speeds.”
–Vijayananda J., chief architect and fellow, Data Science and AI at Philips HealthSuite Insights

Why It’s Important: Until recently, there was one prominent hardware solution to accelerate deep learning: graphics processing units (GPUs). By design, GPUs work well with images, but they also have inherent memory constraints that data scientists have had to work around when building some models.

Central processing units (CPUs) – in this case Intel Xeon Scalable processors – don’t have those same memory constraints and can accelerate complex, hybrid workloads, including larger, memory-intensive models typically found in medical imaging. For a large subset of artificial intelligence (AI) workloads, Intel Xeon Scalable processors can better meet data scientists’ needs than GPU-based systems. As Philips found in the two recent tests, this enables the company to offer AI solutions at lower cost to its customers.

Why It Matters: AI techniques such as object detection and segmentation can help radiologists identify issues faster and more accurately, which can translate to better prioritization of cases, better outcomes for more patients and reduced costs for hospitals.

Deep learning inference applications typically process workloads in small batches or in a streaming manner, which means they do not exhibit large batch sizes. CPUs are a great fit for low batch or streaming applications. In particular, Intel Xeon Scalable processors offer an affordable, flexible platform for AI models – particularly in conjunction with tools like the OpenVINO toolkit, which can help deploy pre-trained models for efficiency, without sacrificing accuracy.
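To make the batching point concrete, here is a minimal, hypothetical sketch of how streaming (batch-size-1) throughput might be measured. The `run_inference` function is a placeholder for any framework’s single-image forward pass; it is not an OpenVINO or Philips API.

```python
import time

def run_inference(image):
    """Placeholder for a single-image forward pass on a CPU-hosted model."""
    time.sleep(0.005)  # pretend the model takes ~5 ms per image
    return "segmentation_mask"

def measure_throughput(images):
    start = time.perf_counter()
    for image in images:
        run_inference(image)          # batch size of 1, as in a streaming workload
    elapsed = time.perf_counter() - start
    return len(images) / elapsed      # images per second

print(f"{measure_throughput(range(200)):.1f} images/sec")
```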

These tests show that healthcare organizations can implement AI workloads without expensive hardware investments.

What the Results Show: The results for both use cases surpassed expectations. The bone-age-prediction model went from an initial baseline test result of 1.42 images per second to a final tested rate of 267.1 images per second after optimizations – an increase of 188 times. The lung-segmentation model far surpassed the target of 15 images per second by improving from a baseline of 1.9 images per second to 71.7 images per second after optimizations.
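As a quick sanity check, the speedup factors quoted above follow directly from the reported throughput numbers:

```python
# Arithmetic check of the reported speedups, using the figures quoted above
baseline_bone, optimized_bone = 1.42, 267.1   # images per second
baseline_lung, optimized_lung = 1.9, 71.7     # images per second

print(f"Bone-age model: {optimized_bone / baseline_bone:.0f}x")  # ~188x
print(f"Lung model: {optimized_lung / baseline_lung:.0f}x")      # ~38x
```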

What’s Next: Running healthcare deep learning workloads on CPU-based devices offers direct benefits to companies like Philips, because it allows them to offer AI-based services that don’t drive up costs for their end customers. As shown in this test, companies like Philips can offer AI algorithms for download through an online store as a way to increase revenue and differentiate themselves from growing competition.

More Context: Multiple trends are contributing to this shift:

  • As medical image resolution improves, medical image file sizes are growing – many images are 1GB or greater.
  • More healthcare organizations are using deep learning inference to more quickly and accurately review patient images.
  • Organizations are looking for ways to do this without buying expensive new infrastructure.

The Philips tests are just one example of these trends in action. Novartis* is another. And many other Intel customers – not yet publicly announced – are achieving similar results. Learn more about Intel AI technology in healthcare at “Advancing Data-Driven Healthcare Solutions.”


Intel Artificial Intelligence Helps Bring ‘The Meg’ Mega Shark to the Big Screen


What’s New: Last week, Warner Bros. Pictures* and Gravity Pictures* released “The Meg*,” a science fiction action thriller film starring a prehistoric, 75-foot-long shark known as the Megalodon. Powered by Intel artificial intelligence (AI) hardware and built by Scanline VFX* using Ziva VFX* software, the Megalodon was brought to life by VFX animators in record time and with lifelike accuracy – from the way the shark moves in the water to its muscles and skin – to deliver a jaw-dropping experience to movie audiences around the world.

“At Intel, we strive every day to make the amazing possible, and it’s exciting to see our Intel® Xeon® Scalable processors used to bring the film’s Megalodon shark to life on the big screen.”
– Julie Choi, head of AI Marketing, Intel

Why It’s Important: AI technology allows filmmakers to create incredibly detailed, lifelike graphics while saving time across creative iterations, elevating the art of movie creation and enhancing the audience experience.

Re-creating a prehistoric, 75-foot-long shark in the water for the big screen is no easy task. In addition to bringing the Megalodon to life, Scanline and Ziva needed to ensure its movements through the ocean, a constantly shifting fluid environment, were realistic. They achieved this by processing a series of physical simulations and then running the simulated shark through all of the movements and poses needed for the film’s shots.

“At Ziva, we help creators make amazing creatures with the power of Intel AI. One of the great advantages to using Intel Xeon Scalable processors is that they allow us to generate amazing training data. When you want to train a machine learning process, it needs to know how something is going to behave in order to anticipate itself, or extrapolate how it expects something to behave – in this case, the movement of the shark itself. Intel Xeon technology helped the film’s creators do that quickly and efficiently and in the most realistic way possible,” said James Jacobs, CEO, Ziva VFX.

What Powers the Technology: Intel Xeon Scalable processors power Ziva’s character-generating software and help accelerate Ziva’s physics engine – an AI algorithm that automates movement for generated creatures, including the Megalodon from “The Meg.” Additionally, Scanline used powerful Intel Xeon processors to render the shots for the film, saving them valuable time while allowing them to create more shots and options.

“To create ‘The Meg,’ we needed a massive amount of performance in our computer system,” said Stephan Trojansky, president and VFX supervisor, Scanline. “Years ago, you would have needed a huge render farm and a large crew for a very small amount of footage – today, we can use 2,500 Intel Xeon processors with almost 100,000 cores that are used to compute all of the needs of the movie. This enables fast iterations and the ability to present multiple options to the director, which is critical in making the best possible visual effects.”

Where to See It: Warner Bros. Pictures and Gravity Pictures present a di Bonaventura/Apelles Entertainment Inc.*/Maeday Productions Inc.*/Flagship Entertainment Group* production, a film by Jon Turteltaub, “The Meg.” The film was released Aug. 10 in 2D and 3D in select theatres and IMAX. It will be distributed in China by Gravity Pictures, and throughout the rest of the world by Warner Bros. Pictures, a Warner Bros. Entertainment Company.  “The Meg” has been rated PG-13.

More Context: Artificial Intelligence at Intel


Perception Matters: How Deep Learning Enables Autonomous Vehicles to Understand Their Environment

Humans are constantly taking in data from the world around them using five primary senses. You hear your phone ring, see a notification on your computer screen or touch something hot.

However, without perception, there’s no way to decipher those inputs and determine what’s relevant. That you should answer the call, know there’s an email to respond to or pull away your hand before it’s burned.

Now imagine driving on a highway, where a constant stream of information surrounds you. From lane markings and street signs to lane-splitting motorcyclists, merging trucks and traffic jams — the ability to make instant, informed decisions is not just a skill, it’s an imperative.

Just as perception enables humans to make instant associations and act on them, the ability to extract relevant knowledge from immediate surroundings is a fundamental pillar for the safe operation of an autonomous vehicle.

With the power of perception, a car can detect vehicles ahead using cameras and other sensors, identify if they become potential hazards and know to continuously track their movements. This capability extends to the 360-degree field around the vehicle, enabling the car to detect and track all moving and static objects as it travels.

Perception is the first stage in the computational pipeline for the safe functioning of a self-driving car. Once the vehicle is able to extract relevant data from the surrounding environment, it can plan the path ahead and actuate, all without human intervention.

Finding the Signal Through the Noise

Autonomous vehicle sensors generate massive amounts of data every second. From other cars, to pedestrians, to street signs, to traffic lights, every mile contains indicators for where the self-driving car should and shouldn’t go.

Identifying these indicators and determining those needed to safely move is incredibly complex, requiring a diversity of deep neural networks working in parallel. The NVIDIA DRIVE software stack — a primary component of the NVIDIA DRIVE platform — contains libraries, frameworks and source packages that allow the necessary deep neural networks to work together for comprehensive perception.

These networks include DriveNet, which detects obstacles, and OpenRoadNet, which detects drivable space. To plan a path forward, LaneNet detects lane edges and PilotNet detects drivable paths.

NVIDIA DRIVE software enables this integration by building on top of highly optimized and flexible libraries. These diverse networks run simultaneously and can overlap, providing redundancy, a key element to safety.
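As a rough illustration of this idea, the sketch below runs several small stand-in networks over the same camera frame and collects their per-task outputs. The class, layer sizes and task mapping are invented for illustration; they do not reflect the actual NVIDIA DRIVE software, DriveWorks APIs or network architectures, and the networks run sequentially here only for simplicity.

```python
import torch
import torch.nn as nn

class TinyPerceptionNet(nn.Module):
    """Placeholder network: a small conv backbone with a task-specific head."""
    def __init__(self, out_channels):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, out_channels, 1)

    def forward(self, frame):
        return self.head(self.backbone(frame))

# One stand-in network per perception task, loosely mirroring the roles described above
tasks = {
    "obstacles":      TinyPerceptionNet(out_channels=8),   # cf. DriveNet's role
    "drivable_space": TinyPerceptionNet(out_channels=1),   # cf. OpenRoadNet's role
    "lane_edges":     TinyPerceptionNet(out_channels=4),   # cf. LaneNet's role
    "drive_path":     TinyPerceptionNet(out_channels=2),   # cf. PilotNet's role
}

frame = torch.rand(1, 3, 360, 640)  # one synthetic camera frame
with torch.no_grad():
    outputs = {name: net(frame) for name, net in tasks.items()}

for name, out in outputs.items():
    print(name, tuple(out.shape))   # overlapping, per-task views of the same scene
```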

Inherently Safe

In addition to the redundancy within the perception layer, these networks back up the overall function of the vehicle, enhancing safety at every level.

For example, the car’s high-definition map can indicate a four-way intersection, and when paired with real-time sensor data, the perception layer shows the car precisely where to stop, enabling a more powerful way to pinpoint the car’s location.

Perception also contributes to the diversity of autonomous vehicle capabilities, enabling the car to see the world with the same sophistication as humans. Rather than just identify obstacles, it can discern stationary objects as well as moving ones, and determine their path.

With added software capabilities, like those offered by NVIDIA partner Perceptive Automata, the car can even predict human behavior by reading body language and other markers. This added human behavior perception capability can run simultaneously with other algorithms operating an autonomous vehicle thanks to the computing horsepower from the NVIDIA DRIVE platform.

With this combined hardware and software solution, developers are continuously adding new perception abilities to the car’s self-driving brain.


Innovating for the ‘Data-Centric’ Era

By Navin Shenoy

Today at Intel’s Data-Centric Innovation Summit, I shared our strategy for the future of data-centric computing, as well as an expansive view of Intel’s total addressable market (TAM), and new details about our product roadmap. Central to our strategy is a keen understanding of both the biggest challenges – and opportunities – our customers are facing today.

As part of my role leading Intel’s data-centric businesses, I meet with customers and partners from all over the globe. While they come from many different industries and face unique business challenges, they have one thing in common: the need to get more value out of enormous amounts of data.

I find it astounding that 90 percent of the world’s data was generated in the past two years. And analysts forecast that data will grow tenfold by 2025, reaching 163 zettabytes. But we have a long way to go in harnessing the power of this data. A safe guess is that only about 1 percent of it is utilized, processed and acted upon. Imagine what could happen if we were able to effectively leverage more of this data at scale.

The intersection of data and transportation is a perfect example of this in action. The life-saving potential of autonomous driving is profound – many lives globally could be saved as a result of fewer accidents. Achieving this, however, requires a combination of technologies working in concert – everything from computer vision and edge computing to mapping, the cloud and artificial intelligence (AI).

This, in turn, requires a significant shift in the way we as an industry view computing and data-centric technology. We need to look at data holistically, including how we move data faster, store more of it and process everything from the cloud to the edge.

Implications for Infrastructure

This end-to-end approach is core to Intel’s strategy and when we look at it through this lens – helping customers move, store and process data – the market opportunity is enormous. In fact, we’ve revised our TAM estimate for our data-centric businesses from $160 billion by 2021 to $200 billion by 2022. This is the biggest opportunity in the history of the company.

As part of my keynote today, I outlined the investments we’re making across a broad portfolio to maximize this opportunity.

Move Faster

With the explosion of data comes the need to move data faster, especially within hyperscale data centers. Connectivity and the network have become the bottlenecks to utilizing and unleashing high-performance computing more effectively. Innovations such as Intel’s silicon photonics are designed to break those boundaries using our unique ability to integrate the laser in silicon and, ultimately, deliver the lowest cost and power per bit and the highest bandwidth.

In addition, my colleague Alexis Bjorlin announced today that we are further expanding our connectivity portfolio with a new and innovative SmartNIC product line – code-named Cascade Glacier – which is based on Intel® Arria® 10 FPGAs and enables optimized performance for Intel Xeon processor-based systems. Customers are sampling it today, and Cascade Glacier will be available in the first quarter of 2019.

Store More

For many applications running in today’s data centers, it’s not just about moving data, it’s also about storing data in the most economical way. To that end, we have challenged ourselves to completely transform the memory and storage hierarchy in the data center.

We recently unveiled more details about Intel® Optane™ DC persistent memory, a completely new class of memory and storage innovation that enables a large persistent memory tier between DRAM and SSDs, while being fast and affordable. And today, we shared new performance metrics that show that Intel Optane DC persistent memory-based systems can achieve up to 8 times the performance gains for certain analytics queries over configurations that rely on DRAM only.

Customers like Google, CERN, Huawei, SAP and Tencent already see this as a game-changer. And today, we’ve started to ship the first units of Optane DC persistent memory, and I personally delivered the first unit to Bart Sano, Google’s vice president of Platforms. Broad availability is planned for 2019, with the next generation of Intel Xeon processors.

In addition, at the Flash Memory Summit, we will unveil new Intel® QLC 3D NAND-based products, and demonstrate how companies like Tencent use this to unleash the value of their data.

Process Everything

A lot has changed since we introduced the first Intel Xeon processor 20 years ago, but the appetite for computing performance is greater than ever. Since launching the Intel Xeon Scalable platform last July, we’ve seen demand continue to rise, and I’m pleased to say that we have shipped more than 2 million units in 2018’s second quarter. Even better, in the first four weeks of the third quarter, we’ve shipped another 1 million units.

Our investments in optimizing Intel Xeon processors and Intel FPGAs for artificial intelligence are also paying off. In 2017, more than $1 billion in revenue came from customers running AI on Intel Xeon processors in the data center. And we continue to improve AI training and inference performance. In total, since 2014, our performance has improved well over 200 times.

Equally exciting to me is what is to come. Today, we disclosed the next-generation roadmap for the Intel Xeon platform:

  • Cascade Lake is a future Intel Xeon Scalable processor based on 14nm technology that will introduce Intel Optane DC persistent memory and a set of new AI features called Intel DL Boost. This embedded AI accelerator will speed deep learning inference workloads, with image recognition expected to run 11 times faster than on the current-generation Intel Xeon Scalable processors at their July 2017 launch. Cascade Lake is targeted to begin shipping late this year.
  • Cooper Lake is a future Intel Xeon Scalable processor that is based on 14nm technology. Cooper Lake will introduce a new generation platform with significant performance improvements, new I/O features, new Intel® DL Boost capabilities (Bfloat16) that improve AI/deep learning training performance (a brief numeric illustration of the bfloat16 format follows this list), and additional Intel Optane DC persistent memory innovations. Cooper Lake is targeted for 2019 shipments.
  • Ice Lake is a future Intel Xeon Scalable processor based on 10nm technology that shares a common platform with Cooper Lake and is planned as a fast follow-on targeted for 2020 shipments.
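The bfloat16 format mentioned for Cooper Lake is easy to illustrate numerically: it keeps float32’s sign bit and 8-bit exponent but only 7 mantissa bits, so truncating the low 16 bits of a float32 gives a simple (round-toward-zero) emulation of that precision. The snippet below illustrates the number format only; it is not Intel DL Boost code.

```python
import numpy as np

def to_bfloat16(x):
    """Emulate bfloat16 precision by zeroing the low 16 bits of each float32 value."""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

values = np.array([3.14159265, 0.1, 65504.0, 1e-3], dtype=np.float32)
print(values)               # full float32 values
print(to_bfloat16(values))  # same dynamic range, roughly 2-3 significant decimal digits
```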

In addition to investing in the right technologies, we are also offering optimized solutions – from hardware to software – to help our customers stay ahead of their growing infrastructure demands. As an example, we introduced three new Intel Select Solutions today, focused on AI, blockchain and SAP HANA*, which aim to simplify deployment and speed time-to-value for our ecosystem partners and customers.

The Opportunity Ahead

In summary, we’ve entered a new era of data-centric computing. The proliferation of the cloud beyond hyperscale and into the network and out to the edge, the impending transition to 5G, and the growth of AI and analytics have driven a profound shift in the market, creating massive amounts of largely untapped data.

And when you add the growth in processing power, breakthroughs in connectivity, storage, memory and algorithms, we end up with a completely new way of thinking about infrastructure. I’m excited about the huge and fast data-centric opportunity ($200 billion by 2022) that we see ahead.

To help our customers move, store and process massive amounts of data, we have actionable plans to win in the highest-growth areas, and we have an unparalleled portfolio to fuel our growth – including performance-leading products and a broad ecosystem that spans the entire data-centric market.

When people ask what I love about working at Intel, the answer is simple. We are inventing – and scaling – the technologies and solutions that will usher in a new era of computing and help solve some of society’s greatest problems.

Navin Shenoy is executive vice president and general manager of the Data Center Group at Intel Corporation.


Media Alert: Data-Centric Innovation Summit – Data Center Platform and Products Fueling Intel’s Growth

Join Intel’s executive vice president and general manager of the Data Center Group, Navin Shenoy, as he presents the company’s vision for a new era of data-centric computing at Intel’s Data-Centric Innovation Summit.

Intel’s silicon portfolio, investment in software optimization and work with partners provide an opportunity to fuel new global business opportunities and societal advancements.

During the opening keynote, Shenoy will share his view of Intel’s expanded data-centric opportunity and his plans to shape and win key growth trends: artificial intelligence, the cloud and network transformation.  

When: Wednesday, August 8; livestream begins at 9:00 a.m. PDT

Where: Livestream can be accessed at Intel’s investor relations website.

Media Contact: Stephen Gabriel, Intel Global Communications, stephen.gabriel@intel.com


Recruiting 9 to 5, What AI Way to Make a Living

There’s a revolving door for many jobs in the U.S., and it’s been rotating faster lately. For AI recruiting startup Eightfold, which promises lower churn and faster hiring, that spins up opportunity.

Silicon Valley-based Eightfold, which recently raised $18 million, is offering its talent management platform as U.S. unemployment has hit its lowest since the go-go era of 2000.

Stakes are high for companies seeking the best and the brightest amid a sizzling job market. Many companies — especially in the tech sector — face high churn while much-needed positions go unfilled. Eightfold says its software can slash the time it takes to hire by 40 percent.

“The most important asset in the enterprise is the people,” said Eightfold founder and CEO Ashutosh Garg.

The tech industry veteran helped steer eight product launches at Google and co-founded BloomReach to offer AI-driven retail personalization.

Tech’s Short Job Tenures

Current recruiting and retention methods are holding back people and employers, Garg says.

The problem is acute in the U.S., as on average it takes two months to fill a position at a cost of as much as $20,000. And churn is unnecessarily high, he says.

This is an escalating workforce problem. The employee tenure on average at tech companies such as Apple, Amazon, Twitter, Microsoft and Airbnb has fallen below two years, according to Paysa, a career service. [Disclosure: NVIDIA’s average tenure of employees is more than 5 years.]

The Eightfold Talent Intelligence Platform is used by companies alongside human resources management software and talent tracking systems on areas such as recruiting, retention and diversity.

Machines ‘More Than 2x Better’

Eightfold’s deep neural network was trained on millions of public profiles and continues to ingest massive helpings of data, including public profiles and info on who is getting hired.

The startup’s ongoing training of its recurrent neural networks, which are predictive by design, enables it to continually improve its inferencing capabilities.

“We use a cluster of NVIDIA GPUs to train deep models. These servers process billions of data points to understand career trajectory of people and predict what they will do next. We have over a dozen deep models powering core components of our product,” Garg said.
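Eightfold has not published its model architecture. Purely as an illustration of the general idea (a recurrent network trained to predict the next step in a sequence of roles), here is a minimal PyTorch sketch with an invented role vocabulary; nothing in it reflects Eightfold’s actual platform, models or data.

```python
import torch
import torch.nn as nn

roles = ["intern", "engineer", "senior_engineer", "staff_engineer", "manager", "director"]
role_to_id = {r: i for i, r in enumerate(roles)}

class NextRoleModel(nn.Module):
    """Toy sequence model: embed a history of roles and predict the next one."""
    def __init__(self, vocab_size, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, role_ids):
        x, _ = self.lstm(self.embed(role_ids))
        return self.out(x[:, -1])  # logits for the role that follows the sequence

model = NextRoleModel(len(roles))  # untrained here; shown only for structure
history = torch.tensor([[role_to_id["intern"], role_to_id["engineer"],
                         role_to_id["senior_engineer"]]])
with torch.no_grad():
    probs = model(history).softmax(dim=-1)
print({r: round(float(p), 2) for r, p in zip(roles, probs[0])})
```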

The platform has processed over 20 million applications, helping companies increase response rates from candidates by eightfold (thus its name). It has helped reduce screening costs and time to hire by 90 percent.

“It’s more than two times better at matching than a human recruiter,” Garg said.

AI Matchmaker for Job Candidates

For job matching, AI is a natural fit. Job applicants and recruiters alike can upload a resume into a company’s employment listings portal powered by Eightfold’s AI. The platform shoots back in seconds the handful of jobs that are a good match for the particular candidate.

Recruiters can use the tool to search for a wider set of job candidate attributes. For example, it can be used to find star employees who move up more quickly than others in an organization. It can also find those whose GitHub follower count is higher than average, a key metric for technical recruiting.

The software can ingest data on prospects — education, career trajectory, peer rankings, etc. — to do inferencing about how good of a company fit they are for open jobs.

No, Please Don’t Go!

The software is used for retention as well. The employment tenure on average in the U.S. is 4.2 years and a mere 18 months for millennials. For retention, the software directs its inferencing capabilities at employees to determine what they might do next in their career paths.

It looks at many signals, aiming to determine questions such as: Are you likely to switch jobs? Are you an attrition risk? How engaged are you? Are peers switching roles now?

“There’s all kinds of signals that we try to connect,” Garg said. “You try to give that person another opportunity in the company to keep them from leaving.”

Diversity? Blind Screening

Eightfold offers diversity help for recruiters, too. For those seeking talent, the startup’s algorithms can help identify which applicants are a good match. Its diversity function also lets recruiters blind-screen the candidates shown, helping to remove bias.

Customers AdRoll and DigitalOcean are among those using the startup’s diversity product.

“By bringing AI into the screening, you are able to remove all these barriers — their gender, their ethnicity and so on,” Garg said. “So many of us know how valuable it is to have diversity.”


SuperVize Me: What’s the Difference Between Supervised, Unsupervised, Semi-Supervised and Reinforcement Learning?

There are a few different ways to build IKEA furniture. Each will, ideally, lead to a completed couch or chair. But depending on the details, one approach will make more sense than the others.

Got the instruction manual and all the right pieces? Just follow directions. Getting the hang of it? Toss the manual aside and go solo. But misplace the instructions, and it’s up to you to make sense of that pile of wooden dowels and planks.

It’s the same with deep learning. Based on the kind of data available and the research question at hand, a scientist will choose to train an algorithm using a specific learning model.

In a supervised learning model, the algorithm learns on a labeled dataset, providing an answer key that the algorithm can use to evaluate its accuracy on training data. An unsupervised model, in contrast, provides unlabeled data that the algorithm tries to make sense of by extracting features and patterns on its own.

Semi-supervised learning takes a middle ground. It uses a small amount of labeled data to bolster a larger set of unlabeled data. And reinforcement learning trains an algorithm with a reward system, providing feedback when an artificial intelligence agent performs the best action in a particular situation.

Let’s walk through the kinds of datasets and problems that lend themselves to each kind of learning.

What Is Supervised Learning?

If you’re learning a task under supervision, someone is present judging whether you’re getting the right answer. Similarly, in supervised learning, the supervision comes from a full set of labeled data used while training the algorithm.

Fully labeled means that each example in the training dataset is tagged with the answer the algorithm should come up with on its own. So, a labeled dataset of flower images would tell the model which photos were of roses, daisies and daffodils. When shown a new image, the model compares it to the training examples to predict the correct label.

With supervised machine learning, the algorithm learns from labeled data.

There are two main areas where supervised learning is useful: classification problems and regression problems.

Cat, koala or turtle? A classification algorithm can tell the difference.

Classification problems ask the algorithm to predict a discrete value, identifying the input data as the member of a particular class, or group. In a training dataset of animal images, that would mean each photo was pre-labeled as cat, koala or turtle. The algorithm is then evaluated by how accurately it can correctly classify new images of other koalas and turtles.
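For a concrete, minimal example of supervised classification (not from the article), the scikit-learn sketch below trains on labeled handwritten-digit images and is then scored on held-out images it has never seen:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)          # 8x8 images, each labeled 0-9
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=2000)      # learns from the labeled training set
clf.fit(X_train, y_train)

print("accuracy on new, unseen images:", clf.score(X_test, y_test))
```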

On the other hand, regression problems look at continuous data. One use case, linear regression, should sound familiar from algebra class: given a particular x value, what’s the expected value of the y variable?

A more realistic machine learning example is one involving lots of variables, like an algorithm that predicts the price of an apartment in San Francisco based on square footage, location and proximity to public transport.
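A toy regression sketch in the same spirit might look like the following; the data is synthetic, standing in for real listings.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
sqft = rng.uniform(400, 2000, size=200)
dist_to_transit = rng.uniform(0.1, 5.0, size=200)              # miles, hypothetical
price = 900 * sqft - 40_000 * dist_to_transit + rng.normal(0, 50_000, size=200)

X = np.column_stack([sqft, dist_to_transit])
model = LinearRegression().fit(X, price)

# Predict a continuous value (price) for a 1,000 sq ft apartment 0.5 miles from transit
print(model.predict([[1000, 0.5]]))
```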

Supervised learning is, thus, best suited to problems where there is a set of available reference points or a ground truth with which to train the algorithm. But those aren’t always available.

What Is Unsupervised Learning?

Clean, perfectly labeled datasets aren’t easy to come by. And sometimes, researchers are asking the algorithm questions they don’t know the answer to. That’s where unsupervised learning comes in.

In unsupervised learning, a deep learning model is handed a dataset without explicit instructions on what to do with it. The training dataset is a collection of examples without a specific desired outcome or correct answer. The neural network then attempts to automatically find structure in the data by extracting useful features and identifying patterns on its own.

Unsupervised learning models automatically extract features and find patterns in the data.

Depending on the problem at hand, the unsupervised learning model can organize the data in different ways.

  • Clustering: Without being an expert ornithologist, it’s possible to look at a collection of bird photos and separate them roughly by species, relying on cues like feather color, size or beak shape. That’s how the most common application for unsupervised learning, clustering, works: the deep learning model looks for training data that are similar to each other and groups them together. (A minimal clustering sketch follows this list.)
  • Anomaly detection: Banks detect fraudulent transactions by looking for unusual patterns in customers’ purchasing behavior. For instance, if the same credit card is used in California and Denmark within the same day, that’s cause for suspicion. Similarly, unsupervised learning can be used to flag outliers in a dataset.
  • Association: Fill an online shopping cart with diapers, applesauce and sippy cups and the site just may recommend that you add a bib and a baby monitor to your order. This is an example of association, where certain features of a data sample correlate with other features. By looking at a couple key attributes of a data point, an unsupervised learning model can predict the other attributes with which they’re commonly associated.
  • Autoencoders: Autoencoders take input data, compress it into a code, then try to recreate the input data from that summarized code. It’s like starting with Moby Dick, creating a SparkNotes version and then trying to rewrite the original story using only SparkNotes for reference. While a neat deep learning trick, a simple autoencoder has fewer real-world uses on its own. But add a layer of complexity and the possibilities multiply: by using both noisy and clean versions of an image during training, autoencoders can remove noise from visual data like images, video or medical scans to improve picture quality.
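Here is the minimal clustering sketch referenced above: k-means groups unlabeled points purely by feature similarity. It is an illustrative example, not from the article.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)  # unlabeled 2-D points
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

print(kmeans.labels_[:10])        # cluster assignments the model discovered on its own
print(kmeans.cluster_centers_)    # the three group centers it found
```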

Because there is no “ground truth” element to the data, it’s difficult to measure the accuracy of an algorithm trained with unsupervised learning. But there are many research areas where labeled data is elusive, or too expensive, to get. In these cases, giving the deep learning model free rein to find patterns of its own can produce high-quality results.

What Is Semi-Supervised Learning?

Think of it as a happy medium.

Semi-supervised learning is, for the most part, just what it sounds like: a training dataset with both labeled and unlabeled data. This method is particularly useful when extracting relevant features from the data is difficult, and labeling examples is a time-intensive task for experts.

Semi-supervised learning is especially useful for medical images, where a small amount of labeled data can lead to a significant improvement in accuracy.

Common situations for this kind of learning are medical images like CT scans or MRIs. A trained radiologist can go through and label a small subset of scans for tumors or diseases. It would be too time-intensive and costly to manually label all the scans — but the deep learning network can still benefit from the small proportion of labeled data and improve its accuracy compared to a fully unsupervised model.

One popular training method that starts with a fairly small set of labeled data uses generative adversarial networks, or GANs.

Imagine two deep learning networks in competition, each trying to outsmart the other. That’s a GAN. One of the networks, called the generator, tries to create new data points that mimic the training data. The other network, the discriminator, pulls in these newly generated data and evaluates whether they are part of the training data or fakes. The networks improve in a positive feedback loop — as the discriminator gets better at separating the fakes from the originals, the generator improves its ability to create convincing fakes.

This is how a GAN works: The discriminator, labeled “D,” is shown images from both the generator, “G,” and from the training dataset. The discriminator is tasked with determining which images are real, and which are fakes from the generator.
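For readers who want to see that loop in code, below is a minimal, generic GAN training sketch in PyTorch. The generator here learns to mimic a simple 1-D Gaussian rather than images, and nothing in it is tied to any particular system described in this article.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))   # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    return torch.randn(n, 1) * 1.5 + 4.0   # "training data": a Gaussian centered at 4

for step in range(2000):
    # Discriminator step: label real data 1, generated data 0
    real = real_batch()
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: try to make the discriminator call its samples real
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

samples = G(torch.randn(1000, 8))
print(samples.mean().item(), samples.std().item())  # should drift toward mean ~4, std ~1.5
```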

What Is Reinforcement Learning?

Video games are full of reinforcement cues. Complete a level and earn a badge. Defeat the bad guy in a certain number of moves and earn a bonus. Step into a trap — game over.

These cues help players learn how to improve their performance for the next game. Without this feedback, they would just take random actions around a game environment in the hopes of advancing to the next level.

Reinforcement learning operates on the same principle — and actually, video games are a common test environment for this kind of research.

In this kind of machine learning, AI agents are attempting to find the optimal way to accomplish a particular goal, or improve performance on a specific task. As the agent takes action that goes toward the goal, it receives a reward. The overall aim: predict the best next step to take to earn the biggest final reward.

To make its choices, the agent relies both on learnings from past feedback and exploration of new tactics that may present a larger payoff. This involves a long-term strategy — just as the best immediate move in a chess game may not help you win in the long run, the agent tries to maximize the cumulative reward.

It’s an iterative process: the more rounds of feedback, the better the agent’s strategy becomes. This technique is especially useful for training robots, which make a series of decisions in tasks like steering an autonomous vehicle or managing inventory in a warehouse.
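A tiny tabular Q-learning example makes that reward-driven loop concrete. The environment below, a six-cell track with a goal at one end, is invented purely for illustration.

```python
import random

random.seed(0)
n_states, actions = 6, [-1, +1]            # the agent can move left or right
Q = [[0.0, 0.0] for _ in range(n_states)]  # Q[state][action_index]: expected future reward
alpha, gamma, epsilon = 0.1, 0.9, 0.2      # learning rate, discount, exploration rate

for episode in range(500):
    state = 0
    while state != n_states - 1:           # each episode ends when the goal is reached
        # Explore occasionally; otherwise act greedily on current estimates
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = Q[state].index(max(Q[state]))
        next_state = min(max(state + actions[a], 0), n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else -0.01   # goal reward vs. step cost
        # Q-learning update: nudge the estimate toward reward + discounted future value
        Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
        state = next_state

print([round(max(q), 2) for q in Q[:-1]])                     # value estimates rise toward the goal
print([["left", "right"][q.index(max(q))] for q in Q[:-1]])   # learned policy: all "right"
```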

Just like students in a school, every algorithm learns differently. But with the diversity of approaches available, it’s only a matter of picking the best way to help your neural network learn the ropes.


NVIDIA and NetApp Team to Help Businesses Accelerate AI

For all the focus these days on AI, it’s largely just the world’s largest hyperscalers that have the chops to roll out predictable, scalable deep learning across their organizations.

Their vast budgets and in-house expertise have been required to design systems with the right balance of compute, storage and networking to deliver powerful AI services across a broad base of users.

NetApp ONTAP AI, powered by NVIDIA DGX and NetApp all-flash storage, is a blueprint for enterprises wanting to do the same. It helps organizations both large and small transform deep learning ambitions into reality, offering an easy-to-deploy, modular approach for implementing — and scaling — deep learning across their infrastructures. Deployment times shrink from months to days.

We’ve worked with NetApp to distill hard-won design insights and best practices into a replicable formula for rolling out an optimal architecture for AI and deep learning. It’s a formula that eliminates the guesswork of designing infrastructure, providing an optimal configuration of GPU computing, storage and networking.

ONTAP AI is backed by a growing roster of trusted NVIDIA and NetApp partners that can help a business get its deep learning infrastructure up and running quickly and cost effectively. And these partners have the AI expertise and enterprise-grade support needed to keep it humming.

This support can extend into a simplified, day-to-day operational experience that will help ensure the ongoing productivity of an enterprise’s deep learning efforts.

For businesses looking to accelerate and simplify their journey into the AI revolution, ONTAP AI is a great way to get there.

Learn more at https://www.netapp.com/us/products/ontap-ai.aspx.
