OmniSci (formerly MapD) Charts $55M in Funding for GPU-Powered Analytics

OmniSci, a data-visualization startup that’s just changed its name from MapD, has a chart of its own: hockey stick growth.

The pioneer in GPU-driven analytics, which delivers its popular data visualizations in the blink of an eye, on Wednesday landed $55 million in Series C funding from investors, including NVIDIA. It was the fourth time NVIDIA has participated in one of its fund-raising rounds.

OmniSci CEO Todd Mostak, who originally built the technology as a researcher at Harvard and MIT, realized early the speed advantages of GPUs over CPUs to query and visualize massive datasets.

OmniSci’s SQL database engine fully exploits GPUs, offering in-memory access to big data. Its software allows customers to slice and dice data and serve up graphics and visualizations from billions of data points on the fly. It quickly made a splash for its ability to power real-time visual data analytics over more than a billion tweets.
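
A quick sketch of what querying OmniSci from Python might look like, assuming the open source pymapd client and a hypothetical “tweets” table; the connection parameters, table and column names are illustrative stand-ins, not taken from OmniSci’s documentation.

# Illustrative sketch: querying an OmniSci (formerly MapD) server from Python.
# Assumes the open source pymapd client; the credentials and the "tweets"
# table with its "lang" and "tweet_time" columns are hypothetical.
from pymapd import connect

con = connect(user="admin", password="HyperInteractive",
              host="localhost", dbname="omnisci")
cursor = con.cursor()

# The aggregation over billions of rows runs server-side on the GPUs;
# only the small result set comes back to the client.
cursor.execute("""
    SELECT lang, COUNT(*) AS n
    FROM tweets
    WHERE tweet_time >= '2018-01-01'
    GROUP BY lang
    ORDER BY n DESC
    LIMIT 10
""")
for lang, n in cursor.fetchall():
    print(lang, n)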

“We cache as much as possible on the memory of these GPUs,” said Mostak, the company’s CEO. “We have taken the fastest hardware out there to optimize our software. You have all this legacy software out there that relies on CPUs, and it simply cannot provide interactive and real-time analytics at the scales data-driven organizations are grappling with.”

The startup’s software is aimed at data-intensive sectors, including automotive, telecommunications, financial, entertainment, defense and intelligence. The company recently rebranded to OmniSci, inspired by the idea of the endless pursuit of knowledge and insight for everyone.

OmniSci unleashes the massively parallel processing capabilities of GPUs to instantly query multibillion-row datasets. Customers use OmniSci to answer queries in milliseconds that, in some cases, used to take nearly a day. And OmniSci can exploit the visual computing benefits of GPUs to transform massive datasets into interactive visualizations.

A screenshot of an OmniSci public demo, showcasing interactive cross-filtering and drill-down on nearly 12B rows of telematics data from U.S. ship transponders.

Lightning-fast analytics from OmniSci are powering faster, financially impactful business decisions. Verizon, for example, uses OmniSci to process billions of rows of communications data in real time, monitoring network performance to improve customer service and optimize field service decisions.

“Analytics and data science professionals are realizing there’s this new architecture emerging into the mainstream,” said Mostak.

OmniSci can be installed on-premises, and it runs on AWS and Google Cloud, harnessing NVIDIA GPUs. In March, the startup launched its own GPU-accelerated analytics-as-a-service cloud at our GPU Technology Conference. OmniSci Cloud makes the world’s fastest open source SQL engine and visual analytics software available in under 60 seconds, from a web browser.

OmniSci plans to use the funding to accelerate research and development as well as to support the open source community and expand to meet “rapidly growing enterprise demand,” particularly in the U.S. federal market and Europe, Mostak said.

The startup has experienced explosive growth, significantly expanding its employee base and more than tripling its customer count in the past year alone, according to the company.

Lead investor Tiger Global Management cites OmniSci’s potential for major impact on the analytics market, with partner Lee Fixel noting that a “growing ecosystem of software purpose-built to run on GPUs can have a transformative impact to a number of software categories.”

OmniSci, founded in 2013, has now raised $92 million in total funding. See its technology in action at GTC Europe, which runs Oct. 9 to 11.


Robot Tamer Madeline Gannon: New Platform Will Bring Machines to Heel at Scale

Training, testing and coding robots is a grueling process. Our recently launched Isaac platform promises to change all that.

Few know that better than roboticist Madeline Gannon. For the past month, she’s been hard at work developing a robotic art installation in her research studio in the Polish Hill neighborhood of Pittsburgh.

Her development sprint is focused on the debut of Manus, which connects 10 industrial arms with a single robot brain to illustrate new frontiers in human-robot interaction.

She’s been racing the clock to develop the software and interaction design to bring these robotic arms to life in time for today’s World Economic Forum exhibit in Tianjin, China. Putting in 80-hour weeks in her warehouse, she’s faced the difficult task of taking two bots on loan from robotics company ABB and using them to simulate the interactions of all 10 robots that will be at the show.

Gannon relied heavily on her simulations to create the interactions for spectators. And she wouldn’t know for certain whether they actually worked until she got the 10 machines onsite in China up and running.

The challenge of simulating robots in operation has traditionally driven roboticists to take on custom programming — not to mention big gambles and anxiety — because, until recently, simulation software hasn’t worked reliably.

Yet it remains a key issue for the industry, as logistics operations and warehouses shift gears to embrace robots featuring increasing levels of autonomy to work alongside humans.

“As we transition from robotic automation to robotic autonomy, art installations like Manus provide people an opportunity to experience firsthand how humans and autonomous machines might coexist in the future,” she says.

To be sure, Gannon’s grueling effort in getting this demonstration off the ground underscores the industry’s nascent state of affairs for developing robotics at scale.

Robotics Help Arrives

Much of that is now changing. Earlier this year, we launched the Isaac Simulator for developing, testing and training autonomous machines in the virtual world. Last week, at GTC Japan, we announced the availability of the Jetson AGX Xavier devkit for developers to put to work on autonomous machines such as robots and drones.

Combined, this software and hardware will boost the robotics revolution by turbo-charging development cycles.

“Isaac is going to allow people to develop smart applications a heck of a lot faster,” Gannon said. “We’re really at a golden age for robotics right now.”

This isn’t Gannon’s first robotics rodeo. Last year, while a Ph.D. candidate at Carnegie Mellon University, she developed an interactive industrial robot arm that was put on display at the Design Museum in London.

That robot, Mimus, was a 2,600-pound giant designed to be curious about its surroundings. Enclosed in a viewing area, the robot used sensors embedded in the museum ceiling to see and come closer to or even follow spectators it found interesting.

Exhibiting Manus in Tianjin for the World Economic Forum marks her second, and significantly more complex, robotics installation, which required custom software to create interactions from scratch.

Bringing Manus to Life

Manus wasn’t easy to pull off. Once she arrived in China, Gannon had only 10 days with all 10 robots onsite before the opening of the interactive exhibit. Manus’s robots stand in a row atop a 9-meter base and are encased in plexiglass. Twelve depth sensors placed at the bottom of its base enable the interconnected robots to track and respond to the movements of visitors.

“There is a lot of vision processing in the project — that’s why its brain is using an NVIDIA GPU,” Gannon said.

This vision system enables Manus to move autonomously in response to the people around it: once Manus finds an interesting person, all 10 robot arms reorient as its robotic gaze follows them around.

To create the interaction design for Manus, Gannon needed to develop custom communication protocols and kinematic solvers for her robots as well as custom people-sensing, remote monitoring and human-robot interaction design software.
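
Her actual software isn’t public, but a toy sketch conveys the flavor of that interaction design: given a tracked person’s position from the depth sensors, each of the 10 arms computes the pan and tilt needed to “gaze” at that person. Everything here (the arm layout, the sensor format and the angle convention) is hypothetical.

# Hypothetical sketch of a shared "gaze" behavior: all arms orient toward one
# tracked person. Not Gannon's actual code; the geometry is illustrative.
import math

# Ten arm bases spaced 1 meter apart along a 9-meter plinth (x, y, z in meters).
ARM_BASES = [(float(x), 0.0, 1.2) for x in range(10)]

def gaze_angles(base, person):
    """Pan/tilt (radians) that points an arm at a person's 3D position."""
    dx, dy, dz = (p - b for p, b in zip(person, base))
    pan = math.atan2(dy, dx)                   # rotation about the vertical axis
    tilt = math.atan2(dz, math.hypot(dx, dy))  # elevation toward the person
    return pan, tilt

def update_arms(person_position):
    """One control tick: every arm re-aims at the same person."""
    return [gaze_angles(base, person_position) for base in ARM_BASES]

# Example: a visitor standing 2 meters in front of the middle of the installation.
for i, (pan, tilt) in enumerate(update_arms((4.5, 2.0, 1.6))):
    print(f"arm {i}: pan={math.degrees(pan):6.1f} deg, tilt={math.degrees(tilt):5.1f} deg")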

She says that up until now there haven’t been reliable technical resources for doing atypical things with intelligent robots. As a result, she’s had to reinvent the wheel each time she creates a new robotics piece.

The technical development for Manus’s software stack took about two-thirds of the project timeline, leaving only one-third of the time to devote to the heart of the project — the human-robot interaction design.

Future Robot Training

Using Jetson for vision and Isaac Sim for training robots could help developers turn those ratios around for future projects like these. And both are well-suited to developing and simulating the industrial robots that massive enterprises use in warehouse and logistics operations.

Gannon’s mastery of training robots despite such obstacles has garnered attention for her pioneering work, and she’s been called a “robot whisperer” or “robot tamer” for years.

She shrugs that off. “Now, with Isaac, I’m hopeful we won’t need robot whisperers anymore.”

Learn about availability of our Jetson AGX Xavier developer kit.


In the Eye of the Storm: The Weather Channel Forecasts Hurricane Florence With Stunning Visuals

With Hurricane Florence threatening flash floods, The Weather Channel on Thursday broadcast its first-ever live simulation to convey the storm’s severity before it hit land.

The Atlanta-based television network has adopted graphics processing more common to video game makers in its productions. The result — see video below — is a stunning, immersive mixed reality visual that accompanies meteorologists in live broadcasts.

Hurricane Florence slammed into the southeastern shore of North Carolina early Friday morning. Wind speeds of the category 1 hurricane have reached 90 miles per hour, and up to 40 inches of rain have been forecast to drench the region.

Warnings for life-threatening storm surge flooding have been in effect along the North Carolina coast.

The Weather Channel began working with this immersive mixed reality in 2016 to better display the severity of conditions through graphically intense simulations powered by high performance computing. Only recently has this type of immersive mixed reality become a broadcast news technique for conveying the severity of life-threatening weather conditions.

In June, The Weather Channel began using immersive mixed reality in live broadcasts, tapping The Future Group along with its own teams of meteorologists and designers. Their objective was to deliver new ways to convey weather severity, said Michael Potts, vice president of design at The Weather Channel.

“Our larger vision is to evolve and transform how The Weather Channel puts on its presentation, to leverage this immersive technology,” he added.

The Weather Channel takes the traditional green-screen setting — the background setup for visual effects — and places the meteorologist at its center for a live broadcast. The weather simulation displays the forecast via the green screen, which wraps around the broadcaster with real-time visuals in sync with the broadcast. “It’s a tremendous amount of real-time processing, enabled by NVIDIA GPUs,” said Potts.
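
The Future Group’s Frontier pipeline itself isn’t something we can show, but the core idea of green-screen keying, replacing the backdrop with rendered weather imagery, can be sketched in a few lines of NumPy. The threshold and the stand-in frames are placeholders, not production values.

# Simplified green-screen (chroma key) compositing sketch, not The Future
# Group's actual pipeline. Frames are H x W x 3 uint8 RGB arrays.
import numpy as np

def composite(studio_frame, rendered_weather, green_margin=40):
    """Replace green-screen pixels in the studio feed with rendered weather."""
    frame = studio_frame.astype(np.int16)
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    # A pixel counts as "screen" if green strongly dominates red and blue.
    is_screen = (g > r + green_margin) & (g > b + green_margin)
    out = studio_frame.copy()
    out[is_screen] = rendered_weather[is_screen]
    return out

# Example with random stand-in frames; a real system keys live video per frame.
studio = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
weather = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
print(composite(studio, weather).shape)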

It’s science-based: The Weather Channel feeds wind speed, direction, rainfall and countless other meteorological data points into the 3D renderings to produce accurate visualizations.

Video game-like production was made possible through The Weather Channel’s partnership with Oslo, Norway-based The Future Group, a mixed reality company with U.S. offices. The Future Group’s Frontier graphics platform, based on the Epic Games Unreal Engine 4 gaming engine, was enlisted to deliver photorealistic immersive mixed reality backdrops.

“The NVIDIA GPUs are allowing us to really push the boundaries. We’re rendering 4.7 million polygons in real time,” said Lawrence Jones, executive vice president of the Americas at The Future Group. “The pixels that are being drawn are actually changing lives.”


Kiwi Delivery Robots Tackle Student Snack Attacks

What’s not to like about food delivered by a cheery little robot in about 30 minutes?

That’s the attraction of Kiwi Bot, a robot hatched by a Colombian team now in residence at the University of California, Berkeley’s Skydeck accelerator, which funds and mentors startups.

The startup has rolled out 50 of its robots — basically, computerized beer coolers with four wheels and one cute digital face — and delivered more than 12,000 meals. They’re often seen shuttling food on Cal’s and Stanford University’s campuses.

Kiwi Bot has been something of a sidewalk sensation and won the hearts of students early on with promotions such as its free Soylent and Red Bull deliveries (check out the bot’s variety of eye expressions).

Kiwi Bot customers use the KiwiCampus app to select a restaurant and menu items. Food options range from fare at big chains such as Chipotle, McDonald’s, Subway and Jamba Juice to generous helpings of favorites from local restaurants. Kiwi texts customers on their order status and expected time of arrival. Customers receive the food with the app, and a hand gesture in front of Kiwi opens its hatch. Deliveries are available between 11 am and 8 pm.

Kiwi is partnered with restaurant food delivery startup Snackpass, delivering to its customers for the same $3.80 fee. For now, the delivery robots are only available around the UC Berkeley and Stanford campuses.

Reinventing the Food Delivery Model

Kiwi is aimed at a unique human-and-robotics delivery opportunity. In Colombia, as in other parts of the world, it’s normal to get fast and cheap deliveries by bicycle service, said Kiwi co-founder and CEO Felipe Chavez. “Here it’s by car, and the delivery fees are like $8. That gave me the curiosity to explore the unit economics of the delivery.”

Chavez and his team moved their startup — originally for food delivery by people — from Bogota to Berkeley in 2017 and applied to the Skydeck program.

The Kiwi team has ambitious plans. The company is working to develop a smooth connection between people, robots and restaurants, addressing the problem with three different bots. Its Kiwi Restaurant Bot is waist high — think R2-D2 in Star Wars — and has an opening at the top for restaurant employees to drop in orders. It then wheels out to the curb for loading.

At the sidewalk, a person unloads meals into a Kiwi Trike, an autonomous and rideable electric pedicab that stores up to four Kiwi Bots loaded with meals for delivery. The Kiwi Trike operator can then distribute the Kiwi Bots to make deliveries on sidewalks.

Packing Grub and Tech

Kiwi Bots are tech-laden little food delivery robots. They sport a friendly smile and digital eyes that can wink at people. Kiwi Bots have six Ultra HD cameras capable of 250 degrees of vision for object detection, packing NVIDIA Jetson TX2 AI processors to help interpret all the images of street and sidewalk action for navigation.

Jetson enables Kiwi to run its neural networks and work with its optical systems. “We have a neural network to make sure the robot is centered on the sidewalk and for obstacle avoidance. We can also use it for traffic lights. The GPU has allowed us to experiment,” Chavez said.

The Kiwi team used simulation to train its delivery robots. It also enabled object detection using MobileNets and autonomous driving with a DriveNet architecture. The company relies on a convolutional neural network to classify events such as street crossings, wall crashes, falls, sidewalk driving and other common situations to improve navigation.
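
As a rough illustration (and only an illustration, since Kiwi’s models aren’t public), a MobileNet backbone from torchvision can be repurposed as a lightweight event classifier of the kind a Jetson-class device could run. The five event classes come from the article; the backbone choice, layer swap and random input frame are assumptions.

# Hypothetical sketch: a MobileNetV2 backbone repurposed to classify the
# sidewalk "events" named in the article. Not Kiwi's actual network or code.
import torch
import torch.nn as nn
from torchvision import models

EVENTS = ["street_crossing", "wall_crash", "fall", "sidewalk_driving", "other"]

model = models.mobilenet_v2(pretrained=True)            # ImageNet-pretrained backbone
model.classifier[1] = nn.Linear(model.last_channel,     # swap the final layer
                                len(EVENTS))            # for our 5 event classes
model.eval()

# One 224x224 RGB frame (normally a camera image, normalized like ImageNet).
frame = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    logits = model(frame)
    event = EVENTS[int(logits.argmax(dim=1))]

print("predicted event:", event)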

The Kiwi platform is designed for humans and robots working together. It’s intended to let people service more orders, and to do so more efficiently.

“It’s humans plus robots making it better,” Chavez said. “We are going to start operating in other cities of the Bay Area next.”

Learn more about artificial intelligence for robotics using NVIDIA Jetson.


What’s the Difference Between a CNN and an RNN?

The hit 1982 TV series Knight Rider, starring David Hasselhoff and a futuristic crime-fighting Pontiac Firebird, was prophetic. The self-driving, talking car also offers a Hollywood lesson in image and language recognition.

If scripted today, Hasselhoff’s AI car, dubbed KITT, would feature deep learning from convolutional neural networks and recurrent neural networks to see, hear and talk.

That’s because CNNs are the image crunchers now used by machines — the eyes — to identify objects. And RNNs are the mathematical engines — the ears and mouth — used to parse language patterns.

Fast-forward from the ‘80s, and CNNs are today’s eyes of autonomous vehicles, oil exploration and fusion energy research. They can help spot diseases faster in medical imaging and save lives.

Today the “Hoff” — like billions of others — benefits, even if unwittingly, from CNNs when he posts photos of friends on Facebook, enjoying its auto-tagging feature for names and adding to his social lubrication.

So strip the CNN from his Firebird and it no longer has the computerized eyes to drive itself, becoming just another action prop without sizzle.

And yank the RNN from Hasselhoff’s sleek, black, autonomous Firebird sidekick, and there goes the intelligent computerized voice that wryly pokes fun at his bachelorhood. Not to mention, toss out KITT’s command of French and Spanish.

Without a doubt, RNNs are revving up a voice-based computing revolution. They are the natural language processing brains that give ears and speech to Amazon’s Alexa, Google’s Assistant and Apple’s Siri. They lend clairvoyant-like magic to Google’s autocomplete feature that fills in lines of your search queries.

Moreover, CNNs and RNNs today make such a car more than just Tinseltown fantasy. Automakers are now fast at work on the KITT-like cars of tomorrow.

Today’s autonomous cars can be put through their paces in simulation to test before even hitting the road. This allows developers to test and validate that the eyes of the vehicle can see at superhuman levels of perception.

AI-driven machines of all types are gaining eyes and ears like ours, thanks to CNNs and RNNs. Many of these AI applications are made possible by decades of advances in deep neural networks and strides in high performance computing from GPUs to process massive amounts of data.

A Brief History of CNNs

How did we get here? Long before autonomous vehicles came along, the biological connections between neurons in the human brain served as inspiration to researchers studying general artificial neural networks. Researchers of CNNs followed the same line of thinking.

A seminal moment for CNNs hit in 1998. That year Yann LeCun and co-authors Léon Bottou, Yoshua Bengio and Patrick Haffner published the influential paper Gradient-based Learning Applied to Document Recognition.

The paper describes how these learning algorithms can help classify patterns in handwritten letters with minimal preprocessing. The research demonstrated record-breaking accuracy at reading bank checks, and CNNs have since been widely deployed to process checks commercially.

It fueled a surge of hope for the promise of AI. LeCun, the paper’s lead researcher, became a professor at New York University in 2003 and later joined Facebook, where in 2018 he became the social network’s chief AI scientist.

The next breakout moment was 2012. That’s when University of Toronto researchers Alex Krizhevsky, Ilya Sutskever and Geoffrey Hinton published the groundbreaking paper ImageNet Classification with Deep Convolutional Neural Networks.

The research advanced the state of object recognition. The trio trained a deep convolutional neural network to classify the 1.2 million images from the ImageNet Large Scale Visual Recognition Challenge contest, winning with a record-breaking reduction in error rate.

This sparked today’s modern AI boom.

CNNs Explained: Dog or Pony?

Here’s an example of image recognition’s role. We humans can see a Great Dane and know it’s big but that it is still a dog. Computers just see numbers. How do they know a Great Dane isn’t a pony? Well, that numerical representation of pixels can be processed through many layers of a CNN. Many Great Dane features can be identified this way to arrive at dog for an answer.

Now, let’s peek deeper under the hood of CNNs to understand what goes on at a more technical level.

CNNs consist of an input layer (such as an image represented by numbers for its pixels), one or more hidden layers and an output layer.

These layers of math operations help computers define details of images a little bit at a time in an effort to eventually — hopefully — identify specific objects, animals or whatever the aim may be. They often miss, however, especially early on in training.

Convolutional Layer:

In mathematics, a convolution is a grouping function. In CNNs, convolution happens between two matrices (rectangular arrays of numbers arranged in columns and rows) to form a third matrix as an output.

A CNN uses these convolutions in the convolutional layers to filter input data and find information.

The convolutional layer does most of the computational heavy lifting in a CNN. It acts as the mathematical filters that help computers find edges of images, dark and light areas, colors, and other details, such as height, width and depth.

There are usually many convolutional layer filters applied to an image.
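
A small, hand-rolled example (plain NumPy, no deep learning framework) shows the core operation: sliding a filter matrix over an input matrix and producing a third matrix of responses. The tiny image and edge filter below are made up for illustration.

# Minimal 2D convolution (really cross-correlation, as deep learning frameworks
# implement it): slide a small filter over an image and record one number per
# position, producing a third matrix.
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)

# A vertical-edge filter: responds strongly where dark meets light.
edge_filter = np.array([[-1, 1],
                        [-1, 1]], dtype=float)

print(conv2d(image, edge_filter))   # peaks along the vertical edge in the image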

  • Pooling layer: Pooling layers are often sandwiched between the convolutional layers. They’re used to reduce the size of the representations created by the convolutional layers, as well as to reduce memory requirements, which allows for more convolutional layers.
  • Normalization layer: Normalization is a technique used to improve the performance and stability of neural networks. It makes the inputs of each layer more manageable by converting them to have a mean of zero and a variance of one. Think of this as regularizing the data.
  • Fully connected layers: Fully connected layers connect every neuron in one layer to all the neurons in another layer (see the sketch after this list for how these layer types fit together).
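
Here is a minimal sketch of how the layer types above typically stack, written in PyTorch; the layer widths, input size and ten output classes are arbitrary choices for illustration, not anything prescribed here.

# Minimal CNN sketch stacking the layer types described above:
# convolution -> normalization -> pooling, then a fully connected classifier.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # convolutional layer
            nn.BatchNorm2d(16),                           # normalization layer
            nn.ReLU(),
            nn.MaxPool2d(2),                              # pooling layer (halves H and W)
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # fully connected layer

    def forward(self, x):
        x = self.features(x)               # feedforward: each layer feeds the next
        return self.classifier(x.flatten(1))

# A batch of four 32x32 RGB images (CIFAR-sized input).
logits = TinyCNN()(torch.randn(4, 3, 32, 32))
print(logits.shape)   # torch.Size([4, 10])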


For a more in-depth technical explanation, check out the CNN page of our developer site.

CNNs are ideally suited for computer vision, but feeding them enough data can make them useful in videos, speech, music and text as well.

They can enlist a giant sequence of filters — or neurons — in these hidden layers that all optimize toward efficiency in identifying an image. CNNs are called “feedforward” neural networks because information is fed from one layer to the next.

RNNs, by contrast, share much of the same architecture as traditional artificial neural networks and CNNs, except that they have memory that can serve as feedback loops. Like a human brain, particularly in conversations, more weight is given to the recency of information to anticipate sentences.

This makes RNNs suited for predicting what comes next in a sequence of words. Also, RNNs can be fed sequences of data of varying length, while CNNs expect inputs of fixed size.

A Brief History of RNNs

Like the rising star of Hasselhoff, RNNs have been around since the 1980s. In 1982, John Hopfield invented the Hopfield network, an early RNN.

What are known as long short-term memory (LSTM) networks, a form of RNN, were invented by Sepp Hochreiter and Jürgen Schmidhuber in 1997. By about 2007, LSTMs made leaps in speech recognition.

In 2009, an RNN was winning pattern recognition contests for handwriting recognition. By 2014, China’s Baidu had beaten the Switchboard Hub5'00 speech recognition benchmark, a new landmark.

RNNs Explained: What’s for Lunch?

An RNN is a neural network with an active data memory, known as the LSTM, that can be applied to a sequence of data to help guess what comes next.

With RNNs, the outputs of some layers are fed back into the inputs of a previous layer, creating a feedback loop.

Here’s a classic example of a simple RNN. It’s for keeping track of which day the main dishes are served in your cafeteria, which let’s say has a rigid schedule of the same dish running on the same day each week. Let’s imagine it looks like this: burgers on Mondays, tacos on Tuesdays, pizza on Wednesdays, sushi on Thursdays and pasta on Fridays.

With an RNN, if the output “sushi” is fed back into the network to determine Friday’s dish, then the RNN will know that the next main dish in the sequence is pasta (because it has learned there is an order and Thursday’s dish just happened, so Friday’s dish comes next).
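
The same idea can be shown in a toy snippet: the previous output is fed back in as the next input. This is just the feedback loop in miniature, not a trained RNN; the menu encoding and the hand-set weight matrix are invented for illustration rather than learned from data.

# Toy feedback loop for the cafeteria example: the previous day's dish (the
# network's last output) is fed back in to predict the next one. The weight
# matrix is hand-set to encode the weekly menu, not learned.
import numpy as np

DISHES = ["burgers", "tacos", "pizza", "sushi", "pasta"]   # Mon..Fri menu

def one_hot(dish):
    v = np.zeros(len(DISHES))
    v[DISHES.index(dish)] = 1.0
    return v

# W maps "yesterday's dish" to "today's dish": a cyclic shift of the menu.
W = np.roll(np.eye(len(DISHES)), shift=1, axis=0)

def predict_next(prev_dish):
    scores = W @ one_hot(prev_dish)        # feedback: output becomes the next input
    return DISHES[int(np.argmax(scores))]

print(predict_next("sushi"))   # -> "pasta" (Thursday's dish just happened)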

Another example is the sentence: I just ran 10 miles and need a drink of ______. A human could figure how to fill in the blank based on past experience. Thanks to the memory capabilities of RNNs, it’s possible to anticipate what comes next because it may have enough trained memory of similar such sentences that end with “water” to complete the answer.

RNN applications extend beyond natural language processing and speech recognition. They’re used in language translation, stock predictions and algorithmic trading as well.

Also in use are neural Turing machines (NTMs), RNNs that have access to external memory.

Last, bidirectional RNNs take an input sequence and train two RNNs on it: one is trained on the sequence as given, the other on a reversed copy. The outputs from both RNNs are then concatenated, or combined.
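
A compact way to see that in code is PyTorch’s built-in recurrent layer with bidirectional processing turned on, which runs the forward and reversed passes and concatenates their outputs at every step; the sequence length and layer sizes here are arbitrary.

# Bidirectional RNN sketch: one pass reads the sequence left-to-right, the
# other right-to-left, and their outputs are concatenated at every step.
import torch
import torch.nn as nn

seq_len, batch, input_size, hidden_size = 7, 1, 16, 32

birnn = nn.LSTM(input_size, hidden_size, bidirectional=True)
sequence = torch.randn(seq_len, batch, input_size)   # e.g., 7 word embeddings

outputs, _ = birnn(sequence)
print(outputs.shape)   # torch.Size([7, 1, 64]) -- forward and backward halves concatenated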

All told, CNNs and RNNs have made apps, the web and the world of machines far more capable with sight and speech. Without these two AI workhorses, our machines would be boring.

Amazon’s Alexa, for one, is teaching us how to talk to our kitchen Echo “radio” devices, begging all kinds of new queries in chatting with its quirky AI.

And autonomous vehicles will soon be right around the corner, promising a starring role in our lives.

For a more technical deep dive on RNNs, check out our developers site. To learn more about deep learning, visit our NVIDIA Deep Learning Institute for the latest information on classes.


Recruiting 9 to 5, What AI Way to Make a Living

There’s a revolving door for many jobs in the U.S., and it’s been rotating faster lately. For AI recruiting startup Eightfold, which promises lower churn and faster hiring, that spins up opportunity.

Silicon Valley-based Eightfold, which recently raised $18 million, is offering its talent management platform as U.S. unemployment hits its lowest level since the go-go era of 2000.

Stakes are high for companies seeking the best and the brightest amid a sizzling job market. Many companies — especially in the tech sector — have high churn and much-needed positions sitting unfilled. Eightfold says its software can slash the time it takes to hire by 40 percent.

“The most important asset in the enterprise is the people,” said Eightfold founder and CEO Ashutosh Garg.

The tech industry veteran helped steer eight product launches at Google and co-founded BloomReach, which offers AI-driven retail personalization.

Tech’s Short Job Tenures

Current recruiting and retention methods are holding back people and employers, Garg says.

The problem is acute in the U.S., as on average it takes two months to fill a position at a cost of as much as $20,000. And churn is unnecessarily high, he says.

This is an escalating workforce problem. The employee tenure on average at tech companies such as Apple, Amazon, Twitter, Microsoft and Airbnb has fallen below two years, according to Paysa, a career service. [Disclosure: NVIDIA’s average tenure of employees is more than 5 years.]

The Eightfold Talent Intelligence Platform is used by companies alongside human resources management software and talent tracking systems in areas such as recruiting, retention and diversity.

Machines ‘More Than 2x Better’

Eightfold’s deep neural network was trained on millions of public profiles and continues to ingest massive helpings of data, including public profiles and info on who is getting hired.

The startup’s ongoing training of its recurrent neural networks, which are predictive by design, enables it to continually improve its inferencing capabilities.

“We use a cluster of NVIDIA GPUs to train deep models. These servers process billions of data points to understand career trajectory of people and predict what they will do next. We have over a dozen deep models powering core components of our product,” Garg said.

The platform has processed over 20 million applications, helping companies increase response rates from candidates by eightfold (thus its name). It has helped reduce screening costs and time to hire by 90 percent.

“It’s more than two times better at matching than a human recruiter,” Garg said.

AI Matchmaker for Job Candidates

For job matching, AI is a natural fit. Job applicants and recruiters alike can upload a resume into a company’s employment listings portal powered by Eightfold’s AI. The platform shoots back in seconds the handful of jobs that are a good match for the particular candidate.

Recruiters can use the tool to search for a wider set of job candidate attributes. For example, it can be used to find star employees who move up more quickly than others in an organization. It can also find those whose GitHub follower count is higher than average, a key metric for technical recruiting.

The software can ingest data on prospects — education, career trajectory, peer rankings, etc. — to do inferencing about how good of a company fit they are for open jobs.
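
Eightfold’s models are proprietary, but the basic shape of such matching can be sketched: represent the resume and each open job as vectors, then rank jobs by cosine similarity. The word-count “embedding,” the job listings and the resume text below are placeholders; a production system would use learned embeddings over far richer signals.

# Illustrative resume-to-job matching via cosine similarity. The "embed"
# function is a stand-in for a learned embedding model, and the job texts
# are invented for this example.
import numpy as np
from collections import Counter

def embed(text, vocab):
    counts = Counter(text.lower().split())
    return np.array([counts[w] for w in vocab], dtype=float)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

jobs = {
    "gpu-systems-engineer": "cuda c++ gpu performance linux",
    "data-scientist": "python machine learning statistics sql",
    "recruiter": "sourcing interviewing hiring outreach",
}
resume = "python sql machine learning and a bit of statistics"

vocab = sorted({w for text in [resume, *jobs.values()] for w in text.lower().split()})
ranked = sorted(jobs,
                key=lambda j: cosine(embed(resume, vocab), embed(jobs[j], vocab)),
                reverse=True)
print(ranked)   # best-matching roles first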

No, Please Don’t Go!

The software is used for retention as well. The employment tenure on average in the U.S. is 4.2 years and a mere 18 months for millennials. For retention, the software directs its inferencing capabilities at employees to determine what they might do next in their career paths.

It looks at many signals, aiming to determine questions such as: Are you likely to switch jobs? Are you an attrition risk? How engaged are you? Are peers switching roles now?

“There’s all kinds of signals that we try to connect,” Garg said. “You try to give that person another opportunity in the company to keep them from leaving.”

Diversity? Blind Screening

Eightfold offers diversity help for recruiters, too. For those seeking talent, the startup’s algorithms can help identify which applicants are a good match. Its diversity function also lets recruiters blind-screen the candidates shown, helping to remove bias.

Customers AdRoll and DigitalOcean are among those using the startup’s diversity product.

“By bringing AI into the screening, you are able to remove all these barriers — their gender, their ethnicity and so on,” Garg said. “So many of us know how valuable it is to have diversity.”


AMD Expects GPU Sales to Cryptocurrency Miners to Keep Sliding

AMD has disclosed that the cryptocurrency-induced boom in sales of its GPU cards is over, for the foreseeable future. During an earnings call, the Santa Clara, California-based chipmaker revealed that graphics card sales to cryptocurrency miners declined during the quarter that ended in June. The Graphics Processing Units that AMD and its rival Nvidia make


Gamers’ Relief: Bitcoin Bear Period is Bringing Down High-End GPU Prices

Bitcoin isn’t the only thing going down. As the cryptocurrency keeps losing its value, other areas of the industry are starting to feel the toll. As Bitcoin, and cryptocurrencies all over the world, skyrocketed in value last year, a new market was born — crypto mining. I’m sure you know at least one person who


Let’s Go Driverless: Daimler, Bosch Select NVIDIA DRIVE for Robotaxi Fleets

Press a button on your smartphone and go. Daimler, Bosch and NVIDIA have joined forces to bring fully automated and driverless vehicles to city streets, and the effects will be felt far beyond the way we drive.

While the world’s billion cars travel 10 trillion miles per year, most of the time these vehicles are sitting idle, taking up valuable real estate while parked. And when driven, they are often stuck on congested roadways. Mobility services will solve these issues plaguing urban areas, capture underutilized capacity and revolutionize the way we travel.

All over the globe we are seeing a rapid adoption of new mobility services from companies like Uber, Lyft, Didi, and Ola. But now the availability of drivers threatens to limit their continued growth.

The answer is the driverless car — a vehicle rich with sensors, powered by an extremely energy efficient supercomputer, and running AI software that acts as a virtual driver.

The collaboration of Daimler, Bosch, and NVIDIA, announced Tuesday, promises to unleash what auto industry insiders call Level 4 and Level 5 autonomy — cars that can drive themselves.

The benefits of mobility services built on autonomous vehicles are enormous. These AI-infused vehicles will improve traffic flow, enhance safety, and offer greater access to mobility. In addition, analysts predict it will cost a mere 17 cents a mile to ride in a driverless car you can summon anytime. And commuters will be able to spend their drive to work actually working, recapturing an estimated $99 billion worth of lost productivity each year.

Driving the convenience of transportation up, and costs down, is a huge opportunity. By 2030, driverless vehicles and services will be a $1 trillion industry, according to KPMG.

To reap these benefits, the great automotive brands will need to weave the latest technology into everything they do. And NVIDIA DRIVE, our AV computing platform, promises to help them stitch all the breakthroughs of our time — deep learning, sensor fusion, image recognition, cloud computing and more — into this fabric.

Our collaboration with Daimler and Bosch will unite each company’s strengths. NVIDIA brings leadership in AI and self-driving platforms. Bosch, the world’s largest tier 1 automotive supplier, brings its hardware and system expertise. Mercedes-Benz parent Daimler brings total vehicle expertise and a global brand that’s synonymous with safety and quality.

Street Smarts Needed

Together, we’re tackling an enormous challenge. Pedestrians, bicyclists, traffic lights, and other vehicles make navigating congested urban streets stressful for even the best human drivers.

Demand for computational horsepower in this chaotic, unstructured environment adds up fast. Just a single video camera generates 100 gigabytes of data per kilometer, according to Bosch.

Now imagine a fully automated vehicle or robotaxi with a suite of sensors wrapped around the car: high resolution camera, lidar, and radar that are configured to sense objects from afar, combined with diverse sensors that are specialized for seeing color, measuring distance, and detecting motion across a wide range of conditions. Together these systems provide levels of diversity to increase safety and redundancy to provide backup in case of failure. However, this vast quantity of information needs to be deciphered, processed, and put to work by multiple layers of neural networks almost instantaneously.
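
A highly simplified sketch of the redundancy idea: an object is treated as confirmed only when independent sensor modalities agree on it. Real AV stacks fuse continuous measurements probabilistically and far more carefully; the detection data below is invented purely for illustration.

# Toy "late fusion" sketch of sensor redundancy: an obstacle is confirmed only
# if at least two independent modalities report it. Purely illustrative.
from collections import defaultdict

# Hypothetical per-sensor detections: object id -> confidence.
detections = {
    "camera": {"pedestrian_1": 0.93, "cyclist_2": 0.71},
    "lidar":  {"pedestrian_1": 0.88, "parked_car_3": 0.95},
    "radar":  {"pedestrian_1": 0.80, "cyclist_2": 0.64},
}

votes = defaultdict(list)
for sensor, objs in detections.items():
    for obj, conf in objs.items():
        votes[obj].append((sensor, conf))

for obj, hits in votes.items():
    status = "CONFIRMED" if len(hits) >= 2 else "unconfirmed (single sensor)"
    print(f"{obj:14s} {status:28s} seen by {[s for s, _ in hits]}")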

NVIDIA DRIVE delivers the high performance required to simultaneously run the wide array of diverse deep neural networks needed to drive safely through urban environments.

A massive amount of computing performance is required to run the dozens of complex algorithms concurrently, executing within milliseconds so that the car can navigate safely and comfortably.

Daimler and Bosch Select DRIVE Pegasus

NVIDIA DRIVE Pegasus is the AI supercomputer designed specifically for autonomous vehicles, delivering 320 TOPS (trillions of operations per second) to handle these diverse and redundant algorithms. At just the size of a license plate, it has performance equivalent to six synchronized deskside workstations.

This is the most energy efficient supercomputer ever created — performing one trillion operations per watt. By minimizing the amount of energy consumed, we can translate that directly to increased operating range.

Pegasus is architected for safety as well as performance. This automotive-grade, functional safety production solution uses two NVIDIA Xavier SoCs and two of our next-generation GPUs designed for AI and vision processing. This co-designed hardware and software platform is built to achieve ASIL D of ISO 26262, the industry’s highest level of automotive functional safety. Even when a fault is detected, the system will still operate.

From the Car to the Cloud

NVIDIA AV solutions go beyond what can be put on wheels. NVIDIA DGX AI supercomputers for the data center are used to train the deep neural nets that enable a vehicle to deliver superhuman levels of perception. The new DGX-2, with its two petaflops of performance, enables deep learning training in a fraction of the time, space, energy, and cost of CPU servers.

Once trained on powerful GPU-based servers, the NVIDIA DRIVE Constellation AV simulator can be utilized to test and validate the complete software “stack” that will ultimately be placed inside the vehicle. This high performance software stack includes every aspect of piloting an autonomous vehicle, from object detection through deep learning and computer vision, to map localization and path planning, and it all runs on DRIVE Pegasus.

In the years to come, DRIVE Pegasus will be key to helping automakers meet a surge in demand. The mobility-as-a-service industry will purchase more than 10 million cars in 2040, up from 300,000 in 2017, market research firm IHS Markit projects.

“The partnership with Bosch and Daimler illustrates that the NVIDIA DRIVE Pegasus architecture solves the critical needs of automakers as they tackle the challenge of automated driving,” said IHS Markit Senior Research Director for Artificial Intelligence Luca De Ambroggi. “The combination of NVIDIA’s AI silicon, software, integrated platforms, and tools for simulation and validation adds value for AV development.”

A Thriving Ecosystem for Mobility-as-a-Service

The NVIDIA DRIVE ecosystem continues to expand in all areas of autonomous driving, from robotaxis to trucking to delivery vehicles, as more than 370 companies have already adopted the DRIVE platform. And now our work with Daimler and Bosch will create innovative new driverless vehicles and services that will do more than just transform our streets; they’ll transform our lives.


How Virtual Drug Discovery Tools Could Even the Playing Field Between Big Pharma and Small Biotech

While AI can lift competition and productivity, it also can act as a great leveler, putting smaller players on the same footing as goliaths.

Take pharmaceutical research, for example.

Large companies have the budget and resources to physically test millions of drug candidates, giving them an advantage over startups and researchers. But smaller labs can achieve similar results by harnessing neural networks that simulate how a potential drug molecule will bind with a target protein.

Deep learning can help smaller companies and other researchers discover promising drug treatments by improving the speed and accuracy of molecular docking, the process of computationally predicting how and how well a molecule binds with a protein.

“You don’t actually need to have the molecule in hand,” says David Koes, assistant professor at the University of Pittsburgh. “You can screen billions of compounds and they don’t actually have to exist.”

A Perfect Match

When scientists look for the perfect molecular structure for a drug treatment, they look at a few laws of attraction.

A drug molecule should have an attraction, or affinity, to the protein that researchers want it to bind to. Too little affinity, and the drug is too weak for the pair to work.

A familiar principle applies here: opposites attract. The lesson is universally learned in elementary school science classes — and from unsolicited relationship advice. Now Koes and his fellow researchers are imparting this principle to their neural network.

The match should be specific, too — if the molecule is too general, it could bind with a hundred proteins in the body instead of just one. “That’s usually a bad thing,” says Koes.

Screening these molecules virtually could speed up the years-long process of identifying a candidate good enough to bring to clinical trials.

As Koes puts it, “When you discover better drugs to begin with, you fail less later.”

This method further opens up researchers’ horizons to test molecules that don’t even exist yet. If a particular molecular structure looks promising, it can be synthesized in the lab.

Koes sees immense potential in this field. He envisions a future world where researchers could use sliders to activate molecular features like solubility, or whether a molecule can pass the blood-brain barrier.

It will take time to get there, he concedes. “It’s quite challenging because you need to make something that’s physically realistic and chemically realistic.”

Unleashing Deep Learning

The researchers’ convolutional neural network looks at the physical structure of a protein to infer what kind of drug molecule could bind as desired.

Choosing a non-parametric method, the team did not instruct the algorithm on which features of molecular structure are important for binding — like “opposites attract.” So far, the results are encouraging and show the neural network is able to infer these laws from training data.

The deep learning model, built using the cuDNN deep learning library, improved prediction accuracy to 70 percent, compared with 52 percent for previous machine learning models.
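
The team’s actual network (described in the linked paper) is more elaborate, but the general approach, voxelizing the protein-ligand complex into a 3D grid of atom-type channels and scoring it with 3D convolutions, can be sketched as follows; the grid size, channel count and layer widths are arbitrary assumptions for illustration.

# Sketch of a 3D CNN that scores a voxelized protein-ligand pose.
# The researchers' real model and training pipeline differ; shapes here are
# arbitrary: 16 atom-type channels on a 24x24x24 voxel grid.
import torch
import torch.nn as nn

class DockingScorer(nn.Module):
    def __init__(self, atom_channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(atom_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),                      # 24 -> 12
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),                      # 12 -> 6
            nn.Flatten(),
            nn.Linear(64 * 6 * 6 * 6, 1),         # single binding score / logit
        )

    def forward(self, grid):
        return self.net(grid)

# One voxelized protein-ligand complex: (batch, channels, depth, height, width).
pose = torch.randn(1, 16, 24, 24, 24)
print(DockingScorer()(pose).shape)   # torch.Size([1, 1])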

“If we can get it to the accuracy point where people are motivated to synthesize new molecules, that’s a good indicator that we’re useful,” Koes says.

Koes has been using NVIDIA GPUs for almost a decade. He says this work used an array of NVIDIA GPUs including Tesla V100, GTX 1080, Titan V, and Titan Xp GPUs.

Though the team has not yet optimized the model for inference, GPUs have been used in both the training and inference phases of their work.

Koes says the process of virtually screening a test molecule is so complex — the model must sample multiple different 3D positions to determine a molecule’s affinity — that “it’s not really usable without a GPU. It’s like a self-driving car, constantly processing.”

To learn more about Koes’ research, watch his GTC talk, Deep Learning for Molecular Docking, or read this recent paper.
