Recruiting 9 to 5, What AI Way to Make a Living

There’s a revolving door for many jobs in the U.S., and it’s been rotating faster lately. For AI recruiting startup Eightfold, which promises lower churn and faster hiring, that spins up opportunity.

Silicon Valley-based Eightfold, which recently raised $18 million, is offering its talent management platform as U.S. unemployment hits its lowest level since the go-go era of 2000.

Stakes are high for companies seeking the best and the brightest amid a sizzling job market. Many companies, especially in the tech sector, suffer high churn while much-needed positions go unfilled. Eightfold says its software can slash the time it takes to hire by 40 percent.

“The most important asset in the enterprise is the people,” said Eightfold founder and CEO Ashutosh Garg.

The tech industry veteran helped steer eight product launches at Google and co-founded BloomReach to offer AI-driven retail personalization.

Tech’s Short Job Tenures

Current recruiting and retention methods are holding back people and employers, Garg says.

The problem is acute in the U.S., where filling a position takes two months on average and costs as much as $20,000. And churn is unnecessarily high, he says.

This is an escalating workforce problem. The employee tenure on average at tech companies such as Apple, Amazon, Twitter, Microsoft and Airbnb has fallen below two years, according to Paysa, a career service. [Disclosure: NVIDIA’s average tenure of employees is more than 5 years.]

The Eightfold Talent Intelligence Platform is used by companies alongside human resources management software and applicant tracking systems in areas such as recruiting, retention and diversity.

Machines ‘More Than 2x Better’

Eightfold’s deep neural network was trained on millions of public profiles and continues to ingest massive helpings of data, including public profiles and info on who is getting hired.

The startup’s ongoing training of its recurrent neural networks, which are predictive by design, enables it to continually improve its inferencing capabilities.

“We use a cluster of NVIDIA GPUs to train deep models. These servers process billions of data points to understand career trajectory of people and predict what they will do next. We have over a dozen deep models powering core components of our product,” Garg said.

The platform has processed over 20 million applications, helping companies increase response rates from candidates by eightfold (thus its name). It has helped reduce screening costs and time to hire by 90 percent.

“It’s more than two times better at matching than a human recruiter,” Garg said.

AI Matchmaker for Job Candidates

For job matching, AI is a natural fit. Job applicants and recruiters alike can upload a resume into a company’s employment listings portal powered by Eightfold’s AI. The platform shoots back in seconds the handful of jobs that are a good match for the particular candidate.
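The post doesn't describe Eightfold's internals, but resume-to-job matching of this kind is commonly built on learned embeddings. As a purely hypothetical sketch (all names and vectors here are invented, not Eightfold's), jobs can be ranked by cosine similarity between a candidate vector and job vectors:

```python
import math

# Hypothetical embedding-based job matching: a resume and each job posting
# are mapped to vectors, and jobs are ranked by cosine similarity.
def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

candidate = [0.9, 0.1, 0.4]  # toy resume embedding

jobs = {
    "ml_engineer":   [0.8, 0.2, 0.5],
    "sales_rep":     [0.1, 0.9, 0.2],
    "data_engineer": [0.7, 0.1, 0.6],
}

# Rank jobs by similarity to the candidate, best match first.
ranked = sorted(jobs, key=lambda j: cosine(candidate, jobs[j]), reverse=True)
print(ranked[0])  # ml_engineer
```

In production systems the embeddings would come from a trained deep model rather than hand-written vectors, but the ranking step works the same way.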

Recruiters can use the tool to search for a wider set of job candidate attributes. For example, it can be used to find star employees who move up more quickly than others in an organization. It can also find those whose GitHub follower count is higher than average, a key metric for technical recruiting.

The software can ingest data on prospects — education, career trajectory, peer rankings and more — to infer how good a fit they are for open jobs.

No, Please Don’t Go!

The software is used for retention as well. Average employee tenure in the U.S. is 4.2 years, and a mere 18 months for millennials. For retention, the software directs its inferencing capabilities at employees to determine what they might do next in their career paths.

It looks at many signals, aiming to answer questions such as: Are you likely to switch jobs? Are you an attrition risk? How engaged are you? Are peers switching roles now?

“There’s all kinds of signals that we try to connect,” Garg said. “You try to give that person another opportunity in the company to keep them from leaving.”

Diversity? Blind Screening

Eightfold offers diversity help for recruiters, too. The startup’s algorithms can help identify which applicants are a good match, and its diversity function lets recruiters blind-screen candidates, helping to remove bias.

Customers AdRoll and DigitalOcean are among those using the startup’s diversity product.

“By bringing AI into the screening, you are able to remove all these barriers — their gender, their ethnicity and so on,” Garg said. “So many of us know how valuable it is to have diversity.”

The post Recruiting 9 to 5, What AI Way to Make a Living appeared first on The Official NVIDIA Blog.

AMD Expects GPU Sales to Cryptocurrency Miners to Keep Sliding

AMD has disclosed that the cryptocurrency-induced boom in sales of its GPU cards is over, for the foreseeable future. During an earnings call, the Santa Clara, California-based chipmaker revealed that graphics card sales to cryptocurrency miners declined during the quarter that ended in June. The graphics processing units that AMD and its rival Nvidia make

The post AMD Expects GPU Sales to Cryptocurrency Miners to Keep Sliding appeared first on CCN

Gamers’ Relief: Bitcoin Bear Period is Bringing Down High-End GPU Prices

Bitcoin isn’t the only thing going down. As the cryptocurrency keeps losing value, other areas of the industry are starting to feel the toll. As Bitcoin, and cryptocurrencies all over the world, skyrocketed in value last year, a new market was born — crypto mining. I’m sure you know at least one person who

The post Gamers’ Relief: Bitcoin Bear Period is Bringing Down High-End GPU Prices appeared first on CCN

Let’s Go Driverless: Daimler, Bosch Select NVIDIA DRIVE for Robotaxi Fleets

Press a button on your smartphone and go. Daimler, Bosch and NVIDIA have joined forces to bring fully automated and driverless vehicles to city streets, and the effects will be felt far beyond the way we drive.

While the world’s billion cars travel 10 trillion miles per year, most of the time these vehicles are sitting idle, taking up valuable real estate while parked. And when driven, they are often stuck on congested roadways. Mobility services will solve these issues plaguing urban areas, capture underutilized capacity and revolutionize the way we travel.

All over the globe we are seeing rapid adoption of new mobility services from companies like Uber, Lyft, Didi and Ola. But now a shortage of drivers threatens to limit their continued growth.

The answer is the driverless car — a vehicle rich with sensors, powered by an extremely energy efficient supercomputer, and running AI software that acts as a virtual driver.

The collaboration of Daimler, Bosch, and NVIDIA, announced Tuesday, promises to unleash what auto industry insiders call Level 4 and Level 5 autonomy — cars that can drive themselves.

The benefits of mobility services built on autonomous vehicles are enormous. These AI-infused vehicles will improve traffic flow, enhance safety, and offer greater access to mobility. In addition, analysts predict it will cost a mere 17 cents a mile to ride in a driverless car you can summon anytime. And commuters will be able to spend their drive to work actually working, recapturing an estimated $99 billion worth of lost productivity each year.

Driving the convenience of transportation up, and costs down, is a huge opportunity. By 2030, driverless vehicles and services will be a $1 trillion industry, according to KPMG.

To reap these benefits, the great automotive brands will need to weave the latest technology into everything they do. And NVIDIA DRIVE, our AV computing platform, promises to help them stitch all the breakthroughs of our time — deep learning, sensor fusion, image recognition, cloud computing and more — into this fabric.

Our collaboration with Daimler and Bosch will unite each company’s strengths. NVIDIA brings leadership in AI and self-driving platforms. Bosch, the world’s largest tier 1 automotive supplier, brings its hardware and system expertise. Mercedes-Benz parent Daimler brings total vehicle expertise and a global brand that’s synonymous with safety and quality.

Street Smarts Needed

Together, we’re tackling an enormous challenge. Pedestrians, bicyclists, traffic lights, and other vehicles make navigating congested urban streets stressful for even the best human drivers.

Demand for computational horsepower in this chaotic, unstructured environment adds up fast. Just a single video camera generates 100 gigabytes of data per kilometer, according to Bosch.

Now imagine a fully automated vehicle or robotaxi with a suite of sensors wrapped around the car: high resolution camera, lidar, and radar that are configured to sense objects from afar, combined with diverse sensors that are specialized for seeing color, measuring distance, and detecting motion across a wide range of conditions. Together these systems provide levels of diversity to increase safety and redundancy to provide backup in case of failure. However, this vast quantity of information needs to be deciphered, processed, and put to work by multiple layers of neural networks almost instantaneously.

NVIDIA DRIVE delivers the high performance required to simultaneously run the wide array of diverse deep neural networks needed to drive safely through urban environments.

A massive amount of computing performance is required to run the dozens of complex algorithms concurrently, executing within milliseconds so that the car can navigate safely and comfortably.

Daimler and Bosch Select DRIVE Pegasus

NVIDIA DRIVE Pegasus is the AI supercomputer designed specifically for autonomous vehicles, delivering 320 TOPS (trillions of operations per second) to handle these diverse and redundant algorithms. No bigger than a license plate, it delivers performance equivalent to six synchronized deskside workstations.

This is the most energy efficient supercomputer ever created — performing one trillion operations per watt. By minimizing the amount of energy consumed, we can translate that directly to increased operating range.
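Taking the post's two figures together — 320 TOPS of throughput and one trillion operations per watt — a rough power envelope follows. This is an illustrative back-of-envelope calculation, not a published spec:

```python
# Implied power budget from the post's figures: 320 TOPS of throughput at
# roughly 1 TOPS per watt suggests a board power on the order of 320 W.
def board_power_watts(tops_total, tops_per_watt):
    """Power (watts) needed to sustain a given TOPS throughput at a given efficiency."""
    return tops_total / tops_per_watt

print(board_power_watts(320, 1.0))  # 320.0
```

For an electric robotaxi, every watt saved by the compute platform translates directly into driving range, which is why the efficiency figure matters as much as the raw TOPS number.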

Pegasus is architected for safety as well as performance. This automotive-grade, functional safety production solution uses two NVIDIA Xavier SoCs and two of our next-generation GPUs designed for AI and vision processing. The co-designed hardware and software platform is built to achieve ASIL D under ISO 26262, the industry’s highest level of automotive functional safety. Even when a fault is detected, the system will continue to operate.

From the Car to the Cloud

NVIDIA AV solutions go beyond what can be put on wheels. NVIDIA DGX AI supercomputers for the data center are used to train the deep neural nets that enable a vehicle to deliver superhuman levels of perception. The new DGX-2, with its two petaflops of performance, enables deep learning training in a fraction of the time, space, energy, and cost of CPU servers.

Once trained on powerful GPU-based servers, the NVIDIA DRIVE Constellation AV simulator can be utilized to test and validate the complete software “stack” that will ultimately be placed inside the vehicle. This high performance software stack includes every aspect of piloting an autonomous vehicle, from object detection through deep learning and computer vision, to map localization and path planning, and it all runs on DRIVE Pegasus.

In the years to come, DRIVE Pegasus will be key to helping automakers meet a surge in demand. The mobility-as-a-service industry will purchase more than 10 million cars in 2040, up from 300,000 in 2017, market research firm IHS Markit projects.

“The partnership with Bosch and Daimler illustrates that the NVIDIA DRIVE Pegasus architecture solves the critical needs of automakers as they tackle the challenge of automated driving,” said IHS Markit Senior Research Director for Artificial Intelligence Luca De Ambroggi. “The combination of NVIDIA’s AI silicon, software, integrated platforms, and tools for simulation and validation adds value for AV development.”

A Thriving Ecosystem for Mobility-as-a-Service

The NVIDIA DRIVE ecosystem continues to expand in all areas of autonomous driving, from robotaxis to trucking to delivery vehicles, as more than 370 companies have already adopted the DRIVE platform. And now our work with Daimler and Bosch will create innovative new driverless vehicles and services that will do more than just transform our streets, they’ll transform our lives.

The post Let’s Go Driverless: Daimler, Bosch Select NVIDIA DRIVE for Robotaxi Fleets appeared first on The Official NVIDIA Blog.

How Virtual Drug Discovery Tools Could Even the Playing Field Between Big Pharma and Small Biotech

While AI can lift competition and productivity, it also can act as a great leveler, putting smaller players on the same footing as goliaths.

Take pharmaceutical research, for example.

Large companies have the budget and resources to physically test millions of drug candidates, giving them an advantage over startups and researchers. But smaller labs can achieve similar results by harnessing neural networks that simulate how a potential drug molecule will bind with a target protein.

Deep learning can help smaller companies and other researchers discover promising drug treatments by improving the speed and accuracy of molecular docking, the process of computationally predicting how and how well a molecule binds with a protein.

“You don’t actually need to have the molecule in hand,” says David Koes, assistant professor at University of Pittsburgh. “You can screen billions of compounds and they don’t actually have to exist.”

A Perfect Match

When scientists look for the perfect molecular structure for a drug treatment, they look at a few laws of attraction.

A drug molecule should have an attraction, or affinity, to the protein that researchers want it to bind to. Too little affinity, and the drug is too weak for the pair to work.

A familiar principle applies here: opposites attract. The lesson is universally learned in elementary school science classes — and from unsolicited relationship advice. Now Koes and his fellow researchers are imparting this principle to their neural network.

The match should be specific, too — if the molecule is too general, it could bind with a hundred proteins in the body instead of just one. “That’s usually a bad thing,” Koes says.

Screening these molecules virtually could speed up the years-long process of identifying a candidate good enough to bring to clinical trials.

As Koes puts it, “When you discover better drugs to begin with, you fail less later.”

This method further opens up researchers’ horizons to test molecules that don’t even exist yet. If a particular molecular structure looks promising, it can be synthesized in the lab.

Koes sees immense potential in this field. He envisions a future world where researchers could use sliders to activate molecular features like solubility, or whether a molecule can pass the blood-brain barrier.

It will take time to get there, he concedes. “It’s quite challenging because you need to make something that’s physically realistic and chemically realistic.”

Unleashing Deep Learning

The researchers’ convolutional neural network looks at the physical structure of a protein to infer what kind of drug molecule could bind as desired.

Choosing a non-parametric method, the team did not instruct the algorithm on which features of molecular structure are important for binding — like “opposites attract.” So far, the results are encouraging and show the neural network is able to infer these laws from training data.

The deep learning model, built on the NVIDIA cuDNN library, improved prediction accuracy to 70 percent, up from the 52 percent of previous machine learning models.

“If we can get it to the accuracy point where people are motivated to synthesize new molecules, that’s a good indicator that we’re useful,” Koes says.

Koes has been using NVIDIA GPUs for almost a decade. He says this work used an array of NVIDIA GPUs, including the Tesla V100, GTX 1080, Titan V and Titan Xp.

Though the team has not yet optimized the model for inference, GPUs have been used in both the training and inference phases of their work.

Koes says the process of virtually screening a test molecule is so complex — the model must sample multiple different 3D positions to determine a molecule’s affinity — that “it’s not really usable without a GPU. It’s like a self-driving car, constantly processing.”

To learn more about Koes’ research, watch his GTC talk, Deep Learning for Molecular Docking, or read this recent paper.

The post How Virtual Drug Discovery Tools Could Even the Playing Field Between Big Pharma and Small Biotech appeared first on The Official NVIDIA Blog.

Accelerated Computing Emerges as Booster Rocket Taking Us to Age of Exascale

Another supercomputing show, another performance record, courtesy this week of Oak Ridge National Laboratory’s Summit system, the first to smash through the 100-petaflop barrier.

But behind that headline at Frankfurt’s International Supercomputing show is a broader story about how the tectonic plates of the supercomputing world are shifting.

As Moore’s Law continues to slow, accelerated computing clearly emerged at ISC as the booster rocket that will soon propel us into the age of exascale computing. Consider: Most of the new processing power on the just-released Top500 list comes from GPUs, which provide 95 percent of Summit’s flops.

And NVIDIA Volta Tensor Core GPUs are providing that power, enabling multi-precision computing that fuses the highly precise calculations needed for HPC with the highly efficient processing required for deep learning.

Indeed, the Top500 list shows that five of the world’s top seven supercomputers are now GPU powered, including the top systems in the U.S., Europe and Japan.

Summiting with Accelerated Computing

Those who track the big twice-annual supercomputing shows have seen accelerated computing on the rise over recent years. But at ISC18, it broke past the tipping point.

Summit is clearly the most potent example. Powered by 27,648 Volta Tensor Core GPUs, it was measured at 122 petaflops of double-precision performance. Its output each second is equivalent to the Earth’s entire population doing one calculation a second for an entire year.

And its AI performance is even more dazzling at 3 exaops. That’s like the entire Earth’s population doing one calculation a second for 15 years.

Mean and Lean

Multi-precision computing opens up new worlds of possibility. But that would be of limited utility if GPUs didn’t also offer extraordinary efficiency.

GPUs now power 17 of the top 20 greenest systems in the world, according to the new Green500 list. Summit isn’t only the world’s fastest system, it’s also the world’s most efficient within the newly established “Level 3” category, the most stringent of the Green500 measurement levels.

GPUs have helped improve power efficiency 50x over the past 10 years for leadership-class supercomputers at Oak Ridge National Laboratory, from the CPU-only Jaguar to the GPU-accelerated Titan and Summit.

And all this is just a start. Achieving exascale will require even more breakthroughs in power efficiency. At the average efficiency of systems on the Green500 list, an exascale system would draw over 300 megawatts of power, equivalent to the requirements of 250,000 U.S. homes. Exascale demands 10x higher efficiency to operate within 30 megawatts.
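The efficiency math can be sanity-checked directly. Assuming an average efficiency of roughly 3 gigaflops per watt (my reading of the Green500 average implied above), one exaflop draws over 300 MW, and a 10x efficiency gain brings that near the 30 MW target:

```python
# Sanity check of the exascale power argument (assumed figures: one exaflop
# = 1e18 operations/second; efficiency given in gigaflops per watt).
EXAFLOP = 1e18  # operations per second

def power_megawatts(gflops_per_watt):
    """Power draw in megawatts to sustain one exaflop at a given efficiency."""
    watts = EXAFLOP / (gflops_per_watt * 1e9)
    return watts / 1e6

print(round(power_megawatts(3.0)))   # ~333 MW at today's average efficiency
print(round(power_megawatts(33.3)))  # ~30 MW after a 10x efficiency gain
```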

GPUs have gotten Summit halfway toward this ambitious goal, and offer a clear path forward to efficient exascale by 2021.

Cutting Through the Knots

The once-unimagined processing capability of the latest top systems makes it possible for today’s generation of researchers to address some of science’s knottiest challenges.

Take, for example, genetics. GPU computing power can unlock such puzzles as the link between the billions of A, G, C and T base pairs in the human genome and devastating diseases like Parkinson’s and Alzheimer’s. Already, Summit’s making headway in combing through an individual’s genes to determine sensitivities to opioid addiction – one of the leading causes of death in the U.S.

Or take materials. Superconductive materials can be used to develop powerful scientific magnets for MRI equipment, particle accelerators, or magnetic fusion devices. Today’s versions, however, are brittle, hard to manufacture and only work at very low temperatures. Summit is helping simulate and discover new superconducting materials with metal-like properties that can operate at room temperature.

Or take cancer research. A key to combating cancer is developing tools that can automatically extract, analyze, and sort health data to reveal previously hidden relationships between such disease factors as genes, biological markers, and environment. Paired with unstructured data, like text-based reports and medical images, deep learning algorithms scaled on Summit will help provide medical researchers with a comprehensive view of the entire U.S. cancer population at a level of detail typically obtained only for clinical trial patients.

Just Getting Going

We see this as just the beginning for accelerated computing.

Every country is racing to build exascale systems. Peek at the Top500 list of 2025 and you’ll likely see over a dozen such systems, with multi-precision accelerated computing the platform of choice. By comparison, all the systems on this week’s new Top500 list added together barely achieve an exaflop of total computing. This speaks to the massive opportunity ahead.

One of the great appeals of accelerated computing is that it’s full-stack innovation — spanning the architecture, the system, the acceleration stack, developers and the semiconductor process. That’s important because, with the end of Moore’s law, there are no automatic performance gains.

At NVIDIA, we’ve been investing in accelerating the full HPC stack for more than a decade.

When we started with the first CUDA-capable GPU, it could run exactly zero applications. An entire universe of applications, algorithms, libraries, tools, compilers, operating systems and system designs needed to be redesigned for a new accelerated world. It’s easy to build a chip that stamps out math processors; making those processors usable and programmable by the world’s HPC developers takes extraordinary innovation across the entire stack.

As a result, more than 550 HPC and AI applications are GPU-accelerated, including the top 15 applications and all AI frameworks. The number of developers working on this is now close to a million, up 10X in the past five years. And with the latest HPC containers on our NGC container registry, HPC users can now simply click, download, and run the latest GPU accelerated apps on their systems or in the Tensor Core GPU powered cloud.

Looking Around the Bend

Now that we’re barreling down the accelerated computing straight-away, some of us are looking around the next bend to quantum computing, which uses quantum bits, or “qubits” instead of 1s and 0s to handle information.

These theories are deeply intriguing. At some point in the future, there may be killer apps that run on quantum computers, particularly in the area of cryptography or quantum chemistry, taking advantage of extraordinary processing power that draws exceptionally little power.

But for the foreseeable future, accelerated computing’s momentum appears unstoppable. We are committed to continuing to innovate in HPC, putting the promise of exascale — and all that it holds for science — within our grasp.

The post Accelerated Computing Emerges as Booster Rocket Taking Us to Age of Exascale appeared first on The Official NVIDIA Blog.

That Was Fast: Summit Already Speeding Research into Addiction, Superconductors

Just weeks after its debut, Summit, the world’s fastest supercomputer, is already blasting through scientific applications crucial to breakthroughs in everything from superconductors to understanding addiction.

Summit, based at the Oak Ridge National Laboratory, in Tennessee, already runs CoMet — which helps identify genetic patterns linked to diseases — 150x faster than its predecessor, Titan. It’s running another application, QMCPACK — which handles quantum Monte Carlo simulations for discovering new materials such as next-generation superconductors — 50x faster than Titan.

The ability to quickly accelerate widely used scientific applications such as these comes thanks to our more than a decade of investment across what technologists call “the stack.” That is, everything from architecture improvements in our GPU parallel processors to system design, software, algorithms and optimized applications. While innovating across the entire stack is hard, it’s also essential because, with the end of Moore’s law, there are no automatic performance gains.

Summit, powered by 27,648 NVIDIA GPUs, is the latest GPU-powered supercomputer built to accelerate scientific discovery of all kinds. Built for the U.S. Department of Energy, Summit is the world’s first supercomputer to achieve over 100 petaflops, accelerating the work of the world’s best scientists in high-energy physics, materials discovery, healthcare and more.

But Summit delivers more than just speed. Where Titan had one GPU per node, Summit has six Tensor Core GPUs per node. That gives Summit the flexibility to do traditional simulations along with the GPU-driven deep learning techniques that have upended the computing world since Titan was completed six years ago.

How Volta Stacks the Deck

With Volta, we reinvented the GPU. Its revolutionary Tensor Core architecture enables multi-precision computing. So it can crank through deep learning at 125 teraflops at FP16 precision. Or when greater range or precision are needed, such as for scientific simulations, it can compute at FP64 and FP32.

This fusing of highly efficient processing required for deep learning and highly precise calculations required for scientific simulations enables Volta to be a computational powerhouse for both AI and HPC.
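The practical difference between these precisions is easy to see even on a CPU. This small NumPy sketch (illustrative only, not Tensor Core code) accumulates the same sum in FP16 and FP64; the half-precision total stalls once its increments fall below the format's resolution:

```python
import numpy as np

# Accumulate 10,000 additions of 0.1 in FP16 vs FP64.
# FP16 carries ~11 significand bits, so once the running total reaches 256
# its spacing (0.25) exceeds the 0.1 increment and the sum stops growing.
values = np.full(10_000, 0.1)

fp16_sum = np.float16(0.0)
for v in values.astype(np.float16):
    fp16_sum = np.float16(fp16_sum + v)

fp64_sum = values.astype(np.float64).sum()

print(float(fp16_sum))  # stalls around 256, far short of 1000
print(float(fp64_sum))  # ~1000.0
```

This is why workloads that tolerate low precision, like deep learning, run at FP16 for throughput, while simulations that need wide range and accuracy accumulate at FP32 or FP64.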

Our full-stack optimization approach means researchers can put the raw speed of Volta-based systems such as Summit to work faster. That, in turn, accelerates solutions to our most important challenges. Researchers with early access to Summit have already demonstrated the benefits of this approach in genomics and materials science research that promises real-world benefits.

Solving Genetic Mysteries

Genes hold blueprints for the diseases and conditions we’re predisposed to, such as Alzheimer’s, or the chronic pain that can lead to opioid addiction. Roughly 10 percent of those prescribed opioids will become addicted.

The human genome is composed of 3 billion nucleotides, and massive computing is required to understand the combinations of genes that lead to chronic pain or opioid addiction, a space of possibilities larger than the number of atoms in the universe.

Powered by six V100 Tensor Core GPUs per node, Summit is delivering 150x higher performance than Titan and enabling the discovery of previously inaccessible genetic patterns. This will help pave the way for medical breakthroughs in conditions such as heart disease, cancer, Alzheimer’s and opioid addiction.

Unlocking Power of Superconductivity

Superconductors are among the most exciting materials yet discovered. They can transmit electricity without any loss and can store energy indefinitely. They can be used to develop powerful magnets for MRI equipment, levitating trains and magnetic fusion devices, among other uses.

A key challenge: superconductors operate only at extremely low temperatures (-243°C) that require expensive setup such as the use of liquid helium.

High-temperature superconductors (HTS) can operate at -70°C but are brittle and hard to manufacture. Quantum Monte Carlo (QMC) electronic structure calculations can help identify new HTS materials with metal-like properties.

QMCPACK is optimized for Summit’s Volta GPUs and runs 50x faster on a Summit node than on a Titan node. This is enabling researchers to greatly increase the complexity of the materials they can simulate and dramatically accelerate ORNL’s search for new, cost-effective superconductors.

AI and Simulation: A Powerful Combination

Summit is a shining example of the big shift in the current breed of supercomputers to machines that are both fast enough to accelerate scientific simulations and smart enough to gather insights from massive volumes of data.

It’s a powerful combination that promises to help this generation of scientists accomplish wonders. With Summit and systems like it, we’re doing everything we can to ensure they do just that.

The post That Was Fast: Summit Already Speeding Research into Addiction, Superconductors appeared first on The Official NVIDIA Blog.

Shock and Awe in Utah: NVIDIA CEO Springs Special Titan V GPUs on Elite AI Researchers

Some come to Utah to ascend mountain peaks, others to ski down them.

At the Computer Vision and Pattern Recognition conference in Salt Lake City Wednesday, NVIDIA’s Jensen Huang told top AI researchers he wanted to help them achieve a very different kind of peak performance — before unveiling a big surprise.

“The number of problems you guys are able to solve as a result of deep learning is truly amazing,” Huang said, addressing more than 500 guests at NVIDIA’s “best of Utah” themed CVPR bash at the Grand America Hotel. “We’ve dedicated our company to create a computing platform to advance your work. Our goal is to enable you to do amazing research.”

Then he sprang his first surprise on the crowd.

Supporting Today’s Pioneers of AI

The authors of the CVPR accepted paper “What Do Deep Networks Like to See” — submitted by Sebastian Palacio of DFKI and team — were among the twelve groups selected to receive an NVIDIA Pioneer Award.

Huang called up 12 teams of researchers and presented each with an NVIDIA Pioneer Award.

The awards went to those who’ve used NVIDIA’s AI platform to support great work featured in papers accepted by CVPR and other leading academic conferences.

Wednesday night’s award recipients were an elite group, representing some of the leading academic institutions that participate in the global NVIDIA AI Labs (NVAIL) program.

Honorees included researchers from the Chinese Academy of Sciences, DFKI, Peking University, Stanford University, Tsinghua University, the University of Toronto, the University of Tokyo and the University of Washington.

University of Toronto researchers Makarand Tapaswi and Sanja Fidler were one of 12 research groups to receive an NVIDIA Pioneer Award from NVIDIA CEO Jensen Huang at the NVIDIA CVPR party Wednesday night.

But the surprises didn’t stop there.

Titans for the Titans of AI

Twenty guests selected at random joined Huang in the hotel’s center courtyard. There he presented them with signed, limited edition NVIDIA Titan V CEO Edition GPUs, featuring our groundbreaking NVIDIA Volta Tensor Cores and loaded with 32 gigabytes of memory.

Fabio Ramos (center) of the University of Sydney was one of twenty lucky recipients of the NVIDIA Titan V CEO Edition handed out Wednesday night at NVIDIA’s CVPR party. He’s joined by his colleagues Rafael Possas and Sheila Pinto Caceres.

Among the lucky ones was AI researcher Fabio Ramos of the University of Sydney, who is doing groundbreaking work in robotics.

“My work is focused on helping robots make decisions autonomously. I hope to use this to help advance my work to help robots take care of elderly people,” he said, as other guests noshed on chicken sandwiches prepared by Food Network star Viet Pham.

While the food and drinks — which included green jello, a nod to Utah tradition — had guests buzzing, researchers were even more eager to dig into their new GPUs.

“I’m eager to start using my GPU to support my work in deep driving,” said Firas Lethaus, a senior engineer in the automated driving development division and head of the machine learning competence center at AUDI AG. “With this new tool, I’ll be able to further examine image data so that self-driving learning systems can better separate relevant from non-relevant information.”

Titans of AI: NVIDIA CEO Jensen Huang is joined by winners of the NVIDIA Titan V CEO Edition giveaway at CVPR.

The limited edition GPU is built on top of NVIDIA’s breakthrough Volta platform. It features our revolutionary Tensor Core architecture to enable multi-precision computing. It can crank through deep learning matrix operations at 125 teraflops at FP16 precision. Or it can blast through FP64 and FP32 operations when there’s a need for greater range, precision or numerical stability.
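To make the multi-precision trade-off concrete, here is a hedged illustration in plain NumPy on the CPU (not NVIDIA code, and not how Tensor Cores are programmed): running the same matrix multiply at FP16 and FP32 and comparing both against an FP64 reference shows why wider formats matter when numerical stability is the priority.

```python
import numpy as np

# Illustrative sketch only: Tensor Cores multiply FP16 matrices at very
# high throughput, while FP32/FP64 paths trade speed for range and
# precision. Here we simply compare the numerical error of each format.
rng = np.random.default_rng(0)
a = rng.standard_normal((256, 256))
b = rng.standard_normal((256, 256))

ref = a @ b  # FP64 reference result
err_fp16 = np.abs(a.astype(np.float16) @ b.astype(np.float16) - ref).max()
err_fp32 = np.abs(a.astype(np.float32) @ b.astype(np.float32) - ref).max()

# FP16 is far less precise than FP32, which is why workloads that need
# greater range or numerical stability fall back to the wider formats.
print(err_fp32 < err_fp16)  # → True
```

The same pattern — fast, narrow arithmetic where it suffices, wide arithmetic where it doesn’t — is what mixed-precision training frameworks automate on Volta-class hardware.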

With 20 researchers now equipped with some of our most powerful GPUs to accelerate their work, they won’t be the last to be so honored.

“There’s all kinds of research being done here. As someone who benefits from your work, as a person who is going to enjoy the incredible research you guys do — solving some of the world’s grand challenges — and to be able to witness artificial intelligence happen in my lifetime, I want to thank all of you guys for that,” Huang said. “You guys bring me so much joy.”

The post Shock and Awe in Utah: NVIDIA CEO Springs Special Titan V GPUs on Elite AI Researchers appeared first on The Official NVIDIA Blog.

How the PC is Raising the Game for Everyone, Everywhere

PC gaming’s latest moment is its biggest yet.

Today new gaming experiences charge across the internet and onto hundreds of millions of GeForce PCs, seemingly at the speed of light.

Witness the ‘battle royale’ craze. Each week tens of millions of PC gamers parachute into games like Fortnite and PlayerUnknown’s Battlegrounds. Even more tune in to watch. And this worldwide cultural phenomenon is creating new (and re-engaging former) PC gamers at a rate never before seen. In fact, for GeForce alone, the number of active weekly gamers has grown by 10 million in the last eight months, a massive increase.

Preferred Platform of Game Developers Everywhere

As new games spread faster, PC gamers enjoy more experiences than ever before. This is no walled garden: over 7,000 new PC games were introduced in 2017, by far the most of any gaming platform. Sixty percent of the game developers surveyed for the latest Game Developers Conference targeted the PC for their games. That’s double the number developing for any console.

Today, PC games are a $33 billion industry. Two-thirds of all Americans are now gamers, the highest percentage ever published by Nielsen. But this is a phenomenon that, like the games themselves, left borders behind long ago. There are more than 1.2 billion PC gamers worldwide, a figure that will grow to 1.4 billion by 2020.

GeForce Drives This Virtuous Cycle

Over the past 25 years, we’ve been single-mindedly focused on advancing PC gaming. And many of our innovations have been game-changers.

Years ago, when shadows in games looked fake, NVIDIA brought transform and lighting technologies to PC graphics. Now that’s the standard.

When developers wanted to push realism to new levels, NVIDIA invented the programmable graphics processing unit.

We saw an opportunity to link the GPU directly to the display for buttery smooth gameplay and created NVIDIA G-SYNC display technology. G-SYNC displays are rigorously tested in our labs and certified by NVIDIA to deliver a premium experience. There are now 217 G-SYNC displays shipping from 17 partners: 158 for laptops and 59 for desktops.

With Gaming Everywhere, Laptops That Let You Game Anywhere

The bigger and better the games, the more gamers want to take their game with them. First, we optimized our Pascal GPUs for laptop gaming systems as well as desktops.

Then, a year ago, we launched Max-Q: a design approach that brings GeForce GTX 1080 GPUs to sleek gaming laptops in dimensions never before possible. Twenty-six Max-Q models will be on shelves this fall, three times the number available this time last year. With almost 100 million consumer laptops sold each year, the growth opportunity for thin, light and powerful gaming laptops is huge. (Follow the #IWANTMAXQ hashtag on top social media networks to see the latest models.)

The World’s Fastest-Growing Global Sport

From games that have spread like wildfire, to the richest world of content, to impossibly thin, light and powerful laptops, the PC is the most dynamic and vibrant gaming platform. It’s also the home of another global cultural phenomenon: esports.

Now the fastest-growing global sport, the stats for esports are staggering. Esports audiences worldwide grew by 100 million over the past two years and are expected to reach over 600 million by 2020. More than 120 million viewers in China tuned in on a single night in May to watch a League of Legends finals match. That’s more than the population of all but a handful of countries.

All the major esports tournaments play on GeForce PCs. And 240Hz G-SYNC monitors, delivering the unmatched frame rates and tear-free gaming favored by esports players worldwide, are the official display of TI 2018 (The International) taking place in Vancouver in August.

The Platform that Keeps Getting Bigger, Faster, and Better

This is a story that goes beyond just hardware, however. Capabilities like NVIDIA Highlights — which lets gamers capture great gaming moments and share them with friends — have accelerated the growth of titles like Fortnite, for which viewership far outpaced player numbers at the outset.

And we announced the next game-changer a few months ago at GDC: NVIDIA RTX technology. Real-time ray tracing has been the dream of the graphics industry and game developers for decades, and NVIDIA RTX brings it to life. The product of 10 years of development in computer graphics algorithms and GPU architectures, it consists of a highly scalable ray-tracing technology running on NVIDIA Volta architecture GPUs to deliver amazing, lifelike, cinematic quality to future games.

Long considered the definitive solution for realistic and lifelike lighting, reflections and shadows, ray tracing offers a level of realism far beyond what is possible using traditional rendering techniques. Real-time ray tracing replaces a majority of the techniques used today in standard rendering with realistic optical calculations that replicate the way light behaves in the real world, delivering more lifelike images.
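At the heart of those optical calculations is the ray-object intersection test. The minimal sketch below (ordinary Python, a hypothetical illustration that bears no resemblance to production RTX code) shows the classic ray-sphere test — the kind of computation a real-time ray tracer must perform billions of times per second.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest intersection
    with the sphere, or None if the ray misses it."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    # Solve the quadratic a*t^2 + b*t + c = 0 derived from
    # |origin + t*direction - center| = radius.
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

# A ray shot down the z-axis toward a unit sphere centered at z = 5
# hits the near surface at distance 4:
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # → 4.0
```

Real engines layer acceleration structures, shading and denoising on top, but every lighting, reflection and shadow effect ultimately reduces to tests like this one.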

The Virtuous Cycle Accelerates

NVIDIA RTX will set the bar high for gaming technology, transforming capabilities and driving up production quality. It will raise the game for the entire industry. And PC gamers will be the biggest winners.

The post How the PC is Raising the Game for Everyone, Everywhere appeared first on The Official NVIDIA Blog.

Reaching the Summit: Accelerated Computing Powering World’s Fastest Supercomputer

Call it the most powerful scientific tool ever built. Call it a new paradigm of computing. Just don’t call it slow, because whatever number you look at, Summit — which made its debut Friday at the Oak Ridge National Laboratory — is flat-out fast.

This massive machine, powered by 27,648 of our Volta Tensor Core GPUs, can perform more than three exaops, or three billion billion calculations per second. That’s more than 100 times faster than Titan, previously the fastest U.S. supercomputer, completed just five years ago. And 95 percent of that computing power comes from GPUs.

Built for the U.S. Department of Energy, this is a machine designed to tackle the grand challenges of our time. It will accelerate the work of the world’s best scientists in high-energy physics, materials discovery, healthcare, and more, with the ability to deliver 200 petaflops of computing power for high-precision scientific simulations.

“Summit takes GPU accelerated computing to the next level, with more computing power, more memory, an enormous high-performance file system, and fast data paths to tie it all together,” said James Hack, director of ORNL’s National Center for Computational Sciences. “That means researchers will be able to explore more complex phenomena at higher levels of fidelity in less time than with previous generations of supercomputer systems.”

This is, literally, a scientific time machine.

The story behind the story: The team at Oak Ridge was the first in the nation to realize – almost a decade ago – that a new kind of computing was needed. The old paradigm of piling one transistor on top of another wouldn’t deliver the efficiency they needed.

They took a risk and built Titan, then the world’s fastest supercomputer, in 2012, with one GPU in every node. That courage paid off. Now over 550 HPC applications are accelerated with GPUs, including all 15 of the most widely used ones. Their work reshaped supercomputing.

Writing Computing’s Next Chapter

Summit is the next chapter. Not just for ORNL, but for all of computing. Our research team has been working with the DOE for more than 11 years on advanced technologies, including the Volta GPUs and NVLink high-speed interconnect technology at the very heart of Summit. Instead of one GPU per node, Summit has six Tensor Core GPUs per node, delivering 10x Titan’s simulation performance.

And just as Titan inspired the world to accelerate simulations, Summit will inspire the world’s scientists to harness AI to drive discovery hand-in-hand with simulation. The technology powering Summit is already speeding the work of scientists on everything from PCs to servers, workstations to sprawling cloud computing systems.

Fusing AI and High Performance Computing

But while Summit shares DNA with a new generation of machines built for AI, it will work at speeds like no other. Researchers will be able to use simplified calculations, known as half-precision, or FP16, to boost Summit’s performance about 15x, to exascale levels of more than a billion billion operations per second.

That’s staggering. If every computation were represented by a single grain of sand, you could fill up the Houston Astrodome with sand 350 times in a single second.

What Summit Will Do for Science

This speed will let today’s generation of scientists accomplish wonders. The Oak Ridge National Laboratory is already a playground for cutting-edge science, and its campus is a hub for scientists eager to harness its machines to do their best work.

That’s why Summit already has a full schedule, accelerating projects including:

  • Cancer Research: The DOE and National Cancer Institute are working on a program called the CANcer Distributed Learning Environment (CANDLE). Their aim is to develop tools that can automatically extract, analyze and sort existing health data to reveal previously hidden relationships between disease factors such as genes and biological markers.
  • Fusion Energy: Fusion, the energy source powering the Sun, has long been touted for its promise of clean, abundant energy. Summit will be able to model a fusion reactor and its magnetically confined plasma, hastening commercial development.
  • Disease and Addiction: Researchers will use AI to identify patterns in the function and evolution of human proteins and cellular systems. These patterns can help us better understand Alzheimer’s, heart disease, or addiction, and inform the drug discovery process.

Next Giant Leap for Mankind

Using techniques like machine learning and deep learning at a massive scale, scientists will achieve breakthroughs on Summit that will boost our economy, improve our healthcare and help deliver limitless energy. This could help save the planet, and that’s why we need faster supercomputers.

And that’s why the next great computing challenge has already been set: building the world’s first exascale accelerated supercomputer. We’re already racing to help get this done, so the scientists and researchers of the world can continue racing forward.

5 Facts About the World’s Fastest Supercomputer

  • At 200 petaflops – If everyone on Earth did 1 calculation/second, it would take 1 year to do what Summit does in 1 second.
  • At 3 exaops of AI – If everyone on Earth did 1 calculation/second, it would take 15 years to do what Summit can do in 1 second.
  • In an early test, a genomics team solved a problem in 1 hour that would take 30 years on a PC.
  • Its 5,600 sq ft of cabinet space is similar in size to two tennis courts.
  • Summit has the approximate weight of a commercial jet.
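The first two comparisons above are easy to sanity-check with back-of-envelope arithmetic. This sketch assumes a world population of roughly 7.6 billion (the 2018 estimate, which the figures above do not state):

```python
# Rough sanity check of the "everyone on Earth" comparisons.
# The population figure is an assumption (~7.6 billion in 2018).
population = 7.6e9           # people, each doing 1 calculation per second
summit_hpc = 200e15          # 200 petaflops (scientific computing)
summit_ai = 3e18             # 3 exaops (AI workloads)

seconds_per_year = 3600 * 24 * 365
years_hpc = summit_hpc / population / seconds_per_year
years_ai = summit_ai / population / seconds_per_year

print(round(years_hpc, 1))   # → 0.8, i.e. roughly one year
print(round(years_ai))       # → 13, on the order of the 15 years cited
```

The exact figures depend on the population estimate used, but both land in the right ballpark of the comparisons above.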

The post Reaching the Summit: Accelerated Computing Powering World’s Fastest Supercomputer appeared first on The Official NVIDIA Blog.