NVIDIA CEO on How Deep Learning Makes Turing’s Graphics Scream

The deep learning revolution sweeping the globe started with processors — GPUs — originally made for gaming. With our Turing architecture, deep learning is coming back to gaming, and bringing stunning performance with it.

Turing combines next-generation programmable shaders; support for real-time ray tracing — the holy grail of computer graphics; and Tensor Cores, a new kind of processing core that accelerates a wide range of deep learning tasks, NVIDIA CEO Jensen Huang told a crowd of more than 3,000 at the GPU Technology Conference in Europe this week.

This deep learning power allows Turing to leap forward in ways no other processor can, Huang explained.

“If we can create a neural network architecture and an AI that can infer and can imagine certain types of pixels, we can run that on our 114 teraflops of Tensor Cores, and as a result increase performance while generating beautiful images,” Huang said.

“Well, we’ve done so with Turing with computer graphics,” Huang added.

Deep Learning Super Sampling, or DLSS, allows Turing to generate some pixels with shaders and imagine others with AI.

“As a result, with the combination of our 114 teraflops of Tensor Core performance and 15 teraflops of programmable shader performance, we’re able to generate incredible results,” Huang said.
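
To make the idea concrete, here is a rough sketch of what a DLSS-style pipeline looks like in code: the shader pipeline renders a frame at reduced resolution, and a trained network fills in the remaining pixels at full resolution. This is an illustrative approximation in PyTorch, not NVIDIA’s implementation; the network architecture and the render_frame stand-in are invented for the example.

    # Illustrative DLSS-style pipeline (not NVIDIA's implementation):
    # rasterize fewer pixels, let a trained network infer the rest.
    import torch
    import torch.nn as nn

    class UpscalerNet(nn.Module):
        """Hypothetical super-sampling network: low-res frame in, high-res frame out."""
        def __init__(self, scale=2):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 3 * scale * scale, 3, padding=1),
                nn.PixelShuffle(scale),  # rearrange channels into a scale-times-larger image
            )

        def forward(self, low_res_frame):
            return self.body(low_res_frame)

    def render_frame(height, width):
        # Stand-in for the rasterization/shader pass; returns a low-res RGB frame.
        return torch.rand(1, 3, height, width)

    upscaler = UpscalerNet(scale=2)
    low_res = render_frame(height=540, width=960)   # shade only a quarter of the pixels...
    high_res = upscaler(low_res)                    # ...and let the network imagine the rest
    print(high_res.shape)                           # torch.Size([1, 3, 1080, 1920])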

That translates into an enormous leap in performance.

Infiltrator Benchmark @ 4K resolution with custom settings (Max settings + all GameWorks enabled). System: Core i9-7900X 3.3GHz CPU with 16GB Corsair DDR4 memory, Windows 10 (v1803) 64-bit, 416.25 NVIDIA drivers. 

“In each and every series, the Turing GPU is twice the performance,” Huang said. “This is a brand new way of doing computer graphics — it merges together traditional computer graphics and deep learning into a cohesive pipeline.”

With a stunning demo, Huang showcased how our latest NVIDIA RTX GPUs — which enable real-time ray-tracing for the first time — allowed our team to digitally rebuild the scene around one of the moon landing’s iconic photographs, that of astronaut Buzz Aldrin clambering down the lunar module’s ladder.

The demonstration puts to rest the assertion that the photo can’t be real because Aldrin is lit too well as he climbs down to the surface of the moon while in the shadow of the lunar lander. Instead, the simulation shows how the reflectivity of the surface of the moon accounts for exactly what’s seen in the photo.

“This is the benefit of NVIDIA RTX. Using this type of rendering technology, we can simulate light physics and things are going to look the way things should look,” Huang said.

To see the benefits for yourself, grab a GeForce RTX 2080 Ti or 2080 now, or a GeForce RTX 2070 on October 17.


Joint Venture with Intel-Funded Startup Eko Makes Walmart a Major Player in Interactive Media

A screen capture from the Eko show, “That Moment When.”

Walmart and Amazon are locked in a battle for retail dominance. Now the Arkansas-based giant has opened a second front, which analysts say could counter Amazon’s robust streaming entertainment business.

Walmart on Thursday announced a joint venture with Eko, a startup funded by Intel Capital and other blue-chip names. Eko will use its interactive video technology and ties to Hollywood studios to create cooking shows, toy catalogs and other content for the world’s largest retailer.

Eko founder and CEO Yoni Bloch called the deal with Walmart “the largest investment made to date in interactive TV,” a sentiment echoed in coverage by The New York Times and others.

Eko is known for original programming that lets viewers decide how the actors onscreen should behave and react.

Scott McCall, Walmart’s senior vice president for entertainment, said new content from Eko will “deepen relationships with customers.” Each choice made by viewers could provide insight into their preferences, enabling targeted advertising and interactive shopping.

McCall said Eko’s programs will be available on Walmart properties including streaming service Vudu, as well as on social media and other channels.

Bloch, a well-known rock musician in his native Israel, founded Eko in 2010 as Interlude. It’s headquartered in New York and Tel Aviv. In addition to Intel Capital, investors include Sequoia Capital, Warner Music and Sony.

With Intel predicting 5G networks will drive more than $1.3 trillion in new revenue for media companies in the coming decade, Eko’s leaders see opportunity ahead.

“5G makes everything work better, particularly as more and more media is going through wireless,” said Jim Spare, Eko president and COO. “It just accelerates and expands the ways in which consumers can enjoy interactive experiences.”


By the Light of the Moon: Turing Recreates Scene of Iconic Lunar Landing

If you’re going to fake a moon landing, you’re going to need the world’s most advanced GPUs.

Four years ago, our demo team used GPUs to debunk the myth that the Apollo 11 moon landing was a hoax. So thoroughly, in fact, that it’s become a bit of a joke that the best way to have actually faked the moon landing would’ve been to use technology that didn’t exist at the time.

Now, nearly a half century after Neil Armstrong first set foot on the moon, we’ve refreshed our earlier demo with the real-time ray-tracing technology we built into our latest GPU architecture, Turing.

Speaking before an audience of thousands of entrepreneurs, researchers, technologists and media at GTC Europe in Munich Wednesday, NVIDIA CEO Jensen Huang demonstrated how our latest NVIDIA RTX GPUs — with their real-time ray-tracing capabilities — allowed our demo team to digitally rebuild one of the landing’s iconic photographs, of astronaut Buzz Aldrin clambering down the lunar module’s ladder.

Recreating the great event with a stunning level of realism, we re-confirmed our earlier conclusion: the photo looks the way it would if it were taken on the moon.

“This is the benefit of NVIDIA RTX,” Huang said. “Using this type of rendering technology, we can simulate light physics and things are going to look the way things should look.”

Our Turing architecture allowed our demo team to do this because it’s able to trace the path of light backward from the camera through each pixel of the view frustum — the pyramid of space the camera sees — and follow it as it bounces around a scene to render reflections, shadows, ambient occlusion, global illumination and other visual phenomena in an instant. Prior to RTX technology, only special effects rendering farms working for weeks or months on a single scene could manage this.
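
To give a feel for what tracing light backward and bouncing it around a scene means in practice, here is a toy sketch in Python: one ray per pixel is shot from the camera into a scene containing a single sphere, given one reflective bounce, and shaded into ASCII. It illustrates only the textbook idea; nothing here is RTX code.

    # Toy backward ray tracer: one sphere, one directional light, one bounce.
    # Purely illustrative; not NVIDIA RTX code.
    import numpy as np

    def normalize(v):
        return v / np.linalg.norm(v)

    def hit_sphere(origin, direction, center, radius):
        """Distance along the ray to the sphere, or None on a miss."""
        oc = origin - center
        b = np.dot(oc, direction)
        disc = b * b - (np.dot(oc, oc) - radius * radius)
        if disc < 0:
            return None
        t = -b - np.sqrt(disc)
        return t if t > 1e-4 else None

    def trace(origin, direction, light_dir, depth=0):
        """Follow a ray through the scene, adding one reflected bounce."""
        center, radius = np.array([0.0, 0.0, -3.0]), 1.0
        t = hit_sphere(origin, direction, center, radius)
        if t is None:
            return 0.1  # background brightness
        hit_point = origin + t * direction
        normal = normalize(hit_point - center)
        brightness = max(np.dot(normal, -light_dir), 0.0)  # direct lighting
        if depth < 1:  # follow one reflected ray for a second bounce
            reflected = direction - 2 * np.dot(direction, normal) * normal
            brightness += 0.5 * trace(hit_point, normalize(reflected), light_dir, depth + 1)
        return brightness

    # Shoot one ray per "pixel" from the camera through the image plane.
    light_dir = normalize(np.array([1.0, -1.0, -1.0]))
    for y in np.linspace(1, -1, 12):
        row = ""
        for x in np.linspace(-1.5, 1.5, 36):
            ray = normalize(np.array([x, y, -1.0]))
            shade = trace(np.zeros(3), ray, light_dir)
            row += " .:-=+*#@"[min(int(shade * 8), 8)]
        print(row)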

The demo team built on work they did four years ago, when they collected every detail they could to understand the iconic image. They researched the rivets on the lunar lander, identified the properties of the dust coating the moon’s surface, and measured the reflectivity of the material used in the astronauts’ space suits.

Serious Moonlight

To update our original demo, NVIDIA engineers rebuilt the scene of the moon landing in Unreal Engine 4, a game engine developed by Epic Games. They simulated how the sun’s rays, coming from behind the lander, bounced off the moon’s surface and Armstrong’s suit to cast light on Aldrin as he stepped off the lander.

All of this only heightened the fidelity of our latest demo — and re-confirmed what we’d discovered four years ago: the illumination of the astronaut in the photo wasn’t caused by something other than the sun — such as studio lights — but by light doing what light does.

Which proves one of two things. Either the Apollo 11 landing is real. Or NASA figured going to the moon was too hard, built a time machine instead, and sent someone 50 years into the future to grab an NVIDIA RTX GPU.


NVIDIA Launches GPU-Acceleration Platform for Data Science, Volvo Selects NVIDIA DRIVE

Big data is bigger than ever. Now, thanks to GPUs, it will be faster than ever, too.

NVIDIA founder and CEO Jensen Huang took the stage Wednesday in Munich to introduce RAPIDS, accelerating “big data, for big industry, for big companies, for deep learning,” Huang told a packed house of more than 3,000 developers and executives gathered for the three-day GPU Technology Conference in Europe.

Already backed by Walmart, IBM, Oracle, Hewlett Packard Enterprise and some two dozen other partners, the open-source GPU-acceleration platform promises 50x speedups on the NVIDIA DGX-2 AI supercomputer compared with CPU-only systems, Huang said.

The result is an invaluable tool as companies in every industry look to harness big data for a competitive edge, Huang explained as he detailed how RAPIDS will turbo-charge the work of the world’s data scientists.

“We’re accelerating things by 1000x in the domains we focus on,” Huang said. “When we accelerate something 1000x in ten years, if your demand goes up 100 times your cost goes down by 10 times.”

Over the course of a keynote packed with news and demos, Huang detailed how NVIDIA is bringing that 1000x acceleration to bear on challenges ranging from autonomous vehicles to robotics to medicine.

Among the highlights: Volvo Cars has selected the NVIDIA DRIVE AGX Xavier computer for its next generation of vehicles; King’s College London is adopting NVIDIA’s Clara medical platform; and startup Oxford Nanopore will use Xavier to build the world’s first handheld, low-cost, real-time DNA sequencer.

Big Gains for GPU Computing

Huang opened his talk by detailing the eye-popping numbers driving the adoption of accelerated computing — gains in computing power of 1,000x over the past 10 years.

“In ten years’ time, while Moore’s law has ended, our computing approach has resulted in a 1000x increase in computing performance,” Huang said. “It’s now recognized as the path forward.”

Huang also spoke about how NVIDIA’s new Turing architecture — launched in August — brings AI and computer graphics together.

Turing combines support for next-generation rasterization, real-time ray-tracing and AI to drive big performance gains in gaming with NVIDIA GeForce RTX GPUs, visual effects with new NVIDIA Quadro RTX pro graphics cards, and hyperscale data centers with the new NVIDIA Tesla T4 GPU, the world’s first universal deep learning accelerator.

One Small Step for Man…

With a stunning demo, Huang showcased how our latest NVIDIA RTX GPUs — which enable real-time ray-tracing for the first time — allowed our team to digitally rebuild the scene around one of the lunar landing’s iconic photographs, that of astronaut Buzz Aldrin clambering down the lunar module’s ladder.

The demonstration puts to rest the assertion that the photo can’t be real because Buzz Aldrin is lit too well as he climbs down to the surface of the moon while in the shadow of the lunar lander. Instead, the simulation shows how the reflectivity of the surface of the moon accounts for exactly what’s seen in the controversial photo.

“This is the benefit of NVIDIA RTX. Using this type of rendering technology, we can simulate light physics and things are going to look the way things should look,” Huang said.

…One Giant Leap for Data Science

Bringing GPU computing back down to Earth, Huang announced a plan to accelerate the work of data scientists at the world’s largest enterprises.

RAPIDS open-source software gives data scientists facing complex challenges a giant performance boost. These challenges range from predicting credit card fraud to forecasting retail inventory and understanding customer buying behavior, Huang explained.

Analysts estimate the server market for data science and machine learning at $20 billion. Together with scientific analysis and deep learning, this pushes up the value of the high performance computing market to approximately $36 billion.

Developed over the past two years by NVIDIA engineers in close collaboration with key open-source contributors, RAPIDS offers a suite of open-source libraries for GPU-accelerated analytics, machine learning and, soon, data visualization.
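
As a rough illustration of what that suite looks like to a data scientist, the sketch below loads a table with cuDF’s pandas-style API and fits a cuML model on the GPU. The input file and column names are hypothetical, and this is a minimal usage sketch rather than an official RAPIDS example.

    # Minimal RAPIDS sketch: pandas-style dataframes (cuDF) and scikit-learn-style
    # machine learning (cuML) running on the GPU. File and column names are made up.
    import cudf
    from cuml.cluster import KMeans

    # Load a CSV straight into GPU memory and aggregate it, pandas-style.
    transactions = cudf.read_csv("transactions.csv")  # hypothetical input file
    spend = transactions.groupby("customer_id")["amount"].sum().reset_index()

    # Fit a GPU-accelerated clustering model on the aggregated feature.
    model = KMeans(n_clusters=8)
    model.fit(spend[["amount"]])
    spend["segment"] = model.labels_

    print(spend.head())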

RAPIDS has already won support from tech leaders such as Hewlett Packard Enterprise, IBM and Oracle as well as open-source pioneers such as Databricks and Anaconda, Huang said.

“We have integrated RAPIDS into basically the world’s data science ecosystem, and companies big and small, their researchers can get into machine learning using RAPIDS and be able to accelerate it and do it quickly, and if they want to take it as a way to get into deep learning, they can do so,” Huang said.

Bringing Data to Your Drive

Huang also outlined the strides NVIDIA is making with automakers, announcing that Swedish automaker Volvo has selected the NVIDIA DRIVE AGX Xavier computer for its vehicles, with production starting in the early 2020s.

DRIVE AGX Xavier — built around our Xavier SoC, the world’s most advanced system-on-a-chip — is a highly integrated AI car computer that enables Volvo to streamline development of self-driving capabilities while reducing total cost of development and support.

The initial production release will deliver Level 2+ automated driving features, going beyond traditional advanced driver assistance systems. The companies are working together to develop automated driving capabilities, uniquely integrating 360-degree surround perception and a driver monitoring system.

The NVIDIA-based computing platform will enable Volvo to implement new connectivity services, energy management technology, in-car personalization options, and autonomous drive technology.

It’s a vision that’s backed by a growing number of automotive companies, with Huang announcing Wednesday that, in addition to Volvo Cars, Volvo Trucks, tier one automotive components supplier Continental, and automotive technology companies Veoneer and Zenuity have all adopted NVIDIA DRIVE AGX.

Huang also showed the audience a video of how, this month, an autonomous NVIDIA test vehicle, nicknamed BB8, completed a jam-packed 80-kilometer (50-mile) loop in Silicon Valley without the safety driver needing to take control — even once.

Running on the NVIDIA DRIVE AGX Pegasus AI supercomputer, the car handled highway entrance and exits and numerous lane changes entirely on its own.

From Hospitals Serving Millions to Medicine Tailored Just for You

AI is also driving breakthroughs in healthcare, Huang explained, detailing how NVIDIA Clara will harness GPU computing for everything from medical scanning to robotic surgery.

He also announced a partnership with King’s College London to bring AI tools to radiology and deploy them across three hospitals serving 8 million patients in the U.K.

In addition, he announced NVIDIA Clara AGX — which brings the power of Xavier to medical devices — has been selected by Oxford Nanopore to power its personal DNA sequencer MinION, which promises to drive down the cost and drive up the availability of medical care that’s tailored to a patient’s DNA.

A New Computing Era

Huang finished his talk by recapping the new NVIDIA platforms being rolled out — the Turing GPU architecture; the RAPIDS data science platform; and DRIVE AGX for autonomous machines of all kinds.

Then he left the audience with a stunning demo of a nameless hero being prepared for action by his robotic assistants — a hero who returns to catch those robots bopping along to K.C. and the Sunshine Band and joins in the fun — before Huang came back on stage with a quick caveat.

“And I forgot to tell you everything was done in real time,” Huang said. “That was not a movie.”


GTC DC: Learn How Washington Plans to Keep the U.S. in Front in the AI Race

The U.S. government spends about $4 trillion a year, and the question every taxpayer seems to ask is: How can we get more, while paying less?

The answer more and more leaders are turning to: AI.

That’s why thousands of agency leaders, congressional staffers, entrepreneurs, developers and media will attend our third annual GTC DC event October 22-24 at the Reagan Center in Washington. GTC DC has quickly become the largest AI event in Washington.

It’s research, not rhetoric, attendees will tell you, that makes D.C. an AI accelerator like no other. The conference is packed with representatives from federal agencies — among them, the National Science Foundation, the National Institutes of Health, and DARPA — that routinely marshal scientific efforts on a scale far beyond that of anywhere else in the world.

Unmatched Research Leadership

These efforts extend deep into the computing industry, with the federal government commissioning the construction of supercomputers with ever more stupendous capabilities. Summit and Sierra, a pair of GPU-powered machines completed this year, represent an investment of $325 million. Summit is easily the world’s fastest.

And while Washington’s leaders are transforming AI, AI is transforming the region’s economy into one of the nation’s most vibrant startup hubs, with 54 deals worth $544 million in the second quarter of 2018 — up 29 percent from the year-ago period, according to the latest PwC MoneyTree report.

Bringing Public, Private Sector Leaders Together

All of this makes GTC DC a one-of-a-kind gathering, bringing together leaders from the public and private sectors for panel discussions about AI policy, and 150 talks about applying AI to a wide range of applications, from healthcare to cybersecurity to self-driving cars and autonomous machines.

The event features two keynote talks, from U.S. Chief Information Officer Suzette Kent and NVIDIA VP of Accelerated Computing Ian Buck.

Other notable speakers include Heidi King of the National Highway Traffic Safety Administration; James Kurose of the NSF; Derek Kan from the DOT; Elizabeth Jones from the National Institutes of Health; Missye Brickell from the U.S. Senate Committee on Commerce, Science and Transportation; Bakul Patel from the FDA; and Melissa Froelich from the House Committee on Energy and Commerce.

Leaders from the public and private sector will participate in panel discussions on policy issues including:

  • Artificial Intelligence and Autonomy for Humanitarian Assistance and Disaster Relief
  • American Leadership in AI Research
  • The Keys to Deploying Self-driving Cars
  • How AI Can Improve Citizen Services
  • AI for Healthcare
  • Transforming Agriculture with AI

This is your opportunity to join in the discussions around these — and other efforts — that the rest of the nation, and the world, will be seeing in the news months from now.

AI in Healthcare

Anchored by the National Institutes of Health, which spends more than $37 billion on research annually — making it the world’s largest funder of biomedical research — the DC area is home to a constellation of healthcare innovators, many of whom are flocking to GTC.

Luminaries such as Elizabeth Jones, acting director of radiology and imaging sciences at the National Institutes of Health; Bakul Patel, associate director at the U.S. Food and Drug Administration; and Agata Anthony, regulatory affairs executive at General Electric, will discuss how to bring AI out of labs and into clinics.

Other healthcare speakers include:

  • Daniel Jacobson, chief scientist for systems biology at Oak Ridge National Laboratory, who will talk about how Summit, the world’s fastest supercomputer, is being used to attack the opioid epidemic.
  • Faisal Mahmood, a postdoctoral fellow from Johns Hopkins University, who will talk about how a new generation of AI-generated images can be used to accelerate efforts to train sophisticated new medical imaging systems.
  • Avantika Lal, a research scientist from NVIDIA’s deep learning genomics group, who will explain how deep learning can transform noisy, low-quality DNA sequencing data into clean, high-quality data.

AI in Cyber-Security and Law Enforcement

Cybersecurity is another industry where the DC area leads the way — with the region employing more than 77,500 cybersecurity professionals. It’s also an industry that’s among the leaders in AI adoption. Among featured speakers covering the topic:

  • Booz Allen Hamilton Lead Data Scientist Rachel Allen will talk about how to secure sprawling commercial and government networks.
  • NVIDIA’s Bartley Richardson, senior data scientist for AI infrastructure, will talk about new machine learning approaches to cybersecurity threats.
  • And, if you’re a fan of CSI, Graphistry CEO Leo Meyerovich will talk about bringing the latest graphics technology to crime scene analytics.

AI for Safer Driving

The big-picture thinking at GTC DC extends to self-driving cars, too.

While carmakers continue to add more and more autonomy to their vehicles, policymakers are working on the infrastructure and regulatory changes that will make mass adoption of fully autonomous vehicles possible.

The highlight at GTC DC will be a panel discussion on deploying self-driving cars.

Among the speakers:

  • Melissa Froelich, a staffer from the U.S. House Committee on Energy and Commerce;
  • Brad Stertz, director of Government Affairs at Audi;
  • Bert Kaufman, head of corporate and regulatory affairs at Zoox;
  • Finch Fulton, deputy assistant secretary for Transportation Policy at the U.S. Transportation Department.

Get ahead of the game – come to GTC DC to learn what the rest of the nation, and the world, will be seeing in the news months from now.


Qualcomm’s Rhetoric Pierced

In response to recent litigation developments, including today’s decision in the International Trade Commission, Intel’s general counsel, Steve Rodgers, has written the following:

By Steven R. Rodgers

In July 2017, Qualcomm launched a worldwide campaign of patent litigation as part of its efforts to eliminate competition and preserve its unlawful “no license, no chips” regime, which has already been found to violate competition laws across the globe. Indeed, Qualcomm has already been fined $975 million in China, $850 million in Korea, $1.2 billion by the European Commission, and $773 million in Taiwan (although the case later settled for a reduced fine) for its anticompetitive practices. Qualcomm has also been found to be in violation of Japanese competition law, and the U.S. Federal Trade Commission is pursuing claims in federal court against it for alleged violation of U.S. antitrust law.

Qualcomm has had a lot to say publicly about its litigation campaign – and about Intel. It has publicly disparaged Intel’s products – products created by the innovation and hard work of dedicated teams of scientists and engineers at Intel. It has asked a judge to order a customer not to purchase Intel’s modems, claiming, among other things, that Intel’s engineers could have made their inventions only by purloining ideas from Qualcomm. It has claimed that its patents form the very core of modern mobile communication technologies and networks, and extend even into future technologies.

It is easy to say things. But Intel’s track record is clear. Intel has been one of the world’s leading technology innovators for more than 50 years. We are proud of our engineers and employees who bring the world’s best technology solutions to market through hard work, sweat, risk-taking and great ideas.  Every day, we push the boundaries of computing and communication technologies. And, the proof is in the pudding: Last year, the U.S. Patent Office awarded more patents to Intel than to Qualcomm.

For the most part, we have chosen, and will continue to choose, to respond to Qualcomm’s statements in court, not in public. This week, in one lawsuit, Qualcomm failed to win its case on 88 patent claims it said were infringed by products, including Intel’s modem. And, in another case, a federal judge found “considerable, compelling common proof” that Qualcomm has required companies “to accept a separate license to Qualcomm’s cellular [standard essential patents] in order to gain access to Qualcomm’s modem chips.” This is the “no license, no chips” scheme that has been found to be part of Qualcomm’s anticompetitive conduct challenged in so many countries.

As one of the world’s largest patent holders, Intel respects intellectual property. But we also respect truth, candor and fair competition. And we look forward to continuing to compete with Qualcomm.

Steven R. Rodgers is executive vice president and general counsel of Intel Corporation.


Supply Update

To our customers and partners,

The first half of this year showed remarkable growth for our industry. I want to take a moment to recap where we’ve been, offer our sincere thanks and acknowledge the work underway to support you with performance-leading Intel products to help you innovate.

First, the situation … The continued explosion of data and the need to process, store, analyze and share it is driving industry innovation and incredible demand for compute performance in the cloud, the network and the enterprise. In fact, our data-centric businesses grew 25 percent through June, and cloud revenue grew a whopping 43 percent in the first six months. The performance of our PC-centric business has been even more surprising. Together as an industry, our products are convincing buyers it’s time to upgrade to a new PC. For example, second-quarter PC shipments grew globally for the first time in six years, according to Gartner. We now expect modest growth in the PC total addressable market (TAM) this year for the first time since 2011, driven by strong demand for gaming as well as commercial systems – a segment where you and your customers trust and count on Intel.

We are thrilled that in an increasingly competitive market, you keep choosing Intel. Thank you.

Now for the challenge… The surprising return to PC TAM growth has put pressure on our factory network. We’re prioritizing the production of Intel® Xeon® and Intel® Core™ processors so that collectively we can serve the high-performance segments of the market. That said, supply is undoubtedly tight, particularly at the entry-level of the PC market. We continue to believe we will have at least the supply to meet the full-year revenue outlook we announced in July, which was $4.5 billion higher than our January expectations.

To address this challenge, we’re taking the following actions:

  • We are investing a record $15 billion in capital expenditures in 2018, up approximately $1 billion from the beginning of the year. We’re putting that $1 billion into our 14nm manufacturing sites in Oregon, Arizona, Ireland and Israel. This capital along with other efficiencies is increasing our supply to respond to your increased demand.
  • We’re making progress with 10nm. Yields are improving and we continue to expect volume production in 2019.
  • We are taking a customer-first approach. We’re working with your teams to align demand with available supply. You can expect us to stay close, listen, partner and keep you informed.

The actions we are taking have put us on a path of continuous improvement. At the end of the day, we want to help you make great products and deliver strong business results. Many of you have been longtime Intel customers and partners, and you have seen us at our best when we are solving problems.

Sincerely,

Bob Swan
Intel Corporation CFO and Interim CEO

 

Forward-Looking Statements

Statements in this letter that refer to forecasts, future plans or expectations are forward-looking statements that involve a number of risks and uncertainties. Statements that refer to or are based on projections, uncertain events or assumptions also identify forward-looking statements. Such statements are based on the company’s current expectations and involve many risks and uncertainties that could cause actual results to differ materially from those expressed or implied in these forward-looking statements. Important factors that could cause actual results to differ materially from the company’s expectations are set forth in Intel’s earnings release dated July 26, 2018, which is included as an exhibit to Intel’s Form 8-K furnished to the SEC on such date.  Additional information regarding these and other factors that could affect Intel’s results is included in Intel’s SEC filings, including the company’s most recent reports on Forms 10-K and 10-Q. Copies of Intel’s Form 10-K, 10-Q and 8-K reports may be obtained by visiting our Investor Relations website at www.intc.com or the SEC’s website at www.sec.gov.


Reinforcement Learning ‘Really Works’ for AI Against Pro Gamers, OpenAI Trailblazer Says

Fast, creative, smart — great gamers are all these things. Somebody has to teach machines how to keep up. That somebody is Ilya Sutskever and his team at OpenAI.

Sutskever, co-founder and research director of OpenAI, and his team are developing AI bots smart enough to battle some of the world’s best human gamers.

In August, OpenAI Five, a team of five neural networks, was defeated by some of the world’s top professional players of Dota 2, the wildly popular multiplayer online battle arena game.

It was a leap for OpenAI Five to even be playing a nearly unrestricted version of Dota 2 at a professional level. The matches took place at The International, Valve’s annual competition in Vancouver — a world series of esports played for tens of millions of dollars.

That’s because Dota 2 is an extremely complex game. Players can unleash an enormous number of tactics, strategies and interactions in the quest to win. The game layout — only partially observable — requires both short-term tactics and long-term strategy, as each match can last 45 minutes. “Professional players dedicate their lives to this game,” said Sutskever. “It’s not an easy game to play.”

Sutskever spoke Thursday at NTECH, an annual engineering conference at NVIDIA’s Silicon Valley campus. The internal event drew an enthusiastic crowd of several hundred engineers — many also huge gaming fans — and hundreds more online.

Dota 2 Raises AI-Gaming Bar

OpenAI Five’s Dota 2 work marks an entirely new level for human-versus-AI challenges. For comparison, in chess and Go — also popular AI challenges — the average number of possible actions per move is 35 and 250, respectively. In Dota 2, with its far more complex rules, there are about 170,000 possible actions per move, and a game runs about 20,000 moves.

With all of Dota 2’s complexity, it’s closer to the real world than any other previous game tackled by an AI, he said. “So, how did we do it? We used large scale RL (reinforcement learning),” Sutskever told the audience.

Reinforcement learning matters for humans and machines alike. When we earn a bonus point in a game with one move, or get blown to bits with another, each of these moments provides reinforcement learning — burned into memory — for the next go-round.

Reinforcement learning matters to AI because it is a very natural way of training neural networks to act in order to achieve goals, which is essential for building an intelligent system.

OpenAI Five has seen spectacular results because it used a reliable reinforcement learning algorithm (Proximal Policy Optimization) at massive scale, running on more than 1,000 NVIDIA Tesla P100 GPUs on Google Cloud Platform.
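
The heart of Proximal Policy Optimization is small: a clipped objective that limits how far each gradient step can move the policy away from the one that collected the data. Below is a textbook-level sketch of that loss in PyTorch, a toy illustration of the algorithm OpenAI scaled up rather than OpenAI’s training code.

    # Toy sketch of PPO's clipped surrogate objective (textbook form, not OpenAI's code).
    import torch

    def ppo_loss(new_log_probs, old_log_probs, advantages, clip_eps=0.2):
        """Clipped surrogate loss: keeps each update close to the data-collecting policy."""
        ratio = torch.exp(new_log_probs - old_log_probs)  # pi_new(a|s) / pi_old(a|s)
        unclipped = ratio * advantages
        clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
        return -torch.min(unclipped, clipped).mean()  # negate: optimizers minimize

    # Tiny made-up batch: log-probs under the old and new policies, plus advantage estimates.
    old_lp = torch.tensor([-1.2, -0.7, -2.1])
    new_lp = torch.tensor([-1.0, -0.9, -1.8], requires_grad=True)
    adv = torch.tensor([0.5, -0.3, 1.1])

    loss = ppo_loss(new_lp, old_lp, adv)
    loss.backward()  # gradients that would drive the policy update
    print(float(loss))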

NVIDIA has been there as an early supporter, with CEO Jensen Huang personally delivering the first DGX-1 AI supercomputer in a box for the folks at OpenAI.

History of GPU Challenges

Sutskever is no stranger at unleashing GPUs on AI’s biggest challenges. He was among the trio of University of Toronto researchers — including Alex Krizhevsky and advisor Geoffrey Hinton — who pioneered a GPU-based convolutional neural network to take the prestigious ImageNet competition by storm.

The results — which nearly slashed the error rate in half — go down in history as the moment that spawned the modern AI boom.

The resulting model — dubbed AlexNet — is the basis of countless deep learning models. At GTC 2018, Huang spoke of AlexNet’s influence on thousands of AI strains, stating: “Neural networks are growing and evolving at an extraordinary rate.”

Sutskever says leaps in AI track closely to processing gains. “It’s pretty remarkable that the amount of compute from the original AlexNet to AlphaGo Zero is 300,000x. You’re talking about a five-year gap. Those are big increases.”

OpenAI’s ‘Moonshot’ Ambitions

OpenAI is a nonprofit that was formed in 2015 to develop and release artificial general intelligence aimed at benefiting humanity. Its founding members include Tesla CEO Elon Musk, Y Combinator President Sam Altman and other tech luminaries who have collectively committed $1 billion to its mission.

Researchers at OpenAI are also making strides on a project called Dactyl, which aims to increase the dexterity of a robot hand. The team there has been working on domain randomization — an old concept — with remarkable results. They have been able to train the robot hand to manipulate objects in simulation, and then transfer that knowledge to real-world manipulation. This is important, because simulation is the only way to get enough training experience for these robots. “The idea works really, really well,” Sutskever said.
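
The gist of domain randomization fits in a few lines: every training episode runs in a simulator whose physical parameters are re-sampled, so the policy never gets to overfit a single version of the world. The sketch below is illustrative only; the parameter names, ranges and training loop are invented, not OpenAI’s Dactyl code.

    # Illustrative domain randomization loop; parameters and ranges are invented.
    import random

    def randomized_sim_params():
        """Sample a fresh physics configuration for one training episode."""
        return {
            "object_mass_kg": random.uniform(0.03, 0.3),
            "friction_coeff": random.uniform(0.5, 1.2),
            "motor_gain": random.uniform(0.8, 1.2),
            "camera_jitter_px": random.uniform(0.0, 4.0),
            "action_delay_ms": random.choice([0, 20, 40]),
        }

    def run_episode(policy, params):
        # Stand-in for rolling the policy out in a simulator built with `params`.
        return random.random()  # pretend episode reward

    def train(policy, episodes=5):
        for episode in range(episodes):
            params = randomized_sim_params()  # a new "world" every episode
            reward = run_episode(policy, params)
            print(f"episode {episode}: reward={reward:.2f} mass={params['object_mass_kg']:.2f}kg")
            # ...a policy update (e.g., PPO) would go here...

    train(policy=None)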

Sutskever is keen on pushing common AI concepts such as reinforcement learning and domain randomization to new heights. In the wide-ranging discussion at NTECH, he praised the conclusions of Arthur C. Clarke’s book Profiles of the Future, which noted that, historically, doubts were cast on great inventions such as the airplane and space travel.

Skepticism, he said, initially led the U.S. to pass on building and sending a 200-ton rocket to space — on the grounds that it was too large to be built. “So the Russians went on and built a 200-ton rocket,” he quipped, drawing audience laughter.

 


In the Eye of the Storm: The Weather Channel Forecasts Hurricane Florence With Stunning Visuals

With Hurricane Florence threatening flash floods, The Weather Channel on Thursday broadcast its first-ever live simulation to convey the storm’s severity before it hit land.

The Atlanta-based television network has adopted graphics processing more common to video game makers in its productions. The result is a stunning, immersive mixed reality visual that accompanies meteorologists in live broadcasts.

Hurricane Florence slammed into the southeastern shore of North Carolina early Friday morning. Wind speeds of the category 1 hurricane have reached 90 miles per hour, and up to 40 inches of rain have been forecast to drench the region.

Warnings for life-threatening storm surge flooding have been in effect along the North Carolina coast.

The Weather Channel began working with this immersive mixed reality in 2016 to better display the severity of conditions through graphically intense simulations powered by high performance computing. Only recently has this kind of immersive mixed reality become a broadcast-news technique for conveying the severity of life-threatening weather.

In June, The Weather Channel began releasing immersive mixed reality for live broadcasts, tapping The Future Group along with its own teams of meteorologists and designers. Their objective was to deliver new ways to convey the severity of the weather, said Michael Potts, vice president of design at The Weather Channel.

“Our larger vision is to evolve and transform how The Weather Channel puts on its presentation, to leverage this immersive technology,” he added.

The Weather Channel takes the traditional green-screen set — the backdrop for broadcast visuals — and places the meteorologist at its center for a live broadcast. The weather simulation displays the forecast via the green screen, which wraps around the broadcaster with real-time visuals in sync with the broadcast. “It’s a tremendous amount of real-time processing, enabled by NVIDIA GPUs,” said Potts.

It’s science-based, too. The Weather Channel feeds wind speed, direction, rainfall and countless other meteorological data points into the 3D renderings to produce accurate visualizations.

Video game-like production was made possible through The Weather Channel’s partnership with Oslo, Norway-based The Future Group, a mixed reality company with U.S. offices. The Future Group’s Frontier graphics platform, based on the Epic Games Unreal Engine 4 gaming engine, was enlisted to deliver photorealistic immersive mixed reality backdrops.

“The NVIDIA GPUs are allowing us to really push the boundaries. We’re rendering 4.7 million polygons in real time,” said Lawrence Jones, executive vice president of the Americas at The Future Group. “The pixels that are being drawn are actually changing lives.”


Intel Declares Quarterly Cash Dividend

SANTA CLARA, Calif., Sept. 14, 2018 – Intel Corporation today announced that its board of directors has declared a quarterly dividend of $0.30 per share ($1.20 per share on an annual basis) on the company’s common stock. The dividend will be payable on Dec. 1, 2018, to stockholders of record on Nov. 7, 2018.
