A lot has been said recently about our GeForce Partner Program. The rumors, conjecture and mistruths go far beyond its intent. Rather than battling misinformation, we have decided to cancel the program.
GPP had a simple goal – ensuring that gamers know what they are buying and can make a clear choice.
NVIDIA creates cutting-edge technologies for gamers. We have dedicated our lives to it. We do our work at a crazy intense level – investing billions to invent the future and ensure that amazing NVIDIA tech keeps coming. We do this work because we know gamers love it and appreciate it. Gamers want the best GPU tech. GPP was about making sure gamers who want NVIDIA tech get NVIDIA tech.
With GPP, we asked our partners to brand their products in a way that would be crystal clear. The choice of GPU greatly defines a gaming platform. So, the GPU brand should be clearly transparent – no substitute GPUs hidden behind a pile of techno-jargon.
Most partners agreed. They own their brands and GPP didn’t change that. They decide how they want to convey their product promise to gamers. Still, today we are pulling the plug on GPP to avoid any distraction from the super exciting work we’re doing to bring amazing advances to PC gaming.
This is a great time to be a GeForce partner and be part of the fastest growing gaming platform in the world. The GeForce gaming platform is rich with the most advanced technology. And with GeForce Experience, it is “the way it’s meant to be played.”
Eni, a multinational oil and gas company, holds bragging rights to HPC4, the world’s most powerful commercial supercomputer.
Running 3,200 NVIDIA Tesla GPUs, Eni’s parallel processing beast has completed a breakthrough calculation that highlights an acceleration in reservoir assessment.
Eni’s supercomputing feat was achieved in partnership with Baltimore-based Stone Ridge Technology, which develops and licenses ECHELON, high-performance petroleum reservoir simulation software built on NVIDIA GPUs.
The 18.6-petaflops supercomputer processed 100,000 reservoir models in about 15 and a half hours, a task that would take 10 days using legacy hardware and software. Each individual model simulated 15 years of production in an average of 28 minutes.
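Those figures imply a striking degree of parallelism. As a back-of-the-envelope check — assuming (our assumption, not Eni's) that the models ran fully in parallel across the cluster — the implied concurrency works out to roughly one model per GPU:

```python
# Back-of-the-envelope check of Eni's reported throughput.
# Figures from the announcement: 100,000 models, ~28 minutes each
# on average, all finished in 15 hours 25 minutes.
total_models = 100_000
minutes_per_model = 28
wall_clock_minutes = 15 * 60 + 25  # 925 minutes

# Total serial compute, expressed in model-minutes.
serial_minutes = total_models * minutes_per_model  # 2,800,000

# Average number of models that must have been running concurrently.
concurrency = serial_minutes / wall_clock_minutes
print(round(concurrency))  # ~3,027
```

That average of about 3,000 concurrent models lines up neatly with HPC4's 3,200 Tesla GPUs.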
Modeling oil reservoirs is no small computing problem. First, exploration experts need to find the reserves by essentially drumming the Earth’s surface and capturing the reflected sound waves.
After massive numerical processing, this reflected-wave data is turned into images that geoscientists use to determine whether a reservoir prospect contains hydrocarbons and where those hydrocarbons are located within the image. Over the past decade, GPUs have played an increasingly important role in reservoir simulations.
Minimizing Financial, Environmental Risk
Much is at stake for oil companies seeking to responsibly tap new reservoirs or reassess production stage fields at the least financial and environmental risk. Drilling can cost hundreds of millions of dollars. After locating hydrocarbons, quickly determining the most profitable strategies for new or ongoing production matters. For this task oil companies use reservoir simulators.
These reservoir simulators are technical software applications that model how hydrocarbons and water flow underground in the presence of wells. They let oil companies such as Eni evaluate virtual production strategies and “what-if” scenarios on supercomputers before committing to a new project, as well as readjust their models of producing wells in the real world.
Simulators traditionally run on CPU-based hardware and are limited in both performance and in the size of the models. It’s not uncommon to have models that take days to run. To improve the odds of hitting production targets, the energy giants are increasingly turning to higher-resolution models and faster software powered by NVIDIA GPUs.
“Stone Ridge was an early adopter of NVIDIA GPUs for high-performance computing, and we built ECHELON from the ground up to exploit the technology,” said Vincent Natoli, founder and president of Stone Ridge. “We see the benefit of that choice now as ECHELON has become faster and more capable with each generation of NVIDIA product. We significantly outpace our CPU-based competitors in both performance and scalability, and each year the gap has become more pronounced.”
Simulating a Reservoir’s Potential
Eni’s announcement that its HPC4 supercomputer running ECHELON can speed reservoir simulations highlights new records for the industry. Eni was able to take a high-resolution model of a deep-water reservoir with 5.7 million active cells and generate 100,000 models with varying petro-physical properties. All 100,000 models were completed in 15 hours and 25 minutes, running on Eni’s HPC4 sporting 3,200 Tesla P100 GPUs.
NVIDIA and DEEPCORE, a Tokyo-based startup incubator owned by SoftBank, are working together to support AI startups and promote university research programs across Japan.
Launched earlier this year with a mission to cultivate entrepreneurs who aspire to change the world with technology, DEEPCORE will use NVIDIA’s AI computing platform to build out the technology infrastructure of its incubation program.
Program members will have access to NVIDIA DGX systems and the NVIDIA GPU Cloud, which gives researchers and data scientists easy access to a comprehensive catalog of GPU-optimized software tools for deep learning and high-performance computing. DEEPCORE is developing its GPU-accelerated AI computing platform in its open, collaboration-focused R&D space, called KERNEL, located near the University of Tokyo.
NVIDIA will offer AI training to DEEPCORE customers and incubator members via its Deep Learning Institute, which provides hands-on training for developers, data scientists and researchers.
NVIDIA will also provide technical and business advice to members regarding its GPU-accelerated products and services.
DEEPCORE and NVIDIA additionally plan to explore business opportunities for members as part of NVIDIA’s Inception program, a virtual startup accelerator working with more than 2,800 companies around the world.
DEEPCORE works in collaboration with the University of Tokyo and other top schools to unleash the potential in technology for entrepreneurs, researchers, engineers and others in Japan and around the globe.
New technologies will allow you to use VR to travel to stunning new worlds. Or, through augmented reality, to blend those worlds with your own.
At the Adobe Summit in Las Vegas before an audience of more than 10,000 marketing pros and data scientists, Adobe CEO Shantanu Narayen and NVIDIA CEO Jensen Huang spoke Wednesday about how AI and machine learning will unleash wild new possibilities.
In the wide-ranging talk, which highlighted a just-announced strategic partnership, Narayen and Huang spoke about how new tools — such as Adobe’s new Sensei AI and machine learning framework — will enhance the work of creative and marketing professionals in a variety of fields.
“You guys are going to create experiences with these tools … that will allow people to create an enormous number of new realities,” Huang told the audience.
Adobe and NVIDIA, of course, have worked together for more than a decade to enable GPU acceleration for a broad set of Adobe’s creative and digital experience products — a relationship touched on as Huang and Narayen talked about their shared history at the cutting edge of computer graphics.
More’s coming. Offering a taste of some of the capabilities the just-announced partnership will bring to market, Adobe showed a jaw-dropping live demo that paired NVIDIA’s powerful Ansel photo mode for games with Adobe’s AI capabilities.
Adobe Sensei Comes to Novigrad
Taking the audience to the richly detailed world of “The Witcher 3: The Wild Hunt,” Adobe’s AI was able to describe the aesthetics of a scene and instantly tag thousands of people, objects and animals as a camera panned around the village of Novigrad.
And then, with a simple voice command of “show me the angry man with arms crossed,” it was able to call up an image from the experience.
“Obviously, you can do this for video games, but you can do this for real things …” Huang said.
“Retail experiences, travel experiences …,” Narayen replied, letting the audience ponder the possibilities for a moment.
It’s just one example of how Adobe and NVIDIA’s collaboration promises to speed time to market and improve the performance of Sensei-powered services for Adobe Creative Cloud and Experience Cloud customers and developers.
“This was done by virtue of the fact that you have this open architecture that allows us — and we’re doing the same for our customers — to make this available so that our customers can apply their own data science,” Narayen said.
The partnership promises to enhance Sensei-powered features, such as auto lip sync in Adobe Character Animator CC and face-aware editing in Photoshop CC, as well as cloud-based AI and machine learning products and features, such as image analysis for Adobe Stock and Lightroom CC and auto-tagging in Adobe Experience Manager.
Ray Tracing Brings Cinematic Quality to Real-Time Experiences
NVIDIA has long sought to create virtual experiences that model the real world. One of the biggest challenges in the field: modeling the physics of how light moves around a scene to deliver the kind of realism — in real time — that now takes moviemakers hundreds of hours to accomplish.
After 10 years of endeavor, NVIDIA earlier this month announced NVIDIA RTX ray-tracing technology, which will allow content creators to use this technique in real time. “It’s just a huge breakthrough,” Huang told Narayen.
How Graphics Led to AI
Simulating physics — key to creating life-like graphics — also led to NVIDIA’s AI and machine learning work, Huang said. To do that, NVIDIA had to build capabilities into its technologies that now make its GPUs invaluable for the deep learning revolution.
Seven years ago, Huang said, he pivoted NVIDIA to take advantage of this new opportunity. “We saw a new model of software that we call deep learning and AI that was going to change the development of software going forward, so that we could write software that no human could write,” Huang said.
Roll Up Your Sleeves
NVIDIA’s march into AI is an example of how successful companies — like Adobe — change over time. “There is no alternative to rolling up your sleeves and trying to understand yourself and the implications of the new dynamics in the industry,” Huang said.
The conversation ended with Narayen asking if the rumor that Huang has a tattoo of NVIDIA’s logo was true. Huang then told the story of how his colleagues challenged him to get a tattoo if NVIDIA’s stock ever hit $100.
“I’ll show you my tattoo,” Huang said. “However, I think that Shantanu should get a tattoo, too,” he added, to applause from the audience.
Then Huang, at Narayen’s prompting, took off his jacket and rolled up the sleeve of his shirt to show a tattoo of NVIDIA’s logo on his bicep.
“So Shantanu’s going to get a tattoo,” Huang said with a laugh.
For more, follow #AdobeSummit on your favorite social media platform.
Standing before 650 people and a panel of high-profile judges to pitch your startup, with just five minutes to tell your story, is a little “unnerving,” the trim Stanford professor admits.
But Zaharchuk — a co-founder of Subtle Medical — and teams from two other startups were smiling Tuesday night after blitzing through their presentations before a crowd of technologists, investors, business leaders and other peers.
Subtle Medical was among a group of three AI-driven startups that won a little bit of celebrity — and, between them, $1 million in cash — Tuesday night at the second annual Inception Awards, held at our annual GPU Technology Conference in Silicon Valley.
The winners, named after a series of quick, high-pressure pitches to a crowd of technologists, press, investors and entrepreneurs, were: AiFi, which is building checkout-free systems for stores of all sizes; Subtle Medical, which is improving medical imaging for better acquisition, reconstruction, processing and analysis; and Kinema Systems, which is creating robotics for logistics and manufacturing.
“You guys are solving such large problems,” NVIDIA CEO Jensen Huang told the standing room only crowd gathered for the event. “And all of you are so polished and so ambitious and what you’re doing is so important.”
It was a competition designed for drama — and the competitors worked until the very last moment to improve their odds. AiFi’s Kaushal Patel admits pulling an all-nighter to add a slide to his team’s presentation. He missed the deadline, but had no complaints. “It all worked out in the end,” Patel says.
These are stories any entrepreneur can relate to.
“I was so happy that I didn’t have to be tortured like that when I was raising money,” said Huang of the intense competition. Huang appeared alongside fellow NVIDIA founder Chris Malachowsky on stage at the event.
Malachowsky reminded Huang of the time, 25 years ago, when they stayed up late trying to finish a business plan for the then-fledgling startup to present to investors the next day. They never finished it.
“You had to go there,” Huang replied. “That memory just gave me hot flashes.”
Tuesday’s winners survived a selection process that brought a dozen semifinalists to our Silicon Valley campus earlier this month. Of those who gave their pitches, six were selected to go on to the pressure-packed finals Tuesday. Each of the remaining contenders had compelling stories.
They got to tell those stories to a panel of high-profile judges that included Pawan Tewari from Goldman Sachs; Steve Wymer from Fidelity Investments; Jaimin Rangwalla from Coatue Management; and NVIDIA’s Jeff Herbst, vice president of business development.
But while the pressure was real, the exposure is invaluable. Last year’s winners and nominees — 14 companies in all — have already raised a combined total of $180 million from Sequoia Capital, Data Collective, Khosla Ventures, Lux Capital and others.
Millions of servers powering the world’s hyperscale data centers are about to get a lot smarter.
NVIDIA CEO Jensen Huang Tuesday announced new technologies and partnerships that promise to slash the cost of delivering deep learning-powered services.
Speaking at the kickoff of the company’s ninth annual GPU Technology Conference, Huang described a “Cambrian Explosion” of technologies driven by GPU-powered deep learning, bringing new capabilities that go far beyond accelerating images and video.
“In the future, starting with this generation, starting with today, we can now accelerate voice, speech, natural language understanding and recommender systems as well as images and video,” Huang, clad in his trademark leather jacket, told an audience of 8,500 technologists, business leaders, scientists, analysts and press gathered at the San Jose Convention Center.
Over the course of a two-and-a-half-hour keynote, Huang also unveiled a series of advances to NVIDIA’s deep learning computing platform that deliver a 10x performance boost on deep learning workloads compared with just six months ago; launched the Quadro GV100, transforming workstations with 118.5 TFLOPS of deep learning performance; and introduced DRIVE Constellation to run self-driving car systems through billions of simulated miles.
Power to the Pros
Huang’s keynote got off to a brisk start with the launch of the new Quadro GV100. Based on Volta, the world’s most advanced GPU architecture, the Quadro GV100 packs 7.4 TFLOPS of double-precision, 14.8 TFLOPS of single-precision and 118.5 TFLOPS of deep learning performance, and is equipped with 32GB of high-bandwidth memory.
GV100 sports a new interconnect called NVLink 2 that extends the programming and memory model beyond one GPU to a second, so the two essentially function as a single GPU. Combined, they pack 10,000 CUDA cores, 236 teraflops of Tensor Core performance and 64GB of memory, all used to revolutionize modern computer graphics.
Deep Learning’s Swift Rise
The announcements come as deep learning gathers momentum. In less than a decade, the computing power of GPUs has grown 20x — representing growth of 1.7x per year, far outstripping Moore’s law, Huang said.
“We are all in on deep learning, and this is the result,” Huang said.
Drawn to that growing power, in just five years the number of GPU developers has risen 10x to 820,000. Downloads of CUDA, our parallel computing platform, have risen 5x to 8 million.
“More data, more computing are compounding together into a double exponential for AI — that’s one of the reasons why it’s moving so fast,” Huang said.
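Those growth figures hang together arithmetically. A quick compound-growth calculation (the time spans here are our own illustration, not stated in the keynote) shows a 20x gain at 1.7x per year indeed arrives well inside a decade:

```python
import math

# At 1.7x growth per year, how long does a 20x overall gain take?
annual_growth = 1.7
overall_gain = 20

years = math.log(overall_gain) / math.log(annual_growth)
print(round(years, 1))  # ~5.6 years, well under a decade

# For comparison, plain Moore's law pacing (2x every 2 years)
# over the same span yields only about a 7x gain.
moore_gain = 2 ** (years / 2)
```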
Bringing Deep Learning Inferencing to Millions of Servers
The next step: putting deep learning to work on a massive scale. Doing so means addressing seven challenges: programmability, latency, accuracy, size, throughput, energy efficiency and rate of learning.
Together, they form the acronym PLASTER.
Meeting these challenges will require more than just sticking an ASIC or an FPGA in a datacenter, Huang said. “Hyperscale data centers are the most complicated computers ever made — how could it be simple?” Huang said.
To put even more innovation to work faster, Huang announced a new version of our TensorRT inference software, TensorRT 4. Used to deploy trained neural networks in hyperscale datacenters, TensorRT 4 offers INT8 and FP16 network execution, cutting datacenter costs by up to 70 percent, Huang said.
The software delivers up to 190x faster deep learning inference than CPUs for common applications such as computer vision, neural machine translation, automatic speech recognition, speech synthesis and recommendation systems.
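The core idea behind INT8 execution — mapping 32-bit floating-point tensors onto 8-bit integers via a scale factor — can be sketched in a few lines. This is a simplified illustration of the general technique, not TensorRT’s actual calibration algorithm:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor INT8 quantization, the basic idea used
    (in far more sophisticated form) by reduced-precision inference."""
    scale = np.abs(x).max() / 127.0          # map the largest magnitude to 127
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# A random tensor stands in for a trained layer's weights.
weights = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(weights)

# INT8 storage is 4x smaller than FP32, and integer math is cheaper,
# at the cost of a small, bounded rounding error per element.
error = np.abs(dequantize(q, scale) - weights).max()
```

The payoff is that each value occupies one byte instead of four, and the worst-case rounding error is half the scale factor — the tradeoff that lets inference engines trade a sliver of precision for large gains in throughput and energy efficiency.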
Support from Key Partners
TensorRT 4 and GPU-powered inferencing are drawing support from around the technology industry.
Huang also announced support for GPU acceleration in Kubernetes, which facilitates enterprise inference deployment on multi-cloud GPU clusters, and punctuated the news with a stunning demo of a flower recognition system scaling up to remarkable speeds.
“This is like magic, Kubernetes is orchestrating this datacenter — it can assign one GPU, or many GPUs on one server, or many GPUs on many servers,” Huang said. “It can also assign it across datacenters, so you can have some work done on a cloud and some in our datacenter, some on this cloud and some on that cloud.”
In addition, Microsoft, which recently announced AI support for Windows 10 applications, has partnered with NVIDIA to build GPU-accelerated tools to help developers incorporate more intelligent features in Windows applications.
NVIDIA engineers have also worked closely with Amazon, Facebook and Microsoft to ensure developers using ONNX-compatible frameworks such as Caffe2, Chainer, CNTK, MXNet and PyTorch can now easily deploy to NVIDIA deep learning platforms.
“Our strategy at NVIDIA is to advance GPU computing, to advance GPU deep learning at the speed of light, irrespective of whatever kind of AI framework you use or the deep learning network you want to create,” Huang said.
Feeding the Need for Speed
At the same time, Huang announced that the GPU-driven systems where advanced new deep learning networks are trained are growing vastly more powerful.
“Clearly the adoption of GPU computing is growing and it’s growing at quite a fast rate,” Huang said. “The world needs larger computers because there is so much work to be done in reinventing energy, trying to understand the Earth’s core to predict future disasters, or understanding and simulating weather, or understanding how the HIV virus works.”
Key advancements to the NVIDIA platform — which has been adopted by every major cloud-services provider and server maker — include a 2x memory boost to NVIDIA Tesla V100, the most powerful datacenter GPU, and a revolutionary new GPU interconnect fabric called NVIDIA NVSwitch, which enables up to 16 Tesla V100 GPUs to simultaneously communicate at a record speed of 2.4 terabytes per second.
Harnessing these innovations, NVIDIA launched NVIDIA DGX-2, the first single server capable of delivering two petaflops of computational power. DGX-2 has the deep learning processing power of 300 servers occupying 15 racks of datacenter space, while being 60x smaller and 18x more power efficient.
It is, in effect, a single GPU. “The world wants a gigantic GPU, not a big one, a gigantic one, not a huge one, a gigantic one,” Huang said moments before unveiling the DGX-2.
Simulating Billions of Miles of Driving
We’re also applying deep learning — and the powerful visualization capabilities of GPUs — to speed the development of a new generation of self-driving vehicles with DRIVE Constellation.
DRIVE Constellation pairs a server simulating a self-driving vehicle’s sensors — such as cameras, lidar and radar — with another server equipped with our DRIVE Pegasus AI car computer.
The result: autonomous vehicles can be driven for billions of miles — and tested in a vast number of situations — before they’re put on the road.
Fusing Man, Machine in VR
Finally, in a rousing finale to his keynote, Huang showed how a human driver can take control of an autonomous vehicle, remotely. The live demonstration of real-time, bi-directional communication between sensors in an autonomous vehicle and a VR environment hints at a future where intelligent machines can work seamlessly with humans.
“Teleportation — the future has arrived,” Huang declared.
It’s less than 24 hours until NVIDIA CEO Jensen Huang delivers the keynote at our ninth annual GPU Technology Conference in Silicon Valley, and the action’s already begun.
The crowd of more than 8,000 surging into the McEnery Convention Center — which includes researchers, press, technologists, analysts and partners from all over the globe — is our largest yet.
The 600+ talks on the docket may be the best testament to the spread of GPUs into every aspect of human endeavor.
Attendees are already crowding into conference rooms to hear about how GPUs can be used to model the formation of galaxies, generate dazzling special effects for blockbuster movies, and even analyze scans of the human heart.
Their mood: happy. At least, that’s what the Emotions Demo, set up on the convention’s main concourse, tells us. The demo uses deep learning to instantly read the facial expressions of people nearby in real time – whether they’re happy, neutral, afraid, or disgusted.
Also on the show floor: a pop-up store selling NVIDIA gear. The best sellers? The NVIDIA “I Am AI” t-shirt and our much-sought-after NVIDIA Ruler, according to the store’s staff.
We’ll be buttonholing speakers from a broad cross-section of these talks and interviewing them for the AI Podcast, which we’re recording in a sleek glass booth positioned on the show floor.
If all this makes your heart beat a little faster, check back for live updates from our keynote Tuesday. And keep an eye on our blog throughout the week for the latest news from the show.
As the AI ecosystem continues to expand, NVIDIA revealed today that speech analytics startup Deepgram is the newest member of the GPU Ventures portfolio.
Founded in 2015, San Francisco-based Deepgram built the first end-to-end deep learning speech recognition system in production that uses NVIDIA GPUs for both inferencing and training.
“Using GPUs made our inferencing 100 times more efficient than when using CPUs,” said Scott Stephenson, CEO and co-founder of Deepgram. “Our technology coupled with NVIDIA’s expertise in AI allows us to make a greater impact in speech analysis.”
Deepgram’s automatic transcription tool, Deepgram Brain, searches for keywords in transcripts by both sound and text. It also helps businesses analyze customer calls to improve their service. The company already serves over 5,000 clients, from financial institutions to journalists.
“While many companies already implement accelerated speech recognition, true speech analytics has until recently been largely untouched by deep learning,” said Jeff Herbst, vice president of business development at NVIDIA. “Deepgram has done an amazing job introducing deep learning to this field, and we look forward to working closely with them to advance deep learning-driven speech analytics to the next level.”
NVIDIA GPU Ventures has invested in more than three dozen companies to date, including 10 in five countries over the past year. In addition to Deepgram, some of the most recent members of its startup portfolio include:
BlazingDB – Peruvian startup accelerating the data parsing process
Graphistry – Bay Area startup streamlining data investigations using GPUs
H2O.ai – California startup seeking to make AI adoption more efficient
JingChi – Chinese self-driving startup developing an autonomous Uber-like service
SANTA CLARA, Calif., March 19, 2018 – Intel Corporation today announced that Risa Lavizzo-Mourey was elected to Intel’s board of directors. Her election marks the fifth new independent director added to Intel’s board since the beginning of 2016. The board also voted unanimously to extend Andy Bryant’s term as Intel chairman in order to ensure board continuity and a smooth integration for new directors. Bryant became Intel chairman in May 2012 and will stand for re-election at the company’s 2018 annual stockholders’ meeting. If elected, he will continue to serve as chairman until the conclusion of the company’s 2019 annual stockholders’ meeting.
“Risa knows how to lead a large organization tackling complex issues, and brings extensive public-company board experience. I look forward to her fresh insights and perspective,” said Intel Chairman Andy Bryant. “We’ve worked to make sure the board has the right skills and backgrounds to be strong stewards in our dynamic industry. I’m honored to continue serving alongside them, as Intel transforms to create more value for our customers and our owners.”
Dr. Lavizzo-Mourey has served as the Robert Wood Johnson Foundation PIK Professor of Population Health and Health Equity at the University of Pennsylvania since January 2018. From 2003 to 2017, she was the president and chief executive officer of the Robert Wood Johnson Foundation, the largest U.S. philanthropy organization dedicated to health. Dr. Lavizzo-Mourey is a member of the boards of directors of General Electric Co. and Hess Corp., and she previously served as a director at Genworth Financial Inc. and Beckman Coulter Inc.
She is also a member of the National Academy of Medicine, the board of regents of the Smithsonian Institution, and the board of fellows of Harvard Medical School. Dr. Lavizzo-Mourey holds an MBA from the University of Pennsylvania and an M.D. from Harvard Medical School.