NVIDIA PhysX, the most popular physics simulation engine on the planet, is going open source.
We’re doing this because physics simulation — long key to immersive games and entertainment — turns out to be more important than we ever thought.
Physics simulation dovetails with AI, robotics and computer vision, self-driving vehicles, and high-performance computing.
It’s foundational for so many different things that we’ve decided to provide it to the world as open source.
Meanwhile, we’re building on more than a decade of continuous investment in this area to simulate the world with ever greater fidelity, with ongoing research and development to meet the needs of those working in robotics and autonomous vehicles.
Free, Open-Source, GPU-Accelerated
PhysX will now be the only free, open-source physics solution that takes advantage of GPU acceleration and can handle large virtual environments.
It will be available as open source starting Monday, Dec. 3, under the simple BSD-3 license.
PhysX solves some serious challenges.
In AI, researchers need synthetic data — artificial representations of the real world — to train data-hungry neural networks.
In robotics, researchers need to train robotic minds in environments that work like the real one.
For self-driving cars, PhysX allows vehicles to drive for millions of miles in simulators that duplicate real-world conditions.
In game development, canned animation doesn’t look organic and is time-consuming to produce at a polished level.
In high-performance computing, physics simulations are being done on ever more powerful machines with ever greater levels of fidelity.
The list goes on.
PhysX SDK addresses these challenges with scalable, stable and accurate simulations. It’s widely compatible, and it’s now open source.
PhysX SDK is a scalable multi-platform game physics solution supporting a wide range of devices, from smartphones to high-end multicore CPUs and GPUs.
It’s already integrated into some of the most popular game engines, including Unreal Engine (versions 3 and 4) and Unity3D.
A row of red lanterns reflected in a warrior’s shining scales of armor. Shadows cast by children running down the paved pathway of an open-air market. A lit carriage reflected onto colorful ceramic vases that line the street.
These stunning ray-traced reflections and shadows will soon enhance the game environment of Justice, one of China’s most popular massively multiplayer online games — or MMOs.
This level of realistic detail was previously impossible to render because some of the objects being reflected were off-camera, and shadow mapping techniques couldn’t support shadows of this quality.
But with real-time ray tracing and performance-enhancing Deep Learning Super Sampling technology, our GeForce RTX graphics aim to bring increased realism to the game, NVIDIA CEO Jensen Huang told the audience at the GPU Technology Conference in China this week.
And last week’s Final Fantasy XV: Windows Edition benchmark showed off the new DLSS technique. DLSS allows the GPU to generate some pixels with shaders and infer others with AI, boosting performance while maintaining smooth, clear image quality.
Justice will use both real-time ray tracing and DLSS simultaneously. Part of the wuxia fiction genre, the MMO chronicles the adventures of martial artists in ancient China. Ray-traced reflections will bring lifelike reflective properties to game elements like weapons and water, enabling them to accurately reflect the game world around them.
And ray-traced shadows overcome the limitations of traditional shadowmap techniques to create more complex, physically correct shadows for a more realistic and immersive experience.
Justice will also be the first game to feature real-time, ray-traced caustic effects. These occur when light rays are reflected or refracted by a curved surface, like a body of water.
Used at 2560×1440 or 4K resolution, DLSS can accelerate Justice game performance by up to 40 percent compared to default settings.
Huang also revealed that another popular martial arts MMO, JX3 Online, will incorporate DLSS. The upgrade will launch in early 2019.
JX3 Online and Justice are the latest games to announce support for real-time ray tracing and DLSS. So far, dozens of developers have signed up to add RTX features to their games.
NVIDIA’s third GPU Technology Conference in just over a month spilled across Tel Aviv’s convention center this week, in a packed show featuring a live demo of an AI-infused apple-picking drone, a student-built autonomous Formula One car and a two-person company that ran away with the title of Israel’s hottest startup.
The second annual GTC Israel show drew 2,000 attendees, up 75 percent from last year, on the back of recent sellout crowds at GTCs in Tokyo and Munich. It was wall to wall with the companies that have won Israel the moniker of “startup nation.” Indeed, there are more than 4,000 tech startups in a country of 8.5 million, or 40x the density of startups in the U.S.
None flew higher at the show than Inception award winner TheWhollySee, a winkingly named shop with just two full-time employees that creates high-fidelity image datasets for training and certifying the AI brains of autonomous vehicles.
Its founder, Dan Yanson, said the first thing he’d do with the prize — $100,000 in cash plus an NVIDIA DGX Station personal AI supercomputer — is to bring his two part-timers fully on board.
“My first reaction? I’m just overwhelmed,” said Yanson, who studied in Sweden and Russia before completing his Ph.D. from the University of Glasgow. “We’re really a baby company — it’s a small team, we haven’t raised a lot of money yet and the competition was extremely strong. The prize money is a great boost, but it’s the DGX Station that will be a springboard to accelerate us, both in terms of our technology and our business.”
Yanson competed against seven other startups — in fields that included healthcare, agriculture, retail and esports — in a back-to-back series of five-minute presentations and then Q&A with a four-person panel. He briskly described the company’s ability to infuse imagery into the foreground of scenes to more rapidly train neural networks for self-driving cars.
The Inception awards, which drew a crowd of more than 300 sitting largely nightclub style in a soaring black-walled space, capped off a day that had started with a blistering keynote about NVIDIA’s mission to accelerate computing by NVIDIA Chief Scientist Bill Dally.
Dally announced that NVIDIA has just named a long-time Google Brain researcher, Gal Chechik, to the newly created position of Israel Research Head. Chechik’s mission: to build a world-class team focused on research into deep learning for smarter perception — including combining vision with language and knowledge, learning to generalize more broadly, and understanding complex data.
Along with training for some 450 individuals from the Deep Learning Institute and 50+ talks by AI experts, the show included a teeming exhibition hall.
Among its hottest draws was a large netted structure where Tevel Aerobotics showed off its autonomous drone, which can gingerly pick, thin and prune fruit trees, relieving the labor crunch in agriculture while helping farmers’ margins.
And a team of undergrads from Israel’s top-ranked Technion University showed off their side project developing an AI-powered mini-Formula One car, which they’re in the process of converting from gas-powered to all electric.
Other finalists in the Inception awards included:
Blink (esports) — Aiming at the rapidly growing esports market, the company focuses on what it estimates as 600 million gamers and gaming enthusiasts who want to share their favorite moments online. Its platform focuses on the social side of esports by automatically detecting great gaming moments, saving them and making them easily shareable online.
IBEX Medical Analytics (Healthcare) — This two-year-old startup applies AI and big data to support pathologists in diagnosing types of cancer. Its work is focused on developing products that improve clinical decision making, streamline laboratory workflows and enable predictive, personalized cancer treatments.
Jungo Connectivity (Automotive) — This Cisco spinoff offers in-car AI software focused on driver monitoring and cabin sensing, enabling vehicles to make better decisions and protect their driver and passengers. Its CoDriver SDK provides deep learning, machine learning and computer vision algorithms to OEMs and tier-1 suppliers, enabling them to create next-gen driver and occupant monitoring systems.
Tevel Aerobotics (Agriculture) — This two-year-old startup addresses the shortage of labor in the agricultural sector, by developing a fleet of autonomous airborne drones for picking, thinning and pruning fruit trees, enhancing productivity and saving costs. It promotes its solution as cost effective, flexible, easy to operate and enabling fruit trees to grow higher, thus maximizing yields and improving farmers’ margins.
TRACXPOiNT (Retail) — Aiming to bring the convenience of online shopping to retail, the company has created an AI-infused, self-checkout shopping cart. Using visual detection with deep learning capabilities, its AIC cart recognizes customers, transfers their shopping list to its monitor, automatically finds and recognizes products, while offering coupons, and provides automatic checkout.
Voiceitt (Healthcare) — Its proprietary technology enables individuals with non-standard speech — such as stroke victims or those with muscle-related disabilities — to have their vocal expressions translated into clear speech, in real time. Its technology can be integrated with smart assistants, such as Alexa, to provide new levels of independence for those otherwise unable to carry out many simple actions.
WayCare (Smart Cities) — In a world of ever-worsening traffic conditions, the company is focusing on providing municipalities with the ability to harness in-vehicle information and traffic data to optimize roadways. It uses CCTVs, traffic accident information, telematics, traffic detectors, weather forecasts and other data sources to extract, compile and analyze data for improving traffic flows.
The gaming industry witnessed a seismic shift in computer graphics in August with the launch of NVIDIA’s GeForce RTX 20-series GPUs. For the first time, developers and consumers worldwide have access to hardware fast enough to do real-time ray tracing.
Another milestone hit today with the release of Windows 10 October 2018 Update. It promises to catalyze the development of a new generation of games that bring lifelike lighting, reflections and shadows to real-time, interactive experiences.
One of the update’s key features is the first public support for Microsoft DirectX Raytracing (DXR). This is huge for two reasons:
DXR provides an industry-standard application programming interface (API) that gives all game developers access to GeForce RTX’s hardware support of ray tracing.
DXR adds support for ray tracing to the Windows operating system, so DirectX 12 Windows PCs can now execute the applications that support real-time ray tracing.
Over the years, rasterization evolved to help game developers create stunning graphics and realism. But ray tracing — long heralded as the holy grail for computer graphics — models the real-world behavior of light. Artists and computer graphics researchers alike consider it the definitive solution for realistic and lifelike lighting, reflections and shadows.
See it at work in the Battlefield V RTX trailer below:
Ray tracing isn’t new. It’s been rendered offline for decades to create movies and other applications where real-time performance isn’t essential.
This isn’t the case for games. Movie scenes are “fixed.” The action doesn’t change. So moviemakers can spend days or weeks rendering a single scene. By contrast, game scenes are interactive and unpredictable. A game has only milliseconds to generate an image.
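The cost that makes this so hard is easy to see in miniature. Here is a deliberately tiny Python sketch — not NVIDIA's implementation, just the textbook ray-sphere test — that casts one primary ray per pixel at a single sphere. Real games trace many rays per pixel against millions of triangles, which is why dedicated hardware matters.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2 for t,
    # assuming direction is unit length (so the quadratic's a == 1).
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def render(width, height):
    # One primary ray per pixel, aimed at a sphere at z = -3.
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # Map the pixel to [-1, 1] camera-plane coordinates.
            u = (2 * x + 1) / width - 1
            v = 1 - (2 * y + 1) / height
            d = (u, v, -1.0)
            norm = math.sqrt(sum(c * c for c in d))
            d = tuple(c / norm for c in d)
            hit = ray_sphere_hit((0, 0, 0), d, (0, 0, -3), 1.0)
            row.append('#' if hit is not None else '.')
        image.append(''.join(row))
    return image

if __name__ == '__main__':
    for line in render(24, 12):
        print(line)
```

Even this one-sphere scene does an intersection test per pixel; at 4K that is over 8 million rays per frame, 60 times a second, before any reflections or shadows are added.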
NVIDIA GeForce RTX graphics cards make real-time ray tracing possible. Their powerful performance, hardware support for ray tracing and artificial intelligence enable unprecedented cinematic experiences in games and other interactive content.
With Microsoft’s official public support of DXR with its Windows 10 October 2018 Update and the GeForce RTX family of Turing-based GPUs, PC gamers can look forward to the first wave of next-generation titles with ray tracing.
Multiplayer first-person shooters can be both exciting and nerve-racking. So, it helps to have a knowledgeable player by your side with suggestions.
That’s the promise of Visor, which is offering in-game AI alerts and post-game analysis for titles like Blizzard’s Overwatch, a first-person shooter for team battles.
San Francisco-based Visor recently scooped up $4.7 million in funding for its AI that aims to help improve your game in Overwatch. The startup in August released the open beta download for its software that offers gamers pointers on their playing.
Visor recently participated in the winter class of the Y Combinator accelerator program.
The startup was founded by Anhang Zhu and Ivan Zhou. The duo met at the University of California, Berkeley, where they coded a lot of projects together and shared a love of gaming.
“Visor is like having a really good friend sit next to you while you play and give you feedback. It’s surfacing information so that you can act on it,” said Zhou, the startup’s chief executive and co-founder.
These in-game alerts arrive as text cues that appear on the right-hand side of the screen. They offer tips such as “play more aggressively” or “use your ult” (which gives players more power) to help in the exact moment of the game.
“We can see through a secondary predictive algorithm that will predict how likely you are at winning a specific game,” said co-founder Zhu, a former Facebook engineer.
It turns out that surfacing gaming insights requires training a deep neural network on a lot of gaming data.
Training AI for Gaming
There’s no shortage of user-generated gaming data. People upload boatloads of footage to Twitch, YouTube and other online destinations for watching game play. That’s allowed Visor to train its AI on more than 500 million frames of video.
Visor uses the video frames to train convolutional neural networks for image classification and recurrent neural networks for predictions of in-game actions. The company used the k-nearest neighbors algorithm, or k-NN, to help support the pattern recognition for its deep neural networks.
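To make the k-NN idea concrete, here is a minimal sketch of nearest-neighbor classification. The features and labels are invented for illustration — Visor's actual features and cues are not public — but the voting mechanism is the standard k-NN algorithm the article names.

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """Label a query feature vector by majority vote of its k nearest
    labeled neighbors, using Euclidean distance."""
    dists = sorted(
        (math.dist(feat, query), label) for feat, label in train
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 2D features extracted from game frames, e.g.
# (team_health, enemies_visible), labeled with a coaching cue.
train = [
    ((0.9, 1.0), "push"),
    ((0.8, 2.0), "push"),
    ((0.2, 4.0), "retreat"),
    ((0.3, 5.0), "retreat"),
]

print(knn_classify(train, (0.85, 1.5)))  # lands in the "push" cluster
```

In practice the feature vectors would come from the convolutional networks' embeddings rather than hand-picked numbers, but the voting step works the same way.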
The team of six engineers at Visor trained the system locally on NVIDIA GPUs. The initial process of collecting data and training its models took about six months, and it’s all deployed now on AWS. The startup gets millions of frames per day to continue honing the models.
How AI Helps Players
Visor tracks data on players’ wins and losses. It can use in-game data to determine, for example, that a player lost by 5 percent, and assess which key areas could be improved to win. Visor then offers one or two tips in those areas to help the player improve.
The Visor platform was built to be game agnostic, so its deep neural networks could be applied to other games to give players a boost. The founders say there will definitely be additional game titles for Visor. And they point to the precedent for industry acceptance of such tools in professional games with the third-party add-ons for Hearthstone and World of Warcraft.
Visor’s in-game intel for Overwatch is aimed at everyone from newbies to professional gamers.
“The user experience is the final frontier for what we are doing,” said Zhou. “We’re trying to make it so that you have more fun playing the game.”
Fast, creative, smart — great gamers are all these things. Somebody has to teach machines how to keep up. That somebody is Ilya Sutskever and his team at OpenAI.
Sutskever, co-founder and research director of OpenAI, and his team are developing AI bots smart enough to battle some of the world’s best human gamers.
In August, OpenAI Five, a team of five neural networks, was defeated by some of the world’s top professional players of Dota 2, the wildly popular multiplayer online battle arena game.
It was a leap for OpenAI Five even to be playing a nearly unrestricted version of Dota 2 at a professional level. The matches took place at Valve’s International competition in Vancouver — a world series of esports played for tens of millions of dollars.
That’s because Dota 2 is an extremely complex game. Players can unleash an enormous number of tactics, strategies and interactions in the quest to win. The game layout — only partially observable — requires both short-term tactics and long-term strategy, as each match can last 45 minutes. “Professional players dedicate their lives to this game,” said Sutskever. “It’s not an easy game to play.”
Sutskever spoke Thursday at NTECH, an annual engineering conference at NVIDIA’s Silicon Valley campus. The internal event drew an enthusiastic crowd of several hundred engineers — many also huge gaming fans — and hundreds more online.
Dota 2 Raises AI-Gaming Bar
OpenAI Five’s Dota 2 work marks an entirely new level for human-versus-AI challenges. For comparison, in chess and Go — also popular AI challenges — the average number of actions is 35 and 250, respectively. In Dota 2, with its far more complex rules, there are about 170,000 possible actions per move, and some 20,000 moves per game.
With all of Dota 2’s complexity, it’s closer to the real world than any other previous game tackled by an AI, he said. “So, how did we do it? We used large scale RL (reinforcement learning),” Sutskever told the audience.
Reinforcement learning matters for humans and machines alike. When we earn a bonus point in a game with a move or get blown to bits with another, each of these moments provide reinforcement learning — burned in memory — for the next go-round.
Reinforcement learning matters to AI because it is a very natural way of training neural networks to act in order to achieve goals, which is essential for building an intelligent system.
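The reinforcement learning loop described above can be sketched in a few lines. This is tabular Q-learning on a toy corridor — nothing like the scale of OpenAI Five, which uses deep networks and massive parallelism, but the same act-reward-update cycle:

```python
import random

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning on a toy corridor: states 0..n-1, actions
    0 (step left) / 1 (step right); reaching the rightmost state pays +1."""
    random.seed(0)
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy: usually exploit, sometimes explore.
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # The core update: nudge the estimate toward reward plus
            # the discounted value of the best next action.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning()
# After training, the greedy policy steps right from every non-terminal state.
```

The agent is never told the rule "go right" — it is burned in purely by the rewards, which is exactly the property that makes RL a natural fit for goal-directed game play.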
NVIDIA has been there as an early supporter, with CEO Jensen Huang personally delivering the first DGX-1 AI supercomputer in a box for the folks at OpenAI.
History of GPU Challenges
Sutskever is no stranger at unleashing GPUs on AI’s biggest challenges. He was among the trio of University of Toronto researchers — including Alex Krizhevsky and advisor Geoffrey Hinton — who pioneered a GPU-based convolutional neural network to take the prestigious ImageNet competition by storm.
The results — nearly slashing in half the error rate — go down in history as the moment that spawned the modern AI boom.
The resulting model — dubbed AlexNet — is the basis of countless deep learning models. At GTC 2018, Huang spoke of AlexNet’s influence on thousands of AI strains, stating: “Neural networks are growing and evolving at an extraordinary rate.”
Sutskever says leaps in AI track closely to processing gains. “It’s pretty remarkable that the amount of compute from the original AlexNet to AlphaGo Zero is 300,000x. You’re talking about a five-year gap. Those are big increases.”
OpenAI’s ‘Moonshot’ Ambitions
OpenAI is a nonprofit that was formed in 2015 to develop and release artificial general intelligence aimed at benefiting humanity. Its founding members include Tesla CEO Elon Musk, Y Combinator President Sam Altman and other tech luminaries who have collectively committed $1 billion to its mission.
Researchers at OpenAI are also making strides on a project called Dactyl, which aims to increase the dexterity of a robot hand. The team there has been working on domain randomization — an old concept — with remarkable results. They have been able to train the robot hand to manipulate objects in simulation, and then transfer that knowledge to real-world manipulation. This is important, because simulation is the only way to get enough training experience for these robots. “The idea works really, really well,” Sutskever said.
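The gist of domain randomization is simply to train across many perturbed copies of the simulator so the policy cannot overfit to one setting. A minimal sketch — the parameter names and ranges below are illustrative, not OpenAI's actual Dactyl configuration:

```python
import random

def randomized_env_params(rng):
    """Sample one simulated-environment configuration.
    Every range here is an invented placeholder."""
    return {
        "object_mass_kg": rng.uniform(0.05, 0.5),
        "friction_coeff": rng.uniform(0.5, 1.2),
        "motor_gain": rng.uniform(0.8, 1.2),
        "camera_jitter_px": rng.gauss(0.0, 2.0),
    }

rng = random.Random(42)
# Train the policy across many randomized variants; the real world then
# looks like just one more sample from the training distribution.
batch = [randomized_env_params(rng) for _ in range(1000)]
```

Because the real world's physics plausibly falls inside the randomized distribution, a policy that works across all the variants transfers to the physical robot without ever seeing real data during training.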
Sutskever is keen on pushing common AI concepts such as reinforcement learning and domain randomization to new heights. In the wide-ranging discussion at NTECH, he praised the conclusions of Arthur C. Clarke’s book Profiles of the Future, which said historically, doubts were cast on great inventions such as the airplane and space travel.
Skepticism, he said, initially led the U.S. to pass on building and sending a 200-ton rocket to space — on the grounds that it’s too large to be built. “So the Russians went on and built a 200-ton rocket,” he quipped, drawing audience laughter.
Since SHIELD’s launch in 2015, we’ve worked to make it the best media streamer. And we’ve tried to deliver on the requests of our passionate fanbase without asking them to buy new hardware year after year. We want SHIELD to be the only device you need for living room entertainment.
At its heart is an incredibly advanced Tegra X1 processor. Its capabilities allow our world-class software team to continue to introduce new experiences.
Over the years, users have received new streaming, gaming and smart home features, as well as three operating system upgrades, including a revamped UI with Android 8.0 Oreo.
SHIELD was the first streaming media player to deliver 4K HDR in many top apps, like Netflix and Prime Video. Now it’s the home to the largest 4K library of any media streamer.
It’s become the best device for Plex users, as it can watch and record live TV while serving entertainment libraries as both a Plex client — at up to 4K HDR with Dolby Atmos and DTS:X surround pass-through — and Plex Media Server.
SHIELD powers an immersive YouTube experience with YouTube TV, YouTube 360 and YouTube Kids. And we’ve continually improved the experience with deep search integration, direct play media controls using your voice, and 4K resolution at 60 frames per second.
Gaming on the big screen improved dramatically as the new GeForce NOW beta streamed your library of PC games from NVIDIA servers to the living room. In an instant, SHIELD now accesses a growing list of PC games: Fortnite, Dota 2, Counter-Strike: Global Offensive, PLAYERUNKNOWN’S BATTLEGROUNDS, Monster Hunter: World, Tom Clancy’s Rainbow Six Siege and over 300 more.
Android gamers have been able to download and play games from their favorite franchises like Tomb Raider, Borderlands, Doom, Resident Evil and Metal Gear Solid.
And for local casting, GeForce GTX graphics card owners upgraded to 4K HDR with GameStream.
SHIELD became the center of the smart home with a series of AI capabilities that transform home entertainment. It combines the first hands-free Google Assistant integration for TV with SmartThings Hub technology, turning SHIELD into a hub that can connect to thousands of smart home devices.
The addition of Google Assistant brought voice command and visuals for Google services like Photos and Calendar. So, besides being able to say things like, “Turn on the lights” or “Play Stranger Things,” you can say, “Show me my photos from Egypt” or “what time is my first meeting?”
Your TV, only smarter, with SHIELD, your Google Assistant and Samsung SmartThings.
As SHIELD is an open platform, we’ve added Amazon Prime Video and Amazon Music (via Cast). Support for iTunes Movies via the Movies Anywhere app and iTunes Music via Google Music Manager means iOS users can bring their libraries to SHIELD.
Whether you’re 4K HDR binge-watching the new seasons of The Crown or Orange is the New Black on Netflix, shooting for that illustrious victory royale in Fortnite, or checking the traffic on your way out the door, there are lots of reasons to be a member of the SHIELD fan club.
And SHIELD TV only promises to get better. Current and new owners alike can expect more fantastic upgrades on the world’s best streaming media player.
But right out of the box, Turing gives you a huge performance upgrade for the games you’re playing now.
GeForce RTX GPUs deliver 4K HDR gaming on modern AAA titles at 60 frames per second — a feat even our flagship GeForce GTX 1080 Ti GPU can’t manage.
Our new Turing GPUs — the GeForce RTX 2080 Ti, 2080 and 2070 — arrive ahead of a holiday shopping season supercharged by sub-$300 4K monitors and a lineup of big titles from top game franchises.
Big Game Hunting
For game after game, Turing delivers big performance gains (see chart, above). Those numbers get even bigger with deep learning super-sampling, or DLSS, unveiled Monday.
DLSS takes advantage of our tensor cores’ ability to use AI. In this case it’s used to develop a neural network that teaches itself how to render a game. It smooths the edges of rendered objects and increases performance.
That’s one example of how Turing can do things no other GPU can. This makes Turing’s full performance hard to measure.
But looking at today’s PC games — against which GPUs without Turing’s capabilities have long been measured — it’s clear Turing is an absolute beast.
They say real-time ray tracing is the future of graphics — and always will be. No longer.
NVIDIA today unveiled the biggest breakthrough in PC gaming in over a decade: the GeForce RTX series and the advent of real-time ray tracing. It’s a watershed moment, the start of a new, golden age of gaming. And the technology — regarded as the “holy grail” of computer graphics — has come 10 years earlier than most predicted.
“Games will never be the same,” said Jensen Huang, NVIDIA founder and CEO, during his Gamescom presentation, where he unveiled GeForce RTX.
Graphics is advancing at 10x the rate of Moore’s law, even as that law nears its end. Propelling this are architectural advancements, which are responsible for GeForce RTX’s massive leap forward. Turing’s innovations fuse real-time ray tracing, all-new AI capabilities and advanced shaders to deliver 6x the performance of Pascal.
This approach, called hybrid graphics, reinvents computer graphics and represents the biggest generational performance leap ever, delivering on the promise of 4K HDR gaming at 60 frames per second on even the most advanced games.
The invention of hybrid graphics requires a new way to measure performance. We call this RTX-OPS, or the performance available when rendering next-generation games with hybrid graphics. The resulting performance of GeForce RTX is astounding — 78 trillion RTX-OPS, 6x that of the previous Pascal generation, and 10 GigaRays per second, or 10x Pascal.
The Timing Is Perfect
GeForce RTX is coming to market at the perfect time, with a blockbuster holiday season ahead.
New games are the No. 1 reason most gamers upgrade, and this coming holiday season looks to be massive. Projections are for double-digit, year-over-year growth, with a great run anticipated for game publishers like Activision Blizzard, EA, Take Two and Ubisoft.
This year’s lineup of releases includes no fewer than eight major titles — from franchises with user bases of hundreds of millions of gamers and tens of billions of dollars in past sales. Two of the biggest — Battlefield V and Shadow of the Tomb Raider — ship in the next couple of months with RTX technology.
Gamers Need to Gear Up for 4K HDR
Just as 4K is fast becoming the standard for TVs, PC gamers are moving to 4K — the new breathtaking baseline for games. Monitor prices are dropping under $300 and shipments have increased 1.5x over the past year.
Yet, PC gamers still aren’t ready. That’s because 4K gaming requires 4x the processing power of 1080p at 60 frames per second. Even our Pascal flagship GeForce GTX 1080 Ti can’t run the latest cinematic games at 4K 60 FPS.
That’s also why top gaming sites today recommend gamers think “GPU first” and, whatever their budget, allocate about 40 percent to the GPU. For a $2,500 4K rig, that’s a $1,000 investment, more than twice what goes to the CPU.
GeForce RTX 20-series GPUs are the first that will play AAA games at 4K 60 FPS HDR.
For Esports, Performance Is a Matter of Life or Death
Esports has been a big driver in bringing new players to PC gaming. And to these gamers, the stakes for graphics performance are much higher. Data shows esports pros perform best with high FPS, low latency, high resolution and no stutter or flickering.
That’s also why all esports tournaments, without exception, are played with top-of-the-range GeForce GPUs. But for esports enthusiasts, the choice of GPU can be a matter of life or death, with high-end GeForce GPUs resulting in 1.5x the kill/death ratio in battle royale games.
Overwhelming Developer Support
The industry is eager for the next big thing in graphics to create a new look for games — RTX technology will deliver on that promise.
Just listen to Klemens Kundratitz, CEO of Deep Silver: “From a game publishing perspective, GeForce RTX from NVIDIA is really exciting. We can see a future where games are more realistic and more immersive for our game-playing customers. This is a great time to be a PC gamer.”
Microsoft is releasing an update to DX12, called DXR, that supports RTX ray tracing and puts tremendous firepower into the hands of developers.
Epic Games has integrated RTX technology into the most popular game engine in the world, Unreal Engine.
Over 20 RTX titles are set for release — starting this holiday season — and more than 40 game developers are working on future titles that will integrate RTX.
Dawn of a New Golden Age
GeForce RTX is reinventing graphics. For today’s games, and well into a bright future.
Turing’s fusing of advanced shaders, AI and ray tracing will deliver 60 FPS gaming in 4K HDR today, and is the platform for a new era of photorealism for fully ray-traced games.
“This is a historic moment,” the NVIDIA founder and CEO declared as he rolled out the new GPUs, starting at just $499. “Computer graphics has been reinvented.”
Delivering the “holy grail” of graphics to gamers, Huang introduced the world’s first real-time ray-tracing gaming GPUs — supported by a fat roster of upcoming blockbuster game titles — to a heaving crowd at the Palladium, a spare steel and concrete music venue tucked between railroad tracks and metal fabrication shops on Cologne’s gritty industrial north side.
Unveiled ahead of Gamescom, the world’s largest gaming expo, the GeForce RTX 2080 Ti, 2080 and 2070 GPUs are the first gaming processors based on our new Turing architecture, packed with new features that will deliver 4K HDR gaming at 60 frames per second on the most advanced titles.
The RTX 2080 Ti and RTX 2080 — including Founders Edition cards direct from NVIDIA — will be available for pre-order starting Monday. The RTX 2070, starting at $499, will be available in October.
These products are built on the NVIDIA Turing GPU architecture introduced a week ago in Vancouver, which fuses next-generation shaders with real-time ray tracing and all-new AI capabilities. Huang said this new hybrid graphics capability represents the biggest generational leap ever in gaming GPUs, delivering 6x more performance than its predecessor, Pascal.
The show stopper: a demo of Battlefield V that had the audience alternately bursting into applause and shouting with enthusiasm as they saw scenes from an urban battle reflected in a soldier’s eyes, fire from a flame-throwing Churchill Crocodile tank reflected off the hood of a car, or an explosion from a V1 rocket reflected in the windows of nearby storefronts moments before the shockwave from the explosion shattered them.
“It does exactly what you would expect it to do and it does it all by itself,” Huang said from the stage, describing the effects Turing unleashes. “Everything just works because ray tracing just works.”
Delivering the Holy Grail
To put Turing’s capabilities into perspective, Huang’s talk opened with a video telling the visual history of computer graphics over the past half century, narrated by its pioneering figures.
It’s the tale of a grand quest to simulate the world, one that’s captivated some of the world’s brightest minds. It highlights breakthroughs in films such as Star Wars and The Abyss, and games like Crysis and Destiny 2.
NVIDIA RTX is the product of 10 years of work and 10,000 engineering years of effort in computer graphics algorithms and GPU architectures, Huang said. The NVIDIA RTX platform benefits from support in Microsoft’s new DirectX Raytracing API, games adopting it in development for Windows and Vulkan APIs, and hardware acceleration integrated into NVIDIA’s Turing architecture.
The headline feature — RT Cores — represents a kind of “holy grail” for gamers, accelerating the crushingly computationally intensive work of tracing beams of light through a scene to generate images in real time, Huang said.
RTX: A Big Difference for Big Games
Turning to a time-tested computer graphics teaching tool, the Cornell Box — a 3D box inside which various objects are displayed — Huang showed how Turing uses ray tracing to portray increasingly complex scenes incorporating reflections, refractions and shadows with stunning photorealism. Each iteration of the demo got an instant reaction from audience members, who clapped and gasped every time Huang showed what RTX could do.
“Everything just works,” Huang said. “Everything….just…works…you just turn it on.”
To give the audience a taste of what Turing can do, Huang teed up a demo, dubbed Sol, showing a pair of robotic assistants placing glossy-white armor onto a lone figure, each piece finding its place with a gratifying “snick.”
As the protagonist ascends to a hatch to jump into action — with ray-traced reflections of the futuristic environment all around him gleaming from his suit and visor — the now unsupervised robots begin to dance to the irresistible rhythms of 1977’s “Boogie Shoes” by KC and the Sunshine Band.
Hearing the music, the armored figure returns, cocks his head in surprise, and then demonstrates his own fluid, loose-limbed dance moves in a twist that had the audience howling with delight.
Turing also includes unprecedented deep learning capabilities — thanks to its built-in Tensor Cores, which accelerate the algorithms driving the deep learning revolution.
Now that technology is coming to games, with NVIDIA harnessing banks of supercomputers to train networks, such as NVIDIA Deep Learning Super Sampling (DLSS), which turn low-resolution images into high-resolution ones and run on Turing’s Tensor Cores.
Huang ended his presentation with a real-time demo showing how the academic world of computer graphics — and the rollicking fun of computer games — intersect. It brought the audience back to the inside of the Cornell Box — this time outfitted with a disco ball and strobe lights — where the armored figure from the video Huang showed just a few minutes before pops up again, dancing, only to freeze after the music stops.
The message is clear: you’re going to have a blast playing with Turing’s cutting-edge graphics.