Alluring Turing: Get Up Close with 7 Keynote-worthy Turing Demos

RT Cores. Tensor Cores. Advanced shading technologies.

If you’ve been following the news about our new Turing architecture, launched this week at the SIGGRAPH professional graphics conference, you’re probably wondering what all this technology can do for you.

We invite you to step into our booth at SIGGRAPH — number 801 on the main floor — to see for yourself. Here’s what you’ll find:

  • Photorealistic, interactive car rendering — Spoiler: this demo of a Porsche prototype looks real, but is actually rendered. To prove it, you’ll be able to adjust the lighting and move the car around. It’s all built in Unreal Engine, with the Microsoft DXR API used to access the NVIDIA RTX development platform. It runs on two Quadro RTX GPUs.
  • Real-time ray tracing on a single GPU — This Star Wars-themed demo stunned audiences when it made its debut earlier this year running on a $70,000 DGX Station powered by four Volta GPUs. Now you can see the same interactive, real-time ray tracing, using Unreal Engine on our NVIDIA RTX developer platform, running on a single Turing Quadro GPU.
  • Advanced rendering for games & film (dancing robots) — This one is built on Unreal, as well — and shows how real-time ray tracing can bring complex, action-packed scenes to life. Powered by a single Quadro RTX 6000, it shows real-time ray-traced effects such as global illumination, shadows, ambient occlusion, and reflections.
  • Advanced rendering for games & film (Project Sol) — An interaction between a man and his robotic assistants takes a surprising turn. Powered by the Quadro RTX 6000, this demo shows off production quality rendering and cinematic frame rates, enabling users to interact with scene elements in real time.
  • Cornell Box — Turn to this time-tested graphics teaching tool to see how Turing uses ray tracing to deliver complex effects — ranging from diffuse reflections to refractions to caustics to global illumination — with stunning photorealism.
  • Ray-traced global illumination — This live, photorealistic demo is set in the lobby of the Rosewood Bangkok Hotel, and shows how lighting changes as the scene switches between rasterized and ray-traced materials. You’ll be able to make changes to the scene, and see the effects in real time, in this demo powered by a pair of Quadro RTX 6000 GPUs.
  • New Autodesk Arnold with GPU acceleration — Featuring a scene from Avengers: Infinity War courtesy of Cinesite, Autodesk, and Marvel Studios, this demo lets you see the benefits of Quadro RTX GPUs for both content creation and final-frame rendering for feature film.

Of course, this isn’t all you’ll find in our booth.

In addition to seeing demos from NVIDIA CEO Jensen Huang’s Monday keynote up close, you’ll be able to see a technology demo of our new NGX software development kit — featuring in-painting, super slo-mo and up-resing; a new version of our Nsight developer tools; AI-powered rendering enhancements, including deep-learning anti-aliasing; and a simulation based on Palm4u, a modified version of PALM for urban environments, looking at how much solar radiation urban surfaces receive, as well as atmospheric and building heat emissions, during summer in Berlin.

So, if you’re at SIGGRAPH, stop by our booth. We’ll be here Tuesday and Wednesday from 9:30 am to 6 pm, and Thursday from 9:30 am to 3:30 pm.

Hundreds of Trillions of Pixels: NVIDIA and RED Digital Cinema Solve 8K Bottleneck

For discerning households, 4K video resolution is becoming the new gold standard. But in professional circles, the real glitter is on 8K.

This ultra-high-definition standard provides stunning clarity on large screens, where individual pixels aren’t visible even from inches away.

But it takes massive computing muscle to power that capability — and that’s a task our revolutionary Turing GPU architecture and new Quadro RTX GPUs are perfect for.

100 Trillion Pixels and Counting

State-of-the-art cameras can capture 8K video (which contains four times the pixels of 4K), but those pixels can create a massive computational logjam when editing footage.

Consider: an 8K camera captures at 8192×4320, or more than 35 million pixels per frame. So just five minutes of that video at 24 frames per second adds up to more than 250 billion pixels. Figure that a typical shoot involves hours of content, and you get past 100 trillion pretty quickly.
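
If you want to check the math, here’s a quick Python sketch using the figures above:

```python
# Pixel math for 8K footage, using the figures in the text above.
width, height = 8192, 4320                 # 8K frame
pixels_per_frame = width * height          # 35,389,440 -- more than 35 million

fps = 24
five_minutes = pixels_per_frame * fps * 60 * 5
print(f"{five_minutes / 1e9:.0f} billion pixels in five minutes")   # ~255

per_hour = pixels_per_frame * fps * 3600   # ~3.1 trillion pixels per hour
print(f"{100e12 / per_hour:.0f} hours of footage tops 100 trillion pixels")  # ~33
```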

To handle that oceanic volume, post-production professionals rely on powerful, expensive workstations, high-end custom hardware and time-consuming preprocessing.

But that’s all about to change with Turing.

Thanks to our work with leading camera maker RED Digital Cinema, Turing makes it possible for video editors and color graders to work with 8K footage in full resolution — in real time — reaching greater than 24 frames per second using just a single-processor PC with one Quadro RTX GPU.

And at less than half the price of CPU-laden workstations, this solution puts 8K within reach for a broad universe of post-production professionals using RED cameras whose content is viewed by millions.

“RED is passionate about getting high-performance tools in the hands of as many content creators as possible,” said Jarred Land, president of RED Digital Cinema. “Our work with NVIDIA to massively accelerate decode times has made working with 8K files in real time a reality for all.”

Pixel Perfect: The Perks of 8K

Though the markets for 8K displays and TVs are nascent, professionals can benefit by producing in 8K and distributing in 4K. The extra pixels from an 8K camera give the cinematographer more creative choices in post-production.

For example, there’s more flexibility for image stabilization, or panning and zooming to reframe a shot without losing image quality in the final delivery format. For visual effects, high resolution can provide more detail for tracking or keying. Downsampling high-resolution video can help reduce noise as well as maintain a high level of quality.
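
The noise-reduction point is straightforward signal averaging: downsampling 8K to 4K averages each 2×2 block of pixels, which cuts uncorrelated noise roughly in half. Here’s a minimal NumPy sketch (illustrative only, assuming zero-mean, uncorrelated sensor noise):

```python
import numpy as np

# Illustrative: 2x2 box-averaging an 8K noise field down to 4K.
# Averaging n uncorrelated samples reduces the noise std by sqrt(n).
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=(4320, 8192))  # simulated sensor noise, std = 1

down = noise.reshape(2160, 2, 4096, 2).mean(axis=(1, 3))  # 8K grid -> 4K grid

print(f"noise std before downsampling: {noise.std():.3f}")  # ~1.000
print(f"noise std after downsampling:  {down.std():.3f}")   # ~0.500
```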

Whether the end result is 4K, 8K or somewhere in between, each tool in the production pipeline must be ready to handle 8K. It’s one thing to have a camera that captures the artists’ vision in seamless 8K footage, frame after frame. It’s another to carry that smoothness through post-processing and editing.

When the video industry transitioned from HD to 4K, GPU adoption gave professionals the computing power to handle the higher resolution. Now, too, GPUs are doing the heavy lifting for 8K, freeing CPUs to do other work.

Unleashing 8K’s RAW Potential

Video applications like Adobe Premiere Pro, Blackmagic’s DaVinci Resolve and Autodesk Flame are already capable of working with 8K footage captured from cameras like RED’s. This includes footage stored in its REDCODE RAW file format, which gives post-production professionals more creative control but greatly increases the computational demand to process it.

Depending on the processing power of the computer or workstation at hand, though, videographers and editors end up viewing their 8K files at significantly reduced resolution in the software application. Attempting to play back the footage at full resolution can cause the application to drop frames or stop playback while buffering — so the editors are forced to choose between smooth playback and working in full resolution.

Alternatively, they can preprocess their footage into a more manageable format, but that takes time and disk space.

By offloading all of the computationally intensive parts of the REDCODE processing to a Turing GPU, NVIDIA and RED are giving post-production professionals access to 8K footage at full resolution in real time. And it’s not just for Turing — this acceleration will also substantially increase REDCODE processing performance on other NVIDIA GPUs.

Artists working with 8K footage will no longer have to disrupt their creative flow, waiting for their editing tools to catch up.

New capabilities will also be possible with the NVIDIA RTX Tensor Cores and RT Cores available with Turing. Editors will gain from new functionality like AI-enabled upscaling, which will let them intermix archival footage or zoom in beyond 8K resolution with the best possible results. And those incorporating high-resolution 3D graphics and titling will get more time back to focus on the creative parts of their workflow.

Temple Run: CyArk Taps GPUs to Capture Visual Records of World Heritage Sites

When the Taliban blew up two 1,700-year-old statues of the Buddha in Bamiyan, Afghanistan, in 2001, preservationists around the world were shocked, perhaps none more so than Ben Kacyra.

Before-and-after view of a Bamiyan Buddha.

A software developer who’d just sold his industrial mapping company, Kacyra wanted to do something good with his technology skills. The Taliban’s appalling actions gave him a perfect topic on which to focus.

The result was CyArk, a nonprofit that has spent the past 15 years capturing high-resolution imagery of World Heritage Sites.

“There was no record, no videos, no detailed photos,” said John Ristevski, CEO of CyArk, who helped Kacyra with the organization’s initial projects. “Ben was aghast and thought we needed 3D records of these things.”

Kacyra and his wife, Barbara, who together founded CyArk, wanted to ensure that if other sites met the same fate as those Afghan monuments, there would at least be a visual record.

Today, CyArk has not only detailed photogrammetric data on 200 sites on all seven continents, it has started to deliver on the second part of its vision: opening up that data to the public in the hope that developers will create 3D virtual experiences.

Taste of What’s to Come

To jumpstart things, CyArk, in conjunction with Google — which has provided cloud computing resources, funding and technical support — has released a virtual tour of the ancient city of Bagan, in central Myanmar, that shows off what’s possible.

The tour lets visitors digitally walk through temples, look at things from different angles and zoom in for closer looks, providing an amazingly detailed substitute for those who might otherwise have to travel to the other side of the Earth to see it.

The potential for using the approach to provide education about the world’s ancient historical sites, enable preservationists to study sites more readily, and allow tourists to visit places they could never travel to is seemingly limitless. As such, Ristevski hopes the Bagan tour is just the tip of an iceberg far bigger than anything CyArk could build on its own.

“We’re probably more interested in what other people can do with the data,” said Ristevski. “Through the open heritage program, anyone can take the data and build educational experiences. And they can get a non-commercial license to use the data, too.”

In addition to the Bagan tour, CyArk recently released MasterWorks VR, a free app for the Oculus Rift VR headset. It lets people explore multiple heritage sites on three continents, jumping from one location to another.

A Premium on Processing

NVIDIA GPUs have played a critical role in processing the hours and hours of photogrammetric data CyArk collects at each site, as well as in performing 3D reconstruction. Working on powerful workstations equipped with NVIDIA Quadro P6000 cards, CyArk technicians convert the data to 3D imagery using a software package called Capturing Reality.

Ristevski said the P6000 GPUs enable CyArk to crunch its data many times faster than would be possible on CPUs, and he’s also seen significant speed gains compared with previous generations of GPUs.

Every pixel counts in CyArk’s 3D reconstructions of World Heritage Sites.

More important than speed, Ristevski said, is the improved ability to present detailed textures. CyArk has seen resolution of those textures shrink from centimeters down to fractions of millimeters, which is a huge consideration for the heritage community.

“Every square inch of surface is unique,” he said. “We can’t make up textures or replicate textures. We have to preserve every little pixel we capture to the highest degree.”

For Now, More Data

While Ristevski sees a lot of potential for deep learning to help CyArk as it gets further into classifying its data, the organization hasn’t delved far into that technology to date. Once it can move on from its current focus on 3D reconstruction, deep learning figures to play a bigger role.

In the meantime, CyArk plans to continue collecting data on more World Heritage Sites. Among those it’s currently documenting or planning to hit soon are ancient temples in Vietnam and the Jefferson Memorial in Washington, D.C.

As CyArk collects more data, it will also continue to make that data publicly available, as well as packaging it in future VR applications and generating its own VR experiences.

And Ristevski maintains that CyArk has no plans to monetize its data, and will instead remain a nonprofit for the foreseeable future: “We have no intention of forming a business model.”

What Is a Virtual GPU?

Virtualization technology for applications and desktops has been around for a long time, but it hasn’t always lived up to the hype surrounding it. Its biggest failing: a poor user experience.

And the reason is simple. When virtualization first came on the scene, GPUs — which are specialists in parallel computing — weren’t part of the mix. The virtual GPU, aka vGPU, has changed that.

On a traditional physical computing device like a workstation, PC or laptop, a GPU typically performs all the capture, encode and rendering to power complex tasks, such as 3D apps and video. With early virtualization, all of that was handled by the CPU in the data center host. While it was functional for some basic applications, CPU-based virtualization never met the native experience and performance levels that most users needed.

That changed a few years ago when NVIDIA released its virtual GPU. Virtualizing a data center GPU allowed it to be shared across multiple virtual machines. This greatly improved performance for applications and desktops, and allowed organizations to build virtual desktop infrastructures (or VDIs) that cost-effectively scaled this performance across their businesses.

What a GPU Does

A graphics processing unit has thousands of computing cores to efficiently process workloads in parallel. Think 3D apps, video and image rendering. These are all massively parallel tasks.

The GPU’s ability to handle parallel tasks makes it expert at accelerating computer-aided applications. Engineers rely on them for heavy-duty stuff like computer-aided engineering (CAE), computer-aided design (CAD) and computer-aided manufacturing (CAM) applications. But there are plenty of other consumer and enterprise applications.

Of course, any processor can render graphics. Four, eight or 16 cores could do the job, eventually. But with the thousands of specialized cores on a GPU, there’s no long wait. Applications simply run faster, interactively — the way they’re supposed to run.

Virtual GPUs Explained

What makes a virtual GPU work is software.

NVIDIA vGPU software delivers graphics-rich virtual desktops and workstations accelerated by NVIDIA Tesla accelerators, the world’s most powerful data center GPUs.

This software transforms a physical GPU installed on a server to create virtual GPUs that can be shared across multiple virtual machines. It’s no longer a one-to-one relationship from GPU to user, but one-to-many.
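
To make the one-to-many idea concrete, here’s a sketch of how a physical GPU’s frame buffer might be carved into fixed-size vGPU profiles. The profile names and sizes below are hypothetical, for illustration only:

```python
# Hypothetical illustration of one-to-many GPU sharing via vGPU profiles.
# The profile names and sizes here are made up for this sketch.
PHYSICAL_FRAME_BUFFER_GB = 24   # e.g., a data center GPU with 24 GB

profiles_gb = {"small": 1, "medium": 2, "large": 4}

for name, size in profiles_gb.items():
    vms_per_gpu = PHYSICAL_FRAME_BUFFER_GB // size
    print(f"{name} ({size} GB) profile: up to {vms_per_gpu} VMs per physical GPU")

# small (1 GB) profile: up to 24 VMs per physical GPU
# medium (2 GB) profile: up to 12 VMs per physical GPU
# large (4 GB) profile: up to 6 VMs per physical GPU
```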

NVIDIA vGPU software also includes a graphics driver for every virtual machine, an approach sometimes referred to as server-side graphics. This enables every virtual machine to get the benefits of a GPU, just like a physical desktop. And because work that was typically done by the CPU is offloaded to the GPU, users get a much better experience and more users can be supported.

NVIDIA’s virtual GPU offerings include three products designed to meet the challenges of the digital workplace: NVIDIA GRID Virtual PC (GRID vPC) and NVIDIA GRID Virtual Apps (GRID vApps) for knowledge workers, and NVIDIA Quadro Virtual Data Center Workstation (Quadro vDWS) for designers, engineers and architects.

NVIDIA GRID Provides an Amazing Experience for Every User

Business users’ graphics requirements are rising. Windows 10 requires up to 32 percent more CPU resources than Windows 7, according to a whitepaper from Lakeside Software, Inc. And updated versions of basic office productivity apps such as Chrome, Skype and Microsoft Office demand a much higher level of computer graphics than before.

This trend toward digitally sophisticated, graphics-intensive workplaces will only accelerate. With CPU-only virtualized environments unable to support the needs of knowledge workers, GPU-accelerated performance with NVIDIA GRID has become a fundamental requirement of virtualized digital workplaces and enterprises using Windows 10.

NVIDIA Quadro vDWS Delivers Secure, Workstation-Class Performance on Any Device

Every day, tens of millions of creative and technical professionals need to access the most demanding applications from any device, work from anywhere and interact with large datasets — all while keeping their information secure.

This might be a cardiologist providing a remote consultation and accessing high-quality images while at a conference; or a government agency delivering simulated, immersive training experiences; or an R&D engineer working on a new car design who needs to ensure intellectual property and proprietary designs remain secure in the data center while collaborating with others in a client’s office.

For people with sophisticated, graphics-intensive needs like these, Quadro vDWS delivers the most powerful virtual workstation, from the data center or cloud to any device, anywhere.

How vGPUs Simplify IT Administration

Working with VDI, IT administrators can manage resources centrally instead of supporting individual workstations at every worker location. Plus, the number of users can be scaled up and down based on project and application needs.

NVIDIA virtual GPU monitoring provides IT departments with tools and insights so they can spend less time troubleshooting and more time focusing on strategic projects. IT admins can gain an understanding of their infrastructure down to the application level, enabling them to localize a problem before it starts. This can reduce the number of tickets and escalations, and reduce the time it takes to resolve issues.

With VDI, IT can also better understand the requirements of their users and adjust the allocation of resources. This saves operational costs while enabling a better user experience. In addition, the live migration features of NVIDIA GPU-accelerated virtual machines enable IT to perform critical services like workload leveling, infrastructure resilience and server software upgrades without any virtual machine downtime. It lets IT truly deliver quality user experiences with high availability.

How Virtual GPUs Help Businesses

These are a few examples of how organizations that have deployed NVIDIA vGPU offerings have benefited:

  • CannonDesign (Architecture, engineering and construction). CannonDesign provided virtualization to all its users, from designers and engineers using Revit and other high-end apps to knowledge workers using office productivity apps. The company achieved higher user density at 2x the performance, with better security. Its IT team can now provision a new user with a virtual workstation in 10 minutes.
  • Cornerstone Home Lending (Financial services). Cornerstone Home Lending streamlined its desktop deployment across 100 branches and 1,000 users into a single, virtualized environment. The company achieved lower latency and high performance on modern business applications like video editing and playback.
  • DigitalGlobe (Satellite imagery). DigitalGlobe enabled its developers and office staff to use graphics-intensive applications on any device with a native PC-like experience. The move to NVIDIA Tesla M10 GPU accelerators and NVIDIA GRID software delivered huge cost savings with a 2x improvement in user density, and streamlined its IT operation with a 500:1 user-to-IT ratio.
  • Honda (Automotive). Honda used virtual GPU technology to enable better scalability and lower investment costs. The company achieved faster performance and lower latency on graphics-heavy applications like 3D CAD, even on thin clients. Honda and Acura vehicles are now being designed using VDI with NVIDIA vGPU software.
  • Seyfarth Shaw (Legal). To provide its attorneys with a rich web browsing experience on any device, Seyfarth Shaw upgraded to Windows 10 VDI with Tesla M10 GPUs and NVIDIA GRID vPC. Just loading its intranet, which once took 8-10 seconds, now only takes 2-3. Scrolling through large PDFs is a breeze, and user complaints to IT nosedived.
  • Holstebro Kommune (Government). Holstebro Kommune achieved up to a 70 percent improvement in CPU utilization with NVIDIA GRID. Modern applications and web browsers with rich multimedia content, video conferencing, and video editing and playback can all run on any device, with performance that rivals a physical desktop.
  • UMass Lowell (Education). The University of Massachusetts at Lowell provides a workstation-caliber experience to its students, who can use apps like SOLIDWORKS, the full Autodesk suite, Moldflow and Mastercam on any device. The university operates its VDI environment at one-fifth the cost of a workstation seat with equivalent performance. With NVIDIA virtualization software updates alone, UMass Lowell achieved a 20-30 percent performance improvement.

Learn more about NVIDIA vGPU solutions by following @NVIDIAVirt.

To Boldly Go: World’s Biggest Planetarium Achieves Jaw-Dropping 10K Resolution

We can’t all be starship captains. But visitors to Planetarium No. 1 in St. Petersburg, Russia, can experience the universe with a level of clarity, detail and interactivity that Captain Kirk himself would envy.

Planetarium No. 1 inside its 19th century natural gas storage building.

Housed in a 19th century natural gas storage building, the planetarium’s exterior is about the only thing that isn’t on the cutting edge of modernity. Inside is the world’s largest planetarium, with a half acre (2,000 square meters) of projection area within a 37-meter diameter dome.

It’s the planet’s only large-size planetarium with a dome that partially touches the floor. This expansive viewing angle makes it possible for visitors to take photos of themselves with space in the background.

And thanks to NVIDIA Quadro graphics, it’s also the world’s highest resolution planetarium, able to display interactive images of space in a whopping 10K resolution — more than 2.5x the detail level of conventional digital cinema screens.

A few months after its official opening in November, Planetarium No. 1 flipped the switch on its record-breaking projection system. It uses NVIDIA Quadro P6000 GPUs, which have become the de facto industry standard for building high-res, multiple-projector systems. Each Quadro P6000 has four outputs creating 4K images, which are then synchronized across 40 high-resolution projectors from one server.

Each projector is responsible for a section of the overall image, which must be blended seamlessly together, in a process known as image stitching.
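
The projector and GPU counts above work out neatly. Here’s a rough Python sanity check, assuming 4K UHD (3840×2160) per output and setting aside the overlap regions consumed by edge blending:

```python
# Rough sanity check on the projection system described above.
projectors = 40
outputs_per_gpu = 4   # each Quadro P6000 drives four 4K outputs
print(f"{projectors // outputs_per_gpu} GPUs drive all {projectors} projectors")  # 10

pixels_per_projector = 3840 * 2160   # assumed 4K UHD per output
raw_pixels = projectors * pixels_per_projector
print(f"~{raw_pixels / 1e6:.0f} million raw pixels before blending")  # ~332

# The blended, stitched dome image resolves fewer unique pixels than the
# raw total -- roughly 10K across, per the article.
```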

Using NVIDIA Quadro P6000 GPUs, Planetarium No. 1 boasts a record-breaking projection system.

“Creating such a large and detailed projection was an incredible technical challenge,” said Evgeny Gudov, director at Planetarium No. 1. “Using the NVIDIA Quadro platform was the only way to achieve it.”

The previous record holder, for both image stitching and dome size, was the 35-meter planetarium in the Nagoya Science Museum in Japan, which combines 24 projectors.

Projection of NVIDIA’s logo.

Command the Stars

Visitors — up to 5,000 of them every day — can control the starry sky above them using multi-touch controllers, enabling them to pilot through space. When it’s not roaming the galaxy, Planetarium No. 1 hosts 360-degree broadcasts of concerts and sporting events.

It’s also a resource for scientific and educational projects, enabling star-gazers to study the skies above St. Petersburg even during overcast conditions, and despite urban light pollution.

An opera performance at Planetarium No. 1.

“Because we’re projecting onto a dome, we need to use 3D mapping techniques to make the images look seamless,” said Gudov. “And with so many visitors, the reliability of the technology was also vital.”

Planetarium No. 1’s most popular offering is a specially created 90-minute show that takes visitors from the birth of the universe right through to the space age.

More Power, Less Tower: AI May Make Aircraft Control Towers Obsolete

Airport control towers are an emblem of the aviation industry. A Canadian company wants to use its technology to make them a relic of the past.

Airport buffs may mourn the change. But Ontario-based Searidge Technologies believes its reasoning is, um, well-grounded.

It believes AI-powered video systems can better watch runways, taxiways and gate areas. With the systems “seeing” airport operations through as many as 200 cameras, there’s no need for the sightline that towers give air traffic controllers.

That doesn’t mean air traffic controllers are going away. The alternative Searidge proposes is a new concept made possible by remote towers. It’s not an easy idea to swallow for an industry that’s been reluctant to embrace change, and is sensitive to any perception safety is being compromised.

But the benefits are hard to deny: reduced taxi and wait times, the ability to handle 15-30 percent more aircraft per hour, and fewer tarmac incidents.

“The industry is adapting, and often now puts air traffic controllers in regular buildings,” said Chris Thurow, head of research and development for Searidge. “It gives them a better view than they see out the tower.”

View of an airport from a remote tower using Searidge technology.

Originally a Radar Alternative

At first, Searidge focused on providing cheaper alternatives to expensive radar systems for tracking and identifying objects on airport runways and taxiways. The company’s earliest products used traditional computer vision algorithms that analyzed video feeds on CPUs. They met the demands on the system at the time, but that was more than a decade ago.

Since then, the resolution of video and need for real-time intelligence have both grown fast. CPUs can’t keep up with these resource-intensive features.

“Using GPU technology, we can offer this at a better price and with a significantly lower number of servers,” he said.

Searidge shifted to GPUs about two years ago. It also brought deep learning tools such as NVIDIA’s CUDA libraries, TensorRT deep learning inference optimizer, and the Caffe deep learning framework into the mix.

Then, as airports began to ask not only for coverage of runways and taxiways, but also tarmacs and gate areas, Searidge expanded the abilities of its technology.

The company started working on more advanced AI that could accommodate a wider range of business rules. This enabled it to detect a greater assortment of objects. It could even deduce when such objects might cause unexpected delays.

“We are still trying to find the limits of the technology,” Thurow said.

A Searidge Technologies control workstation.

Trained with Pooled Airport Data

Searidge has been training its deep learning network on workstations running NVIDIA Quadro P6000 GPUs. The system constantly collects imagery from the airports it serves to expand its training base. Training typically takes five to seven days, so the company has recently begun training on the GPU-powered Google Cloud to speed the process.

The company deploys its technology on workstations running Quadro P6000 GPUs to do positioning of targets, classification and stitching of images in real time for 20 HD cameras. Once at a new airport, it annotates 24 hours of that facility’s normal operations and combines this with customer data from about three dozen airports in 20 countries — so its algorithms are always improving.
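
For a sense of scale, here’s a rough, illustrative estimate of the raw video load from those cameras. The frame rate is an assumption; the article doesn’t specify one:

```python
# Back-of-the-envelope video load for a 20-camera deployment.
cameras = 20
width, height = 1920, 1080   # HD cameras, per the article
fps = 30                     # assumed frame rate (not stated in the article)

pixels_per_second = cameras * width * height * fps
print(f"~{pixels_per_second / 1e9:.2f} billion pixels/s to stitch and classify")
# ~1.24 billion pixels/s, before any detection work even begins
```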

Searidge’s AI innovations are built on top of its “remote tower” platform. New control towers are no longer being built or renovated, Thurow said. Instead, airports are moving air traffic control to ground facilities. They’re even considering off-site locations. With AI added, remote towers offer high levels of situational awareness and air traffic controller support.

In some cases, he said, smaller airports are considering joining forces, allowing a single remote tower to manage more than one facility.

The European Union’s first certified medium-size, multi-runway remote tower recently opened in Budapest, Hungary, using Searidge’s technology. All tower controllers have been trained on the system, which is initially being used for contingency operations, live training and as a backup system. By 2020, HungaroControl aims to operate a full-time remote tower at Budapest.

Eventually, Thurow believes further AI innovation will lead to a more fully functioning “AI assistant.” The assistant could help air traffic controllers by picking up things humans might miss, predicting situations and recognizing patterns.

“I expect AI assistants to come into play in the next five to ten years,” he said.

Walt Disney Imagineering, NVIDIA Develop New Tech to Enable Star Wars: Galaxy’s Edge Millennium Falcon Attraction for Disney Parks

When Star Wars: Galaxy’s Edge opens next year at Disneyland Resort and the Walt Disney World Resort, park guests will visit the planet of Batuu, a remote outpost that was once a busy crossroads along the old sub-lightspeed trade routes. But you don’t have to wait another year to get a glimpse of it.

An in-progress animated sequence from the Millennium Falcon attraction was unveiled today. Produced by ILMxLAB and running in real time, it gives fans the first ever glimpse of the incredible detail and immersion the attraction will offer.

Walt Disney Imagineering teamed with NVIDIA and Epic Games to develop new technology to drive its attraction. When it launches, riders will enter a cockpit powered by a single BOXX chassis packed with eight high-end NVIDIA Quadro P6000 GPUs, connected via Quadro SLI.

Quadro Sync synchronizes five projectors to create dazzling, ultra-high-resolution, perfectly timed displays that fully immerse riders in the world of planet Batuu.

Working with NVIDIA and Epic Games, the Imagineering team created a custom multi-GPU implementation for Unreal Engine. This new code was contributed back to the Epic Games team and will help shape how multi-GPU support works in the engine.

“We worked with NVIDIA engineers to use Quadro-specific features like Mosaic and cross-GPU reads to develop a renderer that had performance characteristics we needed,” says Bei Yang, technology studio executive at Disney Imagineering. “Using the eight connected GPUs allowed us to achieve performance unlike anything before.”

Yang and Principal Software Developer Eric Smolikowski dove into more details during their GTC talk, “Walt Disney Imagineering Technology Preview: Real-time Rendering of a Galaxy Far, Far Away,” and discussed how Disney Imagineering took advantage of the latest NVIDIA technology and the technical modifications they made to Unreal Engine, which allow eight GPUs to render at unprecedented quality and speed.

NVIDIA Brings Live GPU Migration, Ultra-High-End Workstation Performance to Virtualization

They say good things come in threes. That makes it a banner day for the advancement of virtual desktop infrastructure, as NVIDIA has announced:

  • Availability of new virtualization software capabilities with the NVIDIA virtual GPU March 2018 Release, including improved data center management with support for live migration of GPU-accelerated virtual machines.
  • NVIDIA Quadro Virtual Data Center Workstation (Quadro vDWS) support for Tesla V100 GPUs.
  • Enhanced NVIDIA GRID Virtual PC (GRID vPC) with support for multiple 4K monitors and Linux.

The announcements came at the GPU Technology Conference, taking place through March 29 in San Jose.

Live Migration Keeps VDI Deployments Up and Running

Live migration saves valuable time and resources by allowing IT teams to focus on more strategic projects and drive business transformation.

Using Citrix XenMotion, IT teams can migrate live, NVIDIA GPU-accelerated VMs with no impact to users. And with VMware vSphere, they can suspend desktops and resume them later on compatible infrastructure, while preserving desktop and application states.

Live migration not only eases routine maintenance, it facilitates proactive maintenance — IT can resolve potential problems before service disruption occurs. Manual load balancing enables IT to optimize the utilization of resources without end user disruption.

And the best part: IT teams can perform updates and patches more frequently, on their own schedule. Keeping servers healthy is particularly difficult with today’s complex VDI environments. Without live migration, IT teams can spend hours handholding and coordinating between groups — and nights and weekends planning and performing upgrades to ensure users continually receive a great experience with minimal disruption.

Limited resources and regular quarterly and semi-annual updates recommended by Citrix and Microsoft, respectively, to keep servers secure often leave IT teams in a “keep the lights on” trap. With live migration, they can perform these updates easily, ensuring that they get the most out of their investments, without disrupting users.

Industry Support for Live Migration

In January, NVIDIA, Citrix and VMware kicked off a tech preview of live migration, and the early reviews are in:

“It’s a game changer. The Meltdown and Spectre BIOS update came out one day, and the next day I was able to immediately make it a simple, during-the-workday BIOS update. Normally this would be a 10-11 p.m. task on my own time,” says Jeremy Stroebel, IT director at Browning Day Mullins Dierdorf (BDMD).

“Now we don’t have to worry about the large Linux VMs that researchers use to run jobs overnight getting interrupted. Students can now work on an application, go to another class, and get back to the state where they were even if we had to perform maintenance. With VMware suspend/resume for GPU-accelerated VMs, we can tighten the maintenance window for our quarterly and monthly updates or do more inside the same window,” says Jon Kelley, associate director of enterprise innovation at the University of Arkansas.

“XenServer patching was once a nightmare, taking a month or requiring several hours of downtime. With the live migration feature from Citrix and NVIDIA, I can provide a great user experience and still have the flexibility to keep my environment more secure and stable. And I can even do it within working hours without downtime,” says Tommy Stylsvig Würtz Rasmussen, IT specialist at Holstebro Kommune.

“XenMotion support on vGPU is very critical as we are a global company running almost 24 hours a day with a really short service window. Now I have the flexibility to take a host out of production without any issue. With today’s security issues like Meltdown and Spectre this is a must have to keep our environment patched and safe!” says Roddy Kossen, senior system engineer at AWL-Techniek B.V.


The Most Powerful Virtual Workstation Gets More Powerful

Built to accelerate deep learning, HPC and graphics, the NVIDIA Tesla V100 is the world’s most advanced data center GPU. The new NVIDIA Quadro vDWS with Tesla V100 GPU support is the most powerful data center workstation. With it, users can:

  • Run real-time, interactive simulations, such as ANSYS Discovery Live, up to 55 percent faster than with previous generations.
  • Speed rendering time of photorealistic images up to 80 percent faster than with previous generations.
  • Leverage deep learning-enhanced applications for more fluid, visual interactivity throughout the design process.

Accessed from any connected device, from any location, the latest Quadro vDWS delivers even more advanced professional workstation features to designers and technical professionals, freeing creativity to happen anywhere.

“With the massive parallel computing power of NVIDIA Volta GPU architecture we’ve been able to harness deep learning to train our artificial neural network to predict travel times for transportation networks, helping our customers produce predictions 2-3 times faster than before. And with Quadro vDWS support for GV100, we have the flexibility to run VDI during the day and deep learning at night, which helps maximize use of our compute resources while keeping data secure in the data center,” says John Meza, performance engineering and virtualization team leader at ESRI.

Delivering More Value to More Enterprise Users

The NVIDIA virtual GPU March 2018 release also brings enhanced capabilities to GRID vPC, including support for multiple 4K-resolution monitors and larger frame buffers, enabling increased productivity and multitasking. These features are critical to industries like healthcare and financial services, as well as for today’s knowledge workers.

With Linux application support, GRID vPC can be used by engineers who work in 2D and use electronic design automation tools, as well as software developers who code in Linux-based software development environments.

Experience NVIDIA Virtual GPU Products

See GPU-accelerated live migration at the GPU Technology Conference, where global architectural firm Browning Day Mullins Dierdorf will share how it deployed a GPU-accelerated VDI with live migration. Also on display, the NVIDIA Quadro vDWS running on Tesla V100.

See a complete listing of virtualization sessions at GTC.

Get more information about live migration for GPU-accelerated VMs.

Join our webinar, What’s New with NVIDIA Virtual GPU, to learn more.

Learn more about NVIDIA vGPU solutions by following @NVIDIAVirt.

NVIDIA Transforms the Workstation for the Age of Deep Learning

As demand for deep learning continues to gain momentum, it’s already changing the way people work. Driving the next wave of advancement in deep learning-infused workflows is the NVIDIA Volta GPU architecture.

In his keynote address at the GPU Technology Conference today, NVIDIA founder and CEO Jensen Huang unveiled the new Volta-based Quadro GV100, and described how it transforms the workstation with real-time ray tracing and deep learning.

The Quadro GV100 and its companion product, Quadro vDWS for the data center, address the growing demands of the world’s largest businesses — in such fields as automotive, architecture, engineering, entertainment and healthcare — to rapidly deploy deep learning-based research and development, accelerate deep learning-enhanced applications, enable photoreal VR and provide secure, anytime, anywhere access.

Bringing unprecedented capabilities in deep learning, rendering and simulation to designers, engineers and scientists, the new products allow professionals to design better products in a completely new way. GPU-accelerated techniques, like generative design, and the ability to conduct complex simulations faster mean businesses can explore more design choices, optimize their designs for performance and cost, and consequently bring groundbreaking products to market faster.

Innovate Without Restrictions

The new Quadro GV100 packs 7.4 TFLOPS double-precision, 14.8 TFLOPS single-precision and 118.5 TFLOPS deep learning performance, and is equipped with 32GB of high-bandwidth memory capacity. Two GV100 cards can be combined using NVIDIA NVLink interconnect technology to scale memory and performance, creating a massive visual computing solution in a single workstation chassis.

Other benefits of the GV100 include:

  • Easy implementation of deep learning development – Access the NVIDIA GPU Cloud container registry with GV100 or other high-end Quadro GPUs for a comprehensive catalog of GPU-optimized software tools for deep learning and high performance computing on any workstation.
  • Accelerated deep learning training and inferencing on a desktop workstation – Dedicated Tensor Cores and the ability to scale two GV100s for up to 64GB of HBM2 memory with NVIDIA NVLink provide the performance required for demanding deep learning training and inferencing applications.
  • Supercharged rendering performance – Deep learning-accelerated denoising performance for ray tracing provides fluid visual interactivity throughout the design process.
  • Ability to run complex 3D simulations – Fast double-precision coupled with the ability to scale memory up to 64GB accelerates solver performance in computer-aided engineering workflows.
  • Collaborate, design and create in immersive VR – Support for advanced VR features and massive on-board memory capacity means designers can use physics-based, immersive VR platforms such as NVIDIA Holodeck to conduct design reviews and explore complex photoreal scenes and products at scale.

The World’s Most Powerful Virtual Workstation

With newly added support for the NVIDIA Tesla V100 GPUs, Quadro vDWS has the power to address increasingly compute-intensive workflows and securely deliver workstation-class performance to any connected device.

With Quadro vDWS, users can:

  • Run interactive, real-time simulations such as ANSYS Discovery Live
  • Speed rendering time of photorealistic images up to 80 percent faster than with previous generations
  • Leverage AI-enhanced applications for more fluid, visual interactivity throughout the design process
  • Work from anywhere, anytime, from any connected device, while data stays secure, never leaving the data center

Positive Early Reaction to the Quadro GV100

“With Adobe Sensei’s AI and machine learning platform, we’re enabling our creative and enterprise customers to solve digital experience challenges by working smarter, better and faster. The NVIDIA Volta GPU architecture that powers its new Quadro GV100 GPU is clearly a driving force in the evolution of AI. The speed and performance from NVIDIA’s GPUs are helping our customers deliver amazing, real-time experiences at scale across platforms, leveraging Adobe Sensei capabilities.”

– Scott Prevost, vice president of Engineering at Adobe

“The capabilities of the new Volta architecture allow us to create and interact with mathematical models of extreme complexity which rival the accuracy of prohibitively expensive physics simulation, at a fraction of the cost. The new AI-dedicated Tensor Cores have dramatically increased the performance of our models and the speedier NVLink allows us to efficiently scale multi-GPU simulations.”

– Francesco “Frio” Iorio, director of Computational Science Research at Autodesk

“AI computing is allowing our customers to access new business insights and solve problems that were not possible before recent advances in technology. Dell’s capabilities to support customers in AI span IOT, workstations, and data center solutions. The Precision 7920 workstation with Quadro GV100 enables new levels of performance and compute capabilities for an AI-driven future with the simplicity of a deskside solution.”

– Rahul Tikoo, vice president and general manager of Precision Workstations at Dell

“Design in the Age of Experience requires going beyond traditional methods to create a ‘New Reality’ experience for customers. To do this, designers must collaborate and create multisensory, real-world environments that enrich the customer experience. This requires serious GPU horsepower. That’s why we are excited about the performance gains we’ve seen in 3DEXPERIENCE with the new Quadro GV100. The ability to scale two Quadro GV100 GPUs using NVLink, coupled with the performance enhancements of NVIDIA VR SLI, doubled our performance allowing us to seamlessly interact with massive datasets comprised of several hundred million polygons.”

– Xavier Melkonian, CATIA DESIGN portfolio director at Dassault Systèmes

“The exponential growth in AI and the pace of change attributed to machine learning is rapid. HP Z Workstation customers are seeing unprecedented opportunities that have huge implications for not only businesses, but also end users. Combined with the Quadro GV100, HP Z Workstations are the ideal machine learning development platform, while providing the extreme power necessary for product designers, architects and others to create with high visual fidelity and obtain fast results. The HP ML Developers Portal now provides support for NVIDIA GPU Cloud, as well as state-of-the-art tools such as HP’s curated software stacks.”

– Carol Hess, vice president of Worldwide Workstation Product Management at HP Inc.

“Our projects include the world’s tallest towers, longest spans, most varied programs and inventive forms. Utilizing NVIDIA GPUs throughout our 3D visualization and VR workflow helps us discover the smartest solution to every project. AI opens up new possibilities for enhancing our traditional design process. That’s why we are especially excited about the new Quadro GV100. It’s not only equipped with enough memory for us to work on massive projects, but its power to accelerate AI is truly a game changer for us. It’s as if we have an entirely new gear to speed up our projects and deliver higher quality results faster and more efficiently for our clients.”

– Cobus Bothma, applied research director at KPF

“Technology is constantly pushing forward; breaking down walls and bringing with it innovation beyond what was imagined before. With the NVIDIA Quadro GV100 GPU for compute and 3D graphics, we are excited to see the progression and dedication towards pushing boundaries and unleashing the possibilities. Lenovo Workstations is excited to support the GV100 over the coming months as an addition to our overall AI and generative design solutions and to shape the future of creative work.”

– Rob Herman, general manager at Lenovo Workstations

“When we tested the NVIDIA Quadro GV100, we saw a 3x performance improvement right out of the box. We can’t wait to see what kind of performance levels we can achieve by tailoring our applications to really take advantage of it.”

– Paolo Emilio Selva, head of Software Engineering at Weta Digital

Availability

Quadro vDWS is available now for over 120 systems from 33 vendors. The NVIDIA Quadro GV100 is available immediately on nvidia.com and starting in April from leading workstation OEMs, including Dell, HP, Lenovo and Fujitsu, and authorized distribution partners, including PNY Technologies in North America and Europe, ELSA/Ryoyo in Japan and Leadtek in Asia Pacific.

Image courtesy of KPF.

HTC and NVIDIA Give Major Boost to High-End VR

For VR to be an immersive experience, your visual, auditory and tactile senses need to convince your brain that it’s in a believable environment. The new HTC VIVE Pro VR headset, powered by NVIDIA GPUs, makes a persuasive argument.

The device, announced at CES, improves display resolution by nearly 80 percent compared with its predecessor (2880 x 1600 resolution vs. 2160 x 1200).

It’s a major leap forward for commercial VR, where the demands of customers, developers and VR enthusiasts require the best visual clarity, the best audio and the highest degree of comfort.

How Does Increased Resolution Improve the VR Experience?

The VIVE Pro packs its higher display resolution into a similar screen size, resulting in a dramatic increase in pixel density, which is measured in pixels per inch (ppi). The VIVE Pro features 615 ppi — 37 percent more than the original VIVE.

The increased pixel density results in a clearer VR headset image. Lines appear sharper, objects are more distinct and individual pixels are less apparent. And this fidelity improvement helps convince the brain that the user is in a virtual world.
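
Both percentage claims check out. Here’s the arithmetic in Python:

```python
# Verifying the resolution and pixel-density figures above.
vive_pro_pixels = 2880 * 1600   # 4,608,000
vive_pixels = 2160 * 1200       # 2,592,000

print(f"{vive_pro_pixels / vive_pixels - 1:.0%} more pixels")   # 78%

ppi_pro = 615
ppi_vive = ppi_pro / 1.37       # back out the original VIVE's density
print(f"original VIVE: ~{ppi_vive:.0f} ppi vs. VIVE Pro: {ppi_pro} ppi")  # ~449 vs. 615
```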

Higher VR Resolution Demands a Powerful GPU

VR headsets require powerful GPUs to refresh at the 90 frames per second needed for a smooth, comfortable experience. With 78 percent more pixels, VIVE Pro raises the GPU performance workload to the next level — and NVIDIA GPUs deliver.

For an optimal experience with the VIVE Pro, NVIDIA recommends NVIDIA Quadro P5000 or higher professional GPUs, and the GeForce GTX 1070 or above for VR enthusiasts.

These GPUs also feature hardware support for NVIDIA VRWorks technologies, which enable the highest level of VR fidelity.

HTC VIVE Pro VR headset

VIVE Pro and NVIDIA Holodeck at GTC

The latest in VR technology is on display at the GPU Technology Conference, taking place through March 29 in San Jose. To see where VR is headed, GTC attendees can experience VIVE and VIVE Pro across the show floor, powering VR for education, hardware, design, medical, military and gaming applications.

In the VR Village at GTC, attendees also can experience a virtual tour of Ready Player One featuring NVIDIA Holodeck and VIVE Pro*. Based on never-before-seen 3D assets from Steven Spielberg’s Ready Player One movie, set for release on March 31, it transports up to three players into Aech’s basement for an escape room-like experience with an amazing level of immersion and visual fidelity.

Learn more about NVIDIA VR Ready GPUs, VIVE Pro and GTC.

* HTC VIVE is the official VR partner of Ready Player One.
