What Is a Virtual GPU?

Virtualization technology for applications and desktops has been around for a long time, but it hasn’t always lived up to the hype surrounding it. Its biggest failing: a poor user experience.

And the reason why is simple. When virtualization first came on the scene, GPUs — which are specialists in parallel computing — weren’t part of the mix. The virtual GPU, aka vGPU, has changed that.

On a traditional physical computing device like a workstation, PC or laptop, a GPU typically performs all the capture, encode and rendering to power complex tasks, such as 3D apps and video. With early virtualization, all of that was handled by the CPU in the data center host. While functional for some basic applications, CPU-based virtualization never delivered the native experience and performance that most users needed.

That changed a few years ago when NVIDIA released its virtual GPU. Virtualizing a data center GPU allowed it to be shared across multiple virtual machines. This greatly improved performance for applications and desktops, and allowed organizations to build virtual desktop infrastructures (or VDIs) that cost-effectively scaled this performance across their businesses.

What a GPU Does

A graphics processing unit has thousands of computing cores to efficiently process workloads in parallel. Think 3D apps, video and image rendering. These are all massively parallel tasks.

The GPU’s ability to handle parallel tasks makes it ideal for accelerating computer-aided applications. Engineers rely on GPUs for heavy-duty work like computer-aided engineering (CAE), computer-aided design (CAD) and computer-aided manufacturing (CAM) applications. But there are plenty of other consumer and enterprise applications, too.

Of course, any processor can render graphics. Four, eight or 16 cores could do the job, eventually. But with the thousands of specialized cores on a GPU, there’s no long wait. Applications simply run faster, interactively — the way they’re supposed to run.
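The reason parallelism wins here is that rendering is an independent per-pixel computation, so it maps naturally onto thousands of cores. A toy sketch of the idea, using NumPy’s vectorized form to stand in for the parallel hardware (the “shader” function here is purely illustrative):

```python
import numpy as np

# Toy illustration: shading is an independent per-pixel function,
# so it maps naturally onto thousands of parallel cores.
h, w = 1080, 1920
xs, ys = np.meshgrid(np.arange(w), np.arange(h))

# A trivial "shader": brightness as a function of pixel position.
# On a GPU, each core would evaluate this for its own pixel;
# NumPy's vectorized expression plays the same role here.
frame = ((xs + ys) % 256).astype(np.uint8)

# The serial equivalent visits pixels one at a time:
# for y in range(h):
#     for x in range(w):
#         frame[y, x] = (x + y) % 256
print(frame.shape)   # -> (1080, 1920)
```

The vectorized and serial versions compute identical frames; the difference is purely how many pixels are in flight at once.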

Virtual GPUs Explained

What makes a virtual GPU work is software.

NVIDIA vGPU software delivers graphics-rich virtual desktops and workstations accelerated by NVIDIA Tesla accelerators, the world’s most powerful data center GPUs.

This software transforms a physical GPU installed on a server to create virtual GPUs that can be shared across multiple virtual machines. It’s no longer a one-to-one relationship from GPU to user, but one-to-many.

NVIDIA vGPU software also includes a graphics driver for every virtual machine, an approach sometimes referred to as server-side graphics. It enables every virtual machine to get the benefits of a GPU, just like a physical desktop. And because work typically done by the CPU is offloaded to the GPU, users get a much better experience and more of them can be supported.
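The one-to-many relationship can be sketched conceptually: the physical GPU’s frame buffer is carved into fixed-size slices, each assigned to a virtual machine. The class and method names below are illustrative only, not NVIDIA’s actual vGPU manager API:

```python
# Conceptual sketch of vGPU partitioning -- illustrative only,
# not NVIDIA's actual vGPU manager API.

class PhysicalGPU:
    def __init__(self, name, framebuffer_gb):
        self.name = name
        self.framebuffer_gb = framebuffer_gb
        self.vgpus = []

    def allocated_gb(self):
        return sum(v["profile_gb"] for v in self.vgpus)

    def create_vgpu(self, vm_name, profile_gb):
        """Carve a fixed-size slice of frame buffer for one VM."""
        if self.allocated_gb() + profile_gb > self.framebuffer_gb:
            raise RuntimeError("insufficient frame buffer on " + self.name)
        vgpu = {"vm": vm_name, "profile_gb": profile_gb}
        self.vgpus.append(vgpu)
        return vgpu

# A 24 GB data center GPU shared one-to-many across knowledge workers:
gpu = PhysicalGPU("tesla-0", framebuffer_gb=24)
for i in range(12):
    gpu.create_vgpu(f"vm-{i}", profile_gb=2)   # 12 users x 2 GB slices
print(len(gpu.vgpus), gpu.allocated_gb())      # -> 12 24
```

In a real deployment the slice sizes come from fixed vGPU profiles chosen per workload, which is what lets one data center GPU serve many knowledge workers at once.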

NVIDIA’s virtual GPU offerings include three products designed to meet the challenges of the digital workplace: NVIDIA GRID Virtual PC (GRID vPC) and NVIDIA GRID Virtual Apps (GRID vApps) for knowledge workers and NVIDIA Quadro Virtual Data Center Workstation (Quadro vDWS) for designers, engineers and architects.

NVIDIA GRID Provides an Amazing Experience for Every User

Business users’ graphics requirements are rising. Windows 10 requires up to 32 percent more CPU resources than Windows 7, according to a whitepaper from Lakeside Software, Inc. And updated versions of basic office productivity apps such as Chrome, Skype and Microsoft Office demand a much higher level of computer graphics than before.

This trend toward digitally sophisticated, graphics-intensive workplaces will only accelerate. With CPU-only virtualized environments unable to support the needs of knowledge workers, GPU-accelerated performance with NVIDIA GRID has become a fundamental requirement of virtualized digital workplaces and enterprises using Windows 10.

NVIDIA Quadro vDWS Delivers Secure, Workstation-Class Performance on Any Device

Every day, tens of millions of creative and technical professionals need to access the most demanding applications from any device, work from anywhere and interact with large datasets — all while keeping their information secure.

This might be a cardiologist providing a remote consultation and accessing high-quality images while at a conference; or a government agency delivering simulated, immersive training experiences; or an R&D engineer working on a new car design who needs to ensure intellectual property and proprietary designs remain secure in the data center while collaborating with others in a client’s office.

For people with sophisticated, graphics-intense needs like these, Quadro vDWS offers the most powerful virtual workstation from the data center or cloud to any device, anywhere.

How vGPUs Simplify IT Administration

Working with VDI, IT administrators can manage resources centrally instead of supporting individual workstations at every worker location. Plus, the number of users can be scaled up and down based on project and application needs.

NVIDIA virtual GPU monitoring provides IT departments with tools and insights so they can spend less time troubleshooting and more time focusing on strategic projects. IT admins can gain an understanding of their infrastructure down to the application level, enabling them to isolate potential problems before they affect users. This can reduce the number of tickets and escalations, and shorten the time it takes to resolve issues.

With VDI, IT can also better understand the requirements of their users and adjust the allocation of resources. This saves operational costs while enabling a better user experience. In addition, live migration of NVIDIA GPU-accelerated virtual machines enables IT to perform critical services like workload leveling, infrastructure resilience and server software upgrades without any virtual machine downtime. It lets IT truly deliver quality user experiences with high availability.

How Virtual GPUs Help Businesses

These are a few examples of how organizations that have deployed NVIDIA vGPU offerings have benefited:

  • CannonDesign (Architecture, engineering and construction). CannonDesign provided virtualization to all its users, from designers and engineers using Revit and high-end apps to knowledge workers using office productivity apps. The company achieved higher user density at 2x the performance, with better security. Its IT team can now provision a new user with a virtual workstation in 10 minutes.
  • Cornerstone Home Lending (Financial services). Cornerstone Home Lending streamlined its desktop deployment across 100 branches and 1,000 users into a single, virtualized environment. The company achieved lower latency and high performance on modern business applications like video editing and playback.
  • DigitalGlobe (Satellite imagery). DigitalGlobe enabled its developers and office staff to use graphics-intensive applications on any device with a native PC-like experience. The move to NVIDIA Tesla M10 GPU accelerators and NVIDIA GRID software delivered huge cost savings with a 2x improvement in user density, and streamlined its IT operation with a 500:1 user-to-IT ratio.
  • Honda (Automotive). Honda used virtual GPU technology to enable better scalability and lower investment costs. The company achieved faster performance and lower latency on graphics-heavy applications like 3D CAD, even on thin clients. Honda and Acura vehicles are now being designed using VDI with NVIDIA vGPU software.
  • Seyfarth Shaw (Legal). To provide its attorneys with a rich web browsing experience on any device, Seyfarth Shaw upgraded to Windows 10 VDI with Tesla M10 GPUs and NVIDIA GRID vPC. Just loading its intranet, which once took 8-10 seconds, now only takes 2-3. Scrolling through large PDFs is a breeze, and user complaints to IT nosedived.
  • Holstebro Kommune (Government). Holstebro Kommune achieved up to a 70 percent improvement in CPU utilization with NVIDIA GRID. Modern applications and web browsers with rich multimedia content, video conferencing, video editing and playback can be done on any device, with performance that rivals the physical desktop.
  • UMass Lowell (Education). The University of Massachusetts at Lowell provides a workstation-caliber experience to its students, who can use apps like SOLIDWORKS, the full Autodesk suite, Moldflow and Mastercam on any device. The university operates its VDI environment at one-fifth the cost of a workstation seat with equivalent performance. Just with NVIDIA virtualization software updates, UMass Lowell achieved a 20-30 percent performance improvement.

Learn more about NVIDIA vGPU solutions by following @NVIDIAVirt.

The post What Is a Virtual GPU? appeared first on The Official NVIDIA Blog.

To Boldly Go: World’s Biggest Planetarium Achieves Jaw-Dropping 10K Resolution

We can’t all be starship captains. But visitors to Planetarium No. 1 in St. Petersburg, Russia, can experience the universe with a level of clarity, detail and interactivity that Captain Kirk himself would envy.

Planetarium No. 1
Planetarium No. 1 inside its 19th century natural gas storage building.

Housed in a 19th century natural gas storage building, the planetarium’s exterior is about the only thing that isn’t on the cutting edge of modernity. Inside is the world’s largest planetarium, with a half acre (2,000 square meters) of projection area within a 37-meter diameter dome.

It’s the planet’s only large-size planetarium with a dome that partially touches the floor. This expansive viewing angle makes it possible for visitors to take photos of themselves with space in the background.

And thanks to NVIDIA Quadro graphics, it’s also the world’s highest resolution planetarium, able to display interactive images of space in a whopping 10K resolution — more than 2.5x the detail level of conventional digital cinema screens.

A few months after its official opening in November, Planetarium No. 1 flipped the switch on its record-breaking projection system. It uses NVIDIA Quadro P6000 GPUs, which have become the de facto industry standard for building high-res, multi-projector systems. Each Quadro P6000 has four outputs producing 4K images, which are synchronized across 40 high-resolution projectors from a single server.

Each projector is responsible for a section of the overall image, which must be blended seamlessly together, in a process known as image stitching.
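At the seams, stitching typically relies on overlapping projector tiles, with each projector fading its contribution across the overlap so the summed brightness stays uniform. A minimal edge-blending sketch, assuming simple linear alpha ramps (real dome systems also warp each tile geometrically, which is omitted here):

```python
import numpy as np

# Minimal edge-blending sketch: two adjacent projector tiles overlap,
# and each applies a linear alpha ramp so the summed brightness is
# seamless across the overlap. (Real dome systems also warp each tile.)
def blend_ramp(width, overlap, side):
    """Per-column weights for one projector; 'side' is which edge overlaps."""
    w = np.ones(width)
    ramp = np.linspace(1.0, 0.0, overlap)
    if side == "right":
        w[-overlap:] = ramp          # fade out toward the right edge
    else:
        w[:overlap] = ramp[::-1]     # fade in from the left edge
    return w

width, overlap = 8, 4
left  = blend_ramp(width, overlap, "right")
right = blend_ramp(width, overlap, "left")

# In the shared region the two ramps sum to 1.0: uniform brightness.
print(left[-overlap:] + right[:overlap])
```

Scaling the same idea to 40 tiles on a dome is what makes hardware-level synchronization across all the GPUs essential: a single frame of drift between neighbors would be visible at every seam.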

Using NVIDIA Quadro P6000 GPUs, Planetarium No. 1 boasts a record-breaking projection system.

“Creating such a large and detailed projection was an incredible technical challenge,” said Evgeny Gudov, director at Planetarium No. 1. “Using the NVIDIA Quadro platform was the only way to achieve it.”

The previous record holder, for both image stitching and dome size, was the 35-meter planetarium in the Nagoya Science Museum in Japan, which combines 24 projectors.

Projection of NVIDIA’s logo.

Command the Stars

Visitors — up to 5,000 of them every day — can control the starry sky above them using multi-touch controllers, enabling them to pilot through space. When it’s not roaming the galaxy, Planetarium No. 1 hosts 360-degree broadcasts of concerts and sporting events.

It’s also a resource for scientific and educational projects, enabling star-gazers to study the skies above St. Petersburg even during overcast conditions, and despite urban light pollution.

An opera performance at Planetarium No. 1.

“Because we’re projecting onto a dome, we need to use 3D mapping techniques to make the images look seamless,” said Gudov. “And with so many visitors, the reliability of the technology was also vital.”

Planetarium No. 1’s most popular offering is a specially created 90-minute show that takes visitors from the birth of the universe right through to the space age.

The post To Boldly Go: World’s Biggest Planetarium Achieves Jaw-Dropping 10K Resolution appeared first on The Official NVIDIA Blog.

More Power, Less Tower: AI May Make Aircraft Control Towers Obsolete

Airport control towers are an emblem of the aviation industry. A Canadian company wants to use its technology to make them a relic of the past.

Airport buffs may mourn the change. But Ontario-based Searidge Technologies believes its reasoning is, um, well-grounded.

It believes AI-powered video systems can better watch runways, taxiways and gate areas. By “seeing” airport operations through as many as 200 cameras, there’s no need for the sightline that towers give air traffic controllers.

That doesn’t mean air traffic controllers are going away. The alternative Searidge proposes is a new concept made possible by remote towers. It’s not an easy idea to swallow for an industry that’s been reluctant to embrace change, and is sensitive to any perception that safety is being compromised.

But the benefits are hard to deny, including reduced taxi and wait times, handling 15-30 percent more aircraft per hour and reducing the number of tarmac incidents.

“The industry is adapting, and often now puts air traffic controllers in regular buildings,” said Chris Thurow, head of research and development for Searidge. “It gives them a better view than they see out the tower.”

Searidge control windows
View of an airport from remote tower using Searidge technology.

Originally a Radar Alternative

At first, Searidge focused on providing cheaper alternatives to expensive radar systems for tracking and identifying objects on airport runways and taxiways. The company’s earliest products used traditional computer vision algorithms that analyzed video feeds on CPUs. They met the demands on the system at the time, but that was more than a decade ago.

Since then, the resolution of video and need for real-time intelligence have both grown fast. CPUs can’t keep up with these resource-intensive features.

“Using GPU technology, we can offer this at a better price and with a significantly lower number of servers,” he said.

Searidge shifted to GPUs about two years ago. It also brought deep learning tools such as NVIDIA’s CUDA libraries, TensorRT deep learning inference optimizer, and the Caffe deep learning framework into the mix.

Then, as airports began to ask not only for coverage of runways and taxiways, but also tarmacs and gate areas, Searidge expanded the abilities of its technology.

The company started working on more advanced AI that could accommodate a wider range of business rules. This enabled it to detect a greater assortment of objects. It could even deduce when such objects might cause unexpected delays.

“We are still trying to find the limits of the technology,” Thurow said.

Searidge control workstation
A Searidge Technologies control workstation.

Trained with Pooled Airport Data

Searidge has been training its deep learning network on workstations running NVIDIA Quadro P6000 GPUs. The system constantly collects imagery from the airports it serves to expand its training base. Training typically takes five to seven days, so the company has recently begun training on the GPU-powered Google Cloud to speed the process.

The company deploys its technology on workstations running Quadro P6000 GPUs to position targets, classify objects and stitch images in real time from 20 HD cameras. When it arrives at a new airport, it annotates 24 hours of that facility’s normal operations and combines this with customer data from about three dozen airports in 20 countries, so its algorithms are always improving.
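Once the vision model supplies classified, positioned objects, a rule layer can apply simple geometry to flag anomalies, say, an unexpected vehicle inside a runway boundary. The sketch below is purely illustrative of that pattern, not Searidge’s actual system; all names and rules are hypothetical:

```python
# Toy rule layer on top of detections -- illustrative only, not
# Searidge's actual system. The vision model supplies classified,
# positioned objects; geometry then decides whether to raise an alert.

def point_in_polygon(x, y, poly):
    """Ray-casting point-in-polygon test."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

RUNWAY = [(0, 0), (100, 0), (100, 10), (0, 10)]     # simplified outline
ALLOWED_ON_RUNWAY = {"aircraft"}

def alerts(detections):
    """Flag any non-aircraft object positioned inside the runway."""
    return [d for d in detections
            if d["cls"] not in ALLOWED_ON_RUNWAY
            and point_in_polygon(d["x"], d["y"], RUNWAY)]

frame = [{"cls": "aircraft", "x": 50, "y": 5},
         {"cls": "vehicle",  "x": 20, "y": 4},    # on the runway: alert
         {"cls": "vehicle",  "x": 20, "y": 40}]   # off the runway: fine
print([d["cls"] for d in alerts(frame)])          # -> ['vehicle']
```

Splitting perception (the trained network) from policy (per-airport business rules) is what lets the same detector accommodate the differing rules each airport asks for.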

Searidge’s AI innovations are built on top of its “remote tower” platform. New control towers are no longer being built or renovated, Thurow said. Instead airports are moving air traffic control to ground facilities. They’re even considering off-site locations. With AI added to remote towers, they offer high levels of situational awareness and air traffic controller support.

In some cases, he said, smaller airports are considering joining forces, allowing a single remote tower to manage more than one facility.

The European Union’s first certified medium-size, multi-runway remote tower recently opened in Budapest, Hungary, using Searidge’s technology. All tower controllers have been trained on the system, which is initially being used for contingency operations, live training and as a backup system. By 2020, HungaroControl aims to operate a full-time remote tower at Budapest.

Eventually, Thurow believes further AI innovation will lead to a more fully functioning “AI assistant.” The assistant could help air traffic controllers by picking up things humans might miss, predicting situations and recognizing patterns.

“I expect AI assistants to come into play in the next five to ten years,” he said.

The post More Power, Less Tower: AI May Make Aircraft Control Towers Obsolete appeared first on The Official NVIDIA Blog.

Walt Disney Imagineering, NVIDIA Develop New Tech to Enable Star Wars: Galaxy’s Edge Millennium Falcon Attraction for Disney Parks

When Star Wars: Galaxy’s Edge opens next year at Disneyland Resort and the Walt Disney World Resort, park guests will visit the planet of Batuu, a remote outpost that was once a busy crossroads along the old sub-lightspeed trade routes. But you don’t have to wait another year to get a glimpse of it.

An in-progress animated sequence from the Millennium Falcon attraction was unveiled today. Produced by ILMxLAB and running in real time, it gives fans the first ever glimpse of the incredible detail and immersion the attraction will offer.

Walt Disney Imagineering teamed with NVIDIA and Epic Games to develop new technology to drive its attraction. When it launches, riders will enter a cockpit powered by a single BOXX chassis packed with eight high-end NVIDIA Quadro P6000 GPUs, connected via Quadro SLI.

Quadro Sync synchronizes five projectors for the creation of dazzling ultra-high resolution, perfectly timed displays to fully immerse the riders in the world of planet Batuu.

Working with NVIDIA and Epic Games, the Imagineering team created a custom multi-GPU implementation for Unreal Engine. This new code was returned to the Epic Games team and will help shape how multi-GPU support functions in their engine.

“We worked with NVIDIA engineers to use Quadro-specific features like Mosaic and cross-GPU reads to develop a renderer that had performance characteristics we needed,” says Bei Yang, technology studio executive at Disney Imagineering. “Using the eight connected GPUs allowed us to achieve performance unlike anything before.”

Yang and Principal Software Developer Eric Smolikowski dove into more details during their GTC talk, “Walt Disney Imagineering Technology Preview: Real-time Rendering of a Galaxy Far, Far Away,” and discussed how Disney Imagineering took advantage of the latest NVIDIA technology and the technical modifications they made for the Unreal Engine, which allows eight GPUs to render at unprecedented quality and speed.

The post Walt Disney Imagineering, NVIDIA Develop New Tech to Enable Star Wars: Galaxy’s Edge Millennium Falcon Attraction for Disney Parks appeared first on The Official NVIDIA Blog.

NVIDIA Brings Live GPU Migration, Ultra-High-End Workstation Performance to Virtualization

They say good things come in threes. That makes it a banner day for the advancement of virtual desktop infrastructure, as NVIDIA has announced:

  • Availability of new virtualization software capabilities with the NVIDIA virtual GPU March 2018 Release, including improved data center management with support for live migration of GPU-accelerated virtual machines.
  • NVIDIA Quadro Virtual Data Center Workstation (Quadro vDWS) support for Tesla V100 GPUs.
  • Enhanced NVIDIA GRID Virtual PC (GRID vPC) with support for multiple 4K monitors and Linux.

The announcements came at the GPU Technology Conference, taking place through March 29 in San Jose.

Live Migration Keeps VDI Deployments Up and Running

Live migration saves valuable time and resources by allowing IT teams to focus on more strategic projects and drive business transformation.

Using Citrix XenMotion, IT teams can migrate live, NVIDIA GPU-accelerated VMs with no impact to users. And with VMware vSphere, they can suspend desktops and resume them later on compatible infrastructure, while preserving desktop and application states.

Live migration not only eases routine maintenance, it facilitates proactive maintenance — IT can resolve potential problems before service disruption occurs. Manual load balancing enables IT to optimize the utilization of resources without end user disruption.
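The load-leveling decision itself is conceptually simple: pick a compatible host with enough free GPU frame buffer for the VM’s vGPU profile, preferring the least-loaded one. The data model and function below are hypothetical, a sketch of the decision rather than the Citrix or VMware API:

```python
# Sketch of the load-leveling decision -- hypothetical data model,
# not the Citrix/VMware API. Pick the least-utilized host that
# still has room for the VM's vGPU profile.

hosts = [
    {"name": "host-a", "free_fb_gb": 4,  "gpu_util": 0.85},
    {"name": "host-b", "free_fb_gb": 16, "gpu_util": 0.30},
    {"name": "host-c", "free_fb_gb": 8,  "gpu_util": 0.55},
]

def pick_migration_target(vm_profile_gb, hosts):
    """Choose the least-utilized host with enough free frame buffer."""
    candidates = [h for h in hosts if h["free_fb_gb"] >= vm_profile_gb]
    if not candidates:
        return None                     # no host can take the VM
    return min(candidates, key=lambda h: h["gpu_util"])

target = pick_migration_target(vm_profile_gb=8, hosts=hosts)
print(target["name"])   # -> host-b
```

In practice the hypervisor tooling handles the actual move; the win of live migration is that this placement choice can be acted on while users keep working.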

And the best part: IT teams can perform updates and patches more frequently, on their own schedule. Keeping servers in healthy states is particularly difficult with today’s complex VDI environments. Without live migration, IT teams can spend hours handholding and coordinating between groups — and nights and weekends planning and performing upgrades to ensure users continually receive a great experience with minimal disruption.

Limited resources and regular quarterly and semi-annual updates recommended by Citrix and Microsoft, respectively, to keep servers secure often leave IT teams in a “keep the lights on” trap. With live migration, they can perform these updates easily, ensuring that they get the most out of their investments, without disrupting users.

Industry Support for Live Migration

In January, NVIDIA, Citrix and VMware kicked off a tech preview of live migration, and early reviews are in:

“It’s a game changer. The Meltdown and Spectre BIOS update came out one day, and the next day I was able to immediately make it a simple, during-the-workday BIOS update. Normally this would be a 10-11 p.m. task on my own time,” says Jeremy Stroebel, IT director at Browning Day Mullins Dierdorf (BDMD).

“Now we don’t have to worry about the large Linux VMs that researchers use to run jobs overnight getting interrupted. Students can now work on an application, go to another class, and get back to the state where they were even if we had to perform maintenance. With VMware suspend/resume for GPU-accelerated VMs, we can tighten the maintenance window for our quarterly and monthly updates or do more inside the same window,” says Jon Kelley, associate director of enterprise innovation at University of Arkansas.

“XenServer patching was once a nightmare, taking a month or requiring several hours of downtime. With the live migration feature from Citrix and NVIDIA, I can provide a great user experience and still have the flexibility to keep my environment more secure and stable. And I can even do it within working hours without downtime,” says Tommy Stylsvig Würtz Rasmussen, IT specialist at Holstebro Kommune.

“XenMotion support on vGPU is very critical as we are a global company running almost 24 hours a day with a really short service window. Now I have the flexibility to take a host out of production without any issue. With today’s security issues like Meltdown and Spectre this is a must have to keep our environment patched and safe!” says Roddy Kossen, senior system engineer at AWL-Techniek B.V.


The Most Powerful Virtual Workstation Gets More Powerful

Built to accelerate deep learning, HPC and graphics, the NVIDIA Tesla V100 is the world’s most advanced data center GPU. The new NVIDIA Quadro vDWS with Tesla V100 GPU support is the most powerful data center workstation. With it, users can:

  • Run real-time, interactive simulations, such as ANSYS Discovery Live, up to 55 percent faster than previous generations.
  • Speed rendering time of photorealistic images up to 80 percent faster than with previous generations.
  • Leverage deep learning-enhanced applications for more fluid, visual interactivity throughout the design process.

Accessed from any connected device, from any location, the latest Quadro vDWS delivers even more advanced professional workstation features to designers and technical professionals, freeing creativity to happen anywhere.

“With the massive parallel computing power of NVIDIA Volta GPU architecture we’ve been able to harness deep learning to train our artificial neural network to predict travel times for transportation networks, helping our customers produce predictions 2-3 times faster than before. And with Quadro vDWS support for GV100, we have the flexibility to run VDI during the day and deep learning at night, which helps maximize use of our compute resources while keeping data secure in the data center,” says John Meza, performance engineering and virtualization team leader at ESRI.

Delivering More Value to More Enterprise Users

The NVIDIA virtual GPU March 2018 release also brings enhanced capabilities with GRID vPC – including support for multiple 4K-resolution monitors and larger frame buffer support to enable increased productivity and multitasking. These features are critical to industries like healthcare and financial services, as well as for today’s knowledge workers.

With Linux application support, GRID vPC can be used by engineers who work in 2D and use electronic design automation tools, as well as software developers who code in Linux-based software development environments.

Experience NVIDIA Virtual GPU Products

See GPU-accelerated live migration at the GPU Technology Conference, where global architectural firm Browning Day Mullins Dierdorf will share how it deployed a GPU-accelerated VDI with live migration. Also on display: the NVIDIA Quadro vDWS running on Tesla V100.

See a complete listing of virtualization sessions at GTC.

Get more information about live migration for GPU-accelerated VMs.

Join our webinar, What’s New with NVIDIA Virtual GPU, to learn more.

Learn more about NVIDIA vGPU solutions by following @NVIDIAVirt.

The post NVIDIA Brings Live GPU Migration, Ultra-High-End Workstation Performance to Virtualization appeared first on The Official NVIDIA Blog.

NVIDIA Transforms the Workstation for the Age of Deep Learning

As demand for deep learning continues to gain momentum, it’s already changing the way people work. Driving the next wave of advancement in deep learning-infused workflows is the NVIDIA Volta GPU architecture.

In his keynote address at the GPU Technology Conference today, NVIDIA founder and CEO Jensen Huang unveiled the new Volta-based Quadro GV100, and described how it transforms the workstation with real-time ray tracing and deep learning.

The Quadro GV100 and its companion product, Quadro vDWS for the data center, address the growing demands of the world’s largest businesses — in such fields as automotive, architecture, engineering, entertainment and healthcare — to rapidly deploy deep learning-based research and development, accelerate deep learning-enhanced applications, enable photoreal VR and provide secure, anytime, anywhere access.

Bringing unprecedented capabilities in deep learning, rendering and simulation to designers, engineers and scientists, the new products allow professionals to design better products in a completely new way. GPU-accelerated techniques, like generative design, and the ability to conduct complex simulations faster mean businesses can explore more design choices, optimize their designs for performance and cost, and consequently bring groundbreaking products to market faster.

Innovate Without Restrictions

The new Quadro GV100 packs 7.4 TFLOPS double-precision, 14.8 TFLOPS single-precision and 118.5 TFLOPS deep learning performance, and is equipped with 32GB of high-bandwidth memory capacity. Two GV100 cards can be combined using NVIDIA NVLink interconnect technology to scale memory and performance, creating a massive visual computing solution in a single workstation chassis.
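The headline figures are internally consistent: single precision runs at exactly twice the double-precision rate, and NVLink pools the two cards’ memory. A quick arithmetic check of the numbers quoted above:

```python
# Quick arithmetic check on the GV100 figures quoted above.
fp64_tflops = 7.4
fp32_tflops = 14.8
hbm2_gb_per_card = 32

assert fp32_tflops == 2 * fp64_tflops   # FP32 runs at 2x the FP64 rate
print(2 * hbm2_gb_per_card)             # -> 64 GB pooled over NVLink
```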

Other benefits of the GV100 include:

  • Easy implementation of deep learning development – Access the NVIDIA GPU Cloud container registry with GV100 or other high-end Quadro GPUs for a comprehensive catalog of GPU-optimized software tools for deep learning and high performance computing on any workstation.
  • Accelerated deep learning training and inferencing on a desktop workstation – Dedicated Tensor Cores and the ability to scale two GV100s for up to 64GB of HBM2 memory with NVIDIA NVLink provide the performance required for demanding deep learning training and inferencing applications.
  • Supercharged rendering performance – Deep learning-accelerated denoising performance for ray tracing provides fluid visual interactivity throughout the design process.
  • Ability to run complex 3D simulations – Fast double-precision coupled with the ability to scale memory up to 64GB accelerates solver performance in computer-aided engineering workflows.
  • Collaborate, design and create in immersive VR – Support for advanced VR features and massive on-board memory capacity means designers can use physics-based, immersive VR platforms such as NVIDIA Holodeck to conduct design reviews and explore complex photoreal scenes and products at scale.

The World’s Most Powerful Virtual Workstation

With newly added support for the NVIDIA Tesla V100 GPUs, Quadro vDWS has the power to address increasingly compute-intensive workflows and securely deliver workstation-class performance to any connected device.

With Quadro vDWS, users can:

  • Run interactive, real-time simulations such as ANSYS Discovery Live
  • Speed rendering time of photorealistic images up to 80 percent faster than with previous generations
  • Leverage AI-enhanced applications for more fluid, visual interactivity throughout the design process
  • Work from anywhere, anytime, from any connected device, while data stays secure, never leaving the data center

Positive Early Reaction to the Quadro GV100

“With Adobe Sensei’s AI and machine learning platform, we’re enabling our creative and enterprise customers to solve digital experience challenges by working smarter, better and faster. The NVIDIA Volta GPU architecture that powers its new Quadro GV100 GPU is clearly a driving force in the evolution of AI. The speed and performance from NVIDIA’s GPUs are helping our customers deliver amazing, real-time experiences at scale across platforms, leveraging Adobe Sensei capabilities.”

– Scott Prevost, vice president of Engineering at Adobe

“The capabilities of the new Volta architecture allow us to create and interact with mathematical models of extreme complexity which rival the accuracy of prohibitively expensive physics simulation, at a fraction of the cost. The new AI-dedicated Tensor Cores have dramatically increased the performance of our models and the speedier NVLink allows us to efficiently scale multi-GPU simulations.”

– Francesco “Frio” Iorio, director of Computational Science Research at Autodesk

“AI computing is allowing our customers to access new business insights and solve problems that were not possible before recent advances in technology. Dell’s capabilities to support customers in AI span IOT, workstations, and data center solutions. The Precision 7920 workstation with Quadro GV100 enables new levels of performance and compute capabilities for an AI-driven future with the simplicity of a deskside solution.”

– Rahul Tikoo, vice president and general manager of Precision Workstations at Dell

“Design in the Age of Experience requires going beyond traditional methods to create a ‘New Reality’ experience for customers. To do this, designers must collaborate and create multisensory, real-world environments that enrich the customer experience. This requires serious GPU horsepower. That’s why we are excited about the performance gains we’ve seen in 3DEXPERIENCE with the new Quadro GV100. The ability to scale two Quadro GV100 GPUs using NVLink, coupled with the performance enhancements of NVIDIA VR SLI, doubled our performance allowing us to seamlessly interact with massive datasets comprised of several hundred million polygons.”

– Xavier Melkonian, CATIA DESIGN portfolio director at Dassault Systèmes

“The exponential growth in AI and the pace of change attributed to machine learning is rapid. HP Z Workstation customers are seeing unprecedented opportunities that have huge implications for not only businesses, but also end users. Combined with the Quadro GV100, HP Z Workstations are the ideal machine learning development platform, while providing the extreme power necessary for product designers, architects and others to create with high visual fidelity and obtain fast results. The HP ML Developers Portal now provides support for NVIDIA GPU Cloud, as well as state-of-the-art tools such as HP’s curated software stacks.”

– Carol Hess, vice president of Worldwide Workstation Product Management at HP Inc.

“Our projects include the world’s tallest towers, longest spans, most varied programs and inventive forms. Utilizing NVIDIA GPUs throughout our 3D visualization and VR workflow helps us discover the smartest solution to every project. AI opens up new possibilities for enhancing our traditional design process. That’s why we are especially excited about the new Quadro GV100. It’s not only equipped with enough memory for us to work on massive projects, but its power to accelerate AI is truly a game changer for us. It’s as if we have an entirely new gear to speed up our projects and deliver higher quality results faster and more efficiently for our clients.”

– Cobus Bothma, applied research director at KPF

“Technology is constantly pushing forward; breaking down walls and bringing with it innovation beyond what was imagined before. With the NVIDIA Quadro GV100 GPU for compute and 3D graphics, we are excited to see the progression and dedication towards pushing boundaries and unleashing the possibilities. Lenovo Workstations is excited to support the GV100 over the coming months as an addition to our overall AI and generative design solutions and to shape the future of creative work.”

– Rob Herman, general manager at Lenovo Workstations

“When we tested the NVIDIA Quadro GV100, we saw a 3x performance improvement right out of the box. We can’t wait to see what kind of performance levels we can achieve by tailoring our applications to really take advantage of it.”

– Paolo Emilio Selva, head of Software Engineering at Weta Digital

Availability

Quadro vDWS is available now for over 120 systems from 33 vendors. The NVIDIA Quadro GV100 is available immediately on nvidia.com and starting in April from leading workstation OEMs, including Dell, HP, Lenovo and Fujitsu, and authorized distribution partners, including PNY Technologies in North America and Europe, ELSA/Ryoyo in Japan and Leadtek in Asia Pacific.

Image courtesy of KPF.

The post NVIDIA Transforms the Workstation for the Age of Deep Learning appeared first on The Official NVIDIA Blog.

HTC and NVIDIA Give Major Boost to High-End VR

For VR to be an immersive experience, your visual, auditory and tactile senses need to convince your brain that you’re in a believable environment. The new HTC VIVE Pro VR headset, powered by NVIDIA GPUs, makes a persuasive argument.

The device, announced at CES, improves display resolution by nearly 80 percent compared with its predecessor (2880 x 1600 resolution vs. 2160 x 1200).

It’s a major leap forward for commercial VR, where the demands of customers, developers and VR enthusiasts require the best visual clarity, the best audio and the highest degree of comfort.

How Does Increased Resolution Improve the VR Experience?

The VIVE Pro’s increased display resolution, while keeping a similar screen size, leads to a dramatic increase in pixel density, which is measured in pixels per inch (ppi). The VIVE Pro features 615 ppi — 37 percent more than the original VIVE.

The increased pixel density results in a clearer VR headset image. Lines appear sharper, objects are more distinct and individual pixels are less apparent. And this fidelity improvement helps convince the brain that the user is in a virtual world.

Higher VR Resolution Demands a Powerful GPU

VR headsets require powerful GPUs to refresh at the 90 frames per second needed for a smooth, comfortable experience. With 78 percent more pixels, VIVE Pro raises the GPU performance workload to the next level — and NVIDIA GPUs deliver.
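The 78 percent figure follows directly from the two resolutions quoted above; a quick back-of-the-envelope check in Python:

```python
# Pixel counts per frame, using the resolutions cited in this article
vive = 2160 * 1200       # original HTC VIVE
vive_pro = 2880 * 1600   # HTC VIVE Pro

# Fractional increase in pixels the GPU must render each frame
increase = vive_pro / vive - 1
print(f"VIVE Pro renders {increase:.0%} more pixels per frame")
```

At 90 frames per second, that extra per-frame work compounds into a substantially heavier GPU load, which is why the recommended GPU tier rises accordingly.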

For an optimal experience with the VIVE Pro, NVIDIA recommends the NVIDIA Quadro P5000 or higher for professional users, and the GeForce GTX 1070 or above for VR enthusiasts.

These GPUs also feature hardware support for NVIDIA VRWorks technologies, which enable the highest level of VR fidelity.

HTC VIVE Pro VR headset

VIVE Pro and NVIDIA Holodeck at GTC

The latest in VR technology is on display at GPU Technology Conference, taking place through March 29 in San Jose. To see where VR is headed, GTC attendees can experience VIVE and VIVE Pro across the show floor powering VR for education, hardware, design, medical, military and gaming.

In the VR Village at GTC, attendees also can experience a virtual tour of Ready Player One featuring NVIDIA Holodeck and VIVE Pro*. Based on never-before-seen 3D assets from Steven Spielberg’s Ready Player One movie, set for release on March 31, up to three players are transported into Aech’s basement for an escape room-like experience with an amazing level of immersion and visual fidelity.

Learn more about NVIDIA VR Ready GPUs, VIVE Pro and GTC.

* HTC VIVE is the official VR partner of Ready Player One.

The post HTC and NVIDIA Give Major Boost to High-End VR appeared first on The Official NVIDIA Blog.

No Barrier to This Reef: Dazzling Film Brings Coral to Life in GPU-Powered Museum Show

The slithery spotted eel isn’t really swimming over you, and you’re not surrounded by a school of light blue fish. But it sure feels that way as you plunge into Expedition Reef, the new planetarium show at the California Academy of Sciences.

The San Francisco museum’s GPU-powered coral reef experience immerses viewers in a dramatic 3D video re-creation of reefs as they live, reproduce and struggle to survive in an increasingly challenging environment.

The reef teems with life — 50 species of coral, sea turtles, seaweed, algae and more than 5,000 individual fish — captured in fine detail in what the academy describes as the world’s most accurate digital reef. The film is showing daily in San Francisco until March 2019 and is being licensed to planetariums around the world.

“We wanted to bring corals to life in a way you haven’t seen in other productions,” said Ryan Wyatt, senior director of Morrison Planetarium and science visualization at the academy. “These complex ecosystems demand a highly realistic approach to help people engage with them.”

Plenty of Fish in the Sea

The film posed unprecedented challenges for the museum’s visualization studio, which relied heavily on NVIDIA Quadro GPUs, the same technology used by movie studios to create their dazzling special effects.

Making the reef and its inhabitants look real meant capturing a vast amount of detail that included everything from the reflection of light on the water to the rough texture of the coral to the gaudy colors of the tropical fish. In addition to the thousands of plants and animals, studio artists had to reproduce movement as creatures swam, swayed or floated in the ocean currents.

“We’ve produced shows with photorealistic environments before, but none have been this complex, with this much detail and variety,” said Michael Garza, the museum’s senior planetarium and production engineering manager.

A sea turtle swims above a colorful coral reef in the California Academy of Sciences' GPU-powered 3D experience.

Into the Blue with GPUs

The film’s reefs are 3D reconstructions from more than 100,000 underwater photos shot by researchers and collaborators around the world. The museum’s visualization studio transformed these two-dimensional photos into 3D models, with help from NVIDIA Quadro GP100 and P6000 GPUs. The production team then used GPU-accelerated rendering software to turn the models into a movie.

“Quadro acceleration allowed us to process large swaths of undersea surveys into realistic virtual coral reefs,” Garza said. It also produced a 10x improvement in rendering performance, he said, and it drove new 32-inch, 4K monitors at artists’ workstations. The monitors were critical for artists to make creative choices and iterate more frequently.

The museum also relied on the NVIDIA Quadro Virtual Workstation to manage resources within limited space, easily allocating multiple GPU resources to a single large task or several small tasks, Garza said.

Hope for Coral Reefs

The show unfolds on the planetarium’s 75-foot, 180-degree screen with an awesome display of beautiful multicolored reefs and flashy fish darting across the screen.

But the film isn’t just about beauty.

It’s about the vital role coral reefs play in the world’s ecosystems, why they’re imperiled and what scientists at the California Academy of Sciences and elsewhere are doing to save them. It also aims to inspire viewers to do their part by consuming fewer resources.

“Most people won’t get to visit coral reefs,” said Elizabeth Babcock, dean of education at the academy. “We want to use digital tools to spark people’s imaginations and create an emotional connection to reefs.”

In addition to the planetarium shows in San Francisco and elsewhere, the museum plans to offer an HD version and lesson plans that teachers can use in their classrooms. For more information on how the museum created Expedition Reef, see the video below.

Learn more about NVIDIA GPU-accelerated rendering solutions during industry- and technology-focused sessions at the GPU Technology Conference, March 26-29 in San Jose. Register now.

* The main image for this story pictures a moray eel. It is provided courtesy of the California Academy of Sciences.

The post No Barrier to This Reef: Dazzling Film Brings Coral to Life in GPU-Powered Museum Show appeared first on The Official NVIDIA Blog.

How AI is Reinventing the Design of Planes, Trains and Automobiles

Deep learning is coming to 3D design. Just ask Dassault Systèmes.

The company’s 3DEXPERIENCE platform is used for everything from Tesla cars and Boeing airplanes to Procter & Gamble consumer products. Now, it’s amping up 3D design and simulation with deep learning and GPUs.

“What people have done with deep learning for image and speech recognition, we’re doing to transform how products are designed and experienced,” said Patrick Johnson, vice president for corporate research at the 3D modeling software company.

AI by Design

Traditionally, designers and engineers had to start each new product from scratch.

With NVIDIA GPUs and deep learning, Dassault will soon be able to harness the history of previous designs. The system’s 3Dbots use this knowledge to predict and propose a whole new range of designs in an exploratory innovation space. Romain Perron, R&D web apps and services director for Dassault’s flagship CATIA brand, will illustrate those cognitive augmented design 3Dbots in more detail on Thursday at GTC.

“From there, you just select the ones you want for faster and better design leveraging know-how and past designs,” Johnson said. Engineers can use the company’s software to plan the ideal manufacturing process and see how their design will function in the real world.

Shipmaker Meyerwerft used Dassault Systemes' GPU-powered 3D design software to create this ship, which has 300,000 different parts.

3D Design at GTC

The company is out in force at the GPU Technology Conference, with three sessions about how it’s using deep learning, virtual and augmented reality, and NVIDIA Iray rendering software to advance 3D modeling.

To learn more about how AI computing is transforming industries, watch for replays of these and other talks from the GPU Technology Conference, May 8-11, in Silicon Valley.

The post How AI is Reinventing the Design of Planes, Trains and Automobiles appeared first on The Official NVIDIA Blog.

If You Build It, They Will Come: Multi-User VR Environment Showcased at GTC

Whether you’re using it to play a game, hold a meeting or design a new building while out in the field, VR is pushing the limits of human experience.

An as yet unfulfilled promise of VR is a single system allowing multiple people to collaborate and interact with each other in a shared experience.

VR for four: NVIDIA’s multi-user VR system will be on display at GTC.

At the GPU Technology Conference this week, we’re showcasing a proof of concept, developed by NVIDIA engineers, that aims to do just that.

Using four NVIDIA Quadro P6000 GPUs running four virtual machines on a PC server, we were able to power four HTC Vive Business Edition headsets from a single box. This four-way PC, combined with HTC’s Lighthouse tracking system, enables four people to use VR while sharing the same physical space.

Unlimited Possibilities

Multi-user systems open up opportunities to use VR in everything from amusement parks and arcades to military and first-responder training to manufacturing and design.

The setup minimizes the space, power and cooling required, making the system portable and quick to deploy. This is particularly advantageous for the growing market for location-based VR environments, the customized VR spaces popping up at cinemas, shopping malls and elsewhere.

The system’s compact size brings full-featured VR capabilities into tight or unconventional spaces, like naval ships and mobile command centers, where simulation training can add tremendous value.

We initially developed this system simply to support multi-user VR. But other interesting use cases soon emerged, including a mixed-reality spectator view, where some virtual machines drive head-mounted displays for participants, while others drive virtual cameras for observers.

“The possibilities are endless,” said Tom Kaye, a senior solutions architect at NVIDIA who helped develop the system. “With the addition of remote management and reliability features, such as multiple templates, clone on boot and remote rebuilds, we could see system builders working to create a robust, ready-to-deploy multi-user VR appliance.”

See It at GTC

You can see this four-user VR system in action in the VR Village at GTC.

MonsterVR, a community-driven VR development studio, will demo the system with its multi-VR application for the architecture, engineering and construction industry in HTC’s booth 700.

And CAVRNUS, a VR company offering solutions for collaborative design, engineering, training and education, will show in-the-field training with multi-user VR in booth 934.

“When NVIDIA shared this system with us, we knew it would be an ideal solution for our collaborative VR platform for our most demanding users,” said Anthony Duca, founder and CEO at CAVRNUS. “The feedback and reaction to the multi-user, virtualized system, particularly in the engineering and defense markets, has been tremendous.”

To learn more about how NVIDIA developed this system, join NVIDIA’s Kaye and Fred Devoir for their talk today at 10:30 am Pacific at GTC.

And for partners who are interested in learning how to build systems to support multi-user VR, request the design guide.

(Feature image courtesy of CAVRNUS.)

The post If You Build It, They Will Come: Multi-User VR Environment Showcased at GTC appeared first on The Official NVIDIA Blog.