Medical Startup Scrubs Operating Room Data to Train Better Surgeons

Tweezers carefully poised, the surgeon gingerly attempts to extract the patient’s funny bone. But the hand slips — buzz! — and another round of the classic board game Operation is lost.

It takes a steady hand to win a game of Operation. It takes much more to make a skilled surgeon.

At best, the game, which first went on sale in the 1960s, might identify children with the steady grip and hand-eye coordination of a future surgeon.

Today, NVIDIA GPUs are powering a vastly more sophisticated set of surgery training tools that can walk surgical residents and medical teams step by step through a procedure.

Built by U.K.-based startup Digital Surgery, the mobile app Touch Surgery helps medical professionals learn procedures or prepare for surgical cases with simulations and quizzes.

Now used in more than 150 U.S. residency programs, the app has a reference library of surgical maps and a virtual human patient that trains surgeons to make the right decisions at the right time during a procedure.

Digital Surgery, a member of NVIDIA’s Inception virtual accelerator program, is also developing an operating room tool called GoSurgery. It improves coordination between surgeons and their teams to manage workflows and aid in real-time operating room decisions.

“We know humans aren’t perfect, so we use digital tools to help them improve their capacity,” said Andre Chow, co-founder of Digital Surgery.

No One Asks Who’s the Best Pilot on a Route

When a person needs surgery, the first question they ask is, “Who’s the best surgeon?”

But this mentality doesn’t apply to every industry. Airplane pilots are responsible for the lives of everyone on board — but it’s not typical to think “Who’s the best pilot?” before buying a ticket and stepping onto a flight.

That’s because the airline industry has worked hard to provide a standard level of safety for pilots using tools like autopilot and radars, says Chow. “We believe that should be the case with surgery as well.”

Yet, when Chow and co-founder Jean Nehme were training to become surgeons, they noticed that every surgeon likes to do things slightly differently.

Digital Surgery aims to close disparities in surgery quality around the world by bolstering doctors’ skills with powerful software. This technology gives surgeons an interactive way to rehearse operations digitally and learn best practices across different surgical specialties.

Brain Training for Surgeons

The company’s first product, the Touch Surgery app, has a library of surgical videos and simulations with a virtual human patient. The app hosts more than 200 simulations across 15 surgical specialties including orthopedics, neurosurgery and oral surgery.

Surgical residents and healthcare professionals can use the free app to learn, review or rehearse a procedure. Rendered on NVIDIA Quadro GPUs, the simulations test users’ knowledge about correct operating technique. Chow calls it “brain training for surgeons.”

It’s been validated in more than 15 different research publications as an effective mobile training tool. The Digital Surgery team sees the potential for its applications to serve as training aids in areas of the world lacking safe surgical services.

Using simulation footage from the app’s virtual surgeries and hundreds of thousands of surgical videos as a training database, the company developed its second product, the GoSurgery AI platform.

GoSurgery feeds operating room camera streams into its neural network. The algorithms determine which instruments are in use and what stage of the operation the surgeon has reached. Each team member has a screen displaying guidance based on the neural network’s real-time inferences.
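
Digital Surgery hasn’t published GoSurgery’s internals, but the loop described above (frames in, phase and instrument predictions out, guidance on each team member’s screen) can be sketched in a few lines. The sketch below is a minimal illustration in Python; the model interface and the GUIDANCE table are hypothetical stand-ins, not GoSurgery’s actual API.

```python
import cv2  # OpenCV, for reading a camera stream

GUIDANCE = {
    # Hypothetical mapping from a recognized phase to a team prompt.
    "dissection": "Scrub tech: prepare stapler for the next phase.",
    "stapling": "Circulator: confirm stapler reloads are available.",
}

def run_guidance(model, camera_index=0):
    """Classify each operating room frame and surface phase-based guidance."""
    stream = cv2.VideoCapture(camera_index)
    while stream.isOpened():
        ok, frame = stream.read()
        if not ok:
            break
        # A production system would batch frames and smooth predictions over time.
        phase, instruments = model.predict(frame)  # hypothetical interface
        message = GUIDANCE.get(phase, "")
        # GoSurgery renders a separate view per team member; here we just print.
        print(f"phase={phase} instruments={instruments} -> {message}")
    stream.release()
```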

This operating room platform is powered by NVIDIA embedded technology. It’s currently being used at several sites in the U.K., with plans to expand to the United States as well.

So far, the Digital Surgery team has deployed this solution for eye surgeries and bariatric procedures. They’re also starting to work on colonic surgery and orthopedic operations, among others.


NVIDIA, Adobe to Bring Interactive, Photo-Real Ray Tracing to Millions of Graphic Designers

Oil and canvas. Thunder and lightning. Salt and pepper. Some things just go together — like Adobe Dimension CC and NVIDIA RTX ray tracing, which are poised to revolutionize the work of graphic designers and artists.

Adobe Dimension CC makes it easy for graphic designers to create high-quality, photorealistic 3D images, whether for package design, scene visualizations or abstract art. And Adobe Research is constantly innovating to make it even easier for designers to create faster and better.

The latest find: NVIDIA RTX ray-tracing technology, which promises to make photo-real 3D design interactive and intuitive, with over 10x faster performance for Adobe Dimension on NVIDIA RTX GPUs.

What used to cost tens of thousands of dollars on ultra-high-end systems will be able to run on a desktop with an NVIDIA RTX GPU at a price within reach of millions of graphic designers. Check out a tech preview of this technology at Adobe MAX, in Los Angeles, in NVIDIA booth 717.

Adobe Dimension enables designers to incorporate 3D into their workflows, from packaging design to brand visualization to synthetic photography. Adobe makes 3D accessible by handling the heavy lifting of lighting and compositing with Adobe Sensei machine learning-based features, then producing a final, photorealistic output using the Dimension ray tracer.

“We’re partnering with NVIDIA on RTX because of its significant potential to accelerate our two core pillars – ray tracing and machine learning,” said Ross McKegney, director of Engineering for Adobe Dimension CC. “Early results are very promising. Our prototype Dimension builds running on RTX are able to produce photorealistic renders in near real time.”

Reinventing Creative Workflows with Real-Time Ray Tracing and NVIDIA RTX

Ray tracing is the technique modern movies rely on to produce images that are indistinguishable from those captured by a camera. Think realistic reflections, refractions and shadows.

The easiest way to understand ray tracing is to look around you. The objects you’re seeing are illuminated by beams of light. Now turn that around and follow the path of those beams backwards from your eye to the objects that light interacts with. That’s ray tracing.
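
That backwards-following idea fits in a handful of lines. The toy renderer below is a sketch of the principle, not of how RTX hardware works: it casts one ray per character from the eye into a scene containing a single sphere and marks the hits.

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the sphere, or None on a miss."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c  # direction is unit length, so the 'a' term is 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

eye = (0.0, 0.0, 0.0)
center, radius = (0.0, 0.0, -3.0), 1.0
for y in range(10):
    row = ""
    for x in range(20):
        # Map each character cell to a direction through a virtual image plane.
        dx, dy = (x - 10) / 10.0, (5 - y) / 5.0
        length = math.sqrt(dx * dx + dy * dy + 1.0)
        d = (dx / length, dy / length, -1.0 / length)
        row += "#" if ray_sphere(eye, d, center, radius) else "."
    print(row)
```

A real ray tracer extends this by spawning reflection, refraction and shadow rays recursively at each hit, which is exactly the work RT Cores accelerate.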

Historically, though, computer hardware hasn’t been fast enough to use these 3D rendering techniques in real time. Artists have been limited to working with low-resolution proxies, slow design interaction and long waits to render the final production. But NVIDIA RTX changes the game.

NVIDIA has built ray-tracing acceleration into its Turing GPU architecture with accelerators called RT Cores. These accelerators enable artists to smoothly interact with a full-screen view of their final image. Lighting changes appear in real time, so artists can quickly get just the look they need. Camera changes, including depth of field, happen in real time, so artists can frame shots perfectly, just as they would with a real camera.

Experience NVIDIA RTX Technology and Adobe Dimension Now

See the future of 3D design with NVIDIA RTX ray tracing and Adobe Dimension throughout the Adobe MAX show floor:

  • Adobe Dimension & NVIDIA RTX Tech Preview: Experience the power of interactive ray tracing in person with a hands-on tech preview of the RTX-powered Dimension renderer – NVIDIA booth 717.
  • NVIDIA Holodeck VR Experience with Adobe: Step into VR with NVIDIA Holodeck for a virtual design review of photorealistic 3D assets created in Adobe Dimension CC. First come, first served at booth 212.


NVIDIA RTX GPUs Inspire Creativity at Adobe MAX

You’re a creator with an imagination running at the speed of light. Nothing can stop you. And nothing should — especially not the system that powers your favorite creative apps.

Thousands of the world’s top creators will descend on Adobe MAX in Los Angeles next week to get inspired, learn and network. We’ll be there demonstrating how NVIDIA RTX GPUs revolutionize creativity with the power of real-time photorealistic design, AI-enhanced graphics, and video and image processing. Millions of designers and artists will be able to create amazing content in a completely new way.

Visit us in booth 717 to get hands-on experience with the latest NVIDIA Quadro RTX and GeForce RTX GPUs driving a broad range of creative apps, including video editing for Hollywood pros and online creators alike, high-end visual effects, 3D design for the masses, groundbreaking photo-realistic ray tracing, and AI-augmented creativity.

Our NVIDIA RTX GPUs are powering:

  • Adobe Dimension CC and NVIDIA RTX Ray Tracing
    Tech preview of interactive, photorealistic ray tracing with the NVIDIA RTX GPU-powered Adobe Dimension renderer, delivering up to 10x faster performance.
  • Adobe Project Rush
    Exciting new video editing app from Adobe that offers online creators a major speed boost.
  • Real-Time 8K Video Editing – Tech Preview with REDCINE-X
    Ultra-fast 8K editing based on a collaboration between NVIDIA and RED Digital Cinema.
  • Adobe Premiere Pro CC
    Professional video editing for real-time editing, interactive effects and faster output renders.
  • Adobe After Effects CC
    Up to 10x faster visual effects and animation.
  • Vincent – Turning Sketches into Art with AI
    Transform your art with real-time AI, as if Van Gogh were your design partner. Imagined by NVIDIA partner Cambridge Consultants.

Learn Something New at NVIDIA’s Creative Experts Bar

To see some amazing creative designs and productions, and learn how they were done, step up to the NVIDIA Creative Experts Bar. Engage in casual, one-hour sessions with some of the brightest creators at MAX.

Whether you’re a visual effects pro, graphics artist, professional photographer or 3D designer, there’s something for you in these creative discussions going on all day in NVIDIA booth 717. Plan your visit to the Creative Experts Bar here.

NVIDIA Holodeck VR Experience with Adobe

For something completely different, step into VR with NVIDIA Holodeck for a virtual design review of photorealistic 3D assets created in Adobe Dimension CC. First come, first served at booth 217.

Don’t Go Home Empty Handed

For a chance to win a new NVIDIA GeForce RTX 2080 Ti GPU, visit the NVIDIA booth for a creative experience with the AI-augmented Vincent sketch system. Then share a photo of your work of art on Twitter or Instagram with #NVIDIARTX and #AdobeMAX.

Get another chance to win when you tell us what you would create with more time by using #NVIDIARTX and #AdobeMAX.


NVIDIA Turing Propels VR Toward Full Immersion

Over the last few decades, VR experiences have gone from science fiction to research labs to inside homes and offices. But even today’s best VR experiences have yet to achieve full immersion.

NVIDIA’s new Turing GPUs are poised to take VR a big step closer to that level. Announced at SIGGRAPH last week and Gamescom today, Turing’s combination of real-time ray tracing, AI and new rendering technologies will propel VR to a new level of immersion and realism.

Real-Time Ray Tracing

Turing enables true-to-life visual fidelity through the introduction of RT Cores. These processors are dedicated to accelerating the computation of where rays of light intersect objects in the environment, enabling — for the first time — real-time ray tracing in games and applications.

These optical calculations replicate the way light behaves to create stunningly realistic imagery, and allow VR developers to better simulate real-world environments.

Turing’s RT Cores can also simulate sound, using the NVIDIA VRWorks Audio SDK. Today’s VR experiences provide audio that’s accurate in terms of location, but they can’t meet the computational demands of adequately reflecting an environment’s size, shape and material properties, especially dynamic ones.

NVIDIA Holodeck VRWorks Audio

VRWorks Audio is accelerated by 6x with our RTX platform compared with prior generations. Its ray-traced audio technology creates a physically realistic acoustic image of the virtual environment in real time.

At SIGGRAPH, we demonstrated the integration of VRWorks Audio into NVIDIA Holodeck showing how the technology can create more realistic audio and speed up audio workflows when developing complex virtual environments.

AI for More Realistic VR Environments

Deep learning, a method of GPU-accelerated AI, has the potential to address some of VR’s biggest visual and perceptual challenges. Graphics can be further enhanced, positional and eye tracking can be improved and character animations can be more true to life.

The Turing architecture’s Tensor Cores deliver up to 500 trillion tensor operations per second, accelerating inferencing and enabling the use of AI in advanced rendering techniques to make virtual environments more realistic.

Advanced VR Rendering Technologies

Turing also boasts a range of new rendering techniques that increase performance and visual quality in VR.

Variable Rate Shading (VRS) optimizes rendering by applying more shading horsepower in detailed areas of the scene and throttling back in regions with less perceptible detail. Combined with eye tracking, this enables foveated rendering: reducing the shading rate on the periphery of the scene, where users are less likely to focus.
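
As a rough illustration, with invented thresholds rather than actual VRWorks parameters, an application using eye tracking might pick a coarser shading rate for each screen tile the farther the tile sits from the gaze point:

```python
def shading_rate(tile_center, gaze, width, height):
    """Pick a coarser shading rate the farther a tile is from the gaze point."""
    dx = (tile_center[0] - gaze[0]) / width
    dy = (tile_center[1] - gaze[1]) / height
    ecc = (dx * dx + dy * dy) ** 0.5  # eccentricity, as a screen fraction
    if ecc < 0.15:
        return "1x1"  # fovea: shade every pixel
    if ecc < 0.35:
        return "2x2"  # mid-periphery: one shading sample per 2x2 block
    return "4x4"      # far periphery: one shading sample per 4x4 block

# Example: gaze at the center of a 2160x1200 HMD panel.
print(shading_rate((1000, 580), (1080, 600), 2160, 1200))  # near fovea -> 1x1
print(shading_rate((100, 100), (1080, 600), 2160, 1200))   # periphery -> 4x4
```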

Multi-View Rendering enables next-gen headsets that offer ultra-wide fields of view and canted displays, so users see only the virtual world without a bezel. A next-generation version of Single Pass Stereo, Multi-View Rendering doubles to four the number of projection views for a single rendering pass. And all four are now position-independent and able to shift along any axis. By rendering four projection views, it can accelerate canted (non-coplanar) head-mounted displays with extremely wide fields of view.

Turing’s Multi-View Rendering can accelerate geometry processing for up to four views.

VR Connectivity Made Easy

Turing is NVIDIA’s first GPU designed with hardware support for USB Type-C and VirtualLink*, a new open industry standard that powers next-generation headsets through a single, lightweight USB-C cable.

Today’s VR headsets can be complex to set up, with multiple, bulky cables. VirtualLink simplifies the VR setup process by providing power, display and data via one cable, while packing plenty of bandwidth to meet the demands of future headsets. A single connector also brings VR to smaller devices, such as thin-and-light notebooks, that provide only a single, small footprint USB-C connector.


Availability

VRWorks Variable Rate Shading, Multi-View Rendering and Audio SDKs will be available to developers through an update to the VRWorks SDK in September.

NVIDIA Turing-based Quadro RTX and GeForce RTX GPUs will be available starting this fall on nvidia.com and from leading manufacturers and add-in card partners.

* In preparation for the emerging VirtualLink standard, Turing GPUs have implemented hardware support according to the “VirtualLink Advance Overview”. To learn more about VirtualLink, see www.virtuallink.org.


Alluring Turing: Get Up Close with 7 Keynote-worthy Turing Demos

RT Cores. Tensor Cores. Advanced shading technologies.

If you’ve been following the news about our new Turing architecture — launched this week at the SIGGRAPH professional graphics conference — you’re probably wondering what all this technology can do for you.

We invite you to step into our booth at SIGGRAPH — number 801 on the main floor — to see for yourself. Here’s what you’ll find:

  • Photorealistic, interactive car rendering — Spoiler: this demo of a Porsche prototype looks real, but is actually rendered. To prove it, you’ll be able to adjust the lighting and move the car around. It’s all built in Unreal Engine, with the Microsoft DXR API used to access the NVIDIA RTX developer platform. It runs on two Quadro RTX GPUs.
  • Real-time ray tracing on a single GPU — This Star Wars-themed demo stunned when it made its debut earlier this year running on a $70,000 DGX Station powered by four Volta GPUs. Now you can see the same interactive, real-time ray tracing using Unreal Engine running on our NVIDIA RTX developer platform on a single Turing Quadro GPU.
  • Advanced rendering for games & film (dancing robots) — This one is built on Unreal as well, and shows how real-time ray tracing can bring complex, action-packed scenes to life. Powered by a single Quadro RTX 6000, it shows real-time ray-traced effects such as global illumination, shadows, ambient occlusion and reflections.
  • Advanced rendering for games & film (Project Sol) — An interaction between a man and his robotic assistants takes a surprising turn. Powered by the Quadro RTX 6000, this demo shows off production quality rendering and cinematic frame rates, enabling users to interact with scene elements in real time.
  • Cornell Box — Turn to this time-tested graphics teaching tool to see how Turing uses ray tracing to deliver complex effects — ranging from diffuse reflection to refraction to caustics to global illumination — with stunning photorealism.
  • Ray-traced global illumination — This live, photorealistic demo is set in the lobby of the Rosewood Bangkok Hotel, and shows the effects of light switching between raster and ray-traced materials. You’ll be able to make changes to the scene, and see the effects in real time on this demo powered by a pair of Quadro RTX 6000 GPUs.
  • New Autodesk Arnold with GPU acceleration — Featuring a scene from Avengers: Infinity War courtesy of Cinesite, Autodesk and Marvel Studios, this demo lets you see the benefits of Quadro RTX GPUs for both content creation and final-frame rendering for feature film.

Of course, this isn’t all you’ll find in our booth.

In addition to seeing demos from NVIDIA CEO Jensen Huang’s keynote Monday up close, you’ll be able to see a technology demo of our new NGX software development kit — featuring in-painting, super slo-mo and up-resing; a new version of our Nsight developer tools; AI-powered rendering enhancements, including deep learning anti-aliasing; and a simulation based on Palm4u, a modified version of PALM for urban environments, examining how much solar radiation urban surfaces receive, as well as atmospheric and building heat emissions, during summer in Berlin.

So, if you’re at SIGGRAPH, stop by our booth. We’ll be here Tuesday and Wednesday from 9:30 am to 6 pm, and Thursday from 9:30 am to 3:30 pm.


Hundreds of Trillions of Pixels: NVIDIA and RED Digital Cinema Solve 8K Bottleneck

For discerning households, 4K video resolution is becoming the new gold standard. But in professional circles, the real glitter is on 8K.

This ultra-high-definition standard provides stunning clarity on large screens, where pixels aren’t visible even from inches away.

But it takes massive computing muscle to power that capability — and that’s a task our revolutionary Turing GPU architecture and new Quadro RTX GPUs are perfect for.

100 Trillion Pixels and Counting

State-of-the-art cameras can capture 8K video (which contains four times the pixels of 4K), but those pixels can create a massive computational logjam when editing footage.

Consider: an 8K camera captures at 8192×4320, or more than 35 million pixels per frame. So, just five minutes of that video at 24 frames per second is more than 250 billion pixels. Figure that a typical shoot involves hours of content, and you get past 100 trillion pretty quickly.
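
Here’s that arithmetic spelled out, as a quick Python check:

```python
width, height = 8192, 4320            # one 8K frame
pixels_per_frame = width * height      # 35,389,440 pixels
fps, seconds = 24, 5 * 60              # five minutes at 24 frames per second
total = pixels_per_frame * fps * seconds
print(f"{pixels_per_frame:,} pixels per frame")
print(f"{total:,} pixels in five minutes")
# -> 35,389,440 pixels per frame
# -> 254,803,968,000 pixels in five minutes (over 250 billion)
```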

To handle that oceanic volume, post-production professionals rely on powerful, expensive workstations, high-end custom hardware and time-consuming preprocessing.

But that’s all about to change with Turing.

Working with leading camera maker RED Digital Cinema, we’ve used Turing to make it possible for video editors and color graders to work with 8K footage in full resolution — in real time — reaching greater than 24 frames per second using just a single-processor PC with one Quadro RTX GPU.

And at less than half the price of CPU-laden workstations, this solution puts 8K within reach for a broad universe of post-production professionals using RED cameras whose content is viewed by millions.

“RED is passionate about getting high-performance tools in the hands of as many content creators as possible,” said Jarred Land, president of RED Digital Cinema. “Our work with NVIDIA to massively accelerate decode times has made working with 8K files in real time a reality for all.”

Pixel Perfect: The Perks of 8K

Though the markets for 8K displays and TVs are nascent, professionals can benefit by producing in 8K and distributing in 4K. The extra pixels from an 8K camera give the cinematographer more creative choices in post-production.

For example, there’s more flexibility for image stabilization, or panning and zooming to reframe a shot without losing image quality in the final delivery format. For visual effects, high resolution can provide more detail for tracking or keying. Downsampling high-resolution video can help reduce noise as well as maintain a high level of quality.

Whether the end result is 4K, 8K or somewhere in between, each tool in the production pipeline must be ready to handle 8K. It’s one thing to have a camera that captures the artists’ vision in seamless 8K footage, frame after frame. It’s another thing to translate that smoothness into the post-processing and editing processes.

When the video industry transitioned from HD to 4K, GPU adoption gave professionals the computing power to handle the higher resolution. Now, too, GPUs are doing the heavy lifting for 8K, freeing CPUs to do other work.

Unleashing 8K’s RAW Potential

Video applications like Adobe Premiere Pro, Blackmagic’s DaVinci Resolve and Autodesk Flame are already capable of working with 8K footage captured from cameras like RED’s. This includes footage stored in its REDCODE RAW file format, which gives post-production professionals more creative control but greatly increases the computational demand to process it.

Depending on the processing power of the computer or workstation at hand, though, videographers and editors end up viewing their 8K files at significantly reduced resolution in the software application. Attempting to play back the footage at full resolution can cause the application to drop frames or stop playback while buffering — so the editors are forced to choose between smooth playback and working in full resolution.

Alternatively, they can preprocess their footage into a more manageable format, but that takes time and disk space.

By offloading all of the computationally intensive parts of the REDCODE processing to a Turing GPU, NVIDIA and RED are giving post-production professionals access to 8K footage at full resolution in real time. And it’s not just for Turing — this acceleration will also substantially increase REDCODE processing performance on other NVIDIA GPUs.

Artists working with 8K footage will no longer have to disrupt their creative flow, waiting for their editing tools to catch up.

New capabilities will also be possible with the NVIDIA RTX Tensor Cores and RT Cores available with Turing. Editors will gain from new functionality like AI-enabled upscaling, which will let them intermix archival footage or zoom in beyond 8K resolution with the best possible results. And those incorporating high-resolution 3D graphics and titling will get more time back to focus on the creative parts of their workflow.


Temple Run: CyArk Taps GPUs to Capture Visual Records of World Heritage Sites

When the Taliban blew up two 1,700-year-old statues of the Buddha in Bamiyan, Afghanistan, in 2001, preservationists around the world were shocked, perhaps none more so than Ben Kacyra.

Before and after view of a Bamiyan Buddha.

A software developer who’d just sold his industrial mapping company, Kacyra wanted to do something good with his technology skills. The Taliban’s appalling actions gave him a cause on which to focus.

The result was CyArk, a nonprofit that has spent the past 15 years capturing high-resolution imagery of World Heritage Sites.

“There was no record, no videos, no detailed photos,” said John Ristevski, CEO of CyArk, who helped Kacyra with the organization’s initial projects. “Ben was aghast and thought we needed 3D records of these things.”

Kacyra and his wife Barbara, who together founded CyArk, wanted to ensure that if other sites met the same fate as those Afghan monuments, there would at least be a visual record.

Today, CyArk not only has detailed photogrammetric data on 200 sites across all seven continents, it has started to deliver on the second part of its vision: opening up that data to the public in the hope that developers will create 3D virtual experiences.

Taste of What’s to Come

To jumpstart things, CyArk, in conjunction with Google — which has provided cloud computing resources, funding and technical support — has released a virtual tour of the ancient city of Bagan, in central Myanmar, that shows off what’s possible.

The tour lets visitors digitally walk through temples, look at things from different angles and zoom in for closer looks, providing an amazingly detailed substitute for those who might otherwise have to travel to the other side of the Earth to see it.

The potential for using the approach to provide education about the world’s ancient historical sites, enable preservationists to study sites more readily, and allow tourists to visit places they could never travel to is seemingly limitless. As such, Ristevski hopes the Bagan tour is just the tip of an iceberg far larger than anything CyArk could build on its own.

“We’re probably more interested in what other people can do with the data,” said Ristevski. “Through the open heritage program, anyone can take the data and build educational experiences. And they can get a non-commercial license to use the data, too.”

In addition to the Bagan tour, CyArk recently released MasterWorks VR, a free app for the Oculus Rift VR headset. It lets people explore multiple heritage sites on three continents, jumping from one location to another.

A Premium on Processing

NVIDIA GPUs have played a critical role in processing the hours and hours of photogrammetric data CyArk collects on each site, as well as performing 3D reconstruction. Working on powerful workstations equipped with NVIDIA Quadro P6000 cards, CyArk technicians convert the data to 3D imagery using RealityCapture, a software package from Capturing Reality.

Ristevski said the P6000 GPUs enable CyArk to crunch its data many times faster than would be possible on CPUs, and he’s also seen significant speed gains compared with previous generations of GPUs.

Every pixel counts in CyArk’s 3D reconstructions of World Heritage Sites.

More important than speed, Ristevski said, is the improved ability to present detailed textures. CyArk has seen resolution of those textures shrink from centimeters down to fractions of millimeters, which is a huge consideration for the heritage community.

“Every square inch of surface is unique,” he said. “We can’t make up textures or replicate textures. We have to preserve every little pixel we capture to the highest degree.”

For Now, More Data

While Ristevski sees a lot of potential for deep learning to help CyArk as it gets more into classification of its data, the company hasn’t delved far into that technology to date. Once it can move on from its current focus on 3D reconstruction, deep learning figures to play a bigger role.

In the meantime, CyArk plans to continue collecting data on more World Heritage Sites. Among those it’s currently documenting, or planning to capture soon, are ancient temples in Vietnam and the Jefferson Memorial in Washington, D.C.

As CyArk collects more data, it will also continue to make that data publicly available, as well as packaging it in future VR applications and generating its own VR experiences.

And Ristevski maintains that CyArk has no goals to monetize its data, and will instead remain a nonprofit for the foreseeable future: “We have no intention of forming a business model.”


What Is a Virtual GPU?

Virtualization technology for applications and desktops has been around for a long time, but it hasn’t always lived up to the hype surrounding it. Its biggest failing: a poor user experience.

And the reason is simple. When virtualization first came on the scene, GPUs — which are specialists in parallel computing — weren’t part of the mix. The virtual GPU, aka vGPU, has changed that.

On a traditional physical computing device like a workstation, PC or laptop, a GPU typically performs all the capture, encode and rendering to power complex tasks, such as 3D apps and video. With early virtualization, all of that was handled by the CPU in the data center host. While it was functional for some basic applications, CPU-based virtualization never met the native experience and performance levels that most users needed.

That changed a few years ago when NVIDIA released its virtual GPU. Virtualizing a data center GPU allowed it to be shared across multiple virtual machines. This greatly improved performance for applications and desktops, and allowed organizations to build virtual desktop infrastructures (or VDIs) that cost-effectively scaled this performance across their businesses.

What a GPU Does

A graphics processing unit has thousands of computing cores to efficiently process workloads in parallel. Think 3D apps, video and image rendering. These are all massively parallel tasks.

The GPU’s ability to handle parallel tasks makes it expert at accelerating computer-aided applications. Engineers rely on them for heavy-duty stuff like computer-aided engineering (CAE), computer-aided design (CAD) and computer-aided manufacturing (CAM) applications. But there are plenty of other consumer and enterprise applications.

Of course, any processor can render graphics. Four, eight or 16 cores could do the job, eventually. But with the thousands of specialized cores on a GPU, there’s no long wait. Applications simply run faster, interactively — the way they’re supposed to run.

Virtual GPUs Explained

What makes a virtual GPU work is software.

NVIDIA vGPU software delivers graphics-rich virtual desktops and workstations accelerated by NVIDIA Tesla accelerators, the world’s most powerful data center GPUs.

This software transforms a physical GPU installed on a server into virtual GPUs that can be shared across multiple virtual machines. It’s no longer a one-to-one relationship from GPU to user, but one-to-many.

NVIDIA vGPU software also includes a graphics driver for every virtual machine, sometimes referred to as server-side graphics. This enables every virtual machine to get the benefits of a GPU, just as a physical desktop does. And because work that was typically done by the CPU is offloaded to the GPU, users get a much better experience and more users can be supported.
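
A toy model of that one-to-many relationship (purely illustrative, not NVIDIA’s implementation) might carve a physical GPU’s framebuffer into equal profiles and hand one to each virtual machine:

```python
from dataclasses import dataclass

@dataclass
class VirtualGPU:
    vm_name: str
    framebuffer_gb: int

def provision(physical_fb_gb, profile_gb, vm_names):
    """Assign equal framebuffer slices of one physical GPU to virtual machines."""
    capacity = physical_fb_gb // profile_gb  # how many vGPUs this GPU can host
    if len(vm_names) > capacity:
        raise ValueError(f"GPU fits only {capacity} vGPUs of {profile_gb} GB each")
    return [VirtualGPU(name, profile_gb) for name in vm_names]

# Example: a 24 GB data center GPU shared as 4 GB profiles across six VMs.
for vgpu in provision(24, 4, [f"vm{i}" for i in range(6)]):
    print(vgpu)
```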

NVIDIA’s virtual GPU offerings include three products designed to meet the challenges of the digital workplace: NVIDIA GRID Virtual PC (GRID vPC) and NVIDIA GRID Virtual Apps (GRID vApps) for knowledge workers and NVIDIA Quadro Virtual Data Center Workstation (Quadro vDWS) for designers, engineers and architects.

NVIDIA GRID Provides an Amazing Experience for Every User

Business users’ graphics requirements are rising. Windows 10 requires up to 32 percent more CPU resources than Windows 7, according to a whitepaper from Lakeside Software, Inc. And updated versions of basic office productivity apps such as Chrome, Skype and Microsoft Office demand a much higher level of computer graphics than before.

This trend toward digitally sophisticated, graphics-intensive workplaces will only accelerate. With CPU-only virtualized environments unable to support the needs of knowledge workers, GPU-accelerated performance with NVIDIA GRID has become a fundamental requirement of virtualized digital workplaces and enterprises using Windows 10.

NVIDIA Quadro vDWS Delivers Secure, Workstation-Class Performance on Any Device

Every day, tens of millions of creative and technical professionals need to access the most demanding applications from any device, work from anywhere and interact with large datasets — all while keeping their information secure.

This might be a cardiologist providing a remote consultation and accessing high-quality images while at a conference; or a government agency delivering simulated, immersive training experiences; or an R&D engineer working on a new car design who needs to ensure intellectual property and proprietary designs remain secure in the data center while collaborating with others in a client’s office.

For people with sophisticated, graphics-intense needs like these, Quadro vDWS offers the most powerful virtual workstation from the data center or cloud to any device, anywhere.

How vGPUs Simplify IT Administration

Working with VDI, IT administrators can manage resources centrally instead of supporting individual workstations at every worker location. Plus, the number of users can be scaled up and down based on project and application needs.

NVIDIA virtual GPU monitoring provides IT departments with tools and insights so they can spend less time troubleshooting and more time focusing on strategic projects. IT admins can gain an understanding of their infrastructure down to the application level, enabling them to localize a problem before it starts. This can reduce the number of tickets and escalations, and reduce the time it takes to resolve issues.

With VDI, IT can also better understand the requirements of their users and adjust the allocation of resources. This saves operational costs while enabling a better user experience. In addition, live migration features of NVIDIA GPU-accelerated virtual machines enable IT to perform critical services like workload leveling, infrastructure resilience and server software upgrades without any virtual machine downtime. This lets IT truly deliver quality user experiences with high availability.

How Virtual GPUs Help Businesses

These are a few examples of how organizations that have deployed NVIDIA vGPU offerings have benefited:

  • CannonDesign (Architecture, engineering and construction). CannonDesign provided virtualization to all its users, from designers and engineers using Revit and other high-end apps to knowledge workers using office productivity apps. The company achieved higher user density at 2x the performance, with better security. Its IT team can now provision a new user with a virtual workstation in 10 minutes.
  • Cornerstone Home Lending (Financial services). Cornerstone Home Lending streamlined its desktop deployment across 100 branches and 1,000 users into a single, virtualized environment. The company achieved lower latency and high performance on modern business applications like video editing and playback.
  • DigitalGlobe (Satellite imagery). DigitalGlobe enabled its developers and office staff to use graphics-intensive applications on any device with a native PC-like experience. The move to NVIDIA Tesla M10 GPU accelerators and NVIDIA GRID software delivered huge cost savings with a 2x improvement in user density, and streamlined its IT operation with a 500:1 user-to-IT ratio.
  • Honda (Automotive). Honda used virtual GPU technology to enable better scalability and lower investment costs. The company achieved faster performance and lower latency on graphics-heavy applications like 3D CAD, even on thin clients. Honda and Acura vehicles are now being designed using VDI with NVIDIA vGPU software.
  • Seyfarth Shaw (Legal). To provide its attorneys with a rich web browsing experience on any device, Seyfarth Shaw upgraded to Windows 10 VDI with Tesla M10 GPUs and NVIDIA GRID vPC. Just loading its intranet, which once took 8-10 seconds, now only takes 2-3. Scrolling through large PDFs is a breeze, and user complaints to IT nosedived.
  • Holstebro Kommune (Government). Holstebro Kommune achieved up to a 70 percent improvement in CPU utilization with NVIDIA GRID. Modern applications and web browsers with rich multimedia content, video conferencing, video editing and playback can be used on any device, with performance that rivals a physical desktop.
  • UMass Lowell (Education). The University of Massachusetts Lowell provides a workstation-caliber experience to its students, who can use apps like SOLIDWORKS, the full Autodesk suite, Moldflow and Mastercam on any device. The university operates its VDI environment at one-fifth the cost of a workstation seat with equivalent performance. Through NVIDIA virtualization software updates alone, UMass Lowell achieved a 20-30 percent performance improvement.

Learn more about NVIDIA vGPU solutions by following @NVIDIAVirt.


To Boldly Go: World’s Biggest Planetarium Achieves Jaw-Dropping 10K Resolution

We can’t all be starship captains. But visitors to Planetarium No. 1 in St. Petersburg, Russia, can experience the universe with a level of clarity, detail and interactivity that Captain Kirk himself would envy.

Planetarium No. 1 inside its 19th century natural gas storage building.

Housed in a 19th century natural gas storage building, the planetarium’s exterior is about the only thing that isn’t on the cutting edge of modernity. Inside is the world’s largest planetarium, with a half acre (2,000 square meters) of projection area within a 37-meter diameter dome.

It’s the planet’s only large-size planetarium with a dome that partially touches the floor. This expansive viewing angle makes it possible for visitors to take photos of themselves with space in the background.

And thanks to NVIDIA Quadro graphics, it’s also the world’s highest resolution planetarium, able to display interactive images of space in a whopping 10K resolution — more than 2.5x the detail level of conventional digital cinema screens.

A few months after its official opening in November, Planetarium No. 1 flipped the switch on its record-breaking projection system. It uses NVIDIA Quadro P6000 GPUs, which have become the de facto industry standard for building high-res, multiple-projector systems. Each Quadro P6000 has four outputs creating 4K images, which are then synchronized across 40 high-resolution projectors from one server.

Each projector is responsible for a section of the overall image, and the sections must be blended together seamlessly in a process known as image stitching.
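
Conceptually, the stitching relies on edge blending: where two projectors overlap, each fades along a complementary ramp so the summed brightness stays constant. The sketch below illustrates the idea with made-up section and overlap sizes, not the planetarium’s calibration data.

```python
def blend_weight(x, section_start, section_end, overlap):
    """Per-pixel weight for one projector's section along one axis."""
    if x < section_start or x > section_end:
        return 0.0
    if x < section_start + overlap:   # fade in across the left overlap
        return (x - section_start) / overlap
    if x > section_end - overlap:     # fade out across the right overlap
        return (section_end - x) / overlap
    return 1.0

# Two adjacent sections overlapping by 200 pixels: the weights sum to 1.0,
# so brightness stays even across the seam.
a = blend_weight(4900, 0, 5000, 200)     # right edge of projector A
b = blend_weight(4900, 4800, 9800, 200)  # left edge of projector B
print(a, b, a + b)                       # 0.5 0.5 1.0
```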

Using NVIDIA Quadro P6000 GPUs, Planetarium No. 1 boasts a record-breaking projection system.

“Creating such a large and detailed projection was an incredible technical challenge,” said Evgeny Gudov, director at Planetarium No. 1. “Using the NVIDIA Quadro platform was the only way to achieve it.”

The previous record holder, for both image stitching and dome size, is the 35-meter planetarium at the Nagoya Science Museum in Japan, which combines 24 projectors.

Projection of NVIDIA’s logo.

Command the Stars

Visitors — up to 5,000 of them every day — can control the starry sky above them using multi-touch controllers, enabling them to pilot through space. When it’s not roaming the galaxy, Planetarium No. 1 hosts 360-degree broadcasts of concerts and sporting events.

It’s also a resource for scientific and educational projects, enabling star-gazers to study the skies above St. Petersburg even during overcast conditions, and despite urban light pollution.

An opera performance at Planetarium No. 1.

“Because we’re projecting onto a dome, we need to use 3D mapping techniques to make the images look seamless,” said Gudov. “And with so many visitors, the reliability of the technology was also vital.”

Planetarium No. 1’s most popular offering is a specially created 90-minute show that takes visitors from the birth of the universe right through to the space age.


More Power, Less Tower: AI May Make Aircraft Control Towers Obsolete

Airport control towers are an emblem of the aviation industry. A Canadian company wants to use its technology to make them a relic of the past.

Airport buffs may mourn the change. But Ontario-based Searidge Technologies believes its reasoning is, um, well-grounded.

It believes AI-powered video systems can better watch runways, taxiways and gate areas. By “seeing” airport operations through as many as 200 cameras, there’s no need for the sightline that towers give air traffic controllers.

That doesn’t mean air traffic controllers are going away. The alternative Searidge proposes is a new concept made possible by remote towers. It’s not an easy idea to swallow for an industry that’s been reluctant to embrace change, and is sensitive to any perception safety is being compromised.

But the benefits are hard to deny, including reduced taxi and wait times, the ability to handle 15-30 percent more aircraft per hour and fewer tarmac incidents.

“The industry is adapting, and often now puts air traffic controllers in regular buildings,” said Chris Thurow, head of research and development for Searidge. “It gives them a better view than they see out the tower.”

View of an airport from a remote tower using Searidge technology.

Originally a Radar Alternative

At first, Searidge focused on providing cheaper alternatives to expensive radar systems for tracking and identifying objects on airport runways and taxiways. The company’s earliest products used traditional computer vision algorithms that analyzed video feeds on CPUs. They met the demands on the system at the time, but that was more than a decade ago.

Since then, the resolution of video and need for real-time intelligence have both grown fast. CPUs can’t keep up with these resource-intensive features.

“Using GPU technology, we can offer this at a better price and with a significantly lower number of servers,” he said.

Searidge shifted to GPUs about two years ago. It also brought deep learning tools such as NVIDIA’s CUDA libraries, the TensorRT deep learning inference optimizer and the Caffe deep learning framework into the mix.

Then, as airports began to ask not only for coverage of runways and taxiways, but also tarmacs and gate areas, Searidge expanded the abilities of its technology.

The company started working on more advanced AI that could accommodate a wider range of business rules. This enabled it to detect a greater assortment of objects. It could even deduce when such objects might cause unexpected delays.

“We are still trying to find the limits of the technology,” Thurow said.

A Searidge Technologies control workstation.

Trained with Pooled Airport Data

Searidge has been training its deep learning network on workstations running NVIDIA Quadro P6000 GPUs. The system constantly collects imagery from the airports it serves to expand its training base. Training typically takes five to seven days, so the company has recently begun training on the GPU-powered Google Cloud to speed the process.

The company deploys its technology on workstations running Quadro P6000 GPUs to do positioning of targets, classification and stitching of images in real time for 20 HD cameras. Once at a new airport, it annotates 24 hours of that facility’s normal operations and combines this with customer data from about three dozen airports in 20 countries — so its algorithms are always improving.

Searidge’s AI innovations are built on top of its “remote tower” platform. New control towers are no longer being built or renovated, Thurow said. Instead, airports are moving air traffic control to ground facilities. They’re even considering off-site locations. With AI added, remote towers offer high levels of situational awareness and air traffic controller support.

In some cases, he said, smaller airports are considering joining forces, allowing a single remote tower to manage more than one facility.

The European Union’s first certified medium-size, multi-runway remote tower recently opened in Budapest, Hungary, using Searidge’s technology. All tower controllers have been trained on the system, which is initially being used for contingency operations, live training and as a backup system. By 2020, HungaroControl aims to operate a full-time remote tower at Budapest.

Eventually, Thurow believes further AI innovation will lead to a more fully functioning “AI assistant.” The assistant could help air traffic controllers by picking up things humans might miss, predicting situations and recognizing patterns.

“I expect AI assistants to come into play in the next five to ten years,” he said.
