From Scientific Analysis to Artistic Renderings, NVIDIA Omniverse Accelerates HPC Visualization with New ParaView Connector

Whether helping the world understand our most immediate threats, like COVID-19, or seeing the future of landing humans on Mars, researchers are increasingly leaning on scientific visualization to analyze, understand and extract scientific insights.

With large-scale simulations generating tens or even hundreds of terabytes of data, and with team members dispersed around the globe, researchers need tools that can both enhance these visualizations and help them work simultaneously across different high performance computing systems.

NVIDIA Omniverse is a real-time collaboration platform that lets users share 2D and 3D simulation data in Universal Scene Description (USD) format from their preferred content creation and visualization applications. Global teams can use Omniverse to view, interact with and update the same dataset with a live connection, making collaboration truly interactive.

Omniverse ParaView Connector

The platform has expanded to address the scientific visualization community and now includes a connector to ParaView, one of the world’s most popular scientific visualization applications. Researchers use ParaView on their local workstations or on HPC systems to analyze large datasets for a variety of domains, including astrophysics, climate and weather, fluid dynamics and structural analysis.

With the availability of the Omniverse ParaView Connector, announced at GTC21, researchers can boost their productivity and speed their discoveries. Large datasets no longer need to be downloaded and exchanged, and colleagues can get instantaneous feedback as Omniverse users can work in the same workspace in the cloud.

The NVIDIA Omniverse pipeline

Users can upload their USD format data to the Omniverse Nucleus DB from various application connectors, including the ParaView Connector. The clients then connect to the Omniverse Kit and take advantage of:

  • Photorealistic visuals – Users can leverage a variety of core NVIDIA technologies such as real-time ray tracing, photorealistic materials, depth of field, and advanced lighting and shading through the Omniverse platform’s components such as Omniverse RTX Renderer. This enables researchers to better visualize and understand the results of their simulations leading to deeper insights.
  • Access to high-end visualization tools – Omniverse users can open and interact with USD files through a variety of popular applications like SideFX Houdini, Autodesk Maya and NVIDIA IndeX. See documentation on how to work with various applications in Omniverse to maximize analysis.
  • Interactivity at scale – Analyzing part of a dataset at a time through batched renderings is time-consuming. And traditional applications are too slow to render features like ray tracing, soft shadows and depth of field in real time, which are required for a fast and uninterrupted analysis. Now, users can intuitively interact with entire datasets in their original resolution at high frame rates for better and faster discoveries.

NVIDIA IndeX provides interactive visualization for large-scale volumetric data, allowing users to zoom in on the smallest details for any timestep in real time. With IndeX soon coming to Omniverse, users will be able to take advantage of both technologies for better and faster scientific analysis. This GTC session will go over what researchers can unlock when IndeX connects to Omniverse.

Visualization of the Mars lander using NVIDIA IndeX in NVIDIA Omniverse. Simulation data courtesy of NASA.
  • Real-time collaboration – Omniverse simplifies workflows by eliminating the need to download data on different systems. It also increases productivity by allowing researchers on different systems to visualize, analyze and modify the same data at the same time.
  • Publish cinematic visuals – Outreach is an essential part of scientific publications. With high-end rendering tools on the Omniverse platform, researchers and artists can interact in real time to transform their work into cinematic visuals that are easy for wide audiences to understand.
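The Universal Scene Description interchange at the heart of this pipeline is human-readable in its ASCII form, which makes it easy to see what connectors exchange through Nucleus. Below is a minimal, illustrative sketch that writes a tiny scene by hand (the file and prim names are hypothetical; real connectors use the USD libraries rather than string templates):

```python
# Write a minimal USD scene in its ASCII (.usda) form, illustrating the
# interchange format Omniverse connectors exchange via Nucleus.
# File name and prim names here are hypothetical examples.
usda_scene = """#usda 1.0
(
    defaultPrim = "World"
)

def Xform "World"
{
    def Sphere "DataPoint"
    {
        double radius = 0.5
        color3f[] primvars:displayColor = [(0.1, 0.6, 0.9)]
    }
}
"""

with open("minimal_scene.usda", "w") as f:
    f.write(usda_scene)
```

Because the format is plain text, a scene like this can be inspected, diffed and versioned like source code, which is part of what makes USD a practical collaboration medium.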

“Traditionally, scientists generate visualizations that are useful for data analysis, but are not always aesthetic and straightforward to understand by a broader audience,” said Brad Carvey, an Emmy and Addy Award-winning visualization research engineer at Sandia National Labs. “To generate a range of visualizations, I use ParaView, Houdini FX, Substance Painter, Photoshop and other applications. Omniverse allows me to use all of these tools, interactively, to create what I call ‘impactful visualizations.’”

Learn More from Omniverse Experts

Attend the following GTC sessions to dive deeper into the features and benefits of Omniverse and the ParaView Connector:

Get Started Today

The Omniverse ParaView Connector is coming soon to Omniverse. Download and get started with Omniverse open beta here.

The post From Scientific Analysis to Artistic Renderings, NVIDIA Omniverse Accelerates HPC Visualization with New ParaView Connector appeared first on The Official NVIDIA Blog.

Secure AI Data Centers at Scale: Next-Gen DGX SuperPOD Opens Era of Cloud-Native Supercomputing

As businesses extend the power of AI and data science to every developer, IT needs to deliver seamless, scalable access to supercomputing with cloud-like simplicity and security.

At GTC21, we introduced the latest NVIDIA DGX SuperPOD, which gives business, IT and their users a platform for securing and scaling AI across the enterprise, with the necessary software to manage it as well as a white-glove services experience to help operationalize it.

Solving AI Challenges of Every Size, at Massive Scale

Since its introduction, DGX SuperPOD has enabled enterprises to scale their development on infrastructure that can tackle problems of a size and complexity that were previously unsolvable in a reasonable amount of time. It’s AI infrastructure built and managed the way NVIDIA does its own.

As AI gets infused into almost every aspect of modern business, the need to deliver almost limitless access to computational resources powering development has been scaling exponentially. This escalation in demand is exemplified by business-critical applications like natural language processing, recommender systems and clinical research.

Organizations often tap into the power of DGX SuperPOD in two ways. Some use it to solve huge, monolithic problems such as conversational AI, where the computational power of an entire DGX SuperPOD is brought to bear to accelerate the training of complex natural language processing models.

Others use DGX SuperPOD to service an entire company, providing multiple teams access to the system to support fluctuating needs across a wide variety of projects. In this mode, enterprise IT often acts as a service provider, managing this AI infrastructure as a service for multiple users (perhaps even adversarial ones) who need and expect complete isolation of their work and data from one another.

DGX SuperPOD with BlueField DPU

Increasingly, businesses need to bring the world of high-performance AI supercomputing into an operational mode where many developers can be assured their work is secure and isolated, just as it is in the cloud, and where IT can manage the environment much like a private cloud, delivering right-sized resources to each job in a secure, multi-tenant environment.

This is called cloud-native supercomputing and it’s enabled by NVIDIA BlueField-2 DPUs, which bring accelerated, software-defined data center networking, storage, security and management services to AI infrastructure.

With a data processing unit optimized for enterprise deployment and 200 Gbps network connectivity, enterprises gain state-of-the-art, accelerated, fully programmable networking that implements zero trust security to protect against breaches, and isolate users and data, with bare-metal performance.

Every DGX SuperPOD now has this capability with the integration of two NVIDIA BlueField-2 DPUs in each DGX A100 node within it. IT administrators can use the offload, accelerate and isolate capabilities of NVIDIA BlueField DPUs to implement secure multi-tenancy for shared AI infrastructure without impacting the AI performance of the DGX SuperPOD.

Infrastructure Management with Base Command Manager

Every week, NVIDIA manages thousands of AI workloads executed on our internal DGX SATURNV infrastructure, which includes over 2,000 DGX systems. To date, we’ve run over 1.2 million jobs on it supporting over 2,500 developers across more than 200 teams. We’ve also been developing state-of-the-art infrastructure management software that ensures every NVIDIA developer is fully productive as they perform their research and develop our autonomous systems technology, robotics, simulations and more.

The software supports all this work, simplifies and streamlines management, and lets our IT team monitor health, utilization, performance and more. We’re adding this same software, called NVIDIA Base Command Manager, to DGX SuperPOD so businesses can run their environments the way we do. We’ll continuously improve Base Command Manager, delivering the latest innovations to customers automatically.

White-Glove Services

Deploying AI infrastructure is more than just installing servers and storage in data center racks. When a business decides to scale AI, they need a hand-in-glove experience that guides them from design to deployment to operationalization, without burdening their IT team to figure out how to run it, once the “keys” are handed over.

With DGX SuperPOD White Glove Services, customers enjoy a full lifecycle services experience that’s backed by proven expertise from installation to operations. Pre-delivery performance is certified on NVIDIA’s own acceptance cluster, which validates that the deployed system is running at specification before it’s handed off.

White Glove Services also include a dedicated, multidisciplinary NVIDIA team that covers everything from installation to infrastructure management to workflow optimization, including the resolution of performance-impacting bottlenecks. The services are designed to give IT leaders peace of mind and confidence as they entrust their business to DGX SuperPOD.

DGX SuperPOD at GTC21

To learn more about DGX SuperPOD and how you can consolidate AI infrastructure and centralize development across your enterprise, check out the sessions presented by Charlie Boyle, vice president and general manager of DGX Systems, who will cover our DGX SuperPOD news and more in two separate sessions at GTC:

Register for GTC, which runs through April 16, for free.

Learn more:

The post Secure AI Data Centers at Scale: Next-Gen DGX SuperPOD Opens Era of Cloud-Native Supercomputing appeared first on The Official NVIDIA Blog.

XAI Explained at GTC: Wells Fargo Examines Explainable AI for Modeling Lending Risk

Applying for a home mortgage can resemble a part-time job. But whether consumers are seeking out a home loan, car loan or credit card, there’s an incredible amount of work going on behind the scenes in a bank’s decision — especially if it has to say no.

To comply with an alphabet soup of financial regulations, banks and mortgage lenders have to keep pace with explaining the reasons for rejections to both applicants and regulators.

Busy in this domain, Wells Fargo will present at NVIDIA GTC21 next week some of its latest development work behind this complex decision-making using AI models accelerated by GPUs.

To inform their decisions, lenders have historically applied linear and non-linear regression models for financial forecasting and logistic and survivability models for default risk. These simple, decades-old methods are easy to explain to customers.
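Part of what makes those traditional models easy to explain is that each input's weight can be read directly off the fitted model. The sketch below is a minimal, self-contained illustration (synthetic data and hypothetical feature names, not Wells Fargo's models) of a logistic default-risk model trained by plain gradient descent:

```python
import math

# Tiny synthetic dataset: [debt_to_income, scaled_fico], label 1 = default.
# Values and labels are illustrative only.
X = [[0.6, 0.2], [0.7, 0.1], [0.8, 0.3], [0.2, 0.9], [0.1, 0.8], [0.3, 0.7]]
y = [1, 1, 1, 0, 0, 0]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Logistic regression fitted with per-sample gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(2000):
    for xi, yi in zip(X, y):
        p = sigmoid(w[0] * xi[0] + w[1] * xi[1] + b)
        err = p - yi
        w[0] -= lr * err * xi[0]
        w[1] -= lr * err * xi[1]
        b -= lr * err

# The fitted weights are directly interpretable: a positive weight on
# debt-to-income means higher DTI raises the predicted default probability,
# while a negative weight on the credit score lowers it.
p_risky = sigmoid(w[0] * 0.75 + w[1] * 0.15 + b)  # high DTI, low score
p_safe = sigmoid(w[0] * 0.15 + w[1] * 0.85 + b)   # low DTI, high score
```

The signs and magnitudes of `w` are the explanation: they can be quoted to an applicant or a regulator without any auxiliary tooling, which is exactly the property that deep models lose and XAI techniques try to recover.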

But machine learning and deep learning models are reinventing risk forecasting and in the process requiring explainable AI, or XAI, to allow for customer and regulatory disclosures.

Machine learning and deep learning techniques are more accurate but also more complex, which means banks need to spend extra effort to be able to explain decisions to customers and regulators.

These more powerful models allow banks to do a better job understanding the riskiness of loans, and may allow them to say yes to applicants that would have been rejected by a simpler model.

At the same time, these powerful models require more processing, so financial services firms like Wells Fargo are moving to GPU-accelerated models to improve processing, accuracy and explainability, and to provide faster results to consumers and regulators.

What Is Explainable AI?

Explainable AI is a set of tools and techniques that help understand the math inside an AI model.

XAI maps out the data inputs with the data outputs of models in a way that people can understand.

“You have all the linear sub-models, and you can see which factor is the most significant — you can see it very clearly,” said Agus Sudjianto, executive vice president and head of Corporate Model Risk at Wells Fargo, explaining his team’s recent work on Linear Iterative Feature Embedding (LIFE) in a research paper.

Wells Fargo XAI Development

The LIFE algorithm was developed to deliver high prediction accuracy, ease of interpretation and efficient computation.

LIFE outperforms directly trained single-layer networks, according to Wells Fargo, as well as many other benchmark models in experiments.

The research paper — titled Linear Iterative Feature Embedding: An Ensemble Framework for Interpretable Model — was authored by Sudjianto, Jinwen Qiu, Miaoqi Li and Jie Chen.

Default or No Default 

Using LIFE, the bank can generate reason codes that support model interpretability, identifying which variables weighed heaviest in the decision. For example, codes might be generated for a high debt-to-income ratio or a FICO score that fell below a set minimum for a particular loan product.

There can be anywhere from 40 to 80 different variables taken into consideration for explaining rejections.
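One common way to turn an interpretable model into such codes is to rank each variable's contribution to the adverse decision. The sketch below is purely illustrative (the weights, baselines and code strings are invented, not Wells Fargo's):

```python
# Map the largest adverse contributions of an interpretable (linear) score
# to reason codes. Weights, baselines and code text are hypothetical.
weights = {"debt_to_income": 2.0, "fico_deficit": 1.5, "recent_inquiries": 0.8}
baseline = {"debt_to_income": 0.35, "fico_deficit": 0.0, "recent_inquiries": 1.0}
reason_codes = {
    "debt_to_income": "R01: debt-to-income ratio too high",
    "fico_deficit": "R02: credit score below product minimum",
    "recent_inquiries": "R03: too many recent credit inquiries",
}

def top_reasons(applicant, n=2):
    # Contribution of each variable relative to a baseline applicant;
    # larger positive contributions push the score toward "decline".
    contrib = {
        var: weights[var] * (applicant[var] - baseline[var]) for var in weights
    }
    ranked = sorted(contrib, key=contrib.get, reverse=True)
    return [reason_codes[var] for var in ranked[:n] if contrib[var] > 0]

applicant = {"debt_to_income": 0.65, "fico_deficit": 0.3, "recent_inquiries": 2}
reasons = top_reasons(applicant)
```

The same ranking idea scales to the 40 to 80 variables mentioned above: only the top few adverse contributors are surfaced to the applicant, while the full ranking remains available to regulators.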

“We assess whether the customer is able to repay the loan. And then if we decline the loan, we can give a reason from a reason code as to why it was declined,” said Sudjianto.

Future Work at Wells Fargo

Wells Fargo is also working on Deep ReLU networks to further its efforts in model explainability. Two of the team’s developers will be discussing research from their paper, Unwrapping The Black Box of Deep ReLU Networks: Interpretability, Diagnostics, and Simplification, at GTC.

Learn more about the LIFE model work by attending the GTC talk by Jie Chen, managing director for Corporate Model Risk at Wells Fargo. Learn about model work on Deep ReLU Networks by attending the talk by Aijun Zhang, a quantitative analytics specialist at Wells Fargo, and Zebin Yang, a Ph.D. student at the University of Hong Kong.

Registration for GTC is free.

Image courtesy of joão vincient lewis on Unsplash

The post XAI Explained at GTC: Wells Fargo Examines Explainable AI for Modeling Lending Risk appeared first on The Official NVIDIA Blog.

NVIDIA Advances Extended Reality, Unlocks New Possibilities for Companies Across Industries

NVIDIA technology has been behind some of the world’s most stunning virtual reality experiences.

Each new generation of GPUs has raised the bar for VR environments, producing interactive experiences with photorealistic details to bring new levels of productivity, collaboration and fun.

And with each GTC, we’ve introduced new technologies and software development kits that help developers create extended reality (XR) content and experiences that are more immersive and delightful than ever.

From tetherless streaming with NVIDIA CloudXR to collaborating in a virtual world with NVIDIA Omniverse, our latest technologies are powering the next generation of XR.

This year at GTC, NVIDIA announced a new release for CloudXR that adds support for iOS. We also had announcements with leading cloud service providers to deliver high-quality XR streaming from the cloud. And we released a new version of Variable Rate Supersampling to improve visual performance.

Bringing High Performance and VR Mobility Together

NVIDIA CloudXR is an advanced technology that gives XR users the best of both worlds: the performance of NVIDIA GPUs with the mobility of untethered all-in-one head-mounted displays.

CloudXR is designed to stream all kinds of XR content from any server to any device. Users can easily access powerful, high-quality immersive experiences from anywhere in the world, without being physically connected to a workstation.

From product designers reviewing 3D models to first responders running training simulations, anyone can benefit from CloudXR using Windows and Android devices. We will soon be releasing CloudXR 2.1, which adds support for Apple iOS AR devices, including iPads and iPhones.

Taking XR Streaming to the Cloud

With 5G networks rolling out, streaming XR over 5G from the cloud has the potential to significantly enhance workflows across industries. But the big challenge with delivering XR from the cloud is latency — for people to have a great VR experience, motion-to-photon latency must stay at or below 20 milliseconds.
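That 20ms budget must cover the entire round trip, which is what makes cloud delivery hard. A back-of-the-envelope budget check might look like the following (all stage timings are illustrative assumptions, not measured CloudXR figures):

```python
# Illustrative motion-to-photon latency budget for cloud XR streaming.
# Every stage timing below is an assumption for the sketch, not a
# measured value for any real system.
budget_ms = 20.0
stages_ms = {
    "pose sample + uplink": 4.0,
    "server render": 7.0,
    "encode": 3.0,
    "downlink": 3.0,
    "decode + reproject + display": 2.5,
}
total_ms = sum(stages_ms.values())
headroom_ms = budget_ms - total_ms  # must stay non-negative
```

With numbers like these, only a fraction of a millisecond of headroom remains, which is why every stage — network transport included — has to be tightly optimized.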

To deliver the best cloud streaming experience, we’ve fine-tuned NVIDIA CloudXR. Over the past six months, we’ve taken great strides to bring CloudXR streaming to cloud service providers, from Amazon Web Services to Tencent.

This year at GTC, we’re continuing this march forward with additional news:

Also at GTC, Google will present a session that showcases CloudXR running on a Google Cloud instance.

To support CloudXR everywhere, we’re adding more client devices to our family.

We’ve worked with Qualcomm Technologies to deliver boundless XR, and with Ericsson on its 5G radio and packet core infrastructure to optimize CloudXR. Hear about the translation of this work to the manufacturing environment at BT’s session in GTC’s XR track.

And we’ve collaborated with Magic Leap on a CloudXR integration, which they will present at GTC. Magic Leap and CloudXR provide a great step forward for spatial computing and an advanced solution that brings many benefits to enterprise customers.

Redefining the XR Experience 

The quality of visuals in a VR experience is critical to provide users with the best visual performance. That’s why NVIDIA developed Variable Rate Supersampling (VRSS), which allows rendering resources to be focused in a foveated region where they’ll have the greatest impact on image quality.

The first VRSS version supported fixed foveated rendering in the center of the screen. The latest version, VRSS 2, integrates dynamic gaze tracking, moving the foveated region where the user is looking.
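Conceptually, foveated rendering of this kind picks a sampling rate per pixel based on its distance from the gaze point. The sketch below illustrates the idea only (the radii and rates are invented and are not VRSS's actual parameters):

```python
import math

def shading_rate(px, py, gaze, inner_r=0.15, outer_r=0.35):
    """Return samples-per-pixel for a pixel at normalized coords (px, py).

    Full supersampling inside the foveal region around the gaze point,
    reduced rates farther out. Radii and rates are illustrative only.
    """
    d = math.hypot(px - gaze[0], py - gaze[1])
    if d < inner_r:
        return 8  # supersampled foveal region
    if d < outer_r:
        return 4  # transition band
    return 1      # periphery rendered at the base rate

# With dynamic gaze tracking, the gaze point moves every frame and the
# high-quality region follows the user's eyes.
gaze = (0.7, 0.4)
center_rate = shading_rate(0.7, 0.4, gaze)
periphery_rate = shading_rate(0.05, 0.95, gaze)
```

Fixed foveation, as in the first VRSS release, corresponds to pinning `gaze` at the screen center; VRSS 2's gaze tracking is what lets the foveal region move.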

These advances in XR technology are also paving the way for a solution that allows users to learn, work, collaborate or play with others in a highly realistic immersive environment. The CloudXR iOS integration will soon be available in NVIDIA Omniverse, a collaboration and simulation platform that streamlines 3D production pipelines.

Teams around the world can enter Omniverse and simultaneously collaborate across leading content creation applications in a shared virtual space. With the upcoming CloudXR 2.1 release, Omniverse users can stream specific AR solutions using their iOS tablets and phones.

Expanding XR Workflows at GTC

Learn more about these advances in XR technology at GTC. Register for free and explore over 40 speaker sessions that cover a variety of XR topics, from NVIDIA Omniverse to AI integrations.

Check out the latest XR demos, and get access to an exclusive Connect with Experts session.

And watch a replay of the GTC keynote address by NVIDIA CEO Jensen Huang to catch up on the latest announcements.

Sign up to get news and updates on NVIDIA XR technologies.

Feature image credit: Autodesk VRED.

The post NVIDIA Advances Extended Reality, Unlocks New Possibilities for Companies Across Industries appeared first on The Official NVIDIA Blog.

GTC Showcases New Era of Design and Collaboration

Breakthroughs in 3D model visualization, such as real-time ray-traced rendering and immersive virtual reality, are making architecture and design workflows faster, better and safer.

At GTC this week, NVIDIA announced the newest advances for the AEC industry with the latest NVIDIA Ampere architecture-based enterprise desktop RTX GPUs, along with an expanded range of mobile laptop GPUs.  

AEC professionals will also want to learn more about NVIDIA Omniverse Enterprise, an open platform for 3D collaboration and physically accurate simulation. 

New RTX GPUs Bring More Power, Performance for AEC 

The NVIDIA RTX A5000 and A4000 GPUs are designed to enhance workflows for architectural design visualization. 

Based on the NVIDIA Ampere architecture, the RTX A5000 and A4000 integrate second-generation RT Cores to further boost ray tracing, and third-generation Tensor Cores to accelerate AI-powered workflows such as render denoising, deep learning super sampling and generative design.

Several architecture firms, including HNTB, have experienced how the RTX A5000 enhances design workflows.  

“The performance we get from the NVIDIA RTX A5000, even when enabling NVIDIA RTX Global Illumination, is amazing,” said Austin Reed, director of creative media studio at HNTB. “Having NVIDIA RTX professional GPUs at our designers’ desks at HNTB will enable us to fully leverage RTX technology in our everyday workflows.”

NVIDIA’s new range of mobile laptop GPUs — including the NVIDIA RTX A5000, A4000, A3000 and A2000, and the NVIDIA T1200, T600 and T500 — allows AEC professionals to select the perfect GPU for their workloads and budgets.

With this array of choices, millions of AEC professionals can do their best work from anywhere, even compute-intensive work such as immersive VR for construction rehearsals or point cloud visualization of massive 3D models.

NVIDIA Omniverse Enterprise: A Shared Space for 3D Collaboration  

Architecture firms can now accelerate graphics and simulation workflows with NVIDIA Omniverse Enterprise, the world’s first technology platform that enables global 3D design teams to simultaneously collaborate in a shared virtual space. 

The platform enables organizations to unite their assets and design software tools, so AEC professionals can collaborate on a single project file in real time. 

Powered by NVIDIA RTX technology, Omniverse delivers high-performance and physically accurate simulation for complex 3D scenes like cityscapes, along with real-time ray- and path-traced rendering. Architects and designers can instantly share physically accurate models across teams and devices, accelerating design workflows and reducing the number of review cycles.

Artists Create Futuristic Renderings with NVIDIA RTX  

Overlapping with GTC, the “Building Utopia” design challenge allowed archviz specialists around the world to discover how NVIDIA RTX real-time rendering is transforming architectural design visualization. 

Our thanks to all the participants who showcased their creativity and submitted short animations they generated using Chaos Vantage running on NVIDIA RTX GPUs. NVIDIA, Lenovo, Chaos Group, KitBash3D and CG Architect are thrilled to announce the winners. 

Congratulations to the winner, Yi Xiang, who receives a Lenovo ThinkPad P15 with an NVIDIA Quadro RTX 5000 GPU. In second place, Cheng Lei will get an NVIDIA Quadro RTX 8000, and in third place, Dariele Polinar will receive an NVIDIA Quadro RTX 6000.

Image courtesy of Yi Xiang.

Discover More AEC Content at GTC 

Learn more about the newest innovations and all the AEC-focused content at GTC by registering for free.

  • Check out the latest GTC demos that showcase amazing technology.
  • Join sessions on NVIDIA Omniverse presented by leading architecture firms like CannonDesign, KPF and Woods Bagot.
  • Learn how companies like The Grid Factory and The Gettys Group are using RTX-powered immersive experiences to accelerate design workflows.

And be sure to watch a replay of the GTC keynote address by NVIDIA founder and CEO Jensen Huang.


Featured image courtesy of KPF – Beijing Century City – 北京世纪城市.

The post GTC Showcases New Era of Design and Collaboration appeared first on The Official NVIDIA Blog.