When natural disasters strike, responders race against time to deploy critical resources and save lives.
Fanned by strong winds, a forest fire raged through a remote southwestern corner of China’s Sichuan province in early April. Drones equipped with high-resolution cameras and infrared detection technology were dispatched to the mountainous terrain, from where they transmitted footage of the leaping flames over 5G networks to emergency dispatch headquarters.
Rather than waiting for drones to return before processing the data, responders could immediately begin parsing the video with AI image algorithms running on NVIDIA GPUs, helping them better understand the crisis and concentrate rescue efforts.
This groundbreaking work was led by the China Mobile Chengdu Institute of Research and Development — a research division of China Mobile, the world’s largest mobile network operator — using advanced 5G technology, AI and the China Mobile Link-Cloud platform for drones.
The company, which has nearly a billion customers, is accelerating natural disaster response, improving emergency medical services and providing new education tools with NVIDIA GPUs connected to next-gen 5G mobile networks.
For example, a joint rescue team from China Mobile’s research division and Sichuan Provincial People’s Hospital in June used ambulances equipped with 5G terminals to remotely diagnose patients at the scene of a magnitude-6.0 earthquake. First responders in the emergency vehicles conducted tests like ECG monitoring or ultrasounds, using low-latency 5G networks for real-time video consultations with doctors at the hospital.
There, physicians could use GPU-accelerated medical imaging AI to diagnose and provide temporary treatment instructions until the patients were transferred to the hospital for surgical treatment.
Elsewhere, high-bandwidth 5G towers help address educational inequality between urban and rural areas by connecting multiple classrooms through virtual reality. China Mobile has connected a classroom from a rural primary school in Sichuan with students in Chengdu, the province’s capital. To do so, they used VR headsets, NVIDIA GPUs and an integration of the NVIDIA CloudXR software development kit — which delivers low-latency AR/VR streaming over 5G networks — with an application for remote synchronization of classrooms.
This initiative could help thousands of schools in far-flung regions participate in the same real-time, interactive learning experiences as more resource-rich schools.
Future deployments of these pilot projects will shift computational processing from data centers to GPUs at the edge, whether embedded in drones and ambulances or in full racks of edge servers.
Deploying 5G to 600 Million Users
With 10x lower latency and 1,000x the bandwidth of existing networks, 5G makes data-intensive mobile computing applications such as 4K video and VR possible at the edge for the first time. It also enables the deployment of complex AI models for inference at the edge.
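To see why those multipliers matter for applications like real-time video analysis at the edge, consider a rough, illustrative calculation. The 4G baseline figures below are assumptions for illustration, not measurements from the story; only the 10x and 1,000x multipliers come from the text.

```python
# Back-of-the-envelope comparison of per-frame delivery budgets on 4G vs. 5G.
# The 4G baseline figures are assumptions for illustration only.

FOUR_G_BANDWIDTH_BPS = 20e6   # assumed ~20 Mbit/s typical 4G throughput
FOUR_G_LATENCY_S = 0.050      # assumed ~50 ms 4G round-trip latency

# The article's multipliers: 1,000x the bandwidth, 10x lower latency.
FIVE_G_BANDWIDTH_BPS = FOUR_G_BANDWIDTH_BPS * 1000
FIVE_G_LATENCY_S = FOUR_G_LATENCY_S / 10

# One compressed 4K video frame at an assumed ~25 Mbit/s, 30 fps stream.
frame_bits = 25e6 / 30

def transfer_time(bits, bandwidth_bps, latency_s):
    """Time to deliver one frame: network latency plus serialization time."""
    return latency_s + bits / bandwidth_bps

t4 = transfer_time(frame_bits, FOUR_G_BANDWIDTH_BPS, FOUR_G_LATENCY_S)
t5 = transfer_time(frame_bits, FIVE_G_BANDWIDTH_BPS, FIVE_G_LATENCY_S)

print(f"4G: {t4 * 1000:.1f} ms per frame")   # misses a 33 ms, 30 fps budget
print(f"5G: {t5 * 1000:.2f} ms per frame")   # fits comfortably inside it
```

Under these assumed numbers, 4G cannot deliver a frame inside a 30 fps budget, while 5G leaves most of the budget free for AI inference at the edge.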
China Mobile, a leader in 5G deployment, has to date installed 50,000 5G base stations across 50 cities in China. The country is projected to have 600 million 5G users by 2025.
The company is a member of the Open Data Center Committee, a nonprofit consortium formed by the country’s leading technology providers and telecom giants. One of the committee’s initiatives is the Open Telecom IT Infrastructure (OTII) project, an effort to standardize server solutions for 5G mobile edge computing.
NVIDIA EGX servers developed by data center systems provider Inspur and edge computing manufacturer ADLINK will be the first GPU hardware to be incorporated under the OTII standard.
Powered by NVIDIA T4 and NVIDIA Quadro RTX GPUs, respectively, servers like these can be used at the edge to accelerate critical AI applications using 5G networks. An end-to-end software development kit compatible with Chinese technical requirements for mobile edge computing has also been developed to facilitate GPU adoption.
The NVIDIA EGX edge computing platform consists of a cloud-native software stack and edge servers optimized to run the stack. EGX systems vary from NVIDIA Jetson-powered edge devices to NGC-Ready for Edge validated servers. With NVIDIA EGX, system administrators can easily set up a fleet of edge servers remotely and securely for faster, easier deployment.
Inspur, H3C and Lenovo are among the dozens of manufacturers worldwide offering EGX systems today.
The smartphone revolution that’s swept the globe over the past decade is just the start, NVIDIA CEO Jensen Huang declared Monday.
Next up: the “smart everything revolution,” Huang told a crowd of hundreds from telcos, device manufacturers, developers, and press at his keynote ahead of the Mobile World Congress gathering in Los Angeles this week.
“The smartphone revolution is the first of what people will realize someday is the IoT revolution, where everything is intelligent, where everything is smart,” Huang said. He squarely positioned NVIDIA to power AI at the edge of enterprise networks and in the virtual radio access networks – or vRANs – powering next-generation 5G wireless services.
Among the dozens of leading companies joining NVIDIA as customers and partners cited during Huang’s 90-minute address were Walmart — which is already building NVIDIA’s latest technologies into its showcase Intelligent Retail Lab — BMW, Ericsson, Microsoft, NTT, Procter & Gamble, Red Hat and Samsung Electronics.
Anchoring NVIDIA’s story: the NVIDIA EGX edge supercomputing platform, a high-performance cloud-native edge computing platform optimized to take advantage of three key revolutions – AI, IoT and 5G – providing the world’s leading companies the ability to build next-generation services.
“The smartphone moment for edge computing is here and a new type of computer has to be created to provision these applications,” said Huang speaking at the LA Convention Center. He noted that if the global economy can be made just a little more efficient with such pervasive technology, the opportunity can be measured in “trillions of dollars per year.”
Ericsson Exec Joins on Stage Marking Collaboration
Joining Huang on stage was Ericsson’s Fredrik Jejdling, executive vice president and head of business area networks. The company is a leader in the radio access network industry, one of the key building blocks for high-speed wireless networks.
“As an industry we’ve, in all honesty, been struggling to find alternatives that are better and higher performance than our current bespoke environment,” Jejdling said. “Our collaboration is figuring out an efficient way of providing that, combining your GPUs with our heritage.”
The collaboration brings Ericsson’s expertise in radio access network technology together with NVIDIA’s leadership in high-performance computing to fully virtualize the 5G Radio, giving telcos unprecedented flexibility.
Together NVIDIA and Ericsson are innovating to fuse 5G, supercomputing and AI for a revolutionary communications platform that will someday support trillions of always-on devices.
Red Hat, NVIDIA to Create Carrier-Grade Telecommunications Infrastructure
Huang also announced a new collaboration with Red Hat to build carrier-grade, cloud-native telecom infrastructure with EGX for AI, 5G RAN and other workloads. The enterprise software provider already serves 120 telcos around the world, powering every member of the Fortune 500.
Together, NVIDIA and Red Hat will bring carrier-grade Kubernetes — which automates the deployment, scaling and management of applications — to telcos so they can orchestrate and manage 5G RANs in a truly software-defined mobile edge.
“Red Hat is joining us to integrate everything we’re working on and make it a carrier grade stack,” Huang said. “The rest of the industry has joined us as well, every single data center computer maker, the world’s leading enterprise software makers, have all joined us to take this platform to market.”
Aerial, NVIDIA’s new software development kit for GPU-accelerated 5G, allows telecommunications companies to build completely virtualized 5G radio access networks that are highly programmable, scalable and energy efficient — enabling telcos to offer new AI services such as smart cities, smart factories, AR/VR and cloud gaming.
Technology for the Enterprise Edge
In addition to telcos, enterprises will increasingly need high-performance edge servers to make real-time decisions from large amounts of data using AI.
EGX combines NVIDIA CUDA-X software, a collection of NVIDIA libraries that provide a flexible and high-performance programming language to developers, with NVIDIA-certified GPU servers and devices.
The result enables companies to harness rapidly streaming data — from factory floors to manufacturing inspection lines to city streets — delivering AI and other next-generation services.
Other top technology companies collaborating with NVIDIA on the EGX platform include Cisco, Dell Technologies, Hewlett Packard Enterprise, Mellanox and VMware.
Walmart Adopts EGX to Create Store of the Future
Huang cited Walmart as an example of EGX’s power.
The retail giant is deploying it in its Levittown, New York, Intelligent Retail Lab. It’s a unique, fully operating grocery store where the company explores the ways AI can further improve in-store shopping experiences.
Using EGX’s advanced AI and edge capabilities, Walmart can compute in real time the more than 1.6 terabytes of data the store generates each second. This helps it automatically alert associates to restock shelves, open up new checkout lanes, retrieve shopping carts and ensure product freshness in meat and produce departments.
Squeezing out even half a percent of efficiency in the $30 trillion retail market represents an enormous opportunity, Huang noted. “The opportunity for using automation to improve efficiency in retail is extraordinary,” he said.
BMW, Procter & Gamble, Samsung, Among Leaders Adopting EGX
That power is already being harnessed for a dizzying array of real-world applications across the world:
Korea’s Samsung Electronics, in another early EGX deployment, is using AI at the edge for highly complex semiconductor design and manufacturing processes.
Germany’s BMW is using intelligent video analytics and EGX edge servers in its South Carolina manufacturing facility to automate inspection.
Japan’s NTT East uses EGX in its data centers to develop new AI-powered services in remote areas through its broadband access network.
The U.S.’s Procter & Gamble, the world’s top consumer goods company, is working with NVIDIA to develop AI-enabled applications on top of the EGX platform for the inspection of products and packaging.
Cities, too, are grasping the opportunity. Las Vegas uses EGX to capture vehicle and pedestrian data to ensure safer streets and expand economic opportunity. And San Francisco’s prime shopping area, the Union Square Business Improvement District, uses EGX to capture real-time pedestrian counts for local retailers.
Stunning New Possibilities
To demonstrate the possibilities, Huang punctuated his keynote with demos showing what AI can unleash in the world around us.
In a flourish that stunned the crowd, Huang made a red McLaren Senna prototype — which carries a price of a hair under $1 million — materialize on stage in augmented reality. It could be viewed from any angle — including from the inside — on a smartphone streaming data over Verizon’s 5G network from a Verizon data center in Los Angeles.
The technology behind the demo: Autodesk VRED running in a virtual machine on a Quadro RTX 8000 server. On the phone: a 5G client built with NVIDIA’s CloudXR client application software development kit for mobile devices and head-mounted displays.
And, in a video, Huang showed how the Jarvis multi-modal AI was able to follow queries from two different speakers conversing on different topics, the weather and restaurants, as they drove down the road — reacting to what the computer sees as well as what is said.
In another video, Jarvis guided a shopper through a purchase in a real-world store.
“In the future these kind of multi-modal AIs will make the conversation and the engagement you have with the AI much much better,” Huang said.
Cloud Gaming Goes Global
Huang also detailed how NVIDIA is expanding its cloud gaming network through partnerships with global telecommunications companies.
GeForce NOW, NVIDIA’s cloud gaming service, transforms underpowered or incompatible devices into a powerful GeForce gaming PC with access to popular online game stores.
Taiwan Mobile joins industry leaders rolling out GeForce NOW, including Korea’s LG U+, Japan’s Softbank, and Russia’s Rostelecom in partnership with GFN.RU. Additionally, Telefonica will kick-off a cloud gaming proof-of-concept in Spain.
Huang showed what’s now possible with a real-time demo of a gamer playing Assetto Corsa Competizione on GeForce NOW — as a cameraman watched over his shoulder — on a smartphone over a 5G network. The gamer navigated the demanding racing game’s action with no noticeable lag.
The mobile version of GeForce NOW for Android devices is available in Korea and will be available widely later this year, with a preview on display at Mobile World Congress Los Angeles.
“These servers are going to be the same servers that run intelligent agriculture and intelligent retail,” Huang said. “The future is software defined and these low latency services that need to be deployed at the edge can now be provisioned at the edge with these servers.”
A Trillion New Devices
The opportunities for AI, IoT, cloud gaming, augmented reality and 5G network acceleration are huge — with a trillion new IoT devices to be produced between now and 2035, according to industry estimates.
And GPUs are up to the challenge, with GPU computing power growing 300,000x from 2013, driving down the cost per teraflop of computing power, even as gains in CPU performance level off, Huang said.
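That 300,000x figure implies a striking annual growth rate. Assuming the 2019 keynote as the endpoint of the claimed 2013-to-now window (an assumption for illustration), the implied compound annual growth works out as follows:

```python
# The article's claim: GPU computing power grew 300,000x from 2013.
# Assuming a 2013-2019 window (2019 keynote as endpoint), the implied
# compound annual growth factor is the 6th root of 300,000.

total_growth = 300_000
years = 2019 - 2013  # assumed 6-year window

annual_factor = total_growth ** (1 / years)
print(f"~{annual_factor:.1f}x per year")  # roughly 8x per year
```

That is several times faster than the roughly 2x-every-two-years pace traditionally associated with CPU scaling, which is the contrast Huang was drawing.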
NVIDIA is well positioned to help telcos and enterprises make the most of this by helping customers combine AI algorithms, powerful GPUs, smart NICs (network interface cards), cloud-native technologies, the NVIDIA EGX accelerated edge computing platform and 5G high-speed wireless networks.
Huang compared all these elements to the powerful “infinity stones” featured in Marvel’s movies and comic books.
“What you’re looking at are the six miracles that will make it possible to put 5G at the edge, to virtualize the 5G data center and create a world of smart everything,” Huang said, and that, in turn, will add intelligence to everything in the world around us.
“This will be a pillar, a foundation for the smart everything revolution,” Huang said.
Speeding the mass adoption of AI at the 5G edge, NVIDIA has introduced Aerial, a software developer kit enabling GPU-accelerated, software-defined wireless radio access networks.
In his keynote at Mobile World Congress Los Angeles, NVIDIA founder and CEO Jensen Huang detailed how Aerial, running on the NVIDIA EGX platform, enables AI services and immersive content at the edge of 5G networks.
5G offers plenty of speed, of course, delivering 10x lower latency, 1,000x the bandwidth and millions of connected devices per square kilometer. 5G also introduces the critical concept of “network slicing.” This allows telcos to dynamically — on a session-by-session basis — offer unique services to customers.
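Conceptually, network slicing amounts to attaching each session to a tier of latency and bandwidth guarantees at setup time. The sketch below is a toy illustration of that idea; the class, field and tier names are hypothetical and not part of any telco API.

```python
# Toy model of per-session network slicing. Names are illustrative only.
from dataclasses import dataclass

@dataclass
class Slice:
    name: str
    max_latency_ms: float      # latency guarantee for this service tier
    min_bandwidth_mbps: float  # bandwidth guarantee for this service tier

# A telco defines its service tiers once...
CATALOG = {
    "cloud_gaming": Slice("cloud_gaming", max_latency_ms=10, min_bandwidth_mbps=50),
    "ar_vr":        Slice("ar_vr",        max_latency_ms=20, min_bandwidth_mbps=100),
    "iot_sensor":   Slice("iot_sensor",   max_latency_ms=500, min_bandwidth_mbps=0.1),
}

def provision(session_id: str, service: str) -> dict:
    """Attach a session to the slice matching the service it requested."""
    s = CATALOG[service]
    return {"session": session_id, "slice": s.name,
            "latency_ms": s.max_latency_ms, "bandwidth_mbps": s.min_bandwidth_mbps}

# ...then each session gets its own guarantees, decided per session at setup.
print(provision("sess-42", "cloud_gaming"))
```

The point of the sketch: two sessions on the same physical network can receive entirely different service guarantees, which is what makes slicing a business model and not just a scheduling detail.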
Traditional solutions cannot be reconfigured quickly, so telco operators need a new network architecture, one that’s high performance and reconfigurable by the second, Huang explained.
Such virtualized radio access networks run in the wireless infrastructure closest to customers, making them well suited to offer AI services at the edge. They’re critical to building a modern 5G infrastructure capable of running a range of applications that are dynamically provisioned on a common platform.
With NVIDIA Aerial, the same computing infrastructure required for 5G networking can be used to provide AI services such as smart cities, smart factories, AR/VR and cloud gaming.
Aerial provides two critical SDKs — CUDA Virtual Network Function (cuVNF) and CUDA Baseband (cuBB) — to simplify building highly scalable and programmable, software-defined 5G RAN networks using off-the-shelf servers with NVIDIA GPUs.
The NVIDIA cuVNF SDK provides optimized input/output and packet processing, sending 5G packets directly to GPU memory from GPUDirect-capable network interface cards.
The NVIDIA cuBB SDK provides a GPU-accelerated 5G signal processing pipeline, including cuPHY for the 5G physical layer (L1), delivering unprecedented throughput and efficiency by keeping all physical layer processing within the GPU’s high-performance memory.
The NVIDIA Aerial SDK runs on the NVIDIA EGX stack, bringing GPU acceleration to carrier-grade Kubernetes infrastructure.
The NVIDIA EGX stack includes an NVIDIA driver, NVIDIA Kubernetes plug-in, NVIDIA Container runtime plug-in and NVIDIA GPU monitoring software.
To simplify the management of GPU-enabled servers, telcos can install all required NVIDIA software as containers that run on Kubernetes — open-source software widely used to speed the deployment and management of sophisticated software of all kinds.
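As a concrete illustration of what that containerized deployment looks like, a workload that needs a GPU simply declares it in its Kubernetes resource limits, using the `nvidia.com/gpu` resource that NVIDIA’s device plug-in advertises. The sketch below builds such a manifest; the container image name is a placeholder.

```python
# Minimal sketch of a Kubernetes Pod manifest requesting one GPU via the
# NVIDIA device plug-in. The image name is a placeholder, not a real image.
import json

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "gpu-inference"},
    "spec": {
        "containers": [{
            "name": "inference",
            "image": "example.com/inference:latest",          # placeholder
            "resources": {"limits": {"nvidia.com/gpu": 1}},   # one GPU per pod
        }]
    },
}

# Kubernetes manifests are usually written as YAML, but JSON is accepted too.
print(json.dumps(pod, indent=2))
```

Once the device plug-in and GPU driver containers are installed, the scheduler places pods like this only on nodes with a free GPU, which is what lets operators manage a fleet of GPU servers with standard Kubernetes tooling.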
In short, Aerial enables the highest return on investment by providing elasticity as network traffic changes throughout the day, as well as the flexibility to offer services based on changing customer needs.
Aerial is already endorsed by some of the world’s leading telcos and cloud infrastructure providers:
“The telco industry is eagerly adopting cloud-native architecture to meet the growing compute demands of 5G. We are learning firsthand how the remarkable compute performance of NVIDIA GPUs, together with NVIDIA’s Aerial SDKs, can address the challenges of building flexible, high-performance virtualized telecom networks. We look forward to Aerial’s continued development.”
— Yasuyuki Nakajima, president and CEO, KDDI Research, Inc.
“5G networks must rely on software-defined infrastructure from the core to the edge to enable a range of high-value services, like AI/ML, IoT and autonomous driving. Red Hat’s vision of extending cloud-native technologies to the edge combined with NVIDIA’s flexible Aerial SDK aims to bring GPU acceleration to 5G RAN. We’ve teamed up with NVIDIA to provide our customers with standardized 5G infrastructure that enables them to develop and deploy their edge applications faster.”
— Chris Wright, senior vice president and chief technology officer, Red Hat
“SoftBank Corp. has been focused over the past decade on building centralized radio access networks that guarantee high capacity and stability. We believe that our 5G network will be completed through a software approach, or softwarization, and that NVIDIA’s Aerial SDKs will play an instrumental role in this effort. It enables an open ecosystem for software-defined 5G networks delivering both flexibility and high performance, which will help SoftBank Corp. drive the digital transformation of the telco industry.”
— Ryuji Wakikawa, vice president and head of the Advanced Technology Division, SoftBank Corp.
NVIDIA Aerial is available to early access partners today, with general availability planned for the end of the year. Sign up here to receive more information.
Visit NVIDIA at MWC Los Angeles
To experience firsthand the power of the EGX platform with Aerial, visit us at booth 1745 in Hall South at MWC Los Angeles this week.
Innovators in computer graphics from around the globe descended this week on the annual SIGGRAPH conference to catch a glimpse of the future.
As the Los Angeles Convention Center’s doors flung open, a sea of NVIDIA green rose up. Across the show floor, dozens of software and computer makers are demoing the latest applications — featuring real-time ray tracing and advanced AI — on NVIDIA RTX Studio laptops and mobile workstations.
More than 35 partners are running over 50 RTX Studio devices. An array of the world’s leading computer graphics companies are showcasing their applications, including: Adobe, Ansys, Autodesk, BinaryVR, Blackmagic Design, Blender, Boris FX, Chaos Group, Colorfront, Epic Games, Foundry, Luxion, Maxon, Noitom, OTOY, Pixar, PTC, Reallusion, Redshift, Siemens and Unity.
In the NVIDIA booth alone, attendees can get hands-on with and learn more about 14 demos featuring:
8K video editing with REDCINE-X PRO as well as Adobe Premiere Pro on a Razer Blade 15
VFX with Boris FX and After Effects on a Razer Blade 15
Product design with real-time ray tracing using McNeel Rhino with Luxion KeyShot on a Lenovo P53
RTX-powered ray tracing with Modo and Modo Renderer on a Lenovo P53
Architectural visualization in Autodesk Revit with Enscape on an MSI WS65
Mechanical visualizations with RTX-powered ray tracing and AI denoising in SOLIDWORKS Visualize on an MSI WS65
Layout and set dressing with real-time ray tracing in Isotropix Clarisse on an Acer ConceptD 7
Concept design with RTX-powered ray tracing in Adobe Dimension on an Acer ConceptD 7
Interactive ray tracing with cloud boost using Chaos Group V-Ray and Project Lavina on an HP ZBook 17
3D painting with RTX bakers using Substance Painter on the ASUS StudioBook W500
RTX-powered ray tracing with OTOY OctaneRender on the ASUS StudioBook W500
3D animation and ray tracing using Maxon Cinema 4D and Redshift on a GIGABYTE AERO 15 OLED
3D animation and ray tracing powered by RTX in Blender 2.8 and Blender Cycles on a GIGABYTE AERO 15
AI-enhanced video editing and color grading in Blackmagic Design DaVinci Resolve on a Dell Precision 7740
Our partners are spread across the show floor, highlighting the impact of RTX on creative workflows:
A tech demo of the NVIDIA RTX GPU rendering capabilities of Adobe Dimension is on display in the NVIDIA booth (#1303). Follow the GPU Rendering section on the Adobe Dimension Feedback Portal to stay informed on when the RTX renderer becomes available.
Blackmagic Design is hosting free DaVinci Resolve training in its booth (#903) at SIGGRAPH, exclusively on RTX Studio laptops from Lenovo. Attendees can learn more about DaVinci Resolve 16 and Fusion 16 during training sessions on VFX compositing, motion graphics, Resolve color correction, GPU-accelerated AI and other features. The free DaVinci Resolve training is offered daily, but space is limited.
Unveiled in March, the Autodesk Arnold GPU public beta introduced a first taste of GPU rendering. On display this week, Arnold GPU 5.4 adds support for OSL, OpenVDB, reduced noise in indirect lighting and much more. Arnold GPU 5.4 is still in beta, so look for a full release later this year.
Chaos Group is providing the first public demonstration of V-Ray with RTX ray-tracing support, yielding dramatic speedups. They’re also showing a demo of Project Lavina, a real-time rendering environment, built from the ground up with DXR, to enable real-time interactivity of scenes built in V-Ray.
Maxon’s press conference featured an RTX Studio laptop showcasing a demo of Redshift 3 with RTX-powered ray tracing and Cinema 4D R21. Attendees can check out the demo in the Maxon booth (#1227).
And there’s much more on the show floor from Epic Games (#1319), Foundry (#925), OTOY (#1141), Unity (#1241) and countless others.
At SIGGRAPH 2019, the world’s top ISVs have turned “RTX On,” displayed for everyone to see on RTX Studio laptops and mobile workstations.
Creative professionals are more mobile than ever. The need to create on the go is vital to meeting tight deadlines and getting content delivered. Whether starting initial edits on set or reviewing rendered ideas with a client at a local coffee shop, the “office” is wherever you need to work.
But nothing interrupts creative flow more than a laptop featuring more loading bars per second than frames per second. That’s why RTX Studio laptops come equipped with RTX GPUs with up to 16GB of graphics memory, Intel Core i7 or i9 CPUs, up to 64GB of fast RAM and fast SSD storage. RTX Studio laptops also feature up to 4K HDR OLED displays with Max-Q thin and light designs and enhanced battery life.
The heart of these laptops are NVIDIA GeForce and Quadro RTX GPUs. These GPUs — the same ones that help create jaw-dropping visual effects in Hollywood blockbusters and make games look incredibly realistic — accelerate video and photo editing, 3D modeling, ray-traced rendering and video streaming.
High-end RTX Studio laptops deliver performance up to 7x faster than the MacBook Pro1 and have large amounts of video memory to improve the experience when running multiple graphics-intensive apps simultaneously. This is incredibly important, as creatives don’t work exclusively in one app. With larger frame buffers, designers can switch easily between memory-intensive applications without closing apps to free up memory.
Simply put, RTX Studio laptops are purpose-built for content creators, with no compromises.
There are eight RTX Studio laptops available now, including Acer ConceptD 7, Gigabyte AERO 15, MSI P65 Creator, P75 Creator, WS65, WS75 and WE75, and Razer Blade 15 Studio Edition.
More RTX Studio laptops are on the way from Acer, ASUS, Dell, Gigabyte, HP and Razer.
Best for Video Editing
Video editing is a very compute-intensive task. 4K, 6K and 8K video formats put great strain on system resources, and visual effects add even more complexity.
That’s where the RTX GPU horsepower in RTX Studio laptops kicks into high gear.
The NVIDIA CUDA cores on RTX GPUs accelerate video and image processing such as color correction, sharpening, upsampling and transition effects in Adobe Premiere Pro, After Effects, Photoshop and other creative apps. Video editors will see playback and render speed increases in Premiere Pro up to 2x vs. competitive laptops and upwards of 11x against CPU-only systems.1
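Operations like sharpening are small convolutions repeated across millions of pixels, which is exactly the kind of data-parallel work GPUs excel at. Below is a pure-Python, CPU-only sketch of a classic 3x3 sharpen filter for illustration; a GPU-accelerated app would run the same arithmetic as a parallel kernel over the whole frame.

```python
# A 3x3 sharpen filter of the kind creative apps offload to CUDA cores.
# Pure-Python, CPU-only sketch for illustration; production code runs the
# same convolution as a massively parallel GPU kernel.

SHARPEN = [[ 0, -1,  0],
           [-1,  5, -1],
           [ 0, -1,  0]]  # boost the center pixel, subtract its neighbors

def convolve3x3(image, kernel):
    """Apply a 3x3 kernel to a 2D grayscale image (border pixels unchanged)."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * image[y + ky - 1][x + kx - 1]
            out[y][x] = min(255, max(0, acc))  # clamp to the valid pixel range
    return out

# A flat region is unchanged (5*v - 4*v = v); only edges get exaggerated,
# which is why sharpening makes detail "pop".
flat = [[100] * 4 for _ in range(4)]
assert convolve3x3(flat, SHARPEN)[1][1] == 100
```

Every output pixel depends only on its own 3x3 neighborhood, so all pixels can be computed independently and in parallel, which is precisely the structure a GPU exploits.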
RED Digital Cinema’s latest release of REDCINE-X PRO takes full advantage of the GPU to decode, debayer and color correct REDCODE RAW footage. This gives RTX Studio laptops a leg up, allowing them to process 6K+ RED video footage at 24 fps in real time. This just wasn’t possible before on laptops or even high-end CPU desktop systems without significant time spent waiting for proxy rendering. With the power of RTX Studio laptops and RED, videographers have the freedom to work on location and edit on the go.
Best for 3D Rendering, Modeling and Design
In film and television, many of the worlds we escape to are brought to life by studios using NVIDIA Quadro GPUs. In fact, for 11 years running, every Oscar nominee for Best Visual Effects has used an NVIDIA Quadro GPU. The same NVIDIA Turing architecture these world-class studios rely on can be found in RTX Studio laptops.
Coupling GPU acceleration with ray tracing in popular renderers like Autodesk Arnold GPU for Maya and 3ds Max empowers 3D artists and designers to create beautiful scenes with accurate lighting, shadows and reflections, and to review their photorealistic graphics as they’re created.
This means 3D artists using RTX Studio laptops to render and denoise with Autodesk Arnold can do so up to 13x faster by using GPU acceleration over CPU-only rendering. This speedup on a laptop unchains artists from their desks, letting them work wherever they need to be.
With GPU-accelerated 3D rasterization in 3D modeling applications and real-time ray tracing now enabled in Unreal Engine and Unity, designers can create stunningly photorealistic and interactive architectural visualizations with RTX Studio laptops.
Best for Film and Design Students
Between classes, commuting, two jobs and trying to have a social life, students don’t have time for loading bars when they’re trying to ace their semester-long projects.
Any film or design student would want to get twice the work done in the same amount of time as their peers — or get the same amount of work done in half the time.
With RTX Studio laptops offered in multiple configurations from a variety of manufacturers, students have more high-performance options than ever before. So whether your focus requires a higher-resolution screen with fantastic color accuracy and size for precision work, or you’re looking for more memory and top-of-the-line graphics, there’s an RTX Studio laptop for you. And since they feature thin and light designs, they move from class to class without slowing you down.
Two years ago, we began our quest to make high-performance laptops as portable as possible. Today, creators can take immensely powerful RTX GPUs with them anywhere they go and work in ways unimaginable until now.
In short, creators can now create at the speed of their imagination, with RTX ON.
1) Performance testing conducted by NVIDIA in June 2019 on RTX Studio laptops equipped with 16GB RAM, Intel Core i7-8750H CPU and GeForce RTX 2080 Max-Q compared to a 15-inch MacBook Pro with 32GB RAM, Intel Core i9 CPU and Radeon Pro Vega 20 GPU. Arnold performance measures render time with Maya 2019 and Arnold 126.96.36.199 using the NVIDIA SOL 3D model. REDCINE-X PRO performance measures video playback fps using an 8K 5:1 REDCODE RAW video. Adobe Premiere Pro 2019 performance measures playback and render fps of 4K video with various GPU-accelerated effects enabled.
A designer, an artist and a director walked into our Computex press conference on Monday. They didn’t deliver any punchlines, but did something better: blowing people’s minds with breathtaking demos of real-time ray tracing and AI on our new RTX Studio laptops.
A packed press room at Asia’s biggest annual tech trade show watched live as the trio of artists used GeForce RTX and Quadro RTX GPUs to create amazing content. They used interactive ray tracing to construct interior designs, AI to apply edits on a single video frame to an entire scene, and complex real-time adjustments in a movie clip with 172 million polygons.
The demos showcased the NVIDIA Studio platform — a new combination of hardware and software that allows creators to work at the speed of their imagination. Its RTX Studio laptops, which carry a special badge making them easy to identify, are purpose-built for creative workflows and enable desktop-level performance on the go.
As the creators below demonstrated, this means the creative studio of tomorrow is already here, anywhere you need it to be.
First up was Leo Chou, CEO of TCImage, in Taiwan. Chou specializes in architecture and interior design, working with 3ds Max and Unreal Engine to develop breathtaking renders. Joined by Jacob Norris, one of NVIDIA’s amazing Unreal Engine 4 artists, he created a high-end design for a Chinese-style living room, selecting furnishings, lighting and decorative objects.
They showed how real-time, interactive, ray-tracing design allows accurate visualization of environments so designers and clients can iterate quickly and improve design decisions.
Next up was Juan Salvo, founder of theColourSpace, in New York. A world-class finishing artist who worked with AI to enhance video production in DaVinci Resolve, he makes images look their best. Since he’s constantly asked by clients to do more, faster, GPU-accelerated AI and machine learning save him a massive amount of time.
Salvo showcased how he uses the DaVinci Neural Engine to analyze clips, make adjustments to a single frame and use AI to apply them throughout an entire scene — all in real time. When asked how he can do this remotely, he replied, “What makes it viable is workstation-class performance laptops.”
Wrapping things up, Daniel Gregoire, founder and CEO of HALON Entertainment, a top Hollywood visualization studio working in Unreal Engine, showed real-time, interactive, ray-tracing design that allows accurate visualization of scenes.
His team is widely known for their work on Aquaman, and he wowed the crowd with real-time adjustments in a scene with over 172 million polygons. He described how he would make edits in the director’s office and run it live, bringing the visually breathtaking underwater world of the seven seas to life.
“In the past, this would have taken weeks and weeks, but now we’re basically running this real time in a game engine with the RTX laptop in my backpack,” Gregoire said.
Some of these demos were possible before, but only if the designer had access to a large studio with high-end workstations.
NVIDIA RTX Studio laptops with GeForce RTX and Quadro RTX GPUs let artists apply their gifts instantaneously, wherever work takes them.
Creative workflows are riddled with hurry-up-and-wait.
Repetitive tasks can take up to 80 percent of a creator’s time, leaving them watching progress bars instead of making adjustments and improvements. And content like ray-traced 3D models and RED RAW video requires massive computational horsepower that hasn’t been available from a laptop. Until now.
New RTX Studio laptops let everyone — from aspiring online creators to freelancers — take content creation to the next level.
At Computex this week, we introduced 17 RTX Studio laptops, starting at $1,599, which are purpose-built for creative workflows to enable desktop-level performance on the go. Inside are NVIDIA GPUs, ranging from the GeForce RTX 2060 up to the newly announced Quadro RTX 5000, that enable real-time ray tracing, AI processing and high-resolution video editing, with performance up to 7 times faster than MacBook Pro.1
RTX Studio laptops are purpose-built for content creation, featuring up to 16GB of graphics memory, up to 4K HDR displays, Max-Q thin and light designs, Core i7 and i9 CPUs, and plenty of fast RAM and SSD storage.
The RTX Studio badge ensures creators can quickly and easily identify the right hardware to meet their stringent demands.
RTX Studio laptops provide artists with GPU acceleration to boost creative workflows, including faster playback for RED Digital Cinema users working in the field.
“From GPU decoding to real-time 8K editing, we continually work with NVIDIA to deliver a better experience for our customers,” said Jarred Land, president of RED Digital Cinema. “For most of them, the ability to view and edit video on location during filming is critical. With NVIDIA-powered laptops, it’s incredible to finally be able to edit 8K video in real time from anywhere you may be thanks to the new RED R3D SDK enhancements and NVIDIA GPUs.”
We’ve partnered with the world’s top OEMs to offer a wide selection of designs for creators to choose from, available starting in June.

Acer’s ConceptD 7 and ConceptD 9 laptops are now powered by up to a Quadro RTX 5000 GPU, with GeForce RTX configurations also on the way, and brilliant 4K displays, letting creators enjoy true creative freedom.
The ASUS StudioBook 700G3T and W500 both come with up to Quadro RTX graphics. The ASUS ZenBook Pro Duo is equipped with a GeForce RTX GPU and full-width 4K ASUS ScreenPad Plus that works seamlessly with the main 4K UHD OLED display.
Dell’s Alienware m15 Creators Edition features multiple configurations including both GeForce RTX 2080 and RTX 2060 GPUs.
Gigabyte’s AERO 17 and AERO 15 both feature up to GeForce RTX 2080 GPUs, while the AERO 15 also packs a beautiful 4K OLED display.
HP’s OMEN X 2S – GeForce RTX Studio is equipped with a revolutionary dual-screen design which allows for a variety of multitasking capabilities, while both the OMEN X 2S and OMEN 15 – GeForce RTX Studio feature GeForce RTX 2080 graphics and up to a 4K screen.
MSI offers a wide range of RTX Studio laptops, including the recently announced WS65 with up to a Quadro RTX 5000 GPU — including GeForce RTX configurations — and a brilliant 4K display. Additional RTX Studio laptop models include the WS75, WE75, P75 and P65.
The Razer Blade Studio Edition laptop line features upgraded models of the Razer Blade 15 and Razer Blade Pro 17 in a striking Mercury White finish with a tone-on-tone Razer logo. The Studio Edition Razer Blades come equipped with Quadro RTX 5000 GPUs and a gorgeous 4K display to provide creators with the perfect mobile solution.
To support the newest laptops, NVIDIA is also providing creators with the software needed to ensure the highest standard of reliability. NVIDIA Studio Drivers deliver smooth performance on creative applications and the best possible experience when using NVIDIA GPUs. We conduct extensive multi-app workflow testing and maintain a release cadence aligned to major creative app updates.
Visit the new NVIDIA Studio website to learn more about how we’re helping unleash creative potential with RTX Studio laptops and NVIDIA Studio Drivers.
(1) Performance testing conducted by NVIDIA in May 2019 on RTX Studio laptops equipped with 16GB RAM, Intel Core i7-8750H CPU and GeForce RTX 2080 Max-Q compared to 2018 15-inch MacBook Pro with 32GB RAM, Intel Core i9 CPU and Radeon Pro Vega 20 GPU. GeForce RTX 2080 Max-Q laptop was 7x faster in Maya+Arnold and REDCINE-X PRO. Arnold performance measures render time with Maya 2019 and Arnold using the NVIDIA SOL 3D model. REDCINE-X PRO performance measures video playback FPS using an 8K 5:1 REDCODE RAW video.
What’s New: Today, Intel launched the most powerful generation of Intel® Core™ mobile processors ever: the new 9th Gen Intel® Core™ mobile H-series processors, designed for gamers and creators who want to push their experience to the next level.
“Our new 9th Gen platform is designed to delight gamers, creators and performance users by giving them more of what they want. We are bringing desktop-caliber performance with up to 5 GHz and 8 cores in a range of thinner systems and a new level of connectivity with Wi-Fi 6 (Gig+) so users can game or create where they want.”
–Fredrik Hamberger, general manager of the Premium and Gaming Laptop Segments at Intel
Why It’s Important: There are 580 million enthusiast PC gamers and 130 million PC-based content creators today1 who care about raw performance as much as they do responsiveness of their PC. They require PCs that can handle everything from demanding AAA games to taxing creative workloads like editing, rendering and transcoding massive 4K video – all while on the go. The 9th Gen Intel Core mobile processors deliver desktop-caliber performance in a mobile form factor and feature amazing performance; the fastest, most reliable wireless with Intel® Wi-Fi 6 AX200 (Gig+); the most versatile wired connectivity with Thunderbolt™ 3; and support for Intel® Optane™ memory technology.
How It Performs: At the top of the stack is the 9th Gen Intel Core i9-9980HK processor, the first Intel® Core™ i9 mobile processor with up to 5 GHz2 with Intel® Thermal Velocity Boost, 8 cores and 16 threads, and supporting 16MB of Intel® Smart Cache. The 9th Gen Intel Core mobile processors are designed for the most demanding workloads to deliver:
A full range of processors including Intel Core i5, i7 and the unlocked3 Intel Core i9-9980HK for even more performance.
Up to 33% overall performance leap compared with a 3-year-old PC.4
Up to 28% increased responsiveness.5
Continuous performance optimization with Intel® Dynamic Tuning for all types of laptops.
How It Games: 9th Gen Intel Core mobile processors deliver desktop-caliber AAA gaming you can take anywhere, even while recording and streaming. With optimized performance on battery and Intel Wi-Fi 6 AX200 supporting multi-Gigabit Wi-Fi speed – based on the latest Wi-Fi standard offering low latency and ultra-fast connection speeds – laptop gaming gets a whole lot better. Gamers will be able to:
Enjoy immersive AAA gameplay with up to 56% FPS improvement on games like “Total War: Warhammer II”.6
Experience up to 38% faster turn time on games like “Civilization 6”.7
Game, record and stream without compromise and broadcast HD live streams up to 2.1 times faster, gen-over-gen.8
Break the gigabit barrier with the latest Wi-Fi 6 standard running on the Intel Wi-Fi 6 AX200 (Gig+) solution offering almost three times faster throughput and up to 75% latency reduction9 – pair it with Wi-Fi 6 (Gig+) routers based on Intel technology to unleash a great gaming experience.
How It Improves Creation: 9th Gen Intel Core mobile processors power faster video10 editing to tackle heavyweight creative tasks on the go. New Intel Optane memory H10 with solid-state storage provides the responsiveness of Intel Optane memory with the capacity of a QLC NAND SSD, speeds application and content loading,11 and Thunderbolt 3 enhances home and office creation via fast single-wire access to multiple 4K monitors, additional external storage and system charging. 9th Gen Intel Core mobile processors will deliver:
Up to 54% faster 4K video editing versus a 3-year-old PC.12
With up to 1TB of total storage, Intel Optane memory H10 with solid-state storage will have the capacity users need for their apps and files.
Up to 63% faster content creation versus a 3-year-old PC via Intel Optane memory H10 with solid-state storage versus a standalone TLC NAND SSD.13
Intel Wi-Fi 6 (Gig+) lets you share your 10GB multimedia files in less than one minute (almost three times faster than standard 2×2 AC Wi-Fi).14 When connected to a new Wi-Fi 6 (Gig+) router powered by Intel technology it is now possible to create, edit and share faster than ever before.
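The footnoted link rates make that file-sharing claim easy to sanity-check with back-of-the-envelope arithmetic. The sketch below uses the theoretical peak rates cited in the footnotes (2,402 Mbps for 2×2 160 MHz 802.11ax versus 867 Mbps for standard 2×2 80 MHz 802.11ac); real-world throughput will be lower:

```python
# Theoretical peak link rates cited in the footnotes (megabits per second).
WIFI6_MBPS = 2402   # 802.11ax, 2x2, 160 MHz ("Gig+")
WIFI5_MBPS = 867    # standard 802.11ac, 2x2, 80 MHz

# A 10 GB multimedia file, expressed in megabits (1 GB = 8,000 megabits).
file_megabits = 10 * 8 * 1000

wifi6_seconds = file_megabits / WIFI6_MBPS   # about 33 seconds
wifi5_seconds = file_megabits / WIFI5_MBPS   # about 92 seconds
speedup = wifi5_seconds / wifi6_seconds      # about 2.8x
```

At the peak rate, the 10GB transfer finishes in roughly 33 seconds — under a minute, and close to the "almost three times faster" (2.8x) figure in the footnotes.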
Where to Get It: Starting April 23, 2019, laptops powered by the 9th Gen Intel Core mobile processors will launch from OEMs including Acer, ASUS, Dell, HP, Lenovo and MSI, as they introduce the most compelling laptops for gaming and content creation.
New Desktop Processors: New additions to the 9th Gen Intel Core desktop processor family were also introduced today. The 9th Gen Intel Core desktop processor lineup now includes more than 25 total products with options ranging from Intel Core i3 up to Intel Core i9 with amazing performance and flexibility to meet a range of consumer needs from everyday productivity to gaming to content creation. The family also brings Pentium Gold and Celeron products to market for entry-level computing, giving consumers even more options to find the right desktop to fit their specific need and budgets. New capabilities include:
Up to 47% more FPS while gaming.15
Up to 2.1 times faster video editing compared with a 5-year-old PC for 4K and 360 video editing experiences.16
Intel Wi-Fi 6 (Gig+) with gigabit Wi-Fi speeds, delivering throughput almost three times faster than standard 2×2 AC and 40% faster than Intel® Wireless-AC (Gigabit).17
Up to 8 cores and 16 threads, up to 5 GHz maximum turbo frequency, up to 16 MB Intel Smart Cache and up to 40 platform PCIe lanes.
Testing concluded April 16, 2019, and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure.
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information about performance and benchmark results, visit http://www.intel.com/benchmarks
Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com.
Intel is a sponsor and member of the BenchmarkXPRT Development Community, and was the major developer of the XPRT family of benchmarks. Principled Technologies is the publisher of the XPRT family of benchmarks. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases.
Warning: Altering PC clock or memory frequency and/or voltage may (i) reduce system stability and useful life of the system, memory and processor; (ii) cause the processor and other system components to fail; (iii) cause reductions in system performance; (iv) cause additional heat or other damage; and (v) affect system data integrity. Intel assumes no responsibility that the memory, included if used with altered clock frequencies and/or voltages, will be fit for any particular purpose. Check with memory manufacturer for warranty and additional details.
1 NewZoo 2018 Global Games Market Report
2 Includes the effect of Intel® Thermal Velocity Boost (Intel® TVB), a feature that opportunistically and automatically increases clock frequency above single-core and multi-core Intel® Turbo Boost Technology frequencies based on how much the processor is operating below its maximum temperature and whether turbo power budget is available. The frequency gain and duration is dependent on the workload, capabilities of the processor and the processor cooling solution.
3 Altering clock frequency or voltage may damage or reduce the useful life of the processor and other system components, and may reduce system stability and performance. Product warranties may not apply if the processor is operated beyond its specifications. Check with the manufacturers of system and components for additional details.
4 As measured by SYSmark* 2018 comparing 9th Gen Intel® Core™ i7-9750H Processor vs. 6th Gen Intel® Core™ i7-6700HQ Processor.
5 As measured by SYSmark* 2018 Responsiveness Subscore 9th Gen Intel® Core™ i7-9750H Processor with 1TB Intel® Optane™ memory H10 with Solid State Storage drive vs. 6th Gen Intel® Core™ i7-6700HQ Processor with Intel® SSD 760P TLC SSD drive.
6 As measured by Total War: WARHAMMER II (Skaven Lab Mode) FPS Workload comparing 9th Gen Intel® Core™ i7-9750H vs. 6th Gen Intel® Core™ i7-6700HQ.
7 As measured by Civilization 6 Turn Time Workload comparing 9th Gen Intel® Core™ i7-9750H Processor vs. 6th Gen Intel® Core™ i7-6700HQ Processor.
8 As measured by Mega-tasking Gaming Scenario on Black Ops 4 comparing 9th Gen Intel® Core™ i9-9980HK vs. 8th Gen Intel® Core™ i9-8950HK.
9 75% latency reduction: Based on Intel simulation data (79%) of 802.11ax with and without OFDMA using 9 clients. Average latency without OFDMA is 36ms; with OFDMA, average latency is reduced to 7.6ms. Latency improvement requires that the 802.11ax (Wi-Fi 6) router and all clients support OFDMA.
10 As measured by Adobe Premiere Pro Video Editing Workload comparing 9th Gen Intel® Core™ i9-9980HK vs. 6th Gen Intel® Core™ i7-6700HQ Processor.
11 As measured by Media Project Load Workload comparing 9th Gen Intel® Core™ i7-9750H Processor with 1TB Intel® Optane™ memory H10 with Solid State Storage drive vs. 6th Gen Intel® Core™ i7-6700HQ Processor with Intel® SSD 760P TLC SSD drive.
12 As measured by Content Load workload on Intel® Core™ i7-9750H vs. Intel® Core™ i7-6700HQ.
13 As measured by 4k video editing workload on Intel® Core™ i7-9750H vs. Intel® Core™ i7-6700HQ.
14 Nearly 3X Faster: 802.11ax 2×2 160 MHz enables 2402 Mbps maximum theoretical data rates, ~3X (2.8X) faster than standard 802.11ac 2×2 80 MHz (867 Mbps) as documented in IEEE 802.11 wireless standard specifications, and require the use of similarly configured 802.11ax wireless network routers.
15 As measured by Hitman 2 FPS Workload comparing 9th Gen Intel® Core™ i5-9500 Processor vs. 4th Gen Intel® Core™ i5-4590S Processor.
16 As measured by Adobe* Lightroom Classic CC Photo Editing Workload comparing 9th Gen Intel® Core™ i5-9500 Processor vs. 4th Gen Intel® Core™ i5-4590S Processor.
17 Nearly 3X Faster: 802.11ax 2×2 160 MHz enables 2402 Mbps maximum theoretical data rates, ~3X (2.8X) faster than standard 802.11ac 2×2 80 MHz (867 Mbps) as documented in IEEE 802.11 wireless standard specifications, and require the use of similarly configured 802.11ax wireless network routers.
Magic AI is galloping into the internet of horses arena.
The Seattle-based startup, an angel-funded team of five, has been developing AI for stable managers and riders to monitor the health and security of horses from video feeds.
Image recognition has been a boon to agriculture businesses, including those in the cattle industry. Magic AI corrals algorithms for its AI-powered software to monitor video and help better manage horses, streaming the video to its servers for processing.
Magic AI founder and CEO Alexa Anthony knows the needs of horse owners. The daughter of a horse trainer, she grew up riding in the Seattle area and is a former NCAA national champion in horse jumping.
“If you have a Lamborghini, you have it in a garage with an alarm. Horses are oftentimes in a barn in remote places, without any security that they are OK when you are sleeping,” she said.
Magic AI’s StableGuard, a system of cameras that works with a mobile app to keep tabs on horses, provides GPU-driven video monitoring and emergency alerts. StableGuard can be configured to recognize riders and staff of stables as well as to send alerts if strangers enter.
Horse Data Hurdle
Building StableGuard wasn’t easy. The developers at Magic AI initially couldn’t find enough publicly available horse images to adequately train their deep neural networks. They began with MXNet and horse images from the classic ImageNet dataset, which proved problematic.
“They actually trained abysmally because the angle of our cameras is overhead, very different than ImageNet,” said Kyle Lampe, vice president of engineering at Magic AI. “That really threw off most of the things we used to train.”
Magic AI’s developers relied heavily on transfer learning to add to a number of different image classification networks. Lampe said that with enough new data — “terabytes and terabytes” of video images — they were able to successfully build on top of networks that had already been trained and used in competitions.
“When you do transfer learning, you’re putting images in after the fact and it is applying everything that it has learned before,” he said.
Developers at Magic AI relied on desktop GPUs as well as GPUs on AWS to handle the hefty training workloads for the deep neural networks.
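The transfer-learning recipe Lampe describes — keep a pretrained network's feature extractor frozen and train only a new classifier on its outputs — can be sketched in a few lines. This is a minimal NumPy illustration of the idea, not Magic AI's actual MXNet pipeline: the random projection stands in for a pretrained convolutional backbone, and the synthetic data stands in for overhead stall footage.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a frozen, pretrained backbone: a fixed projection plus ReLU.
# (In practice this would be a convolutional network pretrained on
# ImageNet-style photos; all that matters here is that it stays frozen.)
W_frozen = rng.normal(size=(64, 8))

def features(x):
    return np.maximum(x @ W_frozen, 0.0)

# New-task "images" (random vectors) whose labels are predictable from the
# pretrained features -- the premise that makes transfer learning work.
X = rng.normal(size=(400, 64))
F = features(X)
scores = F @ rng.normal(size=8)
y = (scores > np.median(scores)).astype(float)

# Train ONLY a new logistic-regression head; the backbone never updates.
w, b = np.zeros(8), 0.0
for _ in range(2000):
    z = np.clip(F @ w + b, -30.0, 30.0)
    p = 1.0 / (1.0 + np.exp(-z))          # sigmoid
    w -= 0.5 * F.T @ (p - y) / len(y)     # gradient step on the head only
    b -= 0.5 * (p - y).mean()

accuracy = ((F @ w + b > 0) == (y == 1)).mean()
```

Because only the small head is trained, the new task needs far less labeled data than training a network from scratch — which is why "terabytes and terabytes" of overhead video were enough to adapt networks trained on very different imagery.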
Horse Health Results
The original inspiration for Magic AI came when Anthony’s horse died of colic. Colic symptoms are fairly easy to spot — rolling on the ground, kicking at the stomach, pawing at the ground — and can be identified with image classification algorithms.
Today, Magic AI is adding to a growing list of health indicators for customers to track using its StableGuard system. StableGuard lets customers keep track of how often horses are eating and drinking, how long they’re on their feet and whether they’re blanketed when it’s cold, offering more ways to support horse wellness.
The company can also alert horse owners to signs that an animal is close to giving birth. “We can see signs that are indicative of birth. And then you can look at the live feed on your phone,” said Anthony.
Magic AI has a pilot customer, Thunderbird Show Park, in British Columbia, Canada. On that site, StableGuard is in 120 horse stalls. It offers the horse-monitoring service for $15 a day to those there for horse-jumping tournaments and other events.
Sites like these are powered by an on-site GPU, sizable hard drive storage and other computing resources to run Magic AI’s service. “I am excited to see how this technology can improve the wellness of animals globally,” said Anthony.
RockMass is digging a niche for itself in mining and tunnels.
The Toronto-based startup is developing an NVIDIA AI-powered mapping platform that can help engineers assess tunnel stability in mines and construction.
Today, geologists and engineers visually assess the risks of rock formations by standing five meters away from the rock as a safety precaution. That isn’t ideal for ensuring accurate results, said Shelby Yee, CEO and co-founder of RockMass.
“What they are doing right now takes about 90 minutes, and our technology can do it in about five minutes,” said Yee.
RockMass is having engineers in the field test out its handheld unit, the Mapper. It’s aimed at those in mining, geological exploration and civil engineering. The startup is developing the AI platform for robots, drones and handheld devices used to capture geological data.
The startup’s Mapper device offers a safer way to keep engineers farther from a possible tunnel collapse, as well as a faster system for gathering and processing data. Robots and drones using its platform could go into even more hazardous areas.
RockMass customers include Brazilian mining company Nexa Resources, which seeks increased automation and safety with use of the startup’s technology.
AI for Geotech
Engineers have for years surveyed the angles of rock surfaces using conventional equipment such as a theodolite, a scope-like device mounted on a tripod to take optical measurements. They seek out so-called planes of weakness, which mark potential failure points within tunnels and rock formations.
The engineers measure the surfaces of rock formations to collect data for building what are known as stereonets. Stereonets map three-dimensional forms, such as a boulder, for viewing in a two-dimensional display.
Engineers traditionally take the data from a site back to the office to transfer onto a computer to create a stereonet.
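To make "stereonet" concrete: plotting the pole of a measured plane comes down to two short formulas. This is a generic sketch using the standard Schmidt (equal-area) convention from structural geology, not RockMass's software; the example plane is hypothetical.

```python
import math

def pole_of_plane(dip_deg, dip_dir_deg):
    """Pole (normal) to a plane: plunges (90 - dip) toward dip direction + 180."""
    plunge = 90.0 - dip_deg
    trend = (dip_dir_deg + 180.0) % 360.0
    return trend, plunge

def schmidt_xy(trend_deg, plunge_deg):
    """Lower-hemisphere equal-area (Schmidt) projection on a unit-radius net.
    A horizontal line plots on the rim (r = 1); a vertical line at the center."""
    t = math.radians(trend_deg)
    # Equal-area radial distance, normalized so the primitive circle has r = 1.
    r = math.sqrt(2.0) * math.sin(math.radians(45.0 - plunge_deg / 2.0))
    # Trend is measured clockwise from north (the +y axis).
    return r * math.sin(t), r * math.cos(t)

# Example: a plane dipping 60 degrees toward azimuth 090 (due east).
trend, plunge = pole_of_plane(60.0, 90.0)   # pole plunges 30 toward 270
x, y = schmidt_xy(trend, plunge)            # 2D point on the stereonet
```

Each measured surface becomes one such point; clusters of points reveal the dominant orientations of planes of weakness.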
The startup’s technology promises an easier way. Its handheld device is packed with sensors for such measurements. Its lidar sensor and inertial measurement unit map the orientation of planes of weakness in rock formations. And it can do this in underground environments lacking GPS, wireless communication and light.
RockMass’s software relies on the information provided by these sensors to quickly identify useable data for engineers within minutes. The company is working to capture and process the data on the spot for field engineers. “You are able to see the data in real time,” Yee said.
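The core of that sensor pipeline — recovering a plane's orientation from a patch of lidar points — is classic least squares. Below is a sketch of one standard approach (SVD plane fitting on a synthetic patch); RockMass's actual algorithms are not public, so treat the names and the example plane as illustrative assumptions.

```python
import numpy as np

def fit_plane_normal(points):
    """Least-squares normal of an N x 3 patch of lidar points: the direction
    of least variance, i.e. the last right singular vector of the centered
    point cloud."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    # Orient the normal upward so dip and dip direction are well defined.
    return normal if normal[2] >= 0 else -normal

def dip_and_dip_direction(normal):
    """Convert a unit normal (x=east, y=north, z=up) to dip / dip direction."""
    nx, ny, nz = normal
    dip = np.degrees(np.arccos(np.clip(nz, -1.0, 1.0)))
    dip_dir = np.degrees(np.arctan2(nx, ny)) % 360.0
    return dip, dip_dir

# Synthetic patch on a plane dipping 45 degrees due east (z = -x):
xy = np.random.default_rng(0).uniform(-1.0, 1.0, size=(200, 2))
patch = np.column_stack([xy[:, 0], xy[:, 1], -xy[:, 0]])
dip, dip_dir = dip_and_dip_direction(fit_plane_normal(patch))
```

Running this fit over many small patches of the scan yields the dip and dip direction of each exposed surface, which is exactly the data engineers traditionally gathered one theodolite reading at a time.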
‘Computationally Demanding’ AI
RockMass’s platform for onsite data collection is computationally demanding, said CTO and co-founder Stuart Bourne. The company’s devices sport robotics capabilities from NVIDIA Jetson and rely on its support for CUDA, cuDNN and TensorRT software libraries.
“Jetson has very high computational power relative to how much energy it draws,” Bourne said.
The startup enlists CUDA libraries to process the data in real time on cloud instances running NVIDIA GPUs, generating stereonets for customers.
“Nobody is able to collect and process the data in the way that we do,” Yee said. “We are able to process in the cloud in real time simply because of the power of the GPUs.”
RockMass plans to further develop its drones and robots to launch pilots next year.