Back in the day, the annual SC supercomputing conference was filled with tabletops hung with research posters. Three decades on, the show’s Denver edition this week was a sea of sharp-angled booths, crowned with three-dimensional signage, promoting logos in a multitude of blues and reds.
But no spot on the SC19 show floor drew more of the show’s 14,000 attendees than NVIDIA’s booth, built around a broad, floor-to-ceiling triangle with 2,500 square feet of ultra-high-def LED screens. With a packed lecture hall on one side and HPC simulations playing on a second, it was the third wall that drew the most buzz.
Cycling through was a collection of AI-enhanced photos of several hundred GPU developers — grad students, CUDA pioneers, supercomputing rockstars — together with descriptions of their work.
Like accelerated computing’s answer to baseball cards, they were rendered into art using AI style transfer technology inspired by various painters — from the classicism of Vermeer to van Gogh’s impressionism to Paul Klee’s abstractions.
Meanwhile, NVIDIA sprinted through the show, kicking things off with a news-filled keynote by founder and CEO Jensen Huang, helping to power the research behind the two finalists for the Gordon Bell Prize, and joining in to celebrate its partner Mellanox.
And in its booth, 200 engineers took advantage of free AI training through the Deep Learning Institute, while attendees packed in shoulder to shoulder for dozens of tech talks from leading researchers.
Wall in the Family
Piecing together the Developer Wall project took a dozen NVIDIANs scrambling for weeks in their spare time. The team of designers, technologists and marketers created an app where developers could enter some background, which was paired with their photo after it had been run through style filters at DeepArt.io, a German startup that’s part of NVIDIA’s Inception startup incubator.
“What we’re trying to do is showcase and celebrate the luminaries in our field. The amazing work they’ve done is the reason this show exists,” said Doug MacMillian, a developer evangelist who helped run the big wall initiative.
Behind him flashed an image of Jensen Huang, rendered as if painted by Cezanne. Alongside him was John Stone, the legendary HPC researcher at the University of Illinois, as if painted by Vincent van Gogh. Close by was Erik Lindahl, who heads the international GROMACS molecular simulation project, right out of a Joan Miró painting. Paresh Kharya, a data center specialist at NVIDIA, looked like an abstracted sepia-tone circuit board.
Enabling the Best and Brightest
That theme — how NVIDIA’s working to accelerate the work of people in an ever-growing array of industries — continued behind the scenes.
As she stood on stage, she witnessed an event she’s spent years simulating purely with data – the fiery path that the Mars lander, a capsule the size of a two-story condo, will take as it slows in seven dramatic minutes from 12,000 miles an hour to gently stick its landing on the Red Planet.
“This is amazing,” she quietly said through tears. “I never thought I’d be able to visualize this.”
Flurry of News
Huang later took the stage and, in a sweeping two-hour keynote, set out a range of announcements showing how NVIDIA is helping others do their life’s work.
SC19 plays host to a series of awards throughout the show, and NVIDIA was featured in a number of them.
Both finalists for the Gordon Bell Prize for outstanding achievement in high performance computing — the ultimate winner, ETH Zurich, as well as University of Michigan — ran their work on Oak Ridge National Laboratory’s Summit supercomputer, powered by nearly 28,000 V100 GPUs.
And a seminal paper co-authored 11 years ago by NVIDIA’s Vasily Volkov and UC Berkeley’s James Demmel was recognized with the Test of Time Award for a work of lasting impact. The paper, which introduced a new way of thinking about and modeling algorithms on GPUs, has drawn nearly 1,000 citations.
This year, China’s Tsinghua University captured the top crown. It beat out 15 other undergrad teams using NVIDIA V100 Tensor Core GPUs in an immersive HPC challenge demonstrating the breadth of skills, technologies and science that it takes to build, maintain and use supercomputers. Tsinghua also won the IO500 competition, while two other prizes were won by Singapore’s Nanyang Technological University.
The teams came from xx different markets, including Germany, Latvia, Poland and Taiwan, in addition to China and Singapore.
Up Next: More Performance for the World’s Data Centers
NVIDIA’s frenetic week at SC19 ended with a look at what’s next, with Jensen joining Mellanox CEO Eyal Waldman on stage at an evening event hosted by the networking company, which NVIDIA agreed to acquire earlier this year.
Jensen and Eyal discussed how their partnership will enable the future of computing, with Jensen detailing the synergies between the companies. “Mellanox has an incredible vision,” Huang said. “In a couple years we’re going to bring more compute performance to data centers than all of the compute since the beginning of time.”
In a visit to Mobileye headquarters in Jerusalem on Thursday, Michigan Gov. Gretchen Whitmer announced a pilot program to enhance the safety of existing state and city fleets through the application of Mobileye 8 Connect aftermarket systems for collision avoidance. The trial is part of Michigan leaders’ objective to enhance road safety today while paving the way for the autonomous vehicles (AVs) of tomorrow.
In her meeting with Mobileye President and CEO Professor Amnon Shashua, Whitmer explored ways that Mobileye technology could be used to improve road safety, reduce collision-related costs, gain insight into local collision hotspots, and prepare the state for broad deployment of robotaxis and AVs.
“This program will demonstrate the potential of driving assistance technology to save lives, reduce collision-related costs and help diminish traffic congestion,” Whitmer said. “Our work with Mobileye highlights the number of contributions Michigan brings to the world of mobility and will help us advance technology and improve the quality of life for countless people. I’m proud to collaborate with Mobileye and eager to continue our work to transform the automotive landscape and solidify Michigan as a world leader in mobility.”
Students at the Massachusetts Institute of Technology are learning about autonomous driving by taking NVIDIA-powered data science workstations for a spin.
In an undergraduate robotics class at MIT, 17 students were organized into three teams and given a miniature racing car. Their task: teach it how to drive by itself through a complex course inside the basement of the university’s Stata Center.
Sertac Karaman, associate professor of aeronautics and astronautics at MIT, wanted to teach students the process of imitation learning, a technique that uses human demonstrations to train a self-driving model.
Students Wheel It in with Data Science Workstations
Through the process of imitation learning, the students needed to teach their car how to autonomously drive by training a TensorFlow neural network. But first, they needed to collect as much data as they could on the indoor course so the cars could learn how to navigate through the hallways and doors of the Stata Center.
Each car was equipped with an NVIDIA Jetson AGX Xavier embedded system-on-a-module for performance-driven autonomous machines. Using a joystick, the students manually drove the small car around the complex course and recorded data through a camera mounted on its front end.
Then the neural network, based on the NVIDIA PilotNet architecture, processed that data, learning how to map between observation and action — so the car could estimate steering angles based on what its camera sees.
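That observation-to-action mapping is the heart of behavior cloning. The class trained a TensorFlow CNN based on PilotNet; the sketch below is only an illustration of the same idea, substituting a linear least-squares fit on synthetic data (all array shapes and values here are invented, not from the course):

```python
import numpy as np

# Synthetic stand-in for recorded driving data: each "camera frame"
# is flattened to a feature vector, paired with the steering angle
# the human driver chose at that moment.
rng = np.random.default_rng(0)
n_frames, n_features = 500, 64
frames = rng.normal(size=(n_frames, n_features))
true_weights = rng.normal(size=n_features)
steering = frames @ true_weights  # the human demonstrations

# "Training" = fit a model that maps observation -> action.
# PilotNet does this with a convolutional network; a linear
# least-squares fit plays the same role in this toy setting.
weights, *_ = np.linalg.lstsq(frames, steering, rcond=None)

# Inference: estimate a steering angle for a new, unseen frame.
new_frame = rng.normal(size=n_features)
predicted_angle = new_frame @ weights
```

In the real pipeline the feature vectors would be camera images and the model a deep network, but the loop has the same shape: fit on human demonstrations, then predict steering commands for observations the car has never seen.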
The students used the advanced computing capabilities of the data science workstations, powered by NVIDIA Quadro RTX GPUs, to train their TensorFlow models, which were then deployed on the miniature race cars for on-device AI inference.
The data science workstations provided massive speedups in performance to greatly reduce iteration times. This allowed the students to quickly train and test various models to find the best one for their race car.
“The students were successful in their projects because the time it took for training the models was faster than we’ve ever seen,” said Karaman. “The accelerated computing capabilities of NVIDIA data science workstations allowed the class to iterate multiple times, and the best performing race cars were trained in only a few minutes.”
Karaman plans to teach the robotics class once again this year, using the data science workstations and the pre-installed AI software stack.
In 2004, Intel joined more than 100 other corporations to call for concrete action to promote diversity in the legal profession. That declaration reflected the shared recognition by Intel and our peers that “the legal and business interests of our clients require legal representation that reflects the diversity of our employees, customers and the communities where we do business.” We pledged to “make decisions regarding which law firms represent our companies based in significant part on the diversity performance of the firms” and “to end or limit our relationships with firms whose performance consistently evidences a lack of meaningful interest in being diverse.” We earnestly believed that our call to action would prompt meaningful change in the legal profession and increase the number of women and minority attorneys representing Intel as outside counsel.
Fifteen years have passed since that call to action, and many corporations and law firms have made progress. At Intel today, we believe our outside counsel roster is among corporate America’s most diverse, and we regularly partner with our outside firms to provide opportunities to diverse lawyers, to set challenging representation goals and to award bonuses for outstanding progress. Over the years, we and our law firm partners have pioneered or adopted nearly every available tool to increase the diversity of our legal teams, including mentoring programs and clerkships. These improvements are part and parcel with Intel’s values-driven desire to become the world’s most inclusive company – a goal we support with our time, talents and resources. For example, in 2015 we set a goal to reach full market representation of women and underrepresented minorities in our U.S. workforce by 2020. We committed $300 million to achieve this goal and to support the broader goal of improving diversity and inclusion in the entire technology industry. We achieved full market representation of women and underrepresented minorities in our U.S. workforce in 2018, two years ahead of schedule.
But despite these improvements, for the legal profession overall, progress has been frustratingly slow – especially when it comes to retention and promotion. According to most surveys, at large U.S. law firms, only about 20% of full equity partners are women, and only about 8 or 9% are underrepresented minorities. Indeed, the data suggest that the largest 200 firms in the country as a group will not reach 50% women and 33% racial and ethnic minorities in their equity partner ranks – which would mirror the composition of recent law school graduating classes – for at least another 50 years.
That sluggish progress is not enough for our profession, and it certainly is not enough for Intel – where we pride ourselves on taking bold risks to achieve rapid progress. In 1965, Intel co-founder Gordon Moore penned Moore’s Law, a prediction of constant, momentous improvement that has become the driving force for progress in the computer industry. Our industry’s belief in our ability to achieve the core promise of Gordon’s prediction has driven thousands of engineers and scientists to produce ever-faster computer chips.
We believe that driving real progress in the legal profession’s diversity requires taking risk and being audacious, in the best spirit of Moore’s Law. In fact, we need a Moore’s Law of diversity in the legal profession and, even more, we need the fearlessness that goes with it. Today, we are taking a step toward that ambition by announcing something we call the Intel Rule:
Beginning Jan. 1, 2021, Intel will not retain or use outside law firms in the U.S. that are average or below average on diversity. Firms are eligible to do legal work for Intel only if, as of that date and thereafter, they meet two diversity criteria: at least 21% of the firm’s U.S. equity partners are women and at least 10% of the firm’s U.S. equity partners are underrepresented minorities (which, for this purpose, we define as equity partners whose race is other than full white/Caucasian, and partners who have self-identified as LGBTQ+, disabled or as veterans).
The Intel Rule adds above-average diversity to a small list of mandatory items we require from every lawyer in every retention: results, value, professionalism and diversity. We understand that doing this may deny to us the services of many highly skilled lawyers, perhaps including the services of some law firms with which we have worked for decades. But Intel cannot abide the current state of progress – it is not enough, and progress is not happening fast enough. At Intel, below average and average on diversity is no longer good enough to be a member of our regular outside counsel roster.
Initially, our diversity criteria will focus on equity partners. As many firms now have multiple tiers of equity partners, over time we intend to develop the data necessary to adopt and apply diversity criteria based only on the top tier of equity partners at firms. We also intend to develop the data necessary to apply our diversity criteria to firms worldwide. And, we will regularly increase our required percentages so that Intel is always using firms that are at the forefront of our profession’s progress on diversity.
In full candor, our internal discussions about the Intel Rule have not been without debate, even among those most passionate about diversity. There may be instances where a real need for a specialty, relationship or lawyer (for example, a local counsel relationship) requires us to depart from the Intel Rule, but we will make such exceptions rarely and only after determining that no suitable diverse firm is available. In the area of patent prosecution, which presents distinct challenges in the area of gender diversity, we will make appropriate adjustments to balance our goal to drive greater diversity in the legal profession with our needs for specialized (and sometimes unique) technical skills.
Today, we call on corporate law departments to also renew our shared commitments to take concrete steps to develop and hire diverse outside teams. At Intel, we pledge our more than $300 million in annual outside counsel spending to this goal, in the belief that declining to hire firms that are average or below average on diversity will spur the progress our profession needs to bring about change now, not in 50 years – an effort worthy of the spirit of Moore’s Law.
Steven R. Rodgers is executive vice president and general counsel at Intel Corporation.
At Oracle, customer service chatbots use conversational AI to respond to consumers with more speed and complexity.
Suhas Uliyar, vice president for product management for digital assistance and AI at Oracle, stopped by to talk to AI Podcast host Noah Kravitz about how the newest wave of conversational AI can keep up with the nuances of human conversation.
Many chatbots frustrate consumers because of their static nature. Asking a question or using the wrong keyword confuses the bot and prompts it to start over or make the wrong selection.
Uliyar says that Oracle’s digital assistant uses a sequence-to-sequence algorithm to understand the intricacies of human speech, and react to unexpected responses.
Their chatbots can “switch the context, keep the memory, give you the response and then you can carry on with the conversation that you had. That makes it natural, because we as humans fire off on different tangents at any given moment.”
Key Points From This Episode:
The contextual questions that often occur in normal conversation stump single-intent systems, but the most recent iteration is capable of answering simple questions quickly and remembering customers.
The next stage in conversational AI, Uliyar believes, will allow bots to learn about users in order to give them recommendations or take action for them.
Jared Ritter, the senior director of wireless engineering at Charter Communications, describes their innovative approach to data collection on customer feedback. Rather than retroactively accessing the data to fix problems, Charter uses AI to evaluate data constantly to predict issues and address them as early as possible.
What would the future of intelligent devices look like if we could bounce from using Amazon’s Alexa to order a new book to Google Assistant to schedule our next appointment, all in one conversation? Xuchen Yao, the founder of AI startup KITT.AI, discusses the toolkit that his company has created to achieve a “hands-free” experience.
Aakash Indurkha, head of machine learning projects at AI-based analytics platform Virtualitics, explains how the company is bringing creativity to data science using immersive visualization. Their software bridges the gap created by a lack of formal training to help inexperienced users identify anomalies on their own, and gives experts the technology to demonstrate their complex calculations.
From animators creating complex scenes with realistic 3D characters to architects designing buildings with accurate reflections, real-time ray tracing and AI are proving to be game-changers across industries.
And now more of the world’s top applications have begun shipping with NVIDIA RTX technology to let artists explore their creativity further.
Among the latest this week: Chaos Group’s V-Ray and Blender’s Cycles. V-Ray is the software renderer used by professionals across industries like 3D animation, visual effects, architecture, automotive and product design. Cycles is an open source production renderer that uses NVIDIA OptiX to offer a wide range of features to enhance 3D animation, like subsurface scattering and motion blur.
“Accelerating artist productivity is always our top priority, so we’re quick to take advantage of the latest ray tracing hardware breakthroughs,” said Phillip Miller, vice president of product management at Chaos Group. “By supporting NVIDIA RTX in V-Ray GPU, we’re bringing our customers an exciting new boost in their GPU production rendering speeds.”
Powering this technology are NVIDIA Quadro RTX and GeForce RTX GPUs, like the ones found in RTX Studio-branded systems from Dell, HP, Lenovo and other OEMs. These systems are designed to meet creators’ demands for performance and reliability, and include carefully selected hardware and software specifications as well as the latest NVIDIA Studio Drivers.
With NVIDIA RTX, V-Ray delivers interactive rendering and massive speed-ups in performance. The new updates will speed up GPU rendering by an average of 40 percent when compared to general GPU acceleration running on the same RTX hardware. The RTX-enabled version of V-Ray lets artists easily render large, complex scenes with photorealistic details faster than before.
Blender’s RTX support in the latest version of the Cycles renderer provides real-time rendering so users can create high-quality images, with performance up to 4x faster than on CPU.
“In Blender Cycles we’re always looking to reduce render time so artists can iterate faster,” said Brecht Van Lommel, lead architect at Blender. “With NVIDIA RTX, core ray-tracing operations are now hardware accelerated by the GPU, making this the fastest version of Cycles yet.”
Other applications recently releasing new features with RTX-powered ray tracing and AI include:
Adobe’s latest version of video editing tool Premiere Pro, released last week, includes a new GPU-accelerated AI feature called Auto Reframe. Using Adobe Sensei AI, Auto Reframe allows users to adjust the aspect ratio of their videos while keeping the important content within the frame. With NVIDIA RTX, Auto Reframe performs up to 400 percent faster than CPU-only processing.
Officially released last week, Substance Alchemist is the material enthusiast’s toolbox that helps artists create collections of materials by combining and tweaking existing resources, or by building new materials from photos and high-res scans. NVIDIA RTX GPUs enable material creators to leverage a de-lighting feature that uses AI to remove highlights and shadows from an image to get a realistic material in 3D.
Adobe Dimension is a 3D design and layout software that was recently updated with a GPU rendering (beta) integration, which utilizes dedicated ray-tracing processors on RTX for accelerated 3D rendering. With interactive ray tracing, creative professionals can create high-end visuals with realistic lighting, reflections and shadows.
Luxion KeyShot is the software leading marketing and design professionals use for 3D renderings, animations and interactive visuals. KeyShot 9, released on Nov. 5, integrates the NVIDIA OptiX ray-tracing engine, providing RTX-accelerated ray tracing and AI denoising so designers can create realistic images of their 3D data with speed, quality and accuracy.
The Enscape rendering engine delivers real-time visualization for architectural designs and models. Enscape is one of the first rendering engines for architects to support NVIDIA RTX, which speeds up ray-tracing calculations for amazing lighting and reflections, and accelerates the rendering process for more complex geometry and environments.
With SOLIDWORKS Visualize, professionals can transform 3D CAD data into photo-quality images. SOLIDWORKS Visualize 2020 SP0 and 2019 SP5 are the first publicly available releases that take full advantage of NVIDIA RTX, delivering accelerated rendering performance and AI denoising up to 50 percent faster than previous releases.
MODO is a creative 3D modeling, texturing and rendering tool used by artists for exploring and developing ideas — from games to innovative product designs. Foundry’s latest release, MODO 13.2, introduces the new mPath path-tracing renderer with support for NVIDIA RTX ray tracing, providing artists, designers and engineers with accelerated rendering, denoising and real-time feedback.
Epic Games is among the early adopters of RTX. Unreal Engine, Epic’s industry-leading game engine, supports RTX ray tracing so artists can create photorealistic renders and immersive AR/VR experiences with subtle lighting effects through features like ambient occlusion, reflections, shadows, global illumination and more. Look for the latest release of Unreal Engine 4.24 coming in December.
To get the best experience with the latest RTX creative and design applications, download our new NVIDIA Studio driver, released this week with optimizations and support for V-Ray, Blender and more. Learn how our Studio Driver program delivers max performance and stability to your favorite creative apps.
For developers looking to get the most out of RTX GPUs, learn more about integrating OptiX 7 into applications.