NVIDIA and Arm to Create World-Class AI Research Center in Cambridge

Artificial intelligence is the most powerful technology force of our time. 

It is the automation of automation, where software writes software. While AI began in the data center, it is moving quickly to the edge — to stores, warehouses, hospitals, streets, and airports, where smart sensors connected to AI computers can speed checkouts, direct forklifts, orchestrate traffic, and save power. In time, there will be trillions of these small autonomous computers powered by AI, connected by massively powerful cloud data centers in every corner of the world.

But in many ways, the field is just getting started. That’s why we are excited to be creating a world-class AI laboratory in Cambridge, at the Arm headquarters: a Large Hadron Collider or Hubble telescope, if you like, for artificial intelligence.

NVIDIA, together with Arm, is uniquely positioned to launch this effort. NVIDIA is the leader in AI computing, while Arm is present across a vast ecosystem of edge devices, with more than 180 billion units shipped. With this newly announced combination, we are creating the leading computing company for the age of AI. 

Arm is an incredible company and it employs some of the greatest engineering minds in the world. But we believe we can make Arm even more incredible and take it to even higher levels. We want to propel it — and the U.K. — to global AI leadership.

We will create an open center of excellence in the area once home to giants like Isaac Newton and Alan Turing, for whom key NVIDIA technologies are named. Here, leading scientists, engineers and researchers from the U.K. and around the world will come to develop their ideas, collaborate and conduct their ground-breaking work in areas like healthcare, life sciences, self-driving cars and other fields. We want the U.K. to attract the best minds and talent from around the world.

The center in Cambridge will include: 

  • An Arm/NVIDIA-based supercomputer. Expected to be one of the most powerful AI supercomputers in the world, this system will combine state-of-the-art Arm CPUs, NVIDIA’s most advanced GPU technology, and NVIDIA Mellanox DPUs, along with high-performance computing and AI software from NVIDIA and our many partners. For reference, the world’s fastest supercomputer, Fugaku in Japan, is Arm-based, and NVIDIA’s own supercomputer Selene is the seventh most powerful system in the world.
  • Research Fellowships and Partnerships. In this center, NVIDIA will expand its research partnerships in the U.K. with academia and industry, covering leading-edge work in healthcare, autonomous vehicles, robotics, data science and more. NVIDIA already has successful research partnerships with King’s College and Oxford.
  • AI Training. NVIDIA’s education wing, the Deep Learning Institute, has trained more than 250,000 students on both fundamental and applied AI. NVIDIA will create an institute in Cambridge, and make our curriculum available throughout the U.K. This will provide both young people and mid-career workers with new AI skills, creating job opportunities and preparing the next generation of U.K. developers for AI leadership. 
  • Startup Accelerator. Much of the leading-edge work in AI is done by startups. NVIDIA Inception, a startup accelerator program, has more than 6,000 members — with more than 400 based in the U.K. NVIDIA will further its investment in this area by providing U.K. startups with access to the Arm supercomputer, connections to researchers from NVIDIA and partners, technical training and marketing promotion to help them grow. 
  • Industry Collaboration. The NVIDIA AI research facility will be an open hub for industry collaboration, providing a uniquely powerful center of excellence in Britain. NVIDIA’s industry partnerships include GSK, Oxford Nanopore and other leaders in their fields. From helping to fight COVID-19 to finding new energy sources, NVIDIA is already working with industry across the U.K. today — but we can and will do more. 

We are ambitious. We can’t wait to build on the foundations created by the talented minds of NVIDIA and Arm to make Cambridge the next great AI center for the world. 

AI of the Storm: How We Built the Most Powerful Industrial Computer in the U.S. in Three Weeks During a Pandemic

In under a month amid the global pandemic, a small team assembled the world’s seventh-fastest computer.

Today that mega-system, called Selene, communicates with its operators on Slack, has its own robot attendant and is driving AI forward in automotive, healthcare and natural-language processing.

While many supercomputers tap exotic, proprietary designs that take months to commission, Selene is based on an open architecture NVIDIA shares with its customers.

The Argonne National Laboratory, outside Chicago, is using a system based on Selene’s DGX SuperPOD design to research ways to stop the coronavirus. The University of Florida will use the design to build the fastest AI computer in academia.

DGX SuperPODs are driving business results for companies like Continental in automotive, Lockheed Martin in aerospace and Microsoft in cloud-computing services.

Birth of an AI System

The story of how and why NVIDIA built Selene starts in 2015.

NVIDIA engineers started their first system-level design with two motivations. They wanted to build something both powerful enough to train the AI models their colleagues were building for autonomous vehicles and general purpose enough to serve the needs of any deep-learning researcher.

The result was the SATURNV cluster, born in 2016 and based on the NVIDIA Pascal GPU. When the more powerful NVIDIA Volta GPU debuted a year later, the budding systems group’s motivation and its designs expanded rapidly.

AI Jobs Grow Beyond the Accelerator

“We’re trying to anticipate what’s coming based on what we hear from researchers, building machines that serve multiple uses and have long lifetimes, packing as much processing, memory and storage as possible,” said Michael Houston, a chief architect who leads the systems team.

As early as 2017, “we were starting to see new apps drive the need for multi-node training, demanding very high-speed communications between systems and access to high-speed storage,” he said.

AI models were growing rapidly, requiring multiple GPUs to handle them. Workloads were demanding new computing styles, like model parallelism, to keep pace.

So, in fast succession, the team crafted ever larger clusters of V100-based NVIDIA DGX-2 systems, called DGX PODs. They used 32, then 64 DGX-2 nodes, culminating in a 96-node architecture dubbed the DGX SuperPOD.

They christened it Circe for the irresistible Greek goddess. It debuted in June 2019 at No. 22 on the TOP500 list of the world’s fastest supercomputers and currently holds No. 23.

Cutting Cables in a Computing Jungle

Along the way, the team learned lessons about networking, storage, power and thermals. Those learnings got baked into the latest NVIDIA DGX systems, reference architectures and today’s 280-node Selene.

In the race through ever larger clusters to get to Circe, some lessons were hard won.

“We tore everything out twice, we literally cut the cables out. It was the fastest way forward, but it still had a lot of downtime and cost. So we vowed to never do that again and set ease of expansion and incremental deployment as a fundamental design principle,” said Houston.

The team redesigned the overall network to simplify assembling the system.

They defined modules of 20 nodes connected by relatively simple “thin switches.” Each of these so-called scalable units could be laid down, cookie-cutter style, turned on and tested before the next one was added.

The design let engineers specify set lengths of cables that could be bundled together with Velcro at the factory. Racks could be labeled and mapped, radically simplifying the process of filling them with dozens of systems.

Doubling Up on InfiniBand

Early on, the team learned to split up compute, storage and management fabrics into independent planes, spreading them across more, faster network-interface cards.

The number of NICs per GPU doubled to two, and their speeds doubled as well, going from 100G InfiniBand in Circe to 200G HDR InfiniBand in Selene. Twice as many NICs running twice as fast yielded a 4x increase in effective node bandwidth.

Likewise, memory and storage links grew in capacity and throughput to handle jobs with hot, warm and cold storage needs. Four storage tiers spanned everything from 100 TB/s memory links to 100 GB/s storage pools.

Power and thermals stayed within air-cooled limits. The default designs used 35kW racks typical in leased data centers, but they can stretch beyond 50kW for the most aggressive supercomputer centers and down to 7kW racks some telcos use.

Seeking the Big, Balanced System

The net result is a more balanced design that can handle today’s many different workloads. That flexibility also gives researchers the freedom to explore new directions in AI and high performance computing.

“To some extent HPC and AI both require max performance, but you have to look carefully at how you deliver that performance in terms of power, storage and networking as well as raw processing,” said Julie Bernauer, who leads an advanced development team that’s worked on all of NVIDIA’s large-scale systems.

A portrait of Selene by the numbers

Skeleton Crews on Strict Protocols

The gains paid off in early 2020.

Within days of the pandemic hitting, the first NVIDIA Ampere architecture GPUs arrived, and engineers faced the job of assembling the 280-node Selene.

In the best of times, it can take dozens of engineers a few months to assemble, test and commission a supercomputer-class system. NVIDIA had to get Selene running in a few weeks to participate in industry benchmarks and fulfill obligations to customers like Argonne.

And engineers had to stay well within public-health guidelines of the pandemic.

“We had skeleton crews with strict protocols to keep staff healthy,” said Bernauer.

“To unbox and rack systems, we used two-person teams that didn’t mix with the others — they even took vacation at the same time. And we did cabling with six-foot distances between people. That really changes how you build systems,” she said.

Even with the COVID restrictions, engineers racked up to 60 systems in a day, the maximum their loading dock could handle. Virtual log-ins let administrators validate cabling remotely, testing the 20-node modules as they were deployed.

Bernauer’s team put several layers of automation in place. That cut the need for people at the co-location facility where Selene was built, a block from NVIDIA’s Silicon Valley headquarters.

Slacking with a Supercomputer

Selene talks to staff over a Slack channel as if it were a co-worker, reporting loose cables and isolating malfunctioning hardware so the system can keep running.

“We don’t want to wake up in the night because the cluster has a problem,” Bernauer said.

It’s part of the automation customers can access if they follow the guidance in the DGX POD and SuperPOD architectures.
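
The kind of automation described is simple to sketch. Below is a minimal, hypothetical health-check loop that posts alerts to a Slack channel through an incoming webhook; the webhook URL, node names and check logic are all invented for illustration, not NVIDIA’s actual tooling:

    import requests  # standard HTTP client; Slack incoming webhooks accept JSON posts

    SLACK_WEBHOOK = "https://hooks.slack.com/services/EXAMPLE"  # hypothetical URL

    def check_node(node):
        """Placeholder health check; a real cluster would query fabric and BMC telemetry."""
        return []  # e.g., ["ib0 link flapping", "fan 3 degraded"]

    def report(node, faults):
        # One message per faulty node, so operators see issues in-channel.
        text = f":warning: {node}: " + "; ".join(faults)
        requests.post(SLACK_WEBHOOK, json={"text": text}, timeout=10)

    for node in (f"selene-{i:03d}" for i in range(280)):  # 280 DGX A100 nodes
        faults = check_node(node)
        if faults:
            report(node, faults)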

Thanks to this approach, the University of Florida, for example, is expected to rack and power up a 140-node extension to its HiPerGator system, switching on the most powerful AI supercomputer in academia within as little as 10 days of receiving it.

As an added touch, the NVIDIA team bought a telepresence robot from Double Robotics so non-essential designers sheltering at home could maintain daily contact with Selene. Tongue-in-cheek, they dubbed it Trip, given early concerns that essential technicians on site might bump into it.

The fact that Trip is powered by an NVIDIA Jetson TX2 module was an added attraction for team members who imagined some day they might tinker with its programming.

Trip helped engineers inspect Selene while it was under construction.

Since late July, Trip’s been used regularly to let them virtually drive through Selene’s aisles, observing the system through the robot’s camera and microphone.

“Trip doesn’t replace a human operator, but if you are worried about something at 2 a.m., you can check it without driving to the data center,” she said.

Delivering HPC, AI Results at Scale

In the end, it’s all about the results, and they came fast.

In June, Selene hit No. 7 on the TOP500 list and No. 2 on the Green500 list of the most power-efficient systems. In July, it broke records in all eight system tests for AI training performance in the latest MLPerf benchmarks.

“The big surprise for me was how smoothly everything came up given we were using new processors and boards, and I credit all the testing along the way,” said Houston. “To get this machine up and do a bunch of hard back-to-back benchmarks gave the team a huge lift,” he added.

The work pre-testing NGC containers and HPC software for Argonne was even more gratifying. The lab is already hammering on hard problems in protein docking and quantum chemistry to shine a light on the coronavirus.

Separately, Circe donates many of its free cycles to the Folding@Home initiative that fights COVID.

At the same time, NVIDIA’s own researchers are using Selene to train autonomous vehicles and refine conversational AI, nearing advances they’re expected to report soon. They are among more than a thousand jobs run, often simultaneously, on the system so far.

Meanwhile the team already has on the whiteboard ideas for what’s next. “Give performance-obsessed engineers enough horsepower and cables and they will figure out amazing things,” said Bernauer.

At top: An artist’s rendering of a portion of Selene.

University of Florida, NVIDIA to Build Fastest AI Supercomputer in Academia

The University of Florida and NVIDIA Tuesday unveiled a plan to build the world’s fastest AI supercomputer in academia, delivering 700 petaflops of AI performance.

The effort is anchored by a $50 million gift: $25 million from alumnus and NVIDIA co-founder Chris Malachowsky and $25 million in hardware, software, training and services from NVIDIA.

“We’ve created a replicable, powerful model of public-private cooperation for everyone’s benefit,” said Malachowsky, who serves as an NVIDIA Fellow, in an online event featuring leaders from both UF and NVIDIA.

UF will invest an additional $20 million to create an AI-centric supercomputing and data center.

The $70 million public-private partnership promises to make UF one of the leading AI universities in the country, advance academic research and help address some of the state’s most complex challenges.

“This is going to be a tremendous partnership,” Florida Gov. Ron DeSantis said. “As we look to keep our best talent in state, this will be a significant carrot; you’ll also see people around the country want to come to Florida.”

Working closely with NVIDIA, UF will boost the capabilities of its existing supercomputer, HiPerGator, with the recently announced NVIDIA DGX SuperPOD architecture. The system will be up and running by early 2021, just a few weeks after it’s delivered.

This gives faculty and students within and beyond UF the tools to apply AI across a multitude of areas to address major challenges such as rising seas, aging populations, data security, personalized medicine, urban transportation and food insecurity. UF expects to create 30,000 AI-enabled graduates by 2030.

“The partnership here with UF, the state of Florida, and NVIDIA, anchored by Chris’ generous donation, goes beyond just money,” said NVIDIA CEO Jensen Huang, who founded NVIDIA in 1993 along with Malachowsky and Curtis Priem. “We are excited to contribute NVIDIA’s expertise to work together to make UF a national leader in AI and help address not only the region’s, but the nation’s challenges.”

UF, ranked seventh among public universities in the United States by U.S. News & World Report and aiming to break into the top five, offers an extraordinarily broad range of disciplines, Malachowsky said.

The region is also a “living laboratory for some of society’s biggest challenges,” Malachowsky said.

Regional, National AI Leadership

The effort aims to help define a research landscape to deal with the COVID-19 pandemic, which has seen supercomputers take a leading role.

“Our vision is to become the nation’s first AI university,” University of Florida President Kent Fuchs said. “I am so grateful again to Mr. Malachowsky and NVIDIA CEO Jensen Huang.”

State and regional leaders already look to the university to bring its capabilities to bear on an array of regional and national issues.

Among them: supporting agriculture in a time of climate change, addressing the needs of an aging population, and managing the effects of rising sea levels in a state with more than 1,300 miles of coastline.

And to ensure no community is left behind, UF plans to promote wide accessibility to these computing capabilities.

As part of this, UF will:

  • Establish UF’s Equitable AI program, to bring faculty members across the university together to create standards and certifications for developing tools and solutions that are cognizant of bias, unethical practice and legal and moral issues.
  • Partner with industry and other academic groups, such as the Inclusive Engineering Consortium, whose students will work with members to conduct research and recruitment to UF graduate programs.

Broad Range of AI Initiatives

Malachowsky has served in a number of leadership roles as NVIDIA has grown from a startup to the global leader in visual and parallel computing. A recognized authority on integrated-circuit design and methodology, he has authored close to 40 patents.

In addition to holding a BSEE from the University of Florida, he has an MSCS from Santa Clara University. He has been named a distinguished alumni of both universities, in addition to being inducted last year into the Florida Inventors Hall of Fame.

UF is the first institution of higher learning in the U.S. to receive NVIDIA DGX A100 systems. These systems are based on the modular architecture of the NVIDIA DGX SuperPOD, which enables the rapid deployment and scaling of massive AI infrastructure.

UF’s HiPerGator 3 supercomputer will integrate 140 NVIDIA DGX A100 systems powered by a combined 1,120 NVIDIA A100 Tensor Core GPUs. It will include 4 petabytes of high-performance storage. An NVIDIA Mellanox HDR 200Gb/s InfiniBand network will provide high-throughput, extremely low-latency connectivity.

DGX A100 systems are built to make the most of these capabilities as a single software-defined platform. NVIDIA DGX systems are already used by eight of the top 10 U.S. national universities.

That platform includes the most advanced suite of AI application frameworks in the world. It’s a software suite that covers data analytics, AI training and inference acceleration, and recommendation systems. Its multi-modal capabilities combine sound, vision, speech and a contextual understanding of the world around us.

Together, these tools have already had a significant impact on healthcare, transportation, science, interactive appliances, the internet and other areas.

More Than Just a Machine

Friday’s announcement, however, goes beyond any single, if singular, machine.

NVIDIA will also contribute its AI expertise to UF through ongoing support and collaboration across the following initiatives:

  • The NVIDIA Deep Learning Institute will collaborate with UF on developing new curriculum and coursework for both students and the community, including programming tuned to the needs of young adults and teens to encourage their interest in STEM and AI, better preparing them for future educational and employment opportunities.
  • UF will become the site of the latest NVIDIA AI Technology Center, where UF Graduate Fellows and NVIDIA employees will work together to advance AI.
  • NVIDIA solution architects and product engineers will partner with UF on the installation, operation and optimization of the NVIDIA-based supercomputing resources on campus, including the latest AI software applications.

UF will also make investments all around its new machine, well beyond the $20 million targeted at upgrading its data center.

Collectively, all of the data sciences-related activities and programs — and UF’s new supercomputer — will support the university’s broader AI-related aspirations.

To support that effort, the university has committed to fill 100 new faculty positions in AI and related fields, making it one of the top AI universities in the country.

That’s in addition to the 500 recently hired faculty across disciplines, many of whom will weave AI into their teaching and research.

“It’s been thrilling to watch all this,” Malachowsky said. “It provides a blueprint for how other states can work with their region’s resources to make similar investments that bring their residents the benefits of AI, while bolstering our nation’s competitiveness, capabilities, and expertise.”

Learning Life’s ABCs: AI Models Read Proteins to Fight COVID-19

Ahmed Elnaggar and Michael Heinzinger are helping computers read proteins as easily as you read this sentence.

The researchers are applying the latest AI models used to understand text to the field of bioinformatics. Their work could accelerate efforts to characterize living organisms like the coronavirus.

By the end of the year, they aim to launch a website where researchers can plug in a string of amino acids that describe a protein. Within seconds, it will provide some details of the protein’s 3D structure, a key to knowing how to treat it with a drug.

Today, researchers typically search databases to get this kind of information. But the databases are growing rapidly as more proteins are sequenced, so a search can take up to 100 times longer than the approach using AI, depending on the size of a protein’s amino acid string.

In cases where a particular protein hasn’t been seen before, a database search won’t provide any useful results — but AI can.

“Twelve of the 14 proteins associated with COVID-19 are similar to well validated proteins, but for the remaining two we have very little data — for such cases, our approach could help a lot,” said Heinzinger, a Ph.D. candidate in computational biology and bioinformatics.

While time-consuming, methods based on database searches have been 7-8 percent more accurate than previous AI methods. But using the latest models and datasets, Elnaggar and Heinzinger cut the accuracy gap in half, paving the way for a shift to using AI.

AI Models, GPUs Drive Biology Insights

“The speed at which these AI algorithms are improving makes me optimistic we can close this accuracy gap, and no field has such fast growth in datasets as computational biology, so combining these two things I think we will reach a new state of the art soon,” said Heinzinger.

“This work couldn’t have been done two years ago,” said Elnaggar, an AI specialist with a Ph.D. in transfer learning. “Without the combination of today’s bioinformatics data, new AI algorithms and the computing power from NVIDIA GPUs, it couldn’t be done,” he said.

Elnaggar and Heinzinger are team members in the Rostlab at the Technical University of Munich, which helped pioneer this field at the intersection of AI and biology. Burkhard Rost, who heads the lab, wrote a seminal paper in 1993 that set the direction.

The Semantics of Reading a Protein

The underlying concept is straightforward. Proteins, the building blocks of life, are made up of strings of amino acids that need to be interpreted sequentially, just like words in a sentence.

So, researchers like Rost started applying emerging work in natural-language processing to understand proteins. But in the 1990s they had very little data on proteins, and the AI models were still fairly crude.

Fast forward to today and a lot has changed.

Sequencing has become relatively fast and cheap, generating massive datasets. And thanks to modern GPUs, advanced AI models such as BERT can interpret language in some cases better than humans.

AI Models Grow 6x in Sophistication

The breakthroughs in natural-language processing have been particularly breathtaking. Just 18 months ago, Elnaggar and Heinzinger reported on work using a version of recurrent neural network models with 90 million parameters; this month their work leveraged Transformer models with 567 million parameters.

“Transformer models are hungry for compute power, so to do this work we used 5,616 GPUs on the Summit supercomputer and even then it took up to two days to train some of the models,” said Elnaggar.

Running the models on thousands of Summit’s nodes presented challenges.

Elnaggar tells a story familiar to those who work on supercomputers. He needed lots of patience to sync and manage files, storage, comms and their overheads at such a scale. He started small, working on a few nodes, and moved a step at a time.

Patient, stepwise work paid off in scaling complex AI algorithms across thousands of GPUs on the Summit supercomputer.

“The good news is we can now use our trained models to handle inference work in the lab using a single GPU,” he said.

Now Available: Pretrained AI Models

Their latest paper, published in July, characterizes the pros and cons of a handful of the latest AI models they used on various tasks. The work is funded with a grant from the COVID-19 High Performance Computing Consortium.

The duo also published the first versions of their pretrained models. “Given the pandemic, it’s better to have an early release,” rather than wait until the still ongoing project is completed, Elnaggar said.
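
For researchers who want to experiment, the published models plug into standard Transformer tooling. Here’s a minimal sketch of embedding a protein with a pretrained protein language model; it assumes the Rostlab’s ProtBert checkpoint on Hugging Face and the transformers library, and the sequence shown is arbitrary:

    import torch
    from transformers import BertModel, BertTokenizer

    # Assumes the Rostlab/prot_bert checkpoint; see the team's papers for the exact models.
    tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert", do_lower_case=False)
    model = BertModel.from_pretrained("Rostlab/prot_bert").eval()

    # A protein is a "sentence" whose words are amino acids, so the
    # tokenizer expects space-separated single-letter residues.
    sequence = "M K T A Y I A K Q R Q I S F V K S H F S R Q L E E R"

    inputs = tokenizer(sequence, return_tensors="pt")
    with torch.no_grad():
        embeddings = model(**inputs).last_hidden_state  # one vector per residue

    print(embeddings.shape)  # (1, residues + special tokens, hidden size)

As the team notes, inference at this scale fits comfortably on a single GPU.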

“The proposed approach has the potential to revolutionize the way we analyze protein sequences,” said Heinzinger.

The work may not in itself bring the coronavirus down, but it is likely to establish a new and more efficient research platform to attack future viruses.

Collaborating Across Two Disciplines

The project highlights two of the soft lessons of science: Keep a keen eye on the horizon and share what’s working.

“Our progress mainly comes from advances in natural-language processing that we apply to our domain — why not take a good idea and apply it to something useful,” said Heinzinger, the computational biologist.

Elnaggar, the AI specialist, agreed. “We could only succeed because of this collaboration across different fields,” he said.

See more stories online of researchers advancing science to fight COVID-19.

The image at top shows language models trained without labelled samples picking up the signal of a protein sequence that is required for DNA binding.

Green Light! TOP500 Speeds Up, Saves Energy with NVIDIA

The new ranking of the TOP500 supercomputers paints a picture of modern scientific computing, expanded with AI and data analytics, and accelerated with NVIDIA technologies.

Eight of the world’s top 10 supercomputers now use NVIDIA GPUs, InfiniBand networking or both. They include the most powerful systems in the U.S., Europe and China.

NVIDIA, now combined with Mellanox, powers two-thirds (333) of the overall TOP500 systems on the latest list, up dramatically from less than half (203) for the two separate companies combined on the June 2017 list.

Nearly three-quarters (73 percent) of the new InfiniBand systems on the list adopted NVIDIA Mellanox HDR 200G InfiniBand, demonstrating the rapid embrace of the latest data rates for smart interconnects.

The number of TOP500 systems using HDR InfiniBand nearly doubled since the November 2019 list. Overall, InfiniBand appears in 141 supercomputers on the list, up 12 percent since June 2019.

A rising number of TOP500 systems are adopting NVIDIA GPUs, its Mellanox networking or both.

NVIDIA Mellanox InfiniBand and Ethernet networks connect 305 systems (61 percent) of the TOP500 supercomputers, including all of the 141 InfiniBand systems, and 164 (63 percent) of the systems using Ethernet.

In energy efficiency, the systems using NVIDIA GPUs are pulling away from the pack. On average, they’re now 2.8x more power-efficient than systems without NVIDIA GPUs, measured in gigaflops/watt.

That’s one reason why NVIDIA GPUs are now used by 20 of the top 25 supercomputers on the TOP500 list.

The best example of this power efficiency is Selene (pictured above), the latest addition to NVIDIA’s internal research cluster. The system was No. 2 on the latest Green500 list and No. 7 on the overall TOP500 at 27.5 petaflops on the Linpack benchmark.

At 20.5 gigaflops/watt, Selene is within a fraction of a point from the top spot on the Green500 list, claimed by a much smaller system that ranked No. 394 by performance.

Selene is the only top 100 system to crack the 20 gigaflops/watt barrier. It’s also the second most powerful industrial supercomputer in the world behind the No. 6 system from energy giant Eni S.p.A. of Italy, which also uses NVIDIA GPUs.
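
Taken together, the Linpack and Green500 figures quoted above pin down Selene’s approximate power draw, a back-of-the-envelope check anyone can reproduce:

    rmax_gflops = 27.5e6       # 27.5 petaflops, expressed in gigaflops
    gflops_per_watt = 20.5     # Green500 energy-efficiency figure
    power_mw = rmax_gflops / gflops_per_watt / 1e6
    print(f"{power_mw:.2f} MW")  # roughly 1.3 megawatts for the whole system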

NVIDIA GPUs are powering gains in energy efficiency for the TOP500 supercomputers.

In energy use, Selene is 6.8x more efficient than the average TOP500 system not using NVIDIA GPUs. Selene’s performance and energy efficiency are thanks to third-generation Tensor Cores in NVIDIA A100 GPUs that speed up both traditional 64-bit math for simulations and lower precision work for AI.

Selene’s rankings are an impressive feat for a system that took less than four weeks to build. Engineers were able to assemble Selene quickly because they used NVIDIA’s modular reference architecture.

The guide defines what NVIDIA calls a DGX SuperPOD. It’s based on a powerful, yet flexible building block for modern data centers: the NVIDIA DGX A100 system.

The DGX A100 is an agile system, available today, that packs eight A100 GPUs in a 6U server with NVIDIA Mellanox HDR InfiniBand networking. It was created to accelerate a rich mix of high performance computing, data analytics and AI jobs — including training and inference — and to be fast to deploy.

Scaling from Systems to SuperPODs

With the reference design, any organization can quickly set up a world-class computing cluster. It shows how 20 DGX A100 systems can be linked in Lego-like fashion using high-performance NVIDIA Mellanox InfiniBand switches.

InfiniBand now accelerates seven of the top 10 supercomputers, including the most powerful systems in China, Europe and the U.S.

Four operators can rack a 20-system DGX A100 cluster in as little as an hour, creating a 2-petaflops system powerful enough to appear on the TOP500 list. Such systems are designed to run comfortably within the power and thermal capabilities of standard data centers.

By adding an additional layer of NVIDIA Mellanox InfiniBand switches, engineers linked 14 of these 20-system units to create Selene, which sports:

  • 280 DGX A100 systems
  • 2,240 NVIDIA A100 GPUs
  • 494 NVIDIA Mellanox Quantum 200G InfiniBand switches
  • 56 TB/s network fabric
  • 7PB of high-performance all-flash storage
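
Those headline numbers fall straight out of the modular design; tallying the per-unit figures described above (20 DGX A100 systems per scalable unit, eight A100 GPUs per system) reproduces them:

    scalable_units = 14       # 20-system units linked by the extra switch layer
    systems_per_unit = 20     # DGX A100 systems per scalable unit
    gpus_per_system = 8       # A100 GPUs packed in each DGX A100

    systems = scalable_units * systems_per_unit  # 280 DGX A100 systems
    gpus = systems * gpus_per_system             # 2,240 A100 GPUs
    print(systems, gpus)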

One of Selene’s most significant specs is that it can deliver more than 1 exaflops of AI performance. Another is that Selene set a new record on a key data analytics benchmark, TPCx-BB, using just 16 of its DGX A100 systems, delivering 20x greater performance than any other system.

These results are critical at a time when AI and analytics are becoming part of the new requirements for scientific computing.

Around the world, researchers are using deep learning and data analytics to predict the most fruitful areas for conducting experiments. The approach reduces the number of costly and time-consuming experiments researchers require, accelerating scientific results.

For example, six systems not yet on the TOP500 list are being built today with the A100 GPUs NVIDIA launched last month. They’ll accelerate a blend of HPC and AI that’s defining a new era in science.

TOP500 Expands Canvas for Scientific Computing

One of those systems is at Argonne National Laboratory, where researchers will use a cluster of 24 NVIDIA DGX A100 systems to scan billions of drugs in the search for treatments for COVID-19.

“Much of this work is hard to simulate on a computer, so we use AI to intelligently guide where and when we will sample next,” said Arvind Ramanathan, a computational biologist at Argonne, in a report on the first users of A100 GPUs.

AI, data analytics and edge streaming are redefining scientific computing.

For its part, NERSC (the U.S. National Energy Research Scientific Computing Center) is embracing AI for several projects targeting Perlmutter, its pre-exascale system packing 6,200 A100 GPUs.

For example, one project will use reinforcement learning to control light source experiments, and one will apply generative models to reproduce expensive simulations at high-energy physics detectors.

Researchers in Munich are training natural-language models on 6,000 GPUs on the Summit supercomputer to speed the analysis of coronavirus proteins. It’s another sign that leading TOP500 systems are extending beyond traditional simulations run with double-precision math.

As scientists expand into deep learning and analytics, they’re also tapping into cloud computing services and even streaming data from remote instruments at the edge of the network. Together these elements form the four pillars of modern scientific computing that NVIDIA accelerates: simulation, AI and data analytics, edge streaming and visualization.

It’s part of a broader trend where both researchers and enterprises are seeking acceleration for AI and analytics from the cloud to the network’s edge. That’s why the world’s largest cloud service providers along with the world’s top OEMs are adopting NVIDIA GPUs.

In this way, the latest TOP500 list reflects NVIDIA’s efforts to democratize AI and HPC. Any company that wants to build leadership computing capabilities can access NVIDIA technologies such as DGX systems that power the world’s most powerful systems.

Finally, NVIDIA congratulates the engineers behind the Fugaku supercomputer in Japan for taking the No. 1 spot, showing that Arm is now a viable option in high performance computing. That’s one reason why NVIDIA announced a year ago that it’s making its CUDA accelerated computing software available on the Arm processor architecture.

Fighting COVID-19 in New Era of Scientific Computing

Scientists and researchers around the world are racing to find a cure for COVID-19.

That’s made the work of all those digitally gathered for this week’s high performance computing conference, ISC 2020 Digital, more vital than ever.

And the work of these researchers is broadening to encompass a wider range of approaches than ever.

The NVIDIA scientific computing platform plays a vital role, accelerating progress across this entire spectrum of approaches — from data analytics to simulation and visualization to AI to edge processing.

Some highlights:

  • In genomics, Oxford Nanopore Technologies was able to sequence the virus genome in just 7 hours using our GPUs.
  • In infection analysis and prediction, the NVIDIA RAPIDS team has GPU-accelerated Plotly’s Dash, a data visualization tool, enabling clearer insights into real-time infection rate analysis.
  • In structural biology, the U.S. National Institutes of Health and the University of Texas at Austin are using the GPU-accelerated software CryoSPARC to reconstruct the first 3D structure of the virus protein using cryogenic electron microscopy.
  • In treatment, NVIDIA worked with the National Institutes of Health and built an AI to accurately classify COVID-19 infection based on lung scans so efficient treatment plans can be devised.
  • In drug discovery, Oak Ridge National Laboratory ran the Scripps Research Institute’s AutoDock on the GPU-accelerated Summit supercomputer to screen a billion potential drug combinations in just 12 hours.
  • In robotics, startup Kiwi is building robots to deliver medical supplies autonomously.
  • And in edge detection, Whiteboard Coordinator Inc. built an AI system to automatically measure and screen elevated body temperatures, screening well over 2,000 healthcare workers per hour.

It’s truly inspirational to wake up every day and see the amazing effort going on around the world, and the role NVIDIA’s scientific computing platform plays in helping understand the virus and discover testing and treatment options to fight the COVID-19 pandemic.

The reason we’re able to play a role in so many efforts, across so many areas, is because of our strong focus on providing end-to-end workflows for the scientific computing community.

We’re able to provide these workflows because of our approach to full-stack innovation to accelerate all key application areas.

For data analytics, we accelerate the key frameworks like Spark 3.0, RAPIDS and Dask. This acceleration is built using our domain-specific CUDA-X libraries for data analytics such as cuDF, cuML and cuGraph, along with I/O acceleration technologies from Magnum IO.

These libraries contain millions of lines of code and provide seamless acceleration to developers and users, whether they’re creating applications on the desktops accelerated with our GPUs or running them in data centers, in edge computers, in supercomputers, or in the cloud.
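
To make that concrete, here’s a minimal, hypothetical RAPIDS sketch. cuDF deliberately mirrors the pandas API, so moving an aggregation onto the GPU is mostly a change of import; the file and column names below are invented for illustration:

    import cudf  # RAPIDS GPU DataFrame library, modeled on pandas

    # Hypothetical per-region infection data; columns invented for this sketch.
    df = cudf.read_csv("infections.csv")

    # The groupby and sort execute on the GPU with no further code changes.
    worst = df.groupby("region")["new_cases"].sum().sort_values(ascending=False)
    print(worst.head())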

Similarly, we accelerate over 700 HPC applications, including all the most widely used scientific applications.

NVIDIA accelerates all frameworks for AI, which has become crucial for tasks where the information is incomplete — where there are no first principles to work with, or where first-principles-based approaches are too slow.

And, thanks to our roots in visual computing, NVIDIA provides accelerated visualization solutions, so terabytes of data can be visualized.

NASA, for instance, used our acceleration stack to visualize the landing of the first manned mission to Mars, in what is the world’s largest real-time, interactive volumetric visualization (150TB).

Our deep domain libraries also provide a seamless performance boost to scientific computing users on their applications across the different generations of our architecture. Going from Volta to Ampere, for instance.

NVIDIA’s also making all our new and improved GPU-optimized scientific computing applications available through NGC for researchers to accelerate their time to insight.

Together, all of these pillars of scientific computing — simulation, AI and data analytics, edge streaming and visualization workflows — are key to tackling the challenges of today, and tomorrow.

NVIDIA Powers World’s Leading Weather Forecasters’ Supercomputers​

Think of weather forecasting and satellite images on the local news probably come to mind. But another technology has transformed the task of forecasting and simulating the weather: supercomputing.

Weather and climate models are both compute and data intensive. Forecast quality depends on model complexity and high resolution. Resolution depends on the performance of supercomputers. And supercomputer performance depends on interconnect technology to move data quickly, effectively and in a scalable manner across compute resources.

That’s why many of the world’s leading meteorological services have chosen NVIDIA Mellanox InfiniBand networking to accelerate their supercomputing platforms, including the Spanish Meteorological Agency, the China Meteorological Administration, the Finnish Meteorological Institute, NASA and the Royal Netherlands Meteorological Institute​.

The technological advantages of InfiniBand have made it the de facto standard for climate research and weather forecasting applications, delivering higher performance, scalability and resiliency versus any other interconnect technologies.

The Beijing Meteorological Service has selected 200 Gigabit HDR InfiniBand interconnect technology to accelerate its new supercomputing platform, which will be used for enhancing weather forecasting, improving climate and environmental research, and serving the weather forecasting information needs of the 2022 Winter Olympics in Beijing.

Meteo France, the French national meteorological service, has selected HDR InfiniBand to accelerate its two new large-scale supercomputers. The agency provides weather forecasting services for companies in transport, agriculture, energy and many other industries, as well as for a large number of media channels and worldwide sporting and cultural events. One of the systems debuted on the TOP500 list, out this month.

“We have been using InfiniBand for many years to connect our supercomputing platforms in the most efficient and scalable way, enabling us to conduct high-performance weather research and forecasting simulations,” said Alain Beuraud, HPC project manager at Meteo France. “We are excited to leverage the HDR InfiniBand technology advantages, its In-Network Computing acceleration engines, extremely low latency, and advanced routing capabilities to power our next supercomputing platforms.”

HDR InfiniBand will also accelerate the new supercomputer for the European Centre for Medium-Range Weather Forecasts (ECMWF). Being deployed this year, the system will support weather forecasting and prediction researchers from over 30 countries across Europe. It will increase the center’s weather and climate research compute power by 5x, making it one of the world’s most powerful meteorological supercomputers.

The new platform will enable running nearly twice as many higher-resolution probabilistic weather forecasts in less than an hour, improving the ability to monitor and predict increasingly severe weather phenomena and enabling European countries to better protect lives and property.

“We require the best supercomputing power and the best technologies available for our numerical weather prediction activities,” said Florence Rabier, director general at ECMWF. “With our new supercomputing capabilities, we will be able to run higher resolution forecasts in under an hour and enable much improved weather forecasts.”

“As governments and society continue to grapple with the impacts of increasingly severe weather, we are also proud to be relying on a supercomputer designed to maximize energy efficiency,” she added.

The NVIDIA Mellanox networking technology team has also been working with the German Climate Computing Centre on optimizing performance of the ICON application, the first project in a multi-phase collaboration. ICON is a unified weather forecasting and climate model, jointly developed by the Max-Planck-Institut für Meteorologie and Deutscher Wetterdienst (DWD), the German National Meteorological Service.

By optimizing the application’s data exchange modules to take advantage of InfiniBand, the team has demonstrated a nearly 20 percent increase in overall application performance.

The design of InfiniBand rests on four fundamentals: a smart endpoint design that can run all network engines; a software-defined switch network designed for scale; centralized management that lets the network be controlled and operated from a single place; and standard technology, ensuring forward and backward compatibility, with support for open source technology and open APIs.

It’s these fundamentals that help InfiniBand provide the highest network performance, extremely low latency and high message rate. As the only 200Gb/s high-speed interconnect in the market today, InfiniBand delivers the highest network efficiency with advanced end-to-end adaptive routing, congestion control and quality of service.

The forecast calls for more world-leading weather and climate agencies to announce their new supercomputing platforms this year using HDR InfiniBand. In the meantime, learn more about NVIDIA Mellanox InfiniBand HPC technology.

Digital Doubleheader: HPC Ignites Dual Online Events in June

Two virtual events this month aim to drive scientific computing forward, inspired in no small part by the race against the coronavirus.

The ISC High Performance 2020 Digital (June 22-25) and NVIDIA’s HPC Summit Digital (June 29-July 2) offer insights on the latest tools and techniques taking science and research into the next era of scientific computing. Both online events are free.

Talks, panels and demos will describe new accelerated systems and networks and how they are being applied across fields from astrophysics to quantum mechanics. The fight against COVID-19 will be front and center.

While the pandemic prevents the HPC community from gathering in person, the two events help scientists, researchers and engineers collaborate. So set some time aside on your calendar and register for ISC Digital, NVIDIA’s activities at ISC Digital and the HPC Summit.

ISC and the HPC Summit also mark the first time experts from NVIDIA and Mellanox will present as part of a single company after completing a merger on April 27. They will share perspectives on what the combination of data center acceleration and advanced networking means for HPC.

HPC+AI: A New Pillar of Scientific Discovery

In other hot topics, ISC and the HPC Summit will shed light on increasing efforts that fuse HPC with AI.

The combination, already in use at several supercomputing centers, promises to accelerate simulations, shortening the time to scientific results. It’s one of many ways HPC is delivering gains in performance and power-efficiency.

At both events, NVIDIA experts will be on hand to answer questions and give updates on two major milestones.

The NVIDIA Ampere architecture, announced May 14, packs tensor core acceleration for FP64 operations and other features aimed to speed scientific applications. NVIDIA A100 GPUs come to the events with an expanding set of use cases, partners and form factors.

Supercomputer Centers Collaborate on COVID

The virtual festivities kick off with ISC, Europe’s annual supercomputing event. On its opening day, a leader from Argonne National Laboratory — a member of the COVID-19 High Performance Computing Consortium formed by the U.S. White House — will join others from Europe and Asia to describe their efforts.

The consortium is providing access to 30 supercomputers with more than 50,000 GPUs and has been described as “the Apollo program of our time … not a race to the moon, but a race for humanity.”

Argonne is the first of six supercomputing centers worldwide to apply NVIDIA’s new A100 GPUs to work seeking vaccines and treatments for the virus.

In one of 63 active projects supported by the consortium, researchers at Oak Ridge National Laboratory recently harnessed GPU accelerators in the Summit supercomputer. They hope to analyze as many as 2 billion compounds in 24 hours in the search for drugs to neutralize the coronavirus.

Inside a 700-Petaflops DGX SuperPOD

At ISC’s vendor-showdown session and exhibitor forum, speakers including Marc Hamilton, vice president of solutions architecture and engineering at NVIDIA, and Gilad Shainer, senior vice president of marketing for Mellanox networking at NVIDIA, will describe new technologies, such as the latest DGX SuperPOD.

The system sports a peak 700 petaflops to train AI models once thought beyond reach. It packs 1,120 A100 GPUs linked with 170 NVIDIA Mellanox HDR 200G InfiniBand switches that support in-network computing engines.

In a June 23 ISC session, NVIDIA will describe SHARP, an acceleration engine for offloading networking tasks from the CPU or the GPU to the InfiniBand switch network. In tests on a system using HDR 200G InfiniBand, it delivered about 96 percent of point-to-point bandwidth while increasing AI performance on PyTorch.

HPC Summit Covers A100 and More

The HPC Summit Digital, a forum for the HPC community hosted by NVIDIA, kicks off Monday, June 29. Ian Buck, the developer of CUDA and general manager of NVIDIA’s accelerated computing group, and Michael Kagan, co-founder of Mellanox and now CTO of NVIDIA’s Mellanox networking business unit, will speak at the opening session.

They’ll talk about the first HPC centers using NVIDIA DGX A100 systems to support COVID-19 research, as well as the first NVIDIA HGX A100 supercomputers.

On Tuesday, June 30, GPU and software experts will convene a roundtable and Q&A session to discuss NVIDIA’s Ampere architecture. They’ll provide details and take questions on the A100 and its platforms, libraries and developer tools, and how they can advance HPC, AI and data science.

A three-hour developer forum on Wednesday, July 1, will start with descriptions of success stories, lessons learned and developers’ top requests. The forum also includes a Q&A session with NVIDIA experts and roundtable discussions in virtual breakout rooms on topics such as:

  • Parallel computing programming models and languages
  • Math libraries
  • HPC+AI
  • Data analytics
  • Mixed-precision computing
  • GPU acceleration for Arm
  • Multi-GPU programming
  • Message passing

The HPC Summit wraps up Thursday, July 2, with three sessions on topics of interest for HPC data centers. Each will start with an expert talk and include a Q&A.

The day begins with a session on networking and storage, especially for emerging HPC+AI use cases. A session on HPC cloud tools follows, with Alan Chalker, a director at the Ohio Supercomputer Center, giving a talk on the Open OnDemand project for accessing supercomputers.

The third session of the day explores the state and outlook for Arm’s processor technology in HPC. Brent Gorda, senior director for HPC at Arm, will give a state of the union address on the Arm ecosystem in HPC and take feedback on future directions.

It’s an initiative NVIDIA officially joined when it announced support for CUDA on Arm last year, attracting many new partners into the fold.

For more details, see the full HPC Summit Digital schedule.

AI to Hit Mars, Blunt Coronavirus, Play at the London Symphony Orchestra

AI is the rocket fuel that will get us to Mars. It’s the vaccine that will save us on Earth. And it’s the people who aspire to make a dent in the universe.

Our latest “I Am AI” video, unveiled during NVIDIA CEO Jensen Huang’s keynote address at the GPU Technology Conference, pays tribute to the scientists, researchers, artists and many others making historic advances with AI.

To grasp AI’s global impact, consider: the technology is expected to generate $2.9 trillion worth of business value by 2021, according to Gartner.

It’s on course to classify 2 trillion galaxies to understand the universe’s origin, and to zero in on the molecular structure of the drugs needed to treat coronavirus and cancer.

As depicted in the latest video, AI has an artistic side, too. It can paint as well as Bob Ross. And its ability to assist in the creation of original compositions is worthy of the London Symphony Orchestra, which plays the accompanying theme music, a piece that started out written by a recurrent neural network.

AI is also capable of creating text-to-speech synthesis for narrating a short documentary. And that’s just what it did.

These fireworks and more are the story of I Am AI. Sixteen companies and research organizations are featured in the video. The action moves fast, so grab a bowl of popcorn, kick back and enjoy this tour of some of the highlights of AI in 2020.

Reaching Into Outer Space

Understanding the formation of the structure and the amount of matter in the universe requires observing and classifying celestial objects such as galaxies. With an estimated 2 trillion galaxies to examine in the observable universe, it’s what cosmologists call a “computational grand challenge.”

The recent Dark Energy Survey collected data from over 300 million galaxies. To study them with unprecedented precision, the Center for Artificial Intelligence Innovation at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign teamed up with the Argonne Leadership Computing Facility at the U.S. Department of Energy’s Argonne National Laboratory.

NCSA tapped the Galaxy Zoo project, a crowdsourced astronomy effort that labeled millions of galaxies observed by the Sloan Digital Sky Survey. Using that data, an AI model with 99.6 percent accuracy can now chew through unlabeled galaxies to ID them and accelerate scientific research.

With Mars targeted for human travel, scientists are seeking the safest path. In that effort, the NASA Solar Dynamics Observatory takes images of the sun every 1.3 seconds. And researchers have developed an algorithm that removes errors from the images, which are placed into a growing archive for analysis.

Using such data, NASA is tapping into NVIDIA GPUs to analyze solar surface flows so that it can build better models for predicting the weather in space. NASA also aims to identify origins of energetic particles in Earth’s orbit that could damage interplanetary spacecraft, jeopardizing trips to Mars.

Restoring Voice and Limb

Voiceitt — a Tel Aviv-based startup that’s developed signal processing, speech recognition technologies and deep neural nets — offers a synthesized voice for those whose speech has been distorted. The company’s app converts unintelligible speech into easily understood speech.

The University of North Carolina at Chapel Hill’s Neuromuscular Rehabilitation Engineering Laboratory and North Carolina State University’s Active Robotic Sensing (ARoS) Laboratory develop experimental robotic limbs used in the labs.

The two research units have been working on walking environment recognition, aiming to develop environmental adaptive controls for prostheses. They’ve been using CNNs for prediction running on NVIDIA GPUs. And they aren’t alone.
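
To picture the approach, here’s a minimal sketch of a walking-environment classifier: a small CNN that maps a camera frame to terrain classes. The class list, network shape and input size are invented for illustration, not the labs’ actual models:

    import torch
    import torch.nn as nn

    # Hypothetical terrain classes a prosthesis controller might switch on.
    TERRAIN = ["level ground", "ramp up", "ramp down", "stairs up", "stairs down"]

    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, len(TERRAIN)),
    ).cuda()  # prediction runs on an NVIDIA GPU, as in the labs' setup

    frame = torch.randn(1, 3, 224, 224, device="cuda")  # stand-in camera frame
    pred = model(frame).argmax(dim=1)                   # untrained here; a sketch only
    print(TERRAIN[pred.item()])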

Helping in Pandemic

Whiteboard Coordinator remotely monitors the temperature of people entering buildings to minimize exposure to COVID-19. The Chicago-based startup provides temperature-screening rates of more than 2,000 people per hour at checkpoints. Whiteboard Coordinator and NVIDIA bring AI to the edge of healthcare with NVIDIA Clara Guardian, an application framework that simplifies the development and deployment of smart sensors.

Viz.ai uses AI to inform neurologists about strokes much faster than traditional methods. With the onset of the pandemic, Viz.ai moved to help combat the new virus with an app that alerts care teams to positive COVID-19 results.

Axial3D is a Belfast, Northern Ireland, startup that enlists AI to accelerate the production time of 3D-printed models for medical images used in planning surgeries. Having redirected its resources at COVID-19, the company is now supplying face shields and is among those building ventilators for the U.K.’s National Health Service. It has also begun 3D printing of swab kits for testing as well as valves for respirators. (Check out their on-demand webinar.)

Autonomizing Contactless Help

KiwiBot, a cheery-eyed food delivery bot from Berkeley, Calif., has included in its path a way to provide COVID-19 services. It’s autonomously delivering masks, sanitizers and other supplies with its robot-to-human service.

Masterpieces of Art, Compositions and Narration

Researchers from London-based startup Oxia Palus demonstrated in a paper, “Raiders of the Lost Art,” that AI could be used to recreate lost works of art that had been painted over. Beneath Picasso’s 1902 The Crouching Beggar lies a mountainous landscape that art curators believe is of Parc del Laberint d’Horta, near Barcelona.

They also know that Santiago Rusiñol painted Parc del Laberint d’Horta. Using a modified X-ray fluorescence image of The Crouching Beggar and Santiago Rusiñol’s Terraced Garden in Mallorca, the researchers applied neural style transfer, running on NVIDIA GPUs, to reconstruct the lost artwork, creating Rusiñol’s Parc del Laberint d’Horta.

For GTC a few years ago, Luxembourg-based AIVA AI composed the start — melodies and accompaniments — of what would become an original classical music piece meriting an orchestra. Since then we’ve found it one.

Late last year, the London Symphony Orchestra agreed to play the moving piece, which was arranged for the occasion by musician John Paesano and was recorded at Abbey Road Studios.

NVIDIA alum Helen was our voice-over professional for videos and events for years. When she left the company, we thought about how we might continue the tradition. We turned to what we know: AI. But there weren’t publicly available models up to the task.

A team from NVIDIA’s Applied Deep Learning Research group published the answer to the problem: “Flowtron: An Autoregressive Flow-based Generative Network for Text-to-Speech Synthesis.” Licensing Helen’s voice, we trained the network on dozens of hours of it.

First, Helen produced multiple takes, guided by our creative director. Then our creative director was able to generate multiple takes from Flowtron and adjust parameters of the model to get the desired outcome. And what you hear is “Helen” speaking in the I Am AI video narration.
