Bob Swan: Open Letter to President-elect Biden

Bob Swan, Intel chief executive officer, sent the following letter to the president-elect.

The Honorable Joseph R. Biden Jr.
President-elect of the United States

Dear Mr. President-elect,

Congratulations on your election as our 46th president. I also want to congratulate Vice President-elect Kamala Harris for her historic achievement and recognize the role your administration will play in inspiring our next generation of leaders.

Bob Swan, chief executive officer of Intel Corp. (Credit: Cayce Clifford)

2020 has been a particularly disruptive year for the American people. And we know you are focused on uniting our nation to overcome the challenges posed by COVID-19, racial strife, a growing skills gap and increasing global competition.

In 1968, America was in a similar place. We were a nation divided over the Vietnam War, divided by race, undergoing a recession and experiencing mass protests shaping the political landscape. In this environment of change and upheaval, Robert Noyce and Gordon Moore came together and founded Intel, starting a silicon revolution that gave rise to many future technologies. Today, Intel is the only U.S.-based manufacturer of leading-edge semiconductors, with more than 50,000 employees across the country and innovation hubs in Oregon, Arizona, Texas, New Mexico and California. We again stand at the ready to support the next generation of technological advancements.

As the leader of a company driven by our purpose to create world-changing technologies that enrich the lives of every person on Earth, I am grateful for your recognition of the role technology plays in solving our nation’s largest societal challenges. As you begin to further develop your policy agenda, I urge you to focus on the following areas:

Investing in Technology to Solve the Challenges Posed by COVID

Artificial intelligence, high performance computing and edge-to-cloud computing are critical components in government collection and analysis of data, diagnostics, treatment and vaccine development. Intel technology has helped accelerate access to quality data to deliver remote care and protect medical professionals from exposure to infection. As you know, this pandemic has widely affected education, work and other aspects of our daily lives. It is crucial to expand investments in broadband connectivity, particularly to lessen the impact of COVID on the underserved and in communities of color.

Increasing U.S. Manufacturing

Your planned investment in American-made goods is critical to U.S. innovation and technology leadership. According to the Semiconductor Industry Association, the U.S. accounts for just 12% of global semiconductor production capacity, with more than 80% taking place in Asia. Rising costs and foreign government subsidies to national champions are a significant disadvantage for U.S. semiconductor companies that make substantial capital investments domestically. A national manufacturing strategy, including investment by the U.S. government in the domestic semiconductor industry, is critical to ensure American companies compete on a level playing field and lead the next generation of innovative technology.

Investing in Digital Infrastructure

Smart infrastructure spending will help address pressing economic and climate change needs. This will include technology designed to make cities and energy systems smarter and more efficient. Widespread deployment of advanced 5G telecommunications networks will fuel efficiencies for businesses in all industries and enable more U.S. innovation. Upgrades to our infrastructure must not only handle the technology of today but spur domestic development of the technologies of tomorrow.

Developing a 21st Century Workforce

In the U.S., Intel hired more than 4,000 people this year, and it still has 800 positions to fill. We produce the most complex technology on the planet and need access to the best talent available. We are designing STEM curricula to help feed the workforce pipeline and make next-generation training and skills more accessible. This year, we partnered with Maricopa County Community College District in Arizona to launch the first Intel-designed artificial intelligence associate degree program in the United States.

While we work to build a greater pipeline of U.S. high-tech workers, American universities and companies provide opportunities to smart, hard-working people from all over the world. They return the favor many times over with their contributions to this country and our technology leadership. The U.S. has welcomed global talent for decades and should continue to support immigration programs needed by Intel and other high-tech companies to operate in the U.S.

At Intel, we believe the current and future workforces need to reflect the makeup of this nation. We also share your commitment to make racial equity a top priority. We set ambitious goals for Intel’s next decade. We aim to double the number of women and underrepresented minorities in senior leadership at Intel, and to collaborate within our industry to create and implement a Global Inclusion Index to track industry progress in areas such as greater levels of women and minorities in senior and technical positions, accessible technology and equal pay.

Intel has enjoyed working closely with presidential administrations over the past 52 years on policies that help the United States lead the world in technological innovation. I look forward to working together in a shared mission to tackle the many challenges facing our nation today as we prepare for an equitable and prosperous future.

Sincerely,
Bob Swan
CEO, Intel Corporation



Supercomputing Chops: Tsinghua U. Takes Top Flops in SC20 Student Cluster Battle

Props to team top flops.

Virtual this year, the SC20 Student Cluster Competition was still all about teams vying for top supercomputing performance in the annual battle for HPC bragging rights.

That honor went to Beijing-based Tsinghua University, whose six-member undergraduate team clocked in at 300 teraflops of processing performance.

A one teraflop computer can process one trillion floating-point operations per second.
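For a sense of scale, here is the back-of-the-envelope arithmetic on the winning figure (an illustrative calculation, not from the article):

```python
# Back-of-the-envelope: at 300 teraflops, how long would 10**18
# floating-point operations (one exa-op) take?
rate_flops = 300e12            # 300 teraflops, the winning result
total_ops  = 1e18
seconds = total_ops / rate_flops
print(f"{seconds:.0f} s (~{seconds / 60:.0f} minutes)")  # 3333 s, ~56 minutes
```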

The Virtual Student Cluster Competition was this year’s battleground for 19 teams. Competitors consisted of either high school or undergraduate students. Teams were made up of six members, an adviser and vendor partners.

Real-World Scenarios

In the 72-hour competition, student teams designed and built virtual clusters running NVIDIA GPUs in the Microsoft Azure cloud. Students completed a set of benchmarks and real-world scientific workloads.

Teams ran the GROMACS molecular dynamics application, tackling COVID-19 research. They also ran the CESM application to work on optimizing climate modeling code. The “reproducibility challenge” called on the teams to replicate results from an SC19 research paper.

Among other hurdles, teams were tossed a surprise Exascale Computing Project mini-application, miniVite, to test their chops at compiling, running and optimizing.

A leaderboard tracked each team’s performance results, its total spending on Microsoft Azure and its hourly burn rate on cloud resources.

Roller-Coaster Computing Challenges

The Georgia Institute of Technology competed for the second time. This year’s squad, dubbed Team Phoenix, had the good fortune of landing adviser Vijay Thakkar, a Gordon Bell Prize nominee this year.

Half of the team members were teaching assistants for introductory systems courses at Georgia Tech, said team member Sudhanshu Agarwal.

Georgia Tech used NVIDIA GPUs “wherever it was possible, as GPUs reduced computation time,” said Agarwal.

“We had a lot of fun this year and look forward to participating in SC21 and beyond,” he said.

Pan Yueyang, a junior in computer science at Peking University, joined his university’s supercomputing team before taking the leap to participate in the SC20 battle. But it was full of surprises, he noted.

He said that during the competition his team ran into a series of unforeseen hiccups. “Luckily it finished as required and the budget was slightly below the limitation,” he said.

Jacob Xiaochen Li, a junior in computer science at the University of California, San Diego, said his team relied on NVIDIA GPUs for the MemXCT portion of the competition, reproducing the paper’s scaling experiment along with its memory bandwidth utilization. “Our results match the original chart closely,” he said, noting there were some hurdles along the way.
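Bandwidth utilization in that kind of reproduction is typically computed as achieved bytes per second divided by the hardware’s peak. A sketch with made-up kernel numbers; only the roughly 900 GB/s NVIDIA V100 memory-bandwidth spec is a real figure:

```python
# Illustrative bandwidth-utilization arithmetic; bytes_moved and elapsed_s
# are hypothetical measurements, peak_bw is the V100's ~900 GB/s HBM2 spec.
bytes_moved = 1.2e12          # bytes read + written by the kernel (assumed)
elapsed_s   = 2.0             # measured wall-clock time (assumed)
peak_bw     = 900e9           # V100 peak memory bandwidth, bytes/s

achieved_bw = bytes_moved / elapsed_s
print(f"{achieved_bw / peak_bw:.1%} of peak")   # -> 66.7% of peak
```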

Po Hao Chen, a sophomore in computer science at Boston University, said he committed to the competition because he’s always enjoyed algorithmic optimization. Like many, he had to juggle the competition with the demands of courses and exams.

“I stayed up for three whole days working on the cluster,” he said. “And I really learned a lot from this competition.”

Teams and Flops

Tsinghua University, China: 300 TFLOPS
ETH Zurich: 129 TFLOPS
Southern University of Science and Technology: 120 TFLOPS
Texas A&M University: 113 TFLOPS
Georgia Institute of Technology: 108 TFLOPS
Nanyang Technological University, Singapore: 105 TFLOPS
University of Warsaw: 75.0 TFLOPS
University of Illinois: 71.6 TFLOPS
Massachusetts Institute of Technology: 64.9 TFLOPS
Peking University: 63.8 TFLOPS
University of California, San Diego: 53.9 TFLOPS
North Carolina State University: 44.3 TFLOPS
Clemson University: 32.6 TFLOPS
Friedrich-Alexander University Erlangen-Nuremberg: 29.0 TFLOPS
Northeastern University: 21.1 TFLOPS
Shanghai Jiao Tong University: 19.9 TFLOPS
ShanghaiTech University: 14.4 TFLOPS
University of Texas: 13.1 TFLOPS
Wake Forest University: 9.172 TFLOPS



Bringing Enterprise Medical Imaging to Life: RSNA Highlights What’s Next for Radiology

As the healthcare world battles the pandemic, the medical-imaging field is gaining ground with AI, forging new partnerships and funding startup innovation. It will all be on display at RSNA, the Radiological Society of North America’s annual meeting, taking place Nov. 29 – Dec. 5.

Radiologists, healthcare organizations, developers and instrument makers at RSNA will share their latest advancements and what’s coming next — with an eye on the growing ability of AI models to integrate with medical-imaging workflows. More than half of informatics abstracts submitted to this year’s virtual conference involve AI.

In a special public address at RSNA, Kimberly Powell, NVIDIA’s VP of healthcare, will discuss how we’re working with research institutions, the healthcare industry and AI startups to bring workflow acceleration, deep learning models and deployment platforms to the medical imaging ecosystem.

Healthcare and AI experts worldwide are putting monumental effort into developing models that can help radiologists determine the severity of COVID cases from lung scans. They’re also building platforms to smoothly integrate AI into daily workflows, and developing federated learning techniques that help hospitals work together on more robust AI models.

The NVIDIA Clara Imaging application framework is poised to advance this work with NVIDIA GPUs and AI models that can accelerate each step of the radiology workflow, including image acquisition, scan annotation, triage and reporting.

Delivering Tools to Radiologists, Developers, Hospitals

AI developers are working to bridge the gap between their models and the systems radiologists already use, with the goal of creating seamless integration of deep learning insights into tools like PACS digital archiving systems. Here’s how NVIDIA is supporting their work:

  • We’ve strengthened the NVIDIA Clara application framework’s full-stack GPU-accelerated libraries and SDKs for imaging, with new pretrained models available on the NGC software hub. NVIDIA and the U.S. National Institutes of Health jointly developed AI models that can help researchers classify COVID cases from chest CT scans, and evaluate the severity of these cases.
  • Using the NVIDIA Clara Deploy SDK, Mass General Brigham researchers are testing a risk assessment model that analyzes chest X-rays to determine the severity of lung disease. The tool was developed by the Athinoula A. Martinos Center for Biomedical Imaging, which has adopted NVIDIA DGX A100 systems to power its research.
  • Earlier this year, together with King’s College London, we introduced MONAI, an open-source AI framework for medical imaging. Based on the Ignite and PyTorch deep learning frameworks, the modular MONAI code can be easily ported into researchers’ existing AI pipelines; a short sketch follows this list. So far, the GitHub project has dozens of contributors and over 1,500 stars.
  • NVIDIA Clara Federated Learning enables researchers to collaborate on training robust AI models without sharing patient information. It’s been used by hospitals and academic medical centers to train models for mammogram assessment, and to assess the likelihood that patients with COVID-19 symptoms will need supplemental oxygen.
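Here is what that modularity looks like in practice: a minimal, hypothetical MONAI classification setup. The scan file name, volume size and two-class output are assumptions for illustration, and the transforms shown are standard MONAI components rather than the exact pipeline from any published work:

```python
# A minimal sketch of a MONAI image-classification setup (the file path and
# labels are hypothetical); each piece drops into a plain PyTorch workflow.
from monai.networks.nets import DenseNet121
from monai.transforms import (
    Compose, EnsureChannelFirst, LoadImage, Resize, ScaleIntensity,
)

preprocess = Compose([
    LoadImage(image_only=True),   # read a NIfTI/DICOM file into a tensor
    EnsureChannelFirst(),         # move the channel dimension to the front
    ScaleIntensity(),             # normalize intensities to [0, 1]
    Resize((96, 96, 96)),         # resample to a uniform 3D volume
])

model = DenseNet121(spatial_dims=3, in_channels=1, out_channels=2)

volume = preprocess("chest_ct.nii.gz")   # hypothetical chest CT scan
logits = model(volume.unsqueeze(0))      # add a batch dimension
print(logits.softmax(dim=-1))            # e.g., P(normal) vs. P(abnormal)
```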

NVIDIA at RSNA

RSNA attendees can check out NVIDIA’s digital booth to discover more about GPU-accelerated AI in medical imaging. Hands-on training courses from the NVIDIA Deep Learning Institute are also available, covering medical imaging topics including image classification, coarse-to-fine contextual memory and data augmentation with generative networks. NVIDIA speakers will also be featured in sessions throughout the conference.

Over 50 members of NVIDIA Inception — our accelerator program for AI startups — will be exhibiting at RSNA, including Subtle Medical, which developed the first AI tools for medical imaging enhancement to receive FDA clearance and this week announced $12 million in Series A funding.

Another, TrainingData.io, used the NVIDIA Clara SDK to train a segmentation AI model to analyze COVID disease progression in chest CT scans. And South Korean startup Lunit recently received the European CE mark and partnered with GE Healthcare on an AI tool that flags abnormalities on chest X-rays for radiologists’ review.

Visit the NVIDIA at RSNA webpage for a full list of activities at the show. Email to request a meeting with our deep learning experts.

Subscribe to NVIDIA healthcare news here.



Science Magnified: Gordon Bell Winners Combine HPC, AI

Seven finalists including both winners of the 2020 Gordon Bell awards used supercomputers to see more clearly atoms, stars and more — all accelerated with NVIDIA technologies.

Their efforts required the traditional number crunching of high performance computing, the latest data science in graph analytics, AI techniques like deep learning or combinations of all of the above.

The Gordon Bell Prize is regarded as the Nobel Prize of the supercomputing community, attracting some of the most ambitious efforts of researchers worldwide.

AI Helps Scale Simulation 1,000x

Winners of the traditional Gordon Bell award collaborated across universities in Beijing, Berkeley and Princeton. They used a combination of HPC and neural networks they called DeePMD-kit to create complex simulations in molecular dynamics, 1,000x faster than previous work while maintaining accuracy.

In one day on the Summit supercomputer at Oak Ridge National Laboratory, they modeled 2.5 nanoseconds in the life of 127.4 million atoms, 100x more than the prior efforts.

Their work aids the understanding of complex materials and of fields that lean heavily on molecular modeling, such as drug discovery. It also demonstrated the power of combining machine learning with physics-based modeling and simulation on future supercomputers.

Atomic-Scale HPC May Spawn New Materials 

Among the finalists, a team including members from Lawrence Berkeley National Laboratory and Stanford optimized the BerkeleyGW application to bust through the complex math needed to calculate atomic forces binding more than 1,000 atoms with 10,986 electrons, about 10x more than prior efforts.

“The idea of working on a system with tens of thousands of electrons was unheard of just 5-10 years ago,” said Jack Deslippe, a principal investigator on the project and the application performance lead at the U.S. National Energy Research Scientific Computing Center.

Their work could pave a way to new materials for better batteries, solar cells and energy harvesters as well as faster semiconductors and quantum computers.

The team used all 27,648 GPUs on the Summit supercomputer to get results in just 10 minutes, thanks to harnessing an estimated 105.9 petaflops of double-precision performance.

Developers are continuing the work, optimizing their code for Perlmutter, a next-generation system using NVIDIA A100 Tensor Core GPUs that sport hardware to accelerate 64-bit floating-point jobs.

Analytics Sifts Text to Fight COVID

Using a form of data mining called graph analytics, a team from Oak Ridge and Georgia Institute of Technology found a way to search for deep connections in medical literature using a dataset they created with 213 million relationships among 18.5 million concepts and papers.

Their DSNAPSHOT (Distributed Accelerated Semiring All-Pairs Shortest Path) algorithm, using the team’s customized CUDA code, ran on 24,576 V100 GPUs on Summit, delivering results on a graph with 4.43 million vertices in 21.3 minutes. They claimed a record for deep search in a biomedical database and showed the way for others.
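The “semiring” in the algorithm’s name refers to replacing ordinary matrix multiplication’s multiply-and-add with add-and-min, so that repeated matrix “squaring” yields all-pairs shortest paths. A toy NumPy version of that pattern on a three-node graph (a sketch of the general technique, not the team’s CUDA code):

```python
# Toy min-plus (tropical) semiring all-pairs shortest paths: the algebraic
# pattern that DSNAPSHOT scales across thousands of GPUs.
import numpy as np

INF = np.inf
D = np.array([[0.0, 3.0, INF],
              [3.0, 0.0, 1.0],
              [INF, 1.0, 0.0]])   # edge weights of a tiny made-up graph

def min_plus(A, B):
    # (A (x) B)[i, j] = min over k of A[i, k] + B[k, j]
    return np.min(A[:, :, None] + B[None, :, :], axis=1)

paths = D
for _ in range(int(np.ceil(np.log2(D.shape[0])))):   # repeated squaring
    paths = min_plus(paths, paths)

print(paths)   # entry [0, 2] is now 4.0: the 0 -> 1 -> 2 route
```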

Graph analytics finds deep patterns in biomedical literature related to COVID-19.

“Looking forward, we believe this novel capability will enable the mining of scholarly knowledge … (and could be used in) natural language processing workflows at scale,” Ramakrishnan Kannan, team lead for computational AI and machine learning at Oak Ridge, said in an article on the lab’s site.

Tuning in to the Stars

Another team pointed the Summit supercomputer at the stars in preparation for one of the biggest big-data projects ever tackled. They created a workflow that handled six hours of simulated output from the Square Kilometre Array (SKA), a network of thousands of radio telescopes expected to come online later this decade.

Researchers from Australia, China and the U.S. analyzed 2.6 petabytes of data on Summit to provide a proof of concept for one of SKA’s key use cases. In the process they revealed critical design factors for future radio telescopes and the supercomputers that study their output.

The team’s work generated data at 247 GB/s and drove 925 GB/s of I/O. Like many other finalists, they relied on the fast, low-latency InfiniBand links powered by NVIDIA Mellanox networking, widely used in supercomputers like Summit to speed data among thousands of computing nodes.

Simulating the Coronavirus with HPC+AI

The four teams stand beside three other finalists who used NVIDIA technologies in a competition for a special Gordon Bell Prize for COVID-19.

The winner of that award used all the GPUs on Summit to create the largest, longest and most accurate simulation of a coronavirus to date.

“It was a total game changer for seeing the subtle protein motions that are often the important ones, that’s why we started to run all our simulations on GPUs,” said Lilian Chong, an associate professor of chemistry at the University of Pittsburgh, one of 27 researchers on the team.

“It’s no exaggeration to say what took us literally five years to do with the flu virus, we are now able to do in a few months,” said Rommie Amaro, a researcher at the University of California at San Diego who led the AI-assisted simulation.



COVID-19 Spurs Scientific Revolution in Drug Discovery with AI

Research across global academic and commercial labs to create a more efficient drug discovery process won recognition today with a special Gordon Bell Prize for work fighting COVID-19.

A team of 27 researchers led by Rommie Amaro at the University of California at San Diego (UCSD) combined high performance computing (HPC) and AI to provide the clearest view to date of the coronavirus, winning the award.

Their work began in late March when Amaro lit up Twitter with a picture of part of a simulated SARS-CoV-2 virus that looked like an upside-down Christmas tree.

Seeing it, one remote researcher noticed how a protein seemed to reach like a crooked finger from behind a protective shield to touch a healthy human cell.

“I said, ‘holy crap, that’s crazy’… only through sharing a simulation like this with the community could you see for the first time how the virus can only strike when it’s in an open position,” said Amaro, who leads a team of biochemists and computer experts at UCSD.

Amaro shared her early results on Twitter.

The image in the tweet was taken by Amaro’s lab using what some call a computational microscope, a digital tool that links the power of HPC simulations with AI to see details beyond the capabilities of conventional instruments.

It’s one example of work around the world using AI and data analytics, accelerated by NVIDIA Clara Discovery, to slash the $2 billion in costs and ten-year time span it typically takes to bring a new drug to market.

A Virtual Microscope Enhanced with AI

In early October, Amaro’s team completed a series of more ambitious HPC+AI simulations. They showed for the first time fine details of how the spike protein moved, opened and contacted a healthy cell.

One simulation (below) packed a whopping 305 million atoms, more than twice the size of any prior simulation in molecular dynamics. It required AI and all 27,648 NVIDIA GPUs on the Summit supercomputer at Oak Ridge National Laboratory.

More than 4,000 researchers worldwide have downloaded the results that one called “critical for vaccine design” for COVID and future pathogens.

Today, it won a special Gordon Bell Prize for COVID-19, the equivalent of a Nobel Prize in the supercomputing community.

Two other teams also used NVIDIA technologies in work selected as finalists in the COVID-19 competition created by the ACM, a professional group representing more than 100,000 computing experts worldwide.

And the traditional Gordon Bell Prize went to a team from Beijing, Berkeley and Princeton that set a new milestone in molecular dynamics, also using a combination of HPC+AI on Summit.

An AI Funnel Catches Promising Drugs

Seeing how the infection process works is one of a string of pearls that scientists around the world are gathering into a new AI-assisted drug discovery process.

Another is screening, from a vast field of 10⁶⁸ possible candidates, the right compounds to arrest a virus. In a paper from part of the team behind Amaro’s work, researchers described a new AI workflow that in less than five months filtered 4.2 billion compounds down to the 40 most promising ones, which are now in advanced testing.
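The shape of such a funnel is simple even if the models are not: a cheap learned scorer ranks an enormous library so that only the best candidates reach expensive physics-based steps. A toy sketch, with a hypothetical stand-in scorer rather than the paper’s actual workflow:

```python
# Toy screening funnel: a fast surrogate model ranks a huge library,
# and only the top hits advance to costly docking and simulation.
import heapq
import random

def surrogate_score(compound_id: int) -> float:
    # Stand-in for a trained ML model predicting binding affinity
    # (lower = better binder in this toy setup).
    return random.Random(compound_id).random()

library = range(1_000_000)                    # scaled-down compound library
shortlist = heapq.nsmallest(40, library, key=surrogate_score)
print(shortlist)                              # 40 candidates for docking
```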

“We were so happy to get these results because people are dying and we need to address that with a new baseline that shows what you can get with AI,” said Arvind Ramanathan, a computational biologist at Argonne National Laboratory.

Ramanathan’s team was part of an international collaboration among eight universities and supercomputer centers, each contributing unique tools to process nearly 60 terabytes of data from 21 open datasets. The collaboration fueled a set of interlocking simulations and AI predictions that ran across 160 NVIDIA A100 Tensor Core GPUs on Argonne’s Theta system, with massive AI inference runs using NVIDIA TensorRT on Summit’s far larger pool of GPUs.

Docking Compounds, Proteins on a Supercomputer

Earlier this year, Ada Sedova added a pearl to the string for protein docking (described in the video below) when she outlined plans to test a billion drug compounds against two coronavirus spike proteins in less than 24 hours using the GPUs on Summit. Her team cut work that once took 51 days down to just 21 hours, a 58x speedup.
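That 58x figure follows directly from the numbers in the article:

```python
# Checking the reported speedup against the article's own numbers.
baseline_hours    = 51 * 24    # 51 days of docking, in hours
accelerated_hours = 21         # time on Summit's GPUs
print(f"{baseline_hours / accelerated_hours:.1f}x")  # -> 58.3x
```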

In a related effort, colleagues at Oak Ridge used NVIDIA RAPIDS and BlazingSQL to accelerate by an order of magnitude data analytics on results like Sedova produced.

Among the other Gordon Bell finalists, Lawrence Livermore researchers used GPUs on the Sierra supercomputer to slash the training time for an AI model used to speed drug discovery from a day to just 23 minutes.

From the Lab to the Clinic

The Gordon Bell finalists are among more than 90 research efforts in a supercomputing collaboration using 50,000 GPU cores to fight the coronavirus.

They make up one front in a global war on COVID that also includes companies such as Oxford Nanopore Technologies, a genomics specialist using NVIDIA’s CUDA software to accelerate its work.

Oxford Nanopore won approval from European regulators last month for a novel system the size of a desktop printer that can be used with minimal training to perform thousands of COVID tests in a single day. Scientists worldwide have used its handheld sequencing devices to understand the transmission of the virus.

Relay Therapeutics uses NVIDIA GPUs and software to simulate with machine learning how proteins move, opening up new directions in the drug discovery process. In September, it started its first human trial of a molecule inhibitor to treat cancer.

Startup Structura uses CUDA on NVIDIA GPUs to analyze initial images of pathogens to quickly determine their 3D atomic structure, another key step in drug discovery. It’s a member of the NVIDIA Inception program, which gives startups in AI access to the latest GPU-accelerated technologies and market partners.

From Clara Discovery to Cambridge-1

NVIDIA Clara Discovery delivers a framework with AI models, GPU-optimized code and applications to accelerate every stage in the drug discovery pipeline. It provides speedups of 6-30x across jobs in genomics, protein structure prediction, virtual screening, docking, molecular simulation, imaging and natural-language processing that are all part of the drug discovery process.

It’s NVIDIA’s latest contribution to fighting the SARS-CoV-2 and future pathogens.

NVIDIA Clara Discovery speeds each step of a drug discovery process using AI and data analytics.

Within hours of the shelter-at-home order in the U.S., NVIDIA gave researchers free access to a test drive of Parabricks, our genomic sequencing software. Since then, we’ve provided as part of NVIDIA Clara open access to AI models co-developed with the U.S. National Institutes of Health.

We’ve also committed to build, with partners including GSK and AstraZeneca, Europe’s largest supercomputer dedicated to driving drug discovery forward. Cambridge-1 will be an NVIDIA DGX SuperPOD system capable of delivering more than 400 petaflops of AI performance.

Next Up: A Billion-Atom Simulation

The work is just getting started.

Ramanathan of Argonne sees a future where self-driving labs learn what experiments they should launch next, like autonomous vehicles finding their own way forward.

“And I want to scale to the absolute max of screening 10⁶⁸ drug compounds, but even covering half that will be significantly harder than what we’ve done so far,” he said.

“For me, simulating a virus with a billion atoms is the next peak, and we know we will get there in 2021,” said Amaro. “Longer term, we need to learn how to use AI even more effectively to deal with coronavirus mutations and other emerging pathogens that could be even worse,” she added.

Hear NVIDIA CEO Jensen Huang describe in the video below how AI in Clara Discovery is advancing drug discovery.

At top: An image of the SARS-CoV-2 virus based on the Amaro lab’s simulation showing 305 million atoms.


A Binding Decision: Startup Uses Microscopy Breakthrough to Speed Creation of COVID-19 Vaccines

In the global race to tame the spread of COVID-19, scientific researchers and pharmaceutical companies first must understand the virus’s protein structure. Doing so requires building detailed 3D models of protein molecules, which until recently has been an intensely time-consuming task. Structura Biotechnology’s groundbreaking software, with its GPU-powered machine learning algorithms, is helping speed things along.
