NVIDIA and Arm to Create World-Class AI Research Center in Cambridge

Artificial intelligence is the most powerful technology force of our time. 

It is the automation of automation, where software writes software. While AI began in the data center, it is moving quickly to the edge — to stores, warehouses, hospitals, streets, and airports, where smart sensors connected to AI computers can speed checkouts, direct forklifts, orchestrate traffic, and save power. In time, there will be trillions of these small autonomous computers powered by AI, connected by massively powerful cloud data centers in every corner of the world.

But in many ways, the field is just getting started. That’s why we are excited to be creating a world-class AI laboratory in Cambridge, at the Arm headquarters: a Hadron collider or Hubble telescope, if you like, for artificial intelligence.  

NVIDIA, together with Arm, is uniquely positioned to launch this effort. NVIDIA is the leader in AI computing, while Arm is present across a vast ecosystem of edge devices, with more than 180 billion units shipped. With this newly announced combination, we are creating the leading computing company for the age of AI. 

Arm is an incredible company and it employs some of the greatest engineering minds in the world. But we believe we can make Arm even more incredible and take it to even higher levels. We want to propel it — and the U.K. — to global AI leadership.

We will create an open center of excellence in the area once home to giants like Isaac Newton and Alan Turing, for whom key NVIDIA technologies are named. Here, leading scientists, engineers and researchers from the U.K. and around the world will come to develop their ideas, collaborate and conduct groundbreaking work in areas like healthcare, life sciences and self-driving cars. We want the U.K. to attract the best minds and talent from around the world. 

The center in Cambridge will include: 

  • An Arm/NVIDIA-based supercomputer. Expected to be one of the most powerful AI supercomputers in the world, this system will combine state-of-the-art Arm CPUs, NVIDIA’s most advanced GPU technology and NVIDIA Mellanox DPUs, along with high-performance computing and AI software from NVIDIA and our many partners. For reference, the world’s fastest supercomputer, Fugaku in Japan, is Arm-based, and NVIDIA’s own supercomputer Selene is the seventh most powerful system in the world.  
  • Research Fellowships and Partnerships. In this center, NVIDIA will expand its research partnerships with academia and industry across the U.K., covering leading-edge work in healthcare, autonomous vehicles, robotics, data science and more. NVIDIA already has successful research partnerships with King’s College and Oxford. 
  • AI Training. NVIDIA’s education wing, the Deep Learning Institute, has trained more than 250,000 students on both fundamental and applied AI. NVIDIA will create an institute in Cambridge and make our curriculum available throughout the U.K. This will provide both young people and mid-career workers with new AI skills, creating job opportunities and preparing the next generation of U.K. developers for AI leadership. 
  • Startup Accelerator. Much of the leading-edge work in AI is done by startups. NVIDIA Inception, a startup accelerator program, has more than 6,000 members — with more than 400 based in the U.K. NVIDIA will further its investment in this area by providing U.K. startups with access to the Arm supercomputer, connections to researchers from NVIDIA and partners, technical training and marketing promotion to help them grow. 
  • Industry Collaboration. The NVIDIA AI research facility will be an open hub for industry collaboration, providing a uniquely powerful center of excellence in Britain. NVIDIA’s industry partnerships include GSK, Oxford Nanopore and other leaders in their fields. From helping to fight COVID-19 to finding new energy sources, NVIDIA is already working with industry across the U.K. today — but we can and will do more. 

We are ambitious. We can’t wait to build on the foundations created by the talented minds of NVIDIA and Arm to make Cambridge the next great AI center for the world. 

Diversity and Inclusion: Read Barbara Whye’s Op-ed; Intel-Lenovo Research

Barbara Whye
Chief Diversity and Inclusion Officer
Corporate Vice President, Social Impact and Human Resources

U.K. publication PCR magazine has published “Is your workplace attractive to Generation Z?”, an op-ed from Barbara Whye, Intel’s chief diversity and inclusion officer and corporate vice president of Social Impact and Human Resources. Today, Intel also released the second installment of its global research report, commissioned with Lenovo, that considers what employees of different ages and genders expect when it comes to workplace diversity and inclusion (D&I).

In PCR, Whye writes about a U.K.-based study that Intel recently launched assessing expectations of Generation Z (people born from 1996 to 2002) concerning diversity and personal experiences of bias – and how these will contribute to shaping their future career paths. The findings were clear: For Gen Z, D&I is a deciding factor when considering future careers. Since the workforce now consists of multiple generations, it’s important to understand their viewpoints and experiences in order to be inclusive of all. Gen Z in particular expects companies to understand and respect its needs and will not conform to a culture that doesn’t align with its values.

She writes:

“From social equity to climate change, this is a generation that is determined to make a difference. When it comes to work, Intel’s research found that a majority of Gen Zs in the U.K. would be hesitant to take a job from a company that does not have diverse representation in senior leadership roles. Moreover, in choosing between competing job offers, a company’s stance on diversity and inclusivity is almost as important as the pay offered.”

» Read the full op-ed on the PCR magazine website.

As part of its joint research with Lenovo, Intel examined Gen Z’s global perspective on D&I with a focus on gender differences and the importance of leadership diversity. It surveyed over 5,000 people recruited by Lucid, a global survey platform, between Dec. 19, 2019, and Jan. 7, 2020. Key findings include:

  • Gen Z employees across markets consistently lead other generations in the importance they place on the diverse composition of company leadership. For example, Gen Z wants to see more LGBTQ representation by a 22-point margin when compared with baby boomers born between 1947 and 1965 (66% vs. 44%).
  • On the importance of companies providing specific benefits for groups with different needs, 77% of global Gen Z respondents ranked it extremely or very important, compared with 58% of baby boomers. Additionally, by a 10-point margin, Gen Z believes implementing programs to ensure D&I at every level is extremely or very important for ensuring equal treatment of employees (83% of global Gen Z respondents vs. 73% of global baby boomer respondents).
  • Employees of both genders in Brazil and China place higher importance on ensuring team members’ voices are heard (Brazil: 98% of women and 92% of men; China: 90% of women and 88% of men). However, women across all markets consistently indexed higher than their male counterparts on this issue.
  • Employees across all markets see female representation in leadership positions as a top priority; however, the U.S. is the only country to place ethnic minority leadership as a top-two priority. In the U.K., Brazil and Germany, more weight is given to leadership representation for those living with disabilities, while respondents in China want more consideration and support for parents in the workplace.

» New Research from Intel and Lenovo (Installment 2) | Intel and Lenovo Research Finds Tech is Essential to Driving Global Diversity and Inclusion (Installment 1)

Telltale Signs: AI Researchers Trace Cancer Risk Factors Using Tumor DNA

Life choices can change a person’s DNA — literally.

Gene changes that occur in human cells over a person’s lifetime, known as somatic mutations, cause the vast majority of cancers. They can be triggered by environmental or behavioral factors such as exposure to ultraviolet light or radiation, drinking or smoking.

By using NVIDIA GPUs to analyze the signature, or molecular fingerprint, of these mutations, researchers can better understand known causes of cancer, discover new risk factors and investigate why certain cancers are more common in certain areas of the world than others.

The Cancer Grand Challenges’ Mutographs team, an international research group funded by Cancer Research U.K., is using NVIDIA GPU-accelerated machine learning models to study DNA from the tumors of 5,000 patients with five cancer types: pancreas, kidney and colorectal cancer, as well as two kinds of esophageal cancer.

Using powerful NVIDIA DGX systems, researchers from the Wellcome Sanger Institute — a world leader in genomics — and the University of California, San Diego, collaborated with NVIDIA developers to achieve more than 30x acceleration when running their machine learning software SigProfiler.

“Research projects such as the Mutographs Grand Challenge are just that — grand challenges that push the boundary of what’s possible,” said Pete Clapham, leader of the Informatics Support Group at the Wellcome Sanger Institute. “NVIDIA DGX systems provide considerable acceleration that enables the Mutographs team to not only meet the project’s computational demands, but to drive it even further, efficiently delivering previously impossible results.”

Molecular Detective Work

Just as every person has a unique fingerprint, cancer-causing somatic mutations have unique patterns that show up in a cell’s DNA.

“At a crime scene, investigators will lift fingerprints and run those through a database to find a match,” said Ludmil Alexandrov, computational lead on the project and an assistant professor of cellular and molecular medicine at UCSD. “Similarly, we can take a molecular fingerprint from cells collected in a patient’s biopsy and see if it matches a risk factor like smoking or ultraviolet light exposure.”
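To make the detective-work analogy concrete: a tumor’s mutational spectrum is commonly summarized as a 96-dimensional vector (six substitution types in 16 trinucleotide contexts), and candidate matches can be ranked against a reference catalog such as COSMIC with a simple similarity search. Here is a minimal, illustrative Python sketch of that matching step (not the Mutographs code):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two mutation spectra."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(tumor_spectrum, catalog):
    """Run a tumor's 96-d spectrum against a catalog of known signatures,
    the way a fingerprint is run against a database.
    catalog: dict mapping signature name -> 96-d reference vector."""
    return max(catalog.items(), key=lambda kv: cosine(tumor_spectrum, kv[1]))
```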

Some somatic mutations have known sources, like those Alexandrov mentions. But the machine learning model can pull out other mutation patterns that occur repeatedly in patients with a specific cancer, but have no known source.

When that happens, Alexandrov teams up with other scientists to test hypotheses and perform large-scale experiments to discover the cancer-causing culprit.

Discovering a new risk factor can help improve cancer prevention. In 2018, researchers traced a skin cancer mutational signature back to an immunosuppressant drug, which now lists the condition among its possible side effects, helping doctors better monitor patients treated with the drug.

Enabling Whirlwind Tours of Global Data

In cases where the source of a mutational signature is known, researchers can analyze trends in the occurrence of specific kinds of somatic mutations (and their corresponding cancers) in different regions of the world as well as over time.

“Certain cancers are very common in one part of the world, and very rare in others. And when people migrate from one country to another, they tend to acquire the cancer risk of the country they move to,” said Alexandrov. “What that tells you is that it’s mostly environmental.”

Researchers on the Mutographs project are studying a somatic mutation linked to esophageal cancer, a condition some studies have correlated with the drinking of scalding beverages like tea or maté.

Esophageal cancer is much more common in Eastern South America, East Africa and Central Asia than in North America or West Africa. Finding the environmental or lifestyle factor that puts people at higher risk can help with prevention and early detection of future cases.

Cases of esophageal squamous cell carcinoma vary greatly around the world. (Image courtesy of Mutographs project. Data source: GLOBOCAN 2012.)

The Mutographs researchers teamed up with NVIDIA to accelerate the most time-consuming parts of the SigProfiler AI framework on NVIDIA GPUs. When running pipeline jobs with double precision on NVIDIA DGX systems, the team observed more than 30x acceleration compared to using CPU hardware. With single precision, Alexandrov says, SigProfiler runs significantly faster, achieving around a 50x speedup.
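Signature extraction of this kind rests on non-negative matrix factorization: a matrix of mutation counts is decomposed into signatures and their per-tumor activities, which is exactly the dense linear algebra that GPUs accelerate well. Below is a minimal GPU sketch of multiplicative-update NMF in CuPy (illustrative only; SigProfiler’s actual implementation differs). Switching `dtype` between `cp.float64` and `cp.float32` mirrors the double- versus single-precision trade-off described above:

```python
import cupy as cp

def nmf(counts, rank, iters=200, dtype=cp.float32):
    """Factor a (mutation types x tumors) count matrix V ~= W @ H,
    where columns of W are signatures and rows of H are their activities."""
    V = cp.asarray(counts, dtype=dtype)
    m, n = V.shape
    W = cp.random.rand(m, rank, dtype=dtype) + 1e-4
    H = cp.random.rand(rank, n, dtype=dtype) + 1e-4
    for _ in range(iters):
        # Multiplicative updates keep W and H non-negative.
        H *= (W.T @ V) / (W.T @ W @ H + 1e-10)
        W *= (V @ H.T) / (W @ (H @ H.T) + 1e-10)
    return W, H
```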

The DGX system’s optimized software and NVLink interconnect technology also enable the scaling of AI models across all eight NVIDIA V100 Tensor Core GPUs within the system for maximum performance in both model development and deployment.

For research published in Nature this year, Alexandrov’s team analyzed data from more than 20,000 cancer patients, an analysis that previously took almost a month.

“With NVIDIA DGX, we can now do that same analysis in less than a day,” he said. “That means we can do much more testing, validation and exploration.”

Subscribe to NVIDIA healthcare news here.

Main image credit: Wellcome Sanger Institute

Mass General’s Martinos Center Adopts AI for COVID, Radiology Research

Academic medical centers worldwide are building new AI tools to battle COVID-19 — including at Mass General, where one center is adopting NVIDIA DGX A100 AI systems to accelerate its work.

Researchers at the hospital’s Athinoula A. Martinos Center for Biomedical Imaging are working on models to segment and align multiple chest scans, calculate lung disease severity from X-ray images, and combine radiology data with other clinical variables to predict outcomes in COVID patients.

Built and tested using Mass General Brigham data, these models, once validated, could be used together in a hospital setting during and beyond the pandemic to bring radiology insights closer to the clinicians tracking patient progress and making treatment decisions.

“While helping hospitalists on the COVID-19 inpatient service, I realized that there’s a lot of information in radiologic images that’s not readily available to the folks making clinical decisions,” said Matthew D. Li, a radiology resident at Mass General and member of the Martinos Center’s QTIM Lab. “Using deep learning, we developed an algorithm to extract a lung disease severity score from chest X-rays that’s reproducible and scalable — something clinicians can track over time, along with other lab values like vital signs, pulse oximetry data and blood test results.”

The Martinos Center uses a variety of NVIDIA AI systems, including NVIDIA DGX-1, to accelerate its research. This summer, the center will install NVIDIA DGX A100 systems, each built with eight NVIDIA A100 Tensor Core GPUs and delivering 5 petaflops of AI performance.

“When we started working on COVID model development, it was all hands on deck. The quicker we could develop a model, the more immediately useful it would be,” said Jayashree Kalpathy-Cramer, director of the QTIM lab and the Center for Machine Learning at the Martinos Center. “If we didn’t have access to the sufficient computational resources, it would’ve been impossible to do.”

Comparing Notes: AI for Chest Imaging

COVID patients often get imaging studies — usually CT scans in Europe, and X-rays in the U.S. — to check for the disease’s impact on the lungs. Comparing a patient’s initial study with follow-ups can be a useful way to understand whether a patient is getting better or worse.

But segmenting and lining up two scans that have been taken in different body positions or from different angles, with distracting elements like wires in the image, is no easy feat.

Bruce Fischl, director of the Martinos Center’s Laboratory for Computational Neuroimaging, and Adrian Dalca, assistant professor in radiology at Harvard Medical School, took the underlying technology behind Dalca’s MRI comparison AI and applied it to chest X-rays, training the model on an NVIDIA DGX system.

“Radiologists spend a lot of time assessing if there is change or no change between two studies. This general technique can help with that,” Fischl said. “Our model labels 20 structures in a high-resolution X-ray and aligns them between two studies, taking less than a second for inference.”

This tool can be used in concert with Li and Kalpathy-Cramer’s research: a risk assessment model that analyzes a chest X-ray to assign a score for lung disease severity. The model can provide clinicians, researchers and infectious disease experts with a consistent, quantitative metric for lung impact, which is described subjectively in typical radiology reports.

Trained on a public dataset of over 150,000 chest X-rays, as well as a few hundred COVID-positive X-rays from Mass General, the severity score AI is being tested by four research groups at the hospital using the NVIDIA Clara Deploy SDK. Beyond the pandemic, the team plans to expand the model’s use to more conditions, like pulmonary edema, or wet lung. 
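The researchers’ paper describes a Siamese-network approach: a shared CNN embeds the patient’s X-ray and a pool of normal X-rays, and severity is scored by distance in feature space. A simplified sketch of that idea follows (illustrative only, not the authors’ code; the ResNet-18 backbone here is an assumption):

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Shared encoder: the classifier head is dropped so the network
# outputs a 512-d feature embedding per image.
encoder = models.resnet18(weights=None)
encoder.fc = nn.Identity()
encoder.eval()

def severity_score(patient_xray, normal_pool):
    """patient_xray: (1, 3, H, W); normal_pool: (N, 3, H, W) of normal studies.
    Larger mean embedding distance = more abnormal lungs."""
    with torch.no_grad():
        p = encoder(patient_xray)   # (1, 512)
        n = encoder(normal_pool)    # (N, 512)
    return torch.cdist(p, n).mean().item()
```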

Comparing the AI lung disease severity score, or PXS, between images taken at different stages can help clinicians track changes in a patient’s disease over time. (Image from the researchers’ paper in Radiology: Artificial Intelligence, available under open access.)

Foreseeing the Need for Ventilators

Chest imaging is just one variable in a COVID patient’s health. For the broader picture, the Martinos Center team is working with Brandon Westover, executive director of Mass General Brigham’s Clinical Data Animation Center.

Westover is developing AI models that predict clinical outcomes for both admitted patients and outpatient COVID cases, and Kalpathy-Cramer’s lung disease severity score could be integrated as one of the clinical variables for this tool.

The outpatient model analyzes 30 variables to create a risk score for each of hundreds of patients screened at the hospital network’s respiratory infection clinics — predicting the likelihood a patient will end up needing critical care or dying from COVID.

For patients already admitted to the hospital, a neural network predicts the hourly risk that a patient will require artificial breathing support in the next 12 hours, using variables including vital signs, age, pulse oximetry data and respiratory rate.

“These variables can be very subtle, but in combination can provide a pretty strong indication that a patient is getting worse,” Westover said. Running on an NVIDIA Quadro RTX 8000 GPU, the model is accessible through a front-end portal clinicians can use to see who’s most at risk, and which variables are contributing most to the risk score.
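As a sketch of what such a predictor can look like (the 30-variable input is borrowed from the description above; the architecture and everything else are assumptions, not Mass General’s implementation):

```python
import torch
import torch.nn as nn

# Toy risk model: map a vector of clinical variables (vital signs, age,
# pulse oximetry data, respiratory rate, ...) to the probability that a
# patient needs breathing support in the next 12 hours.
model = nn.Sequential(
    nn.Linear(30, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),              # single logit
)

def hourly_risk(features):
    """features: (batch, 30) tensor of the latest hourly measurements."""
    with torch.no_grad():
        return torch.sigmoid(model(features))  # risk in [0, 1]
```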

Better, Faster, Stronger: Research on NVIDIA DGX

Fischl says NVIDIA DGX systems help Martinos Center researchers more quickly iterate, experimenting with different ways to improve their AI algorithms. DGX A100, with NVIDIA A100 GPUs based on the NVIDIA Ampere architecture, will further speed the team’s work with third-generation Tensor Core technology.

“Quantitative differences make a qualitative difference,” he said. “I can imagine five ways to improve our algorithm, each of which would take seven hours of training. If I can turn those seven hours into just an hour, it makes the development cycle so much more efficient.”

The Martinos Center will use NVIDIA Mellanox switches and VAST Data storage infrastructure, enabling its developers to use NVIDIA GPUDirect technology to bypass the CPU and move data directly into or out of GPU memory, achieving better performance and faster AI training.

“Having access to this high-capacity, high-speed storage will allow us to analyze raw multimodal data from our research MRI, PET and MEG scanners,” said Matthew Rosen, assistant professor in radiology at Harvard Medical School, who co-directs the Center for Machine Learning at the Martinos Center. “The VAST storage system, when linked with the new A100 GPUs, is going to offer an amazing opportunity to set a new standard for the future of intelligent imaging.”

To learn more about how AI and accelerated computing are helping healthcare institutions fight the pandemic, visit our COVID page.

Main image shows a chest X-ray and corresponding heat map highlighting areas with lung disease. Image from the researchers’ paper in Radiology: Artificial Intelligence, available under open access.

For Businesses to Succeed in 2030, Gen Z Says No One Can be Left Behind

What’s New: With Gen Z’s representation in the global workforce set to pass 1 billion by 2030, organizations need to understand the demographic group’s motivations and perspectives on critical issues such as diversity and inclusion. Intel commissioned a study in the U.K. to assess Gen Z’s expectations around diversity, their experiences of bias and how these will contribute to shaping their future career paths.

“As Gen Z employees enter the workforce, they are going to make their voice heard on the importance of diversity and inclusion. Many have personally experienced discrimination as a result of gender, ethnic background, disability or sexual orientation, and are seeking career opportunities that align with their ethics and social values. Companies must accelerate their efforts to create diverse, inclusive workplaces to meet the expectations of a generation who will be making career choices as much on values and sense of purpose as pay and progression.”
–Megan Stowe, director of EMEA Strategic Sourcing and International Supplier Diversity & Inclusion at Intel

Why It Matters: The study found that a majority of Gen Z in the U.K. — those ages 18 to 24 — would be hesitant to take a job from a company that does not have diverse representation in senior leadership roles. In choosing between competing job offers, a company’s stance on diversity and inclusivity is almost as important as the pay offered.

Researchers surveyed 2,000 workers in the U.K. across a variety of age groups and compared the responses of Gen Z to those in other demographics.

What the Study Says: Diversity and inclusion have become essential workplace priorities: Diverse teams with diverse perspectives are more creative and innovative. It’s critical, now more than ever, to actively create and foster an environment that empowers employees to have confidence and bring their full experiences to work each day. This will continue to be driven by the expectations, experiences and needs of Gen Z employees as they enter the workforce.

The survey found a broad acceptance of the importance of diversity and inclusion across different age groups, but particularly strong opinions among members of Gen Z. The group is the most likely to have personally experienced bias as a result of gender, ethnic background or disability, and to use diversity and inclusion as a deciding factor in choosing between job offers. Key findings include:

  • Gen Z will make career choices based on diversity and inclusion. Over half (56%) of 18- to 24-year-olds said they would be hesitant to accept a job from an organization that does not have any underrepresented minorities in senior leadership roles.
  • Young people are more likely to have experienced bias. Among Gen Z, 39% have experienced bias as a result of gender, 31% personal appearance, 26% ethnic background and 21% sexual orientation. In all cases, these are higher figures than the average across all age groups.
  • Diversity and inclusion must be broadly based. Among all respondents, examples of diversity and inclusion at work cited as most important included having colleagues of all ages and levels of experience and backgrounds, as well as equal opportunities for those with disabilities. Gen Z especially emphasized the need for workplaces to be LGBTQ+ friendly.
  • Diversity delivers business value. Forty-two percent of respondents said diversity is important because it allows for greater wealth of experience and insights. Forty percent said it means people are placed first and no one is left behind.

Importance of a Mindset Shift: As they work to become more diverse and inclusive, businesses need to recognize the shift in mindsets that will follow Gen Z into the mainstream workforce. For this rising generation, values and ethics are on par with financial reward. Almost as many Gen Z respondents said they were worried about finding a job that aligns with their ethics and sense of purpose (33%) as said they were worried about finding one that provides financial security (36%). Similarly, 34% of Gen Z would decide between similar job offers based on which company is more diverse and inclusive, against 36% who would consider pay the deciding factor.

More Context: A diverse workforce and inclusive culture are key to Intel’s evolution and driving forces of its growth. This survey builds on the company’s recently announced 2030 goals and global impact challenges, which reinforce Intel’s commitment to advance diversity and inclusion across its global workforce and industry while also looking beyond its own walls: working with stakeholders to make technology fully inclusive and expand digital readiness worldwide.

Intel’s corporate responsibility and positive global impact work is embedded in its purpose to create world-changing technology that enriches the lives of every person on Earth. By leveraging its position in the technology ecosystem, Intel can help customers and partners achieve their own aspirations and accelerate progress on key topics across the technology industry.

Even More Context: Inclusion: The Deciding Factor (Study Report) | Intel 2030 Goals

Intel and Lenovo Research Finds Tech is Essential to Driving Global Diversity and Inclusion

The first release of a new global research report from Intel and Lenovo finds that technology will play an integral role in achieving diversity and inclusion (D&I) in the workplace of the future. With the power to bridge accessibility gaps, connect people who are otherwise divided, and expand the benefits of upskilling and progressive training programs, tech is enabling people to work in more dynamic, flexible ways.

The study explores how people around the world view D&I in their personal and professional lives, and their perspective on the role technology plays to address systematic inequities, create more access and enable growth. Among the study’s findings:

  • In the U.S., parents are more likely than non-parents to view flexible work hours as a prominent impact of technology in the workplace by a 12-point margin.
  • Respondents from higher income brackets are more likely to agree that tech plays an “extremely large role” in improving diversity and inclusion in the workplace.
  • More than 80% of employees in Brazil and China agree that artificial intelligence can be used to make the workplace more diverse and inclusive, as do half of respondents in the U.S., U.K. and Germany.
  • More than half of global respondents say a company’s diversity and inclusion policies are “extremely” or “very” important when deciding where to apply and whether to accept an offer.

“Intel has a longstanding commitment to diversity and inclusion. We believe that transparency is key, and our goal is to see our representation mirror the markets and customers we serve. Just as we apply our engineering mindset to create the world’s leading technological innovations, we do the same with our D&I strategies, using data to inform our decisions and sharing it transparently to drive clear accountability and deliver results across the industry,” says Barbara Whye, chief diversity & inclusion officer and vice president of Social Impact and Human Resources at Intel. “We know that to truly progress D&I, it takes companies working together, and being a global company, this work can’t be limited to the U.S. only. That’s why with both companies sharing a rich history of collaboration, we decided to extend our partnership and conduct a global survey.”

More on the Lenovo Website: Research Brief | Topline Findings | Full News Release

Quantum of Solace: Research Seeks Atomic Keys to Lock Down COVID-19

Anshuman Kumar is sharpening a digital pencil to penetrate the secrets of the coronavirus.

He and colleagues at the University of California, Riverside, want to calculate atomic interactions at a scale never before attempted for the virus. If they succeed, they’ll get a glimpse into how a molecule on the virus binds to a molecule of a drug, preventing it from infecting healthy cells.

Kumar is part of a team at UCR taking work in the tiny world of quantum mechanics to a new level. They aim to determine a so-called barrier height, the energy required to interact with a viral protein that consists of about 5,000 atoms.

That’s more than 10x the state of the art in the field, which to date has calculated forces for molecules of up to a few hundred atoms.

Accelerating Anti-COVID Drug Discovery

Data on how quantum forces determine the likelihood a virus will bind with a neutralizing molecule, called a ligand, could speed work at pharmaceutical companies seeking drugs to prevent COVID-19.

“At the atomic level, Newtonian forces become irrelevant, so you have to use quantum mechanics because that’s the way nature works,” said Bryan Wong, an associate professor of chemical engineering, materials science and physics at UCR who oversees the project. “We aim to make these calculations fast and efficient with NVIDIA GPUs in Microsoft’s Azure cloud to narrow down our path to a solution.”

Researchers started their work in late April using a protein on the coronavirus believed to play a strong role in rapidly infecting healthy cells. They’re now finishing up a series of preliminary calculations that take up to 10 days each.

The next step, discovering the barrier height, involves even more complex and time-consuming calculations. They could take as long as five weeks for a single protein/ligand pair.

Calling on GPUs in the Azure Cloud

To accelerate time to results, the team got a grant from Microsoft’s AI for Health program through the COVID-19 High Performance Computing Consortium. It included high-performance computing on Microsoft’s Azure cloud and assistance from NVIDIA.

Kumar implemented a GPU-accelerated version of the scientific program that handles the quantum calculations. It already runs on the university’s NVIDIA GPU-powered cluster on premises, but the team wanted to move it to the cloud where it could run on V100 Tensor Core GPUs.

In less than a day, Kumar was able to migrate the program to Azure with help from NVIDIA solutions architect Scott McMillan using HPC Container Maker, an open source tool created and maintained by NVIDIA. The tool lets users define a container with a short recipe that identifies a program and its key components, such as a runtime environment and other dependencies.
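For a flavor of what that looks like in practice, here is a minimal, hypothetical HPC Container Maker recipe (the base image, paths and build commands are placeholders, not UCR’s actual recipe). The `hpccm` command-line tool evaluates the recipe and emits a Dockerfile or Singularity definition:

```python
# recipe.py: generate a Dockerfile with `hpccm --recipe recipe.py --format docker`.
# Building blocks (baseimage, gnu, cmake, copy, shell) are provided by hpccm.
Stage0 += baseimage(image='nvcr.io/nvidia/cuda:11.0-devel-ubuntu18.04')
Stage0 += gnu()                                     # GNU compiler toolchain
Stage0 += cmake(eula=True)                          # build system
Stage0 += copy(src='solver/', dest='/opt/solver')   # hypothetical GPU code
Stage0 += shell(commands=['cd /opt/solver',
                          'make GPU_ARCH=sm_70'])   # target V100 GPUs
```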

Anshuman Kumar used an open source program developed by NVIDIA to move UCR’s software to the latest GPUs in the Microsoft Azure cloud.

It was a big move given the researchers had never used containers or cloud services before.

“The process is very smooth once you identify the correct libraries and dependencies — you just write a script and build the code image,” said Kumar. “After doing this, we got 2-10x speedups on GPUs on Azure compared to our local system,” he added.

NVIDIA helped fine-tune the performance by making sure the code used the latest versions of CUDA and the Magma math library. One specialist dug deep in the stack to update a routine that enabled multi-GPU scaling.

New Teammates and a Mascot

The team got some unexpected help recently when it discovered a separate computational biology lab at UCR also won a grant from the HPC consortium to work on COVID. The lab observes the binding process using statistical sampling techniques that make otherwise rare binding events occur more often.

“I reached out to them because pairing up makes for a better project,” said Wong. “They can use the GPU code Anshuman implemented for their enhanced sampling work,” he added.

“I’m really proud to be part of this work because it could help the whole world,” said Kumar.

The team also recently got a mascot. A large squirrel, dubbed Billy, now sits daily outside the window of Wong’s home office, a good symbol for the group’s aim to be fast and agile.

Pictured above: Colorful ribbons represent the Mpro protein believed to have an important role in the replication of the coronavirus. Red strands represent a biological molecule that binds to a ligand. (Image courtesy of UCR.)

Robotics Reaps Rewards at ICRA: NVIDIA’s Dieter Fox Named RAS Pioneer

Thousands of researchers from around the globe will be gathering — virtually — next week for the IEEE International Conference on Robotics and Automation.

As a flagship conference on all things robotics, ICRA has become a renowned forum since its inception in 1984. This year, NVIDIA’s Dieter Fox will receive the RAS Pioneer Award, given by the IEEE Robotics and Automation Society.

Fox is the company’s senior director of robotics research and head of the NVIDIA Robotics Research Lab in Seattle, as well as a professor at the University of Washington Paul G. Allen School of Computer Science & Engineering and head of the UW Robotics and State Estimation Lab. At the NVIDIA lab, Fox leads over 20 researchers and interns, fostering collaboration with the neighboring UW.

He’s receiving the RAS Pioneer Award “for pioneering contributions to probabilistic state estimation, RGB-D perception, machine learning in robotics, and bridging academic and industrial robotics research.”

“Being recognized with this award by my research colleagues and the IEEE society is an incredible honor,” Fox said. “I’m very grateful for the amazing collaborators and students I had the chance to work with during my career. I also appreciate that IEEE sees the importance of connecting academic and industrial research — I believe that bridging these areas allows us to make faster progress on the problems we really care about.”

Fox will also give a talk at the conference, where researchers from NVIDIA Research will present a total of 19 papers investigating a variety of topics in robotics.

Here’s a preview of some of the show-stopping NVIDIA research papers that were accepted at ICRA:

Robotics Work a Finalist for Best Paper Awards

“6-DOF Grasping for Target-Driven Object Manipulation in Clutter” is a finalist for both the Best Paper Award in Robot Manipulation and the Best Student Paper.

The paper delves into the challenging robotics problem of grasping in cluttered environments, which is a necessity in most real-world scenes, said Adithya Murali, one of the lead researchers and a graduate student at the Robotics Institute at Carnegie Mellon University. Much current research considers only planar grasping, in which a robot grasps from the top down rather than moving in more dimensions.

Arsalan Mousavian, another lead researcher on the paper and a senior research scientist at the NVIDIA Robotics Research Lab, explained that they performed this research in simulation. “We weren’t bound by any physical robot, which is time-consuming and very expensive,” he said.

Mousavian and his colleagues trained their algorithms on NVIDIA V100 Tensor Core GPUs, and then tested on NVIDIA TITAN GPUs. For this particular paper, the training data consisted of simulating 750,000 robot-object interactions in less than half a day, and the models were trained in a week. Once trained, the robot was able to robustly manipulate objects in the real world.

Replanning for Success

NVIDIA Research also considered how robots could plan to accomplish a wide variety of tasks in challenging environments, such as grasping an object that isn’t visible, in a paper called “Online Replanning in Belief Space for Partially Observable Task and Motion Problems.”

The approach makes a variety of tasks possible. Caelan Garrett, graduate student at MIT and a lead researcher on the paper, explained, “Our work is quite general in that we deal with tasks that involve not only picking and placing things in the environment, but also pouring things, cooking, trying to open doors and drawers.”

Garrett and his colleagues created an open-source algorithm, SS-Replan, that allows the robot to incorporate observations when making decisions, which it can adjust based on new observations it makes while trying to accomplish its goal.

They tested their work in NVIDIA Isaac Sim, a simulation environment used to develop, test and evaluate virtual robots, and on a real robot.

DexPilot: A Teleoperated Robotic Hand-Arm System

In another paper, NVIDIA researchers confronted the problem that current robotics algorithms don’t yet allow a robot to autonomously complete precise tasks such as pulling a tea bag out of a drawer, removing a dollar bill from a wallet or unscrewing the lid off a jar.

In “DexPilot: Depth-Based Teleoperation of Dexterous Robotic Hand-Arm System,” NVIDIA researchers present a system in which a human can remotely operate a robotic system. DexPilot observes the human hand using cameras, and then uses neural networks to relay the motion to the robotic hand.

Whereas other systems require expensive equipment such as motion-capture systems, gloves and headsets, DexPilot achieves teleoperation through a combination of deep learning and optimization.

Training took 15 hours on a single GPU once the data was collected, according to NVIDIA researchers Ankur Handa and Karl Van Wyk, two of the paper’s authors. They and their colleagues used the NVIDIA TITAN GPU for their research.

Learn all about these papers and more at ICRA 2020.

The NVIDIA research team has more than 200 scientists around the globe, focused on areas such as AI, computer vision, self-driving cars, robotics and graphics. Learn more at www.nvidia.com/research.

40 Years on, PAC-MAN Recreated with AI by NVIDIA Researchers

Forty years to the day since PAC-MAN first hit arcades in Japan and went on to munch a path to global stardom, the retro classic has been reborn, courtesy of AI.

Trained on 50,000 episodes of the game, a powerful new AI model created by NVIDIA Research, called NVIDIA GameGAN, can generate a fully functional version of PAC-MAN — without an underlying game engine. That means that even without understanding a game’s fundamental rules, AI can recreate the game with convincing results.

GameGAN is the first neural network model that mimics a computer game engine by harnessing generative adversarial networks, or GANs. Made up of two competing neural networks, a generator and a discriminator, GAN-based models learn to create new content that’s convincing enough to pass for the original.
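For readers unfamiliar with the mechanics, a GAN alternates two updates: the discriminator learns to separate real samples from generated ones, and the generator learns to fool it. A minimal PyTorch sketch of one adversarial training step (generic, far simpler than GameGAN itself):

```python
import torch
import torch.nn as nn

# Tiny generator and discriminator over flattened 28x28 frames (toy sizes).
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):                      # real: (batch, 784) real frames
    batch = real.size(0)
    fake = G(torch.randn(batch, 64))       # generate from random noise

    # 1) Discriminator: push real -> 1, fake -> 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(batch, 1)) + \
             bce(D(fake.detach()), torch.zeros(batch, 1))
    loss_d.backward()
    opt_d.step()

    # 2) Generator: make the discriminator call fakes real.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(batch, 1))
    loss_g.backward()
    opt_g.step()
```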

“This is the first research to emulate a game engine using GAN-based neural networks,” said Seung-Wook Kim, an NVIDIA researcher and lead author on the project. “We wanted to see whether the AI could learn the rules of an environment just by looking at the screenplay of an agent moving through the game. And it did.”

As an artificial agent plays the GAN-generated game, GameGAN responds to the agent’s actions, generating new frames of the game environment in real time. GameGAN can even generate game layouts it’s never seen before, if trained on screenplays from games with multiple levels or versions.

This capability could be used by game developers to automatically generate layouts for new game levels, as well as by AI researchers to more easily develop simulator systems for training autonomous machines.

“We were blown away when we saw the results, in disbelief that AI could recreate the iconic PAC-MAN experience without a game engine,” said Koichiro Tsutsumi from BANDAI NAMCO Research Inc., the research development company of the game’s publisher BANDAI NAMCO Entertainment Inc., which provided the PAC-MAN data to train GameGAN. “This research presents exciting possibilities to help game developers accelerate the creative process of developing new level layouts, characters and even games.”

We’ll be making our AI tribute to the game available later this year on AI Playground, where anyone can experience our research demos firsthand.

AI Goes Old School

PAC-MAN enthusiasts once had to take their coins to the nearest arcade to play the classic maze chase. Take a left at the pinball machine and continue straight past the air hockey, following the unmistakable soundtrack of PAC-MAN gobbling dots and avoiding ghosts Inky, Pinky, Blinky and Clyde.

In 1981 alone, Americans inserted billions of quarters to play 75,000 hours of coin-operated games like PAC-MAN. Over the decades since, the hit game has seen versions for PCs, gaming consoles and cell phones.

Game Changer: NVIDIA Researcher Seung-Wook Kim and his collaborators trained GameGAN on 50,000 episodes of PAC-MAN.

The GameGAN edition relies on neural networks, instead of a traditional game engine, to generate PAC-MAN’s environment. The AI keeps track of the virtual world, remembering what’s already been generated to maintain visual consistency from frame to frame.
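Conceptually, the generator takes over the role of a game engine: given the agent’s action and a recurrent state standing in for the game’s memory, it emits the next frame. A toy sketch of that interface (the module names, sizes and GRU stand-in are assumptions for illustration; GameGAN’s actual dynamics engine, memory module and renderer are more elaborate):

```python
import torch
import torch.nn as nn

class NeuralGameEngine(nn.Module):
    """Toy action-conditioned frame generator in the spirit of GameGAN."""
    def __init__(self, n_actions=5, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(n_actions, 32)      # encode keystrokes
        self.dynamics = nn.GRUCell(32, hidden)        # recurrent game state
        self.render = nn.Sequential(nn.Linear(hidden, 3 * 84 * 84), nn.Tanh())

    def forward(self, action, state):
        state = self.dynamics(self.embed(action), state)  # advance the world
        frame = self.render(state).view(-1, 3, 84, 84)    # decode to pixels
        return frame, state

engine = NeuralGameEngine()
state = torch.zeros(1, 512)
for action in [0, 1, 1, 2]:                               # an agent's keystrokes
    frame, state = engine(torch.tensor([action]), state)  # one frame per step
```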

No matter the game, the GAN can learn its rules simply by ingesting screen recordings and agent keystrokes from past gameplay. Game developers could use such a tool to automatically design new level layouts for existing games, using screenplay from the original levels as training data.

With data from BANDAI NAMCO Research, Kim and his collaborators at the NVIDIA AI Research Lab in Toronto used NVIDIA DGX systems to train the neural networks on the PAC-MAN episodes (a few million frames, in total) paired with data on the keystrokes of an AI agent playing the game.

The trained GameGAN model then generates static elements of the environment, like a consistent maze shape, dots and Power Pellets — plus moving elements like the enemy ghosts and PAC-MAN itself.

It learns key rules of the game, both simple and complex. Just like in the original game, PAC-MAN can’t walk through the maze walls. He eats up dots as he moves around, and when he consumes a Power Pellet, the ghosts turn blue and flee. When PAC-MAN exits the maze from one side, he’s teleported to the opposite end. If he runs into a ghost, the screen flashes and the game ends.

Since the model can disentangle the background from the moving characters, it’s possible to recast the game to take place in an outdoor hedge maze, or swap out PAC-MAN for your favorite emoji. Developers could use this capability to experiment with new character ideas or game themes.

It’s Not Just About Games

Autonomous robots are typically trained in a simulator, where the AI can learn the rules of an environment before interacting with objects in the real world. Creating a simulator is a time-consuming process for developers, who must code rules about how objects interact with one another and how light works within the environment.

Simulators are used to develop autonomous machines of all kinds, such as warehouse robots learning how to grasp and move objects around, or delivery robots that must navigate sidewalks to transport food or medicine.

GameGAN introduces the possibility that the work of writing a simulator for tasks like these could one day be replaced by simply training a neural network.

Suppose you install a camera on a car. It can record what the road environment looks like or what the driver is doing, like turning the steering wheel or hitting the accelerator. This data could be used to train a deep learning model that can predict what would happen in the real world if a human driver — or an autonomous car — took an action like slamming the brakes.

“We could eventually have an AI that can learn to mimic the rules of driving, the laws of physics, just by watching videos and seeing agents take actions in an environment,” said Sanja Fidler, director of NVIDIA’s Toronto research lab. “GameGAN is the first step toward that.”

NVIDIA Research has more than 200 scientists around the globe, focused on areas such as AI, computer vision, self-driving cars, robotics and graphics.

GameGAN is authored by Fidler, Kim, NVIDIA researcher Jonah Philion, University of Toronto student Yuhao Zhou and MIT professor Antonio Torralba. The paper will be presented at the prestigious Conference on Computer Vision and Pattern Recognition in June.

PAC-MAN™ & © BANDAI NAMCO Entertainment Inc.
