Intel and Google Cloud Simplify Hybrid and Multi-Cloud Deployment

What’s New: Intel and Google Cloud today announced a collaboration to simplify enterprises’ ability to adopt and deploy cloud-first business models using their existing on-prem, self-managed hardware. The two organizations co-developed reference architectures optimized for the now generally available “Anthos on bare metal” solution. Targeted at data center and edge computing use cases, the reference architectures let customers rapidly deploy enterprise-class applications on their existing hardware infrastructure and efficiently handle complicated hybrid- and multi-cloud tasks.

“With today’s rapidly evolving business climate, enterprises are constantly looking for new ways to modernize their business while leveraging their existing infrastructure. Running Anthos on bare metal using servers based on Intel® Xeon® Scalable processors will simplify the deployment of a cloud-first approach, opening a wide array of new use cases across retail, telco and manufacturing industries.”
–Jason Grebe, Intel corporate vice president and general manager of the Cloud and Enterprise Solutions Group

What’s the Benefit: Anthos on bare metal helps customers expedite hybrid- and multi-cloud deployments within their enterprise. The co-developed reference architectures are optimized for the unique feature set of Intel® processors, giving customers consistency as they move containerized applications between common architectures across different cloud environments.

Intel Xeon Scalable processors are the most widely deployed server processors in the world, targeting a broad range of applications from the core of the data center to the edge of the network. Customers can use the data center reference architecture, built on Intel Xeon Scalable processors, to help:

  • Optimize networking workloads leveraging Intel’s Data Plane Development Kit with single root input/output virtualization.
  • Deploy artificial intelligence and analytics workloads leveraging Intel® Deep Learning Boost technology.
  • Accelerate encryption and compression workloads with Intel® QuickAssist technology.

“Anthos on bare metal provides customers with more choice and flexibility over where to deploy applications in the public cloud, on prem or at the edge,” said Rayn Veerubhotla, director of Partner Engineering at Google Cloud. “Intel’s support for Anthos on bare metal ensures that customers can quickly deploy their enterprise applications on existing hardware, simplifying their path to hybrid- and multi-cloud approaches.”

Why It’s Important: Anthos allows applications to be packaged into containers and moved between various public cloud environments without having to rewrite the application for the underlying cloud infrastructure. Anthos on bare metal is an option that now allows enterprises to run Anthos on their existing on-prem physical servers, deployed on an operating system without a hypervisor layer.
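
As a rough illustration of that portability, the sketch below applies one containerized application definition, unchanged, to two different clusters using the Kubernetes Python client, for example an Anthos on bare metal cluster on-prem and a managed cluster in the cloud. It is a minimal sketch, not part of the announcement: the kubeconfig context names, namespace and container image are hypothetical placeholders.

```python
# Minimal portability sketch (illustrative only): the same Deployment object
# is applied to two clusters; only the kubeconfig context changes.
# Context names, namespace and image are hypothetical placeholders.
from kubernetes import client, config

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="inventory-api"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "inventory-api"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "inventory-api"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="inventory-api",
                        image="registry.example.com/inventory-api:1.0",
                    )
                ]
            ),
        ),
    ),
)

# Same spec, different targets: an on-prem bare metal cluster and a cloud cluster.
for context in ("anthos-baremetal-onprem", "gke-us-central1"):
    config.load_kube_config(context=context)
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```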

What’s Being Done: Intel and Google Cloud have co-developed two reference architectures for Anthos on bare metal: a data center reference architecture and an edge reference architecture. The server-based data center reference architecture features Intel® Xeon® Gold 6240Y processors, Intel® Optane™ persistent memory, Intel® Solid State Drive DC S4500 Series and 10/25 GbE Intel® Ethernet Adapters. The edge reference architecture targets the Intel® NUC 10 performance kit featuring the 10th Gen Intel® Core™ i7-10710U processor, Intel® SSD Pro 7600p and Intel® Ethernet Connection I219-V.

Intel has validated both architectures with Google Cloud, so customers can start deploying them today. Customers can work with their OEM partner, systems integrator or reseller to build out the reference architecture in their infrastructure.

More Context: Anthos on bare metal, now GA, puts you back in control (Google Cloud Blog) | Intel Data Center News

How to Avoid Speed Bumps and Stay in the AI Fast Lane with Hybrid Cloud Infrastructure

Cloud or on premises? That’s the question many organizations ask when building AI infrastructure.

Cloud computing can help developers get a fast start with minimal cost. It’s great for early experimentation and supporting temporary needs.

As businesses iterate on their AI models, however, those models can become increasingly complex, consume more compute cycles and involve exponentially larger datasets. The costs of data gravity can escalate, with more time and money spent pushing large datasets from where they’re generated to where compute resources reside.

This AI development “speed bump” is often an inflection point where organizations realize there are opex benefits to on-premises or colocated infrastructure. Its fixed costs can support rapid iteration at the lowest “cost per training run,” complementing their cloud usage.

Conversely, for organizations whose datasets are created in the cloud and live there, procuring compute resources adjacent to that data makes sense. Whether on-prem or in the cloud, minimizing data travel — by keeping large volumes as close to compute resources as possible — helps minimize the impact of data gravity on operating costs.

‘Own the Base, Rent the Spike’ 

Businesses that ultimately embrace hybrid cloud infrastructure trace a familiar trajectory.

One customer developing an image recognition application immediately benefited from a fast, effortless start in the cloud.

As their database grew to millions of images, costs rose and processing slowed, causing their data scientists to become more cautious in refining their models.

At this tipping point — when a fixed cost infrastructure was justified — they shifted training workloads to an on-prem NVIDIA DGX system. This enabled an immediate return to rapid, creative experimentation, allowing the business to build on the great start enabled by the cloud.

The saying “own the base, rent the spike” captures this situation. Enterprise IT provisions on-prem DGX infrastructure to support the steady-state volume of AI workloads and retains the ability to burst to the cloud whenever extra capacity is needed.

It’s this hybrid cloud approach that can secure the continuous availability of compute resources for developers while ensuring the lowest cost per training run.
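
To make the “cost per training run” reasoning concrete, here is a toy back-of-the-envelope comparison. Every figure is a hypothetical placeholder rather than NVIDIA or cloud-provider pricing; the point is only that a fixed on-prem cost per run falls as run volume grows, while per-hour cloud cost stays flat.

```python
# Toy "own the base, rent the spike" arithmetic. All numbers are hypothetical
# placeholders chosen for illustration, not real pricing.
OWNED_FIXED_COST_PER_MONTH = 30_000.0  # assumed amortized hardware, power and ops
CLOUD_COST_PER_GPU_HOUR = 3.0          # assumed on-demand GPU rate
GPU_HOURS_PER_RUN = 100.0              # assumed size of one training run

def cost_per_run_owned(runs_per_month: int) -> float:
    # Fixed cost spread over however many runs the team actually executes.
    return OWNED_FIXED_COST_PER_MONTH / runs_per_month

def cost_per_run_cloud() -> float:
    # Pay-as-you-go cost is flat per run regardless of volume.
    return CLOUD_COST_PER_GPU_HOUR * GPU_HOURS_PER_RUN

for runs in (10, 50, 200):
    print(f"{runs:>3} runs/month  owned: ${cost_per_run_owned(runs):8.2f}/run"
          f"  cloud: ${cost_per_run_cloud():8.2f}/run")
```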

Delivering the AI Hybrid Cloud with DGX and Google Cloud’s Anthos on Bare Metal

To help businesses embrace hybrid cloud infrastructure, NVIDIA has introduced support for Google Cloud’s Anthos on bare metal for its DGX A100 systems.

For customers using Kubernetes to straddle cloud GPU compute instances and on-prem DGX infrastructure, Anthos on bare metal enables a consistent development and operational experience across deployments, while reducing expensive overhead and improving developer productivity.

This presents several benefits to enterprises. While many have implemented GPU-accelerated AI in their data centers, much of the world retains some legacy x86 compute infrastructure. With Anthos on bare metal, IT can easily add on-prem DGX systems to their infrastructure to tackle AI workloads and manage them in the same familiar way, all without the need for a hypervisor layer.

Without the need for a virtual machine, Anthos on bare metal — now generally available — manages application deployment and health across existing environments for more efficient operations. Anthos on bare metal can also manage application containers on a wide variety of performance- and GPU-optimized hardware types and allows for direct application access to hardware.

“Anthos on bare metal provides customers with more choice over how and where they run applications and workloads,” said Rayn Veerubhotla, Director of Partner Engineering at Google Cloud. “NVIDIA’s support for Anthos on bare metal means customers can seamlessly deploy NVIDIA’s GPU Device Plugin directly on their hardware, enabling increased performance and flexibility to balance ML workloads across hybrid environments.”

Additionally, teams can access their favorite NVIDIA NGC containers, Helm charts and AI models from anywhere.
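
As a rough sketch of how these pieces fit together on a cluster such as Anthos on bare metal with DGX nodes: once the NVIDIA device plugin is running, a pod can request GPUs through the standard nvidia.com/gpu resource and run an NGC container. The example below is illustrative only; the namespace and NGC image tag are assumptions, so check the NGC catalog for current tags.

```python
# Illustrative GPU pod sketch: requests one GPU via the nvidia.com/gpu
# resource exposed by the NVIDIA device plugin and runs an NGC PyTorch image.
# Namespace and image tag are assumed placeholders.
from kubernetes import client, config

pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(name="ngc-gpu-smoke-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="pytorch",
                image="nvcr.io/nvidia/pytorch:20.11-py3",  # assumed NGC tag
                command=["python", "-c", "import torch; print(torch.cuda.is_available())"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}  # schedules onto a GPU node
                ),
            )
        ],
    ),
)

config.load_kube_config()  # uses the current kubeconfig context
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```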

With this combination, enterprises can enjoy the rapid start and elasticity of resources offered on Google Cloud, as well as the secure performance of dedicated on-prem DGX infrastructure.

Learn more about Google Cloud’s Anthos.

Learn more about NVIDIA DGX A100.

MONAI Imaging Framework Fast-Tracked to Production to Accelerate AI in Healthcare

MONAI — the Medical Open Network for AI, a domain-optimized, open-source framework for healthcare — is now ready for production with the upcoming release of the NVIDIA Clara application framework for AI-powered healthcare and life sciences.

Introduced in April and already adopted by leading healthcare research institutions, MONAI is a PyTorch-based framework that enables the development of AI for medical imaging with industry-specific data handling, high-performance training workflows and reproducible reference implementations of state-of-the-art approaches.
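
For readers new to the framework, the minimal sketch below shows what a MONAI training step can look like: a 3D U-Net from monai.networks.nets trained with a Dice loss. It uses synthetic tensors in place of real medical images, and exact constructor argument names can vary slightly between MONAI releases.

```python
# Minimal MONAI training sketch with synthetic data (illustrative only).
# Argument names follow recent MONAI releases and may differ across versions.
import torch
from monai.losses import DiceLoss
from monai.networks.nets import UNet

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Small 3D U-Net for two-class segmentation of single-channel volumes.
model = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
).to(device)

loss_fn = DiceLoss(to_onehot_y=True, softmax=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Synthetic stand-ins for a batch of volumes and labels; a real pipeline
# would build these with monai.transforms and monai.data loaders.
images = torch.rand(2, 1, 64, 64, 64, device=device)
labels = torch.randint(0, 2, (2, 1, 64, 64, 64), device=device)

model.train()
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```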

As part of the updated Clara offering, MONAI will come with over 20 pre-trained models, including ones recently developed for COVID-19, as well as the latest training optimizations on NVIDIA DGX A100 GPUs that provide up to a sixfold acceleration in training turnaround time.

“MONAI is becoming the PyTorch of healthcare, paving the way for closer collaboration between data scientists and clinicians,” said Dr. Jayashree Kalpathy-Cramer, director of the QTIM lab at the Athinoula A. Martinos Center for Biomedical Imaging at MGH. “Global adoption of MONAI is fostering collaboration across the globe facilitated by federated learning.”

MONAI’s adoption across the healthcare ecosystem has been tremendous. DKFZ, King’s College London, Mass General, Stanford and Vanderbilt are among the institutions that have adopted the AI framework for imaging. MONAI is being used in everything from industry-leading imaging competitions to the first boot camp focused on the framework, held in September, which drew over 550 registrants from 40 countries, including undergraduate university students.

“MONAI is quickly becoming the go-to deep learning framework for healthcare. Getting from research to production is critical for the integration of AI applications into clinical care,” said Dr. Bennett Landman of Vanderbilt University. “NVIDIA’s commitment to community-driven science and allowing the academic community to contribute to a framework that is production-ready will allow for further innovation to build enterprise-ready features.”

New Features

NVIDIA Clara brings the latest breakthroughs in AI-assisted annotation, federated learning and production deployment to the MONAI community.

The latest version introduces a game-changing addition to AI-assisted annotation: a new model called DeepGrow 3D that allows radiologists to label complex 3D CT data in one-tenth of the clicks. Instead of the traditional, time-consuming method of segmenting an organ or lesion image by image or slice by slice, which can take up to 250 clicks for a large organ like the liver, users can segment with far fewer clicks.

Integrated with Fovia Ai’s F.A.S.T. AI Annotation software, NVIDIA Clara’s AI-assisted annotation tools and the new DeepGrow 3D feature can be used to label training data as well as to assist radiologists during reads. Fovia offers the XStream HDVR SDK suite for reviewing DICOM images, which is integrated into industry-leading PACS viewers.

AI-assisted annotation is the key to unlocking rich radiology datasets and was recently used to label the public COVID-19 CT dataset published by The Cancer Imaging Archive at the U.S. National Institutes of Health. This labeled dataset was then used in the MICCAI-endorsed COVID-19 Lung CT Lesion Segmentation Challenge.

Clara Federated Learning made possible the recent research collaboration of 20 hospitals around the world to develop a generalized AI model for COVID-19 patients. The EXAM model predicts oxygen requirements in COVID-19 patients, is available on the NGC software registry, and is being evaluated for clinical validation at Mount Sinai Health System in New York, Diagnósticos da America SA in Brazil, NIHR Cambridge Biomedical Research Centre in the U.K. and the NIH.
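
The core idea behind such federated collaborations is that each hospital trains on its own data and only model weights are shared and averaged. The sketch below shows that federated-averaging idea in plain PyTorch; it is a conceptual illustration, not the Clara Federated Learning API, and the model and per-site data are synthetic placeholders.

```python
# Conceptual federated averaging (FedAvg) in plain PyTorch, to illustrate the
# idea behind federated learning. This is NOT the NVIDIA Clara Federated
# Learning API; Clara handles the real server/client orchestration, security
# and privacy machinery omitted here. Model and data are synthetic.
import copy
import torch

def local_update(global_model, batches, epochs=1, lr=1e-3):
    """Train a copy of the global model on one site's private batches."""
    model = copy.deepcopy(global_model)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.BCEWithLogitsLoss()
    model.train()
    for _ in range(epochs):
        for x, y in batches:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
    return model.state_dict()

def federated_average(state_dicts):
    """Average site weights; raw patient data never leaves a site."""
    return {
        key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
        for key in state_dicts[0]
    }

# One federated round over three hypothetical sites with synthetic data.
global_model = torch.nn.Linear(10, 1)
site_batches = [[(torch.randn(8, 10), torch.rand(8, 1))] * 4 for _ in range(3)]
site_states = [local_update(global_model, batches) for batches in site_batches]
global_model.load_state_dict(federated_average(site_states))
print("completed one federated round across", len(site_states), "sites")
```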

“The MONAI software framework provides key components for training and evaluating imaging-based deep learning models, and its open-source approach is fostering a growing community that is contributing exciting advances, such as federated learning,” said Dr. Daniel Rubin, professor of biomedical data science, radiology and medicine at Stanford University.

NVIDIA is additionally expanding its release of NVIDIA Clara to digital pathology applications, where the sheer sizes of images would choke off-the-shelf open-source AI tools. Clara for pathology early access contains reference pipelines for both training and deployment of AI applications.

“Healthcare data interoperability, model deployment and clinical pathway integration are an increasingly complex and intertwined topic, with significant field-specific expertise,” said Jorge Cardoso, CTO of the London Medical Imaging and AI Centre for Value-based Healthcare. “Project MONAI, jointly with the rest of the NVIDIA Clara ecosystem, will help deliver improvements to patient care and optimize hospital operations.”

Learn more about NVIDIA Clara Train 4.0 and subscribe to NVIDIA healthcare news.

NVIDIA Launches Inception Alliance with GE Healthcare and Nuance to Accelerate Medical Imaging AI Startups

Just over 125 years ago, on Nov. 8, 1895, the world’s first X-ray image was captured. It was a breakthrough that set the stage for the modern medical imaging industry.

Over the decades, an entire ecosystem of medical imaging hardware and software came to be, with AI startups now playing a key role within it.

Today we’re announcing the NVIDIA Inception Alliance for Healthcare, an initiative where medical AI startups have new opportunities to chart innovations and accelerate their success with the help of NVIDIA and its healthcare industry partners.

Premier members of NVIDIA Inception, our accelerator program for more than 6,500 AI and data science startups across 90 countries, can now join the GE Healthcare Edison Developer Program. Through integration with the GE Healthcare Edison Platform, these startups gain access to GE Healthcare’s global network to scale clinical and commercial activities across its expansive installed base of 4 million imaging, mobile diagnostic and monitoring units in 160 countries, with 230 million exams and associated data.

Premier members with FDA clearance also can join the Nuance AI Marketplace for Diagnostic Imaging. The Nuance AI Marketplace brings AI into the radiology workflow by connecting developers directly with radiology subscribers. It offers AI developers a single API to connect their AI solutions to radiologists across 8,000 healthcare facilities that use the Nuance PowerShare Network. And it gives radiology subscribers a one-stop shop to review, try, validate and buy AI models — bridging the technology divide to make AI useful, usable and used.

A Prescription for Growth

NVIDIA Inception, which recently added its 1,000th healthcare AI startup, offers its members a variety of ongoing benefits, including go-to-market support, technology assistance and access to NVIDIA expertise — all tailored to a business’s maturity stage. Startups get access to training through the NVIDIA Deep Learning Institute, preferred pricing on hardware through our global network of distributors, invitations to exclusive networking events and more.

To nurture the growth of AI startups in healthcare, and ultimately the entire medical ecosystem, NVIDIA is working with healthcare giants to offer Inception members an accelerated go-to-market path.

The NVIDIA Inception Alliance for Healthcare will forge new ways to grow through targeted networking, AI training, early access to technology, pitch competitions and technology integration. Members will receive customized training and support to develop, deploy and integrate NVIDIA GPU-accelerated apps everywhere within the medical imaging ecosystem.

Select startup members will have direct access to engage joint customers, in addition to marketing promotion of their results. The initiative will kick off with a pitch competition for leading AI startups in medical imaging and related supporting fields.

“Startups are on the forefront of innovation and the GE Healthcare Edison Developer Program provides them access to the world’s largest installed base of medical devices and customers,” said Karley Yoder, vice president and general manager of Artificial Intelligence at GE Healthcare. “Bringing together the world-class capabilities from industry-leading partners creates a fast-track to accelerate innovation in a connected ecosystem that will help improve the quality of care, lower healthcare costs and deliver better outcomes for patients.”

“With Nuance’s deep understanding of radiologists’ needs and workflow, we are uniquely positioned to help them transform healthcare by harnessing AI. The Nuance AI Marketplace gives radiologists the ability to easily purchase, validate and use AI models within solutions they use every day, so they can work smarter and more efficiently,” said Karen Holzberger, senior vice president and general manager of the Diagnostic Division at Nuance. “The AI models help radiologists focus their time and expertise on the right case at the right time, alleviate many repetitive, mundane tasks and, ultimately, improve patient care — and save more lives. Connecting NVIDIA Inception startup members to the Nuance AI Marketplace is a natural fit — and creates a connection for startups that benefits the entire industry.”

Learn more at the NVIDIA RSNA 2020 Special Address, which is open to the public on Tuesday, Dec. 1, at 6 p.m. CT. Kimberly Powell, NVIDIA’s vice president of healthcare, will discuss how we’re working with medical imaging companies, radiologists, data scientists, researchers and medical device companies to bring workflow acceleration, AI models and deployment platforms to the medical imaging ecosystem.

Subscribe to NVIDIA healthcare news.

Bob Swan: Open Letter to President-elect Biden

Bob Swan, Intel chief executive officer, sent the following letter to the president-elect.

The Honorable Joseph R. Biden Jr.
President-elect of the United States

Dear Mr. President-elect,

Congratulations on your election as our 46th president. I also want to congratulate Vice President-elect Kamala Harris for her historic achievement and recognize the role your administration will play in inspiring our next generation of leaders.

Bob Swan, chief executive officer of Intel Corp. (Credit: Cayce Clifford)

2020 has been a particularly disruptive year for the American people. And we know you are focused on uniting our nation to overcome the challenges posed by COVID-19, racial strife, a growing skills gap and increasing global competition.

In 1968, America was in a similar place. We were a nation divided over the Vietnam War, divided by race, undergoing a recession and experiencing mass protests shaping the political landscape. In this environment of change and upheaval, Robert Noyce and Gordon Moore came together and founded Intel, starting a silicon revolution that gave rise to many future technologies. Today, Intel is the only U.S.-based manufacturer of leading-edge semiconductors, with more than 50,000 employees across the country and innovation hubs in Oregon, Arizona, Texas, New Mexico and California. We again stand at the ready to support the next generation of technological advancements.

As the leader of a company driven by our purpose to create world-changing technologies that enrich the lives of every person on Earth, I am grateful for your recognition of the role technology plays in solving our nation’s largest societal challenges. As you begin to further develop your policy agenda, I urge you to focus on the following areas:

Investing in Technology to Solve the Challenges Posed by COVID

Artificial intelligence, high performance computing and edge-to-cloud computing are critical components in government collection and analysis of data, diagnostics, treatment and vaccine development. Intel technology has helped accelerate access to quality data to deliver remote care and protect medical professionals from exposure to infection. As you know, this pandemic has widely affected education, work and other aspects of our daily lives. It is crucial to expand investments in broadband connectivity, particularly to lessen the impact of COVID on the underserved and in communities of color.

Increasing U.S. Manufacturing

Your planned investment in American-made goods is critical to U.S. innovation and technology leadership. According to the Semiconductor Industry Association, the U.S. accounts for just 12% of global semiconductor production capacity, with more than 80% taking place in Asia. Rising costs and foreign government subsidies to national champions are a significant disadvantage for U.S. semiconductor companies that make substantial capital investments domestically. A national manufacturing strategy, including investment by the U.S. government in the domestic semiconductor industry, is critical to ensure American companies compete on a level playing field and lead the next generation of innovative technology.

Investing in Digital Infrastructure

Smart infrastructure spending will help address pressing economic and climate change needs. This will include technology designed to make cities and energy systems smarter and more efficient. Widespread deployment of advanced 5G telecommunications networks will fuel efficiencies for businesses in all industries and enable more U.S. innovation. Upgrades to our infrastructure must not only handle the technology of today but spur domestic development of the technologies of tomorrow.

Developing a 21st Century Workforce

In the U.S., Intel hired more than 4,000 people this year, and it still has 800 positions to fill. We produce the most complex technology on the planet and need access to the best talent available. We are designing STEM curricula to help feed the workforce pipeline and make next-generation training and skills more accessible. This year, we partnered with Maricopa County Community College District in Arizona to launch the first Intel-designed artificial intelligence associate degree program in the United States.

While we work to build a greater pipeline of U.S. high-tech workers, American universities and companies provide opportunities to smart, hard-working people from all over the world. They return the favor many times over with their contributions to this country and our technology leadership. The U.S. has welcomed global talent for decades and should continue to support immigration programs needed by Intel and other high-tech companies to operate in the U.S.

At Intel, we believe the current and future workforces need to reflect the makeup of this nation. We also share your commitment to make racial equity a top priority. We set ambitious goals for Intel’s next decade. We aim to double the number of women and underrepresented minorities in senior leadership at Intel, and to collaborate within our industry to create and implement a Global Inclusion Index to track industry progress in areas such as greater levels of women and minorities in senior and technical positions, accessible technology and equal pay.

Intel has enjoyed working closely with presidential administrations over the past 52 years on policies that help the United States lead the world in technological innovation. I look forward to working together in a shared mission to tackle the many challenges facing our nation today as we prepare for an equitable and prosperous future.

Sincerely,
Bob Swan
CEO, Intel Corporation

Supercomputing Chops: Tsinghua U. Takes Top Flops in SC20 Student Cluster Battle

Props to team top flops. Virtual this year, the SC20 Student Cluster Competition was still all about teams vying for top supercomputing performance in the annual battle for HPC bragging rights. That honor went to Beijing-based Tsinghua University, whose six-member undergraduate student team clocked in 300 teraflops of processing performance. A one teraflop computer can Read article >
