Putting Biopsies Under AI Microscope: Pathology Startup Fuels Shift Away from Physical Slides

Hundreds of millions of tissue biopsies are performed worldwide each year — most of which are diagnosed as non-cancerous. But for the few days or weeks it takes a lab to provide a result, uncertainty weighs on patients.

“Patients suffer emotionally, and their cancer is progressing as the clock ticks,” said David West, CEO of digital pathology startup Proscia.

That turnaround time could soon shrink dramatically. In recent years, the biopsy process has begun to digitize, with more and more pathologists looking at digital scans of body tissue instead of physical slides under a microscope.

Proscia, a member of our Inception virtual accelerator program, is hosting these digital biopsy specimens in the cloud. This makes specimen analysis borderless, with one hospital able to consult a pathologist in a different region. It also creates the opportunity for AI to assist experts as they analyze specimens and make their diagnoses.

“If you have the opportunity to read twice as many slides in the same amount of time, it’s an obvious win for the laboratories,” said West.

The Philadelphia-based company recently closed an $8.3 million Series A funding round, which will power its AI development and software deployment. And a feasibility study published last week demonstrated that Proscia’s deep learning software achieves over 99 percent accuracy in classifying three common types of skin pathologies.

Biopsy Analysis, Behind the Scenes

Pathologists have the weighty task of examining lab samples of body tissue to determine if they’re cancerous or benign. But depending on the type and stage of disease, two pathologists looking at the same tissue may disagree on a diagnosis more than half the time, West says.

These experts are also overworked and in short supply globally. Laboratories around the world have too many slides and not enough people to read them.

China has one pathologist per 80,000 patients, said West. And while the United States has one per 25,000 patients, it’s facing a decline as many pathologists are reaching retirement age. Many other countries have so few pathologists that they are “on the precipice of a crisis,” according to West.

He projects that 80 to 90 percent of major laboratories will have switched their biopsy analysis from microscopes to scanners in the next five years. Proscia’s subscription-based software platform aims to help pathologists more efficiently analyze these digital biopsy specimens, assisted by AI.

The company uses a range of NVIDIA Tesla GPUs through Amazon Web Services to power its digital pathology software and AI development. The platform is currently being used worldwide by more than 4,000 pathologists, scientists and lab managers to manage biopsy data and workflows.

Proscia’s digital pathology and AI platform displays a heat map analysis of an H&E-stained skin tissue image.

In December, Proscia will release its first deep learning module, DermAI. This tool will be able to analyze skin biopsies and is trained to recognize roughly 70 percent of the pathologies a typical dermatology lab sees. Three other modules are currently under development.

Proscia works with both labeled and unlabeled data from clinical partners to train its algorithms. The labeled dataset, created by expert pathologists, is tagged with the overall diagnosis as well as more granular labels for specific tissue formations within the image.
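
As a rough illustration only (not Proscia’s DermAI code), the sketch below shows how such diagnosis-level labels might drive a small image classifier in TensorFlow. The patch size, three-class setup and stand-in data are assumptions made for this example, echoing the three skin pathologies from the feasibility study.

```python
# Illustrative sketch only; not Proscia's DermAI code.
# Assumes whole-slide scans have been tiled into fixed-size RGB patches,
# each tagged by an expert pathologist with one of three diagnoses.
import numpy as np
import tensorflow as tf

NUM_CLASSES = 3              # e.g. three common skin pathologies
PATCH_SHAPE = (256, 256, 3)  # assumed patch size

# Stand-in data; real patches would come from labeled slide images.
x_train = np.random.rand(32, *PATCH_SHAPE).astype("float32")
y_train = np.random.randint(0, NUM_CLASSES, size=(32,))

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=PATCH_SHAPE),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=8)
```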

While biopsies can be ordered at multiple stages of treatment, Proscia focuses on the initial diagnosis stage, when doctors are looking at tissue and making treatment decisions.

“The AI is checking those cases as a virtual second opinion behind the scenes,” said West. This could lower the chances of missing tricky-to-spot cancers like melanoma, and make diagnoses more consistent among pathologists.


An AI for an Eye: How Deep Learning May Prevent Diabetes-Induced Blindness

There are many ways diabetes can be debilitating, even lethal. But one condition caused by the disease comes on without warning.

A patient “can go to sleep one night and wake up the next morning and be legally blind, with no previous symptoms,” said Jonathan Stevenson, chief strategy and information officer for Intelligent Retinal Imaging Systems (IRIS), speaking of the condition known as diabetic retinopathy.

While most complications of diabetes, such as heart disease, kidney disease and nerve damage, have overt symptoms, diabetic retinopathy can sneak up on a patient undetected unless spotted early by regular eye exams.

IRIS has been applying GPU-powered deep learning and Azure Machine Learning Services to provide early and broad detection of diabetic retinopathy, and prevent patients from losing their eyesight.

Making Diabetic Eye Exams Widely Available

Fewer than 40 percent of the 370 million diabetics in the world get checked for diabetes-related eye conditions. To make matters worse, while the number of patients with diabetes has steadily grown in recent decades, the population of ophthalmologists has been shrinking.

IRIS is attempting to bridge this gap by making retinal exams quick, easy and widely available.

“We are trying to enable a workflow that gives the provider the data they need to make decisions, but not interrupt that sacred time spent with patients,” said Stevenson.

The idea that a patient with diabetes could go blind so suddenly and unnecessarily was too much for Dr. Sunil Gupta, who founded IRIS in 2011. What the young company subsequently discovered was that deep learning can detect early indicators of diabetic complications in the retina.

Now, IRIS is preparing to unleash an updated component to its cloud-based solution that quickly analyzes uploaded images and returns that analysis to caregivers, achieving a 97 percent success rate in matching the analysis of expert ophthalmologists.

Tapping Microsoft’s Latest Toolkits

Behind that solution is an approach combining NVIDIA GPUs and the TensorFlow machine learning library with Microsoft Azure Machine Learning Services and CNTK, which make it possible to write low-level, hardware-agnostic algorithms.

Jocelyn Desbiens, lead innovator and data scientist for IRIS, said the company was one of the first organizations to make use of the Microsoft toolkits in this way. IRIS also uses Kubernetes to orchestrate its cloud-based containers, which run on the Microsoft Azure platform.

To build its model, IRIS obtained a dataset of about 10,000 retinal images, sifting through them to reveal 8,000 high-quality images, 6,000 of which were used for training, while 2,000 were held out for validation.

The system can detect differences between the left and right eyes, as well as between diabetic and normal eyes. Ultimately, it recommends whether a patient needs to be referred to a physician or if the detected condition simply needs to be observed.
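
For illustration only (not IRIS’s production model), the sketch below shows the kind of train/validation split and refer-versus-observe classifier the article describes, written in TensorFlow. The image resolution, network layers and stand-in data are invented for the example; the 6,000/2,000 split mirrors the numbers above.

```python
# Minimal sketch, not IRIS's production model. Assumes quality-filtered
# retinal images, downscaled here, each labeled "refer" (1) or "observe" (0).
import numpy as np
import tensorflow as tf

images = np.random.rand(8000, 64, 64, 3).astype("float32")  # stand-in data
labels = np.random.randint(0, 2, size=(8000,))

# 6,000 images for training, 2,000 held out for validation, as in the article.
x_train, x_val = images[:6000], images[6000:]
y_train, y_val = labels[:6000], labels[6000:]

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(refer to physician)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=1, batch_size=64)

# A simple decision rule: refer if the predicted probability exceeds 0.5.
refer = model.predict(x_val[:1])[0, 0] > 0.5
print("refer to physician" if refer else "observe")
```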

Newer GPUs Up the Ante

All training and inferencing occur on NVIDIA GPUs running in IRIS’s Azure instance. IRIS has been at it long enough that it’s benefited from incredible advances in performance.

A few years ago, adopting NVIDIA Tesla K80 GPU accelerators slashed the time it took to train the company’s model on 10,000 images from a month to a week. Switching to the Tesla P100 shrank that to a couple of days. And now, with the Tesla V100, the process is down to half a day.

That time gain, Stevenson said, is how NVIDIA is enabling researchers and scientists to answer questions they’d never been able to tackle before — such as whether diabetic blindness can be identified ahead of time.

Even more Azure customers will soon be able to utilize these performance gains as Microsoft has announced the preview of two new N-series Virtual Machines with NVIDIA GPU capabilities.

Eventually, IRIS intends to apply its understanding of the retina to assist in the treatment of other conditions. The retina in many ways, Stevenson said, is a window into a person’s health, providing clues about everything from autoimmune disorders and cancers to cardiovascular diseases.

Without divulging specifics, he made it clear that IRIS’s work won’t stop with diabetic blindness.

“By looking at features within the retinas,” Stevenson said, “we’re able to see other conditions that aren’t necessarily related to the eye.”


Alibaba and Intel Transforming Data-Centric Computing from Hyperscale Data Centers to the Edge

What’s New: At The Computing Conference 2018 hosted by Alibaba Group* in Hangzhou, China, Intel and Alibaba revealed how their deep collaboration is driving the creation of revolutionary technologies that power the era of data-centric computing – from hyperscale data centers to the edge – and accelerate the deployment of new applications such as autonomous vehicles and the Internet of Things (IoT).

“Alibaba’s highly innovative data-centric computing infrastructure supported by Intel technology enables real-time insight for customers from the cloud to the edge. Our close collaboration with Alibaba from silicon to software to market adoption enables customers to benefit from a broad set of workload-optimized solutions.”
– Navin Shenoy, Intel executive vice president and general manager of the Data Center Group

What the Headlines Are: Intel and Alibaba Group are:

  • Launching a Joint Edge Computing Platform to accelerate edge computing development
  • Establishing the Apsara Stack Industry Alliance targeting on-premises enterprise cloud environments
  • Deploying the latest Intel technology at Alibaba to prepare for the 11/11 shopping festival
  • Bringing volumetric content to the Olympic Games Tokyo 2020 via OBS Cloud
  • Accelerating the commercialization of intelligent roads

“We are thrilled to have Intel as our long-term strategic partner, and are excited to expand our collaboration across a wide array of areas from edge computing to hybrid cloud, Internet of Things and smart mobility,” said Simon Hu, senior vice president of Alibaba Group and president of Alibaba Cloud. “By combining Intel’s leading technology services and Alibaba’s experience in driving digital transformation in China and the rest of Asia, we are confident that our clients worldwide will benefit from the technology innovation that comes from this partnership.”

How They Accelerate on the Edge: Intel and Alibaba Cloud launched a Joint Edge Computing Platform that allows enterprises to develop customizable device-to-cloud IoT solutions for different edge computing scenarios, including industrial manufacturing, smart buildings and smart communities, among others. The Joint Edge Computing Platform is an open architecture that integrates Intel software, hardware and artificial intelligence (AI) technologies with Alibaba Cloud’s latest IoT products. The platform utilizes computer vision and AI to convert data at the edge into business insights. The Joint Edge Computing Platform was recently deployed in Chongqing Refine-Yumei Die Casting Co., Ltd. (Yumei) factories, where it increased defect detection speed fivefold by moving from manual to automatic detection1.

How They Drive Hybrid Cloud Solutions: Intel and Alibaba Cloud established the Apsara Stack Industry Alliance, which focuses on building an ecosystem of hybrid cloud solutions for Alibaba Cloud’s Apsara Stack. Optimized for Intel® Xeon® Scalable processors, the Apsara Stack provides large- and medium-sized businesses with on-premises hybrid cloud services that function the same as hyperscale cloud computing and big data services provided by Alibaba public cloud. This alliance will also enable small- and medium-sized businesses (SMBs) to access technologies, infrastructure and security on par with that of large corporations, while offering them a path to greater levels of automation, self-service capabilities, cost efficiencies and governance.

How They Power eCommerce: In preparation for the upcoming 11/11 “Singles Day” global shopping festival – which generated in excess of 168.2 billion yuan ($25 billion) in spending during the 2017 celebration – Alibaba plans to trial the next-generation Intel Xeon Scalable processors and upcoming Intel® Optane® DC persistent memory with Alibaba’s Tair workload. Tair is a key-value data access and caching storage system developed by Alibaba and broadly deployed in many of Alibaba’s core applications, such as Taobao and Tmall. Intel’s compute, memory and storage solutions are optimized for Alibaba’s highly interactive and data-intensive applications. These applications require the infrastructure to keep large amounts of hot, frequently accessed data in the memory cache to achieve the desired throughput (queries per second) and deliver smooth, responsive user experiences, especially during peak hours of the 11/11 shopping festival.
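
The workload shape described here is a classic cache-aside pattern: hot keys are served from memory, and misses fall back to slower storage. The sketch below is purely illustrative Python (the class name, capacity and loader function are invented for this example); Tair itself is Alibaba’s distributed key-value system, not this code.

```python
# Illustrative cache-aside sketch only; Tair is Alibaba's distributed
# key-value store, not this Python class. The point is the workload shape:
# hot items served from memory, misses hitting slower storage.
from collections import OrderedDict

class HotDataCache:
    """Tiny in-memory LRU cache standing in for a caching tier."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._items = OrderedDict()

    def get(self, key, load_from_store):
        if key in self._items:
            self._items.move_to_end(key)      # cache hit: memory speed
            return self._items[key]
        value = load_from_store(key)          # cache miss: slow path
        self._items[key] = value
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)   # evict least recently used
        return value

def load_from_store(key):
    return f"record for {key}"                # pretend database read

cache = HotDataCache(capacity=100_000)
print(cache.get("item:42", load_from_store))  # miss, then cached for next time
```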

How They Accelerate the Olympics’ Digital Transformation: Also announced was a partnership aimed at advancing the digital transformation of the Olympics and delivering volumetric content over the OBS Cloud for the first time at the Olympic Games Tokyo 2020. As worldwide Olympic partners, Intel and Alibaba Cloud will collaborate with OBS to explore a more efficient and reliable delivery pipeline of immersive media to rights-holding broadcasters (RHBs) worldwide, improving the fan experience and bringing viewers closer to the action via Intel’s volumetric and virtual reality technologies. This showcases the depth of Intel’s end-to-end capabilities, including the most advanced Intel Xeon Scalable processors powering OBS Cloud, compute power to process high volumes of data, and technology to create and deliver immersive media.

How They Accelerate the Commercialization of Intelligent Roads: Intel officially became one of the first strategic partners in Alibaba AliOS’ intelligent transportation initiative, which aims to support the construction of an intelligent road traffic network and build a digital, intelligent transportation system that realizes vehicle-road synergy. Intel and Alibaba will jointly explore V2X usage models with respect to 5G communication and edge computing, based on the Intel Network Edge Virtualization Software Development Kit (NEV SDK).

More Context: Intel and Alibaba Cloud Deliver Joint Computing Platform for AI Inference at the Edge

1Automated product quality data collected by YuMei using a JWIPC® model IX7 ruggedized, fanless edge compute node/industrial PC running an Intel® Core™ i7 CPU with integrated on-die GPU and the OpenVINO SDK; 16GB of system memory; connected to a 5MP POE Basler* camera, model acA 1920-40gc. Together these components, along with the Intel-developed computer vision and deep learning algorithms, provide YuMei factory workers with information on product defects in near real time (within 100 milliseconds). Sample size >100,000 production units collected over 6 months in 2018.


NVIDIA GPU Cloud Adds Support for Microsoft Azure

Thousands more developers, data scientists and researchers can now jumpstart their GPU computing projects, following today’s announcement that Microsoft Azure is a supported platform with NVIDIA GPU Cloud (NGC).

Ready-to-run containers from NGC on Azure give developers access to on-demand GPU computing that scales to their needs and eliminates the complexity of software integration and testing.

Getting AI and HPC Projects Up and Running Faster

Building and testing reliable software stacks to run popular deep learning software — such as TensorFlow, Microsoft Cognitive Toolkit, PyTorch and NVIDIA TensorRT — is challenging and time consuming. There are dependencies at the operating system level and with drivers, libraries and runtimes. And many packages recommend differing versions of the supporting components.

To make matters worse, the frameworks and applications are updated frequently, meaning this work has to be redone every time a new version rolls out. Ideally, you’d test the new version to ensure it provides the same or better performance as before. And all of this is before you can even get started with a project.

For HPC, the difficulty is how to deploy the latest software to clusters of systems. In addition to finding and installing the correct dependencies, testing and so forth, you have to do this in a multi-tenant environment and across many systems.

NGC removes this complexity by providing pre-configured containers with GPU-accelerated software. Its deep learning containers benefit from NVIDIA’s ongoing R&D investment to make sure the containers take advantage of the latest GPU features. And we test, tune and optimize the complete software stack in the deep learning containers with monthly updates to ensure the best possible performance.

NVIDIA also works closely with the community and framework developers, and contributes back to open source projects. We made more than 800 contributions in 2017 alone. And we work with the developers of the other containers available on NGC to optimize their applications, and we test them for performance and compatibility.

NGC with Microsoft Azure

You can access 35 GPU-accelerated containers for deep learning software, HPC applications, HPC visualization tools and a variety of partner applications from the NGC container registry and run them on Microsoft Azure instance types with NVIDIA GPUs, including the NCv2, NCv3 and ND virtual machines.

The same NGC containers work across Azure instance types, even with different types or quantities of GPUs.

Using NGC containers with Azure is simple.

Just go to the Microsoft Azure Marketplace and find the NVIDIA GPU Cloud Image for Deep Learning and HPC (this is a pre-configured Azure virtual machine image with everything needed to run NGC containers). Launch a compatible NVIDIA GPU instance on Azure. Then, pull the containers you want from the NGC registry into your running instance. (You’ll need to sign up for a free NGC account first.) Detailed information is in the Using NGC with Microsoft Azure documentation.
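
For readers who script their environments, the same workflow can be driven programmatically. The sketch below is a minimal example using the Docker SDK for Python from inside an Azure VM created from the NVIDIA GPU Cloud Image; the NGC API key placeholder and the container tag are illustrative, and it assumes the NVIDIA container runtime is present on the VM, which the pre-configured NGC image provides.

```python
# Minimal sketch using the Docker SDK for Python, run from inside an Azure VM
# created from the NVIDIA GPU Cloud Image (pre-configured to run NGC
# containers). The NGC API key and the container tag below are placeholders.
import docker

NGC_API_KEY = "<your NGC API key>"

client = docker.from_env()
client.login(username="$oauthtoken", password=NGC_API_KEY, registry="nvcr.io")

# Pull a framework container from the NGC registry (tag is illustrative).
client.images.pull("nvcr.io/nvidia/tensorflow", tag="18.09-py3")

# Start the container with GPU access and confirm the GPUs are visible.
output = client.containers.run(
    "nvcr.io/nvidia/tensorflow:18.09-py3",
    command="nvidia-smi",
    runtime="nvidia",   # assumes the NVIDIA container runtime on the VM image
    remove=True,
)
print(output.decode())
```

The supported, step-by-step path remains the Using NGC with Microsoft Azure documentation referenced above.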

In addition to using NVIDIA published images on Azure Marketplace to run these NGC containers, Azure Batch AI can also be used to download and run these containers from NGC on Azure NCv2, NCv3 and ND virtual machines. Follow these simple GitHub instructions to start with Batch AI with NGC containers.

With NGC support for Azure, we are making it even easier for everyone to get started with AI or HPC in the cloud. See how easy it is for yourself.

Sign up now for our upcoming webinar on October 2 at 9am PT to learn more, and get started with NGC today.


Brain Gain: ‘Virtual Neurons’ Power Drug Discovery for Parkinson’s, Alzheimer’s

More than 50 million people are afflicted with Alzheimer’s and Parkinson’s diseases worldwide — a figure that’s growing as the average age of the global population rises. Yet effective treatments for nervous system disorders remain elusive due to the complexity of the human brain.

Drug development requires scientists to identify a molecule that interacts with a target protein and alters the progression of a disease. Though many researchers are hard at work to find cures for neurodegenerative disorders, it’s difficult to determine which biomarkers indicate how quickly the disease is progressing, or whether the drug is working.

NeuroInitiative, a startup with operations in Florida and Massachusetts, is using GPU computing to create simulations of neural pathways — in essence, a “virtual neuron” — to help researchers test hypotheses about how a potential drug molecule will interact with the body.

The simulations allow scientists to take a four-dimensional tour inside a virtual neuron and understand its complexity. They can also modify factors within the neuron, providing visual insight that helps researchers figure out which drugs could work best for particular patients.

“The biology of Parkinson’s and Alzheimer’s diseases is incredibly dynamic and complex, which provides a great application for computer simulation,” said Andy Lee, co-founder and chief technology officer at NeuroInitiative. Speedups from NVIDIA GPUs allow “startups like us to tackle these critical medical needs.”

NeuroInitiative runs its simulations on NVIDIA Tesla V100 GPUs through the Microsoft Azure cloud platform. The software uses the NVIDIA CUDA toolkit, and cranks up to more than 100,000 CUDA cores during peak periods.

Harnessing NVIDIA GPUs in the cloud gives the researchers flexible, pay-as-you-go access to virtual machines. They can ramp up their usage during the most demanding simulations, then wind it down and spend weeks analyzing the results before gearing up for the next one.
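
To make the GPU angle concrete, here is a toy sketch (not NeuroInitiative’s code) of the kind of data-parallel update such a simulation might run, written in Python with Numba’s CUDA support. The decay model, array sizes and rate constants are invented for illustration, and running it requires an NVIDIA GPU with the numba package installed.

```python
# Toy sketch of a data-parallel simulation step, not NeuroInitiative's code.
# The decay model and constants are invented for illustration. Requires an
# NVIDIA GPU and the numba package.
import numpy as np
from numba import cuda

@cuda.jit
def decay_step(concentrations, rate, dt):
    i = cuda.grid(1)                  # one thread per simulated species
    if i < concentrations.size:
        concentrations[i] -= rate * concentrations[i] * dt

n = 1_000_000
conc = cuda.to_device(np.random.rand(n).astype(np.float32))

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block

for _ in range(1000):                 # advance the simulated pathway in time
    decay_step[blocks, threads_per_block](conc, np.float32(0.01), np.float32(0.001))

print(conc.copy_to_host()[:5])
```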

NVIDIA GPUs allow them to perform compute-intensive simulations that could cut the 12- to 20-year drug development period for a new treatment in half, Lee estimates.

“We’re heading toward testing hypotheses in minutes, which speeds up the iteration of ideas and, ultimately, cures,” said Lee.

NeuroInitiative has already identified more than 25 promising drug candidates for Parkinson’s treatment that will be tested in the lab. The top compounds could be ready for human clinical trials in just three years, he estimates.

Lee said virtualized GPUs have allowed his team “to scale in minutes to levels which just a few years ago would have required access to a handful of supercomputers around the world.”

To learn more, watch Lee’s GTC talk, Identifying New Therapeutics for Parkinson’s Disease.


NVIDIA Virtual GPU Brings Autodesk Around the World in Minutes

Mary Poppins might’ve been able to fit a coat rack, wall mirror and potted plant in her handbag. You’d have to be just as magical to try packing up a workstation and transporting it across the country.

Until now, developers at Autodesk have had to grapple with a similar problem.

Autodesk’s software is used by more than 200 million customers — design professionals, engineers, architects, digital artists and students — to design virtually everything. But to support rigorous testing requirements across a variety of operating systems and product versions, many developers needed up to three workstations under their desks.

Now, with NVIDIA’s virtual GPUs, Autodesk’s developers and sales technicians can access all their applications and products anywhere in the world, thousands of miles from where code might be housed, in just minutes.

Optimizing Resources

A look at Autodesk’s resource usage statistics showed that developers used only 50 percent of their graphics resources during the development cycle. Workstations for high-end rendering and graphics testing often sat idle for weeks at a time.

Before adopting NVIDIA GPUs, sales technicians had to carry two expensive, heavy laptops, each with an integrated graphics card, just to be able to run Autodesk software. Even then, the graphics cards didn’t support the best experience on a laptop, so the second laptop served as a backup in case the first didn’t work.

Searching for a fix, the team turned to NVIDIA to better optimize its resources by setting up a virtual desktop infrastructure. Developers’ compute, graphics and storage requirements differed depending on their workflows. With NVIDIA vGPU management and monitoring capabilities, Autodesk could assign computing resources based on those differing requirements.

Due to scalability problems and long wait times for hardware, Autodesk also hoped to test a cloud-deployed virtualized solution — the Mary Poppins bag for applications. This led to CloudPC, which uses both Quadro vDWS and NVIDIA GRID to create a virtualized environment that allows developers to access applications the moment they get to their desktops.

Code that used to take up to 12 hours to access now only takes minutes to download. Support engineers, who are often on the road, can connect and resolve issues from anywhere.

“The biggest value NVIDIA brings to Autodesk is the ability to resolve and share access to the right compute resources,” said Rachel O’Gorman, leader of the Desktop Virtualization Services team for Autodesk’s CloudPC.

Autodesk developers report having the same capabilities on virtual desktops as they did with workstations, but now have increased accessibility to software.

“The virtualized desktops not only replace and augment the physical workstations,” said O’Gorman. “The VDI environment lets us optimize resource consumption, reduce maintenance and management work, and increase productivity for our developers and technical sales teams.”


How AI and Deep Learning Will Enable Cancer Diagnosis Via Ultrasound

Viksit Kumar didn’t know his mother had ovarian cancer until it had reached its third stage, too late for chemotherapy to be effective.

She died in a hospital in Mumbai, India, in 2006, but might have lived years longer if her cancer had been detected earlier.

This knowledge ate at the mechanical engineering student, spurring him to choose a different path.

“That was one of the driving factors for me to move into the medical field,” said Kumar, now a senior research fellow at the Mayo Clinic, in Rochester, Minn. He hopes that the work his mom’s death inspired will help others to avoid her fate.

For the past few years, Kumar has been leading an effort to use GPU-powered deep learning to more accurately diagnose cancers sooner using ultrasound images.

The work has focused on breast cancer (which is much more prevalent than ovarian cancer and attracts more funding), with the primary aim of enabling earlier diagnoses in developing countries, where mammograms are rare.

Into the Deep End of Deep Learning

Kumar came to this work soon after joining the Mayo Clinic. At the time, he was working with ultrasound images for diagnosing pre-term birth complications. When he noticed that ultrasounds were picking up different objects, he figured that they might be useful for classifying breast cancer images.

As he looked closer at the issue, he deduced that deep learning would be a good match. However, at the time, Kumar knew very little about deep learning. So he dove in, spending more than six months teaching himself everything he could about building and working with deep learning models.

“There was a drive behind that learning: This was a tool that could really help,” he said.

And help is needed. Breast cancer is one of the most common cancers, and one of the easiest to detect. However, in developing countries, mammogram machines are hard to find outside of large cities, primarily due to cost. As a result, health care providers often take a conservative approach and perform unnecessary biopsies.

Ultrasound offers a much more affordable option for far-flung facilities, which could lead to more women being referred for mammograms in large cities.

Even in developed countries, where most women have regular mammograms after the age of 40, Kumar said ultrasound could prove critical for diagnosing women who are pregnant or are planning to get pregnant, and who can’t be exposed to a mammogram’s X-rays.

The red outline shows the manually segmented boundary of a carcinoma, while the deep learning-predicted boundaries are shown in blue, green and cyan. © 2018 Kumar et al. under Creative Commons Attribution License.

Getting Better All the Time

Kumar is amazed at how far the deep learning tools have already progressed. It used to take two or three days for him to configure a system for deep learning, and now takes as little as a couple of hours.

Kumar’s team does its local processing using the TensorFlow deep learning framework container from NVIDIA GPU Cloud (NGC) on NVIDIA TITAN and GeForce GPUs. For the heaviest lifting, the work shifts to NVIDIA Tesla V100 GPUs on Amazon Web Services, using the same container from NGC.

The NGC containers are optimized to deliver maximum performance on NVIDIA Volta and Pascal architecture GPUs on-premises and in the cloud, and include everything needed to run GPU-accelerated software. And using the same container for both environments allows them to run jobs everywhere they have compute resources.

“Once we have the architecture developed and we want to iterate on the process, then we go to AWS,” said Kumar, estimating that doing so is at least eight times faster than processing larger jobs locally, thanks to the greater number of more advanced GPUs in play.
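
As a rough illustration of the segmentation task behind this work (not the published network), the sketch below wires up a tiny encoder-decoder in TensorFlow that maps an ultrasound frame to a per-pixel mask. The layer sizes, image resolution and stand-in data are all invented for the example.

```python
# Schematic sketch only, inspired by the segmentation task in the paper; it is
# not the published network. Resolution, layers and data are invented.
import numpy as np
import tensorflow as tf

inputs = tf.keras.Input(shape=(128, 128, 1))  # grayscale ultrasound frame
x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
x = tf.keras.layers.MaxPooling2D()(x)         # encoder: downsample
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
x = tf.keras.layers.UpSampling2D()(x)         # decoder: restore resolution
outputs = tf.keras.layers.Conv2D(1, 1, activation="sigmoid")(x)  # pixel mask

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")

# Stand-in data in place of annotated ultrasound frames and expert masks.
frames = np.random.rand(16, 128, 128, 1).astype("float32")
masks = np.random.randint(0, 2, size=(16, 128, 128, 1)).astype("float32")
model.fit(frames, masks, epochs=1, batch_size=4)
```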

The team currently does both training and inference on the same GPUs. Kumar said he wants to do inference on an ultrasound machine in live mode.

More Progress Coming

Kumar hopes to start applying the technique on live patient trials within the next year.

Eventually, he hopes his team’s work enables ultrasound images to be used in early detection of other cancers, such as thyroid and, naturally, ovarian cancer.

As groundbreaking as his work is, Kumar urges patience when it comes to applying AI and deep learning in the medical field. “It needs to be a mature technology before it can be accepted as a clinical standard by radiologists and sonographers,” he said.

Work like Kumar’s is certainly helping to push that along.

Read Kumar’s paper, “Automated and real-time segmentation of suspicious breast masses using convolutional neural network.” 


Intel Reimagines Data Center Storage with new 3D NAND SSDs

In 2017, Intel brought to market a wide array of products designed to tackle the world’s growing stockpile of data. Intel CEO Brian Krzanich called data an “unseen driving force behind the next-generation technology revolution,” and Rob Crooke, senior vice president and general manager of the Non-Volatile Memory (NVM) Solutions Group at Intel, recently outlined his vision for how storage and memory technologies can address all that data. In the past year, Intel introduced new Intel® Optane™ technology-based products and will continue to deliver exciting, blazing-fast solutions based on this breakthrough technology, with announcements later this year. Intel also brought industry-leading areal density to storage for consumers and enterprises, driving both capacity and form factor innovation with Intel® 3D NAND storage products.

Intel is reimagining how data is stored for the data center. By driving the creation and adoption of compelling new form factors, like the EDSFF 1U long and 1U short, and delivering advanced materials, including our densest NAND to date with 64-layer TLC Intel 3D NAND, Intel is enabling capacities of 8TB and beyond in an array of form factors that meet the specific performance needs of data centers.

The Intel SSD DC P4500 Series comes in the ruler form factor.


Introducing Intel SSD DC P4510 and P4511 Series

Today, Intel announced the Intel SSD DC P4510 Series for data center applications. The P4510 Series uses 64-layer TLC Intel 3D NAND to enable end users to do more per server, support broader workloads and deliver space-efficient capacity. The P4510 Series enables up to four times more terabytes per server and delivers up to 10 times better random read latency at 99.99 percent quality of service than previous generations. The drive can also deliver up to double the input-output operations per second (IOPS) per terabyte. The 1 and 2TB capacities have been shipping to cloud service providers (CSPs) in high volume since August 2017, and the 4 and 8TB capacities are now available to CSPs and channel customers. All capacities are in the 2.5-inch 15 mm U.2 form factor and utilize a PCIe* NVMe* 3.0 x4 connection.

To accelerate performance and simplify management of the P4510 Series PCIe SSDs and other PCIe SSDs, Intel is also delivering two new technologies that work together to replace legacy storage hardware. Intel® Xeon® Scalable processors include Intel Volume Management Device (VMD), enabling robust management such as surprise insertion/removal and LED management of PCIe SSDs directly connected to the CPU. Building on this functionality, Intel® Virtual RAID on CPU (VROC) uses Intel VMD to provide RAID to PCIe SSDs. By replacing RAID cards with Intel VROC, customers are able to enjoy up to twice the IOPS performance and up to a 70 percent cost savings with PCIe SSDs directly attached to the CPU, improving customers’ return on their investments in SSD-based storage.

Intel is also bringing innovation to the data center with new low-power SSDs and the Enterprise and Datacenter SSD Form Factor (EDSFF). The Intel SSD DC P4511 Series offers a low-power option for workloads with lower performance requirements, enabling data centers to save power. The P4511 Series will be available later in the first half of 2018 in M.2 110 mm form factor. Additionally, Intel continues to drive form factor innovation in the data center, with the Intel SSD DC P4510 Series available in the future in EDSFF 1U long and 1U short with up to 1 petabyte (PB) of storage in a 1U server rack.

EDSFF Momentum

At Flash Memory Summit* 2017, Intel introduced the ruler form factor for Intel SSDs, purpose-built from the ground up for data center efficiency and free from the confines of legacy form factors. The new form factor delivers unprecedented storage density, system design flexibility with long and short versions, optimum thermal efficiency, scalable performance (available x4, x8 and x16 connectors) and easy maintenance, with front-load, hot-swap capabilities. EDSFF is also future-ready and designed for PCIe 3.0, available today, and PCIe 4.0 and 5.0, when they are ready.

Recently, the Enterprise and Datacenter SSD Form Factor specification was ratified by the EDSFF Working Group*, which includes Intel®, Samsung*, Microsoft*, Facebook* and others. Intel has been shipping a pre-spec version of the Intel SSD DC P4500 Series to select customers, including IBM* and Tencent*, for more than a year, and the Intel SSD DC P4510 Series will be available in EDSFF 1U long and 1U short starting in the second half of 2018. The industry has shown an overwhelmingly positive response to the Intel-inspired EDSFF specifications, with more than 10 key OEM, ODM and ecosystem members indicating intentions to design EDSFF SSDs into their systems. Additional SSD manufacturers have also expressed intent to deliver EDSFF SSDs in the future.

IBM has deployed the P4500 Series in this new form factor in the IBM cloud. Tencent, one of the world’s leading providers of value-added internet services, has incorporated the Intel® SSD DC P4500 Series in the “ruler” form factor into its newly announced T-Flex platform, which supports 32 “ruler” SSDs as the standard high-performance storage resource pool.

“‘Ruler’ optimizes heat dissipation, significantly enhances SSD serviceability and delivers amazing storage capacity that will scale to 1PB in 1U in the future, thereby reducing overall storage construction and operating costs,” said Wu Jianjian, product director of Blackstone Product Center, Tencent Cloud. “We are very excited about this modern design and encourage its adoption as an industry standard specification.”

For more information on the Intel SSD DC P4510 Series, EDSFF and Intel 3D NAND, visit Intel’s solid state drive site.


Oracle Supercharges Its Cloud Offerings with NVIDIA Tesla GPUs

Oracle announced today that it’s bringing the power of our latest Tesla GPU accelerators to its public cloud.

Speaking this morning at Oracle OpenWorld, Don Johnson, the company’s senior vice president of product development, said that Oracle Cloud customers can access NVIDIA Tesla P100 GPU accelerators, starting today. Additionally, he said Oracle will expand its cloud offerings to include Tesla V100 GPUs, the most powerful data center GPUs, based on our latest Volta architecture.

The move underscores growing demand for public-cloud access to our GPU computing platform from an increasingly wide set of enterprise users. Oracle’s massive customer base means that a broad range of businesses across many industries will have access to accelerated computing to harness the power of AI, accelerated analytics and high performance computing.

“We’re working closely with NVIDIA to provide the next generation of accelerated computing to enterprises worldwide using our X7 compute architecture,” said Kash Iftikhar, vice president of Oracle Cloud Infrastructure. “This provides incredible flexibility to data scientists, engineers and researchers, allowing them to rent cutting-edge AI and HPC supercomputers by the hour to solve challenges of exceptional complexity.”

Oracle’s NVIDIA P100 offering provides users with two P100 GPUs with NVIDIA NVLink high-speed interconnect technology and can deliver 21 teraflops of single-precision performance per instance — the kind of performance required for deep learning training and inferencing, accelerated analytics and high performance computing.

Each P100 cloud instance can deliver the performance of up to 25 non-accelerated servers, dramatically saving money for HPC and AI workloads.

“Accelerated computing is powering a revolution in AI, HPC and enterprise computing,” said Ian Buck, vice president of Accelerated Computing at NVIDIA. “Now with NVIDIA GPUs, the Oracle Cloud brings that computation power to its customers worldwide.”

Reducing Model Training from Weeks to Days

One of the first to access the new NVIDIA GPU offering from Oracle Cloud is Fluent.ai Inc. — a member of NVIDIA’s Inception program for AI startups. Fluent.ai offers the world’s first acoustic-only speech recognition technology to OEMs wanting to make their consumer electronics voice-enabled.

Fluent.ai’s proprietary neural network algorithms learn to understand meaning directly from a user’s speech. The result is a highly flexible, accessible and accurate voice-interface technology that performs robustly even in offline and noisy environments — in any language or with any accent.

“Running on new NVIDIA GPU instances has significantly optimized the training of our deep learning models compared to the previous-generation hardware,” said Vikrant Tomar, chief technology officer at Fluent.ai. “This allows us to train more sophisticated speech recognition models while reducing the overall job time from weeks to days.”

And, cloud customers will be able to glean even more performance and cost-savings from our new NVIDIA V100 GPUs. With more than 120 teraflops of deep learning performance per GPU, a single Volta GPU offers the equivalent performance of 100 CPUs.

With the benefits of our GPU computing platform available to an even wider audience, expect to be amazed by a new wave of solutions for problems not yet solved.


Award-Winning VFX Studio MPC Turns NVIDIA GRID into a Star

You may have seen the work of leading visual effects studio MPC in movies such as The Jungle Book, Wonder Woman, Alien: Covenant and Pirates of the Caribbean. Now the latest name MPC is turning into a star is NVIDIA GRID.

MPC (Moving Picture Company), a subsidiary of Technicolor and a global leader in visual effects for over 25 years, has deployed NVIDIA GRID to ensure its teams can stay connected, and creative, wherever they are in the world.

“As a VFX supervisor on large feature films, I need immediate access to my team’s work at all times,” said Greg Butler, an award-winning VFX supervisor with MPC. “Whether I’m on location for the shoot, in Los Angeles for client meetings or at home strategizing, NVIDIA technology helps me to keep my projects moving forward.”

The Show Must Go On

MPC has been recognized with numerous awards, including the Best Visual Effects Oscar for its work on Life of Pi and The Jungle Book.

To deliver on these graphically demanding projects, MPC’s artists need fast, reliable and flexible access to graphics-intensive software on mobile devices. It’s no small undertaking, but NVIDIA GRID is there to support them.

Thanks to deploying GRID Virtual Workstation software on NVIDIA Tesla M60-based servers in five locations around the world, the MPC team is able to provide remote access to its production platform, reviewTool. Mobile users can easily access Linux-based applications on the go, completely securely, with no reduction in performance – even on location in the desert.

MPC won the Best Visual Effects Oscar for The Jungle Book. Photo courtesy of MPC Film.

No Creative Limits

NVIDIA GRID enables the MPC production teams to work in a way that wasn’t possible before. Now, they can more easily work as a global team and enjoy the same performance levels wherever they are.

“MPC’s reviewTool makes it possible to see everything happening on a show across the globe. Running reviewTool through NVIDIA GRID means the content is always available and secure, freeing me to work remotely,” Butler said.
