Redesigning the Solid State Drive – and the Data Center along with It

Wayne Allen leads data center storage pathfinding at Intel Corporation. Allen and his team brought the ruler, or EDSFF, solid-state drive to life, delivering massive improvements in density, cooling, and space and power efficiency. (Credit: Walden Kirsch/Intel Corporation)

How he’d describe his work to a 10-year-old: “I make it so you can save endless selfies.”

Flash to the future: While your smartphone and laptop have gotten substantially sleeker and slimmer with more compact electronics inside, data center equipment evolves more slowly. Occasionally, however, big jumps happen. One began a couple of years ago, when Wayne and his team — which he says “dreams up what’s needed in the data center of the future” — set out to “go figure out the best way to deploy flash in a server.” Flash is a kind of chip that stores data, replacing the last-century spinning hard disk drive. Flash is both faster and more reliable, and most important, it’s smaller. But the size and shape of most drives are based on those old spinning disks, which have been “around for 30 years,” Wayne says.

More: Read about all Intel Innovators | World’s Densest, Totally Silent Solid State Drive (Intel Images)

Hello, ruler! Er, EDSFF: The team aimed to create something more compact, more efficient and easier to service and replace. They narrowed dozens of ideas down to three, and showed them to select customers for a vote. The winner? The aptly nicknamed “ruler.” The long version (now an industry specification called EDSFF, which includes two other form factors) is about 12 inches long, 1½ inches wide, and a measly one-third of an inch thick.

A seriously cool design: While it wasn’t easy, simply changing the size and shape of the drive had a surprising set of positive side effects. “We didn’t just improve density — we improved thermals,” Wayne notes. Thermals are a major factor in the operations and expense of a data center (cooling alone can be the biggest cost). And though these drives will spend their lives hidden from view, the design has earned awards from the Industrial Designers Society of America and the Core77 Design Awards.

Reshaping the drive, and the data center: Ruler-based servers can be designed to let fresh air pass directly to the processors in the back of the machine, improving the cooling efficiency and opening the door to even higher-performing processors. “A new form factor itself isn’t all that exciting, typically,” Wayne says. “But because [the ruler] impacts everything about server design and helps increase performance and reach new levels of density, it’s a big deal. We’re redesigning the data center with this — that’s the most fun part of it for me.”

Huge storage, tiny slot: The team wanted to achieve a petabyte of storage space — that’d be 4,000 256 GB smartphones — in a “1U” server slot that’s 1.75 inches high and 19 inches wide. Thanks to the ruler and new 3D NAND technology, 32 of Intel’s forthcoming 32-terabyte rulers will fit in that 1U space, and there’s your very slim, extremely efficient petabyte. Compared to a petabyte of hard drives, Wayne says, “we’ve delivered a 10x power reduction and a 20x space improvement. It’s pretty remarkable.”
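The arithmetic behind those claims is easy to check. A quick sketch, assuming the decimal units (1 PB = 1,000 TB = 1,000,000 GB) that drive vendors typically use:

```python
# Sanity-checking the petabyte-in-1U claim with decimal (vendor) units.
GB_PER_TB = 1_000
TB_PER_PB = 1_000

ruler_capacity_tb = 32   # one 32-terabyte "ruler" SSD
rulers_per_1u = 32       # rulers that fit side by side in a 1U slot

total_tb = ruler_capacity_tb * rulers_per_1u
print(total_tb)          # 1024 TB -- just over one petabyte

# The smartphone comparison: how many 256 GB phones hold 1 PB?
phones = (TB_PER_PB * GB_PER_TB) // 256
print(phones)            # 3906, rounded to "4,000" in the article
```

So 32 rulers slightly overshoot a decimal petabyte, which is why the slot comfortably holds one.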


The post Redesigning the Solid State Drive – and the Data Center along with It appeared first on Intel Newsroom.

World’s Densest, Totally Silent Solid State Drive

Fast disappearing from data centers are power-hungry spinning hard disk drives that hum, buzz, run warm (or even hot), require fans and expensive cooling systems, and can crash unexpectedly.

Intel’s newest solid state drive, the Intel® SSD DC P4500, is about the size of an old-fashioned 12-inch ruler, and can store 32 terabytes. That’s equivalent to triple the entire printed collection of the U.S. Library of Congress.

The new SSD is Intel’s densest drive ever, and is built on Intel® 3D NAND technology, which stacks memory cells atop each other in multiple extremely thin layers, instead of just one. Memory cells in the P4500 are stacked 64 layers deep.

Older disk drives produce a great deal of heat. In most data centers today, the single biggest cost is air conditioning to keep them cool. This is one of the reasons some of the world’s biggest data companies — IBM, Microsoft, Tencent — are using the new “ruler” SSD to support their cloud and data center operations.

In data centers, the no-moving-parts ruler-shaped SSDs can be lined up 32 side-by-side, to hold up to a petabyte in a single server slot. Compared with a traditional SSD, the “ruler” requires half the airflow to keep cool. And compared with hard disk storage, the new 3D NAND SSD sips one-tenth the power and requires just one-twentieth the space.

More: Redesigning the Solid State Drive – and the Data Center along with It (Intel Innovators) | All Intel Images

The ruler-shaped Intel SSD DC P4500 can hold up to 32 terabytes. It draws just one-tenth the power of a traditional spinning hard drive. (Credit: Walden Kirsch/Intel Corporation)

The post World’s Densest, Totally Silent Solid State Drive appeared first on Intel Newsroom.

Intel Poised to Shape the Future of Memory and Storage with Optane + QLC

Intel is reimagining the memory-and-storage market and igniting a new era of computing with the combination of Intel Optane and Intel QLC 3D NAND technologies. (Credit: Peter Belanger Photography)

What’s New: Intel is reimagining the memory-and-storage market and igniting a new era of computing with a combination of two unique memory technologies in memory and storage solutions no one in the industry currently offers: Intel® Optane™ and Intel® QLC 3D NAND.

“Intel Optane and 3D NAND technologies ensure computer and storage architects and developers can access vital data where and when they need it. The two technologies bridge the wide gap that exists between data that’s being worked on and data that’s waiting to be accessed.”
– Rob Crooke, senior vice president and general manager of the Non-Volatile Memory Solutions Group at Intel

Why It’s Important: The combination of Intel Optane and Intel QLC 3D NAND technologies allows customers to accelerate the speed of their most frequently accessed data, while utilizing the value flash technology delivers over HDDs for massive capacity storage. Intel’s aim is to break bottlenecks and deliver better solutions to unleash the value of data.

How It’s Used: Optane has already had an impact throughout the world. Here are a few examples:

  • Intel Optane SSDs integrated into IBM Cloud’s bare metal servers have delivered up to a 7.5x performance improvement, especially for write-intensive applications.
  • Using Intel Optane Technology, the University of Pisa has reduced MRI scan times from 42 minutes to 4 minutes.
  • Intel Optane has helped IFLYTEK, a Chinese information technology company, deliver faster voice and facial recognition services.

Intel’s QLC 3D NAND products announced today at Flash Memory Summit deliver new memory and storage solutions: Tencent, employing the new QLC PCIe Intel® SSD D5-P4320 in an initial production environment, increased the number of customers served per system by 10 times.

How It’s Different: The Intel® SSD 660p (for client) and the Intel® SSD D5-P4320 (data center) are the world’s first QLC PCIe 3D NAND SSDs for their segments, offering the highest PCIe areal density on the market at an affordable price. With up to 2TB of storage in one drive for the 660p and 8TB for the P4320 (larger capacities coming later), these QLC SSDs have the capacity to replace hard drives. The 660p’s M.2 80mm form factor provides 2 times more capacity than TLC-based storage. In the data center, the P4320 enables users to store more, save more and do more than legacy solutions.
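The “Q” in QLC is where part of the density gain comes from: a quad-level cell stores 4 bits per cell, versus 3 for TLC, so the same cell array yields 4/3 the raw capacity (the rest of the gain comes from 3D layer stacking). A minimal illustration, with a hypothetical cell count chosen only to make the numbers round:

```python
# Raw NAND capacity scales linearly with bits stored per cell.
BITS_PER_CELL = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

def raw_capacity_gib(cells: int, cell_type: str) -> float:
    """Raw capacity in GiB for a given cell count (illustrative only)."""
    bits = cells * BITS_PER_CELL[cell_type]
    return bits / 8 / 2**30

cells = 2**33  # hypothetical ~8.6-billion-cell array, for round numbers
print(raw_capacity_gib(cells, "TLC"))  # 3.0 GiB
print(raw_capacity_gib(cells, "QLC"))  # 4.0 GiB
```

Note the per-cell gain alone is 1.33x, not 2x; the 660p’s form-factor capacity advantage also reflects 64-layer stacking and die layout.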

While Intel’s client solutions — including Intel Optane Memory and Intel Optane SSDs — have changed the PC space, Intel has set its sights on the data center. The challenges businesses face in managing vast amounts of data require the high performance of Intel Optane paired with the capacity-storage power of Intel QLC 3D NAND — two technologies that will transform the memory and storage tier.

» View Rob Crooke’s keynote address at Flash Memory Summit.

The post Intel Poised to Shape the Future of Memory and Storage with Optane + QLC appeared first on Intel Newsroom.

Innovating for the ‘Data-Centric’ Era

By Navin Shenoy

Today at Intel’s Data-Centric Innovation Summit, I shared our strategy for the future of data-centric computing, as well as an expansive view of Intel’s total addressable market (TAM), and new details about our product roadmap. Central to our strategy is a keen understanding of both the biggest challenges – and opportunities – our customers are facing today.

As part of my role leading Intel’s data-centric businesses, I meet with customers and partners from all over the globe. While they come from many different industries and face unique business challenges, they have one thing in common: the need to get more value out of enormous amounts of data.

I find it astounding that 90 percent of the world’s data was generated in the past two years. And analysts forecast that by 2025 data will grow exponentially, increasing 10 times to reach 163 zettabytes. But we have a long way to go in harnessing the power of this data. A safe guess is that only about 1 percent of it is utilized, processed and acted upon. Imagine what could happen if we were able to effectively leverage more of this data at scale.
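The forecast doesn’t state its baseline year, but the implied numbers are easy to back out. A rough sketch, assuming the 10x window spans roughly 2016 to 2025:

```python
# Implied figures behind "10x growth, reaching 163 ZB by 2025".
target_zb = 163
growth = 10
years = 9  # assumed forecast window (~2016-2025)

baseline_zb = target_zb / growth     # implied starting point: ~16.3 ZB
cagr = growth ** (1 / years) - 1     # implied annual growth rate

print(round(baseline_zb, 1))         # 16.3
print(round(cagr * 100))             # ~29 percent per year
```

A sustained ~29 percent annual growth rate is what makes the “exponential” label apt here.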

The intersection of data and transportation is a perfect example of this in action. The life-saving potential of autonomous driving is profound – many lives globally could be saved as a result of fewer accidents. Achieving this, however, requires a combination of technologies working in concert – everything including computer vision, edge computing, mapping, the cloud and artificial intelligence (AI).

This, in turn, requires a significant shift in the way we as an industry view computing and data-centric technology. We need to look at data holistically, including how we move data faster, store more of it and process everything from the cloud to the edge.

Implications for Infrastructure

This end-to-end approach is core to Intel’s strategy and when we look at it through this lens – helping customers move, store and process data – the market opportunity is enormous. In fact, we’ve revised our TAM estimate for our data-centric businesses upward, from $160 billion by 2021 to $200 billion by 2022. This is the biggest opportunity in the history of the company.

As part of my keynote today, I outlined the investments we’re making across a broad portfolio to maximize this opportunity.

Move Faster

With the explosion of data comes the need to move data faster, especially within hyperscale data centers. Connectivity and the network have become the bottlenecks to more effectively utilize and unleash high-performance computing. Innovations such as Intel’s silicon photonics are designed to break those boundaries using our unique ability to integrate the laser in silicon and, ultimately, deliver the lowest cost and power per bit and the highest bandwidth.

In addition, my colleague Alexis Bjorlin announced today that we are further expanding our connectivity portfolio with a new and innovative SmartNIC product line – code-named Cascade Glacier – which is based on Intel® Arria® 10 FPGAs and enables optimized performance for Intel Xeon processor-based systems. Customers are sampling today, and Cascade Glacier will be available in the first quarter of 2019.

Store More

For many applications running in today’s data centers, it’s not just about moving data, it’s also about storing data in the most economical way. To that end, we have challenged ourselves to completely transform the memory and storage hierarchy in the data center.

We recently unveiled more details about Intel® Optane™ DC persistent memory, a completely new class of memory and storage innovation that enables a large persistent memory tier between DRAM and SSDs, while being fast and affordable. And today, we shared new performance metrics that show that Intel Optane DC persistent memory-based systems can achieve up to 8 times the performance gains for certain analytics queries over configurations that rely on DRAM only.

Customers like Google, CERN, Huawei, SAP and Tencent already see this as a game-changer. And today, we’ve started to ship the first units of Optane DC persistent memory, and I personally delivered the first unit to Bart Sano, Google’s vice president of Platforms. Broad availability is planned for 2019, with the next generation of Intel Xeon processors.

In addition, at the Flash Memory Summit, we will unveil new Intel® QLC 3D NAND-based products, and demonstrate how companies like Tencent use this to unleash the value of their data.

Process Everything

A lot has changed since we introduced the first Intel Xeon processor 20 years ago, but the appetite for computing performance is greater than ever. Since launching the Intel Xeon Scalable platform last July, we’ve seen demand continue to rise, and I’m pleased to say that we shipped more than 2 million units in the second quarter of 2018. Even better, in the first four weeks of the third quarter, we shipped another 1 million units.

Our investments in optimizing Intel Xeon processors and Intel FPGAs for artificial intelligence are also paying off. In 2017, more than $1 billion in revenue came from customers running AI on Intel Xeon processors in the data center. And we continue to improve AI training and inference performance. In total, since 2014, our performance has improved well over 200 times.

Equally exciting to me is what is to come. Today, we disclosed the next generation roadmap for the Intel Xeon platform:

  • Cascade Lake is a future Intel Xeon Scalable processor based on 14nm technology that will introduce Intel Optane DC persistent memory and a set of new AI features called Intel DL Boost. This embedded AI accelerator will speed deep learning inference workloads, with image recognition expected to run 11 times faster than on the current-generation Intel Xeon Scalable processors at their July 2017 launch. Cascade Lake is targeted to begin shipping late this year.
  • Cooper Lake is a future Intel Xeon Scalable processor that is based on 14nm technology. Cooper Lake will introduce a new generation platform with significant performance improvements, new I/O features, new Intel® DL Boost capabilities (Bfloat16) that improve AI/deep learning training performance, and additional Intel Optane DC persistent memory innovations. Cooper Lake is targeted for 2019 shipments.
  • Ice Lake is a future Intel Xeon Scalable processor based on 10nm technology that shares a common platform with Cooper Lake and is planned as a fast follow-on targeted for 2020 shipments.
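Cooper Lake’s Bfloat16 support refers to a 16-bit floating-point format that keeps float32’s full 8-bit exponent (so it covers the same numeric range) but truncates the mantissa to 7 bits. A minimal sketch of the conversion — simple truncation, whereas real hardware typically applies round-to-nearest-even:

```python
import math
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Keep the top 16 bits of an IEEE-754 float32: sign, 8-bit exponent,
    and the 7 highest mantissa bits. Truncation only, for simplicity."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bfloat16_bits_to_float32(b: int) -> float:
    """Widen bfloat16 back to float32 by zero-filling the low mantissa bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x

# 3.140625 needs only 7 mantissa bits, so it survives the round trip exactly;
# full-precision pi loses its low mantissa bits and collapses to that value.
print(bfloat16_bits_to_float32(float32_to_bfloat16_bits(3.140625)))  # 3.140625
print(bfloat16_bits_to_float32(float32_to_bfloat16_bits(math.pi)))   # 3.140625
```

The appeal for deep learning training is exactly this trade: range like float32, half the memory and bandwidth, with the lost mantissa precision being largely tolerable for gradient updates.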

In addition to investing in the right technologies, we are also offering optimized solutions – from hardware to software – to help our customers stay ahead of their growing infrastructure demands. As an example, we introduced three new Intel Select Solutions today, focused on AI, blockchain and SAP Hana*, which aim to simplify deployment and speed time-to-value for our ecosystem partners and customers.

The Opportunity Ahead

In summary, we’ve entered a new era of data-centric computing. The proliferation of the cloud beyond hyperscale and into the network and out to the edge, the impending transition to 5G, and the growth of AI and analytics have driven a profound shift in the market, creating massive amounts of largely untapped data.

And when you add the growth in processing power, breakthroughs in connectivity, storage, memory and algorithms, we end up with a completely new way of thinking about infrastructure. I’m excited about the huge and fast data-centric opportunity ($200 billion by 2022) that we see ahead.

To help our customers move, store and process massive amounts of data, we have actionable plans to win in the highest growth areas, and we have an unparalleled portfolio to fuel our growth – including, performance-leading products and a broad ecosystem that spans the entire data-centric market.

When people ask what I love about working at Intel, the answer is simple. We are inventing – and scaling – the technologies and solutions that will usher in a new era of computing and help solve some of society’s greatest problems.

Navin Shenoy is executive vice president and general manager of the Data Center Group at Intel Corporation.

The post Innovating for the ‘Data-Centric’ Era appeared first on Intel Newsroom.

Media Alert: Data-Centric Innovation Summit – Data Center Platform and Products Fueling Intel’s Growth

Join Intel’s executive vice president and general manager of the Data Center Group, Navin Shenoy, as he presents the company’s vision for a new era of data-centric computing at Intel’s Data-Centric Innovation Summit.

Intel’s silicon portfolio, investment in software optimization and work with partners provide an opportunity to fuel new global business opportunities and societal advancements.

During the opening keynote, Shenoy will share his view of Intel’s expanded data-centric opportunity and his plans to shape and win key growth trends: artificial intelligence, the cloud and network transformation.  

When: Wednesday, August 8; livestream begins at 9:00 a.m. PDT

Where: Livestream can be accessed at Intel’s investor relations website.

Media Contact: Stephen Gabriel, Intel Global Communications,

The post Media Alert: Data-Centric Innovation Summit – Data Center Platform and Products Fueling Intel’s Growth appeared first on Intel Newsroom.

NVIDIA and NetApp Team to Help Businesses Accelerate AI

For all the focus these days on AI, it’s largely just the world’s largest hyperscalers that have the chops to roll out predictable, scalable deep learning across their organizations.

Their vast budgets and in-house expertise have been required to design systems with the right balance of compute, storage and networking to deliver powerful AI services across a broad base of users.

NetApp ONTAP AI, powered by NVIDIA DGX and NetApp all-flash storage, is a blueprint for enterprises wanting to do the same. It helps organizations both large and small transform deep learning ambitions into reality, offering an easy-to-deploy, modular approach for implementing — and scaling — deep learning across their infrastructures. Deployment times shrink from months to days.

We’ve worked with NetApp to distill hard-won design insights and best practices into a replicable formula for rolling out an optimal architecture for AI and deep learning. It’s a formula that eliminates the guesswork of designing infrastructure, providing an optimal configuration of GPU computing, storage and networking.

ONTAP AI is backed by a growing roster of trusted NVIDIA and NetApp partners that can help a business get its deep learning infrastructure up and running quickly and cost effectively. And these partners have the AI expertise and enterprise-grade support needed to keep it humming.

This support can extend into a simplified, day-to-day operational experience that will help ensure the ongoing productivity of an enterprise’s deep learning efforts.

For businesses looking to accelerate and simplify their journey into the AI revolution, ONTAP AI is a great way to get there.

Learn more at

The post NVIDIA and NetApp Team to Help Businesses Accelerate AI appeared first on The Official NVIDIA Blog.

Intel at 50: Innovation Platform for a New Era

By Murthy Renduchintala

Today we celebrate the 50th anniversary of Intel, a company born at the dawn of the technology industry – the advent of the integrated circuit. Since that day — July 18, 1968 — Intel’s impact has been felt through a progression of tech waves, including the personal computer, the internet and the cloud. We stand now at the starting line of a new and even more profound digital transformation where virtually every activity will interact with computing.

Computing is about to become infinitely more diverse. It will evolve into new form factors. It will adapt to extreme cost and environmental constraints. And it will power software and algorithms that are always-on, always-learning and able to excel at specialized tasks. Think of systems to prevent fraud on a blockchain or to concurrently anticipate and prevent diabetic events for a million individuals.

More: Intel Celebrates 50 Years of Innovation

Computers will work in concert, invisibly across devices, nearby networks and distant cloud data centers in service of individuals. For example, the car may be both the next big game-changer in computers and the planet’s biggest data collector. Intel’s Mobileye is already changing cars with driver-assistive technology installed in 27 million vehicles around the world. An additional 2 million cars this year will continuously crowdsource very precise high-definition maps for enhanced sensing and localization as part of Mobileye’s evolution to higher levels of autonomy. To help build confidence in these new autonomous cars, we have proposed an open, industry-driven formal model for safe decision-making. Mobileye’s example is a new kind of integrated technology platform that speaks to the future of innovation.

In his original 1965 “Moore’s Law” paper, Intel co-founder Gordon Moore wrote of integrating many similar devices to great benefit:

“The future of integrated electronics is the future of electronics itself. The advantages of integration will bring about a proliferation of electronics, pushing this science into many new areas.”

In our generation, we will have the freedom to integrate devices dissimilar in almost every respect: function, architecture, cost, power and manufacturing process.

As a company, we have been working toward this future for over a decade. Our manufacturing and engineering expertise produces the products and technologies that are the foundation for the world’s innovation. It is critical that we continue to drive breakthroughs in five areas:

Innovative Technologies – Mixing and Matching to Create Something New: In a world of heterogeneous integration, where value is created by combining powerful but sometimes disparate technologies, invention and investment in diverse intellectual property (IP) portfolios will be increasingly important. One new pervasive area of technology where we are investing is computer vision – from graphic processing units and vision processing units to domain-specific integrated platforms like Mobileye. Seeing is difficult for computers, yet advanced computer vision and other sensing technologies and software will enable more natural human-computer interactions. At the same time, we continue to invest in the next generation of foundational technologies including central processing units across the spectrum of high-performance and low-power; very fast memory; and 5G communications needed for low-latency access, analysis and transportation of data.

Advanced Architectures and Ecosystems:  Artificial Intelligence-related computing tasks will permeate virtually all data-rich processes over the next decade. We are inventing technologies and open software tools that will advance the nascent AI ecosystem, making it possible to gain insight, anticipate needs and continuously learn from data at enterprise scale. This includes new neural network processors, customizable FPGAs in the cloud and embedding AI technologies into existing platforms. Over the past year, Intel has worked with Google on TensorFlow*, with Amazon on MXNet*, and on Caffe* to enhance deep learning performance with optimizations for Xeon in the data center, resulting in over 100 times performance gains for training and nearly 200 times for inference across frameworks.

Packaging in New Dimensions: We are making huge progress to broadly enable heterogeneous integration of computing, memory and communications. We will connect and stack diverse technologies in tiny footprints tuned for specific power envelopes, providing unique cost and performance characteristics with much greater flexibility. One processor we are developing connects “chiplets” built in different manufacturing processes using new 2D and 3D assembly techniques to deliver powerful PC performance with the energy usage of an ultra-efficient mobile device. Testing and verification breakthroughs were also required to begin to scale these new packaging techniques. Imagine the possibilities as we combine into the most efficient of packages the more diverse capabilities – even data center-class technologies – once miles apart in computing terms or based on incompatible processes.

New Models for Computing: Intel Labs is working with academic partners around the world to look over the horizon and rethink computing itself. Quantum computing promises to penetrate complex problems with seemingly infinite variables if it can be scaled reliably. And neuromorphic technology mimicking the function of neurons and operating on feedback from the environment could be a new model of adaptable, always-on, ultra-efficient computing at the edge. We are making strong progress in both.

Securing the Future: Consistent with our Security-First pledge, we are working on deeper collaboration to identify and address vulnerabilities in increasingly complex technologies. And we are engaged in industry and academic initiatives like the RISE program at the University of California, Berkeley on new frameworks and technologies to help protect millions of connected people and things relying on assistive technologies and software.

All of this requires commitment to long-term strategies and sustained investment in people and innovation platforms. At Intel, we have acquired innovative companies but we have also increased our research and development spending. Last year, Intel accounted for over one-third of the world’s semiconductor R&D, or $13 billion, to ensure that we are in a position with the right people and the right technology to lead in this future.

On Intel’s 50th anniversary we celebrate the past – icons like our founders, Robert Noyce and Gordon Moore – but mostly we look forward, as they did, with our colleagues and our broader industry and academic communities to create the future, improve lives and solve the world’s biggest challenges.

Dr. Venkata (Murthy) M. Renduchintala is group president of the Technology, Systems Architecture & Client Group and chief engineering officer at Intel Corporation.

The post Intel at 50: Innovation Platform for a New Era appeared first on Intel Newsroom.

At ISC 2018, Intel Transforms the Future of High-Performance Computing

supercomputer 2x1

What’s New: A new era of supercomputing offers unprecedented opportunities for scientific and industrial breakthroughs. The global competition among high-performance computing (HPC) systems heats up at the International Supercomputing Conference (ISC 2018) in Frankfurt, Germany.

The next generation of supercomputers will provide scientists and researchers with powerful new tools to accelerate scientific discoveries and drive innovations. Intel is on the forefront of the convergence of artificial intelligence (AI), analytics, simulation and modeling and other HPC workloads that will drive the industry toward the next era in supercomputing.

Intel’s Leadership Role in HPC: ISC 2018 kicked off with the release of the Top500 list, which shows today’s supercomputing platforms continue to rely on Intel® Xeon® processors as the preferred processor in the world’s leading Top500 supercomputers. Intel processors power a record 95 percent of all systems in the Top500 list, an increase of 2.4 percentage points since June 2017. Intel Xeon processors are used in 97 percent of all new systems added to the Top500 list, with the latest Intel Xeon Scalable processors powering more than 27 percent of all newly added (37 of 133) Top500 systems. Intel Xeon processors provide the performance and flexibility to handle the most stringent HPC workloads as well as the widest range of workloads at any scale required by science and industry.
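The headline percentages are straightforward to verify against the list sizes quoted. A quick check:

```python
# Reconstructing the Top500 share figures (June 2018 list).
total_systems = 500
intel_share = 0.95
new_systems, new_xeon_scalable = 133, 37

print(round(intel_share * total_systems))               # 475 of 500 systems
print(round(100 * new_xeon_scalable / new_systems, 1))  # 27.8 -- "more than 27 percent"
```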

What’s New from Intel: For both supercomputing and traditional HPC clusters, Intel continues to deliver and innovate high-speed interconnect technologies that enable cost-effective deployment of HPC systems. Intel shared at ISC its next-generation Omni-Path Architecture (Intel® OPA200), coming in 2019, which will provide data rate speeds up to 200 Gb/s, doubling the performance of the previous generation. This next-generation fabric will be interoperable and compatible with the current generation of Intel® OPA.  Intel® OPA200’s high-performance capabilities and low-latency at scale will provide system architects the ability to scale to tens of thousands of nodes while benefiting from improved total cost of ownership.

Visualization is a key component in advanced computing that allows systems to deliver greater insights with faster turnaround when using large-scale data sets. This week at ISC, Intel announced Intel® Select Solution for Professional Visualization, an easily deployed, Intel-optimized system reference architecture that is purpose-built to meet the demands of today’s most complex data explosion challenges. Intel Select Solution for Professional Visualization uses the platform’s onboard memory to do graphical rendering of large datasets in real time. A greater footprint of available memory makes these Intel solutions better suited for the larger data sets in HPC workloads than architectures with smaller, captive memory pools. Intel Select Solution for Professional Visualization is supported by leading software vendors and visualization experts and will be available later this year from partners such as Atipa*, Dalco*, E4 Computing*, Megware* and RSC*.

Hear from Intel: Intel Corporate Vice President Dr. Rajeeb (Raj) Hazra will kick off Intel’s participation at ISC 2018 by sharing insight on emerging technologies and the changing landscape of HPC systems. Hazra will present to ISC 2018 attendees at 6 p.m. Monday, June 25, in the Panorama 2 ballroom of the Messe Frankfurt Hall 3 on how the convergence of artificial intelligence (AI), analytics, and simulation and modeling are ushering in new approaches that will help the industry accelerate insights and innovation.

See Intel Tech in Action: Intel is showcasing the following technology demonstrations for HPC and AI at its ISC 2018 booth (#F-930):

  • A brain tumor screening demonstration that trains high-resolution images using AI based on Intel Xeon Scalable processors.
  • Intel Select Solutions for Professional Visualization demonstrations that include a live “Virtual Auto Wind Tunnel” using OpenFOAM, interactive ultra-high-fidelity ray tracing with ParaView+OSPRay, and an interactive “fly-through” of a LiDAR-captured, 500-year-old German village and church, with MegaMol+OSPRay photo-realistically rendering more than 500GB of particle data.
  • A demonstration of Intel® FPGAs in key high-performance computing and AI workloads such as compression, image/object recognition and genome sequencing.
  • A demonstration of distributed deep reinforcement learning on Intel Xeon Scalable processors.

More Context: Institut Curie Names Intel Lead Partner to Implement High-Performance Computing and Artificial Intelligence in Accelerating Genome Sequencing and Interpretation for Oncology | Intel Starts Testing Smallest ‘Spin QUBIT’ Chip for Quantum Computing | Intel Delivers 17-Qubit Superconducting Chip with Advanced Packaging to QuTech | Using Deep Neural Network Acceleration for Image Analysis in Drug Discovery

The post At ISC 2018, Intel Transforms the Future of High-Performance Computing appeared first on Intel Newsroom.

SoundHound Digs Deeper Into Voice AI Market

SoundHound is learning some new AI tricks.

The Silicon Valley startup, which creates AI-based voice services, has fetched $100 million in strategic investment capital as it expands its offerings.

In addition to its eponymous music recognition app, SoundHound offers its Hound voice search app and Houndify voice platform for companies to create AI-powered voice services. The company’s tech has become the de facto alternative for voice search in a market crowded with the industry’s biggest players.

SoundHound is the underdog choice versus the likes of Amazon, Apple, Google and Microsoft.

The company is pushing out its voice domains, or topics for natural language processing fluency, at a rapid pace. It has gone from 50 domains to 200 such areas in which it’s improving voice services in a span of two years, outpacing the advances of Apple’s Siri.

NVIDIA GPU Ventures, which backs startups working on deep learning, is an early investor in SoundHound.

Join in the Collective

Meanwhile, SoundHound continues to push for interoperability — or the ability for domains to speak to one another — as a leg up in providing better search capabilities for consumers. The company, which calls this effort Collective AI, says this makes products built on the architecture smarter and more capable.

Collective AI is intended to enable people to ask complicated queries and get responses, such as this: Find the best Italian restaurant in San Francisco that has more than 4 stars, is good for kids, isn’t a chain and is open after 9 p.m. on Wednesdays.
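Conceptually, such a compound query amounts to composing constraints from several cooperating domains over a single result set. The sketch below is purely illustrative (toy data and function names of our own invention, not SoundHound’s actual API):

```python
from dataclasses import dataclass

@dataclass
class Restaurant:
    name: str
    cuisine: str
    city: str
    stars: float
    kid_friendly: bool
    is_chain: bool
    closes_at: int  # closing hour, 24-hour clock

# Toy data standing in for a restaurant domain's search results.
RESTAURANTS = [
    Restaurant("Trattoria Uno", "italian", "San Francisco", 4.5, True, False, 23),
    Restaurant("Pasta Chain Co", "italian", "San Francisco", 4.2, True, True, 22),
    Restaurant("Nonna's", "italian", "San Francisco", 3.9, True, False, 23),
]

def best_italian_open_late(restaurants, city="San Francisco",
                           min_stars=4.0, after_hour=21):
    """Apply each parsed constraint as a filter, then rank by rating."""
    matches = [
        r for r in restaurants
        if r.cuisine == "italian"
        and r.city == city
        and r.stars > min_stars
        and r.kid_friendly
        and not r.is_chain
        and r.closes_at > after_hour
    ]
    # Return the top-rated match, or None if nothing satisfies every filter.
    return max(matches, key=lambda r: r.stars, default=None)

print(best_italian_open_late(RESTAURANTS).name)  # Trattoria Uno
```

The point of the interoperability argument is that each constraint (ratings, kid-friendliness, hours) may come from a different domain, yet the system can still evaluate them jointly in one pass.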

The company’s Collective AI alliance includes NVIDIA, Yelp, Sportstrader, Xignite, FlightStats, Onkyo, Sharp, Uber and Samsung ARTIK.

SoundHound also aims to stand out from the pack with Houndify. The white-label licensed service allows companies to personalize voice assistants with their own name in products and keep the customer data that’s generated. This enables companies to build their voice search brand and tap into other business opportunities that can emerge from customer data.

Amazon, by contrast, licenses Alexa, but customers must invoke it as “Alexa” in queries and Amazon retains the customer data. Apple doesn’t license its Siri voice assistant at all, and Google doesn’t let companies rename Google Assistant or own the data their customers create.

Houndify Developers Triple

Developers are biting for Houndify. Early last year SoundHound had more than 20,000 developers registered to use Houndify, a number that has now swelled to more than 60,000.

SoundHound is retrieving customers for Houndify, too. Today the company is working with 11 automakers as well as companies pursuing robotics, connected speakers, appliances, augmented reality and smart home devices using Houndify.

Hyundai is implementing Houndify for next-generation voice in future cars. The automaker’s proactive assistant is designed to predict driver needs for information, such as providing meeting reminders. It also enables hands-free phone calls, texting, destination and music search, as well as the ability to check the weather and manage a calendar. Voice will also extend to control of air conditioning, door locks and other vehicle functions.

The NVIDIA DRIVE and Jetson TX2 platforms help make SoundHound’s speech-to-meaning technology possible in automotive and robotics applications, respectively.

The Jetson TX2 developer kit for robotics

Dual Approach to Speech Recognition

SoundHound has taken a novel approach to serve up speech recognition on the fly. It has been granted a patent for its system that applies a dual method in which both its local recognition model and remote recognition engine perform speech recognition. SoundHound’s hybrid engineering takes advantage of GPUs from NVIDIA DRIVE to serve up faster processing of voice queries.

The dual approach from SoundHound has enabled real-time responses to voice queries in vehicles, a game changer in an industry whose legacy voice systems are frustratingly slow.

This type of ingenuity is what can make AI available on the edge of the network. Historically, embedded technologies have only been able to recognize a small set of vocabulary and at lower speed and accuracy. SoundHound, however, unleashed NVIDIA GPUs to run a large vocabulary for speech recognition and natural language understanding at high speed and accuracy.
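The general shape of such a dual local/remote pattern can be sketched as follows. This is a minimal illustration with stub recognizers of our own invention, not SoundHound’s patented system: run both engines in parallel, prefer the (usually more accurate) remote result if it arrives in time, and fall back to the fast embedded result otherwise.

```python
import concurrent.futures

def local_recognize(audio):
    # Embedded model: low latency, smaller vocabulary. (Stub for illustration.)
    return {"text": "play jazz", "confidence": 0.80, "source": "local"}

def remote_recognize(audio):
    # Cloud engine: network round-trip, larger vocabulary. (Stub for illustration.)
    return {"text": "play jazz radio", "confidence": 0.95, "source": "remote"}

def recognize(audio, timeout=0.2, min_confidence=0.9):
    """Race both engines; use the remote result if it arrives in time
    and is confident enough, otherwise fall back to the local result."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        local = pool.submit(local_recognize, audio)
        remote = pool.submit(remote_recognize, audio)
        try:
            result = remote.result(timeout=timeout)
            if result["confidence"] >= min_confidence:
                return result
        except concurrent.futures.TimeoutError:
            pass  # network too slow: stay responsive with the local answer
        return local.result()

print(recognize(b"...")["source"])  # remote (stubs return instantly here)
```

The design choice is the trade-off the article describes: the embedded path guarantees real-time responsiveness, while the remote path supplies the larger vocabulary when the network allows.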

“We use the NVIDIA DRIVE platform to create an embedded version of our system that can scale to more than a million words in natural language,” said SoundHound co-founder and CEO Keyvan Mohajer. “It’s very fast and scalable.”

In robotics, Mayfield Robotics is developing its Kuri robot for use with Houndify for voice interactions, allowing people to interact with and guide the robot.

For appliances, Bunn has shown a reference model using Houndify on its Sure Immersion Coffee Machine, which is brought to life with the prompt, “OK, barista.” Customers can use voice commands to operate the coffee-making part of the machine as well as to search for weather, sports and other information while waiting for coffee to brew.

SoundHound uses NVIDIA GPUs for training neural networks and deep learning, and it operates its own data centers running GPUs. The company’s natural language processing runs on thousands of servers and the company works with terabytes of data.

“Something that might take many months, now takes days, and that’s thanks to the GPUs,” Mohajer said. “The industry can’t move without GPUs.”

The post SoundHound Digs Deeper Into Voice AI Market appeared first on The Official NVIDIA Blog.

GPU-Accelerate the Intelligent Enterprise at SAP SAPPHIRE NOW

Self-driving cars and Go-beating algorithms get most of the media attention, but businesses of all sorts can use AI to get better control of their operations and boost efficiency.

Many HR departments spend more than 60 percent of their time reviewing resumes, according to SAP. And 70 percent of customers expect a response to their complaints within an hour on social media, bogging down customer support teams. Meanwhile, marketing teams are often blind to how well paid placements of their brands perform at sports events.

NVIDIA and SAP are working to help businesses transform these operations with GPU-accelerated AI, and we’re making it easy to do so with any platform or vendor.

SAP has launched several GPU-accelerated SAP Leonardo machine learning applications, including the Resume Matching, Service Ticket Intelligence, Brand Impact and Accounts Payable applications, with more to come.

Join us and many influencers at SAPPHIRE NOW this week in Orlando, Fla., to explore a GPU-accelerated and purpose-driven future where self-optimizing and context-sensitive processes free people to focus on their business essentials.

Among those speaking live at the show:

  • NVIDIA VP and GM Jim McHugh joins SAP EVP Arlen Shenkman to discuss how to seize the opportunity for AI innovation with ecosystem partners.

SAP and NVIDIA have been focused on building a strong partner ecosystem. NVIDIA will be in many partner booths at SAPPHIRE NOW to showcase the momentum of the AI revolution. To name a few:

Partners Booth Locations at SAP SAPPHIRE NOW
  • Cisco, booth 550, to showcase how GPU-aware Kubernetes helps orchestrate AI workloads on Cisco HyperFlex, powered by NVIDIA Tesla V100.
  • Dell, booth 343, to show how the PowerEdge C4140 rack server, powered by NVIDIA V100, addresses training and inference for the most demanding HPC, data visualization and AI workloads.
  • HPE, booth 241, to demonstrate AI powered at the core by HPE Apollo 6500 running on NVIDIA V100, and shifting the model runtime to the edge with converged HPE Edgeline systems using NVIDIA Tesla P4 GPUs.
  • Lenovo, booth 558, to highlight Lenovo E-Health, an intelligent medical image diagnostic assistance solution running on NVIDIA GPUs. The solution won first place in the 2017 LiTS (Liver Tumor Segmentation) competition for automatically segmenting liver lesions in CT volumes.
  • NetApp, booth 108, to share a real-time style transfer demo on NVIDIA DGX systems that transforms images in under a second.
  • Pure Storage, booth 1445, to showcase AIRI — the AI-Ready Infrastructure — an integrated AI solution that combines four DGX-1 systems with Pure Storage FlashBlade. In case you missed it, we also launched the AIRI Mini last week.

Join us June 4-7 to see how this is just the tip of the iceberg of an industry-wide transformation. If you won’t be at the conference, follow @NVIDIADC and #SAPPHIRENOW on Twitter for real-time updates.

The post GPU-Accelerate the Intelligent Enterprise at SAP SAPPHIRE NOW appeared first on The Official NVIDIA Blog.