Israel’s Constellation of Startups Using NVIDIA DGX Station to Polish Their Stars

A host of startups at the GPU Technology Conference in Israel this week are showing off the extraordinary acceleration gains and efficiency the NVIDIA DGX Station provides for their deep-learning work.

Companies like Cognata — which creates technology for testing autonomous vehicles in VR and won last year’s $100,000 Inception award at GTC Israel — praise DGX Station, the world’s first personal AI supercomputer, for accelerating their work up to 10x.

Another startup, TRACXPOiNT, which is creating an AI-infused shopping cart, recouped the cost of purchasing its DGX Station within two months, based on what it saved on GPU instances in the cloud.

“It’s no exaggeration to say that we depend on our DGX Station every day, sometimes every hour of every day,” said Danny Atsmon, CEO of Cognata. “It’s an enormous advantage in speeding up our work and getting to market fast.”

Indeed, the DGX Station provides enormous computing muscle — 500 teraflops of deep learning punch. That’s thanks to four Tesla V100 GPUs, NVLink interconnect technology, and 20,480 NVIDIA CUDA cores.
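As a quick sanity check, the totals above line up with the published per-GPU Tesla V100 figures (5,120 CUDA cores and 125 tensor TFLOPS each; those per-GPU numbers are not stated in this article, so treat them as our assumption):

```python
# Back-of-the-envelope check of the DGX Station totals quoted above,
# assuming the published per-GPU Tesla V100 figures (our assumption,
# not from this article): 5,120 CUDA cores and 125 tensor TFLOPS each.
CUDA_CORES_PER_V100 = 5_120
TENSOR_TFLOPS_PER_V100 = 125
NUM_GPUS = 4

total_cores = NUM_GPUS * CUDA_CORES_PER_V100      # 20,480 CUDA cores
total_tflops = NUM_GPUS * TENSOR_TFLOPS_PER_V100  # 500 TFLOPS

print(total_cores, total_tflops)  # 20480 500
```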

While it provides the computing power of over 400 CPUs, it uses only about one-twentieth as much power. It creates only one-tenth the noise of a workstation — about as quiet as a typical office ventilation system.

An integrated system purpose-built for AI, DGX Station comes with fully optimized hardware and software. This enables companies to get started in just an hour, compared with potentially a month of setup time required to build similar systems.

DGX Station’s deep learning and analytics performance is unmatched, providing:

  • 72x the deep-learning performance of CPU servers
  • more than 100x the speedup for analyzing large datasets versus a 20-node Spark server cluster
  • full versatility for both deep learning training and inferencing at over 30,000 images a second.

Here’s how four of Israel’s hottest young companies use DGX Station.


Cognata

It’s estimated that an autonomous car would need 11 billion miles of test drives to achieve the same level of accuracy as a human driver. Cognata puts that within reach by using state-of-the-art deep learning simulation to test vehicles in virtual reality through computer-generated landscapes, complete with other cars, pedestrians, buildings and varying weather conditions.

By using a DGX Station, Cognata shaved years off its training time, accelerating its efforts 10x beyond what it could achieve even on a GPU-powered workstation. The system enables its team to run dozens of training jobs simultaneously, so Cognata can rack up millions of virtual miles on which to fine-tune autonomous responses.


AnyVision

AnyVision, which has grown to more than 170 employees in just four years, applies proprietary technology and convolutional neural networks to provide face, body and object detection. It enables capabilities such as facial recognition for ticketless entry to sporting events and visual identification for two-factor authentication for banking applications.

DGX Station enables AnyVision to train 8x faster than on a sophisticated GPU-powered workstation, while detecting individual identities against a database of 115 million faces in 200 milliseconds.
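The lookup figure above implies a striking search rate. Treated naively as an exhaustive scan (real systems typically use indexed, approximate search, so this is an upper bound on the work per query; the arithmetic is ours, not AnyVision's):

```python
# Throughput implied by the figure quoted above: one query matched
# against 115 million enrolled faces in 200 ms, treated naively as an
# exhaustive linear scan. Real systems typically use approximate
# nearest-neighbor indexes, so this is an upper bound per query.
database_size = 115_000_000   # enrolled identities
latency_s = 0.200             # quoted lookup time

comparisons_per_second = database_size / latency_s
print(f"{comparisons_per_second:,.0f} comparisons/s")  # 575,000,000 comparisons/s
```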


NovellusDX

This Jerusalem-based startup monitors the effects of mutations and drugs on cancer patients, enabling oncologists to provide precision, personalized treatments.

Using DGX Station, NovellusDX was able to train 4x faster by eliminating the need for large data transfers to the cloud and save $70,000 on an annual basis, for an eight-month payback period.
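Those two figures together roughly pin down what the system cost NovellusDX; a quick worked calculation (ours, not the company's):

```python
# The payback period quoted above implies the purchase price:
# saving $70,000 per year, eight months of savings cover the system.
annual_savings = 70_000
payback_months = 8

implied_cost = annual_savings * payback_months / 12
print(f"${implied_cost:,.0f}")  # ~ $46,667
```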

NovellusDX also improved its accuracy 10x using its own deep learning framework to quantify the level of intra-cellular signaling pathway activity from millions of images of mutating cells.


TRACXPOiNT

This startup is bringing the convenience of online shopping to retail with an AI-powered shopping cart that visually recognizes items in stores, communicates with suppliers to get real-time offers, and enables shoppers to pay digitally for their cart’s contents without stopping to scan items at the checkout counter.

TRACXPOiNT’s cart is a fully integrated system of hardware and GPU-accelerated software. Training on a DGX Station provided a 3x performance increase, and the company recouped its investment after just two months of 24/7 training, compared with the cost of equivalent GPU-accelerated cloud instances. It conducts inferencing using TensorRT software running on the NVIDIA Jetson embedded platform, which enables it to recognize up to 100,000 different products in under a second.

Join the more than 3,200 NVIDIA Inception program partners and get started on your AI innovation with DGX Station.

The post Israel’s Constellation of Startups Using NVIDIA DGX Station to Polish Their Stars appeared first on The Official NVIDIA Blog.

Intel and Simacan Combine in Effort to Ease Congestion Caused by Freight Traffic

What’s New: Intel and Simacan* are working together to enable so-called “digital corridors” for truck platoons along the highly congested “Tulip Corridor” routes that connect North Sea shipping ports to Germany’s industrial Ruhr Valley. The platoons are enabled by Simacan Control Tower,* a cloud-based logistics solution that uses Intel® Xeon® Scalable processors to analyse huge amounts of real-time data.

Supporting Quote:

“The Tulip Corridor is a very tangible illustration that data is the ‘new oil.’ The volume of data involved in bringing truck platooning to reality demonstrates the ability of Intel technology to power the world’s data-driven needs.”
–Norberto Carrascal, business consumption director, EMEA territory, Intel Corporation

How It Works: In truck platoons, a collection of trucks equipped with state-of-the-art driving support systems follow each other in close formation. The trucks include smart technology and communicate with one another, as well as with their drivers, to stay in formation.

Intel-powered Simacan Control Tower software delivers a detailed operational picture featuring vehicles of multiple carriers. It includes traffic and vehicle condition updates, predicted arrival times, and automatic geofence detection. Based on this information, Simacan shares real-time notifications on planning, routing and arrival times, and delivers post-trip analyses based on the data gathered.
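The article doesn't describe how Simacan implements geofence detection; purely as an illustration of the idea, here is a minimal sketch that flags whether a truck's GPS position falls inside a circular fence, using the haversine great-circle distance (all names and coordinates are invented):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(position, fence_center, radius_km):
    """True if a position falls within a circular geofence."""
    return haversine_km(*position, *fence_center) <= radius_km

# Invented example: a truck near the Port of Rotterdam, 5 km fence radius.
rotterdam_fence = (51.95, 4.05)
truck_pos = (51.96, 4.07)
print(inside_geofence(truck_pos, rotterdam_fence, 5.0))  # True
```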

“With the Simacan Transport Cloud and Simacan Control Tower, we constantly merge and analyse millions of data points out of logistic planning systems, onboard vehicle systems and intelligent traffic management systems in real time,” said Rob Schuurbiers, CEO of Simacan. “With the support of Intel’s extremely high-performance technology, we succeed in meeting and surpassing our customers’ expectations.”

Results from the initial trials have already indicated the potential benefits of the platooning approach. Traffic flow for the platoons was improved by 10 to 17 percent. Applied to the working lifespan of a truck of 175,000 kilometres, this equates to a saving of 6,000 litres of diesel per truck, beneficial for the operators and for the environment.
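A quick cross-check shows those numbers hang together (the 30 L/100 km truck consumption is our assumption, not a figure from the article):

```python
# Cross-check of the trial figures quoted above: 6,000 litres saved over
# a 175,000 km working life. Assuming a typical heavy-truck consumption
# of about 30 L/100 km (our assumption), the implied saving lands inside
# the quoted 10 to 17 percent flow-improvement band.
litres_saved = 6_000
lifespan_km = 175_000
assumed_consumption_l_per_100km = 30.0

saved_l_per_100km = litres_saved / lifespan_km * 100   # ~ 3.43 L/100 km
saving_fraction = saved_l_per_100km / assumed_consumption_l_per_100km
print(f"{saving_fraction:.1%}")  # 11.4%
```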

Why It’s Important: Rotterdam, Netherlands, and Antwerp, Belgium, are Europe’s two busiest ports, handling a combined 675 million tons of freight in 2017. Most of this freight needs to be transported inland, much of it to the Ruhr Valley in Germany’s industrial heartland. This creates an ever-increasing burden on road networks that are struggling to cope with a high volume of trucks and the increasing disruption that can be caused by a single truck breaking down. Truck platooning is a potential answer to this problem – helping ease the flow of traffic and improve safety and reliability, while reducing the impact on the surrounding environment. All this needs to be done without impairing companies’ ability to get their goods delivered on time.

What’s Next: Truck platooning will be tested at different levels along the Tulip Corridor between 2019 and 2023. An ambitious goal is for 100 platoons daily to traverse the Tulip Corridor by 2020.

The post Intel and Simacan Combine in Effort to Ease Congestion Caused by Freight Traffic appeared first on Intel Newsroom.

Intel Artificial Intelligence and Rolls-Royce Push Full Steam ahead on Autonomous Shipping

Photos from Rolls-Royce

What’s New: Rolls-Royce* builds shipping systems that are sophisticated and intelligent – and eventually it will add fully autonomous to that portfolio – as it makes commercial shipping safer and more efficient. It’s doing so using artificial intelligence (AI) powered by Intel® Xeon® Scalable processors and Intel® 3D NAND SSDs for storage.

“Delivering these systems is all about processing – moving and storing huge volumes of data – and that is where Intel comes in. Rolls-Royce is a key driver of innovation in the shipping industry, and together we are creating the foundation for safe shipping operations around the world.”
–Lisa Spelman, vice president and general manager, Intel Xeon Processors and Data Center Marketing in the Data Center Group at Intel

How It Works: Ships have dedicated Intel Xeon Scalable processor-based servers on board, turning them into cutting-edge floating data centers with heavy computation and AI inference capabilities. Rolls-Royce’s Intelligent Awareness System (IA) uses AI-powered sensor fusion and decision-making by processing data from lidar, radar, thermal cameras, HD cameras, satellite data and weather forecasts. This data allows vessels to become aware of their surroundings, improving safety by detecting objects several kilometers away, even in busy ports. This is especially important when operating at night, in adverse weather conditions or in congested waterways.

Data collected by the vessels is stored using Intel 3D NAND SSDs, acting as a “black box,” securing the information for training and analysis once the ship is docked. Even compressed, data captured by each vessel can reach up to 1TB per day or 30TB to 40TB over a monthlong voyage, making storage a critical component of the intelligent solution.

“This collaboration is helping us to develop technology that supports ship owners in the automation of their navigation and operations, reducing the opportunity for human error and allowing crews to focus on more valuable tasks,” said Kevin Daffey, director, Engineering & Technology and Ship Intelligence at Rolls-Royce. “Simply said, this project would not be possible without leading-edge technology now brought to the table by Intel. Together, we can blend the best of the best to change the world of shipping.”

Why It’s Important: Ninety percent of world trade is carried via international shipping – a volume that is projected to grow. Of a total world fleet of about 100,000 vessels, around 25,000 use Rolls-Royce equipment, making the company a key player in the shipping industry.

The sea can be a hostile environment – dangerous ocean conditions resulted in 1,129 total shipping losses over the past 10 years, mostly due to human error. Enabling a massive vessel – loaded with millions or billions of dollars’ worth of goods – to better navigate and detect obstacles and hazards in real time requires giving the crew the information they need to make smart and potentially lifesaving decisions. These systems also reduce the potential for human error by automating routine tasks and processes, freeing the crew to focus on critical decision-making.

Additionally, this system can potentially lower insurance premiums for vessels, since all of the ship’s data is stored securely on the 3D NAND SSDs, which can provide valuable evidence on the cause of collisions and other problems.

This technology is in action today. In a recent pilot in Japan, Rolls-Royce demonstrated that its vessels can even understand their surroundings at nighttime, when it is not possible for humans to visually detect objects in the water.

More Context: Sailing the Seas of Autonomous Shipping (Binay Ackaloor Blog) | Rolls-Royce Ship Intelligence | Artificial Intelligence at Intel

Intel, the Intel logo and Xeon are trademarks of Intel Corporation in the U.S. and/or other countries.

The post Intel Artificial Intelligence and Rolls-Royce Push Full Steam ahead on Autonomous Shipping appeared first on Intel Newsroom.

NVIDIA Launches GPU-Acceleration Platform for Data Science, Volvo Selects NVIDIA DRIVE

Big data is bigger than ever. Now, thanks to GPUs, it will be faster than ever, too.

NVIDIA founder and CEO Jensen Huang took the stage Wednesday in Munich to introduce RAPIDS, accelerating “big data, for big industry, for big companies, for deep learning,” Huang told a packed house of more than 3,000 developers and executives gathered for the three-day GPU Technology Conference in Europe.

Already backed by Walmart, IBM, Oracle, Hewlett Packard Enterprise and some two dozen other partners, the open-source GPU-acceleration platform promises 50x speedups on the NVIDIA DGX-2 AI supercomputer compared with CPU-only systems, Huang said.

The result is an invaluable tool as companies in every industry look to harness big data for a competitive edge, Huang explained as he detailed how RAPIDS will turbo-charge the work of the world’s data scientists.

“We’re accelerating things by 1,000x in the domains we focus on,” Huang said. “When we accelerate something 1,000x in ten years, if your demand goes up 100 times, your cost goes down by 10 times.”
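The arithmetic behind that quote is straightforward:

```python
# The arithmetic behind the quote above: if the cost of a unit of
# compute falls 1,000x while the amount of compute demanded rises 100x,
# total spend falls to one-tenth of what it was.
speedup = 1_000        # price/performance improvement over ten years
demand_growth = 100    # growth in compute demanded

cost_change = demand_growth / speedup
print(cost_change)  # 0.1, i.e. one-tenth the cost
```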

Over the course of a keynote packed with news and demos, Huang detailed how NVIDIA is bringing that 1000x acceleration to bear on challenges ranging from autonomous vehicles to robotics to medicine.

Among the highlights: Volvo Cars has selected the NVIDIA DRIVE AGX Xavier computer for its next generation of vehicles; King’s College London is adopting NVIDIA’s Clara medical platform; and startup Oxford Nanopore will use Xavier to build the world’s first handheld, low-cost, real-time DNA sequencer.

Big Gains for GPU Computing

Huang opened his talk by detailing the eye-popping numbers driving the adoption of accelerated computing — gains in computing power of 1,000x over the past 10 years.

“In ten years’ time, while Moore’s law has ended, our computing approach has resulted in a 1,000x increase in computing performance,” Huang said. “It’s now recognized as the path forward.”

Huang also spoke about how NVIDIA’s new Turing architecture — launched in August — brings AI and computer graphics together.

Turing combines support for next-generation rasterization, real-time ray-tracing and AI to drive big performance gains in gaming with NVIDIA GeForce RTX GPUs, visual effects with new NVIDIA Quadro RTX pro graphics cards, and hyperscale data centers with the new NVIDIA Tesla T4 GPU, the world’s first universal deep learning accelerator.

One Small Step for Man…

With a stunning demo, Huang showcased how our latest NVIDIA RTX GPUs — which enable real-time ray-tracing for the first time — allowed our team to digitally rebuild the scene around one of the lunar landing’s iconic photographs, that of astronaut Buzz Aldrin clambering down the lunar module’s ladder.

The demonstration puts to rest the assertion that the photo can’t be real because Buzz Aldrin is lit too well as he climbs down to the surface of the moon while in the shadow of the lunar lander. Instead, the simulation shows how the reflectivity of the moon’s surface accounts for exactly what’s seen in the controversial photo.

“This is the benefit of NVIDIA RTX: using this type of rendering technology, we can simulate light physics, and things are going to look the way they should look,” Huang said.

…One Giant Leap for Data Science

Bringing GPU computing back down to Earth, Huang announced a plan to accelerate the work of data scientists at the world’s largest enterprises.

RAPIDS open-source software gives data scientists facing complex challenges a giant performance boost. These challenges range from predicting credit card fraud to forecasting retail inventory and understanding customer buying behavior, Huang explained.

Analysts estimate the server market for data science and machine learning at $20 billion. Together with scientific analysis and deep learning, this pushes up the value of the high performance computing market to approximately $36 billion.

Developed over the past two years by NVIDIA engineers in close collaboration with key open-source contributors, RAPIDS offers a suite of open-source libraries for GPU-accelerated analytics, machine learning and, soon, data visualization.

RAPIDS has already won support from tech leaders such as Hewlett Packard Enterprise, IBM and Oracle, as well as open-source pioneers such as Databricks and Anaconda, Huang said.

“We have integrated RAPIDS into basically the world’s data science ecosystem, and companies big and small, their researchers can get into machine learning using RAPIDS and be able to accelerate it and do it quickly, and if they want to take it as a way to get into deep learning, they can do so,” Huang said.
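Part of what makes that ecosystem integration work is that cuDF, the RAPIDS dataframe library, deliberately mirrors the pandas API, so existing workflows port with little change. The sketch below uses CPU pandas as a stand-in (running it on GPU would require NVIDIA hardware and `import cudf` in place of `import pandas`); the data and column names are invented:

```python
import pandas as pd

# Invented toy data standing in for a fraud-analytics workload; with
# RAPIDS cuDF the same pandas-style code runs GPU-accelerated.
transactions = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 3],
    "amount": [20.0, 35.0, 15.0, 80.0, 5.0],
    "flagged_fraud": [False, False, False, True, False],
})

# Typical feature-engineering step ahead of model training.
features = transactions.groupby("customer_id").agg(
    total_spend=("amount", "sum"),
    fraud_count=("flagged_fraud", "sum"),
)
print(features)
```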

Bringing Data to Your Drive

Huang also outlined the strides NVIDIA is making with automakers, announcing that Swedish automaker Volvo has selected the NVIDIA DRIVE AGX Xavier computer for its vehicles, with production starting in the early 2020s.

DRIVE AGX Xavier — built around Xavier, the world’s most advanced SoC — is a highly integrated AI car computer that enables Volvo to streamline development of self-driving capabilities while reducing the total cost of development and support.

The initial production release will deliver Level 2+ automated driving features, going beyond traditional advanced driver assistance systems. The companies are working together to develop automated driving capabilities, uniquely integrating 360-degree surround perception and a driver monitoring system.

The NVIDIA-based computing platform will enable Volvo to implement new connectivity services, energy management technology, in-car personalization options, and autonomous drive technology.

It’s a vision that’s backed by a growing number of automotive companies, with Huang announcing Wednesday that, in addition to Volvo Cars, Volvo Trucks, tier one automotive components supplier Continental, and automotive technology companies Veoneer and Zenuity have all adopted NVIDIA DRIVE AGX.

Huang also showed the audience a video of how, this month, an autonomous NVIDIA test vehicle, nicknamed BB8, completed a jam-packed 80-kilometer (50-mile) loop in Silicon Valley without the safety driver needing to take control — even once.

Running on the NVIDIA DRIVE AGX Pegasus AI supercomputer, the car handled highway entrance and exits and numerous lane changes entirely on its own.

From Hospitals Serving Millions to Medicine Tailored Just for You

AI is also driving breakthroughs in healthcare, Huang explained, detailing how NVIDIA Clara will harness GPU computing for everything from medical scanning to robotic surgery.

He also announced a partnership with King’s College London to bring AI tools to radiology and deploy them to three hospitals serving 8 million patients in the U.K.

In addition, he announced that NVIDIA Clara AGX — which brings the power of Xavier to medical devices — has been selected by Oxford Nanopore to power its personal DNA sequencer MinION, which promises to drive down the cost and drive up the availability of medical care that’s tailored to a patient’s DNA.

A New Computing Era

Huang finished his talk by recapping the new NVIDIA platforms being rolled out — the Turing GPU architecture; the RAPIDS data science platform; and DRIVE AGX for autonomous machines of all kinds.

Then he left the audience with a stunning demo of a nameless hero being prepared for action by his robotic assistants — and later catching his robots bopping along to K.C. and the Sunshine Band and joining in the fun — before Huang returned to the stage with a quick caveat.

“And I forgot to tell you everything was done in real time,” Huang said. “That was not a movie.”

The post NVIDIA Launches GPU-Acceleration Platform for Data Science, Volvo Selects NVIDIA DRIVE appeared first on The Official NVIDIA Blog.

Intel Collaborates on New AI Research Center at Technion, Israel’s Technological Institute

From left: Dr. Michael Mayberry, Intel’s chief technology officer; Dr. Naveen Rao, Intel corporate vice president and general manager of the Artificial Intelligence Products Group

What’s New: Technion*, Israel’s technological institute, announced this week that Intel is collaborating with the institute on its new artificial intelligence (AI) research center. The announcement was made at the center’s inauguration attended by Dr. Michael Mayberry, Intel’s chief technology officer, and Dr. Naveen Rao, Intel corporate vice president and general manager of the Artificial Intelligence Products Group.

“AI is not a one-size-fits-all approach, and Intel has been working closely with a range of industry leaders to deploy AI capabilities and create new experiences. Our collaboration with Technion not only reinforces Intel Israel’s AI operations, but we are also seeing advancements to the field of AI from the joint research that is under way and in the pipeline.”
–Naveen Rao, Intel corporate vice president and general manager of Artificial Intelligence Products Group

What It Includes: The center features Technion’s computer science, electrical engineering, industrial engineering and management departments, among others, all collaborating to drive a closer relationship between academia and industry in the race to AI. Intel, which invested undisclosed funds in the center, will represent the industry in leading AI-dedicated computing research.

What It Means: Intel is committed to accelerating the promise of AI across many industries and driving the next wave of computing. Research exploring novel architectural and algorithmic approaches is a critical component of Intel’s overall AI program. The company is working with customers across verticals – including healthcare, autonomous driving, sports/entertainment, government, enterprise, retail and more – to implement AI solutions and demonstrate real value. Along with Technion, Intel is also involved in AI research with other universities and organizations worldwide.

Why It Matters: Intel and Technion have enjoyed a strong relationship through the years, as generations of Technion graduates have joined Intel’s development center in Haifa, Israel, as engineers. Intel has also previously collaborated with Technion on AI as part of the Intel Collaborative Research Institute for Computational Intelligence program.

More Context: Artificial Intelligence at Intel

The post Intel Collaborates on New AI Research Center at Technion, Israel’s Technological Institute appeared first on Intel Newsroom.

Putting Biopsies Under AI Microscope: Pathology Startup Fuels Shift Away from Physical Slides

Hundreds of millions of tissue biopsies are performed worldwide each year — most of which are diagnosed as non-cancerous. But for the few days or weeks it takes a lab to provide a result, uncertainty weighs on patients.

“Patients suffer emotionally, and their cancer is progressing as the clock ticks,” said David West, CEO of digital pathology startup Proscia.

That turnaround time could soon shrink dramatically. In recent years, the biopsy process has begun to digitize, with more and more pathologists looking at digital scans of body tissue instead of physical slides under a microscope.

Proscia, a member of our Inception virtual accelerator program, is hosting these digital biopsy specimens in the cloud. This makes specimen analysis borderless, with one hospital able to consult a pathologist in a different region. It also creates the opportunity for AI to assist experts as they analyze specimens and make their diagnoses.

“If you have the opportunity to read twice as many slides in the same amount of time, it’s an obvious win for the laboratories,” said West.

The Philadelphia-based company recently closed an $8.3 million Series A funding round, which will power its AI development and software deployment. And a feasibility study published last week demonstrated that Proscia’s deep learning software scores over 99 percent accuracy in classifying three common types of skin pathologies.

Biopsy Analysis, Behind the Scenes

Pathologists have the weighty task of examining lab samples of body tissue to determine if they’re cancerous or benign. But depending on the type and stage of disease, two pathologists looking at the same tissue may disagree on a diagnosis more than half the time, West says.

These experts are also overworked and in short supply globally. Laboratories around the world have too many slides and not enough people to read them.

China has one pathologist per 80,000 patients, said West. And while the United States has one per 25,000 patients, it’s facing a decline as many pathologists are reaching retirement age. Many other countries have so few pathologists that they are “on the precipice of a crisis,” according to West.

He projects that 80 to 90 percent of major laboratories will have switched their biopsy analysis from microscopes to scanners in the next five years. Proscia’s subscription-based software platform aims to help pathologists more efficiently analyze these digital biopsy specimens, assisted by AI.

The company uses a range of NVIDIA Tesla GPUs through Amazon Web Services to power its digital pathology software and AI development. The platform is currently being used worldwide by more than 4,000 pathologists, scientists and lab managers to manage biopsy data and workflows.

Proscia’s digital pathology and AI platform displays a heat map analysis of this H&E stained skin tissue image.

In December, Proscia will release its first deep learning module, DermAI. This tool will be able to analyze skin biopsies and is trained to recognize roughly 70 percent of the pathologies a typical dermatology lab sees. Three other modules are currently under development.

Proscia works with both labeled and unlabeled data from clinical partners to train its algorithms. The labeled datasets, created by expert pathologists, are tagged with the overall diagnosis as well as more granular labels for specific tissue formations within the image.
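The article doesn't specify Proscia's data format; purely as a hypothetical sketch of the two annotation levels just described (a slide-level diagnosis plus granular region labels), a labeled record might look like this (all names and values are invented):

```python
from dataclasses import dataclass, field

# Hypothetical record structure illustrating the two annotation levels
# described above; the class and field names are ours, not Proscia's.
@dataclass
class RegionLabel:
    x: int                    # region bounding box within the slide image
    y: int
    width: int
    height: int
    tissue_formation: str     # granular label from a pathologist

@dataclass
class LabeledSlide:
    slide_id: str
    diagnosis: str            # overall diagnosis for the whole slide
    regions: list = field(default_factory=list)

slide = LabeledSlide(
    "S-001",
    "benign nevus",
    [RegionLabel(120, 340, 64, 64, "nest of melanocytes")],
)
print(slide.diagnosis, len(slide.regions))  # benign nevus 1
```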

While biopsies can be ordered at multiple stages of treatment, Proscia focuses on the initial diagnosis stage, when doctors are looking at tissue and making treatment decisions.

“The AI is checking those cases as a virtual second opinion behind the scenes,” said West. This could lower the chances of missing tricky-to-spot cancers like melanoma, and make diagnoses more consistent among pathologists.

The post Putting Biopsies Under AI Microscope: Pathology Startup Fuels Shift Away from Physical Slides appeared first on The Official NVIDIA Blog.

Intel Xeon Scalable Processors Set 95 New Performance World Records

What’s New: Intel today announced 95 new performance world records1 for its Intel® Xeon® Scalable processors using the most up-to-date benchmarks from industry-standard bodies. These world records were achieved in servers from major original equipment manufacturers, ranging from single-socket systems up to eight-socket systems.

“I’m extremely proud of the 95 world record performance benchmarks that our partners have delivered, but even more delighted to see the real-world performance that our customers are achieving on the fastest ramping Intel Xeon processor family in history.”
– Lisa Spelman, vice president and general manager of Intel Xeon products and data center marketing

Why CPU Performance Leadership Matters: The continued explosion of data and the need to process, store, analyze and share it is driving industry innovation and incredible demand for computing performance in the cloud, the network and the enterprise. Delivering world-record CPU performance enables enterprises to accelerate their operations and increase productivity. The use of Intel Xeon Scalable processors within cloud data centers, the enterprise and out to the edge allows customers to build fast, high-performing energy-efficient infrastructure.

What Performance Records Were Set: Intel Xeon Scalable processors deliver world-record performance in a variety of server platforms, ranging from general computing workloads running on single-socket systems to advanced technical computing and big data analytics workloads running on eight-socket systems. All systems tested include mitigations for Spectre and Meltdown.

A full list of the most recent world records can be found on Intel’s Xeon Scalable benchmarks page.

What Intel Xeon Scalable Processors Deliver: The Intel Xeon Scalable processor features a new core built from the ground up for the diverse workload needs and rapid growth of the data-centric era. The processor family offers up to 28 cores and 56 threads per processor, a 50 percent increase in memory channels and 20 percent more PCIe lanes compared with the prior generation. It also delivers up to 2x the flops per cycle with Intel® Advanced Vector Extensions 512 (Intel® AVX-512) compared with Intel® Advanced Vector Extensions 2 (Intel® AVX2), enabling large gains in many high-performance computing applications and laying the foundation for emerging workloads such as artificial intelligence.
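To make the "2x flops/cycle" claim concrete, here is a rough peak-FLOPS estimate. The two-FMA-unit-per-core configuration and the 2.0 GHz sustained AVX-512 clock are our assumptions for a top-bin part, not figures from this announcement:

```python
# Rough double-precision peak-FLOPS estimate illustrating the AVX-512
# vs. AVX2 flops-per-cycle claim above. Assumptions (ours): a 28-core
# part with two FMA units per core and a 2.0 GHz sustained AVX-512 clock.
cores = 28
avx512_ghz = 2.0

# Each FMA counts as two floating-point ops (multiply + add) per lane.
flops_per_cycle_avx2 = 2 * 4 * 2    # 2 FMA units x 4 DP lanes x 2 ops = 16
flops_per_cycle_avx512 = 2 * 8 * 2  # 2 FMA units x 8 DP lanes x 2 ops = 32

peak_gflops = cores * avx512_ghz * flops_per_cycle_avx512
print(flops_per_cycle_avx512 / flops_per_cycle_avx2, peak_gflops)  # 2.0 1792.0
```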

The Small Print:

1World Record Configurations: Results and configurations as of September 14, 2018 or as noted

  • World record claims are determined by evaluating published results from corresponding benchmark organizations as of 14 September 2018.
  • The test sponsors attest, as of date of publication, that CVE-2017-5754 (Meltdown) is mitigated in the system as tested and documented.
  • The test sponsors attest, as of date of publication, that CVE-2017-5753 (Spectre variant 1) is mitigated in the system as tested and documented.
  • The test sponsors attest, as of date of publication, that CVE-2017-5715 (Spectre variant 2) is mitigated in the system as tested and documented.
  • Additional configuration data can be found in each of the full disclosure test reports.

The post Intel Xeon Scalable Processors Set 95 New Performance World Records appeared first on Intel Newsroom.

Supply Update

To our customers and partners,

The first half of this year showed remarkable growth for our industry. I want to take a moment to recap where we’ve been, offer our sincere thanks and acknowledge the work underway to support you with performance-leading Intel products to help you innovate.

First, the situation … The continued explosion of data and the need to process, store, analyze and share it is driving industry innovation and incredible demand for compute performance in the cloud, the network and the enterprise. In fact, our data-centric businesses grew 25 percent through June, and cloud revenue grew a whopping 43 percent in the first six months. The performance of our PC-centric business has been even more surprising. Together as an industry, our products are convincing buyers it’s time to upgrade to a new PC. For example, second-quarter PC shipments grew globally for the first time in six years, according to Gartner. We now expect modest growth in the PC total addressable market (TAM) this year for the first time since 2011, driven by strong demand for gaming as well as commercial systems – a segment where you and your customers trust and count on Intel.

We are thrilled that in an increasingly competitive market, you keep choosing Intel. Thank you.

Now for the challenge… The surprising return to PC TAM growth has put pressure on our factory network. We’re prioritizing the production of Intel® Xeon® and Intel® Core™ processors so that collectively we can serve the high-performance segments of the market. That said, supply is undoubtedly tight, particularly at the entry-level of the PC market. We continue to believe we will have at least the supply to meet the full-year revenue outlook we announced in July, which was $4.5 billion higher than our January expectations.

To address this challenge, we’re taking the following actions:

  • We are investing a record $15 billion in capital expenditures in 2018, up approximately $1 billion from the beginning of the year. We’re putting that $1 billion into our 14nm manufacturing sites in Oregon, Arizona, Ireland and Israel. This capital, along with other efficiencies, is increasing our supply to respond to your increased demand.
  • We’re making progress with 10nm. Yields are improving and we continue to expect volume production in 2019.
  • We are taking a customer-first approach. We’re working with your teams to align demand with available supply. You can expect us to stay close, listen, partner and keep you informed.

The actions we are taking have put us on a path of continuous improvement. At the end of the day, we want to help you make great products and deliver strong business results. Many of you have been longtime Intel customers and partners, and you have seen us at our best when we are solving problems.


Bob Swan
Intel Corporation CFO and Interim CEO


Forward-Looking Statements

Statements in this letter that refer to forecasts, future plans or expectations are forward-looking statements that involve a number of risks and uncertainties. Statements that refer to or are based on projections, uncertain events or assumptions also identify forward-looking statements. Such statements are based on the company’s current expectations and involve many risks and uncertainties that could cause actual results to differ materially from those expressed or implied in these forward-looking statements. Important factors that could cause actual results to differ materially from the company’s expectations are set forth in Intel’s earnings release dated July 26, 2018, which is included as an exhibit to Intel’s Form 8-K furnished to the SEC on such date. Additional information regarding these and other factors that could affect Intel’s results is included in Intel’s SEC filings, including the company’s most recent reports on Forms 10-K and 10-Q. Copies of Intel’s Form 10-K, 10-Q and 8-K reports may be obtained by visiting our Investor Relations website or the SEC’s website.


Intel Adds to Portfolio of FPGA Programmable Acceleration Cards to Speed Up Data Center Computing


What’s New: Intel today extended its field programmable gate array (FPGA) acceleration platform portfolio with the addition of the new Intel® Programmable Acceleration Card (PAC) with Intel® Stratix® 10 SX FPGA, Intel’s most powerful FPGA. This high-bandwidth card leverages the Acceleration Stack for Intel® Xeon® CPU with FPGAs, providing data center developers a robust platform to deploy FPGA-based accelerated workloads. Hewlett Packard Enterprise* will be the first OEM to incorporate the Intel PAC with Stratix 10 SX FPGA along with the Intel Acceleration Stack for Intel Xeon Scalable processor with FPGAs into its server offering.

“We’re seeing a growing market for FPGA-based accelerators, and with Intel’s new FPGA solution, more developers – no matter their expertise – can adopt the tool and benefit from workload acceleration. We plan to use the Intel Stratix 10 PAC and acceleration stack in our offerings to enable customers to easily manage complex, emerging workloads.”
–Bill Mannel, vice president and general manager, HPC and AI Group, HPE

What It Does: Like the previously announced Intel PAC with Intel® Arria® 10 FPGA, this new Intel PAC with Stratix 10 SX FPGA supports an ecosystem of design partners that delivers IP to accelerate a wide range of application workloads. The Intel PAC with Stratix 10 SX FPGA is a larger form factor card built for inline processing and memory-intensive workloads, like streaming analytics and video transcoding. The smaller form factor Intel PAC with Arria 10 FPGA, meanwhile, is ideal for backtesting, database acceleration and image processing workloads.

Why It’s Important: As the demands for big data and artificial intelligence (AI) increase, the reprogrammable technology of the FPGA meets the processing requirements and changing workloads of data center applications. With reconfigurable logic, memory and digital signal processing blocks, FPGAs can be programmed to execute any type of function with high throughput and real-time performance, making them ideal for many critical enterprise and cloud applications.

The acceleration stack for Intel Xeon CPU with Intel FPGAs works with industry-leading OS, virtualization and orchestration software partners, providing a common interface for software developers to get faster time to revenue, simplified management and access to a growing ecosystem of acceleration workloads.

What the Solution Includes:

  • Intel-validated Intel Programmable Acceleration Card (PAC) with Intel Stratix 10 SX FPGA.
  • Production-grade FPGA Interface Manager (FIM) to which Intel and partner accelerator functional units (AFUs) are connected.
  • Acceleration Stack for Intel Xeon CPU with FPGAs, including a common set of APIs and open-source drivers that work seamlessly with industry-leading OS, virtualization and orchestration software across the portfolio of Intel programmable acceleration cards.
  • Support for native, network-attached workloads; initial partners include Adaptive Microware* and Megh Computing*, with more to come.
  • Workloads available through the acceleration workload storefront for ease of evaluation.

More Context: Programmable Solutions Group News

The Small Print: Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer.


Intel and Industry Partners Accelerate 5G in China


What’s New: Today at the Intel 5G Network Summit in Beijing, China, Sandra Rivera, senior vice president of Intel’s Network Platforms Group, unveiled new developments across the 5G value chain alongside a powerful ecosystem of industry leaders, including Baidu*, China Mobile*, China Telecom*, China Unicom*, H3C*, Huawei*, Tencent*, Unisoc* and ZTE*.

“By providing end-to-end technologies and collaboration with our partner ecosystem in China, Intel will accelerate the path to 5G. This is another excellent example of how we are uniquely able to bring together the worlds of connectivity, computing and cloud for a seamlessly connected, powerfully smart 5G future.”
–Sandra Rivera, Intel senior vice president of the Network Platforms Group

Why It’s Important: Intel’s end-to-end portfolio of technologies and solutions makes it a key enabler in delivering on the promise of 5G. Intel is bringing together an ecosystem of telecommunications equipment manufacturers (TEMs) and operators to accelerate 5G commercialization. Through leading industry keynotes and a panel, today’s event also gave attendees a deep look at the progress being made by Intel and partners in 5G networks in China.

What Was Unveiled: Among the day’s top news:

  • Unisoc, which produces chipsets for mobile phones, shared plans to utilize Intel 5G modems for mid-tier Android* smartphones in China and globally with its applications processor, ROC1. Unisoc CTO Xiaoxin Qiu appeared on stage with Dr. Cormac Conroy, Intel vice president and general manager of the Communication and Devices Group, who said Intel will target broad global markets, building upon its strong momentum in LTE modems as 5G scales.
  • Cloud provider Baidu’s System Department executive director Zhenyu Hou announced that a joint artificial intelligence and 5G innovation lab will be developed with Intel to explore converged edge and cloud services to provide better user experiences, delivering 5G-ready applications in the areas of the Internet of Things, entertainment and automotive.
  • China Unicom and the Beijing Organizing Committee for the Olympic Games (BOCOG) unveiled plans to collaborate with Intel to deliver new 5G experiences and capabilities at the coming 2022 Winter Olympics.

Why an Ecosystem Matters: Intel has a long history in China as an enabler with technologies for computing, data center and cloud and will deliver 5G with its ecosystem to service providers and operators. Its partners in the region are key to this transformation. Because 5G experiences will only be as capable as the network that supports them, Intel’s focus in cloud computing from the data center to the edge to devices enables partners to leverage existing Intel® Xeon® processor-based infrastructure to rapidly develop, test and deploy the next-generation experiences and services for customers. This includes network functions virtualization (NFV) and software-defined networking (SDN) solutions that run on Intel.

Other Unveilings:

  • Alibaba AliOS named Intel as one of the first strategic partners of its intelligent transportation initiative, aiming to support the construction of an intelligent road traffic network. The two companies, along with Datang Telecom*, will explore V2X usage models with respect to 5G communication and edge computing based on the Intel Network Edge Virtualization Software Development Kit (NEV SDK), as shared at the recent Alibaba Yunqi Conference in Hangzhou. (Earlier story: Alibaba and Intel Transforming Data-Centric Computing from Hyperscale Data Centers to the Edge)
  • H3C, an Ethernet switch maker, and Comba Telecom Holdings* outlined plans to use an Intel FlexRAN 5G NR-compliant solution for 5G.
  • Huawei shared successes in interoperability trials with Intel as part of the IMT 2020 5G Phase 3 testing and announced that the two companies will continue to work together on driving this International Telecommunications Union (ITU) standard to completion.
  • Tencent WeTest is deploying an industry-leading edge-cloud gaming platform based on Intel Xeon processors to drive the gaming industry ecosystem into the next phase of transformation, with a focus on infrastructure, game R&D, distribution and devices.

More Context: 5G at Intel

© 2018 Intel Corporation. Intel, the Intel logo, and Intel Xeon are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries.
