New Supercomputer at Texas Advanced Computing Center Powered by Next-Gen Intel Xeon Scalable Processors

A digitally produced image shows the Earth’s mantle convection in a simulation enabled by the NSF-funded Stampede supercomputer. The planned Frontera system will allow researchers to incorporate more observations into simulations, leading to new insights into the main drivers of plate motion. [Courtesy of ICES, UT Austin]
The Texas Advanced Computing Center (TACC) at The University of Texas at Austin announced that its latest supercomputer, known as Frontera, will be powered by next-generation Intel® Xeon® Scalable processors. Academic researchers hope the new supercomputer will enable important discoveries across scientific fields, from astrophysics to zoology.

“Accelerating scientific discovery lies at the foundation of TACC’s mission, and enabling technologies to advance these discoveries and innovations is a key focus for Intel,” said Patricia Damkroger, vice president and general manager, Extreme Computing Group at Intel.

Frontera is expected to enter production in the summer of 2019. It is the latest in a string of successful supercomputers from TACC that leverage Intel technology. Since 2006, TACC has built and operated three supercomputers that debuted among the world’s 10 most powerful systems: Ranger (2008), Stampede1 (2012) and Stampede2 (2017).

TACC claims that if completed today, Frontera would be the fifth-most-powerful system in the world, the third-fastest in the U.S. and the largest at any university. Frontera will be roughly twice as powerful as the Intel Xeon processor-based Stampede2 (currently the fastest university supercomputer) and 70 times larger than Ranger, which operated until 2013. To match what Frontera will compute in one second, a person would have to perform one calculation every second for roughly 1 billion years.
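The billion-year claim is easy to sanity-check with back-of-the-envelope arithmetic. The article does not state Frontera's peak speed, so the ~32-petaflops figure below is purely an illustrative assumption, not a number from the announcement:

```python
# Rough sanity check of the "one calculation per second" comparison.
# Assumption (not from the article): Frontera computes on the order of
# tens of petaflops, i.e. roughly 3.2e16 operations per second.
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.156e7 seconds in a year
ops_in_one_second = 3.2e16              # hypothetical peak, ~32 petaflops

# How many years would a person need at one calculation per second?
years = ops_in_one_second / SECONDS_PER_YEAR
print(f"{years:.2e} years")             # on the order of 1e9 (a billion) years
```

At that assumed rate, matching one second of Frontera's output takes roughly a billion years of one-per-second calculations, consistent with the article's framing.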

The Frontera supercomputer continues the close collaboration between TACC and Intel to drive high-performance computing innovations with unprecedented performance, productivity and efficiency.

Read more at “New Texas Supercomputer to Push the Frontiers of Science.”

The post New Supercomputer at Texas Advanced Computing Center Powered by Next-Gen Intel Xeon Scalable Processors appeared first on Intel Newsroom.

Intel Hosts NASA Frontier Development Lab Demo Day for 2018 Research Presentations

A NASA Frontier Development Lab photo shows a crater on the moon. (Credit: NASA Frontier Development Lab)

What’s New: Today Intel hosts the NASA Frontier Development Lab (FDL)* Event Horizon 2018: AI+Space Challenge Review Event (view live webcast) in Santa Clara, California. It concludes an eight-week research program that applies artificial intelligence (AI) technologies to challenges faced in space exploration. For the third year, NASA Ames Research Center*, the SETI Institute* and participating program partners have provided support to ongoing research for interdisciplinary AI approaches, leveraging the latest hardware technology with advanced machine learning tools.

“Artificial intelligence is expected to significantly influence the future of the space industry and power the solutions that create, use and analyze the massive amounts of data generated. The NASA FDL summer program represents an incredible opportunity to take AI applications and implement them across different challenges facing space science and exploration today.”
– Amir Khosrowshahi, vice president and chief technology officer, Artificial Intelligence Products Group, Intel

Why It’s Important: Through its work with FDL, Intel is addressing critical knowledge gaps by using AI to further space exploration and solve problems that can affect life on Earth.

New tools in artificial intelligence are already demonstrating a new paradigm in robotics, data acquisition and analysis, while also driving down the barriers to entry for scientific discovery. FDL’s researcher program participants have implemented AI to predict solar activity, map lunar poles, build 3D shape models of potentially hazardous asteroids, discover uncategorized meteor showers and determine the efficacy of asteroid mitigation strategies.

“This is an exciting time for space science. We have this wonderful new toolbox of AI technologies that allows us not only to optimize and automate, but also to better predict space phenomena – and ultimately derive a better understanding,” said James Parr, FDL director.

The Challenge: Since 2017, Intel has been a key partner to FDL, contributing computing resources and AI and data science mentorship. Intel sponsored two Space Resources teams, which used the Intel® Xeon® platform for inference and training, as well as the knowledge of Intel principal engineers:

  • Space Resources Team 1:  Autonomous route planning and cooperative platforms to coordinate routes between a group of lunar rovers and a base station, allowing the rovers to autonomously cooperate in order to complete a mission.
  • Space Resources Team 2: Localization – merging orbital maps with surface perspective imagery to allow NASA engineers to locate a rover on the lunar surface using only imagery. This is necessary since there is no GPS in space. A rover using the team’s algorithm will be able to precisely locate itself by uploading a 360-degree view of its surroundings as four images.

More Challenges: Additional challenges presented during the event include:

  • Astrobiology Challenge 1: Understanding What is Universally Possible for Life
  • Astrobiology Challenge 2: From Biohints to Evidence of Life: Possible Metabolisms within Extraterrestrial Environmental Substrates
  • Exoplanet Challenge: Increase the Efficacy and Yield of Exoplanet Detection from TESS and Codify the Process of AI-Derived Discovery
  • Space Weather Challenge 1: Improve Ionospheric Models Using Global Navigation Satellite System Signal Data
  • Space Weather Challenge 2: Predicting Solar Spectral Irradiance from SDO/AIA Observations

What’s Next: At the conclusion of the projects, FDL will open-source research and algorithms, allowing the AI and space communities to leverage work from the eight teams in future space missions.


Intel and Philips Accelerate Deep Learning Inference on CPUs in Key Medical Imaging Uses

What’s New: Using Intel® Xeon® Scalable processors and the OpenVINO™ toolkit, Intel and Philips* tested two healthcare use cases for deep learning inference models: one on X-rays of bones for bone-age-prediction modeling, the other on CT scans of lungs for lung segmentation. In these tests, Intel and Philips achieved a speed improvement of 188 times for the bone-age-prediction model, and a 38 times speed improvement for the lung-segmentation model over the baseline measurements.

“Intel Xeon Scalable processors appear to be the right solution for this type of AI workload. Our customers can use their existing hardware to its maximum potential, while still aiming to achieve quality output resolution at exceptional speeds.”
–Vijayananda J., chief architect and fellow, Data Science and AI at Philips HealthSuite Insights

Why It’s Important: Until recently, there was one prominent hardware solution for accelerating deep learning: graphics processing units (GPUs). By design, GPUs work well with images, but they also have inherent memory constraints that data scientists have had to work around when building some models.

Central processing units (CPUs) – in this case Intel Xeon Scalable processors – don’t have those same memory constraints and can accelerate complex, hybrid workloads, including larger, memory-intensive models typically found in medical imaging. For a large subset of artificial intelligence (AI) workloads, Intel Xeon Scalable processors can better meet data scientists’ needs than GPU-based systems. As Philips found in the two recent tests, this enables the company to offer AI solutions at lower cost to its customers.

Why It Matters: AI techniques such as object detection and segmentation can help radiologists identify issues faster and more accurately, which can translate to better prioritization of cases, better outcomes for more patients and reduced costs for hospitals.

Deep learning inference applications typically process workloads in small batches or in a streaming manner, which means they do not exhibit large batch sizes. CPUs are a great fit for low batch or streaming applications. In particular, Intel Xeon Scalable processors offer an affordable, flexible platform for AI models – particularly in conjunction with tools like the OpenVINO toolkit, which can help deploy pre-trained models for efficiency, without sacrificing accuracy.

These tests show that healthcare organizations can implement AI workloads without expensive hardware investments.

What the Results Show: The results for both use cases surpassed expectations. The bone-age-prediction model went from an initial baseline test result of 1.42 images per second to a final tested rate of 267.1 images per second after optimizations – an increase of 188 times. The lung-segmentation model far surpassed the target of 15 images per second by improving from a baseline of 1.9 images per second to 71.7 images per second after optimizations.
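The headline multipliers follow directly from the before/after throughput figures quoted above, as a quick check confirms:

```python
# Verify the reported speedups from the measured throughput numbers
# (images per second, baseline vs. after optimization).
bone_age_speedup = 267.1 / 1.42   # bone-age-prediction model
lung_seg_speedup = 71.7 / 1.9     # lung-segmentation model

print(round(bone_age_speedup))    # 188, matching the reported 188x
print(round(lung_seg_speedup))    # 38, matching the reported 38x
```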

What’s Next: Running healthcare deep learning workloads on CPU-based devices offers direct benefits to companies like Philips, because it allows them to offer AI-based services that don’t drive up costs for their end customers. As shown in this test, companies like Philips can offer AI algorithms for download through an online store as a way to increase revenue and differentiate themselves from growing competition.

More Context: Multiple trends are contributing to this shift:

  • As medical image resolution improves, medical image file sizes are growing – many images are 1GB or greater.
  • More healthcare organizations are using deep learning inference to more quickly and accurately review patient images.
  • Organizations are looking for ways to do this without buying expensive new infrastructure.

The Philips tests are just one example of these trends in action. Novartis* is another. And many other Intel customers – not yet publicly announced – are achieving similar results. Learn more about Intel AI technology in healthcare at “Advancing Data-Driven Healthcare Solutions.”


Intel Eases Use of FPGA Acceleration: Combines Platforms, Software Stack and Ecosystem Solutions to Maximize Performance and Lower Data Center Costs

Today, Intel announced a comprehensive hardware and software platform solution to enable faster deployment of customized field programmable gate array (FPGA)-based acceleration of networking, storage and computing workloads.

Complex data-intensive applications in the areas of genomics, finance, industry 4.0 and other disciplines are pushing the boundaries of data center capabilities. Intel® FPGA-based acceleration exploits massively parallelized hardware offloading to maximize performance and power efficiency of data centers.

The new solution abstracts the complexities of implementation to enable architects and developers to quickly develop and deploy power-efficient acceleration of a variety of applications and workloads.

This consists of three major elements:

  • Intel-qualified FPGA acceleration platforms that operate seamlessly with Intel® Xeon® CPUs
  • The Acceleration Stack for Intel Xeon CPU with FPGAs, which provides industry-standard frameworks, interfaces and optimized libraries
  • Growing ecosystem of market-specific solutions

The Intel Programmable Acceleration Card with Intel Arria 10 GX FPGA plugs into a server to accelerate workloads. (Credit: Intel Corporation)
Today, Intel is unveiling the Intel Programmable Acceleration Card with the Intel® Arria® 10 GX FPGA, the first in a family of Intel® Programmable Acceleration Cards, enabled by the acceleration stack.

This new platform approach enables Original Equipment Manufacturers (OEMs), such as Dell EMC, to offer Intel Xeon processor-based server acceleration solutions with their unique value add.

“Dell EMC continues to be committed to delivering technology that helps our customers transform their business and IT,” said Brian Payne, vice president, Product Management and Marketing, Server Solutions Division, Dell EMC. “With this collaboration, Dell EMC and Intel are combining a reliable platform with an emerging software ecosystem that provides a new technology capability for customers to unlock their business potential.”

“Intel is making it easier for server equipment makers such as Dell EMC to exploit FPGA technology for data acceleration as a ready-to-use platform,” said Dan McNamara, corporate vice president and general manager of Intel’s Programmable Solutions Group. “With our ecosystem partners, we are enabling the industry with point solutions with a substantial boost in performance while preserving power and cost budgets.”

The Intel Programmable Acceleration Card with Intel Arria 10 GX FPGA is sampling now and is expected to be broadly available in the first half of 2018.




Intel Xeon Scalable Processors Accelerate Creation and Innovation in Next-Generation Workstations


Workstations powered by Intel® Xeon® processors meet the most stringent demands for professionals seeking to increase productivity and rapidly bring data to life. Intel today disclosed that the world-record performance of the Intel Xeon Scalable processors is now available for next-generation expert workstations to enable photorealistic design, modeling, artificial intelligence (AI), analytics and virtual-reality (VR) content creation.

MORE: Intel Xeon Scalable Processors Designed for Professional Workstations (Lisa Spelman Blog) | Intel Xeon Workstation Overview (PDF) | Intel Xeon Scalable Processors (Press Kit)

One of the most exciting trends in entertainment today is immersive 3D VR media, and professional workstations are a key component to the creation of this content. Rendering immersive media is a time-consuming process that demands the highest performance workstations. Companies like Technicolor* are using Intel Xeon Scalable processors to push the boundaries of immersive media by accelerating the creation, rendering and processing of this data, and bringing the ultimate VR creation experience to life. Marcie Jastrow, Technicolor senior vice president, Immersive Media and Head of the Technicolor Experience Center, stated, “Intel Xeon Scalable processors represent the ultimate in what is possible in VR today, and it also makes me feel very hopeful about what will happen tomorrow in immersive VR media.”

Beyond VR, many organizations are taking advantage of workstations to accelerate creation and innovation. From faster time to market with computer-aided design tools and creating ultrahigh-definition (HD) and 3D content, to improving medical care and driving faster financial trades and AI analytics, workstations give professionals a powerful productivity tool.

Unveiled in July 2017, Intel Xeon Scalable processors deliver breakthrough dual-socket performance1 for the most advanced workstation professionals, offering up to 56 cores, up to 112 threads and an Intel® Turbo Boost Technology frequency up to 4.2 GHz. Expert workstations will experience up to a 2.71x boost in performance compared to a 4-year-old system2 and up to 1.65x higher performance compared to the previous generation.3

As part of today’s news, Intel also unveiled the new Intel Xeon W processors, targeting mainstream workstations. The Intel Xeon W processor delivers optimized performance1 for traditional workstation professionals by combining mainstream performance, enhanced memory capabilities, and hardware-enhanced security and reliability features.

The single-socket Intel Xeon W processor delivers mainstream performance optimized1 for the needs of traditional workstation professionals. The Intel Xeon W processor features up to 18 cores and up to 36 threads, with an Intel Turbo Boost Technology frequency up to 4.5 GHz. Mainstream workstations will experience up to a 1.87x boost in performance compared to a 4-year-old system4 and up to 1.38x higher performance compared to the previous generation.5
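The four headline multipliers in the two paragraphs above can be reproduced from the SPEC scores given in footnotes 2 through 5, as this check illustrates:

```python
# Cross-check of the headline speedups against the SPEC scores in the
# configuration footnotes: (new score, old score, claimed multiplier).
claims = {
    "expert vs. 4-year-old (SPECfp_rate_base2006)":      (1850, 682, 2.71),
    "expert vs. prior gen (SPECfp_rate_base2006)":       (1850, 1120, 1.65),
    "mainstream vs. 4-year-old (SPECint_rate_base2006)": (622, 332, 1.87),
    "mainstream vs. prior gen (SPECint_rate_base2006)":  (622, 449, 1.38),
}

for label, (new, old, claimed) in claims.items():
    ratio = new / old
    # Each published multiplier agrees with the score ratio to within rounding.
    assert abs(ratio - claimed) < 0.01, label
    print(f"{label}: {ratio:.2f}x (claimed {claimed}x)")
```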

To learn more about gaining the ultimate performance with professional-grade workstations based on Intel Xeon processors, visit Intel’s workstation page, and to learn more about the Intel Xeon Scalable platform, visit Intel’s Xeon Scalable page.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit http://www.intel.com/benchmarks.

1 Statements are based on new Intel products and features compared against historical Intel products and features. Unless otherwise noted, statements and examples referencing Intel® Xeon® Scalable processors are shown based on a dual-socket configuration. Statements and examples referencing Intel® Xeon® W processors are shown as single-socket configurations only.

2 Up to 2.71x performance improvement versus a 4-year-old workstation. Config: Based on best-published two-socket SPECfp*_rate_base2006 result submitted to/published at http://www.spec.org/cpu2006/results/ as of 11 July 2017. New configuration: 1-Node, 2 x Intel® Xeon® Platinum 8180 processor on Huawei 2288H V5 with 384 GB total memory on SUSE Linux Enterprise Server 12 SP2 (x86_64) Kernel 4.4.21-69-default, using C/C++ and Fortran: Version 17.0.0.098 of Intel C/C++ and Intel Fortran Compiler for Linux. Source: submitted to www.spec.org, SPECfp*_rate_base2006 Score: 1850 compared to 1-Node, 2x Intel® Xeon® E5-2697 v2 on Cisco Systems Cisco UCS C220 M3 using 128 GB total memory on Red Hat Enterprise Linux Server release 6.4 (Santiago) 2.6.32-358.el6.x86_64 C/C++: Version 14.0.0.080 of Intel C++ Studio XE for Linux; Fortran: Version 14.0.0.080 of Intel Fortran Studio XE for Linux. Source: https://www.spec.org/cpu2006/results/res2013q4/cpu2006-20130923-26455.html SPECfp*_rate_base2006 Score: 682

3 Up to 1.65x performance improvement gen-on-gen. Config: Based on best-published two-socket SPECfp*_rate_base2006 result submitted to/published at http://www.spec.org/cpu2006/results/ as of 11 July 2017. New configuration: 1-Node, 2 x Intel® Xeon® Platinum 8180 processor on Huawei 2288H V5 with 384 GB total memory on SUSE Linux Enterprise Server 12 SP2 (x86_64) Kernel 4.4.21-69-default, using C/C++ and Fortran: Version 17.0.0.098 of Intel C/C++ and Intel Fortran Compiler for Linux. Source: submitted to www.spec.org, SPECfp*_rate_base2006 Score: 1850 compared to 1-Node, 2x Intel® Xeon® E5-2699A v4 on Lenovo Group Limited Lenovo System x3650 M5 using 256 GB total memory on SUSE Linux Enterprise Server 12 (x86_64) Kernel 3.12.49-11-default C/C++: Version 17.0.0.098 of Intel C/C++ Compiler for Linux; Fortran: Version 17.0.0.098 of Intel Fortran Compiler for Linux. Source: https://www.spec.org/cpu2006/results/res2016q4/cpu2006-20161129-45946.html; SPECfp*_rate_base2006 Score: 1120

4 Up to 1.87x performance improvement versus a 4-year-old workstation. Config: 1-Node, 1 x Intel® Xeon® processor E5-1680 v2 on Romley-EP with 64 GB Total Memory on CentOS release 6.9 2.6.32-431.el6.x86_64 using C/C++: Version 14.0.0.080 of Intel C/C++ studio XE for Linux, AVX Data Source: Request Number: 3822, Benchmark: SPECint*_rate_base2006, Score: 332 Higher is better; vs 1-Node, 1 x Intel® Xeon® W-2155 processor on Basin Falls RVP with 128 GB Total Memory on Red Hat Enterprise Linux* 7.3 using CPU2006-1.2-ic17.0u3-lin-binaries-20170411. Data Source: Request Number: 3821, Benchmark: SPECint*_rate_base2006, Score: 622 Higher is better.

5 Up to 1.38x performance improvement versus previous generation. Config: 1-Node, 1 x Intel® Xeon® processor E5-1680 v4 on Supermicro SYS_5038A-A with 128 GB Total Memory on Red Hat Enterprise Linux* 7.3 kernel 3.10.0-514.16.1.el7x86_64 using C/C++: Version 17.0.3.1919 of Intel C/C++ Compiler for Linux, AVX2 Data Source: Request Number: 3822, Benchmark: SPECint*_rate_base2006, Score: 449 Higher is better; vs 1-Node, 1 x Intel® Xeon® W-2155 processor on Basin Falls RVP with 128 GB Total Memory on Red Hat Enterprise Linux* 7.3 using CPU2006-1.2-ic17.0u3-lin-binaries-20170411. Data Source: Request Number: 3821, Benchmark: SPECint*_rate_base2006, Score: 622 Higher is better.



Intel and Microsoft Collaborate to Deliver Industry-First Enterprise Blockchain Service

blockchain-2x1
» Click to view full infographic

Today, Microsoft announced a new framework that enables businesses to adopt blockchain technology for increased enterprise privacy and security, and named Intel as a key hardware and software development partner. As part of this collaboration, Microsoft, Intel and other blockchain technology leaders will build a new enterprise-targeted blockchain framework – called the Coco Framework – that integrates Intel® Software Guard Extensions (Intel SGX) to deliver improved transaction speed, scale and data confidentiality to enterprises. This first-of-its-kind innovation accelerates the enterprise readiness of blockchain technology, allowing developers to create flexible and more secure enterprise blockchain applications that can be easily managed by businesses.

Rick Echevarria Blog: Collaborating with Microsoft to Strengthen Enterprise Blockchains

Blockchain is a digital record-keeping system where digital transactions are executed, validated and recorded chronologically and publicly. Because it’s decentralized and transparent, it increases the efficiency and security of financial transactions – and does so at a significantly lower cost than traditional ledgers. The technology can be used for everything from simple file sharing to complex global payment processing and has the potential to transform the way companies operate.
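
The chronological, tamper-evident record keeping described above can be sketched in a few lines: each block commits to a hash of its predecessor, so altering any recorded transaction breaks every later link. This is a minimal stdlib-only illustration of the idea (the `Ledger` class and its methods are hypothetical names, not part of any product mentioned here):

```python
import hashlib
import json
import time

def block_hash(block):
    """Deterministically hash a block's contents (sorted keys for stability)."""
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

class Ledger:
    """A toy append-only ledger: each block stores the previous block's hash."""

    def __init__(self):
        genesis = {"index": 0, "timestamp": 0.0, "tx": None, "prev_hash": "0" * 64}
        self.chain = [genesis]

    def append(self, tx):
        prev = self.chain[-1]
        block = {
            "index": prev["index"] + 1,
            "timestamp": time.time(),
            "tx": tx,
            "prev_hash": block_hash(prev),  # commit to the entire history so far
        }
        self.chain.append(block)
        return block

    def validate(self):
        """Recompute every link; any tampering breaks a prev_hash check."""
        for prev, cur in zip(self.chain, self.chain[1:]):
            if cur["prev_hash"] != block_hash(prev):
                return False
        return True

ledger = Ledger()
ledger.append({"from": "alice", "to": "bob", "amount": 10})
ledger.append({"from": "bob", "to": "carol", "amount": 4})
assert ledger.validate()

# Rewriting a recorded transaction invalidates the chain.
ledger.chain[1]["tx"]["amount"] = 1000
assert not ledger.validate()
```

Real blockchains add distributed consensus and cryptographic signatures on top, but the hash-chained structure is what makes the record chronological and tamper-evident.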

Intel, Microsoft and other blockchain technology leaders are working together to deliver security-enhanced, scalable capabilities in blockchain services. The Coco Framework uses Intel SGX to add new levels of privacy and confidentiality to blockchain transactions. Intel SGX is a hardware-based security technology that can help improve blockchain solutions by providing a trusted execution environment that isolates key portions of a blockchain program. Intel SGX consists of a set of CPU instructions and platform enhancements that create private areas in the CPU and memory that can protect code and data during execution. Intel SGX helps the Coco Framework provide confidential data and accelerated transaction throughput. The data confidentiality is achieved by encrypting sensitive blockchain data until it is opened in an Intel SGX enclave by a permitted program. The accelerated throughput is achieved by isolating the transaction verification process to speed network consensus.
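
The confidentiality pattern described above — sensitive data stays encrypted until a permitted program opens it inside the enclave — can be sketched conceptually. The following is a plain-Python illustration of the idea only, not real Intel SGX: the `Enclave` class, the toy keystream cipher, and all names here are hypothetical stand-ins for hardware-backed mechanisms.

```python
import hashlib
import os

def keystream_xor(key, data):
    """Toy counter-mode keystream built from SHA-256. Illustration only --
    not a vetted cipher; real systems use hardware-backed sealing keys."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

class Enclave:
    """Stands in for a trusted execution environment: the key never leaves it."""

    def __init__(self):
        self._key = os.urandom(32)  # in real SGX, keys are bound to the CPU

    def seal(self, plaintext):
        # Data is handed to the outside world only in encrypted form.
        return keystream_xor(self._key, plaintext)

    def process(self, sealed):
        # Only code running "inside" the enclave can recover the plaintext.
        return keystream_xor(self._key, sealed)

enclave = Enclave()
message = b"transfer 10 from alice to bob"
sealed = enclave.seal(message)
assert sealed != message                    # opaque outside the enclave
assert enclave.process(sealed) == message   # recoverable only inside it
```

The point of the hardware version is that even a compromised operating system or a malicious node operator cannot read the plaintext, because the decryption key exists only inside the protected CPU region.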

Intel is an active participant in the blockchain ecosystem, helping to develop standards, contributing technology and providing expert insight. Intel also works with industry leaders to improve the performance, reliability and scalability of blockchain technologies.

Intel® Xeon processors provide unique capabilities that can improve the privacy, security and scalability of distributed ledger networks. For example, the recently announced Intel® Xeon Scalable processors include a range of hardware-based trust, key protection and crypto-acceleration features that increase blockchain security and performance.

The post Intel and Microsoft Collaborate to Deliver Industry-First Enterprise Blockchain Service appeared first on Intel Newsroom.

AT&T Accelerates Network Transformation on the Path to 5G with Intel Xeon Scalable Processors

Navin Shenoy (left), executive vice president and general manager of the Data Center Group at Intel Corporation, speaks with John Donovan, chief strategy officer and group president of AT&T Technology and Operations, during the introduction of Intel Xeon Scalable processors. (Credit: Intel Corporation)

Intel recently unveiled its new Intel® Xeon® Scalable processors, which enable communications service providers to accelerate the transformation of fixed-function, purpose-built networks into flexible, software-defined networks. These networks will be capable of delivering ultra-low latency, high data capacity and lightning-fast speeds – ushering in the 5G era with billions of devices connected to the cloud and life-changing experiences such as autonomous driving.

Press Kit: Intel Xeon Scalable Processors

AT&T is collaborating with Intel to transform its network and advance the progress of 5G. John Donovan, chief strategy officer and group president of AT&T Technology and Operations, appeared on stage at the Intel Xeon Scalable Processor launch event with Navin Shenoy, executive vice president and general manager of Intel’s Data Center Group, to discuss AT&T’s involvement in the Intel Xeon Scalable processor early access program.

Donovan shared that AT&T was able to get production traffic running on servers using the new processors in weeks, and has seen a 30 percent performance improvement over its current installed base since deploying the Intel Xeon Scalable platform in March. AT&T has also seen a 25 percent reduction in the number of servers needed per cluster, with larger data throughput – improving overall total cost of ownership. Further, Donovan explained that AT&T’s relationship with Intel lets AT&T build, fine-tune and accelerate time to market for its software-centric, cloudified solutions, like AT&T’s FlexWare. All of this is helping AT&T achieve its goal of virtualizing 75 percent of its network by 2020; AT&T expects to reach 55 percent by the end of this year.
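
As a back-of-envelope check on what a 25 percent smaller cluster implies per machine, assume (hypothetically) that total cluster throughput at least holds steady; the baseline cluster size below is an illustrative number, not an AT&T figure:

```python
# If a cluster shrinks 25% while sustaining the same (or larger) total
# throughput, each remaining server must carry at least 1/0.75 of the load.
old_servers = 100                       # hypothetical baseline cluster size
new_servers = old_servers * (1 - 0.25)  # 25% fewer servers
per_server_gain = old_servers / new_servers
print(f"at least {per_server_gain:.2f}x throughput per server")
```

That floor of roughly 1.33x per-server throughput is consistent with the 30 percent per-server performance improvement Donovan cited, which is why fewer servers can still mean larger overall throughput and a better total cost of ownership.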

Donovan further explained that this progress represents the type of improvements needed for AT&T to stay ahead of surging demand for data capacity, rapidly add new services and ready its network for 5G.

The post AT&T Accelerates Network Transformation on the Path to 5G with Intel Xeon Scalable Processors appeared first on Intel Newsroom.