NVIDIA, Canon Medical Systems Partner to Accelerate Deep Learning in Healthcare

Healthcare represents one of the biggest opportunities in deep learning, and we’re partnering with Canon Medical Systems, Japan’s largest medical systems supplier, to develop the research infrastructure to help support it.

The healthcare sector needs to analyze scientific reports from around the world, while simultaneously coordinating a variety of patient data to determine the most appropriate treatment options.

Given the huge volumes of data involved, big data analysis via deep learning will play a major role in the development of optimized healthcare delivery systems and support early detection and assisted diagnosis.

At the same time, medical institutions wanting to use deep learning for independent research need hardware for analysis, systems for the collection, collation and analysis of in-house data, and knowledge of deep learning processes and techniques.

NVIDIA and Canon Medical Systems expect to make a significant contribution to promoting the use of data-intensive deep learning techniques in medical and related research, as well as to driving the uptake of AI in the healthcare sector.

“Canon Medical aims to improve the quality and efficiency of medical services,” said Kiyoshi Imai, vice president and general manager of the Healthcare IT division at Canon Medical Systems. “Our collaboration with NVIDIA will accelerate AI research and provide high-quality medical services using AI, such as image recognition, in the future.”

“We’re partnering with Canon Medical Systems to apply deep learning for early detection and diagnostic support systems that will enable us to address a range of challenges posed by a rapidly aging population,” said Masataka Osaki, vice president of Corporate Sales and NVIDIA Japan Country Manager. “We’re also working together to enhance the global competitiveness of medical and research organizations and contribute to the medical industry of Japan.”

Canon Medical Systems will use NVIDIA DGX systems to process large volumes of medical data generated by Abierto VNA, the proprietary, in-house, medical data management system it launched in January.

DGX systems feature NVIDIA Tesla data center GPUs powered by Volta, the world’s most advanced GPU architecture. The portfolio includes NVIDIA DGX Station, an AI workstation with the computing capacity of four server racks in a desk-friendly package, at 1/20 the power consumption.

The systems include NVIDIA’s specially optimized AI software and Canon Medical Systems’ graphical user interface, which provides full support for the design, deployment and operation of advanced deep learning algorithms.

Abierto VNA and DGX Systems on Exhibition at ITEM

Deep learning usually requires extensive programming and data science skills. The system from Canon Medical Systems and NVIDIA, however, guides users through all the steps involved in the deep learning process, from generating training data with the image viewer to setting up the NVIDIA learning environment.

Canon Medical Systems will be providing demonstrations on how to implement deep learning at medical institutions via Abierto VNA and the NVIDIA DGX systems at the International Technical Exhibition of Medical Imaging (ITEM) at the Japan Radiology Congress, April 13-15, in the exhibition hall at Pacifico Yokohama.

The post NVIDIA, Canon Medical Systems Partner to Accelerate Deep Learning in Healthcare appeared first on The Official NVIDIA Blog.

Kyoto University Chooses Intel Machine and Deep Learning Tech to Tackle Drug Discovery, Medicine and Healthcare Challenges

Kyoto University Graduate School of Medicine*, one of Asia’s leading research-oriented institutions, has recently chosen Intel® Xeon® Scalable processors to power its clinical genome analysis cluster and its molecular simulation cluster. These clusters will aid in Kyoto’s search for new drug discoveries and help reduce research and development costs.

The Intel Xeon Scalable platform offers potent performance for all types of artificial intelligence (AI). Intel’s optimizations for popular deep learning frameworks have produced up to 127 times1 performance gains for deep learning training and 198 times2 performance gains for deep learning inference for AI workloads running on Intel Xeon Scalable processors. Kyoto is one of many leading healthcare providers and research institutions that are working with Intel and using Intel artificial intelligence technology to tackle some of the biggest challenges in healthcare.

More: One Simple Truth about Artificial Intelligence in Healthcare: It’s Already Here (Navin Shenoy Editorial) | Shaping the Future of Healthcare through Artificial Intelligence (Event Video Replay) | Artificial Intelligence (Press Kit) | Advancing Data-Driven Healthcare Solutions (Press Kit)

“We are only at the beginning of solving these problems. We are continuing to push forward and work with industry-leading entities to solve even more,” said Arjun Bansal, vice president of the Artificial Intelligence Products Group and general manager of Artificial Intelligence Labs and Software at Intel Corporation. “For example, I’m happy to announce that Kyoto University has recently chosen Intel to power their clinical genome analysis cluster and their molecular simulation cluster. These clusters are to aid in their drug discovery efforts and should help reduce the R&D costs of testing different compounds and accelerate precision medicine by adopting Deep Learning techniques.”

It can take up to 15 years – and billions of dollars – to translate a drug discovery idea from initial inception to a market-ready product. Identifying the right protein to manipulate in a disease, proving the concept, optimizing the molecule for delivery to the patient, carrying out pre-clinical and clinical safety and efficacy testing are all essential, but ultimately the process takes far too long today.

Dramatic shifts are needed to meet the needs of society and a future generation of patients. Artificial intelligence presents researchers with an opportunity to do R&D differently – driving down the resources and costs to develop drugs and bringing the potential for a substantial increase in new treatments for serious diseases.

1 Performance estimates were obtained prior to implementation of recent software patches and firmware updates intended to address exploits referred to as “Spectre” and “Meltdown.”  Implementation of these updates may make these results inapplicable to your device or system.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors.  Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions.  Any change to any of those factors may cause the results to vary.  You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit http://www.intel.com/benchmarks

Source: Intel measured as of June 2017 Optimization Notice: Intel’s compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Configuration for inference throughput:

Processor: 2-socket Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz, 28 cores, HT on, Turbo on. Total memory: 376.46 GB (12 slots / 32 GB / 2666 MHz). OS: CentOS Linux 7.3.1611 (Core). Storage: SSD sda RS3WC080 HDD 744.1 GB, sdb RS3WC080 HDD 1.5 TB, sdc RS3WC080 HDD 5.5 TB. Deep learning framework: Caffe, version f6d01efbe93f70726ea3796a4b89c612365a6341. Topology: GoogLeNet v1. BIOS: SE5C620.86B.00.01.0004.071220170215. MKL-DNN version: ae00102be506ed0fe2099c6557df2aa88ad57ec1. NoDataLayer. Measured: 1,190 imgs/sec.
vs. platform: 2S Intel® Xeon® CPU E5-2699 v3 @ 2.30GHz (18 cores), HT enabled, turbo disabled, scaling governor set to "performance" via the intel_pstate driver, 256 GB DDR4-2133 ECC RAM. CentOS Linux release 7.3.1611 (Core), Linux kernel 3.10.0-514.el7.x86_64. OS drive: Seagate* Enterprise ST2000NX0253 2 TB 2.5-inch internal hard drive. Performance measured with environment variables KMP_AFFINITY='granularity=fine,compact,1,0', OMP_NUM_THREADS=36; CPU frequency set with cpupower frequency-set -d 2.3G -u 2.3G -g performance. Deep learning frameworks: Intel Caffe (http://github.com/intel/caffe/), revision b0ef3236528a2c7d2988f249d347d5fdae831236; inference measured with the "caffe time --forward_only" command, training measured with the "caffe time" command. For "ConvNet" topologies, a dummy dataset was used; for other topologies, data was stored on local storage and cached in memory before training. Topology specs from https://github.com/intel/caffe/tree/master/models/intel_optimized_models (GoogLeNet, AlexNet and ResNet-50), https://github.com/intel/caffe/tree/master/models/default_vgg_19 (VGG-19) and https://github.com/soumith/convnet-benchmarks/tree/master/caffe/imagenet_winners (ConvNet benchmarks; files were updated to the newer Caffe prototxt format but are functionally equivalent). GCC 4.8.5, MKLML version 2017.0.2.20170110. BVLC Caffe (https://github.com/BVLC/caffe), revision 91b09280f5233cafc62954c98ce8bc4c204e7475 (commit date 5/14/2017); inference and training measured with the "caffe time" command. BLAS: ATLAS ver. 3.10.1.

2 Configuration for training throughput:

Processor: 2-socket Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz, 28 cores, HT on, Turbo on. Total memory: 376.28 GB (12 slots / 32 GB / 2666 MHz). OS: CentOS Linux 7.3.1611 (Core). Storage: SSD sda RS3WC080 HDD 744.1 GB, sdb RS3WC080 HDD 1.5 TB, sdc RS3WC080 HDD 5.5 TB. Deep learning framework: Caffe, version f6d01efbe93f70726ea3796a4b89c612365a6341. Topology: AlexNet. BIOS: SE5C620.86B.00.01.0009.101920170742. MKL-DNN version: ae00102be506ed0fe2099c6557df2aa88ad57ec1. NoDataLayer. Measured: 1,023 imgs/sec.
vs. platform: 2S Intel® Xeon® CPU E5-2699 v3 @ 2.30GHz (18 cores), HT enabled, turbo disabled, scaling governor set to "performance" via the intel_pstate driver, 256 GB DDR4-2133 ECC RAM. CentOS Linux release 7.3.1611 (Core), Linux kernel 3.10.0-514.el7.x86_64. OS drive: Seagate* Enterprise ST2000NX0253 2 TB 2.5-inch internal hard drive. Performance measured with environment variables KMP_AFFINITY='granularity=fine,compact,1,0', OMP_NUM_THREADS=36; CPU frequency set with cpupower frequency-set -d 2.3G -u 2.3G -g performance. Deep learning frameworks: Intel Caffe (http://github.com/intel/caffe/), revision b0ef3236528a2c7d2988f249d347d5fdae831236; inference measured with the "caffe time --forward_only" command, training measured with the "caffe time" command. For "ConvNet" topologies, a dummy dataset was used; for other topologies, data was stored on local storage and cached in memory before training. Topology specs from https://github.com/intel/caffe/tree/master/models/intel_optimized_models (GoogLeNet, AlexNet and ResNet-50), https://github.com/intel/caffe/tree/master/models/default_vgg_19 (VGG-19) and https://github.com/soumith/convnet-benchmarks/tree/master/caffe/imagenet_winners (ConvNet benchmarks; files were updated to the newer Caffe prototxt format but are functionally equivalent). GCC 4.8.5, MKLML version 2017.0.2.20170110. BVLC Caffe (https://github.com/BVLC/caffe), revision 91b09280f5233cafc62954c98ce8bc4c204e7475 (commit date 5/14/2017); inference and training measured with the "caffe time" command. BLAS: ATLAS ver. 3.10.1.

The post Kyoto University Chooses Intel Machine and Deep Learning Tech to Tackle Drug Discovery, Medicine and Healthcare Challenges appeared first on Intel Newsroom.

Delivering Intel Optane Technology to Mainstream Client Systems

Today, Intel announced the Intel® Optane™ SSD 800P, the latest addition to the growing Intel® Optane™ technology family of products. The 800P joins the Intel® Optane™ SSD 900P, designed for enthusiasts and professional users, and Intel Optane memory, an acceleration solution to speed up slower storage, like hard disk drives and SATA SSDs. Intel Optane technology delivers an unparalleled combination of high throughput, low latency, high quality of service and industry-leading endurance.

More: Storage and Memory News

Intel Optane SSD 800P enables fast system boot, speedy application load times and smooth multitasking. It is ideal for use as a standalone SSD, in a dual drive setup or in a multiple SSD RAID configuration (PCH-based or CPU-based), offering performance and flexibility to users. The drive also supports lower-power states, allowing it to operate in devices like laptops and 2 in 1 devices, as well as desktop systems. The 800P excels with low queue depth random workloads, where most client system activity takes place, and offers uncompromising responsiveness and throughput. The 800P is available in 58GB and 118GB capacities in the M.2 2280 form factor using an NVMe PCIe 3.0 x2 interface.
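The x2 link width puts a hard ceiling on sequential throughput. As a rough sanity check (the link width comes from the paragraph above; the per-lane rate and 128b/130b encoding are standard PCIe 3.0 values, and real-world throughput is lower still once protocol overhead is included):

```python
# Back-of-envelope bandwidth ceiling for a PCIe 3.0 x2 link,
# as used by the Optane SSD 800P's M.2 interface.
GT_PER_S = 8e9          # PCIe 3.0: 8 gigatransfers/s per lane
ENCODING = 128 / 130    # 128b/130b line encoding overhead

lane_bytes_per_s = GT_PER_S * ENCODING / 8   # ~984.6 MB/s usable per lane
x2_bytes_per_s = 2 * lane_bytes_per_s

print(f"PCIe 3.0 x2 ceiling: {x2_bytes_per_s / 1e9:.2f} GB/s")  # ~1.97 GB/s
```

Two lanes keep the drive comfortably ahead of SATA while leaving x4 headroom for higher-end parts like the 900P.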

To find out more about Intel® Optane™ technology for client systems, visit Intel’s solid state drive page.

Intel Optane SSD 800P

» Download all images (ZIP, 8 MB)

The post Delivering Intel Optane Technology to Mainstream Client Systems appeared first on Intel Newsroom.

Making a Server as Easy to Upgrade as a Light Bulb

Shesha Krishnapura, Intel Fellow and IT chief technical officer, found a novel way to save money and reduce waste by creating a modular server design that allows critical components to be upgraded easily. (Credit: Walden Kirsch/Intel Corporation)

How he’d describe his job to a 10-year-old: “My job at Intel is to build massive labs called data centers, which hold more than 200,000 interconnected computers, to aid more than 50,000 engineers to design and develop not only the world’s most powerful computer chips, but also software tools that help build smart and connected devices.”

More: Read about all Intel Innovators

The light-bulb moment: Staying on top of Intel’s insatiable, growing (up to 40 percent a year) demand for data center power and high-performance tools while carefully minding costs is the challenge Shesha and his team face. They’ve responded by building one of the most energy-efficient data centers on the planet in the shell of an old Santa Clara microprocessor factory. It is cooled only by fans, passive radiators and recirculating grey water — with space for future growth. But frequent server upgrades had an unfortunate side effect, as Shesha saw it: “All these perfectly good power supplies, fans, cables, chassis and drives are sent to recycling. It’s really painful.” Though hardware design is not a typical service offered by an IT shop, Shesha had an idea: make upgrading a server more like replacing a light bulb.

Cutting the blade: The industry-common “blade” server design helps reduce some of that upgrade waste, allowing the processor, memory and storage to be replaced independently from longer-life components. Shesha wanted to take that modularity another big step: split the motherboard in two and separate the processor and memory from the much slower-evolving input-output parts. He whiteboarded the design, earned support among Intel technical leaders and connected with a manufacturer. From the first meeting to actually installing the first machines in the data center took a mere 5 weeks — and the reorganized motherboard added only $8 to the cost.

Trading 8 bucks for big-time savings: As a result of all this, Shesha estimates that a $10 million server upgrade would now cost only $5.6 million — 44 percent less — and save 77 percent in technician time versus a full “rip and replace” upgrade. In that cavernous cathedral of computing in Santa Clara, some 70,000 of these new machines now fill an entire towering row. The “disaggregated server,” as he calls it, allows data center managers like himself “to invest in emerging technology without wasting money on replacing technology that is not evolving.”
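The quoted percentages are easy to sanity-check against the dollar figures in the paragraph above; a minimal sketch using only those numbers:

```python
# Sanity-check the savings figures quoted for the disaggregated server.
baseline_cost = 10_000_000   # full "rip and replace" upgrade
disagg_cost = 5_600_000      # disaggregated-server upgrade

cost_savings_pct = (1 - disagg_cost / baseline_cost) * 100
print(f"cost savings: {cost_savings_pct:.0f}%")   # 44%
```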

‘I’m extremely frugal’: Shesha isn’t coy about what inspired him to take this path. “I’m extremely frugal,” he confirms with a laugh. “In my personal life, I have always used things for their full useful life, and extend, where possible, with minimal maintenance, whether it is household appliances or automobiles. Then the question came to my mind: how to extend selectively what we invest in data center IT equipment in a way that is both financially rewarding and environmentally responsible? That was the motivation for disaggregated server design.” Frugality pays off in many ways — Shesha’s project team earned the Intel Achievement Award, the company’s highest honor, for bringing the disaggregated server to life.

» Download all images (ZIP, 15 MB)

The post Making a Server as Easy to Upgrade as a Light Bulb appeared first on Intel Newsroom.

Intel, Lenovo and Diaceutics Unlocking the Power of AI to Advance Patient Care

At the Healthcare Information and Management Systems Society (HIMSS) Conference 2018, Intel unveiled a collaboration with Lenovo and Diaceutics that will allow doctors and health care providers to harness the power of artificial intelligence (AI) to transform patient diagnostic testing and improve treatment.

More: Advancing Data-Driven Health Care Solutions (Press Kit)

Intel, Lenovo and Diaceutics are improving the existing diagnosis process with a solution that unlocks the capabilities of AI to process meaningful data and allow health care providers to deliver the right medicine at the right time to patients.

Lenovo’s enterprise workstations powered by Intel® Xeon® Scalable processors will provide the computing power necessary to process Diaceutics’ vast proprietary database of patient testing data. Doctors and health care providers will be able to deploy AI-based techniques to large data sets to generate accurate, real-time insight from diagnostic testing. By integrating testing results with treatment paths, pharmaceutical companies can develop a targeted plan for correctly treating diseases. Initial trials will target treatments for cancer patients, where Diaceutics has a robust data set, with intent to expand to other diseases in the future.

Intel, Lenovo and Diaceutics will describe their work at HIMSS 2018 in a session taking place in Lenovo’s booth #2443 at 2:00 p.m. on Wednesday, March 7, 2018.

The post Intel, Lenovo and Diaceutics Unlocking the Power of AI to Advance Patient Care appeared first on Intel Newsroom.

A Blue-Sky View of Our Cloud Computing Future

Intel Vice President Raejeanne Skillern oversees Intel’s cloud business, which in the fourth quarter of 2017 grew 35 percent versus the same period a year earlier. (Credit: Walden Kirsch/Intel Corporation)

How she’d describe her job to a 10-year-old: “Computers are like Lego sets and the cloud is like having thousands or millions of Lego sets connected. Cloud service providers rent these ‘sets’ so people can process, transmit and store data through the internet. I help them decide upon the ‘best’ sets to build.”

More: Read about all Intel Innovators

From buzzword to big business: In 2009 — when Raejeanne Skillern first started in Intel’s cloud business, then as director of marketing — cloud computing was a hot new buzzword, and pundits were preaching caution and wringing their hands over trust issues. “When we began, we only worked with a few cloud service providers,” she says. “Today we’re working closely with roughly 250 companies worldwide and many more through our channels. CSPs are setting the pace of innovation more so than ever before. Cloud is no longer a buzzword—it’s an industry transformation.”

Supercharging new services: “I’m amazed by all of the new services and unique applications that the cloud is bringing — not to mention the positive impact for areas such as health care and small business growth,” says Raejeanne, who leads Intel’s Cloud Service Provider Group. Her team works with the “Super 7” — Alibaba, Amazon, Baidu, Facebook, Google, Microsoft and Tencent. She also partners with rising companies like Digital Ocean and 1&1. And then there’s Meituan, a Chinese “super-app” that combines restaurant reviews, food ordering and delivery, digital coupons and payments, and even ride-hailing — it’s a perfect example of the increasing breadth of what’s become a cloud provider. “You sit down with these folks and it’s hard not to be excited about their business,” Raejeanne says.

Why it’s clouds, not one cloud: Though often referred to monolithically as “the cloud,” there’s nothing monolithic about it. Although cloud providers face similar challenges, the ways they solve them can differ greatly, Raejeanne says. Her job is to help match each customer with the right set of technologies — not just processors but also accelerators and FPGAs, storage and memory, networking, and, in some cases, new custom parts — to achieve maximum performance-per-dollar. “Our strength lies in the fact that we are one company that can do everything across the platform,” Raejeanne says.

Making AI accessible for all: Though the cloud isn’t often discussed alongside artificial intelligence, Raejeanne explains that a big part of what’s making AI possible is the performance, scale and cost-efficiency that cloud infrastructure provides. The cloud, she says, “is really extending the reach. Now our job is not just to provide leadership silicon, but also to enable the entire spectrum of artificial intelligence solutions, and help customers adopt AI so that they can utilize it from cloud to edge to transform their business.”

Where the clouds are billowing next: As Intel’s recent earnings results attest, Raejeanne sees continued cloud growth. She likens its rise to Jevons paradox, an observation that as advancements make use of a resource more efficient, its total use doesn’t decrease, but rather increases. “As cloud technology, public or private, makes compute easier and cheaper to consume, new use cases are being rapidly invented.” Looking ahead, Raejeanne says, “Artificial intelligence, 5G and network transformation, and immersive media are driving innovation across our entire industry.” She also expects that developments like function-as-a-service, which “changes the programming paradigm and really makes it easier and faster for developers,” could accelerate those trends and spur the next wave of innovation.

» Download full-size image

The post A Blue-Sky View of Our Cloud Computing Future appeared first on Intel Newsroom.

Intel Reimagines Data Center Storage with new 3D NAND SSDs

In 2017, Intel brought to market a wide array of products designed to tackle the world’s growing stockpile of data. Intel CEO Brian Krzanich called data an “unseen driving force behind the next-generation technology revolution,” and Rob Crooke, senior vice president and general manager of the Non-Volatile Memory (NVM) Solutions Group at Intel, recently outlined his vision for how storage and memory technologies can address all that data. In the past year, Intel introduced new Intel® Optane™ technology-based products and will continue to deliver exciting, blazing-fast solutions based on this breakthrough technology, with announcements later this year. Intel also brought industry-leading areal density to storage for consumers and enterprises, driving both capacity and form factor innovation with Intel® 3D NAND storage products.

Intel is reimagining how data is stored for the data center. By driving the creation and adoption of compelling new form factors, like the EDSFF 1U long and 1U short, and delivering advanced materials, including our densest NAND to date with 64-layer TLC Intel 3D NAND, Intel is enabling capacities of 8TB and beyond in an array of form factors that meet the specific performance needs of data centers.

The Intel SSD DC P4500 Series comes in the ruler form factor.

» Download all images (ZIP, 450 KB)

Introducing Intel SSD DC P4510 and P4511 Series

Today, Intel announced the Intel SSD DC P4510 Series for data center applications. The P4510 Series uses 64-layer TLC Intel 3D NAND to enable end users to do more per server, support broader workloads and deliver space-efficient capacity. The P4510 Series enables up to four times more terabytes per server and delivers up to 10 times better random read latency at 99.99 percent quality of service than previous generations. The drive can also deliver up to double the input-output operations per second (IOPS) per terabyte. The 1 and 2TB capacities have been shipping to cloud service providers (CSPs) in high volume since August 2017, and the 4 and 8TB capacities are now available to CSPs and channel customers. All capacities are in the 2.5-inch 15 mm U.2 form factor and utilize a PCIe* NVMe* 3.0 x4 connection.

To accelerate performance and simplify management of the P4510 Series PCIe SSDs and other PCIe SSDs, Intel is also delivering two new technologies that work together to replace legacy storage hardware. Intel® Xeon® Scalable processors include Intel Volume Management Device (VMD), enabling robust management such as surprise insertion/removal and LED management of PCIe SSDs directly connected to the CPU. Building on this functionality, Intel® Virtual RAID on CPU (VROC) uses Intel VMD to provide RAID to PCIe SSDs. By replacing RAID cards with Intel VROC, customers are able to enjoy up to twice the IOPs performance and up to a 70 percent cost savings with PCIe SSDs directly attached to the CPU, improving customer’s return on their investments in SSD-based storage.

Intel is also bringing innovation to the data center with new low-power SSDs and the Enterprise and Datacenter SSD Form Factor (EDSFF). The Intel SSD DC P4511 Series offers a low-power option for workloads with lower performance requirements, enabling data centers to save power. The P4511 Series will be available later in the first half of 2018 in M.2 110 mm form factor. Additionally, Intel continues to drive form factor innovation in the data center, with the Intel SSD DC P4510 Series available in the future in EDSFF 1U long and 1U short with up to 1 petabyte (PB) of storage in a 1U server rack.

EDSFF Momentum

At Flash Memory Summit* 2017, Intel introduced the ruler form factor for Intel SSDs, purpose-built from the ground up for data center efficiency and free from the confines of legacy form factors. The new form factor delivers unprecedented storage density, system design flexibility with long and short versions, optimum thermal efficiency, scalable performance (available x4, x8 and x16 connectors) and easy maintenance, with front-load, hot-swap capabilities. EDSFF is also future-ready and designed for PCIe 3.0, available today, and PCIe 4.0 and 5.0, when they are ready.

Recently, the Enterprise and Datacenter SSD Form Factor specification was ratified by the EDSFF Working Group*, which includes Intel®, Samsung*, Microsoft*, Facebook* and others. Intel has been shipping a pre-spec version of the Intel SSD DC P4500 Series to select customers, including IBM* and Tencent*, for more than a year, and the Intel SSD DC P4510 Series will be available in EDSFF 1U long and 1U short starting in the second half of 2018. The industry has shown an overwhelmingly positive response to the Intel-inspired EDSFF specifications, with more than 10 key OEM, ODM and ecosystem members indicating intentions to design EDSFF SSDs into their systems. Additional SSD manufacturers have also expressed intent to deliver EDSFF SSDs in the future.

IBM has deployed the P4500 Series in this new form factor to the IBM cloud. Tencent, one of the world’s leading providers of value-added internet services, has incorporated the Intel® SSD DC P4500 Series in the “ruler” form factor into its newly announced T-Flex platform, which supports 32 “ruler” SSDs as the standard high-performance storage resource pool.

“‘Ruler’ optimizes heat dissipation, significantly enhances SSD serviceability and delivers amazing storage capacity that will scale to 1PB in 1U in the future, thereby reducing overall storage construction and operating costs,” said Wu Jianjian, product director of Blackstone Product Center, Tencent Cloud. “We are very excited about this modern design and encourage its adoption as an industry standard specification.”
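The density claim is easy to put in perspective: spreading 1 PB (decimal units) across 32 ruler slots in a 1U chassis implies roughly 31 TB per drive. A small sketch of that arithmetic, using only the slot count and capacity target from the text above:

```python
# Implied per-drive capacity for 1 PB across 32 "ruler" slots in 1U.
# The per-ruler figure below is derived, not a published product spec.
slots_per_1u = 32
target_capacity_tb = 1000          # 1 PB, decimal units

tb_per_ruler = target_capacity_tb / slots_per_1u
print(f"{tb_per_ruler:.2f} TB per ruler")   # 31.25 TB
```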

For more information on the Intel SSD DC P4510 Series, EDSFF and Intel 3D NAND, visit Intel’s solid state drive site.

The post Intel Reimagines Data Center Storage with new 3D NAND SSDs appeared first on Intel Newsroom.

2018 CES

We are entering an artificial intelligence revolution.

To power the technology of the future and create amazing new experiences, we need to unlock the power of data. Its collection, storage and analysis continue to change and grow, having more of an impact on our everyday lives than ever before.

With advances in artificial intelligence, 5G connectivity, autonomous driving and virtual reality, Intel is taking the next steps at CES 2018 to reimagine how data will create amazing new experiences that will transform our daily lives.

Latest News

Event Details

2017 CES PHOTOS

Photos from Intel’s CES Booth

intel-booth-ces2017-34s

» Download image set 1 (ZIP, 142 MB)
» Download image set 2 (ZIP, 17 MB)


Photos from Intel’s VR News Conference

intel-news-2017ces-11

» Download all news conference images (ZIP, 82 MB)

2017 CES VIDEOS

2017 CES: Highlights from Intel’s Products and Experiences (B-roll)

» Download video: “2017 CES: Highlights from Intel’s Products and Experiences (B-roll)”
» Download video: “2017 CES: Intel Presents a World of Virtual Reality Experiences (Replay)”
» Download video: “2017 CES: Intel News Conference Present VR Experiences (Highlights)”

The post 2018 CES appeared first on Intel Newsroom.

Intel Launches Intel Saffron AML Advisor Using AI to Detect Financial Crime

NEWS HIGHLIGHTS

  • Intel Saffron Anti-Money Laundering (AML) Advisor uses explainable AI to enhance decision-making for investigators and analysts. Associative memory AI finds and explains multidimensional patterns so that investigators and analysts can explore emerging trends across a bank’s or insurer’s data.
  • With an unsupervised learning approach, the AML Advisor unifies structured and unstructured data from enterprise systems, email, web and other data sources to deliver insights along with the explanation of how connections were identified.
  • Additionally, the AML Advisor provides the transparency required to comply with ever-tighter regulatory standards.
  • Intel Saffron Early Adopter Program partners with five select organizations to utilize the latest developments in associative memory AI to shape the future of financial services.

SANTA CLARA, Calif., Oct. 11, 2017 – Intel today launched the Intel® Saffron™ Anti-Money Laundering (AML) Advisor, aimed at detecting financial crime through a transparent AI solution built on associative memory. The AML Advisor is the first associative memory AI solution tailored specifically to the needs of financial services institutions, and it is optimized for Intel® Xeon® Scalable processors.

Intel Saffron’s associative memory AI simulates the human brain’s natural ability to learn, remember and reason in real time, surfacing similarities and anomalies hidden in dynamic, heterogeneous data sources while drawing on a far larger data set than any human investigator could. The AML Advisor surfaces these patterns transparently, paving the way for “white box AI” in enterprise applications. These solutions are designed to enhance decision-making in highly complex tasks, and early results indicate they can catch money launderers with unprecedented speed and efficiency.
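To make the idea concrete, here is a minimal, illustrative sketch of an associative-memory-style lookup: it counts attribute co-occurrences across observations and ranks the strongest associations for a queried entity. The class, sample records and scoring below are hypothetical and are not Intel Saffron's actual implementation:

```python
from collections import defaultdict
from itertools import combinations

class AssociativeMemory:
    """Toy associative memory: counts attribute co-occurrences across
    observations, then surfaces the attributes most associated with a query."""

    def __init__(self):
        self.cooccur = defaultdict(int)  # (attr_a, attr_b) -> co-occurrence count
        self.counts = defaultdict(int)   # attr -> total observation count

    def observe(self, attrs):
        """Record one observation: a set of attributes seen together."""
        for a in attrs:
            self.counts[a] += 1
        for a, b in combinations(sorted(attrs), 2):
            self.cooccur[(a, b)] += 1

    def associates(self, attr, top=3):
        """Rank the attributes most strongly associated with `attr`."""
        scores = defaultdict(int)
        for (a, b), n in self.cooccur.items():
            if attr == a:
                scores[b] += n
            elif attr == b:
                scores[a] += n
        return sorted(scores.items(), key=lambda kv: -kv[1])[:top]

# Hypothetical transaction observations for two accounts.
mem = AssociativeMemory()
mem.observe({"acct:123", "country:NZ", "wire>10k"})
mem.observe({"acct:123", "country:NZ", "cash-deposit"})
mem.observe({"acct:456", "country:NZ"})

# The strongest associate of acct:123 is country:NZ (seen together twice).
top_matches = mem.associates("acct:123")
```

A real system would add incremental updates, anomaly scoring and an explanation of why each association surfaced; the co-occurrence table here only illustrates the "learn, remember and reason" loop at its smallest scale.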

Press Kit: Artificial Intelligence

Total financial crime is at all-time highs. According to the United Nations, the estimated amount of money laundered globally in one year is 2 to 5 percent of global GDP, or $800 billion to $2 trillion.1 In addition, in 2016 alone, approximately 15.4 million consumers were victims of identity theft or fraud, resulting in $16 billion in losses.2

“Intel Saffron’s mission is to minimize the time and effort it takes to reach confident decisions,” said Gayle Sheppard, vice president and general manager of Saffron AI Group at Intel. “We accelerate the path to decision by surfacing and explaining patterns in data with speed, precision and accuracy. The amount of data that banks and insurers collect is growing at massive scale, doubling every two years. While the quantity of data is growing, so are the types and sources of data, which means today much of the data isn’t queried for insights because it’s simply not accessible with traditional tools at scale. Investigators and analysts will depend on transparent AI solutions to meet the ever-growing demands of consistency and efficiency from a business, regulatory and compliance perspective.”

Banks and financial organizations often have 50 or more applications that rely on the same personal financial data. Banks want a more efficient way to manage that data, putting an end to moving and replicating it, which is costly and increases risk. They also want visibility into the unified knowledge across multiple data sources so they can better serve customers. Intel Saffron AML Advisor uses associative memory AI to discover new insights that help grow the business, meet compliance and regulatory requirements, and fight financial crime with a suite of features, including:

  • Knowledge Index: Unifies structured and unstructured data into a 360-degree view at the individual entity level, making sense of patterns that cross boundaries wherever the data is stored. This surfaces knowledge that is hard to gain when point solutions proliferate across vendors and databases.
  • Continuous Learning: Unlike traditional machine learning methods, Intel Saffron AML Advisor requires neither domain-specific models nor training and retraining, improving the time to insight. The financial services industry constantly faces the question of what will matter tomorrow; in this dynamic landscape, actionable insights realized in hours or days rather than weeks or months are an imperative.
  • Work Augmentation: Intel Saffron AML Advisor reduces the human cognitive burden through automated reasoning that works with and for investigators, allowing them to focus on higher-value activities.
  • Compliance Validation: Banks collect the data necessary to comply with various regulations, but often pay non-compliance fines in the billions due to human error or missed deadlines. Intel Saffron AML Advisor explains the rationale behind its recommendations to help banks meet compliance requirements, mitigate fines and reduce the countless hours spent reworking reports.
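The entity-level unification the Knowledge Index describes can be sketched in a few lines: records from heterogeneous sources are merged into a single view per customer instead of being replicated across applications. The source names, record shapes and field names below are hypothetical, not a description of Intel Saffron's data model:

```python
# Hypothetical records from two systems keyed on the same customer id:
# structured transactions and unstructured case notes.
transactions = [
    {"customer_id": "C-42", "amount": 9500, "channel": "wire"},
    {"customer_id": "C-42", "amount": 9800, "channel": "wire"},
]
notes = [
    {"customer_id": "C-42", "text": "Customer asked about reporting thresholds."},
]

def knowledge_index(*sources):
    """Merge records from heterogeneous sources into one view per entity,
    keyed by customer_id, without copying data into per-application silos."""
    view = {}
    for source in sources:
        for record in source:
            entity = view.setdefault(record["customer_id"], [])
            entity.append({k: v for k, v in record.items() if k != "customer_id"})
    return view

# One 360-degree profile per customer, spanning both sources.
profile = knowledge_index(transactions, notes)
```

In practice the hard problems are entity resolution (deciding that two records refer to the same customer) and provenance (recording which source each fact came from, for the compliance explanations the release describes); this sketch assumes a shared key and skips both.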

Intel also introduced the Intel Saffron Early Adopter Program (EAP), designed for institutions that aim to lead on innovation in financial services by taking advantage of the latest advancements in associative memory artificial intelligence. Members gain a first-mover advantage and help define the future of associative memory AI in financial services. Expanding upon its existing relationship with Intel, Bank of New Zealand* (BNZ) has joined the Intel Saffron EAP.

“We’re excited to be working with Intel Saffron on truly bleeding-edge technology that will enable us to understand our customers far better than we ever have before and help them make smarter decisions,” said David Bullock, director of Products and Technology at BNZ. “By staying at the forefront of AI, we can help ensure we have access to the latest, innovative technologies that enhance our business.”

Intel Saffron solutions allow BNZ to take advantage of its existing big data platform to glean increasingly sophisticated insights for innovative customer service.

For more details about the Intel Saffron AML Advisor and the Intel Saffron Early Adopter Program, visit the Intel Saffron financial services page.

1United Nations Office on Drugs and Crime, https://www.unodc.org/unodc/en/money-laundering/globalization.html

2Javelin Strategy & Research, 2017 Identity Fraud Study, https://www.javelinstrategy.com/press-release/identity-fraud-hits-record-high-154-million-us-victims-2016-16-percent-according-new

Intel, Saffron, Xeon and the Intel logo are trademarks of Intel Corporation in the United States and other countries.

The post Intel Launches Intel Saffron AML Advisor Using AI to Detect Financial Crime appeared first on Intel Newsroom.