Kyoto University Chooses Intel Machine and Deep Learning Tech to Tackle Drug Discovery, Medicine and Healthcare Challenges

Kyoto University Graduate School of Medicine*, one of Asia’s leading research-oriented institutions, has recently chosen Intel® Xeon® Scalable processors to power its clinical genome analysis cluster and its molecular simulation cluster. These clusters will aid Kyoto’s drug discovery efforts and help reduce research and development costs.

The Intel Xeon Scalable platform offers potent performance for all types of artificial intelligence (AI). Intel’s optimizations for popular deep learning frameworks have produced up to 127 times1 performance gains for deep learning training and 198 times2 performance gains for deep learning inference for AI workloads running on Intel Xeon Scalable processors. Kyoto is one of many leading healthcare providers and research institutions that are working with Intel and using Intel artificial intelligence technology to tackle some of the biggest challenges in healthcare.

More: One Simple Truth about Artificial Intelligence in Healthcare: It’s Already Here (Navin Shenoy Editorial) | Shaping the Future of Healthcare through Artificial Intelligence (Event Video Replay) | Artificial Intelligence (Press Kit) | Advancing Data-Driven Healthcare Solutions (Press Kit)

“We are only at the beginning of solving these problems. We are continuing to push forward and work with industry-leading entities to solve even more,” said Arjun Bansal, vice president of the Artificial Intelligence Products Group and general manager of Artificial Intelligence Labs and Software at Intel Corporation. “For example, I’m happy to announce that Kyoto University has recently chosen Intel to power their clinical genome analysis cluster and their molecular simulation cluster. These clusters are to aid in their drug discovery efforts and should help reduce the R&D costs of testing different compounds and accelerate precision medicine by adopting Deep Learning techniques.”

It can take up to 15 years – and billions of dollars – to translate a drug discovery idea from inception to a market-ready product. Identifying the right protein to manipulate in a disease, proving the concept, optimizing the molecule for delivery to the patient, and carrying out pre-clinical and clinical safety and efficacy testing are all essential, but the end-to-end process simply takes too long today.

Dramatic shifts are needed to meet the needs of society and a future generation of patients. Artificial intelligence presents researchers with an opportunity to do R&D differently – driving down the resources and costs to develop drugs and bringing the potential for a substantial increase in new treatments for serious diseases.

1 Performance estimates were obtained prior to implementation of recent software patches and firmware updates intended to address exploits referred to as “Spectre” and “Meltdown.” Implementation of these updates may make these results inapplicable to your device or system.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit http://www.intel.com/benchmarks.

Source: Intel, measured as of June 2017. Optimization Notice: Intel’s compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Configurations for inference throughput:

Processor: 2-socket Intel® Xeon® Platinum 8180 CPU @ 2.50GHz (28 cores), HT ON, Turbo ON. Total memory: 376.46 GB (12 slots / 32 GB / 2666 MHz). CentOS Linux 7.3.1611 (Core). Storage: sda RS3WC080 HDD 744.1 GB; sdb RS3WC080 HDD 1.5 TB; sdc RS3WC080 HDD 5.5 TB. Deep learning framework: Caffe revision f6d01efbe93f70726ea3796a4b89c612365a6341. Topology: googlenet_v1. BIOS: SE5C620.86B.00.01.0004.071220170215. MKL-DNN revision: ae00102be506ed0fe2099c6557df2aa88ad57ec1. NoDataLayer. Measured: 1190 imgs/sec. Baseline platform: 2S Intel® Xeon® CPU E5-2699 v3 @ 2.30GHz (18 cores), HT enabled, Turbo disabled, scaling governor set to “performance” via the intel_pstate driver, 256 GB DDR4-2133 ECC RAM. CentOS Linux release 7.3.1611 (Core), Linux kernel 3.10.0-514.el7.x86_64. OS drive: Seagate* Enterprise ST2000NX0253 2 TB 2.5-inch internal hard drive. Performance measured with environment variables KMP_AFFINITY=’granularity=fine,compact,1,0’ and OMP_NUM_THREADS=36; CPU frequency set with “cpupower frequency-set -d 2.3G -u 2.3G -g performance”. Deep learning frameworks: Intel Caffe (http://github.com/intel/caffe/), revision b0ef3236528a2c7d2988f249d347d5fdae831236; inference measured with the “caffe time --forward_only” command, training measured with the “caffe time” command; for “ConvNet” topologies a dummy dataset was used, and for other topologies data was stored on local storage and cached in memory before training. Topology specs from https://github.com/intel/caffe/tree/master/models/intel_optimized_models (GoogLeNet, AlexNet and ResNet-50), https://github.com/intel/caffe/tree/master/models/default_vgg_19 (VGG-19) and https://github.com/soumith/convnet-benchmarks/tree/master/caffe/imagenet_winners (ConvNet benchmarks; files were updated to use the newer Caffe prototxt format but are functionally equivalent). GCC 4.8.5, MKLML version 2017.0.2.20170110. BVLC Caffe (http://github.com/BVLC/caffe), revision 91b09280f5233cafc62954c98ce8bc4c204e7475 (commit date 5/14/2017); inference and training measured with the “caffe time” command, with the same dataset handling as above. BLAS: ATLAS version 3.10.1.

2 Configuration for training throughput:

Processor: 2-socket Intel® Xeon® Platinum 8180 CPU @ 2.50GHz (28 cores), HT ON, Turbo ON. Total memory: 376.28 GB (12 slots / 32 GB / 2666 MHz). CentOS Linux 7.3.1611 (Core). Storage: sda RS3WC080 HDD 744.1 GB; sdb RS3WC080 HDD 1.5 TB; sdc RS3WC080 HDD 5.5 TB. Deep learning framework: Caffe revision f6d01efbe93f70726ea3796a4b89c612365a6341. Topology: alexnet. BIOS: SE5C620.86B.00.01.0009.101920170742. MKL-DNN revision: ae00102be506ed0fe2099c6557df2aa88ad57ec1. NoDataLayer. Measured: 1023 imgs/sec. Baseline platform and test methodology: identical to the inference configuration above.


One Simple Truth about Artificial Intelligence in Healthcare: It’s Already Here

By Navin Shenoy

In the wide world of big data, artificial intelligence (AI) holds transformational promise. Everything from manufacturing to transportation to retail to education will be improved through its application. But nowhere is that potential more profound than in healthcare, where every one of us has a stake.

What if we could predict the next big disease epidemic, and stop it before it kills? What if we could look at zettabytes of data to find those at greatest risk of becoming sick, then quickly and precisely prevent that from happening? What if the treatment and management of chronic disease could be so personalized that no two individuals get the same medicine, but equally enjoy the best possible outcome? What if we could drastically reduce the time and cost to discover new drugs and bring them to market? What if we could do all of that now?

Thanks to artificial intelligence and the work of Intel and its partners, we can.

Real Impact Today

There’s a common myth that AI in healthcare is the stuff of science fiction – think machines diagnosing illness and prescribing treatment without a doctor involved. But that is not only highly unlikely, it’s not even close to the best examples of how AI is emerging in healthcare today.

Intel and partners throughout the healthcare industry – including GE Healthcare, Siemens, Sharp Healthcare, the Broad Institute, UCSF and the Mayo Clinic – are successfully applying AI solutions today, from the back office to the doctor’s office, from the emergency room to the living room. A few customers that we’re working closely with include:

Montefiore Medical System: using prescriptive models to identify patients at risk for respiratory failure, so healthcare workers can act on alerts that lead to timely interventions that save lives and resources.

Stanford Medical: using AI to augment MRI image reconstruction so that a complete image can be delivered in about a minute versus what normally would take about an hour – eliminating risky intubation and sedation in pediatric patients during imaging exams.

ICON plc: using clinical data from sensors and wearable devices to assess the impact of new therapies in clinical trials more quickly, instead of relying only on burdensome clinic visits and paper diaries.

AccuHealth: using home monitoring along with data mining and predictive modeling to identify changes of concern among chronic disease patients, enabling intervention before conditions escalate and become acute.

Better Health Tomorrow

But the triumph of artificial intelligence in healthcare isn’t inevitable. Right now, the average hospital generates 665 terabytes of data annually1, but most of that data isn’t useful. At least 80 percent of hospital data is unstructured2, such as clinical notes, video and images. Electronic medical records (EMRs) are a mandated system of record, but they aren’t as actionable as they could be. Only with AI can we leverage healthcare data to create a system of insights.

Getting healthcare systems to provide greater access to their data would help. Government also has a role to play by providing appropriate incentives and regulatory clarity for sharing data. We agree with the recent White House proposal to give patients control and ownership of all their health data, letting them bring it with them wherever they go rather than having it reside in various doctors’ offices, clinics and hospitals.

New technology can help as well. One example: Intel researchers are making great strides toward practical methods for homomorphic encryption, which allows computer systems to perform calculations on encrypted information without decrypting it first. Such encryption would let researchers operate on data in a secure, private way while still delivering insightful results.
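To make the idea concrete, here is a toy sketch of an additively homomorphic scheme (Paillier), in which multiplying two ciphertexts yields an encryption of the sum of their plaintexts. This is purely illustrative: the primes below are far too small to be secure, and real systems rely on vetted cryptographic libraries.

```python
# Toy additively homomorphic encryption (Paillier). Illustrative only:
# these primes are far too small to be secure. Requires Python 3.9+.
import math
import random

def generate_keys(p=1789, q=2003):            # tiny demo primes, NOT secure
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)                      # valid because we pick g = n + 1
    return n, (lam, mu)

def encrypt(n, m):
    r = random.randrange(1, n)                # fresh randomness per ciphertext
    return (pow(n + 1, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(n, priv, c):
    lam, mu = priv
    x = pow(c, lam, n * n)                    # apply L(x) = (x - 1) // n below
    return ((x - 1) // n) * mu % n

n, priv = generate_keys()
a = encrypt(n, 120)                           # e.g., two readings to aggregate
b = encrypt(n, 80)
total = (a * b) % (n * n)                     # multiply ciphertexts to add plaintexts
print(decrypt(n, priv, total))                # -> 200, without decrypting a or b
```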

Indeed, much work is ahead, and Intel is uniquely positioned to help healthcare organizations succeed. Emerging healthcare data is massive – images, a growing list of ’omics (e.g., genomics, proteomics), video – and will require a storage plan and a network that address speed, latency and reliability. We have been investing with our partners to build the right systems – data, storage, network, full infrastructure – all the way from the edge to the network to the cloud, and everywhere in between. With the advancements in our hardware and optimizations of popular deep learning frameworks, the Intel Xeon Scalable processor has 198x better inference performance and 127x better training performance than prior generations3. As a result, the Xeon platform is at the center of many AI workloads today because it is well suited to many machine and deep learning applications across industries like healthcare.

But hardware, storage and network alone are not enough. We need to leverage unparalleled expertise from data scientists, software developers, industry experts and ecosystem partners to address AI in healthcare end to end. As part of the effort to expand AI expertise, we launched the Intel AI Academy, which offers learning materials, community tools and technology to boost AI development. It already draws more than 250,000 monthly participants, and I invite you to join for free as well.

I feel very fortunate to work for a company like Intel that is committed to powering AI solutions that will tackle some of the biggest challenges of our time, including healthcare. I’m also proud to be leading the team that will deliver that vision.

Navin Shenoy is executive vice president and general manager of the Data Center Group at Intel Corporation.

1 Source: http://www.netapp.com/us/media/wp-7169.pdf

2 Source: http://www.zdnet.com/news/unstructured-data-challenge-or-asset/6356681

3 Source: Intel – AI performance, software + hardware configuration. Inference measured using FP32; batch sizes: Caffe GoogLeNet v1 256, AlexNet 256.

Test disclaimers, the optimization notice and the full configurations for inference and training throughput (source: Intel, measured as of June 2017) are identical to those listed under the first article’s footnotes above.


Fact Sheet: Unleashing the Power of Artificial Intelligence in Healthcare with Intel Technologies

Artificial Intelligence: Machine Learning, Deep Learning and More

Artificial intelligence (AI) is now all around us, as machines increasingly gain the ability to sense, learn, reason, act and adapt in the real world. Much of the AI spotlight is focused on deep learning, a branch of machine learning that uses neural networks to comprehend complex and unstructured data (training) and use that understanding to classify, recognize and process new inputs (inference). Deep learning is delivering breakthroughs in areas like image recognition, speech recognition, natural language processing and other complex tasks.
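As a concrete (if drastically simplified) illustration of that training/inference split, the sketch below fits a toy classifier to synthetic data, then applies the frozen parameters to a new input. Nothing here is specific to any Intel library; it only illustrates the two phases.

```python
# Minimal sketch of the training-vs-inference split with a toy classifier.
# Training fits parameters to labeled examples; inference applies the frozen
# parameters to new inputs. All data here is synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                 # 200 examples, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(float)     # synthetic labels

w, b = np.zeros(2), 0.0
for _ in range(500):                          # training: gradient descent
    p = 1 / (1 + np.exp(-(X @ w + b)))        # sigmoid predictions
    w -= 0.1 * X.T @ (p - y) / len(y)         # update weights from the errors
    b -= 0.1 * np.mean(p - y)

x_new = np.array([0.5, 1.0])                  # inference: classify unseen input
print(1 / (1 + np.exp(-(x_new @ w + b))) > 0.5)   # -> True
```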

Processing Deep Learning Workloads in Healthcare

Deep learning workloads require a tremendous amount of computational power to run complex mathematical algorithms and process huge amounts of data. While GPUs have been used for some deep learning training applications, they don’t necessarily have a performance edge for deep learning inference. In healthcare in particular, there are many examples of applications that run better on CPUs than GPUs.

That’s why, for healthcare organizations running AI workloads, Intel® Xeon® Scalable processors provide an ideal computational foundation. Intel Xeon Scalable processors are optimized for AI, deliver 2.1x faster deep learning performance than previous generations1, and offer server-class reliability and workload flexibility.

Versatility

Many organizations that implement AI don’t need to train models 24/7. Those that do require a dedicated accelerator, but the majority simply need to train a model and then deploy it. If an organization invests thousands of dollars in dedicated acceleration infrastructure, it can’t use that hardware for anything beyond training – between training runs it simply sits idle.

Intel Xeon Scalable processors are more agile. They are designed to flex with business needs to support the workload needs of the moment, allowing organizations to leverage their data center infrastructure for AI applications and a wide range of other workloads.

Increased Utilization

Most organizations have at least 35 percent of their processing capacity sitting unused, if not more. This means they could be obtaining better ROI from their infrastructure investments. With Intel Xeon Scalable processors, they can put this unused capacity to work on AI while still meeting the needs of other applications.

Technical Advantages

GPUs are often leveraged as accelerators, but they have limitations when it comes to building models to tackle healthcare problems. These limitations begin with memory: GPUs are limited to only 12 to 16 gigabytes of memory on the chip itself, which restricts the size and capability of the models that can be built. This memory limit is a particular problem in healthcare AI work, which often involves building massive models – for example, to support a giant CT scan.

For today’s AI networks deployed on a standard GPU, scan images are usually reduced to 256 x 256 pixels – 1,000 x 1,000 pixels at the very most. If clinicians have a 4K image and actually need all of those pixels to make an analysis – for example, to detect a tumor in a tiny portion of the image – then they have a problem: scaling that 4K image down to 256 x 256 pixels discards the fine spatial detail.

The same model can be built on Intel Xeon Scalable processors without limiting size or resolution, because the platform can address terabytes of system memory. Intel Xeon processors may process the image more slowly than a GPU, but when full resolution is needed for medical imaging, they deliver.
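A back-of-envelope calculation shows why resolution collides with a 12-16 GB memory budget. The sketch below estimates activation memory for a stack of convolutional layers at different input sizes; the layer count, channel width and precision are hypothetical round numbers rather than any particular network.

```python
# Back-of-envelope sketch: why input resolution is bounded by accelerator
# memory. Numbers are illustrative; real footprints depend on the network,
# batch size, precision, and framework overhead.
def conv_activation_bytes(height, width, channels=64, layers=20,
                          bytes_per_value=4):
    """Rough activation memory for a stack of same-resolution conv layers
    (FP32, batch size 1), counting forward activations kept for backprop."""
    return height * width * channels * layers * bytes_per_value

for side in (256, 1000, 4096):
    gib = conv_activation_bytes(side, side) / 2**30
    print(f"{side} x {side}: ~{gib:.1f} GiB of activations")
# 256 x 256  : ~0.3 GiB  -> fits easily in a 12-16 GB GPU
# 1000 x 1000: ~4.8 GiB  -> already a large share of GPU memory
# 4096 x 4096: ~80.0 GiB -> exceeds a 12-16 GB GPU, but fits in host RAM
```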

A Processor Designed for Deep Learning

Looking ahead, Intel has also developed the Intel® Nervana™ Neural Network Processor (NNP), the world’s first processor specifically designed from the ground up for deep learning. The Intel Nervana NNP promises to further enhance medical imaging and other healthcare applications.

Using the Intel Nervana platform, healthcare organizations can develop entirely new classes of AI applications that maximize the amount of data processed and lead to greater insights. For example, AI will allow for earlier diagnosis and greater accuracy, helping make the impossible possible by advancing research on cancer, Parkinson’s disease and other brain disorders2.

The Intel Advantage for AI

Intel has 50 years of experience in helping its customers make their data valuable. Today, Intel is using that experience to help healthcare organizations address every aspect of their workflows through the successful implementation of AI.

From hardware to software and data science, Intel brings its full suite of products and expertise to AI. Intel has not only the technical expertise to help organizations build the right infrastructure – from data to storage and network – but also the data scientists to understand and model data and the application developers to help make the data useful. In short, Intel’s ability to implement AI goes well beyond the CPU.

Driving the Age of AI

At Intel, we recognize the age of AI is upon us, and we know our technologies will help drive the future of AI. We are also motivated by a desire to use our technology to help advance society and tackle the world’s big challenges, so we’re particularly excited about our work in healthcare. We want to help channel AI for societal good – we call it AI with a purpose.

With these thoughts in mind, we’re pleased to be engaged with healthcare providers and researchers. We know that our collective work is helping move the needle – with even more promise and unlimited possibilities for the future.

Benchmark results were obtained prior to implementation of recent software patches and firmware updates intended to address exploits referred to as “Spectre” and “Meltdown.” Implementation of these updates may make these results inapplicable to your device or system.

1 Inference measured using FP32; batch sizes: Caffe GoogLeNet v1 256, AlexNet 256.

Source: Intel, measured as of June 2017. The test disclaimers, optimization notice and full configurations for inference and training throughput are identical to those listed under the first article’s footnotes above.

2 Intel editorial by Brian Krzanich, chief executive officer of Intel Corporation, “Announcing Industry’s First Neural Network Processor,” October 17, 2017.

Intel, the Intel logo, Xeon, and Nervana are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries.

Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. Check with your system manufacturer or retailer or learn more at www.intel.com.


SOLVE: HEALTHCARE – Intel and Partners Demonstrate Real Uses for Artificial Intelligence in Healthcare

When used together, the terms “healthcare” and “artificial intelligence” (AI) often conjure up images of “robot surgeons.” That’s one use case – and a promising one – but there are other, less photo-worthy AI examples that warrant our attention, such as the use of data “lakes” to develop insights on large patient populations in order to help predict, pre-empt and prevent people from getting sicker than they already are.

Or using AI to gain a better understanding of the human genome to more precisely deliver patient care. Or using AI to detect the massive waste, fraud and abuse in healthcare spending today – estimated at 3 to 10 percent1 of the more than $3 trillion spent in the U.S. annually2.

AI applications like these are starting to make an impact on our healthcare system today and are poised to revolutionize everything from the back office to the doctor’s office, from the emergency room to the living room. Intel, together with several of our partners, is demonstrating this progress today at our inaugural SOLVE: Healthcare event.

Read on to learn more about some of the innovative companies applying AI to improve the delivery of healthcare. The solutions demonstrated today come from just a fraction of the companies Intel is working with on new use cases. Visit the Intel newsroom to learn more.

AccuHealth*: Remote Patient Monitoring

Responsible for 86 percent of annual health care expenditure in the U.S.3, chronic illness is one of the most pressing challenges facing our health care system today. The average cost of an ER visit ranges anywhere from $50 to $3,0004, and the average cost of a day in the hospital starts at $1,8005. Costs soar even higher with each care-related procedure and treatment performed. Behind every ER visit and hospital bill, there is often a chronically ill patient who may be in a life-threatening situation. To help prevent these acute events, keep chronic illnesses in check and avoid the associated costs, AccuHealth, a Chile-based startup, has developed a patient-centric healthcare model that shifts reactive, facility-based care to preventive, home-based remote care.

Using wearable sensors linked to a smart Intel-based monitoring device that connects to AccuHealth’s virtual-hospital remote monitoring center, patients perform 3-5 minute “check-ups” throughout the day from the comfort of their home or office. Patient data is sent in real time to a data center, where powerful processors apply data mining and predictive modeling to identify and anticipate any changes of concern. In the event of an alarming shift, caregivers can proactively intervene before an ER visit or hospitalization is required.
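As a simplified picture of what such a pipeline computes, the sketch below flags readings that drift far from a patient’s own recent baseline. The window size, threshold and readings are hypothetical; AccuHealth’s production models are, of course, far richer than a rolling z-score.

```python
# Minimal sketch of streaming-vitals alerting of the kind a remote-monitoring
# pipeline might run; thresholds, window size, and readings are hypothetical.
from collections import deque
from statistics import mean, stdev

class VitalsMonitor:
    """Flags readings that drift far from a patient's own recent baseline."""
    def __init__(self, window=20, z_threshold=3.0):
        self.history = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, reading):
        alert = False
        if len(self.history) >= 5:               # need a baseline first
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(reading - mu) / sigma > self.z_threshold:
                alert = True                     # escalate to a caregiver
        self.history.append(reading)
        return alert

monitor = VitalsMonitor()
for hr in [72, 75, 71, 74, 73, 76, 72, 74, 118]:  # heart-rate stream (bpm)
    if monitor.observe(hr):
        print(f"alert: heart rate {hr} bpm deviates from baseline")
```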

To date, AccuHealth has monitored over 15,000 patients. The data indicates a 42 percent decrease in emergency room visits and savings of more than 30 percent for participating insurance companies. So far, AccuHealth has deployed the solution in Chile.

Broad* Institute: Developing a Genome Analysis Toolkit to Better Understand Disease

The human genome includes 3 billion base pairs of DNA. By analyzing and understanding vast amounts of genomic data across tens of thousands of people, researchers are gaining insights into how diseases form — and what we can do to fight them.

Broad Institute developed the Genome Analysis Toolkit* (GATK) to analyze the rapidly growing body of genomic data, predicted to be multiple exabytes’ worth by 2025. Intel helped optimize code and develop the reference architecture called BIGStack. This open-source, AI-enabled toolkit can rapidly tackle vast amounts of data (Broad scientists generate 24 terabytes of data per day, and more than 57,000 researchers around the world use the GATK) while providing some of the most consistent, accurate results available to researchers and clinicians.
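For intuition about the smallest unit of this kind of analysis, the toy below makes the sort of decision a variant caller faces at a single genomic position: given the read bases aligned there, is there evidence for a non-reference allele? It is purely illustrative; GATK’s actual tools apply sophisticated statistical models, not a simple frequency threshold.

```python
# Toy illustration of a single-site variant call: do the aligned read bases
# support a non-reference allele? Thresholds here are arbitrary stand-ins.
from collections import Counter

def call_site(ref_base, read_bases, min_depth=10, min_alt_fraction=0.2):
    """Return the best-supported alternate allele, or None if no call."""
    if len(read_bases) < min_depth:
        return None                              # too little coverage to call
    counts = Counter(read_bases)
    alt, alt_count = max(
        ((b, c) for b, c in counts.items() if b != ref_base),
        key=lambda bc: bc[1], default=(None, 0))
    if alt_count / len(read_bases) >= min_alt_fraction:
        return alt
    return None

pileup = "AAAAGAGAGAAAGGGA"                      # bases observed at one position
print(call_site("A", pileup))                    # -> 'G' (heterozygous-like signal)
```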

The design of many GATK tools has evolved with recent advances in machine learning, enabling the development of new and improved analytical capabilities, which in turn has enabled researchers, clinicians and industry users to make novel discoveries and pave the way for precision medicine. Ultimately, this may translate into faster treatment identification by clinicians for diseases such as cancer, Alzheimer’s, Type 2 diabetes, and schizophrenia, as well as faster biomarker targeting by pharmaceutical companies.

Diaceutics*: Using AI to Improve Patient Diagnostics

Today, it takes 15 years and $3 billion to bring a drug to market. Diaceutics, an Ireland-based diagnostic commercialization service provider, works with pharmaceutical companies to improve patient outcomes and strategically integrate diagnostic testing with targeted drug treatments. Currently, the company works with 29 of the world’s top 30 pharmaceutical companies to inform and guide precision medicine development and launch.

Collaborating with Intel and using Intel Xeon processors, Diaceutics can leverage its vast, global database of patient test data, using AI – in real time – to analyze big data reserves, recognize patterns and identify new patient subsets. The goal: to inform, expedite and improve diagnoses, treatment and outcomes for patients with similar characteristics.

One example: in the U.S. alone, an estimated 78,000 cancer patients are not properly tested each year, meaning that they are missing out on potentially lifesaving medications. Diaceutics’ AI and machine learning, however, can quickly analyze biomarkers and disease patterns for a variety of tumors and cancer types that can lead to at-risk patients being identified, diagnosed and treated more quickly and accurately. With expertise in the laboratory, diagnostics and pharmaceuticals, the company is transforming an industry model for diagnosing and treating patients, further unlocking the promise of precision medicine.

Doctor Hazel*: An App to Detect Skin Cancer

According to the Skin Cancer Foundation, skin cancer is the most common cancer in the U.S. Half of the population will be diagnosed with some form of skin cancer by the time they reach age 65. Without early detection, the five-year survival rate falls to 62 percent when the disease reaches the lymph nodes and to 18 percent when it metastasizes to distant organs6. However, when melanoma is found early, the survival rate is extremely high.

Software developers Mike Borozdin and Peter Ma used a high-powered endoscope camera and the Intel® Movidius™ Neural Compute Stick – a tiny, fanless, deep learning USB device designed for high-performance AI programming – to create an AI-powered app that in the future may be used to detect skin cancer, in real time, by simply analyzing a photo of a mole.

Dubbed Doctor Hazel, the app sorts through a database of images, classifies the mole in question and instantly lets the person know if the mole looks benign or potentially cancerous. If the app raises a red flag, the person is advised to follow up with a dermatologist. The result: earlier detection, quicker treatment, and better patient outcomes.

GE Healthcare*: Using AI to Improve Imaging

Hospitals and health care systems often rely on medical imaging – MRIs, CT scans, PET scans, bone scans, etc. – to diagnose and determine treatment for patients.

GE Healthcare, a leader in healthcare imaging, is using applied intelligence to improve and speed up the imaging process, while also reducing costs for hospitals and health care systems using their equipment. GE has partnered with Intel on AI in medical imaging workflows and clinical diagnostic scanning to find ways to detect a disease in the early stages, transforming data into actionable insights. The aim: quicker, more informed clinician decisions and better patient outcomes. This includes reduced patient risk and dosage exposure – with faster image processing – to expedite a patient’s time to diagnosis and treatment.

By using Intel Xeon processors, GE anticipates improving radiologists’ reading productivity compared with the prior generation by reducing first-image display to under two seconds and full study load times to under eight seconds. Additionally, GE anticipates lowering the equipment’s total cost of ownership by up to 25 percent.

With outstanding ability to extract and interpret data across healthcare IT systems, devices and imaging equipment, GE delivers a one-two punch of insight and actionability to help inform decision making and improve outcomes.

ICON* plc, Michael J. Fox Foundation* (MJFF), and Teva Pharmaceutical Industries Ltd.*: Transforming Clinical Trials

In 2013, sparked by a request from former Intel CEO and president Andy Grove, Intel IT partnered with the Michael J. Fox Foundation to improve research and treatment associated with Parkinson’s disease. The collaboration included a multiphase research study using an AI platform to gain insights from patient data collected with wearable and mobile technologies. The result of this study? The Intel® Pharma Analytics Platform.

To date, MJFF and Teva Pharmaceutical have used the Intel Pharma Analytics Platform to digitize clinical trials. Alongside a traditional clinical trial that relies on burdensome clinic visits and paper diaries, the Intel platform provides an edge-to-cloud AI solution that uses remote monitoring, allowing clinical data from sensors and wearable devices to be captured efficiently and continuously. It also applies machine learning techniques to develop objective measures for assessing symptoms and quantifying the impact of therapies. Most recently, ICON plc, a global provider of drug development solutions and services, has added the platform to enhance its existing offerings.

So far, the platform has been used in more than a dozen clinical trials, comprising more than 1.2 million hours of data collection from more than 1,000 patients. Trial results demonstrate increased compliance and patient retention. Building on Intel’s AI and analytics leadership, the Intel Pharma Analytics Platform provides powerful web-based analytics, running on high-performance Intel Xeon processor-based servers and Intel® Solid State Drives that can handle the massive volumes of streaming sensor data.

Intermountain Healthcare*: Pursuing Precision Medicine

Though many healthcare innovations originate in large academic institutions, Intermountain Healthcare, a Utah-based healthcare system that includes 22 hospitals and 185 clinics, shows that cutting-edge research can also take place within a large integrated health network.

Intermountain’s Precision Genomics group has more than 4 million samples in its biorepository, which researchers are working to digitize, sequence and add to their data set so the data can be shared throughout the network and with other partners via the Oncology Precision Network (OPeN).

Clinicians provide a personalized approach to testing, diagnosing and treating cancer. After a patient’s DNA has been sequenced, the results are discussed by Precision Genomics’ molecular tumor board, an expert panel of scientists, physicians, nurses and pharmacists from inside and outside the Intermountain network. AI augments this capability, allowing the board to take on more cases, more quickly and at lower cost. The board meets in person and through teleconferencing to discuss the treatment that best suits each individual patient.

Already, Intermountain believes it has doubled the overall survival rate of patients in its precision medicine program by providing more targeted treatment based on each individual’s specific cancer.

Mayo Clinic*: Merging Heterogeneous Data Sources for Individualized Care

Today, medical data comes not just in the form of test results – from CT scans, heart monitors or blood pressure cuffs – but also from wearable fitness devices and Internet-of-Things home technologies. Until recently, there has been no way to merge these data fields to benefit patients. Leveraging contemporary architectures, including high-performance multi-core and many-core processors, Mayo has the computing power to bring these heterogeneous data pools together in a cohesive manner, analyze them and search for patterns indicating better ways to diagnose and treat individuals. The Mayo Clinic sees AI as the bridge it needs to turn big data into real, actionable information – moving past treating populations to treating individuals and bringing about the best possible outcome for each and every patient.

Montefiore Health System*: Predicting and Assessing Risk in Acute Patients

A stay in the intensive care unit can be quite costly, and the burden rises the longer a patient remains in the ICU and if ventilation is required. Longer stays also carry a higher risk of complications.

To reduce patients’ time in the ICU and the risk of hospital-acquired conditions (HACs), cutting-edge hospitals are turning to data. To this end, Montefiore, a nationally ranked health system and the largest in the Bronx, has created an AI platform called PALM – the Patient-centered Analytics and Learning Machine – that streams and mines data in near real time.

A constant theme is finding ways to predict when a patient is about to take a turn for the worse so caregivers can intervene before an acute event. For example, clinicians have used PALM to predict which patients in the ICU are at risk for respiratory failure. They are also using PALM to predict sepsis and other HACs before they become problematic.

Eventually, Montefiore caregivers hope to extend this prediction model to patients outside the hospital, using a variety of data sources – including genetics and socioeconomic data – to determine who will develop chronic conditions as much as two years before they occur.

OptumLabs*: Using Data Science and Collaboration to Solve Health Care’s Biggest Problems

Partnering with more than 25 thought-leading health care institutions, including Mayo Clinic, AARP*, American Cancer Society, and UC Health, OptumLabs is passionate about discovering solutions that help address health care’s greatest challenges. OptumLabs uses a rich data set of over 200 million de-identified patient lives, including administrative claims, medical records and self-reported health information, to generate actionable insights and build predictive AI models that can lead to earlier patient diagnosis, lower costs, and better patient outcomes.

With the help of state-of-the-art data science and evolving AI-driven technologies, such as those powered by Intel Xeon processors, OptumLabs and its partners have produced impactful work. This includes creating new kinds of patient clusters for those with heart failure and COPD, and developing a dashboard of key performance indicators that helps organizations identify, benchmark and set performance targets in response to the drivers of the national opioid epidemic.

OptumLabs has also been using machine learning to predict Alzheimer’s disease and dementia years before diagnosis. Alzheimer’s is a growing problem, projected to affect more than 10 million Americans by 2040, with no cure currently in sight. OptumLabs has partnered with the Global CEO Initiative on Alzheimer’s Disease, among others, to identify predictive signals earlier and find people for clinical trials and other interventions. Initial models can identify signals of dementia four to eight years before first diagnosis, and the next phase of work strives to improve these results by using more nuanced clinical data and newer deep learning models.

Princeton Neuroscience Institute*: Treating Mental Disorders with AI Technologies

It used to be that the standard treatment for mental disorders such as clinical depression, anxiety, addiction (to smoking, for example) and PTSD took place on the couch, with clinicians using cognitive behavioral therapy to help their patients overcome issues.

Today, however, researchers at the Princeton Neuroscience Institute are using functional MRI (fMRI) to decode brain activity and deliver treatment for such afflictions in the form of real-time neurofeedback.

The neurofeedback from the decoding is used to update stimulus provided to the subject – in the form of visuals, audio and/or instructions to perform certain tasks – which then modifies a patient’s brain states.

The AI technologies include high-performance Intel Xeon processor clusters and an open source brain-imaging analysis kit called BrainIAK (brainiak.org), a performance-optimized Python library that can scale laptop-based decoding up to MPI clusters in the cloud.
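The outline below sketches the shape of such a closed loop: fit a decoder offline on labeled scans, then decode each incoming volume and adjust the stimulus. It is a generic illustration rather than BrainIAK’s actual API; the classifier, data shapes and feedback rule are simplified stand-ins.

```python
# Illustrative sketch of a real-time decoded-neurofeedback loop. This is a
# generic outline, not BrainIAK's API: the classifier, feature shapes, and
# feedback rule are all simplified stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_voxels = 500

# Offline: fit a decoder on labeled localizer data (brain state A vs. B).
train_scans = rng.normal(size=(100, n_voxels))
train_labels = rng.integers(0, 2, size=100)      # placeholder labels
decoder = LogisticRegression(max_iter=1000).fit(train_scans, train_labels)

# Online: decode each incoming volume and update the stimulus accordingly.
def neurofeedback_step(volume):
    p_target = decoder.predict_proba(volume.reshape(1, -1))[0, 1]
    # Hypothetical feedback rule: scale stimulus intensity with how far the
    # decoded state is from the desired target state.
    stimulus_intensity = 1.0 - p_target
    return p_target, stimulus_intensity

new_volume = rng.normal(size=n_voxels)           # one fMRI volume per TR
p, intensity = neurofeedback_step(new_volume)
print(f"decoded P(target state) = {p:.2f}; stimulus intensity = {intensity:.2f}")
```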

Stanford University*, UC Berkeley*, Intel, and Lucile Packard Children’s Hospital*: Ushering in the Next Generation of Medical Imaging

Arterys*, a company originating from scientists at Stanford University, has developed a new AI assistant for radiologists. This work focuses on quantitative medical imaging analysis in the areas of cardiac and oncologic imaging (lung, liver, etc.) and tracking changes over time. This permits rapid and more quantitative results from imaging exams across a range of clinical indications.

Now, the same team at Stanford has collaborated with researchers at UC Berkeley to develop fast image reconstruction algorithms (compressed sensing) that “smart-sample” the electromagnetic field to deliver high-resolution, low-noise images quickly, allowing improved deep learning-based segmentation of tumors.
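For readers unfamiliar with compressed sensing, the toy below recovers a sparse signal from far fewer measurements than unknowns using iterative soft-thresholding (ISTA). It is a minimal sketch: real MRI reconstruction operates on undersampled k-space data with transform-domain sparsity, and the sizes and penalty here are arbitrary.

```python
# Minimal sketch of compressed-sensing recovery via ISTA (iterative
# soft-thresholding). Real MRI reconstruction works on undersampled k-space
# with wavelet sparsity; this toy uses a random matrix and a sparse signal.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 80, 5                      # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)  # sensing matrix ("smart" sampling)
y = A @ x_true                            # undersampled measurements (m << n)

lam, step = 0.01, 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(500):                      # ISTA iterations
    x = x - step * A.T @ (A @ x - y)      # gradient step on the data term
    x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0)  # sparsity prox

print(f"relative error: {np.linalg.norm(x - x_true) / np.linalg.norm(x_true):.3f}")
```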

Stanford, collaborating with Intel, is currently using a small Intel Xeon processor-based server cluster for fast image computation, shortening exams and making them more feasible for children. Today, images from this technique can be reconstructed in about one minute, while the radiologist and technologists wait, compared with more than 45 minutes before. As applied to pediatrics, this faster processing saves the youngest and most vulnerable patients from the added risk of intubation and/or sedation usually required for the traditional imaging process.

University of California San Francisco* (UCSF): Detecting Disease Before Onset

Early in 2017, UCSF partnered with Intel to create a deep learning analytics platform designed to deliver clinical decision support and predictive analytics capabilities that would enable delivery of the right care to the right patient at the right time – ideally before the onset of disease. The collaboration brings together Intel’s leading-edge computer science and deep learning capabilities with UCSF’s clinical and research expertise to create a scalable, high-performance computational environment to support enhanced frontline clinical decision making for a wide variety of patient care scenarios.

The UCSF Center for Digital Health Innovation, under its SmarterHealth* initiative, has created a nexus of data, analytics, technology, methodology and scalable commercial partners to make artificial intelligence practical and useful at the point of care, to the benefit of patients worldwide.

Further, by integrating these advanced methodologies into medicine and health, we are demystifying the integration of man and machine. The outcome is the ability to harness incredible technological advances: blurring the line between medicine and health, truly personalizing the delivery of medicine and making it possible to predict health trajectories, all while lowering costs for all stakeholders.

SmarterHealth and its core enhanced patient data sets have already been used to create novel, useful algorithms that help practitioners “triage” trauma patients and optimize personalized clinical diagnosis. Other initiatives include point-of-care image reconstruction and recognition, intended both to inform clinicians about how to capture the “right image” and to use deep learning not only to map individual physiological structures but also to draw on serial imaging to help clinicians and health providers predict patient trajectories at the point of care.

Finally, by engaging leading academic and technology partners around the world through the SmarterHealth Artificial Intelligence in Medicine Consortium, we intend to co-develop and disseminate these actionable capabilities for the betterment of patients throughout the world.

1 https://www.datameer.com/company/datameer-blog/role-big-data-preventing-healthcare-fraud-waste-abuse/

2 https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/NationalHealthExpendData/NationalHealthAccountsHistorical.html

3 https://www.cdc.gov/chronicdisease/overview/index.htm

4 http://health.costhelper.com/emergency-room.html

5 http://www.realtorsinsurancemarketplace.com/how-much-does-it-cost-to-spend-a-day-in-the-hospital/

6 https://www.cancer.org/cancer/melanoma-skin-cancer/detection-diagnosis-staging/survival-rates-for-melanoma-skin-cancer-by-stage.html

Intel, Xeon, and the Intel logo are registered trademarks of Intel Corporation.


Media Alert: Shaping the Future of Healthcare through Artificial Intelligence (Video Replay)

Join Intel and leading healthcare organizations as they address the most pressing topics and challenges in healthcare today. Experts in the field will examine the industry’s biggest challenges, the enormous potential of artificial intelligence (AI) in democratizing cost-effective healthcare services, and the role AI is playing in helping to find cures for diseases. They will also consider the challenges on the road ahead.

WHEN: 9–11:35 a.m. PDT, Wednesday, March 21, 2018 (12–2:35 p.m. EDT)

WHO: Leaders in the field of healthcare and artificial intelligence from UCSF, AccuHealth, Intermountain Healthcare, Princeton University, Broad Institute of MIT & Harvard, Diaceutics, Mayo Clinic, Stanford University, Arterys, Montefiore Medical Center, Harvard University, and OptumLabs, as well as leaders in healthcare and artificial intelligence from Intel. (Full speaker lineups and agenda below.)


» Download “Solve: Healthcare — Intel Helps Shape the Future with Artificial Intelligence (Event Replay)”

Media Contact: Robin Holt, Intel Global Communications, robin.holt@intel.com

More: One Simple Truth about Artificial Intelligence in Healthcare: It’s Already Here (Navin Shenoy Editorial) | Artificial Intelligence (Press Kit) | Advancing Data-Driven Healthcare Solutions (Press Kit)

SOLVE Agenda

8:30–9 a.m. Check-in and Breakfast
9–9:05 a.m. Welcome Remarks

Speaker: Bryce Olson, global strategist, Health and Life Sciences Group, Intel

9:05–9:15 a.m. Introduction: AI Today

Intel’s AI solutions are helping to solve some of the most pressing healthcare challenges. Data holds the promise to help reduce costs, improve quality, increase access and pave the way to precision health.

Speaker: Navin Shenoy, executive vice president, general manager, Data Center Group, Intel

9:15–9:30 a.m. Demystifying AI

Learn about the work UCSF is doing to blur the line between medicine and health, creating seamless, non-invasive, personalized inferences that apply throughout our lives, in all settings around the world.

Speaker: Michael Blum, MD, UCSF associate vice chancellor for informatics; UCSF Health chief digital transformation officer; executive director, UCSF Center for Digital Health Innovation

9:30–10:15 a.m. Panel 1: How is AI Changing Healthcare Today?

Panelists will discuss the progress made in healthcare services, ranging from trauma care and the emergency room to imaging and precision diagnostics, and how AI is enabling healthcare clinicians, researchers and academics to achieve advances and breakthroughs today. Panelists will cover patient benefits, cost implications, time savings and more, drawing on concrete use cases and results.

Moderator: Jennifer Esposito, general manager, Health and Life Sciences Group, Intel

Speakers:

  • Xavier Urtubey, MD, MBA, co-founder & CEO, AccuHealth
  • Lonny Northrup, senior health informaticist, Intermountain Healthcare
  • Jonathan Cohen, MD, PhD, Princeton Neuroscience Institute
  • Rachael A. Callcut, M.D., M.S.P.H., director of data science and advanced analytics, UCSF Center for Digital Health Innovation; program director, SmarterHealth

10:15–10:55 a.m. Panel 2: AI Stepping Stones to the Future of Medicine

This panel will highlight recent cutting-edge AI developments in the healthcare industry that have the potential to become mainstream diagnostic/therapeutic offerings in the near future. Panelists will explore personalized medicine, gene therapy, imaging and other areas of treatment that are benefitting from advances in AI.

Moderator: Arjun Bansal, vice president, general manager, Artificial Intelligence Labs and Software, Intel

Speakers:

  • Lee Lichtenstein, associate director, Somatic Computational Methods, Broad Institute of MIT & Harvard
  • Juliesta Sylvester, managing director, Diaceutics
  • David Holmes, PhD, collaborative scientist and biomedical engineer, Mayo Clinic
  • Chris Hanes, PhD, vice president – data science, OptumLabs
  • Shreyas Vasanawala, MD, PhD, associate professor of radiology (pediatric radiology), Stanford; founder, Arterys

10:55–11:35 a.m. Panel 3: What Are the Barriers to AI Adoption in Healthcare?

While AI promises enormous benefits in healthcare, the road to mass adoption is fraught with challenges. Barriers such as patient privacy and HIPAA compliance, data security, the lack of transparency of AI models, and technical hurdles are proving daunting. In this wild west of new analytic frameworks, the panel will explore the thorny issues that must be overcome to ensure AI reaches its full potential in healthcare.

Moderator: Ted Willke, senior principal engineer, Intel Labs

Speakers:

  • Jayashree Kalpathy-Cramer, PhD, Harvard
  • Parsa Mirhaji, director, Center for Health Data Innovations; chief technology officer, NYC Clinical Data Research Network, Montefiore Medical Center
  • Morteza Mardani, research scientist, Stanford University Department of Electrical Engineering and Radiology


The Smallest VR Gaming Rig: Intel’s Newest NUC

Intel NUCs are tiny but powerful mini-PCs. Their popularity has grown rapidly since Intel introduced them in 2012. People use the Intel NUC — an acronym for Next Unit of Computing, pronounced like the first syllable in “knuckle” — in millions of systems worldwide, from home entertainment and gaming to digital security and airport signage. Learn more about the Hades Canyon NUC, which was introduced in January: Intel Launches Most Powerful NUC: Smallest VR-Capable System Ever

The main circuit board — also called the motherboard — of the new Intel Hades Canyon NUC. Introduced in January 2018, the Hades Canyon model is designed to support high-end virtual reality gaming and can power as many as six high-resolution 4K displays simultaneously. The large chip in the center is an 8th Generation Intel® Core™ i7 processor with Radeon™ RX Vega M graphics. (Credit: Walden Kirsch/Intel Corporation)


Intel Board of Directors Elects New Director and Extends Andy Bryant’s Term as Intel Chairman until 2019

Intel Corporation announced on Monday, March 19, 2018, that Risa Lavizzo-Mourey has been elected to Intel’s board of directors. (Credit: Intel Corporation)

SANTA CLARA, Calif., March 19, 2018 – Intel Corporation today announced that Risa Lavizzo-Mourey has been elected to Intel’s board of directors. She is the fifth new independent director added to Intel’s board since the beginning of 2016. The board also voted unanimously to extend Andy Bryant’s term as Intel chairman to ensure board continuity and a smooth integration for new directors. Bryant became Intel chairman in May 2012 and will stand for re-election at the company’s 2018 annual stockholders’ meeting. If elected, he will continue to serve as chairman until the conclusion of the company’s 2019 annual stockholders’ meeting.

“Risa knows how to lead a large organization tackling complex issues, and brings extensive public-company board experience. I look forward to her fresh insights and perspective,” said Intel Chairman Andy Bryant. “We’ve worked to make sure the board has the right skills and backgrounds to be strong stewards in our dynamic industry. I’m honored to continue serving alongside them, as Intel transforms to create more value for our customers and our owners.”

Dr. Lavizzo-Mourey has served as the Robert Wood Johnson Foundation PIK Professor of Population Health and Health Equity at the University of Pennsylvania since January 2018. From 2003 to 2017, she was the president and chief executive officer of the Robert Wood Johnson Foundation, the largest U.S. philanthropic organization dedicated to health. Dr. Lavizzo-Mourey is a member of the boards of directors of General Electric Co. and Hess Corp., and she previously served as a director at Genworth Financial Inc. and Beckman Coulter Inc.

She is also a member of the National Academy of Medicine, the board of regents of the Smithsonian Institution, and the board of fellows of Harvard Medical School. Dr. Lavizzo-Mourey holds an MBA from the University of Pennsylvania and an M.D. from Harvard Medical School.


Intel Diversity in Technology Initiative

Diversity is an integral part of Intel’s competitive strategy and vision. In January 2015, Intel announced the Diversity in Technology initiative, setting a bold hiring and retention goal to achieve full representation of women and underrepresented minorities in Intel’s U.S. workforce by 2020. The company also committed $300 million to support this goal and accelerate diversity and inclusion — not just at Intel, but across the technology industry at large. The scope of Intel’s efforts spans the entire value chain, from spending with diverse suppliers and diversifying its venture portfolio to better serving its markets and communities through innovative programs like Hack Harassment, which aims to combat online harassment.

2017 Diversity and Inclusion Midyear Report

News Articles

Oakland Unified School District

Fact Sheets & Backgrounders


Intel at 50: Historic Photos

As Intel counts down to its 50th anniversary on July 18, 2018, it will share photos from its history. They show the company’s first years, its growth and how its products change lives worldwide.

Return regularly to the Intel Newsroom as more photos are added every few weeks. Intel will also share stories about its first 50 years as the golden anniversary nears.

To track Intel’s progress toward the 50th anniversary on social media, follow the WeAreIntel Twitter account.


» Download all images (ZIP, 4 MB)


Security Exploits and Intel Products

Security researchers on Jan. 3 disclosed several software analysis methods that, when used for malicious purposes, have the potential to improperly gather sensitive data from many types of computing devices with different vendors’ processors and operating systems.

Intel is committed to product and customer security and to responsible disclosure.

News

The Newest:
March 15, 2018: Advancing Security at the Silicon Level


By Date:
Jan. 3, 2018: Intel Responds to Security Research Findings
Jan. 4, 2018: Intel Issues Updates to Protect Systems from Security Exploits
Jan. 4, 2018: Industry Testing Shows Recently Released Security Updates Not Impacting Performance in Real-World Deployments
Jan. 8, 2018: Intel CEO Addresses Security Research Findings during 2018 CES Keynote Address
Jan. 9, 2018: Intel Offers Security Issue Update
Jan. 10, 2018: Intel Security Issue Update: Initial Performance Data Results for Client Systems
Jan. 11, 2018: Intel’s Security-First Pledge
Jan. 11, 2018: Intel Security Issue Update: Addressing Reboot Issues
Jan. 17, 2018: Firmware Updates and Initial Performance Data for Data Center Systems
Jan. 22, 2018: Root Cause of Reboot Issue Identified; Updated Guidance for Customers and Partners
Feb. 7, 2018: Security Issue Update: Progress Continues on Firmware Updates
Feb. 14, 2018: Expanding Intel’s Bug Bounty Program: New Side Channel Program, Increased Awards
Feb. 20, 2018: Latest Intel Security News: Updated Firmware Available for 6th, 7th and 8th Generation Intel Core Processors, Intel Xeon Scalable Processors and More

Resources

Partner Announcements

Microsoft Azure: Securing Azure Customers from CPU Vulnerability
Google Security Blog: More Details About Mitigations for the CPU Speculative Execution Issue
Amazon AWS: Processor Speculative Execution Research Disclosure
Apple: About Speculative Execution Vulnerabilities in ARM-Based and Intel CPUs
