Visualize Large-Scale, Unstructured Data in Real Time for Faster Scientific Discoveries

To help develop better pacemakers, researchers at the Barcelona Supercomputing Center recently developed the world’s first comprehensive heart model.

It’s an amazing achievement, mimicking blood flow and muscle reaction based on the heart’s electrical signals. Nearly as daunting: visualizing and analyzing their huge 54 million tetrahedral-cell model.

When running simulations at scale, supercomputers generate petabytes of data. For scientists, visualizing an entire dataset with high fidelity and interactivity is key to gathering insights. But datasets have grown so vast that doing so has become difficult.

IndeX heart visualization

Tackling Scientific Visualization in the HPC Era

NVIDIA IndeX packs the performance needed to visualize these compute-heavy jobs. It works on large-scale datasets by distributing workloads across multiple nodes in a GPU-accelerated cluster.

With IndeX, there’s no need to throttle back frame rates to visualize and analyze volumetric data. And there’s no need for workarounds like batch rendering, which lose interactivity and show data in 2D.

IndeX lets users view their simulations’ entire datasets in real time.

ParaView Users Can Take Advantage of NVIDIA IndeX

Even better, users of ParaView, a popular HPC visualization and data analysis tool, can now take advantage of NVIDIA IndeX through the latest plug-in. ParaView is the go-to tool for analyzing a wide variety of simulation-based data, and it’s supported at all major supercomputing sites.

With IndeX in ParaView, scientists can interact with volume visualizations of both structured and unstructured data at scale, letting them analyze entire datasets in real time.
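
For scripted and batch workflows, the plug-in can also be driven from ParaView’s Python interface. Here’s a minimal sketch, assuming the plug-in library is named pvNVIDIAIndeX and that it exposes an ‘NVIDIA Index’ representation (both names can vary by ParaView version; the file and array names are hypothetical):

```python
# pvpython sketch; assumes a ParaView build with the NVIDIA IndeX plug-in.
from paraview.simple import *

# Load the plug-in on the client and the (possibly multi-node) server.
# The library name is an assumption; check Tools > Manage Plugins.
LoadDistributedPlugin('pvNVIDIAIndeX', remote=True, ns=globals())

# Read an unstructured-grid dataset, e.g. a tetrahedral heart mesh.
reader = OpenDataFile('heart_mesh.vtu')  # hypothetical file

view = GetActiveViewOrCreate('RenderView')
display = Show(reader, view)

# Hand the volume rendering to IndeX for interactive, distributed
# visualization of the full dataset.
display.Representation = 'NVIDIA Index'
ColorBy(display, ('POINTS', 'electrical_potential'))  # hypothetical array

Render()
```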

“High-interactivity visualization of our full dataset is key to gathering meaningful findings,” said Mariano Vazquez, high-performance computational mechanics group manager at the Barcelona Supercomputing Center. “The IndeX plug-in enabled us to visualize our 54 million tetrahedral-cell model in real time. Best of all, it fits right inside our existing ParaView workflow.”

Integrating IndeX as a plug-in inside ParaView allows users to take advantage of the powerful features of IndeX without learning a new tool. In addition, the workflow remains unchanged so users can focus on their research.

NVIDIA IndeX for ParaView Key Features

  • Renders structured and unstructured volume data
  • Depth-correct mixing with ParaView primitives
  • High interactivity for large datasets
  • Time-series visualization
  • Scales across multi-GPU, multi-node clusters
  • Open-source plug-in for custom versions of ParaView
  • Support for ParaView data formats

IndeX Plug-in for Workstations and HPC Clusters

There are two versions of the plug-in. For use on a workstation or a single server node, the plug-in is available at no cost. For performance at scale on a GPU-accelerated multi-node system, the Cluster edition of the plug-in is available at no cost to academic users and with a license for commercial users.

Get your plug-in now at nvidia.com/index.

Stop by the NVIDIA booth, H-730, at ISC this week and check out our HPC visualization demo showing real-time interactivity on a large, unstructured volume dataset.


How GPUs Can Kick 3D Printing Industry Into High Gear

Three-dimensional printing has opened up new possibilities in fields like manufacturing, architecture, engineering and construction.

But when the objects to be printed become complex, limitations kick in. Challenges in 3D printing include handling multiple colors, differing densities and mixes of materials.

At last month’s GPU Technology Conference, HP Labs and NVIDIA described how they’ve worked together to overcome these challenges using NVIDIA’s new GVDB Voxel open source software development kit.

Jun Zeng, principal scientist for HP Labs, and Rama Hoetzlein, lead architect for GVDB Voxels, presented a statue of a human figure with wings that combined these challenging elements.

Put simply, their goal was to 3D print the statue while adjusting material density to account for external forces. That increased structural integrity where it was needed, while minimizing the amount and weight of material required to produce it.

GVDB Voxels printed a 3D statue (L) of a complex image (R) with structural support and minimal material.

Zeng told a roomful of GTC attendees that HP Labs had started using GPUs to more quickly process 3D printing voxels (volumetric pixels — essentially pixels in 3D space). He anticipates that printing technology and scale will rapidly increase computing demands in the future.

NVIDIA’s GVDB Voxels SDK has eased the complexity of 3D printing workflows by offering a platform for large-scale voxel simulation and high-quality ray-traced visualizations. And it allows for continuous data manipulation throughout the process.

“Iteration can happen during infilling, or while analyzing and determining stress,” said Hoetzlein.

Hoetzlein said the SDK is designed for simple, efficient computation, simulation and rendering, even with sparse volumetric data. It includes a compute API that generates high-resolution data with a minimal memory footprint, and a rendering API that supports CUDA and NVIDIA OptiX pathways, allowing users to write custom rendering kernels.

The researchers’ effort started with a polygonal statue, which was subjected to a stress simulation before GVDB Voxels took over. The software converts the object into a model made of small voxel cubes, then optimizes the in-filling structure, varying its density based on the results of the stress simulation.

They found that combining GVDB Voxels with the latest Pascal architecture GPUs generated results 50 percent faster than the previous generation of GPUs, and up to 10x faster than CPU techniques. The SDK makes this possible by storing data only at the surface of the object, which reduces memory requirements without sacrificing resolution.
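
To see why surface-only storage pays off, here’s a toy sketch in plain Python (GVDB itself is a C++/CUDA SDK; this only illustrates the idea). Voxels are grouped into fixed-size bricks, and only bricks that touch the object’s surface are ever allocated:

```python
import math

BRICK = 8  # voxels per brick edge; GVDB uses a similar brick-based layout

def voxelize_sphere_surface(radius, res, thickness=1.0):
    """Store only voxels near the surface of a sphere of the given radius,
    grouped into sparse bricks keyed by brick coordinates."""
    bricks = {}  # (bx, by, bz) -> set of local voxel coordinates
    for x in range(res):
        for y in range(res):
            for z in range(res):
                # Distance of the voxel center from the sphere surface.
                d = math.sqrt((x - res/2)**2 + (y - res/2)**2 + (z - res/2)**2)
                if abs(d - radius) <= thickness:
                    key = (x // BRICK, y // BRICK, z // BRICK)
                    bricks.setdefault(key, set()).add(
                        (x % BRICK, y % BRICK, z % BRICK))
    return bricks

res = 64
bricks = voxelize_sphere_surface(radius=24, res=res)
dense_voxels = res ** 3
sparse_voxels = len(bricks) * BRICK ** 3  # storage actually allocated
print(f"dense: {dense_voxels} voxels, sparse bricks: {sparse_voxels} "
      f"({100 * sparse_voxels / dense_voxels:.1f}% of dense)")
```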

Zeng said that oftentimes the limitations of 3D printing devices dictate what designers can do. With the NVIDIA GVDB Voxels SDK, designers gain new flexibility.

More information is available at http://developer.nvidia.com/gvdb.


NVIDIA Delivers New Deep Learning Software Tools for Developers

To help developers meet the growing complexity of deep learning, NVIDIA today announced better and faster tools for our software development community. This includes a significant update to the NVIDIA SDK, which includes software libraries and tools for developers building AI-powered applications.

With each new generation of GPU architecture, we’ve continually improved the NVIDIA SDK. In keeping with that heritage, our software is Volta-ready.

Aided by developers’ requests, we’ve built tools, libraries and enhancements to the CUDA programming model to help developers accelerate and build the next generation of AI and HPC applications.

Chart of GPU ecosystem growth
The level of interest in GPU computing has exploded, fueled by advancements in AI.

The latest SDK updates introduce new capabilities and performance optimizations for GPU-accelerated applications:

  • New CUDA 9 speeds up HPC and deep learning applications with support for Volta GPUs, up to 5x faster performance for libraries, a new programming model for thread management, and updates to debugging and profiling tools.
  • Developers of end-user applications such as AI-powered web services and embedded edge devices benefit from 3.5x faster deep learning inference with the new TensorRT 3. With built-in support for optimizing both Caffe and TensorFlow models, developers can take trained neural networks to production faster than ever.
  • Engineers and data scientists can benefit from 2.5x faster deep learning training using Volta optimizations for frameworks such as Caffe2, Microsoft Cognitive Toolkit, MXNet, PyTorch and TensorFlow.

Here’s a detailed look at each of the software updates and the benefits they bring to developers and end users:

CUDA

CUDA is the fastest software development platform for creating GPU-accelerated applications. Every new generation of GPU is accompanied by a major update of CUDA, and version 9 includes support for Volta GPUs, major updates to libraries, a new programming model, and updates to debugging and profiling tools.
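
CUDA itself is programmed in C/C++, but the programming model is easy to illustrate from Python. A minimal sketch using Numba’s CUDA JIT (an assumption; it requires the numba package and a CUDA-capable GPU) that launches a kernel across a grid of threads:

```python
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    # Each CUDA thread computes one element of the result.
    i = cuda.grid(1)
    if i < out.size:
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](a, b, out)  # Numba copies the arrays to the GPU

assert np.allclose(out, a + b)
```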

Learn more about CUDA 9.

NVIDIA Deep Learning SDK

With the updated Deep Learning SDK optimized for Volta, developers have access to the libraries and tools that ensure seamless development and deployment of deep neural networks on all NVIDIA platforms, from the cloud or data center to the desktop to embedded edge devices. Deep learning frameworks using the latest updates deliver up to 2.5x faster training of CNNs, 3x faster training of RNNs and 3.5x faster inference on Volta GPUs compared to Pascal GPUs.

We’ve also worked with our partners and the communities so that the Caffe2, Microsoft Cognitive Toolkit, MXNet, PyTorch and TensorFlow deep learning frameworks will be updated to take advantage of the latest Deep Learning SDK and Volta.

This update brings performance improvements and new features to:

cuDNN

NVIDIA cuDNN provides high-performance building blocks for deep learning and is used by all the leading deep learning frameworks.

cuDNN 7 delivers 2.5x faster training of Microsoft’s ResNet50 neural network on the Volta-optimized Caffe2 deep learning framework. Apache MXNet delivers 3x faster training of OpenNMT language translation LSTM RNNs.
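
Applications usually reach cuDNN through a framework rather than calling it directly. As an illustration (a sketch, assuming PyTorch with a CUDA build), turning on cuDNN’s autotuner lets it benchmark its convolution algorithms for your layer shapes and cache the fastest one:

```python
import torch
import torch.nn as nn

# Let cuDNN benchmark its convolution algorithms for the given tensor
# shapes and cache the fastest choice (helps when shapes are fixed).
torch.backends.cudnn.benchmark = True

conv = nn.Conv2d(64, 128, kernel_size=3, padding=1).cuda()
x = torch.randn(32, 64, 56, 56, device='cuda')

# The first call triggers autotuning; later calls reuse the chosen algorithm.
with torch.no_grad():
    for _ in range(10):
        y = conv(x)
torch.cuda.synchronize()
print(y.shape)  # torch.Size([32, 128, 56, 56])
```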

The cuDNN 7 release will be available in July as a free download for members of the NVIDIA Developer Program. Learn more at the cuDNN website.

NCCL

Deep learning frameworks rely on NCCL to deliver multi-GPU scaling of deep learning workloads. NCCL 2 introduces multi-node scaling of deep learning training on up to eight GPU-accelerated servers. With the time required to train a neural network reduced from days to hours, developers can iterate and develop their products faster.
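
Frameworks likewise reach NCCL through their distributed back ends rather than calling it directly. A minimal sketch of the core pattern, assuming PyTorch’s torch.distributed with the ‘nccl’ backend and one process per GPU (launch and rendezvous details vary by cluster):

```python
import torch
import torch.distributed as dist

def main():
    # One process per GPU; rank and world size come from the launcher
    # (e.g. torchrun). NCCL handles the inter-GPU/inter-node transport.
    dist.init_process_group(backend='nccl')
    rank = dist.get_rank()
    torch.cuda.set_device(rank % torch.cuda.device_count())

    # Each rank holds its local gradients; all-reduce averages them
    # across every GPU in the job, the core of data-parallel training.
    grad = torch.ones(1024, device='cuda') * rank
    dist.all_reduce(grad, op=dist.ReduceOp.SUM)
    grad /= dist.get_world_size()

    if rank == 0:
        print('averaged gradient value:', grad[0].item())
    dist.destroy_process_group()

if __name__ == '__main__':
    main()
```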

Developers of HPC applications and deep learning frameworks will have access to NCCL 2 in July. It will be available as a free download for members of the NVIDIA Developer Program. Learn more at the NCCL website.

TensorRT

Delivering AI services in real time poses stringent latency requirements for deep learning inference. With NVIDIA TensorRT 3, developers can now deliver 3.5x faster inference performance — under 7 ms real-time latency.

Developers can optimize models trained in TensorFlow or Caffe deep learning frameworks and deploy fast AI services to platforms running Linux, Microsoft Windows, BlackBerry QNX or Android operating systems.
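
In practice the workflow has two phases: build an optimized engine from a trained model, then deploy that engine for inference. Here’s a sketch using TensorRT’s Python API in its modern, ONNX-based form; TensorRT 3 itself shipped Caffe and TensorFlow parsers with a different interface, so treat this purely as an illustration (the model file is hypothetical):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)

# Build phase: parse a trained network and optimize it into an engine.
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open('model.onnx', 'rb') as f:  # hypothetical exported model
    if not parser.parse(f.read()):
        raise RuntimeError(str(parser.get_error(0)))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # use Tensor Core-friendly FP16 where safe

engine_bytes = builder.build_serialized_network(network, config)

# Deploy phase: the serialized engine is what ships to the inference server.
with open('model.engine', 'wb') as f:
    f.write(engine_bytes)
```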

TensorRT 3 will be available in July as a free download for members of the NVIDIA Developer Program. Learn more at the TensorRT website.

NVIDIA DIGITS

DIGITS introduces support for the TensorFlow deep learning framework. Engineers and data scientists can improve productivity by designing TensorFlow models within DIGITS and using its interactive workflow to manage datasets and training, and to monitor model accuracy in real time. To decrease training time and improve accuracy, the update also provides three new pre-trained models in the DIGITS Model Store: Oxford’s VGG-16 and Microsoft’s ResNet50 for image classification tasks, and NVIDIA DetectNet for object detection tasks.

The DIGITS update with TensorFlow and the new models will be available for the desktop and the cloud in July as a free download for members of the NVIDIA Developer Program. Learn more at the DIGITS website.

Deep Learning Frameworks

The NVIDIA Deep Learning SDK accelerates widely used deep learning frameworks such as Caffe, Microsoft Cognitive Toolkit, TensorFlow, Theano and Torch, as well as many other deep learning applications. NVIDIA is working closely with leading deep learning framework maintainers at Amazon, Facebook, Google, Microsoft, the University of Oxford and others to integrate the latest NVIDIA Deep Learning SDK libraries and immediately take advantage of the power of Volta.

Caffe2

The Caffe2 team announced on its blog an update to the framework that brings 16-bit floating point (FP16) training to Volta, developed in collaboration with NVIDIA:

“We are working closely with NVIDIA on Caffe2 to utilize the features in NVIDIA’s upcoming Tesla V100, based on the next-generation Volta architecture. Caffe2 is excited to be one of the first frameworks that is designed from the ground up to take full advantage of Volta by integrating the latest NVIDIA Deep Learning SDK libraries — NCCL and cuDNN.”
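
The core idea behind FP16 training is to compute in half precision for speed while keeping an FP32 “master” copy of the weights, so small gradient updates aren’t lost to rounding. A toy sketch in plain NumPy, purely for illustration (this is not Caffe2’s implementation):

```python
import numpy as np

# FP32 "master" weights; the forward/backward math runs in FP16.
rng = np.random.default_rng(0)
true_w = np.array([0.1, -0.2, 0.3, 0.05], dtype=np.float32)
master_w = np.zeros(4, dtype=np.float32)
lr = 0.1

x = rng.standard_normal((128, 4)).astype(np.float16)
y = x.astype(np.float32) @ true_w

for step in range(200):
    w16 = master_w.astype(np.float16)        # cast weights down for compute
    pred = x @ w16                           # FP16 forward pass
    err = pred.astype(np.float32) - y        # accumulate the error in FP32
    grad = x.astype(np.float32).T @ err / len(x)
    master_w -= lr * grad                    # update the FP32 master copy

print('recovered weights:', master_w.round(3))  # approaches true_w
```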

MXNet

Amazon announced how it is working with NVIDIA to bring high-performance deep learning to AWS. As part of the announcement, it spoke about the work we’ve done together to bring Volta support to MXNet.

“In collaboration with NVIDIA, AWS engineers and researchers have pre-optimized neural machine translation (NMT) algorithms on Apache MXNet allowing developers to train the fastest on Volta-based platforms,” wrote Joseph Spisak, manager of Product Management at Amazon AI.

TensorFlow

Google shared the latest TensorFlow benchmarks on DGX-1 on their developers blog:

“We’d like to thank NVIDIA for sharing a DGX-1 for benchmark testing and for their technical assistance. We’re looking forward to NVIDIA’s upcoming Volta architecture, and to working closely with them to optimize TensorFlow’s performance there, and to expand support for FP16.”

At NVIDIA, we’re also working closely with Microsoft to optimize Microsoft Cognitive Toolkit and Facebook AI Research (FAIR) to optimize PyTorch on Volta.

NVIDIA GPU Cloud Deep Learning Stack

We also announced today NVIDIA GPU Cloud (NGC), a GPU-accelerated cloud platform optimized for deep learning.

NGC is designed for developers of deep learning-powered applications who don’t want to assemble and maintain the latest deep learning software and GPUs. It comes with the NGC Deep Learning Stack, a complete development environment that will run on PCs, DGX systems and the cloud, powered by the latest deep learning frameworks, the NVIDIA Deep Learning SDK and CUDA. The stack is fully managed by NVIDIA, so developers and data scientists can start with a single GPU on a PC and scale up to additional compute resources in the cloud.

Updates to NVIDIA VRWorks and DesignWorks

Learn more about the latest updates to some of our other SDKs:

DesignWorks GTC 2017 release

VRWorks Audio and 360 Video SDKs released at GTC


NVIDIA and SAP Partner to Create a New Wave of AI Business Applications

Businesses collect mountains of data daily. Now it’s time to make those mountains move.

NVIDIA CEO and founder Jensen Huang announced today at our GPU Technology Conference that SAP and NVIDIA are working together to help businesses use AI in ways that will change the world’s view of business applications.

Together, we’re combining the advantages of NVIDIA’s AI computing platform with SAP’s leadership in enterprise software.

“With strong partners like NVIDIA at our side, the possibilities are limitless,” wrote SAP Chief Innovation Officer Juergen Mueller in a blog post published today. “New applications, unprecedented value in existing applications, and easy access to machine learning services will allow you to make your own enterprise intelligent.”

SAP is leveraging advancements NVIDIA has made from GPUs to systems to software. Our Tesla GPU computing platform represents a $2 billion investment. The NVIDIA DGX-1 — announced just over a year ago and incorporating eight GPUs — is an integrated hardware and software supercomputer that’s the result of work by over a dozen engineering teams.

DGX-1, in turn, brings together an integrated suite of software, including the leading deep learning frameworks optimized with our NVIDIA Deep Learning SDK.

Here are three examples of our collaboration with SAP that we’re demonstrating at GTC:

Measuring the ROI of Brand Impact

Many brands rely on sponsorship of televised events, yet it’s very difficult to track the impact of those ads. With the current manual process, it takes the industry up to six weeks to report brand impact return on investment, and an entire quarter to adjust brand marketing expenditures.

SAP Brand Impact, powered by NVIDIA deep learning, measures brand attributes such as logo appearances in near real time and with superhuman accuracy, since AI isn’t bound by human constraints. This is made possible by deep neural networks trained on the NVIDIA DGX-1, with TensorRT providing fast video inference analysis.

Results are immediate, accurate and auditable. Delivered in a day.

As a long-term SAP customer, Audi got early access to the latest SAP solution powered by NVIDIA deep learning, explains Thomas Glas, global head of Audi sports marketing.

“Audi’s sponsorship team found the SAP Brand Impact solution a very useful tool. It can help Audi to evaluate its sponsorship exposure at high levels of operational excellence and transparency,” Glas said. “We were impressed by the capabilities and results of the first proof-of-concepts based on video footage from Audi FIS Alpine Ski World Cup. We’re strongly considering possibilities to combine SAP Brand Impact with our media analysis workflow for the upcoming Audi Cup and Audi FC Bayern Summer Tour.”

SAP Brand Impact screenshot
SAP Brand Impact — capturing brand logo placement in near real time. (Source: SAP)

The Future of Accounts Payable

Talk about a paper trail. A typical large manufacturing company processes 8 million invoices a year. Companies around the world still receive paper invoices that need to be processed manually. These manual processes are costly, time-consuming, repetitive and error-prone.

SAP used deep learning to train its Accounts Payable application, which automates the extraction and classification of relevant information from invoices without human intervention. A recurrent neural network is trained on NVIDIA GPUs to create this customized solution.

Records are processed in sub-seconds. Cash flow is sped up. Errors are reduced.

SAP Accounts Payable
Accounts Payable — Automatically loading accounts payable vendor details. (Source: SAP)

The Ticket to Customer Satisfaction

Eighty-one percent of companies recognize customer experience as a competitive differentiator, so why do just 13 percent rate their customer service at 9/10 or better? Companies struggle to keep up with their customers’ complaints and support issues with limited resources.

Using natural language processing and deep learning techniques on the NVIDIA GPU platform, the SAP Service Ticketing application helps companies analyze unstructured data and create automated rules to categorize and route service tickets to the right person.

The result: a faster response and an improved customer experience.

SAP Service Ticketing
SAP Service Ticketing — automatically tagging service tickets with the right category. (Source: SAP)
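
Under the hood, this is a text-classification problem: map a ticket’s free text to a routing category. A toy sketch of the idea, using a TF-IDF model from scikit-learn for brevity in place of the deep learning pipeline described above (all ticket data hypothetical):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled tickets: free text -> routing category.
tickets = [
    ('My invoice shows the wrong amount', 'billing'),
    ('Cannot log into my account', 'access'),
    ('The app crashes when I upload a photo', 'technical'),
    ('I was charged twice this month', 'billing'),
    ('Password reset email never arrives', 'access'),
    ('Error 500 when saving my profile', 'technical'),
]
texts, labels = zip(*tickets)

# TF-IDF features plus a linear classifier stand in for the NLP /
# deep learning pipeline described above.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

new_ticket = 'I was charged the wrong amount on my invoice'
print(model.predict([new_ticket])[0])  # expected: 'billing'
```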

See More at GTC, SAPPHIRE and on the Web

To learn more about the SAP demos at GTC, join us at booth 118. We’ll also be at SAP SAPPHIRE, in Orlando next week, to showcase five more applications.

If you can’t make it to either show, join us for our live webinar on how we’re bringing AI to the enterprise, on June 14 at 9 am Pacific.


Unleashing Creativity in Fashion

Whether it’s smart, connected accessories, immersive runway experiences or data-driven sales analytics, the rapid convergence of technology and fashion is evolving beyond the wrist, enabling brands to unleash creativity and letting consumers experience brand-new forms of personalization.

Intel is reinventing what it means to innovate in fashion by enabling designers and brands to deliver stylish wearables that consumers want, through accessible and versatile computing technology such as Intel Curie. By powering virtual reality and immersive retail experiences, Intel is also helping brands build new touchpoints with consumers through IoT and software solutions. From stunning runway collaborations to retail inventory tracking systems, Intel is empowering brands to enhance their end-to-end solutions with wearables and real-time insights, resulting in seamless experiences for consumers and brands alike.




Make Amazing Things Happen in IoT and Entrepreneurship with Intel Joule

Today during the Intel Developer Forum (IDF) opening keynote, Intel CEO Brian Krzanich introduced Intel® Joule™, a sophisticated maker board with an Intel® RealSense™ depth-sensing camera targeted at Internet of Things (IoT) developers, entrepreneurs and established enterprises. Intel Joule will be featured in the upcoming season of America’s Greatest Makers.

Several Intel partners, such as Microsoft and GE, are demonstrating potential applications of this technology this week at IDF, including French company PivotHead, which built augmented reality safety glasses for Airbus employees.

Intel Joule, a sophisticated maker board with an Intel RealSense depth-sensing camera targeted at Internet of Things developers, entrepreneurs and established enterprises. (Credit: Intel Corporation)

Intel Joule enables people to take a concept to prototype and then to production in a fraction of the usual time and development cost. The Intel Joule platform is a high-performance system-on-module (SOM) in a tiny, low-power package, making it ideal for computer vision, robotics, drones, industrial IoT, VR, AR, micro-servers and other applications that require high-end edge computing.

Intel Joule is available in two models: 570x and 550x. The Intel Joule 570x developer kit is available for sale at the 2016 Intel Developer Forum in San Francisco, and will begin shipping in September through Intel reseller partners.

For more on the technical specifications of Intel Joule or to learn more about the Intel partner demos on display at IDF, visit the Intel Joule fact sheet.


New Opportunities and Tech for Drone Developers and Enthusiasts

Aero Ready to Fly

Intel is focused on creating innovative new technologies and leading with key vision capabilities in the unmanned aerial vehicle (UAV) segment, commonly referred to as drones. At the Intel Developer Forum (IDF) today, Intel is hosting a panel with drone industry leaders including Ronie Gnecco, innovation manager for UAV Development & Applications, Airbus; Earl Lawrence, director, Unmanned Aircraft Systems Integration Office, Federal Aviation Administration; Shan Phillips, CEO, Yuneec USA; and Art Pregler, UAS program director, AT&T. They were joined by Intel drone experts Anil Nanduri and Natalie Cheung to discuss how new drone technologies and capabilities present new opportunities for drone developers. Chief among these are the opportunities opened up by the additional drone-related announcements Intel made today at the show.

Intel® Aero Platform for UAVs: Pre-orders are open for the Intel Aero Platform for unmanned aerial vehicles. Designed from the ground up to support drones, the UAV developer kit is powered by an Intel® Atom™ quad-core processor. It combines compute, storage, communications and flexible I/O, all in a form factor the size of a standard playing card. When matched with the optional Vision Accessory Kit, developers will have opportunities to launch sophisticated drone applications. The Aero Ready To Fly drone is a fully assembled quadcopter with a compute board and integrated depth and vision capabilities using Intel® RealSense™ Technology — the fastest path available from Intel for developers to get applications airborne. The Aero Ready To Fly Drone supports several “plug and play” options, including a flight controller with Dronecode PX4 software, Intel RealSense for vision and the AirMap SDK for airspace services. The Aero compute board is available for $399 at click.intel.com. The Aero Ready To Fly Drone will be available by the end of the year.

Yuneec with Intel® RealSense™ Technology: Today, Intel showcased the award-winning Yuneec Typhoon H with Intel RealSense Technology, which uses intelligent obstacle navigation not only to avoid objects, but also to plot an alternative course around them. The Typhoon H with Intel RealSense is available now for $1,899.

Continue exploring Intel’s developments in aerial technology, and learn more about Intel’s upcoming programs and announcements on Intel’s aerial technology web page.

Yuneec Typhoon H


IDF 2016 – Top Activities You Shouldn’t Miss!

Intel is bringing leaders who are shaping the future of technology to speak directly to media and other participants at this year’s Intel Developer Forum (IDF) in San Francisco. Attendees will hear from top executives at Intel, industry leaders and our developers on the future of key growth areas.

Keep reading for highlights of this year’s IDF that you won’t want to miss. Visit us at newsroom.intel.com/2016-idf for updates throughout the show. We look forward to seeing you there.

DAY 1 – TUESDAY, AUGUST 16

 

Brian Krzanich’s Opening Keynote

Intel’s CEO will showcase how the boundaries of computing are expanding as billions of smart, connected devices, new data-rich services and cloud apps fueled by the Internet of Things (IoT) bring new and exciting experiences to our lives. He will highlight how Intel, together with developers, makers and innovators, can transform Intel technologies into the next generation of amazing experiences in autonomous driving, virtual reality, artificial intelligence and 5G connectivity.

  • When: Tuesday, August 16, 9:00 – 10:30 a.m.
  • Where: Moscone West Convention Center, Level 3

 

Virtual Reality – Beyond Boundaries Where No Computer or Person Has Gone Before

Check out demos in the Technology Showcase on how virtual reality is delivering truly immersive experiences. Achin Bhowmik, Intel vice president and general manager of the Perceptual Computing Group in the New Technology Group, will also present a deep-dive Technical Insights session on how Intel® RealSense™ technology brings human-like sensing to devices, enabling a new level of intelligence, interaction and immersion.

Technology Showcase

  • When: Tuesday & Wednesday, August 16 & 17, 11 a.m. – 7 p.m.; Thursday, August 18, 11 a.m. – 2 p.m.
  • Where: Moscone West Convention Center, 1/F Technology Showcase

Technical Insights Session

  • When: Wednesday, August 17, 1:15 – 2:15 p.m.
  • Where: NDSTI01 — Intel® RealSense™ Technology: Adding Human-like Sensing to Devices, Moscone West Convention Center, Level 2, Room 2016

Learn more »

 

Drone Challenge – Watch Yuneec Typhoon H with Intel RealSense Technology Demonstrate Intelligent Obstacle Avoidance and “Follow Me” Features

Drone pilots will showcase intelligent obstacle avoidance and “follow me” features in the Yuneec Typhoon H with Intel RealSense technology in a drone cage atop a nearby building. Attendees can walk/run along with the drone as it successfully navigates typical park obstacles.

  • When: Tuesday & Wednesday, August 16 & 17, 10:30 a.m. – 5:30 p.m.
  • Where: Metreon, 4th floor

Learn more »

 

5G: A Transformative Force Across Industries

5G will be a massive transformative force across industries and businesses. It will usher in new services – such as the mobile Internet of Things, drone delivery, self-driving vehicles and virtual reality. During this technical session, Intel executives Asha Keddy and Sandra Rivera will discuss how 5G transformation is being enabled by service providers that incorporate new data-oriented network elements to create new revenue-generating services that broaden their ecosystem and value chain.

  • When: Tuesday, August 16, 1:15 – 2:15 p.m.
  • Where: R5GBI01 — 5G: A Transformative Force Across Industries – Business Insights, Moscone Center, Level 2, Room 2016

Learn more »

 

Advanced Analytics – Trends, Challenges, Opportunities

In this industry panel, data scientists and business experts will discuss the opportunities and challenges of adopting advanced analytics such as big data and machine learning. The discussion will focus on mainstream and novel use cases for applying analytics and machine learning to business, as well as top industry challenges that software developers in the analytics community can address.

  • When: Tuesday, August 16, 2:30 – 3:30 p.m.
  • Where: Moscone West Convention Center, Level 2, Room 2016

Learn more »

 

DAY 2 – WEDNESDAY, AUGUST 17

 

Murthy Renduchintala and Diane Bryant Keynotes

Dr. Venkata “Murthy” Renduchintala will discuss the innovations driving the next revolution in technology as we shift to a truly connected world: pervasive computing, cloud-like capabilities for compute, analytics and storage, and connectivity as the lifeblood of the Internet of Things. Diane Bryant will share thoughts on the future of cloud computing and silicon photonics, as well as the future of data as the artificial intelligence revolution expands our insights.

  • When: Wednesday, August 17, 9:00 – 10:30 a.m.
  • Where: Moscone West Convention Center, Level 3

 

DAY 3 – THURSDAY, AUGUST 18

 

Future-Proof FPGA – Experts Discuss How Programmable Logic Addresses the Demands of Smart & Connected Devices

The use of FPGAs for specialized computing is on the verge of a tremendous breakthrough, especially with the rapid growth in machine learning for big data analytics. Hear Intel’s vision for FPGAs at the Intel SoC FPGA Developer Forum (ISDF). ISDF provides embedded system designers insights into the tools and technologies available for FPGA-based system development.

  • When: Thursday, August 18
  • Where: Moscone West Convention Center, Level 2, Room 2016
  • Time: ISDF sessions start at 9:00 a.m. with CEO Brian Krzanich keynote at 11:00 a.m.

 

CONTACT: Krystal Temple, Intel Corporation


The Foundation of Artificial Intelligence


Artificial Intelligence (AI):
Intelligence exhibited by machines

Intel is a company that powers the cloud and billions of smart, connected computing devices. Thanks to the pervasive reach of cloud computing, the ever decreasing cost of compute enabled by Moore’s Law, and the increasing availability of connectivity, these connected devices are generating millions of terabytes of data every single day. The ability to analyze and derive value from that data is one of the most exciting opportunities for us all. Central to that opportunity is artificial intelligence.

While artificial intelligence is often equated with great science fiction, it isn’t relegated to novels and movies. AI is all around us, from the commonplace (talk-to-text, photo tagging, fraud detection) to the cutting edge (precision medicine, injury prediction, autonomous cars). Encompassing compute methods like advanced data analytics, computer vision, natural language processing and machine learning, artificial intelligence is transforming the way businesses operate and how people engage with the world.

Machine learning, and its subset deep learning, are key methods for the expanding field of AI. Intel processors power >97% of servers deployed to support machine learning workloads today. The Intel® Xeon® processor E5 family is the most widely deployed processor for deep learning inference, and the recently launched Intel® Xeon Phi™ processor delivers the scalable performance needed for deep learning training. While less than 10% of servers worldwide were deployed in support of machine learning last year, the capabilities and insights it enables make machine learning the fastest growing form of AI.

Adding Nervana Systems to the Intel AI Portfolio

Intel’s Diane Bryant with Nervana’s co-founders Naveen Rao, Arjun Bansal and Amir Khosrowshahi, and Intel vice president Jason Waxman

Success in this space requires continued innovation to deliver an optimized, scalable platform providing the highest performance at the lowest total cost of ownership. Today, I’m excited to announce that Intel signed a definitive agreement to acquire Nervana Systems, a recognized leader in deep learning[1]. Founded in 2014 and headquartered in San Diego, California, Nervana has a fully optimized software and hardware stack for deep learning. Their IP and expertise in accelerating deep learning algorithms will expand Intel’s capabilities in the field of AI. We will apply Nervana’s software expertise to further optimize the Intel Math Kernel Library and its integration into industry standard frameworks. Nervana’s Engine and silicon expertise will advance Intel’s AI portfolio and enhance the deep learning performance and TCO of our Intel Xeon and Intel Xeon Phi processors.

At Intel we believe in the power of collaboration: the goodness inherent in exchanging fresh ideas and diverse points of view. We believe that by bringing together the Intel engineers who create the Intel Xeon and Intel Xeon Phi processors with the talented Nervana Systems team, we will be able to advance the industry faster than would otherwise have been possible. We will continue to invest in leading-edge technologies that complement and enhance Intel’s AI portfolio.

We will share more about artificial intelligence and the amazing experiences it enables at our Intel Developer Forum next week. I hope to see you there!

Diane Bryant is executive vice president and general manager of the Data Center Group at Intel.


[1] Transaction is subject to certain regulatory approvals and customary closing conditions.
