Intel Study: Applying Emerging Technology to Solve Environmental Challenges

By Todd Brady

Technology and environmental sustainability leaders must work together on collaborative solutions to unlock the power of emerging technology to address the challenges of environmental sustainability, including those related to climate change and responsible water management.

These aren’t small or simple tasks. For example, with droughts resulting in billions of dollars’ worth of damage, access to clean water is an important global issue.

To better understand how emerging technologies can be applied today and in the future, Intel and the research firm Concentrix recently conducted a study of more than 200 business decision-makers working in environmental sustainability. The study revealed that the majority are optimistic about the power of these technologies: 74 percent of respondents agree that artificial intelligence (AI) will help solve long-standing environmental challenges; 64 percent agree that the internet of things (IoT) will help solve these challenges.

Despite the promise technology holds, the survey also reveals barriers preventing broader adoption of these solutions for sustainability. Respondents cite cost as the top challenge for implementation at 33 percent, followed by regulatory approval at 17 percent. Additionally, just under half of respondents in the survey say they don’t know about or aren’t using emerging tech to support their water conservation strategies.

At GreenBiz’s Verge Conference: Where Technology Meets Sustainability, we convened a workshop with sustainability and technology leaders from a variety of organizations spanning academia, the Fortune 500, private-sector companies and environmental nonprofits to examine these survey findings and discuss solutions to perceived barriers. Ideas included:

  • Realizing that cost is not a barrier, but rather one of the fundamental reasons to adopt emerging technology solutions as they have the potential for long-term savings.
  • Creating shared supply chain goals and standards while also collaborating with customers to create greater environmental impact.
  • Sharing learnings from past mistakes through ongoing collaboration to build new systems that are more resilient and efficient.

To accelerate the deployment of these solutions and realize their full potential, public and private organizations need to reduce the barriers to implementation and bridge the awareness gaps around cost-effective solutions that already exist today. In doing so, we’ll unlock greater access to environmental data for decision-making and gain new insights into our collective impact on the environment.

As a leader in emerging technologies – from AI to IoT to 5G communications – Intel is uniquely suited to build the foundation that will enable innovations across the environmental sustainability field. For the last few years, we’ve been working with partners to develop solutions to address many environmental challenges, such as smart city IoT technology, digital solutions that use natural resources more efficiently, and smart, green buildings.

Last year, we announced our commitment to restoring 100 percent of our global water use by 2025. One year in, we have funded 14 water projects across California, Arizona, New Mexico and Oregon that, once complete, are expected to achieve approximately 56 percent of our goal. For example, in central Arizona, we have been working with a local farmer to pilot IoT sensors that monitor soil moisture and local weather conditions, with the aim of reducing water use.

In Costa Rica, Intel is using drones to create 3D models of forest canopies, gathering information about tree health, height, biomass and other factors to estimate the amount of carbon the trees store. The result is highly precise information about carbon capture that can be applied to scientific research, forest management, conservation, monitoring and other uses.

As a society, we continue to face enormous environmental challenges. However, the actions we take today can arm us with the tools to adapt to our changing world, preserving our natural resources and quality of life for future generations.

And here at Intel, we will continue to encourage collaboration across organizations. It’s the only way to drive true transformation and create positive change for the environment.

To see full details of our progress to date, read Intel’s most recent Corporate Responsibility Report.

Todd Brady is director of Global Public Affairs and Sustainability at Intel Corporation.

Photo caption: Internet of things sensors at a hazelnut orchard in Oregon monitor moisture and reduce water use. (Credit: Intel Corporation)

The post Intel Study: Applying Emerging Technology to Solve Environmental Challenges appeared first on Intel Newsroom.

NVIDIA Sets Six Records in AI Performance

NVIDIA has set six AI performance records with today’s release of the industry’s first broad set of AI benchmarks.

Backed by Google, Intel, Baidu, NVIDIA and dozens more technology leaders, the new MLPerf benchmark suite measures a wide range of deep learning workloads. Aiming to serve as the industry’s first objective AI benchmark suite, it covers such areas as computer vision, language translation, personalized recommendations and reinforcement learning tasks.

NVIDIA achieved the best performance in the six MLPerf benchmark categories in which it submitted results. These cover a variety of workloads and infrastructure scales, ranging from 16 GPUs on a single node to 640 GPUs across 80 nodes.

The six categories are image classification, object instance segmentation, object detection, non-recurrent translation, recurrent translation and recommendation systems. NVIDIA did not submit results for the seventh category, reinforcement learning, which does not yet take advantage of GPU acceleration.

A key benchmark on which NVIDIA technology performed particularly well was language translation, training the Transformer neural network in just 6.2 minutes. More details on all six submissions are available on the NVIDIA Developer news center.

NVIDIA engineers achieved their results on NVIDIA DGX systems, including NVIDIA DGX-2, the world’s most powerful AI system, featuring 16 fully connected V100 Tensor Core GPUs.

NVIDIA is the only company to have entered as many as six benchmarks, demonstrating the versatility of V100 Tensor Core GPUs for the wide variety of AI workloads deployed today.

“The new MLPerf benchmarks demonstrate the unmatched performance and versatility of NVIDIA’s Tensor Core GPUs,” said Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA. “Exceptionally affordable and available in every geography from every cloud service provider and every computer maker, our Tensor Core GPUs are helping developers around the world advance AI at every stage of development.”

State-of-the-Art AI Computing Requires Full Stack Innovation

Performance on complex and diverse computing workloads takes more than great chips. Accelerated computing is about more than an accelerator. It takes the full stack.

NVIDIA’s stack includes NVIDIA Tensor Cores, NVLink, NVSwitch, DGX systems, CUDA, cuDNN, NCCL, optimized deep learning framework containers and NVIDIA software development kits.

NVIDIA’s AI platform is also the most accessible and affordable. Tensor Core GPUs are available on every cloud and from every computer maker and in every geography.

The same power of Tensor Core GPUs is also available on the desktop with NVIDIA TITAN RTX, the most powerful desktop GPU, priced at $2,500. Amortized over three years of continuous use, that works out to roughly 10 cents per hour.
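For reference, here is the back-of-the-envelope arithmetic behind that figure, assuming round-the-clock use over three years:

```python
# Back-of-the-envelope amortization of a $2,500 TITAN RTX over three years of
# continuous (24/7) use. Illustrative arithmetic only, not an official pricing tool.
price_usd = 2500
hours = 3 * 365 * 24                      # 26,280 hours in three years
cost_per_hour = price_usd / hours
print(f"{cost_per_hour:.3f} USD per hour")  # ~0.095 USD, i.e. roughly 10 cents
```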

And the software acceleration stacks are always kept up to date on the NVIDIA GPU Cloud (NGC) container registry.

NVIDIA’s Record-Setting Platform Available Now on NGC

The software innovations and optimizations used to achieve NVIDIA’s industry-leading MLPerf performance are available free of charge in our latest NGC deep learning containers. Download them from the NGC container registry.

The containers include the complete software stack and the top AI frameworks, optimized by NVIDIA. Our 18.11 release of the NGC deep learning containers includes the exact software used to achieve our MLPerf results.

Developers can use them everywhere, at every stage of development:

  • For data scientists on desktops, the containers enable cutting-edge research with NVIDIA TITAN RTX GPUs.
  • For workgroups, the same containers run on NVIDIA DGX Station.
  • For enterprises, the containers accelerate the application of AI to their data in the cloud with NVIDIA GPU-accelerated instances from Alibaba Cloud, AWS, Baidu Cloud, Google Cloud Platform, IBM Cloud, Microsoft Azure, Oracle Cloud Infrastructure and Tencent Cloud.
  • For organizations building on-premise AI infrastructure, NVIDIA DGX systems and NGC-Ready systems from Atos, Cisco, Cray, Dell EMC, HP, HPE, Inspur, Lenovo, Sugon and Supermicro put AI to work.

To get started on your AI project, or to run your own MLPerf benchmark, download containers from the NGC container registry.

The post NVIDIA Sets Six Records in AI Performance appeared first on The Official NVIDIA Blog.

Putting Their Foot Down: GOAT Uses AI to Stomp Out Fake Air Jordan and Adidas Yeezy Sneakers

Sneaker aficionados invest hundreds of dollars into rare Nike Air Jordans and the hottest Kanye West Adidas Yeezys. But scoring an authentic pair amid a crush of counterfeits is no slam dunk.

Culver City, Calif., startup GOAT (a nod to the sports shorthand for “greatest of all time”) operates the world’s largest sneaker marketplace that uses AI to stomp out fakes. The company offers a seal of authenticity for shoes approved for sale on its site.

Counterfeit sneakers are rampant online for some of the most sought-after basketball brands.

“Yeezys and Jordans are now the most faked shoes in the world, and over 10 percent of all sneakers sold online are fake,” said Michael Hall, director of data at GOAT.

A pair of sought-after Kanye West Adidas Yeezys or Nike Air Jordans can easily set you back more than $300.

Pop culture interest in iconic shoes developed for sports stars and celebrity rappers is fueling instant sellouts in new releases. Meanwhile, there’s a heated aftermarket for the most popular footwear fashions as well as scarce vintage and retro models.

As a result, sneaker fans and novices alike are turning to a new wave of shoe sellers, such as GOAT, to ensure they’re getting an authentic pair of the most sought-after shoes.

GOAT pioneered the ship-to-verify model in the sneaker industry. This means that sellers can list any shoes on GOAT’s marketplace, but shoes that sell are first sent to the company for authentication by its image detection AI. If the shoes are found to be replicas or not as described, they don’t ship and buyers are given a refund.

Founded in 2015, GOAT is booming. The startup, which has expanded to more than 500 employees, attracts more than 10 million users and has the largest catalog of sneakers in the world at 35,000 SKUs. This year, the company merged with Flight Club, a sneaker consignment store with locations in Los Angeles, New York City and Miami.

GOAT’s popular app and website — some users have sold more than $10 million in sneakers — have helped the startup secure nearly $100 million in venture capital funding. The company is a member of NVIDIA’s Inception program, which offers technical guidance to promising AI startups.

AI to Kick Out Counterfeits

When you’re offering 35,000 unique styles, tracking down counterfeit sneakers is no small challenge. GOAT has teams of sneaker experts trained in the art of spotting replicas without AI. “They can spot a fake in like 10 seconds,” said Emmanuelle Fuentes, lead data scientist at GOAT.

Image recognition assists GOAT’s teams of authenticators and quality assurance representatives in identifying and authenticating shoes in the warehouse. And the more metadata GOAT’s experts provide to train the AI as they work, the more useful it becomes to everyone vetting sneakers.

A long list of data signals is fed into GPU instances in the cloud, both for identification and for training the network. GOAT’s convolutional neural networks are trained for anomaly and fraud detection.

GOAT, which maintains multiple neural networks dedicated to different sneaker brands, provides proprietary tools to help its authenticators upload data to train its identification networks.

GPUs Slam Dunk

Tracking and sharing expertise on so many models of high-end sneakers requires logging a ton of photos of authentic pairs to assist team members handling shoes sent in for verification.

“The resolution that we are capturing things at and the scale that we are capturing the images — it’s a high-resolution, massive computational challenge requiring GPUs,” said Fuentes.

GOAT turned to NVIDIA TITAN Xp GPUs and NVIDIA Tesla GPUs on P2 instances of AWS running the cuDNN-accelerated PyTorch deep learning framework to initially train their neural networks on 75,000 images of authentic sneakers.
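GOAT hasn’t published its model code, but the general approach described here — fine-tuning a pretrained convolutional network to separate authentic pairs from replicas — can be sketched in PyTorch. The data layout, class names and hyperparameters below are hypothetical, not GOAT’s actual pipeline:

```python
# Illustrative sketch only: fine-tune a pretrained CNN to classify sneaker photos
# as authentic vs. replica. Directory layout, classes and hyperparameters are
# hypothetical placeholders, not GOAT's production system.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Expects a hypothetical layout: data/sneakers/{authentic,replica}/*.jpg
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("data/sneakers", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and replace the classifier head.
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)   # authentic / replica
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")
```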

The company relies on the power of GPUs for identification of all of its sneaker models, Hall added. “For some of the most-coveted sneakers, there are more inauthentic pairs than real ones out there. Previously there wasn’t a way for sneakerheads to purchase with confidence,” he said.

The post Putting Their Foot Down: GOAT Uses AI to Stomp Out Fake Air Jordan and Adidas Yeezy Sneakers appeared first on The Official NVIDIA Blog.

2019 CES

Intel technology is the foundation for the world’s most important innovations and advances.

At CES 2019, we will share our vision for the future of computing and explore advancements in the client, network, cloud and edge — designed to power the next era of computing in areas including PC innovation, artificial intelligence, 5G connectivity and autonomous driving.

Others predict the future. We’re building it.

ALL CES NEWS

MEDIA RESOURCES

Intel at 2018 CES


» Download all DAY 1 images (ZIP, 198 MB)
» Download all DAY 2 images (ZIP, 38 MB)
» Download all DAY 3 images (ZIP, 2 MB)

The post 2019 CES appeared first on Intel Newsroom.

Media Alert: Intel to Showcase Technology Innovation for the Next Era of Computing at CES 2019

Gregory Bryant (left) and Navin Shenoy will lead Intel’s news conference at 4 p.m. Jan. 7, 2019, at 2019 CES.

Intel technology is the foundation for many of the world’s most important innovations and advances. At CES 2019, Intel will highlight the future of computing and explore advancements in the client, network, cloud and edge designed to unlock human potential.

Intel News Conference – “Innovations and the Compute Foundation”

Join Intel executives, Client Computing Group Senior Vice President Gregory Bryant and Data Center Group Executive Vice President Navin Shenoy, who will take the stage to showcase news related to innovations in client computing, data center, artificial intelligence, 5G and more. During the event, Intel will touch on how expanding technology capabilities have a direct impact on human experiences. Please note, there is limited seating. Doors open to press and analysts at 3:30 p.m. PST.

Where: Mandalay Bay South Convention Center
Level 2, Ballrooms E & F
When: Jan. 7, 4-4:45 p.m. PST
Livestream: Watch on the Intel Newsroom

Intel Press Breakfast & Booth Preview

Mix and mingle with Intel executives and take part in a guided tour of Intel’s booth to see the latest Intel technology in action – before the CES show floor opens to attendees. The CES opening keynote will be livestreamed at 8:30 a.m., and a light continental breakfast will be served. Please note, only credentialed press will be allowed access, and registration is requested at the link below.

Where: Las Vegas Convention Center
Intel Booth in Central Hall South (Booth #10048)
When: Jan. 8, 7:30-9:30 a.m. PST


» Register for the Press Breakfast

Mobileye Press & Customer Conference – “An Hour with Amnon”

Join Mobileye CEO Amnon Shashua as he delivers a “state of the state” on automated driving technologies, along with a look at how these technologies are being delivered globally. Shashua will touch on Mobileye’s unique perspective on vision and mapping technologies, along with the company’s proposed model for industry-wide safety standards.

Where: Las Vegas Convention Center Room S228
When: Jan. 8, 11:30 a.m.-12:30 p.m. PST


Visit Intel in the Central Hall South (Booth #10048)

Stop by our booth for an up-close look at how Intel’s technology is helping power the smart, connected data-centric world and how it is helping accelerate the expansion of human potential.

Where: Las Vegas Convention Center
Central Hall South, C2 lobby entrance
When: Jan. 8, 10 a.m.-6 p.m. PST
Jan. 9 and Jan. 10, 9 a.m.-6 p.m. PST
Jan. 11, 9 a.m.-4 p.m. PST


Throughout CES:

  • Recharge in Intel’s Media-Only Lounge: Located upstairs in the Intel booth, the media-only lounge offers comfortable seating, snacks, drinks and dedicated internet access. Stop by to relax between sessions.
  • Spotlight Sessions: Throughout the week, Intel will host Spotlight Sessions at the Intel booth. They will focus on specific topics including 5G, autonomous driving, artificial intelligence and more. Final schedule will be uploaded to the CES press kit on the Intel Newsroom prior to the show.
  • Booth Demonstrations: Experience Intel technology and products.

Can’t Make It to CES 2019?

Visit our newsroom at newsroom.intel.com/2019-CES and follow us on social media at @IntelNews, @Intel and www.facebook.com/intel.

Media Contacts

Laurie Smith DeJong
(503) 313-6891
laurie.smith.dejong@intel.com

Erica Pereira Kubr
(415) 471-4970
erica.pereira.kubr@intel.com

The post Media Alert: Intel to Showcase Technology Innovation for the Next Era of Computing at CES 2019 appeared first on Intel Newsroom.

IBM and NVIDIA Deliver Proven Infrastructure for the AI Enterprise

A deluge of data is fueling AI innovation. It’s a trend that shows no sign of slowing.

As organizations in every industry around the globe attempt to streamline their data pipelines and maximize data science productivity, one challenge looms large: implementing AI initiatives effectively.

This is where IBM SpectrumAI with NVIDIA DGX comes in.

At the core of data science productivity is the infrastructure and software used for building and training machine learning and deep learning workloads.

With IBM and NVIDIA’s new converged infrastructure offering, organizations can take advantage of integrated compute, storage and networking. It combines the latest systems and software to support the complete lifecycle of AI — from data preparation to training to inference.

IBM SpectrumAI with NVIDIA DGX is built on:

  • IBM Spectrum Scale v5: Software-defined to streamline data movement through the AI data pipeline
  • NVIDIA DGX-1 servers: Purpose-built for AI and machine learning
  • NVIDIA DGX software stack: Optimized for maximum GPU training performance
  • Proven data performance: Over 100 GB/s of throughput, supporting up to nine DGX-1 servers in a single rack

IBM SpectrumAI with NVIDIA DGX helps businesses deploy AI infrastructure quickly, efficiently and with top-tier performance — and it’s easier for IT teams to manage.

It’s the latest addition to our NVIDIA DGX-1 Reference Architecture lineup, which includes data center solutions from select storage technology partners. The solutions help enterprises and their data science teams:

  • Focus on innovation, research and transforming the world through their AI initiatives.
  • Minimize the design complexities behind architectures optimized for AI workloads.
  • Effortlessly scale AI workloads with predictable performance that’s also cost effective.

IBM software-defined storage offers performance, flexibility and extensibility for the AI data pipeline. NVIDIA DGX-1 provides the fastest path to machine learning and deep learning. Pairing the two results in an integrated, turnkey AI infrastructure solution with proven productivity, agility and scalability.

Register for our joint webinar on Jan. 29 to learn more.

The post IBM and NVIDIA Deliver Proven Infrastructure for the AI Enterprise appeared first on The Official NVIDIA Blog.

AI Makes a Splash: Parisian Trio Navigates Autonomous Cargo Ships

It started as friends hacking a radio-controlled boat over a weekend for fun. What happened next is classic Silicon Valley: Three childhood buddies parlay robotics and autonomous vehicle skills into an autonomous ship startup and cold-call the world’s third-largest cargo shipper.

San Francisco-based Shone — founded by Parisian classmates Ugo Vollmer, Clement Renault and Antoine de Maleprade — has made a splash in maritime and startup circles. The company landed a pilot deal with shipping giant CMA CGM and recently won industry recognition.

Left to right: Shone co-founders Antoine de Maleprade, Ugo Vollmer and Clement Renault.

Shone aims to modernize shipping. The startup applies NVIDIA GPUs to a flood of traditional cargo ship data such as sonar, radar, GPS and AIS, a ship-to-ship tracking system. This has enabled it to quickly process terabytes of training data on its custom algorithms to develop perception, navigation and control for ocean freighters. The company has added cameras to offer better seafaring object detection as well.
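Shone’s perception stack isn’t public, but one small piece of the data fusion described here — relating a ship-reported AIS position to what onboard sensors should see — can be sketched as follows. The coordinates, message fields and units are made up for illustration:

```python
# Simplified, hypothetical illustration of fusing one AIS contact with the own
# ship's position: compute range and bearing to the reported vessel so it can be
# matched against camera or radar detections. Not Shone's actual code.
import math

EARTH_RADIUS_M = 6_371_000

def range_and_bearing(own_lat, own_lon, tgt_lat, tgt_lon):
    """Approximate range (meters) and true bearing (degrees) to a target,
    using an equirectangular approximation that is adequate at short ranges."""
    lat1, lon1, lat2, lon2 = map(math.radians, (own_lat, own_lon, tgt_lat, tgt_lon))
    dx = (lon2 - lon1) * math.cos((lat1 + lat2) / 2) * EARTH_RADIUS_M  # east offset
    dy = (lat2 - lat1) * EARTH_RADIUS_M                                # north offset
    rng = math.hypot(dx, dy)
    brg = (math.degrees(math.atan2(dx, dy)) + 360) % 360
    return rng, brg

# Own ship and one AIS contact (made-up coordinates near Long Beach).
own = (33.740, -118.190)
contact = {"mmsi": 123456789, "lat": 33.755, "lon": -118.170}

rng, brg = range_and_bearing(*own, contact["lat"], contact["lon"])
print(f"AIS contact {contact['mmsi']}: {rng / 1852:.1f} nm at {brg:.0f} deg true")

# A perception system could now check whether a camera or radar detection exists
# near this bearing; if not, the contact may be occluded or the AIS data stale.
```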

“The first part is packaging all of the perception so the crew can make better decisions,” said Vollmer, the company’s CEO, previously an autonomous vehicle maps engineer at Mapbox. “But there’s tons of value of connecting communications from the ship to the shore.”

Cargo ships, like locomotives, are part of a wave of industries joining the autonomous vehicle revolution.

Shone is a member of NVIDIA Inception, a virtual accelerator program that helps startups get to market faster.

GPUs Set Sail

Founded in 2017, Shone has expanded to eight employees to navigate seafaring AI. Its NVIDIA GPU-powered software is now deployed on several CMA CGM cargo ships in pilot tests to help with perception for ship captains.

“What is particularly interesting for CMA CGM is what artificial intelligence can bring to systems on board container ships in terms of safety. AI will facilitate the work of crews on board, whether in decision support, maritime safety or piloting assistance,” said Jean-Baptiste Boutillier, deputy vice president at CMA CGM.

It didn’t come easy. The trio of scrappy entrepreneurs had to hustle. After hacking the radio-controlled boat and realizing they had a model for an autonomous boat, they raised $200,000 in friends-and-family money to start the company.

Next they got accepted into Y Combinator. Shone’s partnership with CMA CGM, which operates 500 ships worldwide, came as the team was working as a member of the winter 2018 batch at the accelerator and urgently seeking a pilot to prove their startup’s potential.

Renault and Vollmer recall cold-calling CMA CGM, the surprise of getting an appointment, and going in jeans and T-shirts and underprepared to meet with executives — who were in suits. Despite that, the execs were impressed with the team’s technical knowledge and encouraged them to come back — in six months.

“There is this industry that is moving 90 percent of the products in the world, and the tech is basically from the ‘80s — we were like, ‘What? How is that possible?’” said Renault, who previously worked on autonomous trucks at Starsky Robotics.

Prototype AI Boat

Undeterred by that first meeting, the team decided to buy a real boat to outfit with AI to show CMA CGM just how serious they were. Renault spent $10,000 on a 30-foot boat he found on Craigslist. The trailer blew a tire on the way back from the Sacramento River delta — one more obstacle to overcome — but he got help and hauled the boat back to the office.

De Maleprade’s robotics skills came into play next. (“He was building rockets when he was 14 and blowing things up,” Vollmer said.) De Maleprade fitted the boat with a robotic hydraulic steering mechanism. The other two went to work on the software with him, installing a GPU on the boat, as well.

Development of the boat was accelerated by training the team’s algorithms on GPU workstations, while the boat’s onboard GPUs handled inferencing, said de Maleprade.

Three months later, the trio had the autonomous boat prototype ready. They showed off its capabilities to CMA CGM, which was even more impressed. The executives invited them to develop the platform on several of their freighters, which span four football fields in length and transport 14,000 containers from Southern California to China.

“CMA CGM has built its success on strong entrepreneurial values and constant innovations. That is why we decided to allow the Shone team to board some of our ships to develop their ideas and their potential,” said Boutillier.

The founders often sleep on the cargo ships, departing from the Port of Long Beach to arrive in the Port of Oakland, to observe the technology in action and study CMA CGM’s needs.

Synthesizing all the maritime data with various home-brewed algorithms and other techniques to develop Shone’s perception platform was a unique challenge, said de Maleprade, who worked on robotics before becoming CTO at Shone.

“We took a video with a drone to show CMA CGM our prototype boat could offer autonomous navigation assistance for crews. They liked what we had,” said de Maleprade. He says they’re now working on simulation to speed their development timeline.

The Shone team recently finished up at Y Combinator, leaving a strong impression at its Demo Day presentation and walking away with $4 million in seed funding from investors.

The post AI Makes a Splash: Parisian Trio Navigates Autonomous Cargo Ships appeared first on The Official NVIDIA Blog.

Scaling the Universe: How Large GPU Clusters Aid Understanding of Galaxy Formation

For centuries, scientists have marveled at telescopic imagery, theorized about much of what they see and drawn conclusions from their observations.

More recently, astronomers and astrophysicists are using the computing performance of GPUs and AI to glean more from that imagery than ever.

A research team at the University of California, Santa Cruz, and Princeton University has been pushing these limits. Led by Brant Robertson of UC Santa Cruz and NASA Hubble Fellow Evan Schneider, the team has been optimizing its use of NVIDIA GPUs and deep learning tools to accommodate larger calculations.

Their goal: expand their ability to do more accurate hydrodynamic simulations, and thereby gain a better understanding of how galaxies are formed.

Titanic Simulations

The team started by simply moving their efforts from CPUs to GPUs. Using GPUs to measure matter passing in and out of the cell faces of a 3D grid mesh was akin to suddenly being able to solve many Rubik’s Cubes simultaneously.

With CUDA in the mix, the team could transfer an array of grids onto the GPU to do the necessary calculations, resulting in more detailed simulations.
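The team’s production code is GPU-accelerated CUDA, but the underlying finite-volume idea — updating each cell from the fluxes through its faces — can be illustrated with a toy 1D advection sketch (didactic only, not the team’s solver):

```python
# Minimal 1D finite-volume illustration of the cell-face flux bookkeeping that
# grid codes perform (in 3D, with full hydrodynamics, on GPUs, in the real thing).
# This toy version advects a density profile at constant speed with an upwind flux.
import numpy as np

n_cells, dx, velocity = 200, 1.0, 1.0
dt = 0.5 * dx / velocity                 # CFL-limited timestep
rho = np.ones(n_cells)
rho[80:120] = 2.0                        # initial over-density

for step in range(100):
    # Upwind flux through each interior cell face (velocity > 0):
    # flux[i] is the flux through the face between cell i and cell i+1.
    flux = velocity * rho[:-1]
    # Each interior cell gains what flows in through its left face and loses
    # what flows out through its right face.
    rho[1:-1] += dt / dx * (flux[:-1] - flux[1:])

print("density profile after 100 steps, total mass:", rho.sum())
```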

Once they’d squeezed the most out of that setup, the team’s ambitions shifted to a more powerful cluster of NVIDIA GPUs, namely the Titan supercomputer at the U.S. Department of Energy’s Oak Ridge National Laboratory. But to perform higher-resolution simulations, the team needed some powerful code to harness Titan’s 16,000-plus Tesla GPUs.

[Read how researchers from the University of Western Australia have trained a GPU-powered AI system to recognize new galaxies.]

Finding Answers

Schneider, Robertson’s former graduate student and now a postdoctoral fellow at Princeton, was up to the task. She wrote a GPU-accelerated piece of hydrodynamic code called CHOLLA, which stands for Computational Hydrodynamics On paraLLel Architectures.

Robertson believes CHOLLA will help scientists answer previously unanswerable questions. For instance, applying the code to M82, a galaxy revered by astronomers for its prodigious star-formation rate and powerful galactic winds, could provide new levels of understanding of how stars are formed.

“How does that wind get there? What sets the properties in the wind? How does the wind control the mass of the galaxy? These are all questions we’d like to understand, but it’s a very difficult computational problem,” Robertson said. “Evan is the first person to solve this with any fidelity.”

CHOLLA, which was written a few years ago, has enabled Schneider and Robertson to leverage 100 million core hours on Titan. The code is unique in that it performs all calculations on GPUs, enabling the team to do sophisticated simulations on their NVIDIA DGX and DGX-1 deep learning systems in the lab and then transfer them to Titan, where they can be expanded.

“You want to take advantage of the floating-point operation power of the GPUs. You don’t want to spend your time waiting for information to go back and forth between the GPUs if you can avoid it,” said Robertson. “Spending as much time as possible computing on the GPU is where you want to be.”

Pushing to the Summit

CHOLLA’s ability to scale vast numbers of GPUs has enabled the team to perform a test calculation of 550 billion cells that Robertson called “one of the largest hydrosimulations ever done in astrophysics.”

Another student, Ryan Hausen, has paved the way to even more ambitious work by developing a deep learning framework called Morpheus that uses raw telescope data to classify galaxies. That opens the door to potentially processing giant surveys with billions of galaxies on a DGX system.

“That’s something I didn’t think was possible just a few years ago,” said Robertson.

Yet another huge leap may be coming soon, as Robertson is hoping to obtain time on Summit, the world’s most powerful supercomputer — powered by NVIDIA Volta GPUs. He believes CHOLLA will enable the team to do even more with Summit’s expansive GPU memory than it has with Titan.

“The computational power of NVIDIA GPUs enabled us to perform numerical simulations that were not possible before,” said Robertson. “And we plan to use NVIDIA GPUs to push what is possible.”

Feature image: The M82 galaxy, remixed from the Hubble Legacy Archive, by Adam Evans. Licensed through Creative Commons.

The post Scaling the Universe: How Large GPU Clusters Aid Understanding of Galaxy Formation appeared first on The Official NVIDIA Blog.

Intel Announces Neuromorphic Computing Research Collaborators

Kapoho Bay is Intel’s code name for a USB form factor device based on the Loihi neuromorphic research chip. Kapoho Bay provides a USB interface to Loihi, allowing it to connect to peripherals. (Credit: Walden Kirsch/Intel Corporation)

What’s New: Today, Intel named academic, government and corporate research groups participating in its Intel Neuromorphic Research Community (INRC) and discussed research progress from the inaugural INRC symposium held in October. The goal of the INRC is to tackle the challenges facing the adoption of neuromorphic architectures for mainstream computing applications. INRC members will use Intel’s Loihi research chip as the architectural focal point for research and development. Intel hopes the findings of this community will drive future improvement of neuromorphic architectures, software and systems, eventually leading to the commercialization of this promising technology.

“While there are many important unsolved neuromorphic computing research problems to explore at all levels of the computing stack, we believe the state of neuromorphic hardware currently leads the state of neuromorphic computing software. We’re confident this network of INRC members will rapidly advance the state of neuromorphic learning algorithms and demonstrate the value of this emerging technology for a wide range of applications.”
–Mike Davies, director of the Neuromorphic Computing Lab, Intel

Who is Participating: Fifty projects have been selected to participate in the INRC. Engaged INRC members will receive access to Intel’s Loihi neuromorphic research chip and software, and are invited to participate in technical symposiums where progress, results and insights will be shared among the community. INRC-supported workshops will offer members an opportunity to learn to develop for Loihi in extended hands-on tutorial sessions and hackathons hosted by Intel Labs researchers and collaborators.

Among the 50 selected projects, teams from 13 universities were selected to receive funding to pursue their research plans. These teams come from a wide range of academic institutions around the world, including University of Bern; University of California, Berkeley; University of California, San Diego; Cornell University; University of Göttingen; TU Graz; Harvard University; TU Munich; Radboud University; University of Tennessee; and Villanova University.

Projects have been scheduled to start over a series of four waves, the first of which began in 2018’s third quarter.

Results So Far: In October, Intel held an inaugural gathering of INRC members in Reykjavik, Iceland. More than 60 researchers attended over five days to discuss research plans, learn about Loihi and meet members of the community. Several presentations from early INRC members announced exciting preliminary progress:

  • Chris Eliasmith of Applied Brain Research Inc. (ABR)* shared early benchmarking results evaluating Loihi’s performance running an audio keyword-spotting deep network implemented with ABR’s Nengo DL, which runs TensorFlow-trained networks on Loihi. These results show that for real-time streaming data inference applications, Loihi may provide better energy efficiency than conventional architectures by a factor of 2 to more than 50, depending on the architecture.
  • Professor Wolfgang Maass of the Institute for Theoretical Computer Science, Technische Universität Graz, discussed his team’s promising discovery of a new class of spiking neural nets that achieve classification accuracies similar to state-of-the-art deep learning models known as long short-term memory (LSTM) networks. LSTMs are commonly used today for speech recognition and natural language processing applications. These new spiking neural networks, named LSNNs, integrate working memory into their operation in a similar manner as LSTMs do, while promising significantly improved efficiency when running on neuromorphic hardware. This work, to be published at the Neural Information Processing Systems conference in December, was developed using a simulator. In collaboration with Intel Labs, Maass’ team is now working on mapping the networks to Loihi. The team shared early accuracy results from the Loihi network, which currently stand within a few percent of the ideal model.
  • Professor Thomas Cleland of Cornell University discussed a set of neuromorphic algorithms for signal restoration and identification in spiking neural networks based on computational principles inspired by the mammalian olfactory system. In work to be published in collaboration with Intel Labs, these algorithms running on Loihi have already shown state-of-the-art learning and classification performance on chemosensor data sets. “These algorithms were derived from mechanistic studies of the mammalian brain’s olfactory circuits, but I anticipate that in generalized form, they will be applicable to a range of similar computational problems such as air and water quality assessment, cancer screening, and genomic expression profiling,” Cleland said.

What Is Neuromorphic Computing: Neuromorphic computing entails nothing less than a bottom-up rethinking of computer architecture. By applying the latest insights from neuroscience, the goal is to create chips that function less like a classical computer and more like a human brain. Neuromorphic chips model how the brain’s neurons communicate and learn, using spikes and plastic synapses that can be modulated based on the timing of events. These chips are designed to self-organize and make decisions in response to learned patterns and associations.
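Loihi’s neuron model isn’t spelled out here, but the two ingredients this paragraph describes — spiking neurons and synapses whose strength changes with spike timing — can be sketched in a few lines of toy code (illustrative only, not Intel’s implementation):

```python
# Toy leaky integrate-and-fire neuron with a spike-timing-dependent weight update,
# illustrating the "spikes and plastic synapses" idea. Didactic sketch only,
# not Loihi's actual neuron or learning model.
import numpy as np

rng = np.random.default_rng(0)
steps, threshold, leak, w = 100, 1.0, 0.9, 0.5
membrane = 0.0
last_pre_spike = last_post_spike = -np.inf

for t in range(steps):
    pre_spike = rng.random() < 0.3          # random presynaptic input spikes
    if pre_spike:
        last_pre_spike = t
    membrane = leak * membrane + (w if pre_spike else 0.0)

    if membrane >= threshold:               # postsynaptic neuron fires
        membrane = 0.0
        last_post_spike = t
        # Potentiation: a pre spike shortly before a post spike strengthens the synapse.
        if 0 <= t - last_pre_spike <= 5:
            w = min(w + 0.05, 1.0)
    elif pre_spike and 0 <= t - last_post_spike <= 5:
        # Depression: a pre spike arriving shortly after a post spike weakens it.
        w = max(w - 0.03, 0.0)

print(f"final synaptic weight after {steps} steps: {w:.2f}")
```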

The goal is that one day neuromorphic chips may be able to learn as fast and efficiently as the brain, which still far outperforms today’s most powerful computers. Neuromorphic computing could lead to big advancements in robotics, smart city infrastructure and other applications that require continuous learning and adaptation to evolving, real-world data.

Last year, Intel introduced the Loihi neuromorphic test chip, a first-of-its-kind research chip with an unprecedented combination of neuromorphic features, efficiency, scale and on-chip learning capabilities. Loihi serves as the architectural foundation for the INRC program. Intel provides INRC members with access to this leading neuromorphic chip to accelerate progress in this field of research.

What is Next: Intel has released early versions of its software development kit for Loihi, named Nx SDK, to engaged INRC members. Researchers may remotely log in to Intel’s neuromorphic cloud service to access Loihi hardware and Nx SDK to develop their algorithms, software and applications. Additionally, Intel has supported Applied Brain Research to port its Nengo software framework to work with Loihi. Nengo is freely available today for research use.

Loihi hardware has been made available to select INRC members for research in domains such as robotics that require direct access to hardware. These systems include a USB form factor code-named “Kapoho Bay.” In addition to providing a USB interface to Loihi, Kapoho Bay offers an event-driven hardware interface to the DAVIS 240C DVS silicon retina camera available from iniVation*, among other peripherals.

Next year, Intel and INRC members expect to contribute much of the enabling software and research results to the public domain in the form of publications and open source software. INRC membership is expected to steadily grow, and as the foundational algorithms and SDK components mature, Intel foresees an increasing project focus on real-world applications, ultimately leading to the commercialization of neuromorphic technology.

How to Get Involved: Neuroscientists, computational scientists and machine learning researchers interested in participating in the INRC and developing for Loihi are encouraged to email inrc_interest@intel.com for more information.

Additionally, Intel’s Neuromorphic Computing Lab will support full-day tutorials on Loihi’s systems and software at two upcoming events: at the 2019 Riken International Workshop on Neuromorphic Computing in Kobe, Japan, on March 13, and at the 2019 Neuro Inspired Computing Elements (NICE) Workshop in Albany, New York, on March 29. The tutorials will be open to all registered attendees of these workshops.

More Context: Intel Labs

The post Intel Announces Neuromorphic Computing Research Collaborators appeared first on Intel Newsroom.

NVIDIA Research Takes NeurIPS Attendees on AI Road Trip

Grab the steering wheel. Step on the accelerator. Take a joyride through a 3D urban neighborhood that looks like Tokyo, or New York, or maybe Rio de Janeiro — all imagined by AI.

At this week’s NeurIPS conference, we introduced AI research that allows developers to render fully synthetic, interactive 3D worlds. While still early stage, this work shows promise for a variety of applications, including VR, autonomous vehicle development and architecture.

The tech is among several NVIDIA projects on display here in Montreal. Attendees huddled around a green and black racing chair in our booth have been wowed by the demo, which lets drivers navigate around an eight-block world rendered by the neural network.

Visitors to the booth hopped into the driver’s seat to tour the virtual environment. Azin Nazari, a University of Waterloo grad student, was impressed with the AI-painted scene, which can switch between the streets of Boston, Germany, or even the Grand Theft Auto game environment at sunset.

The demo uses Unreal Engine 4 to generate semantic layouts of scenes. A deep neural network trained on real-world videos fills in the features — depicting an urban scene filled with buildings, cars, streets and other objects.

This is the first time neural networks have been used with a computer graphics engine to render new, fully synthetic worlds, say NVIDIA researchers Ting-Chun Wang and Ming-Yu Liu.

“With this ability, developers will be able to rapidly create interactive graphics at a much lower cost than traditional virtual modeling,” Wang said.

Called vid2vid, the AI model behind this demo uses a deep learning method known as GANs to render photorealistic videos from high-level representations like semantic layouts, edge maps and poses. As the deep learning network trains, it becomes better at making videos that are smooth and visually coherent, with minimal flickering between frames.

Hitting a new state-of-the-art result, the researchers’ model can synthesize 30-second street scene videos in 2K resolution. By training the model on different video sequences, the model can paint scenes that look like different cities around the world.
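The published vid2vid model is far more elaborate, but the core idea of a generator that maps a semantic layout to an RGB frame can be sketched roughly as follows; the layer sizes and class count are arbitrary placeholders, not the actual architecture:

```python
# Illustrative sketch of layout-to-image synthesis: a small convolutional generator
# takes a one-hot semantic layout and produces an RGB frame. Toy stand-in only,
# not the published vid2vid architecture.
import torch
import torch.nn as nn

NUM_CLASSES = 20            # e.g. road, building, car, ... (arbitrary count)

class LayoutToImageGenerator(nn.Module):
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Tanh(),      # RGB values in [-1, 1]
        )

    def forward(self, semantic_layout):
        # semantic_layout: (batch, num_classes, H, W) one-hot label maps
        return self.net(semantic_layout)

# One fake 256x512 layout in, one RGB frame out. In a real GAN, this generator
# would be trained against a discriminator on pairs of layouts and video frames.
layout = torch.zeros(1, NUM_CLASSES, 256, 512)
layout[:, 0] = 1.0          # fill everything with class 0 for the demo
frame = LayoutToImageGenerator()(layout)
print(frame.shape)          # torch.Size([1, 3, 256, 512])
```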

For those in Montreal this week, stop by our booth — No. 209 — to sit behind the wheel and try it out for yourself.

A TITAN Tangles with DOPE

Two of the latest pieces of NVIDIA hardware — the new TITAN RTX GPU and the NVIDIA DGX-2 system — are also under the spotlight in the NVIDIA booth.

NeurIPS attendees stopped to marvel at the new NVIDIA TITAN RTX GPU.

Across from them, attendees are flocking to a table stacked with an odd assortment of items — cans of tomato soup and Spam, a box of crackers, a mustard bottle. It may not sound like much, but this demo is DOPE. Literally.

DOPE, or Deep Object Pose Estimation, is an algorithm that detects the pose of known objects using a single RGB camera. It’s an ability that’s essential for robots to grasp these objects.

Giving new meaning to “hands-on demos,” booth visitors can pick up the cracker box and cans, moving them across the table and changing their orientation. A screen above displays the neural network’s inferences, tracking the objects’ edges as they shift around the scene.

“It’s a $30 camera, very cheap, very accessible for anyone interested in robotics,” said NVIDIA researcher Jonathan Tremblay. The tool, trained entirely on computer-generated image data, is publicly available on GitHub.
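Pipelines like DOPE typically predict the 2D image locations of an object’s 3D bounding-box corners and then recover the 6-DoF pose with a perspective-n-point (PnP) solver; that last step can be sketched with OpenCV. The box dimensions, camera intrinsics and corner pixels below are invented for illustration:

```python
# Illustrative final step of a DOPE-style pipeline: given the eight projected
# cuboid corners of a known object (here a hypothetical cracker box), recover its
# 6-DoF pose with OpenCV's PnP solver. All numbers below are made-up examples.
import cv2
import numpy as np

# 3D corners of a box of known size, in the object's own frame (meters).
w, h, d = 0.16, 0.21, 0.06              # hypothetical cracker-box dimensions
object_points = np.array([[x, y, z]
                          for x in (-w / 2, w / 2)
                          for y in (-h / 2, h / 2)
                          for z in (-d / 2, d / 2)], dtype=np.float32)

# 2D corner locations the network is assumed to have predicted (pixels).
image_points = np.array([
    [310, 220], [305, 330], [390, 215], [395, 335],
    [330, 210], [325, 325], [410, 205], [415, 330],
], dtype=np.float32)

# Simple pinhole intrinsics for a 640x480 RGB camera (hypothetical values).
camera_matrix = np.array([[600, 0, 320],
                          [0, 600, 240],
                          [0,   0,   1]], dtype=np.float32)
dist_coeffs = np.zeros(5, dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
print("pose found:", ok)
print("rotation (Rodrigues vector):", rvec.ravel())
print("translation (meters):", tvec.ravel())
```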

DOPE can determine the edges of these household objects, even when they’re partly hidden behind other objects or moved around by booth visitors.

Booth visitors can also feast their eyes on stunning demos of real-time ray tracing. Running on a single Quadro RTX 6000 GPU, our Star Wars demo features beautiful, cinema-quality reflections enabled by NVIDIA RTX technology.

And while a few conspiracy theorists still question whether the Apollo 11 mission actually landed on the moon, a ray-traced recreation of one iconic lunar landing image shows that the photo looks just as it should if it were taken on the moon.

Data scientists exploring the booth will see the new TITAN RTX in action with the RAPIDS data analytics software, quickly manipulating a dataset of all movie ratings by IMDB users. Other demos showcase the computing power NVIDIA TensorRT software provides for inference, both in data centers and at the edge.

The NVIDIA booth at NeurIPS is open all week from 10 a.m. to 7 p.m. For more, see our full schedule of activities.

The post NVIDIA Research Takes NeurIPS Attendees on AI Road Trip appeared first on The Official NVIDIA Blog.