Go Robot, Go: AI Team at MiR Helps Factory Robots Find Their Way

Like so many software developers, Elias Sorensen has been studying AI. Now he and his 10-member team are teaching it to robots.

When the AI specialists at Mobile Industrial Robots, based in Odense, Denmark, are done, the first graduating class of autonomous machines will be on their way to factories and warehouses, powered by NVIDIA Jetson Xavier NX GPUs.

“The ultimate goal is to make the robots behave in ways humans understand, so it’s easier for humans to work alongside them. And Xavier NX is at the bleeding edge of what we are doing,” said Sorensen, who will provide an online talk about MiR’s work at GTC Digital.

MiR’s low-slung robots carry pallets weighing as much as 2,200 pounds. They sport lidar and proximity sensors, as well as multiple cameras the team is now linking to Jetson Xavier GPUs.

Inferencing Their Way Forward

The new digital brains will act as pilots. They’ll fuse sensor data to let the bots navigate around people, forklifts and other objects, dynamically re-mapping safety zones and changing speeds as needed.
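MiR hasn’t published its navigation code, but the basic idea of turning fused detections into a speed and a safety-zone radius can be sketched in a few lines of Python. The object classes, distances and thresholds below are illustrative assumptions, not MiR’s actual parameters.

```python
# Illustrative sketch only: map fused detections to a speed and safety-zone
# radius. The labels, distances and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "person", "forklift", "pallet"
    distance_m: float  # distance from the robot, in meters

def plan_motion(detections: list[Detection],
                max_speed: float = 1.5) -> tuple[float, float]:
    """Return (speed in m/s, safety-zone radius in m) for the next control step."""
    nearest = min((d.distance_m for d in detections), default=float("inf"))
    if nearest < 0.5:                    # something very close: stop
        return 0.0, 0.5
    if any(d.label == "person" and d.distance_m < 3.0 for d in detections):
        return 0.3, 2.0                  # slow down and widen the zone near people
    if nearest < 3.0:                    # obstacle nearby: moderate speed
        return 0.8, 1.0
    return max_speed, 0.7                # clear path: full speed, default zone

speed, zone = plan_motion([Detection("person", 2.4), Detection("pallet", 5.1)])
```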

The smart bots use NVIDIA’s DeepStream and TensorRT software to run AI inference jobs on Xavier NX, based on models trained on NVIDIA GPUs in the AWS cloud.

MiR chose Xavier for its combination of high performance at low power and price, as well as its wealth of software.

“Lowering the cost and power consumption for AI processing was really important for us,” said Sorensen. “We make small, battery-powered robots and our price is a major selling point for us.” He noted that MiR has deployed more than 3,000 robots to date to users such as Ford, Honeywell and Toyota.

The new autonomous models are working prototypes. The team is training its object-detection models in preparation for the first pilot tests.

Jetson Nano Powers Remote Vision

The autonomous robot is MiR’s biggest AI product so far, but not its first. Since November, the company has shipped smart, standalone cameras powered by Jetson Nano GPUs.

The Nano-based cameras process video at 15 frames per second to detect objects. They’re networked with each other and other robots to enhance the robots’ vision and help them navigate.

Both the Nano cameras and Xavier-powered robots process all camera data locally, only sending navigation decisions over the network. “That’s a major benefit for such a small, but powerful module because many of our customers are very privacy minded,” Sorensen said.

MiR developed a tool its customers use to train the camera by simply showing it pictures of objects such as robots and forklifts. The ease of customizing the cameras is a big measure of the product’s success, he added.

AI Training with Simulations

The company hopes its smart robots will be equally easy to train for non-technical staff at customer sites.

But here the challenge is greater. Public roads have standard traffic signs, but every factory and warehouse is unique with different floor layouts, signs and types of pallets.

MiR’s AI team aims to create a simulation tool that places robots in a virtual work area that users can customize. Such a simulation could let users who are not AI specialists train their smart robots like they train their smart cameras today.

The company is currently investigating NVIDIA’s Isaac platform, which supports training through simulations.

MiR is outfitting its family of industrial robots for AI.

The journey into the era of autonomous machines is just starting for MiR. Its parent company, Teradyne, announced in February it is investing $36 million to create a hub for developing collaborative robots, aka co-bots, in Odense as part of a partnership with MiR’s sister company, Universal Robots.

Market watchers at ABI Research predict the co-bot market could expand to $12 billion by 2030. In 2018, Danish companies including MiR and Universal captured $995 million of that emerging market, according to Damvad, a Danish analyst firm.

With such potential and strong ingredients from companies like NVIDIA, “it’s a great time in the robotics industry,” Sorensen said.

Researchers Make Movies of the Brain with CUDA

When colleagues told Sally Epstein they sped up image processing by three orders of magnitude for a client’s brain-on-a-chip technology, she responded like any trained scientist. Go back and check your work, the biomedical engineering Ph.D. told them.

Yet it was true. The handful of researchers at Cambridge Consultants had devised a basket of techniques to process an image on GPUs in an NVIDIA DGX-1 system in 300 milliseconds, a 3,000x boost over the 18 minutes the task took on an Intel Core i9 CPU.

The achievement makes it possible for researchers to essentially watch movies of neurons firing in real time using the brain-on-a-chip technology from NETRI, a French startup.

“Animal studies revolutionized medicine. This is the next step in testing for areas like discovering new drugs,” said Epstein, head of Strategic Technology at Cambridge Consultants, which develops products and technologies for a wide variety of established companies and startups such as NETRI.

The startup designs chips that sport 3D microfluidic channels to host neural tissues and a CMOS camera sensor with polarizing filters to detect individual neurons firing. It hopes its precision imaging can speed the development of novel treatments for neurological disorders such as Alzheimer’s disease.

Facing a Computational Bottleneck

NETRI’s chips generate 100-megapixel images at up to 1,000 frames per second — the equivalent of a hundred 4K gaming systems running at 120fps. Besides spawning tons of data, the images require highly complex math to process.
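As a quick back-of-the-envelope check of that comparison (assuming a 3840 x 2160 frame for 4K):

```python
# Rough arithmetic behind the "hundred 4K gaming systems" comparison.
chip_pixels_per_second = 100e6 * 1000            # 100 megapixels at 1,000 fps
gaming_pixels_per_second = 3840 * 2160 * 120     # one 4K system at 120 fps
print(chip_pixels_per_second / gaming_pixels_per_second)  # roughly 100
```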

As a result, processing a single second of a recording took NETRI 12 days, an unacceptable delay. So, the startup turned to Cambridge Consultants to bust through the bottleneck.

“Our track record in scientific and biological imaging turned out to be very relevant,” said Monty Barlow, Director of Strategic Technology at Cambridge Consultants. And when NETRI heard about the 3,000x boost, “they trusted us even though we didn’t trust ourselves at first,” he quipped.

Leveraging Math, Algorithms and GPUs

A handful of specialists at Cambridge Consultants delivered the 3,000x speedup using multiple techniques. For example, math and algorithm experts employed a mix of Gaussian filters, multivariate calculus and other tools to eliminate redundant tasks and reduce peak RAM requirements.

Software developers migrated NETRI’s Python code to CuPy to take advantage of the massive parallelism of NVIDIA’s CUDA software. And hardware specialists optimized the code to fit into GPU memory, eliminating unnecessary data transfers inside the DGX-1.
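NETRI’s code isn’t public, but the pattern is a familiar one: a NumPy-based filtering step can often be moved to the GPU by swapping in CuPy and its cupyx.scipy routines while keeping host-device transfers to a minimum. The sketch below is a generic example of that kind of migration, with a made-up frame size rather than the actual pipeline.

```python
# Generic NumPy-to-CuPy migration sketch; the frame size and sigma are illustrative.
import numpy as np
import cupy as cp
from cupyx.scipy import ndimage as cp_ndimage

def gaussian_denoise_gpu(frame: np.ndarray, sigma: float = 1.5) -> np.ndarray:
    """Filter one frame on the GPU with a single round trip of data."""
    gpu_frame = cp.asarray(frame)                        # host -> device once
    filtered = cp_ndimage.gaussian_filter(gpu_frame, sigma=sigma)
    return cp.asnumpy(filtered)                          # device -> host once

frame = np.random.rand(2048, 2048).astype(np.float32)    # stand-in for one camera frame
clean = gaussian_denoise_gpu(frame)
```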

The CUDA profiler helped find bottlenecks in NETRI’s code and alternatives to resolve them. “NVIDIA gave us the tools to execute this work efficiently — it happened within a week with a core team of four researchers and a few specialists,” said Epstein.

Looking ahead, Cambridge Consultants expects to find further speedups for the code using the DGX-1 that could enable real-time manipulation of neurons using a laser. Researchers also aim to explore NVIDIA IndeX software to help visualize neural activity.

The work with NETRI is one of several DGX-1 applications at the company. It also hosts a Digital Greenhouse for AI research. Last year, it used the DGX-1 to create a low-cost but highly accurate tool for monitoring tuberculosis.

Aiforia Paves Path for AI-Assisted Pathology

Pathology, the study and diagnosis of disease, is a growth industry.

As the global population ages and diseases such as cancer become more prevalent, demand for keen-eyed pathologists who can analyze medical images is on the rise. In the U.K. alone, about 300,000 tests are carried out daily by pathologists.

But there’s an acute personnel shortage, globally. In the U.S., there are only 5.7 pathologists for every 100,000 people. By 2030, this number is expected to drop to 3.7. In the U.K., a survey by the Royal College of Pathology showed that only 3 percent of histopathology departments had enough staff to meet demand. In some parts of Africa, there is only one pathologist for every 1.5 million people.

While pathologists are under increasing pressure to analyze as many samples as possible, patients are having to endure lengthy wait times to get their results.

Aiforia, a member of the NVIDIA Inception startup accelerator program, has created a set of AI-based tools to speed and improve pathology workflows — and a lot more.

The company, which has offices in Helsinki and Cambridge, Mass., enables tedious tasks to be automated and complex challenges to be solved by unveiling quantitative data from tissue samples.

This helps pathologists overcome common obstacles to the development of versatile, scalable processes for medical research, drug development and diagnostics.

“Today we can already support pathologists with AI-assisted analysis, but AI can do so much more than that,” said Kaisa Helminen, CEO of Aiforia. “With deep learning AI, we are able to extract more information from a patient tissue sample than what’s been previously possible due to limitations of the human eye.

“This way, we are able to promote new discoveries from morphological patterns and facilitate more accurate and more personalized treatment for the patient,” she said.

Hidden Figures

AI has made it possible to automate medical imaging tasks that have traditionally proved nearly impossible for the human eye to handle. And it can reveal information that was previously hidden in image data.

With Aiforia’s AI tool assisting the diagnostic process, pathologists can improve the efficiency, accuracy and reproducibility of their results.

Its cloud-based deep learning image analysis platform, Aiforia Create, allows for the rapid development of AI-powered image analysis algorithms, initially optimized for digital pathology applications.

Aiforia initially developed its platform with a focus on cancer, as well as neurological, infectious and lifestyle diseases, but is now expanding it to other medical imaging domains.

For those who want to develop algorithms for a specific task, Aiforia Create provides domain experts with unique self-service AI development tools.

Aiforia trains its image analysis AI models using convolutional neural networks on NVIDIA GPUs. These networks are able to learn, detect and quantify specific features of interest in medical images generated by microscope scanners, X-rays, MRI or CT scans.

Its users can upload a handful of medical images at a time to the platform, which uses active learning techniques to increase the efficiency of annotating images for AI training.
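Aiforia hasn’t detailed how its active learning works, but the general technique is to ask the model which unlabeled images it is least sure about and route only those to a human annotator. A generic uncertainty-sampling sketch, assuming a classifier with a scikit-learn-style predict_proba method, looks like this:

```python
# Generic uncertainty-sampling loop, shown only to illustrate the idea of active
# learning; Aiforia's actual implementation is not public.
import numpy as np

def select_for_annotation(model, unlabeled: np.ndarray, batch_size: int = 10):
    """Pick the images the model is least certain about for human annotation."""
    probs = model.predict_proba(unlabeled)      # shape: (n_images, n_classes)
    uncertainty = 1.0 - probs.max(axis=1)       # low top-class confidence = uncertain
    return np.argsort(uncertainty)[::-1][:batch_size]

# Typical loop: train on the small labeled set, have an expert annotate the most
# uncertain images, add them to the training set, then retrain and repeat.
```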

Users don’t need to invest in local hardware or software — instead they can access the software services via an online platform, hosted on Microsoft Azure. The platform can be deployed instantly and scales up easily.

Unpacking Parkinson’s

Aiforia’s tools are also being used to improve the diagnosis of Parkinson’s disease, a debilitating neurological condition affecting around one in 500 people.

The disease is caused by a loss of nerve cells in a part of the brain called the substantia nigra. This loss causes a reduction in dopamine, a chemical that helps regulate the movement of the body.

Researchers are working to uncover what causes the loss of nerve cells in the first place. Doing this requires collecting unbiased estimates of brain cell (neuron) numbers, but this process is extremely labor-intensive, time-consuming and prone to human error.

University of Helsinki researchers collaborated with Aiforia to mitigate the challenges of traditional methods for counting neurons. They uploaded brain histology images to the Aiforia Hub, then used the Aiforia Create tool to quantify the number of dopamine neurons in the samples.

Introducing the computerized counting of neurons improves the reproducibility of results, reduces the impact of human error and makes analysis more efficient.

“It’s been studied so many times that if you send the same microscope slide to five different pathologists, you get different results,” Helminen said. “Using AI can help bring consistency and reproducibility, where AI is acting as a tireless assistant or like a ‘second opinion’ for the expert.”

The study carried out at the University of Helsinki would typically have taken weeks or even months without AI. Using Aiforia’s tools, the research team cut its analysis time by 99 percent, freeing up time to advance its search for a cure for Parkinson’s.

Aiforia Create is sold with a research use status and is not intended for diagnostic procedures.

How Evo’s AI Keeps Fashion Forward

Imagine if fashion houses knew that teal blue was going to replace orange as the new black. Or if retailers knew that tie dye was going to be the wave to ride when swimsuit season rolls in this summer.

So far, there hasn’t been an efficient way of getting ahead of consumer and market trends like these. But Italy-based startup Evo is helping retailers and fashion houses get a jump on changing tastes and a whole lot more.

The company’s deep-learning pricing and supply chain systems, powered by NVIDIA GPUs, let organizations quickly respond to changes — whether in markets, weather, inventory, customers, or competitor moves — by recommending optimal pricing, inventory and promotions in stores.

Evo is also a member of the NVIDIA Inception program, a virtual accelerator that offers startups in AI and data science go-to-market support, expertise and technology assistance.

The AI Show Stopper

Evo was born from a Ph.D. thesis by its founder, Fabrizio Fantini, while he was at Harvard.

Now the company’s CEO, Fantini discovered new algorithms that could outperform even the most complex and expensive commercial pricing systems in use at the time.

“Our research was shocking, as we measured an immediate 30 percent reduction in the average forecast error rate, and then continuous improvement thereafter,” Fantini said. “We realized that the ability to ingest more data, and to self-learn, was going to be of strategic importance to any player with any intention of remaining commercially viable.”

The software, developed in the I3P incubator at the Polytechnic University of Turin, examines patterns in fashion choices and draws on data to anticipate market demand.

Last year, Evo’s systems managed goods worth over 10 billion euros from more than 2,000 retail stores. Its algorithms changed over 1 million prices and physically moved over 15 million items, while generating more than 100 million euros in additional profit for customers, according to the company.

Nearly three dozen companies, including grocers and other retailers, as well as fashion houses, have already benefited from these predictions.

“Our pilot clients showed a 10 percent improvement in margin within the first 12 months,” Fantini said. “And longer term, they achieved up to 5.5 points of EBITDA margin expansion, which was unprecedented.”

GPUs in Vogue 

Evo uses NVIDIA GPUs to run neural network models that transform data into predictive signals of market trends. This allows clients to make systematic and profitable decisions.

Using a combination of advanced machine learning methods and statistics, the system transforms products into “functional attributes,” such as type of sleeve or neckline, and into “style attributes,” such as the color or silhouette.

It works off a database that maps the social media activity, internet patterns and purchase behaviors of over 1.3 billion consumers, a sample the company describes as representative of the world population.

Then the system uses multiple algorithms and approaches, including meta-modeling, to process market data that is tagged automatically based on the clients, prices, products and characteristics of a company’s main competitors.
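Evo’s models aren’t public, but “meta-modeling” generally means training a second-level model on the outputs of several base models. A toy scikit-learn sketch of that pattern, with synthetic data standing in for encoded functional and style attributes, might look like this:

```python
# Toy stacking ("meta-model") example; the features and data are invented
# stand-ins for functional/style attributes, not Evo's real inputs.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.random((500, 4))        # e.g. encoded sleeve type, neckline, color, price index
y = 100 * X[:, 3] + 20 * X[:, 0] + rng.normal(0, 5, 500)   # synthetic demand signal

meta_model = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=50, random_state=0)),
                ("knn", KNeighborsRegressor(n_neighbors=10))],
    final_estimator=Ridge(),    # the meta-model learns how to combine base predictions
)
meta_model.fit(X, y)
print(meta_model.predict(X[:3]))
```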

This makes the data directly comparable across different companies and geographies, which is one of the key ingredients required for success.

“It’s a bit like Google Translate,” said Fantini. “Learning from its corpus of translations to make each new request smarter, we use our growing body of data to help each new prediction become more accurate, but we work directly on transaction data rather than images, text or voice as others do.”

These insights help retailers understand how to manage their supply chains and how to plan pricing and production even when facing rapid changes in demand.

In the future, Evo plans to use AI to help design fashion collections and forecast trends at increasingly earlier stages.

Italy Forges AI Future in Partnership with NVIDIA

Italy is well known for its architecture, culture and cuisine. Soon, its contributions to AI may be just as renowned.

Taking a step in that direction, a national research organization forged a three-year collaboration with NVIDIA. Together they aim to accelerate AI research and commercial adoption across the country.

Leading the charge for Italy is CINI, the National Inter-University Consortium for Informatics that includes a faculty of more than 1,300 professors in various computing fields across 39 public universities.

CINI’s National Laboratory of Artificial Intelligence and Intelligent Systems (AIIS) is spearheading the effort as part of its goal to expand Italy’s ecosystem for both academic research and commercial applications of AI.

“Leveraging NVIDIA’s expertise to build systems specifically designed for the creation of AI will help secure Italy’s position as a top player in AI education, research and industry adoption,” said Rita Cucchiara, a professor of computer engineering and science and director of AIIS.

National Effort Begins in Modena

The joint initiative aims to train students, nurture startups and spread adoption of the latest AI technology throughout Italy. As a first step, the partners will create a local hub at the University of Modena and Reggio Emilia (Unimore) for the global NVIDIA AI Technology Center.

The partnership marks an important expansion of NVIDIA’s work with the university whose roots date back to the medieval period.

In December, the company supported research on a novel way to automate the process of describing actions in a video. A team of four researchers at Unimore and one from Milan-based AI startup Metaliquid developed an AI model that achieved up to a 16 percent relative improvement compared to prior solutions. In a final stage of the project, NVIDIA helped the researchers analyze their network’s topology to optimize its training on an NVIDIA DGX-1 system.

In July, Unimore and NVIDIA collaborated on an event for AI startups. Unimore’s AImageLab hosted the event, which included representatives of NVIDIA’s Inception program, an initiative to nurture AI startups with access to the company’s technology and ecosystem.

The collaboration comes at a time when the AImageLab, host for the new NVIDIA hub, is already making its mark in areas such as machine vision and medical imaging.

Winning Kudos in Image Recognition

In September, two world-class research events singled out the AImageLab for recognition. One team from the lab won a best paper award at the International Conference on Computer Analysis of Images and Patterns. Another came third out of 64 research groups in an international competition using AI to classify skin lesions.

The Modena hub becomes the latest of more than a dozen collaborations the NVIDIA AI Technology Center (NVAITC) has established with countries worldwide. NVAITC maintains an open database of research and tools developed with and for its partners.

Overall, the new collaboration “will bring together NVIDIA and CINI in our shared mission to enable, support and inform Italy’s AI ecosystem for research, industry and society,” said Simon See, senior director of NVAITC.

AI Meets 5G at the Edge: The Innovation Center at MWC 2020 in Barcelona

AI is alive at the edge of the network, where it’s already transforming everything from car makers to supermarkets. And we’re just getting started.

NVIDIA’s AI Edge Innovation Center, a first for this year’s Mobile World Congress (MWC) in Barcelona, will put attendees at the intersection of AI, 5G and edge computing. There, they can hear about best practices for AI at the edge and get an update on how NVIDIA GPUs are paving the way to better, smarter 5G services.

It’s a story that’s moving fast.

AI was born in the cloud to process the vast amounts of data needed for jobs like recommending new products and optimizing news feeds. But most enterprises interact with their customers and products in the physical world at the edge of the network — in stores, warehouses and smart cities.

The need to sense, infer and act in real time as conditions change is driving the next wave of AI adoption at the edge. That’s why a growing list of forward-thinking companies are building their own AI capabilities using the NVIDIA EGX edge-computing platform.

Walmart, for example, built a smart supermarket it calls its Intelligent Retail Lab. Jakarta uses AI in a smart city application to manage its vehicle registration program. BMW and Procter & Gamble automate inspection of their products in smart factories. They all use NVIDIA EGX along with our Metropolis application framework for video and data analytics.

For conversational AI, the NVIDIA Jarvis developer kit enables voice assistants geared to run on embedded GPUs in smart cars or other systems. WeChat, the world’s most popular smartphone app, accelerates conversational AI using NVIDIA TensorRT software for inference.

All these software stacks ride on our CUDA-X libraries, tools, and technologies that run on an installed base of more than 500 million NVIDIA GPUs.

Carriers Make the Call

At MWC Los Angeles this year, NVIDIA founder and CEO Jensen Huang announced Aerial, software that rides on the EGX platform to let telecommunications companies harness the power of GPU acceleration.

Ericsson’s Fredrik Jejdling, executive vice president and head of business area networks, joined NVIDIA CEO Jensen Huang on stage at MWC LA to announce their collaboration.

With Aerial, carriers can both increase the spectral efficiency of their virtualized 5G radio-access networks and offer new AI services for smart cities, smart factories, cloud gaming and more — all on the same computing platform.

In Barcelona, NVIDIA and partners including Ericsson will give an update on how Aerial will reshape the mobile edge network.

Verizon is already using NVIDIA GPUs at the edge to deliver real-time ray tracing for AR/VR applications over 5G networks.

It’s one of several ways telecom applications can be taken to the next level with GPU acceleration. Imagine having the ability to process complex AI jobs on the nearest base station with the speed and ease of making a cellular call.

Your Dance Card for Barcelona

For a few days in February, we will turn our innovation center — located at Fira de Barcelona, Hall 4 — into a virtual university on AI with 5G at the edge. Attendees will get a world-class deep dive on this strategic technology mashup and how companies are leveraging it to monetize 5G.

Sessions start Monday morning, Feb. 24, and include AI customer case studies in retail, manufacturing and smart cities. Afternoon talks will explore consumer applications such as cloud gaming, 5G-enabled cloud AR/VR and AI in live sports.

We’ve partnered with the organizers of MWC on applied AI sessions on Tuesday, Feb. 25. These presentations will cover topics like federated learning, an emerging technique for collaborating on the development and training of AI models while protecting data privacy.

Wednesday’s schedule features three roundtables where attendees can meet executives working at the intersection of AI, 5G and edge computing. The week also includes two instructor-led sessions from the NVIDIA Deep Learning Institute, which trains developers on best practices.

See Demos, Take a Meeting

For a hands-on experience, check out our lineup of demos based on the NVIDIA EGX platform. These will highlight applications such as object detection in a retail setting, ways to unsnarl traffic congestion in a smart city and our cloud-gaming service GeForce Now.

To learn more about the capabilities of AI, 5G and edge computing, check out the full agenda and book an appointment here.

BERT Does Europe: AI Language Model Learns German, Swedish

BERT is at work in Europe, tackling natural-language processing jobs in multiple industries and languages with help from NVIDIA’s products and partners.

The AI model formally known as Bidirectional Encoder Representations from Transformers debuted just last year as a state-of-the-art approach to machine learning for text. Though new, BERT is already finding use in avionics, finance, semiconductor and telecom companies on the continent, said developers optimizing it for German and Swedish.

“There are so many use cases for BERT because text is one of the most common data types companies have,” said Anders Arpteg, head of research for Peltarion, a Stockholm-based developer that aims to make the latest AI techniques such as BERT inexpensive and easy for companies to adopt.

Natural-language processing will outpace today’s AI work in computer vision because “text has way more apps than images — we started our company on that hypothesis,” said Milos Rusic, chief executive of deepset in Berlin. He called BERT “a revolution, a milestone we bet on.”

Deepset is working with PricewaterhouseCoopers to create a system that uses BERT to help strategists at a chip maker query piles of annual reports and market data for key insights. In another project, a manufacturing company is using NLP to search technical documents to speed maintenance of its products and predict needed repairs.

Peltarion, a member of NVIDIA’s Inception program that nurtures startups with access to its technology and ecosystem, packed support for BERT into its tools in November. It is already using NLP to help a large telecom company automate parts of its process for responding to product and service requests. And it’s using the technology to let a large market research company more easily query its database of surveys.

Work in Localization

Peltarion is collaborating with three other organizations on a three-year, government-backed project to optimize BERT for Swedish. Interestingly, a new model from Facebook called XLM-R suggests training on multiple languages at once could be more effective than optimizing for just one.

“In our initial results, XLM-R, which Facebook trained on 100 languages at once, outperformed a vanilla version of BERT trained for Swedish by a significant amount,” said Arpteg, whose team is preparing a paper on their analysis.

Nevertheless, the group hopes to have a strong first version of a Swedish BERT model ready before summer, said Arpteg, who headed an AI research group at Spotify before joining Peltarion three years ago.

An analysis by deepset of its German version of BERT.

In June, deepset released as open source a version of BERT optimized for German. Although its performance is only a couple of percentage points ahead of the original model, two winners of an annual NLP competition in Germany used the deepset model.

Right Tool for the Job

BERT also benefits from optimizations for specific tasks such as text classification, question answering and sentiment analysis, said Arpteg. Peltarion researchers plan to publish in 2020 the results of an analysis of the gains from tuning BERT for fields with their own vocabularies, such as medicine and law.
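As a concrete, if hedged, illustration of what task-specific tuning looks like, a classification head can be attached to a pretrained German BERT checkpoint with the Hugging Face transformers library. The checkpoint name and two-label setup below are assumptions for the sketch, not details from Peltarion or deepset.

```python
# Hedged sketch: preparing a German BERT for text classification with the
# Hugging Face transformers library. The checkpoint and label count are
# illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-german-cased", num_labels=2)

inputs = tokenizer("Die Lieferung kam pünktlich an.",
                   return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits        # untrained head: fine-tune before real use
print(logits.softmax(dim=-1))
```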

The question-answering task has become so strategic for deepset that it created Haystack, a version of its FARM transfer-learning framework built to handle the job.

In hardware, the latest NVIDIA GPUs are among the favorite tools both companies use to tame big NLP models. That’s not surprising given NVIDIA recently broke records lowering BERT training time.

“The vanilla BERT has 100 million parameters and XLM-R has 270 million,” said Arpteg, whose team recently purchased systems using NVIDIA Quadro and TITAN GPUs with up to 48GB of memory. It also has access to NVIDIA DGX-1 servers because “for training language models from scratch, we need these super-fast systems,” he said.

More memory is better, said Rusic, whose German BERT models weigh in at 400MB. Deepset taps into NVIDIA V100 Tensor Core GPUs on cloud services and uses another NVIDIA GPU locally.

Eni Doubles Up on GPUs for 52 Petaflops Supercomputer

Italian energy company Eni is upgrading its supercomputer with another helping of NVIDIA GPUs aimed at making it the most powerful industrial system in the world.

The news comes a little more than two weeks before SC19, the annual supercomputing event in North America. Growing adoption of GPUs as accelerators for the world’s toughest high performance computing and AI jobs will be among the hot topics at the event.

The new Eni system, dubbed HPC5, will use 7,280 NVIDIA V100 GPUs capable of delivering 52 petaflops of peak double-precision floating point performance. That’s nearly triple the performance of its previous 18-petaflops system, which used 3,200 NVIDIA P100 GPUs.

When HPC5 is deployed in early 2020, Eni will have 70 petaflops at its disposal, including existing systems also installed in its Green Data Center in Ferrera Erbognone, outside of Milan. The figure would put it head and shoulders above any other industrial company on the current TOP500 list of the world’s most powerful computers.

The new system will consist of 1,820 Dell EMC PowerEdge C4140 servers, each with four NVIDIA V100 GPUs and two Intel CPUs. A Mellanox InfiniBand HDR network running at 200 Gb/s will link the servers.

Green Data Center Uses Solar Power

Eni will use its expanded computing muscle to gather and analyze data across its operations. It will enhance its monitoring of oil fields, subsurface imaging and reservoir simulation and accelerate R&D in non-fossil energy sources. The data center itself is designed to be energy efficient, powered in part by a nearby solar plant.

“Our investment to strengthen our supercomputer infrastructure and to develop proprietary technologies is a crucial part of the digital transformation of Eni,” said Chief Executive Officer Claudio Descalzi in a press statement. The new system’s advanced parallel architecture and hybrid programming model will allow Eni to process seismic imagery faster, using more sophisticated algorithms.

Eni was among the first movers to adopt GPUs as accelerators. NVIDIA GPUs are now used in 125 of the fastest systems worldwide, according to the latest TOP500 list. They include the world’s most powerful system, the Summit supercomputer, as well as four others in the top 10.

Over the last several years, designers have increasingly relied on NVIDIA GPU accelerators to propel these beasts to new performance heights.

The SC19 event will be host to three paper tracks, two panels and three invited talks that touch on AI or GPUs. In one invited talk, a director from the Pacific Northwest National Laboratory will describe six top research directions to increase the impact of machine learning on scientific problems.

In another, the assistant director for AI at the White House Office of Science and Technology Policy will share the administration’s priorities in AI and HPC. She’ll detail the American AI Initiative announced in February.

Top European Supercomputer Shines Brighter with 70-Petaflops Booster Module

Jülich Research Center — long counted among high performance computing royalty in Europe — is adding a new gem to its crown.

The center, known in German as Forschungszentrum Jülich, is extending its JUWELS supercomputer system with a booster module. Designed to provide the highest application performance for massively scalable workloads, the system is scheduled to be deployed next year.

Developed in cooperation with Atos, Mellanox, ParTec and NVIDIA, the booster module is powered by several thousand GPUs and is expected to provide a computational peak performance of more than 70 petaflops once fully integrated, up from its current level of 12 petaflops.

Building HPC Systems Block by Block

The original JUWELS supercomputer was built following a modular supercomputing architecture. The first module of the system, which started operation last year, was designed from the start to have multiple complementary modules added to it.

This innovative way of building HPC and high-performance data analytics systems follows a building-block principle. Each module is built to meet the needs of a specific group of applications. The specialized modules can then be dynamically combined as required, using a uniform system software layer.

“The modular supercomputing architecture makes it possible to integrate the best available technologies flexibly and without compromise,” said Thomas Lippert, director of the Jülich Supercomputing Center.

Powered by the latest-generation NVIDIA GPUs with 200 Gb/s HDR InfiniBand from Mellanox, the new booster module is by far the largest in the JUWELS cluster. It brings with it a throughput and floating-point performance-optimized architecture designed for large-scale simulation and machine learning workloads.

More Brain Power

One of the initiatives the booster will fuel is the Human Brain Project. This flagship project is led by brain scientist Katrin Amunts at Jülich’s Institute of Neuroscience and Medicine and connects the work of some 500 scientists at more than 130 universities, teaching hospitals and research centers across Europe.

Created in 2013 by the European Commission, the project works to build a unique European technology platform for neuroscience, medicine and advanced information technologies — and supercomputing is integral to this.

The project’s scientists gather, organize and disseminate data describing the brain and its diseases at an unprecedented scale. To integrate all this data, the Jülich team and international collaborators are building the most detailed brain atlas to date. That’s no easy task — the human brain, with about 86 billion neurons and 100 trillion connections, is one of the most complex systems known to man.

They do this by analyzing images of thousands of ultrathin histological brain slices. The JUWELS cluster helps to solve many of the memory and performance bottlenecks on the way to reconstructing these slices into a 3D computer model.

Cracking Climate Change

The new JUWELS booster is also enabling insights into the processes behind climate change, which poses major risks for the Earth’s ecosystems.

Jülich’s “Simulation Laboratory Climate Science” provides support for an internal community of scientists who are already using JUWELS for the numerical modeling of the Earth’s systems. The booster will aid its efficiency as researchers delve into this grand challenge for the 21st century.

TB or Not TB: AI-Powered App Aids Treatment of Tuberculosis

Despite being treatable, tuberculosis kills 1.6 million people every year.

This is because TB treatment is time- and cost-intensive, requiring extensive patient monitoring.

In developing countries, where the disease is most deadly, monitoring involves a form of testing that has been used for hundreds of years. Clinicians study samples of lung fluid (called sputum) under a microscope and manually count the number of TB bacteria present, which sometimes reach into the hundreds.

This method may be cheaper than other available tests, but it’s only accurate 50 percent of the time.

Cambridge Consultants, a U.K.-based consultancy, has set out to investigate whether an AI-powered monitoring system could provide a feasible alternative for keeping tabs on this killer.

The result is BacillAi, a system that uses an AI-powered smartphone app and a standard-grade microscope to capture and analyze samples of sputum.

“With BacillAi, we wanted to tackle two main questions,” explained Richard Hammond, technology director of the Medical Technologies Division at Cambridge Consultants. “Can AI improve a labor-intensive, difficult process in healthcare diagnostics? And how could you go about making it available to those who need it most, even in the most remote and low-resource areas?”

Putting Manual Processes Under the Microscope

The current process for monitoring TB patients is inefficient and ineffective. Medical professionals review sample after sample each day, identifying and counting every single cell. A single case can take up to 40 minutes.

And the difficulty doesn’t stop there. Stains used to distinguish cells in the lung fluid can vary in strength between samples, and adjusting a microscope’s optical focus can alter colors.

Clinicians monitoring TB under these conditions face both mental and physical strain. With such a high risk of human error, patients often receive poor-quality results that arrive too late for them to start vital treatment.

To tackle this conundrum, Cambridge Consultants trained a deep learning system using data gathered from cultured surrogate bacteria and artificial sputum.

Developed on the NVIDIA DGX POD reference architecture with NetApp storage, known as ONTAP AI, the resulting convolutional neural network (CNN) can identify, count and classify TB cells in a matter of minutes.
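The published material doesn’t describe the network itself, but the core idea, a compact CNN that classifies image patches so the positives can be counted, can be sketched in PyTorch. The architecture and 64 x 64 patch size below are illustrative assumptions, not BacillAi’s actual model.

```python
# Minimal patch-classifier sketch in PyTorch; the layer sizes and 64x64 patch
# size are illustrative assumptions, not BacillAi's actual network.
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)      # bacillus vs. background

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = PatchClassifier()
patches = torch.rand(8, 3, 64, 64)                        # a batch of sputum-image patches
positives = model(patches).argmax(dim=1).sum().item()     # rough count of positive patches
```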

The final BacillAi concept consists of a standard low-cost microscope, modified with a mount for a smartphone, and an app with the CNN at its heart.

A product like BacillAi could help clinicians determine the state of a patient’s health faster and more consistently than is currently possible. Patients would also have improved chances of fighting the disease.

Solving Challenges at Scale

A multidisciplinary team worked on developing BacillAi in Cambridge Consultants’ purpose-built deep learning research facility, which is powered by ONTAP AI. The space is designed specifically for discovering, developing and testing machine learning approaches in a secure environment.

The same research facility also developed Aficionado, an AI music classifier; Vincent, which turns your squiggles into art; and SharpWave, a tool that creates clear, undistorted views of the real world from a damaged or obscured moving image.

Discover Cambridge Consultants’ innovative approaches for yourself at The AI Summit, in San Francisco, Sept. 25-26.