Farm to Frameworks: French Duo Fork Into Sustainable Farming with Robotics

What began as two classmates getting their hands dirty at a farm in the French Alps has hatched a Silicon Valley farming startup building robots in Detroit and operating in one of the world’s largest agriculture regions.

San Francisco-based FarmWise offers farmers an AI-driven robotic system that enables more sustainable farming methods and addresses a severe labor shortage. Its machines can remove weeds without pesticides — a holy grail for organic farmers in the Golden State and elsewhere.

FarmWise’s robotic farming machine runs 10 cameras that capture the crops and weeds it passes over and feed the images through image recognition models. The machine sports five NVIDIA DRIVE GPUs to help it navigate and make split-second decisions on weed removal.

The company is operating in two of California’s agriculture belts, Salinas and San Luis Obispo, where its machines are deployed in farmers’ fields.

“We don’t use chemicals at all — we use blades and automate the process,” said Sebastien Boyer, the company’s CEO. “We’re working with a few of the largest vegetable growers in California.”

AI has played an increasing role in agriculture as researchers, startups and public companies alike are plowing it for environmental and business benefits.

FarmWise recently landed $14.5 million in Series A funding to further develop its machines.

Robotics for Weed Removal

It wasn’t an easy start. Boyer and Thomas Palomares, computer science classmates from France’s École Polytechnique, decided to work on Palomares’s family farm in the Alps to try big data on farming. Their initial goal was to help farmers use information to work more sustainably while also improving crop yields. It didn’t pan out as planned.

The two discovered farms lacked the equipment to support sustainable methods, so they shelved their idea and instead packed their bags for grad school in the U.S. After that, the friends came back to their concept but with a twist: using AI-driven robotic machinery.

“We decided to move our focus to robotics to build new types of agriculture machines that are better-suited to take advantage of data,” Boyer said. “Weed removal is our first application.”

In April, FarmWise began manufacturing its farm machines. It tapped Detroit-area custom automotive parts maker Roush, which has built self-driving vehicle prototypes for the likes of Google.

FarmWise for Labor Shortage

Farm labor is in short supply. A California Farm Bureau Federation survey of more than 1,000 farmers found that 56 percent were unable to hire sufficient labor to tend their crops in the past five years.

Of those surveyed, 37 percent said they had to change their cultivation practices, including by reducing weeding and pruning. More than half were already using labor-saving technologies. Not to mention that weeding is often back-breaking work.

FarmWise helps fill this void. The company’s automated weeders can do the labor of 10 workers and can operate autonomously around the clock.

“We’re filling the gaps of missing people, and those tasks that aren’t getting done — and we’re offering an alternative to chemical herbicides,” said Boyer, adding that weed management is crucial for crop yields.

When farms can’t get back-breaking weeding covered, they turn to herbicides as an alternative. FarmWise can help to reduce that. Plus, there’s a financial incentive: medium-size California farms can expect to save as much as $500,000 a year on pesticides and other costs by using FarmWise, he said.

Training Autonomous Farming Machines

FarmWise’s AI distinguishes weeds from crops, and its machines can make 25 cuts per second to remove weeds. Its NVIDIA GPU-powered image recognition networks recognize 10 different crops and can spot the typical weeds of California and Arizona.

“As we operate on fields, we continuously capture data, label that data and use it to improve our algorithms,” said Boyer.
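
That capture-label-retrain loop can be sketched in miniature. The class names, the retraining threshold and the trigger logic below are illustrative only, not FarmWise’s actual pipeline:

```python
from dataclasses import dataclass, field

@dataclass
class DataFlywheel:
    """Toy sketch of a capture -> label -> retrain loop (all names illustrative)."""
    retrain_threshold: int = 3          # retrain after this many new labels
    unlabeled: list = field(default_factory=list)
    labeled: list = field(default_factory=list)
    model_version: int = 0

    def capture(self, frame):
        # Frames recorded in the field are queued for annotation.
        self.unlabeled.append(frame)

    def label(self, frame, is_weed):
        # A human (or semi-automated tool) assigns the ground-truth class.
        self.unlabeled.remove(frame)
        self.labeled.append((frame, is_weed))
        if len(self.labeled) % self.retrain_threshold == 0:
            self.model_version += 1     # stand-in for an actual training run

fw = DataFlywheel()
for f in ["frame_a", "frame_b", "frame_c"]:
    fw.capture(f)
for f in ["frame_a", "frame_b", "frame_c"]:
    fw.label(f, is_weed=(f != "frame_b"))

print(fw.model_version)  # 1
```

The point of the loop is that every hour of field operation grows the labeled dataset, so each retraining pass sees more real-world conditions than the last.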

FarmWise’s weeding machines are geo-fenced by uploading maps of the fields. The onboard cameras also act as a safety override that can stop the machines.
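
A geofence of this kind typically reduces to a point-in-polygon test against the uploaded field boundary. The ray-casting algorithm below is the standard approach; the coordinates are made up:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the closed polygon (list of vertices)?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from (x, y) cross edge (x1, y1)-(x2, y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# A rectangular field uploaded as a map boundary (hypothetical coordinates).
field_boundary = [(0.0, 0.0), (100.0, 0.0), (100.0, 50.0), (0.0, 50.0)]

print(point_in_polygon(50.0, 25.0, field_boundary))   # True  -> keep working
print(point_in_polygon(120.0, 25.0, field_boundary))  # False -> stop the machine
```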

The 30-person company attracted recruits to its sustainable farming mission from SpaceX, Tesla, Cruise and Facebook as well as experts in farm machine design and operations, said Boyer.

Developing machines for farms, said Boyer, requires spending time in the field to understand the needs of farmers and translating their ideas into technology.

“We’re a group of engineers with very close ties to the farming community,” he said.

The post Farm to Frameworks: French Duo Fork Into Sustainable Farming with Robotics appeared first on The Official NVIDIA Blog.

AI Meets 5G at the Edge: The Innovation Center at MWC 2020 in Barcelona

AI is alive at the edge of the network, where it’s already transforming everything from car makers to supermarkets. And we’re just getting started.

NVIDIA’s AI Edge Innovation Center, a first for this year’s Mobile World Congress (MWC) in Barcelona, will put attendees at the intersection of AI, 5G and edge computing. There, they can hear about best practices for AI at the edge and get an update on how NVIDIA GPUs are paving the way to better, smarter 5G services.

It’s a story that’s moving fast.

AI was born in the cloud to process the vast amounts of data needed for jobs like recommending new products and optimizing news feeds. But most enterprises interact with their customers and products in the physical world at the edge of the network — in stores, warehouses and smart cities.

The need to sense, infer and act in real time as conditions change is driving the next wave of AI adoption at the edge. That’s why a growing list of forward-thinking companies are building their own AI capabilities using the NVIDIA EGX edge-computing platform.

Walmart, for example, built a smart supermarket it calls its Intelligent Retail Lab. Jakarta uses AI in a smart city application to manage its vehicle registration program. BMW and Procter & Gamble automate inspection of their products in smart factories. They all use NVIDIA EGX along with our Metropolis application framework for video and data analytics.

For conversational AI, the NVIDIA Jarvis developer kit enables voice assistants geared to run on embedded GPUs in smart cars or other systems. WeChat, the world’s most popular smartphone app, accelerates conversational AI using NVIDIA TensorRT software for inference.

All these software stacks ride on our CUDA-X libraries, tools, and technologies that run on an installed base of more than 500 million NVIDIA GPUs.

Carriers Make the Call

At MWC Los Angeles this year, NVIDIA founder and CEO Jensen Huang announced Aerial, software that rides on the EGX platform to let telecommunications companies harness the power of GPU acceleration.

Ericsson’s Fredrik Jejdling, executive vice president and head of business area networks, joined NVIDIA CEO Jensen Huang on stage at MWC LA to announce their collaboration.

With Aerial, carriers can both increase the spectral efficiency of their virtualized 5G radio-access networks and offer new AI services for smart cities, smart factories, cloud gaming and more — all on the same computing platform.

In Barcelona, NVIDIA and partners including Ericsson will give an update on how Aerial will reshape the mobile edge network.

Verizon is already using NVIDIA GPUs at the edge to deliver real-time ray tracing for AR/VR applications over 5G networks.

It’s one of several ways telecom applications can be taken to the next level with GPU acceleration. Imagine having the ability to process complex AI jobs on the nearest base station with the speed and ease of making a cellular call.

Your Dance Card for Barcelona

For a few days in February, we will turn our innovation center — located at Fira de Barcelona, Hall 4 — into a virtual university on AI with 5G at the edge. Attendees will get a world-class deep dive on this strategic technology mashup and how companies are leveraging it to monetize 5G.

Sessions start Monday morning, Feb. 24, and include AI customer case studies in retail, manufacturing and smart cities. Afternoon talks will explore consumer applications such as cloud gaming, 5G-enabled cloud AR/VR and AI in live sports.

We’ve partnered with the organizers of MWC on applied AI sessions on Tuesday, Feb. 25. These presentations will cover topics like federated learning, an emerging technique for collaborating on the development and training of AI models while protecting data privacy.

Wednesday’s schedule features three roundtables where attendees can meet executives working at the intersection of AI, 5G and edge computing. The week also includes two instructor-led sessions from the NVIDIA Deep Learning Institute, which trains developers on best practices.

See Demos, Take a Meeting

For a hands-on experience, check out our lineup of demos based on the NVIDIA EGX platform. These will highlight applications such as object detection in a retail setting, ways to unsnarl traffic congestion in a smart city and our cloud-gaming service GeForce Now.

To learn more about the capabilities of AI, 5G and edge computing, check out the full agenda and book an appointment here.

The post AI Meets 5G at the Edge: The Innovation Center at MWC 2020 in Barcelona appeared first on The Official NVIDIA Blog.

BERT Does Europe: AI Language Model Learns German, Swedish

BERT is at work in Europe, tackling natural-language processing jobs in multiple industries and languages with help from NVIDIA’s products and partners.

The AI model formally known as Bidirectional Encoder Representations from Transformers debuted just last year as a state-of-the-art approach to machine learning for text. Though new, BERT is already finding use in avionics, finance, semiconductor and telecom companies on the continent, said developers optimizing it for German and Swedish.

“There are so many use cases for BERT because text is one of the most common data types companies have,” said Anders Arpteg, head of research for Peltarion, a Stockholm-based developer that aims to make the latest AI techniques such as BERT inexpensive and easy for companies to adopt.

Natural-language processing will outpace today’s AI work in computer vision because “text has way more apps than images — we started our company on that hypothesis,” said Milos Rusic, chief executive of deepset in Berlin. He called BERT “a revolution, a milestone we bet on.”

Deepset is working with PricewaterhouseCoopers to create a system that uses BERT to help strategists at a chip maker query piles of annual reports and market data for key insights. In another project, a manufacturing company is using NLP to search technical documents to speed maintenance of its products and predict needed repairs.

Peltarion, a member of NVIDIA’s Inception program that nurtures startups with access to its technology and ecosystem, packed support for BERT into its tools in November. It is already using NLP to help a large telecom company automate parts of its process for responding to product and service requests. And it’s using the technology to let a large market research company more easily query its database of surveys.

Work in Localization

Peltarion is collaborating with three other organizations on a three-year, government-backed project to optimize BERT for Swedish. Interestingly, a new model from Facebook called XLM-R suggests training on multiple languages at once could be more effective than optimizing for just one.

“In our initial results, XLM-R, which Facebook trained on 100 languages at once, outperformed a vanilla version of BERT trained for Swedish by a significant amount,” said Arpteg, whose team is preparing a paper on their analysis.

Nevertheless, the group hopes to release before summer a first version of a Swedish BERT model that performs well, said Arpteg, who headed up an AI research group at Spotify before joining Peltarion three years ago.

An analysis by deepset of its German version of BERT.

In June, deepset released as open source a version of BERT optimized for German. Although its performance is only a couple of percentage points ahead of the original model, two winners of an annual NLP competition in Germany used the deepset model.

Right Tool for the Job

BERT also benefits from optimizations for specific tasks such as text classification, question answering and sentiment analysis, said Arpteg. Peltarion researchers plan to publish in 2020 the results of an analysis of the gains from tuning BERT for fields with their own vocabularies, such as medicine and law.
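
A toy example shows why domain vocabularies matter: a subword tokenizer with only general-purpose pieces shreds a medical term into many fragments, while one whose vocabulary includes the term keeps it whole. The simplified WordPiece-style tokenizer and the tiny vocabularies below are illustrative only, not Peltarion’s actual method:

```python
def wordpiece(word, vocab):
    """Greedy longest-match-first subword tokenization (simplified WordPiece)."""
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        while end > start:
            # Non-initial pieces carry the "##" continuation prefix.
            piece = word[start:end] if start == 0 else "##" + word[start:end]
            if piece in vocab:
                pieces.append(piece)
                break
            end -= 1
        else:                       # no subword matched at this position
            return ["[UNK]"]
        start = end
    return pieces

general_vocab = {"my", "##o", "##card", "##it", "##is"}
domain_vocab = general_vocab | {"myocarditis"}   # vocabulary extended for medicine

print(wordpiece("myocarditis", general_vocab))  # ['my', '##o', '##card', '##it', '##is']
print(wordpiece("myocarditis", domain_vocab))   # ['myocarditis']
```

Fewer fragments mean the model sees the term as one meaningful unit instead of spreading its signal across five generic pieces.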

The question-answering task has become so strategic for deepset that it created Haystack, a version of its FARM transfer-learning framework built to handle the job.

In hardware, the latest NVIDIA GPUs are among the favorite tools both companies use to tame big NLP models. That’s not surprising given NVIDIA recently broke records lowering BERT training time.

“The vanilla BERT has 100 million parameters and XLM-R has 270 million,” said Arpteg, whose team recently purchased systems using NVIDIA Quadro and TITAN GPUs with up to 48GB of memory. It also has access to NVIDIA DGX-1 servers because “for training language models from scratch, we need these super-fast systems,” he said.

More memory is better, said Rusic, whose German BERT models weigh in at 400MB. Deepset taps into NVIDIA V100 Tensor Core GPUs on cloud services and uses another NVIDIA GPU locally.
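
Those memory figures line up with simple arithmetic: an fp32 checkpoint weighs roughly four bytes per parameter. (BERT-base is usually cited at about 110 million parameters; the helper below is a back-of-the-envelope estimate, not an exact accounting of any framework’s file format.)

```python
def model_size_mb(num_params, bytes_per_param=4):
    """Rough checkpoint size: parameters x bytes per parameter (fp32 = 4 bytes)."""
    return num_params * bytes_per_param / 1e6

# ~110M fp32 parameters lands near the ~400MB figure quoted above for German BERT.
print(round(model_size_mb(110e6)))   # 440
# XLM-R's ~270M fp32 parameters:
print(round(model_size_mb(270e6)))   # 1080
```

That gap is one reason larger multilingual models push teams toward GPUs with more onboard memory.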

The post BERT Does Europe: AI Language Model Learns German, Swedish appeared first on The Official NVIDIA Blog.

Blue Moon Over Dijon: French Hobbyist Taps GPU for Stellar Camera

By day, Alain Paillou is the head of water quality for the Bourgogne region of France. But when the stars come out, he indulges his other passions.

Paillou takes exquisitely crisp pictures of the moon, stars and planets — a hobby that combines his lifelong love of astronomy and technology.

Earlier this year, he chronicled on an NVIDIA forum his work building what he calls SkyNano, a GPU-powered camera to take detailed images of the night sky using NVIDIA’s Jetson Nano.

“I’ve been interested in astronomy from about eight or 10 years old, but I had to quit my studies for more than 30 years because of my job as an aerospace software engineer,” said Paillou in an interview from his home in Dijon.

Paillou went back to school in his early 30s to get a degree and eventually a job as a hydrogeologist. “I came back to astronomy after my career change 20 years ago when I lived in Paris, where I started taking photographs of the moon, Jupiter and Saturn,” he said.

“I really love technology and astronomy needs technical competence,” he said. “It lets me return to some of the skills of my first job — developing software to get the best results from my equipment — and it’s very interesting to me.”

Seeing Minerals on the Moon

Paillou loves to take color-enhanced pictures of the moon that show the diversity of its blue titanium and orange iron-oxide minerals. And he delights in capturing star-rich pictures of the night sky. Both require significant real-time filters, best run on a GPU.

Around his Dijon home, as in many places, “the sky is really bad with light pollution from cities that make images blurry,” he said. “I can see 10-12 stars with my eyes, but with my system I can see thousands of stars.”

Paillou in his home astronomy lab in Dijon.

“If you want to retrieve something beautiful, you need to apply real-time filtering with an A/V compensation system. I built my own system because I could not find anything I could buy that matched what I wanted,” Paillou said.

Building the SkyNano

His first prototype mounted a ZWO ASI178MC camera using a Sony IMX178 color sensor on a platform with a gyro/compass and a two-axis mount controlled by stepper motors. Initially he used a Raspberry Pi 3 B+ to run Python programs that controlled the mount and camera.

The board lacked the muscle to drive the real-time filters. After some more experiments, he asked NVIDIA for help in his first post on the Jetson Nano community projects forum on June 21. By July 5, he had a Jetson Nano in hand and started loading OpenCV filters on it using Python.

By the end of July, he had taught himself PyCUDA and posted significant results with it. He released his routines on GitHub and reported he was ready to start taking pictures.

On Aug. 2, he posted his camera’s first digitally enhanced picture of the Copernicus crater on the moon as well as a YouTube video showing a Jetson Nano-enhanced night sky. By October, he posted stunning color-enhanced pictures of the moon (see above), impressive night-vision capabilities and a feature for tracking satellites.

Paillou’s project became the most popular thread on the NVIDIA Jetson community projects forum, with more than 3,100 views to date. Along the way, he gave a handful of others tips for their own AI projects, many of which are available here.

Exploring Horizons in Space and Software

“Twenty years ago, computers were not powerful enough to do this work, but today a little computer like the Jetson Nano makes it really interesting and it’s not expensive,” said Paillou, whose laptop connected to the system also uses an NVIDIA GPU.

In fact, the $99 Jetson Nano is currently marked down to $89 in a holiday special on NVIDIA’s website. Hobbyists who want to use Jetson Nano for neural networking can pair the starter kit with a free AI for Beginners course from our Deep Learning Institute.

Paillou sees plenty of headroom for his project. He hopes to rewrite his Python code in C++ for further performance speed-ups, get a better camera, and further study the possibilities for using AI.

With a little help from friends in America, the sky’s the limit.

“I was not sure I would have the time to learn CUDA – at 52, I am not so young – but it turned out to be very powerful and not so complicated,” he said.

Follow Paillou’s work, and many other projects contributed by fellow developers, on the Jetson Community Projects page.

Paillou’s SkyNano (lower left) and SkyPC waiting for the dark.


The post Blue Moon Over Dijon: French Hobbyist Taps GPU for Stellar Camera appeared first on The Official NVIDIA Blog.

All the Way to 11: NVIDIA GPUs Accelerate 11.11, World’s Biggest Online Shopping Event

Putting AI to work on a massive scale, Alibaba recently harnessed NVIDIA GPUs to serve its customers on 11/11, the year’s largest shopping event.

Singles Day, as the Nov. 11 shopping event is also known, generated $38 billion in sales. That’s up by nearly a quarter from last year’s $31 billion, and more than double online sales on Black Friday and Cyber Monday combined.

Singles Day — which has grown from $7 million a decade ago — illustrates the massive scale AI has reached in global online retail, where no player is bigger than Alibaba.

Each day, over 100 million shoppers comb through billions of available products on its site. Activity skyrockets on peak shopping days, when Alibaba’s systems field hundreds of thousands of queries a second.

And AI keeps things humming along, according to Lingjie Xu, Alibaba’s director of heterogeneous computing.

“To ensure these customers have a great user experience, we deploy state-of-the-art AI technology at massive scale using the NVIDIA accelerated computing platform, including T4 GPUs, cuBLAS, customized mixed precision and inference acceleration software,” he said.

“The platform’s intuitive search capabilities and reliable recommendations allow us to support a model six times more complex than in the past, which has driven a 10 percent improvement in click-through rate. Our largest model shows 100 times higher throughput with T4 compared to CPU,” he said.

One key application for Alibaba and other modern online retailers: recommender systems that display items that match user preferences, improving the click-through rate — which is closely watched in the e-commerce industry as a key sales driver.

Every small improvement in click-through rate directly impacts the user experience and revenues. A 10 percent improvement from advanced recommender models that can run in real time, and at incredible scale, is only possible with GPUs.

Alibaba’s teams employ NVIDIA GPUs to support a trio of optimization strategies around resource allocation, model quantization and graph transformation to increase throughput and responsiveness.

This has enabled NVIDIA T4 GPUs to accelerate Alibaba’s wide and deep recommendation model and deliver 780 queries per second. That’s a huge leap from CPU-based inference, which could only deliver three queries per second.
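
Those throughput numbers imply a 260x speedup, which translates directly into server counts at peak load. A quick sketch, assuming a hypothetical peak of 500,000 queries per second (the article says only “hundreds of thousands,” so the exact peak is an assumption):

```python
import math

def speedup(gpu_qps, cpu_qps):
    """How many times more queries per second the GPU path serves."""
    return gpu_qps / cpu_qps

# Throughput figures quoted above for the wide-and-deep recommendation model:
gpu_qps, cpu_qps = 780, 3
print(speedup(gpu_qps, cpu_qps))          # 260.0

# Illustrative capacity planning for the assumed 500,000 queries/sec peak:
peak_qps = 500_000
servers_gpu = math.ceil(peak_qps / gpu_qps)
servers_cpu = math.ceil(peak_qps / cpu_qps)
print(servers_gpu, servers_cpu)           # 642 166667
```

The sizing math is back-of-the-envelope, but it shows why per-server throughput, not just raw accuracy, decides whether a complex model is deployable at this scale.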

Alibaba has also deployed NVIDIA GPUs to accelerate its systems for automatic ad banner generation, ad recommendation, image processing to help identify fake products, language translation and speech recognition, among others. As the world’s third-largest cloud service provider, Alibaba Cloud provides a wide range of heterogeneous computing products capable of intelligent scheduling, automatic maintenance and real-time capacity expansion.

Alibaba’s far-sighted deployment of NVIDIA’s AI platform is a straw in the wind, indicating what more is to come in a burgeoning range of industries.

Just as its tools filter billions of products for millions of consumers, AI recommenders running on NVIDIA GPUs will find a place among countless other digital services — app stores, news feeds, restaurant guides and music services among them — keeping customers happy.

Learn more about NVIDIA’s AI inference platform.

The post All the Way to 11: NVIDIA GPUs Accelerate 11.11, World’s Biggest Online Shopping Event appeared first on The Official NVIDIA Blog.

Retail Therapy: Smart Carts Roll Out to Assist Shoppers and Avoid Checkouts

York Yang is a hardcore millennial: He doesn’t wait for restaurants or mass transit — he orders on-demand meals and wheels. And when it comes to grocery checkouts, he takes matters into his own hands.

Yang and three co-founders started Caper, a maker of automated shopping carts, to solve their generation’s distaste for waiting in line to pay at the supermarket.

Sure, there’s Instacart or other delivery services, but when you want the instant gratification of a kombucha, those options are too slow, said Yang, the company’s CTO.

Caper speeds things along. Its smart carts feature barcode scanners, three cameras for image recognition, scales and point-of-sale card readers. Shoppers can use them for self-checkout to bypass cashier lines. They pack technology similar to that of Amazon Go self-service stores into a shopping cart.

“The technology enables shoppers to scan directly on the shopping cart and pay directly on the shopping cart so they don’t have to wait in line to get to the cashiers,” Yang said.

New York-based Caper recently landed $10 million in financing. The Y Combinator-accelerated company is also a member of the NVIDIA Inception program, which helps startups scale markets faster.

Bagging Store Interest

Grocers are getting comfortable with Caper, which offers new convenience without the cost of a store overhaul.

Caper’s convenience goes beyond self-checkouts. Its smart carts can suggest items to buy based on previous purchases, as well as guide shoppers to items in the store with a map.

Caper also remotely updates prices and deals on the smart carts hourly to match store databases.

It’s a flood of streaming data. To handle it all, the company deploys NVIDIA GPUs in edge servers inside stores to power its smart carts.

Caper’s carts are in pilot tests with Sobeys, Canada’s second-largest food retailer with more than 1,500 locations. “Right now, the whole team of executives is really excited about our solution. They want to help us grow this out and help us succeed in this area,” said Yang.

Training AI Carts

Stores stock a lot of items. Caper is developing image-recognition models to recognize as many as 50,000 items for some stores, while avoiding the cost and time of photographing every product to produce labeled datasets.

It takes 100 to 1,000 images to positively identify each item, but Caper accelerates the process with simulation-based data augmentation. It can take five photos of each product, then run 3D simulations that capture different angles to synthetically expand the training set to 100 to 1,000 images per product.
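
The expansion is straightforward combinatorics: cross each real photo with a grid of simulated camera angles. The filenames and the angle grid below are illustrative choices, not Caper’s actual rendering parameters:

```python
import itertools

def synthetic_views(base_images, yaws, pitches):
    """Cross each base photo with a grid of simulated camera angles."""
    return [(img, yaw, pitch)
            for img, yaw, pitch in itertools.product(base_images, yaws, pitches)]

base = [f"sku123_photo{i}.jpg" for i in range(5)]   # 5 real photos per product
yaws = range(0, 360, 45)                            # 8 simulated yaw angles
pitches = (-30, 0, 30)                              # 3 simulated pitch angles

views = synthetic_views(base, yaws, pitches)
print(len(views))   # 120 training images per product, from only 5 photos
```

Tuning the grid density moves the per-product count anywhere in the 100-to-1,000 range the models need.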

Caper runs these graphics-intensive simulations and its model training on NVIDIA GPUs in the cloud and on local machines.

Cartloads of Training

Caper has a lot of data pipelines running. Onboarding each store for image recognition is a big undertaking, but its barcode reader can scan items while that capability is built out. In the interim, the carts use sensor fusion between their scales and cameras to verify items by weight and image, while much of the Caper team works on perfecting its image models for retailers.
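
Such a weight-plus-vision check can be sketched as a simple fusion rule: accept a scan only when the camera’s prediction matches the barcode and the scale reading falls within tolerance of the catalog weight. All product names, weights and the tolerance below are hypothetical:

```python
def verify_item(scanned_sku, predicted_sku, measured_g, catalog, tolerance=0.10):
    """Fuse scale and camera: accept the scan only if the vision model agrees
    on the SKU and the measured weight is within tolerance of the catalog weight."""
    expected_g = catalog[scanned_sku]
    weight_ok = abs(measured_g - expected_g) <= tolerance * expected_g
    vision_ok = predicted_sku == scanned_sku
    return weight_ok and vision_ok

catalog = {"kombucha-16oz": 510.0, "banana-bunch": 1150.0}  # grams, illustrative

print(verify_item("kombucha-16oz", "kombucha-16oz", 505.0, catalog))  # True
print(verify_item("kombucha-16oz", "banana-bunch", 505.0, catalog))   # False: vision mismatch
print(verify_item("kombucha-16oz", "kombucha-16oz", 900.0, catalog))  # False: weight off
```

Requiring both signals to agree makes the cart robust to a single bad sensor reading or a misread barcode.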

The fast-growing startup — it has launched in a number of grocery stores since its January debut — has a lot of AI know-how in play across its teams. Keeping everybody up to speed, especially new hires, has been aided by NVIDIA’s Inception program, which offers free access to Deep Learning Institute courses on the latest AI topics.

“As we hire more people, our new hires might not always have all the skills we require — NVIDIA’s Deep Learning Institute courses are very useful and have helped train them,” Yang said.


Photo credit: David Shankbone, licensed under Creative Commons 3.0

The post Retail Therapy: Smart Carts Roll Out to Assist Shoppers and Avoid Checkouts appeared first on The Official NVIDIA Blog.

AWS Outposts Station a GPU Garrison in Your Datacenter

All the goodness of GPU acceleration on Amazon Web Services can now also run inside your own data center.

AWS Outposts powered by NVIDIA T4 Tensor Core GPUs are generally available starting today. They bring cloud-based Amazon EC2 G4 instances inside your data center to meet user requirements for security and latency in a wide variety of AI and graphics applications.

With this new offering, AI is no longer a research project.

Most companies still keep their data inside their own walls because they see it as their core intellectual property. But for deep learning to transition from research into production, enterprises need the flexibility and ease of development the cloud offers — right beside their data. That’s a big part of what AWS Outposts with T4 GPUs now enables.

With it, enterprises can install a fully managed rack-scale appliance next to the large data lakes stored securely in their data centers.

AI Acceleration Across the Enterprise

To train neural networks, every layer of software needs to be optimized, from NVIDIA drivers to container runtimes and application frameworks. AWS services like Amazon SageMaker, Elastic MapReduce and many others built on custom Amazon Machine Images let model development start with training on large datasets. With the introduction of NVIDIA-powered AWS Outposts, those services can now run securely in enterprise data centers.

The GPUs in Outposts accelerate deep learning as well as high performance computing and other GPU applications. They all can access software in NGC, NVIDIA’s hub for GPU-optimized software, which is stocked with applications, frameworks, libraries and SDKs that include pre-trained models.

For AI inference, the NVIDIA EGX edge-computing platform also runs on AWS Outposts and works with Amazon Elastic Kubernetes Service. Backed by the power of NVIDIA T4 GPUs, these services are capable of processing orders of magnitude more information than CPUs alone. They can quickly derive insights from vast amounts of data streamed in real time from sensors in an Internet of Things deployment, whether it’s in manufacturing, healthcare, financial services, retail or any other industry.

On top of EGX, the NVIDIA Metropolis application framework provides building blocks for vision AI, geared for use in smart cities, retail, logistics and industrial inspection, as well as other AI and IoT use cases, now easily delivered on AWS Outposts.

Alternatively, the NVIDIA Clara application framework is tuned to bring AI to healthcare providers whether it’s for medical imaging, federated learning or AI-assisted data labeling.

The T4 GPU’s Turing architecture uses TensorRT to accelerate the industry’s widest set of AI models. Its Tensor Cores support multi-precision computing that delivers up to 40x more inference performance than CPUs.

Remote Graphics, Locally Hosted

Users of high-end graphics have choices, too. Remote designers, artists and technical professionals who need to access large datasets and models can now get both cloud convenience and GPU performance.

Graphics professionals can benefit from the same NVIDIA Quadro technology that powers most of the world’s professional workstations, not only in the public AWS cloud but now also in their own internal cloud, with AWS Outposts packing T4 GPUs.

Whether they’re working locally or in the cloud, Quadro users can access the same set of hundreds of graphics-intensive, GPU-accelerated third-party applications.

The Quadro Virtual Workstation AMI, available in AWS Marketplace, includes the same Quadro driver found on physical workstations. It supports hundreds of Quadro-certified applications such as Dassault Systèmes SOLIDWORKS and CATIA; Siemens NX; Autodesk AutoCAD and Maya; ESRI ArcGIS Pro; and ANSYS Fluent, Mechanical and Discovery Live.

Learn more about AWS and NVIDIA offerings and check out our booth 1237 and session talks at AWS re:Invent.

The post AWS Outposts Station a GPU Garrison in Your Datacenter appeared first on The Official NVIDIA Blog.

Healthcare Regulators Open the Tap for AI

Approvals for AI-based healthcare products are streaming in from regulators around the globe, with medical imaging leading the way.

It’s just the start of what’s expected to become a steady flow as submissions rise and the technology becomes better understood.

More than 90 medical imaging products using AI are now cleared for clinical use, thanks to approvals from at least one global regulator, according to Signify Research Ltd., a U.K. consulting firm in healthcare technology.

Regulators in Europe and the U.S. are setting the pace. Each has issued about 60 approvals to date. Asia is making its mark, with South Korea and Japan issuing their first approvals recently.

Entrepreneurs are at the forefront of the trend to apply AI to healthcare.

At least 17 companies in NVIDIA’s Inception program, which accelerates startups, have received regulatory approvals. They include some of the first companies in Israel, Japan, South Korea and the U.S. to get regulatory clearance for AI-based medical products. Inception members get access to NVIDIA’s experts, technologies and marketing channels.

“Radiology AI is now ready for purchase,” said Sanjay Parekh, a senior market analyst at Signify Research.

The pipeline promises significant growth over the next few years.

“A year or two ago this technology was still in the research and validation phase. Today, many of the 200+ algorithm developers we track have either submitted or are close to submitting for regulatory approval,” said Parekh.

Startups Lead the Way

Trends in clearances for AI-based products will be a hot topic at the gathering this week of the Radiological Society of North America, Dec. 1-6 in Chicago. The latest approvals span products from startups around the globe that will address afflictions of the brain, heart and bones.

In mid-October, Inception partner LPIXEL Inc. won one of the first two approvals for an AI-based product from the Pharmaceuticals and Medical Devices Agency in Japan. LPIXEL’s product, called EIRL aneurysm, uses deep learning to identify suspected aneurysms using a brain MRI. The startup employs more than 30 NVIDIA GPUs, delivering more accurate results faster than traditional approaches.

In November, Inception partner ImageBiopsy Lab (Vienna) became the first company in Austria to receive 510(k) clearance for an AI product from the U.S. Food and Drug Administration. The Knee Osteoarthritis Labelling Assistant (KOALA) uses deep learning to process radiological data in seconds for knee osteoarthritis, a malady that afflicts 70 million patients worldwide.

In late October, HeartVista (Los Gatos, Calif.) won FDA 510(k) clearance for its One Click MRI acquisition software. The Inception partner’s AI product makes non-invasive cardiac MRI practical for many patients, replacing an existing invasive procedure.

Regulators in South Korea cleared products from two Inception startups — Lunit and Vuno. They were among the first four companies to get approval to sell AI-based medical products in the country.

In China, a handful of Inception startups are in the pipeline to receive the country’s first class-three approvals, which are required before hospitals can pay for a product or service. They include companies such as 12Sigma and Shukun that already have class-two clearances.

Healthcare giants are fully participating in the trend, too.

Earlier this month, GE Healthcare won clearance for its Deep Learning Image Reconstruction engine, which uses AI to improve reading confidence for head, whole-body and cardiovascular images. It’s one of several medical imaging apps on GE’s Edison system, powered by NVIDIA GPUs.

Coming to Grips with Big Data

Zebra Medical Vision, in Israel, is among the AI startups most experienced in dealing with global regulators. European regulators have approved more than a half dozen of its products, and the FDA has cleared three, with two more submissions pending.

AI creates new challenges regulators are still working through. “The best way for regulators to understand the quality of the AI software is to understand the quality of the data, so that’s where we put a lot of effort in our submissions,” said Eyal Toledano, co-founder and CTO at Zebra.

The shift to evaluating data has its pitfalls. “Sometimes regulators talk about data used for training, but that’s a distraction,” said Toledano.

“They may get distracted by looking at the training data; it is sometimes difficult to see that you can train your model on noisy data in large quantities but still generalize well. I really think they should focus on evaluation and test data,” he said.

In addition, it can be hard to make fair comparisons between new products that use deep learning and legacy products that don’t. Until recently, vendors published only performance metrics and were allowed to keep their data sets hidden as trade secrets. Companies submitting new AI products therefore cannot compare themselves against each other, or against legacy algorithms, apples to apples the way public challenges allow.

Zebra participated in feedback programs the FDA created to get a better understanding of the issues in AI. The company currently focuses on approvals in the U.S. and Europe because their agencies are seen as leaders with robust processes that other countries are likely to follow.

A Tour of Global Regulators

Breaking new ground, the FDA published in June a 20-page proposal for guidelines on AI-based medical products. It opens the door for the first time to products that improve as they learn.

It suggested products “follow pre-specified performance objectives and change control plans, use a validation process … and include real-world monitoring of performance once the device is on the market,” said FDA Commissioner Scott Gottlieb in an April statement.

AI has “the potential to fundamentally transform the delivery of health care … [with] earlier disease detection, more accurate diagnosis, more targeted therapies and significant improvements in personalized medicine,” he added.

For its part, the European Medicines Agency, Europe’s equivalent of the FDA, released in October 2018 a report on its goals through 2025. It includes plans to set up a dedicated AI test lab to gain insight into ways to support data-driven decisions. The agency is holding a November workshop on the report.

China’s National Medical Products Administration also issued in June technical guidelines for AI-based software products. It set up in April a special unit to set standards for approving the products.

Parekh, of Signify, recommends companies use data sets that are as large as possible for AI products and train algorithms for different types of patients around the world. “An algorithm used in China may not be applicable in the U.S. due to different population demographics,” he said.

Overall, automating medical processes with AI is a dual challenge.

“Quality needs to be not only as good as what a human can do, but in many cases it must be much better,” said Toledano, of Zebra. In addition, “to deliver value, you can’t just build an algorithm that detects something; it needs to deliver actionable results and insights for many stakeholders, such as general practitioners and specialists,” he added.

You can see six approved AI healthcare products from Inception startups — including CureMetrix, Subtle Medical and others — as well as NVIDIA’s technologies at our booth at the RSNA event.

The post Healthcare Regulators Open the Tap for AI appeared first on The Official NVIDIA Blog.

NVIDIA Clara Federated Learning to Deliver AI to Hospitals While Protecting Patient Data

With over 100 exhibitors at the annual Radiological Society of North America conference using NVIDIA technology to bring AI to radiology, 2019 looks to be a tipping point for AI in healthcare.

Despite AI’s great potential, a key challenge remains: gaining access to the huge volumes of data required to train AI models while protecting patient privacy. Partnering with the industry, we’ve created a solution.

Today at RSNA, we’re introducing NVIDIA Clara Federated Learning, which takes advantage of a distributed, collaborative learning technique that keeps patient data where it belongs — inside the walls of a healthcare provider.

Clara Federated Learning (Clara FL) runs on our recently announced NVIDIA EGX intelligent edge computing platform.

Federated Learning — AI with Privacy

Clara FL is a reference application for distributed, collaborative AI model training that preserves patient privacy. Running on NVIDIA NGC-Ready for Edge servers from global system manufacturers, these distributed client systems can perform deep learning training locally and collaborate to train a more accurate global model.

Here’s how it works: The Clara FL application is packaged into a Helm chart to simplify deployment on Kubernetes infrastructure. The NVIDIA EGX platform securely provisions the federated server and the collaborating clients, delivering everything required to begin a federated learning project, including application containers and the initial AI model.

NVIDIA Clara Federated Learning uses distributed training across multiple hospitals to develop robust AI models without sharing patient data.

Participating hospitals label their own patient data using the NVIDIA Clara AI-Assisted Annotation SDK, which is integrated into medical viewers like 3D Slicer, MITK, Fovia and Philips IntelliSpace Discovery. Using pre-trained models and transfer learning techniques, NVIDIA AI assists radiologists in labeling, reducing the time for complex 3D studies from hours to minutes.

NVIDIA EGX servers at participating hospitals train the global model on their local data. The local training results are shared back to the federated learning server over a secure link. This approach preserves privacy: only partial model weights, never patient records, are shared to build a new global model through federated averaging.

The process repeats until the AI model reaches its desired accuracy. This distributed approach delivers exceptional performance in deep learning while keeping patient data secure and private.
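The aggregation step in the cycle above can be sketched in a few lines. This is a minimal illustration of size-weighted federated averaging under stated assumptions, not NVIDIA’s Clara implementation; the function and variable names are invented for the example.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Combine locally trained model weights into one global model.

    Each client (hospital) trains on its own data and shares only its
    model weights; raw patient records never leave the site. Weights
    are averaged in proportion to each client's local dataset size.
    """
    stacked = np.stack(client_weights)                        # (n_clients, n_params)
    coeffs = np.array(client_sizes, dtype=float) / sum(client_sizes)
    return coeffs @ stacked                                   # weighted per-parameter average

# Three hypothetical hospitals with different amounts of local data
local_updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
dataset_sizes = [100, 200, 100]
global_weights = federated_average(local_updates, dataset_sizes)
print(global_weights)  # [3. 4.]
```

In a real deployment this round would repeat: the server redistributes the averaged weights, and each client resumes training locally until the global model reaches the desired accuracy.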

US and UK Lead the Way

Healthcare giants around the world — including the American College of Radiology, MGH and BWH Center for Clinical Data Science, and UCLA Health — are pioneering the technology. They aim to develop personalized AI for their doctors, patients and facilities where medical data, applications and devices are on the rise and patient privacy must be preserved.

ACR is piloting NVIDIA Clara FL in its AI-LAB, a national platform for medical imaging. The AI-LAB will allow the ACR’s 38,000 medical imaging members to securely build, share, adapt and validate AI models. Healthcare providers that want access to the AI-LAB can choose from a variety of NVIDIA NGC-Ready for Edge systems, including those from Dell, Hewlett Packard Enterprise, Lenovo and Supermicro.

UCLA Radiology is also using NVIDIA Clara FL to bring the power of AI to its radiology department. As a top academic medical center, UCLA can validate the effectiveness of Clara FL and extend it in the future across the broader University of California system.

Partners HealthCare in New England also announced a new initiative using NVIDIA Clara FL. Massachusetts General Hospital and Brigham and Women’s Hospital’s Center for Clinical Data Science will spearhead the work, leveraging data assets and clinical expertise of the Partners HealthCare system.

In the U.K., NVIDIA is partnering with King’s College London and Owkin to create a federated learning platform for the National Health Service. The Owkin Connect platform running on NVIDIA Clara enables algorithms to travel from one hospital to another, training on local datasets. It provides each hospital a blockchain-distributed ledger that captures and traces all data used for model training.

The project is initially connecting four of London’s premier teaching hospitals, offering AI services to accelerate work in areas such as cancer, heart failure and neurodegenerative disease, and will expand to at least 12 U.K. hospitals in 2020.

Making Everything Smart in the Hospital 

With the rapid proliferation of sensors, medical centers like Stanford Hospital are working to make every system smart. Making those sensors intelligent requires a powerful yet energy-efficient AI computer.

That’s why we’re announcing NVIDIA Clara AGX, an embedded AI developer kit that can handle image and video processing at high data rates, bringing AI inference and 3D visualization to the point of care.

NVIDIA Clara AGX scales from small, embedded devices to sidecar systems to full-size servers.

Clara AGX is powered by NVIDIA Xavier SoCs, the same processors that control self-driving cars. They consume as little as 10W, making them suitable for embedding inside a medical instrument or running in a small adjacent system.

A perfect showcase of Clara AGX is Hyperfine, the world’s first portable point-of-care MRI system. The revolutionary Hyperfine system will be on display in NVIDIA’s booth at this week’s RSNA event.

Hyperfine’s system is among the first of many medical instruments, surgical suites, patient monitoring devices and smart medical cameras expected to use Clara AGX. We’re witnessing the beginning of an AI-enabled internet of medical things.

Hyperfine’s mobile MRI system uses an NVIDIA GPU and will be on display at NVIDIA’s booth.

The NVIDIA Clara AGX SDK will be available soon through our early access program. It includes reference applications for two popular uses — real-time ultrasound and endoscopy edge computing.


Visit NVIDIA and our many healthcare partners in booth 10939 in the RSNA AI Showcase. We’ll be showing our latest AI-driven medical imaging advancements, including keeping patient data secure with AI at the edge.

Find out from our deep learning experts how to use AI to advance your research and accelerate your clinical workflows. See the full lineup of talks and learn more on our website.


The post NVIDIA Clara Federated Learning to Deliver AI to Hospitals While Protecting Patient Data appeared first on The Official NVIDIA Blog.

Buzzworthy AI: Startup’s Robo-Hives Counter Bee Population Declines

Honeybee colonies worldwide are under siege by parasites, but they now have a white knight: a band of Israeli entrepreneurs bearing AI.

Beewise, an Israel-based startup operating from a small community in the country’s north near the Lebanese border, is using AI to monitor honeybee colonies. It has secured more than $3 million in seed funding and launched a robo-hive that sports image recognition to support bee populations.

In the U.S., honeybee colonies have collapsed by 40 percent in the past year, according to a recent report. The culprit is widely viewed to be varroa mites, which feed off the liver-like organs of honeybees and larvae, causing weakness as well as greater susceptibility to diseases and viruses.

Farmers everywhere count on honeybees for pollination of fruits and vegetables, and many now have to rent colonies from beekeepers to support their crops. Without bees to pollinate them, plants would have a difficult time reproducing and bearing fruit for people to eat.

A cottage industry of small private companies and researchers alike is developing image recognition for early detection of the varroa mite so that beekeepers can act before it’s too late for colonies.

“We’re trying to work on the colony loss — I call it ‘eyes on the hives, 24/7,’” said Saar Safra, CEO and co-founder of Beewise.

Traditional Colony Work

Managing commercial hives is labor-intensive for beekeepers, who manually pull frames (see image below), or sections of the honeycombs, from beehives and visually inspect them.

This time-consuming work can span as many as 1,000 beehives managed by a single professional beekeeper. That means a beehive might go uninspected for several weeks while it waits its turn.

A few weeks of an undetected varroa mite infestation can have disastrous results for bee colonies. Computer vision with AI provides a faster way to keep on top of problems.

By replacing that traditional manual process with image recognition and robotics, keepers can recognize and treat the problem in real time, said Safra.

Beewise has developed a proprietary robotics system that can remotely treat infestations.

“When you take AI and apply it to traditional industries, the level of social impact is so much bigger than when you keep it enclosed in high tech — NVIDIA GPUs are basically doing a lot of that work,” he said.

Robo Beehive AI 

Beewise trained its neural networks on thousands of images of bees. Its convolutional neural networks perform image classification to identify bees carrying mites in the autonomous hives now in deployment.

Once image classification has identified bees infested with mites, a recurrent neural network decides on the best course of action. That could include having the robot automatically administer pesticides or quarantining the infested beehive frame from the others.
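As a rough illustration, that decision step could be approximated by thresholds on the fraction of bees the classifier flags as infested. This sketch is purely hypothetical: Beewise’s actual controller is a learned recurrent model, and the function name, threshold values and action labels below are invented for the example.

```python
def choose_action(infested_fraction, history,
                  treat_threshold=0.05, quarantine_threshold=0.20):
    """Pick a response to a detected varroa infestation.

    infested_fraction: share of bees in the latest frame images
        flagged as carrying mites.
    history: infestation fractions from recent inspections of this frame.
    Thresholds are illustrative, not Beewise's real policy.
    """
    history = history + [infested_fraction]
    trend_rising = len(history) >= 2 and history[-1] > history[-2]
    if infested_fraction >= quarantine_threshold:
        return "quarantine_frame"   # isolate the frame from the colony
    if infested_fraction >= treat_threshold and trend_rising:
        return "apply_treatment"    # robot administers pesticide
    return "monitor"                # keep watching, no action yet

print(choose_action(0.25, [0.10, 0.18]))  # quarantine_frame
print(choose_action(0.08, [0.03, 0.05]))  # apply_treatment
print(choose_action(0.02, [0.02, 0.02]))  # monitor
```

The appeal of a learned policy over fixed thresholds like these is that it can weigh trends across many inspections and hive conditions at once.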

Beewise has made this possible with autonomous beehives that rely on multiple cameras. Images from these prototype hives are fed to a compact NVIDIA Jetson supercomputer for real-time processing by its deep learning models.

“It’s a whole AI-based control system — our AI detects and identifies the varroa mite in real time and sterilizes it. Clean healthy colonies operate completely different than infested ones,” said Safra.


The post Buzzworthy AI: Startup’s Robo-Hives Counter Bee Population Declines appeared first on The Official NVIDIA Blog.