AI From the Sky: Stealth Entrepreneur’s Drone Platform Sees into Mines

Christian Sanz isn’t above trying disguises to sneak into places. He once put on a hard hat, vest and steel-toed boots to get onto the construction site of the San Francisco 49ers football stadium to explore applications for his drone startup.

That bold move scored his first deal.

For the entrepreneur who popularized drones in hackathons in 2012 as founder of the Drone Games matches, starting Skycatch in 2013 was a logical next step.

“We decided to look for more industrial uses, so I went and bought construction gear and was able to blend in, and in many cases people didn’t know I wasn’t working for them as I was collecting data,” Sanz said.

Skycatch has since grown up: In recent years, the San Francisco-based company has been providing some of the world’s largest mining and construction companies with its AI-enabled automated drone surveying and analytics platform. The startup, which has landed $47 million in funding, promises customers automated visibility into their operations.

At the heart of the platform is the Edge1 edge computer and base station, driven by the NVIDIA Jetson TX2. It can create 2D maps and 3D point clouds in real time, as well as pinpoint features to within five-centimeter accuracy. It also runs AI models for split-second object-detection inference in the field.

Today, Skycatch announced its new Discover1 device. The Discover1 connects to industrial machines, letting customers plug in a multitude of sensors that expand Skycatch’s data gathering.

The Discover1 sports a Jetson Nano inside to facilitate the collection of data from sensors and enable computer vision and machine learning on the edge. The device has LTE and WiFi connectivity to stream data to the cloud.

Change-Tracking AI

Skycatch can capture 3D images of job sites for merging against blueprints to monitor changes.

Such monitoring for one large construction site showed that electrical conduit pipes were installed in the wrong spot. Concrete would be poured next, cementing them in place. Catching the mistake early helped avoid a much costlier revision later.
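Skycatch hasn’t published its comparison pipeline; as a rough sketch of the idea, assuming the as-built survey and the design blueprint have been resampled onto a common elevation grid, flagging deviations reduces to a threshold test. The `flag_deviations` helper and the 5 cm tolerance below are illustrative, not the company’s actual method:

```python
import numpy as np

def flag_deviations(as_built, design, tolerance_m=0.05):
    """Boolean mask of grid cells whose as-built elevation deviates
    from the design surface by more than the tolerance (0.05 m echoes
    the platform's quoted five-centimeter accuracy)."""
    return np.abs(as_built - design) > tolerance_m

# Hypothetical 3x3 site patch: one cell (say, a misplaced conduit)
# sits 12 cm above the design grade and should be flagged.
design = np.zeros((3, 3))
as_built = design.copy()
as_built[1, 1] = 0.12
mask = flag_deviations(as_built, design)
```

Anything the mask flags can then be reviewed before the next construction step, such as a concrete pour, locks it in.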

Skycatch says that customers using its services can expect to compress the timelines on their projects as well as reduce costs by catching errors before they become bigger problems.

Surveying with Speed

Japan’s Komatsu, one of the world’s leading makers of bulldozers, excavators and other industrial machines, is an early customer of Skycatch.

With Japan facing a labor shortage, the equipment maker was looking for ways to help automate its products. One bottleneck was surveying a location, which could take days, before unleashing the machines.

Skycatch automated the process with its drone platform. The result for Komatsu is that less-skilled workers can generate a 3D map of a job site within 30 minutes, enabling operators to get started sooner with the land-moving beasts.

Jetson for AI

As Skycatch generated massive amounts of data, the company’s founder realized it needed more computing capability to handle it. And given the environments in which it was operating, the computing had to be done on the edge while consuming minimal power.

They turned to the Jetson TX2, which provides server-class AI performance in a small form factor using the CUDA-enabled NVIDIA Pascal GPU while drawing as little as 7.5 watts of power. Its high memory bandwidth and wide range of hardware interfaces in a rugged form factor are ideal for the industrial environments Skycatch operates in.

Sanz says that “indexing the physical world” is demanding because of all the unstructured data of photos and videos, which require feature extraction to “make sense of it all.”

“When the Jetson TX2 came out, we were super excited. Since 2017, we’ve rewritten our photogrammetry engine to use the CUDA language framework so that we can achieve much faster speed and processing,” Sanz said.

Remote Bulldozers

The Discover1 can collect data right from the shovel of a bulldozer. Inertial measurement unit, or IMU, sensors can be attached to the Discover1 on construction machines to track movements from the bulldozer’s point of view.
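Skycatch hasn’t detailed the Discover1’s sensor fusion. A generic complementary filter is one minimal way IMU readings like these are commonly turned into a stable tilt estimate; the function name and constants here are illustrative assumptions, not the product’s implementation:

```python
import math

def complementary_pitch(pitch, gyro_rate, ax, az, dt, alpha=0.98):
    """One update of a complementary filter: blend the gyro's
    integrated pitch (smooth but drifting) with the pitch implied by
    the accelerometer's gravity vector (noisy but drift-free).
    Angles are in radians; gyro_rate in rad/s; dt in seconds."""
    accel_pitch = math.atan2(ax, az)       # tilt from gravity
    gyro_pitch = pitch + gyro_rate * dt    # dead-reckoned tilt
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch
```

Called once per IMU sample, the filter keeps a shovel-angle estimate that neither drifts over minutes nor jitters with every vibration.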

One of the largest mining companies in the world uses the Discover1 in pilot tests to help remotely steer its massive mining machines in situations too dangerous for operators.

“Now you can actually enable 3D viewing of the machine to someone who is driving it remotely, which is much more affordable,” Sanz said.


Skycatch is a member of NVIDIA Inception, a virtual accelerator program that helps startups in AI and data science get to market faster.

The post AI From the Sky: Stealth Entrepreneur’s Drone Platform Sees into Mines appeared first on The Official NVIDIA Blog.

Office Ready? Jetson-Driven ‘Double Robot’ Supports Remote Working

Apple’s iPad 2 launch in 2011 ignited a touch tablet craze, but when David Cann and Marc DeVidts got their hands on one they saw something different: They rigged it to a remote-controlled golf caddy and posted a video of it in action on YouTube.

Next came phone calls from those interested in buying such a telepresence robot.

Hacks like this were second nature for the friends who met in 2002 while working on the set of the BattleBots TV series, featuring team-built robots battling before live audiences.

That’s how Double Robotics began in 2012. The startup went on to attend Y Combinator’s accelerator, and it has sold more than 12,000 units. That cash flow has allowed the small team, with just $1.8 million in seed funding, to carry on without raising more capital, a rarity in hardware.

Much has changed since they began. Double Robotics, based in Burlingame, Calif., today launched its third-generation model, the Double 3, sporting an NVIDIA Jetson TX2 for AI workloads.

“We did a bunch of custom CUDA code to be able to process all of the depth data in real time, so it’s much faster than before, and it’s highly tailored to the Jetson TX2 now,” said Cann.

Remote Worker Presence

The Double helped engineers inspect Selene while it was under construction.

The Double device, as it’s known, was designed to let remote workers visit offices in the form of the robot so they could join their co-workers in meetings. Video calls over the internet allow people to see and hear their remote colleague on the device’s tablet screen.

The Double was a popular ticket at tech companies on the East and West Coasts in the five years before the pandemic, and interest remains strong, though in different use cases, according to the company. It has also proven useful in rural communities across the country, where people travel long distances to get anywhere, the company said.

NVIDIA purchased a telepresence robot from Double Robotics so that non-essential designers sheltering at home could maintain daily contact with work on Selene, the world’s seventh-fastest computer.

Some customers say it breaks down communication barriers for remote workers: the robot’s physical presence lets them interact more naturally than they could over video conferencing platforms.

COVID-19 has also spurred interest in contact-free work using the Double. Pharmaceutical companies have contacted Double Robotics asking how the robot might aid in international development efforts, according to Cann. The biggest use case amid the pandemic is using Double robots in place of international business travel, he said: instead of flying in to visit a company office, the destination office could offer a Double to would-be travelers.


Double 3 Jetson Advances

Now shipping, the Double 3 features wide-angle and zoom cameras and can support night vision. It also uses two stereovision sensors for depth vision, five ultrasonic range finders, two wheel encoders and an inertial measurement unit sensor.

Double Robotics will sell the head of the new Double 3, which includes the Jetson TX2, to existing customers seeking to upgrade their robots’ brains for access to increasing levels of autonomy.

To enable the autonomous capabilities, Double Robotics relied on the NVIDIA Jetson TX2 to process all of the camera and sensor data in real time, using the CUDA-enabled GPU and the accelerated multimedia and image processors.

The company is working on autonomous features for improved self-navigation and safety features for obstacle avoidance as well as other capabilities, such as improved auto docking for recharging and auto pilot all the way into offices.

Right now the Double can do automated assisted driving to help people avoid hitting walls. The company next aims for full office autonomy and ways to help it get through closed doors.

“One of the reasons we chose the NVIDIA Jetson TX2 is that it comes with the Jetpack SDK that makes it easy to get started and there’s a lot that’s already done for you — it’s certainly a huge help to us,” said Cann.


The post Office Ready? Jetson-Driven ‘Double Robot’ Supports Remote Working appeared first on The Official NVIDIA Blog.


More Than a Wheeling: Boston Band of Roboticists Aim to Rock Sidewalks With Personal Bots

With Lime and Bird scooters covering just about every major U.S. city, you’d think all bets were off for walking. Think again.

Piaggio Fast Forward is staking its future on the idea that people will skip e-scooters or ride-hailing once they take a stroll with its gita robot. A Boston-based subsidiary of the iconic Vespa scooter maker, the company says the recent focus on getting fresh air and walking during the COVID-19 pandemic bodes well for its new robotics concept.

The fashionable gita robot — looking like a curvaceous vintage scooter — can carry up to 40 pounds and automatically keeps stride so you don’t have to lug groceries, picnic goodies or other items on walks. Another mark in gita’s favor: you can exercise in the fashion of those in Milan and Paris, walking sidewalks to meals and stores. “Gita” means short trip in Italian.

The robot may turn some heads on the street. That’s because Piaggio Fast Forward parent Piaggio Group, which also makes Moto Guzzi motorcycles, expects sleek, flashy designs under its brand.

The first idea from Piaggio Fast Forward was to automate something like a scooter to autonomously deliver pizzas. “The investors and leadership came from Italy, and we pitched this idea, and they were just horrified,” quipped CEO and founder Greg Lynn.

If the company gets it right, walking could even become fashionable in the U.S. Early adopters have been picking up gita robots since the November debut. The stylish personal gita robot, enabled by the NVIDIA Jetson TX2 supercomputer on a module, comes in signal red, twilight blue or thunder gray.

Gita as Companion

The robot was designed to follow a person. That means the company didn’t have to create a completely autonomous robot that uses simultaneous localization and mapping, or SLAM, to get around fully on its own, said Lynn. And it doesn’t use GPS.

Instead, a gita user taps a button and the robot’s cameras and sensors immediately capture images that pair the robot with its leader, so it can follow that person.

Using neural networks and the Jetson’s GPU to perform complex image processing tasks, the gita can avoid collisions with people by understanding how people move in sidewalk traffic, according to the company. “We have a pretty deep library of what we call ‘pedestrian etiquette,’ which we use to make decisions about how we navigate,” said Lynn.

Pose-estimation networks with 3D point cloud processing allow it to see the gestures of people to anticipate movements, for example. The company recorded thousands of hours of walking data to study human behavior and tune gita’s networks. It used simulation training much the way the auto industry does, using virtual environments. Piaggio Fast Forward also created environments in its labs for training with actual gitas.

“So we know that if a person’s shoulders rotate at a certain degree relative to their pelvis, they are going to make a turn,” Lynn said. “We also know how close to get to people and how close to follow.”
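Lynn’s shoulders-versus-pelvis cue can be sketched with basic trigonometry on 2D keypoints. The 20-degree threshold and the keypoint format below are assumptions for illustration, not Piaggio Fast Forward’s actual values:

```python
import math

def segment_angle(p_left, p_right):
    """Angle of the line through two keypoints, in degrees."""
    return math.degrees(math.atan2(p_right[1] - p_left[1],
                                   p_right[0] - p_left[0]))

def predicts_turn(shoulders, hips, threshold_deg=20.0):
    """True if the shoulder line is twisted relative to the hip line
    by more than the threshold, the cue that a turn is coming.
    shoulders/hips are ((x_left, y_left), (x_right, y_right)) pairs."""
    twist = segment_angle(*shoulders) - segment_angle(*hips)
    twist = (twist + 180.0) % 360.0 - 180.0   # wrap into [-180, 180)
    return abs(twist) > threshold_deg
```

A pose-estimation network supplies the keypoints; this kind of geometric test is one simple way to turn them into an early turn prediction.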

‘Impossible’ Without Jetson 

The robot has a stereo depth camera to understand the speed and distance of moving people, and it has three other cameras for seeing pedestrians for help in path planning. The ability to do split-second inference to make sidewalk navigation decisions was important.

“We switched over and started to take advantage of CUDA for all the parallel processing we could do on the Jetson TX2,” said Lynn.

Piaggio Fast Forward used lidar on its early design prototypes, which were tethered to a bulky desktop computer and cost tens of thousands of dollars in all. It needed a compact, energy-efficient and affordable embedded AI processor to sell its robot at a reasonable price.

“We have hundreds of machines out in the world, and nobody is joy-sticking them out of trouble. It would have been impossible to produce a robot for $3,250 if we didn’t rely on the Jetson platform,” he said.

Enterprise Gita Rollouts

Gita robots have been off to a good start in U.S. sales with early technology adopters, according to the company, which declined to disclose unit sales. They have also begun to roll out in enterprise customer pilot tests, said Lynn.   

Cincinnati-Northern Kentucky International Airport is running gita pilots for delivery of merchandise purchased in airports as well as food and beverage orders from mobile devices at the gates.

Piaggio Fast Forward is also working with some retailers who are experimenting with the gita robots for handling curbside deliveries, which have grown in popularity for avoiding the insides of stores.

The company is also in discussions with residential communities exploring the use of gita robots in place of golf carts to encourage walking in new developments.

Piaggio Fast Forward plans to launch several variations in the gita line of robots by next year.

“Rather than do autonomous vehicles to move people around, we started to think about a way to unlock the walkability of people’s neighborhoods and of businesses,” said Lynn.


Piaggio Fast Forward is a member of NVIDIA Inception, a virtual accelerator program that helps startups in AI and data science get to market faster.

The post More Than a Wheeling: Boston Band of Roboticists Aim to Rock Sidewalks With Personal Bots appeared first on The Official NVIDIA Blog.

Clarifying Training Time, Startup Launches AI-Assisted Data Annotation

Creating a labeled dataset for training an AI application can hit the brakes on a company’s speed to market. Clarifai, an image and text recognition startup, aims to put that obstacle in the rearview mirror.

The New York City-based company today announced the general availability of its AI-assisted data labeling service, dubbed Clarifai Labeler. The company offers data labeling as a service as well.

Founded in 2013, Clarifai entered the image-recognition market in its early days. Since that time, the number of companies exploiting unstructured data for business advantages has swelled, creating a wave of demand for data scientists. And with industry disruption from image and text recognition spanning agriculture, retail, banking, construction, insurance and beyond, much is at stake.

“High-quality AI models start with high-quality dataset annotation. We’re able to use AI to make labeling data an order of magnitude faster than some of the traditional technologies out there,” said Alfredo Ramos, a senior vice president at Clarifai.

Backed by NVIDIA GPU Ventures, Clarifai is gaining traction in retail, banking and insurance, as well as for applications in federal, state and local agencies, he says.

AI Labeling with Benefits

Clarifai’s Labeler shines at labeling video footage. The tool integrates a statistical method so that an annotated object — one with a bounding box around it — can be tracked as it moves throughout the video.

Since each second of video is made up of multiple frames of images, the tracking capabilities result in increased accuracy and huge improvements in the quantity of annotations per object, as well as a drastic reduction in the time to label large volumes of data.
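Clarifai hasn’t disclosed the statistical method behind Labeler’s tracking. A minimal stand-in for the idea of carrying one annotation across frames is greedy intersection-over-union matching against the next frame’s detections; the 0.3 threshold is an illustrative assumption:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def propagate(tracked_box, detections, min_iou=0.3):
    """Carry an annotation into the next frame: pick the detection
    that overlaps the tracked box most, if the overlap is convincing;
    otherwise report the track as lost (None)."""
    best = max(detections, key=lambda d: iou(tracked_box, d), default=None)
    if best is not None and iou(tracked_box, best) >= min_iou:
        return best
    return None
```

Because objects move little between consecutive frames, one human-drawn box can seed many machine-generated ones this way, which is the kind of multiplier behind the speedups described below.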

The new Labeler was most recently used to annotate days of video footage to build a model to detect whether people were wearing face masks, which resulted in a million annotations in less than four days.

Traditionally, this would’ve taken a human workforce six weeks to label the individual frames. With Labeler, they created 1 million annotations 10 times faster, said Ramos.

Clarifai uses an array of NVIDIA V100 Tensor Core GPUs onsite for development of models, and it taps into NVIDIA T4 GPUs in the cloud for inference.

Star-Powered AI 

Ramos reports to one of AI’s academic champions. CEO and founder Matthew Zeiler took the industry by storm when his neural networks dominated the ImageNet Challenge in 2013. That became his launchpad for Clarifai.

Zeiler has since evolved his research into developer-friendly products that allow enterprises to quickly and easily integrate AI into their workflows and customer experiences. The company continues to attract new customers, most recently, with the release of its natural language processing product.

While much has changed in the industry, Clarifai’s focus on research hasn’t.

“We have a sizable team of researchers, and we have become adept at taking some of the best research out there in the academic world and very quickly deploying it for commercial use,” said Ramos.


Clarifai is a member of NVIDIA Inception, a virtual accelerator program that helps startups in AI and data science get to market faster.

Image credit: Chris Curry via Unsplash.

The post Clarifying Training Time, Startup Launches AI-Assisted Data Annotation appeared first on The Official NVIDIA Blog.

AI Explains AI: Fiddler Develops Model Explainability for Transparency

Your online loan application just got declined without explanation. Welcome to the AI black box.

Businesses of all stripes turn to AI for computerized decisions driven by data. Yet consumers using applications with AI get left in the dark on how automated decisions work. And many people working within companies have no idea how to explain the inner workings of AI to customers.

Fiddler Labs wants to change that.

The San Francisco-based startup offers an explainable AI platform that enables companies to explain, monitor and analyze their AI products.

Explainable AI is a growing area of interest for enterprises because those outside of engineering often need to understand how their AI models work.

Using explainable AI, banks can provide reasons to customers for a loan’s rejection, based on data points fed to models, such as maxed credit cards or high debt-to-income ratios. Internally, marketers can strategize about customers and products by knowing more about the data points that drive them.

“This is bridging the gap between hardcore data scientists who are building the models and the business teams using these models to make decisions,” said Anusha Sethuraman, head of product marketing at Fiddler Labs.

Fiddler Labs is a member of NVIDIA Inception, a program that enables companies working in AI and data science with fundamental tools, expertise and marketing support, and helps them get to market faster.

What Is Explainable AI?

Explainable AI is a set of tools and techniques that help explore the math inside an AI model. It can map out the data inputs and their weighted values that were used to arrive at the data output of the model.

All of this, essentially, lets a layperson study the sausage factory inside an otherwise opaque process. The result: explainable AI can help deliver insights into how and why a model made a particular decision.
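Fiddler’s production techniques are more sophisticated and model-agnostic, but for a plain linear scoring model the mapping from inputs to output can be made fully transparent: each feature’s contribution is its weight times its deviation from a baseline, and the contributions sum exactly back to the score. A sketch with made-up loan features:

```python
def explain_linear(weights, x, baseline, bias=0.0):
    """Per-feature contributions of a linear scoring model.

    contribution_i = w_i * (x_i - baseline_i), so the baseline score
    plus all contributions reconstructs the model's output exactly.
    """
    contribs = {name: w * (x[name] - baseline[name])
                for name, w in weights.items()}
    baseline_score = bias + sum(w * baseline[n] for n, w in weights.items())
    return contribs, baseline_score

# Hypothetical loan features: high credit utilization and a high
# debt-to-income ratio both drag the score down relative to a
# typical (baseline) applicant.
weights = {"credit_utilization": -2.0, "debt_to_income": -1.5}
applicant = {"credit_utilization": 0.95, "debt_to_income": 0.60}
baseline = {"credit_utilization": 0.30, "debt_to_income": 0.25}
contribs, base = explain_linear(weights, applicant, baseline, bias=3.0)
```

Ranking the contributions yields exactly the kind of customer-facing reason codes, such as maxed credit cards or a high debt-to-income ratio, mentioned above.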

“There’s often a hurdle to get AI into production. Explainability is one of the things that we think can address this hurdle,” Sethuraman said.

With an ensemble of models often in use, creating this is no easy job.

But Fiddler Labs CEO and co-founder Krishna Gade is up to the task. He previously led the team at Facebook that built the “Why am I seeing this post?” feature to help consumers and internal teams understand how its AI works in the Facebook news feed.

He and Amit Paka — a University of Minnesota classmate — joined forces and quit their jobs to start Fiddler Labs. Paka, the company’s chief product officer, was motivated by his experience at Samsung with shopping recommendation apps and the lack of understanding into how these AI recommendation models work.

Explainability for Transparency

Founded in 2018, Fiddler Labs offers explainability for greater transparency in businesses. It helps companies make better informed business decisions through a combination of data, explainable AI and human oversight, according to Sethuraman.

Fiddler’s tech is used by Hired, a talent and job matchmaking site driven by AI. Fiddler provides real-time reporting on how Hired’s AI models are working. It can generate explanations on candidate assessments and provide bias monitoring feedback, allowing Hired to assess its AI.

Explainable AI needs to be quickly available for consumer fintech applications. That enables customer service representatives to explain automated financial decisions — like loan rejections and robo rates — and build trust with transparency about the process.

The algorithms used for explanations require hefty processing. Sethuraman said that Fiddler Labs taps into NVIDIA cloud GPUs to make this possible, saying CPUs aren’t up to the task.

“You can’t wait 30 seconds for the explanations — you want explanations within milliseconds on a lot of different things depending on the use cases,” Sethuraman said.

Visit NVIDIA’s financial services industry page to learn more.

Image credit: Emily Morter, via the Unsplash Photo Community. 

The post AI Explains AI: Fiddler Develops Model Explainability for Transparency appeared first on The Official NVIDIA Blog.

Hardhats and AI: Startup Navigates 3D Aerial Images for Inspections

Childhood buddies from back in South Africa, Nicholas Pilkington, Jono Millin and Mike Winn went off together to a nearby college, teamed up on a handful of startups and kept a pact: work on drones once a week.

That dedication is paying off. Their drone startup, based in San Francisco, is picking up interest worldwide and has landed $35 million in Series D funding.

It all catalyzed in 2014, when the friends were accepted into the AngelPad accelerator program in Silicon Valley. They founded DroneDeploy there, enabling contractors to capture photos, maps, videos and high-fidelity panoramic images for remote inspections of job sites.

“We had this a-ha moment: Almost any industry can benefit from aerial imagery, so we set out to build the best drone software out there and make it easy for everyone,” said Pilkington, co-founder and CTO at DroneDeploy.

DroneDeploy’s AI software platform — it’s the navigational brains and eyes — is operating in more than 200 countries and handling more than 1 million flights a year.

Nailing Down Applications

DroneDeploy’s software has been adopted in construction, agriculture, forestry, search and rescue, inspection, conservation and mining.

In construction, DroneDeploy is used by one-quarter of the world’s 400 largest building contractors and six of the top 10 oil and gas companies, according to the company.

DroneDeploy was one of three startups that recently presented at an NVIDIA Inception Connect event held by Japanese insurer Sompo Holdings. For good reason: Startups are helping insurance and reinsurance firms become more competitive by analyzing portfolio risks with AI.

The NVIDIA Inception program nurtures startups with access to GPU guidance, Deep Learning Institute courses, networking and marketing opportunities.

Navigating Drone Software

DroneDeploy offers features like fast setup of autonomous flights, photogrammetry to take physical measurements and APIs for drone data.

In addition to supporting industry-leading drones and hardware, DroneDeploy operates an app ecosystem for partners to build apps using its drone data platform. John Deere, for example, offers an app for customers to upload aerial drone maps of their fields to their John Deere account so that they can plan flights based on the field data.

Split-second photogrammetry and 360-degree images provided by DroneDeploy’s algorithms running on NVIDIA GPUs in the cloud help provide pioneering mapping and visibility.

AI on Safety, Cost and Time

Drones used in high places instead of people can aid in safety. The U.S. Occupational Safety and Health Administration last year reported that 22 people were killed in roofing-related accidents in the U.S.

Inspecting roofs and solar panels with drone technology can improve that safety record. It can also save on cost: The traditional alternative to having people on rooftops to perform these inspections is using helicopters.

Customers of the DroneDeploy platform can follow a quickly created map to carry out a sequence of inspections with guidance from cameras fed into image recognition algorithms.

Using drones, customers can speed up inspections by 80 percent, according to the company.  

“In areas like oil, gas and energy, it’s about zero-downtime inspections of facilities for operations and safety, which is a huge value driver for these customers,” said Pilkington.

The post Hardhats and AI: Startup Navigates 3D Aerial Images for Inspections appeared first on The Official NVIDIA Blog.

Sand Safety: Startup’s Lifeguard AI Hits the Beach to Save Lives

A team in Israel is making a splash with AI.

It started when business school buddies Netanel Eliav and Adam Bismut were looking for a problem to solve that could change the world. The problem found them: Bismut visited the Dead Sea after a drowning and noticed the lack of tech for lifeguards, who scanned the area with age-old binoculars.

The two aspiring entrepreneurs — recent MBA graduates of Ben-Gurion University, in the country’s south — decided this was their problem to solve with AI.

“I have two little girls, and as a father, I know the feeling that parents have when their children are near the water,” said Eliav, the company’s CEO.

They founded Sightbit in 2018 with BGU classmates Gadi Kovler and Minna Shezaf to help lifeguards see dangerous conditions and prevent drownings.

The startup is seed funded by Cactus Capital, the venture arm of their alma mater.

Sightbit is now in pilot testing at Palmachim Beach, a popular escape for sunbathers and surfers in the Palmachim Kibbutz area along the Mediterranean Sea, south of Tel Aviv. The sand dune-lined destination, with its inviting, warm aquamarine waters, gets packed with thousands of daily summer visitors.

But it’s also a place known for deadly rip currents.

Danger Detectors

Sightbit has developed image detection to help spot dangers to aid lifeguards in their work. In collaboration with the Israel Nature and Parks Authority, the Beersheba-based startup has installed three cameras that feed data into a single NVIDIA Jetson AGX at the lifeguard towers at Palmachim beach. NVIDIA Metropolis is deployed for video analytics.

The system of danger detectors enables lifeguards to keep tabs on a computer monitor that flags potential safety concerns while they scan the beach.

Sightbit has developed models based on convolutional neural networks and image detection to provide lifeguards views of potential dangers. Kovler, the company’s CTO, has trained the company’s danger detectors on tens of thousands of images, processed with NVIDIA GPUs in the cloud.

Training on the images wasn’t easy with sun glare on the ocean, weather conditions, crowds of people, and people partially submerged in the ocean, said Shezaf, the company’s CMO.

But Sightbit’s deep learning and proprietary algorithms have enabled it to identify children alone as well as clusters of people. This allows its system to flag children who have strayed from the pack.
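Sightbit’s models are proprietary; one minimal way to turn per-person detections into a “strayed child” flag is a nearest-neighbor distance test over ground-plane positions. The coordinates and the 15-unit gap below are illustrative assumptions:

```python
import math

def strayed(positions, child_idx, max_gap=15.0):
    """Flag the detection at child_idx as strayed when no other
    detected person is within max_gap (units depend on the camera's
    image-to-ground mapping)."""
    cx, cy = positions[child_idx]
    others = [p for i, p in enumerate(positions) if i != child_idx]
    if not others:
        return True
    nearest = min(math.hypot(px - cx, py - cy) for px, py in others)
    return nearest > max_gap
```

Run each frame against the detector’s output, a check like this is enough to raise an alert on the lifeguard’s monitor when a child drifts away from the nearest cluster.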

Rip Current Recognition

The system also harnesses optical flow algorithms to detect dangerous rip currents in the ocean, helping lifeguards keep people out of those zones. These algorithms make it possible to identify the speed of every object in an image, using partial differential equations to calculate motion vectors for every pixel in the image.
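Sightbit’s PDE-based formulation isn’t public, but the brightness-constancy idea underlying optical flow can be sketched with a least-squares (Lucas-Kanade-style) solve for a single motion vector over a patch. This is a simpler stand-in for, not a reproduction of, the startup’s algorithm:

```python
import numpy as np

def lucas_kanade_patch(frame0, frame1):
    """Estimate one (vx, vy) motion vector for an image patch by
    least squares on the brightness-constancy equation
    Ix*vx + Iy*vy + It = 0, with gradients from finite differences."""
    Iy, Ix = np.gradient(frame0.astype(float))   # axis 0 = y, axis 1 = x
    It = frame1.astype(float) - frame0.astype(float)
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v  # array([vx, vy])

# Synthetic check: a patch whose content shifts one pixel in +x
# between frames should yield a flow vector of about (1, 0).
X, Y = np.meshgrid(np.arange(8.0), np.arange(8.0))
vx, vy = lucas_kanade_patch(X * Y, (X - 1) * Y)
```

Applied per region of the sea surface, persistent offshore-pointing motion vectors are the signature a rip-current detector would look for.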

Lifeguards can get updates on ocean conditions so when they start work they have a sense of hazards present that day.

“We spoke with many lifeguards. The lifeguard is trying to avoid the next accident. Many people go too deep and get caught in the rip currents,” said Eliav.

Camera feeds at the lifeguard towers, processed on the single compact Jetson AGX Xavier running Metropolis, can offer split-second inference for alerts, tracking, statistics and risk analysis in real time.

The Israel Nature and Parks Authority is planning to have a structure built on the beach to house more cameras for automated safety, according to Sightbit.

COVID-19 Calls 

Palmachim Beach lifeguards have a lot to watch, especially now as people get out of their homes for fresh air after the region begins reopening from COVID-19-related closures.

As part of Sightbit’s beach safety developments, the company had been training its network to spot how far apart people were to help gauge child safety.

This work also directly applies to monitoring social distancing and has attracted the attention of potential customers seeking ways to slow the spread of COVID-19. The Sightbit platform can provide them crowding alerts when a public area is overcrowded and proximity alerts for when individuals are too close to each other, said Shezaf.

The startup has put in extra hours to work with those interested in its tech to help monitor areas for ways to reduce the spread of the pathogen.

“If you want to change the world, you need to do something that is going to affect people immediately without any focus on profit,” said Eliav.


Sightbit is a member of NVIDIA Inception, a virtual accelerator program that helps startups in AI and data science get to market faster.

The post Sand Safety: Startup’s Lifeguard AI Hits the Beach to Save Lives appeared first on The Official NVIDIA Blog.

AI to Hit Mars, Blunt Coronavirus, Play at the London Symphony Orchestra

AI is the rocket fuel that will get us to Mars. It’s the vaccine that will save us on Earth. And it’s the people who aspire to make a dent in the universe.

Our latest “I Am AI” video, unveiled during NVIDIA CEO Jensen Huang’s keynote address at the GPU Technology Conference, pays tribute to the scientists, researchers, artists and many others making historic advances with AI.

To grasp AI’s global impact, consider: the technology is expected to generate $2.9 trillion worth of business value by 2021, according to Gartner.

It’s on course to classify 2 trillion galaxies to understand the universe’s origin, and to zero in on the molecular structure of the drugs needed to treat coronavirus and cancer.

As depicted in the latest video, AI has an artistic side, too. It can paint as well as Bob Ross. And its ability to assist in the creation of original compositions is worthy of the London Symphony Orchestra, which plays the accompanying theme music, a piece that started out written by a recurrent neural network.

AI is also capable of creating text-to-speech synthesis for narrating a short documentary. And that’s just what it did.

These fireworks and more are the story of I Am AI. Sixteen companies and research organizations are featured in the video. The action moves fast, so grab a bowl of popcorn, kick back and enjoy this tour of some of the highlights of AI in 2020.

Reaching Into Outer Space

Understanding how the universe's structure formed and how much matter it contains requires observing and classifying celestial objects such as galaxies. With an estimated 2 trillion galaxies to examine in the observable universe, it's what cosmologists call a "computational grand challenge."

The recent Dark Energy Survey collected data from over 300 million galaxies. To study them with unprecedented precision, the Center for Artificial Intelligence Innovation at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign teamed up with the Argonne Leadership Computing Facility at the U.S. Department of Energy's Argonne National Laboratory.

NCSA tapped the Galaxy Zoo project, a crowdsourced astronomy effort that labeled millions of galaxies observed by the Sloan Digital Sky Survey. Using that data, an AI model with 99.6 percent accuracy can now chew through unlabeled galaxies to ID them and accelerate scientific research.
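The recipe above is classic supervised learning: crowdsourced labels train a model that then predicts labels for unlabeled galaxies. The NCSA model is a deep CNN running on GPUs; as a stand-in, here is a tiny logistic classifier trained on synthetic two-feature "galaxy" data, which follows the same train-on-labeled, predict-on-unlabeled pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for crowdsourced labels: 0 = spiral, 1 = elliptical.
X = np.vstack([rng.normal(-1.0, 0.5, (200, 2)),
               rng.normal(+1.0, 0.5, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

# Plain gradient descent on the logistic loss.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid probabilities
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.3f}")
```

The real system swaps the two hand-made features for learned convolutional features over telescope images, but the training loop is conceptually the same.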

With Mars targeted for human travel, scientists are seeking the safest path. In that effort, the NASA Solar Dynamics Observatory takes images of the sun every 1.3 seconds. And researchers have developed an algorithm that removes errors from the images, which are placed into a growing archive for analysis.

Using such data, NASA is tapping into NVIDIA GPUs to analyze solar surface flows so that it can build better models for predicting the weather in space. NASA also aims to identify origins of energetic particles in Earth’s orbit that could damage interplanetary spacecraft, jeopardizing trips to Mars.

Restoring Voice and Limb

Voiceitt — a Tel Aviv-based startup that’s developed signal processing, speech recognition technologies and deep neural nets — offers a synthesized voice for those whose speech has been distorted. The company’s app converts unintelligible speech into easily understood speech.

The University of North Carolina at Chapel Hill's Neuromuscular Rehabilitation Engineering Laboratory and North Carolina State University's Active Robotic Sensing (ARoS) Laboratory develop experimental robotic limbs.

The two research units have been working on walking environment recognition, aiming to develop environmental adaptive controls for prostheses. They’ve been using CNNs for prediction running on NVIDIA GPUs. And they aren’t alone.

Helping in the Pandemic

Whiteboard Coordinator remotely monitors the temperature of people entering buildings to minimize exposure to COVID-19. The Chicago-based startup screens more than 2,000 people per hour at its checkpoints. Whiteboard Coordinator and NVIDIA bring AI to the edge of healthcare with NVIDIA Clara Guardian, an application framework that simplifies the development and deployment of smart sensors.

Another startup uses AI to inform neurologists about strokes much faster than traditional methods. With the onset of the pandemic, it moved to help combat the new virus with an app that alerts care teams to positive COVID-19 results.

Axial3D is a Belfast, Northern Ireland, startup that enlists AI to accelerate the production time of 3D-printed models for medical images used in planning surgeries. Having redirected its resources at COVID-19, the company is now supplying face shields and is among those building ventilators for the U.K.’s National Health Service. It has also begun 3D printing of swab kits for testing as well as valves for respirators. (Check out their on-demand webinar.)

Autonomizing Contactless Help

KiwiBot, a cheery-eyed food delivery bot from Berkeley, Calif., has added COVID-19 services to its routes. It's autonomously delivering masks, sanitizers and other supplies with its robot-to-human service.

Masterpieces of Art, Compositions and Narration

Researchers from London-based startup Oxia Palus demonstrated in a paper, “Raiders of the Lost Art,” that AI could be used to recreate lost works of art that had been painted over. Beneath Picasso’s 1902 The Crouching Beggar lies a mountainous landscape that art curators believe is of Parc del Laberint d’Horta, near Barcelona.

They also know that Santiago Rusiñol painted Parc del Laberint d’Horta. Using a modified X-ray fluorescence image of The Crouching Beggar and Santiago Rusiñol’s Terraced Garden in Mallorca, the researchers applied neural style transfer, running on NVIDIA GPUs, to reconstruct the lost artwork, creating Rusiñol’s Parc del Laberint d’Horta.
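At the core of neural style transfer is the Gram-matrix style loss: an image is optimized so that the channel-wise correlations of its deep features match those of a style reference. The Oxia Palus pipeline adds X-ray fluorescence preprocessing and runs on deep network features; the sketch below shows only the loss itself, on a random NumPy array standing in for a feature map.

```python
import numpy as np

def gram_matrix(features):
    """features: (channels, height, width) activation map."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)          # channel-wise correlations

def style_loss(generated, style):
    """Mean squared difference between the two Gram matrices."""
    g, s = gram_matrix(generated), gram_matrix(style)
    return float(((g - s) ** 2).mean())

rng = np.random.default_rng(1)
style_feats = rng.normal(size=(8, 16, 16))
print(style_loss(style_feats, style_feats))  # identical features -> 0.0
```

During the actual transfer, this loss is summed over several network layers and minimized by gradient descent on the generated image's pixels.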


For GTC a few years ago, Luxembourg-based AIVA AI composed the start — melodies and accompaniments — of what would become an original classical music piece meriting an orchestra. Since then, we've found it one.

Late last year, the London Symphony Orchestra agreed to play the moving piece, which was arranged for the occasion by musician John Paesano and was recorded at Abbey Road Studios.


NVIDIA alum Helen was our voice-over professional for videos and events for years. When she left the company, we thought about how we might continue the tradition. We turned to what we know: AI. But there weren’t publicly available models up to the task.

A team from NVIDIA’s Applied Deep Learning Research group published the answer to the problem: Flowtron: an Autoregressive Flow-based Generative Network for Text-to-Speech Synthesis. With Helen’s voice licensed, we trained the network on dozens of hours of her recordings.

First, Helen produced multiple takes, guided by our creative director. Then our creative director was able to generate multiple takes from Flowtron and adjust parameters of the model to get the desired outcome. And what you hear is “Helen” speaking in the I Am AI video narration.

The post AI to Hit Mars, Blunt Coronavirus, Play at the London Symphony Orchestra appeared first on The Official NVIDIA Blog.