Go Robot, Go: AI Team at MiR Helps Factory Robots Find Their Way

Like so many software developers, Elias Sorensen has been studying AI. Now he and his 10-member team are teaching it to robots.

When the AI specialists at Mobile Industrial Robots, based in Odense, Denmark, are done, the first graduating class of autonomous machines will be on their way to factories and warehouses, powered by NVIDIA Jetson Xavier NX GPUs.

“The ultimate goal is to make the robots behave in ways humans understand, so it’s easier for humans to work alongside them. And Xavier NX is at the bleeding edge of what we are doing,” said Sorensen, who will provide an online talk about MiR’s work at GTC Digital.

MiR’s low-slung robots carry pallets weighing as much as 2,200 pounds. They sport lidar and proximity sensors, as well as multiple cameras the team is now linking to Jetson Xavier GPUs.

Inferencing Their Way Forward

The new digital brains will act as pilots. They’ll fuse sensor data to let the bots navigate around people, forklifts and other objects, dynamically re-mapping safety zones and changing speeds as needed.

The smart bots use NVIDIA’s DeepStream and TensorRT software to run AI inference jobs on Xavier NX, based on models trained on NVIDIA GPUs in the AWS cloud.
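
As a rough illustration of that deployment pattern (not MiR’s production code), a pre-trained detector exported as a TensorRT engine can be served on a Jetson module along the lines below; the engine filename, input resolution and output handling are placeholders.

```python
# Hedged sketch: running a pre-trained detector as a TensorRT engine on a
# Jetson module. "detector.engine" and the 960x544 input are placeholders,
# not details of MiR's actual models.
import numpy as np
import pycuda.autoinit           # creates a CUDA context on the Jetson GPU
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("detector.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

frame = np.zeros((1, 3, 544, 960), dtype=np.float32)   # one preprocessed camera frame
output = np.empty(trt.volume(engine.get_binding_shape(1)), dtype=np.float32)

d_input = cuda.mem_alloc(frame.nbytes)
d_output = cuda.mem_alloc(output.nbytes)

cuda.memcpy_htod(d_input, frame)                    # copy the frame to GPU memory
context.execute_v2([int(d_input), int(d_output)])   # run inference on the GPU
cuda.memcpy_dtoh(output, d_output)                  # copy raw detections back

# `output` would then be decoded into boxes around people, forklifts and
# other obstacles before being fused with lidar and proximity sensor data.
```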

MiR chose Xavier for its combination of high performance at low power and price, as well as its wealth of software.

“Lowering the cost and power consumption for AI processing was really important for us,” said Sorensen. “We make small, battery-powered robots and our price is a major selling point for us.” He noted that MiR has deployed more than 3,000 robots to date to users such as Ford, Honeywell and Toyota.

The new autonomous models are working prototypes. The team is training their object-detection models in preparation for first pilot tests.

Jetson Nano Powers Remote Vision

It’s MiR’s first big AI product, but not its first ever. Since November, the company has shipped smart, standalone cameras powered by Jetson Nano GPUs.

The Nano-based cameras process video at 15 frames per second to detect objects. They’re networked with each other and other robots to enhance the robots’ vision and help them navigate.

Both the Nano cameras and Xavier-powered robots process all camera data locally, only sending navigation decisions over the network. “That’s a major benefit for such a small, but powerful module because many of our customers are very privacy minded,” Sorensen said.
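
The local-only pattern Sorensen describes can be sketched roughly as follows; the detection helper, endpoint address and message format are hypothetical stand-ins, not MiR’s actual interfaces.

```python
# Rough illustration (assumptions, not MiR code): raw images stay on the
# device; only a compact navigation decision goes over the network.
import json
import socket

import cv2

def detect_obstacles(frame):
    """Placeholder for the on-device object detector."""
    return []   # e.g. [{"label": "forklift", "distance_m": 2.4}]

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
FLEET_ENDPOINT = ("192.168.1.10", 9000)   # hypothetical fleet controller address

cap = cv2.VideoCapture(0)                 # on-board camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    obstacles = detect_obstacles(frame)         # pixels never leave the device
    decision = {"slow_down": bool(obstacles)}   # only this small message is shared
    sock.sendto(json.dumps(decision).encode(), FLEET_ENDPOINT)
```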

MiR developed a tool its customers use to train the camera by simply showing it pictures of objects such as robots and forklifts. The ease of customizing the cameras is a big measure of the product’s success, he added.

AI Training with Simulations

The company hopes its smart robots will be equally easy to train for non-technical staff at customer sites.

But here the challenge is greater. Public roads have standard traffic signs, but every factory and warehouse is unique with different floor layouts, signs and types of pallets.

MiR’s AI team aims to create a simulation tool that places robots in a virtual work area that users can customize. Such a simulation could let users who are not AI specialists train their smart robots like they train their smart cameras today.

The company is currently investigating NVIDIA’s Isaac platform, which supports training through simulations.

MiR is outfitting its family of industrial robots for AI.

The journey into the era of autonomous machines is just starting for MiR. Its parent company, Teradyne, announced in February it is investing $36 million to create a hub for developing collaborative robots, aka co-bots, in Odense as part of a partnership with MiR’s sister company, Universal Robots.

Market watchers at ABI Research predict the co-bot market could expand to $12 billion by 2030. In 2018, Danish companies including MiR and Universal captured $995 million of that emerging market, according to Damvad, a Danish analyst firm.

With such potential and strong ingredients from companies like NVIDIA, “it’s a great time in the robotics industry,” Sorensen said.


Silicon Valley High Schooler Takes Top Award in Hackster.io Jetson Nano Competition

Over the phone, Andrew Bernas leaves the impression he’s a veteran Silicon Valley software engineer focused on worldwide social causes with a lot of heart.

He’s in fact a 16-year-old high school student, and he recently won first place in the AI for Social Good category of the NVIDIA-supported AI at the Edge Challenge.

At Hackster.io — an online community of developers and hobbyists — he and others began competing in October, building AI projects using the NVIDIA Jetson Nano Developer Kit.

An Eagle Scout who leads conservation projects, he wanted to use AI to solve a big problem.

“I got the idea to use Jetson Nano processing power to compute a program to recognize handwritten and printed text to allow those who are visually impaired or disabled to have access to reading,” said Bernas, a junior at Palo Alto High School.

He was among 10 winners in the competition, which drew more than 2,500 registrants from 35 countries for a share of NVIDIA supercomputing prizes. Bernas used NVIDIA’s Getting Started With Jetson Nano Deep Learning Institute course to begin his project.

AI on Reading

Bernas’s winning entry, Reading Eye for the Blind with NVIDIA Jetson Nano, is a text-to-voice AI app and a device prototype to aid the visually impaired.

An estimated 285 million people worldwide are visually impaired — those with moderate to severe vision loss — and 39 million of them are blind, according to the World Health Organization.

His device, which can be seen in the video below, allows people to place books or handwritten text to be scanned by a camera and converted to voice.

“Part of the inspiration was for creating a solution for my grandmother and other people with vision loss,” said Bernas. “She’s very proud of me.”

DIY Jetson Education

Bernas enjoys do-it-yourself building. His living room stores some of the more than 20 radio-controlled planes and drones he has designed and built. He also competes on his school’s Science Olympiad team in electrical engineering, circuits, aeronautics and physics.

Between high school courses, online programs and forums, Bernas has learned to use HTML, CSS, Python, C, C++, Java and JavaScript. For him, developing models using the NVIDIA Jetson Nano Developer Kit was a logical next step in his education for DIY building.

He plans to develop his text-to-voice prototype to include Hindi, Mandarin, Russian and Spanish. Meanwhile, he has his sights on AI for robotics and autonomy as a career path.

“Now that machine learning is so big, I’m planning to major in something engineering-related with programming and machine learning,” he said of his college plans.

 

NVIDIA Jetson Nano makes adding AI easier and more accessible to makers, self-taught developers and embedded tech enthusiasts. Learn more about Jetson Nano and view more community projects to get started.


Namaste to AI: Yoga App Among 10 Winners of Hackster.io Jetson Competition

The developers behind MixPose, a yoga posture-recognizing application, aim to improve your downward dog and tree pose positions with a little nudge from AI.

MixPose enables yoga teachers to broadcast live-streaming instructional videos that include skeletal lines to help students better understand the angles of postures. It also enables students to capture their skeletal outlines and share them in live class settings for virtual feedback.

“Our goal is to create and enhance the connections between yoga instructors and students, and we believe using a Twitch-like streaming class is an innovative way to accomplish that,” said Peter Ma, 36, a co-founder of MixPose.

MixPose’s streaming video platform broadcasts with Jetson Nano. The live-stream content can then be viewed on Android TV and mobile phones.

On Tuesday, the group was among 10 teams awarded top honors at the AI at the Edge Challenge, launched in October, on Hackster.io. Winners competed for a share of NVIDIA supercomputing prizes, as well as a trip to our Silicon Valley headquarters.

Hackster.io is an online community of developers, engineers and hobbyists who work on hardware projects. To date, it’s seen more than 1.3 million members across 150 countries working on more than 21,000 open source projects and 240 company platforms.

MixPose, based in San Francisco, taps PoseNet pose-estimation networks powered by Jetson Nano to infer yoga positions, allowing teachers and students to engage remotely based on the AI pose estimation. It is developing networks for different yoga poses, using the JetPack SDK, CUDA Toolkit and cuDNN.
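
As a simplified illustration of the skeletal overlay (my sketch, not MixPose’s source code), a pose estimator returns 2D keypoints for each frame and the app draws lines between joint pairs before handing the frame to the streamer.

```python
# Sketch only: drawing skeletal lines from pose-estimation keypoints.
# `estimate_pose` stands in for a PoseNet-style network running on Jetson Nano.
import cv2

SKELETON = [("left_shoulder", "left_elbow"), ("left_elbow", "left_wrist"),
            ("right_shoulder", "right_elbow"), ("right_elbow", "right_wrist"),
            ("left_hip", "left_knee"), ("right_hip", "right_knee")]

def estimate_pose(frame):
    """Placeholder: would return {joint_name: (x, y)} pixel coordinates."""
    return {}

def draw_skeleton(frame, keypoints):
    for a, b in SKELETON:
        if a in keypoints and b in keypoints:
            cv2.line(frame, keypoints[a], keypoints[b], (0, 255, 0), 2)
    return frame

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = draw_skeleton(frame, estimate_pose(frame))
    # the annotated frame would then go to the live-stream encoder
```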

Four Prized Categories

MixPose took first place in the Artificial Intelligence of Things (AIoT) category, one of four project areas in a competition that drew 2,542 registrants from 35 countries and 79 completed submissions, each shared with the community along with its code.

MixPose demos its streaming app

The team also landed third place in AIoT for its Jetson Clean Water AI entry, which uses object detection to spot water contamination.

“It can determine whether the water is clean for drinking or not,” said 27-year-old MixPose co-founder Sarah Han.

Contest categories also included Autonomous Machines and Robotics, Intelligent Video Analytics and Smart Cities. First, second and third place winners in each took home awards.

RNNs for Reading

NVIDIA gave a single award in the AI for Social Good category, which Palo Alto High School junior Andrew Bernas took home for his Reading Eye for the Blind with NVIDIA Jetson Nano. It’s a text-to-voice app and device for the visually impaired that uses CNNs and RNNs.

“Part of the inspiration was creating a solution for my grandmother and other people with vision loss to be able to read,” said Bernas.

Andrew Bernas’ text-to-speech device for the visually impaired

AI Whacks Weeds

First-place winners also included a team from India behind the weed removal robot Nindamani, in the Autonomous Machines and Robotics category.

Nindamani’s AI-driven weed removal robot

Traffic Gets Moving

And a duo working on adaptive traffic controls took a top award for their networks used to help improve traffic flows, in the Intelligent Video Analytics and Smart Cities category.

Chathuranga Liyanage and Sandali Jayaweera work on AI-driven visual traffic aids for drivers.

 

NVIDIA Jetson Nano makes adding AI easier and more accessible to makers, self-taught developers and embedded tech enthusiasts. Learn more about Jetson Nano and view more projects to get started.


Meet Six Smart Robots at GTC 2020

The GPU Technology Conference is like a new Star Wars movie. There are always cool new robots scurrying about.

This year’s event in San Jose, March 22-26, is no exception, with at least six autonomous machines expected on the show floor. Like C-3PO and BB-8, each one is different.

Among what you’ll see at GTC 2020:

  • a robotic dog that sniffs out trouble in complex environments such as construction sites
  • a personal porter that lugs your stuff while it follows your footsteps
  • a man-sized bot that takes inventory quickly and accurately
  • a short, squat bot that hauls as much as 2,200 pounds across a warehouse
  • a delivery robot that navigates sidewalks to bring you dinner
  • a robotic arm that learns human behaviors and performs jobs such as weighing a set portion of water

“What I find interesting this year is just how much intelligence is being incorporated into autonomous machines to quickly ingest and act on data while navigating around unstructured environments that sometimes are not safe for humans,” said Amit Goel, senior product manager for autonomous machines at NVIDIA and robot wrangler for GTC 2020.

The ANYmal C from ANYbotics AG (pictured above), based in Zurich, is among the svelte navigators, detecting obstacles and finding its own shortest path forward thanks to its Jetson AGX Xavier GPU. The four-legged bot can slip through passages just 23.6 inches wide and climb stairs as steep as 45 degrees on a factory floor to inspect industrial equipment with its depth, wide-angle and thermal cameras.

Gita robot
The Gita personal robot will demo hauling your stuff at GTC 2020.

The folks behind the Vespa scooter will show Gita, a personal robot that can carry up to 40 pounds of your gear for four hours on a charge. It runs computer vision algorithms on a Jetson TX2 GPU to identify and follow its owner’s legs on any hard surface.

Say cheese. Bossa Nova Robotics will show its retail robot that can scan a 40-foot supermarket aisle in 60 seconds, capturing 4,000 images that it turns into inventory reports with help from its NVIDIA Turing architecture RTX GPU. Walmart plans to use the bots in at least 650 of its stores.

Mobile Industrial Robots A/S, based in Odense, Denmark, will give a talk at GTC about how it’s adding AI with Jetson Xavier to its pallet-toting robots to expand their work repertoire. On the show floor, it will demonstrate one of the robots from its MiR family that can carry payloads up to 2,200 pounds while using two 3D cameras and other sensors to navigate safely around people and objects in a warehouse.

From the other side of the globe, ExaWizards Inc. (Tokyo) will show its multimodal AI technology running on robotic arms from Japan’s Denso Robotics. It combines multiple sensors to learn human behaviors and perform jobs such as weighing a set portion of water.

Bossa Nova robot
Walmart will use the Bossa Nova robot to help automate inventory taking in at least 650 of its stores.

Rounding out the cast, the Serve delivery robot from Postmates will make a return engagement at GTC. It can carry 50 pounds of goods for 30 miles, using a Jetson AGX Xavier and Ouster lidar to navigate sidewalks like a polite pedestrian. In a talk, a Postmates engineer will share lessons learned in its early deployments.

Many of the latest systems reflect the trend toward collaborative robotics that NVIDIA CEO Jensen Huang demonstrated in a keynote in December. He showed ways humans can work with and teach robots directly, thanks to an updated NVIDIA Isaac developer kit that also speeds development by using AI and simulations to train robots, now part of NVIDIA’s end-to-end offering in robotics.

Just for fun, GTC also will host races of AI-powered DIY robotic cars, zipping around a track on the show floor at speeds approaching 50 mph. You can sign up here if you want to bring your own Jetson-powered robocar to the event.

We’re saving at least one surprise in robotics for those who attend. To get in on the action, register here for GTC 2020.


This Building Is in Beta: Startup’s AI Can Design and Control Buildings

Beta testing is a common practice for intangible products like software. Release an application, let customers bang on it, put bug fixes into the next version for download onto devices. Repeat.

For brick-and-mortar products like buildings, beta testing is unusual if not unheard of. But two Salt Lake City entrepreneurs now offer a system that evaluates buildings while in development.

PassiveLogic CEO Troy Harvey says this could solve a lot of problems before construction begins. “If you’re an engineer, it’s kind of a crazy idea to go and build the first one without beta testing it, and to just hope it works out,” he said.

Hive controller

In 2014, Harvey and Jeremy Fillingim founded PassiveLogic, whose AI platform engineers and autonomously operates all the Internet of Things components of buildings.

PassiveLogic’s Hive system — the startup calls it “brains for buildings” — is powered by the energy-sipping, AI-capable Jetson Nano module. The system can also be retrofitted into existing structures.

The Hive can make split-second decisions on controlling buildings by merging data from multiple sensors using sensor fusion algorithms. And it enables automated interpretation of, and response to, dynamic situations, such as lights that brighten a space but also add heat, or automated window louvers that reduce glare but also cool a room.
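
As a toy illustration of that trade-off (my own sketch, not PassiveLogic’s algorithm), a controller might fuse a light-level reading and a temperature reading into a single louver command:

```python
# Toy sketch, not PassiveLogic code: fuse light and temperature readings
# into one louver command, trading glare reduction against free solar heat.
def louver_angle(lux, room_temp_c, setpoint_c=22.0):
    glare_term = min(lux / 50_000.0, 1.0)                    # close more as glare rises
    heat_term = max((setpoint_c - room_temp_c) / 5.0, 0.0)   # open to admit heat when cold
    closed_fraction = max(min(glare_term - heat_term, 1.0), 0.0)
    return round(90 * closed_fraction)                       # 0 = fully open, 90 = closed

print(louver_angle(lux=40_000, room_temp_c=20.5))   # bright but chilly: partly closed
```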

New Era for IoT: Jetson

PassiveLogic’s software enables designers and architects to digitally map out the components for the system controls architecture. Contractors and architects can then run AI-driven simulations on IoT systems before starting construction. The simulations are run with neural networks to help optimize for areas such as energy efficiency and comfort.

PassiveLogic’s Swarm sensor

In addition to the Hive controller for edge computing, the system uses the startup’s half-dollar-size Swarm room sensors and compact Cell modules to connect into building components for hard-wired control.

PassiveLogic’s Cell module

“With the Jetson Nano, we’re getting all this computing power that we can put right at the edge, and so we can do all these things in AI with a real-time system,” said Harvey.

PassiveLogic’s pioneering application in building AI, edge computing and IoT comes as retailers, manufacturers, municipalities and scores of others are embracing NVIDIA GPU-driven edge computing for autonomy.

The company is a member of NVIDIA’s Inception program, which helps startups scale markets faster with networking opportunities, technical guidance on GPUs and access to training.

“NVIDIA Inception is offering technical guidance on the capabilities and implementation of Jetson as PassiveLogic prepares to fulfill our backlog of customer demand,” said Harvey. “The capabilities of the Jetson chip open up opportunities for our platform.”

Hive: AI Edge Computing 

PassiveLogic’s Hive controllers can bring AI to ordinary edge devices such as closed-circuit cameras, lighting, and heating and air conditioning systems. This allows image recognition applications for buildings with cameras and smart temperature controls, among other benefits.

“It becomes a control hub for all of those sensors in the building and all of the controllable things,” said Harvey.

Hive can also factor in where people are concentrated in a building, based on data from its networked Swarm devices, which use Bluetooth mesh trilateration to locate building occupants. It can then adjust temperature, lights or other systems for where people are located.
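
Trilateration itself reduces to simple geometry: given estimated distances from an occupant to three sensors at known positions, the intersection of the three circles gives a location. Below is a small two-dimensional sketch with made-up room coordinates (not Swarm firmware).

```python
# Geometry sketch with hypothetical coordinates, not Swarm firmware:
# locate an occupant from distances to three fixed sensors.
import numpy as np

def trilaterate(p1, p2, p3, r1, r2, r3):
    # Subtracting circle equations linearizes the problem into a 2x2 system.
    x1, y1 = p1
    x2, y2 = p2
    x3, y3 = p3
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]], dtype=float)
    b = np.array([r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2], dtype=float)
    return np.linalg.solve(A, b)

# Three sensors at known positions (meters) and estimated ranges to a person.
print(trilaterate((0, 0), (6, 0), (0, 4), r1=3.6, r2=4.2, r3=3.2))
```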

Digital Twin AI Simulations

The company’s Cell modules — hard-wired, software-defined input-output units — are used to bridge all the physical building connections into its Hive AI edge computing systems. As customers connect these building block-like modules together, they’re also laying the software foundation for what this autonomous system looks like.

PassiveLogic enables customers to digitally lay out building controls and set up simulations within its software platform on Hive. Customers can import CAD designs or sketch them out, including all of the features of a building that need to be monitored.

The AI engine understands, at a physics level, how building components work, and it can run simulations of building systems, taking complex interactions into account and making control decisions to optimize operation. Next, the Hive compares this optimal control path to actual sensor data, applies machine learning, and gets smarter about operating the building over time.

Whether it’s an existing building getting an update or a design for a new one, customers can run simulations with Hive to see how they can improve energy consumption and comfort.

“Once you plug it in, you can learn onsite and actually build up a unique local-based training using deep learning and compare it with other buildings,” Harvey said.


Trash-Talking AI Platform Schools You on Recycling

Get ready for trash-talking garbage cans.

Hassan Murad and Vivek Vyas have developed the world’s largest garbage dataset, dubbed WasteNet, and offer an AI-driven trash-sorting technology.

The Vancouver engineers’ startup, Intuitive AI, uses machine learning and computer vision to see what people are holding as they approach trash and recycling bins. Their product visually sorts the item on a display to nudge users on how to separate waste — and verbally ribs people for misses.

The split-second detection of the item using WasteNet’s nearly 1 million images is made possible by the compact supercomputing of the NVIDIA Jetson TX2 AI module.
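
A stripped-down version of that flow might look like the sketch below; the classifier, class names and bin mapping are illustrative assumptions, not Intuitive AI’s WasteNet.

```python
# Illustrative sketch, not Intuitive AI's code: classify the item a person
# is holding and map it to a bin. The classes and mapping are made up.
import cv2

BIN_FOR = {"plastic_bottle": "recycling", "food_scraps": "compost",
           "coffee_cup": "landfill", "chip_bag": "landfill"}

def classify(frame):
    """Placeholder for a WasteNet-style classifier running on Jetson TX2."""
    return "plastic_bottle"

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    item = classify(frame)
    print(f"{item.replace('_', ' ')} goes in the {BIN_FOR[item]} bin")
```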

Murad and Vyas call their AI recycling platform Oscar, like Sesame Street’s trashcan Muppet. “Oscar is a grouchy, trash-talking AI. It rewards people with praise if they recycle right and playfully shouts at them for doing it wrong,” said Murad.

Intuitive AI’s launch is timely. In 2018, China banned imports of contaminated plastic, paper and other materials destined for recycling.

Since then, U.S. exports of plastic scraps to China have plummeted nearly 90 percent, according to the Institute of Scrap Recycling Industries. And recycling processors everywhere are scrambling to better sort trash to produce cleaner recyclables.

The startup is also a member of NVIDIA’s Inception program, which helps startups scale markets faster with networking opportunities, technical guidance and access to training.

“NVIDIA really helped us understand which processor to try out and what kind of results to expect and then provided a couple of them for us to test on for free,” said Murad.

The early-stage AI company is also part of a cohort at Next AI, a Canada-based incubator that guides promising startups. Next AI gives startups access to professors from the University of Toronto, Harvard and MIT, as well as big figures in the tech industry.

In January, NVIDIA and Next AI forged a partnership to jointly support their growing ecosystems of startups, providing AI education, investment, technical guidance and mentorship.

Turning Trash Into Cash

Trash is a surging environmental problem worldwide. And it’s not just the Great Pacific Garbage Patch — the headline-grabbing mass of floating plastic bits that’s twice the size of Texas.

Now that China requires clean recyclables from exporters — with no more than 0.5 percent contamination — nations across the world are facing mounting landfills.

Intuitive AI aims to help cities cope with soaring costs from recycling collection companies, which have limited markets to sell tons of contaminated plastics and other materials.

“The way to make the recycling chain work is by obtaining cleaner sorted materials. And it begins by measurement and education at the source so that waste management companies get cleaner recyclables so that they can sell to China, India, Indonesia or not send it at all because eventually, we could be able to process it locally,” said Murad.

Garbage In, Garbage Out

Deploying image recognition to make trash-versus-recycling decisions isn’t easy. The founders discovered that objects in people’s hands are often 80 percent obscured from view. Also, there are thousands of different objects people might discard. They needed a huge dataset.

“It became quite clear to us we need to build the world’s largest garbage dataset, which we call WasteNet,” said Murad.

From deployments at malls, universities, airports and corporate campuses, Oscar has now demonstrated it can increase recycling by 300 percent.

WasteNet is a proprietary dataset. The founders declined to disclose the details of how they created such a massive dataset.

GPUs Versus CPUs

The startup’s system needs to work fast. After all, who wants to wait by a garbage bin? Initially, the founders used every possible hardware option on the market for image recognition, said Murad, including Raspberry Pi and Intel’s Movidius.

But requiring that people wait up to six seconds — the result of their early hardware experiments — to learn where to toss an item just wasn’t an option. Once they moved to NVIDIA GPUs, they were able to get results down to half a second.

“Using Jetson TX2, we are able to run AI on the edge and help people change the world in three seconds,” said Murad.


NVIDIA Brings the Future into Focus at CES 2020

CES 2020 will be bursting with vivid visual entertainment and smart everything, powered, in part, by NVIDIA and its partners.

Attendees packing the annual techfest will experience the latest additions to GeForce, the world’s most powerful PC gaming platform and the first to deliver ray tracing. They’ll see powerful displays and laptops, ultra-realistic game titles and capabilities offering new levels of game play.

NVIDIA’s Vegas headliners include three firsts — a 360Hz esports display, plus the first 14-inch laptops and the first all-in-one PCs delivering the graphics realism of ray tracing.

The same GPU technologies powering next-gen gaming are also spawning an age of autonomous machines. CES 2020 will be alive with robots such as Toyota’s new T-HR3, thanks to advances in the NVIDIA Isaac platform. And the newly minted DRIVE AGX Orin promises 7x performance gains for future autonomous vehicles.

Together, they’re weaving an AI-powered Internet of Things from the cloud to the network’s edge that will touch everything from entertainment to healthcare and transportation.

A 2020 Vision for Play

NVIDIA’s new G-SYNC display for esports gamers delivers a breakthrough at 360Hz, projecting a vision of game play that’s more vivid than ever. NVIDIA and ASUS this week unveiled the ASUS ROG 360, the world’s fastest display, powered by NVIDIA G-SYNC. Its 360Hz refresh rate in a 24.5-inch form factor lets esports and competitive gamers keep every pixel of action in their field of view during the heat of competition.

The 24.5-inch ASUS ROG Swift sports a 360Hz refresh rate.

Keeping the picture crisp, Acer, Asus and LG are expanding support for G-SYNC. First introduced in 2013, G-SYNC is best known for its innovative Variable Refresh Rate technology that eliminates screen tearing by synchronizing the refresh rate of the display with the GPU’s frame rate.

In 2019, LG became the first TV manufacturer to offer NVIDIA G-SYNC compatibility, bringing the must-have gaming feature to select OLED TV models. Thirteen new models for 2020 will provide a flawless gaming experience on the big screen, without screen tearing or other distracting visual artifacts.

In addition, Acer and Asus are showcasing two upcoming G-SYNC ULTIMATE displays. They feature the latest full-array direct backlight technology with 1,400 nits brightness, significantly increasing display contrast for darker blacks and more vibrant colors. Gamers will enjoy the fast response time and ultra-low lag of these displays running at up to 144Hz at 4K.

Game On, RTX On

The best gaming monitors need awesome content to shine. So today, Bethesda turned on ray tracing in Wolfenstein: Youngblood, bringing a new level of realism to the popular title. An update that sports ray-tracing reflections and DLSS is available as a free downloadable patch starting today for gamers with a GeForce RTX GPU.

Bethesda joins the world’s leading publishers who are embracing ray tracing as the next big thing in their top franchises. Call of Duty Modern Warfare and Control — IGN’s Game of the Year — both feature incredible real-time ray-tracing effects.

VR is donning new headsets, games and innovations for CES 2020.

NVIDIA’s new rendering technique, Variable Rate Super Sampling, in the latest Game Ready Driver improves image quality in VR games. It uses Variable Rate Shading, part of the NVIDIA Turing architecture, to dynamically apply up to 8x supersampling to the center, or foveal region, of the VR headset, enhancing image quality where it matters most while delivering stellar performance.

In addition, Game Ready Drivers now make it possible to set the max frame rate a 3D application or game can render to save power and reduce system latency. They enable the best gaming experience by keeping a G-SYNC display within the range where the technology shines.

Creators’ Visions Coming into Focus

A total of 14 hardware OEMs introduced new RTX Studio systems at CES 2020. Combined with NVIDIA Studio Drivers, they’re powering more than 55 creative and design apps with RTX-accelerated ray tracing and AI.

HP launched the ENVY 32 All-in-One with GeForce RTX graphics, configurable with up to GeForce RTX 2080. Acer has three new systems from its ConceptD line. And ten other system builders across North America, Europe and China all now have RTX Studio offerings.

These RTX Studio systems adhere to stringent hardware and software requirements to empower creativity at the speed of imagination. They also ship with NVIDIA’s Studio Drivers, providing the ultimate performance and stability for creative applications.

Robots Ring in the New Year

The GPU technology that powers games is also driving AI, accelerating the development of a host of autonomous vehicles and robots at CES 2020.

Toyota’s new T-HR3 humanoid partner robot will have a Vegas debut at its booth (LVCC, North Hall, Booth 6919). A human operator wearing a VR headset controls the system using augmented video and perception data fed from an NVIDIA Jetson AGX Xavier computer in the robot.

Toyota’s T-HR3 makes its Vegas debut at CES 2020.

Attendees can try out the autonomous wheelchair from WHILL, which won a CES 2019 Innovation Award and is powered by a Jetson TX2. Sunflower Labs will demo its new home security robot, also packing a Jetson TX2. Other NVIDIA-powered systems at CES include a delivery robot from Postmates and an inspection snake robot from Sarcos.

The Isaac software development kit marks a milestone in establishing a unified AI robotic development platform we call NVIDIA Isaac, an open environment for mapping, model training, simulation and computing. It includes a variety of camera-based perception deep neural networks for functions such as object detection, 3D pose estimation and 2D human pose estimation.

This release also introduces Isaac Sim, which lets developers train on simulated robots and deploy their lessons to real ones, promising to greatly accelerate robotic development especially for environments such as large logistics operations. Isaac Simulation will add early-access availability for manipulation later this month.

Driving an Era of Autonomous Vehicles

This marks a new decade of automotive performance, defined by AI compute rather than horsepower. It will spread autonomous capabilities across today’s $10 trillion transportation industry. The transformation will require dramatically more compute performance to handle exponential growth in AI models being developed to ensure autonomous vehicles are both functional and safe.

NVIDIA DRIVE AV, an end-to-end, software-defined platform for AVs, delivers just that. It includes a development flow, data center infrastructure, an in-vehicle computer and the highest quality pre-trained AI models that can be adapted by OEMs.

Last month, NVIDIA announced the latest piece of that platform, DRIVE AGX Orin, a highly advanced software-defined platform for autonomous vehicles.

The platform is powered by a new system-on-a-chip called Orin, which achieves 200 TOPS — nearly 7x the performance of the previous generation SoC Xavier. It’s designed to handle the large number of applications and DNNs that run simultaneously in autonomous vehicles, while achieving systematic safety standards such as ISO 26262 ASIL-D.

NVIDIA is now providing access to its pre-trained DNNs and cutting-edge training processes on the NGC container registry. With industry-leading networks and advanced learning techniques such as active learning, transfer learning and federated learning, developers can turbocharge development of custom applications.

Working Together

NVIDIA’s AI ecosystem of innovators is spread across the CES 2020 show floor, including more than 100 members of Inception, a company program that nurtures cutting-edge startups that are revolutionizing industries with AI.

Among established leaders, Mercedes-Benz, an NVIDIA DRIVE customer, will open the show Monday night with a keynote on the future of intelligent transportation. And GeForce partners will crank up the gaming excitement in demos across the event.


Blue Moon Over Dijon: French Hobbyist Taps GPU for Stellar Camera

By day, Alain Paillou is the head of water quality for the Bourgogne region of France. But when the stars come out, he indulges his other passions.

Paillou takes exquisitely crisp pictures of the moon, stars and planets — a hobby that combines his lifelong love of astronomy and technology.

Earlier this year, he chronicled on an NVIDIA forum his work building what he calls SkyNano, a GPU-powered camera to take detailed images of the night sky using NVIDIA’s Jetson Nano.

“I’ve been interested in astronomy from about eight or 10 years old, but I had to quit my studies for more than 30 years because of my job as an aerospace software engineer,” said Paillou in an interview from his home in Dijon.

Paillou went back to school in his early 30s to get a degree and eventually a job as a hydrogeologist. “I came back to astronomy after my career change 20 years ago when I lived in Paris, where I started taking photographs of the moon, Jupiter and Saturn,” he said.

“I really love technology and astronomy needs technical competence,” he said. “It lets me return to some of the skills of my first job — developing software to get the best results from my equipment — and it’s very interesting to me.”

Seeing Minerals on the Moon

Paillou loves to take color-enhanced pictures of the moon that show the diversity of its blue titanium and orange iron-oxide minerals. And he delights in capturing star-rich pictures of the night sky. Both require significant real-time filters, best run on a GPU.

Around his Dijon home, as in many places, “the sky is really bad with light pollution from cities that make images blurry,” he said. “I can see 10-12 stars with my eyes, but with my system I can see thousands of stars.”

Paillou in his home astronomy lab in Dijon.

“If you want to retrieve something beautiful, you need to apply real-time filtering with an A/V compensation system. I built my own system because I could not find anything I could buy that matched what I wanted,” Paillou said.

Building the SkyNano

His first prototype mounted a ZWO ASI178MC camera using a Sony IMX178 color sensor on a platform with a gyro/compass and a two-axis mount controlled by stepper motors. Initially he used a Raspberry Pi 3 B+ to run Python programs that controlled the mount and camera.

The board lacked the muscle to drive the real-time filters. After some more experiments, he asked NVIDIA for help in his first post on the Jetson Nano community projects forum on June 21. By July 5, he had a Jetson Nano in hand and started loading OpenCV filters on it using Python.

By the end of July, he had taught himself PyCUDA and posted significant results with it. He released his routines on GitHub and reported he was ready to start taking pictures.
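
A PyCUDA filter in the spirit of what he describes (here, a simple per-pixel gain and gamma adjustment on a monochrome frame, with made-up parameters rather than his actual routines) looks roughly like this.

```python
# Illustrative PyCUDA kernel, not Paillou's code: per-pixel gain and gamma
# on a monochrome frame normalized to 0..1. Parameters are made up.
import numpy as np
import pycuda.autoinit
import pycuda.driver as cuda
from pycuda.compiler import SourceModule

mod = SourceModule("""
__global__ void gain_gamma(float *img, int n, float gain, float gamma)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float v = img[i] * gain;
        img[i] = powf(fminf(v, 1.0f), gamma);
    }
}
""")
gain_gamma = mod.get_function("gain_gamma")

frame = np.random.rand(1080 * 1920).astype(np.float32)   # placeholder camera frame
d_frame = cuda.mem_alloc(frame.nbytes)
cuda.memcpy_htod(d_frame, frame)

gain_gamma(d_frame, np.int32(frame.size), np.float32(1.8), np.float32(0.7),
           block=(256, 1, 1), grid=((frame.size + 255) // 256, 1))

cuda.memcpy_dtoh(frame, d_frame)   # enhanced frame, ready to display or stack
```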

On Aug. 2, he posted his camera’s first digitally enhanced picture of the Copernicus crater on the moon as well as a YouTube video showing a Jetson Nano-enhanced night sky. By October, he posted stunning color-enhanced pictures of the moon (see above), impressive night-vision capabilities and a feature for tracking satellites.

Paillou’s project became the most popular thread on the NVIDIA Jetson Projects forum, with more than 3,100 views to date. Along the way, he gave a handful of others tips for their own AI projects, many of which are available here.

Exploring Horizons in Space and Software

“Twenty years ago, computers were not powerful enough to do this work, but today a little computer like the Jetson Nano makes it really interesting and it’s not expensive,” said Paillou, whose laptop connected to the system also uses an NVIDIA GPU.

In fact, the $99 Jetson Nano is currently marked down to $89 in a holiday special on NVIDIA’s website. Hobbyists who want to use Jetson Nano for neural networking can pair the starter kit with a free AI for Beginners course from our Deep Learning Institute.

Paillou sees plenty of headroom for his project. He hopes to rewrite his Python code in C++ for further performance speed-ups, get a better camera, and further study the possibilities for using AI.

With a little help from friends in America, the sky’s the limit.

“I was not sure I would have the time to learn CUDA – at 52, I am not so young – but it turned out to be very powerful and not so complicated,” he said.

Follow Paillou’s work and many others contributed by fellow developers on the Jetson Community Projects page.


Paillou’s SkyNano (lower left) and SkyPC waiting for the dark.

 


Robotics Ramp-Up: NVIDIA Sets Milestone in Delivering Unified Platform for Building Autonomous Machines

NVIDIA has released updated capabilities for robotics AI perception and simulation, with a new version of the NVIDIA Isaac software development kit.

Announced by company founder and CEO Jensen Huang at NVIDIA’s latest GPU Technology Conference, the SDK achieves an important milestone in establishing a unified robotic development platform — enabling AI, simulation and manipulation capabilities.

It includes the Isaac Robotics Engine (which provides the application framework), Isaac GEMs (pre-built deep neural networks models, algorithms, libraries, drivers and APIs), reference apps for indoor logistics, as well as the first release of Isaac Sim (offering navigation capabilities).

As a result, the new Isaac SDK can greatly accelerate developing and testing robots for researchers, developers, startups and manufacturers. It enables the AI-powered perception and training of robots in simulation — allowing them to be tested and validated in a range of environments and situations.

And in doing so, it saves costs.

Start with AI-Based Perception

Every autonomous machine starts with perception.

To jumpstart AI robotic development, the new Isaac SDK includes a variety of camera-based perception deep neural networks. Among them:

  • Object detection — recognizes objects for navigation, interaction or manipulation
  • Free space segmentation — detects and segments the external world, such as determining where a sidewalk is and where a robot is allowed to travel
  • 3D pose estimation — understands an object’s position and orientation, enabling such tasks as a robotic arm picking up an object
  • 2D human pose estimation — applies pose estimation to humans, which is important for robots that interact with people, such as delivery bots, and for cobots, which are specifically designed to work together with humans

The SDK’s object detection has also been updated with the ResNet deep neural network, which can be trained using NVIDIA’s Transfer Learning Toolkit. This makes it easier to add new objects for detection and train new models that can get up and running with high accuracy levels.

Introducing Isaac Sim

The new release introduces an important capability — using Isaac Sim to train a robot and deploy the resulting software into a real robot that operates in the real world. This promises to greatly accelerate robotic development, enabling training with synthetic data.

Simulation enables testing in so-called corner cases — that is, under difficult, unusual circumstances — to further sharpen training. Feeding these results into the training pipeline enables neural networks to improve accuracy based on both real and simulated data.
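
In training-pipeline terms, that usually means real and synthetic samples end up in one dataset. Here is a generic sketch of the idea (PyTorch, purely for illustration, not part of the Isaac SDK):

```python
# Generic illustration, not Isaac SDK code: blend simulated and real
# samples into one training set so a network learns from both domains.
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

real_imgs = torch.rand(200, 3, 224, 224)    # placeholder real camera frames
real_lbls = torch.randint(0, 2, (200,))
sim_imgs = torch.rand(2000, 3, 224, 224)    # placeholder frames rendered in simulation
sim_lbls = torch.randint(0, 2, (2000,))

mixed = ConcatDataset([TensorDataset(real_imgs, real_lbls),
                       TensorDataset(sim_imgs, sim_lbls)])
loader = DataLoader(mixed, batch_size=32, shuffle=True)

for images, labels in loader:
    pass   # a perception network would be trained here on the mixed batches
```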

Multi-Robot Sim Is Here

The new SDK provides multi-robot simulation, as well. This allows developers to put multiple robots into a simulation environment for testing, so they learn to work in relation to each other. Individual robots can run independent versions of Isaac’s navigation software stack while moving within a shared virtual environment.

Thus, manufacturers seeking to run multiple robots in large logistics operations can, for example, test their interactions and debug problems before deployment into the real world.

Isaac Integrates with DeepStream 

The new SDK also integrates support for NVIDIA DeepStream software, which is widely used for video analytics. Video streams can be processed with DeepStream on NVIDIA GPUs for AI at the edge, supporting robotic applications.

Developers can now build a wide variety of robots that require analysis of camera video feeds, whether for onboard applications or remote locations.

Programming with Isaac SDK 

Finally, for robot developers who have developed their own code, the new SDK is designed to integrate that work, with the addition of a new API based on the C programming language.

This enables developers to connect their own software stacks to the Isaac SDK and minimize programming language conversions — giving users access to Isaac’s features through the C API.

The inclusion of C-API access also allows the use of the Isaac SDK in other programming languages.

The NVIDIA Isaac SDK 2019.03 download is now available.

 

Top image: NVIDIA’s Carter robot in the Isaac Sim environment (left) and in the real environment (right).


Buzzworthy AI: Startup’s Robo-Hives Counter Bee Population Declines

Honeybee colonies worldwide are under siege by parasites, but they now have a white knight: a band of Israeli entrepreneurs bearing AI.

Beewise, a startup based in a small community in northern Israel on the border with Lebanon, is using AI to monitor honeybee colonies. It’s secured seed funding of more than $3 million and launched its robo-hive, which sports image recognition to support bee populations.

In the U.S., honeybee colonies have collapsed by 40 percent in the past year, according to a recent report. The culprit is widely viewed to be varroa mites, which feed off the liver-like organs of honeybees and larvae, causing weakness as well as greater susceptibility to diseases and viruses.

Farmers everywhere count on honeybees for pollination of fruits and vegetables, and many now have to rent colonies from beekeepers to support their crops. Without bees to pollinate them, plants would have a difficult time reproducing and bearing fruit for people to eat.

A cottage industry of small private companies and researchers alike is developing image recognition for early detection of the varroa mite so that beekeepers can act before it’s too late for colonies.

“We’re trying to work on the colony loss — I call it ‘eyes on the hives, 24/7,’” said Saar Safra, CEO and co-founder of Beewise.

Traditional Colony Work

Managing commercial hives is labor-intensive for beekeepers, who manually pull frames (see image below), or sections of the honeycombs, from beehives and visually inspect them.

This time-consuming work can span as many as 1,000 beehives under management by a single professional beekeeper. That means a beehive might not get inspected for several weeks as it waits in line for the busy beekeeper to come along.

A few weeks of an undetected varroa mite infestation can have disastrous results for bee colonies. Computer vision with AI provides a faster way to keep on top of problems.

By replacing that traditional manual process with image recognition and robotics, keepers can recognize and treat the problem in real time, said Safra.

Beewise has developed a proprietary robotics system that can remotely treat infestations.

“When you take AI and apply it to traditional industries, the level of social impact is so much bigger than when you keep it enclosed in high tech — NVIDIA GPUs are basically doing a lot of that work,” he said.

Robo Beehive AI 

Beewise trained its neural networks on thousands of images of bees. Its convolutional neural networks handle the image classification needed to identify bees with mites in the autonomous hives now in deployment.

Once image classification has identified bees infested with mites, a recurrent neural network decides on the best course of action. That could mean the robot automatically administering pesticides or quarantining the beehive frame from the others.

Beewise has made this possible with its autonomous beehives that rely on multiple cameras. Images from these prototype hives are fed into the compact supercomputing of NVIDIA Jetson for real-time processing on its deep learning models.
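
Conceptually, the loop pairs a per-frame mite classifier with a simple treatment decision. The sketch below is one reading of that pipeline, with placeholder functions and thresholds, not Beewise’s code.

```python
# Conceptual sketch of the described hive pipeline, not Beewise code:
# score camera frames for varroa mites, then pick a treatment action.
import cv2

def mite_probability(frame):
    """Placeholder for the CNN that flags bees carrying varroa mites."""
    return 0.0

def choose_action(history):
    """Placeholder decision logic acting on recent detections."""
    recent = history[-10:]
    if max(recent) > 0.8:
        return "quarantine_frame"   # isolate the comb frame from the others
    if sum(recent) / len(recent) > 0.3:
        return "treat_frame"        # have the robot apply treatment
    return "keep_watching"

history = []
cap = cv2.VideoCapture(0)           # one of the hive's internal cameras
while True:
    ok, frame = cap.read()
    if not ok:
        break
    history.append(mite_probability(frame))
    action = choose_action(history)
```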

“It’s a whole AI-based control system — our AI detects and identifies the varroa mite in real time and sterilizes it. Clean healthy colonies operate completely different than infested ones,” said Safra.

 
