Like so many software developers, Elias Sorensen has been studying AI. Now he and his 10-member team are teaching it to robots.
When the AI specialists at Mobile Industrial Robots, based in Odense, Denmark, are done, the first graduating class of autonomous machines will be on their way to factories and warehouses, powered by NVIDIA Jetson Xavier NX GPUs.
“The ultimate goal is to make the robots behave in ways humans understand, so it’s easier for humans to work alongside them. And Xavier NX is at the bleeding edge of what we are doing,” said Sorensen, who will provide an online talk about MiR’s work at GTC Digital.
MiR’s low-slung robots carry pallets weighing as much as 2,200 pounds. They sport lidar and proximity sensors, as well as multiple cameras the team is now linking to Jetson Xavier GPUs.
Inferencing Their Way Forward
The new digital brains will act as pilots. They’ll fuse sensor data to let the bots navigate around people, forklifts and other objects, dynamically re-mapping safety zones and changing speeds as needed.
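The idea of dynamically re-mapping safety zones can be pictured with a small sketch. This is not MiR's actual algorithm, just an illustration of the general pattern: fuse obstacle distances from several sensors conservatively, then map the nearest obstacle to a speed limit.

```python
# Hypothetical sketch: scale a robot's speed from fused sensor readings.
# All function names and zone thresholds are invented for illustration.

def fuse_min_distance(lidar_m, camera_m, proximity_m):
    """Take the most conservative (smallest) obstacle distance across sensors."""
    readings = [d for d in (lidar_m, camera_m, proximity_m) if d is not None]
    return min(readings) if readings else float("inf")

def plan_speed(obstacle_m, max_speed=1.5, stop_zone=0.5, slow_zone=2.0):
    """Full speed outside the slow zone, linear ramp inside it, stop in the stop zone."""
    if obstacle_m <= stop_zone:
        return 0.0
    if obstacle_m >= slow_zone:
        return max_speed
    # Linear interpolation between the stop and slow boundaries.
    frac = (obstacle_m - stop_zone) / (slow_zone - stop_zone)
    return max_speed * frac

print(plan_speed(fuse_min_distance(3.0, 2.8, None)))    # clear path -> 1.5
print(plan_speed(fuse_min_distance(1.25, 2.0, 1.25)))   # slowing -> 0.75
print(plan_speed(fuse_min_distance(0.3, None, 0.4)))    # too close -> 0.0
```

A real system would also reshape the zone geometry and account for the robot's own velocity, but the same "nearest obstacle wins" logic applies.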
The smart bots use NVIDIA’s DeepStream and TensorRT software to run AI inference jobs on Xavier NX, based on models trained on NVIDIA GPUs in the AWS cloud.
MiR chose Xavier for its combination of high performance at low power and price, as well as its wealth of software.
“Lowering the cost and power consumption for AI processing was really important for us,” said Sorensen. “We make small, battery-powered robots and our price is a major selling point for us.” He noted that MiR has deployed more than 3,000 robots to date to users such as Ford, Honeywell and Toyota.
The new autonomous models are working prototypes. The team is training their object-detection models in preparation for first pilot tests.
Jetson Nano Powers Remote Vision
It’s MiR’s first big AI product, but not its first ever. Since November, the company has shipped smart, standalone cameras powered by Jetson Nano GPUs.
The Nano-based cameras process video at 15 frames per second to detect objects. They’re networked with each other and other robots to enhance the robots’ vision and help them navigate.
Both the Nano cameras and Xavier-powered robots process all camera data locally, only sending navigation decisions over the network. “That’s a major benefit for such a small, but powerful module because many of our customers are very privacy minded,” Sorensen said.
MiR developed a tool its customers use to train the camera by simply showing it pictures of objects such as robots and forklifts. The ease of customizing the cameras is a big measure of the product’s success, he added.
AI Training with Simulations
The company hopes its smart robots will be equally easy to train for non-technical staff at customer sites.
But here the challenge is greater. Public roads have standard traffic signs, but every factory and warehouse is unique with different floor layouts, signs and types of pallets.
MiR’s AI team aims to create a simulation tool that places robots in a virtual work area that users can customize. Such a simulation could let users who are not AI specialists train their smart robots like they train their smart cameras today.
The journey into the era of autonomous machines is just starting for MiR. Its parent company, Teradyne, announced in February it is investing $36 million to create a hub for developing collaborative robots, aka co-bots, in Odense as part of a partnership with MiR's sister company, Universal Robots.
Market watchers at ABI Research predict the co-bot market could expand to $12 billion by 2030. In 2018, Danish companies including MiR and Universal captured $995 million of that emerging market, according to Damvad, a Danish analyst firm.
With such potential and strong ingredients from companies like NVIDIA, “it’s a great time in the robotics industry,” Sorensen said.
Over the phone, Andrew Bernas leaves the impression he’s a veteran Silicon Valley software engineer focused on worldwide social causes with a lot of heart.
He’s in fact a 16-year-old high school student, and he recently won first place in the AI for Social Good category of the NVIDIA-supported AI at the Edge Challenge.
At Hackster.io — an online community of developers and hobbyists — he and others began competing in October, building AI projects using the NVIDIA Jetson Nano Developer Kit.
An Eagle Scout who leads conservation projects, he wanted to use AI to solve a big problem.
“I got the idea to use Jetson Nano processing power to compute a program to recognize handwritten and printed text to allow those who are visually impaired or disabled to have access to reading,” said Bernas, a junior at Palo Alto High School.
An estimated 285 million people worldwide are visually impaired — with moderate to severe vision loss — and 39 million of them are blind, according to the World Health Organization.
His device, which can be seen in the video below, allows people to place books or handwritten text to be scanned by a camera and converted to voice.
“Part of the inspiration was for creating a solution for my grandmother and other people with vision loss,” said Bernas. “She’s very proud of me.”
DIY Jetson Education
Bernas enjoys do-it-yourself building. His living room stores some of the more than 20 radio-controlled planes and drones he has designed and built. He also competes in his school’s Science Olympiad team in electrical engineering, circuits, aeronautical and physics.
He plans to develop his text-to-voice prototype to include Hindi, Mandarin, Russian and Spanish. Meanwhile, he has his sights on AI for robotics and autonomy as a career path.
“Now that machine learning is so big, I’m planning to major in something engineering-related with programming and machine learning,” he said of his college plans.
The developers behind MixPose, a yoga posture-recognizing application, aim to improve your downward dog and tree pose positions with a little nudge from AI.
MixPose enables yoga teachers to broadcast live-streaming instructional videos that include skeletal lines to help students better understand the angles of postures. It also enables students to capture their skeletal outlines and share them in live class settings for virtual feedback.
“Our goal is to create and enhance the connections between yoga instructors and students, and we believe using a Twitch-like streaming class is an innovative way to accomplish that,” said Peter Ma, 36, a co-founder of MixPose.
MixPose's streaming video platform can broadcast from a Jetson Nano. The live stream content can then be viewed on Android TVs and mobile phones.
On Tuesday, the group was among 10 teams awarded top honors at the AI at the Edge Challenge, launched in October, on Hackster.io. Winners competed for a share of NVIDIA supercomputing prizes, as well as a trip to our Silicon Valley headquarters.
Hackster.io is an online community of developers, engineers and hobbyists who work on hardware projects. To date, it’s seen more than 1.3 million members across 150 countries working on more than 21,000 open source projects and 240 company platforms.
MixPose, based in San Francisco, taps PoseNet pose estimation networks powered by Jetson Nano to do inference on yoga positions for yoga instructors, allowing the teachers and students to engage remotely based on the AI pose estimation. It is developing networks for different yoga poses, utilizing Jetpack SDK, CUDA ToolKit and cuDNN.
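The skeletal overlays MixPose describes come down to geometry on pose-estimation keypoints. The sketch below, with invented coordinates, shows how a joint angle might be computed from three keypoints of the kind PoseNet emits, so an overlay can indicate whether a student's posture matches the instructor's.

```python
# Hypothetical sketch: compute a joint angle from pose keypoints.
# Keypoint coordinates below are invented for illustration.
import math

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by segments b->a and b->c."""
    ang1 = math.atan2(a[1] - b[1], a[0] - b[0])
    ang2 = math.atan2(c[1] - b[1], c[0] - b[0])
    deg = abs(math.degrees(ang1 - ang2))
    return 360 - deg if deg > 180 else deg

# Hip -> knee -> ankle keypoints (normalized image coordinates) for a bent leg:
hip, knee, ankle = (0.50, 0.40), (0.52, 0.60), (0.70, 0.62)
print(round(joint_angle(hip, knee, ankle)))  # -> 102 (degrees at the knee)
```

Comparing such angles between teacher and student frames is one plausible way to drive the virtual feedback the app offers.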
Four Prized Categories
MixPose took first place in the Artificial Intelligence of Things (AIoT) category, one of four project areas in a competition that drew 2,542 registrants from 35 countries and 79 completed submissions, shared with the community along with their code.
MixPose demos its streaming app
The team also landed third place in AIoT for its Jetson Clean Water AI entry, using object detection for water contamination.
“It can determine whether the water is clean for drinking or not,” said 27-year-old MixPose co-founder Sarah Han.
Contest categories also included Autonomous Machines and Robotics, Intelligent Video Analytics and Smart Cities. First, second and third place winners in each took home awards.
This year's event, March 22-26 in San Jose, is no exception, with at least six autonomous machines expected on the show floor. Like C-3PO and BB-8, each one is different.
Among what you’ll see at GTC 2020:
a robotic dog that sniffs out trouble in complex environments such as construction sites
a personal porter that lugs your stuff while it follows your footsteps
a man-sized bot that takes inventory quickly and accurately
a short, squat bot that hauls as much as 2,200 pounds across a warehouse
a delivery robot that navigates sidewalks to bring you dinner
“What I find interesting this year is just how much intelligence is being incorporated into autonomous machines to quickly ingest and act on data while navigating around unstructured environments that sometimes are not safe for humans,” said Amit Goel, senior product manager for autonomous machines at NVIDIA and robot wrangler for GTC 2020.
The ANYmal C from ANYbotics AG (pictured above), based in Zurich, is among the svelte navigators, detecting obstacles and finding its own shortest path forward thanks to its Jetson AGX Xavier GPU. The four-legged bot can slip through passages just 23.6 inches wide and climb stairs as steep as 45 degrees on a factory floor to inspect industrial equipment with its depth, wide-angle and thermal cameras.
The folks behind the Vespa scooter will show Gita, a personal robot that can carry up to 40 pounds of your gear for four hours on a charge. It runs computer vision algorithms on a Jetson TX2 GPU to identify and follow its owner’s legs on any hard surfaces.
Say cheese. Bossa Nova Robotics will show its retail robot that can scan a 40-foot supermarket aisle in 60 seconds, capturing 4,000 images that it turns into inventory reports with help from its NVIDIA Turing architecture RTX GPU. Walmart plans to use the bots in at least 650 of its stores.
Mobile Industrial Robots A/S, based in Odense, Denmark, will give a talk at GTC about how it’s adding AI with Jetson Xavier to its pallet-toting robots to expand their work repertoire. On the show floor, it will demonstrate one of the robots from its MiR family that can carry payloads up to 2,200 pounds while using two 3D cameras and other sensors to navigate safely around people and objects in a warehouse.
From the other side of the globe, ExaWizards Inc. (Tokyo) will show its multimodal AI technology running on robotic arms from Japan’s Denso Robotics. It combines multiple sensors to learn human behaviors and perform jobs such as weighing a set portion of water.
Rounding out the cast, the Serve delivery robot from Postmates will make a return engagement at GTC. It can carry 50 pounds of goods for 30 miles, using a Jetson AGX Xavier and Ouster lidar to navigate sidewalks like a polite pedestrian. In a talk, a Postmates engineer will share lessons learned in its early deployments.
Many of the latest systems reflect the trend toward collaborative robotics that NVIDIA CEO Jensen Huang demonstrated in a keynote in December. He showed ways humans can work with and teach robots directly, thanks to an updated NVIDIA Isaac developer kit that also speeds development by using AI and simulations to train robots, now part of NVIDIA's end-to-end offering in robotics.
Just for fun, GTC also will host races of AI-powered DIY robotic cars, zipping around a track on the show floor at speeds approaching 50 mph. You can sign up here if you want to bring your own Jetson-powered robocar to the event.
We’re saving at least one surprise in robotics for those who attend. To get in on the action, register here for GTC 2020.
Beta testing is a common practice for intangible products like software. Release an application, let customers bang on it, put bug fixes into the next version for download onto devices. Repeat.
For brick-and-mortar products like buildings, beta testing is unusual if not unheard of. But two Salt Lake City entrepreneurs now offer a system that evaluates buildings while in development.
PassiveLogic CEO Troy Harvey says this could solve a lot of problems before construction begins. “If you're an engineer, this is kind of a crazy idea: to go and build the first one without beta testing it, and to just hope it works out,” he said.
Harvey and Jeremy Fillingim in 2014 founded PassiveLogic, an AI platform to engineer and autonomously operate all the Internet of Things components of buildings.
PassiveLogic’s Hive system — the startup calls it “brains for buildings” — is powered by the energy-sipping, AI-capable Jetson Nano module. The system can also be retrofitted into existing structures.
The Hive can make split-second decisions on controlling buildings by merging data from multiple sensors using sensor fusion algorithms. And it enables automated interpretation and responsiveness for dynamic situations such as lights that brighten but can also add heat to a space or automated window louvers that reduce glare but also cool a room.
New Era for IoT: Jetson
PassiveLogic’s software enables designers and architects to digitally map out the components for the system controls architecture. Contractors and architects can then run AI-driven simulations on IoT systems before starting construction. The simulations are run with neural networks to help optimize for areas such as energy efficiency and comfort.
PassiveLogic's Cell module
“With the Jetson Nano, we're getting all this computing power that we can put right at the edge, and so we can do all these things in AI with a real-time system,” said Harvey.
The company is a member of NVIDIA's Inception program, which helps startups scale markets faster with networking opportunities, technical guidance on GPUs and access to training.
“NVIDIA Inception is offering technical guidance on the capabilities and implementation of Jetson as PassiveLogic prepares to fulfill our backlog of customer demand,” said Harvey. “The capabilities of the Jetson chip open up opportunities for our platform.”
Hive: AI Edge Computing
PassiveLogic's Hive controllers can bring AI to ordinary edge devices such as closed-circuit cameras, lighting, and heating and air conditioning systems. This allows image recognition applications for buildings with cameras and smart temperature controls, among other benefits.
“It becomes a control hub for all of those sensors in the building and all of the controllable things,” said Harvey.
Hive can also factor in where people are concentrated in buildings, based on data taken from its networked Swarm devices, which use Bluetooth mesh trilateration to locate building occupants. It can then adjust temperature, lights or other systems for where people are located.
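Trilateration itself is straightforward geometry: with distances to three known beacons, subtracting the circle equations pairwise gives a small linear system. The sketch below, with invented beacon positions, shows the 2D case; it is not PassiveLogic's implementation.

```python
# Hypothetical sketch of 2-D trilateration from three beacon distances.
# Beacon positions and ranges below are invented for illustration.

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Solve for (x, y) given three anchor points and measured distances.

    Subtracting the circle equations pairwise yields two linear
    equations in x and y, solved here by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# An occupant standing at (2, 1) relative to three beacons:
print(trilaterate((0, 0), 5**0.5, (4, 0), 5**0.5, (0, 3), 8**0.5))  # ≈ (2.0, 1.0)
```

Real Bluetooth ranging is noisy, so production systems typically combine many such fixes with filtering rather than trusting any single solution.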
Digital Twin AI Simulations
The company’s Cell modules — hard-wired, software-defined input-output units — are used to bridge all the physical building connections into its Hive AI edge computing systems. As customers connect these building block-like modules together, they’re also laying the software foundation for what this autonomous system looks like.
PassiveLogic enables customers to digitally lay out building controls and set up simulations within its software platform on Hive. Customers can import CAD designs or sketch them out, including all of the features of a building that need to be monitored.
The AI engine understands at a physics level how building components work, and it can run simulations of building systems, taking into account complex interactions and making control decisions to optimize operation. Next, the Hive compares this optimal control path to actual sensor data, applies machine learning, and gets smarter about operating the building over time.
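That simulate-compare-learn loop can be sketched in miniature. The toy below predicts indoor temperature from a made-up physics model, compares the prediction with (fake) sensor readings, and learns a correction term; the model, constants and data are all invented, not PassiveLogic's.

```python
# A toy "digital twin" loop: predict from a physics model, compare with
# sensor data, and learn a correction so the model improves over time.
# Model, constants and readings are invented for illustration.

def predict_temp(outdoor, heater_on, indoor=20.0, loss_coeff=0.1, heater_gain=2.0):
    """One-step physics prediction of indoor temperature (degrees C)."""
    drift = loss_coeff * (outdoor - indoor)      # heat loss to outside
    return indoor + drift + (heater_gain if heater_on else 0.0)

bias = 0.0                                  # learned model correction
learning_rate = 0.5
sensor_readings = [21.6, 21.4, 21.5, 21.5]  # fake measurements

for measured in sensor_readings:
    predicted = predict_temp(outdoor=10.0, heater_on=True) + bias
    error = measured - predicted
    bias += learning_rate * error           # model gets smarter each cycle
```

After a few cycles the learned bias converges toward the gap between the model and reality, which is the essence of comparing an optimal control path against live sensor data.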
Whether it’s an existing building getting an update or a design for a new one, customers can run simulations with Hive to see how they can improve energy consumption and comfort.
“Once you plug it in, you can learn onsite and actually build up a unique local-based training using deep learning and compare it with other buildings,” Harvey said.
What John Madden was to pro football, Neda Cvijetic is to autonomous vehicles. No one’s better at explaining the action, in real time, than Cvijetic.
Cvijetic, senior manager of autonomous vehicles at NVIDIA, drives our NVIDIA DRIVE Labs series of videos and blogs breaking down the science behind autonomous vehicles.
A Serbian-American electrical engineer, Cvijetic seems destined for this role. She literally grew up in the shadow of Nikola Tesla. His statue in Belgrade stood across the street from her childhood home.
On this week’s AI Podcast, Cvijetic spoke to host Rick Merritt about what’s driving autonomous vehicles. She also shared her perspective on how both broad initiatives and day-to-day actions can promote diversity in AI.
Key Points From This Episode:
Autonomous vehicles use three key techniques: perception, localization, and control and planning.
Each self-driving car runs on dozens of deep neural networks, which are each trained on thousands of hours of real-world driving data and on NVIDIA DRIVE Constellation, which provides extensive testing in virtual reality before the car even hits the road.
Autonomous vehicles drive safely because diversity and redundancy are designed into their systems. Multiple cameras with overlapping fields of view, radar, and more provide a wealth of perception data for the highest level of accuracy.
“I want every driver out there to feel that they understand AI, to understand how AI works in self-driving cars, and feel empowered by that understanding” — Neda Cvijetic [1:56]
“The NVIDIA DRIVE simulator seeks to create some of these corner cases that might take years to actually observe” — Neda Cvijetic [10:36]
You Might Also Like
Deep Learning 101: Will Ramey, NVIDIA Senior Manager for GPU Computing
If you’ve ever wanted a guided tour through the history of AI, this is the episode for you. NVIDIA’s Will Ramey covers the big bang of AI and the concepts that are now defining the industry.
How AI Turns Kiddie Cars Into Fast and Frugal Autonomous Racers
Take brains, a few hundred bones and a pink Barbie jeep. What have you got? For inventive hackers, a new sport filled with f-words — fast, furious, frugal. Founder of the Power Racing Series Jim Burke talks about why he’s bringing autonomous vehicles to a racing event built on the backs of $500 kiddie cars.
AutoX’s Professor X on the State of Automotive Autonomy
Jianxiong Xiao, CEO of startup AutoX, is speeding toward fully autonomous vehicles, as defined by the National Highway Traffic Safety Administration. Its levels range from 0, or no automation, to 5, full autonomy; Xiao is pursuing level 4 — a car that can perform all driving functions under certain conditions.
Hassan Murad and Vivek Vyas have developed the world’s largest garbage dataset, dubbed WasteNet, and offer an AI-driven trash-sorting technology.
The Vancouver engineers' startup, Intuitive AI, uses machine learning and computer vision to see what people are holding as they approach trash and recycling bins. Their product visually sorts the item on a display to show users how to separate their waste — and verbally ribs people for misses.
The split-second detection of the item using WasteNet’s nearly 1 million images is made possible by the compact supercomputing of the NVIDIA Jetson TX2 AI module.
Murad and Vyas call their AI recycling platform Oscar, like Sesame Street’s trashcan muppet. “Oscar is a grouchy, trash-talking AI. It rewards people with praise if they recycle right and playfully shouts at them for doing it wrong,” said Murad.
Intuitive AI's launch is timely. In 2018, China banned imports of contaminated plastic, paper and other materials bound for its recycling processors.
Since then, U.S. exports of plastic scraps to China have plummeted nearly 90 percent, according to the Institute of Scrap Recycling Industries. And recycling processors everywhere are scrambling to better sort trash to produce cleaner recyclables.
The startup is also a member of NVIDIA’s Inception program, which helps startups scale markets faster with networking opportunities, technical guidance and access to training.
“NVIDIA really helped us understand which processor to try out and what kind of results to expect and then provided a couple of them for us to test on for free,” said Murad.
The early-stage AI company is also a cohort of Next AI, a Canada-based incubator that guides promising startups. Next AI gives startups access to professors from the University of Toronto, Harvard, MIT and big figures in the tech industry.
In January, NVIDIA and Next AI forged a partnership to jointly support their growing ecosystems of startups, providing AI education, investment, technical guidance and mentorship.
Turning Trash Into Cash
Trash is a surging environmental problem worldwide. And it’s not just the Great Pacific Garbage Patch — the headline-grabbing mass of floating plastic bits that’s twice the size of Texas.
Now that China requires clean recyclables from exporters — with no more than 0.5 percent contamination — nations across the world are facing mounting landfills.
Intuitive AI aims to help cities cope with soaring costs from recycling collection companies, which have limited markets to sell tons of contaminated plastics and other materials.
“The way to make the recycling chain work is by obtaining cleaner sorted materials. And it begins by measurement and education at the source so that waste management companies get cleaner recyclables so that they can sell to China, India, Indonesia or not send it at all because eventually, we could be able to process it locally,” said Murad.
Garbage in, Garbage out
Deploying image recognition to make trash-versus-recycling decisions isn't easy. The founders discovered that objects in people's hands are often 80 percent obscured from view. Also, there are thousands of different objects people might discard. They needed a huge dataset.
“It became quite clear to us we need to build the world’s largest garbage dataset, which we call WasteNet,” said Murad.
From deployments at malls, universities, airports and corporate campuses, Oscar has now demonstrated it can increase recycling by 300 percent.
WasteNet is a proprietary dataset. The founders declined to disclose the details of how they created such a massive dataset.
GPUs Versus CPUs
The startup’s system needs to work fast. After all, who wants to wait by a garbage bin? Initially, the founders used every possible hardware option on the market for image recognition, said Murad, including Raspberry Pi and Intel’s Movidius.
But requiring people to wait up to six seconds — the result of their early hardware experiments — to learn where to toss an item just wasn't an option. Once they moved to NVIDIA GPUs, they got results down to half a second.
“Using Jetson TX2, we are able to run AI on the edge and help people change the world in three seconds,” said Murad.
Florida native Terry Olkin, accustomed to warm weather, had a problem with his new home of Colorado: Snow blocked the driveway.
“I couldn’t get my car out of the driveway to get to work — I thought, ‘Why aren’t there robots yet to do this?’” Olkin said.
The moment led the busy entrepreneur and a colleague, Mike Ott, to start Left Hand Robotics to clear snow with autonomous maintenance machines. Olkin and Ott — CEO and CTO, respectively, of the Longmont, Colo.-based company — also work together in a nonprofit that teaches robotics to school-age children.
Founded in 2016, Left Hand Robotics has been running pilot tests for the past year with customers such as the city of Longmont and Michigan State University.
The company recently scooped up $3.6 million in funding and began shipping its commercial machine, which clears snow and can trim grass. It’s also a member of the NVIDIA Inception program, which helps startups scale faster.
“NVIDIA Inception has allowed us to more easily access resources such as the Jetson and, most importantly, the community and NVIDIA engineers who can quickly answer questions and point us in the right direction when we come across an issue or need guidance,” said Olkin.
Tractor with a Vision
The Left Hand Robotics machine, dubbed RT-1000, has cameras, lidar and radar to help it see.
While the cameras and lidar help the machine detect people and objects for safety, the radar is there to see through snow and spot objects in the robot's way.
Data from the robot’s six cameras, lidar and radar sensors are processed by the compact supercomputing power of the NVIDIA Jetson TX2 module.
The maintenance machines also run GPS capable of RTK, or real-time kinematics, which can provide mapping accurate to within a centimeter.
Using the company's path collection tool — a wheeled device that records coordinates — customers push it along a route to define the robot's job path. That information is compiled into a program and downloaded onto the robot, which navigates along it when it's time to perform assigned tasks.
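The record-then-replay pattern this tool suggests is easy to sketch: log position fixes as the tool is pushed along, thin them out into waypoints, and hand the list to the robot. The function names, spacing threshold and coordinates below are invented for illustration.

```python
# Hypothetical sketch of recording a job path from raw GPS fixes.
# Names, thresholds and coordinates are invented for illustration.

def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def record_path(fixes, min_spacing_m=0.5):
    """Thin raw fixes into waypoints at least min_spacing_m apart."""
    path = [fixes[0]]
    for fix in fixes[1:]:
        if distance(path[-1], fix) >= min_spacing_m:
            path.append(fix)
    return path

# Raw fixes in local metres (RTK accuracy means jitter is only ~1 cm):
raw = [(0.0, 0.0), (0.2, 0.0), (0.6, 0.0), (1.0, 0.0), (1.1, 0.0), (1.6, 0.0)]
print(record_path(raw))  # -> [(0.0, 0.0), (0.6, 0.0), (1.1, 0.0), (1.6, 0.0)]
```

The resulting waypoint list is the kind of compiled "program" the robot could follow at job time, while its sensors handle obstacles along the way.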
Shoveling In Interest
The city of Longmont this year deployed Left Hand Robotics machines in tests for clearing snow from sidewalks and greenways. The robot mowers have also been used there over the summer for mowing parks and softball fields.
Michigan State University has done winter testing on its golf course as well, along with using the machine to mow fields.
In Canada, customers who need to keep pathways cleared of snow are using it as well.
Left Hand Robotics sells the autonomous maintenance machines, and then customers pay an annual software subscription based on the level of support, features and usage.
“We’re just getting started here and getting customers ramping up. We’re the only robot out there that can do these kinds of multiple tasks,” said Olkin.
What began as two classmates getting their hands dirty at a farm in the French Alps has hatched a Silicon Valley farming startup building robots in Detroit and operating in one of the world’s largest agriculture regions.
San Francisco-based FarmWise offers farmers an AI-driven robotic system for more sustainable farming methods, and to address a severe labor shortage. Its machines can remove weeds without pesticides — a holy grail for organic farmers in the Golden State and elsewhere.
FarmWise’s robotic farming machine runs 10 cameras, capturing crops and weeds it passes over to send through image recognition models. The machine sports five NVIDIA DRIVE GPUs to help it navigate and make split-second decisions on weed removal.
The company is operating in two of California’s agriculture belts, Salinas and San Luis Obispo, where farms have its machines deployed in the field.
“We don’t use chemicals at all — we use blades and automate the process,” said Sebastien Boyer, the company’s CEO. “We’re working with a few of the largest vegetable growers in California.”
FarmWise recently landed $14.5 million in Series A funding to further develop its machines.
Robotics for Weed Removal
It wasn't an easy start. Boyer and Thomas Palomares, computer science classmates from France's École Polytechnique, decided to work on Palomares's family farm in the Alps to try big data on farming. Their initial goal was to help farmers use information to work more sustainably while also improving crop yields. It didn't pan out as planned.
The two discovered farms lacked the equipment to support sustainable methods, so they shelved their idea and instead packed their bags for grad school in the U.S. After that, the friends came back to their concept but with a twist: using AI-driven robotic machinery.
“We decided to move our focus to robotics to build new types of agriculture machines that are better-suited to take advantage of data,” Boyer said. “Weed removal is our first application.”
In April, FarmWise began manufacturing its farm machines. It tapped custom automotive parts maker Roush, which serves Detroit and has built self-driving vehicle prototypes for the likes of Google.
FarmWise for Labor Shortage
Farm labor is in short supply. A California Farm Bureau Federation survey of more than 1,000 farmers found that 56 percent were unable to hire sufficient labor to tend their crops in the past five years.
Of those surveyed, 37 percent said they had to change their cultivation practices, including by reducing weeding and pruning. More than half were already using labor-saving technologies. Not to mention that weeding is often back-breaking work.
FarmWise helps fill this void. The company's automated weeders can do the labor of 10 workers. And they can work 24/7 autonomously.
“We’re filling the gaps of missing people, and those tasks that aren’t getting done — and we’re offering an alternative to chemical herbicides,” said Boyer, adding that weed management is crucial for crop yields.
When farms can’t get back-breaking weeding covered, they turn to herbicides as an alternative. FarmWise can help to reduce that. Plus, there’s a financial incentive: medium-size California farms can expect to save as much as $500,000 a year on pesticides and other costs by using FarmWise, he said.
Training Autonomous Farming Machines
To help farmers, FarmWise's AI recognizes the difference between weeds and plants, and its machines can make 25 cuts per second to remove weeds. Its NVIDIA GPU-powered image recognition networks recognize 10 different crops and can spot the typical weeds of California and Arizona.
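The per-frame decision such a machine implies can be sketched simply: classify each detection as crop or weed and only command a blade cut for weeds. The classifier here is a stand-in label lookup, not FarmWise's model, and all names and coordinates are invented.

```python
# Illustrative sketch of a crop-vs-weed cut decision per camera frame.
# The "classifier" is a label lookup standing in for a real model.

KNOWN_CROPS = {"lettuce", "broccoli", "celery"}

def plan_cuts(detections):
    """Return blade target positions for detections labeled as weeds."""
    cuts = []
    for label, x_mm, y_mm in detections:
        if label not in KNOWN_CROPS:      # anything not a known crop is a weed
            cuts.append((x_mm, y_mm))
    return cuts

frame = [("lettuce", 120, 40), ("pigweed", 155, 42), ("purslane", 90, 61)]
print(plan_cuts(frame))  # -> [(155, 42), (90, 61)]
```

At 25 cuts per second, the real constraint is latency: the detection, decision and actuation all have to finish within a few tens of milliseconds per frame.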
“As we operate on fields, we continuously capture data, label that data and use it to improve our algorithms,” said Boyer.
FarmWise's weeding machines are geofenced by uploading maps of the fields. The onboard cameras can also act as a safety override to stop the machines.
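A geofence check against an uploaded field map is a standard point-in-polygon test. The sketch below uses ray casting with an invented rectangular field; it illustrates the technique, not FarmWise's code.

```python
# Hedged sketch of a geofence check via ray-casting point-in-polygon.
# The field boundary below is invented for illustration.

def inside_fence(x, y, polygon):
    """Count boundary crossings of a ray going right from (x, y)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge spans the ray's y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

field = [(0, 0), (100, 0), (100, 60), (0, 60)]  # metres
print(inside_fence(50, 30, field))    # True: keep working
print(inside_fence(120, 30, field))   # False: stop the machine
```

A real field boundary would be an arbitrary polygon of surveyed GPS points, but the same crossing count decides whether the machine has strayed.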
The 30-person company attracted recruits to its sustainable farming mission from SpaceX, Tesla, Cruise and Facebook as well as experts in farm machine design and operations, said Boyer.
Developing machines for farms, said Boyer, requires spending time in the field to understand the needs of farmers and translating their ideas into technology.
“We’re a group of engineers with very close ties to the farming community,” he said.
CES 2020 will be bursting with vivid visual entertainment and smart everything, powered, in part, by NVIDIA and its partners.
Attendees packing the annual techfest will experience the latest additions to GeForce, the world’s most powerful PC gaming platform and the first to deliver ray tracing. They’ll see powerful displays and laptops, ultra-realistic game titles and capabilities offering new levels of game play.
NVIDIA’s Vegas headliners include three firsts — a 360Hz esports display, the first 14-inch laptops and all-in-one PCs delivering the graphics realism of ray tracing.
The same GPU technologies powering next-gen gaming are also spawning an age of autonomous machines. CES 2020 will be alive with robots such as Toyota’s new T-HR3, thanks to advances in the NVIDIA Isaac platform. And the newly minted DRIVE AGX Orin promises 7x performance gains for future autonomous vehicles.
Together, they’re knitting together an AI-powered Internet of Things from the cloud to the network’s edge that will touch everything from entertainment to healthcare and transportation.
A 2020 Vision for Play
NVIDIA's new G-SYNC display for esports gamers delivers a breakthrough at 360Hz, projecting a vision of game play that's more vivid than ever. NVIDIA and ASUS this week unveiled the ASUS ROG 360, the world's fastest display, powered by NVIDIA G-SYNC. Its 360Hz refresh rate in a 24.5-inch form factor lets esports and competitive gamers keep every pixel of action in their field of view during the heat of competition.
Keeping the picture crisp, Acer, Asus and LG are expanding support for G-SYNC. First introduced in 2013, G-SYNC is best known for its innovative Variable Refresh Rate technology that eliminates screen tearing by synchronizing the refresh rate of the display with the GPU’s frame rate.
In 2019, LG became the first TV manufacturer to offer NVIDIA G-SYNC compatibility, bringing the must-have gaming feature to select OLED TV models. Thirteen new models for 2020 will provide a flawless gaming experience on the big screen, without screen tearing or other distracting visual artifacts.
In addition, Acer and ASUS are showcasing two upcoming G-SYNC ULTIMATE displays. They feature the latest full-array direct backlight technology with 1,400 nits brightness, significantly increasing display contrast for darker blacks and more vibrant colors. Gamers will enjoy the fast response time and ultra-low lag of these displays running at up to 144Hz at 4K.
Game On, RTX On
The best gaming monitors need awesome content to shine. So today, Bethesda turned on ray tracing in Wolfenstein: Youngblood, bringing a new level of realism to the popular title. An update that sports ray-tracing reflections and DLSS is available as a free downloadable patch starting today for gamers with a GeForce RTX GPU.
Bethesda joins the world’s leading publishers who are embracing ray tracing as the next big thing in their top franchises. Call of Duty: Modern Warfare and Control — IGN’s Game of the Year — both feature incredible real-time ray-tracing effects.
VR is donning new headsets, games and innovations for CES 2020.
NVIDIA’s new rendering technique, Variable Rate Super Sampling, in the latest Game Ready Driver improves image quality in VR games. It uses Variable Rate Shading, part of the NVIDIA Turing architecture, to dynamically apply up to 8x supersampling to the center, or foveal region, of the VR headset, enhancing image quality where it matters most while delivering stellar performance.
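The foveated idea behind this technique can be illustrated with a toy sketch: spend more samples per pixel near the center of the frame and fewer toward the periphery. The region radii and sample counts below are purely illustrative, not NVIDIA's actual values.

```python
# Toy sketch of foveated supersampling: higher sample rates in the
# central (foveal) region, lower toward the periphery. Thresholds
# and rates are illustrative assumptions, not driver behavior.
def samples_per_pixel(x, y, width, height, max_samples=8):
    cx, cy = width / 2, height / 2
    # Normalized distance from the frame center (0 = center, ~1.4 = corner).
    dist = (((x - cx) / cx) ** 2 + ((y - cy) / cy) ** 2) ** 0.5
    if dist < 0.25:       # foveal region: full supersampling
        return max_samples
    if dist < 0.6:        # transition band: half rate
        return max_samples // 2
    return 1              # periphery: native sampling

# The center pixel of a 1920x1080 frame gets the full 8x rate,
# while a corner pixel renders at native 1x.
```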
In addition, Game Ready Drivers now make it possible to set the max frame rate a 3D application or game can render to save power and reduce system latency. They enable the best gaming experience by keeping a G-SYNC display within the range where the technology shines.
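A frame-rate cap works by sleeping out the remainder of each frame's time budget so the loop never runs faster than the target. Here's a minimal generic sketch of that idea; it is not the driver's implementation, and the `render` placeholder stands in for real frame work.

```python
import time

def run_frames(n_frames, max_fps, render=lambda: None):
    """Render n_frames, sleeping so the loop never exceeds max_fps."""
    budget = 1.0 / max_fps              # time allowed per frame
    start = time.perf_counter()
    for _ in range(n_frames):
        frame_start = time.perf_counter()
        render()                        # placeholder for real frame work
        elapsed = time.perf_counter() - frame_start
        if elapsed < budget:            # frame finished early: wait it out
            time.sleep(budget - elapsed)
    return time.perf_counter() - start

# 30 frames capped at 60 FPS should take roughly half a second.
```

Capping below the display's maximum refresh also keeps a G-SYNC panel inside its variable-refresh range, which is why the driver setting pairs naturally with the displays above.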
HP launched the ENVY 32 All-in-One with GeForce RTX graphics, configurable with up to GeForce RTX 2080. Acer has three new systems from its ConceptD line. And ten other system builders across North America, Europe and China all now have RTX Studio offerings.
These RTX Studio systems adhere to stringent hardware and software requirements to empower creativity at the speed of imagination. They also ship with NVIDIA’s Studio Drivers, providing the ultimate performance and stability for creative applications.
Robots Ring in the New Year
The GPU technology that powers games is also driving AI, accelerating the development of a host of autonomous vehicles and robots at CES 2020.
Toyota’s new T-HR3 humanoid partner robot will have a Vegas debut at its booth (LVCC, North Hall, Booth 6919). A human operator wearing a VR headset controls the system using augmented video and perception data fed from an NVIDIA Jetson AGX Xavier computer in the robot.
Attendees can try out the autonomous wheelchair from WHILL, winner of a CES 2019 Innovation Award, powered by a Jetson TX2. Sunflower Labs will demo its new home security robot, also packing a Jetson TX2. Other NVIDIA-powered systems at CES include a delivery robot from Postmates and an inspection snake robot from Sarcos.
The Isaac software development kit marks a milestone in establishing a unified AI robotic development platform we call NVIDIA Isaac, an open environment for mapping, model training, simulation and computing. It includes a variety of camera-based perception deep neural networks for functions such as object detection, 3D pose estimation and 2D human pose estimation.
This release also introduces Isaac Sim, which lets developers train on simulated robots and deploy what they learn to real ones, promising to greatly accelerate robotic development, especially for environments such as large logistics operations. Isaac Sim will add early-access support for manipulation later this month.
Driving an Era of Autonomous Vehicles
This marks a new decade of automotive performance, defined by AI compute rather than horsepower. It will spread autonomous capabilities across today’s $10 trillion transportation industry. The transformation will require dramatically more compute performance to handle exponential growth in AI models being developed to ensure autonomous vehicles are both functional and safe.
NVIDIA DRIVE AV, an end-to-end, software-defined platform for AVs, delivers just that. It includes a development flow, data center infrastructure, an in-vehicle computer and the highest quality pre-trained AI models that can be adapted by OEMs.
Last month, NVIDIA announced the latest piece of that platform, DRIVE AGX Orin, a highly advanced software-defined platform for autonomous vehicles.
The platform is powered by a new system-on-a-chip called Orin, which achieves 200 TOPS — nearly 7x the performance of the previous generation SoC Xavier. It’s designed to handle the large number of applications and DNNs that run simultaneously in autonomous vehicles, while achieving systematic safety standards such as ISO 26262 ASIL-D.
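The "nearly 7x" figure checks out against the previous generation's published peak: the Xavier SoC delivers 30 TOPS, so the ratio works out as follows.

```python
# Orin's claimed speedup over Xavier, both figures in peak TOPS.
xavier_tops = 30    # Xavier SoC peak throughput
orin_tops = 200     # Orin SoC peak throughput
speedup = orin_tops / xavier_tops
print(f"{speedup:.1f}x")   # 6.7x, i.e. "nearly 7x"
```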
NVIDIA’s AI ecosystem of innovators is spread across the CES 2020 show floor, including more than 100 members of Inception, a company program that nurtures cutting-edge startups revolutionizing industries with AI.
Among established leaders, Mercedes-Benz, an NVIDIA DRIVE customer, will open the show Monday night with a keynote on the future of intelligent transportation. And GeForce partners will crank up the gaming excitement in demos across the event.