Path Math: How AI Can Find a Way Around Pathologist Shortage

Ever since a Dutch cloth merchant accidentally discovered bacteria in 1676, microscopes have been a critical tool for medicine. Today’s microscopes are 800,000 times more powerful than the human eye, but they still need a person to scrutinize what’s under the lens.

That person is usually a pathologist — and that’s a problem. Worldwide, there are too few of these doctors who interpret lab tests to diagnose, monitor and treat disease.

Now SigTuple, a member of our Inception startup incubator program, is testing an AI microscope that could help address the pathologist shortage. The GPU-powered device automatically scans and analyzes blood smears and other biological samples to detect problems.

Global workforce capacity in pathology and laboratory medicine. Image reprinted from The Lancet, Access to pathology and laboratory medicine services: a crucial gap. Copyright (2018), with permission from Elsevier.

One in a Million

The dearth of pathologists is a particularly acute problem in the poorest countries, where patients lacking a proper diagnosis are often given inappropriate treatments, according to studies published this month in The Lancet medical journal. In sub-Saharan Africa, for example, there is a single pathologist for every million people, the journal reported.

But the problem isn’t confined to poor countries. In China, there’s one pathologist for every 130,000 people, The Lancet reported. That compares with 5.7 per 100,000 people in the U.S., according to the most recent figures available. And in the U.S., studies predict the number of pathologists will shrink to 3.7 per 100,000 people by 2030.

In India, the ratio is now one pathologist per 65,000 people — a total of 20,000 pathologists available to serve a nation of 1.3 billion people, said Tathagato Rai Dastidar, co-founder and chief technology officer of Bangalore-based SigTuple.

“There is a human cost here. In many places, where there is no pathologist, a half-trained technician will write out a report and cases will go undetected until it’s too late,” Dastidar said.

SigTuple’s automated microscope costs a fraction of what existing devices do, making it affordable for developing countries where pathologists are few. Image courtesy of SigTuple.

Low Cost, High Performance Microscope

SigTuple’s device isn’t the first automated microscope. Instruments known as digital slide scanners automatically convert glass slides to digital images and interpret the results. But SigTuple’s microscope sells for a fraction of the price of digital slide scanners, making it affordable for most labs, including those in the developing world.

The company’s AI microscope works by scanning slides under its lens and then using GPU-accelerated deep learning to analyze the digital images either on SigTuple’s AI platform in the cloud or on the microscope itself. It uses different deep learning models to analyze blood, urine and semen.

The microscope performs functions like identifying cells, classifying them into categories and subcategories, and calculating the numbers of different cell types.

For a blood smear, for example, Shonit — that’s Sanskrit for blood — identifies red and white blood cells and platelets, pinpoints their locations and calculates ratios of different types of white blood cells (commonly known as differential count). It also computes 3D information about cells from their 2D images using machine learning techniques.
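
To make the differential count concrete, here is a minimal Python sketch of the arithmetic: given per-cell labels from a classifier, it computes each white blood cell type’s share of the total. The class names and label format are illustrative assumptions, not SigTuple’s actual interfaces.

```python
from collections import Counter

# Hypothetical illustration of a differential count. The label names below
# are assumptions for this sketch, not SigTuple's actual output format.
WBC_CLASSES = {"neutrophil", "lymphocyte", "monocyte", "eosinophil", "basophil"}

def differential_count(cell_labels):
    """Return each white-cell type's percentage of the total white cell count."""
    wbc = [label for label in cell_labels if label in WBC_CLASSES]
    counts = Counter(wbc)
    total = len(wbc)
    return {cls: 100.0 * n / total for cls, n in counts.items()} if total else {}

# Example: labels as they might come out of a per-cell classifier.
labels = (["neutrophil"] * 60 + ["lymphocyte"] * 30 + ["monocyte"] * 6 +
          ["eosinophil"] * 3 + ["basophil"] * 1 + ["rbc"] * 5000 + ["platelet"] * 300)
print(differential_count(labels))
# {'neutrophil': 60.0, 'lymphocyte': 30.0, 'monocyte': 6.0, 'eosinophil': 3.0, 'basophil': 1.0}
```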

In studies SigTuple conducted with some of India’s leading labs, Shonit’s accuracy matched that of other automated analyzers. It also successfully identified rare varieties of cells that both pathologists and automated tools usually miss.

Expert Review in the Cloud

In addition to providing a low-cost method for interpreting slides, Dastidar sees SigTuple’s AI platform as an ideal tool for providing expert review of tests when no expert is available. As well as automating analysis, it stores data in the cloud so any pathologist anywhere can interpret test results.

The company’s cloud platform also makes it far easier for pathologists to collaborate on difficult cases.

“Before, that would have meant shipping the slide from one lab to another,” Dastidar said.

SigTuple next plans a formal trial of Shonit and is beginning to roll it out commercially.

For more information about SigTuple and Shonit, watch Dastidar’s GTC talk or read SigTuple’s recent paper, Analyzing Microscopic Images of Peripheral Blood Smear Using Deep Learning.


DRIVE Xavier, World’s First Single-Chip Self-Driving Car Processor, Gets Approval from Top Safety Experts

Automotive safety isn’t a box you check. It’s not a feature. Safety is the whole point of autonomous vehicles. And it starts with a new class of computer, a new type of software and a new breed of chips.

Safety is designed into the NVIDIA DRIVE computer for autonomous vehicles from the ground up. Experts architect safety technology into every aspect of our computing system, from the hardware to the software stack. Tools and methods are developed to create software that performs as intended, reliably and with backups. Stringent engineering processes ensure no corners are cut.

“Safety-first” computer design is equal parts expertise, architecture, design, tools, methods and best practices. Safety needs to be everywhere — permeating our engineering culture.

Top Experts Agree – Xavier Is Architected for Safety

We didn’t stop there. We invited the world’s top automotive safety and reliability company, TÜV SÜD, to perform a safety concept assessment of our new NVIDIA Xavier system-on-chip (SoC). The 150-year-old German firm’s 24,000 employees assess compliance to national and international standards for safety, durability and quality in cars, as well as for factories, buildings, bridges and other infrastructure.

“NVIDIA Xavier is one of the most complex processors we have evaluated,” said Axel Köhnen, Xavier lead assessor at TÜV SÜD RAIL. “Our in-depth technical assessment confirms the Xavier SoC architecture is suitable for use in autonomous driving applications and highlights NVIDIA’s commitment to enable safe autonomous driving.”

Feeds and Speeds Built Around a Single Need: Safety

Let’s walk through what that means.

As the world’s first autonomous driving processor, Xavier is the most complex SoC ever created. Its 9 billion transistors enable Xavier to process vast amounts of data. Its GMSL (gigabit multimedia serial link) high-speed IO connects Xavier to the largest array of lidar, radar and camera sensors of any chip ever built.

Inside the SoC, six types of processors — ISP (image signal processor), VPU (video processing unit), PVA (programmable vision accelerator), DLA (deep learning accelerator), CUDA GPU, and CPU — process nearly 40 trillion operations per second, 30 trillion for deep learning alone. This level of processing is 10x more powerful than our previous generation DRIVE PX 2 reference design, which is used in today’s most advanced production cars.

These aren’t feeds and speeds we enabled just because we could. They’re essential to safety.

1 Chip, 6 Processors, 40 TOPS – Diversity and Redundancy Need Performance

Xavier is the brain of the self-driving car. From a safety perspective, this means building in diversity, redundancy and fault detection from end to end. From sensors, to specialized processors, to algorithms, to the computer, all the way to the car’s actuation — each function is performed using multiple methods, which gives us diversity. And each vital function has a fallback system, which ensures redundancy.

For example, objects detected by radar, lidar or cameras are handled with different processors and perceived using a variety of computer vision, signal processing and point cloud algorithms. Multiple deep learning networks run concurrently to recognize objects that should be avoided, while other networks determine where it’s safe to drive, achieving both diversity and redundancy. Different processors, running diverse algorithms in parallel, backing each other up, reduce the chance of an undetected single point of failure.
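
As a rough illustration of that principle, the sketch below combines independent detection paths and falls back to a degraded mode when they disagree. The function name and the two-out-of-three policy are assumptions for this example, not NVIDIA’s actual DRIVE software interfaces.

```python
# Conceptual sketch only: three diverse detection paths (camera DNN, radar
# pipeline, lidar point-cloud algorithm) each report whether an obstacle
# occupies a region, and a supervisor acts only on agreement.
def fuse_detections(camera_hit: bool, radar_hit: bool, lidar_hit: bool) -> str:
    votes = sum([camera_hit, radar_hit, lidar_hit])
    if votes >= 2:
        return "obstacle"            # majority of diverse paths agree
    if votes == 1:
        return "uncertain: degrade"  # disagreement, so fall back to a safe state
    return "clear"

print(fuse_detections(True, True, False))   # obstacle
print(fuse_detections(True, False, False))  # uncertain: degrade
```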

Inside the Xavier SoC — six types of processors, processing nearly 40 trillion operations per second.

Xavier also includes many types of hardware diagnostics. Key areas of logic are duplicated and voted in hardware using lockstep comparators. Error-correcting codes on memories detect faults and improve availability. A unique built-in self-test helps to find faults in the diagnostics, wherever they may be on chip.
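
The lockstep idea can be shown in software terms, even though Xavier implements it in hardware: execute the duplicated logic, compare the results, and signal a fault on any mismatch. The snippet below is a conceptual analogy under that assumption, not the chip’s actual mechanism.

```python
# Software analogy of hardware lockstep: the same logic runs twice and a
# comparator flags any disagreement so a transient error cannot pass silently.
class LockstepFault(Exception):
    pass

def lockstep(fn, *args):
    primary = fn(*args)
    shadow = fn(*args)   # duplicated execution; hardware compares every cycle
    if primary != shadow:
        raise LockstepFault(f"mismatch: {primary!r} != {shadow!r}")
    return primary

print(lockstep(lambda x, y: x + y, 2, 3))  # 5; both copies agree
```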

Xavier’s safety architecture was created over several years by more than 300 architects, designers and safety experts who analyzed over 150 safety-related modules. With Xavier, the auto industry can achieve the highest functional safety rating: ASIL-D.

Building in the diversity and redundancy needed for safety demands a huge amount of extra processing. For self-driving cars, processing power translates to safety.

Measuring Up to the Highest Standards

Thousands of engineers writing millions of lines of code — how do we ensure Xavier does what we designed it to do?

We created DRIVE as an open platform so that experts at the world’s best car companies can engage with our platform to make it industrial-strength. We also turned to TÜV SÜD, among the world’s most respected safety experts, who measured Xavier against the automotive industry’s standard for functional safety — ISO 26262.

Established by the International Organization for Standardization, the world’s chief standards body, ISO 26262 is the definitive global standard for the functional safety — a system’s ability to avoid, identify and manage failures — of road vehicles’ systems, hardware and software.

To meet that standard, an SoC must have an architecture that doesn’t just detect hardware failures during operation. It also needs to be developed in a process that mitigates potential systematic faults. That is, the SoC must avoid failures whenever possible, but detect and respond to them if they cannot be avoided.

TÜV SÜD’s team determined Xavier’s architecture meets the ISO 26262 requirements to avoid unreasonable risk in situations that could result in serious injury.

Our Journey to Zero Accidents

Inventing technology that will one day eliminate accidents on our roads is one of NVIDIA’s most important endeavors. We are inspired to tackle this grand computing challenge that will have great social impact.

We had to re-invent every aspect of computing, starting with the Xavier processor. We created processing power not for speed, but for safety. We benchmarked ourselves against the highest standards: ASIL-D and ISO 26262. And we engaged every expert — from the best car companies to TÜV SÜD — to test and challenge us.

The journey is long, but the destination is worth every step.


What’s the Difference Between VR, AR and MR?

Get up. Brush your teeth. Put on your pants. Go to the office. That’s reality.

Now, imagine you can play Tony Stark in Iron Man or the Joker in Batman. That’s virtual reality.

Advances in VR have enabled people to create, play, work, collaborate and explore in computer-generated environments like never before.

VR has been in development for decades, but only recently has it emerged as a fast-growing market opportunity for entertainment and business.

Now, picture a hero, and set that image in the room you’re in right now. That’s augmented reality.

AR is another overnight sensation decades in the making. The mania for Pokemon Go — which brought the popular Japanese game franchise to city streets — has led to a score of games that blend pixie dust and real worlds. There’s another sign of its rise: Apple’s 2017 introduction of ARKit, a set of tools for developers to create mobile AR content, is encouraging companies to build AR for iOS 11.

Microsoft’s HoloLens and Magic Leap’s Lightwear — both enabling people to engage with holographic content — are two major developments in pioneering head-mounted displays.

Magic Leap Lightwear

It’s not just fun and games either — it’s big business. Researchers at IDC predict worldwide spending on AR and VR products and services will rocket from $11.4 billion in 2017 to nearly $215 billion by 2021.

Yet just as VR and AR take flight, mixed reality, or MR, is evolving fast. Developers working with MR are quite literally mixing the qualities of VR and AR with the real world to offer hybrid experiences.

Imagine another VR setting: You’re instantly teleported onto a beach chair in Hawaii, sand at your feet, mai tai in hand, transported to the islands while skipping the economy-class airline seat. But are you really there?

In mixed reality, you could experience that scenario while actually sitting on a flight to Hawaii in a coach seat that emulates a creaky beach chair when you move, receiving the mai tai from a real flight attendant and touching sand on the floor to create a beach-like experience. A flight to Hawaii that feels like Hawaii is an example of MR.

VR Explained

The Sensorama

The notion of VR dates back to 1930s literature. In the 1950s, filmmaker Morton Heilig wrote of an “experience theater” and later built an immersive, video game-like machine he called the Sensorama for people to peer into. A pioneering moment came in 1968, when Ivan Sutherland developed what is credited as the first head-mounted display.

Much has changed since. Consumer-grade VR headsets have made leaps of progress. Their advances have been propelled by technology breakthroughs in optics, tracking and GPU performance.

Consumer interest in VR has soared in the past several years as new headsets from Facebook’s Oculus, HTC, Samsung, Sony and a host of others offer substantial improvements to the experience. Yet producing 3D, computer-generated, immersive environments for people is about more than sleek VR goggles.

Obstacles that have been mostly overcome so far include delivering enough frames per second and reducing latency — the delay between a user’s head movement and the display’s response — to create experiences that aren’t herky-jerky and potentially motion-sickness inducing.

VR taxes graphics processing to the max — it’s about 7x more demanding than PC gaming. Today’s VR experiences wouldn’t be possible without blazing-fast GPUs to quickly deliver those graphics.

Software plays a key role, too. The NVIDIA VRWorks software development kit, for example, helps headset and application developers access the best performance, lowest latency and plug-and-play compatibility available for VR. VRWorks is integrated into game engines such as Unity and Unreal Engine 4.

To be sure, VR still has a long way to go to reach its potential. Right now, the human eye is still able to detect imperfections in rendering for VR.

Vergence accommodation conflict, Journal of Vision

Some experts say VR technology will be able to perform at a level exceeding human perception when it can make a 200x leap in performance, roughly within a decade. In the meantime, NVIDIA researchers are working on ways to improve the experience.

One approach, known as foveated rendering, reduces the quality of images delivered to the edge of our retinas — where they’re less sensitive — while boosting the quality of the images delivered to the center of our retinas. This technology is powerful when combined with eye tracking that can inform processors where the viewing area needs to be sharpest.
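
Here is a minimal sketch of that idea, assuming a simple two-tier falloff around the tracked gaze point; the radii and quality levels are made up for illustration and are not the parameters of any shipping VR runtime.

```python
import math

def shading_quality(pixel, gaze, inner_radius=200.0, outer_radius=600.0):
    """Return a relative shading rate for a pixel given the gaze position (in pixels)."""
    dist = math.hypot(pixel[0] - gaze[0], pixel[1] - gaze[1])
    if dist <= inner_radius:
        return 1.0    # fovea: full quality
    if dist <= outer_radius:
        return 0.5    # mid-periphery: half the shading work
    return 0.25       # far periphery: quarter quality

print(shading_quality((960, 540), gaze=(1000, 500)))  # near the fovea -> 1.0
print(shading_quality((100, 100), gaze=(1000, 500)))  # far periphery -> 0.25
```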


VR’s Technical Hurdles

  • Frames per second: Virtual reality requires processing of 90 frames per second, because lower frame rates reveal lags in movement detectable to the human eye, which can make some people nauseous. NVIDIA GPUs make VR possible by enabling rendering that’s faster than the human eye can perceive (see the sketch after this list for the arithmetic).
  • Latency: VR latency is the time span between initiating a movement and a computer-represented visual response. Experts say latency rates should be 20 milliseconds or less for VR. NVIDIA VRWorks, the SDK for VR headsets and game developers, helps address latency.
  • Field of view: In VR, it’s essential to create a sense of presence. Field of view is the angle of view available from a particular VR headset. For example, the Oculus Rift headset offers a 110-degree viewing angle.
  • Positional tracking: To enable a sense of presence and deliver a good VR experience, headsets need to be tracked in space to within 1 millimeter of accuracy, so that images can be presented correctly at any point in time and space.
  • Vergence accommodation conflict: This is a viewing problem for VR headsets today. Your eyes rotate toward or away from each other to converge on an object (“vergence”), while at the same time their lenses adjust focus on that object (“accommodation”). The display of 3D images in VR goggles creates a conflict between vergence and accommodation that is unnatural to the eye, which can cause visual fatigue and discomfort.
  • Eye-tracking tech: VR head-mounted displays to date haven’t tracked the user’s eyes well enough for the system to react to where a person is actually looking. Increasing resolution where the eyes are focused will help deliver better visuals.
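
The numbers in the first two bullets reduce to simple arithmetic, sketched below. The stage breakdown is an illustrative assumption, not a measured pipeline.

```python
TARGET_FPS = 90
frame_time_ms = 1000.0 / TARGET_FPS   # ~11.1 ms available to render each frame
latency_budget_ms = 20.0              # commonly cited motion-to-photon upper bound

# Hypothetical stage costs that must fit inside the budget.
stages_ms = {"tracking": 2.0, "render": frame_time_ms, "scanout": 5.0}
total_ms = sum(stages_ms.values())

print(f"frame time at {TARGET_FPS} fps: {frame_time_ms:.1f} ms")
print(f"motion-to-photon estimate: {total_ms:.1f} ms "
      f"({'within' if total_ms <= latency_budget_ms else 'over'} the {latency_budget_ms:.0f} ms budget)")
```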

Despite a decades-long crawl to market, VR has made a splash in Hollywood films such as the recently released Ready Player One and is winning fans across games, TV content, real estate, architecture, automotive design and other industries.

Analysts expect VR’s consumer momentum, however, to be outpaced in the years ahead by enterprise adoption.

AR Explained

AR development spans decades. Much of the early work dates to universities such as MIT’s Media Lab and pioneers like Thad Starner — now the technical lead for Alphabet’s Google Glass smart glasses — who, in the technology’s infancy, wore heavy belt packs of batteries attached to AR goggles.

Google Glass (2.0)

Google Glass, the ill-fated consumer fashion faux pas, has since been reborn as a less conspicuous, behind-the-scenes technology for use in warehouses and manufacturing. Today Glass is used by workers to access training videos and get hands-on help from colleagues.

And AR technology has been squeezed down into low-cost cameras and screen technology for use in inexpensive welding helmets that automatically dim, enabling people to melt steel without burning their retinas.

For now, AR is making inroads with businesses. It’s easy to see why when you think about overlaying useful information in AR that offers hands-free help for industrial applications.

Smart glasses for augmenting intelligence certainly support that image. Sporting microdisplays, AR glasses can act like an indispensable co-worker who can help out in a pinch. AR eyewear can make the difference between getting a job done or not.

Consider a junior-level heavy duty equipment mechanic assigned to make emergency repairs to a massive tractor on a construction site. The boss hands the newbie AR glasses providing service manuals with schematics for hands-free guidance while troubleshooting repairs.

Some of these smart glasses even pack Amazon’s Alexa service to enable voice queries for access to information without having to fumble for a smartphone or tap one’s temple.

Jensen Huang on VR at GTC

Business examples for consumers today include IKEA’s Place app, which lets people use a smartphone to view images of furniture and other products overlaid on their actual home as seen through the phone.

Today there is a wide variety of smart glasses from the likes of Sony, Epson, Vuzix, ODG and startups such as Magic Leap.

NVIDIA research is continuing to improve AR and VR experiences. Many of these can be experienced at our GPU Technology Conference VR demonstrations.

MR Explained

MR holds big promise for businesses. Because it can offer nearly unlimited variations in virtual experiences, MR can inform people on-the-fly, enable collaboration and solve real problems.

Now, imagine the junior-level heavy-duty mechanic waist-deep in a massive tractor and completely baffled by its mechanical problem. Crews sit idle, unable to work. Site managers become anxious.

Thankfully, the mechanic’s smart glasses can start a virtual help session. The mechanic can call in a senior colleague, who joins by VR around the actual tractor to walk through troubleshooting together on the real machine, while AR supplies online repair manuals for torque specs and other maintenance details.

That’s mixed reality.

To visualize a consumer version of this, consider the IKEA Place app for placing and viewing furniture in your house. You have your eye on a bright red couch, but want to hear what friends and family think of how it looks in your living room before you buy it.

So you pipe in a VR session through the smart glasses. Imagine this session runs in a VR version of the IKEA Place app, and you invite those closest confidants. Each of them first sits down on whatever couch they can find. When everyone is seated, however, they find themselves in a virtual version of your living room, and the couch is red, just like the IKEA one.

Voila, in mixed reality, the group virtual showroom session yields a decision: your friends and family unanimously love the couch, so you buy it.

The possibilities are endless for businesses.

Mixed reality, while in its infancy, might be the holy grail for all the major developers of VR and AR. There are unlimited options for the likes of Apple, Microsoft, Facebook, Google, Sony, Netflix and Amazon. That’s because MR can blend the here and now reality with virtual reality and augmented reality for many entertainment and business situations.

Take this one: automotive designers can sit down on an actual car seat, with armrests, shifters and steering wheel, and pipe into VR sessions delivering the dashboard and interior. Many can assess designs together remotely this way. NVIDIA Holodeck actually enables this type of collaborative work.

Buckle up for the ride.


Hidden Figures: How AI Could Spot a Silent Cancer in Time to Save Lives

It’s no wonder Dr. Elliot Fishman sounds frustrated when he talks about pancreatic cancer.

As a diagnostic radiologist at Johns Hopkins Hospital, one of the world’s largest centers for pancreatic cancer treatment, he has the grim task of examining pancreatic CT scans for signs of a disease that’s usually too advanced to treat.

Because symptoms seldom show up in the early stages of pancreatic cancer, most patients don’t get CT scans or other tests until the cancer has spread. By then, the odds of survival are low: Just 7 percent of patients live five years after diagnosis, the lowest rate for any cancer.

“Our goal is early detection of pancreatic cancer, and that would save lives,” Fishman said.

Fishman aims to spot pancreatic cancers far sooner than humans alone can by applying GPU-accelerated deep learning to the task. He helps spearhead Johns Hopkins’ Felix project, a multimillion-dollar effort supported by the Lustgarten Foundation to improve doctors’ ability to detect the disease.

This video depicts pancreatic cancer that has invaded the vessels — the branch-like structures at the center of the picture — surrounding the pancreas. That means the disease is too advanced to be treated with surgery. Video courtesy of Dr. Elliot Fishman, Johns Hopkins Hospital.

Deep Learning Aids Hunt for Silent Killer

The pancreas — a six-inch long organ located behind the stomach — plays an essential role in converting the food we eat into fuel for the body’s cells. It’s located deep in the abdomen, making it hard for doctors to feel during routine examinations, and making it difficult to detect tumors using imaging tests like CT scans.

Some radiologists, like Fishman, see thousands of cases a year. But others lack the experience to spot the cancer, especially when the lesions — abnormalities in organs and tissue — are at their smallest in the early stages of the disease.

“If people are getting scanned and diagnoses aren’t being made, what can we do differently?” Fishman asked in a recent talk at the GPU Technology Conference, in San Jose. “We believe deep learning will work for the pancreas.”

Johns Hopkins is ideally suited to developing a deep learning solution because it has the massive amounts of data on pancreatic cancer needed to teach a computer to detect the disease in a CT scan. Hospital researchers also have our DGX-1 AI supercomputer, an essential tool for deep learning research.

The pancreas, a fish-shaped organ, is pictured here in golden brown, above the kidneys and below the spleen. The dark circle at the center of the image is a tumor. Image courtesy of Dr. Elliot Fishman, Johns Hopkins Hospital.

Detecting Pancreatic Cancer with Greater Accuracy

Working with a team of computer scientists, oncologists, pathologists and other physicians, Fishman is helping train deep learning algorithms to spot minute textural changes to tissue of the pancreas and nearby organs. These changes are often the first indication of cancer.

The team trained its algorithms on about 2,000 CT scans, including 800 from patients with confirmed pancreatic cancer. It wasn’t easy. Although Johns Hopkins has ample data, the images must be labeled to point out key characteristics that are important in determining the state of the pancreas. At four hours per case, it’s a massive undertaking.

In the first year of the project, the team trained an algorithm to recognize the pancreas and the organs that surround it, achieving a 70 percent accuracy rate. In tests this year, the deep learning model has accurately detected pancreatic cancer about nine times out of 10.
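
For a sense of how such figures are reported, here is a hypothetical evaluation sketch that derives sensitivity and specificity from case-level predictions. The sample data is invented for illustration and is not the Felix project’s dataset.

```python
def sensitivity_specificity(y_true, y_pred):
    """Case-level sensitivity and specificity (1 = cancer present)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Made-up example: 9 of 10 cancer cases caught, 1 false alarm among 10 healthy cases.
y_true = [1] * 10 + [0] * 10
y_pred = [1] * 9 + [0] + [0] * 9 + [1]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(f"sensitivity: {sens:.0%}, specificity: {spec:.0%}")  # sensitivity: 90%, specificity: 90%
```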

Earlier Diagnosis Possible  

The team is now examining instances where cancer was missed to improve its algorithm. It’s also working to go beyond identifying tumor cells to predict likely survival rates and whether the patient is a candidate for surgery.

Finding an answer is urgent because even though pancreatic cancer is rare, it’s on the rise. Not long ago, it was the fourth-leading cause of cancer deaths in the U.S. Today it’s No. 3. And fewer than a fifth of patients are eligible at the time of diagnosis for surgery, the primary treatment for the disease.

For Fishman, deep learning detection methods could mean earlier diagnosis. He estimates that nearly a third of the cases he sees could have been detected four to 12 months sooner.

“We want to train the computer to be the best radiologist in the world,” Fishman said. “We’re hopeful we can make a difference.”

To learn more about Fishman’s research, watch his GTC talk, The Early Detection of Pancreatic Cancer Using Deep Learning: Preliminary Observations.

Also, here are two of his recent papers:

* Main image for this story pictures a pancreatic cancer cell.


Getting Brainy in Brisbane: NVIDIA Talks Robots, Research at ICRA

We’re bringing NVIDIA researchers — the brains behind our bots — to the International Conference on Robotics and Automation (ICRA) in Brisbane, Australia, from May 21-25. And they want to meet you. Held annually since 1984, ICRA has become a premier forum for robotics researchers from across the globe to present their work.
