From Intel Movidius Neural Compute Stick to Tiny Artificial Intelligence Camera

Today, FLIR® Systems announced the FLIR Firefly® camera family, which incorporates the Intel® Movidius™ Myriad™ 2 Vision Processing Unit (VPU) for artificial intelligence at the edge. The Firefly* combines a new machine vision platform with the power of deep learning to address complex and subjective problems, such as classifying the quality of a solar panel or determining whether fruit is of export quality.

FLIR engineers accelerated the Firefly’s development cycle by prototyping on the Intel® Movidius™ Neural Compute Stick (NCS) and the Neural Compute SDK, which streamlined the development of machine learning in the camera. For large-scale commercial production, they ported that development work to the Intel Movidius Myriad 2 VPU. The production version of the Firefly uses the tiny, stand-alone Intel Movidius Myriad 2 VPU to perform two roles at the edge: image signal processing and open platform inference.
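
For developers following a similar path, here is a minimal sketch of what prototyping on the Neural Compute Stick looked like with the NCSDK v1 Python API (mvnc). It is illustrative only: the graph file name and input shape are placeholder assumptions, not FLIR’s actual model.

```python
# Minimal sketch: run one inference on an Intel Movidius Neural Compute Stick
# using the NCSDK v1 Python API. "classifier.graph" and the 224x224x3 input
# shape are hypothetical placeholders.
import numpy as np
from mvnc import mvncapi as mvnc

devices = mvnc.EnumerateDevices()
if not devices:
    raise RuntimeError("No Neural Compute Stick found")

device = mvnc.Device(devices[0])
device.OpenDevice()

# Load a graph previously compiled with the NCSDK's mvNCCompile tool.
with open("classifier.graph", "rb") as f:
    graph = device.AllocateGraph(f.read())

# Run a single inference on a preprocessed image (half precision, as the NCS expects).
image = np.random.rand(224, 224, 3).astype(np.float16)
graph.LoadTensor(image, "user object")
output, _ = graph.GetResult()
print("Top class:", int(np.argmax(output)))

graph.DeallocateGraph()
device.CloseDevice()
```

Because the original Neural Compute Stick is itself built around a Myriad 2 VPU, work validated on the stick maps naturally onto the same silicon inside the production camera.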

The 27- by 27-millimeter Firefly is roughly the size of a U.S. quarter.

More: Reimagined Machine Vision with On-Camera Deep Learning (Case Study) | FLIR Systems Announces Industry-First Deep Learning-Enabled Camera Family (News Release) | Intel Movidius (Press Kit) | All Intel Explainers

The FLIR Firefly camera family announced on Oct. 18, 2018, incorporates the Intel Movidius Myriad 2 Vision Processing Unit. What started as three distinct devices has been transformed into the 27- by 27-millimeter Firefly unit. (Credit: FLIR)

The post From Intel Movidius Neural Compute Stick to Tiny Artificial Intelligence Camera appeared first on Intel Newsroom.

Intel Tech Learning Lab Starts Tour to Shape Education’s Future


» Download all images (ZIP, 63 MB)

What’s New: Sharing an immersive, technology-based approach to education, Intel’s Tech Learning Lab begins a multi-city experiential tour today for students and teachers. The nationwide tour starts at the Bronx Academy of Letters in New York City.

“Intel is addressing the needs of educators through advanced technology that enables effective and dynamic classroom experiences and drives students’ skills development to prepare them for the demands of the future workforce.”
–Raysana Hurtado, education segment manager at Intel

What It Is: Intel’s Tech Learning Lab is a custom-built mobile container truck outfitted with virtual reality (VR) demo stations, powerful PCs, augmented reality (AR) and Internet of Things (IoT) smart whiteboards. Accompanied by immersive, hands-on workshops featuring artificial intelligence, coding and robotics, it will make stops at schools and other education institutions through Dec. 15.

Why It’s Important: Intel will bring innovative teaching methods to educators to help them build the leaders of tomorrow by developing fundamental career skills like communication, collaboration, self-awareness, problem-solving, critical thinking and more.

Today’s students live in a digital world. Modern teaching methods need to reflect this, with technology seamlessly integrated across all areas of instruction. Despite the indisputable need for a sophisticated workforce, schools across the country are stuck using technologies and instructional models of the past to prepare students for careers of the future.

The Tech Learning Lab tour is designed to engage with educators and spark conversations that go beyond the classroom to fuel curiosity about the role of technology and its impact on our world and daily lives. Hands-on virtual lessons spanning arts, science and other subjects will introduce students, teachers and administrators to the power of technology as an instructional tool for the 21st century.

Why Now: The U.S. education system is changing, with drastic cuts to arts education; the rise of science, technology, engineering and math (STEM) education; and innovative new models. Until now, classroom technology has been used as an add-on to existing instructional methods rather than as a tool to improve or revolutionize instruction.

Cutting-edge technology-based educational programs can emphasize deeper collaboration and engagement, versus student instruction on software that likely will be obsolete by the time they enter the workforce. The future classroom is one that incorporates powerful technology and encourages creative approaches to learning, supporting education goals today and for tomorrow.

Where It Will Visit: Intel’s Tech Learning Lab continues at schools across the country, with visits planned to:

  • Weston High School, in Weston, Mass. (Nov. 7-9)
  • Ron Clark Academy, in Atlanta (Nov. 15-16)
  • Design39Campus, in San Diego (Nov. 29-30)
  • McClymonds High School, in Oakland, Calif. (week of Dec. 3rd)
  • Oakland Tech, in Oakland, Calif. (week of Dec. 3rd)

More Context: For full details on the technologies featured throughout Intel’s Tech Learning Lab, visit the tour fact sheet.


» Download video: “Tech Learning Lab (B-Roll)”

The post Intel Tech Learning Lab Starts Tour to Shape Education’s Future appeared first on Intel Newsroom.

Intel Artificial Intelligence and Rolls-Royce Push Full Steam ahead on Autonomous Shipping

Photos from Rolls-Royce

What’s New: Rolls-Royce* builds sophisticated, intelligent shipping systems, and eventually it will add fully autonomous vessels to that portfolio, as it works to make commercial shipping safer and more efficient. It’s doing so using artificial intelligence (AI) powered by Intel® Xeon® Scalable processors and Intel® 3D NAND SSDs for storage.

“Delivering these systems is all about processing – moving and storing huge volumes of data – and that is where Intel comes in. Rolls-Royce is a key driver of innovation in the shipping industry, and together we are creating the foundation for safe shipping operations around the world.”
–Lisa Spelman, vice president and general manager, Intel Xeon Processors and Data Center Marketing in the Data Center Group at Intel

How It Works: Ships have dedicated Intel Xeon Scalable processor-based servers on board, turning them into cutting-edge floating data centers with heavy computation and AI inference capabilities. Rolls-Royce’s Intelligent Awareness System (IA) uses AI-powered sensor fusion and decision-making by processing data from lidar, radar, thermal cameras, HD cameras, satellite data and weather forecasts. This data allows vessels to become aware of their surroundings, improving safety by detecting objects several kilometers away, even in busy ports. This is especially important when operating at night, in adverse weather conditions or in congested waterways.
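
The article does not detail the decision logic, but the kind of real-time check such a system supports is easy to illustrate. The sketch below computes the closest point of approach (CPA) and time to CPA for a detected vessel from fused position and velocity estimates; it is a generic maritime calculation, not Rolls-Royce’s code.

```python
# Generic closest-point-of-approach (CPA/TCPA) check from fused sensor tracks.
# Illustrative only; not the Intelligent Awareness system's implementation.
import numpy as np

def cpa_tcpa(own_pos, own_vel, tgt_pos, tgt_vel):
    """Positions in meters (east, north), velocities in m/s.
    Returns (cpa_distance_m, time_to_cpa_s)."""
    rel_pos = np.asarray(tgt_pos, float) - np.asarray(own_pos, float)
    rel_vel = np.asarray(tgt_vel, float) - np.asarray(own_vel, float)
    speed_sq = rel_vel @ rel_vel
    if speed_sq < 1e-9:                      # no relative motion
        return float(np.linalg.norm(rel_pos)), float("inf")
    tcpa = max(0.0, -(rel_pos @ rel_vel) / speed_sq)
    cpa = float(np.linalg.norm(rel_pos + rel_vel * tcpa))
    return cpa, tcpa

# A target detected ~3.6 km away, crossing from starboard on a near-collision course.
cpa, tcpa = cpa_tcpa(own_pos=(0, 0), own_vel=(0, 7.7),         # ~15 knots north
                     tgt_pos=(2000, 3000), tgt_vel=(-5.1, 0))  # ~10 knots west
print(f"CPA ~{cpa:.0f} m in ~{tcpa / 60:.1f} minutes")
```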

Data collected by the vessels is stored using Intel 3D NAND SSDs, acting as a “black box,” securing the information for training and analysis once the ship is docked. Even compressed, data captured by each vessel can reach up to 1TB per day or 30TB to 40TB over a monthlong voyage, making storage a critical component of the intelligent solution.

“This collaboration is helping us to develop technology that supports ship owners in the automation of their navigation and operations, reducing the opportunity for human error and allowing crews to focus on more valuable tasks,” said Kevin Daffey, director, Engineering & Technology and Ship Intelligence at Rolls-Royce. “Simply said, this project would not be possible without leading-edge technology now brought to the table by Intel. Together, we can blend the best of the best to change the world of shipping.”

Why It’s Important: Ninety percent of world trade is carried out via international shipping – a number that is projected to grow. Of a total world fleet of about 100,000 vessels, around 25,000 use Rolls-Royce equipment, making the company a key player in the shipping industry.

The sea can be a hostile environment: dangerous ocean conditions resulted in 1,129 total shipping losses over the past 10 years, mostly due to human error. Enabling a massive vessel, loaded with millions or billions of dollars’ worth of goods, to better navigate and detect obstacles and hazards in real time gives the crew the information it needs to make smart and potentially lifesaving decisions. These systems also reduce the potential for human error by automating routine tasks and processes, freeing the crew to focus on critical decision-making.

Additionally, this system can potentially lower insurance premiums for vessels, since all of the ship’s data is stored securely on the 3D NAND SSDs, which can provide valuable evidence on the cause of collisions and other incidents.

This technology is in action today. In a recent pilot in Japan, Rolls-Royce demonstrated that its vessels can even understand their surroundings at nighttime, when it is not possible for humans to visually detect objects in the water.

More Context: Sailing the Seas of Autonomous Shipping (Binay Ackaloor Blog) | Rolls-Royce Ship Intelligence | Artificial Intelligence at Intel

Intel, the Intel logo and Xeon are trademarks of Intel Corporation in the U.S. and/or other countries.

The post Intel Artificial Intelligence and Rolls-Royce Push Full Steam ahead on Autonomous Shipping appeared first on Intel Newsroom.

Germany’s BrighterAI Named Hottest Startup at GTC Europe

On the tenth day of the tenth month in Munich, there were ten.

Deeply ambitious AI startups, that is, going head to head at GTC Europe. They were vying for the title of Europe’s Hottest Startup in a series of lightning-fast pitches and Q&A with a panel of startup specialists at the show’s Inception Awards.

With some 400 tech execs, developers and academics looking on, Germany’s BrighterAI took the bragging rights, along with a tidy prize valued at about $200,000 in cash and an NVIDIA DGX Station personal AI supercomputer. At news of the victory, much of the company’s young 12-member team, who had made the 600-kilometer trip from Berlin, leapt from their seats hooting and pumping fists.

BrighterAI’s co-founder and CEO Marian Gläser had just three minutes to describe the company’s work, which uses deep neural networks to enable businesses to store and process images and video in a way that complies with GDPR and other increasingly important privacy laws and practices. The company creates perception layers from camera inputs to anonymize images in a natural way, while also stripping out distortions from weather and other factors. He projected the two-year-old company’s revenue to grow to $70 million within five years.

All 10 Inception finalists were focused on applying AI to specific vertical industries – healthcare, financial analysis, manufacturing optimization and call-center operations, among others.

In presenting the award to BrighterAI, Jensen Huang, who admitted to being far less polished when he founded NVIDIA 25 years ago, said that the next wave of computing will be focused on such companies.

“The revolution of the past was inventing computing tools, but the revolution of now is applying computing technology to solve the great challenges of humanity,” he said. “The last 35 years were about the computer industry, the next 35 years is about your industries.”

The finalists had been whittled down from a list of more than 140 entrants from the 1,600 European AI startups in NVIDIA’s Inception program, a virtual incubator for AI companies that in total has more than 3,000 members.

Other finalists included:

  • axial3D (Ireland): Produces medical 3D printing software to advance the standard and efficiency of surgical intervention.
  • ATLAN Space (Morocco): Makes drones smarter and enables them to monitor vast areas, identify risks and make smart decisions.
  • Axyon AI (Italy): Uses its deep learning platform to augment work in financial areas from credit risk and wealth management to churn-rate prediction and fraud detection.
  • Conundrum (Russia): Uses AI and machine learning to predict failures, malfunctions and quality issues in complex industrial equipment and processes.
  • Corti Labs (Denmark): Uses deep learning to help medical personnel in call centers and other settings to make critical decisions when time is of the essence. It provides accurate diagnostic support to emergency services, allowing patients to get the right treatment faster.
  • IPT (Germany): Deploys its software to enable engineers to combine their own experience, AI and data to optimize manufacturing processes.
  • RetinAI (Switzerland): Develops preventative treatment for such eye diseases as age-related macular degeneration, diabetic retinopathy and glaucoma.
  • Serket (Netherlands): Uses AI to assist farmers in tracking and monitoring the health of livestock. It’s named for the Egyptian goddess of nature, animals, medicine and magic.
  • TheraPanacea (France): Uses AI, high performance computing, physics-based simulation and medical imaging to improve the efficiency and accuracy of radiotherapy treatment planning.

Special thanks to our Inception Awards sponsors: Infineon, MD Elektronik and Qwant.

The post Germany’s BrighterAI Named Hottest Startup at GTC Europe appeared first on The Official NVIDIA Blog.

First Mover: Germany’s DFKI to Deploy Europe’s Initial DGX-2 Supercomputer

DFKI, Germany’s leading research center for innovative commercial software technology based on AI, is the first organization in Europe to adopt the NVIDIA DGX-2 AI supercomputer.

The research center will use the system to quickly analyze large-scale satellite and aerial imagery using image processing and deep neural network training, as well as for various deep learning experiments.

One experiment aims to develop new applications that will support rescuers in disaster-response scenarios by enabling them to make faster decisions. The resulting applications could help answer important questions, such as which areas are affected by a disaster and whether infrastructure remains accessible during events such as floods.

Another highly topical research area is measuring and understanding convolutional neural networks (CNNs) by quantifying the amount of input they let in. This technology is breaking new ground in the area of neural network understanding, opening a new way to reason about, debug and interpret results.
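
One widely used technique in this area, shown below purely as a generic illustration (it is not DFKI’s method), is occlusion sensitivity: mask one region of the input at a time and measure how much the prediction drops.

```python
# Occlusion sensitivity: a generic way to quantify which parts of the input a
# model actually uses. The "classifier" here is a toy stand-in, not a real CNN.
import numpy as np

def occlusion_map(predict, image, patch=8):
    """Grid of prediction drops when each patch-sized region is zeroed out."""
    h, w = image.shape[:2]
    baseline = predict(image)
    drops = np.zeros((h // patch, w // patch), dtype=np.float32)
    for i in range(h // patch):
        for j in range(w // patch):
            occluded = image.copy()
            occluded[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch] = 0
            drops[i, j] = baseline - predict(occluded)
    return drops  # large values mark regions the prediction depends on

# Toy "classifier" that responds only to brightness in the image center.
def toy_predict(img):
    return float(img[24:40, 24:40].mean())

image = np.random.rand(64, 64).astype(np.float32)
print(occlusion_map(toy_predict, image).round(2))
```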

DGX-2 integrates 16 NVIDIA Tesla V100 Tensor Core GPUs connected via NVIDIA NVSwitch, an AI network fabric that delivers throughput of 2.4TB per second.

“The analysis of big amounts of data — for example, large-scale aerial and satellite imagery — requires a powerful solution to process and train these deep neural networks,” said Andreas Dengel, head of the research department Smart Data & Knowledge Services at DFKI in Kaiserslautern. “The increased memory footprint of the DGX-2, enabled by the fully connected GPUs based on the NVSwitch architecture, will play a key role for us in improving the development of effective AI applications and expand the unique infrastructure of our Deep Learning Competence Center.”

Founded in 1988, DFKI previously used the NVIDIA DGX-1 for various projects, including its DeepEye project. To help estimate and forecast damage from natural disasters, the system trained multiple CNN models to extract relevant information from text, images and metadata from social media.

For more NVIDIA developments at #GTC18, follow @NVIDIAEU.

The post First Mover: Germany’s DFKI to Deploy Europe’s Initial DGX-2 Supercomputer appeared first on The Official NVIDIA Blog.

King’s College London, NVIDIA Build Gold Standard for AI Infrastructure in the Clinic

King’s College London, a leader in medical research, is Europe’s first clinical partner to adopt NVIDIA DGX-2 and the NVIDIA Clara platform. KCL is deploying NVIDIA AI solutions to rethink the practice of radiology and pathology in a quest to better serve 8 million patients in the U.K.’s National Health Service.

NVIDIA and KCL will co-locate researchers and engineers with clinicians from major London hospitals that are part of the NHS Trust citywide network, including King’s College Hospital, Guy’s and St Thomas’, and South London and Maudsley. This trio of researchers, technologists and clinicians will accelerate the discovery of critical data strategies and targeted AI problems, and speed deployment in the clinic.

“This is a huge opportunity to transform patient outcomes by applying the extraordinary capabilities of AI to ultimately make diagnoses earlier and more accurately than in the past,” said Professor Sebastien Ourselin, head of the School of Biomedical Engineering and Imaging Sciences at KCL. “This partnership will combine our expertise in medical imaging and health records with NVIDIA’s technology to improve patient care across the U.K.”

First up is unleashing the power of DGX-2 on the advanced imaging and analytics challenges at KCL. The DGX-2 system’s large memory and 2 petaflops of computing prowess make it well suited to training on large 3D datasets in minutes instead of days.

Training at scale is tricky, but the DGX-2 can enhance medical imaging AI tools like NiftyNet, a TensorFlow-based, open-source convolutional neural network platform developed at KCL for research in medical image analysis and image-guided therapy.

If infrastructure and tools are at the heart of developing AI applications, data is the blood that makes the heart do something magical. Federated learning, the ability to learn from data that is not centralized, is one example.

Working with KCL’s clinical network to crack the technical and data governance issues of federated learning could lead to breakthroughs such as more precise classification of stroke and neurological impairments to recommend the best treatment, or automatic biomarker determination.
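
To make the idea concrete, here is a minimal sketch of federated averaging, the textbook pattern behind federated learning: each site trains on its own data and only model weights are aggregated. It is a generic illustration with toy data, not KCL’s or NVIDIA’s implementation.

```python
# Federated averaging (FedAvg) sketch: hospitals train locally; only weights,
# never patient data, leave each site. Toy logistic-regression example.
import numpy as np

def local_update(weights, X, y, lr=0.01, epochs=5):
    """One site's local training: plain logistic-regression gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (preds - y) / len(y)
    return w

def federated_round(global_w, site_data):
    """Aggregate locally trained weights, weighted by local dataset size."""
    updates = [local_update(global_w, X, y) for X, y in site_data]
    sizes = np.array([len(y) for _, y in site_data], dtype=float)
    return np.average(np.stack(updates), axis=0, weights=sizes / sizes.sum())

# Toy data standing in for three hospitals' private datasets.
rng = np.random.default_rng(0)
sites = [(rng.normal(size=(100, 8)), rng.integers(0, 2, 100)) for _ in range(3)]
w = np.zeros(8)
for _ in range(10):
    w = federated_round(w, sites)
print("Global weights after 10 rounds:", np.round(w, 3))
```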

NVIDIA Clara is the computing platform for deploying breakthrough applications like these. Like its namesake, Clara Barton, the platform is meant to help people. It’s universal, scalable and accessible to the applications that need to run in clinical workflows.

From development to deployment, NVIDIA and KCL plan to streamline AI while building the necessary tools, infrastructure and best practices to empower the entire clinical ecosystem.

The post King’s College London, NVIDIA Build Gold Standard for AI Infrastructure in the Clinic appeared first on The Official NVIDIA Blog.

New Intel Vision Accelerator Solutions Speed Deep Learning and Artificial Intelligence on Edge Devices

What’s New: Today, Intel unveiled its family of Intel® Vision Accelerator Design Products targeted at artificial intelligence (AI) inference and analytics performance on edge devices, where data originates and is acted upon. The new acceleration solutions come in two forms: one that features an array of Intel® Movidius™ vision processors and one built on the high-performance Intel® Arria® 10 FPGA. The accelerator solutions build on the OpenVINO™ software toolkit that provides developers with improved neural network performance on a variety of Intel products and helps them further unlock cost-effective, real-time image analysis and intelligence within their Internet of Things (IoT) devices.

“Until recently, businesses have been struggling to implement deep learning technology. For transportation, smart cities, healthcare, retail and manufacturing industries, it takes specialized expertise, a broad range of form factors and scalable solutions to make this happen. Intel’s Vision Accelerator Design Products now offer businesses choice and flexibility to easily and affordably accelerate AI at the edge to drive real-time insights.”
–Jonathan Ballon, Intel vice president and general manager, Internet of Things Group

Why This Is Important: The need for intelligence on edge devices has never been greater. As deep learning approaches rapidly replace more traditional computer vision techniques, businesses can unlock rich data from digital video. With Intel Vision Accelerator Design Products, businesses can implement vision-based AI systems to collect and analyze data right on edge devices for real-time decision-making. Advanced edge computing capabilities help cut costs, drive new revenue streams and improve services.

What This Delivers: Combined with Intel Vision products such as Intel CPUs with integrated graphics, these new edge accelerator cards allow businesses the choice and flexibility of price, power and performance to meet specific requirements from camera to cloud. Intel’s Vision Accelerator Design Products will build upon growing industry adoption for the OpenVINO toolkit:

  • Smart, Safe Cities: With the OpenVINO toolkit, stadium security provider AxxonSoft* used existing installed-base hardware to achieve 9.6 times the performance on standard Intel® Core™ i7 processors and 3.1 times the performance on Intel® Xeon® Scalable processors in order to ensure the safety of 2 million visitors to the FIFA 2018 World Cup.*

Who Uses This: Leading companies such as Dell*, Honeywell* and QNAP* are planning products based on Intel Vision Accelerator Designs. Additional partners and customers, including equipment builders, solution developers and cloud service providers, support these products.

More Context: Intel’s Vision Accelerator Design Products Customer Quotes | Video | Infographic

How This Works: Intel Vision Accelerator Design Products work by offloading AI inference workloads to purpose-built accelerator cards that feature either an array of Intel Movidius Vision Processing Units, or a high-performance Intel Arria 10 FPGA. Deep learning inference accelerators scale to the needs of businesses using Intel Vision solutions, whether they are adopting deep learning AI applications in the data center, in on-premise servers or inside edge devices. With the OpenVINO toolkit, developers can easily extend their investment in deep learning inference applications on Intel CPUs and integrated GPUs to these new accelerator designs, saving time and money.
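
For developers, the portability described above shows up as a one-line device choice in the OpenVINO toolkit’s Inference Engine. The sketch below uses the Python API as it appears in later OpenVINO releases (the 2018 interface differed); the model paths are placeholders, and “MYRIAD”/“HDDL” are the device names that target Movidius VPU hardware on supported releases.

```python
# Hedged sketch: running an OpenVINO IR model and choosing the target device.
# Model paths and input shape are illustrative assumptions.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")  # IR from the Model Optimizer
input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))

# Swap "CPU" / "GPU" / "MYRIAD" / "HDDL" to move the same network between
# processors and accelerator cards.
exec_net = ie.load_network(network=net, device_name="MYRIAD")

frame = np.random.rand(1, 3, 224, 224).astype(np.float32)   # assumed NCHW input
result = exec_net.infer(inputs={input_name: frame})
print("Output shape:", result[output_name].shape)
```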

The Small Print:

1Automated product quality data collected by Yumei using JWIPC® model IX7, a ruggedized, fanless edge compute node/industrial PC running an Intel® Core™ i7 CPU with integrated on-die GPU and the OpenVINO SDK, with 16GB of system memory, connected to a 5MP POE Basler* camera model acA 1920-40gc. Together these components, along with the Intel-developed computer vision and deep learning algorithms, provide Yumei factory workers with information on product defects in near real time (within 100 milliseconds). Sample size >100,000 production units collected over 6 months in 2018.

The post New Intel Vision Accelerator Solutions Speed Deep Learning and Artificial Intelligence on Edge Devices appeared first on Intel Newsroom.

Intel Collaborates on New AI Research Center at Technion, Israel’s Technological Institute

From left: Dr. Michael Mayberry, Intel’s chief technology officer; Dr. Naveen Rao, Intel corporate vice president and general manager of the Artificial Intelligence Products Group

What’s New: Technion*, Israel’s technological institute, announced this week that Intel is collaborating with the institute on its new artificial intelligence (AI) research center. The announcement was made at the center’s inauguration attended by Dr. Michael Mayberry, Intel’s chief technology officer, and Dr. Naveen Rao, Intel corporate vice president and general manager of the Artificial Intelligence Products Group.

“AI is not a one-size-fits-all approach, and Intel has been working closely with a range of industry leaders to deploy AI capabilities and create new experiences. Our collaboration with Technion not only reinforces Intel Israel’s AI operations, but we are also seeing advancements to the field of AI from the joint research that is under way and in the pipeline.”
–Naveen Rao, Intel corporate vice president and general manager of Artificial Intelligence Products Group

What It Includes: The center features Technion’s computer science, electrical engineering, industrial engineering and management departments, among others, all collaborating to drive a closer relationship between academia and industry in the race to AI. Intel, which invested undisclosed funds in the center, will represent the industry in leading AI-dedicated computing research.

What It Means: Intel is committed to accelerating the promise of AI across many industries and driving the next wave of computing. Research exploring novel architectural and algorithmic approaches is a critical component of Intel’s overall AI program. The company is working with customers across verticals – including healthcare, autonomous driving, sports/entertainment, government, enterprise, retail and more – to implement AI solutions and demonstrate real value. Along with Technion, Intel is also involved in AI research with other universities and organizations worldwide.

Why It Matters: Intel and Technion have enjoyed a strong relationship through the years, as generations of Technion graduates have joined Intel’s development center in Haifa, Israel, as engineers. Intel has also previously collaborated with Technion on AI as part of the Intel Collaborative Research Institute for Computational Intelligence program.

More Context: Artificial Intelligence at Intel

The post Intel Collaborates on New AI Research Center at Technion, Israel’s Technological Institute appeared first on Intel Newsroom.

Keep on Rockin’: Startup Develops AI for Mapping Tunnels and Mineshafts

RockMass is digging a niche for itself in mining and tunnels.

The Toronto-based startup is developing an NVIDIA AI-powered mapping platform that can help engineers assess tunnel stability in mines and construction.

Today, geologists and engineers visually assess the risks of rock formations by standing five meters away from the rock as a safety precaution. That isn’t ideal for ensuring accurate results, said Shelby Yee, CEO and co-founder of RockMass.

“What they are doing right now takes about 90 minutes, and our technology can do it in about five minutes,” said Yee.

RockMass is working with engineers in the field to test out its handheld unit, the Mapper. It’s aimed at those in mining, geological exploration and civil engineering. The startup is developing the AI platform for robots, drones and handheld devices used to capture geological data.

The startup’s Mapper AI device offers a safer way to keep engineers farther from a possible tunnel collapse, as well as a faster system for gathering and processing data. Robots and drones using its platform could go into even more hazardous areas.

RockMass customers include Brazilian mining company Nexa Resources, which seeks increased automation and safety with use of the startup’s technology.

AI for Geotech

Engineers have for years surveyed the angles of rock surfaces using conventional equipment such as a theodolite, a scope-like device on a tripod used to take optical measurements. They seek out so-called planes of weakness, which identify potential failure points within tunnels and rock formations.

The engineers measure the surfaces of rock formations to collect data for building what are known as stereonets. Stereonets map three-dimensional forms, such as a boulder, for viewing in a two-dimensional display.

Matt Gubasta (co-founder and CFO) testing instrumentation at an underground testing center in Sudbury, Ontario.

Engineers traditionally take the data from a site back to the office to transfer onto a computer to create a stereonet.

The startup’s technology promises an easier way. Its handheld device is packed with sensors for such measurements. Its lidar sensor and inertial measurement unit map the orientation of planes of weakness in rock formations. And it can do this in underground environments lacking GPS, wireless communication and light.

RockMass’s software relies on the information provided by these sensors to identify usable data for engineers within minutes. The company is working to capture and process the data on the spot for field engineers. “You are able to see the data in real time,” Yee said.
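
The geometry behind those measurements is straightforward to sketch. The example below, illustrative only and not RockMass’s algorithm, fits a plane to a patch of lidar points and converts its normal into the dip and dip-direction values a stereonet plots, assuming the points are already in an east-north-up frame after IMU orientation correction.

```python
# Fit a plane to lidar points on a rock face and report dip / dip direction.
# Generic structural-geology sketch; not RockMass's implementation.
import numpy as np

def plane_orientation(points_enu):
    """Return (dip_deg, dip_direction_deg) of the best-fit plane through points."""
    centered = points_enu - points_enu.mean(axis=0)
    # The plane normal is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    n = vt[-1]
    if n[2] < 0:                    # use the upward-pointing normal
        n = -n
    dip = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))
    dip_direction = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    return dip, dip_direction

# Synthetic patch of a rock face dipping about 60 degrees toward the east.
rng = np.random.default_rng(1)
east, north = rng.uniform(-1, 1, (2, 500))
up = -np.tan(np.radians(60)) * east + rng.normal(scale=0.01, size=500)
dip, dip_dir = plane_orientation(np.column_stack([east, north, up]))
print(f"dip ~{dip:.1f} deg, dip direction ~{dip_dir:.1f} deg")   # ~60, ~90
```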

‘Computationally Demanding’ AI

RockMass’s platform for onsite data collection is computationally demanding, said CTO and co-founder Stuart Bourne. The company’s devices sport robotics capabilities from NVIDIA Jetson and rely on its support for CUDA, cuDNN and TensorRT software libraries.

“Jetson has very high computational power relative to how much energy it draws,” Bourne said.

The startup enlists CUDA libraries to deliver real-time processing on the device, and works with the data on cloud instances running NVIDIA GPUs to process stereonets for customers.

“Nobody is able to collect and process the data in the way that we do,” Yee said. “We are able to process in the cloud in real time simply because of the power of the GPUs.”

RockMass plans to further develop its drones and robots to launch pilots next year.

Learn about our Jetson Xavier developer kit.

The post Keep on Rockin’: Startup Develops AI for Mapping Tunnels and Mineshafts appeared first on The Official NVIDIA Blog.

Putting Biopsies Under AI Microscope: Pathology Startup Fuels Shift Away from Physical Slides

Hundreds of millions of tissue biopsies are performed worldwide each year — most of which are diagnosed as non-cancerous. But for the few days or weeks it takes a lab to provide a result, uncertainty weighs on patients.

“Patients suffer emotionally, and their cancer is progressing as the clock ticks,” said David West, CEO of digital pathology startup Proscia.

That turnaround time could shrink dramatically. In recent years, the biopsy process has begun to digitize, with more and more pathologists looking at digital scans of body tissue instead of physical slides under a microscope.

Proscia, a member of our Inception virtual accelerator program, is hosting these digital biopsy specimens in the cloud. This makes specimen analysis borderless, with one hospital able to consult a pathologist in a different region. It also creates the opportunity for AI to assist experts as they analyze specimens and make their diagnoses.

“If you have the opportunity to read twice as many slides in the same amount of time, it’s an obvious win for the laboratories,” said West.

The Philadelphia-based company recently closed an $8.3 million Series A funding round, which will power its AI development and software deployment. And a feasibility study published last week demonstrated that Proscia’s deep learning software achieves over 99 percent accuracy in classifying three common types of skin pathologies.

Biopsy Analysis, Behind the Scenes

Pathologists have the weighty task of examining lab samples of body tissue to determine if they’re cancerous or benign. But depending on the type and stage of disease, two pathologists looking at the same tissue may disagree on a diagnosis more than half the time, West says.

These experts are also overworked and in short supply globally. Laboratories around the world have too many slides and not enough people to read them.

China has one pathologist per 80,000 patients, said West. And while the United States has one per 25,000 patients, it’s facing a decline as many pathologists are reaching retirement age. Many other countries have so few pathologists that they are “on the precipice of a crisis,” according to West.

He projects that 80 to 90 percent of major laboratories will have switched their biopsy analysis from microscopes to scanners in the next five years. Proscia’s subscription-based software platform aims to help pathologists more efficiently analyze these digital biopsy specimens, assisted by AI.

The company uses a range of NVIDIA Tesla GPUs through Amazon Web Services to power its digital pathology software and AI development. The platform is currently being used worldwide by more than 4,000 pathologists, scientists and lab managers to manage biopsy data and workflows.

Proscia’s digital pathology and AI platform displays a heat map analysis of this H&E stained skin tissue image.

In December, Proscia will release its first deep learning module, DermAI. This tool will be able to analyze skin biopsies and is trained to recognize roughly 70 percent of the pathologies a typical dermatology lab sees. Three other modules are currently under development.

Proscia works with both labeled and unlabeled data from clinical partners to train its algorithms. The labeled datasets, created by expert pathologists, are tagged with the overall diagnosis as well as more granular labels for specific tissue formations within each image.
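
Heat maps like the one pictured above are commonly produced by a patch-based workflow: tile the whole-slide image, score every tile with a classifier, and assemble the scores into a grid. The sketch below shows that generic pattern only; it is not Proscia’s DermAI, and the stand-in classifier is a placeholder for a trained CNN.

```python
# Generic patch-based heat map over a whole-slide image (illustrative only).
import numpy as np

TILE = 256  # tile edge length in pixels (assumption)

def heatmap(slide, classify_tile):
    """slide: HxWx3 array; classify_tile: fn returning a score for one tile."""
    h, w, _ = slide.shape
    grid = np.zeros((h // TILE, w // TILE), dtype=np.float32)
    for r in range(h // TILE):
        for c in range(w // TILE):
            tile = slide[r * TILE:(r + 1) * TILE, c * TILE:(c + 1) * TILE]
            grid[r, c] = classify_tile(tile)
    return grid

# Stand-in classifier: flags darkly stained tiles. A real system would run a
# trained CNN here, typically batched on a GPU.
def fake_classifier(tile):
    return float(tile.mean() < 100)

slide = np.random.randint(0, 255, size=(4096, 4096, 3), dtype=np.uint8)
print(heatmap(slide, fake_classifier).shape)   # (16, 16) score grid
```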

While biopsies can be ordered at multiple stages of treatment, Proscia focuses on the initial diagnosis stage, when doctors are looking at tissue and making treatment decisions.

“The AI is checking those cases as a virtual second opinion behind the scenes,” said West. This could lower the chances of missing tricky-to-spot cancers like melanoma, and make diagnoses more consistent among pathologists.

The post Putting Biopsies Under AI Microscope: Pathology Startup Fuels Shift Away from Physical Slides appeared first on The Official NVIDIA Blog.