Mass General’s Martinos Center Adopts AI for COVID, Radiology Research

Academic medical centers worldwide are building new AI tools to battle COVID-19 —  including at Mass General, where one center is adopting NVIDIA DGX A100 AI systems to accelerate its work.

Researchers at the hospital’s Athinoula A. Martinos Center for Biomedical Imaging are working on models to segment and align multiple chest scans, calculate lung disease severity from X-ray images, and combine radiology data with other clinical variables to predict outcomes in COVID patients.

Built and tested using Mass General Brigham data, these models, once validated, could be used together in a hospital setting during and beyond the pandemic to bring radiology insights closer to the clinicians tracking patient progress and making treatment decisions.

“While helping hospitalists on the COVID-19 inpatient service, I realized that there’s a lot of information in radiologic images that’s not readily available to the folks making clinical decisions,” said Matthew D. Li, a radiology resident at Mass General and member of the Martinos Center’s QTIM Lab. “Using deep learning, we developed an algorithm to extract a lung disease severity score from chest X-rays that’s reproducible and scalable — something clinicians can track over time, along with other lab values like vital signs, pulse oximetry data and blood test results.”

The Martinos Center uses a variety of NVIDIA AI systems, including NVIDIA DGX-1, to accelerate its research. This summer, the center will install NVIDIA DGX A100 systems, each built with eight NVIDIA A100 Tensor Core GPUs and delivering 5 petaflops of AI performance.

“When we started working on COVID model development, it was all hands on deck. The quicker we could develop a model, the more immediately useful it would be,” said Jayashree Kalpathy-Cramer, director of the QTIM Lab and the Center for Machine Learning at the Martinos Center. “If we didn’t have access to sufficient computational resources, it would’ve been impossible to do.”

Comparing Notes: AI for Chest Imaging

COVID patients often get imaging studies — usually CT scans in Europe, and X-rays in the U.S. — to check for the disease’s impact on the lungs. Comparing a patient’s initial study with follow-ups can be a useful way to understand whether a patient is getting better or worse.

But segmenting and lining up two scans that have been taken in different body positions or from different angles, with distracting elements like wires in the image, is no easy feat.

Bruce Fischl, director of the Martinos Center’s Laboratory for Computational Neuroimaging, and Adrian Dalca, assistant professor in radiology at Harvard Medical School, took the underlying technology behind Dalca’s MRI comparison AI and applied it to chest X-rays, training the model on an NVIDIA DGX system.

“Radiologists spend a lot of time assessing if there is change or no change between two studies. This general technique can help with that,” Fischl said. “Our model labels 20 structures in a high-resolution X-ray and aligns them between two studies, taking less than a second for inference.”
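The alignment Fischl describes is, at its core, an image registration problem: predict how one study must be deformed to line up with the other, then apply that deformation to the image and its structure labels. The snippet below is a minimal PyTorch sketch of only the final warping step, not the Martinos Center's code; the registration network (reg_net) that would predict the displacement field is a hypothetical placeholder.

# Minimal sketch of warping a follow-up chest X-ray into the frame of a
# baseline study, given a dense displacement field from a registration
# network. Illustrative only; reg_net and the image loading are placeholders.
import torch
import torch.nn.functional as F

def make_identity_grid(h, w, device):
    # Normalized coordinates in [-1, 1], as expected by grid_sample.
    ys = torch.linspace(-1, 1, h, device=device)
    xs = torch.linspace(-1, 1, w, device=device)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    return torch.stack((gx, gy), dim=-1).unsqueeze(0)  # (1, H, W, 2)

def warp(image, displacement):
    # image: (N, 1, H, W); displacement: (N, H, W, 2) in normalized units.
    n, _, h, w = image.shape
    grid = make_identity_grid(h, w, image.device) + displacement
    return F.grid_sample(image, grid, mode="bilinear", align_corners=True)

# Hypothetical usage once a registration network exists:
# displacement = reg_net(baseline, followup)              # (N, H, W, 2)
# followup_aligned = warp(followup, displacement)          # follow-up in baseline frame
# labels_aligned = warp(followup_labels, displacement)     # carry structure labels along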

This tool can be used in concert with Li and Kalpathy-Cramer’s research: a risk assessment model that analyzes a chest X-ray to assign a score for lung disease severity. The model can provide clinicians, researchers and infectious disease experts with a consistent, quantitative metric for lung impact, which is described subjectively in typical radiology reports.

Trained on a public dataset of over 150,000 chest X-rays, as well as a few hundred COVID-positive X-rays from Mass General, the severity score AI is now being tested by four research groups at the hospital through the NVIDIA Clara Deploy SDK. Beyond the pandemic, the team plans to expand the model’s use to more conditions, like pulmonary edema, or wet lung.
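The published model is described in the team's Radiology: Artificial Intelligence paper; purely as a generic illustration of the idea, here is a short PyTorch sketch of regressing a single severity number from a chest X-ray with a pretrained backbone. The backbone choice, input size and training details are assumptions, not the published method.

# Generic sketch: regress one lung disease severity value from a chest X-ray
# using a pretrained CNN backbone. Not the PXS model from the paper.
import torch
import torch.nn as nn
from torchvision import models

class SeverityRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # one severity value
        self.backbone = backbone

    def forward(self, x):                 # x: (N, 3, H, W) chest X-rays
        return self.backbone(x).squeeze(1)

model = SeverityRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One hypothetical training step against reference severity labels.
images = torch.randn(4, 3, 224, 224)            # stand-in batch
targets = torch.tensor([1.0, 3.0, 0.0, 5.0])    # stand-in severity labels
loss = loss_fn(model(images), targets)
loss.backward()
optimizer.step()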

Comparing the AI-derived pulmonary X-ray severity (PXS) score between images taken at different stages can help clinicians track changes in a patient’s disease over time. (Image from the researchers’ paper in Radiology: Artificial Intelligence, available under open access.)

Foreseeing the Need for Ventilators

Chest imaging is just one variable in a COVID patient’s health. For the broader picture, the Martinos Center team is working with Brandon Westover, executive director of Mass General Brigham’s Clinical Data Animation Center.

Westover is developing AI models that predict clinical outcomes for both admitted patients and outpatient COVID cases, and Kalpathy-Cramer’s lung disease severity score could be integrated as one of the clinical variables for this tool.

The outpatient model analyzes 30 variables to create a risk score for each of hundreds of patients screened at the hospital network’s respiratory infection clinics — predicting the likelihood a patient will end up needing critical care or dying from COVID.

For patients already admitted to the hospital, a neural network predicts the hourly risk that a patient will require artificial breathing support in the next 12 hours, using variables including vital signs, age, pulse oximetry data and respiratory rate.
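The article doesn't detail Westover's architecture, so the snippet below is only a generic sketch of that kind of model: a recurrent network that reads a patient's recent hourly measurements and outputs the probability that breathing support will be needed within the next 12 hours. The feature count and layer sizes are assumptions.

# Generic sketch of an hourly deterioration-risk model: a GRU reads recent
# hourly measurements (vitals, age, pulse oximetry, respiratory rate, ...)
# and outputs a probability of needing breathing support in the next 12 hours.
import torch
import torch.nn as nn

class VentilationRiskModel(nn.Module):
    def __init__(self, n_features=12, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                # x: (batch, hours, n_features)
        _, h = self.gru(x)               # h: (1, batch, hidden)
        return torch.sigmoid(self.head(h[-1])).squeeze(1)  # risk in [0, 1]

model = VentilationRiskModel()
window = torch.randn(2, 24, 12)          # two patients, last 24 hours of features
print(model(window))                     # hourly-updated risk scores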

“These variables can be very subtle, but in combination can provide a pretty strong indication that a patient is getting worse,” Westover said. Running on an NVIDIA Quadro RTX 8000 GPU, the model is accessible through a front-end portal clinicians can use to see who’s most at risk, and which variables are contributing most to the risk score.

Better, Faster, Stronger: Research on NVIDIA DGX

Fischl says NVIDIA DGX systems help Martinos Center researchers more quickly iterate, experimenting with different ways to improve their AI algorithms. DGX A100, with NVIDIA A100 GPUs based on the NVIDIA Ampere architecture, will further speed the team’s work with third-generation Tensor Core technology.

“Quantitative differences make a qualitative difference,” he said. “I can imagine five ways to improve our algorithm, each of which would take seven hours of training. If I can turn those seven hours into just an hour, it makes the development cycle so much more efficient.”

The Martinos Center will use NVIDIA Mellanox switches and VAST Data storage infrastructure, enabling its developers to use NVIDIA GPUDirect technology to bypass the CPU and move data directly into or out of GPU memory, achieving better performance and faster AI training.

“Having access to this high-capacity, high-speed storage will allow us to analyze raw multimodal data from our research MRI, PET and MEG scanners,” said Matthew Rosen, assistant professor in radiology at Harvard Medical School, who co-directs the Center for Machine Learning at the Martinos Center. “The VAST storage system, when linked with the new A100 GPUs, is going to offer an amazing opportunity to set a new standard for the future of intelligent imaging.”

To learn more about how AI and accelerated computing are helping healthcare institutions fight the pandemic, visit our COVID page.

Main image shows a chest X-ray and corresponding heat map, highlighting areas with lung disease. Image from the researchers’ paper in Radiology: Artificial Intelligence, available under open access.

The post Mass General’s Martinos Center Adopts AI for COVID, Radiology Research appeared first on The Official NVIDIA Blog.

Nerd Watching: GPU-Powered AI Helps Researchers Identify Individual Birds

Anyone can tell an eagle from an ostrich. It takes a skilled birdwatcher to tell a chipping sparrow from a house sparrow from an American tree sparrow.

Now researchers are using AI to take this to the next level — identifying individual birds.

André Ferreira, a Ph.D. student at France’s Centre for Functional and Evolutionary Ecology, harnessed an NVIDIA GeForce RTX 2070 to train a powerful AI that identifies individual birds within the same species.

It’s the latest example of how deep learning has become a powerful tool for wildlife biologists studying a wide range of animals.

Marine biologists with the U.S. National Oceanic and Atmospheric Administration use deep learning to identify and track the endangered North Atlantic right whale. Zoologist Dan Rubenstein uses deep learning to distinguish between individuals in herds of Grevy’s zebras.

The sociable weaver isn’t endangered. But understanding the role of an individual in a group is key to understanding how the birds, native to Southern Africa, work together to build their nests.

The problem: it’s hard to tell the small, rust-colored birds apart, especially when trying to capture their activities in the wild.

In a paper released last week, Ferreira detailed how he and a team of researchers trained a convolutional neural network to identify individual birds.

Ferreira built his model using Keras, a popular open-source neural network library, running on a GeForce RTX 2070 GPU.

He then teamed up with researchers at Germany’s Max Planck Institute of Animal Behavior. Together, they adapted the model to identify wild great tits and captive zebra finches, two other widely studied bird species.

To train their models — a crucial step towards building any modern deep-learning-based AI — researchers made feeders equipped with cameras.

The researchers fitted birds with electronic tags, which triggered sensors in the feeders alerting researchers to the bird’s identity.

This data gave the model a “ground truth” that it could check against for accuracy.
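As a rough sketch of what such a classifier can look like in Keras (the library the article says Ferreira used), the snippet below maps a cropped feeder-camera image to one of N known individuals. The layer sizes, image size and number of individuals are assumptions; the actual architecture and training procedure are described in the researchers' paper.

# Minimal Keras sketch of the kind of classifier described: a small CNN that
# assigns a cropped feeder-camera image to one of N tagged individual birds.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_BIRDS = 30   # number of tagged individuals (assumed)

model = models.Sequential([
    tf.keras.Input(shape=(128, 128, 3)),            # cropped feeder-camera image
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(NUM_BIRDS, activation="softmax"),   # one class per individual
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Labels come from the tag-triggered feeders, which record which bird was
# present when each image was captured (the "ground truth" described above).
# model.fit(images, bird_ids, epochs=20, batch_size=32)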

The team’s AI was able to identify individual sociable weavers and wild great tits more than 90 percent of the time. And it identified captive zebra finches 87 percent of the time.

For bird researchers, the work promises several key benefits.

Using cameras and other sensors to track birds allows researchers to study bird behavior much less invasively.

And with fewer people needed in the field, the technique lets researchers track bird behavior over longer periods.

Next: Ferreira and his colleagues are working to build AI that can recognize individual birds it has never seen before, and better track groups of birds.

Birdwatching may never be the same.

Featured image credit: Bernard DuPont, some rights reserved.

The post Nerd Watching: GPU-Powered AI Helps Researchers Identify Individual Birds appeared first on The Official NVIDIA Blog.

Media Alert: Intel at DEF CON 28

Join Intel experts for panel discussions and talks at DEF CON 28, a virtual event taking place through this weekend. Learn how Intel, together with partners and customers, is building the trusted foundation for computing in a data-centric world.

DEF CON 28

When: Aug. 7-9, 2020

Where: Virtual Event


Katie Noble

Building Connections across the Aviation Ecosystem

There is a growing effort to collaborate and coordinate on protecting information technology and operational technology systems across airports, airlines, aviation management, and manufacturers and vendors throughout the supply chain. Katie Noble, director of PSIRT and Bug Bounty at Intel, moderates a panel of experts, including Randy Talley (CISA), Sidd Gejji (FAA), Al Burke (Department of Defense), Jen Ellis (Rapid7), Jeff Troy (Aviation ISAC) and John Craig (Boeing). They will share their insights and current activities among government, industry and the security research community.

When: Friday, Aug. 7, 1-2 p.m. PDT

Where: https://bit.ly/3gCsE6j

Registration: Free

Anders Fogh

Katie Noble

The Joy of Coordinating Vulnerability Disclosure

Under the best of circumstances, coordinating vulnerability disclosures can be a challenge.  In a panel discussion moderated by Christopher “CRob” Robinson (Red Hat), Katie Noble, director of PSIRT and Bug Bounty at Intel, and Anders Fogh, senior principal engineer in the security division at Intel, join Lisa Bradley (Dell), Omar Santos (Cisco) and Daniel Gruss (TU Graz). They will share experiences and show how researchers and technology companies together can improve the impact of disclosing vulnerabilities within the technology ecosystem.

When: Friday, Aug. 7, 10:30-11:30 a.m. and 6:30-7:30 p.m. PDT

Where:  https://www.twitch.tv/redteamvillage (10:30 a.m. session); https://www.twitch.tv/iotvillage (6:30 p.m. session)

Registration: Free

Dr. Amit Elazari

Anahit Tarkhanyan

The Future of IoT Security ‘Baselines,’ Standards and Regulatory Domain

Proposed initiatives and standards in IoT security are shaping the industry at a fast pace and on a global scale. In this talk, Dr. Amit Elazari, Intel director of global cybersecurity policy, and Anahit Tarkhanyan, Intel platform architect, will introduce a variety of regulatory concepts and baseline proposals that are shaping the future of IoT security. They’ll focus on recent trends, including NISTIR 8259, C2, international standards, supply chain transparency, researchers’ collaboration, proposed legislation, coordinated vulnerability disclosure and innovative capabilities that can support and enhance development from the foundation up.

When: Saturday, Aug. 8, 2:30-3:15 p.m. PDT

Where:  https://www.twitch.tv/iotvillage

Registration: Free


Contact:

Jennifer Foss
Intel
425-765-3485
Jennifer.foss@intel.com

The post Media Alert: Intel at DEF CON 28 appeared first on Intel Newsroom.

AI Goes Uptown: A Tour of Smart Cities Around the Globe 

There are as many ways to define a smart city as there are cities on the road to being smart.

From London and Singapore to Seat Pleasant, Maryland, they vary widely. Most share some common characteristics.

Every city wants to be smart about being a great place to live. So, many embrace broad initiatives for connecting their citizens to the latest 5G and fiber optic networks, expanding digital literacy and services.

Most agree that a big part of being smart means using technology to make their cities more self-aware, automated and efficient.

That’s why a smart city is typically a kind of municipal Internet of Things — a network of cameras and sensors that can see, hear and even smell. These sensors, especially video cameras, generate massive amounts of data that can serve many civic purposes like helping traffic flow smoothly.

Cities around the globe are turning to AI to sift through that data in real time for actionable insights. And, increasingly, smart cities build realistic 3D simulations of themselves, digital twins to test out ideas of what they might look like in the future.

“We define a smart city as a place applying advanced technology to improve the quality of life for people who live in it,” said Sokwoo Rhee, who’s worked on more than 200 smart city projects in 25 countries as an associate director for cyber-physical systems innovation at the U.S. National Institute of Standards and Technology.

U.S., London Issue Smart Cities Guidebooks

At NIST, Rhee oversees work on a guide for building smart cities. Eventually it will include reports on issues and case studies in more than two dozen areas from public safety to water management systems.

Across the pond, London describes its smart city efforts in a 60-page document that details many ambitious goals. Like smart cities from Dubai to San Jose in Silicon Valley, it’s a metro-sized work in progress.

An image from the Smart London guide.

“We are far from the ideal at the moment with a multitude of systems and a multitude of vendors making the smart city still somewhat complex and fragmented,” said Andrew Hudson-Smith, who is chair of digital urban systems at The Centre for Advanced Spatial Analysis at University College London and sits on a board that oversees London’s smart city efforts.

Living Labs for AI

In a way, smart cities are both kitchen sinks and living labs of technology.

They host everything from air-quality monitoring systems to repositories of data cleared for use in shared AI projects. The London Datastore, for example, already contains more than 700 publicly available datasets.

One market researcher tracks a basket of 13 broad areas that define a smart city, from smart streetlights to connected garbage cans. A smart-parking vendor in Stockholm took into account 24 factors — including the number of Wi-Fi hotspots and electric-vehicle charging stations — in its 2019 ranking of the world’s 100 smartest cities. (Its top five were all in Scandinavia.)

“It’s hard to pin it down to a limited set of technologies because everything finds its way into smart cities,” said Dominique Bonte, a managing director at market watcher ABI Research. Among popular use cases, he called out demand-response systems as “a huge application for AI because handling fluctuating demand for electricity and other services is a complex problem.”

Sweden’s EasyPark lists 24 factors that define a smart city.

Because it’s broad, it’s also big. Market watchers at Navigant Research expect the global market for smart-city gear to grow from $97.4 billion in annual revenue in 2019 to $265.4 billion by 2028 at a compound annual growth rate of 11.8 percent.

It’s still early days. In a January 2019 survey of nearly 40 U.S. local and state government managers, more than 80 percent thought a municipal Internet of Things will have significant impact for their operations, but most were still in a planning phase and less than 10 percent had active projects.

Most smart cities are still under construction, according to a NIST survey.

“Smart cities mean many things to many people,” said Saurabh Jain, product manager of Metropolis, NVIDIA’s GPU software stack for vertical markets such as smart cities.

“Our focus is on building what we call the AI City with the real jobs that can be done today with deep learning, tapping into the massive video and sensor datasets cities generate,” he said.

For example, Verizon deployed video nodes built on the NVIDIA Jetson TX1 on existing streetlights in Boston and Sacramento to analyze and improve traffic flow, enhance pedestrian safety and optimize parking.

“Rollout is happening fast across the globe and cities are expanding their lighting infrastructure to become a smart-city platform … helping to create efficiency savings and a new variety of citizen services,” said David Tucker, head of product management in the Smart Communities Group at Verizon in a 2018 article.

Smart Streetlights for Smart Cities

Streetlights will be an important part of the furniture of tomorrow’s smart city.

So far, only a few hundred are outfitted with various mixes of sensors and Wi-Fi and cellular base stations. The big wave is yet to come as the estimated 360 million streetlight poles around the world are slowly upgraded to energy-saving LED lights.

A European take on a smart streetlight.

In a related effort, the city of Bellevue, Washington, tested a computer vision system from Microsoft Research to improve traffic safety and reduce congestion. Researchers at the University of Wollongong recently described similar work using NVIDIA Jetson TX2 modules to track the flow of vehicles and pedestrians in Liverpool, Australia.

Airports, retail stores and warehouses are already using smart cameras and AI to run operations more efficiently. They are defining a new class of edge computing networks that smart cities can leverage.

For example, Seattle-Tacoma International Airport (SEA) will roll out an AI system from startup Assaia that uses NVIDIA GPUs to speed the time to turn around flights.

“Video analytics is crucial in providing full visibility over turnaround activities as well as improving safety,” said an SEA manager in a May report.

Nashville, Zurich Explore the Simulated City

Some smart cities are building digital twins, 3D simulations that serve many purposes.

For example, both Zurich and Nashville will someday let citizens and city officials don goggles at virtual town halls to see simulated impacts of proposed developments.

“The more immersive and fun an experience, the more you increase engagement,” said Dominik Tarolli, director of smart cities at Esri, which is supplying simulation software that runs on NVIDIA GPUs for both cities.

Cities as far apart in geography and population as Singapore and Rennes, France, built digital twins using a service from Dassault Systèmes.

“We recently signed a partnership with Hong Kong and presented examples for a walkability study that required a 3D simulation of the city,” said Simon Huffeteau, a vice president working on smart cities for Dassault.

Europe Keeps an AI on Traffic

Many smart cities get started with traffic control. London uses digital signs to post speed limits that change to optimize traffic flow. It also uses license-plate recognition to charge tolls for entering a low-emission zone in the city center.

Cities in Belgium and France are considering similar systems.

“We think in the future cities will ban the most polluting vehicles to encourage people to use public transportation or buy electric vehicles,” said Bonte of ABI Research. “Singapore is testing autonomous shuttles on a 5.7-mile stretch of its streets,” he added.

Nearby, Jakarta uses a traffic-monitoring system from Nodeflux, a member of NVIDIA’s Inception program that nurtures AI startups. The software taps AI and the nearly 8,000 cameras already in place around Jakarta to recognize license plates of vehicles with unpaid taxes.

The system is one of more than 100 third-party applications that run on Metropolis, NVIDIA’s application framework for the Internet of Things.

Unsnarling Traffic in Israel and Kansas City

Traffic was the seminal app for a smart-city effort in Kansas City that started in 2015 with a $15 million smart streetcar. Today, residents can call up digital dashboards detailing current traffic conditions around town.

And in Israel, the city of Ashdod deployed AI software from viisights that finds patterns in a traffic-monitoring system powered by NVIDIA Metropolis, helping keep citizens safe.

NVIDIA created the AI City Challenge to advance work on deep learning as a tool to unsnarl traffic. Now in its fourth year, it draws nearly 1,000 researchers competing in more than 300 teams that include members from multiple city and state traffic agencies.

The event spawned CityFlow, one of the world’s largest datasets for applying AI to traffic management. It consists of more than three hours of synchronized high-definition videos from 40 cameras at 10 intersections, creating 200,000 annotated bounding boxes around vehicles captured from different angles under various conditions.

Drones to the Rescue in Maryland

You don’t have to be a big city with lots of money to be smart. Seat Pleasant, Maryland, a Washington, D.C., suburb of less than 5,000 people, launched a digital hub for city services in August 2017.

Since then, it has installed intelligent lighting, connected waste cans, home health monitors and video analytics to save money, improve traffic safety and reduce crime. It’s also become the first U.S. city to use drones for public safety, including plans for life-saving delivery of emergency medicines.

The idea got its start when Mayor Eugene Grant, searching for ways to recover from the 2008 economic downturn, attended an event on innovation villages.

“Seat Pleasant would like to be a voice for small cities in America where 80 percent have less than 10,000 residents,” said Grant. “Look at these cities as test beds of innovation … living labs,” he added.

Mayor Grant of Seat Pleasant aims to set an example of how small towns can become smart cities.

Rhee of NIST agrees. “I’m seeing a lot of projects embracing a broadening set of emerging technologies, making smart cities like incubation programs for new businesses like air taxis and autonomous vehicles that can benefit citizens,” he said, noting that even rural communities will get into the act.

Simulating a New Generation of Smart Cities

When the work is done, go to the movies. Hollywood might provide a picture of the next horizon in the same way it sparked some of the current work.

Esri’s tools are used to simulate cities for movies as well as the real world.

Flicks including Blade Runner 2049, Cars, Guardians of the Galaxy and Zootopia used a program called CityEngine from startup Procedural that enables a rule-based approach to constructing simulated cities.

Their work caught the eye of Esri, which acquired the company and bundled its program with its ArcGIS Urban planning tool, now a staple for hundreds of real cities worldwide.

“Games and movies make people expect more immersive experiences, and that requires more computing,” said Tarolli, a co-founder of Procedural and now Esri’s point person on smart cities.

The post AI Goes Uptown: A Tour of Smart Cities Around the Globe  appeared first on The Official NVIDIA Blog.

Deep Learning on Tap: NVIDIA Engineer Turns to AI, GPU to Invent New Brew

Some dream of code. Others dream of beer. NVIDIA’s Eric Boucher does both at once, and the result couldn’t be more aptly named.

Full Nerd #1 is a crisp, light-bodied blonde ale perfect for summertime quaffing.

Eric, an engineer in the GPU systems software kernel driver team, went to sleep one night in May wrestling with two problems.

One, he had to wring key information from the often cryptic logs for the systems he oversees to help his team respond to issues faster.

The other: the veteran home brewer wanted a way to brew new kinds of beer.

“I woke up in the morning and I knew just what to do,” Boucher said. “Basically I got both done on one night’s broken sleep.”

Both solutions involved putting deep learning to work on an NVIDIA TITAN V GPU. Such powerful gear tends to encourage this sort of parallel processing, it seems.

Eric, a native of France now based near Sacramento, Calif., began homebrewing two decades ago, inspired by a friend and mentor at Sun Microsystems. He took a break from it when his children were first born.

Now that they’re older, he’s begun brewing again in earnest, using gear in both his garage and backyard, turning to AI for new recipes this spring.

Of course, AI has been used in the past to help humans analyze beer flavors, and even create wild new craft beer names. Eric’s project, however, is more ambitious, because it’s relying on AI to create new beer recipes.

You’ve Got Ale — GPU Speeds New Brew Ideas

For training data, Eric started with the all-grain ale recipes from MoreBeer, a hub for brewing enthusiasts, where he usually shops for recipe kits and ingredients.

Eric focused on ales because they’re relatively easy and quick to brew, and encompass a broad range of different styles, from hearty Irish stout to tangy and refreshing Kölsch.

He used wget — an open source program that retrieves content from the web — to save four index pages of ale recipes.

Then, using a Python script, he filtered the downloaded HTML pages and downloaded the linked recipe PDFs. He then converted the PDFs to plain text and used another Python script to interpret the text and generate recipes in a standardized format.
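For the curious, a rough sketch of that scraping and conversion step might look like the following; the paths, URL pattern and choice of pdfminer.six are assumptions rather than details from Eric's actual scripts.

# Rough sketch of the step described above: pull recipe PDF links out of
# saved index pages, download them, and convert each to plain text.
import re
import pathlib
import urllib.request
from pdfminer.high_level import extract_text

pathlib.Path("pdfs").mkdir(exist_ok=True)
pathlib.Path("text").mkdir(exist_ok=True)

for index_page in pathlib.Path("indexes").glob("*.html"):
    html = index_page.read_text(errors="ignore")
    for url in re.findall(r'href="([^"]+\.pdf)"', html):   # linked recipe PDFs
        name = url.rsplit("/", 1)[-1]
        urllib.request.urlretrieve(url, f"pdfs/{name}")
        text = extract_text(f"pdfs/{name}")                # PDF to plain text
        pathlib.Path(f"text/{name}.txt").write_text(text)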

He fed these 108 recipes — including one for Russian River Brewing’s legendary Pliny the Elder IPA — to textgenrnn, a library built on a recurrent neural network, the kind of network that reads through a sequence of data and learns to guess what should come next.

And, because no one likes to wait for good beer, he ran it on an NVIDIA TITAN V GPU. Eric estimates it cut the time to learn from the recipe database from one hour and 45 minutes on a CPU alone to just seven minutes.
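In practice, a minimal textgenrnn workflow along those lines could look like this; the file name, epoch count and sampling temperature are illustrative guesses, not Eric's settings.

# Minimal sketch of the textgenrnn workflow described: train on a plain-text
# file of standardized ale recipes, then sample new candidate recipes.
from textgenrnn import textgenrnn

textgen = textgenrnn()

# Train a fresh model on the standardized recipe text (hypothetical path).
textgen.train_from_file(
    "ale_recipes.txt",
    new_model=True,
    num_epochs=20,
)

# Sample candidates; lower temperature keeps output closer to the training data.
textgen.generate(10, temperature=0.5)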

After a little tuning, Eric generated 10 beer recipes. They ranged from dark stouts to yellowish ales, and in flavor from bitter to light.

To Eric’s surprise, most looked reasonable (though a few were “plain weird and impossible to brew” like a recipe that instructed him to wait 45 days with hops in the wort, or unfermented beer, before adding the yeast).

Speed of Light (Beer)

With the approaching hot California summer in mind, Eric selected a blonde ale.

He was particularly intrigued because the recipe suggested adding Warrior, Cascade, and Amarillo hops — the flowers of the herbaceous perennial Humulus lupulus that give good beer a range of flavors, from bitter to citrusy — on what he called an “intriguing schedule.”

The result, Eric reports, was refreshing, “not too sweet, not too bitter,” with “a nice, fresh hops smell and a long, complex finish.”

He dubbed the result Full Nerd #1.

The AI-generated brew became the latest in a long line of brews with witty names Eric has produced, including a bourbon oak-infused beer named, appropriately enough, “The Groot Beer,” in honor of the tree-like creature from Marvel’s “Guardians of the Galaxy.”

Eric’s next AI brewing project: perhaps a dark stout, for winter, or a lager, a light, crisp beer that requires months of cold storage to mature.

For now, however, there’s plenty of good brew to drink. Perhaps too much. Eric usually shares his creations with his martial arts buddies. But with social distancing in place amidst the global COVID-19 pandemic, the five gallons, or forty pints, is more than the light drinker knows what to do with.

Eric, it seems, has found a problem deep learning can’t help him with. Bottoms up.

The post Deep Learning on Tap: NVIDIA Engineer Turns to AI, GPU to Invent New Brew appeared first on The Official NVIDIA Blog.

LLVM 10.0.0 imported into -current

With this commit and several more, Patrick Wildt (patrick@) upgraded -current to version 10.0.0 of LLVM:

CVSROOT:	/cvs
Module name:	src
Changes by:	patrick@cvs.openbsd.org	2020/08/03 08:30:27

Log message:
    Import LLVM 10.0.0 release including clang, lld and lldb.
    
    ok hackroom
    tested by plenty
    
    Status:
    
    Vendor Tag:	LLVM
    Release Tags:	LLVM_10_0_0
[…]

Fleet Dreams Are Made of These: TuSimple and Navistar to Build Autonomous Trucks Powered by NVIDIA DRIVE

Self-driving trucks are coming to an interstate near you.

Autonomous trucking startup TuSimple and truck maker Navistar recently announced they will build self-driving semi trucks, powered by the NVIDIA DRIVE AGX platform. The collaboration is one of the first to develop autonomous trucks, set to begin production in 2024.

Over the past decade, self-driving truck developers have relied on traditional trucks retrofitted with the sensors, hardware and software necessary for autonomous driving. Building these trucks from the ground up, however, allows companies to custom-build them for the needs of a self-driving system as well as take advantage of the infrastructure of a mass production truck manufacturer.

This transition is the first step from research to widespread deployment, said Chuck Price, chief product officer at TuSimple.

“Our technology, developed in partnership with NVIDIA, is ready to go to production with Navistar,” Price said. “This is a significant turning point for the industry.”

Tailor-Made Trucks

Developing a truck to drive on its own takes more than a software upgrade.

Autonomous driving relies on redundant and diverse deep neural networks, all running simultaneously to handle perception, planning and actuation. This requires massive amounts of compute.

The NVIDIA DRIVE AGX platform delivers high-performance, energy-efficient compute to enable AI-powered and autonomous driving capabilities. TuSimple has been using the platform in its test vehicles and pilots, such as its partnership with the United States Postal Service.

Building dedicated autonomous trucks makes it possible for TuSimple and Navistar to develop a centralized architecture optimized for the power and performance of the NVIDIA DRIVE AGX platform. The platform is also automotive grade, meaning it is built to withstand the wear and tear of years driving on interstate highways.

Invaluable Infrastructure

In addition to a customized architecture, developing an autonomous truck in partnership with a manufacturer opens up valuable infrastructure.

Truck makers like Navistar provide nationwide support for their fleets, with local service centers and vehicle tracking. This network is crucial for deploying self-driving trucks that will criss-cross the country on long-haul routes, providing seamless and convenient service to maintain efficiency.

TuSimple is also building out an HD map network of the nation’s highways for the routes its vehicles will travel. Combined with the widespread fleet management network, this infrastructure makes its autonomous trucks appealing to a wide variety of partners — UPS, U.S. Xpress, Penske Truck Leasing and food service supply chain company McLane Inc., a Berkshire Hathaway company, have all signed on to this autonomous freight network.

And backed by the performance of NVIDIA DRIVE AGX, these vehicles will continue to improve, delivering safer, more efficient logistics across the country.

“We’re really excited as we move into production to have a partner like NVIDIA with us the whole way,” Price said.

The post Fleet Dreams Are Made of These: TuSimple and Navistar to Build Autonomous Trucks Powered by NVIDIA DRIVE appeared first on The Official NVIDIA Blog.

Stop the Bleeding: AI Startup Deep01 Helps Physicians Evaluate Brain Hemorrhage

During a stroke, a patient loses an estimated 1.9 million brain cells every minute, so interpreting their CT scan even one second quicker is vital to maintaining their health.

To save precious time, Taiwan-based startup Deep01 has created AI-based medical imaging software called DeepCT to evaluate acute intracerebral hemorrhage (ICH), a type of stroke. The system works with 95 percent accuracy in just 30 seconds per case — about 10 times faster than competing methods.

Founded in 2016, Deep01 is the first AI company in Asia to have FDA clearances in both the U.S. and Taiwan. It’s a member of NVIDIA Inception, a program that helps startups develop, prototype and deploy their AI or data science technology and get to market faster.

The startup recently raised around $3 million for DeepCT, which detects suspected areas of bleeding around the brain and annotates where they’re located on CT scans, notifying physicians of the results.

The software was trained on 60,000 medical images displaying all types of acute ICH. Deep01 uses a self-developed deep learning framework that processes the images and trains the model on NVIDIA GPUs.

“Working with NVIDIA’s robust AI computing hardware, in addition to software frameworks like TensorFlow and PyTorch, allows us to deliver excellent AI inference performance,” said David Chou, founder and CEO of the company.
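Deep01 hasn't published its model here, so purely as an illustration of the task, the sketch below fine-tunes a pretrained backbone as a slice-level classifier for suspected bleeding. Every detail of it (backbone, preprocessing, training step) is an assumption, not Deep01's method.

# Illustration only: a slice-level classifier for suspected intracranial
# hemorrhage, built on a pretrained backbone. Not Deep01's model.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet34(weights=models.ResNet34_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)   # logit for "bleed present"

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

# Hypothetical training step on windowed, 3-channel CT slices.
slices = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,)).float()
loss = loss_fn(backbone(slices).squeeze(1), labels)
loss.backward()
optimizer.step()

# Localizing where the bleed is (as DeepCT's annotations do) would require a
# separate detection or segmentation head, which is not shown here.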

Making Quick Diagnosis Accessible and Affordable

Strokes are the world’s second-most common cause of death. When stroke patients are ushered into the emergency room, doctors must quickly determine whether the brain is bleeding and what next steps for treatment should be.

However, many hospitals lack enough manpower to perform such timely diagnoses, since only some emergency room doctors specialize in reading CT scans. Because of this, Deep01 was founded, according to Chou, with the mission of offering affordable AI-based solutions to medical institutions.

The 30-second speed with which DeepCT completes interpretation can help medical practitioners prioritize the patients in most urgent need for treatment.

Helpful for Facilities of All Types and Sizes

DeepCT has helped doctors evaluate more than 5,000 brain scans and is being used in nine medical institutions in Taiwan, ranging from small hospitals to large-scale medical centers.

“The lack of radiologists is a big issue even in large-scale medical centers like the one I work at, especially during late-night shifts when fewer staff are on duty,” said Tseng-Lung Yang, senior radiologist at Kaohsiung Veterans General Hospital in Taiwan.

Geng-Wang Liaw, an emergency physician at Yeezen General Hospital — a smaller facility in Taiwan — agreed that Deep01’s technology helps relieve physical and mental burdens for doctors.

“Doctors in the emergency room may misdiagnose a CT scan at times,” he said. “Deep01’s solution stands by as an assistant 24/7, to give doctors confidence and reduce the possibility for medical error.”

Beyond ICH, Deep01 is working to expand its technology to identify midline shift, a pathological finding that occurs when there’s increased pressure on the brain and is associated with increased mortality.

The post Stop the Bleeding: AI Startup Deep01 Helps Physicians Evaluate Brain Hemorrhage appeared first on The Official NVIDIA Blog.

Intel and VMware Extend Virtualization to Radio Access Network for 5G

Intel and VMware Inc. today announced a collaboration on an integrated software platform for virtualized radio access networks (RAN) to accelerate the rollout of both existing LTE and future 5G networks.

More: 5G & Wireless Communications News

As communications service providers (CoSPs) evolve their networks to support the rollout of future 5G networks, they are increasingly adopting a software-defined, virtualized infrastructure. Virtualization of the core network enables CoSPs to reduce operational costs and bring services to market faster.

“Many CoSPs are choosing to extend the benefits of network virtualization into the RAN for increased agility as they roll out new 5G services, but the software integration can be rather complex. With the integrated vRAN platform, combined with leading technology and expertise from Intel and VMware, CoSPs are positioned to benefit from accelerated time to deployment of innovative services at the edge of their network,” said Dan Rodriguez, corporate vice president and general manager of the Network Platforms Group at Intel.

Specific use cases and the full news release can be found on VMware’s website.

The post Intel and VMware Extend Virtualization to Radio Access Network for 5G appeared first on Intel Newsroom.