Heads Up, Down Under: Sydney Suburb Enhances Livability with Traffic Analytics

With a new university campus nearby and an airport under construction, the city of Liverpool, Australia, 27 kilometers southwest of Sydney, is growing fast.

More than 30,000 people are expected to commute daily to its central business district. Liverpool needed to know the likely impact on traffic flow and on the movement of pedestrians, cyclists and vehicles.

The city already operates a network of closed-circuit television cameras to monitor safety and security. Each camera captures large volumes of video that, due to stringent privacy regulations, is mainly combed through only after an incident has been reported.

The challenge before the city was to turn this massive dataset into information that could help it run more efficiently, handle an influx of commuters and keep the place livable for residents — without compromising anyone’s privacy.

To achieve this goal, the city has partnered with the Digital Living Lab of the University of Wollongong. Part of Wollongong’s SMART Infrastructure Facility, the DLL has developed what it calls the Versatile Intelligent Video Analytics platform. VIVA, for short, unlocks data so that owners of CCTV networks can access real-time, privacy-compliant data to make better informed decisions.

VIVA is designed to convert existing infrastructure into edge-computing devices embedded with the latest AI. The platform’s state-of-the-art deep learning algorithms are developed at DLL on the NVIDIA Metropolis platform. Their video analytics deep-learning models are trained using transfer learning to adapt to use cases, optimized via NVIDIA TensorRT software and deployed on NVIDIA Jetson edge AI computers.
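
DLL hasn’t published its training code, but transfer learning of this kind typically follows a standard recipe: start from a pretrained backbone, freeze most of it and retrain a small head for the new classes. Below is a minimal PyTorch sketch under those assumptions; the class list and architecture are illustrative, and a simple classifier stands in for the detection models described.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical class list for illustration -- not DLL's actual label set.
NUM_CLASSES = 4  # e.g. pedestrian, cyclist, car, bus

# Start from a backbone pretrained on ImageNet.
model = models.resnet18(pretrained=True)

# Freeze the pretrained feature extractor so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classifier with one sized for the new use case.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One gradient step on a batch of labeled frames."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```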

“We designed VIVA to process video feeds as close as possible to the source, which is the camera,” said Johan Barthelemy, lecturer at the SMART Infrastructure Facility of the University of Wollongong. “Once a frame has been analyzed using a deep neural network, the outcome is transmitted and the current frame is discarded.”

Disposing of frames maintains privacy as no images are transmitted. It also reduces the bandwidth needed.
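
A minimal sketch of that edge pattern, assuming OpenCV for capture and a placeholder for the on-device network: only anonymous per-frame counts leave the device, and the pixels never do. This is illustrative, not VIVA’s actual code.

```python
import json
import cv2

def detect_objects(frame):
    """Placeholder for the on-device deep neural network.
    Returns a list of class labels found in the frame."""
    return []  # e.g. ["pedestrian", "cyclist", "car"]

cap = cv2.VideoCapture(0)  # CCTV feed; index 0 is illustrative

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    labels = detect_objects(frame)

    # Transmit only the anonymous outcome (counts), never the image itself.
    summary = {label: labels.count(label) for label in set(labels)}
    print(json.dumps(summary))  # stand-in for sending to the city's backend

    # The frame goes out of scope here; nothing identifiable is stored or sent.

cap.release()
```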

Beyond city streets like Liverpool’s, VIVA has been adapted for a wide variety of applications, such as identifying and tracking wildlife; detecting culvert blockages for stormwater management and flash-flood early warnings; and tracking people with thermal cameras to understand mobility behavior during heat waves. It can also distinguish between firefighters searching a building and other building occupants, helping identify those who may need help to evacuate.

Making Sense of Traffic Patterns

The research collaboration between SMART, Liverpool’s city council and its industry partners is intended to improve the efficiency, effectiveness and accessibility of a range of government services and facilities.

For pedestrians, the project aims to understand where they’re going, their preferred routes and which areas are congested. For cyclists, it’s about the routes they use and ways to improve bicycle usage. For vehicles, understanding movement and traffic patterns, where they stop, and where they park are key.

Understanding mobility within a city previously required a fleet of costly, fixed sensors, according to Barthelemy. Different sensor models were needed to count specific types of traffic, and manual processes were used to understand how the different types interacted with each other.

Using computer vision on the NVIDIA Jetson TX2 at the edge, the VIVA platform can count the different types of traffic and capture their trajectory and speed. Data is gathered using the city’s existing CCTV network, eliminating the need to invest in additional sensors.

Patterns of movements and points of congestion are identified and predicted to help improve street and footpath layout and connectivity, traffic management and guided pathways. The data has been invaluable in helping Liverpool plan for the urban design and traffic management of its central business district.

Machine Learning Application Built Using NVIDIA Technologies

SMART trained the machine learning applications on its VIVA platform for Liverpool on four workstations powered by a variety of NVIDIA TITAN GPUs, as well as six workstations equipped with NVIDIA RTX GPUs to generate synthetic data and run experiments.

In addition to using open datasets such as Open Images, COCO and Pascal VOC for training, DLL created synthetic data via an in-house application based on the Unity Engine. Synthetic data allows the models to learn from numerous scenarios that might not otherwise be present at any given time, like rainstorms or masses of cyclists.

“This synthetic data generation allowed us to generate 35,000-plus images per scenario of interest under different weather, time of day and lighting conditions,” said Barthelemy. “The synthetic data generation uses ray tracing to improve the realism of the generated images.”

Inferencing is done with NVIDIA Jetson Nano, NVIDIA Jetson TX2 and NVIDIA Jetson Xavier NX, depending on the use case and processing required.

The post Heads Up, Down Under: Sydney Suburb Enhances Livability with Traffic Analytics appeared first on The Official NVIDIA Blog.

Sand Safety: Startup’s Lifeguard AI Hits the Beach to Save Lives

A team in Israel is making a splash with AI.

It started when biz school buddies Netanel Eliav and Adam Bismut went looking for a problem to solve that could change the world. The problem found them: Bismut visited the Dead Sea after a drowning and noticed a lack of tech for lifeguards, who scanned the area with age-old binoculars.

The two aspiring entrepreneurs — recent MBA graduates of Ben-Gurion University, in the country’s south — decided this was their problem to solve with AI.

“I have two little girls, and as a father, I know the feeling that parents have when their children are near the water,” said Eliav, the company’s CEO.

They founded Sightbit in 2018 with BGU classmates Gadi Kovler and Minna Shezaf to help lifeguards see dangerous conditions and prevent drownings.

The startup is seed funded by Cactus Capital, the venture arm of their alma mater.

Sightbit is now in pilot testing at Palmachim Beach, a popular escape for sunbathers and surfers in the Palmachim Kibbutz area along the Mediterranean Sea, south of Tel Aviv. The sand dune-lined destination, with its inviting, warm aquamarine waters, gets packed with thousands of daily summer visitors.

But it’s also a place known for deadly rip currents.

Danger Detectors

Sightbit has developed image detection to help lifeguards spot dangers in their work. In collaboration with the Israel Nature and Parks Authority, the Beersheba-based startup has installed three cameras that feed data into a single NVIDIA Jetson AGX Xavier at the lifeguard towers at Palmachim Beach. NVIDIA Metropolis is deployed for video analytics.

The system of danger detectors enables lifeguards to keep tabs on a computer monitor that flags potential safety concerns while they scan the beach.

Sightbit has developed models based on convolutional neural networks and image detection to provide lifeguards views of potential dangers. Kovler, the company’s CTO, has trained the company’s danger detectors on tens of thousands of images, processed with NVIDIA GPUs in the cloud.

Training on the images wasn’t easy: sun glare on the ocean, changing weather, crowds of people and people partially submerged in the water all complicated the task, said Shezaf, the company’s CMO.

But Sightbit’s deep learning and proprietary algorithms have enabled it to identify children alone as well as clusters of people. This allows its system to flag children who have strayed from the pack.

Rip Current Recognition

The system also harnesses optical flow algorithms to detect dangerous rip currents in the ocean, helping lifeguards keep people out of those zones. These algorithms make it possible to estimate the speed of every object in an image, using partial differential equations to calculate motion vectors for every pixel.
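
Sightbit’s algorithms are proprietary, but dense optical flow of the kind described can be sketched with OpenCV’s Farneback implementation, which estimates a motion vector for every pixel between two frames; the speed threshold below is arbitrary.

```python
import cv2
import numpy as np

def water_motion(prev_frame, next_frame, speed_threshold=5.0):
    """Estimate per-pixel motion between two frames and flag fast-moving water.

    Returns the dense flow field plus a boolean mask of pixels whose apparent
    speed (in pixels per frame) exceeds the threshold.
    """
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)

    # Dense Farneback optical flow: one (dx, dy) motion vector per pixel.
    # Positional args: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    speed = np.linalg.norm(flow, axis=2)   # pixels moved per frame
    return flow, speed > speed_threshold   # candidate fast-current pixels
```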

Lifeguards can get updates on ocean conditions, so when they start work they have a sense of the hazards present that day.

“We spoke with many lifeguards. The lifeguard is trying to avoid the next accident. Many people go too deep and get caught in the rip currents,” said Eliav.

Video from the cameras at the lifeguard towers, processed on the single compact Jetson AGX Xavier running Metropolis, delivers split-second inference for alerts, tracking, statistics and risk analysis in real time.

The Israel Nature and Parks Authority is planning to have a structure built on the beach to house more cameras for automated safety, according to Sightbit.

COVID-19 Calls 

Palmachim Beach lifeguards have a lot to watch, especially now, as people get out of their homes for fresh air while the region reopens from COVID-19-related closures.

As part of Sightbit’s beach safety developments, the company had been training its network to spot how far apart people were to help gauge child safety.

This work also directly applies to monitoring social distancing and has attracted the attention of potential customers seeking ways to slow the spread of COVID-19. The Sightbit platform can provide them crowding alerts when a public area is overcrowded and proximity alerts for when individuals are too close to each other, said Shezaf.
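
A proximity check of this kind reduces to measuring pairwise distances between detected people. The sketch below assumes detections have already been projected to approximate ground coordinates in meters; the thresholds and helper names are illustrative, not Sightbit’s.

```python
from itertools import combinations
import math

def proximity_alerts(positions, min_distance_m=2.0):
    """Return index pairs of people closer than the distance threshold.

    `positions` is a list of (x, y) coordinates in meters, e.g. produced by
    projecting person detections onto the ground plane.
    """
    alerts = []
    for (i, a), (j, b) in combinations(enumerate(positions), 2):
        if math.dist(a, b) < min_distance_m:
            alerts.append((i, j))
    return alerts

def crowding_alert(positions, max_people=50):
    """Flag the whole area when it holds more people than allowed."""
    return len(positions) > max_people

# Example: three detected people, two of them too close together.
people = [(0.0, 0.0), (1.2, 0.5), (10.0, 4.0)]
print(proximity_alerts(people))   # [(0, 1)]
print(crowding_alert(people))     # False
```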

The startup has put in extra hours to work with those interested in using its tech to monitor public areas and help reduce the spread of the pathogen.

“If you want to change the world, you need to do something that is going to affect people immediately without any focus on profit,” said Eliav.


Sightbit is a member of NVIDIA Inception, a virtual accelerator program that helps startups in AI and data science get to market faster.

The post Sand Safety: Startup’s Lifeguard AI Hits the Beach to Save Lives appeared first on The Official NVIDIA Blog.

Heart of the Matter: AI Helps Doctors Navigate Pandemic

A month after it got FDA approval, a startup’s first product was saving lives on the front lines of the battle against COVID-19.

Caption Health develops software for ultrasound systems, called Caption AI. It uses deep learning to empower medical professionals, including those without prior ultrasound experience, to perform echocardiograms quickly and accurately.

The results are images of the heart, often worthy of an expert sonographer, that help doctors diagnose and treat critically ill patients.

The coronavirus pandemic provided plenty of opportunities to try out the first dozen systems. Two doctors who used the new tool shared their stories on the condition that they and their patients remain anonymous.

A 53-year-old diabetic woman with COVID-19 went into cardiac shock in a New York hospital. Without the images from Caption AI, it would have been difficult to clinch the diagnosis, said a doctor on the scene.

The system helped the physician identify heart problems in an 86-year-old man with the virus in the same hospital, helping doctors bring him back to health. It was another case among more than 200 in the facility that was effectively turned into a COVID-19 hospital in mid-March.

The Caption Health system made a tremendous impact for a staff spread thin, said the doctor. It would have been hard for a trained sonographer to keep up with the demand for heart exams, he added.

Heart Test Becomes Standard Procedure

Caption AI helped doctors in North Carolina determine that a 62-year-old man had COVID-19-related heart damage. Thanks, in part, to the ease of using the system, the hospital now performs echocardiograms for most patients with the virus.

At the height of the pandemic’s first wave, the hospital stationed ultrasound systems with Caption AI in COVID-19 wards. Rather than sending sonographers from unit to unit, the usual practice, staff stationed at the wards used the systems. The change reduced staff exposure to the virus and conserved precious protective gear.

Beyond the pandemic, the system will help hospitals provide urgent services while keeping a lid on rising costs, said a doctor at that hospital.

“AI-enabled machines will be the next big wave in taking care of patients wherever they are,” said Randy Martin, chief medical officer of Caption Health and emeritus professor of cardiology at Emory University.

Martin joined the startup about four years ago after meeting its founders, who shared expertise and passion for medicine and AI. Today their software “takes a user through 10 standard views of the heart, coaching them through some 90 fine movements experts make,” he said.

“We don’t intend to replace sonographers; we’re just expanding the use of portable ultrasound systems to the periphery for more early detection,” he added.

Coping with an Unexpected Demand Spike

In the early days of the pandemic, that expansion couldn’t come fast enough.

In late March, the startup exhausted its supply of the NVIDIA Quadro P3000 GPUs that run its AI software. Amid the global shutdown, it reached out to its supply chain.

“We are experiencing overwhelming demand for our product,” the company’s CEO wrote, after placing orders for 100 GPUs with a distributor.

Caption Health has systems currently in use at 11 hospitals. It expects to deploy Caption AI at a number of additional sites in the coming weeks.

GPUs at the Heart of Automated Heart Tests

The startup currently integrates its software in a portable ultrasound from Terason. It intends to partner with more ultrasound makers in the future. And it advises partners to embed GPUs in their future ultrasound equipment.

The Quadro P3000 in Caption AI runs real-time inference tasks using deep convolutional neural networks. They provide operators guidance in positioning a probe that captures images. Then they automatically choose the highest-quality heart images and interpret them to help doctors make informed decisions.
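
Caption Health hasn’t published its pipeline, but the frame-selection step can be sketched as scoring candidate images with a quality network and keeping the best few. The quality model in this sketch is a stand-in, not the product’s actual network.

```python
import torch

def select_best_frames(frames, quality_model, k=3):
    """Score candidate ultrasound frames and keep the k highest-quality ones.

    `frames` is a tensor of shape (N, C, H, W); `quality_model` is any network
    that maps a frame batch to one quality score per frame (a stand-in for the
    product's real image-quality network).
    """
    quality_model.eval()
    with torch.no_grad():
        scores = quality_model(frames).squeeze(-1)   # shape (N,)
    top = torch.topk(scores, k=min(k, len(frames)))
    return frames[top.indices], top.values
```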

The NVIDIA GPU also freed up four CPU cores, making space to process other tasks on the system, such as providing a smooth user experience.

The startup trained its AI models on a database of 1 million echocardiograms from clinical partners. An early study in partnership with Northwestern Medicine and the Minneapolis Heart Institute showed Caption AI helped eight registered nurses with no prior ultrasound experience capture highly accurate images on a wide variety of patients.

Inception Program Gives Startup Traction

Caption Health, formerly called Bay Labs, was founded in 2015 in Brisbane, Calif. It received a $125,000 prize at a 2017 GTC competition for members of NVIDIA’s Inception program, which gives startups access to technology, expertise and markets.

“Being part of the Inception program has provided us with increased recognition in the field of deep learning, a platform to share our AI innovations with healthcare and deep learning communities, and phenomenal support getting NVIDIA GPUs into our supply chain so we could deliver Caption AI,” said Charles Cadieu, co-founder and president of Caption Health.

Now that its tool has been tested in a pandemic, Caption Health looks forward to opportunities to help save lives across many ailments. The company aims to ride a trend toward more portable systems that extend availability and lower costs of diagnostic imaging.

“We hope to see our technology used everywhere from big hospitals to rural villages to examine people for a wide range of medical conditions,” said Cadieu.

To learn more about Caption Health and other companies like it, watch the webinar on healthcare startups against COVID-19.

The post Heart of the Matter: AI Helps Doctors Navigate Pandemic appeared first on The Official NVIDIA Blog.

Taking AI to Market: NVIDIA and Arterys Bridge Gap Between Medical Researchers and Clinicians

Around the world, researchers in startups, academic institutions and online communities are developing AI models for healthcare. Getting these models from their hard drives and into clinical settings can be challenging, however.

Developers need feedback from healthcare practitioners on how their models can be optimized for the real world. So, San Francisco-based AI startup Arterys built a forum for these essential conversations between clinicians and researchers.

Called the Arterys Marketplace, and now integrated with the NVIDIA Clara Deploy SDK, the platform makes it easy for researchers to share medical imaging AI models with clinicians, who can try them on their own data.

“By integrating the NVIDIA Clara Deploy technology into our platform, anyone building an imaging AI workflow with the Clara SDK can take their pipeline online with a simple handoff to the Arterys team,” said Christian Ulstrup, product manager for Arterys Marketplace. “We’ve streamlined the process and are excited to make it easy for Clara developers to share their models.”

Researchers can submit medical imaging models in any stage of development — from AI tools for research use to apps with regulatory clearance. Once the model is posted on the public Marketplace site, anyone with an internet connection can test it by uploading a medical image through a web browser.

Models on Arterys Marketplace run on NVIDIA GPUs through Amazon Web Services for inference.

A member of both the NVIDIA Inception and AWS Activate programs, which collaborate to help startups get to market faster, Arterys was founded in 2011. The company builds clinical AI applications for medical imaging and launched the Arterys Marketplace at the RSNA 2019 medical conference.

It recently raised $28 million in funding to further develop the ecosystem of partners and clinical-grade AI solutions on its platform.

Several of the models now on the Arterys Marketplace are focused on COVID-19 screening from chest X-rays and CT images. Among them is a model jointly developed by NVIDIA’s medical imaging applied research team and clinicians and data scientists at the National Institutes of Health. Built in under three weeks using the NVIDIA Clara Train framework, the model can help researchers study the detection of COVID-19 from chest CT scans.

Building AI Pillar of the Community

While there’s been significant investment in developing AI models for healthcare in the last decade, the Arterys team found that it can still take years to get the tools into radiologists’ hands.

“There’s been a huge gap between the smart, passionate researchers building AI models for healthcare and the end users — radiologists and clinicians who can use these models in their workflow,” Ulstrup said. “We realized that no research institution, no startup was going to be able to do this alone.”

The Arterys Marketplace was created with simplicity in mind. Developers need only fill out a short form to submit an AI model for inclusion, and then can send the model to users as a URL — all for free.

For clinicians around the world, there’s no need to download and install an AI model. All that’s needed is an internet connection and a couple of medical images to upload for testing with the AI models. Users can choose whether or not their imaging data is shared with the researchers.

The images are analyzed with NVIDIA GPUs in the cloud, and results are emailed to the user within minutes. A Slack channel provides a forum for clinicians to provide feedback to researchers, so they can work together to improve the AI model.

“In healthcare, it can take years to get from an idea to seeing it implemented in clinical settings. We’re reducing that to weeks, if not days,” said Ulstrup. “It’s absurdly easy compared to what the process has been in the past.”

With a focus on open innovation and rapid iteration, Ulstrup says, the Arterys Marketplace aims to bring doctors into the product development cycle, helping researchers build better AI tools. By interacting with clinicians in different geographies, developers can improve their models’ ability to generalize across different medical equipment and imaging datasets.

Over a dozen AI models are on the Arterys Marketplace so far, with more than 300 developers, researchers, and startups joining the community discussion on Slack.

“Once models are hosted on the Arterys Marketplace, developers can send them to researchers anywhere in the world, who in turn can start dragging and dropping data in and getting results,” Ulstrup said. “We’re seeing discussion threads between researchers and clinicians on every continent, sharing screenshots and feedback — and then using that feedback to make the models even better.”

Check out the research-targeted AI COVID-19 Classification Pipeline developed by NVIDIA and NIH researchers on the Arterys Marketplace. To hear more from the Arterys team, register for the Startups4COVID webinar, taking place July 28.

The post Taking AI to Market: NVIDIA and Arterys Bridge Gap Between Medical Researchers and Clinicians appeared first on The Official NVIDIA Blog.

Into the Woods: AI Startup Lets Foresters See the Wood for the Trees

AI startup Trefos is helping foresters see the wood for the trees.

Using drones mounted with custom lidar and cameras, the Philadelphia-based company collects data for high-resolution, 3D forest maps. The resulting metrics allow government agencies and the forestry industry to estimate the volume of timber and biomass in an area of forest, as well as the amount of carbon stored in the trees.

With this unprecedented detail, foresters can make more informed decisions when, for example, evaluating the need for controlled burns to clear biomass and reduce the risk of wildfires.

“Forests are often very dense, with a very repetitive layout,” said Steven Chen, founder and CEO of the startup, a member of the NVIDIA Inception program, which supports startups from product development to deployment. “We can use deep learning algorithms to detect trees, isolate them from the surrounding branches and vines, and use those as landmarks.”

Trained on NVIDIA GPUs, the deep learning algorithms detect trees from both camera images and lidar point clouds. AI can dramatically increase the amount of data foresters are able to collect, while delivering results much faster than traditional forest monitoring — where scientists physically walk through forest terrain to record metrics of interest, like the width of a tree trunk.

“It’s an extremely time-consuming process, often walking through very dense forests with a tape measure,” Chen said. “It would take at least a day to survey 100 acres, while measuring less than 2 percent of the trees.”

With data collected by drone-mounted lidar and camera sensors, surveying the same 100 acres takes only about 30 minutes, while measuring every tree.

Getting AI Down to a Tree 

Chen began his career in finance, working as an agricultural-options trader. “There, I saw the importance of getting inventory data for forests,” he said.

So when he joined the University of Pennsylvania as a Ph.D. student in robotics, he began working on ways robotics and machine learning can help get a better picture of the layout and features of forests around the world. Much of the research behind Trefos originated in Chen’s work in the Vijay Kumar Lab, a robotics research group at UPenn.

Trefos’s custom-built drones can fly autonomously through both organized, planted forests and wild ones. Chen and his team are working with the New Jersey Forest Service, which allows Trefos’s drones to fly through state forests and provides perspective on the kinds of metrics that would be useful to foresters.

The company has collected and labeled all its own training data to ensure high quality and to maintain control over the properties being labeled — such as whether the algorithms should classify a tree and its branches as two separate elements or just one.

Some processing is done at the edge, helping autonomously fly the drone through forests. But the data collected for mapping is processed offline on NVIDIA hardware, including TITAN GPUs and RTX GPUs on desktop systems — plus the NVIDIA DGX Station and DGX-1 server for heavier computing workloads.

Its AI algorithms are developed using the TensorFlow deep learning framework. While the drone platform currently captures images at 1-megapixel resolution, Trefos is looking at 4K cameras for the deployed product.
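
Trefos hasn’t published its model architectures, so the following is only an illustrative TensorFlow sketch of a small image classifier of the kind such a pipeline might start from, with a patch-level “tree vs. not tree” classifier standing in for full detection.

```python
import tensorflow as tf

# Illustrative patch classifier: "tree" vs. "not tree".
# Trefos's production models and input resolution are not public.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Training would use the company's own labeled imagery, e.g.:
# model.fit(train_images, train_labels, epochs=10, batch_size=32)
```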

Chen founded Trefos less than two years ago. The company has received a grant from the National Science Foundation’s Small Business Innovation Research program and is running pilot tests in forests across the U.S.

The post Into the Woods: AI Startup Lets Foresters See the Wood for the Trees appeared first on The Official NVIDIA Blog.

Intel Launches First Artificial Intelligence Associate Degree Program

Students at Chandler-Gilbert Community College gather for a new student orientation in 2019. Intel is partnering with Maricopa County Community College District to launch the first Intel-designed artificial intelligence associate degree program in the U.S. The program’s first phase will be piloted online at Estrella Mountain Community College and Chandler-Gilbert Community College in fall 2020. (Credit: Maricopa County Community College District)

What’s New: Intel is partnering with Maricopa County Community College District (MCCCD) to launch the first Intel-designed artificial intelligence (AI) associate degree program in the United States. The Arizona Commerce Authority will also provide a workforce grant of $100,000 to support the program, which will prepare tens of thousands of students for careers in high-tech, healthcare, automotive, industrial and aerospace fields.

“We strongly believe AI technology should be shaped by many voices representing different experiences and backgrounds. Community colleges offer the opportunity to expand and diversify AI since they attract a diverse array of students with a variety of backgrounds and expertise. Intel is committed to partnering with educational institutions to expand access to technology skills needed for current and future jobs.”
–Gregory Bryant, Intel executive vice president and general manager of the Client Computing Group

Whom It Helps: Based in Tempe, Arizona, MCCCD is the largest community college district in the U.S. with an estimated enrollment of more than 100,000 students across 10 campuses and 10,000 faculty and staff members.

How It Helps: The AI program consists of courses that have been developed by MCCCD’s faculty and Intel leaders based on Intel software and tools such as the Intel® Distribution of OpenVINO™ Toolkit and Intel Python. Intel will also contribute technical advice, faculty training, summer internships and Intel mentors for both students and faculty members. Students will learn fundamental skills such as data collection, AI model training, coding and exploration of AI technology’s societal impact. The program includes a social impact AI project that is developed with guidance from teachers and Intel mentors. Upon completion, MCCCD will offer an associate degree in artificial intelligence that can be transferred to a four-year college.
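
Coursework built around a toolkit like OpenVINO typically starts with loading an optimized model and running inference on Intel hardware. Below is a minimal sketch using OpenVINO’s Python runtime; the model file, input shape and device are placeholders, not part of the MCCCD curriculum.

```python
import numpy as np
from openvino.runtime import Core  # OpenVINO Python runtime

# Load a model in OpenVINO IR format and compile it for the CPU.
# "model.xml" is a placeholder for a network converted with the toolkit.
core = Core()
model = core.read_model("model.xml")
compiled = core.compile_model(model, device_name="CPU")

# Run inference on a dummy image-shaped input (assumes a 1x3x224x224 model).
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled([dummy])[compiled.output(0)]
print(result.shape)
```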

Why It’s Important: AI technology is rapidly accelerating with new tools, technology and applications requiring workers to learn new skills. Recent studies show the demand for artificial intelligence skills is expected to grow exponentially. A 2020 LinkedIn report notes that AI skills are one of the top five most in-demand hard skills. Research by MCCCD’s Workforce and Economic Development Office estimates these roles will grow 22.4 percent by 2029.

As of early June 2020, more than 43 million Americans have filed for unemployment benefits. Furthermore, a recent McKinsey study estimates that over 57 million jobs are vulnerable, meaning they are subject to furloughs, layoffs or being rendered unproductive. It is critical for educational institutions and corporations to collaborate to prepare for future workforce demands.

About AI Program Launch Details: The program’s first phase will be piloted online at Estrella Mountain Community College and Chandler-Gilbert Community College in fall 2020. As physical distancing requirements are lifted and concerns about the COVID-19 pandemic ease, classes will begin in person at both campuses.

More Context: This expands on the Intel® AI for Youth program, which provides AI curriculum and resources to over 100,000 high school and vocational students in nine countries and will continue to scale globally. (Read, “AI for Youth Uses Intel Technology to Solve Real-World Problems.”) Additionally, Intel recently collaborated with Udacity to create the Intel Edge AI for IoT Developers Nanodegree Program aimed at training 1 million developers. Intel has a commitment to expand digital readiness to reach 30 million people in 30,000 institutions in 30 countries. This builds on the company’s recently announced 2030 goals and Global Impact Challenges that reinforce its commitment to making technology fully inclusive and expand digital readiness.

Intel’s corporate responsibility and positive global impact work is embedded in its purpose to create world-changing technology that enriches the lives of every person on Earth. By leveraging its position in the technology ecosystem, Intel can help customers and partners achieve their own aspirations and accelerate progress on key topics across the technology industry.

The post Intel Launches First Artificial Intelligence Associate Degree Program appeared first on Intel Newsroom.

AI for Youth Uses Intel Technology to Solve Real-World Problems


In Bangalore, India, 10th grader Rahul Jaikrishna developed Cyber Detective – an artificial intelligence-based model that detects cyber bullying with an accuracy of up to 80%. Fourteen-year-old Jaikrishna was inspired after learning that “confession pages” created by school students – online diaries on social media where young people post confessions and secrets – often make teens easy targets for bullying.

Jaikrishna didn’t learn AI programming in his everyday 10th grade syllabus. He picked it up via the Intel® AI for Youth program, which launched in 2019 in three countries and was offered at his school.

This year, AI for Youth will scale to nine countries. And by late 2020, program leaders hope to provide over 100,000 high school and vocational students with vital AI skills, curriculum and resources that can be applied in everyday life.

More: Intel Launches First Artificial Intelligence Associate Degree Program | Artificial Intelligence at Intel (Press Kit)

The skills are critical to accelerating new tools, technologies and applications in industries such as high-tech, healthcare, automotive, industrial and aerospace engineering, and more. A 2019 LinkedIn report notes that AI was the second-most in-demand skill behind cloud computing, while Forbes reported on the need to train more skilled AI professionals: “There are about 300,000 AI professionals worldwide, but millions of roles available to fill.” The democratization of AI and deep learning, says Forbes, is increasing the demand for AI professionals.

To fill this need, Intel plans to expand its AI for Youth program to teach as many as 30 million current and future workforce members about AI by 2030.

“Demystifying and democratizing AI for the next generation non-techie workforce is key to fuel mutual growth for countries, industries and broader society for the larger socio-economic revitalization, especially when COVID is impacting the economy and jobs worldwide,” said Brian Gonzalez, senior director of Government Market Trade at Intel. “The AI for Youth program is testimonial to our commitment to expand digital readiness for all people in the world.”

AI for Youth is offered today at K-12 and vocational schools in eight countries: India, Poland, South Korea, Germany, Singapore, United Kingdom, China and Russia. The United States joins that list today as the program’s ninth country.

Using the principles and techniques they learned as part of the AI for Youth program, students around the world are turning out technology-based solutions.

In November, 17-year-old Polish students Jakub Florkowski, Antoni Marcinek, Wiktoria Gradecka and Wojciech Janicki from Jan Kanty High School applied the skills they learned from Intel AI for Youth to create the Hey Teacher! app.

The app matches private tutors with interested students in Poland via easy-to-navigate filters like subject, level of education, availability, location and price. The quartet created Hey Teacher! to solve a problem they faced as students: easily locating competent teaching resources to help explain or broaden their knowledge via private tutoring.

And in June 2019, four students at Busan Computer High School in South Korea noticed a staggering amount of energy wasted when they entered an empty computer lab. Despite not being in use, the lab’s air-conditioning, lights and PCs were on. They noticed a similar pattern of energy waste across the school’s 30 classrooms.

Instead of ignoring the issue, Lee Jihong, Kim Eundong, Kim Jidong and Lee Seungyun created Energy Guard. They spent seven months developing an AI algorithm that pairs a PC and a webcam with computer vision and other analytics to count the number of people present in a room and toggle on or off the room’s power supply.

The system is currently in pilot at the school’s PC lab. After the trial, Energy Guard will be expanded to over 30 rooms in the school. The group has set a goal of covering over 10,000 classrooms across South Korea.

The post AI for Youth Uses Intel Technology to Solve Real-World Problems appeared first on Intel Newsroom.

Intel and National Science Foundation Invest in Wireless-Specific Machine Learning Edge Research


What’s New: Today, Intel and the National Science Foundation (NSF) announced award recipients of joint funding for research into the development of future wireless systems. The Machine Learning for Wireless Networking Systems (MLWiNS) program is the latest in a series of joint efforts between the two partners to support research that accelerates innovation, with a focus on enabling ultra-dense wireless systems and architectures that meet the throughput, latency and reliability requirements of future applications. In parallel, the program will target research on distributed machine learning computations over wireless edge networks to enable a broad range of new applications.

“Since 2015, Intel and NSF have collectively contributed more than $30 million to support science and engineering research in emerging areas of technology. MLWiNS is the next step in this collaboration and has the promise to enable future wireless systems that serve the world’s rising demand for pervasive, intelligent devices.”
– Gabriela Cruz Thompson, director of university research and collaborations at Intel Labs

Why It’s Important: As demand for advanced connected services and devices grows, future wireless networks will need to meet the challenging density, latency, throughput and security requirements these applications will require. Machine learning shows great potential to manage the size and complexity of such networks – addressing the demand for capacity and coverage while maintaining the stringent and diverse quality of service expected from network users. At the same time, sophisticated networks and devices create an opportunity for machine learning services and computation to be deployed closer to where the data is generated, which alleviates bandwidth, privacy, latency and scalability concerns to move data to the cloud.

“5G and Beyond networks need to support throughput, density and latency requirements that are orders of magnitudes higher than what current wireless networks can support, and they also need to be secure and energy-efficient,” said Margaret Martonosi, assistant director for computer and information science and engineering at NSF. “The MLWiNS program was designed to stimulate novel machine learning research that can help meet these requirements – the awards announced today seek to apply innovative machine learning techniques to future wireless network designs to enable such advances and capabilities.”

What Will Be Researched: Through MLWiNS, Intel and NSF will fund research with the goal of driving new wireless system and architecture design, increasing the utilization of sparse spectrum resources and enhancing distributed machine learning computation over wireless edge networks. Grant winners will conduct research across multiple areas of machine learning and wireless networking. Key focus areas and project examples include:

Reinforcement learning for wireless networks: Research teams from the University of Virginia and Penn State University will study reinforcement learning for optimizing wireless network operation, focusing on tackling convergence issues, leveraging knowledge-transfer methods to reduce the amount of training data necessary, and bridging the gap between model-based and model-free reinforcement learning through an episodic approach.

Federated learning for edge computing (a minimal sketch of the core averaging step follows this list):

  • Researchers from the University of North Carolina at Charlotte will explore methods to speed up multi-hop federated learning over wireless communications, allowing multiple groups of devices to collaboratively train a shared global model while keeping their data local and private. Unlike classical federated learning systems that utilize single-hop wireless communications, multi-hop system updates need to go through multiple noisy and interference-rich wireless links, which can result in slower updates. Researchers aim to overcome this challenge by developing a novel wireless multi-hop federated learning system with guaranteed stability, high accuracy and a fast convergence speed by systematically addressing the challenges of communication latency, and system and data heterogeneity.
  • Researchers from the Georgia Institute of Technology will analyze and design federated and collaborative machine-learning training and inference schemes for edge computing, with the goal of increasing efficiency over wireless networks. The team will address challenges with real-time deep learning at the edge, including limited and dynamic wireless channel bandwidth, unevenly distributed data across edge devices and on-device resource constraints.
  • Research from the University of Southern California and the University of California, Berkeley will focus on a coding-centric approach to enhance federated learning over wireless communications. Specifically, researchers will work to tackle the challenges of dealing with non-independent and identically distributed data, and heterogeneous resources at the wireless edge, and minimizing upload bandwidth costs from users, while emphasizing issues of privacy and security when learning from distributed data.

Distributed training across multiple edge devices: Rice University researchers will work to train large-scale centralized neural networks by separating them into a set of independent sub-networks that can be trained on different devices at the edge. This can reduce training time and complexity, while limiting the impact on model accuracy.

Leveraging information theory and machine learning to improve wireless network performance: Research teams from the Massachusetts Institute of Technology and Virginia Polytechnic Institute and State University will collaborate to explore the use of deep neural networks to address physical layer problems of a wireless network. They will exploit information theoretic tools in order to develop new algorithms that can better address non-linear distortions and relax simplifying assumptions on the noise and impairments encountered in wireless networks.

Deep learning from radio frequency signatures: Researchers at Oregon State University will investigate cross-layer techniques that leverage the combined capabilities of transceiver hardware, wireless radio frequency (RF) domain knowledge and deep learning to enable efficient wireless device classification. Specifically, the focus will be on exploiting RF signal knowledge and transceiver hardware impairments to develop efficient deep learning-based device classification techniques that are scalable with the massive and diverse numbers of emerging wireless devices, robust against device signature cloning and replication, and agnostic to environment and system distortions.
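
Several of the awarded projects center on federated learning, in which each device trains locally and only model weights, not raw data, are shared. As a point of reference, here is a minimal NumPy sketch of the basic server-side averaging step; the values are toy data, not any team’s actual method.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Combine locally trained models into one global model.

    `client_weights` is a list of weight vectors (one per device) and
    `client_sizes` the number of local training samples each device used;
    clients with more data get proportionally more influence.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)                  # (num_clients, num_params)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

# Toy round: three devices report weights after local training.
w = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
n = [10, 10, 20]                                        # local sample counts
print(federated_average(w, n))                          # [3.5, 4.5]
```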

About Award Winners and Project Descriptions: A full list of award winners and project descriptions can be found in “Intel and National Science Foundation Announce Future Wireless Systems Research Award Recipients.”

More Context: NSF/Intel Partnership on Machine Learning for Wireless Networking Systems (MLWiNS) | Intel Labs (Press Kit) | Artificial Intelligence at Intel (Press Kit)

The post Intel and National Science Foundation Invest in Wireless-Specific Machine Learning Edge Research appeared first on Intel Newsroom.

Green Light! TOP500 Speeds Up, Saves Energy with NVIDIA

The new ranking of the TOP500 supercomputers paints a picture of modern scientific computing, expanded with AI and data analytics, and accelerated with NVIDIA technologies.

Eight of the world’s top 10 supercomputers now use NVIDIA GPUs, InfiniBand networking or both. They include the most powerful systems in the U.S., Europe and China.

NVIDIA, now combined with Mellanox, powers two-thirds (333) of the overall TOP500 systems on the latest list, up dramatically from less than half (203) for the two separate companies combined on the June 2017 list.

Nearly three-quarters (73 percent) of the new InfiniBand systems on the list adopted NVIDIA Mellanox HDR 200G InfiniBand, demonstrating the rapid embrace of the latest data rates for smart interconnects.

The number of TOP500 systems using HDR InfiniBand nearly doubled since the November 2019 list. Overall, InfiniBand appears in 141 supercomputers on the list, up 12 percent since June 2019.

A rising number of TOP500 systems are adopting NVIDIA GPUs, its Mellanox networking or both.


NVIDIA Mellanox InfiniBand and Ethernet networks connect 305 systems (61 percent) of the TOP500 supercomputers, including all of the 141 InfiniBand systems, and 164 (63 percent) of the systems using Ethernet.

In energy efficiency, the systems using NVIDIA GPUs are pulling away from the pack. On average, they’re now 2.8x more power-efficient than systems without NVIDIA GPUs, measured in gigaflops/watt.

That’s one reason why NVIDIA GPUs are now used by 20 of the top 25 supercomputers on the TOP500 list.

The best example of this power efficiency is Selene (pictured above), the latest addition to NVIDIA’s internal research cluster. The system was No. 2 on the latest Green500 list and No. 7 on the overall TOP500 at 27.5 petaflops on the Linpack benchmark.

At 20.5 gigaflops/watt, Selene is within a fraction of a point of the top spot on the Green500 list, which was claimed by a much smaller system that ranked No. 394 by performance.
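
Taken together, those two figures imply the system’s power draw during the benchmark run, assuming the usual Green500 convention of dividing Linpack performance by average power.

```python
linpack_gflops = 27.5e6              # 27.5 petaflops, from the TOP500 entry
efficiency_gflops_per_watt = 20.5    # Green500 efficiency figure

power_watts = linpack_gflops / efficiency_gflops_per_watt
print(f"{power_watts / 1e6:.2f} MW")  # roughly 1.34 MW for the full system
```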

Selene is the only top 100 system to crack the 20 gigaflops/watt barrier. It’s also the second most powerful industrial supercomputer in the world behind the No. 6 system from energy giant Eni S.p.A. of Italy, which also uses NVIDIA GPUs.

NVIDIA GPUs are powering gains in energy efficiency for the TOP500 supercomputers.

In energy use, Selene is 6.8x more efficient than the average TOP500 system not using NVIDIA GPUs. Selene’s performance and energy efficiency are thanks to third-generation Tensor Cores in NVIDIA A100 GPUs that speed up both traditional 64-bit math for simulations and lower precision work for AI.

Selene’s rankings are an impressive feat for a system that took less than four weeks to build. Engineers were able to assemble Selene quickly because they used NVIDIA’s modular reference architecture.

The guide defines what NVIDIA calls a DGX SuperPOD. It’s based on a powerful, yet flexible building block for modern data centers: the NVIDIA DGX A100 system.

The DGX A100 is an agile system, available today, that packs eight A100 GPUs in a 6U server with NVIDIA Mellanox HDR InfiniBand networking. It was created to accelerate a rich mix of high performance computing, data analytics and AI jobs — including training and inference — and to be fast to deploy.

Scaling from Systems to SuperPODs

With the reference design, any organization can quickly set up a world-class computing cluster. It shows how 20 DGX A100 systems can be linked in Lego-like fashion using high-performance NVIDIA Mellanox InfiniBand switches.

InfiniBand now accelerates seven of the top 10 supercomputers, including the most powerful systems in China, Europe and the U.S.

Four operators can rack a 20-system DGX A100 cluster in as little as an hour, creating a 2-petaflops system powerful enough to appear on the TOP500 list. Such systems are designed to run comfortably within the power and thermal capabilities of standard data centers.

By adding an additional layer of NVIDIA Mellanox InfiniBand switches, engineers linked 14 of these 20-system units to create Selene, which sports:

  • 280 DGX A100 systems
  • 2,240 NVIDIA A100 GPUs
  • 494 NVIDIA Mellanox Quantum 200G InfiniBand switches
  • 56 TB/s network fabric
  • 7PB of high-performance all-flash storage
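
Those headline numbers follow directly from the building blocks described above: 14 units of 20 DGX A100 systems, each packing eight A100 GPUs.

```python
systems_per_unit = 20     # DGX A100 systems per SuperPOD building block
gpus_per_system = 8       # A100 GPUs in each DGX A100
units_linked = 14         # building blocks combined to form Selene

total_systems = systems_per_unit * units_linked     # 280 DGX A100 systems
total_gpus = total_systems * gpus_per_system        # 2,240 A100 GPUs
print(total_systems, total_gpus)
```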

One of Selene’s most significant specs is that it can deliver more than 1 exaflops of AI performance. Another: using just 16 of its DGX A100 systems, Selene set a new record on a key data analytics benchmark, called TPCx-BB, delivering 20x greater performance than any other system.

These results are critical at a time when AI and analytics are becoming part of the new requirements for scientific computing.

Around the world, researchers are using deep learning and data analytics to predict the most fruitful areas for conducting experiments. The approach reduces the number of costly and time-consuming experiments researchers require, accelerating scientific results.

For example, six systems not yet on the TOP500 list are being built today with the A100 GPUs NVIDIA launched last month. They’ll accelerate a blend of HPC and AI that’s defining a new era in science.

TOP500 Expands Canvas for Scientific Computing

One of those systems is at Argonne National Laboratory, where researchers will use a cluster of 24 NVIDIA DGX A100 systems to scan billions of drugs in the search for treatments for COVID-19.

“Much of this work is hard to simulate on a computer, so we use AI to intelligently guide where and when we will sample next,” said Arvind Ramanathan, a computational biologist at Argonne, in a report on the first users of A100 GPUs.

AI, data analytics and edge streaming are redefining scientific computing.

For its part, NERSC (the U.S. National Energy Research Scientific Computing Center) is embracing AI for several projects targeting Perlmutter, its pre-exascale system packing 6,200 A100 GPUs.

For example, one project will use reinforcement learning to control light source experiments, and one will apply generative models to reproduce expensive simulations at high-energy physics detectors.

Researchers in Munich are training natural-language models on 6,000 GPUs on the Summit supercomputer to speed the analysis of coronavirus proteins. It’s another sign that leading TOP500 systems are extending beyond traditional simulations run with double-precision math.

As scientists expand into deep learning and analytics, they’re also tapping into cloud computing services and even streaming data from remote instruments at the edge of the network. Together these elements form the four pillars of modern scientific computing that NVIDIA accelerates: simulation, AI, data analytics and edge streaming.

It’s part of a broader trend where both researchers and enterprises are seeking acceleration for AI and analytics from the cloud to the network’s edge. That’s why the world’s largest cloud service providers along with the world’s top OEMs are adopting NVIDIA GPUs.

In this way, the latest TOP500 list reflects NVIDIA’s efforts to democratize AI and HPC. Any company that wants to build leadership computing capabilities can access NVIDIA technologies such as DGX systems that power the world’s most powerful systems.

Finally, NVIDIA congratulates the engineers behind the Fugaku supercomputer in Japan for taking the No. 1 spot, showing that Arm has become a viable option in high performance computing. That’s one reason NVIDIA announced a year ago that it’s making its CUDA accelerated computing software available on the Arm processor architecture.

The post Green Light! TOP500 Speeds Up, Saves Energy with NVIDIA appeared first on The Official NVIDIA Blog.

Best AI Processor: NVIDIA Jetson Nano Wins 2020 Vision Product of the Year Award

The small but mighty NVIDIA Jetson Nano has added yet another accolade to the company’s armory of awards.

The Edge AI and Vision Alliance, a worldwide collection of companies creating and enabling applications for computer vision and edge AI, has named Jetson Nano the winner of its 2020 Vision Product of the Year Award in the “Best AI Processor” category.

Now in its third year, the Vision Product of the Year Awards were announced in five categories. The winning entries were chosen by an independent panel of judges based on innovation, impact on customers and the market, and competitive differentiation.

“Congratulations to NVIDIA on being selected for this prestigious award by our panel of independent judges,” said Jeff Bier, founder of the Edge AI and Vision Alliance. “NVIDIA is a pioneer in embedded computer vision and AI, and has sustained an impressive pace of innovation over many years.”

The NVIDIA Jetson Nano, launched last year, delivers powerful computing for AI at the edge in a compact, easy-to-use platform with full software programmability. At just 70 x 45 mm, the Jetson Nano module is the smallest in the Jetson lineup.

But don’t let its credit card-sized form factor fool you. With the performance and capabilities needed to run modern AI workloads fast, Jetson Nano delivers big when it comes to deploying AI at the edge across multiple industries — from robotics and smart cities to retail and healthcare.

Opening new possibilities for AI at the edge, Jetson Nano delivers up to 472 GFLOPS of accelerated computing and can run many modern neural networks in parallel.

It’s production-ready and supports all popular AI frameworks. This makes Jetson Nano ideal for developing AI-powered products such as IoT gateways, network video recorders, cameras, robots and optical inspection systems.

The system-on-module is powered by an NVIDIA Maxwell GPU and supported by the NVIDIA JetPack SDK. That significantly expands the choices available for manufacturers, developers and educators looking for embedded edge-computing options that demand increased performance to support AI workloads but are constrained by size, weight, power budget or cost.

This comprehensive software stack makes AI deployment on autonomous machines fast, reduces complexity and speeds time to market.

NVIDIA Jetson is the leading AI-at-the-edge computing platform, with nearly half a million developers. With support for cloud-native technologies now available across the full Jetson lineup, manufacturers of intelligent machines and developers of AI applications can build and deploy high-quality, software-defined features on embedded and edge devices targeting a wide variety of applications.

Cloud-native support allows users to implement frequent improvements, improve accuracy and quickly deploy new algorithms throughout an application’s lifecycle, at scale, while minimizing downtime.

Learn more about why the Edge AI and Vision Alliance selected Jetson Nano for its Best AI Processor award.

New to the Jetson platform? Get started.

The post Best AI Processor: NVIDIA Jetson Nano Wins 2020 Vision Product of the Year Award appeared first on The Official NVIDIA Blog.