NVIDIA Deep Learning Institute Releases New Accelerated Data Science Teaching Kit for Educators

As data grows in volume, velocity and complexity, the field of data science is booming.

There’s an ever-increasing demand for talent and skill sets to help design the best data science solutions. Driving these breakthroughs, however, requires students to have a foundation in a variety of tools, programming languages, computing frameworks and libraries.

That’s why the NVIDIA Deep Learning Institute has released the first version of its Accelerated Data Science Teaching Kit for qualified educators. The kit has been co-developed with Polo Chau, from the Georgia Institute of Technology, and Xishuang Dong, from Prairie View A&M University, two highly regarded researchers and educators in the fields of data science and accelerating data analytics with GPUs.

“Data science unlocks the immense potential of data in solving societal challenges and large-scale complex problems across virtually every domain, from business, technology, science and engineering to healthcare, government and many more,” Chau said.

The free teaching materials cover fundamental and advanced topics in data collection and preprocessing, accelerated data science with RAPIDS, GPU-accelerated machine learning, data visualization and graph analytics.

Content also covers culturally responsive topics such as fairness and bias in data sets, as well as the challenges faced by, and contributions of, people from underrepresented groups.
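
To make the bias discussion concrete, one common fairness check is demographic parity: the gap in positive-outcome rates between two groups. The sketch below is purely illustrative and not drawn from the kit’s materials; the toy approval data is invented.

```python
# Demographic parity difference: the gap between two groups'
# positive-outcome rates. Toy data, not from the teaching kit.
def positive_rate(outcomes):
    """Fraction of positive (1) outcomes in a list of 0/1 labels."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_diff(group_a, group_b):
    """Absolute gap in positive-outcome rates between two groups.
    A value near 0 suggests parity; larger values indicate bias."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical loan approvals: group A approved 75% of the time, group B 25%.
group_a = [1, 1, 1, 0]
group_b = [1, 0, 0, 0]
print(demographic_parity_diff(group_a, group_b))  # 0.5
```

A gap of 0.5 would be a strong signal to investigate the data or model for bias; real fairness audits use several such metrics together.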

This first release of the Accelerated Data Science Teaching Kit includes focused modules covering:

  • Introduction to Data Science and RAPIDS
  • Data Collection and Pre-processing (ETL)
  • Data Ethics and Bias in Data Sets
  • Data Integration and Analytics
  • Data Visualization
  • Distributed Computing with Hadoop, Hive, Spark and RAPIDS
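
The ETL module's core idea, extract raw records, clean and transform them, then aggregate, can be shown in miniature. In the kit this style of work is done at scale with RAPIDS cuDF on GPUs; the pure-Python stand-in below uses invented weather records just to illustrate the pattern.

```python
# A miniature extract-transform-load (ETL) pass in pure Python.
# The kit does this at scale with RAPIDS cuDF; records here are invented.
records = [
    {"city": "Austin", "temp_f": 95},
    {"city": "Austin", "temp_f": 101},
    {"city": "Reno",   "temp_f": 88},
    {"city": "Reno",   "temp_f": None},   # dirty row to be dropped
]

# Transform: drop rows with missing values, convert Fahrenheit to Celsius.
clean = [
    {"city": r["city"], "temp_c": (r["temp_f"] - 32) * 5 / 9}
    for r in records if r["temp_f"] is not None
]

# Load/aggregate: mean temperature per city.
groups = {}
for r in clean:
    groups.setdefault(r["city"], []).append(r["temp_c"])
means = {city: sum(v) / len(v) for city, v in groups.items()}
print(means)
```

With cuDF, the same transform-and-groupby would be a few dataframe calls running on the GPU; the logic is identical.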

More modules are planned for future releases.

All modules include lecture slides, lecture notes and quiz/exam problem sets, and most modules include hands-on labs with included datasets and sample solutions in Python and interactive Jupyter notebook formats. Lecture videos will be included for all modules in later releases.

DLI Teaching Kits also come bundled with free GPU resources in the form of Amazon Web Services credits for educators and their students, as well as free DLI online, self-paced courses and certificate opportunities.

“Data science is such an important field of study, not just because it touches every domain and vertical, but also because data science addresses important societal issues relating to gender, race, age and other ethical elements of humanity,” said Dong, whose school is a Historically Black College/University.

This is the fourth teaching kit released by the DLI, as part of its program that has reached 7,000 qualified educators so far. Learn more about NVIDIA Teaching Kits.

The post NVIDIA Deep Learning Institute Releases New Accelerated Data Science Teaching Kit for Educators appeared first on The Official NVIDIA Blog.

New Training Opportunities Now Available Worldwide from NVIDIA Deep Learning Institute Certified Instructors

For the first time ever, the NVIDIA Deep Learning Institute is making its popular instructor-led workshops available to the general public.

With the launch of public workshops this week, enrollment will be open to individual developers, data scientists, researchers and students. NVIDIA is increasing accessibility and the number of courses available to participants around the world. Anyone can learn from expert NVIDIA instructors in courses on AI, accelerated computing and data science.

Previously, DLI workshops were only available to large organizations that wanted dedicated and specialized training for their in-house developers, or to individuals attending GPU Technology Conferences.

But demand for in-depth training has increased dramatically in the last few years. Individuals are looking to acquire new skills and organizations are seeking to provide their workforces with advanced software development techniques.

“Our public workshops provide a great opportunity for individual developers and smaller organizations to get industry-leading training in deep learning, accelerated computing and data science,” said Will Ramey, global head of Developer Programs at NVIDIA. “Now the same expert instructors and world-class learning materials that help accelerate innovation at leading companies are available to everyone.”

The current lineup of DLI workshops for individuals includes:

March 2021

  • Fundamentals of Accelerated Computing with CUDA Python
  • Applications of AI for Predictive Maintenance

April 2021

  • Fundamentals of Deep Learning
  • Applications of AI for Anomaly Detection
  • Fundamentals of Accelerated Computing with CUDA C/C++
  • Building Transformer-Based Natural Language Processing Applications
  • Deep Learning for Autonomous Vehicles – Perception
  • Fundamentals of Accelerated Data Science with RAPIDS
  • Accelerating CUDA C++ Applications with Multiple GPUs
  • Fundamentals of Deep Learning for Multi-GPUs

May 2021

  • Building Intelligent Recommender Systems
  • Fundamentals of Accelerated Data Science with RAPIDS
  • Deep Learning for Industrial Inspection
  • Building Transformer-Based Natural Language Processing Applications
  • Applications of AI for Anomaly Detection

Visit the DLI website for details on each course and the full schedule of upcoming workshops, which is regularly updated with new training opportunities.

Jump-Start Your Software Development

As organizations invest in transforming their workforce to benefit from modern technologies, it’s critical that their software and solutions development teams are equipped with the right skills and tools. In a market where developers with the latest skills in deep learning, accelerated computing and data science are scarce, DLI strengthens employees’ skill sets through a wide array of course offerings.

The full-day workshops offer a comprehensive learning experience that includes hands-on exercises and guidance from expert instructors certified by DLI. Courses are delivered virtually and in many time zones to reach developers worldwide. Courses are offered in English, Chinese, Japanese and other languages.

Registration fees cover learning materials, instructors and access to fully configured GPU-accelerated development servers for hands-on exercises.

A complete list of DLI courses is available in the DLI course catalog.

Register today for a DLI instructor-led workshop for individuals. Space is limited so sign up early.

For more information, visit the DLI website or email nvdli@nvidia.com.


Get Trained, Go Deep: How Organizations Can Transform Their Workforce into an AI Powerhouse

Despite the pandemic putting in-person training on hold, organizations can still offer instructor-led courses to their staff to develop key skills in AI, data science and accelerated computing.

NVIDIA’s Deep Learning Institute offers many online courses that deliver hands-on training. One of its most popular — recently updated and retitled as The Fundamentals of Deep Learning — will be taken by hundreds of attendees at next week’s GPU Technology Conference, running Oct. 5-9.

Organizations interested in boosting the deep learning skills of their personnel can arrange to get their teams trained by requesting a workshop from the DLI Course Catalog.

“Technology professionals who take our revamped deep learning course will emerge with the basics they need to start applying deep learning to their most challenging AI and machine learning applications,” said Craig Clawson, director of Training Services at NVIDIA. “This course is a key building block for developing a cutting-edge AI skillset.”

Huge Demand for Deep Learning

Deep learning is at the heart of the fast-growing fields of machine learning and AI. This makes it a skill that’s in huge demand and has put companies across industries in a race to recruit talent. LinkedIn recently reported that the fastest-growing job category in the U.S. is AI specialist, with annual job growth of 74 percent and an average annual salary of $136,000.

For many organizations, especially those in the software, internet, IT, higher education and consumer electronics sectors, investing in upskilling current employees can be critical to their success while offering a path to career advancement and increasing worker retention.

Deep Learning Application Development

With interest in the field heating up, a recent article in Forbes highlighted that AI and machine learning, data science and IoT are among the most in-demand skills tech professionals should focus on. In other words, tech workers who lack these skills could soon find themselves at a professional disadvantage.

By developing needed skills, employees can make themselves more valuable to their organizations. And their employers benefit by embedding machine learning and AI functionality into their products, services and business processes.

“Organizations are looking closely at how AI and machine learning can improve their business,” Clawson said. “As they identify opportunities to leverage these technologies, they’re hustling to either develop or import the required skills.”


DLI Courses: An Invaluable Resource

The DLI has trained more than 250,000 developers globally. It has continued to deliver a wide range of training remotely via virtual classrooms during the COVID-19 pandemic.

Classes are taught by DLI-certified instructors who are experts in their fields. Breakout rooms support collaboration among students and interaction with the instructors.

And by completing select courses, students can earn an NVIDIA Deep Learning Institute certificate to demonstrate subject matter competency and support career growth.

It would be hard to exaggerate the potential that this new technology and the NVIDIA developer community holds for improving the world — and the community is growing faster than ever. It took 13 years for the number of registered NVIDIA developers to reach 1 million. Just two years later, it has grown to over 2 million.

Whether enabling new medical procedures, inventing new robots or joining the effort to combat COVID-19, the NVIDIA developer community is breaking new ground every day.

Courses like the re-imagined Fundamentals of Deep Learning are helping developers and data scientists deliver breakthrough innovations across a wide range of industries and application domains.

“Our courses are structured to give developers the skills they need to thrive as AI and machine learning leaders,” said Clawson. “What they take away from the courses, both for themselves and their organizations, is immeasurable.”

To get started on the journey of transforming your organization into an AI powerhouse, request a DLI workshop today.

What is deep learning? Read more about this core technology.


DIY with AI: GTC to Host NVIDIA Deep Learning Institute Courses for Anyone, Anywhere

The NVIDIA Deep Learning Institute is launching three new courses, which can be taken for the first time ever at the GPU Technology Conference next month. 

The new instructor-led workshops cover fundamentals of deep learning, recommender systems and Transformer-based applications. Anyone connected online can join for a nominal fee, and participants will have access to a fully configured, GPU-accelerated server in the cloud. 

DLI instructor-led trainings consist of hands-on remote learning taught by NVIDIA-certified experts in virtual classrooms. Participants can interact with their instructors and peers in real time. They can whiteboard ideas, tackle interactive coding challenges and earn a DLI certificate of subject competency to support their professional growth.

DLI at GTC is offered globally, with several courses available in Korean, Japanese and Simplified Chinese for attendees in their respective time zones.

New DLI workshops launching at GTC include:

  • Fundamentals of Deep Learning — Build the confidence to take on a deep learning project by learning how to train a model, work with common data types and model architectures, use transfer learning between models, and more.
  • Building Intelligent Recommender Systems — Create different types of recommender systems: content-based, collaborative filtering, hybrid, and more. Learn how to use the open-source cuDF library, Apache Arrow, alternating least squares, CuPy and TensorFlow 2 to do so.
  • Building Transformer-Based Natural Language Processing Applications — Learn about NLP topics like Word2Vec and recurrent neural network-based embeddings, as well as Transformer architecture features and how to improve them. Use pre-trained NLP models for text classification, named-entity recognition and question answering, and deploy refined models for live applications.
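
The recommender workshop builds content-based and collaborative-filtering systems with cuDF, CuPy and TensorFlow 2. Stripped to its essence (this is a bare-bones sketch, not workshop material, and the item feature vectors are invented), a content-based recommender is just a similarity ranking:

```python
import math

# Content-based recommendation in miniature: rank items by cosine
# similarity between a user's preference vector and item feature vectors.
# Features are invented (say, genre weights: [action, comedy, drama]).
items = {
    "Movie A": [1.0, 0.0, 0.2],
    "Movie B": [0.0, 1.0, 0.1],
    "Movie C": [0.9, 0.1, 0.3],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def recommend(user_profile, items):
    """Return item names ranked most-similar-first to the user's tastes."""
    return sorted(items, key=lambda name: cosine(user_profile, items[name]),
                  reverse=True)

# A user who mostly likes action films:
ranking = recommend([1.0, 0.0, 0.0], items)
print(ranking)  # the action-heavy titles outrank the comedy
```

The workshop's GPU-accelerated versions apply the same idea to millions of items, which is where cuDF and CuPy earn their keep.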

Other DLI offerings at GTC will include:

  • Fundamentals of Accelerated Computing with CUDA Python — Dive into how to use Numba to compile NVIDIA CUDA kernels from NumPy universal functions, as well as create and launch custom CUDA kernels, while applying key GPU memory management techniques.
  • Applications of AI for Predictive Maintenance — Leverage predictive maintenance and identify anomalies to manage failures and avoid costly unplanned downtimes, use time-series data to predict outcomes using machine learning classification models with XGBoost, and more.
  • Fundamentals of Accelerated Data Science with RAPIDS — Learn how to use cuDF and Dask to ingest and manipulate large datasets directly on the GPU, applying GPU-accelerated machine learning algorithms including XGBoost, cuGRAPH and cuML to perform data analysis at massive scale.
  • Fundamentals of Accelerated Computing with CUDA C/C++ — Find out how to accelerate CPU-only applications to run their latent parallelism on GPUs, using techniques like essential CUDA memory management to optimize accelerated applications.
  • Fundamentals of Deep Learning for Multi-GPUs — Scale deep learning training to multiple GPUs, significantly shortening the time required to train lots of data and making solving complex problems with deep learning feasible.
  • Applications of AI for Anomaly Detection — Discover how to implement multiple AI-based solutions to identify network intrusions, using accelerated XGBoost, deep learning-based autoencoders and generative adversarial networks.
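
The anomaly-detection workshop uses accelerated XGBoost, deep autoencoders and GANs. The core intuition behind an autoencoder detector, flag inputs the model reconstructs poorly, can be sketched with a far cruder "reconstruct as the mean" baseline; the snippet below is purely illustrative, with toy data, and is not workshop code.

```python
# Autoencoder-style anomaly detection in miniature: "reconstruct" every
# input as the feature-wise mean of normal traffic, and flag inputs with
# large reconstruction error. Toy data, purely illustrative.
normal = [[1.0, 2.0], [1.2, 1.9], [0.9, 2.1], [1.1, 2.0]]

mean = [sum(col) / len(col) for col in zip(*normal)]

def reconstruction_error(x):
    """Squared distance between an input and its 'reconstruction'."""
    return sum((a - b) ** 2 for a, b in zip(x, mean))

# Threshold: comfortably above the worst error seen on normal data.
threshold = 2 * max(reconstruction_error(x) for x in normal)

def is_anomaly(x):
    return reconstruction_error(x) > threshold

print(is_anomaly([1.0, 2.0]))   # a normal-looking point
print(is_anomaly([9.0, -3.0]))  # far outside the normal traffic pattern
```

A real autoencoder learns a much richer reconstruction than a mean vector, but the decision rule, large reconstruction error means anomaly, is the same.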

With more than 2 million registered NVIDIA developers working on technological breakthroughs to solve the world’s toughest problems, the demand for deep learning expertise is greater than ever. The full DLI course catalog includes a variety of topics for anyone interested in learning more about AI, accelerated computing and data science.


Workshops have limited seating, and the early-bird deadline is Sept. 25. Register now.


2 Million Registered Developers, Countless Breakthroughs

Everyone has problems.

Whether they’re tackling challenges at the cutting edge of physics, trying to tame a worldwide pandemic, or sorting their child’s Lego collection, innovators join NVIDIA’s developer program to help them solve their most challenging problems.

With the number of registered NVIDIA developers having just hit 2 million, NVIDIA developers are pursuing more breakthroughs than ever.

Their ranks continue to grow by larger numbers every year. It took 13 years to reach 1 million registered developers, and less than two more to reach 2 million.

Most recently, teams at the U.S. National Institutes of Health, Scripps Research Institute and Oak Ridge National Laboratory have been among the NVIDIA developers at the forefront of efforts to combat COVID-19.

Every Country, Every Field

No surprise. Whether they’re software programmers, data scientists or devops engineers, developers are problem solvers.

They write, debug and optimize code, often taking a set of software building blocks — frameworks, application programming interfaces and other tools — and putting them to work to do something new.

These developers include business and academic leaders from every region in the world.

In China, Alibaba and Baidu are among the most active GPU developers. In North America, those names include Microsoft, Amazon and Google. In Japan, it’s Sony, Hitachi and Panasonic. In Europe, they include Bosch, Daimler and Siemens.

All the top technical universities are represented, including Caltech, MIT, Oxford, Cambridge, Stanford, Tsinghua University, the University of Tokyo, and IIT campuses throughout India.

Look beyond the big names — there are too many to drop here — and you’ll find tens of thousands of entrepreneurs, hobbyists and enthusiasts.

Developers are signing up for our developer program to put NVIDIA accelerated computing tools to work across fields such as scientific and high performance computing, graphics and professional visualization, robotics, AI and data science, networking, and autonomous vehicles.

Developers are trained and equipped for success through our GTC conferences, online and in-person tutorials, our Deep Learning Institute training, and technical blogs. We provide them with software development kits such as CUDA, cuDNN, TensorRT and OptiX.

Registered developers account for 100,000 downloads a month, thousands participate each month in DLI training sessions, and thousands more engage in our online forums or attend conferences and webinars.

NVIDIA’s developer program, however, is just a piece of a much bigger developer story. There are now more than a billion CUDA GPUs in the world — each capable of running CUDA-accelerated software — giving developers, hackers and makers a vast installed base to work with.

As a result, downloads of CUDA, which is free and requires no registration, far outnumber registered developers. On average, 39,000 developers sign up for memberships each month, and 438,000 download CUDA.

That’s an awful lot of problem solvers.

Breakthroughs in Science and Research

The ranks of those who depend on such problem solvers include the team who won the 2017 Nobel Prize in Chemistry — Jacques Dubochet, Joachim Frank and Richard Henderson — for their contribution to cryogenic electron microscopy.

They also include the team that won the 2017 Nobel Prize in Physics — Rainer Weiss, Barry Barish and Kip Thorne — for their work detecting gravitational waves.

More scientific breakthroughs are coming, as developers attack new HPC problems and, increasingly, deep learning.

William Tang, principal research physicist at the Princeton Plasma Physics Laboratory — one of the world’s foremost experts on fusion energy — leads a team using deep learning and HPC to advance the quest for cheap, clean energy.

Michael Kirk and Raphael Attie, scientists at NASA’s Goddard Space Flight Center — are among the many active GPU developers at NASA — relying on Quadro RTX data science workstations to analyze the vast quantities of data streaming in from satellites monitoring the sun.

And at UC Berkeley, astrophysics Ph.D. student Gerry Zhang uses GPU-accelerated deep learning to analyze signals from space for signs of intelligent extraterrestrial civilizations.

Top Companies

Outside of research and academia, developers at the world’s top companies are tackling problems faced by every one of the world’s industries.

At Intuit, Chief Data Officer Ashok Srivastava leads a team using GPU-accelerated machine learning to help consumers with taxes and help small businesses through the financial effects of COVID-19.

At health insurer Anthem, Chief Digital Officer Rajeev Ronanki uses GPU-accelerated AI to help patients personalize and better understand their healthcare information.

Arne Stoschek, head of autonomous systems at Acubed, the Silicon Valley-based advanced products and partnerships outpost of Airbus Group, is developing self-piloted air taxis powered by GPU-accelerated AI.

New Problems, New Businesses: Entrepreneurs Swell Developer Ranks

Other developers — many supported by the NVIDIA Inception program — work at startups building businesses that solve new kinds of problems.

Looking to invest in a genuine pair of vintage Air Jordans? Michael Hall, director of data at GOAT Group, uses GPU-accelerated AI to help the startup connect sneaker enthusiasts with Air Jordans, Yeezys and a variety of old-school kicks that they can be confident are authentic.

Don’t know what to wear? Brad Klingenberg, chief algorithms officer at fashion ecommerce startup Stitch Fix, leads a team that uses GPU-accelerated AI to help us all dress better.

And Benjamin Schmidt, at Roadbotics, offers what might be the ultimate case study in how developers are solving concrete problems: his startup helps cities find and fix potholes.

Entrepreneurs are also supported by NVIDIA’s Inception program, which includes more than 6,000 startups in industries ranging from agriculture to healthcare to logistics to manufacturing.

Of course, just because something’s a problem, doesn’t mean you can’t love solving it.

Love beer? Eric Boucher, a home brewing enthusiast, is using AI to invent new kinds of suds.

Love a critter-free lawn? Robert Bond has trained a system that detects cats and gently shoos them off his grass by turning on the sprinklers, to the amazement and delight of his grandchildren.

Francisco “Paco” Garcia has even trained an AI to help sort out his children’s Lego pile.

Most telling: stories from developers working at the cutting edge of the arts.

Pierre Barreau has created an AI, named AIVA, which uses mathematical models based on the work of great composers to create new music.

And Raiders of the Lost Art — a collaboration between Anthony Bourached and George Cann, a pair of Ph.D. candidates at University College London — has used neural style transfer techniques to tease out hidden artwork in a Leonardo da Vinci painting.

Wherever you go, follow the computing power and you’ll find developers delivering breakthroughs.

How big is the opportunity for problem solvers like these? However many problems there are in the world.

Want more stories like these? No problem. Over the months to come, we’ll be bringing as many to you as we can. 


It’s Not Pocket Science: Undergrads at Hackathon Create App to Evaluate At-Home Physical Therapy Exercises

The four undergrads met for the first time at the Stanford TreeHacks hackathon, became close friends, and developed an AI-powered app to help physical therapy patients ensure correct posture for their at-home exercises — all within 36 hours.

Back in February, just before the lockdown, Shachi Champaneri, Lilliana de Souza, Riley Howk and Deepa Marti happened to sit across from each other at the event’s introductory session and almost immediately decided to form a team for the competition.

Together, they created PocketPT, an app that lets users know whether they’re completing a physical therapy exercise with the correct posture and form. It captured two prizes against a crowded field, and inspired them to continue using AI to help others.

The app’s AI model uses the NVIDIA Jetson Nano developer kit to detect a user doing the tree pose, a position known to increase shoulder muscle strength and improve balance. The Jetson Nano performs image classification, so the model can tell whether the pose is being done correctly based on the more than 100 images it was trained on, which the team took of themselves. The app then provides feedback, letting users know if they should adjust their form.
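
The classify-then-give-feedback loop can be illustrated with a deliberately simplified stand-in. The snippet below is hypothetical and not the team’s code: it replaces the Nano’s neural network with a nearest-centroid classifier over invented 2D "joint angle" features, just to show the correct-versus-incorrect decision.

```python
import math

# Hypothetical, much-simplified stand-in for PocketPT's classifier:
# nearest-centroid over invented 2D joint-angle features, in place of
# the neural network the team trained on the Jetson Nano.
training = {
    "correct":   [[175.0, 90.0], [172.0, 88.0], [178.0, 92.0]],
    "incorrect": [[140.0, 60.0], [150.0, 70.0], [135.0, 65.0]],
}

# One centroid per label: the feature-wise mean of its training samples.
centroids = {
    label: [sum(col) / len(col) for col in zip(*samples)]
    for label, samples in training.items()
}

def classify(features):
    """Return the label whose centroid is closest to the feature vector."""
    return min(centroids, key=lambda lbl: math.dist(features, centroids[lbl]))

print(classify([174.0, 89.0]))  # a pose close to the "correct" examples
```

A real pose model learns its features from raw pixels rather than hand-picked angles, but the final step, mapping a feature vector to "correct" or "adjust your form", is the same kind of decision.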

“It can be taxing for patients to go to the physical therapist often, both financially and physically,” said Howk.

Continuing exercises at home is a crucial part of recovery for physical therapy patients, but doing them incorrectly can actually hinder progress, she explained.

Bringing the Idea to Life

In the months leading up to the hackathon, Howk, a rising senior at the University of Alabama, was interning in Los Angeles, where a yoga studio is virtually on every corner. She’d arrived at the competition with the idea to create some kind of yoga app, but it wasn’t until the team came across the NVIDIA table at the hackathon’s sponsor fair that they realized the idea’s potential to expand and help those in need.

“A demo of the Jetson Nano displayed how the system can track bodily movement down to the joint,” said Marti, a rising sophomore at UC Davis. “That’s what sparked the possibility of making a physical therapy app, rather than limiting it to yoga.”

None of the team members had prior experience working with deep learning and computer vision, so they faced the challenge of learning how to implement the model in such a short period of time.

“The NVIDIA mentors were really helpful,” said Champaneri, a rising senior at UC Davis. “They put together a tutorial guide on how to use the Nano that gave us the right footing and outline to follow and implement the idea.”

Over the first night of the hackathon, the team took NVIDIA’s Deep Learning Institute course on getting started with AI on the Jetson Nano. By morning, they’d grasped the basics of deep learning, and they began hacking and training the model with images of themselves displaying correct versus incorrect exercise poses.

In just 36 hours since the idea first emerged, PocketPT was born.

Winning More Than Just Awards

The most exciting part of the weekend was finding out the team had made it to final pitches, according to Howk. They presented their project in front of a crowd of 500 and later found out that it had won the two prizes.

The hackathon attracted 197 projects. Competing against 65 other projects in the Medical Access category — many of which used cloud or other platforms — their project took home the category’s grand prize. It was also chosen as “Best Use of Jetson Hack,” beating out the 11 other groups that borrowed a Jetson for their projects.

But the quartet is looking to do more with their app than win awards.

Because of the fast-paced nature of the hackathon, PocketPT was only able to fully implement one pose, with others still in the works. However, the team is committed to expanding the product and promoting their overall mission of making physical therapy easily accessible to all.

While the hackathon took place just before the COVID outbreak in the U.S., the team highlighted how their project seems to be all the more relevant now.

“We didn’t even realize we were developing something that would become the future, which is telemedicine,” said de Souza, a rising senior at Northwestern University. “We were creating an at-home version of PT, which is very much needed right now. It’s definitely worth our time to continue working on this project.”

Read about other Jetson projects on the Jetson community projects page and get acquainted with other developers on the Jetson forum page.

Learn how to get started on a Jetson project of your own on the Jetson developers page.


Virtually Free GTC: 30,000 Developers and AI Researchers to Access Hundreds of Hours of No-Cost Sessions at GTC Digital

Just three weeks ago, we announced plans to take GTC online due to the COVID-19 crisis.

Since then, a small army of researchers, partners, customers and NVIDIA employees has worked remotely to produce GTC Digital, which kicks off this week.

GTC typically packs hundreds of hours of talks, presentations and conversations into a five-day event in San Jose.

Our goal with GTC Digital is to bring some of the best aspects of this event to a global audience, and make it accessible for months.

Hundreds of our speakers — among the most talented, experienced scientists and researchers in the world — agreed to participate. Apart from the instructor-led, hands-on workshops and training sessions, which require a nominal fee, we’re delighted to bring this content to the global community at no cost. And we’ve incorporated new platforms to facilitate interaction and engagement.

Accelerating Blender Cycles with NVIDIA RTX: Blender is an open-source 3D software package that comes with the Cycles renderer. Cycles is already a GPU-enabled path tracer, now supercharged by the latest generation of RTX GPUs. Beyond raw rendering speed, RTX AI features such as the OptiX Denoiser infer rendering results for a truly interactive ray-tracing experience.

We provided refunds to those who purchased a GTC 2020 pass, and those tickets have been converted to GTC Digital passes. Passholders just need to log in with GTC 2020 credentials to get started. Anyone else can attend with free registration.

Most GTC Digital content is for a technical audience of data scientists, researchers and developers. But we also offer high-level talks and podcasts on various topics, including women in data science, AI for business and responsible AI.

What’s in Store at GTC Digital

The following activities will be virtual events that take place at a specific time (early registration recommended). Participants will be able to interact in real time with the presenters.

Training Sessions:

  • Seven full-day, instructor-led workshops, from March 25 to April 2, on data science, deep learning, CUDA, cybersecurity, AI for predictive maintenance, AI for anomaly detection, and autonomous vehicles. Each full-day workshop costs $79.
  • Fifteen 2-hour training sessions running April 6-10, on various topics, including autonomous vehicles, CUDA, conversational AI, data science, deep learning inference, intelligent video analytics, medical imaging, recommendation systems, deep learning training at scale, and using containers for HPC. Each two-hour instructor-led session costs $39. 

Live Webinars: Seventeen 1-hour sessions, from March 24-April 8, on various topics, including data science, conversational AI, edge computing, deep learning, IVA, autonomous machines and more. Live webinars will be converted to on-demand content and posted within 48 hours. Free. 

Connect with Experts: Thirty-eight 1-hour sessions, from March 25-April 10, where participants can chat one-on-one with NVIDIA experts to get answers in a virtual classroom. Topics include conversational AI, recommender systems, deep learning training and autonomous vehicle development. Free. 

The following activities will be available on demand:

Recorded Talks: More than 150 recorded presentations with experts from leading companies around the world, speaking on a variety of topics such as computer vision, edge computing, conversational AI, data science, CUDA, graphics and ray tracing, medical imaging, virtualization, weather modeling and more. Free. 

Tech Demos: We’ll feature amazing demo videos, narrated by experts, highlighting how NVIDIA GPUs are accelerating creative workflows, enabling analysis of massive datasets and helping advance research. Free. 

AI Podcast: Several half-hour interviews with leaders across AI and accelerated computing will be posted over the next four weeks. Among them: Kathy Baxter, of Salesforce, on responsible AI; Stanford Professor Margot Gerritsen on women in data science and how data science intersects with AI; Ryan Coffee, of the SLAC National Accelerator Laboratory, on how deep learning is advancing physics research; and Richard Loft, of the National Center for Atmospheric Research, on how AI is helping scientists better model climate change. Free.

Posters: A virtual gallery of 140+ posters from researchers around the world showing how they are solving unique problems with GPUs. Registrants will be able to contact and share feedback with researchers. Free. 

For the Einsteins and Da Vincis of Our Time

The world faces extraordinary challenges now, and the scientists, researchers and developers focused on solving them need extraordinary tools and technology. Our goal with GTC has always been to help the world’s leading developers — the Einsteins and Da Vincis of our time — solve difficult challenges with accelerated computing. And that’s still our goal with GTC Digital.

Whether you work for a small startup or a large enterprise, in the public or private sector, wherever you are, we encourage you to take part, and we look forward to hearing from you.

The post Virtually Free GTC: 30,000 Developers and AI Researchers to Access Hundreds of Hours of No-Cost Sessions at GTC Digital appeared first on The Official NVIDIA Blog.

AI Meets 5G at the Edge: The Innovation Center at MWC 2020 in Barcelona

AI is alive at the edge of the network, where it’s already transforming everything from car makers to supermarkets. And we’re just getting started.

NVIDIA’s AI Edge Innovation Center, a first for this year’s Mobile World Congress (MWC) in Barcelona, will put attendees at the intersection of AI, 5G and edge computing. There, they can hear about best practices for AI at the edge and get an update on how NVIDIA GPUs are paving the way to better, smarter 5G services.

It’s a story that’s moving fast.

AI was born in the cloud to process the vast amounts of data needed for jobs like recommending new products and optimizing news feeds. But most enterprises interact with their customers and products in the physical world at the edge of the network — in stores, warehouses and smart cities.

The need to sense, infer and act in real time as conditions change is driving the next wave of AI adoption at the edge. That’s why a growing list of forward-thinking companies are building their own AI capabilities using the NVIDIA EGX edge-computing platform.

Walmart, for example, built a smart supermarket it calls its Intelligent Retail Lab. Jakarta uses AI in a smart city application to manage its vehicle registration program. BMW and Procter & Gamble automate inspection of their products in smart factories. They all use NVIDIA EGX along with our Metropolis application framework for video and data analytics.

For conversational AI, the NVIDIA Jarvis developer kit enables voice assistants geared to run on embedded GPUs in smart cars or other systems. WeChat, the world’s most popular smartphone app, accelerates conversational AI using NVIDIA TensorRT software for inference.

All these software stacks ride on our CUDA-X libraries, tools and technologies that run on an installed base of more than 500 million NVIDIA GPUs.

Carriers Make the Call

At MWC Los Angeles this year, NVIDIA founder and CEO Jensen Huang announced Aerial, software that rides on the EGX platform to let telecommunications companies harness the power of GPU acceleration.

Ericsson’s Fredrik Jejdling, executive vice president and head of Business Area Networks, joined NVIDIA CEO Jensen Huang on stage at MWC LA to announce their collaboration.

With Aerial, carriers can both increase the spectral efficiency of their virtualized 5G radio-access networks and offer new AI services for smart cities, smart factories, cloud gaming and more — all on the same computing platform.

In Barcelona, NVIDIA and partners including Ericsson will give an update on how Aerial will reshape the mobile edge network.

Verizon is already using NVIDIA GPUs at the edge to deliver real-time ray tracing for AR/VR applications over 5G networks.

It’s one of several ways telecom applications can be taken to the next level with GPU acceleration. Imagine having the ability to process complex AI jobs on the nearest base station with the speed and ease of making a cellular call.

Your Dance Card for Barcelona

For a few days in February, we will turn our innovation center — located at Fira de Barcelona, Hall 4 — into a virtual university on AI with 5G at the edge. Attendees will get a world-class deep dive on this strategic technology mashup and how companies are leveraging it to monetize 5G.

Sessions start Monday morning, Feb. 24, and include AI customer case studies in retail, manufacturing and smart cities. Afternoon talks will explore consumer applications such as cloud gaming, 5G-enabled cloud AR/VR and AI in live sports.

We’ve partnered with the organizers of MWC on applied AI sessions on Tuesday, Feb. 25. These presentations will cover topics like federated learning, an emerging technique for collaborating on the development and training of AI models while protecting data privacy.

Wednesday’s schedule features three roundtables where attendees can meet executives working at the intersection of AI, 5G and edge computing. The week also includes two instructor-led sessions from the NVIDIA Deep Learning Institute, which trains developers on best practices.

See Demos, Take a Meeting

For a hands-on experience, check out our lineup of demos based on the NVIDIA EGX platform. These will highlight applications such as object detection in a retail setting, ways to unsnarl traffic congestion in a smart city and our cloud-gaming service GeForce Now.

To learn more about the capabilities of AI, 5G and edge computing, check out the full agenda and book an appointment here.

The post AI Meets 5G at the Edge: The Innovation Center at MWC 2020 in Barcelona appeared first on The Official NVIDIA Blog.

SC18: World’s Best-Educated Graffiti Wall Celebrates GPU Developer Community

Call it the heart of the heart of the SC18 supercomputing show.

Possibly the world’s best-educated graffiti wall, the whiteboard-sized graphic tracks the dizzying growth of the NVIDIA Developer Program — from a single soul to its current 1.1 million members. In a rainbow of colors and a multiplicity of handwriting styles, developers are penning notes describing their own contributions, each placed under the year it occurred.

Beside an illuminated line tracking the number of developers each year, towers of note cards are rising, growing by the hour as more attendees take part in the project. The work of computing legends sits beside that of anonymous engineers.

2008 begins with “World’s First GPU-accelerated Supercomputer in Top500: Tsubame 1.2.” Midway above the 2010 stack is the first “GPU-accelerated Molecular Simulation” by Erik Lindahl, the Stockholm University biologist. 2012 features “AlexNet Wins ImageNet” by Alex Krizhevsky, considered a defining moment ushering in the era of artificial intelligence.

“It’s a crowdsourced celebration of the GPU developer ecosystem,” said Greg Estes, vice president of developer programs and corporate marketing at NVIDIA.

The living yearbook — which after the show will take pride of place on NVIDIA’s Santa Clara campus — depicts the story of the growth of accelerated computing, propelled less by silicon and more by individual imagination and dazzling coding skills.

It embraces work that’s been awarded Nobel Prizes in physics and chemistry. It’s made possible the world’s fastest supercomputers, which are driving groundbreaking research in fields as far-flung as particle physics and planetary science. And it’s opened the door to video games so realistic that they begin to blur with movies. And to movies with effects so mind blowing, they push to the far edges of human imagination.

The NVIDIA Developer Program, which recently pushed above 1.1 million individuals, continues to grow steadily because of the emergence of AI, as well as continued growth in robotics, game development, data science and ray tracing.

Developers who sign up for the free program gain access to more than 100 SDKs, performance analysis tools, member discounts and hands-on training on CUDA, OpenACC, AI and machine learning through the NVIDIA Deep Learning Institute. It’s a package of tools and offerings that simplifies the task of tapping into the incredible processing power and efficiency of GPUs, whose performance has increased many times over in just a few generations.

The timeline begins with hoary entries that, by the standards of accelerated computing, seem modest and almost accidental. But they laid the foundation for work to come, including monumental achievements that have changed the shape of science.

Among them, the 2002 milestone when NVIDIA’s Mark Harris coined the term GPGPU — general purpose computation on graphics processing units. This inspired developers to find ways to use GPUs for compute functions beyond their traditional focus on graphics.

Two years later, NVIDIA’s Ian Buck and a team of researchers introduced Brook for GPUs, the first GPGPU programming language. And two years after that, in 2006, we launched CUDA, our accelerated parallel computing architecture, which has since been downloaded more than 12 million times.

The board’s very first entry, though, far precedes Harris’s and Buck’s work, and was even more foundational. “Just The First,” it reads, signed by Jensen Huang, dated 1993, the year of NVIDIA’s founding.

The post SC18: World’s Best-Educated Graffiti Wall Celebrates GPU Developer Community appeared first on The Official NVIDIA Blog.