AI Gone Global: Why 20,000+ Developers from Emerging Markets Signed Up for GTC

Major tech conferences are typically hosted in highly industrialized countries. But the appetite for AI and data science resources spans the globe — with an estimated 3 million developers in emerging markets.

Our recent GPU Technology Conference — virtual, free to register, and offering 24/7 content — for the first time featured a dedicated track on AI in emerging markets. The conference attracted a record 20,000+ developers, industry leaders, policymakers and researchers from emerging markets across 95 countries.

These registrations accounted for around 10 percent of all signups for GTC. We saw a 6x jump from last spring’s GTC in registrations from Latin America, a 10x boost in registrations from the Middle East and a nearly 30x jump in registrations from African countries.

Nigeria alone accounted for more than 1,300 signups, and developers from 30 countries in Latin America and the Caribbean registered for the conference.

These attendees weren’t simply absorbing high-level content — they were leading conversations.

Dozens of startup founders from emerging markets shared their innovations. Community leaders, major tech companies and nonprofits discussed their work to build resources for developers in the Caribbean, Latin America and Africa. And hands-on labs, training and networking sessions offered opportunities for attendees to boost their skills and ask questions of AI experts.

We’re still growing our emerging markets initiatives to better connect with developers worldwide. As we do so, we’ll incorporate three key takeaways from this GTC:

  1. Remove Barriers to Access

While in-person AI conferences typically draw attendees from around the world, these opportunities aren’t equally accessible to developers from every region.

Though Africa has the world’s fastest-growing community of AI developers, visa challenges have in recent years prevented some African researchers from attending AI conferences in the U.S. and Canada. And the cost of conference registrations, flights and hotel accommodations in major tech hubs can be prohibitive for many, even at discounted rates.

By making GTC21 virtual and free to register, we were able to welcome thousands of attendees and presenters from countries including Kenya, Zimbabwe, Trinidad and Tobago, Ghana and Indonesia.

  2. Spotlight Region-Specific Challenges, Successes

Opening access is just the first step. A developer from Nigeria faces different challenges than one in Norway, so global representation in conference speakers can help provide a diversity of perspectives. Relevant content that’s localized by topic or language can help cater to the unique needs of a specific audience and market.

The Emerging Markets Pavilion at GTC, hosted by NVIDIA Inception, our acceleration platform for AI startups, featured companies developing augmented reality apps for cultural tourism in Tunisia, smart video analytics in Lebanon and data science tools in Mexico, to name a few examples.

Several panel discussions brought together public sector reps, United Nations leads, community leaders and developer advocates from NVIDIA, Google, Amazon Web Services and other companies for discussions on how to bolster AI ecosystems around the world. And a session on AI in Africa focused on ways to further AI and data science education for a community that mostly learns through non-traditional pathways.

  3. Foster Opportunities to Learn and Connect

Developer groups in emerging markets are growing rapidly, with many building skills through online courses or community forums, rather than relying on traditional degree programs. One way we’re supporting this is by sponsoring AI hackathons in Africa with Zindi, an online forum that brings together thousands of developers to solve challenges for companies and governments across the continent.

The NVIDIA Developer Program includes tens of thousands of members from emerging markets — but there are hundreds of thousands more developers in these regions poised to take advantage of AI and accelerated applications to power their work.

To learn more about GTC, watch the replay of NVIDIA CEO Jensen Huang’s keynote address. Join the NVIDIA Developer Program for access to a wide variety of tools and training to accelerate AI, HPC and advanced graphics applications.

The post AI Gone Global: Why 20,000+ Developers from Emerging Markets Signed Up for GTC appeared first on The Official NVIDIA Blog.

Around the World in AI Ways: Video Explores Machine Learning’s Global Impact

You may have used AI in your smartphone or smart speaker, but have you seen how it comes alive in an artist’s brush stroke, how it animates artificial limbs or assists astronauts in Earth’s orbit?

The latest video in the “I Am AI” series — the annual scene setter for the keynote at NVIDIA’s GTC — invites viewers on a journey through more than a dozen ways this new and powerful form of computing is expanding horizons.

Perhaps your smart speaker woke you up this morning to music from a distant radio station. Maybe you used AI in your smartphone to translate a foreign phrase in a book you’re reading.

A View of What’s to Come

These everyday use cases are becoming almost commonplace. Meanwhile, the frontiers of AI are extending to advance more critical needs.

In healthcare, the Bionic Vision Lab at UC Santa Barbara uses deep learning and virtual prototyping on NVIDIA GPUs to develop models of artificial eyes. These models let researchers explore a design's potential and limits by viewing it through a virtual-reality headset.

At Canada’s University of Waterloo, researchers are using AI to develop autonomous controls for exoskeleton legs that help users walk, climb stairs and avoid obstacles. Wearable cameras filter video through AI models trained on NVIDIA GPUs to recognize surrounding features such as stairs and doorways and then determine the best movements to take.
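The control flow in that description (classify the terrain from camera frames, then pick a movement) can be sketched in a few lines of Python. The class labels and gait modes below are illustrative assumptions, not the actual ExoNet models or API:

```python
# Hypothetical sketch of an exoskeleton's perception-to-control loop.
# A camera-fed classifier emits an environment class; the controller
# maps it to a gait mode. Labels and modes are invented for illustration.
GAIT_FOR_ENV = {
    "level_ground": "walk",
    "incline_stairs": "stair_ascent",
    "decline_stairs": "stair_descent",
    "doorway": "slow_walk",
    "obstacle": "step_over",
}

def choose_gait(env_class: str) -> str:
    """Pick a gait mode for the classified terrain; default to walking."""
    return GAIT_FOR_ENV.get(env_class, "walk")

print(choose_gait("incline_stairs"))  # prints stair_ascent
```

In a real system the classifier's output would be smoothed over several frames before switching modes, since a single misclassified frame shouldn't change the wearer's gait.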

“Similar to autonomous cars that drive themselves, we’re designing autonomous exoskeletons that walk for themselves,” Brokoslaw Laschowski, a lead researcher on the ExoNet project, said in a recent blog.

Watch New Worlds Come to Life

In “I Am AI,” we meet Sofia Crespo, who calls herself a generative artist. She blends and morphs images of jellyfish, corals and insects in videos that celebrate the diversity of life, using an emerging form of AI called generative adversarial networks and neural network models like GPT-2.

A fanciful creature created by artist Sofia Crespo using GANs.

“Can we use these technologies to dream up new biodiversities that don’t exist? What would these creatures look like?” she asks in a separate video describing her work.

See How AI Guards Ocean Life

“I Am AI” travels to Hawaii, Morocco, the Seychelles and the U.K., where machine learning is on the job protecting marine life from very real threats.

In Africa, the ATLAN Space project uses a fleet of autonomous drones with AI-powered computer vision to detect illegal fishing and ships dumping oil into the sea.

On the other side of the planet, the Maui dolphin is on the brink of extinction, with only 63 adults in the latest count. A nonprofit called MAUI63 uses AI in drones to identify individuals by their fin markings, tracking their movements so policy makers can take steps such as creating marine sanctuaries to protect them.

Taking the Planet’s Temperature

AI is also at work developing the big picture in planet ecology.

The video spotlights the Plymouth Marine Laboratory in the UK, where researchers use an NVIDIA DGX system to analyze data gathered on the state of our oceans. Their work contributes to the U.N. Sustainable Development Goals and other efforts to monitor the health of the seas.

A team of Stanford researchers is using AI to track wildfire risks. The video provides a snapshot of their work opening doors to deeper understandings of how ecosystems are affected by changes in water availability and climate.

Beam Me Up, NASA

The sky’s the limit with the Spaceborne Computer-2, a supercharged system made by Hewlett Packard Enterprise now installed in the International Space Station. It packs NVIDIA GPUs that astronauts use to monitor their health in real time and track objects in space and on Earth like a cosmic traffic copter.

Astronauts use Spaceborne Computer-2 to run AI experiments on the ISS.

One of the coolest things about Spaceborne Computer-2 is you can suggest an experiment to run on it. HPE and NASA extended an open invitation for proposals, so Earth-bound scientists can expand the use of AI in space.

If these examples don’t blow the roof off your image of where machine learning might go next, check out the full “I Am AI” video below. It includes several more examples of other AI projects in art, science and beyond.


Cultivating AI: AgTech Industry Taps NVIDIA GPUs to Protect the Planet

What began as a budding academic movement into farm AI projects has now blossomed into a field of startups creating agriculture technology with a positive social impact for Earth.

Whether it’s the threat to honey bees worldwide from varroa mites, devastation to citrus markets from citrus greening, or contamination of groundwater caused by agrochemicals — AI startups are enlisting NVIDIA GPUs to help solve these problems.

With Earth Day upon us, here’s a look at some of the work of developers, researchers and entrepreneurs who are harnessing NVIDIA GPUs to protect the planet.

The Bee’s Knees: Parasite Prevention 

Bees are under siege by varroa parasites destroying their colonies. And saving the world’s honeybee population is about a lot more than just honey. Bees are now so scarce that all kinds of farmers need to rent them to get their own crops pollinated.

Beewise, a startup based in Israel, has developed robo hives with computer vision for infestation identification and treatment capabilities. In December, TIME magazine named the Beewise Beehome to its “Best Inventions of 2020” list. Others are using deep learning to understand hives better and look at improved hive designs.

Orange You Glad AI Helps

If it weren’t for AI, that glass of orange juice for breakfast might be a puckery one. A rampant “citrus greening” disease is decimating orchards and souring fruit worldwide. Thankfully, University of Florida researchers are developing computer vision for smart sprayers of agrochemicals, which are now being licensed and deployed in pilot tests by CCI, an agricultural equipment company.

The system can adjust in real time to turn off or on the application of crop protection products or fertilizers as well as adjust the amount sprayed based on the plant’s size.
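As a rough illustration, that decision logic might be sketched as follows. The threshold and rate scaling are assumptions for the sake of the example, not the actual University of Florida or CCI sprayer logic:

```python
# Illustrative smart-sprayer rule: shut the nozzle off over bare ground,
# otherwise scale the application rate to the detected plant's size.
# The threshold and base rate are invented for this sketch.

def spray_rate(plant_area_m2: float, base_rate_l_min: float = 1.0) -> float:
    """Liters per minute to apply at one nozzle for one detected plant."""
    MIN_PLANT_AREA = 0.05  # below this, treat the spot as bare ground
    if plant_area_m2 < MIN_PLANT_AREA:
        return 0.0  # turn the nozzle off entirely
    return base_rate_l_min * plant_area_m2  # dose grows with canopy size

print(spray_rate(0.0))   # prints 0.0
print(spray_rate(2.5))   # prints 2.5
```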

SeeTree, based in Israel, is tackling citrus greening, too. It offers a GPU-driven tree analytics platform of image recognition algorithms, sensors, drones and a data collection app.

The startup uses the NVIDIA Jetson TX2 to process images and CUDA as the interface for cameras at orchards. The TX2 enables it to perform fruit detection in orchards and provide farms with a yield-estimation tool.

AI Land of Sky Blue Water

Bilberry, located in Paris, develops weed recognition powered by the NVIDIA Jetson edge AI platform for precision application of herbicides. The startup has helped customers reduce the usage of chemicals by as much as 92 percent.

FarmWise, based in San Francisco, offers farmers an AI-driven robotic machine for pulling weeds rather than spraying them, reducing groundwater contamination.

Also, John Deere-owned Blue River offers precision spraying of crops to reduce the usage of agrochemicals harmful to land and water.

And two students from India last year developed Nindamani, an AI-driven, weed-removal robot prototype that took top honors at the AI at the Edge Challenge on Hackster.io.

Milking AI for Dairy Farmers 

AI is going to the cows, too. Advanced Animal Diagnostics, based in Morrisville, North Carolina, offers a portable testing device to predict animal performance and detect infections in cattle before they take hold. Its tests are processed on NVIDIA GPUs in the cloud. The machine can help reduce usage of antibiotics.

Similarly, SomaDetect aims to improve milk production with AI. The Halifax, Nova Scotia, company runs deep learning models on NVIDIA GPUs to analyze milk images.

Photo courtesy of Mark Kelly on Unsplash


Hanging in the Balance: More Research Coordination, Collaboration Needed for AI to Reach Its Potential, Experts Say

As AI is increasingly established as a world-changing field, the U.S. has an opportunity not only to demonstrate global leadership, but to establish a solid economic foundation for the future of the technology.

A panel of experts convened last week at GTC to shed light on this topic, with the co-chairs of the Congressional AI Caucus, U.S. Reps. Jerry McNerney (D-CA) and Anthony Gonzalez (R-OH), leading a discussion that reflects Washington’s growing interest in the topic.

The panel also included Hodan Omaar, AI policy lead at the Center for Data Innovation; Russell Wald, director of policy at Stanford University’s Institute for Human-Centered AI; and Damon Woodard, director of AI partnerships at the University of Florida’s AI Initiative.

“AI is getting increased interest among my colleagues on both sides of the aisle, and this is going to continue for some time,” McNerney said. Given that momentum, Gonzalez said the U.S. should be on the bleeding edge of AI development “for both economic and geopolitical reasons.”

Along those lines, the first thing the pair wanted to learn was how panelists viewed the importance of legislative efforts to fund and support AI research and development.

Wald expressed enthusiasm over legislation Congress passed last year as part of the National Defense Authorization Act, which he said would have an expansive effect on the market for AI.

Wald also said he was surprised at the findings of Stanford’s “Government by Algorithm” report, which detailed the federal government’s use of AI to do things such as track suicide risk among veterans, support SEC insider trading investigations and identify Medicare fraud.

Woodard suggested that continued leadership and innovation coming from Washington is critical if AI is to deliver on its promise.

“AI can play a big role in the economy,” said Woodard. “Having this kind of input from the government is important before we can have the kind of advancements that we need.”

The Role of Universities

Woodard and UF are already doing their part. Woodard’s role at the school includes helping transform it into a so-called “AI university.” In response to a question from Gonzalez about what that transition looks like, he said it required establishing a world-class AI infrastructure, performing cutting-edge AI research and incorporating AI throughout the curriculum.

“We want to make sure every student has some exposure to AI as it relates to their field of study,” said Woodard.

He said the school has more than 200 faculty members engaged in AI-related research, and that it’s committed to hiring 100 more. And while Woodard believes the university’s efforts will lead to more qualified AI professionals and AI innovation around its campus in Gainesville, he also said that partnerships, especially those that encourage diversity, are critical to encouraging more widespread industry development.

Along those lines, UF has joined an engineering consortium and will provide 15 historically Black colleges and two Hispanic-serving schools with access to its prodigious AI resources.

Omaar said such efforts are especially important when considering how unequally the high performance computing resources needed to conduct AI research are distributed.

In response to a question from McNerney about a recent National Science Foundation report, Omaar noted the finding that the U.S. Department of Energy is only providing support to about a third of the researchers seeking access to HPC resources.

“Many universities are conducting AI research without the tools they need,” she said.

Omaar said she’d like to see the NSF focus its funding on supporting efforts in states where HPC resources are scarce but AI research activity is high.

McNerney announced that he would soon introduce legislation requiring NSF to determine what AI resources are necessary for significant research output.

Moving Toward National AI Research Resources

These myriad challenges point to the benefits that could come from a more coordinated national effort. To that end, Gonzalez asked about the potential of the National AI Research Resource Task Force Act, and the national AI research cloud that would result from it.

Wald called the legislation a “game-changing AI initiative,” noting that the limited number of universities with AI research computing resources has pushed AI research into the private sector, where the objectives are driven by shorter-term financial goals rather than long-term societal benefits.

“What we see is an imbalance in the AI research ecosystem,” Wald said. The federal legislation would establish a pathway for a national AI research hub, which “has the potential to unleash American AI innovation,” he said.

The way Omaar sees it, the nationwide collaboration that would likely result — among politicians, industry and academia — is necessary for AI to reach its potential.

“Since AI will impact us all,” she said, “it’s going to need everyone’s contribution.”


Asia’s Rising Star: VinAI Advances Innovation with Vietnam’s Most Powerful AI Supercomputer

A rising technology star in Southeast Asia just put a sparkle in its AI.

Vingroup, Vietnam’s largest conglomerate, is installing the most powerful AI supercomputer in the region. The NVIDIA DGX SuperPOD will power VinAI Research, Vingroup’s machine-learning lab, in global initiatives that span autonomous vehicles, healthcare and consumer services.

One of the lab’s most important missions is to develop the AI smarts for an upcoming fleet of autonomous electric cars from VinFast, the group’s automotive division, driving its way to global markets.

New Hub on the AI Map

It’s a world-class challenge for the team led by Hung Bui. A top-tier AI researcher and alum of Google’s DeepMind unit, with nearly 6,000 citations from more than 200 papers and an International Math Olympiad win in his youth, he’s up for a heady challenge.

In barely two years, Hung has built a team that now includes 200 researchers. Last year, as a warm-up, they published as many as 20 papers at top conferences, pushing the boundaries of AI while driving new capabilities into the sprawling group’s many products.

“By July, a fleet of cars will start sending us their data from operating 24/7 in real traffic conditions over millions of miles on roads in the U.S. and Europe, and that’s just the start — the volume of data will only increase,” said Hung.

The team will harness the data to design and refine at least a dozen AI models to enable level 3 autonomous driving capabilities for VinFast’s cars.

DGX SuperPOD Behind the Wheel

Hung foresees a need to retrain those models daily as new data arrives. He believes the DGX SuperPOD can accelerate the AI work of VinAI’s current NVIDIA DGX A100 system by at least 10x, letting engineers update their models every 24 hours.

“That’s the goal. It will save a lot of engineering time, but we will need a lot of help from NVIDIA,” said Hung, who hopes to have the new cluster of 20 DGX A100 systems, linked together with an NVIDIA Mellanox HDR 200Gb/s InfiniBand network, in place in May.

Developing World-Class Talent

With a DGX SuperPOD in place, Hung hopes to attract and develop more world-class AI talent in Vietnam. It’s a goal shared widely at Vingroup.

In October, the company hosted a ceremony to mark the end of the initial year of studies for the first 260 students at its VinUniversity. Founded and funded by Vingroup, Vietnam’s first private, nonprofit college so far offers programs in business, engineering, computer science and health sciences.

It’s a kind of beacon pointing to a better future, like Landmark81 (pictured above), the 81-story skyscraper, the country’s tallest, that the group built and operates on the banks of the Saigon River.

“AI technology is a way to move the company forward, and it can make a lot of impact on the lives of people in Vietnam,” he said, noting other group divisions use DGX systems to advance medical imaging and diagnosis.

VinAI researchers sync up on a project.

Making Life Better with AI

Hung has seen AI’s impact firsthand. His early work in the field at SRI International, in Silicon Valley, helped spawn the technology that powers the Siri assistant in Apple’s iPhone.

More recently, VinAI developed an AI model that lets users of VinSmart handsets unlock their phones using facial recognition — even if they’re wearing a COVID mask. At the same time, core AI researchers on his team developed PhoBERT, a Vietnamese version of the giant Transformer model used for natural-language processing.

It’s the kind of world-class work that two years ago Vingroup’s chairman and Vietnam’s first billionaire, Pham Nhat Vuong, wanted from VinAI Research. He personally convinced Hung to leave a position as research scientist in the DeepMind team and join Vingroup.

Navigating the AI Future

Last year to help power its efforts, VinAI became the first company in Southeast Asia to install a DGX A100 system.

“We’ve been using the latest hardware and software from NVIDIA quite successfully in speech recognition, NLP and computer vision, and now we’re taking our work to the next level with a perception system for driving,” he said.

It’s a challenge Hung gets to gauge daily amid a rising tide of pedestrians, bicycles, scooters and cars on his way to his office in Hanoi.

“When I came back to Vietnam, I had to relearn how to drive here — the traffic conditions are very different from the U.S.,” he said.

“After a while I got the hang of it, but it got me thinking a machine probably will do an even better job — Vietnam’s driving conditions provide the ultimate challenge for systems trying to reach level 5 autonomy,” he added.


NVIDIA Unveils 50+ New, Updated AI Tools and Trainings for Developers

To help developers hone their craft, NVIDIA this week introduced more than 50 new and updated tools and training materials for data scientists, researchers, students and developers of all kinds.

The offerings range from software development kits for conversational AI and ray tracing to hands-on courses from the NVIDIA Deep Learning Institute (DLI).

They’re available to all members of the NVIDIA Developer Program, a free-to-join global community of over 2.5 million technology innovators who are revolutionizing industries through accelerated computing.

Training for Success

Learning new and advanced software development skills is vital to staying ahead in a competitive job market. DLI offers a comprehensive learning experience on a wide range of important topics in AI, data science and accelerated computing. Courses include hands-on exercises and are available in both self-paced and instructor-led formats.

The five courses cover topics such as deep learning, data science, autonomous driving and conversational AI. All include hands-on exercises that accelerate learning and mastery of the material. DLI workshops are led by NVIDIA-certified instructors and include access to fully configured GPU-accelerated servers in the cloud for each participant.

New self-paced courses, which are available now:

New full-day, instructor-led workshops for live virtual classroom delivery (coming soon):

These instructor-led workshops will be available to enterprise customers and the general public. DLI recently launched public workshops for its popular instructor-led courses, increasing accessibility to individual developers, data scientists, researchers and students.

To extend training further, DLI is releasing a new book, “Learning Deep Learning,” that provides a complete guide to deep learning theory and practical applications. Authored by NVIDIA Engineer Magnus Ekman, it explores how deep neural networks are applied to solve complex and challenging problems. Pre-orders are available now through Amazon.

New and Accelerated SDKs, Plus Updated Technical Tools

SDKs can make or break an application’s performance. Dozens of new and updated kits for high performance computing, computer vision, data science, conversational AI, recommender systems and real-time graphics are available so developers can meet virtually any challenge. Updated tools are also in place to help developers accelerate application development.

Updated tools available now:

  • NGC is a GPU-optimized hub for AI and HPC software with a catalog of hundreds of SDKs, AI, ML and HPC containers, pre-trained models and Helm charts that simplify and accelerate workflows from end to end. Pre-trained models help developers jump-start their AI projects for a variety of use cases, including computer vision and speech.

New SDK (coming soon):

  • TAO (Train, Adapt, Optimize) is a GUI-based, workflow-driven framework that simplifies and accelerates the creation of enterprise AI applications and services. Enterprises can fine-tune pre-trained models using transfer learning or federated learning to produce domain-specific models in hours rather than months, eliminating the need for large training runs and deep AI expertise. Learn more about TAO.

New and updated SDKs and frameworks available now:

  • Jarvis, a fully accelerated application framework for building multimodal conversational AI services. It includes state-of-the-art models pre-trained for thousands of hours on NVIDIA DGX systems, the Transfer Learning Toolkit for adapting those models to domains with zero coding, and optimized end-to-end speech, vision and language pipelines that run in real time. Learn more.
  • Maxine, a GPU-accelerated SDK with state-of-the-art AI features for developers to build virtual collaboration and content creation applications such as video conferencing and live streaming. Maxine’s AI SDKs — video effects, audio effects and augmented reality — are highly optimized and include modular features that can be chained into end-to-end pipelines to deliver the highest performance possible on GPUs, both on PCs and in data centers. Learn more.
  • Merlin, an application framework, currently in open beta, for developing deep learning recommender systems — from data preprocessing to model training and inference — all accelerated on NVIDIA GPUs. Read more about Merlin.
  • DeepStream, an AI streaming analytics toolkit for building high-performance, low-latency, complex video analytics apps and services.
  • Triton Inference Server, which lets teams deploy trained AI models from any framework, from local storage or cloud platform on any GPU- or CPU-based infrastructure.
  • TensorRT, for high-performance deep learning inference, includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for deep learning inference applications. TensorRT 8 is 2x faster for transformer-based models and adds new techniques that achieve accuracy similar to FP32 while using high-performance INT8 precision.
  • RTX technology, which helps developers harness and bring realism to their games:
    • DLSS is a deep learning neural network that helps graphics developers boost frame rates and generates beautiful, sharp images for their projects. It includes performance headroom to maximize ray-tracing settings and increase output resolution. Unity has announced that DLSS will be natively supported in Unity Engine 2021.2.
    • RTX Direct Illumination (RTXDI) makes it possible to render, in real time, scenes with millions of dynamic lights without worrying about performance or resource constraints.
    • RTX Global Illumination (RTXGI) leverages the power of ray tracing to scalably compute multi-bounce indirect lighting without bake times, light leaks or expensive per-frame costs.
    • Real-Time Denoisers (NRD) is a spatio-temporal API-agnostic denoising library that’s designed to work with low ray-per-pixel signals.

Joining the NVIDIA Developer Program is easy; check it out today.


Knight Rider Rides a GAN: Bringing KITT to Life with AI, NVIDIA Omniverse

Fasten your seatbelts. NVIDIA Research is revving up a new deep learning engine that creates 3D object models from standard 2D images — and can bring iconic cars like Knight Rider’s AI-powered KITT to life — in NVIDIA Omniverse.

Developed by the NVIDIA AI Research Lab in Toronto, the GANverse3D application inflates flat images into realistic 3D models that can be visualized and controlled in virtual environments. This capability could help architects, creators, game developers and designers easily add new objects to their mockups without needing expertise in 3D modeling, or a large budget to spend on renderings.

A single photo of a car, for example, could be turned into a 3D model that can drive around a virtual scene, complete with realistic headlights, tail lights and blinkers.

To generate a dataset for training, the researchers harnessed a generative adversarial network, or GAN, to synthesize images depicting the same object from multiple viewpoints — like a photographer who walks around a parked vehicle, taking shots from different angles. These multi-view images were plugged into a rendering framework for inverse graphics, the process of inferring 3D mesh models from 2D images.

Once trained on multi-view images, GANverse3D needs only a single 2D image to predict a 3D mesh model. This model can be used with a 3D neural renderer that gives developers control to customize objects and swap out backgrounds.
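Inverse graphics of this kind is an analysis-by-synthesis loop: render a candidate 3D model, compare it with the observed images, and update the model's parameters to reduce the error. The toy below recovers a single shape parameter that way; GANverse3D optimizes a full textured mesh, so treat this only as a sketch of the loop, not the method itself:

```python
import math

# Toy "inverse graphics": recover a 3D parameter (a sphere's radius)
# from rendered 2D observations (silhouette areas) by gradient descent.
# A real system fits a full mesh against multi-view images; this
# one-parameter version only shows the render-compare-update loop.

def render(radius):
    """Forward renderer: silhouette area of a sphere from any viewpoint."""
    return math.pi * radius ** 2

true_radius = 2.0
views = [render(true_radius) for _ in range(8)]  # stand-in multi-view images

r = 0.5    # initial guess for the 3D parameter
lr = 1e-3  # gradient-descent step size
for _ in range(2000):
    # Gradient of the squared rendering error, summed over views
    grad = sum(2.0 * (render(r) - obs) * (2.0 * math.pi * r) for obs in views)
    r -= lr * grad / len(views)

print(round(r, 3))  # prints 2.0
```

Swapping the one-line renderer for a differentiable mesh renderer, and the scalar for mesh vertices and textures, turns this same loop into the kind of pipeline the researchers describe.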

When imported as an extension in the NVIDIA Omniverse platform and run on NVIDIA RTX GPUs, GANverse3D can turn any 2D image into a 3D model — like the beloved crime-fighting car KITT, from the popular 1980s Knight Rider TV show.

Previous models for inverse graphics have relied on 3D shapes as training data.

Instead, with no aid from 3D assets, “We turned a GAN model into a very efficient data generator so we can create 3D objects from any 2D image on the web,” said Wenzheng Chen, research scientist at NVIDIA and lead author on the project.

“Because we trained on real images instead of the typical pipeline, which relies on synthetic data, the AI model generalizes better to real-world applications,” said NVIDIA researcher Jun Gao, an author on the project.

The research behind GANverse3D will be presented at two upcoming conferences: the International Conference on Learning Representations in May, and the Conference on Computer Vision and Pattern Recognition, in June.

From Flat Tire to Racing KITT 

Creators in gaming, architecture and design rely on virtual environments like the NVIDIA Omniverse simulation and collaboration platform to test out new ideas and visualize prototypes before creating their final products. With Omniverse Connectors, developers can use their preferred 3D applications in Omniverse to simulate complex virtual worlds with real-time ray tracing.

But not every creator has the time and resources to create 3D models of every object they sketch. The cost of capturing the number of multi-view images necessary to render a showroom’s worth of cars, or a street’s worth of buildings, can be prohibitive.

That’s where a trained GANverse3D application can be used to convert standard images of a car, a building or even a horse into a 3D figure that can be customized and animated in Omniverse.

To recreate KITT, the researchers simply fed the trained model an image of the car, letting GANverse3D predict a corresponding 3D textured mesh, as well as different parts of the vehicle such as wheels and headlights. They then used NVIDIA Omniverse Kit and NVIDIA PhysX tools to convert the predicted texture into high-quality materials that give KITT a more realistic look and feel, and placed it in a dynamic driving sequence.

“Omniverse allows researchers to bring exciting, cutting-edge research directly to creators and end users,” said Jean-Francois Lafleche, deep learning engineer at NVIDIA. “Offering GANverse3D as an extension in Omniverse will help artists create richer virtual worlds for game development, city planning or even training new machine learning models.”

GANs Power a Dimensional Shift

Because real-world datasets that capture the same object from different angles are rare, most AI tools that convert images from 2D to 3D are trained using synthetic 3D datasets like ShapeNet.

To obtain multi-view images from real-world data — like images of cars available publicly on the web — the NVIDIA researchers instead turned to a GAN model, manipulating its neural network layers to turn it into a data generator.

The team found that opening the first four layers of the neural network and freezing the remaining 12 caused the GAN to render images of the same object from different viewpoints.

Keeping the first four layers frozen and the other 12 layers variable caused the neural network to generate different images from the same viewpoint. By manually assigning standard viewpoints, with vehicles pictured at a specific elevation and camera distance, the researchers could rapidly generate a multi-view dataset from individual 2D images.
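The layer-mixing trick above can be sketched schematically. The toy generator below is hypothetical — a stand-in for a StyleGAN-style network where each layer consumes its own latent code — and is only meant to illustrate the idea that the first four layers' codes control viewpoint while the remaining 12 control object identity; it is not the actual GANverse3D model.

```python
import numpy as np

# Schematic stand-in for a 16-layer StyleGAN-style generator.
# Assumption (from the article): layers 0-3 control viewpoint,
# layers 4-15 control object identity.
N_LAYERS, DIM = 16, 8
rng = np.random.default_rng(0)
LAYER_WEIGHTS = rng.standard_normal((N_LAYERS, DIM, DIM))

def generate(codes):
    """Map 16 per-layer latent codes to (viewpoint, content) features."""
    view = sum(LAYER_WEIGHTS[i] @ codes[i] for i in range(4))
    content = sum(LAYER_WEIGHTS[i] @ codes[i] for i in range(4, N_LAYERS))
    return view, content

def sample_codes():
    return [rng.standard_normal(DIM) for _ in range(N_LAYERS)]

# One object seen from many viewpoints: keep the last 12 codes fixed
# and resample only the first 4 -- the "multi-view dataset" recipe.
obj = sample_codes()
views = []
for _ in range(3):
    mixed = [rng.standard_normal(DIM) for _ in range(4)] + obj[4:]
    views.append(generate(mixed))
# The content features are identical across samples; only viewpoint varies.
```

Conversely, holding the first four codes fixed and resampling the rest would yield different objects from the same camera pose, matching the description above.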

The final model, trained on 55,000 car images generated by the GAN, outperformed an inverse graphics network trained on the popular Pascal3D dataset.

Read the full ICLR paper, authored by Wenzheng Chen, fellow NVIDIA researchers Jun Gao and Huan Ling, Sanja Fidler, director of NVIDIA’s Toronto research lab, University of Waterloo student Yuxuan Zhang, Stanford student Yinan Zhang and MIT professor Antonio Torralba. Additional collaborators on the CVPR paper include Jean-Francois Lafleche, NVIDIA researcher Kangxue Yin and Adela Barriuso.

The NVIDIA Research team consists of more than 200 scientists around the globe, focusing on areas such as AI, computer vision, self-driving cars, robotics and graphics. Learn more about the company’s latest research and industry breakthroughs in NVIDIA CEO Jensen Huang’s keynote address at this week’s GPU Technology Conference.

GTC registration is free, and open through April 23. Attendees will have access to on-demand content through May 11.

Knight Rider content courtesy of Universal Studios Licensing LLC. 

The post Knight Rider Rides a GAN: Bringing KITT to Life with AI, NVIDIA Omniverse appeared first on The Official NVIDIA Blog.

Healthcare Headliners Put AI Under the Microscope at GTC

Two revolutions are meeting in the field of life sciences — the explosion of digital data and the rise of AI computing to help healthcare professionals make sense of it all, said Daphne Koller and Kimberly Powell at this week’s GPU Technology Conference.

Powell, NVIDIA’s vice president of healthcare, presented an overview of AI innovation in medicine that highlighted advances in drug discovery, medical imaging, genomics and intelligent medical instruments.

“There’s a digital biology revolution underway, and it’s generating enormous data, far too complex for human understanding,” she said. “With algorithms and computations at the ready, we now have the third ingredient — data — to truly enter the AI healthcare era.”

And Koller, a Stanford adjunct professor and CEO of the AI drug discovery company Insitro, focused on AI solutions in her talk outlining the challenges of drug development and the ways in which predictive machine learning models can enable a better understanding of disease-related biological data.

Digital biology “allows us to measure biological systems in entirely new ways, interpret what we’re measuring using data science and machine learning, and then bring that back to engineer biology to do things that we’d never otherwise be able to do,” she said.

Watch replays of these talks — part of a packed lineup of more than 100 healthcare sessions among 1,600 on-demand sessions — by registering free for GTC through April 23. Registration isn’t required to watch a replay of the keynote address by NVIDIA CEO Jensen Huang.

Data-Driven Insights into Disease

Recent advancements in biotechnology — including CRISPR, induced pluripotent stem cells and more widespread availability of DNA sequencing — have allowed scientists to gather “mountains of data,” Koller said in her talk, “leaving us with a problem of how to interpret those data.”

“Fortunately, this is where the other revolution comes in, which is that using machine learning to interpret and identify patterns in very large amounts of data has transformed virtually every sector of our existence,” she said.

The data-intensive process of drug discovery requires researchers to understand the biological structure of a disease, and then vet potential compounds that could be used to bind with a critical protein along the disease pathway. Finding a promising therapeutic is a complex optimization problem, and despite the exponential rise in the amount of digital data available in the last decade or two, the process has been getting slower and more expensive.

Daphne Koller, CEO of Insitro

Known as Eroom’s law, this observation finds that the research and development cost of bringing a new drug to market has trended upward since the 1980s, costing pharmaceutical companies ever more time and money. Koller said that’s largely because of all the potential drug candidates that fail to get approved for use.

“What we aim to do at Insitro is to understand those failures, and try and see whether machine learning — combined with the right kind of data generation — can get us to make better decisions along the path and avoid a lot of those failures,” she said. “Machine learning is able to see things that people just cannot see.”

Bringing AI to vast datasets can help scientists determine how physical characteristics like height and weight, known as phenotypes, relate to genetic variants, known as genotypes. In many cases, “these associations give us a hint about the causal drivers of disease,” said Koller.

She gave the example of NASH, or nonalcoholic steatohepatitis, a common liver condition related to obesity and diabetes. To study underlying causes and potential treatments for NASH, Insitro worked with biopharmaceutical company Gilead to apply machine learning to liver biopsy and RNA sequencing data from clinical trial data representing hundreds of patients.

The team created a machine learning model to analyze biopsy images and capture a quantitative representation of a patient’s disease state, and found that even with just a weak level of supervision, the AI’s predictions aligned with the scores assigned by clinical pathologists. The models could even differentiate between images with and without NASH, which is difficult to determine with the naked eye.

Accelerating the AI Healthcare Era

It’s not enough to just have abundant data to create an effective deep learning model for medicine, however. Powell’s GTC talk focused on domain-specific computational platforms — like the NVIDIA Clara application framework for healthcare — that are tailored to the needs and quirks of medical datasets.

The NVIDIA Clara Discovery suite of AI libraries harnesses transformer models, popular in natural language processing, to parse biomedical data. Using the NVIDIA Megatron framework for training transformers helps researchers build models with billions of parameters — like MegaMolBart, an NLP generative drug discovery model in development by NVIDIA and AstraZeneca for use in reaction prediction, molecular optimization and de novo molecular generation.

Kimberly Powell, VP of healthcare at NVIDIA

University of Florida Health has also used the NVIDIA Megatron framework and NVIDIA BioMegatron pre-trained model to develop GatorTron, the largest clinical language model to date, which was trained on more than 2 million patient records with more than 50 million interactions.

“With biomedical data at scale of petabytes, and learning at the scale of billions and soon trillions of parameters, transformers are helping us do and find the unexpected,” Powell said.
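The transformer models Powell describes are built on scaled dot-product attention. A minimal numpy sketch of that core operation follows — illustrative only; Megatron implements it at billion-parameter scale across GPUs:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention, the core of a transformer layer."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])        # query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over keys
    return weights @ V                             # weighted mix of values

# Toy self-attention over 5 "token" embeddings of dimension 16.
rng = np.random.default_rng(0)
tokens = rng.standard_normal((5, 16))
out = attention(tokens, tokens, tokens)  # same shape as the input tokens
```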

Clinical decisions, too, can be supported by AI insights that parse data from health records, medical imaging instruments, lab tests, patient monitors and surgical procedures.

“No one hospital’s the same, and no healthcare practice is the same,” Powell said. “So we need an entire ecosystem approach to developing algorithms that can predict the future, see the unseen, and help healthcare providers make complex decisions.”

The NVIDIA Clara framework has more than 40 domain-specific pretrained models available in the NGC catalog — including NVIDIA Federated Learning, which allows different institutions to collaborate on AI model development without sharing patient data with each other, overcoming challenges of data governance and privacy.
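The federated learning idea described above can be sketched as weighted model averaging: each institution trains on its own data and shares only model parameters with the server, never patient records. The linear-regression setup below is invented for illustration and is not NVIDIA’s actual protocol:

```python
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([2.0, -1.0])  # ground-truth model, for the toy example

def local_train(n):
    """One institution fits least squares on its own private data."""
    X = rng.standard_normal((n, 2))
    y = X @ true_w + 0.1 * rng.standard_normal(n)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w, n  # only weights and a sample count leave the site

# Three "hospitals" train locally; the server averages weights, not data.
local = [local_train(n) for n in (50, 80, 120)]
total = sum(n for _, n in local)
global_w = sum(w * (n / total) for w, n in local)  # sample-weighted average
```

The aggregated `global_w` approximates what a centrally trained model would learn, without any site exposing its raw records.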

And to power the next generation of intelligent medical instruments, the newly available NVIDIA Clara AGX developer kit helps hospitals develop and deploy AI across smart sensors such as endoscopes, ultrasound devices and microscopes.

“As sensor technology continues to innovate, so must the computing platforms that process them,” Powell said. “With AI, instruments can become smaller, cheaper and guide an inexperienced user through the acquisition process.”

These AI-driven devices could help reach areas of the world that lack access to many medical diagnostics today, she said. “The instruments that measure biology, see inside our bodies and perform surgeries are becoming intelligent sensors with AI and computing.”

GTC registration is open through April 23. Attendees will have access to on-demand content through May 11. For more, subscribe to NVIDIA healthcare news, and follow NVIDIA Healthcare on Twitter.

The post Healthcare Headliners Put AI Under the Microscope at GTC appeared first on The Official NVIDIA Blog.

NVIDIA CEO Introduces Software, Silicon, Supercomputers ‘for the Da Vincis of Our Time’

Buckle up. NVIDIA CEO Jensen Huang just laid out a singular vision filled with autonomous machines, super-intelligent AIs and sprawling virtual worlds – from silicon to supercomputers to AI software – in a single presentation.

“NVIDIA is a computing platform company, helping to advance the work for the Da Vincis of our time – in language understanding, drug discovery, or quantum computing,” Huang said in a talk delivered from behind his kitchen counter to NVIDIA’s GPU Technology Conference. “NVIDIA is the instrument for your life’s work.”

During a presentation punctuated with product announcements, partnerships, and demos that danced up and down the modern technology stack, Huang spoke about how NVIDIA is investing heavily in CPUs, DPUs, and GPUs and weaving them into new data center scale computing solutions for researchers and enterprises.

He talked about NVIDIA as a software company, offering a host of software built on NVIDIA AI as well as NVIDIA Omniverse for simulation, collaboration, and training autonomous machines.

Finally, Huang spoke about how NVIDIA is moving automotive computing forward with a new SoC, NVIDIA Atlan, and new simulation capabilities.

CPUs, DPUs and GPUs

Huang announced NVIDIA’s first data center CPU, Grace, named after Grace Hopper, a U.S. Navy rear admiral and computer programming pioneer.

Grace is a highly specialized processor targeting the largest data-intensive HPC and AI applications, such as training next-generation natural language processing models with more than 1 trillion parameters.

When tightly coupled with NVIDIA GPUs, a Grace-based system will deliver 10x faster performance than today’s state-of-the-art NVIDIA DGX-based systems, which run on x86 CPUs.

While the vast majority of data centers are expected to be served by existing CPUs, Grace will serve a niche segment of computing. “Grace highlights the beauty of Arm,” Huang said.

Huang also announced that the Swiss National Supercomputing Center will build a supercomputer, dubbed Alps, that will be powered by Grace and NVIDIA’s next-generation GPU. The U.S. Department of Energy’s Los Alamos National Laboratory will also bring a Grace-powered supercomputer online in 2023, NVIDIA announced.

Accelerating Data Centers with BlueField-3

Further accelerating the infrastructure upon which hyperscale data centers, workstations, and supercomputers are built, Huang announced the NVIDIA BlueField-3 DPU.

The next-generation data processing unit will deliver the most powerful software-defined networking, storage and cybersecurity acceleration capabilities.

Where BlueField-2 offloaded the equivalent of 30 CPU cores, it would take 300 CPU cores to secure, offload and accelerate network traffic at 400 Gbps as BlueField-3 does — a 10x leap in performance, Huang explained.

‘Three Chips’

Grace and BlueField are key parts of a data center roadmap consisting of three chips: CPU, GPU and DPU, Huang said. Each chip architecture has a two-year rhythm, with likely a kicker in between. One year will focus on x86 platforms, the next on Arm platforms.

“Every year will see new exciting products from us,” Huang said. “Three chips, yearly leaps, one architecture.”

Expanding Arm into the Cloud 

Arm, Huang said, is the most popular CPU in the world. “For good reason – it’s super energy-efficient and its open licensing model inspires a world of innovators,” he said.

For other markets like cloud, enterprise and edge data centers, supercomputing, and PC, Arm is just starting. Huang announced key Arm partnerships — Amazon Web Services in cloud computing, Ampere Computing in scientific and cloud computing, Marvell in hyper-converged edge servers, and MediaTek to create a Chrome OS and Linux PC SDK and reference system.

DGX – A Computer for AI

Weaving together NVIDIA silicon and software, Huang announced upgrades to NVIDIA’s DGX Station “AI data center in-a-box” for workgroups, and the NVIDIA DGX SuperPod, NVIDIA’s AI-data-center-as-a-product for intensive AI research and development.

The new DGX Station 320G harnesses 320GB of super-fast HBM2e memory connected to four NVIDIA A100 GPUs over 8 terabytes per second of memory bandwidth. Yet it plugs into a normal wall outlet and consumes just 1,500 watts of power, Huang said.

The DGX SuperPOD gets the new 80GB NVIDIA A100, bringing the SuperPOD to 90 terabytes of HBM2e memory. It’s been upgraded with NVIDIA BlueField-2, and NVIDIA is now offering it with the NVIDIA Base Command DGX management and orchestration tool.

NVIDIA EGX for Enterprise 

Further democratizing AI, Huang introduced a new class of NVIDIA-certified systems, high-volume enterprise servers from top manufacturers. They’re now certified to run the NVIDIA AI Enterprise software suite, exclusively certified for VMware vSphere 7, the world’s most widely used compute virtualization platform.

Expanding the NVIDIA-certified servers ecosystem is a new wave of systems featuring the NVIDIA A30 GPU for mainstream AI and data analytics and the NVIDIA A10 GPU for AI-enabled graphics, virtual workstations and mixed compute and graphics workloads, announced today.

AI-on-5G

Huang also discussed NVIDIA’s AI-on-5G computing platform, which brings together 5G and AI in a new type of computing platform designed for the edge. It pairs the NVIDIA Aerial software development kit with the NVIDIA BlueField-2 A100, combining GPUs and CPUs into “the most advanced PCIe card ever created.”

Partners Fujitsu, Google Cloud, Mavenir, Radisys and Wind River are all developing solutions for NVIDIA’s AI-on-5G platform.

NVIDIA AI and NVIDIA Omniverse

Virtual, real-time 3D worlds inhabited by people, AIs and robots are no longer science fiction.

NVIDIA Omniverse is cloud-native, scalable to multiple GPUs, physically accurate, takes advantage of RTX real-time path tracing and DLSS, simulates materials with NVIDIA MDL, simulates physics with NVIDIA PhysX, and fully integrates NVIDIA AI, Huang explained.

“Omniverse was made to create shared virtual 3D worlds,” Huang said. “Ones not unlike the science fiction metaverse described by Neal Stephenson in his early 1990s novel ‘Snow Crash.’”

Huang announced that starting this summer, Omniverse will be available for enterprise licensing. Since its release in open beta, partners such as Foster and Partners in architecture, ILM in entertainment, Activision in gaming, and advertising powerhouse WPP have put Omniverse to work.

The Factory of the Future

To show what’s possible with Omniverse, Huang, along with Milan Nedeljković, member of the Board of Management of BMW AG, showed how a photorealistic, real-time digital model — a “digital twin” of one of BMW’s highly automated factories — can accelerate modern manufacturing.

“These new innovations will reduce the planning times, improve flexibility and precision and at the end produce 30 percent more efficient planning,” Nedeljković said.

A Host of AI Software

Huang announced NVIDIA Megatron — a framework for training Transformers, which have led to breakthroughs in natural-language processing. Transformers generate document summaries, complete phrases in email, grade quizzes, generate live sports commentary, even code.

He detailed new models for Clara Discovery — NVIDIA’s acceleration libraries for computational drug discovery — and a partnership with Schrödinger, the leading physics-based and machine learning computational platform for drug discovery and materials science.

To accelerate research into quantum computing — which relies on quantum bits, or qubits, that can be 0, 1, or both — Huang introduced cuQuantum to accelerate quantum circuit simulators so researchers can design better quantum computers.
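Quantum circuit simulators of the kind cuQuantum accelerates evolve a state vector under gate matrices. A one-qubit sketch of the idea — illustrative only; cuQuantum operates on vastly larger state vectors and tensor networks on GPUs:

```python
import numpy as np

# A qubit can be 0, 1, or both: applying a Hadamard gate to |0>
# produces an equal superposition of the two basis states.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
state = np.array([1.0, 0.0])                  # qubit starts in |0>
state = H @ state                             # now (|0> + |1>) / sqrt(2)
probs = np.abs(state) ** 2                    # measurement probabilities
# probs -> [0.5, 0.5]: equal chance of measuring 0 or 1
```

A full simulator applies a sequence of such (larger) gate matrices to a state vector of size 2^n for n qubits, which is exactly the computation GPUs can parallelize.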

To secure modern data centers, Huang announced NVIDIA Morpheus – a data center security platform for real-time all-packet inspection built on NVIDIA AI, NVIDIA BlueField, Net-Q network telemetry software, and EGX.

To accelerate conversational AI, Huang announced the availability of NVIDIA Jarvis – a state-of-the-art deep learning AI for speech recognition, language understanding, translations, and expressive speech.

To accelerate recommender systems — the engine for search, ads, online shopping, music, books, movies, user-generated content, and news — Huang announced NVIDIA Merlin is now available on NGC, NVIDIA’s catalog of deep learning framework containers.

And to help customers turn their expertise into AI, Huang introduced NVIDIA TAO to fine-tune and adapt NVIDIA pre-trained models with data from customers and partners while protecting data privacy.

“There is infinite diversity of application domains, environments, and specializations,” Huang said. “No one has all the data – sometimes it’s rare, sometimes it’s a trade secret.”

The final piece is the inference server, NVIDIA Triton, to glean insights from the continuous streams of data coming into customers’ EGX servers or cloud instances, Huang said.

“Any AI model that runs on cuDNN, so basically every AI model,” Huang said. “From any framework – TensorFlow, PyTorch, ONNX, OpenVINO, TensorRT, or custom C++/Python backends.”

Advancing Automotive with NVIDIA DRIVE

Autonomous vehicles are “one of the most intense machine learning and robotics challenges – one of the hardest but also with the greatest impact,” Huang said.

NVIDIA is building modular, end-to-end solutions for the $10 trillion transportation industry so partners can leverage the parts they need.

Huang said NVIDIA DRIVE Orin, NVIDIA’s AV computing system-on-a-chip, which goes into production in 2022, was designed to be the car’s central computer.

Volvo Cars has been using the high-performance, energy-efficient compute of NVIDIA DRIVE since 2016 and developing AI-assisted driving features for new models on NVIDIA DRIVE Xavier with software developed in-house and by Zenseact, Volvo Cars’ autonomous driving software development company.

And Volvo Cars announced during the GTC keynote today that it will use NVIDIA DRIVE Orin to power the autonomous driving computer in its next-generation cars.

The decision extends the companies’ collaboration to even more software-defined model lineups, beginning with the next-generation XC90, set to debut next year.

Meanwhile, NVIDIA DRIVE Atlan, NVIDIA’s next-generation automotive system-on-a-chip, and a true data center on wheels, “will be yet another giant leap,” Huang announced.

Atlan will deliver more than 1,000 trillion operations per second, or TOPS, and targets 2025 models.

“Atlan will be a technical marvel – fusing all of NVIDIA’s technologies in AI, auto, robotics, safety, and BlueField secure data centers,” Huang said.

Huang also announced the NVIDIA 8th generation Hyperion car platform – including reference sensors, AV and central computers, 3D ground-truth data recorders, networking, and all of the essential software.

Huang also announced that DRIVE Sim will be available for the community this summer.

Just as Omniverse can build a digital twin of the factories that produce cars, DRIVE Sim can be used to create a digital twin of autonomous vehicles to be used throughout AV development.

“The DRIVE digital twin in Omniverse is a virtual space that every engineer and every car in the fleet is connected to,” Huang said.

The ‘Instrument for Your Life’s Work’

Huang wrapped up with four points.

NVIDIA is now a 3-chip company – offering GPUs, CPUs, and DPUs.

NVIDIA is a software platform company and is dedicating enormous investment in NVIDIA AI and NVIDIA Omniverse.

NVIDIA is an AI company with Megatron, Jarvis, Merlin, Maxine, Isaac, Metropolis, Clara, and DRIVE, and pre-trained models you can customize with TAO.

NVIDIA is expanding AI with DGX for researchers, HGX for cloud, EGX for enterprise and 5G edge, and AGX for robotics.

“Mostly,” Huang said. “NVIDIA is the instrument for your life’s work.”

The post NVIDIA CEO Introduces Software, Silicon, Supercomputers ‘for the Da Vincis of Our Time’ appeared first on The Official NVIDIA Blog.

Carestream Health and Startups Develop AI-Enabled Medical Instruments with NVIDIA Clara AGX Developer Kit

Carestream Health, a leading maker of medical imaging systems, is investigating the use of NVIDIA Clara AGX — an embedded AI platform for medical devices — in the development of AI-powered features on single-frame and streaming x-ray applications.

Startups around the world, too, are adopting Clara AGX for AI solutions in medical imaging, surgery and electron microscopy. Among them is Boston-based Activ Surgical, which recently received FDA clearance for a hardware imaging module to deliver real-time AI insights to the operating room.

Now in general availability, the NVIDIA Clara AGX developer kit advances the development of software-defined instruments, such as microscopes, ultrasounds and endoscopes.

This emerging generation of medical devices is equipped with dozens of real-time AI applications providing support at every step of the clinical experience — from automating patient set-up for scans and improving image quality to analyzing data streams and delivering critical insights to care providers.

NVIDIA Clara AGX is accelerating the development of these new medical instruments by providing a universal platform that can deliver high-bandwidth signal processing, accelerated computing reconstruction, AI processing and advanced 3D visualization.

Helping Clinicians Sense in Real Time 

Medical instruments like endoscopes and surgical robots are mounted with cameras, sending a live video feed to the clinicians operating the devices. Capturing these streams and applying computer vision AI to the video content can give medical professionals tools to improve patient care and bolster the capabilities of hospitals that lack adequate medical imaging resources.

Architected with NVIDIA Jetson AGX Xavier, an NVIDIA RTX 6000 GPU and the NVIDIA Mellanox ConnectX-6 SmartNIC, the Clara AGX developer kit comes with an SDK that makes it easy for developers to get up and running with real-time system software, libraries for input/output and video pipelining, and reference applications to create AI models for ultrasound and endoscopy.

Built into the platform is the NVIDIA EGX stack for cloud-native containerized software and microservices, including NVIDIA Fleet Command to securely deploy fleets of devices in hospitals, which together transform everyday sensors into smart sensors.

These smart sensors will be software-defined, meaning they can be regularly updated with AI algorithms as they improve — an essential capability to continuously connect research breakthroughs with the day-to-day practice of medicine.

Enabling Intelligent Instruments

Carestream Health is creating smart X-ray rooms that will include AI-powered features for an enhanced imaging workflow and faster, more efficient exams. The devices include automated positioning and exposure settings for similar exam types, which helps improve the consistency of X-ray images, boosting diagnostic confidence.

And Activ Surgical, a member of the NVIDIA Inception startup accelerator program, is using NVIDIA GPU-accelerated AI to deliver real-time surgical guidance. The company’s newly FDA-cleared ActivSight module will power its ActivINSIGHT product, which will provide surgeons with previously unavailable visual overlays, including blood flow and perfusion without the need for the injection of dyes.

Carestream Health and Activ Surgical are just two of the pioneering companies worldwide using NVIDIA AGX systems to power intelligent medical devices. Others include:

  • AJA Video Systems, based in California’s Gold Country, develops professional video and audio PCIe cards for high-bandwidth streaming. When combined with the NVIDIA Clara AGX developer kit, which includes two PCIe slots and high-speed network ports, the company’s cards can be used for endoscopy and surgical visualization applications.
  • Kaliber Labs, an NVIDIA Inception member, is building real-time AI-powered software solutions to support surgeons performing arthroscopic and minimally invasive procedures. Kaliber uses NVIDIA Clara AGX to deploy its surgical software suite, which equips surgeons with a first-of-its-kind contextualized and personalized surgical toolkit to help surgeons perform at the highest level and reduce surgical variability.
  • KAYA Instruments, an NVIDIA Inception member, develops computer vision products that can be used with imaging devices, including electron microscopes, ultrasound machines and MRI equipment. The Israel-based company’s video acquisition cards and cameras transfer medical imaging content to NVIDIA GPUs for real-time processing and AI-accelerated analysis.
  • Subtle Medical, an NVIDIA Inception member, has deployed FDA-cleared and CE-marked deep-learning powered image enhancement software solutions for PET and MRI protocols. The company will leverage NVIDIA Clara AGX for SubtleIR, an AI-powered software under development that improves the speed and quality of interventional imaging procedures.
  • Theator, an NVIDIA Inception member, will use NVIDIA Clara AGX to develop its surgical analytics platform. The Palo Alto-based startup is developing edge GPU-accelerated AI systems to annotate operation room footage, allowing surgeons to conduct post-surgery reviews where they can compare parts of a procedure with previous identical procedures.
  • us4us, a Poland-based maker of ultrasound research systems, is using NVIDIA AGX systems for a portable ultrasound platform that will support real-time digital beamforming — a compute-intensive technique essential to capturing quality ultrasound images. The software-defined system uses embedded GPU modules so medical researchers can develop and deploy custom AI models for image processing during ultrasound scans.

Learn more about Clara AGX for AI-powered medical devices and instruments in the GTC talk, “Using Ethernet to Stream High-Throughput, Low-Latency Medical Sensor Data.” The NVIDIA GPU Technology Conference is free to register. The healthcare track includes 16 live webinars, 18 special events and over 100 recorded sessions.

Registration isn’t required to watch NVIDIA CEO Jensen Huang’s keynote address.

Subscribe to NVIDIA healthcare news, and follow NVIDIA Healthcare on Twitter.

The post Carestream Health and Startups Develop AI-Enabled Medical Instruments with NVIDIA Clara AGX Developer Kit appeared first on The Official NVIDIA Blog.