Government Execs Must Be ‘Brave, Bold and Benevolent’ to Hasten AI Adoption, Experts Say

Hundreds of technology experts from the public and private sectors, as well as academia, came together earlier this month for NVIDIA’s GPU Technology Conference to discuss U.S. federal agency adoption of AI and how industry can help.

Leaders from dozens of organizations, including the U.S. Department of Defense, the Federal Communications Commission, Booz Allen Hamilton, Lockheed Martin, NASA, RAND Corporation, Carnegie Mellon and Stanford Universities, participated in approximately 100 sessions that were part of GTC’s Public Sector Summit.

They talked about the need to accelerate efforts in a number of areas, including education, access to data and computing resources, funding and research. Many encouraged government executives and federal agencies to act with a greater sense of urgency.

“Artificial intelligence is inspiring the greatest technological transformation of our time,” Anthony Robbins, vice president of federal at NVIDIA, said during a panel on “Building an AI Nation” with former Federal CIO Suzette Kent and retired Lt. Gen. Jack Shanahan. “The train has left the station,” Robbins said. “In fact, it’s already roaring down the tracks.”

“We’re in a critical period with the United States government,” Shanahan said during the panel. “We have to get it right. This is a really important conversation.”

Just Get Started

These and other speakers cited a common theme: agencies need to get started now. But this requires a cultural shift, which Kent spoke of as one of the most significant challenges she experienced as federal CIO.

“In any kind of transformation the tech is often the easy part,” she said, noting that the only way to get people on board across the U.S. government — one of the largest and most complex institutions in the world — is to focus on return on investment for agency missions.

In a session titled “Why Leaders in Both the Public and Private Sectors Should Embrace Exponential Changes in Data, AI, and Work,” David Bray, a former senior national intelligence executive and FCC CIO who is now founder and inaugural director of the Atlantic Council’s GeoTech Center, tackled the same topic, saying that worker buy-in matters not just for AI adoption but also for its sustainability.

“If you only treat this as a tech endeavor, you might get it right, but it won’t stick,” Bray said. “What you’re doing isn’t an add-on to agencies — this is transforming how the government does business.”

Make Data a Priority

Data strategy came up repeatedly as an important component to the future of federal AI.

Less than an hour before a GTC virtual fireside chat with Robbins and DoD Chief Data Officer David Spirk, the Pentagon released its first enterprise data strategy.

The document positions the DoD to become a data-centric organization, but implementing the strategy won’t be easy, Spirk said. It will require an incredible amount of orchestration among the numerous data pipelines flowing in and out of the Pentagon and its service branches.

“Data is a strategic asset,” he said. “It’s a high-interest commodity that has to be leveraged for both immediate and lasting advantage.”

Kent and Shanahan agreed that data is critical. Kent said agency chief data officers need to think of the federal government as one large enterprise with a huge repository of data rather than silos of information, considering how the government at large can leverage an agency’s data.

Invest in Exponential Change

The next few years will be crucial for the government’s adoption of AI, and experts say more investment will be needed.

To start, the government will have to address the AI talent gap. The exact extent of the talent shortage is difficult to measure, but job website statistics show that demand for workers far exceeds supply, according to a study by Georgetown University’s Center for Security and Emerging Technology.

One way to do that is for the federal government to set aside money to help small and mid-sized universities develop AI programs.

Another is to provide colleges and universities with access to more computing resources and federal datasets, according to John Etchemendy, co-director of the Institute for Human-Centered Artificial Intelligence at Stanford University, who spoke during a session with panelists from academia and think tanks. That would accelerate R&D and help students become more proficient at data science.

Government investment in AI research will also be key in helping agencies move forward. Without a significant increase, the United States will fall behind, Martijn Rasser, senior fellow at the Center for a New American Security, said during the panel discussion. CNAS recently released a report calling for $25 billion per year in federal AI investment by 2025.

The RAND Corp. released a congressionally mandated assessment of the DoD’s AI posture last year that recommended defense agencies create mechanisms for connecting AI researchers, technology developers and operators. Involving operators at every stage of the process makes them more confident in and trusting of the new technology, Danielle Tarraf, senior information scientist at RAND, told the panel. Tarraf noted that many of these recommendations apply government-wide.

Michael McQuade, vice president of research at Carnegie Mellon University and a member of the Defense Innovation Board, argued that it’s essential to start delivering solutions now. “Building confidence is key,” he said, to justifying the increasing support from authorizers and appropriators for crucial national investments in AI.

By framing AI in the context of both broad AI innovations and individual use cases, government can elucidate why it’s so important to “knock down barriers and get the money in the right place,” said Seth Center, a senior advisor to the National Security Commission on AI.

An overarching theme from the Public Sector Summit was that government technology leaders need to heighten their focus on AI, with a sense of urgency.

Kent and Shanahan noted that training and tools are available for the government to make the transition smoothly and begin using the technology. Both said that by partnering with industry and academia, the federal government can make an AI-equipped America a reality.

Bray, noting the breakneck pace of change from new technologies, said that it usually takes decades for the kind of shifts that are now possible. He urged government executives to take an active role in guiding those changes, encouraging them to be “brave, bold and benevolent.”

Old Clips Become Big Hits: AI-Enhanced Videos Script a Success Story

After his AI-enhanced vintage video went viral, Denis Shiryaev launched a startup to bottle the magic. Soon anyone who wants to dust off their old films may be able to use his neural networks.

The story began with a blog on Telegram by the Russian entrepreneur currently living in Gdańsk, Poland.

“Some years ago I started to blog about machine learning and play with different algorithms to understand it better,” said Shiryaev, who later founded the startup known by its web address, neural.love. “I was generating music with neural nets and staging Turing tests of chatbots — silly, fun stuff.”

Eight months ago, he tried an AI experiment with a short, grainy film he’d found on YouTube of a train in 1896 arriving in a small French town. He used open-source software and AI models to upscale it to 4K resolution and smooth its jerky motion from 15 frames per second to 60 fps.
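
The post doesn’t share Shiryaev’s exact pipeline, which chained neural models. As a rough, hedged illustration of just those two steps (upscaling and frame-rate interpolation), here is a sketch that drives ffmpeg’s classical scale and motion-interpolation filters from Python; the input file name is a placeholder.

```python
# Illustration only: upscale a clip to 4K and interpolate it to 60 fps using
# ffmpeg's classical filters (not the neural models the post describes).
# Assumes ffmpeg is on PATH; "old_clip.mp4" is a hypothetical input file.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "old_clip.mp4",
        # Lanczos upscale to 3840x2160, then motion-interpolate to 60 fps.
        "-vf", "scale=3840:2160:flags=lanczos,minterpolate=fps=60",
        "-c:v", "libx264",
        "-crf", "18",
        "old_clip_4k60.mp4",
    ],
    check=True,
)
```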

“I posted it one night, and when I woke up the next day, I had a million views and was on the front page of Reddit. My inbox was exploding with messages on Facebook, LinkedIn — everywhere,” he said of the response to the video.

Not wanting to be a one-hit wonder, he found other vintage videos to work with. He ran them through an expanding workflow of AI models, including DeOldify for adding color and other open-source algorithms for removing visual noise.

His inbox stayed full.

He got requests from a media company in the Netherlands to enhance an old film of Amsterdam. Displays in the Moscow subway played a vintage video he enhanced of the Russian capital. A Polish documentary maker knocked on his door, too.

Even the USA was calling. PBS asked for help with footage for an interactive website for its documentary on women’s suffrage.

“They had a colorist for the still images, but even with advances in digital painting, colorizing film takes a ridiculous amount of time,” said Elizabeth Peck, the business development manager for the five-person team at neural.love.

NVIDIA RTX Speeds AI Work 60x+

Along the way, Shiryaev and team got an upgrade to the latest NVIDIA RTX 6000 GPU. It could process 60 minutes of video in less time than an earlier graphics card took to handle 90 seconds of footage.

The RTX card also trains the team’s custom AI models in eight hours, a job that used to take a week.

“This card shines, it’s amazing how helpful the right hardware can be,” he said.

AI Film Editor in the Cloud

The bright lights the team sees these days are flashing images of a future consumer service in the public cloud. An online self-serve AI video editor could help anyone with a digital copy of an old VHS tape or Super 8 reel in their closet.

“People were sending us really touching footage — the last video of their father, a snippet from a Michael Jackson concert they attended as a teenager. The amount of personal interest people had in what we were doing was striking,” explained Peck.

It’s still early days. Shiryaev expects it will take a few months to get a beta service ready for launch.

Meanwhile, neural.love is steering clear of the VC world. “We don’t want to take money until we are sure there is a market and we have a working product,” he said.

You can hear more of neural.love’s story in a webinar hosted by PNY Technologies, an NVIDIA partner.

What Is Computer Vision?

Computer vision has become so good that the days of managers screaming at umpires over disputed pitch calls may become a thing of the past.

That’s because developments in image classification, along with parallel processing, make it possible for computers to see a baseball whizzing by at 95 miles per hour. Pair that with image detection to pinpoint the ball’s location, and you’ve got a potent umpiring tool that’s hard to argue with.

But computer vision doesn’t stop at baseball.

What Is Computer Vision?

Computer vision is a broad term for the work done with deep neural networks to develop human-like vision capabilities for applications, most often run on NVIDIA GPUs. It can include specific training of neural nets for segmentation, classification and detection using images and videos for data.

Major League Baseball is testing AI-assisted calls at the plate using computer vision. Judging balls and strikes on pitches that can take just 0.4 seconds to reach the plate isn’t easy for human eyes. It’s a job better handled by a camera feed running through image networks on NVIDIA GPUs, which can make split-second decisions at a rate of more than 60 frames per second. At that rate, a 0.4-second pitch yields roughly 24 frames to analyze.

Hawk-Eye, based in London, is making this a reality in sports. Hawk-Eye’s NVIDIA GPU-powered ball tracking and SMART software is deployed in more than 20 sports, including baseball, basketball, tennis, soccer, cricket, hockey and NASCAR.

Yet computer vision can do much more than just make sports calls.

What Is Computer Vision Beyond Sports?

Computer vision can handle many more tasks. Built with convolutional neural networks, it can perform segmentation, classification and detection for a myriad of applications.

Its potential applications are nearly limitless. With computer vision reshaping industries spanning sports, automotive, agriculture, retail, banking, construction, insurance and beyond, much is at stake.

3 Things to Know About Computer Vision

  • Segmentation: Image segmentation is about classifying pixels as belonging to a certain category, such as a car, road or pedestrian. It’s widely used in self-driving vehicle applications, including the NVIDIA DRIVE software stack, to show roads, cars and people. Think of it as a visualization technique that makes what computers do easier for humans to understand.
  • Classification: Image classification is used to determine what’s in an image. Neural networks can be trained to identify dogs or cats, for example, or many other things with a high degree of precision, given sufficient data.
  • Detection: Image detection allows computers to localize where objects exist. It places rectangular bounding boxes that fully contain each object. A detector might be trained to see where cars or people are within an image, for instance (see the sketch after this list).
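
To make classification and detection concrete, here is a minimal sketch, not from the original post, that uses pretrained torchvision models. It assumes torchvision 0.13 or newer for the string-based weights argument, and the sample image path is a placeholder.

```python
# Sketch: image classification ("what is it?") and object detection ("where is it?")
# with pretrained torchvision models. Assumes torchvision >= 0.13; the image
# file is a hypothetical placeholder.
import torch
import torchvision
from torchvision import transforms
from PIL import Image

image = Image.open("pitch_frame.jpg").convert("RGB")

# Classification: one label for the whole image.
classifier = torchvision.models.resnet50(weights="IMAGENET1K_V2").eval()
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
with torch.no_grad():
    logits = classifier(preprocess(image).unsqueeze(0))
print("predicted class id:", logits.argmax(dim=1).item())

# Detection: bounding boxes, labels and confidence scores for each object.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
with torch.no_grad():
    result = detector([transforms.ToTensor()(image)])[0]
print("boxes found:", len(result["boxes"]), "top scores:", result["scores"][:3])
```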

What You Need to Know: Segmentation, Classification and Detection

  • Segmentation: good at delineating objects; used in self-driving vehicles.
  • Classification: “Is it a cat or a dog?”; classifies with precision.
  • Detection: “Where does it exist in space?”; recognizes things for safety.

NVIDIA’s Deep Learning Institute offers courses such as “Getting Started with Image Segmentation” and “Fundamentals of Deep Learning for Computer Vision.”

Intel Reports Third-Quarter 2020 Financial Results

Intel Corporation’s third-quarter 2020 earnings news release and presentation are available on the company’s Investor Relations website. The earnings conference call for investors begins at 2 p.m. PDT today; a public webcast will be available at www.intc.com.

Q3 2020 earnings infographic

NVIDIA Xavier Shatters Records, Excels in Back-to-Back Performance Benchmarks

AI-powered vehicles aren’t a future vision, they’re a reality today. And they’re only truly possible on NVIDIA Xavier, our system-on-a-chip for autonomous vehicles.

The key to these cutting-edge vehicles is inference — the process of running AI models in real time to extract insights from enormous amounts of data. And when it comes to in-vehicle inference, NVIDIA Xavier has been proven the best — and the only — platform capable of real-world AI processing, yet again.

NVIDIA GPUs smashed performance records for AI inference in data center and edge computing systems in the latest round of MLPerf benchmarks, the only consortium-based and peer-reviewed inference performance tests. NVIDIA Xavier extended the performance leadership it demonstrated in the first AI inference tests, held last year, while supporting all of the new use cases added for energy-efficient edge compute SoCs.

Inferencing for intelligent vehicles is a full-stack problem. It requires the ability to process sensors and run the neural networks, operating system and applications all at once. This high level of complexity calls for a huge investment, which NVIDIA continues to make.

The new NVIDIA A100 GPU, based on the NVIDIA Ampere architecture, also rose above the competition, outperforming CPUs by up to 237x in data center inference. This level of performance in the data center is critical for training and validating the neural networks that will run in the car at the massive scale necessary for widespread deployment.

Achieving this performance isn’t easy. In fact, most of the companies that have proven the ability to run a full self-driving stack run it on NVIDIA.

The MLPerf tests demonstrate that AI processing capability lies beyond the pure number of trillions of operations per second (TOPS) a platform can achieve. It’s the architecture, flexibility and accompanying tools that define a compute platform’s AI proficiency.

Xavier Stands Alone

The inference tests represent a suite of benchmarks to assess the type of complex workload needed for software-defined vehicles. Many different benchmark tests across multiple scenarios, including edge computing, verify whether a solution can perform exceptionally at not just one task, but many, as would be required in a modern car.

In this year’s tests, NVIDIA Xavier dominated results for energy-efficient, edge compute SoCs — processors necessary for edge computing in vehicles and robots — in both single-stream and multi-stream inference tasks.
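
The formal scenarios are defined by MLPerf’s LoadGen harness, and its multi-stream scenario imposes strict per-deadline constraints. As a loose, illustrative contrast only, the toy sketch below times one-query-at-a-time latency (single-stream style) against batched throughput; the model and batch size are placeholders.

```python
# Toy illustration of why per-query latency and batched throughput are measured
# separately. This is NOT the MLPerf harness; real runs use LoadGen and strict
# latency constraints. Model and sizes are placeholders.
import time
import torch
import torchvision

model = torchvision.models.resnet50(weights=None).eval()

def time_batch(batch_size: int) -> float:
    x = torch.randn(batch_size, 3, 224, 224)
    start = time.perf_counter()
    with torch.no_grad():
        model(x)
    return time.perf_counter() - start

# Single-stream style: one query at a time, report a percentile latency.
latencies = sorted(time_batch(1) for _ in range(20))
print(f"90th-percentile single-query latency: {latencies[17] * 1e3:.1f} ms")

# Offline/batched style: many samples at once, report throughput.
elapsed = time_batch(32)
print(f"batched throughput: {32 / elapsed:.1f} samples/s")
```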

Xavier is the current-generation SoC powering the brain of the NVIDIA DRIVE AGX computer for both self-driving and cockpit applications. It’s an AI supercomputer incorporating six different types of processors: CPU, GPU, deep learning accelerator, programmable vision accelerator, image signal processor and stereo/optical flow accelerator.

NVIDIA DRIVE AGX Xavier

Thanks to its architecture, Xavier stands alone when it comes to AI inference. Its programmable deep neural network accelerators optimally support the operations for high-throughput and low-latency DNN processing. Because these algorithms are still in their infancy, we built the Xavier compute platform to be flexible so it could handle new iterations.

Supporting new and diverse neural networks requires processing different types of data, through a wide range of neural nets. Xavier’s tremendous processing performance handles this inference load to deliver a safe automated or autonomous vehicle with an intelligent user interface.

Proven Effective with Industry Adoption

As the industry compares TOPS of performance to determine autonomous capabilities, it’s important to test how these platforms can handle actual AI workloads.

Xavier’s back-to-back leadership in the industry’s leading inference benchmarks demonstrates NVIDIA’s architectural advantage for AI application development. Our SoC really is the only proven platform up to this unprecedented challenge.

The vast majority of automakers, tier 1 suppliers and startups are developing on the DRIVE platform. NVIDIA has gained much experience running real-world AI applications on its partners’ platforms. All these learnings and improvements will further benefit the NVIDIA DRIVE ecosystem.

Raising the Bar Further

It doesn’t stop there. NVIDIA Orin, our next-generation SoC, is coming next year, delivering nearly 7x the performance of Xavier with incredible energy-efficiency.

NVIDIA Orin

Xavier is supported by software tools such as CUDA and TensorRT, which optimize DNNs for the target hardware. The same tools will be available on Orin, so developers can carry their existing software development over to the latest hardware.
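
As a hedged sketch of that optimization flow, and not NVIDIA’s published recipe, here is how an ONNX model might be parsed and built into an FP16 engine with the TensorRT 8.x Python API; the file names are placeholders, and the script would be run wherever TensorRT is installed for the target hardware.

```python
# Sketch: build a TensorRT FP16 engine from an ONNX model with the TensorRT 8.x
# Python API. File names are placeholders.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("failed to parse ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # enable reduced precision where supported

engine_bytes = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(engine_bytes)
```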

NVIDIA has shown time and again that it’s the only solution for real-world AI and will continue to drive transformational technology such as self-driving cars for a safer, more advanced future.

NVIDIA Inference Performance Surges as AI Use Crosses Tipping Point

Inference, the work of using AI in applications, is moving into mainstream uses, and it’s running faster than ever. NVIDIA GPUs won all tests of AI inference in data center and edge computing systems in the latest round of the industry’s only consortium-based and peer-reviewed benchmarks. NVIDIA A100 Tensor Core GPUs extended the performance leadership …
