Theo de Raadt Interview between the Ottawa 2019 Hackathon and BSDCan 2019

Tom Smyth writes in about an interview he did with Theo de Raadt between g2k19, the general hackathon in Ottawa, and BSDCan 2019:

Have you ever wondered about the whys and hows of Theo and his friends in OpenBSD relentlessly pursuing security perfection in computer operating systems and the software that runs on them? Or perhaps you are more concerned with much deeper questions like: What operating system does Theo use on his laptop? Who is his favourite developer? Who is his favourite user/sysadmin? Or are you just in need of some serious life tips on dealing with trolls?

OK, enough with the superficial questions... let's let Theo do the talking... check out the video here.

A big thank you goes to Theo for his time in the interview. I enjoyed making it with him, and I hope you all enjoy it, and that the wider public learns something new from it too.

Many thanks to Theo indeed, and also to Tom for doing the interview. We hope to see more soon!

AI Meets 5G at the Edge: The Innovation Center at MWC 2020 in Barcelona

AI is alive at the edge of the network, where it’s already transforming everything from car makers to supermarkets. And we’re just getting started.

NVIDIA’s AI Edge Innovation Center, a first for this year’s Mobile World Congress (MWC) in Barcelona, will put attendees at the intersection of AI, 5G and edge computing. There, they can hear about best practices for AI at the edge and get an update on how NVIDIA GPUs are paving the way to better, smarter 5G services.

It’s a story that’s moving fast.

AI was born in the cloud to process the vast amounts of data needed for jobs like recommending new products and optimizing news feeds. But most enterprises interact with their customers and products in the physical world at the edge of the network — in stores, warehouses and smart cities.

The need to sense, infer and act in real time as conditions change is driving the next wave of AI adoption at the edge. That’s why a growing number of forward-thinking companies are building their own AI capabilities using the NVIDIA EGX edge-computing platform.

Walmart, for example, built a smart supermarket it calls its Intelligent Retail Lab. Jakarta uses AI in a smart city application to manage its vehicle registration program. BMW and Procter & Gamble automate inspection of their products in smart factories. They all use NVIDIA EGX along with our Metropolis application framework for video and data analytics.

For conversational AI, the NVIDIA Jarvis developer kit enables voice assistants geared to run on embedded GPUs in smart cars or other systems. WeChat, the world’s most popular smartphone app, accelerates conversational AI using NVIDIA TensorRT software for inference.

All these software stacks ride on our CUDA-X libraries, tools, and technologies that run on an installed base of more than 500 million NVIDIA GPUs.

Carriers Make the Call

At MWC Los Angeles this year, NVIDIA founder and CEO Jensen Huang announced Aerial, software that rides on the EGX platform to let telecommunications companies harness the power of GPU acceleration.

Ericsson’s Fredrik Jejdling, executive vice president and head of business area networks, joined NVIDIA CEO Jensen Huang on stage at MWC LA to announce their collaboration.

With Aerial, carriers can both increase the spectral efficiency of their virtualized 5G radio-access networks and offer new AI services for smart cities, smart factories, cloud gaming and more — all on the same computing platform.

In Barcelona, NVIDIA and partners including Ericsson will give an update on how Aerial will reshape the mobile edge network.

Verizon is already using NVIDIA GPUs at the edge to deliver real-time ray tracing for AR/VR applications over 5G networks.

It’s one of several ways telecom applications can be taken to the next level with GPU acceleration. Imagine having the ability to process complex AI jobs on the nearest base station with the speed and ease of making a cellular call.

Your Dance Card for Barcelona

For a few days in February, we will turn our innovation center — located at Fira de Barcelona, Hall 4 — into a virtual university on AI with 5G at the edge. Attendees will get a world-class deep dive on this strategic technology mashup and how companies are leveraging it to monetize 5G.

Sessions start Monday morning, Feb. 24, and include AI customer case studies in retail, manufacturing and smart cities. Afternoon talks will explore consumer applications such as cloud gaming, 5G-enabled cloud AR/VR and AI in live sports.

We’ve partnered with the organizers of MWC on applied AI sessions on Tuesday, Feb. 25. These presentations will cover topics like federated learning, an emerging technique for collaborating on the development and training of AI models while protecting data privacy.

Wednesday’s schedule features three roundtables where attendees can meet executives working at the intersection of AI, 5G and edge computing. The week also includes two instructor-led sessions from the NVIDIA Deep Learning Institute, which trains developers on best practices.

See Demos, Take a Meeting

For a hands-on experience, check out our lineup of demos based on the NVIDIA EGX platform. These will highlight applications such as object detection in a retail setting, ways to unsnarl traffic congestion in a smart city and our cloud-gaming service GeForce Now.

To learn more about the capabilities of AI, 5G and edge computing, check out the full agenda and book an appointment here.

Instant Replay: NVIDIA’s 5 Top YouTube Videos for 2019

‘Seeing is believing’ is a cliché for a reason, but it might be truer to say believing is seeing.

Until this year, after all, it would have been hard to believe a computer could turn a few stylus strokes into a realistic work of art, or that you could test autonomous vehicles for millions of miles before rubber hit the road. But someone at NVIDIA believed these things could be done — and now the rest of us are seeing it, too.

Below, we’ve queued up a quick spin through the five most popular videos NVIDIA posted in 2019.

Each — whether explaining how NVIDIA used ray tracing to recreate the Apollo 11 moon landing or riding alongside NVIDIA’s Neda Cvijetic as she provides a live play-by-play while an autonomous vehicle navigates California’s highways — will leave you believing anything is possible.

What better way to get ready for 2020?

GauGAN: Changing Sketches into Photorealistic Masterpieces

A deep learning model developed by NVIDIA Research turns rough doodles into highly realistic scenes using generative adversarial networks (GANs). Dubbed GauGAN, the tool is like a smart paintbrush, converting segmentation maps into lifelike images. Read more at https://nvda.ws/2O6MHN2

NVIDIA DRIVE Sim

The flexible, open DRIVE Constellation platform enables developers to design and implement detailed simulations for vehicle testing and validation. Engineers can recreate a vehicle’s sensor structure, positioning, and traffic scenario to test in a variety of road and weather conditions for the development of safe autonomous vehicles.

Celebrating the 50th Anniversary of Apollo 11’s Moon Landing, with Commentary from Buzz Aldrin

The anniversary of the Apollo 11 landing — one of mankind’s greatest achievements — inspired us to dramatically step up our own work, enhancing our earlier moon-landing demo with NVIDIA RTX real-time ray-tracing technology. The result: a beautiful, near-cinematic depiction of one of history’s great moments. Relive the moment with commentary from Buzz Aldrin, whose giant leap 50 years ago inspires new generations of moonshots today.

Ride in NVIDIA’s Self-Driving Car

This special edition DRIVE Labs episode shows how NVIDIA DRIVE AV Software combines the essential building blocks of perception, localization, and planning/control to drive autonomously on public roads around our headquarters in Santa Clara, Calif.

GTC 2019 Keynote with NVIDIA CEO Jensen Huang

Whether you’re a true believer, or just want to see what’s next, you’ll want to sign up now for NVIDIA’s GPU Technology Conference in Silicon Valley. For a taste of what’s to come, here’s a recap of NVIDIA CEO Jensen Huang’s keynote address at GTC 2019, where he introduced breakthroughs in pro graphics with NVIDIA Omniverse; in data science with NVIDIA-powered Data Science Workstations; in inference and enterprise computing with NVIDIA T4 GPU-powered servers; in autonomous machines with NVIDIA Jetson Nano and the NVIDIA Isaac SDK; in autonomous vehicles with NVIDIA Safety Force Field and DRIVE Constellation; and more.

Deep Learning Shakes Up Geologists’ Tools to Study Seismic Fault Systems

Fifteen years after a magnitude 9.1 earthquake and tsunami struck off the coast of Indonesia, killing more than 200,000 people in over a dozen countries, geologists are still working to understand the complex fault systems that run through Earth’s crust.

While major faults are easy for geologists to spot, these large features are connected to other, smaller faults and fractures in the rock. Identifying these smaller faults is painstaking, requiring weeks to study individual slices from a 3D image.

Researchers at the University of Texas at Austin are shaking up the process with deep learning models that identify geologic fault systems from 3D seismic images, saving scientists time and resources. The developers used NVIDIA GPUs and synthetic data to train neural networks that spot small, subtle faults typically missed by human interpreters.

Examining fault systems helps scientists to determine which seismic features are older than others and to study regions of interest like continental margins, where a continental plate meets an oceanic one.

Seismic analyses are also used in the energy sector to plan drilling and rigging activities to extract oil and natural gas, as well as the opposite process of carbon sequestration — injecting carbon dioxide back into the ground to mitigate the effects of climate change.

“Sometimes you want to drill into the fractures, and sometimes you want to stay away from them,” said Sergey Fomel, geological sciences professor at UT Austin. “But in either case, you need to know where they are.”

Tracing Cracks in Earth’s Upper Crust

Seismic fault systems are so complex that researchers analyzing real-world data by hand miss some of the finer cracks and fissures connected to a major fault. As a result, a deep learning model trained on human-annotated datasets will also miss these smaller fractures.

To get around this limitation, the researchers created synthetic data of seismic faults. Using synthetic data meant the scientists already knew the location of each major and minor fault in the dataset. This ground-truth baseline enabled them to train an AI model that surpasses the accuracy of manual labeling.
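To make the synthetic-data idea concrete, here is a minimal, hypothetical sketch of how one labeled training pair could be generated: a layered volume is sheared across a plane to mimic a fault, and the voxels on that plane become the ground-truth mask. The array sizes, the single planar fault and the helper name are illustrative assumptions, not the researchers' actual data-generation pipeline.

```python
import numpy as np

def make_synthetic_pair(n=64, throw=4, seed=0):
    """Toy example: one planar fault in a horizontally layered volume.

    Returns a (seismic, label) pair where `label` marks voxels on the
    fault plane. Real training data would add folding, noise, wavelet
    convolution and many faults with varying dip and throw.
    """
    rng = np.random.default_rng(seed)
    # Horizontally layered "geology": one random value per depth slice.
    layers = rng.normal(size=n)
    volume = np.tile(layers[:, None, None], (1, n, n)).astype(np.float32)

    # Vertical fault plane at x = n // 2: shift one side down by `throw`.
    x0 = n // 2
    volume[:, :, x0:] = np.roll(volume[:, :, x0:], throw, axis=0)

    # Ground-truth mask: voxels on the fault plane are labeled 1.
    label = np.zeros_like(volume, dtype=np.float32)
    label[:, :, x0] = 1.0
    return volume, label

seismic, fault_mask = make_synthetic_pair()
print(seismic.shape, int(fault_mask.sum()))  # (64, 64, 64), 4096 fault voxels
```

Because the fault is placed programmatically, its location is known exactly, which is the ground-truth baseline the article describes.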

The team’s deep learning model parses 3D volumetric data to determine the probability that there’s a fault at every pixel within the image. Geologists can then go through the regions the neural network has flagged as having a high probability of faults present to conduct their analyses.
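As a rough illustration of what a per-voxel fault probability over a 3D volume can look like in code, below is a minimal PyTorch sketch of a small 3D convolutional network that maps a seismic cube to a same-sized cube of probabilities. The layer choices and sizes are placeholder assumptions for illustration, not the published FaultSeg3D architecture cited in the figure caption below.

```python
import torch
import torch.nn as nn

class TinyFaultNet(nn.Module):
    """Toy 3D CNN: seismic volume in, per-voxel fault probability out.

    Placeholder architecture for illustration only; padding keeps the
    volume's shape so every input voxel gets its own probability.
    """
    def __init__(self, channels=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # x: (batch, 1, depth, height, width) seismic amplitudes
        return torch.sigmoid(self.body(x))  # same shape, values in [0, 1]

device = "cuda" if torch.cuda.is_available() else "cpu"
model = TinyFaultNet().to(device)
volume = torch.randn(1, 1, 64, 64, 64, device=device)  # stand-in seismic cube
with torch.no_grad():
    fault_prob = model(volume)   # (1, 1, 64, 64, 64): one probability per voxel
flagged = fault_prob > 0.9       # high-probability regions geologists might review first
```

Training such a network against synthetic fault masks would typically minimize a voxel-wise binary cross-entropy or Dice loss; the article does not specify which loss the team used.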

Fomel’s team uses 3D seismic volumes like these to map seismic fault systems. (Figure courtesy of Xinming Wu, from the paper “FaultSeg3D: Using synthetic data sets to train an end-to-end convolutional neural network for 3D seismic fault segmentation.”)

“Geologists help explain what happened throughout the history of geologic time,” he said. “They still need to analyze the AI model’s results to create the story, but we want to relieve them from the labor of trying to pick these features out manually. It’s not the best use of geologists’ time.”

Fomel said fault systems that take up to a month to analyze by hand can be processed in just seconds with the team’s CNN-based model, using an NVIDIA GPU for inference. Previous automated methods took hours and were much less accurate.

“Deep learning isn’t just a little bit more accurate — it’s on a whole different level both in accuracy and efficiency,” Fomel said. “It’s a game changer in terms of automatic interpretation.”

The researchers trained their neural networks on the Texas Advanced Computing Center’s Maverick2 system, powered by NVIDIA GPUs. Their deep learning models were built using the PyTorch and TensorFlow deep learning frameworks, as well as the Madagascar software package for geophysical data analysis.

Besides faults, these algorithms can be used to detect other features geologists examine, including salt bodies, sedimentary layers and channels. The researchers are also designing neural networks to calculate relative geologic time from seismic data — a measure that gives scientists detailed information about geologic structures.

Saved by the Spell: Serkan Piantino’s Company Makes AI for Everyone

Spell, founded by Serkan Piantino, is making machine learning as easy as ABC. Piantino, CEO of the New York-based startup and former director of engineering for Facebook AI Research, explained to AI Podcast host Noah Kravitz how he’s bringing compute power to those who don’t have easy access to GPU clusters.
