6.8-beta tagged in CVS

Theo (deraadt@) has just committed the crank to 6.8-beta to CVS

From: Theo de Raadt
Date: Mon, 31 Aug 2020 10:08:28 -0600 (MDT)
To: source-changes@openbsd.org
Subject: CVS: cvs.openbsd.org: src

CVSROOT:        /cvs
Module name:    src
Changes by:     deraadt@cvs.openbsd.org 2020/08/31 10:08:28

Modified files:
        sys/conf       : newvers.sh 
        sys/sys        : param.h 
        usr.bin/signify: signify.1 
        etc/root       : root.mail 
        share/mk       : sys.mk 
        sys/arch/macppc/stand/tbxidata: bsd.tbxi 

Log message:
crank to 6.8-beta

You know what this means: time to test snapshots and report any issues you find, both in the base system and in the supplied packages, so that the upcoming 6.8 release will not surprise you in unfortunate ways!

Speed Reader: Startup Primer Helps Analysts Make Every Second Count

Expected to read upwards of 200,000 words daily from hundreds, if not thousands, of documents, financial analysts are asked to perform the impossible.

Primer is using AI to apply the equivalent of compression technology to this mountain of data, making work easier for financial analysts as well as analysts across a range of other industries.

The five-year-old company, based in San Francisco, has built a natural language processing and machine learning platform that essentially does all the reading and collating for analysts in a tiny fraction of the time it would normally take them.

Whatever a given analyst might be monitoring, whether it’s a natural disaster, a credit default or a geopolitical event, Primer compresses hours of human research into a few seconds of analysis.

The software combs through massive amounts of content, highlights pertinent information such as quotes and facts, and assembles them into related lists. It distills vast topics into the essentials in seconds.

“We train the models to mimic that human behavior,” said Barry Dauber, vice president of commercial sales at Primer. “It’s really a powerful analyst platform that uses natural language processing and machine learning to surface and summarize information at scale.”

The Power of 1,000 Analysts

Using Primer’s platform running on NVIDIA GPUs is akin to giving an analyst a virtual staff that delivers near-instantaneous results. The software can analyze and report on tens of thousands of documents from financial reports, internal proprietary content, social media, 30,000-40,000 news sources and elsewhere.

“Every time an analyst wants to know something about Syria, we cluster together documents about Syria, in real time,” said Ethan Chan, engineering manager and staff machine learning engineer at Primer. “The goal is to reduce the amount of effort an analyst has to expend to process more information.”

Primer has done just that, to the relief of its customers, which include financial services firms, government agencies and an array of Fortune 500 companies.

As powerful as Primer’s natural language processing algorithms are, until two years ago they needed 20 minutes to deliver results because of the complexity of the document clustering they asked CPUs to perform.

“The clustering was the bottleneck,” said Chan. “Because we have to compare every document with every other document, we’re looking at nearly a trillion flops for a million documents.”
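
The arithmetic behind that quote is worth spelling out: comparing every document with every other document scales quadratically with the corpus. A quick back-of-envelope check (ours, not Primer’s code) in Python:

```python
# Pairwise comparison grows quadratically with corpus size.
n = 1_000_000                # documents
pairs = n * (n - 1) // 2     # every document compared with every other
print(f"{pairs:,}")          # 499,999,500,000 pairs
# At even a couple of floating-point operations per comparison, that is
# on the order of a trillion flops, matching the figure quoted above.
```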

GPUs Slash Analysis Times

Primer’s team added GPUs to the clustering process in 2018 after joining NVIDIA Inception — an accelerator program for AI startups — and quickly slashed those analysis times to mere seconds.

Primer’s GPU work unfolds in the cloud, where it makes equally generous use of AWS, Google Cloud and Microsoft Azure. For prototyping and training of its NLP algorithms such as Named Entity Recognition and Headline Generation (on public, open-source news datasets), Primer uses instances with NVIDIA V100 Tensor Core GPUs.

Model serving and clustering happen on instances with NVIDIA T4 GPUs, which can be dialed up and down based on clustering needs. The company also uses CuPy, a NumPy-compatible Python library that executes array computations as CUDA kernels on the GPU.
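
As a rough sketch of how CuPy moves an all-against-all comparison like this onto the GPU, consider pairwise cosine similarity over document embeddings. This is illustrative only; the embedding dimensions and the choice of cosine similarity are our assumptions, not Primer’s published design:

```python
import cupy as cp  # NumPy-compatible arrays backed by CUDA

def pairwise_cosine(embeddings: cp.ndarray) -> cp.ndarray:
    """Return the n x n cosine-similarity matrix for n document embeddings."""
    norms = cp.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / norms   # normalize each row to unit length
    return unit @ unit.T        # one large matrix multiply, run on the GPU

docs = cp.random.rand(10_000, 768, dtype=cp.float32)  # hypothetical embeddings
sim = pairwise_cosine(docs)    # the O(n^2) step the T4 instances accelerate
```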

But Primer’s most innovative use of GPUs, Chan believes, is in accelerating its clustering algorithms.

“Grouping documents together is not something anyone else is doing,” he said, adding that Primer’s success in this area further establishes that “you can use NVIDIA for new use cases and new markets.”

Flexible Delivery Model

With the cloud-based SaaS model, customers can increase or decrease their analysis speed, depending on how much they want to spend on GPUs.

Primer’s offering can also be deployed in a customer’s data center. There, the models can be trained on a customer’s IP and clustering can be performed on premises. This is an important consideration for those working in highly regulated or sensitive markets.

Analysts in finance and national security are currently Primer’s primary users. However, the company could help anyone tasked with combing through mounds of data actually make decisions instead of preparing to make decisions.

Rise and Sunshine: NASA Uses Deep Learning to Map Flows on Sun’s Surface, Predict Solar Flares

Looking directly at the sun isn’t recommended — unless you’re doing it with AI, which is what NASA is working on.

The surface of the sun, which is the layer you can see with the eye, is actually bubbly: intense heat creates a boiling reaction, similar to water at high temperature. So when NASA researchers magnify images of the sun with a telescope, they can see tiny blobs, called granules, moving on the surface.

Studying the movement and flows of the granules helps the researchers better understand what’s happening underneath that outer layer of the sun.

The computations for tracking the motion of granules require advanced imaging techniques. Using data science and GPU computing on NVIDIA Quadro RTX-powered HP Z8 workstations, NASA researchers have developed deep learning techniques to more easily track the flows on the sun’s surface.

RTX Flares Up Deep Learning Performance

When studying how storms and hurricanes form, meteorologists analyze the flows of winds in Earth’s atmosphere. For this same reason, it’s important to measure the flows of plasma in the sun’s atmosphere to learn more about the short- and long-term evolution of our nearest star.

This helps NASA understand and anticipate events like solar flares, which can affect power grids, communication systems like GPS or radios, or even put space travel at risk because of the intense radiation and charged particles associated with space weather.

“It’s like predicting earthquakes,” said Michael Kirk, research astrophysicist at NASA. “Since we can’t see very well beneath the surface of the sun, we have to take measurements from the flows on the exterior to infer what is happening subsurface.”

Granules are transported by plasma motions — hot ionized gas under the surface. To capture these motions, NASA developed customized algorithms tailored to its solar observations: a deep learning neural network observes the granules in images from the Solar Dynamics Observatory and learns to reconstruct their motions.
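
NASA’s exact architecture is not public, but the supervised setup described here can be sketched in TensorFlow (which the team mentions using below). In this illustrative toy model, two consecutive granule images go in and a per-pixel flow estimate comes out; all shapes are our assumptions:

```python
import tensorflow as tf

def build_flow_net(size: int = 128) -> tf.keras.Model:
    # Input: two consecutive intensity frames stacked as channels.
    inp = tf.keras.Input(shape=(size, size, 2))
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    # Output: a per-pixel (vx, vy) plasma-flow estimate.
    out = tf.keras.layers.Conv2D(2, 3, padding="same")(x)
    return tf.keras.Model(inp, out)

model = build_flow_net()
model.compile(optimizer="adam", loss="mse")  # regress synthetic flow fields
```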

“Neural networks can generate estimates of plasma motions at resolutions beyond what traditional flow tracking methods can achieve,” said Benoit Tremblay from the National Solar Observatory. “Flow estimates are no longer limited to the surface — deep learning can look for a relationship between what we see on the surface and the plasma motions at different altitudes in the solar atmosphere.”

“We’re training neural networks using synthetic images of these granules to learn the flow fields, so it helps us understand precursor environments that surround the active magnetic regions that can become the source of solar flares,” said Raphael Attie, solar astronomer at NASA’s Goddard Space Flight Center.

NVIDIA GPUs were essential to training the neural networks: NASA needed to run several training sessions, each with data preprocessed in a different way, to develop robust deep learning models, and CPU power was not enough for these computations.

When using TensorFlow on a 72-core CPU compute node, it took an hour to complete just one pass through the training data. Even in a CPU-based cloud environment, it would still take weeks to train all the models the scientists needed for a single project.
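
Moving such a workload to a GPU requires no model changes: TensorFlow places operations on an available NVIDIA GPU automatically. A one-line check (standard TensorFlow API) confirms the card is visible before committing to a long training run:

```python
import tensorflow as tf

# Should list the Quadro RTX 8000 (or any CUDA GPU) if drivers are in place.
print(tf.config.list_physical_devices("GPU"))
```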

With an NVIDIA Quadro RTX 8000 GPU, the researchers can complete one training in about three minutes — a 20x speedup. This allows them to start testing the trained models after a day instead of having to wait weeks.

“This incredible speedup enables us to try out different ways to train the models and make ‘stress tests,’ like preprocessing images at different resolutions or introducing synthetic errors to better emulate imperfections in the telescopes,” said Attie. “That kind of accelerated workflow completely changed the scope of what we can afford to explore, and it allows us to be much more daring and creative.”

With NVIDIA Quadro RTX GPUs, the NASA researchers can accelerate workflows for their solar physics projects, and they have more time to conduct thorough research with simulations to gain deeper understandings of the sun’s dynamics.

Learn more about NVIDIA and HP data science workstations, and listen to the AI Podcast with NASA.

Intel Studios Showcases Volumetric Production at 77th Venice International Film Festival

Intel Studios’ 10,000-square-foot geodesic dome in Los Angeles is the world’s largest immersive media hub. Each of the studio’s 96 high-resolution 5K cameras captures action in two dimensions and algorithms convert those trillions of pixels into a 360-degree, 3D virtual environment. (Credit: Tim Herman/Intel Corporation)

What’s New: Intel® Studios will premiere two new volumetric films, “Queerskins: ARK” and “HERE,” at the 77th Venice International Film Festival beginning Sept. 2. Part of the Venice VR Expanded competition on Viveport, the films are two of the latest innovative virtual reality (VR) experiences captured and produced at Intel Studios.

“Using the groundbreaking volumetric capture and production abilities of Intel Studios, whether it be through the unique movement-based experiences of ‘ARK’ via six degrees of freedom or the ability to layer scenes from various time periods on top of one another in ‘HERE,’ we are ushering in a new age of content creation and immersive experiences.”
–Diego Prilusky, head of Intel Studios

Why It Matters: Intel Studios is driving the future of content creation with next-generation filmmaking technology and production that allow captured content to be translated into VR, augmented reality and movement-based six degrees of freedom (6DoF) experiences. After unveiling previous productions at the Tribeca Film Festival and Cannes, Venice is a defining moment for Intel Studios to showcase interactive experiences that demonstrate the capabilities and future of longer-form volumetric content.

About the Interactive Experience: “Queerskins: ARK” is an Intel Studios original, written by Illya Szilak and co-produced by Cloudred. It follows a Catholic mother living in rural Missouri who reads a diary left by the estranged son she has lost to AIDS, as a way to transcend herself and her grief by imagining him alive and in love. The 6DoF interactive experience allows viewers to navigate around a large space and enter the mother’s imagination, co-creating and controlling the experience through movements.

“Queerskins: ARK,” an Intel Studios original written by Illya Szilak and co-produced by Cloudred. (Credit: Intel Studios)

About the Layers of Time: “HERE,” an Intel Studios original co-produced with 59 Productions, presents an immersive adaptation of Richard McGuire’s groundbreaking graphic novel. This unique experience is a grand biopic where the main character is a place rather than a person. Through volumetric capture and VR technology, we join the host of characters across layers of generations who have called this particular room home. The immersive VR narrative invites audiences to reflect on human experiences across generations.

“HERE,” an Intel Studios original co-produced with 59 Productions. (Credit: Intel Studios)

More Context: Intel Studios (Press Kit)

Pixel Perfect: V7 Labs Automates Image Annotation for Deep Learning Models

Cells under a microscope, grapes on a vine and species in a forest are just a few of the things that AI can identify using the image annotation platform created by startup V7 Labs.

Whether a user wants AI to detect and label images showing equipment in an operating room or livestock on a farm, the London-based company offers V7 Darwin, an AI-powered web platform with a trained model that already knows what almost any object looks like, according to Alberto Rizzoli, co-founder of V7 Labs.

It’s a boon for small businesses and other users that are new to AI or want to reduce the costs of training deep learning models with custom data. Users can load their data onto the platform, which then segments objects and annotates them. It also allows for training and deploying models.

V7 Darwin is trained on several million images and optimized on NVIDIA GPUs. The startup is also exploring the use of NVIDIA Clara Guardian, which includes the NVIDIA DeepStream SDK intelligent video analytics framework, on edge AI embedded systems. So far, it’s piloted laboratory perception, quality inspection and livestock monitoring projects, using the NVIDIA Jetson AGX Xavier and Jetson TX2 modules for edge deployment of trained models.

V7 Labs is a member of NVIDIA Inception, a program that provides AI startups with go-to-market support, expertise and technology assistance.

Pixel-Perfect Object Classification

“For AI to learn to see something, you need to give it examples,” said Rizzoli. “And to have it accurately identify an object based on an image, you need to make sure the training sample captures 100 percent of the object’s pixels.”

Annotating and labeling an object based on such a level of “pixel-perfect” granular detail takes just two-and-a-half seconds for V7 Darwin — up to 50x faster than a human, depending on the complexity of the image, said Rizzoli.
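
To make “pixel-perfect” concrete: segmentation quality is commonly scored with per-pixel overlap metrics such as intersection-over-union (IoU), where 1.0 means every object pixel is captured. A minimal sketch of the metric (our illustration, not V7’s code):

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """IoU of two boolean masks; 1.0 means all object pixels are captured."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(inter) / float(union) if union else 1.0

truth = np.zeros((64, 64), bool)
truth[16:48, 16:48] = True        # ground-truth object mask
pred = truth.copy()
pred[16:18, :] = False            # a label that misses two rows of pixels
print(f"IoU = {iou(pred, truth):.3f}")  # 0.938, not pixel-perfect
```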

Saving time and costs around image annotation is especially important in healthcare, Rizzoli said. Healthcare professionals must look at hundreds of thousands of X-ray or CT scans and annotate abnormalities, but this work can be automated.

For example, during the COVID-19 pandemic, V7 Labs worked with the U.K.’s National Health Service and Italy’s San Matteo Hospital to develop a model that detects the severity of pneumonia in a chest X-ray and predicts whether a patient will need to enter an intensive care unit.

The company also published an open dataset with over 6,500 X-ray images showing pneumonia, 500 cases of which were caused by COVID-19.

V7 Darwin can be used in a laboratory setting, helping to detect protocol errors and automatically log experiments.

Application Across Industries

Companies in a wide variety of industries beyond healthcare can benefit from V7’s technology.

“Our goal is to capture all of computer vision and make it remarkably easy to use,” said Rizzoli. “We believe that if we can identify a cell under a microscope, we can also identify, say, a house from a satellite. And if we can identify a doctor performing an operation or a lab technician performing an experiment, we can also identify a sculptor or a person preparing a cake.”

Global uses of the platform include assessing the damage of natural disasters, observing the growth of human and animal embryos, detecting caries in dental X-rays, creating autonomous machines to evaluate safety protocols in manufacturing, and allowing farming robots to count their harvests.

Stay up to date with the latest healthcare news from NVIDIA, and explore how AI, accelerated computing, and GPU technology contribute to the worldwide battle against the novel coronavirus on our COVID-19 research hub.

More Than a Wheeling: Boston Band of Roboticists Aim to Rock Sidewalks With Personal Bots

With Lime and Bird scooters covering just about every major U.S. city, you’d think all bets were off for walking. Think again.

Piaggio Fast Forward is staking its future on the idea that people will skip e-scooters or ride-hailing once they take a stroll with its gita robot. A Boston-based subsidiary of the iconic Vespa scooter maker, the company says the recent focus on getting fresh air and walking during the COVID-19 pandemic bodes well for its new robotics concept.

The fashionable gita robot — looking like a curvaceous vintage scooter — can carry up to 40 pounds and automatically keeps stride, so you don’t have to lug groceries, picnic goodies or other items on walks. Another mark in gita’s favor: you can exercise in the fashion of those in Milan and Paris, walking the sidewalks to meals and stores. “Gita” means short trip in Italian.

The robot may turn some heads on the street. That’s because Piaggio Fast Forward parent Piaggio Group, which also makes Moto Guzzi motorcycles, expects sleek, flashy designs under its brand.

The first idea from Piaggio Fast Forward was to automate something like a scooter to autonomously deliver pizzas. “The investors and leadership came from Italy, and we pitched this idea, and they were just horrified,” quipped CEO and founder Greg Lynn.

If the company gets it right, walking could even become fashionable in the U.S. Early adopters have been picking up gita robots since their November debut. The stylish personal robot, enabled by the NVIDIA Jetson TX2 supercomputer-on-a-module, comes in signal red, twilight blue or thunder gray.

Gita as Companion

The robot was designed to follow a person. That means the company didn’t have to create a completely autonomous robot that uses simultaneous localization and mapping, or SLAM, to get around fully on its own, said Lynn. And it doesn’t use GPS.

Instead, a gita user taps a button and the robot’s cameras and sensors immediately capture images that pair the robot with its leader, whom it then follows.

Using neural networks and the Jetson’s GPU to perform complex image processing tasks, the gita can avoid collisions by understanding how people move in sidewalk traffic, according to the company. “We have a pretty deep library of what we call ‘pedestrian etiquette,’ which we use to make decisions about how we navigate,” said Lynn.

Pose-estimation networks with 3D point cloud processing allow it to see the gestures of people to anticipate movements, for example. The company recorded thousands of hours of walking data to study human behavior and tune gita’s networks. It used simulation training much the way the auto industry does, using virtual environments. Piaggio Fast Forward also created environments in its labs for training with actual gitas.

“So we know that if a person’s shoulders rotate at a certain degree relative to their pelvis, they are going to make a turn,” Lynn said. “We also know how close to get to people and how close to follow.”
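
That heuristic translates directly into geometry on pose keypoints. An illustrative sketch (ours, not Piaggio Fast Forward’s code): compare the heading of the shoulder line with the heading of the hip line, and flag a turn when the twist exceeds a threshold:

```python
import numpy as np

def heading(left: np.ndarray, right: np.ndarray) -> float:
    """Ground-plane angle (radians) of the line through two keypoints."""
    return np.arctan2(right[1] - left[1], right[0] - left[0])

def about_to_turn(l_sh, r_sh, l_hip, r_hip, threshold=np.radians(15)):
    """True if the shoulders have rotated past `threshold` vs. the pelvis."""
    twist = heading(l_sh, r_sh) - heading(l_hip, r_hip)
    twist = np.arctan2(np.sin(twist), np.cos(twist))  # wrap to [-pi, pi]
    return abs(twist) > threshold

# (x, y) keypoints in meters, e.g. from a pose-estimation network:
print(about_to_turn(np.array([0.0, 0.0]), np.array([0.4, 0.15]),
                    np.array([0.05, 0.0]), np.array([0.35, 0.0])))  # True
```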

‘Impossible’ Without Jetson 

The robot has a stereo depth camera to gauge the speed and distance of moving people, plus three more cameras that detect pedestrians to aid path planning. The ability to run split-second inference for sidewalk navigation decisions was important.

“We switched over and started to take advantage of CUDA for all the parallel processing we could do on the Jetson TX2,” said Lynn.

Piaggio Fast Forward used lidar on its early design prototypes, which were tethered to a bulky desktop computer and cost tens of thousands of dollars in all. To sell its robot at a reasonable price, it needed a compact, energy-efficient and affordable embedded AI processor.

“We have hundreds of machines out in the world, and nobody is joy-sticking them out of trouble. It would have been impossible to produce a robot for $3,250 if we didn’t rely on the Jetson platform,” he said.

Enterprise Gita Rollouts

Gita robots have been off to a good start in U.S. sales with early technology adopters, according to the company, which declined to disclose unit sales. They have also begun to roll out in enterprise customer pilot tests, said Lynn.   

Cincinnati-Northern Kentucky International Airport is running gita pilots for delivery of merchandise purchased in airports as well as food and beverage orders from mobile devices at the gates.

Piaggio Fast Forward is also working with some retailers who are experimenting with the gita robots for handling curbside deliveries, which have grown in popularity for avoiding the insides of stores.

The company is also in discussions with residential communities that are exploring gita robots as a replacement for golf carts, to encourage walking in new developments.

Piaggio Fast Forward plans to launch several variations in the gita line of robots by next year.

“Rather than do autonomous vehicles to move people around, we started to think about a way to unlock the walkability of people’s neighborhoods and of businesses,” said Lynn.

Piaggio Fast Forward is a member of NVIDIA Inception, a virtual accelerator program that helps startups in AI and data science get to market faster.

Houston Depends on Intel Tech for Healthcare, Education and Connectivity Solutions

After the pandemic was declared in March, Intel announced the Pandemic Response Technology Initiative to help combat COVID-19. The company targeted investments of $50 million to accelerate access to technology that can combat current and future pandemics through scientific discovery, enable remote learning for students and help apply technology to aid in economic recovery.

To make this happen, Intel works closely with organizations, cities and communities around the globe.

Houston has been hit especially hard by the coronavirus. In July, reports showed a test positivity rate of almost 25% for Harris County, Houston’s home; at the time, the county ranked 10th in the U.S. for confirmed cases. The city of 2 million people, with an additional 5 million living in its metro area, is among the most diverse in the U.S. and one of the largest geographically, spanning 640 square miles. With the coronavirus disproportionately affecting minorities, the diversity and sheer size of the city put its residents at risk.

More: Intel Response to COVID-19 Crisis (Press Kit) | The Pandemic Drives New Era of Tech Collaboration (Rick Echevarria Editorial)

To fight COVID-19, Houston’s leaders have created a holistic approach across three focus areas: healthcare, education and smart cities.

“We can’t do this without a strong investment in technology,” says Houston Mayor Sylvester Turner of the coronavirus fight.

Remote Care and Monitoring for Healthcare

Intel is working with Medical Informatics Corp. (MIC), an Intel Capital portfolio company based in Houston. MIC’s goal is to give medical professionals the quality data they need to deliver remote care, so they are protected from exposure to infection through clinical distancing, can scale the capacity of critical care and non-critical care beds and staff, and can collect and analyze that data to get ahead of patient deterioration. To achieve these goals, MIC’s FDA-cleared Sickbay platform collects patient data from disparate bedside devices in ICUs and other care areas, enabling flexible, vendor-agnostic remote monitoring and patient-centered analytics at scale.

The solution lets hospitals hit hard by the coronavirus remotely tap healthcare workers from other places: doctors, nurses and other care providers can access collected patient data from any enabled PC, tablet or phone, and traditional hospital rooms can be turned into makeshift virtual intensive care units or low-acuity infectious disease units using software run on Intel® architecture.

Collected data is used by medical professionals to both monitor patient conditions and proactively consider larger trends. Heather Hitchcock, MIC’s chief commercial officer, says the technology will have a major impact during and after the coronavirus pandemic.

“While hospitals need help now with getting access to data remotely to monitor their patients, they don’t want just a short-term fix,” she said. “They are looking to redefine and simplify their architecture, reduce costs, and increase and protect revenue, not just for COVID-19, but to change the practice of medicine, realize the vision of AI, and create the foundation for a new standard of data-driven medicine and patient-centered care.”

Intel and MIC have launched the Scale to Serve Program to help 100 hospitals rapidly install the Sickbay platform, funding the implementation process and waiving the first 90 days of subscription costs. Qualifying hospitals can skip months of procurement work and set up easily. Once hospitals adopt the platform, it can be deployed in as little as a week and be rapidly scaled as needed.

Houston Methodist Hospital had set up its virtual ICU in advance of the pandemic. Within days, it was able to add hundreds of additional intensive care beds across units, and it is now creating a low-acuity command center and deterioration scores to support second and future waves with the help of its Sickbay installation. (To learn more about Intel’s work with MIC, visit Intel.com.)

Student Connectivity across the Digital Divide

Houston’s 48 school districts are navigating the coronavirus pandemic and the many challenges that come along with a rapid transition to distance learning. With 20% of Houston students at or below the poverty line, the digital divide was — and is — more present than ever.

Gabrielle Rowe, whom the Mayor tapped to run Houston’s student connectivity efforts, emphasizes the importance of technology and innovation in building a resilient Houston.

“We have to do it all, otherwise we will be measured by our weakest link. Our Black, Latinx and under-resourced citizens haven’t been pushed to the fringe; they’ve been pushed off a cliff,” she said. “The ladder that will bring them back up the cliff is technology. Without that there is nothing.”

“To be a resilient city, we have to look at all of our citizens,” Rowe said. It isn’t really a technology issue, she says, but an equity and affordability issue. Giving a device to someone doesn’t necessarily guarantee connectivity to services and the community as a whole.

The city of Houston, working closely with Intel, Microsoft and T-Mobile, aims to bridge the gap between students and their education, as 25% of students in Texas don’t have access to technology. Intel’s support is helping the city understand educational and community needs in order to bring digital skills and training to students and communities. Students and families who qualify then receive T-Mobile internet connectivity through Intel’s strategic partners, connecting them to the greater community and its resources.

Juliet Stipeche, director of the mayor’s office of education, explains that technology empowers not only the students but their families.

“Technology is a lifeline during this pandemic, unlike anything I’ve ever seen before,” Stipeche said. “If there’s a silver lining from this crisis, it’s compelled collaboration to formulate creative and transformative solutions to bridge the tech divide. Digital equity is critical to the future of Houston’s education, workforce and economy. And enhancing the lives of all Houstonians drives us in this endeavor.”

A Safe Return Depends on Smart City Tech

As the economy reopens and we get back to all the places that people gather – offices, movie theatres, airports, universities – smart city technologies will play a critical role in ensuring a safe return. Houston is actively planning for a safer return to what Sameer Sharma, Intel’s general manager of Smart Cities & Transportation, describes as “Smart Spaces.”

In 2019, Intel and the City of Houston established a partnership to bring smart-city solutions to life through The ION Smart and Resilient Cities Accelerator. Jesse Bounds, director of the mayor’s office of innovation, says Houston’s resilience plan was in the works prior to COVID-19; now, however, the city faces a new set of challenges. The city is turning to partners — including startups, Intel and others — to help respond to COVID-19 and build a more resilient Houston.

One startup that came from the Intel-supported accelerator program is Water Lens, which offers next-generation genetic water testing technology, including a rapid test for COVID-19. Water Lens is currently working on a pilot with the city to test for COVID-19 in wastewater to help determine the community infection rate.

Mayor Turner explains that with many residents working from home, the billions of dollars that would normally be spent on capital improvements to infrastructure around the city can be invested in technology that will go into Smart Spaces – everything from fever checks to contact tracing and robotics – to create a more resilient, sustainable and operational city.

Turner emphasizes the importance of using technology to keep front-line workers – medical professionals, grocery workers, transit workers, waste management workers and others – safe and ensure that they don’t have to forfeit their health for the sake of their jobs. Bounds explains that as the city opens, the need for smart safety precautions will be critical.

Though the work is far from over, the city of Houston is confident in its ability to keep on fighting.

“We are technologically up for the challenge,” Rowe said.

Sterling Support: SHIELD TV’s 25th Software Upgrade Now Available

With NVIDIA SHIELD TV, there’s always more to love.

Today’s software update — SHIELD Software Experience Upgrade 8.2 — is the 25th for owners of the original SHIELD TV. It’s a remarkable run, spanning more than 5 years since the first SHIELD TVs launched in May 2015.

The latest upgrade brings a host of new features and improvements for daily streamers and media enthusiasts.

Stream On

One of the fan-favorite features for the newest SHIELD TVs is the AI upscaler. It works by training a neural network model on countless images. Deployed on 2019 SHIELD TVs, the AI model can then take low-resolution video and produce incredible sharpness and enhanced details no traditional scaler can recreate. Edges look sharper. Hair looks scruffier. Landscapes pop with striking clarity.
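
NVIDIA hasn’t published the SHIELD upscaler itself, so purely for intuition, here is a conceptual sketch of the difference: basic upscaling applies a fixed interpolation kernel, while AI upscaling runs the frame through a network trained to predict the missing detail (the toy model below is untrained and only shows the shape of the approach):

```python
import tensorflow as tf

frame = tf.random.uniform((1, 360, 640, 3))   # stand-in for one 360p frame

# Basic upscaling: a fixed interpolation kernel.
basic = tf.image.resize(frame, (2160, 3840), method="bicubic")

# AI upscaling: a learned network predicts the high-frequency detail.
sr_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(3 * 36, 3, padding="same"),             # features
    tf.keras.layers.Lambda(lambda t: tf.nn.depth_to_space(t, 6)),  # 6x upscale
])
ai = sr_model(frame)   # shape (1, 2160, 3840, 3), same as `basic`
```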

Today’s upgrade adds more 4K UHD upscaling support, now covering content from 360p to 1440p. And on 2019 SHIELD TV Pros, we’ve added support for 60fps content, so SHIELD can now upscale live sports on HD TV and HD video from YouTube to 4K with AI. In the weeks ahead, following an update to the NVIDIA Games app in September, we’ll add 4K 60fps upscaling to GeForce NOW.

The customizable menu button on the new SHIELD remote is another popular addition to the family. It’s getting two more actions to customize.

In addition to an action assigned to a single press, users can now configure a custom action for double press and long press. With over 25 actions available, the SHIELD remote is now the most customizable remote for streamers. This powerful feature works with all SHIELD TVs and the SHIELD TV app, available on the Google Play Store and iOS App Store.

More to Be Enthusiastic About

We take pride in SHIELD being a streaming media player enthusiasts can be, well, enthusiastic about. With our latest software upgrade, we’re improving our IR and CEC volume control support.

These upgrades include support for digital projectors and allow volume control even when SHIELD isn’t active. The update also adds IR volume control when using the SHIELD TV app and when you’ve paired your Google Home with SHIELD. The 2019 SHIELD remote gains IR control to change the input source on TVs, AVRs and soundbars.

Additionally, earlier SHIELD generations — both 2015 and 2017 models — now have an option to match the frame rate of displayed content.

We’ve added native SMBv3 support as well, providing faster and more secure connections between PC and SHIELD. SMB file sharing now works without requiring a Plex media server.

With SHIELD, there’s always more to love. Download the latest software upgrade today, and check out the release notes for a complete list of all the new features and improvements.

Safe Travels: Voyage Intros Ambulance-Grade, Self-Cleaning Driverless Vehicle Powered by NVIDIA DRIVE

Self-driving cars continue to amaze passengers as a truly transformative technology. However, in the time of COVID-19, a self-cleaning car may be even more appealing.

Robotaxi startup Voyage introduced its third-generation vehicle, the G3, this week. The autonomous vehicle, a Chrysler Pacifica Hybrid minivan retrofitted with self-driving technology, is the company’s first designed to operate without a driver and is equipped with an ambulance-grade ultraviolet light disinfection system to keep passengers healthy.

The new vehicles use the NVIDIA DRIVE AGX Pegasus compute platform to enable the startup’s self-driving AI for robust perception and planning. The automotive-grade platform delivers safety to the core of Voyage’s autonomous fleet.

Given the enclosed space and the proximity of the driver and passengers, ride-hailing currently poses a major risk in a COVID-19 world. By implementing a disinfecting system alongside driverless technology, Voyage is ensuring self-driving cars will continue to develop as a safer, more efficient alternative to everyday mobility.

The G3 vehicle uses an ultraviolet-C system from automotive supplier GHSP to destroy pathogens in the vehicle between rides. UV-C works by inactivating a pathogen’s DNA, blocking its reproductive cycle. It’s been proven to be up to 99.9 percent effective and is commonly used to sterilize ambulances and hospital rooms.

The G3 is production-ready and currently being tested on public roads in San Jose, Calif., with production vehicles planned for next year.

G3 Compute Horsepower Takes Off with DRIVE AGX Pegasus

Voyage has been using the NVIDIA DRIVE AGX platform in its previous-generation vehicles to power its Shield automatic emergency braking system.

With the G3, the startup is unleashing the 320 TOPS of performance from NVIDIA DRIVE AGX Pegasus to process sensor data and run diverse and redundant deep neural networks simultaneously for driverless operation. Voyage’s onboard computers are automotive grade and safety certified, built to handle the harsh vehicle environment for safe daily operation.

NVIDIA DRIVE AGX Pegasus delivers the compute necessary for level 4 and level 5 autonomous driving.

DRIVE AGX Pegasus is built on two NVIDIA Xavier systems-on-a-chip. Xavier is the first SoC built for autonomous machines and was recently determined by global safety agency TÜV SÜD to meet all applicable requirements of ISO 26262. This stringent assessment means it meets the strictest standard for functional safety.

Xavier’s safety architecture combined with the AI compute horsepower of the DRIVE AGX Pegasus platform delivers the robustness and performance necessary for the G3’s fully autonomous capabilities.

Moving Forward as the World Shelters in Place

As the COVID-19 pandemic continues to limit the way people live and work, transportation must adapt to keep the world moving.

In addition to the UV-C lights, Voyage has also equipped the car with HEPA-certified air filters to ensure safe airflow inside the car. The startup uses its own employees to manage and operate the fleet, enacting strict contact tracing and temperature checks to help minimize virus spread.

The Voyage G3 is equipped with a UV-C light system to disinfect the vehicle between rides.

While these measures are in place to specifically protect against the COVID-19 virus, they demonstrate the importance of an autonomous vehicle as a place where passengers can feel safe. No matter the condition of the world, autonomous transportation translates to a worry-free voyage, every time.
