SUPER Powers for All Gamers: Best In Class Performance, Ray Tracing, Latest Features

Most gamers don’t buy a GPU to accelerate a single game. They make investments. That’s why — with the deluge of new ray-traced games announced over the past few weeks — we’re doubling down on our gaming GPUs.

SUPER, our new line of faster Turing GPUs announced Tuesday — and maybe not our best kept secret — is perfect for them.

These new GPUs — the GeForce RTX 2080 SUPER, GeForce RTX 2070 SUPER, and GeForce RTX 2060 SUPER — deliver up to 25 percent faster performance than the original RTX 20 series.

They offer more cores and higher clock speeds, so gamers — who want the best performance they can afford — know they’ll be able to play the blockbusters today, and the ones on the horizon.

And with so many mega titles publicly embracing real-time ray tracing, why would anyone buy a GPU that doesn’t support it?

Game On, Literally

Dubbed the “graphics holy grail,” real-time ray tracing brings cinematic-quality lighting effects to interactive experiences for the first time.

The ecosystem now driving real-time ray tracing is immense – tens of millions of GPUs, industry standard APIs, leading game engines and an all-star roster of game franchises.

Turing, which was introduced last year, is a key part of that ecosystem. The world’s most advanced GPU architecture, it fuses next-generation shaders with real-time ray tracing and all-new AI capabilities.

Turing’s hybrid graphics capability represents the biggest generational leap ever in gaming GPUs, delivering up to 6x more performance than previous 10 Series Pascal GPUs.

And our killer line-up of SUPER GPUs — which represents a year of tweaking and tuning our Turing architecture — will deliver even more performance.

So, demanding PC gamers can be ready to take on the new generation of games that every gamer now knows are coming.

E3, Computex Kick Off a Deluge of New Ray-Traced Games

Last month’s Computex and Electronic Entertainment Expo marked a milestone for real time ray tracing, as blockbuster after blockbuster announced that they would be using it to create stunning visuals in their titles.

Call of Duty: Modern Warfare, Control, Cyberpunk 2077, Doom Eternal, Sword and Fairy 7, Watch Dogs: Legion, and Wolfenstein: Youngblood joined the list of major titles that will be using ray tracing. 

Battlefield V, Metro Exodus, Quake II RTX, Shadow of the Tomb Raider, and Stay in the Light (early access) are already shipping with ray tracing support.

And more are coming.

Ray tracing is now supported in industry standard APIs, including Microsoft DirectX Raytracing and Vulkan.

The most popular game engines used by game developers to create games now support real-time ray tracing, including Unreal Engine, Unity, Frostbite, id Tech, Remedy, 4A and more.

Virtual Reality, More a Reality than Ever

More than just a ray-tracing king, the RTX GPU series is also designed for virtual reality.

NVIDIA Adaptive Shading (NAS) technology is built into the Turing architecture. NAS supports Variable Rate Shading (VRS), including motion and content adaptive shading for the highest performance and image quality, as well as Foveated Rendering, which puts the detail where the gamer is looking.
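As a concrete illustration, here’s a minimal Python sketch of the foveated-rendering idea: screen tiles near the gaze point get full-rate shading, while the periphery is shaded in coarser blocks. The tile coordinates, radii and rate tiers below are all made up for illustration; this is not NVIDIA’s actual NAS or VRS implementation.

```python
import math

# Illustrative foveated shading-rate selection: full detail where the
# gamer is looking, coarser shading blocks toward the periphery.
def shading_rate(tile_center, gaze, full_radius=0.2, half_radius=0.5):
    """Pick a shading rate for a screen tile (coordinates in [0, 1])."""
    dist = math.dist(tile_center, gaze)
    if dist <= full_radius:
        return "1x1"  # one shade per pixel: full detail
    if dist <= half_radius:
        return "2x2"  # one shade per 2x2 pixel block
    return "4x4"      # coarsest rate in the far periphery

gaze = (0.5, 0.5)
print(shading_rate((0.5, 0.55), gaze))  # right where the user looks
print(shading_rate((0.9, 0.9), gaze))   # far corner of the screen
```

Content-adaptive shading works the same way, except the rate is driven by how much detail and motion a tile contains rather than by where the user is looking.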

These technologies support a booming ecosystem of headsets, developer tools — and, increasingly — games. Valve’s Index HMD and controller began shipping just days ago. That follows the launch of the highly-anticipated Oculus Rift S earlier this year, as well as the Vive Focus Plus in February.

A Trio of GPUs for the Latest AAA Games

Our refreshed lineup of Turing GPUs is ready, joining our existing entry-level GeForce RTX 2060, starting at $349, and GeForce RTX 2080 Ti flagship, starting at $999. They include:

  • GeForce RTX 2060 SUPER GPU – Starting at $399, Available July 9
    • Up to 22% faster (average 15%) than RTX 2060
    • 8GB GDDR6 – 2GB more than the RTX 2060
    • Faster than GTX 1080
    • 7+7 TOPs (FP32+INT32) and 57 Tensor TFLOPs
  • GeForce RTX 2070 SUPER GPU – Starting at $499, Available July 9
    • Up to 24% faster (average 16%) than RTX 2070, for the same price
    • Faster than GTX 1080 Ti
    • 9+9 TOPs (FP32+INT32) and 73 Tensor TFLOPs
  • GeForce RTX 2080 SUPER GPU – Starting at $699, Available July 23
    • More performance than RTX 2080, for the same price
    • Memory speed cranked up to 15.5Gbps
    • Faster than TITAN Xp
    • 11+11 TOPs (FP32+INT32) and 89 Tensor TFLOPs

NVIDIA GeForce RTX GPUs aren’t just the only gaming GPUs capable of real-time ray tracing; they’re the only ones to support other advanced gaming features, such as NVIDIA Adaptive Shading, Mesh Shading, Variable Rate Shading and NVIDIA Deep Learning Super Sampling, which uses AI to sharpen game visuals while increasing performance.

Future Proof

Gamers want to make a future proof investment and the future is clearly ray-tracing. They want the best performance and features they can afford. SUPER offers all of that, today.

The post SUPER Powers for All Gamers: Best In Class Performance, Ray Tracing, Latest Features appeared first on The Official NVIDIA Blog.

What’s the Difference Between Jetson Nano, Raspberry Pi, Edge TPU and Neural Compute Stick?

Raspberry Pi launched revolutionary computer building blocks for DIY makers. Think of Jetson Nano as the next step, providing AI for makers.

Released in 2012, Raspberry Pi has established itself as the de facto DIY computer board for makers, students and educators alike. For just $35, it offers features like video and wireless communication for home-brewed drones and robots.

More than 25 million Raspberry Pi units have sold worldwide, capturing the latter-day DIY spirit of the Whole Earth Catalog, the publication that a half century ago touched a generation striving to understand and build computers.

Entry-level AI is the next frontier for developers and makers alike. The NVIDIA Jetson Nano Developer Kit makes the past decade’s leap in artificial intelligence accessible to educators, creators and developers everywhere for just $99.

Jetson Nano delivers the best performance in a compact supercomputing package. And it sips just 5 to 10 watts of power, making it ideal for mobile applications where battery life matters most. To help DIYers get going, NVIDIA’s Deep Learning Institute is now offering a free course, Getting Started with AI on Jetson Nano, that teaches how to develop on the Jetson Nano devkit.

Jetson Nano Versus Raspberry Pi on Specs

Jetson Nano offers the most comprehensive and powerful AI capability in its class (see benchmarks). And it provides a full version of desktop Linux. Raspberry Pi offers the Raspbian operating system.

AI processing is a key differentiator. For those seeking object detection and image recognition, Jetson Nano’s 128-core Maxwell GPU can handle these tasks in real time.

Jetson Nano has a GPU that can handle about 500 gigaflops, a significant performance advantage for deep learning applications. Its specs include a camera interface, 4K display capability and high-speed interfaces for input and output to enable AI application development.
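That 500-gigaflop figure can be sanity-checked with back-of-the-envelope math. The sketch below assumes a 921.6 MHz maximum GPU clock (an assumption based on published specs, not taken from this article) and the usual convention that one fused multiply-add counts as two floating-point operations, with FP16 running at twice the FP32 rate.

```python
# Rough peak-throughput estimate for a 128-core Maxwell GPU.
cuda_cores = 128
clock_hz = 921.6e6        # assumed max GPU clock, not from the article
ops_per_core_cycle = 2    # one fused multiply-add = 2 FLOPs

fp32_gflops = cuda_cores * clock_hz * ops_per_core_cycle / 1e9
fp16_gflops = 2 * fp32_gflops  # half precision doubles the rate

print(f"FP32 ~{fp32_gflops:.0f} GFLOPS, FP16 ~{fp16_gflops:.0f} GFLOPS")
```

Under these assumptions the FP16 number lands near the “about 500 gigaflops” cited above.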

  • GPU: NVIDIA Maxwell architecture with 128 NVIDIA CUDA® cores
  • CPU: Quad-core ARM Cortex-A57 MPCore processor
  • Memory: 4 GB 64-bit LPDDR4 3200 MT/s
  • Storage: microSD slot
  • Video Encode: 4K @ 30 (H.264/H.265)
  • Video Decode: 4K @ 60 (H.264/H.265)
  • Camera: 2-lane MIPI CSI-2
  • Connectivity: Gigabit Ethernet, M.2 Key E
  • Display: HDMI 2.0 and eDP 1.4
  • USB: 4x USB 3.0 (shared), USB 2.0 Micro-B
  • I/O: 40-pin connector for SPI, I2C, I2S, UART and GPIOs
  • Size: 100 mm x 89 mm x 29 mm

Memory also matters. Jetson Nano has 4GB of memory, enabling it to process multiple deep learning models on high resolution data.

Jetson Nano is equipped to run full versions of machine learning frameworks. That means the same versions of TensorFlow, PyTorch, Caffe/Caffe2, Keras, MXNet and others available on desktop or cloud computing can run on Jetson Nano to build autonomous machines.

The result is that Jetson Nano provides real-time inferencing to edge devices such as robots and drones.

It’s also a great tool to experiment on voice applications and such vision tasks as building DIY home monitoring with cameras capable of image recognition.

There aren’t alternatives to Jetson Nano for such sophisticated tasks.

Raspberry Pi Versus Jetson Nano on Applications

The latest Raspberry Pi version remains the ideal choice for some applications. Jetson Nano is well-suited for makers tinkering on Raspberry Pi who want to step up to some autonomy or another form of AI in their projects.

For instance, if you want to build your own old-school arcade games — think Pac-Man or Donkey Kong — Raspberry Pi has you covered. Just code, compile, pop in USB game controllers, and plug in an HDMI monitor and you’re on the way.

However, if you want serious gaming play, say Doom 3, Jetson Nano is the only way to go.

Or if DIY drones are your thing, Raspberry Pi can fly you pretty far with a remote control. Support for cameras and wireless capabilities can enable video capture and remote control of these mechanical birds.

But if your drone requires even slightly complex object detection to aid navigation, it’s time to graduate to Jetson Nano.

And let’s say you’ve developed a pet-monitoring robot for the home. Do you want to train this robot to recognize your pets, navigate your house autonomously and provide video feeds while you’re at work? Jetson Nano is your best choice.

But if all you want is a robot that you can use to remotely follow your pets around with a joystick, even online, then Raspberry Pi 4 might be just the ticket.

Beyond Raspberry Pi, there’s Google’s Coral Edge TPU and Intel’s Movidius Neural Compute Stick, offering options for AI development.

Jetson Nano Versus Edge TPU Dev Board

Looking at Jetson Nano versus the Edge TPU dev board, the latter failed to run several AI models for classification and object detection.

That’s in large part because Edge TPU is an ASIC-based board intended for only specific models and tasks and only sports 1GB of memory. It can’t run full TensorFlow and instead runs TensorFlow Lite, sharply limiting the functions it can perform.

Meanwhile, Jetson Nano’s 4GB of memory and a general purpose GPU allow it to run full versions of frameworks.

Jetson Nano’s software and full framework support, memory capacity and unified memory subsystem allow it to run a ton of different networks up to full HD resolution, including variable batch sizes on multiple sensor streams concurrently.

Jetson Nano isn’t limited to deep neural network inferencing either. Its CUDA architecture can be unleashed for accelerated computing and digital signal processing, enabling training.

Jetson Nano Versus Intel Movidius Neural Compute Stick

A Neural Compute Stick is an add-on accessory that requires a separate computer for development. When running ResNet-50, one of the most commonly used image recognition models, it can only process 16 frames per second for image classification at a low resolution. This isn’t enough performance for real world applications.

Jetson Nano can handle 36 frames per second, which allows enough processing for both reinforcement learning and inference in real time.
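Converting those frame rates into per-frame time budgets makes the real-time gap concrete:

```python
# Each frame must be processed before the next arrives, so throughput
# in fps translates directly into a per-frame time budget.
def frame_budget_ms(fps):
    return 1000.0 / fps

for device, fps in [("Neural Compute Stick", 16), ("Jetson Nano", 36)]:
    print(f"{device}: {fps} fps -> {frame_budget_ms(fps):.1f} ms per frame")
```

At 36 fps each frame takes under 28 ms, comfortably inside the roughly 33 ms budget of a 30 fps camera stream; at 16 fps each frame takes over 60 ms, so a live 30 fps feed falls behind.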

When the resolution is cranked up using SSD ResNet-18, the Neural Compute Stick 2 did not run at all in benchmark tests. That’s likely because of limited memory capacity, unsupported network layers, hardware or software limitations, or some combination of these shortcomings.

A Jetson Nano developer kit — a complete AI computer — is a more cost-effective solution than a Neural Compute Stick.

DLI Course: Learn AI on Jetson Nano

The Deep Learning Institute course Getting Started with AI on Jetson Nano is a free eight-hour course designed to get people up to speed on AI. It uses Python notebooks on Jetson Nano to walk through how to build a deep learning classification project with computer vision models.

The course helps makers, students and developers set up their Jetson Nano Developer Kit and camera. It covers collection of image data, how to annotate images for regression models and explains how to train a neural network on your data and create your own models.

Then it’s off to the races: By the end of this course, you should be able to run inference on Jetson Nano with the models created.

Ready to make intelligent autonomous machines in the world we live in today?

Welcome to Jetson Nano.


Head of the Class: Indiana University Selects NVIDIA V100 to Power Nation’s Fastest University Supercomputer for AI

Indiana University has signed up to become home to the nation’s fastest university-owned supercomputer in an effort to support its AI research.

The university plans to be the first academic customer in line for a Cray Shasta supercomputer sporting the NVIDIA V100 line of Tensor Core GPUs.

The Cray Shasta supercomputer, dubbed Big Red 200, is expected to be fully operational by IU’s bicentennial on January 20, 2020.

Cray’s Shasta system promises to pack 5.9 petaflops, a nearly 300x performance leap over the university’s original Big Red supercomputer from 15 years ago.

IU plans to deploy the new Shasta system for university efforts in artificial intelligence, machine learning and data analytics. The supercomputer will help support the university’s advancement of AI in education, cybersecurity, medicine, environmental science and other areas.

Boosting Scientific Discovery

The performance boost will play a key role in supporting scientific research and enabling the next wave of discoveries at IU as well as the university’s Grand Challenges initiatives, said Matt Link, associate vice president for research technologies at IU.

Big Red 200 will support the university’s Precision Health Initiative, a Grand Challenge intended to improve prevention, treatment and health outcomes of human diseases.

The Cray Shasta supercomputer at IU will boost the work of researchers in healthcare, physics and astronomy in particular, said Link.

“Big Red 200 will mark a significant change in how we support the research engine at IU,” he said.

Powering Disease Detection

The new supercomputer could significantly accelerate timelines for IU researchers, he said. This holds vast promise to enable researchers to make breakthroughs in medical imaging, including for dementia and Alzheimer’s studies.

IU’s new system will provide service to about 130,000 students and 20,000 faculty members across its campuses, with support from more than 100 staff members, he said.

Also, IU’s new Cray Shasta system will enable university researchers to easily scale projects using resources beyond those on campus. That’s because larger versions of the Cray Shasta systems are planned for deployment at the Department of Energy — under the Exascale Computing Project — which will allow researchers to take AI workloads there.

Accelerating University Research

Access to GPU-accelerated high-performance computing plays an increasingly important role in the research of leading universities. It enables postdoctoral researchers to tackle big data challenges — like precision medicine — using deep learning.

IU garnered more than $185 million in research grant awards in 2018 that were supported by the university’s high-performance computing systems.

The Big Red 200 system itself will be funded by revenue from federal contracts and grants, according to the university.

Big Red 200 will replace Big Red II, which supported more than $500 million in grants and research contracts over its roughly five years of service and cost a small fraction of that sum.


Photo and credit: Indiana University; Emily Sterneman


What’s the Difference Between Hardware and Software Accelerated Ray Tracing?

You don’t need specialized hardware to do ray tracing, but you want it.

Software-based ray tracing, of course, is decades old. And it looks great: moviemakers have been relying on it for just as long.

But it’s now clear that specialized hardware — like the RT Cores built into NVIDIA’s latest Turing architecture — makes a huge difference if you’re doing ray tracing in real time. Games require real-time ray tracing.

Once considered the “holy grail” of graphics, real-time ray tracing brings the same techniques long used by moviemakers to gamers and creators.

Thanks to a raft of new AAA games developers have introduced this year — and the introduction last year of NVIDIA GeForce RTX GPUs — this once wild idea is mainstream.

Millions are now firing up PCs that benefit from the RT Cores and Tensor Cores built into RTX. And they’re enjoying ray-tracing enhanced experiences many thought would be years, or decades, away.

Real-time ray tracing, however, is possible without dedicated hardware. That’s because – while ray tracing has been around since the late 1960s – the real trend is much newer: GPU-accelerated ray tracing with dedicated cores.

The use of GPUs to accelerate ray-tracing algorithms gained fresh momentum last year with the introduction of Microsoft’s DirectX Raytracing (DXR) API. And that’s great news for gamers and creators.

Ray Tracing Isn’t New

So what is ray tracing? Look around you. The objects you’re seeing are illuminated by beams of light. Now follow the path of those beams backwards from your eye to the objects that light interacts with. That’s ray tracing.
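In code, the idea is surprisingly small. Below is a toy backward ray trace in Python: one ray, one sphere, one light, with simple Lambertian shading at the hit point. Every scene value is made up for illustration; production ray tracers add bounces, materials and millions of rays per frame, but the core step is this.

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Return the nearest hit distance along a unit-length ray, or None."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c  # direction is unit length, so a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def shade(origin, direction, center, radius, light):
    """Trace one ray backwards from the eye and shade what it hits."""
    t = ray_sphere(origin, direction, center, radius)
    if t is None:
        return 0.0  # ray escapes to the background
    hit = [o + t * d for o, d in zip(origin, direction)]
    normal = [(h - c) / radius for h, c in zip(hit, center)]
    to_light = [l - h for l, h in zip(light, hit)]
    norm = math.sqrt(sum(x * x for x in to_light))
    to_light = [x / norm for x in to_light]
    # Lambertian term: brighter where the surface faces the light
    return max(0.0, sum(n, ) if False else sum(n * l for n, l in zip(normal, to_light)))

# Eye at the origin looking down -z at a sphere 5 units away,
# lit from above. Brightness lands around 0.62.
brightness = shade((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0, (0, 5, 0))
print(f"{brightness:.2f}")
```

Repeat this for every pixel on screen, add reflections and shadows by recursing on new rays from the hit point, and you have a ray tracer.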

It’s a technique first described by IBM’s Arthur Appel, in 1969, in “Some Techniques for Shading Machine Renderings of Solids.” Thanks to pioneers such as Turner Whitted, Lucasfilm’s Robert Cook, Thomas Porter and Loren Carpenter, Caltech’s Jim Kajiya, and a host of others, ray tracing is now the standard in the film and CG industry for creating lifelike lighting and images.

However, until last year, almost all ray tracing was done offline. It’s very compute intensive. Even today, the effects you see at movie theaters require sprawling, CPU-equipped server farms. Gamers want to play interactive, real-time games. They won’t wait minutes or hours per frame.

GPUs, by contrast, can move much faster, thanks to the fact they rely on larger numbers of computing cores to get complex tasks done more quickly. And, traditionally, they’ve used another rendering technique, known as “rasterization,” to display three-dimensional objects on a two-dimensional screen.

With rasterization, objects on the screen are created from a mesh of virtual triangles, or polygons, that create 3D models of objects. In this virtual mesh, the corners of each triangle — known as vertices — intersect with the vertices of other triangles of different sizes and shapes. It’s fast and the results have gotten very good, even if it’s still not always as good as what ray tracing can do.
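The edge-function test at the heart of rasterization fits in a few lines. This Python sketch checks which pixel centers fall inside one screen-space triangle using signed-area tests; real GPUs run the same math in fixed-function hardware across millions of triangles per frame. The triangle coordinates here are arbitrary.

```python
def edge(a, b, p):
    """Signed-area test: positive if p is to the left of edge a->b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def covers(tri, p):
    """True if pixel center p lies inside the counter-clockwise triangle."""
    a, b, c = tri
    return edge(a, b, p) >= 0 and edge(b, c, p) >= 0 and edge(c, a, p) >= 0

tri = [(1, 1), (8, 2), (4, 7)]  # counter-clockwise screen-space vertices
covered = [(x, y) for y in range(8) for x in range(9) if covers(tri, (x, y))]
print(len(covered), "pixels covered")
```

Once coverage is known, the pixel’s color is computed by interpolating the vertex attributes, which is what the shader cores then spend their time on.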

GPUs Take on Ray Tracing

But what if you used these GPUs — and their parallel processing capabilities — to accelerate ray tracing? This is where GPU software-accelerated ray tracing comes in. NVIDIA OptiX, introduced in 2009, targeted design professionals with GPU-accelerated ray tracing. Over the next decade, OptiX rode the steady advance in speed delivered by successive generations of NVIDIA GPUs.

By 2015, NVIDIA was demonstrating at SIGGRAPH how ray tracing could turn a CAD model into a photorealistic image — indistinguishable from a photograph — in seconds, speeding up the work of architects, product designers and graphic artists.

That approach — GPU-accelerated software ray tracing — was endorsed by Microsoft early last year, with the introduction of DXR, which enables full support of NVIDIA’s RTX ray-tracing software through Microsoft’s DXR API.

Delivering high-performance, real-time ray tracing required two innovations: dedicated ray-tracing hardware, in the form of RT Cores; and Tensor Cores for high-performance AI processing used for advanced denoising, anti-aliasing and super resolution.

RT Cores accelerate ray tracing by speeding up the process of finding out where a ray intersects with the 3D geometry of a scene. These specialized cores accelerate a tree-based ray tracing structure called a bounding volume hierarchy, or BVH, used to calculate where rays and the triangles that comprise a computer-generated image intersect.
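The workhorse of BVH traversal is a cheap ray-versus-bounding-box test: if a ray misses a node’s box, every triangle beneath that node is skipped without further work. Here’s the standard “slab” test sketched in Python (assuming, for simplicity, a ray direction with no zero components); RT Cores run this kind of traversal in dedicated hardware.

```python
def ray_hits_box(origin, direction, box_min, box_max):
    """Slab test: the ray hits the box only if it is inside all three
    axis-aligned slabs over some shared interval of travel distance t."""
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        inv = 1.0 / d  # assume no zero components for this sketch
        t1, t2 = (lo - o) * inv, (hi - o) * inv
        if t1 > t2:
            t1, t2 = t2, t1
        t_near, t_far = max(t_near, t1), min(t_far, t2)
    return t_near <= t_far

print(ray_hits_box((0, 0, 0), (1, 1, 1), (2, 2, 2), (4, 4, 4)))   # hit
print(ray_hits_box((0, 0, 0), (1, 1, 1), (2, -4, 2), (4, -2, 4))) # miss
```

Because each node culls everything beneath it, a ray typically touches only a logarithmic slice of the scene rather than every triangle.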

Tensor Cores — first unveiled in 2017 with NVIDIA’s Volta architecture, aimed at enterprise and scientific computing, to accelerate AI algorithms — further accelerate graphically intense workloads through a special AI technique called NVIDIA DLSS, short for Deep Learning Super Sampling.

Turing at Work

You can see how this works by comparing how quickly Turing and our previous generation Pascal architecture render a single frame of Metro Exodus.

On Pascal, a single frame of Metro Exodus takes far longer to render, with a large slice of that frame time spent on ray tracing.

On Turing, the RT Cores kick in: the same ray-tracing work that a Pascal GPU performs is done in 20 percent of the time.

NVIDIA and our partners have been driving Turing to market through a stack of products that now ranges from the highest-performance offering, at $999, all the way down to an entry-level gamer card, at $199. The RTX products, with RT Cores and Tensor Cores, start at $349.

Broad Support

There’s no question that real time ray tracing is the next generation of gaming.

Some of the most important ecosystem partners have announced their support, and are now opening the floodgates for real time ray tracing in games.

Microsoft’s DirectX 12 multimedia programming interfaces include a ray-tracing component called DirectX Raytracing (DXR). So any PC with a DXR-capable GPU can perform accelerated ray tracing.

At the Game Developer Conference this past March we turned on DXR accelerated ray tracing on our Pascal and Turing GTX GPUs.

To be sure, earlier GPU architectures, such as Pascal, were designed to accelerate DirectX 12. So on this hardware, these calculations are performed on the programmable shader cores, a resource shared with many other graphics functions of the GPU.

So while your mileage will vary — since there are many ways ray tracing can be implemented — Turing will consistently perform better when playing games that make use of ray-tracing effects.

And that performance advantage on the most popular games is only going to grow.

EA’s AAA engine, Frostbite, supports ray tracing. Unity and Unreal, which together power 90 percent of the world’s games, now support Microsoft’s DirectX Raytracing in their engines.

Collectively, that opens up an easy path for thousands and thousands of game developers to implement ray tracing in their games.

All told, NVIDIA has engaged more than 100 developers who are working on ray-traced games.

To date, millions of gamers are playing on hardware-accelerated RTX GPUs.

And — thanks to ray tracing — that number is growing every week.


AI Nails It: Startup’s Drones Eye Construction Sites

Krishna Sudarshan was a Goldman Sachs managing director until his younger son’s obsession with drones became his own, attracting him to the flying machines’ data-heavy business potential.

Sudarshan quit Goldman in 2016, after a decade there, to found Aspec Scire, which pairs drones with a cloud service for construction and engineering firms to monitor their businesses.

A colleague from the banking giant joined as head of engineering and former colleagues invested in the startup.

Business has taken flight.

Since its launch, Aspec Scire has landed work with construction management firms, engineering firms and owners and developers, including a large IT services company, Sudarshan said.

“The construction industry has very low levels of automation. This is an industry that’s desperately in need of increased efficiency,” he said.

On-Demand Drones

Aspec Scire licenses its service to construction management firms, general contractors, surveyors and drone operators who offer it to their customers.

It can replace a lot of old-fashioned grunt work and record-keeping.

Its drones-as-a-service cloud business allows managers to remotely monitor the progress of construction sites. Videos and photos taken by drones can build up files on the status of properties, providing a digital trail of documentation for fulfillment of so-called SLAs, or service-level agreements, according to the company.

It can also show whether substructural building elements — such as pilings and columns — are keeping to the blueprints. The service holds promise for heading off safety issues and saving construction firms a lot of time and money if they can quickly catch mistakes before they become problems requiring major revisions.

Image recognition algorithms are also trained to spot hundreds of problems that could be dangerous or cause contractors major headaches — hot water lines next to gas lines, for example, or forgotten 50-amp outdoor plug-in receptacles for Tesla owners to charge from.

“We will be the ones that analyze data from construction sites to provide actionable insights to improve the efficiency of their operations,” he said.

AI Nails Construction

Sudarshan, who led technology for a division of Goldman, is like many who use NVIDIA GPUs to tap into fast processing for massive datasets. After studying the idea of drone data collection for construction, he put the two together.

He fits a banking industry adage: You can take the person out of Goldman, but you can’t take Goldman out of the person. “Goldman is so data driven at everything. And I can see this industry is not data driven, so I’m trying to see how we can make it more so,” he said.

Construction data is plentiful. Aspec Scire uses millions of images for training its image classification algorithms that apply to about 20,000 aspects of construction sites. Training data continues to grow as customers upload images from sites, he said.

Aspec Scire also provides trained models for the compact supercomputing power of Jetson TX2 onboard DJI drones to quickly process images. It trains its algorithms on NVIDIA GPUs on Google Cloud Platform, including the NVIDIA V100 Tensor Core GPU.

“Without GPUs we wouldn’t be able to do some of the things that we’re doing,” Sudarshan said.

Aspec Scire is a member of NVIDIA Inception, a virtual accelerator program that helps startups get to market faster.


Image credit: Magnus Bäck, licensed under Creative Commons


Plowing AI, Startup Retrofits Tractors with Autonomy

Colin Hurd, Mark Barglof and Quincy Milloy aren’t your conventional tech entrepreneurs. And that’s not just because their agriculture technology startup is based in Ames, Iowa.

Smart Ag is developing autonomy and robotics for tractors in a region more notable for its corn and soybeans than software or silicon.

Hurd, Barglof and Milloy, all Iowa natives, founded Smart Ag in 2015 and landed a total of $6 million in seed funding, $5 million of which came from Stine Seed Farm, an affiliate of Stine Seed Co. Other key investors included Ag Startup Engine, which backs Iowa State University startups.

The company is in widespread pilot tests with its autonomy for tractors and plans to commercialize its technology for row crops by 2020.

Smart Ag is a member of the NVIDIA Inception virtual accelerator, which provides marketing and technology support to AI startups.

A team of two dozen employees has been busy on its GPU-enabled autonomous software and robotic hardware system that operates tractors that pull grain carts during harvest.

“We aspire to being a software company first, but we’ve had to make a lot of hardware in order to make a vehicle autonomous,” said Milloy.

Wheat from Chaff

Smart Ag primarily works today with traditional row crop (corn and soybean) producers and cereal grain (wheat) producers. During harvest, these farmers use a tractor to pull a grain cart in conjunction with the harvesting machine, or combine, which separates the wheat from the chaff or corn from the stalk. Once the combine’s storage bin is full, the tractor with the grain cart pulls alongside for the combine to unload into the cart.

That’s where autonomous tractors come in.

Farm labor is scarce. In California, 55 percent of farms surveyed said they had experienced labor shortages in 2017, according to a report from the California Farm Bureau Federation.

Smart Ag is developing its autonomous tractor to pull a grain cart, addressing the lack of drivers available for this job.

Harvest Tractor Autonomy

Farmers can retrofit a John Deere 8R Series tractor using the company’s AutoCart system. It provides controllers for steering, acceleration and braking, as well as cameras, radar and wireless connectivity. An NVIDIA Jetson Xavier powers its perception system, fusing Smart Ag’s custom agricultural object detection model with other sensor data to give the tractor awareness of its surroundings.

“The NVIDIA Jetson AGX Xavier has greatly increased our perception capabilities — from the ability to process more camera feeds to the fusion of additional sensors —  it has enabled the path to develop and rapidly deploy a robust safety system into the field,” Milloy said.

Customers can use mobile devices and a web browser to access the system to control tractors.

Smart Ag’s team gathered more than 1 million images to train the image recognition system on AWS, tapping into NVIDIA GPUs. The startup’s custom image recognition algorithms allow its autonomous tractor to avoid people and other objects in the field, find the combine for unloading and return to a semi truck for the driverless grain cart vehicle to unload the grain for final transport to a grain storage facility.

Smart Ag has more than 12 pilot tests under its belt and uses those to gather more data to refine its algorithms. The company plans to expand its test base to roughly 20 systems operating during harvest in 2019 in preparation for its commercial launch in 2020.

“We’ve been training for the past year and a half. The system can get put out today in deployment, but we can always get higher accuracy,” Milloy said.


2019 Investor Meeting: Intel Previews Design Innovation; 10nm CPU Ships in June; 7nm Product in 2021

Dr. Murthy Renduchintala, Intel’s chief engineering officer and group president of the Technology, Systems Architecture and Client Group, speaks at the 2019 Intel Investor Meeting on Wednesday, May 8, 2019, in Santa Clara, California. (Credit: Intel Corporation)

Today, Wall Street analysts are gathered at Intel headquarters in Santa Clara for the company’s 2019 Investor Meeting, which features executive keynotes by Intel CEO Bob Swan and business unit leaders. At the meeting, Dr. Murthy Renduchintala, Intel’s chief engineering officer and group president of the Technology, Systems Architecture and Client Group, announced that Intel will start shipping its volume 10nm client processor in June and shared first details on the company’s 7nm process technology. Renduchintala said Intel has redefined its product innovation model for the data-centric era of computing, which “requires workload-optimized platforms and effortless customer and developer innovation.” He shared expected performance gains resulting from a combination of technical innovations across six pillars – process and packaging, architecture, memory, interconnect, security and software – giving insight into the design and engineering model steering the company’s product development.

More: Intel Investor Relations Website

“While process and CPU leadership remain fundamentally important, an extraordinary rate of innovation is required across a combination of foundational building blocks that also include architecture, memory, interconnect, security and software, to take full advantage of the opportunities created by the explosion of data,” Renduchintala said. “Only Intel has the R&D, talent, world-class portfolio of technologies and intellectual property to deliver leadership products across the breadth of architectures and workloads required to meet the demands of the expanding data-centric market.”

10nm Process Technology: Intel’s first volume 10nm processor, a mobile PC platform code-named “Ice Lake,” will begin shipping in June. The Ice Lake platform will take full advantage of 10nm along with architecture innovations. It is expected to deliver approximately 3 times faster wireless speeds, 2 times faster video transcode speeds, 2 times faster graphics performance, and 2.5 to 3 times faster artificial intelligence (AI) performance over previous generation products1. As announced, Ice Lake-based devices from Intel OEM partners will be on shelves for the 2019 holiday season. Intel also plans to launch multiple 10nm products across the portfolio through 2019 and 2020, including additional CPUs for client and server, the Intel® Agilex™ family of FPGAs, the Intel® Nervana™ NNP-I (AI inference processor), a general-purpose GPU and the “Snow Ridge” 5G-ready network system-on-chip (SOC).

Building on a model proven with 14nm that included optimizations in 14+ nm and 14++ nm, the company will drive sustained process advancement between nodes and within a node, continuing to lead the scaling of process technology according to Moore’s Law. The company plans to effectively deliver performance and scaling at the beginning of a node, plus another performance improvement within the node through multiple intra-node optimizations within the technology generation.

7nm Status: Renduchintala provided the first update on Intel’s 7nm process technology, which will deliver 2 times scaling and is expected to provide approximately a 20 percent increase in performance per watt with a 4 times reduction in design rule complexity. It will mark the company’s first commercial use of extreme ultraviolet (EUV) lithography, a technology that will help drive scaling for multiple node generations.

The lead 7nm product is expected to be an Intel Xe architecture-based, general-purpose GPU for data center AI and high-performance computing. It will embody a heterogeneous approach to product construction using advanced packaging technology. On the heels of Intel’s first discrete GPU coming in 2020, the 7nm general purpose GPU is expected to launch in 2021.



Heterogeneous Integration for Data-Centric Era: Renduchintala previewed new chip designs that leverage advanced 2D and 3D packaging technology to integrate multiple intellectual property (IP), each on its own optimized process technology, into a single package. The heterogeneous approach allows new process technologies to be leveraged earlier by interconnecting multiple smaller chiplets, and larger platforms to be built with unprecedented levels of performance when compared to non-monolithic alternatives.

Renduchintala unveiled the performance gains that resulted from innovative development of the client platform code-named “Lakefield.” The approach is emblematic of the strategic shift in the company’s design and engineering model that underpins Intel’s future product roadmaps. To meet customer specifications, a breadth of technical innovations, including a hybrid CPU architecture and Foveros 3D packaging technology, were used to meet always-on, always-connected and form-factor requirements while simultaneously delivering on power and performance targets. Lakefield is projected to deliver approximately 10 times SOC standby power improvement and 1.5 to 2 times active SOC power improvement relative to 14nm predecessors, 2 times graphics performance increases2, and a 2 times reduction in printed-circuit-board (PCB) area, giving OEMs more flexibility for thin and light form factor designs.

Performance results are based on testing as of dates shown in configuration and may not reflect all publicly available security updates.  See configuration disclosure for details.  No product or component can be absolutely secure.  Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.   For more complete information visit  

Optimization Notice: Intel’s compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. Check with your system manufacturer or retailer or learn more at

1Ice Lake configuration disclosures:

Approximately 3x Ice Lake Wireless Speeds: 802.11ax 2×2 160MHz enables 2402Mbps maximum theoretical data rates, ~3X (2.8X) faster than standard 802.11ac 2×2 80MHz (867Mbps) as documented in IEEE 802.11 wireless standard specifications and require the use of similarly configured 802.11ax wireless network routers.

Approximately 2x Ice Lake Video Encode: Based on 4k HEVC to 4k HEVC transcode (8bit). Intel preproduction system, ICL 15w compared to WHL 15w.  

Approximately 2x Ice Lake Graphics Performance:  Workload: 3DMark11 v 1.0.132. Intel PreProduction ICL U4+2 15W Configuration (Assumptions):, Processor: Intel® Core™ i7 (ICL-U 4+2) PL1=15W TDP, 4C8T, Memory: 2x8GB LPDDR4-3733 2Rx8, Storage: Intel® 760p m.2 PCIe NVMe SSD with AHCI Microsoft driver, Display Resolution: 3840×2160 eDP Panel 12.5”, OS: Windows* 10 RS5-17763.316, Graphics driver: PROD-H-RELEASES_ICL-PV-2019-04-09-1006832. Vs config – Intel PreProduction WHL U4+2 15W Configuration (Measured), Processor: Intel® Core™ i7-8565U (WHL-U4+2) PL1=15W TDP, 4C8T, Turbo up to 4.6Ghz, Memory: 2x8GB DDR4-2400 2Rx8, Storage: Intel® 760p m.2 PCIe NVMe SSD with AHCI Microsoft driver, Display Resolution: 3840×2160 eDP Panel 12.5”, OS: Windows* 10 RS4-17134.112., Graphics driver: 100.6195

Approximately 2.5x-3x Ice Lake AI Performance: Workload: images per second using AIXPRT Community Preview 2 with Int8 precision on ResNet-50 and SSD-Mobilenet-v1 models. Intel preproduction system, ICL-U, PL1 15w, 4C/8T, Turbo TBD, Intel Gen11 Graphics, GFX driver preproduction, Memory 8GB LPDDR4X-3733, Storage Intel SSD Pro 760P 256GB, OS Microsoft Windows 10, RS5 Build 475, preprod bios. Vs. Config – HP spectre x360 13t 13-ap0038nr, Intel® Core™ i7-8565U, PL1 20w, 4C/8T, Turbo up to 4.6Ghz, Intel UHD Graphics 620, Gfx driver, Memory 16GB DDR4-2400, Storage Intel SSD 760p 512GB, OS – Microsoft Windows 10 RS5 Build 475 Bios F.26.

2Lakefield configuration disclosures:

Approximately 10x Lakefield Standby SoC Power Improvement: Estimated or simulated as of April 2019 using Intel internal analysis or architecture simulation or modeling. Vs. Amber Lake.

Approximately 1.5x-2x Lakefield Active SoC Power Improvement:  Estimated or simulated as of April 2019 using Intel internal analysis or architecture simulation or modeling. Workload:  1080p video playback. Vs. Amber Lake next-gen product.

Approximately 2x Lakefield Graphics Performance: Estimated or simulated as of April 2019 using Intel internal analysis or architecture simulation or modeling. Workload:  GfxBENCH. LKF 5W & 7W Configuration (Assumptions):,Processor: LKF PL1=5W & 7W TDP, 5C5T, Memory: 2X4GB LPDDR4x – 4267MHz, Storage: Intel® 760p m.2 PCIe NVMe SSD; LKF Optimized Power configuration uses UFS, Display Resolution: 1920×1080 for Performance; 25×14 eDP 13.3” and 19×12 MIPI 8.0” for Power, OS: Windows* 10 RS5. Power policy set to AC/Balanced mode for all benchmarks except SYSmark 2014 SE which is measured in AC/BAPCo mode for Performance. Power policy set to DC/Balanced mode for power. All benchmarks run in Admin mode., Graphics driver: X.X Vs. Configuration Data: Intel® Core™ AML Y2+2 5W measurements: Processor: Intel® Core™ i7-8500Y processor, PL1=5.0W TDP, 2C4T, Turbo up to 4.2GHz/3.6GHz, Memory: 2x4GB LPDDR3-1866MHz, Storage: Intel® 760p m.2 PCIe NVMe SSD, Display Resolution: 1920×1080 for Performance; 25×14 eDP 13.3” for Power, OS: Windows 10 Build RS3 17134.112. SYSmark 2014 SE is measured in BAPCo power plan. Power policy set to DC/Balanced mode for power. All benchmarks run in Admin mode, Graphics driver: driver:whl.1006167-v2.

Forward-Looking Statements: Statements in this release that refer to future plans and expectations, including with respect to Intel’s future technologies and the expected benefits of such technologies, are forward-looking statements that involve a number of risks and uncertainties. Words such as “anticipates,” “expects,” “intends,” “goals,” “plans,” “believes,” “seeks,” “estimates,” “continues,” “may,” “will,” “would,” “should,” “could,” and variations of such words and similar expressions are intended to identify such forward-looking statements. Statements that refer to or are based on estimates, forecasts, projections, uncertain events or assumptions, including statements relating to total addressable market (TAM) or market opportunity, future products and the expected availability and benefits of such products, and anticipated trends in our businesses or the markets relevant to them, also identify forward-looking statements. Such statements are based on current expectations and involve many risks and uncertainties that could cause actual results to differ materially from those expressed or implied in these forward-looking statements. Important factors that could cause actual results to differ materially from the company’s expectations are set forth in Intel’s most recent earnings release dated April 25, 2019, which is included as an exhibit to Intel’s Form 8-K furnished to the SEC on such date. Additional information regarding these and other factors that could affect Intel’s results is included in Intel’s SEC filings, including the company’s most recent reports on Forms 10-K and 10-Q. Copies of Intel’s Form 10-K, 10-Q and 8-K reports may be obtained by visiting our Investor Relations website at or the SEC’s website at

The post 2019 Investor Meeting: Intel Previews Design Innovation; 10nm CPU Ships in June; 7nm Product in 2021 appeared first on Intel Newsroom.

Goodwill Farming: Startup Harvests AI to Reduce Herbicides

Jorge Heraud is an anomaly for a founder whose startup was recently acquired by a corporate giant: Instead of counting days to reap earn-outs, he’s sowing the company’s goodwill message.

That might have something to do with the mission. Blue River Technology, acquired by John Deere more than a year ago for $300 million, aims to reduce herbicide use on farms.

The effort has been a calling to like-minded talent in Silicon Valley who want to apply their technology know-how to more meaningful problems than the next hot app, said Heraud, who continues to serve as Blue River’s CEO.

“We’re using machine learning to make a positive impact on the world. We don’t see it as just a way of making a profit. It’s about solving problems that are worthy of solving — that attracts people to us,” he said.

Heraud and co-founder Lee Redden, who continues to serve as Blue River’s CTO, were attending Stanford University in 2011 when they decided to form the startup. Redden was pursuing graduate studies in computer vision and machine learning applied to robotics while Heraud was getting an executive MBA.

The duo’s work formed one of the early success stories of many for harnessing NVIDIA GPUs and computer vision to tackle complex industrial problems with big benefits to humanity.

“Growing food is one of the biggest and oldest industries — it doesn’t get bigger than that,” said Ryan Kottenstette, who invested in Blue River at Khosla Ventures.

Herbicide Spraying 2.0

As part of tractor giant John Deere, Blue River remains committed to herbicide reduction. The company is engaged in multiple pilots of its See & Spray smart agriculture technology.

Pulled behind a tractor, the See & Spray machine is about 40 feet wide and covers 12 rows of crops. Its 30 mounted cameras capture photos of plants every 50 milliseconds, which are processed by 25 on-board Jetson AGX Xavier supercomputing modules.

As the tractor pulls the rig at about 7 miles per hour, according to Blue River, the Jetson Xavier modules running the company’s image recognition algorithms must decide, quicker than the blink of an eye, whether each plant in the images fed from the 30 cameras is a weed or a crop. That leaves just enough time for See & Spray’s robotic sprayer, which features 200 precision sprayers, to zap each weed individually with herbicide.
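Those figures imply a tight latency budget. As a rough sanity check, using only the speed and frame-interval numbers above (the rest is illustrative arithmetic, not Blue River’s published spec), the timing works out as follows:

```python
# Back-of-the-envelope latency budget for a See & Spray-style rig.
# Figures from the article: ~7 mph ground speed, one photo per camera
# every 50 ms, 30 cameras. Everything else is simple arithmetic.

MPH_TO_MPS = 0.44704  # exact miles-per-hour to meters-per-second factor

speed_mps = 7 * MPH_TO_MPS        # ~3.13 m/s ground speed
frame_interval_s = 0.050          # one photo per camera every 50 ms
cameras = 30

# Distance the rig covers between successive frames from one camera.
travel_per_frame_m = speed_mps * frame_interval_s

# Aggregate classification rate across all cameras.
frames_per_second = cameras / frame_interval_s

print(f"Rig moves {travel_per_frame_m * 100:.1f} cm between frames")
print(f"Pipeline must classify {frames_per_second:.0f} images/s in aggregate")
```

In other words, every 50 milliseconds the rig has moved roughly 16 centimeters, and the pipeline is fielding on the order of 600 images per second, which is why the decision has to happen in well under the roughly 100 milliseconds of a human blink.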

“We use Jetson to run inference on our machine learning algorithms and to decide on the fly if a plant is a crop or a weed, and spray only the weeds,” Heraud said.

GPUs Fertilize AgTech

Blue River has trained its convolutional neural networks on more than a million images and its See & Spray pilot machines keep feeding new data as they get used.

Capturing as many possible varieties of weeds in different stages of growth is critical to training the neural nets, which are processed on a “server closet full of GPUs” as well as on hundreds of GPUs at AWS, said Heraud.

Using cloud GPU instances, Blue River has been able to train networks much faster. “We have been able to solve hard problems and train in minutes instead of hours. It’s pretty cool what new possibilities are coming out,” he said.

Among them, Jetson Xavier’s compact design has enabled Blue River to move away from using PCs equipped with GPUs on board tractors. John Deere has ruggedized the Jetson Xavier modules, which offer some protection from the heat and dust of farms.

Business and Environment

Herbicides are expensive. A farmer spending a quarter-million dollars a year on herbicides was able to reduce that expense by 80 percent, Heraud said.

Blue River’s See & Spray can take the place of conventional or aerial spraying of herbicides, which blankets entire crops with chemicals, a practice most countries are trying to reduce.

See & Spray can reduce the world’s herbicide use by roughly 2.5 billion pounds, an 80 percent reduction, which could have huge environmental benefits.

“It’s a tremendous reduction in the amount of chemicals. I think it’s very aligned with what customers want,” said Heraud.


Image credit: Blue River

The post Goodwill Farming: Startup Harvests AI to Reduce Herbicides appeared first on The Official NVIDIA Blog.

Tesla Raises the Bar for Self-Driving Carmakers

In unveiling the specs of his new self-driving car computer at today’s Tesla Autonomy Day investor event, Elon Musk made several things very clear to the world.

First, Tesla is raising the bar for all other carmakers.

Second, Tesla’s self-driving cars will be powered by a computer based on two of its new AI chips, each equipped with a CPU, GPU, and deep-learning accelerators. The computer delivers 144 trillion operations per second (TOPS), enabling it to collect data from a range of surround cameras, radars and ultrasonics and power deep neural network algorithms.

Third, Tesla is working on a next-generation chip, which suggests 144 TOPS isn’t enough.

At NVIDIA, we have long believed in the vision Tesla reiterated today: self-driving cars require computers with extraordinary capabilities.

Which is exactly why we designed and built the NVIDIA Xavier SoC several years ago. The Xavier processor features a programmable CPU, GPU and deep learning accelerators, delivering 30 TOPS. We built a computer called DRIVE AGX Pegasus based on a two-chip solution, pairing Xavier with a powerful GPU to deliver 160 TOPS, and then put two sets of them on the computer to deliver a total of 320 TOPS.
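For readers keeping score, the TOPS figures in the paragraphs above can be tallied in a few lines. Note that the per-GPU and per-FSD-chip contributions below are implied by the stated totals rather than quoted directly:

```python
# Rough TOPS bookkeeping for the two computers described in this post.
# Stated figures: Xavier = 30 TOPS; a Xavier + discrete GPU pair = 160 TOPS;
# Pegasus carries two pairs; Tesla's FSD computer totals 144 TOPS over two chips.

xavier_tops = 30
pair_tops = 160                        # Xavier + discrete GPU, per the article
gpu_tops = pair_tops - xavier_tops     # implies ~130 TOPS from the discrete GPU

pegasus_total = 2 * pair_tops          # two Xavier+GPU pairs on one computer
tesla_fsd_total = 144                  # stated total for Tesla's two-chip FSD
tesla_per_chip = tesla_fsd_total // 2  # implies ~72 TOPS per FSD chip

print(pegasus_total, tesla_fsd_total, gpu_tops, tesla_per_chip)
```

The comparison the post draws, 320 TOPS versus 144 TOPS, falls out directly from these totals.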

And as we announced a year ago, we’re not sitting still. Our next-generation processor Orin is coming.

That’s why NVIDIA is the standard Musk compares Tesla to—we’re the only other company framing this problem in terms of trillions of operations per second, or TOPS.

But while we agree with him on the big picture—that this is a challenge that can only be tackled with supercomputer-class systems—there are a few inaccuracies in Tesla’s Autonomy Day presentation that we need to correct.

It’s not useful to compare the performance of Tesla’s two-chip Full Self Driving computer against NVIDIA’s single-chip driver assistance system. Tesla’s two-chip FSD computer at 144 TOPS would compare against the NVIDIA DRIVE AGX Pegasus computer, which runs at 320 TOPS for AI perception, localization and path planning.

Additionally, while Xavier delivers 30 TOPS of processing, Tesla erroneously stated that it delivers 21 TOPS. Moreover, a system with a single Xavier processor is designed for assisted driving AutoPilot features, not full self-driving. Self-driving, as Tesla asserts, requires a good deal more compute.

Tesla, however, has the most important issue fully right: Self-driving cars—which are key to new levels of safety, efficiency, and convenience—are the future of the industry. And they require massive amounts of computing performance.

Indeed Tesla sees this approach as so important to the industry’s future that it’s building its future around it. This is the way forward. Every other automaker will need to deliver this level of performance.

There are only two places where you can get that AI computing horsepower: NVIDIA and Tesla.

And only one of these is an open platform that’s available for the industry to build on.

The post Tesla Raises the Bar for Self-Driving Carmakers appeared first on The Official NVIDIA Blog.

Israeli Startup Putting the Squeeze on Citrus Disease with AI

The multibillion-dollar citrus industry is getting squeezed.

The disease known as “citrus greening” is causing sour fruit around the world. Damage to Florida’s citrus crops has cost billions of dollars and thousands of jobs, according to the University of Florida. In the past few years, the disease has moved into California.

SeeTree, an AI startup based in Tel Aviv, is helping farmers step up crop defenses.

The startup’s GPU-driven tree analytics platform relies on image recognition algorithms, sensors, drones and an app for collecting data on the ground. Its platform helps farmers pinpoint affected trees for removal to slow the spread of the orchard disease.

“In permanent crops such as trees, if you make a mistake you will suffer for years,” said Ori Shachar, SeeTree’s head of science and AI. “Florida has lost 75 percent of its crops from citrus greening.”

SeeTree works with orchards hit by the Asian citrus psyllid, an insect that spreads the disease causing patchy leaves and green fruit.

Citrus greening is irreversible, so farmers need to move quickly to replace affected trees and blunt the disease’s advance through their orchards.

Cultivating Precision Agriculture

SeeTree’s citrus greening containment effort is just one aspect of its business. The company’s analytics platform enables customers to track the performance of their farms, as well as get the best results from their use of fertilizer, pesticides, water and labor.

The startup is among a growing field of companies focused on precision agriculture. These companies apply deep learning to agricultural data and run on NVIDIA GPUs to yield visual analytics for farm optimization.

SeeTree uses the NVIDIA Jetson TX2 to process images and CUDA as the interface for cameras at orchards. The TX2 enables it to do fruit detection in orchards as well as provide farms with a yield estimation tool.

“The result is a fairly accurate estimation of the amount of fruit per tree,” Shachar said. “This offers intelligent farming and planning for the farmer.”
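SeeTree hasn’t published its pipeline, but the aggregation step behind a per-tree fruit estimate can be sketched roughly as follows. The tree IDs, detection counts and calibration factor below are all hypothetical, chosen only to illustrate the idea of smoothing per-frame counts and correcting for fruit the cameras can’t see:

```python
# Illustrative sketch of per-tree yield estimation from fruit detections.
# Not SeeTree's actual algorithm; all numbers here are made up.

from collections import defaultdict
from statistics import mean

# (tree_id, fruit_count_in_frame) pairs, e.g. from an on-device detector
# that counts fruit in each camera frame.
detections = [
    ("tree_01", 42), ("tree_01", 47), ("tree_01", 44),
    ("tree_02", 18), ("tree_02", 21),
]

per_tree = defaultdict(list)
for tree_id, count in detections:
    per_tree[tree_id].append(count)

# Average over frames to smooth occlusion and viewpoint noise, then apply
# a calibration factor for fruit hidden from the cameras (assumed 1.6x).
HIDDEN_FRUIT_FACTOR = 1.6
estimates = {t: mean(c) * HIDDEN_FRUIT_FACTOR for t, c in per_tree.items()}

for tree, est in sorted(estimates.items()):
    print(f"{tree}: ~{est:.0f} fruit")
```

Averaging across frames is what makes the estimate “fairly accurate” in practice: any single frame undercounts or double-counts, but many viewpoints of the same tree converge on a stable number.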

The startup taps NVIDIA GPUs on Google Cloud Platform to train its image recognition algorithms on thousands of images of fruit.

Optimized farms can reduce water and pesticide use as well as increase their yield, among other benefits, according to SeeTree.

“We’re introducing automation in orchards, and suddenly you can do stuff differently — it’s data-driven decisions on a large scale,” said Shachar, who was previously at Mobileye.

In addition to development in Israel, the startup is working in California and Brazil.

IoT for Agriculture

Drones are important. In Brazil, where SeeTree is helping battle citrus greening, workers are slowed by the high temperatures. SeeTree is able to do drone inspections from remote locations and capture in one hour what takes several weeks with a person on the ground.

“Drones are the workhorse of our activity. It allows us to get to every tree to get the information and get multiple resolutions,” said Shachar.

There is no known biological or chemical fix for the problem right now, and it’s not anticipated to be solved for at least five years, Shachar said.

For now, improved maintenance is the key. Farmers can use sensors to keep trees healthier. By better tracking the soil moisture levels and air temperature, farmers can adjust their irrigation to make sure root systems aren’t being over-watered.
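As a minimal illustration of that kind of sensor-driven maintenance, irrigation logic might compare recent soil-moisture readings against a target band before deciding whether to water. The thresholds and readings here are invented for the example, not SeeTree’s values:

```python
# Sketch of threshold-based irrigation from soil-moisture sensor data.
# Readings are volumetric soil-moisture fractions (0.0 to 1.0).

def irrigation_decision(moisture_readings, low=0.25, high=0.40):
    """Return an action based on the average of recent moisture readings."""
    avg = sum(moisture_readings) / len(moisture_readings)
    if avg < low:
        return "irrigate"   # soil drier than the target band
    if avg > high:
        return "hold"       # roots risk being over-watered
    return "maintain"       # within the target band; no change needed

print(irrigation_decision([0.18, 0.21, 0.20]))  # dry soil -> irrigate
print(irrigation_decision([0.45, 0.48, 0.44]))  # wet soil -> hold
```

A real deployment would also weigh air temperature and forecasts, but even this simple band check captures the core idea of keeping root systems from being over-watered.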

All of this data can be viewed as analytics on SeeTree’s platform for farmers.

Citrus greening in the U.S. has also hit Louisiana, Georgia, South Carolina, Texas and Hawaii. It’s in Mexico, Cuba and other regions of the world, as well.

Image credit: Hans Braxmeier, released under Creative Commons.

The post Israeli Startup Putting the Squeeze on Citrus Disease with AI appeared first on The Official NVIDIA Blog.