AI Nails It: Startup’s Drones Eye Construction Sites

Krishna Sudarshan was a Goldman Sachs managing director until his younger son’s obsession with drones became his own, attracting him to the flying machines’ data-heavy business potential.

Sudarshan quit Goldman in 2016, after a decade at the firm, to found Aspec Scire, which pairs drones with a cloud service that lets construction and engineering firms monitor their projects.

A colleague from the banking giant joined as head of engineering and former colleagues invested in the startup.

Business has taken flight.

Since its launch, Aspec Scire has landed work with construction management firms, engineering firms and owners and developers, including a large IT services company, Sudarshan said.

“The construction industry has very low levels of automation. This is an industry that’s desperately in need of increased efficiency,” he said.

On-Demand Drones

Aspec Scire licenses its service to construction management firms, general contractors, surveyors and drone operators who offer it to their customers.

It can replace a lot of old-fashioned grunt work and record-keeping.

Its drones-as-a-service cloud business allows managers to remotely monitor the progress of construction sites. Videos and photos taken by drones can build up files on the status of properties, providing a digital trail of documentation for fulfillment of so-called SLAs, or service-level agreements, according to the company.

It can also show whether substructural building elements — such as pilings and columns — are keeping to the blueprints. The service holds promise for heading off safety issues and saving construction firms time and money by catching mistakes early, before they become problems requiring major revisions.

Image recognition algorithms are also trained to spot hundreds of problems that could be dangerous or cause contractors major headaches — hot water lines next to gas lines, for example, or forgotten 50-amp outdoor plug-in receptacles for Tesla owners to charge from.

“We will be the ones that analyze data from construction sites to provide actionable insights to improve the efficiency of their operations,” he said.

AI Nails Construction

Sudarshan, who led technology for a division of Goldman, is like many who use NVIDIA GPUs to tap into fast processing for massive datasets. After studying the idea of drone data collection for construction, he put the two together.

He fits a banking industry adage: You can take the person out of Goldman, but you can’t take Goldman out of the person. “Goldman is so data driven at everything. And I can see this industry is not data driven, so I’m trying to see how we can make it more so,” he said.

Construction data is plentiful. Aspec Scire uses millions of images for training its image classification algorithms that apply to about 20,000 aspects of construction sites. Training data continues to grow as customers upload images from sites, he said.

Aspec Scire also provides trained models that run on the compact supercomputing power of the Jetson TX2 onboard DJI drones to quickly process images. It trains its algorithms on NVIDIA GPUs on Google Cloud Platform, including the NVIDIA V100 Tensor Core GPU.

“Without GPUs we wouldn’t be able to do some of the things that we’re doing,” Sudarshan said.

Aspec Scire is a member of NVIDIA Inception, a virtual accelerator program that helps startups get to market faster.

 

Image credit: Magnus Bäck, licensed under Creative Commons

The post AI Nails It: Startup’s Drones Eye Construction Sites appeared first on The Official NVIDIA Blog.

Plowing AI, Startup Retrofits Tractors with Autonomy

Colin Hurd, Mark Barglof and Quincy Milloy aren’t your conventional tech entrepreneurs. And that’s not just because their agriculture technology startup is based in Ames, Iowa.

Smart Ag is developing autonomy and robotics for tractors in a region more notable for its corn and soybeans than software or silicon.

Hurd, Barglof and Milloy, all Iowa natives, founded Smart Ag in 2015 and landed a total of $6 million in seed funding, $5 million of which came from Stine Seed Farm, an affiliate of Stine Seed Co. Other key investors included Ag Startup Engine, which backs Iowa State University startups.

The company is running widespread pilot tests of its tractor autonomy system and plans to commercialize its technology for row crops by 2020.

Smart Ag is a member of the NVIDIA Inception virtual accelerator, which provides marketing and technology support to AI startups.

A team of two dozen employees has been building its GPU-enabled autonomy software and robotic hardware system, which operates tractors that pull grain carts during harvest.

“We aspire to being a software company first, but we’ve had to make a lot of hardware in order to make a vehicle autonomous,” said Milloy.

Wheat from Chaff

Smart Ag primarily works today with traditional row crop (corn and soybean) producers and cereal grain (wheat) producers. During harvest, these farmers use a tractor to pull a grain cart in conjunction with the harvesting machine, or combine, which separates the wheat from the chaff or corn from the stalk. Once the combine’s storage bin is full, the tractor with the grain cart pulls alongside for the combine to unload into the cart.

That’s where autonomous tractors come in.

Farm labor is scarce. In California, 55 percent of farms surveyed said they had experienced labor shortages in 2017, according to a report from the California Farm Bureau Federation.

Smart Ag is developing its autonomous tractor to pull a grain cart, addressing the lack of drivers available for this job.

Harvest Tractor Autonomy

Farmers can retrofit a John Deere 8R Series tractor using the company’s AutoCart system. It provides controllers for steering, acceleration and braking, as well as cameras, radar and wireless connectivity. An NVIDIA Jetson Xavier powers its perception system, fusing Smart Ag’s custom agricultural object detection model with other sensor data to give the tractor awareness of its surroundings.

“The NVIDIA Jetson AGX Xavier has greatly increased our perception capabilities — from the ability to process more camera feeds to the fusion of additional sensors —  it has enabled the path to develop and rapidly deploy a robust safety system into the field,” Milloy said.

Customers can use mobile devices and a web browser to access the system to control tractors.

Smart Ag’s team gathered more than 1 million images to train the image recognition system on AWS, tapping into NVIDIA GPUs. The startup’s custom image recognition algorithms allow its autonomous tractor to avoid people and other objects in the field, find the combine for unloading, and return to a semi truck so the driverless grain cart can unload the grain for final transport to a grain storage facility.

Smart Ag has more than 12 pilot tests under its belt and uses those to gather more data to refine its algorithms. The company plans to expand its test base to roughly 20 systems operating during harvest in 2019 in preparation for its commercial launch in 2020.

“We’ve been training for the past year and a half. The system can get put out today in deployment, but we can always get higher accuracy,” Milloy said.

The post Plowing AI, Startup Retrofits Tractors with Autonomy appeared first on The Official NVIDIA Blog.

2019 Investor Meeting: Intel Previews Design Innovation; 10nm CPU Ships in June; 7nm Product in 2021

Dr. Murthy Renduchintala, Intel’s chief engineering officer and group president of the Technology, Systems Architecture and Client Group, speaks at the 2019 Intel Investor Meeting on Wednesday, May 8, 2019, in Santa Clara, California. (Credit: Intel Corporation)

Today, Wall Street analysts are gathered at Intel headquarters in Santa Clara for the company’s 2019 Investor Meeting, which features executive keynotes by Intel CEO Bob Swan and business unit leaders. At the meeting, Dr. Murthy Renduchintala, Intel’s chief engineering officer and group president of the Technology, Systems Architecture and Client Group, announced that Intel will start shipping its volume 10nm client processor in June and shared first details on the company’s 7nm process technology. Renduchintala said Intel has redefined its product innovation model for the data-centric era of computing, which “requires workload-optimized platforms and effortless customer and developer innovation.” He shared expected performance gains resulting from a combination of technical innovations across six pillars – process and packaging, architecture, memory, interconnect, security and software – giving insight into the design and engineering model steering the company’s product development.

More: Intel Investor Relations Website

“While process and CPU leadership remain fundamentally important, an extraordinary rate of innovation is required across a combination of foundational building blocks that also include architecture, memory, interconnect, security and software, to take full advantage of the opportunities created by the explosion of data,” Renduchintala said. “Only Intel has the R&D, talent, world-class portfolio of technologies and intellectual property to deliver leadership products across the breadth of architectures and workloads required to meet the demands of the expanding data-centric market.”

10nm Process Technology: Intel’s first volume 10nm processor, a mobile PC platform code-named “Ice Lake,” will begin shipping in June. The Ice Lake platform will take full advantage of 10nm along with architecture innovations. It is expected to deliver approximately 3 times faster wireless speeds, 2 times faster video transcode speeds, 2 times faster graphics performance, and 2.5 to 3 times faster artificial intelligence (AI) performance over previous generation products[1]. As announced, Ice Lake-based devices from Intel OEM partners will be on shelves for the 2019 holiday season. Intel also plans to launch multiple 10nm products across the portfolio through 2019 and 2020, including additional CPUs for client and server, the Intel® Agilex™ family of FPGAs, the Intel® Nervana™ NNP-I (AI inference processor), a general-purpose GPU and the “Snow Ridge” 5G-ready network system-on-chip (SOC).

Building on a model proven with 14nm that included optimizations in 14nm+ and 14nm++, the company will drive sustained process advancement between nodes and within a node, continuing to lead the scaling of process technology according to Moore’s Law. The company plans to effectively deliver performance and scaling at the beginning of a node, plus another performance improvement within the node through multiple intra-node optimizations within the technology generation.

7nm Status: Renduchintala provided a first update on Intel’s 7nm process technology, which will deliver 2 times scaling and is expected to provide an approximately 20 percent increase in performance per watt with a 4 times reduction in design rule complexity. It will mark the company’s first commercial use of extreme ultraviolet (EUV) lithography, a technology that will help drive scaling for multiple node generations.

The lead 7nm product is expected to be an Intel Xe architecture-based, general-purpose GPU for data center AI and high-performance computing. It will embody a heterogeneous approach to product construction using advanced packaging technology. On the heels of Intel’s first discrete GPU coming in 2020, the 7nm general purpose GPU is expected to launch in 2021.



Heterogeneous Integration for Data-Centric Era: Renduchintala previewed new chip designs that leverage advanced 2D and 3D packaging technology to integrate multiple intellectual property (IP) blocks, each on its own optimized process technology, into a single package. The heterogeneous approach allows new process technologies to be leveraged earlier by interconnecting multiple smaller chiplets, and larger platforms to be built with unprecedented levels of performance when compared to monolithic alternatives.

Renduchintala unveiled the performance gains that resulted from innovative development of the client platform code-named “Lakefield.” The approach is emblematic of the strategic shift in the company’s design and engineering model that underpins Intel’s future product roadmaps. To meet customer specifications, a breadth of technical innovations, including a hybrid CPU architecture and Foveros 3D packaging technology, were used to meet always-on, always-connected and form-factor requirements while simultaneously delivering on power and performance targets. Lakefield is projected to deliver approximately 10 times improvement in SOC standby power and 1.5 to 2 times improvement in active SOC power relative to 14nm predecessors, 2 times the graphics performance[2] and a 2 times reduction in printed-circuit-board (PCB) area, giving OEMs more flexibility for thin and light form factor designs.

Performance results are based on testing as of dates shown in configuration and may not reflect all publicly available security updates.  See configuration disclosure for details.  No product or component can be absolutely secure.  Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.   For more complete information visit www.intel.com/benchmarks.  

Optimization Notice: Intel’s compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. Check with your system manufacturer or retailer or learn more at intel.com.

[1] Ice Lake configuration disclosures:

Approximately 3x Ice Lake Wireless Speeds: 802.11ax 2×2 160MHz enables 2402Mbps maximum theoretical data rates, ~3X (2.8X) faster than standard 802.11ac 2×2 80MHz (867Mbps) as documented in IEEE 802.11 wireless standard specifications and require the use of similarly configured 802.11ax wireless network routers.

Approximately 2x Ice Lake Video Encode: Based on 4k HEVC to 4k HEVC transcode (8bit). Intel preproduction system, ICL 15w compared to WHL 15w.  

Approximately 2x Ice Lake Graphics Performance:  Workload: 3DMark11 v 1.0.132. Intel PreProduction ICL U4+2 15W Configuration (Assumptions):, Processor: Intel® Core™ i7 (ICL-U 4+2) PL1=15W TDP, 4C8T, Memory: 2x8GB LPDDR4-3733 2Rx8, Storage: Intel® 760p m.2 PCIe NVMe SSD with AHCI Microsoft driver, Display Resolution: 3840×2160 eDP Panel 12.5”, OS: Windows* 10 RS5-17763.316, Graphics driver: PROD-H-RELEASES_ICL-PV-2019-04-09-1006832. Vs config – Intel PreProduction WHL U4+2 15W Configuration (Measured), Processor: Intel® Core™ i7-8565U (WHL-U4+2) PL1=15W TDP, 4C8T, Turbo up to 4.6Ghz, Memory: 2x8GB DDR4-2400 2Rx8, Storage: Intel® 760p m.2 PCIe NVMe SSD with AHCI Microsoft driver, Display Resolution: 3840×2160 eDP Panel 12.5”, OS: Windows* 10 RS4-17134.112., Graphics driver: 100.6195

Approximately 2.5x-3x Ice Lake AI Performance: Workload: images per second using AIXPRT Community Preview 2 with Int8 precision on ResNet-50 and SSD-Mobilenet-v1 models. Intel preproduction system, ICL-U, PL1 15w, 4C/8T, Turbo TBD, Intel Gen11 Graphics, GFX driver preproduction, Memory 8GB LPDDR4X-3733, Storage Intel SSD Pro 760P 256GB, OS Microsoft Windows 10, RS5 Build 475, preprod bios. Vs. Config – HP spectre x360 13t 13-ap0038nr, Intel® Core™ i7-8565U, PL1 20w, 4C/8T, Turbo up to 4.6Ghz, Intel UHD Graphics 620, Gfx driver 26.20.100.6709, Memory 16GB DDR4-2400, Storage Intel SSD 760p 512GB, OS – Microsoft Windows 10 RS5 Build 475 Bios F.26.

[2] Lakefield configuration disclosures:

Approximately 10x Lakefield Standby SoC Power Improvement: Estimated or simulated as of April 2019 using Intel internal analysis or architecture simulation or modeling. Vs. Amber Lake.

Approximately 1.5x-2x Lakefield Active SoC Power Improvement:  Estimated or simulated as of April 2019 using Intel internal analysis or architecture simulation or modeling. Workload:  1080p video playback. Vs. Amber Lake next-gen product.

Approximately 2x Lakefield Graphics Performance: Estimated or simulated as of April 2019 using Intel internal analysis or architecture simulation or modeling. Workload:  GfxBENCH. LKF 5W & 7W Configuration (Assumptions):,Processor: LKF PL1=5W & 7W TDP, 5C5T, Memory: 2X4GB LPDDR4x – 4267MHz, Storage: Intel® 760p m.2 PCIe NVMe SSD; LKF Optimized Power configuration uses UFS, Display Resolution: 1920×1080 for Performance; 25×14 eDP 13.3” and 19×12 MIPI 8.0” for Power, OS: Windows* 10 RS5. Power policy set to AC/Balanced mode for all benchmarks except SYSmark 2014 SE which is measured in AC/BAPCo mode for Performance. Power policy set to DC/Balanced mode for power. All benchmarks run in Admin mode., Graphics driver: X.X Vs. Configuration Data: Intel® Core™ AML Y2+2 5W measurements: Processor: Intel® Core™ i7-8500Y processor, PL1=5.0W TDP, 2C4T, Turbo up to 4.2GHz/3.6GHz, Memory: 2x4GB LPDDR3-1866MHz, Storage: Intel® 760p m.2 PCIe NVMe SSD, Display Resolution: 1920×1080 for Performance; 25×14 eDP 13.3” for Power, OS: Windows 10 Build RS3 17134.112. SYSmark 2014 SE is measured in BAPCo power plan. Power policy set to DC/Balanced mode for power. All benchmarks run in Admin mode, Graphics driver: driver:whl.1006167-v2.

Forward-Looking Statements: Statements in this release that refer to future plans and expectations, including with respect to Intel’s future technologies and the expected benefits of such technologies, are forward-looking statements that involve a number of risks and uncertainties. Words such as “anticipates,” “expects,” “intends,” “goals,” “plans,” “believes,” “seeks,” “estimates,” “continues,” “may,” “will,” “would,” “should,” “could,” and variations of such words and similar expressions are intended to identify such forward-looking statements. Statements that refer to or are based on estimates, forecasts, projections, uncertain events or assumptions, including statements relating to total addressable market (TAM) or market opportunity, future products and the expected availability and benefits of such products, and anticipated trends in our businesses or the markets relevant to them, also identify forward-looking statements. Such statements are based on current expectations and involve many risks and uncertainties that could cause actual results to differ materially from those expressed or implied in these forward-looking statements. Important factors that could cause actual results to differ materially from the company’s expectations are set forth in Intel’s most recent earnings release dated April 25, 2019, which is included as an exhibit to Intel’s Form 8-K furnished to the SEC on such date. Additional information regarding these and other factors that could affect Intel’s results is included in Intel’s SEC filings, including the company’s most recent reports on Forms 10-K and 10-Q. Copies of Intel’s Form 10-K, 10-Q and 8-K reports may be obtained by visiting our Investor Relations website at www.intc.com or the SEC’s website at www.sec.gov.

The post 2019 Investor Meeting: Intel Previews Design Innovation; 10nm CPU Ships in June; 7nm Product in 2021 appeared first on Intel Newsroom.

Goodwill Farming: Startup Harvests AI to Reduce Herbicides

Jorge Heraud is an anomaly for a founder whose startup was recently acquired by a corporate giant: Instead of counting days to reap earn-outs, he’s sowing the company’s goodwill message.

That might have something to do with the mission. Blue River Technology, acquired by John Deere more than a year ago for $300 million, aims to reduce herbicide use in farms.

The effort has been a calling to like-minded talent in Silicon Valley who want to apply their technology know-how to more meaningful problems than the next hot app, said Heraud, who continues to serve as Blue River’s CEO.

“We’re using machine learning to make a positive impact on the world. We don’t see it as just a way of making a profit. It’s about solving problems that are worthy of solving — that attracts people to us,” he said.

Heraud and co-founder Lee Redden, who continues to serve as Blue River’s CTO, were attending Stanford University in 2011 when they decided to form the startup. Redden was pursuing graduate studies in computer vision and machine learning applied to robotics while Heraud was getting an executive MBA.

The duo’s work formed one of many early success stories of harnessing NVIDIA GPUs and computer vision to tackle complex industrial problems with big benefits to humanity.

“Growing food is one of the biggest and oldest industries — it doesn’t get bigger than that,” said Ryan Kottenstette, who invested in Blue River at Khosla Ventures.

Herbicide Spraying 2.0

As part of tractor giant John Deere, Blue River remains committed to herbicide reduction. The company is engaged in multiple pilots of its See & Spray smart agriculture technology.

Pulled behind tractors, its See & Spray machine is about 40 feet wide and covers 12 rows of crops. It has 30 mounted cameras that capture photos of plants every 50 milliseconds and process them through its 25 on-board Jetson AGX Xavier supercomputing modules.

As a tractor pulls the rig at about 7 miles per hour, according to Blue River, the Jetson Xavier modules running Blue River’s image recognition algorithms must decide whether each plant in the images fed from the 30 cameras is a weed or a crop quicker than the blink of an eye. That allows enough time for the See & Spray’s robotic sprayer — it features 200 precision sprayers — to zap each weed individually with herbicide.
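
Those figures set a tight real-time budget. The back-of-the-envelope calculation below is a sketch derived only from the speeds and intervals cited above (the variable names and unit conversion are mine); it shows how little ground the rig covers between captures and how many images the Jetson modules must classify each second:

    # Timing budget implied by the figures above: 7 mph, a photo every 50 ms, 30 cameras.
    MPH_TO_MPS = 1609.344 / 3600.0

    tractor_speed_mps = 7 * MPH_TO_MPS        # roughly 3.1 m/s
    frame_interval_s = 0.050                  # one photo per camera every 50 milliseconds
    cameras = 30

    # Ground covered between consecutive captures:
    travel_per_frame_m = tractor_speed_mps * frame_interval_s   # about 0.16 m

    # Aggregate number of images the 25 Jetson AGX Xavier modules must classify per second:
    images_per_second = cameras / frame_interval_s               # 600 images/s

    print(f"travel per frame: {travel_per_frame_m:.2f} m, inference load: {images_per_second:.0f} images/s")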

“We use Jetson to run inference on our machine learning algorithms and to decide on the fly if a plant is a crop or a weed, and spray only the weeds,” Heraud said.

GPUs Fertilize AgTech

Blue River has trained its convolutional neural networks on more than a million images and its See & Spray pilot machines keep feeding new data as they get used.

Capturing as many varieties of weeds as possible, in different stages of growth, is critical to training the neural nets, which are trained on a “server closet full of GPUs” as well as on hundreds of GPUs at AWS, said Heraud.

Using cloud GPU instances, Blue River has been able to train networks much faster. “We have been able to solve hard problems and train in minutes instead of hours. It’s pretty cool what new possibilities are coming out,” he said.

Among them, Jetson Xavier’s compact design has enabled Blue River to move away from using PCs equipped with GPUs on board tractors. John Deere has ruggedized the Jetson Xavier modules, giving them some protection from the heat and dust of farms.

Business and Environment

Herbicides are expensive. A farmer spending a quarter-million dollars a year on herbicides was able to reduce that expense by 80 percent, Heraud said.

Blue River’s See & Spray can take the place of conventional or aerial spraying of herbicides, which blankets entire crops with chemicals, something most countries are trying to reduce.

See & Spray can reduce the world’s herbicide use by roughly 2.5 billion pounds, an 80 percent reduction, which could have huge environmental benefits.

“It’s a tremendous reduction in the amount of chemicals. I think it’s very aligned with what customers want,” said Heraud.

 

Image credit: Blue River

The post Goodwill Farming: Startup Harvests AI to Reduce Herbicides appeared first on The Official NVIDIA Blog.

Tesla Raises the Bar for Self-Driving Carmakers

In unveiling the specs of his new self-driving car computer at today’s Tesla Autonomy Day investor event, Elon Musk made several things very clear to the world.

First, Tesla is raising the bar for all other carmakers.

Second, Tesla’s self-driving cars will be powered by a computer based on two of its new AI chips, each equipped with a CPU, GPU, and deep-learning accelerators. The computer delivers 144 trillion operations per second (TOPS), enabling it to collect data from a range of surround cameras, radars and ultrasonics and power deep neural network algorithms.

Third, Tesla is working on a next-generation chip, a sign that 144 TOPS isn’t enough.

At NVIDIA, we have long believed in the vision Tesla reiterated today: self-driving cars require computers with extraordinary capabilities.

Which is exactly why we designed and built the NVIDIA Xavier SoC several years ago. The Xavier processor features a programmable CPU, GPU and deep learning accelerators, delivering 30 TOPS. We built a computer called DRIVE AGX Pegasus based on a two-chip solution, pairing Xavier with a powerful GPU to deliver 160 TOPS, and then put two sets of them on the computer to deliver a total of 320 TOPS.
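
For readers keeping score, the arithmetic behind those totals is simple. The per-GPU figure below is implied rather than stated (160 TOPS per Xavier-plus-GPU pair minus Xavier’s 30 TOPS), so treat this as a sketch of the math, not a spec sheet:

    xavier_tops = 30                      # one Xavier SoC
    pair_tops = 160                       # one Xavier paired with a discrete GPU
    gpu_tops = pair_tops - xavier_tops    # about 130 TOPS implied for the GPU (not an official figure)

    pegasus_tops = 2 * pair_tops          # DRIVE AGX Pegasus carries two such pairs: 320 TOPS
    tesla_fsd_tops = 144                  # Tesla's two-chip FSD computer, as presented

    print(pegasus_tops / tesla_fsd_tops)  # roughly 2.2x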

And as we announced a year ago, we’re not sitting still. Our next-generation processor Orin is coming.

That’s why NVIDIA is the standard Musk compares Tesla to—we’re the only other company framing this problem in terms of trillions of operations per second, or TOPS.

But while we agree with him on the big picture—that this is a challenge that can only be tackled with supercomputer-class systems—there are a few inaccuracies in Tesla’s Autonomy Day presentation that we need to correct.

It’s not useful to compare the performance of Tesla’s two-chip Full Self Driving computer against NVIDIA’s single-chip driver assistance system. Tesla’s two-chip FSD computer at 144 TOPS would compare against the NVIDIA DRIVE AGX Pegasus computer, which runs at 320 TOPS for AI perception, localization and path planning.

Additionally, while Xavier delivers 30 TOPS of processing, Tesla erroneously stated that it delivers 21 TOPS. Moreover, a system with a single Xavier processor is designed for assisted driving AutoPilot features, not full self-driving. Self-driving, as Tesla asserts, requires a good deal more compute.

Tesla, however, has the most important issue fully right: Self-driving cars—which are key to new levels of safety, efficiency, and convenience—are the future of the industry. And they require massive amounts of computing performance.

Indeed, Tesla sees this approach as so important that it’s building its future around it. This is the way forward. Every other automaker will need to deliver this level of performance.

There are only two places where you can get that AI computing horsepower: NVIDIA and Tesla.

And only one of these is an open platform that’s available for the industry to build on.

The post Tesla Raises the Bar for Self-Driving Carmakers appeared first on The Official NVIDIA Blog.

Israeli Startup Putting the Squeeze on Citrus Disease with AI

The multibillion-dollar citrus industry is getting squeezed.

The disease known as “citrus greening” is causing sour fruit around the world. Damage to Florida’s citrus crops has cost billions of dollars and thousands of jobs, according to the University of Florida. In the past few years, the disease has moved into California.

SeeTree, an AI startup based in Tel Aviv, is helping farmers step up crop defenses.

The startup’s GPU-driven tree analytics platform relies on image recognition algorithms, sensors, drones and an app for collecting data on the ground. Its platform helps farmers pinpoint affected trees for removal to slow the spread of the orchard disease.

“In permanent crops such as trees, if you make a mistake you will suffer for years,” said Ori Shachar, SeeTree’s head of science and AI. “Florida has lost 75 percent of its crops from citrus greening.”

SeeTree works with orchards hit by the Asian citrus psyllid, an insect that spreads the disease causing patchy leaves and green fruit.

Citrus greening is an irreversible condition. Farmers need to move quickly to replace affected trees to blunt the advance of the disease throughout their orchards.

Cultivating Precision Agriculture

SeeTree’s citrus greening containment effort is just one aspect of its business. The company’s analytics platform enables customers to track the performance of their farms, as well as get the best results from their use of fertilizer, pesticides, water and labor.

The startup is among a growing field of companies focused on precision agriculture. These companies apply deep learning to agricultural data, running on NVIDIA GPUs, to yield visual analytics for farm optimization.

SeeTree uses the NVIDIA Jetson TX2 to process images and CUDA as the interface for cameras at orchards. The TX2 enables it to do fruit detection for orchards as well as provide farms with a yield estimation tool.

“The result is a fairly accurate estimation of the amount of fruit per tree,” Shachar said. “This offers intelligent farming and planning for the farmer.”

The startup taps NVIDIA GPUs on Google Cloud Platform to train its image recognition algorithms on thousands of images of fruit.

Optimized farms can reduce water and pesticide use as well as increase their yield, among other benefits, according to SeeTree.

“We’re introducing automation in orchards, and suddenly you can do stuff differently — it’s data-driven decisions on a large scale,” said Shachar, who was previously at Mobileye.

In addition to development in Israel, the startup is working in California and Brazil.

IoT for Agriculture

Drones are important. In Brazil, where SeeTree is helping battle citrus greening, workers are slowed by the high temperatures. SeeTree is able to do drone inspections from remote locations and capture in one hour what takes several weeks with a person on the ground.

“Drones are the workhorse of our activity. It allows us to get to every tree to get the information and get multiple resolutions,” said Shachar.

There is no known biological or chemical fix for the problem right now, and it’s not anticipated to be solved for at least five years, Shachar said.

For now, improved maintenance is the key. Farmers can use sensors to keep trees healthier. By better tracking the soil moisture levels and air temperature, farmers can adjust their irrigation to make sure root systems aren’t being over-watered.

All of this data can be viewed as analytics on SeeTree’s platform for farmers.

Citrus greening in the U.S. has also hit Louisiana, Georgia, South Carolina, Texas and Hawaii. It’s in Mexico, Cuba and other regions of the world, as well.

Image credit: Hans Braxmeier, released under Creative Commons.

The post Israeli Startup Putting the Squeeze on Citrus Disease with AI appeared first on The Official NVIDIA Blog.

Dig In: Startup Tills Satellite Data to Harvest Farm AI

The farm-to-fork movement is getting a taste of AI.

Startup OneSoil cultivates AI to help farmers boost their bounty. The company offers a GPU-enabled platform that turns satellite data into farm analytics for soil and crop conditions.

The Belarus-based company interprets satellite feeds to show how plants reflect different light waves, and it rates the state of plant growth for plots of land based on this information.

OneSoil’s free platform displays how areas of land measure up on the standard known as the NDVI (Normalized Difference Vegetation Index). Farmers can use this vegetation score to spot unhealthy crop areas that need inspection and to plan watering needs and the application of fertilizers.
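
NDVI itself is just a ratio of how strongly a pixel reflects near-infrared versus red light, with values near 1 indicating vigorous vegetation and values near 0 indicating bare soil or stressed plants. The snippet below is a generic illustration of the index, not OneSoil’s pipeline; it assumes Sentinel-2-style reflectance bands, where B8 is near-infrared and B4 is red:

    import numpy as np

    def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
        """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red), in [-1, 1]."""
        nir = nir.astype(np.float32)
        red = red.astype(np.float32)
        return (nir - red) / np.maximum(nir + red, 1e-6)  # guard against division by zero

    # Toy reflectance values: healthy vegetation reflects near-infrared strongly and absorbs red.
    nir_band = np.array([[0.50, 0.45], [0.20, 0.10]])
    red_band = np.array([[0.08, 0.10], [0.15, 0.09]])
    print(ndvi(nir_band, red_band))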

The field monitoring platform is available as an Android app and on the web.

OneSoil has developed its platform to cover North America, most of western Europe and some of central Europe.  It aims to have coverage of the entire world by year’s end. The satellite data visualizations are updated every three to five days.

Satellite to Sprouts

OneSoil taps into free satellite data from the European Union’s Copernicus Earth observation program. The company manually marked out boundaries on nearly 400,000 fields as training data for its convolutional neural networks. Now its algorithms can automatically create field boundaries from the satellite data.

It processed about 50 terabytes of Sentinel 2 satellite data using NVIDIA GPUs in Microsoft Azure to build out its boundaries of land for the map spanning much of the world.

“With Sentinel images, we need a lot of processing power to analyze those,” said Clement Matyuhov, director of business development at OneSoil.

OneSoil can automatically detect more than 20 different crop types.

Dig the Sensors

OneSoil has developed sensors to work with its platform. Customers can dig a hole and stick in one of its battery-powered sensors, which packs a SIM card to start sending data.

The sensors measure air humidity, soil moisture, the temperature of air and soil, and the level of light intensity for the nearby area.

The company has also developed a modem that can transfer data between agricultural equipment and the OneSoil platform over a mobile network.

OneSoil users can enter data, as well. They can make such entries as date of harvest, crop type, average yield, field boundaries and files documenting field work. They can use the app, which tracks location and provides field data, to go examine areas.

Prescriptive Agriculture

On the analytics side, OneSoil Maps makes it easy for farmers to make adjustments on their land. The maps provide a productivity rating of low, medium or high for different areas of the land.

“We can say there is a low productivity zone there, so go check it out. Within one field, the productivity can vary dramatically,” said Matyuhov.

Farmers can use the maps for the vegetation on their land to create prescription maps for fertilizer. These prescription maps, downloadable as a file from app.onesoil.ai, can be uploaded into compatible tractors from John Deere and steering systems from Trimble, allowing tractors to go to the specific GPS coordinates and treat the area as prescribed.

“It’s really an expert assessment for the farmer. The results for the yield can be substantial,” he said.

Image credit: Corn harvest with an IHC International combine harvester, Jones County, Iowa, U.S., by Bill Whittaker, under Creative Commons license.

The post Dig In: Startup Tills Satellite Data to Harvest Farm AI appeared first on The Official NVIDIA Blog.

NVIDIA CEO Ties AI-Driven Medical Advances to Data-Driven Leaps in Every Industry

Radiology. Autonomous vehicles. Supercomputing. The changes sweeping through all these fields are closely related. Just ask NVIDIA CEO Jensen Huang.

Speaking in Boston at the World Medical Innovation Forum to more than 1,800 of the world’s top medical professionals, Huang tied Monday’s news — that NVIDIA is collaborating with the American College of Radiology to bring AI to thousands of hospitals and imaging centers — to the changes sweeping through fields as diverse as autonomous vehicles and scientific research.

In a conversation with Keith Dreyer, vice chairman of radiology at Massachusetts General Hospital, Huang asserted that data science — driven by a torrent of data, new algorithms and advances in computing power — is becoming a fourth pillar of scientific discovery, alongside theoretical work, experimentation and simulation.

Putting data science to work, however, will require enterprises of all kinds to learn how to handle data in new ways. In the case of radiology, the privacy of the data is too important to send it off-site, and the expertise is local, Huang told the audience. “You want to put computing at the edge,” he said.

As a result, the collaboration between NVIDIA and the American College of Radiology promises to enable thousands of radiologists nationwide to use AI for diagnostic radiology in their own facilities, using their own data, to meet their own clinical needs.

Huang began the conversation by noting that the Turing Award, “the Nobel Prize of computing,” had just been given to the three researchers who kicked off today’s AI boom: Yoshua Bengio, Geoffrey Hinton and Yann LeCun.

“The takeaway from that is that this is probably not a fad, that deep learning and this data-driven approach where software and the computer is writing software by itself, that this form of AI is going to have a profound impact,” Huang said.

Huang drew parallels between radiology and other industries putting AI to work, such as automotive, where he sees an enormous need for computing power in autonomous vehicles that can put multiple intelligences to work, in real time, as they travel through the world.

Similarly, in medicine, putting one — or more — AI models to work will only enhance the capabilities of the humans guiding these models.

These models can also guide those doing cutting-edge work at the frontiers of science, Huang said, citing Monday’s announcement that the Accelerating Therapeutics for Opportunities in Medicine, or ATOM, consortium will collaborate with NVIDIA to scale ATOM’s AI-driven drug discovery program.

The big idea: to pair data science with more traditional scientific methods, using neural networks to help “filter” through the large space of possible molecules to decide which ones to simulate to find candidates for in vitro testing, Huang explained.

Software Is Automation, AI Is the Automation of Automation

Huang sees such techniques being used in all fields of human endeavor — from science to front-line healthcare and even to running a technology company. As part of that process, NVIDIA has built one of the world’s largest supercomputers, SATURNV, to support its own efforts to train AI models with a broad array of capabilities. “We use this for designing chips, for improving our systems, for computer graphics,” Huang said.

Such techniques promise to revolutionize every field of human endeavor, Huang said, asserting that AI is “software that writes software,” and that software’s “fundamental purpose is automation.”

“AI therefore is the automation of automation,” Huang said. “And if we can harness the automation of automation, imagine what good we could do.”

 

 

The post NVIDIA CEO Ties AI-Driven Medical Advances to Data-Driven Leaps in Every Industry appeared first on The Official NVIDIA Blog.

KPF Pushes Limits of Building-design Rendering Using NVIDIA RTX

After meeting industrial designer Cobus Bothma, you’d be forgiven for assuming he works in the gaming industry or Hollywood.

He’s a big proponent of VR, AR and GPU computing. But Bothma is the director of applied research at Kohn Pedersen Fox, a New York-based global architecture firm whose work includes the city’s Museum of Modern Art.

Bothma discussed KPF’s latest GPU-powered rendering projects and use of NVIDIA Holodeck, our virtual reality collaboration environment, at the recent GPU Technology Conference.

Rendering with RTX

To create computer-generated images for architectural designs, KPF is exploring real-time ray-traced rendering on workstations powered by the NVIDIA Quadro RTX 6000 GPU.

Bothma showed the GTC audience a rendering that had been generated using a test version of the nascent Project Lavina rendering software, which was designed to harness the dedicated ray tracing silicon of NVIDIA Turing GPUs.

It was a detailed scene that had been rendered in real time on a single RTX 6000, showing a KPF-designed building along with a landscaped park whose flowers, grass and trees gained depth of appearance from the shading.

The software was developed by Chaos Group, an early adopter of NVIDIA’s Turing architecture-based GPUs. KPF’s rendering work with Project Lavina was an effort to test the limits of hardware and software, rendering 9 billion instanced triangles and 19 million unique triangles in real time.

Design Iterations

GPUs are accelerating design iterations at KPF. Architectural projects come with an extensive list of requirements from clients, city codes, local governments and others. For example, the client might require a certain amount of green space and sunlight for a space. A project might have specific high-density requirements such as tall buildings or low-density configurations that allow more open space.

In the past, that required a designer to manually model, analyze and adjust work — and then continue to adjust and analyze to find the best solution. This was labor- and time-intensive, and would typically require multiple iterations.

With KPF’s workflow, when designers leave work for the night, they can run the project parameters and let the workstations not only model, analyze and iterate designs, but also produce the rendered results in real time.

This can produce thousands of iterations rendered overnight on GPUs to match the brief’s requirements, and it allows the architects to filter the best solutions based on parameters and results to progress the early stages of the design.

Holodeck Teamwork

KPF uses NVIDIA Holodeck for photorealistic VR collaboration and internal communications on projects. This allows team members to review multiple design options before presenting the best options to clients.

The Holodeck VR environment includes sensations of real-world presence through sight and sound. It enables remote team members to “walk around and through” architectural models and communicate and collaborate in VR sessions with other people in a much more interactive fashion than traditional video conferencing.

“Holodeck allows us to look at designs in detail,” Bothma said. “We can move components apart and understand the structure underneath it. It will transform the way we collaborate on international projects over the next years.”

The post KPF Pushes Limits of Building-design Rendering Using NVIDIA RTX appeared first on The Official NVIDIA Blog.

Betting on Monte Carlo: GPUs a ‘Game Changer’ for Nuking Noise in Nuclear Imaging

Andras Wirth is like many early AI researchers: His deep learning ambitions only turned into reality because of a sea change in technology.

A physicist, Wirth wanted to run Monte Carlo algorithms to make leaping advances in nuclear imaging, something that was previously computationally impossible without massive supercomputers.

A decade ago, his breakthrough came when his lab began using GPUs and the first CUDA release on the computationally demanding algorithms.

On Thursday at the GPU Technology Conference in Silicon Valley, Wirth, who leads nuclear imaging at Mediso Medical, spoke about his company’s groundbreaking work.

Wirth’s team of CUDA programmers runs Monte Carlo method transport calculations on GPUs to enhance image quality. This helps to eliminate the usual degenerating effects that come from inaccuracies in physical modeling.

Monte Carlo transport methods rely on modeling the physical processes that contribute to acquiring the image of a patient. For maximum precision, the modeling consists of simulating billions of photon tracks. These photon tracks are random by nature, thus the simulation itself has to be random —  just like the games in the city of Monte Carlo.
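
A toy version of the idea: a photon’s free path length through material follows an exponential distribution set by the attenuation coefficient, so each simulated track begins by drawing a random path length. The sketch below is a minimal illustration of Monte Carlo sampling, not Mediso’s code; the 0.15-per-centimeter coefficient is a typical soft-tissue value used purely for illustration:

    import numpy as np

    rng = np.random.default_rng(0)

    mu = 0.15          # linear attenuation coefficient, 1/cm (illustrative soft-tissue value)
    thickness = 10.0   # centimeters of material between source and detector
    n_photons = 1_000_000

    # Inverse-transform sampling: free path lengths are exponentially distributed with mean 1/mu.
    path_lengths = rng.exponential(scale=1.0 / mu, size=n_photons)
    transmitted = np.count_nonzero(path_lengths > thickness)

    mc_estimate = transmitted / n_photons
    analytic = np.exp(-mu * thickness)   # Beer-Lambert law, for comparison
    print(f"Monte Carlo: {mc_estimate:.4f}  analytic: {analytic:.4f}")

The estimate converges on the analytic answer only as more tracks are simulated, which is why realistic physical modeling takes billions of tracks and the parallel throughput of GPUs.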

Besides improving the image quality of scans, the main issue for nuclear medicine is the need to lower the dose of injected radioactive isotopes without impairing the diagnostic value of the acquired images. Neural networks help cope with the increased noise level while maintaining the useful information, with a performance that is unrivaled by conventional methods.

The lowered dosages are a boon to patients and the facilities that administer the radioactive substances, and the GPU-accelerated technique behind it holds great promise across the field.

“This is a complete game changer — it can have an effect on every type of nuclear medical procedure,” Wirth said.

Los Alamos to Budapest

The Monte Carlo method dates back to research at the Manhattan Project in the 1940s. But it wasn’t until recently that researchers and engineers applied GPUs to the computationally demanding algorithms.

Wirth’s work with GPUs on Monte Carlo methods has added to the capabilities of Budapest-based Mediso’s software, used in its cameras for SPECT scans. SPECT (single-photon emission computerized tomography) scans rely on radioisotopes that are injected into the bloodstream of patients. Clinicians then use specialized cameras to capture 3D images of organs.

Mediso trained its U-Net convolutional neural network architecture on 1,000 images of bone scans. U-Nets are used in medical imaging to bolster image segmentation so that different areas of detail can be outlined.
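
For readers unfamiliar with the architecture, a U-Net pairs a downsampling encoder with an upsampling decoder and copies feature maps across matching levels through skip connections, which is what preserves fine detail during segmentation or denoising. The sketch below is a deliberately tiny two-level version written in PyTorch; the article doesn’t name Mediso’s framework or layer sizes, so every dimension here is illustrative:

    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    class TinyUNet(nn.Module):
        """Two-level U-Net: encoder, bottleneck and decoder joined by skip connections."""
        def __init__(self, in_ch=1, out_ch=1):
            super().__init__()
            self.enc1 = conv_block(in_ch, 16)
            self.enc2 = conv_block(16, 32)
            self.pool = nn.MaxPool2d(2)
            self.bottleneck = conv_block(32, 64)
            self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
            self.dec2 = conv_block(64, 32)    # 32 skip channels + 32 upsampled channels
            self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
            self.dec1 = conv_block(32, 16)    # 16 skip channels + 16 upsampled channels
            self.head = nn.Conv2d(16, out_ch, 1)

        def forward(self, x):
            e1 = self.enc1(x)                    # full-resolution features
            e2 = self.enc2(self.pool(e1))        # half resolution
            b = self.bottleneck(self.pool(e2))   # quarter resolution
            d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection from e2
            d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection from e1
            return self.head(d1)

    # A noisy 128x128 single-channel scan in, an image of the same size out.
    net = TinyUNet()
    print(net(torch.randn(1, 1, 128, 128)).shape)  # torch.Size([1, 1, 128, 128])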

It took a lot of computing power to do these types of calculations, Wirth said. “Traditionally, only supercomputers were able to do these type of calculations,” he said. “Until GPUs appeared for general computing, it didn’t even make sense to try out Monte Carlo particle transport calculations in medical imaging.”

GPUs Lower Dose

Radioisotopes administered in medical imaging are low-level carcinogens for patients, expensive for imaging facilities to obtain and require special handling.

“Nobody likes to have nuclear isotopes in their body. That’s why we want to minimize the dose injected to the body — there are risks,” said Wirth.

However, when you lower a radioisotope dose, the resulting images become noisier and harder to decipher, and blurring can make it difficult to spot lesions in bones.

Mediso used its neural network solutions running on GPUs to help to minimize that imaging “noise” while reducing the radioisotope dose administered to patients by one-eighth.

“It’s hard to imagine developing neural network-based products without the help of GPUs nowadays. It doesn’t stop there, however: since processing time is crucial in medical imaging, GPU technology has become a vital element of imaging products,” Wirth said.

The post Betting on Monte Carlo: GPUs a ‘Game Changer’ for Nuking Noise in Nuclear Imaging appeared first on The Official NVIDIA Blog.