Juicing AI: University of Florida Taps Computer Vision to Combat Citrus Disease

Florida orange juice is getting a taste of AI.

With the Sunshine State’s $9 billion annual citrus crops plagued by a fruit-souring disease, researchers and businesses are tapping AI to help rescue the nation’s largest producer of orange juice.

University of Florida researchers are developing AI applications for agriculture. And the technology — computer vision for smart sprayers — is now being licensed and deployed in pilot tests by CCI, an agricultural equipment company.

The efforts promise to help farmers combat what’s known as “citrus greening,” a disease caused by bacteria that the Asian citrus psyllid insect is spreading to farms worldwide.

Citrus greening causes patchy leaves and green fruit and can quickly decimate orchards.

The agricultural equipment supplier has seen farmers lose one-third of Florida’s orchard acreage to the onslaught of citrus greening.

“It’s having a huge impact on the state of Florida, California, Brazil, China, Mexico — the entire world is battling a citrus crisis,” said Yiannis Ampatzidis, assistant professor at UF’s Department of Agricultural and Biological Engineering.

Fertilizing Precision Agriculture

Ampatzidis works with a team of researchers focused on automation in agriculture. They develop AI applications to forecast crop yields and reduce pesticide use. The team’s image recognition models are run on the Jetson AI platform in the field for inference.

“The goal is to use Jetson Xavier to detect the size of the tree and the leaf density to instantly optimize the flow of the nozzles on sprayers for farming,” said Ampatzidis. “It also allows us to count fruit density, predict yield, and study water usage and pH levels.”

The growing popularity of organic produce and the adoption of more sustainable farming practices have drawn a field of startups plowing AI for benefits to businesses and the planet. John Deere-owned Blue River, FarmWise, SeeTree and Smart Ag are just some of the agriculture companies adopting NVIDIA GPUs for training and inference.

Like many, UF and CCI are developing applications for deployment on the NVIDIA Jetson edge AI platform. And UF has wider ambitions for fostering AI development that benefits the state.

Last July, UF and NVIDIA hatched plans to build one of the world’s fastest AI supercomputers in academia, delivering 700 petaflops of processing power. Built with NVIDIA DGX systems and NVIDIA Mellanox networking, HiPerGator AI is now online to power UF’s precision agriculture research.  The new supercomputer was made possible by a $25 million donation from alumnus and NVIDIA founder Chris Malachowsky and $25 million in hardware, software, training and services from NVIDIA.

UF is a member of the NVIDIA Applied Research Accelerator Program, which supports applied research in coordination with businesses relying on NVIDIA platforms for GPU-accelerated application deployments.

Deploying Robotic Sprayers

Citrus greening has required farmers to act quickly, removing diseased trees to prevent the disease’s spread. Many orchards now have gaps in their rows of trees. As a result, conventional sprayers that apply agrochemicals uniformly along entire rows will often overspray, wasting resources and creating unnecessary environmental contamination.

UF researchers developed a sensor system of lidar and cameras for sprayers used in orchards. These sensors feed into the NVIDIA Jetson AGX Xavier, which performs split-second inference on whether the sprayer is facing a tree to spray, enabling autonomous spraying.

The system can adjust in real time to turn the application of crop protection products or fertilizers on or off, as well as adjust the amount sprayed based on the plant’s size, said Keith Hollingsworth, a CCI sales specialist.

“It cuts down on overspray and on wasted material that ultimately gets washed into the groundwater. We can also predict yield based on the oranges we see on the tree,” said Hollingsworth.
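Hollingsworth’s description amounts to a simple per-frame control loop: detect whether a tree is present, estimate its canopy size, and set nozzle flow accordingly. Here is a minimal Python sketch of that idea, assuming hypothetical detection and flow functions; the heuristics and numbers are illustrative only, not UF’s or CCI’s actual code.

```python
import numpy as np

def detect_tree(frame: np.ndarray) -> tuple[bool, float]:
    """Stand-in for the Jetson-hosted vision model: returns
    (tree_present, canopy_fraction in [0, 1])."""
    green = frame[..., 1].astype(float) / 255.0  # green channel as a crude canopy proxy
    canopy_fraction = float((green > 0.5).mean())
    return canopy_fraction > 0.05, canopy_fraction

def flow_rate(canopy_fraction: float, max_lpm: float = 4.0) -> float:
    """Scale spray flow (liters per minute) with detected canopy size."""
    return max_lpm * canopy_fraction

def control_step(frame: np.ndarray) -> float:
    tree, canopy = detect_tree(frame)
    return flow_rate(canopy) if tree else 0.0  # a gap in the row gets no spray

# A dark frame (no canopy) should shut the nozzles off entirely.
gap_frame = np.zeros((480, 640, 3), dtype=np.uint8)
assert control_step(gap_frame) == 0.0
```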

Commercializing AgTech AI

CCI began working with UF eight years ago. In the past couple of years, the company has been working with the university to upgrade its infrared laser-based spraying system to one with AI.

And customers are coming to CCI for novel ways to attack the problem, said Hollingsworth.

Working with NVIDIA’s Applied Research Accelerator Program, CCI has received technical guidance on Jetson Xavier that has sped its development.

Citrus industry veteran Hollingsworth says AI is a useful tool to wield in the field against the crop disease that has taken some of the sweetness out of orange juice over the years.

“People have no idea how complex of a crop oranges are to grow and what it takes to produce and squeeze the juice that goes into a glass of orange juice,” said Hollingsworth.

Academic researchers can apply now for the Applied Research Accelerator Program.

Photo credit: Samuel Branch on Unsplash

What Is a Cluster? What Is a Pod?

Everything we do on the internet — which is just about everything we do these days — depends on the work of clusters, which are also called pods.

When we stream a hot new TV show, order a pair of jeans or Zoom with grandma, we use clusters. You’re reading this story thanks to pods.

So, What Is a Cluster? What Is a Pod?

A cluster or a pod is simply a set of computers linked by high-speed networks into a single unit.

Computer architects must have reached, at least unconsciously, for terms rooted in nature. Pea pods and dolphin superpods, like today’s computer clusters, show the power of many individuals working as a team.

The Roots of Pods and Superpods

The links go deeper. Botanists say pods not only protect and nourish individual peas, they can reallocate resources from damaged seeds to thriving ones. Similarly, a load balancer moves jobs off a failed compute node to a functioning one.
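In code, the parallel looks something like the toy sketch below: jobs on a failed node are redistributed to healthy ones. The scheduler, node names and round-robin policy are invented for illustration; real load balancers are far more sophisticated.

```python
def rebalance(assignments: dict[str, list[str]], failed: str) -> dict[str, list[str]]:
    """Reassign jobs from a failed node to the remaining healthy nodes."""
    healthy = [node for node in assignments if node != failed]
    orphaned = assignments.pop(failed, [])
    for i, job in enumerate(orphaned):
        assignments[healthy[i % len(healthy)]].append(job)  # simple round-robin
    return assignments

cluster = {"node-a": ["job1"], "node-b": ["job2"], "node-c": []}
print(rebalance(cluster, "node-b"))
# {'node-a': ['job1', 'job2'], 'node-c': []}
```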

The dynamics aren’t much different for dolphins.

Working off the coast of the Bahamas, veteran marine biologist Denise Herzing often sees the same pods every day, family groups of perhaps 20 dolphins. And once she encountered a vastly larger group.

“Years ago, off the Baja peninsula, I saw a superpod. It was very exciting and a little overwhelming because as a researcher I want to observe a small group closely, not a thousand animals spread over a large area,” said the founder of the Wild Dolphin Project.

For dolphins, superpods are vital. “They protect the travelers by creating a huge sensory system, a thousand sets of ears that listen for predators like one super-sensor,” she said, noting the parallels with the clusters used in cloud computing today.

A data center with multiple clusters or pods can span multiple buildings and run as a single system.

Pods Sprout in Early Data Centers

As companies began computerizing their accounting systems in the early 1960s, they instinctively ganged multiple computers together so they would have backups in case one failed, according to Greg Pfister, a former IBM technologist and an expert on clusters.

“I’m pretty sure NCR, MetLife and a lot of people did that kind of thing,” said Pfister, author of In Search of Clusters, considered by some the bible of the field.

In May 1983, Digital Equipment Corp. packed several of its popular 32-bit VAX minicomputers into what it called a VAXcluster. Each computer ran its own operating system, but they shared other resources, providing IT users with a single system image.

Diagram of an early PC-based cluster.

By the late 1990s, the advent of low-cost PC processors, Ethernet networks and Linux inspired at least eight major research projects that built clusters. NASA designed one with 16 PC motherboards on two 10 Mbit/second networks and dubbed it Beowulf, imagining it slaying the giant mainframes and massively parallel systems of the day.

Cluster Networks Need Speed

Researchers found clusters could be assembled quickly and offered high performance at low cost, as long as they used high-speed networks to eliminate bottlenecks.

Another late ’90s project, Berkeley’s Network of Workstations (NoW), linked dozens of Sparc workstations on the fastest interconnects of the day. Its researchers illustrated the work with an image of a pod of small fish eating a larger fish.

Researchers behind Berkeley’s NoW project envisioned clusters of many small systems outperforming a single larger computer.

One researcher, Eric Brewer, saw clusters were ideal for emerging internet apps, so he used the 100-server NoW system as a search engine.

“For a while we had the best search engine in the world running on the Berkeley campus,” said David Patterson, a veteran of NoW and many computer research projects at Berkeley.

The work was so successful that Brewer co-founded Inktomi, an early search engine built on a NoW-inspired cluster of 1,000 systems. It had many rivals, including a startup called Google with roots at Stanford.

“They built their network clusters out of PCs and defined a business model that let them grow and really improve search quality — the rest was history,” said Patterson, co-author of a popular textbook on computing.

Today, clusters or pods are the basis of most of the world’s TOP500 supercomputers as well as virtually all cloud computing services. And most use NVIDIA GPUs, but we’re getting ahead of the story.

Pods vs. Clusters: A War of Words

While computer architects called these systems clusters, some networking specialists preferred the term pod. Turning the biological term into a tech acronym, they said POD stood for a “point of delivery” of computing services.

The term pod gained traction in the early days of cloud computing. Service providers raced to build ever larger, warehouse-sized systems, often ordering entire shipping containers, aka pods, of pre-configured systems they could plug together like Lego blocks.

An early prototype container delivered to a cloud service provider.

More recently, the Kubernetes group adopted the term pod. They define a software pod as “a single container or a small number of containers that are tightly coupled and that share resources.”
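In practice, that definition maps directly onto the Kubernetes API. The sketch below uses the official Kubernetes Python client to declare a pod of two tightly coupled containers; the names, images and namespace are arbitrary examples, and a configured kubeconfig is assumed.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a working ~/.kube/config

# A pod: a small number of tightly coupled containers sharing resources.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="web", labels={"app": "web"}),
    spec=client.V1PodSpec(containers=[
        client.V1Container(name="app", image="nginx:1.21"),
        client.V1Container(
            name="sidecar",
            image="busybox:1.34",
            command=["sh", "-c", "while true; do echo heartbeat; sleep 60; done"],
        ),
    ]),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```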

Industries like aerospace and consumer electronics adopted the term pod, too, perhaps to give their concepts an organic warmth. Among the most iconic examples are the iPod, the forerunner of the iPhone, and the single-astronaut vehicle from the movie 2001: A Space Odyssey.

When AI Met Clusters

In 2012, cloud computing services heard the Big Bang of AI, the genesis of a powerful new form of computing. They raced to build giant clusters of GPUs that, thanks to their internal clusters of accelerator cores, could process huge datasets to train and run neural networks.

To help spread AI to any enterprise data center, NVIDIA packs GPU clusters on InfiniBand networks into NVIDIA DGX Systems. A reference architecture lets users easily scale from a single DGX system to an NVIDIA DGX POD or even a supercomputer-class NVIDIA DGX SuperPOD.

For example, Cambridge-1, in the United Kingdom, is an AI supercomputer based on a DGX SuperPOD, dedicated to advancing life sciences and healthcare. It’s one of many AI-ready clusters and pods spreading like never before. They’re sprouting like AI itself, in many shapes and sizes in every industry and business.

Intel, EXOS Pilot 3D Athlete Tracking with Pro Football Hopefuls

What’s New: EXOS, a leader in the field of advancing human performance, is piloting Intel’s 3D Athlete Tracking (3DAT) technology in training aspiring professional athletes to reach their peak performance. As pro days loom, these athletes seek to take their game to the next level with 3DAT by leveraging artificial intelligence (AI) to gain actionable insights about their velocity, acceleration and biomechanics when sprinting.

“Metrics that were previously unmeasurable by the naked eye are now being revealed with Intel’s 3DAT technology. We’re able to take that information, synthesize it and turn it into something tangible for our coaches and athletes. It’s a gamechanger when the tiniest of adjustments can lead to real, impactful results for our athletes.”
– Monica Laudermilk, vice president of research at EXOS

Why It Matters: 3DAT puts at the fingertips of coaches and elite athletes data that, up to this point, has either been nonexistent or hard to get. By providing precise skeletal analysis and performance metrics through simple video, it lets athletes, coaches and anyone interested in human performance know what the body is doing and how to make it perform better.

“There’s a massive gap in the sports and movement field, between what people feel when they move and what they actually know that they’re doing,” said Ashton Eaton, two-time Olympic gold medalist in the decathlon, and Product Development engineer in Intel’s Olympic Technology Group. “When I was running the 100-meter dash, I’d work with my coach to make adjustments to shave off fractions of a second, but it was all by feel. Sometimes it worked, sometimes it didn’t, because I didn’t fully know what my body was actually doing. But 3DAT allows athletes to understand precisely what their body is doing while in motion, so they can precisely target where to make tweaks to get faster or better.”

How It Works: 3DAT technology is hands-free for athletes. It leverages cameras to film athletes as they run drills, so they are unburdened from wearing sensors or deviating from their regular training program. The system sends the video data – 60 frames per second – to the cloud for analysis on Intel® Xeon® Scalable processors with built-in Intel® Deep Learning Boost AI acceleration capabilities. Coaches then receive reports and charts that provide a detailed overview of the athletes’ sessions. The coach can drill in deeper on body mechanics or trouble spots to better understand what minor tweaks they can implement to help athletes achieve their full athletic potential.
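As a rough illustration of the arithmetic behind such reports, the snippet below derives velocity and acceleration from per-frame position estimates at 60 frames per second using finite differences. The synthetic data is invented for the example and implies nothing about Intel’s actual pipeline.

```python
import numpy as np

FPS = 60          # 3DAT video is captured at 60 frames per second
DT = 1.0 / FPS

# Synthetic hip positions (meters along the track) for one second of a sprint,
# as a pose-estimation model might report them frame by frame.
t = np.arange(FPS) * DT
x = 0.5 * 2.0 * t**2 + 6.0 * t   # runner accelerating at 2 m/s^2 from 6 m/s

velocity = np.gradient(x, DT)            # per-frame speed via finite differences
acceleration = np.gradient(velocity, DT)

print(f"mean velocity: {velocity.mean():.2f} m/s")            # ~7 m/s
print(f"mean acceleration: {acceleration.mean():.2f} m/s^2")  # ~2 m/s^2
```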

“3DAT is giving us information, and insight, not just into the technique of how people are running and how they can improve, but also what might be holding them back. This data enables us to make adjustments in the weight room to help unlock more potential on the field,” said Craig Friedman, senior vice president of EXOS’ Performance Innovation Team.

What Comes Next: Through the continued partnership with EXOS, Intel will have access to expert coaches, elite athletes and high performers to glean additional insights on how this technology can be utilized. Intel’s engineers continue to relentlessly pursue new innovations that enable developers to build anything they can imagine on top of 3DAT technology to propel the field of human performance.

More Context: Artificial Intelligence at Intel

Intel Partner Stories: Intel Customer Spotlight on Intel.com | Partner Stories on Intel Newsroom

GFN Thursday — 21 Games Coming to GeForce NOW in March

Guess what’s back? Back again? GFN Thursday. Tell a friend.

Check out this month’s list of all the exciting new titles and classic games coming to GeForce NOW in March.

First, let’s get into what’s coming today.

Don’t Hesitate

It wouldn’t be GFN Thursday if members didn’t have new games to play. Here’s what’s new to GFN starting today:

Loop Hero (day-and-date release on Steam)

Equal parts roguelike, deck-builder and auto battler, Loop Hero challenges you to think strategically as you explore each randomly generated loop path and fight to defeat The Lich. PC Gamer gave this indie high praise, saying, “don’t sleep on this brilliant roguelike.”

Disgaea PC (Steam)

The turn-based strategy RPG classic lets you amass your evil hordes and become the new Overlord. With more than 40 character types and PC-specific features, there’s never been a better time to visit the Netherworld.

Members can also look for the following games joining GeForce NOW later today:

  • Legends of Aria (Steam)
  • The Dungeon Of Naheulbeuk: The Amulet Of Chaos (Steam)
  • Wargame: Red Dragon (Free on Epic Games Store, March 4-11)
  • WRC 8 FIA World Rally Championship (Steam)

What’s Coming in March

We’ve got a great list of exciting titles coming soon to GeForce NOW. You won’t want to miss:

Spacebase Startopia (Steam and Epic Games Store)

An original mixture of economic simulation and empire building strategy paired with classic RTS skirmishes and a good dose of humor to take the edge off.

Wrench (Steam)

Prepare and maintain race cars in an extraordinarily detailed mechanic simulator. Extreme attention has been paid to even the smallest components, including fasteners that are accurate to the thread pitch and install torque.

And that’s not all — check out even more games coming to GFN in March:

  • Door Kickers (Steam)
  • Endzone – A World Apart (Steam)
  • Monopoly Plus (Steam)
  • Monster Energy Supercross – The Official Videogame 4 (Steam)
  • Narita Boy (Steam)
  • Overcooked!: All You Can Eat (Steam)
  • Pascal’s Wager – Definitive Edition (Steam)
  • System Shock: Enhanced Edition (Steam)
  • Thief Gold (Steam)
  • Trackmania United Forever (Steam)
  • Uno (Steam)
  • Workers & Resources: Soviet Republic (Steam)
  • Worms Reloaded (Steam)

In Case You Missed It

Remember that one time in February when we told you that 30 titles were coming to GeForce NOW? Actually, it was even more than that — 18 additional games joined the service, bringing the total in February to nearly 50.

If you’re not following along every week, the additional 18 games that joined GFN are:

Add it all up, and you’ve got a lot of gaming ahead of you.

The Backbone of Your GFN iOS Experience

For those planning to give our new Safari iOS experience a try, our newest GeForce NOW Recommended Controller is for you.

Backbone One is an iPhone game controller recommended for GeForce NOW, with fantastic buttons and build quality, technology that preserves battery life and reduces latency, passthrough charging and more. It even has a built-in capture button to let you record and share your gameplay, right from your phone. Learn more about Backbone One on the GeForce NOW Recommended Product hub.

This should be an exciting month, GFN members. What are you going to play? Tell us on Twitter or in the comments below.

AMD Unveils AMD Radeon RX 6700 XT Graphics Card, Delivering Exceptional 1440p PC Gaming Experiences

– Harnessing breakthrough AMD RDNA™ 2 gaming architecture, high-performance AMD Infinity Cache and 12GB of high-speed GDDR6 memory, new AMD Radeon™ RX 6700 XT graphics cards provide up to 2X higher performance in select titles than the current installed base of older-generation graphics cards1

– Powerful blend of raytracing with AMD FidelityFX2 compute and rasterized effects elevates visuals to new levels of fidelity, delivering amazing lifelike, cinematic gaming experiences

SANTA CLARA, Calif., March 03, 2021 (GLOBE NEWSWIRE) -- AMD (NASDAQ: AMD) today introduced the AMD Radeon RX 6700 XT graphics card, providing exceptional performance, stunningly vivid visuals and advanced software features to redefine 1440p resolution gaming.

Representing the cutting edge of engineering and design, AMD Radeon RX 6700 XT graphics cards harness breakthrough AMD RDNA 2 gaming architecture, 96MB of high-performance AMD Infinity Cache, 12GB of high-speed GDDR6 memory, AMD Smart Access Memory3 and other advanced technologies to meet the ever-increasing demands of modern games. Delivering up to 2X higher gaming performance in select titles1 with amazing new features compared to the current installed base of older-generation graphics cards and providing more than 165 FPS in select esports titles4, the AMD Radeon RX 6700 XT graphics card pushes the limits of gaming by enabling incredible, high-refresh 1440p performance and breathtaking visual fidelity.

“Modern games are more demanding than ever, requiring increasing levels of computing horsepower to deliver the breathtaking, immersive experiences gamers expect,” said Scott Herkelman, corporate vice president and general manager, Graphics Business Unit at AMD. “The AMD Radeon RX 6700 XT graphics card hits the sweet spot for 1440p gaming. For most gamers still playing on three-to-four-year-old graphics cards, it is the perfect upgrade solution designed to deliver incredible visuals and no-compromises, high-refresh 1440p gaming performance at maximum settings.”

Taking advantage of this incredible performance, more than 40 system builders and OEMs are expected to launch high-performance desktop PCs powered by the new graphics card. HP is expected to refresh two desktop gaming systems this Spring – the HP OMEN 25L and 30L – featuring AMD Radeon RX 6700 XT graphics cards and AMD Ryzen™ 5000 Series Desktop Processors.

“Gamers are routinely on the lookout for the latest cutting-edge technologies to excel and immerse themselves in their favorite games,” said Judy Johnson, gaming platform head, HP Inc. “We’re thrilled to add the new AMD Radeon RX 6700 XT graphics cards in our OMEN 25L and 30L desktops to help power epic adventures across the globe.”

Exceptional 1440p Performance, Stunning Visuals and Unmatched Gaming Experiences

The AMD Radeon RX 6700 XT graphics card is built on 7nm process technology and AMD RDNA 2 gaming architecture, designed to deliver the optimal combination of performance and power efficiency. Additional features and capabilities include:

  • AMD Infinity Cache – 96MB of last-level data cache on the GPU die provides up to 2.5X higher bandwidth at the same power level as traditional architectures5 to provide higher gaming performance.
  • AMD Smart Access Memory – Unlocks higher performance when pairing AMD Radeon RX 6000 Series graphics cards with AMD Ryzen 5000 or select Ryzen 3000 Series Desktop Processors and AMD 500-series motherboards. Providing AMD Ryzen processors with access to high-speed GDDR6 graphics memory can deliver a performance uplift of up to 16 percent6.
  • 12GB High-Speed GDDR6 VRAM – Designed to handle the increasing texture loads and greater visual demands of today’s modern games at higher resolutions and max settings, the new graphics card with 12GB of GDDR6 memory allows gamers to easily power through today’s and tomorrow’s demanding AAA titles.
  • AMD FidelityFX – Integrated into more than 40 games and counting, AMD FidelityFX is an open-source toolkit of visual enhancement effects for game developers available at AMD GPUOpen. Optimized for AMD Radeon graphics, it includes a robust collection of rasterized lighting, shadow and reflection effects that can be integrated into the latest games with minimal performance overhead.
  • DirectX® Raytracing (DXR) – AMD RDNA 2 architecture-based graphics cards are optimized to deliver real-time lighting, shadow and reflection realism with DXR, providing a stunning gaming experience. When paired with AMD FidelityFX, developers can combine rasterized and ray-traced effects to provide an ideal balance of image quality and gaming performance.
  • AMD Radeon Anti-Lag7 – Now with support for the DirectX 12 API, AMD Radeon Anti-Lag decreases input-to-display response times, making games more responsive and offering a competitive edge in gameplay.
  • AMD Radeon Boost8 – Now with support for Variable Rate Shading, AMD Radeon Boost delivers up to a 27 percent performance increase9 during fast-motion gaming scenarios by dynamically reducing image resolution or by varying shading rates for different regions of a frame, increasing framerates and fluidity, and bolstering responsiveness with virtually no perceptual impact on image quality.

“We’re excited to partner with AMD to bring the latest, much-anticipated entry in the Resident Evil franchise to PC gamers,” said Peter Fabiano, producer at Capcom®. “With the powerful Radeon RX 6000 Series graphics coupled with next-generation features and technologies, including AMD FidelityFX, raytracing and Variable Rate Shading, gamers can experience the ultimate fight for survival with visually stunning, insanely detailed characters and environments and high-performance gameplay.”

Driving High-Octane Performance with Mercedes-AMG Petronas Esports Team

AMD also announced a new collaboration with the Mercedes-AMG Petronas Esports Team, the esports arm of the Mercedes-AMG Petronas Formula 1 Team. Mercedes has selected the powerful combination of AMD Radeon RX 6000 Series graphics cards, AMD Ryzen 5000 Series Desktop Processors and advanced AMD Radeon Software technologies to power their Esports gaming rigs, delivering the highest framerates, low-latency gameplay and the smoothest driving experience, unlocking the drivers’ full potential. AMD delivers an unmatched competitive edge in virtual racing.

Specifications, Pricing and Availability

Model                  Compute Units   GDDR6   Game Clock10 (MHz)   Boost Clock11 (MHz)   Memory Interface   Infinity Cache   TBP    Price (USD SEP)
AMD Radeon RX 6700 XT  40              12GB    2424                 Up to 2581            192-bit            96MB             230W   $479

AMD Radeon RX 6700 XT graphics cards are expected to be available from AMD.com, from AMD board partners including ASRock, ASUS, Gigabyte, MSI, PowerColor, SAPPHIRE and XFX, and from etailers/retailers on March 18, 2021, starting at $479 USD SEP. The refreshed HP OMEN 25L and 30L desktop gaming systems are expected to be available this Spring. Additional pre-built systems from OEM and SI partners are expected to become available in the coming months.

Supporting Resources

  • Learn more about the AMD Radeon RX 6700 XT graphics card here
  • Become a fan of AMD on Facebook
  • Follow AMD on Twitter

CAUTIONARY STATEMENT
This press release contains forward-looking statements concerning Advanced Micro Devices, Inc. (AMD) such as the features, functionality, performance, availability, timing and expected benefits of AMD products including the AMD RadeonTM RX 6700 XT graphics card and the expected launch by system builders and OEMs, which are made pursuant to the Safe Harbor provisions of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are commonly identified by words such as "would," "may," "expects," "believes," "plans," "intends," "projects" and other terms with similar meaning. Investors are cautioned that the forward-looking statements in this press release are based on current beliefs, assumptions and expectations, speak only as of the date of this press release and involve risks and uncertainties that could cause actual results to differ materially from current expectations. Such statements are subject to certain known and unknown risks and uncertainties, many of which are difficult to predict and generally beyond AMD's control, that could cause actual results and other future events to differ materially from those expressed in, or implied or projected by, the forward-looking information and statements. Material factors that could cause actual results to differ materially from current expectations include, without limitation, the following: Intel Corporation’s dominance of the microprocessor market and its aggressive business practices; global economic uncertainty; the loss of a significant customer; the impact of the COVID-19 pandemic on AMD’s business, financial condition and results of operations; the competitive markets in which AMD’s products are sold; quarterly and seasonal sales patterns; market conditions of the industries in which AMD products are sold; the cyclical nature of the semiconductor industry; AMD's ability to adequately protect its technology or other intellectual property; unfavorable currency exchange rate fluctuations; the ability of third party manufacturers to manufacture AMD's products on a timely basis in sufficient quantities and using competitive technologies; expected manufacturing yields for AMD’s products; the availability of essential equipment, materials or manufacturing processes; AMD's ability to introduce products on a timely basis with features and performance levels that provide value to its customers; AMD's ability to generate revenue from its semi-custom SoC products; potential security vulnerabilities; potential IT outages, data loss, data breaches and cyber-attacks; uncertainties involving the ordering and shipment of AMD’s products; AMD’s reliance on third-party intellectual property to design and introduce new products in a timely manner; AMD's reliance on third-party companies for the design, manufacture and supply of motherboards, software and other computer platform components; AMD's reliance on Microsoft Corporation and other software vendors' support to design and develop software to run on AMD’s products; AMD’s reliance on third-party distributors and add-in-board partners; the impact of modification or interruption of AMD’s internal business processes and information systems; compatibility of AMD’s products with some or all industry-standard software and hardware; costs related to defective products; the efficiency of AMD's supply chain; AMD's ability to rely on third party supply-chain logistics functions; AMD’s ability to effectively control the sales of its products on the gray market; the impact of government 
actions and regulations such as export administration regulations, tariffs and trade protection measures; AMD’s ability to realize its deferred tax assets; potential tax liabilities; current and future claims and litigation; the impact of environmental laws, conflict minerals-related provisions and other laws or regulations; the impact of acquisitions, joint ventures and/or investments on AMD's business, including the announced acquisition of Xilinx, and the failure to integrate acquired businesses; AMD’s ability to complete the Xilinx merger; the impact of the announcement and pendency of the Xilinx merger on AMD’s business; the impact of any impairment of the combined company’s assets on the combined company’s financial position and results of operation; the restrictions imposed by agreements governing AMD’s notes and the revolving credit facility; the potential dilutive effect if the 2.125% Convertible Senior Notes due 2026 are converted; AMD's indebtedness; AMD's ability to generate sufficient cash to service its debt obligations or meet its working capital requirements; AMD's ability to repurchase its outstanding debt in the event of a change of control; AMD's ability to generate sufficient revenue and operating cash flow or obtain external financing for research and development or other strategic investments; political, legal, economic risks and natural disasters; future impairments of goodwill and technology license purchases; AMD’s ability to attract and retain qualified personnel; AMD’s stock price volatility; and worldwide political conditions. Investors are urged to review in detail the risks and uncertainties in AMD’s Securities and Exchange Commission filings, including but not limited to AMD’s most recent reports on Forms 10-K and 10-Q.

About AMD

For 50 years AMD has driven innovation in high-performance computing, graphics and visualization technologies ― the building blocks for gaming, immersive platforms and the datacenter. Hundreds of millions of consumers, leading Fortune 500 businesses and cutting-edge scientific research facilities around the world rely on AMD technology daily to improve how they live, work and play. AMD employees around the world are focused on building great products that push the boundaries of what is possible. For more information about how AMD is enabling today and inspiring tomorrow, visit the AMD (NASDAQ: AMD) website, blog, Facebook and Twitter pages.

©2021 Advanced Micro Devices, Inc. All rights reserved. AMD, the AMD Arrow logo, Radeon, Ryzen and combinations thereof are trademarks of Advanced Micro Devices, Inc. DirectX is a registered trademark of Microsoft Corporation in the US and other jurisdictions. Other product names used in this publication are for identification purposes only and may be trademarks of their respective companies.

The information contained herein is for informational purposes only, and is subject to change without notice. Timelines, roadmaps, and/or product release dates shown in this Press Release are plans only and subject to change. “Navi” is an AMD codename and is not a product name.

__________________________________
1 Testing done by AMD performance labs Feb 8 2020, on a Radeon RX 6700 XT GPU (pre-release 20.50 driver), RTX 2080 Super (461.40 driver), and GTX 1070 Ti (461.40 driver), AMD Ryzen 9 5900X CPU, 16GB DDR4-3200MHz, ASRock X570 Taichi motherboard, Win10 Pro 64. Performance may vary. RX-627
2 For additional information, see https://www.amd.com/en/technologies/radeon-software-fidelityfx. GD-172
3 Smart Access Memory technology enablement requires an AMD Radeon 6000 series GPU, Ryzen 5000 or 3000 series CPU (excluding Ryzen 5 3400G and Ryzen 3 3200G) and an AMD 500 series motherboard with the latest BIOS update. BIOS requires support for AGESA 1.1.0.0 or higher. Download latest BIOS from vendor website. For additional information and system requirements, see https://www.amd.com/en/technologies/smart-access-memory. GD-178.        
4 Testing done by AMD performance labs Feb 18 2020, on a Radeon RX 6700 XT GPU (20.50-210202n-362065E-ATI driver), AMD Ryzen 9 5900X CPU, 16GB DDR4-3200MHz, ASRock X570 Taichi motherboard, Win10 Pro 64. Performance may vary. RX-629
5 Measurements calculated by AMD engineering. Measuring 1440p gaming average AMD Infinity Cache hit rates of 60% across select top gaming titles, multiplied by theoretical peak bandwidth from the 12 64B AMD Infinity Fabric channels connecting the Cache to the Graphics Engine at boost frequency of up to 1.94 GHz. RX-624
6 Testing done by AMD performance labs Feb 18 2020, on a Radeon RX 6700 XT GPU (20.50-210202n-362065E-ATI driver), AMD Ryzen 9 5900X CPU, 16GB DDR4-3200MHz, ASRock X570 Taichi motherboard, Win10 Pro 64 with AMD Smart Access Memory Technology ENABLED vs. the same system with Smart Access Memory Technology DISABLED. Performance may vary. RX-640
7 Radeon™ Anti-Lag is compatible with DirectX 9, DirectX 11 and DirectX 12 APIs, Windows 7 and 10. Hardware compatibility includes GCN and newer consumer dGPUs, Ryzen 2000 and newer APUs, including hybrid and detachable graphics configurations. No mGPU support. GD-157
8 Radeon™ Boost is compatible with Windows 7 and 10 in select titles only. Hardware compatibility includes Radeon RX 400 and newer consumer dGPUs, Ryzen 2000 Series and newer APUs, including hybrid and detachable graphics configurations. No mGPU support. Radeon™ Boost VRS compatible with AMD Radeon™ RX 6000 Series Graphics exclusively. For a list of compatible titles see https://www.amd.com/en/technologies/radeon-boost. GD-158
9 Testing conducted by AMD Performance Labs as of Feb 11, 2021 on the Radeon™ RX 6700 XT, using a test system comprising an AMD Ryzen 9 5900X CPU (3.7 GHz), 16GB DDR4-3200MHz memory, and Windows 10x64 with Radeon Software Adrenalin 2020 Edition 20.50 with Radeon Boost ON/OFF. Performance may vary. RS-356.
10 Game clock is the expected GPU clock when running typical gaming applications, set to typical TGP (Total Graphics Power). Actual individual game clock results may vary. GD-147
11 Boost Clock Frequency is the maximum frequency achievable on the GPU running a bursty workload. Boost clock achievability, frequency, and sustainability will vary based on several factors, including but not limited to: thermal conditions and variation in applications and workloads. GD-151


Contact:

George Millington
AMD Communications
(408) 547-7481
George.Millington@amd.com

Laura Graves
AMD Investor Relations
(408) 749-5467
Laura.Graves@amd.com

A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/c7cb1885-fff2-461a-8754-91853181fc90


AMD Radeon™ RX 6700 XT Graphics Card (Source: Advanced Micro Devices, Inc.)

Intel Helps IntellectEU Fight Insurance Fraud with ClaimShare

What’s New: IntellectEU, a technology company focused on emerging technologies, digital finance and insurtech, has implemented Intel® Software Guard Extensions (Intel® SGX) to secure ClaimShare, its new insurance fraud detection platform. ClaimShare uses R3’s Conclave confidential computing platform powered by Intel SGX and enabled on Microsoft Azure. Additionally, ClaimShare utilizes Corda blockchain and artificial intelligence to help solve the insurance industry’s growing problem with fraudulent duplicate claims.

“The application of Intel SGX technology and confidential computing to help combat this prominent form of insurance fraud will be a game changer for insurance companies. GDPR (General Data Protection Regulation) and strict data privacy compliance is critical in the insurance industry, and innovative solutions like ClaimShare will support collaboration, communication and further privacy.”
–Michael Reed, Intel director of Confidential Computing

Why It Matters: According to the FBI, the total cost of non-health insurance fraud is estimated to be more than $40 billion a year, meaning insurance fraud costs the average U.S. family between $400 and $700 a year in increased premiums. And while insurance companies invest in fraud-prevention technologies to identify patterns of fraudulent behavior, they are often limited to internal data. Coupled with a lack of collaboration, this proves problematic when bad actors create multiple claims for the same loss event at multiple insurers – a form of duplicate-claims fraud also called “double dipping.”

IntellectEU launched its innovative solution ClaimShare to solve this problem of “double-dipping.” ClaimShare’s industrywide platform facilitates secure data sharing between insurers, powered by confidential computing and Intel SGX. Confidentiality is crucial given regulatory and privacy constraints when sharing sensitive, personal insurance information.

“ClaimShare is the first industrywide platform that addresses these fraudulent challenges in the insurance industry while respecting business and client privacy. Until recently, there was no technology that supported this way of data exchange. With the recent advancements and adoption of enterprise blockchain and confidential computing, insurers can now securely and privately share and match data. We are fighting insurance fraud head-on,” said Chaim Finizola, director of ClaimShare.

How It Works: Once the insurer validates the claims, ClaimShare separates claims data into personally identifiable information (PII) and non-personally identifiable information (non-PII). Using the Corda distributed ledger, the non-PII is shared between the insurers and matched using fuzzy matching algorithms to identify suspicious claims. Once claims are suspected of being fraudulent, confidential computing is used to match the PII, confirming the fraud attempt before the second payout happens for the same claim.
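As a toy sketch of what the fuzzy-matching step on shared non-PII fields might look like, consider the Python below. The field names, scoring method and 0.85 threshold are assumptions for illustration, not ClaimShare’s actual algorithms.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def is_suspicious(claim_a: dict, claim_b: dict, threshold: float = 0.85) -> bool:
    """Flag two de-identified claims as likely duplicates of one loss event."""
    fields = ("loss_date", "loss_type", "vehicle_model")
    scores = [similarity(str(claim_a[f]), str(claim_b[f])) for f in fields]
    return sum(scores) / len(scores) >= threshold

a = {"loss_date": "2021-02-14", "loss_type": "collision", "vehicle_model": "Sedan X"}
b = {"loss_date": "2021-02-14", "loss_type": "Collision", "vehicle_model": "Sedan X"}
print(is_suspicious(a, b))  # True: escalate to the confidential PII-matching step
```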

ClaimShare offers a duplicate fraud claim verification solution across insurers, significantly decreasing the number of fraudulent claim payouts by enabling industry collaboration. This allows insurers to put public claims data on the ClaimShare ledger after verification so other insurers can check if the claim has already been paid.

Intel SGX uses a hardware-based trusted execution environment or enclave – an area of memory with a higher level of security protection – to help isolate and protect specific application code and data in memory. By creating a confidential computing environment with Intel SGX, ClaimShare can improve the security of encrypted data sharing and collaboration between insurers and help ensure privacy so that no competitive or sensitive information is leaked. The pilot detection program focused on auto insurance but can be replicated for other insurance products.

More Context: Security News at Intel

Intel Partner Stories: Intel Customer Spotlight on Intel.com | Partner Stories on Intel Newsroom

NVIDIA’s Marc Hamilton on Building Cambridge-1 Supercomputer During Pandemic

Since NVIDIA announced construction of the U.K.’s most powerful AI supercomputer — Cambridge-1 — Marc Hamilton, vice president of solutions architecture and engineering, has been (remotely) overseeing the build across the pond.

The system, which will be available for U.K. healthcare researchers to work on pressing problems, is being built on NVIDIA DGX SuperPOD architecture for a whopping 400 petaflops of AI performance.

Located at Kao Data, a data center using 100 percent renewable energy, Cambridge-1 would rank among the world’s top three most energy-efficient supercomputers on the latest Green500 list.

Hamilton points to the concentration of leading healthcare companies in the U.K. as a primary reason for NVIDIA’s decision to build Cambridge-1.

AstraZeneca, GSK, Guy’s and St Thomas’ NHS Foundation Trust, King’s College London, and Oxford Nanopore have already announced their intent to harness the supercomputer for research in the coming months.

Construction has been progressing at NVIDIA’s usual speed-of-light pace, with just final installations and initial tests remaining.

Hamilton promises to provide the latest updates on Cambridge-1 at GTC 2021.

Key Points From This Episode:

  • Hamilton gives listeners an explainer on Cambridge-1’s scalable units, or building blocks — NVIDIA DGX A100 systems — and how just 20 of them can provide the equivalent of hundreds of CPUs.
  • NVIDIA intends for Cambridge-1 to accelerate corporate research in addition to that of universities. Among them is King’s College London, which has already announced that it’ll be using the system.

Tweetables:

“With only 20 [DGX A100] servers, you can build one of the top 500 supercomputers in the world” — Marc Hamilton [9:14]

“This is the first time we’re taking an NVIDIA supercomputer by our engineers and opening it up to our partners, to our customers, to use” — Marc Hamilton [10:17]

You Might Also Like:

How AI Can Improve the Diagnosis and Treatment of Diseases

Medicine — particularly radiology and pathology — has become more data-driven. The Massachusetts General Hospital Center for Clinical Data Science — led by Mark Michalski — promises to accelerate that, using AI technologies to spot patterns that can improve the detection, diagnosis and treatment of diseases.

NVIDIA Chief Scientist Bill Dally on Where AI Goes Next

This podcast is full of words from the wise. One of the pillars of the computer science world, NVIDIA’s Bill Dally joins to share his perspective on the world of deep learning and AI in general.

The Buck Starts Here: NVIDIA’s Ian Buck on What’s Next for AI

Ian Buck, general manager of accelerated computing at NVIDIA, shares his insights on how relatively unsophisticated users can harness AI through the right software. Buck helped lay the foundation for GPU computing as a Stanford doctoral candidate, and delivered the keynote address at GTC DC 2019.
