How Evo’s AI Keeps Fashion Forward

Imagine if fashion houses knew that teal blue was going to replace orange as the new black. Or if retailers knew that tie dye was going to be the wave to ride when swimsuit season rolls in this summer.

So far, there hasn’t been an efficient way of getting ahead of consumer and market trends like these. But Italy-based startup Evo is helping retailers and fashion houses get a jump on changing tastes and a whole lot more.

The company’s deep-learning pricing and supply chain systems, powered by NVIDIA GPUs, let organizations quickly respond to changes — whether in markets, weather, inventory, customers, or competitor moves — by recommending optimal pricing, inventory and promotions in stores.

Evo is also a member of the NVIDIA Inception program, a virtual accelerator that offers startups in AI and data science go-to-market support, expertise and technology assistance.

The AI Show Stopper

Evo was born from a Ph.D. thesis by its founder, Fabrizio Fantini, while he was at Harvard.

Now the company’s CEO, Fantini discovered new algorithms that could outperform even the most complex and expensive commercial pricing systems in use at the time.

“Our research was shocking, as we measured an immediate 30 percent reduction in the average forecast error rate, and then continuous improvement thereafter,” Fantini said. “We realized that the ability to ingest more data, and to self-learn, was going to be of strategic importance to any player with any intention of remaining commercially viable.”

The software, developed in the I3P incubator at the Polytechnic University of Turin, examines patterns in fashion choices and extracts signals that anticipate market demand.

Last year, Evo’s systems managed goods worth over 10 billion euros from more than 2,000 retail stores. Its algorithms changed over 1 million prices and physically moved over 15 million items, while generating more than 100 million euros in additional profit for customers, according to the company.

Nearly three dozen companies, including grocers and other retailers, as well as fashion houses, have already benefited from these predictions.

“Our pilot clients showed a 10 percent improvement in margin within the first 12 months,” Fantini said. “And longer term, they achieved up to 5.5 points of EBITDA margin expansion, which was unprecedented.”

GPUs in Vogue 

Evo uses NVIDIA GPUs to run neural network models that transform data into predictive signals of market trends. This allows clients to make systematic and profitable decisions.

Using a combination of advanced machine learning methods and statistics, the system transforms products into “functional attributes,” such as type of sleeve or neckline, and into “style attributes,” such as the color or silhouette.

It works off a database that maps the social media, internet patterns and purchase behaviors of over 1.3 billion consumers, which the company describes as a representative sample of the world population.

Then the system uses multiple algorithms and approaches, including meta-modeling, to process market data that is tagged automatically based on the clients, prices, products and characteristics of a company’s main competitors.

This makes the data directly comparable across different companies and geographies, which is one of the key ingredients required for success.
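To make the idea concrete, here is a minimal sketch, not Evo’s actual code, of how products described by functional and style attributes can be turned into comparable feature vectors and fed to a simple demand model; the attributes, data and model choice are all hypothetical.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical transaction data: each row is a product with functional
# attributes (sleeve, neckline) and style attributes (color, silhouette).
products = pd.DataFrame({
    "sleeve":     ["long", "short", "short", "long"],
    "neckline":   ["crew", "v", "crew", "v"],
    "color":      ["teal", "orange", "teal", "black"],
    "silhouette": ["slim", "loose", "loose", "slim"],
    "price":      [49.0, 29.0, 35.0, 59.0],
    "units_sold": [120, 80, 95, 60],   # observed demand
})

# One-hot encode the categorical attributes so products become
# comparable feature vectors across companies and geographies.
X = pd.get_dummies(products.drop(columns="units_sold"))
y = products["units_sold"]

# A simple off-the-shelf regressor stands in for Evo's proprietary meta-models.
model = GradientBoostingRegressor().fit(X, y)
print(model.predict(X[:1]))   # predicted demand for the first product
```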

“It’s a bit like Google Translate,” said Fantini. “Learning from its corpus of translations to make each new request smarter, we use our growing body of data to help each new prediction become more accurate, but we work directly on transaction data rather than images, text or voice as others do.”

These insights help retailers understand how to manage their supply chains and how to plan pricing and production even when facing rapid changes in demand.

In the future, Evo plans to use AI to help design fashion collections and forecast trends at increasingly earlier stages.


Laika’s Oscar-nominated ‘Missing Link’ Comes to Life with Intel Technology

As moviemaking — and even the actors themselves — goes increasingly digital, the Oregon-based studio Laika is a unique hybrid. Most movies today are live action with visual effects added later — or they’re fully digital. Laika starts with the century-old craft of stop motion — 24 handcrafted frames per second — and uses visual effects not only to clean up those frames but to add backgrounds and characters.

“We’re dedicated to pushing the boundaries and trying to expand what you can do in a stop motion film,” says Jeff Stringer, director of production technology at Laika. “We want to try and get as much as we can in-camera, but using visual effects allows us to scale it up and do more.”

That’s exactly what Laika did with its latest feature, “Missing Link,” the company’s fifth straight movie to be nominated for an Academy Award for best animated feature, and its first to win a Golden Globe. “The scope of this movie is huge,” the film’s writer-director, Chris Butler, told the Los Angeles Times. According to Animation World Network, the film’s digital backgrounds and characters required more than a petabyte of storage, and rendering the entire movie took 112 million processor hours — or 12,785 years.
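That years figure follows directly from the processor-hour count; a quick back-of-the-envelope check (our arithmetic, not from the article):

```python
processor_hours = 112_000_000
hours_per_year = 24 * 365                  # 8,760 hours in a year
print(processor_hours / hours_per_year)    # about 12,785 single-core years
```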

Like most motion picture content today, Laika rendered “Missing Link” on Intel® Xeon® Scalable processors. Intel and Laika engineers are working together to apply AI to further automate and speed the company’s animation process. “Our biggest metric is, is the performance believable and beautiful?” Stringer asks. “Our ethos is to not let the craft limit the storytelling but try to push the craft as far as the story wants to go.”

Voting for the 2020 Academy Awards ends Tuesday, Feb. 4, and the Oscars will be awarded Sunday, Feb. 9.

More: Go behind the scenes and explore a special interactive booklet celebrating the world-class artists who brought “Missing Link” to life. | All Intel Images


2020 CES: Intel Brings Innovation to Life with Intelligent Tech Spanning the Cloud, Network, Edge and PC


LAS VEGAS, Jan. 6, 2020 – Breakthroughs in artificial intelligence (AI) that pave the way for autonomous driving. A new era of mobile computing innovation. The future of immersive sports and entertainment. Intel demonstrated all of these and more today at CES 2020, showcasing how the company is infusing intelligence across the cloud, network, edge and PC, and driving positive impact for people, business and society.

Intel CEO Bob Swan kicked off today’s news conference by sharing updates from its Mobileye business, including a demonstration of its self-driving robocar navigating traffic in a natural manner. The drive demonstrated Mobileye’s unique and innovative approach to deliver safer mobility for all with a combination of artificial intelligence, computer vision, the regulatory science model of RSS and true redundancy through independent sensing systems.

Swan also highlighted Intel’s work with the American Red Cross and its Missing Maps project to improve disaster preparedness. Using integrated AI acceleration on 2nd Generation Intel® Xeon® Scalable processors, Intel is helping the American Red Cross and its Missing Maps project to build highly accurate maps with bridges and roads for remote regions of the world, which helps emergency responders in the event of a disaster.

“At Intel, our ambition is to help customers make the most of technology inflections like AI, 5G and the intelligent edge so that together we can enrich lives and shape the world for decades to come. As we highlighted today, our drive to infuse intelligence into every aspect of computing can have positive impact at unprecedented scale,” Swan said.

Intelligence-Driven Mobile Computing

Mobile computing was an area of emphasis, as Intel made announcements spanning new products, partnerships and exciting platform-level innovations that will transform the way people focus, create and engage. Intel Executive Vice President Gregory Bryant announced the following:

  • First look and demonstration of the newest Intel® Core™ mobile processors, code-named “Tiger Lake”: Tiger Lake is designed to bring Intel’s bold, people-led vision for mobile computing to life, with groundbreaking advances in every vector and experience that matters. With optimizations spanning the CPU, AI accelerators and discrete-level integrated graphics based on the new Intel Xe graphics architecture, Tiger Lake will deliver double-digit performance gains1, massive AI performance improvements, a huge leap in graphics performance and 4x the throughput of USB 3 with the new integrated Thunderbolt 4. Built on Intel’s 10nm+ process, the first Tiger Lake systems are expected to ship this year.
  • Preview of first Xe-based discrete GPU: Intel Vice President of Architecture for Graphics and Software Lisa Pearce provided insight into the progress on the new Intel Xe graphics architecture, which will provide huge performance gains in Tiger Lake, and previewed Intel’s first Xe-based discrete GPU, code-named “DG1.”
  • Significant updates on Intel’s “Project Athena” innovation program, including the first Project Athena-verified Chromebooks: Project Athena-verified designs have been tuned, tested and verified to deliver fantastic system-level innovation and benefits spanning battery life, consistent responsiveness, instant wake, application compatibility and more. Intel has verified 25 Project Athena designs to date, and Bryant announced an expanded partnership with Google that has already resulted in the first two Project Athena-verified Chromebooks, the ASUS Chromebook Flip (C436) and the Samsung Galaxy Chromebook. Intel expects to verify approximately 50 more designs across Windows and Chrome this year and deliver a target specification for dual-screen PCs.
  • Form factor innovation, including dual screens and a revolutionary foldable design: Through deepened co-engineering efforts with OEM partners, Intel helps deliver category-defining devices based on Intel Core processors. This includes new dual-screen and foldable designs like the Lenovo ThinkPad X1 Fold, which leverages the Intel Core processor with Intel Hybrid Technology (code-named “Lakefield”) expected to ship midyear, and the Dell Concept Duet. Bryant also previewed Intel’s latest concept device, a foldable OLED display form factor, code-named “Horseshoe Bend.” Based on Intel’s upcoming Tiger Lake mobile processors, the design is similar in size to a 12-inch laptop with a folding touchscreen display that can be opened up to more than 17 inches.

Intelligence-Driven Business Transformation

The data center is the force that delivers intelligence to businesses around the world, and Intel Xeon Scalable processors continue to be the foundation of the data center. Intel Executive Vice President Navin Shenoy announced that 3rd Generation Intel Xeon Scalable processors, coming in the first half of 2020, will include new Intel® DL Boost extensions for built-in AI training acceleration, providing up to a 60% increase in training performance over the previous family.

Shenoy highlighted several ways Intel is threading intelligence into data platforms across cloud, network and edge and how this is transforming sports and entertainment:

  • Netflix optimizes and accelerates media streaming services: Netflix has utilized the latest video compression technology, AV1, to enhance its media streaming services and bring content to life across the globe, with up to 60% better compression efficiency than the previous compression technology (AVC). Intel’s and Netflix’s joint efforts continue with the development of an open-source, high-performance encoder (SVT-AV1), optimized on 2nd Gen Intel Xeon Scalable processors, that delivers significant quality and performance gains, making it viable for commercial deployment.
  • Enhanced athlete and viewer experiences at Tokyo 2020 with 3D Athlete Tracking: A first-of-its-kind computer vision solution, 3D Athlete Tracking (3DAT) uses AI to enhance the viewing experience with near real-time insights and visualizations. 3DAT uses highly mobile cameras to capture the form and motion of athletes, then applies algorithms optimized with Intel DL Boost and powered by Intel Xeon Scalable processors to analyze the biomechanics of athletes’ movements. Shenoy announced that this technology will enhance replays of the 100-meter and other sprinting events at the Olympic Games Tokyo 2020.
  • Large-scale volumetric video streaming: Intel and the sports industry are transforming the sports viewing experience with volumetric video, an important progression toward enabling sports viewing without limitations. Intel® True View synthesizes the entire volume of the stadium’s field to provide endless angles that allow fans to choose any vantage point and player perspective and stream from their devices. Intel and the NFL showcased the power of streaming volumetric video with a play from Week 15’s Cleveland Browns-Arizona Cardinals game. The data produced from the first quarter of an NFL game alone can exceed 3TB per minute – an exponential increase requiring tremendous computing power.

More information on all of these announcements, including visual assets from the event, is available in the CES press kit on the Intel Newsroom.

1Based on Intel testing and configurations.

Forward-Looking Statements

Statements in this news summary that refer to future plans and expectations, including with respect to Intel’s future products and the expected availability and benefits of such products, are forward-looking statements that involve a number of risks and uncertainties. Words such as “anticipates,” “expects,” “intends,” “goals,” “plans,” “believes,” “seeks,” “estimates,” “continues,” “may,” “will,” “would,” “should,” “could,” and variations of such words and similar expressions are intended to identify such forward-looking statements. Statements that refer to or are based on estimates, forecasts, projections, uncertain events or assumptions, including statements relating to total addressable market (TAM) or market opportunity and anticipated trends in our businesses or the markets relevant to them, also identify forward-looking statements. Such statements are based on the company’s current expectations and involve many risks and uncertainties that could cause actual results to differ materially from those expressed or implied in these forward-looking statements. Important factors that could cause actual results to differ materially from the company’s expectations are set forth in Intel’s earnings release dated October 25, 2018, which is included as an exhibit to Intel’s Form 8-K furnished to the SEC on such date. Additional information regarding these and other factors that could affect Intel’s results is included in Intel’s SEC filings, including the company’s most recent reports on Forms 10-K and 10-Q. Copies of Intel’s Form 10-K, 10-Q and 8-K reports may be obtained by visiting our Investor Relations website at www.intc.com or the SEC’s website at www.sec.gov.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors.

Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit www.intel.com/benchmarks.


2020 CES: Intel News Conference – ‘Innovation through Intelligence’ (Replay)

» Download “2020 CES: Intel News Conference – ‘Innovation through Intelligence’ (Event Replay)”

Intel’s CEO Bob Swan takes the stage to open this year’s Intel CES news conference. He is joined by Client Computing Group Executive Vice President Gregory Bryant and Data Platforms Group Executive Vice President Navin Shenoy, along with several guests. They highlight how Intel is infusing intelligence across the cloud, network, edge and PC. The conference brings to life several of Intel’s latest advancements focused on creating broad positive impact for people, business and society.

When: 4 p.m. PST Monday, Jan. 6, 2020

More Context: Intel at 2020 CES


2019 Yearbook: Intel Powers the Future

Intel’s 2019 yearbook offers a quick look back at a memorable year. 2019 started with a new chief executive officer, and the momentum only increased with a stream of notable introductions: new products, groundbreaking technology and unique customer use cases.

This yearbook reflects on 2019 highlights, but Intel’s leaders are firmly focused on 2020. In only a few weeks at CES, they will demonstrate several of Intel’s latest advancements focused on creating broad positive impact for people, businesses and society as a whole.


World’s Largest Mobile Network Taps NVIDIA EGX for 5G, Mobile Edge Computing

When natural disasters strike, responders race against time to deploy critical resources and save lives.

Fanned by strong winds, a forest fire raged through a remote southwestern corner of China’s Sichuan province in early April. Drones equipped with high-resolution cameras and infrared detection technology were dispatched to the mountainous terrain, from where they transmitted footage of the leaping flames over 5G networks to emergency dispatch headquarters.

Responders, rather than waiting for drones to return to start processing the data, could immediately begin parsing the video with AI image algorithms running on NVIDIA GPUs, helping them better understand the crisis and concentrate rescue efforts.

This groundbreaking work was led by the China Mobile Chengdu Institute of Research and Development — a research division of China Mobile, the world’s largest mobile network operator — using advanced 5G technology, AI and the China Mobile Link-Cloud platform for drones.

The company, which has nearly a billion customers, is accelerating natural disaster response, improving emergency medical services and providing new education tools with NVIDIA GPUs connected to next-gen 5G mobile networks.

For example, a joint rescue team from China Mobile’s research division and Sichuan Provincial People’s Hospital in June used ambulances equipped with 5G terminals to remotely diagnose patients at the scene of a magnitude 6.0 earthquake. First responders in the emergency vehicles conducted tests like ECG monitoring or ultrasounds, using low-latency 5G networks for real-time video consultations with doctors at the hospital.

There, physicians could use GPU-accelerated medical imaging AI to diagnose and provide temporary treatment instructions until the patients were transferred to the hospital for surgical treatment.

Elsewhere, high-bandwidth 5G towers help address educational inequality between urban and rural areas by connecting multiple classrooms through virtual reality. China Mobile has connected a classroom from a rural primary school in Sichuan with students in Chengdu, the province’s capital. To do so, they used VR headsets, NVIDIA GPUs and an integration of the NVIDIA CloudXR software development kit — which delivers low-latency AR/VR streaming over 5G networks — with an application for remote synchronization of classrooms.

This initiative could help thousands of schools in far-flung regions participate in the same real-time, interactive learning experiences as more resource-rich schools.

Future deployments of these pilot projects will shift computational processing from data centers to GPUs at the edge, whether embedded in drones and ambulances or in full racks of edge servers.

Deploying 5G to 600 Million Users

With 10x lower latency and 1,000x the bandwidth of existing networks, 5G makes data-intensive mobile computing applications such as 4K video and VR possible at the edge for the first time. It also enables the deployment of complex AI models for inference at the edge.

China Mobile, a leader in 5G deployment, has to date installed 50,000 5G stations across 50 cities in China. The country is projected to have 600 million 5G users by 2025.

The company is a member of the Open Data Center Committee, a nonprofit consortium formed by the country’s leading technology providers and telecom giants. One of the committee’s initiatives is the Open Telecom IT Infrastructure (OTII) project, an effort to standardize server solutions for 5G mobile edge computing.

NVIDIA EGX servers developed by data center systems provider Inspur and edge computing manufacturer ADLINK will be the first GPU hardware to be incorporated under the OTII standard.

Powered by NVIDIA T4 and NVIDIA Quadro RTX GPUs, respectively, servers like these can be used at the edge to accelerate critical AI applications using 5G networks. An end-to-end software development kit compatible with Chinese technical requirements for mobile edge computing has also been developed to facilitate GPU adoption.

The NVIDIA EGX edge computing platform consists of a cloud-native software stack and edge servers optimized to run the stack. EGX systems vary from NVIDIA Jetson-powered edge devices to NGC-Ready for Edge validated servers. With NVIDIA EGX, system administrators can easily set up a fleet of edge servers remotely and securely for faster, easier deployment.

Inspur, H3C and Lenovo are among the dozens of manufacturers worldwide offering EGX systems today.

Learn more about how NVIDIA is accelerating 5G technology and bringing AI to the edge.


As AI Universe Keeps Expanding, NVIDIA CEO Lays Out Plan to Accelerate All of It

With the AI revolution spreading across industries everywhere, NVIDIA founder and CEO Jensen Huang took the stage Wednesday to unveil the latest technology for speeding its mass adoption.

His talk — to more than 6,000 scientists, engineers and entrepreneurs gathered for this week’s GPU Technology Conference in Suzhou, two hours west of Shanghai — touched on advancements in AI deployment, as well as NVIDIA’s work in the automotive, gaming, and healthcare industries.

“We build computers for the Einsteins, Leonardo da Vincis and Michelangelos of our time,” Huang told the crowd, which overflowed into the aisles. “We build these computers for all of you.”

Huang explained that demand is surging for technology that can accelerate the delivery of AI services of all kinds. And NVIDIA’s deep learning platform — which the company updated Wednesday with new inferencing software — promises to be the fastest, most efficient way to deliver these services.

It’s the latest example of how NVIDIA achieves spectacular speedups by applying a combination of GPUs optimized for parallel computation, work across the entire computing stack, and algorithm and ecosystem expertise in focused vertical markets.

“It is accepted now that GPU accelerated computing is the path forward as Moore’s law has ended,” Huang said.

Real-Time Conversational AI

The biggest news: groundbreaking new inference software enabling smarter, real-time conversational AI.

NVIDIA TensorRT 7 — the seventh generation of the company’s inference software development kit — features a new deep learning compiler designed to automatically optimize and accelerate the increasingly complex recurrent and transformer-based neural networks needed for complex new applications, such as AI speech.

This speeds the components of conversational AI by 10x compared to CPUs, driving latency below the 300-millisecond threshold considered necessary for real-time interactions.
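For a concrete sense of the workflow, here is a minimal sketch of building a TensorRT engine from an ONNX model using the Python API. It is illustrative only: the model file name and settings are assumptions, not the specific pipeline NVIDIA described onstage.

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path="speech_model.onnx"):
    """Parse an ONNX model and build a TensorRT engine for low-latency inference."""
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError("ONNX parsing failed")

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30     # 1 GB of scratch space for the optimizer
    config.set_flag(trt.BuilderFlag.FP16)   # mixed precision on Tensor Cores
    return builder.build_engine(network, config)
```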

“To have the ability to understand your intention, make recommendations, do searches and queries for you, and summarize what they’ve learned to a text to speech system… that loop is now possible,” Huang said. “It is now possible to achieve very natural, very rich, conversational AI in real time.”

Real-Time Recommendations: Baidu and Alibaba

Another challenge for AI: driving a new generation of powerful systems, known as recommender systems, able to connect individuals with what they’re looking for in a world where the options available to them are spiraling exponentially.

“The era of search has ended: if I put out a trillion, billion, million things and they’re changing all the time, how can you find anything?” Huang asked. “The era of search is over. The era of recommendations has arrived.”

Baidu — one of the world’s largest search companies — is harnessing NVIDIA technology to power advanced recommendation engines.

“It solves this problem of taking this enormous amount of data, and filtering it through this recommendation system so you only see 10 things,” Huang said.

With GPUs, Baidu can now train the models that power its recommender systems 10x faster, reducing costs, and, over the long term, increasing the accuracy of its models, improving the quality of its recommendations.

Another example of such systems’ power: Alibaba, which relies on NVIDIA technology to help power the recommendation engines behind the success of Singles Day.

The shopping festival, which takes place on Nov. 11 — or 11.11 — generated $38 billion in sales last month. That’s up by nearly a quarter from last year’s $31 billion, and more than double the online sales on Black Friday and Cyber Monday combined.

Helping to drive its success are recommender systems that display items that match user preferences, improving the click-through rate — which is closely watched in the e-commerce industry as a key sales driver. Its systems need to run in real-time and at an incredible scale, something that’s only possible with GPUs.

“Deep learning inference is wonderful for deep recommender systems and these recommender systems will be the engine for the Internet,” Huang said. “Everything we do in the future, everything we do now, passes through a recommender system.”

Accelerating Automotive Innovations

Huang also announced NVIDIA will provide the transportation industry with source access to its NVIDIA DRIVE deep neural networks (DNNs) for autonomous vehicle development.

NVIDIA DRIVE has become a de facto standard for AV development, used broadly by automakers, truck manufacturers, robotaxi companies, software companies and universities.

Now, NVIDIA is providing source access to its pre-trained AI models and training code for AV developers. Using a suite of NVIDIA AI tools, the ecosystem can freely extend and customize the models to increase the robustness and capabilities of their self-driving systems.

In addition to providing source access to the DNNs, Huang announced the availability of a suite of advanced tools so developers can customize and enhance NVIDIA’s DNNs using their own data sets and target feature sets. These tools allow the training of DNNs utilizing active learning, federated learning and transfer learning, Huang said.

Huang also announced NVIDIA DRIVE AGX Orin, the world’s highest-performance, most advanced system-on-a-chip. It delivers 7x the performance and 3x the efficiency per watt of Xavier, NVIDIA’s previous-generation automotive SoC. Orin — which will be available to be incorporated in customer production runs for 2022 — boasts 17 billion transistors, 12 CPU cores, and is capable of over 200 trillion operations per second.

Orin will be woven into a stack of products — all running a single architecture and compatible with software developed on Xavier — able to scale from simple Level 2 autonomy all the way up to full Level 5 autonomy.

And Huang announced that Didi — the world’s largest ride-hailing company — will adopt NVIDIA DRIVE to bring robotaxis and intelligent ride-hailing services to market.

“We believe everything that moves will be autonomous some day,” Huang said. “This is not the work of one company, this is the work of one industry, and we’ve created an open platform so we can all team up together to realize this autonomous future.”

Game On

Adding to NVIDIA’s growing footprint in cloud gaming, Huang announced a collaboration with Tencent Games.

“We are going to extend the wonderful experience of PC gaming to all the computers that are underpowered today, the opportunity is quite extraordinary,” Huang said. “We can extend PC gaming to the other 800 million gamers in the world.”

NVIDIA’s technology will power Tencent Games’ START cloud gaming service, which began testing earlier this year. START gives gamers access to AAA games on underpowered devices anytime, anywhere.

Huang also announced that six leading game developers will join the ranks of game developers around the world who have been using the real-time ray tracing capabilities of NVIDIA’s GeForce RTX to transform the image quality and lighting effects of their upcoming titles.

Ray tracing is a graphics rendering technique that brings real-time, cinematic-quality rendering to content creators and game developers. NVIDIA GeForce RTX GPUs contain specialized processor cores designed to accelerate ray tracing so the visual effects in games can be rendered in real time.
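At its core, the technique shoots rays from the camera into the scene and tests them against geometry. The tiny, purely illustrative CPU-side sketch below shows the classic ray-sphere intersection test at the heart of any ray tracer; RTX hardware accelerates this kind of work at vastly larger scale.

```python
import numpy as np

def intersect_sphere(origin, direction, center, radius):
    """Return the distance along a unit-length ray to the nearest sphere hit, or None."""
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius ** 2
    discriminant = b * b - 4.0 * c          # quadratic with a = 1 for a unit direction
    if discriminant < 0:
        return None                         # the ray misses the sphere entirely
    t = (-b - np.sqrt(discriminant)) / 2.0
    return t if t > 0 else None

# A ray fired down the z-axis toward a unit sphere centered 5 units away.
hit = intersect_sphere(np.array([0.0, 0.0, 0.0]),
                       np.array([0.0, 0.0, 1.0]),
                       np.array([0.0, 0.0, 5.0]),
                       1.0)
print(hit)   # prints 4.0: the ray strikes the near surface of the sphere
```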

The upcoming games include a mix of blockbusters, new franchises, triple-A titles and indie fare — all using real-time ray tracing to bring ultra-realistic lighting models to their gameplay.

They include Boundary, from Surgical Scalpels Studios; Convallaria, from LoongForce; F.I.S.T., from Shanghai TiGames; an unnamed project from miHoYo; Ring of Elysium, from Tencent; and Xuan Yuan Sword VII, from Softstar.

Accelerating Medical Advances, 5G

This year, Huang said, NVIDIA has added two major new applications to CUDA – 5G vRAN and genomic processing. With each, NVIDIA is supported by world leaders in their respective industries – Ericsson in telecommunications and BGI in genomics.

Since the first human genome was sequenced in 2003, the cost of whole genome sequencing has steadily shrunk, far outstripping the pace of Moore’s law. That’s led to an explosion of genomic data, with the total amount of sequence data doubling every seven months.

“The ability to sequence the human genome in its totality is incredibly powerful,” Huang said.

To put this data to work — and unlock the promise of truly personalized medicine — Huang announced that NVIDIA is working with Beijing Genomics Institute.

BGI is using NVIDIA V100 GPUs and software from Parabricks — an Ann Arbor, Michigan-based startup acquired by NVIDIA earlier this month — to build the highest-throughput genome sequencer yet, potentially driving down the cost of genomics-based personalized medicine.

“It took 15 years to sequence the human genome for the first time,” Huang said. “It is now possible to sequence 16 whole genomes per day.”

Huang also announced the NVIDIA Parabricks Genomic Analysis Toolkit and its availability on NGC, NVIDIA’s hub for GPU-optimized software for deep learning, machine learning and high-performance computing.

Accelerated Robotics with NVIDIA Isaac

As the talk wound to a close, Huang announced a new version of NVIDIA’s Isaac software development kit. The Isaac SDK achieves an important milestone in establishing a unified robotic development platform — enabling AI, simulation and manipulation capabilities.

The showstopper: Leonardo, a robotic arm with exquisite articulation created by NVIDIA researchers in Seattle, which not only performed a sophisticated task — recognizing and rearranging four colored cubes — but also responded almost tenderly, in real time, to the actions of the people around it, letting out soft squeaks that seemed straight out of a Steven Spielberg movie.

As the audience watched, the robotic arm gently plucked a yellow block from Huang’s hand and set it down. It then went on to rearrange four colored blocks, stacking them with fine precision.

The feat was the result of sophisticated simulation and training that allow the robot to learn in virtual worlds before being put to work in the real world. “And this is how we’re going to create robots in the future,” Huang said.

Accelerating Everything

Huang finished his talk by recapping NVIDIA’s sprawling accelerated computing story, one that spans ray tracing, cloud gaming, recommender systems, real-time conversational AI, 5G, genomics analysis, autonomous vehicles, robotics and more.

“I want to thank you for your collaboration to make accelerated computing amazing and thank you for coming to GTC,” Huang said.


AI, Accelerated Computing Drive Shift to Personalized Healthcare

Genomics is finally poised to go mainstream, with help from deep learning and accelerated-computing technologies from NVIDIA.

Since the first human genome was sequenced in 2003, the cost of whole genome sequencing has steadily shrunk, far faster than suggested by Moore’s law. From sequencing the genomes of newborn babies to conducting national population genomics programs, the field is gaining momentum and getting more personal by the day.

Advances in sequencing technology have led to an explosion of genomic data. The total amount of sequence data is doubling every seven months. This breakneck pace could see genomics in 2025 surpass by 10x the amount of data generated by other big data sources such as astronomy, Twitter and YouTube — hitting the double-digit exabyte range.

New sequencing systems, like the DNBSEQ-T7 from BGI Group, the world’s largest genomics research group, are pushing the technology into broad use. The system generates a whopping 60 genomes per day, equaling 6 terabytes of data.

With advancements in BGI’s flow cell technology and acceleration by a pair of NVIDIA V100 Tensor Core GPUs, DNBSEQ-T7 sequencing is sped up 50x, making it the highest throughput genome sequencer to date.

As costs decline and sequencing times accelerate, more use cases emerge, such as the ability to sequence a newborn in intensive care where every minute counts.

Getting Past the Genome Analysis Bottleneck: GPU-Accelerated GATK


The genomics community continues to extract new insights from DNA. Recent breakthroughs include single-cell sequencing to understand mutations at a cellular level, and liquid biopsies that detect and monitor cancer using blood for circulating DNA.

But genomic analysis has traditionally been a computational bottleneck in the sequencing pipeline — one that can be surmounted using GPU acceleration.

To deliver a roadmap of continuing GPU acceleration for key genomic analysis pipelines, the team at Parabricks — an Ann Arbor, Michigan-based developer of GPU software for genomics — is joining NVIDIA’s healthcare team, NVIDIA founder and CEO Jensen Huang shared today onstage at GTC China.

Working with BGI, Parabricks has shown its software can analyze a genome in under an hour. Using a server with eight NVIDIA T4 Tensor Core GPUs, BGI showed the throughput could lower the cost of genome sequencing to $2 — less than half the cost of existing systems.

See More, Do More with Smart Medical Devices

New medical devices are being invented across the healthcare industry. United Imaging Healthcare has introduced two industry-first medical devices. The uEXPLORER is the world’s first total body PET-CT scanner. Its pioneering ability to image an individual in one position enables it to carry out fast, continuous tracking of tracer distribution over the entire body.

A full body PET/CT image from uEXPLORER. Courtesy of United Imaging.

The total-body coverage of uEXPLORER can significantly shorten scan time. Scans as brief as 30 seconds provide good image quality, compared to traditional systems requiring over 20 minutes of scan time. uEXPLORER is also setting a new benchmark in tracer dose — imaging at about 1/50 of the regular dose, without compromising image quality.

The FDA-approved system uses 16 NVIDIA V100 Tensor Core GPUs and eight 56 Gb/s InfiniBand network links from Mellanox to process movie-like scans that can acquire up to a terabyte of data. The system is already deployed in the U.S. at the University of California, Davis, where scientists helped design it. It’s also the subject of an article in Nature, as well as videos watched by nearly half a million viewers on YouTube.

United’s other groundbreaking system, the uRT-Linac, is the first instrument to support a full radiation therapy suite, from detection to prevention.

With this instrument, a patient from a remote village can make the long trek to the nearest clinic just once to get diagnostic tests and treatment. The uRT-Linac combines CT imaging, AI processing to assist in treatment planning, and simulation with the radiation therapy delivery system. Using multi-modal technologies and AI, United has changed the nature of delivering cancer treatment.

Further afield, a growing number of smart medical devices are using AI for enhanced signal and image processing, workflow optimizations and data analysis.

And on the horizon are patient monitors that can sense when a patient is in danger and smart endoscopes that can guide surgeons during surgery. It’s no exaggeration to state that, in the future, every sensor in the hospital will have AI-infused capabilities.

Our recently announced NVIDIA Clara AGX developer kit helps address this trend. Clara AGX comprises hardware based on NVIDIA Xavier SoCs and Volta Tensor Core GPUs, along with a Clara AGX software development kit, to enable the proliferation of smart medical devices that make healthcare both smarter and more personal.

Apply for early access to Clara AGX.


All the Way to 11: NVIDIA GPUs Accelerate 11.11, World’s Biggest Online Shopping Event

Putting AI to work on a massive scale, Alibaba recently harnessed NVIDIA GPUs to serve its customers on 11/11, the year’s largest shopping event.

Singles Day, as the Nov. 11 shopping event is also known, generated $38 billion in sales. That’s up by nearly a quarter from last year’s $31 billion, and more than double online sales on Black Friday and Cyber Monday combined.

Singles Day — which has grown from $7 million a decade ago — illustrates the massive scale AI has reached in global online retail, where no player is bigger than Alibaba.

Each day, over 100 million shoppers comb through billions of available products on its site. Activity skyrockets on peak shopping days, when Alibaba’s systems field hundreds of thousands of queries a second.

And AI keeps things humming along, according to Lingjie Xu, Alibaba’s director of heterogeneous computing.

“To ensure these customers have a great user experience, we deploy state-of-the-art AI technology at massive scale using the NVIDIA accelerated computing platform, including T4 GPUs, cuBLAS, customized mixed precision and inference acceleration software,” he said.

“The platform’s intuitive search capabilities and reliable recommendations allow us to support a model six times more complex than in the past, which has driven a 10 percent improvement in click-through rate. Our largest model shows 100 times higher throughput with T4 compared to CPU,” he said.

One key application for Alibaba and other modern online retailers: recommender systems that display items that match user preferences, improving the click-through rate — which is closely watched in the e-commerce industry as a key sales driver.

Every small improvement in click-through rate directly impacts the user experience and revenues. A 10 percent improvement requires advanced recommender models that run in real time and at incredible scale, which is only possible with GPUs.

Alibaba’s teams employ NVIDIA GPUs to support a trio of optimization strategies around resource allocation, model quantization and graph transformation to increase throughput and responsiveness.

This has enabled NVIDIA T4 GPUs to accelerate Alibaba’s wide and deep recommendation model and deliver 780 queries per second. That’s a huge leap from CPU-based inference, which could only deliver three queries per second.
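As a rough illustration of the “wide and deep” idea (not Alibaba’s production model, which is vastly larger; the names and sizes here are made up), a toy version in PyTorch combines a linear “wide” path over sparse features with a “deep” path over learned embeddings to predict a click probability:

```python
import torch
import torch.nn as nn

class WideAndDeep(nn.Module):
    """Toy wide & deep recommender: a per-item 'wide' bias plus a 'deep' MLP
    over user/item embeddings, combined into a click-through probability."""
    def __init__(self, n_users=1000, n_items=5000, emb_dim=16):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, emb_dim)
        self.item_emb = nn.Embedding(n_items, emb_dim)
        self.wide = nn.Embedding(n_items, 1)       # wide part: one weight per item
        self.deep = nn.Sequential(
            nn.Linear(2 * emb_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, user_ids, item_ids):
        deep_in = torch.cat([self.user_emb(user_ids), self.item_emb(item_ids)], dim=-1)
        logit = self.deep(deep_in) + self.wide(item_ids)
        return torch.sigmoid(logit)                # predicted click-through rate

model = WideAndDeep()
users = torch.randint(0, 1000, (8,))               # a small batch of requests
items = torch.randint(0, 5000, (8,))
print(model(users, items).shape)                   # torch.Size([8, 1]): one score each
```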

Alibaba has also deployed NVIDIA GPUs to accelerate its systems for automatic ad banner generation, ad recommendation, image processing to help identify fake products, language translation, and speech recognition, among others. As the world’s third-largest cloud service provider, Alibaba Cloud provides a wide range of heterogeneous computing products capable of intelligent scheduling, automatic maintenance and real-time capacity expansion.

Alibaba’s far-sighted deployment of NVIDIA’s AI platform is a straw in the wind, indicating what more is to come in a burgeoning range of industries.

Just as its tools filter billions of products for millions of consumers, AI recommenders running on NVIDIA GPUs will find a place among countless other digital services — app stores, news feeds, restaurant guides and music services among them — keeping customers happy.

Learn more about NVIDIA’s AI inference platform.


AWS DeepComposer Enables Developers to Get Hands-On with Generative AI

The AWS DeepComposer keyboard announced at AWS re:Invent 2019. The machine learning-enabled keyboard helps developers in the field of generative artificial intelligence. (Credit: Amazon Web Services)

At the kickoff for AWS re:Invent, Dr. Matt Wood, vice president of artificial intelligence (AI) at Amazon Web Services, announced AWS DeepComposer, the world’s first machine learning-enabled keyboard for developers to get hands-on with generative AI. Generative AI is one of the most fascinating advancements in AI technology because of its ability to create something new: from turning sketches into images for accelerated product development to improving computer-aided design of complex objects.

AWS DeepComposer runs on Amazon EC2 C5 instances in the AWS cloud, which are powered by Intel® Xeon® Scalable processors.

This builds on previous announcements that demonstrate the joint commitment of Intel and AWS to make hands-on machine learning education more accessible to developers: AWS DeepLens at re:Invent 2017 and AWS DeepRacer at re:Invent 2018.

More: Artificial Intelligence at Intel

More Customer Stories: Intel Customer Spotlight on Intel.com | Customer Stories on Intel Newsroom
