What Is Transfer Learning?

You probably have a career. But hit the books for a graduate degree or take online certificate courses by night, and you could start a new career building on your past experience.

Transfer learning is the same idea. This deep learning technique enables developers to harness a neural network trained for one task and apply it to another domain.

Take image recognition. Let’s say that you want to identify horses, but there aren’t any publicly available algorithms that do an adequate job. With transfer learning, you begin with an existing convolutional neural network commonly used for image recognition of other animals, and you tweak it to train with horses.

ResNet to the Rescue

Developers might start with ResNet-50, a pre-trained deep learning model consisting of 50 layers, because it has a high accuracy level for identifying cats or dogs. Within the neural network are layers that identify outlines, curves, lines and other distinguishing features of these animals. Training those layers from scratch required a lot of labeled data, so reusing them saves a lot of time.

Those layers can be applied to the task of picking out the same kinds of features on horses. You might be able to identify the eyes, ears, legs and outlines of horses with ResNet-50, but determining that the animal is a horse and not a dog requires some additional training data.

And with that additional training, feeding in labeled training data for horses, more horse-specific features can be built into the model.

Transfer Learning Explained

Here’s how it works: First, you delete what’s known as the “loss output” layer, the final layer used to make predictions, and replace it with a new loss output layer for horse prediction. This loss output layer determines how training penalizes deviations between the predicted output and the labeled data.

Next, you would take your smaller dataset for horses and use it to train the network, either all 50 layers, just the last few, or the new loss layer alone. By applying these transfer learning techniques, your new CNN’s output will be horse identification.
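
The post stops short of code, but the recipe maps directly onto common frameworks. Here is a minimal sketch in PyTorch with torchvision (the post doesn’t name a framework, so that choice is an assumption), using random tensors as stand-ins for the horse dataset:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-50 pre-trained on ImageNet; its early layers already
# detect the outlines, curves and lines mentioned above.
model = models.resnet50(pretrained=True)

# Freeze the transferred layers so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final prediction layer with a new two-class head:
# horse vs. not-horse.
model.fc = nn.Linear(model.fc.in_features, 2)

# The new loss: penalizes deviations between predictions and labels.
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# Stand-in batch: 8 images (3x224x224), labels 0 = not horse, 1 = horse.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

To fine-tune the last few layers instead of just the new head, you would leave requires_grad enabled on those layers and hand their parameters to the optimizer as well.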

Word Up, Speech!

Transfer learning isn’t just for image recognition. Recurrent neural networks, often used in speech recognition, can take advantage of transfer learning, as well. However, you’ll need two similar speech-related datasets, such as a million hours of speech from a pre-existing model and 10 hours of speech specific to the new task.

As with the CNN, this new neural network’s loss layer is removed. Next, you might create two or more layers in its place that use your new speech data to help train the network and feed into a new loss layer for making predictions about speech.
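
A minimal sketch of that surgery in PyTorch, with a hypothetical stand-in for the pre-trained speech encoder (in practice its weights would come from the model trained on the million hours of speech):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a network pre-trained on ~1M hours of speech;
# its convolutional and recurrent layers learned general acoustic features.
class SpeechEncoder(nn.Module):
    def __init__(self, n_mels=80, hidden=256):
        super().__init__()
        self.conv = nn.Conv1d(n_mels, hidden, kernel_size=5, padding=2)
        self.rnn = nn.GRU(hidden, hidden, num_layers=3, batch_first=True)

    def forward(self, mels):                 # mels: (batch, n_mels, time)
        x = torch.relu(self.conv(mels))      # (batch, hidden, time)
        x, _ = self.rnn(x.transpose(1, 2))   # (batch, time, hidden)
        return x

encoder = SpeechEncoder()        # imagine loading pre-trained weights here
for p in encoder.parameters():   # keep the transferred layers frozen
    p.requires_grad = False

# The old loss layer is gone; two new layers take its place and feed a
# new output layer sized for the new task, trained on the 10 hours of data.
NEW_LABELS = 42                  # hypothetical label set for the new task
new_head = nn.Sequential(
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Linear(256, NEW_LABELS),
)

mels = torch.randn(4, 80, 200)        # stand-in batch of audio features
logits = new_head(encoder(mels))      # (4, 200, 42), ready for a new loss
```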

Baidu’s Deep Speech neural network offers a jump start for speech-to-text models, for example, allowing an opportunity to use transfer learning to bake in special speech features.

Why Transfer Learning?

Transfer learning is useful when you have insufficient data for a new domain you want handled by a neural network and there is a big pre-existing data pool that can be transferred to your problem.

So you might have only 1,000 images of horses, but by tapping into an existing CNN such as ResNet, trained with more than 1 million images, you can gain a lot of low-level and mid-level feature definitions.

For developers and data scientists interested in accelerating their AI training workflow with transfer learning capabilities, the NVIDIA Transfer Learning Toolkit offers GPU-accelerated pre-trained models and functions to fine-tune your model for various domains such as intelligent video analytics and medical imaging.

And when it’s time for deployment, you can roll out your application with an end-to-end deep learning workflow using Transfer Learning Toolkit for IVA and medical imaging.

Plus, with just a few online courses, you could become your company’s expert — and launch yourself into an entirely new career path.



Fireflies: This Call Is Being Monitored — By AI — for Quality Assurance

Krish Ramineni and Sam Udotong are quintessential Silicon Valley: They breathe code, bond over hackathons and one thrives on Soylent.

Their product’s hockey-stick growth curve isn’t typical, though: it happened simply by word of mouth rather than the usual PR blitz.

The 20-somethings are the duo behind call-automation startup Fireflies, an AI-powered platform for sales and customer support teams.

Fireflies’ customers have taken flight without the typical blast of marketing spending, said CEO and co-founder Ramineni. He and Udotong simply passed the product out to friends.

“It started spreading like wildfire within organizations because one [sales] rep starts using it and then another rep sees it and says, ‘What is that thing you’re using in your meetings? That’s pretty crazy, I want to use it.’”

Talking Business

It’s no surprise.

Fireflies’ AI addresses big pain points for enterprise call customers: automated call notes and recordings, automated customer relationship management (CRM) software entry, alerts to customer dissatisfaction in calls, sales coaching follow-up opportunities and more.

And it’s integrated with all the major web conferencing platforms, such as WebEx, Zoom and Skype for Business, as well as CRM software and tools used by call pros, including Salesforce, HubSpot, Zoho and Slack.

Fireflies plays in a lucrative software segment. Worldwide CRM software revenue reached $39.5 billion in 2017, a figure Gartner forecast to grow 16 percent last year.

Calling GPUs

Founded in 2016 and launched last year, Fireflies has amassed more than 75 million conversations and is growing at a rapid clip. It uses all that chatter to continuously train its deep learning models running on NVIDIA GPUs in the Paperspace cloud.

What’s more, it trains models for different segments of customers, domains, industries and call types. It’s constantly working to refine its prediction accuracy, tapping techniques based on Monte Carlo reinforcement learning, which uses reward functions to optimize for accuracy.
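
Fireflies hasn’t published its models or reward functions, but the Monte Carlo family of techniques it references has a simple core: sample many episodes, score each one with a reward, and average the returns observed from each state. A toy first-visit Monte Carlo sketch in Python:

```python
import random
from collections import defaultdict

# First-visit Monte Carlo value estimation on a toy 5-state chain.
# Reward is 1 if the episode reaches the goal within the step budget,
# 0 otherwise, echoing a reward function that optimizes for accuracy.
N_STATES, MAX_STEPS = 5, 8

def run_episode():
    state, visited = 0, []
    for _ in range(MAX_STEPS):
        visited.append(state)
        state += random.choice([0, 1, 1])   # noisy progress to the right
        if state >= N_STATES:
            return visited, 1.0             # success: reward 1
    return visited, 0.0                     # ran out of steps: reward 0

returns = defaultdict(list)
for _ in range(10_000):
    visited, reward = run_episode()
    for s in set(visited):                  # count each state's first visit only
        returns[s].append(reward)

# Averaged returns estimate each state's probability of success.
values = {s: sum(r) / len(r) for s, r in sorted(returns.items())}
print(values)
```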

“We no longer need to write custom rules for each organization,” said Ramineni.

Fireflies users include individuals and teams from companies like Amazon, Microsoft, Samsung, Deloitte and Twitter.

Millennial Tech

Landing on Fireflies wasn’t easy. Like many millennial founders, the two dabbled in businesses ranging from cryptocurrency to drones before settling on Fireflies. Ramineni, a San Francisco Bay Area native, decided to hunker down with Udotong in Boston to initially hammer it out.

It took little effort to persuade New Jersey native Udotong to relocate to San Francisco, where they continued coding on Fireflies. Before long, Udotong had taken to Valley startup life and began daily consumption of Soylent, the popular meal replacement drink, which has nourished him for more than two years.

Fireflies, which is now a team of a dozen employees, was angel funded by Facebook, Salesforce and Coinbase. The company is also a member of the NVIDIA Inception program, a virtual accelerator that helps startups get to market faster.


This Is Your Office on AI: UiPath’s Robotic Process Automation Software

AI can now help you get a leg up on tedious office work. Just ask Param Kahlon, who is developing teams of bots that can make life easier in the workplace.

Kahlon is chief product officer at UiPath, a pioneer in robotic process automation (RPA) software, which helps humans work with machines to automate aging software workflows.

Tasks like processing invoices, payroll and insurance claims can gobble up massive back office hours for employees jumping between spreadsheets, databases and email. Now, AI-powered software bots can open these applications and handle it all.

UiPath is growing in popularity among finance and accounting, accounts payable, claims processing, healthcare payer and contact center departments.

RPA aims to improve employee productivity to boost company services and save money. “We can help take the slack out of a business process. It could take hours, days or weeks and instead take minutes,” Kahlon said of the difference for UiPath customers.

UiPath has been early to harness the convergence of big data and advances in neural networks powered by GPUs to develop automation for office software tasks.

Founded in 2005, UiPath is leading the way in a workplace revolution for those handling rules-based tasks. It’s the type of work, built on decision processes like “if this, then that,” that machine learning does extremely well.

The RPA market is expected to reach $2.9 billion by 2021, according to research firm Forrester.

UiPath raised a $225 million Series C round at a $3 billion valuation in September. The startup, which earlier this year raised $153 million at a $1.1 billion valuation, has secured more than $400 million in funding in total from investors.

Customers include BMW Group, the American Red Cross, Fujifilm, Japan Airlines, Korea Telecom, Landmark Group, NASA, Pandora, Thomson Reuters, General Electric and Equifax.

Starting AI Bots

UiPath can handle a sequence of desktop software tasks. For example, it can learn to produce a report in Salesforce so that employees don’t have to create those reports themselves. With UiPath’s Studio software, customers can use visual diagrams to describe how they would like to automate such a process. The alternative might be costly system integrations between software.

UiPath Studio also enables customers to record a series of desktop software steps taken by a human — such as logging into programs, clicking on buttons, using drop-down menus and typing into boxes — to automate the steps for report generation and other tasks.
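
UiPath’s recorder itself is proprietary, but the record-and-replay idea is easy to sketch. Below is an illustrative Python toy, not UiPath’s API, that replays a recorded list of desktop steps with the pyautogui library; the coordinates and text are made up:

```python
import time
import pyautogui  # pip install pyautogui

# A toy replay of a recorded sequence of desktop steps, in the spirit
# of UiPath Studio's recorder. Coordinates and strings are hypothetical.
recorded_steps = [
    ("click", (120, 300)),        # e.g., click a "Reports" button
    ("type",  "Q4 pipeline"),     # type into a search box
    ("press", "enter"),           # submit the form
]

for action, arg in recorded_steps:
    if action == "click":
        pyautogui.click(*arg)
    elif action == "type":
        pyautogui.typewrite(arg, interval=0.05)
    elif action == "press":
        pyautogui.press(arg)
    time.sleep(0.5)               # give the application time to respond
```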

The company offers a bot control center as well: Its Orchestrator software is a platform for managing and monitoring the bots a company uses.

GPUs for Bots

Behind the scenes, UiPath is using computer vision, machine learning and natural language processing to automate the desktop software tasks of humans. It’s using convolutional neural networks for image recognition, for example, to understand handwritten documents and convert them into digital forms.

It uses machine learning for classification and data extraction within documents, based on a history of transaction data that’s been used for training.

UiPath is handling computationally demanding problems to deliver software automation. “We’re recording every keystroke and every click action, and we are using NVIDIA GPUs locally and in the cloud for training the models and for inference,” said Prabhdeep Singh, vice president of AI at UiPath.


Viva Las Vegas! We’re Leaving CES 2019 Laden with Awards

You know you’re leaving Las Vegas on the right note when your suitcases are stuffed with loot.

We and our partners got plenty of the shiny stuff this week at the annual International Consumer Electronics Show in Las Vegas, winning more than a dozen awards for everything from our new Big Format Gaming Display to sophisticated thin-and-light Max-Q design notebooks powered by our GeForce RTX GPUs.

The awards mirror the story we brought with us to CES: we’re transforming gaming, television and transportation, and bringing modern AI — powered by GPUs — to cars, homes and the cloud.

Here’s our latest tally of awards from this year’s CES.

  • Big Format Gaming Display – Best of CES – HP Omen X 65 Emperium, Tom’s Guide; The best gaming laptops, headsets, monitors and more – HP Omen X 65 Emperium, The Telegraph.
  • NVIDIA GeForce RTX 20-Series GPUs — Best of Innovations, CES Innovation Awards.
  • NVIDIA GeForce RTX 2060 — Best GPU, Tom’s Hardware; Best of CES, PC World; The best gaming laptops, headsets, monitors and more, The Telegraph.
  • NVIDIA Mobile RTX Graphics — Best Gaming, The Verge; Best of CES, PC World.
  • RTX Powered Laptops — Best Tech of CES 2019 – Razer Blade 15 Advanced Edition, Mashable; The Stuff CES 2019 Gadget Awards – NVIDIA RTX Laptops, Stuff; Best of CES – Asus ROG Mothership, Tom’s Guide; CES Editor’s Choice Awards – Razer Blade Advanced Gaming Laptop, USA Today; CES 2019: The best gaming laptops, headsets, monitors and more – Asus ROG Mothership, The Telegraph; Best of Show – Asus ROG Mothership, Laptop Mag; Best of CES 2019 – Asus ROG Mothership, TechAdvisor; The best laptops of CES 2019 – Asus ROG Mothership, Android Authority.


How Jason Antic Created ‘De-Oldify,’ a Popular Tool That Makes Old Photos Look New

You don’t need to be an academic or to work for a big company to get into AI.

You can just be a guy with an NVIDIA GeForce 1080 Ti and a generative adversarial network.

Jason Antic, who describes himself as “a software guy,” began digging deep into Generative Adversarial Networks earlier this year.

Next thing you know: he’s created an increasingly popular tool that colorizes old black-and-white shots to make them look good.

French village, 1950s

“I just thought that colorizing black-and-white footage was just a really cool thing to do,” Antic says in a conversation with Noah Kravitz, who hosts NVIDIA’s AI Podcast.

“I just finished my fast.ai course just two months ago. My plan was to dig deep into this neural network stuff and just hammer out a few projects,” he adds.

Antic, a website developer, was a computer science major in college more than a decade ago. But he says that helped him less than you might think.

“Honestly, artificial intelligence back then didn’t really work,” Antic says.

That changed, dramatically, starting in 2012, he says.

Young woman in the forest, Finland, ca. 1910.

“Eventually I was like I really need to get into this field because it’s mind blowing,” Antic says. “It’s really going to revolutionize the world.”

So Antic dug in, taking a number of courses. He says his AI course at fast.ai really clicked for him. He even began working part time so he could focus on his studies.

When he was done, he decided he would try to create software for colorizing photographs and began putting his training — and his NVIDIA GeForce 1080 Ti — through their paces.

“I thought this would be really ambitious, but it would be really cool if it worked,” says Antic, who adds that he struggled for weeks before ultimately getting it to work.

“It works really well, way better than I thought it would,” he says.

The vendors’ harvest day – Cork, Ireland, 1905.

The result — which has been posted to the GitHub code repository — has caused a sensation.

“You don’t have to necessarily know the domain in order to be successful, and that’s really the power of deep learning,” Antic says.

Interested in digging into AI for yourself? Listen and get inspired.

How to Tune in to the AI Podcast

Our AI Podcast is available through iTunes, Castbox, DoggCatcher, Google Play Music, Overcast, PlayerFM, Podbay, Pocket Casts, PodCruncher, PodKicker, Stitcher and Soundcloud. If your favorite isn’t listed here, email us at aipodcast [at] nvidia [dot] com.



It’s ON! Putting Next Gen in the Hands of Tens of Millions of Gamers

Gaming is thriving. An entire generation is growing up gaming. Two billion gamers and counting. They play at home, on the go and on multiple platforms, and by far the most vibrant, innovative and open is the PC.

The forces driving PC gaming are staggering in scale.

Take the battle royale genre — think Fortnite, Call of Duty Blackout and PlayerUnknown’s Battlegrounds. It’s had a meteoric rise from zero to a player base of 300 million across platforms in just two years, bringing many new players to PC gaming and GeForce.

Consider the unstoppable rise of esports. It’s the fastest growing spectator sport and is on track to amass close to 600 million fans by 2020. That’s 2x in just three years. Worldwide prize money exceeded $150 million, also 2x in three years. Performance and fast response (frames per second and latency, respectively) are the top considerations for esports gamers, making GeForce the preferred GPU for top tournaments and pro gamers.

Gaming has become the new mass medium. More than 750 million people are watching gaming live streams and videos on outlets like YouTube and Twitch.tv. In fact, Twitch made news last year by surpassing CNN in monthly viewers. Within our GeForce community, we’ve seen game capture and sharing explode, with 2 billion captures using GeForce Experience last year.

And PC gamers are the most demanding. Developers continue to produce more visually rich games. Over the past five years, the GPU performance required to play the latest games at 1080p, 60 FPS has increased 3x. And the most desired gaming monitors have doubled in resolution. The motivation for the installed base of gamers to upgrade is real: the number of gamers choosing higher-end GPUs has increased 2.5x since 2013.

Along Comes GeForce RTX

At CES we announced we’re bringing our revolutionary Turing technology to tens of millions of gamers with the new GeForce RTX 2060, priced at just $349, along with impossibly sleek new Max-Q laptops to drive today’s fastest growing gaming platform.

GeForce RTX 2060 Delivers Performance… and Much More

The RTX 2060 delivers the performance and value for the “sweet spot” of GeForce gaming. The GeForce RTX 2060 follows a line of the most popular GPUs on Steam: GeForce GTX 960, 970 and 1060. This class of GPU represents one-third of our gaming installed base. With Turing’s advanced streaming multiprocessors, the RTX 2060 delivers up to 2x the performance of the GTX 1060 on today’s games.

Turing is much more than raw performance. It represents the biggest architectural leap forward in over a decade. With a new hybrid rendering approach, Turing fuses rasterization, real-time ray tracing and AI for incredible realism in PC games. GeForce RTX paves the way for the future of PC gaming.

Gaming Ecosystem Already Onboard

The gaming industry has come together to start delivering the future of PC gaming.

Microsoft released an extension to DirectX called DirectX Raytracing (DXR) last November, laying the foundation for accelerated ray tracing and giving developers access to the capabilities of the RTX GPUs.

And the leading game engines – Epic’s Unreal Engine, Unity, EA’s Frostbite and more – are delivering seamless access to accelerated ray tracing to thousands of developers, ensuring a future of breathtakingly realistic games.

Ray Tracing Meets AI

GeForce RTX uses Turing’s 52 teraflops of Tensor Core performance to enable Deep Learning Super Sampling (DLSS). DLSS starts from a lower rendering resolution and uses a trained AI network to construct a high-quality, higher resolution image. The result is a clear, crisp picture with up to 50 percent higher performance.
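
DLSS itself is proprietary, but the underlying idea, a network trained to map lower-resolution frames to higher-resolution ones, can be sketched in a few lines of PyTorch. This is an illustrative learned upscaler, not NVIDIA’s network:

```python
import torch
import torch.nn as nn

# Minimal learned-upscaling sketch in the spirit of DLSS: render low-res,
# then let a trained network reconstruct a higher-resolution frame.
class LearnedUpscaler(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv2d(64, 3 * scale**2, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),  # rearrange channels into a larger image
        )

    def forward(self, low_res_frame):
        return self.net(low_res_frame)

model = LearnedUpscaler(scale=2)
low_res = torch.randn(1, 3, 180, 320)   # small stand-in for a rendered frame
high_res = model(low_res)               # (1, 3, 360, 640): 2x the resolution
```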

Real-Time Ray Tracing + DLSS = RTX On!

At CES, we announced DLSS is coming to Battlefield V. The beauty of ray tracing can come at the cost of some performance, but DLSS makes up the difference: with DLSS enabled, Battlefield V is able to run “RTX ON” (ray tracing + DLSS) at 1440p on an RTX 2060 while maintaining the same frame rate as with ray tracing off.

We also announced DLSS is coming to blockbuster BioWare (EA) title Anthem, which releases in February. And there are dozens of other current and upcoming titles in the pipeline.

Leaner, Meaner, Greener: RTX Brings Big Performance to Sleek Laptops

Demand for gaming laptops has grown dramatically in recent years, driven by their transformation from a compromised platform to a full gaming experience, thanks to Pascal — and now Turing — GPUs and NVIDIA’s Max-Q laptop design approach.

Gaming laptops are the fastest-growing gaming platform, and GeForce gaming laptop sales have grown 10x in five years. A key driver: Max-Q, an innovative approach to crafting the world’s thinnest, fastest, quietest gaming laptops.

Max-Q has enabled a new class of GeForce gaming laptops, with impossibly sleek designs thinner than 20 mm, narrow bezels, 144 Hz displays and extended battery life. Seventeen Max-Q laptops have already been announced this year, more than double last year’s total. And overall a record 40+ new RTX models in more than 100 configurations start shipping Jan. 29 from the world’s top OEMs.

RTX Is for Creators

GeForce RTX desktops and laptops are great news not just for gamers but for the world’s creator community, whose numbers exceed 20 million. Whether you create complex 3D animations or edit high-resolution videos, you’ll be able to experience a new level of productivity on GeForce RTX.

This week at CES, we announced collaborations with Autodesk and RED for 3D animation and 8K video editing. We also demonstrated how we’re bringing hardware-accelerated ray tracing, AI-enhanced graphics and advanced video processing to our new thin and light GeForce RTX Max-Q laptops.

We also announced a major upgrade to OBS, the standard tool for game broadcasters. With GeForce RTX, any game stream can achieve pro quality from a single gaming rig, or even be broadcast on the go from thin and light RTX Max-Q laptops.

Finally, we announced we are working with HTC to push the state of the art in virtual reality with foveated rendering using RTX GPUs and HTC’s new Vive Pro Eye headset.

RTX Is the Next Generation of Gaming — and It’s ON!

The RTX 2060, priced at just $349, together with a record 40+ RTX gaming laptops open up the next generation of PC gaming to tens of millions of gamers.

The next generation of gaming is here.


“Next Gen Is On” with RTX: NVIDIA Opens CES with Launch of GeForce RTX 2060, 40+ Laptop Models

NVIDIA kicked off CES 2019 on Sunday night by outlining sweeping plans to put the radical new graphics technology of real-time ray tracing to work in gaming and content creation, across a dizzying array of laptops and desktops.

Speaking to a crowd of nearly a thousand attendees, NVIDIA CEO Jensen Huang announced the GeForce RTX 2060, which at $349 makes NVIDIA’s new Turing architecture accessible to tens of millions of PC gamers.

“RTX is here, next gen is on,” said Huang, speaking to a packed house of press, partners and professionals from around the consumer electronics and automotive industries at the MGM Grand in Las Vegas.

Huang’s unveiling of the RTX 2060 was the highlight of a flurry of announcements, including more than 40 new laptop models in 100+ configurations based on new Turing-generation GPUs, and new monitors packed with NVIDIA G-SYNC technology for silky smooth gaming.

“For $349 you can enjoy next generation gaming,” Huang said to broad hoots from the audience. “The long awaited RTX 2060 is here.”

Stunning Demos

Kicking off a 90-minute talk studded with stunning demos, Huang explained how Turing represents the biggest shift in real-time graphics since NVIDIA invented the programmable shader more than 15 years ago.

Turing combines support for real-time ray tracing — thanks to its RT Cores — and deep learning — thanks to its Tensor Cores — in addition to a new generation of programmable shaders.

“It learns from great content what it should look like,” he said. “Then, one day, we give it an image and it goes through a network called DLSS and what comes out of it is a beautiful image. This is the neural network we created to improve an image.”

The combination reinvents graphics, allowing designers, artists and developers to create photo-realistic experiences that were only possible in scenes painstakingly generated over days, or weeks, on sprawling render farms.

The approach received immediate industry support from Microsoft, the companies that offer the most widely used game engines, and some of the world’s biggest game developers.

Huang then segued to a stunning demo that’s sure to go viral. Those who have tuned into any of NVIDIA’s recent events know the first part of the tale: a futuristic hero is equipped with armor, with the whole thing rendered in stunning detail, thanks to real-time ray tracing.

This demo, though, has a twist. Our hero launches himself into the sky, before sticking a suitably heroic landing. And then he gets stuck. The takeaway: real-time ray tracing makes real-time cinematic graphics possible.

It was the first of a cavalcade of demos with a single message: RTX has ushered in a new generation of gaming, with truly cinematic graphics. In addition, new benchmarks, such as 3DMark Port Royal, make it easier to measure the combined impact of RTX’s new features on gaming experiences.

Another showstopper: Anthem, available Feb. 15, a fast-paced game set in a sprawling open world from BioWare, the acclaimed Canadian studio behind the Dragon Age and Mass Effect franchises.

All of these experiences will be more accessible than ever, thanks to the new GeForce RTX 2060. The new graphics card is the ideal upgrade as games get more demanding and gamers look to trade up to the latest generation of graphics technology, Huang said.

NVIDIA also continues to push forward the state of the art for gaming displays. Launched in 2013, G-SYNC introduced gamers to smooth variable refresh rate gameplay, with no screen tearing and no V-Sync input lag.

Huang announced plans to bring NVIDIA’s expertise with display technology to the broader pool of adaptive sync displays, announcing 12 G-SYNC Compatible displays and bringing silky smooth and beautiful gameplay to more gamers.

Real-Time Ray Tracing on a Laptop

These advancements are supported by a host of new systems from major PC brands featuring GeForce RTX. Gamers can also dip into a range of new thin-and-light laptops featuring GeForce RTX 2080, RTX 2070 and RTX 2060 GPUs. Huang announced a record number of laptops from all the top brands, many of them thin-and-light models featuring NVIDIA’s Max-Q design.

These systems make it possible for gamers to enjoy the latest ray-tracing enhanced games — such as Battlefield V — while playing on battery power.

“We’ve redefined mobile gaming,” he said, holding a sleek new notebook on which he played Battlefield V. “This is twice as fast as Playstation Pro.”

And content creators will be able to free themselves from bulky workstations, rendering animations and editing high-definition video content on the go.

There are 8 million people who broadcast games, many using OBS, the world standard for encoding and streaming gameplay. Many more use VR, and many create content on their rigs.

As a result, Huang announced that RTX will support interactive rendering in Autodesk Arnold, as well as 8K video editing of RED camera footage.

And we’re partnering with OBS on pro-quality broadcast streaming from a single PC, rather than the two currently required. RTX can also be used for high-end VR through VirtualLink, a single wire that connects the GPU to the head-mounted display, plus other features with HTC, which the company will announce later at CES.

All this is “so we could create the perfect GPU for gamers, rather than just the perfect gaming GPU,” Huang said.


Word Up: AI Writes New Chapter for Language Buffs

Who knew AI would become such a wordsmith? Not long ago, Spence Green and John DeNero were perplexed that the latest and greatest natural language processing research wasn’t yet in use by professional translators.

The Stanford grads set out to change that. In 2015, they co-founded Lilt, an AI software platform for translation and localization services.

Applications for natural language processing have exploded in the past decade as advances in recurrent neural networks powered by GPUs have offered better performing AI. That’s enabled startups to offer the likes of white-label voice services, language tutors and chatbots.

Lilt’s AI software platform was developed for translation experts to use on localization projects and train networks as they work. The hybrid human-machine platform is set up to boost the speed of translation and the domain expertise for specific projects.

Lilt software acts like the Google auto-complete feature for filling in search queries. The software allows users to review each line of text in one language and translate it into another, but it gives entire lines of translation suggestions that translators can accept, reject or alter.

As translators interact with the text, their corrections help train the neural networks and are immediately put back into the software. “Every one of our users has a different set of parameters that is trained for them,” said DeNero.

Lilt software — available for more than 30 languages — can improve the speed of translation projects by as much as five times, said DeNero.

Lilt is a member of NVIDIA Inception, a virtual accelerator program that helps startups get to market faster. Customers of Lilt include Canva, Zendesk and Hudson’s Bay Company.

NLP on Transformer

What sets Lilt apart in its approach to natural language processing, according to its founders, is its deployment of services built on next-generation deep neural networks. Lilt harnesses an alternative to RNNs known as the Transformer neural network architecture, a model developed from research (Attention Is All You Need) at Google Brain in December 2017.

Transformer architecture differs from the sequential nature of RNNs, which give more weight to the last words in a sentence to determine the next. Instead, in each step, it applies what’s known as a self-attention technique that determines the next word based on a comparison score with all of the words in a sentence.

This newer method is considered ideal for language understanding. The architecture enables more parallelization, providing higher levels of translation quality, according to the paper’s authors.
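
The self-attention step is compact enough to sketch directly. Here is a minimal single-head version in NumPy (no masking or multi-head projections), in which every word’s output is a weighted mixture over all the words in the sentence:

```python
import numpy as np

# Scaled dot-product self-attention, the core step the Transformer applies
# in place of an RNN's sequential recurrence: each word is compared against
# every word in the sentence, and the comparison scores weight the output.
def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[1])    # all-pairs comparison scores
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over the sentence
    return weights @ V                        # attention-weighted mixture

d_model, d_head, n_words = 16, 8, 5
rng = np.random.default_rng(0)
X = rng.normal(size=(n_words, d_model))       # one embedding per word
Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)           # (5, 8): one vector per word
```

Because every word attends to every other word in one matrix multiply, the whole sentence can be processed in parallel, which is the parallelization advantage the paper’s authors describe.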

NVIDIA GPUs recently set AI performance records, including for training the Transformer neural network in just 6.2 minutes.

Fast, Personalized Translation

The architecture enables a fast and personalized software platform for translators. This is important for Lilt because it is computationally demanding to have many different customized user profiles that are working on and training the software at the same time.

Lilt performs translation interactions while people are typing, so they have to happen quickly — under 300 milliseconds, said DeNero. This means Lilt’s service has to maintain some neural networks that perform static functions and others that need to be adapted live.

“We need GPUs in the cloud because we are training the system as they are working,” DeNero said.


Friendly Work Rivalry Spurs Spectacular Holiday Light Shows in Austin

They say everything is bigger in Texas. And it’s especially true when it comes to spectacular holiday light shows, thanks to a decade-long rivalry between two Austin-based NVIDIA engineers.

John Storms and Lee Franzen festoon their respective homes with elaborate decorations played out in thousands of lights — Santa in a sleigh pulled by galloping reindeer, rows of glistening trees, and gigantic snowflakes. And it’s all synchronized to music.

For Franzen, a senior program manager, planning lasts throughout the year, beginning just as the lights from the previous year’s show come down in January. He flips the switch on his new show just before Thanksgiving with an opening party that thrills neighbors who come out in force to watch the display begin.

Holiday light show at John Storms’ home

Storms, a compiler verification manager, is a veteran holiday decorator — channeling the lighting-display enthusiasm, though not the bumbling, of Clark Griswold in the 1989 classic movie “Christmas Vacation.” Instead of using energy-sucking incandescent bulbs, he has long used hyper-efficient LED lights. An authority on the engineering behind lighting shows, Storms lectures on software programming at events for holiday show enthusiasts.

While Storms got his start as a child, helping to decorate the family home where they liked to “go all out,” Franzen’s enthusiasm for showstopping decorating came later, inspired in part by Storms’ ambitious light shows.

With their backgrounds in engineering, each developed their own styles. Franzen likes a musical mix to sync his show to, while Storms picks one song. They share a similar taste in decor, with glittering trees and swirling snowflakes. What brings them together is their smart use of NVIDIA technology that takes their show from static to spectacular.

Dialing Up the Dazzle

Constructing the displays takes place over the summer, and each year they include something new. This year, Franzen attached a 12-by-12-foot RGB matrix holding a couple of thousand pixels to an exterior wall to showcase NVIDIA’s logo in sparkling (and brand-appropriate) green lights. The matrix panels are the same as you’ll see on the giant displays around New York’s Times Square and bring big city dazzle into a suburban neighborhood.

Franzen also upgraded the gear powering his show this year, deploying two NVIDIA GeForce GTX 1080 Ti GPUs for his light display as they increase the speed and smoothness of the rendering simulations he uses to get lights and songs synced properly. With 100,000+ lights, there’s a lot to coordinate.

“Everything is so fast,” he says. “You ‘write’ a picture of your house on a simulator tool with each light fixture represented and can use the software to see how it looks. This way you can preview songs and shows. And this year, as we added so many more pixels, I knew I’d need more power.”

Lee Franzen’s holiday light show, complete with NVIDIA’s logo

Prize-Winning Light Shows

This year, Storms used NVIDIA Quadro M1000M graphics on a PC to coordinate the sequence and render his lights, which include 6,300 individual color-controlled pixels and several thousand regular LED lights. Storms says he stopped counting lights several years ago after surpassing 25,000.

Music is a key factor in the complex sequencing, and this year the neighborhood light show contest where Storms lives was being judged by a local Girl Scout troop. Aiming to please the crowd with something they’d love, he synced the light show to “Baby Shark” and snagged first place.

Franzen, who involves his neighborhood in the annual “lights-on” party, kicks off fundraising to help local communities. With hundreds of families turning up for the show, he gathered nearly 200 cans of food for a food bank and hundreds of toys that are shared with families in need and nearby hospitals.

While Franzen’s holiday decorations often include a 20-foot inflatable Santa, with a 14-foot inflatable deer next to him, Storms has a sleigh pulled by a herd of reindeer across his lawn and more than a dozen sparkling mini-trees in front of two giant trees decorated with strings of lights and dangling snowflakes.

John Storms has illuminated trees and a sleigh with reindeer

For more, see John Storms’ “Listen to Our Lights” YouTube channel.


Flying High Again: Airplane Turnarounds Take a Spin with AI

Airlines generally aren’t soaring with fans, but AI could shorten delays to get people flying high again.

Zurich-based Assaia is developing AI that aims to compress the time between landing and liftoff.

The self-funded startup, founded earlier this year, is using image recognition algorithms sped by NVIDIA GPUs to process video feeds and deliver insights for management of airplane turnarounds.

Airport ground crews today don’t rely on sophisticated systems. Airports lack the digital tools to manage all the activities of an aircraft turnaround in real time, said Max Diez, founder and CEO of Assaia.

“We saw that turnarounds in the aviation industry are really a black box. Sometimes the catering truck will hit the plane and cause a dent,” Diez said.

Air carrier delays caused by crews servicing planes accounted for 5.8 percent of all airplanes running late, according to an October report from the U.S. Department of Transportation. That’s roughly six times the delays to flights caused by extreme weather.

Minimizing delays promises to boost airline profits. Delays can cost an airline about $70 per minute, according to the Bureau of Transportation Statistics.

AI Airport Operations

AI can help get a grip on this. That’s because airports capture a ton of video that can be fed into Assaia’s software and used to help manage turnarounds. The monitored video helps airlines track the vendors servicing planes before takeoff to keep turnarounds on schedule.

Airlines can hold their vendors accountable for carrying out services quickly. Catering trucks, cleaners and other service providers all have what’s known as service-level agreements with airlines to abide by in order to help keep planes running on time. Those spell out how long a provider is given to perform a service before receiving a financial penalty.

Assaia’s software helps record instances of service that run afoul of those agreements.

“We provide a real-time dashboard that can help them understand whether they (airlines) need to intervene or not — like did the gasoline arrive or not,” said Diez.

GPUs at Gates

Assaia is working with more than a dozen airports in the U.S., Europe, the Middle East and Asia. Among its customers are some of the largest U.S. carriers. At London Heathrow, Assaia just completed an extensive pilot in collaboration with British Airways.

It has arrangements in place with some of them to put its GPU-driven systems in or near the airline terminals, and its units are already processing hundreds of video streams. A single system can process up to six video feeds at once and push the resulting information out to its customers. It’s planning to use Jetson AGX Xavier for the next iteration of its systems.

The startup trained its neural networks on several years’ worth of video from airfields around the world. The neural nets understand how different objects on the airfield look, move and interact, said Nikolay Kobyshev, CTO at Assaia.
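
Assaia’s stack isn’t public, but the general pattern, running a detector over every frame of a gate camera and timestamping what it sees, can be sketched with off-the-shelf tools. Below is an illustrative Python example using OpenCV and a COCO-pre-trained torchvision detector as a stand-in; the video filename is hypothetical:

```python
import cv2                      # pip install opencv-python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Per-frame detection on a gate camera feed. The COCO detector knows
# generic classes like "truck", standing in for Assaia's custom models.
model = fasterrcnn_resnet50_fpn(pretrained=True).eval()

cap = cv2.VideoCapture("gate_camera.mp4")   # hypothetical video file
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        detections = model([tensor])[0]
    for box, label, score in zip(detections["boxes"],
                                 detections["labels"],
                                 detections["scores"]):
        if score > 0.8:
            # e.g., log when a catering truck first appears at the gate
            print(label.item(), box.tolist())
cap.release()
```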

“If we can make airplane turnarounds only a little bit more efficient, it’s already a significant addition to the airlines’ revenue — which in turn will translate at almost 100 percent to incremental profit,” said Diez.

And passengers will get to their destinations that much sooner.
