NVIDIA’s Marc Hamilton on Building Cambridge-1 Supercomputer During Pandemic

Since NVIDIA announced construction of the U.K.’s most powerful AI supercomputer — Cambridge-1 — Marc Hamilton, vice president of solutions architecture and engineering, has been (remotely) overseeing its building across the pond.

The system, which will be available for U.K. healthcare researchers to work on pressing problems, is being built on NVIDIA DGX SuperPOD architecture for a whopping 400 petaflops of AI performance.

Located at Kao Data, a data center using 100 percent renewable energy, Cambridge-1 would rank among the world’s top three most energy-efficient supercomputers on the latest Green500 list.

Hamilton points to the concentration of leading healthcare companies in the U.K. as a primary reason for NVIDIA’s decision to build Cambridge-1.

AstraZeneca, GSK, Guy’s and St Thomas’ NHS Foundation Trust, King’s College London, and Oxford Nanopore have already announced their intent to harness the supercomputer for research in the coming months.

Construction has been progressing at NVIDIA’s usual speed-of-light pace, with just final installations and initial tests remaining.

Hamilton promises to provide the latest updates on Cambridge-1 at GTC 2021.

Key Points From This Episode:

  • Hamilton gives listeners an explainer on Cambridge-1’s scalable units, or building blocks — NVIDIA DGX A100 systems — and how just 20 of them can provide the equivalent of hundreds of CPUs (a quick back-of-envelope on the performance figures follows this list).
  • NVIDIA intends for Cambridge-1 to accelerate corporate research in addition to that of universities. Among the latter is King’s College London, which has already announced that it’ll be using the system.
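
A rough back-of-envelope on those figures, assuming NVIDIA’s published rating of roughly 5 petaflops of AI performance per DGX A100 system (a spec-sheet figure, not one quoted in the episode):

    # Back-of-envelope only; 5 PFLOPS of AI performance per DGX A100 is NVIDIA's
    # published rating, not a number from this episode.
    PFLOPS_PER_DGX_A100 = 5
    print(20 * PFLOPS_PER_DGX_A100)    # ~100 petaflops from a 20-system scalable unit
    print(400 // PFLOPS_PER_DGX_A100)  # ~80 systems behind Cambridge-1's 400 petaflops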

Tweetables:

“With only 20 [DGX A100] servers, you can build one of the top 500 supercomputers in the world” — Marc Hamilton [9:14]

“This is the first time we’re taking an NVIDIA supercomputer by our engineers and opening it up to our partners, to our customers, to use” — Marc Hamilton [10:17]

You Might Also Like:

How AI Can Improve the Diagnosis and Treatment of Diseases

Medicine — particularly radiology and pathology — has become more data-driven. The Massachusetts General Hospital Center for Clinical Data Science — led by Mark Michalski — promises to accelerate that, using AI technologies to spot patterns that can improve the detection, diagnosis and treatment of diseases.

NVIDIA Chief Scientist Bill Dally on Where AI Goes Next

This podcast is full of words from the wise. One of the pillars of the computer science world, NVIDIA’s Bill Dally joins to share his perspective on the world of deep learning and AI in general.

The Buck Starts Here: NVIDIA’s Ian Buck on What’s Next for AI

Ian Buck, general manager of accelerated computing at NVIDIA, shares his insights on how relatively unsophisticated users can harness AI through the right software. Buck helped lay the foundation for GPU computing as a Stanford doctoral candidate, and delivered the keynote address at GTC DC 2019.

Miracle Qure: Founder Pooja Rao Talks Medical Technology at Qure.ai

Pooja Rao, a doctor, data scientist and entrepreneur, wants to make cutting-edge medical care available to communities around the world, regardless of their resources. Her startup, Qure.ai, is doing exactly that, with technology that’s used in 150+ healthcare facilities in 27 countries.

Rao is the cofounder and head of research and development at the Mumbai-based company, which started in 2016. Qure.ai is also a member of the NVIDIA Inception startup accelerator program. The company develops AI technology that interprets medical images, with a focus on pulmonary and neurological scans.

Qure.ai technology has proven extremely useful in rapidly diagnosing tuberculosis, a disease that infects millions each year and can cause death if not treated early. By providing fast diagnoses and filling the gap in areas with fewer trained healthcare professionals, Qure.ai is saving lives.

The company’s AI is also helping prioritize critical cases in teleradiology. Teleradiologists remotely analyze large volumes of medical images, often with no way of knowing which scans show a time-sensitive issue, such as a brain hemorrhage. Qure.ai technology analyzes and prioritizes the scans for them, reducing the time it takes to read critical cases by 97 percent, according to Rao.
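
As a rough illustration of what that kind of worklist triage could look like in code (a toy sketch only; the field names, scores and scoring scale here are hypothetical, not Qure.ai’s implementation), a priority queue keyed on a model’s risk score keeps the most critical scans at the front of a teleradiologist’s queue:

    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Study:
        # heapq is a min-heap, so store the negated risk score to pop highest-risk first.
        priority: float
        scan_id: str = field(compare=False)

    def build_worklist(scored_scans):
        """scored_scans: iterable of (scan_id, risk_score) pairs, risk_score in [0, 1]."""
        heap = []
        for scan_id, risk in scored_scans:
            heapq.heappush(heap, Study(priority=-risk, scan_id=scan_id))
        return heap

    def next_case(heap):
        """Return the highest-risk scan still waiting to be read."""
        return heapq.heappop(heap).scan_id

    # Example: the suspected hemorrhage jumps ahead of routine chest X-rays.
    worklist = build_worklist([("chest-001", 0.12), ("head-042", 0.97), ("chest-007", 0.35)])
    print(next_case(worklist))  # head-042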

Right now, a major focus is helping fight COVID-19 — Qure.ai’s AI tool qXR is helping monitor disease progression and provide a risk score, aiding triage decisions.

Eventually, Rao anticipates building Qure.ai technology directly into medical imaging machinery to identify areas that need to be imaged more closely.

Key Points From This Episode:

  • Qure.ai has just received its first U.S. FDA approval. Its technology has also been acknowledged by the World Health Organization, which recently officially endorsed AI as a means to diagnose tuberculosis, especially in areas with fewer healthcare professionals.
  • Because Qure.ai’s mission is to create AI technology that can function in areas with limited resources, it has built systems that have learned to work with patchy internet and images that aren’t of the highest quality.
  • In order to be a global tool, Qure.ai partnered with universities and hospitals to train on data from patients of different genders and ethnicities from around the world.

Tweetables:

“You can have the fanciest architectures, but at some point it really becomes about the quality, the quantity and the diversity of the training data.” — Pooja Rao [7:46]

“I’ve always thought that the point of studying medicine was to be able to improve it — to develop new therapies and technology.” — Pooja Rao [18:57]

You Might Also Like:

How Nuance Brings AI to Healthcare

Nuance, a pioneer of voice recognition technology, is now bringing AI to the healthcare industry. Karen Holzberger, vice president and general manager of Nuance’s Healthcare Diagnostic Solutions business, talks about how their technology is helping physicians make people healthier.

Exploring the AI Startup Ecosystem with NVIDIA Inception’s Jeff Herbst

Jeff Herbst, vice president of business development at NVIDIA and head of NVIDIA Inception, is a fixture of the AI startup ecosystem. He joins the NVIDIA podcast to talk about how Inception is accelerating startups in every industry.

Anthem Could Have Healthcare Industry Singing a New Tune

Health insurance company Anthem is using AI to help patients personalize and better understand their healthcare information. Rajeev Ronanki, senior vice president and chief digital officer at Anthem, talks about how AI makes data as useful as possible for the healthcare giant.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.

Making Machines More Human: Author Brian Christian Talks the Alignment Problem

Not many can claim to be a computer programmer, nonfiction author and poet, but Brian Christian has established himself as all three.

Christian has just released his newest book, The Alignment Problem, which delves into the disparity that occurs when AI models don’t do exactly what they’re intended to do.

The book follows on the success of Christian’s previous work, The Most Human Human and Algorithms to Live By. Now a visiting scholar at UC Berkeley, Christian joined AI Podcast host Noah Kravitz to talk about the alignment problem and some new techniques being used to address the issue.

The alignment problem can arise for a range of reasons — such as data bias, or datasets used incorrectly and out of context. As AI takes on a variety of tasks, from medical diagnostics to parole and sentencing decisions, machine learning researchers are increasingly concerned about the problem.

Listen to the full podcast to hear about this and more — including Christian’s book club experience with Elon Musk and why he chose to double major in philosophy and computer science.

Key Points From This Episode:

  • The Alignment Problem features insights from hundreds of interviews Christian did with those he calls “first responders” to the ethical and scientific concerns surrounding the issue. He believes this group is evolving into a new interdisciplinary field.
  • Christian is also director of technology at McSweeney’s Publishing and science communicator in residence at the Simons Institute for the Theory of Computing. He talks to Kravitz about how he managed to combine his love for both computer science and creative writing in his current career.

Tweetables:

“Philosophy and computer science are really on a collision course.” — Brian Christian [20:23]

“This new interdisciplinary field, thinking about … how exactly are we going to get human norms into these ML systems.” — Brian Christian [26:25]

You Might Also Like:

Keeping an Eye on AI: Building Ethical Technology at Salesforce

Kathy Baxter, the architect of ethical AI practice at Salesforce, is helping her team and clients create more responsible technology. To do so, she supports employee education, the inclusion of safeguards in Salesforce technology and collaboration with other companies to improve ethical AI across industries.

Teaching Families to Embrace AI

Tara Chklovski is CEO and founder of Iridescent, a nonprofit that provides access to hands-on learning opportunities to prepare underrepresented children and adults for the future of work. She talks about everything from the UN’s AI for Good Global Summit to the AI World Championship.

When AI Meets Sci-Fi — A Talk with Award-Winning Author Ken MacLeod

Ken MacLeod, an award-winning science fiction author whose work dives into the relationship between man and machine, discusses his latest book The Corporation Wars: Emergence. The final volume in an acclaimed trilogy, it features sentient robots, computer AIs that oversee Earth, and human minds running in digital simulations.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.

Take Note: Otter.ai CEO Sam Liang on Bringing Live Captions to a Meeting Near You

Sam Liang is making things easier for the creators of the NVIDIA AI Podcast — and just about every remote worker.

He’s the CEO and co-founder of Otter.ai, which uses AI to produce speech-to-text transcriptions in real time or from uploaded recordings. The platform has a range of capabilities, from differentiating between multiple speakers, to understanding accents, to filtering out background noise.

And now, Otter.ai is making live captioning possible on a variety of platforms, including Zoom, Skype and Microsoft Teams. Even Liang’s conversation with AI Podcast host Noah Kravitz was captioned in real time over Skype.

This new capability has been enthusiastically received by remote workers — Liang says that Otter.ai has already transcribed tens of millions of meetings.

Liang envisions even more practical uses for Otter.ai’s live captions. The platform can already identify keywords; soon, he thinks, it will recognize action items, help manage agendas and provide notifications.
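
To make the keyword and action-item idea concrete, here is a deliberately naive sketch, simple phrase matching over transcript lines. The trigger phrases and transcript below are invented for illustration; Otter.ai’s actual models are far more sophisticated than this.

    import re

    ACTION_TRIGGERS = re.compile(
        r"\b(i will|i'll|let's|we need to|action item|follow up|by friday|next week)\b",
        re.IGNORECASE,
    )

    def flag_action_items(transcript_lines):
        """Return (speaker, text) pairs whose text looks like a commitment or task."""
        flagged = []
        for speaker, text in transcript_lines:
            if ACTION_TRIGGERS.search(text):
                flagged.append((speaker, text))
        return flagged

    meeting = [
        ("Sam", "Thanks everyone for joining."),
        ("Noah", "I'll send the draft agenda by Friday."),
        ("Sam", "Great, let's follow up next week."),
    ]
    for speaker, text in flag_action_items(meeting):
        print(f"{speaker}: {text}")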

Key Points From This Episode:

  • Otter.ai was founded in 2016 and is Liang’s second startup, after Alohar, a company focused on mobile behavior services. Once Alohar was acquired, Liang reflected that he needed better tools to help transcribe and share meetings, inspiring him to found Otter.ai.
  • The company’s AI model was built from scratch. Although Siri and Alexa predate it, Otter.ai needed to comprehend multiple voices that could overlap and vary in accents — a different, more complex task than understanding and responding to just one voice.

Tweetables:

“Though it’s been growing steadily before COVID, people have been using Otter on their laptop or on iOS or Android devices … you can use it anywhere.” — Sam Liang [7:32]

“Otter is your new meeting assistant. People will have the peace of mind that they don’t have to write down everything themselves.” — Sam Liang [22:07]

You Might Also Like:

Hugging Face’s Sam Shleifer Talks Natural Language Processing

Research engineer Sam Shleifer talks about Hugging Face’s natural language processing technology, which is in use at over 1,000 companies, including Apple, Bing and Grammarly, across fields ranging from finance to medical technology.

Pod Squad: Descript Uses AI to Make Managing Podcasts Quicker, Easier

Serial entrepreneur Andrew Mason talks about his company, Descript Podcast Studio, which is using AI, NLP and automatic speech synthesis to make podcast editing easier and more collaborative.

How SoundHound Uses AI to Bring Voice and Music Recognition to Any Platform

SoundHound made its name as a music identification service. Since then, it’s leveraged its 10+ years in data analytics to create a voice recognition tool that companies can bake into any product. SoundHound VP of Product Marketing Mike Zagorsek speaks about how the company has grown into a significant player in voice-driven AI.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.

In a Quarantine Slump? How One High School Student Used AI to Stay on Track

Canadian high schooler Ana DuCristea has a clever solution for the quarantine slump.

Using AI and natural language processing, she programmed an app capable of setting customizable reminders so you won’t miss any important activities, like baking banana bread or whipping up Dalgona coffee.

The project’s emblematic of how a new generation — with access to powerful technology and training — approaches the once exotic domain of AI.

A decade ago, deep learning was the stuff of elite research labs with big budgets.

Now it’s the kind of thing a smart, motivated high school student can knock out to solve a tangible problem.

DuCristea’s been interested in coding from childhood, and spends her spare time teaching herself new skills and taking online AI courses. After winning a Jetson Nano Developer Kit this summer at AI4ALL, an AI camp, she set to work remedying one of her pet peeves — the limited functionality of reminder applications.

She’s long envisioned a more useful app that could snooze for more specific lengths of time, and set reminders for specific tasks, dates and times. Using the Nano and her background in Python, DuCristea spent her after-school hours creating an app that does just that.

With the app, users can message a bot on Discord requesting a reminder for a specific task, date and time. DuCristea has shared the app’s code on GitHub, and plans to continue training it to increase its accuracy and capabilities.

Key Points From This Episode:

  • Her first hands-on experience with the Jetson Nano has only strengthened her intent to pursue software or computer engineering at college, where she’ll continue to learn more about what area of STEM she’d like to focus on.
  • DuCristea’s interest in programming and electronics started at age nine, when her father gifted her a book on Python and she found it so interesting that she worked through it in a week. Since then, she’s taken courses on coding and shares her most recent projects on GitHub.
  • Programming the app took some creativity, as DuCristea didn’t have a large dataset to train on. After trying neural networks and vectorization, she eventually found that template searches worked best for her limited set of examples; a rough sketch of that approach follows this list.
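
Here is a minimal sketch of what template-based parsing can look like for this kind of bot. The patterns and message format are invented for illustration and are not DuCristea’s actual code, which lives on her GitHub:

    import re
    from datetime import datetime

    # A couple of hand-written templates; with only a small set of examples,
    # patterns like these can beat a model trained on too little data.
    TEMPLATES = [
        re.compile(r"remind me to (?P<task>.+) on (?P<date>\d{4}-\d{2}-\d{2}) at (?P<time>\d{1,2}:\d{2})", re.I),
        re.compile(r"remind me to (?P<task>.+) at (?P<time>\d{1,2}:\d{2}) on (?P<date>\d{4}-\d{2}-\d{2})", re.I),
    ]

    def parse_reminder(message):
        """Return (task, datetime) if the message matches a known template, else None."""
        for pattern in TEMPLATES:
            match = pattern.search(message)
            if match:
                when = datetime.strptime(f"{match['date']} {match['time']}", "%Y-%m-%d %H:%M")
                return match["task"].strip(), when
        return None

    print(parse_reminder("Remind me to bake banana bread on 2021-04-03 at 15:30"))
    # ('bake banana bread', datetime.datetime(2021, 4, 3, 15, 30))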

Tweetables:

“There’s so many programs, even exclusively for girls now in STEM — I would say go for them.” — Ana DuCristea [14:55]

“The Jetson Nano is a lot more accessible than most things in AI right now.” — Ana DuCristea [18:51]

You Might Also Like:

AI4Good: Canadian Lab Empowers Women in Computer Science

Doina Precup, associate professor at McGill University and research team lead at AI startup DeepMind, speaks about her personal experiences, along with the AI4Good Lab she co-founded to give women more access to machine learning training.

Jetson Interns Assemble! Interns Discuss Amazing AI Robots They’re Building

NVIDIA’s Jetson interns, recruited at top robotics competitions, discuss what they’re building with NVIDIA Jetson, including a delivery robot, a trash-disposing robot and a remote control car to aid in rescue missions.

A Man, a GAN and a 1080 Ti: How Jason Antic Created ‘De-Oldify’

Jason Antic explains how he created his popular app, De-Oldify, with just an NVIDIA GeForce 1080 Ti and a generative adversarial network. The tool colors old black-and-white shots for a more modern look.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.

Hey, Mr. DJ: Super Hi-Fi’s AI Applies Smarts to Sound

Brendon Cassidy, CTO and chief scientist at Super Hi-Fi, uses AI to give everyone the experience of a radio station tailored to their unique tastes.

Super Hi-Fi, an AI startup and member of the NVIDIA Inception program, develops technology that produces smooth transitions, intersperses content meaningfully and adjusts volume and crossfade. Started three years ago, Super Hi-Fi first partnered with iHeartRadio and is now also used by companies such as Peloton and Sonos.

Early results show that users like the personalized approach. Cassidy notes that when the company tested MagicStitch, a tool that eliminates the gap between songs, customers who had it turned on spent 10 percent more time streaming music.

Cassidy’s a veteran of the music industry — from Virgin Digital to the Wilshire Media Group — and recognizes this music experience is finally possible due to GPU acceleration, accessible cloud resources and AI powerful enough to process and learn from music and audio content from around the world.

Key Points From This Episode:

  • Cassidy, a radio DJ during his undergraduate and graduate careers, notes how difficult it is to “hit the post” — or to stop speaking just as the singing of the next song begins. Super Hi-Fi’s AI technology uses deep learning to understand and achieve that timing; the sketch after this list illustrates the basic arithmetic.
  • Super Hi-Fi’s technology is integrated into the iHeartRadio app, as well as Sonos Radio stations. Cassidy especially recommends the “Encyclopedia of Brittany” station, which is curated by Alabama Shakes’ musician Brittany Howard and integrates commentary and music.
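
As a back-of-envelope illustration of the timing problem (a toy calculation with made-up numbers, not Super Hi-Fi’s method), if a model can estimate where the vocals begin in the next track, the track’s start time can be chosen so the DJ’s speech ends right at the vocal onset:

    def music_start_offset(speech_duration_s, vocal_onset_s):
        """
        Seconds after the DJ starts talking at which the next track should begin,
        so the talk ends exactly as the vocals come in ("hitting the post").
        A negative value means the track's intro is too short to cover the voiceover.
        """
        return speech_duration_s - vocal_onset_s

    # 12 s of DJ talk over a track whose vocals start 8 s in:
    # start the track 4 s into the voiceover.
    print(music_start_offset(speech_duration_s=12.0, vocal_onset_s=8.0))  # 4.0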

Tweetables:

“This AI is trying to create a form of art in the listening experience.” — Brendon Cassidy [14:28]

“I hope we’re improving the enjoyment that listeners are getting from all of the musical experiences that we have.” — Brendon Cassidy [28:55]

You Might Also Like:

How Yahoo Uses AI to Create Instant eSports Highlight Reels

Like any sports fan, eSports followers want highlight reels of their kills and thrills as soon as possible, whether it’s StarCraft II, League of Legends or Heroes of the Storm. Yale Song, senior research scientist at Yahoo! Research, explains how AI can make instant eSports highlight reels.

Pierre Barreau Explains How Aiva Uses Deep Learning to Make Music

AI systems have been trained to take photos and transform them into the style of great artists, but now they’re learning about music. Pierre Barreau, head of Luxembourg-based startup Aiva Technologies, talks about the soaring music composed by an AI system — and used as the theme song of the AI Podcast.

How Tattoodo Uses AI to Help You Find Your Next Tattoo

What do you do when you’re at a tattoo parlor but none of the images on the wall strike your fancy? Use Tattoodo, an app that uses deep learning to help create a personalized tattoo.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.

Behind the Scenes at NeurIPS with NVIDIA and Caltech’s Anima Anandkumar

Anima Anandkumar is setting a personal record this week with seven of her team’s research papers accepted to NeurIPS 2020.

The 34th annual Neural Information Processing Systems conference is taking place virtually from Dec. 6-12. The premier event on neural networks, NeurIPS draws thousands of the world’s best researchers every year.

Anandkumar, NVIDIA’s director of machine learning research and Bren Professor in Caltech’s Computing and Mathematical Sciences department, joined AI Podcast host Noah Kravitz to talk about what to expect at the conference, and to explain what she sees as the future of AI.

The papers that Anandkumar and her teams at both NVIDIA and Caltech will be presenting focus on topics including how to design more robust priors that improve network perception and how to create useful benchmarks to evaluate where neural networks need to improve.

In terms of what Anandkumar is focused on going forward, she continues to work on the transition from supervised to unsupervised and self-supervised learning, which she views as the key to next-generation AI.

Key Points From This Episode:

  • Anandkumar explains how her interest in AI grew from a love of math at a young age as well as influence from her family — her mother was an engineer and her grandfather a math teacher. Her family was also the first in their city to have a CNC machine — an automated machine, such as a drill or lathe, controlled by a computer — which sparked an interest in programming.
  • Anandkumar was instrumental in spearheading the development of tensor algorithms, which are crucial in achieving massive parallelism in large-scale AI applications. That’s one reason for her enthusiasm for NeurIPS, which is not constrained by a particular domain but focused more on improving algorithm development.

Tweetables:

“How do we ensure that everybody in the community is able to get the best benefits from the current AI and can contribute in a meaningful way?” — Anima Anandkumar [2:44]

“Labs like NVIDIA Research are thinking about, ‘Okay, where do we go five to 10 years and beyond from here?’” — Anima Anandkumar [11:16]

“What I’m trying to do is bridge this gap [between academia and industry] so that my students and collaborators are getting the best of both worlds” — Anima Anandkumar [23:54]

You Might Also Like:

NVIDIA Research’s David Luebke on Intersection of Graphics, AI

There may be no better guest to talk AI and graphics than David Luebke, vice president of graphics research at NVIDIA. He co-founded NVIDIA Research in 2006, after eight years on the faculty of the University of Virginia.

NVIDIA’s Jonah Alben Talks AI

Imagine building an engine with 54 billion parts. Now imagine each piece is the size of a gnat’s eyelash. That gives an idea of the scale Jonah Alben works at. Alben is the co-leader of GPU engineering at NVIDIA.

Demystifying AI with NVIDIA’s Will Ramey

One of NVIDIA’s best explainers, Will Ramey, provides an introduction to the AI boom and the key concepts behind it. Ramey is the senior director and global head of developer programs at NVIDIA.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.

Lilt CEO Spence Green Talks Removing Language Barriers in Business

When large organizations require translation services, there’s no room for the amusing errors often produced by automated apps. That’s where Lilt, an AI-powered enterprise language translation company, comes in.

Lilt CEO Spence Green spoke with AI Podcast host Noah Kravitz about how the company is using a human-in-the-loop process to achieve fast, accurate and affordable translation.

Lilt does so with predictive typing software, in which professional translators receive AI-based suggestions for how to translate content. Because translators work with machine assistance, Lilt’s translations are fast without sacrificing accuracy.
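
A heavily simplified sketch of the predictive-typing idea follows, backed here by a tiny in-memory translation memory rather than a neural model, with invented data; Lilt’s production system works very differently. The point is only that suggestions stay consistent with what the translator has already typed:

    # Toy translation memory: source segment -> previously approved target translations.
    TRANSLATION_MEMORY = {
        "the report is ready": ["le rapport est prêt", "le rapport est terminé"],
        "see you tomorrow": ["à demain"],
    }

    def suggest(source_segment, typed_prefix):
        """Offer completions that agree with what the translator has already typed."""
        candidates = TRANSLATION_MEMORY.get(source_segment.lower(), [])
        return [c for c in candidates if c.startswith(typed_prefix)]

    # The translator has typed "le rapport est t"; only the matching candidate is suggested.
    print(suggest("The report is ready", "le rapport est t"))  # ['le rapport est terminé']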

Keeping people in the workflow also makes localization possible. Professional translators use cultural context to take direct translations and adjust phrases or words to reflect local language and customs.

Lilt currently supports translations of 45 languages, and aims to continue improving its AI and make translation services more affordable.

Key Points From This Episode:

  • Green’s experience living in Abu Dhabi was part of the inspiration behind Lilt. While there, he met a man, an accountant, who had immigrated from Egypt. When asked why he no longer worked in accounting, the man explained that he didn’t speak English, and accountants who only spoke Arabic were paid less. Green didn’t want the difficulty of adult language learning to be a source of inequality in a business environment.
  • Lilt was founded in 2015, and evolved from a solely software company into a software and services business. Green explains the steps it took for the company to manage translators and act as a complete solution for enterprises.

Tweetables:

“We’re trying to provide technology that’s going to drive down the cost and increase the quality of this service, so that every organization can make all of its information available to anyone.” — Spence Green [2:53]

“One could argue that [machine translation systems] are getting better at a faster rate than at any point in the 70-year history of working on these systems.” — Spence Green [14:01]

You Might Also Like:

Hugging Face’s Sam Shleifer Talks Natural Language Processing

Hugging Face is more than just an adorable emoji — it’s a company that’s demystifying AI by transforming the latest developments in deep learning into usable code for businesses and researchers, explains research engineer Sam Shleifer.

Credit Check: Capital One’s Kyle Nicholson on Modern Machine Learning in Finance

Capital One Senior Software Engineer Kyle Nicholson explains how modern machine learning techniques have become a key tool for financial and credit analysis.

A Conversation with the Entrepreneur Behind the World’s Most Realistic Artificial Voices

Voice recognition is one thing; creating natural-sounding artificial voices is quite another. Lyrebird co-founder Jose Sotelo speaks about how the startup is using deep learning to create a system that listens to human voices and generates speech mimicking the original speaker.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.

AI Artist Pindar Van Arman’s Painting Robots Visit GTC 2020

Pindar Van Arman is a veritable triple threat — he can paint, he can program and he can program robots that paint.

Van Arman first started incorporating robots into his artistic method 15 years ago to save time, coding a robot to paint the beginning stages of an art piece — like “a printer that can pick up a brush.”

It wasn’t until Van Arman took part in the DARPA Grand Challenge, a prize competition for autonomous vehicles, that he was inspired to bring AI into his art.

Now, his robots are capable of creating artwork all on their own through the use of deep neural networks and feedback loops. Van Arman is never far away, though, sometimes pausing a robot to adjust its code and provide it some artistic guidance.

Van Arman’s work is on display in the AI Art Gallery at GTC 2020, and he’ll be giving conference attendees a virtual tour of his studio on Oct. 8 at 11 a.m. Pacific time.

Key Points From This Episode:

  • One of Van Arman’s most recent projects is artonomous, an artificially intelligent painting robot that is learning the subtleties of fine art. Anyone can submit their photo to be included in artonomous’ training set.
  • Van Arman predicts that AI will become even more creative, independent of its human creators, and that AI artists will learn to program a variety of coexisting networks that give AI a greater understanding of what defines art.

Tweetables:

“I’m trying to understand myself better by exploring my own creativity — by trying to capture it in code, breaking it down and distilling it” — Pindar Van Arman [4:22]

“I’d say 50% of the paintings are completely autonomous, and 50% of the paintings are directed by me. 100% of them, though, are my art” — Pindar Van Arman [17:20]

You Might Also Like

How Tattoodo Uses AI to Help You Find Your Next Tattoo

Picture this: you find yourself in a tattoo parlor. But none of the dragons, flaming skulls, or gothic font lifestyle mottos you see on the wall seem like something you want on your body. So what do you do? You turn to AI, of course. We spoke to two members of the development team at Tattoodo.com, who created an app that uses deep learning to help you create the tattoo of your dreams.

UC Berkeley’s Pieter Abbeel on How Deep Learning Will Help Robots Learn

Robots can do amazing things. Compare even the most advanced robots to a three-year-old, however, and they can come up short. UC Berkeley Professor Pieter Abbeel has pioneered the idea that deep learning could be the key to bridging that gap: creating robots that can learn how to move through the world more fluidly and naturally.

How AI’s Storming the Fashion Industry

Costa Colbert — who holds degrees ranging from neural science to electrical engineering — is working at MAD Street Den to bring machine learning to fashion. He’ll explain how his team is using generative adversarial networks to create images of models wearing clothes.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.

Inception: Exploring the AI Startup Ecosystem with NVIDIA’s Jeff Herbst

Jeff Herbst is a fixture of the AI startup ecosystem. Which makes sense, since he’s the VP of business development at NVIDIA and head of NVIDIA Inception, a virtual accelerator that currently has over 6,000 members across a wide range of industries.

Ahead of the GPU Technology Conference, taking place Oct. 5-9, Herbst joined AI Podcast host Noah Kravitz to talk about what opportunities are available to startups at the conference, and how NVIDIA Inception is accelerating startups in every industry.

Herbst, who now has almost two decades at NVIDIA under his belt, studied computer graphics at Brown University and later became a partner at a premier Silicon Valley technology law firm. He’s served as a board member and observer for dozens of startups over his career.

On the podcast, he provides his perspective on the future of the NVIDIA Inception program. As AI continues to expand into every industry, Herbst predicts that more and more startups will incorporate GPU computing.

Those interested can learn more through NVIDIA Inception programming at GTC, which will bring together the world’s leading AI startups and venture capitalists. They’ll participate in activities such as the NVIDIA Inception Premier Showcase, where some of the most innovative AI startups in North America will present, and a fireside chat with Herbst, NVIDIA founder and CEO Jensen Huang, and several CEOs of AI startups.

Key Points From This Episode:

  • Herbst’s interest in supporting an AI startup ecosystem began in 2008 at the NVISION Conference — the precursor to GTC. The conference held an Emerging Company Summit, which brought together startups, reporters and VCs, and made Herbst realize that there were many young companies using GPU computing that could benefit from NVIDIA’s support.
  • Herbst provides listeners with an insider’s perspective on how NVIDIA expanded from computer graphics to the cutting edge of AI and accelerated computing, describing how it was clear from his first days at the company that NVIDIA envisioned a future where GPUs were essential to all industries.

Tweetables:

“We love startups. Startups are the future, especially when you’re working with a new technology like GPU computing and AI” — Jeff Herbst [14:06]

“NVIDIA is a horizontal platform company — we build this amazing platform on which other companies, particularly software companies, can build their businesses” — Jeff Herbst [27:49]

You Might Also Like

AI Startup Brings Computer Vision to Customer Service

When your appliances break, the last thing you want to do is spend an hour on the phone trying to reach a customer service representative. Using computer vision, Drishyam.AI analyzes the issue and communicates directly with manufacturers, rather than going through retail outlets.

How Vincent AI Uses a Generative Adversarial Network to Let You Sketch Like Picasso

If you’ve only ever been able to draw stick figures, this is the application for you. Vincent AI turns scribbles into a work of art inspired by one of seven artistic masters. Listen in to hear from Monty Barlow, machine learning director for Cambridge Consultants — the technology development house behind the app.

A USB Port for Your Body? Startup Uses AI to Connect Medical Devices to Nervous System

Think of it as a USB port for your body. Emil Hewage is the co-founder and CEO at Cambridge Bio-Augmentation Systems, a neural engineering startup. The UK startup is building interfaces that use AI to help plug medical devices into our nervous systems.
