What’s the Difference Between VR, AR and MR?

Get up. Brush your teeth. Put on your pants. Go to the office. That’s reality.

Now, imagine you can play Tony Stark in Iron Man or the Joker in Batman. That’s virtual reality.

Advances in VR have enabled people to create, play, work, collaborate and explore in computer-generated environments like never before.

VR has been in development for decades, but only recently has it emerged as a fast-growing market opportunity for entertainment and business.

Now, picture a hero, and set that image in the room you’re in right now. That’s augmented reality.

AR is another overnight sensation decades in the making. The mania for Pokemon Go, which brought the popular Japanese gaming franchise to city streets, has led to a score of games that blend pixie dust with the real world. There's another sign of its rise: Apple's 2017 introduction of ARKit, a set of tools for developers to create mobile AR content, is encouraging companies to build AR apps for iOS 11.

Microsoft's HoloLens and Magic Leap's Lightwear, both of which let people engage with holographic content, are two major pioneering head-mounted displays.

Magic Leap Lightwear

It’s not just fun and games either — it’s big business. Researchers at IDC predict worldwide spending on AR and VR products and services will rocket from $11.4 billion in 2017 to nearly $215 billion by 2021.

Yet just as VR and AR take flight, mixed reality, or MR, is evolving fast. Developers working with MR are quite literally mixing the qualities of VR and AR with the real world to offer hybrid experiences.

Imagine another VR setting: You're instantly teleported onto a beach chair in Hawaii, sand at your feet, mai tai in hand, transported to the islands while skipping economy-class airline seating. But are you really there?

In mixed reality, you could experience that scenario while actually sitting on a flight to Hawaii in a coach seat that emulates a creaky beach chair when you move, receiving the mai tai from a real flight attendant and touching sand on the floor to create a beach-like experience. A flight to Hawaii that feels like Hawaii is an example of MR.

VR Explained

The Sensorama

The notion of VR dates back to 1930s literature. In the 1950s, filmmaker Morton Heilig wrote of an "experience theater" and later built an immersive, video game-like machine he called the Sensorama for people to peer into. A pioneering moment came in 1968, when Ivan Sutherland developed what is credited as the first head-mounted display.

Much has changed since. Consumer-grade VR headsets have made leaps of progress. Their advances have been propelled by technology breakthroughs in optics, tracking and GPU performance.

Consumer interest in VR has soared in the past several years as new headsets from Facebook’s Oculus, HTC, Samsung, Sony and a host of others offer substantial improvements to the experience. Yet producing 3D, computer-generated, immersive environments for people is about more than sleek VR goggles.

Obstacles that have been mostly overcome, so far, include delivering enough frames per second and reducing latency (the delay created when users move their heads) to create experiences that aren't herky-jerky and potentially motion-sickness inducing.

VR taxes graphics processing to the max: it's roughly 7x more demanding than traditional PC gaming. Today's VR experiences wouldn't be possible without blazing-fast GPUs to help deliver graphics quickly.
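Where might that rough 7x figure come from? Here's one back-of-the-envelope derivation; the resolutions, frame rates and supersampling factor below are assumptions chosen for illustration, not an official methodology:

```python
# Back-of-the-envelope comparison of shading workload: traditional PC
# gaming vs. a first-generation VR headset. All figures are assumptions.

def pixels_per_second(width, height, fps, supersample=1.0):
    """Shaded pixels per second; `supersample` scales each axis."""
    return width * supersample * height * supersample * fps

# Traditional: 1080p at 30 fps. VR: a 2160x1200 dual-eye panel at 90 fps
# with 1.4x supersampling per axis to offset lens distortion.
traditional = pixels_per_second(1920, 1080, 30)
vr = pixels_per_second(2160, 1200, 90, supersample=1.4)

print(f"Traditional: {traditional / 1e6:.0f} Mpixels/s")  # ~62
print(f"VR:          {vr / 1e6:.0f} Mpixels/s")           # ~457
print(f"Ratio:       {vr / traditional:.1f}x")            # ~7.3x
```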

Software plays a key role, too. The NVIDIA VRWorks software development kit, for example, helps headset and application developers access the best performance, lowest latency and plug-and-play compatibility available for VR. VRWorks is integrated into game engines such as Unity and Unreal Engine 4.

To be sure, VR still has a long way to go to reach its potential. Right now, the human eye is still able to detect imperfections in rendering for VR.

Vergence accommodation conflict, Journal of Vision

Some experts say VR technology will be able to perform at a level exceeding human perception when it can make a 200x leap in performance, roughly within a decade. In the meantime, NVIDIA researchers are working on ways to improve the experience.

One approach, known as foveated rendering, reduces image quality in our peripheral vision, where the retina is less sensitive to detail, while boosting the quality of the images delivered to the center of our gaze. This technology is especially powerful when combined with eye tracking, which tells the processor where the viewing area needs to be sharpest.
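To make the idea concrete, here's a minimal sketch of eccentricity-based shading rates; the tile layout, radii and rate tiers are invented for illustration and aren't how any shipping renderer implements it:

```python
import math

# Minimal foveated-rendering sketch: choose a shading rate per screen tile
# based on its distance from the tracked gaze point. Thresholds are invented.

def shading_rate(tile_center, gaze, fovea_radius=0.15, mid_radius=0.35):
    """Return the fraction of full shading resolution for a tile.

    tile_center and gaze are (x, y) in normalized screen coordinates [0, 1].
    """
    dist = math.hypot(tile_center[0] - gaze[0], tile_center[1] - gaze[1])
    if dist < fovea_radius:
        return 1.0    # full resolution where the eye is looking
    elif dist < mid_radius:
        return 0.5    # half resolution in the near periphery
    return 0.25       # quarter resolution in the far periphery

gaze = (0.5, 0.5)  # eye tracker reports the user is looking at screen center
for tile in [(0.5, 0.5), (0.7, 0.6), (0.05, 0.9)]:
    print(tile, shading_rate(tile, gaze))
```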


VR’s Technical Hurdles

  • Frames per second: Virtual reality requires rendering at 90 frames per second. Lower frame rates reveal lags in movement that are detectable to the human eye and can make some people nauseous. NVIDIA GPUs make VR possible by rendering faster than the human eye can perceive.
  • Latency: VR latency is the time between initiating a movement and seeing the corresponding visual response on the display. Experts say latency should be 20 milliseconds or less for VR; see the frame-budget sketch after this list. NVIDIA VRWorks, the SDK for VR headset and game developers, helps address latency.
  • Field of view: Field of view is the angle of view available from a particular VR headset, and a wide one is essential to creating a sense of presence. For example, the Oculus Rift headset offers a 110-degree viewing angle.
  • Positional tracking: To create a sense of presence and deliver a good VR experience, headsets need to be tracked in space with sub-millimeter accuracy, so the system can present the correct image for the user's exact position at every moment.
  • Vergence accommodation conflict: This is a viewing problem for today's VR headsets. Your eyes rotate toward or away from each other to fixate on an object (vergence), while at the same time each eye's lens focuses on that object (accommodation). Because VR goggles display 3D imagery on a screen at a fixed distance, they create a mismatch between vergence and accommodation that's unnatural to the eye, which can cause visual fatigue and discomfort.
  • Eye-tracking tech: VR head-mounted displays to date haven't tracked the user's eyes well enough for the system to respond to where a person is actually looking. Increasing resolution where the eyes are focused will help deliver better visuals.
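Putting the first two hurdles together gives a rough timing budget. Here's a minimal sketch, with illustrative stage costs rather than measured ones: at 90 frames per second, each frame gets about 11 ms of rendering time, and the whole motion-to-photon pipeline should stay under roughly 20 ms.

```python
# Illustrative VR timing-budget check. Stage costs are invented for the
# example; real pipelines overlap stages rather than running them serially.

TARGET_FPS = 90
MOTION_TO_PHOTON_BUDGET_MS = 20.0  # commonly cited comfort threshold

frame_budget_ms = 1000.0 / TARGET_FPS  # ~11.1 ms to render each frame

stages_ms = {
    "head tracking": 1.0,
    "CPU simulation": 2.0,
    "GPU rendering": 9.0,
    "display scan-out": 5.0,
}

total_ms = sum(stages_ms.values())
print(f"Frame budget at {TARGET_FPS} fps: {frame_budget_ms:.1f} ms")
print(f"Motion-to-photon total: {total_ms:.1f} ms",
      "(OK)" if total_ms <= MOTION_TO_PHOTON_BUDGET_MS else "(over budget)")
```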

Despite a decades-long crawl to market, VR has made a splash in Hollywood films such as the recently released Ready Player One and is winning fans across games, TV content, real estate, architecture, automotive design and other industries.

Analysts expect VR's continued consumer momentum, however, to be outpaced in the years ahead by enterprise adoption.

AR Explained

AR development spans decades. Much of the early work dates to universities such as MIT's Media Lab and pioneers like Thad Starner, now the technical lead for Alphabet's Google Glass smart glasses, who in AR's infancy wore heavy belt packs of batteries attached to his AR goggles.

Google Glass (2.0)

Google Glass, the ill-fated consumer fashion faux pas, has since been reborn as a less conspicuous, behind-the-scenes technology for use in warehouses and manufacturing. Today, workers use Glass for access to training videos and hands-on help from colleagues.

And AR technology has been squeezed down into the low-cost cameras and screens of inexpensive welding helmets that automatically dim, letting people melt steel without burning their retinas.

For now, AR is making inroads with businesses. It’s easy to see why when you think about overlaying useful information in AR that offers hands-free help for industrial applications.

Smart glasses for augmenting intelligence certainly support that image. Sporting microdisplays, AR glasses can be like having an indispensable co-worker who can help out in a pinch. AR eyewear can make the difference between getting a job done or not.

Consider a junior-level heavy-equipment mechanic assigned to make emergency repairs to a massive tractor on a construction site. The boss hands the newbie AR glasses that provide service manuals with schematics for hands-free guidance while troubleshooting repairs.

Some of these smart glasses even pack Amazon’s Alexa service to enable voice queries for access to information without having to fumble for a smartphone or tap one’s temple.

Jensen Huang on VR at GTC

Business examples for consumers today include IKEA's Place app, which lets people use a smartphone to view images of furniture and other products overlaid onto their actual home as seen through the phone.

Today there is a wide variety of smart glasses from the likes of Sony, Epson, Vuzix, ODG and startups such as Magic Leap.

NVIDIA research is continuing to improve AR and VR experiences. Many of these can be experienced at our GPU Technology Conference VR demonstrations.

MR Explained

MR holds big promise for businesses. Because it can offer nearly unlimited variations in virtual experiences, MR can inform people on the fly, enable collaboration and solve real problems.

Now, imagine the junior-level heavy-equipment mechanic waist-deep in a massive tractor and completely baffled by its mechanical problem. Crews sit idle, unable to work. Site managers grow anxious.

Thankfully, the mechanic's smart glasses can start a virtual help session. This lets the mechanic call in a senior mechanic by VR around the actual tractor model, so the two can walk through troubleshooting together while working on the real machine, tapping into online repair manuals for torque specs and other maintenance details via AR.

That’s mixed reality.

To visualize a consumer version of this, consider the IKEA Place app for placing and viewing furniture in your house. You have your eye on a bright red couch, but want to hear what friends and family think of how it looks in your living room before you buy it.

So you pipe in a VR session from your smart glasses. Let's imagine this session runs in an IKEA Place app for VR, and you can invite those closest confidants. First, they must each sit down on any couch they can find. Now, however, when everyone sits down, it's in a virtual setting of your house, and the couch is red, just like the IKEA one.

Voila, in mixed reality, the group virtual showroom session yields a decision: your friends and family unanimously love the couch, so you buy it.

The possibilities are endless for businesses.

Mixed reality, while in its infancy, might be the holy grail for all the major developers of VR and AR. There are unlimited options for the likes of Apple, Microsoft, Facebook, Google, Sony, Netflix and Amazon. That's because MR can blend here-and-now reality with virtual reality and augmented reality for many entertainment and business situations.

Take this one: automotive designers can sit in an actual car seat, with armrests, shifter and steering wheel, and pipe into VR sessions that deliver the dashboard and interior. Many designers can assess designs together remotely this way. NVIDIA Holodeck actually enables this type of collaborative work.

Buckle up for the ride.

The post What’s the Difference Between VR, AR and MR? appeared first on The Official NVIDIA Blog.

Declining GPU Sales to Cryptocurrency Miners ‘Healthy’: AMD CEO

GPU manufacturers Nvidia and AMD each enjoyed massive sales over the past year, partially thanks to miners who buy GPUs to mine cryptocurrencies. However, those sales are now declining. The demand for GPUs kept increasing as the value of cryptocurrencies went up throughout 2017. But after the market cap reached an all-time high of $830

The post Declining GPU Sales to Cryptocurrency Miners ‘Healthy’: AMD CEO appeared first on CCN

Nvidia’s Huang: Cryptocurrencies Here to Stay, Will Be an ‘Important Driver for GPUs’

Nvidia has never been overly transparent about the impact of bitcoin mining on its business, but now we know the computing company expects to generate revenue from this market for years to come. Jensen Huang, Nvidia’s CEO, told CNBC that despite the recent downturn in bitcoin mining amid a BTC price that’s been under pressure, 

The post Nvidia’s Huang: Cryptocurrencies Here to Stay, Will Be an ‘Important Driver for GPUs’ appeared first on CCN

Live: Jensen Huang Keynotes NVIDIA’s 2018 GPU Technology Conference

Less than 24 hours until NVIDIA CEO Jensen Huang delivers the keynote at our ninth annual GPU Technology Conference in Silicon Valley, and the action’s already begun.

The crowd of more than 8,000 surging into the McEnery Convention Center, including researchers, press, technologists, analysts and partners from all over the globe, is our largest yet.

The 600+ talks on the docket may be the best testament to the spread of GPUs into every aspect of human endeavor.

Attendees are already crowding into conference rooms to hear about how GPUs can be used to model the formation of galaxies, generate dazzling special effects for blockbuster movies, and even analyze scans of the human heart.

Their mood: happy. At least, that's what the Emotions Demo, set up on the convention's main concourse, tells us. The demo uses deep learning to read the facial expressions of people nearby in real time, classifying whether they're happy, neutral, afraid or disgusted.

Also on the show floor: a pop-up store selling NVIDIA Gear. The best sellers? The NVIDIA "I Am AI" t-shirt and our much-sought-after NVIDIA Ruler, according to the store's staff.

We’ll be buttonholing speakers from a broad cross-section of these talks and interviewing them for AI Podcast, where we’re recording in a sleek glass booth positioned on the show floor.

If all this makes your heart beat a little faster, check back for live updates from our keynote Tuesday. And keep an eye on our blog throughout the week for the latest news from the show.

And if you’re feeling nostalgic, check out last year’s live GTC keynote blog.

The post Live: Jensen Huang Keynotes NVIDIA’s 2018 GPU Technology Conference appeared first on The Official NVIDIA Blog.

NVIDIA Invests in Speech Recognition Startup Deepgram

As the AI ecosystem continues to expand, NVIDIA revealed today that speech analytics startup Deepgram is the newest member of the GPU Ventures portfolio.

Founded in 2015, San Francisco-based Deepgram has built the first end-to-end deep learning speech recognition system in production that uses NVIDIA GPUs for inferencing and training.

“Using GPUs made our inferencing 100 times more efficient than when using CPUs,” said Scott Stephenson, CEO and co-founder of Deepgram. “Our technology coupled with NVIDIA’s expertise in AI allows us to make a greater impact in speech analysis.”

Deepgram is also part of our Inception virtual accelerator program, which includes more than 2,800 startups. Deepgram competed in the Inception pitch day competition last May. In this year's Inception awards finale, six finalists will compete for a combined cash prize of $1 million at GTC in San Jose on March 27.

Deepgram’s automatic transcription tool, Deepgram Brain, searches for keywords in transcripts by both sound and text. It also helps businesses analyze customer calls to improve their service. The company already serves over 5,000 clients, from financial institutions to journalists.

“While many companies already implement accelerated speech recognition, true speech analytics has until recently been largely untouched by deep learning,” said Jeff Herbst, vice president of business development at NVIDIA. “Deepgram has done an amazing job introducing deep learning to this field, and we look forward to working closely with them to advance deep learning-driven speech analytics to the next level.”

NVIDIA GPU Ventures has invested in more than three dozen companies to date, including 10 in five countries over the past year. In addition to Deepgram, some of the most recent members of its startup portfolio include:

  • BlazingDB – Peruvian startup accelerating the data parsing process
  • Graphistry – Bay Area startup streamlining data investigations using GPUs
  • H2O.ai – California startup seeking to make AI adoption more efficient
  • JingChi – Chinese self-driving startup developing an autonomous Uber-like service
  • TuSimple – Chinese autonomous truck startup

* Image courtesy of Deepgram

The post NVIDIA Invests in Speech Recognition Startup Deepgram appeared first on The Official NVIDIA Blog.

UC Berkeley’s Sergey Levine Explains How Deep Learning Will Unleash Robotics

How do you teach your robot to learn? This is the question that Sergey Levine, an assistant professor in the department of electrical engineering and computer science at UC Berkeley, is trying to answer.

"One of the most important things is that you have to somehow communicate to the robot what it means to succeed," Levine said in a conversation with AI Podcast host Michael Copeland. "That's one of the most basic things … You need to tell it what it should be doing."

In his research, Levine has explored reinforcement learning, in which robots learn which actions fulfill a particular task. He's also quick to point out that it's important for robots not just to repeat what they learn in training, but to understand why a task requires certain actions.

“So you observe people doing a task, try to infer what are the goals these people are optimizing, and then the robot can attempt to optimize the same goal itself,” said Levine.
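That observation sketches the shape of inverse reinforcement learning: infer the goal behind a demonstration, then optimize it. Below is a toy illustration of that loop in a one-dimensional world; the goals, dynamics and inference rule are all invented for the example and are not Levine's actual method.

```python
# Toy learning-from-demonstration loop: watch a demonstration, infer the
# goal being optimized, then have the robot optimize that same goal.
# Everything here (goals, dynamics, inference rule) is a made-up example.

GOALS = {"left shelf": 0.0, "right shelf": 10.0}  # hypothetical targets

def walk_toward(goal_pos, start=5.0, n_steps=8):
    """Trajectory of an agent stepping toward a goal on a 1-D line."""
    pos, trajectory = start, []
    for _ in range(n_steps):
        if pos < goal_pos:
            pos += 1.0
        elif pos > goal_pos:
            pos -= 1.0
        trajectory.append(pos)
    return trajectory

def infer_goal(trajectory):
    """Guess which goal the demonstrator was optimizing: the one
    closest to where the trajectory ends up."""
    return min(GOALS, key=lambda g: abs(GOALS[g] - trajectory[-1]))

demo = walk_toward(GOALS["right shelf"])  # observe a person doing the task
goal = infer_goal(demo)                   # infer the goal they optimized
robot = walk_toward(GOALS[goal])          # robot optimizes the same goal
print("Inferred goal:", goal)
print("Robot trajectory:", robot)
```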

It's not easy, though. Teaching a robot to learn is more difficult than building a deep learning system that simply recognizes images. In reality, the learning process involves a lot of trial and error.

“If you want to get a robot to do interesting things, you kind of need it to learn on its own,” Levine said. “This is not something where people can just give you these perfect labels like they do for image recognition.”

Levine’s fascination with robots and his current area of research stemmed from two things: his work in computer graphics, and a Disney Animation film titled Big Hero 6.

One of the film’s characters is Baymax, a robot built by a graduate student to assist with people’s medical needs.

“The thing that I really liked about that movie…[is] this idea that scientists can build a robot as an experiment but for the purpose of actually helping people,” said Levine. “The fact that something like that makes it in popular culture and a lot of people see it, that to me is actually quite inspiring. I think we should be thinking more about robots in this way.”

AI Podcast: Move Over Sherlock, AI Is Here

And if you are a frequent PayPal user, give last week’s episode a listen: Vadim Kutsyy, a data scientist at PayPal, discusses how the online payments company uses AI to crack down on suspicious transactions.

How to Tune in to the AI Podcast

The AI Podcast is available through iTunes, DoggCatcher, Google Play Music, Overcast, PlayerFM, Podbay, Pocket Casts, PodCruncher, PodKicker, Stitcher and Soundcloud. If your favorite isn't listed here, email us at aipodcast@nvidia.com.


The post UC Berkeley’s Sergey Levine Explains How Deep Learning Will Unleash Robotics appeared first on The Official NVIDIA Blog.

Smartest AI Researchers Get Fastest GPUs: NVIDIA Gives Away More V100s

We’re working to put the world’s fastest GPU into the hands of the world’s smartest AI researchers.

Last month in Honolulu, NVIDIA shocked top AI researchers, giving them the world’s first NVIDIA Tesla V100 GPU accelerators. Last night, in Sydney, we struck again, handing out 15 NVIDIA Tesla V100 GPU accelerators.

“I think it’s fantastic,” said Sergey Levine, an assistant professor at the University of California, Berkeley, who is known for his work at the intersection of deep learning and robotics.

A few of the 15 recipients of a V100 on Monday night. From left, back row: Tatsuya Harada (University of Tokyo); Ben Poole (Stanford University); Aaron Courville (Montreal Institute for Learning Algorithms); Sergey Levine (UC Berkeley); Sedat Ozer (MIT). From left, front row: Marc Law (University of Toronto); Rupesh Srivastava (IDSIA); Pedro Domingos (University of Washington); Lars Mescheder (MPI Tübingen) and Jakob Foerster (Oxford University).

Given out at a meetup for participants in our NVIDIA AI Labs program at the International Conference on Machine Learning, and signed by NVIDIA founder and CEO Jensen Huang, the V100s are the world’s most powerful GPUs, offering more than 100 teraflops of deep learning performance.

“We are going to melt this with our algorithms, then we are going to melt the world,” said the University of Washington’s Pedro Domingos.

Through NVAIL, NVIDIA supports AI research at the world’s top universities and institutes.

Recipients of the V100s at last night’s meetup included representatives from Carnegie Mellon University, the Chinese Academy of Sciences (CAS), IDSIA – the Swiss AI Lab, the Massachusetts Institute of Technology, MPI Tübingen, the Montreal Institute for Learning Algorithms, National Taiwan University, Oxford University, Peking University, Stanford University, Tsinghua University, the University of California Berkeley, the University of Tokyo, the University of Toronto and the University of Washington.

"We are very much reliant on NVIDIA technology," said Aaron Courville of the Montreal Institute for Learning Algorithms. "More GPUs is always a very good thing, and very important for us."

All the recipients are from NVAIL member institutions.

"Supporting innovation at every level is a hallmark of NVIDIA," said Ian Buck, general manager and vice president of Accelerated Computing at NVIDIA. "Our NVAIL partners are at the forefront of AI, making new discoveries every day that can benefit our lives."

NVIDIA Pioneering Research Awards: Recognizing the World’s Best AI Research

AI researchers displaying the first NVIDIA Pioneering Research Awards include, from left to right: Aaron Courville with Montreal Institute for Learning Algorithms, Chelsea Finn with UC Berkeley, Sergey Levine with UC Berkeley, Sedat Ozer with MIT, Rupesh Srivastava with IDSIA, Marc Law with University of Toronto and Tatsuya Harada with University of Tokyo.

Another surprise at the meetup: the launch of the NVIDIA Pioneering Research Awards. It’s a new program to celebrate the acceptance of NVAIL partners’ research papers at conferences such as ICML.

Award recipients received a plaque featuring the first page of their papers. The inaugural winners are the seven researchers pictured above.

UC Berkeley’s Chelsea Finn, center, and Sergey Levine, right, were among the world’s first seven AI researchers to receive the NVIDIA Pioneering Award, presented by NVIDIA’s NVAIL program leader Anushree Saxena, on the left.

“I am very excited and honored to receive this award,” said Professor Tatsuya Harada of the University of Tokyo.

“It’s really great to see that NVIDIA is really so involved in research, that they invite us out here and that they look at the kind of papers we are writing and recognize that,” said Berkeley’s Levine.

Stay tuned to our blog to learn more about NVAIL. Next up, we'll take a closer look at some of the cutting-edge work our partners are debuting at ICML.

The post Smartest AI Researchers Get Fastest GPUs: NVIDIA Gives Away More V100s appeared first on The Official NVIDIA Blog.

NVIDIA Takes AI From Research to Production in New Work with Volvo, VW, ZF, Autoliv, HELLA  

Furthering its growth in the European automotive market, NVIDIA today unveiled a series of collaborations with five of the continent’s key players to move AI technology into production.

Speaking at the Automobil Elektronik Congress, an annual gathering outside Stuttgart focused on Europe’s auto industry, NVIDIA founder and CEO Jensen Huang described AI-based deep learning technology as “spectacular” for autonomous driving.

He unveiled new partnerships and collaborations for NVIDIA, which underscore the company's growing headway in the $1 trillion transportation market. They include:

  • Volvo Cars and Autoliv, together with their software joint venture Zenuity, developing next-generation self-driving cars on the NVIDIA DRIVE PX platform
  • ZF and HELLA, pairing up to deliver a production-ready self-driving system with NCAP safety
  • Volkswagen Group, partnering on deep learning-based AI systems for mobility services

Each is detailed below.

“There’s no industry that’s being revolutionized like this one and our contributions to it are significant,” Jensen told more than 500 auto execs in his half-hour keynote.

He described NVIDIA's car-to-datacenter computing model for NVIDIA DRIVE, by which a vehicle senses its surroundings, determines its precise position on a highly detailed map and plans a safe path forward. Complementing his talk, members of NVIDIA's autonomous vehicle ecosystem, including Bosch, VW, HELLA and Audi, gave presentations during the conference on the path to autonomous driving.
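That sense-localize-plan cycle is the backbone of most autonomous driving stacks. Here's a minimal sketch of the loop; the function names, types and numbers are hypothetical placeholders, not the NVIDIA DRIVE APIs:

```python
from dataclasses import dataclass

# Minimal sense -> localize -> plan loop, the shape Huang described for
# NVIDIA DRIVE. All names, types and numbers are hypothetical placeholders.

@dataclass
class Pose:
    x: float
    y: float
    heading: float

def sense(raw_readings):
    """Fuse camera/radar/lidar readings into a list of detected obstacles."""
    return [obj for obj in raw_readings if obj is not None]

def localize(obstacles, hd_map, last_pose):
    """Match what the car sees against a high-definition map (stubbed)."""
    return Pose(last_pose.x + 0.1, last_pose.y, last_pose.heading)

def plan(pose, obstacles, hd_map):
    """Choose a safe path forward; here, just slow down near obstacles."""
    return {"speed": 5.0 if obstacles else 20.0, "steer": 0.0}

pose = Pose(0.0, 0.0, 0.0)
for tick in range(3):  # one iteration per sensor frame
    obstacles = sense([None, "pedestrian"] if tick == 1 else [None])
    pose = localize(obstacles, hd_map=None, last_pose=pose)
    command = plan(pose, obstacles, hd_map=None)
    print(f"tick {tick}: pose=({pose.x:.1f}, {pose.y:.1f}) cmd={command}")
```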

Huang explained that the heart of NVIDIA's autonomous technology is NVIDIA DRIVE PX, an exceptionally powerful AI car computing platform that draws just 30 watts of power yet can power Level 4 autonomous capability, in which a car can essentially drive itself.

NVIDIA’s latest announcements build on existing collaborations with Europe’s Audi and Mercedes, as well as Toyota and Tesla Motors.

Volvo Cars and Autoliv Collaboration

Volvo Cars and Autoliv – together with Zenuity, an auto software joint venture equally owned by the two companies – will work with NVIDIA to develop next-generation self-driving car technologies built on the NVIDIA DRIVE PX car computing platform.

The agreement builds on a collaboration unveiled 18 months ago, when the Swedish carmaker, known for its focus on safety, said it would use NVIDIA DRIVE PX for the pilot test of its DRIVE ME program, which involves 100 self-driving Volvo SUVs in its hometown of Gothenburg and will soon expand into new test programs in the UK and China.

“Artificial intelligence is the essential tool for solving the incredibly demanding challenge of autonomous driving,” Huang said. “We are building on our earlier collaboration with Volvo to create production vehicles that will make driving safer, lead to greener cities and reduce congestion on our roads.”

Volvo president and CEO Hakan Samuelsson said in a published comment: "Our cooperation with NVIDIA places Volvo Cars, Autoliv and Zenuity at the forefront of the fast moving market to develop next generation autonomous driving capabilities and will speed up the development of Volvo's own commercially available autonomous drive cars."

And Autoliv CEO Jan Carlson said: “With NVIDIA, we now have full access to the leading AI computing platform for autonomous driving. Autoliv, Volvo Cars and NVIDIA share the same vision for safe, autonomous driving. This cooperation will further advance our leading ADAS and autonomous driving offerings to the market.”

ZF-HELLA Partnership

ZF, one of the industry’s largest automotive suppliers, and HELLA, a leading tier 1 supplier of camera perception software and sensor technologies, will provide customers with a complete self-driving system that integrates front camera units, as well as supporting software functions and radar systems.

Using the NVIDIA DRIVE PX AI platform, the partnership aims to produce the highest NCAP safety ratings for passenger cars, while also addressing commercial vehicle and off-highway applications. NVIDIA DRIVE PX offers both NCAP driver assistance and self-driving capabilities on a single platform ready for production.

NVIDIA DRIVE PX will enable ZF and HELLA to develop software for scalable systems, starting with modern driver assistance systems that connect their advanced imaging and radar sensor technologies, and extending to autonomous driving functionality.

“Creating a self-driving car is one of society’s most important endeavors – and one of the most challenging to deliver,” Huang said. “Our work with ZF and HELLA will bring AI self-driving solutions that include NCAP safety for millions of cars worldwide.”

ZF CEO Stefan Sommer said: “We are building up a powerful ecosystem step by step. Earlier this year ZF became the first supplier to adopt NVIDIA AI technology for cars and commercial vehicles in the ZF ProAI box. Just a few days ago HELLA and ZF joined forces in a non-exclusive partnership, and now together we are partnering with NVIDIA to make our roads safer and to support the development of autonomous driving functions.”

And HELLA CEO Rolf Breidenbach, who spoke before Jensen, said: “Combining our expertise in front camera perception software and radar sensor technologies with NVIDIA’s expertise in deep learning hardware and software will drive technological developments for broad adoption of self-driving capabilities across many transportation segments.”

Volkswagen Group Partnership

Earlier in the day, VW announced a partnership with NVIDIA to develop advanced AI systems based on deep learning that could be used for a variety of applications and in the field of mobility services.

Examples it cited include developing new procedures for optimizing traffic flow in cities and tasks involving humans working together with robots.

"Artificial intelligence is the key to the digital future of the Volkswagen Group," Martin Hofmann, CIO of Volkswagen Group, said. "We want to develop and deploy high-performance AI systems ourselves. This is why we are expanding the expert knowledge required. Cooperation with NVIDIA will be a major step in this direction."

VW also said it would work with NVIDIA to establish a startup support program that will provide technical and financial support for international startups developing machine learning and deep learning for the auto industry.

The companies will also work together to launch a training initiative for high-performing students to learn to program for AI initiatives working with NVIDIA’s Deep Learning Institute.

These are just the latest examples of how the road to mass adoption of autonomous vehicles seems to be getting shorter every day.

The post NVIDIA Takes AI From Research to Production in New Work with Volvo, VW, ZF, Autoliv, HELLA   appeared first on The Official NVIDIA Blog.

Deep Learning Meets HPC at ISC 2017

Understand the brain's workings by analyzing cerebral slices. Use 3D heart models to develop better treatments while lowering reliance on animal experiments. Study how dark matter will form when the Andromeda galaxy and the Milky Way collide.

These are just a few of the challenges researchers are tackling using supercomputers, AI and big data.

GPUs are at the center of this work. Find out how at NVIDIA’s booth at the International Supercomputing Conference, in Frankfurt, June 19-22.

Some ISC 2017 highlights:

  • Experience the NVIDIA Tesla Accelerated Computing platform, the most pervasive and accessible HPC platform. See our new HPC products and technologies and talk with NVIDIA experts.
  • Check out the new NVIDIA DGX portfolio, based on our new Volta architecture, which provides the fastest path to deep learning.
  • Listen to the Featured Evening Talks from top speakers like Thomas Schulthess, of the Swiss National Supercomputing Centre, and Dirk Pleiter, of the Juelich Research Centre.
  • Attend the dedicated deep learning day on June 21, with speakers from Fraunhofer, DFKI, IBM Research, NVIDIA and others.
  • Learn at Deep Learning Institute sessions, sponsored by Hewlett Packard Enterprise, on June 21 at the Mövenpick Hotel.
  • And don’t miss the opening keynote from Microsoft Research’s Jennifer Tour Chayes on how massive datasets in networks can be used for collaborative filtering or drug development.

To learn how to harness the power of AI supercomputing for your data center, be sure to book your attendance at these events:

  • “The Convergence of HPC and Deep Learning,” with Axel Koehler, principal solution architect at NVIDIA. In the Exhibitor Forum, on June 20, 12:20-12:40 p.m.
  • “Dawn of Smart Supercomputing: How HPC & AI Are Engaging in a Virtuous Cycle That Will Leave Both Forever Transformed & Immeasurably Improved” with Steve Oberlin, CTO of Accelerated Computing at NVIDIA. In the Panorama 1 space on June 20, 11-11:30 a.m.
  • “Exascale System Developments (Architecture & Concepts),” a panel with John Danskin, VP of GPU Architecture at NVIDIA. In the Panorama 1 space on June 20, 1:45-3:15 p.m.
  • The third OpenACC User Group meeting on Tuesday, June 20, from 6-9 p.m., at the Marriott hotel.

Stay tuned on our handles @NVIDIAEU and @NVIDIADC to find out more about HPC and AI.

And visit our ISC booth (#730) for a chance to win great daily prizes like an NVIDIA SHIELD TV or tickets to GTC Europe 2017, Oct. 10-12, in Munich.

The post Deep Learning Meets HPC at ISC 2017 appeared first on The Official NVIDIA Blog.

‘Destiny 2’ Latest Milestone for PC Gaming, an Entertainment Platform Like No Other

Home Improvement was on television. Michael Keaton was Batman. Personal computers boasted speeds measured in megahertz. Twenty-five years ago, PC gaming was a zero-billion-dollar industry.

But it was about to become a cultural tsunami. Wolfenstein 3D, released in May 1992, showed how the PC could be good for much more than just spreadsheets. The title, released as shareware, spread from user to user like wildfire.

The broader lesson: the PC, open to all comers, would be where a new generation of electronic entertainment would be forged.

Today, PC games are a $33 billion industry. There are more than 1.2 billion gamers worldwide, a figure that will grow to 1.4 billion by 2020, outstripping the most populous nations on earth.

Destiny 2, debuting on the PC later this year, promises to bring the genre Wolfenstein 3D pioneered to new technical heights, and vast audiences.

And as PC gaming has grown, a clear pattern has emerged. Great games drive demand for great hardware. Great hardware drives demand for great games. It’s a virtuous cycle that’s made PC gaming bigger than ever. And with Destiny 2, the cycle continues.

The story behind the story: the NVIDIA software and services that keep this glorious cycle moving at an ever faster pace.

PC Gaming’s Glorious Past, Glorious Present

Thanks to the PC's open architecture, NVIDIA has been at the forefront of the PC gaming trend since our inception, driving PC graphics to new heights. Whether you're talking about technologies such as transform and lighting, programmable vertex and pixel shaders, or the invention of the GPU itself, a huge chunk of the 3D graphics innovations for the PC over the last 25 years was introduced first by NVIDIA.

Those contributions have been woven into an industry that’s faster than ever. And the pace of change in the PC industry is even more remarkable when you consider its size. Look beyond the headline number — gamers measured in billions — and you’ll find a number of big trends.

eSports has turned PC gaming into a spectator sport with huge audiences: more than 515 million viewers are expected to tune in by 2020, up from more than 385 million today, according to market researcher NewZoo. That industry, in turn, grew out of the vast market for online games, which DFC Intelligence predicts will grow to more than $37 billion in 2020.

The PC industry has become the engine that propels new kinds of entertainment into the lives of hundreds of millions of people because it’s so adaptable. A decade ago, most gamers were equipped with towering PCs. Now, they’re as likely to use a new generation of thin, light laptops.

PC gamers and developers are pioneering VR and AR games that blend the real with the virtual in ways that are impossible on any other platform. That adaptability — driven by the PC’s open architecture — has a long history. Twenty years ago, PCs were strung together for LAN parties, which today have evolved into ever more sophisticated online gaming networks.

Master the Best Games on the PC, Like Destiny 2

Now, Destiny 2 is the latest in a long string of titles that bring massive numbers of gamers together online to take part in amazing shared experiences. Just as the shareware phenomenon spread Wolfenstein 3D two and a half decades ago, today's PC games are available everywhere, from stores to digital distribution services.

That means that indie games from small, innovative developers, and new kinds of games developed on a PC — such as our own VR Funhouse — can be shared with hundreds of millions of PC gamers overnight.

New experiences like these are driven, in part, by the ability to plug newer, faster GPUs into PCs. Our Pascal GPU architecture delivers 5x more performance and 7x the efficiency of the Fermi-based GPUs we introduced seven years ago. We’re working with the broader PC gaming community to perfect new technologies — such as DirectX 12 — that can boost the performance of AAA games by as much as 33 percent after their introduction. And our GeForce Experience platform and Game Ready Drivers make sure gamers get the most out of their individual PC configuration with a single click.

PC Gaming Races Ahead with GameWorks

And, with GameWorks, we're putting teams of our engineers, and our latest technologies, at the service of the game developers who have made our technologies ubiquitous in PC gaming. Over 1,000 games from two dozen major studios now incorporate GameWorks technologies.

These include ShadowPlay — which has helped gamers generate more than 200 million videos of their gameplay a year; Ansel — which has let gamers generate more than 1 million in-game screenshots; and ShadowPlay Highlights — which automatically generates highlight reels for gamers too immersed in the action to start recording.

So stay tuned. The glorious PC gaming revolution has just begun.

The post ‘Destiny 2’ Latest Milestone for PC Gaming, an Entertainment Platform Like No Other appeared first on The Official NVIDIA Blog.