Intel’s Mix and Match Innovation (Infographic)

Today, at the annual Hot Chips Conference in Cupertino, California, Intel presented details about the company’s EMIB (Embedded Multi-die Interconnect Bridge) packaging technology. Developed by Intel, EMIB facilitates high-speed communication between multiple die in-package, and is a key component of Intel’s mix-and-match heterogeneous computing strategy. EMIB is used in Intel® Stratix® 10 FPGAs and 8th Gen Intel® Core™ processors with Radeon Graphics.



NVIDIA’s Jensen Huang to Kick Off GeForce Gaming Event Ahead of Gamescom 2018

You know. We know you know. You know we know you know.

Gamescom — the world’s largest gaming expo — is almost here. We’ve already told you our GeForce gaming event on Monday, Aug. 20, at the Palladium in Cologne, Germany, will be loaded with exclusive, hands-on demos of the hottest upcoming games, presentations from some of the world’s biggest game developers and some spectacular surprises.

Now you know that NVIDIA founder and CEO Jensen Huang will be kicking things off with a keynote at the event.

Head here to register, and turn up for the start of the festivities. Doors open at 5:30pm CET. The fun starts at 6pm CET (9am PT). Arrive early because it’ll be packed.

If you can’t get there in person, check out the livestream at https://www.twitch.tv/nvidia.

The festivities continue Tuesday at 10am, and run clear through to 5pm.

All week we’ll be painting Gamescom green. Find us at Hall 10.1, Booth E-072 and at our partners’ booths, powering the latest PC games, through Aug. 25.



Calling All GPU-Powered Grad Students: Apply for $50,000 NVIDIA Grant

We know that putting GPU technology in the hands of the world’s brightest minds leads to research breakthroughs.

That’s why we’ve launched the 18th annual NVIDIA Graduate Fellowship Program. We’re looking for students doing outstanding GPU-based research. Our goal: Provide them with grants and technical support so they can help solve the world’s biggest research problems.

We are especially seeking doctoral students working in artificial intelligence, machine learning, autonomous vehicles, robotics, AI for healthcare, high performance computing and related fields. Our Graduate Fellowship awards are up to $50,000 per student.

Since its start in 2002, the Graduate Fellowship Program has awarded over 150 grants worth more than $4.4 million.

We’re looking for students who have completed their first year of Ph.D.-level studies. Candidates need to be studying computer science, computer engineering, system architecture, electrical engineering or a related area. Applicants must also be investigating innovative ways to use GPUs.

The NVIDIA Graduate Fellowship Program for the 2019-2020 academic year is open to applicants worldwide. The deadline for submitting applications is September 28, 2018.

For more on eligibility and how to apply, click here or email fellowship@nvidia.com.


Alluring Turing: Get Up Close with 7 Keynote-worthy Turing Demos

RT Cores. Tensor Cores. Advanced shading technologies.

If you’ve been following the news about our new Turing architecture — launched this week at the SIGGRAPH professional graphics conference — you’re probably wondering what all this technology can do for you.

We invite you to step into our booth at SIGGRAPH — number 801 on the main floor — to see for yourself. Here’s what you’ll find:

  • Photorealistic, interactive car rendering — Spoiler: this demo of a Porsche prototype looks real, but is actually rendered. To prove it, you’ll be able to adjust the lighting and move the car around. It’s all built in Unreal Engine, with the Microsoft DXR API used to access the NVIDIA RTX developer platform. It runs on two Quadro RTX GPUs.
  • Real-time ray tracing on a single GPU — This Star Wars-themed demo stunned when it made its debut earlier this year running on a $70,000 DGX Station powered by four Volta GPUs. Now you can see the same interactive, real-time ray tracing using Unreal Engine running on our NVIDIA RTX developer platform on a single Turing Quadro GPU.
  • Advanced rendering for games & film (dancing robots) — This one is built on Unreal as well, and shows how real-time ray tracing can bring complex, action-packed scenes to life. Powered by a single Quadro RTX 6000, it shows real-time ray-traced effects such as global illumination, shadows, ambient occlusion and reflections.
  • Advanced rendering for games & film (Project Sol) — An interaction between a man and his robotic assistants takes a surprising turn. Powered by the Quadro RTX 6000, this demo shows off production-quality rendering and cinematic frame rates, enabling users to interact with scene elements in real time.
  • Cornell Box — Turn to this time-tested graphics teaching tool to see how Turing uses ray tracing to deliver complex effects — ranging from diffuse reflection to refraction to caustics to global illumination — with stunning photorealism.
  • Ray-traced global illumination — This live, photorealistic demo is set in the lobby of the Rosewood Bangkok Hotel, and shows the effects of light as the scene switches between rasterized and ray-traced materials. You’ll be able to make changes to the scene and see the effects in real time in this demo powered by a pair of Quadro RTX 6000 GPUs.
  • New Autodesk Arnold with GPU acceleration — Featuring a scene from Avengers: Infinity War courtesy of Cinesite, Autodesk and Marvel Studios, this demo lets you see the benefits of Quadro RTX GPUs for both content creation and final-frame rendering for feature films.

Of course, this isn’t all you’ll find in our booth.

In addition to being able to see demos from NVIDIA CEO Jensen Huang’s keynote Monday up close, you’ll be able to see a technology demo of our new NGX software development kit — featuring in-painting, super slow motion and up-resing; a new version of our Nsight developer tools; AI-powered rendering enhancements, including deep learning anti-aliasing; and a simulation based on Palm4u, a modified version of PALM for urban environments, examining how much solar radiation urban surfaces receive, as well as atmospheric and building heat emissions, during summer in Berlin.

So, if you’re at SIGGRAPH, stop by our booth. We’ll be here Tuesday and Wednesday from 9:30am to 6pm, and Thursday from 9:30am to 3:30pm.


Protecting Our Customers through the Lifecycle of Security Threats

By Leslie Culbertson

Intel’s Product Assurance and Security (IPAS) team is focused on the cybersecurity landscape and constantly working to protect our customers. Recent initiatives include the expansion of our Bug Bounty program and increased partnerships with the research community, together with ongoing internal security testing and review of our products. We are diligent in these efforts because we recognize bad actors continuously pursue increasingly sophisticated attacks, and it will take all of us working together to deliver solutions.

Today, Intel and our industry partners are sharing more details and mitigation information about a recently identified speculative execution side-channel method called L1 Terminal Fault (L1TF). This method affects select microprocessor products supporting Intel® Software Guard Extensions (Intel® SGX) and was first reported to us by researchers at KU Leuven University*, Technion – Israel Institute of Technology*, University of Michigan*, University of Adelaide* and Data61*1. Further research by our security team identified two related applications of L1TF with the potential to impact other microprocessors, operating systems and virtualization software.

More: Security Exploits and Intel Products (Press Kit) | Security Research Findings (Intel.com)

I will address the mitigation question right up front: Microcode updates (MCUs) we released earlier this year are an important component of the mitigation strategy for all three applications of L1TF. When coupled with corresponding updates to operating system and hypervisor software released starting today by our industry partners and the open source community, these updates help ensure that consumers, IT professionals and cloud service providers have access to the protections they need.

L1TF is also addressed by changes we are already making at the hardware level. As we announced in March, these changes begin with our next-generation Intel® Xeon® Scalable processors (code-named Cascade Lake), as well as new client processors expected to launch later this year.

We are not aware of reports that any of these methods have been used in real-world exploits, but this further underscores the need for everyone to adhere to security best practices. This includes keeping systems up to date and taking steps to prevent malware. More information on security best practices is available on the Homeland Security website.

About L1 Terminal Fault

All three applications of L1TF are speculative execution side channel cache timing vulnerabilities. In this regard, they are similar to previously reported variants. These particular methods target access to the L1 data cache, a small pool of memory within each processor core designed to store information about what the processor core is most likely to do next.

The microcode updates we released earlier this year provide a way for system software to clear this shared cache. Given the complexity, we created a short video to help explain L1TF.

Once systems are updated, we expect the risk to consumer and enterprise users running non-virtualized operating systems will be low. This includes most of the data center installed base and the vast majority of PC clients. In these cases, we haven’t seen any meaningful performance impact from the above mitigations based on the benchmarks we’ve run on our test systems.
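
If you want to confirm this on a Linux machine, kernels that picked up the L1TF work report mitigation status through sysfs. Here’s a minimal sketch in Python; the /sys/devices/system/cpu/vulnerabilities interface is an assumption about your kernel version, since older kernels don’t expose it:

    import os

    # Kernels with the L1TF patches expose a human-readable status string.
    # Older kernels lack these files entirely; treat absence as "unknown".
    VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

    def report(name):
        path = os.path.join(VULN_DIR, name)
        try:
            with open(path) as f:
                print(f"{name}: {f.read().strip()}")
        except FileNotFoundError:
            print(f"{name}: not reported (kernel predates this interface)")

    for name in ("l1tf", "meltdown", "spectre_v2"):
        report(name)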

There is a portion of the market – specifically a subset of those running traditional virtualization technology, and primarily in the data center – where it may be advisable that customers or partners take additional steps to protect their systems. This is principally to safeguard against situations where the IT administrator or cloud provider cannot guarantee that all virtualized operating systems have been updated. These actions may include enabling specific hypervisor core scheduling features or choosing not to use hyper-threading in some specific scenarios. While these additional steps might be applicable to a relatively small portion of the market, we think it’s important to provide solutions for all our customers.
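
For administrators weighing the hyper-threading question on Linux, kernels updated alongside the L1TF disclosure also expose a runtime SMT control. Here is a hedged sketch for inspecting it — the /sys/devices/system/cpu/smt path is again an assumption about kernel version, and actually turning SMT off requires root and careful workload analysis:

    # Sketch: inspect the kernel's runtime SMT (Hyper-Threading) control.
    SMT_CONTROL = "/sys/devices/system/cpu/smt/control"

    def smt_state():
        try:
            with open(SMT_CONTROL) as f:
                # Typical values: "on", "off", "forceoff", "notsupported".
                return f.read().strip()
        except FileNotFoundError:
            return "unknown (kernel does not expose SMT control)"

    print("SMT state:", smt_state())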

For these specific cases, performance or resource utilization on some specific workloads may be affected, and will vary accordingly. We and our industry partners are working on several solutions to address this impact so that customers can choose the best option for their needs. As part of this, we have developed a method to detect L1TF-based exploits during system operation, applying mitigation only when necessary. We have provided pre-release microcode with this capability to some of our partners for evaluation, and hope to expand this offering over time.

For more information on L1TF, including detailed guidance for IT professionals, please visit the advisory on the security center. We’ve also provided a white paper and updated the FAQs on our security first website.

I’d like to again thank our industry partners and the researchers who first reported these issues for their collaboration and collective commitment to coordinated disclosure. Intel is committed to the security assurance of our products, and will continue to provide regular updates on issues as we identify and mitigate them.

As always, we continue to encourage everyone to take advantage of the latest security protections by keeping your systems up to date.

Leslie Culbertson is executive vice president and general manager of Product Assurance and Security at Intel Corporation.

1. Raoul Strackx, Jo Van Bulck, Marina Minkin, Ofir Weisse, Daniel Genkin, Baris Kasikci, Frank Piessens, Mark Silberstein, Thomas F. Wenisch, and Yuval Yarom


NVIDIA CEO Jensen Huang Unveils Turing, Reinventing Computer Graphics

Ray-traced graphics offer incredible realism. Interactive graphics driven by GPUs offer speed and responsiveness. The two now come together in the greatest leap since the invention of the CUDA GPU in 2006, NVIDIA CEO Jensen Huang announced Monday.

Speaking at the SIGGRAPH professional graphics conference in Vancouver, Huang unveiled Turing, NVIDIA’s eighth-generation GPU architecture. He also introduced the first Turing-based GPUs: the NVIDIA Quadro RTX 8000, Quadro RTX 6000 and Quadro RTX 5000. And he detailed the Quadro RTX Server, a reference architecture for the $250 billion visual effects industry.

“This fundamentally changes how computer graphics will be done, it’s a step change in realism,” Huang told an audience of more than 1,200 graphics pros gathered at the sleek glass and steel Vancouver Convention Center, which sits across a waterway criss-crossed by cruise ships and seaplanes from the stunning North Shore mountains.

Dell EMC, HP Inc., Hewlett Packard Enterprise, Lenovo, Fujitsu, Boxx and Supermicro will be among the system vendors supporting the latest line of Quadro processors, he said. All three new Quadro GPUs will be available in the fourth quarter.

New Silicon, New Software

Turing — the result of more than 10,000 engineering-years of effort — features new RT Cores to accelerate ray tracing and new Tensor Cores for AI inferencing. Together for the first time, Huang explained, they make real-time ray tracing possible.

“It has to be amazing at today’s applications, but utterly awesome at tomorrow’s,” Huang said.

That new silicon is being supported by software from more than two dozen key ISVs. To help developers quickly take full advantage of Turing’s capabilities, NVIDIA has enhanced its RTX development platform with new AI, ray-tracing and simulation SDKs that bring Turing’s capabilities to key graphics applications addressing millions of designers, artists and scientists.

Huang also announced that NVIDIA is open sourcing its Material Definition Language software development kit, starting today.

“We now have a brand new software stack for computer graphics merging rastering and ray tracing, computing and AI,” Huang said.

Bringing Ray-Tracing to Real Time

To put Turing’s capabilities into perspective, Huang opened his keynote talk with a video telling the visual history of computer graphics over the past decades narrated by its pioneering figures — many of whom sat in the audience as the video played. It’s the tale of a grand quest to simulate the world we see all around us, one that has captivated some of the world’s brightest minds for decades.

Turing’s dedicated ray-tracing processors — called RT Cores — accelerate the computation of how light and sound travel in 3D environments. Turing accelerates real-time ray tracing by 25x over the previous Pascal generation, and can be used for final-frame rendering for film effects at more than 30x the speed of CPUs.

A Familiar Demo, Accelerated by a New GPU

To demonstrate this, Huang showed the audience a demo they’d seen — Epic Games’ stunning Star Wars-themed Reflections ray-tracing demo — running on hardware they hadn’t. At the Game Developers Conference in March, Reflections ran on a $70,000 DGX Station equipped with four Volta GPUs. This time the demo ran on a single Turing GPU.

“It turns out it was running on this, one single GPU,” Huang said to wild applause as he playfully blinded the camera by angling the gleaming Quadro RTX 8000’s reflective outer shroud. “This is the world’s first ray-tracing GPU.”

An AI for Beauty

At the same time, the Turing architecture’s Tensor Cores — processors that accelerate deep learning training and inferencing — provide up to 500 trillion tensor operations a second. This, in turn, powers AI-enhanced features — such as denoising, resolution scaling and video re-timing — included in the NVIDIA NGX software development kit.

“At some point you can use AI or some heuristics to figure out what are the missing dots and how should we fill it all in, and it allows us to complete the frame a lot faster than we otherwise could,” Huang said, describing the new deep learning-powered technology stack that enables developers to integrate accelerated, enhanced graphics, photo imaging and video processing into applications with pre-trained networks.

“Nothing is more powerful than using deep learning to do that,” Huang said.
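
To make that idea concrete, here is a tiny, illustrative PyTorch sketch of the pattern Huang describes — a convolutional network trained on noisy/clean frame pairs to fill in the “missing dots.” It is a toy stand-in, not NVIDIA’s actual NGX networks; the architecture and the synthetic training data are invented for illustration:

    import torch
    import torch.nn as nn

    # Toy denoiser: predicts the noise residual and subtracts it from the input.
    class TinyDenoiser(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, 3, padding=1),
            )

        def forward(self, x):
            return x - self.net(x)

    model = TinyDenoiser()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Stand-in data: random "clean" frames plus synthetic noise.
    clean = torch.rand(8, 3, 64, 64)
    noisy = clean + 0.1 * torch.randn_like(clean)

    for step in range(100):
        opt.zero_grad()
        loss = loss_fn(model(noisy), clean)
        loss.backward()
        opt.step()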

Raster, Faster

Turing also cranks through rasterization — the mainstay of interactive graphics — 6x faster than Pascal, Huang said, detailing how technologies such as variable-rate shading, texture-space shading, and multi-view rendering will provide for more fluid interactivity with large models and scenes and improved VR experiences.

Turning to a time-tested graphics teaching tool, Huang told the story of how visual effects have progressed by using the Cornell Box — a three-dimensional box inside which various objects are displayed. Huang showed off how Turing uses ray tracing to deliver complex effects — ranging from diffuse reflection to refraction to caustics to global illumination — with stunning photorealism.
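
For readers curious what that computation looks like in code, below is a toy Monte Carlo path tracer for a Cornell-Box-like scene built from spheres (large spheres standing in for walls, smallpt-style). It handles diffuse bounces only and is a from-scratch sketch of the light transport that RT Cores accelerate, not NVIDIA’s RTX implementation:

    import math, random

    def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def add3(a, b): return (a[0]+b[0], a[1]+b[1], a[2]+b[2])
    def mul(a, s): return (a[0]*s, a[1]*s, a[2]*s)
    def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
    def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
    def norm(a):
        l = math.sqrt(dot(a, a))
        return mul(a, 1.0/l)

    SPHERES = [  # (center, radius, emission, diffuse albedo)
        ((0.0, -1000.0, 0.0), 999.0, (0, 0, 0), (0.75, 0.75, 0.75)),  # floor (top at y = -1)
        ((0.0, 1.0, 0.0), 0.5, (0, 0, 0), (0.4, 0.4, 0.8)),           # blue ball
        ((0.0, 4.0, 0.0), 1.0, (12, 12, 12), (0, 0, 0)),              # overhead area light
    ]

    def hit(o, d):
        """Nearest ray-sphere intersection: (t, center, emission, albedo)."""
        best = None
        for c, r, e, alb in SPHERES:
            oc = sub(o, c)
            b = dot(oc, d)
            disc = b*b - (dot(oc, oc) - r*r)
            if disc < 0:
                continue
            t = -b - math.sqrt(disc)
            if t > 1e-4 and (best is None or t < best[0]):
                best = (t, c, e, alb)
        return best

    def radiance(o, d, depth=0):
        h = hit(o, d)
        if h is None or depth > 4:
            return (0.0, 0.0, 0.0)
        t, c, e, alb = h
        p = add3(o, mul(d, t))
        n = norm(sub(p, c))
        if dot(n, d) > 0:
            n = mul(n, -1.0)
        # Cosine-weighted sample of the hemisphere around the surface normal.
        u = norm(cross((1, 0, 0) if abs(n[0]) < 0.5 else (0, 1, 0), n))
        v = cross(n, u)
        phi, r2 = 2*math.pi*random.random(), random.random()
        rs = math.sqrt(r2)
        nd = add3(add3(mul(u, math.cos(phi)*rs), mul(v, math.sin(phi)*rs)), mul(n, math.sqrt(1-r2)))
        bounce = radiance(p, nd, depth+1)
        return add3(e, (alb[0]*bounce[0], alb[1]*bounce[1], alb[2]*bounce[2]))

    # One-pixel estimate: a ray from an eye point down toward the floor.
    random.seed(1)
    eye, d = (0.0, 3.0, 3.0), norm((0.0, -1.0, -1.0))
    n_samples = 500
    col = (0.0, 0.0, 0.0)
    for _ in range(n_samples):
        col = add3(col, radiance(eye, d))
    print("estimated radiance:", mul(col, 1.0/n_samples))

Even this single noisy pixel needs hundreds of samples, and every added bounce multiplies the ray count — which is exactly why dedicated ray-tracing hardware matters.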

Another showstopper: Huang showed off a video featuring a prototype Porsche — illuminated by lights that played across its undulating curves — celebrating the automaker’s 70th anniversary. While the photoreal demo looks filmed, it’s entirely generated on a Turing GPU running Epic Games’ Unreal Engine. “For the very first time, NVIDIA RTX is making it possible for us to bring accelerated workflows and acceleration to this market,” Huang said.

Creators looking to tackle such projects will have plenty of tools to choose from. In addition to three powerful Turing-based graphics cards — the $2,300 Quadro RTX 5000, the $6,300 Quadro RTX 6000 and the $10,000 Quadro RTX 8000 — Huang also introduced the Quadro RTX Server.

Equipped with eight Turing GPUs, it’s designed to slash rendering times from hours to minutes. Four 8-GPU Quadro RTX Servers can do the rendering work of 240 dual-core servers at one-fourth the cost, one-tenth the space and one-eleventh the power. “Instead of a shot taking five hours or six hours, it now takes just one hour,” Huang said. “It’s going to completely change how people do film.”

Summing up, Huang described Turing as the “world’s first ray-tracing GPU,” and “the single greatest leap that we have ever made in one generation.”

A Rousing Cybernetic Strut

Huang ended his talk with a demo that had members of the audience dancing their way out the door. Dubbed Sol, it showed a pair of robotic assistants placing glossy white space-age armor onto a lone figure, each piece finding its place with a satisfying click.

As the protagonist ascends to a hatch — ray-traced reflections of the futuristic environment all around him gleaming from his suit and visor — the now unsupervised robots begin to dance to the immortal, brass-backed 16-bar blues chord progression of 1977’s “Boogie Shoes” by KC and the Sunshine Band.

Hearing the music, the armored figure returns, cocks his head in surprise, and — to the audience’s delight — demonstrates his own loose-limbed, fluid dance moves.

As the screen fades to black, and then to an image of the new Quadro RTX GPU, the music continues to pump. The message is clear: now it’s your turn to take what you’ve seen and dance.


NVIDIA’s Jensen Huang Takes Center Stage at SIGGRAPH 2018

For anyone whose life’s work involves computer graphics, there’s no event in the world like the annual SIGGRAPH conference.

This year, NVIDIA founder and CEO Jensen Huang will take the stage at the show in Vancouver, Canada, to share how AI, real-time ray tracing and VR are transforming the computer graphics industry.

His talk will begin at 4pm PT on Monday, Aug. 13, at the Vancouver Convention Center, unofficially kicking off the show’s five-day immersion into the latest innovations in CG, animation, VR, games, mixed reality and emerging technologies.

The SIGGRAPH show floor opens the next day — and NVIDIA green will be everywhere. Find us in NVIDIA Booths 801 and 501. And see our technology power workflows in partner booths across the show floor, where we’re demonstrating the latest technologies incorporating AI, real-time ray tracing and virtual reality.

If you can’t make it to Vancouver, be sure to catch our livestream of Jensen’s keynote.

Move to the Head of the Class

Our courses, talks and tutorials throughout the week at SIGGRAPH (mostly in Room 220-222) will showcase how AI, real-time ray tracing and VR can make your work easier. A few highlights:

Tuesday, Aug. 14, 2-5:30pm — NVIDIA Presents: GPU Ray Tracing for Film and Design
Explore recent developments in GPU-accelerated, high-quality, interactive ray tracing to support the visual quality and scene complexity required for visual effects, animation and design. NVIDIA, Autodesk, Chaos Group, Isotropix, Pixar and Weta Digital will be among those presenting.

Wednesday, Aug. 15, 9:30am-12:30pm — NVIDIA Presents: Real-Time Ray Tracing
Researchers and engineers from NVIDIA, joined by leading game studios Epic Games and EA/SEED, will present state-of-the-art techniques for ray tracing, sampling and reconstruction in real time. This includes recent advances that promise to dramatically advance the state of ray tracing in games, simulation and VR applications.

Wednesday, Aug. 15, 4-4:25pm — Tackling the Realities of Virtual Reality
David Luebke, VP of graphics research at NVIDIA, will describe the company’s vision for the future of virtual and augmented reality. He’ll review some of the “realities of virtual reality” — including challenges presented by Moore’s law, battery technology, optics, and wired and wireless connections. He’ll discuss their implications and opportunities, such as foveation and specialization. He’ll conclude with a deep dive into how rendering technology, such as ray tracing, can evolve to solve the realities of VR. (Note: This talk takes place at NVIDIA booth 801.)

Thursday, Aug. 16, 9:30am-12:30pm — NVIDIA Presents: Deep Learning for Content Creation
Join NVIDIA’s top researchers for an examination of the novel ways deep learning and machine learning can supercharge content creation for films, games and advertising.

See our full schedule of SIGGRAPH talks and courses.


Intel Applauds Bipartisan Congressional Effort to Accelerate Quantum Computing Research

Intel’s director of quantum hardware, Jim Clarke, holds a 17-qubit superconducting test chip. (Credit: Intel Corporation)

What’s New: This week, the U.S. Senate is reviewing its version of the National Quantum Initiative Act (S. 3143), a bipartisan bill to create a 10-year coordinated federal program to accelerate quantum research and development for the economic and national security of the United States. The bill aims to ensure U.S. leadership in quantum information science by supporting research and development, improving interagency planning and coordination, promoting public-private partnerships, and promoting the development of international standards.

“When it comes to quantum computing research, we’re at mile one of a marathon. The U.S. has long been at the cutting edge of technology, a fact that has propelled our progress for decades. As nations around the world race to lead in quantum information science, the U.S. will require collaboration of industry, academia and the federal government to keep pace. The National Quantum Initiative Act is a great step forward, and Intel applauds the bipartisan leadership in Congress on their progress.”
–Jim Clarke, director of quantum hardware, Intel

Why It’s Important: The National Quantum Initiative Act will ensure the United States remains competitive in a global race to build quantum technologies.

Quantum computing is an exciting new computing paradigm with unique problems to be solved and new physics to be discovered. Academia, governments and companies are racing to advance quantum science given its potential to solve problems beyond the reach of conventional computers. For example, quantum computers may simulate nature to advance research in chemistry, materials science and molecular modeling.

Intel’s Context: In 2015, Intel initiated a significant investment in quantum research. Today, Intel is making fast progress toward developing commercially viable quantum computing systems, including the introduction of a 49-qubit superconducting test chip called “Tangle Lake.”

Federal Context: This week’s progress in the Senate follows progress in the U.S. House of Representatives on its version of the bill, H.R. 6227. In June, the U.S. House Science, Space, and Technology Committee unanimously approved the legislation.

More Context: Quantum Computing at Intel


1,001 Interns: Students from Around the Globe Bolster NVIDIA

Face time with top execs, pro sports tickets, free NVIDIA SHIELD TVs.

These are just some of the perks for our summer interns, who spend three months or more working with world-class engineers and burnishing their skills while tackling some of the company’s top projects. They also volunteer at events run by the NVIDIA Foundation — such as one last week, when scores of interns packaged more than 25,000 meals to feed the hungry.

This summer, NVIDIA had a record number of interns, with nearly 1,000 students descending on dozens of offices around the world. These internships, offered across our range of businesses, are customized to suit specific skill sets, giving students a chance to tackle real-world projects designed by individual teams.

In honor of National Intern Day, we’ve featured the experiences of three of our many exceptional interns below.

Eddy Dreams of Space

Since the age of 12, when she first saw black female astronaut Mae Jemison on TV, Camille Eddy has dreamed of going into space. In pursuit of that goal, she’s since led an undergraduate research team at NASA, joined a college panel to educate undergrads about space research and delivered numerous speeches about cultural bias in engineering.

Today, the Boise State University undergrad works on mechanical design for the NVIDIA DRIVE team.

On a typical day, Eddy can be seen using Creo, computer-aided design software, to create 2D drawings or review assemblies of hardware products. She’s also had the opportunity to retrofit vehicles, working on designs for gaskets, stands and anything else the team needs.

Eddy said her experiences at NVIDIA have given her lots of exposure to prototype engineering and 3D printing, while contributing to emerging technology that’s never been seen before.

“The skills I’m gaining from these projects are easily transferable,” Eddy said. “I’m going to walk away with the full feeling of knowing that I’ve truly learned skills that I can use to achieve my goals.”

Kadhem Burns the Midnight Oil

Even before Hussain Kadhem began his internship at NVIDIA, a previous stint elsewhere taught him to use CUDA, NVIDIA’s parallel computing platform and programming model for working with GPUs. So it was fate when, this May, the University of Toronto undergraduate joined our Linux software team to update its graphics driver software.

Having grown up blind, Kadhem was always surrounded by assistive technology and did much of his school work on computers. Gradually, he became more fascinated with exploring how these technologies worked, and began programming in middle school.

Now, he’s tasked with updating part of the Linux graphics driver software that was written 15 years ago. His job has been to scrutinize the driver, specifically in areas where the code may be old and weak, to mitigate potential security exploits. The project has been grueling, according to Kadhem — who stays in the office past midnight about three times a week — but he’s gained many invaluable experiences.

“I came here to work and get more experience, but I didn’t expect to be involved in a truly impactful and central project,” Kadhem said. “I’m really happy that I’m making a tangible contribution to NVIDIA while I’m here.”

Sun Gets Hands-On to Go Hands-Free

A University of Waterloo undergrad, Tom Sun is working with NVIDIA’s automotive team to improve the ability of autonomous vehicles to change lanes. This task involves driving hands-free on the highway — a sight that he reports has shocked many drivers passing by.

One of the main challenges with autonomous vehicles is figuring out how to change lanes and enter/exit ramps, according to Sun. Currently, AVs use machine learning to detect lanes, which is often a costly and difficult task. However, NVIDIA’s team uses map data, which works with machine learning techniques to localize the car and create a drivable path through different lanes.
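
As a simplified illustration of that map-based idea, here is a sketch of steering along a mapped lane centerline using the textbook pure-pursuit rule. This is a classroom controller with invented wheelbase and lookahead values, not NVIDIA’s DRIVE software:

    import math

    WHEELBASE = 2.9    # meters -- hypothetical sedan
    LOOKAHEAD = 12.0   # meters -- how far ahead to aim on the centerline

    def pure_pursuit(pose, centerline):
        """pose = (x, y, heading in radians); centerline = map-frame waypoints."""
        x, y, yaw = pose
        # Pick the first waypoint at least LOOKAHEAD meters away.
        target = next(((wx, wy) for wx, wy in centerline
                       if math.hypot(wx - x, wy - y) >= LOOKAHEAD), None)
        if target is None:
            return 0.0
        # Bearing to the target, expressed in the vehicle frame.
        alpha = math.atan2(target[1] - y, target[0] - x) - yaw
        # Pure-pursuit steering law for a bicycle-model vehicle.
        return math.atan2(2.0 * WHEELBASE * math.sin(alpha), LOOKAHEAD)

    # Example: car at the origin heading along +x, lane curving gently left.
    lane = [(s, 0.002 * s * s) for s in range(0, 60, 2)]
    print("steering angle (rad):", pure_pursuit((0.0, 0.0, 0.0), lane))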

Sun’s first time in an autonomous vehicle occurred while driving 60 mph on the highway. His manager simply plugged in the code Sun had been working on and let go of the wheel.

“The wheel just started turning by itself — it was really amazing to see my code in action, even though I was also freaking out because we were driving on the highway without hands on the wheel,” Sun said.

An inveterate tinkerer, Sun said that he especially values NVIDIA’s opportunities to work on real code and autonomous vehicles. He hopes to continue on this career path.

“I feel like my work at NVIDIA has actually made an impact on the company’s future, even though I’m only an intern. Also, despite the fact that NVIDIA is a big company, I can just go to my neighbor and ask for help,” Sun said. “I’ve learned so much here; everyone gets their work done but it’s still a super-fun environment to be in.”

Got plans for summer 2019? Find out more about NVIDIA’s internship program.


Intel Reports Second-Quarter 2018 Financial Results

Intel Corporation’s second-quarter 2018 earnings news release and presentation are available on the company’s Investor Relations website. The earnings conference call for investors begins at 2 p.m. PDT today; a public webcast will be available at www.intc.com.

