Tegra Team Gears Up for Growth

As our Tegra mobile processor becomes increasingly important across our businesses, we’re taking steps to ensure that it’s supported by the most efficient organizational structure.

We announced today that Deepu Talla will head the Tegra business unit. He succeeds Phil Carmack, who, after 10 years building Tegra from scratch into a three-quarters-of-a-billion-dollar business, is leaving NVIDIA to become CEO of one of our partner companies.

Prior to joining NVIDIA earlier this year as VP of Tegra business development, Deepu spent more than 10 years at Texas Instruments, most recently as GM of its OMAP mobile computing business.

At the same time, we announced that we’re folding Tegra’s design team into our larger engineering organization, which will now be fully centralized across the company.

This new org structure reflects the central importance of Tegra – not only in our mobile strategy, but increasingly in PCs, gaming, auto and beyond.

Chip Shot: Intel Capital Portfolio Company ASPEED Celebrates Initial Public Offering

Intel Capital congratulates ASPEED Technology Inc. (5274: Taiwan) on its initial public offering (IPO) on the GreTai Securities Market (GTSM) in Taiwan. Intel Capital invested in ASPEED in 2011 with the goal of helping the company grow its leadership within the semiconductor ecosystem. ASPEED used the investment to expand its research and development team and extend the company’s marketing programs. This IPO marks another successful collaboration between Intel Capital and the IT industry in Taiwan and is Intel Capital’s first portfolio company IPO this year. For more information, view the release.

Introducing hUMA, Sophisticated New Memory Architecture from AMD

Over the past 50 years, we’ve seen our computers make significant advancements; they are faster, smaller and lighter. Despite these important improvements, the way we interact with our devices is still the same. We type, we click, we touch, we swipe… but we don’t truly interact. Our devices don’t recognize us, they don’t understand us and they don’t sense us. Have you ever wished that your device could simply scan your face to log you in, or understand exactly what you are asking? Well, what sounded impossible in the past is now within our reach, and let me tell you why.

The key to realizing these experiences lies in unlocking the full computing potential of today’s mainstream processors. We know today’s GPUs have considerable compute capabilities, measured in TFLOPS. But all that horsepower remains relatively untapped for all but the most graphically demanding applications. Through Heterogeneous System Architecture (HSA), we want to unlock those TFLOPS for a much broader range of uses.

Although CPUs and GPUs have coexisted on the same piece of silicon (better known as an APU) for a few years now, the CPU and GPU have not been ‘equal citizens’. HSA is an intelligent computing architecture that enables the CPU, GPU and other processors to work together in harmony on a single piece of silicon by seamlessly moving the right tasks to the best-suited processing element. HSA creates an environment that allows the GPU to be used as fluidly as the CPU.
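To make the idea of “moving the right tasks to the best-suited processing element” concrete, here is a purely illustrative Python sketch. It is not an AMD or HSA API: the function names, the “CPU”/“GPU” labels and the dispatch heuristic are all invented for this example. It models a toy runtime that routes branch-heavy serial work to a latency-optimized core and large data-parallel work to a throughput-optimized one:

```python
# Illustrative sketch of heterogeneous task routing (not an HSA API).
# "CPU" stands for a latency-optimized serial path; "GPU" stands for a
# throughput-optimized data-parallel path.

def run_serial(data):
    # Branch-heavy, dependent work: each step uses the previous result,
    # so it maps poorly onto thousands of GPU threads.
    total = 0
    for x in data:
        total = total + x if x % 2 == 0 else total - x
    return total

def run_data_parallel(data):
    # Independent per-element work: maps naturally onto many GPU threads.
    return sum(x * x for x in data)

def dispatch(task_kind, data):
    # Toy scheduler: offload only data-parallel work that is large enough
    # to be worth it; everything else stays on the serial engine.
    if task_kind == "data_parallel" and len(data) >= 1024:
        return "GPU", run_data_parallel(data)
    if task_kind == "data_parallel":
        return "CPU", run_data_parallel(data)  # too small to offload
    return "CPU", run_serial(data)

engine, result = dispatch("data_parallel", list(range(2048)))
# Large data-parallel work lands on the throughput engine ("GPU").
```

A real HSA runtime makes this decision with far more information (queue depth, data locality, power budget), but the shape of the problem is the same: pick the element best suited to the task.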

In order for HSA to be powerful and power-efficient, we still have two major obstacles to overcome in hardware:

  1. Unlocking the GPU’s compute performance, and
  2. Removing the bottlenecks the GPU faces when accessing system memory.

Enabling high-bandwidth access to memory is arguably essential to unlocking this compute performance. Breaking down the bottlenecks in how the GPU accesses memory is important to the future of programming because it allows applications to efficiently move the right tasks to the best-suited processing element. heterogeneous Uniform Memory Access, or hUMA, is the first step in bringing a heterogeneous compute ecosystem to life. hUMA is a highly sophisticated shared memory architecture used in APUs (Accelerated Processing Units). In a hUMA architecture, the CPU and GPU (inside the APU) have full access to the entire system memory, and all processing cores share a single memory address space. hUMA’s main features include:

  1. Access to the entire system memory space: CPU and GPU processes can dynamically allocate memory from the entire memory space.
  2. Pageable memory: the GPU can take page faults and is no longer restricted to page-locked memory.
  3. Bi-directional coherent memory: any updates made by one processing element will be seen by all other processing elements, whether GPU or CPU.
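The difference between the traditional copy-based model and hUMA’s shared address space can be sketched in a few lines of Python. This is purely illustrative: the class names `DiscreteModel` and `UnifiedModel` are invented for this example and ordinary Python objects stand in for CPU and GPU memory, but the contrast (explicit copies versus one coherent buffer) is the point hUMA addresses:

```python
# Illustrative sketch only: Python lists stand in for physical memory.
# DiscreteModel and UnifiedModel are invented names, not AMD APIs.

class DiscreteModel:
    """Traditional model: the GPU works on a *copy* of CPU memory."""
    def __init__(self, data):
        self.cpu_memory = data
        self.gpu_memory = None

    def copy_to_gpu(self):
        self.gpu_memory = list(self.cpu_memory)   # explicit copy over the bus

    def gpu_double(self):
        self.gpu_memory = [x * 2 for x in self.gpu_memory]

    def copy_back(self):
        self.cpu_memory = list(self.gpu_memory)   # explicit copy back


class UnifiedModel:
    """hUMA-style model: CPU and GPU share one coherent address space."""
    def __init__(self, data):
        self.memory = data   # a single shared buffer, no copies

    def gpu_double(self):
        for i, x in enumerate(self.memory):
            self.memory[i] = x * 2   # immediately visible to the CPU


discrete = DiscreteModel([1, 2, 3])
discrete.copy_to_gpu()
discrete.gpu_double()
# The CPU still sees stale data until the explicit copy back:
assert discrete.cpu_memory == [1, 2, 3]
discrete.copy_back()
assert discrete.cpu_memory == [2, 4, 6]

unified = UnifiedModel([1, 2, 3])
unified.gpu_double()
# Coherent shared memory: no copy step is needed.
assert unified.memory == [2, 4, 6]
```

In the unified case there is nothing to keep in sync, which is exactly what pageable, bi-directionally coherent memory buys the programmer.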

HSA’s revolutionary memory architecture sets a new standard for high-speed GPU access to system memory, removing the obstacle of a GPU “starved for data”.

HSA will empower software developers to innovate easily, unleashing new levels of performance and functionality on all your modern devices and leading to powerful new experiences such as visually rich, intuitive, human-like interactivity. Through HSA, we want to enable mainstream languages and programming models and bring heterogeneous compute to the broad development community. That will in turn enable broad performance uplift across mainstream applications and usages.

HSA has a number of unique features that enable powerful new experiences, such as free-flowing human interaction with your devices, and new levels of performance, functionality and efficiency on all modern computing devices. The following video shows the full potential of HSA technology (facial login, voice recognition and gesture recognition, to name a few) and may give people a fresh reason to buy their next PC.

Watch a video on HSA Technology here.

Sasa Marinkovic  is a Senior Marketing Manager at AMD. His postings are his own opinions and may not represent AMD’s positions, strategies or opinions. Links to third party sites, and references to third party trademarks, are provided for convenience and illustrative purposes only. Unless explicitly stated, AMD is not responsible for the contents of such links and no third party endorsement of AMD or any of its products is implied.