
What is an APU?

An APU integrates a CPU and a GPU on the same die, thus improving
data transfer rates between these components while reducing power
consumption. APUs can also include video processing and other
application-specific accelerators. Examples: Intel's Sandy Bridge,
AMD Fusion and NVIDIA's Project "Denver".
Make more sense now? An APU is the combination of a CPU
(generally a multi-core one), a graphics processing unit, and then
some way to get them to play together nicely.
Why do this in the first place? Because it turns out that GPUs are
good at things besides graphics, so the CPU can offload computing
tasks to them. To optimize this cooperation, bottlenecks between the
CPU and the GPU had to be removed; the result is the APU.

 
 
AMD Fusion: the Accelerated Processing Unit

AMD's new Accelerated Processing Unit concept is a fusion of the
traditional x86 processor core (CPU) and the graphics processing unit
(GPU). By bringing both the CPU and GPU together, many of the
latencies of CPU-to-GPU communication can be reduced - plus both
processing arrays can access the same data without having to copy it
over slow or high-latency system interconnects. AMD's purchase of
ATI Technologies in 2006 was made with the Fusion APU concept in
mind.




APUs are also coming to the mainstream desktop space, with existing
x86 cores from the Athlon II processor lineup being combined with
DirectX 11 GPU cores offering graphics capabilities well in excess of
current motherboard-integrated graphics - and Intel's Sandy Bridge
takes a similar approach.

This processing power is not being used only for enhanced media
consumption but also for application acceleration - AMD Fusion APUs
currently accelerate more than 50 popular titles, with more on the
way. This is made possible by AMD's support for hardware-accelerated
OpenCL on its GPU technology - an open standard that any vendor can
benefit from using.
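
To make the offload path concrete, here is a minimal OpenCL host sketch in C. It is illustrative only: the scale kernel, the buffer size, and the use of CL_MEM_USE_HOST_PTR (which on some APU drivers lets the GPU work on the application's own memory rather than on a copy) are assumptions chosen for the example, not details of any of the accelerated titles mentioned above, and error handling is omitted for brevity.

/* Minimal OpenCL offload sketch (illustrative; error handling omitted). */
#define CL_TARGET_OPENCL_VERSION 120
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

static const char *kernel_src =
    "__kernel void scale(__global float *data, float factor) {\n"
    "    size_t i = get_global_id(0);\n"
    "    data[i] = data[i] * factor;\n"
    "}\n";

int main(void) {
    enum { N = 1024 };
    float *host_data = malloc(N * sizeof(float));
    for (int i = 0; i < N; ++i) host_data[i] = (float)i;

    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

    /* CL_MEM_USE_HOST_PTR asks the runtime to work from the application's
       own allocation; on an APU with a suitable driver this can avoid
       copying the data across the CPU/GPU boundary. Whether the copy is
       really skipped is implementation-defined (real zero-copy typically
       also wants page-aligned allocations), so treat this as a hint. */
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_USE_HOST_PTR,
                                N * sizeof(float), host_data, NULL);

    /* Build the kernel and launch one work-item per element. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel kernel = clCreateKernel(prog, "scale", NULL);
    float factor = 2.0f;
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);
    clSetKernelArg(kernel, 1, sizeof(float), &factor);
    size_t global = N;
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);

    /* Map the buffer to read the result; with a true zero-copy buffer the
       map is cheap because CPU and GPU share the same physical memory. */
    float *out = clEnqueueMapBuffer(queue, buf, CL_TRUE, CL_MAP_READ, 0,
                                    N * sizeof(float), 0, NULL, NULL, NULL);
    printf("data[10] = %f\n", out[10]);   /* expect 20.0 */
    clEnqueueUnmapMemObject(queue, buf, out, 0, NULL, NULL);

    clReleaseKernel(kernel);
    clReleaseProgram(prog);
    clReleaseMemObject(buf);
    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    free(host_data);
    return 0;
}

The interesting part on an APU is the buffer: because the CPU and GPU sit on the same die and can share memory, the runtime has the option of skipping the copy that a discrete card connected over PCI Express would normally require.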

Future developments of the AMD Fusion APU concept will bring far
higher graphics and media performance to the basic desktop platform,
as less power will be needed to deliver higher performance through
AMD Accelerated Parallel Processing (APP). Software companies can
leverage AMD APP by using common, open-standard technologies such as
Microsoft's DirectCompute 5 (part of DirectX 11) and OpenCL.

The Fusion APU concept is a fundamental shift in thinking about
what components are important for delivering the best user
experience with low power and low cost, as well as outright top
performance.
Although the future is Fusion, AMD's new x86 architecture, codenamed
Bulldozer (for high-performance desktops and servers), is launching this
year without integrated GPUs. This is widely predicted to change
in 2012, when the new Bulldozer+ cores will be mated with SIMD
arrays to create high-performance enthusiast parts for gamers and
workstation users.


Nvidia's Project Denver: ARM-based CPU cores integrated with GPUs




Nvidia Corp., a leading designer of graphics processors and
multimedia system-on-chips, on Wednesday announced plans to
develop its own ARM-based micro-architecture. The company intends
to integrate future custom central processing unit (CPU) cores into its
graphics processing units (GPUs) and install the latter into both
personal computers and servers.

Known under the internal code-name "Project Denver", this initiative
features an Nvidia CPU running the ARM instruction set, which will
be fully integrated on the same chip as the Nvidia GPU. This new
processor stems from a strategic partnership, also announced today,
in which Nvidia has obtained rights to develop its own high
performance CPU cores based on ARM's future processor
architecture. In addition, Nvidia licensed ARM's current Cortex-A15
processor for its future-generation Tegra mobile processors.

Nvidia's intention to integrate ARM-based microprocessor cores had
been known for over a year now. Back in November 2010, the
company even unveiled plans to develop a so-called Echelon design
chip that incorporates a large number (~1024) of stream cores and a
smaller (~8) number of latency-optimized CPU-like cores on a single
chip. As a result, the current announcement is just a formal
confirmation of Nvidia's plans to enter the market of accelerated
processing units (APUs, the chips that feature both CPU and GPU
cores on the same die) eventually.

"ARM is the fastest-growing CPU architecture in history. This marks


the beginning of the Internet Everywhere era, where every device
provides instant access to the Internet, using advanced CPU cores
and rich operating systems. ARM's pervasiveness and open business
model make it the perfect architecture for this new era. With Project
Denver, we are designing a high-performing ARM CPU core in
combination with our massively parallel GPU cores to create a new
class of processor," said Jen-Hsun Huang, president and chief
executive officer of Nvidia.

While it remains to be seen how successful the ARM architecture will be
in servers, it should be stressed that Nvidia makes no promises about
any actual products or time frames. Perhaps, by the time Project
Denver evolves into actual devices, the server market will be
completely different from today's, and highly parallel accelerators will
be more important than x86-based AMD Opteron or Intel
Xeon microprocessors.

"Nvidia is a key partner for ARM and this announcement shows the
potential that partnership enables. With this architecture license,
Nvidia will be at the forefront of next generation SoC design,
enabling the Internet Everywhere era to become a reality," said
Warren East, ARM chief executive officer.

Suitable workloads can also be offloaded from the integrated APU cores
onto discrete add-in board cores.

AMD: APUs will challenge standalone CPUs




Standalone central processing units (CPUs) for client computers will
be challenged by accelerated processing units (APUs) with integrated
graphics cores, a high-ranking officer at AMD said in an interview.
APUs provide better performance because of their ability to efficiently
process both parallel and serial data.

"I think APUs will definitely challenge standalone CPUs. I believe


that the future of consumer as well as commercial computing
environments is characterized by the ability to present a compelling
visual experience. Taking a GPU core and a CPU core and using
them together on one chip will definitely challenge standalone
CPUs," said Neal Robison, senior director of content and application.

Accelerated processing units integrate many parallel GPU processing
elements as well as several "fat" x86 CPU cores that can efficiently
process typical data. Many performance-demanding applications
nowadays use parallel processing and benefit from both
multi-core CPUs and many-core GPUs. Nonetheless, there are
loads of programs that rely on advanced microprocessors and have no
need for GPUs.
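
As a small, hypothetical illustration of the kind of program that gains nothing from a GPU, consider a loop-carried dependency: each step needs the previous result, so the work cannot be spread across many small cores and is served best by one fast, "fat" CPU core.

#include <stdio.h>

/* A running recurrence: iteration i cannot start until iteration i-1 has
   finished, so there is no parallelism for a GPU to exploit. */
double serial_recurrence(double x, long steps) {
    for (long i = 0; i < steps; ++i) {
        x = 0.5 * x + 1.0;   /* the next value depends on the current one */
    }
    return x;
}

int main(void) {
    /* Converges toward 2.0; the dependency chain forces serial execution. */
    printf("%f\n", serial_recurrence(0.0, 1000000));
    return 0;
}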

But while APUs will compete strongly in low-cost systems, they will
neither challenge nor replace the discrete graphics cards found in
advanced personal computers.

"I do not think that APUs will challenge discrete GPUs on anything,
but on the lowest-end systems. When you look at adding a discrete
GPU that enhances performance of the graphics side, it makes a huge
amount of sense as it scales [performance] on a wide amount of
applications because of the rich visual experience that everybody
expects now when they are actually using their computing device,"
added Mr. Robison.

CPUs and GPUs: different strengths

The CPU has been the heart of every PC since Intel's x86 processors
became popular over two decades ago. Yet the CPU does have
weaknesses, the greatest of which is its relatively linear data
execution. Graphics processors, by comparison, consist of many small
cores that process data simultaneously. This makes it easier for them
to perform certain tasks, like video decoding and 3D graphics.
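
As a hypothetical illustration of that difference, the per-pixel brightness adjustment below is written first as a serial CPU loop; the comment then shows the same work as an OpenCL C kernel, where the loop disappears because each of the GPU's many small cores handles one pixel.

#include <stdio.h>
#include <stddef.h>

/* Serial CPU version: a single core walks the pixels one after another. */
void brighten_serial(unsigned char *pixels, size_t count, int delta) {
    for (size_t i = 0; i < count; ++i) {
        int v = pixels[i] + delta;
        pixels[i] = (unsigned char)(v > 255 ? 255 : v);
    }
}

/* GPU version of the same work (OpenCL C, shown for comparison): no loop;
   one work-item per pixel, thousands of which execute at the same time.

   __kernel void brighten(__global uchar *pixels, int delta) {
       size_t i = get_global_id(0);
       int v = pixels[i] + delta;
       pixels[i] = (uchar)(v > 255 ? 255 : v);
   }
*/

int main(void) {
    unsigned char image[8] = {0, 50, 100, 150, 200, 250, 255, 10};
    brighten_serial(image, 8, 20);
    printf("%d %d\n", image[0], image[5]);   /* 20 and 255 (clamped) */
    return 0;
}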

AMD's roadmap: beyond multi-core to Accelerated Processing Units

During its four-hour Financial Analyst Day presentation, AMD
revealed new elements of its processor roadmap spanning the next
couple of years, as well as its plans to scale beyond the current multi-
core model. Intel talked about processors with "tens to hundreds of
cores" at IDF earlier this year, but AMD believes the core race is just
a repeat of the megahertz race and that adding more cores isn't the
best way to go about scaling processor performance in the future.
Instead, AMD is cooking up what it calls "Accelerated Processing
Units":
Accelerated Processing Units, or APUs, will be multi-core chips that
include any mix of processor cores and other dedicated processors.
Fusion, AMD's integrated CPU and graphics processor, is AMD's
first step in that direction. However, the company eventually intends
to add more specialized cores that can handle tasks other than general-
purpose computing and graphics. AMD didn't give any specific
examples, but one could easily imagine future Fusion-like chips with
cores for physics processing, audio/video encoding, and heck, maybe
even AI acceleration.

Whatever it ends up doing, AMD says it will be able to tailor APUs
with different combinations of core types to specific markets. Fusion,
for example, will be aimed chiefly at the mobile market when it
arrives in 2009. Enthusiasts needn't worry, though: AMD says it
doesn't plan to integrate high-end GPUs and CPUs into massive
silicon fireballs, because both production costs and power envelopes
for such chips would be too high.

AMD also revealed some of its more immediate plans for the
processor market by showing off new desktop and mobile roadmaps.



 


What does APU stand for?

The term APU stands for Accelerated Processing Unit. At the
moment, this is a term that only AMD is using for its products. Intel's
recently released update to its processors also qualifies as an APU;
Intel simply seems unwilling to use the term. That's
understandable, since the company has been known as the world's
leading CPU maker for years.
An APU is simply a processor that combines CPU and GPU elements
into a single architecture. The first APU products being shipped by
AMD and Intel do this without much fuss by adding graphics
processing cores into the processor architecture and letting them
share a cache with the CPU. While both AMD and Intel are using
their own GPU architectures in their new processors, the basic
concepts and reasons behind the decision to bring a GPU into the
architecture remain the same.

The desktop roadmap is more or less self-explanatory, placing the
introduction of quad-core desktop CPUs in mid-2007 and the launch
of derived dual-core models in the latter part of the year. All new
chips will have support for HyperTransport 3.0 and DDR2 memory,
and accompanying motherboards will boast PCI Express 2.0 support.
AMD apparently doesn't intend to switch sockets or move to DDR3
memory until the middle of 2008.

On the mobile front, things are a little more interesting. Early next
year, AMD will launch a new mobile chip code-named "Hawk" that
will boast lower power utilization than current Turion 64 X2 and
Mobile Sempron chips. The chip will be paired with platforms that
will have support for hybrid hard drives as well as a somewhat novel
concept dubbed hybrid graphics.

According to AMD, notebooks with hybrid graphics will include both
discrete and integrated graphics processors. When such notebooks
are unplugged, their integrated graphics will kick in and disable the
discrete GPU. As soon as the notebook is plugged back into a power
source, the discrete GPU will be switched on again, apparently
without the need to reboot. AMD says this technology will enable
notebooks to provide the "best of both worlds" in terms of
performance and battery life.
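
The switching behaviour AMD describes can be pictured as a simple event handler. The sketch below is purely conceptual: the function names and the power-change callback are invented for illustration and are not AMD's actual driver interface.

#include <stdbool.h>
#include <stdio.h>

/* Conceptual model of hybrid graphics: render on the discrete GPU when on
   AC power, fall back to the integrated GPU on battery, no reboot needed.
   All identifiers here are hypothetical. */
typedef enum { GPU_INTEGRATED, GPU_DISCRETE } active_gpu_t;

static active_gpu_t active_gpu = GPU_INTEGRATED;

/* Called by the (hypothetical) platform layer when the power source changes. */
void on_power_source_change(bool on_ac_power) {
    active_gpu_t wanted = on_ac_power ? GPU_DISCRETE : GPU_INTEGRATED;
    if (wanted != active_gpu) {
        active_gpu = wanted;
        /* In a real driver this would power the discrete GPU up or down and
           hand the display pipeline over to the chosen GPU. */
        printf("Switched to %s graphics\n",
               wanted == GPU_DISCRETE ? "discrete" : "integrated");
    }
}

int main(void) {
    on_power_source_change(true);   /* plugged in: discrete GPU takes over    */
    on_power_source_change(false);  /* unplugged: integrated GPU, lower power */
    return 0;
}

The key point is that the hand-over happens at runtime, which is what lets the notebook keep running without a reboot.
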
In late 2007, AMD will introduce a mobile chip dubbed Griffin that
will support split power planes and HyperTransport 3.0, meaning it'll
almost certainly be based on AMD's quad-core architecture. Griffin
will be accompanied by platforms supporting PCI Express 2.0,
DirectX 10-class integrated graphics, DisplayPort video output, and
a Universal Video Decoder - essentially a dedicated video processor.

That about covers what AMD unveiled about its upcoming processors
today. The presentation did include a fair amount of discussion about
the company's financial performance and consumer electronics plans,
though; we'll fill you in on those topics a little later.

 
Technologies used:
• Use of the 32nm process technology provides power efficiency.
• Use of SOI and power-gating technology reduces core power consumption.
• Use of a digital power meter.
• Use of a smaller number of transistors.

Features:
• Combines a multi-core processor and high-end graphics on a single
silicon die.
• Improves data transfer rates.
• Reduces power consumption.
• Includes video processing and other application-specific acceleration.
• Offers high computing capability.

What are the benefits?
AMD and Intel wouldn't go to the trouble of integrating a GPU into
their CPU architectures if there weren't some benefits to doing so,
but sometimes the benefit of a new technology seems to be focused
more on the company selling the product than the consumer.
Fortunately, the benefits of the APU are dramatic and will be noticed
by end users.

Obviously, improved performance is one advantage. The graphics
hardware on current APUs is not meant to be competitive with high-end
or even mid-range discrete graphics cards, but it is better
than previous integrated graphics processors. Intel HD Graphics
3000, the fastest graphics option available on the company's newest
processor, is two to three times quicker than the previous Intel HD
Graphics solution, which was on the processor die but not integrated
into the architecture. This also makes it possible to include new
features, like Intel's QuickSync video transcoding technology.
Another advantage brought by APUs is improved power efficiency.
Integrating the GPU into the architecture makes it possible to share
resources and achieve the same results with less silicon. This means
an APU can replicate the performance of a system equipped with a
low-end discrete graphics card while using far less power. Early
benchmarks of Intel Sandy Bridge and AMD Fusion laptops make this
advantage obvious; systems equipped with these processors
have better battery life than similar systems saddled with a CPU and a
separate discrete or integrated graphics processor.

   
Conclusion

The APU is the future of processor design. The only question at this
point is the term itself. While Intel's new processors also fit the
definition of an APU, the term APU is only used by AMD in its
marketing. If Nvidia also decides this is a term worth using as it
develops its new processor, it may have some legs. Otherwise, I
wouldn't be surprised if Intel uses its considerable market-share might
to squash it.
But whatever this new generation of hardware is called five years
from now, the results are the same. APUs are here, they're awesome,
and they'll make it easier for users to enjoy media without consuming
unreasonable amounts of power.
