
Intel: 14nm Atom chip in 2014

May. 18, 2011 (5:32 am) By: Matthew Humphries

Intel dominates the desktop market with AMD continuing to bite at its heels. But in mobile
it’s a very different story. ARM chips dominate, and Intel is attempting to gain a foothold
in the growing smartphone and tablet markets.

The problem for Intel is that it currently doesn't have any processors on the market that give
manufacturers a decent alternative to ARM chips. That is set to change over the next three
years, however, as Intel has just updated its roadmap for the Atom processor line.

Intel has realized its plan for future Atom chip updates wasn’t good enough, and CEO Paul
Otellini admitted this during an analyst meeting yesterday. The rethink of the roadmap for
the processor line now includes three new Atom chips for 2012, 2013, and 2014.

Today’s Atom processors are 45nm. That will change next year with the rollout of the 32nm
Medfield processor, the first to be targeted at tablets and smartphones as well as the Atom’s
traditional place inside netbooks. Then in 2013 we can expect a 22nm Atom chip named
Silvermont, which will demonstrate a significant decrease in power requirements from
today’s 40W down to just 15W.

In 2014 we will see Airmont, a 14nm Atom chip which is sure to bring further power
savings and should make Intel a viable alternative to ARM across all smartphone and tablet
models.

While we may have to wait until 2012 for the 32nm Medfield Atom to arrive, Intel already
has samples running in smartphone and tablet hardware. The chip giant is also working on
its own version of Android for the x86 platform. Intel also claims that its single-core
Medfield processor can match the current batch of dual-core ARM processors being
shipped in smartphones.

Read more at EETimes

Intel’s new E7 Xeon chips include 10-core option
Apr. 6, 2011 (9:00 am) By: Matthew Humphries

In our desktop machines, Intel processors currently max out at six cores, but for the
server market Intel has just introduced a new range of Xeon chips with up to 10 cores
available.

This new E7 family of Xeon processors is meant for the data center where high
performance and low power consumption are essential. Intel is hoping a range of
configurations coupled with a claim of up to a 40% performance gain over the previous
generation Xeon chips is enough to get E7s bought in serious quantities.

The E7-8800/4800/2800 processors offer 4, 6, 8, or 10 cores and up to 20 threads using
Hyper-Threading. They can run at frequencies up to 2.67GHz and are built on a 32nm
process. Couple that with as much as 30MB of L3 cache, up to 4TB of DDR3 memory, and
the ability to scale to 256-socket configurations, and you can see why these processors are
perfect for cloud computing on a massive scale.

Intel also has energy efficiency in mind: these processors can turn idle areas of the chip
off completely. As these are processors for business, security is built in, with automatic
hardware error correction, AES-NI for quick encryption and decryption of data, and Intel
Trusted Execution Technology for creating a secure environment from system boot.

Any business considering investing in the E7 range had better have a nice fat IT budget ready
to spend. The high-end 10-core, 2.4GHz E7-8870 costs $4,616 per chip when bought in lots
of 1,000. At the other end of the scale is a 6-core, 1.73GHz E7-2803 option for $774, again
in lots of 1,000.

Read more at the Intel press release, via ZDNet

Researchers achieve 26 terabits/s over optical fiber
May. 23, 2011 (9:30 am) By: Matthew Humphries

We’ve recently heard how Netflix and its streaming video service is now the largest single
source of Internet traffic in North America, and such services are only just getting started.
That’s why it’s comforting to know the speed at which data can be transferred down a piece
of optical fiber can scale to terabits.

Researchers at the Karlsruhe Institute of Technology in Germany have managed to set a
new record by sending data down an optical fiber at a rate of 26 terabits per second.

That may actually seem quite slow when you consider the National Institute of Information
and Communications Technology (NICT) in Tokyo achieved 109 terabits/s earlier this month.
But the Karlsruhe technique offers one big advantage over how other, higher transfer rates
have been achieved: it’s relatively cheap to implement.

The NICT method required a new type of cable, and previous 100 terabits/s speeds have
required hundreds of expensive lasers and vast amounts of energy, making them
unworkable for the majority of communication networks around the world.

Professor Wolfgang Freude, who is one half of the research duo that came up with this new
method, managed to achieve terabit speeds using just one laser. That laser shoots out very
short pulses of different colored light. 350 different colors are possible in total, each of
which can carry a data stream. On the receiving end an optical Fourier transform is
performed that can separate out the different color pulses and therefore the data, allowing
for such huge transfers of data over a single line.
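The researchers perform that Fourier transform optically, but the demultiplexing idea itself is easy to sketch numerically. Below is a minimal Python/NumPy toy model (the carrier frequencies and bit values are made up for illustration, not the Karlsruhe system’s actual parameters): each “color” is a carrier that is switched on or off to carry a bit, and a Fourier transform at the receiver pulls the streams back apart.

```python
import numpy as np

# Toy model of frequency-comb multiplexing: each "color" (carrier
# frequency) carries one data stream; a Fourier transform at the
# receiver separates them again. All parameters are hypothetical.
n_samples = 1024
t = np.arange(n_samples)
carriers = [50, 120, 200, 310]   # frequency bins -- the "colors"
bits = [1, 0, 1, 1]              # one bit per color for this symbol

# Transmitter: sum the carriers whose streams are sending a 1.
signal = sum(b * np.cos(2 * np.pi * f * t / n_samples)
             for f, b in zip(carriers, bits))

# Receiver: the FFT separates the colors; a strong peak at a
# carrier's bin means that stream sent a 1.
spectrum = np.abs(np.fft.rfft(signal)) / (n_samples / 2)
recovered = [1 if spectrum[f] > 0.5 else 0 for f in carriers]
print(recovered)  # [1, 0, 1, 1]
```

With 350 such colors running in parallel and far faster symbol rates, the same separation principle scales to the terabit totals described above.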

The most important aspects of this new form of data transfer are its ability to work with
existing cable infrastructure and its comparatively low cost next to alternative terabit
transfer methods. Freude believes the technique can eventually take the form of a processor,
making it very easy to deploy.

Read more at BBC News

First Ever Commercial Quantum Computer Now Available for $10 Million

D-Wave Systems, after some 12 years of research, the accumulation of 60 patents, and the
filing of 100 more, has finally released the world's first commercial quantum computer. The
computer, which has been labelled with the wondrously adventurous name "D-Wave One",
is outfitted with a 128-qubit (quantum bit) chipset that performs just a single task -- discrete
optimization -- and costs $10,000,000.

The processor itself, which has the far cooler codename "Rainier," uses a process called
quantum annealing to solve very specific problems. Quantum annealing is an exciting new
topic which allows the "moulding and warping" of quantum particle energy levels on a
scale far greater than any other approach. Quantum annealing enables the creation of
integrated circuits -- processors -- that look and operate much like conventional silicon. The
128-qubit Rainier processor is so "conventional" that it can be programmed with the
popular Python programming language.
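To give a flavor of what "discrete optimization" means here: problems for annealing machines of this kind are commonly posed as QUBO (quadratic unconstrained binary optimization) instances -- minimize a quadratic function over 0/1 variables. The Python sketch below brute-forces a made-up three-variable instance (the coefficients are illustrative, and this is not D-Wave's actual programming interface); the point is the shape of the problem, and that exhaustive search is exactly what becomes intractable as the variable count grows toward 128 qubits.

```python
import itertools

# A toy QUBO instance: minimize sum of Q[i,j] * x[i] * x[j] over
# binary vectors x. Diagonal entries are linear terms, off-diagonal
# entries are couplings between variables. Coefficients are made up.
Q = {
    (0, 0): -1, (1, 1): -1, (2, 2): -1,   # linear terms
    (0, 1): 2, (1, 2): 2,                 # pairwise couplings
}

def energy(x):
    """Evaluate the QUBO objective for one binary assignment."""
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

# Classical brute force over all 2^3 assignments -- trivial here,
# but the search space doubles with every added variable, which is
# where an annealer is meant to help.
best = min(itertools.product([0, 1], repeat=3), key=energy)
print(best, energy(best))  # (1, 0, 1) -2
```

The couplings penalize turning on adjacent variables together, so the minimum-energy assignment activates only the two non-interacting variables -- the same kind of trade-off structure an annealer settles into physically.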

The most exciting bit, though, is that quantum annealing allows scientists and researchers
to observe what's actually going on. Historically, the problem with quantum computing is
that observing the result is impossible -- to observe a quantum state is to destroy it -- which
makes it rather hard to prove that qubits are actually performing as they should. D-Wave
has pioneered a process that allows for rapid "snapshotting" of Rainier's current state;
the snapshots then become the frames of a movie. By watching that movie, D-Wave can
finally peer inside the quantum black box and begin to see whether quantum computing can
deliver mathematically provable results.

Read more at Hack the multiverse
