
LWN.net Weekly Edition for February 9, 2012 [LWN.net]

https://lwn.net/Articles/479071/bigpage

LWN.net Weekly Edition for February 9, 2012



Tracking users
By Jake Edge
February 8, 2012

User tracking is always contentious. There are real advantages to gathering lots of information on how
an application is used, but there are also serious drawbacks in terms of privacy. Many applications or
distributions have "opt-in" mechanisms that report back, but that makes the data somewhat suspect
because it comes from a self-selected group. But "opt-out" data gathering is frowned upon by privacy advocates and privacy-conscious users. As a recent discussion in the Mozilla dev-planning group shows, though, there are some who find that the need
for data may outweigh some privacy concerns.
Mozilla is understandably concerned with Firefox's decline in market share and would like to try to determine what the
underlying causes are. That has led to a proposal for a feature called MetricsDataPing that would collect a wide variety of
information about the browser, its add-ons, and how it is used. That information would be sent to Mozilla over HTTPS each day
that the browser is used. Crucially, the proposal is that MetricsDataPing would be an opt-out feature, which would require users
to know about the feature and disable it if they didn't want to share that data.
This stands in contrast to features like Telemetry, which gathers data on browser performance but has two crucial differences
from MetricsDataPing. First, it is opt-in, so users actively have to enable it; second, it tries to avoid gathering any
personally identifiable information (PII). It does not store IP addresses (but does geolocate the IP address and store that) and it
generates a new ID every time the browser is restarted.
MetricsDataPing, on the other hand, would gather a much wider range of information, such that "fingerprinting" a user just based
on the data gathered would be a real possibility. Just a list of add-ons installed is probably nearly unique, but adding in just the
installation date for the add-on, as MetricsDataPing does, would almost certainly make it unique. Information about search
sources used, number of searches done, and that sort of thing also rings alarm bells for those concerned about privacy. It also
uses a "document ID" to identify the data sent to the server, which would allow users to delete their data from the Mozilla
servers. But the document ID could also essentially serve as a unique user ID (UUID) because the previous document ID is
always sent with the current update, so that the older one can be deleted.
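How little data it takes to fingerprint a profile is easy to demonstrate: hashing a sorted list of (add-on, install date) pairs yields an identifier that is unique in practice. A minimal sketch (the add-on names and dates are invented examples, not actual MetricsDataPing fields):

```python
import hashlib

def profile_fingerprint(addons):
    """Derive a stable fingerprint from (add-on, install-date) pairs.

    Sorting makes the fingerprint independent of enumeration order,
    so the same profile always hashes to the same value.
    """
    canonical = ";".join(f"{name}@{date}" for name, date in sorted(addons))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

# Two profiles with the same add-ons but different install dates
# already produce distinct fingerprints:
a = profile_fingerprint([("adblock", "2011-03-01"), ("noscript", "2011-07-15")])
b = profile_fingerprint([("adblock", "2011-03-02"), ("noscript", "2011-07-15")])
print(a != b)  # True
```

The point of the sketch is that no single field needs to be identifying: the combination is, which is exactly the fingerprinting concern raised in the thread.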
There are efforts to anonymize the data that would be stored, but, as we have seen before, it is very difficult to truly anonymize
collected data. Some of that is also true for Telemetry, because it has added fingerprintable data after its initial roll-out, but the
key difference is that users have willingly chosen to share that data. That's the main difficulty that some see with the
MetricsDataPing proposal. Benjamin Smedberg started off the discussion with a posting of his concerns:
It seems as if we are saying that since we already collect most of this data via various product features, that makes
it ok to also collect this data in a central place and attach an ID to it. Or, that because we *need* this data in order
to make the product better, it's ok to collect it. This makes me intensely uncomfortable. At this point I think we'd be
better off either collecting only the data which cannot be used to track individual installs, or not implementing this
feature at all.
But others, especially on the Mozilla metrics team, believe that the information gathered is critical. Blake Cutler described it this
way:
The Metrics Data Ping is an attempt to apply scientific principles to product design and development. Mozilla relies
too much on gut decisions, which directly translates to poor product decisions. Firefox analytics are stuck in the
dark ages. It shows.
Ben Bucksch made several suggestions on how to improve the privacy of the data gathered, but he is also worried that
gathering data to figure out why Firefox usage is declining will actually result in more users leaving because of a perception that
the browser is intruding on their privacy. While the data may be important and useful, there are other considerations according
to Justin Lebar:
Yeah, it sucks that we can't tell why people stop using Firefox. But our [principles] are more important than that.
To that end, the discussion shouldn't center on why these metrics are important or difficult to obtain another way.
The discussion is about whether we can at once collect the proposed metrics and stay true to our values. If we
can't, then we can't collect the data, no matter how important it may be.
There was some discussion of technical measures to try to reduce the PII content of the messages, but there are still problems
with things like fingerprinting. If you gather enough information (of the kind the metrics team thinks it needs), you are very
likely to be able to track users. Even if the data is massaged in some fashion (aggregated for example), the perception of privacy
invasion will still be present as Boris Zbarsky pointed out:
One problem is that some people will assume that if data is being sent then it's being used, no matter what we
actually do with it and say we do with it. So if we _can_ design things such that we couldn't misuse them even if
we were to want to, we should. I understand that in general this is pretty difficult....
Even for opt-in services like Telemetry, gathering additional information requires user agreement. When the list of add-ons was
added to the information that Telemetry supplied, users were required to opt back in to Telemetry after being informed of that
change. As Lebar noted: "So again, here we have a decision made about sending the list of add-ons in a ping-type thing, that
we cannot do it without explicit permission, even for people who already opted in to data collection." But MetricsDataPing would,
seemingly, gather that information without asking the user even once.
Early in the thread, Mike Beltzner pointed to a posting on the Mozilla privacy blog that committed Mozilla "to a basic policy of 'no
surprises, real choices, sensible settings, limited data, and user control'". It's a bit hard to see how MetricsDataPing fits
into that framework. For some Linux distributions (which is probably not really where Mozilla is focused on market share) it could
easily be seen as a misfeature that should be removed from the code, though that might lead to more "iceweasels" due to
Mozilla trademark issues.
In the end, Mozilla may need to find a way to satisfy its data needs with an opt-in feature, or find a very convincing argument
for the impossibility of user tracking with the data it does collect. There is also the argument that there is a subtle
self-unselection bias that is introduced with an opt-out feature. In what ways does the data get skewed by eliminating the very
privacy-conscious? It is certainly understandable that the metrics team (and Mozilla as a whole) wants the data but, like Linux
distributions, it may have to settle for indirect measurements or some self-selection bias.
Comments (80 posted)

Scribus 1.4.0 released

February 8, 2012
This article was contributed by Nathan Willis

The Scribus project announced the release of version 1.4.0 in January, its first new stable
release in more than four years. The new release incorporates a suitably long list of changes
from that time span, covering new functionality that will be of interest to print-publishing
diehards, and simplifications in the tool set that may make the application more accessible
for those who are new to desktop publishing (DTP).

Fans of Scribus have had access to unstable development builds for much of the time that 1.4.0 has been in development and
testing (LWN covered the release of 1.3.5, the first release in the series that became 1.4, in August 2009), but the "stable"
branding is one the project was intent on not rushing. The 1.3.5/1.4 series introduces changes
to the file format that make newer files incompatible with the old stable release, so running the
stable and development series side-by-side was a risky proposition for anyone using Scribus for
production work.
The new release is available in binary form for Debian-based Linux systems (32 and 64 bit),
Windows, Mac OS X, and (so they tell us...) OS/2. In the past, RPM packages have also been
provided on the site, but they do not appear to have landed yet for 1.4.0. Scribus pulls in a hefty list of dependencies, due to its
need to support a variety of embedded content types, but nothing out of the ordinary for a modern distribution.
New underpinnings
The biggest change to the code base in Scribus 1.4.0 is the migration to the Qt 4 framework and, according to developer
Peter Linnell, it was also the source of the long development cycle. But Qt 4 also brings with it the project's first stable builds for
Mac OS X, which is important because of its position as the dominant DTP operating system. As Linnell explained it, "we spent a
lot of time optimizing the code for Qt4, and also we wanted a really, really solid release." Porting to OS X involved plenty of GUI
and human interface guidelines (HIG) work in order to make the application fit in, but it also entailed integrating with the OS X
font, color management, and printing subsystems. At the same time, Qt 4 allowed the project to make its first appearance on
the *BSDs and OpenSolaris.
A seemingly minor change in version 1.4 is the unification of Undo/Redo tracking across the application. The Undo history now
tracks all sorts of edits that were previously not undoable, such as text formatting changes. But that effort also uncovered a lot
of "dodgy code" in need of refactoring, Linnell said.
The marquee feature in 1.4.0 is the Render Frame content type. In Scribus, as in most other DTP applications, the editing model
consists of a set of individual pages onto which you place and rearrange the objects that make up your document: blocks of
text, images, lines and other features, footnotes, etc. Scribus calls these objects frames, and it has long supported an impressive
list of file formats for frame content including the native formats of Adobe Photoshop, Illustrator, and other proprietary
applications.
Render frames are different in that the content they contain is not a static file, but rather the output generated by an external
renderer that will be called when the Scribus project is printed or exported. Any application that can be called from the
command line and produce PostScript, PDF, or PNG output can be used as a renderer. This allows Scribus documents to pull in
generated content like graphs and complex mathematical formulas, without requiring them to first be exported to a separate
image file. That means the external files can be updated at any point without re-building the Scribus document, and it means
the final product can be rendered at the appropriate resolution (for print or on-screen viewing) without extra effort. The default
set of render frames in 1.4.0 includes TeX and LaTeX, Gnuplot, dot/Graphviz, Lilypond, and POV-Ray.
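Conceptually, a render frame boils down to mapping a frame's content type to a command-line invocation of the external tool at print or export time. A rough sketch of such a dispatch (the command templates are plausible invocations of the real tools, but the table and function are illustrative, not Scribus's actual code):

```python
# Hypothetical dispatch table: frame type -> command-line template.
RENDERERS = {
    "graphviz": ["dot", "-Tpng", "-o", "{out}", "{src}"],
    "gnuplot":  ["gnuplot", "-e", "set output '{out}'", "{src}"],
    "latex":    ["pdflatex", "-output-directory", "{outdir}", "{src}"],
}

def render_command(kind, src, out, outdir="."):
    """Build the argv for re-rendering a frame's content on export."""
    template = RENDERERS[kind]
    return [arg.format(src=src, out=out, outdir=outdir) for arg in template]

print(render_command("graphviz", "graph.dot", "graph.png"))
# ['dot', '-Tpng', '-o', 'graph.png', 'graph.dot']
```

Because the source file is re-rendered on each export, updating `graph.dot` automatically updates the document, which is the workflow advantage the article describes.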
Usability
Render frames represent a conceptual left-turn from typical DTP thinking, but they open the door to a powerful new set of uses.
On the other hand, the typical DTP document-building approach differs so much from word processing and general text editing
that many new users find it difficult to get started. On that front, Scribus 1.4.0 introduces some changes that should make the
learning curve less intimidating, primarily by making far more on-screen objects directly editable.
In earlier releases, adding text to a document took two steps: dropping the text frame into position
on the page, and opening the text frame in the "story editor" component. For a lot of new users,
that was difficult to grasp, so it is likely to be a popular move that 1.4.0 enables direct text editing
on the page. The story editor component is still available, and offers access to more features like
saved paragraph and character styles, but it is not required.
Similar improvements have landed for working with image content. Almost any object can now be
directly edited on the canvas, including vectors and raster images. Transformation tools and Boolean
operations like those you would find in GIMP and Inkscape are provided, and when those will not suffice, images can be opened
in an external editor.
Scribus's image manager, which serves as a browser and inspector for all of the image objects linked into a project, also
received a significant upgrade for 1.4. From the manager, you can see image details for each object in the project (including the
file path, original dimensions, and scaled size), look up each instance where an image is used in the document, and apply
non-destructive image effects. Although a single-page document may not be difficult to keep tabs on without use of the image
manager, it is an indispensable aid for multi-page reports or booklets.
Finally, Scribus is focused on producing printed (or at least, print-worthy) output, and 1.4 adds some
features to assist users at print time. Users can toggle a number of features on or off in the print
previewer, with live updates to reflect the changes. These include anti-aliasing (which shows a
smoother preview, but takes more time), transparency (which can reveal problems with image
backgrounds that need to be masked-out), and spot-color-to-CMYK conversion (which is necessary
when printing spot colors on normal, desktop printers). Both the printing system and PDF exporter
can output registration, crop, bleed marks, and color bars. An extra nice touch is the ability to
simulate how the output will appear to people with four types of color-blindness.
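Spot-color-to-CMYK conversion in a real workflow goes through ICC color profiles, but the idea behind the preview toggle can be illustrated with the textbook RGB-to-CMYK formula (a naive sketch, not what Scribus's color management actually does):

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB (0-255) to CMYK (0-1) conversion.

    Real converters go through ICC color profiles; this is only the
    textbook formula, useful for seeing what the preview toggle does.
    """
    if (r, g, b) == (0, 0, 0):
        return 0.0, 0.0, 0.0, 1.0
    c, m, y = 1 - r / 255, 1 - g / 255, 1 - b / 255
    k = min(c, m, y)          # pull shared darkness into the K channel
    scale = 1 - k
    return (c - k) / scale, (m - k) / scale, (y - k) / scale, k

print(rgb_to_cmyk(255, 0, 0))  # (0.0, 1.0, 1.0, 0.0)
```

A spot color defined outside the CMYK gamut will shift visibly under such a conversion, which is exactly what the live preview lets a designer catch before sending the job to a desktop printer.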
Functional additions
Over the span of its development, the new stable release of Scribus picked up a wealth of individual new features, some, but
not all, of which were present at the release of 1.3.5. Among the noteworthy additions are support for more advanced features
in Adobe Photoshop files, support for export to the PDF 1.5 format, advanced typography features, and a large collection of new
swatches, scripts, and templates.
Photoshop files, like it or not, are the most common application-specific raster images used by graphic designers. Although
converting them to a standardized export format like TIFF is usually preferable, there are times when Scribus needs to link
them in as-is. The new release supports multi-layer Photoshop files, and those with embedded clipping paths. PDF 1.5 likewise
introduces some important new features, such as animation transitions (which are useful for creating PDF presentations),
multi-layer documents, and PDFs that embed other PDF or EPS files. On the latter feature, older versions of Scribus could import
such PDF and EPS content only by rasterizing it, with the resulting loss in quality.
Scribus is slowly but surely adding support for advanced font and typesetting features; it is one of the only open source
applications to properly do drop caps and discretionary ligatures (i.e. at the option of the user), for example. The new release
expands the typesetting feature set to include "optical margins" and "glyph extension," both of which are subtle techniques to
make the ragged-edge of a text block appear more naturally aligned. Optical margins allow non-letter bits like hyphens and
punctuation to hang off past the end of the line, so that the letters on adjacent rows appear to be lined up with each other.
Glyph extension allows the font renderer to slightly widen individual characters on a line of fully-justified text, rather than only
expanding the spaces between letters. The result is easier to read.
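As a toy model of that trade-off, consider a justifier with some slack width to distribute over a line: glyph extension lets a capped fraction of it go into the letters, and the remainder widens the spaces. All numbers and the 50/50 split below are invented for illustration:

```python
def justify(words_width, spaces, line_width, glyph_share=0.5, max_stretch=0.02):
    """Split a line's slack between word spaces and glyph widening.

    glyph_share is the fraction of slack absorbed by stretching glyphs,
    capped at max_stretch (e.g. 2%) so letters are not visibly deformed;
    whatever the glyphs cannot absorb goes back into the spaces.
    """
    slack = line_width - words_width
    glyph_stretch = min(glyph_share * slack / words_width, max_stretch)
    leftover = slack - glyph_stretch * words_width
    # (per-glyph width scale delta, extra width per space)
    return glyph_stretch, leftover / spaces

stretch, per_space = justify(words_width=400, spaces=8, line_width=450)
```

With a 50-unit slack on a 400-unit line, the 2% cap bites: glyphs absorb 8 units and the eight spaces share the remaining 42, instead of all 50 going into the spaces.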
Finally, 1.4.0 ships with an expanded set of design elements like pattern fills, defines more gradient types, and includes more
document template styles. It also includes new scripts, some of which add complex, useful functionality. An example is the
Autoquote script, which you run on a text frame after you have finished editing its contents. The script parses the text and
intelligently converts "dumb" quotation marks into "smart quotes," correctly recognizing and accounting for nested quote styles,
and producing output that is tailored to the punctuation style of the language that you specify.
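The core of such a converter is deciding, for each straight quote, whether it opens or closes. The sketch below is a simplification (English double quotes only, no nesting or per-language styles, unlike the real Autoquote script) that uses the preceding character to decide:

```python
def smarten(text):
    """Convert straight double quotes to English curly quotes.

    A straight quote opens after start-of-text, whitespace, or an
    opening bracket, and closes otherwise. This is far less thorough
    than the Autoquote script described above.
    """
    out = []
    for i, ch in enumerate(text):
        if ch != '"':
            out.append(ch)
        elif i == 0 or text[i - 1] in ' \t\n([{':
            out.append('\u201c')  # opening curly quote
        else:
            out.append('\u201d')  # closing curly quote
    return ''.join(out)

print(smarten('She said "hello" twice.'))
# She said “hello” twice.
```

Handling nested single quotes, apostrophes, and language-specific marks (guillemets, low-9 quotes) is where the real script earns the "intelligently" in the description above.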
Now the fun begins
The release notes say that Scribus 1.4.0 closes more than 2000 bugs and feature requests, which is a weighty bill for any
project. It has been a long time since 1.3.5, at which point we were told that the pace of development would pick up but then
again, DTP is a very convoluted task. It covers proper format handling for everything from text to images, all the way from
import of vendor-specific files to printed output, and users expect fine-grained control over every aspect of the positioning and
characteristics of each element. Considering those challenges, it is impressive that Scribus is as full-featured as it is.
The new release is also remarkably stable. I have been running pre-release builds of 1.4.0 for several months, and have yet to
experience a crash or a corrupted file, at least on Linux. Back when 1.3.5 was released, I commented that the first native
builds for Mac OS X would potentially have the biggest impact on the project. I still think that is true, given the hold that OS X
has among graphic designers.
Convincing OS X users to try Scribus is a prerequisite, of course, but that is not really a problem with a technical solution. In the
meantime, the project has its work cut out for it. Better support for OpenType features and page imposition are common
requests, and even though PDF 1.5 support is an important milestone, Adobe has pushed the format through several revisions
since then. Of course, if the target stayed completely still, it probably wouldn't be as much fun to develop.
Comments (7 posted)

FOSDEM: Infrastructure as an open source project

February 8, 2012
This article was contributed by Koen Vervloesem

The open source development model has many interesting properties, so it's not surprising
it has also been applied in domains other than software. In his talk at FOSDEM (Free and
Open Source Software Developers' European Meeting) 2012 in Brussels, Ryan Lane
explained how the Wikimedia Foundation is treating their infrastructure as an open source
project, which enables the community to help run the Wikimedia web sites, including the popular Wikipedia.
Ryan Lane is an Operations Engineer at the Wikimedia Foundation and the Project Lead of Wikimedia Labs, a project aimed at
improving the involvement of volunteers in operations and software development for Wikimedia projects. These projects, like
Wikipedia, Wikibooks, and Wikimedia Commons, are well-known because of their large community of volunteers contributing
content. Moreover, MediaWiki, the wiki software originally developed for Wikipedia and now also used in many other wikis, is an
open source project.
In the early days, Wikimedia volunteers had a say not only on content and software, but also on infrastructure. There was no
staff doing operations, as the server infrastructure was all managed by volunteers. Operations has since been professionalized,
however, and now it's all done by staff. Ryan's message in his talk was: "We want to change this, because operations is
currently a bottleneck: it doesn't scale as well as software. That's why we had the idea to re-open our infrastructure to
volunteers." But how do you give volunteers access to an infrastructure?


Puppet repositories
Wikimedia has already shared a lot of knowledge about its infrastructure on wikitech. This public wiki describes their network
and server infrastructure in detail, including the open source software they use, such as Ubuntu, Apache, Squid, PowerDNS,
Memcached, MySQL, and the configuration management tool Puppet to maintain a consistent configuration for all their servers.
Ryan's approach to open up Wikimedia's infrastructure even more was twofold. First, Wikimedia's system administrators spent a
few weeks to clean up Wikimedia's Puppet configuration. After stripping all private and sensitive information, they published the
Puppet files in a public Git repository. The sensitive stuff was moved to a private repository that is only available to Wikimedia
staff and volunteers with root access.
But Ryan wanted more than just sharing knowledge about how Wikimedia manages its servers (the information in wikitech and
the public Puppet repository): he wanted to treat operations as a real open source project where community members could edit
Wikimedia's server architecture just like they did with Wikimedia's content and software. So he had to build a self-sustaining
operations community around Wikimedia. For this to happen without sacrificing the reliability of Wikimedia's servers, a group of
volunteers created a clone of the production cluster, which is mostly set up now. Thanks to this, staff and community operations
engineers can push their changes to a test branch of the Puppet repository to try out new things on the cloned cluster. After a
code review of the changes by the staff operations engineers, the code is evaluated by a test suite. If the code passes the tests,
the changes are pushed to the production branch of the Puppet repository and hence the production systems are managed by
the new Puppet configuration.
Wikimedia Labs is using OpenStack as a private cloud to run their server instances (virtual machines). At the moment, there are
83 instances running in the test cluster, managed by various Puppet classes, including base (for the configuration that applies to
every server instance), exim::simple-mail-sender (for every server that has to send email), nfs::server (for an NFS
server), misc::apache2 (for a web server), and so on.
Managing projects
There are also 47 projects defined in the Wikimedia Labs project, each of them implementing a specific task such as adding a
new feature, adding monitoring, or puppetizing infrastructure that has been set up manually in the past. For instance, there are
projects for bots, the continuous integration tool Jenkins, Nginx, Search, Deployment-prep which implements the clone of the
production infrastructure, and so on. Each project has a project page on the wiki with documentation, the group members, and
other information.
The interesting thing about these project pages is that most of the information is automatically generated. For example, when a
server instance is running for a project, this instance is automatically shown at the bottom of the wiki page. And when someone
types the command !log <project> <message> on the #wikimedia-labs IRC channel, it is automatically logged on the
project page under the heading "Server Admin Log", which is subdivided by day. That way, a volunteer server administrator
can explain what he did, so other volunteers, who may be living in a different timezone on the other side of the world, can
follow what is happening in the project.
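A bot implementing that !log convention needs only a few lines. This sketch parses the command and formats a dated log entry; the function names and timestamp format are invented for illustration, not taken from the actual Wikimedia bot:

```python
import datetime

def parse_log_command(line):
    """Parse an IRC '!log <project> <message>' command.

    Returns (project, message), or None if the line is not a
    well-formed !log command.
    """
    parts = line.strip().split(maxsplit=2)
    if len(parts) < 3 or parts[0] != "!log":
        return None
    return parts[1], parts[2]

def format_entry(project, message, when=None):
    """Render a 'Server Admin Log' line; the wiki groups these per day."""
    when = when or datetime.datetime.utcnow()
    return f"{when:%Y-%m-%d %H:%M}: [{project}] {message}"

print(parse_log_command("!log deployment-prep restarted apache on all instances"))
# ('deployment-prep', 'restarted apache on all instances')
```

The value of the convention is that logging happens where the work is coordinated (IRC), and the wiki page is generated from it rather than maintained by hand.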
The power of the community
So now that anyone has been able to push changes from idea to production on Wikimedia's cluster for a couple of months,
what are the results? According to Ryan, there are 105 users now in the Wikimedia Labs project who have contributed a variety
of Puppet configurations:
One volunteer puppetized our existing Nagios monitoring setup (which was not managed by Puppet) in a very neat
way. The bot infrastructure has also been improved much by volunteers. And at the San Francisco hackathon in
January 2012 we had a project created, implemented, tested, and deployed to production during the hackathon.
We have a custom UDP logging module written for nginx, and it had a couple of bugs in the format. Abe Music built
an instance, installed our nginx source package, added the change to fix the formatting, then pushed them up for
review. We reviewed the change, then pushed it to production. All of this happened during the hackathon.
So has this ambitious experiment been successful? According to Ryan, the original goal to lessen the bottleneck of the
operations team definitely succeeded. However, he points out that the bottleneck has shifted: "We have to do these code
reviews now, but fortunately it takes less time to review code than it does to make a lot of changes." Another issue Ryan sees is
trust: "Giving out root to volunteers is dangerous, so we have to audit our infrastructure often. Moreover, there's always the
danger of social engineering: newcomers can try to build trust to have us give them sensitive information about our
infrastructure." But luckily the staff can count on a core of community people whom they trust to do these code reviews and
audits.
All in all, Ryan thinks that the same model as Wikimedia Labs uses can also be used in other organizations to set up a volunteer-driven infrastructure. In particular, non-profits or software development projects that rely on a big infrastructure could profit
from treating operations as an open source project. In addition to being able to tap into the potential of technical talents in the
community, opening operations is also a great way to identify skilled and passionate people to hire for a staff position.
Comments (4 posted)
Page editor: Jonathan Corbet

Security
Debian and Suhosin
By Jake Edge
February 8, 2012

A recent proposal for Debian to stop shipping PHP with the Suhosin security patches has been
controversial. There are a number of reasons behind the proposal (manpower, sticking to the mainline,
performance, and more), but others responding in the thread consider the security mitigations that
Suhosin provides to be very important for the web application language given its less than stellar security track record. What
most would like to see is that those protections make their way out of the Suhosin patches and into the PHP mainline, but that
does not seem to be in the offing. In the meantime, users may find that the PHP protections they have depended on will
disappear from Debian.
Debian PHP maintainer Ondřej Surý posted a message to several lists noting that the Suhosin patches have been disabled in the
unstable repository and trying "to summarize the reasons why I have decided to disable suhosin patch" in the message. Over
time, he has changed his mind about Suhosin, so he is documenting the reasons and looking for other opinions. The Debian PHP
team is evidently understaffed, and the work to add in the Suhosin patches (and module) eats up some of that time. Surý is not
convinced that the extra time is necessarily well-spent because PHP has "improved a lot".
By shipping only a Suhosin-enabled PHP, Debian is diverging not only from the mainline, but also from what other Linux
distributions do. That means that users coming from other distributions (like Fedora, which doesn't ship Suhosin, or openSUSE,
where it is optional) may run into problems they don't expect. In addition, he said, bugs reported upstream from the Debian
version are often met with a request to reproduce it in vanilla PHP. There are also performance and memory usage impacts from
Suhosin that some find excessive.
Suhosin grew out of the PHP hardening patch that was developed in 2004. The basic idea is to add protections against bugs in
the PHP core (aka Zend Engine) by making proactive changes for things like buffer overflows or format string vulnerabilities. It
also tries to protect against badly written PHP applications, of which there are seemingly countless examples. Suhosin has two
parts, a patch to the PHP core along with a PHP extension that implements additional hardening features.
The core patches are what try to protect against buffer overflows by adding canary values to internal data structures so that the
overflows can be detected. In addition, the pointers to destructors (i.e. functions that are called when an element is freed) for
internal hash tables and linked lists are protected as they can be a vector for code execution if a buffer overflow overwrites
them. Format string vulnerability protection and a more robust implementation of realpath() round out the changes to the
core.
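Suhosin implements its canaries in C inside the Zend Engine, but the mechanism itself is simple enough to sketch. The following Python model is purely illustrative, not the actual implementation: it places a random canary between a buffer and a simulated destructor pointer, and checks it before that pointer would ever be trusted:

```python
import os

CANARY = os.urandom(8)  # random per-process value, unknown to an attacker

class GuardedBuffer:
    """A buffer with a canary placed after it, Suhosin-style.

    A linear overflow that runs past the buffer must trample the
    canary before it can reach the (simulated) destructor pointer,
    so checking the canary detects the corruption in time.
    """
    def __init__(self, size):
        self.mem = bytearray(size) + bytearray(CANARY) + bytearray(b"DTOR")

    def write(self, offset, data):
        # Deliberately unchecked, like a C memcpy with a bad length.
        self.mem[offset:offset + len(data)] = data

    def check(self):
        start = len(self.mem) - len(CANARY) - 4
        return bytes(self.mem[start:start + len(CANARY)]) == CANARY

buf = GuardedBuffer(16)
buf.write(0, b"A" * 16)   # fills the buffer exactly: canary intact
print(buf.check())        # True
buf.write(0, b"A" * 24)   # overflow: tramples the canary
print(buf.check())        # False
```

In the real patch the check happens when hash tables and linked lists are manipulated, so a corrupted destructor pointer is caught before PHP would jump through it.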
The extension provides a whole host of other kinds of protections, largely against dodgy PHP programming practices. For
example it protects against either remote or local code inclusion, which is one of the worst problems that has plagued PHP
applications. It can disable the eval() call, prevent infinite recursion by putting a limit on call depth, stop HTTP response
splitting attacks, filter uploaded files by a variety of conditions, and on and on. While it obviously can't prevent all badly written
PHP from running amok, it's clear that the Suhosin developers have looked at a lot of common problems and tried to address
them.
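Some of these protections are small input-validation rules. HTTP response splitting, for example, only works when attacker-controlled data containing CR or LF reaches a response header, so the defense is essentially a one-line check. This is a generic sketch of the idea, not Suhosin's code:

```python
def safe_header_value(value):
    """Reject header values that could split an HTTP response.

    An attacker who can inject "\r\n" into, say, a Location header
    can append arbitrary headers and even a second response body.
    Suhosin blocks this class of bug in PHP's header(); this sketch
    shows the same check in Python.
    """
    if "\r" in value or "\n" in value or "\0" in value:
        raise ValueError("CR/LF or NUL in header value: possible response splitting")
    return value

print(safe_header_value("https://example.org/next"))   # passes through
# safe_header_value("x\r\nSet-Cookie: admin=1")        # would raise ValueError
```

The call-depth limit and eval() blocking work the same way: a cheap runtime check in the engine, rather than a fix to each of the "seemingly countless" vulnerable applications.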
While most of the features are configurable, they are all going to impact performance in one way or another. That's a tradeoff
that many seem to be willing to make, especially in shared hosting facilities where a vulnerability in a particular customer-installed application (or the version of PHP itself) might have serious repercussions for other customers. As the project's "Why?"
page notes, it comes down to a question of trust:
If you are using PHP only for your own server and only for your own scripts and applications, then you can judge
for yourself, if you trust your code enough. In that case you most probably don't need the Suhosin extension.
Because most of its features are meant to protect servers against vulnerable programming techniques. However
PHP is a very complex programming language with a lot of pitfalls that are often overseen during the development
of applications. Even PHP core programmers are writing insecure code from time to time, because they did not
know about a PHP pitfall. Therefore it is always a good idea to have Suhosin as your safety net. The Suhosin-Patch
on the other hand comes with Zend Engine Protection features that protect your server from possible
bufferoverflows and related vulnerabilities in the Zend Engine. History has shown that several of these bugs have
always existed in previous PHP versions.
But there is an additional reason for dropping Suhosin mentioned in Sur's posting: "Stefan's relationships with PHP upstream
(and vice versa) isn't helping very much". He is referring to Stefan Esser, lead developer of Suhosin. Surý's statement is borne
out in the thread, as there seems to be a fair amount of animosity between Esser and other posters in the php-devel list. But
beyond the personalities involved, there is a more important question: if the hardening features in Suhosin are truly useful, why
haven't they been pushed upstream?
A footnote in Surý's post refers to a section of the Suhosin FAQ that outlines the project's reasons for staying out of the
mainline. It mentions the performance impact of the patches and that some PHP developers are not interested in adding code to
protect against badly written applications. It also notes that by staying separate from the core, the project can make a
statement about what it sees as deficiencies in security handling in PHP. But there seems to be more to it than that.
Various people have encouraged Esser to create RFCs for the features in the patch that he thinks should go into the mainline,
but he largely dismisses those messages with statements like:
I am not interested in pushing Suhosin into PHP mainline. Why in hell would I want that. If Suhosin gets absorbed
by PHP.net then I would have to start a new project, because there are tons of mitigations I can think up that will
be implemented at some point in time and will never make it into PHP mainline.
With Suhosin existing I am free to implement as many security mitigations I like and do not have to beg the PHP
developers to consider adding something.
Esser clearly believes that all of the Suhosin changes should go into PHP without going through the RFC process. As Stas
Malyshev pointed out, though, that's part of the collaboration process:
Some people call "begging" collaboration and consider it a normal way to develop software with teams bigger than
one person. Of course, being part of the team is completely voluntary. I think it is clear that Stefan is not interested
in doing this. If somebody would want to take on himself working as part of PHP team on getting some features
from Suhosin to PHP, he's welcome.
Malyshev also notes that it is hard for Esser to complain that the PHP developers aren't cooperating when he is unwilling to join
with the project and follow its processes. But Esser is convinced that it would be a waste of his time:
I know for sure that whatever will be the outcome of it, it will be a compromise (if at all) that will not be sufficient
for my personal taste. So in the end from my point of view people have to use Suhosin anyway. Why also waste
time merging 5 features of 100 if I can do something more useful in the time and give my Suhosin users 20 more
new mitigations.
Esser is also concerned that PHP developers are not paying enough attention to security overall. He pointed to a fix that recently
went in to address problems in the HTTP response splitting protection, where, even though it is a security-related fix, there was
inadequate review of two different attempts to fix the flaw. The first fix for the bug (bug #60227) went directly into PHP 5.3
back in November. Esser's complaint is strongly worded, but does point to a real problem:
And that my dear readers is exactly what would happen to the code of Suhosin if it gets merged into PHP. It would
be patched by people that do not know exactly what they are doing. And it would be broken. And if I would not sit
on every single commit to PHP this would happen, because obviously inside PHP no one cares about reviewing
security patches.
And with Suhosin outside of PHP, there is a secondary protection layer around PHP that would have [caught] this
problem: also known as defense in depth.
We've heard those kinds of arguments before in a slightly different context. The grsecurity/PaX patches for the Linux kernel have
been around for quite some time, but have always been maintained as out-of-tree patch sets. The pseudonymous "PaX Team",
who maintains the PaX patches, made many of the same arguments about why those patches have not been submitted. It is
certainly attractive to be able to go your own way, without having to coordinate or convince anyone outside of the project, but it
does have its costs as well.
One of those costs is a reduction in the audience because distributions and others may shy away from non-mainline code. Even
if the maintainer of an out-of-tree patch set does a perfect job (impossible, of course), there is a cost associated with using a
non-standard tool, whether it's a programming language or kernel. That cost is borne by the distributions and some may be
unwilling to start (or continue) bearing that cost.
One thing to note, however, is that Suhosin has been pretty effective at avoiding various PHP bugs along the way. The recent
PHP remote code execution vulnerability found by Esser was thwarted by a suitably configured Suhosin. The HTTP response
splitting problem was also solved in Suhosin long ago. On the other hand, certain (likely buggy) applications cannot run under
Suhosin, which also makes it difficult to adopt.
There is a fundamental problem, at times, connecting upstreams and security researchers. Free software encourages folks to
scratch their own itch and that's just what Suhosin and grsecurity/PaX have done. But if the changes never make it to the
mainline of the project, users may be suffering from bugs that could be avoided. The "all or nothing" approach is not likely to
work with any project, but it is true that security issues are often not given the attention they deserve in those upstreams. It's a
difficult problem to solve, but projects would be well-served by finding a way to cultivate more security-oriented developers into
their communities.
Comments (3 posted)

Brief items
Security quotes of the week
I believe in paying money for products that earn it. I do not believe in a pricing and distribution model that still
thinks it's 1998. And I really don't believe in censoring the internet so that studio and label executives can add a
few more millions onto their already enormous money pile.
Treat your customers with respect, and they'll do the same to you. And that is how you fight piracy.
-- Paul Tassi in Forbes (worth a full read)
VeriSign said its executives "do not believe these attacks breached the servers that support our Domain Name
System network," which ensures people land at the right numeric Internet Protocol address when they type in a
name such as Google.com, but it did not rule anything out.
-- Reuters reports on previously unreported breaches at Verisign in 2010
The service performs a set of analyses on new applications, applications already in Android Market, and developer
accounts. Here's how it works: once an application is uploaded, the service immediately starts analyzing it for
known malware, spyware and trojans. It also looks for behaviors that indicate an application might be misbehaving,
and compares it against previously analyzed apps to detect possible red flags. We actually run every application on
Google's cloud infrastructure and simulate how it will run on an Android device to look for hidden, malicious
behavior. We also analyze new developer accounts to help prevent malicious and repeat-offending developers from
coming back.
-- Google's Hiroshi Lockheimer introduces "Bouncer"
Comments (none posted)

Critical PHP vulnerability being fixed (The H)


The H is reporting that a critical remote code execution bug has been found in PHP that was caused by the recent fix for the
widespread denial of service via hash collisions vulnerability. "The cause of the problem is the security update to PHP 5.3.9,
which was written to prevent denial of service (DoS) attacks using hash collisions. To do so, the developers limited the maximum
possible number of input parameters to 1,000 in php_variables.c using max_input_vars. Because of mistakes in the
implementation, hackers can intentionally exceed this limit and inject and execute code. The bug is considered to be critical as
code can be remotely injected over the web."


Comments (5 posted)

PHP 5.3.10 released with critical security fix


The PHP 5.3.10 release is out; it contains a fix for a remote code execution bug introduced recently by another security fix.
Anybody running 5.3.9 should probably upgrade as soon as possible.
Full Story (comments: 5)

Langley: Revocation checking and Chrome's CRL


On his blog, Adam Langley writes about plans for removing online certificate revocation checking in the Chrome/Chromium
browser. Instead of OCSP and CRL checks, Google will be pushing lists of revoked certificates to the browser. "While the benefits
of online revocation checking are hard to find, the costs are clear: online revocation checks are slow and compromise privacy.
The median time for a successful OCSP check is ~300ms and the mean is nearly a second. This delays page loading and
discourages sites from using HTTPS. They are also a privacy concern because the CA learns the IP address of users and which
sites they're visiting. [...] On this basis, we're currently planning on disabling online revocation checks in a future version of
Chrome. (There is a class of higher-security certificate, called an EV certificate, where we haven't made a decision about what to
do yet.)"
Comments (26 posted)

New vulnerabilities
apache: multiple vulnerabilities
Package(s): apache
CVE #(s): CVE-2012-0021 CVE-2012-0031 CVE-2012-0053
Created: February 2, 2012
Updated: March 7, 2012
Description: From the Mandriva advisory:


The log_cookie function in mod_log_config.c in the mod_log_config module in the Apache HTTP Server 2.2.17
through 2.2.21, when a threaded MPM is used, does not properly handle a %{}C format string, which allows
remote attackers to cause a denial of service (daemon crash) via a cookie that lacks both a name and a value
(CVE-2012-0021).
scoreboard.c in the Apache HTTP Server 2.2.21 and earlier might allow local users to cause a denial of service
(daemon crash during shutdown) or possibly have unspecified other impact by modifying a certain type field
within a scoreboard shared memory segment, leading to an invalid call to the free function (CVE-2012-0031).
protocol.c in the Apache HTTP Server 2.2.x through 2.2.21 does not properly restrict header information during
construction of Bad Request (aka 400) error documents, which allows remote attackers to obtain the values of
HTTPOnly cookies via vectors involving a (1) long or (2) malformed header in conjunction with crafted web
script (CVE-2012-0053).

Alerts:
Mandriva MDVSA-2012:012 2012-02-02
Debian DSA-2405-1 2012-02-06
Slackware SSA:2012-041-01 2012-02-10
Red Hat RHSA-2012:0128-01 2012-02-13
CentOS CESA-2012:0128 2012-02-14
Oracle ELSA-2012-0128 2012-02-14
Scientific Linux SL-http-20120214 2012-02-14
Ubuntu USN-1368-1 2012-02-16
SUSE SUSE-SU-2012:0284-1 2012-02-18
Fedora FEDORA-2012-1598 2012-02-21
Red Hat RHSA-2012:0323-01 2012-02-21
openSUSE openSUSE-SU-2012:0314-1 2012-02-28
Fedora FEDORA-2012-1642 2012-03-06
Scientific Linux SL-http-20120306 2012-03-06
SUSE SUSE-SU-2012:0323-1 2012-03-06
Oracle ELSA-2012-0323 2012-03-09

Comments (none posted)

condor: denial of service


Package(s): condor
CVE #(s): CVE-2011-4930
Created: February 7, 2012
Updated: March 19, 2012
Description: From the Red Hat advisory:


Multiple format string flaws were found in Condor. An authenticated Condor service user could use these flaws
to prevent other jobs from being scheduled and executed or crash the condor_schedd daemon.

Alerts:
Red Hat RHSA-2012:0099-01 2012-02-06
Red Hat RHSA-2012:0100-01 2012-02-06
Fedora FEDORA-2012-3341 2012-03-17
Fedora FEDORA-2012-3363 2012-03-17

Comments (none posted)

ghostscript: PostScript code execution


Package(s): ghostscript
CVE #(s): CVE-2010-4820
Created: February 3, 2012
Updated: February 8, 2012
Description: From the Red Hat advisory:


Ghostscript included the current working directory in its library search path by default. If a user ran Ghostscript
without the "-P-" option in an attacker-controlled directory containing a specially-crafted PostScript library file, it
could cause Ghostscript to execute arbitrary PostScript code. With this update, Ghostscript no longer searches
the current working directory for library files by default.
Alerts:
Red Hat RHSA-2012:0096-01 2012-02-02
Red Hat RHSA-2012:0095-01 2012-02-02
CentOS CESA-2012:0095 2012-02-03
CentOS CESA-2012:0095 2012-02-03
CentOS CESA-2012:0096 2012-02-03
Scientific Linux SL-ghos-20120203 2012-02-03
Scientific Linux SL-ghos-20120203 2012-02-03
Oracle ELSA-2012-0095 2012-02-03
Oracle ELSA-2012-0095 2012-02-03
Oracle ELSA-2012-0096 2012-02-03

Comments (none posted)

kernel: denial of service/code execution


Package(s): linux-ti-omap4
CVE #(s): CVE-2012-0038 CVE-2012-0207
Created: February 7, 2012
Updated: March 7, 2012
Description: From the Ubuntu advisory:


A flaw was discovered in the XFS filesystem. If a local user mounts a specially crafted XFS image it could
potentially execute arbitrary code on the system. (CVE-2012-0038)
A flaw was found in the Linux kernel's IPv4 IGMP query processing. A remote attacker could exploit this to cause
a denial of service. (CVE-2012-0207)
Alerts:
SUSE SUSE-SU-2012:0153-2 2012-02-06
Ubuntu USN-1356-1 2012-02-07
Red Hat RHSA-2012:0107-01 2012-02-09
CentOS CESA-2012:0107 2012-02-09
Oracle ELSA-2012-0107 2012-02-10
Scientific Linux SL-kern-20120213 2012-02-13
Ubuntu USN-1361-1 2012-02-13
Ubuntu USN-1362-1 2012-02-13
Ubuntu USN-1363-1 2012-02-13
Ubuntu USN-1364-1 2012-02-13
Red Hat RHSA-2012:0333-01 2012-02-23
Ubuntu USN-1380-1 2012-02-28
Ubuntu USN-1384-1 2012-03-06
Ubuntu USN-1386-1 2012-03-06
Ubuntu USN-1387-1 2012-03-06
Ubuntu USN-1388-1 2012-03-06
Red Hat RHSA-2012:0350-01 2012-03-06
Ubuntu USN-1389-1 2012-03-06
Ubuntu USN-1391-1 2012-03-07
Ubuntu USN-1394-1 2012-03-07
CentOS CESA-2012:0350 2012-03-07
Scientific Linux SL-kern-20120308 2012-03-08
Oracle ELSA-2012-2003 2012-03-12
Oracle ELSA-2012-2003 2012-03-12
Oracle ELSA-2012-0350 2012-03-12

Comments (none posted)

moodle: many vulnerabilities


Package(s): moodle
CVE #(s): CVE-2012-0792 CVE-2012-0793 CVE-2012-0794 CVE-2012-0795 CVE-2012-0796 CVE-2012-0797 CVE-2012-0798 CVE-2012-0799 CVE-2012-0800 CVE-2012-0801
Created: February 2, 2012
Updated: February 8, 2012
Description: From the Red Hat bugzilla entry:

CVE-2012-0792 Moodle MSA-12-0002: Personal information leak
CVE-2012-0793 Moodle MSA-12-0004: Added profile image security
CVE-2012-0794 Moodle MSA-12-0005: Encryption enhancement
CVE-2012-0795 Moodle MSA-12-0006: Additional email address validation
CVE-2012-0796 Moodle MSA-12-0007: Email injection prevention
CVE-2012-0797 Moodle MSA-12-0008: Unsynchronised access via tokens
CVE-2012-0798 Moodle MSA-12-0009: Role access issue
CVE-2012-0799 Moodle MSA-12-0010: Unauthorised access to session key
CVE-2012-0800 Moodle MSA-12-0011: Browser autofill password issue
CVE-2012-0801 Moodle MSA-12-0012: Form validation issue

Alerts:
Fedora FEDORA-2012-0939 2012-02-02
Fedora FEDORA-2012-0913 2012-02-02
Debian DSA-2421-1 2012-02-29

Comments (none posted)

php: remote code execution


Package(s): php5
CVE #(s): CVE-2012-0830
Created: February 3, 2012
Updated: February 14, 2012

Description: PHP 5.3.9 contained a security update to prevent denial of service (DoS) attacks using hash collisions. Mistakes
in the implementation allow hackers to inject and execute code. This has been fixed in PHP 5.3.10. See this
article for details.
Alerts:
Debian DSA-2403-1 2012-02-02
Red Hat RHSA-2012:0092-01 2012-02-02
Red Hat RHSA-2012:0093-01 2012-02-02
CentOS CESA-2012:0093 2012-02-03
CentOS CESA-2012:0093 2012-02-03
CentOS CESA-2012:0093 2012-02-03
CentOS CESA-2012:0092 2012-02-03
Scientific Linux SL-php5-20120203 2012-02-03
Scientific Linux SL-php-20120203 2012-02-03
Oracle ELSA-2012-0093 2012-02-03
Oracle ELSA-2012-0093 2012-02-03
Oracle ELSA-2012-0093 2012-02-03
Oracle ELSA-2012-0092 2012-02-03
Debian DSA-2403-2 2012-02-06
Fedora FEDORA-2012-1262 2012-02-08
Fedora FEDORA-2012-1262 2012-02-08
Fedora FEDORA-2012-1262 2012-02-08
Ubuntu USN-1358-1 2012-02-09
Slackware SSA:2012-041-02 2012-02-10
Fedora FEDORA-2012-1301 2012-02-14
Fedora FEDORA-2012-1301 2012-02-14
Fedora FEDORA-2012-1301 2012-02-14

Comments (none posted)

polipo: denial of service


Package(s): polipo
CVE #(s): CVE-2011-3596
Created: February 2, 2012
Updated: February 8, 2012
Description: From the Red Hat bugzilla entry:

A denial of service flaw was found in the way Polipo, a lightweight caching web proxy, processed certain HTTP
POST / PUT requests. If polipo was configured to allow remote client connections and particular host was
allowed to connect to polipo server instance, a remote attacker could use this flaw to cause denial of service
(polipo daemon abort due to assertion failure) via specially-crafted HTTP POST / PUT request.
Alerts:
Fedora FEDORA-2012-0849 2012-02-01
Fedora FEDORA-2012-0840 2012-02-01

Comments (none posted)

tomcat: multiple vulnerabilities


Package(s): tomcat6
CVE #(s): CVE-2011-3375 CVE-2011-5062 CVE-2011-5063 CVE-2011-5064 CVE-2012-0022
Created: February 2, 2012
Updated: February 8, 2012
Description: From the Debian advisory:


CVE-2011-3375: Incorrect request caching could lead to information disclosure.
CVE-2011-5062 CVE-2011-5063 CVE-2011-5064: The HTTP Digest Access Authentication implementation
performed insufficient countermeasures against replay attacks.
CVE-2012-0022: This update adds countermeasures against a collision denial of service vulnerability in the Java
hashtable implementation and addresses denial of service potentials when processing large amounts of
requests.

Alerts:
Debian DSA-2401-1 2012-02-02
SUSE SUSE-SU-2012:0155-1 2012-02-07
openSUSE openSUSE-SU-2012:0208-1 2012-02-09
Ubuntu USN-1359-1 2012-02-13

Comments (none posted)


Page editor: Jake Edge

Kernel development
Brief items
Kernel release status
The current development kernel remains 3.3-rc2; there have been no 3.3 prepatches released in the last week.
The stable updates picture is somewhat more complicated. 2.6.32.56, 3.0.19, and 3.2.3 were released on February 3 with a
long list of patches. 3.2.4 followed shortly thereafter to fix a build failure introduced in 3.2.3.
On February 6, 3.0.20 and 3.2.5 were released. These were single-patch updates containing the fix to the ASPM-related problem
that would significantly increase power consumption on some systems. This patch has been treated with some care: it seems to
work, but nobody really knows if it might cause behavioral problems on some obscure hardware. That said, at this point, it
seems safe enough to have found its way into a stable update.
Comments (none posted)

Quotes of the week


Note, I also removed the line in the unusedcode.easy file at the same time, if I shouldn't have done that, let me
know and I'll redo these patches.
If I messed anything up, or the patches need more information within the body of the changelog, please let me
know, and I'll be glad to respin them.
-- Greg Kroah-Hartman becomes a LibreOffice developer.
If we really want to improve the world we should jump into a time machine and set tabstops to 4.
-- Andrew Morton
10? We have a few cases of variables over 100 (not sure how you are supposed to use them with an 80 character
max line length). Current longest is:
eOpenLogicalChannelAck_reverseLogicalChannelParameters_multiplexParameters_h2250LogicalChannelParameters
at 104 characters.


-- Tony Luck (see include/linux/netfilter/nf_conntrack_h323_types.h)


With kernel 3.1, Christoph removed i_alloc_sem and replaced it with calls (namely inode_dio_wait() and
inode_dio_done()) which are EXPORT_SYMBOL_GPL() thus they cannot be used by non-GPL file systems and
further inode_dio_wait() was pushed from notify_change() into the file system ->setattr() method but no non-GPL
file system can make this call.
That means non-GPL file systems cannot exist any more unless they do not use any VFS functionality related to
reading/writing as far as I can tell or at least as long as they want to implement direct i/o.
What are commercial file systems meant to do now?
-- Anton Altaparmakov
Comments (14 posted)

Intel's upcoming transactional memory feature


Here is a posting on the Intel software network describing the "transactional synchronization extensions" feature to be found in
the future "Haswell" processor.
With transactional synchronization, the hardware can determine dynamically whether threads need to serialize
through lock-protected critical sections, and perform serialization only when required. This lets the processor
expose and exploit concurrency that would otherwise be hidden due to dynamically unnecessary synchronization.
At the lowest level with Intel TSX, programmer-specified code regions (also referred to as transactional regions) are
executed transactionally. If the transactional execution completes successfully, then all memory operations
performed within the transactional region will appear to have occurred instantaneously when viewed from other
logical processors. A processor makes architectural updates performed within the region visible to other logical
processors only on a successful commit, a process referred to as an atomic commit.
Needless to say, there should be interesting ways to use such a feature in the kernel if it works well, but other projects (PyPy, for
example) have also expressed interest in transactional memory.
Comments (24 posted)

POHMELFS returns
By Jonathan Corbet
February 8, 2012

LWN wrote briefly about the POHMELFS filesystem in early 2008; thereafter, POHMELFS languished in the staging tree without much interest or activity. The POHMELFS developer, Evgeniy Polyakov, expressed his unhappiness with the development process and disappeared from the kernel community for some time.

Now, though, Evgeniy is back with a new POHMELFS release. He said:


It went a long way from parallel NFS design which lived in drivers/staging/pohmelfs for years effectively without
usage case - that design was dead.
New pohmelfs uses elliptics network as its storage backend, which was proved as effective distributed system.
Elliptics is used in production in Yandex search company for several years now and clusters range from small (like 6
nodes in 3 datacenters to host 15 billions of small files or hundred of nodes to scale to 1 Pb used for streaming).
This time around, he is asking that the filesystem be merged straight into the mainline without making a stop in the staging
tree. But merging a filesystem is hard without reviews from the virtual filesystem maintainers, and no such reviews have yet
been done. So Evgeniy may have to wait a little while longer yet.
Comments (10 posted)

Kernel development news


Autosleep and wake locks
By Jonathan Corbet
February 7, 2012

The announcement of the Android merging project and the return of a number of Android-specific drivers to the kernel's staging tree were notable events in December 2011. The most controversial Android change - "wakelocks" or "suspend blockers" - is not a part of this effort, though. That code is sufficiently intrusive and sufficiently controversial that nobody seemed to want to revisit it at this time. Except that, as it turns out, one person is still trying to make progress on this difficult issue. Rafael Wysocki's autosleep and wake locks patch set is yet another attempt to support Android's opportunistic suspend mechanism in a mainline kernel.

"Opportunistic suspend" is a heavy-handed approach to power management. In short, whenever nothing of interest is going on,
the entire system simply suspends itself. It is certainly effective on Android devices; in particular, it prevents poorly-written
applications from keeping the system awake and draining the battery. The hard part is the determination that nothing interesting
is happening; that is the role of the Android wakelock/suspend blocker mechanism. With suspend blockers, both the kernel and
suitably-privileged user-space code are able to block the normal suspension of the system, keeping it running for whatever
important work is being done.
Given that suspend blockers do not seem to be headed into the mainline kernel anytime soon, some sort of alternative
mechanism is required if the mainline is to support opportunistic suspend. As it happens, some pieces of that solution have been
in the mainline for a while; the wakeup events infrastructure was merged for 2.6.36. Wakeup events track events (a button
press, for example) that can wake the system or keep it awake. "Wakeup sources," which track sources of wakeup events, were
merged for 2.6.37. Thus far, the wakeup events subsystem remains lightly used in the kernel; few drivers actually signal such
events. Wakeup sources are almost entirely unused.
Rafael's patch set makes some significant changes that employ this infrastructure to support "autosleep," which is another word
for "opportunistic suspend." (Rafael says: "This series tests the theory that the easiest way to sell a once rejected feature is to advertise it under a different name"). The first of those adds a new file to sysfs called /sys/power/autosleep; writing "mem"
to this file will cause the system to suspend whenever there are no active wakeup sources. One can also write "disk", with the
result that the system will opportunistically hibernate; this feature may see rather less real-world use, but it was an easy
addition to make.
The Android system tracks the time that suspend blockers prevent the system from suspending; that information is then used in
the "why is my battery dead?" screen. Rafael's patch adds a similar tracking feature and exports this time (as
prevent_sleep_time) in /sys/kernel/debug/wakeup_sources.
One little problem remains, though: wakeup sources are good for tracking kernel-originated events, but they do not provide any
way for user space to indicate that the system should not sleep. What's needed, clearly, is a mechanism with which user space
can create its own wakeup sources. The final patch in Rafael's series adds just such a feature. An application can write a name
(and an optional timeout) to /sys/power/wake_lock to establish a new, active wakeup source. That source will prevent
system suspend until either its timeout expires or the same name is written to /sys/power/wake_unlock.
It is easy to see that this mechanism can be used to implement Android's race-free opportunistic suspend. A driver receiving a
wakeup event will mark the associated wakeup source as active, keeping the system running. That source will stay active until
user space has consumed the event. But, before doing so, the user-space application takes a "wake lock" of its own, ensuring
that it will be able to complete its processing before the system goes back to sleep.
Those who have been paying attention to this controversy will have noted that the API for this feature looks suspiciously like the
native Android API. Needless to say, that is not a coincidence; the idea is to make it as easy as possible to switch over to the
new mechanism without breaking Android systems. If that goal can be achieved, then, even if Android itself never moves to this
implementation, it should be that much easier to run an Android user space on a mainline kernel.
And that, of course, will be the ultimate proof of this patch set. If somebody is able to demonstrate an Android system running
with native opportunistic suspend, with similar power consumption characteristics, then it's a lot more likely that this patch will
succeed where so many others have failed. Arranging such a demonstration will not be entirely easy, but, on the right hardware,
it is certainly possible. Linaro's Android build for the Pandaboard might be a good starting point. Until that happens, getting an
Android-compatible opportunistic suspend implementation into the mainline could be challenging.
Comments (none posted)

Memory power management, take 2


By Jonathan Corbet
February 8, 2012

Last June, LWN looked at a set of memory power management patches intended to allow the system to force unused banks of memory into partial-array self-refresh (PASR) mode. PASR makes the memory unavailable to the CPU, but it preserves the memory contents while reducing power consumption. Last year's patch added a new layer of zones to the page allocator with the idea that specific zones - which corresponded to the regions of memory with independent PASR control - could be vacated and powered down when memory use was light. That patch set has not since returned, possibly because developers were worried about the (significant) overhead of adding another layer of zones to the system.

For a little while since, things have been quiet on the memory power management front. Recently, though, a new and seemingly
unrelated PASR patch set was posted to linux-kernel by Maxime Coquelin. This version adds no new zones; instead, it works at a
lower level beneath the buddy allocator.
The first step is to boot the kernel with the new ddr_die= parameter, describing the physical layout of the system's memory.
Another parameter (interleaved) must be used if physically-interleaved memory is present on the system. It would, of
course, be nice to obtain this information directly from the hardware, but, in the embedded world where Maxime works, such
mechanisms, if they are present at all, must be implemented on a per-subarchitecture or per-board basis. The final patch in the
series does add built-in support for the Ux500 system in a "board support" file, but that is the only system supported without
boot-time parameters at this early stage.
For each region defined at boot time, the PASR code sets up a pasr_section structure:
struct pasr_section {
phys_addr_t start;
struct pasr_section *pair;
unsigned long free_size;
spinlock_t *lock;
struct pasr_die *die;
};
The key value here is free_size, which tracks how many free pages exist within this section. When the kernel allocates a page
for use, it must tell the PASR code about it with a call to:
void pasr_kget(struct page *page, int order);
Pages that are freed should be marked with:
void pasr_kput(struct page *page, int order);
To a first approximation, these functions just increment and decrement free_size. If free_size reaches the size of the
segment, there are no used pages within that segment and it can be powered down. As soon as the first page is allocated from
such a segment, it must be powered back up.
Adding this accounting to the memory management code is just a matter of adding a few pasr_kget() and pasr_kput()
calls to the buddy allocator. Most other allocations in the kernel have their ultimate source in the buddy allocator, so this
approach will catch most of the memory allocation traffic in the system - though it could be somewhat fooled by unused pages
held by the slab allocator. There is no integration with "carveout-style" allocators like ION or CMA, but that could certainly be
added at some point.
One piece that is missing, though, is the mechanism by which a memory section becomes entirely free and eligible for PASR.


The kernel tends to spread pages of data throughout memory, and it does not drop them without a specific reason to do so; a
typical system shows almost no "free" pages at all even if it is not currently doing anything. The intent is to use the feature in
conjunction with a "page cache flush governor," but that code does not exist at this time. There was also talk of setting up a
large "movable" zone and using the compaction code to create large, free chunks within that zone.
The other thing that is missing at this point is any kind of measurement of how much power is actually saved using PASR. That
will certainly need to be provided before this code can be considered for inclusion. Meanwhile, it has the appearance of a
less-intrusive PASR capability that might just get past the roadblocks that stopped its predecessor.
Comments (none posted)

The Android ION memory allocator


February 8, 2012
This article was contributed by Thomas M. Zeng

Back in December 2011, LWN reviewed the list of Android kernel patches in the linux-next staging directory. The merging of these drivers, one of which is a memory allocator called PMEM, holds the promise that the mainline kernel release can one day boot an Android user space. Since then, it has become clear that PMEM is considered obsolete and will be replaced by the ION memory manager. ION is a generalized memory manager that Google introduced in the Android 4.0 ICS (Ice Cream Sandwich) release to address the issue of fragmented memory management interfaces across different Android devices. There are at least three, probably more, PMEM-like interfaces. On Android devices using NVIDIA Tegra, there is "NVMAP"; on Android devices using TI OMAP, there is "CMEM"; and on Android devices using Qualcomm MSM, there is "PMEM". All three SoC vendors are in the process of switching to ION.
This article takes a look at ION, summarizing its interfaces to user space and to kernel-space drivers. Besides being a memory
pool manager, ION also enables its clients to share buffers, hence it treads the same ground as the DMA buffer sharing
framework from Linaro (DMABUF). This article will end with a comparison of the two buffer sharing schemes.
ION heaps
Like its PMEM-like predecessors, ION manages one or more memory pools, some of which are set aside at boot time to combat
fragmentation or to serve special hardware needs. GPUs, display controllers, and cameras are some of the hardware blocks that
may have special memory requirements. ION presents its memory pools as ION heaps. Each type of Android device can be
provisioned with a different set of ION heaps according to the memory requirements of the device. The provider of an ION heap
must implement the following set of callbacks:
struct ion_heap_ops {
    int (*allocate) (struct ion_heap *heap,
                     struct ion_buffer *buffer, unsigned long len,
                     unsigned long align, unsigned long flags);
    void (*free) (struct ion_buffer *buffer);
    int (*phys) (struct ion_heap *heap, struct ion_buffer *buffer,
                 ion_phys_addr_t *addr, size_t *len);
    struct scatterlist *(*map_dma) (struct ion_heap *heap,
                                    struct ion_buffer *buffer);
    void (*unmap_dma) (struct ion_heap *heap,
                       struct ion_buffer *buffer);
    void * (*map_kernel) (struct ion_heap *heap,
                          struct ion_buffer *buffer);
    void (*unmap_kernel) (struct ion_heap *heap,
                          struct ion_buffer *buffer);
    int (*map_user) (struct ion_heap *heap, struct ion_buffer *buffer,
                     struct vm_area_struct *vma);
};
Briefly, allocate() and free() obtain or release an ion_buffer object from the heap. A call to phys() will return the
physical address and length of the buffer, but only for physically-contiguous buffers. If the heap does not provide physically
contiguous buffers, it does not have to provide this callback. Here ion_phys_addr_t is a typedef of unsigned long, and
will, someday, be replaced by phys_addr_t in include/linux/types.h. The map_dma() and unmap_dma() callbacks
cause the buffer to be prepared (or unprepared) for DMA. The map_kernel() and unmap_kernel() callbacks map (or
unmap) the physical memory into the kernel virtual address space. A call to map_user() will map the memory to user space.
There is no unmap_user() because the mapping is represented as a file descriptor in user space. The closing of that file
descriptor will cause the memory to be unmapped from the calling process.
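To make the allocate()/free()/phys() contract more concrete, here is a hedged user-space sketch in which malloc() stands in for a contiguous allocator and the pointer value plays the role of the physical address. All toy_* names are invented for illustration and do not appear in ION itself.

```c
#include <stdlib.h>

/* Toy stand-in for ion_buffer: a base "address" and a length. */
struct toy_buffer {
    void *base;
    size_t len;
};

/* Like a heap's allocate() callback: obtain backing storage. */
static int toy_allocate(struct toy_buffer *buf, size_t len)
{
    buf->base = malloc(len);
    if (!buf->base)
        return -1;
    buf->len = len;
    return 0;
}

/* Like the free() callback: release the buffer's storage. */
static void toy_free(struct toy_buffer *buf)
{
    free(buf->base);
    buf->base = NULL;
    buf->len = 0;
}

/* Like phys(): report the "address" and length of a contiguous
 * buffer; fails when nothing is allocated. */
static int toy_phys(const struct toy_buffer *buf, void **addr, size_t *len)
{
    if (!buf->base)
        return -1;
    *addr = buf->base;
    *len = buf->len;
    return 0;
}
```

A heap that cannot hand out physically contiguous memory would simply omit the phys()-style callback, as the text notes.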
The default ION driver (which can be cloned from here) offers three heaps as listed below:
ION_HEAP_TYPE_SYSTEM: memory allocated via vmalloc_user().
ION_HEAP_TYPE_SYSTEM_CONTIG: memory allocated via kzalloc().
ION_HEAP_TYPE_CARVEOUT: carveout memory is physically contiguous and set aside at boot.
Developers may choose to add more ION heaps. For example, this NVIDIA patch was submitted to add
ION_HEAP_TYPE_IOMMU for hardware blocks equipped with an IOMMU.
Using ION from user space
Typically, user space device access libraries will use ION to allocate large contiguous media buffers. For example, the still camera
library may allocate a capture buffer to be used by the camera device. Once the buffer is fully populated with video data, the
library can pass the buffer to the kernel to be processed by a JPEG encoder hardware block.
A user space C/C++ program must have been granted access to the /dev/ion device before it can allocate memory from ION.
A call to open("/dev/ion", O_RDONLY) returns a file descriptor as a handle representing an ION client. Yes, one can
allocate writable memory with an O_RDONLY open. There can be no more than one client per user process. To allocate a buffer,
the client needs to fill in all the fields except the handle field in this data structure:


struct ion_allocation_data {
    size_t len;
    size_t align;
    unsigned int flags;
    struct ion_handle *handle;
};
The handle field is the output parameter, while the first three fields specify the length, alignment, and flags as input parameters. The flags field is a bit mask indicating one or more ION heaps to allocate from, with the fallback ordered according to which ION heap was first added via calls to ion_device_add_heap() during boot. In the default implementation, ION_HEAP_TYPE_CARVEOUT is added before ION_HEAP_TYPE_SYSTEM_CONTIG, so flags of ION_HEAP_TYPE_SYSTEM_CONTIG | ION_HEAP_TYPE_CARVEOUT indicate the intention to allocate from ION_HEAP_TYPE_CARVEOUT, with ION_HEAP_TYPE_SYSTEM_CONTIG as the fallback.
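The first-added-wins fallback can be modeled in a few lines of user-space C. The bit values and the pick_heap() helper below are purely illustrative (the real selection happens against ION's ordered heap list inside the kernel); the point is only the selection order.

```c
#include <stddef.h>

/* Illustrative heap-type bits; not the real ION constants. */
enum heap_type { HEAP_SYSTEM = 1, HEAP_SYSTEM_CONTIG = 2, HEAP_CARVEOUT = 4 };

/* Heaps are tried in the order they were added at boot; carveout
 * comes first in the default implementation described above. */
static const enum heap_type heap_order[] = {
    HEAP_CARVEOUT, HEAP_SYSTEM_CONTIG, HEAP_SYSTEM
};

/* Return the first heap, in add-order, whose bit is set in flags;
 * 0 means no requested heap is available. */
static enum heap_type pick_heap(unsigned int flags)
{
    size_t i;

    for (i = 0; i < sizeof(heap_order) / sizeof(heap_order[0]); i++)
        if (flags & heap_order[i])
            return heap_order[i];
    return 0;
}
```

Requesting both the contiguous and carveout heaps thus resolves to carveout, with the contiguous heap used only as a fallback.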
User-space clients interact with ION using the ioctl() system call interface. To allocate a buffer, the client makes this call:
int ioctl(int client_fd, ION_IOC_ALLOC, struct ion_allocation_data *allocation_data)
This call returns a buffer represented by ion_handle which is not a CPU-accessible buffer pointer. The handle can only be used
to obtain a file descriptor for buffer sharing as follows:
int ioctl(int client_fd, ION_IOC_SHARE, struct ion_fd_data *fd_data);
Here client_fd is the file descriptor corresponding to /dev/ion, and fd_data is a data structure with an input handle field
and an output fd field, as defined below:
struct ion_fd_data {
    struct ion_handle *handle;
    int fd;
};
The fd field is the file descriptor that can be passed around for sharing. On Android devices the BINDER IPC mechanism may be
used to send fd to another process for sharing. To obtain the shared buffer, the second user process must obtain a client handle
first via the open("/dev/ion", O_RDONLY) system call. ION tracks its user space clients by the PID of the process
(specifically, the PID of the thread that is the "group leader" in the process). Repeating the open("/dev/ion", O_RDONLY)
call in the same process will get back another file descriptor corresponding to the same client structure in the kernel.
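The per-process client behavior can be sketched with a toy lookup table keyed by PID. Nothing below is actual ION code; it only models the find-or-create rule described above, where repeated opens from the same process resolve to the same client object.

```c
#include <stddef.h>

#define MAX_CLIENTS 8

/* Toy stand-in for ION's client bookkeeping, keyed by the
 * group-leader PID of the opening process. */
struct toy_client {
    int pid;        /* 0 means the slot is unused */
    int handles;    /* handles owned by this client */
};

static struct toy_client clients[MAX_CLIENTS];

/* Model of open("/dev/ion"): find the existing client for this
 * PID, or create one in a free slot. */
static struct toy_client *client_for_pid(int pid)
{
    int i, free_slot = -1;

    for (i = 0; i < MAX_CLIENTS; i++) {
        if (clients[i].pid == pid)
            return &clients[i];     /* same process: same client */
        if (clients[i].pid == 0 && free_slot < 0)
            free_slot = i;
    }
    if (free_slot < 0)
        return NULL;                 /* table full */
    clients[free_slot].pid = pid;
    clients[free_slot].handles = 0;
    return &clients[free_slot];
}
```

Two opens from PID 100 return the same client, while a different PID gets its own client structure.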
To free the buffer, the second client needs to undo the effect of mmap() with a call to munmap(), and the first client needs to
close the file descriptor it obtained via ION_IOC_SHARE, and call ION_IOC_FREE as follows:
int ioctl(int client_fd, ION_IOC_FREE, struct ion_handle_data *handle_data);
Here ion_handle_data holds the handle as shown below:
struct ion_handle_data {
    struct ion_handle *handle;
};
The ION_IOC_FREE command causes the handle's reference counter to be decremented by one. When this reference counter
reaches zero, the ion_handle object gets destroyed and the affected ION bookkeeping data structure is updated.
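The handle lifetime rule, destruction on the last reference, amounts to ordinary reference counting, sketched below with invented toy_* names (the real ion_handle code also updates the ION bookkeeping structures on destruction).

```c
#include <stdlib.h>

/* Toy model of an ion_handle's reference count. */
struct toy_handle {
    int refcount;
};

static struct toy_handle *handle_create(void)
{
    struct toy_handle *h = malloc(sizeof(*h));

    if (h)
        h->refcount = 1;    /* the creating client holds one reference */
    return h;
}

/* A second import of the same buffer takes another reference. */
static void handle_get(struct toy_handle *h)
{
    h->refcount++;
}

/* Model of ION_IOC_FREE: drop a reference; returns 1 when the
 * handle was destroyed, 0 when references remain. */
static int handle_put(struct toy_handle *h)
{
    if (--h->refcount == 0) {
        free(h);
        return 1;
    }
    return 0;
}
```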
User processes can also share ION buffers with a kernel driver, as explained in the next section.
Sharing ION buffers in the kernel
In the kernel, ION supports multiple clients, one for each driver that uses the ION functionality. A kernel driver calls the following
function to obtain an ION client handle:
struct ion_client *ion_client_create(struct ion_device *dev,
                                     unsigned int heap_mask,
                                     const char *debug_name);
The first argument, dev, is the global ION device associated with /dev/ion; why a global device is needed, and why it must be passed as a parameter, is not entirely clear. The second argument, heap_mask, selects one or more ION heaps in the same way as the flags field of ion_allocation_data, which was covered in the previous section. For smart phone use cases involving multimedia middleware, the user process typically allocates the buffer from ION, obtains a file descriptor using the ION_IOC_SHARE command, then passes the file descriptor to a kernel driver. The kernel driver calls ion_import_fd(), which converts the file descriptor to an ion_handle object, as shown below:
struct ion_handle *ion_import_fd(struct ion_client *client, int fd_from_user);
The ion_handle object is the driver's client-local reference to the shared buffer. The ion_import_fd() call looks up the
physical address of the buffer to see whether the client has obtained a handle to the same buffer before, and if it has, this call
simply increments the reference counter of the existing handle.
Some hardware blocks can only operate on physically-contiguous buffers with physical addresses, so affected drivers need to
convert ion_handle to a physical buffer via this call:
int ion_phys(struct ion_client *client, struct ion_handle *handle,
             ion_phys_addr_t *addr, size_t *len);
Needless to say, if the buffer is not physically contiguous, this call will fail.
When handling calls from a client, ION always validates the input file descriptor, client and handle arguments. For example,
when importing a file descriptor, ION ensures the file descriptor was indeed created by an ION_IOC_SHARE command. When
ion_phys() is called, ION validates whether the buffer handle belongs to the list of handles the client is allowed to access, and
returns error if the handle is not on the list. This validation mechanism reduces the likelihood of unwanted accesses and


inadvertent resource leaks.


ION provides debug visibility through debugfs. It organizes debug information under /sys/kernel/debug/ion, with bookkeeping information stored in files associated with heaps and clients, identified by symbolic names or PIDs.
Comparing ION and DMABUF
ION and DMABUF share some common concepts. The dma_buf concept is similar to ion_buffer, while
dma_buf_attachment serves a similar purpose as ion_handle. Both ION and DMABUF use anonymous file descriptors as
the objects that can be passed around to provide reference-counted access to shared buffers. On the other hand, ION focuses
on allocating and freeing memory from provisioned memory pools in a manner that can be shared and tracked, while DMABUF
focuses more on buffer importing, exporting and synchronization in a manner that is consistent with buffer sharing solutions on
non-ARM architectures.
The following table presents a feature comparison between ION and DMABUF:

Memory Manager Role
    ION: ION replaces PMEM as the manager of provisioned memory pools. The list of ION heaps can be extended per device.
    DMABUF: DMABUF is a buffer sharing framework, designed to integrate with the memory allocators in DMA mapping frameworks, like the work-in-progress DMA-contiguous allocator, also known as the Contiguous Memory Allocator (CMA). DMABUF exporters have the option to implement custom allocators.

User Space Access Control
    ION: ION offers the /dev/ion interface for user-space programs to allocate and share buffers. Any user program with ION access can cripple the system by depleting the ION heaps. Android checks user and group IDs to block unauthorized access to ION heaps.
    DMABUF: DMABUF offers only kernel APIs. Access control is a function of the permissions on the devices using the DMABUF feature.

Global Client and Buffer Database
    ION: ION contains a device driver associated with /dev/ion. The device structure contains a database that tracks the allocated ION buffers, handles, and file descriptors, all grouped by user clients and kernel clients. ION validates all client calls according to the rules of the database. For example, there is a rule that a client cannot have two handles to the same buffer.
    DMABUF: The DMA debug facility implements a global hashtable, dma_entry_hash, to track DMA buffers, but only when the kernel was built with the CONFIG_DMA_API_DEBUG option.

Cross-architecture Usage
    ION: ION usage today is limited to architectures that run the Android kernel.
    DMABUF: DMABUF usage is cross-architecture. The DMA mapping redesign preparation patchset modified the DMA mapping code in nine architectures besides the ARM architecture.

Buffer Synchronization
    ION: ION considers buffer synchronization to be an orthogonal problem.
    DMABUF: DMABUF provides a pair of APIs for synchronization. The buffer-user calls dma_buf_map_attachment() whenever it wants to use the buffer for DMA. Once the DMA for the current buffer-user is over, it signals "end-of-DMA" to the exporter via a call to dma_buf_unmap_attachment().

Delayed Buffer Allocation
    ION: ION allocates the physical memory before the buffer is shared.
    DMABUF: DMABUF can defer the allocation until the first call to dma_buf_map_attachment(). The exporter of the DMA buffer has the opportunity to scan all client attachments, collate their buffer constraints, then choose the appropriate backing storage.
ION and DMABUF can be separately integrated into multimedia applications written using the Video4Linux2 API. In the case of
ION, these multimedia programs tend to use PMEM now on Android devices, so switching to ION from PMEM should have a
relatively small impact.
Integrating DMABUF into Video4Linux2 is another story. It has taken ten patches to integrate the videobuf2 mechanism with
DMABUF; in fairness, many of these revisions were the result of changes to DMABUF as that interface stabilized. The effort
should pay dividends in the long run because the DMABUF-based sharing mechanism is designed with DMA mapping hooks for
CMA and IOMMU. CMA and IOMMU hold the promise to reduce the amount of carveout memory that it takes to build an Android
smart phone. In this email, Andrew Morton was urging the completion of the patch review process so that CMA can get through
the 3.4 merge window.
Even though ION and DMABUF serve similar purposes, the two are not mutually exclusive. The Linaro Unified Memory
Management team has started to integrate CMA into ION. To reach the state where a release of the mainline kernel can boot the
Android user space, the /dev/ion interface to user space must obviously be preserved. In the kernel though, ION drivers may
be able to use some of the DMABUF APIs to hook into CMA and IOMMU to take advantage of the capabilities offered by those
subsystems. Conversely, DMABUF might be able to leverage ION to present a unified interface to user space, especially to the
Android user space. DMABUF may also benefit from adopting some of the ION heap debugging features in order to become
more developer friendly. Thus far, many signs indicate that Linaro, Google, and the kernel community are working together to
bring the combined strength of ION and DMABUF to the mainline kernel.
Comments (6 posted)


Patches and updates


Kernel trees
Greg KH: Linux 3.2.3 (February 3, 2012)
Greg KH: Linux 3.2.4 (February 3, 2012)
Greg KH: Linux 3.2.5 (February 6, 2012)
Greg KH: Linux 3.0.19 (February 3, 2012)
Greg KH: Linux 3.0.20 (February 6, 2012)
Steven Rostedt: 3.0.20-rt35 (February 7, 2012)
Greg KH: Linux 2.6.32.56 (February 3, 2012)
Core kernel code
Hans Verkuil: Add poll_requested_events() function (February 2, 2012)
Paul Turner: sched: entity load-tracking re-work (February 2, 2012)
Srikar Dronamraju: Uprobes patchset with perf probe support (February 2, 2012)
Gilad Ben-Yossef: Reduce cross CPU IPI interference (February 5, 2012)
Glauber Costa: per-cpu/cpuacct cgroup scheduler statistics (February 6, 2012)
Rafael J. Wysocki: PM: Implement autosleep and "wake locks" (February 7, 2012)
Anton Vorontsov: Scheduler idle notifiers and users (February 8, 2012)
Development tools
Frank Ch. Eigler: systemtap 1.7 release (February 2, 2012)
Stephane Eranian: perf: add support for sampling taken branches (February 2, 2012)
Xiao Guangrong: KVM: perf: a smart tool to analyse kvm events (February 2, 2012)
Mathieu Desnoyers: LTTng 2.0 prerelease bundle 20120202 (February 3, 2012)
Jiri Olsa: ftrace, perf: Adding support to use function trace (February 8, 2012)
Device drivers
Aaron Sierra: mfd: Add LPC driver for Intel ICH chipsets (February 3, 2012)
Aneesh V: Add TI EMIF SDRAM controller driver (February 4, 2012)
Paolo Bonzini: virtio-scsi driver (February 5, 2012)
Alan Cox: [PATCH] Add I2C driver for Summit Microelectronics SMB347 Battery Charger (February 6, 2012)
Alan Cox: gpio: add MSIC gpio driver (February 6, 2012)
Hans Verkuil: Add support functions and the radio-keene driver (February 6, 2012)
Sakari Ailus: [PATCH v2 0/31] V4L2 subdev and sensor control changes, SMIA++ driver and N9 camera board code (February 6, 2012)
Adam Jackson: char/mem: Add /dev/io (February 7, 2012)
Artem Bityutskiy: mtd: MTD API rework (February 8, 2012)
Documentation
Paolo Bonzini: Add virtio-scsi to the virtio spec (February 5, 2012)
Filesystems and block I/O
Evgeniy Polyakov: pohmelfs: call for inclusion (February 8, 2012)
Memory management
Marek Szyprowski: Contiguous Memory Allocator (February 3, 2012)
KAMEZAWA Hiroyuki: [PATCH 0/6 v3] memcg: page cgroup diet (February 6, 2012)
Mel Gorman: Swap-over-NFS without deadlocking V2 (February 6, 2012)
Mel Gorman: Swap-over-NBD without deadlocking V8 (February 7, 2012)
Architecture-specific
Venkatesh Pallipadi: Extend mwait idle to optimize away IPIs when possible (February 7, 2012)


Marc Zyngier: Per SoC descriptor (February 8, 2012)
Security-related
David Howells: KEYS: Allow special keyrings to be cleared (February 8, 2012)
Virtualization and containers
Wei Liu: Xen netback / netfront improvement (February 6, 2012)
Miscellaneous
Jörn Engel: Announce: cancd 0.2.0 netconsole capture server (February 6, 2012)
Lucas De Marchi: kmod 5 (February 6, 2012)
Karel Zak: util-linux v2.21-rc2 (February 7, 2012)
Page editor: Jonathan Corbet

Distributions
How long should security embargoes be?
By Jake Edge
February 8, 2012

Security embargoes are something of a double-edged sword. The idea is to coordinate a release date for
a particular security fix between the affected parties (typically distributions in the Linux world). But,
while the embargo is in place, no fixes are being issued, which increases the length of time that users are vulnerable. That could lead to widespread (or targeted) attacks against vulnerable systems if the flaw is already known, is leaked, or is rediscovered during the embargo period. The chances of that may be relatively small, but they certainly increase as a function of the length of the embargo.
It is for those reasons that embargoes in the Linux world are typically short. While proprietary vendors will sometimes sit on a
vulnerability for months, Linux embargoes are generally on the order of a week or two. The rules for the acceptable length of an
embargo are set by the venue where the information is shared, which for Linux distributions is the closed linux-distros mailing
list. There is also an associated distros mailing list, which adds representatives from some of the BSDs. These two lists have taken the place of the vendor-sec list that was compromised in March 2011.
Up until recently, the guidelines for linux-distros specified that the maximum allowable embargo was 14 days. That means that
anyone reporting a bug to the list should be willing to wait that long to publicly release any information; it also binds list
participants to that deadline. The idea is that anyone who doesn't want to agree to embargoes of up to 14 days shouldn't be
using linux-distros. As part of the discussion of the bug and fix on that list, a coordinated release date (CRD) would be chosen so
that all distributions would release at the end of the embargo period.
List administrator Alexander Peslyak (aka Solar Designer) recently made a change to the guidelines to better reflect reality.
There is an effort to avoid having a CRD that falls on a Monday or a Friday, so that it does not land on a holiday or other inconvenient day for administrators who may need to do lots of updates because of the security fix. That meant that the 14-day maximum sometimes stretched to 19 days, so Peslyak changed the page to reflect that. He also posted to the open oss-security list to highlight the change.
There were no complaints about the change, though there were some alternatives suggested (10 business days, for example), until Peslyak followed up his message with a suggestion that perhaps the 14-19 days be reduced to 7-11 days. Even that length of time is longer than he would like, "but I am proposing what I think has a chance to be approved by others without making the list a lot less useful to them".
Peslyak also outlined his reasons for preferring shorter embargo windows. Distributions that have the fix ready more quickly can
get it into the hands of users sooner, rather than waiting a week or more for the embargo to end. Also, it reduces the window of
time in which the vulnerability could be rediscovered or leaked from the list somehow. There are also logistical concerns with
longer embargoes including increased tracking of multiple overlapping embargoes.
Both Marc Deslauriers of the Ubuntu security team and Kurt Seifried of the Red Hat security response team were quick to
disagree with a one-week (more or less) embargo period. Depending on the severity of the bug and the difficulty of the fix, it
may be hard for some distributions to pull together, test, and release fixes in that time frame. In particular, Seifried is concerned
about volunteer-run distributions that may lack the staff to ensure fixes in a shorter period. But Deslauriers makes another
important point:
This means vendors will be keeping information about the vulnerability private until they are confident they are able
to release within a week, at which point they will then share the information with other vendors who will scramble
to get their updates ready.
As a distro, I now have two choices: I sit on vulnerabilities until our own QA and testing is done, at which point I
send them to the list and hope that 7 days is enough for everyone else, or I simply stop using the list for anything
that's more than trivial and contact other vendors directly.
That's the fine line that Peslyak is walking here. If the embargo requirements become too onerous (or seem that way),
distributions may stop reporting to the list, or only report after they have made progress with a fix. But, other lists have other
rules. The closed kernel security list says that it will do embargoes up to 7 days, but it seems to rarely happen that way,
undoubtedly partially due to Linus Torvalds's distaste for embargoes. That policy may also result in distributions delaying reports
to the kernel security list, of course.
It should be noted that these "closed" lists have been fairly leaky at times. In addition to the vendor-sec list compromise, the
kernel security list may have been breached as part of the kernel.org compromise. The linux-distros list now uses PGP-encrypted


emails, which should help unless the host doing the re-encryption to each member's key is breached.
There are also dangers from fixes that are rushed, of course. The recent PHP remote code execution vulnerability came about
because of a rushed fix for a different security hole. There is certainly value in taking some time to get a security fix right; the only question here is how long that should be and, for full-disclosure types, whether users should be made aware of the problem while the fix is in progress.
In the end, the 14-19 day window stayed, though a preference for embargoes of less than 7 days was added. It's a difficult
problem, partly because there are so many unknowns. Fixes can be difficult to apply (particularly if they need to be backported)
and to test, especially for multiple distribution releases. But the longer the bug is embargoed, the longer users are at risk
without any ability to mitigate the problem locally while awaiting a fix. That's why the full disclosure camp believes that
information on security holes should be released without delay, so that users are empowered to make their own decisions. That
approach is not very popular with vendors, of course. Embargoes and their lengths are an issue that will likely be debated again and again because there isn't an "obviously correct" solution.
Comments (8 posted)

Brief items
Distribution quotes of the week
The solution to "My kernel update doesn't boot" should be "Automatically detect that that happened, give the user
that information and fall back to the old kernel", not "Always show the user a menu that they almost always don't
care about". Solve the actual problem.
-- Matthew Garrett
And so, I intend to wear those shoes proudly. While I don't plan on following in the footsteps of anyone (because, you know, that would be walking in circles, which isn't highly productive), I do aspire to step with the same spirit that those before me have: honestly, transparently, communicative-ly (new word!), with humor, and with care. And I aspire to Get Stuff Done, sans red tape.
-- Robyn Bergeron
Anyhow, while I am looking forward to playing around with Debian Wheezy, the current Debian testing branch, I
can foresee Squeeze and my #! Statler builds remaining on a couple of my boxes for a good while yet. IMHO, the
release still has plenty of legs left in it, even if some people consider it only fit for servers. Troglodytes!
-- Philip Newborough
Comments (1 posted)

Distribution News
Debian GNU/Linux

bits from the DPL for January 2012


Stefano Zacchiroli shares a few bits on his January activities. Various legal issues have kept him busy, including the copyright
and licensing of the Debian web site, trademarks, and working with the Debian France association in becoming a Debian Trusted
Organization.
Full Story (comments: none)

Bits from the Debian GNU/Hurd porters


The Debian GNU/Hurd porters have installation media available. These bits also contain their Wheezy release goals. "Since the ftp-master meeting in July 2011, significant improvements have been made, and a technological preview of GNU/Hurd with Wheezy, as was made for kFreeBSD with Squeeze, is still the target."
Full Story (comments: none)
Fedora

Jared Smith steps down as Fedora project leader


Fedora project leader Jared Smith has announced that he is moving on. "I'm happy to announce that Red Hat has selected
Robyn Bergeron to be the next Fedora Project Leader. Robyn has proven herself in the Fedora community over the last several
years, and I have complete confidence in her abilities to lead the Fedora Project. In addition to planning FUDCon Tempe in 2011
and helping to lead the Marketing and Cloud SIGs within Fedora, Robyn has been an integral part of many other Fedora events
and endeavors. Most recently, she has held the role of Fedora Program Manager, helping to ensure that we all stay on schedule
and helping the Fedora feature process stay on track."
Full Story (comments: 12)
Ubuntu family

Canonical pulls funding from Kubuntu


Kubuntu lead developer Jonathan Riddell has sent out an announcement that Canonical will no longer be funding work on the
KDE-based Kubuntu distribution. "The practical changes are I won't be able to work on KDE bits in my work time after 12.04 and
there won't be paid support for versions after 12.04. This is a rational business decision, Kubuntu has not been a business
success after 7 years of trying, and it is unrealistic to expect it to continue to have financial resources put into it."
Full Story (comments: 92)

Newsletters and articles of interest


Distribution newsletters
Debian Project News (February 6)
DistroWatch Weekly, Issue 442 (February 6)
Maemo Weekly News (February 6)
Ubuntu Weekly Newsletter, Issue 251 (February 5)
Comments (none posted)

SUSE hits the big 2-0 (ITWorld)


"2012 will be the year of the lizard," says Brian Proffitt in this nod to SUSE Linux's 20th anniversary this year. "But this relatively
long existence almost didn't come to pass, as what began as S.u.S.E. GmbH in 1992 has undergone two major takeovers, a
partnership with Microsoft that led to near-revolt in the Linux community, and a heretofore-unknown consideration by Red Hat to
purchase the German Linux company just prior to the turn of the century."
Comments (none posted)

Parabola GNU/Linux: Freedom Packaged (OSNews)


OSNews takes a look at Parabola GNU/Linux, a recent entry in the GNU list of free distributions. "Parabola developers chose a
refreshing approach to limiting the availability of non-free software while maintaining the ability to use Arch mirrors: all the
"liberated" (built with special options or otherwise stripped off the non-free parts) packages are included in a separate "libre"
repository; the blacklisting of non-free packages is done with a virtual "your-freedom" package that doesn't install any files but
conflicts with a long list of packages. Installing this package makes pacman (package manager) remove all the non-free software
to resolve conflict or replace it with free analogues if required."
Comments (none posted)
Page editor: Rebecca Sobol

Development
XBMC 11 "Eden"
February 8, 2012
This article was contributed by Nathan Willis

XBMC, the open source media center, has steadily grown from its humble origins as an X-Box-only replacement environment into the cross-platform, de facto playback front-end for multimedia content. It merges the file-centric approach taken by traditional video players with an add-on scripting environment that handles remote web content. The project is currently finalizing its next major release, version 11.0 (codenamed Eden), which includes updates to the networking and video acceleration subsystems, broader hardware support, and numerous changes to the APIs available to add-on developers.
Granted, there are plenty of other "media center" projects under active development at the
moment, most of which also employ FFmpeg and can play back the majority of the same
content types. Where XBMC differentiates itself, however, is with its auto-detection of critical
settings and the deep integration it provides across its (wide) range of networking and
content-delivery protocols. For example, XBMC auto-detects the presence of Universal
Plug-and-Play (UPnP) servers and HDHomeRun tuner devices on the network, while too
many other media centers require the user to arrive with the requisite connection details written down.
Similarly, we no longer live in a world where the bulk of our media content consists of local or LAN-available files. XBMC's add-on
system permits developers to write site- or service-specific extensions that integrate commercial content creators' home-brewed
Flash video delivery sites neatly into the overall interface. Other media center applications work at the task too, but over time
the project has earned itself a reputation for playing host to high-quality add-ons that stay current with changes rolled out on
the sites themselves.
Starting with the last major release, 10.0, XBMC has hosted its own add-on repository,
making such add-ons instantly installable from the main UI. The project wiki maintains a list
of active add-ons, including those not found in the official repository, broken down by
version compatibility and add-on type. It includes the site-specific content add-ons, plus
many that add new functionality (games, torrent support, electronic program guides) and
plug-ins to support new data sources (Icecast streams, MythTV servers, and fetching lyrics,
cover images, and program metadata from around the web). There are a handful of
commercial services that permit XBMC add-ons to use their APIs (such as Grooveshark), but if your interest is primarily in paid
services, most of those companies take steps to make add-ons for XBMC and other open source media centers incompatible
even when they permit in-browser playback.
User-visible changes in Eden
Speaking of add-ons, one of the most original ideas to debut in XBMC 11 is the ability to roll back add-ons to a previous version. Clearly that feature is not expected to be used when the update to an add-on is designed to repair breakage with the package's screen-scraping capabilities, but it may prove popular with users when an add-on update makes a
bad UI decision or implements a questionable new feature. Linux users may associate
package rollback with risky options to forcibly downgrade packages, but XBMC add-ons are
independent of each other. A closer comparison would be to Firefox and Thunderbird's
add-ons, but they offer no such rollback mechanism.


XBMC's user interface is also one of its major selling points, both in its ease of use and its ease of configurability. The Eden release introduces a newly renovated default skin named "Confluence," which is the first default skin to use a horizontal main-menu layout. That decision wastes slightly less screen space, considering the popularity of 16:9 and 16:10 aspect ratios. But a more practical feature is that users can now add any item they want to the home screen's menu; in previous releases, all add-ons were relegated to the "Programs" submenu, which was a hassle for heavy add-on users. Naturally, users can also remove menu items, which I did immediately with the "Weather" entry (whose prominent place on the home menu has always felt awkward). The new UI also sports multiple selections (when using a mouse), touch or gesture input (presumably for tablet users), and auto-login (for users running XBMC in a kiosk setting). Users can also search for installable add-ons by keyword from within XBMC itself, which is far faster than manually scrolling through screen after screen of available add-ons with a remote control.
The application attempts to present all of the available media resources in a particular category (photos, audio, or video)
together in one place, regardless of origin. This release adds support for five new content source types: NFS shares, Apple Filing
Protocol (AFP) shares, Slingboxes, Apple AirPlay devices, and "disc stubs" of un-mounted DVD or Blu-Ray discs. The Slingbox is
an embedded video streaming device that can be connected to component or HDMI video sources. AirPlay is Apple's brand
name for streaming media over the LAN from iPod and other iOS devices.
The disc stub feature is intended to help users organize their physical media, indexing the contents of discs for searchability; you would still have to physically load the discs in order to play back their contents. However, the new release also adds support
for treating ISO files as virtual disk volumes, so if lugging the discs back and forth across the room is too taxing, XBMC has you
covered there, too. There are other minor tweaks in this arena, too, such as the elimination of an artificial distinction between
local "files" and the user's "library." From now on, all the files that XBMC knows about can be browsed together.
Finally, there is a major upgrade to the application's subtitle support, including support for subtitles embedded within MP4 files,
and support for rendering subtitles with the style tags (covering font selection, text color, and italics/boldface) found in several
external subtitle file formats.
New technical features
Arguably the biggest "silent" feature in XBMC 11.0 is full support for the Video Acceleration API (VAAPI), Freedesktop.org's
hardware-agnostic API for GPU acceleration of video decoding. XBMC is officially dropping support for the older, MPEG2-only
XvMC in favor of VAAPI, which supports hardware-accelerated decoding of more formats, on Nvidia, Intel, and ATI graphics
chips. VAAPI is a Linux-only feature, of course; on Mac OS X 10.6 or later, XBMC uses Apple's Video Decode Acceleration
Decoder (VDADecoder) hardware acceleration instead, and on recent Windows systems it uses Microsoft's DirectX Video
Acceleration (DxVA). Linux boxes can also use OpenMAX for hardware video acceleration, which is most useful for systems built
on Nvidia's Tegra2 platform.
The user interface itself uses OpenGL, OpenGL ES, or EGL, so it, too, can be hardware accelerated. Use of GPU acceleration for
both video decoding and the GUI reduces XBMC's CPU requirements considerably, and 11.0 officially introduces support for
several more low-resource systems. On the Linux side, this includes Texas Instruments OMAP4 processors. On the Apple side, it
includes better support for recent iOS 4.x devices (including recent iPads and AppleTVs). But for those who still rely on their
CPUs, regardless of the platform, XBMC can now detect CPU features like SSE and MMX at runtime.
Apart from hardware concerns, this release introduces a revamped JSON-RPC subsystem, which is primarily of interest to add-on
developers. The changes are substantial, as the goal was to make XBMC's implementation compliant with the JSON-RPC 2.0
specification. The add-ons subsystem uses Python scripting, and in another important change for developers, 11.0 drops XBMC's
bundled Python implementation in favor of using the system Python library. This is more in keeping with XBMC's reliance on OS
libraries for other functionality (such as protocol stacks), although the project still uses its own media renderers for images,
video, and audio content. Because the host OS's Python version may differ from the bundled library found in older XBMC
releases, there is a backwards-compatibility mode that add-on developers can invoke.
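The compliance work is easy to appreciate from the client side, since a JSON-RPC 2.0 request is just a small JSON envelope. The sketch below builds spec-compliant request and notification objects in plain Python; the method names used here are illustrative assumptions, and a real client would discover the available methods through XBMC's introspection support rather than trusting this sketch.

```python
import json

def make_request(method, params=None, req_id=1):
    """Build a JSON-RPC 2.0 request envelope as a JSON string."""
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        req["params"] = params
    return json.dumps(req)

def make_notification(method, params=None):
    """A notification deliberately omits "id"; per the 2.0 spec,
    the server sends no response to it."""
    req = {"jsonrpc": "2.0", "method": method}
    if params is not None:
        req["params"] = params
    return json.dumps(req)
```

Posting the output of make_request("JSONRPC.Version") to a running instance's remote-control interface (when enabled) should come back in a matching "jsonrpc": "2.0" response envelope, which is exactly the shape the pre-Eden implementation did not reliably honor.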
Add-on authors have three other new features at their disposal from 11.0 on. The first is an XML storage system allowing each
add-on to save user preferences in its own private file. The second is a set of hooks to display progress-meters on screen for the
user's benefit, a feature designed to improve feedback when buffering web video. Finally, previous releases of XBMC allowed for
a web-based control interface, which could be exploited to bring remote-control-like features to arbitrary tablets and mobile
phone browsers in addition to desktops. With the new release, each add-on package can also provide a separate web-interface
of its own, which simply makes more features accessible to users not near an infrared remote.
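The per-add-on preference files follow a simple pattern. Here is a minimal sketch of reading and writing such a file with nothing but the standard library, assuming a flat <settings><setting id="..." value="..."/> schema; the exact schema is an assumption for illustration, not a documented guarantee of the add-on API.

```python
import xml.etree.ElementTree as ET

def save_settings(path, settings):
    """Write a dict of preferences as a flat settings.xml file
    (schema assumed here, in the style of per-add-on XML storage)."""
    root = ET.Element("settings")
    for key, value in sorted(settings.items()):
        ET.SubElement(root, "setting", id=key, value=str(value))
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)

def load_settings(path):
    """Read the same flat schema back into a dict."""
    root = ET.parse(path).getroot()
    return {s.get("id"): s.get("value") for s in root.findall("setting")}
```

Because each add-on gets its own private file, a bad write in one add-on cannot corrupt another's preferences, which fits the independence between add-ons noted above.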
The view from 10 feet back
If I were to channel my inner couch potato, I would have to admit that my favorite improvements in XBMC 11.0 are of the
cosmetic variety: namely, the new and greatly improved theme, the ability to customize the home menu, and the unification of media content regardless of the source. That may sound superficial, but in my experience, building a remote-control-friendly UI is one of the highest hurdles in open source software development: many try; few, if any, succeed. XBMC may fare better than the
Linux-only media centers because it has a substantial following among Windows and OS X users, thus giving it exposure to far
more testing and feedback. But whatever the reason, in practical usage XBMC comes the closest to feeling like a genuine OEM
consumer electronics product.
Digging deeper, though, the VAAPI support is an important milestone, too. VAAPI has been a long time coming, but in 2011 and
now 2012, it appears to be hitting the mainstream. Low-power set-top boxes are certainly where VAAPI makes the most sense; at SCALE 10X in January, one of the most talked-about booths was the demonstration of XBMC 11.0 running on a $25
Raspberry Pi board. There are already plenty of niche commercial products built on top of XBMC, but when HD video is available
on a $25 board, it will be hard for Apple and Microsoft to compete.
To see the impact of the changes to the add-on development APIs, we may have to wait, but the project's add-on community
has earned the benefit of the doubt. Sadly, at the moment the Linux builds of the most recent XBMC Eden have yet to land: Beta
1 is available for download, but Beta 2 is only provided for Windows and Apple systems at the moment. The final release does
not have a due date yet, but the hold-up is reported to be with the "XBMC Live" live CD version, which is getting a rework to be
more compatible with the upstream Ubuntu releases on which it is based. Given the pace of the first two beta releases, though, you might want to keep the red carpet within reach.


Comments (24 posted)

Brief items
Quotes of the week
In the end we all agree GCC does something nasty (and I would call it a bug even), but any solution we find in GCC
won't be backportable to earlier releases so you have to deal with the GCC bug for quite some time and devise
workarounds in the kernel. You'll hit the bug for all structure fields that share the largest aligned machine word
with a bitfield (thus the size depends on the alignment of the full object, not that of the struct containing the
bitfield in case that struct is nested inside another more aligned one). This situation should be easily(?) detectable
with sparse.
-- Richard Guenther
In the embedded market, the biggest problem is that the distributions of BusyBox fail to include the "scripts to
control compilation and installation of the executable", which the GPLv2 requires.
As such, users who wish to take a new upstream version of BusyBox and install it on their device are left without
any hope of doing so. Most embedded-market GPL enforcement centers around remedying this.
Indeed, enforcement has brought some great successes in this regard. As I wrote on in my blog post on this
subject (at http://sfconservancy.org/blog/2012/feb/01/gpl-enforcement/ ), both the OpenWRT and SamyGo
firmware modification communities were launched because of source releases yielded in past BusyBox enforcement
actions. Getting the "scripts to control compilation and installation of the executable" for those specific devices are
what enabled these new upstream firmware projects to get started.
-- Bradley Kuhn
Comments (none posted)

Announcing fulltext
Fulltext is a Python library that can extract text from binary files. "Fulltext is a library that makes converting various file formats
to plain text simple. Mostly it is a wrapper around shell tools. It will execute the shell program, scrape it's results and then
post-process the results to pack as much text into as little space as possible."
Full Story (comments: none)

Gnash 0.8.10 released


Gnash 0.8.10 is out. "Gnash is the GNU Flash player, a free/libre SWF movie player, with all the source code released under
GPLv3 or later. Gnash is available as both a standalone player and also as a browser plugin for Firefox (and all other Gecko
based browsers), Chromium and Konqueror." Previous conversations on the development list suggest that this may be the last
Gnash release for some time.
Full Story (comments: none)

Newsletters and articles


Development newsletters from the last week
Caml Weekly News (February 7)
LibreOffice development summary (February 6)
Perl Weekly (February 6)
PostgreSQL Weekly News (February 5)
Tahoe-LAFS Weekly News (February 4)
Comments (none posted)

Tratt: Fast Enough VMs in Fast Enough Time


Laurence Tratt, the designer of the Converge language, has written a detailed introduction to RPython, the language used as the
base of the PyPy project. "However, in addition to outputting optimised C code, RPython automatically creates a second
representation of the user's program. Assuming RPython has been used to write a VM for language L, one gets not only a
traditional interpreter, but also an optimising Just-In-Time (JIT) compiler for free. In other words, when a program written in L
executes on an appropriately written RPython VM, hot loops (i.e. those which are executed frequently) are automatically turned
into machine code and executed directly. This is RPython's unique selling point, as I'll now explain."
Comments (22 posted)

Five open source hardware projects that could change the world (The H)
Here's a lengthy survey of open hardware projects in The H. "The price/performance of a general purpose computer built using
FPGAs wouldn't be great when compared with commodity gear, but the technology excels in many niche and specialist
applications, such as in areas of computing that make use of dedicated hardware to bring high performance to tasks such as
signal processing, encryption and networking. Since you can program many hardware paths in an FPGA they are well suited to
jobs that can be broken down and processed in parallel, and some of the more powerful devices pack millions of logic blocks and
have a transistor count well into the billions, with a blisteringly fast serial bandwidth that is measured in terabits/second."
Comments (none posted)

Interviewing Ton Roosendaal: will it blend?


Luca Tringali has posted an interview with Ton Roosendaal. "As you all may know, he is the creator of Blender and the head of
the Blender Institute. Anyway, for me, the most important idea he developed is the "open movie" project. It introduces a
completely new concept of creating an artistic opera, where the public can be an active part during the production and
expecially after it, possibly improving the opera itself or creating another version (if it's a movie, you can create your own final).
Basically, it's the power of free open source software ported to art, expecially cinematographic art." (Thanks to Paul Wise).
Comments (none posted)
Page editor: Jonathan Corbet

Announcements
Brief items
Google Summer of Code 2012 is on
The 2012 edition of the Google Summer of Code has been announced. "This will be the 8th year for Google Summer of Code, an
innovative program dedicated to introducing students from colleges and universities around the world to open source software
development. The program offers student developers stipends to write code for various open source projects with the help of
mentoring organizations from all around the globe. Over the past seven years Google Summer of Code has had 6,000 students
from over 90 countries complete the program. Our goal is to help these students pursue academic challenges over the summer
break while they create and release open source code for the benefit of all."
Comments (none posted)

The end of LinuxDevices?


LinuxDevices.com is carrying a brief note from the "outgoing editor-in-chief" stating that the site's owner has been acquired. "At
this point, the future of LinuxDevices.com is uncertain. What we can say for sure is that it has been a pleasure serving our
readers -- the best in the business."
Comments (29 posted)

New Books
"Open Advice" from 42 free software contributors
"Open Advice" is a new book consisting of essays from some 42 community authors; it is available in print form or downloadable
under the CC-BY-SA license. "This book is the answer to 'What would you have liked to know when you started contributing?'.
The authors give insights into the many different talents it takes to make a successful software project, coding of course but also
design, translation, marketing and other skills. We are here to give you a head start if you are new. And if you have been
contributing for a while already, we are here to give you some insight into other areas and projects."
Comments (5 posted)

Articles of interest
FSFE Newsletter - February 2012
The February edition of the Free Software Foundation Europe Newsletter looks at freeing your cell phone, learning to program,
Document Freedom Day, the 2012 Fellowship election, and "I love Free Software" - Day.
Full Story (comments: none)

Gettys: Bufferbloat demonstration videos


Jim Gettys says: "If people have heard of bufferbloat at all, it is usually just an abstraction despite having personal experience
with it. Bufferbloat can occur in your operating system, your home router, your broadband gear, wireless, and almost anywhere
in the Internet. They still think that if they experience poor Internet speed it means they must need more bandwidth, and take vast speed variation for granted. Sometimes, adding bandwidth can actually hurt rather than help. Most people have no idea what they can do about bufferbloat. So I've been working to put together several demos to help make bufferbloat concrete, and
demonstrate at least partial mitigation." Definitely useful viewing for anybody who is concerned with the problem and how to
begin addressing it.
Comments (30 posted)

Mueller: Apple's iterative approach to FRAND abuse is not for the faint of heart
Florian Mueller's update on the patent battles between Apple, Motorola, and Samsung has a clear slant, but it is still a
worthwhile look at how the mobile patent wars may be settled. There is little cheer for the free software world here. "They hope
that the disruptive impact of such injunctions on Apple's business will force Apple to grant them a license to all of its
non-standards-related patents (such as its multitouch inventions) as part of a broader settlement. In other words, they want to
use FRAND patents to reach a state of 'mutually assured destruction', in which the notion of intellectual property would become
meaningless between large players that have a critical mass of patents (it would merely serve to exclude new entrants without
large patent portfolios)."
Comments (49 posted)

Seigo: Spark answers


Aaron Seigo answers questions about the Spark tablet, which is based on Plasma Active, that he announced on January 29.
There is more information about the hardware and software, delivery timeframe (May 2012), and pre-orders: "Pre-order
registration will open early next week. This was one piece in the puzzle that was taking a bit [longer] than I hoped for to come together, but it's finally slotted in and our distribution partner has got the necessary infrastructure settled. I'll lift the veil
off of the pre-order and our distribution strategy when it goes live."
Comments (10 posted)

Upton: Raspberry Pi: Two things you thought you weren't going to get
Liz Upton reports that Raspberry Pi boards will be available by the end of the month. "There's another big piece of news today. We've been leaning (gently and charmingly) on Broadcom, who make BCM2835, the SoC at the heart of the Raspberry Pi, to produce an abbreviated datasheet describing the ARM peripherals in the chip. If you're a casual user, this won't be of much interest to you, but if you're wanting to port your own operating system or just want to understand our Linux kernel sources, this is the document for you." (Thanks to Paul Wise)
Comments (19 posted)

Wheeler: New Hampshire: Open source, open standards, open data


David A. Wheeler reports that the US state of New Hampshire has passed an act requiring state agencies to consider open source software and promote the use of open data formats, and requiring the commissioner of information technology (IT) to develop an open government data policy. "First, here's what it says about open source software (OSS): for all software acquisitions, each state agency shall 'Consider whether proprietary or open source software offers the most cost effective software solution for the agency, based on consideration of all associated acquisition, support, maintenance, and training costs.' Notice that this law does not mandate that the state government must always use OSS. Instead, it simply requires government agencies to consider OSS. You'd think this would be useless, but you'd be wrong. Fairly considering OSS is still remarkably hard to do in many government agencies, so having a law or regulation clearly declare this is very valuable. Yes, closed-minded people can claim they considered OSS and paper over their biases, but laws like this make it easier for OSS to get a fair hearing. The law defines open source software (OSS) in a way consistent with its usual technical definition; indeed, this law's definition looks a lot like the free software definition. That's a good thing; the impact of laws and regulations is often controlled by their definitions, so having good definitions (like this one for OSS) is really important."
Comments (none posted)

Calls for Presentations


Akademy 2012 Call for Papers
The KDE community conference, Akademy, will take place June 30-July 6 in Tallinn, Estonia. Registration is open now, as is the
call for papers which will close March 15. See this announcement for details. "Akademy features a 2-day conference packed with
presentations on the latest KDE developments, followed by 5 days of workshops, birds of a feather (BoF) and coding sessions."
Comments (none posted)

The SuperCollider Algostep Remix Competition


The SuperCollider Symposium 2012 will be held April 12-19 in London, UK. The SuperCollider Algostep Remix Competition is
accepting entries until April 1. The idea is to remix SuperCollider code which is provided online.
You have the choice of:
(a) Remix the code: take the source code, fork it, hack it, change it beyond belief, take it in whole new directions.
(b) Remix the audio: if you're a music producer but not so much into the code, take these stems and remix them, however you like.
Comments (none posted)

Upcoming Events
Events: February 9, 2012 to April 9, 2012
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location
February 6 - February 10 | Linux on ARM: Linaro Connect Q1.12 | San Francisco, CA, USA
February 10 - February 12 | Skolelinux/Debian Edu developer gathering | Oslo, Norway
February 10 - February 12 | Linux Vacation / Eastern Europe Winter session 2012 | Minsk, Belarus
February 13 - February 14 | Android Builder's Summit | Redwood Shores, CA, USA
February 15 - February 17 | 2012 Embedded Linux Conference | Redwood Shores, CA, USA
February 16 - February 17 | Embedded Technology Conference 2012 | San José, Costa Rica
February 17 - February 18 | Red Hat, Fedora, JBoss Developer Conference | Brno, Czech Republic
February 17 - February 19 | Debian BSP in Paris | Paris, France
February 24 - February 25 | PHP UK Conference 2012 | London, UK
February 27 - March 2 | ConFoo Web Techno Conference 2012 | Montreal, Canada
February 28 | Israeli Perl Workshop 2012 | Ramat Gan, Israel
March 2 - March 4 | BSP2012 - Moenchengladbach | Mönchengladbach, Germany
March 2 - March 4 | Debian BSP in Cambridge | Cambridge, UK
March 5 - March 7 | 14. German Perl Workshop | Erlangen, Germany
March 6 - March 10 | CeBIT 2012 | Hannover, Germany
March 7 - March 15 | PyCon 2012 | Santa Clara, CA, USA
March 10 - March 11 | Debian BSP in Perth | Perth, Australia
March 10 - March 11 | Open Source Days 2012 | Copenhagen, Denmark
March 16 - March 17 | Clojure/West | San Jose, CA, USA
March 17 - March 18 | Chemnitz Linux Days | Chemnitz, Germany
March 23 - March 24 | Cascadia IT Conference (LOPSA regional conference) | Seattle, WA, USA
March 24 - March 25 | LibrePlanet 2012 | Boston, MA, USA
March 26 - March 29 | EclipseCon 2012 | Washington D.C., United States
March 28 - March 29 | Palmetto Open Source Software Conference 2012 | Columbia, South Carolina, USA
April 3 - April 5 | LF Collaboration Summit | San Francisco, USA
April 5 - April 6 | Android Open | San Francisco, CA, USA

If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol

Copyright 2012, Eklektix, Inc.


Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds
