https://lwn.net/Articles/479071/bigpage
Tracking users
By Jake Edge
February 8, 2012
User tracking is always contentious. There are real advantages to gathering lots of information on how
an application is used, but there are also serious drawbacks in terms of privacy. Many applications or
distributions have "opt-in" mechanisms that report back, but that makes the data somewhat suspect
because it comes from a self-selected group. But "opt-out" data gathering is frowned upon by privacy advocates and privacy-conscious users. As a recent discussion in the Mozilla dev-planning group shows, though, there are some who find that the need for data may outweigh some privacy concerns.
Mozilla is understandably concerned with Firefox's decline in market share and would like to try to determine what the
underlying causes are. That has led to a proposal for a feature called MetricsDataPing that would collect a wide variety of
information about the browser, its add-ons, and how it is used. That information would be sent to Mozilla over HTTPS each day
that the browser is used. Crucially, the proposal is that MetricsDataPing would be an opt-out feature, which would require users
to know about the feature and disable it if they didn't want to share that data.
This stands in contrast to existing features like Telemetry, which gathers data on browser performance but differs from MetricsDataPing in two crucial ways. First, it is opt-in, so users actively have to enable it; second, it tries to avoid gathering any
personally identifiable information (PII). It does not store IP addresses (but does geolocate the IP address and store that) and it
generates a new ID every time the browser is restarted.
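Telemetry's restart-scoped identifier is the key to that second property. A minimal sketch of the idea (not Mozilla's actual code): because a fresh random ID is generated per session and never persisted, submissions from two different sessions cannot be linked to each other.

```python
import uuid

def new_session_id():
    # a fresh random identifier for each browser session; because it is
    # never written to disk, restarting produces an unrelated ID and
    # submissions across sessions cannot be correlated
    return str(uuid.uuid4())

first_run = new_session_id()
second_run = new_session_id()  # simulates a browser restart
```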
MetricsDataPing on the other hand would gather a much wider range of information such that "fingerprinting" a user just based
on the data gathered would be a real possibility. Just a list of add-ons installed is probably nearly unique, but adding in just the
installation date for the add-on, as MetricsDataPing does, would almost certainly make it unique. Information about search
sources used, number of searches done, and that sort of thing also rings alarm bells for those concerned about privacy. It also
uses a "document ID" to identify the data sent to the server, which would allow users to delete their data from the Mozilla
servers. But the document ID could also essentially serve as a unique user ID (UUID) because the previous document ID is
always sent with the current update, so that the older one can be deleted.
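To see why an add-on list plus install dates is so identifying, consider how easily such "anonymous" data collapses into a stable identifier. The sketch below is purely illustrative (the field names are hypothetical, not the actual MetricsDataPing payload): hashing a canonical serialization of the list yields the same value for every report from the same install, which is all that tracking requires.

```python
import hashlib
import json

# hypothetical add-on records with install dates (illustrative data)
addons = [
    {"id": "adblock@example.org", "installed": "2011-06-14"},
    {"id": "greasemonkey@example.org", "installed": "2010-02-01"},
]

def fingerprint(addon_list):
    # canonical serialization (sorted, stable key order), then a hash:
    # any two reports with the same add-ons and dates collide on purpose
    canon = json.dumps(sorted(addon_list, key=lambda a: a["id"]),
                       sort_keys=True)
    return hashlib.sha256(canon.encode("utf-8")).hexdigest()

fp = fingerprint(addons)
```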
There are efforts to anonymize the data that would be stored, but, as we have seen before, it is very difficult to truly anonymize
collected data. Some of that is also true for Telemetry, because it has added fingerprintable data after its initial roll-out, but the
key difference is that users have willingly chosen to share that data. That's the main difficulty that some see with the
MetricsDataPing proposal. Benjamin Smedberg started off the discussion with a posting of his concerns:
It seems as if we are saying that since we already collect most of this data via various product features, that makes
it ok to also collect this data in a central place and attach an ID to it. Or, that because we *need* this data in order
to make the product better, it's ok to collect it. This makes me intensely uncomfortable. At this point I think we'd be
better off either collecting only the data which cannot be used to track individual installs, or not implementing this
feature at all.
But others, especially on the Mozilla metrics team, believe that the information gathered is critical. Blake Cutler described it this
way:
The Metrics Data Ping is an attempt to apply scientific principles to product design and development. Mozilla relies
too much on gut decisions, which directly translates to poor product decisions. Firefox analytics are stuck in the
dark ages. It shows.
Ben Bucksch made several suggestions on how to improve the privacy of the data gathered, but he is also worried that
gathering data to figure out why Firefox usage is declining will actually result in more users leaving because of a perception that
the browser is intruding on their privacy. While the data may be important and useful, there are other considerations according
to Justin Lebar:
Yeah, it sucks that we can't tell why people stop using Firefox. But our [principles] are more important than that.
To that end, the discussion shouldn't center on why these metrics are important or difficult to obtain another way.
The discussion is about whether we can at once collect the proposed metrics and stay true to our values. If we
can't, then we can't collect the data, no matter how important it may be.
There was some discussion of technical measures to try to reduce the PII content of the messages, but there are still problems
with things like fingerprinting. If you gather enough information (of the kind the metrics team thinks it needs), you are very
likely to be able to track users. Even if the data is massaged in some fashion (aggregated for example), the perception of privacy
invasion will still be present as Boris Zbarsky pointed out:
One problem is that some people will assume that if data is being sent then it's being used, no matter what we
actually do with it and say we do with it. So if we _can_ design things such that we couldn't misuse them even if
we were to want to, we should. I understand that in general this is pretty difficult....
Even for opt-in services like Telemetry, gathering additional information requires user agreement. When the list of add-ons was
added to the information that Telemetry supplied, users were required to opt back in to Telemetry after being informed of that
change. As Lebar noted: "So again, here we have a decision made about sending the list of add-ons in a ping-type thing, that
we cannot do it without explicit permission, even for people who already opted in to data collection." But MetricsDataPing would,
seemingly, gather that information without asking the user even once.
Early in the thread, Mike Beltzner pointed to a posting on the Mozilla privacy blog that committed Mozilla "to a basic policy of 'no surprises, real choices, sensible settings, limited data, and user control'", he said. It's a bit hard to see how MetricsDataPing fits
into that framework. For some Linux distributions (which are probably not where Mozilla is focused on market share) it could easily be seen as a misfeature that should be removed from the code, though that might lead to more "iceweasels" due to Mozilla trademark issues.
In the end, Mozilla may need to find a way to satisfy its data needs with an opt-in feature, or find a very convincing argument
for the impossibility of user tracking with the data it does collect. There is also the argument that there is a subtle
self-unselection bias that is introduced with an opt-out feature. In what ways does the data get skewed by eliminating the very
privacy-conscious? It is certainly understandable that the metrics team (and Mozilla as a whole) wants the data, but, like Linux distributions, it may have to settle for indirect measurements or some self-selection bias.
Comments (80 posted)
February 8, 2012
This article was contributed by Nathan Willis
Fans of Scribus have had access to unstable development builds for much of the time that 1.4.0 has been in development and
testing (LWN covered the release of 1.3.5, the first release in the series that became 1.4, in August 2009), but the "stable"
branding is one the project was intent on not rushing. The 1.3.5/1.4 series introduces changes
to the file format that make newer files incompatible with the old stable release, so running the
stable and development series side-by-side was a risky proposition for anyone using Scribus for
production work.
The new release is available in binary form for Debian-based Linux systems (32 and 64 bit),
Windows, Mac OS X, and (so they tell us...) OS/2. In the past, RPM packages have also been
provided on the site, but they do not appear to have landed yet for 1.4.0. Scribus pulls in a hefty list of dependencies, due to its
need to support a variety of embedded content types, but nothing out of the ordinary for a modern distribution.
New underpinnings
The biggest change to the code base in Scribus 1.4.0 is the migration to the Qt 4 framework and, according to developer
Peter Linnell, it was also the source of the long development cycle. But Qt 4 also brings with it the project's first stable builds for
Mac OS X, which is important because of its position as the dominant DTP operating system. As Linnell explained it, "we spent a
lot of time optimizing the code for Qt4, and also we wanted a really, really solid release." Porting to OS X involved plenty of GUI
and human interface guidelines (HIG) work in order to make the application fit in, but it also entailed integrating with the OS X
font, color management, and printing subsystems. At the same time, Qt 4 allowed the project to make its first appearance on
the *BSDs and OpenSolaris.
A seemingly minor change in version 1.4 is the unification of Undo/Redo tracking across the application. The Undo history now
tracks all sorts of edits that were previously not undoable, such as text formatting changes. But that effort also uncovered a lot
of "dodgy code" in need of refactoring, Linnell said.
The marquee feature in 1.4.0 is the Render Frame content type. In Scribus, as in most other DTP applications, the editing model
consists of a set of individual pages onto which you place and rearrange the objects that make up your document: blocks of text, images, lines, footnotes, and other features. Scribus calls these objects frames, and it has long supported an impressive
list of file formats for frame content including the native formats of Adobe Photoshop, Illustrator, and other proprietary
applications.
Render frames are different in that the content they contain is not a static file, but rather the output generated by an external
renderer that will be called when the Scribus project is printed or exported. Any application that can be called from the
command line and produce PostScript, PDF, or PNG output can be used as a renderer. This allows Scribus documents to pull in
generated content like graphs and complex mathematical formulas, without requiring them to first be exported to a separate
image file. That means the external files can be updated at any point without re-building the Scribus document, and it means
the final product can be rendered at the appropriate resolution (for print or on-screen viewing) without extra effort. The default
set of render frames in 1.4.0 includes TeX and LaTeX, Gnuplot, dot/Graphviz, Lilypond, and POV-Ray.
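The render-frame mechanism amounts to shelling out to a renderer and importing what it writes. A rough sketch of that contract follows; the function name and command template are illustrative, not Scribus's internal API.

```python
import os
import subprocess
import tempfile

def render_external(cmd_template, source, out_path):
    # Write the frame's source text to a temporary file, then invoke the
    # external renderer, the way a render frame does at print/export time.
    # cmd_template uses {src} and {out} placeholders (hypothetical API).
    with tempfile.NamedTemporaryFile("w", suffix=".src", delete=False) as f:
        f.write(source)
        src = f.name
    try:
        cmd = cmd_template.format(src=src, out=out_path).split()
        subprocess.run(cmd, check=True)
    finally:
        os.unlink(src)

# e.g., with Graphviz installed, a dot render frame would behave like:
# render_external("dot -Tpng {src} -o {out}", "digraph { a -> b }", "graph.png")
```

Because the contract is only "command in, PostScript/PDF/PNG out", any of the shipped renderers (LaTeX, Gnuplot, Lilypond, POV-Ray) slots into the same shape.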
Usability
Render frames represent a conceptual left-turn from typical DTP thinking, but they open the door to a powerful new set of uses.
On the other hand, the typical DTP document-building approach differs so much from word processing and general text editing
that many new users find it difficult to get started. On that front, Scribus 1.4.0 introduces some changes that should make the
learning curve less intimidating, primarily by making far more on-screen objects directly editable.
In earlier releases, adding text to a document took two steps: dropping the text frame into position
on the page, and opening the text frame in the "story editor" component. For a lot of new users,
that was difficult to grasp, so it is likely to be a popular move that 1.4.0 enables direct text editing
on the page. The story editor component is still available, and offers access to more features like
saved paragraph and character styles, but it is not required.
Similar improvements have landed for working with image content. Almost any object can now be
directly edited on the canvas, including vectors and raster images. Transformation tools and Boolean
operations like those you would find in GIMP and Inkscape are provided, and when those will not suffice, images can be opened
in an external editor.
Scribus's image manager, which serves as a browser and inspector for all of the image objects linked into a project, also received a significant upgrade for 1.4. From the manager, you can see image details for each object in the project (including the file path, original dimensions, and scaled size), look up each instance where an image is used in the document, and apply
file path, original dimensions, and scaled size), look up each instance where an image is used in the document, and apply
non-destructive image effects. Although a single-page document may not be difficult to keep tabs on without use of the image
manager, it is an indispensable aid for multi-page reports or booklets.
Finally, Scribus is focused on producing printed (or at least, print-worthy) output, and 1.4 adds some
features to assist users at print time. Users can toggle a number of features on or off in the print
previewer, with live updates to reflect the changes. These include anti-aliasing (which shows a
smoother preview, but takes more time), transparency (which can reveal problems with image
backgrounds that need to be masked-out), and spot-color-to-CMYK conversion (which is necessary
when printing spot colors on normal, desktop printers). Both the printing system and PDF exporter
can output registration, crop, bleed marks, and color bars. An extra nice touch is the ability to
simulate how the output will appear to people with four types of color-blindness.
Functional additions
Over the span of its development, the new stable release of Scribus picked up a wealth of individual new features, some, but not all, of which were present at the release of 1.3.5. Among the noteworthy additions are support for more advanced features
in Adobe Photoshop files, support for export to the PDF 1.5 format, advanced typography features, and a large collection of new
swatches, scripts, and templates.
Photoshop files, like it or not, are the most common application-specific raster images used by graphic designers. Although
converting them to a standardized export format like TIFF is usually preferable, there are times when Scribus needs to link
them in as-is. The new release supports multi-layer Photoshop files, and those with embedded clipping paths. PDF 1.5 likewise
introduces some important new features, such as animation transitions (which are useful for creating PDF presentations),
multi-layer documents, and PDFs that embed other PDF or EPS files. On the latter feature, older versions of Scribus could import
such PDF and EPS content only by rasterizing it, with the resulting loss in quality.
Scribus is slowly but surely adding support for advanced font and typesetting features; it is one of the only open source applications to properly do drop caps and discretionary ligatures (i.e. at the option of the user), for example. The new release
expands the typesetting feature set to include "optical margins" and "glyph extension," both of which are subtle techniques to
make the ragged-edge of a text block appear more naturally aligned. Optical margins allow non-letter bits like hyphens and
punctuation to hang off past the end of the line, so that the letters on adjacent rows appear to be lined up with each other.
Glyph extension allows the font renderer to slightly widen individual characters on a line of fully-justified text, rather than only
expanding the spaces between letters. The result is easier to read.
Finally, 1.4.0 ships with an expanded set of design elements like pattern fills, defines more gradient types, and includes more
document template styles. It also includes new scripts, some of which add complex, useful functionality. An example is the
Autoquote script, which you run on a text frame after you have finished editing its contents. The script parses the text and
intelligently converts "dumb" quotation marks into "smart quotes," correctly recognizing and accounting for nested quote styles,
and producing output that is tailored to the punctuation style of the language that you specify.
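The core of such a conversion is deciding whether each straight quotation mark opens or closes a quote. A much-simplified sketch of that decision (the real Autoquote script handles nested quotes and many more language styles):

```python
import re

def autoquote(text, lang="en"):
    # minimal dumb-to-smart quote conversion: a quote at the start of the
    # text or after whitespace/opening brackets opens; all others close.
    # (Illustrative only; not the actual Autoquote script's logic.)
    opening = {"en": "\u201c", "de": "\u201e"}[lang]
    closing = {"en": "\u201d", "de": "\u201c"}[lang]
    text = re.sub(r'(^|(?<=[\s(\[]))"', opening, text)
    return text.replace('"', closing)

converted = autoquote('He said "hello" to "them".')
```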
Now the fun begins
The release notes say that Scribus 1.4.0 closes more than 2000 bugs and feature requests, which is a weighty bill for any
project. It has been a long time since 1.3.5, at which point we were told that the pace of development would pick up, but then again, DTP is a very convoluted task. It covers proper format handling for everything from text to images, all the way from
import of vendor-specific files to printed output, and users expect fine-grained control over every aspect of the positioning and
characteristics of each element. Considering those challenges, it is impressive that Scribus is as full-featured as it is.
The new release is also remarkably stable. I have been running pre-release builds of 1.4.0 for several months, and have yet to
experience a crash or a corrupted file, at least on Linux. Back when 1.3.5 was released, I commented that the first native builds for Mac OS X would potentially have the biggest impact on the project. I still think that is true, given the hold that OS X
has among graphic designers.
Convincing OS X users to try Scribus is a prerequisite, of course, but that is not really a problem with a technical solution. In the
meantime, the project has more work cut out for it. Better support for OpenType features and page impositioning are common
requests, and even though PDF 1.5 support is an important milestone, Adobe has pushed the format through several revisions
since then. Of course, if the target stayed completely still, it probably wouldn't be as much fun to develop.
Comments (7 posted)
Security
Debian and Suhosin
By Jake Edge
February 8, 2012
A recent proposal for Debian to stop shipping PHP with the Suhosin security patches has been controversial. There are a number of reasons behind the proposal (manpower, sticking to the mainline, performance, and more), but others responding in the thread consider the security mitigations that
Suhosin provides to be very important for the web application language given its less than stellar security track record. What
most would like to see is that those protections make their way out of the Suhosin patches and into the PHP mainline, but that
does not seem to be in the offing. In the meantime, users may find that the PHP protections they have depended on will
disappear from Debian.
Debian PHP maintainer Ondřej Surý posted a message to several lists noting that the Suhosin patches have been disabled in the unstable repository; in it, he tries "to summarize the reasons why I have decided to disable suhosin patch". Over time, he has changed his mind about Suhosin, so he is documenting the reasons and looking for other opinions. The Debian PHP team is evidently understaffed, and the work to add in the Suhosin patches (and module) eats up some of that time. Surý is not convinced that the extra time is necessarily well-spent because PHP has "improved a lot".
By shipping only a Suhosin-enabled PHP, Debian is diverging not only from the mainline, but also from what other Linux
distributions do. That means that users coming from other distributions (like Fedora which doesn't ship Suhosin or openSUSE
where it is optional) may run into problems they don't expect. In addition, he said, bugs reported upstream from the Debian
version are often met with a request to reproduce it in vanilla PHP. There are also performance and memory usage impacts from
Suhosin that some find excessive.
Suhosin grew out of the PHP hardening patch that was developed in 2004. The basic idea is to add protections against bugs in
the PHP core (aka Zend Engine) by making proactive changes for things like buffer overflows or format string vulnerabilities. It
also tries to protect against badly written PHP applications, of which there are seemingly countless examples. Suhosin has two
parts, a patch to the PHP core along with a PHP extension that implements additional hardening features.
The core patches are what try to protect against buffer overflows by adding canary values to internal data structures so that the
overflows can be detected. In addition, the pointers to destructors (i.e. functions that are called when an element is freed) for
internal hash tables and linked lists are protected as they can be a vector for code execution if a buffer overflow overwrites
them. Format string vulnerability protection and a more robust implementation of realpath() round out the changes to the
core.
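The canary technique can be illustrated in miniature. This Python sketch stands in for the C-level mechanism, and is only a model of the idea: a random value is placed just past a fixed-size buffer, and any write that runs off the end clobbers it, which a later check detects.

```python
import os

CANARY = os.urandom(8)  # random per-process canary, as hardening patches use

def make_record(size):
    # a fixed-size buffer immediately followed by the canary, mimicking
    # Suhosin's protected internal structures
    return bytearray(size) + bytearray(CANARY)

def unchecked_write(record, data):
    # stands in for a C-level write with no bounds checking
    record[:len(data)] = data

def canary_intact(record, size):
    # the check the hardening code performs before trusting the structure
    return bytes(record[size:size + len(CANARY)]) == CANARY

buf = make_record(16)
unchecked_write(buf, b"A" * 16)      # in-bounds write: canary survives
ok_before = canary_intact(buf, 16)
unchecked_write(buf, b"B" * 20)      # four bytes too many: canary clobbered
ok_after = canary_intact(buf, 16)
```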
The extension provides a whole host of other kinds of protections, largely against dodgy PHP programming practices. For
example it protects against either remote or local code inclusion, which is one of the worst problems that has plagued PHP
applications. It can disable the eval() call, prevent infinite recursion by putting a limit on call depth, stop HTTP response
splitting attacks, filter uploaded files by a variety of conditions, and on and on. While it obviously can't prevent all badly written
PHP from running amok, it's clear that the Suhosin developers have looked at a lot of common problems and tried to address
them.
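Most of these protections are toggled through php.ini. The fragment below is a hedged illustration; the directive names come from Suhosin's documentation, but exact availability and defaults vary by version, so treat it as a sketch rather than a recommended configuration.

```ini
; illustrative Suhosin settings (verify against your installed version)
suhosin.executor.disable_eval = On          ; refuse to run eval()'d code
suhosin.executor.max_depth = 750            ; cap call depth against runaway recursion
suhosin.executor.include.max_traversal = 4  ; limit ../ sequences in include paths
suhosin.upload.max_uploads = 25             ; cap files accepted per upload request
```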
While most of the features are configurable, they are all going to impact performance in one way or another. That's a tradeoff
that many seem to be willing to make, especially in shared hosting facilities where a vulnerability in a particular customer-installed application (or the version of PHP itself) might have serious repercussions for other customers. As the project's "Why?"
If you are using PHP only for your own server and only for your own scripts and applications, then you can judge
for yourself, if you trust your code enough. In that case you most probably don't need the Suhosin extension.
Because most of its features are meant to protect servers against vulnerable programming techniques. However
PHP is a very complex programming language with a lot of pitfalls that are often overseen during the development
of applications. Even PHP core programmers are writing insecure code from time to time, because they did not
know about a PHP pitfall. Therefore it is always a good idea to have Suhosin as your safety net. The Suhosin-Patch
on the other hand comes with Zend Engine Protection features that protect your server from possible
buffer overflows and related vulnerabilities in the Zend Engine. History has shown that several of these bugs have
always existed in previous PHP versions.
But there is an additional reason for dropping Suhosin mentioned in Surý's posting: "Stefan's relationships with PHP upstream (and vice versa) isn't helping very much". He is referring to Stefan Esser, lead developer of Suhosin. Surý's statement is borne out in the thread, as there seems to be a fair amount of animosity between Esser and other posters in the php-devel list. But
beyond the personalities involved, there is a more important question: if the hardening features in Suhosin are truly useful, why
haven't they been pushed upstream?
A footnote in Surý's post refers to a section of the Suhosin FAQ that outlines the project's reasons for staying out of the
mainline. It mentions the performance impact of the patches and that some PHP developers are not interested in adding code to
protect against badly written applications. It also notes that by staying separate from the core, the project can make a
statement about what it sees as deficiencies in security handling in PHP. But there seems to be more to it than that.
Various people have encouraged Esser to create RFCs for the features in the patch that he thinks should go into the mainline,
but he largely dismisses those messages with statements like:
I am not interested in pushing Suhosin into PHP mainline. Why in hell would I want that. If Suhosin gets absorbed
by PHP.net then I would have to start a new project, because there are tons of mitigations I can think up that will
be implemented at some point in time and will never make it into PHP mainline.
With Suhosin existing I am free to implement as many security mitigations I like and do not have to beg the PHP
developers to consider adding something.
Esser clearly believes that all of the Suhosin changes should go into PHP without going through the RFC process. As Stas
Malyshev pointed out, though, that's part of the collaboration process:
Some people call "begging" collaboration and consider it a normal way to develop software with teams bigger than
one person. Of course, being part of the team is completely voluntary. I think it is clear that Stefan is not interested
in doing this. If somebody would want to take on himself working as part of PHP team on getting some features
from Suhosin to PHP, he's welcome.
Malyshev also notes that it is hard for Esser to complain that the PHP developers aren't cooperating when he is unwilling to join
with the project and follow its processes. But Esser is convinced that it would be a waste of his time:
I know for sure that whatever will be the outcome of it, it will be a compromise (if at all) that will not be sufficient
for my personal taste. So in the end from my point of view people have to use Suhosin anyway. Why also waste
time merging 5 features of 100 if I can do something more useful in the time and give my Suhosin users 20 more
new mitigations.
Esser is also concerned that PHP developers are not paying enough attention to security overall. He pointed to a fix that recently
went in to address problems in the HTTP response splitting protection, where, even though it is a security-related fix, there was
inadequate review of two different attempts to fix the flaw. The first fix for the bug (bug #60227) went directly into PHP 5.3
back in November. Esser's complaint is strongly worded, but does point to a real problem:
And that my dear readers is exactly what would happen to the code of Suhosin if it gets merged into PHP. It would
be patched by people that do not know exactly what they are doing. And it would be broken. And if I would not sit
on every single commit to PHP this would happen, because obviously inside PHP no one cares about reviewing
security patches.
And with Suhosin outside of PHP, there is a secondary protection layer around PHP that would have [caught] this
problem: also known as defense in depth.
We've heard those kinds of arguments before in a slightly different context. The grsecurity/PaX patches for the Linux kernel have
been around for quite some time, but have always been maintained as out-of-tree patch sets. The pseudonymous "PaX Team",
who maintains the PaX patches, made many of the same arguments about why those patches have not been submitted. It is
certainly attractive to be able to go your own way, without having to coordinate or convince anyone outside of the project, but it
does have its costs as well.
One of those costs is a reduction in the audience because distributions and others may shy away from non-mainline code. Even
if the maintainer of an out-of-tree patch set does a perfect job (impossible, of course), there is a cost associated with using a
non-standard tool, whether it's a programming language or kernel. That cost is borne by the distributions and some may be
unwilling to start (or continue) bearing that cost.
One thing to note, however, is that Suhosin has been pretty effective at avoiding various PHP bugs along the way. The recent
PHP remote code execution vulnerability found by Esser was thwarted by a suitably configured Suhosin. The HTTP response
splitting problem was also solved in Suhosin long ago. On the other hand, certain (likely buggy) applications cannot run under
Suhosin, which also makes it difficult to adopt.
There is a fundamental problem, at times, connecting upstreams and security researchers. Free software encourages folks to
scratch their own itch and that's just what Suhosin and grsecurity/PaX have done. But if the changes never make it to the
mainline of the project, users may be suffering from bugs that could be avoided. The "all or nothing" approach is not likely to
work with any project, but it is true that security issues are often not given the attention they deserve in those upstreams. It's a
difficult problem to solve, but projects would be well-served by finding a way to cultivate more security-oriented developers into
their communities.
Comments (3 posted)
Brief items
Security quotes of the week
I believe in paying money for products that earn it. I do not believe in a pricing and distribution model that still
thinks it's 1998. And I really don't believe in censoring the internet so that studio and label executives can add a
few more millions onto their already enormous money pile.
Treat your customers with respect, and they'll do the same to you. And that is how you fight piracy.
-- Paul Tassi in Forbes (worth a full read)
VeriSign said its executives "do not believe these attacks breached the servers that support our Domain Name
System network," which ensures people land at the right numeric Internet Protocol address when they type in a
name such as Google.com, but it did not rule anything out.
-- Reuters reports on previously unreported breaches at Verisign in 2010
The service performs a set of analyses on new applications, applications already in Android Market, and developer
accounts. Here's how it works: once an application is uploaded, the service immediately starts analyzing it for
known malware, spyware and trojans. It also looks for behaviors that indicate an application might be misbehaving,
and compares it against previously analyzed apps to detect possible red flags. We actually run every application on
Google's cloud infrastructure and simulate how it will run on an Android device to look for hidden, malicious
behavior. We also analyze new developer accounts to help prevent malicious and repeat-offending developers from
coming back.
-- Google's Hiroshi Lockheimer introduces "Bouncer"
Comments (none posted)
Comments (5 posted)
New vulnerabilities
apache: multiple vulnerabilities

Package(s): apache
CVE #(s):
Created: February 2, 2012
Updated: March 7, 2012
Description:
Alerts:
    Mandriva   MDVSA-2012:012           2012-02-02
    Debian     DSA-2405-1               2012-02-06
    Slackware  SSA:2012-041-01          2012-02-10
    Red Hat    RHSA-2012:0128-01        2012-02-13
    CentOS     CESA-2012:0128           2012-02-14
    Oracle     ELSA-2012-0128           2012-02-14
    Ubuntu     USN-1368-1               2012-02-16
    SUSE       SUSE-SU-2012:0284-1      2012-02-18
    Fedora     FEDORA-2012-1598         2012-02-21
    Red Hat    RHSA-2012:0323-01        2012-02-21
    openSUSE   openSUSE-SU-2012:0314-1  2012-02-28
    Fedora     FEDORA-2012-1642         2012-03-06
    SUSE       SUSE-SU-2012:0323-1      2012-03-06
    Oracle     ELSA-2012-0323           2012-03-09
CVE #(s): CVE-2011-4930
Created: February 7, 2012
Updated:
Alerts:
CVE #(s): CVE-2010-4820
Created: February 3, 2012
Updated: February 8, 2012
Alerts:
    Red Hat  RHSA-2012:0096-01  2012-02-02
    Red Hat  RHSA-2012:0095-01  2012-02-02
    CentOS   CESA-2012:0095     2012-02-03
    CentOS   CESA-2012:0095     2012-02-03
    CentOS   CESA-2012:0096     2012-02-03
    Oracle   ELSA-2012-0095     2012-02-03
    Oracle   ELSA-2012-0095     2012-02-03
    Oracle   ELSA-2012-0096     2012-02-03
CVE #(s): CVE-2012-0038 CVE-2012-0207
Created: February 7, 2012
Updated: March 7, 2012
Alerts:
    Ubuntu   USN-1356-1           2012-02-07
    SUSE     SUSE-SU-2012:0153-2  2012-02-06
    Red Hat  RHSA-2012:0107-01    2012-02-09
    CentOS   CESA-2012:0107       2012-02-09
    Oracle   ELSA-2012-0107       2012-02-10
    Ubuntu   USN-1361-1           2012-02-13
    Ubuntu   USN-1362-1           2012-02-13
    Ubuntu   USN-1363-1           2012-02-13
    Ubuntu   USN-1364-1           2012-02-13
    Red Hat  RHSA-2012:0333-01    2012-02-23
    Ubuntu   USN-1380-1           2012-02-28
    Ubuntu   USN-1384-1           2012-03-06
    Ubuntu   USN-1386-1           2012-03-06
    Ubuntu   USN-1387-1           2012-03-06
    Ubuntu   USN-1388-1           2012-03-06
    Red Hat  RHSA-2012:0350-01    2012-03-06
    Ubuntu   USN-1389-1           2012-03-06
    Ubuntu   USN-1391-1           2012-03-07
    Ubuntu   USN-1394-1           2012-03-07
    CentOS   CESA-2012:0350       2012-03-07
    Oracle   ELSA-2012-2003       2012-03-12
    Oracle   ELSA-2012-2003       2012-03-12
    Oracle   ELSA-2012-0350       2012-03-12
Created: February 2, 2012
Description:
Alerts:
    2012-02-29
CVE #(s): CVE-2012-0830
Created: February 3, 2012
Description: PHP 5.3.9 contained a security update to prevent denial-of-service (DoS) attacks using hash collisions. Mistakes
in the implementation allow attackers to inject and execute code. This has been fixed in PHP 5.3.10. See this
article for details.
Alerts:
    Debian     DSA-2403-1         2012-02-02
    Red Hat    RHSA-2012:0092-01  2012-02-02
    Red Hat    RHSA-2012:0093-01  2012-02-02
    CentOS     CESA-2012:0093     2012-02-03
    CentOS     CESA-2012:0093     2012-02-03
    CentOS     CESA-2012:0093     2012-02-03
    CentOS     CESA-2012:0092     2012-02-03
    Oracle     ELSA-2012-0093     2012-02-03
    Oracle     ELSA-2012-0093     2012-02-03
    Oracle     ELSA-2012-0093     2012-02-03
    Oracle     ELSA-2012-0092     2012-02-03
    Debian     DSA-2403-2         2012-02-06
    Fedora     FEDORA-2012-1262   2012-02-08
    Fedora     FEDORA-2012-1262   2012-02-08
    Fedora     FEDORA-2012-1262   2012-02-08
    Ubuntu     USN-1358-1         2012-02-09
    Slackware  SSA:2012-041-02    2012-02-10
    Fedora     FEDORA-2012-1301   2012-02-14
    Fedora     FEDORA-2012-1301   2012-02-14
    Fedora     FEDORA-2012-1301   2012-02-14
Package(s): polipo
CVE #(s): CVE-2011-3596
Created: February 2, 2012
Updated: February 8, 2012
Description: From the Red Hat bugzilla entry:
    A denial of service flaw was found in the way Polipo, a lightweight caching web proxy, processed certain HTTP
    POST / PUT requests. If polipo was configured to allow remote client connections and particular host was
    allowed to connect to polipo server instance, a remote attacker could use this flaw to cause denial of service
    (polipo daemon abort due to assertion failure) via specially-crafted HTTP POST / PUT request.
Alerts:
CVE #(s):
Created: February 2, 2012
Updated: February 8, 2012
Description:
Alerts:
    Debian  DSA-2401-1           2012-02-02
    SUSE    SUSE-SU-2012:0155-1  2012-02-07
    Ubuntu  USN-1359-1           2012-02-13
Kernel development
Brief items
Kernel release status
The current development kernel remains 3.3-rc2; there have been no 3.3 prepatches released in the last week.
The stable updates picture is somewhat more complicated. 2.6.32.56, 3.0.19, and 3.2.3 were released on February 3 with a
long list of patches. 3.2.4 followed shortly thereafter to fix a build failure introduced in 3.2.3.
On February 6, 3.0.20 and 3.2.5 were released. These were single-patch updates containing the fix to the ASPM-related problem
that would significantly increase power consumption on some systems. This patch has been treated with some care: it seems to
work, but nobody really knows if it might cause behavioral problems on some obscure hardware. That said, at this point, it
seems safe enough to have found its way into a stable update.
Comments (none posted)
POHMELFS returns
LWN wrote briefly about the POHMELFS filesystem in early 2008; thereafter, POHMELFS has
languished in the staging tree without much interest or activity. The POHMELFS developer, Evgeniy
Polyakov, expressed his unhappiness with the development process and disappeared from the
kernel community for some time.
By Jonathan Corbet
February 8, 2012
"Opportunistic suspend" is a heavy-handed approach to power management. In short, whenever nothing of interest is going on,
the entire system simply suspends itself. It is certainly effective on Android devices; in particular, it prevents poorly-written
applications from keeping the system awake and draining the battery. The hard part is the determination that nothing interesting
is happening; that is the role of the Android wakelock/suspend blocker mechanism. With suspend blockers, both the kernel and
suitably-privileged user-space code are able to block the normal suspension of the system, keeping it running for whatever
important work is being done.
Given that suspend blockers do not seem to be headed into the mainline kernel anytime soon, some sort of alternative
mechanism is required if the mainline is to support opportunistic suspend. As it happens, some pieces of that solution have been
in the mainline for a while; the wakeup events infrastructure was merged for 2.6.36. Wakeup events track events (a button
press, for example) that can wake the system or keep it awake. "Wakeup sources," which track sources of wakeup events, were
merged for 2.6.37. Thus far, the wakeup events subsystem remains lightly used in the kernel; few drivers actually signal such
events. Wakeup sources are almost entirely unused.
Rafael's patch set makes some significant changes that employ this infrastructure to support "autosleep," which is another word
for "opportunistic suspend." (Rafael says: "This series tests the theory that the easiest way to sell a once rejected feature is to
advertise it under a different name"). The first of those adds a new file to sysfs called /sys/power/autosleep; writing "mem"
to this file will cause the system to suspend whenever there are no active wakeup sources. One can also write "disk", with the
result that the system will opportunistically hibernate; this feature may see rather less real-world use, but it was an easy
addition to make.
The Android system tracks the time that suspend blockers prevent the system from suspending; that information is then used in
the "why is my battery dead?" screen. Rafael's patch adds a similar tracking feature and exports this time (as
prevent_sleep_time) in /sys/kernel/debug/wakeup_sources.
One little problem remains, though: wakeup sources are good for tracking kernel-originated events, but they do not provide any
way for user space to indicate that the system should not sleep. What's needed, clearly, is a mechanism with which user space
can create its own wakeup sources. The final patch in Rafael's series adds just such a feature. An application can write a name
(and an optional timeout) to /sys/power/wake_lock to establish a new, active wakeup source. That source will prevent
system suspend until either its timeout expires or the same name is written to /sys/power/wake_unlock.
It is easy to see that this mechanism can be used to implement Android's race-free opportunistic suspend. A driver receiving a
wakeup event will mark the associated wakeup source as active, keeping the system running. That source will stay active until
user space has consumed the event. But, before doing so, the user-space application takes a "wake lock" of its own, ensuring
that it will be able to complete its processing before the system goes back to sleep.
Those who have been paying attention to this controversy will have noted that the API for this feature looks suspiciously like the
native Android API. Needless to say, that is not a coincidence; the idea is to make it as easy as possible to switch over to the
new mechanism without breaking Android systems. If that goal can be achieved, then, even if Android itself never moves to this
implementation, it should be that much easier to run an Android user space on a mainline kernel.
And that, of course, will be the ultimate proof of this patch set. If somebody is able to demonstrate an Android system running
with native opportunistic suspend, with similar power consumption characteristics, then it's a lot more likely that this patch will
succeed where so many others have failed. Arranging such a demonstration will not be entirely easy, but, on the right hardware,
it is certainly possible. Linaro's Android build for the Pandaboard might be a good starting point. Until that happens, getting an
Android-compatible opportunistic suspend implementation into the mainline could be challenging.
Comments (none posted)
For a while, things have been quiet on the memory power management front. Recently, though, a new and seemingly
unrelated PASR patch set was posted to linux-kernel by Maxime Coquelin. This version adds no new zones; instead, it works at a
lower level beneath the buddy allocator.
The first step is to boot the kernel with the new ddr_die= parameter, describing the physical layout of the system's memory.
Another parameter (interleaved) must be used if physically-interleaved memory is present on the system. It would, of
course, be nice to obtain this information directly from the hardware, but, in the embedded world where Maxime works, such
mechanisms, if they are present at all, must be implemented on a per-subarchitecture or per-board basis. The final patch in the
series does add built-in support for the Ux500 system in a "board support" file, but that is the only system supported without
boot-time parameters at this early stage.
For each region defined at boot time, the PASR code sets up a pasr_section structure:
struct pasr_section {
    phys_addr_t start;
    struct pasr_section *pair;
    unsigned long free_size;
    spinlock_t *lock;
    struct pasr_die *die;
};
The key value here is free_size, which tracks how many free pages exist within this section. When the kernel allocates a page
for use, it must tell the PASR code about it with a call to:
void pasr_kget(struct page *page, int order);
Pages that are freed should be marked with:
void pasr_kput(struct page *page, int order);
To a first approximation, these functions just increment and decrement free_size. If free_size reaches the size of the
segment, there are no used pages within that segment and it can be powered down. As soon as the first page is allocated from
such a segment, it must be powered back up.
Adding this accounting to the memory management code is just a matter of adding a few pasr_kget() and pasr_kput()
calls to the buddy allocator. Most other allocations in the kernel have their ultimate source in the buddy allocator, so this
approach will catch most of the memory allocation traffic in the system - though it could be somewhat fooled by unused pages
held by the slab allocator. There is no integration with "carveout-style" allocators like ION or CMA, but that could certainly be
added at some point.
One piece that is missing, though, is the mechanism by which a memory section becomes entirely free and eligible for PASR.
The kernel tends to spread pages of data throughout memory, and it does not drop them without a specific reason to do so; a
typical system shows almost no "free" pages at all even if it is not currently doing anything. The intent is to use the feature in
conjunction with a "page cache flush governor," but that code does not exist at this time. There was also talk of setting up a
large "movable" zone and using the compaction code to create large, free chunks within that zone.
The other thing that is missing at this point is any kind of measurement of how much power is actually saved using PASR. That
will certainly need to be provided before this code can be considered for inclusion. Meanwhile, it has the appearance of a
less-intrusive PASR capability that might just get past the roadblocks that stopped its predecessor.
Comments (none posted)
struct ion_allocation_data {
    size_t len;
    size_t align;
    unsigned int flags;
    struct ion_handle *handle;
};
The handle field is the output parameter, while the first three fields specify the length, alignment, and flags as input
parameters. The flags field is a bit mask indicating one or more ION heaps to allocate from, with the fallback ordered
according to which ION heap was first added via calls to ion_device_add_heap() during boot. In the default
implementation, ION_HEAP_TYPE_CARVEOUT is added before ION_HEAP_TYPE_CONTIG. The flags of
ION_HEAP_TYPE_CONTIG | ION_HEAP_TYPE_CARVEOUT indicate the intention to allocate from
ION_HEAP_TYPE_CARVEOUT with fallback to ION_HEAP_TYPE_CONTIG.
User-space clients interact with ION using the ioctl() system call interface. To allocate a buffer, the client makes this call:
int ioctl(int client_fd, ION_IOC_ALLOC, struct ion_allocation_data *allocation_data)
This call returns a buffer represented by ion_handle which is not a CPU-accessible buffer pointer. The handle can only be used
to obtain a file descriptor for buffer sharing as follows:
int ioctl(int client_fd, ION_IOC_SHARE, struct ion_fd_data *fd_data);
Here client_fd is the file descriptor corresponding to /dev/ion, and fd_data is a data structure with an input handle field
and an output fd field, as defined below:
struct ion_fd_data {
    struct ion_handle *handle;
    int fd;
};
The fd field is the file descriptor that can be passed around for sharing. On Android devices the BINDER IPC mechanism may be
used to send fd to another process for sharing. To obtain the shared buffer, the second user process must obtain a client handle
first via the open("/dev/ion", O_RDONLY) system call. ION tracks its user space clients by the PID of the process
(specifically, the PID of the thread that is the "group leader" in the process). Repeating the open("/dev/ion", O_RDONLY)
call in the same process will get back another file descriptor corresponding to the same client structure in the kernel.
To free the buffer, the second client needs to undo the effect of mmap() with a call to munmap(), and the first client needs to
close the file descriptor it obtained via ION_IOC_SHARE, and call ION_IOC_FREE as follows:
int ioctl(int client_fd, ION_IOC_FREE, struct ion_handle_data *handle_data);
Here ion_handle_data holds the handle as shown below:
struct ion_handle_data {
    struct ion_handle *handle;
};
The ION_IOC_FREE command causes the handle's reference counter to be decremented by one. When this reference counter
reaches zero, the ion_handle object gets destroyed and the affected ION bookkeeping data structure is updated.
User processes can also share ION buffers with a kernel driver, as explained in the next section.
Sharing ION buffers in the kernel
In the kernel, ION supports multiple clients, one for each driver that uses the ION functionality. A kernel driver calls the following
function to obtain an ION client handle:
struct ion_client *ion_client_create(struct ion_device *dev,
                                     unsigned int heap_mask,
                                     const char *debug_name);
The first argument, dev, is the global ION device associated with /dev/ion; why a global device is needed, and why it must be
passed as a parameter, is not entirely clear. The second argument, heap_mask, selects one or more ION heaps in the same way
as the ion_allocation_data. The flags field was covered in the previous section. For smart phone use cases involving
multimedia middleware, the user process typically allocates the buffer from ION, obtains a file descriptor using the
ION_IOC_SHARE command, then passes the file descriptor to a kernel driver. The kernel driver calls ion_import_fd(), which
converts the file descriptor to an ion_handle object, as shown below:
struct ion_handle *ion_import_fd(struct ion_client *client, int fd_from_user);
The ion_handle object is the driver's client-local reference to the shared buffer. The ion_import_fd() call looks up the
physical address of the buffer to see whether the client has obtained a handle to the same buffer before, and if it has, this call
simply increments the reference counter of the existing handle.
Some hardware blocks can only operate on physically-contiguous buffers with physical addresses, so affected drivers need to
convert ion_handle to a physical buffer via this call:
int ion_phys(struct ion_client *client, struct ion_handle *handle,
             ion_phys_addr_t *addr, size_t *len);
Needless to say, if the buffer is not physically contiguous, this call will fail.
When handling calls from a client, ION always validates the input file descriptor, client and handle arguments. For example,
when importing a file descriptor, ION ensures the file descriptor was indeed created by an ION_IOC_SHARE command. When
ion_phys() is called, ION validates whether the buffer handle belongs to the list of handles the client is allowed to access, and
returns an error if the handle is not on the list. This validation mechanism reduces the likelihood of unwanted accesses.
[Table: feature-by-feature comparison of ION and DMABUF, covering memory manager role, user-space access control, cross-architecture usage, buffer synchronization, and delayed buffer allocation.]
ION and DMABUF can be separately integrated into multimedia applications written using the Video4Linux2 API. In the case of
ION, these multimedia programs tend to use PMEM now on Android devices, so switching to ION from PMEM should have a
relatively small impact.
Integrating DMABUF into Video4Linux2 is another story. It has taken ten patches to integrate the videobuf2 mechanism with
DMABUF; in fairness, many of these revisions were the result of changes to DMABUF as that interface stabilized. The effort
should pay dividends in the long run because the DMABUF-based sharing mechanism is designed with DMA mapping hooks for
CMA and IOMMU. CMA and IOMMU hold the promise to reduce the amount of carveout memory that it takes to build an Android
smart phone. In this email, Andrew Morton urged the completion of the patch review process so that CMA can get into
the 3.4 merge window.
Even though ION and DMABUF serve similar purposes, the two are not mutually exclusive. The Linaro Unified Memory
Management team has started to integrate CMA into ION. To reach the state where a release of the mainline kernel can boot the
Android user space, the /dev/ion interface to user space must obviously be preserved. In the kernel though, ION drivers may
be able to use some of the DMABUF APIs to hook into CMA and IOMMU to take advantage of the capabilities offered by those
subsystems. Conversely, DMABUF might be able to leverage ION to present a unified interface to user space, especially to the
Android user space. DMABUF may also benefit from adopting some of the ION heap debugging features in order to become
more developer friendly. Thus far, many signs indicate that Linaro, Google, and the kernel community are working together to
bring the combined strength of ION and DMABUF to the mainline kernel.
Comments (6 posted)
Distributions
How long should security embargoes be?
By Jake Edge
February 8, 2012
Security embargoes are something of a double-edged sword. The idea is to coordinate a release date for
a particular security fix between the affected parties (typically distributions in the Linux world). But,
while the embargo is in place, no fixes are being issued which increases the length of time that users
are vulnerable. That could lead to widespread or targeted attacks against vulnerable systems if the flaw is already known, is
leaked, or is rediscovered during the embargo period. The chances of that may be relatively small, but they certainly increase as
a function of the length of the embargo.
It is for those reasons that embargoes in the Linux world are typically short. While proprietary vendors will sometimes sit on a
vulnerability for months, Linux embargoes are generally on the order of a week or two. The rules for the acceptable length of an
embargo are set by the venue where the information is shared, which for Linux distributions is the closed linux-distros mailing
list. There is also an associated distros mailing list, which adds representatives from some of the BSDs. These two lists
have taken the place of the vendor-sec list that was compromised in March 2011.
Up until recently, the guidelines for linux-distros specified that the maximum allowable embargo was 14 days. That means that
anyone reporting a bug to the list should be willing to wait that long to publicly release any information; it also binds list
participants to that deadline. The idea is that anyone who doesn't want to agree to embargoes of up to 14 days shouldn't be
using linux-distros. As part of the discussion of the bug and fix on that list, a coordinated release date (CRD) would be chosen so
that all distributions would release at the end of the embargo period.
List administrator Alexander Peslyak (aka Solar Designer) recently made a change to the guidelines to better reflect reality.
There is an effort to avoid having a CRD that falls on a Monday or a Friday to try to not land on a holiday or other inconvenient
day for administrators who may need to do lots of updates because of the security fix. That meant that the 14 day maximum
sometimes stretched to 19 days, so Peslyak changed the page to reflect that. He also posted to the open oss-security list to
highlight the change.
There were no complaints about the change, though there were some alternatives suggested (10 business days for example),
until Peslyak followed up his message by suggesting that perhaps the 14-19 days be reduced to 7-11 days. Even that length of time
is longer than he would like, "but I am proposing what I think has a chance to be approved by others without making the list a
lot less useful to them".
Peslyak also outlined his reasons for preferring shorter embargo windows. Distributions that have the fix ready more quickly can
get it into the hands of users sooner, rather than waiting a week or more for the embargo to end. Also, it reduces the window of
time in which the vulnerability could be rediscovered or leaked from the list somehow. There are also logistical concerns with
longer embargoes including increased tracking of multiple overlapping embargoes.
Both Marc Deslauriers of the Ubuntu security team and Kurt Seifried of the Red Hat security response team were quick to
disagree with a one-week (more or less) embargo period. Depending on the severity of the bug and the difficulty of the fix, it
may be hard for some distributions to pull together, test, and release fixes in that time frame. In particular, Seifried is concerned
about volunteer-run distributions that may lack the staff to ensure fixes in a shorter period. But Deslauriers makes another
important point:
This means vendors will be keeping information about the vulnerability private until they are confident they are able
to release within a week, at which point they will then share the information with other vendors who will scramble
to get their updates ready.
As a distro, I now have two choices: I sit on vulnerabilities until our own QA and testing is done, at which point I
send them to the list and hope that 7 days is enough for everyone else, or I simply stop using the list for anything
that's more than trivial and contact other vendors directly.
That's the fine line that Peslyak is walking here. If the embargo requirements become too onerous (or seem that way),
distributions may stop reporting to the list, or only report after they have made progress with a fix. But, other lists have other
rules. The closed kernel security list says that it will do embargoes up to 7 days, but it seems to rarely happen that way,
undoubtedly partially due to Linus Torvalds's distaste for embargoes. That policy may also result in distributions delaying reports
to the kernel security list, of course.
It should be noted that these "closed" lists have been fairly leaky at times. In addition to the vendor-sec list compromise, the
kernel security list may have been breached as part of the kernel.org compromise. The linux-distros list now uses PGP-encrypted
emails, which should help unless the host doing the re-encryption to each member's key is breached.
There are also dangers from fixes that are rushed, of course. The recent PHP remote code execution vulnerability came about
because of a rushed fix for a different security hole. There is certainly value in taking some time to get a security fix right, the
only question here is how long that should be, and, for full disclosure types, whether users should be made aware of the
problem while the fix is in progress.
In the end, the 14-19 day window stayed, though a preference for embargoes of less than 7 days was added. It's a difficult
problem, partly because there are so many unknowns. Fixes can be difficult to apply (particularly if they need to be backported)
and to test, especially for multiple distribution releases. But the longer the bug is embargoed, the longer users are at risk
without any ability to mitigate the problem locally while awaiting a fix. That's why the full disclosure camp believes that
information on security holes should be released without delay, so that users are empowered to make their own decisions. That
approach is not very popular with vendors, of course. Embargoes and their length is an issue that will likely be debated again
and again because there isn't an "obviously correct" solution.
Comments (8 posted)
Brief items
Distribution quotes of the week
The solution to "My kernel update doesn't boot" should be "Automatically detect that that happened, give the user
that information and fall back to the old kernel", not "Always show the user a menu that they almost always don't
care about". Solve the actual problem.
-- Matthew Garrett
And so, I intend to wear those shoes proudly. While I don't plan on following in the footsteps of anyone (because,
you know, that would be walking in circles, which isn't highly productive), I do aspire to step with the same spirit
that those before me have: honestly, transparently, communicative-ly (new word!), with humor, and with care.
And I aspire to Get Stuff Done, sans red tape.
-- Robyn Bergeron
Anyhow, while I am looking forward to playing around with Debian Wheezy, the current Debian testing branch, I
can foresee Squeeze and my #! Statler builds remaining on a couple of my boxes for a good while yet. IMHO, the
release still has plenty of legs left in it, even if some people consider it only fit for servers. Troglodytes!
-- Philip Newborough
Comments (1 posted)
Distribution News
Debian GNU/Linux
Distribution newsletters
Debian Project News (February 6)
DistroWatch Weekly, Issue 442 (February 6)
Maemo Weekly News (February 6)
Ubuntu Weekly Newsletter, Issue 251 (February 5)
Comments (none posted)
Development
XBMC 11 "Eden"
February 8, 2012
This article was contributed by Nathan Willis
XBMC, the open source media center, has steadily grown from its humble origins as an X-Box-only replacement
environment into the cross-platform, de facto playback front-end for multimedia content. It merges the file-centric
approach taken by traditional video players with an add-on scripting environment that handles remote web content. The project
is currently finalizing its next major release, version 11.0 (codenamed Eden), which includes updates to the networking and
video acceleration subsystems, broader hardware support, and numerous changes to the APIs available to add-on developers.
Granted, there are plenty of other "media center" projects under active development at the
moment, most of which also employ FFmpeg and can play back the majority of the same
content types. Where XBMC differentiates itself, however, is with its auto-detection of critical
settings and the deep integration it provides across its (wide) range of networking and
content-delivery protocols. For example, XBMC auto-detects the presence of Universal
Plug-and-Play (UPnP) servers and HDHomeRun tuner devices on the network, while too many other media centers
require the user to come prepared with the requisite connection details written down.
Similarly, we no longer live in a world where the bulk of our media content consists of local or LAN-available files. XBMC's add-on
system permits developers to write site- or service-specific extensions that integrate commercial content creators' home-brewed
Flash video delivery sites neatly into the overall interface. Other media center applications work at the task too, but over time
the project has earned itself a reputation for playing host to high-quality add-ons that stay current with changes rolled out on
the sites themselves.
Starting with the last major release, 10.0, XBMC has hosted its own add-on repository,
making such add-ons instantly installable from the main UI. The project wiki maintains a list
of active add-ons, including those not found in the official repository, broken down by
version compatibility and add-on type. It includes the site-specific content add-ons, plus
many that add new functionality (games, torrent support, electronic program guides) and
plug-ins to support new data sources (Icecast streams, MythTV servers, and fetching lyrics,
cover images, and program metadata from around the web). There are a handful of
commercial services that permit XBMC add-ons to use their APIs (such as Grooveshark), but if your interest is primarily in paid
services, most of those companies take steps to make add-ons for XBMC and other open source media centers incompatible
even when they permit in-browser playback.
User-visible changes in Eden
Speaking of add-ons, one of the most original ideas to debut in XBMC 11 is the ability to
roll back add-ons to a previous version. Clearly that feature is not expected to be used
when the update to an add-on is designed to repair breakage with the package's screen-scraping
capabilities, but it may prove popular with users when an add-on update makes a
bad UI decision or implements a questionable new feature. Linux users may associate
package rollback with risky options to forcibly downgrade packages, but XBMC add-ons are
independent of each other. A closer comparison would be to Firefox and Thunderbird's
add-ons, but they offer no such rollback mechanism.
XBMC's user interface is also one of its major selling points - both in its ease-of-use and its ease-of-configurability. The Eden
release introduces a newly renovated default skin named "Confluence," which is the first default skin to use a horizontal
main-menu layout. That decision wastes slightly less screen space, considering the popularity of 16:9 and 16:10 aspect ratios.
But a more practical feature is that users can now add any item they want to the home screen's menu; in previous releases, all
add-ons were relegated to the "Programs" submenu, which was a hassle for heavy add-on users. Naturally, users can also remove
menu items, which I did immediately for the "Weather" entry (whose prominent place on the home menu has always felt
awkward). The new UI also sports multiple selections (when using a mouse), touch or gesture input (presumably for tablet
users), and auto-login (for users running XBMC in a kiosk setting). Users can also search for installable add-ons by keyword from
within XBMC itself, which is far faster than manually scrolling through screen after screen of available add-ons with a remote
control.
The application attempts to present all of the available media resources in a particular category (photos, audio, or video)
together in one place, regardless of origin. This release adds support for five new content source types: NFS shares, Apple Filing
Protocol (AFP) shares, Slingboxes, Apple AirPlay devices, and "disc stubs" of un-mounted DVD or Blu-Ray discs. The Slingbox is
an embedded video streaming device that can be connected to component or HDMI video sources. AirPlay is Apple's brand
name for streaming media over the LAN from iPod and other iOS devices.
The disc stub feature is intended to help users organize their physical media, indexing the contents of discs for searchability;
you would still have to physically load the discs in order to play back their contents. However, the new release also adds support
for treating ISO files as virtual disk volumes, so if lugging the discs back and forth across the room is too taxing, XBMC has you
covered there, too. There are other minor tweaks in this arena, such as the elimination of an artificial distinction between
local "files" and the user's "library." From now on, all the files that XBMC knows about can be browsed together.
Finally, there is a major upgrade to the application's subtitle support, including support for subtitles embedded within MP4 files,
and support for rendering subtitles with the style tags (covering font selection, text color, and italics/boldface) found in several
external subtitle file formats.
New technical features
Arguably the biggest "silent" feature in XBMC 11.0 is full support for the Video Acceleration API (VAAPI), Freedesktop.org's
hardware-agnostic API for GPU acceleration of video decoding. XBMC is officially dropping support for the older, MPEG2-only
XvMC in favor of VAAPI, which supports hardware-accelerated decoding of more formats, on Nvidia, Intel, and ATI graphics
chips. VAAPI is a Linux-only feature, of course; on Mac OS X 10.6 or later, XBMC uses Apple's Video Decode Acceleration
Decoder (VDADecoder) hardware acceleration instead, and on recent Windows systems it uses Microsoft's DirectX Video
Acceleration (DxVA). Linux boxes can also use OpenMAX for hardware video acceleration, which is most useful for systems built
on Nvidia's Tegra2 platform.
The user interface itself uses OpenGL, OpenGL ES, or EGL, so it, too, can be hardware accelerated. Use of GPU acceleration for
both video decoding and the GUI reduces XBMC's CPU requirements considerably, and 11.0 officially introduces support for
several more low-resource systems. On the Linux side, this includes Texas Instruments OMAP4 processors. On the Apple side, it
includes better support for recent iOS 4.x devices (including recent iPads and AppleTVs). But for those who still rely on their
CPUs, regardless of the platform, XBMC can now detect CPU features like SSE and MMX at runtime.
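XBMC's detection is done in C at startup, but for the curious, the idea behind such a runtime check can be sketched in a few lines of Python on Linux: parse the "flags" line of /proc/cpuinfo and pick a code path accordingly. The sample text below keeps the sketch self-contained; the decoder names are purely illustrative.

```python
def parse_cpu_flags(cpuinfo_text):
    """Return the set of CPU feature flags from /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # e.g. "flags : fpu vme mmx sse sse2"
            return set(line.split(":", 1)[1].split())
    return set()

def pick_decoder(flags):
    """Choose the fastest available code path, as a runtime check would."""
    if "sse2" in flags:
        return "sse2"
    if "mmx" in flags:
        return "mmx"
    return "generic"

sample = "processor : 0\nflags : fpu vme mmx sse sse2\n"
print(pick_decoder(parse_cpu_flags(sample)))  # prints: sse2
```

On a real system one would read the text from open("/proc/cpuinfo") instead of a sample string.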
Apart from hardware concerns, this release introduces a revamped JSON-RPC subsystem, which is primarily of interest to add-on
developers. The changes are substantial, as the goal was to make XBMC's implementation compliant with the JSON-RPC 2.0
specification. The add-ons subsystem uses Python scripting, and in another important change for developers, 11.0 drops XBMC's
bundled Python implementation in favor of using the system Python library. This is more in keeping with XBMC's reliance on OS
libraries for other functionality (such as protocol stacks), although the project still uses its own media renderers for images,
video, and audio content. Because the host OS's Python version may differ from the bundled library found in older XBMC
releases, there is a backwards-compatibility mode that add-on developers can invoke.
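For reference, JSON-RPC 2.0 compliance is mostly about the envelope: every request must carry a "jsonrpc": "2.0" member, a method name, optional params, and an id that the response echoes back. A minimal Python sketch of building such a request follows; JSONRPC.Ping is one of the introspection methods XBMC exposes, but the transport details (HTTP POST or a raw TCP socket) are omitted here.

```python
import json

def make_request(method, params=None, req_id=1):
    """Build a JSON-RPC 2.0 request; the "jsonrpc" member is mandatory."""
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        req["params"] = params
    return json.dumps(req)

# A parameterless call and one with params, ready to POST to the server.
ping = make_request("JSONRPC.Ping")
movies = make_request("VideoLibrary.GetMovies", {"limits": {"end": 5}})
print(ping)
```

A compliant server's reply will be an object with the same "jsonrpc" and "id" members plus either a "result" or an "error".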
Add-on authors have three other new features at their disposal from 11.0 on. The first is an XML storage system allowing each
add-on to save user preferences in its own private file. The second is a set of hooks to display progress-meters on screen for the
user's benefit, a feature designed to improve feedback when buffering web video. Finally, previous releases of XBMC allowed for
a web-based control interface, which could be exploited to bring remote-control-like features to arbitrary tablets and mobile
phone browsers in addition to desktops. With the new release, each add-on package can also provide a separate web-interface
of its own, which simply makes more features accessible to users not near an infrared remote.
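A per-add-on XML preference store of the kind described might look something like the following sketch. The element and attribute names here are hypothetical, not XBMC's actual schema; the point is simply a flat key/value file that each add-on reads and writes privately.

```python
import xml.etree.ElementTree as ET

def settings_to_xml(settings):
    """Serialize a flat dict of add-on preferences (hypothetical schema)."""
    root = ET.Element("settings")
    for key in sorted(settings):
        ET.SubElement(root, "setting", id=key, value=str(settings[key]))
    return ET.tostring(root, encoding="unicode")

def xml_to_settings(text):
    """Parse the preferences back into a dict."""
    root = ET.fromstring(text)
    return {el.get("id"): el.get("value") for el in root.findall("setting")}

prefs = {"subtitles": "on", "quality": "720p"}
assert xml_to_settings(settings_to_xml(prefs)) == prefs
```

In practice each add-on would write the serialized string to its own private file rather than keep it in memory.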
The view from 10 feet back
If I were to channel my inner couch potato, I would have to admit that my favorite improvements in XBMC 11.0 are of the
cosmetic variety: namely, the new and greatly improved theme, the ability to customize the home menu, and the unification of
media content regardless of the source. That may sound superficial, but in my experience, building a remote-control-friendly UI
is one of the highest hurdles in open source software development: many try; few, if any, succeed. XBMC may fare better than the
Linux-only media centers because it has a substantial following among Windows and OS X users, thus giving it exposure to far
more testing and feedback. But whatever the reason, in practical usage XBMC comes the closest to feeling like a genuine OEM
consumer electronics product.
Digging deeper, though, the VAAPI support is an important milestone, too. VAAPI has been a long time coming, but in 2011 and
now 2012, it appears to be hitting the mainstream. Low-power set-top boxes are certainly where VAAPI makes the most sense;
at SCALE 10X in January, one of the most talked-about booths was a demonstration of XBMC 11.0 running on a $25
Raspberry Pi board. There are already plenty of niche commercial products built on top of XBMC, but when HD video is available
on a $25 board, it will be hard for Apple and Microsoft to compete.
To see the impact of the changes to the add-on development APIs, we may have to wait, but the project's add-on community
has earned the benefit of the doubt. Sadly, the Linux builds of the most recent XBMC Eden have yet to land: Beta 1 is available
for download, but Beta 2 is currently provided only for Windows and Apple systems. The final release does
not have a due date yet, but the hold-up is reported to be with the "XBMC Live" live CD version, which is getting a rework to be
more compatible with the upstream Ubuntu releases on which it is based. Given the pace of the first two beta releases, though,
the final release should not be far behind.
Brief items
Quotes of the week
In the end we all agree GCC does something nasty (and I would call it a bug even), but any solution we find in GCC
won't be backportable to earlier releases so you have to deal with the GCC bug for quite some time and devise
workarounds in the kernel. You'll hit the bug for all structure fields that share the largest aligned machine word
with a bitfield (thus the size depends on the alignment of the full object, not that of the struct containing the
bitfield in case that struct is nested inside another more aligned one). This situation should be easily(?) detectable
with sparse.
-- Richard Guenther
In the embedded market, the biggest problem is that the distributions of BusyBox fail to include the "scripts to
control compilation and installation of the executable", which the GPLv2 requires.
As such, users who wish to take a new upstream version of BusyBox and install it on their device are left without
any hope of doing so. Most embedded-market GPL enforcement centers around remedying this.
Indeed, enforcement has brought some great successes in this regard. As I wrote on in my blog post on this
subject (at http://sfconservancy.org/blog/2012/feb/01/gpl-enforcement/ ), both the OpenWRT and SamyGo
firmware modification communities were launched because of source releases yielded in past BusyBox enforcement
actions. Getting the "scripts to control compilation and installation of the executable" for those specific devices are
what enabled these new upstream firmware projects to get started.
-- Bradley Kuhn
Comments (none posted)
Announcing fulltext
Fulltext is a Python library that can extract text from binary files. "Fulltext is a library that makes converting various file formats
to plain text simple. Mostly it is a wrapper around shell tools. It will execute the shell program, scrape it's results and then
post-process the results to pack as much text into as little space as possible."
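The wrap-scrape-post-process pattern that description refers to can be sketched in a few lines. The function below is a hypothetical illustration on a POSIX system, not fulltext's actual API; it uses cat as a stand-in for the format-specific converters (such as pdftotext) that the real library would dispatch to.

```python
import re
import subprocess
import tempfile

def extract_text(path, tool=("cat",)):
    """Run a converter tool on a file, scrape its stdout, and post-process it.

    fulltext maps file formats to external shell tools; "cat" is used
    here only so the sketch runs without extra dependencies.
    """
    result = subprocess.run(list(tool) + [path], capture_output=True,
                            text=True, check=True)
    # Post-process: collapse whitespace runs to pack the text densely.
    return re.sub(r"\s+", " ", result.stdout).strip()

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("hello\n\n   world\n")
print(extract_text(f.name))  # prints: hello world
```

Swapping in a different tool tuple (for example, ("pdftotext", "-")) is how such a wrapper would handle other formats.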
Full Story (comments: none)
Five open source hardware projects that could change the world (The H)
Here's a lengthy survey of open hardware projects in The H. "The price/performance of a general purpose computer built using
FPGAs wouldn't be great when compared with commodity gear, but the technology excels in many niche and specialist
applications, such as in areas of computing that make use of dedicated hardware to bring high performance to tasks such as
signal processing, encryption and networking. Since you can program many hardware paths in an FPGA they are well suited to
jobs that can be broken down and processed in parallel, and some of the more powerful devices pack millions of logic blocks and
have a transistor count well into the billions, with a blisteringly fast serial bandwidth that is measured in terabits/second."
Comments (none posted)
Luca Tringali has posted an interview with Ton Roosendaal. "As you all may know, he is the creator of Blender and the head of
the Blender Institute. Anyway, for me, the most important idea he developed is the "open movie" project. It introduces a
completely new concept of creating an artistic work, where the public can be an active part during the production and
especially after it, possibly improving the work itself or creating another version (if it's a movie, you can create your own finale).
Basically, it's the power of free open source software ported to art, especially cinematographic art." (Thanks to Paul Wise).
Comments (none posted)
Page editor: Jonathan Corbet
Announcements
Brief items
Google Summer of Code 2012 is on
The 2012 edition of the Google Summer of Code has been announced. "This will be the 8th year for Google Summer of Code, an
innovative program dedicated to introducing students from colleges and universities around the world to open source software
development. The program offers student developers stipends to write code for various open source projects with the help of
mentoring organizations from all around the globe. Over the past seven years Google Summer of Code has had 6,000 students
from over 90 countries complete the program. Our goal is to help these students pursue academic challenges over the summer
break while they create and release open source code for the benefit of all."
Comments (none posted)
New Books
"Open Advice" from 42 free software contributors
"Open Advice" is a new book consisting of essays from some 42 community authors; it is available in print form or downloadable
under the CC-BY-SA license. "This book is the answer to 'What would you have liked to know when you started contributing?'.
The authors give insights into the many different talents it takes to make a successful software project, coding of course but also
design, translation, marketing and other skills. We are here to give you a head start if you are new. And if you have been
contributing for a while already, we are here to give you some insight into other areas and projects."
Comments (5 posted)
Articles of interest
FSFE Newsletter - February 2012
The February edition of the Free Software Foundation Europe Newsletter looks at freeing your cell phone, learning to program,
Document Freedom Day, the 2012 Fellowship election, and "I love Free Software" Day.
Full Story (comments: none)
Mueller: Apple's iterative approach to FRAND abuse is not for the faint of heart
Florian Mueller's update on the patent battles between Apple, Motorola, and Samsung has a clear slant, but it is still a
worthwhile look at how the mobile patent wars may be settled. There is little cheer for the free software world here. "They hope
that the disruptive impact of such injunctions on Apple's business will force Apple to grant them a license to all of its
non-standards-related patents (such as its multitouch inventions) as part of a broader settlement. In other words, they want to
use FRAND patents to reach a state of 'mutually assured destruction', in which the notion of intellectual property would become
meaningless between large players that have a critical mass of patents (it would merely serve to exclude new entrants without
large patent portfolios)."
Comments (49 posted)
come together, but it's finally slotted in and our distribution partner has got the necessary infrastructure settled. I'll lift the veil
off of the pre-order and our distribution strategy when it goes live."
Comments (10 posted)
Upton: Raspberry Pi: Two things you thought you weren't going to get
Liz Upton reports that Raspberry Pi boards will be available by the end of the month. "There's another big piece of news today.
We've been leaning (gently and charmingly) on Broadcom, who make BCM2835, the SoC at the heart of the Raspberry Pi, to
produce an abbreviated datasheet describing the ARM peripherals in the chip. If you're a casual user, this won't be of much
interest to you, but if you're wanting to port your own operating system or just want to understand our Linux kernel sources,
this is the document for you." (Thanks to Paul Wise)
Comments (19 posted)
Upcoming Events
Events: February 9, 2012 to April 9, 2012
The following event listing is taken from the LWN.net Calendar.
Oslo, Norway
February 10: Linux Vacation / Eastern Europe Winter session 2012 (Minsk, Belarus)
February 12: Paris, France
London, UK
Montreal, Canada
March 2 to March 4: BSP2012 (Mönchengladbach, Germany)
March 2 to March 4: Cambridge, UK
March 5 to March 7: Erlangen, Germany
March 6 to March 10: CeBIT 2012 (Hannover, Germany)
March 7 to March 15: PyCon 2012
March 10 to March 11: Perth, Australia
March 10 to March 11: Copenhagen, Denmark
March 16 to March 17: Clojure/West
March 17 to March 18: Chemnitz, Germany
March 23 to March 24
March 24 to March 25: LibrePlanet 2012
March 26 to March 29: EclipseCon 2012
March 28 to March 29
April 3 to April 5: LF Collaboration Summit
April 5 to April 6: Android Open
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol