
MAY/JUNE 2006
WWW.EOSJ.COM
A PUBLICATION
Focusing on Open Source Strategies in the Enterprise
Data Connectivity
for Business-Critical
Systems in Mixed
Source
Environments
Evaluating Open Source
Products for the Enterprise
Open Source Network Backup
Identifying & Overcoming
Security Concerns
Troubleshooting Linux Firewalls
[Advertisement: Compiere ERP & CRM, open source business applications. Training sessions: August 21-25, 2006, Portland, Oregon; November 6-10, 2006, Miami, Florida. www.compiere.org]
8 Data Connectivity for Business-Critical Systems in Mixed Source
Environments
By Mark Troester
12 Apache Incubator Project: An Entry Path to the ASF
By Vikram Gaur
14 What You Need to Know About the Differences Between Free
Software and Open Source Software
By Mark Post
18 Open Source in France: Insight for Other Countries
By Mathieu Poujol
21 Solution Showcase: BZ Makes a Change to Improve Results
By Denny Yost
23 Troubleshooting Linux Firewalls: Local Firewall Security
By Michael Shinn & Scott Shinn
28 Solution Showcase: StepUp Commerce Achieves and
Delivers Business Agility
By Denny Yost
29 Bacula: Open Source Network Backup
By Adam Thornton
33 Strategies for Success With Open Source Projects: How to Effectively
Evaluate Open Source Products for the Enterprise
By Edmon Begoli
MAY/JUNE 2006
VOLUME 2 | NUMBER 3
WWW.EOSJ.COM
CONTENTS | FEATURES
2 | ENTERPRISE OPEN SOURCE JOURNAL | MAY/JUNE 2006
37 Identifying and Overcoming Security Concerns in Today's Complex
Environment
By Aruneesh Salhotra
40 Solution Showcase: Altralux Implements ERP System in
Less Than 90 Days
By Denny Yost
42 Linux Server Performance Monitoring: Technical Tips for Better
Application, System, and Network Monitoring
By Tom Speight
45 JBoss Innovation Award
By Rebecca Goldstein
6 Publisher's Page
7 Source Tree: Adopting Open Source Software: Literally
By Michael Goulde
11 Open Standards: Taking Back the Word Open
By Jim Zemlin
16 Legal Issues: Living With Software Patents in an Open Source World
By Diane M. Peters
22 Open Issues: Open Source Service and Support: The Great Debate
By Mark Driver
27 Open Systems: Professor Wirth Was Right, You Know...
By Larry Smith
32 The Devilish Advocate: How Much Open Source Software Do You Use?
By Robert Lefkowitz
36 Open Mind: Securing Windows With Free Software
By Howard Fosdick
41 Open for Business: The Blurring Lines Between OSS and SOA:
Re-Inventing the Future of Software
By Nathaniel Palmer
44 Perspectives on CAOS: The Upsell Opportunity
By Raven Zachary
CONTENTS | COLUMNS
[Advertisement: JBoss, Inc. Charts show JBoss ranked #1 ahead of IBM, BEA, and Oracle: #1 in support services, according to Velociti Partners; #1 in J2EE application server usage, according to BZ Research.]
PROFESSIONAL OPEN SOURCE FROM JBOSS INC.
JBOSS: ROLL IT OUT
JBoss pioneered the Professional Open Source model that combines the cost efficiencies of open source software with the accountability and expert support services expected from enterprise software vendors. The result? Several JBoss Professional Open Source products, including JBoss AS, Hibernate, and Apache Tomcat, have now obtained the #1 position in their markets.
At JBoss, we understand that our success relies solely upon our ability to deliver expert support services. Whether you are trying our JEMS products or rolling them out across your enterprise, choose our industry-leading Professional Support, Consulting, and Training support services to help you every step along the way.
To learn more, please visit our website: www.jboss.com or contact us directly at sales@jboss.com.
Publisher
BOB THOMAS
bob@eosj.com
972.354.1024

Associate Publisher
DENNY YOST
denny@eosj.com
972.354.1030

Managing Editor
AMY B. NOVOTNY
amy@eosj.com
352.394.4478

Online Services Manager
BLAIR THOMAS
blair@eosj.com

Copy Editors
DEAN LAMPMAN
PAT WARNER

Art Director
MARTIN W. MASSINGER
martin@eosj.com

Production Manager
KYLE RICHARDSON
kyle@eosj.com

Advertising Manager
BLAIR THOMAS
blair@eosj.com
972.354.1025

Advertising Administrator
DENISE T. CULLERS
The editorial material in this magazine is accurate to the best of our knowledge. No formal testing has been performed by Enterprise Open Source Journal or Thomas Communications, Inc. The opinions of the authors do not necessarily represent those of Enterprise Open Source Journal, its publisher, editors, or staff.

Subscription Rates: Free subscriptions are available to qualified applicants worldwide at www.eosj.com.

Inquiries: All inquiries should be sent to:
Enterprise Open Source Journal
9330 LBJ Freeway, Suite 800
Dallas, Texas 75243
Voice: 214.340.2147
e-Mail: info@eosj.com

All products and visual representations are the trademarks/registered trademarks of their respective owners.

© Thomas Communications, Inc. 2006. All rights reserved. Reproductions in whole or in part are prohibited except with permission in writing.

Enterprise Open Source Journal Article Submission Guidelines: Enterprise Open Source Journal accepts submission of articles on subjects related to open source. Enterprise Open Source Journal Writers' Guidelines are available by visiting www.eosj.com. Articles and article abstracts may be sent directly via e-mail to managing editor, Amy Novotny, at amy@eosj.com.
BOB THOMAS, PUBLISHER, EDITOR-IN-CHIEF

In each issue of Enterprise Open Source Journal we feature columns by a world-class group of Open Source Software (OSS) thought leaders. After reading the columns they submitted for this issue, I decided they are so good, I need to draw particular attention to them here.
OSS is in a period of change and transition, as is evidenced by the columns in this issue. The following is my transparent attempt to encourage you to read each column by providing a teaser from each one.
Mark Driver: Open Source Service and Support: The Great Debate
If open source efforts are to truly challenge closed source technologies in open market competition, then the traditional technology adoption cycle must apply as well. Otherwise, open source will be relegated to the exclusive bastion of bearded men in stretchy sweat pants and sandals who spend 18 hours a day in front of a computer monitor.
Howard Fosdick: Securing Windows With Free Software
Pew Research states that 43 million Internet users in the U.S. have malware on their PCs. Meanwhile, neither the U.S. government nor the international community has addressed the problem. It's up to you, the IT professional, to manage the problem yourself.
Michael Goulde: Adopting Open Source Software: Literally
The fact that the creators of OSS don't provide warranty and indemnification doesn't have to be the risk factor that it appears to be.
Robert Lefkowitz: How Much Open Source Software Do You Use?
If I'm using Microsoft Windows XP or Wolfram Mathematica, and we know these products use OSS, can it be said that I'm using OSS? The answer is "No."
Nathaniel Palmer: The Blurring Lines Between OSS and SOA: Re-Inventing the Future of Software
Open SOA offers one of the first real opportunities for competing to enable business agility and innovation.
Diane Peters: Living With Software Patents in an Open Source World
The simple fact remains that all users and developers of software, whether proprietary or open source, must live with software patents and our current system for issuing them, however flawed, for the foreseeable future.
Larry Smith: Professor Wirth Was Right, You Know
Did we really save money developing with C instead of Captain
Bondage Pascal? Heck, no. Not by billions of dollars.
Jim Zemlin: Taking Back the Word "Open"
The word "open" is one of the most widely used and abused terms in computing. Vendors have co-opted the term for their own marketing purposes, slapping "open" on everything from business and development models to interface design.
Read on; I hope you enjoy this issue of EOSJ!
Publisher's Page
Source Tree BY MICHAEL GOULDE
Companies wary of using Open Source Software (OSS) often pose a series of questions that can be boiled down to, "If something happens, who can I sue?" Sometimes it's their compliance officers or corporate legal counsel, and sometimes their purchasing or contracting officers asking the question.
The issue for them is that open source licenses disclaim warranty to protect developers who contribute to a project and to encourage participation in projects. Indemnification against legal suit is generally provided only by projects that have a commercial sponsor, and even then indemnification isn't always provided in ways that corporate customers would like to see: free replacement code rather than financial compensation. The fact that the creators of OSS don't provide warranty and indemnification doesn't have to be the risk factor that it appears to be. The openness and freedoms associated with open source allow the code to be taken under the wing of any party and provided with the necessary protection. In the case of Linux, this has been done by a variety of parties, including Red Hat, IBM, and Novell. These and other suppliers can perform this function for other open source projects as well, reducing the risk for customers.
Indemnification Can Be a Differentiator for Suppliers
Commercial entities supporting OSS have found that offering indemnification is one way they can create business value beyond the value in the open source code itself. Commercial open source companies, such as JBoss, MySQL and others, provide indemnification for the software whose intellectual property they own. For example, HP is offering indemnification for Linux running on its hardware as a market differentiator, even though the company doesn't own the intellectual property, because the business case was there: the cost of vetting the code is offset by the potential revenue stream generated by offering indemnification.
These commercial entities can provide warranty and indemnification by treating the software as if they were developing it in-house. Any company that develops software as a business engages in extensive testing, intellectual property risk assessment, and other risk-mitigating actions to reduce the exposure to risk they might face in the marketplace. OSS would become just another product, albeit one that was developed externally. By necessity, only the most popular OSS would receive this consideration on the part of suppliers because of the economics involved.
Customers Can Play a Similar Role
There are thousands of small open source projects that carry no form of risk mitigation, either from the project or from a third party. The demand for one of these projects is too small for a major supplier to be willing to put forth the effort and expense to review the software sufficiently to warranty and indemnify it. How can these projects be made more acceptable to risk-averse companies?
The solution lies in the hands of customers. Consider the fact that open source code used within an enterprise has characteristics in common with code developed internally (or by consultants working under work-for-hire contracts). The source code is readily available, as it is with internally developed software. There is no need to safeguard against issues with copyright or patent infringement for internally developed software that isn't sold outside.
Then why shouldn't companies that develop software for internal use treat open source code as if it were internally developed? It should be possible to subject open source code that's being considered for inclusion in enterprise application development to the same testing and certification processes as internally developed code to ensure quality and freedom from security vulnerabilities. If the open source code meets corporate software standards, then it can be managed like any other internally developed code. And if it is going to be included in a package that will be re-distributed, it can be put through the same review as internally developed code being re-distributed. Using third-party tools, such as those from Black Duck and Palamida, the code can be assessed for potential open source license and other copyright infringements. The risks associated with the open source code would be at the same level as internally developed code from the perspective of enterprise risk management.
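As a loose illustration of the simplest layer of such an audit, the sketch below greps source text for well-known license phrases so flagged files can be routed to human review. The class name and marker list are invented for this example, and this is not how Black Duck or Palamida actually work; commercial tools match code fingerprints against large component databases rather than searching for phrases.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy license-marker scanner: flags source text containing phrases that
// commonly appear in open source license headers. Intended only to show
// the shape of a license audit step, not a production technique.
public class LicenseScan {

    // Marker phrase -> short license label.
    private static final Map<String, String> MARKERS = new LinkedHashMap<>();
    static {
        MARKERS.put("GNU General Public License", "GPL");
        MARKERS.put("GNU Lesser General Public License", "LGPL");
        MARKERS.put("Apache License", "Apache");
        MARKERS.put("Mozilla Public License", "MPL");
        MARKERS.put("Redistribution and use in source and binary forms", "BSD-style");
    }

    // Returns the labels of all licenses whose markers appear in the text.
    public static List<String> detect(String sourceText) {
        List<String> found = new ArrayList<>();
        for (Map.Entry<String, String> e : MARKERS.entrySet()) {
            if (sourceText.contains(e.getKey()) && !found.contains(e.getValue())) {
                found.add(e.getValue());
            }
        }
        return found;
    }

    public static void main(String[] args) {
        String header = "/* Licensed under the Apache License, Version 2.0 */";
        System.out.println(detect(header));       // [Apache]
        System.out.println(detect("int x = 1;")); // []
    }
}
```

A real review pipeline would walk the source tree, scan every file this way, and route any hit to legal review before the code is admitted to the internal repository.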
Potential Drawbacks Exist
The biggest drawback to this approach is the cost of testing, certification and maintenance. For very small projects, this may be trivial. But adopting a significantly sized project may require a major investment, with an uncertain return. Companies also have to make a decision about whether they will maintain the code themselves on an ongoing basis and risk creating a forked and possibly incompatible version, or track the community's work. This latter approach, a kind of "open adoption," maintains the ties back to the project's community, and the code retains awareness of its lineage. Perhaps a community of users can agree to adopt a project jointly and cooperate on reviewing the project for internal adoption. This might be considered the "It Takes a Village" approach to OSS.
Have you successfully used this
approach at your company? If so, drop me
an e-mail at mgoulde@forrester.com.
Michael Goulde is a senior analyst with Forrester Research.
e-Mail: mgoulde@forrester.com
Website: www.forrester.com
Adopting Open Source Software: Literally
Data Connectivity for Business-Critical Systems in Mixed Source Environments
By Mark Troester
Open source adoption is increasing in today's enterprises. Driven by the success of commercially backed open source technologies such as Linux and JBoss, even conservative IT departments have moved past the pilot projects stage. Because of this proliferation, IT architects face the challenge of integrating open source technologies with the existing technology stack, creating a blended environment.
This poses unique challenges as IT architects, managers and developers wrestle with integration and deployment issues. Although migration from a purely commercial technology stack to a mixed source stack can be difficult in the testing and pilot stages, it also can help IT organizations achieve the blended, best-of-breed enterprise architecture that incorporates the best available technologies at all levels, while reducing cost and risk.
Such projects are often meticulously planned, with specific technological and business objectives mapped along budgetary constraints and timelines. However, IT architects often overlook a crucial, foundational layer: the data connectivity that enables the fast, reliable flow of information.
Choosing the right data connectivity components for a mixed source, heterogeneous and multiple-database environment is critical for enterprise-class performance, reliability and functionality. This is especially true for architects and managers designing and developing large-scale strategic systems based on a mixed-source paradigm.
The Reality of Mixed Source
Mixed source is a success in today's environments and will become increasingly prevalent in IT organizations as open source technologies become more viable. Most of this adoption is predicted to occur in software infrastructure. According to Gartner, by 2010, the Global 2000 IT organizations will see open source as a viable option for 80 percent of infrastructure software investments. Open source has become increasingly successful in mainstream enterprise environments, even for core applications, because of improvement in the product and supporting services.
One example is the rapid uptake of enterprise Linux, with the technology's market share continuing to grow among servers, according to industry analysts. Unix revenue continues to decrease while the role of Linux servers continues to expand to take on a broader array of application and technical workloads. Adoption of Linux has been spurred by support from established players such as IBM and HP. The technology is supported by virtually every software category, from Enterprise Resource Planning (ERP) applications such as SAP to databases such as Oracle and DB2.
Commercial acceptance and support have made technical integration easier and safer. This principle is seen throughout the technology stack with the rise of commercially based, highly supported open source components at all levels of the architecture. Open source technologies, such as JBoss, Apache HTTP Server, Apache Tomcat and MySQL, are backed by commercial ventures or consortia with the financial backing and management infrastructure necessary to sustain ongoing research and development, technical and legal support, and all the benefits of an open source approach to development. From the operating system and application server level down to the database level, companies are given several technological alternatives and are making their implementation decisions based on criteria including cost, performance, support, and more. For example, open source technologies have created alternative pricing structures, including "pay as you go" models, which several commercial vendors also support.
Several other factors are driving the increased presence of mixed source environments. First, via acquisition, established Independent Software Vendors (ISVs) such as Oracle and IBM are unveiling models where they offer open source alternatives for lower-end deployments while they upsell their traditional offerings for larger, mission-critical deployments, increasing heterogeneity between open source and traditionally licensed software. Second, commercial vendors are providing support for mixed source stacks, such as Novell's recent release of its SuSE Linux Server, JBoss application server and Oracle's proprietary database running on top of HP blades.
Organizations also are leveraging standards-based open source technology in development and testing environments while deploying on a traditional licensed technology. For example, in the Java space, organizations may develop applications using open source containers such as JBoss and Tomcat, while deploying them on commercial platforms such as BEA WebLogic or IBM WebSphere. Development organizations also are using a mix of open source and commercial software, including tools such as Eclipse, JUnit, Nagios and CVS/Subversion alongside, or in combination with, products such as IBM Rational, Mercury Interactive, BMC and Visual SourceSafe, to develop, test, deploy, and manage their applications.
Mixed source systems are quickly becoming a widespread reality. As these systems become proven and integrated, core applications will be deployed on these heterogeneous environments. As IT architects plan deployments of strategic systems, the role of industry-leading, best-available components, especially those technologies that can affect the performance of crucial applications the most, becomes important to consider.
Critical Systems and Architectural Choices
A critical business system is one or more sets of software applications that are core to the most important processes of an organization and impact the cost, revenue and risk structures of the business. These are systems that generate revenue, ensure regulatory compliance, speed application development and time-to-market, contribute to operational control, enhance customer and partner loyalty, and provide a competitive advantage. These also are systems that rely heavily on accessing crucial data housed in underlying databases.
Critical systems have zero tolerance for delays or errors related to accessing, processing, and storing data. Critical systems require seamless, bulletproof components that are selected based on technical merits, Return on Investment (ROI), Total Cost of Ownership (TCO) considerations, licensing, and support.
For the data connectivity layer, these considerations are tied to high-performing critical systems and applications. Often, architects assume network and database tuning will suffice. However, independent research shows that in a well-tuned, efficiently coded application, a significant amount of time (up to 75 percent) related to database activity is spent in the database connectivity layer. Database connectivity components can have a substantial impact on application performance and supporting network and server resources. The negative impact of inferior data connectivity on the critical application can be avoided by considering important criteria for superior data access.
Top Data Connectivity Considerations for Mixed Source Environments
Data connectivity solutions range from proprietary, locked-in components to highly interoperable products that communicate with the databases at a low-level network interface. The top issues to consider when looking at optimal data connectivity are:
Architecture: Choose a data connectivity solution architected to communicate directly with the database engine. Avoid any option that relies on the native database clients, as this adds a level of abstraction that hampers performance, reliability and stability. Architectures that communicate directly with the database engine eliminate database client version control issues and the need to deploy and maintain the database client, cutting the cost of the application.
Functional breadth: As upgrades and improvements are added to both databases and Application Program Interfaces (APIs) alike, data connectivity components must be able to support new features and functionality and provide backward compatibility with previous products.
Performance, reliability, scalability, and manageability: Application performance is tightly coupled to the database drivers. Avoid database connectivity products that require client libraries, use disk caching rather than in-memory processing, or take excessive network trips in communicating with databases. A highly manageable product also will help contain costs and free up valuable staff time.
Standards support: Beware of data connectivity components that implement proprietary extensions to standards. These extensions limit interoperability, force vendor lock-in and reduce application flexibility.
Technical support: It's crucial to have a solid, expert network of support. Choose a data connectivity product backed by reputable, live support from professionals who specialize in database connectivity. Many open source data connectivity options, for example, aren't production-proven for reliability and performance, and also lack the technical and legal support required.
The organizational skillset: IT shops with limited resources can't afford to devote valuable and costly staff time to developing, testing and re-working data connectivity solutions, nor is this capability always available in small organizations. Look for a company that offers unparalleled expertise in data connectivity, with solutions backed by substantial support capabilities.
Future-proof your investments: Choose a data connectivity architecture that simplifies portability among databases and platforms to get the most out of your investments. Standardized data access components share a common architecture that makes it easy to change or upgrade the underlying database infrastructure, reducing the cost and complexity of supporting multiple database versions, especially for future growth considerations. Added features such as an SQL leveling capability will allow flexibility and interoperability for many database types.
License flexibility: Factor in your specific licensing requirements when selecting a database connectivity product. Your selection should reduce the overall risk to your organization from a liability perspective, while providing a flexible licensing model that fits your organization's needs.
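The SQL leveling capability mentioned in the considerations above can be sketched with the standard JDBC escape syntax: an application writes a database-neutral call such as {fn UCASE(name)}, and the connectivity layer rewrites it into the target database's native spelling. The class name, two-database dialect table, and two supported functions below are invented for illustration; a production driver covers the full escape vocabulary and many more databases.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of "SQL leveling": rewrite database-neutral {fn ...} escapes
// into the target database's native function spellings.
public class SqlLeveler {

    // Matches {fn NAME(args)} escapes, capturing the name and arguments.
    private static final Pattern FN_ESCAPE =
            Pattern.compile("\\{fn\\s+(\\w+)\\(([^)]*)\\)\\}");

    // Target-specific spellings, keyed by database, then by escape name.
    // Tiny illustrative table; real drivers carry far larger mappings.
    private static final Map<String, Map<String, String>> DIALECTS = Map.of(
            "oracle", Map.of("UCASE", "UPPER(%s)", "IFNULL", "NVL(%s)"),
            "sqlserver", Map.of("UCASE", "UPPER(%s)", "IFNULL", "ISNULL(%s)"));

    public static String level(String sql, String database) {
        Map<String, String> dialect = DIALECTS.get(database);
        if (dialect == null) {
            return sql; // unknown target: pass the SQL through unchanged
        }
        Matcher m = FN_ESCAPE.matcher(sql);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String template = dialect.get(m.group(1).toUpperCase());
            String replacement = (template == null)
                    ? m.group()                            // unknown escape: keep as-is
                    : String.format(template, m.group(2)); // substitute the arguments
            m.appendReplacement(out, Matcher.quoteReplacement(replacement));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        String sql = "SELECT {fn UCASE(name)} FROM emp WHERE {fn IFNULL(dept, 'NONE')} = 'IT'";
        System.out.println(level(sql, "oracle"));
        System.out.println(level(sql, "sqlserver"));
    }
}
```

The payoff is the one described in the article: the application carries a single neutral statement, and switching the underlying database changes only the dialect selected at connection time, not the application code.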
Summary
Failure of IT leads to failure in meeting business objectives, so high-stakes applications warrant best-in-breed solutions for the architecture, at all levels. With the proliferation of open source components, IT organizations can now weave together a mixed-source architecture based on an optimal combination of open source and traditionally licensed software components. Data connectivity has a great impact on the performance of strategic applications, and choosing the right products for these systems is essential.
Consider data connectivity that enables maximum interoperability and portability, especially for enterprisewide applications that rely on multiple databases and platforms. Architects and developers should work in concert to consider the comprehensiveness and functionality of data connectivity solutions, including performance, scalability, quality and reliability. IT organizations also need to consider interoperability and portability issues. Broad-based and expert support for data connectivity solutions will help the enterprise deal with issues during initial project phases and for ongoing maintenance.
When choosing a data connectivity product, consider companies with a holistic approach to developing database drivers. These companies:
Participate in standards development, enabling full and in-depth understanding of the most timely specifications
Have long-standing, development-level partnerships with major ISVs and knowledge of present and future product directions
Have rigorous testing and QA processes to guarantee product quality.
Careful consideration of these issues will help IT organizations link their activities more effectively to business goals.
Mark Troester is senior manager of product marketing for DataDirect Technologies, a software industry leader in standards-based components for connecting applications to data, and an operating unit of Progress Software Corp. (Nasdaq: PRGS). In his role, he leads the strategic marketing efforts for the company's connectivity technologies. He has more than 20 years of enterprise software development experience and is currently involved in the development of data connectivity components, including ODBC, JDBC, ADO.NET, and XML.
e-Mail: mark.troester@datadirect.com
Open Standards BY JIM ZEMLIN
Buying technology is inherently risky. When you purchase software, you are entering into a futures contract with the supplier of that software. Open standards are the best solution to alleviate that risk. Customers understand this, vendors understand this, and government organizations understand this. Not surprisingly then, the word "open" is one of the most widely used and abused terms in computing. Vendors have co-opted the term for their own marketing purposes, slapping "open" on everything from business and development models to interface design.
Now, open source is the new buzzword, with proprietary vendors attempting to grab a little of this halo effect by opening up their technology in various ways, some more meaningful than others. Vendors such as Oracle, for instance, have recently been acquiring open source companies, many times at a high premium, to grab a piece of this new world. This rush to open source is reminiscent of the heady late '90s when everyone slapped "dot com" on their names, business plans, and business cards in order to get a higher valuation.
Unfortunately, words with fungible meanings rapidly lose their power. Terms that are co-opted by self-serving vendors lose their power even more rapidly. Think of "value added," "innovative," "industry leading," or "customer service," and I think you'll understand what I mean. According to Wikipedia, there are many divergent viewpoints in the open standards debate: "There is little really universal agreement about the usage of either of the terms open or standard."
However, not everyone has given up. The European Commission (EC) recently defined the term "open standards" as part of the final Version 1.0 of the European Interoperability Framework for Pan-European eGovernment Services. This organization has issued a mandate to use open standards as much as possible, defining the minimal characteristics that a specification must have in order to be considered open as:
The standard is adopted and will be maintained by a not-for-profit organization, and its ongoing development occurs on the basis of an open decision-making procedure available to all interested parties (consensus or majority decision, etc.).
The standard has been published and the standard specification document is available either freely or at a nominal charge. It must be permissible to all to copy, distribute and use it for no fee or at a nominal fee.
The intellectual property (i.e., patents possibly present) of (parts of) the standard is made irrevocably available on a royalty-free basis.
The important part of this or any other open standard definition is the development of the standard, the availability of the standard and the implementation of that standard.
Development must be open to all interested parties. The standard must be born out of consensus, not the limited interests of one or two individuals. The process must be transparent.
Availability of the standard must be free for all to use. You can't restrict access to the standard to those who can afford to pay for it.
Implementations of the standard must not be restricted; certifications or compliance can be enforced, but the standard must not place undue restrictions (such as Draconian licensing provisions or intellectual property royalties) on the people implementing software from those standards. Otherwise, it's not an open standard; it's just a standard.
Now how is this related to open source? The EC says it this way: "Open Source Software (OSS) tends to use and help define open standards and publicly available specifications. OSS products are, by their nature, publicly available specifications, and the availability of their source code promotes open, democratic debate around the specifications, making them both more robust and interoperable. As such, OSS corresponds to the objectives of this Framework and should be assessed and considered favorably alongside proprietary alternatives."
Basically, the term "open source" describes a development and licensing methodology and its result, while "open standards" describes a blueprint for interoperability along with the development and availability of the standard in question. Open standards don't always result in open source (nor should they). The result of open standards is a set of specifications (or blueprints); the result of open source is open code. Both are developed collaboratively.
So why don't open standards always result in open source? We believe open standards shouldn't force a business model on the users of those standards. That's why implementers of the Linux Standard Base, for instance, could be compliant with our standard and still offer a proprietary, non-GPL product. Microsoft could, if it so chose, make Windows LSB-compliant and thereby achieve interoperability with LSB-compliant applications.
While we will never convince vendors to be more careful with their use of words such as "open standards" or "open source" in their marketing materials, we can hopefully start asking tough questions and evaluating their claims against clear definitions.
Jim Zemlin is executive director of the Free Standards Group.
He previously served at Covalent Technologies and Corio.
Taking Back the Word "Open"
MAY/JUNE 2006 | ENTERPRISE OPEN SOURCE JOURNAL | 11
In the January/February issue of Enterprise Open Source Journal, we examined the Apache Software Foundation (ASF), its structure, and projects. In this issue, we continue and extend that discussion to the Apache Incubator Project.
The ASF began with a single project, the Apache HTTP Server, but has grown tremendously; currently, the ASF has more than 1,500 Committers working on hundreds of projects and sub-projects. Given this continued growth, the ASF faced the challenge of ensuring its philosophy and standards for community development and intellectual property were maintained as the ASF grew beyond the bounds of its Founding Members.
Accordingly, in October 2002, the ASF created the Incubator and charged it with accepting new products into the Foundation; providing guidance and support to help each new product engender its own collaborative community; educating new developers in the philosophy and guidelines for collaborative development as defined by the members of the Foundation; and proposing to the board the promotion of such products to independent Project Management Committee (PMC) status once their community has reached maturity.
Since its inception, the Incubator PMC has worked to establish a community dedicated to supporting Foundation projects and to codify the policies and procedures necessary to balance the needs of the ASF, the larger ASF community, and the projects undergoing incubation.
As it normally does, the ASF board
agreed on the basic responsibilities and
goals for the Incubator, and established a
PMC to oversee the project.
The Incubator PMC has the right and responsibility to establish and enforce such procedures as it deems necessary and desirable to achieve the Incubator Project objectives, including promoting the ASF's core values. The Incubator PMC is well-suited to this task, as it consists primarily of people who have been elected as Members of the Foundation, as well as many of the ASF's directors, other ASF officers, and invited participants who have demonstrated their understanding of the ASF's goals and policies.
The Incubator Project has evolved a great deal since its inception. It regularly reviews and revises its procedures and documentation in an effort to continue to make things both easier and clearer for projects, while preserving the necessary protections for both the ASF and the larger community. Because the Incubator provides the documentation explaining ASF structure and operations as well as how to work within the pre-specified framework, the methodologies that Incubator projects use within the Incubator (with a few Incubator-specific exceptions) are used as blueprints when those projects graduate from the Incubator into the ASF proper.
Developers who are interested in associating a project with the ASF should contact the Apache Incubator Project. One of the requirements is that the project(s) must find some backing from within the ASF. The Apache Incubator accepts projects that Members of the Foundation are willing to help mentor, but doesn't accept projects for which such support doesn't materialize.
Before moving on, let's take a brief look at some terms commonly used in the ASF community and other open source projects:
Codebase: A codebase is the collection of source code that collectively forms a piece of software.
Committer: After making sufficient contributions to earn merit in the community, a contributor may be invited to become a Committer. Upon accepting the invitation and signing a Contributor License Agreement (CLA) with the ASF, a contributor can become a Committer. After becoming a Committer, access is normally granted to one or more portions of the ASF's code repository.
Contributor: A contributor is defined as one who helps the ASF introduce project improvements.
PMC Member: Someone who has been approved by the ASF Board as a part of a PMC, generally after having been elected by the existing PMC Members. PMC Members have the only binding votes on their project's decisions. The goal would be for all Committers to also be PMC Members, although there's often a maturing phase before a Committer is granted a binding vote by election to the PMC.
Functionality
The Incubator Project is a gateway to the ASF for new projects. The ASF board has clarified how anyone can participate in the Foundation, and the process to add a new project has been formulated. The Incubator thus acts as a deciding factor in whether a project will work as a sub-project or a top-level project; this all depends on how the project has been presented and what the Apache Incubator PMC recommends to the board about it.
Let's start with how to participate in the Apache Incubator Project and then we'll examine the incubation process.
Apache Incubator Project: An Entry Path to the ASF
By Vikram Gaur
How to Participate
Anyone can participate in Apache projects. The first step is to join the mailing list(s) for the project(s) in which you wish to participate. Subscribers can participate in general and public discussions, access general messages, access the digest of general messages, access the messages related to older projects, and work with code available for download and/or anonymous Subversion access. In addition to specific mailing lists for each project undergoing incubation, the Incubator also has a number of public mailing lists. The most important is the general entry point, general@incubator.apache.org, to which one can subscribe by sending e-mail to general-subscribe@incubator.apache.org. All incubating project Committers should be subscribed to the Incubator general list.
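The subscription step above can be sketched in code. This is a minimal illustration, not official ASF tooling: it assumes the list manager follows the common convention of treating any message sent to the -subscribe address as a subscription request, and the sender address is a placeholder. The commented-out lines show how you might hand the message to your own outgoing mail server.

```python
from email.message import EmailMessage
# import smtplib  # only needed if you actually send the message

def subscription_message(list_name: str, sender: str) -> EmailMessage:
    """Build the message that a -subscribe address treats as a subscription
    request. Assumption: the body can be empty; only the To address matters."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = f"{list_name}-subscribe@incubator.apache.org"
    msg["Subject"] = "subscribe"
    return msg

msg = subscription_message("general", "you@example.com")
print(msg["To"])  # general-subscribe@incubator.apache.org
# To actually send it through your own SMTP relay:
# with smtplib.SMTP("localhost") as server:
#     server.send_message(msg)
```

Confirming the subscription usually involves replying to an automated challenge message; the sketch covers only the initial request.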
Now let's concentrate on how the Incubator Project works to add new projects to the list.
Incubation Process
To propose a project to the Incubator, send an e-mail to the general Incubator mailing list. If there are existing Apache projects related to what you are proposing, you might first solicit support among them. The Incubator Website also provides information on how to submit a proposal. Once accepted for the Incubator, the new project will have one or more (usually more) mentors who will help to guide the new community through the process of getting resources set up, making decisions, and going through the general process known as community building.
The mentor acts as a guide, monitoring the community's progress, continuity, stability, and suitability throughout the process. The mentor will guide you based on experience and knowledge of technical and administrative best practices; the mentor also will provide required information on ASF policies and procedures.
Eventually, the project will be objectively evaluated for having met certain criteria related to intellectual property concerns, and subjectively evaluated in terms of how well the community has adopted Apache philosophies and procedures. When a project is deemed ready to leave the Incubator, it may join an existing project if that's appropriate and mutually desired, or it may be proposed to the ASF Board to be established as a new top-level project, with its own PMC.
Significance
Why is all this significant? The ASF has evolved to meet the demands of its rapid growth. Through its internal restructuring efforts, the ASF has achieved stability and continuity. The incubation process ensures those who want to enter and prosper are aware of and committed to the principles that made the ASF successful, especially its nurturing, clearly defined project development process, which embraces collaboration and openness. The incubation process also helps differentiate between projects that are considered a stable part of the ASF and those that are still evolving, and may or may not ever successfully become part of the ASF. This is so end users have some idea of whether or not all the project's intellectual property has been properly cleared, documented and licensed according to the ASF's requirements, and whether or not the community is considered stable.
The Incubator is responsible for:
Filtering proposals that ultimately can become a new project or sub-project
Facilitating project creation and providing the required infrastructure
Supervising and mentoring the incubated community
Evaluating the maturity of the incubated project and either retiring or promoting it
Ensuring that incubating projects started by teams within companies or other non-ASF organizations achieve the diversity of community membership necessary to succeed as an ASF project.
The Incubator filters projects primarily on the basis of interest among ASF Members to mentor the projects. Little concern is given to the level of overlap between similar projects or even to technical issues, because many technical approaches can be valid. In this way, the Incubator provides significant benefits for both the ASF and project developers. For the ASF, it helps ensure a stable and predictable development process. For a project developer, it ensures there's a clear path of project evolution.
Acknowledgement:
Special thanks to Noel Bergman of the ASF for his contribution
to this article.
Vikram Gaur is an electronics graduate from Kurukshetra
University, India. In 1999, he and Neeraj Gaur formed IIRA
Technologies, which later became IIRA Compuserv
Technologies (P) Ltd. As CTO, he has headed numerous projects
based on Linux and Windows. Currently, he and his team are
working on Centralized Network Management. He also is
heading projects based on PHP, Java, and SAP.
e-Mail: vikramgaur@iiratechnologies.com
FAQ
Q: Since the software available at ASF is open source, what happens to my
Intellectual Property (IP) rights?
A: Contributors maintain the copyright of their contributions. They simply
license it to the ASF and its recipients.
Q: I'm not a Committer, but I want to update the code. Is that possible?
A: Yes, you can update the code even though you aren't a Committer. Read
the "how to participate" page on the Incubator Website.
Q: Can we expect ASF volunteers to reply to all our direct queries?
A: No, ASF volunteers aren't paid, so it isn't feasible for them to answer all
questions from everyone in the community. However, hundreds of volun-
teers are available to help you. Be sure to join the mailing lists so you have
access to these individuals.
- V.G.
There seems to be a lot of confusion surrounding the concepts of free software and Open Source Software (OSS). Many people use the terms interchangeably, which isn't correct. There are even some who lump free software and OSS into the category of public domain software, which they clearly aren't. In this article, I'll discuss how free software and OSS got started, and how the Free Software Foundation's GNU General Public License (GNU GPL) and other licenses relate to the differences between them.
First, some history. Although many changes took place in the Unix world in the '80s, we'll focus here only on free software and OSS. Before Richard M. Stallman's creation of the GNU Project in 1984, there were no concepts such as free software or OSS. While there was sharing of source code and executables among universities, software categories were limited to proprietary software, shareware, and public domain software.
Only proprietary software received any significant amount of respect. Most people assumed that shareware and software released into the public domain were poorly written and generated by amateurs. To a large extent this was true, but there were some true gems strewn throughout. Even so, except for software released to the public domain, the source code wasn't available, since the shareware authors still hoped to make money from their work. For most people, the common choices for software were expensive proprietary software packages or generally inferior shareware or public domain software.
Richard Stallman's radical idea was that software shouldn't be owned, per se, and users should have the freedom to run, copy, distribute, study, change, and improve the software. Stallman's reasoning behind this is explained in great detail at www.gnu.org/philosophy/philosophy.html and www.gnu.org/gnu/thegnuproject.html. Whether you agree with his reasoning or not, Stallman decided it was literally immoral and unethical to write proprietary software. Being a programmer, he decided to start creating free software on his own. He began with Emacs, but his plan was to eventually have an entire operating system composed of free software.
To make sure software that started out as free software couldn't be turned into proprietary software, the GNU GPL was created. To quote from the second URL above:
The goal of GNU was to give users freedom, not just to be popular. So we needed to use distribution terms that would prevent GNU software from being turned into proprietary software. The method we use is called copyleft.
Copyleft uses copyright law, but flips it over to serve the opposite of its usual purpose: instead of a means of privatizing software, it becomes a means of keeping software free. The central idea of copyleft is that we give everyone permission to run the program, copy the program, modify the program, and distribute modified versions, but not permission to add restrictions of their own. Thus, the crucial freedoms that define free software are guaranteed to everyone who has a copy; they become inalienable rights.
As you can see from this quote, the GNU GPL doesn't place the software in the public domain. This is also the case with every other OSS license. The authors retain their copyright in their own work. They just choose to make the source code available, rather than keep it closed.
As time went on, more people became familiar with the GNU project and its software, and more developers contributed to the effort. Unix system administrators liked to use these tools to do their jobs without having to convince management to pay for proprietary versions, or pay for them out of their own pocket. As a result, the GNU GPL became well-known to technicians and developers.
The free software community and GNU project continued to grow and expand into other areas. Although popular with technicians and developers, Stallman's viewpoints and approach were too radical and confrontational for some. To make the concept of free software more palatable, a more practical, less ideological approach was needed. Much of this thinking was crystallized when Netscape decided to release the source code for its Web browser to the public. Netscape's CEO, Jim Barksdale, cited Eric S. Raymond's book, The Cathedral and the Bazaar, as the fundamental inspiration for the move. Raymond's thesis was that access to source code was critical, not necessarily for philosophical reasons, but because the development methodology and development communities that resulted were superior to closed-source methods.
What You Need to Know About the Differences Between Free Software and Open Source Software
By Mark Post

Netscape's decision was entirely business-oriented. They were trying to preserve a market for their server products. With the
help of Raymond and others, they developed their own license, the Mozilla Public License (MPL), rather than adopt the GNU GPL. The MPL doesn't have the same type of strong copyleft aspect that the GNU GPL does. In his essay, Revenge of the Hackers (www.catb.org/~esr/writings/cathedral-bazaar/hacker-revenge/ar01s05.html), Raymond writes:
It seemed clear to us in retrospect that the term 'free software' had done our movement tremendous damage over the years. Part of this stemmed from the fact that the word 'free' has two different meanings in the English language, one suggesting a price of zero and one related to the idea of liberty. Richard Stallman, whose Free Software Foundation has long championed the term, says 'Think free speech, not free beer' but the ambiguity of the term has nevertheless created serious problems, especially since most free software is also distributed free of charge.
Most of the damage, though, came from something worse: the strong association of the term 'free software' with hostility to intellectual property rights, communism, and other ideas hardly likely to endear it to an MIS manager.
It was, and still is, beside the point to argue that the Free Software Foundation is not hostile to all intellectual property and that its position is not exactly communistic. We knew that. What we realized, under the pressure of the Netscape release, was that FSF's actual position didn't matter. Only the fact that its evangelism had backfired (associating 'free software' with these negative stereotypes in the minds of the trade press and the corporate world) actually mattered.
Our success after Netscape would depend on replacing the negative FSF stereotypes with positive stereotypes of our own: pragmatic tales, sweet to managers' and investors' ears, of higher reliability and lower cost and better features.
We began acting on it within a few days after. Bruce Perens had the <opensource.org> domain registered and the first version of the Open Source Website up within a week. He also suggested that the Debian Free Software Guidelines become the 'Open Source Definition', and began the process of registering 'Open Source' as a certification mark so that we could legally require people to use 'Open Source' for products conforming to the OSD.
With the Open Source Initiative acting as arbiter of what licenses conform to the OSD, many other software projects and companies started creating licenses for OSS. The FSF provides an overview of many of these licenses at www.fsf.org/licensing/licenses/index_html. A brief reading of the commentary there shows it to be concerned with which licenses are compatible with the GNU GPL, and which ones provide a strong copyleft. Drilling down on some of the hyperlinks provided reveals that the FSF has a long-term strategy of making all software free. Whether or not this is possible, or will ever happen, this informs nearly all their work. That page also has hyperlinks to many of the actual license documents themselves, making it a valuable resource for anyone who needs to read the licenses for various packages.
The GNU GPL isn't about open source; it's about free software. Without the GNU GPL or the Free Software Foundation, it's entirely possible there would be no such thing as OSS. This has no effect on people who simply want to use free software or OSS as a tool, but it has serious ramifications for those who want to incorporate the source code into their own application.
There are probably many small software projects that choose the GNU GPL for their license simply because it's the most common one. For any large project likely to be useful to businesses, license selection is usually reflective of their actual intent for the software. If that choice is the GNU GPL, then it's likely the project members sincerely don't want people taking their work proprietary. This has certain implications:
If a project team decides to change the license to something more permissive, any developers who object to that could possibly cripple the project by refusing to let their contributions stay in place. Since they own the copyright to their own contributions, they would have that right.
Violators of the GNU GPL, when found out, are likely to hear from the FSF, or the copyright holders themselves. Just ask Linksys, Belkin, Fujitsu-Siemens, U.S. Robotics, D-Link, and Siemens. They've all been taken to task for not living up to all the GNU GPL requirements and all have since complied.
Harald Welte, author of the netfilter/iptables component of the Linux kernel, has made it his personal mission to enforce compliance with the GNU GPL. Several other authors of GNU GPL licensed software have assigned him their copyrights so he can enforce them on their behalf. Welte has dealt with numerous companies that took the Linux kernel, embedded it in a product, and didn't release their modifications back to the community. He's received settlements in at least 25 cases so far and won a couple of rounds in court with others. While his main goal, like the FSF's, is compliance, there have been some monetary awards. The awards haven't been all that large, but he also has obtained the release of the source code that some of the companies were trying to keep secret, or made them distribute a copy of the GNU GPL with their software, etc. Whatever was necessary to bring compliance was done.
If anyone wants to use OSS to create a legally derivative work but not release its source code to the public, they must make sure the license permits that. They may not be able to find an appropriate package with an appropriate license, but trying to go that route with something licensed under the GNU GPL exposes them to considerable legal risk. This seems to be the sticking point for some people, and why there are calls to make the GPL more "business friendly," or to get rid of copyleft, etc. But developers don't want a license that lets their work be turned into non-free software. If they didn't care about that, they'd have chosen a different license in the first place. There are certainly several to choose from, and many large OSS projects have done that, including:
Apache
BIND
Mozilla, which has Firefox and Thunderbird as subprojects
JBoss
Struts
Tomcat
OpenOffice.org
OpenSSH
OpenBSD, NetBSD, FreeBSD
X.org.
There's a big difference between free software and OSS. Often, this is reflected in the philosophy and attitude of the developers toward keeping derivative works free. Make sure you understand and abide by the license of any source code you intend to incorporate in a non-free package.
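A first, mechanical pass at that license check can be automated. The helper below is a hypothetical sketch, not part of any tool mentioned in this article: it walks a source tree, looks for conventionally named license files (COPYING, LICENSE, etc.), and flags those that mention the GNU GPL. A hit means the package needs careful legal review before its code goes into a non-free product; an empty result does not prove the code is safe to use.

```python
from pathlib import Path

# Conventional license file names and a marker string from the GPL's title line.
LICENSE_NAMES = {"COPYING", "COPYING.LIB", "LICENSE", "LICENCE"}
GPL_MARKER = "GNU GENERAL PUBLIC LICENSE"

def gpl_license_files(root: str) -> list:
    """Return license files under root whose text mentions the GNU GPL."""
    hits = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.name.upper() in LICENSE_NAMES:
            text = path.read_text(errors="ignore").upper()
            if GPL_MARKER in text:
                hits.append(path)
    return hits
```

For example, `gpl_license_files("vendor/some-package")` would list any COPYING or LICENSE files there that carry the GPL's title line. Code files can embed license notices without a separate license file, so read the actual license text of anything you plan to incorporate.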
Mark Post is a senior infrastructure specialist at EDS. After
working with MVS for more than 20 years, he is technical lead
for EDS Linux capability.
e-Mail: mark.post@eds.com
Living With Software Patents in an Open Source World
Legal Issues | By Diane M. Peters
At the intersection of software patents and innovation lies an interesting and important debate. Do software patents support innovation by protecting inventors, as many patent holders claim? Or do they hinder innovation because of their questionable quality and abundance, as asserted by their opponents? These aren't simple questions to answer, nor will they be resolved anytime soon. Regardless of when or how they get answered, however, the simple fact remains that all users and developers of software, whether proprietary or open source, must live with software patents and our current system for issuing them, however flawed, for the foreseeable future.
Early in 2005, many of us predicted that the next big issue
in need of attention by the computer software industry would
be software patents. The past 12 months have indeed been
important in terms of raising awareness and the quality of
debate on the issue. Among other events:
With encouragement from many in the software industry, Congress considered patent reform efforts that, after becoming stalled last year following intense lobbying by large patentees, once again appear to be making headway.
The U.S. Supreme Court agreed to hear several important cases involving patents that will have far-reaching repercussions for the software industry in particular.
The U.S. Patent and Trademark Office (USPTO) conceded that the quality of patents it issues, and software and business method patents in particular, is unacceptably low and deserves its dedicated attention.
These events and many others, including the recently
settled dispute between BlackBerry-maker Research In
Motion and NTP involving software patents subsequently
invalidated by the USPTO, point to the need for serious
consideration by policy makers of the challenges presented
by software patents and their impact on innovation in the
software industry.
While that debate unfolds, several in the software
industry and community have embraced efforts that account
for the reality of software patents and the threat they pose
for users and developers of Open Source Software (OSS) and
open standards in particular. All these efforts use the existing
patent regime to support innovation and protect those users
and developers.
The Patent Commons Project is a good example. Launched in 2005, the Project uses the current system by encouraging patent holders to promise to use the monopoly rights that accompany issued patents (rights that ordinarily threaten developers and users of software) to instead protect those users and developers. The Project also helps users and developers of software make sense of the myriad promises and policies made by patent holders with respect to patents in their portfolios.
One of the innovative approaches adopted by patent holders and catalogued in the Project's database is the adoption of a public patent policy or promise articulating the company's intentions vis-à-vis its use of its software patents. Novell, for example, has stated that it will use its patent portfolio to defend against others who assert their patents against OSS marketed or sold by Novell. Red Hat, on the other hand, has publicly promised not to enforce any patents it holds against OSS licensed under particular open source licenses.
Some companies are starting to adopt other approaches
to using their patent portfolios in support of open standards
that also are documented by the Project. Sun Microsystems
has covenanted not to sue to enforce any patents it holds that
read on implementations of the Open Document Format
adopted by the standard-setting organization OASIS. In
this same vein, IBM has promised not to enforce its patents
against approved implementations of specified standards
in the healthcare and education industries. And still other
companies, such as CA, have pledged not to assert specific
patents they hold against any developers and users of OSS
generally, provided those developers and users in turn agree
not to assert their own patents against OSS.
All three approaches signal an encouraging and
innovative shift in the way companies are using their patent
portfolios and the rights granted by the existing patent
regime to support OSS and standards. Each approach does
so by relying on the very system that is under review and
evaluation by Congress, the U.S. Supreme Court, and the
USPTO itself.
The Patent Commons Project and the promises and
policies it documents arent the only initiatives gaining
momentum and support while longer-term patent policy
issues are debated and taking shape. Several other projects
are also under way, each designed to reduce the patent threat
to OSS users and developers. Open Invention Network (OIN), a privately funded company founded by IBM, Novell, Philips, Red Hat and Sony in late 2005, offers royalty-free licenses for its patents to anyone, but only on the condition that licensees agree not to assert their patents against Linux or certain Linux applications. While the covenants not to sue
extracted from licensees haven't (to date) been made public, users and developers of Linux can at a minimum be assured the founding companies are united in their dedication to protecting Linux from becoming the target of possible patent litigation. Conversely, detractors of Linux have been sent a clear message from those major industry players that their unsubstantiated patent threats aren't being taken lightly.
Other publicly supported projects also are in progress and assume the existing patent system won't change fundamentally. The Open Source as Prior Art (OSAPA) project, an effort championed by OSDL, IBM and Sourceforge.net, and supported by the USPTO and many in the open source community, seeks to improve the quality of software patents that issue. Its goal is to reduce the number of patents that issue but shouldn't, or that are issued but aren't enforceable. This goal is accomplished by increasing accessibility to OSS code and documentation that can be used as prior art during the patent examination process or after a patent is issued. For users and developers of open source, this means a reduction in the number of bad software patents against which they may need to defend themselves.
The Open Patent Review project is a further example of a
project designed to increase the quality of patents that issue
and reduce the number of patents that shouldn't issue under
applicable law. The initiative seeks to create a process for
collaborative, peer review of patent applications by experts
who provide information to patent examiners during their
review of patent applications. Like OSAPA, the Open Patent
Review initiative will better ensure patent examiners have
access to all relevant information when reviewing patent
applications, thereby improving patent quality.
Another recent initiative is designed to provide information about the quality of patent applications and issued patents. Similar to a project previously considered by the USPTO, the Patent Quality Index (PQI) project seeks to provide a quantitative measure of a patent's quality that can be used as a tool by patent examiners in the patent application review process, as well as by others after the patent issues to assess the patent's quality. Like OSAPA and the Open Patent Review initiative, the PQI project rests on the premise that patents and the monopoly rights they carry should be granted only for true inventions.
All these initiatives and projects, whether public or private, approach the patent issue from a pragmatic perspective. Each recognizes that the patent issues debated by Congress, under review by the courts and burdening the USPTO, are complex and not easily remedied. Each has its supporters and detractors within the software industry, the OSS community and the USPTO. When considered collectively, however, these initiatives present a fairly comprehensive approach that, if successful, will go a long way toward providing an environment in which OSS, standards, and software patents can coexist while at the same time reducing the potential harm and threat to innovation.
For more information about each of the projects, please visit their Websites:
Patent Commons Project: www.patentcommons.org
Open Inventions Network: www.openinventionnetwork.com
Open Source as Prior Art: http://developer.osdl.org/dev/priorart/
Open Patent Review: http://dotank.nyls.edu/communitypatent/
Patent Quality Index: www.law.upenn.edu/blogs/polk/pqi/
Diane M. Peters is general counsel for Open Source Development Labs. She joined
OSDL as its first general counsel in 2004, after serving as their outside counsel for more
than two years. She is responsible for all the organization's global legal operations,
including overseeing the Patent Commons Project and working
with the U.S. Patent Office on patent quality reform with specific
emphasis on the Open Source as Prior Art Initiative. She holds a
number of board positions, including board director for
Software Freedom Law Center. She earned a B.A. in political
science from Grinnell College in 1986, and a J.D. from
Washington University School of Law in 1989, where she served
as an executive editor of the Washington University Law
Quarterly.
e-Mail: dmp@osdl.org
MAY/JUNE 2006
|
ENTERPRISE OPEN SOURCE JOURNAL
|
17
Open Source in France:
Insight for Other
Countries
By Mathieu Poujol
The open source market in France is emerging but growing
strongly. Figure 1 shows all software and IT services
expenditures related to open source products and solutions.
These figures include only software and IT services related
to open source technologies. The non-open source market
generated by Open Source Software (OSS) is much larger.
French Connections
The French market is particularly dynamic and is in the
vanguard on issues related to open source. France, along with
Germany and Japan, is one of the countries most partial to
open source. There are several explanations for this:
The importance of the ideological aspect, especially in the
public and semi-public sectors
The power of IT services companies (whose share of added
value on projects is increasing)
American domination in infrastructure software, which acts
as a repellent for some commercial software
A more technical approach to IT projects
The strong tradition of specific development in France.
State of the Art
Open source has a tendency to move up from infrastructures
toward applications. Tools tend to become gradually more
complex and less able to be standardized as one moves toward
human processes and toward the most promising software with
the highest level of added value and importance. Small to
Medium-Size Businesses and Industries (SMBs/SMIs) remain
reticent when it comes to large open source deployments, which
are often still too complex for them. A versatile market is
developing around open source, from infrastructures to
applications. Often, open source is used to respond to relatively
simple needs, but it's now becoming essential in increasingly
complex, strategic situations (see Figure 2).
System Infrastructure Software
Linux launched the open source movement. It offers an
operating system that's nearly equivalent to Unix and runs on
the most common chips. This solution is gaining ground,
particularly for simple or highly specific needs where Linux can
be completely adapted to those needs (e.g., defense, real-time,
scientific calculation). Linux changed the tone in the market
around x86 chips (under Microsoft domination) and for
low-end or even high-end Unix, since Solaris 10 is delivered in
near open source. The breakthrough of Linux will continue in
operating systems, although it remains, in the short term, rather
limited with regard to large companies' strategic systems. Linux
needs only x86 chips, which further proves their value.
Open source is also present in scientific and technical
applications and real-time software. Its capacity for adaptation
and evolution, along with its openness, are great assets in an
infrastructure stack within a larger overall and often proprietary
solution. This is especially the case in the defense and energy
sectors.
Systems Administration and Management
Open source is also present in security and directory
software, but at a low level. These products can't compete with
overall systems administration platforms. Open source is,
however, a stimulator of demand for this type of platform
because it relies more on administration capabilities than
traditional software does. On the other hand, open source
offers many utilities that can be
Figure 2: A Versatile Market Is Developing Around Open Source
(a stack, in order of increasing added value: systems infrastructure
software, database, middleware, applications, and processes,
flanked by systems management and development)
Figure 1: Open Source Market in France (PAC)

                          2002    2003    2004    2005    2006    2007    2008
Software and IT Services  27,132  26,101  27,066  28,918  31,013  33,272  35,699
Open Source                   60     100     146     211     305     430     580

Growth                    02/03   03/04   04/05   05/06   06/07   07/08   04/08
Software and IT Services  -3.8%    3.7%    6.8%    7.2%    7.3%    7.3%    7.2%
Open Source               66.7%   46.0%   44.5%   44.5%   41.0%   34.8%   41.2%
integrated into more global solutions capable of responding to
tactical needs, particularly in security.
Databases
Databases are already a mature market. Open source should
reach a good level of penetration in this market, especially for
relatively simple needs. This trend is already noticeable in
France, with a database market that's declining in value while
still increasing in volume. The breakthrough of open source
databases is particularly visible in Internet environments, for
example, behind Apache for PHP applications. Traditional
software companies in this market are pushing their offerings
upward, particularly toward cluster and grid computing,
business decision-making, highly transactional applications,
and XML, to justify adding more value to their solutions
compared to open source.
Development Tools
Open source originates from development, particularly
development around communities; in this market segment, OSS
offers undeniable added value, which means it significantly
impacts the market.
Eclipse, the development community environment from IBM,
is in the process of absorbing the Java 2 Enterprise Edition
(J2EE) development tools market. One can compare Eclipse's
breakthrough to that of Linux on Unix. It's now an
acknowledged standard. Microsoft and Borland are suffering
and are evolving toward Enterprise Resource Planning (ERP)
development environments. IBM, promoter of Eclipse, is also
moving toward functional layers with Atlantic and Rational.
Middleware
Open source is managing to successfully break through into
front-office environments related to the Web, with the Apache
Web server, as well as with portal development environments
such as PHP. Web content management also is developed based
on open technologies such as SPIP. This is particularly true for
the French public sector (which shares some of its development)
and SMBs/SMIs. In areas that are strongly transactional or
complex, particularly in back-offices, there are few large-scale
deployments of OSS. Instead, open source stacks are used to
address limited needs.
Open source also can serve as a laboratory for testing an
innovative technical concept such as Service-Oriented
Architecture (SOA). This is why Iona offered Celtix, a limited
Enterprise Service Bus (ESB), as an open source product hosted
by ObjectWeb. For an information system, it's important to
choose technologies that are the most complete, least risky,
highest performing, easiest to administer, and best integrated.
These capabilities are essential in an SOA, the next step of IT
architectures. OSS has yet to prove its value in this crucial SOA
market.
On the other hand, for simpler and less strategic needs, it can
be advantageous to use OSS. A Ferrari isn't necessary to go buy
bread in the morning; a sedan (or a good walk) suffices.
Applications
This market continues to be relatively unreceptive to OSS. Its
requirements are such that the solutions offered revert to
specific development. Certain open application stacks can be
incorporated into software or application software packages.
OSS is interesting for certain specific applications, especially in
non-competitive sectors such as the public sector.
Ranking Open Source Players
PAC has ranked open source players according to five
categories:
1. IT services suppliers and traditional service providers that
were positioned early on open source, either directly,
through a specialized subsidiary, or thanks to an acquisition
2. Open source service providers and small companies
specialized entirely in open technologies
3. Traditional software suppliers that have opened up their
offerings to open source
4. Software suppliers specialized in open source, a new model
5. Communities, an old model that has been stimulated by the
development of the Internet and open source.
Experience has shown that although open source is a credible
alternative, we can't claim it constitutes a better option in terms
of quality, security, and costs. It's an approach that must be
differentiated based on the company. As for all software, open
source is a reasonable choice if it's well thought out. Open
source is easier to integrate if you have:
Considerable experience with specific development
Internal teams that are relatively large or highly specialized
Rather simple needs in software infrastructure
Highly specific needs.
Companies typically don't want to transform themselves into
software vendors. They also don't want to get locked into a
dependent relationship with an integrator, as they've already
done with software suppliers. Open source is a windfall for IT
services companies that recover a portion of the added value
that had been taken from them by suppliers of software
packages. But is this an advantage for the user?
To make the best choice for any software, it's important to:
Have a significant evaluation phase to study the implications
of a choice in software
Follow standards scrupulously
Bet on SOAs.
Don't re-invent the wheel because:
It's often more advantageous to buy existing software than to
develop new software, even if one uses a lot of open source
stacks.
It can be counterproductive to develop or buy certain
software if the same already exists in open source.
It's especially important to avoid succumbing to the
ideological approaches and media hype related to open source.
There can be a strong temptation to rush into open source
following setbacks encountered with traditional software
suppliers, but this isn't necessarily the best solution.
Although open source has boosted competition in the IT
market, it's not a cure-all. To be strategic, effective, and reliable,
OSS must be designed within the framework of overall IT
architecture processes, or there will again be a return to IT
entropy.
Mathieu Poujol is a consultant with Pierre Audoin Consultants
(PAC), specializing in Systems Infrastructure Software. PAC
advises IT companies on achieving domestic and international
growth objectives in Europe and the U.S. through the planning,
development, implementation, and ongoing support of
successful growth strategies.
Voice: +33 (0)1 56 56 74 17
e-Mail: m.poujol@pac-online.com
Website: www.pac-online.com
SHOWCASE SOLUTION BY DENNY YOST
When the continued success of a business is dependent
on finding new clients and providing current clients
with outstanding service, unreliable and incomplete internal
systems can be disastrous. Finding and keeping clients is a
difficult, expensive, and time-consuming job for sales teams,
support staffs, finance personnel, and others who come
into direct contact with customers. If the sales team has
an inadequate tool for tracking customer communications
and other parts of the company can't read or add to these
communications, then it is very difficult for a company to
serve clients in an organized and efficient manner. Such
was the case for BZ Results, a provider of digital marketing
solutions to the automotive industry.
BZ Results differentiates itself from larger competitors
by delivering an intense focus on client responsiveness and
satisfaction. The results of BZ's efforts have rewarded the
company well over the last two years, but they also highlighted
the need for an improved customer management system.
"We were using a system that was designed to be a contact
management system for a relatively small number of
salespeople," says Rob Lackey, CTO at BZ Results. "The system
uptime was very undependable. There was no easy way to
customize the solution for integrating it into our other systems,
and keeping track of pre-sales mistakes and lost opportunity
costs was impossible. Staff from multiple departments couldn't
access the information stored in it, and the ability to accurately
forecast sales didn't exist. We essentially had 44 salespeople
who had an inadequate tool to do their job, combined with
a company that valued customer relations but couldn't easily
leverage communications across departments to better and
more easily serve clients. It was time for a change, and we
needed a complete Customer Relationship Management
(CRM) system we could depend on."
There are many CRM systems on the market from
which to choose, and BZ Results reviewed all that met their
requirements. We collected 46 requirements from the
stakeholders throughout the company, and organized them
into three separate levels of importance, Lackey continues.
Everyone agreed we needed a sales, marketing, and post-
sales platform that places the customer at the center of all
communications between the client and BZ Results, and could
be integrated into the other systems used within the company.
We also needed to find a system that was truly easy to use so
it permitted the salespeople to spend their time working on
relationships with clients vs. spending inordinate amounts of
time documenting the relationships. Of course, we also wanted
the best possible solution at the best possible price!
After applying the selection criteria to the various CRM
products on the market, the SugarCRM open source solution
was chosen. "The SugarCRM system was the clear winner
against our criteria," says Lackey. "It was easy to install, and
the salespeople like it much better and quickly adopted it. We
can integrate it into our other systems without paying large
fees for professional services. In addition, staff from multiple
departments can now utilize the information collected and
stored in a single place for each customer. The sales process
has improved, and the benefits of accurate sales forecasting
are being realized. Our salespeople also are using SugarCRM
when they are on the road. The mobile wireless support
permits our salespeople to update customer information
while still in the parking lot of a client rather than waiting
until returning to the office. This permits us to be that much
more responsive to customer needs on an almost minute-by-
minute basis. It's this kind of responsiveness that separates
BZ Results from its competitors and gives us a strategic
advantage."
For more information, contact SugarCRM Inc., 10050 North Wolfe Road, SW2-130,
Cupertino, CA 95014. Voice: 408-454-6900; Website: www.sugarcrm.com/.
BZ Makes a Change to Improve Results
Open Issues BY MARK DRIVER
Service and support are the subjects of an odd dichotomy
within the open source community today. As the
community has grown, it also has diversified across a wider
range of member profiles; its membership is no longer
limited to hard-core technology elitists and hackers. Instead,
there's a new wave of nine-to-five professionals participating
in the open source model who approach service and support
from very different perspectives than the old guard.
Before the elitists bemoan the "dumbing down" of open
source too loudly, let me just say you can't have your cake
and eat it, too. If open source efforts are to truly challenge
closed source technologies in open market competition, then
the traditional technology adoption cycle must apply as well.
Otherwise, open source will be relegated to the exclusive
bastion of bearded men in stretchy sweat pants and sandals
who spend 18 hours a day in front of a computer monitor.
Personally, I think there's a small element within the
traditional open source community that enjoys the label
of "alternative." These folks are the ones who, among other
things, insist on clinging to clichéd David vs. Goliath anti-
Microsoft propaganda as the principal battle cry for the
model. The rest of the community (thankfully) is learning to
count on open source for its own merit, independent of what
vendors such as Microsoft do. But I digress.
It should come as no surprise that the extreme ends of the
continuum that make up the open source community have very
different views on service and support. Consequently, rather
than one-size-fits-all, a successful service and support strategy
lies within the context of both the adopter and the technology.
In other words, an approach that serves a technology-aggressive
early adopter will almost certainly spell doom for a conservative
mainstream IT organization. However, we shouldn't assume
both profiles can't learn from each other.
Most long-time open source proponents view the
community largely as the first line of service and support
for an open source project, and they are absolutely correct.
In fact, the health of the community is the principal litmus
test for overall health and maturity of a project. However,
the mistake some make is assuming this first line of defense
is the only support channel needed. These community
members discount third-party commercial support channels
and instead rely heavily on their own technical acumen.
When they do need external help, the quick Google search
is the only tool they typically turn to. This combination
of direct technical skill and a deep community-centric
knowledge base can be a very powerful tool. Indeed, this is
one of the many strengths of open source.
The community around a successful project provides an
incredibly deep and valuable knowledge base that far exceeds
the quality of most closed source alternatives. Consequently,
a self-reliance strategy is more realistic with open source if an
IT shop has the aptitude, budget, and bandwidth to support it.
The danger comes in projecting this approach into mainstream
IT organizations where these resources are most certainly
limited. In other words, what works for Google, Amazon, or
eBay probably won't work for you unless you have a beard and
wear stretchy sweat pants and sandals to work.
At the other end of the extreme, I often find more
traditional IT organizations completely overlook community-
based service and support. Instead, they approach an open
source product as any other closed source alternative. The
reality, however, is that the community is an integral part of
the success of any open source project. It's a mistake to rely
on a commercial service and support contract via some third-
party as the sole channel of support for all open source efforts.
Instead, you must apply some common sense metrics to the
decision.
For example, the level of service and support realistically
needed for a project such as Apache Struts is a world apart
from an enterprise Linux system. Applying the same criteria
to both is either dramatic over-kill on the part of Struts or
dramatic under-kill for Linux. I often hear this question from
potential adopters, and I only half-jokingly tell them to buy
the O'Reilly book on Struts; that's pretty much the only
time you should ever pay for service and support.
Service and support in the open source world is a
continuum starting with an individual, then the larger
community, and finally a rapidly emerging market of
commercial vendors. Don't assume the rules that apply to one
adopter should apply to all, and don't assume the rules that
apply to one project should apply to all others, either. Instead,
balance worst-case scenarios with pragmatic real-world
cases; it's fundamentally an issue of risk management. The
open source model, more than any other, gives the user a
wide set of options that can be daunting. Navigating these
options is one of the critical paths to maximizing your return
on investment over the long haul.
Mark Driver is a research vice president with Gartner. He has
more than 18 years of experience in IT focused primarily on
client/server and open systems technologies. At Gartner, he
covers application development tools and best practices. He
also serves as the agenda manager for Gartners open source
research initiatives.
e-Mail: mark.driver@gmail.com
Website: www.gartner.com
Open Source Service and Support:
The Great Debate
Editor's Note: This article is an abridged excerpt from Chapter
three of the book, Troubleshooting Linux Firewalls, by
Michael Shinn and Scott Shinn, published by Addison-Wesley
Professional, ISBN: 0321227239.
As part of our goal of covering the larger security
issues before moving on to the information about how to
troubleshoot your firewall problems, we feel it's important
to cover issues that affect the security of the firewall itself.
Just because a system is a firewall will not imbue it with
some inherent lack of susceptibility to being broken into.
A firewall is just like any other system; in fact, your
firewall might be nothing more than a typical
TROUBLESHOOTING LINUX FIREWALLS:
Local Firewall Security
BY MICHAEL SHINN & SCOTT SHINN
server with two or more Network Interface
Cards (NICs) in it, running firewall rules,
while doing double duty as your fileserver,
firewall, and e-mail server. We've seen it
done. The point here is that adding firewall
rules alone will not protect your system
completely. There are other actions you
will need to take to ensure your system is
properly secured against the risks you have
identified.
The local firewall security approach is
broken into the following macro steps:
Patch your system and keep it patched
Turn off services you can't prove you need
Run services with the least amount of
privileges needed
Use chroot services (this is the process of
essentially putting the service into its own
isolated file system that, if done properly,
will be difficult for an attacker to escape
from)
Remove all unnecessary software
Install security tools to help manage
security posture and to help detect intrusions
Log events remotely to a trusted system as
well as to the local syslog subsystem
Configure your software securely
If you can, use a hardened kernel such as
grsecurity, openwall, SELinux, LIDS, and
other patches
Test your system's security and improve it.
Keep in mind that these are general
concepts, so if you have a better means of
accomplishing these goals, stick with what
works for you. Security is a complicated
process, and people seem to have their own
specific methods that work for them.
The Importance of Keeping Your Software Up-to-Date
If you haven't already picked up on the
importance of keeping your system properly
patched and up-to-date from all the stories
constantly in the media and on various
mailing lists, then let's join the chorus. You
must get into the habit of monitoring the
latest patch releases from your vendors, and
you must keep up-to-date with the various
security lists for your products. There's a
continuous daily deluge of new vulnerabilities
being discovered, any of which might affect
you. If you want to keep the probability of
intrusion down, then you really need to
make this a top priority in your IT plan. As
we've already stated, computer products,
hardware, and software are created by
people, and people aren't perfect creatures.
They make buggy software, and they keep
making buggy software, revision after revision.
Hardware is no different. It's critical you
install vendor security patches, new security
flash updates for your hardware, and other
fixes as quickly as you can after they come
out. In some cases, we recommend installing
them the day they are released. This isn't
always practical, so you will have to weigh
the risks associated with each new software
change in your enterprise against what
vulnerabilities they protect you against,
whether those vulnerabilities are exposed in
your enterprise, the problems they fix, and
whether they introduce any new problems.
This isn't a call to procrastinate. Waiting
for your peers to show you the way is a
recipe for disaster. The really bad news is that
by the time a patch comes out, the bad guys
are definitely on their way to writing an
exploit for it that will be crawling around the
Internet within days, if not hours. In some
cases, the patch might fix a vulnerability
that the bad guys found out about before the
good guys did, and are already actively
exploiting. Every day, the window of safety
between when a patch is released and when it
starts to become actively exploited shrinks,
and, as stated, sometimes that gap doesn't
exist at all.
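The monitoring habit described above is easy to automate. As a sketch (the check command and mail recipient are illustrative and vary by distribution; this is not a prescription from the book), a nightly root crontab entry can report pending updates:

```
# Illustrative root crontab entry: report pending updates each night.
# On RPM-based systems the check might be "yum check-update";
# on Debian-based systems, "apt-get -s upgrade".
30 2 * * *  yum check-update 2>&1 | mail -s "pending updates" root
```

Automation only surfaces the patches; the weighing of risks described above still has to be done by a human before anything is installed.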
Over-Reliance on Patching
With all the talk about patching and the
importance of doing it regularly, we must
caution you that patching isn't going to be a
silver bullet either. Not only is it likely that
your system will still have numerous security
vulnerabilities in it that you will need to
patch in the future, but also the programs
you are using could have fundamental flaws
in them that don't properly guard against
the risks you wish to manage. For instance,
if you are concerned with protecting the
confidentiality of your e-mails, patching
your e-mail clients is unlikely to accomplish
that without some additional security
measures, such as encrypting your e-mail. In
short, just because the software is bug-free
and working properly doesn't mean using it
is risk-free.
There are now several examples of
previously unknown vulnerabilities being
discovered in widely used software by the bad
guys, months before the good guys, such as
you the gentle reader, and vendors find out.
For instance, several very high-profile sites
were broken into in early 2004 due to an
undiscovered flaw in rsync. The only people
who were likely patched against that flaw
were the crackers who discovered the flaw
and kept it to themselves! Just because a flaw
isn't known to exist in something doesn't
mean it doesn't exist. As Carl Sagan said,
"Absence of evidence is not evidence of
absence." It's always wise to assume that when
you patch you're only doing the bare minimum
necessary to secure your system. It's
just par for the course to patch.
Turning Off Services
You have likely heard this one as well. If
you don't need a service, turn it off. We have
to add one thing to that: if you can't prove
you need the service, turn it off. Many of the
SERVICE    EXAMPLE OF USER TO RUN AS    METHOD
sshd       sshd                         Set in the sshd_config file
ntpd       ntpd                         ntpd -u ntpd
dhcpd      dhcpd                        dhcpd -u dhcpd
apache     apache                       Set in the httpd.conf file
named      named                        named -u named
postfix    postfix                      Set in the main.cf file
sendmail   sendmail                     Set in the sendmail.cf file
squid      squid                        Set in the squid.conf file
Figure 1: Common Services You Can Run as Non-Privileged Users
customers we've worked with insisted they
needed a service, or at least thought they
did. They assumed they needed the service.
After all, it was on, so it must be necessary.
A lot of services are turned on by default on
many products that aren't necessary to most
users. If you aren't sure you need a service,
turn it off. If it doesn't break anything, then
you don't need it. We can't think of a better
way to learn about a service than by turning
it off. If no one complains about it being off,
you can file it away as a service to learn
about when you have free time and focus
your efforts on the services that your users
do need. When in doubt, turn it off.
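One hedged way to take stock before turning things off is to enumerate what is actually listening. The sketch below reads the kernel's own socket table rather than calling any particular service manager, so it works on most Linux systems regardless of distribution (the /proc format shown is Linux-specific):

```shell
# Count listening TCP sockets straight from the kernel: in /proc/net/tcp
# the fourth field is the socket state, and 0A means LISTEN.
# Every socket counted here belongs to a service you should be able to justify.
awk 'NR > 1 && $4 == "0A" { n++ } END { print n+0, "listening TCP socket(s)" }' /proc/net/tcp
```

Mapping each socket back to a process (for example with netstat or lsof, where installed) then tells you which service to investigate, and possibly disable.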
Using TCP Wrappers and Firewall Rules
What article on securing a system would
be complete without a quick reminder that
you should protect all your running services
with TCP wrappers and firewall rules? For
those who aren't familiar with TCP wrappers,
as this tool is referred to, it gives you
the ability to monitor and filter TCP
connections by IP address or hostname. For
example, let's say you want to allow SSH
connections to your firewall but only from
certain hosts. Of course, you could control
this via firewall rules, but what if you don't
yet have any firewall rules on your firewall
or on the interface where you want to filter
out unwanted connections to the SSH port?
The solution is simple: use TCP wrappers.
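For the SSH example above, a minimal TCP wrappers policy might look like the following (the subnet is illustrative, and sshd must be built with libwrap support for these files to apply):

```
# /etc/hosts.allow -- permit SSH only from a trusted management subnet
sshd : 192.168.1.0/255.255.255.0

# /etc/hosts.deny -- refuse everything not explicitly allowed above
ALL : ALL
```

The deny-all default is the important part: hosts.allow is consulted first, so anything not matched there falls through and is refused.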
Running Services With Least Privilege
e next important step with any system
is to ensure the services running it are run-
ning with the least amount of privilege they
need to work. at means, for most services,
they shouldnt be running as root. ank-
fully, many Linux vendors have gotten bet-
ter in the last few years about conguring
services to run as non-privileged users and
going the extra mile to congure them to
run in restricted le systems, such as ch-
roots. Figure 1 shows some examples of
common services you can run as non-privi-
leged users.
Youll notice that we recommend you
run each service as a dierent user. In the
past, the practice was to run them all as one
common non-privileged user, such as no-
body. We dont recommend this approach.
Its literally putting all your eggs into one
basket. If someone breaks in as the nobody
user, he might have access to everything else
of importance on your system also running
as nobody. e goal is to
compartmentalize. Every service, where pos-
sible, should run as its own user. We have
additional recipes and instructions about
running services with least privilege, as al-
ways, at our Website, www.gotroot.com.
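A quick way to audit this on a running box is simply to ask ps which account each process runs under. The helper below is a minimal sketch (a real audit would loop over the PIDs of the daemons you care about); here it is demonstrated on the current shell itself so it runs anywhere:

```shell
# Print the account a given PID runs under. On a locked-down firewall,
# the daemons in Figure 1 should show their own users here, not root.
run_as() {
  ps -o user= -p "$1" | tr -d ' '
}

run_as $$    # demo: the user owning the shell running this script
```

Any daemon that reports root here, and is not on your short list of services that genuinely require it, is a candidate for the per-service users shown in Figure 1.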
Restricting the File System
Aside from managing your file permissions
and ensuring you don't have certain
critical files and directories set to be world-
or group-writable by untrusted groups and
users, we also recommend isolating certain
files and processes and mounting them
within special file systems.
The first approach we recommend is the
use of chroots, which is just shorthand for
"change root." A chroot simply changes the
root directory for a process. For instance, if
you create a directory named /chroot/named,
and you chroot the named process
to this directory, the named process will see
the /chroot/named directory as its root
directory and shouldn't be able to reach any
files outside of that directory. This essentially
traps the named process inside the chrooted
file system. We must pause at this point and
warn you that the vanilla chroot in Linux
kernels isn't nearly as secure as our
description implies. There are many known
ways of escaping chroots under Linux with
the vanilla Linux kernel.
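To make the idea concrete, here is a minimal sketch of laying out a jail directory for named. The paths are illustrative, a real jail also needs the daemon's libraries, devices, and zone files copied in, and the final chroot invocation (shown only as a comment) requires root:

```shell
# Build a skeleton jail tree for named in a scratch directory.
jail=$(mktemp -d)
mkdir -p "$jail/etc" "$jail/var/named"
touch "$jail/etc/named.conf"     # the daemon's config lives inside the jail

# Show what the confined process would see as its entire file system.
find "$jail" -mindepth 1 | sort

# As root, you would then start the daemon confined to the jail,
# combined with a non-privileged user per the least-privilege section:
#   chroot "$jail" /usr/sbin/named -u named

rm -rf "$jail"    # clean up the demo tree
```

As the warning above notes, a vanilla chroot is containment, not a guarantee; hardened kernels such as grsecurity close many of the known escape routes.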
Security Tools to Install
This is by no means an all-inclusive list,
just a listing of general categories of tools
we think you should be running and some
examples of software that can fill this need.
Log Monitoring Tools
These are tools that parse through the
system logs on the firewall to detect events
worthy of attention. Some of the better tools
can categorize and prioritize the events to
help you identify attacks or even just
suspicious behavior.
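As a small illustration of what such tools automate, the sketch below tallies failed SSH logins per source address. It uses inline sample lines in place of a real log (commonly /var/log/secure or /var/log/auth.log), and the log format shown is merely typical:

```shell
# Extract the source IP following the word "from" on each failed-login
# line, then count occurrences per address, busiest first.
awk '/Failed password/ { for (i = 1; i <= NF; i++) if ($i == "from") print $(i+1) }' <<'EOF' | sort | uniq -c | sort -rn
May  1 10:00:01 fw sshd[123]: Failed password for root from 10.0.0.5 port 4022 ssh2
May  1 10:00:03 fw sshd[124]: Failed password for admin from 10.0.0.5 port 4023 ssh2
May  1 10:00:07 fw sshd[125]: Failed password for root from 10.0.0.9 port 1111 ssh2
EOF
```

Dedicated log monitors do far more (categorization, prioritization, alerting), but the principle is the same: reduce a deluge of log lines to a short list worth a human's attention.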
Network Intrusion Detection
This might seem out of scope for a firewall,
but we think a firewall is a perfect place
to run an NIDS, or Network Intrusion
Detection System, provided your firewall has
enough disk space for the logs and enough
memory and CPU power to handle the extra
overhead of the NIDS. Generally, a firewall is
a good choke point to see traffic from many
networks as it flows through the firewall.
Host Intrusion Detection
These are tools that focus not on what's
going on with your network, but solely on
what's going on with the local system.
These tools might look at what modules
are loaded on a system, the users logged
in, when they logged in, and from where.
Others crawl the file system looking for
known signs of intrusion, files with bad
permissions, misconfigured programs, and
other behaviors or indicators of problems.
You might have to run several of these to
get the full range of features you want.
Again, this is only a partial list to get you
started. There are many excellent HIDS
tools and products out there.
TIGER: TIGER is a set of Bourne shell scripts, C programs, and data files that are used to perform a security audit of Unix systems. The security audit results are useful both for system analysis (security auditing) and for real-time, host-based intrusion detection (www.tigersecurity.org).
rkhunter: This tool scans for rootkits, backdoors, and local exploits (www.rootkit.nl/).
chkrootkit: chkrootkit is a tool to locally check for signs of a rootkit. This tool can also integrate with later versions of TIGER (www.chkrootkit.org/).
TITAN: TITAN is a collection of programs, each of which either fixes or tightens one or more potential security problems with a particular aspect of the setup or configuration of a Unix system. Conceived and created by Brad Powell, it was written in Bourne shell, and its simple modular design makes it trivial for anyone who can write a shell script or program to add to it, as well as to completely understand the internal workings of the system.

TITAN doesn't replace other security tools, but when used in combination with them, it can make the transformation of a new, out-of-the-box system into a firewall- or security-conscious system a significantly easier task. In a nutshell, it attempts to help improve the security of the system it runs on (www.fish.com/titan/).
samhain: samhain is an open source file integrity and host-based intrusion detection system for Unix and Linux (www.la-samhna.de/samhain/index.html).
tripwire: tripwire is a file integrity checking tool. It's probably one of the most well-known HIDS tools. tripwire basically generates a hash, or checksum, of all the files on your system you tell it to monitor. If a file changes, tripwire will alert you. There are commercial as well as open source versions of tripwire. The commercial version can be found on tripwire's Website, www.tripwire.com/. The open source version is available at our Website, www.gotroot.com.
AIDE: AIDE (Advanced Intrusion Detection Environment) is a free open source replacement for tripwire. It's very similar to tripwire and is even included in some Linux distributions. As with tripwire, it's a tool for generating hashes and checksums of files, and then periodically checking those files for changes. You can download AIDE from either our Website (www.gotroot.com) or from its official Website, http://sourceforge.net/projects/aide.
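The core idea behind tripwire and AIDE can be sketched in a few lines. This is a toy, not a substitute: the real tools also record permissions and ownership, sign their databases, and guard against tampering with the checker itself.

```python
import hashlib
import os
import tempfile

def snapshot(paths):
    """Map each file path to a SHA-256 digest of its contents."""
    baseline = {}
    for path in paths:
        with open(path, "rb") as f:
            baseline[path] = hashlib.sha256(f.read()).hexdigest()
    return baseline

def changed(baseline):
    """Return the paths whose current digest differs from the baseline."""
    return [p for p, digest in baseline.items()
            if snapshot([p])[p] != digest]

if __name__ == "__main__":
    # Demonstrate on a throwaway file rather than real system binaries.
    with tempfile.NamedTemporaryFile("w", delete=False, suffix=".conf") as f:
        f.write("original contents\n")
        target = f.name
    base = snapshot([target])
    assert changed(base) == []          # nothing modified yet
    with open(target, "a") as f:
        f.write("attacker was here\n")  # simulate tampering
    assert changed(base) == [target]    # the change is detected
    os.unlink(target)
```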
Remote Logging
You also might want to keep a real-time copy of your firewall's logs on another system you can trust. This system shouldn't be used for anything else if you intend to use the logs for forensic or evidentiary purposes. Judges and lawyers are starting to catch up with technology and beginning to realize how fragile digital evidence can be. If you're relying on a copy of logs kept on a system that has been compromised, you have a serious problem.
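With the stock sysklogd, keeping that real-time remote copy is a one-line change on the firewall. The hostname below is illustrative, and the loghost's syslogd must be started with the -r flag to accept remote messages; note that classic syslog travels over unauthenticated UDP, so confine this traffic to a trusted management network.

```conf
# /etc/syslog.conf on the firewall: duplicate every message to the loghost
*.*     @loghost.example.com
```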
Correctly Configure the Software You're Using
Most software comes with too many features turned on and is rarely configured to operate in the most paranoid mode. Again, assume the worst: that all the services and software running on your firewall are configured to allow the world full access to the system without so much as an account and password.
Use a Hardened Kernel
The vanilla Linux kernel is a marvel of open source development but has traditionally been lacking in truly above-and-beyond security models. The security model, particularly in the 2.4 kernel, is only adequate for systems where security isn't a primary concern, and with a firewall, that's the system's entire purpose: security. To rest easy, the security model of the vanilla kernel will not do.
With 2.6, these trusted enhancements, referred to as SELinux, or Security Enhanced Linux, are now available in the vanilla kernel. SELinux, according to the National Security Agency (NSA), provides "a flexible, mandatory access control architecture incorporated into the major subsystems of the kernel. The system provides a mechanism to enforce the separation of information based on confidentiality and integrity requirements. This allows threats of tampering and bypassing of application security mechanisms to be addressed and enables the confinement of damage that can be caused by malicious or flawed applications." You will have to check your system to see if these features are compiled in by default before you can use them. There are also SELinux patches for the 2.4 kernel, should you choose to use a 2.4 kernel.
In addition to SELinux, we're particularly fond of combining the grsecurity patches with a 2.6 kernel running SELinux, and when running 2.4, we always add in the grsecurity patches. grsecurity, another kernel security enhancement project, run by Brad Spengler, includes chroot hardening, IP stack hardening, stack overflow protection, address randomization, trusted path execution, and a real RBAC and MAC security model. The combination of the two works wonderfully for us and provides so many extra features that it's more than worth the effort to patch your 2.6 kernel with the grsecurity 2.6 features.
You can find the grsecurity patches at www.grsecurity.net. We've also collected many essays and manuals on SELinux at our Website (www.gotroot.com). You also can go straight to the source: the NSA, which funded the SELinux enhancements to Linux, offers the latest patches, documents, and instructions on how to join the SELinux mailing lists at www.nsa.gov/seLinux/.
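Checking whether SELinux is compiled in and active is quick. The getenforce and sestatus commands ship with the SELinux userland tools; not every distribution installs them by default.

```shell
getenforce     # prints Enforcing, Permissive, or Disabled
sestatus       # fuller report: policy version, mount point, mode

# Or ask the kernel build itself, if your distribution ships its config:
grep CONFIG_SECURITY_SELINUX /boot/config-$(uname -r)
```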
Other Hardening Steps
Remove any software you can't prove that you need. One easy piece of software to remove is your compiler. Regardless of what you remove, getting rid of software you don't need will reduce the number of patches you have to install, and it will also lessen your exposure to unknown flaws. The less software you have, the fewer vulnerabilities to which you're exposed.

And finally, don't assume you have any of this right. Now is the time to test the system and see if you can break in. Never assume you haven't missed something or incorrectly installed some useful security tool. If you really want to test your system, don't do the testing yourself. Have a trusted associate test the system or hire a security auditor to test the security posture of your system.
Michael Shinn is managing partner of the Prometheus Group, an IT security consulting firm. He was formerly a member of Cisco's Advanced Network Security Research group and a senior software developer and founding member of the firm's Signatures and Exploits Development Team. He served on the White House technology staff, specializing in security and penetration testing of both internal and Internet-connected systems.
e-Mail: mike@shinn.net
Scott Shinn co-founded Plesk, a server management firm. He was formerly a senior network security engineer specializing in penetration testing for Fortune 50 clients at Wheelgroup, a firm later acquired by Cisco. He also served on the White House technology staff, specializing in security and penetration testing of both internal and Internet-connected systems.
e-Mail: scott@shinn.net
26
|
ENTERPRISE OPEN SOURCE JOURNAL
|
MAY/JUNE 2006
Professor Wirth Was Right, You Know. . .
Open Systems BY LARRY SMITH
Browsing through the latest list of security issues, I was
struck by how many issues (90 percent or more, by my rough calculation) are purely the result of bounds-checking failures. That means these programs have arrays declared with fixed buffer sizes, but they don't enforce these fixed limits: they will cheerfully take extra input and continue, stuffing data into areas of memory they shouldn't be using.
In the best of cases, the extra data overwrites data
structures near the guilty array and you will get a core dump.
Count your blessings. Now you at least know a bug is present.
However, often programs will use this bad data and continue, producing totally spurious results. The original coder usually won't find these errors; he or she will never think to feed the program data that can produce them. But your users will. What's worse, these kinds of bugs are the hardest to diagnose,
since the only clue is the weird results. The support team must
somehow deduce what input actually exceeds the boundaries
of what particular array. This is neither cheap nor easy; it is, in
fact, one of the major support costs in the software industry.
However, a malicious user can target a program lacking these boundary checks and deliberately feed in long chunks of data: not random data at the end, past the boundary, but binary machine code. In the programming world, this is called a patch, and your malicious user has just added executable code to your program. Your program jumps into the patched code and, presto, your program is now the malicious user's program. And it's no longer working for you. In the real world, this is called a security issue.
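The difference between the two language traditions is easy to demonstrate. In a bounds-checked language (Python here, standing in for the Pascal family), writing past the end of a fixed-size buffer is an immediate, diagnosable error rather than silent memory corruption:

```python
BUF_SIZE = 8
buf = [0] * BUF_SIZE          # a "fixed buffer", as in the C programs above

def store(data):
    """Copy data into buf, the way an unchecked C copy loop would try to."""
    for i, value in enumerate(data):
        buf[i] = value        # the runtime checks i against len(buf)

store(range(8))               # fits: fine
try:
    store(range(64))          # 64 items into an 8-slot buffer
except IndexError as e:
    # In C this write would silently scribble past the array;
    # here the overflow is caught at the exact offending index.
    print("overflow stopped:", e)
```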
Why doesn't the programmer check those bounds? Better yet, why don't we make it harder to write code that can be so easily taken over by the forces of darkness? Because almost all these programs are written in C. C programmers, back in the day, liked C because their programs would run quicker without bounds checks, and the C compiler was much less likely to refuse to compile something. Enthusiastic programmers sneered at "bondage and discipline" languages such as Pascal. C was fast and easy; convincing a Pascal compiler to even compile your code was a major task.
Of course, Pascal was designed to be picky about the programs it compiled. It was constantly checking things: counters passing array boundaries, incompatible types in variable assignments, everything Professor (Niklaus) Wirth could think of. When the compiler was satisfied, it output a program that was slower than the same program in C, and it seemed more fragile; it was constantly dumping core or coming out with error messages, all of which had to be diagnosed and tidied up before the program could be released. That extra time was expensive, no question about it.
So, Pascal was slow and fragile; C was fast and loose. It was easier to code in C, and C won out. Pascal and its descendants are few and far between; C, and its close sibling C++, which shares most of C's warts, rule the world. Thanks to C, we now live in fear of viruses and worms stalking the net, taking over our computers. In fact, we've created an entire software industry to create anti-virus programs. So, was the trade-off really worth it? Did we really save money developing with C instead of Captain Bondage Pascal? Heck, no; not by billions of dollars.
In the last few years, some major software companies looked at this situation and decided, better late than never, that Wirth was right. From these ruminations came Java, D, C#, and others, every one of which does the whole Pascal bondage-and-discipline thing.

In fact, these companies swiped much more than just bounds and type checking: Java and C# also swiped Pascal's p-code. Yes, Wirth invented that, too, in 1970, no less, with the P1-P4 porting kits. So, 35 years ago the programming world stood at a crossroads: go the hard way with strong typing and bounds checking and p-code, or take the shortcut through the woods? Well, there's no other way to put it: We went the wrong way, and it just led us deeper and deeper into the woods.
We have unearthed the same concepts Wirth was propounding 35 years ago, dusted them off, and dressed them up to look like C, but the evidence is there, and we all know in our hearts the good professor was right all along.
Larry Smith started at Digital Equipment in 1978, and has
accumulated 27 years of experience in software engineering. He
is presently a consultant at Wild Open Source, specializing in
quality issues and user interface design.
e-Mail: larry@wildopensource.com
SHOWCASE SOLUTION BY DENNY YOST
How do you enable an IT department to quickly and
efficiently keep pace with the ever-changing business
needs of a company serving the Web-based product display
and sales needs of more than 3,000 local retailers with more
than 20 million products? That was the question Mike
Bertrand, the CTO at StepUp Commerce, had to answer.
Headquartered in San Francisco, StepUp is a leading local shopping services company that helps retailers use the Internet to display their in-store products. StepUp's mission is to expose local inventory information from retailers of all sizes to consumers wherever they are browsing products online. Its patent-pending inventory extraction tools make it easy for any store to capitalize on online consumers' growing reliance on the Web for locating products to buy locally.
"Many products from manufacturers and local retailers are not displayed on the Internet for a wide variety of reasons, including the demands of creating and managing a Website," Bertrand says. "By helping retailers and manufacturers display more of these products on the Internet and helping shoppers find the items locally at physical stores, shoppers gain convenience while local businesses receive increased foot-traffic and in-store sales." With more than 3,000 retailers and millions of product items available, StepUp's business is rapidly growing. The challenge is to continuously deliver new products and services that quickly meet the emerging needs of retailers, manufacturers, and consumers. That means the IT department must be able to respond rapidly to business needs by quickly developing new solutions across a stable computing environment and delivering them in a timely manner.
Two different development environments were considered to determine which would better meet the development needs of StepUp. "Having worked in a Java environment for the better part of a decade, we knew the speed at which applications could be developed and delivered using this technology," Bertrand continues. "However, we had recently done a few projects using PHP that showed us we could gain a time-to-market advantage by using this language in place of Java. That caused us to step back and formally review PHP against our requirements for scalability, supportability, long-term development environment needs, and the ability of PHP to withstand the demands of a real, enterprise-class company. We also needed to determine if we could truly build processes and methodologies around PHP, and implement this new development environment with best practices."
The search began for a superior PHP integrated development environment combined with enterprise-class support on a 24/7 basis. "Once we started looking at what was available in PHP development environments and the level of support we needed, it didn't take long for us to narrow our focus to the Zend Core, Zend Studio, and Zend Platform products," says Bertrand. "Zend Studio provides us with the unified development environment we needed for PHP, and an excellent debugging/remote debugging tool. Zend Platform gives us a production PHP environment in which to run our PHP applications that delivers the availability, reliability, and scalability we need. And Zend Core supplies us with a robust, supported version of PHP we can depend on. The other half of our need was support, and Zend's support has been incredibly responsive. We have needed help from Zend in achieving some low-level integration solutions, and in knowing the best practices for rolling out Zend Studio to our entire development staff. Each time we called for support we were delighted with the caliber of the expertise at Zend, and the timely help they provided."
Today, StepUp is reaping the benefits of using certified PHP, Zend Studio, and Zend Platform. "The speed of our business demands that we be able to develop and deliver new applications very rapidly," Bertrand comments. "At this time, we are creating and rolling out new applications every quarter, which is two to three times faster than we could have accomplished with the development environment previously in use. We have truly gained a time-to-market advantage."
For more information, contact Zend Technologies, Inc., 19200 Stevens Creek Blvd., Suite
100, Cupertino, CA 95014. Voice: 888-PHP-ZEND or 888-747-9363
Website: www.zend.com.
StepUp Commerce Achieves
and Delivers Business Agility
Bacula is a cross-platform open source network backup application, written by John Walker and Kern Sibbald, and primarily maintained by Kern Sibbald. Designed to scale from single machines to enterprise networks, it can back up to disk, tape, or optical media, singly or in combination. The Bacula server is developed on Linux and runs on either Linux or Unix, but clients are available for a variety of Unix and Linux versions, Windows systems (post-Win95 SE), and Macintosh OS X.
Design Philosophy
Bacula has five major components: the director, the catalog, the administrative console, the storage daemon, and the file daemon. Their roles are as follows:
The director is responsible for media pool definition, backup job scheduling, job dependency tracking, access control, and reporting. Most changes to a site's Bacula configuration occur in the director's configuration file.

The catalog is an index of files that have been backed up. It lists a date for each file's most recent backup, and where within its storage pool the file's contents can be found. The catalog is implemented with a relational database; this makes indexing large backup data sets much faster than a flat-file catalog, as the database developers have already done the hard work of developing effective index strategies for large data sets. For enterprises with large backup requirements, having a relational database index becomes a significant advantage.
The administrative console provides the interface through which the backup operator interacts with the Bacula system. In the context of mainframe systems, this will almost always be the bconsole text console. However, a Windows Graphical User Interface (GUI) console exists, and work is under way to supplant it with a much more extensible (and cross-platform) Python-based GUI.

The storage daemon is responsible for all interaction with
BACULA: Open Source Network Backup
BY ADAM THORNTON
the actual media volumes. Because all other Bacula components can access storage only via the storage daemon's interface, the actual methods used to store and retrieve backup data aren't known to them. A Bacula-specific script provides access to automatic tape loaders and libraries. Although this script is, by default, simply a wrapper around the Unix mtx command, the site administrator can replace it. This provides an opportunity to leverage technologies Linux doesn't natively support, such as VM-driven automated tape libraries. Bacula can use either its own label format or standard ANSI or IBM tape labels; using the standard labels greatly enhances peaceful coexistence with other enterprise applications using removable media. The data storage format is unique to Bacula, but well-documented in the source code.
The file daemon is a client component. It's an agent that runs on each client to be backed up by Bacula. It communicates with the director to find out what data must be stored and with which storage daemon to communicate, then transmits both the file contents and file metadata (e.g., ownership and access control information) to that daemon.
Each component may reside on a different host, or all may reside on the same host. A typical Bacula scaling strategy is to first separate each function to a separate host, then begin adding duplicate components (such as multiple storage daemons), each responsible for one tape (or other media) library. The catalog must be a single unique entity for each Bacula instance, so large sites may wish to have multiple catalogs. However, because the catalog is simply a relational database, it can be clustered or federated if the underlying database technology supports that capability. Currently, Bacula supports SQLite, PostgreSQL, and MySQL as back-end databases; of these, SQLite doesn't support clustering. PostgreSQL can be clustered using third-party patches, and, as of MySQL Version 5, clustering is supported. Some research has been done on using Oracle and DB2 as back-ends for Bacula; if these are officially supported, then they're both cluster-capable relational database management systems.
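In practice this scaling is done declaratively. The fragment below, in the style of the director's configuration file, is illustrative only: the names, password, and paths are invented, and the Bacula manual documents the full directive set.

```conf
# bacula-dir.conf (illustrative): one new client and what to back up on it
Client {
  Name = web01-fd
  Address = web01.example.com
  Password = "changeme"            # shared secret with that file daemon
}

FileSet {
  Name = "Web Content"
  Include {
    Options { signature = MD5 }    # checksum each file for the catalog
    File = /var/www
  }
}

# Incremental growth: keep per-host resources in their own files
@/etc/bacula/clients/web02.conf
```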
All Bacula components communicate with each other via well-known TCP/IP ports (configurable if you need to change the defaults). The director is involved in
Bacula Case Study:
Linux on the
Mainframe
Sine Nomine Associates faced a problem backing up Linux guests running under
z/VM. Backup support needed to:
Handle file-level backup of Linux guests (thereby excluding DDR [DASD Dump/Restore]
or other volume-based solutions)
Use channel-attached tape devices (excluding Tivoli Storage Manager, which requires
SCSI tapes for Linux, even z/Linux)
Use a single tape catalog (thereby mandating IBM tape label support) rather than
maintaining a private catalog.
Although support for the actual channel-attached tape devices (3480, 3490, and, recently, 3590) is present in Linux, there's no ATL support within Linux for S/390 or zSeries. However, because Bacula uses its mtx-changer script to drive an autochanger, and because that script presents a well-specified and documented interface to Bacula, it's possible to replace that script with something that will still look to Bacula like a SCSI autochanger but will actually have a completely different implementation.
We took the following approach: Because we were running Linux under z/VM and knew z/VM knows how to use the tape library, we chose to replace the mtx-changer script with the client side of a network service. From Bacula's perspective, the interface remains unchanged. The director can still ask for a particular slot to be loaded, for the drive to be unloaded, or for a list of which slots are populated and the barcodes of tapes in those slots.

We had to develop a network protocol that could carry these requests and replies between the Linux and VM systems. Once the protocol was determined, we had to write both server- and client-side implementations of it. On the server side, we'd be communicating with a z/VM service machine that would receive protocol commands and turn them into commands it could issue to the appropriate tape management system. We chose Rexx and the RXSOCKETS TCP/IP interface to implement the server. Performance isn't critical, since most of the time is spent waiting for tape loads anyway, and Rexx allows speedy development. The Linux-based client is implemented in Perl for similar reasons.
The server side needs to be able to load and unload specific tapes by label, and to attach and detach the tape drive from the Linux guest. This implies Class B privileges for the virtual machine actually doing the ATTACH/DETACH. Some tape management systems rely on additional service machines to handle tape manipulation and manage user authorization to issue commands some other way. It's necessary to maintain a list of tapes available for Bacula's use somewhere on the z/VM system; our implementation stores this list in a CMS file, but it wouldn't be difficult to keep this information in a z/VM relational database.
Our implementation is available at http://sinenomine.net/vm/tapemount. We developed the server implementation to isolate general protocol handling in the file BACULATM EXEC, and tape-management-system-specific, tape-and-catalog manipulation in xxxBACIF EXEC. The version available for free download includes support only for manual tape operations (RAWBACIF EXEC) and a list of Bacula tapes as a CMS file; however, since the protocol definition and calling conventions are included in the Bacula-VM.pdf file, vendors or customers can write their own back-end interface to their particular tape management system, or contract another party to provide that integration.

When deployed, this system lets Bacula put its Linux backups on a subset of the tapes owned by the storage management facility at the site. From Bacula's perspective, these tapes are simply volumes in its storage pool. From z/VM's viewpoint, these tapes are standard label tapes reserved for the Bacula application. They don't appear as scratch or blank tapes, so other applications will not try to write to them. In this way, all the goals of the backup scenario are met. File-level backup of Linux guests is performed using the existing tape infrastructure and libraries, and a single catalog of tapes is known to the z/VM system.

-A.T.
brokering component connections, but once the director has given communication instructions to the various components, they communicate directly with one another. This has the benefit that the machine on which the director resides doesn't become a network bottleneck, because the bulk of the TCP/IP traffic (the actual file contents) isn't required to go through the director's node.
The Bacula TCP/IP data transport can be protected by Secure Sockets Layer (SSL); this ensures the privacy and integrity of data in flight. Quite recently, Bacula also has gained the ability to encrypt stored data. This protects against the "tape falling off a truck" privacy exposures prevalent in recent news reports. Even if the storage volumes containing backups were stolen, without the appropriate decryption key the thief couldn't read the data.
Enterprise Features
Bacula has several features that make it attractive for enterprise sites:

Storage and transport encryption: Many enterprises are now required to ensure that any data that moves offsite is encrypted. This protects customer data from exposure in the event of the loss or theft of the device containing that data. Bacula can help meet this goal by encrypting data both in flight and at rest.
Storage pool migration: Preliminary implementation of storage pool migration within Bacula is complete, and support is being enhanced at an impressive rate. Work began in January 2006, and testable, working code was available by mid-March. (At the time of this article's publication, it should be ready for production use.) Pool migration allows construction of a hierarchically managed storage system based on Bacula, where data can be kept on high-speed online storage (such as disk) for a policy-determined time, then migrated (again, by policy) to near-line storage (such as a local tape library), and then eventually migrated offsite for archiving.
IBM/ANSI label support: Bacula can keep its tape labels in a standard label format, enabling the media it uses to coexist with other applications' volumes.

Scalability: The component architecture of Bacula makes increasing capacity as easy as defining additional storage and file daemons and adding them to the director's configuration. Because all Bacula configuration files support inclusion of other files as a standard mechanism, it's easy to incrementally add clients and file systems.

Smart media handling: Bacula can split dumps across multiple volumes and combine multiple dump or restore streams to single or multiple file and storage daemons.
VSS support: With Version 1.38 or later of the Bacula server, and a similar version of the Win32 client, Bacula can use VSS (Volume Shadow Copy Service) support in Windows XP or later versions to take virtual snapshots of the file system and obtain consistent backups of even open files such as Microsoft Exchange mailboxes.

Responsiveness to user concerns: The Bacula project seems to react with more care and concern to user requirements than many commercial backup solution vendors. The project is committed to making Bacula a first-rate enterprise backup system, and improvement suggestions from users are generally cheerfully accepted and often quickly implemented.
Other Features
Bacula has several other differentiating features that aren't necessarily enterprise-specific but are useful in various contexts.

Helpful documentation: The Bacula documentation available at www.bacula.org/rel-manual/index.html is comprehensive and lucid, a refreshing change from many other open source projects, where documentation seems largely an afterthought. With Bacula's documentation, you'll find it straightforward to determine whether what you want to do is supported, and how to do it.
Excellent bare-metal recovery: For Linux clients, Bacula includes a mechanism to produce stand-alone bootable CD images, tailored to the individual host, which will bring a bare system up far enough to restore a Bacula-generated backup. A similar, although less thoroughly automated, process exists for Windows clients.

Python script support: Bacula contains an embedded Python interpreter in the director, file daemon, and storage daemon. This interpreter has access to all Bacula objects and can be used to modify job parameters and data streams on the fly. Since this support is present in the file daemon, sophisticated client-side job processing is feasible.

Macintosh HFS+ support: Bacula (as of Version 1.38.2) supports resource and data forks on Macintosh HFS+ volumes, allowing correct backup and restoration of Mac OS X volumes that include resource forks. This support is entirely transparent to the user.
Licensing and Support
Bacula is available under a slightly modified version of the GNU Public License Version 2. The project Website is www.bacula.org. Because Bacula is released under the GNU Public License, if you make changes to Bacula and distribute the changed version, you must distribute the source code implementing those changes. Commercial support is available from a variety of vendors; the list of vendors is linked from the bacula.org home page.
Summary
Bacula is an enterprise-class open source backup system. Although it may not yet be quite as fully functional (in areas such as hierarchical storage management, for instance) as some of the big commercial vendors' implementations, it's rapidly improving, and the Bacula project is more responsive to customer concerns than many large vendors. Available free, Bacula also is much less expensive than the commercial backup solutions from those vendors. It contains many essential enterprise features out of the box, and users can leverage it to exploit a robust, mature mainframe-based tape infrastructure.

Bacula is our preferred solution for performing file-level backups of Linux z/VM guests, with coexistence with existing removable media infrastructure, adequate functionality and performance, and an unbeatable price tag. Further, Bacula isn't in any way limited to z/Linux guests; it's a workable solution for open systems Linux and Unix machines, as well as Windows and Macintosh OS X clients.

Bacula has a large, active development community with mailing lists for both users and developers. The responsiveness of the mailing lists is excellent. In addition, commercial support is available from several organizations.
Adam Thornton is a principal engineer at Sine Nomine
Associates. He has worked with z/VM (or its predecessors) since
1991 and with Linux since 1992.
e-Mail: athornton@sinenomine.net
Website: www.sinenomine.net
The Devilish Advocate BY ROBERT LEFKOWITZ
Assuming you had set a target to ensure 33 percent of an organization's software usage should be open source, various questions arise as to how to calculate that percentage. Software developers tend to think of the answer to that question in terms of measuring software bulk: the size of the code. The software industry,
however, has long measured software usage by a different
metricnumber of users. Most licensing schemes
eventually boil down to counting users because thats the
most meaningful measure of software usage and value.
Even if were not charging per user (but, then, most open
source business models do), we can still count them in
order to measure usage. If we get more sophisticated, we
can weight the usage by time (two hours of using product
A is twice as much usage as one hour of using product B).
Let's start with the easy one, counting users, because it raises a fundamental issue about using Open Source Software (OSS): Is it transitive?
Transitivity, in logic, is the property of a relationship that allows it to be passed along. That is, if a relationship holds for x and y, and also for y and z, then it holds for x and z. For example, if x is greater than y, and y is greater than z, then it's also true that x is greater than z.
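The definition translates directly into code. Here is a minimal sketch (in Python; the column itself contains no code, so this is purely illustrative) that checks whether a relation, given as a set of (x, y) pairs, is transitive:

```python
def is_transitive(pairs):
    """A relation is transitive if, whenever (x, y) and (y, z) are
    both in the relation, (x, z) is in it as well."""
    rel = set(pairs)
    return all((x, z) in rel
               for (x, y) in rel
               for (w, z) in rel
               if y == w)
```

"Greater than" on {3, 2, 1} passes because {(3, 2), (2, 1), (3, 1)} contains every implied pair; drop (3, 1) and the check fails. That failure is exactly the claim of this column: the pairs (you, Windows XP) and (Windows XP, OSS) don't give you (you, OSS).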
The question of transitivity arose when, as an experiment, I decided to stop using OSS and instead use only the most aggressively protected proprietary software. So, I purchased (among other things) Wolfram's Mathematica and Microsoft's Windows XP. As a result of my prolonged involvement with OSS, I'm in the habit of checking the copyright notice for software products. I found in the copyright notices of all these software products numerous acknowledgments of OSS. Mathematica, for example, lists GMP, METIS, PCRE, TAUCS and UMFPACK on its third-party license page. The copyright page doesn't mention HSQLDB (although it's mentioned in the documentation), which is the (open source) bundled database engine. Nor does it mention Apache Axis (also mentioned in the documentation), which is the (open source) bundled Web Services engine. So, it would seem that Mathematica includes a fair amount of OSS.
So does Microsoft's Windows XP. The copyright page for Windows XP (also known as KB306819) acknowledges OSS developed by Mark Colburn, the Regents of the University of California, Greg Roelofs, the Hewlett-Packard Company, the University of Southern California, Luigi Rizzo, Phil Karn, Robert Morelos-Zaragoza, Hari Thirumoorthy, the Massachusetts Institute of Technology, and the Regents of the University of Michigan. Sounds like a fair amount of OSS.
Which brings us back to transitivity. If I'm using Microsoft Windows XP or Wolfram Mathematica, and we know these products use OSS, can it be said that I'm using OSS? The answer is No. The relationship of using open source is not transitive. Some licenses permit the creation of proprietary derivative works using OSS. Some don't. But where such derivatives have been created, using them doesn't constitute using OSS. Hence, using Microsoft Windows XP isn't using OSS.
Likewise, if an enterprise purchases per-seat usage licenses from an application service provider that has built a proprietary application (using OSS), is the enterprise using OSS? The answer is still No. When I perform a banking transaction on my bank's Website (or search on Google) using Microsoft Windows XP and Internet Explorer, am I using OSS if they use Linux? Again, the answer is No.
OK, consider this: If, for example, an enterprise's IT department develops a derivatives trading application or a hotel reservation system using JBoss, and the trading or reservation application itself isn't released as an open source application, are the traders or hoteliers using OSS? The answer, according to the previous logic, is No. The IT department is using OSS, but the traders or hoteliers are not.
If using open source isn't transitive, how much OSS are enterprises using? Not very much. And, if it's transitive, then we may be at 99 percent open source usage. There are utilities for the Macintosh and Windows (I use Active Timer on the Mac) that keep track of how much time you spend using various applications. Looking at the statistics for the last few days, all 24 applications I used were built using OSS. Only one of them, however, was actually an open source application. That's 4 percent. I spent eight hours using the applications. I spent 13 minutes using the open source application. That's almost 3 percent.
How much OSS do you use?
Robert Lefkowitz has spent more than 20 years being a contrarian inside large IT organizations. His first open source job dates back to the late '70s as the public software librarian for a timesharing company.
e-Mail: r0ml@mac.com
How Much Open Source Software Do You Use?
The challenge enterprise architects face with Open Source Software (OSS) is how to select software that's stable and reliable enough for the enterprise environment. Finding the right open source development product is challenging.
The selection process for open source products isn't an exact science. This article provides a simple group of heuristics based on formal software engineering and empirical methods. These heuristics have worked well in demanding and formal government and healthcare enterprise development environments.
Selecting a product, whether open source or commercial, should be based on how well the product fits your functional and non-functional requirements. If an open source product surfaces as a strong candidate, you can then apply the open source-specific evaluation criteria described here.
Step 1: Introspection
Choosing the right open source product often involves assessing your own development environment and your readiness to accept open source products (see Figure 1).
How experienced is your technical team? Open source products range from the extremely stable and well-documented to scarcely documented products on the leading edge. It may be perfectly acceptable for your project to adopt the latter if your team is competent to fully understand the product and overcome the challenges associated with leading-edge products. Most of the best open source products grew from the leading-edge margins into mainstream market leaders (e.g., Red Hat Linux, Apache Web Server, Eclipse, etc.). Adopting the right open source product even while it's early in its lifecycle may give you a competitive advantage.
Strategies for Success With Open Source Projects
HOW TO EFFECTIVELY EVALUATE OPEN SOURCE PRODUCTS FOR THE ENTERPRISE
BY EDMON BEGOLI

Figure 1: Evaluation Steps (Introspection, Categorization, Examination)

How important is your application? Critics of the open source movement often use
the lack of official support for the products as their major complaint. Actually, the level of support offered via open source product support forums is often effective and sufficient. However, some enterprise applications can't afford to rely on public forums for support. So you should assess the strategic importance of your enterprise application. Would a bug in the software cause a loss of life, or a significant loss of revenue, business reputation, or market share? If so, seek software for which you can get the highest level of commercial support. Otherwise, you may be perfectly safe with the open source product.
What is the licensing and distribution model of your future application? There are dozens of different licensing models for OSS. Some, such as the Berkeley Software Distribution (BSD) license, are quite liberal, while others, such as the GNU General Public License (GPL), are quite restrictive. How do you plan to distribute your application? If you're developing in-house enterprise applications, most open source licenses will work for you. If you plan to commercially distribute your products, you should pay close attention to the license type of the product you plan to use. The Website http://opensource.org/licenses/ was specifically created for this purpose and explains open source license types.
Step 2: Categorize the Open Source Product
Part of the evaluation of OSS is to better understand the motivation for the product. Why was the project started? Who sponsors it and who are the contributors? Some projects started when commercial vendors donated their source code; some were started by companies that wanted to create more market presence; some were created to fill a void in the IT market; and some were created out of pure professional excitement. You should try to understand where the open source product you're interested in belongs and, based on that, conduct further evaluation. Open source products can be grouped into three major categories:
1. Institutional projects such as Eclipse and Apache: These projects are well-run and fully staffed, and often have logistical support from some of the largest commercial enterprises, such as IBM, Sun, and Novell. They are usually stable, safe products, probably more so than some commercial counterparts. Selecting an open source product from this category is mostly a matter of evaluating whether the product features match your requirements.
2. Commercially sponsored projects: The business model in which companies back the development of open source products but charge for professional services and product support is becoming increasingly popular. Open source products in this category are usually stable, with a large installed base, but you have to look closely at the license types. You should also consider the chance of the product's sponsor company being acquired by another company and how that might impact you.
3. Independent open source projects: This is the broadest and most diverse category. Products here range from multi-person, open source star projects to small projects sometimes run by just a single person, so these projects will require the most thorough evaluation. Be sure you understand the motivation for the project and whether you can handle it on your own if the current group of committers leaves. Some of the projects in this category could be risky for your environment, but you shouldn't shy away from considering them. Remember that most open source projects started as small, independent projects. Being an early adopter may bring significant technical and organizational benefits. A calculated risk in open source adoption can provide a big payback. You just have to do your homework before implementing any of the riskier OSS.
Step 3: Examination: Looking Under the Hood of the Open Source Product
Open source projects provide unprecedented visibility into their development process and many other factors. There's open access to the source code, issue tracking tools, project plans, and development mailing lists. You should use all these information sources to get the best possible picture of the project. Access to a project's issue tracking tools such as Bugzilla or JIRA will give you a better understanding of the product's quality and stability.

Figure 2: JIRA Issue Tracking Home Page for the Spring Framework

Look into the
issue tracking tool and determine:
The total number of outstanding issues, especially those with an elevated severity ranking
The number of outstanding issues that lack an assigned owner
The average issue resolution period.
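Both Bugzilla and JIRA can export issue lists (as CSV or XML, for instance), so these three numbers are easy to compute offline. The following sketch computes them in Python; the record layout (severity, assignee, opened/closed dates) is a hypothetical export format, not either tool's actual schema:

```python
from datetime import date

def triage_stats(issues):
    """Summarize an exported issue list. Each issue is a dict with
    'severity', 'assignee', 'opened', and 'closed' (None if still open)."""
    open_issues = [i for i in issues if i["closed"] is None]
    resolved = [i for i in issues if i["closed"] is not None]
    return {
        # total outstanding issues
        "outstanding": len(open_issues),
        # outstanding issues with elevated severity
        "outstanding_severe": sum(
            1 for i in open_issues if i["severity"] in ("critical", "blocker")),
        # outstanding issues lacking an assigned owner
        "outstanding_unassigned": sum(
            1 for i in open_issues if not i["assignee"]),
        # average resolution period, in days, over resolved issues
        "avg_resolution_days": (
            sum((i["closed"] - i["opened"]).days for i in resolved) / len(resolved)
            if resolved else None),
    }
```

Trending these figures over a few releases is more telling than a single snapshot: a rising unassigned count or a lengthening resolution period is a warning sign.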
Commonly used issue tracking tools such as Atlassian JIRA and Bugzilla provide a rich set of reporting features that will facilitate your evaluation process (see Figure 2).
Evaluate Architectural Vision
The future plans for the product are important. You need to know if there's an architectural vision for the product, how realistic it is, and how disruptive it may be for your architecture. Most open source products have well-defined roadmaps; you should analyze them and determine whether there's a coherent architectural vision. A project with a great architectural vision will likely survive the turnover of the contributors, which can be relatively high.
Consider the evolution of the ActiveMQ product. Initial releases were somewhat rough around the edges, with a relatively high defect ratio, but the product had a great architectural vision. In just six months, ActiveMQ caught up on the outstanding issues and added all the promised architectural features. It's now a premier open source Java Message Service (JMS) product and a strong competitor to the commercial alternatives.
Determine the roadmap density. For stable, well-established products, a roadmap can be sparse, but for products still in development or beta stages, it had better be elaborate and have a well-defined set of features.
Finally, decide whether the roadmap is realistic. A roadmap with many features on an aggressive schedule may be a warning sign. Some open source products have suffered from ambitious plans that could not easily be met. For example, Tomcat 5.5.12's brand-new clustering method was released too early and the project lead had to declare it defunct.
Evaluate Effectiveness of the Product's Support Forum and Mailing Lists
Mailing lists are invaluable tools in evaluating open source products. Being a member of the users' mailing list is a must. Mailing lists will be your first line of support, and the quality and speed of the responses received will tell you to a great degree whether the product is right for you. A great users' list is a major differentiator in selecting an open source product. Avoid open source products whose unfriendly and unprofessional user base is reflected on the support forums. The enterprise computing world is too serious and demanding to tolerate the chronic lack of professionalism and taste that characterizes some forums.
During the evaluation period, you might want to join a developers' mailing list. Do this in silent mode: read e-mails but don't post unless you plan to contribute some code or code suggestions. This list is for product developers only, and you should not disturb it with user questions. However, if you want to understand more about the inner workings of the team that develops the product you're considering, their attitudes, and their professionalism, then you should subscribe to the developers' mailing list.
Summary
Adoption of open source products may bring a significant level of innovation, improved productivity, and cost savings to your enterprise. You should keep your mind open toward OSS and its value, but also make a significant effort to properly evaluate the attractive products. The only way you'll reap all the potential benefits of OSS is to do a thorough evaluation.
You also should consider contributing to open source projects. You may find a product whose features would perfectly fit your requirements with a few final touches in some areas. In that case, consider investing your time to make it perfect for your environment and more valuable to other OSS users.
Edmon Begoli is an IT architect with 10 years of professional experience working on large integration and e-business projects using commercial and open source products.
Voice: 865-241-2295
e-Mail: ebegoli@gmail.com
Blog: http://blogs.ittoolbox.com/eai/software
As a roving contractor, I often use the Windows PCs supplied by the contract's IT organization. Companies purchase corporate software licenses to secure and protect their PCs with enterprise security suites. Yet their machines are often still vulnerable, sometimes even running spyware or other malware.
This is critically important because the goal of malware has shifted. Once the province of teenagers seeking thrills, malware is today dominated by criminals. Stealing corporate data is big business.
The problem has gotten dramatically worse in just the past couple of years. Pew Research states that 43 million Internet users in the U.S. have malware on their PCs. An IBM study found a 50 percent increase in attacks in the first half of 2005 (see MIT Technology Review, 12/19/05). Meanwhile, neither the U.S. government nor the international community has addressed the problem. It's up to you, the IT professional, to manage the problem yourself.
This column is the first of a series that demonstrates how to secure Windows through free and open source software. I won't discuss commercial corporate packages, which I assume your organization has installed. (If not, check out InformationWeek's article, Review: Spyware Detectors, in its 09/16/05 issue. The article evaluates 10 prominent corporate products.)
Upcoming columns will describe free and open source
tools that address all aspects of Windows security. To secure
a Windows PC, you must:
Stop incoming penetration attempts
Stop unauthorized outgoing communication
Prevent malware installs
Detect and eliminate any malware already present
Eliminate the information Windows stores to track your
behavior
Stop the tracking of your behavior during your use of the
Internet.
The good news is you can accomplish all this with free
software. The programs come in several categories. Examples
include firewalls, virus scanners, real-time install preventers,
malware scanners, file erasers, Windows cleaners, file
encrypters, file-properties cleaners, rootkit detectors, and
Web anonymizers. I'll discuss each category in upcoming
columns and offer software recommendations that address
the vulnerabilities. (I have no business relationships with
any of the product developers and my recommendations are
based on personal use of the products.)
You might wonder, why use Windows at all? Open source
Linux is now mature for the desktop. If you believe the major
IT trade magazines, there appear to be far fewer cases of
compromised Linux PCs than Windows PCs. I agree and use
Linux whenever I have a choice.
However, the vast majority of IT organizations I
encounter still use Windows, so these columns will focus
on securing Windows. (We'll save the desktop Linux vs. Windows debate for a future column.)
Some companies take a half-step into open source
software. They retain Windows as the operating system
but replace Microsoft's other products with open source
products. This keeps the organization standardized on the
Windows platform but eliminates the security vulnerabilities
of other Microsoft products. The table in Figure 1 shows
typical open source product choices.
A few IT executives have told me they're counting on Microsoft's next version of Windows, called Vista, to solve their security problems. These managers should consider that Microsoft's Trustworthy Computing Initiative is now more than three years old, and Microsoft has promised security solutions in the past several Windows releases. Look to Vista to incrementally improve security, but don't count on it to solve the problem.
In my next column, I'll dive into how
to secure Windows PCs with free and open
source software.
Howard Fosdick is the author of the Rexx Programmer's Reference, the book that covers everything about free Rexx, its interfaces, and tools. Find it at www.amazon.com/rexx.
e-Mail: hfosdick@compuserve.com
Securing Windows With Free Software
Open Mind BY HOWARD FOSDICK
Product | Function | Replaces Microsoft Product | Download
Firefox | Web browser | Internet Explorer | www.mozilla.com
Thunderbird | e-Mail | Outlook and Outlook Express | www.mozilla.com
Mozilla | All-in-one Internet application suite | Internet Explorer, Outlook, and Outlook Express | www.mozilla.com
Open Office | Complete office suite | Microsoft Office | www.openoffice.org

Figure 1: Typical Open Source Product Choices
Identifying and Overcoming Security Concerns in Today's Complex Environment
BY ARUNEESH SALHOTRA

A history of security breaches, some involving the loss of confidential customer data, is causing enterprises to re-think their existing security infrastructure and apply security patches. Consider these real examples of security breaches:
In only eight months, the security of some 45 million accounts was compromised. As Bank Technology News reported (www.banktechnews.com/article.html?id=20051003NNLIJORR), Bank of America lost tapes containing private data for 1.2 million personal accounts; ChoicePoint told 145,000 customers their data had inadvertently gone to Los Angeles County fraudsters; and another 310,000 accounts from LexisNexis, 200,000 from Ameritrade, and 1.4 million from DSW Shoe Warehouse were
compromised. Citigroup lost tapes in Texas containing private data for 3.9 million of its customers. Finally, credit processor CardSystems, which moves information for Visa, American Express, and MasterCard, admitted it exposed a whopping 40 million credit card accounts to potential fraud, and that at least 200,000 records were stolen by thieves using a parasite computer program.
Due to what Montclair State University officials called inadvertent error, the Social Security numbers of 9,100 of its students were made available online for nearly five months, putting them at risk for identity theft and credit fraud. Details of this incident were reported in an October 2005 Information Security News article, Negligence At MSU Exposes 9,100 Students to I.D. Theft (http://seclists.org/lists/isn/2005/Oct/0099.html).
In an earlier incident at another academic institution, a thief walked into a University of California, Berkeley, office and swiped a laptop computer containing personal information for nearly 100,000 alumni, graduate students, and past applicants. Details were recorded in an Associated Press article, Stolen UC Berkeley Laptop Exposes Personal Data of Nearly 100,000 (www.sfgate.com/cgi-bin/article.cgi?f=/n/a/2005/03/28/financial/f151143S80.DTL).
Reasons for the Problem
In a highly competitive economy, businesses have been under intense pressure to improve efficiency. Many have changed their business models and procedures. Services businesses, for example, have had to offer online service rather than only mail or phone service. Web-enabled applications now include online banking, e-mail, and shopping. For such applications, there's a high risk if security provisions fail. Identity theft is frighteningly prevalent as hackers take advantage of tools such as Nmap, Foundstone FPort, and LANSpy, which can help them find security holes.
The nature of security threats is varied and dynamic. One simple threat is the theft or destruction of tangibles such as desktops, peripherals, and office equipment. Most companies try to protect these systems with on-premise security and insurance. Some progressive companies also have implemented security badges, iris scanners, and biometrics that control building access. Regulatory pressures have forced some companies to adopt these measures.
The advent of the Internet has made it relatively fast and easy to reach any network. Wireless networks also are becoming increasingly common. Service companies such as Amazon are now automating order processing using Application Program Interfaces (APIs) offered by shipping firms. This can lead to security vulnerabilities: the more features service customers and clients require, the wider the spectrum of undetected security holes.
All networks with a gateway to the Internet are potentially accessible, directly or indirectly. Meanwhile, in this world of cutthroat competition, hackers break into company networks, steal information, and create service disruptions. This has led to laws designed to prevent such attacks and penalize hackers. Unfortunately, these laws aren't consistently present throughout the world, nor are they sufficient to prevent all unauthorized data access.
Types of Security Problems
Security holes include:
Unsecured e-mails and Web browsing
SQL and Unix injections
Brute force attacks
Viruses and Trojan horses
Decompilation of proprietary code and application modifications
Server application, extranet, and wireless security vulnerabilities.
A diligent employee might ignore e-mails from an unknown user, and e-mail filters can help block spam. But spam messages can pass through poorly implemented filters; they may appear to come from known users if the e-mail headers are similar. When attachments with viruses or Trojan horses are opened, this initiates a chain of events that can wreak havoc in various ways. Spam e-mails also have brought corporate networks to their knees.
Problems with Web browsing can include malicious sites that bring pop-ups and install small programs with or without user approval. These programs can change the way the machine works and delete or steal information.
SQL injection takes advantage of non-validated input vulnerabilities to pass SQL commands through a Web application for execution by a back-end database. Direct SQL commands injected this way can compromise data and the database system itself.
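A short illustration of the hole, and of why parameterized queries close it, using Python's bundled SQLite driver (the table, column, and payload are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

def find_user_unsafe(name):
    # VULNERABLE: user input is spliced into the SQL text, so an input such
    # as "x' OR '1'='1" rewrites the WHERE clause and matches every row.
    return conn.execute(
        "SELECT id FROM users WHERE name = '%s'" % name).fetchall()

def find_user_safe(name):
    # SAFE: a parameterized query; the driver passes `name` strictly as
    # data, never as SQL, so the payload matches nothing.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)).fetchall()
```

With the payload x' OR '1'='1, the unsafe version returns every user in the table; the safe version returns none. Every mainstream database driver offers the parameterized form.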
Brute force attacks use exhaustive trial-and-error methods to find legitimate authentication credentials. Parameter tampering is a simple attack targeting the application business logic. This attack takes advantage of the fact that many programmers rely on hidden or fixed fields (such as a hidden tag in a form or a parameter in a URL) as the only security measure for certain operations. Attackers can easily modify these parameters to bypass the security mechanisms that rely on them. Cross-Site Scripting (XSS or CSS) is an attack that takes advantage of a Website vulnerability in which the site displays content that includes unsanitized user-provided data. For example, an attacker might place a hyperlink with an embedded malicious script into an online discussion forum.
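The standard XSS defense is to sanitize (escape) user-provided data before echoing it back into a page. A minimal sketch using Python's standard library; the surrounding markup is invented for illustration:

```python
import html

def render_comment(user_text):
    # Escaping turns <, >, &, and quotes into HTML entities, so an injected
    # <script> tag is displayed as literal text instead of being executed.
    return '<p class="comment">%s</p>' % html.escape(user_text)
```

Applied to a forum post of `<script>alert("xss")</script>`, the rendered page shows the tag as text; the browser never runs it. The same principle holds in any language: escape at output time, everywhere user data meets markup.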
Even though most companies have implemented anti-virus software, there are always new viruses and Trojan horses to contend with.
Server applications listening for client requests run on specific ports, and commonly used applications have their ports published. Without sufficient security, companies are vulnerable to hackers who can use means such as buffer overflows to change and disrupt application functionality, causing unexpected results. Source control repositories, databases, application servers, File Transfer Protocol (FTP), and the Domain Name System (DNS) are common applications that hackers can easily reach.
Most Java 2 Enterprise Edition (J2EE) applications and Microsoft .NET applications can be easily decompiled. This especially applies to situations where enterprises give out evaluation versions of software that run on the client side for a limited time. The result can provide the client access to the code base, which makes it possible to decompile the code and create a variation of the application.
All applications run on the trust that the services they're using aren't compromised. Operating systems must trust the hardware for expected behavior; applications should trust the operating system and libraries they're using. Application behavior can be severely altered by changing any of the libraries or kernel settings.
On the wireless network front, there's been progress in the 802.1x domain, and companies are converting their wired networks into wireless networks. But sniffers and wireless cards with powerful antennas can be placed outside the office premises to gain access to the wireless network, essentially gaining access to the computer network and resources. Wireless security protocols also are prone to errors and hacking.
Extranets can provide excellent value for numerous enterprises and their respective partners, but the convenience of easy access to enterprise data must be weighed against the risk of sluggish system performance, security breaches, and unexpected disasters.
Security Decisions and Solutions
When making decisions on network security, consider these questions:
To what level does the enterprise want to safeguard its resources? This step requires identifying the resources at risk and what needs to be done to safeguard them.
Where are the resources that must be protected? Are they located in the office, at a remote site, or in multiple, separate facilities?
How much money and how many resources, including manpower, can your enterprise allocate to securing its computing and other resources? Small enterprises often can't afford to allocate sufficient resources to this task. Outsourcing the responsibility, if feasible, can provide some cost savings.
Is upper management technically savvy enough to understand and respond appropriately to security breach outcomes?
The vastness of security holes and the difficulty of addressing them become evident as we explore these questions. Fortunately, there are some solutions that can be applied.
You should periodically perform a network scan using a tool such as SoftPerfect Network Scanner, which will help you identify possible errors and loopholes. Terminal access should occur through secure protocols such as Kerberos and Secure Shell (SSH), which is much improved in the current version (see www.openssh.com/). Port blocking should occur via a firewall or gateway (e.g., CheckPoint and Symantec), especially if a resource is accessible through ports such as HTTP and FTP. Only services that are allowed should be open; everything else should be blocked.
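A periodic check doesn't strictly require a dedicated scanner; a few lines of Python can confirm that only the services you intend to expose are actually listening. This is a simplified sketch (real scanners are far more thorough, covering UDP, service fingerprinting, and timing evasion):

```python
import socket

def open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on a successful connection (port open)
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found
```

Run it against each server and diff the result against the list of services that are supposed to be exposed; anything unexpected warrants a firewall rule or a shut-down service.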
Access to your enterprise's commerce Websites should be secure. Payments should be accommodated through Secure Sockets Layer (SSL). Also, Rivest-Shamir-Adleman (RSA) is a good cryptography method to provide secure access. It uses a two-part key: the private key is kept by the owner and the public key is published.
For J2EE and .NET applications, code obfuscation should be incorporated everywhere possible using tools such as DashO, Retrologic, yWorks, and Dotfuscator Pro. This safeguards enterprise interests by helping avoid the loss of enterprise code and years of effort. You should also incorporate application executable changes and re-writes using a tool such as Fortify Application Defense. Use a checksum value over the whole application and place a copyright notice in the binary file.
User machines, browsers, and e-mail clients should be kept up-to-date, and security patches should be applied as soon as they are released. Non-standard browsers or e-mail clients should be discouraged; where allowed, they should be cautiously selected.
There are rule engines available that block access to certain sites depending on their category in the rule engine. Enterprises can subscribe to a rule engine and block access to sites deemed objectionable for their employees, avoiding adware and viruses. Employees should be advised about the e-mail spam that floats around.
It's advisable to turn on a boot-level log-on script so all users log in with a designated permission and their activity can be monitored. Attacks are just as common from the inside as from beyond enterprise walls. Operations such as file uploads should be carefully handled. There are services such as XDrive where users can upload large amounts of data; these should be blocked on a case-by-case basis.
Wireless security should be carefully defined and set up, and proper security protocols should be selected and implemented. Wired Equivalent Privacy (WEP) was found to be inadequate and was superseded by Wi-Fi Protected Access (WPA), WPA2, and 802.11i, which is also known as Robust Security Network (RSN).
Disaster Recovery (DR) sites should be defined if the enterprise needs to be up continuously or if access to work is essential. This normally is a remote site (not even in the same city). Investing in DR is a good practice, though often an expensive one. Another good practice that isn't expensive is asking your application vendor to put code, documentation, and binaries in an escrow account in case the vendor ceases operations.
A Virtual Local Area Network (VLAN) is another place where security can be compromised. Leased lines between offices represent the most secure option for setting up VLANs. (Of course, hackers can wiretap leased lines.) Due to cost constraints and the limited availability of leased lines, most companies run the VLAN over shared Internet lines. This conveniently opens up the resources, but requires strict mechanisms to ensure security without losing functionality.
Enterprises that provide extranet servic-
es should scrutinize the network architec-
ture and user environment to verify that
performance guarantees and security are
congruent with IT delivery capabilities and
Service Level Agreement (SLA) guarantees.
Companies should plan for security just as carefully as for any other project. In the planning phase, they should bring in system and network administration people who can identify the purpose of resources, potential routes of attack, the current level of security mechanisms, and the costs of software or hardware required for new security measures. In the design phase, all changes to the gateway, firewall, e-mail filters, and virus scanners should be identified, and all of this should be mapped onto the network diagram. If systems are hosted in a Demilitarized Zone (DMZ), all the information pertaining to the DMZ should be made available. Implementation is always an ongoing process. It's always helpful to have network diagrams accessible, so it's easy to look up issues when they arise and whenever a security breach requires blocking ports or shutting down resources or services.
Responsible people need to be identified and notified when problems occur. You can use filters and rules on log levels, warnings, and error messages. This can help in both responding to issues as they arise and monitoring overall security trends.
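As a sketch of this kind of level-based filtering, assuming Python's standard logging module rather than any particular log-management product, a handler with a level threshold escalates warnings and errors to an alerting channel while routine messages stay in the log:

```python
import logging

# Collected (level, message) pairs; a stand-in for a real alerting
# channel such as e-mail or a pager.
alerts = []

class AlertHandler(logging.Handler):
    """Forwards records that clear its level threshold to the alert list."""
    def emit(self, record):
        alerts.append((record.levelname, record.getMessage()))

logger = logging.getLogger("security")
logger.setLevel(logging.INFO)        # keep routine events in the log...
handler = AlertHandler()
handler.setLevel(logging.WARNING)    # ...but escalate only warnings and errors
logger.addHandler(handler)

logger.info("user alice logged in")          # logged, no alert
logger.warning("5 failed logins for admin")  # escalated to responders
```

The same pattern applies in any logging framework: the logger level decides what is recorded, and per-handler thresholds decide who gets notified.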
For more information on security solutions, visit these online resources:
www.howstuffworks.com/firewall.htm
www.snort.org/
www.imperva.com/application_defense_center/glossary/sql_injection.html (SQL Injection)
http://blogs.zdnet.com/Ou/index.php?p=43
http://antivirus.about.com/od/securitytips/a/emailsafety.htm
Aruneesh Salhotra has more than seven years of experience in
the IT domain. He currently is writing a book on managing
projects in varied technology domains that he expects to
publish later this year.
e-Mail: aruneesh.salhotra@miletustrading.com
SHOWCASE SOLUTION
Based in Luxembourg, Altralux faced a significant
challenge as it was preparing to distribute its consumer-
based, digital, night-vision products in the United States.
The company needed to quickly provide a fully integrated,
computerized infrastructure that would support all the back-
office services necessary for managing sales, distribution,
returns, order tracking, accounting, pricing, inventory, and
customer service for a growing network of highly mobile,
independent marketing agents. How could Altralux do this
without opening and staffing a U.S.-based IT operation
or spending exorbitant sums of money on licensing fees?
Furthermore, how would a company like Altralux gain
the expertise and know-how of an entire staff of highly
experienced, back-office, ERP and business professionals
necessary for implementing and managing the best practices
for such a complex technical solution?
Paul Moody, president of Altralux, took the challenge of finding a solution in stride. "In Europe, we run all our operations on open source software, and we are very confident in the quality of the software," he says. "So, we looked to see what open source solutions were available for our newest challenge. It seemed reasonable to assume we would not be the only company to have faced such a challenge."
Altralux spent some time searching for open source solutions before narrowing the field to a few choices. "We identified several individual products from different sources that could do the job, and two products that were complete ERP systems," Moody continues. "However, one of the ERP solutions was a proprietary product that could not be customized to meet the needs of Altralux. That left us with using either a bunch of individual products or the Compiere ERP system. That choice was easy. But we still had the challenge of determining how we were going to evaluate and implement this system, since we did not want to open and staff a U.S.-based IT operation."
The solution to the most perplexing challenge for
Altralux came after further researching the Compiere
solution. Moody discovered that Compiere does not
directly install or customize the Compiere product, but
chooses to operate through independent partners much like
Altralux does. Moody says, "In our research, we discovered KnowledgeBlue, a Compiere Certified Partner, which hosts and delivers the Compiere product through a turn-key, software-as-a-service solution called openBLUE."
Upon meeting and talking with KnowledgeBlue about
the many needs Altralux had to satisfy, Paul discovered
he had found a company that was much more than just
an implementation service provider. KnowledgeBlue
could manage both the technical and business aspects of
implementing and managing all the back-office needs of the
company while providing guidance and training to Altralux
employees about the best practices in implementing and
utilizing an ERP system for their business model. "We really needed more than a company that could simply implement a software solution for us," says Paul. "What we needed was a business partner that could implement the ERP system, educate us on the best way to customize all the work-flow processes necessary to conduct business in the United States, and manage the entire back-office system. We needed a complete business solution, not just the implementation of a software product. I would suspect most companies like ours need this type of complete technical and business solution."
Through the use of KnowledgeBlue's openBLUE service,
Altralux had everything they needed to pursue implementing
a fully integrated ERP system. KnowledgeBlue also provided
additional call center and fulfillment services that were
complementary to Altralux entering into the U.S. market.
"Once we had the software solution identified and a cost-effective way to have the software integrated into the Altralux operations, we were ready to begin the process of building the back-office infrastructure," Moody continues. "Normally, implementing an ERP system takes a year or longer. We couldn't afford to wait a year or more for our back-office system to be operational. The openBLUE service from KnowledgeBlue already had the Compiere product installed, running, and being used by other companies. So, it seemed feasible to have the Compiere product operational for use at Altralux in a relatively short timeframe. As it turned out, it took less than three months to have everything up and running!"
The combination of the KnowledgeBlue openBLUE
service and Compiere has provided Altralux with a winning
formula to meet its goals. "Finding a service provider that can implement a software solution critical to your business, and that can truly deliver, is a difficult task by itself," comments Moody. "Finding one that is a business partner that guides companies in how to best set up, run, and manage their back-office operations is the key to success. It is this total solution that enabled us to enter the U.S. market far faster and at a significantly lower cost than would have otherwise been possible. This is exactly what we wanted to achieve."
For more information, please contact Compiere at www.compiere.org/contact.html or
KnowledgeBlue at www.knowledgeblue.com.
Altralux Implements ERP System in
Less Than 90 Days
BY DENNY YOST
40 | ENTERPRISE OPEN SOURCE JOURNAL | MAY/JUNE 2006
The Blurring Lines Between OSS and SOA:
Re-Inventing the Future of Software
Open for Business BY NATHANIEL PALMER
In the short history of software, few technologies have
presented greater potential for disruption or opportunity
for transformation, either separately or together, than Open
Source Software (OSS) licensing and Service-Oriented
Architecture (SOA). When taken together, however, these
present the promise to radically alter how software is used,
acquired, and developed.
Today the software market is in the middle of an inflexion point, a fundamental transformation in both the business of software and how business leverages IT. One byproduct of this is that we've reached a point where it's no longer practical to believe that you either don't or won't use OSS. Nor is it practical to expect that you will use only an OSS alternative.
The reality is that for the foreseeable future, any comprehensive IT architecture must be designed with the expectation of a hybrid or blended licensing mix of both closed source and OSS. Understanding the interplay between OSS
and SOA offers the key leverage point for successful IT-led
business transformation, as well as understanding the future
of software itself.
This reality has been recognized and embraced by all
leading suppliers of software and IT infrastructure that
have begun to tout blended options and OSS entry points
as a means for accelerating adoption of their wares. System
integrators, IT outsourcers and enterprise users now give
OSS much more consideration as a vehicle for system
transformation and IT modernization without having to
rip and replace existing infrastructure and mission-critical
legacy systems.
The need for firms to realize greater returns on past
IT investments, notably by extending existing capabilities
and centralizing IT and data management, has hastened
the coalescence of OSS and SOA. This combination has
introduced comparatively low-cost, easily adopted options
for introducing the architectural and infrastructure
transformation needed to leverage the SOA model (let's call it Open SOA).
This has fueled the "land grab" of OSS developers seen over the last 12 to 18 months. Perhaps the most visible, or at least most talked about, grab was Red Hat's acquisition of JBoss. This move no doubt came as a surprise to those expecting Oracle to buy JBoss, but it's big news any way you look at it. It is the largest M&A transaction to date involving either OSS or SOA, and brings under a single roof two of the most recognized brands in the segment. Whatever Red Hat is able to do alone, however, it's the aftershocks that will matter most. The response by IBM, Oracle, Sun, BEA, and even Microsoft will be far more pronounced than if any had bought JBoss themselves. The result will be not simply a reshuffling of chess pieces, but the re-invention of the software industry.
Many of the issues that have restricted OSS adoption (confusion over competing projects, lack of support from commercial suppliers, and liability vis-à-vis murky ownership and licensing) are being incrementally resolved in parallel by many of these large vendors, from Red Hat to IBM. In the process, the software business model is shifting from proprietary interfaces and architectures to services and value-added capabilities.
Perhaps more important, much of this shift has come
through coalitions of erstwhile competitors collaborating
on the development of core components. This migration,
of course, is nothing new to the OSS community (indeed, these qualities define it); however, it's a relatively recent phenomenon for middleware vendors that in the past have worked together on integration standards, yet otherwise eschewed the community ownership model of OSS licensing.
The result of this transformation, which is now fully
under way but still in its nascent stage, is a new means
of competitive differentiation based on services (e.g., the
delivery and quality of application services), rather than
programmatic access to infrastructure. In other words,
decisions and differentiation can be based on how new
investments address business goals, rather than simply
gaining access to systems and data already owned.
For an industry that has increasingly competed on
reducing the cost of the status quo (notably through wage
arbitrage of offshore labor), Open SOA offers one of the first
real opportunities for competing to enable business agility
and innovation.
Nathaniel Palmer is president of Transformation+Innovation, a consulting, education, and advisory firm that guides business strategy and transformation through the optimization of technology, knowledge management, and process redesign. He's the co-author of The X-Economy (Texere, May 2001) and has authored more than 200 studies and published articles.
e-Mail: npalmer@os30.com
Website: www.transformationandinnovation.com
Linux Server Performance Monitoring: Technical Tips for Better Application, System, and Network Monitoring
BY TOM SPEIGHT
IT departments are increasingly asked to
deliver consistent service levels, mea-
sure performance, and document the
value of IT to the business by monitoring
and optimizing application and system per-
formance on Linux and other platforms.
While necessary, daily monitoring and tuning exercises can deplete the resources of already stretched staffs and budgets.
Commonly, IT organizations spend too
much time on details and not enough on the
big picture. Some waste time analyzing too many metrics on an ongoing basis; others pay too much attention to momentary blips and waste effort tuning problems that don't really exist. This article provides a general overview of Linux server performance monitoring and suggests a simple, effective approach (what to monitor, how often, and basic rules of thumb) to assist in optimizing performance.
Key Performance Metrics
The following are some key performance metrics to watch to ensure optimal server and application performance. The thresholds are rules of thumb, and you may need to make adjustments based on the characteristics of your individual servers. However, if you monitor these metrics consistently, you'll develop a clear picture of what constitutes normal behavior on your system and you'll be better equipped to detect and deal with problems.
Memory: If your system experiences low usable free memory (less than 15 percent) over significant periods of time (15 minutes or more), it may be suffering from a memory bottleneck. Looking at the free memory in the cache will provide a good indication of the memory that's usable. The actual free memory will usually be lower than the free memory in the cache, because it's more efficient to leave memory allocated to the process that originally requested the page until it's needed by another process. The free memory in the cache represents the memory that's usable, not just the memory that hasn't been allocated.
To confirm there's a memory bottleneck on Linux, you should examine a few metrics other than low usable free memory. You can be reasonably sure your system is memory constrained if you see significant:
• Page outs (more than 300 to 500 per second)
• Swap outs (more than one per second)
• Excessive swap space used (more than 80 to 90 percent).
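The thresholds above can be collapsed into a simple check. This sketch (a hypothetical helper, using the conservative end of each range) returns the symptoms observed:

```python
def memory_constrained(page_outs_per_sec, swap_outs_per_sec, swap_used_pct):
    """Apply the rules of thumb for confirming a memory bottleneck.

    Returns the list of symptoms observed; an empty list means
    none of the thresholds were crossed.
    """
    symptoms = []
    if page_outs_per_sec > 300:       # "more than 300 to 500 per second"
        symptoms.append("high page-out rate")
    if swap_outs_per_sec > 1:         # "more than one per second"
        symptoms.append("high swap-out rate")
    if swap_used_pct > 80:            # "more than 80 to 90 percent"
        symptoms.append("excessive swap space used")
    return symptoms
```

On a live system the inputs would come from a tool such as vmstat, sampled over the 15-minute windows the article recommends.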
If your system is regularly experiencing any of these symptoms, first examine processes consuming large amounts of memory to rule out design flaws or algorithm problems. If process behavior appears normal, you may need to consider shifting application workloads or increasing memory capacity. Remember that low free memory by itself, with no other symptoms, isn't necessarily a problem; it could simply mean that resources are being used effectively.
I/O: Once you've ruled out memory problems (which can contribute to high I/O through paging and swapping), the next place to look is I/O. You'll want to check the performance specifications of the disk(s) in question, but a good rule of thumb is to investigate any disk experiencing a sustained I/O rate greater than 100 KB/second.
As with memory investigations, you'll want to check the application performing the I/O for design or algorithmic issues, then consider whether your application load is reasonable for the server and disk(s) currently in use. An easy way to do this on Linux is to identify the processes that are consuming large amounts of CPU time. Chances are, those processes are contributing to your I/O load.
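A sketch of the 100 KB/second rule, assuming you already sample per-disk rates at regular intervals (for example, from iostat) and want to flag only sustained activity rather than momentary blips:

```python
def sustained_io_suspects(samples, threshold_kb=100, min_samples=3):
    """Flag disks whose I/O rate stayed above threshold_kb KB/s for at
    least min_samples consecutive readings.

    `samples` maps a device name to a list of KB/s readings taken at
    regular intervals.
    """
    suspects = []
    for device, rates in samples.items():
        run = 0
        for rate in rates:
            run = run + 1 if rate > threshold_kb else 0
            if run >= min_samples:
                suspects.append(device)
                break
    return suspects
```

Requiring several consecutive readings over the threshold mirrors the article's advice not to waste effort tuning momentary blips.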
CPU: It's normal to see occasional spikes in CPU usage, but a CPU Busy% consistently more than 60 percent (for 15 minutes or more) may indicate a problem, and a value consistently over 80 percent indicates a likely CPU bottleneck.
You'll want to check for individual processes using large amounts of CPU time. Any single process consuming more than 50 percent of the CPU on a consistent basis could indicate bugs or efficiency problems with the application. Or, it's possible that high CPU use is simply a signature of this particular application.
A good auxiliary metric to look at is the number of runnable processes, that is, the number of processes that could use the CPU if it were available: in other words, the size of the CPU queue. If you see high CPU usage and more than one runnable process over time, it means processes are consistently waiting for the CPU, a sure sign of a bottleneck. On the other hand, a high overall CPU Busy% with no runnable processes could mean the CPU is simply being effectively utilized.
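The CPU rules of thumb above can be sketched as a small classifier (a hypothetical helper; inputs are assumed to be sustained readings over 15 minutes or more, not momentary spikes):

```python
def assess_cpu(busy_pct, runnable):
    """Interpret sustained CPU Busy% and run-queue depth per the
    rules of thumb in the text."""
    if busy_pct > 80 and runnable > 1:
        return "likely CPU bottleneck"
    if busy_pct > 60 and runnable > 1:
        return "possible problem: processes waiting for CPU"
    if busy_pct > 60:
        return "busy but no queue: CPU may simply be well utilized"
    return "normal"
```

On Linux, the run-queue depth is the first field of /proc/loadavg or the "r" column of vmstat output.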
Two more indicators to help track down CPU issues are the total number of processes on the system (to see if your application workload is approaching or exceeding the system's capacity; your specific mileage will vary), and the presence of zombie processes: processes that are using resources but
aren't doing any useful work (which could indicate an application bug or design flaw).
File system: When file system free space is low (less than 20 percent free space or less than 10,000 free kilobytes), you should review the file system to see if there are files that can be archived or removed before the file system space is exhausted. If all the space on a file system is used, critical applications may fail, as they won't be able to write information to the file system. Similarly, a low number of free inodes (less than 20 percent free, or less than 1,000 total free inodes) indicates that some housekeeping should be undertaken on the file system before applications are deprived of necessary space.
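A sketch of the file-system thresholds. On a live system the four inputs could come from os.statvfs(), but the function itself is a hypothetical illustration:

```python
def fs_housekeeping_needed(total_kb, free_kb, total_inodes, free_inodes):
    """Apply the file-system rules of thumb: flag low free space
    (under 20 percent or under 10,000 KB) and low free inodes
    (under 20 percent or under 1,000)."""
    reasons = []
    if free_kb / total_kb < 0.20 or free_kb < 10_000:
        reasons.append("low free space")
    if free_inodes / total_inodes < 0.20 or free_inodes < 1_000:
        reasons.append("low free inodes")
    return reasons
```

With os.statvfs(path), total and free kilobytes derive from f_blocks, f_bavail, and f_frsize, and the inode counts from f_files and f_favail.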
Network: Network problems can also affect application and server performance by encumbering access to necessary network resources. A high interface collision percent (greater than 5 percent) indicates a high level of traffic on the network. This will result in a reduced level of throughput. If the traffic on the network can't be reduced, the design of the network may need to be modified.
Looking at the traffic on the interfaces of individual servers will help you isolate the source of any unnecessary traffic and plan the partitioning scheme for the network. You should watch for a high rate of input traffic, output traffic, or both, and look to reduce the level of traffic on individual network interfaces. An input or output packet rate that exceeds 500 indicates a high level of traffic, which may degrade the overall performance of the network interface and contribute to overall network congestion.
A network interface with a high error rate can also affect performance. An input or output error rate that's consistently above zero should be examined. A consistently high output error rate may indicate a problem with the local network interface. A consistently high input error rate may indicate a:
• Problem with the server's connection to the network (either with the interface or cabling)
• Problem with the network interface on another server
• General network problem.
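These network rules of thumb can be sketched as follows. Collision percent is computed here as collisions over transmitted packets, one common convention; on Linux the counters would come from /proc/net/dev or `ip -s link`:

```python
def interface_warnings(tx_packets, collisions, rx_errors, tx_errors):
    """Apply the network rules of thumb: collisions above 5 percent of
    transmitted packets suggest congestion, and any persistent error
    rate deserves a look."""
    warnings = []
    if tx_packets and collisions / tx_packets > 0.05:
        warnings.append("high collision percent")
    if rx_errors > 0:
        warnings.append("input errors")   # cabling, a remote interface, or the network
    if tx_errors > 0:
        warnings.append("output errors")  # likely the local interface
    return warnings
```

As with the other checks, the counters should be deltas over a sampling interval, not lifetime totals, so that old errors don't trigger fresh warnings.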
Trend monitoring: This helps you keep tabs on changing demands on the system, and helps you anticipate and prevent resource saturation. By collecting key statistics (for example, memory usage, CPU usage, and file system space usage) at regular intervals over time, you'll be able to determine if the values are trending toward saturation and react before server performance is significantly affected. Periodically archiving key metrics and incorporating them into a timeline graph will help you determine if a metric is trending in one direction or another.
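A minimal sketch of trend detection over evenly spaced samples, using an ordinary least-squares slope (no external libraries assumed; the helpers are hypothetical):

```python
def trend_slope(samples):
    """Least-squares slope of evenly spaced samples (units per interval).
    A sustained positive slope on, say, swap usage means the metric is
    trending toward saturation."""
    n = len(samples)
    mean_x = (n - 1) / 2
    mean_y = sum(samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(samples))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

def intervals_until(samples, limit):
    """Rough forecast: sampling intervals until the metric hits `limit`
    at the current trend (None if flat or falling)."""
    slope = trend_slope(samples)
    if slope <= 0:
        return None
    return (limit - samples[-1]) / slope
```

For example, a file system growing 10 percentage points per sample and currently at 40 percent would hit 100 percent in six more intervals, which is exactly the kind of early warning the timeline graph provides visually.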
Service level monitoring: Besides monitoring individual server resources, look at the services the servers provide. Break down key services into transactions you can routinely monitor. For example, monitoring the response time of commonly run queries over time is an effective way of gauging the relative performance of a database server. Similarly, track the availability, response time, and content of a Web page to ensure a Web server is satisfactorily responsive to user requests.
Also, look at the overall business service level, which could encompass multiple components or servers. For example, the fact that a Web page is unavailable could be related to a problem with the Web server, a router, a database server, and so on. Similarly, if redundancy is built into a service, the failure of one component may not indicate a failure of the service. The correlation between the states of the individual components provides a more meaningful representation of the overall service status.
Tracking service availability over time yields a better indication of how well the needs of users are being met than server monitoring alone.
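A transaction-level check like the ones described can be sketched as a threshold over sampled response times. The 2-second threshold and 95 percent target here are hypothetical, not from the article; the samples would come from a probe that periodically fetches the Web page or runs the canned query:

```python
def sla_met(response_times_ms, threshold_ms=2000, target_pct=95):
    """Check a batch of sampled transaction response times against a
    service-level target: at least target_pct of samples must have
    completed within threshold_ms."""
    if not response_times_ms:
        return False  # no samples means availability cannot be confirmed
    within = sum(1 for t in response_times_ms if t <= threshold_ms)
    return within / len(response_times_ms) * 100 >= target_pct
```

Evaluating a percentage of samples, rather than the average, keeps one slow outlier from masking generally good service, and vice versa.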
Conclusion
Performance monitoring need not be complex or burdensome to be effective. With the right approach (less is more), following simple steps on a consistent basis will pay off over time. As with any rules of thumb, the thresholds and other recommendations given here will need to be adjusted based on the characteristics of your expected workloads, your specific hardware infrastructure, and your company's performance expectations. However, if you monitor these metrics consistently, you'll develop a clear understanding of the big picture of how your Linux systems perform, and you'll be able to spot subtle changes in performance that need to be investigated before service levels are affected.
Tom Speight is a senior consultant and Technical Support
manager at Heroix, where he has spent the past eight years
working with clients to help them maximize application and
system performance on Windows, Unix, and Linux.
e-Mail: tom.speight@heroix.com
Website: www.heroix.com
Perspectives on CAOS BY RAVEN ZACHARY
If you look at the recently funded Open Source Software (OSS) vendors, you'll see a pattern. Providing an open
source version with an additional enterprise or professional
version that includes extra features but uses a more restrictive
license seems to be the preferred open source business model
these days. Im not talking about dual licensing here, which
generally refers to two licenses for the same product; Im
talking about OSS vendors providing two distinct, functional
versions of a product. e more appropriate term would be
dual versionor simply stated, upsell.
The Upsell Model
OSS vendors following an upsell business model provide an open source version of their product with a basic feature set and include additional features in an enterprise version, but without providing access to the enterprise version's source code (although Red Hat and Alfresco, for instance, do). The goal here is to generate a community of users and contributors by giving away the open source version while also selling an enterprise version with additional features. If this business model is properly executed, the open source version generates a large user base, and the company then goes about converting a percentage of these users into paying customers, either through support and services or through upselling users to the enterprise version with an associated software license fee. Some vendors don't even provide a paid support option for the open source version, requiring a customer to purchase the enterprise version first if they want support. It's easier to convert a user of your software into a paying customer than it is to sell to a new customer, as you can imagine.
The Pure-Play Model
Vendors using the pure-play open source model, meanwhile, provide a single version of a product (or project, depending on your perspective) under an open source license and then go about converting users into customers with support and services, but without associated software license costs. In this model, everyone gets the same version, and you pay only for what you need (in terms of support, training, etc.).
The Preferred Model
So, is one business model better than the other? It all depends on your point of view. For an investor, the upsell business model looks like a safer bet. Revenue is generated not just from services, but also from licenses. In the proprietary software world, licenses still make up a significant portion of a software vendor's revenue. A pure-play OSS vendor, on the other hand, is less confusing to end users. If an organization wants support or services for an open source project, it simply pays the vendor for the software it's already using.
Both types of vendors have communities of users and paying customers. But are the communities similar? Will an individual contributor be less willing to donate time and code to an open source project that serves as the basis for an upsell vendor's licensed product? Why code for free when someone else is making money from your efforts? It makes sense that an upsell model would increase revenue at the expense of limiting participation in terms of developer contributions.
But maybe that doesn't matter. SugarCRM and JBoss, which have differing business models, both employ the primary developers on their projects. If the core development teams are employees, then the community becomes much more about users than developers. And these are companies after all, not charities.
Because of the nature of OSS licenses, both types of vendors run the risk of spawning their own competition, but I believe the upsell vendors run an even greater risk. Why? Community fears of centralized control. Red Hat and SugarCRM, which both follow an upsell model, have generated competition based on their own source code: CentOS and vTiger. However, neither of these competitors has been much of a concern, and some argue that derivatives such as these will ultimately drive additional customers to the source vendor.
It's too early to tell which business model is preferred by the enterprise and is more likely to succeed. However, I believe the pure-play model makes more sense for the enterprise and that we will see a movement in this direction over time.
Raven Zachary is a senior analyst and the open source practice head for The 451 Group, a technology analyst company. At The 451 Group, he's responsible for the Commercial Adoption of Open Source (CAOS) Research Service.
e-Mail: raven.zachary@the451group.com
Website: www.the451group.com
The Upsell Opportunity
2006 JBoss Innovation Award Winners:
Kroger: Converting open source savings into a new Grid
Kroger Co. spans the vast majority of the United States, operating over 2,500 grocery stores, 790 convenience stores, 430 jewelry stores, and 42 manufacturing and food processing plants. They were selected as a JBoss Innovation Award winner in the ROI category for migrating many of their mission-critical applications to JBoss, thus saving over $400 per CPU in yearly maintenance costs and over $100,000 in yearly license fees. Kroger used these savings to fund a shared infrastructure (grid) system that has boosted their overall computing capacity by 40% and will save them an additional $70,000 per year moving forward.
Stanislaus County and Atomogy: Migrating to a more responsive law enforcement
solution
Stanislaus County, California, located 90 miles east of San Francisco and Silicon Valley, successfully migrated critical line-of-business criminal justice and law enforcement applications from mainframe terminal-based systems written in COBOL to enterprise Web apps built using JBoss AS and its high-performance Java Enterprise Edition implementation. This migration resulted in improved responsiveness and functionality, increased use of industry standards, reduced development and operational costs, improved architectural flexibility, and easier integration with affiliated government agencies.
http://www.jboss.com/innovation/innovationawards
The JBoss community is strong and growing. With almost 10
million downloads of the JBoss Enterprise Middleware Suite
(JEMS), JBoss wanted to recognize and congratulate the
developers worldwide who are pushing the boundaries and
deploying JEMS in innovative ways. The JBoss Innovation Awards were created to highlight a few of the best.
The inaugural JBoss Innovation Awards drew nearly 100
nominations from JBoss users, customers and partners.
Winners were selected in 11 categories based on their use of JEMS to improve business processes, overcome technology challenges, and enhance their organizations' bottom lines.
JBoss Innovation Award winners will be recognized at a
ceremony during JBoss World 2006 in Las Vegas on June 15th
and will present overviews of their projects on the Innovators
Track. During the conference, attendees will vote among
category winners for the JBoss Innovator of the Year, who will
be revealed live at the ceremony.
For more information about the JBoss Innovation Award
Winners and Honorable Mentions visit us online at:
http://www.jboss.com/innovation/innovationawards
By Rebecca Goldstein
RLPTechnologies: Building a real-world SOA platform on JBoss
RLPTechnologies, Inc., a wholly-owned R&D subsidiary of R.L. Polk, chose JBoss Application Server and Hibernate as the foundation for the SOA-based platform that has revolutionized how data is collected, enhanced, and compiled into data warehouses. This has resulted in a 70% increase in data-file processing performance and a 400% increase in scalability, as well as an overall enrichment of the timeliness, accuracy, and quality of R.L. Polk's data for the automotive industry.
Met@Logo: Using JBoss jBPM to expedite and simplify government processes
The Met@LoGo project, in cooperation with OpenPELGO, aids local government agencies throughout Latin America by leveraging JBoss jBPM workflow technology to create eGovernment solutions that automate complex business processes. Met@Logo has allowed many local government agencies to dramatically reduce the time necessary to complete business processes, reduce overall project costs, and dramatically improve service to their constituents.
Cendant TDS: Leveraging JBoss to deliver travel services more efficiently
Cendant Travel Distribution Services Group, Inc. (TDS) deploys core online travel services across a massive
farm of JBoss servers. They selected JBoss Application Server and its unique JMX capabilities to serve as the
foundation for their new core services container, allowing them to provision travel services more quickly and efficiently to a number of leading travel sites including Orbitz.com and Cheaptickets.com.
J. Craig Venter Institute: Sequencing DNA across a clustered JBoss environment
J. Craig Venter Institute, a not-for-profit research institute dedicated to the advancement of the science of
genomics, built a clustered JBoss messaging environment that can process over 40 million traces in batch
across over 100 DNA sequencers. By choosing JBoss open source products, they dramatically reduced their development time and increased the stability of their applications, while also saving one of the leading DNA sequencing organizations in the world over $500,000 per year in licensing and maintenance costs.
ADP: Using Hibernate to ensure delivery of paychecks worldwide
Automatic Data Processing (ADP), one of the largest payroll and tax filing processors in the world, chose
Hibernate and other JBoss products to improve uptime and reduce cost for their EasyPayNet 5 and TeleNet 1.X
Web-based payroll systems. By leveraging Hibernate and their AJAX Adaptor for Hibernate, ADP simplified their
development process while improving the performance of their data caching.
Cendant TDS: Leveraging Portal technology to improve service for travel agents
Galileo.com is the business unit of the Cendant Travel Distribution Services division (TDS). They chose JBoss
Portal to improve user experience, reduce transactions and reporting times, and reduce costs and overall
development time for building myaccount.gallileo.com, their portal for providing self-service capabilities to thousands of travel agents
and suppliers that purchase hotel, airline, tour group, cruise, and other travel-related services.
http://www.jboss.com/innovation/innovationawards
46 | ENTERPRISE OPEN SOURCE JOURNAL | MAY/JUNE 2006
Lexicon Genetics: Using JEMS to deploy truly mission-critical applications
Lexicon Genetics is a biopharmaceutical company focused on the discovery of breakthrough treatments for
human disease. They needed to reengineer a legacy application from PHP/Apache to an enterprise platform
and chose JBoss Seam to glue together Hibernate, JSF, EJB3, and JBoss jBPM. The solution combined
development simplicity with the robustness needed to deploy truly mission-critical applications for the Texas
Institute of Genomic Medicine (TIGM). JBoss Seam's direct integration of JSF made it the perfect framework for reusing
their existing custom JSF components, which provide a rich interface for their users.
DataSynapse: Helping enterprises virtualize J2EE and SOA-based applications
DataSynapse, a global leader in virtual application infrastructure software, enables enterprises to dynamically
provision and execute virtual JBoss clusters without introducing any additional complexities to the
applications being deployed. Their applications virtualize J2EE and SOA-based applications and services
deployed in JBoss to run in a shared computing infrastructure (Grid) environment and help enterprises improve business agility while
reducing the cost and complexity of their IT infrastructure.
Amentra: Helping enterprises deploy JBoss solutions through mentoring
Amentra, Inc., a leading IT consulting organization, helps Fortune 500 enterprises deploy mission-critical
systems on JBoss technology through a formal, experience-proven mentoring and software development
program. Amentra has earned industry accolades for combining two areas that have historically been
separate service offerings into a single solution: deliverable-based project solutions integrated with IT
mentoring.
Honorable Mentions:
JBoss would also like to recognize and congratulate the honorable mention winners in each category for their remarkable skill and
technical achievement.
ROI: Appear Networks AB (Kista, Sweden), La Petite Academy (Chicago, Illinois)
Migration: Vodafone (Maastricht, Netherlands), Daiwa Securities America (New York, NY), University of Utah Healthcare (Salt Lake City, Utah)
SOA: Fiserv, CIR. GalaxyPlus Credit Union Systems (Troy, Michigan), Compucredit Corporation (Atlanta, GA)
Business Process Management: RUNA Consulting Group (Moscow, Russian Federation), Smartmatic Corp. (Boca Raton, Florida)
New Generation Technology: MIRO Technologies (La Jolla, California), Ready Technology Limited (London, United Kingdom)
Portal: ADP (Roseland, New Jersey), Jahnvi Consultants (Ahmedabad, India)
Clustering: Helsinki Institute of Physics, Technology Programme (Geneva, Switzerland), Riptown.com Media (Vancouver, Canada), Tieline Research
(Malaga, Australia)
Persistence: RUNA Consulting Group (Moscow, Russian Federation), Stefanini IT Solutions (Curitiba, Brazil)
Core Infrastructure: AutoSkill International Inc. (Ottawa, Canada), Broadmedia Technology Ltd. (North Shields, United Kingdom), Harvard Medical
School-Partners HealthCare Center for Genetics and Genomics (Cambridge, Mass.), Harte-Hanks (Austin, Texas), Kroger (Cincinnati, Ohio)
Certified Solution Provider: DataDirect Technologies (Bedford, Mass.), Quest Software, Inc. (Aliso Viejo, Calif.)
To learn more about the JBoss Innovation Award Winners and Honorable Mentions, please visit us at
http://www.jboss.com/innovation/innovationawards.
If you are looking for alternatives to pricey, complicated open systems backup products, then look no
further. If you need a backup product that is simple to use, very cost-effective, and backed by a personalized
technical support model, then consider FDR/UPSTREAM's Family of Products.
Thousands of sites have trusted INNOVATION's backup products for decades. Take the INNOVATION
challenge and see how FDR/UPSTREAM can help you manage your UNIX and distributed
backup issues.
Innovation Data Processing offers a FREE 90-day No-obligation trial to evaluate the product in your
environment. To order the trial, request documentation, or obtain an UPSTREAM white paper, please don't
hesitate to call us at (973) 890-7300, or email sales@fdrinnovation.com.
CORPORATE HEADQUARTERS: 275 Paterson Ave., Little Falls, NJ 07424 (973) 890-7300 Fax: (973) 890-7147
E-mail: support@fdrinnovation.com sales@fdrinnovation.com http://www.innovationdp.fdr.com
EUROPEAN FRANCE GERMANY NETHERLANDS UNITED KINGDOM NORDIC COUNTRIES
OFFICES: 01-49-69-94-02 089-489-0210 036-534-1660 0208-905-1266 +31-36-534-1660
Searching for Alternatives
to Pricey, Complicated Open
Systems Backup Products
Can Be a Hassle.
LOOK NO FURTHER!
Reliable Backup/Recovery
Extensive Multi-Platform Support
Backup to Disk or Tape
Centralized Administration
Broad-Based Device Support
Affordable License Fees
SAN Express LAN-Free Backup
Hot Database Agents
ALL FROM A VENDOR PROVIDING BACKUP SOFTWARE FOR OVER 33 YEARS!
The UPSTREAM Reservoir extends the power of UPSTREAM so organizations can
now utilize UPSTREAM either in mixed z/OS mainframe/open systems environments
or entirely non-mainframe environments.