
A Small Treatise On Computer Anti-Forensics - Part One

======================================================
"...All the world will be your enemy, Prince with a Thousand Enemies, and
when they catch you, they will kill you; but first... they must CATCH
you."
R. Adams, Watership Down, 1978
By An. Onee. Moose
Part 1 : A Mighty Fortress Is Your Computer
-------------------------------------------

Preface
-------
Who am I? That's not important, except that I am quite skilled in IT
security. So you can assume that I more or less know what I'm talking
about. Keep in mind that I am by far not the only one with this kind of
experience... see the excellent Fosdick article, which is appended below
my own.
It is a reasonable assumption, incidentally, that certain minor
characteristics of the document below have deliberately been altered to
make it difficult for the authorities to trace the origin of the document
back to myself. So if you're someone from the Home Office, CIA, NSA, FSB
or MI5, trying to find out who the nasty bloke is that's helping the
"perps" hide their secrets, well, sod off on my behalf.
You won't find me, at least not by deconstructing this document. Go do
some real police work, as opposed to enforcing a police state.
Why Am I Writing This?
----------------------
Many reasons. You will see and hopefully understand some of them, in the
material that is listed below. But above all, it's because I want to
restore the balance between the state (well-funded, powerful, ruthless,
all-encompassing) and the individual (poor, 'playing by the rules',
isolated), at least insofar as this concerns the area of safeguarding the
privacy of individuals.
Oppressive governments, and also other sinister organizations such as the
American media industry, have finally become wise to the fact that
gaining unauthorized access to the digital information kept on storage
media like hard drives, flash memory devices and so on, is a perfect way
for them to build the "evidence" needed to harass and punish individuals
for "crimes" that should really not be "crimes", in the first place.
There are untold examples of this, and what is particularly disturbing
about it is, whereas in the past, it was really quite difficult for an
intruder to "build a case against you" (typically it would involve actual
physical access to storage cabinets and so on), now, you can be arrested,
tried, convicted and possibly even jailed or executed, all because of
someone disapproving of the content stored on your hard drive. The
possibilities for abuse of this power, if held unilaterally by
governments, intelligence agencies and groups like the RIAA and MPAA, are
practically unlimited.

I mean to even out that balance and give you a few weapons to use against
your local constabulary, as they come knocking at your door to arrest you
for "possessing subversive computer files". In most places of the world,
they can't convict you if they can't find anything. I want to stop them
from even KNOWING that you have anything "interesting" on your PC.
What is Computer Forensics?
---------------------------
First of all, we have to define "forensics" before we can define its
opposite.
"Forensics" (basically) is, "the art and science of finding things that
aren't obvious, particularly, the art and science of finding things that
were deliberately hidden". The term "forensics" originated in the law
enforcement world in which forensics experts would, for example, check a
strand of hair found on a murder scene with the DNA of a suspected
murderer, would check the characteristics of a bullet with a gun that the
murder suspect owned, and so on.
Fair enough -- if we are to have even the semblance of law and order, or
a marginally safe society (to say nothing of convicting only people who
actually committed a crime, as opposed to anyone that the police happen
to pick up off the street), ordinary forensics has a legitimate place in
doing this.
Now, COMPUTER forensics is often, deliberately confused and conflated
with conventional "crime scene" forensics, but in fact, although the two
disciplines do share a few superficial similarities (namely, "looking for
hidden things"), they are in fact substantially different, so much so in
fact that I am going to argue that on balance, computer forensics are a
BAD thing, not a GOOD thing. For example:
* "Normal" (conventional) forensics, are almost exclusively used in
circumstances in which there is no reasonable doubt that a crime --
usually a serious one like murder, rape etc. -- has already occurred.
(The presence of that bullet-holed body lying on the floor in a pool of
blood, is usually a pretty good indicator that a crime has happened here.)
With computer forensics, as often as not, the purpose is to determine if
a "crime" (see below for why this is a problematic concept) has even
occurred in the first place. In other words -- the fact that a computer
is sitting there, connected to the Internet, by itself proves nothing. It
is how it is _used_, that defines the "crime" (if any) that the computer
has been used for. This is a deeply troubling concept if you think
through its implications.
* Here we get to the real issue: virtually ALL "computer crimes"
deserving of forensics investigation, are crimes not of social consensus,
but of subjective definition and discretion on the part of "the
authorities" -- usually, of an average police officer, sometimes by a
secret police thug.
What do I mean by this?
Well, it's actually a pretty simple idea. When, for example, we see a
body shot full of bullet holes, lying on the ground in a pool of blood,
there is a consistent consensus among all but a tiny fringe
element that "shooting and killing other people is a bad crime for which
the perpetrators should be punished". We don't need debates about "what
counts as a bullet hole" or "how much should the victim have been
bleeding for it to count as a crime". Everyone intuitively KNOWS that
murder has to be a crime for society to keep functioning.
Nothing could be less true of the vast majority of "computer crimes",
such as the ones that I will be trying to show you how to cover up and
conceal, later in this document. Nothing even remotely similar to the
"body lying in blood" situation applies to the collective consensus on
the criminality, if any, of these types of computer activities. In a vast
range of activities for which PC users might want to conceal
"incriminating" data from the authorities, if you asked ten people on the
street, "is having this kind of data on your computer, a crime for which
someone should be punished and go to jail", you couldn't get two or three
people to understand and agree, let alone ten out of ten.
There are untold infamous examples of this, but let me just quote one.
From time to time, we see media stories about people being hauled into
court and in some cases severely punished, with their reputations always
ruined by sensationalistic tabloid media headlines, such as "FATHER
CHARGED WITH CHILD PORNOGRAPHY", "INTERNET KIDDY PORN RING BUSTED, POLICE
SAY".
Sound good to you? I mean, surely you're for protecting kids from
perverts... aren't you?
But, you see, in fact...
In the first case, a father took a few pictures of his own 3 year old
daughter splashing around, happily nude, in the family's backyard wading
pool. He took his digital camera in to a photo shop to have some prints
made, one of the technicians at the shop decided that this was "child
pornography", called the police, and the next thing that the poor man
knew, he was dragged into court with his name and reputation totally
ruined by having his mug shot published in the local newspaper, along
with wildly misleading charges of "distributing kiddy porn" that were, of
course, all quietly dropped later when the police and prosecutors had to
provide some real evidence of criminal intent to the magistrate. Too
late, I'm afraid; the damage is done and there's no way to undo it.
In the second case, a bunch of teenagers, adolescent hormones raging,
started sending nude pictures of each other (girlfriends and boyfriends)
back and forth, not only directly over their camera-equipped cell phones
but also over a social networking site (the pictures involved were never
made publicly available, they were only stored on the "perpetrators'" own
private storage spaces).
Now, the problem here was, some of the young people involved were under
the legal minimum age for sex, in their part of the world. So, the
crusading local prosecutors and police charged ALL of them with
"distributing child pornography"... that is, the police wanted to
humiliate and jail these teenagers for distributing "indecent"
photographs OF THEMSELVES. On top of this, the youthful "perverts" in
this case have now all been put on American "sex offender registries", a
Mark of Cain that will destroy their ability to get a job, a loan, or
anything, for the rest of their lives. (Like the notorious U.S. "No-Fly
List", once you get put on one of these sex offender blacklists, there's
no way to get off of it. You're screwed, forever.)

You mean you didn't know that in some American jurisdictions, if you are
under age, and you take a nude picture of yourself, and you post it only
in your own private section of a social networking site (or you have it
only on your own cell phone), that means you're subject to the same
punishment as a pervert who rapes 5 year old children in front of
streaming video? You mean you thought that the wise lawmakers of this
U.S. state, might have been a bit more discriminating in drafting the law
that currently sweeps both types of "kiddy porn distributors", in the
same dragnet?
Silly you.
The larger point in all of this is, when we start to get into the realm
of "crimes of definition", we're talking about "crimes" that are only
that because conservative lawmakers, the police, or some noisy special
interest group with a narrow agenda -- one usually endorsed by the
general public only out of vast ignorance of the details really
involved -- wants the activities in question to be criminalized.
The classic example of this is homosexual literature, which was for years
in Western countries (still is, in much of the Third World) routinely
labeled "filthy unnatural pornography" and for which you could go to jail
if you were caught possessing it. But there are many other examples and
the theme that you see consistently running through them is that the
authorities have a tendency to make these rules up as they go, simply
because they need a convenient excuse to crack down on sexual, political,
social, religious, cultural or other minorities that either the police or
the conservative authorities just want to harass and humiliate.
In other words, the police and the authorities define as a "crime" some
activity on which you could never get a real social consensus, then
they go about what policemen love doing, that is, getting a power rush by
harassing, beating and humiliating people who just want to be left alone.
One of the prime tools for doing this, is computer forensics, because it
allows the police to rummage through their victims' private digital
histories, hoping to find some sliver of "evidence" that they can use as
"proof of having committed a crime". The police may not know what they're
looking for, when they start out, but they'll take anything that shows
up, as long as it helps them get a scalp and a conviction.
All of this is far different from the "body lying in the pool of blood"
scenario mentioned above. Society clearly IS threatened, by people being
murdered; it clearly is NOT, by fathers taking innocent pictures of their
children in a swimming pool or by teenagers showing off their bodies to
other teenagers. Yet the police would far prefer to prosecute the latter
type of crime over the former, simply because going after ordinary people
who have no idea or intent of doing something really anti-social, is much
easier and satisfying to the authoritarian nature of the police, than is
the difficult, highly work-intensive job of going after an experienced,
hardened, real criminal. The crying, confused, bewildered teenagers that
the police haul into court won't shoot back at the cops. The guy who
murdered the other gangster, will. The police know that, and they pick
the easy job.
MY job, is to make that "easy" job of harassing those "guilty" of "crimes
of definition", as hard as possible for the police. And to do that, I
intend to give you the knowledge to defeat their forensics experts.

But Aren't You Just Helping "The Bad Guys" Evade Righteous Justice?
--------------------------------------------------------------------
I can't tell you how much contempt I have for this stupid argument, which
comes up all the time whenever ordinary (read: "ignorant") people ask me
about why I help people on the Internet -- i.e., people who I've never
met and therefore have no idea if they're good or evil -- to hide data.
The standard bogeymen, who are inevitably trotted out to justify any and
all government spying on private communications (and, by inference, any
and all restrictions on private use technology designed to thwart that
spying), are:
* Child pornographers / paedophiles / sexual minorities of various types;
* International terrorists (hello, Usama!);
* Drug dealers;
* Cyber-criminals of various types (for example East European fraudsters);
* Crooked businessmen (hooray for Enron); and
* Anybody that the local authorities think the population hates or
distrusts.
The most famous way of putting this fatuous belief is, "If you don't have
anything to hide, then you shouldn't be afraid to let the police see
everything that you're doing."
There are so many good rebuttals of this line of "reasoning" that I won't
list them here, except to say that I simply don't believe the assertion
that "the state" (meaning, "the police, who enforce the demands of 'the
state'") has ANY RIGHT WHATSOEVER to its citizens' private data. None,
zilch, null set, call it what you want -- the evidence of history is
painfully clear here, that governments will inevitably expand the
envelope of what they consider a "legitimate" reason to spy on
individuals, until (recent example), the jaunty old Home Office RIPA Act
(which was passed "to give Scotland Yard the tools they need to break the
encryption being used by Islamic terrorists") has been used by local
councils to spy on married couples "suspected of registering their
children in the wrong district school".
The point here is that governments, and the police -- even those of
so-called "liberal democracies" such as the U.K. and the U.S. -- will
INEVITABLY abuse any power they get, to spy on their citizens. THEY CAN'T
HELP IT, THE TEMPTATION TO ABUSE THEIR POWER IS IMPOSSIBLE FOR THEM TO
RESIST. SPYING ON, ABUSING AND OPPRESSING CITIZENS IS SOMETHING THAT
COMES NATURALLY TO THE POLICE. IT'S WHAT THEY DO. IT'S WHAT THEY WANT TO
DO, AND WHAT THEY LIKE TO DO.
You can no more expect a policeman to "refrain from unjustified
surveillance of legitimate dissent" than you can expect a wolf or tiger
to pass up that juicy fresh steak that just got dropped inside their
cage. Sinking its teeth into that blood and flesh is as innate to the
carnivore, as is the urge to spy, to listen in on, to oppress and punish,
to a cop or intelligence agent. That's what they do. That's what they're
all about. No amount of nice talk or promises "not to do it again", is
going to work. They are what they are, and you're kidding yourself if you
let yourself get convinced that they're ever going to change.
This being the case, you need a weapon to fend off the police and their
willingness to ruin your life for activities that you have every right to
undertake.

I aim to give you that weapon.


But Surely You're Not For "Kiddy Porn", Are You?
------------------------------------------------
This is the "nuclear weapon" that advocates of pervasive government (and
private sector) spying inevitably fall back on, whenever someone like me
points out the terrible track record that large institutions have on
respecting individual privacy and shoots down all their other weak
excuses for leaving people at the mercy of police snooping.
"But", plead these supposedly well-meaning types, "If you show everyone
how to hide data on their computers so the police can't get at it, aren't
you just giving paedophiles and child molesters the ability to abuse
children and escape being caught and prosecuted for their perverted,
nefarious deeds? Why, doing that makes you JUST AS BAD as the paedophiles
themselves! You MONSTER, you!"
I could spend hours on this topic, but let me just touch on the most
important and obvious refutations of this tiresome red herring argument:
-- No, providing a tool to hide evidence of a crime (all assuming, of
course, that this activity IS a crime -- more on that in a minute), is
NOT the same as committing a crime. If it was, throwing a tarpaulin over a
getaway car would be the same as holding a gun and shooting someone dead.
This assertion, therefore, is simply and demonstrably false, and I hugely
resent the implication that I'm somehow "complicit in child abuse" by
providing people with the security tools they need to keep themselves
safe from oppressive governments. How dare these self-righteous
busybodies accuse me of that. Drop dead, fuck off, but DON'T call me a
"child molester". If I catch you doing it to my face, I'll punch you in
the nose; then I'll kick your fucking head in. I MEAN IT.
-- Furthermore, is "looking at child pornography just as bad as molesting
children"? Here again, most people are afraid of stating the obvious
(lest they immediately be slandered as "a paedophile sympathiser"), which
is that it obviously isn't. As a parent, would you rather the pervert
down the street stay at home, masturbating over naked pictures of little
boys, or would you rather they physically anally rape your 8 year-old
son? Not a hard decision, is it?
-- Do I get excited (in any sense of the word) by those few images of
naked children that I have occasionally stumbled across, in my years of
using various types of computers? No, I don't, and frankly I don't really
understand the psychology of those who do.
But here we have to keep a sense of proportionality. While some kinds of
sexually explicit literature, pictures and multimedia involving children
undoubtedly DO cause the unfortunate young victims of these practices
some degree of psychological harm, it is wild hyperbole to assert that
"it's worse for a child to be sexually abused than for him / her to be
killed" or "this is the worst crime that human beings can inflict" (both
of these statements are encountered very frequently when this subject is
"discussed" -- I use the quotes because there is never a rational
discussion of the topic, only an escalating series of angry writers, each
trying to demand yet more severe punishments for the "perverts" than the
next).
Use your brains, fellow citizens; no responsible parent would prefer to
have their child murdered, or maimed, over having them be introduced to
sex at an inappropriately early age.
Is child molestation a "bad" thing? Of course it is, just like any number
of other "bad" things affecting children, for example poverty, economic
exploitation (both of which are frequently the cause of child sexual
exploitation), disease and so on.
Personally, based on first-hand testimony -- the nature and source of
which I'm obviously not free to discuss here -- I believe that the impact
of most kinds of casual sexual relationships between adults and children,
while clearly not something that should be encouraged or tolerated, is
far less than the alarmist propaganda always trumpeted by the police and
the media, would have you believe. Like many other negative childhood
experiences, someone encountering this kind of inappropriate contact as a
child can either be strong and get on with their life, or use it as an
excuse for a lifetime of self-pity and emotional failure. But ending up
in bed with "Wicked Uncle Ernie", in my opinion, is a far less traumatic
experience than, say, being constantly bullied at school, having one's
parents divorce, or, worse, losing a parent at an early age. The
notoriety that society attaches to this kind of sexual activity,
perversely, makes its impact much worse than if it was merely
acknowledged as "something you shouldn't do until you grow up" and then
left at that. Elevating this activity to a level of seriousness that it
doesn't deserve, simply makes for bad policy in every sense of the word.
-- But by far the most important thing to consider about child
pornography in the context of data hiding is, BY THE TIME THAT A SEXUALLY
EXPLICIT DOCUMENT HAS MADE ITS WAY ON TO THE INTERNET, THE "DAMAGE" (if
any) TO THE CHILDREN INVOLVED, HAS ALREADY OCCURRED, AND CANNOT BE UNDONE.
Stop to think about this, for a second. Suppose that we could wave a
magic wand and miraculously eliminate each and every last piece of "kiddy
porn" on every hard drive, CD-ROM and memory chip in the entire world
(leaving aside the obvious question of "what counts as 'kiddy porn'").
This magical act would have no effect whatsoever on the fact that the
children who had been involved in the creation of these media, would
STILL have been molested... the fact that there is, or is not, a picture
or movie depicting the molestation, would change the child's situation,
and the damage (whatever it might be) to their psychological or sexual
development, not one whit.
This is the immensely nonsensical thing about the fevered
campaign to "rid the world of child pornography", because all of the
pictures, movies and other media showing children being abused are the
symptom, not the cause -- the cause is, of course, the original
molestation itself. Eliminate the gangs of East European, Southeast Asian
and South American organized criminals that profit from this activity,
eliminate the terrible poverty that drives parents into prostituting
their own children into this activity, and you'll eliminate child
pornography along with the molestation that causes it. Trying to wish
child molestation away by throwing people with kiddy porn collections on
their hard drives into jail, will be as effective as King Canute ordering
the sea to go its merry way.
In summary, I have no sympathy whatsoever for the very weak claim that
"giving people the ability to defeat forensics, is just helping
paedophiles". By the same logic, you could argue, "giving people the
ability to delete files from their hard drives, is just helping
paedophiles", or "giving people the ability to wipe a hard drive of their
private tax information, is just helping paedophiles", or "giving people
the ability to view a .JPG image on their computer screen, without it
always being permanently written to a built-in, unremovable DVD-R disc,
is just helping paedophiles".
When properly deconstructed, what all of these arguments really come
down to is, "we need to set up a pervasive, Orwell-like surveillance
society and remove all computer users' rights to manage their own
computers, as they see fit, completely eliminating the privacy, liberty
and security of the 99.9999% of everyone else, so we can (theoretically)
catch the paedophiles that make up the other 0.0001%". Undoubtedly, if
you work for the Iranian or Chinese governments, MI5, the NSA (or the
motion picture or recording industries), such a model of society might be
to your benefit; luckily, however, so far these entities don't
(completely) run the world or the Internet... yet.
In this document, I am giving you a set of security tools. How you use
them, and what you use them for -- good, bad or indifferent -- is up to you,
and the responsibility for what you do with your computer rests with YOU,
not with me.
First, Some General Comments
----------------------------
Regardless of what computer or operating system you use, there are
some basic principles of secure operations that are universally
applicable to ALL systems. If you don't appreciate, and implement, these,
you have little chance of resisting even a casual attack, let alone the
expert types of attacks that I will be describing below.
The most important single thing that you need to understand is that by
far the single most important element in keeping your PC secure, is YOU.
No amount of security technology will protect your sensitive data from
stupidity or carelessness on your part. If you use the technologies and
methodologies that I will describe below properly AND CONSISTENTLY, they
will virtually never let you down. But slip up, forget to do something
that you should EVEN ONE TIME, and you are leaving yourself wide open, no
matter how good your encryption is. It only takes ONE slip-up, ONE file,
ONE picture, ONE URL, ONE anything, for really bad things to happen to
you. This is the truth, whether or not you want to deal with it.
It takes considerable mental discipline to become, and stay, secure and
private, particularly if (see below) you are working in a context in
which your information assets may be specifically targeted by an attacker
who is singling you out for "special attention".
The second thing to remember is, "computer security is a moving target".
While you don't necessarily have to be totally paranoid and scan the
security-related Websites on the Internet every day for details of the
latest exploits, the hard reality of the situation is that new ways to
compromise the security and confidentiality of your Internet connection,
your PC, your cell phone and your confidential data, are unearthed on a
continuous basis. Most of these are just variations on a common theme --
for example, while there are new types of attacks found all the time
against Microsoft's badly flawed "ActiveX" browser plug-in architecture,
the general fact that this architecture is very vulnerable has been known
for years, so each successive attack isn't really "new", strictly
speaking -- but occasionally, a new attack will surface that can have
dramatic implications for the confidentiality of your data.
A good example of this was the Princeton University "cold boot"
attack, which proved that under certain conditions, a RAM memory chip
pulled from a physically compromised PC and then "frozen" with a can of
compressed air, could be made to reveal its data far longer than the
conventional wisdom claimed was possible -- the chip's memory circuits
were supposed to have discharged and replaced all "real" data (like your
encryption keys!) with random, static-like patterns. If you aren't up to date on
this kind of exploit, and (obviously) if you don't take appropriate
precautions to prevent or mitigate it, then you may be rendered wide open
when the secret police come to call.
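One partial, do-it-yourself mitigation (a sketch of my own, not a cure)
is to make sure that any program you write overwrites key material in
RAM the moment it is no longer needed, so that there is less left over
for a "cold boot" attacker to harvest. A minimal Python illustration,
with the function name and the sample key being hypothetical:

    import ctypes

    def wipe_bytearray(buf: bytearray) -> None:
        # Overwrite the buffer in place so the key material does not
        # linger in RAM. Note that CPython may still hold copies of
        # the data elsewhere (temporary objects, reallocations), so
        # this is strictly a best-effort measure, not a guarantee.
        ctypes.memset(
            ctypes.addressof((ctypes.c_char * len(buf)).from_buffer(buf)),
            0,
            len(buf),
        )

    key = bytearray(b"example key material")  # hypothetical key
    # ... use the key for encryption or decryption here ...
    wipe_bytearray(key)  # zero it the moment you are done
    assert all(b == 0 for b in key)

The point of using a mutable bytearray (rather than an ordinary string)
is that it can actually be overwritten in place; immutable objects cannot.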
Remember, an intelligent human opponent (see below) will ALWAYS attack
the weakest point of your defences. If he knows of a weak point that you
haven't kept up to date on, you're at a significant disadvantage. So scan
the security news sites and mailing lists, every so often; I'd recommend
at least once per week, more if your data is highly confidential.
You're Under Attack -- By Someone Damn Smart
--------------------------------------------
I will assume, for purposes of this discussion, that your PC will be
attacked by a sophisticated opponent (e.g. a computer forensics expert)
who is in physical possession of your PC, in a situation in which you had
little if any time to prepare for this calamity. The main point here is
that if the measures undertaken can protect you against this, extreme
kind of attack, they can certainly protect you against weaker types of
attacks.
But there is another, more subtle implication of this, namely the
well-known computer security saying that "owning the hardware is 99% of owning
its security". What this means, basically, is that someone who is in
physical possession of your computer -- whether that's a thief who stole
your laptop at the airport, or a cop who broke down the door and arrested
you in the middle of your favorite daily Web surfing session, or a
jealous spouse who sits down in front of your PC while you're away at
work -- has recourse to a huge array of snooping and spying techniques
that a remote intruder or attacker would never be able to undertake.
For example, even if you turned off your PC in a panic when the
jackboots of the SWAT team broke down your door, would it surprise you to
know that if they have the right expertise and tools, they can just
attach one of their own computers to a FireWire cable, connect it to the
FireWire port on your PC and then download all of the RAM memory image
that you thought had "disappeared" when you turned off the power, on to
their own hard drive for subsequent use to break all the encryption keys
that you had in your PC's RAM memory at the time?
Just THINK of all the juicy, "confidential" data that you THOUGHT you had
erased, that they now can use to put you away for a long, long time.
Depending on the type of PC, the type of RAM chips that you use and how
long your PC was turned off, they have up to three hours or so to do
this, incidentally.
All of these types of techniques, everything from the one mentioned above
to sneaking a "keylogger" that records each and every keystroke that you
do, and then sends them all to the local police department, to just
taking the hard drive out of your PC and putting it in their own, require
direct physical access to either your PC or where you use it, or both.
Incidentally, if at any time your PC HAS come into the physical
possession of a skilled adversary who would have had a few minutes to
hours of undisturbed, private time to compromise your computer, unless
you are very good at being able to recognize the signs of a technically
advanced compromise -- just an extra little chip soldered on to the
motherboard (how would you know if it's out of place or not?), or a few
bytes of machine language code added to your boot sector, for example --
I'd strongly suggest that you immediately sanitize (wipe out and erase)
all hard drives and other storage media on the PC as well as all its
peripherals such as keyboards, etc., sell it to the first sucker you find
and then use the proceeds to buy a "clean" new PC. Computer hardware
nowadays is cheap... far cheaper than a 10 year stint in your local
prison for being a "terrorist organizer" or "on-line pervert".
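If you want a feel for what "sanitizing" even a single file involves,
here is a minimal Python sketch (the function name and file name are
mine, purely for illustration): it overwrites the file's contents with
random bytes before unlinking it. Be warned that on journaling file
systems and on SSDs with wear levelling, stale copies of the old blocks
can survive elsewhere on the medium, so for a genuinely compromised
machine, whole-device sanitization is the only safe option:

    import os

    def overwrite_and_delete(path: str, passes: int = 3) -> None:
        # Overwrite the file in place with random data, forcing each
        # pass out to the device, then remove the directory entry.
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))
                f.flush()
                os.fsync(f.fileno())  # push the overwrite to the disk
        os.remove(path)

    overwrite_and_delete("incriminating_notes.txt")  # hypothetical file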
(Note: One of the most important principles of this is, "know your
enemy". That is, you must become at least casually familiar with the
principles of computer forensics investigations, because these techniques
are what is going to be used against you, when the police come to call.
An example of police training materials is available at:
http://www.ncjrs.gov/pdffiles1/nij/219941.pdf, but be aware, this manual only
scratches the surface of what a sophisticated attacker equipped with a
powerful tool like EnCase, can accomplish. So devote some time learning
about how computer forensics works. It's time and effort well spent.)
You have to base your data security protection measures on the assumption
that your PC WILL be attacked in the above manner; just protecting it
against some snoop coming in across the Internet is by no means adequate.
Defending In Depth
------------------
A classic concept of computer security -- really, this is simply an
adaptation of classic military strategy -- is what's called "defence in
depth". If you want to have even a chance of staying secure in the face
of an attack by an intelligent, well-equipped adversary, you will have to
understand this concept and apply it diligently.
Although its actual application can be quite complex, the basic idea of
defence in depth is quite simple: every defensive measure is implemented
on the assumption that it could fail (that is, that it could be somehow
overcome by an adversary). Thus, when designing the _entire_ defensive
system, we have to construct it in such a way that a failure of one
defensive measure is "mitigated" -- that is, reduced, with its negative
impact lessened as much as possible. (Note: The opposite of this concept
is called "all-or-nothing"; it is built upon the very questionable
assumption that a "barrier" or "wall" defence can be erected, that can
never be beaten or breached. Of course, the problem with "all-or-nothing"
is that it has to work one hundred per cent of the time, all the time.
Even ONE failure with this model is disastrous.)
The idea of defence in depth is used thousands of times, every day, for
almost every kind of complex system or machinery in the real world. For
example, jet airplanes are built so they won't come crashing out of the
sky, without something (like a bomb) going dramatically wrong; but, they
are also built so that if they DO come crashing down, as much as is
possible they won't instantly explode (this is achieved by fire
suppression systems, "self-sealing" fuel tanks and so on). When you
withdraw money from an automatic banking machine, there is a complex
series of mutually confirming transactions to ensure that (a) you
actually get your crisp new stack of ten-pound notes, and (b) the bank's
computers at the other end, have properly recorded that you now have that
much less money in your account. If one of these safeguards fails, the
other one takes over. These are systems that have to work perfectly, all
the time, and they're designed with the assumption that a single failure
somewhere within them, will be compensated for by the system's other
checks and balances.
In the real world, bad and unexpected things happen all the time, so the
prudent thing to do is work on this fact and try to contain the damage,
when the worst case scenario rears its ugly head.
A perfect example of defence in depth, in the world of computer security,
is, "using different passwords for different encrypted containers".
Without this "mitigating", defence in depth measure, a successful
compromise of even ONE password will give the attacker unrestricted
access to each and every piece of confidential data that might be
contained on the compromised PC. But if different passwords are used,
then the extent of the compromise will instantly be contained to the
individual container for which the password was reverse-engineered. Many
other examples will be given below.
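To make the "different passwords, different containers" idea concrete,
here is a small Python sketch using the standard library's PBKDF2
implementation (the function name, passphrases and iteration count are
illustrative choices of mine, not a recommendation of parameters):

    import hashlib, os

    def derive_container_key(password: str, salt: bytes) -> bytes:
        # A unique random salt per container guarantees that even
        # related passwords produce unrelated keys; a genuinely
        # different password per container is better still.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                                   600_000)

    salt_a, salt_b = os.urandom(16), os.urandom(16)
    key_a = derive_container_key("passphrase for container A", salt_a)
    key_b = derive_container_key("passphrase for container B", salt_b)
    assert key_a != key_b  # cracking one key reveals nothing about the other

With this arrangement, an adversary who reverse-engineers one password
has breached exactly one container, and the rest of your defence in
depth remains intact.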
In considering the steps that are described below, always keep the
concept of defence in depth, in the back of your mind. A good way to
remember it is, at each point, thinking, "...if THIS one fails, then what
do I do?" If the likely answer is, "oops, I'm buggered, mate", then you
probably don't have a sufficient amount of in-depth defence for that
particular system or process. Keep in mind, though, that at the end of
the day, no system can be 100% fool-proof.
Working Alone, Always Alone
---------------------------
I will assume that you are the ONLY person who will have any (known) type
of significant access, either direct (physically in front of the
keyboard) or remote (over the Internet). This is a very important point
because for each ONE (1) extra person to whom you entrust the details, or
even the general knowledge, of what you are doing on the computer, in my
opinion you degrade the overall security of any and all measures that you
might undertake, by at least FIFTY (50) PER CENT.
YES, IT'S THAT IMPORTANT.
Stop to think about this... suppose that you're an Islamic militant
surfing to Jihad Websites, or that you're a "pervert" surfing to Websites
with "dirty pictures" on them, or that you're a corporate whistle-blower
who's had his PC collecting the paper trail of how your boss has been
dumping toxic waste into the local river, and in any of these cases, you
let someone else know what you've been up to.
Consider:
* Your Islamic confidant may suddenly convert to Judaism and decide to
turn in this dirty rotten "terrorist";
* Your fellow pervert on that dirty movie video sharing site may get
nabbed by the cops, and may squeal your identity to get his sentence
reduced from 20 to 10 years at the State Pen;
* Your buddy from the partition three doors away, may quietly whisper
your name to the boss, in the hopes of getting that promotion that both
you and he were in the running for.
The moral of this story: DO IT ALONE, BY YOURSELF, WITH NOBODY WATCHING,
AND TELL NOBODY ABOUT IT. In particular, NEVER, NEVER, NEVER share
sensitive security related information (passwords, locations of encrypted
files, user identities, etc.) with anyone else. You CAN control your own
actions and how you interact with hostile authorities. You CANNOT control
the actions or motivations of third parties, no matter how well you may
think you "know" them.
In the vast majority of cases where someone is busted for doing "illegal"
things with their PC, it's not that their encryption failed or anything
like that; far more often, it's because the cops or the local secret
police were able to compromise someone else that the bustee trusted, and
in so doing were able to bypass all of his defenses with little or no
effort. Never engage in any kind of "risky" activity in which there are
people who can identify and incriminate you.
You'll Always Be There... Won't You?
------------------------------------
In saying this, there is another "common sense" thing that I need to
restate here, even though anyone with an ounce of brains should not have to
be told.
Namely, NEVER, EVER, EVER, leave a computer with "sensitive" data on it,
unattended by yourself. There is a specific meaning to this : NEVER leave
the computer running but unattended, let alone if it's connected to an
untrusted network like the Internet, if it has or might receive,
"sensitive" data, particularly if that data will not be immediately
encrypted (with the plaintext version of the data impossible to access
without entering your credentials, e.g. your password, etc.) upon it
being stored.
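The "encrypted before it ever touches the disk" requirement can be seen
in this minimal Python sketch, which assumes the third-party
"cryptography" package (pip install cryptography); the function name,
file name and plaintext are hypothetical:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # in practice, derive this from your
                                 # passphrase rather than storing it
    box = Fernet(key)

    def store_sensitive(path: str, plaintext: bytes) -> None:
        # Encrypt BEFORE the write, so that no plaintext copy of the
        # data ever exists on the storage medium for a forensic
        # examiner to recover.
        with open(path, "wb") as f:
            f.write(box.encrypt(plaintext))

    store_sensitive("notes.enc", b"sensitive material goes here")

The crucial property is the ordering: if you write the plaintext first
and encrypt it afterwards, the original bytes may well survive in
unallocated disk sectors even after "deletion".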
It's bad enough that further on in this document, I have to explain
"emergency" data hiding processes for the "jackboot kicks down the front
door" scenario; now, try to imagine how much worse your exposure is
likely to be, if an intruder (remember, this isn't necessarily the
government -- it could be someone as unskilled or seemingly unthreatening
as your girlfriend, your kids, your co-workers, your boss, the cashier at
your local Internet cafe, nosy Great-Aunt Marjorie who just happens to be
living in the downstairs suite... whomever) might be able to compromise
the security of your computer, access your "confidential" data, and so
on, without you even _knowing_ about it!
Taking a risk like this, trusting the computer to "defend itself", is a
disaster waiting to happen. I don't care if your PC is "protected" by a
screen-saver that forces someone to log in with your password; that kind
of thing can be negated in two seconds by even a moderately experienced
attacker. And I also don't care if you have the data stored in a
relatively strong encryption system like TrueCrypt (see below); all
computer operating systems leave all sorts of interesting little tid-bits
(for example, how about your surfing bookmarks?) of forensic information
around for an experienced attacker, even if the really "secret" stuff was
well-secured. The only way to defend against this is manual, human
action, which is why you have to be there, 100% of the time.
You have to appreciate that in your "defence in depth" data security
strategy, while the technological measures that I'll discuss further on
in this document are very useful, at the end of the day, the most robust,
effective anti-forensics "tool" in your arsenal, is... YOU. There is
simply no substitute for an intelligent, knowledgeable, cautious, prudent
and vigilant security system like the "Mark 1 Paranoid" human being. The
minute that you absent yourself from minute-to-minute supervision of the
repository of your "sensitive" information, you immediately enable a wide
range of hard-to-detect attacks that would obviously be difficult to
impossible to engineer, if you are physically there while the attacks are
being initiated.
Here's a perfect example : You're in an Internet chat room that's devoted
to, shall we say, a "controversial" topic. All of a sudden, you notice
that all of the TCP/IP ports on your PC are being remotely probed,
probably by a hacker or a police officer, who is trying to find an open
port to use in depositing a "keylogger" or other remote surveillance
program, on your PC. If you are physically at the computer and watching
this happen, you would use your brains, immediately terminate the chat
session, disconnect your computer from the Internet, and check very
carefully for signs of unwanted software. Quite possibly, you would wipe
your entire hard drive and re-install the operating system from scratch,
to avoid the slightest chance of being compromised. But -- if you've
decided to head off to the local pub for a meal, none of this would be
visible to you; and when you come back, everything just seems fine...
doesn't it?
Do you see, now? You can't take chances. Not even once.
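As a simple illustration of the kind of watchfulness I mean, here is a
minimal Python "canary" listener (entirely my own sketch; the port
number is an arbitrary choice). Nothing legitimate should ever connect
to an unused port, so any hit logged here is a warning that your
machine is being probed:

    import datetime
    import socket

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", 4444))  # arbitrary unused port
        srv.listen()
        while True:
            conn, (addr, port) = srv.accept()
            # Log every connection attempt with a timestamp; on a
            # port nothing should use, each entry is a probe.
            print(f"{datetime.datetime.now()}: probe from {addr}:{port}")
            conn.close()

A tool like this is no substitute for being physically present, but it
shows how little code it takes to notice the probes that you would
otherwise never see.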
Obviously, none of the above means that you can never leave your computer
-- indeed, hanging around it all the time, would in itself, be a cause
for suspicion. All I'm trying to point out is, there has to be a well
thought-out procedure for starting up the computer, for using it when
accessing "sensitive" data, and for purging it of all traces of the
latter, upon shutting down the PC. You MUST be physically present for all
three stages of this process, but assuming that you have done so (and
that the process and tools that you have used, are robust and
appropriate), then you can go on with your other life duties, as best you
see fit.
Keeping A Low Profile
---------------------
A related issue is, you always have to keep in the back of your mind that
one of the most important, and powerful, forensic tools available to an
attacker, is being able to "co-relate" your personal activities (e.g.
where you were physically) at a given time, with traces of activity on
the computer. If they can "prove" (remember that the amount of "proof"
that they have to give to the witless, credulous juries of most countries
is very low) that you were where the computer was, when "these heinous
computer crimes were committed", your chances of being convicted go up
exponentially.
More information about this is given in the context of time- and
date-stamping below, but at a higher level, what this means from a prudent
private computing perspective is, you should always refrain from
activities [for example, receiving an incoming phone call (let it go
through to the answering machine instead), answering a knock on the front
door from the local vacuum cleaner salesman, going out on your driveway
to take the empty trash bin back to the side of the house, using your PC
close to a window that an intruder could surveil via a pair of binoculars
or a telescope, playing loud music or a television show (especially the
latter since it can establish a narrow "time of day" fix), or driving to
the corner store to pick up a litre of milk] that could allow the police
to co-relate the exact time and date when you were physically co-located
with / had access to, your computer, and any data processing activities
that might have occurred around the same time (especially if these had
anything to do with Internet access, since here, there would be a third
party -- that is, your ISP -- who could independently confirm the time
and date assertion).
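To appreciate just how much correlation material an ordinary computer
hands the examiner, consider this Python sketch (the file name is
hypothetical), which prints the three timestamps that every file on a
typical file system carries:

    import datetime
    import os

    st = os.stat("holiday_photo.jpg")  # hypothetical file
    for label, ts in (("last accessed", st.st_atime),
                      ("last modified", st.st_mtime),
                      ("metadata changed", st.st_ctime)):
        # Each timestamp can place a person at the keyboard at a
        # specific moment, ready to be matched against witness
        # statements, phone records or ISP logs.
        print(label, datetime.datetime.fromtimestamp(ts))

Every file you touch leaves this trail, and it is exactly the raw
material from which the "were you at the computer on the afternoon of
March 15th?" question is built.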
The proper stance here is, "a normally observant person could drive by
wherever you are in a car, and not have the slightest idea that you're at
home (or wherever you normally work from), using the computer". Now,
understand that this is NOT going to protect you against very
sophisticated opponents using high-tech gear such as backscatter
radiation detectors, infra-red (thermal) imaging devices, and so on; what
it WILL provide a measure of protection from, is when the police come to
interview the neighbours and ask them, "say, do you remember seeing
Achmed at home, sometime on the afternoon of March 15th?". If they all
saw you cursing and swearing after that improperly-tied bag of household
waste fell all over your brand new patent-leather shoes, the police now
have what they call a "positive ID" on time, date and location. This is
very bad news for you, so don't give it to them through carelessness.
One other thought, here. Unfortunately, fully obeying the above rules,
makes it inadvisable for you to have any kind of normal life activity
that would naturally make you famous or prominent in some way. There is a
simple reason why this is so : fame, or notoriety, attracts the malicious
and the curious, and this, in turn, will enormously increase the risk
that someone is going to deliberately start "snooping" against you. The
likelihood that your personal affairs, including but by no means limited
to your computer and telecommunications activities (cell phone
compromises are a favorite of sleazebags who hope to enrich themselves by
revealing embarrassing private information of celebrities) will thus be
pried into, goes up in like proportion to your level of fame and it is
especially likely if you are in a profession (politics, business
management, the legal profession, the news media, high rank in an
NGO or opposition group, etc.) where by the nature of your job, you are
doing something that would offend or antagonize someone else.
There is a reason why all the great spies throughout history - and here,
note that I'm deliberately speaking about the ones who you NEVER heard
of, not the ones who got caught - have all led quiet, unremarkable,
unassuming lives, always out of the public spotlight. (And it's worth
observing, that many of the spies who _did_ get caught, for example
Aldrich Ames in the U.S., were undone by profligate spending and a
high-rolling lifestyle that was out of sync with their known income. Here
again, those trips to Spain and the Jaguar in the driveway were nice to
have, but they carried with them a heavy price...) Successful spies
behaved discreetly because they knew that fame and fortune are
incompatible with privacy; James Bond may be everyone's fantasy spy hero,
but he's about as viable as a real spy, as would be the Pope, a movie
star, King Kong or Osama Bin Laden.
Nobody spends much time breaking into a child's piggy bank, because there
just isn't enough in there to make it worth the effort (at least, for
kids who I know...), but, in the famous words of Willie Sutton, "Why do I
break into banks? Because... that's where the money is."; the more
prominent you are, the more it's worth, either directly or indirectly,
for some malicious or self-serving third party, to bring you crashing
down to Earth.
Where you draw the line between having an "exciting", high-profile public
lifestyle, and the need to retain the confidentiality of your "sensitive"
data, is something that only YOU can really decide. All I can do here is
warn you of the likely consequences.
Your Work PC Is An Unsafe PC
----------------------------
There is a special aspect to this concerning a PC (or network) to which
you might have access while you are at work, that is, when you're away
from home (so, in this sense, the word "at work" really means "any time
that you're using any computer or network that you didn't buy for
yourself and which you don't have complete, undisputed administrative
control over"). There's a simple rule, here: NEVER do anything on a work,
or third-party, computer or network, that you aren't comfortable having
your boss instantly know about.
Most computers located in large companies are pre-loaded with an
extensive portfolio of remote "management" (read: "remote spying")
applications that you either (a) can't detect or (b) can't disable,
even if you do somehow manage to detect that they're there; while there
are a variety of somewhat justifiable reasons (such as, "the computer's
there for you to do work with, not for you to play games with") why an
employer might want to use these types of surveillance programs, the
larger point is that the minute in which administrative control of a PC
passes from your EXCLUSIVE control, to a control scenario where someone
other than you can tell the computer to do or not do something, this
opens an enormous -- and, largely, impossible to mitigate -- security
hole.
This is true of everything that you might do on a work PC, although it's
worth noting that in particular, so-called "nanny filters" (gateway
applications that limit where you can go and what you can do, when
surfing the Internet) have become very widely used in large corporations
these days; the minute that you try to surf to a "naughty" Website, not
only does the filter stop you from doing so, but it also alerts an
administrator, and / or possibly your boss, that you're a "time-wasting
pervert who's abusing Company resources for personal gratification".
Another commonly-encountered surveillance system, in the corporate
computing context, is "IDS" or "Intrusion Detection System", sometimes
combined with "DPI" or "Deep Packet Inspection" technology; this scans
each and every little TCP/IP packet that you send out over the company
Ethernet cable, checking for "naughty" or "illegitimate" content (however
they define that), wherever it is. Some employers even have "keyloggers",
which are a hidden background system that capture each and every
keystroke that you enter at your keyboard, also in some cases every .jpg
or .gif file that you open on your PC, and then forward this data to an
administrator who can use it to punish you for "non-work related conduct"
or "inadequate data entry speeds" or, basically, whatever the company
involved wants to punish you for.
I have seen situations like this where even ONE such infraction is an
instant dismissal offence. Add to this the fact that the Information
Technology administrators of a big company have every reason to
co-operate with the police and virtually no reason to defend your interests
against them, and you can easily see why accessing "controversial" data
on a work PC is a very, very bad idea.
And, incidentally, don't fall into the very easy-to-accept trap of
"everybody at my office downloads porno on to their computers -- why
should I be holier-than-thou and not go with the crowd?". This is an
excuse that I hear with frustrating frequency and it's ridiculously easy
to shoot down.
Use common sense, for God's sake: if it's against company rules to
download inappropriate material using your work PC, and the other 9 out
of the 10 people in your office do it anyway, and then all 9 of them are
subsequently fired for this transgression, how does it "help" you to be
the 10th person to be fired? In most large corporations, when you get
dragged in front of a disciplinary hearing, what matters is the written
rules, not what you claim the "office corporate culture" was.
Mobile Devices To Get You Moving... Right To Jail
-------------------------------------------------
In the comments that you see below, I'm assuming that we are talking
about a conventional operating system on a conventional desktop or laptop
computer, not something like a virtualized OS session, an iPhone, a
BlackBerry or other handheld, since the issues and protective measures
for those scenarios are quite different from what we're looking at here.
(Although, I will talk briefly about special considerations for data
storage on removable devices such as a USB key or SD RAM chip.)
In general, you should NEVER store sensitive data on anything other than
a "real" PC that is normally located either in your home or with you, in
the case of a laptop. If you're stupid enough to put sensitive data on
something like an iPod, iPhone or cell phone, then you deserve what you
almost certainly will get.
Note that in this respect, "sensitive" data can be stuff that you might
otherwise think to be innocuous, for example friends' phone numbers,
Websites that you frequently visit (don't surf the Internet on something
like an iPhone, to any site that you don't want your friendly local
police officer to know about instantly!), or, worse, lists of passwords
(believe it or not, this happens all the time -- one of the first places
that the cops check, when they have a suspected drug dealer, is his cell
phone, because 'them cops would never think I put my passwords there').
Most of these new portable devices have either very weak protective
technologies, or no protective technologies at all (see the following two
URLs for a rather dramatic depiction of how easy it is to "suck" all the
data off your cell phone, to an even moderately well-equipped attacker :
http://news.cnet.com/8301-1009_3-10028589-83.html?tag=newsEditorsPicksArea.0
and http://csistick.com/). What's even worse
is that this process can usually be accomplished in a matter of seconds
or minutes and that it leaves no signs at all of the cell phone having
been tampered with. (It is technically quite difficult to instantly
download confidential data from a PC just by plugging a forensics USB key
into the PC's data port, not only because of the volume of data involved
but also because with the exception of a few technologies like Firewire,
modern PCs have a degree of built-in protection against unauthenticated
access of static data by external devices. Most cell phones and mobile
devices have no such protection and can easily have all their data
harvested by someone with the right forensics tool.)
A carelessly stored cell phone, BlackBerry, etc., is thus far more
vulnerable to this kind of "fly-by" forensic attack than a conventional
computer would be... all that the attacker has to have, is a few seconds
of undisturbed physical access to the mobile device, and he's got all the
data that's contained within it.
The other thing that makes mobile devices especially dangerous is that
they are always connected to the manufacturer's network (for example,
Apple and Verizon's network in the case of an iPhone), and you don't
control that network, in fact you usually don't have any idea what kind
of visibility it has on what is going on with your portable device. (See:
http://www.pcworld.com/article/id,143932-c,cellphones/article.html).
Another cute little example of this is how the iPhone just, er, "happens"
to secretly take screenshots of whatever you were doing when you hit the
"Home" button, then discreetly files away these little forensics gems in
a secret place unknown to you, just waiting for the next police officer
to retrieve them. (Read about the gory details at:
http://www.networkworld.com/community/node/32645).
I don't suppose that Apple had a little, er, "advice" from the U.S. NSA,
CIA and FBI, when they put that little, er, "feature" into the iPhone, do
you? Yeah, baby, you got it. Steve Jobs may be a "cool dude", but he's
still an American, and American "cool dudes", at the end of the day, are
going to do whatever Uncle Sam tells them to do.
Considering that 99% of the major phone companies work hand in hand with
the FBI, the CIA, M.I.5, the NSA and your local law enforcement, and
considering that in almost every case they will happily hand over your
private information to the cops "on request", you are stark raving nuts
to put sensitive information on a mobile device that uses one of these
companies' networks, just as you are nuts to access this kind of
information over their networks. You might as well put a big bullseye on
your back.
The Lying, Incompetent Thugs Called "Police"
--------------------------------------------
You must appreciate that the police are, in most jurisdictions, out to
get convictions at ANY cost, whether or not the person involved has
actually done anything illegal or immoral. It's just a big game of
"gotcha", to the cops; that's how they get promoted and get public
recognition, by proudly showing up on the local news and boasting about
how they "put that pervert away for life".
Most citizens, and therefore juries, have a child-like trust in claims
made by people in positions of authority, particularly policemen
confidently asserting that the defendant is an awful terrorist /
paedophile / drug dealer / subversive / gang member / {pick your
favourite Devil figure}. You have to assume that many or all of these
claims, true or otherwise, will be made against yourself, when your PC
gets seized by the authorities.
In thinking about this, you have to understand that the average person,
who is 100% ignorant about virtually every concept associated with
computers, has a naive, trusting belief in the honesty and integrity of
the police, as well as in the completely false idea that "if you don't
have something to hide, then you shouldn't be afraid of anyone rummaging
through your personal affairs".
What if you DO have something to hide? Say you're secretly gay, or
you're planning to divorce your husband and run off to Morocco with the
man next door, or you're planning to sue the local Council over that
tree that fell on your car last week, or you have the secret formula
for a revolutionary anti-cancer drug hidden on your encrypted volume,
or you are campaigning for free speech rights in an oppressive society
like China or Iran. All of these are completely legal in most countries
(or should be), but there are perfectly valid reasons why you'd want
them kept from unauthorized viewers.
But the average, ignorant, police-loving, "patriotic" citizen of most
countries, knows nothing of the above and cares less. People crave what
they (usually falsely) believe to be a benevolent dictatorship that makes
the trains run on time, and they just cannot envisage any legitimate
situation where anyone could want to hide information from the public or
the police.
NO amount of evidence to the contrary (and I have tried presenting it,
many times) will convince a "law-abiding" conservative citizen that the
police would lie or cheat. In fact, the average citizen will simply get
angry with you for "impugning the reputation of our fine law
enforcement officials". Trying to secure your data, or appearing to be
trying to do so, is two and a half strikes against you before the
police even pitch the next baseball.
You may think I'm exaggerating, about the above; I wish that I wasn't,
but the available evidence overwhelmingly suggests that if anything I am
understating the situation.
Furthermore, in some jurisdictions, like much of the southern United
States, under "forfeiture" laws passed originally to "deny drug lords
income from their crimes", the police actually get to seize, impound
and then sell, for their own personal profit, most or all of a
suspect's property, BEFORE there is even a trial, much less a
conviction on the original grounds for which the "perp" is charged. You
don't have to be a rocket scientist to appreciate the huge incentive
this gives the police to cheat, falsify or withhold evidence and
otherwise eliminate what few legal restraints are imposed upon them, so
that they can sell your computer, car and house and then take a nice
vacation somewhere. Your data confidentiality plans should start from
the assumption that your opponents will use every tactic, fair or
unfair, legal or illegal (such as, for example, beating the crap out of
you to "encourage" you to tell them your encryption passwords), to get
what they want out of you.
Surprise, uncertainty, subterfuge, fear, lies, deception and causing
hesitation: these are all weapons that an experienced, ruthless adversary
will use against you. Prepare for them, be able to recognize them when
they are arrayed against you and don't fall for them -- have a good plan
and stick to it, but learn from your mistakes, especially "close calls"
where you almost revealed sensitive data, and make sure that you never
repeat the same mistake twice.
Keep Your Damn Mouth Shut, Bloke
--------------------------------

Having said the above, there is another very important issue that you
have to be aware of. In the (hopefully) unlikely event that you get
arrested, then, to the maximum extent that your personal pain threshold
allows (because, in many parts of the world, the police will simply
beat the tar out of you to get the information that they believe you to
be in possession of), you should NEVER, EVER voluntarily communicate
with the authorities, reveal information to them (even information that
you figure they already know, and even information that they have told
you they _do_ already have), talk with them, or give them even the
slightest insight into the details of your life or how you use your
computer.
There is a really simple, if inelegant, way to put this: SHUT THE F*CK
UP. NEVER SAY ANYTHING TO THE AUTHORITIES, NO MATTER HOW INSIGNIFICANT IT
MAY SEEM.
You have to understand that the police habitually lie to suspected
criminals to get the latter to cough up information that the cops would
otherwise have a difficult job obtaining. Incidentally, in their rare
candid moments (see:
http://video.google.com/videoplay?docid=6014022229458915912&q=&hl=en),
the police will openly brag about how they lie, cheat and mislead
defendants -- many of whom are likely or obviously innocent -- into
signing false confessions, divulging seemingly innocent and irrelevant
information which the police later twist into "evidence of guilt", and
so on. Such tactics are Standard Operating Procedure for all nations'
corrupt, incompetent "guardians of public safety".
Here are examples of typical admonishments:
-- "Look, you pervert, we already have more than enough kiddy porn
pictures taken off your PC, to convict you in any court in the country.
Why don't you just save everybody some time and tell us where the rest of
them are?" {In fact, the policeman hasn't found anything at all, but rest
assured that if you respond to this question in any way that might even
hint that you did have some of this horrendous data on your computer, he
very definitely will take it down as an "admission of guilt" at your
eventual trial.)
-- "Come on, Carlos, surely you aren't denying that this is YOUR
computer? We found it in your house!" {In fact, while it may be your
computer and may have been found in your house, a judge or jury has no
idea if it was really yours or instead was owned by any of the other 6
people who share your rooming-house. If you answer in the affirmative
because you figure the police already know that it's yours, you have just
denied your attorney a plausible defence tactic in court.}
-- "Look, Mohammed, the two other guys that you've been sending those
'jihad' e-mails to, on-line, got picked up earlier today, and they've
already confessed. Not only that, but they've told us that you were the
ringleader! I'm telling you, pal, that if you confess now and co-operate
with us, we can get your sentence reduced; but if you don't play ball,
don't blame me when you get put away for 20 as 'lead conspirator'!"
In fact, they picked up both of your friends, but had to let one go due
to a complete lack of evidence while the other one hasn't told them
anything. If you spill the beans, the cops will just smile and then
charge you with whatever they were going to charge you with, whether or
not you 'play ball' with them, makes no difference whatsoever. They
probably have no latitude in this, anyway, because of 'minimum sentencing
guidelines' and other such 'get tough' measures against the menace of

people with 'illegal' materials on their computers.


Consider, in this context, if you co-operate with the police, and then
they renege on their part of the bargain, how do you enforce the deal?
You're in their custody, not the other way around. At the end of the day,
the only thing that governments, or the police, respect, is POWER. This
is precisely why they hate an independent judiciary, defence attorneys
and a free press, so much, because all three of these represent a source
of power different from their own. If you don't have power, you'll get
nothing but contempt and dirty tricks from the police, whether or not you
do what they want you to do.
-- "Your girlfriend has already told us that you were using her computer
on the night when we detected you were downloading all of those pirated
movies. Here's her testimony. Want us to get her to repeat it, to your
face?" {In fact, the girlfriend never said anything of the sort; the
'testimony' is a tissue of lies written up by the policeman's secretary
10 minutes before the interrogation session, and if you say "yeah, I'd
like to hear her say that", the policeman will angrily storm out of the
room, shrieking "don't you try to bargain with ME, you fucking little
prick, who do you think you are"... because, he's not allowed to expose
one witness to another as doing so would taint both witness' testimony,
in court.}
There are untold thousands of examples of the above kinds of dirty tricks
recorded in the annals of the criminal justice systems, around the world.
The larger point is, forensic analysis of a seized computer, as well as
of communications data traffic, is actually a very difficult task that
requires a great deal of specialized knowledge, good forensic software
and hardware tools, hard work, patience, and, not uncommonly, good
luck. This is offset to some degree by the naive trust (see below) that
judges and juries have in evidence "found" (more often, faked) by fine
upstanding policemen and policewomen, but you are just making your
attacker's work far, far easier if you reveal anything -- and I DO mean
ANYTHING -- about yourself, or especially about your computer
environment or habits, to an interrogator.
It wasn't by accident that Bill Clinton stonewalled and lied for years
about his personal sexual habits, when these were used by his American
political opposition to try to destroy his Presidency. Good old Slick
Willie knew that his tormentors would never cut him any slack, whether or
not he confessed to having a wild time with those White House interns. He
was right about that. You should do the same. For a very good explanation
of why, see: http://video.google.com/videoplay?docid=-4097602514885833865.
Assume that the police know nothing at all, that your encryption and
defensive measures have worked, that the cops are frustrated to tears by
their lack of progress on your case, and that, like 99% of all lazy,
corrupt, cynical, "who cares if we got the right guy" law enforcement
officials around the world, they want the easy way out by getting the
"perp" to incriminate himself or herself. DON'T DO IT. Don't make it easy
for them to throw you in the dungeon and chuck the key down a well. Make
them WORK for all the pain they plan to inflict on you. Then, if it
happens anyway, at least you'll know that you didn't collaborate in your
own punishment.
Don't Let Your Data Do The Talking, Or Confessing
-------------------------------------------------

Incidentally, there is a side-note to the above that I find many
otherwise careful people are shockingly uninformed about. It is: if you
have "confidential" or "sensitive" data, make ABSOLUTELY sure that there
is NOTHING (repeat, zero, zilch, rien, nada) within that data that, if
revealed to an opponent, can personally or uniquely identify you,
especially if in so doing it can also divulge time- or location-based
information such as when you worked on a particular file, when you were
using the computer, and so on.
The interesting issue here is that within conventional information
technology security, the concepts of "accountability" and
"non-repudiation" -- that is, "being able to definitely establish who
did what with what data on a computer system" and "being able to prove
this so definitively that someone cannot say 'it wasn't me who did
that'" -- are very important, and legitimate, core components of a good
enterprise security strategy (because when the CEO of a company starts
stealing from its share-holders, presumably the latter have a right to
find out where the money went; or, when some idiot crashes your
company's most important file server, someone has to work out who gets
fired for not knowing what he was doing), and many IT security tools,
as well as ordinary components like applications and computer operating
systems, have built-in abilities to create and preserve a
non-repudiable "audit trail".
However, in your case, these concepts are the OPPOSITE of the
environment in which you want to work. You DON'T want anyone to be able
to establish, beyond a reasonable doubt, that "Achmed Islam was working
on that particular .doc file at 12:30 p.m. on Sunday, May 9". You DON'T
want anyone to be able to say, "It was Billy K. Pervert who sent me
this filthy pornography over MSN Chat, and nobody else." This means
that in many cases, you are going to be working at cross purposes with
how the computer system was originally meant to function. Take note of
this, and prepare yourself accordingly.
An obvious example of the kind of thing that you don't want to have
happen is "having a picture of yourself in a compromising situation
with an 8-year-old boy and a German Shepherd dog" -- you would be
amazed at how many "perverts" get caught in exactly this way. But, less
obviously, little snippets of data that may contain a current or
previous phone number, credit card number, and so on, are of vital
importance to a police officer in "proving the trail of evidence back
to the perpetrator of this nefarious crime". This particularly includes
"metadata", which is sometimes recorded in the "Properties" or file
header area of Microsoft Office documents like an MS-Word .doc file or
an MS-Excel .xls file, and even just the file names of "controversial"
content that you may have downloaded from the Internet -- make sure to
rename the latter according to your own naming scheme.
Consider, in this context, that destroying or making implausible the
repudiation factor -- e.g., "oh, no, Officer, I didn't download that
file, it must have been someone else with access to the computer, who did
that" -- is perhaps the #1 priority of the police, at least in nations
with even a modicum of rules of reasonable doubt and admissible evidence.
This is why confessions of any type, or poorly-sanitized data evidence
that "confesses" on your behalf, are such a high priority for the police
and are such a threat to your safety and liberty. In some jurisdictions,
the police actually have quite a hard job in establishing the "ownership
chain" for "controversial" content. Don't make their job any easier... it
may be the difference between freedom and long jail sentences, or worse,
in some cases.

Also, you should remember that the law enforcement authorities, as well
as quasi-legal entities like the notorious U.S. RIAA and MPAA and their
henchmen like MediaSentry, all have very large and very well-indexed
databases containing the file names, sizes and MD5 "hashes" (a hash is
a cryptographic fingerprint, computed from a file's contents, that for
practical purposes is unique to each particular file, so the file can
be quickly and reliably identified) of many types of "controversial"
content, ranging from kiddy porn to "pirated" multimedia files like
.MP3s and movies traded on file sharing networks like LimeWire,
BitTorrent and so on.
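To make this concrete, here is a minimal sketch, in Python (standard
library only), of the kind of fingerprinting such a database lookup
relies on; the file name used is just a placeholder:

  import hashlib

  def file_md5(path):
      """Compute the MD5 fingerprint of a file, reading it in chunks
      so that even multi-gigabyte files don't exhaust memory."""
      digest = hashlib.md5()
      with open(path, "rb") as f:
          for chunk in iter(lambda: f.read(65536), b""):
              digest.update(chunk)
      return digest.hexdigest()

  # The hash depends ONLY on the bytes inside the file; renaming
  # "holiday.jpg" to "MY_THOUGHTS_ON_GARDENING.DOC" leaves it unchanged.
  print(file_md5("holiday.jpg"))

Note the implication: renaming a file, while still worthwhile against
casual file-name scans, does absolutely nothing against hash matching.
Only changing the actual bytes of the file (re-encoding, re-compressing
or embedding it, as discussed below) produces a different fingerprint.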
If an opponent can use these tools to get a positive identification on
a known "controversial" file that they found, or can claim (truly or
falsely) to have found, in your physical possession, they can then
"prove" that you (and nobody else) were responsible for this filthy /
disgusting / terrorist / fraudulent material on your hard drive, just
from the files themselves (without having to beat it out of you with a
truncheon)... and in so doing, you have made the job of convicting you
1000 per cent easier.
At least for files of moderate size, there is an interesting way of
creating a roadblock to this kind of analysis. Suppose, for example,
that you have a large number of "controversial" .JPG format graphics
(picture) files. While you should ALWAYS rename these anyway, if you
are willing to permanently delete the originals of the graphics files
(as separate entities on your hard disk), what you can do is paste
them, one by one, into different pages of a Microsoft Word (or other
word processing) document -- or some other kind of document; it could
be anything, for example PowerPoint, as long as it can display a
graphics file -- then save that file under a new, misleading name
("MY_THOUGHTS_ON_GARDENING.DOC" will do). (Note: For "controversial"
text, my preference would be to embed the original text file within a
word processing format document; for pictures and, possibly, movies, I
would probably choose a slide presentation format document, since
presentation files are typically quite large anyway, so the extra size
will not be as immediately noticeable as it would be for pictures
pasted into a .doc file.)
Take care not to paste too many of these pictures into any one file,
since the applications that can open .doc, .xls files, et cetera, often
have poorly documented internal limits, and if you exceed these you
might find that the file (and therefore all the pictures embedded
within it) can no longer be opened or accessed. Also be aware that some
applications, including most Microsoft ones, have a bad habit of
automatically recording potentially incriminating information, such as
"last modified by {whomever} on date {whenever}", within the file's
metadata (look under "Properties" in the "File" menu); you will want to
purge this, if possible, before saving the file.
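For those who want to script the embed-and-sanitize procedure, here is
a rough sketch using the third-party "python-docx" library (the picture
file names are placeholders, and this is an illustration of the idea,
not a guarantee; always re-check the saved file's "Properties" yourself
afterwards):

  from docx import Document
  from docx.shared import Inches

  # Build an innocuous-looking document with one picture per page.
  doc = Document()
  for picture in ["img_001.jpg", "img_002.jpg"]:  # placeholder names
      doc.add_picture(picture, width=Inches(6))
      doc.add_page_break()

  # Blank out the identifying metadata that Office-format files
  # record automatically ("author", "last modified by", and so on).
  props = doc.core_properties
  props.author = ""
  props.last_modified_by = ""
  props.comments = ""
  props.title = ""

  # Save under a new, misleading name. (python-docx writes the modern
  # .docx container, hence the extension.)
  doc.save("MY_THOUGHTS_ON_GARDENING.docx")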
Although you should never consider the above technique a safe
alternative to encryption, it can be a useful addition to your defence
in depth strategy, because, from the point of view of the attacker, he
can no longer just run a quick scan of file names within a directory,
hoping to find a match with known "controversial" files or content. The
actual content is now obfuscated away as a "binary object" within the
wrapper of a Microsoft Word / PowerPoint file; to see, and recognize,
this content, the attacker has to suspect that
"MY_THOUGHTS_ON_GARDENING.DOC" is in fact about something quite
different from how to grow better rows of leeks, then has to open up
the relevant .doc file and page up and down through it until something
interesting shows up on the screen. A diligent, patient, skilled
attacker will do this; a great many ordinary policemen won't.

A final special note here concerns how you sanitize "controversial" or
personally identifying content in pictures and other computer
documents.
Do NOT just apply a PhotoShop (or other bitmap graphics editing software)
"filter" to the personally identifying portions of a "compromising"
picture and think that this will hide your identity. Several fugitives
have been caught by the authorities for assuming just that. The key issue
here is that a single filter -- for example one that "twirls" the bits
around a particular section of the picture so that they look like a
pinwheel -- can usually, with the (always happily given) assistance of
the software company that devised the filter in the first place, be
reversed by the authorities to reveal the original configuration of the
various bits / pixels that have so been manipulated, in so doing
revealing the original, incriminating data.
The same concept also applies to any other kind of data or file in which
a "transform" only hides or in some way changes, but does not completely
destroy, the actual data that is supposedly being hidden. Notorious
examples of the latter include Adobe Acrobat (.pdf) files that the U.S.
Department of Defence thought had been "redacted" via black over-writing
of the sensitive text (it turned out that the "blackout" was on an
internally superimposed layer which could be, and which was, easily
removed by anyone with the least bit of skill in manipulating Acrobat
documents) or Microsoft Word documents using the "track changes" feature,
where parts of the text that the editors thought had been deleted, were
in fact easily visible to anyone who bothered to turn the "track changes"
feature back on. Examples of this kind of thing are legion in the world
of digital forensics.
To PROPERLY sanitize a picture, you have to COMPLETELY DESTROY the
pixels / bits in the affected area, either by over-writing them with a
tool like the Paintbrush or Eraser, or, preferably, by selecting the
sensitive data area and using the "Cut" command so that nothing but a
white hole is left.
Furthermore, when saving a picture that has thus been sanitized, make
absolutely sure that the file has had all of its component "layers"
merged / melded into a single layer before the file is finally written
to the hard drive. The reason why this is important is that some file
formats -- particularly the more sophisticated ones like Adobe
PhotoShop (.PSD) -- have a "multi-layering" ability that allows a
skilled artist to superimpose several independently created graphics
pictures, one on top of the other, in the manner of several
stained-glass windows stacked on top of each other, so as to achieve
interesting or complex visual effects when someone looks at the final
picture.
The problem here is that some of these formats also record changes,
such as "transforms" that would otherwise permanently distort or
eliminate the original pixels in the picture, as temporary "layers"
that can be removed or reversed, and these layers are preserved when
the file is saved to disk. This means that an experienced attacker can
simply load the file into his copy of PhotoShop, rummage through the
layers until he finds one that looks like a data-obscuring change, and
then remove the offending layer. Presto! Instant incrimination! In much
the same way, a word processing program with multiple levels of "Undo"
carries with it the risk that an attacker might "undo" changes that you
thought had eliminated parts of the document that you didn't want to
have preserved.
A simple way to ensure that this hazard doesn't affect you is to always
save a graphics file in a less sophisticated format, such as JPEG
(.jpg) or Bitmap (.BMP). These formats cannot save multiple layers and
must merge all the layers into a single one prior to finally saving the
file.
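As an illustration, here is a minimal sketch using the Pillow imaging
library for Python (coordinates and file names are placeholders); it
destroys the pixels in the sensitive region and then saves to a format
that simply cannot carry layers or undo history:

  from PIL import Image, ImageDraw

  img = Image.open("compromising.png")  # placeholder file name

  # COMPLETELY DESTROY the pixels in the identifying region by
  # painting an opaque rectangle over it; the original values are gone.
  draw = ImageDraw.Draw(img)
  draw.rectangle((100, 50, 300, 200), fill="black")  # left, top, right, bottom

  # Convert to plain RGB and save as JPEG: the format supports neither
  # layers nor transparency, so everything is merged into one flat
  # bitmap. Pillow also drops EXIF metadata on save unless you
  # explicitly ask it to keep it.
  img.convert("RGB").save("sanitized.jpg", quality=90)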
The same general principle applies to "remnant" data in other formats.
If you are saving a word processing document, why not just save it as
basic ASCII text? When you do this, you can be sure that the only thing
that will end up in the document's data file on the hard drive is what
you actually intended to save.
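A quick sketch of that idea, again assuming the third-party
"python-docx" library (file names are placeholders): extract only the
visible paragraph text and write it to a plain text file, leaving
behind the metadata, revision history and embedded objects of the
original:

  from docx import Document

  doc = Document("draft.docx")  # placeholder file name
  with open("draft.txt", "w", encoding="ascii", errors="replace") as out:
      for paragraph in doc.paragraphs:
          out.write(paragraph.text + "\n")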
Here again, we see one of the basic principles of data security, at work:
"the mortal enemy of security, is complexity". Or, put another way: "Keep
it simple, stupid."
Is Your Digital Camera Going To Testify Against You?
----------------------------------------------------
One final comment about digital graphics files. Many otherwise
intelligent and conscientious data hiders aren't aware that if the
pictures in your "My Pictures" folder were self-created (that is, they
were originally taken by yourself, using your own digital camera), the
camera itself can sometimes be used as a piece of evidence against you
in a trial. The idea here is that pictures (particularly in their
original, "raw" format as stored on the camera's flash RAM memory)
taken by different digital cameras have unique noise characteristics
that an expert forensics investigator can use to trace a picture back
to the particular camera that captured it.
This is obviously NOT what we want to have happen, and to try to reduce
the chance of it being used against you, please check out the steps
detailed at the following URL:
http://www.instructables.com/id/Avoiding-Camera-Noise-Signatures/.
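Purely to illustrate the general idea behind those steps -- resampling
and recompression destroy much of the faint per-pixel sensor noise that
this kind of fingerprinting depends on -- here is a rough Pillow sketch
(file names are placeholders). Treat it as a damage-reduction measure
only; it reduces, but does not reliably eliminate, the fingerprint:

  from PIL import Image

  img = Image.open("raw_photo.jpg")  # placeholder file name

  # Downscale, then re-save at a modest JPEG quality. The sensor-noise
  # pattern lives in the least significant bits of each pixel;
  # resampling and recompression wipe out much of that signal.
  w, h = img.size
  small = img.resize((w // 2, h // 2), Image.LANCZOS)
  small.convert("RGB").save("masked_photo.jpg", quality=75)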
What is especially scary -- but also highly informative -- about the
above Webpage is the sublink (http://www.ws.binghamton.edu/fridrich/)
that it contains to the personal Website of one "Jessica Fridrich", a
professor at Binghamton University in the United States who seems to be
a walking encyclopedia of "how to do forensics on digital pictures and
digital data". Note that this very skilled lady does the bulk of her
publicly declared work for lovely little social help agencies like the
little old U.S. Air Force (can there be ANY doubt, therefore, that the
other, _private_ work that she does in breaking encryption keys and
"finding out where the perverts have hidden the steganographic data" is
on behalf of certain U.S. agencies with the letters "C", "I" and "A",
and "N", "S" and "A", in their names?)
I add this comment just to give you a bit of a flavour of who you're up
against when you try to hide data from the professional forensics
experts employed by governments and, sometimes, the police. Have no
doubt: people like Ms. Fridrich are extremely intelligent and highly
motivated, and you have to remember that they do this stuff for a
living, each and every day. She gets paid handsomely by the U.S. spook
community to give them the tools to enforce American power all around
the rest of the world, because she knows very well that "knowledge and
information ARE power". She believes 100% in her work; she is gung-ho
to help Uncle Sam catch and jail all the "perverts, child molesters,
terrorists and drug dealers who are such a threat to the American Way".
In the cyber-punk and cypher-punk community -- of which, by definition,
you are a member, if you're reading this document with a view to
implementing its suggestions yourself -- it may be very difficult for
you to understand the mind-set of someone as obviously intelligent and
interested in computers and encryption technology as Ms. Fridrich; by
now, I'm sure you're asking, "...but this lady is a hacker just like
ME, how can she be working for the Pentagon?". But, my friend, she's
NOT like you. She doesn't believe in any of the civil liberties or
personal privacy concepts that you do. She is as much like you, in the
sense of having a common interest in technology, as are two soldiers
from opposite sides in a war, both of whom have a common interest in
knowing how to use a gun.
Nations like the United States (or China) have hundreds of thousands of
well-paid, well-motivated, "patriotic" computer scientists like Ms.
Fridrich, and while they understand and use the same technology that you
do, their basic outlook on life, human rights, freedom of speech, etc.,
is far different from yours or mine. They are the enemy. Never forget
that.
To beat someone like Ms. Fridrich, you have to do your homework, use all
the available tools consistently and thoroughly, never making one single
mistake. THEY don't have anything to hide (that I know of); YOU, do. It's
like you're the goalie on the football pitch with Mr. Beckham bearing
down on you. Can you keep the ball out of the net? Yes, you can; but you
won't do so by not taking things seriously.
By Yourself
-----------
The only safe world is a lonely one, I'm afraid. So TRUST NO-ONE EXCEPT
YOURSELF. Don't even trust me. Evaluate everything that I say to you in
this document, against other authorities who are in the know about
security and decide for yourself if I'm talking rot or giving you good
advice.
The concept of "security by obscurity", that is, thinking that the bad
guys won't catch you or target you if you just don't provoke them,
currently has a bad reputation in the security industry, but in fact it
is a highly effective technique if used correctly and in the context that
it's a necessary, but not sufficient, condition of your overall personal
security and privacy plan. James Bond may make for good movie plots but
in fact, becoming "well known" is your worst nightmare come true.
You need to be quiet, unremarkable, average in every respect, just like
all successful spies are in real life. The minute that your activities
come to the attention of the authorities, you have instantly lost 75% of
the battle and you need to drastically change your strategy and tactics
to account for the new circumstances. Probably, this means curtailing any
controversial activity until you are good and sure that the heat is off
you. Doing so can take weeks, months or even years. I'm sorry that it's
that way, but I'm here to tell you the truth, not what you want to hear.
Prepare for the worst and don't think that "it can't happen to ME"... yes
it CAN and it all too often DOES. Assume that you're being watched, each
and every waking hour, particularly when you're using your PC, and try to
think what the Man least wants you to do, all the while; then, do that.
Assume that all your regular defenses have failed, and think through what
you will do in each attack scenario. Think it through again, then a
second time, then a third, until knowing what to do is second nature.
It's not by accident that the U.S. Army says, "The More You Train, The
Less You Feel Pain".

Pay attention, play smart and trust no-one. There is no other way, in the
world of PC security.
-----------------------------------------------------------------------
Computer Operating Systems
--------------------------
Before I get going on this section there is a basic recommendation that
everyone reading this should take seriously. Namely, GET YOURSELF A
(REASONABLY) FAST, MODERN COMPUTER. (And make sure that it has a good,
fast hard drive. And make sure that you are using a fast Internet
connection, although here there are some special considerations.) Why?
Several reasons.
First, you will be using encryption for a great many purposes.
Encryption, by its very nature, involves complicated mathematical
calculations, which put quite a bit of stress on your computer's CPU (its
"brain"). The faster your CPU, the faster it will get all the crypto
stuff done, which is a good thing.
But secondly, and actually far more importantly, you have to understand
that the basic idea of this document is to teach you the best ways in
which to defend yourself, when that dreaded jackboot comes kicking at the
door. Would you prefer, in that situation, to be using a computer that
takes 10 minutes to shut down, or one that requires 10 seconds? Not a
hard decision, is it?
One special note here: although in some ways, large-capacity external
hard drives that connect to your PC via USB 2.0 cables are an
attractive option, because of their portability, disposability and so
on, I have found that they can be MUCH slower than internal hard
drives. Most of the time, when you are using small data sets (say, a
few encrypted files), you will never notice this; but try copying
multi-gigabyte files across a USB cable and you will very quickly come
to appreciate the difference in speed. This is not necessarily a reason
not to use external hard drives... just plan in advance and compensate
for the slower speed when you know that you will be handling
"sensitive" data.
-----------------------------------------------------------------------
This part of the document will compare Microsoft Windows XP (assuming
that you have all the most recent patches -- see below, however, for a
warning regarding Windows Update) with recent versions of Linux.
I'm deliberately NOT discussing operating systems (e.g. MacOS, Windows
Vista, BSD, Solaris) that I know little about, although I will mention
things about them where relevant. Just as a general comment, though, for
modern versions of the MacOS, 10.x that is, you should assume that it is
more like Linux than it is like Windows; however, unlike Linux, the MacOS
has a significant amount of proprietary Apple program code in it, so some
Mac features will differ quite a bit from their Linux equivalents.
Nor will I be discussing security for obsolete operating systems like
Microsoft Windows 2000, NT, Me or 98 / 98SE / 95, or old versions of
Linux or Unix, because in general, while these are not necessarily
insecure, there are rarely up-to-date security tools for them that can
cope with today's threats.
For example, if you can find an encryption program for Windows 98, there
is a very good chance that it will not support the modern cryptographic
algorithms or key lengths needed to protect your confidential data from
"brute force" key hacking attacks. An expert might be able to keep a
Windows 98 computer secure, but I have to assume that anyone reading this
document isn't an expert.
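To see why key length matters so much, consider a back-of-envelope
calculation (the attacker's guessing rate below is an illustrative
assumption, not a measured figure):

  # Rough brute-force arithmetic: average time to find a key when
  # guessing one billion keys per second.
  SECONDS_PER_YEAR = 60 * 60 * 24 * 365

  for bits in (56, 128, 256):
      keys = 2 ** bits
      years = (keys / 2) / 1e9 / SECONDS_PER_YEAR  # success at halfway
      print(f"{bits}-bit key: about {years:.3g} years")

A 56-bit key (the old DES standard, typical of what a Windows 98-era
tool might offer) falls in about a year even at this modest assumed
rate -- and specialized cracking hardware is far faster -- whereas a
128-bit key works out to something like 10^21 years. That is the gap
between "obsolete" and "modern" cryptography.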
For Microsoft Windows Vista, the thing to keep in the back of your head
is, "like Windows XP but worse in almost every way". You'll see why I'm
saying this later on, but one special issue that you should remember
about Vista is that many of its so-called security features, such as
they are, are available only in the more expensive versions of Vista
(for example "Ultimate"), so if you have the plain old garden variety
"Home Basic" version of this operating system you are even more behind
the 8-ball. Not that it really matters much, because you shouldn't be
using ANY version of Vista, in my opinion.
One note about Linux: As anyone who has ever investigated it knows,
Linux isn't a single product; instead, it's an almost anarchic,
ever-changing collection of "distributions", that is, packaged,
developed, fine-tuned versions of the basic Linux "kernel", which is
updated and maintained by Linus Torvalds. This leads to the obvious
question, "when you say 'Linux', which distribution are you talking
about?"
My answer to this question would be, "I'm talking about any of the
major distributions that are freely available on the market today." I'm
being deliberately vague in saying this so I don't get drawn into the
flame wars about "which distro is best from a security perspective"...
the truth is, they ALL are, as long as you know what you're doing, and
actually DO it. But just so that everyone has a general idea about what
I consider to be "major" distributions, here is my list:
1. Ubuntu and its clones (Kubuntu, Xubuntu) (a Debian-derived family)
2. Fedora (RedHat) (source code by Red Hat)*
3. Mandriva (formerly Mandrake, source code by Mandriva)
4. SuSE (source code by SuSE but now owned by Novell)*
5. PCLinuxOS (originally derived from Mandrake / Mandriva)*
6. Knoppix (a Debian-derived family known for its "Live CD" approach)
7. MEPIS (a Debian-derived family)*
8. Sabayon (a Gentoo-derived family)
9. Mint (a Debian-derived family)
* : Refers to a distro based in the United States; see below for why this
is important.
Unless otherwise specified, you should assume that when I say "Linux" I
really mean "Ubuntu, Fedora, Mandriva and SuSE". In my opinion these four
distros are the most important ones and the ones that people are most

likely to be using.
You should be aware that there are, in fact, many viable alternative
operating systems these days, and it may be worth your while to
investigate one or more of them. The BSD group (NetBSD and FreeBSD) in
particular has a good reputation for security and is largely, but not
completely, compatible with Linux applications; however, most of the
BSD group's security features are really oriented to fending off
attacks from remote (network-based) intruders, rather than the more
extreme "the cops now have your PC in their lab" scenario that I will
describe below, so they're not as useful as they first might seem.
There are also operating systems like Solaris, AIX and so on, but these
are not very well suited to the casual home user.
As a general comment, I should point out that there is one big
advantage that both Linux and the MacOS have generically: namely, your
average, run-of-the-mill street cop is much less likely to know
anything about these operating systems, compared to Windows.
This might not look like it's very important at first blush, but in fact,
it can make a huge difference, especially if, in those crucial few
seconds just after the SWAT team kicks down your door, they don't know
how to act quickly and "secure the perp's data", because the only GUI
interface that they're familiar with is the good old Windows one. This
extends to a wide range of other activities, for example from simple ones
like "where is 'My Documents'" to the more obscure ones, such as "where
do I find the Registry files?".
There is, of course, an important caveat to all this : if you get
attacked (either initially or later, once your PC is in the tender hands
of the police forensics lab) by a "pro", that is, a really well-trained
police forensics expert, these guys know Linux and the MacOS very well
indeed, so don't think that just by using a different operating system,
you're going to be buying yourself a lot of additional intrusion
resistance.
Think of using a non-Windows operating system in the same perspective
as "security by obscurity": it can add anywhere from a little to a lot
of security, depending on the circumstances, but you'd be crazy to rely
on it as your _only_ defence.
-----------------------------------------------------------------------
General Comments About Each Operating System
---------------------------------------------
Although both Windows XP and modern versions of Linux sort of look the
same (they both have a "windowing" user interface that uses the mouse,
has pull-down menus and icons and so on), there are in fact some very
dramatic differences between them that have a big effect on your
computer's overall security.
Here are a few:
Who Owns Your Operating System?
-------------------------------
Microsoft Windows is a private, "closed-source", "proprietary"
(commercial) piece of computer software; that is, it is 100% owned by
Microsoft Corporation; you can't examine so much as one line of its
source code (that is, the original language in which software is
written by computer programmers, as distinct from the "binary" or
"object" code, which is an obscure collection of bits and bytes that
only computer CPUs can understand).
This stands in sharp contrast with all versions of Linux (but not the
MacOS, which is a "closed-source" system like Windows: the MacOS is
owned by Apple just as Windows is owned by Microsoft), because Linux is
a publicly accessible, "open-source", non-commercial system. ANYBODY
can create his or her own version of Linux simply by recompiling the
source code, which is freely available for anyone and everyone to check
and review.
This is an extremely important point. Why? Here's why: Because of the
fact that Windows is "closed-source", THERE IS NO WAY YOU CAN BE SURE
THAT IT HASN'T BEEN "BACK-DOORED" BY MALICIOUS THIRD PARTIES, TO ALLOW
THEM TO SNOOP ON WHATEVER YOU DO ON OR WITH YOUR WINDOWS PC. Without
being able to examine the source code of a computer program, you can
never be sure that the developer of that application hasn't "added a
little something extra" that allows the program to do things that you
never anticipated or intended; this is, of course, exactly how most
computer viruses and malware operate.
With an open-source system, any attempt to hide this kind of crap inside
either the operating system, or inside the application programs that run
on top of it, would be very difficult to implement, because the original
"source code" programs from which the computer binary code programs are
compiled, are all carefully checked by third party experts in the major
open source software repositories, before they are made available for
download and use by you and me.
Backdoors
---------
As a matter of fact, there is abundant evidence that Windows (and
probably the MacOS as well) very definitely HAS been "back-doored", in
possibly both of the following two ways:
1. They have probably been back-doored in one or more very subtle,
hard-to-detect ways to allow remote access by U.S. spy agencies like
the CIA, NSA, FBI, etc., or, possibly, just to allow these spooks to
remotely disable protective measures like your firewall, so that the
spies can then exploit other vulnerabilities and thereby get secret
access to your PC.
One of the interesting things about this kind of back-door is that you
are in fact very unlikely to have it used against you, because the NSA,
CIA etc. would probably only enable it in situations that are very
important to them, e.g. you're suspected of being Usama Bin Laden's next
of kin. The minute that they turn on this backdoor and it gets detected
by someone in the computer security industry, the NSA's cover is blown
and the back-door's value would go down dramatically. Thus they have a
strong incentive to use it only in highly important cases, or in
situations where there's almost no chance that they'll get caught. You
and I have no way of knowing what the NSA's priorities are, in this
respect; we can only speculate.

2. The other back-door, which you are far more likely to have used
against you, is one (and the evidence is that it's not just one, in fact
it's many of them) in which Windows secretly keeps track of everything
and anything that you do on your PC -- e.g., what Websites you accessed,
what files you opened (and what content they contained), when you used
your computer, and so on -- whether or not you thought you had "hidden"
or encrypted it.
The idea here is that you THINK that you have wiped all "incriminating"
evidence off your Windows PC, but meanwhile, the operating system has
been specifically rigged to allow a cop with the correct back-door code
to tap into the secret tracking database and happily download all the
"smoking gun" information that you thought had been purged. There is,
again, abundant evidence that this kind of back-door not only exists,
but has actually been integrated with the most popular law enforcement
forensics (electronic snooping) programs, for example EnCase. (Guidance
Software, the company behind EnCase, has been very cagey as to whether
they have or haven't had access to this kind of thing; they won't
answer questions directly, so you would have to assume that the answer
is "yes". It's only the extent and flexibility of the backdoors that's
in question, IMO.)
By the way, lately, Microsoft has been more or less saying that this is
exactly what they're doing -- check out the following story:
http://arstechnica.com/news.ars/post/20080429-new-microsoft-law-enforcement-tool-bypasses-pc-security.html.
What's interesting about this is, note how it has this little comment
about "decrypts system passwords". This casual comment points out that
Microsoft has no problem at all, with enabling what can only be described
as a backdoor, for its chums in the U.S. government. Ask yourself -- if
they're admitting this out loud, what AREN'T they admitting? This one
thing, in my opinion, means that you're nuts to use Windows for anything
that you seriously want to keep secure.
Can you evade / defeat these kinds of built-in snooping / tracking
functions, given that they are hard-coded right into the basic operating
system? Yes... I will show you some techniques in what's to follow,
however you have to ask yourself, "can I ever be SURE that I've defeated
all the back-doors?". Remember that Microsoft and the U.S. government
could theoretically be adding new exploits with each "patch" that you add
to your computer. Unless you are very good and very persistent, there is
always the chance that they'll enable one that you won't find. The
decision is yours.
Vista -- Just Say No
--------------------
Incidentally, this situation with Windows is much worse in the most
recent version of Windows, that is, "Vista". The reason why is that,
unlike every other computer operating system before it, Vista
incorporates an extensive series of so-called "DRM" or "Digital Rights
Management" measures, put in at the demand of the U.S. recording and
movie industries, designed to lock down your ability to copy or display
multimedia content (like songs, movies and so on) without their
permission.
For example, one of the most hated features of Vista is that it checks to
see if each and every component -- software and hardware -- of your

computer, enforces these DRM restrictions (over which of course you have
absolutely no control). If it finds EVEN ONE component, say, a video
card, that doesn't have DRM copy protection built in, Vista cripples the
output of your video card so that you get only a tiny, postage-stamp
sized screen instead of that nice big 50-inch plasma TV output that you
thought you had paid for. You can't disable this feature or turn it off,
nor can you even find out the basic details of how it works... these are
all secrets held by Microsoft and Hollywood.
Vista's DRM infrastructure would be the ideal hiding place for government
spying modules, because by definition it is hidden from the computer user
and is remotely controlled by a third party (either Microsoft, Hollywood,
the government, or all three) that the computer user has no knowledge of
or control over.
Why is this relevant to ensuring that your computer is safe from
intrusion? Well, stop to think about it -- if Microsoft has teamed up
with Hollywood to take away 75% of your control over how your own
computer works, when accessing "copyrighted" content (something that is
right in your face every time that you try to play a DVD on your
computer, which is immensely unpopular with Microsoft's own customers and
which requires all sorts of secret code that you can't change or
examine), what do you think the chances are that they have ALSO
collaborated with the CIA, the NSA, etc., to also slip in a little hidden
spying program along with the DRM stuff? Most users would never notice
the spying program because, unlike the DRM nonsense, it works quietly in
the background, never bothering you, until the cops show up at your door.
One other thing about Vista that doesn't immediately look like it's
relevant to security, but actually is very important, is simply how
"bloated" it is. Vista sets new records for the amount of hard drive
space, RAM memory and CPU speed that it needs just to give you the
ability to see that nice shiny new user interface (which is of course
99.9% the same as the old XP interface, but for which Microsoft expects
you to buy a brand new computer and spend another 200 Euros... but I
digress). It is agonizingly slow at doing anything, particularly
booting up in the first place, and this is not good from a security
perspective, particularly in time-sensitive situations such as the
"kick down the door" scenario.
Remember, in this context, the famous, and very true, computer security
motto that "the mortal enemy of security, is complexity". Largely
because of its DRM encumbrance, Vista is fantastically complicated,
both in design and in implementation, and these characteristics make it
almost impossible to properly secure. (How can you "secure" an
operating system whose inner workings nobody -- except possibly a few
of Microsoft's own programmers -- can really understand? You can't
close off security holes in system components that you don't know
about, probably because Microsoft has never revealed them.)
But the most important implication of Vista's clumsy, bloated
implementation (and actually XP's as well; XP is not quite as bad, but
it's still bad enough) is that it will NEVER be able to run from a
removable device such as a CD-ROM or USB key (see below). You _must_
run it from a hard drive, and that's a bad thing, as we shall see
further on.
Bottom line on Vista: Just stay away. Windows is bad, from a security and
confidentiality point of view; Vista is hopeless. Using Vista means that
you might as well turn yourself in to the FBI right now and save everyone
a great deal of trouble with the legal paperwork.

Who Does The CIA Have To Recruit?
---------------------------------
Another very important point about the ownership issue concerns how many
people that governmental authorities would have to compromise, in order
to embed spying back-doors into an operating system.
Both Windows and the MacOS are owned by huge corporations located in the
United States of America, and decisions affecting what goes in and out of
these operating systems can be made by a very small circle of powerful
individuals, for example Bill Gates and Steve Ballmer in the case of
Windows or Steve Jobs in the case of the MacOS.
Under the U.S. "PATRIOT" Act, not only would Gates, Ballmer and Jobs have
to embed this type of spying function in their software if ordered to by
the U.S. government, but, on top of that, if they were ever to tell the
public that they had done so, they would be instantly subject to a long
prison term (it is a "crime" under the PATRIOT Act to tell people that
the U.S. government is spying on them, whether the spying is legal or
illegal).
In other words, it would only take the compromise (willing or otherwise)
of a very few select American CEO's and so on, for Windows, the MacOS,
etc., to be back-doored by the CIA, NSA, etc., and there would be a very
good chance that the compromised individuals would have to shut up about
the fact.
This situation is not the same with Linux and most other open-source
operating systems. Control over the source code for most international
versions of Linux, as well as for its close relatives NetBSD and FreeBSD,
is handled not by a single individual in a single country, but instead is
delegated to committees of expert programmers who are located all around
the world.
It would be an extremely difficult task for any intelligence agency or
government to subvert even a few of these programmers to get them to put
spying code into their programs, because each programmer in effect acts
as a check on each other one. If a single programmer, or even a few of
them, started to do this kind of thing, they would be quickly caught and
expelled from the open source development community.
It is noteworthy, by the way, that there are versions of Linux and other
open-source or semi-open source operating systems, for example Red Hat,
PCLinuxOS, MEPIS, OpenSolaris and AIX, that, although they do partly or
entirely subscribe to the open source development model, are
headquartered in the United States.
In view of the PATRIOT Act and so on, I'd advise you to steer clear of
these systems, even though they are probably much less likely to have
been compromised than Windows or the MacOS. While it is true that in
theory, you could examine (say) the source code for OpenSolaris, we are
talking about looking at literally millions of lines of dense programming
here, and there would still be a lot of places, device drivers for
example, where any American-based software manufacturer could be
compelled by the CIA and NSA to embed spying code.
Can You Trust Your Hardware?
----------------------------
There is also an interesting hardware-related side to this issue.


Consider, if you will, that just two American-based companies -- Intel
and AMD -- control over 90% of the CPU market (the CPU is the "brain"
of your computer that lets it add numbers, store things in memory and
so on), as well as almost as big a chunk of the desktop PC integrated
circuit market for the other chips that go onto your PC's motherboard.
Recently, Intel has proudly announced that its new "LaGrande" chip will
have an integrated TPM ("Trusted" Platform Module) embedded right
within the CPU itself. "TPM" is supposedly a security technology, but
what it's really meant for (or so its critics say) is to allow Big
Media (e.g. the U.S. recording and movie industries) to "lock down"
multimedia content so that all you can do is watch it under whatever
rules they decide to impose on you... you can't copy it, modify it,
back it up or anything like that.
Now, this would be bad enough if all that the TPM sub-chip within the
LaGrande CPU did was to enforce Big Media's money-grubbing, obsolete
business model on poor old consumers like you and me. But there is
another, much
more insidious possibility to it, namely that on top of the Hollywood
connection, the TPM chip -- or some other "secret" part of either the
LaGrande chip, or of Intel's other chips, possibly AMD's as well, or some
other "secret" motherboard integrated circuit (keep in mind that Intel
makes motherboards, as well, so what if they quietly embedded this within
the motherboard itself?) -- has a secret, HARDWARE-BASED backdoor just
waiting for the NSA, CIA, FBI, your local cop, etc., to turn it on, when
presented with the right sequence of bits and bytes.
Why is this important, from a "how secure is my PC" perspective? Well,
because, if the backdoor is at least partly implemented in hardware, and
because a hardware-based backdoor's inner workings can be at least partly
made "invisible" to the software that's running on the same PC (this is
much different from a software-based system which by definition has to be
loaded into the computer's RAM memory, for it to function at all; but if
it's loaded into RAM memory, then at least theoretically it can be
investigated and discovered by other software on the computer), all that
a "patriotic" American computer operating system manufacturer like
Microsoft or Apple would have to do, to enable the backdoor, is to put a
hidden programming call (called an "API" or "Application Programming
Interface") into the OS. When the U.S. spooks want to turn it on, all
they would have to do is fire a few innocuous bits and bites at this
particular programming interface and, bingo! instant access to your
private data and communications!
The point is that the act of turning on the backdoor API would look
totally innocent -- just a few bytes written to a memory location, with
no obvious further activity resulting from the action -- even to an
experienced computer security expert who was actively scanning the
system for signs of intrusion at the time. That's the miracle of
hardware (and a TPM): all of the "action" takes place behind the
silicon walls of whatever chip is doing the dirty work.
This is NOT just idle speculation; for a very good explanation of how
this could be accomplished right on the CPU, check out
(http://www.usenix.org/event/leet08/tech/full_papers/king/king.pdf).
What is
scary about this possibility -- who knows, the NSA and CIA may already
have it quietly stashed away on every Intel- and AMD-based CPU -- is that
theoretically, the secret hardware-based backdoor might be triggered

without even having an operating system (see the above-noted paper for
how they do this with a single, "magic" UDP data packet). This kind of
compromise would make very good sense for the U.S. intelligence
community, since the CPU is one of the few types of computer chips that
by definition have to be distributed with every PC. Yet more reason to
stay away from American equipment!
As would also be the case with a software-based backdoor, the American
spooks' work would be made much easier by at least the passive cooperation of the operating system manufacturer. This wouldn't be hard to
do with organizations like Apple and Microsoft because of crap like the
PATRIOT Act, but it would be much more difficult (not impossible!) for
software like Linux, for the same reasons why it would be difficult for a
software-based backdoor. (Note: There is an active debate currently in
the security community as to whether the CIA, NSA, etc., could do this
even without the OS to help them, maybe by secretly injecting the remote
turn-on code into some widely downloaded application like, say,
RealPlayer or the STEAM gaming patch update network, or maybe into a
device driver. Personally I think that the CIA would be much more likely
to just turn the screws on Gates, Ballmer and Jobs, but you can't
completely rule out the possibility that they'd try the other route, as
well. The CIA is very patient and very thorough... they'll keep trying,
until they find a method that works, and they rarely rely on only a
single mechanism by which to gain secret information. They're
professionals, and they're very, very good at what they do. You can take
that advice to the bank, my friends.)
Because the vast majority of desktop and laptop PCs sold today are built
with either Intel or AMD processors, the security-conscious consumer
really has limited options here. If you can get a PC with one of the
Taiwanese-built VIA CPUs, I'd suggest that you do so (they're typically
somewhat cheaper than, but a bit slower than, the Intel or AMD models),
but if you have to choose between Intel and AMD, all things being equal,
pick AMD, since it appears that AMD is a little less in bed with the
entertainment industry even though it's an American company. Another
option might be some other computer with a different chip, for example
the "Geode" series.
Just keep it in the back of your mind, that you have to assume that the
hardware may have been pre-engineered to set up a backdoor on your PC...
but it would probably (maybe) need a compliant operating system to turn
it on.
Incidentally, there is a side-note to all the above that's well worth
mentioning. Let's assume, for the sake of argument, that, like 99% of all
normal PC users, you don't have a realistic choice but to use a computer
with either an AMD or Intel CPU in it. But, like the good little paranoid
that you are, you are worried about limiting the damage to your privacy
and security posed by the little old CIA / NSA backdoor that might just
be hiding somewhere in the millions of tiny integrated circuits on the
CPU. How are you going to protect yourself against this kind of
fundamental, and difficult-to-detect, threat?
Although there's no 100% effective "mitigation" step, one thing that
might be useful is to simply disconnect -- and here, I mean "physically
disconnect" as in "pull the Ethernet cable from its little RJ-45 plug-in
port" -- from any network, when you are doing security-sensitive
operations such as entering passwords for encrypted storage containers
and so on, then power down the computer while the network cable is
physically disconnected. Conversely, when you power up and first enter
the passwords for your encrypted containers, do so with the network cable
physically disconnected. (Don't worry, it shouldn't stop you from being
able to get to the Internet, when you eventually do decide to plug in;
most network cards and operating systems can automatically re-synchronize
with the network when they sense that the cable -- the "physical medium"
-- is now available where it previously was not.)
The reason here is pretty straightforward; a hardware backdoor (let's
say it functions like a keylogger), which would (possibly) have to
operate completely independently from whatever operating system was on
the PC, would have to transmit sensitive captured data back immediately
(or at the very least, at the point at which it noticed that the power
was about to be turned off on the compromised PC), because the hardware
backdoor probably wouldn't have anywhere that it could safely store this
captured data in between power cycles on the computer. (Yes, it
theoretically COULD do something funky like put the decrypted passwords
in your BIOS's non-volatile Flash RAM chip. The problem with doing this
is, a careful defender would be able to see the passwords as well and
would thus be tipped off that his PC had been backdoored, and the whole
point of a hardware backdoor is to avoid leaving any trace that it's
there. So I think it unlikely that the hardware backdoor would try to
preserve this kind of evidence in the hope of queueing it up for the next
time that it saw the network cable being connected.) If you physically
pull the network cable (or just prevent your Wi-Fi chip from ever
associating with a wireless access point), the hardware backdoor has no
way of sending your private passwords back to our friends in Langley and
Fort Meade, U.S.A.
So What's The Bottom Line On Ownership?
---------------------------------------
The bottom line here: Windows starts with at least 30 points out of 100
against it, just because it is (a) a closed-source operating system and
(b) because it is headquartered in the United States. The MacOS is only
better in the sense that its market share is probably not big enough for
the spooks to have paid a lot of attention to, but this could change at
any time.
Just on the "how is it programmed and who owns it" front, most versions
of Linux are far more trustworthy than either Windows or the MacOS, but
steer clear of Red Hat and other America-based versions since they could
at some point be compromised like Windows undoubtedly already has been;
this would be difficult to do, but not impossible. The CIA and NSA have a
lot of people and a great deal of money. What may seem "impossible" to
you, might just be a slow day's work for them.
How Is The Operating System Designed
------------------------------------
All modern operating systems, including of course Windows, most versions
of consumer Linux and the MacOS, have a "windowing", "GUI" ("Graphical
User Interface") type of interface that lets you use the mouse, drop-down
menus, and so on; in fact, they all look more or less alike, these days.
This would lead a casual observer to conclude that they're all basically
built in the same way. But any such conclusion would be very wrong --
there are dramatic differences, many of which impact security and
confidentiality, in the ways in which the various operating systems are

architected. I will try to explain a few of these, below.


Integrated vs. Non-Integrated GUI
---------------------------------
Before you read what's next, you have to appreciate that a computer can
operate very happily without a "GUI" interface. Graphical user interfaces
were invented largely to make operating the computer more simple and
intuitive for dumb old human beings like you and me, but only at the
expense of considerable complexity and of slowing down the computer by
quite a bit (because moving all those pixels around on the screen takes a
lot of CPU and graphics card horsepower).
The alternative to a GUI, incidentally, is a "CLI" or "Command Line
Interface", which involves you having to type in sometimes cryptic kinds
of commands (example: "cat MYDOCUMENT.TXT | more") at a command line, if
you get even ONE character wrong in the command it just gives you an
error message and doesn't do what you expected or intended it to do. CLI
interfaces were how you had to interact with a computer, before the
bright people at Xerox figured out how to create a GUI interface.
Why is this important when considering security? The main reason is, some
operating systems -- particularly Microsoft Windows -- have "baked in"
their own particular kind of GUI (it's called "Windows"... duh) into the
operating system in a way in which you cannot remove it, nor can you
substitute a different GUI interface for the one that Microsoft wants you
to use.
Here is what the Windows interface model looks like:
   +-------------------------------------+
2. | GUI level Windows interface and OS  |
   +-------------------------------------+
1. |          Computer Hardware          |
   +-------------------------------------+
This is BTW also true of the MacOS, just not to the same degree.
As a matter of fact, to make matters worse, Microsoft has not just baked
the Windows GUI into the underlying Windows operating system (they're
both called "Windows" but actually they are two very different pieces of
software), but it has also baked in a number of other highly insecure
components, the Internet Explorer browser and its associated ActiveX
browser extension protocol, in particular (you can't completely remove IE
nor can you remove a lot of the junk that goes with it).
Neither Linux, nor most of its relatives like BSD, work this way. Linux
has a three-layer software architecture:
   +-------------------------------------+
4. | Enhanced GUI level "Window Manager" |
   +-------------------------------------+
3. | Basic GUI level X-Windows interface |
   +-------------------------------------+
2. |      Basic CLI level Linux OS       |
   +-------------------------------------+
1. |          Computer Hardware          |
   +-------------------------------------+

This might look like a more complicated model but actually it is much
simpler, because -- and this is a very important point when considering
security -- you can run the computer with only a CLI (Command Line)
interface, being sure that nothing is being managed / executed / tracked
"behind the scenes" by some aspect of the GUI interface. Now, of course,
there could be a spying application running anyway, because all these
modern operating systems are "multitasking", that is, they can run many
programs all at the same time.
For Your Convenience... NOT
---------------------------
What I'm referring to, however, is the possibility that the GUI interface
itself, or some other application that is running with the GUI interface,
is enabling some kind of tracking of your actions, either innocently or
deliberately.
A classic example of this concerns copying files. Suppose, for example,
you want to copy a bunch of pictures from a directory (folder) on your
hard drive, on to a USB key. Almost all modern GUI interfaces will, "for
your convenience", automatically create "thumbnail" (smaller) versions of
the pictures in any folder that you access... this is presumably so you
can tell which picture is which, before you decide to keep the pictures
of Aunt Nellie's 80th birthday party and throw the pictures of your last
drunken March Break table dance into the trash bin.
The problem is, almost all modern GUI interfaces, when they are creating
these thumbnail versions of the original picture, do so by secretly
creating a reduced size version of the original picture in a hidden
folder or directory. Needless to say, if the original pictures included
"controversial" content, and you wiped them from your hard drive, the
"thumbnail" versions ARE STILL RETAINED IN THE HIDDEN DIRECTORY AND CAN
VERY EASILY BE ACCESSED BY AN INTRUDER, LIKE A COP. This one "feature"
can easily land you in jail, even if you've carefully sanitized just
about everything else.
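To give you a concrete idea of where to look (these locations are typical
examples and vary by operating system version and desktop environment, so
verify them on your own machine), on a GNOME- or KDE-based Linux system
the thumbnails usually live in a hidden folder in your home directory,
which you can securely wipe with the "wipe" tool discussed later:
wipe -rf ~/.thumbnails
Under Windows XP, look for hidden "Thumbs.db" files in each folder you
have browsed; under Vista, the equivalents are the "thumbcache_*.db"
files under your profile's "AppData\Local\Microsoft\Windows\Explorer"
folder. Wipe these with your secure deletion tool of choice.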
[Incidentally, there is a stupid little side-effect to this, which is
immensely annoying but which you have to take account of... let's say
that you have dutifully wiped all the "incriminating" thumbnails from the
appropriate hidden folder under "My Documents", then you open a Windows
Explorer window to the folder with the original "controversial" files,
and from there you dutifully wipe all these files. The problem is, the
second that you visited the folder containing the original files, your
trusty operating system will have just re-created the thumbnails
corresponding to them, again! (After all, it's just helping you see which
one is which, right?) The point here is that sometimes, the sequence in
which you undertake security-related activities, is just as important as
what you actually do. Sorry, but computers just work this way, don't
blame me!]
There is a very subtle but important point here. The GUI interface is not
_intentionally_ trying to leave an incriminating trail for an intruder or
forensic investigator; it's merely implementing a "convenience" tool to
make ordinary use of the computer easier. But in doing so, it is
accomplishing more or less the same function as a real, malicious
background spying program would do. There are untold variations of this
same concept in GUI user interfaces, ranging from the "Most Recently
Accessed Documents" list that they commonly store (note this one applies
both to Linux and Windows), to Microsoft Windows' trick of secretly
storing your list of recently accessed Internet URLs in the USER.DAT
file, to the browser history cache in your copy of IE or Firefox... the
list goes on and on.
With a CLI interface, conversely, the computer is going to do only the
one, specific thing that you told it to do, when you typed in a command
at the command line. For example:
cp /home/JohnJones/Pictures/* /media/sda1/Pictures
will copy every picture in the /home/JohnJones/Pictures folder, to the
corresponding folder on your USB key, WITHOUT doing any fancy business
like secretly creating a bunch of hidden "thumbnail" files.
Is the additional security worth the loss in convenience? Maybe yes,
maybe no, each user will have to choose for himself or herself. But isn't
it nice to know that under Linux, BSD, etc., you at least have the CHOICE
of doing things this way, instead of "trusting" the GUI interface not to
do things behind your back? (Note: There are a number of add-ons for
Windows, particularly the "PowerShell" one, that can give Windows users a
measure of the abilities already enjoyed by the Linux folk. It's well
worth your while to download and become familiar with one of these tools.)
One important note, here: some CLI interfaces will create a "history" of
commands that you typed at the command line, again for the sake of
"convenience". This sequence of commands can be highly incriminating in
the hands of an intruder, so make sure to find out where it is stored on
your computer (for example in a hidden file like
"/home/JohnJones/.bash_history") and wipe it out at the end of each CLI session.
One possible solution to this is as follows:
Use the "chattr" command to lock out the ability to update the file. As
root, access the home directory that you normally use when accessing
potentially problematic information. Type:
rm .bash_history
touch .bash_history
chattr +i .bash_history
The user will still have a command line history, but it will only apply
to the current session. When you log out, the .bash command history won't
be saved into the .bash_history file, so it won't hang around to cause
trouble for you later.
(For more information on this interesting Linux command, see:
http://www.cyberciti.biz/faq/how-to-make-a-file-unchangeable-unalterable-so-that-no-one-can-modify-it/).
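Another approach, if your shell is Bash, is to tell the current session
not to keep any history at all; these are standard Bash environment
variables, so treat this as a minimal sketch rather than a complete
recipe:
unset HISTFILE
export HISTSIZE=0
The first line stops Bash from writing a history file when you log out;
the second keeps no in-memory history either, at the price of losing the
up-arrow command recall for that session.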
Note that the Windows / DOS "attrib" command is functionally very similar
and can be used for many of the same purposes, if you want to
write-protect a particular file under Windows.
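For example (a sketch; "SECRET.TXT" is a placeholder file name):
attrib +R +H SECRET.TXT
This sets the file read-only and hidden. Bear in mind that "attrib" is
much weaker than "chattr +i" -- any user who can see the file can clear
these attributes just as easily as you set them.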
The same issue applies to log files, in which the computer's operating
system happily records every event that occurred while you were using it; for
example, if an intruder can see a log entry saying, "MOUNTED PGP SECURED
VOLUME JUNE 10, 2005", then he's got a very valuable guide-post to what
kind of encryption you were using to protect your data. Make sure to wipe
the log files clean, as well.
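As a minimal sketch of what sanitizing a log looks like under Linux (the
log file name, and the name of the logging daemon's init script, vary by
distribution, so treat these as placeholders):
sudo wipe -f /var/log/messages
sudo touch /var/log/messages
sudo /etc/init.d/syslog restart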

Always-On Backup -- Always A Threat
-----------------------------------
Watch out for another nasty little "convenience" feature that has started
to be implemented by some modern computer operating systems, notably the
MacOS -- "always-on backup". The concept here is that as a stupid
computer user, you are going to mistakenly delete your files, throw your
PC into the toilet while you're on a drunk, etc. and then you are going
to moan and groan, saying, "gee, if I had just done my backups...". So
the friendly computer operating system does this for you without your
knowledge, regularly backing up what it thinks are your data files, as
well as, depending on the implementation, various other system files as
well, to some hidden or offline location.
Now, while this kind of feature may be of some use to the average
computer user, it's not hard to see how it can potentially represent a
very serious threat to the confidentiality of our data. Very few of these
"always-on" backup systems have any ability to encrypt or otherwise
secure the files that they are backing up; in particular, they have
little or no ability to demand a secondary encryption key before handing
the data over to anyone whom the backup program regards as the PC's
"legitimate" owner... so, when Scotland Yard's jack-booted thugs break down your door
and grab your PC, all they have to do is threaten your room-mate with
jail time "but we'll be lenient, if you tell us that terrorist's password
to get on his Mac". Now they have the general password that you use to
log on to your PC, so the Mac thinks that they are "you" and it happily
shows them everything that it backed up, but that you THOUGHT you had
secured, over the last year or so.
In other words, the "always on" backup program was never designed from
the assumption that someone who has user-level access to the computer or
your account, may NOT, in fact, have legitimate access to some piece of
data that would otherwise normally be accessible by you. In fact, most of
these "background backup" programs don't do any kind of encryption or
security at all, meaning that now, no matter how carefully you had
secured and encrypted the primary copy of your sensitive data, there
exists a secondary, completely unsecured copy of it, just waiting to show
itself in all its glory to the next intruder who gets access to the PC.
For example under the MacOS Time Machine system your files are silently
backed up on a regular basis to an external hard drive, all nicely
identified, time-tagged and organized for easy restore later. Some of the
same issues are associated with Windows' "System Restore Point" function.
What do you suppose happens, when you have a whole bunch of confidential
files open on your Windows desktop, and then your friendly background
System Restore Point function gets triggered because you told it 'back up
my configuration every X days'? Right... you have now backed up all that
juicy sensitive information with no protection whatsoever.
Windows Vista, incidentally, is even worse than all the other systems :
consider the new Vista feature, "Previous Versions". What this feature
does is, Windows makes snapshots of all the files on your hard disk every
so often. If you accidentally delete or overwrite a file, you can browse
an earlier snapshot and retrieve it. It's like having a complete backup
made of your hard disk made every few hours, that you can instantly
restore any files from. Needless to say, this kind of "feature" is an
ideal tool for an intruder to use, to find incriminating or controversial
evidence that you "thought" that you had eliminated.

(By the way: note how this feature makes a bad joke of Microsoft's
repeated assertions that their "BitLocker" drive encryption system allows
you to "secure" your confidential files. If I have this stupid "Previous
Versions" thing turned on, but I then go and save the last version into
my Bitlocker-protected area, where's the security? An intruder can just
go looking for the "Previous Version", which is stored completely
unencrypted somewhere else, and bingo! You're owned, dude! Let's hear it
for "Security By Microsoft"!)
The larger problem with Vista in this respect is that it is so large and
complicated, and so much of its inner workings have intentionally been
hidden from / made inaccessible to, the average end user, that it's
basically impossible to accurately determine what the damn operating
system really is, or is not, doing at any given point. Therefore, using
Vista for any purpose that involves confidential data, is like playing
Russian Roulette with 5 out of the 6 chambers loaded. Sure, you might get
lucky, but the odds are against you.
The moral to this story is, TURN OFF THE BLOODY BACKGROUND BACKUP
PROGRAMS, AND MAKE SURE THEY STAY TURNED OFF. If you want to back up your
confidential data, do so yourself by manually copying it to another
secured, encrypted location. Backup programs are designed for people with
nothing to hide but a lot of stuff to retain. These design goals are in
many ways diametrically opposed to what we need to do for good PC
security, so disable the backup programs and do it right, by yourself.
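To give you an idea of what "making sure they stay turned off" involves
in practice on Vista (a sketch, to be run from an Administrator command
prompt; menu paths and commands differ between Windows versions):
vssadmin delete shadows /for=C: /all
This destroys the existing shadow copies on drive C:. To stop new ones
from being made, you also have to disable System Restore / System
Protection itself from the Control Panel's System applet.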
Desktop Search
--------------
An especially dangerous aspect of this subject is that many modern
operating systems, and this unfortunately DOES include not only Windows
and the MacOS but also many versions of Linux that are trying to compete
with the "features" (bad ones, in this case) of the commercial operating
systems, by default will enable "desktop search" background applications
that basically index each and every file on your computer.
These features have been enabled supposedly to make it easier for you to
search for and locate files (I have never once seen them work properly,
by the way; all they do, in my experience, is drastically slow the
computer down, while the application thrashes the hard disk while doing
its indexing), but from a security point of view they are a disaster
waiting to happen.
Just think... if you were a jealous spouse, or a private investigator, or
a repressive government, or a morality police cop, wouldn't you just LOVE
to get an up to date list of exactly what files that the regular user of
a particular PC, most often opened, accessed and searched for? It's hard
for me to believe, therefore, that the trend to enable this kind of
"convenient" file searching wasn't started at least partly by law
enforcement... the benefits to them, far outweigh the benefits to the
supposed user of the tracking application.
Again, the Windows and particularly Vista implementations of this
so-called "feature" are considerably worse than those available with other
operating systems, although the MacOS isn't significantly better. What
all 3 of XP, Vista and the MacOS share in terms of bad things here, is
that the background search and indexing service is built in to the basic
operating system in a way that is difficult (MacOS) or just about damn
impossible (XP / Vista), to turn off and kill once and for all. Most

mainstream consumer Linux implementations, for example Mandriva and
Ubuntu, while they may come with background searching pre-enabled, do
have a fairly simple and fool-proof way of disabling it, one that isn't
deliberately made hard to access -- that is, you go into the package
manager application and just remove the tracking application from the
"installed" list.
You will notice, in sharp contrast, that at least in the case of Windows,
this is far more difficult to do, because the background tracking
service, like many other security-compromising aspects of the operating
system, is considered to be an "integral" part of Windows, not a
separate application. So it doesn't show up
in your "Add / Remove Programs" list and therefore can't be individually
expunged from your computer... you may be able to disable it, but how do
you know that some update from Microsoft hasn't quietly re-enabled it?
You don't. This is yet another reason to steer clear of Windows.
The moral of this story, if you hadn't already guessed, is TURN OFF
(DISABLE) THE BLOODY TRACKING APPLICATION BEFORE YOU DO ANY WORK WITH
YOUR PC. That is, if you can. There is plenty of evidence to suggest that
at least in the case of Microsoft Windows, you can't fully disable the
background file tracking functions. Not hard to see who benefits by that,
is it?
But How Complete Are The Security Tools?
----------------------------------------
Now, before I get into the next section, I want to state for the record
that I have no interest in starting one of the "whose operating system is
best" flame wars that appear from time to time. I am only trying to state
the facts as best I understand them; I have no interest in promoting one
type of software over another.
Why is this important? Well, in a perverse way, Windows (sort of) wins in
this area, if only because its many security and confidentiality
vulnerabilities are so well-known, and so serious, that an entire cottage
industry, both in terms of freeware and in terms of commercial security
software solutions, has evolved to try to give Windows users a fighting
chance to safeguard their privacy. As we will see, it is a very open
question as to whether this "cornucopia" of add-on security software can
ever really offset the Windows operating system's built-in limitations,
but it certainly bears discussing.
A little-appreciated aspect of having security software on your PC, which
is especially a problem for Windows users, is that oppressive
governmental and law enforcement authorities are now routinely assuming
that "if you have a security program on your PC, you must have something
to hide".
There are several well-documented examples of situations where people
have been thrown in jail simply for having a program like "Evidence
Eliminator" installed on their PC, and, more recently, there have been
incidents where U.S. border police have been confiscating laptops for
"deep dive" forensics attacks using tools like the EnCase one mentioned
above, all for the heinous crime of having a PGP encrypted volume
somewhere within their C: drive. The implication of this is, if you are
going to use a security tool on your PC, it should ideally meet all of the
following requirements:

1. It should be a built-in part of the operating system, for example like
FileVault for the MacOS or (as inadequate and probably backdoored as it
is) EFS / BitLocker for Windows. The reason why this is important is,
when the cops come to call and ask, "why is this encryption program
installed?", you can say, "it came with the computer, I had nothing to do
with setting it up".
2. THE PROGRAM SHOULD NOT REQUIRE PERMANENT INSTALLATION, AND THEREFORE
REGISTRATION IN ANY PERMANENTLY ACCESSIBLE DATABASE (like the Windows
Registry, for example) FOR IT TO WORK. For the vast majority of security
programs, there is no valid reason, other than for programmer laziness,
why the program should have to be permanently "installed" in a manner
that makes its identification easy for an intruder or a cop; the most it
should require is to have its executable files decompressed (un-Zipped)
to some folder somewhere.
The other, and equally important consideration here, is that if the
program can run without being installed, it can usually be run from a
removable device such as a CD-ROM, Secure Digital (SD) flash card or USB key. Being
able to do this is highly desirable from a security perspective because
it allows the encryption / security program to be physically disconnected
from the data (files) upon which it operates -- making an intruder's job
far more difficult not only because he now has to relate software in one
location, to a task carried out in another location, but also because if
(say) the security program is stored on a USB key, the latter can simply
be smashed to bits or flushed down a toilet, at the first sign of danger.
3. The program should run as unobtrusively as possible, leaving few if
any traces that it was, in fact, ever installed or used.
A good example of the right way to implement this is how TrueCrypt (see
below), by default, does not touch or change the file modification date
stamp on any file that it uses as an encrypted pseudo-volume ("virtual
disk drive"), when such a file is opened or accessed. Since a standard
forensic investigation technique is to first go looking for files that
the subject of the investigation commonly opens, uses or accesses, under
the usually reasonable assumption that these will probably be the places
where the user has stored the "controversial" information that the snoop
is looking for. (Also, if you run this kind of analysis on a computer and
you discover that "Evidence Eliminator" is one of the most frequently
accessed programs, this by itself is enough of a red flag to convince
conservative juries and judges to convict someone of being a "pervert" or
"terrorist", whether or not any real evidence of any illegal activity is
in fact found.)
Because TrueCrypt deliberately overrides the Windows operating system's
normal behaviour of always recording when a file was last accessed,
TrueCrypt virtual disk volumes become essentially "invisible"
to this kind of basic forensic analysis. The larger point here is that
data protection techniques and the programs that enable them should,
ideally, not show up AT ALL as anything other than the ordinary use of
the computer.
The key phrase here is, "ideally". (In an ideal world, we wouldn't need
data protection or confidentiality tools at all, would we?) Particularly
in the Windows environment, but also to a lesser extent for most versions
of Linux, on a practical level it is typically very difficult not only to
hide the data that the encryption / confidentiality program is intended
to hide, but also to conceal even the FACT that the confidentiality
program was used in the first place.

So Where's The Mother Lode?
---------------------------
Remember that we are assuming that whoever is attacking you, has direct,
physical access to EVERYTHING on or about your PC, particularly the hard
drive and whatever happens to be in its RAM memory when the door gets
kicked in and the SWAT team menaces you with an M-16, screaming "GET ON
THE FLOOR, SCUM!!!" Areas of your hard drive that you never ordinarily
think about, can hold a wealth of incriminating or suggestive evidence
for an attacker.
Among these are:
Cache Directories
-----------------
As noted above, these are the #1 place that an intruder will go looking;
therefore, they should be your absolute top priority in "cleansing" and
securing. Modern computer operating systems "cache" files -- that is,
they temporarily store data that they think you are going to access
repeatedly, "for your convenience" -- in an amazingly large and varied
number of places. I have already mentioned the thumbnail and shell
command caching issues above, but some other key offenders are:
(a) Your Web browser cache, which has not only what Web pages you visited
over the last 2 weeks but also all the pictures and linked files that may
have appeared either on each site or on sites linked to the original
site;
(b) For Windows users, the subfolders in your "C:\Documents and Settings
\{your Windows log on ID here}\Application Data" folder... there can be a
huge amount of stuff cached in here;
(c) For Windows users, your "C:\WINDOWS\Prefetch" folder... remember how
I mentioned about your computer tracking basically every program that you
ran over the last week? Just by looking in this folder, an intruder can
tell pretty much the whole history;
(d) For users of all operating systems, the "temporary" directories that
the system uses to store files that supposedly will only be used for a
short while. Under Windows, the main one is "C:\WINDOWS\Temp", while
under Linux, it's "/tmp" (and "/var/tmp"). You would be AMAZED at the stuff that in
fact gets retained in these directories.
The solution to virtually all the above potential confidential data leak
locations is simple: permanently delete / erase everything within them,
or instruct your computer to do so. Do this securely -- use any one of
the better "secure data wiping" tools to get rid of the offending files,
don't just "delete" them, since even a moderately experienced forensics
attack can retrieve normally deleted data.
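A minimal sketch of what this looks like under Linux using the "wipe"
tool discussed later (the paths are typical examples and will differ by
browser version, distro and user name; and wipe /tmp just before shutting
down, since running programs may still have files open there):
wipe -rf ~/.mozilla/firefox/*/Cache
sudo wipe -rf /tmp/*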
Incidentally : Many applications, for example the popular "Firefox"
browser, have a built-in ability to "clear private data" when they exit.
I believe that unless you really know what you're doing, using this alone
to sanitize your surfing trail and so on, is like playing Russian
Roulette, since very few of these built-in application routines actually
wipe the involved files, most of them just do a regular 'delete' on

these. I would recommend that you go into the involved folders and
manually wipe the sensitive cache files, or, failing that, use a
purpose-built tool like CCleaner to achieve the same result. Remember, you're
not defending just against some nosy boyfriend or spouse who wants to spy
on where you've been surfing; you're defending against a professional,
skilled attacker who knows all the weak spots in schemes such as the
Firefox "clear private data" system. Don't get scuppered by a weakness
not in your own due diligence, but in the way in which your browser
program works.
Don't forget that many applications (Microsoft Word is famous for this,
but even the mundane old Linux GNOME text editor does it) create "shadow"
copies of a file that you have open; these are usually created by the
application so that you can revert to the original version of the file,
if you somehow screwed it up while editing it. Needless to say, there can
be all sorts of compromising stuff in these "shadow" files (typically
their names start with some special character like '~' that the operating
system 'knows' means 'this is a hidden shadow file, don't show it in the
Windows Explorer window'), so find some way to display all the hidden
files in a directory that you want to sanitize and manually wipe any
shadow files that you find.
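A quick sketch of hunting these down from a Linux command line (the file
names here are made-up examples; every application has its own naming
convention for its shadow copies):
ls -la
wipe -f MyReport.txt~ .MyReport.txt.swp
The "-a" flag makes "ls" show the hidden files, including the
trailing-tilde backups left by editors like gedit and the ".xxx.swp" work
files left by vi / vim.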
And watch out for the following nasty little "gotcha" : remember how I
said that the image thumbnail cache works? Well, for some operating
systems, I have noticed that even if you open up one of the above cache
folders, and it happens to have images in it (say, .JPG graphics files
that were in your browser cache folder), guess what the thumbnail viewer
does? Right! It now makes thumbnail images, in the hidden thumbnail image
cache folder, of the images that you were about to wipe from the browser
cache folder! Cache-22, wouldn't you say? (Sorry, I couldn't resist that
one.) The solution to this is to kill all the files in the browser cache
folder, THEN go back to the thumbnail image cache folder and kill these
too. Don't you love computer "convenience" features?
Finally, not that you should really be doing this a lot (since you should
be wiping files securely, not just using standard tools to delete them),
make sure to "Empty Trash" after deleting files. You would be amazed at
how many data leaks result from this simple kind of oversight. Remember,
it's consistency that makes for a good security policy. YOU, have to
properly protect your data, all the time. Your attacker, conversely, only
has to find a weak spot, ONCE. Keep that in mind with everything that you
do, not just in file wiping.
"Recent Files" List -- The One You See And The One The Cops Build For
Themselves
-------------------------------------------------------------------------------This is another "convenience" feature that lets you quickly access the
last few files that you used yesterday and the day before, typically via
a pull-down menu or list that's built in to your "Windows" key pop-up
menu, or your K Interface menu, or your GNOME menu... you get the idea.
The actual place where these file links and listings are stored varies from
operating system to operating system, but for example with Windows it's
"C:\Documents and Settings\{your Windows log on ID here}\Recent" folder.
In many versions of Linux, it's on the "Recent Documents" menu item
(which fortunately has a clearing option built in along with the list of
recently accessed files).

Either use the operating system's built-in facility to "Clear Recent
Documents" or, better still, manually erase these links from the relevant
cache directory. And don't forget, you have to clear any references not
just to the "sensitive" data files (e.g., "My Plans To Blow Up My Bosses
House.doc"), but ALSO to encrypted container files that the original
"sensitive" data files were eventually safeguarded within. If you don't
do this, you are giving your attacker a concise, easy-to-read treasure
map, right to the few files that he has to compromise, to get to the
treasure chest within. Why make it so easy? Make him hunt through all
100,000+ data files that a typical modern computer operating system has.
Don't narrow it down to 3 or 4, all by yourself.
One other thing is worth noting, here. Remember that one of the most
important, and powerful, forensic tools available to an attacker, is
being able to "co-relate" your personal activities (e.g. where you were
physically) at a given time, with traces of activity on the computer.
This kind of evidence, if properly captured, analyzed and presented, can
be devastating at a trial, and what a lot of casual computer users aren't
aware of is that a "recent files" list can be auto-generated by an
intruder simply by running a forensics tool (EnCase will do) that sorts
and segregates files by their "creation" and "modification" dates -- this
is possible because all modern computer operating systems do what's
called "time-stamping"; that is, any time that a file or folder is
accessed, they flip a few bits and bytes in the file's hidden "header"
section, in so doing recording when the access took place.
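You can inspect these timestamps for yourself under Linux (the file name
is a placeholder):
stat MySecretFile.bin
which prints the file's Access, Modify and Change times -- exactly the
data that a tool like EnCase sorts and co-relates on a much larger scale.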
It's obvious that this "feature", while it may have some convenience
value for activities such as backups, etc., is something that you will
need to defeat, to deny the attacker his ability to relate the date and
time stamp on a particular file to the exact date and time when (perhaps
by having parked his car across the lot and using a camera to record your
comings and goings) he can "prove" to a judge or jury, that you were at
home and presumably in front of the computer.
The bottom line here is, you are going to have to change the time / date
stamps of all potentially incriminating files to some date and time when
you couldn't have been working on your PC, especially any files that you
are using as encrypted storage containers (since, isolating these
enormously lessens the forensics investigator's job of sifting the wheat
from the chaff, as it were).
The best utility to do this within the Windows environment (at least that
I know of) is NirSoft's "FileDate Changer" program
(http://www.nirsoft.net/utils/filedatech.html), but at one point there was
an MS-DOS equivalent of the Linux "touch" utility that might have worked;
you can search for it on Google.
For Linux, as with so many other things, a feature that Windows users
have to look for on their own time, is built right into almost every
Linux distro. This is the "touch" utility, which is used as follows:
touch -t 9905210915.30 MySecretFile.bin
will change the date stamp on "MySecretFile.bin" to May 21, 1999, at
9:15 (and 30 seconds) in the morning.
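If you have a whole folder of container files to re-stamp, you can
combine "touch" with "find" (a sketch; the path is a placeholder):
find /home/JohnJones/containers -type f -exec touch -t 199905210915.30 {} \;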
"Trash Can", "Recycle Bin", etc.
--------------------------------

This is the place where the windowing interface stores files that you
THOUGHT you had deleted. I will get into the messy details of "permanent"
file deletion later, but for now, what you need to know is that when you
"delete" a file under any of the newer operating systems, in fact, all
that the computer is going to do is move it (NOT, in fact, delete it) to
a special folder called the "Trash Can" or "Recycle Bin". This is to
enable you to retrieve the file later, if like the dumb schmuck that you
really are, you realise that you didn't mean to delete it in the first
place.
The point here is that to REALLY delete the file you have to then tell
the operating system to "empty the Trash Can"; this action (sort of)
finally kills the file. Watch out for this one, because it's very easy to
forget, meaning that your computer Recycle Bin is filled to the brim with
data that you'd just as soon not let your jealous spouse see, all just
waiting to be restored and again made reviewable, with one mouse click.
The solution here is simple: make sure you "empty the Trash Can" each and
every time that you delete a file. (You really SHOULD be securely wiping
your files... right?)
If you're using Linux, watch out for the near-bug that prevents you from
emptying a system Trash Can for a volume (for example an encrypted
TrueCrypt container) that has been dismounted. (While I believe the risk
here is low, since the "deleted" files would be somewhere on the
dismounted container, there might still be a suspicious reference in the
Trash Can that you'd be better off with your attacker not seeing. The
simple solution to this is, re-mount the container, either wipe or delete
the files from the Trash Can, then dismount it again.)
Swapfiles / Virtual Memory
--------------------------
This is really a subject in and of itself, but for now, I'll just say the
following. As you read what's immediately below, keep in mind that the
swapfile / virtual memory file is the #1 place that a sophisticated
attacker will usually go looking for "incriminating" data, first. So you
need to pay attention to this section.
Although nowadays RAM memory (the kind that operates basically at the
speed of electricity, so it's fast enough for you to use for computer
programs; it's also the kind of memory that sort of vanishes when you
turn the power off) is relatively cheap, in the old days in which the
basic architectural assumptions of modern computer operating systems were
first thought out, RAM chips were very expensive, so programmers looked
around for a way to 'cheat' and get the computer to run more programs (or
larger programs) than the available amount of RAM memory would otherwise
accommodate.
The concept that they came up with is known by a variety of names such as
"swapping", "virtual memory", "demand paging" and so on, but these all
more or less refer to the same thing -- the idea is that the operating
system tries to keep track of which programs, and which data files, are
being constantly used and which ones only get occasional use; then, it
copies the seldom-used ones out of RAM memory on to a special place on
the computer's hard disk. If, all of a sudden, one of these "swapped out"
programs or files gets some kind of input or otherwise has to do
something, it is recalled from the special "swapfile" into the scarce
supply of RAM memory.

In this way, the computer can appear to be running many more programs (or
can be using much larger data files) than it otherwise could be, if only
"real" RAM memory was in use. This usually works fine except when pushed
beyond a certain point where the computer's RAM memory is too small to
even run a couple of programs without constantly swapping other ones out
to the hard drive -- this is a symptom called "thrashing the swap file"
and can be detected by the computer being very slow, with the hard drive
in-use light constantly being on.
The memory swapping process goes on continually in the background with no
intervention (or even awareness of it) by the computer user and it is
enabled by default on almost all modern computer operating systems;
indeed, it can be difficult to impossible to disable unless you really
know what you're doing.
For example, it is just about impossible to run Windows Vista without a
large swapfile, because Vista is so big and bloated that it can hardly
fit into most computers' RAM memory even with "virtual memory", let alone
without it. Linux has a much more reasonable RAM overhead, in this
respect, but this is partly offset by the fact that in my experience,
Linux users tend to run it on older computers that typically have less
RAM than a brand new machine would have, so we're back at Square One in
that respect. Windows XP is kind of in the middle; if you have 512
megabytes or more of RAM, you should be able to at least temporarily
disable memory swapping. I don't know enough about the MacOS to
confidently say one way or the other, but my guess is that it would be
more like Linux than it would Windows.
The problem with virtual memory, from a security point of view, is that
for the system to work at all, it has to be able to access, and therefore
send out to the swapfile, ANY and EVERY last byte of data that may at
some time reside in, or pass through, the computer's RAM chips. Stop to
consider the implications of this: since, by definition, EVERYTHING that
you do with your PC, from using your Web browsing program, to the
confidential Microsoft Word document that you edited today, to the
encryption keys (the secret password that scrambles confidential data,
wherever you have put this) that protect your secured files, ABSOLUTELY
EVERYTHING, resides in your PC's RAM chips at some point, it follows that
all of this could be, and probably will be, "swapped" out to the swapfile
on your hard drive, at some point, if you have the virtual memory feature
enabled.
You don't have to be very smart to figure out where all this is leading;
namely, that an intruder who has physical access to your computer, can
just look through your swapfile and find a treasure-trove of sensitive
information that you THOUGHT that you had "erased" or "encrypted", during
your day to day use of the PC. This isn't as easy as it may at first seem
to be, since the intruder has to ensure that the computer won't overwrite
the swapping area again, and furthermore, data in the swapfile is
not conveniently organized into files and folders, etc.; but make no
mistake, an even moderately experienced attacker, particularly if he or
she is equipped with good forensics tools like EnCase, very much CAN get
at and make sense of what your virtual memory system put in the swapfile.
Needless to say, having the wrong kind of evidence gathered in this way
can be disastrous... "game over"... from a security perspective.
A very few programs, notably TrueCrypt, are aware of the memory swapping
danger and they use advanced techniques to try to "lock" RAM memory that
they use to prevent it from being swapped out to the hard disk; however,
if I were you, I would never rely on this as your primary safeguard. You
are going to have to find a way to secure, or sanitize, your swapfile, if
you are to have a hope of protecting your computer against an
experienced, well-equipped attacker.
It's worth also noting that Windows and Linux implement swapping in
similar, but subtly different, ways. Under Windows, the "swapfile" is
just that -- it's a file contained in a Windows (usually NTFS or FAT
format) partition. Under Linux, the "swapfile" isn't a conventional file,
it's in fact by default an entire partition of its own. There are
advantages and disadvantages to both approaches, but what's important for
us to remember is that because of these differences the way in which we
have to secure a swapfile in each case is different.
So what do you do? The obvious approach is to ensure that you aren't
using virtual memory at all. If you're using a relatively modern computer
-- that is, one with more than about 1 GB (gigabyte) of RAM memory --
this is likely to be much easier to do with Linux than it is with
Windows, simply because most versions of Linux have a significantly
smaller RAM overhead than does Windows... if you disable swapping on a
Windows XP machine with 256 MB of RAM, for example, there is a good
chance that the operating system will simply crash (it can happen even
with 512 MB, actually).
Under Windows, you have to do this by modifying the My Computer -->
Properties --> Advanced --> Performance --> Settings --> Advanced -->
Virtual Memory settings. Remember to disable swapping for all hard drives (not just C:)
on your computer if you have more than one.
Under Linux, you have to first figure out what the "device name" of the
partition in fact is. Typically, this is something like /dev/hda5 and you
may be able to find it in your /etc/fstab file, for example:
/dev/hda5    swap    swap    pri=42    0 0

To stop swapping, you have to open up a command terminal window, then
type:
sudo swapoff /dev/xxxx
where "xxxx" is the actual device name of the swap partition.
Now to actually CLEAR the swapfile, at least under the Debian
derivatives, the whole shebang is (assuming that "/dev/sda5" is in fact
where your swap partition resides):
sudo swapoff -a
sudo dd if=/dev/urandom of=/dev/sda5
sudo mkswap /dev/sda5
sudo swapon -a
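(One caveat worth knowing: "mkswap" writes a brand-new swap signature,
and on distributions whose /etc/fstab identifies the swap partition by
UUID rather than by device name, that UUID will have changed -- so you
may have to update /etc/fstab before "swapon -a" will find the partition
again.)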

Under Windows, you are going to need a dedicated program like Evidence
Eliminator. You can see by this little example how something that is
basically a built-in function of the operating system under Linux,
requires software installation under Windows. Yet another vote for the
Penguin, methinks.
The Windows Registry
--------------------

The next subject that I am going to tackle here is a perfect example of
(a) why Microsoft Windows is a poorly designed operating system and (b)
why, partly because of (a), it's a very difficult system to fully secure
against a determined, intelligent and well-resourced attacker.
As you read what's below, keep in the back of your mind that no other
computer operating system (certainly none of which I'm aware, and I know
quite a bit about quite a few of them) uses this architecture, and for
good reason; it's a perfect example of Microsoft's arrogance, in thinking
that it can just ignore good software engineering concepts that have been
proved over long periods of time, blithely assuming that its own
programmers can do "better". Unfortunately, the Registry is a perfect
example of "worse".
First of all, technically the Registry is a kind of specialised database,
supposedly designed to facilitate how the Windows operating system keeps
track of various system settings and resources, everything from what
drive letter (e.g. "C:", "A:", "D:" etc.) is assigned to what storage
device, to what software programs are installed, to what the computer's
TCP/IP address is. The thinking behind this was (apparently) that it
would be preferable to aggregate all of these settings in a single place,
rather than to have them haphazardly strewn in various configuration
files all over the computer, as had previously been the case in early
versions of Windows in the early 1990s.
Unfortunately, Microsoft's actual implementation of this (sort of)
sensible idea leaves a great deal to be desired. Most of the Registry's
everyday use problems are beyond the scope of this document (the one that
I'll stop to mention is size bloat -- basically, every time that you add
an entry to the Registry, it uses more RAM memory, until it can actually
take up most of the RAM in your computer), but there are a few things
about this architecture that are very risky from a data confidentiality
point of view.
First of all, on most Windows systems, the Registry is physically
contained in two files within the computer's \Windows folder: SYSTEM.DAT
and USER.DAT. (Strictly speaking, that is the Windows 9x layout; on
NT-based versions like XP and Vista, the same data lives in the "hive"
files under \Windows\System32\config plus a per-user NTUSER.DAT file, but
the privacy problems described below apply equally to both layouts.)
While the Windows Registry Editor (REGEDIT.EXE) program
does give a user with Administrator privileges a limited ability to
change or delete data within the Registry (at least, within the part of
it that is loaded into RAM memory), significantly, these files
(particularly USER.DAT) contain a great deal of hidden, potentially
sensitive data -- such as, for example, the last Website URL's that you
accessed or the last files that you accessed on the hard drive -- that
CAN NOT be directly edited or removed, at least not while Windows is
running (this is because SYSTEM.DAT and USER.DAT are "locked" by the
operating system, since inadvertent deletion of them could cause Windows
to crash and then fail to re-start, if the computer was rebooted).
This is a cardinal sin from the point of view of data confidentiality and
"plausible repudiability", since it in effect creates an evidence / audit
trail of user activities that the legitimate owner of the PC cannot
access or delete. It is hard to avoid the obvious conclusion that this
architecture must have been put in place DELIBERATELY by Microsoft for
this exact purpose -- that is, to allow both itself, and "authorized law
enforcement personnel", to easily review an evidence trail that the PC's
real owner can't touch.
If not for a purpose like this, why would the Registry have been set up
this way? You tell me. From my point of view, it looks awfully

suspicious, especially since (absent a third-party tool such as a Linux
Live-CD distribution), most ordinary Windows users have no convenient way
of even booting the computer, let alone scrutinizing what's in USER.DAT
and SYSTEM.DAT, without first running Windows and thereby rendering these
two files impossible to modify.
There IS a partial work-around to the problem; what you do is, first use
your "normal" account and the User Administration tool to create a
temporary account on the Windows PC in question, log out and then log in
using the temporary account, do whatever "sensitive" work that you wanted
to do using the temporary account, save (encrypt) whatever data was saved
from this session, then log out, go back into your normal account and use
the User Administration tool to delete the temporary account (make sure
to check the "Delete {name of user}'s files as well?" option). This
process will also delete the USER.DAT file associated with the temporary
account -- which is where much of the hidden "tracking" data is contained
-- but it is not a 100% solution as some of this data may still remain in
SYSTEM.DAT which is common to all users of a Windows computer, regardless
of what account they happen to be using.
The second issue with the Registry has to do with the types of
information that it stores and how this information is (if at all)
portrayed to the owner of the Windows PC in question. A quick fire-up of
REGEDIT.EXE and a quick trip down the various "keys" (e.g. "HKLM",
"HKCU", etc.) will reveal literally thousands of obscurely-named Registry
entries, such as:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\DeviceClasses
\{53f56307-b6bf-11d0-94f2-00a0c91efb8b}
Now, just what does this little gem refer to, you might ask? That's easy;
if you drill a bit deeper down, you'll discover that it will cough up the
name of the last (or one of the last) removable USB devices that was
connected to this particular Windows computer.
Do you think that a police officer, trying to determine "what computer
was used to download 'the Mojehaddin's Handbook' on to this SanDisk USB
key", might find that kind of data interesting?
There are THOUSANDS upon untold THOUSANDS of entries like this in every
Windows Registry, and -- because of a long-standing bug in how Windows
handles deletions of software, changes in configuration, etc. -- things
that get added to the Registry have a tendency never to be deleted from
it. What you have, in effect, is a permanent history of each and every
major change to the computer, nicely preserved for the forensics analyst.
How would you even _know_ if it is tracking what you do on, or with, your
computer? How do you keep tabs on 20,000 Registry entries, many in
obscure hexadecimal notation, that may or may not reveal information that
you don't want to have divulged?
It's important to note, by the way, that this obscurity is partly
intentional, by Microsoft's own admission. One of the design goals in
creating the Registry was to "enable DRM or Digital Rights Management",
that is, to create a pre-existing infrastructure by which both Microsoft
itself and the media and commercial software industries could create, and
hide, secret settings (for example an obscure Registry key such as the
one shown above) that would enforce copy protection... that is, that
would prevent the legitimate owner of the computer, sitting directly in
front of his PC at the keyboard, from performing certain actions (like,
for example, using a copy of Windows that hasn't been "Activated") that

either Microsoft or its business partners, wanted to discourage or


prohibit. (Windows Vista has carried this concept to the next, even more
extreme, level, because in Vista, DRM is integrated so tightly into the
operating system that it can't be turned off, let alone removed.)
Apart from the obviously user-hostile aspects of this architecture, from
a security point of view, the exact same Registry infrastructure that
allows the recording industry or Microsoft itself, to spy on your use of
the computer or to prevent you from obscuring the records of your use of
the computer, is the ideal tool for someone like a secret police agent to
leverage for the same types of functionality. The bottom line is, these
Registry and Windows operating system "features" restrict YOUR use of the
computer that YOU own. Any time that happens, a severe risk is created to
the anonymity and confidentiality of your computer use, because there is
now what we refer to in the security industry as a "covert channel" that
may relay data associated with your use of the computer, to who knows
where.
You would have to be CRAZY to entrust your personal liberty to a system
like this. Yet every Windows user is doing just that, every day that they
boot up and see that familiar log-on prompt.
As a side note, most Linux distributions, while they do keep limited
track of certain "housekeeping" details (particularly what software and
dependent software libraries have been installed, so that these can
safely be removed without inadvertently disabling some other program),
have nothing that even remotely resembles the awkward and
security-hazardous way in which Microsoft implemented the Registry. Instead, Linux
(for most purposes) uses plain ASCII text configuration files in various
directories (especially the /etc directory) to track these types of values
(for example, TCP/IP or firewall settings); and, in sharp contrast to how
Windows does it, EVERYTHING on a Linux computer is easily accessible, and
subject to modification by, the "root" (administrator) level account
owner (or, as noted elsewhere, via the "sudo" privilege elevation
command).
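A quick sketch of what this transparency looks like in practice (file
locations vary slightly by distribution):
cat /etc/resolv.conf
less /etc/fstab
Both of these are ordinary text files -- your DNS settings and your
disk / partition mappings, readable and editable with nothing fancier
than a text editor.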
More information about how Linux is configured is available at:
http://www.ibm.com/developerworks/linux/library/l-config.html.
So What File Wiping System Should I Use?
----------------------------------------
In view of the above, and in view of the many other places in this
document in which I (correctly!) advocate sanitization / wiping of
potentially sensitive data, the obvious question is, "Okay, smart-ass,
what's YOUR suggested solution? What sanitization program should I buy /
install / use?"
As I have already mentioned, except in the limited case of TrueCrypt --
which, IMO, really has no reasonable alternative / competitor in the
world of trustworthy data security -- I have deliberately avoided
recommending any particular security program, mostly because I'm not in
landscape of these applications changes on a weekly basis; the latter is
a good thing rather than a bad thing, because it shows how serious an
issue data security has become on the Internet.
This having been said, I will venture forth and make a couple of basic
recommendations, based on the "KISS" principle that at the end of the
day, the simplest solution is likely to be the best one.


In this context, though, I want to point out that another important
motivation is to keep the tools used, as unobtrusive and easy-to-overlook
as possible. The idea here is that whereas a computer with a desktop link
to a program called "Evidence Eliminator", is going to be a red flag to
any even marginally experienced opponent (or to a jealous spouse, for
example), a little command-line utility like "wipe" sitting quietly in an
obscure folder, several levels deep, is likely to raise far less
suspicion.
The trade-off to this approach is, of course, that (unlike a tool such as
Evidence Eliminator which has many pre-defined places that it will
automatically wipe), you will have to manually identify and delete the
files that you want to eliminate. Personally, I prefer to do everything
by hand, since I simply don't trust the judgement of someone else on a
matter of this importance, but I will acknowledge that it does involve
more work (and you have to know what to delete), which may make the
command-line approach inconvenient for some.
Particularly with the Linux secure deletion tools (but this comment also
applies to the Windows utility mentioned below), be VERY sure that when
you specify the "target" of the utility (i.e. what you are telling it to
wipe), you are 100% certain that you've got the name of the target
right.
For example, suppose your "controversial" data is all on the primary
partition of your secondary hard drive, under Linux.
The command,
sudo wipe /dev/sdb1
will wipe everything on the intended (second drive, first partition)
target.
The command,
sudo wipe /dev/sda1
will wipe everything ON YOUR BOOT PARTITION -- time to re-install the
operating system! Ouch! (I sure hope that you backed up your "/home"
directory!)
The point here is that in the above commands, only one letter -- "b"
vs. "a" -- makes the difference between data security and complete
disaster to the functionality of your computer. So be VERY sure that
you've got the syntax and the device identifiers right, before you hit
that "Enter" key. You have been warned!!! (Note: A good way to be sure,
here, is to run the "Partition Editor" program for your system, review
which device identifier corresponds to which partition / drive, then
write down the assignments, before you invoke the data destruction
command. Better safe than sorry!)
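On most Linux systems, a quick command-line alternative to the graphical
Partition Editor (this is only a suggestion; your distribution may offer
other tools) is:
sudo fdisk -l {lists every detected drive and partition, with sizes}
Check the sizes and partition types in the listing against what you know
about your drives, BEFORE you aim a wiping tool at any of them.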
Recommended Data Sanitization Programs
--------------------------------------
For the Windows environment, the best of a bad bunch is probably Mark
Russinovich's excellent "SDelete" (Secure Delete) command-line utility;
find it at http://technet.microsoft.com/en-us/sysinternals/bb897443.aspx
(or you can search for it on the Microsoft Website).
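As a minimal illustration (the file and folder names here are made up --
substitute your own), typical invocations look like this:
sdelete -p 3 C:\secret\plans.doc {overwrite a single file with 3 passes}
sdelete -c C: {cleanse the free space on drive C:}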
If you use NTFS (as most XP and Vista users will now be doing), I'd also
suggest that you look into Russinovich's "Streams" (http://
technet.microsoft.com/en-us/sysinternals/bb897440.aspx) utility, since
this can clear additional NTFS-specific data that might be of use to an
attacker.
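For example (again, the directory name is just a placeholder):
streams -s C:\Documents {recursively list any alternate data streams}
streams -s -d C:\Documents {recursively delete them}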
Another one that you will want to become familiar with is the "Cipher"
command which you can learn about at (http://support.microsoft.com/
kb/298009); this is built into most Windows operating systems today but
can be separately downloaded if necessary.
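The most security-relevant use of Cipher, as a rough sketch, is its
free-space wiping switch:
cipher /w:C:\ {overwrite the deallocated (free) space on drive C:}
Note that this does not touch existing files; it only scrubs space that
has already been freed by ordinary deletions.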
Unfortunately (as near as I can tell), unlike Linux, Windows does not
appear to have a built-in utility to properly clear its swapfile.
Instead, the recommended solution -- although I'm by no means sure how
comprehensive it really is -- is to force a Registry "tweak" that
tells Windows to clear the swapfile when the system is shut down. This is
documented at: (http://support.microsoft.com/kb/314834) but basically the
process is to run your Registry Editor, search for the key
"ClearPageFileAtShutdown" and set the value in this key to "1" (you may
have to manually create the key if it doesn't already exist). There are
obviously several significant shortcomings to this approach, not least of
which is that you have to shut down or reboot the computer to properly
clear the virtual memory file, but it's better than nothing.
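For those comfortable with the command line, the same tweak can be
applied with the built-in "reg" utility instead of clicking through the
Registry Editor (one line; mind the quoting):
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v ClearPageFileAtShutdown /t REG_DWORD /d 1 /f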
For the Linux environment, the standard, and deservedly so, for secure
file deletion is the "Wipe" tool, which you can find at http://
sourceforge.net/project/showfiles.php?group_id=804. There is also the
"shred" command (built in to many distributions; a description of how to
use it is at: http://www.linux.com/feature/52258). Finally, for more
sophisticated functions, I strongly recommend the
"secure-delete tools" collection (as of now download it from: http://
freeworld.thc.org/download.php?t=r&f=secure_delete-3.1.tar.gz); this
collection includes srm, smem, sfill, and sswap, which (respectively)
securely and permanently delete files and directories, wipe memory space,
cleanse the free space on a drive and clean swap spaces.
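A few representative invocations (file names are placeholders; check each
tool's man page, since options differ slightly between versions):
shred -u -z plans.doc {several random passes, a final zeroing pass, then remove the file}
srm -r secret_directory {securely remove a whole directory tree}
sudo sfill /home {wipe the free space of the file system holding /home}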
Regarding swapfile sanitization in Linux, while you can use the "sswap"
tool mentioned above, as I described earlier, the excellent "dd" utility
that's built into almost every major Linux distribution can substitute as
a "good enough" wiping program, provided that you know how to use it.
(BEWARE -- if you DON'T know how to use "dd" correctly, kiss your data
goodbye... it's a VERY powerful tool and it's lethal to helpless data, in
the wrong hands!)
As an example, to clear the swapfile... (assuming that "/dev/sda5" is in
fact where your swap partition resides):
sudo swapoff -a
sudo dd if=/dev/urandom of=/dev/sda5 bs=1M {NOTE: this will take quite a
bit of time on a large swap}
sudo mkswap /dev/sda5
sudo swapon -a
To overwrite all the free space on a partition (deleted files you don't
want recovered):
dd if=/dev/urandom of=/home/{your home directory name here}/consume_all_remaining_space.bin
(An alternative, which is faster but is a bit less secure, would be:
dd if=/dev/zero of=/home/{your home directory name here}/consume_all_remaining_space.bin
This will just write zero values to the file rather than random data.)
When dd says "No space left on device", all the free space has been
overwritten with random characters.
Then, delete the big space-consuming file with:
rm consume_all_remaining_space.bin
As you can see, Linux comes pre-equipped with good tools to ensure
security already. You just have to know how to use them.
"sudo"
-----The "sudo" command means "execute as root / super-user"; if you don't use
it you will discover that a lot of Linux systems won't let you do it,
this is actually a good thing because it limits the damage that a dumb
ordinary end user can do to the system. "Sudo" was invented because in
the old days of Linux, doing practically anything that made a permanent
or major change to the way a Linux PC operated, required you to log in as
the "root" or "administrator" account. The problem with this approach was
that if you forgot to do the slightest thing while you were logged in as
"root", you had to log out from your regular, user-level session, log in
as "root", fix whatever it was that you had to fix and then log back in
again as a user. This would rapidly become frustrating unless you were
very organized in how you did Linux system administration.
What "sudo" does is, it allows you to run a single command as if you were
the "root" user, although you have to enter your normal user-level
password to do it. (Note: Most implementations of "sudo" will let you
keep on using it for up to 15 minutes from the time at which you first
gave it your password, to avoid driving you crazy with password requests
each time.)
You should get in the habit of using the "sudo" command as a prefix to
your Linux command line actions because you will find that a LOT of Linux
security tools require you to have more than just regular user access
rights. Having said this, don't use "sudo" if you don't have to. One of
the big advantages of Linux over Windows is that by default, under Linux,
programs run in "user" mode, which means that they do NOT (unlike in
Windows) have the ability to do dangerous things like permanently rewrite the operating system (or components of it; this is, of course,
exactly how most viruses work) or erase your hard drive.
Another annoying little side-effect of "sudo" that you should know about
is, because you are considered to be the "root" or "super-user" identity
when you do anything with "sudo", it means that any file or directory
that you create during that time has its ownership (see below) tagged as
"root", meaning that when later you want to access or delete that file or
directory as an ordinary user, you can't because it is "owned" by a
different user (the "root" user). This can be very irritating if you have
to mix actions under "sudo" and actions under your normal, user-level
account while performing a security-related function (for example,
creating mount points or encrypted volumes using TrueCrypt), because you
can end up with a mixture of files and directories, some that you can
access / delete and others that you can't (at least not without switching
back to "sudo"). The only real way to deal with this is just to remember
what things you can, and can't, do as an ordinary Linux user.
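(Well, almost the only way: if you do end up with a stranded root-owned
file, you can re-assign it to yourself -- for example, treating the names
here as placeholders:
sudo chown {your user name} {stranded file or directory}
-- but it is better not to create the mess in the first place.)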
File Systems -- Why They Matter
-------------------------------
In the good old days of Stone Age computer operating systems such as
MS-DOS (does anybody remember those, LOL), the corresponding file and
directory storage organization systems were more or less as simple as the
operating systems themselves: they could do basic tasks like "save a
file", "read a file", "delete a file", "copy a file from one place to
another", and so on. This sort of made sense, if you consider that most
computers that these file systems were associated with, were only meant
for a single user and they were not networked.
As computers became more interconnected and the applications running on
them became more complex, the file storage systems also became more
sophisticated, and this has many implications from a data confidentiality
point of view. Some of the techniques that are used here are
"journalling" (the file system tries to keep a record of every change
that was done to every file on the hard drive, so that you can restore
back to whatever point if something goes wrong), "background
replication" (the file system keeps a duplicate copy of every file that
you create, again so you can restore it if you mistakenly delete it and
then "empty the Trash"), "duplicated Volume Table Of Contents" (VTOC)
(the file system keeps more than one listing of the contents of each
directory and these listings are kept in separate parts of the hard
drive, so that if one listing goes bad, the other can still be used to
retrieve files).
Generically, there is a common confidentiality-related problem that I
hope you can see running through all of these concepts: THEY ALL
BASICALLY MEAN THAT EVEN IF YOU _THINK_ THAT YOU HAVE DELETED A FILE
"ONCE AND FOR ALL", IN FACT, IT (OR A COPY OF IT) MAY STILL BE HAUNTING
YOUR HARD DRIVE SOMEWHERE. This is a crucially important point from a
confidentiality point of view and it is one of the hardest ones to 100%
compensate for, because depending on a number of factors, among them what
kind of operating system you have, what kind of file system you have and
how it has been set up, it may be difficult to even know which of these
techniques are in use (it may very well be more than one of them) and
next to impossible to turn them off or work around them, at least without
specialized software and knowledge.
And just to re-state the obvious, suppose that you are an Al-Qaeda
operative and you have your secret plans to cause some really nasty
unpleasantness at No. 10 Downing St., next month, set up in a Microsoft
Word file, somewhere on your hard drive. (You really shouldn't be using
Microsoft Word, by the way, but that's a different subject.) But you know
that Scotland Yard is on your tail, so you both delete the file and then,
like a good little computer paranoid, also "empty the Trash Can".
Unfortunately, your wonderful Linux ReiserFS "journalling" file system
has also kept a secret copy of this file, as well as of its last six
versions, somewhere on the hard drive, and when you hear that knock on
the door, 5 minutes later your PC is in the tender hands of MI5's best
forensics experts. All they need do is use a few simple, publicly-
available tools to read back the journalling trail and you might as well
hand them a printed copy of the document, to save everybody a day's work.
Well, it's off to Guantanamo for you, mate!
Another, closely related issue is what we call "File Ownership By
Identity". This is a fancy way of saying, "when using a modern computer
file system, particularly one meant for use on a hard drive that may be
shared by multiple users / log-in accounts, the file system by default
will 'tag' each and every file or directory on the hard drive, with the
name of whomever 'owns' that file or directory".
Now, there is a good reason for this, namely that the operating system
and the file system are trying to ensure that, under regular computer
operations, (a) 'Bob Smith' can't see or delete 'Mary Jones' files and
(b) system files, for example the executable program files that allow the
operating system to run the computer in the first place, are owned by an
'administrator' or 'super user' who must be logged in as such before any
major system change (for example, upgrading the operating system) can
touch, change or delete those system files. But
note the phrase, "under regular computer operations".
Remember, we are working under the assumption that the PC that we're
talking about, is going to be in the physical possession of an
intelligent, hostile intruder who is equipped with advanced forensics
tools. Feeble security safeguards such as "file ownership tags"
absolutely WILL NOT stop such a determined attacker for so much as five
seconds, but they definitely WILL -- and this is the crucial point --
provide a legally valid identity attribution trail to whatever set of
files the attacker wants to access and use.
In other words, the file ownership tag WILL TELL THE ATTACKER, THE
POLICE, THE JUDGE AND THE JURY THAT IT WAS _YOU_ -- NOT SOMEBODY ELSE --
WHO OWNS AND IS RESPONSIBLE FOR THAT "CONTROVERSIAL" FILE.
Now, understand that from the point of view of the attacker, this
situation is still not perfect. For example (and this is the case for
many home computers), there may be only one log-in account, for example
"The_Jones_Family" and only one corresponding password (say,
"iamajones"), therefore only one identity to tag the files with, even
though that particular identity may in fact be little Billy Jones when
he's playing World of WarCraft online, or his sister Janie Jones when
she's chatting on MSN, or dad Frank Jones when he's checking out the
latest football scores, or whomever.
In a case like this, the file identity tagging will make the forensics
investigator's job more difficult, but definitely not impossible -- after
all, how likely would it be that Billy or Janie was checking out those
"controversial" Websites with their "controversial" pictures? In a case
like this, it's far more likely that poor old Frank is going to get
fingered by the Bobbies, even though in theory the files might have been
generated by the other two.
There are also other issues with file identities (particularly, the fact
that they can quite easily be changed, as can incidentally a file's
creation or modification date, by any one of a number of widely available
software tools, for example the 'touch' or 'chown' CLI level Linux
commands) that make them less than a 100% effective file-to-real-person-
identity-tracking tool, but are they useful to an intruder? You bet they
are.
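To make this concrete (the file name and user name are invented for the
example):
touch -t 200801011200 somefile.doc {set its date stamp to 12:00, 1 Jan. 2008}
sudo chown bob somefile.doc {re-assign its ownership to the user "bob"}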
So How Can We Defend Ourselves Against The File System?
--------------------------------------------------------
There are various defences against the above types of file system
confidentiality risks, but I have found that the following ones are
probably the simplest and most reliable:
(1.) Use an external device: Certain kinds of external storage devices,
in particular USB keys, are by default configured to use less
sophisticated file storage systems (especially Microsoft's old FAT and
FAT32 systems) that cannot store or track much of the information,
including the dreaded "file ownership tag" thing that we mentioned a few
paragraphs ago. The main reason for this is that inherently, these
devices are meant to enable file portability -- that is, you save a file
on to your USB key when it's attached to PC #1, plug it in to PC #2 and
then copy it on to the second PC. Considering that there is a very high
chance that your identity (if any) on PC #1 is completely different from
your identity on PC #2, if the removable device file system enforced
strict ownership rules, it would make this kind of casual copying
difficult or impossible; so ownership tracking has sensibly been stripped
from the way these devices store and categorize files.
The advantage, of course, from a confidentiality point of view, is that
if you just always work from a copy of the file on the removable device,
then by definition it will never get into the tender clutches of your
hard drive's much more sophisticated file system that DOES track
attributes such as "who owns it". Furthermore, largely due to
restrictions on the amount of storage space and other technical issues,
functions such as "journalling" and so on are seldom found on removable
media such as USB keys.
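If you need to (re-)format a USB key with the simple FAT32 system under
Linux, a minimal sketch -- assuming the key shows up as /dev/sdc1, which
you MUST verify first, as discussed above -- is:
sudo mkfs.vfat -F 32 /dev/sdc1 {format the key's first partition as FAT32}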
MacOS:
I don't know. I would suspect that it would be similar to Linux but I'm
not sure.
Startup and Auto-Run Programs
-----------------------------
Except for truly malicious software -- by which I mean keyloggers
(whether installed by some Russian computer criminal or by your friendly
local intelligence agency), viruses, worms, adware, spyware, rootkits and
so on -- which may be able to secretly "hook" itself into the startup
sequence for some other, legitimate program and therefore be executed
silently without you even being aware of it -- there are basically only
two ways for a program to be executed on your computer.
One is, you manually tell the computer to run the program, either by
double-clicking on an icon on your GUI desktop, or by entering a command
such as "nano MyNewTextFile.txt" at a CLI command line.
The other is an "auto-started" program which the computer has been
instructed to automatically run, each and every time that a "start-up
event" occurs.

What's a "start-up event"? It can mean any of the following things:
1. The most common kind of "start-up event" is simply that you turned the
computer on. The most common kind of program that's run when the computer
is turned on, is of course your operating system: Windows, Linux, the
MacOS, whatever. The computer "knows" how to do this because of a
"boot loader" (a tiny piece of CPU-specific machine language program code
that exists on the first track of your hard drive), which has just enough
intelligence to know where on the hard drive your operating system lives;
it passes control over the CPU from itself to the first routines of the
operating system.
Some types of boot loaders, for example the LILO and GRUB systems found
typically with Linux, are quite tolerant of other operating systems and
can co-exist with the latter; others, especially Windows Vista, want the
computer all to themselves and can make other operating systems basically
inaccessible if you make the mistake of, say, installing Vista on a PC
on which you have previously installed Linux to a different hard drive
partition.
The only reason that I mention this is that there are a few kinds of
security threats, particularly some kinds of viruses and also keyloggers
installed by someone (typically a cop) with physical access to the
computer, that can also live in the boot loader section of the hard drive
and which can therefore theoretically compromise your PC regardless of
what operating system you have.
In practice, these types of malware are very rare, because of the
physical access issue (they are very difficult to install remotely, i.e.
from over the Internet), but they must be considered as a very serious
potential threat if you have reason to believe that at any time your PC
has come into the physical possession of a skilled adversary who would
have had a few minutes to hours of undisturbed, private time to secretly
install a boot sector keylogger and so on. In that case I recommend that
you ditch the entire PC and buy a new one; doing so is much cheaper than
a long stint in jail.
2. The next most common start-up event, and this is the one that you
really have to most worry about, is what goes on when your operating
system loads and first starts up itself. Because a modern computer
operating system is not one but actually hundreds or thousands of related
programs, there are only a few (typically the most basic, integral parts
of the operating system -- referred to in Linux jargon as the "kernel",
Windows has a "kernel" as well, it's just usually not called that) that
absolutely always have to be running... the rest of the operating
system's programs may or may not have to load and run when it starts up,
depending on how your computer is configured and so on.
For most of these, the operating system itself will decide which ones to
run, for example, it will know to start the wireless networking
subsystems if you have a properly configured WLAN (Wireless LAN) card.
For others, you may have to manually decide whether to run them on
startup or not, then the operating system will do this consistently in
the future. But the bottom line is, "if you don't need it, don't run it".
Complexity is the mortal enemy of good security, so keep your system lean
and mean. If you do this it's much easier to spot signs of intrusion.
From a security perspective, there is an important implication of this.
As we have seen above, some programs, for example "background desktop
search" or "background file backup", while perhaps useful to a
non-security-oriented user, are actually very dangerous from a data
confidentiality point of view. On top of that, we might encounter
programs -- for example a computer virus or a law enforcement originated
keylogger -- that are actively destructive of data confidentiality and
that we don't want to have run on our computers, ever.
Since nobody in his right mind is going to intentionally run any of the
latter kind of programs, it follows that for them to work, they have to
find some way to run WITHOUT our active involvement, in between power-
downs and reboots of the compromised computer. By far the most commonly
encountered mechanism for doing this is for the offending program to
insert a reference to itself into the operating system's start-up
sequence, so that the program is run each and every time that the
operating system itself starts up. This is based on the usually correct
assumption that most casual computer users have no idea that some
programs can be designated to operate in this way.
Most computer operating systems have fairly well-defined rules for how
and where you designate a program so it will be executed on operating
system start-up, however these rules vary quite a bit from system to
system. In general, be careful what you remove from the start-up
locations below (e.g. what you kill from the various Windows Registry
keys or from the Linux "inittab" file), since some entries in these
locations refer to programs that your operating system needs in order to
work at all.
Having said that, here are some things that I am always suspicious about:
(a) Programs that were installed on a time and date where you were
physically in front of the computer and you know that you didn't install
anything at that particular point in time (if it got installed then, why
didn't it tell you that it was installing itself into the startup
routines?)
(b) Programs with no apparent purpose or a deliberately obscure name that
says nothing about the program's real purpose (a program named
"ZoneAlarm.exe", for example, MAY be acceptable; but how about a program
called "xQ1z-tbr.exe"? What the hell is that?)
(c) Programs with references to them that appear in more than one
start-up location. In other words, if I see "2bnSkUr.exe" in the Registry key
"HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Run", but
then I see it again in the separate, different key "HKEY_CURRENT_USER
\Software\Microsoft\Windows\CurrentVersion\RunServices", I have to ask
myself, why would any legitimate program try to run two instances of
itself, unless it was to ensure that someone could delete one reference
and it would still have the other to fall back on?
Any measure like this that looks like an attempt to "defend" a program
against the actions of the computer's legitimate owner is a red flag that
the program should be treated with great suspicion.
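You can list the contents of the most commonly abused "Run" keys from the
command line (the "reg" utility ships with XP), for example:
reg query "HKLM\Software\Microsoft\Windows\CurrentVersion\Run"
reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Run"
Anything listed there that you cannot account for deserves a closer look.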
3. The third type of start-up event, and this is one that really only
affects Linux users, is when you load the "window manager" (remember the
software architecture diagrams that I showed you above?). Under Windows,
there is no such thing as a separate "operating system start-up event"
and "window manager start-up event", because there is only one Windows
"window manager"... its name is "Windows" <LOL>. But under Linux, it is
indeed possible to start up the computer so that you don't have a window
manager or graphics interface at all; therefore, Linux has two separate
start-up events, one for the basic (character mode) operating system
itself and another for the window manager (KDE, GNOME, XFCE,
Enlightenment, whatever).
4. A final type of start-up event, and this one is especially important
for Windows users, much less so for Linux users, is the browser start-up
sequence. Modern browsers like Internet Explorer for Windows, Firefox /
Opera (for both Windows and Linux) and some others, have the ability to
load "add-ins" which automatically get run when you first run the browser
and try to surf the Internet. The problem is that particularly for IE
under Windows, these "add-ins" are frequently viruses or malware and they
sometimes can affect all sorts of aspects of your computer's operations.
In this respect there is something that you should always be on the lookout for: it is very possible for you to surf to a specially booby-trapped
Website that automatically downloads malicious browser-based add-in
modules and then tags these to auto-start every time that you run your
browser. It's pretty obvious that this kind of malware would be an ideal,
low-cost mechanism by which a group of on-line computer criminals, an
oppressive government or just your local cops, could track your every
move over the Internet, complete with nicely formatted reports on your
surfing delivered back to their central computer at whatever interval
they found convenient. So be careful.
The subject of analysing both the computer's start-up routines, as well
as its "task manager" while it's running, so as to identify and remove
malicious programs, is a topic in and of itself that I cannot fully deal
with here; suffice it to say, be vigilant about what is running
on your computer, at all times.
If you see something suspicious, shut it down, immediately (hit Ctrl-Alt-
Del on your Windows computer and pick "Task Manager", or on a Linux
system run a command terminal window and type "top"; highlight and
kill the offending programs / tasks under Windows or, once you know the
"Process ID" under Linux (a plain number such as "12345", shown in the
PID column), exit the "top" program with the Q key and type
"sudo kill {ID number of Process}").
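An alternative Linux sequence, if you already know part of the offending
program's name (which here is a placeholder), is:
ps aux | grep -i {suspicious program name} {find it; the process ID is in the second column}
sudo kill {process ID} {terminate it}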
Finding the Start-Up Locations
------------------------------
Windows XP
----------
Microsoft Windows is notorious for having far too many obscure ways for a
program to force itself to load when the Windows operating system itself
starts up and, looking at this from my perspective as an IT security
person, it is hard for me to avoid reaching the conclusion that this was
done intentionally, perhaps to enable easy compromise by law enforcement
personnel. The following shows the various places that you may find a
reference to an auto-started application.
Note that where you see the identifier "HKEY" in the following list, this
refers to a key within the Windows Registry (you have to use REGEDIT.EXE
to access or edit this).
{User}\Start Menu\Programs\Startup;
All Users\Start Menu\Programs\Startup;
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Run;
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run;
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunOnce;
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\RunOnce;
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunServices;
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\RunServices;
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunServicesOnce;
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer\Run;
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunOnceEx;
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon, Shell;
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon, System;
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon, VmApplet;
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon, UIHost;
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon, Userinit;
HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\Windows, run;
HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\Windows, load;
HKEY_LOCAL_MACHINE\Software\Microsoft\Active Setup\Installed Components;
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager, BootExecute;
HKEY_CURRENT_USER\Software\Mirabilis\ICQ\Agent\Apps;
win.ini, load;
win.ini, run;
system.ini, shell.
Linux
-----
In general, auto-started programs under Linux follow a much less complex
(surprise, surprise!) system than under Windows. On distributions using
the traditional SysV-style init, most auto-started programs can be traced
from the following file:
/etc/inittab
For the "start-up event on window manager initiation" event, here are
some likely areas:
KDE: ~/.kde/Autostart/
GNOME: ~/.gnome2/session-manual
XFCE: ~/.config/autostart/
Note: Much more good info on this is at: http://gentoo-wiki.com/
HOWTO_Autostart_Programs
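A quick, low-tech way to review what is set to auto-start -- using only
the locations named above, and tolerating the fact that not every path
will exist on every system -- is:
cat /etc/inittab
ls /etc/init.d/
ls ~/.kde/Autostart/ ~/.config/autostart/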
Other "Interesting" Files
-------------------------
Now in view of the above, there is one other category of files that you
must definitely protect, and that is (to put it as simply as I can): "any
file that either (a) would give an attacker EVEN THE SLIGHTEST HINT that
you're attempting to either hide or 'sanitize' data, or (worse) (b) a
file that describes your methodology for doing so."
It's amazing how frequently otherwise diligent data defenders forget
about or omit this step (I have to admit that I'm not perfect here,
either)... to their detriment.
What's a good example of both kinds of files?
How about the document that you're reading, right now?
Stop to think about this one. Suppose that the secret police break down
your door, but, like a conscientious little security wonk, you've
carefully encrypted all of your "incriminating" data and you've
sanitized / wiped any traces of it from the unencrypted portions of your
hard drive. Unfortunately, you've left a copy of this document (the one
you're now reading) in plaintext format, somewhere in the un-secured
portion of your drive.
After they give you a thorough beating for possessing a document like
this one, what with all of its uncomplimentary (but true) depictions of
police motives and tactics, the police will wave this very document in
front of the judge and jury like the proverbial bloody red shirt,
screaming,
"See? SEE?!?!? This filthy little pervert has a MANUAL of how to hide his
dirty pictures, right here, right on his hard drive!!! Surely, Ladies and
Gentlemen of the Jury, you will appreciate that even though we haven't
been able to find so much as a scintilla of actually illegal content on
this little swine's hard drive, that's only because he's been following
the suggestions in this despicable document! Ladies and Gentlemen, I can
assure you, if we could only force this little prick to cough up his
encryption key, you'd be SHOCKED, yes, SHOCKED at the perverted smut that
he's been keeping hidden from us! I promise, on my word of honour as a
police officer, that it's really there, even though I can't show it to
you! I move for and demand immediate conviction on all charges!!!"
As noted above, even though this type of argument violates all the rules
of evidence in most Western jurisdictions, you have to understand that
jury after jury, with their child-like trust that "the nice policeman
just wouldn't lie", will buy it hook, line and sinker.

Your job, therefore, is to deny the cops ANYTHING -- this document is
simply one example of such -- that could allow them to make the above
type of claim to the jury (note that of course, they may simply lie about
it and say it anyway; sorry, nothin' I can do about that, bloke).
Here are some prime candidates for files, objects and file types that you
will want to securely wipe, move to an encrypted area or simply keep
knowledge of only in your head:
-- Those, such as this one, that contain information on data security,
particularly on data encryption, data sanitization, secured surfing /
Internet access, and / or tactics to resist police investigations;
-- Those that give a history of security-related activities on your
computer, for example a Linux ".bash_history" shell command history that
shows you installing or using some kind of encryption function, or
histories of sudden deletion of large numbers of files (see the example
after this list for how to clear it);
-- Those that contain links to resources on the Internet (particularly,

file storage sites, file swapping sites, "Webmail" sites like Gmail or
Hotmail, or e-mail addresses, especially your own); you can be sure that
the attacker will be contacting each and every one of these and demanding
that "under penalty of obstruction of a criminal investigation, they
immediately remit any and all evidence associated with Mr. William Q.
Pervert";
-- Those that contain data sanitization commands (for example the Linux
commands needed to clear the swapfile, as shown above) or, worse,
procedures (e.g. "First, encrypt my data, second, wipe my swap file,
third, run Evidence Eliminator, fourth, reboot my PC') -- remember that
if an attacker knows, or even partly knows, the process by which you are
securing your PC (especially the order in which the various steps are
taken), it is much easier for him to reverse-engineer it and break it;
-- Those that give a history of activities that a 'dumb normal user'
probably wouldn't do; for example, a Linux ".bash_history" shell command
history that shows you altering file modification / creation date stamps,
or a
Windows log file that shows repeated creations and deletions of
"temporary" user accounts on the same PC; included in this category are
events without a straight-forward, obvious explanation, such as repeated
accesses to a particular file without a clear reason why the file needs
to be used;
-- Those that contain cryptic or obscure phrases (example: "the falcon
flies at midnight"); these might, in the eyes of an attacker, represent
passwords or pass phrases;
-- Any object or data structure (for example, an entry like "X:\SECRET
\JIHAD.DOC" in the "Recent Files" listing of your Microsoft Word program)
that references either / or (a) a file that is no longer on the
unencrypted, "normal" part of your hard drive, or (b) a volume (in this
case, it was probably a virtual PGP or TrueCrypt volume) that is not
visible or accessible when the computer is under normal use (the point
here is that an intelligent intruder is going to say, "hey, wait a
minute, this link is to a volume that isn't here now... that has to mean
that Achmed has a hidden volume, somewhere on the hard drive, or he's
hidden his Islamicist propaganda on a USB key" -- if the intruder doesn't
suspect that you have an encrypted volume, he is far less likely to go
looking for it, and far less likely to correctly identify it);
-- Finally -- and you'd be amazed at how often this happens -- don't
allow files with "suspicious" names (e.g. "WIPEINFO.DAT", "EncryptPswds.txt", "Last_Wipe.txt", etc.) to clutter the unsecured part of your
hard drive, even if these files don't have any meaningful data (or any
data at all) in them. A good attacker can derive a surprising amount of
information from these, starting with the time / date stamps (gives him
insight as to when you were last using the PC, so he can narrow down the
scope of his attack to other files that were accessed around the same
time), and of course the mere presence of these files is another "bloody
red shirt" that the prosecutor can wave in front of the jury to convince
them of your guilt, all the while in the absence of any legitimate
evidence.
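As promised above, here is a sketch of how to deal with the shell history
issue under Linux (assuming the bash shell):
history -c {clear the in-memory history of the current session}
shred -u ~/.bash_history {securely remove the on-disk history file}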
Remember, when considering what you must do, you are fighting a war with
your attacker, not a Marquess of Queensberry boxing match. In war, nobody
fights fair and you can expect your attacker not to, either. Don't give
him the smallest stone to toss your way.

The Backup Issue
----------------
Lurking in the background of many of the actions that you must undertake
from a data security point of view is the inherent contradiction between
two fundamental principles of good information technology management --
namely,
(a) The need to backup / replicate data, in case the original copy of a
data object is somehow lost or corrupted. This is especially true of
encrypted data, because, unlike plaintext data, the loss of even a few
bytes of an encrypted file or virtual drive, usually results in the
inaccessibility of any part of that file or container (this is due to the
way that modern forms of encryption work, that is, by "chaining" -- the
bytes of a first block of data are used to provide part of the key for
the second block of data and so on throughout the file; if even a few
bytes are corrupt, the key chain is broken and everything is just
gibberish).
(b) The need to keep as few copies as possible of "sensitive" data, as
well as to ensure that at no point in the data's life-cycle is it ever
left in unencrypted format, except as specifically and deliberately
requested by a live human administrator (e.g., you).
As is pointed out elsewhere in this document, many of the technologies --
for example, "always-on" or "background" backup systems -- that admirably
meet the first requirement (a), are the mortal enemies of the second
requirement (b). There are also some other issues that might not at first
be apparent. For example, if an intruder gets physical access to your PC
and discovers that the configuration files, or the log files, of your
backup program indicate that you have been backing up a large file called
"TheMatrix.mp4" every day, an intelligent attacker is likely to think,
"Hmm, this is interesting... why would someone want to back up the same
movie, day after day, when having just one archive copy of it is all he
ever should need?". Whether or not the encryption on the file involved is
robust, having this kind of information is of significant help to the
attacker, since now, he can narrow down what he attacks to only a few
files as opposed to the thousands that would otherwise be on your system.
The point is that backup programs, by their very nature, give away
valuable information about your file and folder use habits. They have to,
in order to do the job -- that is, reliable data replication -- that 99%
of normal users need them to do. Unfortunately, you aren't one of that
99%. You have special needs, and you must therefore implement special
measures to accommodate them.
So you have to find a happy medium, between the two. I can offer this
advice:
(c) Do backups manually, by yourself. It is a relatively simple process
to manually copy files from (say) a primary hard drive to an external
hard drive, or to burn them on to a CD-RW or DVD+RW disc. By doing this,
you also have the peace of mind to know that you have a duplicate copy of
the original data object that you don't have to run through a third party
program such as a tape management application, in order to get at your
backup copy. Again, we can't assume here that the tape program will work
correctly. Do you really want to trust it with vital data? I wouldn't.
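A manual backup really can be this simple (the paths are placeholders;
adapt them to your own layout):
cp -a {your encrypted container file} /media/{your external drive}/
No backup agent, no log files, no scheduler entries -- nothing for an
intruder to read later.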
(d) You can back up non-essential data, using one of the automatic or
dedicated software tools meant for this purpose. But be careful that in

so doing, you don't open a different security hole, such as the "whoops,
it backed up all your sensitive data while it was unencrypted" issue.
(e) Try, if at all possible, to have only one duplicate copy of each
sensitive data object, in your possession. Having six different copies of
"TheMatrix.mp4" is bound to attract the suspicion of an attacker, unless
you have a very valid reason (maybe you're into movie piracy? certainly
better to plead to that, than say to being a member of Al-Qaeda) why you
should have multiple versions of a single file.
Online Backup -- Is It For You?
-------------------------------
Related to the above set of issues is a trend that has lately become
quite popular in the consumer information technology market, namely, the
idea of "on-line storage" or "on-line backup". The concepts behind these
two terms are closely related and basically involve you using someone
else's computer, accessible only over the Internet, to store and retrieve
files that you would otherwise have to store on a PC at your local site
(e.g. your residence, your place of work, or somewhere that you have
personal, physical access to).
Remember that for these purposes, the terms "remote hosting provider",
"on-line backup service", etc., can be used somewhat loosely to describe
any remote service that can store any kind of data for you. There is a
specific reason why I mention this: In some ways, you can consider remote
Web e-mail services, for example Hotmail, Yahoo Mail, Gmail, etc., as a
"remote hosting provider". So, if (see below) the files that you are
planning to store on such a service are relatively small, you can
actually just e-mail them (as attachments to an otherwise innocuous
message) to the Webmail system and just leave them archived in some
folder that you maintain in your on-line mailbox.
From a data security point of view, there are a number of drawbacks that
you need to be aware of, about these services:
Instability -- You have no guarantee whatsoever, that the on-line data
storage company or organization, with whom you have left your data, will
be there tomorrow; this is particularly true of the "free" companies that
are presumably making their money by presenting you with advertisements;
the business scene is littered with the corpses of these companies. If
this happens, you can say "goodbye" to any files that you uploaded to
your former data hosting provider.
In this respect, make sure to remember that most of the "free" on-line
mail services (for example Hotmail) have an automatic timeout designed to
avoid their servers being cluttered up with files from dormant e-mail
accounts that have been forgotten or otherwise abandoned by whomever set
them up (this is also true of the few "free" file backup services that
remain on the Web), so you have to access the Webmail account every so
often to keep it "alive".
This can be done via an automated set of scripts triggered on a timer
basis (Windows "Scheduled Tasks" or the "cron" facility under Linux), but
I would advise against doing so because review of these scripts by an
intelligent attacker would give him instant knowledge of the mailbox and
user ID that you used to access the Webmail service (of course, what
would happen next in this scenario is a US PATRIOT Act or UK RIPA request
for Hotmail, Yahoo, etc., to immediately forward the contents of your

remote mailbox to the attacker, on pain of criminal prosecution). Even if
you do your keep-alive accesses manually, you will have to be careful to
cover your tracks and ensure that no traces are left of these actions in,
for example, your browser history or cache.
Cost -- A few of these hosting providers will give you a certain amount
of space for free, but most will charge you handsomely for storage of
more than a few megabytes' worth of data. If you don't keep paying the
charge, your data will simply vanish... which is obviously a bad thing.
But there is also another very serious issue from a security point of
view for the commercial data hosters, namely, since you have to pay for
them by credit card, your identity will now be clearly known both to the
hosting provider and to the police. When combined with some of the issues
shown under "Security" below, this really means that you are crazy to use
a paid hosting provider, at least for any type of data that might
possibly incriminate you.
Accessibility -- What happens if the Internet link, either from wherever
you are to the Internet, or from the hosting company to the Internet, is
"down"? There is no "off-line" mode for these systems; if you can't get
to them over the Internet, you can't get through to them, period.
Security -- You have no effective guarantee at all (especially for
hosting companies located in police-friendly locations such as the U.S.
or our good old United Kingdom) that the organization involved, won't
happily hand over all the files that you have stored with them, to the
first person who claims (correctly or otherwise) to be from the
government and demands this information.
This is an especially important issue when you consider that each and
every access that you made to a given remotely hosted file, would be
logged at the hosting provider's end, thereby giving law enforcement
authorities a precise description of your digital comings and goings (for
example, allowing them to track the IP address from which you are
accessing the Internet, unless you have used some system like Tor to
anonymize this). And note, furthermore, that any spying of this sort,
would go on in secret, without any way for you to know that it is
happening.
Capacity -- Given the inherent bandwidth limitations of Internet access
(even for relatively "high speed" options such as ADSL, cable modems and
so on), as well as the fact that the hosting providers can't allocate an
entire hard drive for every customer, on-line backup or storage is
effectively ruled out for really large datasets or files, at least if you
don't want to be waiting an hour to access any of your material while it
downloads (and you shouldn't wait that long, for reasons that are
explained elsewhere in this document).
Control -- Remember that in the preamble to this document, I went out of
my way to make the point that you should never share knowledge of, or
control over, your sensitive data, with anyone else, let alone a large
corporation that has every incentive to co-operate with the government
and the police and no incentive at all to co-operate with you. The
instant that you upload your data to a third party remote hosting
service, you have violated this basic rule of data security. If you still
want to do it, you have to be sure that you have taken 100%, "24x7x365"
robust steps to prevent the remote hosting provider from becoming the
fatal weak link in your data security fortress.

So, having taken all the above into account, is there any point at all in
using an on-line data hosting service? I believe that there is.
One very valid approach, from a security point of view, would be to
archive only relatively small (< 100 kilobyte), robustly encrypted files,
each of which would preferably be obfuscated in some other way (for
example, the real name of the file is "MySecretPasswords.tc", but the
name of the file when you upload it to the hosting provider is
"FreeBeer.doc"), to two or more different remote hosting providers (to
avoid the "poof it disappeared" scenario, since both providers are
unlikely to go bankrupt at precisely the same time). You would then be
especially careful only to access these hosting sites with the Web
surfing precautions explained elsewhere in this document (e.g. never use
your real name, use encrypted, anonymized connections, etc.) and would
only access the documents when absolutely necessary.
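One rough sketch of the "encrypt, then obfuscate the name" step, assuming
you have GnuPG installed and using the invented file names from above:
gpg --symmetric --cipher-algo AES256 MySecretPasswords.tc {wrap the file in a passphrase-protected layer}
mv MySecretPasswords.tc.gpg FreeBeer.doc {give it an innocuous name before uploading}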
Incidentally, not that I should need to say this, but -- NEVER, EVER,
unencrypt the contents of your sensitive data stored on a remote server,
so as to create an unencrypted / plaintext version of the same file,
which is also stored on the remote hosting site; you might as well
forward a copy of it to the police at the remote site, if you do this. If
you have to unencrypt something that is stored on the remote site, (a)
copy or transfer the original, encrypted version to an encrypted
container on your local PC; (b) unencrypt the transferred file to the
local, encrypted volume; (c) do whatever you wanted to do with the local
copy of the plaintext data, then either (d) securely wipe the local
plaintext file, or (if you need to store a revised copy of it), (e) re-
encrypt the file and upload it to the remote hosting site.
Keep in mind that many of the remote hosting providers have background
search and indexing systems that scour their servers' hard drives for
"illegal" content; so, the second that such content shows up in plaintext
form at their end, it will be noted as such and the local law authorities
(at their end, not yours) will be immediately notified. At best, this
sequence of events would mean that the remote hosting company would
delete all of your files and close your account (bad from a data
availability point of view); at worst, you will get your front door
kicked in by the local police (at your end), after the remote location's
police put your name up on INTERPOL and phone the cops wherever you live.
You have been warned!
So what would you put on such a hosting site? If your archives of
"sensitive" data are small, perhaps you could put the data itself on the
remote site; however, as explained above, there are risks associated with
doing this. A better set of candidates for remote storage might be an
encrypted master archive of passwords to your actual encrypted, local
data, or, additionally, something like TrueCrypt keyfiles that are
required to unlock an encrypted volume.
Steganography (see elsewhere in this document) is a very good option
here, since, from the point of view of a hostile government or police
force who demands access to your account at the hosting provider end,
when the ISP / hosting provider happily complies (you're just some
anonymous customer "out there in cyberspace"; they're the local police,
threatening to arrest anyone at the hosting provider who gives them the
slightest back-talk; whose interests do you think are going to come out
on top, in a dispute like this?), all the police will see is your nice
little collection of pictures of prized orchids and petunias. (In such a
case, the attacker is far more likely to conclude that they've got the
wrong person under surveillance, as opposed to the much less probable
possibility that they have the right person, but that person has been
smart enough to use tactics like the ones you're learning about by
reading this document.)
The huge advantage of doing things this way is, provided that you are
willing to accept the risk of some kind of Internet or hosting provider
disruption making your encrypted local data permanently inaccessible, you
could completely remove the local password storage from anywhere on your
local network, making it for all intents and purposes invulnerable to
compromise by an intelligent, well-equipped intruder. (He can use EnCase
all he wants, to scan your hard drives for your passwords, or the file
that contains them; but in fact the files involved are on some server,
somewhere up on the Internet.)
It's an intriguing idea. Am I doing it, myself, right now? Ah, you'll
have to guess. That would be telling, wouldn't it?
Sh*t Happens, Or, "Don't Be A Crash Test Dummy"
-----------------------------------------------
Pay attention to this next section, IT IS IMPORTANT!!!
Another very rarely appreciated aspect of computer data security, when
viewed in the context of a sudden attack by a determined, well-equipped,
intelligent adversary (e.g. the secret policeman who kicks in your door),
is, "what happens when the computer / its applications, crash".
That is, you are SUPPOSED to shut down your PC nicely, by selecting the
"Shut Down" option on the Start menu, so it can close off all of its open
files, write all of its buffer data back to the hard drive, and so on.
-- But what if you have to turn off the power, suddenly, when you hear
the sound of the front door being smashed open?
-- What if the computer's power supply simply dies on you, in the middle
of a surfing session?
-- What if, one day, when you turn the computer on, the main hard drive
develops "bad sectors" and refuses to boot your PC's operating system?
All of these situations happen with depressing frequency with modern
digital equipment; price competition has brought the profit margins of
most types of computers, hard drives, etc., down to the point where
manufacturers simply can't afford to build a lot of robustness into what
they build... not that they would, anyway, even if they did get a decent
margin. But hardware (and, to a lesser extent, software) failures
represent a grave and very under-rated threat to both the confidentiality
of your sensitive data, and to you.
There are a wide range of specific technical reasons for this, but in
general, they all boil down to one thing: IN ORDER FOR YOU TO USE IT,
DATA HAS TO BE UNENCRYPTED, AT THE POINT AT WHICH YOU ACCESS IT. (E.g.,
you can't read that .DOC file from Islamic Central Command, if it isn't
in Arabic text on your screen.) Now, normally, when you're done reading
your document, you will save it safely to your encrypted folder and shut
down the computer, making sure to clear your swapfile and do all the
other things that a diligent little revolutionary needs to do.

Unfortunately, just before you were about to do all of the above, you
heard a 'pop' from your PC's power supply and the whole system went dead.
Meaning: ALL YOUR DATA IS NOW UNENCRYPTED, SITTING ON THE HARD DRIVE,
JUST ASKING TO BE INTERCEPTED BY ANYONE AND EVERYONE. The point is,
neither the software nor the hardware can do anything to secure your
data, if some physical problem with the PC itself has prevented the PC
from doing a "controlled" shut-down or re-encryption of the sensitive
data. So, in effect, a hardware failure at the wrong time can basically
stop all of your data security best practices dead in their tracks, since
you can no longer access the computer to tell it to re-encrypt your data
files, clear away incriminating evidence on the hard drive and so on.
As almost anyone who has used a moderately priced personal computer for a
few months can attest, the quality control on PC components, especially
certain parts of the computer such as its fan, power supply, RAM memory,
video card, hard drive and (sometimes) motherboard, can vary
tremendously, even between individual PCs manufactured by mainstream
builders such as Compaq, HP or Dell. Component quality on "clone" PCs
from no-name builders can be even worse.
And don't think that your computer is the only thing that can fail,
incidentally. Firewalls, routers, gateways, everything... these can, and
often do, die in a split second. In saying this, I'm not just referring
to whatever infrastructure that you have under your own direct control.
You also have to keep in the back of your mind, "what will be my
approach, when (say) the Web gateway being used by my ISP crashes and
dumps all of its then-current connection information, into a 'crash log'
on the disk?"
The concern here is the same issue as when you send your PC in to the
repair shop to have it fixed. The technician at the other end may (in the
ISP case) have to look at the crash log file to see what, if anything,
crashed the gateway server. The technician in the repair shop may have to
restore Windows operating system files on your hard drive, to get the
operating system to boot up again. In either case, while their main goal
has nothing to do with intentionally compromising your security, if they
stumble across "controversial" content that they believe you to have been
in possession of (for example a transaction record with a "controversial"
Website, or certain kinds of digital images on your hard drive), they may
believe themselves to have a legal or "moral" duty to inform law
enforcement of their suspicions. You know what happens then -- an hour
later, the steel-toed boots kick down your front door. You would be
AMAZED, and horrified, at how many otherwise careful users of
"controversial" data get caught in exactly this way.
For a careful computer user, there can only be one conclusion to draw
from the above:
-- You have to structure the way in which you use your PC, on the
assumption that it, and the network infrastructure through which it
communicates, may fail, unexpectedly, at any time.
-- You have to build this assumption into your data confidentiality
plans, so that if the computer DOES fail, the impact on the privacy of
your data will be as little as possible.
Is a complete answer to this problem possible? I have some not-so-good
news for you, here: I believe that there isn't any 100% effective way
that you can completely protect yourself from the compromising effects of
sudden hardware failure. That having been said, there are some steps you

can take to reduce the impact, when and if this should happen to you:
(1.) Use high-quality hardware components that you can easily, and,
preferably, cheaply, replace yourself. The more work that you can do to
fix your own equipment, the less you will have to rely on untrustworthy
third parties, to do your repair work for you.
Now, whether this means "buy a good computer" as opposed to "buy a
clone", that's a much more difficult one to call. I can make a good
argument either way. But the key point here is that whatever you use, you
should be able to fix it -- or destroy it -- yourself, without the
assistance of anyone else. Remember the "third party" warning that I gave
you at the top of this document?
(2.) NEVER store, or access, the original copy of anything that you don't
want an intruder to have access to, from any unencrypted source or
connection. For example, never access a controversial file from an
unencrypted folder on your hard drive, and never access controversial
content on the Internet except by using an anonymized, encrypted
connection via services like Tor. (Note: The challenge here is that
unless you encrypt a lot of "hidden" or "temporary" directories like the
"thumbnails" one, as well, you can THINK that all your "controversial"
data is protected, but, behind your back, the operating system has
quietly made copies of it, in some unprotected location. Plan against
this happening!)
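To make the preceding note concrete, here is a minimal shell sketch of
scrubbing a few of the usual "behind your back" copy locations on a
GNU/Linux desktop. Every path below is an example only -- the actual
locations vary by distribution, desktop environment and browser, so
survey your own system first (and remember that overwrite-style tools
like shred are much less dependable on journalling file systems):

    # Securely overwrite, then delete, cached thumbnail copies:
    find ~/.thumbnails -type f -exec shred -u {} \;
    # The "recently used documents" list kept by many desktops:
    shred -u ~/.recently-used*
    # One example browser disk cache (path is illustrative only):
    find ~/.mozilla/firefox/*/Cache -type f -exec shred -u {} \;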
While taking these steps will not completely protect you from a sudden
hardware crash (due to the possibility of some of the data having been
swapped out to the swapfile, etc.), they will significantly reduce the
impact of a crash, and will greatly complicate the task of an intruder,
if and when a crash comes your way.
(3.) Use applications, and hardware, that have deliberately been
engineered with the assumption that they will have to safeguard the
confidentiality of data even in "anomalous" operational conditions.
There is a simple meaning to this, currently: USE TRUECRYPT. This Open
Source encryption system, while not perfect, implements excellent
practices such as never writing the key to disk in unencrypted form,
drastically limiting the impact of a crash or sudden shutdown. (Be
aware, though, that the key of a mounted volume does live in RAM while
you are working; one more reason to dismount volumes the moment you
are finished with them.)
(4.) Try to store "controversial" data on media that (a) can quickly and
easily be physically destroyed (for example, by a good old whack by a
hammer or a stomp by your boot) and which (b) are cheap enough so that
you won't think twice, when the time comes to do so.
Maybe the NSA _can_ retrieve data that was once on a USB key that has
been shattered into 256 tiny little bits; but you can be sure that your
local cops can't, so physical destruction of this kind of storage media
is a nearly 100% guarantee that prying eyes aren't going to see it. Good
media types to use here are USB keys (the cheaper the better),
rewritable CD and DVD discs, and SD memory cards (the thin little
almost-square guys that you can put into a digital camera).
The main problem that you are going to run into here, is hard drives,
because these are both expensive and durable enough as to discourage
casual physical destruction. But for hard drives there is a slick trick.
What you can do, in the event of an unexpected "in the middle of a
session" operating system or computer crash that might have left
sensitive data unencrypted, is always keep a bare-bones computer
standing around; all it needs is an ATA / SATA / SCSI (depending on
the kind of hard drive you use) controller plus hard drive power and
connector cables, a keyboard, some kind of cheapish monitor, a floppy
drive and a handy floppy-disk based version of data sanitization
software such as DBAN (see: http://dban.sourceforge.net/). The minute
that the PC containing a "sensitive" hard drive goes south, you open
it up, yank the compromised hard drive, connect it to the stand-by PC,
connect the power and data cables and then boot up with your DBAN
floppy, following the instructions to do a quick wipe of the
compromised hard drive's media.
Obviously, this technique is not completely practical in the "SWAT
kicking down the front door" scenario, but it's a very good fail-safe for
less serious scenarios of possible sensitive data compromise.
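Incidentally, if a DBAN floppy isn't within reach, any Linux boot disc
on the stand-by PC can do a serviceable quick wipe with nothing more
than dd. A rough sketch -- the device name /dev/sdb here is an
assumption, and getting it wrong will destroy the WRONG drive, so
confirm it with "fdisk -l" first:

    # Quick single-pass zero wipe of the whole compromised drive:
    dd if=/dev/zero of=/dev/sdb bs=1M
    # Or, much slower, one pass of pseudo-random data:
    dd if=/dev/urandom of=/dev/sdb bs=1M

One pass won't satisfy the truly paranoid, but it is enormously better
than handing over a drive full of plaintext.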
(5.) Make sure that when you arrange your storage media, you completely
segregate the hard drive / partition / external device from which you
execute your computer's operating system, from the equivalent media where
you store your sensitive data. The reason why this is important (allowing
for the "contamination of swapfile" issue that can occur in the case of a
catastrophic hardware failure), is that it makes discarding of the
malfunctioning storage medium less painful because you can still run your
operating system.
(6.) Last, but certainly not least, keep your eyes and ears open for
signs of an impending system failure; if it does look like something is
about to fail, stop your normal work and replace the offending component,
right away.
Things to watch out for include, but are not limited to:
-- Excessive slowness in media reads and writes (this can
indicate "retries" where the hard drive is repeatedly trying to read from
or write to a sector of the hard drive that is marginally good),
especially if accompanied by a "groaning" sound;
-- Odd noises coming out of the hard drive, particularly "click,
click" or "groaning" ("whoo-ooo, whoo-ooo") sounds, or repeated sounds
of the drive powering up and down (these can be indicative of either
bad sectors or a problem with the hard drive's read / write head);
-- Sudden flickering of the computer screen (this can indicate a power
supply issue);
-- Errors displayed by the operating system when trying to read from or
write to a file;
-- Obviously, computer "hangs" or "freezes" which cause you to have to reboot with the computer's "Reset" button, in mid-session (you may well be
able to re-start the computer, but use what time it gives you to do this,
to back up your data and erase anything confidential; you are living on
borrowed time, when this kind of thing happens).
Incidentally, if you have bad RAM memory, sometimes the computer will
work perfectly well until you start multitasking a number of large
programs or start working with really large multimedia files -- the
symptom here is that the computer only "hangs" when heavily loaded,
because the low-range RAM chips are good while the ones higher up in
the memory map are bad, and those higher ranges are only touched when
the system is pushed to its limit.
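On Linux, one practical way to keep your eyes open for a dying hard
drive is the drive's own S.M.A.R.T. self-monitoring data. A small
sketch, assuming the smartmontools package is installed and that the
drive is /dev/sda (adjust to suit):

    smartctl -H /dev/sda   # one-line overall health verdict
    smartctl -a /dev/sda   # full attribute dump; rising
                           # Reallocated_Sector_Ct or
                           # Current_Pending_Sector counts are the
                           # statistical version of "click, click"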

Your Nuclear Weapon Against Vulnerabilities AND Hardware Failure: The
Humble USB Key
---------------------------------------------------------------------
In reading all of the above, undoubtedly the smarter among you have
started to ask the questions, "Gee, you know, these computer operating
systems are just crawling with potential compromises and data
leaks" (which they are -- as we have seen, Linux is far better than
Windows in this respect, but it's equally far from being perfect) and
"It's too damn bad that we have to load these operating systems on a hard
drive that's (a) expensive to throw out or replace, (b) relatively prone
to failure because it has a lot of moving parts, all doing so in a very
hot environment and (c) hard to swap in and swap out, for whatever
reason, isn't there some other way in which I can run my PC".
I am old enough to remember those good old days when operating
systems were small enough that you weren't forced to install them on
your computer's hard drive. Back in the Stone Age of personal
computers, a PC would boot off an operating system like Apple II DOS or
MS-DOS, running off the vast storage available on a floppy disk whose
capacity ranged from around 100 kilobytes (!) to 1.44 megabytes (and
recall that the operating system part of these disks was much smaller
than their overall capacity; early MS-DOS, for example, was typically in
the 64 kb range). Want to try a new operating system? Simple -- stick in
a new floppy disk, suitably formatted, and fire away! Ah, those _were_
the days, weren't they?
For a long time, the continued, and in most cases unnecessary (due mostly
to bad programming practice by Microsoft, which by the mid 1990s had
eliminated its major competitors and therefore had little incentive to
stay sharp) growth in the size of computer operating systems made it
virtually impossible to run them properly, unless you installed them
permanently on a hard drive which was in turn permanently attached to the
computer. This situation, while obviously undesirable from a data
confidentiality perspective, continued until the early 21st Century, when
a counter-movement developed among the computer hacker community
(particularly the Linux side of it) to reverse the "feature bloat" that
had come to characterise operating systems.
The first widely-used manifestation of this was the hugely popular
"Knoppix" "Live CD" implementation of Debian Linux that was introduced by
the German engineer, Klaus Knopper, a few years ago.
The "Live CD" concept basically allows an operating system, in this case
a version of Debian Linux with the "KDE" graphics interface, to run off a
700 Mb CD-R, in more or less the same way as in which old computers used
to boot off a floppy disk. Knoppix was a development of enormous
significance for the data security community and we all owe Dr. Knopper a
debt of gratitude for the ground-breaking work that he did in perfecting
his operating system.
For the first time in years, users could boot and use a modern PC
productively (because Knoppix included almost all the popular Linux
environment applications, for example the Firefox browser, KNews reader,
chat clients, multimedia players and so on; it also had an amazingly
well-implemented way of auto-detecting and configuring peripheral
hardware, for example video cards and LAN cards, which was necessary
when you consider that the Knoppix CD can't know what PC it will be
running off of today, as opposed to yesterday), without being enslaved
to a tedious, problem-prone, lengthy hard drive installation. Knoppix
is still
available today, although its development has lagged lately due to
personal issues that Dr. Knopper has had to attend to, and it is still a
viable option for the data security conscious user, although it has
largely been succeeded in the Live CD arena by later, more sophisticated
distributions such as Ubuntu, Xubuntu, Mint, Sabayon, MEPIS and so on.
Good as it is, the Live-CD approach has certain limitations from an
ease-of-use perspective.
First, CD-ROM drives are extremely slow compared to hard
drives (and more modern media like USB 2.0 keys), and to make matters
worse, most CD- and DVD- drives have a power saving feature that "spins
down" the disc while it isn't being accessed; leading to a notable, and
very irritating, delay, every time you try to do something new on the
system and the operating system has to go to the CD-ROM drive to fetch
the requested program. CDs are also rather large and cumbersome, and
they're fragile as well; they scratch easily and can also shatter easily.
Secondly, of course, once a CD-R or DVD+/-R is burned, you can't record
anything else on it, meaning that you have to have some secondary storage
medium on which to save data. This means that you are more or less stuck
with the mix of applications that the designer of the particular Live CD
distribution picked, when he created it; some Live CD systems, for
example later versions of Knoppix, do allow you to add applications to an
extent, but the problem here is that you have to either use a hard drive
or an external USB key to do so.
The obvious question is, "Why not just use a USB key, or other external,
removable, low-cost, read-write medium (for example a Secure Digital
'wafer') to run the entire operating system from?" Until very recently,
this was not a very practical option, for a variety of reasons.
The flash RAM memory (this is like conventional RAM memory in that it can
be read from or written to, quite quickly, and it has a very small form
factor with no moving parts; it is unlike regular RAM in that it does
_not_ vanish into oblivion, when you remove it from an electric power
supply) in these devices is quite expensive on a byte-for-byte basis,
meaning that until 2005 or so, purchasing a USB / flash RAM key with
enough storage to handle even a "lightweight" computer operating system,
meant spending quite a bit of money.
Secondly, many computer motherboards built before about the 2003-2005
time period did not have a "boot from USB" option, meaning that you
would have to use some awkward tricks (for example, booting initially
from CD-ROM but then having the CD-ROM pass control over to the USB
key at some point) to implement the solution.
But if you have a reasonably modern computer, you can hit whatever key
the "BIOS Features" hotkey is when the computer is about to boot (on many
Award BIOS this is the "Del" key, on some others it is F1, sometimes F10)
and see if your PC's BIOS has a "Boot Order" or "Boot Options"
selection. Go to that setting and see if it has a "Boot from USB Hard
Drive" or "Boot From USB Flash" option; if so, move that option ahead
of "Boot from First Hard Drive", save your options and then reboot
your computer, which is now theoretically capable of booting from a
USB key... if (and only if) the key itself is properly configured.
The subject of how to prepare a USB key as a boot device is quite

complicated and is out of scope for the purposes of this document; but,
fortunately, unless you want to do something crazy like trying to get
Windows XP or Vista to boot in this way (good luck trying that!), it's
actually not too difficult to do. Here is an excellent place to get
started: http://www.pendrivelinux.com/.
(Note: I am using the term "USB key" in a rather loose sense of the word;
most of the information in this section can also be applied to other
small removable semi-permanent storage devices, for example Secure
Digital chips, SDM cards, "mini" hard drives with a USB interface, etc.,
provided, of course, that the computer with which you are going to use
them, both has an interface in which to plug them in and that the
computer has the technical capability to boot from the device. However,
as of the time when this is being written, the results have been very
mixed on the subject of being able to boot a computer from anything other
than a "conventional" USB key. Of the alternate types of media that I
have so far tried, I have yet to see a computer BIOS that has a "boot
from SD Chip" option, for example, however desirable that it might be to
do this. You could possibly end-run this problem by getting a USBinterface SD chip reader, but how well it would actually work, I don't
know.)
There is even an anonymity-specific version of a Live CD Linux setup
that was meant for use with a USB key -- check out
http://www.browseanonymouslyanywhere.com/incognito/. (Preliminary
testing of
this tool, including using its built-in ability to be installed to and
then booted from a USB key, indicates that it is _very_ good; "Incognito"
even gives you an easy, built-in option to use TrueCrypt to encrypt your /
home directory, and it is set up to use the Tor anonymizing peer network
for Internet communications. Another excellent feature, which shows that
the Incognito people aren't amateurs, is that it clears RAM memory
completely, by writing random data to each RAM page, before the computer
finally shuts down. Anyone interested in either secure data storage or
secure Web surfing should certainly give Incognito a serious look.)
I'll let you explore the Pendrive Linux site to learn all the gory
details, but as a brief comment, the most important thing that you will
have to do, to get a cheap USB key to be bootable, is to ensure that it
has a "bootable primary partition" established on it (by default as they
come from the store, most cheap USB keys don't have this), and you will
have to have a "boot loader" installed as the first program to be set up
on that partition. Once you have these set up, the rest is usually
relatively easy, at least with Linux.
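Just to give you the flavor of it, here is a bare-bones sketch of
those two steps using fdisk and syslinux. The device name /dev/sdb is
an assumption (check with "fdisk -l" first!), the mbr.bin path varies
by distribution, and this will destroy whatever is on the key:

    fdisk /dev/sdb        # make one primary FAT partition (type "b")
                          # and toggle its bootable flag ("a" command)
    mkfs.vfat /dev/sdb1   # format the new partition
    syslinux /dev/sdb1    # install the SYSLINUX boot loader on it
    # Some keys also need a standard MBR written to the device itself;
    # the path below is Debian's and will differ elsewhere:
    dd if=/usr/lib/syslinux/mbr.bin of=/dev/sdb
    # Then copy a kernel, initrd and syslinux.cfg onto the partition;
    # the Pendrive Linux site walks you through that part.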
I would recommend that you use a USB key with a little spare space on it;
you may be able to get away with a 2 Gb (two gigabyte) one, but I'd
recommend at least 4 Gb to give yourself some space for updates, caching
and so on; 8 Gb is clearly preferable as you should have plenty of space
for everything. It really depends on your budget.
I would also recommend that you only use them on a PC with enough RAM
memory (~1 Gb to ~3 Gb depending on the operating system you are using)
so that it doesn't need to do "swapping" or "demand paging" to the hard
disk, as described elsewhere in this document. There are two reasons for
this; the obvious one is just that swapping is a bad idea, period, from a
security point of view, but, additionally, the flash RAM memory used on
USB keys is not as robust as that on a conventional hard drive, and
having it constantly written to and read from, in the manner that demand
paging usually implements, can reduce the durability of the flash RAM
chips themselves.
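Verifying that the running system really isn't swapping is a
one-liner; a quick sketch:

    swapon -s    # lists active swap areas; ideally it prints none
    swapoff -a   # deactivates all swap for the current session
    # To make this permanent, comment out any "swap" lines in
    # /etc/fstab on the key.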

There are many reasons why having your entire operating system running
from a small USB key is a great idea, from a security point of view. Here
are just a few:
(1.) These keys are so small and portable that they can be always with
you. That is, you can wear one on a keychain around your neck, or hang it
on your car keychain, or just keep it in your pocket. This is a very good
idea from a security point of view because it means that the chance of
someone getting secret physical access to your operating system and
quietly installing a "backdoor" on it, is now next to zero. (Note: Just
remember to take it out of your pocket, when you send your pants to the
cleaners. USB keys aren't waterproof! You have been warned!)
(2.) The portability and "always with me" aspects of USB keys means that,
in an emergency, they can be discarded or totally destroyed, in almost
any set of circumstances (a few good whacks with a hammer will do
nicely). While this is obviously an extreme option, consider that (at
least where I live) a 4 Gb USB key costs in the neighbourhood of 20 Euros
or about 40 U.S. Dollars, not counting the cost in time to reinstall
Linux on the replacement key; now, I don't know about you, but although I
probably wouldn't throw away 20 Euro without thinking about it, I sure
would prefer doing that to spending 20 years in jail after the police do
a forensic analysis of all the hidden nooks and crannies on my friendly
local computer operating system.
Incidentally, depending upon the relative size of your "controversial"
datasets, it may be a good idea to store all data of this type on
something like a SD chip (and its relatives SDHC, etc.), which you can
access either via a built-in reader on your PC (many modern laptops and
even some desktops come with these nowadays) or by a USB-interface SD
chip reader.
Logically, your private data would also be stored in something like an
encrypted TrueCrypt container, the file listing for which would look like
"RADIOSTATIC.WAV" when its directory listing on the SD chip was viewed in
the operating system's file browser application. For added plausible
deniability, you could engage in a little primitive security by obscurity
by marking the "hidden file" bit on "RADIOSTATIC.WAV"'s directory listing
(this would cause it not to be shown by a default configuration file
browser, although any experienced forensics attacker would know how to
bypass this), then add a few .JPG format pictures of random landscape
scenes from places where you've never visited, to the SD chip's directory
(don't forget to change the file time and date stamps to something that
doesn't implicate you).
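Here is a sketch of those disguise steps from a Linux shell. All file
and device names are examples only, and the mtools-based "hidden bit"
trick assumes the chip carries an ordinary FAT file system:

    mv container.tc RADIOSTATIC.WAV      # innocuous-looking name
    cp RADIOSTATIC.WAV /media/sdchip/    # copy onto the mounted chip
    touch -t 200608141032 /media/sdchip/RADIOSTATIC.WAV
                                         # plant a harmless date stamp
    umount /media/sdchip                 # mtools wants raw access
    # To set the FAT "hidden" attribute from Linux, the mtools package
    # works; first map the chip's partition to a drive letter by
    # adding this line to ~/.mtoolsrc:
    #     drive s: file="/dev/sdc1"
    mattrib +h s:/RADIOSTATIC.WAV        # mark the file hidden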
When and if the police find the SD chip and scream at you "ha, we caught
you, you filthy drug dealer, we found the chip where you're hiding all
your records, confess now or you'll get it", you can ask to see the
chip and then blandly say, "hmm, you know, that looks like one of
those chips that you put in a digital camera to store pictures on;
tell me, officer, does it have any pictures on it? Because I don't
even HAVE a digital camera" (or, if you do have one, a camera that
doesn't use that kind of chip). Try to
imagine the cops' frustration at being presented with a bullet-proof
alibi like this; you weren't in physical possession of the chip when they
seized it, any "controversial" data on it is encrypted so they can't
access it, and the only unencrypted data on the storage device has
nothing to do with you. Serves the buggers right, I say.
I know that it's annoying to give up the speed and convenience of
using a "real" hard drive, but consider the huge privacy and security
advantage
that you'd get by using something as small and easy to hide / conceal as
a SD chip, for secure data storage. There are so many ways that you could
do this -- mail it off in a letter to yourself, bury it in an air-tight,
water-tight, insecticide-laden box in your garden, stick it in between
the pages of a book (make sure not to give that one away to the local
charity auction!)... the only limit is your resourcefulness and
imagination.
Another highly desirable aspect of the rapidly increasing data storage
density and small form factor of SD chips is that they make it relatively
easy and affordable to back up your "controversial" data, but to do so in
a way that does not attract attention or open up a possible attack
vector. Just buy two of them and write the same data to each. You can
store the alternate copy somewhere that the police will never suspect
(how about behind the last row of teacups in your mother's kitchen
cabinet?) and then, when the secret police come and do the door kick
trick at your own flat, you can quickly shatter the primary SD chip
(eating the shards, should you be so inclined) and let them search
wherever they want; all the important stuff is securely stored in a tiny
little chip, somewhere physically far away.
No data-hiding strategy is perfect (since, the police can simply beat and
torture you until you tell them where the goodies are), but this one is
as close to perfect as you are likely to get.
Note that all of the comments immediately above also pertain to
conventional USB keys, but are less compelling with a data storage device
in that form factor, since it is more likely to be detected and
recognized as a covert data repository.
(3.) This next advantage is poorly understood but it is in fact extremely
important.
Consider, for the moment, that one of the most prized pieces of
evidence for a sophisticated intruder conducting forensic analysis of
your PC, is the ability to prove a relationship between the user of
that computer (you) and the terrible, awful, "prohibited" data, that
is on the computer, somewhere.
On a conventional PC setup, where the operating system and data
storage areas are on one and the same physical computer and hard drive
(separated, if at all, only by partitioning or by the data being in a
different folder, e.g. "My Documents" for Windows or "/home/MyName"
for Linux), establishing this relationship is usually not too
difficult, since by definition you would have had to have had physical
possession of the computer in order to boot the operating system and
run the PC in the first place.
Also, the operating system is right there with the computer's CPU and
hard drive, so there is a high probability that if a file on a given PC
says it was owned by "Bill Jones", and your name is "Bill Jones", and the
file was modified by "Bill Jones" at 2:30 p.m. on Saturday, September 2,
2007, you are the one who modified it and was therefore in front of the
computer at that time. Certainly, this kind of evidence would be more
than enough to convince any judge or jury that you are the sinister
person who "owns" this appalling data whose mere existence is a mortal
threat to moral society.
When you boot a PC from a removable device such as a "Live CD" or a
USB key, this scenario changes radically, because now, the hard drive
on the
PC is simply a storage device -- ironically, your having booted off a
removable device (which, let us remember, was originally designed merely
as a kind of convenience storage device to replace the old floppy drives)
has in effect reversed the relationship between the PC (which was
traditionally supposed to have been the thing with the operating system)
and the external storage device (which was simply a "dumb" peripheral).
This has dramatic effects on the evidence chain, because the PC's hard
drive (which either does not contain an operating system at all, or which
contains an operating system that you have never touched) has no evidence
of any kind linking your access to, or use of, data on the hard drive,
other than possibly the time / date stamps on the files that you accessed.
In effect, from the point of view of a forensic investigator, it is as if
a "ghost" magically accessed the files on the computer's hard drive,
without doing any of the processes (e.g. firing up the operating system
from the primary boot partition on the hard drive, logging in as whatever
user, etc.) that the investigator would ordinarily use to narrow down the
question of who was using the computer at what time.
And it gets better. One of the very funny aspects of this system is that
you can actually set up a computer with Microsoft Windows (using either
the FAT or NTFS file systems that Windows uses as partition formats), but
boot the computer from a Linux-based USB key [the technical reason why
this is possible, is that most modern versions of Linux can use the
"FUSE" module that allows you to "mount" a FAT- or NTFS-based partition,
and access the files on it, in more or less exactly the same way as you
would for an ordinary Linux ext2, ext3 (not good because it journals) or
other partition], then do whatever you want, within some limits, using
the files on the Windows part of the system, then shut it down and go on
your way.
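As an illustration, mounting the Windows partition read-only from your
USB-key Linux might look like the sketch below; ntfs-3g is one common
FUSE driver for this, and /dev/sda1 is an assumed device name:

    mkdir -p /mnt/win
    mount -t ntfs-3g -o ro,noatime /dev/sda1 /mnt/win
    # "ro" plus "noatime" means the Windows file system is left
    # exactly as you found it, access times included.
    # ... read whatever you need under /mnt/win ...
    umount /mnt/win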
Try to imagine the fits that this would give an investigator; he or she
would be assiduously looking through the Windows Registry, the Event Log,
etc., for traces of who was using the PC; but there would be no such
thing, because at no time in this process was the Windows operating
system even running. Indeed, there would be no trace that the system had
ever been used, at all.
Personally, I wouldn't recommend doing this because I just don't like the
proprietary Windows NTFS file system -- remember that it is a journalling
system, although it's unclear whether this "feature" would still be fully
functional if an NTFS partition were to be mounted via Linux FUSE -- but
if you absolutely must have a Windows computer somewhere (possibly, to do
"non-controversial" duties with), this is one very secure way in which to
use it.
(4.) Finally, USB keys have the additional advantage that you can "mix
and match" operating systems (via simply installing different ones on
different USB keys and then plugging in whichever key suits your fancy
today) for different purposes; you might, for example, want to have an
operating system that fully and by default implements encrypted /
anonymized Internet routing (e.g. Tor, which we discuss elsewhere), but
have another that just uses regular Internet access for purposes of
convenience.
Certain kinds of high-security features, for example anonymized surfing,
are a "red flag" to forensic investigators, as well as juries and judges,
but they are technically difficult and time-consuming to install and / or

enable only when needed. You may want to therefore keep a "Key A" for
your regular use of the computer and a "Key B" for high-security
activities, substituting the appropriate key (along with a reboot) at the
appropriate time.
Despite all the above, there are a few limitations of using removable USB
keys:
(1.) Wear Leveling: This is a process that goes on behind your back
when you read from or write to the USB key. It is driven by the fact
that the Flash RAM chips that are the actual storage medium for the
USB key can only be read from or written to a very large, but finite,
number of times before a particular memory range of the chip becomes
unstable. Wear leveling is a technique that minimizes the impact of
this factor by spreading out where data is physically stored on the
key, so that a file that the operating system and file system think is
in a contiguous series of blocks in a single place, is in fact
scattered all over the Flash RAM chip. (A good explanation of wear
levelling is at: http://en.wikipedia.org/wiki/Wear_levelling.)
From a data security point of view, unless you boot your USB key dozens
of times a day over a period of several years, or unless you are
repeatedly wiping it and re-writing it (note however that secure file
deletion technologies very definitely do do this), you are unlikely to
encounter "bad sector" type problems. However, you should be aware that
wear leveling has the side effect of, possibly, partly negating the
benefits of good security tools such as secure file deleters, simply
because its background data dispersal may allow some data to be retained
on the USB key even if your security tool thinks that this has been
eliminated once and for all.
Another important implication of this is that I would strongly suggest
you not enable a swapfile, swap partition or other demand paging area
that physically resides on your bootable USB key. This is bad both
from a reliability perspective (as the swapfile tends to be constantly
read from and written to) and from a confidentiality one, since as we
have seen, swapfiles can be a gold mine of information for an
intruder; due to wear leveling, even if you think that you have erased
or "sanitized" your swapfile, if it physically resides on the USB key
it may still hold confidential data that is impossible to delete for
good, short of throwing the key in the dustbin.
(2.) Storage Capacity: Keep in mind that unless you have the budget to
purchase a large (~8 Gb+) USB key -- and at that point we start to get
into the "can I afford to throw it away at the first sign of trouble"
issue (the answer is "yes", by the way -- what's easier, spending another
100 Euro or spending another 10 years in jail?) -- while you may be able
to install and run a Linux operating system in, say, 2 Gb or so of space,
you will want to leave a little extra storage for operating system
component / application updates (particularly security-related updates)
as well as add-ons that you might want to install, as well as for
temporary files that the system needs while it is running.
For this reason I do not recommend that you set up a bootable USB key
with anything less than 4 Gb of space on the key; fortunately, 4 Gb keys
are pretty cheap these days and are bound to become even more so in the
near future.
(3.) Boot / Shutdown / Activity Speed: Although USB keys are much faster
than CD and DVD drives, they are still noticeably slower than even the

slowest "real" hard drive, particularly when they have a lot of inputoutput intensive tasks to perform, such as starting up the operating
system and shutting it down when you're finished your computing session.
(Indeed, at times it can look like the system is hung... it hasn't, just
be patient, something will happen eventually.) Note that once a program
is loaded into your computer's RAM memory, its execution speed will be no
different than if you had been running it from a normal hard drive.
There is an important consideration here, from a data security point of
view: you must always keep the slower performance of USB keys in the back
of your mind, when deciding how much time will be required to perform
certain tasks (for example, shutting down an encrypted TrueCrypt virtual
volume; this process requires the TrueCrypt application, or whatever
application you are using to encrypt your data, to encrypt each data
block and then write it back to wherever the virtual volume is being
stored). If for whatever reason your computing session will be
time-constrained, make sure that you take the USB key speed issue into
account, when planning what activities you will perform -- don't bite off
more than you can chew and then sit there fuming or sweating as you wait
for the USB key to do its stuff.
There is one other thing to remember about the speed issue: make sure
that you use a USB key with the "USB 2.0" interface format (the vast
majority of USB devices of all kinds sold today use this format, but
every so often you will run across one that is only compatible with the
original, much slower USB 1.1 format; I have found that in particular a
lot of cheap passive USB hubs, while they say they support USB 2.0,
actually only implement USB 1.1). Manufacturers make a lot of claims
about "my USB 2.0 key is faster than that other guy's", but in reality
I have found very little difference, because it is the USB 2.0
standard itself that sets the key's transfer speed (there is little a
manufacturer can do to get around this, beyond becoming non-compliant
with the standard). Incidentally, be careful about
mixing USB 2.0 and USB 1.1 devices on the same USB hub; you may find that
all the devices connected to that hub are slowing down to the lowest
common denominator, that is the very slow USB 1.1 interface speed.
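If you want to check what speed a given key actually negotiated, the
Linux kernel tells you the moment you plug it in; a quick sketch:

    dmesg | tail -20
    # "new high speed USB device" -> USB 2.0 (480 Mbit/s)
    # "new full speed USB device" -> USB 1.1 (12 Mbit/s); if you see
    #   this through a hub, suspect the hub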
(4.) It's Still an OS: Last, but certainly not least, remember that just
because your computer operating system is on a USB key, that doesn't make
it any more, or less, a "real" operating system. You still have to cover
your tracks when using the USB key version of Linux (or whatever other
operating system) in the same way you would if the USB key were the hard
drive inside your PC. In other words, running the operating system from
the USB key is an excellent tool in your data security / privacy arsenal,
but it is in no way a SUBSTITUTE for all the other good practices that
you have to use, to stay secure.
External Hard Drives? Yes or No?
--------------------------------
As a final comment on this subject, lately there has been a lot of talk
on various Linux forums about the possibility of booting Linux from an
external USB (or FireWire) hard drive (e.g. a "real" hard drive with
multiple gigabytes of "real", read / write millions of times, storage
space, as opposed to the much more restricted environment of a USB key
with its Flash memory, write leveling and smaller available storage
space).
Usually, of course, the motivation for doing this has little to do
with security; typically, the person involved has a Vista-based
computer and does not want to (horrors!) tamper with the boot records
on his or her
primary, internal hard drive, since both XP and Vista are notorious for
refusing to boot after anyone tries to set up (double horrors!) an
alternate operating system on the hard drive that Microsoft now "owns".
Another funny situation, which has a bit more to do with security, is one
in which a teenager in a conservative family wants to get around the
parental control (e.g. "censorship") features that Microsoft, in its
infinite wisdom, has built into both XP and Vista... what better way to
do this, than just boot the PC into Linux from an external hard drive
that can easily be hidden under the kid's bed? I certainly can't blame
anyone for wanting to do it, but then again I suppose my perspective
might be a bit different if I were a parent.
While the external hard drive route is a possibility -- and it's
certainly preferable to installing the operating system conventionally,
on a standard, internal hard drive -- I'm not sure that the advantages
are enough to justify the investment. External hard drives are large and
heavy enough so as to make them not casually portable, and they're also
robust and expensive enough to make it difficult to destroy them or throw
them away in the "they're busting down the front door" scenario.
Furthermore, for technical reasons (mostly the usual bone-headed, "you
can only use it how we say you can use it" approach on the part of
Microsoft), it is, as far as I know, impossible to install Windows XP
or Vista on, and therefore boot them from, an external USB hard drive,
just as it is impossible to use a USB key for the same purpose; so
this shuts off the possibility of having a "just for show when the
secret police make their appearance" instance of one of these
Microsoft operating systems on the external drive (which is the only
reason I can think of that you'd want to use XP or Vista as opposed to
Linux... on second thought, I suppose you might want to use XP for
games, which aren't as well supported under Linux as they perhaps
should be).
For these reasons, I think that external hard drives aren't a good
alternative to the USB key approach, but you may want to consider an
external, high-capacity, USB or FireWire hard drive as a way of further
breaking the evidence trail. That is, you could boot off the USB key but
have the external USB hard drive as a storage medium, hanging off another
USB port, placing your confidential data (for example, encrypted
TrueCrypt volumes) on the external drive, then storing the latter in a
safe place when you're done whatever it is that you were doing on the
computer.
If you want to try this, test carefully that your data is being saved,
and erased, properly. The interactions between an operating system on one
external physical interface of the computer's hardware (e.g., "USB Port
#1") and data storage media on other physical interfaces (e.g., "USB Port
#2"), have not yet been extensively torture-tested and it is conceivable
that you might get a few strange results, if you, for example, try to do
something fancy like store TrueCrypt volumes on the external hard drive
(what happens if you accidentally pull the external hard drive's USB
cable, while a TrueCrypt volume is open?). As always with data security,
better safe than sorry.
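A simple write / dismount / remount test along these lines will catch
the worst surprises. TrueCrypt's command-line syntax has changed
between versions, so treat this only as a sketch of the procedure, not
exact commands:

    mkdir -p /mnt/tc
    truecrypt /media/extdrive/container.tc /mnt/tc   # mount volume
    echo "canary" > /mnt/tc/testfile                 # write test file
    truecrypt -d                                     # dismount all
    truecrypt /media/extdrive/container.tc /mnt/tc   # remount...
    cat /mnt/tc/testfile                             # ...expect "canary"
    truecrypt -d

Repeat the test across a reboot, and (carefully) after unplugging and
re-plugging the external drive while no volume is mounted.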
So You're Thinking Of Encrypting Your Data. Good For You. Now What?
-------------------------------------------------------------------

Before you read what is below, consider that the general idea of
"encrypting" -- that is, "scrambling with a secret key number that
(hopefully) only you know, so that someone who wants to look at the data
in its original state, has to know the secret key" -- is a GOOD thing.
Without using encryption to secure your data, you are basically a sitting
duck for the next computer criminal, secret police officer, snoopy spouse
or curious co-worker who wants to get the gory details of what you're
doing on your PC.
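For the simplest possible illustration of the idea, here is what
"scrambling with a secret key" looks like in practice, using the GnuPG
tool that ships with most Linux distributions (a sketch of the
concept, not an endorsement of this particular workflow):

    gpg --symmetric --cipher-algo AES256 secrets.txt
    # produces secrets.txt.gpg; to decrypt it again:
    gpg --output secrets.txt --decrypt secrets.txt.gpg
    # NOTE: the unencrypted original is still sitting on disk until
    # you securely delete it; encryption alone is half the battle.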
But also consider that encryption, IF AND ONLY IF IT IS PROPERLY
IMPLEMENTED (it's next to worthless if it isn't used right), is only a
necessary, not a sufficient, condition to true data security. It is
certainly true that an amateurishly encrypted file, folder or hard drive
will provide some security against a casual intruder, but it will last
all of about 5 minutes against the kind of sophisticated, well-equipped
attacker with physical access to your PC, that we are talking about in
this document.
Can The Police Break Your Encryption?
-------------------------------------
This topic seems to generate endless discussion in the on-line security /
anonymity community, and I won't presume to perpetuate too much of that
here, except to say a few things:
(1.) The short answer to the question, "Can they break my encryption, if
it's 'good' encryption and I have used it properly?", is, "probably...
YES... BUT".
Consider, in this context, that the U.S. NSA (National Security
Agency) has TEN TIMES the budget of the CIA (read: "the NSA has as
much money as it wants to spend, its budget has NO LIMIT"), that it
has the world's best cryptographers and cryptanalysts, that it has the
world's most powerful supercomputers, that it has been around doing
this since the end of the Second World War (e.g., 60+ years), and that
breaking crypto is about 50% of its day-to-day work responsibilities.
(For more, see:
http://video.google.com/videosearch?q=Echelon+-+The+Most+Secret+Spy+System&sitesearch=#.)
The NSA is the Godzilla of code-breaking, and you are "Bambi" against
them.
Scared, yet? You should be... sort of.
(2.) The best available evidence -- and here, I have this based on
sources who I am not free to name -- is that if (and ONLY if),
sufficiently high priority is placed on breaking your encryption, the NSA
should be able to crack the password(s) of data files that are protected
by almost any widely available type of encryption -- PGP, TrueCrypt,
BitLocker, whatever you want -- in less than 24 hours, using the
incredibly powerful, expensive distributed computing supercomputer arrays
that they have dedicated to this task, down in the U.S.A..
Furthermore, most Western governments (perversely, it is residents of
"hostile" nations such as Russia, China, etc., who are 'safest' here,
because these countries are outside the Echelon / NATO "old boys' spy
club" and therefore do not have access to the NSA's tools) have
reciprocal arrangements with the NSA, meaning that in the unusual
circumstance where they encounter a type or implementation of encrypted
data that they can't easily crack themselves, they can just courier the
offending hard drive, CD, USB key, etc., off to their NSA friends at Fort

Meade, Maryland, and get the "professionals" to fix things once and for
all.
It is very telling (again, I can't reveal where I got this information...
you'll just have to trust me, and my source, that it's true), that even
for AES, the U.S. government itself specifically states that AES
encryption is only to be used for "sensitive but unclassified" data;
they use far more robust, internally developed, "tell anyone about it and
we shoot you" algorithms for more sensitive purposes (like, protecting
their missile launch codes from malicious foreign governments).
Now, ask yourself; if the U.S. government has data that's more secret
than "sensitive but unclassified", and if, (by inference), they have to
use even more powerful, still-secret encryption algorithms than AES to
secure their _own_ data, what does that say about AES' (or any other
publicly known) ability to withstand a determined attack by an
exquisitely well-equipped, sophisticated opponent like the NSA itself?
The conclusion is unavoidable: both AES, and most other commercially
available encryption algorithms, CAN, beyond a reasonable doubt, be
conveniently broken by the U.S. NSA (and, possibly, certain other
entities of similar capabilities, for example the U.K.'s GCHQ, the CIA, the
U.S. military, maybe the Chinese secret police... obviously, the exact
abilities here are among the most closely guarded state secrets),
probably with trivial effort and probably within a short time period,
probably no more than a few days to a week at most.
So, in an "ideal world" from the cops' point of view, they have you
"owned" no matter what you do, right, dude?
(3.) Having established that the most powerful levels of governments can,
in fact, probably break your "robustly" encrypted files, we have to ask
the next, more relevant question, which is, "WILL they"?
Here, the answer is much more complex, it's in shades of grey rather than
in black and white.
To fully understand this question, you have to first appreciate that the
police services within a given nation, as well as different nations
within the Western (NATO) political / economic sphere, have a very
definite pecking order, in terms of who gets access to what, based on
what set of criteria and on what time schedule.
In most Western nations, normal operational control over -- and therefore
use of -- the most powerful crypto cracking tools (e.g. the NSA's
buildings full of supercomputers mentioned above), is allocated only to a
very small subset of the nation's overall police apparatus; this is
because the most powerful anti-encryption tools are a relatively scarce
and expensive resource.
Furthermore, these tools were not originally meant for, nor was the huge
budget they require allocated for, mundane tasks like breaking the local
pervert's kiddy porn collection, (in China) figuring out which dissident
is sending e-mails about the Dalai Lama or finding out which stockbroker
has been doing insider trading; instead, a nation's cryptographic "crown
jewels" are intended for attacks on their counterparts in other, hostile
nations, for example they would usually be used to try to break in to a
potential adversary's military command and control apparatus, to spy on
the adversary's diplomatic traffic, and so on. They are just too
important to be "wasted" on day to day police activities.

Finally, administrative and operational control over these most
powerful crypto systems is typically in the hands of an agency, for
example the NSA, that is not technically directly within the police or
military chain of command, meaning that the NSA (or GCHQ, etc.) very
definitely _does_ have the right to refuse a police request, or, at
least, to make the police wait in line behind the agency's other, more
pressing code-breaking priorities. This significantly adds to the
bureaucratic and
administrative overhead that an ordinary police officer, or department,
must incur, in order to even request that a given piece of "evidence" be
subjected to the intelligence agency's most powerful techniques. Cops are
lazy; most of the time, they won't bother with dealing with "those white
coated spooks in the NSA", unless they are working on a project that is
unusually important to them.
So how likely are you, to have the big guns of your friendly local
government's spy apparatus turned against you? There's no hard and fast
way of knowing, and there may never be, not only for the obvious reason
that the authorities involved have an interest in keeping the exact rules
secret, but also because the rules can and will change according to the
latest policy objectives and public events.
For example, the NSA was originally set up to spy on "sinister, evil
Commie Russia" (and, despite the so-called "end of the Cold War", it
still does this on a continuous basis); but more recently, with the
disaster of September 11, 2001 in hand, its considerable resources have
been concentrated on tracking down the leaders of Al-Qaeda (it hasn't had
a lot of success here, because Mr. Bin Laden is smart enough to know that
any piece of Western technology that he might use, might very well be
compromised by the NSA; according to rumor, Al-Qaeda conducts all of its
communications via very low-tech, but reliable, methods such as human
couriers or carrier pigeons).
The above having been said, I can offer you only the following
hierarchy of "importance", in terms of my own perception of the amount
of effort or priority that the task of attacking your encrypted data
is likely to get from your local secret police, and, by proxy, the NSA
(should your local police decide to use that resource). (Note: I am
basing the conclusions you see below largely on my own intuitions, and
I am assuming that the data set is protected at least by a
well-implemented, strong algorithm such as 128-bit RC5 / Blowfish and
so on. I am also assuming peacetime conditions, as these rules can
change dramatically under a declared war.)
(Suspected) Nature of Encrypted Data               Likely Priority  Time To Crack Via NSA
------------------------------------------------------------------------------------------
"Soft" political dissident (e.g. animal rights)    Very low         N/A (would be refused)
Copyright infringement / piracy                    Very low         N/A (would be refused)
Ordinary pornography (anything non-mainstream)     Very low         N/A (would be refused)
Legal action (to unmask on-line anonymity)         Very low         N/A (would be refused)
Computer hacking (against business)                Low              N/A (probably refused)
"Medium" political dissident (anti-globalization)  Low              N/A (probably refused)
Virus / malware creation / distribution            Low              N/A (probably refused)
Ordinary drug dealing ("little fish")              Low              1 month or refused
Child pornography (most varieties)                 Low              1 month or refused
Legal action (ordered by court or judge)           Low              1 month or refused
Ordinary economic crime (insider trading)          Moderate         1 month or refused
Special economic crime (large-scale fraud)         Moderate         1-3 weeks or refused
Special drug dealing ("kingpin")                   Moderate         1-3 weeks or refused
Computer hacking (against government)              Moderate         1-3 weeks
"Hard" political dissident (radical or violent)    Moderate         1-3 weeks
Foreign government secrets (ordinary)              High             1-4 days
"Terrorist" group (Al-Qaeda foot soldier)          High             1-4 days
Child pornography (imminent threat to a child)     High             1-3 days
Imminent death threat to ordinary citizen          High             1-2 days
Computer hacking (against military)                High             1-2 days
Foreign government secrets (military)              High             Same day / 1-3 days
"Terrorist" group (Al-Qaeda leader or plot)        Very high        1-2 days
Suspected assassination threat to leader(s)        Very high        Same day
Nuclear terrorism (movie-plot scenario)            Extremely high   1 hour

What's interesting in looking at the list that you see above, is the very
wide range of requests that the spooks at a place like GCHQ or the NSA may
have sent to them on a day to day basis. Try to put yourself in their
shoes; they have a finite amount of time within which they can operate
their supercomputers, and not all of this time can be allocated towards
breaking encryption on behalf of local law enforcement (some of it has to
be spent on research, backups, etc.). Furthermore, the amount of time
needed to break a key can vary, even for a supercomputer, making
scheduling rather difficult.
What it all amounts to, is an extremely important rule of using
encryption to defend yourself against the ultimate in sophisticated
attackers (e.g. the NSA):
IT IS JUST AS IMPORTANT TO CONVINCE YOUR ATTACKERS THAT YOU DON'T HAVE
ANYTHING OF INTEREST TO THEM, AS IT IS TO ENCRYPT THE DATA IN THE FIRST
PLACE.
Looking at the above table -- whether or not the exact assessments
that I have made of the NSA's perceptions regarding the relative
importance of each type of encrypted data are true -- we can see that
the reaction of the intelligence agency, and therefore how much
supercomputer effort it will devote to the task, is intimately
associated with what kind of encrypted data the intelligence agency
believes it is working with.
If they believe (correctly or otherwise) that the block of encrypted data
that your local police have sent them in the latest courier delivery,
contains the map to where Al-Qaeda has hidden the stolen nuclear bomb
that's set to go off in your favorite city in 3 hours, they are likely to
use all the resources at their disposal to crack and analyse this data
instantly.
If they believe that the police have simply given them the hard drive of
some kid who's pirating music, they are likely to tell the cops to go
take a hike and not bother them with trivial matters like the preceding.
YOUR "sensitive" data probably lies somewhere between these two extremes
on the above continuum, but clearly, it is in your interest to have the
police believe that whatever your data is, it's something less serious
than it really is. How you do this, is explained below.
Finally, in view of all the above, every so often the cops let the mask
slip and reveal something relevant about what their REAL, day-to-day
capabilities are regarding breaking encryption. Here's one juicy little
tid-bit, from
(http://www.csoonline.com/article/221208/The_Rise_of_Anti_Forensics?page=7):
"One rule hackers used to go by, says Grugq, was the 17-hour rule.
'Police officers [in London s forensics unit] had two days to examine a
computer. So your attack didn t have to be perfect. It just had to take
more than two eight-hour working days for someone to figure out. That was
like an unwritten rule. They only had those 16 hours to work on it. So if
you made it take 17 hours to figure out, you win.' Since then, Grugq
says, law enforcement has built up 18-month backlogs on systems to
investigate, giving them even less time per machine."
Remember, the police, just like you, live in a real world of limited and
finite resources, versus potentially infinite demands to break
encryption. And since encrypting data is so much easier than breaking
encryption, this is a race that you can win, if you're smart.
Or, try this little gem:
"...The (local U.K. police) team cracks low-grade encryption using 100
quad-core PCs but for high-grade encryption it relies on the threat of a
prison sentence for individuals refusing to hand over passwords or
decrypted files..."
(Source:
http://networks.silicon.com/silicon/networks/mobile/0,39024665,39282266-2,00.htm)
One Hundred Quad-Core PCs, eh? That's verrry interesting, because it
gives a glimpse into the kind of code-breaking infrastructure that is
LIKELY, as opposed to possibly, to be used against you, the minute that
they break down the door and confiscate your supposedly robustly
encrypted hard drive.
Using my own background in the field, what this says to me is, "any level
of key under (currently) 128-bit, is possibly vulnerable to this kind of
attack"... but you have to keep in the back of your mind that the amount
of CPU cycles that a police department like this would be able to expend,
would depend upon a large number of other factors, particularly how
many other keys that they had queued up to break, the nature of the
protected
data (e.g. is it an easily recognizable .JPG or .DOC file, or is it some
double-encrypted TrueCrypt volume?), the strength of the original
passphrase used to encrypt the data, as well as (this plays a
surprisingly large factor) luck.
Personally, I wouldn't use anything less than 256-bit, largely because
you have to take into account the continuing evolution of CPU speeds and
mathematics processing capabilities (remember that it isn't just the
speed of the CPU that affects its ability to break encryption keys; other
factors, particularly the ability of the cryptanalytic application to
spread the work over multiple CPU cores / computers and the internal
architecture of each chip involved, are just as, or more, important).
The point is, the trade-off between using a 128-bit key or a 256-bit one
is usually a couple of seconds more for the larger key to encrypt or
decrypt a large data set. That seems to me to be a reasonable sacrifice
to make, considering the risks if the key is broken.
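If you want to sanity-check that trade-off yourself, the brute-force
arithmetic is easy enough to run. Every number below is an assumption,
chosen generously in the attacker's favor (400 cores, each testing
10^8 keys per second, sweeping the whole keyspace):

    echo "2^56 / (400 * 10^8) / 86400" | bc -l
    # 56-bit key: roughly 21 days -- dead
    echo "2^64 / (400 * 10^8) / 86400 / 365" | bc -l
    # 64-bit key: roughly 15 years -- uncomfortable
    # A 128-bit keyspace is 2^64 times larger again: hopeless by
    # brute force, which is why real attacks go after your
    # passphrase, the implementation, or you, instead of the raw key.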
Plausible Deniability, Or: "It Wasn't Me"
-----------------------------------------
Another seldom-appreciated aspect of using encryption is the "since only
'bad people' use encryption, and I found encryption software on your
computer, that must mean that you're a 'bad person'", concept.
So, when the police seize your computer and find (say) a program like PGP
installed on it, then say to the jury, "See? SEE? This awful person has a
data hiding program on his PC, which obviously means that he's hiding
plans for terrorism, child pornography, drug dealing, {your favorite
bogeyman here} and a host of other nefarious activities too sinister to
describe, somewhere on his computer! I mean, for what _legitimate_
purpose would anyone ever hide anything from his upstanding, honest,
selfless law enforcement authorities? Clearly, there can be none. Your
Lordship, I move for immediate conviction!!"
You would be AMAZED at how readily juries made up of ordinary,
computer-illiterate people will fall for this argument. (See:
http://www.news.com/Minnesota-court-takes-dim-view-of-encryption/2100-1030_3-5718978.html)
The larger point is, even the SLIGHTEST HINT that you are using
encryption, steganography (see below), password-protected resources,
data sanitization software or any other kind of "data hiding" system
on your PC, WHETHER OR NOT YOU HAVE IN FACT STORED ANY CONTROVERSIAL
CONTENT AT ALL ON THE COMPUTER, can and will be taken as "evidence of
criminal
intent" by juries who are hand-picked by the prosecution for
technological illiteracy, ignorance of basic constitutional rights and
pre-disposition to believe the assertions made by authority figures
(e.g., the police). This is ridiculously unfair, but, as we have noted
elsewhere, the police don't have to, and won't, "play fair".
So all other things being equal, you will want to employ encryption
technologies that do their work as unobtrusively as possible -- in
particular, systems that can work without being permanently "installed".
Unfortunately, the current state of the art in this area is, in my
opinion, far from perfect. Especially in the Windows environment, the
vast majority of encryption programs that are otherwise acceptable from a
security point of view (with the notable exception of the Windows version
of TrueCrypt, at least in its "traveler" mode) require permanent
installation, which leaves plenty of evidence within the Registry and
elsewhere for the police to make the above kind of assertions against
your character. And, in the Linux environment, the situation isn't a
great deal better, at least if you want the convenience of using a GUI
interface to your security software as opposed to typing in long, semi-
comprehensible commands at a CLI terminal shell prompt (as of this
writing, TrueCrypt for Linux, which is otherwise a command-line-only
affair, as well as most of the other "industrial-strength" encryption
systems of which I'm aware, requires permanent installation, as does the
"EasyCrypt" GUI front end to TC Linux).
There is some hope for the future, here, however. Due to the atrocious
state of security on the Internet today, encryption and other security
software is being more and more frequently built in to modern operating
systems. Examples of this are BitLocker in the Windows world and
FileVault for the Mac OS. The point is, if the security software is a
common, "normal" feature of your computer operating system -- one that
every Windows or Mac user gets on his or her PC, whether or not it is
specifically requested or deliberately installed -- then the above types
of "you've clearly got something to hide" types of arguments become much
less convincing.
For the time being, your best way to reduce your exposure to the "you've
got something to hide" tactic is (a) to keep the presence of encryption
and, especially, data sanitization, tools on your system as inconspicuous
as possible (don't do stupid things like having a big Desktop icon that
says, "Erase All Bad Data" or something equally incriminating), and (b)
develop and practice good encrypted data obfuscation (see below) tactics
so that it is difficult or impossible for intruders to isolate and
recognize encrypted data sets and then associate them with the correct,
originating encryption program.
A possible alternative, although I don't really recommend it because I
feel that it would just lend weight to the "you have something to hide"
attack, is to install or have recourse to a wide number of encryption and
data scrubbing programs; the purpose here is to force the attacker to
guess which one was used to encrypt or erase your data and waste time in
doing so. You can, in this situation, truthfully say, "oh, officer, I
really didn't encrypt anything, permanently; I was just playing around
with a bunch of security programs to see how they all work". But as I
said, I think this argument would be unconvincing to a jury that is
already prejudiced against you. The choice is yours.
At the end of the day, although having an identifiable encryption or data
shredding program on your computer is less desirable than having one that
works "invisibly", either of these two situations are far preferable to
using no encryption whatsoever. If you don't use robust, properly
implemented encryption, you are GUARANTEED to be "owned" by the first
sophisticated intruder who attacks your PC. If you _do_, at worst, you
will have the "something to hide" finger pointed at you; but it's still
very unlikely that any attacker short of the U.S. NSA's banks of
supercomputers, will be able to get at the actual evidence. The world
isn't perfect, so do what you can.
Keep all of this in the back of your mind as you read what follows.
First, we need to define some commonly used terms:
"Plaintext" -- the original, or unencrypted, version of data. This can be

anything on your hard drive, not just ASCII text; it can be a .JPG
graphics file, a folder, whatever, as long as you don't need some kind of
unencryption software and a key (see below) to access it.
"Cyphertext" -- the scrambled, secured, version of data which was
originally in "plaintext" format. The point here is that you need to have
the key and the encryption program to reverse the encryption process and
output a new copy of the plaintext version for your use.
One thing about cyphertext that I find is usually never mentioned in
discussions of this type is, whereas in most cases you can instantly see
what kind of data is contained in a plaintext file (e.g. you can see if
it's a Microsoft Word document, a .JPG picture file, a folder, whatever),
most modern encryption programs deliberately obscure, or can obscure, the
type of file, so it just looks like gibberish. (I strongly suggest,
incidentally, that if you are using an encryption program that doesn't do
this by default -- a good example of which is PGP which by default will
take the plaintext file "MYDOC.DOC" and encrypt it into "MYDOC.DOC.PGP"
-- you do it manually yourself by changing the file name to something
like "MYSOCKS.XLS" or something equally misleading. Advanced forensics
programs like EnCase have a limited ability to defeat this kind of
trickery, but it is still worth doing, since every little element in your
layered defence system makes the attacker's job just that one little bit
more difficult.)
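As a trivial illustration of the renaming trick (in Python; the file
names are of course hypothetical):
    import os
    # The encryption tool's output advertises both the original format and
    # the tool that produced it; the new name advertises neither.
    os.rename("MYDOC.DOC.PGP", "MYSOCKS.XLS")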
There is a specific reason why good encryption programs obscure the
original type of file; this is to defeat certain types of cryptanalysis
(see below) that use well-known characteristics of the internal formats
of some file types as a mechanism to try to guess the encryption key. If
an attacker knows that a secured file was originally in Microsoft Word
format, therefore, it is much easier for him to attack the encryption
because he knows that a certain number of bytes in the file is always the
Microsoft Word "header" and so on. If he doesn't know that it was a .DOC
file he has to guess at the file type, which as you can imagine is a much
more significant task.
UPDATE : To defeat attacks based on keyloggers, I now strongly recommend
the use of "keyfiles" under TrueCrypt. These are just what the name
implies : A file that is in effect one part of your password.
The idea is that when you ask TrueCrypt to mount an encrypted volume,
whereas with simple encryption, it would just ask you for a password,
this time, it not only requests the password but also the keyfile as
well. The TrueCrypt program then uses a certain number of bytes from the
first part of the keyfile (as of now this is 1024 bytes, or "one
kilobyte"), and adds this to your password, forming a "super password"
that, in my opinion, would be difficult for even entities like the NSA to
break.
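To make the "super password" idea concrete, here is a minimal sketch in
Python. Be clear that this is NOT TrueCrypt's actual keyfile-mixing
algorithm (which processes keyfile content into an internal pool); it
only illustrates the concept of combining the two secrets:
    import hashlib
    def super_password(password: str, keyfile_path: str) -> bytes:
        # Only the first part of the keyfile contributes to the secret.
        with open(keyfile_path, "rb") as f:
            key_bytes = f.read(1024)
        # Combine "something you know" with "something you have" into one
        # value that no keylogger can capture in full.
        return hashlib.sha256(password.encode("utf-8") + key_bytes).digest()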
Note that I said "difficult", not "impossible"; the NSA has other tricks
besides password hacking, to get at your secret data, but here again, the
objective is to slow them down, make them think twice about going after
you.
The great advantage of keyfiles is that (apart from the obvious one noted
above, that is, significantly better password security), even if your
opponent has installed a keylogger on your computer that secretly records
each and every keystroke (including your password!) that you type in, the
data in the keyfile does not pass through the keyboard buffer (the part
of the computer's RAM that handles key press and key data events), so it
remains immune from keylogging attempts... in other words, your opponent
could compromise your password, but he still could not decrypt your
TrueCrypt volume.
Sure, your attacker could simply use the keyfiles as you do (remember,
they're just regular files, they're not encrypted themselves), but
consider that he first has to know (a) what files are involved, (b) where
they are (on the file system), (c) what order they go in and (d) that
you're using keyfiles at all. This kind of thing is a forensics
investigator's nightmare. There are just too many variable factors to
make an attack practical, except for very large and specialized
organizations such as our friends in the U.S. intelligence agencies.
Keyfiles are a very important additional security factor, in these days
of "RAM chilling" attacks and possible NSA / CIA hardware keyloggers
being installed in all American-made PCs. In computer security, we call
the concept "something you _have_" (e.g. the contents of the keyfile) and
"something you _know_" (e.g. your password), as, together making up "twofactor security". Having either part (one of, the password or the
keyfile, but not both), is insufficient to compromise security.
There is also the very useful additional factor that the
"entropy" (randomness) of the first 1024 bytes of a keyfile is typically
much greater than the "entropy" (complexity) of any password that a
normal human being could remember.
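You can get a feel for this entropy difference yourself with a rough
Shannon-entropy estimate; a sketch in Python (the keyfile name is
hypothetical, and this crude per-byte measure is only indicative):
    import math
    from collections import Counter
    def entropy_per_byte(data: bytes) -> float:
        counts = Counter(data)
        total = len(data)
        return -sum((n / total) * math.log2(n / total)
                    for n in counts.values())
    with open("holiday_photo.jpg", "rb") as f:   # hypothetical keyfile
        keyfile_head = f.read(1024)
    # Compressed image data approaches 8 bits per byte; a memorable
    # password scores far lower, which is exactly the point made above.
    print(entropy_per_byte(b"TheFalconFlies"))
    print(entropy_per_byte(keyfile_head))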
Now there is one big drawback of keyfiles that you should be aware of
(VERY aware of... I have lost data big time to this issue!).
Specifically, IF THERE IS ANY CHANGE TO THE FIRST 1024 BYTES OF THE
KEYFILE AFTER YOU ENCRYPT A VOLUME WITH IT, KISS YOUR ENCRYPTED DATA
GOODBYE.
This might not sound like such a big deal, but consider that some kinds
of files -- everything from Microsoft Word documents (which change
"header" information with the current date and time, each time you open
them) to .mp3 sound files (which can be changed if you run a tagging
program over them without understanding what it's doing) -- can be changed
without you even being aware of this happening. And, of course, you could
simply do something stupid like forget which files are the keyfiles, what
order they go in, or you could just erase the keyfiles by mistake. In
which case, "bye-bye data".
There is really no fool-proof way to completely safeguard yourself from
the above data availability hazards, but here are some practical steps
you can take :
(1.) Use your operating system to mark the involved files as "read-only".
This will prevent normal applications from writing anything back to the
file image on the hard drive (you will get an error in whatever
application was trying to do so). The obvious issue here is, a forensics
investigator could then search for all "read-only" files and
significantly narrow down his keyfile search to only the ones that can't
be written to. This may be a risk that you would be willing to take, if
combined with some of the other steps outlined below. (A minimal code
sketch of this precaution appears after this list.)
(2.) Store the keyfiles on a separate storage medium, preferably one that
can physically be removed from the hard disk on which the actual
encrypted volume(s) are being kept. A USB key or SD card is ideal for
this purpose, as long as it isn't marked with something obvious like
"SECRET KEYS" and so long as it has other stuff on it to add "noise" to
the file directory picture. Even better, why not store your keyfiles
on-line, maybe as attachments to an e-mail to yourself? (Download them
from your AOL or Hotmail account as needed, use them to mount your
TrueCrypt volume, then delete them afterward.) Just make sure that your
on-line account doesn't expire, or... well, you should already know what
happens then.
(3.) Use files whose header information normally doesn't change. .JPG
format files and most other compressed graphics files are pretty good
here, as long as you don't edit them; you can also use regular ASCII text
files (don't open them in a text editor... ever!), specifically generated
keyfiles made for you by TrueCrypt (rename them so that they don't look
obvious)... really you can use anything as long as you test it to make
sure that it won't auto-change each time it is loaded into the "owning"
application that originally created this kind of file.
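As promised in point (1.) above, here is a minimal sketch (in Python; the
keyfile name is hypothetical) of two of these precautions: marking a
keyfile read-only so applications can't silently rewrite it, and
fingerprinting the security-critical first 1024 bytes so you can detect
any change before you trust the keyfile again:
    import hashlib
    import os
    import stat
    KEYFILE = "holiday_photo.jpg"   # hypothetical keyfile name
    # (1) strip write permission (works on both Windows and Unix-likes)
    os.chmod(KEYFILE, stat.S_IREAD)
    # (2) fingerprint the bytes that actually matter; store the result
    # somewhere safe and re-check it before every mount -- if it ever
    # changes, do NOT rely on this keyfile again
    with open(KEYFILE, "rb") as f:
        print(hashlib.sha256(f.read(1024)).hexdigest())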
Keyfiles : Don't leave home without 'em. They can save you when your
password just isn't good enough.
Layered Defence, Or, Your Opponent's Attack Sequence, And Why It Matters
------------------------------------------------------------------------
The term "layered defence" isn't popular just by accident; it's popular
because it works. This is a technique first (as far as I know) invented
by the military, and it basically describes a posture in which the
defender sets up a series of concentric (one inside the other, like one
of those old Russian dolls) protective measures that force an
attacker to overcome multiple barriers / threats, to get to the asset
that he's trying to attack. (You see this a lot in military air defence
systems, where an army will have "long", "medium" and "short" range
surface to air missiles, each with somewhat different characteristics; a
bomber plane that may be easily able to avoid the long-range SAMs, may
have a more difficult time with the short-range ones and vice versa.)
"Layered defence" is, in military jargon, more or less the direct
opposite of the "all-or-nothing" concept, in which a very strong outer
"shell" is constructed, but this powerful barrier is not backed up by
anything else. The obvious advantage of "layered defence" versus "all-ornothing" is, whereas under all-or-nothing, the outside barrier is very
strong, if it somehow is compromised then the attacker has free rein
thereafter, under layered defence, although the outer barrier(s) are
perhaps less robust than would be the case for all-or-nothing, the
consequences of a single protective measure failing, are far less serious
because the attacker would still have to defeat a secondary, a tertiary,
etc. defence, before arriving at the ultimate target.
The point here is that layered defences are usually a much better bet,
simply because they admit for the possibility of failure and try to
compensate for it, if it occurs.
A good example of "all-or-nothing" versus "layered defence" can be seen
as follows:
(1.) All-or-nothing: You defend your sensitive data by encrypting your
entire hard drive with a robust algorithm and a strong password, which
you have committed to memory in your head (and have not written down
anywhere). If either of these defences are compromised, an attacker then
gets complete, unrestricted access to whatever sensitive data is on your
hard drive. The reasoning behind this is, "if I use these measures, and I
choose my algorithm carefully, it should never be possible to gain
unauthorized access to my PC". (This assertion is more or less correct.
But what if the authorities simply torture you until you give them the
password, rather than have the rest of your fingernails pulled out? What
if they ship the whole hard drive off to the NSA for the encryption to be
broken with supercomputer arrays?)
(2.) Layered defence: You defend your sensitive data by establishing a
series of encrypted virtual volumes on the hard drive, some created with
PGP, some created with a weak, easily compromised commercial software
product and a very few -- these are the only ones with "real" sensitive
data -- created with hidden internal TrueCrypt volumes within
conventional TrueCrypt volumes. The "loss leader" volumes have visible,
descriptive file extensions (e.g. ".pgd") and they are filled with
personal files that you don't really want the world to see, but that you
can afford to have compromised by the police. The TrueCrypt passphrase to
unlock the outer / inner TrueCrypt volumes is so long and complex that no
human being can remember it, but it's in fact stored in a file on a Web
server somewhere out in cyberspace that only you know how to access, and
the file in which it is stored is itself encrypted with a human-
recallable password.
Finally, you used an unusual application -- say, a graphics program that
stores its files in a unique format -- to actually create and access your
sensitive data (so that, even if EVERYTHING in this scheme is defeated,
you still have one last, desperate defence against your attacker, namely
that he'll try to load it into his favorite .jpg viewer or Microsoft
Word, and just get gibberish -- this is, in military terms, called a
"last-ditch" or "terminal" defence).
I think you can see here that an attacker will have to guess correctly on
a number of different levels, in order to compromise your data. The
system isn't perfect (no system is), but it presents the attacker with an
extremely diverse array of misleading avenues to explore, in order to
find the target that he's really looking for. (Note that if you get
tortured, or are subjected to a lie detector test, and get asked, "do you
know the password to access this file", you can truthfully say, "no",
because it's stored on some other computer and you don't even remember
what it is.)
In order to help you secure your sensitive data, you have to learn to
think like the sophisticated, ruthless opponent who has just gained
physical control over your PC. Now, in saying this, I'm not suggesting
that you should or could try to guess his motivations; you can't, because
you don't think like a secret policeman. What I mean is, you should try
to put yourself in the attacker's shoes and try to imagine how he will go
about attacking your data and your digital defences.
I Think I Broke Your Crypto Key... And That Helps Me How???
-----------------------------------------------------------
There is also another little-talked-about aspect of cyphertext that I
want to bring to your attention -- pay attention here, because this is
one of those dirty little secrets that the cops and the spooks least want
you to know about.
The issue is, "what do I do when I break your encryption and find your
secret key". This SOUNDS like something that's pretty easy ("oh, well, I
just use your key to unencrypt all your data, you got pwned, dude") but
in fact, if you are careful about how you organize your cyphertext in the
first place, you can just give even a successful attacker fits. The
reason is that an attacker with an illegitimately derived key has to
first understand the nature of the data that he is trying to unencrypt,
and then has to have a program that conveniently enables this process.
For example, consider the following cases. (Note: I should point out here
that we are assuming that the attacker ALSO understands the algorithm
that you used to do the encryption, possibly because he tried thousands
or millions of variations of the key against each possible algorithm in
the process of breaking the key in the first place):
Case #1: You have encrypted a single, ASCII text file (.TXT) that the
attacker knows was originally an ASCII text file.
In this case the attacker has a pretty easy task to undertake, because
all he has to do is get any one of a wide variety of forensics programs
(EnCase will do) to use its built-in ability to decrypt the file by
running the key against the file using first the AES algorithm (see
below), then Blowfish, then Serpent, etc., until he sees something that
looks like English text in front of him. When he sees something in
English, he can stop because he knows he has the original plaintext.
Case #2: You have encrypted a single file, but this time the original
might not have been ASCII; it might have been a .DOC, it might have been
a .JPG, or it might have been something else altogether.
Now in this case, although the attacker can still use the techniques
shown in Case #1 above, his job has just grown much more difficult,
because the attacker has to multiply the number of algorithms against the
number of potential file formats... and THEN, the attacker's forensics
program has to display each potential output, until something that looks
like an original plaintext document shows up in front of the attacker
himself.
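To make the attacker's workflow in Cases #1 and #2 concrete, here is a
rough sketch in Python; decrypt() is a placeholder standing in for
whatever cryptographic library the attacker's toolkit provides, not a
real API:
    CANDIDATE_ALGORITHMS = ["AES", "Blowfish", "Twofish", "Serpent", "3DES"]
    def decrypt(algorithm: str, key: bytes, ciphertext: bytes) -> bytes:
        """Placeholder for the attacker's actual crypto library."""
        raise NotImplementedError
    def looks_like_english(data: bytes) -> bool:
        # Crude test: nearly all bytes are printable ASCII or whitespace.
        printable = sum(32 <= b < 127 or b in (9, 10, 13) for b in data)
        return printable / max(len(data), 1) > 0.95
    def try_key(ciphertext: bytes, key: bytes):
        for algo in CANDIDATE_ALGORITHMS:
            plaintext = decrypt(algo, key, ciphertext)
            if looks_like_english(plaintext[:4096]):
                return algo, plaintext
        return None   # wrong key, or plaintext isn't recognizable text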
Is this impossible? No; some modern forensics programs like EnCase have a
variety of tools to make this process more convenient, for example if
they use a given encryption key and given algorithm, and the first few
bytes of what is "unencrypted" in this manner looks like the file header
for a .JPG, a .DOC, etc., they will immediately bring it to the
attacker's attention or store it for later examination.
However, there are a number of practical limits to this approach,
particularly, the original file might simply have been in a format that
the forensics program doesn't yet understand or cannot easily display.
[For example, suppose that the original file was in Microsoft Visio
(.VSD) format. The forensics application has to have the Visio "engine"
within it, to take the objects that are in the .VSD file and put them on
a screen in a way that a human police officer, corporate forensics
officer or other human intruder, can recognize as meaningful data. These
types of file display / interpretation engines are often complex to
implement and are also frequently copyrighted or otherwise difficult to
implement from a legal point of view.]
Case #3: You have encrypted a data object, for example a file called
"X1aJnBA3.spl", that may be a single file of unknown format, or may be an
archive (e.g. a .ZIP, .RAR, etc.) of unknown format, or may be something
like an encrypted TrueCrypt or other security-specific container.
Now here, the attacker is up against an extremely difficult challenge,
even if he does, in fact, have the right key. The problem is, basically,
"how does he KNOW that it's the right key", and "even if he DOES
(somehow) 'KNOW' that it's the right key, how does he access the file
with the key and turn it into human-readable data".
This is a much more challenging task than most of the forensics software
manufacturers would have you believe, because if the attacker doesn't
know what kind of application created the file in the first place, and if
it is a type of file that requires a specific application (one that by
its very nature cannot easily be embedded in a forensics application like
EnCase, in the manner in which a simple ASCII, .DOC or .JPG file viewer
can), then the only real way for the attacker to access the
original data is for him to manually try various applications against the
"unencrypted" file, one by one, until he sees something that looks like
valid output.
For example, suppose that the attacker's forensics program tells him,
"Bingo, you've got a match, you've broken the key for file
'X1aJnBA3.spl'", but the program cannot tell what kind of file that it
is. The attacker must, for each application that he suspects may have
been used to create the file, (a) rename the file with the extension
(.DOC, .ZIP, .JPG) appropriate to the owning application and then (b) try
to load it or access it with that application.
Alternatively, the attacker can load the file into a hex editing program
to look at each individual byte, or sequence of bytes, and then try to
recognize something that gives the attacker an idea as to what kind of
file it really is. For some file types (e.g. Microsoft Word, etc., again
assuming that the original creator of the file has not used some kind of
secondary internal encryption, however weak, to further obscure the
file's internal structure), this isn't a particularly difficult task; for
others (for example graphics file formats, especially the less well-known
ones) it is possible but not nearly as easy, and it can be very time-
consuming and frustrating if the file format is an obscure one or if it
doesn't contain obvious give-away strings (e.g. "FMT:WORDPERFECT5.1")
within the file's "header" area. The point is that
it is NOT easy to do this without the active or passive co-operation of
whomever originally recorded the data... which is another good reason to
never, ever, communicate in any way with a law enforcement official who
has you in custody, about any characteristic of your "controversial" data.
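Here is a rough sketch, in Python, of that header-sniffing step; the
magic-number signatures below are real and well known, while the file
name is just the hypothetical example from Case #3:
    MAGIC_NUMBERS = {
        b"\xff\xd8\xff": "JPEG image",
        b"\x89PNG\r\n\x1a\n": "PNG image",
        b"PK\x03\x04": "ZIP archive (also .docx, .odt, etc.)",
        b"%PDF": "PDF document",
        b"\xd0\xcf\x11\xe0": "legacy MS Office (.doc / .xls) container",
    }
    def sniff_file_type(path: str) -> str:
        with open(path, "rb") as f:
            head = f.read(16)
        for magic, description in MAGIC_NUMBERS.items():
            if head.startswith(magic):
                return description
        return "unknown format -- manual hex inspection required"
    print(sniff_file_type("X1aJnBA3.spl"))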
Is deriving the type of the file, using either of the above techniques,
impossible? No; a determined attacker with a lot (and I do mean a LOT) of
time on his hands may eventually find what he's looking for.
But in practice, I have found that most attackers are equipped only to
test against the most well-known file formats and then only if there
aren't any secondary defensive measures (such as "hidden" TrueCrypt
volumes, double-encryption, misleadingly renamed files, double-container
archives and so on; all of these will be discussed below) affecting the
targeted files.
The point here is that a few simple defensive measures like the above
will enormously complicate and prolong the attacker's job, even if you're
unlucky or stupid enough for him to have compromised your keys.
"Cryptographic Algorithm" -- This is a very complex mathematical
equation that is typically used with a "key" (see below) to scramble /
unscramble plaintext in to / out of cyphertext.

For example, if I am using the very simple cryptographic algorithm by


which the place-value in the English alphabet is incremented by one for
each letter in a piece of plaintext:
"Hello There"
and I apply the algorithm, "the key is always 'increment by one'" to it,
the results are:
"Ifmmp Uifsf" (this is the cyphertext version of "Hello There").
And then obviously I know that if I want to decrypt the cyphertext "Ifmmp
Uifsf", I just reverse the algorithm and decrement each letter's place in
the alphabet by one.
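For the curious, here is that toy algorithm written out in Python, mainly
to show how trivially reversible it is:
    def shift(text: str, delta: int) -> str:
        # Shift each letter's place in the alphabet; leave the rest alone.
        result = []
        for ch in text:
            if ch.isalpha():
                base = ord("A") if ch.isupper() else ord("a")
                result.append(chr(base + (ord(ch) - base + delta) % 26))
            else:
                result.append(ch)
        return "".join(result)
    print(shift("Hello There", 1))    # -> "Ifmmp Uifsf"
    print(shift("Ifmmp Uifsf", -1))   # -> "Hello There"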
There is something very important about cryptographic algorithms that I
need to explain here: DON'T EVER, EVER, EVER THINK THAT YOU CAN INVENT
ONE THAT A GOOD ATTACKER CAN'T BREAK IN 5 MINUTES. You'll just have to
trust me on this one; a "robust" crypto algorithm is the product of
fantastically advanced, complex mathematics that the average user can't
have a hope of even understanding, let alone coming close to duplicating
on his or her own. This is rocket-science stuff; in order to be accepted
in the computer security community, any new algorithm has to be subjected
to over a year of determined attack by professional cryptanalysts
(these are the guys whose only job is to break encryption schemes; they
are almost all genius-level mathematicians; have a look at "A Beautiful
Mind" with Russell Crowe if you want to find out all about them) and has
to survive every such attack.
Please believe me when I say that the average person's "home-brewed"
encryption system has about as much chance of surviving so much as an
hour's attack by these guys, as the average person would have of
surviving an hour in the ring with a professional heavyweight boxer.
It's fine if you get interested in designing crypto systems and you use
your own as a supplement to a professionally-designed one, that is, you
use it to "double-encrypt" documents that have already been encrypted by
a good, industry-standard crypto algorithm; but using your own homebrew
scrambling algorithm as a SUBSTITUTE for the latter (as your only
encryption system), is a recipe for certain disaster, the second that you
get attacked by a sophisticated opponent. Just don't do it, O.K.?
Now, implicit in all the above is the obvious question, "what crypto
algorithm(s) should I trust"? This is the subject of religious wars all
over the IT security industry, and you have to appreciate that it is a
moving target simply because as computer CPUs (the speed at which a CPU
can do mathematics is a very important factor in how quickly it can be
used to "crack", that is "find out without legitimately knowing it in the
first place", an encryption key) get faster and as better cryptanalytic
attacks are found, the algorithms used to encrypt data also have to adapt
and become more robust. None the less, for at least the next 5 years or
so, I am quite certain that the following algorithms, again, IF PROPERLY
IMPLEMENTED BY THE ENCRYPTION PROGRAM, will provide a substantial amount
of security:
(a.) The Bruce Schneier algorithms: Blowfish and Twofish. Schneier is a
world-renowned encryption expert and is the author of several books on
the subject. He is also a strong proponent of civil rights and data
security, so it is unlikely that he has voluntarily back-doored his
algorithms to suit the intelligence community, and anyway these have been
subjected to a great deal of the kind of third-party peer review that I
mentioned above. Blowfish and Twofish also have the significant advantage
that they are supported by most Open Source and other encryption programs.
(b.) AES (the Rijndael algorithm): "AES" stands for "Advanced Encryption
Standard", which is the U.S. government's replacement for the now-obsolete
"DES" ("Data Encryption Standard") algorithm of the 1970s. It is based on
the "Rijndael" algorithm that won a contest against a number of other
professionally-designed crypto algorithms in the early 21st Century
(including, incidentally, Schneier's Twofish).
While the general consensus in the IT security community is that AES is
secure -- because it has been peer-reviewed and attacked by a number of
independent experts -- recently there have been allegations, including by
Schneier himself, that there are inconsistencies in the way in which the
Rijndael algorithm has been implemented within AES, that raise some
disturbing possibilities of a secret NSA backdoor. Personally I think on
balance, these are false alarms but you should always be careful with any
standard that is substantially controlled by the U.S. government.
(c.) IDEA: This was an earlier, European-based crypto standard that
lately appears to have been dropped from many freeware crypto programs
due to patent encumbrance issues. None the less it is still a viable one
but for my part I wouldn't use it. Note that IDEA has been removed from
modern versions of TrueCrypt because of the patent problem.
(d.) Triple-DES: This is basically an implementation of the now-obsolete
DES standard, but to improve robustness against hostile cryptanalysis it
performs three passes of DES encryption on a given piece of plaintext as
opposed to DES' one.
For the next few years 3DES (as it is sometimes known) will still be
secure, but it is much slower than Blowfish, Twofish or AES; this
can be a significant issue on older computers or where really large
amounts of data (e.g. encrypted volumes) are a concern. I can't think of
a good reason to use it in view of the availability of other algorithms
but it's still viable.
(e.) Serpent: This was another contender in the AES competition (which
was won by Rijndael). It is a robust, fast, modern algorithm that is a
viable alternative to conventional AES or the Schneier systems. Use it if
you want to, it's a good choice.
(f.) RSA (and the related RSA Security ciphers, e.g. "RC4", "RC5"): This
is a commercial, proprietary family of algorithms developed by RSA
Security in the United States and is often found deployed for
applications like providing the encryption functions for X.509 digital
certificates, for Secure Sockets Layer Web page encryption and so on. RSA
is slower than most of the ones I described above and was long patent-
encumbered, so it is not always available in Open Source implementations,
but it is still robust and has a good track record.
Again, this would be a good choice except that Blowfish, Twofish, Serpent
and AES are all free and are faster, so all things being equal I wouldn't
use RSA unless my encryption program didn't support any of these other
ones.
You should stay away from any algorithm that isn't on the above list. In
particular, there are two extremely dangerous situations from a data
security point of view:

DES -- As explained above, this is an obsolete standard that can be
broken quite easily by a modern computer CPU with the right software. The
reason I am going out of my way to mention it is that you will still see
an amazingly large amount of so-called "security" software, or "security"
features in other software, that claim "we encrypt things with DES so
it's secure". Well, that may have been true in 1990 but it sure isn't
today. If you know what's good for you, you won't use DES.
Proprietary encryption algorithms -- Another trend, which we see far too
often and which just leaves us IT security people shaking our collective
heads, is the situation where a so-called "security" program, or a
"security" feature of some other program, uses a secret, proprietary
encryption algorithm to "secure" your data. Yeah, RIGHT. 99.99% of these
systems are complete crap that can be broken by an experienced attacker
in a minute to an hour, with the right tools.
There is a specific meaning to this cautionary note, as well -- NEVER,
EVER, EVER, RELY ON MICROSOFT'S SO-CALLED "ENCRYPTION" SYSTEMS TO SECURE
YOUR DATA. Modern versions of Microsoft Word, Excel, Outlook and so on
all include options to encrypt the .DOC, .XLS, .PST files that these
applications use; and I can tell you from personal experience that the
"encryption" that these programs employ, is next to useless from a
security robustness perspective. (There is, surprise surprise, also a
very good chance that all of these Microsoft proprietary security
algorithms have been back-doored so that U.S. law enforcement can easily
break their encryption.)
In general, never, EVER rely on a third-party application's assertion
that it has "unbreakable, proprietary, secret encryption algorithms". In
security talk, we can translate the preceding phrase as meaning, "no
encryption at all". Get the idea?
"Key" -- A "key", in cryptographic jargon, is a long, obscure number
that, when used with a good encryption program and good cryptographic
algorithm, is used as a filter by the encryption program to apply to
plaintext and scramble this, turning it into cyphertext.
I am not going to get into the usual long-winded dissertation here about
"good password practice", except to say a couple of things.
One, don't be stupid and use something that's easy to guess -- for
example, your pet's name -- as a password for any data that you expect to
withstand so much as a few minutes' determined attack by an experienced
opponent. (Want to find out how good password-cracking tools are, in the
hands of a knowledgeable cop? Check this out: http://www.schneier.com/
essay-148.html).
Related to this, there is a specific risk that you should be aware of.
Any intelligent forensics investigator who is trying to crack passwords
that for whatever reason you haven't given him, will first use his
forensics tool (EnCase will do nicely) to create a "dictionary" of words
found on your hard drive; what these tools do is cruise through the hard
drive and patiently add each and every intelligible English word into a
special dictionary that is then used by the password cracking program to
attack the encryption of whatever files the attacker thinks are secured.
Now, this can create some very large dictionaries -- try to imagine, for
example, if your PC is loaded with ASCII text versions of books like "War
And Peace" or "David Copperfield" -- but, and here's the important thing,
even a 300,000 word dictionary created in this way is a far, far more
efficient source of potential passwords than the cracking program would
otherwise have, were it simply picking candidate passwords out of thin
air.
This is because experienced forensics attackers know that most people use
something familiar to them -- a pet's name, the name of a child, the name
of your favorite rock group or football club, some cute phrase like
"TheFalconFlies", etc. -- as a password, and that most people pick this
credential out of an existing document if they can't get it out of their
own imagination.
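A crude sketch of this dictionary-building trick, from the attacker's
side (in Python; try_password() is a placeholder for the cracking tool's
decryption attempt, and a real forensics suite is far more thorough):
    import os
    import re
    def harvest_words(root: str) -> set:
        words = set()
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                try:
                    with open(os.path.join(dirpath, name), "rb") as f:
                        text = f.read().decode("latin-1")
                except OSError:
                    continue
                words.update(re.findall(r"[A-Za-z]{4,20}", text))
        return words
    def try_password(candidate: str) -> bool:
        """Placeholder for the cracking tool's decryption attempt."""
        raise NotImplementedError
    # for word in harvest_words("/mnt/suspect_drive"):
    #     if try_password(word):
    #         ...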
Here again we see one of the most often-encountered stories in the
failure of digital data security, that is, "the tools work fine, but the
user implemented them in a way that makes it easy for a knowledgeable
attacker to bypass the tools via social engineering". The attacker isn't
going to go after the part of your encryption defences that's hard to
attack (e.g. the cryptographic algorithm); he's going to attack wherever
he thinks the weakest link is, and that's all too frequently the
carelessness with which you picked your password.
The point is, if you have left your password in plaintext (unencrypted)
form ANYWHERE on your hard drive -- no matter how out-of-the way that
place might seem, like, say, as the 3567th word of the letter you wrote
to Uncle Achmed on your last trip to Mecca -- the forensics program WILL
find it and WILL (successfully) use it to magically decrypt your
supposedly "secure" files. So don't, under any circumstances, EVER, leave
your passwords unencrypted, on any storage media that the police might
get their hands on. Doing so is just as good as leaving everything
unencrypted in the first place.
Two, and this is probably the most important thing, if you don't write
down your password anywhere (which is good practice if you want to keep
your data secure; keep in mind that when the black-suited SWAT team kicks
in your door, they are going to go over everything in your house that
could remotely give them a clue as to what your password is... trust me,
if it's written down, they'll find it), MAKE SURE THAT YOU PICK A
PASSWORD THAT YOU CAN AND WILL REMEMBER. AND USE IT TO DECRYPT YOUR DATA
EVERY SO OFTEN, SO YOU DON'T FORGET IT.
The point here is that, if you look at the risks to your data that are
actually likely, as opposed to just the risks to you, you are far more
likely to lose all of your precious data by simply forgetting your
password than by any of the other negative things described elsewhere in
this document happening to you, personally.
Unfortunately, I have had exactly this (i.e., losing access to my
encrypted files) happen to me, more times than I'd care to admit; but I
am able to justify this by saying, "it's better that I lose some of this
data, once in a while, due to my own forgetfulness, than to suffer the
possibly far worse consequences of having it divulged to an attacker".
Your own way of balancing these two demands, that is, ease of use vs.
security, may be different.
Choose the balance that's right for you... just don't do something stupid
like choosing "Password" or "Secret" for your password.
Most people confuse the term "key" (as described above) for some
encrypted data with the term "password" (what you, the human being, enter
to access the encrypted data; you enter this either for access to the
encryption / decryption program, or for access to a particular encrypted
file), but while this is a convenient description it is in fact wrong.
Most modern encryption programs run your password through a series of
sophisticated "transforms", which are mathematical and cryptographic
operations that make your original human-entered password greatly more
difficult to "crack" or "reverse engineer", before they actually use the
resulting combination of numbers and letters to encrypt data.
For example, let's consider a very simple "transform", which simply adds
spaces to your password, with an increasing number of spaces after every
real letter in the password:
For your password, you enter: IAmJohn
So after the transform, it would be: I A  m   J    o     h      n
The point here is that while your password is the simple "IAmJohn", the
key -- that is, the thing that the encryption program actually uses to
scramble your data (along with, obviously, the very sophisticated
transforms that are built into the encryption algorithm, be it
"Blowfish", "AES", etc., that is being used in this case) -- is the much
longer, less predictable "I A  m   J    o     h      n" version.

Since the encryption program "knows" what its own transforms are, it can
easily reverse the process so that you can get your data back by it being
decrypted. (Note: In reality, the transform process is much more complex
than I am showing above; I have described it this way to minimise the
amount of techno-babble, here.)
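To make this concrete, here is the toy transform in Python, followed by
the kind of transform real programs actually use: a salted, iterated key
derivation function. PBKDF2 is shown only because it is in the Python
standard library; the parameters are illustrative and are not those of
any particular encryption product:
    import hashlib
    import os
    def toy_transform(password: str) -> str:
        # the i-th letter is followed by i+1 spaces -- a demo, NOT security
        return "".join(ch + " " * (i + 1) for i, ch in enumerate(password))
    print(repr(toy_transform("IAmJohn")))
    # What real programs do instead: derive the key by hashing the
    # password with a random salt, many thousands of times over.
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", b"IAmJohn", salt, 200_000)
    print(key.hex())   # this, not "IAmJohn", is what encrypts your data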
Dirty Little Encryption Secrets
-------------------------------
If encryption is, or can be, so robust, then why, do you think, are the
authorities regularly able to get access to the supposedly "secured" data
on their victims' computers? The reasons for this are very poorly
understood, but knowing the past history on this front is critically
important if you want your secured data to stay secure for more than a
few minutes in the tender hands of an experienced attacker.
Here are some of the most commonly heard stories about how, despite the
possible presence of encryption, the authorities got the goods on the
"perps":
(1.) The data wasn't encrypted, at all; that is, the victim "didn't think
that anyone would find that three level deep folder I had named,
'kyddy_pwrn'".
I'm not kidding about this; you hear it all the time. The jails are full
of stupid criminals, both high-tech ones and otherwise. (I especially
liked the story of the guy who -- this is God's own truth -- phoned up
his local police precinct in the southern United States, to report "some
other dude ripped off my hundred dollar bag of cocaine, can you get it
back for me please?"; the good news was, the cops were indeed able to
track down his stash of illegal drugs and arrest the other low-life who
stole it from the phone caller, but the BAD news was... well, I'm sure
you get the point.)
Another great story, which is referenced in one of the above video links,
is how the American cops routinely "ask the defendant to write a letter
of apology to the victim of a crime". Stupid criminals (or, innocent
people that the cops want to pin the charge on) will routinely comply
with this kind of request; of course, the second that the ink is dry on
the paper, the "apology letter" gets waved in front of the jury or judge
as a "signed confession".
(2.) A much more insidious variation on (1.) is, "gee, I THOUGHT that I
had encrypted it, what went wrong?". This story gets heard with certain
poorly implemented or inadequately tested encryption programs (or data
sanitization programs) that a user naively trusted to do what the
programs advertised that they could do.
Now, let's use a little common sense, here: if you were working on a
Microsoft Word document for (say) eight hours straight, and then hit the
little "Save" icon, wouldn't you check on your computer's hard disk,
wherever you thought you had saved the document, to see if it was, in
fact, recorded as being there? Would you just blindly trust the MS-Word
application to "do what it said it would do", or would you check, first,
before closing down the word processor, entirely?
Now, try to imagine: you are using a security program whose failure, IN
ANY WAY, HOWEVER MARGINAL, could expose you to far more serious
consequences, ranging from social ostracism at the low end, to long jail
terms, to even (in some societies) death. Why on Earth would you use
something like that on your "confidential" data, without carefully
testing, first, that:
(a.) The encryption program, actually encrypted your data so that it
can't be recovered without the appropriate key, and didn't leave any part
of your original, plaintext data around for an attacker?
(b.) The data sanitization program, actually wiped your data, so that it
can't be recovered, period?
It totally escapes me how end users just "trust" security applications,
particularly programs that haven't been subjected to robust peer review,
with their critical data, without ever spending so much as a good
minute's worth of checking to see if the security program is working as
advertised.
Now, the problem here is, it can be quite difficult to properly test
security programs, if you aren't a trained observer and especially if you
don't have a lot, and I DO mean a lot, of spare time on your hands to
check out each and every nook and cranny in which a defective security
program may have leaked information. I actually do check out most of my
applications (a "hex editor" program can be very useful here, so you can
load the supposedly encrypted version of your file into the editor and
then examine what you see for obvious signs of poorly, or non-, encrypted
content), but this may not be practical for less technically oriented
users to do.
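A crude, scriptable version of that hex-editor sanity check, in Python
(the file name is hypothetical): scan the supposedly encrypted output for
long runs of printable text, which are a red flag that plaintext has
leaked through:
    import re
    def find_plaintext_runs(path: str, min_len: int = 12):
        with open(path, "rb") as f:
            data = f.read()
        # runs of printable ASCII long enough to look like leaked text
        return re.findall(rb"[\x20-\x7e]{%d,}" % min_len, data)
    leaks = find_plaintext_runs("secrets.tc")
    for run in leaks[:20]:
        print(run.decode("ascii"))
    # A healthy ciphertext should produce few or no hits here.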
So, as the best advice that I can give you here, all I can suggest is
that you use well-known programs -- for example, PGP, TrueCrypt, and so
on -- that have a good reputation. Look especially for Open Source based
programs (fortunately, in the security world, there are a lot of these),

because when the source code is available for a security program, it
becomes instantly, and painfully, obvious if a developer has screwed up
and created an application that basically doesn't work. Security
applications that don't provide their source code, in my view, have 2
strikes against them right from the start; I'd strongly suggest that you
stay away from them except under unusual circumstances.
Another good technique is to subscribe to security mailing lists or just
to periodically surf to security sites such as Bruce Schneier's blog
(http://www.schneier.com/blog/), so you can be up to date if a serious
vulnerability is discovered in a security program that you use yourself.
(If that happens, needless to say, don't just sit there and wait to be
owned by the next cop that breaks down your door and goes over your
"encrypted" files with the just-discovered exploit; switch to a system
that hasn't been compromised.)
They Broke In To One Of My Encrypted Containers. All Is Lost -- Or Is It?
-------------------------------------------------------------------------
Another dirty little encryption trick, and it's an extremely effective
one that far too few people take advantage of, is the "loss leader"
approach. What this basically involves, is creating a number of different
encrypted objects (say, five or six different strongly encrypted
TrueCrypt virtual volumes, plus two or three more with weak, easily
breakable encryption, preferably created with some poorly-crafted
security program other than TC), then putting the REAL sensitive
information into the robustly protected containers, while depositing
something potentially embarrassing, something that you wouldn't
ordinarily want a stranger to know about you (for example, a list of your
credit card numbers, Web search results for "hot babes", controversial
posts to blogs... anything that isn't technically illegal, wherever you
may live), into the "loss leader", weak encryption containers.
When the attacker gets physical control over your PC, to his delight, he
manages to overcome the thin layer of protection on the "loss leader"
containers and starts to go through what's in them... only to find that
the data therein, while perhaps something that you didn't want anyone to
know, is very far from what would let the police easily convince a jury
or judge that you're a "terrorist" / "pervert" / "drug dealer" / "money
launderer" / whatever. (Note, however, that faced with a situation like
this, many police departments will just cheat and manufacture the
evidence that they think they need, lying about the fact when they get to
court. Nothing's perfect...)
The point of this strategy is to play on the normal human psychology that
causes an attacker to give up, when he (incorrectly) believes that he has
found "everything that the little creep tried to hide". The human mind
has a hard time dealing with the fact that there might well still be even
more interesting stuff hidden elsewhere on the target's hard drive, but
that it might be even more difficult to attack than the interesting, but
not incriminating, things that have already been found. When one thinks
of this it only makes sense; when do you stop, if you have already found
what you think you were looking for?
There is another aspect of this approach which is also important to know
about: Even for the robustly encrypted containers, you should, if
possible, try to vary the passwords, and / or (as long as you always
choose between robust, well-implemented ones), the encryption algorithms,
used for each container.

Now, there is an obvious shortcoming to this approach, namely normal
human memory, which has a very limited ability to remember complex
passwords. But if you can consistently remember even a few different
passwords, doing this can tremendously complicate the attacker's job of
breaking your encryption (because a successful attack on one container
will now not automatically give the attacker the 'magic key' to unlock
all the other containers); also, in most jurisdictions, being caught with
only "a little" controversial or illegal content can make a huge
difference to the sentence you might get, compared with being caught with
"megabytes and megabytes" of exactly the same types of incriminating
documents, pictures, movies or Web links.
In other words, limiting the attacker's ability to easily compromise your
entire encrypted infrastructure, even if he manages to compromise one
component of that infrastructure, strongly reduces your risks at every
step along the way. For this same reason, you should never use the same
password for access to different parts of your security system; for
example, don't use the same password for your encrypted TrueCrypt
containers, as you do for administrative management of your home Wi-Fi
router. (Because, if you do this, a weakness in the protection of this
crucial credential at any point, will instantly and fatally compromise
the protections of every other part of your security infrastructure,
whether or not these other components are in fact well-implemented
themselves.)
Full-Disk Encryption (FDE) -- Is It For You?
--------------------------------------------
Lately, in response to a number of hair-raising incidents where
government or corporate laptop computers containing large amounts of very
sensitive information (for example "personal contact details of everyone
using the U.K.'s Health Service") were lost, possibly releasing this
sensitive data to crooks or whomever, there has been a big push in the
public and corporate sectors to implement what's called "FDE" or "Full
Disk Encryption".
This is a security technology that does more or less exactly what the
name implies -- that is, it encrypts EVERYTHING -- most of the boot
sector, the swapfile, all your data, etc. -- from the time that the PC
first starts up. This is in contrast to the types of security tools that
I have discussed elsewhere in this document, because these work at the
individual file / folder / partition level. Usually, FDE prompts the user
for a password (which can either be the same as the operating system or
user ID password that you would use in any event, whether or not FDE is
running, or it can be a secondary password specific to the encryption
subsystem) before the computer's OS is up and running.
Obviously, if you don't have the correct password, you can't use the
computer; this is sort of the same as with conventional file / folder /
partition encryption, with the very significant difference that in the
latter case an attacker may still be able to get enough access to the
computer's hard drive (typically by booting the computer from a
forensics-based Live CD) to then examine and record anything that wasn't
specifically encrypted (for example the swapfile) on the hard drive. With
FDE, this avenue of approach is theoretically closed off, because the
entire hard drive appears to the forensics CD OS as an incomprehensible
jumble of encrypted characters.

The huge advantage of FDE is, of course, that it is (supposedly)
completely foolproof in how it operates... everything, every last bit and
byte on your hard drive, is encrypted, safe from the prying eyes of the
police when they kick down your door, that is at least if you have the
brains to pull the power plug from your PC when they do.
So, is FDE the "Holy Grail" of data security? Uhh... no. Read on for some
of its practical strengths and weaknesses.
FDE Advantages
--------------
(1.) Simple -- FDE works in the background; it's "always on". In theory
you never have to remember to encrypt, or decrypt, a file or folder, ever
again. (But see below for why this isn't necessarily that good an idea.)
(2.) Protects entire hard drive -- This speaks for itself. As we have
seen elsewhere in this document, because of the habit of modern operating
systems to "leak" sensitive or forensically revealing data in obscure
places, particularly the swapfile, it can be very difficult to ensure
that you really have deleted / encrypted everything that's sensitive. FDE
largely relieves you of this responsibility because even if the
operating system does leak something, it ends up on a hard drive that's
already encrypted. So in this respect FDE would have to be considered an
attractive option for less technologically advanced users or users who
just don't want to put up with all the intricacies of data security best
practices.
(3.) Relatively immune to data leaks caused by sudden system shutdown --
This is the "kick in the door" scenario. Even if you turn off a normal PC
in the middle of a "sensitive" data session, because of the issues
identified in (2.) above, when the police turn it back on, there may be
plenty of incriminating evidence in temporary data files to cause you
trouble. With FDE, this is again largely impossible, since whatever did
get caught in mid-session would still be protected by at least one layer
of encryption.
So does that mean that FDE is perfect? Not really. Here are some
disadvantages:
(1.) Slows down PC -- This is more of a consideration for users with
older computers and hard drives. FDE isn't "intelligent" encryption; it
encrypts everything, whether or not the data in question is really
"sensitive", when it writes to the hard drive; and, conversely, it has to
decrypt everything that is read off the hard drive. This means, for
example, that when you play a big media file (say a MPEG-4 rip of a DVD),
every last bit and byte is being decrypted by the computer's CPU before
it can be used. On a new, fast computer, the overhead generated in doing
so is occasionally irritating but most of the time you will never notice
it. On any computer made before, in my view, about 2004, you are likely
to find FDE rather frustrating, as it can slow some types of operations
to a crawl.
Note the other implication of this: Under FDE, computers typically take
significantly more time to start up and shut down. If the amount of time
that you have in which to complete a "sensitive" data access session is
limited, you may want to take this factor into account, because it can
be, to say the least, disconcerting to be sweating bricks, waiting for
your PC to shut down, while your friendly old Aunt Hilda is demanding to
be let in at the front door.


(2.) Danger of data loss -- By far the most serious shortcoming of FDE,
paradoxically, comes about from how well it works. That is, the
encryption on most FDE schemes (see below for a caveat) is basically
unbreakable. This is great as long as the hard drive and the PC are
working fine. But what if, one day, the hard drive develops a "bad
sector"? (This can happen in all sorts of surprisingly common ways; one
frequently encountered one is when the AC power suddenly cuts out, while
the write head of the hard drive is attempting to record data on to the
hard drive's surface; the head can "crash", leaving corrupt, half-correct
data wherever it came to land when it no longer had power to enable it.)
Depending on how extensive the media failure is, this can render an
FDE-secured hard drive completely inaccessible even if you do have the right
password. Of course, a conventional, unencrypted drive is subject to the
same risks, but the consequences of them are much lower since it would
still be possible to recover much of the hard drive's other data. With
FDE, if your hard drive dies, you may be totally out of luck.
(3.) Possible backdoors -- Full Disk Encryption is currently still in its
infancy in terms of market adoption, and virtually all of the FDE systems
available for Microsoft Windows are commercial programs developed in the
United States. This raises the very real possibility that the
manufacturers of these programs (for example Check Point, WinMagic etc.)
have been forced to introduce a law enforcement backdoor into their
encryption systems under the good old U.S. PATRIOT Act. On the other side
of the pond, any U.K.-based systems will have been compromised under
RIPA. (Thanks a heap, Tony.)
In the case of Linux, the situation is a bit better, because the backdoor
issue is less likely there, but the problem in the Linux world is that
there is no commonly accepted FDE standard in the way there is for less
ambitious encryption standards like CryptoAPI, encrypted "loop" file
systems and so on. This raises the issue that if you update your
FDE-enabled Linux system to a new kernel at some time in the future, you
might lose all access to your FDE drive if the new kernel doesn't
completely support the FDE method that you used when you first encrypted
the hard drive.
(4.) Password changing problems -- If you use "conventional" file /
folder encryption and have reason to believe that your password may (a)
have been compromised or (b) is just inadequate, you can always create a
new encrypted container with a new password, open up the old container,
copy all your "sensitive" data to the new container and then close and
delete the old container. This is very difficult with FDE because if the
entire hard drive is encrypted with a single password, where do you
temporarily store your "sensitive" data while you change your password
and the FDE program goes about re-encrypting the whole hard drive with
the new password?
Incidentally, note that inherently, FDE requires that you use a single
password to encrypt the entire hard drive. Thus it's an "all or nothing"
proposition -- if you use a weak password, or if an intruder manages to
trick or coerce the password out of you, and you have no other secondary
form of data encryption in use, once the attacker has the hard drive
access password, you're toast -- he has unlimited access to everything
that you might have wanted to keep secret. In contrast, with conventional
file / folder encryption, you can (assuming you have a good memory...)
employ a number of different passwords, one for each encrypted object, so
that a successful attack on one won't automatically lead to a successful
attack on all the others. (See comments for the "loss leader" approach.)
(7.) Cost -- This is not an issue in the Linux world, of course, but
since most of the Windows-environment Full Disk Encryption systems are
for-profit, commercial products, the cost to implement them can be a
deterrent for the budget-conscious, especially if you have more than a
couple of computers that you intend to protect in this manner.
(8.) Backups have to be encrypted anyway -- This is a "sleeper" issue,
not only with FDE but with conventional encryption, as well; only, the
issue is more serious if you are using FDE as your primary, or only,
method of static data security. It comes about from the fact that it is
the height of idiocy (but you would be amazed at how frequently this
error shows up in the day to day world of static data protection, both at
the consumer and corporate levels) to robustly encrypt the original copy
of a sensitive data object, but to then allow a backup copy of precisely
the same thing to be made without any protection at all.
Suppose, for the sake of discussion, that you have encrypted your entire
hard drive with FDE, but that you then use something like Apple's "Time
Machine" system to do regular backups of important files, to an external
hard drive. Or, suppose you even do it manually, just by copying the
involved files to the USB or Firewire hard drive. Unless the destination
folder / directory where the original file will be copied to, is itself
robustly encrypted, then the instant it shows up on the backup medium
(whatever that is -- it could be an external drive, a tape, a DVD+/-RW, a
Flash USB key, another computer on your home LAN, an on-line Web-based
backup service... anything), THE FILE IS IN PLAINTEXT, COMPLETELY
UNPROTECTED FORMAT.
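To make the point concrete, here is a minimal sketch of encrypting a file
BEFORE it ever touches the backup medium. It is written in Python and
assumes the third-party "cryptography" package; the file names and the
backup mount point are hypothetical, and any robust encryption tool you
already trust will do the same job:

# Encrypt on the FDE-protected drive; only ciphertext reaches the backup.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # guard this key carefully -- and do NOT
cipher = Fernet(key)          # store it on the backup medium itself

with open("plans.txt", "rb") as src:                   # hypothetical file
    ciphertext = cipher.encrypt(src.read())

with open("/mnt/backup/plans.txt.enc", "wb") as dst:   # hypothetical mount
    dst.write(ciphertext)

Restoring is symmetrical (cipher.decrypt(...)); the point is simply that
the plaintext never appears on the backup medium.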
Psychologically, FDE is subtly dangerous here because, unlike more
conventional forms of encryption (say, TrueCrypt-based virtual volumes),
it isn't "in your face"; it works silently in the background and doesn't
make you conscious of the fact that you have to consider the security of
your data wherever it is transmitted or stored. Unless each and every
place where you might store a sensitive data file is itself protected by
either FDE or some equivalent form of "always on" encryption -- a near
impossibility, considering the very wide variety of storage media
available to users these days -- you have to either:
(a) Leave a single copy of the sensitive file on the FDE-protected hard
drive and never back it up, to any different medium (risky from a data
availability point of view);
(b) Ensure that the recipient end of the backup process has its own,
robust form of encryption (a perfectly valid idea; but if you have to do
this, then what is the value of FDE? you might as well use the other form
of encryption at both ends);
(c) Periodically back up the ENTIRE FDE-protected hard drive -- each and
every bit and byte of it, starting with physical hard drive sector "0,0"
and continuing to the very last bit on the hard drive -- to some other
media (encrypted or not); this requires very large amounts of backup
media space, and it's clumsy and has risks all of its own (when and if
you have to restore, is the FDE system going to let you then access the
hard drive? remember, the FDE software can't tell whether you are "you"
or a hostile attacker; it is designed to be hard to restore to a
different hard drive -- if it weren't, it would be much less secure).
(9.) No plausible deniability -- The fact that a hard drive is FDE-
protected is immediately obvious to any experienced attacker; some forms
of FDE require a password to be entered before the operating system even
starts up, while others, although they integrate the drive encryption
password with the one that the operating system user would otherwise have
to enter anyway, can still be easily detected by an attacker who tries
(for example) to boot from a Linux "Live CD" and then discovers that
there is no recognizable partition structure on the computer's internal
hard drive. This is in contrast to the ability of more sophisticated
encryption systems (TrueCrypt hidden volumes and steganography come to
mind here) to be set up in such a manner as to make even their presence
difficult to impossible to detect in the first place.
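To see how little effort such detection takes, here is a minimal,
hypothetical sketch (Python; the device name /dev/sda is an assumption on
my part, and reading a raw device requires root) of the kind of entropy
test an examiner might run from a Live CD -- a drive whose sectors read
as near-perfectly random is almost certainly encrypted, or has been
wiped with random data:

# Shannon entropy of a sample read from the raw disk. Plaintext file
# systems show structure (entropy well below 8 bits/byte); FDE does not.
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

with open("/dev/sda", "rb") as disk:      # assumed device name; needs root
    disk.seek(64 * 1024 * 1024)           # skip the boot area, sample mid-disk
    sample = disk.read(1024 * 1024)

print(f"{shannon_entropy(sample):.2f} bits per byte (8.0 = random)")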
We have to remember here that the scenario we are trying to protect you
against, is the one in which the thug police from your local dictatorship
smash down your front door. If confronted with a FDE-encrypted hard
drive, the next thing you will hear from your local constabulary will be,
"give us the password to access your computer, you little prick, or we'll
bash your fucking head in!!!".
This exact thing, incidentally, is going on every day at border crossings
and airport customs inspection points leading to the United States, due
to suddenly implemented U.S. Customs policies claiming the right to full
inspection of laptop computers, iPods -- everything, in fact, that one
tries to bring into the wonderful U.S. of A. If a U.S. Customs agent
finds an FDE-protected laptop, they apparently do have the "right" to
demand that you share your password with them; and if you don't, you are
subject to immediate arrest, indefinite detention (hello, Guantanamo
Bay!) and, of course, permanent confiscation of your laptop.
In other words, the presence of FDE is a red flag to the authorities that
"you have something to hide". If you think that they're going to fold
their tents and give up trying to get your secrets, when faced with that,
you're dreaming in Technicolour, mate. They will simply go to the next
level of coercion and intimidation, to get what they now are certain that
you're hiding from them. This might happen with more subtle, conventional
methods of encryption, too, but it's less likely and you at least have a
fighting chance to look like any old innocent PC user.
In a perverse way, you can look at FDE as "no encryption at all", in this
sense. What I mean by this seemingly extreme statement is: although
it's true that FDE _does_ protect your hard drive(s) against an attack by
someone who doesn't have the correct password to just unlock the whole
damn thing at the point where you normally authenticate to the operating
system (e.g. when you're at the "Username:" and "Password:" prompt),
there are likely going to be situations where constantly logging out and
securing the computer will be awkward (at best) and might arouse a great
deal of suspicion (at worst).
For example, suppose that you have some "controversial" content in a
particular folder on your PC, whose hard drive is protected by FDE. You
aren't accessing this folder and are working on something else, but then
you have to get up and leave the computer unattended for a short while,
perhaps to answer the call of nature or to get something to eat.
If FDE is your only line of defence, you will have to log out (or "lock"
the computer) each time you do something like this (because, if you
don't, all that a nosy individual has to do, to get access to the
"controversial" content, is to open Windows Explorer and double-click on
the appropriate folder icon). In many social settings, for example if
you're living at home with a number of siblings or if you're at a college
fraternity house, those around you are bound to notice that "something
funny is going on here, Maksim is always looking over his shoulder and
locking the computer every time that someone walks by".
If, on the other hand, the sensitive content is separately encrypted --
so that the mundane functions of the computer can be used normally,
without anything revealing that something "nasty" is squirreled away in
some encrypted file or directory -- then you can safely take the
chance of going for a short break without attracting undue attention.
(Just be aware, it's not a good idea to allow nosy people extensive
access of this type without properly sanitizing the PC. Remember that
thumbnail pictures, browser histories and file names, just to name three
resources that might be accessible even though the original sensitive
content is now encrypted, can give rise to a lot of awkward questions, if
a "nosy" sibling, spouse or friend starts to poke around in the wrong
places on your computer.)
The bottom line, here: FDE isn't a bad technology, and it can play a
valuable role under some circumstances, but you need to carefully
evaluate its good and bad points, then decide if it's right for you.
Personally, I would never use Full Disk Encryption without also using
conventional methods like TrueCrypt, but you may feel differently. It's
up to you; it's your data, and your life.
RAM Disks: Cheap, Dirty, Risky, Effective
----------------------------------------
Now the intelligent data hider also has another option at his disposal
that I find is rarely discussed in the context of anti-forensics; this is
a shame because it can be very effective against many of the "end run"
types of attacks (that is, attacks that target traces of a file or
pointers to it, as opposed to the original file itself), that I have
described above.
I'm referring here to "RAM disks". This is actually quite an archaic
concept, going back at least as far as the start of the microcomputer
age. The concept is simply to take some of your computer's precious RAM
memory (the kind that sort of goes 'poof' when the computer is powered
down) and segment it off into a kind of very fast virtual hard drive. Now
the obvious disadvantage to this is, unless you copy the contents of
whatever you had in the RAM disk, to a more permanent storage medium
(logically, something like a TrueCrypt volume on a conventional hard
drive), when you turn the computer off, then you just lost all the files
that you had placed in the RAM disk.
But... maybe that's what you wanted to have happen, right? Consider the
'jackboot at the door' scenario and you'll see why using a RAM disk can
be a useful tool -- all you have to do is turn off the computer, and,
barring some of the highly specialized "RAM chilling" attacks mentioned
much earlier in this document, all of your "controversial" data instantly
disappears, whether it was encrypted or not. (Yes, it _is_ theoretically
possible that a very knowledgeable and determined attacker, with exactly
the right forensics tools and a perfect procedure, could still compromise
this data; but it's very unlikely, unless you were stupid enough to do
something like bragging to him, 'ha ha, you can't get me'. You never
would do something like that... would you? If you would, start learning
how to be very polite, as you say to the police, 'I'm sorry officer but I
can't help you with that, would you like to speak to my solicitor?')

The other disadvantage of RAM disks is obviously that "you don't get
something for nothing". That is, the amount of RAM memory in your PC is
finite, and is probably quite limited, particularly for low-budget
computers, and the more RAM that you allocate to the RAM disk, the less
that your "normal" operating system, plus any applications that you
choose to run, will have to use. In extreme cases, if you are overly
ambitious with the RAM disk, you can stop your operating system from
functioning altogether, although this is usually a temporary problem :
just reboot and try a lower number for the amount of memory to give to
the RAM disk.
In this respect, Windows users are again likely to be at a rather severe
disadvantage compared to Linux users, since the Windows operating system
uses so much more RAM memory to begin with, but even a low-footprint (~50
megabyte) RAM disk can still be very useful for Windows users for
temporary storage of small, sensitive files like lists of URLs, buddy
lists and so on.
For information on Windows RAM disks you could try searching Google, or
check out : http://channel9.msdn.com/forums/TechOff/19142-Microsofts-XPRAM-Disk-Driver/
For information on Linux RAM disks, search Google (or your own
distribution's help system) for "tmpfs" and "ramfs"; they are both very
easy to use and both have their own advantages and disadvantages.
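As a concrete illustration, here is a minimal sketch (in Python, and
assuming a typical Linux system where a tmpfs is already mounted at
/dev/shm -- an assumption you should verify on your own distribution) of
keeping a "sensitive" scratch file purely in RAM:

# Hypothetical use of the tmpfs commonly mounted at /dev/shm on Linux:
# anything written there lives in RAM only and vanishes at power-off.
# Caveat: swap space can still leak RAM contents to disk; see the
# swapfile discussion elsewhere in this document.
import os
import tempfile

RAMDISK = "/dev/shm"                      # assumption: tmpfs mounted here
assert os.path.ismount(RAMDISK), "no tmpfs mounted at /dev/shm?"

with tempfile.NamedTemporaryFile(dir=RAMDISK, delete=False) as f:
    f.write(b"http://example.org/some-sensitive-url\n")
    scratch = f.name

# ... use 'scratch' during the session ...

os.remove(scratch)                        # and it never touched the disk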
One other note : I am not sure how well RAM disks (for any operating
system) would react to the "Suspend" or "Hibernate" features of some more
modern computers (especially laptops), so I'd be cautious about using
them in those environments. Generally, doing any kind of security-
sensitive work on your PC and then putting it into any kind of power-
saving state, is a BAD thing from a security and privacy point of view,
because you have no control over whether the BIOS of the computer and the
operating system might swap out some of your "controversial" data, from a
secured area such as an encrypted volume, out to something completely
unprotected (such as the "hibernation file" on the hard disk).
TrueCrypt's Website specifically mentions this as a significant data
leakage possibility, so you would do well to heed their advice and not
let your computer go to sleep while it's in use for your 'sensitive' data
access.
Steganography
-------------"Steganography", a word derived from two Greek words that mean "hiding",
is (from the computer security and privacy point of view), basically the
discipline of hiding / embedding a first file (which you want to remain
secret / un-noticed), within a second, "container" file (which is meant
to be unencrypted, so that it can freely be looked at by an examiner).
You can think of the "container" file as kind of an innocent-looking
"shell", surrounding the hidden inner document, which, following the
analogy, would be like the "pearl within the shell". To the outside, it
just looks like sea-bottom... that's why the outside of the shell is so
plain-looking. But inside, why, there's a lovely pearl, if only you knew
which part of sea-bottom to pry open!
Technically, the concept leverages the fact that many modern file formats
such as JPEG, MPEG, WAV and so on, either allocate and use more storage
space (bits and bytes) than the amount of data that they contain actually
needs to use, or, they can -- with the right software -- be forced into
storing this data in fewer bits and bytes than the original, unmodified
"container" file had allocated.
For many types of files (particularly graphics files like a picture of a
mountain scene or audio files like a recording of your grandmother's
voice), it is extremely difficult for even an experienced forensics
investigator to be able to look at a steganographically modified file and
intuitively "know" that some of its internal bits and bytes have been
"stolen" (so that they can be used in which to store the "hidden" file),
especially if an unmodified, original version of the same file is not
available as a base for comparison.
The main privacy value of steganography derives from the fact that the
container file is usually something quite mundane and innocuous, for
example it could be a .jpg picture of the Eiffel Tower -- in other words,
something completely legitimate and "non-controversial". Usually,
steganography is combined with encryption of the "hidden" document; this
is not just to make it difficult to extract without the originator's
permission, but also because it technically makes it more difficult to
detect in the first place.
What is quite "cool", from a privacy point of view, about steganography,
is simply that it adds a highly desirable factor to your privacy "defence
in depth" strategy : namely, it (can, if perfectly implemented) defeat
even the suspicion that some kind of "controversial" data is being
hidden, in the first place.
That is, absent steganography (or some other similar technique, for
example the weak defence of just changing the file extensions of an
encrypted file from ".pgp" to ".doc"), an attacker knows that you're
hiding something -- this is implicit in the presence of encrypted files
that have no other plausible reason to be present on your hard drive --
and his only remaining job is to find a way to get past the encryption
that's protecting the original plaintext data.
With steganography, conversely, anyone but a sophisticated, intelligent
attacker who is armed with very good forensics tools, would look at the
steganographically modified "container" files and automatically conclude
that "this is just some picture of the Eifel Tower, better look elsewhere
for Achmed's nefarious Islamic militant plans".
So, is steganography the "nuclear weapon" of the anti-forensics toolkit?
Unfortunately, not. Here are some of its shortcomings :
* Perhaps the most important limitation of steganography is simply that
by the very nature of how it works, the size of the inner, "hidden" file
has to be far smaller than that of the outer "container" file, although
this is to some extent offset by the fact that the relationship is
proportionate -- that is, the larger the container file, the larger the
hidden file that's allowed.
The reason that steganography has to work this way is, if more than a
small percentage of the container file is allocated to storage of the
hidden file, then "artifacts" (errors) in the container file start to
become apparent to even the untutored eye, a situation which defeats the
purpose of steganography in the first place. (For example, if you used
too much of the file's total space for hidden file storage, your picture
of the Eiffel Tower might have a sky that's purple with pink polka dots,
as opposed to it being in nice Parisian blue. This would be a dead
giveaway to a forensics investigator that "there's something fishy here",
which is why most good steganographic software simply won't let you try
to do it.)
A practical rule of thumb to use in considering the above, is that you
generally can't use more than 10% of the size (in bytes) of the container
file, for storage of the hidden file. (A minimal sketch of this bit-level
embedding appears after this list.) As you can imagine, this means that
you won't be hiding too many full-length movies, using steganography.
* Generally, steganographic software only allows you to embed / hide a
single hidden file, in a single container file. So you can't send an
entire directory of 25 small files to be hidden in a single large
container file. (Of course, you could theoretically archive all 25 into
something like a single .zip file and then embed that.)
* Once you embed / hide a file inside a container file (let's say, you've
hidden a .txt format file containing your dastardly bank robbing plans,
inside a .jpg format picture of Big Ben), if you subsequently edit or
change EVEN ONE BYTE of the container file (without first extracting and
making a backup copy of the hidden file), there is a very high chance
that the hidden file will be corrupted and lost forever. Remember how
steganography works -- typically, it "steals" one or two bits out of
every pixel coloring byte of (in this case) the .jpg picture file.
Photoshop knows nothing about this and has no way of avoiding changing
the "stolen" bits (in which the hidden file resides), so the instant that
it changes one of these bits, the linked data structures that,
collectively, make up the hidden file will be disrupted... and, 'bye-bye
data'!
This issue is really very much the same as a similar one concerning
TrueCrypt "keyfiles", so the safeguards associated with keyfiles should
also be observed with steganographically affected container files.
* Although modern steganographic software generally employs pretty good
encryption algorithms (typically AES, Blowfish and so on), remember that
these are largely cypherpunk hobbyist tools and the implementation of the
software code that encrypts and decrypts the hidden files frequently has
not been peer reviewed to ensure its robustness and resistance to
cryptanalytic attack. For this reason, I would strongly suggest that you
not rely on the encryption tools of a steganographic application as your
only line of defense against forensic attack. (Put your "innocent"
pictures of the Eiffel Tower on a robustly encrypted volume, just to be on
the safe side.)
* Keep in mind that a good forensics investigator isn't stupid; if he
(rarely, she) sees a huge collection of files on your computer that seem
out of place (you are bored to tears by baseball, but you have 200
pictures of baseball 'greats' in one particular directory on your hard
drive), he's probably going to guess that they're likely steganographic
container files, then he'll hit them with the full range of all his
cryptanalytic tools... remember, forensics investigators have a long list
of all the stego tools that are available out there on the Internet, they
know of all the weaknesses in all of them, so this is another reason not
to put all your trust in the software that you used to hide your files
with.
Also remember that the forensics investigator may be able to indirectly
infer the fact that a given file is a stego container if (for example)
its date / time modification stamp shows signs of having recently been
updated. (Another, more problematic issue is, to avoid the container
files being inadvertently modified, you have the option of setting their
attributes to "read-only". This is a good thing from a data resiliency
point of view, but -- because a forensics investigator can and will
search for exactly this kind of attribute, as one of his first attacks --
it is a bad thing from a defence in depth point of view. You will have to
decide where the appropriate trade-off applies between the one and the
other.)
* If you use files such as .jpg pictures or .wav audio recordings as
containers, don't be stupid and give the forensics investigator an easy
way to do a "comparison with original" attack; securely delete and wipe
the original, unmodified container files.
* As with all data hiding software, it's best if this is not stored on,
or close to, the computer that you're using to hold the
steganographically modified data, since the mere presence of a stego
program is a red flag to an experienced forensics investigator. Put your
stego program on a USB key or somewhere that breaks the chain of
incriminating proximity to yourself.
* Test, test, test! Don't just store all your precious, "controversial"
data with a stego program, and then assume that you'll be able to
retrieve all your hidden files. Remember that these are specialist
applications that may contain bugs that can come back to bite you later.
Make sure to test that you can retrieve everything that you try to hide,
before you securely delete the original. And make sure that you have one
or two archival copies of the steganographic program itself. If you don't
do this, you may find that it has disappeared from the Internet's
download locations, leaving you high and dry when you want to access some
of your precious data.
* Finally, you will have to remember the password that you used to embed
the hidden document, in each steganographically modified container file.
If you lose this or forget it, well... hopefully you know where that
ends, now.
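To make the embedding mechanics (and the capacity arithmetic) concrete,
here is a minimal, purely illustrative sketch in Python, assuming the
third-party Pillow imaging library. Note that real stego tools targeting
JPEG work on DCT coefficients rather than raw pixels, so this toy
operates on a lossless PNG instead:

# A toy least-significant-bit (LSB) embedder: hides 'payload' in the
# lowest bit of each colour channel of an RGB image.
# Assumes the Pillow package (pip install Pillow). The output MUST be
# lossless (PNG); re-saving as JPEG would destroy the hidden bits.
from PIL import Image

def embed(container_path: str, payload: bytes, out_path: str) -> None:
    img = Image.open(container_path).convert("RGB")
    flat = [channel for pixel in img.getdata() for channel in pixel]
    data = len(payload).to_bytes(4, "big") + payload  # 4-byte length header
    bits = [(byte >> (7 - i)) & 1 for byte in data for i in range(8)]
    if len(bits) > len(flat):  # capacity: one bit per colour channel
        raise ValueError("payload too large for this container")
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | bit  # overwrite the least significant bit
    out = Image.new("RGB", img.size)
    out.putdata(list(zip(flat[0::3], flat[1::3], flat[2::3])))
    out.save(out_path, "PNG")

Note the arithmetic: at one bit per colour channel, the hidden file can
occupy at most one eighth of the raw pixel data, which is in the same
ballpark as the 10% rule of thumb given above. The sketch also shows why
editing even a handful of pixels in the container afterwards will corrupt
the payload.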
The bottom line : Steganography is an intriguing area of anti-forensics
and it definitely has a place in your defence in depth strategy; it is
probably most effectively used for small-size files of moderate
sensitivity, but could be used for more privacy-critical tasks if
supplemented by other protections such as physical separation and / or
separately encrypted containers. It's definitely worth a look!
Virtualization: Is It An Option?
--------------------------------
Recently (within the last 5 or so years), a relatively new technique for
running computer operating systems has become very popular.
This is "virtualization", which involves:
(a.) A regular computer (but, one with a fast CPU, large amounts of RAM
memory and more than the usual amount of hard disk storage; see below);
(b.) A regular computer operating system -- this is called the "host"
operating system;
(c.) Virtualization software (for example VMware or VirtualBox); and

(d.) One or more "guest" computer operating systems.


The basic concept of virtualization is quite simple -- the computer
starts up as it normally would and boots its standard operating system as
always, but then runs the virtualization software (let's assume for the
sake of this discussion, that it's VMware). VMware then loads one or more
"guest operating system virtual machines", which is itself associated
with one or more "guest virtual disk drives" (these are simply large hard
drive files, typically with a .vdmk file type -- these hold all of the
guest operating system's system files), which then, under control of
VMware, execute a "virtual boot" (this looks exactly like it would if the
guest operating system were to be installed on its own, separate PC,
except that in this case, the "virtual boot" is all taking place in a
desktop manager window maintained jointly by the host operating system
and VMware), loading its various files and sub-programs from the virtual
disk drive.
The virtualization software basically mediates between the host
operating system and the guest one, allowing several virtualized guest
operating systems to use the same physical CPU, network card, video /
audio subsystem, USB ports, etc..
The one hardware component that can't be shared in this way, is the
computer's RAM memory: this has to be apportioned by the virtualization
software in a pre-determined manner, so, for example, if you are starting
with a PC with 3 gigabytes of physical RAM, and you decide to allocate
512 MB of it to "VMware virtual machine #1", then everything else --
the original host operating system, whatever applications it
happens to be running, and any other guest operating systems that you
want to run on the same computer -- must share the remaining 2.5 GB of
RAM. For this reason, virtualization tends to be very RAM-hungry,
especially if the guest operating systems are to run RAM-intensive
applications like games or high-resolution graphics software.
The virtualization software carefully compartmentalises the host
operating system from the guest operating system(s): _nothing_ (other
than reducing the amount of available RAM, hard drive space and CPU
cycles) that the latter do, can affect the host operating system in any
way. (Nor, incidentally, can the guest operating systems affect each
other or infringe on each other's chunk of RAM memory.) This is a key
requirement for security and stability reasons.
From a general use point of view, virtualization provides a number of
useful functions:
(a.) It allows you to run a "real" version of a dissimilar operating
system on the same computer on which you have installed a different
operating system (for example, if your standard PC operating system is
Windows XP, you could run the MacOS or Red Hat Linux in a virtualized
session). This is as opposed to "dual partitioning" (in which you can
have more than one operating system sharing a single hard drive, but only
one operating system can be running at a time, and hard drive space
devoted to one can't be used by the other), and "emulation" (in which a
special layer of software attempts to make applications intended for one
operating system work under another; the most famous example of this is
the Linux WINE emulator for Windows programs, but the MacOS has an
emulator, as well).
(b.) Because of (a.) it gives you more flexibility in terms of software
(you can run Windows programs on a Mac, for example) and, unlike
emulators -- which are notorious for compatibility problems due to the
subtle differences between different computer operating systems -- since
the foreign application is running under the operating system that it was
intended for, compatibility problems are almost non-existent.
(c.) It can (possibly) make better use of surplus CPU cycles and hard
drive space. Where this is most important is in "hosting" environments
where several organizations can be sold / rented use of the same physical
server computer; basically, each organization gets a virtualized
operating system instance which they think is running on dedicated
hardware, but which is, in reality, sharing use of the same physical
hardware with several other organizations' own virtualized operating
systems, with access to the CPU being arbitrated between these by the
virtualization software. This can make for a very efficient use of
expensive hardware and physical hosting space, if correctly implemented.
(d.) It looks cool. (Having Linux, MacOS, IBM OS/2 Warp and Solaris
virtualized sessions all running at once on your Windows XP desktop, is
worth a lot of "geek credibility".)
(e.) It (possibly) can make your systems more robust, in the sense that
in theory, to back up the entire shooting match, all you have to back up
is the relatively small virtual machine configuration file and the .vmdk
disk image associated with it... all you need to move this image to a
different PC is a minimally configured host operating system on the new
physical hardware and suitable (physical) disk space on the new PC.
This is of course an important consideration in hosted and corporate
environments, but it's also important for groups like anti-virus
researchers who constantly have to experiment with malicious software
that might otherwise trash a conventional Windows PC; without
virtualization, an AV researcher in this situation would be faced with
the painful business of re-installing Windows from scratch (device
drivers and all), but with virtualization, all that need be done is to
make another copy of the "clean" .vmdk disk image (the original one,
before the testing of all the nasty stuff began) and virtually boot that
image up again.
Having said all that, there are some general purpose disadvantages of
this approach, that you need to know about:
(f.) Virtualization, depending upon the minimum and realistic memory, CPU
and hard drive requirements of the host and guest operating systems, can
be very hardware-hungry.
Personally, if (for example) you want to host one guest copy of Windows
XP and one guest copy of Red Hat Linux on your Solaris host computer, you
should assume that you'll need at least 2 to 3 gigabytes more physical
RAM memory than you needed just to run Solaris, as well as at least 40 GB
more hard drive space.
CPU requirements will vary according to what applications are running on
the host and guest operating systems, but don't try doing this on
anything less than a CPU purchased after about 2005, and in particular
don't try it on a "crippled" CPU like an Intel Celeron. I strongly
recommend a multi-core CPU, as some of these newer chips have
virtualization-friendly features built right in to them and all of them
have at least the theoretical ability for one or more of the virtual
operating systems to be "off-loaded" on to the secondary CPU / core,
while the host operating system uses the primary CPU.


(g.) It isn't a very good option for constantly mobile uses such as on a
laptop that is always connecting to a different Wi-Fi wireless LAN access
point (or for any scenario where network or hardware resources may differ
between one reboot of the host operating system to another).
Virtualization works best if it is used in a consistent, seldom-changing
hardware and network configuration.
(h.) Usually, access to external hardware from inside the virtualized
"box", is limited to a few key items -- for example, the (virtualized)
network adapter, mouse, sound card, video card, etc. -- for a guest
operating system. Access to other peripherals, for example scanners,
Webcams and so on, tends to be spotty at best. Even access to USB
peripherals such as flash RAM USB keys, etc., has only recently been
added and can be tricky to use.
But What About Virtualization From A Security Perspective?
----------------------------------------------------------
If we look at virtualization from an anti-forensics point of view, this
technology has a mixture of promising and not-so-good features:
Good Things About Virtualization:
---------------------------------
(a.) The key advantage of virtualization -- and here this is in sharp
contrast to almost all other similar or comparable technologies -- is
that the guest operating system is a "real" one, COMPLETELY partitioned
and compartmentalised from the host operating system; it has no access
whatsoever to, and therefore cannot leave incriminating traces of data
anywhere within, the file storage system of the host operating system.
This has potentially huge benefits from an anti-forensics perspective,
because now, you could potentially (1) boot up the host operating system,
do one or two completely innocent-looking activities with it; (2) boot up
a fresh copy of the guest operating system using the virtualization
software; (3) use the guest operating system to process whatever
"controversial" data must be accessed; (4) store or encrypt this
"controversial" data from within the guest operating system (note:
logically, the storage place for this data would have to be on a
networked shared folder, or on the Internet -- somewhere NOT on the host
hard drive); (5) shut down the guest operating system; (6) securely
delete the guest .vmdk file (thus eliminating any trace of any activity
conducted during the session); and, finally, (7) repeat as necessary,
always working from a "fresh", uncontaminated copy of the guest operating
system's .vmdk file.
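For step (6), here is a minimal sketch of what "securely delete" might
look like in Python; the image name is hypothetical, and be warned that a
single-pass overwrite is NOT a guaranteed wipe on journaling file systems
or SSDs, so the dedicated wiping tools discussed earlier in this document
remain preferable:

# Crude single-pass random overwrite of a guest disk image, followed by
# deletion. Illustrative only; see the caveats in the text above.
import os

def overwrite_and_delete(path: str, chunk: int = 1 << 20) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        written = 0
        while written < size:
            n = min(chunk, size - written)
            f.write(os.urandom(n))       # clobber the old contents
            written += n
        f.flush()
        os.fsync(f.fileno())             # force the writes out to the medium
    os.remove(path)

overwrite_and_delete("guest_session.vmdk")   # hypothetical image name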
This is the kind of process that makes forensics investigators sit down
and cry, tear out their hair, etc., because -- apart from being able to
prove that "Waqas was using virtualization software to hide his tracks,
M'Lord" -- if properly executed, it leaves the intruder with absolutely
NO evidence, other than possibly interceptions of TCP/IP traffic
transiting the Internet connection being used "at the time of the crime",
to present to the authorities.
This is because by definition, any records (log files, file change dates,
etc.) of computer use have to be generated by the guest operating system
within its compartmentalised virtual "box", and can only be stored within
the .vmdk file that represents the guest operating system's virtual hard
drive. Wipe the virtual hard drive and all possible local static evidence
is gone forever -- not only can't it even be guessed at, but also, it's
very difficult to even tell which kind of operating system the person
physically at the computer, was using at a specific time. (Possibly, a
forensics expert could data-mine the host operating system to establish
that "aha, file MY_WINDOWS_VISTA.VDMK was erased by this filthy pervert
at 10:20 p.m. last Tuesday!"... but the obvious solution to that is to
sanitize the appropriate log files of the host operating system.)
From an anti-forensics point of view, this is pretty hard to beat,
particularly when used for purposes like a single Web / Internet surfing
session that does not involve the permanent capture or storage of
potentially incriminating data.
(b.) A virtualized session may be an attractive option in certain
scenarios (for example, if one wishes to make remote use of one's work
PC, which is otherwise tightly locked down by the local MIS department)
where the target computer cannot be re-booted into an alternate operating
system. Load a virtualized guest operating system on the main computer
and you're away to the races; to the MIS department, this should look
just as if nothing has changed, but if configured in the right way,
incoming remote access request packets should be taken over by the
"listening" virtual guest operating system (not the main host one) and
the computer can then be used as you see fit.
However, apart from the resource issues noted above, there are some
significant drawbacks that you should be aware of, before entrusting your
"confidential" data to this technology.
(c.) The most important drawback concerns the "kick in the door"
scenario. To properly shut down the system, you will have to (1) shut
down whatever application you were running within the virtualized guest
operating system, (2) log out of / shut down, the guest operating system
itself; (3) (ideally) shut down the virtualization software (which will
be running as a task under the host operating system); (4) shut down the
host operating system, then (5) smile at the police, saying, "why are you
invading the house of a perfectly innocent person like me, officer?"
The point is, this all TAKES TIME, potentially quite a bit more time than
(say) just doing a forced dismount of all your TrueCrypt drive containers
and hitting the power button. Time is one factor that you may not have a
surplus of, if sudden physical compromise of your computing premises is a
possibility. This must be taken into account, if you decide to go the
virtualization route.
(d.) By default, virtualized .vmdk disk volumes are UNENCRYPTED, meaning
that an intruder with physical access to your computer should be able to
access any "controversial" data contained within them, with relatively
little effort (just boot up the appropriate virtual guest operating
system and Bob's your uncle!). The privacy and confidentiality
implications of this should be obvious -- you are going to have to
encrypt the data within the guest operating system [e.g., there will now
be one or more encrypted file(s) somewhere within the .vmdk virtual hard
drive file]. It has been suggested that you could "nest" a TrueCrypt (or
other) encrypted container, somewhere within the .vmdk file system. This
seems implausible, but it may be worth a try.
An alternative way to preserve confidentiality -- which, I should warn
the reader, I have so far not personally tried, and which I predict might
be unstable (remember: "instability + strong encryption == 'kiss your
data goodbye'") -- would be to (1) create an encrypted TrueCrypt container
(a large one, obviously) on a suitable physical hard drive; (2) use
VMware to create a .vmdk virtualized disk image within the TrueCrypt
container; and (3) when setting up the guest operating system, tell it to
use the now safely encrypted virtualized .vmdk file.
The problem here is that you have multiple layers of software and device
driver control, trying to regulate access to the data stored in this
scheme, and each and every one of them must work PERFECTLY, ALL THE TIME,
or a data loss disaster is certain to ensue.
Consider:
+----------------------------------------+
|           Virtual (guest) OS           |
|          (let's say, Windows)          |
|    {trying to read or write a byte}    |
+----------------------------------------+
                    |
                   \|/
+----------------------------------------+
|           Virtualization s/w           |
|   (VMware for Red Hat Linux host OS)   |
|       virtual hard drive emulator      |
+----------------------------------------+
                    |
                   \|/
+----------------------------------------+
|             Encryption s/w             |
| (TrueCrypt for Red Hat Linux host OS)  |
+----------------------------------------+
                    |
                   \|/
+----------------------------------------+
|        Real / primary (host) OS        |
|       (let's say, Red Hat Linux)       |
+----------------------------------------+
                    |
                   \|/
+----------------------------------------+
|          Red Hat Linux host OS         |
|     (physical hard disk driver s/w)    |
+----------------------------------------+
                    |
                   \|/
+----------------------------------------+
|          Physical hard drive           |
+----------------------------------------+
As I think you can see, in this configuration -- something like which
would be required to keep data within the virtualized .vmdk disk file
really secure -- we are basically doubling (from 3 to 6) the levels of
indirection, between the application that is trying to read or write a
byte to the "hard disk" (or virtual representation thereof), and the
actual, cold, hard magnetic medium where that byte of data is physically
going to be stored or read from.
If the slightest thing goes wrong with this setup (for example, a "bad
sector" appears within either the .vdmk file or the TrueCrypt container),
the entire house of cards may come crashing down and the data within
the .vdmk file may become permanently inaccessible (unless, of course,
you're the NSA and you have a supercomputer and an office-full of expert
code-breakers, to get it back for you). FDE (Full Disk Encryption) just
makes matters worse, because it adds a seventh (!) layer of read / write
data handling that must always work with 100% correctness. And all of
this is on top of the fact that data access speeds will be quite
substantially impacted by all the handing-off of work from one layer of
software to another.
Personally I'm unwilling to entrust my most important data to any such
configuration, at least not without a great deal more testing... but you
may feel differently. It's up to you.
(e.) Remember that some of the industry-leading virtualization software
vendors -- the best example of which is VMware -- are U.S.-based
companies, which, unfortunately, raises the possibility of a "backdoor"
for U.S. spying agencies and law enforcement personnel, in more or less
the same way as for Microsoft Windows, Sun Solaris and Red Hat
Linux. The best solution to this is (like the solution for the backdoor
problem for the ordinary operating systems) to use Open Source-based
virtualization software that is downloaded from somewhere outside the
United States.
Having said that, personally I think that the chance of a backdoor
working efficiently in a virtualized environment is rather small, because
of a variety of technical factors as well as the fact that it would be
far simpler (not simple, mind you) and more effective for the NSA, CIA,
etc., just to attack the host operating system -- after all, there are
far more copies of Microsoft Windows than there ever will be of VMware,
so a secret backdoor in Windows is going to be much more useful to U.S.
intelligence agencies than would be one in VMware. Still, you can never
be too careful, where your personal data is concerned.
Incidentally, as of this writing, some virtualization
software vendors (VMware) are requiring those who download the "free"
versions of the software, to fill out lengthy forms, including name,
location, e-mail address, etc., to be "allowed" to download the
application(s). Needless to say, if you must use these vendors' wares,
don't give them accurate personally identifying information. Would you
trade a free copy of VMware, for a 20-year prison sentence in Guantanamo
Bay, courtesy of the NSA, the FBI and the U.S. military? I thought not.
So, "just don't".
(f.) Finally, remember that a virtualized operating system session is a
"real" operating system, with all the good and bad things that implies.
If, as is likely, a "normal" instance of Windows XP installed on a "real"
PC has a NSA-inspired backdoor inside it, then your virtualized guest
version of XP will, too. The fact that it's running in a window inside a
virtual machine, doesn't make it any more or less secure, in and of
itself. If, for example, you are running Windows XP or Vista as a guest,
just because it's virtualized doesn't mean that you don't have to wipe
your swapfile, clear the "Recent Files" list, and so on.

It's very easy to skip these steps, because psychologically, the guest
operating system "looks just like another application in a window of the
host operating system" -- but it's _not_, it's an entire operating system
all unto itself.
At the very minimum, if the host and guest operating systems are
dissimilar (as they will be in 90 per cent of all the cases), then you
will have to learn, and diligently execute, two different sets of
confidentiality techniques (one for the guest and one for the host
operating system). If you're having a hard time remembering just ONE set
of secure computing guidelines, this might prove to be a challenge.
So what's the "bottom line"? As of right now, I believe that
virtualization is a potentially valuable tool in your anti-forensics
toolkit, but also that you have to carefully consider both the
accessibility and "tear-down time" issues before you use it on a routine
basis, for access to and storage of, your "confidential" data. None the
less, it's a promising technology which shows every sign of becoming even
more powerful in the future, so it's definitely worth keeping an eye on.
Keyloggers
----------
One other comment, here -- keep in mind that a very simple method for
anyone, be that a law enforcement official, or a spy, or (much more
likely) just a common garden variety cyber-criminal, to instantly 'pwn'
everything and anything that you do over the Internet (or locally on your
own PC, for that matter), is to install a 'keylogger', that is, a
malicious little piece of software that quietly hides on your PC and then
sends every keystroke that you type, back to... wherever.
Since this would capture not only which Websites you visited, but also
the user-name and password credentials that you used to log in to
them, it doesn't take a rocket scientist to understand what a grave
threat to your privacy that a 'keylogger' represents. (As in, 'if you get
one of these things on your PC, you can pretty much forget about all the
rest of your security measures, they'll all be bypassed by an intruder
who now knows your security credentials, just as well as you do'.)
The reason why I mention this in the context of Internet surveillance is,
many of the keyloggers afflicting computers today, end up being installed
simply because of end-user (YOU!) stupidity : the classic example is a
Windows user who clicks on an .exe format file that came in via e-mail,
because the associated message claimed, "Run this leet program 2 see Anna
Kornukova n00d, dudes!". (Well, when the 'dude' double-clicked on
"ANNANUDE.EXE", he may or may not have seen anything cute and au naturel,
but he definitely did get his PC added to a cyber-criminal's keylogger
and botnet.)
Most keyloggers work only with Windows, which is yet another reason why
not to use this awful operating system, but even if you use something
like Linux or the MacOS, you still should be careful:
At all costs, avoid running programs whose origins and authenticity you
cannot verify. This is, unfortunately, a rule that is in practice
quite difficult to follow consistently, because in theory, anything can
be faked on the Internet, and some of the most important security tools
(example : TrueCrypt) in your valid arsenal, have to be downloaded from
the Internet... so how would you know a "real" copy of TrueCrypt from a

"fake" one that's just designed to put malware on your PC? The truth is,
while there are limits to how far you can go to verify this kind of
thing, you can't be 100% sure that the application is "safe".
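One partial mitigation -- partial, because an attacker who controls the
download site may control the published checksum as well -- is to compare
a cryptographic hash of what you downloaded against a digest published
through a separate, trusted channel. A minimal sketch in Python (the file
name and digest shown are hypothetical):

# Compute the SHA-256 of a downloaded file and compare it to a digest
# obtained out-of-band (e.g. from the developers' signed announcement).
import hashlib

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

PUBLISHED = "0123...abcd"   # hypothetical digest from a separate channel
if sha256_of("TrueCrypt-Setup.exe") != PUBLISHED:
    raise SystemExit("Checksum mismatch -- do NOT run this download!")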
Finally
-------
Stay tuned for Part 2 of this series, "No Man Is An Island", in which I
will be describing some valuable tips for how to stay secure when using
the Internet.
================================================================================
APPENDIX: How to Secure Your Windows Computer and Protect Your Privacy
By Howard Fosdick
1 May 2008
Do you know that --
* Windows secretly records all the web sites you've ever visited?
* After you delete your Outlook emails and empty the Waste Basket,
someone could still read your email?
* After you delete a file and empty the Recycle Bin, the file still
exists?
* Your computer might run software that spies on you?
* Your computer might be a bot, a slave computer waiting to perform
tasks assigned by a remote master?
* The web sites you visit might be able to compile a complete dossier of
your online activities?
* Microsoft Word and Excel documents contain secret keys that uniquely
identify you? They also collect statistics telling anyone how long you
spent working on them and when.
This guide explains these -- and many other -- threats to your security
and privacy when you use Windows computers. It describes these concerns
in simple, non-technical terms. The goal is to provide information anyone
can understand. This guide also offers solutions: safe practices you can
follow, and free programs you can install. Download links appear for the
free programs as they are cited.
No one can guarantee the security and privacy of your Windows computer.
Achieving foolproof security and privacy with Windows is difficult. Even
most computer professionals don't have this expertise. Instead, this
guide addresses the security and privacy needs of most Windows users,
most of the time. Follow its recommendations and your chances of a
security or privacy problem will be minimal. Since this guide leaves out
technical details and obscure threats, it includes a detailed Appendix.
Look there first for deeper explanations and links to more information.
Why Security and Privacy Matter

Why should you care about making Windows secure and private? Once young
"hackers" tried to breach Windows security for thrills. But today
penetrating Windows computers yields big money. So professional criminals
have moved in, including overseas gangs and organized crime. All intend
to make money off you -- or anyone else who does not know how to secure
Windows. Security threats are increasing exponentially.
This guide tells you how to defend yourself against those trying to steal
your passwords, personal data, and financial information. It helps you
secure your Windows system from outside manipulation or even destruction.
It also helps you deal with corporations and governments that breach
Windows security and your privacy for their own ends. You have privacy if
only you determine when, how, and to whom your personal information is
communicated. Organizations try to gain advantage by eliminating your
privacy. This guide helps you defend it.
The Threats
Windows security and privacy concerns fall into three categories --
1. How to defend your computer against outside penetration attempts
2. How Windows tracks your behavior -- and how to stop it
3. How to protect your privacy when using the Internet
The first two threats are specific to Windows computers. The last one
applies to the use of any kind of computer. These three points comprise
the outline to this guide.
"1. How to Defend Against Penetration Attempts"
There are many reasons someone or some organization out in the Internet
might want to penetrate your Windows computer. Here are a few examples:
* To secretly install software that steals your passwords or financial
information
* To enroll your computer as a bot that secretly sends out junk email or
spam
* To implant software that tracks your personal web surfing habits
* To destroy programs or data on your PC
Your goals are to --
* Prevent installation of malicious software or malware
* Identify and eliminate any malware that does get installed
* Prevent malware from sending information from your computer out into
the web
* Prevent any other secret penetration of your computer
1.1 Act Safely Online
Let's start with the basics. Your use of your computer -- your online
behavior -- significantly affects how easy it is to penetrate your PC.


Practice safe web surfing. Handle your email safely. Follow these tips to
reduce the chances that outsiders can penetrate your computer:
* Don't download free screensavers, wallpaper, games, or toolbars unless
you know they're safe. These often come with embedded malware. If you
just can't pass up freebies, download them to a directory where you scan
them with your anti-virus and anti-malware programs before using them.
* Don't visit questionable web sites. Hacker sites, sexually explicit
sites, and sites that engage in illegal activity like piracy of music,
videos, or software are well known for malware. You could get hit by a
drive-by -- a malicious program that runs just by virtue of your viewing
a web page.
* Don't open email or email attachments from questionable sources. These
might install malware on your system. Dangerous email attachments often
present themselves as games, interesting pictures, electronic greeting
cards, or invoices so that you will open them. (If you get too much junk
email, reduce it with these free programs.)

* Don't click on links provided in emails. These could direct you to a
legitimate-looking but bogus web site designed to steal your personal
information. Companies that protect their customers don't conduct
business through embedded links in emails!
* Before you enter your online account name and password into any web
site, be sure the web page is secure. The web page's address should start
with the letters https (rather than http). Most browsers display a
closed lock icon at the bottom of the browser panel to indicate a secure
web site form.
* Don't give out your full name, address, phone number, or other personal
information in chat rooms, forums, on web forms, or in social networks.
(Section 3 on "How to Protect Your Privacy When Using the Internet" has
more on this topic.)
* The Appendix links to articles with more safety tips.
1.2 Install Self-Defense Software
To defend Windows, you need to install software that protects against
several kinds of threats. This section describes the threats and the
software that defends against each. Some programs provide protection
against multiple threats, but no single program protects you from all
kinds of threats! Compare any protective software you already have
installed to what I describe here. To cover any gaps, this section
recommends free software you can download and install. It provides
download links for these free programs.
Firewall -- Firewalls are programs that prevent data from coming into or
leaving from your computer without your permission. Unsolicited data
coming into your computer could be an attempt to compromise it;
unauthorized data leaving your computer may be an attempt to secretly
steal your data or spy on your activities.
Every Windows computer should run a firewall at all times when it is
connected to the Internet.
I recommend downloading and installing a free firewall, such as
ZoneAlarm, Comodo Firewall, Sygate Personal Firewall, or Jetico Personal
Firewall. ZoneAlarm is especially easy to set up, since it is self-
configuring. Find these and other free firewalls along with a quick
comparative review here.
Windows ME, 98, and 95 did not come with a firewall. XP and Vista do.
However, the XP and Vista firewalls have shortcomings. The XP firewalls
(there are actually two versions) do not stop unauthorized outgoing data.
This is unacceptable because if malware somehow got installed on your
computer, it could send data out without you realizing it. Vista's built-
in firewall can stop unauthorized outbound data. But it does not do so by
default.
Enabling this critical feature is not easy. I recommend installing a free
firewall whether or not you have a Microsoft firewall. (It doesn't hurt
to run two firewalls.) Since the procedures for configuring Microsoft's
firewalls vary according to your Windows version and service pack level,
see the Appendix for how to configure them.
Anti-Virus -- Viruses are programs that are installed on your computer
without your knowledge or permission. The damage they do ranges from
acting as a nuisance and wasting your computer's resources, all the way
up to destroying your data or Windows itself. Anti-virus programs help
identify and eliminate viruses that get into your computer.
Free anti-virus programs include AVG Anti-Virus, avast! Anti-Virus Home
Edition, and PC Tools Anti-Virus Free Edition. If you don't already have
an anti-virus scanner, download and install one of these, then run it
regularly to scan your disk for any viruses. You can schedule the program
to run automatically either through its own built-in scheduling facility
or through the Windows Scheduler. Good anti-virus programs like these
automatically scan data as it downloads into your computer. This includes
emails you receive and any files you download.
Anti-Malware -- In addition to viruses, there are many other kinds of
programs that try to secretly install themselves on your computer.
Generically, they're called malware. They include:
Spyware: It spies on your behavior and sends this data to a remote
computer
Adware: It targets you for advertisements


Trojans: These scam their way into your computer
Rootkits: These take over administrator rights and can do anything to
your PC
Dialers: These secretly use your communication facilities
Keyloggers: These record your keystrokes (including passwords) and send
this data to a remote computer
Botware: This turns your computer into a bot or zombie, ready to silently
carry out instructions sent from a remote server.
Since no one program identifies and removes all kinds of malware, you
need a couple in addition to your anti-virus scanner. Free programs for
this purpose include AVG Anti-Spyware, Ad-Aware 2007 Free, Spybot Search
and Destroy, and a-Squared Free Anti-Malware. I recommend running two
anti-malware programs on a regularly-scheduled basis.
Anti-Rootkit -- Rootkits are a particularly vicious form of malware. They
take over the master or Administrator user rights on your PC and
therefore are very effective at hiding themselves. Many of the
anti-malware programs above provide some protection against rootkits, but
sometimes a specialized detection program is useful.
Rootkit detectors often require technical expertise, but I can recommend
two as easy to use: AVG Anti-Rootkit Free and Sophos Anti-Rootkit. Both
require Windows 2000, XP, or newer.
Intrusion Prevention -- Intrusion detection programs alert you if some
outside program tries to secretly enter Windows by replacing a program on
your computer. For example, an outside program might try to replace part
of Windows or alter a program such as Internet Explorer. Free intrusion
detection programs include WinPatrol, SpywareGuard, ThreatFire Free
Edition, and ProcessGuard Free. Install one of them and it will run
constantly in the background on your computer, detecting and preventing
intrusions.
1.3 Keep Your Programs Up-to-Date!
All anti-malware programs require frequent updating. This enables them to
recognize new kinds of malware as they are developed. The programs listed
above automatically check for updates and download and install them as
needed. (Each has a panel where you can verify this feature.) You must
also keep Windows up-to-date. In Vista, the automatic feature for this
purpose is called Windows Update. It is on by default. You can manage it
through the Control Panel | Security | Windows Update option.
As Microsoft explains, they have broadened Windows Update into a facility
they call Microsoft Update. The latter auto-updates a broader range of
Microsoft products than does Windows Update. For example, it updates
Microsoft Office. You can sign up for Microsoft Update at the Microsoft
Update web site. In XP and Windows 2000, the auto-update feature was
usually referred to as Automatic Updates. Manage it through Control Panel
| Automatic Updates.
Beyond Windows, you must also keep the major applications on your
computer up-to-date. Examples are Adobe's Flash Player, Firefox, and
RealPlayer. Most default to automatic updating. It's a good practice to
verify the auto-update setting right after you install any new program.
Then you never need to check it again.
If you don't know whether your system has all the required updates for
your programs, run the free Secunia Software Inspector. It detects and
reports on out-of-date programs and ensures all "bug fixes" are applied.
If you need to download software updates for many programs, The Software
Patch allows you to download them all through one web site.
1.4 Test Your Computer's Defenses
You can test how well your computer resists penetration attempts by
running the free ShieldsUp! online service. ShieldsUp! tells you about any
security flaws it finds. It also displays the system information your
computer gives out to every web site you visit. Section 3 on "How to
Protect Your Privacy When Using the Internet" addresses this privacy
concern. Test whether your computer's firewall stops unauthorized
outgoing data by downloading the free program called LeakTest.

1.5 Peer-to-Peer Programs Can Be Risky


Peer-to-peer programs share music, videos and software. Popular examples
include BitTorrent, Morpheus, Kazaa, Napster, and Gnutella. Peer-to-peer
(or P2P) networking makes it possible for you to easily download files
from any of the thousands of other personal computers in the network. The
problem is that by using peer-to-peer programs, you agree to allow others
to read files from your computer. Be sure that only a single Folder on
your computer is shared to the Internet, not your entire disk! Then, be
very careful about what you place into that shared Folder.
Some peer-to-peer programs use the lure of the free to implant adware or
spyware on your computer. Other P2P systems engage in theft because they
"share" files illegally. The popular PC Pitstop web site tested major P2P
programs for bundled malware in July 2005 and here's what they found:
P2P Program: Adware or Spyware Installed:
Kazaa: Brilliant Digital, Gator, Joltid, TopSearch
Ares: NavExcel Toolbar
Bearshare: WhenU SaveNow, WhenU Weather
Morpheus: PIB Toolbar, Huntbar Toolbar, NEO Toolbar
Imesh: Ezula, Gator
Shareaza, WinMX, Emule, LimeWire, BitTorrent, BitTornado: None
The SpywareInfo web site offers another good list of P2P infections here.
If you decide to install any peer-to-peer program, determine if the P2P
program comes with malware before you install it. You greatly increase
your personal security by not getting involved in the illegal sharing of
music, videos, and software. File "sharing" in violation of copyright is
theft. The Recording Industry Association of America has sued over 20,000
people for it as of mid-2006.
1.6 Don't Let Another User Compromise Your Computer
Got kids in the house? A teen or younger child might violate the "safe
surfing" rules above and you wouldn't know it . . . until you get
blindsided by malware the next time you use your computer. This article
tells about a couple whose tax returns and banking data ended up on the
web after their kids used P2P networking software the parents didn't even
know was installed. A spouse or friend could cause you the same grief.
If you are not the sole user of your computer -- or if you do not feel
completely confident that your computer is secure -- consider what
personal information you store. Do you really want to manage your credit
cards, bank accounts or mutual funds from your PC? Only if you know it's
secure! (Read the agreements for online financial services and you'll see
that you are responsible for security breaches that compromise your
accounts.)
Some families use two computers: one for the kids and a secure one for
the adults. They use the less secure computer for games and web surfing,
and carefully restrict the use of the more secure machine. This
two-computer strategy is appealing because today you can buy a used
computer for only a hundred dollars.


An alternative is to share one computer among everyone but set up
separate user ids with different access rights (explained below). Ensure
that only a single user id has the authority to make changes to Windows
and restrict its use.
Never use a public computer at a computer cafe or the library for online
finances or other activities you must keep secure.
1.7 Use Administrator Rights Sparingly
Installing programs or performing security-sensitive activities on a
Windows computer requires administrator rights. When you use administrator
rights, any malware program you accidentally or unknowingly run has these
rights -- and can do anything on your system. In systems like Windows XP
and Windows 2000, the built-in Administrator user id inherently has
administrator rights. You can also create other user ids to which you
assign administrator rights. Working full-time with a user id that has
administrator rights makes you vulnerable!
In contrast, using an account that does not have administrator rights
gives you a great deal of protection. So create a new user id without
administrator rights and use it. Then use the Administrator id only when
necessary.
Windows Vista introduces a new feature called User Account Control (UAC)
that helps you avoid using administrator rights except when required. This
feature prompts you to enter a password when you want to perform any
action that requires administrator rights. While entering passwords may
seem like a hassle, UAC is a big step towards a more secure Windows.
Early Windows versions -- ME, 98, and 95 -- don't have a system of access
rights. Whatever user id you use has administrator powers. To keep
these systems secure, all you can do is follow the other recommendations
in this guide very carefully.
1.8 Use Strong Passwords
Passwords are the front door into your computer -- and any online accounts
you have on the web. You need to:
* Create strong passwords
* Change them regularly
* Use different passwords for different accounts
Strong passwords are random mixes of letters, numbers, and punctuation
(if allowed) that contain eight or more characters:
AlbqP_1793, pp30-Mow9, PPw9a3mc84
Weak passwords are composed of personal names or words you can find in
the dictionary:
Polly28, Bigdog, alphahouse, wisewoman2, PhoebeJane
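If inventing random mixes like these is a chore, let the computer do it.
Here is a minimal sketch in Python (assuming you have Python installed);
the character set and the length of 10 are arbitrary choices you can
adjust:

    import secrets
    import string

    # Draw 10 characters at random from letters, digits, and punctuation.
    # The secrets module (unlike random) uses a cryptographically strong
    # random number generator, which is what you want for passwords.
    alphabet = string.ascii_letters + string.digits + "-_!."
    password = "".join(secrets.choice(alphabet) for _ in range(10))
    print(password)     # prints something like pQ3-x9Rm_T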
If keeping track of different passwords for many different accounts
strikes you as impractical (or drives you nuts!), you might try a
"password management" tool from any of the dozen free products listed
here. If you set up a home wireless network, be sure to assign the router
a password!
1.9 Always Back Up Your Data
One day you turn on your computer and it won't start. Yikes! What now? If
you backed up your data, you won't lose it no matter what the problem is.
Backing up data is simple. For example, keep all your Word documents in a
single Folder, then write that Folder to a plug-in USB memory stick after
you update the documents. Or, write out all your data Folders once a week
to a writeable CD. You can also try an automatic online backup service
like Mozy.
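To show how little is involved, here is a minimal Python sketch of the
USB-stick approach. The folder paths are assumptions -- substitute your
own documents folder and your memory stick's drive letter:

    import shutil
    from datetime import date

    # Copy the whole documents folder to a dated folder on the USB stick.
    source = r"C:\Documents and Settings\You\My Documents"   # your data
    target = r"E:\Backup-" + date.today().isoformat()        # USB stick
    shutil.copytree(source, target)
    print("Backed up", source, "to", target)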
For the few minutes it takes to make a backup, you'll insure your data
against a system meltdown. This also protects you if malware corrupts or
destroys what's on your disk drive. If you didn't back up your data and
you have a system problem, you can still recover your data as long as the
disk drive still works and the data files are not corrupted. You could,
for example, take the disk drive out of the computer and place it into
another Windows machine as its second drive. Then read your data -- and
back it up!
If the problem is that Windows won't start up, the web offers tons of
advice on how to fix and start Windows (see the Appendix). Another option
is to start the machine using a Linux operating system Live CD and use
Linux to read and save data from your Windows disk. If the problem is
that the disk drive itself fails, you'll need your data backup. If you
didn't make one, your only option is to remove the drive and send it to a
service that uses forensics to recover data. This is expensive and may or
may not be able to restore your data. Learn the lesson from this guide
rather than from experience --back up your data!
1.10 Encrypt Your Data
Even if you have locked your Windows system with a good password, anyone
with physical access to your computer can still read the data! One easy
way to do this is simply to boot up the Linux operating system using a
Live CD, then read the Windows files with Linux. This circumvents the
Windows password that otherwise protects the files.
Modern versions of Windows like Vista and XP include built-in encryption.
Right-click on either a Folder or File to see its Properties. The
Properties' Advanced button allows you to specify that all the files in
the Folder or the single File will be automatically encrypted and
decrypted for you. This protects that data from being read even if
someone circumvents your Windows password. It is sufficient protection
for most situations.
Alternatively, you might install free encryption software like TrueCrypt,
BestCrypt or many others.
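To illustrate the principle these tools share, here is a minimal sketch
of file encryption in Python using the third-party cryptography package
(an assumption for illustration only -- TrueCrypt and BestCrypt work
differently and can encrypt whole volumes). Notice that the key is saved
to its own file; as the next paragraph stresses, lose the key and you
lose the data:

    from cryptography.fernet import Fernet   # pip install cryptography

    # Generate a key and keep it somewhere safe -- without it, the
    # encrypted file produced below is unrecoverable.
    key = Fernet.generate_key()
    with open("my.key", "wb") as f:
        f.write(key)

    # Encrypt a document (file names are hypothetical). Decryption is
    # the reverse call: Fernet(key).decrypt(ciphertext).
    with open("taxes.xls", "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())
    with open("taxes.xls.enc", "wb") as f:
        f.write(ciphertext)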
If you encrypt your data, be sure you will always be able to decrypt it!
If the encryption is based on a key you enter, you must remember the key.
If the encryption is based on an encryption certificate, be sure to back
up or "export" the certificates, as described here. You might wish to
keep unencrypted backups of your data on CD or USB memory stick.
Laptop and notebook computers are most at risk to physical access by an
outsider because they are most frequently lost or stolen -- keep all data
files on your portable computer encrypted.


1.11 Reduce Browser Vulnerabilities
As the program you run to access the Internet, your web browser is either
your first line of defense or a key vulnerability in protecting your
computer from Internet malware.
Will Your Browser Run Anybody's Program? -- From a security standpoint,
the worldwide web has a basic design flaw -- many web sites expect to be
able to run any program they want on your personal computer. You are
expected to accept the risk of running their code! The risk stems from
both accidental program defects and purposefully malicious code.
Some web sites require that you allow their programs to run their code to
get full value from the web site. Others do not. You can find whether the
web sites you visit require programmability simply by turning it off and
visiting the site to see if it still works properly. Here are the
keywords to look for in web browsers to turn off their programmability:

* ActiveX
* Active Scripting (or Scripting)
* .NET components (or .NET Framework components)
* Java (or Java VM)
* JavaScript
Turn off the programmability of your browser by un-checking those
keywords at these menu options:
Browser: How to Set Programmability:
Internet Explorer: Tools | Internet Options | Security | Internet Custom
Level
Firefox *: Tools | Options | Content
Opera: Tools | Preferences | Advanced | Content
K-Meleon: Edit | Advanced Preferences | JavaScript
SeaMonkey: Edit | Preferences | Advanced (Java) | Scripts and Plugins
(JavaScript)
* Version 2 and later
Internet Explorer Vulnerabilities -- The Internet Explorer browser has
historically been vulnerable to malware. Free programs like
SpywareBlaster, SpywareGuard, HijackThis, BHODemon, and others help
prevent and fix these problems.
Tracking Internet Explorer's vulnerabilities is time-consuming because
criminals continually devise new "IE attacks." If you use Internet
Explorer, be sure you're using the latest version and that Windows'
automatic update feature is enabled so that downloads will quickly fix
any newly-discovered bug. Some feel that IE versions 7 and 8 adequately
address the security issues of earlier versions. I believe that competing
free browsers are safer.


Firefox is popular with those who want a safe browser that competes
feature-for-feature with IE. K-Meleon couples safety with top performance
if you don't need all the bells and whistles of resource-consuming
browsers like IE or Firefox. It runs very fast even on older computers.
1.12 Wireless Risks
Wireless communication allows you to use the Internet from your computer
without connecting it to a modem by a wire or cable. Sometimes called
Wi-Fi, wireless technology is very convenient because you can use your
laptop from anywhere there is an invisible Internet connection or hotspot.
For example, you could use your laptop and the Internet from a cafe,
hotel, restaurant, or library hotspot.
But wireless presents security concerns. Most public hotspots are
unsecured. All your wireless transmissions at the hotspot are sent in
unencrypted "clear text" (except for information on web pages whose
addresses begin with https). Someone with a computer and the right
software could scan and read what passes between your computer and the
Internet.
Don't use public hotspots for Internet communications you need to keep
secure (like your online banking).
Many people set up a wireless home network. You create your own local
hotspot so that you can use your laptop anywhere in the house without a
physical connection. Be sure the wireless equipment you use supports
either the 802.11g or 802.11n standards. These secure wireless
transmissions through WPA (Wi-Fi Protected Access) or WPA2 encryption.
Do not base a wireless home network on equipment that only supports the
older 802.11a or 802.11b standards. These use an encryption technology,
called WEP (Wired Equivalent Privacy), that is not secure. You might
inadvertently create a public hotspot! Freeloaders on your home network
could reduce the Internet performance you're paying for. Activities like
illegal song downloads would likely be traced to you, not to the guilty
party you've unknowingly allowed to use your network.
When you set up your wireless home network, assign your system a unique
name, tell it not to broadcast that name, give it a tough new password,
and turn on encryption. Specify that only certain computers can remotely
use the network through MAC address filtering. Turn off your router and
modem when you're not using them. Expert advice varies on how to best
secure wireless networks, so see the Appendix for more detail.
"2. How Windows Tracks Your Behavior"
Are you aware that Windows tracks your behavior? It records all the web
sites you ever visit, keeps track of all the documents you've worked on
recently, embeds personal information into every document you create, and
keeps Outlook email even if you tell Outlook to delete it. These are just
a few examples of many.
This section first tells how to securely delete your files, folders, and
email so that no one can ever retrieve them. Then it describes the many
ways in which Windows tracks your behavior. In some cases you can turn
off this tracking. In most, your only option is to eliminate the tracking
information after it has been collected.

2.1 How to Securely Delete Data


Let's start with how to permanently delete data from your computer.
How to Securely Delete Files -- When you delete a file in Windows,
Windows only removes the reference it uses to locate that file on disk.
Even after you empty the Recycle Bin, the file still resides on the disk.
It remains on the disk until some random time in the future when Windows
re-uses this "unused" disk space. This means that someone might be able
to read some of your "deleted" files. (You can use free programs like
Undelete+ and Free Undelete to recover deleted files that are still on
your disk.)
To securely delete files, you need to over-write them with zeroes or
random data. Free programs that do this include Eraser, BCWipe, and many
others. After installing Eraser or BCWipe, you highlight a File or
Folder, right-click the mouse, then select Delete with Wiping or Erase
from the drop-down menu. This over-writes or securely deletes the data
so that it can never be read again.
Programs like Eraser and BCWipe also offer an option to over-write "all
unused space" on a disk. This securely deletes any files you previously
deleted using Windows Delete.
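In essence these tools do something like the following Python sketch:
overwrite the file's contents in place, force the write out to disk, then
delete the file. This is only an illustration of the idea -- real wipers
like Eraser make multiple overwrite passes, scrub the file name, and
handle huge files in chunks, none of which this sketch does:

    import os

    def wipe(path):
        """One-pass overwrite with random data, then delete the file."""
        size = os.path.getsize(path)
        with open(path, "r+b") as f:        # open for in-place update
            f.write(os.urandom(size))       # replace contents with noise
            f.flush()
            os.fsync(f.fileno())            # push the write to the disk
        os.remove(path)

    wipe("secret-plans.doc")                # hypothetical file name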
How to Securely Delete Email and Address Books -- Even after you delete
your Outlook or Outlook Express emails and empty the email Waste Basket,
files containing your emails remain to be read by someone later. What if
you want to permanently delete all your emails so no one could ever read
them?
Whether this is possible depends on whether your computer is stand-alone
or part of an organizational network. In an organizational setting,
emails may be stored on central servers in addition to -- or instead of
-- your personal computer. Many organizations store all the emails you
ever send or receive on their servers so that you can never delete them.
Here is a good discussion about whether you can really delete old emails
in organizational settings.
If you have a stand-alone PC, emails are stored on your computer's hard
disk. To securely erase emails residing on your computer, locate the
Outlook or Outlook Express files that contain your emails. Then use a
secure-erase tool like Eraser or BCWipe to permanently destroy them. You
can do the same with your Windows address book.
The files you need to securely erase may be marked as hidden files within
Windows. To work with hidden files, you first need to make them visible.
Checkmark Show Hidden Files and Folders under Start | Settings | Control
Panel | Folder Options | View.
Now, search for file names having these extensions (ending characters) by
using Windows' Search or Find facility, or with a small script like the
sketch after this list:
.pst : Outlook emails, contacts, appointments, tasks, notes, and journal
entries
.dbx or .mbx : Outlook Express emails
.wab : Windows address book file
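If you prefer a script to the Search facility, this minimal Python sketch
walks the whole C: drive and prints any files with those extensions. (It
assumes Python is installed; note that a script sees hidden files even
when Explorer hides them.)

    import os

    # Look for Outlook (.pst), Outlook Express (.dbx, .mbx), and
    # Windows address book (.wab) files anywhere on the C: drive.
    targets = (".pst", ".dbx", ".mbx", ".wab")
    for folder, _subdirs, files in os.walk("C:\\"):
        for name in files:
            if name.lower().endswith(targets):
                print(os.path.join(folder, name))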

Note that Outlook stores much other information in the same file along
with your obsolete emails. You can either erase all that data along with
your emails by securely deleting the file, or, follow this procedure to
securely delete the email while retaining the other information.
For Outlook Express emails and Windows address books, just securely
delete the files with the given extensions and you're done.
How to Securely Delete All Personal Data on Your Computer -- How can you
securely delete all your personal information on an old computer before
giving it away or disposing of it? This is difficult to achieve if you
wish to preserve Windows and its installed programs. It takes a lot of
time and there is no single tool that performs this function. The easiest
solution is to overwrite the entire hard disk. This destroys all your
personal information, wherever Windows hides it. Unfortunately it also
destroys Windows itself and all its installed programs.
Be sure to copy whatever data you want to keep to another computer or
storage medium first!
Several free programs securely overwrite your entire disk, such as
Darik's Boot and Nuke. The only possible way to recover data after
running such programs is expensive physical analysis of the disk media,
which may not be successful. Over-writing a disk is secure deletion for
normal computer use.
2.2 The Registry Contains Personal Data
Windows keeps a central database of information crucial to its operations
called the Registry. Our interest in the Registry is that it stores your
personal information. Examples include the information you enter when you
register Windows and Office products like Word and Excel, lists of web
sites you have visited, login profiles required for using various
applications, and much more.
Upcoming sections discuss your personal information in the Registry and
how you can remove it. For now, let's just introduce a few useful Registry
facts --
* The Registry is a large, complicated database (about which you can find
tons of material on the Web).
* The Registry consists of thousands of individual entries. Each entry
consists of two parts, a key and a value. Each value is the setting for
its associated key.
* The Registry organizes the entries into hierarchies.
* This guide tells how to change or remove your personal information in
the Registry by running free programs, but it doesn't cover how to edit
the Registry yourself -- a technical topic beyond the scope of this paper.
* Making a mistake while editing the Registry could damage Windows, so
you should only edit it if you feel well qualified to do so. Always make
a backup before editing the Registry.
2.3 Windows Tracks All the Web Sites You've Ever Visited
Windows keeps a list of all the web sites you've ever visited. You can
tell Internet Explorer to eliminate this list through the IE selection
Tools | Internet Options | Clear History. But Windows still retains it!
To view the web site history Windows retains, download and run a free
program like Index.dat Spy. Windows records your web surfing history in a
file named index.dat. (There are actually several index.dat files on your
computer . . . I'll describe what the others track later.) The index.dat
files are special -- you can not delete them or Windows will not start.
Since Windows prevents you from changing or deleting these files, you
need to run a free program to erase your web site history.
If you use Internet Explorer and have the default Auto-Complete feature
turned on, your web surfing history is also kept in a second location --
in the Windows Registry. (You'll see web sites you've visited listed
under the Registry key TypedURLs.) If you turn off Auto-Complete,
Internet Explorer no longer saves your web history in the Registry.
To turn off Auto-Complete, go into Internet Explorer, then select Tools |
Internet Options | Content | AutoComplete and un-check the box for
auto-complete of Web addresses. Turning off Auto-Complete does not stop
Windows from tracking your web site history in its index.dat files.
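You can see exactly what Auto-Complete has recorded about you with a few
lines of Python -- an illustration, assuming Python is installed; the
cleaning programs described next clear this same key:

    import winreg

    # Enumerate the TypedURLs key, where Internet Explorer's
    # Auto-Complete records the web addresses you have typed.
    key = winreg.OpenKey(winreg.HKEY_CURRENT_USER,
                         r"Software\Microsoft\Internet Explorer\TypedURLs")
    index = 0
    while True:
        try:
            name, url, _type = winreg.EnumValue(key, index)
            print(name, "=", url)
            index += 1
        except OSError:      # raised when no more values remain
            break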

Several free programs securely erase your web site history from both the
Registry and the index.dat files. Among them are CCleaner, Free Internet
Windows Washer, CleanUp!, and ScrubXP. The shareware programs PurgeIE and
PurgeFox are also popular. I've found CCleaner to be both thorough and
easy to use.
2.4 Windows Leaves Your Personal Information in its Temporary Files
Windows, web browsers, and other programs leave a ton of temporary files
on your computer. Some hold web pages you've recently viewed, so that if
you go back to that web page, you'll be able to view it quickly from disk
instead of downloading it again from the web. Other files are used by
Windows and its applications as temporary work areas. Still others are
used to log program actions or store debugging information. These
temporary files sometimes contain personal information.
For example, web page caches contain copies of web forms into which
you've entered passwords or your credit card number. You may not wish to
disclose the web pages, videos, images, audio files, and downloaded
programs you've viewed lately. The trouble is that these temporary files
are not erased after use. Some remain until the system needs that disk
space for another purpose. Others hang around forever, unless you know to
clean them.
The free programs above that erase your web history also erase these
temporary files and cache areas. Find more free programs here and a
review of the best commercial programs here.
2.5 Your "Most Recently Used" Lists Show What You're Working On
Windows tracks the documents you've recently worked with through its Most
Recently Used or "MRU" lists. MRU lists are kept by Microsoft Office
products like Word and Excel, as well as applications from other vendors.
Windows' Start | Documents list also shows documents you have recently
worked with.
Products keep MRU lists for your convenience. They help you recall and
quickly open documents you're currently working on. These lists also
offer the perfect tracking tool for anyone who wants to find out what
you've been doing on your computer. They provide a ready-made behavioral
profile.
Windows and its applications keep many more MRU items than you might
expect -- thousands of them, if you have never cleared the lists. The free
program MRU Blaster cleans out these lists. Other free programs like
Ad-Aware 2007 Free, CCleaner, and Free Internet Windows Washer erase many of
the lists. Run an MRU cleaner whenever you like. Remember that after you
clean the lists, the "quick picks" of your recent documents will not
appear in Word, Excel, or other products.
2.6 Product Registration Information May Be Hard to Change
When you register Windows, Microsoft Office, or other products, that
information is stored in the Windows Registry. It can be read from there
by any program or person who reads the Registry. Registering a software
product shows your legal ownership of the product and may be required to
receive product support and updates. However, changing or eliminating the
personal registration information later might be difficult.
Some products have an Options or User Information panel in the program
where you can easily change the product registration. But most require
you to either directly edit the Windows Registry or even de-install the
product to change or remove the personal registration data. Consider
carefully what you enter into any product's registration panel when
installing it. You may not be able to change it later. If you know you
won't need vendor support or updates and the product license permits it,
you could enter blank registration information.
2.7 File "Properties" Expose Personal Data
Right-click on any Microsoft Word, Excel, or Powerpoint file, and select
Properties from the pop-up menu. You'll see a tabbed set of panels that
keep information about the file. (For some versions of Microsoft Office,
you need to click the Advanced button to expose all the information.)
You'll see that Microsoft Office saves information about the file such
as:
* Who created it
* The company at which it was created
* The name of the computer on which it was created
* A list of all who have edited it
* When it was created and when it was last saved
* The number of times it has been edited
* Total editing time
* Comments
* A hidden revision log
* Recent links used in the file
* Various statistics about the size of the file, the word count, etc.
The information varies according to the type of file you view (Word,
Excel, or Powerpoint) and the version of Microsoft Office that was used

to create and edit the file. You can't see everything Office saves in the
Properties panel -- some of it remains hidden from your view.
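You can peek at this metadata yourself. In the newer .docx format (Office
2007 on), a document is really a ZIP archive, and the core properties
live in a small XML file inside it; older binary .doc files hold the same
kind of data but need special tools to read. A minimal Python sketch,
with a hypothetical file name:

    import zipfile

    # A .docx file is a ZIP archive. docProps/core.xml holds the author,
    # last editor, revision count, and creation/save dates; more data
    # (company, total editing time) lives in docProps/app.xml.
    with zipfile.ZipFile("report.docx") as doc:
        print(doc.read("docProps/core.xml").decode("utf-8"))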
You can change some of the Properties information by right-clicking on
the file name, then editing it. Or alter it while editing the document by
selecting Edit | Properties.
Other data is collected for you whether you want it or not, and you can
not change it. Should you care? It depends on whether it matters if
anyone sees this information. In most cases it doesn't. But sometimes
this data is private and its exposure matters.
Just ask former U.K. Prime Minister Tony Blair. He took Britain to war
against Iraq in 2003 based on the contents of what he presented as his
government's authoritative Iraq Dossier. But this Word file's properties
exposed the high-powered dossier as the work of an American graduate
student, not a team of British government experts. A political firestorm
ensued.
Microsoft offers manual procedures that minimize Office files' hidden
information. But these are too cumbersome to be useful. Microsoft
eventually developed a free tool to cleanse Office documents created with
Office 2002 SP2 or later. But restrictions limit its value. The free tool
Doc Scrubber is an alternative for cleansing the Properties metadata from
Word files.
Whichever tool you use, you must run it as your last action before you
distribute your finished Office document. Cleansing Microsoft Office
files is inconvenient and it's difficult to remember to do it. Those who
require "clean" office documents are advised to use the free office suite
that competes with Office, called OpenOffice.org. The OpenOffice suite
does not require personally-identifying Registration information and it
gives you control over the Properties information. It reads and writes
Microsoft Office file formats. (I edited this document interchangeably
with OpenOffice and several different versions of Microsoft Word, then
created the final PDF file using OpenOffice.)
2.8 Microsoft Embeds Secret Identifiers in Your Documents
Windows, Windows Media Player, Internet Explorer, and other Microsoft
applications contain a number that identifies the software called the
Globally Unique Identifier or GUID. Microsoft Office embeds the GUID in
every document you create. The GUID could be used to trace the documents
you create back to your computer and copy of Microsoft Office. It could
even theoretically be used to identify you when you surf the web. The
free program ID-Blaster Plus can randomize (change) the GUIDs embedded in
Windows, Internet Explorer, and Windows Media Player. The free program
Doc Scrubber erases GUIDs contained in a single Word document or all the
Word documents in a Folder.
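For reference, a GUID is just a 128-bit value, usually written as 32
hexadecimal digits. "Randomizing" a GUID means replacing the stored value
with a freshly generated one, as this Python sketch illustrates (it only
shows the idea and does not modify anything on your system):

    import uuid

    # Generate a fresh random GUID, e.g. 1b4e28ba-2fa1-11d2-883f-0016d3cca427.
    # Swapping a stored GUID for a value like this breaks the link between
    # a document and the computer that produced it.
    print(uuid.uuid4())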
If you're concerned about secret identifiers embedded in your Office
documents, use the OpenOffice suite instead. This compatible alternative
to Microsoft Office doesn't embed GUIDs in your documents nor does it
require personal registration and Properties information.
2.9 Chart of Tracking Technologies
I've discussed the major areas in which Windows and other Microsoft
products track your computer use. In most cases you can not turn off this
tracking. But the free programs I've described will delete the tracking

information. The chart below summarizes where and how Windows and other
Microsoft products track your behavior.
Many items apply only to specific software versions. A few functions
report your behavior back to Microsoft. Examples include Windows Media
Player sending your personal audio and video play lists to Microsoft, and
the company's attempts to use the Internet to remotely cripple Windows
installations it considers illegal.
--- Where Windows Tracks Your Behavior ---
Application Logs: Records of how often you run various programs
Clipboard Data: Data you've copied/pasted is in this memory area
Common Dialog History: Lists Windows "dialogs" with which you've
interacted
Empty Directory Entries: File pointers unused by Windows but still usable
by those with special software
Error Reporting Services: Reports Windows or Microsoft Office errors back
to Microsoft
File Slack Space: "Unused" parts of file clusters on disk that may
contain old data
File Properties: Office document Properties contain your personal editing
information and more
Find / Search History: Lists all your Find or Search queries (used by
Windows auto-complete)
GUIDs: Embedded secret codes that link Office documents back to your
computer
Hotfix Uninstallers: Temporary files left for undoing Windows updates
IIS Log files: Logged actions for Microsoft's IIS web server
Index.dat Files: Secret files that list all web sites you visit and other
data
Infection reporting: Microsoft's Malicious Software Removal Tool reports
infections to Microsoft
Last user login: Tracks the last user login to Windows
Microsoft Office History: MRU lists for Office products like Word, Excel,
Powerpoint, Access, and Photo Editor
Open / Save History: List of documents or files for these actions
Recently Opened Doc. List: MRU list accessible off Start | Documents
Recycle Bin: Deleted files remain accessible here
Registration of MS Office: Registration information is kept in the
product Options, Splash panels, and Registry

Registration for Windows: Registration information is kept in the Registry


Registry Backups: Registry backups may contain personal data you may have
edited out of the Registry
Registry Fragment Files: Deleted or obsolete data in the Registry that
remains there
Registry Streams: History of Explorer settings
Remote Help: Allows remote access to your computer for Help
Run History: Lists all programs you have run through Windows Run box
Scan Disk Files: Files output from SCANDISK (may contain valid data in
*.chk files)
Start-Menu Click History: Dates and Times of all mouse clicks you make
for the Start Menu
Start-Menu Order History: Records historical ordering of Start Menu items
Swap File: Parts of memory written to disk
Temporary Files: Temporary files used during program installation or
execution
Time synchronization service: Synchronizes your computer clock by remote
Internet verification
User Assist History: Most used programs on the Start Menu
Windows Authentication: Identifies Windows license validity to Microsoft
Windows log files: Trace results of Windows actions and installs
Windows Media Player content: Automatically downloads content-licenses
through the Internet
Windows Media Player History: Lists the Most Recently Used (MRU) files
for Windows Media Player
Windows Media Player metadata: Automatically retrieves metadata for audio
CDs through the Internet
Windows Media Player Playlist: Your Windows Media Player play lists
Windows Media Player statistics: Sends your Windows Media Player usage
statistics to Microsoft
--- Where Internet Explorer Tracks Your Behavior ---
Auto-complete form history: Everything you type into web site forms (inc.
passwords & personal information)
Auto-complete for passwords: Convenient but less secure
Cookies: Data web sites store on your computer (sometimes used to track
your surfing habits)

Downloaded files: Files you download while using the Internet


Favorites: Web sites you list as "favorites" in your browser
Plug-ins: Information saved or cached by third-party software that "plugs
into" Internet Explorer
Searches: Searches are retained by both IE and search engines
Temporary files (cache): Web pages the browser stores on disk
Web site error logs: Errors encountered during web site retrieval
Web sites visited: All the web sites you have ever visited are stored in
the Registry and index.dat files
"3. How to Protect Your Privacy When Using the Internet"
Privacy is the ability to control when, how, and to whom your personal
information is given. Privacy is power. Losing your privacy means losing
personal power. This section offers tips and technical advice to help you
protect your privacy when using the Internet. It applies whether you use
Windows or some other operating system, like Linux or Apple's Mac OS.
Web privacy is a fast-moving area in which technologies and laws are in
flux. This guide can no more guarantee you absolute privacy than it can
guarantee you a completely secure Windows. But if you follow our tips
you'll minimize your privacy exposure.
3.1 Limit the Personal Information You Give Out
Before entering personal information into a web site form, a social
network, or a forum, read the site's Privacy Policy and Terms of Use. If
they're legalistic and hard-to-read, chances are they have more to do
with harvesting your personal data than protecting it. Many agreements
are written so that they can be changed at any time. This makes any
assurance of protection for your personal data worthless because the web
site could simply change the agreement after you've provided the
information. Some agreements even include fine print by which you agree
to the installation of malware on your computer!
Few privacy policies guarantee that information will be destroyed as it
ages. Once given out, information tends to live forever. Few privacy
policies give you any legal rights if your information is lost or stolen.
In 2007 alone, over 162 million personal records were reported lost or
stolen in the United States. (Yet it remains legal for companies to buy
and sell your social security number and personal data.)
Once you post personal information on the web, you lose control over how
that information is used. Changes to the "context" in which that data is
used can harm you. An example is the information students enter into
social web sites like MySpace or Facebook for their friends' amusement,
only to find it resurfacing later to harm their employment opportunities
or their careers. Both sites offer privacy controls that easily allow
individuals to avoid such consequences -- but most users don't apply them.
The selling of personal data is a multibillion-dollar, largely-unregulated
business in the United States. It's an entire industry called
information brokering.

People who give out their personal data expose themselves to manipulation
or worse. Even the U.S. government is researching the harvesting of
personal data from social networking sites for public surveillance. And
why not? People voluntarily post the information. Fans of social
networking will consider these cautions anachronistic. Please read how
people expose themselves to manipulation or harm by posting personal
data, found in authoritative books such as The Digital Person, The
Soft Cage, or The Future of Reputation: Gossip, Rumor, and Privacy on the
Internet.
We need government regulation to enforce minimal rights for social
network users, much the way we have consumer-protection legislation for
credit cards. Meanwhile, protect yourself by educating yourself. Tiny
bits of information can be collected and compiled by web computers into
comprehensive profiles. If an organization can collect enough small bits
of information -- for example, just the names of all the web sites you
visit -- they can eventually develop a complete picture of who you are,
what you do, how you live, and what you believe.
Privacy is power. You give away your personal power when you give out
personal information. You assume risk you can not measure at the time you
assume it.
3.2 Don't Let Web Sites Track You
Cookies are small files that web sites store on your computer's disk.
They allow web sites to store information about your interaction with
them. For example, they might store the data required for you to purchase
items across the several web pages this involves. However, cookies --
originally called tracking cookies -- can also be used to track your
movement across the web. Depending on the software using them, this data
could be used to create a detailed record of your behavior as you surf.
The resulting profile might be used for innocuous purposes, such as
targeted marketing, or for malicious reasons, like spying.
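To see cookie-setting in action, this minimal Python sketch fetches a
page and prints each cookie the site asks your browser to store.
(www.google.com is just an example; most large sites set at least one
cookie.)

    import urllib.request

    # Fetch a page and show every cookie the server asks us to keep.
    response = urllib.request.urlopen("http://www.google.com/")
    for header, value in response.getheaders():
        if header.lower() == "set-cookie":
            print("Site wants to store:", value)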
Most browsers accept cookies by default. To retain your privacy, set the
browser not to accept any cookies other than exceptions you specify. Then
only web sites you approve can set cookies on your computer. A few web
sites won't let you interact with them unless you accept their cookies --
but most will. You can also set most browsers to automatically delete all
cookies when you exit. This allows web sites to set the cookies required
for transactions like purchasing through the web but prevents tracking
you across sessions.
To manage cookie settings in your browser, access these panels:
To turn cookies on or off --
Internet Explorer: Tools | Internet Options | Privacy | Advanced
Firefox: (version 2 on) Tools | Options | Privacy | Cookies
Opera: Tools | Quick Preferences | Enable Cookies
K-Meleon: Tools | Privacy | Block Cookies
SeaMonkey: Edit | Preferences | Privacy & Security | Cookies
To allow specific web sites to set cookies --

Internet Explorer: Tools | Internet Options | Privacy | Edit


Firefox: Tools | Options | Privacy | Cookies | Exceptions
Opera: Tools | Preferences | Advanced | Cookies | Manage cookies
K-Meleon: Edit | Preferences | Privacy
SeaMonkey: Tools | Cookie Manager
To "clear" (erase) all cookies currently on your computer for the
specified browser --
Internet Explorer: Tools | Internet Options | General | Delete Cookies
Firefox: Tools | Clear Private Data
Opera: Tools | Preferences | Advanced | Cookies
K-Meleon: Tools | Privacy | Clear Cookies
SeaMonkey: Tools | Cookie Manager | Manage Stored Cookies | Remove All
Cookies
To automatically clear all cookies whenever you exit the browser --
Internet Explorer: Not available
Firefox: Tools | Options | Privacy | Cookies | Settings. . .
Opera: Tools | Preferences | Advanced | Cookies
K-Meleon: Tools | Privacy | Settings. . .
SeaMonkey: Not available
CookieCentral has more information about cookies and how to manage them.
Other tracking mechanisms include web bugs, Flash cookies, and third-party
local shared objects. These are less common than cookies and rather
technical, so follow the links and see the Appendix if they concern you.
3.3 Email Privacy
Sending an email over the Internet is like sending a postcard through the
mail. Anyone with the ability to intercept it can read it. There is
evidence that the United States government either scans or compiles data
about every email sent in the country.
You can keep the contents of your personal communications private by
encrypting your email. This web page provides information and free
downloads. It also lists programs that will encrypt your online
interactive Chat. This article illustrates how to set up secure email
step by step. The trouble with encrypted email is that both the sender
and the recipient must participate. It's impractical to send encrypted
email to people you don't know, or to anyone using a different encryption
system. The major email programs could easily support standardized,
universally-compatible encryption in their clients -- but don't.
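As one concrete example, here is how encrypting a message looks with the
free GnuPG tool, driven from Python. The --encrypt and --recipient
options are standard GnuPG; the recipient address is hypothetical, and
the recipient must already have given you their public key:

    import subprocess

    # Encrypt message.txt so that only the holder of the recipient's
    # private key can read it. GnuPG writes message.txt.gpg alongside.
    subprocess.run(["gpg", "--encrypt", "--recipient",
                    "friend@example.com", "message.txt"], check=True)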

Remember that emails are often the basis for phishing scams -- attempts
to get you to reveal your personal information for nefarious purposes.
Don't respond to email that may not be from a legitimate source. Don't
even open it. Examples include claims you've won the lottery, pleas for
help in handling large sums of money, sales pitches for outrageous deals,
and the like.
Email may also be spoofed -- masquerading as from a legitimate source
when it is not. Examples are emails that ask you to click on a link to
update your credit card account or those that ask for account information
or passwords.
Legitimate businesses are well aware of criminal misuse of email and
don't conduct serious business transactions through mass emailings!
Many people use two email addresses to avoid spam and retain their
privacy. They use one account as a "junk" email address for filling out
web site forms, joining forums, and the like. This email address doesn't
disclose the person's identity and it collects the spam. They reserve a
second email account for personal communications. They never give this
one out except to personal friends, so it remains spam-free.
3.4 Web Surfing Privacy
If you tested your computer as suggested earlier using ShieldsUp!, you
saw that it gives out information to every web site you visit. This data
includes your Internet protocol address, operating system, browser
version, and more.
Your Internet protocol address or IP address is a unique identifier
assigned to your computer when you access the Internet. Web sites can use
it to track you. Your Internet Service Provider or ISP assigns your
computer its IP address using one of several different techniques. How
traceable you are on the web varies according to the technique your ISP
employs along with several other factors, such as whether you allow web
sites to set cookies and whether your computer is compromised by malware.
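You can check the IP address the outside world associates with you in one
line of Python. (api.ipify.org is one of several free echo services -- an
assumption; any similar service, or ShieldsUp! itself, shows the same
thing.)

    import urllib.request

    # Ask an echo service what address our traffic appears to come from.
    # Behind an anonymizer or TOR, this shows the proxy's address instead.
    print(urllib.request.urlopen("https://api.ipify.org").read().decode())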
One way to mask who you are when web surfing is to change your IP
address. Anonymizing services hide your IP address and location from the
web sites you visit by stripping it out as your data passes through them
on the way to your destination web site. Anonymizers help hide your
identity and prevent web sites from tracking you but they are not a
perfect privacy solution (because the anonymizer itself could be
compromised). Anonymizer.com is a very popular free anonymizing service.
Find other free services here and here.
A more robust approach to anonymity is offered by free software from JAP
and TOR. Both route your data through intermediary servers called proxies
so that the destination web site can't identify you. Your data is
encrypted in transit, so it can not be intercepted or read by anyone who
scans passing data. Services like JAP and TOR present two downsides.
First, your data is sent through intermediary computers on the way to its
destination, so response time slows. Whether you still find it acceptable
depends on many factors; the best way to find out is simply to try the
software for yourself.
Second, these systems still leave you exposed to privacy violations by
your Internet Service Provider. Your ISP is your computer's entry point
into the Internet, so your ISP can track all your actions online.

For this reason, when the Bush administration decided to monitor American
citizens through the Internet, they proposed legislation that would force
all ISPs to keep two years of data about all their customers' activities.
The government's current web surveillance program made it necessary for
major ISPs like AT&T/Yahoo to change their privacy policy in June 2006 to
say that AT&T -- not its customers -- owns all the customers' Internet
records and can use them however it likes.
Repeated congressional proposals to immunize ISPs from all legal
challenges only make sense if the ISPs colluded with the government in
illegally monitoring Internet activities.
3.5 Search Privacy
Web sites that help you search the web are called search engines. Popular
search engines like Google, Yahoo!, and MSN Search retain records of all
your web searches.
Individually, the keywords you type into search engines show little. But
aggregated, they may expose your identity. They may also expose your
innermost thoughts -- or be misinterpreted as doing so.
Here's an example. Say the search engine captures you entering this list
of searches --
* kill wife
* how to kill wife
* killing with untraceable substance
* kill with unknown substance
Someone might interpret these searches as indicating that you should be
reported to the authorities because you're planning a murder. But what if
you were simply doing research for that murder mystery you always wanted
to write? You can see the need for search privacy. Do you have it? The
federal government has demanded search records from major search engines
like Google, AOL, Yahoo, and MSN.
While the government claims these requests are to combat sexual
predators, most analysts believe they are for public surveillance and
data mining. America Online (AOL) accidentally posted online 20 million
personal queries from over 650,000 users. The data was immediately
gobbled up and saved in other web servers. Although AOL apologized and
quickly took down their posting, this data will probably remain available
forever somewhere. Some people can be identified by their "anonymous"
searches and have been harmed as a result of this violation of their
privacy.
The AOL incident is a wake-up call to those who don't understand how
small pieces of information about people can be collected by Internet
servers, then compiled into revealing dossiers about our individual
behaviors. This principle doesn't just apply to search engines. It
extends to the web sites you visit, the books you buy online, the
comments you enter into forums, the political web sites you read, and all
your other web activities. The AOL debacle demonstrates that web
activities many assume to be anonymous can sometimes be traceable to
specific individuals.

The Electronic Frontier Foundation's excellent white paper Six Tips to
Protect Your Search Privacy offers these recommendations to ensure your
search privacy:
* Don't include words in your searches that identify you personally (such
as your name or social security number)
* Don't use your ISP's search engine (since they know who you are)
* Don't "log in" to search engine web sites
* Don't let the search engine set cookies
* Don't use the same IP address all the time
* Use anonymizers like JAP or TOR to thwart traceability
If you use Windows, Microsoft Office, and Internet Explorer, you need to
be aware of how these products could compromise your security and
privacy. You can minimize these issues by following this guide's
recommendations. Anyone can achieve sufficient security and privacy when
using Windows. But you must follow safe practices and download and
install a number of programs.
Your privacy is not a design goal of Windows. It is up to you to make
Windows secure and private.
Appendix: Further Information and Links
PDF and Appendix: For a printable and archivable version of this article,
please download the PDF version. It is released under the OPL (Open
Publication License) and may be freely reproduced and distributed but not
altered prior to redistribution. This product is distributed at no cost
under the terms of the Open Publication License with License Option A --
"Distribution of modified versions of this document is prohibited without
the explicit permission of the copyright holder."
The PDF document also has a detailed Appendix with links to much more
online information.
Feedback: Please send recommendations for improving this guide to the
author at email address "ContactFCI" at the domain name "sbcglobal.net".
Disclaimer: This paper is provided without warranty. Fosdick Consulting
Inc. and the author accept no responsibility for any use of the data
contained herein.
Trademarks: All trademarks included in this document are the property of
their respective owners.
About the Author: Howard Fosdick is an independent consultant who works
hands-on with databases and operating systems. He's written a couple
hundred articles and several books. He's presented at conferences,
founded software users groups, and invented concepts like hype curves and
open consulting.
Acknowledgments: Thank you to the reviewers without whose expert feedback
this guide could not have been developed: Bill Backs, Huw Collingbourne,
Rich Kurtz, Scott Nemec, Priscilla Polk, Janet Rizner, Kate Robinson, and
others who prefer anonymity. Thank you also to the Association of PC
Users (APCU), Better Software Association, BitWise Magazine, IBM Database
Magazine, and UniForum.
