======================================================
"...All the world will be your enemy, Prince with a Thousand Enemies, and
when they catch you, they will kill you; but first... they must CATCH
you."
R. Adams, Watership Down, 1972
By An. Onee. Moose
Part 1: A Mighty Fortress Is Your Computer
Preface
-------
Who am I? That's not important, except that I am quite skilled in IT
security. So you can assume that I more or less know what I'm talking
about. Keep in mind that I am by far not the only one with this kind of
experience... see the excellent Fosdick article, which is appended below
my own.
It is a reasonable assumption, incidentally, that certain minor
characteristics of the document below, have deliberately been altered to
make it difficult for the authorities to trace the origin of the document
back to myself. So if you're someone from the Home Office, CIA, NSA, FSB
or MI5, trying to find out who the nasty bloke is that's helping the
"perps" hide their secrets, well, sod off on my behalf.
You won't find me, at least not by deconstructing this document. Go do
some real police work, as opposed to enforcing a police state.
Why Am I Writing This?
----------------------
Many reasons. You will see and hopefully understand some of them, in the
material that is listed below. But above all, it's because I want to
restore the balance between the state (well-funded, powerful, ruthless,
all-encompassing) and the individual (poor, 'playing by the rules',
isolated), at least insofar as this concerns the area of safeguarding the
privacy of individuals.
Oppressive governments, and also other sinister organizations such as the
American media industry, have finally become wise to the fact that
gaining unauthorized access to the digital information kept on storage
media like hard drives, flash memory devices and so on, is a perfect way
for them to build the "evidence" needed to harass and punish individuals
for "crimes" that should really not be "crimes", in the first place.
There are untold examples of this, and what is particularly disturbing
about it is, whereas in the past, it was really quite difficult for an
intruder to "build a case against you" (typically it would involve actual
physical access to storage cabinets and so on), now, you can be arrested,
tried, convicted and possibly even jailed or executed, all because of
someone disapproving of the content stored on your hard drive. The
possibilities for abuse of this power, if held unilaterally by
governments, intelligence agencies and groups like the RIAA and MPAA, are
practically unlimited.
I mean to even out that balance and give you a few weapons to use against
your local constabulary, as they come knocking at your door to arrest you
for "possessing subversive computer files". In most places of the world,
they can't convict you if they can't find anything. I want to stop them
from even KNOWING that you have anything "interesting" on your PC.
What is Computer Forensics?
---------------------------
First of all, we have to define "forensics" to define what the opposite
of it is.
"Forensics" (basically) is, "the art and science of finding things that
aren't obvious, particularly, the art and science of finding things that
were deliberately hidden". The term "forensics" originated in the law
enforcement world in which forensics experts would, for example, check a
strand of hair found on a murder scene with the DNA of a suspected
murderer, would check the characteristics of a bullet with a gun that the
murder suspect owned, and so on.
Fair enough -- if we are to have even the semblance of law and order, or
a marginally safe society (to say nothing of convicting only people who
actually committed a crime, as opposed to anyone that the police happen
to pick up off the street), ordinary forensics has a legitimate place in
doing this.
Now, COMPUTER forensics is often, deliberately confused and conflated
with conventional "crime scene" forensics, but in fact, although the two
disciplines do share a few superficial similarities (namely, "looking for
hidden things"), they are in fact substantially different, so much so in
fact that I am going to argue that on balance, computer forensics are a
BAD thing, not a GOOD thing. For example:
* "Normal" (conventional) forensics, are almost exclusively used in
circumstances in which there is no reasonable doubt that a crime --
usually a serious one like murder, rape etc. -- has already occurred.
(The presence of that bullet-holed body lying on the floor in a pool of
blood, is usually a pretty good indicator that a crime has happened here.)
With computer forensics, as often as not, the purpose is to determine if
a "crime" (see below for why this is a problematic concept) has even
occurred in the first place. In other words -- the fact that a computer
is sitting there, connected to the Internet, by itself proves nothing. It
is how it is _used_, that defines the "crime" (if any) that the computer
has been used for. This is a deeply troubling concept if you think
through its implications.
* Here we get to the real issue: virtually ALL "computer crimes"
deserving of forensics investigation, are crimes not of social consensus,
but of subjective definition and discretion on the part of "the
authorities" -- usually, of an average police officer, sometimes by a
secret police thug.
What do I mean by this?
Well, it's actually a pretty simple idea. When, for example, we see a
body shot full of bullet holes, lying on the ground in a pool of blood,
there is a universally consistent consensus among all but a tiny fringe
element that "shooting and killing other people is a bad crime for which
the perpetrators should be punished". We don't need debates about "what
counts as a bullet hole" or "how much should the victim have been
bleeding for it to count as a crime". Everyone intuitively KNOWS that
murder has to be a crime for society to keep functioning.
Nothing could be less true of the vast majority of "computer crimes",
such as the ones that I will be trying to show you how to cover up and
conceal, later in this document. Nothing even remotely similar to the
"body lying in blood" situation applies to the collective consensus on
the criminality, if any, of these types of computer activities. In a vast
range of activities for which PC users might want to conceal
"incriminating" data from the authorities, if you asked ten people on the
street, "is having this kind of data on your computer, a crime for which
someone should be punished and go to jail", you couldn't get two or three
people to understand and agree, let alone ten out of ten.
There are untold infamous examples of this, but let me just quote one.
From time to time, we see media stories about people being hauled into
court and in some cases severely punished, with their reputations always
ruined by sensationalistic tabloid media headlines, such as "FATHER
CHARGED WITH CHILD PORNOGRAPHY", "INTERNET KIDDY PORN RING BUSTED, POLICE
SAY".
Sound good to you? I mean, surely you're for protecting kids from
perverts... aren't you?
But, you see, in fact...
In the first case, a father took a few pictures of his own 3 year old
daughter splashing around, happily nude, in the family's backyard wading
pool. He took his digital camera in to a photo shop to have some prints
made, one of the technicians at the shop decided that this was "child
pornography", called the police, and the next thing that the poor man
knew, he was dragged into court with his name and reputation totally
ruined by having his mug shot published in the local newspaper, along
with wildly misleading charges of "distributing kiddy porn" that were, of
course, all quietly dropped later when the police and prosecutors had to
provide some real evidence of criminal intent to the magistrate. Too
late, I'm afraid; the damage is done and there's no way to undo it.
In the second case, a bunch of teenagers, adolescent hormones raging,
started sending nude pictures of each other (girlfriends and boyfriends)
back and forth, not only directly over their camera-equipped cell phones
but also over a social networking site (the pictures involved were never
made publicly available, they were only stored on the "perpetrators'" own
private storage spaces).
Now, the problem here was, some of the young people involved were under
the legal minimum age for sex, in their part of the world. So, the
crusading local prosecutors and police charged ALL of them with
"distributing child pornography"... that is, the police wanted to
humiliate and jail these teenagers for distributing "indecent"
photographs OF THEMSELVES. On top of this, the youthful "perverts" in
this case have now all been put on American "sex offender registries", a
Mark of Cain that will destroy their ability to get a job, a loan, or
anything, for the rest of their lives. (Like the notorious U.S. "No-Fly
List", once you get put on one of these sex offender blacklists, there's
no way to get off of it. You're screwed, forever.)
You mean you didn't know that in some American jurisdictions, if you are
under age, and you take a nude picture of yourself, and you post it only
in your own private section of a social networking site (or you have it
only on your own cell phone), that means you're subject to the same
punishment as a pervert who rapes 5 year old children in front of
streaming video? You mean you thought that the wise lawmakers of this
U.S. state, might have been a bit more discriminating in drafting the law
that currently sweeps both types of "kiddy porn distributors", in the
same dragnet?
Silly you.
The larger point in all of this is, when we start to get into the realm
of "crimes of definition", we're talking about "crimes" that are only
crimes because conservative lawmakers, the police, or some noisy special
interest group with a narrow agenda -- usually endorsed by the general
public only because of the latter's vast ignorance of the details
involved -- wants the activities in question to be criminalized.
The classic example of this is homosexual literature, which was for years
in Western countries (still is, in much of the Third World) routinely
labeled "filthy unnatural pornography" and for which you could go to jail
if you were caught possessing it. But there are many other examples, and
the theme that you see consistently running through them is that the
authorities have a tendency to make these rules up as they go, simply
because they need a convenient excuse to crack down on sexual, political,
social, religious, cultural or other minorities that either the police or
the conservative authorities just want to harass and humiliate.
In other words, the police and the authorities define as a "crime" some
activity for which you could never get a real social consensus, and then
they go about what policemen love doing, that is, getting a power rush by
harassing, beating and humiliating people who just want to be left alone.
One of the prime tools for doing this, is computer forensics, because it
allows the police to rummage through their victims' private digital
histories, hoping to find some sliver of "evidence" that they can use as
"proof of having committed a crime". The police may not know what they're
looking for, when they start out, but they'll take anything that shows
up, as long as it helps them get a scalp and a conviction.
All of this is far different from the "body lying in the pool of blood"
scenario mentioned above. Society clearly IS threatened, by people being
murdered; it clearly is NOT, by fathers taking innocent pictures of their
children in a swimming pool or by teenagers showing off their bodies to
other teenagers. Yet the police would far prefer to prosecute the latter
type of crime over the former, simply because going after ordinary people
who have no idea or intent of doing something really anti-social, is much
easier and satisfying to the authoritarian nature of the police, than is
the difficult, highly work-intensive job of going after an experienced,
hardened, real criminal. The crying, confused, bewildered teenagers that
the police haul into court won't shoot back at the cops. The guy who
murdered the other gangster, will. The police know that, and they pick
the easy job.
MY job, is to make that "easy" job of harassing those "guilty" of "crimes
of definition", as hard as possible for the police. And to do that, I
intend to give you the knowledge to defeat their forensics experts.
But Aren't You Just Helping "The Bad Guys" Evade Righteous Justice?
-------------------------------------------------------------------
I can't tell you how much contempt I have for this stupid argument, which
comes up all the time whenever ordinary (read: "ignorant") people ask me
about why I help people on the Internet -- i.e., people who I've never
met and therefore have no idea if they're good or evil -- to hide data.
The standard bogeymen, who are inevitably trotted out to justify any and
all government spying on private communications (and, by inference, any
and all restrictions on private use technology designed to thwart that
spying), are:
* Child pornographers / paedophiles / sexual minorities of various types;
* International terrorists (hello, Usama!);
* Drug dealers;
* Cyber-criminals of various types (for example East European fraudsters);
* Crooked businessmen (hooray for Enron); and
* Anybody that the local authorities think the population hates or
distrusts.
The most famous way of putting this fatuous belief is, "If you don't have
anything to hide, then you shouldn't be afraid to let the police see
everything that you're doing."
There are so many good rebuttals of this line of "reasoning" that I won't
list them here, except to say that I simply don't believe the assertion
that "the state" (meaning, "the police, who enforce the demands of 'the
state'") has ANY RIGHT WHATSOEVER to its citizens' private data. None,
zilch, null set, call it what you want -- the evidence of history is
painfully clear here, that governments will inevitably expand the
envelope of what they consider a "legitimate" reason to spy on
individuals, until (recent example), the jaunty old Home Office RIPA Act
(which was passed "to give Scotland Yard the tools they need to break the
encryption being used by Islamic terrorists") has been used by local
councils to spy on married couples "suspected of registering their
children in the wrong district school".
The point here is that governments, and the police -- even those of
so-called "liberal democracies" such as the U.K. and the U.S. -- will
INEVITABLY abuse any power they get, to spy on their citizens. THEY CAN'T
HELP IT, THE TEMPTATION TO ABUSE THEIR POWER IS IMPOSSIBLE FOR THEM TO
RESIST. SPYING ON, ABUSING AND OPPRESSING CITIZENS IS SOMETHING THAT
COMES NATURALLY TO THE POLICE. IT'S WHAT THEY DO. IT'S WHAT THEY WANT TO
DO, AND WHAT THEY LIKE TO DO.
You can no more expect a policeman to "refrain from unjustified
surveillance of legitimate dissent" than you can expect a wolf or tiger
to pass up that juicy fresh steak that just got dropped inside their
cage. Sinking its teeth into that blood and flesh is as innate to the
carnivore, as is the urge to spy, to listen in on, to oppress and punish,
to a cop or intelligence agent. That's what they do. That's what they're
all about. No amount of nice talk or promises "not to do it again", is
going to work. They are what they are, and you're kidding yourself if you
let yourself get convinced that they're ever going to change.
This being the case, you need a weapon to fend off the police and their
willingness to ruin your life for activities that you have every right to
undertake.
direct physical access to either your PC or where you use it, or both.
Incidentally, if at any time your PC HAS come into the physical
possession of a skilled adversary who would have had a few minutes to
hours of undisturbed, private time to compromise your computer, unless
you are very good at being able to recognize the signs of a technically
advanced compromise -- just an extra little chip soldered on to the
motherboard (how would you know if it's out of place or not?), or a few
bytes of machine language code added to your boot sector, for example --
I'd strongly suggest that you immediately sanitize (wipe out and erase)
all hard drives and other storage media on the PC as well as all its
peripherals such as keyboards, etc., sell it to the first sucker you find
and then use the proceeds to buy a "clean" new PC. Computer hardware
nowadays is cheap... far cheaper than a 10 year stint in your local
prison for being a "terrorist organizer" or "on-line pervert".
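The "sanitize" step above can be put in concrete terms. What follows is
only an illustrative Python sketch of my own (the function name and pass
count are arbitrary choices): it overwrites a file's bytes with random
data before unlinking it. Be warned that on SSDs, flash media and
journaling filesystems, in-place overwriting does NOT guarantee the old
blocks are gone; for real sanitization of a whole drive, use a
purpose-built full-disk wiping tool, not a per-file script like this.

```python
import os

def wipe_file(path: str, passes: int = 3) -> None:
    """Overwrite a file with random bytes, then unlink it.

    Caveat: on SSDs, flash and journaling filesystems, copies of the
    old blocks may survive elsewhere -- this only illustrates the idea.
    """
    length = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(length))  # replace contents with noise
            f.flush()
            os.fsync(f.fileno())         # force the overwrite to the device
    os.remove(path)
```

The repeated-pass-plus-fsync structure is the point: each overwrite is
pushed to the device before the next, rather than sitting in a buffer
that never reaches the platter.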
(Note: One of the most important principles of this is, "know your
enemy". That is, you must become at least casually familiar with the
principles of computer forensics investigations, because these techniques
are what is going to be used against you, when the police come to call.
An example of police training materials is available at:
http://www.ncjrs.gov/pdffiles1/nij/219941.pdf, but be aware, this manual only
scratches the surface of what a sophisticated attacker equipped with a
powerful tool like EnCase, can accomplish. So devote some time learning
about how computer forensics works. It's time and effort well spent.)
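To make "know your enemy" concrete, here is a toy Python illustration
(my own construction, not taken from any forensics tool) of one of the
most basic techniques in that manual's world: "file carving". Forensic
software scans the raw bytes of a disk image for known file signatures,
which is how "deleted" files get recovered even after their directory
entries are long gone.

```python
# Toy illustration of "file carving": scan raw disk bytes for a known
# file signature (magic number). Deleting a file normally removes only
# its directory entry; the bytes themselves stay on the disk.
JPEG_MAGIC = b"\xff\xd8\xff"  # the first bytes of every JPEG file

def carve_offsets(raw_image: bytes, magic: bytes = JPEG_MAGIC) -> list:
    """Return every offset at which the file signature appears."""
    offsets = []
    pos = raw_image.find(magic)
    while pos != -1:
        offsets.append(pos)
        pos = raw_image.find(magic, pos + 1)
    return offsets

# A "deleted" JPEG is still sitting in the middle of this raw image:
disk = b"\x00" * 512 + JPEG_MAGIC + b"...jpeg data..." + b"\x00" * 512
print(carve_offsets(disk))  # [512]
```

Real tools like EnCase do vastly more than this (filesystem metadata,
slack space, timelines), but the lesson is the same: "delete" does not
mean "gone".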
You have to base your data security protection measures on the assumption
that your PC WILL be attacked in the above manner; just protecting it
against some snoop coming in across the Internet is by no means adequate.
Defending In Depth
------------------
A classic concept of computer security -- really, this is simply an
adaptation of classic military strategy -- is what's called "defence in
depth". If you want to have even a chance of staying secure in the face
of an attack by an intelligent, well-equipped adversary, you will have to
understand this concept and apply it diligently.
Although its actual application can be quite complex, the basic idea of
defence in depth is quite simple: every defensive measure is implemented
on the assumption that it could fail (that is, that it could be somehow
overcome by an adversary). Thus, when designing the _entire_ defensive
system, we have to construct it in such a way that a failure of one
defensive measure is "mitigated" -- that is, reduced, with its negative
impact lessened as much as possible. (Note: The opposite of this concept
is called "all-or-nothing"; it is built upon the very questionable
assumption that a "barrier" or "wall" defence can be erected, that can
never be beaten or breached. Of course, the problem with "all-or-nothing"
is that it has to work one hundred per cent of the time, all the time.
Even ONE failure with this model is disastrous.)
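The advantage of layering can even be put in rough numbers. A minimal
sketch, under the simplifying (and, against a skilled adversary,
optimistic) assumption that each layer fails independently with the
same probability:

```python
# Rough arithmetic behind defence in depth: if each layer is defeated
# with probability 0.10, and layers fail independently (a simplifying
# assumption -- real attacks on layers are often correlated), then the
# chance that ALL layers are beaten shrinks geometrically per layer.
p_layer_fails = 0.10

for layers in (1, 2, 3, 4):
    p_total_failure = p_layer_fails ** layers
    print(f"{layers} layer(s): {p_total_failure:.4%} chance of total failure")
```

One layer is beaten 10% of the time; four independent layers are all
beaten only 0.01% of the time. The "all-or-nothing" model is the
single-layer row, with everything riding on it.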
The idea of defence in depth is used thousands of times, every day, for
almost every kind of complex system or machinery in the real world. For
example, jet airplanes are built so they won't come crashing out of the
sky unless something (like a bomb) goes dramatically wrong; but, they
are also built so that if they DO come crashing down, as much as is
possible they won't instantly explode (this is achieved by fire
suppression systems, "self-sealing" fuel tanks and so on). When your
kids who I know...), but, in the famous words of Willie Sutton, "Why do I
break into banks? Because... that's where the money is."; the more
prominent you are, the more it's worth, either directly or indirectly,
for some malicious or self-serving third party, to bring you crashing
down to Earth.
Where you draw the line between having an "exciting", high-profile public
lifestyle, and the need to retain the confidentiality of your "sensitive"
data, is something that only YOU can really decide. All I can do here is
warn you of the likely consequences.
Your Work PC Is An Unsafe PC
----------------------------
There is a special aspect to this concerning a PC (or network) to which
you might have access while you are at work, that is, when you're away
from home (so, in this sense, the word "at work" really means "any time
that you're using any computer or network that you didn't buy for
yourself and which you don't have complete, undisputed administrative
control over"). There's a simple rule, here: NEVER do anything on a work,
or third-party, computer or network, that you aren't comfortable having
your boss instantly know about.
Most computers located in large companies are pre-loaded with an
extensive portfolio of remote "management" (read: "remote spying")
applications that you either (a) can't detect or (b) can't disable,
even if you do somehow manage to detect that they're there; while there
are a variety of somewhat justifiable reasons (such as, "the computer's
there for you to do work with, not for you to play games with") why an
employer might want to use these types of surveillance programs, the
larger point is that the minute in which administrative control of a PC
passes from your EXCLUSIVE control, to a control scenario where someone
other than you can tell the computer to do or not do something, this
opens an enormous -- and, largely, impossible to mitigate -- security
hole.
This is true of everything that you might do on a work PC, although it's
worth noting that in particular, so-called "nanny filters" (gateway
applications that limit where you can go and what you can do, when
surfing the Internet) have become very widely used in large corporations
these days; the minute that you try to surf to a "naughty" Website, not
only does the filter stop you from doing so, but it also alerts an
administrator, and / or possibly your boss, that you're a "time-wasting
pervert who's abusing Company resources for personal gratification".
Another commonly-encountered surveillance system, in the corporate
computing context, is "IDS" or "Intrusion Detection System", sometimes
combined with "DPI" or "Deep Packet Inspection" technology; this scans
each and every little TCP/IP packet that you send out over the company
Ethernet cable, checking for "naughty" or "illegitimate" content (however
they define that), wherever it is. Some employers even have "keyloggers",
which are a hidden background system that capture each and every
keystroke that you enter at your keyboard, also in some cases every .jpg
or .gif file that you open on your PC, and then forward this data to an
administrator who can use it to punish you for "non-work related conduct"
or "inadequate data entry speeds" or, basically, whatever the company
involved wants to punish you for.
I have seen situations like this where even ONE such infraction is an
instant dismissal offence. Add to this the fact that the Information
Technology administrators of a big company have every reason to cooperate with the police and virtually no reason to defend your interests
against them, and you can easily see why accessing "controversial" data
on a work PC is a very, very bad idea.
And, incidentally, don't fall into the very easy-to-accept trap of
"everybody at my office downloads porno on to their computers -- why
should I be holier-than-thou and not go with the crowd?". This is an
excuse that I hear with frustrating frequency and it's ridiculously easy
to shoot down.
Use common sense, for God's sake: if it's against company rules to
download inappropriate material using your work PC, and the other 9 out
of the 10 people in your office do it anyway, and then all 9 of them are
subsequently fired for this transgression, how does it "help" you to be
the 10th person to be fired? In most large corporations, when you get
dragged in front of a disciplinary hearing, what matters is the written
rules, not what you claim the "office corporate culture" was.
Mobile Devices To Get You Moving... Right To Jail
-------------------------------------------------
In the comments that you see below, I'm assuming that we are talking
about a conventional operating system on a conventional desktop or laptop
computer, not something like a virtualized OS session, an iPhone, a
BlackBerry or other handheld, since the issues and protective measures
for those scenarios are quite different from what we're looking at here.
(Although, I will talk briefly on special considerations for data storage
on removable devices such as a USB key or SD card.)
In general, you should NEVER store sensitive data on anything other than
a "real" PC that is normally located either in your home or with you, in
the case of a laptop. If you're stupid enough to put sensitive data on
something like an iPod, iPhone or cell phone, then you deserve what you
almost certainly will get.
Note that in this respect, "sensitive" data can be stuff that you might
otherwise think to be innocuous, for example friends' phone numbers,
Websites that you frequently visit (don't surf the Internet on something
like an iPhone, to any site that you don't want your friendly local
police officer to know about instantly!), or, worse, lists of passwords
(believe it or not, this happens all the time -- one of the first places
that the cops check, when they have a suspected drug dealer, is his cell
phone, because 'them cops would never think I put my passwords there').
Most of these new portable devices have either very weak protective
technologies, or no protective technologies at all (see the following two
URLs for a rather dramatic depiction of how easy it is to "suck" all the
data off your cell phone, to an even moderately well-equipped attacker:
http://news.cnet.com/8301-1009_3-10028589-83.html?tag=newsEditorsPicksArea.0
and http://csistick.com/). What's even worse
is that this process can usually be accomplished in a matter of seconds
or minutes and that it leaves no signs at all of the cell phone having
been tampered with. (It is technically quite difficult to instantly
download confidential data from a PC just by plugging a forensics USB key
into the PC's data port, not only because of the volume of data involved
but also because with the exception of a few technologies like Firewire,
modern PCs have a degree of built-in protection against unauthenticated
access of static data by external devices. Most cell phones and mobile
devices have no such protection and can easily have all their data
harvested by someone with the right forensics tool.)
A carelessly stored cell phone, BlackBerry, etc., is thus far more
vulnerable to this kind of "fly-by" forensic attack than a conventional
computer would be... all that the attacker has to have, is a few seconds
of undisturbed physical access to the mobile device, and he's got all the
data that's contained within it.
The other thing that makes mobile devices especially dangerous is that
they are always connected to the manufacturer's network (for example,
Apple and Verizon's network in the case of an iPhone), and you don't
control that network, in fact you usually don't have any idea what kind
of visibility it has on what is going on with your portable device. (See:
http://www.pcworld.com/article/id,143932-c,cellphones/article.html).
Another cute little example of this is how the iPhone just, er, "happens"
to secretly take screenshots of whatever you were doing when you hit the
"Home" button, then discreetly files away these little forensics gems in
a secret place unknown to you, just waiting for the next police officer
to retrieve them. (Read about the gory details at:
http://www.networkworld.com/community/node/32645).
I don't suppose that Apple had a little, er, "advice" from the U.S. NSA,
CIA and FBI, when they put that little, er, "feature" into the iPhone, do
you? Yeah, baby, you got it. Steve Jobs may be a "cool dude", but he's
still an American, and American "cool dudes", at the end of the day, are
going to do whatever Uncle Sam tells them to do.
Considering that 99% of the major phone companies work hand in hand with
the FBI, the CIA, M.I.5, the NSA and your local law enforcement, and
considering that in almost every case they will happily hand over your
private information to the cops "on request", you are stark raving nuts
to put sensitive information on a mobile device that uses one of these
companies' networks, just as you are nuts to access this kind of
information over their networks. You might as well put a big bullseye on
your back.
The Lying, Incompetent Thugs Called "Police"
--------------------------------------------
You must appreciate that the police are, in most jurisdictions, out to
get convictions at ANY cost, whether or not the person involved has
actually done anything illegal or immoral. It's just a big game of
"gotcha", to the cops; that's how they get promoted and get public
recognition, by proudly showing up on the local news and boasting about
how they "put that pervert away for life".
Most citizens, and therefore juries, have a child-like trust in claims
made by people in positions of authority, particularly policemen
confidently asserting that the defendant is an awful terrorist /
paedophile / drug dealer / subversive / gang member / {pick your
favourite Devil figure}. You have to assume that many or all of these
claims, true or otherwise, will be made against yourself, when your PC
gets seized by the authorities.
In thinking about this, you have to understand that the average person,
who is 100% ignorant about virtually every concept associated with
computers, has a naive, trusting belief in the honesty and integrity of
the police, as well as in the completely false idea that "if you don't
have something to hide, then you shouldn't be afraid of anyone rummaging
through your personal affairs".
What if you DO have something to hide, say, you're secretly gay, or
you're planning to divorce your husband and run off with the man next
door to Morocco, or you're planning to sue the local Council over that
tree that fell on your car, last week, or you have the secret formula for
a revolutionary anti-cancer drug, hidden on your encrypted volume, or you
are campaigning for free speech rights in an oppressive society like
China or Iran; all of these are completely legal in most countries (or
should be), but there are perfectly valid reasons why you'd want them not
to be revealed to unauthorized viewers.
But the average, ignorant, police-loving, "patriotic" citizen of most
countries, knows nothing of the above and cares less. People crave what
they (usually falsely) believe to be a benevolent dictatorship that makes
the trains run on time, and they just cannot envisage any legitimate
situation where anyone could want to hide information from the public or
the police.
NO amount of evidence to the contrary (and I have tried doing this, many
times) will convince a "law-abiding" conservative citizen that the police
would lie or cheat. In fact, the average citizen will simply get angry
with you, for "impugning the reputation of our fine law enforcement
officials". Trying to secure your data, or appearing to be trying to do
so, is two and a half strikes against you, before the police even pitch
the next baseball.
You may think I'm exaggerating, about the above; I wish that I wasn't,
but the available evidence overwhelmingly suggests that if anything I am
understating the situation.
Furthermore, in some jurisdictions like much of the southern United
States, under "forfeiture" laws passed originally to "deny drug lords
income from their crimes", the police actually get to seize, impound and
then sell, for their own personal profit, most or all of a suspect's
property, BEFORE there is even a trial, much less a conviction on the
original grounds for which the "perp" is charged. You don't have to be a
rocket scientist to appreciate the huge incentive this gives the police
to cheat, falsify or withhold evidence and otherwise eliminate what few
legal restraints are imposed upon them, so that they can sell your
computer, car and house and then take a nice vacation somewhere. Your
data confidentiality plans should start from the assumption that your
opponents will use every tactic, fair or unfair, legal or illegal (such
as, for example, beating the crap out of you, to "encourage" you to tell
them your encryption passwords), to get what they want out of you.
Surprise, uncertainty, subterfuge, fear, lies, deception and causing
hesitation: these are all weapons that an experienced, ruthless adversary
will use against you. Prepare for them, be able to recognize them when
they are arrayed against you and don't fall for them -- have a good plan
and stick to it, but learn from your mistakes, especially "close calls"
where you almost revealed sensitive data, and make sure that you never
repeat the same mistake twice.
Keep Your Damn Mouth Shut, Bloke
--------------------------------
Having said the above, there is another very important issue that you
have to be aware of. In the (hopefully) unlikely event that you get
arrested, to the maximum extent that your personal pain threshold allows
you to do so (because, in many parts of the world, the police will simply
beat the tar out of you, to get the information that they believe you to
be in possession of), you should NEVER, EVER voluntarily communicate with
the authorities, reveal information to them (even information that you
figure that they already know, and even if they have told you that they
_do_ already have it), talk with them or give them even the slightest
insight about the details of your life or how you use your computer.
There is a really simple, if inelegant, way to put this: SHUT THE F*CK
UP. NEVER SAY ANYTHING TO THE AUTHORITIES, NO MATTER HOW INSIGNIFICANT IT
MAY SEEM.
You have to understand that the police habitually lie to suspected
criminals to get the latter to cough up information that the cops would
otherwise have a difficult job obtaining. Incidentally, on their rare
candid moments (see: http://video.google.com/videoplay?
docid=6014022229458915912&q=&hl=en), the police will openly brag about
how they lie, cheat and mislead defendants -- many of whom are likely or
obviously innocent -- into signing false confessions, divulging seemingly
innocent and irrelevant information which the police later twist into
"evidence of guilt", and so on. Such tactics are Standard Operating
Procedure for all nations' corrupt, incompetent "guardians of public
safety".
Here are examples of typical admonishments:
-- "Look, you pervert, we already have more than enough kiddy porn
pictures taken off your PC, to convict you in any court in the country.
Why don't you just save everybody some time and tell us where the rest of
them are?" {In fact, the policeman hasn't found anything at all, but rest
assured that if you respond to this question in any way that might even
hint that you did have some of this horrendous data on your computer, he
very definitely will take it down as an "admission of guilt" at your
eventual trial.}
-- "Come on, Carlos, surely you aren't denying that this is YOUR
computer? We found it in your house!" {In fact, while it may be your
computer and may have been found in your house, a judge or jury has no
idea if it was really yours or instead was owned by any of the other 6
people who share your rooming-house. If you answer in the affirmative
because you figure the police already know that it's yours, you have just
denied your attorney a plausible defence tactic in court.}
-- "Look, Mohammed, the two other guys that you've been sending those
'jihad' e-mails to, on-line, got picked up earlier today, and they've
already confessed. Not only that, but they've told us that you were the
ringleader! I'm telling you, pal, that if you confess now and co-operate
with us, we can get your sentence reduced; but if you don't play ball,
don't blame me when you get put away for 20 years as 'lead conspirator'!"
In fact, they picked up both of your friends, but had to let one go due
to a complete lack of evidence while the other one hasn't told them
anything. If you spill the beans, the cops will just smile and then
charge you with whatever they were going to charge you with; whether or
not you 'play ball' with them makes no difference whatsoever. They
probably have no latitude in this, anyway, because of 'minimum sentencing
guidelines' and other such 'get tough' measures against whatever 'menace'
is currently in fashion.
Also, you should remember that the law enforcement authorities, as well
as quasi-legal entities like the notorious U.S. RIAA and MPAA and their
henchmen like MediaSentry, all have very large and very well-indexed
databases containing the file names, sizes and MD5 digital "hashes" (a
cryptographic hash function that produces a "fingerprint" which is, in
practice, unique to each particular file, so the file can be quickly and
uniquely identified) of many types of "controversial" content, ranging
from kiddy porn to "pirated" multimedia files like .MP3s and movies being
traded on file sharing networks like LimeWire, BitTorrent and so on.
If an opponent can use these tools to get a positive identification on a
known "controversial" file that they found, or can claim (truly or
falsely) to have found in your physical possession, they can then "prove"
that you (and nobody else) were responsible for this filthy / disgusting /
terrorist / fraudulent material on your hard drive, just from the files
themselves (without having to beat it out of you with a truncheon)... and
in so doing, you have made the job of convicting you 1000 per cent easier.
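To make the hash-matching idea concrete, here is a minimal Python sketch of the kind of fingerprinting involved (the helper name `file_md5` is my own, not taken from any real forensics tool):

```python
import hashlib

def file_md5(path, chunk_size=65536):
    """Compute the MD5 digest ("fingerprint") of a file,
    reading it in chunks so even huge files fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Change even a single byte of a file and the digest changes completely; an investigator with a database of known digests just computes `file_md5` over everything on your drive and looks for matches.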
At least for files of moderate size, there is an interesting way of
creating a roadblock to this kind of analysis. Suppose, for example,
that you have a large number of "controversial" .JPG format graphics
(picture) files. While you should ALWAYS rename these anyway, if you are
willing to permanently delete the originals of the graphics files (as
separate entities on your hard disk), what you can do is paste them, one
by one, into different pages of a Microsoft Word (or other word
processing) document (or some other kind of document -- it could be
anything, for example PowerPoint, etc., as long as it has the ability to
display a graphics file), then save that file under a new, misleading
name ("MY_THOUGHTS_ON_GARDENING.DOC" will do). (Note: For "controversial"
text, my preference would be to embed the original text file within a
word processing format document; for pictures and, possibly, movies, I
would probably choose a slide presentation format document since these
are typically quite large, the size element will therefore not be as
immediately noticeable as it would for pictures pasted into a .doc file.)
Take care not to paste too many of these pictures into any one file,
since often the applications that can open .doc, .xls files, et cetera,
have poorly documented internal limits and if you exceed these, you might
find that the file (therefore all the pictures embedded within it) can no
longer be opened or accessed. Also be aware that some applications,
including most Microsoft ones, have a bad habit of automatically
recording potentially incriminating information such as "last modified by
{whomever} on date {whenever}", within the file's metadata (look under
"Properties" in the "File" menu); you will want to purge this if
possible, before saving the file.
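If you happen to be using one of the newer, zip-based Office formats (.docx, .xlsx, .pptx), you can actually peek at these metadata fields yourself with a few lines of Python. This is only a sketch, and an assumption on my part that you have such a file handy -- the older binary .doc format stores its properties quite differently, and the field list here is far from exhaustive:

```python
import re
import zipfile

def document_authors(path):
    """Report the author-related metadata that zip-based Office
    formats keep in the docProps/core.xml member of the archive."""
    with zipfile.ZipFile(path) as z:
        xml = z.read("docProps/core.xml").decode("utf-8", "replace")
    fields = {}
    for tag in ("dc:creator", "cp:lastModifiedBy"):
        m = re.search(r"<%s[^>]*>(.*?)</%s>" % (tag, tag), xml, re.S)
        if m:
            fields[tag] = m.group(1)
    return fields
```

If `document_authors` comes back non-empty on a file you thought was clean, you have more purging to do.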
Although you should never consider the above technique as a safe
alternative to encryption, it can be a useful addition to your defence in
depth strategy, because, from the point of view of the attacker, he can
no longer just run a quick scan of file names within a directory, hoping
to find a match with known "controversial" files or content. The actual
content is now obfuscated away as a "binary object" within the wrapper of
a Microsoft Word / Powerpoint file; to see, and recognize, this content,
the attacker has to suspect that "MY_THOUGHTS_ON_GARDENING.DOC" is in
fact about something quite different than how to grow better rows of
leeks, then has to open up the relevant .doc file and page up and down
through it until something interesting shows up on the screen. A
diligent, patient, skilled attacker will do this; a great many ordinary
policemen, won't.
A related issue arises when you save a graphics file into less
sophisticated formats, such as JPEG (.jpg) or Bitmap (.BMP). These
formats cannot save multiple layers and must merge all the layers into a
single one before the file is finally saved.
The same general principle has to do with "remnant" data for other
formats. If you are saving a word processing document, why not just save
it in basic ASCII text? When you do this, you can be sure that the only
thing that will end up in the document's data file on the hard drive, is
what you actually intended to have saved.
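The "only what you intended" property is easy to demonstrate for yourself: write the bytes out and read them straight back. A trivial Python sketch (the helper name is mine):

```python
def save_as_plain_text(text, path):
    """Write a document as bare ASCII and verify, byte for byte,
    that nothing beyond the intended characters reached the disk."""
    data = text.encode("ascii")  # refuses anything outside plain ASCII
    with open(path, "wb") as f:
        f.write(data)
    with open(path, "rb") as f:
        assert f.read() == data  # exactly what you meant to save, no more
    return len(data)
```

Contrast this with a .doc file, where the bytes on disk bear only a loose resemblance to the text you typed.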
Here again, we see one of the basic principles of data security, at work:
"the mortal enemy of security, is complexity". Or, put another way: "Keep
it simple, stupid."
Is Your Digital Camera Going To Testify Against You?
----------------------------------------------------
One final comment about digital graphics files. Many otherwise
intelligent and conscientious data hiders aren't aware that if the
pictures in your "My Pictures" folder were self-created (that is, they
were originally taken by yourself, using your own digital camera), the
camera itself can sometimes be used as a piece of evidence against you
in a trial. The idea here is that the pictures (particularly
in their original, "raw" format as stored on the camera's flash RAM
memory) taken by different digital cameras have unique characteristics,
that an expert forensics investigator can use to trace back the picture
to the particular camera that captured this particular image.
This is obviously NOT what we want to have happen, and to try to reduce
the chance of it being used against you, please check out the steps
detailed at the following URL:
http://www.instructables.com/id/Avoiding-Camera-Noise-Signatures/.
What is especially scary -- but also highly informative -- about the
above Webpage, is the sublink (http://www.ws.binghamton.edu/fridrich/)
that it contains to the personal Website of one "Jessica Fridrich", a
professor at Binghamton University in the United States who seems to be
a walking encyclopedia of "how to do forensics on digital pictures and
digital data". Note that this very skilled lady does the bulk of her
publicly declared work for lovely little social help agencies like the
little old U.S. Air Force (can there be ANY doubt, therefore, that the
other, _private_ work that she does in breaking encryption keys and
"finding out where the perverts have hidden the steganographic data", is
on behalf of certain U.S. agencies with the letters "C", "I" and "A", and
"N" "S" and "A", in their names?)
I add this comment just to give you a bit of a flavour of who you're up
against, when you try to hide data against the professional forensics
experts employed by governments and, sometimes, the police. Be of no
doubt, people like Ms. Fridrich are extremely intelligent, highly
motivated and you have to remember that they do this stuff for a living,
each and every day. She gets paid handsomely by the U.S. spook community
to give them the tools to enforce American power all around the rest of
the world, because she knows very well that "knowledge and information,
ARE power". She believes 100% in her work, she is gung-ho to help Uncle
Sam catch and jail all the "perverts, child molesters, terrorists and
drug dealers who are such a threat to the American Way".
In the cyber-punk and cypher-punk community -- of which, by definition,
you are now a member -- pay attention, play smart and trust no-one. There
is no other way, in the
world of PC security.
------------------------------------------------------------------------------------------
Computer Operating Systems
--------------------------
Before I get going on this section there is a basic recommendation that
everyone reading this should take seriously. Namely, GET YOURSELF A
(REASONABLY) FAST, MODERN COMPUTER. (And make sure that it has a good,
fast hard drive. And make sure that you are using a fast Internet
connection, although here there are some special considerations.) Why?
Several reasons.
First, you will be using encryption for a great many purposes.
Encryption, by its very nature, involves complicated mathematical
calculations, which put quite a bit of stress on your computer's CPU (its
"brain"). The faster your CPU, the faster it will get all the crypto
stuff done, which is a good thing.
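If you want a rough feel for just how CPU-bound crypto really is, you can time a deliberately slow key-derivation function. This Python sketch uses PBKDF2 from the standard library; the iteration count is arbitrary and the measured rate will of course vary wildly from machine to machine:

```python
import hashlib
import time

def kdf_rate(iterations=200_000):
    """Time one PBKDF2-HMAC-SHA256 key derivation and return
    iterations per second -- a rough index of how fast this CPU
    chews through the repeated hashing that crypto demands."""
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha256", b"passphrase", b"salt1234", iterations)
    elapsed = time.perf_counter() - start
    return iterations / elapsed
```

Run it on an old box and on a new one, and the "get a fast computer" advice above stops being abstract.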
But secondly, and actually far more importantly, you have to understand
that the basic idea of this document is to teach you the best ways in
which to defend yourself, when that dreaded jackboot comes kicking at the
door. Would you prefer, in that situation, to be using a computer that
takes 10 minutes to shut down, or one that requires 10 seconds? Not a
hard decision, is it?
One special note here : although in some ways, large-capacity, external
hard drives that connect to your PC via USB 2.0 cables are an attractive
option, because of their portability, disposability and so on, I have
found out that they can be MUCH slower than internal hard drives. Most of
the time, when you are using small data sets (say, a few encrypted
files), you will never notice this, but try to start copying
multi-gigabyte files across a USB cable and you will very quickly come to
appreciate the difference in speed. This is not necessarily a reason not
to use external hard drives... but just plan in advance and compensate
for the slower speed, when you know that you will be handling "sensitive"
data.
------------------------------------------------------------------------------------
This part of the document will compare Microsoft Windows XP (assuming
that you have all the most recent patches -- see below however for a
warning regarding Windows Update) with recent versions of Linux.
I'm deliberately NOT discussing operating systems (e.g. MacOS, Windows
Vista, BSD, Solaris) that I know little about, although I will mention
things about them where relevant. Just as a general comment, though, for
modern versions of the MacOS, 10.x that is, you should assume that it is
more like Linux than it is like Windows; however, unlike Linux, the MacOS
has a significant amount of proprietary Apple program code in it, so some
Mac features will differ quite a bit from their Linux equivalents.
Nor will I be discussing security for obsolete operating systems like
Microsoft Windows 2000, NT, Me or 98 / 98SE / 95, or old versions of
Linux, which you are in any case unlikely to be using.
You should be aware that there are in fact many viable alternate
operating systems these days and it may be worth your while to
investigate one or more of these. The BSD group (NetBSD and FreeBSD) in
particular have a good reputation for security and are largely, but not
completely, compatible with Linux applications; however, most of the BSD
group's security features are really more oriented to fending off attacks
from remote (network-based) intruders, rather than the more extreme "the
cops now have your PC in their lab" scenario that I will describe below,
so they're not as useful as they first might seem. There are also
operating systems like Solaris, AIX and so on, but these are not very
well suited to the casual home user.
As a general comment, I should point out that there is one big advantage
that both Linux (and the MacOS) have generically : namely, your average,
run-of-the-mill street cop is much less likely to know anything about
these operating systems, compared to Windows.
This might not look like it's very important at first blush, but in fact,
it can make a huge difference, especially if, in those crucial few
seconds just after the SWAT team kicks down your door, they don't know
how to act quickly and "secure the perp's data", because the only GUI
interface that they're familiar with is the good old Windows one. This
extends to a wide range of other activities, for example from simple ones
like "where is 'My Documents'" to the more obscure ones, such as "where
do I find the Registry files?".
There is, of course, an important caveat to all this : if you get
attacked (either initially or later, once your PC is in the tender hands
of the police forensics lab) by a "pro", that is, a really well-trained
police forensics expert, these guys know Linux and the MacOS very well
indeed, so don't think that just by using a different operating system,
you're going to be buying yourself a lot of additional intrusion
resistance.
Think of using a non-Windows operating system in the same perspective as
using "security by obscurity" : it can add a little to a lot of security,
depending on the circumstances, but you'd be crazy to rely on it as your
_only_ defence.
------------------------------------------------------------------------------------------
General Comments About Each Operating System
--------------------------------------------
Although both Windows XP and modern versions of Linux sort of look the
same (they both have a "windowing" user interface that uses the mouse,
has pull-down menus and icons and so on), there are in fact some very
dramatic differences between them that have a big effect on your
computer's overall security.
Here are a few:
Who Owns Your Operating System?
-------------------------------
Microsoft Windows is a private, "closed-source",
2. The other back-door, which you are far more likely to have used
against you, is one (and the evidence is that it's not just one, in fact
it's many of them) in which Windows secretly keeps track of everything
and anything that you do on your PC -- e.g., what Websites you accessed,
what files you opened (and what content they contained), when you used
your computer, and so on -- whether or not you thought you had "hidden"
or encrypted it.
The idea here is that you THINK that you have wiped all "incriminating"
evidence off your Windows PC, but meanwhile, the operating system has
been specifically rigged to allow a cop with the correct back-door code
to tap into the secret tracking database and happily download all the
"smoking gun" information that you thought had been purged. There is,
again, abundant evidence that this kind of back-door not only exists, but
has actually been integrated with the most popular law enforcement
forensics (electronic snooping) programs, for example EnCase. (The EnCase
corporation has been very cagey as to whether they have or haven't had
access to this kind of thing; they won't answer questions directly, so
you would have to assume that the answer is "yes". It's only the extent
and flexibility of the backdoors that's in question, IMO.)
By the way, lately, Microsoft has been more or less saying that this is
exactly what they're doing -- check out the following story:
http://arstechnica.com/news.ars/post/20080429-new-microsoft-lawenforcement-tool-bypasses-pc-security.html.
What's interesting about this is, note how it has this little comment
about "decrypts system passwords". This casual comment points out that
Microsoft has no problem at all, with enabling what can only be described
as a backdoor, for its chums in the U.S. government. Ask yourself -- if
they're admitting this out loud, what AREN'T they admitting? This one
thing, in my opinion, means that you're nuts to use Windows for anything
that you seriously want to keep secure.
Can you evade / defeat these kinds of built-in snooping / tracking
functions, given that they are hard-coded right into the basic operating
system? Yes... I will show you some techniques in what's to follow,
however you have to ask yourself, "can I ever be SURE that I've defeated
all the back-doors?". Remember that Microsoft and the U.S. government
could theoretically be adding new exploits with each "patch" that you add
to your computer. Unless you are very good and very persistent, there is
always the chance that they'll enable one that you won't find. The
decision is yours.
Vista -- Just Say No
--------------------
Incidentally, this situation with Windows is much worse in the most
recent version of Windows, that is, "Vista". The reason why is, unlike
every other computer operating system before it, Vista incorporates an
extensive series of so-called "DRM" or "Digital Rights Management"
measures, put in at the demand of the U.S. recording and movie
industries, designed to lock down your ability to copy or display
multimedia content (like songs, movies and so on) without their
permission.
For example, one of the most hated features of Vista is that it checks to
see if each and every component -- software and hardware -- of your
computer, enforces these DRM restrictions (over which of course you have
absolutely no control). If it finds EVEN ONE component, say, a video
card, that doesn't have DRM copy protection built in, Vista cripples the
output of your video card so that you get only a tiny, postage-stamp
sized screen instead of that nice big 50-inch plasma TV output that you
thought you had paid for. You can't disable this feature or turn it off,
nor can you even find out the basic details of how it works... these are
all secrets held by Microsoft and Hollywood.
Vista's DRM infrastructure would be the ideal hiding place for government
spying modules, because by definition it is hidden from the computer user
and is remotely controlled by a third party (either Microsoft, Hollywood,
the government, or all three) that the computer user has no knowledge of
or control over.
Why is this relevant to ensuring that your computer is safe from
intrusion? Well, stop to think about it -- if Microsoft has teamed up
with Hollywood to take away 75% of your control over how your own
computer works, when accessing "copyrighted" content (something that is
right in your face every time that you try to play a DVD on your
computer, which is immensely unpopular with Microsoft's own customers and
which requires all sorts of secret code that you can't change or
examine), what do you think the chances are that they have ALSO
collaborated with the CIA, the NSA, etc., to also slip in a little hidden
spying program along with the DRM stuff? Most users would never notice
the spying program because, unlike the DRM nonsense, it works quietly in
the background, never bothering you, until the cops show up at your door.
One other thing about Vista that doesn't immediately look like it's
relevant to security, but actually is very important, is simply how
"bloated" it is. Vista sets new records as to the amount of hard
drive space, RAM memory and CPU speed that it needs, just to give you the
ability to see that nice shiny new user interface (which is of course
99.9% the same as the old XP interface, but for which Microsoft expects
you to buy a brand new computer and spend another 200 Euros... but I
digress). It is agonizingly slow to do anything, particularly boot up in
the first place, and this is not good from a security perspective,
particularly in time-sensitive situations such as the "boot down the
door" scenario.
Remember, in this context, the famous, and very true, computer security
motto that "the mortal enemy of security, is complexity". Largely because
of its DRM encumbrance, Vista is fantastically complicated, both in
design and in implementation, and these characteristics make it almost
impossible to properly secure. (How can you "secure" an operating system
whose inner workings nobody -- except possibly a few of Microsoft's own
programmers -- can really understand? You can't close off security holes
in system components that you don't even know about, probably because
Microsoft has never revealed them.)
But the most important implication of Vista's (and actually XP's as well,
XP is not quite as bad but it's still bad enough) clumsy, bloated
implementation is, it will NEVER be able to run from a removable device
such as a CD-ROM or USB key (see below). You _must_ run it from a hard
drive, and that's a bad thing, as we shall see further on.
Bottom line on Vista: Just stay away. Windows is bad, from a security and
confidentiality point of view; Vista is hopeless. Using Vista means that
you might as well turn yourself in to the FBI right now and save everyone
a great deal of trouble with the legal paperwork.
without even having an operating system (see the above-noted paper for
how they do this with a single, "magic" UDP data packet). This kind of
compromise would make very good sense for the U.S. intelligence
community, since the CPU is one of the few types of computer chips that
by definition have to be distributed with every PC. Yet more reason to
stay away from American equipment!
As would also be the case with a software-based backdoor, the American
spooks' work would be made much easier by at least the passive
co-operation of the operating system manufacturer. This wouldn't be hard to
do with organizations like Apple and Microsoft because of crap like the
PATRIOT Act, but it would be much more difficult (not impossible!) for
software like Linux, for the same reasons why it would be difficult for a
software-based backdoor. (Note: There is an active debate currently in
the security community as to whether the CIA, NSA, etc., could do this
even without the OS to help them, maybe by secretly injecting the remote
turn-on code into some widely downloaded application like, say,
RealPlayer or the STEAM gaming patch update network, or maybe into a
device driver. Personally I think that the CIA would be much more likely
to just turn the screws on Gates, Ballmer and Jobs, but you can't
completely rule out the possibility that they'd try the other route, as
well. The CIA is very patient and very thorough... they'll keep trying,
until they find a method that works, and they rarely rely on only a
single mechanism by which to gain secret information. They're
professionals, and they're very, very good at what they do. You can take
that advice to the bank, my friends.)
Because the vast majority of desktop and laptop PCs sold today are built
with either Intel or AMD processors, the security-conscious consumer
really has limited options here. If you can get a PC with one of the
Taiwanese-built VIA CPUs, I'd suggest that you do so (they're typically
somewhat cheaper than, but a bit slower than, the Intel or AMD models),
but if you have to choose between Intel and AMD, all things being equal,
pick AMD, since it appears that AMD is a little less in bed with the
entertainment industry even though it's an American company. Another
option might be some other computer with a different chip, for example
the "Geode" series.
Just keep it in the back of your mind, that you have to assume that the
hardware may have been pre-engineered to set up a backdoor on your PC...
but it would probably (maybe) need a compliant operating system to turn
it on.
Incidentally, there is a side-note to all the above that's well worth
mentioning. Let's assume, for the sake of argument, that, like 99% of all
normal PC users, you don't have a realistic choice but to use a computer
with either an AMD or Intel CPU in it. But, like the good little paranoid
that you are, you are worried about limiting the damage to your privacy
and security posed by the little old CIA / NSA backdoor that might just
be hiding somewhere in the millions of tiny integrated circuits on the
CPU. How are you going to protect yourself against this kind of
fundamental, and difficult-to-detect, threat?
Although there's no 100% effective "mitigation" step, one thing that
might be useful is to simply disconnect -- and here, I mean "physically
disconnect" as in "pull the Ethernet cable from its little RJ-45 plug-in
port" -- from any network, when you are doing security-sensitive
operations such as entering passwords for encrypted storage containers
and so on, then power down the computer while the network cable is
physically disconnected. Conversely, when you power up and first enter
the passwords for your encrypted containers, do so with the network cable
physically disconnected. (Don't worry, it shouldn't stop you from being
able to get to the Internet, when you eventually do decide to plug in;
most network cards and operating systems can automatically re-synchronize
with the network when they sense that the cable -- the "physical medium"
-- is now available where it previously was not.)
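On Linux, you can even script a pre-flight check for this, by reading the "carrier" flag that the kernel exposes under /sys/class/net. A hedged sketch only: the sysfs layout varies somewhat between distributions, Wi-Fi association needs its own checks, and the function name is mine:

```python
import os

def wired_links_up(sysfs_net="/sys/class/net"):
    """Return the names of network interfaces whose carrier is up,
    as reported by the Linux sysfs tree. A simple pre-flight check
    before typing a passphrase: if this list is non-empty, the
    cable (or radio link) is still live."""
    live = []
    for iface in sorted(os.listdir(sysfs_net)):
        if iface == "lo":  # loopback is always "up" and harmless
            continue
        try:
            with open(os.path.join(sysfs_net, iface, "carrier")) as f:
                if f.read().strip() == "1":
                    live.append(iface)
        except OSError:  # reading carrier on a downed interface errors out
            continue
    return live
```

Refuse to type any encryption passphrase until `wired_links_up()` returns an empty list.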
The reason here is pretty straight-forward; a hardware backdoor (let's
say it functions like a keylogger), which would (possibly) have to
operate completely independently from whatever operating system was on
the PC, would have to transmit sensitive captured data back immediately
(or at the very least, at the point at which it noticed that the power
was about to be turned off on the compromised PC), because the hardware
backdoor probably wouldn't have anywhere that it could safely store this
captured data in between power cycles on the computer. (Yes, it
theoretically COULD do something funky like put the decrypted passwords
in your BIOS's non-volatile Flash RAM chip. The problem with doing this
is, a careful defender would be able to see the passwords as well and
would thus be tipped off that his PC had been backdoored, and the whole
point of a hardware backdoor is to avoid leaving any trace that it's
there. So I think it unlikely that the hardware backdoor would try to
preserve this kind of evidence in the hope of queueing it up for the next
time that it saw the network cable being connected.) If you physically
pull the network cable (or just prevent your Wi-Fi chip from ever
associating with a wireless access point), the hardware backdoor has no
way of sending your private passwords back to our friends in Langley and
Fort Meade, U.S.A..
So What's The Bottom Line On Ownership?
---------------------------------------
The bottom line here: Windows starts with at least 30 points out of 100
against it, just because it is (a) a closed-source operating system and
(b) because it is headquartered in the United States. The MacOS is only
better in the sense that its market share is probably not big enough for
the spooks to have paid a lot of attention to, but this could change at
any time.
Just on the "how is it programmed and who owns it" front, most versions
of Linux are far more trustworthy than either Windows or the MacOS, but
steer clear of Red Hat and other America-based versions since they could
at some point be compromised like Windows undoubtedly already has been;
this would be difficult to do, but not impossible. The CIA and NSA have a
lot of people and a great deal of money. What may seem "impossible" to
you, might just be a slow day's work for them.
How Is The Operating System Designed
------------------------------------
All modern operating systems, including of course Windows, most versions
of consumer Linux and the MacOS, have a "windowing", "GUI" ("Graphical
User Interface") type of interface that lets you use the mouse, drop-down
menus, and so on; in fact, they all look more or less alike, these days.
This would lead a casual observer to conclude that they're all basically
built in the same way. But any such conclusion would be very wrong --
there are dramatic differences, many of which impact security and
confidentiality, in the ways in which the various operating systems are
actually designed.
This might look like a more complicated model but actually it is much
simpler, because -- and this is a very important point when considering
security -- you can run the computer with only a CLI (Command Line)
interface, being sure that nothing is being managed / executed / tracked
"behind the scenes" by some aspect of the GUI interface. Now, of course,
there could be a spying application running anyway, because all these
modern operating systems are "multitasking", that is, they can run many
programs all at the same time.
For Your Convenience... NOT
---------------------------
What I'm referring to, however, is the possibility that the GUI interface
itself, or some other application that is running with the GUI interface,
is enabling some kind of tracking of your actions, either innocently or
deliberately.
A classic example of this concerns copying files. Suppose, for example,
you want to copy a bunch of pictures from a directory (folder) on your
hard drive, on to a USB key. Almost all modern GUI interfaces will, "for
your convenience", automatically create "thumbnail" (smaller) versions of
the pictures in any folder that you access... this is presumably so you
can tell which picture is which, before you decide to keep the pictures
of Aunt Nellie's 80th birthday party and throw the pictures of your last
drunken March Break table dance into the trash bin.
The problem is, almost all modern GUI interfaces, when they are creating
these thumbnail versions of the original picture, do so by secretly
creating a reduced size version of the original picture in a hidden
folder or directory. Needless to say, if the original pictures included
"controversial" content, and you wiped them from your hard drive, the
"thumbnail" versions ARE STILL RETAINED IN THE HIDDEN DIRECTORY AND CAN
VERY EASILY BE ACCESSED BY AN INTRUDER, LIKE A COP. This one "feature"
can easily land you in jail, even if you've carefully sanitized just
about everything else.
[Incidentally, there is a stupid little side-effect to this, which is
immensely annoying but which you have to take account of... let's say
that you have dutifully wiped all the "incriminating" thumbnails from the
appropriate hidden folder under "My Documents", then you open a Windows
Explorer window to the folder with the original "controversial" files,
and from there you dutifully wipe all these files. The problem is, the
second that you visited the folder containing the original files, your
trusty operating system will have just re-created the thumbnails
corresponding to them, again! (After all, it's just helping you see which
one is which, right?) The point here is that sometimes, the sequence in
which you undertake security-related activities, is just as important as
what you actually do. Sorry, but computers just work this way, don't
blame me!]
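On Linux, the thumbnail cache lives in one of a couple of well-known
hidden directories under your home folder. Here is a sketch of clearing
them out; the exact paths are assumptions that vary by desktop
environment and version, and note that plain "rm" does not securely
erase anything -- use a real wiping tool on data that matters:

```shell
# Common Linux thumbnail cache locations (older and newer layouts).
# Plain 'rm' only unlinks the files; it does not overwrite the data.
for d in "$HOME/.thumbnails" "$HOME/.cache/thumbnails"; do
  if [ -d "$d" ]; then
    echo "clearing thumbnail cache: $d"
    rm -rf "$d"
  fi
done
```

Remember the sequencing point from the aside above: clear the cache
AFTER you have finished browsing the folders, not before.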
There is a very subtle but important point here. The GUI interface is not
_intentionally_ trying to leave an incriminating trail for an intruder or
forensic investigator; it's merely implementing a "convenience" tool to
make ordinary use of the computer easier. But in doing so, it is
accomplishing more or less the same function as a real, malicious
background spying program would do. There are untold variations of this
same concept in GUI user interfaces, ranging from the "Most Recently
Accessed Documents" list that they commonly store (note this one applies
both to Linux and Windows), to Microsoft Windows' trick of secretly
keeping "Previous Versions" of your files, unencrypted, elsewhere on the
disk.
(By the way: note how this feature makes a bad joke of Microsoft's
repeated assertions that their "Bitlocker" file encryption system, allows
you to "secure" your confidential files. If I have this stupid "Previous
Versions" thing turned on, but I then go and save the last version into
my Bitlocker-protected area, where's the security? An intruder can just
go looking for the "Previous Version", which is stored completely
unencrypted somewhere else, and bingo! You're owned, dude! Let's hear it
for "Security By Microsoft"!)
The larger problem with Vista in this respect is that it is so large and
complicated, and so much of its inner workings have intentionally been
hidden from / made inaccessible to, the average end user, that it's
basically impossible to accurately determine what the damn operating
system really is, or is not, doing at any given point. Therefore, using
Vista for any purpose that involves confidential data, is like playing
Russian Roulette with 5 out of the 6 chambers loaded. Sure, you might get
lucky, but the odds are against you.
The moral to this story is, TURN OFF THE BLOODY BACKGROUND BACKUP
PROGRAMS, AND MAKE SURE THEY STAY TURNED OFF. If you want to backup your
confidential data, do so yourself by manually copying it to another
secured, encrypted location. Backup programs are designed for people with
nothing to hide but a lot of stuff to retain. These design goals are in
many ways diametrically opposed to what we need to do for good PC
security, so disable the backup programs and do it right, by yourself.
Desktop Search
--------------
An especially dangerous aspect of this subject is that many modern
operating systems, and this unfortunately DOES include not only Windows
and the MacOS but also many versions of Linux that are trying to compete
with the "features" (bad ones, in this case) of the commercial operating
systems, by default will enable "desktop search" background applications
that basically index each and every file on your computer.
These features have been enabled supposedly to make it easier for you to
search for and locate files (I have never once seen them work properly,
by the way; all they do, in my experience, is drastically slow the
computer down, while the application thrashes the hard disk while doing
its indexing), but from a security point of view they are a disaster
waiting to happen.
Just think... if you were a jealous spouse, or a private investigator, or
a repressive government, or a morality police cop, wouldn't you just LOVE
to get an up to date list of exactly what files that the regular user of
a particular PC, most often opened, accessed and searched for? It's hard
for me to believe, therefore, that the trend to enable this kind of
"convenient" file searching wasn't started at least partly by law
enforcement... the benefits to them, far outweigh the benefits to the
supposed user of the tracking application.
Again, the Windows and particularly Vista implementations of this
so-called "feature" are considerably worse than those available with other
operating systems, although the MacOS isn't significantly better. What
all 3 of XP, Vista and the MacOS share in terms of bad things here, is
that the background search and indexing service is built in to the basic
operating system in a way that is difficult (MacOS) or just about damn
impossible (XP / Vista), to turn off and kill once and for all.
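If you want to know whether one of these indexers is running on a Linux
box right now, a quick check is a sketch like the following; the daemon
names (tracker, baloo, beagle, recoll) are my assumptions about what
might be installed, and your distribution may use others:

```shell
# Scan the process names in /proc for well-known desktop-indexing daemons.
# /proc/<pid>/comm holds each process's executable name.
grep -lEi 'tracker|baloo|beagle|recoll' /proc/[0-9]*/comm 2>/dev/null \
  || echo "no known desktop indexers appear to be running"
```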
The Trash Can / Recycle Bin
---------------------------
This is the place where the windowing interface stores files that you
THOUGHT you had deleted. I will get into the messy details of "permanent"
file deletion later, but for now, what you need to know is that when you
"delete" a file under any of the newer operating systems, in fact, all
that the computer is going to do is move it (NOT, in fact, delete it) to
a special folder called the "Trash Can" or "Recycle Bin". This is to
enable you to retrieve the file later, if like the dumb schmuck that you
really are, you realise that you didn't mean to delete it in the first
place.
The point here is that to REALLY delete the file you have to then tell
the operating system to "empty the Trash Can"; this action (sort of)
finally kills the file. Watch out for this one, because it's very easy to
forget, meaning that your computer Recycle Bin is filled to the brim with
data that you'd just as soon not let your jealous spouse see, all just
waiting to be restored and again made reviewable, with one mouse click.
The solution here is simple: make sure you "empty the Trash Can" each and
every time that you delete a file. (You really SHOULD be securely wiping
your files... right?)
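On most Linux desktops the Trash Can is, under the hood, just a hidden
directory (the location below follows the freedesktop.org trash layout;
treat it as an assumption and check your own system). Emptying it from
the CLI is a sketch like this -- and again, plain "rm" is a minimum, not
a secure wipe:

```shell
# The GUI "Trash Can" on most Linux desktops is just this hidden directory:
# 'files' holds the "deleted" data; 'info' holds metadata about each deletion.
rm -rf "$HOME/.local/share/Trash/files" "$HOME/.local/share/Trash/info"
```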
If you're using Linux, watch out for the near-bug that prevents you from
emptying a system Trash Can for a volume (for example an encrypted
TrueCrypt container) that has been dismounted. (While I believe the risk
here is low, since the "deleted" files would be somewhere on the
dismounted container, there might still be a suspicious reference in the
Trash Can that you'd be better off with your attacker not seeing. The
simple solution to this is, re-mount the container, either wipe or delete
the files from the Trash Can, then dismount it again.)
Swapfiles / Virtual Memory
--------------------------
This is really a subject in and of itself, but for now, I'll just say the
following. As you read what's immediately below, keep in mind that the
swapfile / virtual memory file is the #1 place that a sophisticated
attacker will usually go looking for "incriminating" data, first. So you
need to pay attention to this section.
Although nowadays RAM memory (the kind that operates basically at the
speed of electricity, so it's fast enough for you to use for computer
programs; it's also the kind of memory that sort of vanishes when you
turn the power off) is relatively cheap, in the old days in which the
basic architectural assumptions of modern computer operating systems were
first thought out, RAM chips were very expensive, so programmers looked
around for a way to 'cheat' and get the computer to run more programs (or
larger programs) than the available amount of RAM memory would otherwise
accommodate.
The concept that they came up with is known by a variety of names such as
"swapping", "virtual memory", "demand paging" and so on, but these all
more or less refer to the same thing -- the idea is that the operating
system tries to keep track of which programs, and which data files, are
being constantly used and which ones only get occasional use; then, it
copies the seldom-used ones out of RAM memory on to a special place on
the computer's hard disk. If, all of a sudden, one of these "swapped out"
programs or files gets some kind of input or otherwise has to do
something, it is recalled from the special "swapfile" into the scarce
supply of RAM memory.
In this way, the computer can appear to be running many more programs (or
can be using much larger data files) than it otherwise could be, if only
"real" RAM memory was in use. This usually works fine except when pushed
beyond a certain point where the computer's RAM memory is too small to
even run a couple of programs without constantly swapping other ones out
to the hard drive -- this is a symptom called "thrashing the swap file"
and can be detected by the computer being very slow, with the hard drive
in-use light constantly being on.
The memory swapping process goes on continually in the background with no
intervention (or even awareness of it) by the computer user and it is
enabled by default on almost all modern computer operating systems;
indeed, it can be difficult to impossible to disable unless you really
know what you're doing.
For example, it is just about impossible to run Microsoft Vista without a
large swapfile, because Vista is so big and bloated that it can hardly
fit into most computers' RAM memory even with "virtual memory", let alone
without it. Linux has a much more reasonable RAM overhead, in this
respect, but this is partly offset by the fact that in my experience,
Linux users tend to run it on older computers that typically have less
RAM than a brand new machine would have, so we're back at Square One in
that respect. Windows XP is kind of in the middle; if you have 512
megabytes or more of RAM, you should be able to at least temporarily
disable memory swapping. I don't know enough about the MacOS to
confidently say one way or the other, but my guess is that it would be
more like Linux than it would Windows.
The problem with virtual memory, from a security point of view, is that
for the system to work at all, it has to be able to access, and therefore
send out to the swapfile, ANY and EVERY last byte of data that may at
some time reside in, or pass through, the computer's RAM chips. Stop to
consider the implications of this: since, by definition, EVERYTHING that
you do with your PC, from using your Web browsing program, to the
confidential Microsoft Word document that you edited today, to the
encryption keys (the secret password that scrambles confidential data,
wherever you have put this) that protect your secured files, ABSOLUTELY
EVERYTHING, resides in your PC's RAM chips at some point, it follows that
all of this could be, and probably will be, "swapped" out to the swapfile
on your hard drive, at some point, if you have the virtual memory feature
enabled.
You don't have to be very smart to figure out where all this is leading;
namely, that an intruder who has physical access to your computer, can
just look through your swapfile and find a treasure-trove of sensitive
information that you THOUGHT that you had "erased" or "encrypted", during
your day to day use of the PC. This isn't as easy as it may at first seem
to be, since the intruder has to ensure that the computer won't
overwrite the swapping area again, and furthermore, data in the swapfile
is not conveniently organized into files and folders, etc.; but have no
doubt that an even moderately experienced attacker, particularly if he or
she is equipped with good forensics tools like EnCase, very much CAN get
at and make sense of what your virtual memory system put in the swapfile.
Needless to say, having the wrong kind of evidence gathered in this way
can be disastrous... "game over"... from a security perspective.
A very few programs, notably TrueCrypt, are aware of the memory swapping
danger and they use advanced techniques to try to "lock" RAM memory that
they use to prevent it from being swapped out to the hard disk, however
if I were you I would never rely on this as your primary safeguard. You
are going to have to find a way to secure, or sanitize, your swapfile, if
you are to have a hope of protecting your computer against an
experienced, well-equipped attacker.
It's worth also noting that Windows and Linux implement swapping in
similar, but subtly different, ways. Under Windows, the "swapfile" is
just that -- it's a file contained in a Windows (usually NTFS or FAT
format) partition. Under Linux, the "swapfile" isn't a conventional file,
it's in fact by default an entire partition of its own. There are
advantages and disadvantages to both approaches, but what's important for
us to remember is that because of these differences the way in which we
have to secure a swapfile in each case is different.
So what do you do? The obvious approach is to ensure that you aren't
using virtual memory at all. If you're using a relatively modern computer
-- that is, one with more than about 1 GB (gigabyte) of RAM memory --
this is likely to be much easier to do with Linux than it is with
Windows, simply because most versions of Linux have a significantly
smaller RAM overhead than does Windows... if you disable swapping on a
Windows XP with 256 Mb RAM, for example, there is a good chance that the
operating system will simply crash (it can happen even with 512 Mb,
actually).
Under Windows, you have to do this by modifying the My Computer -->
Advanced --> Performance --> Settings --> Advanced --> Virtual Memory
settings. Remember to disable swapping for all hard drives (not just C:)
on your computer if you have more than one.
Under Linux, you have to first figure out what the "device name" of the
partition in fact is. Typically, this is something like /dev/hda5 and you
may be able to find it in your /etc/fstab file, for example:
/dev/hda5    swap    swap    pri=42    0 0
To sanitize the swap partition, disable swapping, overwrite the whole
partition with random data, then rebuild and re-enable it:
swapoff -a
dd if=/dev/urandom of=/dev/hda5
mkswap /dev/hda5
swapon -a
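Before the overwrite step, it is worth confirming that the swapoff
actually took. A sketch of the check, reading the kernel's own list of
active swap areas:

```shell
# /proc/swaps lists every active swap area. If only the header line
# remains, swapping is fully disabled and the partition is safe to wipe.
cat /proc/swaps
if [ "$(wc -l < /proc/swaps)" -le 1 ]; then
  echo "swap is fully disabled"
else
  echo "WARNING: swap areas are still active"
fi
```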
Under Windows, you are going to need a dedicated program like Evidence
Eliminator. You can see by this little example how something that is
basically a built-in function of the operating system under Linux,
requires software installation under Windows. Yet another vote for the
Penguin, methinks.
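In fairness, Windows does have one relevant built-in knob: the
"ClearPageFileAtShutdown" registry value, which makes Windows overwrite
the pagefile with zeros at every shutdown. It is a single pass, and it
only happens at shutdown, so it is no substitute for a proper wiping
tool -- but it is better than nothing:

```
Key:   HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\
       Session Manager\Memory Management
Value: ClearPageFileAtShutdown (REG_DWORD) = 1
```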
The Windows Registry
--------------------
"root", meaning that when later you want to access or delete that file or
directory as an ordinary user, you can't because it is "owned" by a
different user (the "root" user). This can be very irritating if you have
to mix actions under "sudo" and actions under your normal, user-level
account while performing a security-related function (for example,
creating mount points or encrypted volumes using TrueCrypt), because you
can end up with a mixture of files and directories, some that you can
access / delete and others that you can't (at least not without switching
back to "sudo"). The only real way to deal with this is just to remember
what things you can, and can't, do as an ordinary Linux user.
File Systems -- Why They Matter
-------------------------------
In the good old days of Stone Age computer operating systems such as
MS-DOS (does anybody remember those, LOL), the corresponding file and
directory storage organization systems were more or less as simple as the
operating systems themselves: they could do basic tasks like "save a
file", "read a file", "delete a file", "copy a file from one place to
another", and so on. This sort of made sense, if you consider that most
computers that these file systems were associated with, were only meant
for a single user and they were not networked.
As computers became more interconnected and the applications running on
them became more complex, the file storage systems also became more
sophisticated, and this has many implications from a data confidentiality
point of view. Some of the techniques that are used here are
"journalling" (the file system tries to keep a record of every change
that was done to every file on the hard drive, so that you can restore
back to whatever point if something goes wrong), "background
replication" (the file system keeps a duplicate copy of every file that
you create, again so you can restore it if you mistakenly delete it and
then "empty the Trash"), "duplicated Volume Table Of Contents" (VTOC)
(the file system keeps more than one listing of the contents of each
directory and these listings are kept in separate parts of the hard
drive, so that if one listing goes bad, the other can still be used to
retrieve files).
Generically, there is a common confidentiality-related problem that I
hope you can see running through all of these concepts: THEY ALL
BASICALLY MEAN THAT EVEN IF YOU _THINK_ THAT YOU HAVE DELETED A FILE
"ONCE AND FOR ALL", IN FACT, IT (OR A COPY OF IT) MAY STILL BE HAUNTING
YOUR HARD DRIVE SOMEWHERE. This is a crucially important point from a
confidentiality point of view and it is one of the hardest ones to 100%
compensate for, because depending on a number of factors, among them what
kind of operating system you have, what kind of file system you have and
how it has been set up, it may be difficult to even know which of these
techniques are in use (it very well can be more than one of them) and
next to impossible to turn them off or work around them, at least without
specialized software and knowledge.
And just to re-state the obvious, suppose that you are an Al-Qaeda
operative and you have your secret plans to cause some really nasty
unpleasantness at No. 10 Downing St., next month, set up in a Microsoft
Word file, somewhere on your hard drive. (You really shouldn't be using
Microsoft Word, by the way, but that's a different subject.) But you know
that Scotland Yard is on your tail, so you both delete the file and then,
like a good little computer paranoid, also "empty the Trash Can".
Unfortunately, your wonderful Linux ReiserFS "journalling" file system
has also kept a secret copy of this file, as well as of its last six
versions, somewhere on the hard drive, and when you hear that knock on
the door, 5 minutes later your PC is in the tender hands of MI5's best
forensics experts. All they need do is use a few simple, publicly
available tools to read back the journalling trail and you might as well
hand them a printed copy of the document, to save everybody a day's work.
Well, it's off to Guantanamo for you, mate!
Another, closely related issue is what we call "File Ownership By
Identity". This is a fancy way of saying, "when using a modern computer
file system, particularly one meant for use on a hard drive that may be
shared by multiple users / log-in accounts, the file system by default
will 'tag' each and every file or directory on the hard drive, with the
name of whomever 'owns' that file or directory".
Now, there is a good reason for this, namely that the operating system
and the file system are trying to ensure that, under regular computer
operations, (a) 'Bob Smith' can't see or delete 'Mary Jones' files and
(b) system files, for example the executable program files that allow the
operating system to run the computer in the first place, are owned by an
'administrator' or 'super user' who must be logged in as such, for any
major system change (for example, upgrading the operating system) to
affect the system files that might be affected, changed or deleted. But
note the phrase, "under regular computer operations".
Remember, we are working under the assumption that the PC that we're
talking about, is going to be in the physical possession of an
intelligent, hostile intruder who is equipped with advanced forensics
tools. Feeble security safeguards such as "file ownership tags"
absolutely WILL NOT stop such a determined attacker for so much as five
seconds, but they definitely WILL -- and this is the crucial point --
provide a legally valid identity attribution trail to whatever set of
files the attacker wants to access and use.
In other words, the file ownership tag WILL TELL THE ATTACKER, THE
POLICE, THE JUDGE AND THE JURY THAT IT WAS _YOU_ -- NOT SOMEBODY ELSE --
WHO OWNS AND IS RESPONSIBLE FOR THAT "CONTROVERSIAL" FILE.
Now, understand that from the point of view of the attacker, this
situation is still not perfect. For example (and this is the case for
many home computers), there may be only one log-in account, for example
"The_Jones_Family" and only one corresponding password (say,
"iamajones"), therefore only one identity to tag the files with, even
though that particular identity may in fact be little Billy Jones when
he's playing World of WarCraft online, or his sister Janie Jones when
she's chatting on MSN, or dad Frank Jones when he's checking out the
latest football scores, or whomever.
In a case like this, the file identity tagging will make the forensics
investigator's job more difficult, but definitely not impossible -- after
all, how likely would it be that Billy or Janie was checking out those
"controversial" Websites with their "controversial" pictures? In a case
like this, it's far more likely that poor old Frank is going to get
fingered by the Bobbies, even though in theory the files might have been
generated by the other two.
There are also other issues with file identities (particularly, the fact
that they can quite easily be changed, as can incidentally a file's
creation or modification date, by any one of a number of widely available
software tools, for example the 'touch' or 'chown' CLI level Linux
commands) that make them less than a 100% effective file-to-real-personidentity-tracking tool, but are they useful to an intruder? You bet they
are.
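As a quick illustration of just how fragile these date stamps are, here
is a sketch of back-dating a file's modification time from the Linux
CLI (using a throwaway temp file):

```shell
# Create a scratch file and back-date its modification time to 1 Jan 2000.
f=$(mktemp)
touch -t 200001011200 "$f"   # timestamp format: [CC]YYMMDDhhmm
ls -l "$f"                   # the listing now shows the faked date
rm -f "$f"
```

Two commands, and the "evidence" of when the file was last touched says
whatever its owner wants it to say.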
So How Can We Defend Ourselves Against The File System?
-------------------------------------------------------
There are various defences against the above types of file system
confidentiality risks, but I have found that the following ones are
probably the simplest and most reliable:
(1.) Use an external device: Certain kinds of external storage devices,
in particular USB keys, are by default configured to use less
sophisticated file storage systems (especially Microsoft's old FAT and
FAT32 systems) that cannot store or track much of the information,
including the dreaded "file ownership tag" thing that we mentioned a few
paragraphs ago. The main reason for this is that inherently, these
devices are meant to enable file portability -- that is, you save a file
on to your USB key when it's attached to PC #1, plug it in to PC #2 and
then copy it on to the second PC. Considering that there is a very high
chance that your identity (if any) on PC #1 is completely different from
your identity on PC #2, if the removable device file system enforced
strict ownership rules, it would make this kind of casual copying
difficult to impossible, so it has sensibly been stripped from this
aspect of how these devices store and categorize files.
The advantage, of course, from a confidentiality point of view, is that
if you just always work from a copy of the file on the removable device,
then by definition it will never get into the tender clutches of your
hard drive's much more sophisticated file system that DOES track
attributes such as "who owns it". Furthermore, largely due to
restrictions on the amount of storage space and other technical issues,
functions such as "journalling" and so on are seldom found on removable
media such as USB keys.
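Don't take my word for it -- check what file system a volume actually
uses before you trust it with anything. A sketch (run it with the USB
key's mount point as the current directory; on most keys it will report
vfat, i.e. FAT/FAT32):

```shell
# Show the file system type backing the current directory.
# On a factory-formatted USB key this will typically be vfat (FAT/FAT32).
df -T . | awk 'NR==2 {print "file system type:", $2}'
```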
MacOS:
I don't know. I would suspect that it would be similar to Linux but I'm
not sure.
Startup and Auto-Run Programs
-----------------------------
Except for truly malicious software -- by which I mean keyloggers
(whether installed by some Russian computer criminal or by your friendly
local intelligence agency), viruses, worms, adware, spyware, rootkits and
so on -- which may be able to secretly "hook" itself into the startup
sequence for some other, legitimate program and therefore be executed
silently without you even being aware of it -- there are basically only
two ways for a program to be executed on your computer.
One is, you manually tell the computer to run the program, either by
double-clicking on an icon on your GUI desktop, or by entering a command
such as "nano MyNewTextFile.txt" at a CLI command line.
The other is an "auto-started" program which the computer has been
instructed to automatically run, each and every time that a "start-up
event" occurs. Under Windows, the auto-start locations (mostly registry
keys, plus a few legacy .ini file entries) include the following:
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run;
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunOnce;
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\RunOnce;
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunServices;
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\RunServices;
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunServicesOnce;
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer\Run;
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunOnceEx;
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon,
Shell;
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon,
System;
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon,
VmApplet;
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon,
UIHost;
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon,
Userinit;
HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\Windows,
run;
HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\Windows,
load;
HKEY_LOCAL_MACHINE\Software\Microsoft\Active Setup\Installed Components;
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager,
BootExecute;
HKEY_CURRENT_USER\Software\Mirabilis\ICQ\Agent\Apps;
win.ini, load;
win.ini, run;
system.ini, shell.
Linux
-----
In general, auto-started programs under Linux follow a much less complex
(surprise, surprise!) system than under Windows. Most auto-started
programs can be found in the following file:
/etc/inittab
For the "start-up event on window manager initiation" event, here are
some likely areas:
KDE: ~/.kde/Autostart/
GNOME: ~/.gnome2/session-manual
XFCE: ~/.config/autostart/
Note: Much more good info on this is at: http://gentoo-wiki.com/
HOWTO_Autostart_Programs
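A quick way to audit the window-manager autostart locations in one go is
a sketch like this; the per-user path is an assumption that differs by
desktop environment (see the list above), and /etc/xdg/autostart is the
common system-wide location on freedesktop-style systems:

```shell
# List both the per-user and the system-wide desktop autostart directories.
for d in "$HOME/.config/autostart" /etc/xdg/autostart; do
  if [ -d "$d" ]; then
    echo "== $d =="
    ls "$d"
  else
    echo "$d: not present"
  fi
done
```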
Other "Interesting" Files
-------------------------
Now in view of the above, there is one other category of files that you
must definitely protect, and that is (to put it as simply as I can): "any
file that either (a) would give an attacker EVEN THE SLIGHTEST HINT that
you're attempting to either hide or 'sanitize' data, or (worse) (b) a
file that points to outside services or accounts that you use (for
example, file storage sites, file swapping sites, "Webmail" sites like
Gmail or
Hotmail, or e-mail addresses, especially your own); you can be sure that
the attacker will be contacting each and every one of these and demanding
that "under penalty of obstruction of a criminal investigation, they
immediately remit all and any evidence associated with Mr. William Q.
Pervert";
-- Those that contain data sanitization commands (for example the Linux
commands needed to clear the swapfile, as shown above) or, worse,
procedures (e.g. "First, encrypt my data, second, wipe my swap file,
third, run Evidence Eliminator, fourth, reboot my PC') -- remember that
if an attacker knows, or even partly knows, the process by which you are
securing your PC (especially the order in which the various steps are
taken), it is much easier for him to reverse-engineer it and break it;
-- Those that give a history of activities that a 'dumb normal user'
probably wouldn't do; for example, a Linux .bash shell command history
that shows you altering file modification / creation date stamps, or a
Windows log file that shows repeated creations and deletions of
"temporary" user accounts on the same PC; included in this category are
events without a straight-forward, obvious explanation, such as repeated
accesses to a particular file without a clear reason why the file needs
to be used;
-- Those that contain cryptic or obscure phrases (example: "the falcon
flies at midnight"); these might, in the eyes of an attacker, represent
passwords or pass phrases;
-- Any object or data structure (for example, an entry like "X:\SECRET
\JIHAD.DOC" in the "Recent Files" listing of your Microsoft Word program)
that references either / or (a) a file that is no longer on the
unencrypted, "normal" part of your hard drive, or (b) a volume (in this
case, it was probably a virtual PGP or TrueCrypt volume) that is not
visible or accessible when the computer is under normal use (the point
here is that an intelligent intruder is going to say, "hey, wait a
minute, this link is to a volume that isn't here now... that has to mean
that Achmed has a hidden volume, somewhere on the hard drive, or he's
hidden his Islamicist propaganda on a USB key" -- if the intruder doesn't
suspect that you have an encrypted volume, he is far less likely to go
looking for it, and far less likely to correctly identify it);
-- Finally -- and you'd be amazed at how often this happens -- don't
allow files with "suspicious" names (e.g. "WIPEINFO.DAT", "EncryptPswds.txt", "Last_Wipe.txt", etc.) to clutter the unsecured part of your
hard drive, even if these files don't have any meaningful data (or any
data at all) in them. A good attacker can derive a surprising amount of
information from these, starting with the time / date stamps (gives him
insight as to when you were last using the PC, so he can narrow down the
scope of his attack to other files that were accessed around the same
time), and of course the mere presence of these files is another "bloody
red shirt" that the prosecutor can wave in front of the jury to convince
them of your guilt, all the while in the absence of any legitimate
evidence.
Remember, when considering what you must do, you are fighting a war with
your attacker, not a Marquess of Queensberry boxing match. In war, nobody
fights fair and you can expect your attacker not to, either. Don't give
him the smallest stone to toss your way.
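On the shell-history point in the list above: clearing that particular
trail on a Linux box is trivial, though remember that truncating a file
only zeroes its length -- it is not a secure wipe of the old contents:

```shell
# Clear the in-memory command history (bash builtin; harmlessly skipped
# in shells that don't have it) and truncate the on-disk history file.
history -c 2>/dev/null || true
: > "$HOME/.bash_history"
```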
so doing, you don't open a different security hole, such as the "whoops,
it backed up all your sensitive data while it was unencrypted" issue.
(e) Try, if at all possible, to have only one duplicate copy of each
sensitive data object, in your possession. Having six different copies of
"TheMatrix.mp4" is bound to attract the suspicion of an attacker, unless
you have a very valid reason (maybe you're into movie piracy? certainly
better to plead to that, than say to being a member of Al-Qaeda) why you
should have multiple versions of a single file.
Online Backup -- Is It For You?
-------------------------------
Related to the above set of issues is a trend that has lately become
quite popular in the consumer information technology market, namely, the
idea of "on-line storage" or "on-line backup". The concepts behind these
two terms are closely related and basically involve you using someone
else's computer, accessible only over the Internet, to store and retrieve
files that you would otherwise have to store on a PC at your local site
(e.g. your residence, your place of work, or somewhere that you have
personal, physical access to).
Remember that for these purposes, the terms "remote hosting provider",
"on-line backup service", etc., can be used somewhat loosely to describe
any remote service that can store any kind of data for you. There is a
specific reason why I mention this: In some ways, you can consider remote
Web e-mail services, for example Hotmail, Yahoo Mail, Gmail, etc., as a
"remote hosting provider". So, if (see below) the files that you are
planning to store on such a service are relatively small, you can
actually just e-mail them (as attachments to an otherwise innocuous
message) to the Webmail system and just leave them archived in some
folder that you maintain in your on-line mailbox.
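As a sketch of the "attachment to an otherwise innocuous message" idea, here is how such a message could be assembled with Python's standard email library. The subject line, address and filename are placeholders of my own invention, and the payload is assumed to be already robustly encrypted before it ever touches the message:

```python
from email.message import EmailMessage

def wrap_for_webmail(payload: bytes, innocuous_name: str) -> EmailMessage:
    """Build a boring-looking message carrying the (already encrypted!)
    file as an attachment, ready to be archived in your own webmail
    account. All the visible strings are deliberately unremarkable."""
    msg = EmailMessage()
    msg["Subject"] = "Holiday photos"       # placeholder, keep it dull
    msg["To"] = "you@example.invalid"       # placeholder address
    msg.set_content("See attachment.")
    msg.add_attachment(payload,
                       maintype="application",
                       subtype="octet-stream",
                       filename=innocuous_name)
    return msg
```

The message object can then be handed to whatever (anonymized, encrypted) sending path you use; never send such a thing over a connection traceable to you.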
From a data security point of view, there are a number of drawbacks that
you need to be aware of, about these services:
Instability -- You have no guarantee whatsoever, that the on-line data
storage company or organization, with whom you have left your data, will
be there tomorrow; this is particularly true of the "free" companies that
are presumably making their money by presenting you with advertisements;
the business scene is littered with the corpses of these companies. If
this happens, you can say "goodbye" to any files that you uploaded to
your former data hosting provider.
In this respect, make sure to remember that most of the "free" on-line
mail services (for example Hotmail) have an automatic timeout designed to
avoid their servers being cluttered up with files from dormant e-mail
accounts that have been forgotten or otherwise abandoned by whomever set
them up (this is also true of the few "free" file backup services that
remain on the Web), so you have to access the Webmail account every so
often to keep it "alive".
This can be done via an automated set of scripts triggered on a timer
basis (Windows "Scheduled Tasks" or the "cron" facility under Linux), but
I would advise against doing so because review of these scripts by an
intelligent attacker would give him instant knowledge of the mailbox and
user ID that you used to access the Webmail service (of course, what
would happen next in this scenario is a US PATRIOT Act or UK RIPA request
for Hotmail, Yahoo, etc., to immediately forward the contents of your
mailbox to the authorities).
So, having taken all the above into account, is there any point at all in
using an on-line data hosting service? I believe that there is.
One very valid approach, from a security point of view, would be to
archive only relatively small (< 100 kilobyte), robustly encrypted files,
each of which would preferably be obfuscated in some other way (for
example, the real name of the file is "MySecretPasswords.tc", but the
name of the file when you upload it to the hosting provider is
"FreeBeer.doc"), to two or more different remote hosting providers (to
avoid the "poof it disappeared" scenario, since both providers are
unlikely to go bankrupt at precisely the same time). You would then be
especially careful only to access these hosting sites with the Web
surfing precautions explained elsewhere in this document (e.g. never use
your real name, use encrypted, anonymized connections, etc.) and would
only access the documents when absolutely necessary.
Incidentally, not that I should need to say this, but -- NEVER, EVER,
unencrypt the contents of your sensitive data stored on a remote server,
so as to create an unencrypted / plaintext version of the same file,
which is also stored on the remote hosting site; you might as well
forward a copy of it to the police at the remote site, if you do this. If
you have to unencrypt something that is stored on the remote site, (a)
copy or transfer the original, encrypted version to an encrypted
container on your local PC; (b) unencrypt the transferred file to the
local, encrypted volume; (c) do whatever you wanted to do with the local
copy of the plaintext data, then either (d) securely wipe the local
plaintext file, or (if you need to store a revised copy of it), (e)
re-encrypt the file and upload it to the remote hosting site.
Keep in mind that many of the remote hosting providers have background
search and indexing systems that scour their servers' hard drives for
"illegal" content; so, the second that such content shows up in plaintext
form at their end, it will be noted as such and the local law authorities
(at their end, not yours) will be immediately notified. At best, this
sequence of events would mean that the remote hosting company would
delete all of your files and close your account (bad from a data
availability point of view); at worst, you will get your front door
kicked in by the local police (at your end), after the remote location's
police put your name up on INTERPOL and phone the cops wherever you live.
You have been warned!
So what would you put on such a hosting site? If your archives of
"sensitive" data are small, perhaps you could put the data itself on the
remote site; however, as explained above, there are risks associated with
doing this. A better set of candidates for remote storage might be an
encrypted master archive of passwords to your actual encrypted, local
data, or, additionally, something like TrueCrypt keyfiles that are
required to unlock an encrypted volume.
Steganography (see elsewhere in this document) is a very good option
here, since, from the point of view of a hostile government or police
force who demands access to your account at the hosting provider end,
when the ISP / hosting provider happily complies (you're just some
anonymous customer "out there in cyberspace"; they're the local police,
threatening to arrest anyone at the hosting provider who gives them the
slightest back-talk; whose interests do you think are going to come out
on top, in a dispute like this?), all the police will see is your nice
little collection of pictures of prized orchids and petunias. (In such a
case, the attacker is far more likely to conclude that they've got the
wrong person, and move on.)
Unfortunately, just before you were about to do all of the above, you
heard a 'pop' from your PC's power supply and the whole system went dead.
Meaning: ALL YOUR DATA IS NOW UNENCRYPTED, SITTING ON THE HARD DRIVE,
JUST ASKING TO BE INTERCEPTED BY ANYONE AND EVERYONE. The point is,
neither the software nor the hardware can do anything to secure your
data, if some physical problem with the PC itself has prevented the PC
from doing a "controlled" shut-down or re-encryption of the sensitive
data. So, in effect, a hardware failure at the wrong time can basically
stop all of your data security best practices dead in their tracks, since
you can no longer access the computer to tell it to re-encrypt your data
files, clear away incriminating evidence on the hard drive and so on.
As almost anyone who has used a moderately priced personal computer for a
few months can attest, the quality control on PC components, especially
certain parts of the computer such as its fan, power supply, RAM memory,
video card, hard drive and (sometimes) motherboard, can vary
tremendously, even between individual PCs manufactured by mainstream
builders such as Compaq, HP or Dell. Component quality on "clone" PCs
from no-name builders can be even worse.
And don't think that your computer is the only thing that can fail,
incidentally. Firewalls, routers, gateways, everything... these can, and
often do, die in a split second. In saying this, I'm not just referring
to whatever infrastructure that you have under your own direct control.
You also have to keep in the back of your mind, "what will be my
approach, when (say) the Web gateway being used by my ISP crashes and
dumps all of its then-current connection information, into a 'crash log'
on the disk?"
The concern here is the same issue as when you send your PC in to the
repair shop to have it fixed. The technician at the other end may (in the
ISP case) have to look at the crash log file to see what, if anything,
crashed the gateway server. The technician in the repair shop may have to
restore Windows operating system files on your hard drive, to get the
operating system to boot up again. In either case, while their main goal
has nothing to do with intentionally compromising your security, if they
stumble across "controversial" content that they believe you to have been
in possession of (for example a transaction record with a "controversial"
Website, or certain kinds of digital images on your hard drive), they may
believe themselves to have a legal or "moral" duty to inform law
enforcement of their suspicions. You know what happens then -- an hour
later, the steel-toed boots kick down your front door. You would be
AMAZED, and horrified, at how many otherwise careful users of
"controversial" data get caught in exactly this way.
For a careful computer user, there can only be one conclusion to draw
from the above:
-- You have to structure the way in which you use your PC, on the
assumption that it, and the network infrastructure through which it
communicates, may fail, unexpectedly, at any time.
-- You have to build this assumption into your data confidentiality
plans, so that if the computer DOES fail, the impact on the privacy of
your data will be as little as possible.
Is a complete answer to this problem possible? I have some not-so-good
news for you, here: I believe that there isn't any 100% effective way
that you can completely protect yourself from the compromising effects of
sudden hardware failure. That having been said, there are some steps you
can take to reduce the impact, when and if this should happen to you:
(1.) Use high-quality hardware components that you can easily, and,
preferably, cheaply, replace yourself. The more work that you can do to
fix your own equipment, the less you will have to rely on untrustworthy
third parties, to do your repair work for you.
Now, whether this means "buy a good computer" as opposed to "buy a
clone", that's a much more difficult one to call. I can make a good
argument either way. But the key point here is that whatever you use, you
should be able to fix it -- or destroy it -- yourself, without the
assistance of anyone else. Remember the "third party" warning that I gave
you at the top of this document?
(2.) NEVER store, or access, the original copy of anything that you don't
want an intruder to have access to, from any unencrypted source or
connection. For example, never access a controversial file from an
unencrypted folder on your hard drive, and never access controversial
content on the Internet except by using an anonymized, encrypted
connection via services like Tor. (Note: The challenge here is that
unless you encrypt a lot of "hidden" or "temporary" directories like the
"thumbnails" one, as well, you can THINK that all your "controversial"
data is protected, but, behind your back, the operating system has
quietly made copies of it, in some unprotected location. Plan against
this happening!)
While taking these steps will not completely protect you from a sudden
hardware crash (due to the possibility of some of the data having been
swapped out to the swapfile, etc.), they will significantly reduce the
impact of a crash, and will greatly complicate the task of an intruder,
if and when a crash comes your way.
(3.) Use applications, and hardware, that have deliberately been
engineered with the assumption that they will have to safeguard the
confidentiality of data even in "anomalous" operational conditions.
There is a simple meaning to this, currently: USE TRUECRYPT. This Open
Source encryption system, while not perfect, implements excellent
practices such as scrubbing its keys from RAM memory the moment a volume
is dismounted, drastically limiting the impact of a crash or sudden
shutdown. (Keep in mind that while a volume is mounted, the key must sit
in RAM, which is one more reason to dismount promptly when you're done.)
(4.) Try to store "controversial" data on media that (a) can quickly and
easily be physically destroyed (for example, by a good old whack by a
hammer or a stomp by your boot) and which (b) are cheap enough so that
you won't think twice, when the time comes to do so.
Maybe the NSA _can_ retrieve data that was once on a USB key that has
been shattered into 256 tiny little bits; but you can be sure that your
local cops can't, so physical destruction of this kind of storage media
is a nearly 100% guarantee that prying eyes aren't going to see it. Good
media types to use here are USB keys (the cheaper the better),
rewritable CD and DVD discs, and SD chips (the thin little, almost
square guys that you can put into a digital camera).
The main problem that you are going to run into here, is hard drives,
because these are both expensive and durable enough as to discourage
casual physical destruction. But for hard drives there is a slick trick.
What you can do, in the event of an unexpected "in the middle of a
session" operating system or computer crash that might have left
sensitive data unencrypted, is always keep a bare-bones computer standing
complicated and is out of scope for the purposes of this document; but,
fortunately, unless you want to do something crazy like trying to get
Windows XP or Vista to boot in this way (good luck trying that!), it's
actually not too difficult to do. Here is an excellent place to get
started: http://www.pendrivelinux.com/.
(Note: I am using the term "USB key" in a rather loose sense of the word;
most of the information in this section can also be applied to other
small removable semi-permanent storage devices, for example Secure
Digital chips, SDM cards, "mini" hard drives with a USB interface, etc.,
provided, of course, that the computer with which you are going to use
them, both has an interface in which to plug them in and that the
computer has the technical capability to boot from the device. However,
as of the time when this is being written, the results have been very
mixed on the subject of being able to boot a computer from anything other
than a "conventional" USB key. Of the alternate types of media that I
have so far tried, I have yet to see a computer BIOS that has a "boot
from SD Chip" option, for example, however desirable that it might be to
do this. You could possibly end-run this problem by getting a
USB-interface SD chip reader, but how well it would actually work, I don't
know.)
There is even an anonymity-specific version of a Live CD Linux setup that
was specifically meant for use with a USB key -- check out
http://www.browseanonymouslyanywhere.com/incognito/. (Preliminary testing of
this tool, including using its built-in ability to be installed to and
then booted from a USB key, indicates that it is _very_ good; "Incognito"
even gives you an easy, built-in option to use TrueCrypt to encrypt your
/home directory, and it is set up to use the Tor anonymizing peer network
for Internet communications. Another excellent feature, which shows that
the Incognito people aren't amateurs, is that it clears RAM memory
completely, by writing random data to each RAM page, before the computer
finally shuts down. Anyone interested in either secure data storage or
secure Web surfing should certainly give Incognito a serious look.)
I'll let you explore the Pendrive Linux site to learn all the gory
details, but as a brief comment, the most important thing that you will
have to do, to get a cheap USB key to be bootable, is to ensure that it
has a "bootable primary partition" established on it (by default as they
come from the store, most cheap USB keys don't have this), and you will
have to have a "boot loader" installed as the first program to be set up
on that partition. Once you have these set up, the rest is usually
relatively easy, at least with Linux.
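If you want to verify, before fiddling with boot loaders, whether a key (or a raw image of one) already carries an active primary partition, you can inspect its first 512 bytes yourself. This sketch only checks the Master Boot Record; it does not create the partition -- use the tools the Pendrive Linux guide describes for that:

```python
def has_bootable_primary(mbr: bytes) -> bool:
    """Check a 512-byte Master Boot Record for (a) a valid 0x55AA boot
    signature and (b) at least one primary partition whose status byte
    marks it active/bootable (0x80)."""
    if len(mbr) < 512 or mbr[510:512] != b"\x55\xaa":
        return False
    # The four 16-byte partition entries start at offset 446; the first
    # byte of each entry is the status (boot indicator) byte.
    return any(mbr[446 + 16 * i] == 0x80 for i in range(4))
```

Feed it the first sector of the device (e.g. read 512 bytes from the raw device node); most cheap keys, fresh from the store, will fail this check.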
I would recommend that you use a USB key with a little spare space on it;
you may be able to get away with a 2 Gb (two gigabyte) one, but I'd
recommend at least 4 Gb to give yourself some space for updates, caching
and so on; 8 Gb is clearly preferable as you should have plenty of space
for everything. It really depends on your budget.
I would also recommend that you only use them on a PC with enough RAM
memory (~1 Gb to ~3 Gb depending on the operating system you are using)
so that it doesn't need to do "swapping" or "demand paging" to the hard
disk, as described elsewhere in this document. There are two reasons for
this; the obvious one is just that swapping is a bad idea, period, from a
security point of view, but, additionally, the flash RAM memory used on
USB keys is not as robust as that on a conventional hard drive, and
having it constantly written to and read from, in the manner that demand
paging usually implements, can reduce the durability of the flash RAM
chips themselves.
There are many reasons why having your entire operating system running
from a small USB key is a great idea, from a security point of view. Here
are just a few:
(1.) These keys are so small and portable that they can be always with
you. That is, you can wear one on a keychain around your neck, or hang it
on your car keychain, or just keep it in your pocket. This is a very good
idea from a security point of view because it means that the chance of
someone getting secret physical access to your operating system and
quietly installing a "backdoor" on it, is now next to zero. (Note: Just
remember to take it out of your pocket, when you send your pants to the
cleaners. USB keys aren't waterproof! You have been warned!)
(2.) The portability and "always with me" aspects of USB keys means that,
in an emergency, they can be discarded or totally destroyed, in almost
any set of circumstances (a few good whacks with a hammer will do
nicely). While this is obviously an extreme option, consider that (at
least where I live) a 4 Gb USB key costs in the neighbourhood of 20 Euros
or about 40 U.S. Dollars, not counting the cost in time to reinstall
Linux on the replacement key; now, I don't know about you, but although I
probably wouldn't throw away 20 Euro without thinking about it, I sure
would prefer doing that to spending 20 years in jail after the police do
a forensic analysis of all the hidden nooks and crannies on my friendly
local computer operating system.
Incidentally, depending upon the relative size of your "controversial"
datasets, it may be a good idea to store all data of this type on
something like a SD chip (and its relatives SDHC, etc.), which you can
access either via a built-in reader on your PC (many modern laptops and
even some desktops come with these nowadays) or by a USB-interface SD
chip reader.
Logically, your private data would also be stored in something like an
encrypted TrueCrypt container, the file listing for which would look like
"RADIOSTATIC.WAV" when its directory listing on the SD chip was viewed in
the operating system's file browser application. For added plausible
deniability, you could engage in a little primitive security by obscurity
by marking the "hidden file" bit on "RADIOSTATIC.WAV"'s directory listing
(this would cause it not to be shown by a default configuration file
browser, although any experienced forensics attacker would know how to
bypass this), then add a few .JPG format pictures of random landscape
scenes from places where you've never visited, to the SD chip's directory
(don't forget to change the file time and date stamps to something that
doesn't implicate you).
When and if the police find the SD chip and scream at you "ha, we caught
you, you filthy drug dealer, we found the chip where you're hiding all
your records, confess now or you'll get it", you can ask to see the chip
and then blandly say, "hmm, you know, that looks like one of those chips
that you put in a digital camera to store pictures on, tell me officer,
does it have any pictures on it? Because I don't even HAVE a digital
camera" (or, if you do own one, "a camera that takes that kind of
chip"). Try to
imagine the cops' frustration at being presented with a bullet-proof
alibi like this; you weren't in physical possession of the chip when they
seized it, any "controversial" data on it is encrypted so they can't
access it, and the only unencrypted data on the storage device has
nothing to do with you. Serves the buggers right, I say.
I know that it's annoying to give up the speed and convenience of using a
"real" hard drive, but consider the huge privacy and security advantage
that you'd get by using something as small and easy to hide / conceal as
a SD chip, for secure data storage. There are so many ways that you could
do this -- mail it off in a letter to yourself, bury it in an air-tight,
water-tight, insecticide-laden box in your garden, stick it in between
the pages of a book (make sure not to give that one away to the local
charity auction!)... the only limit is your resourcefulness and
imagination.
Another highly desirable aspect of the rapidly increasing data storage
density and small form factor of SD chips is that they make it relatively
easy and affordable to back up your "controversial" data, but to do so in
a way that does not attract attention or open up a possible attack
vector. Just buy two of them and write the same data to each. You can
store the alternate copy somewhere that the police will never suspect
(how about behind the last row of teacups in your mother's kitchen
cabinet?) and then, when the secret police come and do the door kick
trick at your own flat, you can quickly shatter the primary SD chip
(eating the shards, should you be so inclined) and let them search
wherever they want; all the important stuff is securely stored in a tiny
little chip, somewhere physically far away.
No data-hiding strategy is perfect (since, the police can simply beat and
torture you until you tell them where the goodies are), but this one is
as close to perfect as you are likely to get.
Note that all of the comments immediately above also pertain to
conventional USB keys, but are less compelling with a data storage device
in that form factor, since it is more likely to be detected and
recognized as a covert data repository.
(3.) This next advantage is poorly understood but it is in fact extremely
important.
Consider, for the moment, that one of the most prized pieces of evidence
for a sophisticated intruder conducting forensic analysis of your PC, is
the ability to prove a relationship between the user of that computer
(you) and the terrible, awful, "prohibited" data, that is on the
computer, somewhere.
When you instead boot the computer from an operating system on a USB
key, this scenario changes radically, because now, the hard drive on the
PC is simply a storage device -- ironically, your having booted off a
removable device (which, let us remember, was originally designed merely
as a kind of convenience storage device to replace the old floppy drives)
has in effect reversed the relationship between the PC (which was
traditionally supposed to have been the thing with the operating system)
and the external storage device (which was simply a "dumb" peripheral).
This has dramatic effects on the evidence chain, because the PC's hard
drive (which either does not contain an operating system at all, or which
contains an operating system that you have never touched) has no evidence
of any kind linking your access to, or use of, data on the hard drive,
other than possibly the time / date stamps on the files that you accessed.
In effect, from the point of view of a forensic investigator, it is as if
a "ghost" magically accessed the files on the computer's hard drive,
without doing any of the processes (e.g. firing up the operating system
from the primary boot partition on the hard drive, logging in as whatever
user, etc.) that the investigator would ordinarily use to narrow down the
question of who was using the computer at what time.
And it gets better. One of the very funny aspects of this system is that
you can actually set up a computer with Microsoft Windows (using either
the FAT or NTFS file systems that Windows uses as partition formats), but
boot the computer from a Linux-based USB key [the technical reason why
this is possible, is that most modern versions of Linux can use the
"FUSE" module that allows you to "mount" a FAT- or NTFS-based partition,
and access the files on it, in more or less exactly the same way as you
would for an ordinary Linux ext2, ext3 (not good because it journals) or
other partition], then do whatever you want, within some limits, using
the files on the Windows part of the system, then shut it down and go on
your way.
Try to imagine the fits that this would give an investigator; he or she
would be assiduously looking through the Windows Registry, the Event Log,
etc., for traces of who was using the PC; but there would be no such
thing, because at no time in this process was the Windows operating
system even running. Indeed, there would be no trace that the system had
ever been used, at all.
Personally, I wouldn't recommend doing this because I just don't like the
proprietary Windows NTFS file system -- remember that it is a journalling
system, although it's unclear whether this "feature" would still be fully
functional if an NTFS partition were to be mounted via Linux FUSE -- but
if you absolutely must have a Windows computer somewhere (possibly, to do
"non-controversial" duties with), this is one very secure way in which to
use it.
(4.) Finally, USB keys have the additional advantage that you can "mix
and match" operating systems (via simply installing different ones on
different USB keys and then plugging in whichever key suits your fancy
today) for different purposes; you might, for example, want to have an
operating system that fully and by default implements encrypted /
anonymized Internet routing (e.g. Tor, which we discuss elsewhere), but
have another that just uses regular Internet access for purposes of
convenience.
Certain kinds of high-security features, for example anonymized surfing,
are a "red flag" to forensic investigators, as well as juries and judges,
but they are technically difficult and time-consuming to install and / or
enable only when needed. You may want to therefore keep a "Key A" for
your regular use of the computer and a "Key B" for high-security
activities, substituting the appropriate key (along with a reboot) at the
appropriate time.
Despite all the above, there are a few limitations of using removable USB
keys:
(1.) Wear Leveling: This is a process that goes on behind your back when
you read from or write to the USB key; it is driven by the fact that the
Flash RAM chips that are the actual storage medium for the USB key have a
very large, but finite, number of times that they can be read from or
written to, before that particular memory range of the Flash RAM chip
becomes unstable. Wear leveling is a technique that minimizes the impact
of this factor, by spreading out where data is physically stored on a USB
key, so that a file that the operating system and file system thinks is
in a contiguous series of blocks in a single place, is in fact all over
the Flash RAM chip. (A good explanation of wear levelling is at:
http://en.wikipedia.org/wiki/Wear_levelling.)
From a data security point of view, unless you boot your USB key dozens
of times a day over a period of several years, or unless you are
repeatedly wiping it and re-writing it (note however that secure file
deletion technologies very definitely do do this), you are unlikely to
encounter "bad sector" type problems. However, you should be aware that
wear leveling has the side effect of, possibly, partly negating the
benefits of good security tools such as secure file deleters, simply
because its background data dispersal may allow some data to be retained
on the USB key even if your security tool thinks that this has been
eliminated once and for all.
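The way wear leveling can defeat a secure deleter is easy to demonstrate with a toy model of my own devising: every write to a logical block lands in a fresh physical block, so "overwriting" your secret merely unmaps the old copy rather than erasing it. (This is a deliberate simplification; real devices eventually erase and recycle blocks, but you have no control over when.)

```python
class ToyFlash:
    """Minimal model of a wear-leveled flash device: writes always go
    to a fresh physical block, and the previously mapped block is only
    unmapped, never erased."""

    def __init__(self, physical_blocks=8):
        self.physical = [None] * physical_blocks
        self.mapping = {}       # logical block -> physical block
        self.next_free = 0

    def write(self, logical, data):
        self.physical[self.next_free] = data
        self.mapping[logical] = self.next_free
        self.next_free += 1

    def read(self, logical):
        return self.physical[self.mapping[logical]]

flash = ToyFlash()
flash.write(0, b"my secret")
flash.write(0, b"\x00" * 9)     # the "secure" overwrite...
# The operating system now sees only zeros at logical block 0,
# but the secret still sits, untouched, in physical block 0:
assert flash.read(0) == b"\x00" * 9
assert b"my secret" in flash.physical
```

This is exactly why the only deletion you can fully trust, for flash media, is the dustbin (after the hammer).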
Another important implication of this is, I would strongly suggest that
you not enable a swapfile, swap partition or other demand paging area,
that is physically on your bootable USB key. This is bad both from a
reliability perspective (as the swapfile tends to be constantly read from
and written to) but also from a confidentiality one, since as we have
seen with swapfiles, these can be a gold mine of information for an
intruder; due to wear leveling, even if you think that you have erased or
"sanitized" your swapfile, if it physically resides on the USB key, it
may still have confidential data that is difficult to finally delete
without throwing the key in the dustbin.
(2.) Storage Capacity: Keep in mind that unless you have the budget to
purchase a large (~8 Gb+) USB key -- and at that point we start to get
into the "can I afford to throw it away at the first sign of trouble"
issue (the answer is "yes", by the way -- what's easier, spending another
100 Euro or spending another 10 years in jail?) -- while you may be able
to install and run a Linux operating system in, say, 2 Gb or so of space,
you will want to leave a little extra storage for operating system
component / application updates (particularly security-related updates)
as well as add-ons that you might want to install, as well as for
temporary files that the system needs while it is running.
For this reason I do not recommend that you set up a bootable USB key
with anything less than 4 Gb of space on the key; fortunately, 4 Gb keys
are pretty cheap these days and are bound to become even more so in the
near future.
(3.) Boot / Shutdown / Activity Speed: Although USB keys are much faster
than CD and DVD drives, they are still noticeably slower than even the
slowest "real" hard drive, particularly when they have a lot of
input/output-intensive tasks to perform, such as starting up the operating
system and shutting it down when you're finished your computing session.
(Indeed, at times it can look like the system is hung... it hasn't, just
be patient, something will happen eventually.) Note that once a program
is loaded into your computer's RAM memory, its execution speed will be no
different than if you had been running it from a normal hard drive.
There is an important consideration here, from a data security point of
view: you must always keep the slower performance of USB keys in the back
of your mind, when deciding how much time will be required to perform
certain tasks (for example, shutting down an encrypted TrueCrypt virtual
volume; this process requires the TrueCrypt application, or whatever
application you are using to encrypt your data, to encrypt each data
block and then write it back to wherever the virtual volume is being
stored). If for whatever reason your computing session will be
time-constrained, make sure that you take the USB key speed issue into
account, when planning what activities you will perform -- don't bite off
more than you can chew and then sit there fuming or sweating as you wait
for the USB key to do its stuff.
There is one other thing to remember about the speed issue: make sure
that you use a USB key with the "USB 2.0" interface format (the vast
majority of USB devices of all kinds sold today use this format, but
every so often you will run across one that is only compatible with the
original, much slower USB 1.1 format; I have found that in particular a
lot of cheap passive USB hubs, while they say they support USB 2.0,
actually only implement USB 1.1). Manufacturers make a lot of claims
about "my USB 2.0 key is faster than that other guy's", but in reality I
have found very little real difference, because the USB 2.0 standard
itself is what actually sets the key's transfer speed (there is little a
manufacturer can do to get around this, beyond becoming non-compliant
with the standard itself). Incidentally, be careful about
mixing USB 2.0 and USB 1.1 devices on the same USB hub; you may find that
all the devices connected to that hub are slowing down to the lowest
common denominator, that is the very slow USB 1.1 interface speed.
(4.) It's Still an OS: Last, but certainly not least, remember that just
because your computer operating system is on a USB key, that doesn't make
it any more, or less, a "real" operating system. You still have to cover
your tracks when using the USB key version of Linux (or whatever other
operating system) in the same way you would if the USB key were the hard
drive inside your PC. In other words, running the operating system from
the USB key is an excellent tool in your data security / privacy arsenal,
but it is in no way a SUBSTITUTE for all the other good practices that
you have to use, to stay secure.
External Hard Drives? Yes or No?
--------------------------------
As a final comment on this subject, lately there has been a lot of talk
on various Linux forums about the possibility of booting Linux from an
external USB (or FireWire) hard drive (e.g. a "real" hard drive with
multiple gigabytes of "real", read / write millions of times, storage
space, as opposed to the much more restricted environment of a USB key
with its Flash memory, wear leveling and smaller available storage
space).
Usually, of course, the motivation for doing this has little to do with
Before you read what is below, consider that the general idea of
"encrypting" -- that is, "scrambling with a secret key number that
(hopefully) only you know, so that someone who wants to look at the data
in its original state, has to know the secret key" -- is a GOOD thing.
Without using encryption to secure your data, you are basically a sitting
duck for the next computer criminal, secret police officer, snoopy spouse
or curious co-worker who wants to get the gory details of what you're
doing on your PC.
But also consider that encryption, IF AND ONLY IF IT IS PROPERLY
IMPLEMENTED (it's next to worthless if it isn't used right), is only a
necessary, not a sufficient, condition to true data security. It is
certainly true that an amateurishly encrypted file, folder or hard drive
will provide some security against a casual intruder, but it will last
all of about 5 minutes against the kind of sophisticated, well-equipped
attacker with physical access to your PC, that we are talking about in
this document.
Can The Police Break Your Encryption?
-------------------------------------
This topic seems to generate endless discussion in the on-line security /
anonymity community, and I won't presume to perpetuate too much of that
here, except to say a few things:
(1.) The short answer to the question, "Can they break my encryption, if
it's 'good' encryption and I have used it properly?", is, "probably...
YES... BUT".
Consider, in this context, that the U.S. NSA (National Security Agency)
has TEN TIMES the budget of the CIA (read: "the NSA has as much money as
it wants to spend, its budget has NO LIMIT"), that it has the world's
best cryptographers and cryptanalysts, that it has the world's most
powerful supercomputers, that it has been around doing this since the end
of the Second World War (i.e., 60+ years), and that breaking crypto is
about 50% of its day to day work responsibilities. (For more, see: http://
video.google.com/videosearch?q=Echelon+-+The+Most+Secret+Spy
+System&sitesearch=#.) The NSA is the Godzilla of code-breaking, and you
are "Bambi" against them.
Scared, yet? You should be... sort of.
(2.) The best available evidence -- and here, I have this based on
sources who I am not free to name -- is that if (and ONLY if),
sufficiently high priority is placed on breaking your encryption, the NSA
should be able to crack the password(s) of data files that are protected
by almost any widely available type of encryption -- PGP, TrueCrypt,
BitLocker, whatever you want -- in less than 24 hours, using the
incredibly powerful, expensive distributed computing supercomputer arrays
that they have dedicated to this task, down in the U.S.A..
Furthermore, most Western governments (perversely, it is residents of
"hostile" nations such as Russia, China, etc., who are 'safest' here,
because these countries are outside the Echelon / NATO "old boys' spy
club" and therefore do not have access to the NSA's tools) have
reciprocal arrangements with the NSA, meaning that in the unusual
circumstance where they encounter a type or implementation of encrypted
data that they can't easily crack themselves, they can just courier the
offending hard drive, CD, USB key, etc., off to their NSA friends at Fort
Meade, Maryland, and get the "professionals" to fix things once and for
all.
It is very telling (again, I can't reveal where I got this information...
you'll just have to trust me, and my source, that it's true), that even
for AES, the U.S. government itself specifically states that AES
encryption is only to be used for "sensitive but unclassified" data;
they use far more robust, internally developed, "tell anyone about it and
we shoot you" algorithms for more sensitive purposes (like, protecting
their missile launch codes from malicious foreign governments).
Now, ask yourself; if the U.S. government has data that's more secret
than "sensitive but unclassified", and if, (by inference), they have to
use even more powerful, still-secret encryption algorithms than AES to
secure their _own_ data, what does that say about the ability of AES (or
any other publicly known algorithm) to withstand a determined attack by an
exquisitely well-equipped, sophisticated opponent like the NSA itself?
The conclusion is unavoidable: both AES, and most other commercially
available encryption algorithms, CAN, beyond a reasonable doubt, be
conveniently broken by the U.S. NSA (and, possibly, certain other
entities of similar capabilities, for example the U.K.'s GCHQ, the CIA, the
U.S. military, maybe the Chinese secret police... obviously, the exact
abilities here are among the most closely guarded state secrets),
probably with trivial effort and probably within a short time period,
probably no more than a few days to a week at most.
So, in an "ideal world" from the cops' point of view, they have you
"owned" no matter what you do, right, dude?
(3.) Having established that the most powerful levels of government can,
in fact, probably break your "robustly" encrypted files, we have to ask
the next, more relevant question, which is, "WILL they"?
Here, the answer is much more complex, it's in shades of grey rather than
in black and white.
To fully understand this question, you have to first appreciate that the
police services within a given nation, as well as different nations
within the Western (NATO) political / economic sphere, have a very
definite pecking order, in terms of who gets access to what, based on
what set of criteria and on what time schedule.
In most Western nations, normal operational control over -- and therefore
use of -- the most powerful crypto cracking tools (e.g. the NSA's
buildings full of supercomputers mentioned above), is allocated only to a
very small subset of the nation's overall police apparatus; this is
because the most powerful anti-encryption tools are a relatively scarce
and expensive resource.
Furthermore, these tools were not originally meant for, nor was the huge
budget they require allocated for, mundane tasks like breaking the local
pervert's kiddy porn collection, (in China) figuring out which dissident
is sending e-mails about the Dalai Lama or finding out which stockbroker
has been doing insider trading; instead, a nation's cryptographic "crown
jewels" are intended for attacks on their counterparts in other, hostile
nations, for example they would usually be used to try to break in to a
potential adversary's military command and control apparatus, to spy on
the adversary's diplomatic traffic, and so on. They are just too
important to be "wasted" on day to day police activities.
Type of case                                     Priority        Typical turnaround
-----------------------------------------------  --------------  -------------------------------
(...)                                            Very Low        N/A (would probably be refused)
Virus / malware creation / distribution          Very Low        N/A (would probably be refused)
Ordinary drug dealing ("little fish")            Very Low        1 month or refused
Child pornography (most varieties)               Very Low        1 month or refused
Legal action (ordered by court or judge)         Low             1 month or refused
Ordinary economic crime (insider trading)        Moderate        1 month or refused
Special economic crime (large-scale fraud)       Moderate        1-3 weeks or refused
Special drug dealing ("kingpin")                 Moderate        1-3 weeks or refused
Computer hacking (against government)            Moderate        1-3 weeks
"Hard" political dissident (radical or violent)  Moderate        1-3 weeks
Foreign government secrets (ordinary)            High            1-4 days
"Terrorist" group (Al-Qaeda foot soldier)        High            1-4 days
Child pornography (imminent threat to a child)   High            1-3 days
Imminent death threat to ordinary citizen        High            1-2 days
Computer hacking (against military)              High            1-2 days
Foreign government secrets (military)            High            Same day / 1-3 days
"Terrorist" group (Al-Qaeda leader or plot)      Very high       1-2 days
Suspected assassination threat to leader(s)      Very high       Same day
Nuclear terrorism (movie-plot scenario)          Extremely high  1 hour
What's interesting in looking at the list above is the very
wide range of requests that the spooks at a place like GCHQ or the NSA may
have sent to them on a day to day basis. Try to put yourself in their
shoes; they have a finite amount of time within which they can operate
their supercomputers, and not all of this time can be allocated towards
breaking encryption on behalf of local law enforcement (some of it has to
be spent on research, backups, etc.). Furthermore, the amount of time
needed to break a key can vary, even for a supercomputer, making
scheduling rather difficult.
What it all amounts to, is an extremely important rule of using
encryption to defend yourself against the ultimate in sophisticated
attackers (e.g. the NSA):
IT IS JUST AS IMPORTANT TO CONVINCE YOUR ATTACKERS THAT YOU DON'T HAVE
ANYTHING OF INTEREST TO THEM, AS IT IS TO ENCRYPT THE DATA IN THE FIRST
PLACE.
Looking at the above table -- whether or not my exact assessments of the
NSA's perceptions regarding the relative importance of each type of
encrypted data are correct -- we can see that the reaction of the
intelligence agency, and therefore how much supercomputer effort it will
devote to the task, is intimately associated with what kind of encrypted
data the intelligence agency believes it is working with.
If they believe (correctly or otherwise) that the block of encrypted data
that your local police have sent them in the latest courier delivery,
contains the map to where Al-Qaeda has hidden the stolen nuclear bomb
that's set to go off in your favorite city in 3 hours, they are likely to
use all the resources at their disposal to crack and analyse this data
instantly.
If they believe that the police have simply given them the hard drive of
some kid who's pirating music, they are likely to tell the cops to go
take a hike and not bother them with trivial matters like the preceding.
YOUR "sensitive" data probably lies somewhere between these two extremes
on the above continuum, but clearly, it is in your interest to have the
police believe that whatever your data is, it's something less serious
than it really is. How you do this, is explained below.
Finally, in view of all the above, every so often the cops let the mask
slip and reveal something relevant about what their REAL, day-to-day
capabilities are regarding breaking encryption. Here's one juicy little
tid-bit, from (http://www.csoonline.com/article/221208/
The_Rise_of_Anti_Forensics?page=7):
"One rule hackers used to go by, says Grugq, was the 17-hour rule.
'Police officers [in London's forensics unit] had two days to examine a
computer. So your attack didn't have to be perfect. It just had to take
more than two eight-hour working days for someone to figure out. That was
like an unwritten rule. They only had those 16 hours to work on it. So if
you made it take 17 hours to figure out, you win.' Since then, Grugq
says, law enforcement has built up 18-month backlogs on systems to
investigate, giving them even less time per machine."
Remember, the police, just like you, live in a real world of limited and
finite resources, versus potentially infinite demands to break
encryption. And since encrypting data is so much easier than breaking
encryption, this is a race that you can win, if you're smart.
Or, try this little gem:
"...The (local U.K. police) team cracks low-grade encryption using 100
quad-core PCs but for high-grade encryption it relies on the threat of a
prison sentence for individuals refusing to hand over passwords or
decrypted files..."
(Source: http://networks.silicon.com/silicon/networks/
mobile/0,39024665,39282266-2,00.htm)
One Hundred Quad-Core PCs, eh? That's verrry interesting, because it
gives a glimpse into the kind of code-breaking infrastructure that is
LIKELY, as opposed to merely possible, to be used against you, the minute that
they break down the door and confiscate your supposedly robustly
encrypted hard drive.
Using my own background in the field, what this says to me is, "any level
of key under (currently) 128-bit, is possibly vulnerable to this kind of
attack"... but you have to keep in the back of your mind that the amount
of CPU cycles that a police department like this would be able to expend,
would depend upon a large number of other factors, particularly how many
other keys they had queued up to break, the nature of the protected
data (e.g. is it an easily recognizable .JPG or .DOC file, or is it some
double-encrypted TrueCrypt volume?), the strength of the original
passphrase used to encrypt the data, as well as (this plays a
surprisingly large factor) luck.
Personally, I wouldn't use anything less than 256-bit, largely because
you have to take into account the continuing evolution of CPU speeds and
mathematics processing capabilities (remember that it isn't just the
speed of the CPU that affects its ability to break encryption keys; other
factors, particularly the ability of the cryptanalytic application to
spread the work over multiple CPU cores / computers and the internal
architecture of each chip involved, are just as, or more, important).
The point is, the trade-off between using a 128-bit key or a 256-bit one
is usually a couple of seconds more for the larger key to encrypt or
decrypt a large data set. That seems to me to be a reasonable sacrifice
to make, considering the risks if the key is broken.
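To see why key length matters against pure brute force (as opposed to password guessing, which is almost always the weaker link), consider this sketch. The guesses-per-second figure is an invented, deliberately generous assumption for a "100 quad-core PCs" style rig, not a measured number:

```python
# Back-of-the-envelope brute-force cost for symmetric key sizes.
# ASSUMPTION: 400 cores, each trying 10 million keys per second.
GUESSES_PER_SEC = 400 * 10_000_000  # 4e9 trials/sec (hypothetical rig)

SECONDS_PER_YEAR = 365 * 24 * 3600

def avg_years_to_break(key_bits):
    """Average years to find a key by exhaustive search (half the keyspace)."""
    keyspace = 2 ** key_bits
    return (keyspace / 2) / GUESSES_PER_SEC / SECONDS_PER_YEAR

for bits in (40, 64, 128, 256):
    print("%3d-bit key: ~%.3g years on average" % (bits, avg_years_to_break(bits)))
```

Numbers like these are exactly why a practical attacker goes after the passphrase, a keylogger, or a carelessly stored plaintext copy, rather than the raw keyspace; that is the subject of the sections below.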
Plausible Deniability, Or: "It Wasn't Me"
-----------------------------------------
Another seldom-appreciated aspect of using encryption is the "since only
'bad people' use encryption, and I found encryption software on your
computer, that must mean that you're a 'bad person'" concept.
So, when the police seize your computer and find (say) a program like PGP
installed on it, then say to the jury, "See? SEE? This awful person has a
data hiding program on his PC, which obviously means that he's hiding
plans for terrorism, child pornography, drug dealing, {your favorite
bogeyman here} and a host of other nefarious activities too sinister to
describe, somewhere on his computer! I mean, for what _legitimate_
purpose would anyone ever hide anything from his upstanding, honest,
selfless law enforcement authorities? Clearly, there can be none. Your
Lordship, I move for immediate conviction!!"
You would be AMAZED at how readily juries made up of ordinary, computer-illiterate people will fall for this argument. (See: http://www.news.com/
Minnesota-court-takes-dim-view-of-encryption/2100-1030_3-5718978.html)
The larger point is, even the SLIGHTEST HINT that you are using
encryption, steganography (see below), password-protected resources,
data sanitization software or any other kind of "data hiding" system on
your PC, WHETHER OR NOT YOU HAVE IN FACT STORED ANY CONTROVERSIAL CONTENT
AT ALL ON THE COMPUTER, can and will be taken as "evidence of criminal
intent" by juries who are hand-picked by the prosecution for
technological illiteracy, ignorance of basic constitutional rights and
pre-disposition to believe the assertions made by authority figures
(e.g., the police). This is ridiculously unfair, but, as we have noted
elsewhere, the police don't have to, and won't, "play fair".
So all other things being equal, you will want to employ encryption
technologies that do their work as unobtrusively as possible -- in
particular, systems that can work without being permanently "installed".
Unfortunately, the current state of the art in this area is, in my
opinion, far from perfect. Especially in the Windows environment, the
vast majority of encryption programs that are otherwise acceptable from a
security point of view (with the notable exception of the Windows version
anything on your hard drive, not just ASCII text; it can be a .JPG
graphics file, a folder, whatever, as long as you don't need some kind of
decryption software and a key (see below) to access it.
"Cyphertext" -- the scrambled, secured, version of data which was
originally in "plaintext" format. The point here is that you need to have
the key and the encryption program to reverse the encryption process and
output a new copy of the plaintext version for your use.
One thing about cyphertext that is usually never mentioned in
discussions of this type: whereas in most cases you can instantly see
what kind of data is contained in a plaintext file (e.g. you can see if
it's a Microsoft Word document, a .JPG picture file, a folder, whatever),
most modern encryption programs deliberately obscure, or can obscure, the
type of file, so it just looks like gibberish. (I strongly suggest,
incidentally, that if you are using an encryption program that doesn't do
this by default -- a good example of which is PGP which by default will
take the plaintext file "MYDOC.DOC" and encrypt it into "MYDOC.DOC.PGP"
-- you do it manually yourself by changing the file name to something
like "MYSOCKS.XLS" or something equally misleading. Advanced forensics
programs like EnCase have a limited ability to defeat this kind of
trickery, but it is still worth doing, since every little element in your
layered defence system makes the attacker's job just that one little bit
more difficult.)
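One reason tools like EnCase can partially see through a misleading file name is that cyphertext is statistically conspicuous: a real .XLS file has structure and redundancy, while good encrypted output looks like uniformly random bytes. A crude version of that check is a Shannon entropy measurement over the byte distribution (a generic illustration of the principle, not EnCase's actual algorithm):

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte: ~8.0 for good cyphertext, much lower for text."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

english = b"The quick brown fox jumps over the lazy dog. " * 100
random_like = os.urandom(4096)  # stands in for encrypted output

print("plaintext entropy:  %.2f bits/byte" % shannon_entropy(english))
print("cyphertext entropy: %.2f bits/byte" % shannon_entropy(random_like))
```

A file named "MYSOCKS.XLS" that measures near 8.0 bits per byte is an immediate red flag to an examiner, which is why renaming is only one small layer in the defence, not a defence in itself.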
There is a specific reason why good encryption programs obscure the
original type of file; this is to defeat certain types of cryptanalysis
(see below) that use well-known characteristics of the internal formats
of some file types as a mechanism to try to guess the encryption key. If
an attacker knows that a secured file was originally in Microsoft Word
format, therefore, it is much easier for him to attack the encryption
because he knows that a certain number of bytes in the file will always be
the Microsoft Word "header" and so on. If he doesn't know that it was a .DOC
file he has to guess at the file type, which as you can imagine is a much
more significant task.
UPDATE : To defeat attacks based on keyloggers, I now strongly recommend
the use of "keyfiles" under TrueCrypt. These are just what the name
implies : A file that is in effect one part of your password.
The idea is that when you ask TrueCrypt to mount an encrypted volume,
whereas with simple encryption, it would just ask you for a password,
this time, it not only requests the password but also the keyfile as
well. The TrueCrypt program then uses a certain number of bytes from the
first part of the keyfile (as of now this is 1024 bytes, or "one
kilobyte"), and adds this to your password, forming a "super password"
that, in my opinion, would be difficult for even entities like the NSA to
break.
Note that I said "difficult", not "impossible"; the NSA has other tricks
besides password hacking, to get at your secret data, but here again, the
objective is to slow them down, make them think twice about going after
you.
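Conceptually, the keyfile mechanism described above can be sketched as follows. This is a simplified illustration only -- TrueCrypt's real keyfile processing mixes the keyfile bytes into the password through a more involved pool algorithm -- but the 1024-byte figure follows the text above:

```python
import hashlib

KEYFILE_BYTES = 1024  # only the first 1 KB of the keyfile contributes

def combined_secret(password: str, keyfile_data: bytes) -> bytes:
    """Sketch: derive a 'super password' from the typed password plus
    the first kilobyte of the keyfile. A keylogger captures only the
    typed password; without the keyfile bytes, this digest is unreachable."""
    material = password.encode("utf-8") + keyfile_data[:KEYFILE_BYTES]
    return hashlib.sha256(material).digest()

# Hypothetical example: an innocuous-looking graphics file as keyfile.
secret = combined_secret("correct horse battery", b"\x89PNG..." + b"\x00" * 2000)
print(secret.hex())
```

Note that because only the first kilobyte is used, anything that alters those leading bytes (an application rewriting the file's header, for instance) silently changes the derived secret and locks you out; that is the reason for the warnings below about choosing keyfiles whose headers never change.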
The great advantage of keyfiles is (apart from the obvious one noted
above, that is, significantly better password security), even if your
opponent has installed a keylogger on your computer that secretly records
each and every keystroke (including your password!) that you type in,
since the data in the keyfile does not pass through the keyboard buffer
(the part of the computer's RAM memory that handles key press and key
the file directory picture. Even better, why not store your keyfiles on-line, maybe as attachments to an e-mail to yourself? (Download them from
your AOL or Hotmail account as needed, use them to mount your TrueCrypt
volume, then delete them afterward.) Just make sure that your on-line
account doesn't expire, or... well, you should already know what happens
then.
(3.) Use files whose header information normally doesn't change. .JPG
format files and most other compressed graphics files are pretty good
here, as long as you don't edit them; you can also use regular ASCII text
files (don't open them in a text editor... ever!), specifically generated
keyfiles made for you by TrueCrypt (rename them so that they don't look
obvious)... really you can use anything as long as you test it to make
sure that it won't auto-change each time it is loaded into the "owning"
application that originally created this kind of file.
Keyfiles : Don't leave home without 'em. They can save you when your
password just isn't good enough.
Layered Defence, Or, Your Opponent's Attack Sequence, And Why It Matters
-------------------------------------------------------------------------
The term "layered defence" isn't popular just by accident; it's popular
because it works. This is a technique first (as far as I know) invented
by the military, and it basically describes a posture in which the
defender sets up a series of concentric (one inside the other, like one
of those old Russian dolls) protective measures that force an
attacker to overcome multiple barriers / threats, to get to the asset
that he's trying to attack. (You see this a lot in military air defence
systems, where an army will have "long", "medium" and "short" range
surface to air missiles, each with somewhat different characteristics; a
bomber plane that may be easily able to avoid the long-range SAMs, may
have a more difficult time with the short-range ones and vice versa.)
"Layered defence" is, in military jargon, more or less the direct
opposite of the "all-or-nothing" concept, in which a very strong outer
"shell" is constructed, but this powerful barrier is not backed up by
anything else. The obvious advantage of "layered defence" versus "all-or-nothing" is, whereas under all-or-nothing, the outside barrier is very
strong, if it somehow is compromised then the attacker has free rein
thereafter, under layered defence, although the outer barrier(s) are
perhaps less robust than would be the case for all-or-nothing, the
consequences of a single protective measure failing, are far less serious
because the attacker would still have to defeat a secondary, a tertiary,
etc. defence, before arriving at the ultimate target.
The point here is that layered defences are usually a much better bet,
simply because they admit for the possibility of failure and try to
compensate for it, if it occurs.
A good example of "all-or-nothing" versus "layered defence" can be seen
as follows:
(1.) All-or-nothing: You defend your sensitive data by encrypting your
entire hard drive with a robust algorithm and a strong password, which
you have committed to memory in your head (and have not written down
anywhere). If either of these defences are compromised, an attacker then
gets complete, unrestricted access to whatever sensitive data is on your
hard drive. The reasoning behind this is, "if I use these measures, and I
in fact, if you are careful about how you organize your cyphertext in the
first place, you can just give even a successful attacker fits. The
reason is that an attacker with an illegitimately derived key has to
first understand the nature of the data that he is trying to unencrypt,
and then has to have a program that conveniently enables this process.
For example, consider the following cases. (Note: I should point out here
that we are assuming that the attacker ALSO understands the algorithm
that you used to do the encryption, possibly because he tried thousands
or millions of variations of the key against each possible algorithm in
the process of breaking the key in the first place):
Case #1: You have encrypted a single, ASCII text file (.TXT) that the
attacker knows was originally an ASCII text file.
In this case the attacker has a pretty easy task to undertake, because
all he has to do is get any one of a wide variety of forensics programs
(EnCase will do) to use its built-in ability to decrypt the file by
running the key against the file using first the AES algorithm (see
below), then Blowfish, then Serpent, etc., until he sees something that
looks like English text in front of him. When he sees something in
English, he can stop because he knows he has the original plaintext.
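The "stop when it looks like English" step in Case #1 can itself be automated. A crude recognizer just checks whether the candidate output is printable text containing common English words. The sketch below shows the attacker's inner loop; the "cipher" here is a toy single-byte XOR stand-in (NOT AES, Blowfish or Serpent), and real forensics tools use far better language models:

```python
def plausible_english(data: bytes) -> bool:
    """Crude recognizer: all printable ASCII/whitespace, and contains ' the '."""
    printable = all(32 <= b < 127 or b in (9, 10, 13) for b in data)
    return printable and b" the " in data

def toy_decrypt(cyphertext: bytes, key: int) -> bytes:
    """Toy single-byte XOR 'algorithm', standing in for a real cipher."""
    return bytes(b ^ key for b in cyphertext)

# What the attacker holds: cyphertext, no key.
secret = bytes(b ^ 0x5A for b in b"Meet me at the usual place at noon.")

# Attacker's loop: try every candidate key, flag plausible plaintext.
for key in range(256):
    candidate = toy_decrypt(secret, key)
    if plausible_english(candidate):
        print("candidate key=0x%02X:" % key, candidate)
```

The key point is that recognizing success is cheap when the attacker knows he is looking for English text, which is exactly the advantage Case #2 and Case #3 take away from him.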
Case #2: You have encrypted a single file, but this time the original
might not have been ASCII; it might have been a .DOC, it might have been
a .JPG, or it might have been something else altogether.
Now in this case, although the attacker can still use the techniques
shown in Case #1 above, his job has just grown much more difficult,
because the attacker has to multiply the number of algorithms against the
number of potential file formats... and THEN, the attacker's forensics
program has to display each potential output, until something that looks
like an original plaintext document shows up in front of the attacker
himself.
Is this impossible? No; some modern forensics programs like EnCase have a
variety of tools to make this process more convenient, for example if
they use a given encryption key and given algorithm, and the first few
bytes of what is "unencrypted" in this manner looks like the file header
for a .JPG, a .DOC, etc., they will immediately bring it to the
attacker's attention or store it for later examination.
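The "first few bytes look like the file header" test mentioned above is easy to illustrate. Most file formats begin with well-known magic numbers, and a forensics tool simply compares candidate decryption output against a signature table. A minimal sketch (real tools carry thousands of signatures):

```python
# A few well-known file signatures ("magic numbers").
SIGNATURES = {
    b"\xFF\xD8\xFF": "JPEG image",
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"PK\x03\x04": "ZIP archive",
    b"\xD0\xCF\x11\xE0": "Legacy MS Office (.doc/.xls)",
    b"%PDF": "PDF document",
}

def sniff_type(data: bytes) -> str:
    """Guess a file type from its leading magic bytes, or report 'unknown'."""
    for magic, name in SIGNATURES.items():
        if data.startswith(magic):
            return name
    return "unknown"

print(sniff_type(b"%PDF-1.4 ..."))          # PDF document
print(sniff_type(b"\xFF\xD8\xFF\xE0JFIF"))  # JPEG image
print(sniff_type(b"random gibberish"))      # unknown
```

This is also why an obscure or custom container format is harder on the examiner: a candidate decryption that matches no known signature gives him no automatic "stop here" signal.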
However, there are a number of practical limits to this approach,
particularly, the original file might simply have been in a format that
the forensics program doesn't yet understand or cannot easily display.
[For example, suppose that the original program was in Microsoft Visio
(.VSD) format. The forensics application has to have the Visio "engine"
within it, to take the objects that are in the .VSD file and put them on
a screen in a way that a human police officer, corporate forensics
officer or other human intruder, can recognize as meaningful data. These
types of file display / interpretation engines are often complex to
implement and are also frequently copyrighted or otherwise difficult to
implement from a legal point of view.]
Case #3: You have encrypted a single file with a deliberately meaningless
name, like "X1aJnBA3.spl", that may be an ordinary document, may be an
archive (e.g. a .ZIP, .RAR, etc.), or may be something else altogether,
like an encrypted TrueCrypt volume. In this case, the attacker has no
easy way of recognizing the original plaintext
even if he does, in fact, have the right key. The problem is, basically,
"how does he KNOW that it's the right key", and "even if he DOES
(somehow) 'KNOW' that it's the right key, how does he access the file
with the key and turn it into human-readable data".
This is a much more challenging task than most of the forensics software
manufacturers would have you believe, because if the attacker doesn't
know what kind of application created the file in the first place, and if
it is a type of file that requires a specific application (one that by
its very nature cannot easily be embedded in a forensics application like
EnCase, in the manner in which a simple ASCII, .DOC or .JPG file viewer
can), then the only real way that the attacker has in which to access the
original data is for him to manually try various applications against the
"unencrypted" file, one by one, until he sees something that looks like
valid output.
For example, suppose that the attacker's forensics program tells him,
"Bingo, you've got a match, you've broken the key for file
'X1aJnBA3.spl'", but the program cannot tell what kind of file that it
is. The attacker must, for each application that he suspects may have
been used to create the file, (a) rename the file with the extension
(.DOC, .ZIP, .JPG) appropriate to the owning application and then (b) try
to load it or access it with that application.
Alternatively, the attacker can load the file into a hex editing program
to look at each individual byte, or sequence of bytes, and then try to
recognize something that gives the attacker an idea as to what kind of
file it really is. For some file types (e.g. Microsoft Word, etc., again
assuming that the original creator of the file has not used some kind of
secondary internal encryption, however weak, to further obscure the
file's internal structure), this isn't a particularly difficult task; for
others (for example graphics file formats, especially the less well-known
ones) this is possible but not nearly as easy; but it can be very time-consuming and frustrating to do if the file format is an obscure one or
if it doesn't contain obvious give-away strings (e.g.
"FMT:WORDPERFECT5.1") within the file's "header" area. The point is that
it is NOT easy to do this without the active or passive co-operation of
whomever originally recorded the data... which is another good reason to
never, ever, communicate in any way with a law enforcement official who
has you in custody, about any characteristic of your "controversial" data.
Is deriving the type of the file, using either of the above techniques,
impossible? No, for a determined attacker with a lot (and I do mean a
LOT) of time on his hands, eventually he may find what he's looking for.
But in practice, I have found that most attackers are equipped only to
test against the most well-known file formats and then only if there
aren't any secondary defensive measures (such as "hidden" TrueCrypt
volumes, double-encryption, deliberately misnamed files, double-container
archives and so on; all of these will be discussed below) affecting the
targeted files.
The point here is that a few simple defensive measures like the above
will enormously complicate and prolong the attacker's job, even if you're
unlucky or stupid enough for him to have compromised your keys.
"Cryptographic Algorithm" -- This is a very complex mathematical
equation that is typically used with a "key" (see below) to scramble /
unscramble plaintext in to / out of cyphertext.
even a 300,000 word dictionary created in this way is far, far more
efficient a source of potential passwords than the cracking program would
have to use (guess at) to attack your encrypted files, by simply picking
potential passwords out of thin air.
This is because experienced forensics attackers know that most people use
something familiar to them -- a pet's name, the name of a child, the name
of your favorite rock group or football club, some cute phrase like
"TheFalconFlies", etc. -- as a password, and that most people pick this
credential out of an existing document if they can't get it out of their
own imagination.
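Generating such a custom dictionary is trivial for the attacker. A sketch of the idea: sweep every readable document on the seized media, split the text into word-like tokens, and feed the unique tokens (plus simple case variations) straight into the password cracker. The document contents below are hypothetical, and real cracking tools add far more permutations ("leet" substitutions, appended digits, and so on):

```python
import re

def build_wordlist(documents):
    """Collect unique word-like tokens (4+ chars) from seized document texts."""
    words = set()
    for text in documents:
        for token in re.findall(r"[A-Za-z0-9]{4,}", text):
            words.add(token)               # as written
            words.add(token.lower())       # simple case variations
            words.add(token.capitalize())
    return sorted(words)

# Hypothetical seized documents:
docs = [
    "Dear Uncle Achmed, the trip to Mecca was wonderful...",
    "Fluffy's vet appointment is on Tuesday. TheFalconFlies!",
]
candidates = build_wordlist(docs)
print(len(candidates), "candidate passwords, e.g.", candidates[:5])
```

Every word you have ever written to disk in plaintext, in other words, is one line in the attacker's dictionary.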
Here again we see one of the most often-encountered stories in the
failure of digital data security, that is, "the tools work fine, but the
user implemented them in a way that makes it easy for a knowledgeable
attacker to bypass the tools via social engineering". The attacker isn't
going to go after the part of your encryption defences that's hard to
attack (e.g. the cryptographic algorithm); he's going to attack wherever
he thinks the weakest link is, and that's all too frequently the
carelessness with which you picked your password.
The point is, if you have left your password in plaintext (unencrypted)
form ANYWHERE on your hard drive -- no matter how out-of-the way that
place might seem, like, say, as the 3567th word of the letter you wrote
to Uncle Achmed on your last trip to Mecca -- the forensics program WILL
find it and WILL (successfully) use it to magically decrypt your
supposedly "secure" files. So don't, under any circumstances, EVER, leave
your passwords unencrypted, on any storage media that the police might
get their hands on. Doing so is just as good as leaving everything
unencrypted in the first place.
Two, and this is probably the most important thing, if you don't write
down your password anywhere (which is good practice if you want to keep
your data secure; keep in mind that when the black-suited SWAT team kicks
in your door, they are going to go over everything in your house that
could remotely give them a clue as to what your password is... trust me,
if it's written down, they'll find it), MAKE SURE THAT YOU PICK A
PASSWORD THAT YOU CAN AND WILL REMEMBER. AND USE IT TO DECRYPT YOUR DATA
EVERY SO OFTEN, SO YOU DON'T FORGET IT.
The point here is that if you look at the risks to your data that are
actually likely, as opposed to just the risks to you, you are far more
likely to lose all of your precious data by simply forgetting your
password than to suffer most of the other negative things described
elsewhere in this document.
Unfortunately, I have had exactly this (i.e., losing access to my
encrypted files) happen to me, more times than I'd care to admit; but I
am able to justify this by saying, "it's better that I lose some of this
data, once in a while, due to my own forgetfulness, than to suffer the
possibly far worse consequences of having it divulged to an attacker".
Your own way of balancing these two demands, that is, ease of use vs.
security, may be different.
Choose the balance that's right for you... just don't do something stupid
like choosing "Password" or "Secret" for your password.
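As a minimal sketch of the sort of self-check I mean -- the word list and thresholds below are arbitrary choices of mine, not any published standard -- a password picker might at least reject the classic blunders:

```python
# A toy password sanity check; the COMMON set and the length / class
# thresholds are my own arbitrary picks, purely to illustrate the idea.
COMMON = {"password", "secret", "123456", "letmein", "qwerty"}

def is_weak(pw: str) -> bool:
    if pw.lower() in COMMON or len(pw) < 10:
        return True
    classes = sum([any(c.islower() for c in pw),
                   any(c.isupper() for c in pw),
                   any(c.isdigit() for c in pw),
                   any(not c.isalnum() for c in pw)])
    return classes < 3  # demand at least three character classes

assert is_weak("Password") and is_weak("Secret")
assert not is_weak("Correct-Horse-42-Battery")
```

Of course, passing a filter like this says nothing about whether the same password also sits in plaintext in a letter somewhere on your hard drive.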
Most people confuse the term "key" (as described above) for some
encrypted data with the term "password" (what you, the human being, enter
to access the encrypted data). In practice, the encryption program
derives the key from your password, then applies a series of mathematical
transforms to turn your plaintext file into the "encrypted" version.
Since the encryption program "knows" what its own transforms are, it can
easily reverse the process so that you can get your data back by it being
decrypted. (Note: In reality, the transform process is much more complex
than I am showing above; I have described it this way to minimise the
amount of techno-babble, here.)
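To see why decryption is "easy" for the program that did the encrypting, consider a deliberately weak toy transform -- repeating-key XOR. This is NOT real encryption (it is trivially breakable); it is only a stand-in showing that applying the same keyed transform twice gets your plaintext back:

```python
def xor_transform(data: bytes, key: bytes) -> bytes:
    # XOR each byte with a repeating key; XOR is its own inverse, so the
    # same call both "encrypts" and "decrypts".
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret = b"meet me at the safe house"
scrambled = xor_transform(secret, b"passphrase")
assert scrambled != secret
assert xor_transform(scrambled, b"passphrase") == secret  # round trip
```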
Dirty Little Encryption Secrets
------------------------------
If encryption is, or can be, so robust, then why, do you think, are the
authorities regularly able to get access to the supposedly "secured" data
on their victims' computers? The reasons for this are very poorly
understood, but knowing the past history on this front is critically
important if you want your secured data to stay secure for more than a
few minutes in the tender hands of an experienced attacker.
Here are some of the most commonly heard stories about how, despite the
possible presence of encryption, the authorities got the goods on the
"perps":
(1.) The data wasn't encrypted, at all; that is, the victim "didn't think
that anyone would find that three level deep folder I had named,
'kyddy_pwrn'".
I'm not kidding about this; you hear it all the time. The jails are full
of stupid criminals, both high-tech ones and otherwise. (I especially
liked the story of the guy who -- this is God's own truth -- phoned up
his local police precinct in the southern United States, to report "some
other dude ripped off my hundred dollar bag of cocaine, can you get it
back for me please?"; the good news was, the cops were indeed able to
track down his stash of illegal drugs and arrest the other low-life who
stole it from the phone caller, but the BAD news was... well, I'm sure
you get the point.)
Another great story, which is referenced in one of the above video links,
is how the American cops routinely "ask the defendant to write a letter
of apology to the victim of a crime". Stupid criminals (or, innocent
people that the cops want to pin the charge on) will routinely comply
with this kind of request; of course, the second that the ink is dry on
the paper, the "apology letter" gets waved in front of the jury or judge
as a "signed confession".
(2.) A much more insidious variation on (1.) is, "gee, I THOUGHT that I
had encrypted it, what went wrong?". This story gets heard with certain
poorly implemented or inadequately tested encryption programs (or data
sanitization programs) that a user naively trusted to do what the
programs advertised that they could do.
Now, let's use a little common sense, here: if you were working on a
Microsoft Word document for (say) eight hours straight, and then hit the
little "Save" icon, wouldn't you check on your computer's hard disk,
wherever you thought you had saved the document, to see if it was, in
fact, recorded as being there? Would you just blindly trust the MS-Word
application to "do what it said it would do", or would you check, first,
before closing down the word processor, entirely?
Now, try to imagine that you are using a security program whose failure,
IN EVEN THE MOST MARGINAL WAY, could expose you to far more serious
consequences, ranging from social ostracism at the low end, through long
jail terms, to even (in some societies) death. Why on Earth would you use
something like that on your "confidential" data, without carefully
testing, first, that:
(a.) The encryption program, actually encrypted your data so that it
can't be recovered without the appropriate key, and didn't leave any part
of your original, plaintext data around for an attacker?
(b.) The data sanitization program, actually wiped your data, so that it
can't be recovered, period?
It totally escapes me how end users just "trust" security applications,
particularly programs that haven't been subjected to robust peer review,
with their critical data, without ever spending so much as a good
minute's worth of checking to see if the security program is working as
advertised.
Now, the problem here is, it can be quite difficult to properly test
security programs, if you aren't a trained observer and especially if you
don't have a lot, and I DO mean a lot, of spare time on your hands to
check out each and every nook and cranny in which a defective security
program may have leaked information. I actually do check out most of my
applications (a "hex editor" program can be very useful here, so you can
load the supposedly encrypted version of your file into the editor and
then examine what you see for obvious signs of poorly, or non-, encrypted
content), but this may not be practical for less technically oriented
users to do.
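The hex-editor inspection described above can even be partially automated. Here is a hedged Python sketch -- the 6.0-bit threshold is my own rule of thumb, not a formal criterion -- that flags two obvious failure signs: surviving plaintext fragments, and suspiciously low byte entropy:

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte. Well-encrypted output should sit
    close to 8.0; English plaintext typically lands around 4 to 5."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_leaky(ciphertext: bytes, known_plaintext: bytes) -> bool:
    """Flag the 'encrypted' file if a fragment of the original survives
    verbatim, or if its entropy is far below what a real cipher produces.
    (The 6.0 threshold is an arbitrary rule of thumb.)"""
    return known_plaintext in ciphertext or byte_entropy(ciphertext) < 6.0
```

Feed it the file your encryption program produced, plus a phrase you know occurs in the original; a True result means the program flunked the most basic test.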
So, as the best advice that I can give you here, all I can suggest is
that you use well-known programs -- for example, PGP, TrueCrypt, and so
on -- that have a good reputation. Look especially for Open Source based
programs (fortunately, in the security world, there are a lot of these).
attack on all the others. (See comments for the "loss leader" approach.)
(7.) Cost -- This is not an issue in the Linux world, of course, but
since most of the Windows-environment Full Disk Encryption systems are
for-profit, commercial products, the cost to implement them can be a
deterrent for the budget-conscious, especially if you have more than a
couple of computers that you intend to protect in this manner.
(8.) Backups have to be encrypted anyway -- This is a "sleeper" issue,
not only with FDE but with conventional encryption as well; the issue is
simply more serious if you are using FDE as your primary, or only, method
of static data security. It comes about from the fact that it is the
height of idiocy (but you would be amazed at how frequently this error
shows up in the day-to-day world of static data protection, at both the
consumer and corporate levels) to robustly encrypt the original copy of a
sensitive data object, but to then allow a backup copy of precisely the
same thing to be made without any protection at all.
Suppose, for the sake of discussion, that you have encrypted your entire
hard drive with FDE, but that you then use something like Apple's "Time
Machine" system to do regular backups of important files, to an external
hard drive. Or, suppose you even do it manually, just by copying the
involved files to the USB or Firewire hard drive. Unless the destination
folder / directory where the original file will be copied to, is itself
robustly encrypted, then the instant it shows up on the backup medium
(whatever that is -- it could be an external drive, a tape, a DVD+/-RW, a
Flash USB key, another computer on your home LAN, an on-line Web-based
backup service... anything), THE FILE IS IN PLAINTEXT, COMPLETELY
UNPROTECTED FORMAT.
Psychologically, FDE is subtly dangerous here because, unlike more
conventional forms of encryption (say, TrueCrypt-based virtual volumes),
it isn't "in your face"; it works silently in the background and doesn't
make you conscious of the fact that you have to consider the security of
your data wherever it is transmitted or stored. Unless each and every
place where you might store a sensitive data file is itself protected by
either FDE or some equivalent form of "always on" encryption -- a near
impossibility, considering the very wide variety of storage media
available to users these days -- you have to either:
(a) Leave a single copy of the sensitive file on the FDE-protected hard
drive and never back it up, to any different medium (risky from a data
availability point of view);
(b) Ensure that the recipient end of the backup process has its own,
robust form of encryption (a perfectly valid idea; but if you have to do
this, then what is the value of FDE? you might as well use the other form
of encryption at both ends);
(c) Periodically back up the ENTIRE FDE-protected hard drive -- each and
every bit and byte of it, starting with physical hard drive sector "0,0"
and continuing to the very last bit on the drive -- to some other
medium (encrypted or not); this requires very large amounts of backup
media space, it's clumsy, and it has risks all of its own (when and if
you have to restore, is the FDE system going to let you access the hard
drive? remember, the FDE software can't tell whether you are "you" or a
hostile attacker; it is designed to be hard to restore to a different
hard drive -- if it weren't, it would be much less secure).
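For what it's worth, the "image the entire drive, sector by sector" approach can be sketched in a few lines of Python. On Linux the source would be a raw device node such as /dev/sdb (root required; the path is just an example), but the identical code works on any ordinary file:

```python
import shutil

def raw_image(src_path: str, dst_path: str) -> None:
    """Copy a drive (or file) byte-for-byte, from the very first byte to
    the very last. Applied to an FDE-protected drive, the resulting image
    is itself ciphertext -- but it is also huge: the full size of the
    drive, used space and free space alike."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        shutil.copyfileobj(src, dst, length=1 << 20)  # 1 MiB chunks
```

The length of the image equals the capacity of the drive, not the amount of data on it, which is exactly the media-space problem described above.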
(9.) No plausible deniability -- The fact that a hard drive is FDE-
protected is impossible to hide, and because everything on the machine
sits behind the same lock, every session at the keyboard becomes a
"sensitive" one. If you use the computer anywhere semi-public, say a
shared flat or fraternity house, those around you are bound to notice
that "something funny is going on here, Maksim is always looking over his
shoulder and locking the computer every time that someone walks by".
If, on the other hand, the sensitive content is separately encrypted --
so that you can have normal access to the mundane functions of the
computer without even being aware that something "nasty" is squirreled
away in some encrypted file or directory -- then you can safely take the
chance of going for a short break without attracting undue attention.
(Just be aware, it's not a good idea to allow nosy people extensive
access of this type without properly sanitizing the PC. Remember that
thumbnail pictures, browser histories and file names, just to name three
resources that might be accessible even though the original sensitive
content is now encrypted, can give rise to a lot of awkward questions, if
a "nosy" sibling, spouse or friend starts to poke around in the wrong
places on your computer.)
The bottom line, here: FDE isn't a bad technology, and it can play a
valuable role under some circumstances, but you need to carefully
evaluate its good and bad points, then decide if it's right for you.
Personally, I would never use Full Disk Encryption without also using
conventional methods like TrueCrypt, but you may feel differently. It's
up to you; it's your data, and your life.
RAM Disks: Cheap, Dirty, Risky, Effective
----------------------------------------
Now the intelligent data hider also has another option at his disposal
that I find is rarely discussed in the context of anti-forensics; this is
a shame because it can be very effective against many of the "end run"
types of attacks (that is, attacks that target traces of a file or
pointers to it, as opposed to the original file itself), that I have
described above.
I'm referring here to "RAM disks". This is actually quite an archaic
concept, going back at least as far as the start of the microcomputer
age. The concept is simply to take some of your computer's precious RAM
memory (the kind that sort of goes 'poof' when the computer is powered
down) and segment it off into a kind of very fast virtual hard drive. Now
the obvious disadvantage to this is, unless you copy the contents of
whatever you had in the RAM disk, to a more permanent storage medium
(logically, something like a TrueCrypt volume on a conventional hard
drive), when you turn the computer off, then you just lost all the files
that you had placed in the RAM disk.
But... maybe that's what you wanted to have happen, right? Consider the
'jackboot at the door' scenario and you'll see why using a RAM disk can
be a useful tool -- all you have to do is turn off the computer, and,
barring some of the highly specialized "RAM chilling" attacks mentioned
much earlier in this document, all of your "controversial" data instantly
disappears, whether it was encrypted or not. (Yes, it _is_ theoretically
possible that a very knowledgeable and determined attacker, with exactly
the right forensics tools and a perfect procedure, could still compromise
this data; but it's very unlikely, unless you were stupid enough to do
something like bragging to him, 'ha ha, you can't get me'. You never
would do something like that... would you? If you would, start learning
how to be very polite, as you say to the police, 'I'm sorry officer but I
can't help you with that, would you like to speak to my solicitor?')
The other disadvantage about RAM disks is obviously that "you don't get
something for nothing". That is, the amount of RAM memory in your PC is
finite, and is probably quite limited, particularly for low-budget
computers, and the more RAM that you allocate to the RAM disk, the less
that your "normal" operating system, plus any applications that you
choose to run, will have to use. In extreme cases, if you are overly
ambitious with the RAM disk, you can stop your operating system from
functioning altogether, although this is usually a temporary problem :
just reboot and try a lower number for the amount of memory to give to
the RAM disk.
In this respect, Windows users are again likely to be at a rather severe
disadvantage compared to Linux users, since the Windows operating system
uses so much more RAM memory to begin with, but even a low-footprint (~50
megabyte) RAM disk can still be very useful for Windows users for
temporary storage of small, sensitive files like lists of URLs, buddy
lists and so on.
For information on Windows RAM disks you could try searching Google, or
check out : http://channel9.msdn.com/forums/TechOff/19142-Microsofts-XPRAM-Disk-Driver/
For information on Linux RAM disks, search Google (or your own
distribution's help system) for "tmpfs" and "RAMfs", they are both very
easy to use and both have their own advantages and disadvantages.
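Linux users can experiment without even configuring a dedicated RAM disk, because /dev/shm is a tmpfs (RAM-backed) mount on virtually every modern distribution. A hedged sketch follows -- note that the fallback branch is my own choice, and the fallback directory offers NONE of tmpfs's vanish-on-power-off behaviour:

```python
import os
import tempfile

def ram_dir() -> str:
    """Prefer /dev/shm, which on most Linux systems is tmpfs and so lives
    only in RAM; fall back to the normal temp directory otherwise (which
    is on disk and gives NO disappear-on-power-off guarantee!)."""
    shm = "/dev/shm"
    return shm if os.path.isdir(shm) else tempfile.gettempdir()

# Sensitive scratch data written here (when on tmpfs) never touches the platter:
path = os.path.join(ram_dir(), "buddy_list.txt")
with open(path, "w") as f:
    f.write("temporary URLs and buddy lists\n")
os.remove(path)
```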
One other note : I am not sure how well RAM disks (for any operating
system) would react to the "Suspend" or "Hibernate" features of some more
modern computers (especially laptops), so I'd be cautious about using
them in those environments. Generally, doing any kind of security-sensitive
work on your PC and then putting it into any kind of power-saving state is
a BAD thing from a security and privacy point of view,
because you have no control over whether the BIOS of the computer and the
operating system might swap out some of your "controversial" data, from a
secured area such as an encrypted volume, out to something completely
unprotected (such as the "hibernation file" on the hard disk).
TrueCrypt's Website specifically mentions this as a significant data
leakage possibility, so you would do well to heed their advice and not
let your computer go to sleep while it's in use for your 'sensitive' data
access.
Steganography
-------------
"Steganography", a word derived from Greek roots meaning "covered writing",
is (from the computer security and privacy point of view), basically the
discipline of hiding / embedding a first file (which you want to remain
secret / un-noticed), within a second, "container" file (which is meant
to be unencrypted, so that it can freely be looked at by an examiner).
You can think of the "container" file as kind of an innocent-looking
"shell", surrounding the hidden inner document, which, following the
analogy, would be like the "pearl within the shell". To the outside, it
just looks like sea-bottom... that's why the outside of the shell is so
plain-looking. But inside, why, there's a lovely pearl, if only you knew
which part of sea-bottom to pry open!
Technically, the concept leverages the fact that many modern file formats
such as JPEG, MPEG, WAV and so on, either allocate and use more storage
space (bits and bytes) than the amount of data that they contain actually
needs to use, or, they can -- with the right software -- be forced into
storing this data in fewer bits and bytes than the original, unmodified
"container" file had allocated.
For many types of files (particularly graphics files like a picture of a
mountain scene or audio files like a recording of your grandmother's
voice), it is extremely difficult for even an experienced forensics
investigator to be able to look at a steganographically modified file and
intuitively "know" that some of its internal bits and bytes have been
"stolen" (so that they can be used in which to store the "hidden" file),
especially if an unmodified, original version of the same file is not
available as a base for comparison.
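The "stolen bits" trick is easiest to see in the classic least-significant-bit scheme, sketched here over a plain byte buffer standing in for uncompressed pixel data (a simplified model -- real tools must survive formats like JPEG, which is considerably harder):

```python
def embed_lsb(carrier: bytearray, payload: bytes) -> bytearray:
    """Hide the payload's bits in the least-significant bit of each
    carrier byte (think: one colour channel of a bitmap). Eight carrier
    bytes are consumed per payload byte."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(carrier):
        raise ValueError("payload too large for carrier")
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return out

def extract_lsb(carrier: bytes, n_bytes: int) -> bytes:
    """Reassemble the payload from the carrier's low bits."""
    return bytes(
        sum(((carrier[i * 8 + j] & 1) << j) for j in range(8))
        for i in range(n_bytes)
    )
```

Note the 8-to-1 cost: each hidden byte consumes eight carrier bytes, which is the root of the "hidden file must be far smaller than the container" limitation.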
The main privacy value of steganography derives from the fact that the
container file is usually something quite mundane and innocuous, for
example it could be a .jpg picture of the Eiffel Tower -- in other words,
something completely legitimate and "non-controversial". Usually,
steganography is combined with encryption of the "hidden" document; this
is not just to make it difficult to extract without the originator's
permission, but also because it technically makes it more difficult to
detect in the first place.
What is quite "cool", from a privacy point of view, about steganography,
is simply that it adds a highly desirable factor to your privacy "defence
in depth" strategy : namely, it (can, if perfectly implemented) defeat
even the suspicion that some kind of "controversial" data is being
hidden, in the first place.
That is, absent steganography (or some other similar technique, for
example the weak defence of just changing the file extensions of an
encrypted file from ".pgp" to ".doc"), an attacker knows that you're
hiding something -- this is implicit in the presence of encrypted files
that have no other plausible reason to be present on your hard drive --
and his only remaining job is to find a way to get past the encryption
that's protecting the original plaintext data.
With steganography, conversely, anyone but a sophisticated, intelligent
attacker who is armed with very good forensics tools, would look at the
steganographically modified "container" files and automatically conclude
that "this is just some picture of the Eiffel Tower, better look elsewhere
for Achmed's nefarious Islamic militant plans".
So, is steganography the "nuclear weapon" of the anti-forensics toolkit?
Unfortunately, no. Here are some of its shortcomings :
* Perhaps the most important limitation of steganography is simply that
by the very nature of how it works, the size of the inner, "hidden" file
has to be far smaller than that of the outer "container" file, although
this is to some extent offset by the fact that the relationship is
proportionate -- that is, the larger the container file, the larger the
hidden file that's allowed.
The reason that steganography has to work this way is, if more than a
small percentage of the container file is allocated to storage of the
hidden file, then "artifacts" (errors) in the container file start to
become apparent to even the untutored eye, a situation which defeats the
purpose of steganography in the first place. (For example, if you used
too much of the file's total space for hidden file storage, your picture
of the Eiffel Tower might end up with a sky that's purple with pink polka
dots.)
(you can run Windows programs on a Mac, for example) and, unlike
emulators -- which are notorious for compatibility problems due to the
subtle differences between different computer operating systems -- since
the foreign application is running under the operating system that it was
intended for, compatibility problems are almost non-existent.
(c.) It can (possibly) make better use of surplus CPU cycles and hard
drive space. Where this is most important is in "hosting" environments
where several organizations can be sold / rented use of the same physical
server computer; basically, each organization gets a virtualized
operating system instance which they think is running on dedicated
hardware, but which is, in reality, sharing use of the same physical
hardware with several other organizations' own virtualized operating
systems, with access to the CPU being arbitrated between these by the
virtualization software. This can make for a very efficient use of
expensive hardware and physical hosting space, if correctly implemented.
(d.) It looks cool. (Having Linux, MacOS, IBM OS/2 Warp and Solaris
virtualized sessions all running at once on your Windows XP desktop, is
worth a lot of "geek credibility".)
(e.) It (possibly) can make your systems more robust, in the sense that
in theory, to back up the entire shooting match, all you have to back up
is the relatively small virtual machine configuration file and the .vmdk
disk image associated with it... all you need to move this image to a
different PC is a minimally configured host operating system on the new
physical hardware and suitable (physical) disk space on the new PC.
This is of course an important consideration in hosted and corporate
environments, but it's also important for groups like anti-virus
researchers who constantly have to experiment with malicious software
that might otherwise trash a conventional Windows PC; without
virtualization, an AV researcher in this situation would be faced with
the painful business of re-installing Windows from scratch (device
drivers and all), but with virtualization, all that need be done is to
make another copy of the "clean" .vmdk disk image (the original one,
before the testing of all the nasty stuff began) and virtually boot that
image up again.
Having said all that, there are some general purpose disadvantages of
this approach, that you need to know about:
(f.) Virtualization, depending upon the minimum and realistic memory, CPU
and hard drive requirements of the host and guest operating systems, can
be very hardware-hungry.
Personally, if (for example) you want to host one guest copy of Windows
XP and one guest copy of Red Hat Linux on your Solaris host computer, you
should assume that you'll need at least 2 to 3 gigabytes more physical
RAM memory than you needed just to run Solaris, as well as at least 40 GB
more hard drive space.
CPU requirements will vary according to what applications are running on
the host and guest operating systems, but don't try doing this on
anything less than a CPU purchased after about 2005, and in particular
don't try it on a "crippled" CPU like an Intel Celeron. I strongly
recommend a multi-core CPU, as some of these newer chips have
virtualization-friendly features built right in to them and all of them
have at least the theoretical ability for one or more of the virtual
operating systems to be "off-loaded" on to the secondary CPU / core,
within its compartmentalised virtual "box", and can only be stored within
the .vmdk file that represents the guest operating system's virtual hard
drive. Wipe the virtual hard drive and all possible local static evidence
is gone forever -- not only can't it even be guessed at, but also, it's
very difficult to even tell which kind of operating system the person
physically at the computer, was using at a specific time. (Possibly, a
forensics expert could data-mine the host operating system to establish
that "aha, file MY_WINDOWS_VISTA.VMDK was erased by this filthy pervert
at 10:20 p.m. last Tuesday!"... but the obvious solution to that is to
sanitize the appropriate log files of the host operating system.)
From an anti-forensics point of view, this is pretty hard to beat,
particularly when used for purposes like a single Web / Internet surfing
session that does not involve the permanent capture or storage of
potentially incriminating data.
(b.) A virtualized session may be an attractive option in certain
scenarios (for example, if one wishes to make remote use of one's work
PC, which is otherwise tightly locked down by the local MIS department)
where the target computer cannot be re-booted into an alternate operating
system. Load a virtualized guest operating system on the main computer
and you're away to the races; to the MIS department, this should look
just as if nothing has changed, but if configured in the right way,
incoming remote access request packets should be taken over by the
"listening" virtual guest operating system (not the main host one) and
the computer can then be used as you see fit.
However, apart from the resource issues noted above, there are some
significant drawbacks that you should be aware of, before entrusting your
"confidential" data to this technology.
(c.) The most important drawback concerns the "kick in the door"
scenario. To properly shut down the system, you will have to (1) shut
down whatever application you were running within the virtualized guest
operating system; (2) log out of / shut down the guest operating system
itself; (3) (ideally) shut down the virtualization software (which will
be running as a task under the host operating system); (4) shut down the
host operating system, then (5) smile at the police, saying, "why are you
invading the house of a perfectly innocent person like me, officer?"
The point is, this all TAKES TIME, potentially quite a bit more time than
(say) just doing a forced dismount of all your TrueCrypt drive containers
and hitting the power button. Time is one factor that you may not have a
surplus of, if sudden physical compromise of your computing premises is a
possibility. This must be taken into account, if you decide to go the
virtualization route.
(d.) By default, virtualized .vmdk disk volumes are UNENCRYPTED, meaning
that an intruder with physical access to your computer should be able to
access any "controversial" data contained within them, with relatively
little effort (just boot up the appropriate virtual guest operating
system and Bob's your uncle!). The privacy and confidentiality
implications of this should be obvious -- you are going to have to
encrypt the data within the guest operating system [e.g., there will now
be one or more encrypted file(s) somewhere within the .vmdk virtual hard
drive file]. It has been suggested that you could "nest" a TrueCrypt (or
other) encrypted container somewhere within the .vmdk file system. This
seems implausible, but it may be worth a try.
An alternative way to preserve confidentiality -- one which, I should
warn the reader, I have so far not personally tried, and which I suspect
might be unstable (remember: "instability + strong encryption == 'kiss
your data goodbye'") -- would be to (1) create an encrypted TrueCrypt
container (a large one, obviously) on a suitable physical hard drive; (2)
use VMware to create a .vmdk virtualized disk image within the TrueCrypt
container; and (3) when setting up the guest operating system, tell it to
use the now safely encrypted virtualized .vmdk file.
The problem here is that you have multiple layers of software and device
driver control, trying to regulate access to the data stored in this
scheme, and each and every one of them must work PERFECTLY, ALL THE TIME,
or a data loss disaster is certain to ensue.
Consider:
+----------------------------------------+
|           Virtual (guest) OS           |
|          (let's say, Windows)          |
|    {trying to read or write a byte}    |
+----------------------------------------+
                    |
                   \|/
+----------------------------------------+
|           Virtualization s/w           |
|   (VMware for Red Hat Linux host OS)   |
|       virtual hard drive emulator      |
+----------------------------------------+
                    |
                   \|/
+----------------------------------------+
|             Encryption s/w             |
| (TrueCrypt for Red Hat Linux host OS)  |
+----------------------------------------+
                    |
                   \|/
+----------------------------------------+
|        Real / primary (host) OS        |
|       (let's say, Red Hat Linux)       |
+----------------------------------------+
                    |
                   \|/
+----------------------------------------+
|          Red Hat Linux host OS         |
|    (physical hard disk driver s/w)     |
+----------------------------------------+
                    |
                   \|/
+----------------------------------------+
|           Physical hard drive          |
+----------------------------------------+
As I think you can see, in this configuration -- something like it would
be required to keep data within the virtualized .vmdk disk file really
secure -- we are basically doubling (from 3 to 6) the levels of
indirection between the application that is trying to read or write a
byte to the "hard disk" (or virtual representation thereof), and the
actual, cold, hard magnetic medium where that byte of data is physically
going to be stored or read from.
If the slightest thing goes wrong with this setup (for example, a "bad
sector" appears within either the .vmdk file or the TrueCrypt container),
the entire house of cards may come crashing down and the data within
the .vmdk file may become permanently inaccessible (unless, of course,
you're the NSA and you have a supercomputer and an office-full of expert
code-breakers, to get it back for you). FDE (Full Disk Encryption) just
makes matters worse, because it adds a seventh (!) layer of read / write
data handling that must always work with 100% correctness. And all of
this is on top of the fact that data access speeds will be quite
substantially impacted by all the handing-off of work from one layer of
software to another.
Personally I'm unwilling to entrust my most important data to any such
configuration, at least not without a great deal more testing... but you
may feel differently. It's up to you.
(e.) Remember that some of the industry-leading virtualization software
vendors -- the best example of which is VMware -- are U.S.-based
companies, which, unfortunately, raises the possibility of a "backdoor"
for U.S. spying agencies and law enforcement personnel, in more or less
the same way as with Microsoft Windows, Sun Solaris and Red Hat
Linux. The best solution to this is (like the solution for the backdoor
problem for the ordinary operating systems) to use Open Source-based
virtualization software that is downloaded from somewhere outside the
United States.
Having said that, personally I think that the chance of a backdoor
working efficiently in a virtualized environment is rather small, because
of a variety of technical factors as well as the fact that it would be
far simpler (not simple, mind you) and more effective for the NSA, CIA,
etc., just to attack the host operating system -- after all, there are
far more copies of Microsoft Windows than there ever will be of VMware,
so a secret backdoor in Windows is going to be much more useful to U.S.
intelligence agencies than would be one in VMware. Still, you can never
be too careful, where your personal data is concerned.
Incidentally, as of this writing, some virtualization
software vendors (VMware) are requiring those who download the "free"
versions of the software, to fill out lengthy forms, including name,
location, e-mail address, etc., to be "allowed" to download the
application(s). Needless to say, if you must use these vendors' wares,
don't give them accurate personally identifying information. Would you
trade a free copy of VMware, for a 20-year prison sentence in Guantanamo
Bay, courtesy of the NSA, the FBI and the U.S. military? I thought not.
So, "just don't".
(f.) Finally, remember that a virtualized operating system session is a
"real" operating system, with all the good and bad things that implies.
If, as is likely, a "normal" instance of Windows XP installed on a "real"
PC has an NSA-inspired backdoor inside it, then your virtualized guest
version of XP will, too. The fact that it's running in a window inside a
virtual machine, doesn't make it any more or less secure, in and of
itself. If, for example, you are running Windows XP or Vista as a guest,
just because it's virtualized doesn't mean that you don't have to wipe
your swapfile, clear the "Recent Files" list, and so on.
It's very easy to skip these steps, because psychologically, the guest
operating system "looks just like another application in a window of the
host operating system" -- but it's _not_, it's an entire operating system
all unto itself.
At the very minimum, if the host and guest operating systems are
dissimilar (as they will be in 90 per cent of all the cases), then you
will have to learn, and diligently execute, two different sets of
confidentiality techniques (one for the guest and one for the host
operating system). If you're having a hard time remembering just ONE set
of secure computing guidelines, this might prove to be a challenge.
So what's the "bottom line"? As of right now, I believe that
virtualization is a potentially valuable tool in your anti-forensics
toolkit, but also that you have to carefully consider both the
accessibility and "tear-down time" issues before you use it on a routine
basis, for access to and storage of, your "confidential" data. None the
less, it's a promising technology which shows every sign of becoming even
more powerful in the future, so it's definitely worth keeping an eye on.
Keyloggers
----------
One other comment, here -- keep in mind that a very simple method for
anyone, be that a law enforcement official, or a spy, or (much more
likely) just a common garden variety cyber-criminal, to instantly 'pwn'
everything and anything that you do over the Internet (or locally on your
own PC, for that matter), is to install a 'keylogger', that is, a
malicious little piece of software that quietly hides on your PC and then
sends every keystroke that you type, back to... wherever.
Since this would capture not only which Websites you visited, but also
the user-name and password credentials that you used to log in to them,
it doesn't take a rocket scientist to understand what a grave threat to
your privacy a 'keylogger' represents. (As in, 'if you get
one of these things on your PC, you can pretty much forget about all the
rest of your security measures, they'll all be bypassed by an intruder
who now knows your security credentials, just as well as you do'.)
The reason why I mention this in the context of Internet surveillance is,
many of the keyloggers afflicting computers today, end up being installed
simply because of end-user (YOU!) stupidity : the classic example is a
Windows user who clicks on an .exe format file that came in via e-mail,
because the associated message claimed, "Run this leet program 2 see Anna
Kornukova n00d, dudes!". (Well, when the 'dude' double-clicked on
"ANNANUDE.EXE", he may or may not have seen anything cute and au naturel,
but he definitely did get his PC added to a cyber-criminal's keylogger
and botnet.)
Most keyloggers work only with Windows, which is yet another reason why
not to use this awful operating system, but even if you use something
like Linux or the MacOS, you still should be careful:
At all costs, avoid running programs whose origins and authenticity you
cannot verify. This is, unfortunately, a rule that is in practice quite
difficult to follow consistently, because in theory, anything can be
faked on the Internet, and some of the most important security tools
(example : TrueCrypt) in your arsenal have to be downloaded from the
Internet... so how would you tell a "real" copy of TrueCrypt from a
"fake" one that's just designed to put malware on your PC? The truth is,
there are limits to how far you can go in verifying this kind of thing;
you can never be 100% sure that the application is "safe".
Finally
-------
Stay tuned for Part 2 of this series, "No Man Is An Island", in which I
will be describing some valuable tips for how to stay secure when using
the Internet.
================================================================================
APPENDIX: How to Secure Your Windows Computer and Protect Your Privacy
By Howard Fosdick
1 May 2008
Do you know that --
* Windows secretly records all the web sites you've ever visited?
* After you delete your Outlook emails and empty the Waste Basket,
someone could still read your email?
* After you delete a file and empty the Recycle Bin, the file still
exists?
* Your computer might run software that spies on you?
* Your computer might be a bot, a slave computer waiting to perform
tasks assigned by a remote master?
* The web sites you visit might be able to compile a complete dossier of
your online activities?
* Microsoft Word and Excel documents contain secret keys that uniquely
identify you? They also collect statistics telling anyone how long you
spent working on them and when?
This guide explains these -- and many other -- threats to your security
and privacy when you use Windows computers. It describes these concerns
in simple, non-technical terms. The goal is to provide information anyone
can understand. This guide also offers solutions: safe practices you can
follow, and free programs you can install. Download links appear for the
free programs as they are cited. No one can guarantee the security and
privacy of your Windows computer.
Achieving foolproof security and privacy with Windows is difficult. Even
most computer professionals don't have this expertise. Instead, this
guide addresses the security and privacy needs of most Windows users,
most of the time. Follow its recommendations and your chances of a
security or privacy problem will be minimal. Since this guide leaves out
technical details and obscure threats, it includes a detailed Appendix.
Look there first for deeper explanations and links to more information.
Why Security and Privacy Matter
Why should you care about making Windows secure and private? Once young
"hackers" tried to breach Windows security for thrills. But today
penetrating Windows computers yields big money. So professional criminals
have moved in, including overseas gangs and organized crime. All intend
to make money off you -- or anyone else who does not know how to secure
Windows. Security threats are increasing exponentially.
This guide tells you how to defend yourself against those trying to steal
your passwords, personal data, and financial information. It helps you
secure your Windows system from outside manipulation or even destruction.
It also helps you deal with corporations and governments that breach
Windows security and your privacy for their own ends. You have privacy if
only you determine when, how, and to whom your personal information is
communicated. Organizations try to gain advantage by eliminating your
privacy. This guide helps you defend it.
The Threats
Windows security and privacy concerns fall into three categories --
1. How to defend your computer against outside penetration attempts
2. How Windows tracks your behavior --and how to stop it
3. How to protect your privacy when using the Internet
The first two threats are specific to Windows computers. The last one
applies to the use of any kind of computer. These three points comprise
the outline to this guide.
1. How to Defend Against Penetration Attempts
There are many reasons someone or some organization out in the Internet
might want to penetrate your Windows computer. Here are a few examples:
* To secretly install software that steals your passwords or financial
information
* To enroll your computer as a bot that secretly sends out junk email or
spam
* To implant software that tracks your personal web surfing habits
* To destroy programs or data on your PC
Your goals are to --
* Prevent installation of malicious software or malware
* Identify and eliminate any malware that does get installed
* Prevent malware from sending information from your computer out into
the web
* Prevent any other secret penetration of your computer
1.1 Act Safely Online
Let's start with the basics. Your use of your computer -- your online
"password management" tool from any of the dozen free products listed
here. If you set up a home wireless network, be sure to assign the router
a password!
1.9 Always Back Up Your Data
One day you turn on your computer and it won't start. Yikes! What now? If
you backed up your data, you won't lose it no matter what the problem is.
Backing up data is simple. For example, keep all your Word documents in a
single Folder, then write that Folder to a plug-in USB memory stick after
you update the documents. Or, write out all your data Folders once a week
to a writeable CD. You can also try an automatic online backup service
like Mozy.
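The copy-a-folder-to-a-stick routine above can be sketched in a few
lines of script; the paths in the usage comment are made-up examples,
so substitute your own:

```python
import shutil
from datetime import date
from pathlib import Path

def back_up(documents_folder, backup_drive):
    """Copy a data folder into a dated subfolder on the backup medium."""
    dest = Path(backup_drive) / ("backup-" + date.today().isoformat())
    shutil.copytree(documents_folder, dest, dirs_exist_ok=True)
    return dest

# Hypothetical usage -- documents folder to a USB stick mounted as E:
#   back_up(r"C:\Users\me\Documents", r"E:\")
```

Keeping each day's copy in its own dated folder means a corrupted file
doesn't silently overwrite your only good backup.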
For the few minutes it takes to make a backup, you'll insure your data
against a system meltdown. This also protects you if malware corrupts or
destroys what's on your disk drive. If you didn't back up your data and
you have a system problem, you can still recover your data as long as the
disk drive still works and the data files are not corrupted. You could,
for example, take the disk drive out of the computer and place it into
another Windows machine as its second drive. Then read your data -- and
back it up!
If the problem is that Windows won't start up, the web offers tons of
advice on how to fix and start Windows (see the Appendix). Another option
is to start the machine using a Linux operating system Live CD and use
Linux to read and save data from your Windows disk. If the problem is
that the disk drive itself fails, you'll need your data backup. If you
didn't make one, your only option is to remove the drive and send it to a
service that uses forensics to recover data. This is expensive and may or
may not be able to restore your data. Learn the lesson from this guide
rather than from experience --back up your data!
1.10 Encrypt Your Data
Even if you have locked your Windows system with a good password, anyone
with physical access to your computer can still read the data! One easy
way to do this is simply to boot up the Linux operating system using a
Live CD, then read the Windows files with Linux. This circumvents the
Windows password that otherwise protects the files.
Modern versions of Windows like Vista and XP include built-in encryption.
Right-click on either a Folder or File to see its Properties. The
Properties' Advanced button allows you to specify that all the files in
the Folder or the single File will be automatically encrypted and
decrypted for you. This protects that data from being read even if
someone circumvents your Windows password. It is sufficient protection
for most situations.
Alternatively, you might install free encryption software like TrueCrypt,
BestCrypt or many others.
If you encrypt your data, be sure you will always be able to decrypt it!
If the encryption is based on a key you enter, you must remember the key.
If the encryption is based on an encryption certificate, be sure to back
up or "export" the certificates, as described here. You might wish to
keep unencrypted backups of your data on CD or USB memory stick.
Laptop and notebook computers are most at risk to physical access by an
outsider because they are most frequently lost or stolen -- keep all data
* ActiveX
* Active Scripting (or Scripting)
* .NET components (or .NET Framework components)
* Java (or Java VM)
* JavaScript
Turn off the programmability of your browser by un-checking those
keywords at these menu options
Browser -- How to Set Programmability:
Internet Explorer: Tools | Internet Options | Security | Internet Custom
Level
Firefox (version 2 on): Tools | Options | Content
Opera: Tools | Preferences | Advanced | Content
K-Meleon: Edit | Advanced Preferences | JavaScript
SeaMonkey: Edit | Preferences | Advanced (Java) | Scripts and Plugins
(JavaScript)
Internet Explorer Vulnerabilities -- The Internet Explorer browser has
historically been vulnerable to malware. Free programs like
SpywareBlaster, SpywareGuard, HijackThis, BHODemon, and others help
prevent and fix these problems.
Tracking Internet Explorer's vulnerabilities is time-consuming because
criminals continually devise new "IE attacks." If you use Internet
Explorer, be sure you're using the latest version and that Windows'
automatic update feature is enabled so that downloads will quickly fix
any newly-discovered bug. Some feel that IE versions 7 and 8 adequately
address the security issues of earlier versions. I believe that competing
Note that Outlook stores much other information in the same file along
with your obsolete emails. You can either erase all that data along with
your emails by securely deleting the file, or, follow this procedure to
securely delete the email while retaining the other information.
For Outlook Express emails and Windows address books, just securely
delete the files with the given extensions and you're done.
How to Securely Delete All Personal Data on Your Computer -- How can you
securely delete all your personal information on an old computer before
giving it away or disposing of it? This is difficult to achieve if you
wish to preserve Windows and its installed programs. It takes a lot of
time and there is no single tool that performs this function. The easiest
solution is to overwrite the entire hard disk. This destroys all your
personal information, wherever Windows hides it. Unfortunately it also
destroys Windows itself and all its installed programs.
Be sure to copy whatever data you want to keep to another computer or
storage medium first!
Several free programs securely overwrite your entire disk, such as
Darik's Boot and Nuke. The only possible way to recover data after
running such programs is expensive physical analysis of the disk media,
which may not be successful. Over-writing a disk is secure deletion for
normal computer use.
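The principle behind these tools is to replace the data on disk before
releasing it, not merely to unlink the name. A minimal single-pass,
single-file sketch of the idea follows; note it is an illustration of
the principle, not a substitute for whole-disk tools like Darik's Boot
and Nuke, since it cannot reach slack space, filesystem journal copies,
or blocks remapped by SSD wear levelling:

```python
import os

def overwrite_and_delete(path, passes=1):
    """Overwrite a file's bytes in place with random data, flush the
    overwrite to the physical disk, then delete the file."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())  # force the new bytes onto the disk
    os.remove(path)
```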
2.2 The Registry Contains Personal Data
Windows keeps a central database of information crucial to its operations
called the Registry. Our interest in the Registry is that it stores your
personal information. Examples include the information you enter when you
register Windows and Office products like Word and Excel, lists of web
sites you have visited, login profiles required for using various
applications, and much more.
Upcoming sections discuss your personal information in the Registry and
how you can remove it. For now, let's just introduce a few useful
Registry facts --
* The Registry is a large, complicated database (about which you can find
tons of material on the Web).
* The Registry consists of thousands of individual entries. Each entry
consists of two parts, a key and a value. Each value is the setting for
its associated key.
* The Registry organizes the entries into hierarchies.
* This guide tells how to change or remove your personal information in
the Registry by running free programs, but it doesn't cover how to edit
the Registry yourself -- a technical topic beyond the scope of this paper.
* Making a mistake while editing the Registry could damage Windows, so
you should only edit it if you feel well qualified to do so. Always make
a backup before editing the Registry.
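The key/value hierarchy described above can be pictured as nested
folders of named settings. Here is a toy model (the key path is a real
location where Internet Explorer keeps data, discussed in the next
section, but the stored value is invented):

```python
# Toy model of the Registry: hives at the top, key paths below them,
# and named values inside each key.
registry = {
    "HKEY_CURRENT_USER": {
        r"Software\Microsoft\Internet Explorer\TypedURLs": {
            "url1": "http://example.com/",  # invented value
        },
    },
}

def read_value(hive, key_path, name):
    """Look up a single value, the way regedit (or, in a script on a
    real Windows system, Python's winreg module) would."""
    return registry[hive][key_path][name]
```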
2.3 Windows Tracks All the Web Sites You've Ever Visited
Windows keeps a list of all the web sites you've ever visited. You can
tell Internet Explorer to eliminate this list through the IE selection
Tools | Internet Options | Clear History. But Windows still retains it!
To view the web site history Windows retains, download and run a free
program like Index.dat Spy. Windows records your web surfing history in a
file named index.dat. (There are actually several index.dat files on your
computer . . . I'll describe what the others track later.) The index.dat
files are special -- you can not delete them or Windows will not start.
Since Windows prevents you from changing or deleting these files, you
need to run a free program to erase your web site history.
If you use Internet Explorer and have the default Auto-Complete feature
turned on, your web surfing history is also kept in a second location --
in the Windows Registry. (You'll see web sites you've visited listed
under the Registry key TypedURLs.) If you turn off Auto-Complete,
Internet Explorer no longer saves your web history in the Registry.
To turn off Auto-complete, select Tools | Internet Options | Content |
AutoComplete, and un-check auto-complete of Web addresses. This stops
Windows from tracking your web surfing history in the Registry.
Several free programs securely erase your web site history from both the
Registry and the index.dat files. Among them are CCleaner, Free Internet
Windows Washer, CleanUp!, and ScrubXP. The shareware programs PurgeIE and
PurgeFox are also popular. I've found CCleaner to be both thorough and
easy-to-use.
2.4 Windows Leaves Your Personal Information in its Temporary Files
Windows, web browsers, and other programs leave a ton of temporary files
on your computer. Some hold web pages you've recently viewed, so that if
you go back to that web page, you'll be able to view it quickly from disk
instead of downloading it again from the web. Other files are used by
Windows and its applications as temporary work areas. Still others are
used to log program actions or store debugging information. These
temporary files sometimes contain personal information.
For example, web page caches contain copies of web forms into which
you've entered passwords or your credit card number. You may not wish to
disclose the web pages, videos, images, audio files, and downloaded
programs you've viewed lately. The trouble is that these temporary files
are not erased after use. Some remain until the system needs that disk
space for another purpose. Others hang around forever, unless you know to
clean them.
The free programs above that erase your web history also erase these
temporary files and cache areas. Find more free programs here and a
review of the best commercial programs here.
2.5 Your "Most Recently Used" Lists Show What You're Working On
Windows tracks the documents you've recently worked with through its Most
Recently Used or "MRU" lists. MRU lists are kept by Microsoft Office
products like Word and Excel, as well as applications from other vendors.
Window's Start | Documents list also shows documents you have recently
worked with.
Products keep MRU lists for your convenience. They help you recall and
quickly open documents you're currently working on. These lists also
offer the perfect tracking tool for anyone who wants to find out what
to create and edit the file. You can't see everything Office saves in the
Properties panel --some of it remains hidden from your view.
You can change some of the Properties information by right-clicking on
the file name, then editing it. Or alter it while editing the document by
selecting Edit | Properties.
Other data is collected for you whether you want it or not, and you can
not change it. Should you care? It depends on whether it matters if
anyone sees this information. In most cases it doesn't. But sometimes
this data is private and its exposure matters.
Just ask former U.K. Prime Minister Tony Blair. He took Britain to war
against Iraq in 2003 based on the contents of what he presented as his
government's authoritative Iraq Dossier. But this Word file's properties
exposed the high-powered dossier as the work of an American graduate
student, not a team of British government experts. A political firestorm
ensued.
Microsoft offers manual procedures that minimize Office files' hidden
information. But these are too cumbersome to be useful. Microsoft
eventually developed a free tool to cleanse Office documents created with
Office 2002 SP2 or later. But restrictions limit its value. The free tool
Doc Scrubber is an alternative for cleansing the Properties metadata from
Word files.
Whichever tool you use, you must run it as your last action before you
distribute your finished Office document. Cleansing Microsoft Office
files is inconvenient and it's difficult to remember to do it. Those who
require "clean" office documents are advised to use the free office suite
that competes with Office, called OpenOffice.org. The OpenOffice suite
does not require personally-identifying Registration information and it
gives you control over the Properties information. It reads and writes
Microsoft Office file formats. (I edited this document interchangeably
with OpenOffice and several different versions of Microsoft Word, then
created the final PDF file using OpenOffice.)
2.8 Microsoft Embeds Secret Identifiers in Your Documents
Windows, Windows Media Player, Internet Explorer, and other Microsoft
applications contain a number that identifies the software called the
Globally Unique Identifier or GUID. Microsoft Office embeds the GUID in
every document you create. The GUID could be used to trace the documents
you create back to your computer and copy of Microsoft Office. It could
even theoretically be used to identify you when you surf the web. The
free program ID-Blaster Plus can randomize (change) the GUIDs embedded in
Windows, Internet Explorer, and Windows Media player. The free program
Doc Scrubber erases GUIDs contained in a single Word document or all the
Word documents in a Folder.
If you're concerned about secret identifiers embedded in your Office
documents, use the OpenOffice suite instead. This compatible alternative
to Microsoft Office doesn't embed GUIDs in your documents nor does it
require personal registration and Properties information.
2.9 Chart of Tracking Technologies
I've discussed the major areas in which Windows and other Microsoft
products track your computer use. In most cases you can not turn off this
tracking. But the free programs I've described will delete the tracking
information. The chart below summarizes where and how Windows and other
Microsoft products track your behavior.
Many items apply only to specific software versions. A few functions
report your behavior back to Microsoft. Examples include Windows Media
Player sending your personal audio and video play lists to Microsoft,
and the company's attempts to use the Internet to remotely cripple
Windows installs it considers illegal.
--- Where Windows Tracks Your Behavior ---
Application Logs: Records on how often you run various programs
Clipboard Data: Data you've copied/pasted is in this memory area
Common Dialog History: Lists Windows "dialogs" with which you've
interacted
Empty Directory Entries: File pointers unused by Windows but still usable
by those with special software
Error Reporting Services: Reports Windows or Microsoft Office errors back
to Microsoft
File Slack Space: "Unused" parts of file clusters on disk that may
contain old data
File Properties: Office document Properties contain your personal editing
information and more
Find / Search History: Lists all your Find or Search queries (used by
Windows auto-complete)
GUIDs: Embedded secret codes that link Office documents back to your
computer
Hotfix Uninstallers: Temporary files left for un-doing Windows updates
IIS Log files: Logged actions for Microsoft's IIS web server
Index.dat Files: Secret files that list all web sites you visit and other
data
Infection reporting: Microsoft's Malicious Software Removal Tool reports
infections to Microsoft
Last user login: Tracks the last user login to Windows
Microsoft Office History: MRU lists for Office products like Word, Excel,
Powerpoint, Access, and Photo Editor
Open / Save History: List of documents or files for these actions
Recently Opened Doc. List: MRU list accessible off Start | Documents
Recycle Bin: Deleted files remain accessible here
Registration of MS Office: Registration information is kept in the
product Options, Splash panels, and Registry
People who give out their personal data expose themselves to manipulation
or worse. Even the U.S. government is researching the harvesting of
personal data from social networking sites for public surveillance. And
why not? People voluntarily post the information. Fans of social
networking will consider these cautions anachronistic. Please read how
people expose themselves to manipulation or harm by posting personal
data, found in authoritative books such as The Digital Person, The
Soft Cage, or The Future of Reputation: Gossip, Rumor, and Privacy on the
Internet.
We need government regulation to enforce minimal rights for social
network users, much the way we have consumer-protection legislation for
credit cards. Meanwhile, protect yourself by educating yourself. Tiny
bits of information can be collected and compiled by web computers into
comprehensive profiles. If an organization can collect enough small bits
of information -- for example, just the names of all the web sites you
visit -- they can eventually develop a complete picture of who you are,
what you do, how you live, and what you believe.
Privacy is power. You give away your personal power when you give out
personal information. You assume risk you can not measure at the time you
assume it.
3.2 Don't Let Web Sites Track You
Cookies are small files that web sites store on your computer's disk.
They allow web sites to store information about your interaction with
them. For example, they might store the data required for you to purchase
items across the several web pages this involves. However, cookies --
originally called tracking cookies -- can also be used to track your
movement across the web. Depending on the software using them, this data
could be used to create a detailed record of your behavior as you surf.
The resulting profile might be used for innocuous purposes, such as
targeted marketing, or for malicious reasons, like spying.
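How a tracking cookie builds such a profile can be sketched in a few
lines: a third-party tracker embedded on several sites issues your
browser one ID cookie, then logs every page that ID turns up on. (The
site names below are invented, and the class is a toy stand-in for a
real advertising network's server.)

```python
import itertools

class Tracker:
    """Toy third-party tracker: one cookie ID per browser, one visit log."""
    def __init__(self):
        self._next_id = itertools.count(1)
        self.visits = {}                 # cookie ID -> list of pages seen

    def page_view(self, cookie, page):
        if cookie is None:               # first contact: issue a cookie
            cookie = "uid-%d" % next(self._next_id)
            self.visits[cookie] = []
        self.visits[cookie].append(page)
        return cookie                    # the browser sends this back next time

tracker = Tracker()
c = tracker.page_view(None, "news.example/politics")
c = tracker.page_view(c, "shop.example/checkout")
c = tracker.page_view(c, "health.example/symptoms")
# tracker.visits[c] is now a cross-site profile of one "anonymous" visitor
```

Blocking or deleting the cookie severs the link between sessions, which
is exactly why the settings below matter.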
Most browsers accept cookies by default. To retain your privacy, set the
browser not to accept any cookies other than exceptions you specify. Then
only web sites you approve can set cookies on your computer. A few web
sites won't let you interact with them unless you accept their cookies --
but most will. You can also set most browsers to automatically delete all
cookies when you exit. This allows web sites to set the cookies required
for transactions like purchasing through the web but prevents tracking
you across sessions.
To manage cookie settings in your browser, access these panels:
To turn cookies on or off --
Internet Explorer: Tools | Internet Options | Privacy | Advanced
Firefox: (version 2 on) Tools | Options | Privacy | Cookies
Opera: Tools | Quick Preferences | Enable Cookies
K-Meleon: Tools | Privacy | Block Cookies
SeaMonkey: Edit | Preferences | Privacy & Security | Cookies
To allow specific web sites to set cookies --
Remember that emails are often the basis for phishing scams -- attempts
to get you to reveal your personal information for nefarious purposes.
Don't respond to email that may not be from a legitimate source. Don't
even open it. Examples include claims you've won the lottery, pleas for
help in handling large sums of money, sales pitches for outrageous deals,
and the like.
Email may also be spoofed -- masquerading as from a legitimate source
when it is not. Examples are emails that ask you to click on a link to
update your credit card account or those that ask for account information
or passwords.
Legitimate businesses are well aware of criminal misuse of email and
don't conduct serious business transactions through mass emailings!
Many people use two email addresses to avoid spam and retain their
privacy. They use one account as a "junk" email address for filling out
web site forms, joining forums, and the like. This email address doesn't
disclose the person's identity and it collects the spam. They reserve a
second email account for personal communications. They never give this
one out except to personal friends, so it remains spam-free.
3.4 Web Surfing Privacy
If you tested your computer as suggested earlier using ShieldsUp!, you
saw that it gives out information to every web site you visit. This data
includes your Internet protocol address, operating system, browser
version, and more.
Your Internet protocol address or IP address is a unique identifier
assigned to your computer when you access the Internet. Web sites can use
it to track you. Your Internet Service Provider or ISP assigns your
computer its IP address using one of several different techniques. How
traceable you are on the web varies according to the technique your ISP
employs along with several other factors, such as whether you allow web
sites to set cookies and whether your computer is compromised by malware.
One way to mask who you are when web surfing is to change your IP
address. Anonymizing services hide your IP address and location from the
web sites you visit by stripping it out as your data passes through them
on the way to your destination web site. Anonymizers help hide your
identity and prevent web sites from tracking you but they are not a
perfect privacy solution (because the anonymizer itself could be
compromised). Anonymizer.com is a very popular free anonymizing service.
Find other free services here and here.
A more robust approach to anonymity is offered by free software from JAP
and TOR. Both route your data through intermediary servers called proxies
so that the destination web site can't identify you. Your data is
encrypted in transit, so it can not be intercepted or read by anyone who
scans passing data. Services like JAP and TOR present two downsides.
First, your data is sent through intermediary computers on the way to its
destination, so response time slows. Whether you still find it acceptable
depends on many factors; the best way to find out is simply to try the
software for yourself.
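The idea behind TOR-style proxy chains is layered encryption: the
client wraps the message once per relay, and each relay can peel off
exactly one layer, so no single relay sees both who you are and what
you sent. The sketch below uses a toy XOR "cipher" purely as a stand-in
for the real per-hop cryptography (TOR actually negotiates AES keys for
each relay in the circuit):

```python
import os

def xor_layer(data, key):
    """One 'layer' of the toy cipher (stand-in for real encryption)."""
    return bytes(b ^ k for b, k in zip(data, key))

def client_wrap(message, relay_keys):
    """The sender adds one layer per relay before transmitting."""
    for key in reversed(relay_keys):
        message = xor_layer(message, key)
    return message

def pass_through_relays(wrapped, relay_keys):
    """Each relay strips only its own layer; only after the last relay's
    layer is removed does the plaintext emerge."""
    for key in relay_keys:
        wrapped = xor_layer(wrapped, key)
    return wrapped

message = b"who I am and what I said"
relay_keys = [os.urandom(len(message)) for _ in range(3)]  # one key per relay
delivered = pass_through_relays(client_wrap(message, relay_keys), relay_keys)
assert delivered == message
```

Each intermediary computer doing this unwrapping is also the reason for
the slowdown mentioned above: every hop adds a decryption step and a
network round trip.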
These systems still leave you exposed to privacy violations by your
Internet Service Provider. Your ISP is your computer's entry point
into the Internet, so your ISP can track all your actions online.
For this reason, when the Bush administration decided to monitor American
citizens through the Internet, they proposed legislation that would force
all ISPs to keep two years of data about all their customers' activities.
The government's current web surveillance program led major ISPs like
AT&T / Yahoo to change their privacy policies; in June 2006, AT&T's was
revised to say that AT&T -- not its customers -- owns all the customers'
Internet records and can use them however it likes.
Repeated congressional proposals to immunize ISPs from all legal
challenges only make sense if the ISPs colluded with the government in
illegally monitoring Internet activities.
3.5 Search Privacy
Web sites that help you search the web are called search engines. Popular
search engines like Google, Yahoo!, and MSN Search retain records of all
your web searches.
Individually, the keywords you type into search engines show little. But
aggregated, they may expose your identity. They may also expose your
innermost thoughts -- or be misinterpreted as doing so.
Here's an example. Say the search engine captures you entering this list
of searches --
* kill wife
* how to kill wife
* killing with untraceable substance
* kill with unknown substance
Someone might interpret these searches as indicating that you should be
reported to the authorities because you're planning a murder. But what if
you were simply doing research for that murder mystery you always wanted
to write? You can see the need for search privacy. Do you have it? The
federal government has demanded search records from major search engines
like Google, AOL, Yahoo, and MSN.
While the government claims these requests are to combat sexual
predators, most analysts believe they are for public surveillance and
data mining. America Online (AOL) accidentally posted online 20 million
personal queries from over 650,000 users. The data was immediately
gobbled up and saved in other web servers. Although AOL apologized and
quickly took down their posting, this data will probably remain available
forever somewhere. Some people can be identified by their "anonymous"
searches and have been harmed as a result of this violation of their
privacy.
The AOL incident is a wake-up call to those who don't understand how
small pieces of information about people can be collected by Internet
servers, then compiled into revealing dossiers about our individual
behaviors. This principle doesn't just apply to search engines. It
extends to the web sites you visit, the books you buy online, the
comments you enter into forums, the political web sites you read, and all
your other web activities. The AOL debacle demonstrates that web
activities many assume to be anonymous can sometimes be traceable to
specific individuals.