
Unit 3

REGULATING INTERNET CONTENT, TECHNOLOGY AND SAFETY

INTRODUCTION
Why we shouldn't regulate the internet
Many people argue that it would be wrong to attempt to regulate the Internet and advance
arguments such as the following:
The Internet was created as a totally different kind of network and should be a free
space. This argument essentially refers back to the origins of the Net, when it was first used by
the military as an open network designed to ensure that the communication always got through,
and then by academics who largely knew and trusted each other and put a high value on freedom
of expression.
The Internet is a pull, not a push, communications network. This argument implicitly
accepts that it is acceptable, even necessary, to regulate content which is simply 'pushed' at the
consumer, such as conventional radio and television broadcasting, but suggests that it is
unnecessary or inappropriate to regulate content which the consumer 'pulls' to him or her, such as
by surfing or searching on the Net.
The Internet is a global network that simply cannot be regulated. Almost all content
regulation is based on national laws and conventions and of course the Net is a worldwide
phenomenon, so it is argued that, even if one wanted to do so, any regulation of Internet content
could not be effective.
The Internet is a technically complex and evolving network that can never be regulated.
Effectively the Web only became a mass medium in the mid-1990s and, since then, developments
- like Google and blogging - have been so rapid that, it is argued, any attempt to regulate the
medium is doomed.
Any form of regulation is flawed and imperfect. This argument rests on the experience
that techniques such as blocking of content by filters have often been less than perfect - for
instance, sometimes offensive material still gets through and other times educational material is
blocked.

Why we should regulate the internet


However, there are strong arguments in favor of some form of regulation of the Internet,
including the following:
The Internet is fundamentally just another communications network. The argument runs:
if we regulate radio, television, and telecommunications networks, why don't we regulate the
Net? This argument suggests that not only is the Internet, in a sense, just another network; as a
result of convergence it is essentially becoming the network, so that, if we do not regulate the
Net at all, effectively over time we are going to abandon the notion of content regulation.
There is a range of problematic content on the Internet. There is illegal content such as
child abuse images; there is harmful content such as advice on how to commit suicide; and there
is offensive content such as pornography. The argument goes that we cannot regulate these
different forms of problematic content in the same way, but equally we cannot simply ignore it.
There is criminal activity on the Internet. Spam, scams, viruses, hacking, phishing,
money laundering, identity theft, grooming of children... almost all criminal activity in the
physical world has its online analogue and again, the argument goes, we cannot simply ignore
this.
The Internet now has users in every country, totalling several billion. This
argument implicitly accepts that the origins of the Internet involved a philosophy of free
expression but insists that the user base and the range of activities of the Net are now so
fundamentally different that it is a mass medium and needs regulation like other media.
Most users want some form of regulation or control. The Oxford Internet Survey (OxIS),
published in May 2005, had some typical indications of this. When asked if
governments should regulate the Internet, 29% said that they should. When asked who should be
responsible for restricting children's content, 95% said parents, 75% said ISPs and 46% said
government.
REGULATING ILLEGAL CONTENT
It is a major proposition of this presentation that any sensible discussion of
regulation of the Internet needs to distinguish between illegal content, harmful content, and
offensive content. I now deal with these in turn.

In the UK, illegal content is effectively regulated by the Internet Watch Foundation (IWF)
through a self-regulatory approach.
What is the nature of the IWF?
It was founded by the industry in late 1996 when two trade bodies - the Internet Service
Providers' Association (ISPA) and the London Internet Exchange (LINX) - together with some
large players like BT and AOL came together to create the body.
It has an independent Chair selected through open advertisement and appointed by the Board.
The Board consists of six non-industry members selected through open advertisement and three
industry members chosen by the Funding Council.
There is a Funding Council which has on it representatives of every subscribing member.
The IWF has no statutory powers. Although in effect it is giving force to certain aspects of the
criminal law, all its notices and advice are technically advisory.
The IWF has no Government funding, although it does receive European Union funding under
the Commission's Safer Internet plus Action Plan.
Although not a statutory body and receiving no state funding, the IWF has strong Government
support as expressed in Ministerial statements and access to Ministers and officials.
The IWF has a very specific remit focused on illegal content, more specifically:
images of child abuse anywhere in the world
adult material that potentially breaches the Obscene Publications Act in the UK
criminally racist material in the UK.
The IWF has been very successful in fulfilling that remit:
The number of reports handled has increased from 1,291 in 1997 to 23,658 in 2005.
The proportion of illegal content found to be hosted in the UK has fallen from 18% in 1997 to
0.3% in 2005.
The number of funders has increased from 9 in 1997 to 60 in 2005.
No Internet Service Provider has ever been prosecuted and the reputation of the ISP community
has been greatly enhanced.
Then Prime Minister Tony Blair described the IWF as perhaps "the world's best regime for
tackling child pornography".
How is illegal material removed or blocked under the IWF regime?

There is a 'notice and take down' procedure for individual images which are both illegal and
hosted in the UK.
The IWF compiles a list of newsgroups judged to be advertising illegal material and recommends
to members that these newsgroups not be carried. About 250 newsgroups are 'caught' by this
policy.
The IWF compiles a list of newsgroups known regularly to contain illegal material and again
recommends to members that these newsgroups not be carried. A small number of additional
newsgroups are 'caught' by this policy.
Most recently and most significantly, ISPs are blocking illegal URLs using the IWF's child abuse
image content (CAIC) database and using technologies like BT's Cleanfeed. The number of
URLs on this list - which is updated twice a day - is between 800 and 1,200.
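A blocklist scheme of this kind can be sketched in a few lines. The hosts, paths, and normalisation rules below are invented for illustration; they are not the IWF's actual CAIC data, and real systems such as BT's Cleanfeed match traffic in the network rather than in application code.

```python
from urllib.parse import urlsplit

# Hypothetical blocklist of (host, path-prefix) pairs judged illegal.
# Real CAIC entries are confidential and applied by ISP infrastructure.
BLOCKLIST = {
    ("bad-example.test", "/images/"),
    ("another-host.test", "/"),
}

def normalise(url):
    """Extract the (lower-cased) host and the path so lookups are consistent."""
    parts = urlsplit(url)
    return (parts.hostname or "", parts.path or "/")

def is_blocked(url):
    """True if the URL falls under any blocklisted host/path prefix."""
    host, path = normalise(url)
    return any(host == b_host and path.startswith(b_path)
               for b_host, b_path in BLOCKLIST)
```

The point of the prefix match is that a single entry can cover a whole directory of images without listing each one individually.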
The problem now for the IWF - and indeed for the other such hotlines around the world - is
abroad, more specifically:
United States - the source of 40% of illegal reports in 2005
Russia - the source of 28% of illegal reports in 2005
Thailand, China, Japan & South Korea - the source of 17% of illegal reports in 2005
In early 2005, a study by the International Centre for Missing and Exploited Children (ICMEC)
in the United States found that possession of child abuse material is not a crime in 138 countries
and that, in 122 countries, there is no law dealing with the use of computers and the Internet as a
means of distribution of child abuse images. So
the UK needs the cooperation of other governments, law enforcement agencies and major
industry players if we are to combat and reduce the availability of child abuse images in this
country and around the world.
Since the IWF's remit is illegal material, there are some possible areas of the law which might
be amended in terms which would suggest a minor extension to the IWF's existing remit,
specifically:
The proposed new law on possession of extreme adult pornographic material
The proposed new law on incitement to religious hatred
A possible review of the law on incitement to racial hatred
A possible review of the law on protection of minors in relation to adult pornographic material
A possible review of the law on the test of obscenity in relation to adult pornographic material

However, the IWF has absolutely no intention or wish to engage in harmful or offensive content,
so the proposals that now follow are my personal suggestions for discussion and debate.
REGULATING HARMFUL CONTENT
It is my view that currently there is Internet content that is not illegal in UK law but would be
regarded as harmful by most people. It is my contention that the industry needs to tackle such
harmful content if it is then to be credible in insisting that users must effectively protect
themselves from content which, however offensive, is not illegal or harmful. Clearly it is for
Government and Parliament to define illegal content. But how would one define harmful
content?
I offer the following definition for discussion and debate: Content the creation of which or the
viewing of which involves or is likely to cause actual physical or possible psychological harm.
Examples of material likely to be caught by such a definition would be incitement to racial
hatred or acts of violence and promotion of anorexia, bulimia or suicide.
Often when I introduce such a notion into the debate on Internet regulation, I am challenged by
the question: How can you draw the line? My immediate response is that, in this country (as in
most others), people are drawing the line every day in relation to whether and, if so, how and
when, one can hear, see, or read various forms of content, whether it be radio, television, films,
videos & DVDs, or newspapers & magazines. Sometimes the same material is subject to different
rules - for instance, something unacceptable for broadcast at 8 pm might well be permissible at
10 pm, or a film which is unacceptable for an '18' certificate in the cinema might receive an 'R18'
classification in a video shop.
Therefore I propose in relation to Internet content that we consult bodies which already make
judgements on content about the creation of an appropriate panel. Such bodies would include the
Ofcom Content Board, the BBC, the Association for Television On Demand (ATVOD), the
British Board of Film Classification (BBFC), and the Independent Mobile Classification Body
(IMCB). I would suggest that we then create an independent panel of individuals with expertise
in physical and psychological health who would draw up an agreed definition of harmful content
and be available to judge whether material referred to them did or did not fall within this
definition.

What would one do about such harmful content?


There should be no requirement on ISPs to proactively monitor content to which they
provide access in order to determine whether it is harmful.
Reports of suspected material from the public should be submitted to a defined body.
This body should immediately refer this material to the independent panel which would make a
judgement and a recommendation as to whether it was in fact harmful.
A database of sites judged to be harmful should be maintained by the defined body.
ISPs should be invited to block access to such sites on a voluntary basis.
Each ISP should be transparent about its policy in relation to blocking or otherwise of such
content and set out its policy in a link from the homepage of its web site, as many sites do now in
relation to privacy policy.
REGULATING OFFENSIVE CONTENT
Once we have effective regimes for illegal and harmful content respectively, one has to
consider the material which is offensive - sometimes grossly offensive - to certain users of the
Internet. This is content which some users would rather not access or would rather that their
children not access.
Now identification of content as offensive is subjective and reflects the values of the user,
who must therefore exercise some responsibility for controlling access. The judgement of a
religious household would probably be different from that of a secular household. The judgement
of a household with children would probably be different from that of one with no children. The
judgement of what a 12 year old could access might well be different from what it would be
appropriate for an 8 year old to view. Tolerance of sexual images might differ from tolerance of
violent images.
It is my view that, once we have proper arrangements for handling illegal and harmful
content, it is reasonable and right for government and industry to argue that end users themselves
have to exercise control in relation to material that they find offensive BUT we should inform
users of the techniques and the tools that they can use to exercise such control. What are such
techniques and tools? They include:

Labelling of material through systems such as that of the Internet Content Rating Association
(ICRA) - The ICRA descriptors were determined through a process of international
consultation to establish a content labelling system that gives consistency across different
cultures and languages.
Rating systems drawn up by third parties such as parents' or children's organisations -
Whereas labelling should as far as possible be value-free, rating systems act on those labels to
express a value judgement that should be explicit, so that users of the system know what kinds of
material are likely to be blocked.
Filtering software of which there are many different options on the market - The
European Commission's Safer Internet Programme has initiated a study aiming at an independent
assessment of the filtering software and services. Started in November 2005, the study will be
carried out through an annual benchmarking exercise of 30 parental control and spam filtering
products or services, which will be repeated over three years.
Search engine 'preferences' which are unknown to most parents - Google, the most used
search engine, has the word 'preferences' in tiny text to the right of the search box, and clicking
on this reveals the option of three settings for what is called 'SafeSearch Filtering', yet this
facility is virtually a secret to most parents.
Use of the 'history' tab on the browser which again is unknown to many parents - This is
a means for parents to keep a check on where their young children are going in cyberspace,
although there has to be some respect for the privacy of children.
Promotion of education, awareness and media literacy programmes - Section 11 of the
Communications Act 2003 provides that Ofcom has a duty to promote media literacy, and the
Department for Culture, Media & Sport (DCMS) has granted £500,000 a year for this purpose,
but a very wide range of organisations have a role to play in the promotion of such programmes.
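The division of labour described above - value-free labels from the publisher, an explicit value judgement applied by a third-party rating policy - can be sketched as follows. The label vocabulary and the example policy are invented for illustration; they are not ICRA's actual descriptors.

```python
# A publisher attaches descriptive, value-free labels to a page.
# (Hypothetical categories, not the real ICRA vocabulary.)
page_labels = {
    "nudity": False,
    "violence": True,
    "gambling": False,
}

# A third party (e.g. a parents' organisation) publishes an explicit policy
# listing which labels it blocks. The value judgement lives here, in the
# policy, not in the labels themselves.
family_policy = {"nudity", "violence"}

def allowed(labels, blocked):
    """Allow the page only if none of its set labels appear in the policy."""
    return not any(labels.get(category, False) for category in blocked)
```

Because the policy is a plain, published set of categories, users of the rating system can see exactly what kinds of material it will block, which is the transparency the text asks for.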
Of course, it would help parents and others with responsibility for children if they could buy a
PC with filtering software pre-installed and set at the maximum level of safety and if the default
setting for all web browsers was a child-safe mode. Then adult users of such hardware and
software would have the opportunity, when they wished, to switch to a less filtered or
completely open mode.

Censorship
"Censorship", in the paper, is used to identify the act of preventing expression, whether by
formal means such as the administration of a ban or cut, or informally, in what may appear as
"friendly" intervention but still insistently prevents certain expressions from occurring.
Regulation
The word "regulation" has two senses: one where "regulate" means to authorize, and thus
possibly to prevent the occurrence of expressions by making certain other ones "unauthorized";
and another where it refers to the process of standardization.
It is the act of standardization that the position paper refers to, and thus what may appear as a
contradiction in the call for no censorship and a better process of regulation is not in fact the
case.
What does regulation of the arts entail? In the paper, regulation is "the disinterested
classification of content according to publicly available guidelines". The keyword here is
"disinterested", where classificatory information is unexceptional and impartial, not biased
towards an individual, group or institution's preference or concerns.
Such regulation results in a system of information that artists and the public, including
government agencies, can refer to. The purpose of such a system is simple: choice - everyone
gets to decide for themselves what they wish to see, hear and produce.
The Right to Choose
A critical principle of such a system of regulation is that one has no right never to be offended.
Anyone can be offended, anytime, by anything. To censor because a few (or even many) are
offended is an offence against individual rights. But to be able to choose to avoid being offended
is also a right, and this is where the system of regulation (in the definition we are using) comes
in. In such a system, one is able to choose not to see, hear or produce if one wishes, but this does
not prevent others from choosing to see, hear or produce. For example, I may have a deep dislike
for mime, and it would be useful for me to know if a production I am going to includes mime,
but that does not stop anyone else who has a passion for the method from continuing to produce
mime for other interested audiences.
No Censorship
The position paper calls for an end to bureaucratic censorship, with the exception of materials
that are prohibited by law. In Ang Peng Hwa's commentary, "Time to review censorship process"
(published on Aug 26), he remarks that it is not feasible to expect regulation to be undertaken by
the judicial courts or police. However regulation, in the paper, is not the job of the courts or the
police, as the courts and police are not necessarily well versed in artistic developments and
issues. But as Ang rightly notes, artistic productions are "neither so life-threatening nor of
earth-shaking importance as to require the judicial process".
A vibrant, multi-cultural democracy needs to safeguard freedom of expression, and the
arts play an important role in this. But freedom of expression will mean that at one point or
another we might disagree.
The proposal for mediation in the position paper aims at managing such disagreements
about content in the arts that are not "life-threatening", through dialogue, not censorship. It should
not be the case that one (or more) audience members of a play are the reason why a play is
banned or stopped. Rather in such a situation, an opportunity presents itself for the artists and
these audiences to communicate with each other. Dialogue is critical because all of us, not just
artists, have a stake in cultural production and consumption. Censorship, on the other hand,
dismisses our differences as insignificant; it dignifies no one and cannot be the solution for a
populous society such as ours.
Freedom of information
Freedom of information is an extension of freedom of speech where the medium of
expression is the Internet. Freedom of information may also refer to the right to privacy in the
context of the Internet and information technology. As with the right to freedom of expression,
the right to privacy is a recognised human right and freedom of information acts as an extension
to this right. Freedom of information may also concern censorship in an information technology
context, i.e. the ability to access Web content, without censorship or restrictions.
Freedom of information is also explicitly protected by acts such as the Freedom of
Information and Protection of Privacy Act of Ontario, in Canada.
Internet censorship
The concept of freedom of information has emerged in response to state sponsored
censorship, monitoring and surveillance of the internet. Internet censorship includes the control
or suppression of the publishing or accessing of information on the Internet. The Global Internet
Freedom Consortium claims to remove blocks to the "free flow of information" for what they
term "closed societies". According to the Reporters without Borders (RWB) "internet enemy list"
the following states engage in pervasive internet censorship: China, Cuba, Iran,
Myanmar/Burma, North Korea, Saudi Arabia, Syria, Turkmenistan, Uzbekistan, and Vietnam.

A widely publicized example of internet censorship is the "Great Firewall of China" (in
reference both to its role as a network firewall and to the ancient Great Wall of China). The
system blocks content by preventing IP addresses from being routed through and consists of
standard firewall and proxy servers at the Internet gateways. The system also selectively engages
in DNS poisoning when particular sites are requested. The government does not appear to be
systematically examining Internet content, as this appears to be technically impractical. Internet
censorship in the People's Republic of China is conducted under a wide variety of laws and
administrative regulations. In accordance with these laws, more than sixty Internet regulations
have been made by the People's Republic of China (PRC) government, and censorship systems
are vigorously implemented by provincial branches of state-owned ISPs, business companies,
and organizations.
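The DNS poisoning mentioned above can be illustrated with a toy resolver that returns a bogus "sinkhole" answer for censored names. The domain names and addresses are made up, and real systems intercept queries inside the network rather than in resolver code; this sketch only shows the effect as seen by a user.

```python
# Toy model of DNS poisoning: for blocked names, the resolver answers with a
# sinkhole address instead of the real record. All names and addresses below
# are illustrative, not real censored sites.
REAL_RECORDS = {
    "news.example": "203.0.113.10",
    "blog.example": "203.0.113.20",
}
BLOCKED_NAMES = {"news.example"}
SINKHOLE = "0.0.0.0"  # the bogus answer returned for censored names

def poisoned_resolve(name):
    """Return the (possibly poisoned) answer, or None if the name is unknown."""
    if name in BLOCKED_NAMES:
        return SINKHOLE          # poisoned: the user never reaches the real host
    return REAL_RECORDS.get(name)
```

From the user's side the lookup appears to succeed, which is what makes DNS poisoning harder to notice than a simple connection failure.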
Najat Vallaud-Belkacem, the French Socialist Minister of Women's Rights, proposed that the
French government force Twitter to filter out hate speech that is illegal under French law, such as
speech that is homophobic. Jason Farago, writing in The Guardian, praised the efforts to
"restrict bigotry's free expression", while Glenn Greenwald sharply condemned the efforts and
Farago's column.
Free speech and the Internet
Information wants to be free, and the Internet fosters freedom of speech on a global scale.
The Internet is a common area, a public space like any village square, except that it is the largest
common area that has ever existed. Anything that anybody wishes to say can be heard by anyone
else with access to the Internet, and this world-wide community is as large and diverse as
humanity itself. Therefore, from a practical point of view, no one community's standards can
govern the type of speech permissible on the Internet. In the words of John Perry Barlow, a
founding member of the Electronic Frontier Foundation (EFF) -- "In Cyberspace, the First
Amendment is a local ordinance".
The principle of freedom of speech is also embedded in the Internet's robust architecture.
In the words of John Gilmore, another founding member of the EFF -- "The Net interprets
censorship as damage, and routes around it." Because of the Internet's robust design, it is
impossible to completely block access to information except in very limited and controlled
circumstances, such as when blocking access to a specific site from a home computer, or when
using a firewall to block certain sites from employees on a workplace network.
If you believe that progress of human civilization depends on individual expression of
new ideas, especially unpopular ideas, then the principle of freedom of speech is the most
important value society can uphold. The more experience someone has with the Internet the more
strongly they generally believe in the importance of freedom of speech, usually because their
personal experience has convinced them of the benefits of open expression. The Internet not only
provides universal access to free speech, it also promotes the basic concept of freedom of speech.
If you believe that there is an inherent value in truth, that human beings on average and over time
recognize and value truth, and that truth is best decided in a free marketplace of ideas, then the
ability of the Internet to promote freedom of speech is very important indeed.
A few of the early events that signaled the power of the Internet to promote freedom of
speech are summarized below:
Tiananmen. During the Tiananmen Square protests in China in 1989, the Internet kept
Chinese communities around the world, especially in universities, in touch with current
events through email and newsgroups, bypassing all government censorship.
Russian Coup. In 1991 a Soviet computer network called Relcom stayed online and
bypassed an information blackout to keep Soviet citizens and others around the world in touch
with eyewitness accounts and up-to-date information about the attempted communist coup
against Mikhail Gorbachev.
Kuwait Invasion. Internet Relay Chat became well known to the general public around
the world in 1991, when traffic skyrocketed as users logged on to get up-to-date information on
the Gulf War through an Internet link with Kuwait. The links stayed operational for
a week after radio and television broadcasts were cut off, and archives of this first world-famous
IRC event have been preserved.
CDA. In 1996 the US Government passed the Communications Decency Act (CDA)
prohibiting distribution of adult material over the Internet, even though the law was widely
believed to be unenforceable and unconstitutional. This gave birth to a blue ribbon campaign to
show support for freedom of speech on the Internet. Many sites placed a black background on
their web pages for the first 24 hours after the CDA passed. A few months later a three-judge
panel imposed an injunction against the law's enforcement, pending resolution of lawsuits
launched by several civil liberties groups, and the law was subsequently found to be
unconstitutional.
National Restrictions. In 1996 many countries around the world became frightened of the
freedom of speech associated with the Internet. China mandated that Internet users must register
with the police. Germany banned access to some adult newsgroups on Compuserve. Saudi
Arabia restricted Internet access to universities and hospitals. Singapore mandated that political
and religious sites must register with the government. New Zealand courts ruled that computer
disks are a type of "publication" that can be censored. None of these efforts had much lasting
effect.
Yugoslavia. In 1996, a radio station in Yugoslavia bravely exercised its right to freedom of
speech and continued to broadcast over the Internet after all other normal broadcasting was shut
down by one of the last remaining dictatorial governments in Europe, later overthrown.
Privacy enhancing technologies
Privacy enhancing technologies (PETs) is a general term for a set of computer tools, applications
and mechanisms which - when integrated in online services or applications, or when used in
conjunction with such services or applications - allow online users to protect the privacy of their
personally identifiable information (PII) provided to and handled by such services or
applications.
Internet technologies and privacy
Privacy enhancing technologies can also be defined as: a system of ICT measures
protecting informational privacy by eliminating or minimising personal data, thereby preventing
unnecessary or unwanted processing of personal data, without the loss of the functionality of the
information system.
Goals of PETs
PETs aim at allowing users to take one or more of the following actions related to their
personal data sent to, and used by, online service providers, merchants or other users:
Increase control over their personal data sent to, and used by, online service providers and
merchants (or other online users) (self-determination)
Data minimization: minimize the personal data collected and used by service providers and
merchants
Choose the degree of anonymity (e.g. by using pseudonyms, anonymisers or anonymous data
credentials)
Choose the degree of unlinkability (e.g. by using multiple virtual identities)
Achieve informed consent about giving their personal data to online service providers and
merchants
Negotiate the terms and conditions of giving their personal data to online service providers and
merchants (data handling/privacy policy negotiation). In privacy negotiations, consumers and
service providers establish, maintain, and refine privacy policies as individualised agreements
through the ongoing choice amongst service alternatives. In incentivised privacy negotiations,
the transaction partners may additionally bundle the personal information collection and
processing schemes with monetary or non-monetary rewards.
Have these negotiated terms and conditions technically enforced by the infrastructures of online
service providers and merchants (i.e. not just having to rely on promises, but being confident
that it is technically impossible for service providers to violate the agreed-upon data handling
conditions)
Remotely audit the enforcement of these terms and conditions at the online service providers
and merchants (assurance)
Data tracking: log, archive and look up past transfers of their personal data, including what data
has been transferred, when, to whom and under what conditions
Facilitate the use of their legal rights of data inspection, correction and deletion
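The data-tracking goal above - logging what personal data went where, when, and under what conditions - might be sketched as follows. The record fields are an illustrative assumption, not any standard PET schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Transfer:
    """One logged transfer of a personal data item (hypothetical fields)."""
    data_item: str       # what was transferred, e.g. "email address"
    recipient: str       # to whom it was sent
    conditions: str      # under what agreed conditions
    when: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

log = []  # the user's archive of past transfers

def record_transfer(data_item, recipient, conditions):
    """Append a timestamped record of a personal data transfer."""
    log.append(Transfer(data_item, recipient, conditions))

def transfers_to(recipient):
    """Look up all past transfers to a given recipient."""
    return [t for t in log if t.recipient == recipient]
```

Such a log also supports the last goal in the list: knowing exactly which recipients hold which data is a precondition for exercising rights of inspection, correction and deletion.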
The use of and dependence on the Internet and social networking has various
implications for personal privacy. Many people worry "big brother" is watching their virtual
footsteps or that their personal data can be auctioned to the highest bidder in the advertising
world.
How to Protect Your Privacy Online
Under the Bill of Rights, the Fourth Amendment of the U.S. Constitution guarantees
protection of American citizens against unlawful search and seizure, including reading mail,
wiretapping and entering homes without a warrant. But it is up to individuals to prevent cyber
stalking and identity theft.
Check the security settings on your social networking sites, including on photos and
things you or others post. Facebook and other sites are infamous for frequently changing settings
without letting their users know. On Facebook, under "Account" and "Privacy Settings," you can

customize what you share and with whom. You can also alter your application settings there, as
well as customize settings to block unwanted visitors.
For immediate personal privacy, Dr. Shaoen Wu, a computer-science professor at the
University of Southern Mississippi, uses Facebook but always keeps the chat option turned off.
He recommends an "awareness of what we are trying to disclose to others. (Social networking
sites) provide features but you don't have to use those features," Wu said.
Facebook requires several steps to cancel your account. If you only go through the first
step, Facebook "holds" your account, so that you can return to your profile if you decide to
reinstate your account.
Clear your web browser's history, cache and cookies on a regular basis.
Install good antivirus and anti-spyware programs. Check http://www.freeware.com for free
options.
Be wary of who you're giving your private info to, including your Social Security
number, bank account or credit card info, what sites you join or where you make online
purchases.
Be careful about accepting unknown "friend" requests. Some may be looking to spread
the "Koobface" virus by sending infected links via e-mail and/or wall posts in the hopes people
click on the infected links.
Use a unique password for each social network, e-mail or e-commerce account. The
passwords should be difficult to guess and include a combination of nonsense words, numbers
and symbols.
Switch browsers. Internet Explorer is the most commonly used browser and the most
susceptible to intrusion. Switch to Mozilla Firefox or Google Chrome, both with built-in malware
and phishing protection.
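The password advice a few lines above can be followed programmatically. This sketch uses Python's standard secrets module; the length and alphabet are illustrative choices, and drawing from a mixed alphabet satisfies the advice to combine words, numbers and symbols.

```python
import secrets
import string

# Illustrative alphabet: letters, digits and a handful of symbols.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def new_password(length=16):
    """Generate a hard-to-guess password using a cryptographic RNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One unique password per account, so a breach of one site does not
# expose the others. Account names here are placeholders.
passwords = {site: new_password() for site in ("mail", "bank", "social")}
```

The secrets module is preferred over the random module for this job because it draws from the operating system's cryptographically secure randomness source.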
Protect Your Photos
Exchangeable Image File Format, or EXIF, tags embedded in a digital camera photo can
reveal not just technical details about the photo, but also its location using the Global Positioning
System. GPS tagging is most commonly found in cell phone cameras such as those in iPhones
and other smartphones. This feature, just like reverse e-mail address information, can work to a
cyber-stalker's advantage. If
you have a Flickr account or another online site where you upload unprotected photos on the

Internet, make your photos available only to trusted friends, or disable the EXIF feature either in
your camera or by using software like Photoshop or Gimp.
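As an illustration of the advice above, a minimal Python sketch using the Pillow imaging library (assuming it is installed; the file paths are hypothetical) re-saves an image without any of its EXIF metadata, including GPS tags.

```python
from PIL import Image  # Pillow imaging library, assumed installed

def strip_exif(src_path, dst_path):
    """Re-save an image with pixel data only, dropping EXIF metadata
    (including any embedded GPS location tags)."""
    with Image.open(src_path) as img:
        # Copy only the raw pixels into a fresh image; metadata is not carried over.
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst_path)
```

This is a blunt approach that discards all metadata, not just the GPS tags; dedicated tools can remove location data selectively.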
Identity Theft
Identity theft happens most commonly through "dumpster diving" for unshredded mail or
stolen wallets, which can reveal Social Security numbers, credit card numbers and other personal
info, and from organizations that store sensitive information in hard copy or online. In the online
world, one must also be aware of phishing: bogus e-mails or spam, often purporting to come
from "your bank" or another institution, asking for your personal information. Also watch out for
fake charity websites or PayPal accounts set up to take your money. There is also the potential
threat of hackers, hijackers or malware, so always use secure sites for any online purchasing.
Secure sites include https:// in the URL and display a padlock (usually on the lower bar of an
Internet window), and the URL should be an "official" domain name. Watch for a string of
numbers in a URL or an address padded with extra "dot" segments, like "paypal.bogusaddress.net."
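The URL checks described above can be sketched in Python with the standard `urllib.parse` module. This is a rough heuristic, not a complete anti-phishing defense; the domain names are the examples from the text.

```python
from urllib.parse import urlparse

def looks_legitimate(url, official_domain):
    """Rough check: HTTPS scheme, and the hostname IS the official
    domain or a subdomain of it -- not merely a URL that contains
    the brand name somewhere, like 'paypal.bogusaddress.net'."""
    parts = urlparse(url)
    host = parts.hostname or ""
    return parts.scheme == "https" and (
        host == official_domain or host.endswith("." + official_domain)
    )

print(looks_legitimate("https://www.paypal.com/signin", "paypal.com"))          # True
print(looks_legitimate("https://paypal.bogusaddress.net/login", "paypal.com"))  # False
```

Note that the bogus address fails even though it contains "paypal": the check anchors on how the hostname *ends*, which is what the "dot" segment trick tries to disguise.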
Safety and risk
Concept of Safety:
A thing is safe if its risks are judged to be acceptable. Judgements about safety are thus
tacit value judgements about what counts as acceptable risk for a given person or group.
Types of Risks:
o Voluntary and Involuntary Risks
o Short term and Long Term Consequences
o Expected Probability
o Reversible Effects
o Threshold levels for Risk
o Delayed and Immediate Risk
Risk analysis is one of the most elaborate and extensive studies. The site is visited, and exhaustive
discussions with site personnel are undertaken. The study usually covers risk identification, risk
analysis, risk assessment, risk rating, suggestions on risk control and risk mitigation.
Interestingly, risk analysis can be expanded into a full-fledged risk management study. The risk
management study also includes residual risk transfer, risk financing, etc.
Stepwise, Risk Analysis will include:

Hazards identification.
Failure modes and frequencies evaluation from established sources and best practices.
Selection of credible scenarios and risks.
Fault and event trees for various scenarios.
Consequence-effect calculations worked out from models.
Individual and societal risks.
Iso-risk contours superimposed on layouts for various scenarios.
Probability and frequency analysis.
Established risk criteria of countries, bodies, standards.
Comparison of risk against defined risk criteria.
Identification of risk beyond the location boundary, if any.
Risk mitigation measures.
The steps followed are need based and all or some of these may be required from the above
depending upon the nature of site/plant.
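The fault- and event-tree step above combines individual failure probabilities into a scenario frequency. A small sketch in Python illustrates the arithmetic; the failure rates are invented for illustration, and all events are assumed independent.

```python
def and_gate(probs):
    """All independent events must occur together: multiply probabilities."""
    p = 1.0
    for x in probs:
        p *= x
    return p

def or_gate(probs):
    """At least one of several independent events occurs."""
    p_none = 1.0
    for x in probs:
        p_none *= (1.0 - x)
    return 1.0 - p_none

# Hypothetical scenario: a release occurs only if the relief valve fails
# (1e-3/yr) AND the backup alarm fails (1e-2); ignition then follows
# with probability 0.1. All figures are made up for illustration.
p_release = and_gate([1e-3, 1e-2])   # combined failure frequency
p_fire = p_release * 0.1             # event tree branch: ignition
print(p_fire)
```

The same two gates, chained along each branch of the event tree, give the frequency of every credible scenario, which is then compared against the defined risk criteria.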
Risk Analysis is undertaken after a detailed site study and will reflect the site's exposure to
various situations. It may also include studies on frequency analysis, consequence analysis, risk
acceptability analysis, etc., if required. Probability and frequency analysis covers failure modes
and frequencies from established sources and best practices for various scenarios and probability
estimation.
Consequence analysis deals with the selection of credible scenarios and consequence-effect
calculation, including worked-out scenarios, using software packages.
Risk Benefit Analysis and Reducing Risk
Risk-benefit analysis is the comparison of the risk of a situation to its related benefits.
For research that involves more than minimal risk of harm to the subjects, the investigator must
ensure that the amount of benefit clearly outweighs the amount of risk. A study may be
considered ethical only if the risk-benefit ratio is favorable.
Risk Benefit Analysis Example
Exposure to personal risk is recognized as a normal aspect of everyday life. We accept a
certain level of risk in our lives as necessary to achieve certain benefits. In most of these risks we
feel as though we have some sort of control over the situation. For example, driving an
automobile is a risk most people take daily. "The controlling factor appears to be their perception

of their individual ability to manage the risk-creating situation." Analyzing the risk of a situation
is, however, very dependent on the individual doing the analysis. When individuals are exposed
to involuntary risk, risk over which they have no control, they make risk aversion their primary
goal. Under these circumstances individuals require the probability of risk to be as much as one
thousand times smaller than for the same situation under their perceived control.
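The "one thousand times smaller" observation above amounts to simple arithmetic; the baseline figure here is assumed for illustration only.

```python
# Rough rule of thumb from the text: people tolerate roughly 1000x
# more risk when they feel in control of the activity.
voluntary_acceptable = 1e-4   # assumed annual risk accepted voluntarily, e.g. driving
involuntary_acceptable = voluntary_acceptable / 1000.0  # same activity, imposed
print(involuntary_acceptable)
```

So a hazard a person shrugs off behind the wheel would, if imposed by a nearby plant, need to be on the order of one in ten million per year before that same person judged it acceptable.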
Evaluations of future risk:
Real future risk as disclosed by the fully matured future circumstances when they
develop.
Statistical risk, as determined by currently available data, as measured actuarially for insurance
premiums.
Projected risk, as analytically based on system models structured from historical studies.
Perceived risk, as intuitively seen by individuals.
Air transportation as an example:
Flight insurance company - statistical risk.
Passenger - perceived risk.
Federal Aviation Administration (FAA) - projected risk.
How to Reduce Risk?
1. Define the problem.
2. Generate several solutions.
3. Analyse each solution to determine the pros and cons of each.
4. Test the solutions.
5. Select the best solution.
6. Implement the chosen solution.
7. Analyse the risk in the chosen solution.
8. Try to solve it, or move to the next solution.
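The numbered steps above can be sketched as a loop over candidate solutions. Every helper function, candidate and score below is hypothetical, chosen only to make the control flow concrete.

```python
def reduce_risk(problem, generate_solutions, analyse, test, risk_of, acceptable):
    """Walk the steps: generate solutions, rank by analysis, test each,
    implement the best one whose residual risk is acceptable; otherwise
    move on to the next solution."""
    solutions = generate_solutions(problem)                # steps 1-2
    ranked = sorted(solutions, key=analyse, reverse=True)  # step 3: best score first
    for candidate in ranked:                               # step 5: try best first
        if not test(candidate):                            # step 4: failed test
            continue
        if risk_of(candidate) <= acceptable:               # steps 6-7: residual risk
            return candidate                               # implement this solution
    return None                                            # step 8: none acceptable

# Toy usage with made-up candidates and scores.
best = reduce_risk(
    problem="exposed service",
    generate_solutions=lambda p: ["firewall", "patch", "disable"],
    analyse=lambda s: {"firewall": 2, "patch": 3, "disable": 1}[s],
    test=lambda s: s != "disable",
    risk_of=lambda s: {"firewall": 0.2, "patch": 0.05, "disable": 0.0}[s],
    acceptable=0.1,
)
print(best)  # patch
```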
Risk-Benefit Analysis and Risk Management
Informative risk-benefit analysis and effective risk management are essential to the
ultimate commercial success of a product. Statistically rigorous, scientifically valid risk-benefit
assessment studies can be used to demonstrate the level of risk that patients and other decision
makers are willing to accept to achieve the benefits provided by a product.
Risk-Benefit Modeling - systematically quantify the relative importance of risks and
benefits to demonstrate the net benefits of treatment.
Risk-Benefit Tradeoffs - quantify patients' maximum acceptable risk for specific
therapeutic benefits.

Third-party evaluations
Consider a situation where the owner of a majority of a publicly held corporation decides
to buy out the minority shareholders and take the corporation private. What is a fair price?
Obviously it is improper (and, typically, illegal) for the majority owner to simply state a price
and then have the (majority-controlled) board of directors approve that price. What is typically
done is to hire an independent firm (a third party), well-qualified to evaluate such matters, to
calculate a "fair price", which is then voted on by the minority shareholders.
Third-party evaluations may also be used as proof that transactions were, in fact, fair
("arm's-length"). For example, a corporation that leases an office building that is owned by the
CEO might get an independent evaluation showing what the market rate is for such leases in the
locale, to address the conflict of interest that exists between the fiduciary duty of the CEO (to the
stockholders, by getting the lowest rent possible) and the personal interest of that CEO (to
maximize the income that the CEO gets from owning that office building by getting the highest
rent possible).
