In the UK, effectively illegal content is regulated by the Internet Watch Foundation (IWF)
through a self-regulatory approach.
What is the nature of the IWF?
It was founded by the industry in late 1996 when two trade bodies - the Internet Service
Providers' Association (ISPA) and the London Internet Exchange (LINX) - together with some
large players like BT and AOL came together to create the body.
It has an independent Chair selected through open advertisement and appointed by the Board.
The Board consists of six non-industry members selected through open advertisement and three
industry members chosen by the Funding Council.
There is a Funding Council which has on it representatives of every subscribing member.
The IWF has no statutory powers. Although in effect it is giving force to certain aspects of the
criminal law, all its notices and advice are technically advisory.
The IWF has no Government funding, although it does receive European Union funding under
the Commission's Safer Internet plus Action Plan.
Although not a statutory body and receiving no state funding, the IWF has strong Government
support as expressed in Ministerial statements and access to Ministers and officials.
The IWF has a very specific remit focused on illegal content, more specifically:
images of child abuse anywhere in the world
adult material that potentially breaches the Obscene Publications Act in the UK
criminally racist material in the UK.
The IWF has been very successful in fulfilling that remit:
The number of reports handled has increased from 1,291 in 1997 to 23,658 in 2005.
The proportion of illegal content found to be hosted in the UK has fallen from 18% in 1997 to
0.3% in 2005.
The number of funders has increased from 9 in 1997 to 60 in 2005.
No Internet Service Provider has ever been prosecuted and the reputation of the ISP community
has been greatly enhanced.
Then Prime Minister Tony Blair described the IWF as perhaps the world's best regime for
tackling child pornography.
How is illegal material removed or blocked under the IWF regime?
There is a 'notice and take down' procedure for individual images which are both illegal and
hosted in the UK.
The IWF compiles a list of newsgroups judged to be advertising illegal material and recommends
to members that these newsgroups not be carried. About 250 newsgroups are 'caught' by this
policy.
The IWF compiles a list of newsgroups known regularly to contain illegal material and again
recommends to members that these newsgroups not be carried. A small number of additional
newsgroups are 'caught' by this policy.
Most recently and most significantly, ISPs are blocking illegal URLs using the IWF's child abuse
image content (CAIC) database and using technologies like BT's Cleanfeed. The number of
URLs on this list - which is updated twice a day - is between 800 and 1,200.
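As a sketch of how a two-stage filter like Cleanfeed is generally described as working, consider the following Python fragment. The IP addresses, URLs and matching rules are invented for illustration; they are not the IWF's data or BT's actual implementation.

```python
# Hypothetical two-stage URL filter in the Cleanfeed style.
# Stage 1: only traffic to suspect IP addresses is diverted to a proxy;
# stage 2: the proxy compares the full URL against the blocklist.
# All entries below are illustrative (203.0.113.x is a documentation range).

SUSPECT_IPS = {"203.0.113.7"}
BLOCKED_URLS = {"http://203.0.113.7/bad/page"}

def should_block(dest_ip: str, url: str) -> bool:
    """Return True only when both stages match, so that an entire
    shared host is not blocked for one offending URL."""
    if dest_ip not in SUSPECT_IPS:
        return False            # stage 1: most traffic passes untouched
    return url in BLOCKED_URLS  # stage 2: exact-URL match at the proxy

print(should_block("203.0.113.7", "http://203.0.113.7/bad/page"))  # True
print(should_block("203.0.113.7", "http://203.0.113.7/ok/page"))   # False
```

The two-stage design matters because crude IP-level blocking would take down every site on a shared host, not just the offending page.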
The problem now for the IWF - and indeed for the other such hotlines around the world - is
abroad, more specifically:
United States - the source of 40% of illegal reports in 2005
Russia - the source of 28% of illegal reports in 2005
Thailand, China, Japan & South Korea - the source of 17% of illegal reports in 2005
In early 2005, a study by the International Centre for Missing and Exploited Children (ICMEC)
in the United States found that possession of child abuse material is not a crime in 138 countries
and, in 122 countries, there is no law dealing with the use of computers and the Internet as a
means of distribution of child abuse images. So
the UK needs the cooperation of other governments, law enforcement agencies and major
industry players if we are to combat and reduce the availability of child abuse images in this
country and around the world.
Since the IWF's remit is illegal material, there are some possible areas of the law which might
be amended in terms which would suggest a minor extension to the IWF's existing remit,
specifically:
The proposed new law on possession of extreme adult pornographic material
The proposed new law on incitement to religious hatred
A possible review of the law on incitement to racial hatred
A possible review of the law on protection of minors in relation to adult pornographic material
A possible review of the law on the test of obscenity in relation to adult pornographic material
However, the IWF has absolutely no intention or wish to extend its remit to harmful or offensive
content, so the proposals that now follow are my personal suggestions for discussion and debate.
REGULATING HARMFUL CONTENT
It is my view that currently there is Internet content that is not illegal in UK law but would be
regarded as harmful by most people. It is my contention that the industry needs to tackle such
harmful content if it is to be credible in then insisting that users effectively have to protect
themselves from content which, however offensive, is not illegal or harmful. Clearly it is for
Government and Parliament to define illegal content. But how would one define harmful
content?
I offer the following definition for discussion and debate: Content the creation of which or the
viewing of which involves or is likely to cause actual physical or possible psychological harm.
Examples of material likely to be caught by such a definition would be incitement to racial
hatred or acts of violence and promotion of anorexia, bulimia or suicide.
Often when I introduce such a notion into the debate on Internet regulation, I am challenged by
the question: How can you draw the line? My immediate response is that, in this country (as in
most others), people are drawing the line every day in relation to whether and, if so how and
when, one can hear, see, or read various forms of content, whether it be radio, television, films,
videos & DVDs, newspapers & magazines. Sometimes the same material is subject to different
rules - for instance, something unacceptable for broadcast at 8 pm might well be permissible at
10 pm or a film which is unacceptable for an '18' certificate in the cinema might receive an 'R18'
classification in a video shop.
Therefore I propose in relation to Internet content that we consult bodies which already make
judgements on content about creation of an appropriate panel. Such bodies would include the
Ofcom Content Board, the BBC, the Association for Television On Demand (ATVOD), the
British Board of Film Classification (BBFC), and the Independent Mobile Classification Body
(IMCB). I would suggest that we
then create an independent panel of individuals with expertise in physical and psychological
health who would draw up an agreed definition of harmful content and be available to judge
whether material referred to them did or did not fall within this definition.
Labelling of material through systems such as that of the Internet Content Rating Association
(ICRA) - The ICRA descriptors were determined through a process of international
consultation to establish a content labelling system that gives consistency across different
cultures and languages.
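To illustrate how software might act on such labels, here is a hedged Python sketch. The descriptor names and the policy are made up for the example; the real ICRA vocabulary was defined by the association itself.

```python
# Illustrative sketch: a page carries content labels (descriptors set by
# the publisher) and a household policy decides which descriptors to block.
# Descriptor names here are invented, not the actual ICRA vocabulary.

page_labels = {"nudity": False, "violence": True, "strong_language": False}

# A parent's policy: block any page whose labels include these descriptors.
policy_blocked = {"violence"}

def allowed(labels: dict, blocked: set) -> bool:
    """Value-free labels + a value-laden policy yield a blocking decision."""
    return not any(labels.get(descriptor, False) for descriptor in blocked)

print(allowed(page_labels, policy_blocked))  # False
```

Note how the division of labour mirrors the text: the label is descriptive, and the value judgement lives entirely in the policy applied to it.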
Rating systems drawn up by third parties such as parents' or children's organisations - Whereas
labelling should as far as possible be value-free, rating systems act on those labels to express a
value judgement that should be explicit, so that users of the system know what kinds of material
are likely to be blocked.
Filtering software of which there are many different options on the market - The
European Commission's Safer Internet Programme has initiated a study aiming at an independent
assessment of the filtering software and services. Started in November 2005, the study will be
carried out through an annual benchmarking exercise of 30 parental control and spam filtering
products or services, which will be repeated over three years.
Search engine 'preferences' which are unknown to most parents - Google, the most used
search engine, has the word 'preferences' in tiny text to the right of the search box and clicking
on this reveals the option of three settings for what is called 'SafeSearch Filtering', yet this
facility is virtually a secret to most parents.
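Under the hood, the setting described above amounts to a query parameter on the search URL. The following sketch assumes the parameter name 'safe' and the value 'active' that Google has used for strict SafeSearch; treat these as assumptions about one vendor's interface rather than a stable API.

```python
# Sketch: SafeSearch as a URL query parameter. The "safe=active"
# convention is an assumption based on Google's documented interface.

from urllib.parse import urlencode

def search_url(query: str, safe: bool = True) -> str:
    params = {"q": query}
    if safe:
        params["safe"] = "active"  # strict SafeSearch filtering
    return "https://www.google.com/search?" + urlencode(params)

print(search_url("homework help"))
# https://www.google.com/search?q=homework+help&safe=active
```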
Use of the 'history' tab on the browser which again is unknown to many parents - This is
a means for parents to keep a check on where their young children are going in cyberspace,
although there has to be some respect for the privacy of children.
Promotion of education, awareness and media literacy programmes - Section 11 of the
Communications Act 2003 provides that Ofcom has a duty to promote media literacy and the
Department for Culture, Media & Sport (DCMS) has granted £500,000 a year for this purpose,
but a very wide range of organisations have a role to play in the promotion of such programmes.
Of course, it would help parents and others with responsibility for children if they could buy a
PC with filtering software pre-installed and set at the maximum level of safety and if the default
setting for all web browsers was a child-safe mode. Then adult users of such hardware and
software would have the opportunity, when they wished, to switch to a less filtered or
completely open mode.
"Censorship", in the paper, is used to identify the act of preventing expression, by formal
means such as the administration of a ban or cut; or informally, in what may appear as "friendly"
but still insistently preventing certain expressions to occur.
Regulation
The word "regulation" has a couple of meanings - either, where "regulate" means to
authorize, and thus possibly prevent the occurrence of expressions by making certain other ones
"unauthorized"; or, where it refers to the process of standardization.
It is the act of standardization that the position paper refers to, and thus what may appear as a
contradiction in the call for no censorship and a better process of regulation, is not in fact the
case.
What does regulation of the arts entail? In the paper, regulation is "the disinterested
classification of content according to publicly available guidelines". The keyword here is
"disinterested", where classificatory information is unexceptional and impartial, not biased
towards an individual, group or institution's preference or concerns.
Such regulation results in a system of information that artists and public, including
government agencies, can refer to. The purpose of such a system is simple: choice - everyone
gets to decide for themselves what they wish to see, hear and produce.
The Right to Choose
A critical principle of such a system of regulation is that one has no right never to be offended.
Anyone can be offended, anytime, by anything. To censor because a few (or even many) are
offended is an offence against individual rights. But to be able to choose to avoid being offended
is also a right, and this is where the system of regulation (in the definition we are using) comes
in. In such a system, one is able to choose not to see, hear or produce if one wishes, but this does
not prevent others from choosing to see, hear or produce. For example, I may have a deep dislike
for mime, and it would be useful for me to know if a production I am going to includes mime,
but that does not stop anyone else who has a passion for the method from continuing to produce
mime for other interested audiences.
No Censorship
The position paper calls for an end to bureaucratic censorship, with the exception of materials
that are prohibited by law. In Ang Peng Hwa's commentary, "Time to review censorship process"
(published on Aug 26), he remarks that it is not feasible to expect regulation to be undertaken by
the judicial courts or police. However regulation, in the paper, is not the job of the courts or the
police, as the courts and police are not necessarily well versed in artistic developments and
issues. But as Ang rightly notes, artistic
A widely publicized example of internet censorship is the "Great Firewall of China" (in
reference both to its role as a network firewall and to the ancient Great Wall of China). The
system blocks content by preventing IP addresses from being routed through and consists of
standard firewall and proxy servers at the Internet gateways. The system also selectively engages
in DNS poisoning when particular sites are requested. The government does not appear to be
systematically examining Internet content, as this appears to be technically impractical. Internet
censorship in the People's Republic of China is conducted under a wide variety of laws and
administrative regulations. In accordance with these laws, more than sixty Internet regulations
have been made by the People's Republic of China (PRC) government, and censorship systems
are vigorously implemented by provincial branches of state-owned ISPs, business companies,
and organizations.
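The two techniques described above - refusing to route traffic to blocked IP addresses, and returning a false DNS answer for blocked names - can be caricatured in a few lines of Python. All addresses and names below are invented documentation examples, not real blocklist entries.

```python
# Toy illustration of gateway-level blocking: drop packets to blocked
# addresses, and poison DNS answers for blocked names. Illustrative only.

BLOCKED_IPS = {"198.51.100.25"}
POISONED_NAMES = {"blocked.example": "10.0.0.1"}  # bogus answer returned

def route_packet(dest_ip: str) -> bool:
    """Gateway firewall: silently refuse traffic to blocked addresses."""
    return dest_ip not in BLOCKED_IPS

def resolve(name: str, real_answer: str) -> str:
    """DNS poisoning: substitute a bogus address for blocked names,
    so the browser connects to the wrong place (or nowhere)."""
    return POISONED_NAMES.get(name, real_answer)
```

Both mechanisms act on routing and name resolution rather than on content, which is consistent with the observation that systematically examining page content at national scale appears technically impractical.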
Najat Vallaud-Belkacem, the French Socialist Minister of Women's Rights, proposed that the
French government force Twitter to filter out hate speech that is illegal under French law, such as
speech that is homophobic. Jason Farago, writing in The Guardian, praised the efforts to
"restrict bigotry's free expression", while Glenn Greenwald sharply condemned the efforts and
Farago's column.
Free speech and the Internet
Information wants to be free, and the Internet fosters freedom of speech on a global scale.
The Internet is a common area, a public space like any village square, except that it is the largest
common area that has ever existed. Anything that anybody wishes to say can be heard by anyone
else with access to the Internet, and this world-wide community is as large and diverse as
humanity itself. Therefore, from a practical point of view, no one community's standards can
govern the type of speech permissible on the Internet. In the words of John Barlow, a founding
member of the Electronic Frontier Foundation (EFF) -- "In Cyberspace, the First Amendment is a
local ordinance".
The principle of freedom of speech is also embedded in the Internet's robust architecture.
In the words of John Gilmore, another founding member of the EFF -- "The Net interprets
censorship as damage, and routes around it." Because of the Internet's robust design, it is
impossible to completely block access to information except in very limited and controlled
circumstances, such as when blocking access to a specific site from a home computer, or when
using a firewall to block certain sites from employees on a workplace network.
If you believe that progress of human civilization depends on individual expression of
new ideas, especially unpopular ideas, then the principle of freedom of speech is the most
important value society can uphold. The more experience someone has with the Internet the more
strongly they generally believe in the importance of freedom of speech, usually because their
personal experience has convinced them of the benefits of open expression. The Internet not only
provides universal access to free speech, it also promotes the basic concept of freedom of speech.
If you believe that there is an inherent value in truth, that human beings on average and over time
recognize and value truth, and that truth is best decided in a free marketplace of ideas, then the
ability of the Internet to promote freedom of speech is very important indeed.
A few of the early events that signaled the power of the Internet to promote freedom of
speech are summarized below:
Tiananmen. During the Tiananmen Square protests in China in 1989, the Internet kept
Chinese communities around the world, especially in universities, in touch with the current
events through email and the newsgroups, bypassing all government censorship.
Russian Coup. In 1991 a Soviet computer network called Relcom stayed online and
bypassed an information blackout to keep Soviet citizens and others around the world in touch
with eyewitness accounts and up-to-date information about the attempted communist coup
against Mikhail Gorbachev.
Kuwait Invasion. Internet Relay Chat became well-known to the general public around
the world in 1991, when traffic skyrocketed as users logged on to get up-to-date information on
Iraq's invasion of Kuwait through an Internet link with that country. The links stayed operational for
a week after radio and television broadcasts were cut off. Archives of this first world-famous
IRC event have been preserved online.
CDA. In 1996 the US Government passed the Communications Decency Act (CDA)
prohibiting distribution of adult material over the Internet, even though the law was widely
believed to be unenforceable and unconstitutional. This gave birth to a blue ribbon campaign to
show support for freedom of speech on the Internet. Many sites placed a black background on
their web pages for the first 24 hours after the CDA passed. A few months later a three-judge
panel imposed an injunction against the law's enforcement, pending resolution of lawsuits
launched by several civil liberties groups, and the law was subsequently found to be
unconstitutional.
National Restrictions. In 1996 many countries around the world became frightened of the
freedom of speech associated with the Internet. China mandated that Internet users must register
with the police. Germany banned access to some adult newsgroups on Compuserve. Saudi
Arabia restricted Internet access to universities and hospitals. Singapore mandated that political
and religious sites must register with the government. New Zealand courts ruled that computer
disks are a type of "publication" that can be censored. None of these efforts had much lasting
effect.
Yugoslavia. In 1996, a radio station in Yugoslavia bravely exercised its right to freedom of
speech and continued to broadcast over the Internet after all other normal broadcasting was shut
down by one of the last remaining dictatorial governments in Europe, later overthrown.
Privacy enhancing technologies (PET) is a general term for a set of computer tools, applications
and mechanisms which - when integrated in online services or applications, or when used in
conjunction with such services or applications - allow online users to protect the privacy of their
personally identifiable information (PII) provided to and handled by such services or
applications.
Internet technologies and privacy
Privacy enhancing technologies can also be defined as: a coherent system of ICT measures
that protects informational privacy by eliminating or minimising personal data, thereby
preventing unnecessary or unwanted processing of personal data, without loss of the
functionality of the information system.
Goals of PETs
PETs aim at allowing users to take one or more of the following actions related to their
personal data sent to, and used by, online service providers, merchants or other users:
Increase control over their personal data sent to, and used by, online service providers and
merchants (or other online users) (self-determination)
Data minimization: minimize the personal data collected and used by service providers
and merchants
On social networking sites such as Facebook, privacy settings let you customize what you share
and with whom. You can also alter your application settings there, as well as customize settings
to block unwanted visitors.
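The data minimization goal above can be sketched concretely: one common PET building block is to replace a directly identifying field with a salted one-way hash before a record leaves the user's machine. The fragment below is an illustration of that single idea, not a complete PET.

```python
# Pseudonymization sketch: the service receives a stable pseudonym and
# can link one user's records together, but cannot recover the original
# identifier from what it stores. The salt and field names are examples.

import hashlib

def pseudonymize(pii: str, salt: str) -> str:
    """Salted one-way hash of a personally identifiable value."""
    return hashlib.sha256((salt + pii).encode("utf-8")).hexdigest()[:16]

record = {"user": pseudonymize("alice@example.com", "site-salt"),
          "purchase": "book"}  # the e-mail address is never sent in the clear
```

A fixed per-service salt keeps pseudonyms stable within one service while preventing trivial cross-service linking of the same user.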
For immediate personal privacy, Dr. Shaoen Wu, a computer-science professor at the
University of Southern Mississippi, uses Facebook but always keeps the chat option turned off.
He recommends an "awareness of what we are trying to disclose to others. (Social networking
sites) provide features but you don't have to use those features," Wu said.
Facebook requires several steps to cancel your account. If you only go through the first
step, Facebook "holds" your account, so that you can return to your profile if you decide to
reinstate your account.
Clear your web browser's history, cache and cookies on a regular basis.
Install good antivirus and anti-spyware programs. Check http://www.freeware.com for free
options.
Be wary of who you're giving your private info to, including your Social Security
number, bank account or credit card info, what sites you join or where you make online
purchases.
Be careful about accepting unknown "friend" requests. Some may be looking to spread
the "Koobface" virus by sending infected links via e-mail and/or wall posts in the hopes people
click on the infected links.
Use a unique password for each social network, e-mail or e-commerce account. The
passwords should be difficult to guess and include a combination of nonsense words, numbers
and symbols.
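The password advice above is easy to automate. This sketch uses Python's standard `secrets` module, which is designed for security-sensitive randomness; the symbol set is one reasonable choice, not a requirement.

```python
# Generate a distinct, hard-to-guess password for each account, as the
# advice above recommends, instead of reusing one password everywhere.

import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def make_password(length: int = 16) -> str:
    """Random password drawn character-by-character with secrets.choice."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One unique password per service.
passwords = {site: make_password() for site in ("mail", "bank", "social")}
```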
Switch browsers. Internet Explorer is the most commonly used browser and the most
susceptible to intrusion. Switch to Mozilla Firefox or Google Chrome, both with built-in
malware and phishing protection.
Protect Your Photos
Exchangeable Image File Format, or EXIF, tags embedded in a digital camera photo can
reveal not just technical details about the photo, but also its location, captured via the Global
Positioning System. GPS tagging is most common in smartphone cameras such as the iPhone's.
This feature, just like reverse e-mail address information, can work to a cyber-stalker's advantage. If
you have a Flickr account or another online site where you upload unprotected photos on the
Internet, make your photos available only to trusted friends, or disable the EXIF feature either in
your camera or by using software like Photoshop or Gimp.
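For readers curious what removing EXIF amounts to at the byte level: EXIF metadata, including any GPS tags, lives in a JPEG file's APP1 segment, so stripping APP1 segments removes it. The simplified sketch below assumes a well-formed JPEG and is meant to illustrate the file structure, not to replace a tested tool like Photoshop or GIMP.

```python
# Strip EXIF (APP1 segments) from JPEG bytes. A JPEG is a sequence of
# marker segments: 2-byte marker, 2-byte big-endian length (which counts
# itself), then payload. EXIF lives in APP1 (0xFFE1). Everything from the
# start-of-scan marker (0xFFDA) onward is copied untouched.

import struct

def strip_exif(jpeg: bytes) -> bytes:
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg):
        marker = jpeg[i:i + 2]
        if marker == b"\xff\xda":           # start of scan: copy the rest
            out += jpeg[i:]
            break
        (length,) = struct.unpack(">H", jpeg[i + 2:i + 4])
        if marker != b"\xff\xe1":           # keep everything except APP1
            out += jpeg[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```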
Identity Theft
Identity theft happens most commonly through "dumpster diving" for unshredded mail or
stolen wallets, which can reveal Social Security numbers, credit card numbers and other personal
info, and from organizations that store sensitive information in hard copy or online. In the online
world, one must also be aware of phishing: bogus e-mails or spam, often in the forms of "your
bank" or other institutions asking for your personal information. Also, watch out for fake charity
websites or PayPal accounts set up to take your money. Of course there is also the potential
threat of hackers, hijackers or malware, so always use secure sites for any online purchasing.
Secure sites include https:// in the URL, display a padlock (usually on the lower bar of an
Internet window) and the URL should be an "official" domain name. Watch for a string of
numbers in a URL or an address separated using "dot" segments, like "paypal.bogusaddress.net."
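The "watch the URL" advice above can be partly mechanized. This rough sketch flags a URL when it is not HTTPS, or when a familiar brand name appears in a hostname that does not actually end in the brand's real domain; a production check would also consult a public-suffix list. The brand and domain arguments are supplied by the caller for illustration.

```python
# Flag URLs where a brand name rides on a foreign domain, as in the
# "paypal.bogusaddress.net" example above. Illustrative heuristic only.

from urllib.parse import urlparse

def looks_suspicious(url: str, brand: str, real_domain: str) -> bool:
    parts = urlparse(url)
    host = parts.hostname or ""
    if parts.scheme != "https":
        return True                      # no TLS: treat as unsafe
    if brand in host and not (host == real_domain
                              or host.endswith("." + real_domain)):
        return True                      # brand name on a foreign domain
    return False

print(looks_suspicious("https://paypal.bogusaddress.net/login",
                       "paypal", "paypal.com"))   # True
print(looks_suspicious("https://www.paypal.com/login",
                       "paypal", "paypal.com"))   # False
```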
Safety and risk
Concept of Safety:
A thing is safe if its risks are judged to be acceptable. Judgements of safety are tacitly value
judgements about what risk is acceptable to a given person or group.
Types of Risks:
o Voluntary and Involuntary Risks
o Short term and Long Term Consequences
o Expected Probability
o Reversible Effects
o Threshold levels for Risk
o Delayed and Immediate Risk
Risk assessment is one of the most elaborate and extensive studies. The site is visited and exhaustive
discussions with site personnel are undertaken. The study usually covers risk identification, risk
analysis, risk assessment, risk rating, suggestions on risk control and risk mitigation.
Interestingly, risk analysis can be expanded into a full-fledged risk management study. The risk
management study also includes residual risk transfer, risk financing, etc.
Hazards identification
Failure modes and frequencies evaluation from established sources and
best practices.
of their individual ability to manage the risk-creating situation." Analyzing the risk of a situation
is, however, very dependent on the individual doing the analysis. When individuals are exposed
to involuntary risk - risk over which they have no control - they make risk aversion their primary
goal. Under these circumstances individuals require the probability of risk to be as much as one
thousand times smaller than for the same situation under their perceived control.
Evaluations of future risk:
Real future risk as disclosed by the fully matured future circumstances when they
develop.
Statistical risk, as determined by currently available data, as measured actuarially for insurance
premiums.
Projected risk, as analytically based on system models structured from historical studies.
Perceived risk, as intuitively seen by individuals.
Air transportation as an example:
Flight insurance company - statistical risk.
Passenger - perceived risk.
Federal Aviation Administration (FAA) - projected risks.
How to Reduce Risk?
1. Define the problem
2. Generate several solutions
3. Analyse each solution to determine its pros and cons
4. Test the solutions
5. Select the best solution
6. Implement the chosen solution
7. Analyse the risk in the chosen solution
8. If the risk is unacceptable, try to mitigate it, or move to the next solution
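The eight steps above amount to a generate-evaluate-iterate loop. The sketch below makes that explicit; the scoring and acceptance functions, and the toy data in the usage example, are invented for illustration.

```python
# Risk-reduction as a loop: generate candidate solutions, rank them by
# merit, and accept the first whose residual risk is tolerable.

def reduce_risk(problem, generate, evaluate, is_acceptable):
    """Try candidate solutions in order of estimated merit until one
    carries acceptable residual risk; fall back to the best available."""
    solutions = generate(problem)                           # step 2
    ranked = sorted(solutions, key=evaluate, reverse=True)  # step 3
    for solution in ranked:                                 # steps 4-6
        if is_acceptable(solution):                         # step 7
            return solution
    return ranked[0] if ranked else None                    # step 8: next best

# Toy usage: pick the first option whose residual risk is tolerable.
best = reduce_risk("overheating",
                   generate=lambda p: ["add fan", "throttle", "ignore"],
                   evaluate=lambda s: {"add fan": 3, "throttle": 2,
                                       "ignore": 0}[s],
                   is_acceptable=lambda s: s != "ignore")
print(best)  # add fan
```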
Risk-Benefit Analysis and Risk Management
Informative risk-benefit analysis and effective risk management are essential to the
ultimate commercial success of your product. We are a leader in developing statistically rigorous,
scientifically valid risk-benefit assessment studies that can be used to demonstrate the level of
risk patients and other decision makers are willing to accept to achieve the benefits provided by
your product.
Risk-Benefit Modeling
Risk-Benefit Tradeoffs
Therapeutic benefits
Third-party evaluations
Consider a situation where the owner of a majority of a publicly held corporation decides
to buy out the minority shareholders and take the corporation private. What is a fair price?
Obviously it is improper (and, typically, illegal) for the majority owner to simply state a price
and then have the (majority-controlled) board of directors approve that price. What is typically
done is to hire an independent firm (a third party), well-qualified to evaluate such matters, to
calculate a "fair price", which is then voted on by the minority shareholders.
Third-party evaluations may also be used as proof that transactions were, in fact, fair
("arm's-length"). For example, a corporation that leases an office building that is owned by the
CEO might get an independent evaluation showing what the market rate is for such leases in the
locale, to address the conflict of interest that exists between the fiduciary duty of the CEO (to the
stockholders, by getting the lowest rent possible) and the personal interest of that CEO (to
maximize the income that the CEO gets from owning that office building by getting the highest
rent possible).