
English for Informatics Engineering

UNIVERSITAS ISLAM NEGERI SUNAN GUNUNG DJATI


BANDUNG
2017

TABLE OF CONTENTS

HACKER
    The Definition of Hacker
    History
    How Hackers Work
    Difference of Black, White, and Gray Hat Hackers
    How to Avoid Hacking
NETWORK
    Understanding Computer Networks
    Network History
    Types of Computer Network
    Computer Network Topology
    Benefits of Computer Networking
COMPUTER APPLICATION
    The Definition of Computer Application
    The Benefits of Computer Application
    History of Computer Applications
    Kinds of Computer Application
    Removing Viruses in the Computer
MULTIMEDIA
    Definition of Multimedia
    Multimedia in Terms of Computing
    Category of Multimedia
    Advantages of Multimedia
    Components of Multimedia
    Multimedia Software Tools
OPERATING SYSTEM
    Definition of the Operating System
    Types of Operating System
    The Purpose and Basic Functions of the Operating System
    Kinds of Operating System
RECENT DEVELOPMENTS IN IT
    Recent Developments in IT
    Positive and Negative Effects of IT
COMPUTER ARCHITECTURE
    Definition
    History
    Operation
SOFTWARE ENGINEERING
    Definition of Software Engineering
    History of Software Engineering
    Function of Software
WEBSITE
    Definition of Website
    History of Website
    Categories of Website
    Development of Website
    Types of Website
DATA SECURITY
    Understanding of Data Security
    Data Security
    Data Security Technologies
    Key Threats to Data Security

HACKER

A. THE DEFINITION OF A HACKER

Hackers are people who study, analyze, modify, and break into
computers and computer networks, whether for profit or motivated
by a challenge.

B. HISTORY

The term "hacker" appeared in the early 1960s among
members of the Tech Model Railroad Club, a student organization
at the Artificial Intelligence Laboratory of the Massachusetts
Institute of Technology (MIT). This group of students was one of
the pioneers of computer technology development, working with
a number of mainframe computers. The English word "hacker"
first appeared with a positive meaning, referring to a member
with computer expertise who was able to create a better computer
program than had been designed together.

Then, in 1983, the term hacker started to take on negative
connotations. That year, for the first time, the FBI arrested a
group of computer criminals, "The 414s," based in Milwaukee,
United States; 414 was their local area code. These so-called
hackers were found guilty of breaking into 60 computers, ranging
from computers belonging to the Sloan-Kettering Memorial
Cancer Center to computers belonging to the Los Alamos
National Laboratory. One of the perpetrators was granted
immunity in exchange for his testimony, while the other five were
sentenced to probation.

In later developments, another group appeared who called
themselves hackers but were not. These people (especially adult
men) got satisfaction from breaking into computers and
outsmarting the phone system (phreaking). True hackers call
these people crackers and do not like to associate with them.
True hackers regard crackers as lazy, irresponsible, and not very
smart, and disagree that merely breaking into someone's security
makes one a hacker.

Hackers hold an annual meeting every mid-July in
Las Vegas. The world's largest hacker gathering is called
Def Con. The Def Con event is mainly dedicated to exchanging
information and technology related to hacking activities.

Hackers have a negative connotation because of the
community's misunderstanding of the difference between the
terms hacker and cracker. Many people assume that it is hackers
who cause losses to certain parties, such as changing the look of
a website (defacing) or inserting virus code, when in fact these
are crackers. Crackers use security gaps that have not yet been
fixed by the software makers (bugs) to infiltrate and destroy a
system. For this reason, hackers are generally understood to be
divided into two groups: White Hat Hackers, the real hackers,
and crackers, who are often referred to as Black Hat Hackers.

Eric Raymond defines hackers as clever programmers. A
good hack is an elegant solution to a programming problem, and
hacking is the process of producing it. According to Raymond,
there are five (5) characteristics that indicate a person is a
hacker, namely:

1. Someone who likes to learn the details of a programming
language or system.
2. Someone who actually does programming, not just theory.
3. Someone who can appreciate and enjoy the results of
others' hacking.
4. Someone who can learn programming quickly.
5. Someone who is expert in a certain programming language
or specific system, such as UNIX hackers.
C. HOW HACKERS WORK

To protect the computer when connecting to the Internet, we need
to know how hackers work to access a system. Hacking is an
'art' of its own, involving the process of looking for fragments of
information scattered everywhere that at first seem unrelated to
each other. To give an overall description of the hacking process,
the logical steps are presented below:

1. Footprinting. Search for detailed information on the target
systems, including information gathering with search engines,
whois, and DNS zone transfers.
2. Scanning. Probe the chosen targets for the most likely
entry points, using ping sweeps and port scans.
3. Enumeration. Review the targets intensively, looking for
legitimate user accounts, network resources and shares,
and applications, to find out which ones are weak.
4. Gaining Access. Use the collected data to start trying to
access the targets. This includes stealing or guessing
passwords and buffer overflows.
5. Escalating Privilege. If only a user password was obtained
in the previous stage, this stage tries to gain network admin
privileges with password cracking or exploits such as getadmin,
sechole, or lc_messages.
6. Pilfering. The information-gathering process begins again,
this time to identify mechanisms for gaining access to trusted
systems. This includes trust evaluation and cleartext password
searches in the registry, config files, and user data.
7. Covering Tracks. Once full control of the system is
obtained, covering tracks becomes a priority. This includes
cleaning up network logs and using hiding tools such as
rootkits and file streaming.
8. Creating Backdoors. Back doors are created in various
parts of the system to make it easy to re-enter, by creating
user accounts, scheduling batch jobs, changing startup files,
adding remote control services and monitoring tools, and
replacing applications with trojans.
9. Denial of Service. If all of the above attempts fail, the
attacker may disable the target as a last resort. This
includes SYN floods, ICMP techniques, supernuke, land/
latierra, teardrop, bonk, newtear, trinoo, smurf, and
others.
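The scanning step above can be illustrated with a minimal TCP connect scan in Python. This is an illustrative sketch for testing machines you own; the function name and defaults are our own, and real tools such as nmap add SYN scans, service detection, and timing controls on top of this basic idea.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```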

D. DIFFERENCE BETWEEN BLACK, WHITE, AND GRAY HAT
HACKERS

BLACK HAT HACKER

Black-hat hackers, or simply black hats, are the type of
hacker the popular media seems to focus on. Black-hat hackers
violate computer security for personal gain (such as stealing
credit card numbers or harvesting personal data for sale to
identity thieves) or for pure maliciousness (such as creating a
botnet and using that botnet to perform DDoS attacks against
websites they don't like). Black hats fit the widely held stereotype
that hackers are criminals performing illegal activities for
personal gain and attacking others. They're the computer
criminals. A black-hat hacker who finds a new, zero-day
security vulnerability would sell it to criminal organizations on
the black market or use it to compromise computer systems.

WHITE HAT HACKER

White-hat hackers are the opposite of black-hat hackers.
They're the ethical hackers: experts in compromising computer
security systems who use their abilities for good, ethical, and
legal purposes rather than bad, unethical, and criminal ones.

GRAY HAT HACKER

Very few things in life are clear black-and-white categories;
in reality, there's often a gray area. A gray-hat hacker falls
somewhere between a black hat and a white hat. A gray hat
doesn't work for their own personal gain or to cause carnage, but
they may technically commit crimes and do arguably unethical
things.

E. HOW TO AVOID HACKING

Black hat hackers are everywhere, so make sure your data is safe
from them. Here are some tips for avoiding hacking:
1. Make sure you sign out of your account after using a
public device.
2. Use strong passwords.
3. Do not download from unknown sites.
4. Do not click on links you do not recognize in emails or
on sites.
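The advice to use strong passwords can be made concrete with a small checker. This is an illustrative sketch only; the length and character-class thresholds below are our own choices, not an official standard.

```python
import re

def is_strong_password(password: str) -> bool:
    """A rough strength test: long enough and mixing four character classes."""
    checks = [
        len(password) >= 12,                   # minimum length
        re.search(r"[a-z]", password),         # at least one lowercase letter
        re.search(r"[A-Z]", password),         # at least one uppercase letter
        re.search(r"[0-9]", password),         # at least one digit
        re.search(r"[^A-Za-z0-9]", password),  # at least one symbol
    ]
    return all(checks)
```

For example, `is_strong_password("password123")` fails (no uppercase or symbol), while a long mixed phrase passes.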

NETWORK

Group 2

Member:

1. Fitri Nurholidah
2. Muhammad Faisal S
3. Muhammad Nur Sidiq S

CHAPTER I
PRELIMINARY

A. Background
Computer networking is nothing new nowadays. Almost every
company has a computer network to facilitate the flow of
information inside it. The Internet, which is gaining popularity,
is itself a giant computer network: a network of connected
computers that can interact with each other. This is possible
because network technology has developed very rapidly, so that
in just a few years the number of computer users joined in the
Internet has multiplied. Since the Internet appeared and Microsoft
marketed the Windows 95 operating system, connecting several
computers, both personal computers (PCs) and servers, into
networks ranging from the LAN (Local Area Network) type to the
WAN (Wide Area Network) has become commonplace. Similarly,
with the concepts of "downsizing" and "rightsizing," which
aim to reduce budgets, especially for computer equipment, a
network is something that is very necessary.

B. The Purpose of Writing

The purpose of writing this paper is to know and understand
computer networks and their use in everyday life. By reading
this paper, we hope to better understand information technology,
because an advanced nation is a nation that has mastered
technology and information. By understanding computer
networks, it becomes much easier for us to accomplish things in
our daily lives.

CHAPTER II
THEORETICAL BASIS

A. Understanding Computer Networks

A computer network is a collection of computers, printers, and
other connected equipment. Information and data move through
wires, allowing users of the network to exchange documents and
data, print on the same printer, and jointly use hardware and
software connected to the network. Each computer, printer, or
peripheral connected to a network is called a node. A computer
network can have two, tens, thousands, or even millions of nodes.
A network usually consists of two or more computers
interconnected with one another, sharing resources such as
CD-ROMs, printers, and file exchange, or enabling electronic
communication with each other. The connected computers may be
linked by cable media, telephone lines, radio waves, satellites, or
infrared rays.
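The node-to-node exchange described above can be demonstrated with two tiny Python "nodes" talking over TCP sockets. This is a hypothetical minimal sketch (the function names are ours); real networks add naming, routing, and error handling.

```python
import socket
import threading

def start_echo_node(host="127.0.0.1", port=0):
    """One 'node': accept a single connection and echo the data back."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((host, port))  # port 0 lets the OS pick a free port
    server.listen(1)

    def serve():
        conn, _ = server.accept()
        with conn:
            conn.sendall(conn.recv(1024))  # echo whatever was received
        server.close()

    threading.Thread(target=serve, daemon=True).start()
    return server.getsockname()[1]  # the actual port chosen by the OS

def send_message(port, message, host="127.0.0.1"):
    """A second 'node' connects and exchanges a small document."""
    with socket.create_connection((host, port)) as s:
        s.sendall(message.encode())
        return s.recv(1024).decode()
```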

CHAPTER III

DISCUSSION

A. Network History

The concept of computer networks was born in the 1940s in
America, from the MODEL I computer development project at
Bell Laboratories and the Harvard University research group
led by Professor H. Aiken. At first the project simply wanted to
make use of a computer that had to be shared. To work on
several processes without wasting much idle time, batch
processing was created, so that several programs could run on
one computer under queueing rules.

In the 1950s, as computers began to grow and super computers
were created, a single computer had to serve several terminals.
For this, the concept of time-based process distribution was
devised, known as TSS (Time Sharing System), and for the first
time a (computer) network was applied. In a TSS system,
several terminals are connected in series to a host computer. In
the TSS process, a combination of computer technology and
telecommunication technology, which had originally developed
independently, began to appear.

Entering the 1970s, after workloads multiplied and the price of
large computer equipment began to seem very expensive, the
concept of distributed processing came into use. In this process,
several host computers do large jobs in parallel, serving
multiple terminals connected in series to each host computer.
The distributed process absolutely requires deep integration
between computer and telecommunications technology, because
in addition to the processes that must be distributed, all host
computers are obliged to serve their terminals under one
command from the central computer.

Furthermore, when the prices of small computers began to
decline and the concept of distributed processing matured, the
use of computers and networks began to vary, from handling
shared processes to communication between computers
(peer-to-peer systems) without going through a central computer.
From there, local network technology, known as LAN, began to
develop. Similarly, when the Internet began to be introduced,
most of the standalone LANs started to connect and formed the
giant WAN networks.

B. Types Of Computer Network


A computer network is a collection of computers, printers, and
other equipment connected as one unit. Information and data
move via wired or wireless links, enabling network users to
exchange documents and data, print on the same printer, and
jointly use hardware and software connected to the network. Any
computer, printer, or peripheral connected to the network is
called a node. A computer network can have two, tens, thousands,
or even millions of nodes. In general, computer networks are
divided into five types, namely:
1. Local Area Network (LAN)
A Local Area Network (LAN) is a privately owned network within
a building or campus measuring up to several kilometers. LANs
are often used to connect personal computers and workstations in
the office of a company or factory to share resources (e.g.,
printers) and exchange information.
2. Metropolitan Area Network (MAN)
A Metropolitan Area Network (MAN) is basically a larger version
of a LAN and typically uses the same technology as a LAN. A
MAN can cover company offices that are located close together,
or a whole city, and can be used for private or public purposes.
A MAN is able to carry data and voice, and can even be
connected to cable television networks.
3. Wide Area Network (WAN)
A Wide Area Network (WAN) covers a broad geographical area,
often including a country or even a continent. A WAN consists of
a collection of machines whose purpose is to run user programs
(applications).
4. Internet
Actually, there are many networks in this world, often using
different hardware and software. People connected to one
network often hope to be able to communicate with people
connected to other networks. Such desires require connections
between networks that are often incompatible and different.
Usually, doing this requires a machine called a gateway to make
the connection and carry out the required translation, of both
hardware and software. This set of interconnected networks is
what is called the Internet.
5. Wireless Networks
A wireless network is a solution for communication that cannot
be done over a cabled network. For example, someone who wants
to get information or communicate while in a car or on an
airplane absolutely requires a wireless network, because cable
connections are not possible in cars or aircraft. Nowadays,
wireless networks are already in widespread use, utilizing
satellite services and able to provide faster access speeds than
wired networks.

C. Computer Network Topology

A topology is a way of connecting one computer with other
computers so as to form a network. The topologies most widely
used today are the bus, token-ring, star, and peer-to-peer
networks. Each topology has its own characteristics, with its own
advantages and disadvantages.

1. Bus Topology
In this topology all centrals are connected directly to the
transmission medium in a configuration called a bus. A signal
transmitted from one central flows in both directions, but not
simultaneously. This is very different from what occurs in a mesh
or star topology, where communication or interconnection
between centrals can take place at the same time. The bus
topology is not commonly used for interconnection between
centrals, but is usually used in computer network systems.

Advantages:
Saves cable
Simple cable layout
Easy to develop

Disadvantages:
Fault detection and isolation are very limited
High traffic density
If the bus is damaged, the network will not work
Remote repeaters are required

2. Ring Topology
The token-ring method (often simply called the ring) is a way of
connecting computers so that they form a ring (circle). Each node
has the same level. The network is called a loop: data is sent to
each node, and each node checks the address of any information
it receives to see whether the data is for it or not.

Advantages:
Saves cable

Disadvantages:
Sensitive to errors
Network development is more rigid

3. Star Topology
Control is centralized: all traffic must pass through the central
node before reaching the selected destination node or client. The
central node is called the primary station or server, and the
others are called secondary stations or clients. Once the network
connection is started by the server, each client can use the
network connection at any time without waiting for a command
from the server.

Advantages:

Most flexible
Installing or changing a station is very easy and does not
disturb other parts of the network
Centralized control
Ease of detecting and isolating errors/damage
Ease of network management

Disadvantages:

Wastes cable
Needs special handling
The centralized control point (the hub) becomes a critical element

4. Peer-to-Peer Network Topology

Peer means co-worker. A peer-to-peer network is a computer
network consisting of several computers (usually no more than 10
computers, with one or two printers). In this network system, the
main concern is the shared use of programs, data, and printers.
A user named Dona can use a program installed on Dino's
computer, and they both can print to the same printer at the same
time. This network system can also be used at home. Computer
users who have an "old-fashioned" computer, e.g., an AT, and
want to buy a new computer, say a Pentium IV, need not throw
away the old computer. They just install a network card in both
computers and then connect them with a special cable used for
network systems.
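For illustration, the ring and star arrangements described above can be written down as lists of point-to-point links. This is a small sketch of our own; the function names are hypothetical.

```python
def ring_links(nodes):
    """In a ring, each node links to the next; the last wraps to the first."""
    return [(nodes[i], nodes[(i + 1) % len(nodes)]) for i in range(len(nodes))]

def star_links(center, leaves):
    """In a star, every node links only to the central node (server/hub)."""
    return [(center, leaf) for leaf in leaves]
```

For example, `ring_links(["A", "B", "C"])` yields the three links of a three-node ring, while `star_links("hub", ["A", "B", "C"])` shows why the hub is a critical element: every link touches it.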

D. Benefits of Computer Networking


1. Resource Sharing
Existing resources can be used simultaneously. Suppose a user is
100 km away from some data; with a network, they have no
difficulty using that data, as if the data were located nearby. This
is often summarized as: computer networks solve the distance
problem.
2. Reliability
With a computer network we get high reliability by having
alternative sources of supply. For example, all files can be saved
or copied to two, three, or more connected computers. So when
one of the machines breaks down, a copy on another machine can
be used.
3. Saving Money
Smaller-scale computers have a better price/performance ratio
than large computers. Large computers such as mainframes are
roughly ten times faster than small/personal computers, yet
mainframe prices are a thousand times more expensive than a
personal computer. This imbalance in the price/performance ratio
has led system designers to build systems consisting of personal
computers.

CHAPTER IV

CLOSING

A. Conclusion

From the explanations in this paper, several conclusions can be
drawn, including that computer networks are divided into:

1. Local Area Network (LAN)

2. Metropolitan Area Network (MAN)

3. Wide Area Network (WAN)

B. Suggestions

The suggestions the authors can offer on this occasion are:

1. Computer network technology greatly facilitates us in a
variety of activities; therefore, we should not misuse it in ways
that would harm other parties.

2. Keep building our knowledge of computer networks.

COMPUTER APPLICATION

Group 3

MEMBER :

1. FAWAZ HUTOMI
2. GITA SONIA INDRIANI
3. WIRA NUGRAHA

A. The definition of Computer Application


A computer application, or application software, is a computer
program written in a programming language and used to solve
specific problems.
A computer application (application program) is a computer
program designed to perform a group of coordinated functions,
tasks, or activities for the benefit of the user. Examples of
applications include a word processor, a spreadsheet, an
accounting application, a web browser, a media player, an
aeronautical flight simulator, a console game, and a photo editor.
Computer Applications, as a subject, is designed to test students'
abilities to use word-processing, spreadsheet, and database
application software, including the integration of those
applications.
B. The Benefits of computer application
Computer applications are usually created to make it easier for
people to carry out tasks on a computer, such as processing data
or editing.

C. History of Computer Applications

The ENIAC computer
The first substantial computer was the giant ENIAC machine,
built by John W. Mauchly and J. Presper Eckert at the University
of Pennsylvania. ENIAC (Electrical Numerical Integrator and
Calculator) used words of 10 decimal digits instead of binary
ones like previous automated calculators/computers. ENIAC was
also the first machine to use more than 2,000 vacuum tubes; it
used nearly 18,000 of them. Housing all those vacuum tubes, and
the machinery required to keep them cool, took up over 167
square meters (1,800 square feet) of floor space. Nonetheless, it
had punched-card input and output and arithmetically had 1
multiplier, 1 divider/square-rooter, and 20 adders employing
decimal "ring counters," which served as adders and also as
quick-access (0.0002 seconds) read-write register storage.
The executable instructions composing a program were embodied
in the separate units of ENIAC, which were plugged together to
form a route through the machine for the flow of computations.
These connections had to be redone for each different problem,
together with presetting function tables and switches. This "wire-
your-own" instruction technique was inconvenient, and only
with some license could ENIAC be considered programmable; it
was, however, efficient in handling the particular programs for
which it had been designed. ENIAC is generally acknowledged to
be the first successful high-speed electronic digital computer
(EDC) and was productively used from 1946 to 1955. A
controversy developed in 1971, however, over the patentability of
ENIAC's basic digital concepts, the claim being made that
another U.S. physicist, John V. Atanasoff, had already used the
same ideas in a simpler vacuum-tube device he built in the 1930s
while at Iowa State College. In 1973 the court found in favor of
the company using the Atanasoff claim, and Atanasoff received
the acclaim he rightly deserved.
D. Kinds of Computer Application

1. Microsoft Word

This application program is used for writing or typing
assignments, papers, and so forth.

2. Microsoft Excel

This application program is used to create data tables or
spreadsheets. This program will greatly assist you in creating
tables of financial expenditures, and it also has automatic
commands (formulas) that make it easy to calculate sums,
averages, and so on.

3. Microsoft Power Point

This application program is used to create slide shows for
presentations or meetings, and also for web pages.

4. Notepad

This application program serves as a place to keep notes.

5. Mozilla Firefox and Google Chrome

These application programs are web browsers, used to surf the
internet.

6. Adobe Photoshop

Adobe Photoshop is a graphics program that can be used to edit
images.

7. Paint

This program can also be used to edit pictures, or even serve as
a canvas, a place to paint.
8. Winamp

The Winamp program serves to play music or video.

9. Windows Media Player

This program is used to play videos and can also play music.

10. Antivirus

This program serves to prevent viruses from entering the
computer and also to remove viruses from your computer.

E. Removing Viruses in the Computer

A computer virus is one of several types of malware that can
damage our computer system. Generally, a computer that has
been infected with a virus shows signs of decreased performance.
Among the signs indicating the presence of a virus on the
computer are the following:

1. The computer feels slow.
2. The computer frequently hangs.
3. Windows takes a long time to start up.
4. Some files cannot be opened as usual.
5. The internet cannot be accessed even though the
connection is fine.
6. Popup windows appear, as if from an antivirus program,
containing a warning that your computer has problems
and should be fixed immediately, even though you never
installed that antivirus.
7. File attributes cannot be changed.
8. Some of your files seem to be lost all at once, but are in
fact only hidden.
9. Files appear with strange names that you never created.
10. Other suspicious things happen.

Clean a Computer Virus

Here is how to clean up viruses and other malware if the
computer we use is already infected:

a. Prepare the applications to be used to clean viruses, including
the following:

Kaspersky TDSSKiller Anti-Rootkit Utility
Malwarebytes Anti-Malware
SUPERAntiSpyware Portable Scanner
HitmanPro 3 Malware Scanner
Antivirus programs such as Avast, AVG, Microsoft
Security Essentials, etc.
The default Windows 7 file-extension .reg file
Norman Malware Cleaner
Auslogics Browser Care, a very useful tool to configure
web browsers and disable unneeded plugins that are often
installed on our browser without us realizing it

Save all those files onto a flash drive.

b. Start Windows in Safe Mode

After all the tools are collected, the next step is to start
Windows in Safe Mode. The trick: restart the computer and press
the F8 key until the Windows Boot Options screen appears, as
shown in the picture below, then choose Windows Safe Mode.

c. Clean the viruses with the tools that were prepared in advance.

After a successful login, begin cleaning the viruses by first
running the Kaspersky TDSSKiller Anti-Rootkit Utility. Then run
Malwarebytes Anti-Malware (the program should be updated
first). After that, run the SUPERAntiSpyware Portable Scanner,
followed by HitmanPro 3 Malware Scanner.

After the computer system has been cleaned by these malware
scanners, the next step is to install or run an antivirus program
such as Avast, AVG, Kaspersky, or another, and select the "Full
Scan" option. Note: make sure the antivirus has been updated in
advance with the latest database version.

d. Clean the System Restore data

To keep your computer system from being re-infected by a virus
that is present in a System Restore point, we need to delete the
system restore points that have been made. Here's how:

Click the Start menu > All Programs > Accessories > System
Tools, then click Disk Cleanup. Select the drive to be cleaned,
that is, the C drive.

Click the More Options tab, and then, in the System Restore
section, click Clean up.

A Disk Cleanup confirmation window will appear; click the
Delete button.

e. Clean the temporary files

Next, clean the temporary files; this can be done using the
Temp File Cleaner application. This application clears all
temporary folders in all user accounts (temp, IE temp, Java, FF,
Opera, Chrome, Safari), including the Administrator, All Users,
LocalService, NetworkService, and the other user accounts in the
Users folder. Temp File Cleaner will also clean the
%systemroot%\temp folder, the %systemdrive% root folder, and
the %systemroot%\system32 folder.
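The core idea behind a temp-file cleaner can be sketched in a few lines of Python. This is an illustrative stand-in of our own, not the actual Temp File Cleaner application: it deletes every regular file directly inside a folder and skips anything the operating system refuses to remove, such as files still in use.

```python
import os

def clean_temp_dir(path):
    """Remove the regular files directly inside `path`; return the count removed."""
    removed = 0
    for name in os.listdir(path):
        full = os.path.join(path, name)
        if os.path.isfile(full):  # skip subfolders
            try:
                os.remove(full)
                removed += 1
            except OSError:
                pass  # a locked or in-use file is simply skipped
    return removed
```

A real cleaner would also recurse into subfolders and cover each user account's temp location, as described above.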

f. Repair the Windows operating system

To repair the damage inflicted by viruses, such as broken file
associations that prevent some documents from being opened by
their default programs, we must repair the registry. To restore
file associations to their defaults on Windows, you can use the
tool that was prepared in advance.

If the damage is severe enough, then doing a Repair of Windows
is the step we have to take.

g. Use a Rescue CD

If the damage inflicted by the virus means the computer will not
boot at all, then we must use what is referred to as a Rescue CD.

MULTIMEDIA

Group 4

Member:

1. Rizka Alawiyah
2. Arip Hidayat

A. Definition of Multimedia

Multi: more than one
Medium (singular): middle, intermediary, means
Media (plural): means for conveying information

a. Media in the press, newspaper, radio, and TV context:
mass media
b. Media in communications: cables, satellite, network
transmission media
c. Media in computer storage: floppy, CD, DVD, HD, USB
storage media
d. Media in the HCI context: text, image, audio, video, CG
interaction media

Multimedia refers to various information forms:
text, image, audio, video, graphics, and animation, in a
variety of application environments.

The word "multimedia" is widely used to mean many different
things: a product, application, technology, platform, board,
device, network computer, system, classroom, school, and so on.

Multimedia is more than one concurrent presentation medium
(for example, on a CD-ROM or a Web site). Although still images
are a different medium than text, multimedia is typically used to
mean the combination of text, sound, and/or motion video. Some
people might say that the addition of animated images (for
example, animated GIFs on the Web) produces multimedia, but it
has typically meant one of the following:

a. Text and sound

b. Text, sound, and still or animated graphic


images

c. Text, sound, and video images

d. Video and sound

e. Multiple display areas, images, or presentations


presented concurrently

f. In live situations, the use of a speaker or actors
and "props" together with sound, images, and
motion video

B. Multimedia in terms of Computing

Computing is the process of utilizing computer technology to
complete a task. Computing may involve computer hardware
and/or software, but must involve some form of a computer
system. Most individuals use some form of computing every day,
whether they realize it or not: swiping a debit card, sending an
email, or using a cell phone can all be considered forms of
computing. "Mason understood that his new job at the large IT
firm would require a large portion of computing projects."

In terms of computing, multimedia has four fundamental
attributes:

a. Digitized: All media, including audio/video, are
represented in digital format.

b. Distributed: The information conveyed is remote, either
pre-produced and stored or produced in real time, and
distributed over networks.

c. Interactive: It is possible to affect the information
received, and to send one's own information, in a
non-trivial way beyond start, stop, and fast-forward.

d. Integrated: The media are treated in a uniform way and
presented in an orchestrated way, but can be manipulated
independently.

C. Category of Multimedia
a. Linear Multimedia
Linear multimedia is a type of multimedia designed to be
presented in a sequential manner. It has a distinct beginning
and end, and follows a logical flow from a starting point to a
conclusion.

It is usually intended for display purposes, with little
interaction or distraction from the audience. Because of its
nature, where audience participation is not expected, linear
multimedia may also be referred to as passive multimedia.
In this kind of presentation, the creator of the multimedia is
in control.

This kind of media is preferable when interaction is not
necessary in the presentation.

39
Main goals include: to entertain, to
transmit knowledge, and to make people familiar
on a certain topic WITHOUT any form of
diversion

Examples may be:

a) A PowerPoint presentation
b) A slideshow of pictures that goes on in a specific direction
c) A storyline/movie
d) An anime episode
e) A YouTube video

Advantages:

a) The audience gets to focus and concentrate on a specific topic.
b) There is a logical order in the presentation; it is organized.
c) The presenter controls the flow of the presentation.
d) Effective when we need our audience to absorb the information well.

Disadvantages:

a) Minimal interactivity, or none at all.
b) The audience has no say on the topic they want to dwell on.
b. Non-Linear Multimedia
Non-linear multimedia is a non-sequential type of multimedia where the person's participation is crucial.

In this type of media, the person needs to interact with a computer program, thus putting him in control of the experience. With the presence of an interface, the person and the computer interact with each other.

From a starting point, the person using non-linear multimedia is given a range of options that, according to his own preferences, will lead him to new information.

Examples may include:

a) A website
b) A search engine's home page
c) A DVD menu screen
d) A YouTube channel
e) An anime or Korean drama streaming site

Advantages:
The person is in control and may use the multimedia according to his preferences and needs.

Disadvantages:
a) Requires a level of computer literacy from the user.
b) May be disorganized if not used well.

Imagine a movie. Normally a movie goes on in a linear format, starting from point A and ending at point B. The viewer watches and need not do anything in order to enjoy the movie. However, if viewed on a DVD, the viewer is now given the option to choose which scenes to watch and which subtitles to use, and can now even pause and rewind the movie.

D. Advantages of Multimedia
There are several advantages of multimedia:
a) It is very user-friendly. It does not demand much energy from users: you can sit and watch the presentation, read the text, and hear the sound.
b) It is multi-sensory. It uses many of the user's senses, such as hearing, seeing, and speaking.
c) It is comprehensive and interactive. Because the different media are digitally integrated, the possibilities for easy interaction and feedback are greatly increased.
d) It is flexible. Being digital, this media can easily be changed to adapt to different situations and audiences.
e) It can be used for a variety of audiences, ranging from one person to a whole group.
f) Creative industries: the creative industries, including advertising, media, and news, use multimedia in fun and interactive ways to express their ideas. Advertising agencies and other creative organizations convey information, ideas, and news in creative ways, and multimedia plays a vital role in the interactive visualization of those ideas.
g) The latest developments in enterprise: technology and the multimedia environment have made it possible for entrepreneurs to come up with attractive company websites or presentations, including information about their products and services explained through text, audio, and video.
h) Marketing: building website content with text, images, and video that shows the general idea of a product is very popular. Promoting our ideas through media links and social networking sites has become inevitable. Customers can easily visualize the message, follow the link to the website, and understand it better.
i) Telecommunications industry: today, everyone is familiar with the Multimedia Messaging Service (MMS). This service makes it possible to send audio and video content from our mobile phones, not just text. Previously, messaging was limited to only a certain number of text characters. Multimedia applications on phones keep increasing with the daily development of functions such as playing music and games, watching movies, and reading news on our mobiles.
j) Entertainment: the entertainment industry is one of the biggest users of multimedia. With the latest technological research and inventions, the range of multimedia expands every year. We like to see 3D movies in cinema theaters and on television, and enjoying a movie's special effects would not have been possible without multimedia.
k) Video games run on multimedia platforms. A wide range of new video games is introduced frequently. In short, it can be said that consumer support for and positive feedback on digital multimedia keep increasing as the world becomes more technology-friendly. Network marketing and the expanding popularity of computers, mobile phones, and video games keep opening new doors of opportunity for multimedia.
E. Components of Multimedia
a. Text
It may be an easy content type to forget
when considering multimedia systems, but text
content is by far the most common media type in
computing applications. Most multimedia systems
use a combination of text and other media to
deliver functionality. Text in multimedia systems
can express specific information, or it can act as
reinforcement for information contained in other
media items. This is a common practice in
applications with accessibility requirements. For

example, when Web pages include image
elements, they can also include a short amount of
text for the user's browser to include as an
alternative, in case the digital image item is not
available.
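As a minimal sketch of that fallback mechanism, the following Python snippet uses the standard library's HTML parser to collect the alt text of img elements, the text a browser can show when the image file is unavailable. The sample page markup and file names are invented for illustration.

```python
from html.parser import HTMLParser

class AltTextCollector(HTMLParser):
    """Collect the alt text of <img> elements, the fallback a browser
    shows when the image file itself cannot be loaded."""
    def __init__(self):
        super().__init__()
        self.alt_texts = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            # Fall back to an empty string when no alt attribute is given
            self.alt_texts.append(attrs.get("alt", ""))

page = '<p>Report</p><img src="chart.png" alt="Sales chart for 2017"><img src="logo.png">'
collector = AltTextCollector()
collector.feed(page)
print(collector.alt_texts)  # ['Sales chart for 2017', '']
```

Note how the second image, which has no alt attribute, yields an empty fallback: this is exactly the accessibility gap the paragraph above warns about.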
b. Images
Digital image files appear in many
multimedia applications. Digital photographs can
display application content or can alternatively
form part of a user interface. Interactive elements,
such as buttons, often use custom images created
by the designers and developers involved in an
application. Digital image files use a variety of formats and file extensions. Among the most common are JPEG and PNG. Both often appear on websites, as these formats allow developers to minimize file size while maximizing picture quality. Graphic design software programs such as Photoshop and Paint.NET allow developers to create complex visual effects with digital images.
c. Audio
Audio files and streams play a major role in some
multimedia systems. Audio files appear as part of
application content and also to aid interaction.

When they appear within Web applications and sites, audio files sometimes need to be deployed using plug-in media players. Audio formats include MP3, WMA, WAV, MIDI, and RealAudio. When developers include audio within a website, they will generally use a compressed format to minimize download times. Web services can also stream audio, so that users can begin playback before the entire file is downloaded.
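The streaming idea, beginning playback before the whole file has arrived, can be sketched as reading fixed-size chunks from a source and handing each chunk onward as soon as it is read. This Python sketch simulates the network source with an in-memory buffer; the chunk size and resource size are arbitrary choices for illustration.

```python
import io

CHUNK_SIZE = 4096  # bytes handed to the player at a time

def stream_chunks(source, chunk_size=CHUNK_SIZE):
    """Yield fixed-size chunks from a file-like object so a player can
    start working on early chunks before the rest has arrived."""
    while True:
        chunk = source.read(chunk_size)
        if not chunk:
            break
        yield chunk

# Simulate a 10,000-byte audio resource arriving over the network
resource = io.BytesIO(b"\x00" * 10_000)
sizes = [len(c) for c in stream_chunks(resource)]
print(sizes)  # [4096, 4096, 1808]
```

A real streaming player would decode and play each chunk as it is yielded instead of collecting sizes, but the control flow is the same.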
d. Video
Digital video appears in many multimedia
applications, particularly on the Web. As with
audio, websites can stream digital video to
increase the speed and availability of playback.
Common digital video formats include Flash,
MPEG, AVI, WMV and QuickTime. Most digital
video requires use of browser plug-ins to play
within Web pages, but in many cases the user's
browser will already have the required resources
installed.
e. Animation
Animated components are common within both
Web and desktop multimedia applications.
Animations can also include interactive effects, allowing users to engage with the animation action using their mouse and keyboard. The most
common tool for creating animations on the Web
is Adobe Flash, which also facilitates desktop
applications. Using Flash, developers can author
FLV files, exporting them as SWF movies for
deployment to users. Flash also uses ActionScript
code to achieve animated and interactive effects.
F. Multimedia Software Tools
a. Music Sequencing and Notation
a) Cakewalk
Supports General MIDI
Provides several editing views (staff, piano roll, event list) and a Virtual Piano
Can insert WAV files and Windows MCI commands (animation and video) into tracks
b) Cubase
A more capable program than Cakewalk Express
Intuitive interface to arrange and play music (Figs 2.6 and 2.7)
Wide variety of editing tools, including audio (Figs 2.8 and 2.9)
Its main windows include the Arrange window, the Transport Bar window (which emulates a tape recorder interface), the Audio window, the Audio Editing window with editing functions, and the Score Editing window
Allows printing of notation sheets
c) Logic Audio
Cubase competitor, similar functionality
d) Mark of the Unicorn Performer
Cubase/Logic Audio competitor, similar functionality
b. Digital Audio
a) Audacity
b) Power Sound Editor
c) Music Editor Free
d) Wavosaur
e) Ardour
f) Rosegarden
c. Video Editing
a) Adobe Premiere
b) Adobe After Effects
c) Final Cut Pro
d. Graphic and Image Editing
a) Adobe Illustrator
b) Adobe Photoshop
c) Macromedia Fireworks
d) Macromedia Freehand
e. Animation
a) Java3D
b) DirectX
c) OpenGL
d) 3D Studio Max
e) Softimage XSI
f) Maya
g) RenderMan
h) GIF animation packages
f. Multimedia Authoring
a) Macromedia Flash
b) Macromedia Director
c) Authorware
d) Quest

OPERATING SYSTEM
Group 5
PREFACE

Praise and great gratitude are submitted to Almighty God, Allah SWT, who always gives His gracious mercy and tremendous blessing, which has helped the writers finish this article entitled "Operating Systems". Sholawat and salam we submit to our Lord, the Prophet Muhammad SAW.

The computer is an advanced tool that has many uses to help the work of man. With computers, much work can be done effectively and efficiently. The computer is merely a tool (an inanimate object), while the human being is the user. Without being operated by a human, the computer cannot work by itself. How can a computer be made to work in accordance with human wishes? What tools are used to command the computer? So that we can answer these questions, let us learn and understand the discussion presented in this paper.

CHAPTER I

INTRODUCTION

A. Background
A computer operating system is software that is in charge of controlling and managing the hardware as well as the basic system operations, including running application software such as data-processing programs that can be used to facilitate human activities. The operating system is known in English as the Operating System, or OS for short.

B. The Purpose of the Writing

The purpose of making this paper is to fulfill an assignment for the English course. In addition, it lets us find out in more detail what is meant by the different types of computers and operating systems.

CHAPTER II
DISCUSSION

A. Definition of The Operating System

In general, an operating system is the software that manages all of the resources contained in a computer system and provides system calls to users, so as to give users ease and comfort in utilizing the resources of the computer system. The operating system is the most important type of system software in a computer system.

Without an operating system, a user cannot run any application on their computer, unless the application program boots itself. Examples of modern operating systems are Linux, Android, iOS, Mac OS X, and Microsoft Windows.

B. Types of Operating Systems

Operating systems can be divided into two groups: open source operating systems and closed source operating systems.

1. Open source operating systems
Open source software is software whose source code is open and provided by the developer to the public, so that it can be studied, modified, or further developed and widely distributed. If a software maker does not allow its source code to be changed or modified, then the program is not referred to as open source, even though the software itself is available.

The term open source (open source code) was popularized in 1998. The history of open source software begins with the hacker culture that developed in the computer labs of American universities such as Stanford, Berkeley, Carnegie Mellon, and MIT in the 1960s and 1970s. Open source operating systems are considered advantageous, especially by their users. Some of the advantages and disadvantages of open source:

Advantages
1. Many people (human resources) contribute to the projects
2. Errors (bugs) are found and repaired much faster
3. The quality of the result is better assured because of community evaluation
4. More secure
5. Cost effective
6. No duplication of development effort

Disadvantages
1. A lack of human resources able to take advantage of open source
2. A lack of protection of intellectual property (IP)

2. Closed source operating systems

A closed source operating system is one whose source code is not open to the public; the owner of the closed source code may distribute it through a license, either free of charge or paid. Even when free, certain licenses can keep an operating system from being fully open source. For example, if the license places restrictions on modifying the code, then the operating system is not open source.

Advantages of Closed Source

1. The stability of the system is assured, because there is an official party responsible for it.
2. Direct support from the owner of the application/program.
3. Easy to get certification.
4. Easier to use/learn/understand, because the majority of users use it (in certain areas).

Disadvantages of Closed Source

1. No special/direct support from the maker (developer).
2. Open gaps can be exploited for the retrieval of information.
3. Wider dissemination of usage is rather difficult, because closed source systems (e.g. Windows) are generally used only in certain areas.
4. Difficult to get certification.
5. Licenses require users to provide funds.
6. Limited development.
7. Antivirus software is necessary.
8. Applications are generally paid.
9. Weaknesses in applications are detected only through feedback from users.

C. The Purpose and Basic Functions of The Operating System

An operating system is expected to have two purposes:
1. Convenience: an operating system makes a computer easier to use.
2. Efficiency: an operating system allows the resources of the computer system to be used in the most efficient manner.

An operating system has four basic functions:
1. Bridging the connection between the hardware and the application programs run by the user.
2. Organizing and supervising the use of the hardware by the user and the various application programs (resource allocator).
3. Acting as a control program that aims to avoid errors and unnecessary use of the computer (like a guardian protecting the computer from various possible damage).
4. Managing hardware resources, such as memory, printers, CD-ROM drives, etc.

A modern operating system has three main components:
1. The kernel
2. Files
3. The user interface
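The operating system's role as a resource allocator, sharing the CPU among competing programs, can be illustrated with a small sketch. The following Python snippet simulates round-robin scheduling, one classic policy; the process names, burst times, and time quantum are invented for illustration, and a real kernel scheduler is of course far more elaborate.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin scheduling: each process runs for at most
    `quantum` time units, then goes to the back of the ready queue.
    Returns the order in which processes finish."""
    ready = deque(bursts.items())          # (name, remaining time) pairs
    finished = []
    while ready:
        name, remaining = ready.popleft()
        if remaining <= quantum:
            finished.append(name)          # process completes its burst
        else:
            ready.append((name, remaining - quantum))  # preempted, requeue
    return finished

# Three hypothetical processes with CPU bursts of 3, 5, and 2 time units
order = round_robin({"editor": 3, "browser": 5, "player": 2}, quantum=2)
print(order)  # ['player', 'editor', 'browser']
```

The preemption step is what lets all programs make progress on one CPU, which is exactly the supervision role described in basic function 2 above.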

D. Kinds of Operating Systems

1. Windows
According to Fajrillah (2011:239-240), Microsoft Windows, better known simply as Windows, is a family of operating systems developed by Microsoft that uses a graphical user interface.

Windows operating systems evolved from MS-DOS, a text-mode, command-line-based operating system. The first version of Windows, Windows Graphic Environment 1.0, was first introduced on 10 November 1983 but only reached the market in November 1985; it was created to meet the need for computers with graphical displays. Windows 1.0 was 16-bit software (not an operating system) that ran on top of MS-DOS (and several variants of MS-DOS), so it could not run without the DOS operating system. Versions 2.x and 3.x were the same. Some versions of Windows (starting from version 4.0 and Windows NT 3.1) are standalone operating systems that no longer rely on the MS-DOS operating system.

2. Unix
According to Pangera (2010:64), UNIX is among the earliest operating systems to exist for computers and is the parent of the Linux operating system. UNIX is a computer operating system that started from the Multics (Multiplexed Information and Computing Service) project in 1965, conducted by American Telephone and Telegraph (AT&T), General Electric (GE), and the Massachusetts Institute of Technology (MIT), with funding from the Department of Defense (Department of Defense Advanced Research Projects Agency, DARPA or ARPA). UNIX was designed as a portable, multi-tasking, and multi-user operating system.

3. Linux
According to Fajrillah (2011:311), Linux is a clone of UNIX, written completely from scratch over more than a decade. Linux is similar to BSD in many respects, but BSD has a longer-established culture and is friendlier to the commercial world. This operating system was created by Linus Torvalds and grew so quickly that it has almost exceeded the number of Windows users in the world. Linux can be obtained in a variety of distributions (often called distros). A distro is a bundle of the Linux kernel together with a basic Linux system, an installation program, basic tools, and any other programs that are useful in line with the purpose of the distribution. There are countless Linux distributions, including:

1. RedHat, the most popular distribution, at least in Indonesia. RedHat was the first distribution to make installation and operation easy.
2. Debian, a distribution that gives priority to stability and reliability, even at the expense of convenience and recency of programs. Debian uses the .deb package format in its installation program.
3. Slackware, a distribution that once dominated the Linux world. Almost all of the documentation for Linux was compiled based on Slackware. Two important things about Slackware are, first, that all of its contents (kernel, libraries, and applications) are tested, so they may be a little old but are definitely stable; and second, that it advocates installing from source, so that every program we install is fully optimized for our system. This is the reason it does not want to use RPM binaries, and up to Slackware 4.0 it continued to use libc5 rather than glibc2 like the others.
4. SuSE, a distribution that is very popular thanks to YaST (Yet another Setup Tool) for configuring the system. SuSE was the first distribution whose installation can be done in the Indonesian language.
5. Mandrake, a variant of the RedHat distro that is optimized for the Pentium. If our computer uses a Pentium, Linux generally runs faster with Mandrake.
6. WinLinux, a distro designed to be installed on top of a DOS (Windows) partition, so that it can be launched from within Windows. WinLinux is made as if it were an application program under Windows.

And many other distros are available or will emerge.

CHAPTER III
CLOSING

A. Conclusions
The operating system is the software that serves to enable all of the devices installed on a computer so that each can communicate with the others. In general, the operating system is the software in the first layer placed into computer memory when the computer is turned on, while other software runs after the operating system. The operating system provides the common core services for that other software, such as disk access, memory management, task scheduling, and user interfaces, so each piece of software no longer needs to perform those common core tasks itself, because they can be served and performed by the operating system. The section of code that performs these core and general duties is called the "kernel" of an operating system.

B. Advice

This paper was made as the beginning of the process of learning about operating systems, so that at the next opportunity it can be better, whether in its discussion, its explanation, or the writing that has not yet been perfected.

RECENT DEVELOPMENT IN IT (INFORMATION TECHNOLOGY)

Group 6

Abdul Aziz 1167050 002


Agus Rasyidin 1167050 010
Bastomi Maulana G 1167050 042
Zamzam Habib K N 1167050 171

CHAPTER I
INTRODUCTION
A. Issue Background
In this modern era we cannot be separated from technology. This is because nowadays technology is widely used as a tool to assist in all of our activities, and it is especially influential in the field of information technology.

Information technology is technology used to process data, including processing, obtaining, compiling, storing, and manipulating data in various ways to produce quality information, i.e. information that is relevant, accurate, and timely, used for personal, business, and government purposes and as strategic information for decision-making. This technology uses a set of computers to process data, network systems to connect one computer with other computers as needed, and telecommunication technology so that data can be spread and accessed globally. Computers can be found anywhere, for example in schools, in homes, and certainly in offices and agencies. The development of information and communication technology is now more rapid than ever, which can be felt in the ever-growing emergence of increasingly sophisticated information and communication equipment. The same applies to means of communication such as mobile phones, which at this time almost everyone has.

We often only know how to use technology, without knowing what effect it will have on us, even as more and more technology products are created, especially in IT. This is the basis for our writing a paper on recent developments in IT.

B. Problem Identification
1. What are the latest developments in IT?
2. What are the positive and negative impacts of technology?

C. Problem Formulation
1. To know the latest developments in the IT area.
2. To explain the positive and negative impacts of technology.

CHAPTER II
DISCUSSION

A. Recent Development in IT
1. Internet of Things

IoT Definition

The IoT refers to the connection of devices (other than typical fare such as computers and smartphones) to the Internet. Cars, kitchen appliances, and even heart monitors can all be connected through the IoT. And as the Internet of Things grows in the next few years, more devices will join that list.

IoT Predictions, Trends, and Market

BI Intelligence, Business Insider's premium research service, expects there will be more than 24 billion IoT devices on Earth by 2020. That's approximately four devices for every human being on the planet.

And as we approach that point, $6 billion will flow into IoT solutions, including application development, device hardware, system integration, data storage, security, and connectivity. But that will be money well spent, as those investments will generate $13 trillion by 2025.

Who will reap these benefits? There are three major entities that will use IoT ecosystems: consumers, governments, and businesses. For more detail, see the Industries section below.

IoT Industries

Several environments within the three groups of consumers, governments, and businesses will benefit from the IoT. These include:

Manufacturing, Transportation, Defense, Agriculture, Infrastructure, Retail, Logistics, Banks, Oil, Gas, and Mining, Insurance, Connected Home, Food Services, Utilities, Hospitality, Healthcare, and Smart Buildings.

IoT Companies

There are literally hundreds of companies linked to the Internet of Things, and the list should only expand in the coming years. Here are some of the major players that have stood out in the IoT to this point:

Honeywell (HON), Hitachi, T-Mobile (TMUS), Comcast (CMCSA), GE (GE), AT&T (T), Cisco (CSCO), IBM (IBM), Apple (AAPL), Amazon (AMZN), Skyworks (SWKS), Sierra Wireless (SWIR), Google (GOOGL), Iridium Communications (IRDM), Ambarella (AMBA), ARM Holdings (ARMH), Fitbit (FIT), ORBCOMM (ORBC), Texas Instruments (TXN), PTC (PTC), Garmin (GRMN), InvenSense (INVN), Microsoft (MSFT), Blackrock (BLK), Control4 (CTRL), CalAmp (CAMP), LogMeIn (LOGM), Silicon Laboratories (SLAB), InterDigital (IDCC), Ruckus Wireless (RKUS), Red Hat (RHT), Linear Technology (LLTC), Nimble Storage (NMBL), Silver Spring Networks (SSNI), Zebra Technologies (ZBRA), and Arrow Electronics (ARW).
IoT Platforms

One IoT device connects to another to transmit information using Internet transfer protocols. IoT platforms serve as the bridge between the devices' sensors and the data networks.

The following are some of the top IoT platforms on the market today:

Amazon Web Services
Microsoft Azure
ThingWorx IoT Platform
IBM's Watson
Cisco IoT Cloud Connect
Salesforce IoT Cloud
Oracle Integrated Cloud
GE Predix
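To make the device-to-platform pattern concrete, here is a small Python sketch of the kind of JSON message an IoT sensor might publish and a platform-side bridge might filter. The device IDs, field names, and alert threshold are all invented for illustration; real platforms such as AWS IoT or Azure IoT Hub each define their own message formats and SDKs.

```python
import json
import time

def make_reading(device_id, temperature_c):
    """Package one sensor sample as the kind of JSON message an IoT
    device could publish to a platform. Field names are illustrative."""
    return json.dumps({
        "device_id": device_id,
        "temperature_c": temperature_c,
        "timestamp": int(time.time()),
    })

def bridge(messages):
    """A stand-in for the platform side: parse each message and keep
    only the devices whose reading is above an alert threshold."""
    alerts = []
    for raw in messages:
        reading = json.loads(raw)
        if reading["temperature_c"] > 30.0:
            alerts.append(reading["device_id"])
    return alerts

msgs = [make_reading("thermo-01", 21.5), make_reading("thermo-02", 34.0)]
print(bridge(msgs))  # ['thermo-02']
```

The point of the sketch is the separation of roles: the device only produces self-describing messages, while the platform decides what to store, forward, or alert on.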

IoT Security & Privacy

As devices become more connected thanks to the IoT, security and privacy have become the primary concern among consumers and businesses. In fact, the protection of sensitive data ranked as the top concern (at 36% of those polled) among enterprises, according to the 2016 Vormetric Data Threat Report.

Cyber attacks are also a growing threat as more connected devices pop up around the globe. Hackers could penetrate connected cars, critical infrastructure, and even people's homes. As a result, several tech companies are focusing on cyber security in order to secure the privacy and safety of all this data.

2. Augmented Reality and Virtual Reality

Augmented Reality

Augmented reality is using technology to superimpose information on the world we see. For example, images and sounds are superimposed over what the user sees and hears. Picture the "Minority Report" or "Iron Man" style of interactivity.

This is rather different from virtual reality. Virtual reality means computer-generated environments for you to interact with and be immersed in. Augmented reality (also known as AR) adds to the reality you would ordinarily see rather than replacing it.

Augmented reality is often presented as a kind of
futuristic technology, but it's been around in some form
for years, if your definition is loose. For example, the
heads-up displays in many fighter aircraft as far back as
the 1990s would show information about the attitude,
direction and speed of the plane, and only a few years
later they could show which objects in the field of view
were targets.

Virtual Reality

While VR devices generally take the same form, how they project imaging in front of our eyes varies greatly. The likes of the HTC Vive and Oculus Rift provide PC-based operation, though major players such as Google and Samsung offer more affordable, smartphone-based headsets. Sony has also managed to crack the console scene with PlayStation VR.

Once your headset and power source are secured, some kind of input is also required for you to connect, whether this is through head tracking, controllers, hand tracking, voice, on-device buttons, or trackpads.

Total immersion is what everyone making a VR headset, game, or app is aiming towards: making the virtual reality experience so real that we forget the computer, headgear, and accessories and act exactly as we would in the real world.

3. Artificial Intelligence (AI)

AI Definition

Artificial intelligence (AI) is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience. Since the development of the digital computer in the 1940s, it has been demonstrated that computers can be programmed to carry out very complex tasks (for example, discovering proofs for mathematical theorems or playing chess) with great proficiency. Still, despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match human flexibility over wider domains or in tasks requiring much everyday knowledge. On the other hand, some programs have attained the performance levels of human experts and professionals in performing certain specific tasks, so that artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, and voice or handwriting recognition.

AI Problem-Solving

Problem solving, particularly in artificial intelligence, may be characterized as a systematic search through a range of possible actions in order to reach some predefined goal or solution. Problem-solving methods divide into special purpose and general purpose. A special-purpose method is tailor-made for a particular problem and often exploits very specific features of the situation in which the problem is embedded. In contrast, a general-purpose method is applicable to a wide variety of problems. One general-purpose technique used in AI is means-end analysis: a step-by-step, or incremental, reduction of the difference between the current state and the final goal. The program selects actions from a list of means (in the case of a simple robot this might consist of PICKUP, PUTDOWN, MOVEFORWARD, MOVEBACK, MOVELEFT, and MOVERIGHT) until the goal is reached.

Many diverse problems have been solved by artificial intelligence programs. Some examples are finding the winning move (or sequence of moves) in a board game, devising mathematical proofs, and manipulating virtual objects in a computer-generated world.
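As a rough illustration of the means-end analysis loop described above, the following Python sketch moves a simple robot toward a goal on a grid, always choosing the move that most reduces the remaining difference between the current state and the goal. The grid world and the effect of each move are simplifications invented for this example, not part of the text's robot.

```python
# Moves available to the simple robot, as named in the text above,
# each mapped to a hypothetical (dx, dy) effect on a grid
MOVES = {
    "MOVERIGHT": (1, 0),
    "MOVELEFT": (-1, 0),
    "MOVEFORWARD": (0, 1),
    "MOVEBACK": (0, -1),
}

def difference(state, goal):
    """The 'difference' that means-end analysis tries to reduce:
    here, simple grid distance between the robot and the goal."""
    return abs(goal[0] - state[0]) + abs(goal[1] - state[1])

def means_end(start, goal):
    """Repeatedly apply whichever move most reduces the remaining
    difference, until the goal state is reached."""
    state, plan = start, []
    while state != goal:
        best = min(
            MOVES,
            key=lambda m: difference((state[0] + MOVES[m][0],
                                      state[1] + MOVES[m][1]), goal),
        )
        state = (state[0] + MOVES[best][0], state[1] + MOVES[best][1])
        plan.append(best)
    return plan

print(means_end((0, 0), (2, 1)))  # a shortest plan, e.g. two MOVERIGHTs and one MOVEFORWARD
```

This greedy loop works only because the toy world has no obstacles; a real means-end planner must also handle situations where no single move reduces the difference.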

B. Positive and Negative Effects of IT / Technology Development

1. Positive Effects
Positive effects of IT / technology:

1) Technology helps us strengthen relationships by keeping in contact with old friends, colleagues, and co-workers. E-mails speed up the delivery of messages and reduce paper costs.
2) Technology enables communication among people; it helps you communicate with people all over the world through email, instant messaging, Skype, social media, etc.
3) Technology is very useful for students. They can take courses and attain their degrees online just like any student on campus, as technology provides many chances for students all over the world to receive an education online.
4) Technology helps companies save time and earn a lot of money. They use technology to communicate with individuals; they can release information to many different people at once without calling a meeting or requiring printing of materials.
5) Technology is very necessary in our life, as it has improved transportation, mechanized agriculture, improved communication, and improved education and the learning process.

2. Negative Effects
Negative effects of IT / technology:
1) Technology can lead to social isolation, which is characterized by a lack of contact with other people in normal daily living, such as in the workplace, with friends, and in social activities.
2) Technology causes a lack of privacy: anyone can, with a few flicks on the keyboard, find anyone's address and contact information. People can use phishing, viruses, and hacking to find any information they wish to obtain; they can obtain your location on Google Maps and learn your life story on Facebook.
3) Technology can cause tendonitis in the thumb, a form of repetitive strain injury caused by the frequent use of thumbs to press buttons on mobile devices or by playing too many video games.
4) Technology affects our bodies: it causes neck and head pain when you look down at devices, it causes blurred vision and migraines, and eyestrain can also cause headaches. It adds an extra layer of stress which was not found before the overuse of technology.
5) Technology causes a higher consumption of energy when you don't turn your devices off: keeping computers on, mobile devices charging, televisions plugged in, and all the high-tech toys running causes an increase in greenhouse gas emissions.
6) Technology causes a lack of empathy. The constant stream of violent scenes in video games, TV, movies, and YouTube causes people to become desensitized to destruction of any kind.

CHAPTER III
FINAL

A. Conclusion
Technology is like a coin which has both positive and
negative sides. We are the deciders and we have to choose
how to use it. The usage of technology for over-exploitation
of resources should always be avoided. If we use it for
positive things, it will have a positive effect on our lives and
vice versa. Nobody would oppose the development of
technologies in any sector but the developments should be in
a positive way and they should not have any negative impact
on present or future generations.

B. Advice
For informatics engineering students, keeping up with the
latest developments in the IT field is essential, because this
field will be our working area later. And the general public
should not only be able to use technology products but also
know the positive and negative impacts of these technologies
on our lives.

COMPUTER ARCHITECTURE
Group 7
A. Definition
In computer engineering, computer architecture is a set of
rules and methods that describe the functionality, organization,
and implementation of computer systems.
Some definitions of architecture define it as describing the
capabilities and programming model of a computer but not a
particular implementation. In other definitions computer
architecture involves instruction set architecture design,
microarchitecture design, logic design, and implementation.

B. History
1. Harvard Architecture
The Harvard architecture is a computer architecture with
physically separate storage and signal pathways for instructions
and data. It was developed at Harvard University in collaboration with
IBM. The term originated from the Harvard Mark I relay-based
computer, which stored instructions on punched tape (24 bits
wide) and data in electro-mechanical counters. These early
machines had data storage entirely contained within the central
processing unit, and provided no access to the instruction storage
as data. Programs needed to be loaded by an operator; the
processor could not initialize itself.
Today, most processors implement such separate signal
pathways for performance reasons, but actually implement
a modified Harvard architecture, so they can support tasks like
loading a program from disk storage as data and then executing it.

In terms of memory use, the Harvard architecture uses separate
memories, for example ROM for instructions and RAM for data.

2. Von Neumann Architecture


The von Neumann architecture, which is also known as
the von Neumann model and Princeton architecture, is
a computer architecture based in 1945 on that described by the
mathematician and physicist John von Neumann and others in
the First Draft of a Report on the EDVAC.
This describes a design architecture for an
electronic digital computer with parts consisting of a processing
unit containing an arithmetic logic unit and processor registers;
a control unit containing an instruction register and program
counter; a memory to store both data and instructions;
external mass storage; and input and output mechanisms. The
meaning has evolved to be any stored-program computer in
which an instruction fetch and a data operation cannot occur at
the same time because they share a common bus. This is referred
to as the von Neumann bottleneck and often limits the
performance of the system.

The von Neumann architecture has four main components.
1. Input
Used to enter the data to be processed, e.g. keyboard, mouse, joystick.
2. Processor (CPU)
Processes and executes the data.
3. Storage
Keeps data temporarily or permanently, and holds the operating system.
4. Output
Presents the results from the processing unit, e.g. printer, monitor, speaker.

Inside the processor there are two main components.

1. Arithmetic Logic Unit (ALU)
Performs arithmetic operations (+, /, x, etc.) and computer logic operations (and, or, nor, nand, etc.).
2. Control Unit (CU)
Controls and directs the operation commands for the ALU.

C. Operate
In computer architecture, a bus (a contraction of the
Latin omnibus) is a communication system that transfers data
between components inside a computer, or between computers.
This expression covers all related hardware components (wire,
optical fiber, etc.) and software, including communication
protocols.

The data bus has an architecture with three main parts:
1. Control
This path is used to carry control and command signals.
2. Address
This path is used to carry the address of the program command.
3. Data
This path is used to carry the data to be executed.

1. The CPU

To greatly simplify, a computer consists of a central processing
unit (CPU) attached to memory; this is the general principle
behind all computer operations.

The CPU performs instructions on values held in registers. For
example, consider firstly setting the value of R1 to 100, loading
the value from memory location 0x100 into R2, adding the two
values together and placing the result in R3, and finally storing
the new value (110) in R4 (for further use).
The CPU executes instructions read from memory. There
are two categories of instructions:
1. Those that load values from memory into registers and
store values from registers to memory.

2. Those that operate on values stored in registers. For
example adding, subtracting, multiplying or dividing the
values in two registers, performing bitwise operations
(and, or, xor, etc) or performing other mathematical
operations (square root, sin, cos, tan, etc).
So in the example we are simply adding 100 to a value stored in
memory, and storing this new result back into memory.
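The register example described above can be sketched in a few lines of Python. This is an illustrative model, not a real instruction set; we assume memory location 0x100 holds 10, so that the addition produces the 110 mentioned in the text.

```python
# A minimal sketch (not a real ISA) modeling the register example above.
# Assumption: memory location 0x100 holds 10, so 100 + 10 gives 110.
registers = {"R1": 0, "R2": 0, "R3": 0, "R4": 0}
memory = {0x100: 10}

registers["R1"] = 100                       # set R1 to 100
registers["R2"] = memory[0x100]             # load the value at 0x100 into R2
registers["R3"] = registers["R1"] + registers["R2"]  # add, result into R3
registers["R4"] = registers["R3"]           # keep the result (110) in R4

print(registers["R4"])  # → 110
```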
2. Branching
Apart from loading or storing, the other important
operation of a CPU is branching. Internally, the CPU keeps a
record of the next instruction to be executed in the instruction
pointer. Usually, the instruction pointer is incremented to point to
the next instruction sequentially; the branch instruction will
usually check if a specific register is zero or if a flag is set and, if
so, will modify the pointer to a different address. Thus the next
instruction to execute will be from a different part of the program;
this is how loops and decision statements work.
For example, a statement like if (x==0) might be
implemented by finding the OR of two registers, one holding x and
the other zero; if the result is zero the comparison is true (i.e. all
bits of x were zero) and the body of the statement should be
taken; otherwise, branch past the body code.
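As a sketch of the idea, the compare-and-branch can be modelled directly; the function below is hypothetical and simply mirrors the OR-with-zero trick described above.

```python
# Hypothetical sketch of how `if (x == 0)` can be lowered:
# OR x with a zero register, then branch on a zero result.
def run(x):
    result = x | 0          # OR of the register holding x and a zero register
    if result == 0:         # zero result -> the comparison is true
        return "body taken"     # fall through into the statement body
    return "body skipped"       # branch past the body code

print(run(0))   # → body taken
print(run(7))   # → body skipped
```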

3. Cycles
We are all familiar with the speed of a computer, given
in megahertz or gigahertz (millions or billions of cycles per
second). This is called the clock speed, since it is the
speed at which an internal clock within the computer pulses.
The pulses are used within the processor to keep it
internally synchronised. On each tick or pulse another operation
can be started; think of the clock like the person beating the drum
to keep the rowers' oars in sync.

4. Fetch, Decode, Execute, Store

Executing a single instruction consists of a particular
cycle of events: fetching, decoding, executing and storing.
For example, to do the add instruction above the CPU must:
1. Fetch: get the instruction from memory into the
processor.
2. Decode: internally decode what it has to do (in this case
add).
3. Execute: take the values from the registers and actually
add them together.
4. Store: store the result back into another register. You
might also see the term retiring the instruction.
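The four steps above can be sketched as a toy interpreter loop. The three-field instruction encoding below is invented for illustration and is not a real ISA; memory location 0x100 is assumed to hold 10.

```python
# A toy fetch-decode-execute-store loop for the add example above.
program = [
    ("SET",  "R1", 100),           # R1 = 100
    ("LOAD", "R2", 0x100),         # R2 = memory[0x100]
    ("ADD",  "R3", ("R1", "R2")),  # R3 = R1 + R2
]
registers = {}
memory = {0x100: 10}

for pc in range(len(program)):     # fetch: next instruction from "memory"
    op, dest, arg = program[pc]    # decode: split into opcode and operands
    if op == "SET":                # execute the decoded opcode...
        value = arg
    elif op == "LOAD":
        value = memory[arg]
    elif op == "ADD":
        value = registers[arg[0]] + registers[arg[1]]
    registers[dest] = value        # store: write the result to a register

print(registers["R3"])  # → 110
```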

5. Looking inside a CPU

Internally the CPU has many different sub components
that perform each of the above steps, and generally they can all
happen independently of each other. This is analogous to a
physical production line, where there are many stations where
each step has a particular task to perform. Once done it can pass
the results to the next station and take a new input to work on.

The CPU is made up of many different sub-components, each
doing a dedicated task. A very simple block diagram would show
some of the main parts of a modern CPU.
You can see the instructions come in and are decoded by
the processor. The CPU has two main types of registers, those for
integer calculations and those for floating point calculations.
Floating point is a way of representing numbers with a decimal
place in binary form, and is handled differently within the CPU.

MMX (multimedia extension) and SSE (Streaming SIMD
Extensions) or Altivec registers are similar to floating point
registers.
A register file is the collective name for the registers
inside the CPU. Below that we have the parts of the CPU which
really do all the work.
We said that processors are either loading or storing a
value into a register or from a register into memory, or doing
some operation on values in registers.

The Arithmetic Logic Unit (ALU) is the heart of the CPU
operation. It takes values in registers and performs any of the
multitude of operations the CPU is capable of. All modern
processors have a number of ALUs so each can be working
independently. In fact, processors such as the Pentium have both
fast and slow ALUs; the fast ones are smaller (so you can fit
more on the CPU) but can do only the most common operations,
slow ALUs can do all operations but are bigger.
The Address Generation Unit (AGU) handles talking to
cache and main memory to get values into the registers for the
ALU to operate on and get values out of registers back into main
memory.
Floating point registers have the same concepts, but use
slightly different terminology for their components.

6. Pipelining
As we can see above, the ALU adding registers
together is completely separate from the AGU writing values back to
memory, so there is no reason why the CPU cannot be doing
both at once. We also have multiple ALUs in the system, each of
which can be working on separate instructions. Finally, the CPU
could be doing some floating point operations with its floating
point logic whilst integer instructions are in flight too. This
process is called pipelining[5], and a processor that can do this is
referred to as a superscalar architecture. All modern processors
are superscalar.
Another analogy might be to think of the pipeline like a
hose that is being filled with marbles, except our marbles are
instructions for the CPU. Ideally you will be putting your marbles
in one end, one after the other (one per clock pulse), filling up the
pipe. Once full, for each marble (instruction) you push in all the
others will move to the next position and one will fall out the end
(the result).
Branch instructions play havoc with this model, however,
since they may or may not cause execution to start from a
different place. If you are pipelining, you basically have to
guess which way the branch will go, so you know which
instructions to bring into the pipeline. If the CPU has predicted
correctly, everything goes fine![6] Conversely, if the processor
has predicted incorrectly it has wasted a lot of time and has to
clear the pipeline and start again.
This process is usually referred to as a pipeline flush and
is analogous to having to stop and empty out all your marbles
from your hose!
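The marble analogy can be made quantitative with a back-of-the-envelope model. Assuming an idealised four-stage pipeline (fetch, decode, execute, store), one stage per clock tick and no stalls:

```python
# Idealised cycle counts for a 4-stage pipeline with no stalls.
STAGES = 4

def cycles_unpipelined(n):
    return STAGES * n          # each instruction runs all stages alone

def cycles_pipelined(n):
    return STAGES + (n - 1)    # fill the pipe once, then one result per tick

print(cycles_unpipelined(100))  # → 400
print(cycles_pipelined(100))    # → 103
```

Under these assumptions, 100 instructions finish in 103 cycles instead of 400, which is why keeping the pipeline full matters so much.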

7. Branch Prediction
Branch prediction involves ideas such as the pipeline flush,
predict-taken and predict-not-taken strategies, and branch
delay slots.

8. Reordering
If the CPU is the hose, it is in fact free to reorder the
marbles within the hose, as long as they pop out the end in the
same order you put them in. We call this program order, since this
is the order that instructions are given in the computer program.

Reorder buffer example:

1: r3 = r1 * r2
2: r4 = r2 + r3
3: r7 = r5 * r6
4: r8 = r1 + r7

Consider an instruction stream such as that shown in the
reorder buffer example above. Instruction 2 needs to wait
for instruction 1 to complete fully before it can start. This means
that the pipeline has to stall as it waits for the value to be
calculated. Similarly, instructions 3 and 4 have a dependency on
r7. However, instructions 2 and 3 have no dependency on each
other at all; this means they operate on completely separate
registers. If we swap instructions 2 and 3 we can get a much
better ordering for the pipeline since the processor can be doing
useful work rather than waiting for the pipeline to complete to get
the result of a previous instruction.
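The benefit of swapping instructions 2 and 3 can be illustrated with a small, idealised in-order model: one instruction may issue per cycle, but only once its source registers are ready. The latencies (3 cycles for a multiply, 1 for an add) are invented for illustration.

```python
# Idealised in-order, single-issue model with invented latencies.
LAT = {"*": 3, "+": 1}

def finish_time(instrs):
    ready = {}                  # register -> cycle its value is available
    t = 1
    for dest, op, a, b in instrs:
        t = max(t, ready.get(a, 0), ready.get(b, 0))  # stall for inputs
        ready[dest] = t + LAT[op]   # result available after the latency
        t += 1                      # next issue slot
    return max(ready.values())

i1 = ("r3", "*", "r1", "r2")
i2 = ("r4", "+", "r2", "r3")
i3 = ("r7", "*", "r5", "r6")
i4 = ("r8", "+", "r1", "r7")

print(finish_time([i1, i2, i3, i4]))  # program order → 9
print(finish_time([i1, i3, i2, i4]))  # instructions 2 and 3 swapped → 6
```

In this toy model the reordered stream finishes three cycles earlier, because instruction 3 issues while instruction 2 would otherwise be stalling.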
However, when writing very low level code some
instructions may require certain guarantees about how operations are
ordered. We call this requirement memory semantics. If you
require acquire semantics this means that for this instruction you
must ensure that the results of all previous instructions have been
completed. If you require release semantics you are saying that
all instructions after this one must see the current result. Another
even stricter semantic is a memory barrier or memory fence
which requires that operations have been committed to memory
before continuing.
On some architectures these semantics are guaranteed for
you by the processor, whilst on others you must specify them
explicitly. Most programmers do not need to worry directly about
them, although you may see the terms.

9. CISC v RISC
A common way to divide computer architectures is into
Complex Instruction Set Computer (CISC) and Reduced
Instruction Set Computer (RISC).
Note in the first example, we have explicitly loaded
values into registers, performed an addition and stored the result
value held in another register back to memory. This is an example
of a RISC approach to computing -- only performing operations
on values in registers and explicitly loading and storing values to
and from memory.
A CISC approach may be only a single instruction taking
values from memory, performing the addition internally and
writing the result back. This means the instruction may take many
cycles, but ultimately both approaches achieve the same goal.
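The contrast can be sketched as follows; both functions are toy models over a Python dictionary standing in for memory, not real instruction sets.

```python
# Toy contrast: the RISC style needs explicit load/operate/store steps,
# while a CISC-style instruction does a memory-to-memory add in one go.
def risc_add(memory, addr, value):
    r2 = memory[addr]        # load the value from memory into a register
    r3 = r2 + value          # operate only on register values
    memory[addr] = r3        # store the register back to memory

def cisc_add(memory, addr, value):
    memory[addr] += value    # one instruction touching memory directly

m1, m2 = {0x100: 10}, {0x100: 10}
risc_add(m1, 0x100, 100)
cisc_add(m2, 0x100, 100)
print(m1 == m2)  # → True: both approaches achieve the same goal
```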
All modern architectures would be considered RISC
architectures.
There are a number of reasons for this.
Whilst RISC makes assembly programming more
complex, virtually all programmers use high-level
languages and leave the hard work of producing
assembly code to the compiler, so the other advantages
outweigh this disadvantage.
Because the instructions in a RISC processor are much
simpler, there is more space inside the chip for
registers. As we know from the memory hierarchy,
registers are the fastest type of memory, and ultimately all
instructions must be performed on values held in registers,
so all other things being equal more registers lead to
higher performance.

Since all instructions execute in the same time, pipelining
is possible. We know pipelining requires streams of
instructions being constantly fed into the processor, so if
some instructions take a very long time and others do not,
the pipeline becomes far too complex to be effective.

10. EPIC
The Itanium processor, which is used in many examples
throughout this book, is an example of a modified architecture called
Explicitly Parallel Instruction Computing.
We have discussed how superscalar processors have
pipelines with many instructions in flight at the same time in
different parts of the processor. Obviously, for this to work as well
as possible, instructions should be given to the processor in an order
that can make best use of the available elements of the CPU.
Traditionally, organising the incoming instruction stream
has been the job of the hardware. Instructions are issued by the
program in a sequential manner; the processor must look ahead
and try to make decisions about how to organise the incoming
instructions.

The theory behind EPIC is that there is more information
available at higher levels which can make these decisions better
than the processor. Analysing a stream of assembly language
instructions, as current processors do, loses a lot of information
that the programmer may have provided in the original source
code. Think of it as the difference between studying a
Shakespeare play and reading the Cliff's Notes version of the
same. Both give you the same result, but the original has all sorts
of extra information that sets the scene and gives you insight into
the characters.
Thus the logic of ordering instructions can be moved from
the processor to the compiler. This means that compiler writers
need to be smarter to try and find the best ordering of code for the
processor. The processor is also significantly simplified, since a
lot of its work has been moved to the compiler.

In fact, any modern processor has many more than four stages it
can pipeline; above we have only shown a very simplified view.
The more stages that can be executed at the same time, the deeper
the pipeline.
Processors such as the Pentium use a trace cache to keep a
track of which way branches are going. Much of the time it can
predict which way a branch will go by remembering its previous
result. For example, in a loop that happens 100 times, if you
remember the last result of the branch you will be right 99 times,
since only the last time will you actually continue with the
program.
Even the most common architecture, the Intel Pentium,
whilst having an instruction set that is categorised as CISC,
internally breaks down instructions to RISC style sub-instructions
inside the chip before executing.
Another term often used around EPIC is Very Long
Instruction Word (VLIW), where each instruction to the
processor is extended to tell the processor where it should
execute the instruction in its internal units. The problem with this
approach is that code is then completely dependent on the model
of processor it has been compiled for. Companies are always
making revisions to hardware, and making customers recompile
their applications every single time, and maintain a range of
different binaries, was impractical.
EPIC solves this in the usual computer science manner by
adding a layer of abstraction. Rather than explicitly specifying the
exact part of the processor the instructions should execute on,
EPIC creates a simplified view with a few core units like
memory, integer and floating point.

SOFTWARE ENGINEERING

IF CLASS-4

Group 8

Arranged By:
Darno (1167050047)
Rina Anjari (1167050140)
Raden Irham (1167050125)

Chapter I Introduction

A. Background

Software engineering is the establishment and use of
sound engineering principles in order to obtain, economically,
software that is reliable and works efficiently on real machines.
It is the application of principles used in the field of engineering,
which usually deals with physical systems, to the design,
development, testing, deployment and management of software
systems. System engineering is concerned with all aspects of
computer-based systems development, including hardware,
software and process engineering; software engineering is part
of this more general process.
More and more individuals and communities rely on
advanced software systems. We must be able to produce
reliable and trustworthy systems economically and quickly. It is
usually cheaper, in the long run, to use software engineering
methods and techniques for software systems rather than just
writing the programs as if they were a personal programming
project.
B. Problem Identification
This paper presents an approach to automatically identify
recurrent software failures using symptoms, in environments
where many users run the same software. The approach is based
on observations that the majority of field software failures in such
environments are recurrences and that failures due to a single
fault often share common symptoms. The paper proposes the
comparison of failure symptoms, such as stack traces and
symptom strings, as a strategy for identifying recurrences. This
diagnosis strategy is applied using the actual field software
failure data. The results obtained are compared with the diagnosis
and repair logs made by analysts. Results of such comparisons,
using the diagnosis and repair logs in two Tandem system software
products, show that between 75% and 95% of recurrences can be
identified successfully by matching stack traces and symptom
strings. Less than 10% of faults are misdiagnosed. These results
indicate that automatic identification of recurrences based on
their symptoms is possible.

C. Problem Formulation
1. Understanding software
2. Understanding software engineering
3. Function and development

D. Purpose of Paper
1. Understanding the notion of software engineering
2. Understanding the elements in software engineering
3. Understanding the phases or steps that exist in software
engineering

Chapter II Discussion

A. Definitions of Software Engineering

Typical formal definitions of software engineering include:

"Research, design, develop, and test operating systems-level
software, compilers, and network distribution software for
medical, industrial, military, communications, aerospace,
business, scientific, and general computing applications"
(Bureau of Labor Statistics)
"the systematic application of scientific and technological
knowledge, methods, and experience to the design,
implementation, testing, and documentation of software"
(IEEE Systems and software engineering - Vocabulary)
"The application of a systematic, disciplined, quantifiable
approach to the development, operation, and maintenance
of software" (IEEE Standard Glossary of Software
Engineering Terminology)
"an engineering discipline that is concerned with all
aspects of software production" (Ian Sommerville)
"the establishment and use of sound engineering
principles in order to economically obtain software that is
reliable and works efficiently on real machines" (Fritz
Bauer)

B.History Software Engineering

When the first digital computers appeared in the early


1940s, the instructions to make them operate were wired into
the machine. Practitioners quickly realized that this design
was not flexible and came up with the "stored program
architecture" or von Neumann architecture. Thus the division
between "hardware" and "software" began
with abstraction being used to deal with the complexity of
computing.
Programming languages started to appear in the early 1950s
and this was also another major step in abstraction.
as Fortran, ALGOL, and COBOL were released in the late
1950s to deal with scientific, algorithmic, and business
problems respectively. Edsger W. Dijkstra wrote his seminal
paper, "Go To Statement Considered Harmful", in 1968
and David Parnas introduced the key concept
of modularity and information hiding in 1972 to help
programmers deal with the ever increasing complexity
of software systems.
The origins of the term "software engineering" have been
attributed to different sources, but it was used in 1968 as a
title for the World's first conference on software engineering,
sponsored and facilitated by NATO. The conference was
attended by international experts on software who agreed on
defining best practices for software grounded in the
application of engineering. The result of the conference is a
report that defines how software should be developed. The
original report is publicly available.
The discipline of software engineering was created to address
poor quality of software, get projects exceeding time and
budget under control, and ensure that software is built
systematically, rigorously, measurably, on time, on budget,
and within specification. Engineering already addresses all
these issues, hence the same principles used in engineering
can be applied to software. The widespread lack of best
practices for software at the time was perceived as a
"software crisis".
Barry W. Boehm documented several key advances to the field
in his 1981 book, 'Software Engineering Economics'. These
include his Constructive Cost Model (COCOMO), which
relates software development effort for a program, in
man-months, to its size in source lines of code. The book
analyzes sixty-three software projects and concludes that the
cost of fixing errors escalates as the project moves toward
field use. The book also asserts that the key driver of software
cost is the capability of the software development team.
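The COCOMO relationship mentioned above can be sketched numerically. This is a hedged sketch of Boehm's Basic COCOMO formula, effort = a * (KLOC)^b in person-months; the (a, b) constants below are the commonly cited textbook values for the three project classes, not values taken from this text.

```python
# Basic COCOMO sketch: effort = a * (KLOC ** b), in person-months.
# The coefficients are the commonly cited textbook values (an assumption).
COEFFS = {
    "organic":       (2.4, 1.05),   # small teams, familiar problems
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),   # tight hardware/operational constraints
}

def basic_cocomo_effort(kloc, mode="organic"):
    a, b = COEFFS[mode]
    return a * kloc ** b            # estimated effort in person-months

print(round(basic_cocomo_effort(10), 1))  # roughly 26.9 for a 10 KLOC organic project
```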
In 1984, the Software Engineering Institute (SEI) was
established as a federally funded research and development
center headquartered on the campus of Carnegie Mellon
University in Pittsburgh, Pennsylvania, United States. Watts
Humphrey founded the SEI Software Process Program, aimed
at understanding and managing the software engineering
process. His 1989 book, Managing the Software
Process, asserts that the software development process can
and should be controlled, measured, and improved. The
Process Maturity Levels introduced would become the
Capability Maturity Model Integration for Development
(CMMI-DEV), which has defined how the US Government
evaluates the abilities of a software development team.

Modern, generally accepted best practices for software
engineering have been collected by the ISO/IEC JTC 1/SC
7 subcommittee and published as the Software Engineering
Body of Knowledge (SWEBOK).

C. Software Functions

Given its important role in the running of computer systems,
software naturally has special functions, including the following:
A. Software provides the basic functions a computer needs, and
can be divided into operating systems and support systems.
B. Software manages the various hardware components so that
they work together.
C. Software acts as a liaison between other software and the
hardware.
D. Software translates the instructions of other software into
machine language so that they can be received by the hardware.
E. Software identifies programs.

Chapter III Final

A. Conclusion

Given its important role in the running of computer systems,
software has special functions, including the following:
1. Software provides the basic functions a computer needs, and
can be divided into operating systems and support systems.
2. Software manages the various hardware components so that
they work together.
3. Software acts as a liaison between other software and the
hardware.
4. Software translates the instructions of other software into
machine language so that they can be received by the hardware.
5. Software identifies programs.

"Research, design, develop, and test operating systems-level
software, compilers, and network distribution software for
medical, industrial, military, communications, aerospace,
business, scientific, and general computing applications"
(Bureau of Labor Statistics)
"the systematic application of scientific and technological
knowledge, methods, and experience to the design,
implementation, testing, and documentation of software"
(IEEE Systems and software engineering - Vocabulary)
"The application of a systematic, disciplined, quantifiable
approach to the development, operation, and maintenance
of software" (IEEE Standard Glossary of Software
Engineering Terminology)
"an engineering discipline that is concerned with all
aspects of software production" (Ian Sommerville)
"the establishment and use of sound engineering
principles in order to economically obtain software that is
reliable and works efficiently on real machines" (Fritz
Bauer)

When the first digital computers appeared in the early
1940s, the instructions to make them operate were wired into
the machine. Programming languages started to appear in the
early 1950s, and this was also another major step in
abstraction. Major languages such as Fortran, ALGOL,
and COBOL were released in the late 1950s to deal with
scientific, algorithmic, and business problems respectively,
and Edsger W. Dijkstra wrote his seminal paper, "Go To
Statement Considered Harmful", in 1968.
Group 9: Lita Arinda (1167050089)

Rendi Azhari (1167050130)

Silmi Azdkiatul A (1167050149)

WEBSITE

A. Definition Of Website

A website is a collection of related web pages, including
multimedia content, typically identified with a common
domain name, and published on at least one web server. A
website may be accessible via a public Internet Protocol (IP)
network, such as the Internet, or a private local area network
(LAN), by referencing a uniform resource locator (URL) that
identifies the site.

Websites have many functions and can be used in various
fashions; a website can be a personal website, a commercial
website for a company, a government website or a non-profit
organization website. Websites are typically dedicated to a
particular topic or purpose, ranging from entertainment and
social networking to providing news and education. All
publicly accessible websites collectively constitute the World
Wide Web, while private websites, such as a company's
website for its employees, are typically a part of an intranet.

Web pages, which are the building blocks of websites, are
documents, typically composed in plain text interspersed with
formatting instructions of Hypertext Markup Language
(HTML, XHTML). They may incorporate elements from
other websites with suitable markup anchors. Web pages are
accessed and transported with the Hypertext Transfer
Protocol (HTTP), which may optionally employ encryption
(HTTP Secure, HTTPS) to provide security and privacy for
the user. The user's application, often a web browser, renders
the page content according to its HTML markup instructions
onto a display terminal.
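As a minimal illustration of this idea, the snippet below treats a web page as plain text with HTML markup and uses Python's standard-library parser to pull out the page title, much as a browser must interpret markup before rendering anything.

```python
# Sketch: a web page is plain text with HTML markup; the user's
# application parses the markup. Here we just extract the <title> text.
from html.parser import HTMLParser

PAGE = "<html><head><title>Hello</title></head><body><p>Hi</p></body></html>"

class TitleFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_title, self.title = False, ""
    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True     # entering the <title> element
    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False    # leaving the <title> element
    def handle_data(self, data):
        if self.in_title:
            self.title += data       # collect the title text

finder = TitleFinder()
finder.feed(PAGE)
print(finder.title)  # → Hello
```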

Hyperlinking between web pages conveys to the reader


the site structure and guides the navigation of the site, which
often starts with a home page containing a directory of the
site web content. Some websites require user registration or
subscription to access content. Examples of subscription
websites include many business sites, news websites,
academic journal websites, gaming websites, file-sharing
websites, message boards, web-based email, social
networking websites, websites providing real-time stock
market data, as well as sites providing various other services.
As of 2016 end users can access websites on a range of
devices, including desktop and laptop computers, tablet
computers, smartphones and smart TVs.

B. History Of Website

Sir Tim Berners-Lee is a British computer scientist. He
was born in London, and his parents were early computer
scientists, working on one of the earliest computers.
Growing up, Sir Tim was interested in trains and had a
model railway in his bedroom.

After graduating from Oxford University, Berners-Lee
became a software engineer at CERN, the large particle
physics laboratory near Geneva, Switzerland. Scientists
come from all over the world to use its accelerators, but Sir
Tim noticed that they were having difficulty sharing
information.

Tim thought he saw a way to solve this problem, one
that he could see could also have much broader
applications. Already, millions of computers were being
connected together through the fast-developing internet, and
Berners-Lee realised they could share information by
exploiting an emerging technology called hypertext.

In March 1989, Tim laid out his vision for what
would become the web in a document called Information
Management: A Proposal. Believe it or not, Tim's initial
proposal was not immediately accepted. In fact, his boss at
the time, Mike Sendall, noted the words "Vague but
exciting" on the cover. The web was never an official CERN
project, but Mike managed to give Tim time to work on it in
September 1990. He began work using a NeXT computer,
one of Steve Jobs' early products.

(Tim's original proposal. Image: CERN)

By October of 1990, Tim had written the three


fundamental technologies that remain the foundation of
todays web (and which you may have seen appear on parts
of your web browser :

1. HTML: HyperText Markup Language. The


markup (formatting) language for the web.

33
2. URI: Uniform Resource Identifier. A kind of address that is unique and used to identify each resource on the web. It is also commonly called a URL.
3. HTTP: Hypertext Transfer Protocol. Allows for
the retrieval of linked resources from across the
web.
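The three technologies can be seen working together in a short sketch (a hypothetical Python example, not any of Tim's original software): an HTTP server returns an HTML document, and a URI addresses it.

```python
# Minimal sketch: an HTTP server serving an HTML page addressed by a URI.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"<html><body><h1>Hello, web</h1><a href='/about'>About</a></body></html>"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):                       # HTTP: respond to a GET request
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE)              # HTML: the marked-up document
    def log_message(self, fmt, *args):      # keep the example quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)   # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

uri = f"http://127.0.0.1:{server.server_port}/index.html"  # URI: the address
with urllib.request.urlopen(uri) as response:
    body = response.read()
server.shutdown()
```

Fetching the URI with any HTTP client returns the HTML document, which is all a browser needs in order to render a page.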

Tim also wrote the first web page editor/browser (WorldWideWeb.app) and the first web server (httpd). By the end of 1990, the first web page was served on the open internet, and in 1991, people outside of CERN were invited to join this new web community.

As the web began to grow, Tim realised that its true potential would only be unleashed if anyone, anywhere could use it without paying a fee or having to ask for permission.

He explains: "Had the technology been proprietary, and in my total control, it would probably not have taken off. You can't propose that something be a universal space and at the same time keep control of it."

So, Tim and others advocated to ensure that CERN would agree to make the underlying code available on a royalty-free basis, forever. This decision was announced in April 1993, and it sparked a global wave of creativity, collaboration and innovation never seen before. In 2003, the companies developing new web standards committed to a Royalty Free Policy for their work. In 2014, the year we celebrated the web's 25th birthday, almost two in five people around the world were using it.

Tim moved from CERN to the Massachusetts Institute of Technology in 1994 to found the World Wide Web Consortium (W3C), an international community devoted to developing open web standards. He remains the Director of W3C to this day.

The early web community produced some revolutionary ideas that are now spreading far beyond the technology sector:

1. Decentralisation: No permission is needed from a central authority to post anything on the web; there is no central controlling node, and so no single point of failure and no "kill switch"! This also implies freedom from indiscriminate censorship and surveillance.
2. Non-discrimination: If I pay to connect to the internet with a certain quality of service, and you pay to connect with that or a greater quality of service, then we can both communicate at the same level. This principle of equity is also known as Net Neutrality.
3. Bottom-up design: Instead of code being written
and controlled by a small group of experts, it was
developed in full view of everyone, encouraging
maximum participation and experimentation.
4. Universality: For anyone to be able to publish anything on the web, all the computers involved have to speak the same languages to each other, no matter what different hardware people are using, where they live, or what cultural and political beliefs they have. In this way, the web breaks down silos while still allowing diversity to flourish.
5. Consensus: For universal standards to work,
everyone had to agree to use them. Tim and others
achieved this consensus by giving everyone a say
in creating the standards, through a transparent,
participatory process at W3C.

New permutations of these ideas are giving rise to exciting new approaches in fields as diverse as information (Open Data), politics (Open Government), scientific research (Open Access), education, and culture (Free Culture). But to date we have only scratched the surface of how these principles could change society and politics for the better.

In 2009, Sir Tim established the World Wide Web Foundation. The Web Foundation is advancing the Open Web as a means to build a just and thriving society by connecting everyone, raising voices and enhancing participation.

C. Categories of Websites
1. Static Website

A static website is one that has web pages stored on the server in the format that is sent to a client web browser. It is primarily coded in Hypertext Markup Language (HTML); Cascading Style Sheets (CSS) are used to control appearance beyond basic HTML. Images are commonly used to achieve the desired appearance and as part of the main content. Audio or video might also be considered "static" content if it plays automatically or is generally non-interactive. This type of website usually displays the same information to all visitors. Similar to handing out a printed brochure to customers or clients, a static website will generally provide consistent, standard information for an extended period of time. Although the website owner may make updates periodically, editing the text, photos and other content is a manual process and may require basic website design skills and software. Simple marketing websites, such as a classic five-page site or a brochure website, are often static, because they present pre-defined information to the user. This may include information about a company and its products and services through text, photos, animations, audio/video, and navigation menus.

Static websites can be edited using four broad categories of software:

a) Text editors, such as Notepad or TextEdit, where content and HTML markup are manipulated directly within the editor program
b) WYSIWYG offline editors, such as Microsoft
FrontPage and Adobe Dreamweaver (previously
Macromedia Dreamweaver), with which the site
is edited using a GUI and the final HTML
markup is generated automatically by the editor
software

c) WYSIWYG online editors, which create media-rich online presentations such as web pages, widgets, intros, blogs, and other documents.
d) Template-based editors such as iWeb allow users
to create and upload web pages to a web server
without detailed HTML knowledge, as they pick
a suitable template from a palette and add
pictures and text to it in a desktop publishing
fashion without direct manipulation of HTML
code.

Static websites may still use server side includes (SSI) as an editing convenience, such as sharing a common menu bar across many pages. As the site's behaviour to the reader is still static, this is not considered a dynamic site.

2. Dynamic Website
A dynamic website is one that changes or customizes itself frequently and automatically. Server-side dynamic pages are generated "on the fly" by computer code that produces the HTML (CSS is responsible for appearance and is thus served as static files). A wide range of software systems, such as CGI, Java Servlets and JavaServer Pages (JSP), Active Server Pages and ColdFusion (CFML), are available to generate dynamic web systems and dynamic sites. Various web application frameworks and web template systems are available for general-use programming languages like Perl, PHP, Python and Ruby to make it faster and easier to create complex dynamic websites.
A site can display the current state of a dialogue between users, monitor a changing situation, or provide information personalized in some way to the requirements of the individual user. For example, when the front page of a news site is requested, the code running on the web server might combine stored HTML fragments with news stories retrieved from a database or another website via RSS to produce a page that includes the latest information. Dynamic sites can be interactive by using HTML forms, storing and reading back browser cookies, or by creating a series of pages that reflect the previous history of clicks. Another example of dynamic content is when a retail website with a database of media products allows a user to input a search request, e.g. for the keyword Beatles. In response, the content of the web page will change from how it looked before and display a list of Beatles products like CDs, DVDs and books.

Dynamic HTML uses JavaScript code to instruct the web browser how to interactively modify the page contents. One way to simulate a certain type of dynamic website, while avoiding the performance loss of initiating the dynamic engine on a per-user or per-connection basis, is to periodically and automatically regenerate a large series of static pages.
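As an illustration (a hypothetical sketch, not any particular framework), server-side code might build the news front page described above by combining an HTML template with stories pulled from storage:

```python
# Hypothetical sketch of server-side page generation: HTML is produced
# "on the fly" from a template plus data, so every request reflects the
# latest stored stories.
def render_front_page(stories):
    items = "\n".join(f"  <li>{title}</li>" for title in stories)
    return f"<html><body><h1>Latest news</h1><ul>\n{items}\n</ul></body></html>"

# In a real site the list would come from a database or an RSS feed.
page = render_front_page(["Web turns 25", "W3C publishes a new standard"])
```

Because the HTML is regenerated on each request, two visitors can receive different pages from the same URL, which is exactly what distinguishes a dynamic site from a static one.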
D. Development of Websites
1. Web 0.0: Developing the internet
2. Web 1.0: The shopping carts & static web
Experts call the Internet before 1999 the "Read-Only" web. The average internet user's role was limited to reading the information which was presented to him. The best examples of this Web 1.0 era are the millions of static websites which mushroomed during the dot-com boom (which eventually led to the dot-com bubble). There was no active communication or information flow from the consumer (of the information) to the producer (of the information).
The first shopping cart applications, which most e-commerce website owners use in some shape or form, basically fall under the category of Web 1.0. The overall goal was to present products to potential customers, much as a catalog or a brochure does, only through a website; retailers could also provide a method for anyone, anywhere in the world, to purchase their products.
3. Web 2.0: The writing and participating web
The lack of active interaction between common users and the web led to the birth of Web 2.0. The year 1999 marked the beginning of a Read-Write-Publish era, with notable contributions from LiveJournal (launched in April 1999) and Blogger (launched in August 1999). Now even a non-technical user can actively interact with and contribute to the web using different blog platforms. If we stick to Berners-Lee's method of describing it, Web 2.0, or the "read-write" web, gives users the ability to contribute content and interact with other web users. This interaction and contribution has dramatically changed the landscape of the web, and it has even more potential that we have yet to see. Web 2.0 appears to be a welcome response to web users' demand to be more involved in what information is available to them.
This era empowered the common user with a few new concepts like blogs, social media and video streaming. Publishing your content is only a few clicks away! A few remarkable developments of Web 2.0 are Twitter, YouTube, eZineArticles, Flickr and Facebook.
4. Web 3.0: The semantic executing web
By extending Tim Berners-Lee's explanations, Web 3.0 would be a "read-write-execute" web. However, this is difficult to envision in its abstract form, so let's take a look at the two things that will form the basis of Web 3.0: semantic markup and web services.

Semantic markup addresses the communication gap between human web users and computerized applications. One of the largest organizational challenges of presenting information on the web was that web applications weren't able to provide context to data and, therefore, didn't really understand what was relevant and what was not. While this is still evolving, this notion of formatting data so that it can be understood by software agents leads to the "execute" portion of our definition, and provides a way to discuss web services.

A web service is a software system designed to support computer-to-computer interaction over the Internet. Currently, thousands of web services are available. However, in the context of Web 3.0, they take center stage. By combining semantic markup and web services, Web 3.0 promises the potential for applications that can speak to each other directly, and for broader searches for information through simpler interfaces.

5. Web 4.0: The mobile web
The next step is not really a new version, but an alternate version of what we already have: the web has had to adapt to its mobile surroundings. Web 4.0 connects all devices in the real and virtual world in real time.
An overview of Web 1.0, Web 2.0 and Web 3.0 in one graph and one table:

[Graph: Web 1.0 vs Web 2.0 vs Web 3.0]

[Table: Web 1.0 vs Web 2.0 vs Web 3.0]
E. Types of Websites
1. Personal Websites

Personal websites are mostly built by individuals. The theme and content of these websites are based mostly on personal information. These websites can include:

a. Sites that give information about the builder, like what he/she does or something about their family, etc.
b. Hobby sites.
c. Resume sites.
d. Websites built by people doing some work that they want to show others, like a website built by an artist to show others his/her work, or a student showing others the projects he/she is working on, or similar.

2. Business Websites

Business websites are professional websites


that are build purely for business reasons. They
generally belong to organizations.

Their tasks include:
a. Providing company information
b. Providing information about the company's products
c. Marketing the company's products
d. Providing help and support to customers
e. Selling the company's products online

Business websites can be divided into:

a) Small Business

Small business websites range from individuals running a business to groups of people running small businesses. They have a bigger budget than personal websites but a smaller one than large business websites.

b) Large Business

Large business websites belong to large organizations and have bigger budgets than small business websites. In large organizations, generally a section of staff is dedicated to the maintenance and management of these sites. Examples of these sites include:

www.intel.com (Intel)

www.hp.com (Hewlett-Packard)

3. Informative Websites

Informative websites are built for the purpose


of providing information. They can include anything
like

a. News websites.
b. Science websites.
c. Encyclopedias.
d. Business news websites.
e. Websites giving analysis on some subject.
f. Medical information websites.
g. Educational websites.
h. Websites that give information on a subject, like this website.
i. University, college and school websites, and others.

Examples of informative websites include:

www.wikipedia.com (online encyclopedia)

www.howstuffworks.com (online science/general information)

4. Search Engines / Directories

Search engines and directories look like websites, but they are much more than that. Search engines search through the entire web to find websites and list them on their servers. The searching program that does this work automatically is called a crawler. The web interface we see in our browsers fetches results from the compiled list and shows them to us.

Examples include:

www.google.com

Directories, on the other hand, are not search engines. Their entries are maintained by people and are submitted by users. They are organized into categories and subcategories.

Examples include:

dir.yahoo.com (Yahoo's directory)
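The crawling step described above can be sketched in a few lines (a simplified illustration; here the page is a hard-coded string rather than one fetched over the network):

```python
# A crawler's first job: parse a fetched page and collect the links it
# should visit next. Python's standard html.parser is enough for a sketch.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":                       # every <a href="..."> is a link
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = '<html><body><a href="https://example.org">site</a> <a href="/docs">docs</a></body></html>'
collector = LinkCollector()
collector.feed(page)        # collector.links now holds the URLs to crawl
```

A real search engine would repeat this for every collected link, building the compiled list of pages that the search interface later queries.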

Group 10:

Members: Aditia Wardani, M. Lutfhi, Mustafhaadji Adi.P, Nurazmi Muhamad

Data Security

CHAPTER I

INTRODUCTION

I. BACKGROUND

People's awareness of data security is still very low. Why? Because many people do not yet understand what data security is. This paper explains what data security is and what kinds of data it concerns. As you (the reader) may know, a database is defined as a set of interrelated, shared data whose purpose is to maintain the information needed by an organization or company. That is what the data in question actually means.

What about data security? Data security is described and explained in the body of this paper; hopefully its contents will be useful for you and for all of us.

II. FORMULATION OF THE PROBLEM


Understanding data security is a must, because it becomes a problem when a user or administrator does not understand the threats to data, or does not keep those threats under control, so that much of the data is left unsafe. Therefore, there are some questions we must pay attention to. Who will guarantee that an administrator acts according to the guidelines? Who conducts an audit of overall security issues?
These are just a few of the many things we should consider. So what are the main threats, and what first-aid measures can be taken to avoid them?

III. PURPOSE

In accordance with the problems formulated above, the topics to be discussed are as follows:

I. Definition of data security
II. Data security
III. Data security technologies
IV. Key threats to data security

CHAPTER II
DISCUSSION

I. Understanding of data security


Data security refers to protective digital privacy measures that are applied to prevent unauthorized access to computers, databases and websites. Data security also protects data from corruption. Data security is an essential aspect of IT for organizations of every size and type.

Data security is also known as information security (IS) or computer security.

Examples of data security technologies include backups, data masking and data erasure. A key data security technology measure is encryption, where digital data, software/hardware, and hard drives are encrypted and therefore rendered unreadable to unauthorized users and hackers.

One of the most commonly encountered methods of practicing data security is the use of authentication. With authentication, users must provide a password, code, biometric data, or some other form of data to verify identity before access to a system or data is granted.
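A minimal sketch of password authentication (assuming, as is standard practice though not stated above, that the system stores salted hashes rather than plain-text passwords):

```python
# Verify a password against a stored salted hash (PBKDF2 from the stdlib).
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)                  # random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password, salt, stored_digest):
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored_digest)  # constant-time compare

salt, stored = hash_password("s3cret")             # at registration time
ok = verify("s3cret", salt, stored)                # at login time
```

The salt ensures two users with the same password get different hashes, and the constant-time comparison avoids leaking information through timing.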

Data security is also very important for health care records, so health advocates and medical practitioners in the U.S. and other countries are working toward implementing electronic medical record (EMR) privacy by creating awareness about patient rights related to the release of data to laboratories, physicians, hospitals and other medical facilities.

II. Data Security

The aspect of data security is an important thing to be considered in data management. Data security has become part of the development of information technology, given that millions of bits of information are exchanged in computer networks, primarily on the internet. Akib (2009) writes that the data security problem can be classified into several dimensions. A commercial site, for example, must meet the following requirements:

1) Secrecy: the category of computer security that covers protecting data/information from access by unauthorized parties, as well as authenticity issues concerning data/information sources. Secrecy relates to the encryption-decryption process and the authentication process.

2) Integrity: a data security category that ensures that data is not tampered with during the transfer process from source to destination through communication channels. Integrity issues relate to how to protect data from intruders who attempt to break into data sources, or to infiltrate the data network in order to change and corrupt data. Viruses that can destroy data are also part of the integrity problem.

3) Availability: a data security category that keeps the information source always available and active to serve its users. Availability relates to the effort of protecting the server from intrusions that can cause it to fail to provide services (denial of service, DoS).
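The integrity dimension above can be illustrated with a checksum: a digest computed before transfer and re-checked afterwards reveals any modification in transit (a sketch only; real systems use signatures or MACs so the digest itself cannot be forged):

```python
# Detect tampering by comparing a SHA-256 digest before and after transfer.
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"transfer 100 to account A"
sent_digest = checksum(original)            # sent along with the data

received = b"transfer 900 to account A"     # altered by an intruder
tampered = checksum(received) != sent_digest
```

If even one byte changes in transit, the recomputed digest no longer matches the one sent with the data, so the receiver can reject it.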

III. Data Security Technologies

1) Disk encryption

Disk encryption refers to encryption technology that encrypts data on a hard disk drive. Disk encryption typically takes form in either software (see disk encryption software) or hardware (see disk encryption hardware). Disk encryption is often referred to as on-the-fly encryption (OTFE) or transparent encryption.
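The idea of transparent encryption, where the same transformation encrypts sectors on write and decrypts them on read, can be shown with a toy XOR keystream (purely illustrative, NOT a secure cipher; real disk encryption uses algorithms such as AES in XTS mode):

```python
# Toy symmetric cipher: XOR with a hash-derived keystream. Applying the
# same function twice with the same key restores the original bytes.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

sector = b"contents of one disk sector"
encrypted = xor_crypt(sector, b"passphrase")     # on write
decrypted = xor_crypt(encrypted, b"passphrase")  # on read
```

Because encryption and decryption are the same operation, the disk driver can apply it transparently on every read and write without the user noticing.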

2) Backups

Backups are used to ensure that data which is lost can be recovered from another source. It is considered essential to keep a backup of any data in most industries, and the process is recommended for any files of importance to a user.
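A minimal sketch of the backup idea, using only the standard library and temporary paths: a second copy lets the data be recovered after the original is lost.

```python
# Copy a directory to a backup location, "lose" the original, then recover.
import os
import shutil
import tempfile

source = tempfile.mkdtemp()
with open(os.path.join(source, "report.txt"), "w") as f:
    f.write("quarterly figures")

backup = source + "-backup"
shutil.copytree(source, backup)                    # take the backup

os.remove(os.path.join(source, "report.txt"))      # the data loss happens
with open(os.path.join(backup, "report.txt")) as f:
    recovered = f.read()                           # restore from the backup
```

Real backup schemes add scheduling, versioning and off-site storage, but the principle is the same: the copy must exist somewhere the failure cannot reach.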

3) Data masking

Data masking of structured data is the process of obscuring (masking) specific data within a database table or cell to ensure that data security is maintained and sensitive information is not exposed to unauthorized personnel. This may include masking the data from users (for example, so banking customer representatives can only see the last 4 digits of a customer's national identity number), developers (who need real production data to test new software releases but should not be able to see sensitive financial data), outsourcing vendors, etc.
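The bank example above can be sketched directly (a hypothetical helper, not any particular masking product):

```python
# Show only the last four digits of a sensitive identifier.
def mask_id(number: str, visible: int = 4) -> str:
    return "*" * (len(number) - visible) + number[-visible:]

masked = mask_id("6011000990139424")   # what a support agent would see
```

The masked value is enough for the agent to confirm the customer's identity, while the full number never leaves the database layer.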

4) Data erasure

Data erasure is a method of software-based overwriting that completely destroys all electronic data residing on a hard drive or other digital media, to ensure that no sensitive data is leaked when an asset is retired or reused.
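Software-based overwriting can be sketched as follows (illustrative only; real erasure tools make multiple passes and handle device-level details, such as wear levelling on SSDs, that this sketch ignores):

```python
# Overwrite a file's bytes with random data before deleting it.
import os
import tempfile

def overwrite_and_delete(path):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(os.urandom(size))      # replace the contents in place
        f.flush()
        os.fsync(f.fileno())           # push the overwrite to the device
    os.remove(path)

fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"sensitive data")
overwrite_and_delete(path)             # the file and its contents are gone
```

Unlike a plain delete, which usually only removes the directory entry, the overwrite ensures the original bytes are no longer on the medium.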

IV. Key threats to data security

Hacking and data breaches are not the only kinds of insecurity affecting data, and SMBs and startups are prone to them as well. Ransomware is becoming a more commonplace type of attack. According to Kaspersky, a leading anti-virus and cyber security developer, ransomware is a type of malware that severely restricts access to a computer, device or file until a ransom is paid by the user. This class of malware is a criminal moneymaking scheme that can be installed through deceptive links in an email message, instant message or website. Why hack data? Motivations behind cyber attacks include cyber crime, hacktivism, cyber espionage and cyber warfare.

CHAPTER III
CONCLUSIONS AND RECOMMENDATIONS

I. Conclusion

Data security refers to protective digital privacy measures that are applied to prevent unauthorized access to computers, databases and websites. Data security also protects data from corruption.

The aspect of data security is an important thing to be considered in data management. Data security has become part of the development of information technology, given that millions of bits of information are exchanged in computer networks, primarily on the internet. A commercial site, for example, must meet the following requirements:

1) Secrecy
2) Integrity
3) Availability

Hacking and data breaches are not the only kinds of insecurity affecting data, and SMBs and startups are prone to them as well. Ransomware is becoming a more commonplace type of attack.
II. Recommendation

Readers of this paper should always protect their data using a proper database system, because a great deal of data is lost to irresponsible people. We hope this paper is useful.
