
The Computer Chronicles

Alton C. Crews Middle School, Spring Issue 2010


NEWS FLASH!
A New Generation of Computers is about to be Announced
by Roderick Hames

In the beginning ...


A generation refers to the state of improvement in the development of a product. The term is also applied to the advancements in computer technology. With each new generation, the circuitry has gotten smaller and more advanced than in the generation before it. As a result of this miniaturization, the speed, power, and memory of computers have increased proportionally. New discoveries are constantly being developed that affect the way we live, work, and play.

The First Generation: 1946-1958 (The Vacuum Tube Years)


The first generation computers were huge, slow, expensive, and often undependable. In 1946, two Americans, Presper Eckert and John Mauchly, built the ENIAC electronic computer, which used vacuum tubes instead of the mechanical switches of the Mark I. The ENIAC used thousands of vacuum tubes, which took up a lot of space and gave off a great deal of heat, just as light bulbs do. The ENIAC led to other vacuum tube computers like the EDVAC (Electronic Discrete Variable Automatic Computer) and the UNIVAC I (UNIVersal Automatic Computer). The vacuum tube was an extremely important step in the advancement of computers. Vacuum tubes grew out of Thomas Edison's work on the light bulb and worked in much the same way. Their purpose was to act as an amplifier and a switch. Without any moving parts, vacuum tubes could take very weak signals and make them stronger (amplify them), and they could stop and start the flow of electricity instantly (switch). These two properties made the ENIAC computer possible.

The ENIAC gave off so much heat that it had to be cooled by gigantic air conditioners. Even with these huge coolers, however, vacuum tubes still overheated regularly. It was time for something new.

The Second Generation: 1959-1964 (The Era of the Transistor)


The transistor computer did not last as long as the vacuum tube computer, but it was no less important in the advancement of computer technology. In 1947, three scientists working at AT&T's Bell Labs, John Bardeen, William Shockley, and Walter Brattain, invented what would replace the vacuum tube forever. This invention was the transistor, which functions like a vacuum tube in that it can be used to relay and switch electronic signals. There were obvious differences between the transistor and the vacuum tube: the transistor was faster, more reliable, smaller, and much cheaper to build. One transistor replaced the equivalent of 40 vacuum tubes. Transistors were made of solid material, chiefly silicon, an abundant element (second only to oxygen) found in beach sand and glass, so they were very cheap to produce. Transistors were found to conduct electricity faster and better than vacuum tubes. They were also much smaller and gave off virtually no heat in comparison. Their use marked a new beginning for the computer. Without this invention, space travel in the 1960s would not have been possible. However, a new invention would advance our ability to use computers even further.

The Third Generation: 1965-1970 (Integrated Circuits: Miniaturizing the Computer)


Transistors were a tremendous breakthrough in advancing the computer. However, no one could predict that thousands, and now even millions, of transistors (circuits) could be compacted into such a small space. The integrated circuit, sometimes called a semiconductor chip, packs a huge number of transistors onto a single wafer of silicon. Robert Noyce of Fairchild Corporation and Jack Kilby of Texas Instruments independently discovered the amazing attributes of integrated circuits. Placing such large numbers of transistors on a single chip vastly increased the power of a single computer and lowered its cost considerably. Since the invention of integrated circuits, the number of transistors that can be placed on a single chip has doubled every two years, shrinking both the size and cost of computers even further and further enhancing their power. Most electronic devices today use some form of integrated circuits placed on printed circuit boards, thin pieces of bakelite or fiberglass that have electrical connections etched onto them; the main one is often called a motherboard. These third generation computers could carry out instructions in billionths of a second, and the machines shrank to the size of small file cabinets. Yet the single biggest advancement in the computer era was yet to come.

The Fourth Generation: 1971-Today (The Microprocessor)


This generation can be characterized by both the jump to monolithic integrated circuits (millions of transistors put onto one integrated circuit chip) and the invention of the microprocessor (a single chip that could do all the processing of a full-scale computer). By putting millions of transistors onto one chip, computers could perform more calculations at greater speeds. Because electricity travels about a foot in a billionth of a second, the smaller the distance, the greater the speed of the computer. What really triggered the tremendous growth of computers, and their significant impact on our lives, was the invention of the microprocessor. Ted Hoff, employed by Intel (Robert Noyce's new company), invented a chip the size of a pencil eraser that could do all the computing and logic work of a computer. The microprocessor was originally made to be used in calculators, not computers. It led, however, to the invention of personal computers, or microcomputers. It wasn't until the 1970s that people began buying computers for personal use. One of the earliest personal computers was the Altair 8800 computer kit; in 1975 you could purchase this kit and put it together to make your own personal computer. In 1977 the Apple II was sold to the public, and in 1981 IBM entered the PC (personal computer) market. Today we have all heard of Intel and its Pentium processors, and now we know how it all got started. The computers of the next generation will have millions upon millions of transistors on one chip and will perform over a billion calculations in a single second. There is no end in sight for the computer movement.

Questions
Directions: Answer each of the questions after reading the article above. Write in complete sentences. You must think and be creative with your answers.

1. In each of the 4 generations, what was the cause of the increase in speed, power, or memory?
2. Why did the ENIAC and other computers like it give off so much heat? (Be very specific.)
3. What characteristics made the transistor better than the vacuum tube?
4. How was space travel made possible through the invention of transistors?
5. What did the microprocessor allow computers to do, and what was the microprocessor's original purpose?
6. When was the first computer offered to the public, and what was its name?
7. What were Robert Noyce and Jack Kilby known for?

8. Who was Intel started by?
9. What are monolithic integrated circuits?
10. How do you think society will be different if scientists are able to create a chip that will perform a trillion operations in a single second?

[Images: Processors of old and new - one of the first ICs, a 386 processor, a Pentium processor, and the new processors]

This site was created by Roderick Hames for the primary purpose of teaching and demonstrating computer & business skills. Any distribution or copying without the express written consent of Alton C. Crews Middle School or its creator is strictly prohibited. Any questions, comments or suggestions concerning this page or this Web site should be forwarded to Roderick Hames, Computer Science / Business Education Teacher. Copyright 2009, Alton C. Crews Middle School: CS Dept - Articles

Dated: Aug. 13, 2004



By Najmi


The history of computer development is often discussed in terms of the different generations of computing devices. A generation refers to the state of improvement in the product development process. The term is also applied to the advancements in computer technology. With each new generation, the circuitry has gotten smaller and more advanced than in the generation before it. As a result of this miniaturization, the speed, power, and memory of computers have increased proportionally. New discoveries are constantly being developed that affect the way we live, work and play. Each generation of computers is characterized by a major technological development that fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper, more powerful, and more efficient and reliable devices. Read about each generation and the developments that led to the current devices we use today.

First Generation - 1940-1956: Vacuum Tubes


The first computers used vacuum tubes for circuitry and magnetic drums for memory, and were often enormous, taking up entire rooms. A magnetic drum, also referred to simply as a drum, is a metal cylinder coated with magnetic iron-oxide material on which data and programs can be stored. Magnetic drums were once used as a primary storage device but have since been relegated to auxiliary storage. The tracks on a magnetic drum are assigned to channels located around the circumference of the drum, forming adjacent circular bands that wind around the drum. A single drum can have up to 200 tracks. As the drum rotates at speeds of up to 3,000 rpm, the device's read/write heads deposit magnetized spots on the drum during the write operation and sense these spots during a read operation. This action is similar to that of a magnetic tape or disk drive.

First generation computers were very expensive to operate, and in addition to using a great deal of electricity, they generated a lot of heat, which was often the cause of malfunctions. They relied on machine language to perform operations, and they could only solve one problem at a time. Machine languages are the only languages understood by computers. While easily understood by computers, machine languages are almost impossible for humans to use because they consist entirely of numbers. Programmers, therefore, use either high-level programming languages or assembly language. An assembly language contains the same instructions as a machine language, but the instructions and variables have names instead of being just numbers. Programs written in high-level programming languages are translated into assembly language or machine language by a compiler, and assembly language programs are translated into machine language by a program called an assembler (assembly language compiler). Every CPU has its own unique machine language, so programs must be rewritten or recompiled to run on different types of computers. Input was based on punched cards and paper tape, and output was displayed on printouts.

The UNIVAC and ENIAC computers are examples of first-generation computing devices. The UNIVAC was the first commercial computer; it was delivered to its first client, the U.S. Census Bureau, in 1951. ENIAC, an acronym for Electronic Numerical Integrator And Computer, was the world's first operational electronic digital computer, developed by Army Ordnance to compute World War II ballistic firing tables. The ENIAC, weighing 30 tons, using 200 kilowatts of electric power, and consisting of 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors, was completed in 1945. In addition to ballistics, the ENIAC's fields of application included weather prediction, atomic-energy calculations, cosmic-ray studies, thermal ignition, random-number studies, wind-tunnel design, and other scientific uses. The ENIAC soon became obsolete as the need arose for faster computing speeds.

Second Generation - 1956-1963: Transistors


Transistors replaced vacuum tubes and ushered in the second generation of computers. A transistor is a device composed of semiconductor material that amplifies a signal or opens or closes a circuit. Invented in 1947 at Bell Labs, transistors have become the key ingredient of all digital circuits, including computers. Today's latest microprocessors contain tens of millions of microscopic transistors. Prior to the invention of transistors, digital circuits were composed of vacuum tubes, which had many disadvantages: they were much larger, required more energy, dissipated more heat, and were more prone to failures. It's safe to say that without the invention of transistors, computing as we know it today would not be possible.

The transistor was invented in 1947 but did not see widespread use in computers until the late 50s. The transistor was far superior to the vacuum tube, allowing computers to become smaller, faster, cheaper, more energy-efficient, and more reliable than their first-generation predecessors. Though the transistor still generated a great deal of heat that subjected the computer to damage, it was a vast improvement over the vacuum tube. Second-generation computers still relied on punched cards for input and printouts for output. They moved from cryptic binary machine language to symbolic, or assembly, languages, which allowed programmers to specify instructions in words. High-level programming languages were also being developed at this time, such as early versions of COBOL and FORTRAN. These were also the first computers that stored their instructions in their memory, which moved from a magnetic drum to magnetic core technology. The first computers of this generation were developed for the atomic energy industry.

Third Generation - 1964-1971: Integrated Circuits


The development of the integrated circuit was the hallmark of the third generation of computers. Transistors were miniaturized and placed on silicon chips, called semiconductors, which drastically increased the speed and efficiency of computers. Silicon, a nonmetallic chemical element in the carbon family (atomic symbol Si), is the second most abundant element in the earth's crust, surpassed only by oxygen. Silicon does not occur uncombined in nature. Sand and almost all rocks contain silicon combined with oxygen, forming silica. When silicon combines with other elements, such as iron, aluminum or potassium, a silicate is formed. Compounds of silicon also occur in the atmosphere, natural waters, many plants, and in the bodies of some animals.

Silicon is the basic material used to make computer chips, transistors, silicon diodes, and other electronic circuits and switching devices because its atomic structure makes the element an ideal semiconductor. Silicon is commonly doped, or mixed, with other elements, such as boron, phosphorus and arsenic, to alter its conductive properties.

A chip is a small piece of semiconducting material (usually silicon) on which an integrated circuit is embedded. A typical chip is less than a quarter of a square inch and can contain millions of electronic components (transistors). Computers consist of many chips placed on electronic boards called printed circuit boards. There are different types of chips. For example, CPU chips (also called microprocessors) contain an entire processing unit, whereas memory chips contain blank memory.

A semiconductor is a material that is neither a good conductor of electricity (like copper) nor a good insulator (like rubber). The most common semiconductor materials are silicon and germanium. These materials are then doped to create an excess or lack of electrons. Computer chips, both for CPU and memory, are composed of semiconductor materials. Semiconductors make it possible to miniaturize electronic components, such as transistors. Not only does miniaturization mean that the components take up less space, it also means that they are faster and require less energy.

Instead of punched cards and printouts, users interacted with third generation computers through keyboards and monitors and interfaced with an operating system, which allowed the device to run many different applications at one time with a central program that monitored the memory. Computers for the first time became accessible to a mass audience because they were smaller and cheaper than their predecessors.

Fourth Generation - 1971-Present: Microprocessors


The microprocessor brought the fourth generation of computers, as thousands of integrated circuits were built onto a single silicon chip. A microprocessor is a silicon chip that contains a CPU; in the world of personal computers, the terms microprocessor and CPU are used interchangeably. At the heart of all personal computers and most workstations sits a microprocessor. Microprocessors also control the logic of almost all digital devices, from clock radios to fuel-injection systems for automobiles. Three basic characteristics differentiate microprocessors:

Instruction Set: The set of instructions that the microprocessor can execute.

Bandwidth: The number of bits processed in a single instruction.

Clock Speed: Given in megahertz (MHz), the clock speed determines how many instructions per second the processor can execute.

For both bandwidth and clock speed, the higher the value, the more powerful the CPU. For example, a 32-bit microprocessor that runs at 50 MHz is more powerful than a 16-bit microprocessor that runs at 25 MHz. What in the first generation filled an entire room could now fit in the palm of the hand. The Intel 4004 chip, developed in 1971, located all the components of the computer, from the central processing unit and memory to input/output controls, on a single chip. CPU is an abbreviation of central processing unit, pronounced as separate letters. The CPU is the brains of the computer. Sometimes referred to simply as the processor or central processor, the CPU is where most calculations take place. In terms of computing power, the CPU is the most important element of a computer system. On large machines, CPUs require one or more printed circuit boards. On personal computers and small workstations, the CPU is housed in a single chip called a microprocessor. Two typical components of a CPU are:

The arithmetic logic unit (ALU), which performs arithmetic and logical operations.

The control unit, which extracts instructions from memory and decodes and executes them, calling on the ALU when necessary.

In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the Macintosh. Microprocessors also moved out of the realm of desktop computers and into many areas of life as more and more everyday products began to use them. As these small computers became more powerful, they could be linked together to form networks, which eventually led to the development of the Internet. Fourth generation computers also saw the development of GUIs, the mouse, and handheld devices.

Fifth Generation - Present and Beyond: Artificial Intelligence


Fifth generation computing devices, based on artificial intelligence, are still in development, though there are some applications, such as voice recognition, that are being used today. Artificial intelligence is the branch of computer science concerned with making computers behave like humans. The term was coined in 1956 by John McCarthy. Artificial intelligence includes:

Games Playing: programming computers to play games such as chess and checkers

Expert Systems: programming computers to make decisions in real-life situations (for example, some expert systems help doctors diagnose diseases based on symptoms)

Natural Language: programming computers to understand natural human languages

Neural Networks: systems that simulate intelligence by attempting to reproduce the types of physical connections that occur in animal brains

Robotics: programming computers to see and hear and react to other sensory stimuli

Currently, no computers exhibit full artificial intelligence (that is, are able to simulate human behavior). The greatest advances have occurred in the field of games playing. The best computer chess programs are now capable of beating humans. In May 1997, an IBM supercomputer called Deep Blue defeated world chess champion Garry Kasparov in a chess match. In the area of robotics, computers are now widely used in assembly plants, but they are capable only of very limited tasks. Robots have great difficulty identifying objects based on appearance or feel, and they still move and handle objects clumsily.

Natural-language processing offers the greatest potential rewards because it would allow people to interact with computers without needing any specialized knowledge. You could simply walk up to a computer and talk to it. Unfortunately, programming computers to understand natural languages has proved to be more difficult than originally thought. Some rudimentary translation systems that translate from one human language to another are in existence, but they are not nearly as good as human translators. There are also voice recognition systems that can convert spoken sounds into written words, but they do not understand what they are writing; they simply take dictation. Even these systems are quite limited; you must speak slowly and distinctly. In the early 1980s, expert systems were believed to represent the future of artificial intelligence and of computers in general. To date, however, they have not lived up to expectations. Many expert systems help human experts in such fields as medicine and engineering, but they are very expensive to produce and are helpful only in special situations. Today, the hottest area of artificial intelligence is neural networks, which are proving successful in a number of disciplines such as voice recognition and natural-language processing. There are several programming languages that are known as AI languages because they are used almost exclusively for AI applications. The two most common are LISP and Prolog.

Voice Recognition
The field of computer science that deals with designing computer systems that can recognize spoken words. Note that voice recognition implies only that the computer can take dictation, not that it understands what is being said. Comprehending human languages falls under a different field of computer science called natural language processing. A number of voice recognition systems are available on the market. The most powerful can recognize thousands of words. However, they generally require an extended training session during which the computer system becomes accustomed to a particular voice and accent. Such systems are said to be speaker dependent. Many systems also require that the speaker speak slowly and distinctly and separate each word with a short pause. These systems are called discrete speech systems. Recently, great strides have been made in continuous speech systems, voice recognition systems that allow you to speak naturally. There are now several continuous-speech systems available for personal computers. Because of their limitations and high cost, voice recognition systems have traditionally been used only in a few specialized situations. For example, such systems are useful in instances when the user is unable to use a keyboard to enter data because his or her hands are occupied or disabled. Instead of typing commands, the user can simply speak into a headset. Increasingly, however, as the cost decreases and performance improves, speech recognition systems are entering the mainstream and are being used as an alternative to keyboards.

The use of parallel processing and superconductors is helping to make artificial intelligence a reality. Parallel processing is the simultaneous use of more than one CPU to execute a program. Ideally, parallel processing makes a program run faster because there are more engines (CPUs) running it. In practice, it is often difficult to divide a program in such a way that separate CPUs can execute different portions without interfering with each other. Most computers have just one CPU, but some models have several. There are even computers with thousands of CPUs. With single-CPU computers, it is possible to perform parallel processing by connecting the computers in a network; however, this type of parallel processing requires very sophisticated software called distributed processing software. Note that parallel processing differs from multitasking, in which a single CPU executes several programs at once. Parallel processing is also called parallel computing.

Quantum computation and molecular and nanotechnology will radically change the face of computers in years to come. First proposed in the 1970s, quantum computing relies on quantum physics by taking advantage of certain quantum physics properties of atoms or nuclei that allow them to work together as quantum bits, or qubits, which serve as the computer's processor and memory. By interacting with each other while being isolated from the external environment, qubits can perform certain calculations exponentially faster than conventional computers. Qubits do not rely on the traditional binary nature of computing. While traditional computers encode information into bits using binary numbers, either a 0 or a 1, and can only do calculations on one set of numbers at once, quantum computers encode information as a series of quantum-mechanical states, such as spin directions of electrons or polarization orientations of a photon, that might represent a 1 or a 0, might represent a combination of the two, or might represent a number expressing that the state of the qubit is somewhere between 1 and 0, or a superposition of many different numbers at once. A quantum computer can do an arbitrary reversible classical computation on all the numbers simultaneously, which a binary system cannot do, and also has some ability to produce interference between various different numbers. By doing a computation on many different numbers at once, then interfering the results to get a single answer, a quantum computer has the potential to be much more powerful than a classical computer of the same size. In using only a single processing unit, a quantum computer can naturally perform myriad operations in parallel. Quantum computing is not well suited for tasks such as word processing and email, but it is ideal for tasks such as cryptography and modeling and indexing very large databases.

Nanotechnology is a field of science whose goal is to control individual atoms and molecules to create computer chips and other devices that are thousands of times smaller than current technologies permit. Current manufacturing processes use lithography to imprint circuits on semiconductor materials. While lithography has improved dramatically over the last two decades, to the point where some manufacturing plants can produce circuits smaller than one micron (1,000 nanometers), it still deals with aggregates of millions of atoms. It is widely believed that lithography is quickly approaching its physical limits. To continue reducing the size of semiconductors, new technologies that juggle individual atoms will be necessary. This is the realm of nanotechnology. Although research in this field dates back to Richard P. Feynman's classic talk in 1959, the term nanotechnology was first coined by K. Eric Drexler in 1986 in the book Engines of Creation. In the popular press, the term nanotechnology is sometimes used to refer to any submicron process, including lithography. Because of this, many scientists are beginning to use the term molecular nanotechnology when talking about true nanotechnology at the molecular level.

The goal of fifth-generation computing is to develop devices that respond to natural language input and are capable of learning and self-organization. Here natural language means a human language. For example, English, French, and Chinese are natural languages; computer languages, such as FORTRAN and C, are not. Probably the single most challenging problem in computer science is to develop computers that can understand natural languages. So far, the complete solution to this problem has proved elusive, although a great deal of progress has been made. Fourth-generation languages are the programming languages closest to natural languages.

The Internet has sunk its claws deep into people's hearts and minds. Users now want to stay connected to the Net all the time so that none of their correspondence is delayed and they don't miss any opportunity to click on a prize. One tends to forget the amount of manpower that is being consumed while playing lotto online or downloading a new flash game. The gum on the screen is so sticky that nobody gets time to shift his or her eyes from the monitor.

This is a destructive trait for employees in mission-critical environments. Thanks to the proliferation of the Internet in the corporate sector, how employees utilize their online time at the workplace is now a critical debate. Increasingly, companies are finding that workers tend to get sidetracked indulging in personal entertainment or catching up with friends via instant messengers, resulting in hours of wasted company time. Employees who waste precious company time in mindless Net pursuits are termed CyberSlackers. Over the past three years, several organizations such as The New York Times, Rolls Royce and Xerox have fired workers for abusing company Net accounts. Two popular reasons why more workers are finding themselves out of a job are downloading porn and sending out obscene emails to friends and colleagues. In the age of the connected, a company can therefore never expect to meet targets if checks are not applied and policies are not outlined for how employees spend their online time. What factors should organizations keep in mind while drafting the ground rules for their employees?

Is the time spent on various online non-productive activities an appreciable thing? Is the cost of Internet usage worth the company's monthly expense? Is the time spent on the Net beneficial for the company, or are the employees wasting a major chunk of the account in individual pursuits?

There are three main reasons why companies should be concerned about their employees' surf habits:

Loss of productivity
Legal liability
Waste of bandwidth

Loss of Productivity

If employees are using the Internet for non-work related purposes, this results in reduced productivity and an ultimate loss in profits. On average, workers browse the Internet more at the office than in the odd few hours at home, because the proxy settings on their office PCs allow full-time connectivity. The US Treasury Department recently monitored the Internal Revenue Service's (IRS) workforce's Internet use. They found that activities such as personal email, chat, online shopping, and personal finance and stocks accounted for 51% of employees' time spent online. The top non-work Web activity favored by IRS officials was surfing financial websites. Chat and email ran a close second, followed by miscellaneous activities including visiting adult sites, search requests, and looking at or downloading streaming media. Time is an asset, and a misuse of that asset is just as wrong as the misuse of any other asset the company holds. As with any project, meeting a deadline is always a core issue. Internet addiction makes meeting crucial deadlines an impossible task, and one never realizes how much precious time is wasted whiling away on the Internet checking horoscopes and news trivia.

Legal Liability

Employees betray their company's trust and abuse their online time when they download material that is illegal or inappropriate. An employer can face legal risks at the hands of careless employees. Companies are also at legal risk for copyright violation when employees download protected mp3 files or pirated software. Employees can also sue their employers if a co-worker has downloaded pornographic or racist materials. Clearly, it has become essential for companies to be aware of what their employees are downloading from the Internet, and for them to take steps to avoid liability by introducing employee Internet management strategies. The following habits should not be entertained in a regular office:

Uploading illegal materials to a public Web site, or illegally gaining access to a network or server by hacking or cracking passwords.
Sending out computer viruses or launching denial of service attacks on the Internet.
Sending illegal material such as child pornography to co-workers.
Emailing hate letters or slanderous letters over the Internet.
Posting unfounded corporate rumors on stock market bulletin boards.
Sending emails that may offend co-workers and are covered under sexual harassment laws.
Otherwise engaging in similar online behavior.

Employees who are treated with dignity and respect, who take pride in their organization and its ethics, tend to respect the assets of that organization. Just as stealing pens, stationery, and hardware is unethical, acting irresponsibly on the Net during office hours is a bad techie trait.

Waste of Bandwidth

As broadband applications over the Internet continue to become increasingly popular, corporate networks are becoming bottlenecked; streaming media, mp3 files, video and audio files, and large graphic files are increasing network crashes. For many companies, network quality of service (QoS) may be their most important business asset. If QoS is dragging, so is the company's ability to keep pace with the competition. Today's Internet lets employees buy products, chat with friends, visit their kids at daycare, listen to RealAudio feeds, and play interactive games. As a result, bandwidth consumption increases.

Preventive Measures

While monitoring and filtering software can be effective for managing current Internet abusers in the workplace, a more effective means of Employee Internet Management is via preventative measures.

Draft a company Internet access policy
Company-wide education about proper Net use

Internet Access Policy

Most companies already require employees to sign a basic contract indicating what acceptable Internet use is, the fact that employees may be monitored without indication, and that unacceptable Internet abuse is grounds for termination.

Company-Wide Education About Proper Net Usage

Surprisingly, very few companies offer seminars or educational materials for employees to learn about the ramifications of Internet abuse. By educating employees on how abusing the Internet has a negative impact on both the self and the company, many of the problems outlined in this article may be alleviated.

Boss Plays Big Brother

It is certainly not ethical to poke into someone's inbox. Given the importance of user privacy in the cyber age, it is important not to overlook how bosses can step over the line, closely monitoring their employees' surf trails. If your organization is drafting a policy paper for workers' Net use, make sure it is well balanced in light of the standards set by international privacy advocates. The Electronic Frontier Foundation maintains an archive of information pertinent to workplace privacy.

Work First

Wasting company Internet time, intentionally or otherwise, is wrong. It cuts into our ability to do the job, to be productive and competitive. And in today's marketplace, being competitive is the key to survival. The Internet is an excellent research tool, and its use should benefit the company's cause. As workers, our online time should produce new growth strategies for the company rather than mindless IMing.

History of Generations of Computers

The computers that you see and use today didn't come from any inventor at one go. Rather, it took centuries of rigorous research work to reach the present stage. And scientists are still working hard to make them better and better. But that is a different story. First, let us see when the very idea of computing with a machine or device, as against conventional manual calculation, was given a shape.

Though experiments were going on even earlier, it was in the 17th century that the first such successful device came into being. Edmund Gunter, an English mathematician, is credited with its development in 1620. Yet it was too primitive to be recognized even as the forefather of computers. The first mechanical digital calculating machine was built in 1642 by the French scientist-philosopher Blaise Pascal. And since then the ideas and inventions of many mathematicians, scientists, and engineers paved the way for the development of the modern computer in the following years.

But the world had to wait yet another couple of centuries to reach the next milestone in developing a computer. It was the English mathematician and inventor Charles Babbage who did the wonder with his work during the 1830s. In fact, he was the first to work on a machine that could use and store values of large mathematical tables. The most important thing about this machine was its use of records coded in the very simple binary system, with the help of only two kinds of symbols. This is quite a big leap toward the basics on which computers today work. However, there was yet a long way to go. Compared to present day computers, Babbage's machine could be regarded as more of a high-speed counting device, for it could work on numbers alone!

The Boolean algebra developed in the 19th century removed the numbers-only limitation for these counting devices. This technique of mathematics, invented by George Boole, helped correlate binary digits with our language. For instance, values of 0 are related to false statements and values of 1 to true ones. British mathematician Alan Turing made further progress with his theory of a computing model. Meanwhile, the technological advancements of the 1930s helped much in furthering the advancement of computing devices.

But the direct forefathers of present-day computer systems evolved in about the 1940s. The Harvard Mark 1 computer, designed by Howard Aiken, is the world's first digital computer to make use of electromechanical devices. It was developed jointly by International Business Machines (IBM) and Harvard University in 1944. But the real breakthrough was the concept of the stored-program computer. This came when the Hungarian-American mathematician John von Neumann introduced the Electronic Discrete Variable Automatic Computer (EDVAC). The idea that instructions as well as data should be stored in the computer's memory for better results made this device totally different from its counting-device type of forerunners.

Since then computers have become increasingly faster and more powerful. Still, as against the present day's personal computers, they had the simplest form of design: a single CPU performing various operations, like addition, multiplication and so on, following an order of instructions, called a program, to produce the desired result. This form of design was followed, with a little change, even in the advanced versions of computers developed later. The changed version saw a division of the CPU into memory and arithmetic logic unit (ALU) parts, plus separate input and output sections. In fact, the first four generations of computers followed this as their basic form of design. It was basically the type of hardware used that made the difference from generation to generation. The first generation was based on vacuum tube technology. This was upgraded with the coming of the transistor and printed circuit board technology in the 2nd generation. It was further upgraded by the arrival of integrated circuit chip technology, where little chips replaced a large number of components. Thus the size of the computer was greatly reduced in the 3rd generation, while it became more powerful.

But the real marvel came during the 1970s, with the introduction of very large scale integration (VLSI) technology in the 4th generation. Aided by this technology, a tiny microprocessor can store millions of pieces of data. Based on this technology, IBM introduced its famous personal computers. Since then IBM itself, and other makers including Apple, Sinclair, and so forth, have kept on developing more and more advanced versions of personal computers, along with bigger and more powerful ones like mainframes and supercomputers for more complicated work. Meanwhile, tinier versions like laptops and even palmtops have come up with more advanced technologies over the past couple of decades. But the advancement of technology alone cannot take full credit for the amazing advancement of computers over the past few decades.

Software, or the inbuilt logic to run the computer the way you like, kept being developed at an equal pace. The arrival of famous software manufacturers like Microsoft, Oracle, and Sun has helped pace up the development. The result of all this painstaking research is to add to our ease in solving complex problems at lightning speed with a device that is easy to use and operate, called the computer.

Dated: Feb. 21, 2006



By Donald W. Hyatt

Inserting a New Table Entry


For the examples we have been using in this tutorial, we are using an account called games in which there is a table called scores for keeping track of high scores. The table was initialized from a file, but now we are going to add a new player in interactive mode. We will use the MySQL command INSERT INTO to select the table and operation, and then the command SET to specify the value of any variables that we wish to initialize. In order to add a new player called "Richard", we will use the following syntax:

mysql> INSERT INTO scores SET Name="Richard";
Query OK, 1 row affected (0.00 sec)

Let's see the current values in the table scores.

mysql> SELECT * FROM scores;
+---------+------+
| Name    | Num  |
+---------+------+
| Phyllis |  987 |
| Randy   | 1285 |
| Don     |  919 |
| Mark    |    0 |
| Mary    |  567 |
| Bob     |   23 |
| Pete    |  456 |
| Sally   |  333 |
| Richard | NULL |
+---------+------+
9 rows in set (0.00 sec)

It is important to note that if a variable is a "PRIMARY KEY" or is specified in the initial table creation as being something "NOT NULL", a value must be supplied at the time the entry is inserted. Notice that Richard does not have a score at this time, so his score is not 0 but NULL instead.
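For reference, here is a minimal sketch of how a table like scores might have been created; the tutorial never shows the original definition, so the column types below are assumptions based on the SHOW FIELDS output that appears later in this tutorial:

mysql> CREATE TABLE scores (Name VARCHAR(20), Num INT(5));

Had Name instead been declared VARCHAR(20) NOT NULL with no DEFAULT, the earlier INSERT would succeed only because it supplies a Name; an insert that omitted it, such as INSERT INTO scores SET Num=0; would be rejected in strict SQL mode (or filled in with an empty string plus a warning in older, non-strict modes).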

Updating Information
Since Richard does not have a score at this time, let's take a look at the syntax to change the information in a table. We will use the command UPDATE to identify the type of action and the table being used, then the operation SET to assign a value to a variable, and WHERE to establish the criteria for updating the record. The syntax for that command would be:

mysql> UPDATE scores SET Num=0 WHERE Name="Richard";
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> SELECT * FROM scores WHERE Num=0;

Now Richard's score is also zero. Of course, we could have created Richard's entry and assigned the initial score of zero during the insert operation by doing the following command instead:

mysql> INSERT INTO scores SET Name="Richard", Num=0;

We can even change one of the users' names. Let's suppose that Mary actually should be called Marianne. We can change that entry for the name in the following way:

mysql> UPDATE scores SET Name="Marianne" WHERE Name="Mary";
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> SELECT * FROM scores;
+----------+------+
| Name     | Num  |
+----------+------+
| Phyllis  |  987 |
| Randy    | 1285 |
| Don      |  919 |
| Mark     |    0 |
| Marianne |  567 |
| Bob      |   23 |
| Pete     |  456 |
| Sally    |  333 |
| Richard  |    0 |
+----------+------+
9 rows in set (0.00 sec)

Now let's try a slightly more sophisticated update operation. Suppose we wish to give 100 bonus points to the score of anyone whose name begins with an "R", such as "Randy" and "Richard". We could update each row separately by replacing the scores with the appropriate values, but the following approach is a bit better. We will use the operator LIKE, which permits us to match some value such as the leading "R" in both names, with the wildcard character "%" matching the rest. We will then allow MySQL to do the arithmetic by adding 100 points to the old value of Num for any rows that match. The syntax for that command is:

mysql> UPDATE scores SET Num=Num+100 WHERE Name LIKE "R%";
Query OK, 2 rows affected (0.00 sec)
Rows matched: 2  Changed: 2  Warnings: 0

mysql> SELECT * FROM scores;
+----------+------+
| Name     | Num  |
+----------+------+
| Phyllis  |  987 |
| Randy    | 1385 |
| Don      |  919 |
| Mark     |    0 |
| Marianne |  567 |
| Bob      |   23 |
| Pete     |  456 |
| Sally    |  333 |
| Richard  |  100 |
+----------+------+
9 rows in set (0.01 sec)

Now both scores have been changed.
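As an aside (these pattern examples are ours, not part of the original walkthrough), MySQL's LIKE supports two wildcards: "%" matches any run of characters, while "_" matches exactly one character:

mysql> SELECT * FROM scores WHERE Name LIKE "%y";
mysql> SELECT * FROM scores WHERE Name LIKE "_o_";

The first returns any name ending in "y"; the second returns any three-letter name with an "o" in the middle, such as Don or Bob.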

Deleting a Table Entry


Now that we can add entries to the table, it will be important to learn how to delete them, too. The command for removing something from a table is DELETE FROM to specify the action and table, and then WHERE to indicate the criteria for deletion. If we desire to delete Mark from the table, the command would be:

mysql> DELETE FROM scores WHERE Name="Mark";
Query OK, 1 row affected (0.00 sec)
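A word of caution that the tutorial does not spell out: the WHERE clause is the only thing limiting the deletion. Leaving it off removes every row in the table:

mysql> DELETE FROM scores;

There is no undo for this outside of a transaction, so it pays to double-check the WHERE clause before pressing Enter.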

Let's see the current values in the table scores.

mysql> SELECT * FROM scores;
+----------+------+
| Name     | Num  |
+----------+------+
| Phyllis  |  987 |
| Randy    | 1385 |
| Don      |  919 |
| Marianne |  567 |
| Bob      |   23 |
| Pete     |  456 |
| Sally    |  333 |
| Richard  |  100 |
+----------+------+
8 rows in set (0.00 sec)

If we add another user back to the table, MySQL apparently places the new row in the slot left empty when Mark was deleted.

mysql> INSERT INTO scores SET Name="Marty", Num=0;
Query OK, 1 row affected (0.00 sec)

mysql> SELECT * FROM scores;
+----------+------+
| Name     | Num  |
+----------+------+
| Phyllis  |  987 |
| Randy    | 1385 |
| Don      |  919 |
| Marty    |    0 |
| Marianne |  567 |
| Bob      |   23 |
| Pete     |  456 |
| Sally    |  333 |
| Richard  |  100 |
+----------+------+
9 rows in set (0.00 sec)
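That slot-filling behavior is a storage detail, and the order in which rows come back from a plain SELECT should never be relied upon. If a particular order matters, it is safer to ask for it explicitly; a small sketch, not part of the original walkthrough:

mysql> SELECT * FROM scores ORDER BY Num DESC;

This lists the highest scores first, regardless of how MySQL stores the rows internally.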

Modifying Table Attributes

Occasionally, it becomes necessary to change the attributes of one of the variables or columns in a table. This is a frequent situation for a variable that might be declared VARCHAR(20), let's say, and then the user wants to add something that might be 25 characters in length. Rather than destroying the entire table and starting from scratch, this modification can be done using the MySQL command ALTER TABLE combined with the MODIFY command. Before we modify a column or a field entry, let's take a look at how the fields are currently defined using the SHOW command:

mysql> SHOW FIELDS FROM scores;
+-------+-------------+------+-----+---------+-------+
| Field | Type        | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
| Name  | varchar(20) | YES  |     | NULL    |       |
| Num   | int(5)      | YES  |     | NULL    |       |
+-------+-------------+------+-----+---------+-------+
2 rows in set (0.00 sec)

Now to change the Name variable from 20 to 25 characters, the command would be:

mysql> ALTER TABLE scores MODIFY Name VARCHAR(25);
Query OK, 9 rows affected (0.02 sec)
Records: 9  Duplicates: 0  Warnings: 0

Let's see how the values have changed:

mysql> SHOW FIELDS FROM scores;
+-------+-------------+------+-----+---------+-------+
| Field | Type        | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
| Name  | varchar(25) | YES  |     | NULL    |       |
| Num   | int(5)      | YES  |     | NULL    |       |
+-------+-------------+------+-----+---------+-------+
2 rows in set (0.00 sec)

There are very sophisticated queries and updates that can be done with MySQL. It is possible to add new columns to existing tables and even merge two databases into one large table. Please check out the documentation at the MySQL website for more information.
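For instance, adding a new column to the existing table might look like the following hypothetical sketch; the column name Level and its default value are illustrative, not from the tutorial:

mysql> ALTER TABLE scores ADD COLUMN Level INT DEFAULT 1;

Existing rows automatically pick up the default value of 1 for the new column.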

MySQL: www.mysql.com
