
A Transaction Processing System (TPS) is a type of information system that collects, stores, modifies and retrieves the data transactions of an enterprise. A transaction is any event that passes the ACID test in which data is generated or modified before storage in an information system.

The success of commercial enterprises depends on the reliable processing of transactions, to ensure that customer orders are met on time and that partners and suppliers are paid and can make payment. The field of transaction processing has therefore become a vital part of effective business management, led by such organisations as the Association for Work Process Improvement and the Transaction Processing Performance Council. Transaction processing systems offer enterprises the means to rapidly process transactions to ensure the smooth flow of data and the progression of processes throughout the enterprise. Typically, a TPS will exhibit the following characteristics:

Rapid Processing
The rapid processing of transactions is vital to the success of any enterprise, now more than ever in the face of advancing technology and customer demand for immediate action. TPS systems are designed to process transactions virtually instantly, to ensure that customer data is available to the processes that require it.

Reliability
Similarly, customers will not tolerate mistakes. TPS systems must be designed to ensure not only that transactions never slip past the net, but that the systems themselves remain operational permanently. TPS systems are therefore designed to incorporate comprehensive safeguards and disaster recovery systems. These measures keep the failure rate well within tolerance levels.

Standardisation
Transactions must be processed in the same way each time to maximise efficiency. To ensure this, TPS interfaces are designed to acquire identical data for each transaction, regardless of the customer.

Controlled Access
Since TPS systems can be such a powerful business tool, access must be restricted to only those employees who require their use. Restricted access to the system ensures that employees who lack the skills and ability to control it cannot influence the transaction process.

Transaction Processing Qualifiers
In order to qualify as a TPS, transactions made by the system must pass the ACID test. The ACID test refers to the following four prerequisites:

Atomicity
Atomicity means that a transaction is either completed in full or not at all. For example, if funds are transferred from one account to another, this only counts as a bona fide transaction if both the withdrawal and the deposit take place. If one account is debited and the other is not credited, it does not qualify as a transaction. TPS systems ensure that transactions take place in their entirety.

Consistency
TPS systems exist within a set of operating rules (or integrity constraints). If an integrity constraint states that all transactions in a database must have a positive value, any transaction with a negative value would be refused.

Isolation
Transactions must appear to take place in isolation. For example, when a fund transfer is made between two accounts, the debiting of one and the crediting of the other must appear to take place simultaneously. The funds cannot be credited to an account before they are debited from the other.

Durability
Once transactions are completed they cannot be undone. To ensure that this remains the case even if the TPS suffers a failure, a log is created to document all completed transactions.

These four conditions ensure that TPS systems carry out their transactions in a methodical, standardised and reliable manner.
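As an illustration only, the following minimal Python sketch shows an atomic, consistency-checked funds transfer, assuming a hypothetical two-row accounts table in SQLite; SQLite's transaction journal supplies the durability described above, and its locking supplies isolation.

```python
import sqlite3

# Hypothetical schema for illustration. The CHECK constraint plays the
# role of the integrity constraint ("balances must not go negative");
# the transaction guarantees atomicity: both the debit and the credit
# happen, or neither does.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY,"
             " balance INTEGER CHECK (balance >= 0))")
conn.execute("INSERT INTO accounts VALUES ('A', 100), ('B', 50)")

def transfer(src, dst, amount):
    try:
        with conn:  # opens a transaction: commit on success, rollback on error
            conn.execute("UPDATE accounts SET balance = balance - ?"
                         " WHERE id = ?", (amount, src))
            conn.execute("UPDATE accounts SET balance = balance + ?"
                         " WHERE id = ?", (amount, dst))
    except sqlite3.IntegrityError:
        # The debit would violate the constraint, so the whole transaction
        # is rolled back: no partial transfer is ever visible.
        print("transfer refused")

transfer("A", "B", 30)    # succeeds: A = 70, B = 80
transfer("A", "B", 999)   # refused: both balances unchanged
print(list(conn.execute("SELECT * FROM accounts")))
```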

Types of Transactions
While the transaction process must be standardised to maximise efficiency, every enterprise requires a tailored transaction process that aligns with its business strategies and processes. For this reason, there are two broad types of transaction:

Batch Processing
Batch processing is a resource-saving transaction type that stores data for processing at pre-defined times. It is useful for enterprises that need to process large amounts of data using limited resources. An example is credit card transactions, which are processed monthly rather than in real time: the transactions need only be processed once a month in order to produce a statement for the customer, so batch processing saves IT resources from having to process each transaction individually.

Real Time Processing

In many circumstances the primary factor is speed. For example, when a bank customer withdraws a sum of money from his or her account it is vital that the transaction be processed and the account balance updated as soon as possible, allowing both the bank and the customer to keep track of funds.

Backup procedures

Since business organizations have become very dependent on TPSs, a breakdown in a TPS may interrupt the business's regular routines and stop its operation for a certain amount of time. In order to prevent data loss and minimize disruptions when a TPS breaks down, a well-designed backup and recovery procedure is put into use. The recovery process can rebuild the system when it goes down.

Recovery process
A TPS may fail for many reasons, including system failure, human error, hardware failure, incorrect or invalid data, computer viruses, software application errors, or natural or man-made disasters. As it is not possible to prevent all TPS failures, a TPS must be able to cope with failure: it must be able to detect and correct errors when they occur. A TPS copes with a system failure by going through a recovery of the database, which involves the backup, the journal, checkpoints and the recovery manager:

Journal: A journal maintains an audit trail of transactions and database changes. Transaction logs and database change logs are used: a transaction log records all the essential data for each transaction, including data values, time of transaction and terminal number, while a database change log contains before and after copies of records that have been modified by transactions.

Checkpoint: The purpose of checkpointing is to provide a snapshot of the data within the database. A checkpoint, in general, is any identifier or other reference that identifies the state of the database at a point in time. Modifications to database pages are performed in memory and are not necessarily written to disk after every update. Therefore, periodically, the database system must perform a checkpoint to write these in-memory updates to the storage disk. Writing these updates to disk creates a point in time from which the database system can apply the changes contained in a transaction log during recovery after an unexpected shutdown or crash. If a checkpoint is interrupted and a recovery is required, then the database system must start recovery from a previous successful checkpoint.

Checkpointing can be either transaction-consistent or non-transaction-consistent (also called fuzzy checkpointing). Transaction-consistent checkpointing produces a persistent database image that is sufficient to recover the database to the state that was externally perceived at the moment the checkpoint was started. Non-transaction-consistent checkpointing results in a persistent database image that is insufficient on its own to recover the database state; to perform the recovery, additional information is needed, typically contained in transaction logs. A transaction-consistent checkpoint refers to a consistent database which does not necessarily include all the latest committed transactions, but in which all modifications made by transactions that were committed when checkpoint creation started are fully present. A non-transaction-consistent checkpoint is not necessarily a consistent database, and cannot be recovered to one without all the log records generated for transactions open at checkpoint time. Depending on the type of database management system implemented, a checkpoint may incorporate storage pages (user data) only, or indexes and storage pages. If no indexes are incorporated into the checkpoint, indexes must be created when the database is restored from the checkpoint image.
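The following toy sketch illustrates the checkpoint idea under stated simplifying assumptions: a dictionary stands in for pages on disk, a list for the transaction log. It shows the mechanism, not how any particular DBMS implements it.

```python
pages_in_memory = {}   # page id -> value (the buffer pool)
pages_on_disk = {}     # simulated persistent storage
log = []               # append-only transaction log

def update(page, value):
    # Modifications happen in memory and are logged; they are not
    # necessarily written to disk after every update.
    pages_in_memory[page] = value
    log.append((page, value))

def checkpoint():
    # Flush the in-memory updates to "disk" and record how far the log
    # has been applied, so recovery can start from here.
    pages_on_disk.update(pages_in_memory)
    pages_on_disk["_checkpoint_lsn"] = len(log)

def recover():
    # After a crash: start from the last successful checkpoint, then
    # replay only the log records written after it.
    lsn = pages_on_disk.get("_checkpoint_lsn", 0)
    state = {k: v for k, v in pages_on_disk.items() if k != "_checkpoint_lsn"}
    for page, value in log[lsn:]:
        state[page] = value
    return state

update("p1", "a"); update("p2", "b")
checkpoint()
update("p1", "c")      # logged and in memory, but not yet on disk
print(recover())       # {'p1': 'c', 'p2': 'b'}
```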

Recovery Manager: A recovery manager is a program which restores the database to a correct condition so that transaction processing can be restarted.

Depending on how the system failed, two different recovery procedures can be used. Generally, the procedure involves restoring data that has been collected from a backup device and then running the transaction processing again. The two types of recovery are backward recovery and forward recovery:

Backward recovery: used to undo unwanted changes to the database. It reverses the changes made by transactions which have been aborted. Because it involves the logic of reprocessing each transaction, it is very time-consuming.

Forward recovery: starts with a backup copy of the database. The transactions are then reprocessed according to the transaction journal entries recorded between the time the backup was made and the present time. It is much faster and more accurate.
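As a rough illustration, the sketch below performs backward recovery using the before-images held in a database change log, as described in the journal section above; the record layout is invented.

```python
database = {"acct:A": 100, "acct:B": 50}
change_log = []   # each record keeps before and after copies, as above

def apply_change(txn_id, key, new_value):
    change_log.append({"txn": txn_id, "key": key,
                       "before": database.get(key), "after": new_value})
    database[key] = new_value

def backward_recover(txn_id):
    # Undo the aborted transaction's changes, most recent first,
    # by restoring each before-image.
    for record in reversed(change_log):
        if record["txn"] == txn_id:
            database[record["key"]] = record["before"]

apply_change("t1", "acct:A", 70)
apply_change("t1", "acct:B", 80)
backward_recover("t1")   # t1 aborts; balances return to 100 and 50
print(database)
```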

Types of backup procedures
There are two main types of backup procedure: grandfather-father-son and partial backups.

Grandfather-father-son: This procedure keeps at least three generations of backup master files: the most recent backup is the son, and the oldest backup is the grandfather. It is commonly used for a batch transaction processing system with magnetic tape. If the system fails during a batch run, the master file is recreated by using the son backup and then restarting the batch. If the son backup fails, is corrupted or destroyed, then the next generation up (the father) is required; likewise, if that fails, the next generation up (the grandfather) is required. Of course, the older the generation, the more the data may be out of date. Organizations can keep up to twenty generations of backup.
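A minimal sketch of the rotation, assuming illustrative file names (master.son.bak and so on) rather than any real backup tool:

```python
import os
import shutil

def rotate_backup(master_file):
    # Shift the generations up (father becomes grandfather, son becomes
    # father), then take a fresh son backup of the current master file.
    if os.path.exists("master.father.bak"):
        shutil.copy("master.father.bak", "master.grandfather.bak")
    if os.path.exists("master.son.bak"):
        shutil.copy("master.son.bak", "master.father.bak")
    shutil.copy(master_file, "master.son.bak")

def restore(master_file):
    # Prefer the most recent generation; fall back to an older one if a
    # copy is missing (or, in practice, corrupted).
    for generation in ("son", "father", "grandfather"):
        backup = f"master.{generation}.bak"
        if os.path.exists(backup):
            shutil.copy(backup, master_file)
            return generation
    raise FileNotFoundError("no usable backup generation")

with open("master.dat", "w") as f:   # toy master file
    f.write("day 1 data")
rotate_backup("master.dat")
print(restore("master.dat"))         # 'son'
```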

Partial backups: These occur when only parts of the master file are backed up. The master file is usually backed up to magnetic tape at regular times; this could be daily, weekly or monthly. Transactions completed since the last backup are stored separately and are called journals, or journal files. If the system fails, the master file can be recreated from the last backup tape together with the journal files.

Updating in a batch
This is used when transactions are recorded on paper (such as bills and invoices) or when they are stored on magnetic tape. Transactions are collected and updated as a batch when it is convenient or economical to process them. Historically, this was the most common method, as the information technology to allow real-time processing did not exist. The two stages in batch processing are:

1. Collecting and storing the transaction data in a transaction file; this involves sorting the data into sequential order.
2. Processing the data by updating the master file, which can be difficult; this may involve data additions, updates and deletions that may need to happen in a certain order. If an error occurs, the entire batch fails (a sketch follows this list).
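A minimal sketch of the two stages, with an invented record layout; a real tape-based run is a sequential merge of two sorted files, which the single pass below only approximates.

```python
# Master file records: (key, name, balance), kept in sequential order.
master = [(1, "Ann", 100), (2, "Bob", 50), (4, "Dan", 75)]
# Transaction file records: (key, operation, value).
transactions = [(4, "update", 90), (2, "delete", None), (3, "add", 60)]

def batch_update(master, transactions):
    # Stage 1: sort the collected transactions into key order.
    txn_by_key = {k: (op, val) for k, op, val in sorted(transactions)}
    # Stage 2: one sequential pass over the master file, applying
    # additions, updates and deletions; any error fails the whole batch.
    updated = []
    for key, name, balance in master:
        op, val = txn_by_key.pop(key, (None, None))
        if op == "delete":
            continue
        if op == "update":
            balance = val
        updated.append((key, name, balance))
    for key, (op, val) in sorted(txn_by_key.items()):
        if op == "add":
            updated.append((key, f"new-{key}", val))  # placeholder name
    return sorted(updated)

print(batch_update(master, transactions))
# [(1, 'Ann', 100), (3, 'new-3', 60), (4, 'Dan', 90)]
```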

Updating in batch requires sequential access: since it uses a magnetic tape, this is the only way to access the data. A batch run starts at the beginning of the tape and reads it in the order in which it was stored, so it is very time-consuming to locate specific transactions. The information technology used includes a secondary storage medium which can store large quantities of data inexpensively (hence the common choice of magnetic tape). The software used to collect the data does not have to be online; it does not even need a user interface.

Updating in real-time
This is the immediate processing of data. It provides instant confirmation of a transaction and can involve a large number of users simultaneously performing transactions that change data. Because of advances in technology (such as the increase in the speed of data transmission and larger bandwidth), real-time updating is now possible. A real-time update involves sending the transaction data to an online database in a master file. The person providing the information is usually able to help with error correction and receives confirmation of the transaction's completion.

Updating in real-time uses direct access of data: data are accessed without reading through previous data items. The storage device stores data in a particular location based on a mathematical procedure; this is then recalculated to find an approximate location of the data, and if the data are not found at this location the search continues through successive locations until they are found. The information technology used could be a secondary storage medium that can store large amounts of data and provide quick access (hence the common choice of magnetic disk). It requires a user-friendly interface, as rapid response time is important.
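The procedure described is essentially hashing with probing. Below is a minimal sketch with an invented slot count and record format; real systems use far more sophisticated file organizations.

```python
SLOTS = 8
store = [None] * SLOTS   # simulated direct-access storage locations

def put(key, record):
    # The mathematical procedure: hash the key to an approximate location.
    slot = hash(key) % SLOTS
    # If that location is taken, search successive locations (assumes
    # the table is never completely full).
    while store[slot] is not None and store[slot][0] != key:
        slot = (slot + 1) % SLOTS
    store[slot] = (key, record)

def get(key):
    slot = hash(key) % SLOTS
    while store[slot] is not None:
        if store[slot][0] == key:
            return store[slot][1]
        slot = (slot + 1) % SLOTS
    return None

put("acct-1001", {"balance": 250})
put("acct-1002", {"balance": 90})
print(get("acct-1002"))   # found directly, without a sequential scan
```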

Reservation Systems
Reservation systems are used for any type of business where a service or a product is set aside for a customer to use at a future time.

Marketing information system

Definition
A marketing information system is the set of procedures and practices employed in analyzing and assessing marketing information, gathered continuously from sources inside and outside of a firm. Timely marketing information provides the basis for decisions such as product development or improvement, pricing, packaging, distribution, media selection, and promotion. It covers the processes associated with collecting, analyzing, and reporting marketing research information. The system used may be as simple as a manually tabulated consumer survey or as complex as a computer system for tracking the distribution and redemption of cents-off coupons: where the consumer got the coupon, where he redeemed it, what he purchased with it and in what size or quantity, and how the sales volume for each product was affected in each area or store where coupons were available. Marketing information provides input to marketing decisions including product improvements, price and packaging changes, copywriting, media buying, distribution, and so forth. Accounting systems are a part of marketing information systems, providing product sales and profit information.

A marketing information system can be defined as 'People, equipment and procedures to gather, sort, analyze, evaluate and distribute needed, timely and accurate information to marketing decision makers' (Kotler and Armstrong, 2008). A marketing information system (MIS) consists of people, equipment and procedures to gather, sort, analyze, evaluate and distribute needed, timely and accurate information to marketing decision makers. The MIS begins and ends with marketing managers. First, it interacts with these managers to assess their information needs. Next, it develops the needed information from internal company records, marketing intelligence activities and the marketing research process. Information analysis processes the information to make it more useful. Finally, the MIS distributes information to managers in the right form at the right time to help them in marketing planning, implementation and control.

DEVELOPING INFORMATION
The information needed by marketing managers comes from internal company records, marketing intelligence and marketing research. The information analysis system then processes this information to make it more useful for managers.

Internal Records
Information gathered from sources within the company is used to evaluate marketing performance and to detect marketing problems and opportunities. Most marketing managers use internal records and reports regularly, especially for making day-to-day planning, implementation and control decisions.

Example: Office World offers shoppers a free membership card when they make their first purchase at the store. The card entitles shoppers to discounts on selected items, but it also provides valuable information to the chain. Since Office World encourages customers to use their card with each purchase, it can track what customers buy, where and when. Using this information, it can track the effectiveness of promotions, trace customers who have defected to other stores and keep in touch with them if they relocate.

Information from internal records is usually quicker and cheaper to get than information from other sources, but it also presents some problems. Because internal information was collected for other purposes, it may be incomplete or in the wrong form for making marketing decisions. For example, accounting department sales and cost data used for preparing financial statements need adapting for use in evaluating product, sales force or channel performance.

Marketing Intelligence
Marketing intelligence is everyday information about developments in the changing marketing environment that helps managers prepare marketing plans. The marketing intelligence system determines the intelligence needed, collects it by searching the environment and delivers it to the marketing managers who need it. Marketing intelligence comes from many sources. Much intelligence is from the company's personnel: executives, engineers and scientists, purchasing agents and the sales force. But company people are often busy and fail to pass on important information. The company must 'sell' its people on their importance as intelligence gatherers, train them to spot new developments and urge them to report intelligence back to the company. The company must also persuade suppliers, resellers and customers to pass along important intelligence. Some information on competitors comes from what they say about themselves in annual reports, speeches, press releases and advertisements. The company can also learn about competitors from what others say about them in business publications and at trade shows. Or the company can watch what competitors do: buying and analyzing competitors' products, monitoring their sales and checking for new patents. Companies also buy intelligence information from outside suppliers. Some companies set up an office to collect and circulate marketing intelligence. The staff scan relevant publications, summarize important news and send news bulletins to marketing managers. They develop a file of intelligence information and help managers evaluate new information. These services greatly improve the quality of information available to marketing managers. The methods used to gather competitive information range from the ridiculous to the illegal. Managers routinely shred documents because wastepaper baskets can be an information source.

Components of a marketing information system
A marketing information system (MIS) is intended to bring together disparate items of data into a coherent body of information. An MIS is, as will shortly be seen, more than raw data or information suitable for the purposes of decision making. An MIS also provides methods for interpreting the information it holds.
Moreover, as Kotler's definition says, an MIS is more than a system of data collection or a set of information technologies: "A marketing information system is a continuing and interacting structure of people, equipment and procedures to gather, sort, analyse, evaluate, and distribute pertinent, timely and accurate information for use by marketing decision makers to improve their marketing planning, implementation, and control".

Figure 9.1 illustrates the major components of an MIS, the environmental factors monitored by the system and the types of marketing decision which the MIS seeks to underpin.

Figure 9.1 The marketing information system and its subsystems

The explanation of this model of an MIS begins with a description of each of its four main constituent parts: the internal reporting system, the marketing research system, the marketing intelligence system and marketing models. It is suggested that whilst MISs vary in their degree of sophistication - with many in the industrialised countries being computerised and few in the developing countries being so - a fully fledged MIS should have all these components, whatever the methods (and technologies) used to collect, store, retrieve and process the data.

Internal reporting systems: All enterprises which have been in operation for any period of time have a wealth of information. However, this information often remains under-utilised because it is compartmentalised, either in the form of an individual entrepreneur or in the functional departments of larger businesses. That is, information is usually categorised according to its nature, so that there are, for example, financial, production, manpower, marketing, stockholding and logistical data. Often the entrepreneur, or the various personnel working in the functional departments holding these pieces of data, do not see how the information could help decision makers in other functional areas. Similarly, decision makers can fail to appreciate how information from other functional areas might help them and therefore do not request it. The internal records that are of immediate value to marketing decisions are orders received, stockholdings and sales invoices. These are but a few of the internal records that can be used by marketing managers, but even this small set of records is capable of generating a great deal of information.

By comparing orders received with invoices an enterprise can establish the extent to which it is providing an acceptable level of customer service. In the same way, comparing stockholding records with orders received helps an enterprise ascertain whether its stocks are in line with current demand patterns.

Marketing research systems: The general topic of marketing research has been the prime subject of this textbook and only a little more needs to be added here. Marketing research is a proactive search for information. That is, the enterprise which commissions these studies does so to solve a perceived marketing problem. In many cases, data is collected in a purposeful way to address a well-defined problem (or a problem which can be defined and solved within the course of the study). The other form of marketing research centres not on a specific marketing problem but is an attempt to continuously monitor the marketing environment. These monitoring or tracking exercises are continuous marketing research studies, often involving panels of farmers, consumers or distributors from which the same data is collected at regular intervals. Whilst the ad hoc study and continuous marketing research differ in their orientation, both are proactive.

Marketing intelligence systems: Whereas marketing research is focused, market intelligence is not. A marketing intelligence system is a set of procedures and data sources used by marketing managers to sift information from the environment that they can use in their decision making. This scanning of the economic and business environment can be undertaken in a variety of ways, including:

Unfocused scanning: The manager, by virtue of what he/she reads, hears and watches, exposes him/herself to information that may prove useful. Whilst the behaviour is unfocused and the manager has no specific purpose in mind, it is not unintentional.

Semi-focused scanning: Again, the manager is not actively searching for particular pieces of information, but does narrow the range of media that is scanned. For instance, the manager may focus more on economic and business publications, broadcasts etc. and pay less attention to political, scientific or technological media.

Informal search: This describes the situation where a fairly limited and unstructured attempt is made to obtain information for a specific purpose. For example, the marketing manager of a firm considering entering the business of importing frozen fish from a neighbouring country may make informal inquiries as to prices and demand levels of frozen and fresh fish. There would be little structure to this search, with the manager making inquiries with traders he/she happens to encounter as well as with other ad hoc contacts in ministries, international aid agencies, trade associations, importers/exporters etc.

Formal search: This is a purposeful search for information, conducted in some systematic way. The information will be required to address a specific issue. Whilst this sort of activity may seem to share the characteristics of marketing research, it is carried out by the manager him/herself rather than a professional researcher. Moreover, the search is likely to be narrower in scope and far less intensive than marketing research.

Marketing intelligence is the province of entrepreneurs and senior managers within an agribusiness. It involves them in scanning newspapers, trade magazines, business journals and reports, economic forecasts and other media. In addition it involves management in talking to producers, suppliers and customers, as well as to competitors. Nonetheless, it is a largely informal process of observing and conversing. Some enterprises will approach marketing intelligence gathering in a more deliberate fashion and will train their sales force, after-sales personnel and district/area managers to take cognisance of competitors' actions, customer complaints and requests, and distributor problems. Enterprises with vision will also encourage intermediaries, such as collectors, retailers, traders and other middlemen, to be proactive in conveying market intelligence back to them.

Marketing models: Within the MIS there has to be the means of interpreting information in order to give direction to decisions. These models may or may not be computerised. Typical tools are:

Analysis of Variance (ANOVA) models

These and similar mathematical, statistical, econometric and financial models are the analytical subsystem of the MIS. A relatively modest investment in a desktop computer is enough to allow an enterprise to automate the analysis of its data. Some of the models used are stochastic, i.e. those containing a probabilistic element, whereas others are deterministic models where chance plays no part. Brand switching models are stochastic, since they express brand choices in probabilities, whereas linear programming is deterministic in that the relationships between variables are expressed in exact mathematical terms.
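As a small illustration of a stochastic model, the sketch below projects market shares with a brand-switching (Markov) matrix; all probabilities are invented.

```python
# P(next brand | current brand): brand choice expressed as probabilities.
switching = {
    "A": {"A": 0.80, "B": 0.20},
    "B": {"A": 0.30, "B": 0.70},
}
share = {"A": 0.50, "B": 0.50}   # current market shares

for period in range(1, 4):       # project shares a few periods ahead
    share = {
        brand: sum(share[prev] * switching[prev][brand] for prev in share)
        for brand in share
    }
    print(period, {b: round(s, 3) for b, s in share.items()})
# Period 1: A = 0.55, B = 0.45; the shares converge toward a steady state.
```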

Human Resource Information System (HRIS)
The Human Resource Information System (HRIS) is a software or online solution for the data entry, data tracking, and data information needs of the Human Resources, payroll, management, and accounting functions within a business. Normally packaged as a database, hundreds of companies sell some form of HRIS and every HRIS has different capabilities. Pick your HRIS carefully based on the capabilities you need in your company. Typically, the better Human Resource Information Systems provide, overall:

- Management of all employee information.
- Reporting and analysis of employee information.
- Company-related documents such as employee handbooks, emergency evacuation procedures, and safety guidelines.
- Benefits administration including enrollment, status changes, and personal information updating.
- Complete integration with payroll and other company financial software and accounting systems.
- Applicant tracking and resume management.

The HRIS that most effectively serves companies tracks:


- attendance and PTO use,
- pay raises and history,
- pay grades and positions held,
- performance development plans,
- training received,
- disciplinary action received,
- personal employee information,
- and occasionally, management and key employee succession plans, high-potential employee identification, and applicant tracking, interviewing, and selection.

An effective HRIS provides information on just about anything the company needs to track and analyze about employees, former employees, and applicants. Your company will need to select a Human Resources Information System and customize it to meet your needs. With an appropriate HRIS, Human Resources staff enable employees to do their own benefits updates and address changes, thus freeing HR staff for more strategic functions. Additionally, data necessary for employee management, knowledge development, career growth and development, and equal treatment is facilitated. Finally, managers can access the information they need to legally, ethically, and effectively support the success of their reporting employees.

Purpose
The function of Human Resources departments is generally administrative and common to all organizations. Organizations may have formalized selection, evaluation, and payroll processes. Efficient and effective management of "human capital" has progressed to an increasingly imperative and complex process. The HR function consists of tracking existing employee data, which traditionally includes personal histories, skills, capabilities, accomplishments and salary. To reduce the manual workload of these administrative activities, organizations began to electronically automate many of these processes by introducing specialized Human Resource Management Systems (HRMS). HR executives rely on internal or external IT professionals to develop and maintain an integrated HRMS. Before the client-server architecture evolved in the late 1980s, many HR automation processes were relegated to mainframe computers that could handle large amounts of data transactions. In consequence of the high capital investment necessary to buy or program proprietary software, these internally developed HRMS were limited to organizations that possessed a large amount of capital. The advent of client-server, Application Service Provider, and Software as a Service (SaaS) Human Resource Management Systems enabled increasingly higher administrative control of such systems. Currently, Human Resource Management Systems encompass:

1. Payroll
2. Work Time
3. Benefits Administration
4. HR Management Information System
5. Recruiting
6. Training/Learning Management System
7. Performance Record
8. Employee Self-Service

The payroll module automates the pay process by gathering data on employee time and attendance, calculating various deductions and taxes, and generating periodic pay cheques and employee tax reports. Data is generally fed from the human resources and time-keeping modules to support automatic deposit and manual cheque-writing capabilities. This module can encompass all employee-related transactions as well as integrate with existing financial management systems.
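A minimal sketch of the kind of calculation this module automates; the overtime rule, tax rate and benefit deduction below are invented placeholders, not real payroll rules.

```python
def gross_pay(hours, hourly_rate, overtime_threshold=40, overtime_factor=1.5):
    # Hours would normally be fed from the time-keeping module.
    regular = min(hours, overtime_threshold)
    overtime = max(hours - overtime_threshold, 0)
    return regular * hourly_rate + overtime * hourly_rate * overtime_factor

def net_pay(hours, hourly_rate, tax_rate=0.20, benefit_deduction=50.0):
    # Deductions and taxes are applied to the gross figure.
    gross = gross_pay(hours, hourly_rate)
    return round(gross - gross * tax_rate - benefit_deduction, 2)

print(net_pay(hours=44, hourly_rate=20.0))   # gross 920.0 -> net 686.0
```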

The work time module gathers standardized time and work-related effort data. The most advanced modules provide broad flexibility in data collection methods, labor distribution capabilities and data analysis features. Cost analysis and efficiency metrics are the primary functions.

The benefits administration module provides a system for organizations to administer and track employee participation in benefits programs. These typically encompass insurance, compensation, profit sharing and retirement.

The HR management module is a component covering many other HR aspects from application to retirement. The system records basic demographic and address data, selection, training and development, capabilities and skills management, compensation planning records and other related activities. Leading-edge systems provide the ability to "read" applications and enter relevant data into applicable database fields, notify employers and provide position management and position control. The human resource management function involves the recruitment, placement, evaluation, compensation and development of the employees of an organization. Initially, businesses used computer-based information systems to:

- produce pay checks and payroll reports;
- maintain personnel records;
- pursue Talent Management.

Online recruiting has become one of the primary methods employed by HR departments to garner potential candidates for available positions within an organization. Talent Management systems typically encompass:

- analyzing personnel usage within an organization;
- identifying potential applicants;
- recruiting through company-facing listings;
- recruiting through online recruiting sites or publications that market to both recruiters and applicants.

The significant cost incurred in maintaining an organized recruitment effort, cross-posting within and across general or industry-specific job boards and maintaining a competitive exposure of availabilities has given rise to the development of a dedicated Applicant Tracking System, or 'ATS', module.

The training module provides a system for organizations to administer and track employee training and development efforts. The system, normally called a Learning Management System (LMS) if a stand-alone product, allows HR to track the education, qualifications and skills of the employees, as well as outlining what training courses, books, CDs, web-based learning or materials are available to develop which skills. Courses can then be offered in date-specific sessions, with delegates and training resources being mapped and managed within the same system. Sophisticated LMSs allow managers to approve training, budgets and calendars alongside performance management and appraisal metrics.

The Employee Self-Service module allows employees to query HR-related data and perform some HR transactions over the system. Employees may query their attendance record from the system without asking HR personnel for the information. The module also lets supervisors approve overtime requests from their subordinates through the system without overloading the task on the HR department.

Many organizations have gone beyond the traditional functions and developed human resource management information systems which support recruitment, selection, hiring, job placement, performance appraisals, employee benefit analysis, health, safety and security, while others integrate an outsourced Applicant Tracking System that encompasses a subset of the above.

SUPPLY CHAIN MANAGEMENT
Supply chain management (SCM) is the oversight of materials, information, and finances as they move in a process from supplier to manufacturer to wholesaler to retailer to consumer. Supply chain management involves coordinating and integrating these flows both within and among companies. It is said that the ultimate goal of any effective supply chain management system is to reduce inventory (with the assumption that products are available when needed). As a solution for successful supply chain management, sophisticated software systems with Web interfaces are competing with Web-based application service providers (ASPs) who promise to provide part or all of the SCM service for companies who rent their service. Supply chain management flows can be divided into three main flows:

- The product flow
- The information flow
- The finances flow

The product flow includes the movement of goods from a supplier to a customer, as well as any customer returns or service needs. The information flow involves transmitting orders and updating the status of delivery. The financial flow consists of credit terms, payment schedules, and consignment and title ownership arrangements.

There are two main types of SCM software: planning applications and execution applications. Planning applications use advanced algorithms to determine the best way to fill an order; execution applications track the physical status of goods, the management of materials, and financial information involving all parties. Some SCM applications are based on open data models that support the sharing of data both inside and outside the enterprise (this is called the extended enterprise, and includes key suppliers, manufacturers, and end customers of a specific company). This shared data may reside in diverse database systems, or data warehouses, at several different sites and companies.
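As an illustration of the planning side, the sketch below fills an order at least cost across invented supply points using a simple greedy rule; real planning applications use far richer algorithms and data.

```python
supply_points = [
    {"name": "plant-east", "stock": 60, "unit_cost": 4.0},
    {"name": "warehouse-west", "stock": 100, "unit_cost": 5.5},
    {"name": "partner-3pl", "stock": 40, "unit_cost": 7.0},
]

def plan_order(quantity, supply_points):
    # Greedy rule: draw from the cheapest supply point first.
    allocation, remaining = [], quantity
    for point in sorted(supply_points, key=lambda p: p["unit_cost"]):
        if remaining == 0:
            break
        take = min(point["stock"], remaining)
        if take:
            allocation.append((point["name"], take))
            remaining -= take
    if remaining:
        raise ValueError("order cannot be filled from available stock")
    return allocation

print(plan_order(120, supply_points))
# [('plant-east', 60), ('warehouse-west', 60)]
```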

By sharing this data "upstream" (with a company's suppliers) and "downstream" (with a company's clients), SCM applications have the potential to improve the time-to-market of products, reduce costs, and allow all parties in the supply chain to better manage current resources and plan for future needs. Increasing numbers of companies are turning to Web sites and Web-based applications as part of the SCM solution. A number of major Web sites offer e-procurement marketplaces where manufacturers can trade and even make auction bids with suppliers.

CUSTOMER RELATIONSHIP MANAGEMENT
Customer relationship management (CRM) is a widely implemented strategy for managing a company's interactions with customers, clients and sales prospects. It involves using technology to organize, automate, and synchronize business processes - principally sales activities, but also those for marketing, customer service, and technical support. The overall goals are to find, attract, and win new clients, nurture and retain those the company already has, entice former clients back into the fold, and reduce the costs of marketing and client service.[1] Customer relationship management describes a company-wide business strategy including customer-interface departments as well as other departments. The three phases in which CRM supports the relationship between a business and its customers are:

- Acquire: CRM can help a business acquire new customers through contact management, selling, and fulfillment.[3]
- Enhance: web-enabled CRM combined with customer service tools offers customers service from a team of sales and service specialists, which offers customers the convenience of one-stop shopping.[3]
- Retain: CRM software and databases enable a business to identify and reward its loyal customers and further develop its targeted marketing and relationship marketing initiatives (a simple illustration follows this list).[4]
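As a small illustration of the retain phase, the sketch below flags loyal customers with simple recency and frequency thresholds; the purchase records and thresholds are invented.

```python
from datetime import date

purchases = {
    "cust-1": [date(2024, 1, 5), date(2024, 2, 9), date(2024, 3, 1)],
    "cust-2": [date(2023, 6, 1)],
}

def loyal_customers(purchases, today, min_orders=3, max_days_since=60):
    # A customer is "loyal" here if they order often and recently.
    loyal = []
    for customer, dates in purchases.items():
        days_since_last = (today - max(dates)).days
        if len(dates) >= min_orders and days_since_last <= max_days_since:
            loyal.append(customer)
    return loyal

print(loyal_customers(purchases, today=date(2024, 3, 15)))   # ['cust-1']
```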

Types/variations

Sales force automation
Sales force automation (SFA) involves using software to streamline all phases of the sales process, minimizing the time that sales representatives need to spend on each phase. This allows sales representatives to pursue more clients in a shorter amount of time than would otherwise be possible. At the heart of SFA is a contact management system for tracking and recording every stage in the sales process for each prospective client, from initial contact to final disposition. Many SFA applications also include insights into opportunities, territories, sales forecasts and workflow automation, quote generation, and product knowledge. Modules for Web 2.0 e-commerce and pricing are new, emerging interests in SFA.

Marketing
CRM systems for marketing help the enterprise identify and target potential clients and generate leads for the sales team. A key marketing capability is tracking and measuring multichannel campaigns, including email, search, social media, telephone and direct mail. Metrics monitored include clicks, responses, leads, deals, and revenue. This has been superseded by marketing automation and Prospect Relationship Management (PRM) solutions which track customer behaviour and nurture prospects from first contact to sale, often cutting out the active sales process altogether.

Customer service and support
Recognizing that service is an important factor in attracting and retaining customers, organizations are increasingly turning to technology to help them improve their clients' experience while aiming to increase efficiency and minimize costs. Even so, a 2009 study revealed that only 39% of corporate executives believe their employees have the right tools and authority to solve client problems. The core of these applications has been, and still is, comprehensive call center solutions, including such features as intelligent call routing, computer telephony integration (CTI), and escalation capabilities.

Analytics
Relevant analytics capabilities are often interwoven into applications for sales, marketing, and service. These features can be complemented and augmented with links to separate, purpose-built applications for analytics and business intelligence. Sales analytics let companies monitor and understand client actions and preferences through sales forecasting and data quality. Marketing applications generally come with predictive analytics to improve segmentation and targeting, and features for measuring the effectiveness of online, offline, and search marketing campaigns. Web analytics have evolved significantly from their starting point of merely tracking mouse clicks on Web sites. By evaluating buy signals, marketers can see which prospects are most likely to transact and also identify those who are bogged down in a sales process and need assistance. Marketing and finance personnel also use analytics to assess the value of multi-faceted programs as a whole. These types of analytics are increasing in popularity as companies demand greater visibility into the performance of call centers and other service and support channels,[6] in order to correct problems before they affect satisfaction levels. Support-focused applications typically include dashboards similar to those for sales, plus capabilities to measure and analyze response times, service quality, agent performance, and the frequency of various issues.

Integrated/Collaborative
Departments within enterprises - especially large enterprises - tend to function with little collaboration. More recently, the development and adoption of these tools and services have fostered greater fluidity and cooperation among sales, service, and marketing. This finds expression in the concept of collaborative systems, which use technology to build bridges between departments. For example, feedback from a technical support center can enlighten marketers about specific services and product features clients are asking for. Reps, in their turn, want to be able to pursue these opportunities without the burden of re-entering records and contact data into a separate SFA system. Owing to these factors, many of the top-rated and most popular products come as integrated suites.

Small business
For small businesses, basic client service can be accomplished by a contact manager system: an integrated solution that lets organizations and individuals efficiently track and record interactions, including emails, documents, jobs, faxes, scheduling, and more. These tools usually focus on accounts rather than on individual contacts. They also generally include opportunity insight for tracking sales pipelines plus added functionality for marketing and service. As with larger enterprises, small businesses are finding value in online solutions, especially for mobile and telecommuting workers.

Social media
Social media sites like Twitter, LinkedIn and Facebook are amplifying the voice of people in the marketplace and are having profound and far-reaching effects on the ways in which people buy. Customers can now research companies online and then ask for recommendations through social media channels, making their buying decision without contacting the company.

People also use social media to share opinions and experiences on companies, products and services. As social media is not as widely moderated or censored as mainstream media, individuals can say anything they want about a company or brand, positive or negative. Increasingly, companies are looking to gain access to these conversations and take part in the dialogue. More than a few systems are now integrating with social networking sites. Social media promoters cite a number of business advantages, such as using online communities as a source of high-quality leads and a vehicle for crowdsourcing solutions to client-support problems. Companies can also leverage client-stated habits and preferences to "hypertarget" their sales and marketing communications.[9]

Some analysts take the view that business-to-business marketers should proceed cautiously when weaving social media into their business processes. These observers recommend careful market research to determine if and where the phenomenon can provide measurable benefits for client interactions, sales and support.[10] It is also argued that people feel their interactions are peer-to-peer between them and their contacts, and resent company involvement, sometimes responding with negatives about that company.

Non-profit and membership-based
Systems for non-profit and membership-based organizations help track constituents and their involvement in the organization. Capabilities typically include tracking the following: fund-raising, demographics, membership levels, membership directories, volunteering and communications with individuals. Many include tools for identifying potential donors based on previous donations and participation. In light of the growth of social networking tools, there may be some overlap between social/community-driven tools and non-profit/membership tools.

Strategy
For larger-scale enterprises, a complete and detailed plan is required to obtain the funding, resources, and company-wide support that can make the initiative of choosing and implementing a system successful. Benefits must be defined, risks assessed, and cost quantified in three general areas:

Processes: Though these systems have many technological components, business processes lie at their core. CRM can be seen as a more client-centric way of doing business, enabled by technology that consolidates and intelligently distributes pertinent information about clients, sales, marketing effectiveness, responsiveness, and market trends. Therefore, a company must analyze its business workflows and processes before choosing a technology platform; some will likely need reengineering to better serve the overall goal of winning and satisfying clients. Moreover, planners need to determine the types of client information that are most relevant, and how best to employ them.

People: For an initiative to be effective, an organization must convince its staff that the new technology and workflows will benefit employees as well as clients. Senior executives need to be strong and visible advocates who can clearly state and support the case for change. Collaboration, teamwork, and two-way communication should be encouraged across hierarchical boundaries, especially with respect to process improvement.

Technology: In evaluating technology, key factors include alignment with the company's business process strategy and goals, including the ability to deliver the right data to the right employees, and sufficient ease of adoption and use. Platform selection is best undertaken by a carefully chosen group of executives who understand the business processes to be automated as well as the software issues. Depending upon the size of the company and the breadth of data, choosing an application can take anywhere from a few weeks to a year or more.

Implementation

Implementation issues
Increases in revenue, higher rates of client satisfaction, and significant savings in operating costs are some of the benefits to an enterprise. Proponents emphasize that technology should be implemented only in the context of careful strategic and operational planning. Implementations almost invariably fall short when one or more facets of this prescription are ignored:

Poor planning: Initiatives can easily fail when efforts are limited to choosing and deploying software, without an accompanying rationale, context, and support for the workforce. In other instances, enterprises simply automate flawed client-facing processes rather than redesign them according to best practices.

Poor integration: For many companies, integrations are piecemeal initiatives that address a glaring need: improving a particular client-facing process or two, or automating a favored sales or client support channel. Such point solutions offer little or no integration or alignment with a company's overall strategy. They offer a less than complete client view and often lead to unsatisfactory user experiences.

Toward a solution: overcoming siloed thinking. Experts advise organizations to recognize the immense value of integrating their client-facing operations. In this view, internally focused, department-centric views should be discarded in favor of reorienting processes toward information-sharing across marketing, sales, and service. For example, sales representatives need to know about current issues and relevant marketing promotions before attempting to cross-sell to a specific client. Marketing staff should be able to leverage client information from sales and service to better target campaigns and offers. And support agents require quick and complete access to a client's sales and service history.

Specialists offer these recommendations for boosting adoption rates and coaxing users to blend these tools into their daily workflow:

- Choose a system that is easy to use: not all solutions are created equal; some vendors offer applications that are more user-friendly, a factor that should be as important to the decision as functionality.
- Choose appropriate capabilities: employees need to know that the time they invest in learning and using the new system will not be wasted, and indeed that it will yield personal advantages; otherwise, they will ignore or circumvent the system.
- Provide training: changing the way people work is no small task; to be successful, some familiarization training and help-desk support are usually required, even with today's more usable systems.

- Lead by example: upper management must use the new application themselves, showing employees that the top leaders fully support it; otherwise the initiative risks a greatly reduced rate of adoption by employees, skewing its ultimate course toward failure.

Privacy and data security
One of the primary functions of these tools is to collect information about clients, so a company must consider clients' desire for privacy and data security, as well as legislative and cultural norms. Some clients prefer assurances that their data will not be shared with third parties without their prior consent and that safeguards are in place to prevent illegal access by third parties.

Related trends
Many CRM vendors offer Web-based tools (cloud computing) and software as a service (SaaS), which are accessed via a secure Internet connection and displayed in a Web browser. These applications are sold as subscriptions, with customers not needing to invest in purchasing and maintaining IT hardware, and subscription fees a fraction of the cost of purchasing software outright.

The era of the "social customer" refers to the use of social media (Twitter, Facebook, LinkedIn, Yelp, customer reviews on Amazon, etc.) by customers in ways that allow other potential customers to glimpse the real-world experience of current customers with the seller's products and services. This shift increases the power of customers to make purchase decisions that are informed by parties sometimes outside the control of the seller or the seller's network. In response, CRM philosophy and strategy have shifted to encompass social networks and user communities, podcasting, and personalization in addition to internally generated marketing, advertising and webpage design. With the spread of self-initiated customer reviews, the user experience of a product or service requires increased attention to design and simplicity, as customer expectations have risen. CRM as a philosophy and strategy is growing to encompass these broader components of the customer relationship, so that businesses may anticipate and innovate to better serve customers; this broader approach is referred to as "Social CRM".

Another related development is Vendor Relationship Management (VRM), the customer-side counterpart of CRM: tools and services that equip customers to be both independent of vendors and better able to engage with them. VRM development has grown out of efforts by ProjectVRM at Harvard's Berkman Center for Internet & Society and Identity Commons' Internet Identity Workshops, as well as by a growing number of startups and established companies. VRM was the subject of a cover story in the May 2010 issue of CRM Magazine.[21]

In a 2001 research note, META Group (now Gartner) analyst Doug Laney first proposed, defined and coined the term Extended Relationship Management (XRM). He defined XRM as the principle and practice of applying CRM disciplines and technologies to other core enterprise constituents, primarily partners, employees and suppliers... as well as other secondary allies including government, press, and industry consortia.

e-CRM
Electronic CRM (eCRM) concerns all forms of managing relationships with customers through the use of Information Technology (IT). eCRM means enterprises using IT to integrate internal organization resources and external marketing strategies to understand and fulfill their customers' needs. Compared with traditional CRM, the integrated information that eCRM provides for intraorganizational collaboration makes communication with customers more efficient.

From Relationship Marketing to Customer Relationship Marketing
The concept of relationship marketing was first coined by Leonard Berry in 1983. He considered it to consist of attracting, maintaining and enhancing customer relationships within organizations. In the years that followed, companies were engaging more and more in a meaningful dialogue with individual customers. In doing so, new organizational forms as well as technologies were used, eventually resulting in what we know as Customer Relationship Management (CRM). The main difference between RM and CRM is that the first does not acknowledge the use of technology, whereas the latter uses Information Technology (IT) in implementing RM strategies.

The essence of CRM
The exact meaning of CRM is still the subject of heavy discussion.[4] However, the overall goal can be seen as effectively managing differentiated relationships with all customers and communicating with them on an individual basis.[5] The underlying thought is that companies realize they can supercharge profits by acknowledging that different groups of customers vary widely in their behavior, desires, and responsiveness to marketing.[6] Loyal customers can not only give companies sustained revenue but also advertise for new marketers. To reinforce the reliance of customers and create additional customer sources, firms utilize CRM to maintain the relationship in the two general categories B2B (business-to-business) and B2C (business-to-customer or business-to-consumer). Because needs and behaviors differ between B2B and B2C, the implementation of CRM should come from the respective viewpoints.

Differences between CRM and eCRM
Major differences between CRM and eCRM:

Customer contacts

CRM: contact with customers is made through the retail store, phone, and fax. eCRM: all of the traditional methods are used, in addition to Internet, email, wireless, and PDA technologies.

System Interface

CRM: implements the use of ERP systems; the emphasis is on the back end. eCRM: geared more toward the front end, which interacts with the back end through the use of ERP systems, data warehouses, and data marts.

System overhead (client computers)

CRM: the client must download various applications to view the web-enabled applications, and they would have to be rewritten for each platform. eCRM: does not have these requirements, because the client uses the web browser.

Customization and Personalization of Information


CRM: views differ based on the audience, and personalized views are not available; individual personalization requires program changes. eCRM: personalized individual views based on purchase history and preferences; each individual has the ability to customize their view.

System Focus

CRM: system (created for internal use) designed based on job functions and products; web applications designed for a single department or business unit. eCRM: system (created for external use) designed based on customer needs; web applications designed for enterprise-wide use.

System Maintenance and Modification


CRM: implementation takes more time, and maintenance is more expensive, because the system exists at different locations and on various servers. eCRM: reduced time and cost; implementation and maintenance can take place at one location and on one server.

As the internet becomes more and more important in business life, many companies consider it an opportunity to reduce customer-service costs, tighten customer relationships and, most importantly, further personalize marketing messages and enable mass customization. eCRM is being adopted by companies because it increases customer loyalty and customer retention by improving customer satisfaction, one of the objectives of eCRM. E-loyalty results in long-term profits for online retailers because they incur lower costs in recruiting new customers, in addition to an increase in customer retention.[10] Together with the creation of Sales Force Automation (SFA), where electronic methods were used to gather data and analyze customer information, the trend of the emerging Internet can be seen as the foundation of what we know as eCRM today. Implementing an eCRM process involves a three-step life cycle: 1. Data Collection: gathering customers' preference information actively (e.g., questionnaire answers) and passively (e.g., browsing records) via website, email and questionnaires. 2. Data Aggregation: filtering and analyzing the data for the firm's specific needs in serving its customers. 3. Customer Interaction: providing the appropriate feedback to customers according to their needs. We can define eCRM as activities to manage customer relationships by using the Internet, web browsers or other electronic touch points. The challenge hereby is to offer communication and information on the right topic, in the right amount, and at the right time that fits the customer's specific needs.
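As a rough illustration of this three-step life cycle, the following Python sketch models collection, aggregation and interaction as a simple pipeline. It is a minimal sketch: every function and field name is invented for this example and does not come from any particular eCRM product.

from collections import Counter

def collect(active_answers, passive_clicks):
    # Step 1 - Data Collection: merge preferences gathered actively
    # (e.g. questionnaire answers) and passively (e.g. browsing records).
    return {"answers": active_answers, "clicks": passive_clicks}

def aggregate(raw):
    # Step 2 - Data Aggregation: filter and analyze for the firm's needs;
    # here we simply rank the product categories a customer viewed most.
    top = Counter(raw["clicks"]).most_common(3)
    return {"stated_interests": raw["answers"],
            "top_viewed": [category for category, _ in top]}

def interact(profile):
    # Step 3 - Customer Interaction: respond according to customer needs,
    # e.g. by recommending the categories the profile points to.
    targets = profile["stated_interests"] + profile["top_viewed"]
    return "Recommend content for: " + ", ".join(dict.fromkeys(targets))

profile = aggregate(collect(["laptops"], ["phones", "phones", "laptops", "tablets"]))
print(interact(profile))   # Recommend content for: laptops, phones, tablets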

eCRM strategy components

When enterprises integrate their customer information, there are three eCRM strategy components: 1. Operational: because information is shared, business processes should put customer needs first and be implemented seamlessly. This avoids bothering customers multiple times and eliminates redundant processes. 2. Analytical: analysis helps the company maintain a long-term relationship with customers. 3. Collaborative: thanks to improved communication technology, different departments within a company (intra-organizational) or business partners (inter-organizational) can work together more efficiently by sharing information. Several CRM software packages exist that can help companies in deploying CRM activities. Besides choosing one of these packages, companies can also choose to design and build their own solutions. In order to implement CRM in an effective way, one needs to consider the following factors:

Create a customer-focused culture in the organization.
Adopt customer-based managers to assess satisfaction.
Develop an end-to-end process to serve customers.
Recommend questions to be asked to help a customer solve a problem.
Track all aspects of selling to customers, as well as prospects.

Furthermore, CRM solutions are more effective once they are integrated with the other information systems used by the company. An example is a Transaction Processing System (TPS) that processes data in real time, which can then be sent to the sales and finance departments in order to recalculate inventory and financial position quickly and accurately. Once this information is transferred back to the CRM software and services, it can prevent customers from placing an order in the belief that an item is in stock when it is not. In contrast with traditional CRM, which is implemented under an ERP (Enterprise Resource Planning) interface for communication within firms and with their customers, eCRM optimizes a customized environment via the web browser. This benefits effective communication not only between the enterprise and its external customers but also between internal departments. Businesses keep a personalized profile of each of their customers, unified across the entire organization. Through a central repository, a customer may communicate with staff in different departments of the corporation via the Internet (or a phone call), and firms are able to use marketing analysis to offer customers more mature services. As each department integrates customer information, it can focus on its individual operational duties more efficiently, so the firm may reduce execution costs.
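To make the stock-checking idea concrete, here is a hedged Python sketch in which a CRM-facing order function consults inventory that a TPS is assumed to keep current in real time. The SKUs, stock figures and function names are invented for illustration only.

# Hypothetical sketch: the CRM front end checks TPS-maintained inventory
# before accepting an order, so customers cannot order out-of-stock items.

stock = {"SKU-1001": 5, "SKU-2002": 0}   # kept current by the TPS in practice

def place_order(sku, qty):
    available = stock.get(sku, 0)
    if qty > available:
        return "Rejected: only %d of %s in stock" % (available, sku)
    stock[sku] = available - qty          # the TPS would record this transaction
    return "Accepted: %d x %s reserved" % (qty, sku)

print(place_order("SKU-1001", 2))  # Accepted: 2 x SKU-1001 reserved
print(place_order("SKU-2002", 1))  # Rejected: only 0 of SKU-2002 in stock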

eCRM in B2B market

Traditional B2B customers are usually seeking ways to decrease the firm's expenses; customizing specific products for them and reducing repeated routine costs keeps their spending to a minimum. As information technology has developed, website information has become an important medium for reducing the cost and time of gathering information, and this eventually develops into a long-term relationship. At the same time, more complex collaboration can be implemented on networking platforms.[7] After businesses collaborate as supply chain partners, the mutual benefits give both sides an equal position and increase confidence in the cooperation.[16]

eCRM in B2C market

In contrast with B2B, individual purchase attitudes in B2C are decided by positive experiences and online shopping knowledge. Previous research[17][18] found that B2C customers enjoy shopping online because it is quick to transact, convenient for returns, saves trips to the retailer, and is fun to browse. Thus, marketing investigation, customer communication, and information gathering are important factors in maintaining customer service.[7]

Cloud solutions

Today, more and more enterprise CRM systems are moving to cloud computing solutions, up from 8 percent of the CRM market in 2005 to 20 percent of the market in 2008, according to Gartner. By moving their management systems into the cloud, companies can operate cost-efficiently on a pay-per-use basis for management, maintenance, upgrades and so on, and connect with their customers in a streamlined way in the cloud. In a cloud-based CRM system, transactions can be recorded in the CRM database immediately. Some enterprise cloud CRM systems, such as eSalesTrack and Cloud 9 Analytics, are web-based: customers don't need to install an additional interface, and their activities with businesses can be updated in real time. People may communicate on mobile devices to get efficient service. Furthermore, customer/case experience and interaction feedback are another way for CRM to collaborate and integrate information within the corporate organization to improve business services, as offered by vendors such as Pega and VeraCentra. There are multifarious cloud CRM services for enterprises to use, and here are some hints for choosing the right CRM system: 1. Assess your company's needs: some enterprise CRM systems are specialized; for example, OutStart provides community management and SmartLead focuses on leads. 2. Take advantage of free trials: compare and familiarize yourself with each of the options. 3. Do the math: estimate the customer strategy against the company's budget. 4. Consider mobile options: some systems, such as Salesforce.com, can be combined with mobile device applications. 5. Ask about security: consider whether the cloud CRM provider gives as much protection as your own systems would. 6. Make sure the sales team is on board: as the frontline of the enterprise, the newly launched CRM system should be a help to sales. 7. Know your exit strategy: understand the exit mechanism so as to keep your flexibility.

vCRM

Channels through which companies can communicate with their customers are growing by the day, and as a result, getting customers' time and attention has turned into a major challenge. One of the reasons eCRM is so popular nowadays is that digital channels can create unique and positive experiences, not just transactions, for customers. An extreme, but increasingly popular, example of creating experiences in order to establish customer service is the use of virtual worlds, such as Second Life. Through this so-called vCRM, companies are able to create synergies between virtual and physical channels and reach a very wide consumer base. However, given the newness of the technology, most companies are still struggling to identify effective entries into virtual worlds. The highly interactive character of eCRM, which allows companies to respond directly to any customer's requests or problems, is another feature that helps companies establish and sustain long-term customer relationships.

Furthermore, Information Technology has helped companies to differentiate even further between customers and address each one with a personal message or service. Some examples of tools used in eCRM:

Personalized web pages, where customers are recognized and their preferences are shown.
Customized products or services (Dell).

CRM programs should be directed towards customer value that competitors cannot match.[27] However, in a world where almost every company is connected to the Internet, eCRM has become a requirement for survival, not just a competitive advantage.

Different levels of eCRM

In defining the scope of eCRM, three different levels can be distinguished:

Foundational services:

This includes the minimum necessary services such as web site effectiveness and responsiveness as well as order fulfillment.

Customer-centered services:

These services include order tracking, product configuration and customization as well as security/trust.

Value-added services:

These are extra services such as online auctions and online training and education. Self-services are becoming increasingly important in CRM activities. The rise of the Internet and eCRM has boosted the options for self-service activities. A critical success factor is the integration of such activities into traditional channels. An example was Ford's plan to sell cars directly to customers via its web site, which provoked an outcry among its dealer network.[30] CRM activities are mainly of two different types. Reactive service is where the customer has a problem and contacts the company. Proactive service is where the manager has decided not to wait for the customer to contact the firm, but to contact the customer proactively in order to establish a dialogue and solve problems.[31]

Steps to eCRM Success

Many factors play a part in ensuring that the implementation of any level of eCRM is successful. One obvious measure is the system's ability to add value to the existing business. There are four suggested implementation steps that affect the viability of a project like this: 1. Developing customer-centric strategies 2. Redesigning workflow management systems 3. Re-engineering work processes 4. Supporting with the right technologies

Failures

Designing, creating and implementing IT projects has always been risky, not only because of the amount of money involved but also because of the high chance of failure. However, a positive trend can be seen, indicating that the CRM failure rate dropped from 80% in 1998 to about 40% in 2003. Some of the major issues relating to CRM failure are the following:

Difficulty in measuring and valuing intangible benefits.
Failure to identify and focus on specific business problems.
Lack of active senior management sponsorship.
Poor user acceptance.
Trying to automate a poorly defined process.

Privacy

The effective and efficient employment of CRM activities cannot go without remarks on safety and privacy. CRM systems depend on databases in which all kinds of customer data are stored. In general, the following rule applies: the more data, the better the service companies can deliver to individual customers. Well-known examples of these problems are conducting credit-card transactions online and the phenomenon known as 'cookies', used on the Internet to track a person's information and behavior. The design and quality of the website are two very important aspects that influence the level of trust customers experience and their willingness or reluctance to complete a transaction or leave personal information. Privacy policies can be ineffective in relaying to customers how much of their information is being used. In a recent study by the University of Pennsylvania and the University of California, it was revealed that over half the respondents had an incorrect understanding of how their information is being used. They believe that, if a company has a privacy policy, it will not share the customer's information with third-party companies without the customer's express consent. Therefore, if marketers want to use consumer information for advertising purposes, they must clearly illustrate the ways in which they will use the customer's information and present the benefits of this in order to acquire the customer's consent.[45] Privacy concerns are being addressed more and more. Legislation is being proposed that regulates the use of personal data, and Internet policy officials are calling for more performance measures of privacy policies.

Knowledge Management System


A Knowledge Management System (KM System) refers to a (generally IT-based) system for managing knowledge in organizations, supporting the creation, capture, storage and dissemination of information. It can comprise a part (neither necessary nor sufficient) of a Knowledge Management initiative. The idea of a KM system is to enable employees to have ready access to the organization's documented base of facts, sources of information, and solutions. For example, a typical claim justifying the creation of a KM system might run something like this: an engineer could know the metallurgical composition of an alloy that reduces sound in gear systems. Sharing this information organization-wide can lead to more effective engine design, and it could also lead to ideas for new or improved equipment. A KM system could be any of the following:

1. Document based, i.e. any technology that permits creation/management/sharing of formatted documents, such as Lotus Notes, the web, distributed databases, etc.
2. Ontology/taxonomy based: these are similar to document technologies in the sense that a system of terminologies (i.e. an ontology) is used to summarize the document, e.g. Author, Subject, Organization, etc., as in DAML and other XML-based ontologies.
3. Based on AI technologies, which use a customized representation scheme to represent the problem domain.
4. Providing network maps of the organization showing the flow of communication between entities and individuals.
5. Increasingly, social computing tools are being deployed to provide a more organic approach to the creation of a KM system.

KM systems deal with information (although Knowledge Management as a discipline may extend beyond the information-centric aspect of any system), so they are a class of information system and may build on or utilize other information sources. Distinguishing features of a KMS can include:

1. Purpose: a KMS will have an explicit Knowledge Management objective of some type, such as collaboration or sharing good practice.
2. Context: one perspective on KMS sees knowledge as information that is meaningfully organized, accumulated and embedded in a context of creation and application.
3. Processes: KMS are developed to support and enhance knowledge-intensive processes, tasks or projects, e.g. creation, construction, identification, capturing, acquisition, selection, valuation, organization, linking, structuring, formalization, visualization, transfer, distribution, retention, maintenance, refinement, revision, evolution, accessing, retrieval and, last but not least, the application of knowledge, also called the knowledge life cycle.
4. Participants: users can play the roles of active, involved participants in knowledge networks and communities fostered by KMS, although this is not necessarily the case. KMS designs are held to reflect that knowledge is developed collectively and that the distribution of knowledge leads to its continuous change, reconstruction and application in different contexts, by different participants with differing backgrounds and experiences.
5. Instruments: KMS support KM instruments, e.g. the capture, creation and sharing of the codifiable aspects of experience, the creation of corporate knowledge directories, taxonomies or ontologies, expertise locators, skill management systems, collaborative filtering, the handling of interests used to connect people, and the creation and fostering of communities or knowledge networks.

A KMS offers integrated services to deploy KM instruments for networks of participants, i.e. active knowledge workers, in knowledge-intensive business processes along the entire knowledge life cycle. KMS can be used by a wide range of cooperative, collaborative, adhocracy and hierarchical communities, virtual organizations, societies and other virtual networks to manage media contents, activities, interactions, workflows, projects, works, networks, departments, privileges, roles and participants, in order to extract and generate new knowledge and to enhance, leverage and transfer knowledge into new outcomes, providing new services using new formats, interfaces and communication channels.

Some of the advantages claimed for KM systems are:

1. Sharing of valuable organizational information throughout the organizational hierarchy.
2. Avoiding re-inventing the wheel, reducing redundant work.
3. May reduce training time for new employees.
4. Retention of intellectual property after an employee leaves, if such knowledge can be codified.
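To ground the document-based and taxonomy-based system types described above, here is a toy Python sketch of a knowledge store whose entries carry taxonomy tags. It is purely illustrative: every class, field and tag name is invented and does not correspond to any real KMS product.

from dataclasses import dataclass, field

@dataclass
class KnowledgeDoc:
    title: str
    body: str
    author: str
    tags: set = field(default_factory=set)   # taxonomy terms, e.g. Subject

class KMStore:
    def __init__(self):
        self.docs = []

    def add(self, doc):
        self.docs.append(doc)

    def find(self, tag):
        # Retrieve every document filed under a given taxonomy term.
        return [d for d in self.docs if tag in d.tags]

store = KMStore()
store.add(KnowledgeDoc("Quiet gear alloys", "Alloy X reduces gear noise...",
                       "engineer-42", {"metallurgy", "gear-design"}))
for d in store.find("gear-design"):
    print(d.title, "-", d.author)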

ERP
Enterprise resource planning software, or ERP, doesn't live up to its acronym. Forget about planning (it doesn't do much of that) and forget about resource, a throwaway term. But remember the enterprise part: this is ERP's true ambition. It attempts to integrate all departments and functions across a company onto a single computer system that can serve all those different departments' particular needs. That is a tall order: building a single software program that serves the needs of people in finance as well as it does the people in human resources and in the warehouse. Each of those departments typically has its own computer system optimized for the particular ways that the department does its work. But ERP combines them all together into a single, integrated software program that runs off a single database so that the various departments can more easily share information and communicate with each other. That integrated approach can have a tremendous payback if companies install the software correctly.

ERP is principally an integration of business management practices and modern technology. Information Technology (IT) integrates with the core business processes of a corporation to streamline and accomplish specific business objectives. Consequently, ERP is an amalgamation of three important components: business management practices, information technology and specific business objectives. In simpler words, an ERP is a massive software architecture that supports the streaming and distribution of geographically scattered, enterprise-wide information across all the functional units of a business. It provides business management executives with a comprehensive overview of the complete business execution, which in turn influences their decisions in a productive way. At the core of ERP is a well-managed, centralized data repository which acquires information from, and supplies information to, the fragmented applications operating on a universal computing platform. Information in large business organizations is accumulated on various servers across many functional units, sometimes separated by geographical boundaries. Such information islands can possibly serve individual organizational units but fail to enhance enterprise-wide performance, speed and competence. The term ERP originally referred to the way a large organization planned to use its organization-wide resources. Formerly, ERP systems were used in larger and more industrial types of companies. However, the use of ERP has changed radically over a period of a few years. Today the term can be applied to any type of company, operating in any kind of field and of any magnitude. Today's ERP software architecture can envelop a broad range of enterprise-wide functions and integrate them into a single unified database repository. For instance,

functions such as Human Resources, Supply Chain Management, Customer Relationship Management, Finance, Manufacturing, Warehouse Management and Logistics were all previously stand-alone software applications, generally housed with their own applications, databases and networks, but today they can all work under a single umbrella: the ERP architecture.

Take a customer order, for example. Typically, when a customer places an order, that order begins a mostly paper-based journey from in-basket to in-basket around the company, often being keyed and rekeyed into different departments' computer systems along the way. All that lounging around in in-baskets causes delays and lost orders, and all the keying into different computer systems invites errors. Meanwhile, no one in the company truly knows what the status of the order is at any given point because there is no way for the finance department, for example, to get into the warehouse's computer system to see whether the item has been shipped. "You'll have to call the warehouse" is the familiar refrain heard by frustrated customers. ERP vanquishes the old standalone computer systems in finance, HR, manufacturing and the warehouse, and replaces them with a single unified software program divided into software modules that roughly approximate the old standalone systems. Finance, manufacturing and the warehouse all still get their own software, except now the software is linked together so that someone in finance can look into the warehouse software to see if an order has been shipped. Most vendors' ERP software is flexible enough that you can install some modules without buying the whole package. Many companies, for example, will just install an ERP finance or HR module and leave the rest of the functions for another day. How can ERP improve a company's business performance? ERP's best hope for demonstrating value is as a sort of battering ram for improving the way your company takes a customer order and processes it into an invoice and revenue, otherwise known as the order fulfillment process. That is why ERP is often referred to as back-office software. It doesn't handle the up-front selling process (although most ERP vendors have developed CRM software or acquired pure-play CRM providers that can do this); rather, ERP takes a customer order and provides a software road map for automating the different steps along the path to fulfilling it. When a customer service representative enters a customer order into an ERP system, he has all the information necessary to complete the order (the customer's credit rating and order history from the finance module, the company's inventory levels from the warehouse module and the shipping dock's trucking schedule from the logistics module, for example). People in these different departments all see the same information and can update it. When one department finishes with the order, it is automatically routed via the ERP system to the next department. To find out where the order is at any point, you need only log in to the ERP system and track it down. With luck, the order process moves like a bolt of lightning through the organization, and customers get their orders faster and with fewer errors than before. ERP can apply that same magic to the other major business processes, such as employee benefits or financial reporting.
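As a rough illustration of this single-database idea, the following Python sketch shows three hypothetical "modules" (finance, warehouse, and a shared tracker) reading and updating one shared order record as it moves through fulfillment. The module names, order IDs and fields are invented and greatly simplified.

# Hedged sketch: several departments update one shared record instead of
# keying the same order into separate systems.

shared_db = {}   # stands in for the single ERP database

def enter_order(order_id, item):
    shared_db[order_id] = {"item": item, "status": "entered"}

def check_credit(order_id):           # finance module
    shared_db[order_id]["credit_ok"] = True

def ship(order_id):                   # warehouse module
    if shared_db[order_id].get("credit_ok"):
        shared_db[order_id]["status"] = "shipped"

def track(order_id):                  # any department can look this up
    return shared_db[order_id]["status"]

enter_order("SO-1", "widget")
check_credit("SO-1")
ship("SO-1")
print(track("SO-1"))   # shipped - visible to finance, warehouse and sales alike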

In order for a software system to be considered ERP, it must provide a business with a wide collection of functionalities supported by features such as flexibility, modularity and openness, broad reach, best-practice business processes and a global focus.

Integration is Key to ERP Systems

Integration is an exceptionally significant ingredient of ERP systems. The integration between business processes helps develop communication and information distribution, leading to a remarkable increase in productivity, speed and performance. The key objective of an ERP system is to integrate information and processes from all functional divisions of an organization and merge them for effortless access and structured workflow. The integration is typically accomplished by constructing a single database repository that communicates with multiple software applications, providing different divisions of an organization with various business statistics and information. Although the perfect configuration would be a single ERP system for an entire organization, many larger organizations usually deploy a single functional system and slowly interface it with other functional divisions. This type of deployment can be time-consuming and expensive.

The Ideal ERP System

An ERP system would qualify as the best model for enterprise-wide solution architecture if it chains all of the organizational processes below together with a central database repository and a fused computing platform.

Manufacturing: engineering, resource and capacity planning, material planning, workflow management, shop floor management, quality control, bills of material, manufacturing process, etc.
Financials: accounts payable, accounts receivable, fixed assets, general ledger, cash management, and billing (contract/service).
Human Resources: recruitment, benefits, compensation, training, payroll, time and attendance, labour rules, people management.
Supply Chain Management: inventory management, supply chain planning, supplier scheduling, claim processing, sales order administration, procurement planning, transportation and distribution.
Projects: costing, billing, activity management, time and expense.
Customer Relationship Management:

sales and marketing, service, commissions, customer contact and after-sales support.
Data Warehouse: generally, this is an information storehouse that can be accessed by organizations, customers, suppliers and employees for their learning and orientation.

ERP Systems Improve Productivity, Speed and Performance

Prior to the evolution of the ERP model, each department in an enterprise had its own isolated software application which did not interface with any other system. Such an isolated framework could not synchronize inter-department processes and hence hampered the productivity, speed and performance of the overall organization. This led to issues such as incompatible exchange standards, lack of synchronization, incomplete understanding of enterprise functioning, unproductive decisions and many more. For example, the financials department could not coordinate with the procurement team to plan purchases according to the availability of money. Hence, deploying a comprehensive ERP system across an organization leads to performance increases, workflow synchronization, standardized information exchange formats, a complete overview of enterprise functioning, global decision optimization, speed enhancement and much more.

Implementation of an ERP System

Implementing an ERP system in an organization is an extremely complex process. It takes a lot of systematic planning, expert consultation and a well-structured approach. Due to its extensive scope, it may even take years to implement in a large organization. Implementing an ERP system will eventually necessitate significant changes in staff and work processes. While it may seem practical for the in-house IT administration to head the project, it is commonly advised that special ERP implementation experts be consulted, since they are specially trained in deploying these kinds of systems. Organizations generally use ERP vendors or consulting companies to implement their customized ERP system. There are three types of professional services that are provided when implementing an ERP system: consulting, customization and support.

Consulting services are responsible for the initial stages of ERP implementation, where they help an organization go live with its new system through product training, workflow design, improving the ERP's use in the specific organization, and so on. Customization services work by extending the use of the new ERP system, or changing its use, by creating customized interfaces and/or underlying application code. While ERP systems are made for many core routines, some needs still have to be built or customized for a particular organization. Support services include both support and maintenance of ERP systems, for instance troubleshooting and assistance with ERP issues.

The ERP implementation process goes through five major stages: Structured Planning, Process Assessment, Data Compilation & Cleanup, Education & Testing, and Usage & Evaluation. 1. Structured Planning: the foremost and most crucial stage, in which a capable project team is selected, present business processes are studied, information flow within and outside the organization is scrutinized, vital objectives are set and a comprehensive implementation plan is formulated. 2. Process Assessment: the next important stage, in which the prospective software's capabilities are examined, manual business processes are recognized and standard working procedures are constructed. 3. Data Compilation & Cleanup: helps in identifying the data that is to be converted and the new information that will be needed. The compiled data is then analyzed for accuracy and completeness, and worthless or unwanted information is discarded. 4. Education & Testing: aids in proofing the system and educating users in the ERP's mechanisms. The complete database is tested and verified by the project team using multiple testing methods and processes, and broad in-house training is held where all the concerned users are oriented to the functioning of the new ERP system. 5. Usage & Evaluation: the final and ongoing stage of the ERP. The newly implemented ERP is deployed live within the organization and is regularly checked by the project team for flaws and errors.

Advantages of ERP Systems

There are many advantages to implementing an ERP system. A few of them are listed below:

A perfectly integrated system chaining all the functional areas together
The capability to streamline different organizational processes and workflows
The ability to effortlessly communicate information across various departments
Improved efficiency, performance and productivity levels
Enhanced tracking and forecasting
Improved customer service and satisfaction

Disadvantages of ERP Systems While advantages usually outweigh disadvantages for most organizations implementing an ERP system, here are some of the most common obstacles experienced:

The scope of customization is limited in several circumstances
The present business processes have to be rethought to make them synchronize with the ERP
ERP systems can be extremely expensive to implement
There could be a lack of continuous technical support
ERP systems may be too rigid for specific organizations that are either new or want to move in a new direction in the near future

Commercial applications

Manufacturing: engineering, bills of material, work orders, scheduling, capacity, workflow management, quality control, cost management, manufacturing process, manufacturing projects, manufacturing flow
Supply chain management: order to cash, inventory, order entry, purchasing, product configurator, supply chain planning, supplier scheduling, inspection of goods, claim processing, commission calculation
Financials: general ledger, cash management, accounts payable, accounts receivable, fixed assets
Project management: costing, billing, time and expense, performance units, activity management
Human resources: human resources, payroll, training, time and attendance, rostering, benefits
Customer relationship management: sales and marketing, commissions, service, customer contact, call-center support
Data services: various "self-service" interfaces for customers, suppliers and/or employees
Access control: management of user privileges for various processes
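The access-control entry above is straightforward to picture in code. Below is a minimal, purely illustrative Python sketch of checking a user's role against the processes that role may run; the role names and process names are invented and do not come from any real ERP product.

# Toy role-based privilege check for ERP processes (illustrative only).

role_permissions = {
    "finance_clerk": {"accounts_payable", "general_ledger"},
    "warehouse_op":  {"inventory", "order_entry"},
}

def can_run(role, process):
    # A user may run a process only if their role grants it.
    return process in role_permissions.get(role, set())

print(can_run("finance_clerk", "general_ledger"))  # True
print(can_run("warehouse_op", "general_ledger"))   # False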

Business Process Re-engineering (BPR)

Business Process Re-engineering (BPR) is the strategic analysis of business processes and the planning and implementation of improved business processes. The analysis is often customer-centred and holistic in approach. BPR is a multi-disciplinary subject and should involve more than IT specialists. Nevertheless, IT can support BPR in many ways: software can be used to help with gathering information about the current business organisation, and workflow and process analysis tools, business modelling and simulation, on-line analytic processing (OLAP) and data mining all have a role to play.

Definition

Business process reengineering (BPR) is the analysis and redesign of workflow within and between enterprises. BPR reached its heyday in the early 1990s when Michael Hammer and James Champy published their best-selling book, "Reengineering the Corporation". The authors promoted the idea that sometimes radical redesign and reorganization of an enterprise (wiping the slate clean) was necessary to lower costs and increase quality of service, and that information technology was the key enabler for that radical change. Hammer and Champy felt that the design of workflow in most large corporations was based on assumptions about technology, people, and organizational goals that were no longer valid. They suggested seven principles of reengineering to streamline the work process and thereby achieve significant levels of improvement in quality, time management, and cost: 1. Organize around outcomes, not tasks. 2. Identify all the processes in an organization and prioritize them in order of redesign urgency. 3. Integrate information processing work into the real work that produces the information. 4. Treat geographically dispersed resources as though they were centralized. 5. Link parallel activities in the workflow instead of just integrating their results. 6. Put the decision point where the work is performed, and build control into the process. 7. Capture information once and at the source. By the mid-1990s, BPR had gained the reputation of being a nice way of saying "downsizing." According to Hammer, lack of sustained management commitment and leadership, unrealistic scope and expectations, and resistance to change prompted management to abandon the concept of BPR and embrace the next new methodology, enterprise resource planning (ERP).

Business process reengineering (BPR) began as a private sector technique to help organizations fundamentally rethink how they do their work in order to dramatically improve customer service, cut operational costs, and become world-class competitors. A key stimulus for reengineering has been the continuing development and deployment of sophisticated information systems and networks. Leading organizations are becoming bolder in using this technology to support innovative business processes, rather than refining current ways of doing work.[1]

[Figure: reengineering guidance and the relationship of mission and work processes to information technology.] Business Process Reengineering (BPR) is basically the fundamental rethinking and radical re-design made to an organization's existing resources. It is more than just business improvement.

It is an approach for redesigning the way work is done to better support the organization's mission and reduce costs. Reengineering starts with a high-level assessment of the organization's mission, strategic goals, and customer needs. Basic questions are asked, such as "Does our mission need to be redefined? Are our strategic goals aligned with our mission? Who are our customers?" An organization may find that it is operating on questionable assumptions, particularly in terms of the wants and needs of its customers. Only after the organization rethinks what it should be doing does it go on to decide how best to do it.[1] Within the framework of this basic assessment of mission and goals, reengineering focuses on the organization's business processes: the steps and procedures that govern how resources are used to create products and services that meet the needs of particular customers or markets. As a structured ordering of work steps across time and place, a business process can be decomposed into specific activities, measured, modeled, and improved. It can also be completely redesigned or eliminated altogether. Reengineering identifies, analyzes, and redesigns an organization's core business processes with the aim

of achieving dramatic improvements in critical performance measures, such as cost, quality, service, and speed. Reengineering recognizes that an organization's business processes are usually fragmented into subprocesses and tasks that are carried out by several specialized functional areas within the organization. Often, no one is responsible for the overall performance of the entire process. Reengineering maintains that optimizing the performance of subprocesses can result in some benefits, but cannot yield dramatic improvements if the process itself is fundamentally inefficient and outmoded. For that reason, reengineering focuses on redesigning the process as a whole in order to achieve the greatest possible benefits to the organization and its customers. This drive for realizing dramatic improvements by fundamentally rethinking how the organization's work should be done distinguishes reengineering from process improvement efforts that focus on functional or incremental improvement.

History

In 1990, Michael Hammer, a former professor of computer science at the Massachusetts Institute of Technology (MIT), published an article in the Harvard Business Review in which he claimed that the major challenge for managers is to obliterate non-value-adding work, rather than using technology to automate it.[2] This statement implicitly accused managers of having focused on the wrong issues, namely that technology in general, and more specifically information technology, had been used primarily for automating existing processes rather than as an enabler for making non-value-adding work obsolete. Hammer's claim was simple: most of the work being done does not add any value for customers, and this work should be removed, not accelerated through automation. Instead, companies should reconsider their processes in order to maximize customer value while minimizing the consumption of resources required for delivering their product or service. A similar idea was advocated in 1990 by Thomas H. Davenport and J. Short[3], at that time at the Ernst & Young research center, in a paper published in the Sloan Management Review the same year as Hammer published his paper. This idea, to review a company's business processes without bias, was rapidly adopted by a huge number of firms that were striving to regain the competitiveness they had lost due to the market entrance of foreign competitors, their inability to satisfy customer needs, and their insufficient cost structure. Even well-established management thinkers, such as Peter Drucker and Tom Peters, accepted and advocated BPR as a new tool for (re-)achieving success in a dynamic world. During the following years, a fast-growing number of publications, books as well as journal articles, were dedicated to BPR, and many consulting firms embarked on this trend and developed BPR methods. However, critics were fast to claim that BPR was a way to dehumanize the workplace, increase managerial control, and justify downsizing, i.e. major reductions of the workforce,[4] and a rebirth of Taylorism under a different label. Despite this critique, reengineering was adopted at an accelerating pace, and by 1993 as many as 65% of the Fortune 500 companies claimed either to have initiated reengineering efforts or to have plans to do so. This trend was fueled by the fast adoption of BPR by the consulting industry, but also by the study Made in America, conducted by MIT, which showed how companies in many US industries had lagged behind their foreign counterparts in terms of competitiveness, time-to-market and productivity.

Development after 1995

With the publication of critiques in 1995 and 1996 by some of the early BPR proponents, coupled with abuses and misuses of the concept by others, the reengineering fervor in the U.S. began to wane. Since then, considering business processes as a starting point for business analysis and redesign has become a widely accepted approach and is a standard part of the change methodology portfolio, but it is typically performed in a less radical way than originally proposed. More recently, the concept of Business Process Management (BPM) has gained major attention in the corporate world and can be considered a successor to the BPR wave of the 1990s, as it is equally driven by a striving for process efficiency supported by information technology. Equivalently to the critique brought forward against BPR, BPM is now accused of focusing on technology and disregarding the people aspects of change.

Definitions

Many definitions of BPR can be found. This section contains definitions provided in notable publications in the field:

"... the fundamental rethinking and radical redesign of business processes to achieve dramatic improvements in critical contemporary measures of performance, such as cost, quality, service, and speed "encompasses the envisioning of new work strategies, the actual process design activity, and the implementation of the change in all its complex technological, human, and organizational dimensions Additionally, Davenport (ibid.) points out the major difference between BPR and other approaches to organization development (OD), especially the continuous improvement or TQM movement, when he states: "Today firms must seek not fractional, but multiplicative levels of improvement 10x rather than 10%." Finally, Johansson[7] provide a description of BPR relative to other process-oriented views, such as Total Quality Management (TQM) and Just-in-time (JIT), and state: "Business Process Reengineering, although a close relative, seeks radical rather than merely continuous improvement. It escalates the efforts of JIT and TQM to make process orientation a strategic tool and a core competence of the organization. BPR concentrates on core business processes, and uses the specific techniques within the JIT and TQM toolboxes as enablers, while broadening the process vision."

In order to achieve the major improvements BPR seeks, changing structural organizational variables and other ways of managing and performing work is often considered insufficient. To be able to reap the achievable benefits fully, the use of information technology (IT) is conceived as a major contributing factor. While IT has traditionally been used for supporting existing business functions, i.e. for increasing organizational efficiency, it now plays a role as an enabler of new organizational forms and patterns of collaboration within and between organizations. BPR derives its existence from different disciplines, and four major areas can be identified as being subject to change in BPR: organization, technology, strategy, and people, where a process view is used as the common framework for considering these dimensions. The approach can be graphically depicted by a modification of "Leavitt's diamond". Business strategy is the primary driver of BPR initiatives, and the other dimensions are governed by strategy's encompassing role. The organization dimension reflects the structural elements of the company, such as hierarchical levels, the composition of organizational units, and the distribution of work between them. Technology is concerned with the use of computer systems and other forms of communication technology in the business. In BPR, information technology is generally considered to play a role as an enabler of new forms of organizing and collaborating, rather than supporting existing business functions. The people / human resources dimension deals with aspects such as education, training, motivation and reward systems. The concept of business processes (interrelated activities aiming at creating value-added output for a customer) is the basic underlying idea of BPR. These processes are characterized by a number of attributes: process ownership, customer focus, value adding, and cross-functionality.

The role of information technology

Information technology (IT) has historically played an important role in the reengineering concept. It is considered by some as a major enabler for new forms of working and collaborating within an organization and across organizational borders. Early BPR literature identified several so-called disruptive technologies that were supposed to challenge traditional wisdom about how work should be performed:

Shared databases, making information available at many places
Expert systems, allowing generalists to perform specialist tasks
Telecommunication networks, allowing organizations to be centralized and decentralized at the same time
Decision-support tools, allowing decision-making to be a part of everybody's job
Wireless data communication and portable computers, allowing field personnel to work independently of the office
Interactive videodisk, to get in immediate contact with potential buyers
Automatic identification and tracking, allowing things to tell where they are, instead of needing to be found
High-performance computing, allowing on-the-fly planning and revision

In the mid-1990s, workflow management systems in particular were considered a significant contributor to improved process efficiency. ERP (Enterprise Resource Planning) vendors, such as SAP, JD Edwards, Oracle and PeopleSoft, also positioned their solutions as vehicles for business process redesign and improvement.

Research & Methodology

Although the labels and steps differ slightly, the early methodologies that were rooted in IT-centric BPR solutions share many of the same basic principles and elements. The following outline is one such model, based on the PRLC (Process Reengineering Life Cycle) approach developed by Guha.

[Figure: simplified schematic outline of using a business process approach, exemplified for pharmaceutical R&D: 1. Structural organization with functional units. 2. Introduction of New Product Development as a cross-functional process. 3. Re-structuring and streamlining activities, removal of non-value-adding tasks.]

Benefiting from lessons learned from the early adopters, some BPR practitioners advocated a change in emphasis to a customer-centric, as opposed to an IT-centric, methodology. One such methodology, which also incorporated a risk and impact assessment to account for the impact that BPR can have on jobs and operations, was described by Lon Roberts (1994). Roberts also stressed the use of change management tools to proactively address resistance to change, a factor linked to the demise of many reengineering initiatives that looked good on the drawing board. Some items to use on a process analysis checklist are: reduce handoffs, centralize data, reduce delays, free resources faster, combine similar activities. A significant number of methodological approaches have also been developed within the management consulting industry.

Critique

Reengineering has earned a bad reputation because such projects have often resulted in massive layoffs. This reputation is not altogether unwarranted, since companies have often downsized under the banner of reengineering. Further, reengineering has not always lived up to its expectations. The main reasons seem to be that:

Reengineering assumes that the factor limiting an organization's performance is the ineffectiveness of its processes (which may or may not be true) and offers no means of validating that assumption. Reengineering assumes the need to start the process of performance improvement with a "clean slate," i.e. to totally disregard the status quo. According to Eliyahu M. Goldratt (and his Theory of Constraints), reengineering does not provide an effective way to focus improvement efforts on the organization's constraint.

There was considerable hype surrounding the introduction of Reengineering the Corporation (partially due to the fact that the authors of the book reportedly bought large numbers of copies to promote it to the top of bestseller lists). Abrahamson (1996) showed that fashionable management terms tend to follow a life cycle, which for reengineering peaked between 1993 and 1996 (Ponzi and Koenig 2002). Some argue that reengineering was in fact nothing new; for example, when Henry Ford implemented the assembly line in 1908, he was in fact reengineering, radically changing the way of thinking in an organization. Dubois (2002) highlights the value of signaling terms such as reengineering: giving a practice a name stimulates it. At the same time, there can be a danger in using such fashionable concepts as mere ammunition to implement particular reforms. The most frequent and harsh critique against BPR concerns its strict focus on efficiency and technology and its disregard of the people in the organization that is subjected to a reengineering initiative. Very often, the label BPR was used for major workforce reductions. Thomas Davenport, an early BPR proponent, stated: "When I wrote about 'business process redesign' in 1990, I explicitly said that using it for cost reduction alone was not a sensible goal. And consultants Michael Hammer and James Champy, the two names most closely associated with reengineering, have insisted all along that layoffs shouldn't be the point. But the fact is, once out of the bottle, the reengineering genie quickly turned ugly." Michael Hammer similarly admitted: "I wasn't smart enough about that. I was reflecting my engineering background and was insufficiently appreciative of the human dimension. I've learned that's critical." Other criticisms brought forward against the BPR concept include:

It never changed management thinking; in fact, lack of management support for the initiative, and thus poor acceptance in the organization, is among the largest causes of failure.
Exaggerated expectations regarding the potential benefits of a BPR initiative, and consequent failure to achieve the expected results.
Underestimation of the resistance to change within the organization.
Implementation of generic, so-called best-practice processes that do not fit specific company needs.
Overreliance on technology solutions.
Performing BPR as a one-off project with limited strategy alignment and long-term perspective.
Poor project management.

Information security means protecting information and information systems from unauthorized access, use, disclosure, disruption, modification, perusal, inspection, recording or destruction. The terms information security, computer security and information assurance are frequently, and incorrectly, used interchangeably. These fields are often interrelated and share the common goals of protecting the confidentiality, integrity and availability of information; however, there are some subtle differences between them.

These differences lie primarily in the approach to the subject, the methodologies used, and the areas of concentration. Information security is concerned with the confidentiality, integrity and availability of data regardless of the form the data may take: electronic, print, or other forms. Computer security can focus on ensuring the availability and correct operation of a computer system without concern for the information stored or processed by the computer. Governments, military, corporations, financial institutions, hospitals, and private businesses amass a great deal of confidential information about their employees, customers, products, research, and financial status. Most of this information is now collected, processed and stored on electronic computers and transmitted across networks to other computers. Should confidential information about a business's customers, finances or new product line fall into the hands of a competitor, such a breach of security could lead to lost business, lawsuits or even bankruptcy of the business. Protecting confidential information is a business requirement, and in many cases also an ethical and legal requirement. For the individual, information security has a significant effect on privacy, which is viewed very differently in different cultures. The field of information security has grown and evolved significantly in recent years. There are many ways of gaining entry into the field as a career. It offers many areas for specialization, including securing networks and allied infrastructure, securing applications and databases, security testing, information systems auditing, business continuity planning, digital forensic science, and so on.

History Since the early days of writing, heads of state and military commanders understood that it was necessary to provide some mechanism to protect the confidentiality of written correspondence and to have some means of detecting tampering. Julius Caesar is credited with the invention of the Caesar cipher ca. 50 B.C., which was created in order to prevent his secret messages from being read should a message fall into the wrong hands. World War II brought about many advancements in information security and marked the beginning of the professional field of information security. The end of the 20th century and early years of the 21st century saw rapid advancements in telecommunications, computing hardware and software, and data encryption. The availability of smaller, more powerful and less expensive computing equipment made electronic data

processing within the reach of small businesses and home users. These computers quickly became interconnected through a network generically called the Internet or World Wide Web. The rapid growth and widespread use of electronic data processing and electronic business conducted through the Internet, along with numerous occurrences of international terrorism, fueled the need for better methods of protecting the computers and the information they store, process and transmit. The academic disciplines of computer security, information security and information assurance emerged along with numerous professional organizations, all sharing the common goals of ensuring the security and reliability of information systems. For over twenty years, information security has held confidentiality, integrity and availability (known as the CIA triad) to be its core principles. There is continuous debate about extending this classic trio. Other principles, such as accountability, have sometimes been proposed for addition; it has been pointed out that issues such as non-repudiation do not fit well within the three core concepts, and as regulation of computer systems has increased (particularly amongst the Western nations), legality is becoming a key consideration for practical security installations. In 2002, Donn Parker proposed an alternative model for the classic CIA triad that he called the six atomic elements of information. The elements are confidentiality, possession, integrity, authenticity, availability, and utility. The merits of the Parkerian hexad are a subject of debate amongst security professionals.

Confidentiality

Confidentiality is the term used to describe preventing the disclosure of information to unauthorized individuals or systems. For example, a credit card transaction on the Internet requires the credit card number to be transmitted from the buyer to the merchant and from the merchant to a transaction processing network. The system attempts to enforce confidentiality by encrypting the card number during transmission, by limiting the places where it might appear (in databases, log files, backups, printed receipts, and so on), and by restricting access to the places where it is stored. If an unauthorized party obtains the card number in any way, a breach of confidentiality has occurred. Breaches of confidentiality take many forms. Permitting someone to look over your shoulder at your computer screen while you have confidential data displayed on it could be a breach of confidentiality. If a laptop computer containing sensitive information about a company's employees is stolen or sold, it could result in a breach of confidentiality. Giving out confidential information over the telephone is a breach of confidentiality if the caller is not authorized to have the information. Confidentiality is necessary (but not sufficient) for maintaining the privacy of the people whose personal information a system holds.

Integrity

In information security, integrity means that data cannot be modified undetectably. This is not the same thing as referential integrity in databases, although it can be viewed as a special case of consistency as understood in the classic ACID model of transaction processing. Integrity is violated when a message is actively modified in transit. Information security systems typically provide message integrity in addition to data confidentiality.
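The two properties just described can be illustrated with a few lines of Python using only the standard library. This is a hedged sketch, not a production design: the key, message and masking rule are invented for the example, and a real system would also encrypt the stored data rather than merely mask and authenticate it.

import hmac, hashlib

def mask(card_number):
    # Confidentiality: limit where the full number appears, e.g. never
    # log or display more than the last four digits.
    return "*" * (len(card_number) - 4) + card_number[-4:]

def sign(message, key):
    # Integrity: attach an HMAC so any modification in transit is detectable.
    return hmac.new(key, message, hashlib.sha256).hexdigest()

key = b"shared-secret"          # assumed pre-shared between the two parties
msg = b"pay 100 to merchant-77"
tag = sign(msg, key)

print(mask("4111111111111111"))                  # ************1111
print(hmac.compare_digest(tag, sign(msg, key)))  # True  - message unmodified
print(hmac.compare_digest(tag, sign(b"pay 900 to merchant-77", key)))  # False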
processing. Integrity is violated when a message is actively modified in transit. Information security systems typically provide message integrity in addition to data confidentiality.

Availability

For any information system to serve its purpose, the information must be available when it is needed. This means that the computing systems used to store and process the information, the security controls used to protect it, and the communication channels used to access it must be functioning correctly. High availability systems aim to remain available at all times, preventing service disruptions due to power outages, hardware failures, and system upgrades. Ensuring availability also involves preventing denial-of-service attacks.

Authenticity

In computing, e-Business and information security, it is necessary to ensure that data, transactions, communications and documents (electronic or physical) are genuine. It is also important for authenticity to validate that both parties involved are who they claim to be.

Non-repudiation

In law, non-repudiation implies a party's intention to fulfill its obligations to a contract. It also implies that one party to a transaction cannot deny having received the transaction, nor can the other party deny having sent it. Electronic commerce uses technology such as digital signatures and encryption to establish authenticity and non-repudiation.

Risk management

A comprehensive treatment of the topic of risk management is beyond the scope of this article. However, a useful definition of risk management will be provided, along with some basic terminology and a commonly used process for risk management. The CISA Review Manual 2006 provides the following definition of risk management: "Risk management is the process of identifying vulnerabilities and threats to the information resources used by an organization in achieving business objectives, and deciding what countermeasures, if any, to take in reducing risk to an acceptable level, based on the value of the information resource to the organization."[2] There are two things in this definition that may need some clarification. First, the process of risk management is an ongoing, iterative process that must be repeated indefinitely: the business environment is constantly changing, and new threats and vulnerabilities emerge every day. Second, the choice of countermeasures (controls) used to manage risks must strike a balance between productivity, cost, effectiveness of the countermeasure, and the value of the informational asset being protected. Risk is the likelihood that something bad will happen that causes harm to an informational asset (or the loss of the asset). A vulnerability is a weakness that could be used to endanger or cause harm to an informational asset. A threat is anything (man-made or act of nature) that has the potential to cause harm. The likelihood that a threat will use a vulnerability to cause harm creates a risk. When a threat does use a vulnerability to inflict harm, it has an impact. In the context of information
security, the impact is a loss of availability, integrity, and confidentiality, and possibly other losses (lost income, loss of life, loss of real property). It should be pointed out that it is not possible to identify all risks, nor is it possible to eliminate all risk. The remaining risk is called residual risk. A risk assessment is carried out by a team of people who have knowledge of specific areas of the business. Membership of the team may vary over time as different parts of the business are assessed. The assessment may use a subjective qualitative analysis based on informed opinion, or, where reliable dollar figures and historical information are available, a quantitative analysis. Research has shown that the most vulnerable point in most information systems is the human user, operator, designer, or other human.[3] The ISO/IEC 27002:2005 Code of practice for information security management recommends the following be examined during a risk assessment:

security policy, organization of information security, asset management, human resources security, physical and environmental security, communications and operations management, access control, information systems acquisition, development and maintenance, information security incident management, business continuity management, and regulatory compliance.

In broad terms the risk management process consists of:

1. Identify assets and estimate their value. Include: people, buildings, hardware, software, data (electronic, print, other), and supplies.
2. Conduct a threat assessment. Include: acts of nature, acts of war, accidents, and malicious acts originating from inside or outside the organization.
3. Conduct a vulnerability assessment, and for each vulnerability, calculate the probability that it will be exploited. Evaluate policies, procedures, standards, training, physical security, quality control, and technical security.
4. Calculate the impact that each threat would have on each asset. Use qualitative analysis or quantitative analysis (a worked sketch of the quantitative approach appears below).
5. Identify, select and implement appropriate controls. Provide a proportional response. Consider productivity, cost effectiveness, and the value of the asset.
6. Evaluate the effectiveness of the control measures. Ensure the controls provide the required cost-effective protection without discernible loss of productivity.

For any given risk, Executive Management can choose to accept the risk based upon the relatively low value of the asset, the relatively low frequency of occurrence, and the relatively low impact on the business. Alternatively, leadership may choose to mitigate the risk by selecting and implementing appropriate control measures to reduce the risk. In some cases, the risk can be transferred to another business by buying insurance or outsourcing to another business.[4] The reality of some risks may be disputed. In such cases leadership may choose to deny the risk, which is itself a potential risk.
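Where reliable dollar figures are available, the quantitative analysis mentioned in step 4 is commonly expressed as an annualized loss expectancy (ALE): the single loss expectancy (asset value multiplied by an exposure factor) times the annualized rate of occurrence. A minimal sketch in Python, with every figure invented purely for illustration:

# Quantitative risk sketch: annualized loss expectancy (ALE).
# All asset values, exposure factors and occurrence rates below are
# invented for illustration only.

def ale(asset_value: float, exposure_factor: float, annual_rate: float) -> float:
    """ALE = SLE * ARO, where SLE = asset value * exposure factor."""
    sle = asset_value * exposure_factor   # single loss expectancy
    return sle * annual_rate              # annualized loss expectancy

# A countermeasure is cost-effective if it costs less per year than
# the reduction in ALE it buys.
before = ale(asset_value=500_000, exposure_factor=0.4, annual_rate=0.5)
after = ale(asset_value=500_000, exposure_factor=0.4, annual_rate=0.1)
print(f"ALE before control: ${before:,.0f}")                    # $100,000
print(f"ALE after control:  ${after:,.0f}")                     # $20,000
print(f"Max justified annual control cost: ${before - after:,.0f}")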

Controls

When Management chooses to mitigate a risk, they will do so by implementing one or more of three different types of controls.

Administrative

Administrative controls (also called procedural controls) consist of approved written policies, procedures, standards and guidelines. Administrative controls form the framework for running the business and managing people. They inform people on how the business is to be run and how day-to-day operations are to be conducted. Laws and regulations created by government bodies are also a type of administrative control because they inform the business of how it must operate. Some industry sectors have policies, procedures, standards and guidelines that must be followed - the Payment Card Industry (PCI) Data Security Standard required by Visa and MasterCard is one example. Other examples of administrative controls include the corporate security policy, password policy, hiring policies, and disciplinary policies. Administrative controls form the basis for the selection and implementation of logical and physical controls. Logical and physical controls are manifestations of administrative controls, which is why administrative controls are of paramount importance.

Logical

Logical controls (also called technical controls) use software and data to monitor and control access to information and computing systems. For example: passwords, network and host-based firewalls, network intrusion detection systems, access control lists, and data encryption are logical controls. An important logical control that is frequently overlooked is the principle of least privilege, which requires that an individual, program or system process not be granted any more access privileges than are necessary to perform the task. A blatant example of the failure to adhere to the principle of least privilege is logging into Windows as user Administrator to read email and surf the Web. Violations of this principle can also occur when an individual collects additional access privileges over time, as happens when employees' job duties change, they are promoted to a new position, or they transfer to another department. The access privileges required by their new duties are frequently added onto their already existing access privileges, which may no longer be necessary or appropriate.

Physical

Physical controls monitor and control the environment of the work place and computing facilities. They also monitor and control access to and from such facilities. For example: doors, locks, heating and air conditioning, smoke and fire alarms, fire suppression systems, cameras, barricades, fencing, security guards, and cable locks. Separating the network and work place into functional areas is also a physical control. An important physical control that is frequently overlooked is the separation of duties, which ensures that an individual cannot complete a critical task alone. For example: an employee who submits a request for reimbursement should not also be able to authorize payment or print the check. An applications programmer should not also be the server administrator or the database administrator - these roles and responsibilities must be separated from one another.
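To make the principle of least privilege and the separation of duties concrete, here is a minimal Python sketch; the roles and permission names are hypothetical, and a real system would enforce this inside the operating system or application rather than in a standalone script:

# Least-privilege sketch: every role gets only the permissions it needs;
# anything not explicitly granted is denied. Role and permission names
# are hypothetical.

ROLE_PERMISSIONS = {
    "clerk": {"submit_reimbursement"},
    "manager": {"approve_reimbursement"},
    "payments": {"print_check"},
}

def is_allowed(role: str, permission: str) -> bool:
    # Default deny: unknown roles and ungranted permissions are refused.
    return permission in ROLE_PERMISSIONS.get(role, set())

# Separation of duties: the clerk who submits a claim cannot approve it
# or print the check.
assert is_allowed("clerk", "submit_reimbursement")
assert not is_allowed("clerk", "approve_reimbursement")
assert not is_allowed("clerk", "print_check")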

Security classification for information

An important aspect of information security and risk management is recognizing the value of information and defining appropriate procedures and protection requirements for the information. Not all information is equal, and so not all information requires the same degree of protection. This requires information to be assigned a security classification. The first step in information classification is to identify a member of senior management as the owner of the particular information to be classified. Next, develop a classification policy. The policy should describe the different classification labels, define the criteria for assigning a particular label to information, and list the required security controls for each classification. Factors that influence which classification should be assigned include the information's value to the organization, its age, and whether or not it has become obsolete. Laws and other regulatory requirements are also important considerations when classifying information. The type of information security classification labels selected and used will depend on the nature of the organisation, with examples being:

In the business sector, labels such as: Public, Sensitive, Private, Confidential. In the government sector, labels such as: Unclassified, Sensitive But Unclassified, Restricted, Confidential, Secret, Top Secret and their non-English equivalents. In cross-sectoral formations, the Traffic Light Protocol, which consists of: White, Green, Amber and Red.
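As an illustration only, a classification policy of the business-sector kind above can be represented as a mapping from label to required security controls; the handling rules listed here are hypothetical, not a prescriptive policy:

# Security classification sketch: business-sector labels mapped to
# required handling controls. The controls listed are hypothetical
# examples, not a prescriptive policy.

CLASSIFICATION_CONTROLS = {
    "Public":       {"integrity checks"},
    "Sensitive":    {"integrity checks", "access logging"},
    "Private":      {"integrity checks", "access logging", "encryption at rest"},
    "Confidential": {"integrity checks", "access logging",
                     "encryption at rest", "encryption in transit"},
}

def required_controls(label: str) -> set:
    """Look up the controls an asset with this label must receive."""
    return CLASSIFICATION_CONTROLS[label]

print(sorted(required_controls("Confidential")))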

All employees in the organization, as well as business partners, must be trained on the classification schema and understand the required security controls and handling procedures for each classification. The classification assigned to a particular information asset should be reviewed periodically to ensure the classification is still appropriate for the information and to ensure the security controls required by the classification are in place.

Access control

Access to protected information must be restricted to people who are authorized to access the information. The computer programs, and in many cases the computers that process the information, must also be authorized. This requires that mechanisms be in place to control the access to protected information. The sophistication of the access control mechanisms should be in parity with the value of the information being protected: the more sensitive or valuable the information, the stronger the control mechanisms need to be. The foundation on which access control mechanisms are built starts with identification and authentication. Identification is an assertion of who someone is or what something is. If a person makes the statement "Hello, my name is John Doe," they are making a claim of who they are. However, their claim may or may not be true. Before John Doe can be granted access to protected information it will be necessary to verify that the person claiming to be John Doe really is John Doe. Authentication is the act of verifying a claim of identity. When John Doe goes into a bank to make a withdrawal, he tells the bank teller he is John Doe (a claim of identity). The bank teller asks to see a photo ID, so he hands the teller his driver's license. The bank teller checks the license to make sure it has John Doe printed on it and compares the photograph
on the license against the person claiming to be John Doe. If the photo and name match the person, then the teller has authenticated that John Doe is who he claimed to be. There are three different types of information that can be used for authentication: something you know, something you have, or something you are. Examples of something you know include a PIN, a password, or your mother's maiden name. Examples of something you have include a driver's license or a magnetic swipe card. Something you are refers to biometrics; examples include palm prints, fingerprints, voice prints and retina (eye) scans. Strong authentication requires providing information from two of the three different types of authentication information; for example, something you know plus something you have. This is called two-factor authentication. On computer systems in use today, the username is the most common form of identification and the password is the most common form of authentication. Usernames and passwords have served their purpose, but they are no longer adequate and are slowly being replaced with more sophisticated authentication mechanisms. After a person, program or computer has successfully been identified and authenticated, it must be determined what informational resources they are permitted to access and what actions they will be allowed to perform (run, view, create, delete, or change). This is called authorization. Authorization to access information and other computing services begins with administrative policies and procedures. The policies prescribe what information and computing services can be accessed, by whom, and under what conditions. The access control mechanisms are then configured to enforce these policies. Different computing systems are equipped with different kinds of access control mechanisms; some may even offer a choice of different access control mechanisms. The access control mechanism a system offers will be based upon one of three approaches to access control, or it may be derived from a combination of the three. The non-discretionary approach consolidates all access control under a centralized administration; access to information and other resources is usually based on the individual's function (role) in the organization or the tasks the individual must perform. The discretionary approach gives the creator or owner of the information resource the ability to control access to those resources. In the mandatory access control approach, access is granted or denied based upon the security classification assigned to the information resource. Examples of common access control mechanisms in use today include role-based access control, available in many advanced database management systems; simple file permissions provided in the UNIX and Windows operating systems; Group Policy Objects provided in Windows network systems; Kerberos; RADIUS; TACACS; and the simple access lists used in many firewalls and routers. To be effective, policies and other security controls must be enforceable and upheld. Effective policies ensure that people are held accountable for their actions. All failed and successful authentication attempts must be logged, and all access to information must leave some type of audit trail.
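A minimal Python sketch of that identification, authentication and authorization sequence, using salted password hashing (PBKDF2) from the standard library; the username, role names and iteration count are illustrative assumptions:

import hashlib, hmac, os

# Authentication sketch: verify a claimed identity (username) against a
# stored salted password hash, then authorize an action by role.
# Usernames, roles and the iteration count are illustrative.

def hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = os.urandom(16)
users = {"jdoe": {"hash": hash_password("correct horse", salt),
                  "salt": salt, "role": "teller"}}
role_permissions = {"teller": {"view", "create"}, "auditor": {"view"}}

def authenticate(username: str, password: str) -> bool:
    record = users.get(username)
    if record is None:
        return False  # identification failed: no such claimed identity
    candidate = hash_password(password, record["salt"])
    return hmac.compare_digest(candidate, record["hash"])  # constant-time compare

def authorize(username: str, action: str) -> bool:
    return action in role_permissions.get(users[username]["role"], set())

if authenticate("jdoe", "correct horse"):
    print("authorized to delete?", authorize("jdoe", "delete"))  # False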

Cryptography

Information security uses cryptography to transform usable information into a form that renders it unusable by anyone other than an authorized user; this process is called encryption. Information that has been encrypted (rendered unusable) can be transformed back into its original usable form by an authorized user who possesses the cryptographic key, through the process of decryption. Cryptography is used in information security to protect information from unauthorized or accidental disclosure while the information is in transit (either electronically or physically) and while information is in storage. Cryptography provides information security with other useful applications as well, including improved authentication methods, message digests, digital signatures, non-repudiation, and encrypted network communications. Older, less secure applications such as telnet and ftp are slowly being replaced with more secure applications such as ssh that use encrypted network communications. Wireless communications can be encrypted using protocols such as WPA/WPA2 or the older (and less secure) WEP. Wired communications (such as ITU-T G.hn) are secured using AES for encryption and X.1035 for authentication and key exchange. Software applications such as GnuPG or PGP can be used to encrypt data files and email. Cryptography can introduce security problems when it is not implemented correctly. Cryptographic solutions need to use industry-accepted algorithms and implementations that have undergone rigorous peer review by independent experts in cryptography. The length and strength of the encryption key are also important considerations: a key that is weak or too short will produce weak encryption. The keys used for encryption and decryption must be protected with the same degree of rigor as any other confidential information. They must be protected from unauthorized disclosure and destruction, and they must be available when needed. PKI solutions address many of the problems that surround key management.
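As a minimal sketch of symmetric encryption and decryption, assuming the third-party Python cryptography package is available (its Fernet recipe combines encryption with an integrity check, so only holders of the key can read or undetectably alter the data):

# Symmetric encryption sketch using the third-party "cryptography"
# package (pip install cryptography). Fernet combines AES encryption
# with an authentication tag, so tampered ciphertext fails to decrypt.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # must be protected like any other secret
f = Fernet(key)

token = f.encrypt(b"card number 4111 1111 1111 1111")  # ciphertext
print(f.decrypt(token))       # original plaintext; only key holders can do this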

Defense in depth
Information security must protect information throughout its life span, from the initial creation of the information through to its final disposal. The information must be protected while in motion and while at rest. During its lifetime, information may pass through many different information processing systems and through many different parts of information processing systems. There are many different ways the information and information systems can be threatened. To fully protect the information during its lifetime, each component of the information processing system must have its own
protection mechanisms. The building up, layering on and overlapping of security measures is called defense in depth. The strength of any system is no greater than its weakest link. Using a defense in depth strategy, should one defensive measure fail, there are other defensive measures in place that continue to provide protection. Recall the earlier discussion about administrative controls, logical controls, and physical controls. The three types of controls can be used to form the basis upon which to build a defense-in-depth strategy. With this approach, defense-in-depth can be conceptualized as three distinct layers or planes laid one on top of the other. Additional insight into defense-in-depth can be gained by thinking of it as forming the layers of an onion, with data at the core, people as the outer layer, and network security, host-based security and application security forming the inner layers. Both perspectives are equally valid, and each provides valuable insight into the implementation of a good defense-in-depth strategy.

Process

The terms reasonable and prudent person, due care and due diligence have been used in the fields of Finance, Securities, and Law for many years. In recent years these terms have found their way into the fields of computing and information security. U.S.A. Federal Sentencing Guidelines now make it possible to hold corporate officers liable for failing to exercise due care and due diligence in the management of their information systems. In the business world, stockholders, customers, business partners and governments have the expectation that corporate officers will run the business in accordance with accepted business practices and in compliance with laws and other regulatory requirements. This is often described as the "reasonable and prudent person" rule. A prudent person takes due care to ensure that everything necessary is done to operate the business by sound business principles and in a legal, ethical manner. A prudent person is also diligent (mindful, attentive, and ongoing) in their due care of the business. In the field of information security, Harris[6] offers the following definitions of due care and due diligence: "Due care are steps that are taken to show that a company has taken responsibility for the activities that take place within the corporation and has taken the necessary steps to help protect the company, its resources, and employees." And, [due diligence are the] "continual activities that make sure the protection mechanisms are continually maintained and operational." Attention should be paid to two important points in these definitions. First, in due care, steps are taken to show: this means that the steps can be verified, measured, or even produce tangible artifacts. Second, in due diligence, there are continual activities: this means that people are actually doing things to monitor and maintain the protection mechanisms, and these activities are ongoing.

Security governance

The Software Engineering Institute at Carnegie Mellon University, in a publication titled "Governing for Enterprise Security (GES)", defines characteristics of effective security governance. These include:

An enterprise-wide issue
Leaders are accountable
Viewed as a business requirement
Risk-based
Roles, responsibilities, and segregation of duties defined
Addressed and enforced in policy
Adequate resources committed
Staff aware and trained
A development life cycle requirement
Planned, managed, measurable, and measured
Reviewed and audited

Change management

Change management is a formal process for directing and controlling alterations to the information processing environment. This includes alterations to desktop computers, the network, servers and software. The objectives of change management are to reduce the risks posed by changes to the information processing environment and to improve the stability and reliability of the processing environment as changes are made. It is not the objective of change management to prevent or hinder necessary changes from being implemented. Any change to the information processing environment introduces an element of risk. Even apparently simple changes can have unexpected effects. One of Management's many responsibilities is the management of risk. Change management is a tool for managing the risks introduced by changes to the information processing environment. Part of the change management process ensures that changes are not implemented at inopportune times when they may disrupt critical business processes or interfere with other changes being implemented. Not every change needs to be managed. Some kinds of changes are a part of the everyday routine of information processing and adhere to a predefined procedure, which reduces the overall level of risk to the processing environment. Creating a new user account or deploying a new desktop computer are examples of changes that do not generally require change management. However, relocating user file shares or upgrading the Email server poses a much higher level of risk to the processing environment and is not a normal everyday activity. The critical first steps in change management are (a) defining change (and communicating that definition) and (b) defining the scope of the change system. Change management is usually overseen by a Change Review Board composed of representatives from key business areas, security, networking, systems administration, database administration, applications development, desktop support and the help desk. The tasks of the Change Review Board can be facilitated with the use of an automated workflow application. The responsibility of the Change Review Board is to ensure the organization's documented change management procedures are followed. The change management process is as follows:

Requested: Anyone can request a change. The person making the change request may or may not be the same person that performs the analysis or implements the change. When a request for change is received, it may undergo a preliminary review to determine if the requested change is compatible with the organization's business model and practices, and to determine the amount of resources needed to implement the change.

Approved: Management runs the business and controls the allocation of resources; therefore, Management must approve requests for changes and assign a priority for
every change. Management might choose to reject a change request if the change is not compatible with the business model, industry standards or best practices. Management might also choose to reject a change request if the change requires more resources than can be allocated for the change.

Planned: Planning a change involves discovering the scope and impact of the proposed change; analyzing the complexity of the change; allocating resources; and developing, testing and documenting both implementation and backout plans. The criteria for deciding to back out must also be defined.

Tested: Every change must be tested in a safe test environment, which closely reflects the actual production environment, before the change is applied to the production environment. The backout plan must also be tested.

Scheduled: Part of the change review board's responsibility is to assist in the scheduling of changes by reviewing the proposed implementation date for potential conflicts with other scheduled changes or critical business activities.

Communicated: Once a change has been scheduled it must be communicated. The communication is to give others the opportunity to remind the change review board about other changes or critical business activities that might have been overlooked when scheduling the change. The communication also serves to make the Help Desk and users aware that a change is about to occur. Another responsibility of the change review board is to ensure that scheduled changes have been properly communicated to those who will be affected by the change or otherwise have an interest in the change.

Implemented: At the appointed date and time, the changes must be implemented. Part of the planning process was to develop an implementation plan, a testing plan and a backout plan. If the implementation of the change fails, the post-implementation testing fails, or other "drop dead" criteria are met, the backout plan should be implemented.

Documented: All changes must be documented. The documentation includes the initial request for change, its approval, the priority assigned to it, the implementation, testing and backout plans, the results of the change review board critique, the date/time the change was implemented, who implemented it, and whether the change was implemented successfully, failed, or was postponed.

Post change review: The change review board should hold a post-implementation review of changes. It is particularly important to review failed and backed-out changes. The review board should try to understand the problems that were encountered, and look for areas for improvement.
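The sequence above behaves like a simple state machine: a change cannot skip a step, and any step may fall back to the backout plan. A minimal Python sketch, with the transitions simplified for illustration:

# Change management sketch: a change request must move through the
# documented states in order; any step may fall back to "Backed out".
# State names come from the process above; transitions are simplified.

ORDER = ["Requested", "Approved", "Planned", "Tested",
         "Scheduled", "Communicated", "Implemented",
         "Documented", "Reviewed"]

class ChangeRequest:
    def __init__(self, description: str):
        self.description = description
        self.state = "Requested"

    def advance(self) -> None:
        i = ORDER.index(self.state)
        if i + 1 >= len(ORDER):
            raise ValueError("change is already fully reviewed")
        self.state = ORDER[i + 1]   # cannot skip a step

    def back_out(self) -> None:
        self.state = "Backed out"   # the tested backout plan is invoked

cr = ChangeRequest("upgrade the Email server")
cr.advance()                        # Requested -> Approved
cr.advance()                        # Approved -> Planned
print(cr.state)                     # Planned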

Change management procedures that are simple to follow and easy to use can greatly reduce the overall risks created when changes are made to the information processing environment. Good change management procedures improve the overall quality and success of changes as they are implemented. This is accomplished through planning, peer review, documentation and communication.

Business continuity

Business continuity is the mechanism by which an organization continues to operate its critical business units, during planned or unplanned disruptions that affect normal business operations, by invoking planned and managed procedures.

Contrary to what most people think, business continuity is not necessarily an IT system or process; it is about the business. Today, disasters or disruptions to business are a reality. Whether the disaster is natural or man-made (TIME magazine has a website on the top 10), it affects normal life and so business. So why is planning so important? Let us face reality: all businesses recover, whether they planned for recovery or not, simply because business is about earning money for survival. Planning is merely getting better prepared to face a disruption, knowing full well that the best plans may fail. Planning helps to reduce the cost of recovery and operational overheads, and, most importantly, to sail through smaller disruptions effortlessly. To create effective plans, businesses need to focus upon the following key questions. Most of these are common knowledge, and anyone can do a BCP.

1. Should a disaster strike, what are the first few things that I should do? Should I call people to find out if they are OK, or call up the bank to make sure my money is safe? This is Emergency Response. Emergency Response services help take the first hit when the disaster strikes, and if the disaster is serious enough, the Emergency Response teams need to quickly get a Crisis Management team in place.

2. What parts of my business should I recover first? The one that brings me the most money, the one where I spend the most, or the one that will ensure sustained future growth? The identified sections are the critical business units. There is no magic bullet here; no one answer satisfies all. Businesses need to find answers that meet their business requirements.

3. How soon should I target to recover my critical business units? In BCP technical jargon, this is called the Recovery Time Objective, or RTO. This objective defines what costs the business will need to spend to recover from a disruption. For example, it is cheaper to recover a business in one day than in one hour.

4. What all do I need to recover the business? IT, machinery, records... food, water, people... so many aspects to dwell upon. The cost factor becomes clearer now... Business leaders need to drive business continuity. Hold on: my IT manager spent $200,000 last month and created a DRP (Disaster Recovery Plan); whatever happened to that? A DRP is about continuing an IT system, and is one of the sections of a comprehensive Business Continuity Plan. See below for more on this.

5. And where do I recover my business from? Will the business center give me space to work, or would it be flooded by many people queuing up for the same reasons that I am?

6. But once I do recover from the disaster and work in reduced production capacity, since my main operational sites are unavailable, how long can this go on? How long can I do without my original sites, systems and people? This defines the amount of business resilience a business may have.

7. Now that I know how to recover my business, how do I make sure my plan works? Most BCP pundits would recommend testing the plan at least once a year, reviewing it for adequacy, and rewriting or updating the plans either annually or when businesses change.

Disaster recovery planning

While a business continuity plan (BCP) takes a broad approach to dealing with organization-wide effects of a disaster, a disaster recovery plan (DRP), which is a subset of the business continuity plan, is instead focused on taking the necessary steps to resume normal business operations as quickly as possible.
A disaster recovery plan is executed immediately after the disaster occurs and details what steps are to be taken in order to recover critical information technology infrastructure.

Laws and regulations Below is a partial listing of European, United Kingdom, Canadian and USA governmental laws and regulations that have, or will have, a significant effect on data processing and information security. Important industry sector regulations have also been included when they have a significant impact on information security.

The UK Data Protection Act 1998 makes new provisions for the regulation of the processing of information relating to individuals, including the obtaining, holding, use or disclosure of such information.

The European Union Data Protection Directive (EUDPD) requires that all EU member states adopt national regulations to standardize the protection of data privacy for citizens throughout the EU.

The Computer Misuse Act 1990 is an Act of the UK Parliament making computer crime (e.g. cracking, sometimes incorrectly referred to as hacking) a criminal offence. The Act has become a model from which several other countries, including Canada and the Republic of Ireland, have drawn inspiration when subsequently drafting their own information security laws.

EU data retention laws require Internet service providers and phone companies to keep data on every electronic message sent and phone call made for between six months and two years.

The Family Educational Rights and Privacy Act (FERPA) (20 U.S.C. 1232g; 34 CFR Part 99) is a USA Federal law that protects the privacy of student education records. The law applies to all schools that receive funds under an applicable program of the U.S. Department of Education. Generally, schools must have written permission from the parent or eligible student in order to release any information from a student's education record.

The Health Insurance Portability and Accountability Act (HIPAA) of 1996 requires the adoption of national standards for electronic health care transactions and national identifiers for providers, health insurance plans, and employers. It also requires health care providers, insurance providers and employers to safeguard the security and privacy of health data.

The Gramm-Leach-Bliley Act of 1999 (GLBA), also known as the Financial Services Modernization Act of 1999, protects the privacy and security of private financial information that financial institutions collect, hold, and process.

The Sarbanes-Oxley Act of 2002 (SOX): Section 404 of the act requires publicly traded companies to assess the effectiveness of their internal controls for financial reporting in annual reports they submit at the end of each fiscal year. Chief information officers are responsible for the security, accuracy and reliability of the systems that manage and report the financial data. The act also requires publicly traded companies to engage independent auditors who must attest to, and report on, the validity of their assessments.

The Payment Card Industry Data Security Standard (PCI DSS) establishes comprehensive requirements for enhancing payment account data security. It was developed by the founding payment brands of the PCI Security Standards Council - American Express, Discover Financial Services, JCB, MasterCard Worldwide and Visa International - to help facilitate the broad adoption of consistent data security measures on a global basis. The PCI DSS is a multifaceted security standard that includes requirements for security management, policies, procedures, network architecture, software design and other critical protective measures.

State security breach notification laws (California and many others) require businesses, nonprofits, and state institutions to notify consumers when unencrypted "personal information" may have been compromised, lost, or stolen.

The Personal Information Protection and Electronic Documents Act (PIPEDA) is an Act to support and promote electronic commerce by protecting personal information that is collected, used or disclosed in certain circumstances, by providing for the use of electronic means to communicate or record information or transactions, and by amending the Canada Evidence Act, the Statutory Instruments Act and the Statute Revision Act.

Sources of standards

The International Organization for Standardization (ISO) is a consortium of national standards institutes from 157 countries, coordinated through a secretariat in Geneva, Switzerland. ISO is the world's largest developer of standards. ISO 15443: "Information technology - Security techniques - A framework for IT security assurance", ISO/IEC 27002: "Information technology - Security techniques - Code of practice for information security management", ISO/IEC 20000: "Information technology - Service management", and ISO/IEC 27001: "Information technology - Security techniques - Information security management systems - Requirements" are of particular interest to information security professionals.

The USA National Institute of Standards and Technology (NIST) is a non-regulatory federal agency within the U.S. Department of Commerce. The NIST Computer Security Division develops standards, metrics, tests and validation programs, and publishes standards and guidelines to increase secure IT planning, implementation, management and operation. NIST is also the custodian of the USA Federal Information Processing Standard publications (FIPS).

The Internet Society is a professional membership society with more than 100 organizational and over 20,000 individual members in over 180 countries. It provides leadership in addressing issues that confront the future of the Internet, and it is the organizational home for the groups responsible for Internet infrastructure standards, including the Internet Engineering Task Force (IETF) and the Internet Architecture Board (IAB). The ISOC hosts the Requests for Comments (RFCs), which include the Official Internet Protocol Standards and RFC 2196, the Site Security Handbook.

The Information Security Forum is a global nonprofit organization of several hundred leading organizations in financial services, manufacturing, telecommunications, consumer goods, government, and other areas. It undertakes research into information security practices and offers advice in its biannual Standard of Good Practice and in more detailed advisories for members.

Information technology adoption continues to increase and has spread to vital infrastructure for civil and military organizations. Anyone can become involved in cyberwarfare, so it is crucial that a nation have skilled professionals to defend its vital interests.

Firewalls

A firewall is a part of a computer system or network that is designed to block unauthorized access while permitting authorized communications. It is a device or set of devices configured to permit or deny network transmissions based upon a set of rules and other criteria. The term firewall/fireblock originally meant a wall to confine a fire or potential fire within a building; later uses refer to similar structures, such as the metal sheet separating the engine compartment of a vehicle or aircraft from the passenger compartment.

Firewalls can be implemented in either hardware or software, or a combination of both. Firewalls are frequently used to prevent unauthorized Internet users from accessing private networks connected to the Internet, especially intranets. All messages entering or leaving the intranet pass through the firewall, which inspects each message and blocks those that do not meet the specified security criteria. There are several types of firewall techniques:

1. Packet filter: Packet filtering inspects each packet passing through the network and accepts or rejects it based on user-defined rules. Although difficult to configure, it is fairly effective and mostly transparent to its users. It is susceptible to IP spoofing.
2. Application gateway: Applies security mechanisms to specific applications, such as FTP and Telnet servers. This is very effective, but can impose performance degradation.
3. Circuit-level gateway: Applies security mechanisms when a TCP or UDP connection is established. Once the connection has been made, packets can flow between the hosts without further checking.
4. Proxy server: Intercepts all messages entering and leaving the network. The proxy server effectively hides the true network addresses.

First generation: packet filters

The first paper published on firewall technology appeared in 1988, when engineers from Digital Equipment Corporation (DEC) developed filter systems known as packet filter firewalls. This fairly basic system was the first generation of what became a highly evolved and technical Internet security feature. At AT&T Bell Labs, Bill Cheswick and Steve Bellovin continued the research in packet filtering and developed a working model for their own company based on the original first generation architecture. This type of packet filtering pays no attention to whether a packet is part of an existing stream of traffic (i.e. it stores no information on connection "state"). Instead, it filters each packet based only on information contained in the packet itself (most commonly using a combination of the packet's source and destination address, its protocol, and, for TCP and UDP traffic, the port number). TCP and UDP protocols constitute most communication over the Internet, and because TCP and UDP traffic by convention uses well-known ports for particular types of traffic, a "stateless" packet filter can distinguish between, and thus control, those types of traffic (such as web browsing, remote printing, email transmission, file transfer), unless the machines on each side of the packet filter are both using the same non-standard ports. Packet filtering firewalls work mainly on the first three layers of the OSI reference model, which means most of the work is done between the network and physical layers, with a little bit of peeking into the transport layer to figure out source and destination port numbers.[2]
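A minimal Python sketch of the stateless filtering just described: each packet is judged in isolation against an ordered rule list using only fields carried in the packet itself. The rules and addresses are invented for illustration:

# Stateless packet filter sketch: each packet is judged in isolation
# against an ordered rule list; the first matching rule wins and the
# default is to deny. Rules and addresses are invented.

RULES = [
    # (source prefix, protocol, destination port, action)
    ("192.168.1.", "tcp", 23, "deny"),    # block telnet
    ("192.168.1.", "tcp", 80, "allow"),   # allow web browsing
]

def filter_packet(src_ip: str, protocol: str, dst_port: int) -> str:
    for src_prefix, proto, port, action in RULES:
        if src_ip.startswith(src_prefix) and protocol == proto and dst_port == port:
            return action
    return "deny"  # default deny: anything unmatched is dropped

print(filter_packet("192.168.1.5", "tcp", 23))   # deny
print(filter_packet("192.168.1.5", "tcp", 80))   # allow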

When a packet originates from the sender and passes through a firewall, the device checks for matches to any of the packet filtering rules that are configured in the firewall and drops or rejects the packet accordingly. The firewall filters each packet on a protocol/port number basis (GSS). For example, if a rule in the firewall exists to block telnet access, then the firewall will block TCP traffic on port number 23.

Second generation: application layer

The key benefit of application layer filtering is that it can "understand" certain applications and protocols (such as File Transfer Protocol, DNS, or web browsing), and it can detect if an unwanted protocol is sneaking through on a non-standard port or if a protocol is being abused in any harmful way. An application firewall is much more secure and reliable than a packet filter firewall because it works on all seven layers of the OSI reference model, from the application layer down to the physical layer. This is similar to a packet filter firewall, but here information can also be filtered on the basis of content. A well-known example of an application firewall is ISA (Internet Security and Acceleration) Server. An application firewall can filter higher-layer protocols such as FTP, Telnet, DNS, DHCP, HTTP, TCP, UDP and TFTP (GSS). For example, if an organization wants to block all information related to the word "foo", content filtering can be enabled on the firewall to block that particular word. Because of this deeper inspection, application firewalls are much slower than stateful firewalls.

Third generation: "stateful" filters

From 1989 to 1990, three colleagues from AT&T Bell Laboratories, Dave Presetto, Janardan Sharma, and Kshitij Nigam, developed the third generation of firewalls, calling them circuit level firewalls. Third-generation firewalls, in addition to what first- and second-generation firewalls look for, also consider the placement of each individual packet within the packet series. This technology is generally referred to as stateful packet inspection, as it maintains records of all connections passing through the firewall and is able to determine whether a packet is the start of a new connection, a part of an existing connection, or an invalid packet. Though there is still a set of static rules in such a firewall, the state of a connection can itself be one of the criteria which trigger specific rules. This type of firewall can be exploited by certain denial-of-service attacks, which can fill the connection tables with illegitimate connections.

Subsequent developments

In 1992, Bob Braden and Annette DeSchon at the University of Southern California (USC) were refining the concept of a firewall. The product known as "Visas" was the first system to have a visual integration interface with colors and icons, which could be easily implemented and accessed on a computer operating system such as Microsoft's Windows or Apple's MacOS. In 1994 an Israeli company called Check Point Software Technologies built this into readily available software known as FireWall-1. The existing deep packet inspection functionality of modern firewalls can be shared by intrusion-prevention systems (IPS). Currently, the Middlebox Communication Working Group of the Internet Engineering Task Force (IETF) is working on standardizing protocols for managing firewalls and other middleboxes.

Another axis of development is the integration of user identity into firewall rules. Many firewalls provide such features by binding user identities to IP or MAC addresses, which is very approximate and can be easily circumvented. The NuFW firewall provides real identity-based firewalling by requesting the user's signature for each connection. authpf on BSD systems loads firewall rules dynamically per user, after authentication via SSH.

Types

There are several classifications of firewalls depending on where the communication is taking place, where the communication is intercepted and the state that is being traced.

Network layer and packet filters

Network layer firewalls, also called packet filters, operate at a relatively low level of the TCP/IP protocol stack, not allowing packets to pass through the firewall unless they match the established rule set. The firewall administrator may define the rules, or default rules may apply. The term "packet filter" originated in the context of BSD operating systems. Network layer firewalls generally fall into two sub-categories, stateful and stateless. Stateful firewalls maintain context about active sessions, and use that "state information" to speed packet processing. Any existing network connection can be described by several properties, including source and destination IP address, UDP or TCP ports, and the current stage of the connection's lifetime (including session initiation, handshaking, data transfer, or connection completion). If a packet does not match an existing connection, it will be evaluated according to the ruleset for new connections. If a packet matches an existing connection based on comparison with the firewall's state table, it will be allowed to pass without further processing. Stateless firewalls require less memory, and can be faster for simple filters that require less time to filter than to look up a session. They may also be necessary for filtering stateless network protocols that have no concept of a session. However, they cannot make more complex decisions based on what stage communications between hosts have reached. Modern firewalls can filter traffic based on many packet attributes, such as source IP address, source port, destination IP address or port, and destination service such as WWW or FTP. They can filter based on protocols, TTL values, the netblock of the originator or source, and many other attributes. Commonly used packet filters on various versions of Unix are ipf (various), ipfw (FreeBSD/Mac OS X), pf (OpenBSD, and all other BSDs), and iptables/ipchains (Linux).
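The state table just described can be sketched minimally in Python as follows; connection keys are simplified to a 4-tuple, and the addresses and toy ruleset are invented:

# Stateful inspection sketch: packets belonging to a connection already
# recorded in the state table pass without re-checking the ruleset; only
# packets that open new connections are evaluated against the rules.
# Connection keys are simplified to a 4-tuple; addresses are invented.

state_table = set()  # established connections

def new_connection_allowed(key) -> bool:
    src, sport, dst, dport = key
    return dport in (80, 443)    # toy ruleset: only web traffic may start

def handle_packet(src, sport, dst, dport, syn=False):
    key = (src, sport, dst, dport)
    if key in state_table:
        return "allow"           # part of an existing connection
    if syn and new_connection_allowed(key):
        # Note: attackers can try to flood this table with bogus
        # connections, the denial-of-service risk mentioned above.
        state_table.add(key)
        return "allow"
    return "deny"                # invalid or disallowed packet

print(handle_packet("10.0.0.2", 50000, "93.184.216.34", 80, syn=True))  # allow
print(handle_packet("10.0.0.2", 50000, "93.184.216.34", 80))            # allow
print(handle_packet("10.0.0.9", 40000, "93.184.216.34", 23, syn=True))  # deny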

Application layer

Application-layer firewalls work on the application level of the TCP/IP stack (i.e., all browser traffic, or all telnet or ftp traffic), and may intercept all packets traveling to or from an application. They block other packets (usually dropping them without acknowledgment to the sender). In principle, application firewalls can prevent all unwanted outside traffic from reaching protected machines. By inspecting all packets for improper content, firewalls can restrict or outright prevent the spread of networked computer worms and trojans. The additional inspection criteria can add extra latency to the forwarding of packets to their destination.

Proxy

A proxy device (running either on dedicated hardware or as software on a general-purpose machine) may act as a firewall by responding to input packets (connection requests, for example) in the manner of an application, whilst blocking other packets. Proxies make tampering with an internal system from the external network more difficult, and misuse of one internal system would not necessarily cause a security breach exploitable from outside the firewall (as long as the application proxy remains intact and properly configured). Conversely, intruders may hijack a publicly-reachable system and use it as a proxy for their own purposes; the proxy then masquerades as that system to other internal machines. While use of internal address spaces enhances security, crackers may still employ methods such as IP spoofing to attempt to pass packets to a target network. Firewalls often have network address translation (NAT) functionality, and the hosts protected behind a firewall commonly have addresses in the "private address range" defined in RFC 1918; NAT hides the true addresses of the protected hosts. Originally, the NAT function was developed to address the limited number of IPv4 routable addresses that could be used or assigned to companies or individuals, as well as to reduce both the number, and therefore the cost, of public addresses needed for every computer in an organization. Hiding the addresses of protected devices has become an increasingly important defense against network reconnaissance.

Encryption

Encryption is the conversion of data into a form, called a ciphertext, that cannot be easily understood by unauthorized people. Decryption is the process of converting encrypted data back into its original form, so it can be understood. The use of encryption/decryption is as old as the art of communication. In wartime, a cipher, often incorrectly called a code, can be employed to keep the enemy from obtaining the contents of transmissions. (Technically, a code is a means of representing a signal without the intent of keeping it secret; examples are Morse code and ASCII.) Simple ciphers include the substitution of letters for numbers, the rotation of letters in the alphabet, and the "scrambling" of voice signals by inverting the sideband frequencies. More complex ciphers work according to sophisticated computer algorithms that rearrange the data bits in digital signals. In order to recover the contents of an encrypted signal, the correct decryption key is required. The key is a value that the decryption algorithm uses to undo the work of the encryption algorithm. Alternatively, a computer can be used in an attempt to break the cipher. The more complex the encryption algorithm, the more difficult it becomes to eavesdrop on the communications without access to the key. Encryption/decryption is especially important in wireless communications, because wireless circuits are easier to tap than their hard-wired counterparts. Nevertheless, encryption/decryption is a good idea when carrying out any kind of sensitive transaction, such as a credit-card purchase online or the discussion of a company secret between different departments in the organization. The stronger the cipher -- that is, the harder it is for unauthorized people to break it -- the better, in general. However, as the strength of encryption/decryption increases, so does the cost. In recent years, a controversy has arisen over so-called strong encryption.
This refers to ciphers that are essentially unbreakable without the decryption keys. While most companies and their customers view it as a means of keeping secrets and minimizing fraud, some governments view strong encryption as a potential vehicle by which
terrorists might evade authorities. These governments, including that of the United States, want to set up a key-escrow arrangement. This means everyone who uses a cipher would be required to provide the government with a copy of the key. Decryption keys would be stored in a supposedly secure place, used only by authorities, and used only if backed up by a court order. Opponents of this scheme argue that criminals could hack into the key-escrow database and illegally obtain, steal, or alter the keys. Supporters claim that while this is a possibility, implementing the key-escrow scheme would be better than doing nothing to prevent criminals from freely using encryption/decryption.

Why Have Cryptography

Encryption is the science of changing data so that it is unrecognisable and useless to an unauthorised person. Decryption is changing it back to its original form. The most secure techniques use a mathematical algorithm and a variable value known as a 'key'. The selected key (often any random character string) is input on encryption and is integral to the changing of the data. The exact same key must be input to enable decryption of the data. This is the basis of the protection: if the key (sometimes called a password) is known only by authorized individuals, the data cannot be exposed to other parties. Only those who know the key can decrypt it. This is known as 'private key' cryptography, which is the most well known form.
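The "rotation of letters in the alphabet" mentioned under Encryption is the simplest private-key scheme: the shift amount is the key, and the exact same key must be supplied to decrypt. A toy Python sketch, never to be used for real protection:

# Toy private-key cipher: rotate letters by a secret shift (a Caesar
# cipher). The shift is the key; decryption applies the exact same key
# in reverse. Illustrative only -- trivially breakable.
import string

ALPHABET = string.ascii_lowercase

def rotate(text: str, key: int) -> str:
    shifted = ALPHABET[key % 26:] + ALPHABET[:key % 26]
    return text.translate(str.maketrans(ALPHABET, shifted))

ciphertext = rotate("attack at dawn", 3)
print(ciphertext)               # dwwdfn dw gdzq
print(rotate(ciphertext, -3))   # attack at dawn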

Other Uses of Cryptography

Many techniques also provide for detection of any tampering with the encrypted data. A 'message authentication code' (MAC) is created, which is checked when the data is decrypted. If the code fails to match, the data has been altered since it was encrypted. This facility has many practical applications.
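A minimal sketch of such a code using Python's standard library HMAC support; the key and message are invented. The receiver recomputes the code and rejects the data if it fails to match:

# Tamper-detection sketch: an HMAC (hash-based message authentication
# code) computed over the data with a shared secret key. Any change to
# the data (or the wrong key) produces a mismatching code.
import hmac, hashlib

key = b"shared-secret-key"             # invented for illustration
message = b"transfer $100 to account 42"

mac = hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, mac: bytes) -> bool:
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, mac)  # constant-time comparison

print(verify(key, message, mac))                          # True
print(verify(key, b"transfer $9999 to account 42", mac))  # False: altered data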

Cyberterrorism is a phrase used to describe the use of Internet-based attacks in terrorist activities, including acts of deliberate, large-scale disruption of computer networks, especially of personal computers attached to the Internet, by means of tools such as computer viruses. Cyberterrorism is a controversial term. Some authors choose a very narrow definition, relating to deployments, by known terrorist organizations, of disruption attacks against information systems for the primary purpose of creating alarm and panic. By this narrow definition, it is difficult to identify any instances of cyberterrorism. Cyberterrorism can also be defined much more generally as any computer crime targeting computer networks without necessarily affecting real world infrastructure, property, or lives. There is much concern from government and media sources about potential damage that could be caused by cyberterrorism, and this has prompted official responses from government agencies. Cyberterrorism is defined by the Technolytics Institute as "The premeditated use of disruptive activities, or the threat thereof, against computers and/or networks, with the intention to cause harm or further social, ideological, religious, political or similar objectives. Or to intimidate any person in furtherance of such objectives."[2] The term was coined by Barry C. Collin.[3] The National Conference of State Legislatures, an organization of legislators created to help policymakers with issues such as the economy and homeland security, defines cyberterrorism as: [T]he use of information technology by terrorist groups and individuals to further their agenda. This can include use of information technology to organize and execute attacks against networks, computer systems and telecommunications infrastructures, or for exchanging information or making threats electronically. Examples are hacking into computer systems, introducing viruses to vulnerable networks, web site defacing, denial-of-service attacks, or terroristic threats made via electronic communication.[4] For the use of the Internet by terrorist groups for organization, see Internet and terrorism. Cyberterrorism can also include attacks on Internet business, but when this is done for economic motivations rather than ideological ones, it is typically regarded as cybercrime. As shown above, there are multiple definitions of cyberterrorism and most are overly broad. There is controversy concerning overuse of the term and hyperbole in the media and by security vendors trying to sell "solutions".

Concerns

As the Internet becomes more pervasive in all areas of human endeavor, individuals or groups can use the anonymity afforded by cyberspace to threaten citizens, specific groups (i.e. with membership based on ethnicity or belief), communities and entire countries, without the inherent threat of capture, injury, or death to the attacker that being physically present would bring. As the Internet continues to expand and computer systems are assigned more responsibility while becoming ever more complex and interdependent, sabotage or

terrorism via cyberspace may become a more serious threat and is possibly one of the top 10 events to "end the human race History Public interest in cyberterrorism began in the late 1980s. As 2000 approached, the fear and uncertainty about the millennium bug heightened and interest in potential cyberterrorist attacks also increased. However, although the millennium bug was by no means a terrorist attack or plot against the world or the United States, it did act as a catalyst in sparking the fears of a possibly large-scale devastating cyber-attack. Commentators noted that many of the facts of such incidents seemed to change, often with exaggerated media reports. The high profile terrorist attacks in the United States on September 11, 2001 and the ensuing War on Terror by the US led to further media coverage of the potential threats of cyberterrorism in the years following. Mainstream media coverage often discusses the possibility of a large attack making use of computer networks to sabotage critical infrastructures with the aim of putting human lives in jeopardy or causing disruption on a national scale either directly or by disruption of the national economy. Authors such as Winn Schwartau and John Arquilla are reported to have had considerable financial success selling books which described what were purported to be plausible scenarios of mayhem caused by cyberterrorism. Many critics claim that these books were unrealistic in their assessments of whether the attacks described (such as nuclear meltdowns and chemical plant explosions) were possible. A common thread throughout what critics perceive as cyberterror-hype is that of non-falsifiability; that is, when the predicted disasters fail to occur, it only goes to show how lucky we've been so far, rather than impugning the theory. US military response The US Department of Defense (DoD) charged the United States Strategic Command with the duty of combating cyberterrorism. This is accomplished through the Joint Task ForceGlobal Network Operations, which is the operational component supporting USSTRATCOM in defense of the DoD's Global Information Grid. This is done by integrating GNO capabilities into the operations of all DoD computers, networks, and systems used by DoD combatant commands, services and agencies. On November 2, 2006, the Secretary of the Air Force announced the creation of the Air Force's newest MAJCOM, the Air Force Cyber Command, which would be tasked to monitor and defend American interest in cyberspace. The plan was however replaced by the creation of Twenty-Fourth Air Force which became active in August 2009 and would be a component of the planned United States Cyber Command. On December 22, 2009, the White House named its head of Cyber Security as Howard Schmidt. He will coordinate U.S Government, military and intelligence efforts to repel hackers. Examples

Sabotage

Mostly non-political acts of sabotage have caused financial and other damage, as in a case where a disgruntled employee caused the release of untreated sewage into water in Maroochy Shire, Australia.

More recently, in May 2007, Estonia was subjected to a mass cyber-attack in the wake of the removal of a Russian World War II war memorial from downtown Tallinn. The attack was a distributed denial-of-service attack in which selected sites were bombarded with traffic to force them offline; nearly all Estonian government ministry networks, as well as two major Estonian bank networks, were knocked offline. In addition, the website of the political party of Estonia's Prime Minister, Andrus Ansip, featured a counterfeit letter of apology from Ansip for removing the memorial statue. Despite speculation that the attack had been coordinated by the Russian government, Estonia's defence minister admitted he had no conclusive evidence linking the cyber-attacks to Russian authorities. Russia called accusations of its involvement "unfounded", and neither NATO nor European Commission experts were able to find any conclusive proof of official Russian government participation.[8] In January 2008 a man from Estonia was convicted of launching the attacks against the Estonian Reform Party website and was fined.

Website Defacement and Denial of Service

[Image: the website of Air Botswana, defaced by a group calling itself the "Pakistan Cyber Army".]

In October 2007, the website of Ukrainian president Viktor Yushchenko was attacked by hackers. A radical Russian nationalist youth group, the Eurasian Youth Movement, claimed responsibility.

In 1999, hackers attacked NATO computers, flooding them with email and hitting them with denial-of-service (DoS) attacks. The hackers were protesting against the NATO bombings in Kosovo. Businesses, public organisations and academic institutions were also bombarded with highly politicised emails containing viruses from other European countries.

Other

Since the world of computers is ever-growing and still largely unexplored, countries with young Internet cultures produce young computer scientists who are usually interested in "having fun". Countries such as China, Pakistan, Greece, India, Israel and South Korea have all been placed in the spotlight by the U.S. media for attacks on information systems related to the CIA and NSA. Though these attacks are usually the result of curious young computer programmers, the United States has concerns about national security when such critical information systems fall under attack. In the past five years, the United States has taken a larger interest in protecting its critical information systems. It has issued contracts for high-level research in electronic security to nations such as Greece and Israel, to help protect against more serious and dangerous attacks.
