
Electronic data interchange:

Electronic data interchange (EDI) was one of the earliest uses of information technology for supply chain management. EDI involves the electronic exchange of business transaction documents over the internet and other networks between supply chain trading partners (organizations and their customers and suppliers) without human intervention. Data representing a variety of business transaction documents (such as purchase orders, invoices, requests for quotations and shipping notices) are automatically exchanged between computers using standard document message formats.

In 1996, the National Institute of Standards and Technology defined electronic data interchange as "the computer-to-computer interchange of strictly formatted messages that represent documents other than monetary instruments. EDI implies a sequence of messages between two parties, either of whom may serve as originator or recipient. The formatted data representing the documents may be transmitted from originator to recipient via telecommunications or physically transported on electronic storage media." The definition distinguishes EDI from mere electronic communication or data exchange, specifying that "in EDI, the usual processing of received messages is by computer only. Human intervention in the processing of a received message is typically intended only for error conditions, for quality review, and for special situations. For example, the transmission of binary or textual data is not EDI as defined here unless the data are treated as one or more data elements of an EDI message and are not normally intended for human interpretation as part of online data processing."

Value-added network companies like GE Global Exchange Services and Computer Associates offer a variety of EDI services for relatively high fees, but many EDI service providers now offer secure, lower-cost EDI services over the internet. EDI is still a popular data transmission format among major trading partners, primarily to automate repetitive transactions, though it is slowly being replaced by XML-based Web services. EDI automatically tracks inventory changes; triggers orders, invoices and other documents related to transactions; and schedules and confirms delivery and payment. By digitally integrating the supply chain, EDI streamlines processes, saves time, and increases accuracy. And by using internet technologies, lower-cost internet-based EDI services are now available to smaller businesses.
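To make the idea of standard document message formats concrete, the short Python sketch below splits a simplified X12-style purchase-order message into segments and elements. It is only an illustration: the segment values are invented, and a real X12 interchange also carries ISA/GS/ST envelopes and strict versioning.

    # Minimal sketch of parsing a simplified X12-style EDI segment stream.
    # The sample data is invented for illustration; real interchanges carry
    # envelope segments and strict versioning rules on top of this.
    raw = "BEG*00*NE*PO-1001*20240101~PO1*1*25*EA*9.75**VP*WIDGET-7~"

    def parse_edi(message, seg_term="~", elem_sep="*"):
        """Split an EDI message into segments, each a list of elements."""
        segments = [s for s in message.split(seg_term) if s]
        return [seg.split(elem_sep) for seg in segments]

    for seg in parse_edi(raw):
        seg_id, elements = seg[0], seg[1:]
        print(seg_id, elements)
    # BEG ['00', 'NE', 'PO-1001', '20240101']
    # PO1 ['1', '25', 'EA', '9.75', '', 'VP', 'WIDGET-7']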

Standard:
EDI provides a technical basis for commercial "conversations" between two entities, either internal or external. EDI standards describe the rigorous format of electronic documents. EDI is very useful in supply chains. The EDI standards were designed by the implementers, initially in the automotive industry, to be independent of communication and software technologies. EDI can be transmitted using any methodology agreed to by the sender and recipient. This includes a variety of technologies, including modem (asynchronous and synchronous), FTP, e-mail, HTTP, AS1, AS2, etc. It is important to differentiate between the EDI documents and the methods for transmitting them.

Some major sets of EDI standards:


1. The UN-recommended UN/EDIFACT standard is the only international standard and is predominant outside of North America.
2. The US standard ANSI ASC X12 (X12) is predominant in North America.
3. The TRADACOMS standard, developed by the ANA (Article Numbering Association, now known as GS1), is predominant in the UK retail industry.
4. The ODETTE standard is used within the European automotive industry.

All of these standards first appeared in the early to mid-1980s. The standards prescribe the formats, character sets, and data elements used in the exchange of business documents and forms.

Barriers to implementation of EDI:


There are a few barriers to adopting electronic data interchange. One of the most significant is the accompanying business process change. Existing business processes built around paper handling may not be suited for EDI and would require changes to accommodate automated processing of business documents. For example, a business may receive the bulk of its goods by one- or two-day shipping and all of its invoices by mail. The existing process may therefore assume that goods are typically received before the invoice. With EDI, the invoice will typically be sent when the goods ship, which requires a process that handles large numbers of invoices whose corresponding goods have not yet been received.

Another significant barrier is the cost in time and money of the initial set-up. The preliminary expenses and time that arise from implementation, customization and training can be costly; it is important to select the correct level of integration to match the business requirement.

The key hindrance to a successful implementation of EDI is the perception many businesses have of its nature. Many view EDI from the technical perspective that EDI is a data format; it would be more accurate to take the business view that EDI is a system for exchanging business documents with external entities and integrating the data from those documents into the company's internal systems.

Gateway:
In telecommunications, the term gateway has the following meaning:

1. In a communications network, a network node equipped for interfacing with another network that uses different protocols. A gateway may contain devices such as protocol translators, impedance matching devices, rate converters, fault isolators, or signal translators as necessary to provide system interoperability. It also requires the establishment of mutually acceptable administrative procedures between both networks. A protocol translation/mapping gateway interconnects networks with different network protocol technologies by performing the required protocol conversions.
2. Loosely, a computer or computer program configured to perform the tasks of a gateway. For a specific case, see default gateway.

Gateways, also called protocol converters, can operate at any network layer. The activities of a gateway are more complex than those of a router or switch, as it communicates using more than one protocol.

Details:
A gateway is a network point that acts as an entrance to another network. On the Internet, a node or stopping point can be either a gateway node or a host (end-point) node. Both the computers of Internet users and the computers that serve pages to users are host nodes, while the nodes that connect the networks in between are gateways. For example, the computers that control traffic between company networks, or the computers used by internet service providers (ISPs) to connect users to the internet, are gateway nodes. In an enterprise network, a computer server acting as a gateway node is often also acting as a proxy server and a firewall server. A gateway is often associated with both a router and a switch, and is an essential feature of most routers. A gateway may contain devices such as protocol translators, impedance matching devices, rate converters, fault isolators, or signal translators as necessary to provide system interoperability, and it requires the establishment of mutually acceptable administrative procedures between both networks. Microsoft Windows describes this standard networking feature as Internet Connection Sharing, which acts as a gateway, offering a connection between the Internet and an internal network.
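As a concrete illustration of the default gateway case, the following Python sketch reads the default route on a Linux host by parsing the kernel routing table in /proc/net/route. It is Linux-specific and offered only as an example:

    import socket, struct

    def default_gateway():
        # Linux-specific: parse the kernel routing table in /proc/net/route.
        with open("/proc/net/route") as f:
            next(f)  # skip the header line
            for line in f:
                fields = line.split()
                iface, dest, gw = fields[0], fields[1], fields[2]
                flags = int(fields[3], 16)
                # Destination 00000000 with the RTF_GATEWAY flag (0x2) set
                # marks the default route.
                if dest == "00000000" and flags & 0x2:
                    ip = socket.inet_ntoa(struct.pack("<L", int(gw, 16)))
                    return iface, ip
        return None

    print(default_gateway())   # e.g. ('eth0', '192.168.0.1')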

Internet-to-Orbit Gateway:
An Internet-to-orbit gateway (I2O) is a machine that acts as a connector between computers or devices connected to the Internet and computer systems orbiting Earth, such as satellites or even manned spacecraft. The connection is made when the I2O establishes a stable link between the spacecraft and a computer or network of computers on the Internet; such a link can carry control signals, audio frequency signals, or even visible-spectrum signals.

Cloud Gateway:
A Cloud storage gateway is a network appliance or server which resides at the customer premises and translates cloud storage APIs such as SOAP or REST to block-based storage protocols such as iSCSI or Fibre Channel or file-based interfaces such as NFS or CIFS. Cloud storage gateways enable companies to integrate cloud storage into applications without moving the applications into the cloud.
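Conceptually, such a gateway presents a file-style interface on one side and issues cloud storage API calls on the other. The Python sketch below shows only the translation idea: a read() call becomes an HTTP range request. The endpoint URL and object name are hypothetical, and a real gateway would add caching, authentication and error handling.

    # Sketch: translate a file-style read() into a cloud REST call
    # (an HTTP GET with a Range header). The endpoint is hypothetical.
    from urllib.request import Request, urlopen

    class CloudFile:
        def __init__(self, base_url, object_name):
            self.url = f"{base_url}/{object_name}"
            self.pos = 0

        def read(self, size):
            # File interface on top; byte-range GET against object storage below.
            req = Request(self.url)
            req.add_header("Range", f"bytes={self.pos}-{self.pos + size - 1}")
            data = urlopen(req).read()
            self.pos += len(data)
            return data

    # f = CloudFile("https://storage.example.com/bucket", "report.pdf")
    # header = f.read(1024)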

Routers:
A router is a device that forwards data packets along networks. A router is connected to at least two networks, commonly two LANs or WANs, or a LAN and its ISP's network. Routers are located at gateways, the places where two or more networks connect, and are the critical devices that keep data flowing between networks and keep the networks connected to the Internet. Whether data is sent between locations on one network or from one network to a second network, the router sees it and directs it to the correct location. The router accomplishes this by using headers and forwarding tables to determine the best path for forwarding the data packets; routers also use protocols such as ICMP to communicate with each other and configure the best route between any two hosts.

The Internet itself is a global network connecting millions of computers and smaller networks so you can see how crucial the role of a router is to our way of communicating and computing.


The most familiar type of router is the home or small office router, which simply passes data, such as web pages and email, between the home computers and the owner's cable or DSL modem, which connects to the Internet through an ISP. More sophisticated routers, such as enterprise routers, connect large business or ISP networks up to the powerful core routers that forward data at high speed along the optical fiber lines of the Internet backbone.

Applications of Routers: When multiple routers are used in interconnected networks, the routers exchange information about destination addresses using a dynamic routing protocol. Each router builds up a table listing the preferred routes between any two systems on the interconnected networks. A router has interfaces for different physical types of network connections (such as copper cable, optical fiber, or wireless transmission). It also contains firmware for different networking protocol standards. Each network interface uses this specialized computer software to enable data packets to be forwarded from one protocol transmission system to another.
Routers may also be used to connect two or more logical groups of computer devices known as subnets, each with a different sub-network address. The subnet addresses recorded in the router do not necessarily map directly to the physical interface connections. A router has two stages of operation, called planes:

Control plane: A router maintains a routing table listing which route should be used to forward a data packet, and through which physical interface connection. It does this using internal pre-configured addresses, called static routes.

Forwarding plane: The router forwards data packets between incoming and outgoing interface connections. It routes them to the correct network type using information contained in the packet header, together with the data recorded in the routing table by the control plane.
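The forwarding decision itself is a longest-prefix match of the destination address against the routing table. A minimal Python sketch, using invented addresses and static routes only:

    import ipaddress

    # Control plane: a routing table of static routes (invented addresses),
    # each mapping a destination prefix to a next hop and outgoing interface.
    routing_table = [
        (ipaddress.ip_network("10.0.0.0/8"),  ("192.168.1.2", "eth1")),
        (ipaddress.ip_network("10.1.0.0/16"), ("192.168.2.2", "eth2")),
        (ipaddress.ip_network("0.0.0.0/0"),   ("192.168.0.1", "eth0")),  # default route
    ]

    def forward(dst):
        """Forwarding plane: pick the most specific (longest) matching prefix."""
        addr = ipaddress.ip_address(dst)
        matches = [(net, hop) for net, hop in routing_table if addr in net]
        return max(matches, key=lambda m: m[0].prefixlen)[1]

    print(forward("10.1.2.3"))   # ('192.168.2.2', 'eth2') - the /16 beats the /8
    print(forward("8.8.8.8"))    # ('192.168.0.1', 'eth0') - default route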

Routers may provide connectivity within enterprises, between enterprises and the Internet, and between the networks of internet service providers (ISPs). The largest routers (such as the Cisco CRS-1 or Juniper T1600) interconnect the various ISPs, or may be used in large enterprise networks, while smaller routers usually provide connectivity for typical home and office networks. Other networking solutions may be provided by a backbone Wireless Distribution System (WDS), which avoids the costs of introducing networking cables into buildings. All sizes of routers may be found inside enterprises, but the most powerful routers are usually found in ISPs and academic and research facilities.

Routers for Home & Small Business:


Not all routers are created equal, since their jobs differ slightly from network to network. A cable modem that routes data between your PC and your ISP can be considered a router. In its most basic form, a router could simply be one of two computers running the Windows 98 (or higher) operating system connected together using ICS (Internet Connection Sharing). In this scenario, the computer that is connected to the Internet acts as the router for the second computer to obtain its Internet connection. Going a step up from ICS, there is a category of hardware routers that perform the same basic task as ICS, albeit with more features and functions. Often called broadband or Internet connection sharing routers, these routers allow you to share one Internet connection between multiple computers.

Data dictionary:
A data dictionary, or metadata repository, as defined in the IBM Dictionary of Computing, is a "centralized repository of information about data such as meaning, relationships to other data, origin, usage, and format." The term may have one of several closely related meanings pertaining to databases and database management systems (DBMS):

1. a document describing a database or collection of databases
2. an integral component of a DBMS that is required to determine its structure
3. a piece of middleware that extends or supplants the native data dictionary of a DBMS

Documentation:
The terms Data Dictionary and Data Repository indicate a more general software utility than a catalogue. A catalogue is closely coupled with the DBMS software; it provides the information stored in it to users and the DBA, but it is mainly accessed by the various software modules of the DBMS itself, such as DDL and DML compilers, the query optimizer, the transaction processor, report generators, and the constraint enforcer. A Data Dictionary, on the other hand, is a data structure that stores metadata, i.e., data about data. The software package for a stand-alone Data Dictionary or Data Repository may interact with the software modules of the DBMS, but it is mainly used by the designers, users and administrators of a computer system for information resource management. These systems maintain information on system hardware and software configuration, documentation, applications and users, as well as other information relevant to system administration.

If a data dictionary system is used only by the designers, users, and administrators and not by the DBMS software, it is called a passive data dictionary; otherwise, it is called an active data dictionary. An active data dictionary is automatically updated as changes occur in the database; a passive data dictionary must be updated manually.

The data dictionary consists of record types (tables) created in the database by system-generated command files, tailored for each supported back-end DBMS. Command files contain SQL statements for CREATE TABLE, CREATE UNIQUE INDEX, ALTER TABLE (for referential integrity), and so on, using the specific statement required by that type of database.

Database users and application developers can benefit from an authoritative data dictionary document that catalogs the organization, contents, and conventions of one or more databases. This typically includes the names and descriptions of the various tables and fields in each database, plus additional details such as the type and length of each data element. There is no universal standard as to the level of detail in such a document.
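For example, the record-type definitions held in the dictionary can drive generation of the command files described above. A minimal Python sketch, with invented table and column entries:

    # Sketch: generate a CREATE TABLE command file from data-dictionary
    # entries. The rows (name, type, length, description) are invented.
    data_dictionary = {
        "customer": [
            ("customer_id", "INTEGER", None, "Surrogate key"),
            ("name",        "VARCHAR", 80,   "Legal customer name"),
            ("created_on",  "DATE",    None, "Row creation date"),
        ],
    }

    def create_table_sql(table, columns):
        cols = ", ".join(
            f"{name} {ctype}({length})" if length else f"{name} {ctype}"
            for name, ctype, length, _desc in columns
        )
        return f"CREATE TABLE {table} ({cols});"

    for table, columns in data_dictionary.items():
        print(create_table_sql(table, columns))
    # CREATE TABLE customer (customer_id INTEGER, name VARCHAR(80), created_on DATE);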


Query language:
Query languages are computer languages used to make queries into databases and information systems. Broadly, query languages can be classified according to whether they are database query languages or information retrieval query languages. The difference is that a database query language attempts to give factual answers to factual questions, while an information retrieval query language attempts to find documents containing information that is relevant to an area of inquiry. Examples include:

QL: a proprietary object-oriented query language for querying relational databases; successor of Datalog.
Contextual Query Language (CQL): a formal language for representing queries to information retrieval systems such as web indexes or bibliographic catalogues.
Concept-Oriented Query Language (COQL): used in the concept-oriented model (COM); it is based on a novel data modeling construct, the concept, and uses operations such as projection and de-projection for multi-dimensional analysis, analytical operations and inference.
D: a query language for truly relational database management systems (TRDBMS).
DMX: a query language for data mining models.
Datalog: a query language for deductive databases.
HTSQL: a query language that translates HTTP queries to SQL.
ISBL: a query language for PRTV, one of the earliest relational database management systems.
LINQ: query-expressions are a way to query various data sources from .NET languages.
LDAP: an application protocol for querying and modifying directory services running over TCP/IP.
MQL: a cheminformatics query language for substructure search, allowing numerical as well as nominal properties.
MDX: a query language for OLAP databases.
OQL: Object Query Language.
OCL (Object Constraint Language): despite its name, OCL is also an object query language and an OMG standard.
OPath: intended for use in querying WinFS stores.
OttoQL: intended for querying tables, XML, and databases.
QUEL: a relational database access language, similar in most ways to SQL.
RDQL: an RDF query language.
SPARQL: a query language for RDF graphs.
SQL: a well-known query language and data manipulation language for relational databases.
TMQL (Topic Map Query Language): a query language for Topic Maps.
UnQL (Unstructured Query Language): a functional superset of SQL, developed by the authors of SQLite and CouchDB.
XQuery: a query language for XML data sources.
XPath: a declarative language for navigating XML documents.
YQL: an SQL-like query language created by Yahoo!
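As a concrete taste of the most familiar of these, here is a short SQL session run through Python's built-in sqlite3 module; the table and rows are invented:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE book (title TEXT, year INTEGER)")
    conn.executemany("INSERT INTO book VALUES (?, ?)",
                     [("EDI Basics", 1996), ("Routing 101", 2004)])

    # A database query language returns factual answers to factual questions:
    for row in conn.execute("SELECT title FROM book WHERE year > 2000"):
        print(row[0])   # Routing 101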


Accounting information system:


An accounting information system (AIS) is a system for the collection, storage and processing of financial and accounting data that is used by decision makers. An accounting information system is generally a computer-based method for tracking accounting activity in conjunction with information technology resources. The resulting statistical reports can be used internally by management or externally by other interested parties, including investors, creditors and tax authorities. Among its components are the information technology infrastructure (the actual physical devices and systems that allow the AIS to operate and perform its functions) and internal controls and security measures (what is implemented to safeguard the data).

History:
Initially, accounting information systems were predominantly developed in-house as legacy systems. Such solutions were difficult to develop and expensive to maintain. Today, accounting information systems are more commonly sold as pre-built software packages from vendors such as Microsoft, Sage Group, SAP and Oracle, which are configured and customized to match the organization's business processes. As the need for connectivity and consolidation between other business systems increased, accounting information systems were merged with larger, more centralized systems known as enterprise resource planning (ERP) systems. In ERP, a system such as an accounting information system is built as a module integrated into a suite of applications that can include manufacturing, supply chain and human resources.

How to effectively implement AIS:


Accounting information systems are composed of six main components: people, procedures and instructions, data, software, information technology infrastructure, and internal controls. When an AIS is initially implemented or converted from an existing system, organizations sometimes make the mistake of not considering each of these six components and treating them all equally in the implementation process. This results in a system being "built three times" rather than once, because the initial system is not designed to meet the needs of the organization; the organization then tries to get the system to work; and ultimately the organization begins again, following the appropriate process.

The steps necessary to implement a successful accounting information system are as follows:

1. Detailed Requirements Analysis: All individuals involved in the system are interviewed. The current system is thoroughly understood, including its problems, and complete documentation of the current system's transactions, reports, and questions that need to be answered is gathered. What users need that is not in the current system is outlined and documented. Users include everyone, from top management to data entry.

2. Systems Design (synthesis): The analysis is thoroughly reviewed and a new system is created. The processes that surround the system are often the most important considerations: what data needs to go into the system, and how is it going to be handled? What information needs to come out of the system, and how is it going to be formatted? The system is designed to include appropriate internal controls and to provide management with the information needed to make decisions. It is a goal of an accounting information system to provide information that is relevant, meaningful, reliable, useful, and current. To achieve this, the system is designed so that transactions are entered as they occur (either manually or electronically) and information is immediately available online for management to use.

3. Documentation: As the system is being designed, it is documented. The documentation includes vendor documentation of the system and, more importantly, the procedures, or detailed instructions, that help users handle each process specific to the organization. Most documentation and procedures are online, and it is helpful if organizations can add to the help instructions provided by the software vendor. The documentation is tested during training so that when the system is launched, there is no question that it works and the users are confident with the change.

4. Testing: Prior to launch, all processes are tested from input through output. Unfortunately, most organizations launch systems prior to thorough testing, adding to end-user frustration when processes don't work. The documentation and procedures may be modified during this process. All identified transactions must be tested during this step, and all reports and online information must be verified and traced through the "audit trail" so that management is assured that transactions will be handled consistently and that the information can be relied upon to make decisions.

5. Training: Prior to launch, all users need to be trained, with procedures. This means a trainer uses the procedures to show each end user how to handle a procedure. The procedures often need to be updated during training as users describe their unique circumstances and the "design" is modified with this additional information. The end user then performs the procedure with the trainer and the documentation, then with the documentation alone, and is finally on his or her own with the support, either in person or by phone, of the trainer or other support person. This happens prior to data conversion.

6. Data Conversion: Tools are developed to convert the data from the current system (which was documented in the requirements analysis) to the new system. The data is mapped from one system to the other, and data files are created that will work with the tools that are developed. The conversion is thoroughly tested and verified prior to final conversion, and of course there's a backup so that it can be restarted if necessary.

7. Launch: The system is implemented only after all of the above is completed. The entire organization is aware of the launch date. With the current "mass-market" software used by thousands of companies and fundamentally proven to work, the "parallel" run that is mandatory with software tailor-made to a company is generally not done. This is only true, however, when the above process is followed and the system is thoroughly documented and tested and users are trained prior to launch.

8. Support: The end users and managers have ongoing support available at all times. System upgrades follow a similar process, and all users are thoroughly apprised of changes, upgraded in an efficient manner, and trained.

Many organizations choose to limit the amount of time and money spent on analysis, design, documentation, and training, and move right into software selection and implementation. If a detailed requirements analysis is performed, with adequate time spent on the analysis, the implementation and ongoing support will be minimal. Organizations that skip the steps necessary to ensure the system meets their needs are often left with frustrated end users, costly support, and information that is not current or correct. Worse yet, these organizations build the system three times instead of once.
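The data conversion of step 6 is, at its core, field-by-field mapping between the old and new layouts. A minimal Python sketch with invented field names:

    # Sketch of a data-conversion tool: map legacy records onto the new
    # system's layout. The field names are invented for illustration.
    FIELD_MAP = {"CUSTNO": "customer_id", "CUSTNAME": "name", "BAL": "balance"}

    def convert(legacy_record):
        new_record = {new: legacy_record[old] for old, new in FIELD_MAP.items()}
        new_record["balance"] = round(float(new_record["balance"]), 2)  # type cleanup
        return new_record

    old = {"CUSTNO": "1001", "CUSTNAME": "Acme Ltd", "BAL": "1234.5"}
    print(convert(old))
    # {'customer_id': '1001', 'name': 'Acme Ltd', 'balance': 1234.5}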

Advantages and implications of AIS:


A big advantage of computer-based accounting information systems is that they automate and streamline reporting. Reporting is a major tool for organizations to accurately see summarized, timely information used for decision-making and financial reporting. The accounting information system pulls data from the centralized database, processes and transforms it, and ultimately generates a summary of that data as information that can be easily consumed and analyzed by business analysts, managers or other decision makers. These systems must ensure that reports are timely, so that decision makers are not acting on old, irrelevant information but are able to act quickly and effectively based on report results. Consolidation is one of the hallmarks of reporting, as people do not have to look through an enormous number of transactions. For instance, at the end of the month, a financial accountant consolidates all the paid vouchers by running a report on the system. The system's application layer provides a report with the total amount paid to its vendors for that particular month.
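That month-end consolidation amounts to grouping transactions and summing them. A toy Python sketch with invented voucher data:

    from collections import defaultdict

    # Invented paid vouchers for one month: (vendor, amount)
    vouchers = [("Acme", 1200.00), ("Globex", 340.50), ("Acme", 89.99)]

    totals = defaultdict(float)
    for vendor, amount in vouchers:
        totals[vendor] += amount     # consolidate: total paid per vendor

    for vendor, total in sorted(totals.items()):
        print(f"{vendor}: {total:.2f}")
    # Acme: 1289.99
    # Globex: 340.50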

Difference between MIS and DSS:

MIS: A management information system focuses on operational efficiency, i.e., it concentrates on doing things in the right manner. It allows communication across managers from different areas in a business organization, and it allows information to flow in both the upward and downward directions. MIS is the original form of management information and a primary level of decision support. In MIS, the reports are usually not flexible. MIS is characterized by an input of a large volume of data, an output of summary reports, and a process characterized by a simple model. MIS focuses on planning reports on various topics concerned with the organization that assist managers in taking vital decisions pertaining to the functioning of the organization. As a matter of fact, MIS is all about theory.

DSS: DSS stands for Decision Support System. A decision support system helps in making effective decisions, as it is about doing the right things. It is concerned with providing effective judgment support to leadership and senior management in an organization, and its information flows only in the upward direction. DSS is an advancement of MIS and is the ultimate and main part of the decision. In a DSS, the reports can be flexible. DSS is characterized by an input of a low volume of data, an output of decision analysis, and a process characterized by an interactive model. Experts on managerial behavior say that DSS focuses more on decision-making, i.e., on helping the company do the right thing. DSS is all about practice and analysis. An organization should employ both systems effectively.


Intranet:
An intranet is a computer network that uses Internet Protocol technology to share information, operational systems, or computing services within an organization. The term is used in contrast to internet, a network between organizations, and instead refers to a network within an organization. Sometimes the term refers only to the organization's internal website, but it may be a more extensive part of the organization's information technology infrastructure and may be composed of multiple local area networks. The objective is to organize each individual's desktop with minimal cost, time and effort so that it is more productive, cost-efficient, timely, and competitive.

An intranet may host multiple private websites and constitute an important component and focal point of internal communication and collaboration. Any of the well-known Internet protocols may be found in an intranet, such as HTTP (web services), SMTP (e-mail), and FTP (file transfer protocol). An intranet can be understood as a private analog of the Internet, or as a private extension of the Internet confined to an organization. The first intranet websites and home pages began to appear in organizations in 1994-1996. Although not officially noted, the term intranet first became commonplace among early adopters, such as universities and technology corporations, in 1992.

Intranets are sometimes contrasted with extranets. While intranets are generally restricted to employees of the organization, extranets may also be accessed by customers, suppliers, or other approved parties. In many organizations, intranets are protected from unauthorized external access by means of a network gateway and firewall. For smaller companies, intranets may be created simply by using private IP address ranges, such as 192.168.0.0/16. In these cases, the intranet can only be directly accessed from a computer in the local network; however, companies may provide access to off-site employees by using a virtual private network or other access methods requiring user authentication and encryption.

Uses of Intranet:
Increasingly, intranets are being used to deliver tools and applications, e.g., collaboration (to facilitate working in groups and teleconferencing), sophisticated corporate directories, sales and customer relationship management tools, and project management, to advance productivity. Intranets are also being used as corporate culture-change platforms. For example, large numbers of employees discussing key issues in an intranet forum application could lead to new ideas in management, productivity, quality, and other corporate issues. In large intranets, website traffic is often similar to public website traffic and can be better understood by using web metrics software to track overall activity. User surveys also improve intranet website effectiveness. Larger businesses allow users within their intranet to access the public internet through firewall servers, with the ability to screen messages coming and going to keep security intact. Most commonly, intranets are managed by the communications, HR or CIO departments of large organizations, or some combination of these.


Benefits of Intranet:

Workforce productivity: Intranets can help users locate and view information faster and use applications relevant to their roles and responsibilities. With the help of a web browser interface, users can access data held in any database the organization wants to make available, anytime and, subject to security provisions, from anywhere within the company workstations, increasing employees' ability to perform their jobs faster, more accurately, and with confidence that they have the right information. It also helps to improve the services provided to users.

Time: Intranets allow organizations to distribute information to employees on an as-needed basis through electronic mail, saving time and enabling faster information sharing.

Communication: Intranets can serve as powerful tools for communication within an organization, vertically and horizontally. From a communications standpoint, intranets are useful to communicate strategic initiatives that have a global reach throughout the organization. Information can easily be conveyed around the organization for the purpose of taking initiative and making decisions to achieve goals; by providing this information on the intranet, staff have the opportunity to keep up to date with the strategic focus of the organization. Some examples of communication tools would be chat, email, and blogs.

Web publishing: This allows the organization's corporate knowledge to be maintained and easily accessed throughout the company using hypermedia and Web technologies. Examples include employee manuals, benefits documents, company policies, business standards, news feeds, and even training, all of which can be accessed using common Internet standards (Acrobat files, Flash files, CGI applications). Because each business unit can update the online copy of a document, the most recent version is usually available to employees using the intranet.

Business operations and management: Intranets are also being used as a platform for developing and deploying applications to support business operations and decisions across the internetworked enterprise.

Cost-effectiveness: Users can view information and data via a web browser rather than maintaining physical documents such as procedure manuals, internal phone lists and requisition forms. This can potentially save the business money on printing and duplicating documents, on document maintenance overhead, and on environmental impact. For example, PeopleSoft "derived significant cost savings by shifting HR processes to the intranet".

Enhanced collaboration: Information is easily accessible by all authorized users, which enables teamwork.

Cross-platform capability: Standards-compliant web browsers are available for Windows, Mac, and UNIX.

Promotion of a common corporate culture: Every user has the ability to view the same information within the intranet.

Immediate updates: Intranets make it possible to provide the audience with "live" changes so they are kept up to date, which can limit a company's liability.

Support for a distributed computing architecture: The intranet can also be linked to a company's management information system, for example a timekeeping system.

Modem:
A modem (modulator-demodulator) is a device that modulates an analog carrier signal to encode digital information, and also demodulates such a carrier signal to decode the transmitted information. The goal is to produce a signal that can be transmitted easily and decoded to reproduce the original digital data. Modems can be used over any means of transmitting analog signals, from light-emitting diodes to radio. The most familiar example is a voice-band modem that turns the digital data of a personal computer into modulated electrical signals in the voice frequency range of a telephone channel. These signals can be transmitted over telephone lines and demodulated by another modem at the receiver side to recover the digital data.

Modems are generally classified by the amount of data they can send in a given unit of time, usually expressed in bits per second (bit/s, or bps) or bytes per second (B/s). Modems can alternatively be classified by their symbol rate, measured in baud. The baud unit denotes symbols per second, or the number of times per second the modem sends a new signal. For example, the ITU V.21 standard used audio frequency-shift keying, that is to say, tones of different frequencies, with two possible frequencies corresponding to two distinct symbols (or one bit per symbol), to carry 300 bits per second using 300 baud. By contrast, the original ITU V.22 standard, which was able to transmit and receive four distinct symbols (two bits per symbol), handled 1,200 bit/s by sending 600 symbols per second (600 baud) using phase-shift keying.

While modem interfaces are standardized, a number of different protocols for formatting data to be transmitted over telephone lines exist. Some, like CCITT V.34, are official standards, while others have been developed by private companies. Most modems have built-in support for the more common protocols; at slow data transmission speeds at least, most modems can communicate with each other. At high transmission speeds, however, the protocols are less standardized.

The following characteristics distinguish one modem from another:

Bps: How fast the modem can transmit and receive data. At slow rates, modems are measured in terms of baud rates. The slowest rate is 300 baud (about 25 cps). At higher speeds, modems are measured in terms of bits per second (bps). The fastest modems run at 57,600 bps, although they can achieve even higher data transfer rates by compressing the data. Obviously, the faster the transmission rate, the faster you can send and receive data. Note, however, that you cannot receive data any faster than it is being sent; if, for example, the device sending data to your computer is sending it at 2,400 bps, you must receive it at 2,400 bps.

Voice/data: Many modems support a switch to change between voice and data modes. In data mode, the modem acts like a regular modem; in voice mode, the modem acts like a regular telephone. Modems that support a voice/data switch have a built-in loudspeaker and microphone for voice communication.

Auto-answer: An auto-answer modem enables your computer to receive calls in your absence. This is only necessary if you are offering some type of computer service that people can call in to use.

Data compression: Some modems perform data compression, which enables them to send data at faster rates. However, the modem at the receiving end must be able to decompress the data using the same compression technique.

Flash memory: Some modems come with flash memory rather than conventional ROM, which means that the communications protocols can be easily updated if necessary.

Fax capability: Most modern modems are fax modems, which means that they can send and receive faxes.
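The relationship between baud and bit/s described above is simply bit rate = symbol rate x bits per symbol, as a quick check of the V.21 and V.22 figures shows:

    # bit rate = symbol rate (baud) x bits per symbol
    def bit_rate(baud, bits_per_symbol):
        return baud * bits_per_symbol

    print(bit_rate(300, 1))   # V.21: 300 baud, 1 bit/symbol  -> 300 bit/s
    print(bit_rate(600, 2))   # V.22: 600 baud, 2 bits/symbol -> 1200 bit/s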


Manufacturing Information Systems:


Global competitive pressures of the information society have been highly pronounced in manufacturing and have radically changed it. The new marketplace calls for manufacturing that is:

1. Lean - highly efficient, using fewer input resources in production through better engineering and through production processes that rely on low inventories and result in less waste.
2. Agile - fit for time-based competition; both new product design and order fulfillment are drastically shortened.
3. Flexible - able to adjust the product to a customer's preferences rapidly and cost-effectively.
4. Managed for quality - by measuring quality throughout the production process and following world standards, manufacturers treat quality as a necessity and not a high-price option.

Structure of Manufacturing Information Systems:


Information technology must play a vital role in the design and manufacturing processes. Manufacturing information systems are among the most difficult both to develop and to implement. TPSs are embedded in the production process or in other company processes. The data provided by the transaction processing systems are used by management support subsystems, which are tightly integrated and interdependent.

Manufacturing information subsystems include:


1. Product design and engineering
2. Production scheduling
3. Quality control
4. Facilities planning, production costing, logistics and inventory subsystems

1. Product Design and Engineering:


Product design and engineering are widely supported today by computer-aided design (CAD) and computer-aided engineering (CAE) systems. CAD systems assist the designer with automatic calculations and display of surfaces while storing the design information in databases. The produced designs are subject to processing with CAE systems to ensure their quality, safety, manufacturability, and cost-effectiveness. CAD/CAE systems increasingly eliminate paperwork from the design process, while speeding up the process itself. As well, the combined techniques of CAD/CAE and rapid prototyping cut time to market.

2. Production Scheduling:
Production scheduling is the heart of the manufacturing information system. This complex subsystem has to ensure that an appropriate combination of human, machinery, and material resources will be provided at the appropriate time in order to manufacture the goods. Production scheduling and the ancillary processes are today frequently controlled with a manufacturing resource planning system as the main informational tool. This elaborate software converts the sales forecast for the plant's products into a detailed production plan and further into a master schedule of production.

Computer-integrated manufacturing (CIM) is a strategy through which a manufacturer takes control of the entire manufacturing process. The process starts with CAD and CAE and continues on the factory floor, where robots and numerically controlled machinery are installed, and thus computer-aided manufacturing (CAM) is implemented. A manufacturing system based on this concept can turn out very small batches of a particular product as cost-effectively as a traditional production line can turn out millions of identical products. A full-fledged CIM is extremely difficult to implement; indeed, many firms have failed in their attempts to do so.

3. Quality Control:
The quality control subsystem of a manufacturing information system relies on the data collected on the shop floor by the sensors embedded in the process control systems. Total quality management (TQM) is a management technique for continuously improving the performance of all members and units of a firm to ensure customer satisfaction. In particular, the principles of TQM state that quality comes from improving the design and manufacturing process, rather than "inspecting out" defective products. The foundation of quality is also understanding and reducing variation in the overall manufacturing process.

4. Facilities Planning, Production Costing, Logistics and Inventory Subsystems:


Among the higher-level decisions supported by manufacturing information systems is facilities planning: locating the sites for manufacturing plants, deciding on their production capacities, and laying out the plant floors. Manufacturing management also requires a cost control program relying on the information systems. Among the informational outputs of the production costing subsystem are labor and equipment productivity reports, performance of plants as cost centers, and schedules for equipment maintenance and replacement. Managing the raw materials, packaging, and work-in-progress inventory is a responsibility of the manufacturing function. In some cases, inventory management is combined with the general logistics systems, which plan and control the arrival of purchased goods into the firm as well as shipments to the customers.

Business Intelligence:
Business intelligence (BI) is defined as the ability for an organization to take all its capabilities and convert them into knowledge. This produces large amounts of information that can lead to the development of new opportunities. Identifying these opportunities, and implementing an effective strategy, can provide a competitive market advantage and long-term stability within the organization's industry. Common functions of business intelligence technologies are reporting, online analytical processing, analytics, data mining, process mining, complex event processing, business performance management, benchmarking, text mining, predictive analytics and prescriptive analytics. Business intelligence aims to support better business decision-making. Thus a BI system can be called a decision support system (DSS). Although the term business intelligence is sometimes used as a synonym for competitive intelligence (because they both support decision making), BI uses technologies, processes, and applications to analyze mostly internal, structured data and business processes while competitive intelligence gathers, analyzes and disseminates information with a topical focus on company competitors. If understood broadly, business intelligence can include the subset of competitive intelligence.


History
In a 1958 article, IBM researcher Hans Peter Luhn used the term business intelligence. He defined intelligence as: "the ability to apprehend the interrelationships of presented facts in such a way as to guide action towards a desired goal." Business intelligence as it is understood today is said to have evolved from the decision support systems which began in the 1960s and developed throughout the mid-1980s. In 1989, Howard Dresner (later a Gartner Group analyst) proposed "business intelligence" as an umbrella term to describe "concepts and methods to improve business decision making by using fact-based support systems." It was not until the late 1990s that this usage was widespread.

Business Intelligence and Data Warehousing:


Often BI applications use data gathered from a data warehouse or a data mart. However, not all data warehouses are used for business intelligence, nor do all business intelligence applications require a data warehouse. In order to distinguish between concepts of business intelligence and data warehouses, Forrester Research often defines business intelligence in one of two ways: Using a broad definition: "Business Intelligence is a set of methodologies, processes, architectures, and technologies that transform raw data into meaningful and useful information used to enable more effective strategic, tactical, and operational insights and decision-making." When using this definition, business intelligence also includes technologies such as data integration, data quality, data warehousing, master data management, text and content analytics, and many others that the market sometimes lumps into the Information Management segment. Therefore, Forrester refers to data preparation and data usage as two separate, but closely linked segments of the business intelligence architectural stack. Forrester defines the latter, narrower business intelligence market as "referring to just the top layers of the BI architectural stack such as reporting, analytics and dashboards."

Applications in an enterprise: Business intelligence can be applied to the following business purposes, in order to drive business value.
1. Measurement: a program that creates a hierarchy of performance metrics (see also Metrics Reference Model) and benchmarking that informs business leaders about progress towards business goals (business process management).

2. Analytics: a program that builds quantitative processes for a business to arrive at optimal decisions and to perform business knowledge discovery. This frequently involves data mining, process mining, statistical analysis, predictive analytics, predictive modeling, business process modeling, complex event processing and prescriptive analytics.

3. Reporting/enterprise reporting: a program that builds infrastructure for strategic reporting to serve the strategic management of a business (not operational reporting). This frequently involves data visualization, executive information systems and OLAP.


4. Collaboration/collaboration platform: a program that gets different areas (both inside and outside the business) to work together through data sharing and electronic data interchange.

5. Knowledge management: a program to make the company data-driven through strategies and practices to identify, create, represent, distribute, and enable adoption of insights and experiences that constitute true business knowledge. Knowledge management leads to learning management and regulatory compliance.

In addition to the above, business intelligence can also provide a proactive approach, such as an alarm function that alerts end users immediately.

Success Factors of Implementation:


Before implementing a BI solution, it is worth taking different factors into consideration before proceeding. According to Kimball et al., these are the three critical areas to assess within an organization before getting ready to do a BI project:

1. The level of commitment and sponsorship of the project from senior management
2. The level of business need for creating a BI implementation
3. The amount and quality of business data available

Difference between Wi-Fi and Bluetooth:

Wi-Fi:
Wi-Fi is a wireless networking solution that allows computers to connect to a network via an access point. It was developed as an alternative to wired networking, which is restrictive. Although Wi-Fi has already begun to appear in a few mobile phones, you are more likely to find it in laptops, PDAs, and smartphones, where it is often used to connect to the internet via a hotspot. Though it is possible to connect two devices directly via Wi-Fi, doing so is a lot more technical and tedious, since you would need to define one as an access point so that the other can connect. Wi-Fi is meant to provide mobility to its users while staying connected; its radios transmit at high power levels to achieve a long range that can extend up to 300 ft. Bandwidth is essential for Wi-Fi, since it provides a connection to the internet or an intranet, and manufacturers are always trying to improve the bandwidth even further.

Bluetooth:

Bluetooth is a standard that was developed largely for the mobile phone market. It was created to supersede infrared, which had a lot of limitations. Bluetooth is used to send small files from one device to another and to connect devices like headsets and other peripherals. Since Bluetooth was developed mainly for the mobile phone industry, it has become fairly common in mobile phones. Its ability to connect peripherals like keyboards and headsets is not possible with Wi-Fi, and it is a lot easier and faster to send pictures and other small files via Bluetooth than via Wi-Fi. Bluetooth does not require as much range between two devices, which is why it uses a much weaker radio and achieves about 30 ft of separation. Most Bluetooth devices do not require a lot of bandwidth, and greater bandwidth would usually result in greater cost; that is why Bluetooth still has a very small bandwidth, making it unsuitable for transferring larger files.


Protocols:
A communications protocol is a system of digital message formats and rules for exchanging those messages in or between computing systems and in telecommunications. A protocol may have a formal description. Protocols may include signaling, authentication and error detection and correction capabilities. A protocol definition defines the syntax, semantics, and synchronization of communication; the specified behavior is typically independent of how it is to be implemented, so a protocol can be implemented as hardware, software or both. Communications protocols have to be agreed upon by the parties involved; to reach agreement, a protocol may be developed into a technical standard. Communicating systems use well-defined formats for exchanging messages. Each message has an exact meaning intended to provoke a defined response from the receiver. A protocol therefore describes the syntax, semantics, and synchronization of communication. A programming language describes the same for computations, so there is a close analogy between protocols and programming languages: protocols are to communication what programming languages are to computations.

Basic Requirements of Protocols:


Messages are sent and received on communicating systems to establish communications. Protocols should therefore specify rules governing the transmission. In general, much of the following should be addressed.

Data formats for data exchange: Digital message bit strings are exchanged. The bit strings are divided into fields, and each field carries information relevant to the protocol. Conceptually the bit string is divided into two parts, called the header area and the data area. The actual message is stored in the data area, while the header area contains the fields with more relevance to the protocol. Bit strings longer than the maximum transmission unit (MTU) are divided into pieces of appropriate size.

Address formats for data exchange: Addresses are used to identify both the sender and the intended receiver(s). The addresses are stored in the header area of the bit strings, allowing the receivers to determine whether the bit strings are intended for themselves and should be processed or should be ignored. A connection between a sender and a receiver can be identified using an address pair (sender address, receiver address).

Address mapping: Sometimes protocols need to map addresses of one scheme onto addresses of another scheme, for instance to translate a logical IP address specified by the application to an Ethernet hardware address. This is referred to as address mapping.

Routing: When systems are not directly connected, intermediary systems along the route to the intended receiver(s) need to forward messages on behalf of the sender. On the Internet, the networks are connected using routers; this way of connecting networks is called internetworking.

Detection of transmission errors: Error detection is necessary on networks which cannot guarantee error-free operation. In a common approach, CRCs of the data area are added to the end of packets, making it possible for the receiver to detect differences caused by errors. The receiver rejects packets on CRC differences and arranges somehow for retransmission.

Acknowledgements: Acknowledgements of correct reception of packets are required for connection-oriented communication. Acknowledgements are sent from receivers back to their respective senders.


Loss of information - timeouts and retries: Packets may be lost on the network or suffer from long delays. On a timeout, the sender must assume the packet was not received and retransmit it. In the case of a permanently broken link, the retransmission has no effect, so the number of retransmissions is limited; exceeding the retry limit is considered an error.

Direction of information flow: Direction needs to be addressed if transmissions can only occur in one direction at a time, as on half-duplex links. This is known as media access control.

Sequence control: Long bit strings are divided into pieces and then sent on the network individually. The pieces may get lost or delayed, or take different routes to their destination on some types of networks, and as a result may arrive out of sequence. Retransmissions can also result in duplicate pieces. By marking the pieces with sequence information at the sender, the receiver can determine what was lost or duplicated, ask for the necessary retransmissions, and reassemble the original message.

Flow control: Flow control is needed when the sender transmits faster than the receiver or intermediate network equipment can process the transmissions. It can be implemented by messaging from receiver to sender.
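Several of these requirements (a data format with header and data areas, addresses, sequence numbers, and error detection) can be seen together in the toy Python frame below; the field layout is invented for illustration and is not any real protocol:

    import struct, zlib

    # Toy frame: header (sender, receiver, sequence number) + data area + CRC.
    # The layout is invented for illustration; it is not a real protocol.
    HEADER = struct.Struct("!BBH")   # sender addr, receiver addr, sequence no.

    def pack_frame(src, dst, seq, payload):
        body = HEADER.pack(src, dst, seq) + payload
        return body + struct.pack("!I", zlib.crc32(body))  # CRC appended at the end

    def unpack_frame(frame):
        body, crc = frame[:-4], struct.unpack("!I", frame[-4:])[0]
        if zlib.crc32(body) != crc:
            raise ValueError("CRC mismatch: receiver rejects the packet")
        src, dst, seq = HEADER.unpack(body[:HEADER.size])
        return src, dst, seq, body[HEADER.size:]

    frame = pack_frame(src=1, dst=2, seq=7, payload=b"hello")
    print(unpack_frame(frame))   # (1, 2, 7, b'hello')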

Getting the data across a network is only part of the problem for a protocol. The data received has to be evaluated in the context of the progress of the conversation, so a protocol has to specify rules describing the context. These kinds of rules are said to express the syntax of the communications. Other rules determine whether the data is meaningful for the context in which the exchange takes place. These kinds of rules are said to express the semantics of the communications.

Digital Signals:
A digital signal is a physical signal that is a representation of a sequence of discrete values (a quantified discrete-time signal), for example of an arbitrary bit stream, or of a digitized (sampled and analog-to-digital converted) analog signal. The term digital signal can refer to:

1. a non-continuous electrical signal;
2. a continuous-time waveform signal used in any form of digital communication;
3. a pulse train signal that switches between a discrete number of levels, also known as a line coded signal, for example a signal found in digital electronics or in serial communications using digital baseband transmission, or a pulse code modulation (PCM) representation of a digitized analog signal.

Digital signals change in individual steps and consist of pulses or digits. Digital signals provide better continuous delivery, and they are easier to transmit and more reliable. Digital signals have discrete levels, and the specified value of the pulse remains constant until the change to the next digit. There are two amplitude levels, which are called nodes, based on 1 or 0, true or false, and high or low.
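Digitizing an analog signal, as mentioned above, means sampling it at discrete times and quantizing each sample to one of a finite set of levels. A small PCM-style Python sketch:

    import math

    # Sample a 1 Hz sine wave 8 times per second and quantize each sample
    # to a 4-bit level (16 discrete values) - a toy PCM conversion.
    SAMPLE_RATE, LEVELS = 8, 16

    samples = [math.sin(2 * math.pi * n / SAMPLE_RATE) for n in range(SAMPLE_RATE)]
    quantized = [round((s + 1) / 2 * (LEVELS - 1)) for s in samples]  # map [-1,1] -> 0..15

    print(quantized)   # [8, 13, 15, 13, 8, 2, 0, 2] - discrete steps, not a continuum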

Analog signal:
An analog or analogue signal is any continuous signal for which the time varying feature (variable) of the signal is a representation of some other time varying quantity, i.e., analogous to another time varying signal. For example, in an analog audio signal, the instantaneous voltage of the signal varies continuously with the pressure of the sound waves. It differs from a digital signal, in which a continuous quantity is represented by a discrete function which can only take on one of a finite number of values. The term analog signal usually refers to electrical signals; however, mechanical, pneumatic, hydraulic, and other systems may also convey analog signals.

An analog signal uses some property of the medium to convey the signal's information. For example, an aneroid barometer uses rotary position as the signal to convey pressure information. In an electrical signal, the voltage, current, or frequency of the signal may be varied to represent the information. Any information may be conveyed by an analog signal; often such a signal is a measured response to changes in physical phenomena, such as sound, light, temperature, position, or pressure. The physical variable is converted to an analog signal by a transducer. For example, in sound recording, fluctuations in air pressure (that is to say, sound) strike the diaphragm of a microphone which induces corresponding fluctuations in the current produced by a coil in an electromagnetic microphone, or the voltage produced by a condenser microphone. The voltage or the current is said to be an "analog" of the sound.

Batch Processing:
Batch processing is used when there are many transactions affecting a high percentage of master file records and the response needed is not immediate, usually until the end of the week or month. A good example in a large, national business is payroll processing, where nearly every master file record is affected. The data is collected over a period of time, then input and verified by clerks (verified means input again by someone else, with both inputs compared by computer) and processed centrally. The transactions are entered in batches by keyboard and stored in transaction files. These batches consist of thirty or so records, which are given a batch control ID. The batches are then run through a validation process, and to make sure the batches balance, a computed total is compared with a manually produced total. This helps to ensure that all data is entered without error or omission.

The actual updating of master files only takes place after verification and validation are complete, so batch processing is often run overnight, unattended. A new master file is produced as a result of a batch processing run; the original master file is kept along with a previous version. After processing, the output is produced, usually as printed media such as pay slips or invoices, although this is changing with the advent of the web. There is really no such thing as real time: the best you will get is a few milliseconds from input to response, and such fast systems are used in critical systems that control aircraft or the manufacture of sensitive or dangerous compounds. Batch processing is also called deferred processing or off-line processing: tasks are stored in the form of batches and each batch is processed as required.
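A minimal sketch of the batch control described above, assuming invented field names and a clerk-supplied control total: a batch is accepted only if its computed total balances against the manually produced total.

BATCH_SIZE = 30   # "thirty or so records" per batch, as described above

def split_into_batches(transactions, batch_size=BATCH_SIZE):
    """Group collected transactions into fixed-size batches."""
    for i in range(0, len(transactions), batch_size):
        yield transactions[i:i + batch_size]

def validate_batch(batch, control_total):
    """The computed total must match the manually produced control total."""
    computed = sum(t["amount"] for t in batch)
    return computed == control_total

transactions = [{"employee": i, "amount": 100.0} for i in range(60)]
for batch_id, batch in enumerate(split_into_batches(transactions)):
    ok = validate_batch(batch, control_total=3000.0)  # clerk-supplied total (assumed)
    print(f"batch {batch_id}: {'balanced' if ok else 'rejected'}")

Only batches that balance would go on to update the master file; the rest would be held back for correction, which is why the master file update can safely run overnight and unattended.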

Benefits of batch processing:


Batch processing has these benefits:

It allows sharing of computer resources among many users and programs.
It shifts the time of job processing to when the computing resources are less busy.
It avoids idling the computing resources with minute-by-minute manual intervention and supervision.
By keeping a high overall rate of utilization, it better amortizes the cost of a computer, especially an expensive one.


Common batch processing usage:


Data processing: A typical batch processing schedule includes end-of-day (EOD) reporting. Historically, many systems had a batch window where online subsystems were turned off and the system capacity was used to run jobs common to all data (accounts, users, or customers) on a system. In a bank, for example, EOD jobs include interest calculation, generation of reports and data sets for other systems, printing (statements), and payment processing. Many businesses have moved to concurrent online and batch architectures in order to support globalization, the Internet, and other relatively newer business demands. Such architectures place unique stresses on system design, programming techniques, availability engineering, and IT service delivery.

Databases: Batch processing is also used for efficient bulk database updates and automated transaction processing, as contrasted with interactive online transaction processing (OLTP) applications. The extract, transform, load (ETL) step in populating data warehouses is inherently a batch process in most implementations.

Images: Batch processing is often used to perform various operations on digital images. Computer programs exist that let one resize, convert, watermark, or otherwise edit image files.

Converting: Batch processing is also used for converting a number of computer files from one format to another, to make files portable and versatile, especially for proprietary and legacy files where viewers are not easy to come by (see the sketch below).
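As a sketch of the converting use just mentioned, the following batch job applies the same conversion to every file in a directory in one unattended run. The CSV-to-JSON conversion and the directory names are arbitrary assumptions.

import csv
import json
from pathlib import Path

def convert_all(src_dir, dst_dir):
    """Convert every CSV file in src_dir to JSON in dst_dir, as one batch."""
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for src in Path(src_dir).glob("*.csv"):          # every file in the batch
        with open(src, newline="") as f:
            rows = list(csv.DictReader(f))           # read the legacy format
        (out / f"{src.stem}.json").write_text(json.dumps(rows, indent=2))

# Example invocation (directory names are hypothetical):
# convert_all("legacy_exports", "portable_json")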

Online Processing:
Online processing means users directly enter information online (usually, online in this case means online to a central processor, rather than its modern connotation of the Internet, though it could mean both), and it is validated and updated directly onto the master file. No new file is created in this case, so there is near-immediate input, processing, and output.

The process can also make use of the Internet. For example, a customer who wants to take out travel insurance has to visit the website of the company offering it. He has to sort through the variety of insurance plans he finds on the site to get to one that is appropriate to him, then weigh factors such as fees, premiums, and period of insurance, and decide which plan is suitable. There are plans available that cover everything from a simple overnight journey to a vacation in Hawaii or a ship cruise. Once he has selected a plan, he has to enter all his personal and official information, as well as the name of the plan. Personal data would include age, state of health, personal income, and so on. He could also be asked which company he works for and whether it has any official scheme for insuring its employees. Once he sends the information to the insurance company, it begins the online process. The documents are scrutinized to check for eligibility, and the details given are cross-checked to verify their truthfulness; they are also used as a yardstick to determine whether the candidate qualifies. Finally, if the insurance company is satisfied, it asks the candidate to pay the required amount via the net. The candidate can use his credit or debit card to make the payment, or some other service such as PayPal.


Once the payment has been made, it is understood that the insurance company and the individual have entered into a mutually beneficial contract. Online processing is also called direct access or random access processing. It is fast processing in which input devices are directly connected to the computer.
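A minimal sketch of the contrast with batch processing, using an invented policy record: each transaction is validated as it is entered and applied directly, in place, to the master data, with no new file created.

# Toy master data; in a real system this would live in a database or file.
master_file = {"policy-001": {"holder": "A. Khan", "premium": 120.0}}

def online_update(policy_id, new_premium):
    """Validate at input time, then update the master record immediately."""
    if new_premium <= 0:                               # validation happens on entry
        raise ValueError("premium must be positive")
    master_file[policy_id]["premium"] = new_premium    # direct, in-place update

online_update("policy-001", 150.0)   # near-immediate input, process, and output
print(master_file["policy-001"])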

OLAP (online analytical processing):


OLAP (online analytical processing) is computer processing that enables a user to easily and selectively extract and view data from different points of view. For example, a user can request that data be analyzed to display a spreadsheet showing all of a company's beach ball products sold in Florida in the month of July, compare revenue figures with those for the same products in September, and then see a comparison of other product sales in Florida in the same time period. To facilitate this kind of analysis, OLAP data is stored in a multidimensional database. Whereas a relational database can be thought of as two-dimensional, a multidimensional database considers each data attribute (such as product, geographic sales region, and time period) as a separate "dimension." OLAP software can locate the intersection of dimensions (all products sold in the Eastern region above a certain price during a certain time period) and display them. Attributes such as time periods can be broken down into sub-attributes. OLAP can be used for data mining, the discovery of previously undiscerned relationships between data items. An OLAP database does not need to be as large as a data warehouse, since not all transactional data is needed for trend analysis. Using Open Database Connectivity (ODBC), data can be imported from existing relational databases to create a multidimensional database for OLAP. Two leading OLAP products are Hyperion Solutions' Essbase and Oracle's Express Server. OLAP products are typically designed for multiple-user environments, with the cost of the software based on the number of users.
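A toy illustration of the beach-ball query above, assuming a made-up fact set keyed by (product, region, month) dimensions: slicing fixes some dimensions and aggregates over the rest.

# All figures are invented for illustration.
sales = {
    ("beach ball", "Florida", "July"):      1200.0,
    ("beach ball", "Florida", "September"):  400.0,
    ("sunscreen",  "Florida", "July"):       800.0,
}

def slice_sum(product=None, region=None, month=None):
    """Sum the measure over every cell matching the fixed dimension values."""
    return sum(v for (p, r, m), v in sales.items()
               if product in (None, p) and region in (None, r) and month in (None, m))

print(slice_sum(product="beach ball", region="Florida", month="July"))       # 1200.0
print(slice_sum(product="beach ball", region="Florida", month="September"))  # 400.0
print(slice_sum(region="Florida", month="July"))   # all products in that slice: 2000.0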

Overview of OLAP systems:


The core of any OLAP system is an OLAP cube (also called a 'multidimensional cube' or a hypercube). It consists of numeric facts called measures which are categorized by dimensions. The measures are placed at the intersections of the hypercube, which is spanned by the dimensions as in a vector space. The usual interface for manipulating an OLAP cube is a matrix interface, like pivot tables in a spreadsheet program, which performs projection operations along the dimensions, such as aggregation or averaging. The cube metadata is typically created from a star schema or snowflake schema of tables in a relational database: measures are derived from the records in the fact table, and dimensions are derived from the dimension tables. Each measure can be thought of as having a set of labels, or metadata, associated with it. A dimension is what describes these labels; it provides information about the measure. A simple example would be a cube that contains a store's sales as a measure and Date/Time as a dimension: each sale has a Date/Time label that describes more about that sale. Any number of dimensions, such as Store, Cashier, or Customer, can be added to the structure by adding a foreign key column to the fact table. This allows an analyst to view the measures along any combination of the dimensions.

For example:
Sales Fact Table
+-------------+---------+
| sale_amount | time_id |
+-------------+---------+
|     2008.10 |    1234 |
+-------------+---------+

Time Dimension
+---------+-------------------+
| time_id | timestamp         |
+---------+-------------------+
|    1234 | 20080902 12:35:43 |
+---------+-------------------+
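A small sketch of how the measure picks up its dimension labels, using the data from the example above: the fact row is joined to the time dimension on the time_id foreign key.

# Data copied from the example tables above.
fact_sales = [{"sale_amount": 2008.10, "time_id": 1234}]
dim_time   = {1234: {"timestamp": "20080902 12:35:43"}}

for row in fact_sales:
    label = dim_time[row["time_id"]]   # the dimension supplies the label
    print(row["sale_amount"], "sold at", label["timestamp"])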

Multidimensional databases:
Multidimensional structure is defined as a variation of the relational model that uses multidimensional structures to organize data and express the relationships between data. The structure is broken into cubes, and the cubes are able to store and access data within the confines of each cube. Each cell within a multidimensional structure contains aggregated data related to elements along each of its dimensions. Even when data is manipulated, it remains easy to access and continues to constitute a compact database format, and the data still remains interrelated. Multidimensional structure is quite popular for analytical databases that use online analytical processing (OLAP) applications (O'Brien & Marakas, 2009). Analytical databases use these structures because of their ability to deliver answers to complex business queries swiftly: data can be viewed from different angles, which gives a broader perspective on a problem than other models.

Aggregations:
It has been claimed that for complex queries OLAP cubes can produce an answer in around 0.1% of the time required for the same query on OLTP relational data. The most important mechanism in OLAP which allows it to achieve such performance is the use of aggregations. Aggregations are built from the fact table by changing the granularity on specific dimensions and aggregating up data along these dimensions. The number of possible aggregations is determined by every possible combination of dimension granularities. The combination of all possible aggregations and the base data contains the answers to every query which can be answered from the data. Because usually there are many aggregations that can be calculated, often only a predetermined number are fully calculated; the remainder is solved on demand. The problem of deciding which aggregations (views) to calculate is known as the view selection problem. View selection can be constrained by the total size of the selected set of aggregations, the time to update them from changes in the base data, or both. The objective of view selection is typically to minimize the average time to answer OLAP queries, although some studies also minimize the update time. View selection is NP-Complete. Many approaches to the problem have been explored, including greedy algorithms, randomized search, genetic algorithms and A* search algorithm.
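A minimal sketch of one pre-computed aggregation, assuming an invented day-level fact table: rolling the date dimension up from day to month lets month-level queries be answered without touching the base data.

from collections import defaultdict

# Invented base facts: (day, amount).
base_facts = [("2008-09-01", 10.0), ("2008-09-02", 5.0), ("2008-10-01", 7.0)]

monthly = defaultdict(float)       # one pre-calculated aggregation (a "view")
for day, amount in base_facts:
    monthly[day[:7]] += amount     # coarsen granularity: roll days up to months

print(monthly["2008-09"])          # answered from the aggregate alone: 15.0

A full system would choose which such views to materialize (the view selection problem) and compute the remaining aggregations on demand.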


Types of OLAP:
OLAP systems have been traditionally categorized using the following taxonomy.
Multidimensional 'MOLAP' is the 'classic' form of OLAP and is sometimes referred to as just OLAP. MOLAP stores data in optimized multi-dimensional array storage, rather than in a relational database. It therefore requires the pre-computation and storage of information in the cube - the operation known as processing.

Relational 'ROLAP' works directly with relational databases. The base data and the dimension tables are stored as relational tables, and new tables are created to hold the aggregated information; it depends on a specialized schema design. This methodology relies on manipulating the data stored in the relational database to give the appearance of traditional OLAP's slicing and dicing functionality. In essence, each action of slicing and dicing is equivalent to adding a "WHERE" clause to the SQL statement (see the sketch after this list).

Hybrid 'HOLAP': There is no clear agreement across the industry as to what constitutes "Hybrid OLAP", except that a database will divide data between relational and specialized storage. For example, for some vendors, a HOLAP database will use relational tables to hold the larger quantities of detailed data, and use specialized storage for at least some aspects of the smaller quantities of more-aggregate or less-detailed data.
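A sketch of the ROLAP point above, with invented table and column names: a slice on one dimension becomes nothing more than a WHERE clause against an ordinary relational table.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE fact_sales (region TEXT, product TEXT, amount REAL)")
con.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                [("East", "ball", 10.0), ("West", "ball", 4.0), ("East", "bat", 6.0)])

# Slicing on region = 'East' is just an added WHERE clause:
total, = con.execute(
    "SELECT SUM(amount) FROM fact_sales WHERE region = 'East'").fetchone()
print(total)  # 16.0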
Comparison:
Each type has certain benefits, although there is disagreement about the specifics of the benefits between providers.

Some MOLAP implementations are prone to database explosion, a phenomenon causing vast amounts of storage space to be used by MOLAP databases when certain common conditions are met: high number of dimensions, pre-calculated results and sparse multidimensional data. MOLAP generally delivers better performance due to specialized indexing and storage optimizations. MOLAP also needs less storage space compared to ROLAP because the specialized storage typically includes compression techniques. ROLAP is generally more scalable. However, large volume pre-processing is difficult to implement efficiently so it is frequently skipped. ROLAP query performance can therefore suffer tremendously. Since ROLAP relies more on the database to perform calculations, it has more limitations in the specialized functions it can use. HOLAP encompasses a range of solutions that attempt to mix the best of ROLAP and MOLAP. It can generally pre-process swiftly, scale well, and offer good function support.


Other types:
The following acronyms are also sometimes used, although they are not as widespread as the ones above:

WOLAP - Web-based OLAP
DOLAP - Desktop OLAP
RTOLAP - Real-Time OLAP

Difference between OLTP & OLAP:


IT systems can be divided into transactional (OLTP) and analytical (OLAP). In general, OLTP systems provide source data to data warehouses, whereas OLAP systems help to analyze it.

- OLTP (On-line Transaction Processing): OLTP is characterized by a large number of short on-line transactions (INSERT, UPDATE, and DELETE). The main emphasis for OLTP systems is very fast query processing, maintaining data integrity in multi-access environments, and effectiveness measured by the number of transactions per second. An OLTP database contains detailed, current data, and the schema used to store transactional data is the entity model (usually 3NF).

- OLAP (On-line Analytical Processing):


OLAP (on-line analytical processing) is characterized by a relatively low volume of transactions. Queries are often very complex and involve aggregations. For OLAP systems, response time is the effectiveness measure. OLAP applications are widely used in data mining. An OLAP database contains aggregated, historical data, stored in multi-dimensional schemas (usually the star schema).
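The contrast can be sketched against a single toy table (schema invented for illustration): the OLTP workload is many short writes, while the OLAP workload is an aggregating read.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")

# OLTP style: short, fast inserts and updates initiated by end users.
con.execute("INSERT INTO orders (customer, total) VALUES ('alice', 25.0)")
con.execute("UPDATE orders SET total = 30.0 WHERE id = 1")

# OLAP style: a (here deliberately simple) aggregation over the history.
print(con.execute("SELECT customer, SUM(total) FROM orders GROUP BY customer").fetchall())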

The following table summarizes the major differences between OLTP and OLAP system design.
OLTP System: Online Transaction Processing (operational system)
OLAP System: Online Analytical Processing (data warehouse)

Source of data:
  OLTP - Operational data; OLTP systems are the original source of the data.
  OLAP - Consolidation data; OLAP data comes from the various OLTP databases.

Purpose of data:
  OLTP - To control and run fundamental business tasks.
  OLAP - To help with planning, problem solving, and decision support.

What the data reveals:
  OLTP - A snapshot of ongoing business processes.
  OLAP - Multi-dimensional views of various kinds of business activities.

Inserts and updates:
  OLTP - Short and fast inserts and updates initiated by end users.
  OLAP - Periodic long-running batch jobs refresh the data.

Queries:
  OLTP - Relatively standardized and simple queries returning relatively few records.
  OLAP - Often complex queries involving aggregations.

Processing speed:
  OLTP - Typically very fast.
  OLAP - Depends on the amount of data involved; batch data refreshes and complex queries may take many hours; query speed can be improved by creating indexes.

Space requirements:
  OLTP - Can be relatively small if historical data is archived.
  OLAP - Larger, due to the existence of aggregation structures and history data; requires more indexes than OLTP.

Database design:
  OLTP - Highly normalized, with many tables.
  OLAP - Typically de-normalized, with fewer tables; uses star and/or snowflake schemas.

Backup and recovery:
  OLTP - Backed up religiously; operational data is critical to run the business, and data loss is likely to entail significant monetary loss and legal liability.
  OLAP - Instead of regular backups, some environments may consider simply reloading the OLTP data as a recovery method.

