
Functional Decomposition

Top-Down Development
The top-down approach builds a system by stepwise refinement, starting with a
definition of its abstract function. You start the process by expressing a topmost
statement of this function, such as
“Translate a C program to machine code”
and continue with a sequence of refinement steps. Each step must decrease the level
of abstraction of the elements obtained; it decomposes every operation into a
combination of one or more simpler operations.
“Read the program and produce a sequence of tokens”
“Parse the tokens into an abstract syntax tree”
“Decorate the tree with semantic information”
“Generate code from the decorated tree”
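The refinement above can be sketched directly in code. The following Java skeleton is only an illustration (every class and method name is invented, and the bodies are stubs): the top-level translate method delegates to the four simpler operations.

```java
import java.util.List;

public class Compiler {
    // Top-level function: one statement of the system's abstract purpose,
    // refined into a sequence of simpler operations.
    public String translate(String cProgram) {
        List<String> tokens = scan(cProgram);   // read program, produce tokens
        Object ast = parse(tokens);             // parse tokens into syntax tree
        Object decorated = analyze(ast);        // decorate tree with semantic info
        return generate(decorated);             // generate code from decorated tree
    }

    // Stubs stand in for the next level of refinement.
    private List<String> scan(String source) { return List.of(source.split("\\s+")); }
    private Object parse(List<String> tokens) { return tokens; }
    private Object analyze(Object ast) { return ast; }
    private String generate(Object tree) { return "code for: " + tree; }

    public static void main(String[] args) {
        System.out.println(new Compiler().translate("int main() { return 0; }"));
    }
}
```

Each private method would then be refined in turn, repeating the process one level down.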
The top-down approach has a number of advantages:
- logical, well-organized thought discipline
- can be taught effectively
- encourages orderly development of systems
- helps the designer find a way through the apparent complexity that systems often
present at the initial stages of their design
The top-down approach can indeed be useful for developing individual algorithms.
But it also suffers from limitations that make it questionable as a tool for the design
of entire systems:
- The very idea of characterizing a system by just one function is subject to
doubt.
- By using as a basis for modular decomposition the properties that tend to
change the most, the method fails to account for the evolutionary nature of
software systems.
Example: Payroll Program
The system takes some inputs (record of hours worked and employee information) and
produces some outputs (paychecks . . .).
The top-down functional method is meant precisely for such well-defined problems,
where the task is to perform a single function — the “top” of the system to be built.
Good systems have the detestable habit of giving their users plenty of ideas about all
the other things they could do.
- Could it gather statistics on the side?
- We are going to start paying some employees monthly and others biweekly.
- I need a summary every month for management, and one every quarter for the
shareholders.
This phenomenon of having to add unanticipated functions to successful systems
occurs in all application areas.
The new system is still, in many respects, “the same system” as the old one: still a
payroll system; but the original “main function”, which may have seemed so
important at first, often becomes just one of many functions; sometimes, it just
vanishes, having outlived its usefulness.
If analysis and design have used a decomposition method based on the function, the
system structure will follow from the designers’ original understanding of the
system’s main function.
Each addition of a new function, however incremental it seems to the customer, risks
invalidating the entire structure.

Functional Decomposition Diagrams

A top-down design or functional decomposition diagram resembles a method call dependency diagram
where each method at level n is the root of a sub-branch whose children are methods the root calls. The
diagram is not strictly a tree as recursion results in a cycle and a method may invoke other branches of the
diagram.

Object-oriented Decomposition

Object-oriented design is the process of planning a system of interacting objects for the
purpose of solving a software problem. It is one approach to software design.

An object contains encapsulated data and procedures grouped together to represent an entity. The 'object
interface', how the object can be interacted with, is also defined. An object-oriented program is described by
the interaction of these objects. Object-oriented design is the discipline of defining the objects and their
interactions to solve a problem that was identified and documented during object-oriented analysis.

What follows is a description of the class-based subset of object-oriented design, which does not include
object prototype-based approaches, where objects are not typically obtained by instantiating classes but by
cloning other (prototype) objects.

Object-oriented design topics

Input (sources) for object-oriented design

The input for object-oriented design is provided by the output of object-oriented analysis. Realize that an
output artifact does not need to be completely developed to serve as input of object-oriented design; analysis
and design may occur in parallel, and in practice the results of one activity can feed the other in a short
feedback cycle through an iterative process. Both analysis and design can be performed incrementally, and
the artifacts can be continuously grown instead of completely developed in one shot.

Some typical input artifacts for object-oriented design are:


 Conceptual model: The conceptual model is the result of object-oriented analysis; it captures concepts in
the problem domain. The conceptual model is explicitly chosen to be independent of implementation
details, such as concurrency or data storage.

 Use case: A use case is a description of sequences of events that, taken together, lead to a system doing
something useful. Each use case provides one or more scenarios that convey how the system should
interact with its users, called actors, to achieve a specific business goal or function. Use case actors
may be end users or other systems. In many circumstances use cases are further elaborated into use
case diagrams. Use case diagrams are used to identify the actors (users or other systems) and the
processes they perform.

 System Sequence Diagram: A system sequence diagram (SSD) is a picture that shows, for a particular
scenario of a use case, the events that external actors generate, their order, and possible inter-system
events.

 User interface documentation (if applicable): A document that shows and describes the look and feel
of the end product's user interface. It is not mandatory to have this, but it helps to visualize the
end product and therefore helps the designer.

 Relational data model (if applicable): A data model is an abstract model that describes how data is
represented and used. If an object database is not used, the relational data model should usually be
created before the design, since the strategy chosen for object-relational mapping is an output of the
OO design process. However, it is possible to develop the relational data model and the object-
oriented design artifacts in parallel, and the growth of an artifact can stimulate the refinement of
other artifacts.

Object-oriented concepts

The five basic concepts of object-oriented design are the implementation level features that are built into the
programming language. These features are often referred to by these common names:

 Object/Class: A tight coupling or association of data structures with the methods or functions that act
on the data. This is called a class, or object (an object is created based on a class). Each object serves
a separate function. It is defined by its properties, what it is and what it can do. An object can be part
of a class, which is a set of objects that are similar.

 Information hiding: The ability to protect some components of the object from external entities. This
is realized by language keywords to enable a variable to be declared as private or protected to the
owning class.
 Inheritance: The ability for a class to extend or override functionality of another class. The
subclass inherits data and behavior from the superclass and then adds its own set
of functions and data.
 Interface: The ability to defer the implementation of a method. The ability to define the functions or
methods signatures without implementing them.
 Polymorphism: The ability to substitute an object of a subclass wherever an object of its superclass is
expected; a variable of the superclass type can refer not only to objects of that class but also to
objects of any of its subclasses.
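As a quick illustration of how these concepts appear in code, here is a hypothetical Java sketch (all names invented):

```java
interface Drawable {                  // Interface: a method signature without implementation
    String draw();
}

class Shape implements Drawable {     // Object/Class: data and methods grouped together
    private String name;              // Information hiding: the field is private
    Shape(String name) { this.name = name; }
    String getName() { return name; } // access only through the object's interface
    public String draw() { return "shape " + name; }
}

class Circle extends Shape {          // Inheritance: extends and overrides Shape
    Circle() { super("circle"); }
    @Override public String draw() { return "circle"; }
}

public class OoConcepts {
    public static void main(String[] args) {
        Drawable d = new Circle();    // Polymorphism: a superclass/interface variable
        System.out.println(d.draw()); // holds a subclass object; dispatches to Circle.draw()
    }
}
```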

Designing concepts

 Defining objects, creating class diagram from conceptual diagram: usually, each entity in the
conceptual model maps to a class.

 Identifying attributes.

 Use design patterns (if applicable): A design pattern is not a finished design; it is a description of a
solution to a common problem, in a context[1]. The main advantage of using a design pattern is that it
can be reused in multiple applications. It can also be thought of as a template for how to solve a
problem that can be used in many different situations and/or applications. Object-oriented design
patterns typically show relationships and interactions between classes or objects, without specifying
the final application classes or objects that are involved.

 Define application framework (if applicable): An application framework is a term usually used to refer
to a set of libraries or classes that implement the standard structure of an application for a
specific operating system. By bundling a large amount of reusable code into a framework, much time
is saved for the developer, who is spared the task of rewriting large amounts of standard code
for each new application that is developed.

 Identify persistent objects/data (if applicable): Identify objects that have to last longer than a single
runtime of the application. If a relational database is used, design the object relation mapping.

 Identify and define remote objects (if applicable).

Output (deliverables) of object-oriented design

 Sequence Diagrams: Extend the System Sequence Diagram to add specific objects that handle the
system events.

A sequence diagram shows, as parallel vertical lines, different processes or objects that live
simultaneously, and, as horizontal arrows, the messages exchanged between them, in the order in
which they occur.

 Class diagram: A class diagram is a type of static structure UML diagram that describes the structure
of a system by showing the system's classes, their attributes, and the relationships between the
classes. The messages and classes identified through the development of the sequence diagrams can
serve as input to the automatic generation of the global class diagram of the system.

Some design principles and strategies

 Dependency injection: The basic idea is that if an object depends upon having an instance of some
other object then the needed object is "injected" into the dependent object; for example, being passed
a database connection as an argument to the constructor instead of creating one internally.
 Acyclic dependencies principle: The dependency graph of packages or components should have no
cycles. This is also referred to as having a directed acyclic graph.[2] For example, package C depends
on package B, which depends on package A. If package A also depended on package C, then you
would have a cycle.
 Composite reuse principle: Favor polymorphic composition of objects over inheritance.[1]
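A minimal sketch of the dependency-injection idea described above, with invented names; the connection is passed to the constructor rather than created internally:

```java
interface Connection { String query(String sql); }   // hypothetical abstraction

class FakeConnection implements Connection {          // stand-in, e.g. for testing
    public String query(String sql) { return "rows for: " + sql; }
}

class ReportService {
    private final Connection conn;
    ReportService(Connection conn) {    // the dependency is "injected" here,
        this.conn = conn;               // not created with: new RealConnection()
    }
    String monthlySummary() { return conn.query("SELECT ..."); }
}

public class InjectionDemo {
    public static void main(String[] args) {
        // The caller decides which Connection the service uses.
        ReportService svc = new ReportService(new FakeConnection());
        System.out.println(svc.monthlySummary());
    }
}
```

Because the service never names a concrete connection class, a test harness or a different deployment can swap in another implementation without touching ReportService.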

The leaves of the tree are self-contained methods that do not need to invoke other methods (except perhaps
System.out.println() and the like), such as task3. In any given node, you should assume that all other methods
work or will become available; focus solely on the sequence of steps taken by that method.

This diagram doesn't show implementation details; it just shows the breakdown of tasks into subtasks for a
particular operation (the root). Some subtasks are implemented later as inline code while other subtasks are
implemented as references to other methods that you then must further break down into another branch.

If you want, you can label the information flow to and from the methods by their name.

Sometimes a branch maps to a module for which we use a class like a DatabaseManager singleton object.
Most often a decomposition diagram path will weave through several objects.
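For the singleton case mentioned here, a minimal sketch might look like this (only the DatabaseManager name comes from the text; everything else is illustrative):

```java
public class DatabaseManager {
    // The single shared instance, created once when the class loads.
    private static final DatabaseManager INSTANCE = new DatabaseManager();
    private int queries = 0;

    private DatabaseManager() { }   // private constructor: no outside instantiation

    public static DatabaseManager getInstance() { return INSTANCE; }

    // Placeholder operation; returns a running count of queries issued.
    public int runQuery(String sql) { return ++queries; }

    public static void main(String[] args) {
        // Both references resolve to the same object.
        System.out.println(DatabaseManager.getInstance() == DatabaseManager.getInstance());
    }
}
```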

I prefer doing these things sideways so my nodes can have lots of text.

Another method is to imply a diagram by simply referencing another method that lives on another sheet of
paper or wherever.

Start:
task1
task2
task3

task1:
blah blah

task2:
blort blort
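The outline above maps directly onto methods: each label becomes a method, and the root calls its subtasks in order. A small Java rendering (the bodies are just the placeholders from the outline):

```java
public class Outline {
    // Root of the decomposition: calls its subtasks in sequence.
    static String start() {
        return task1() + " / " + task2() + " / " + task3();
    }
    static String task1() { return "blah blah"; }
    static String task2() { return "blort blort"; }
    static String task3() { return "done"; }   // a leaf: invokes nothing further

    public static void main(String[] args) {
        System.out.println(start());
    }
}
```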

The Design Process

1. Write down the functionality of your system so you have a clear picture of what it does. For
example, you can draw out the site map for a website and list all the pages, which will identify all the
top-level functional requirements.
2. Identify the actors in your system (what are the major components in your system?). List the major
message and data traffic interactions, which usually highlight some new helper objects.

3. Group similar tasks or aspects of your program into a single object; objects play the role of
modules or services in Java.
4. For each method of each object, apply top-down functional decomposition. Each method should
have as few "operations" as possible while still being a complete concept. Further break down each
of these operations until you think you have an "atomic" operation that is just a few simple
programming instructions.

Coding: Implement the methods top-down. Contrary to some advocates of top-down design, I recommend
strongly that you implement your methods starting at the highest level and working downwards. Changes at
level n force changes at level n+1 and below like a ripple effect.

Testing: Test your methods bottom-up by writing little test harnesses; this is called unit testing. At each
level, you will have confidence in the level(s) below you.
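A “little test harness” of the kind described might look like this sketch; the payroll method and its numbers are invented for illustration:

```java
public class PayrollTest {
    // Leaf-level method under test: gross pay with time-and-a-half overtime.
    static double grossPay(double hours, double rate) {
        double overtime = Math.max(0, hours - 40);
        return (hours - overtime) * rate + overtime * rate * 1.5;
    }

    public static void main(String[] args) {
        check(grossPay(40, 10.0) == 400.0, "no overtime");
        check(grossPay(45, 10.0) == 475.0, "5 overtime hours at time-and-a-half");
        System.out.println("all tests passed");
    }

    static void check(boolean ok, String what) {
        if (!ok) throw new AssertionError("failed: " + what);
    }
}
```

Once the leaf passes its tests, you can test the level above it with confidence that failures there are its own.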

In summary, you will design your overall program using an object interaction diagram and design each object
using functional decomposition.

Documentation:

The reality is that increased processes usually result in increased documentation.  An improved process
produces intermediate work products that represent the elaboration of the product design at each step in the
development life cycle.  Where possible, documentation should be generated using automated tools, so
outputs can contribute to generation of code structures or help generate the code itself. 

The difference between hacking and software engineering is professional discipline applied with common
sense. Software quality, reliability, and maintainability are enhanced by having good documentation for
requirements, architecture, interfaces, detailed design, well-commented code, and good test procedures. 
Requirements documentation practices should facilitate your customer's understanding and review of the
real requirements.  Software project planning should include estimating the time and resources to produce,
review, approve, and manage such documentation products.

Sample Software Documentation Work Products

A project manager is a professional in the field of project management. Project managers can have the
responsibility of the planning, execution, and closing of any project, typically relating to construction
industry, architecture, computer networking, telecommunications or software development.

Many other fields in the production, design and service industries also have project managers.

Overview

A project manager is the person responsible for accomplishing the stated project objectives. Key project
management responsibilities include creating clear and attainable project objectives, building the project
requirements, and managing the triple constraint for projects: cost, time, and scope.

A project manager is often a client representative and has to determine and implement the exact needs of the
client, based on knowledge of the firm they are representing. The ability to adapt to the various internal
procedures of the contracting party, and to form close links with the nominated representatives, is essential
in ensuring that the key issues of cost, time, quality and above all, client satisfaction, can be realized.

The term and title 'project manager' has come to be used generically to describe anyone given responsibility
to complete a project. However, it is more properly used to describe a person with full responsibility and the
same level of authority required to complete a project. If a person does not have high levels of both
responsibility and authority then they are better described as a project administrator, coordinator, facilitator
or expeditor.


Types of project managers

Construction Project Manager

Construction project managers in the past were individuals who worked in construction or supporting
industries and were promoted to foreman. It was not until the late 20th century that construction and
construction management became distinct fields.

Until recently, the American construction industry lacked any level of standardization, with individual states
determining the eligibility requirements within their jurisdiction. However, several trade associations based
in the United States have made strides in creating a commonly accepted set of qualifications and tests to
determine a project manager's competency.

 The Project Management Institute has made some headway into being a standardizing body with its
creation of the Project Management Professional (PMP) designation.
 The Constructor Certification Commission of the American Institute of Constructors holds
semiannual nationwide tests. Eight American Construction Management programs require that
students take these exams before they may receive their Bachelor of Science in Construction
Management degree, and 15 other Universities actively encourage their students to consider the
exams.
 The Associated Colleges of Construction Education, and the Associated Schools of Construction
have made considerable progress in developing national standards for construction education
programs.

The profession has recently grown to accommodate several dozen Construction Management Bachelor of
Science programs.

The US Navy Construction Battalion, nicknamed the SeaBees, puts their command through strenuous
training and certifications at every level. To become a Chief Petty Officer in the SeaBees is equivalent to a
BS in Construction Management with the added benefit of several years of experience to their credit. See
ACE accreditation.

Architectural Project Manager

Architectural project managers are project managers in the field of architecture. They have many of the same
skills as their counterparts in the construction industry. An architect will often work closely with the
construction project manager in the office of the General contractor (GC), and at the same time, coordinate
the work of the design team and numerous consultants who contribute to a construction project, and manage
communication with the client. The issues of budget, scheduling, and quality-control are the responsibility
of the Project Manager in an architect's office.

Software Project Manager

A Software Project Manager has many of the same skills as their counterparts in other industries. Beyond
the skills normally associated with traditional project management in industries such as construction and
manufacturing, a software project manager will typically have an extensive background in software
development. Many software project managers hold a degree in Computer Science, Information Technology
or another related field.

In traditional project management a heavyweight, predictive methodology such as the waterfall model is
often employed, but software project managers must also be skilled in more lightweight, adaptive
methodologies such as DSDM, SCRUM and XP. These project management methodologies are based on the
uncertainty of developing a new software system and advocate smaller, incremental development cycles.
These incremental or iterative cycles are timeboxed (constrained to a known period of time, typically from
one to four weeks) and produce a working subset of the entire system deliverable at the end of each iteration.
The increasing adoption of lightweight approaches is due largely to the fact that software requirements are
very susceptible to change, and it is extremely difficult to elicit all the potential requirements in a single
project phase before software development commences.

The software project manager is also expected to be familiar with the Software Development Life Cycle
(SDLC). This may require in-depth knowledge of requirements elicitation, application development, logical
and physical database design, and networking. This knowledge is typically the result of the aforementioned
education and experience. There is no widely accepted certification for software project managers, but
many will hold the PMP designation offered by the Project Management Institute, PRINCE2 or an advanced
degree in project management, such as an MSPM or other graduate degree in technology management.

Responsibilities

The specific responsibilities of the Project Manager vary depending on the industry, the company size, the
company maturity, and the company culture. However, there are some responsibilities that are common to
all Project Managers, including[2]:

 Developing the project plan


 Managing the project stakeholders
 Managing the project team
 Managing the project risk
 Managing the project schedule
 Managing the project budget
 Managing the project conflicts

The critical path method (CPM) is an algorithm for scheduling a set of project activities.[1] It is an
important tool for effective project management.

History

The Critical Path Method (CPM) is a project modeling technique developed in the late 1950s by Morgan R.
Walker of DuPont and James E. Kelley, Jr. of Remington Rand.[2] Kelley and Walker related their memories
of the development of CPM in 1989.[3] Kelley attributed the term "critical path" to the developers of the
Program Evaluation and Review Technique which was developed at about the same time by Booz Allen
Hamilton and the US Navy.[4] The precursors of what came to be known as Critical Path were developed and
put into practice by DuPont between 1940 and 1943 and contributed to the success of the Manhattan Project.[5]

CPM is commonly used with all forms of projects, including construction, aerospace and defense, software
development, research projects, product development, engineering, and plant maintenance, among others.
Any project with interdependent activities can apply this method of mathematical analysis. Although the
original CPM program and approach are no longer used, the term is generally applied to any approach used to
analyze a project network logic diagram.

Basic technique

The essential technique for using CPM[6] is to construct a model of the project that includes the following:

1. A list of all activities required to complete the project (typically categorized within a work
breakdown structure),
2. The time (duration) that each activity will take to completion, and
3. The dependencies between the activities

Using these values, CPM calculates the longest path of planned activities to the end of the project, and the
earliest and latest that each activity can start and finish without making the project longer. This process
determines which activities are "critical" (i.e., on the longest path) and which have "total float" (i.e., can be
delayed without making the project longer). In project management, a critical path is the sequence of
project network activities which add up to the longest overall duration. This determines the shortest time
possible to complete the project. Any delay of an activity on the critical path directly impacts the planned
project completion date (i.e. there is no float on the critical path). A project can have several, parallel, near
critical paths. An additional parallel path through the network with the total durations shorter than the
critical path is called a sub-critical or non-critical path.

These results allow managers to prioritize activities for the effective management of project completion, and
to shorten the planned critical path of a project by pruning critical path activities, by "fast tracking" (i.e.,
performing more activities in parallel), and/or by "crashing the critical path" (i.e., shortening the durations
of critical path activities by adding resources).

CPM Diagram

Steps in CPM Project Planning

1. Specify the individual activities.


2. Determine the sequence of those activities.
3. Draw a network diagram.
4. Estimate the completion time for each activity.
5. Identify the critical path (longest path through the network).
6. Update the CPM diagram as the project progresses.

1. Specify the Individual Activities

From the work breakdown structure, a listing can be made of all the activities in the project. This listing can
be used as the basis for adding sequence and duration information in later steps.

2. Determine the Sequence of the Activities

Some activities are dependent on the completion of others. A listing of the immediate predecessors of each
activity is useful for constructing the CPM network diagram.

3. Draw the Network Diagram

Once the activities and their sequencing have been defined, the CPM diagram can be drawn. CPM originally
was developed as an activity on node (AON) network, but some project planners prefer to specify the
activities on the arcs.

4. Estimate Activity Completion Time

The time required to complete each activity can be estimated using past experience or the estimates of
knowledgeable persons. CPM is a deterministic model that does not take into account variation in the
completion time, so only one number is used for an activity's time estimate.

5. Identify the Critical Path

The critical path is the longest-duration path through the network. The significance of the critical path is that
the activities that lie on it cannot be delayed without delaying the project. Because of its impact on the entire
project, critical path analysis is an important aspect of project planning.

The critical path can be identified by determining the following four parameters for each activity:

 ES - earliest start time: the earliest time at which the activity can start given that its precedent
activities must be completed first.

 EF - earliest finish time, equal to the earliest start time for the activity plus the time required to
complete the activity.

 LF - latest finish time: the latest time at which the activity can be completed without delaying the
project.

 LS - latest start time, equal to the latest finish time minus the time required to complete the activity.

The slack time for an activity is the time between its earliest and latest start time, or between its earliest and
latest finish time. Slack is the amount of time that an activity can be delayed past its earliest start or earliest
finish without delaying the project.

The critical path is the path through the project network in which none of the activities have slack, that is,
the path for which ES=LS and EF=LF for all activities in the path. A delay in the critical path delays the
project. Similarly, to accelerate the project it is necessary to reduce the total time required for the activities
in the critical path.
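The forward and backward passes described above can be sketched in a few lines of Java; the four-activity network (A through D) and its durations are invented for illustration:

```java
import java.util.*;

public class CpmDemo {
    // Returns each activity's slack (LS - ES) given durations, predecessor
    // lists, and a topological ordering of the activities.
    static Map<String, Integer> slacks(Map<String, Integer> dur,
                                       Map<String, List<String>> pred,
                                       List<String> order) {
        // Forward pass: ES = max EF over predecessors, EF = ES + duration.
        Map<String, Integer> es = new HashMap<>(), ef = new HashMap<>();
        for (String a : order) {
            int start = pred.get(a).stream().mapToInt(ef::get).max().orElse(0);
            es.put(a, start);
            ef.put(a, start + dur.get(a));
        }
        int projectEnd = Collections.max(ef.values());

        // Backward pass: LF = min LS over successors, LS = LF - duration.
        Map<String, Integer> ls = new HashMap<>();
        for (int i = order.size() - 1; i >= 0; i--) {
            String a = order.get(i);
            int finish = projectEnd;
            for (String b : order)
                if (pred.get(b).contains(a)) finish = Math.min(finish, ls.get(b));
            ls.put(a, finish - dur.get(a));
        }

        Map<String, Integer> slack = new LinkedHashMap<>();
        for (String a : order) slack.put(a, ls.get(a) - es.get(a));
        return slack;
    }

    public static void main(String[] args) {
        Map<String, Integer> dur = Map.of("A", 3, "B", 4, "C", 2, "D", 5);
        Map<String, List<String>> pred = Map.of("A", List.of(), "B", List.of("A"),
                                                "C", List.of("A"), "D", List.of("B", "C"));
        System.out.println(slacks(dur, pred, List.of("A", "B", "C", "D")));
    }
}
```

Running it on this network reports zero slack for A, B, and D (the critical path A-B-D, total duration 12) and a slack of 2 for C.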

6. Update CPM Diagram

As the project progresses, the actual task completion times will be known and the network diagram can be
updated to include this information. A new critical path may emerge, and structural changes may be made in
the network if project requirements change.

CPM Limitations

CPM was developed for complex but fairly routine projects with minimal uncertainty in the project
completion times. For less routine projects there is more uncertainty in the completion times, and this
uncertainty limits the usefulness of the deterministic CPM model. An alternative to CPM is the PERT
project planning model, which allows a range of durations to be specified for each activity.

Characteristics of a good design

 Minimal complexity – if your design doesn’t let you safely ignore most other parts of the program
when you’re immersed in one specific part, the design isn’t doing its job. This is also known as
avoiding complexity.
 Ease of maintenance – design the system to be self-explanatory. Whether it’s yourself or a
maintenance programmer that’s going to sit down and fix a bug in a couple of months’ time, it’ll be
worth it.
 Loose coupling – Loose coupling means designing so that you hold connections among different
parts of a program to a minimum. This means encapsulation, information hiding, and
good abstractions in class interfaces. This also makes the stuff easier to test, which is a Good Thing.
 Extensibility – You can change a piece of the system without affecting other pieces.
 Reusability – Designing the system so that you can use pieces of it in other systems.
 High fan-in – This refers to having a high number of classes that use a given class. This is good; the
opposite, on the other hand …
 Low-to-medium fan-out – Refers to how many classes a given class uses. If a class has a high
fan-out (Code Complete says this is more than 7 classes; I don’t want to be that specific) this is often an
indication that the class may be overly complex. And complexity is bad, remember?
 Portability – How easy would it be to move the system to another environment?
 Leanness – I guess this could be compared to KISS. Voltaire said that a book is finished not when
nothing more can be added but when nothing more can be taken away. Extra code will have to be
developed, reviewed, tested, and considered when the other code is modified.
 Stratification – designing “in layers”. Can you view “one layer” of the code without thinking about
the underlying layer? An example given in Code Complete is if you’re writing a modern system that
has to use a lot of older, poorly designed code – you would want to write a layer of the new system
that is responsible for interfacing with the old code.
 Standard techniques – this means using design patterns whenever it is appropriate to do so. This
way, if you say to another coder “Here I use the Factory pattern” he will instantly know what you’re
talking about if he knows the pattern. You do not want to be one of those “valued employees” who
only write code that the “valued employee” can understand.

And, whatever you do: DO NOT CONSIDER SPEED! NEVER OPTIMIZE WHILE DESIGNING!
(yes, those really needed to be capitalized). If I have one more PHP coder tell me that the MVC solution
probably won’t be speedy, I will have to slashdot him physically in public. You don’t want to start off by
trading any of the characteristics above for speed.

Types of Maintenance Programs

5.1 Introduction

What is maintenance and why is it performed? Past and current maintenance practices in both the private
and government sectors would imply that maintenance is the actions associated with equipment repair after
it is broken. The dictionary defines maintenance as follows: “the work of keeping something in proper
condition; upkeep.” This would imply that maintenance should be actions taken to prevent a device or
component from failing or to repair normal equipment degradation experienced with the operation of the
device to keep it in proper working order. Unfortunately, data obtained in many studies over the past decade
indicates that most private and government facilities do not expend the necessary resources to maintain
equipment in proper working order. Rather, they wait for equipment failure to occur and then take whatever
actions are necessary to repair or replace the equipment. Nothing lasts forever and all equipment has
associated with it some predefined life expectancy or operational life. For example, equipment may be
designed to operate at full design load for 5,000 hours and may be designed to go through 15,000 start and
stop cycles.

The need for maintenance is predicated on actual or impending failure – ideally, maintenance is
performed to keep equipment and systems running efficiently for at least the design life of the component(s).
As such, the practical operation of a component is a time-based function. If one were to graph the failure rate
of a component population versus time, it is likely the graph would take the “bathtub” shape shown in Figure
5.1.1. In the figure, the Y axis represents the failure rate and the X axis is time. From its shape, the curve
can be divided into three distinct regions: infant mortality, useful life, and wear-out periods.

The initial infant mortality period of the bathtub curve is characterized by a high failure rate followed by
a period of decreasing failure rate. Many of the failures in this region are linked to poor design,
poor installation, or misapplication. The infant mortality period is followed by a nearly constant failure rate
period known as the useful life. There are many theories on why components fail in this region; most
acknowledge that poor operations and maintenance (O&M) often plays a significant role.
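The three regions can be illustrated numerically. The sketch below models the bathtub curve as the sum of three Weibull hazard functions, one per region; the shape and scale parameters are illustrative assumptions, not fitted data.

```python
# A minimal sketch of the "bathtub" failure-rate curve of Figure 5.1.1,
# modeled as the sum of three Weibull hazards. All parameters are
# illustrative assumptions chosen only to produce the characteristic shape.

def weibull_hazard(t: float, beta: float, eta: float) -> float:
    """Weibull hazard rate h(t) = (beta/eta) * (t/eta)**(beta - 1), t > 0."""
    return (beta / eta) * (t / eta) ** (beta - 1)

def bathtub_hazard(t: float) -> float:
    infant = weibull_hazard(t, beta=0.5, eta=1000.0)    # decreasing: early failures
    useful = weibull_hazard(t, beta=1.0, eta=5000.0)    # constant: random failures
    wearout = weibull_hazard(t, beta=3.0, eta=15000.0)  # increasing: wear-out
    return infant + useful + wearout

for hours in (10, 100, 1000, 5000, 10000, 15000):
    print(f"{hours:>6} h: {bathtub_hazard(hours):.6f} failures/h")
```

Printing the hazard at a few operating hours shows the expected pattern: a high early rate that falls into a flat useful-life region, then rises again as wear-out dominates.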

5.2 Reactive Maintenance

Reactive maintenance is basically the “run it till it breaks” maintenance mode. No actions or efforts are
taken to maintain the equipment as the designer originally intended to ensure design life is reached. Studies
as recent as the winter of 2000 indicate this is still the predominant mode of maintenance in the United
States. The referenced study breaks down the average maintenance program as follows:

• >55% Reactive
• 31% Preventive
• 12% Predictive
• 2% Other.
Advantages
• Low cost.
• Less staff.

Disadvantages
• Increased cost due to unplanned downtime of equipment.
• Increased labor cost, especially if overtime is needed.
• Cost involved with repair or replacement of equipment.
• Possible secondary equipment or process damage from equipment failure.
• Inefficient use of staff resources.

Note that more than 55% of the maintenance resources and activities of an average facility are still
reactive.

The advantages of reactive maintenance are a double-edged sword. If we are dealing with new
equipment, we can expect minimal incidents of failure. If our maintenance program is purely reactive, we
will not expend manpower dollars or incur capital cost until something breaks. Since we do not see any
associated maintenance cost, we could view this period as saving money. In reality, during the time we
believe we are saving maintenance and capital cost, we are spending more dollars than we would under a
different maintenance approach. We are spending more on capital cost because, while waiting for the
equipment to break, we are shortening its life, resulting in more frequent replacement. We may also incur
cost when the failure of the primary device causes the failure of a secondary device.

5.3 Preventive Maintenance

Preventive maintenance can be defined as follows: actions performed on a time- or machine-run-based
schedule that detect, preclude, or mitigate degradation of a component or system, with the aim of sustaining
or extending its useful life by controlling degradation to an acceptable level.

The U.S. Navy pioneered preventive maintenance as a means to increase the reliability of its vessels.
By simply expending the resources necessary to conduct the maintenance activities intended by the
equipment designer, equipment life is extended and its reliability is increased. In addition to the increase in
reliability, dollars are saved over a program using only reactive maintenance. Studies indicate that this
savings can amount to as much as 12% to 18% on average. Depending on a facility's current maintenance
practices, present equipment reliability, and facility downtime, there is little doubt that many facilities
purely reliant on reactive maintenance could save much more than 18% by instituting a proper preventive
maintenance program.

Advantages
• Cost effective in many capital-intensive processes.
• Flexibility allows for the adjustment of maintenance periodicity.
• Increased component life cycle.
• Energy savings.
• Reduced equipment or process failure.
• Estimated 12% to 18% cost savings over a reactive maintenance program.

Disadvantages
• Catastrophic failures still likely to occur.
• Labor intensive.
• Includes performance of unneeded maintenance.
• Potential for incidental damage to components in conducting unneeded maintenance.
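As a worked example of the savings range cited above: against a hypothetical $500,000/year reactive-maintenance baseline (an assumed figure, for illustration only), the 12% to 18% range translates into dollars as follows.

```python
# Worked arithmetic for the 12%-18% savings estimate. The baseline annual
# cost is a hypothetical figure, not from the studies cited in the text.

reactive_annual_cost = 500_000.0  # assumed reactive-program cost, dollars/year

low, high = 0.12, 0.18  # cited savings range for preventive vs. reactive
savings_low = reactive_annual_cost * low
savings_high = reactive_annual_cost * high

print(f"Estimated annual savings: ${savings_low:,.0f} to ${savings_high:,.0f}")
# Estimated annual savings: $60,000 to $90,000
```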

While preventive maintenance is not the optimum maintenance program, it does have several advantages
over a purely reactive program. By performing preventive maintenance as the equipment designer
envisioned, we extend the life of the equipment closer to its design life. This translates into dollar savings.
Preventive maintenance (lubrication, filter changes, etc.) will generally let the equipment run more
efficiently, resulting in dollar savings. While we will not prevent all catastrophic equipment failures, we
will decrease their number. Minimizing failures translates into maintenance and capital cost savings.

5.4 Predictive Maintenance

Predictive maintenance can be defined as follows: measurements that detect the onset of system
degradation (lower functional state), thereby allowing causal stressors to be eliminated or controlled prior
to any significant deterioration in the component's physical state. Results indicate current and future
functional capability.

Basically, predictive maintenance differs from preventive maintenance by basing maintenance need on
the actual condition of the machine rather than on some preset schedule. You will recall that preventive
maintenance is time-based. Activities such as changing lubricant are based on time, like calendar time or
equipment run time. For example, most people change the oil in their vehicles every 3,000 to 5,000 miles
traveled. This is effectively basing the oil change need on equipment run time. No concern is given to the
actual condition and performance capability of the oil; it is changed because it is time. This methodology is
analogous to a preventive maintenance task. If, on the other hand, the operator of the car discounted the
vehicle run time and had the oil analyzed at some periodicity to determine its actual condition and
lubrication properties, he or she might be able to extend the oil change until the vehicle had traveled
10,000 miles. This is the fundamental difference between predictive maintenance and preventive
maintenance: predictive maintenance defines needed maintenance tasks based on quantified material and
equipment condition.

Advantages
• Increased component operational life/availability.
• Allows for preemptive corrective actions.
• Decrease in equipment or process downtime.
• Decrease in costs for parts and labor.
• Better product quality.
• Improved worker and environmental safety.
• Improved worker morale.
• Energy savings.
• Estimated 8% to 12% cost savings over a preventive maintenance program.

Disadvantages
• Increased investment in diagnostic equipment.
• Increased investment in staff training.
• Savings potential not readily seen by management.
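The time-based versus condition-based distinction can be sketched as two trigger functions applied to the oil-change example; the mileage interval and condition thresholds below are illustrative assumptions.

```python
# A sketch contrasting the time-based (preventive) and condition-based
# (predictive) maintenance triggers from the oil-change example above.
# All threshold values are illustrative assumptions.

def time_based_due(miles_since_change: float, interval: float = 3000.0) -> bool:
    """Preventive: change strictly on mileage, ignoring the oil's condition."""
    return miles_since_change >= interval

def condition_based_due(viscosity_loss_pct: float, particle_ppm: float) -> bool:
    """Predictive: change only when measured condition crosses a limit."""
    return viscosity_loss_pct > 15.0 or particle_ppm > 400.0

# Oil at 4,000 miles that still tests healthy: preventive says change now,
# predictive says keep running toward the 10,000-mile figure in the text.
print(time_based_due(4000))                                             # True
print(condition_based_due(viscosity_loss_pct=6.0, particle_ppm=120.0))  # False
```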

The advantages of predictive maintenance are many. A well-orchestrated predictive maintenance
program will all but eliminate catastrophic equipment failures. We will be able to schedule maintenance
activities to minimize or eliminate overtime cost. We will be able to minimize inventory and order parts, as
required, well ahead of time to support downstream maintenance needs. We can optimize the operation of
the equipment, saving energy cost and increasing plant reliability. Past studies have estimated that a
properly functioning predictive maintenance program can provide a savings of 8% to 12% over a program
utilizing preventive maintenance alone. Depending on a facility's reliance on reactive maintenance and its
material condition, it could easily realize savings opportunities exceeding 30% to 40%. In fact, independent
surveys indicate the following industrial average savings resulting from initiation of a functional predictive
maintenance program:

• Return on investment: 10 times
• Reduction in maintenance costs: 25% to 30%
• Elimination of breakdowns: 70% to 75%

5.5 Reliability Centered Maintenance

Reliability Centered Maintenance (RCM) Magazine provides the following definition of RCM: "a process
used to determine the maintenance requirements of any physical asset in its operating context."

Basically, RCM methodology deals with some key issues not dealt with by other maintenance programs.
It recognizes that all equipment in a facility is not of equal importance to either the process or facility safety.
It recognizes that equipment design and operation differs and that different equipment will have a higher
probability to undergo failures from different degradation mechanisms than others. It also approaches the
structuring of a maintenance program recognizing that a facility does not have unlimited financial and
personnel resources and that the use of both need to be prioritized and optimized. In a nutshell, RCM is a
systematic approach to evaluate a facility’s equipment and resources to best mate the two and result in a
high degree of facility reliability and cost-effectiveness. RCM relies heavily on predictive maintenance,
but it also recognizes that maintenance activities on equipment that is inexpensive and unimportant to
facility reliability may best be left to a reactive maintenance approach.

Advantages
• Can be the most efficient maintenance program.
• Lower costs by eliminating unnecessary maintenance or overhauls.
• Minimized frequency of overhauls.
• Reduced probability of sudden equipment failures.
• Able to focus maintenance activities on critical components.
• Increased component reliability.
• Incorporates root cause analysis.

Disadvantages
• Can have significant startup cost (training, equipment, etc.).
• Savings potential not readily seen by management.

The following maintenance program breakdown of continually top-performing facilities echoes the RCM
approach of utilizing all available maintenance approaches, with the predominant methodology being
predictive:

• <10% Reactive
• 25% to 35% Preventive
• 45% to 55% Predictive.

Because RCM is so heavily weighted toward the use of predictive maintenance technologies, its program
advantages and disadvantages mirror those of predictive maintenance. In addition to these advantages,
RCM allows a facility to more closely match resources to needs while improving reliability and
decreasing cost.
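The RCM idea of matching the maintenance approach to each piece of equipment can be sketched as a simple per-asset decision rule. The criticality ratings, cost threshold, and asset names below are illustrative assumptions, not a published RCM procedure.

```python
# A minimal sketch of the RCM principle that not all equipment deserves the
# same maintenance approach: the strategy is chosen per asset from its
# criticality to the facility and its replacement cost. The rules and
# thresholds are illustrative assumptions only.

def rcm_strategy(criticality: str, replacement_cost: float) -> str:
    """criticality: 'high' | 'medium' | 'low' (assumed rating from analysis)."""
    if criticality == "high":
        return "predictive"   # monitor the condition of critical assets
    if criticality == "medium":
        return "preventive"   # scheduled servicing
    # Low criticality: run cheap items to failure; service expensive ones.
    return "reactive" if replacement_cost < 1_000 else "preventive"

assets = [
    ("boiler feed pump", "high", 45_000),
    ("office exhaust fan", "low", 400),
    ("air handler", "medium", 12_000),
]
for name, crit, cost in assets:
    print(f"{name}: {rcm_strategy(crit, cost)}")
```

Run over the sample asset list, the rule reproduces the mixed program the text describes: mostly predictive for critical equipment, reactive only for cheap, unimportant items.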

FEASIBILITY ANALYSIS

A major but optional activity within systems analysis is feasibility analysis. A wise person once said, "All
things are possible, but not all things are profitable." Simply stated, this quote addresses feasibility. Systems
analysts are often called upon to assist with feasibility analysis for proposed systems development projects.
Therefore, let's take a brief look at this topic.

Consider your answer to the following questions. Can you ride a bicycle? Can you drive a car? Can you
repair a car's transmission? Can you make lasagna? Can you snow ski? Can you earn an "A" in this course?
Can you walk on the moon? As you considered your response to each of these questions, you quickly did
some kind of feasibility analysis in your mind. Maybe your feasibility analysis and responses went
something like this: Can you ride a bicycle? "Of course I can! I just went mountain bike riding last weekend
with my best friend." Can you drive a car? "Naturally. I drove to school today and gasoline is sure
expensive." Can you repair a car's transmission? "Are you kidding? I don't even know what a transmission
is!" Can you make lasagna? "I never have, but with a recipe and directions I'm sure that I could. My mom
makes the best lasagna, yum!" Can you snow ski? "I tried it once and hated it. It was so cold and it cost a lot
of money." Can you earn an "A" in this course? "I think it would be easier to walk on the moon." Can you
walk on the moon? "People have done it. With training, I think I could, and I would like to also."

Each of us does hundreds or thousands of feasibility analyses every day. Some of these are "no brainers"
while others are more thorough. Every time we think words like "can I...?" we are assessing our feasibility
to do something.

Information systems development projects are usually subjected to one or more feasibility analyses prior
to and during their life. In an information systems development project context, feasibility is the measure of
how beneficial the development or enhancement of an information system would be to the business.
Feasibility analysis is the process by which feasibility is measured. It is an ongoing process done frequently
during systems development projects in order to achieve a creeping commitment from the user and to
continually assess the current status of the project. A creeping commitment is one that continues over time to
reinforce the user's commitment and ownership of the information system being developed. Knowing a
project's current status at strategic points in time gives us and the user the opportunity to (1) continue the
project as planned, (2) make changes to the project, or (3) cancel the project.

Feasibility Types

Information systems development projects are subjected to at least three interrelated feasibility types—
operational feasibility, technical feasibility, and economic feasibility. Operational feasibility is the measure
of how well particular information systems will work in a given environment. Just because XYZ
Corporation's payroll clerks all have PCs that can display and allow editing of payroll data doesn't
necessarily mean that ABC Corporation's payroll clerks can do the same thing. Part of the feasibility
analysis study would be to assess the current capability of ABC Corporation's payroll clerks in order to
determine the next best transition for them. Depending on the current situation, it might take one or more
interim upgrades prior to them actually getting the PCs for display and editing of payroll data. Historically,
of the three types of feasibility, operational feasibility is the one that is most often overlooked, minimized, or
assumed to be okay. For example, several years ago many supermarkets installed "talking" point-of-sale
terminals only to discover that customers did not like having people all around them hearing the names of
the products they were purchasing. Nor did the cashiers like to hear all of those talking point-of-sale
terminals because they were very distracting. Now the point-of-sale terminals are once again mute.

Technical feasibility is the measure of the practicality of a specific technical information system solution
and the availability of technical resources. Often new technologies are solutions looking for a problem to
solve. As voice recognition systems become more sophisticated, many businesses will consider this
technology as a possible solution for certain information systems applications. When CASE technology was
first introduced in the mid-1980s, many businesses decided it was impractical for them to adopt it for a
variety of reasons, among them being the limited availability of the technical expertise in the marketplace to
use it. Adoption of Smalltalk, C++, and other object-oriented programming for business applications is slow
for similar reasons.

Economic feasibility is the measure of the cost-effectiveness of an information system solution. Without
a doubt, this measure is usually the most important of the three. Information systems are often viewed as
capital investments for the business and, as such, should be subjected to the same types of investment
analyses as other capital investments. Financial analyses such as return on investment (ROI), internal rate
of return (IRR), cost/benefit analysis, payback period, and the time value of money are utilized when
considering information system development projects.
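Two of the analyses named above, payback period and a simple undiscounted ROI, can be sketched as follows. All dollar figures are hypothetical, and a real analysis would also discount for the time value of money.

```python
# A sketch of two financial analyses used in economic feasibility studies:
# payback period and simple ROI. All figures are hypothetical assumptions;
# this ignores discounting (the time value of money) for brevity.

def payback_period_years(development_cost: float, annual_net_benefit: float) -> float:
    """Years until cumulative net benefits recover the one-time cost."""
    return development_cost / annual_net_benefit

def simple_roi(total_benefits: float, total_costs: float) -> float:
    """(benefits - costs) / costs, over the system's useful life."""
    return (total_benefits - total_costs) / total_costs

dev_cost = 200_000.0         # one-time development cost (assumed)
annual_benefit = 80_000.0    # annual benefits (assumed)
annual_operating = 30_000.0  # annual operating cost (assumed)
life_years = 5

net_per_year = annual_benefit - annual_operating
print(f"Payback: {payback_period_years(dev_cost, net_per_year):.1f} years")

total_costs = dev_cost + annual_operating * life_years
roi = simple_roi(annual_benefit * life_years, total_costs)
print(f"ROI over {life_years} years: {roi:.1%}")
```

With these assumed figures the system pays for itself in 4 years, a result management can weigh against other capital investments.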

Cost/benefit analysis identifies the costs of developing the information system and operating it over a
specified period of time. It also identifies the benefits in financial terms in order to compare them with the

costs. Economically speaking, when the benefits exceed the costs, the system has economic value to the
business; just how much value is a function of management's perspective on investments.

Systems development and annual operating costs are the two primary components used to determine the cost
estimates for a proposed information system. These two components are similar to the costs associated with
constructing and operating a new building on the university campus. The building has a one-time
construction cost—usually quite high. For example, a new library addition on campus recently costs $20
million to build. Once ready for occupancy and use, the library addition will incur operating costs, such as
electricity, custodial care, maintenance, and library staff. The operating costs per year are probably a
fraction of the construction costs. However, the operating costs continue for the life of the library addition
and will more than likely exceed the construction costs at some time in the future.

Systems development costs are a one-time cost, similar to the construction cost of the library addition.
The annual operating costs are an ongoing cost once the information system is implemented. Figure 2.1
illustrates an example of these two types of costs. In this example, the annual operating costs are a fraction
of the development costs; even so, if the system is projected to have a useful life of ten years, the
cumulative operating costs will significantly exceed the development costs.
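A worked illustration of this point: annual operating costs that are only a fraction of the one-time development cost still overtake it during the system's life. The figures are hypothetical.

```python
# Worked illustration: cumulative annual operating costs eventually exceed
# the one-time development cost. Both figures are hypothetical assumptions.

development_cost = 1_000_000.0  # one-time development cost (assumed)
annual_operating = 150_000.0    # 15% of development cost per year (assumed)

year = 0
cumulative_operating = 0.0
while cumulative_operating <= development_cost:
    year += 1
    cumulative_operating += annual_operating

print(f"Cumulative operating costs exceed the development cost in year {year}")
# year 7: 7 * 150,000 = 1,050,000 > 1,000,000
```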

Two types of benefits are usually identified and quantified—tangible and intangible. Tangible benefits
are those that can objectively be quantified in terms of dollars. Figure 2.2a lists several tangible benefits.
Intangible benefits are those that cannot be objectively quantified in terms of dollars. These benefits must be
subjectively quantified in terms of dollars. A list of several intangible benefits is shown in Figure 2.2b.

Comparing the benefit dollars to the cost dollars, one can tell if the proposed information system is going
to break even, cost the business, or save the business money. Once a project is started, financial analyses
should continue to be done at periodic intervals to determine if the information system still makes economic
sense. Sometimes systems development projects are canceled before they become operational, often because
they no longer make economic sense to the business. Operational and technical feasibility should also be
continually assessed during the life of a systems development project in order to make adjustments when
necessary.

The Requirements Specification Document (RSD)

This is the document generated by the requirements engineering process; it describes all requirements
for the system under design and is intended for several purposes:

• Communication among customers, users, and designers: the specification should be quite specific
about what the system will look like externally.
• Supporting system testing, verification, and validation activities: the specification should include
sufficient information so that when the system is delivered, it is possible to make sure that it meets
requirements.
• Controlling system evolution: maintenance, extensions, and enhancements to the system should be
consistent with the requirements, or else the requirements themselves must evolve.
Contents of a RSD

What to include in a RSD:
• A complete yet concise description of the entire external interface of the system with its
environment, including other software, communication ports, hardware, and user interfaces.
• Functional requirements (also called behavioural requirements), which specify what the system
does by relating inputs to outputs.
• Non-functional requirements (also called quality or non-behavioural requirements), which define
the attributes of the system as it operates.

What not to include in a RSD:
• Project requirements, because these are development-specific and become irrelevant as soon as the
project is over.
• Designs, because the inclusion of designs is irrelevant to end users and customers and pre-empts
the design phase.
• Quality assurance plans, for example configuration management plans, verification and validation
plans, and test plans.

Content Qualities

• Correct, in the sense that every stated requirement represents a need of some stakeholder
(customer, user, analyst, or designer).
• Unambiguous, in the sense that every stated requirement has a unique interpretation.
• Complete, in the sense that it possesses the following four qualities:
  - Everything the software is supposed to do is in the RSD;
  - The response to all possible input combinations is stated explicitly;
  - Pages and figures are numbered (document completeness);
  - There are no "to-be-determined" sections in the document.
• Verifiable, in that every requirement can be checked through a finite, cost-effective process.
• Consistent, in that no requirement conflicts with existing documents or with another stated
requirement; inconsistencies among requirements may be of four kinds: (i) conflicting behaviour,
(ii) conflicting terms, (iii) conflicting attributes, (iv) temporal inconsistencies.
[Information Systems Analysis and Design (CSC340), 2002, John Mylopoulos]
Qualities of a Well-Written RSD

• Understandable by customers, which means that formal notations can only be used as backup to
help with consistency and precision, while the RSD document itself is expressed in natural
language or some notation the customer is familiar with (e.g., UML).
• Modifiable, in the sense that it can be easily changed without affecting completeness or
consistency; modifiability is enhanced by a table of contents (TOC), an index, and cross-references
where appropriate; redundancy can also be used (mention the same requirement several times, but
cross-reference all occurrences).
• Traced, in that the origin of every requirement is clear; this can be achieved by referencing earlier
documents (pre-existing documents, drafts, memos, ...).
• Traceable, in the sense that attributes of the design can be traced back to requirements and vice
versa; also, during testing you want to know which requirement is being tested by which test batch;
to enhance traceability, (i) number every requirement, and (ii) number every part of the RSD
hierarchically, all the way down to paragraphs.

Style Qualities

• Design-independent, in the sense that it does not imply a particular software architecture or
algorithm.
• Annotated, in that it provides guidance to the developers; two useful types of annotations are
(i) relative necessity, i.e., how necessary a particular requirement is from a stakeholder
perspective, and (ii) relative stability, i.e., how likely it is that a requirement will change.
• Concise: the shorter an RSD document, the better.
• Organized, in the sense that it is easy to locate any one requirement.
