Project management is a systematic method of defining and achieving targets with optimized use of resources such as time,
money, manpower, material, energy, and space. It is an application of knowledge, skills, resources, and techniques to meet
project requirements. Project management involves various activities, which are as follows:
Work planning
Resource estimation
Organizing the work
Acquiring resources such as manpower, material, energy, and space
Risk assessment
Task assigning
Controlling the project execution
Reporting the progress
Directing the activities
Analyzing the results
The Constructive Cost Model (COCOMO) is an algorithmic software cost estimation model developed by Barry Boehm. The
model uses a basic regression formula, with parameters that are derived from historical project data and current project
characteristics.
COCOMO was first published in Barry W. Boehm's 1981 book Software Engineering Economics [1] as a model for estimating
effort, cost, and schedule for software projects. It drew on a study of 63 projects at TRW Aerospace where Barry Boehm was
Director of Software Research and Technology in 1981. The study examined projects ranging in size from 2,000 to 100,000
lines of code, and programming languages ranging from assembly to PL/I. These projects were based on the waterfall model of
software development which was the prevalent software development process in 1981.
References to this model typically call it COCOMO 81. In 1997 COCOMO II was developed and finally published in 2000 in
the book Software Cost Estimation with COCOMO II[2]. COCOMO II is the successor of COCOMO 81 and is better suited for
estimating modern software development projects. It provides more support for modern software development processes and an
updated project database. The need for the new model came as software development technology moved from mainframe and
overnight batch processing to desktop development, code reusability and the use of off-the-shelf software components. This
article refers to COCOMO 81.
COCOMO consists of a hierarchy of three increasingly detailed and accurate forms. The first level, Basic COCOMO, is good
for quick, early, rough order-of-magnitude estimates of software costs, but its accuracy is limited because it lacks factors that
account for differences in project attributes (Cost Drivers). Intermediate COCOMO takes these Cost Drivers into account and
Detailed COCOMO additionally accounts for the influence of individual project phases.
Basic COCOMO computes software
development effort (and cost) as a function of program size. Program size is expressed in estimated thousands of lines of code
(KLOC).
The coefficients a_b, b_b, c_b and d_b are given in the following table.

Software project     a_b    b_b    c_b    d_b
Organic              2.4    1.05   2.5    0.38
Semi-detached        3.0    1.12   2.5    0.35
Embedded             3.6    1.20   2.5    0.32
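These coefficients plug into the standard Basic COCOMO equations: effort E = a_b * (KLOC)^b_b in person-months, development time D = c_b * E^d_b in months, and average staffing P = E / D. The following is a minimal sketch of the calculation in Python; the 32 KLOC semi-detached example project is invented purely for illustration:

    # Basic COCOMO sketch: effort, schedule and staffing from program size.
    # Coefficients are taken from the table above.
    COEFFICIENTS = {
        # mode: (a_b, b_b, c_b, d_b)
        "organic":       (2.4, 1.05, 2.5, 0.38),
        "semi-detached": (3.0, 1.12, 2.5, 0.35),
        "embedded":      (3.6, 1.20, 2.5, 0.32),
    }

    def basic_cocomo(kloc, mode):
        a, b, c, d = COEFFICIENTS[mode]
        effort = a * kloc ** b        # person-months
        duration = c * effort ** d    # months
        people = effort / duration    # average team size
        return effort, duration, people

    effort, duration, people = basic_cocomo(32, "semi-detached")
    print(f"effort {effort:.1f} PM, duration {duration:.1f} months, staff {people:.1f}")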
Intermediate COCOMO rates a set of cost drivers, grouped into four categories of attributes, that adjust the basic size-based effort estimate (a sketch of how they combine follows this list):
* Product attributes
Required software reliability
Size of application database
Complexity of the product
* Hardware attributes
Run-time performance constraints
Memory constraints
Volatility of the virtual machine environment
Required turnaround time
* Personnel attributes
Analyst capability
Software engineering capability
Applications experience
Virtual machine experience
Programming language experience
* Project attributes
Use of software tools
Application of software engineering methods
Required development schedule
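In Intermediate COCOMO each of these attributes is rated on a scale from very low to extra high, each rating maps to an effort multiplier, and the product of the multipliers (the effort adjustment factor, EAF) scales the nominal size-based estimate. The sketch below is hedged: the driver names shortened here and the multiplier values are illustrative placeholders, not the published COCOMO 81 tables.

    # Intermediate COCOMO sketch: nominal effort scaled by the effort adjustment
    # factor (EAF), the product of the cost-driver multipliers. The driver values
    # below are illustrative placeholders, not the official COCOMO 81 numbers.
    from math import prod

    def intermediate_cocomo(kloc, a, b, multipliers):
        eaf = prod(multipliers.values())   # effort adjustment factor
        return a * kloc ** b * eaf         # adjusted effort in person-months

    drivers = {
        "required_reliability": 1.15,  # above-nominal reliability raises effort
        "product_complexity":   1.30,
        "analyst_capability":   0.86,  # strong analysts lower effort
        "language_experience":  0.95,
    }
    # Coefficients reuse the semi-detached values from the basic table, for illustration only.
    print(f"adjusted effort: {intermediate_cocomo(32, 3.0, 1.12, drivers):.1f} person-months")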
Project Scheduling
Project scheduling is concerned with the techniques that can be employed to manage the activities that need to be undertaken
during the development of a project.
Scheduling is carried out in advance of the project commencing and involves:
• identifying the tasks that need to be carried out;
• estimating how long they will take;
• allocating resources (mainly personnel);
• scheduling when the tasks will occur.
Once the project is underway control needs to be exerted to ensure that the plan continues to represent the best prediction of
what will occur in the future:
• based on what occurs during the development;
• often necessitates revision of the plan.
Effective project planning will help to ensure that systems are delivered on time, within budget, and to the agreed specification.
Activity Networks
The foundation of the approach came from the Special Projects Office of the US Navy in 1958. It developed a technique for
evaluating the performance of large development projects, which became known as PERT - Program Evaluation and Review
Technique. Other variations of the same approach are known as the critical path method (CPM) or critical path analysis (CPA).
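As an illustration of critical path analysis, the sketch below computes earliest finish times and recovers one critical path for a small activity network; the task names, durations and dependencies are invented for the example:

    # Critical path (CPM) sketch over a small, invented activity network.
    # Each task: duration in days and the tasks that must finish before it starts.
    tasks = {
        "spec":    (5,  []),
        "design":  (10, ["spec"]),
        "code":    (15, ["design"]),
        "test":    (8,  ["code"]),
        "docs":    (6,  ["design"]),
        "release": (2,  ["test", "docs"]),
    }

    earliest_finish = {}

    def finish(name):
        """Earliest finish of a task = its duration + latest finish among its predecessors."""
        if name not in earliest_finish:
            duration, preds = tasks[name]
            earliest_finish[name] = duration + max((finish(p) for p in preds), default=0)
        return earliest_finish[name]

    project_length = max(finish(t) for t in tasks)

    # Walk back from the latest-finishing task to recover one critical path.
    path, current = [], max(tasks, key=finish)
    while current:
        path.append(current)
        preds = tasks[current][1]
        current = max(preds, key=finish) if preds else None
    print("project length:", project_length, "days; critical path:", " -> ".join(reversed(path)))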
White-box testing (a.k.a. clear box testing, glass box testing, transparent box testing, or structural testing) is a method of testing
software that tests internal structures or workings of an application, as opposed to its functionality (i.e. black-box testing). In
white-box testing an internal perspective of the system, as well as programming skills, are required and used to design test
cases. The tester chooses inputs to exercise paths through the code and determine the appropriate outputs. This is analogous to
testing nodes in a circuit, e.g. in-circuit testing (ICT).
While white-box testing can be applied at the unit, integration and system levels of the software testing process, it is usually
done at the unit level. It can test paths within a unit, paths between units during integration, and between subsystems during a
system level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented
parts of the specification or missing requirements.
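For example, a white-box test suite is designed from the code's own branch structure so that every path is exercised; the function and the chosen inputs below are invented for illustration:

    # White-box testing sketch: the test cases are derived from the branch structure
    # of the hypothetical classify() function, so every path is exercised.
    def classify(score):
        if score < 0 or score > 100:
            return "invalid"
        if score >= 50:
            return "pass"
        return "fail"

    def test_classify_all_paths():
        assert classify(-1) == "invalid"    # first branch, lower-bound condition
        assert classify(101) == "invalid"   # first branch, upper-bound condition
        assert classify(75) == "pass"       # second branch
        assert classify(30) == "fail"       # fall-through path

    test_classify_all_paths()
    print("all paths exercised")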
Black-box testing is a method of software testing that tests the functionality of an application as opposed to its internal
structures or workings (see white-box testing). Specific knowledge of the application's code/internal structure and programming
knowledge in general is not required. Test cases are built around specifications and requirements, i.e., what the application is
supposed to do. It uses external descriptions of the software, including specifications, requirements, and design to derive test
cases. These tests can be functional or non-functional, though usually functional. The test designer selects valid and invalid
inputs and determines the correct output. There is no knowledge of the test object's internal structure.
This method of test can be applied to all levels of software testing: unit, integration, functional, system and acceptance. It
typically comprises most if not all testing at higher levels, but can also dominate unit testing.
Typical black-box test design techniques include:
• equivalence partitioning;
• boundary value analysis;
• decision table testing;
• state transition testing;
• use case testing.
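The sketch below illustrates the black-box approach: test cases are derived purely from a stated specification ("scores from 0 to 100 are valid; a score of 50 or more is a pass") using equivalence classes and boundary values, with no reference to the implementation. The specification and the chosen values are invented for the example.

    # Black-box testing sketch: cases come from the specification only,
    # not from the code. Equivalence classes and boundary values are used.
    def test_scoring_black_box(score_fn):
        cases = {
            -1:  "invalid",   # just below the valid range (boundary)
            0:   "fail",      # lower boundary of the valid range
            49:  "fail",      # just below the pass boundary
            50:  "pass",      # pass boundary
            100: "pass",      # upper boundary of the valid range
            101: "invalid",   # just above the valid range
        }
        for value, expected in cases.items():
            assert score_fn(value) == expected, (value, expected)

    # Any implementation claiming this specification can be plugged in,
    # e.g. test_scoring_black_box(classify) with the classify() sketch shown earlier.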
Debugging is a methodical process of finding and reducing the number of bugs, or defects, in a computer program or a piece of
electronic hardware, thus making it behave as expected. Debugging tends to be harder when various subsystems are tightly
coupled, as changes in one may cause bugs to emerge in another. Many books have been written about debugging (see below:
Further reading), as it involves numerous aspects, including: interactive debugging, control flow, integration testing, log files,
monitoring (application, system), memory dumps, profiling, Statistical Process Control, and special design tactics to improve
detection while simplifying changes.
Step 1. Identify the error.
This is an obvious step, but a tricky one: poor identification of an error can cause a great deal of wasted development time. Production errors reported by users are often hard to interpret, and the information we get from them can be misleading.
A few tips to make sure you identify the bug correctly:
Ishikawa diagrams (also called fishbone diagrams, cause-and-effect diagrams or Fishikawa) are causal diagrams, created by
Kaoru Ishikawa (1990), that show the causes of a specific event.[1] Common uses of the Ishikawa diagram are product design
and quality defect prevention, to identify potential factors causing an overall effect. Each cause or reason for imperfection is a
source of variation. Causes are usually grouped into major categories to identify these sources of variation. The categories
typically include people, methods, machines, materials, measurements, and the environment.
It was first used in the 1940s, and is considered one of the seven basic tools of quality control.[3] It is known as a fishbone
diagram because of its shape, similar to the side view of a fish skeleton.
Mazda Motors famously used an Ishikawa diagram in the development of the Miata sports car, where the required result was
"Jinba Ittai" or "Horse and Rider as One". The main causes included such aspects as "touch" and "braking" with the lesser
causes including highly granular factors such as "50/50 weight distribution" and "able to rest elbow on top of driver's door".
Every factor identified in the diagram was included in the final design.
Causes
Causes in the diagram are often categorized, such as into the 8 M's used in manufacturing (machine, method, material, man
power, measurement, milieu, management, and maintenance). Cause-and-effect diagrams can reveal key relationships among
various variables, and the possible causes provide additional insight into process behavior.
Here are examples and explanations of four commonly used tools in project planning and project management, namely:
Brainstorming, Fishbone Diagrams, Critical Path Analysis Flow Diagrams, and Gantt Charts. Additionally and separately see
business process modelling and quality management, which contain related tools and methods aside from the main project
management models shown below.
The tools here each have their strengths and particular purposes.
A wide range of computerised systems/software now exists for project management and planning, and new methods continue
to be developed. It is an area of high innovation, with lots of scope for improvement and development. I welcome suggestions
of particularly good systems, especially if inexpensive or free. Many organizations develop or specify particular computerised
tools, so it's a good idea to seek local relevant advice and examples of best practice before deciding the best computerised
project management system(s) for your own situation.
Project planning tools naturally become used also for subsequent project reporting, presentations, etc., and you will make life
easier for everyone if you use formats that people recognize and find familiar.
There are many risks involved in creating high quality software on time and within budget. However, in order for it to
be worthwhile to take these risks, they must be compensated for by a perceived reward. The greater the risk, the
greater the reward must be to make it worthwhile to take the chance. In software development, the possibility of
reward is high, but so is the potential for disaster. The need for software risk management is illustrated in Gilb’s risk
principle. “If you don’t actively attack the risks, they will actively attack you" [Gilb-88]. In order to successfully
manage a software project and reap our rewards, we must learn to identify, analyze, and control these risks. This
paper focuses on the basic concepts, processes, and techniques of software risk management.
There are basic risks that are generic to almost all software projects. Although there is a basic component of risk
management inherent in good project management, risk management differs from project management in the following way:
within risk management the “emphasis is shifted from crisis management to anticipatory management” [Down-94].
Boehm defines four major reasons for implementing software risk management [Boehm-89]:
1. Avoiding software project disasters, including runaway budgets and schedules, defect-ridden software
products, and operational failures.
2. Avoiding rework caused by erroneous, missing, or ambiguous requirements, design or code, which typically
consumes 40-50% of the total cost of software development.
3. Avoiding overkill with detection and prevention techniques in areas of minimal or no risk.
4. Stimulating a win-win software solution where the customer receives the product they need and the vendor
makes the profits they expect.
Decentralized
In a decentralized-control team organization, decisions are made by consensus, and all work is considered group work. Team
members review each other’s work and are responsible as a group for what every member produces. Figure 8.1 shows the
patterns of control and communication among team members in a decentralized-control organization. The ringlike
management structure is intended to show the lack of a hierarchy and that all team members are at the same level.
Such a "democratic" organization leads to higher morale and job satisfaction and, therefore, to less turnover. The engineers feel
more ownership of the project and responsibility for the problem, resulting in higher quality in their work.
A decentralized-control organization is more suited for long-term projects, because the amount of intragroup communication
that it encourages leads to a longer development time, presumably accompanied by lower life cycle costs.
The proponents of this kind of team organization claim that it is more appropriate for less understood and more complicated
problems, since a group can invent better solutions than a single individual can. Such an organization is based on a technique
referred to as "egoless programming," because it encourages programmers to share and review one another’s work.
On the negative side, decentralized-control team organization is not appropriate for large teams, where the communication
overhead can overwhelm all the engineers, reducing their individual productivity.
One way to centralize the control of a software development team is through a chief-programmer team. In this kind of
organization, one engineer, known as the chief programmer, is responsible for the design and all the technical details of the
project.
The chief programmer reports to a peer project manager who is responsible for the administrative aspects of the project. Other
members of the team are a software librarian and programmers who report to the chief programmer and are added to the team
on a temporary basis when needed. Specialists may be used as consultants to the team. The need for programmers and
consultants, as well as what tasks they perform is determined by the chief programmer, who initiates and controls all decisions.
The software library maintained by the librarian is the central repository for all the documentation and decisions made by the
team. Figure 8.0 is a graphical representation of the patterns of control and communication supported by this kind of
organization…
A mixed-control team organization attempts to combine the benefits of centralized and decentralized control, while minimizing
or avoiding their disadvantages.
Rather than treating all members the same, as in a decentralized organization, or treating a single individual as the chief, as in a
centralized organization, the mixed organization differentiates the engineers into senior and junior engineers. Each senior
engineer leads a group of junior engineers and reports, in turn, to a project manager. Control is vested in the project manager
and senior programmers, while communication is decentralized among each set of individuals, peers, and their immediate
supervisors. The patterns of control and communication in mixed-control organizations are shown in Figure 8.2.
A mixed-mode organization tries to limit communication to within a group that is most likely to benefit from it. It also tries to
realize the benefits of group decision making by vesting authority in a group of senior programmers or architects. The mixed-
control organization is an example of the use of a hierarchy to master the complexity of software development as well as
organizational structure.
Business Process Re-Engineering services offered by Encore update the architecture of the technical applications and initiate
the new business process based upon the requirements of the new business model. Such modifications in both technical and
functional processes are a requisite to remaining competitive and profitable.
We are able to provide you with the required project management, business analysis and technical expertise to perform efficient
process re-engineering. We understand the reason for business process re-engineering and will work closely with your staff to
improve the intricate business rules of large enterprise systems in a way that is consistent with industry standards.
Encore has a successful history of performing a Capability Maturity Model (CMM) Mini Assessment Process (MAP) to
improve a process in accordance with the CMM phases as defined by the Software Engineering Institute (SEI).
The CMM phases are:
Initial
Repeatable
Defined
Managed
Optimized
Using certified CMM analysts and following our PMTech and Technology Process Framework methodologies, we are able to
provide a proven performance by delivering reliable, consistent and high-quality results.
To deliver a successful CMM MAP, we execute the following high-level phases and associated deliverable processes:
Define the enterprise appraisal objectives and critical success factors of the mini assessment
Conduct an opening briefing to summarize the process maturity concepts and the MAP methodology
The term black box is a metaphor for a specific kind of abstraction. Black-box abstraction means that none of the internal
workings are visible, and that one can only observe output as reaction to some specific input (Fig. 1). Black-box testing, for
instance, works this way. Test cases are selected without knowledge about the implementation. They are run, and the delivered
results are checked for correctness.
The external operation, activated by the component, needs to be specified, too. This specification is not needed to see how the
operation is to be applied, but to see how it needs to be implemented. It is a specification of duties, sometimes referred to as
required interfaces [AOB97]. Often, such an external operation will depend on the state of the component calling it. As an
example of this, in [Szy97] the Observer Pattern from [JV95] is analyzed. To perform its task, the observer needs to request
state information from the observed object. If the observed object is a text and the observer a text view component, the observer
needs to know whether it sees the old or the new version of the text. In other words, one needs to know whether the observer is
called before or after the data is changed. Specifying the text manipulation operations, such as delete, as black boxes does not
reveal when the observer is called and what the state of the caller is at that time (Fig. 3). In this simple example, the
intermediate state of the black-box at the time of the call could be described verbally, but in more involved cases this approach
usually fails.
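A small sketch makes the point concrete (the class and method names are invented): whether the observing view sees the old or the new text depends entirely on where the subject calls its notification inside delete(), which a black-box specification of delete() does not reveal.

    # Observer sketch: the observer queries the subject's state during notification.
    # Whether it sees the old or the new text depends on when notify() is called
    # inside delete(); a black-box view of delete() does not expose that ordering.
    class Text:
        def __init__(self, content):
            self.content = content
            self.observers = []

        def attach(self, observer):
            self.observers.append(observer)

        def notify(self):
            for obs in self.observers:
                obs.update(self)

        def delete(self, start, end):
            self.notify()                 # observers see the OLD text here...
            self.content = self.content[:start] + self.content[end:]
            self.notify()                 # ...and the NEW text here

    class TextView:
        def update(self, text):
            print("view sees:", repr(text.content))

    doc = Text("hello world")
    doc.attach(TextView())
    doc.delete(5, 11)   # prints 'hello world' and then 'hello'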
The black box, state box and clear box views are distinct usage perspectives which are effective in defining the behaviour of
individual components but they provide little in the way of compositionality. Combining specifications cannot make statements
about the behaviour as a whole [1]. Cleanroom provides no means of describing concurrent (dynamic) behaviours and
analysing pathological problems, such as deadlocks, livelocks, race conditions and so on.
Box Structured Development Method. The box structured development method outlines a hierarchy of concerns by a box
structure, which allows software specifications to be divided, conquered and connected. The box structured development method
originates from Cleanroom. Cleanroom defines three views of the software system, referred to as the black box, state box and
clear box views. An initial black box is refined into a state box and then into a clear box. A clear box
can be further refined into one or more black boxes or it closes a hierarchical branch as a leaf box providing a control
structure. This hierarchy of views allows for a stepwise refinement and verification as each view is derived from the previous.
The clear box is verified for equivalence against its state box, and the state box against its black box. The box structure should
specify all requirements for the component, so that no further specification is logically required to complete the component. We
have slightly altered the box structured development method to make it more beneficial.
Sequence-Based Specification Method. The sequence-based specification method also originates from Cleanroom. The
sequence-based specification method describes the causality between stimuli and responses using a sequence enumeration
table. Sequence enumerations describe the responses of a process after accepting a history of stimuli. Every mapping of an
input sequence to a response is justified by explicit reference to the informal specifications. The sequence-based specification
method is applied to the black box and state box specifications. Each sequence enumeration can be tagged with a requirement
reference. The tagged requirement maps a stimuli-response causality of the system to the customer or derived requirements.
Figure 3.1 shows the 3-tier hierarchy of box structures, namely the black box, state box, and clear
box forms. Referential transparency is ensured, which provides traceability to requirements.
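A minimal sketch of a sequence enumeration follows; the stimuli, responses and requirement tags are invented simply to show the shape of the mapping from stimulus histories to responses.

    # Sequence-based specification sketch: each enumerated stimulus history maps to a
    # response and the requirement reference that justifies it. The stimuli, responses
    # and requirement tags below are invented for illustration.
    OMEGA = "illegal"   # response used for stimulus histories that cannot occur

    enumeration = {
        ():                           ("no response", "REQ-1"),
        ("insert_card",):             ("prompt for PIN", "REQ-2"),
        ("enter_pin",):               (OMEGA, "derived: no card inserted"),
        ("insert_card", "enter_pin"): ("validate PIN", "REQ-3"),
    }

    def response(history):
        """Look up the specified response for a history of stimuli."""
        return enumeration.get(tuple(history), ("unspecified", "needs enumeration"))

    print(response(["insert_card", "enter_pin"]))   # ('validate PIN', 'REQ-3')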
The major property of mathematics is that it supports abstraction and is an excellent medium for modeling. As it is an exact
medium there is little possibility of ambiguity: Specifications can be mathematically validated for contradictions and
incompleteness, and vagueness disappears completely.
In addition, mathematics can be used to represent levels of abstraction in a system specification in an organized way.
Mathematics is an ideal tool for modeling. It enables the bare bones of a specification to be exhibited and helps the analyst and
system specifier to validate a specification for functionality without intrusion of such issues as response time, design directives,
implementation directives, and project constraints. It also helps the designer, because the system design specification exhibits
the properties of a model, providing only sufficient details to enable the task in hand to be carried out. Finally, mathematics
provides a high level of validation when it is used as a software development medium. It is possible to use a mathematical
proof to demonstrate that a design matches a specification and that some program code is a correct reflection of a design. This
is preferable to current practice, where often little effort is put into early validation and where much of the checking of a
software system occurs during system and acceptance testing.
Mathematical Preliminaries
To apply formal methods effectively, a software engineer must have a working knowledge of the mathematical notation
associated with sets and sequences and the logical notation used in predicate calculus. The intent of the section is to provide a
brief introduction. For a more detailed discussion, the reader is urged to examine books dedicated to these subjects.
A set is a collection of objects or elements and is used as a cornerstone of formal methods. The elements contained within a set
are unique (i.e., no duplicates are allowed). Sets with a small number of elements are written within curly brackets (braces)
with the elements separated by commas. For example, the set {C++, Pascal, Ada, COBOL, Java} contains the names of five
programming languages. The order in which the elements appear within a set is immaterial. The number of items in a set is
known as its cardinality. The # operator returns a set's cardinality. For example, the expression #{A, B, C, D} = 4 implies that
the cardinality operator has been applied to the set shown, with a result indicating the number of items it contains. Sets can
also be defined constructively, by stating a signature, a predicate, and a term. For example, the specification {n : ℕ | n < 3 • n}
describes the set of natural numbers n that are less than 3. Therefore, this specification defines the set {0, 1, 2}. When the form
of the elements of a set is obvious, the term can be omitted. For example, the preceding set could be specified as {n : ℕ | n < 3}.
All the sets that have been described here have elements that are single items. Sets can also be made from elements that are
pairs, triples, and so on. For example, the set specification {x, y : ℕ | x + y = 10 • (x, y²)} describes the set of pairs of natural
numbers that have the form (x, y²) and where the sum of x and y is 10. This is the set {(1, 81), (2, 64), (3, 49), . . .}. Obviously,
a constructive set specification required to represent some component of computer software can be considerably more complex
than those noted here. However, the basic form and structure remain the same.
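Such constructive specifications translate almost directly into executable set comprehensions; the short sketch below uses Python sets (over a finite range standing in for the natural numbers) to evaluate the two examples above.

    # Constructive set specifications rendered as set comprehensions.
    # {n : N | n < 3 . n}  -> the set {0, 1, 2}
    small = {n for n in range(100) if n < 3}

    # {x, y : N | x + y = 10 . (x, y*y)} -> pairs (x, y^2) where x + y = 10
    pairs = {(x, y * y) for x in range(1, 10) for y in range(1, 10) if x + y == 10}

    print(sorted(small))    # [0, 1, 2]
    print(sorted(pairs))    # [(1, 81), (2, 64), (3, 49), ...]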