
SYSTEM IMPLEMENTATION, MAINTENANCE AND REVIEW

Overview: implementation (installation and testing, training, documentation, file conversion and changeover), system maintenance, system performance, post implementation review, and what can go wrong.

IMPLEMENTATION
STAGES OF IMPLEMENTATION
Step 1 Select location / site

Step 2 Choose and order hardware


Step 3 Design and write software or purchase off the shelf


Step 4 Program testing


Step 5 Staff training


Step 6 Produce user documentation


Step 7 Produce systems documentation


Step 8 File conversion


Step 9 Testing (including user acceptance testing)


Step 10 System changeover (and further testing and training required)



TESTING - IMPORTANCE

 Process of running the database system with the intent of finding errors.
 Use carefully planned test strategies and realistic data.
 Testing cannot show the absence of faults; it can only show that faults are present.
 Demonstrates that the database and application programs appear to be working according to requirements.



TESTING - HOW

 Testers usually try to "break the system" by entering data that may cause the system to malfunction or return incorrect information.

 For example, a tester may enter a city into a search field designed to accept only states, to see how the system responds to the incorrect input.
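As an illustration of this kind of negative-input testing, here is a minimal sketch in Python; lookup_by_state and the shortened state list are hypothetical stand-ins, not part of any system described in these slides.

```python
# Hypothetical stand-in for a search routine that should accept only states.
VALID_STATES = {"Texas", "Ohio", "Florida"}  # shortened list for illustration

def lookup_by_state(name: str) -> str:
    """Return results for a valid state; reject anything else."""
    if name not in VALID_STATES:
        raise ValueError(f"'{name}' is not a recognised state")
    return f"Results for {name}"

def test_city_is_rejected() -> None:
    # The tester deliberately enters a city to try to "break the system";
    # the expected behaviour is a clear error, not a crash or a wrong answer.
    try:
        lookup_by_state("Houston")
    except ValueError:
        print("PASS: invalid input rejected with a clear error")
    else:
        print("FAIL: a city was accepted by a state-only search")

test_city_is_rejected()
```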
TYPES OF TESTING

 Testing system logic
 Use of flowcharts and structured diagrams to test the logic devised by the systems analyst

 Program testing
 Involves processing test data through all programs
 Fully documented, so tests can be reused if modifications are required
 Tests cover the following areas:
▪ Input validity checks
▪ Program logic functioning
▪ Interfaces with related modules / systems
▪ Output format and validity
TYPES OF TESTING Contd

 Types of Program Testing

 Unit testing
▪ Functional and reliability testing in an engineering environment: producing tests for the behavior of components of a product to ensure their correct behavior prior to system integration.
▪ The primary goal of unit testing is to take the smallest piece of testable software in the application, isolate it from the remainder of the code, and determine whether it behaves exactly as expected. Each unit is tested separately before the units are integrated into modules and the interfaces between modules are tested. Unit testing has proven its value in that a large percentage of defects are identified during its use.
 Unit integration testing
▪ Testing in which modules are combined and tested as a group. Modules are typically code modules, individual applications, client and server applications on a network, etc. Integration testing follows unit testing and precedes system testing.
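As a minimal sketch of unit testing (not taken from the slides), the example below uses Python's built-in unittest module on a small, hypothetical apply_discount function; the same pattern scales up to integration tests once units are combined into modules.

```python
import unittest

# Hypothetical unit under test: the smallest testable piece of the application,
# isolated from the remainder of the code.
def apply_discount(price: float, rate: float) -> float:
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

class ApplyDiscountUnitTest(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(100.0, 0.2), 80.0)

    def test_invalid_rate_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 1.5)

if __name__ == "__main__":
    unittest.main()
```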
TYPES OF TESTING Contd

 System testing
 Testing conducted on a complete, integrated system to
evaluate the system's compliance with its specified
requirements. Wider focus than program testing. System
testing falls within the scope of black box testing, and as
such, should require no knowledge of the inner design of
the code or logic.
▪ Input documentation and practicalities of input
▪ Flexibility to allow amendments
▪ Ability to produce timely information
▪ Ability to cope with peak system requirements
▪ Viability of operating procedures
 Occurs both before and after implementation
TYPES OF TESTING Contd

 User acceptance testing

 Conducted to verify that a product meets customer-specified requirements. A customer usually does this type of testing on a product that is developed externally.
 Static testing and dynamic testing
 Static testing is a form of software testing where the software isn't actually used. This is in contrast to dynamic testing. It is generally not detailed testing, but checks mainly for the sanity of the code, algorithm, or document. It is primarily syntax checking of the code and/or manually reviewing the code or document to find errors.
 Dynamic testing examines how the system behaves while it is running, with inputs that vary over time. In dynamic testing the software must actually be compiled and run; it tests the software by executing it.
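A small sketch of the static versus dynamic distinction, assuming a toy Python snippet: the static check inspects the source without running it (here just a syntax parse with the standard ast module), while the dynamic check compiles and executes the code with test input.

```python
import ast

SOURCE = "def double(x):\n    return x * 2\n"

# Static testing: examine the code without executing it
# (here limited to a syntax check; reviews and walkthroughs are also static).
try:
    ast.parse(SOURCE)
    print("static check: syntax OK")
except SyntaxError as exc:
    print(f"static check: syntax error - {exc}")

# Dynamic testing: compile and run the code, then exercise it with inputs.
namespace = {}
exec(compile(SOURCE, "<example>", "exec"), namespace)
assert namespace["double"](21) == 42
print("dynamic check: double(21) == 42")
```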
TYPES OF TESTING Contd

 Performance testing
 Performance testing can be applied to understand an application or web site's scalability, or to benchmark the performance of third-party products such as servers and middleware that are being considered for purchase. This sort of testing is particularly useful for identifying performance bottlenecks in high-use applications. Performance testing generally involves an automated test suite, as this allows easy simulation of a variety of normal, peak, and exceptional load conditions.
 Evaluates compliance of a system or component with specified performance requirements
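A minimal sketch of a performance test, assuming a hypothetical process_transaction function as the workload; it times batches of calls under a "normal" and a "peak" load to expose throughput differences, in the spirit of the automated load simulation described above.

```python
import time

def process_transaction(n: int) -> int:
    # Hypothetical stand-in for the operation whose performance is measured.
    return sum(i * i for i in range(n))

def measure(load: int, label: str) -> None:
    """Time `load` consecutive transactions and report the throughput."""
    start = time.perf_counter()
    for _ in range(load):
        process_transaction(1000)
    elapsed = time.perf_counter() - start
    print(f"{label}: {load} transactions in {elapsed:.3f}s "
          f"({load / elapsed:.0f} per second)")

measure(1_000, "normal load")
measure(10_000, "peak load")
```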
TYPES OF TESTING Contd

 Usability testing
 Technique used to evaluate a product by testing it on users. This can be seen as an irreplaceable usability practice, since it gives direct input on how real users use the system, and it establishes user satisfaction.
 The aim is to observe people using the product to discover errors and areas of
improvement. Usability testing generally involves measuring how well test subjects
respond in four areas: efficiency, accuracy, recall, and emotional response. The results of
the first test can be treated as a baseline or control measurement; all subsequent tests
can then be compared to the baseline to indicate improvement.
▪ Performance -- How much time, and how many steps, are required for people to complete basic
tasks? (For example, find something to buy, create a new account, and order the item.)
▪ Accuracy -- How many mistakes did people make? (And were they fatal or recoverable with the
right information?)
▪ Recall -- How much does the person remember afterwards or after periods of non-use?
▪ Emotional response -- How does the person feel about the tasks completed? Is the person
confident, stressed? Would the user recommend this system to a friend?
TYPES OF TESTING Contd

 Automated Testing tools


 Program testing and fault detection can be aided significantly by testing tools and
debuggers. Testing/debug tools include features such as:
 Program monitors, permitting full or partial monitoring of program code including:
▪ Instruction set simulator, permitting complete instruction level monitoring and trace facilities
▪ Program animation, permitting step-by-step execution and conditional breakpoints at source level or in machine code
▪ Code coverage reports
 Formatted dump or symbolic debugging tools, allowing inspection of program variables on error or at chosen points
 Automated functional GUI testing tools are used to repeat system-level tests
through the GUI
 Benchmarks, allowing run-time performance comparisons to be made
 Performance analysis (or profiling tools) that can help to highlight hot spots and
resource usage
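As one concrete illustration of the profiling tools mentioned above, the sketch below uses Python's standard cProfile and pstats modules to highlight hot spots in a deliberately slow, made-up function; the function and numbers are only for demonstration.

```python
import cProfile
import pstats

def slow_lookup(data, keys):
    # Deliberately inefficient membership test, to give the profiler a hot spot.
    return [k for k in keys if k in data]

def main():
    data = list(range(20_000))          # list membership checks are O(n)
    keys = list(range(0, 40_000, 2))
    slow_lookup(data, keys)

profiler = cProfile.Profile()
profiler.enable()
main()
profiler.disable()

# Report the functions that consumed the most cumulative time (the hot spots).
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```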
TESTING STRATEGY

Strategy stage and details of the test:

 STRATEGY APPROACH
▪ Technical test
▪ What
 TEST PLAN
▪ When
▪ Under which environment
 TEST DESIGN
▪ The logic and reasoning
▪ Detailed procedures
 PERFORMING TESTS
▪ Consistent testing at different time periods
▪ Documentation of the results and how it is to be done
 DOCUMENTATION
▪ Record of errors
▪ Correction of errors procedure
▪ Re-testing procedure
 RE-TESTING
▪ Re-testing of all modules and all aspects of the software
LIMITATIONS OF SOFTWARE TESTING

 Poor testing process (Bad test plan, Testers are not well trained)

 Inadequate time to test the project

 Future requirements not anticipated

 Inadequate test data (positive and negative data)

 Software changes inadequately tested


TRAINING
SYSTEMATIC TRAINING APPROACH
 Senior management
 Middle management
 Operators

 TRAINING METHODS
 Individual
 Classroom
 Computer based
 Case studies
 Software reference material
DOCUMENTATION
DOCUMENTATION

 Wide range of technical and non-technical documentation


 Technical manual
▪ Installation steps
▪ Hardware specifications
▪ Flow charts
▪ Data dictionary, etc
 User manual
▪ Systems set up
▪ Security procedures
▪ System messages
▪ Control procedures
FILE CONVERSION AND
CHANGEOVER
FILE CONVERSION
 Converting existing files into a format suitable for the new system, or creating new files that conform to the new system
 Existing data files
▪ Manual files (input data)
▪ Existing computer files (converted either through automation or coding)
▪ Existing data may be incomplete
 Controls over conversion (to ensure accuracy of the new files)
▪ One-to-one checking
▪ Sample checking
▪ Built-in validation
▪ Control totals and reconciliation
File conversion process

 Check original files
▪ Completeness of data
▪ Removal of redundant data
 Establish controls (control totals checked at each subsequent step)
 Transcribe onto input forms
▪ Input forms = data entry screens
 Key in data (control total check)
 Verify data (control total check)
▪ Validation of data and verification built into the system
 Print reports (control total check)
▪ Standing records provide a starting point
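A hedged sketch of the control-total checks used during file conversion: the record count and a monetary total from the original file are reconciled against the converted file. The record layouts and field names here are invented for illustration.

```python
# Invented example records: the original file and the converted file.
old_file = [
    {"account": "A001", "balance": 120.50},
    {"account": "A002", "balance": 75.00},
    {"account": "A003", "balance": 310.25},
]
new_file = [
    {"account_no": "A001", "opening_balance": 120.50},
    {"account_no": "A002", "opening_balance": 75.00},
    {"account_no": "A003", "opening_balance": 310.25},
]

# Control totals: record count and sum of balances must reconcile.
old_count, new_count = len(old_file), len(new_file)
old_total = round(sum(r["balance"] for r in old_file), 2)
new_total = round(sum(r["opening_balance"] for r in new_file), 2)

if (old_count, old_total) == (new_count, new_total):
    print(f"Control totals reconcile: {new_count} records, total {new_total}")
else:
    print("Control totals do NOT reconcile - investigate before changeover")
```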
Changeover
 After satisfactory testing, changeover shall take place
 Four approaches
 Direct changeover
▪ Old system completely replaced immediately
 Parallel running
▪ Old and new systems run in parallel for a defined period
 Pilot operation
▪ One department runs the two systems in parallel on a pilot basis; if the results are satisfactory, the whole organisation changes over
 Phased or staged changeover
▪ Sections of the system are changed over directly, one phase at a time
Advantages and limitations
 Direct changeover
▪ Advantages: quick, at minimal cost; minimises workload
▪ Disadvantages: risky; possible disruption of operations; failure is costly
 Parallel running
▪ Advantages: safe, with built-in safety; a way to verify the results of the change
▪ Disadvantages: costly and time-consuming; additional workload
 Pilot operation
▪ Advantages: less risky than direct changeover; less costly than complete parallel running
▪ Disadvantages: long time to achieve total change; less safe than parallel running
 Phased changeover
▪ Advantages: less risky than a single direct changeover; problems in one section do not affect others
▪ Disadvantages: long time to change; can be impractical because of the interfaces between parts of the system
SYSTEM MAINTENANCE
Types of Maintenance

 Features of maintenance
 Flexibility and adaptability
 Types of maintenance
 Corrective
▪ Reaction to system failure
 Perfective
▪ Improving the software, for example its performance or maintainability
 Adaptive
▪ Take account of changing environment
Causes of system maintenance
 Errors
 Changes in requirements
 Poor documentation

 Maintenance can sustain a system only for a period of time; eventually redevelopment becomes necessary because of new requirements and changes in the environment

SYSTEM PERFORMANCE AND
EVALUATION
PERFORMANCE MEASUREMENT

 INDIRECT MEASURES
 Significant task relevance (observe the results of system use, sooner or later)
 Willingness to pay
▪ Pay as you satisfy
 System logs
 User information satisfaction
▪ A survey of users on several criteria
 Adequacy of documentation
PERFORMANCE REVIEWS

 Performance reviews vary from organisation to


organisation. The main issues however would be :
 Growth rate
 Clerical manpower
 Identification of delays
 Efficiency of security procedures
 Check of error rates
 Use of output for good purpose
 Operational running costs
IMPROVING PERFORMANCE

 COMPUTER SYSTEMS EFFICIENCY AUDITS


 Output from a computer system
▪ More output from the same input (e.g. more transactions, more management information, availability of the system to more users)
▪ Elimination of output of little value
▪ Frequency of reports, and their distribution and bulk
▪ Better timing of outputs and timely accessibility to management
▪ Factors that restrict better output (access to information that needs a database or network; the method of data processing used, i.e. batch or real time; the type of equipment used, i.e. stand-alone PC or client/server system)
System Evaluation

 Cost benefit analysis


 Based on clear objectives and factors
 Efficiency and effectiveness
 Output greater than input
 Productivity improvement
 Accuracy of data
 Measurement of achievement of objectives
 Metrics
Metrics
 Quantified measurements used to assess performance
▪ Response time, number of transactions processed per minute, number of bugs per hundred lines of code, number of system crashes per week
▪ Three methods are used:
▪ Hardware monitors
 Measurement of electrical signals in selected circuits, indicating idle time and level of activity
▪ Software monitors
 Programs that interrupt the application in use and record data
about it
▪ System logs
 Job start and finish, Variations in job running, Down time
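A small sketch of how such metrics might be pulled from a system log; the log format, job names, and timestamps below are assumptions made purely for illustration.

```python
from datetime import datetime

# Hypothetical system log entries: (job name, start, finish, crashed?)
log = [
    ("payroll",  "2024-05-01 09:00:00", "2024-05-01 09:04:30", False),
    ("invoices", "2024-05-01 10:00:00", "2024-05-01 10:01:10", False),
    ("backup",   "2024-05-01 23:00:00", "2024-05-01 23:45:00", True),
]

fmt = "%Y-%m-%d %H:%M:%S"
durations = []
crashes = 0
for job, start, finish, crashed in log:
    run_time = datetime.strptime(finish, fmt) - datetime.strptime(start, fmt)
    durations.append(run_time.total_seconds())
    crashes += int(crashed)

print(f"average job run time : {sum(durations) / len(durations):.0f} s")
print(f"longest job run time : {max(durations):.0f} s")
print(f"crashes in period    : {crashes}")
```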
POST IMPLEMENTATION
REVIEW
Post implementation review report

 Objectives and targeted performance criteria


 Comparison between actual and predicted
performance
 Internal audit
 Post implementation review report
 Summary of findings
 System performance review
 Cost-benefit analysis
 Recommendations for further action
WHAT CAN GO WRONG
What can go wrong

 Conflicting demands
 Time, cost, quality, resources
 Appointment of project managers
 A good specialist is not necessarily a good manager
 Other factors
 Unrealistic deadlines
 Non-existent planning (failing to plan is planning to fail)
 Poor timetabling and resourcing
 Non-existent controls
 Changing requirements
Establishment of steering committee

 Definite objectives
 Approve projects
 Recommend projects
 Establishing priority projects
 Establish company guidelines
 Coordination and control
 Evaluation
 System review after implementation
