
Quality Assurance

We take every care to ensure that the software we build satisfies our client's requirements. The only way to achieve this is to perform quality assurance throughout the software lifecycle, starting from requirements elicitation, analysis, and understanding of the business objectives. This understanding enables the team to develop a comprehensive quality assurance plan for the project that includes the following elements:

Test Plan and Test Cases - We develop a complete test plan based on the requirements, covering unit, integration, and system testing. The plan includes all the test cases, which typically cover functionality, error handling, performance, scalability, and failover, among other required tests. The test plan and test cases are validated with the client during the early stages of a project, and are refined and enhanced during detailed design and coding.
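
As a minimal sketch of what such test cases can look like at the unit level, the example below uses a hypothetical calculate_invoice_total function (invented here for illustration, not taken from a client project) to show a functional case, an error-handling case, and a boundary case:

```python
# test_invoice.py - illustrative unit-level test cases only; the function
# calculate_invoice_total and its contract are hypothetical examples.
import pytest

def calculate_invoice_total(items, tax_rate):
    """Hypothetical unit under test: sums line items and applies tax."""
    if tax_rate < 0:
        raise ValueError("tax_rate must be non-negative")
    subtotal = sum(quantity * price for quantity, price in items)
    return round(subtotal * (1 + tax_rate), 2)

def test_functionality_total_with_tax():
    # Functional case: two line items with a 10% tax rate.
    assert calculate_invoice_total([(2, 10.0), (1, 5.0)], 0.10) == 27.50

def test_error_handling_negative_tax_rate():
    # Error-handling case: invalid input must be rejected explicitly.
    with pytest.raises(ValueError):
        calculate_invoice_total([(1, 10.0)], -0.05)

def test_functionality_empty_invoice():
    # Boundary case: an empty invoice totals zero.
    assert calculate_invoice_total([], 0.10) == 0.0
```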

Traceability Matrix - During the software lifecycle we trace the requirements to the design components, to the code modules, and to the test cases. This enables us to track changes to the requirements and to validate that the test plan covers all requirements and all design elements.
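
A traceability matrix can live in a dedicated tool or, for a small project, even in a simple data structure. The sketch below (the requirement IDs, module names, and test case IDs are placeholders, not real project artifacts) shows the core idea of checking that every requirement traces to at least one test case:

```python
# traceability.py - illustrative only; requirement, design, module and test
# identifiers below are hypothetical placeholders.
TRACEABILITY = {
    "REQ-001": {"design": ["LoginComponent"], "code": ["auth/login.py"],
                "tests": ["TC-101", "TC-102"]},
    "REQ-002": {"design": ["ReportEngine"], "code": ["reports/engine.py"],
                "tests": []},  # not yet covered - should be flagged
}

def uncovered_requirements(matrix):
    """Return requirement IDs that no test case traces back to."""
    return [req for req, links in matrix.items() if not links["tests"]]

if __name__ == "__main__":
    missing = uncovered_requirements(TRACEABILITY)
    if missing:
        print("Requirements without test coverage:", ", ".join(missing))
```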

Peer Reviews of Designs and Code - Depending on the size and duration of a project, we conduct design and code reviews with architects and engineers from outside the project. These reviews provide an opportunity for valuable feedback and an independent perspective.

Acceptance Plan - At the beginning of each iteration of a project, we develop a detailed acceptance plan that describes all the expected deliverables during and at the end of the iteration. The client signs off on the acceptance plan to indicate agreement on the expected results. The Minveli Infotech quality assurance team is responsible for implementing, tracking, and adjusting the quality assurance plan so that it is completed successfully by the delivery date, for conducting the required testing, and for establishing a complete regression test suite for the product that is as automated as possible.
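
Because the regression test suite should be as automated as possible, one simple pattern is to tag regression tests and gate the acceptance sign-off on a clean run. The sketch below assumes a pytest-based suite with a "regression" marker and a tests/ directory; both are illustrative assumptions rather than a description of our tooling:

```python
# regression_gate.py - minimal sketch: run all tests tagged "regression"
# and treat any failure as a blocker for acceptance sign-off.
# The "regression" marker and tests/ layout are assumptions for illustration.
import sys
import pytest

def main() -> int:
    # pytest.main returns 0 only when every selected test passes.
    exit_code = pytest.main(["-m", "regression", "tests/"])
    if exit_code != 0:
        print("Regression suite failed; acceptance sign-off is blocked.")
    return exit_code

if __name__ == "__main__":
    sys.exit(main())
```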

Testing Process

Pic: Minveli Infotech - Software Quality Assurance Process

Requirements Analysis: gathering and analyzing customer requirements.

Software Test Plan (STP): definition of scope and goals; selection of appropriate testing methodologies; preparation of the software testing strategy; assignment of roles and responsibilities; definition of resource requirements and of start and completion criteria.

Test Environment: setting up the test infrastructure, identification of the testing environment and test tools, installation and configuration of the product.

Test Metrics: description of the areas to be measured, development and collection of metrics.

Test Design and Implementation: development of test scenarios, test cases, test checklists, test procedures, and test scripts, development of test applications, etc.

Test Execution: performance of testing, both static and dynamic, using manual and automated test cases as required by the STP and STS.

Defect Management (Bug Tracking): recording testing results, defect description (Problem Reports, Change Requests), defect review and testing results analysis, error correction, and defect resolution verification. A minimal sketch of such a defect record follows this list.

Reporting: status reports, weekly reports, milestone reports, closure report.
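
The following sketch illustrates, in simplified form, the kind of defect record and basic metric that the Defect Management and Test Metrics steps produce. The statuses, fields, and the closure-rate metric are assumptions chosen for illustration, not a description of any particular bug-tracking tool:

```python
# defect_tracking.py - simplified illustration of a defect record and one
# basic test metric; statuses, fields and sample data are hypothetical.
from dataclasses import dataclass, field

STATUSES = ("open", "fixed", "verified", "closed")

@dataclass
class Defect:
    defect_id: str
    summary: str
    severity: str           # e.g. "critical", "major", "minor"
    status: str = "open"
    history: list = field(default_factory=list)

    def transition(self, new_status: str) -> None:
        """Record each status change so results analysis can review it later."""
        if new_status not in STATUSES:
            raise ValueError(f"unknown status: {new_status}")
        self.history.append((self.status, new_status))
        self.status = new_status

def defect_closure_rate(defects) -> float:
    """Example metric: share of reported defects verified or closed."""
    if not defects:
        return 1.0
    done = sum(1 for d in defects if d.status in ("verified", "closed"))
    return done / len(defects)
```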

Quality Assurance Presentation - Presentation Transcript

1. QUALITY ASSURANCE FRAMEWORK

2. AGENDA
o What is Quality?
o What is Software Quality Assurance?
o Components of Quality Assurance
o Software Quality Assurance Plan
o Quality Standards

3. What is Quality?
o According to the computer literature, quality means meeting requirements.
o The product has something that other similar products do not, which adds value (product-based definition).

4. Software Quality Assurance
o Systematic activities providing evidence of the fitness for use of the total software product.
o It is achieved through the use of established guidelines for quality control to ensure the integrity and prolonged life of software.
o It is a planned effort to ensure that a software product fulfils criteria and has additional attributes specific to the product.

5. Software Quality Assurance (continued)
o It is the collection of activities and functions used to monitor and control a software project so that specific objectives are achieved with the desired level of confidence.
o It is not the sole responsibility of the software quality assurance group but is determined by the consensus of the project manager, project leader, project personnel, and the users.

6. Components of Quality Assurance

7. Software Testing
o Software testing is a popular risk management strategy. It is used to verify that functional requirements were met.
o The major purpose of verification and validation activities is to ensure that software design, code, and documentation meet all the requirements imposed on them.

8. Quality Control
o Quality control is defined as the processes and methods used to monitor work and observe whether requirements are met. It focuses on reviews and the removal of defects before shipment of products.
o For small projects, the project personnel's peer group or the department's software quality coordinator can inspect the documents. On large projects, a configuration control board may be responsible for quality control.

9. Software Configuration Management
o It is concerned with labeling, tracking, and controlling changes in the software elements of a system.
o It consists of activities that ensure that design and code are defined and cannot be changed without a review of the effect of the change itself and its documentation.

10. Elements of Software Configuration Management

11. Component Identification
o A basic software configuration management activity is to identify the software components that make up the deliverable at each point of development.
o In order to manage the development process, one must establish methods and naming standards for the components.
12. Version Control
o Software is frequently changed as it evolves through a succession of temporary states called versions.
o A software configuration management facility for controlling versions is a software configuration management repository or library.

13. Configuration Building
o To build a software configuration, one needs to identify the correct component versions and execute the component build procedures. This is often called configuration building.
o Software configuration management uses different approaches for selecting versions. The simplest method is to maintain all the component versions.

14. Change Control
o Software change control is the process by which a modification to a software component is proposed.
o Modification of a configuration has four elements: a change request, an impact analysis of the change, a set of modifications and additions of new components, and a method for reliably installing the new components.
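
To make those four elements concrete, a change request could be captured as a simple record such as the sketch below; the field names and example values are assumptions added for illustration and are not part of the original presentation:

```python
# change_request.py - illustrative record for the four elements of a
# configuration modification; field names and values are hypothetical.
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    request_id: str
    description: str              # 1. the change request itself
    impact_analysis: str          # 2. assessment of affected components
    modified_components: list     # 3. modifications and new components
    installation_method: str      # 4. how the change is reliably installed

example = ChangeRequest(
    request_id="CR-042",
    description="Support a new export format in the report module",
    impact_analysis="Touches reports/engine and its configuration schema",
    modified_components=["reports/engine", "reports/config"],
    installation_method="Tagged release built and deployed by the CI pipeline",
)
```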
15. Software Quality Assurance Plan
o A software quality assurance plan is an outline of quality measures to ensure quality levels within a software development effort.
o The plan provides the framework and guidelines for development of understandable and maintainable code.

16. Steps to Develop and Implement a Software Quality Assurance Plan
o Step 1. Document the plan.
o Step 2. Obtain management acceptance.
o Step 3. Obtain development acceptance.
o Step 4. Plan for implementation of the SQA Plan.
o Step 5. Execute the SQA Plan.

17. Quality Standards
o ISO 9000
o CMM (Capability Maturity Model)
o PCMM (People Capability Maturity Model)
o CMMI

Timing of STEP Activities


STEP specifies when the testing activities and tasks are to be performed, as well as what the tasks should be and their sequence, as shown in Figure 1-5. The timing emphasis is based on getting most of the test design work completed before the detailed design of the software. The trigger for beginning the test design work is an external, functional, or black box specification of the software component to be tested. For higher test levels (e.g., acceptance or system), the external specification is equivalent to the system requirements document. As soon as that document is available, work can (and should) begin on the design of the requirements-based tests.

Figure 1-5: Activity Timing at Various Levels of Test

The test design process continues as the software is being designed and additional tests based on the detailed design of the software are identified and added to the requirements-based tests. As the software design process proceeds, detailed design documents are produced for the various software components and modules comprising the system. These, in turn, serve as functional specifications for the component or module, and thus may be used to trigger the development of requirements-based tests at the component or module level. As the software project moves to the coding stage, a third increment of tests is designed based on the code and implementation details.
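
As a compact illustration of these three increments, the sketch below uses a hypothetical apply_discount function (its specification and internal details are invented for this example, and the unit under test is included only so the sketch runs): requirements-based cases come from the external specification, a design-based case is added during detailed design, and an implementation-based case is added once the code can be inspected.

```python
# test_discount.py - illustrative only; apply_discount and its spec are
# hypothetical, and the three groups mirror the increments described above.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical unit under test, included here only for runnability."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Increment 1: requirements-based test, written from the external spec
# before any code or detailed design exists.
def test_requirement_ten_percent_discount():
    assert apply_discount(100.0, 10) == 90.0

# Increment 2: design-based test, added once the detailed design states
# that results are rounded to two decimal places.
def test_design_rounding_to_cents():
    assert apply_discount(19.99, 15) == 16.99

# Increment 3: implementation-based test, added after coding reveals the
# explicit range check on the percent argument.
def test_implementation_rejects_out_of_range_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```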

Key Point

The goal at each level is to complete the bulk of the test design work as soon as possible.

Test inventory and design activities at the various levels overlap. The goal at each level is to complete the bulk of the test design work as soon as possible. This helps to ensure that the requirements are "testable" and well thought out and that defects are discovered early in the process. This strategy supports an effective software review and inspection program.

Measurement phase activities are conducted by level. Units are executed first, then modules or functions are integrated, and system and acceptance execution is performed. The sequential execution from small pieces to big pieces is a physical constraint that we must follow. A major contribution of the methodology is in pointing out that the planning and acquisition phases are not so constrained; furthermore, it's in our interest to reverse the order and begin to develop the high-level test sets first - even though we use them last!

The timing within a given test level is shown in Figure 1-6 and follows our natural expectation. Plans and objectives come first, then test design, then implementation, then finally execution and evaluation. Overlap of activities is possible.

Figure 1-6: Activity Timing Within a Single Level of Test

Work Products of STEP


Another aspect of the STEP process model is the set of work products produced in each phase and activity. STEP uses the word "testware" to refer to the major testing products such as test plans and test specification documents and the implemented test procedures, test cases, and test data files. The word "testware" is intentionally analogous to software and, as suggested by Figure 1-7, is intended to reflect a parallel development process. As the software is designed, specified, and built, the testware is also designed, specified, and built.

Figure 1-7: Parallel, Mutually Supportive Development

These two broad classes of work products support each other. Testware development, by relying on software work products, supports the prevention and detection of software faults. Software development, by reviewing testware work products, supports the prevention and detection of testware faults. STEP uses IEEE standard document templates as a recommended guideline for document structure and content. Figure 1-8 lists the documents that are included in this book.

Roles and Responsibilities in STEP


Roles and responsibilities for various testing activities are defined by STEP. The four major roles of manager, analyst, technician, and reviewer are listed in Table 1-4.

Table 1-4: Roles and Responsibilities

Role         Description of Responsibilities
Manager      Communicate, plan, and coordinate.
Analyst      Plan, inventory, design, and evaluate.
Technician   Implement, execute, and check.
Reviewer     Examine and evaluate.
These roles are analogous to their counterpart roles in software development. The test manager is responsible for providing overall test direction and coordination, and communicating key information to all interested parties. The test analyst is responsible for detailed planning, inventorying of test objectives and coverage areas, test designs and specifications, and test review and evaluation. The test technician is responsible for implementation of test procedures and test sets according to the designs provided by the analyst, for test execution and checking of results for termination criteria, and for test logging and problem reporting. The test reviewer provides review and oversight over all steps and work products in the process. The STEP methodology does not require that these roles be filled by different individuals. On small projects, it's possible that one person may wear all four hats: manager, analyst, technician, and reviewer. On larger projects and as a test specialty becomes more refined in an organization, the roles will tend to be assigned to different individuals and test specialty career paths will develop.

Key Point

On smaller projects, it's possible that one person may wear all four hats: manager, analyst, technician, and reviewer.
