Needs for Partners
• You might want to get business analysts, help desk or technical support, target customers or users, or sales and marketing staff involved in testing to build their confidence in the quality of the system that's about to be released—or the testing that your team or some other team performed on that system. In some cases, the motivation is political, to make up for credibility deficits in the development team or your own team of testers. In other cases—especially customer involvement in acceptance testing—the motivation is contractual, since many outsourced development efforts include a period of acceptance testing by the customers prior to final payment.
Partners in Testing Project
Vendors
• When vendors are producing key components for your systems, clear-headed assessments of component quality and smart testing strategies are key to mitigating the associated risks.
• Most vendors take a narrow, targeted view of their testing: they might test very deeply in certain areas, but they aren't likely to test broadly.
• What is the risk to your system's quality related to your vendors' components?
– Component irreplaceability.
– Component essentiality.
– Component/system coupling.
– Vendor quality problems.
Testing Service Providers
• Testing service providers include any organization that offers test services to clients. Most of the time these services come at a fee, but some organizations provide some specialized services at no charge to the client. The provider can offer these services on-site (insourcing), off-site (outsourcing), or both.
• A testing service provider brings several key strengths to the table. The most important is expertise in test project management and test engineering.
• Another advantage is that testing service providers can often begin running tests for you more quickly than you could do it yourself.
• A testing service provider, whether lab or consultancy or whatever, might also offer expert consulting and training services.
Sales Office
• If you sell a product internationally, you might have a local sales office or a sales partner (such as a distributor) in various regions.
• As fellow employees, the staff members in a sales office have the same goals and objectives you do—they have a stake in ensuring that the test effort is successful.
• Unfortunately, these sales and marketing people might not have technical sophistication or a particular skill with testing. If you are responsible for the results of the testing and want specific items tested, you will need to spell out these details. Any test cases you give to nontechnical colleagues must be precise and unambiguous.
Users & User-Surrogates
• This category includes business analysts, help desk, customer support, and technical support personnel along with actual target customers and users.
• Most commonly, these folks participate in alpha, beta, pilot, or acceptance testing efforts. (One of our clients, though, invites its customers' most-expert users of its system to participate in system test, using their own unique data and workflows.)
• As mentioned previously, testing by users and user-surrogates can result from considerations of credibility, contractual obligation, or from a need to broaden test coverage.
Outsourcing in Testing Project
How Outsourcing Affects Testing
• Outsourcing of components in the system has come to dominate software and hardware systems engineering. The trend started in hardware in the 1990s. RBCS clients like Dell, Hitachi, Hewlett-Packard, and other computer systems vendors took advantage of cheap yet educated labor overseas to compete effectively in an increasingly commoditized market.
• Outsourcing spread slowly into software in the 1990s. The near-simultaneous bursting of the telecom and dot-com bubbles in 2000, combined with the Y2K and Euro-conversion wind-downs, made the software industry and its customers more conservative and price-conscious. By the end of 2002, three years into a spectacular IT downturn, computer science enrollments in the United States had fallen to less than half of their 1999 levels, and price had become the primary determinant in most IT project decisions. Mass outsourcing of software projects took hold, and it continues unabated to this day.
• As with hardware, commoditization plays a role in software outsourcing. Software as a Service (SaaS) represents one facet of commoditized software, and it is clearly a form of pay-as-you-go outsourcing. The use of open source packages represents another facet of commoditized software, being a form of pay-nothing-as-you-go outsourcing (albeit one requiring potentially expensive self-support).
• We also need to consider a from-the-outside-looking-in perspective; i.e., how you manage testing when a constraint is distribution of some of the testing, all of the testing, or perhaps even the entire development effort.
Outsourcing Scopes
• Outsource the development, but retain the testing in-house.
• Outsource the development and the testing to one company only.
• Outsource the development and the testing to two different companies.
• Outsource the development and/or the testing, each to multiple companies.
Reasons for Outsourcing
• The desire to realize labor and other cost savings.
• The expertise, capital equipment, or geographical advantages of the outsource organization.
• The need for system or product certification by a qualified service provider.
• The inability to handle the work in-house, either due to a temporary spike in workload or a one-off project.
• Organizational or peer pressure on decision-makers to outsource, which is the "everybody's doing it" argument.
• Dissatisfaction with the in-house team's capability, whether for service, cost, attitude, quality, or some other reason, which is the "it couldn't be worse, at least it's cheaper" argument.
Success Factors for Outsourcing
• You have to select the right testers for the testing tasks, with the right skills. This includes test management skills for large chunks of testing work that will be outsourced.
• The outsourced testers must have access to the right equipment, the right tools, the right infrastructure, and so forth. If the assumption is that the outsourced testing will use equipment, tools, and infrastructure that you have in-house, this will not happen by magic, but will require careful planning and management of the logistics.
• The outsourced testers must have the ability to adapt their work to your project and your organization.
• The outsourced testers must have sufficient independence to tell the straight truth about their results.
Issues for Outsourced Project
• The system being built: This includes not only the transfer of test releases, release notes, code, and other supporting files, under proper configuration management control and as part of a solid release engineering process, but also the change management and project management elements.
• The test system: Due to reasons of expense or security, it is often necessary to share test environments with the outsource
development and, if you have them, outsource testing partners. This includes the obvious elements of the test environment, but also access to cohabiting software, affected or linked systems or databases, and other connected elements that will be present in the customer or production environment. You'll also need to be able to exchange various testware items, such as test data, test cases, and test tools, both to audit the test work of the outsource organizations and to avoid problems with gaps and overlap in the creation of the testware. Finally, you also need to coordinate the test processes, at least the major touchpoints that cross organizational boundaries.
• Information flows: A tremendous amount of information should flow across organization boundaries as part of testing in an outsourced project. This includes project documents, quality risk analysis documents, estimates and plans, work assignments, bug reports, and, of course, test results reports. It also includes less-formalized but equally critical information flows like emails, project discussion enablers like wikis and newsgroups, and the like.
Risks for Outsourced Project
• Political instability, especially when using outsource organizations in developing regions.
• Time zones, language barriers, and other communication issues.
• Overtime and other off-hours constraints due to lack of access to the facility, lack of power or other infrastructure during off-hours at the facility, or physical safety issues associated with being at the facility during off-hours.
• Infrastructure problems and inadequacies, such as unreliable Internet connectivity, poor roads and airports, difficulties obtaining potable water and safe food during site visits, and so forth.
• Skills availability, as mentioned in the earlier section.
• Unforeseen and sometimes abruptly exposed organizational weakness, including loss of key players due to extreme rates of turnover, organizational collapse due to governance problems, and the like.
Conclusion about Outsourcing for Testing
• As discussed in this section, outsourcing does pose a number of challenges to testing and quality. However, none of those challenges is fundamentally much different or harder to manage than the risks and activities on non-outsourced, collocated projects. The differences are those of degree more than of kind. So, the diligent test manager can succeed with outsourced projects if she manages the overall testing process with the same level of attention to detail that she would use to manage any other testing effort.
• As software outsourcing matures, testers and test managers are well positioned to become essential project participants. Outsourcing might be cheaper than doing projects in-house, but it's harder, much more complex, and thus riskier. Since testing, ultimately, is composed of risk management activities, I suggest you bring your risk management skills to bear. You might well find that you are the key to outsourcing success in your organization.
SESSION 10 - Review for Testing Project
Review Principles
• A review is a type of static test. The object being reviewed is not executed or run during the review. Like any test activity, reviews can have various objectives.
• Reviews usually precede dynamic tests. They should complement dynamic tests. Because the cost of a defect increases the longer that defect remains in the system, reviews should happen as soon as possible.
• Because reviews are so effective when done properly, organizations should review all important documents. That includes test documents: test plans, test cases, quality risk analyses, bug reports, test status reports, you name it.
What Can Happen after a Review?
• There are three possible outcomes.
– The ideal case is that the document is okay as is or with minor changes.
– Another possibility is that the document requires some non-trivial changes but not a re-review.
– The most costly outcome—in terms of both effort and schedule time—is that the document requires extensive changes and a re-review.
• Now, when that happens, keep in mind that while this is a costly outcome, it's less costly than simply ignoring the serious problems and then dealing with them during component, integration, system, or—worse yet—acceptance testing.
Essential Roles and Responsibilities
• During a formal review, there are some essential roles and responsibilities:
• The manager: The manager allocates resources, schedules reviews, and the like. However, the manager might not be allowed to attend, based on the review type.
• The moderator or leader: This is the chair of the review meeting.
• The author: This is the person who wrote the item under review. A review meeting, done properly, should not be a sad or humiliating experience for the author.
• The reviewers: These are the people who examine the item under review, possibly finding defects in it. Reviewers can play specialized roles based on their expertise or based on some type of defect they should target.
• The scribe or secretary or recorder: This is the person who writes down the findings.
Types of Review
• At the lowest level of formality (and, usually, defect removal effectiveness), we find the informal review. This can be as simple as two people, the author and a colleague, discussing a design document over the phone.
• Technical reviews are more formalized, but still not highly formal.
• Walk-throughs are reviews where the author is the moderator and the item under review itself is the agenda. That is, the author leads the review, and in the review, the reviewers go section by section through the item under review.
• Inspections are the most formalized reviews. The roles are well defined. Managers may not attend. The author may be neither moderator nor secretary. A reader is typically involved.
Six Phases for Formal Review
Preparing the Review
• You don't have to wait until a document is done before you start reviewing it. You can and should review early drafts or partial documents when you're dealing with something critical. This can help to identify and prevent patterns of defects before they are built into the whole document.
• That said, make sure you have some rules about what it means for something to be ready for review.
• Checklists are helpful for reviews. It's too easy to forget important areas without them. Have some checklists.
Marick's Code Review Checklist
• Technical test analysts are likely to be invited to code reviews. So, let's go through Brian Marick's code review checklist, which he calls a "question catalog." This catalog has several categories of questions that developers should ask themselves when going through their code. These questions are useful for many procedural and object-oriented programming languages, though in some cases certain questions might not apply.
• However, customization based on your own experience, and your organization's needs, is encouraged.
• For variable declarations, Marick says we should ask the following questions:
• Are the literal values correct? How do we know?
• Has every single variable been set to a known value before first use? When the code changes, it is easy to miss changing these.
• Have we picked the right data type for the need? Can the value ever go negative?
• For each data item and operations on data items, Marick says we should ask the following questions:
• Are all strings NULL terminated? If we have shortened or lengthened a string, or processed it in any way, did the final byte get changed?
• Did we check every assignment to a buffer for length?
• When using bitfields, are our manipulations (shifts, rotates, etc.) going to be portable to other architectures and endian schemes?
• Does every sizeof() function call actually go to the object we meant it to?
• For every allocation, deallocation, and reallocation of memory, Marick says we should ask the following questions:
• Is the amount of memory sufficient to the purpose without being wasteful?
• How will the memory be initialized?
• Are all fields being initialized correctly if it is a complex data structure?
• Is the memory freed correctly after use?
• Do we ever have side effects from static storage in functions or methods?
• After reallocating memory, do we still have any pointers to the old memory location?
• Is there any chance that the memory might be freed multiple times?
• After deallocation, are there still pointers to the memory?
• Are we mistakenly freeing data we don't mean to?
• Is it possible that the pointer we are using to free the memory is already NULL?
• For all operations on files, Marick says we should ask the following questions:
• Do we have a way of ensuring that each temp file we create is unique?
• Is it possible to reuse a file pointer while it is pointing to an open file?
• Do we recover each file handle when we are done with it?
• Do we close each file explicitly when we are done with it?
• For every computation, Marick says we should ask the following questions:
• Are parentheses correct? Do they mean what we want them to mean?
• When using synchronization, are we updating variables in the critical sections together?
• Do we allow division by zero to occur?
• Are floating point numbers compared for exact equality?
• For every operation that involves a pointer, Marick says we should ask the following questions:
• Is there any place in the code where we might try to dereference a NULL pointer?
• When dealing with objects, do we want to copy pointers (shallow copy) or content (deep copy)?
• For all assignments, Marick says we should ask the following question:
• Are we assigning dissimilar data types where we can lose precision?
• For every function call, Marick says we should ask the following questions:
• Was the correct function with the correct arguments called?
• Are the preconditions of the function actually met?
• Finally, Marick provides a couple of miscellaneous questions:
• Have we removed all of the debug code and bogus error messages?
• Does the program have a specific return value when exiting?
OpenLaszlo Code Review Checklist
• In the previous section, we discussed Marick's questions that should be asked about the code itself. In this section, we will discuss questions that should be asked about the changes to the system. These are essentially meta-questions about the changes that occurred during maintenance. These come from the OpenLaszlo website.
• For all changes in code, here are the main questions we should ask:
• Do we understand all of the code changes that were made and the reasons for them?
• Are there test cases for all changes? Have they been run?
• Were the changes formally documented as per our guidelines?
• Were any unauthorized changes slipped in?
• In terms of coding standards, here are some additional questions to ask (assuming you are not enforcing coding standards via static analysis):
• Do all of the code changes meet our standards and guidelines? If not, why not?
• Are all data values to be passed parameterized correctly?
• In terms of design changes, here are the questions to ask:
• Do you understand the design changes and the reasons they were made?
• Does the actual implementation match the designs?
• Here are the maintainability questions to ask:
• Are there enough comments? Are they correct and sufficient?
• Are all variables documented with enough information to understand why they were chosen?
• Finally, here are the documentation questions included in the OpenLaszlo checklist:
• Are all command-line arguments documented?
• Are all environmental variables needed to run the system defined and documented?
• Has all user-facing functionality been documented in the user manual and help file?
• Does the implementation match the documentation?
Developing a Review Program
• Reviews are an effective tool used along with execution-based testing to support defect detection, increased software quality, and customer satisfaction. Reviews should be used for evaluating newly developing products as well as new releases or versions of existing software. If reviews are conducted on the software artifacts as they are developed throughout the software life cycle, they serve as filters for each artifact.
• A multiple set of filters allows an organization to separate out and eliminate defects, inconsistencies, ambiguities, and low-quality components early in the software life cycle. If we compare the process of defect detection during reviews with execution-based testing/debugging, we can see that the review process may be more efficient for detecting, locating, and repairing these defects, especially in the code, for the following reasons:
1. When testing software, unexpected behavior is observed because of a defect(s) in the code. The symptomatic information is what the developer works with to find and repair (fix) the defect.
2. Reviews also have the advantage of a two-pass approach for defect detection. Pass 1 has individuals first reading the reviewed item, and pass 2 has the item read by the group as a whole. If one individual reviewer did not identify a defect or a problem, others in the group are likely to find it.
3. Inspectors have the advantage of the checklist, which calls their attention to specific areas that are defect prone. These are important clues. Testers/developers may not have such information available.
Review-related Policies
(i) testing policies with an emphasis on defect detection and quality, and measurements for controlling and monitoring;
(ii) a test organization with staff devoted to defect detection and quality issues;
(iii) policies and standards that define requirements, design, test plan, and other documents;
(iv) organizational culture with a focus on quality products and quality processes.
• Review policies should specify when reviews should take place, what is to be reviewed, types of reviews that will take place, who is responsible, what training is required, and what the review deliverables are.
• Review procedures should define the steps and phases for each type of review.
• Policies should ensure that each project has an associated project plan, test plan, configuration management plan, review plan, and/or software quality assurance plan.
• Project plans and the review plans should ensure that adequate time and resources are available for reviews and that cycle time is set aside for reviews.
• Managers need to follow up and enforce the stated policies. Only strong managerial commitment will lead to a successful review program.
Review Plan Components
• Reviews are development and maintenance activities that require time and resources. They should be planned so that there is a place for them in the project schedule. An organization should develop a review plan template that can be applied to all software projects. The template should specify the following items for inclusion in the review plan:
– review goals;
– items being reviewed;
– preconditions for the review;
– roles, team size, participants;
– training requirements;
– review steps;
– checklists and other related documents to be distributed to participants;
– time requirements;
– the nature of the review log and summary report;
– rework and follow-up.
Review Goals
• As in the test plan or any other type of plan, the review planner should specify the goals to be accomplished by the review. These include:
i. identification of problem components or components in the software artifact that need improvement,
ii. identification of specific errors or defects in the software artifact,
iii. ensuring that the artifact conforms to organizational standards, and
iv. communication to the staff about the nature of the product being developed.
• Additional goals might be to establish traceability with other project documents, and familiarization with the item being reviewed. Goals for inspections and walkthroughs are usually different; those of walkthroughs are more limited in scope and are usually confined to identification of defects.
Preconditions and Items to Be Reviewed
• Given the principal goals of a technical review—early defect detection, identification of problem areas, and familiarization with software artifacts—many software items are candidates for review. In many organizations the items selected for review include:
– requirements documents;
– design documents;
– code;
– test plans (for the multiple levels);
– user manuals;
– training manuals;
– standards documents.
• The preconditions need to be described in the review policy statement and specified in the review plan for an item. General preconditions for a review are:
– the review of an item(s) is a required activity in the project plan. (Unplanned reviews are also possible at the request of management, SQA, or software engineers. Review policy statements should include the conditions for holding an unplanned review.)
– a statement of objectives for the review has been developed;
– the individuals responsible for developing the reviewed item indicate readiness for the review;
– the review leader believes that the item to be reviewed is sufficiently complete for the review to be useful.
• The review planner must also keep in mind that a given item to be reviewed may be too large and complex for a single review meeting. The smart planner partitions the review item into components that are of a size and complexity that allows them to be reviewed in 1–2 hours. This is the time range in which most reviewers have maximum effectiveness. For example, the design document for a procedure-oriented system may be reviewed in parts that encompass:
– the overall architectural design;
– data items and module interface design;
– component design.
• If the architectural design is complex and/or the number of components is large, then multiple design review sessions should be scheduled for each. The project plan should have time allocated for this.
Roles, Participants, Team Size, and Time Requirements
• Two major roles that need filling for a successful review are (i) a leader or moderator, and (ii) a recorder.
• Some of the responsibilities of the moderator have been described. These include planning the reviews, managing the review meeting, and issuing the review report. Because of these responsibilities the moderator plays an important role; the success of the review depends on the experience and expertise of the moderator.
• Reviewing a software item is a tedious process and requires great attention to detail. The moderator needs to be sure that all are prepared for the review and that the review meeting stays on track. Reviewers often tire and become less effective at detecting errors if the review time period is too long and the item is too complex for a single review meeting. The moderator/planner must ensure that a time period is selected that is appropriate for the size and complexity of the item under review.
• There is no set value for a review time period, but a rule of thumb advises that a review session should not be longer than 2 hours. Review sessions can be scheduled over 2-hour time periods separated by breaks. The time allocated for a review should be adequate to ensure that the material under review can be covered.
• The review recorder has the responsibility for documenting defects, and recording review findings and recommendations. Other roles may include a reader who reads or presents the item under review. Readers are usually the authors or preparers of the item under review. The author(s) is responsible for performing any rework on the reviewed item. In a walkthrough type of review, the author may serve as the moderator, but this is not true for an inspection. All reviewers should be trained in the review process.
• The size of the review team will vary depending on the type, size, and complexity of the item under review. Again, as with time, there is no fixed size for a review team. In most cases a size between 3 and 7 is a rule of thumb, but that depends on the items under review and the experience level of the review team. Of special importance is the experience of the review moderator, who is responsible for ensuring the material is covered, the review meeting stays on track, and review outputs are produced. The minimal team size of 3 ensures that the review will be public.
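The rules of thumb above (a team of 3 to 7 reviewers, sessions capped at about 2 hours) can be encoded as a simple planning check. This is a hypothetical sketch, not a tool from the text; the function name and wording of the warnings are invented for illustration.

```python
# Illustrative sketch: validate a planned review session against the
# rules of thumb stated above (team size 3-7, session length <= 2 hours).

def check_review_plan(team_size: int, session_hours: float) -> list[str]:
    """Return a list of warnings for a planned review session."""
    warnings = []
    if team_size < 3:
        warnings.append("team smaller than 3: the review is no longer public")
    elif team_size > 7:
        warnings.append("team larger than 7: coordination overhead grows")
    if session_hours > 2:
        warnings.append("session longer than 2 hours: reviewer effectiveness drops")
    return warnings
```

For example, a 4-person, 90-minute session passes cleanly, while a 2-person, 3-hour session raises both warnings.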
Example Components for an Inspection Checklist
• The first column lists all the defect types or potential problem areas that may occur in the item under review. Sources for these defect types are usually data from past projects. Abbreviations for defect/problem types can be developed to simplify the checklist forms. Status refers to coverage during the review meeting—has the item been discussed? If so, a check mark is placed in the column.
• Major and minor are the two severity or impact levels shown here. Each organization needs to decide on the severity levels that work for it. Using this simple severity scale, a defect or problem that is classified as major has a large impact on product quality; it can cause failure or deviation from specification. A minor problem has a small impact on these; in general, it would affect a nonfunctional aspect of the software. The letters M, I, and S indicate whether a checklist item is missing (M), incorrect (I), or superfluous (S).
Review Procedures
• For each type of review that an organization wishes to implement,
there should be a set of standardized steps that define the given
review procedure. These are initiation, preparation, inspection
meeting, reporting results, and rework and follow-up. For each step in
the procedure the activities and tasks for all the reviewer participants
should be defined. The review plan should refer to the standardized
procedures where applicable.
Review Training
• Review participants need training to be effective. Responsibility for
reviewer training classes usually belongs to the internal technical
training staff. Alternatively, an organization may decide to send its
review trainees to external training courses run by commercial
institutions. Review participants, and especially those who will be
review leaders, need the training. Test specialists should also receive
review training.
Extended Checklist
• In addition to covering the items on the general document checklist as shown in the previous table, the following items should be included in the checklist for a requirements review:
– completeness (have all functional and quality requirements
described in the problem statement been included?);
– correctness (do the requirements reflect the user’s needs? are
they stated without error?);
– consistency (do any requirements contradict each other?);
– clarity (it is very important to identify and clarify any ambiguous
requirements);
– relevance (is the requirement pertinent to the problem area? requirements should not be superfluous);
– redundancy (a requirement may be repeated; if it is a duplicate it should be combined with an equivalent one);
– testability (can each requirement be covered successfully with one or more test cases? can tests determine if the requirement has been satisfied?);
– feasibility (are requirements implementable given the conditions under which the project will progress?).
Review Checklists
• Checklists are very important for inspectors. They provide structure and an agenda for the review meeting. They guide the review activities, identify focus areas for discussion and evaluation, ensure all relevant items are covered, and help to frame review record keeping and measurement. Reviews are really a two-step process: (i) reviews by individuals, and (ii) reviews by the group. The checklist plays its important role in both steps. The first step involves the individual reviewer and the review material. Prior to the review meeting each individual must be provided with the materials to review and the checklist of items. It is his responsibility to do his homework and individually inspect that document using the checklist as a guide, and to document any problems he encounters. When they attend the group meeting, which is the second review step, each reviewer should bring his or her individual list of defects/problems, and as each item on the checklist is discussed they should comment. Finally, the reviewers need to come to a consensus on what needs to be fixed and what remains unchanged.
Design Review
• Some specific items that should be checked for at a design review:
– a description of the design technique used;
– an explanation of the design notation used;
– evaluation of design alternatives;
– quality of the high-level architectural model;
– description of module interfaces;
– quality of the user interface;
– quality of the user help facilities;
– identification of execution criteria and operational sequences;
– clear description of interfaces between this system and other
software and hardware systems;
– coverage of all functional requirements by design elements;
– coverage of all quality requirements, for example, ease of use,
portability, maintainability, security, readability, adaptability,
performance requirements (storage, response time) by design
elements;
– reusability of design components;
– testability.
Detailed Design Review
• For reviewing detailed design the following focus areas should also be
revisited:
– encapsulation, information hiding and inheritance;
– module cohesion and coupling;
– quality of module interface description;
– module reuse.
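One property a detailed design reviewer looks for under "encapsulation, information hiding" is that callers depend on a module's interface rather than its internal representation. A deliberately simple, hypothetical example:

```python
# Hypothetical illustration of information hiding at a design review:
# the internal state (_celsius) is hidden behind a small interface, so the
# representation can change without affecting any caller.

class Thermostat:
    def __init__(self) -> None:
        self._celsius = 20.0          # internal representation, not exposed

    def set_fahrenheit(self, f: float) -> None:
        self._celsius = (f - 32) * 5 / 9

    def fahrenheit(self) -> float:
        return self._celsius * 9 / 5 + 32

t = Thermostat()
t.set_fahrenheit(68.0)
```

A reviewer would flag any caller reaching into `_celsius` directly, since that couples the caller to the module's representation.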
Review Reports
• The review reports should contain the following information.
– For inspections—the group checklist with all items covered and
comments relating to each item.
– For inspections—a status, or summary, report (described below)
signed by all participants.
– A list of defects found, classified by type and frequency. Each defect should be cross-referenced to the line, page, or figure in the reviewed document where it occurs.
– Review metric data.
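The defect list described above, classified by type and frequency with each defect cross-referenced to its location, can be sketched as follows. The defect types and locations are invented for the example.

```python
# Illustrative sketch of a review report's defect list: each entry carries a
# type and a cross-reference to its location in the reviewed document.
from collections import Counter

defects = [
    {"type": "interface", "location": "page 4"},
    {"type": "logic", "location": "line 120"},
    {"type": "interface", "location": "figure 2"},
]

# frequency by defect type, as the report requires
frequency = Counter(d["type"] for d in defects)
```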
IEEE Standards Inspection Report
• IEEE standards suggest that the inspection report contain vital
data such as [8]:
– number of participants in the review;
– the duration of the meeting;
– size of the item being reviewed (usually LOC or number of
pages);
– total preparation time for the inspection team;
– status of the reviewed item;
– estimate of rework effort and the estimated date for
completion of the rework.
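From the report data items listed above, simple review metrics can be derived. This is a hypothetical sketch; the field names are illustrative and not taken from the IEEE standard.

```python
# Hypothetical sketch: derive basic metrics from IEEE-style inspection
# report data (participants, duration, item size, preparation time).

report = {
    "participants": 5,
    "meeting_hours": 2.0,
    "item_size_pages": 30,
    "prep_hours_total": 10.0,
    "defects_found": 12,
}

# total effort in person-hours: preparation plus everyone's meeting time
effort = report["prep_hours_total"] + report["participants"] * report["meeting_hours"]

# defect density relative to the size of the reviewed item
defects_per_page = report["defects_found"] / report["item_size_pages"]
```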
Purpose of Review
• A final important item to note: The purpose of a review is to evaluate a software artifact, not the developer or author of the artifact. Reviews should not be used to evaluate the performance of a software analyst, developer, designer, or tester. This important point should be well established in the review policy. It is essential to adhere to this policy for the review process to work. If authors of software artifacts believe they are being evaluated as individuals, the objective and impartial nature of the review will change, and its effectiveness in revealing problems will be minimized.
Test Plan Checklist