
Requirements Attributes Document Guidelines

1. Objectives
The Requirements Attributes Guidelines identify and describe the attributes that will be used for
managing the requirements for the C-Registration System. In addition, this document outlines the
requirements traceability that will be maintained on the project during development.
The attributes assigned to each requirement will be used to manage the software development and to
prioritize the features for each release.
The objective of requirements traceability is to reduce the number of defects found late in the
development cycle. Ensuring all product requirements are captured in the software requirements,
design, and test cases improves the quality of the product.
2. Scope
The attribute and traceability guidelines in this document apply to the product requirements, software
requirements, and test requirements for the C-Registration System.
3. Requirement Attributes
This section identifies the types of requirements that will be managed and the attributes that will be used for each
requirement type. The C-Registration System will identify and manage the following requirement types:
- Product Requirements,
- Use Case Requirements, and
- Test Cases.
This project is planning to use RequisitePro for managing the requirements. The attributes described in this
section will be defined in RequisitePro. RequisitePro will enable each requirement tagged in the Word
document to be described in terms of the attributes. RequisitePro will also be used to trace requirements as
described in Section 5 of this document.
1. Attributes for Product Requirements
The product requirements (defined in the Vision Document [1]) will be managed using the
attributes defined in this section. These attributes are useful for managing the development
effort and for prioritizing the features targeted for various releases.
1. Status
Set after the product requirements are defined in the Vision Document. Tracks
progress during definition of the project baseline.
Proposed Used to describe features that are under discussion but
have not yet been reviewed and approved.
Approved Capabilities that are deemed useful and feasible and
have been approved for implementation.
Incorporated Features incorporated into the product baseline at a
specific point in time.
2. Benefit
Set by Marketing, the product manager, or the business analyst. Not all requirements
are created equal. Ranking requirements by their relative benefit to the end user
opens a dialogue with customers, analysts, and members of the development team.
Used in managing scope and determining development priority.
The product manager will specify the benefit of each proposed feature in terms of
critical, important, or useful.
Critical Essential features. Failure to implement means the system
will not meet customer needs. All critical features must be
implemented in the release or the schedule will slip.
Important Features important to the effectiveness and efficiency of the
system for most applications. The functionality cannot be
easily provided in some other way. Lack of inclusion of an
important feature may affect customer or user satisfaction,
or even revenue, but release will not be delayed due to lack
of any important feature.
Useful Features that are useful in less typical applications, will be
used less frequently, or for which reasonably efficient
workarounds can be achieved. No significant revenue or
customer satisfaction impact can be expected if such an
item is not included in a release.
3. Effort
Set by the development team. Because some features require more time and
resources than others, estimating the effort (for example, in team-weeks or
person-weeks, lines of code, or function points) is the best way to gauge complexity
and set expectations of what can and cannot be accomplished in a given time frame.
Used in managing scope and determining development priority.
For the C-Registration System, effort will be estimated in person-days.
4. Risk
Set by development team based on the probability the project will experience
undesirable events, such as cost overruns, schedule delays or even cancellation. Most
project managers find categorizing risks as high, medium, and low sufficient,
although finer gradations are possible. Risk can often be assessed indirectly by
measuring the uncertainty (range) of the project team's schedule estimate.
On the C-Registration Project, the Project Manager will define risk in terms of high,
medium, and low.
High The impact of the risk combined with the probability of the
risk occurring is high.
Medium The impact of the risk is less severe and the probability of
the risk occurring is less.
Low The impact of the risk is minimal and the probability of the
risk occurring is low.
5. Target Release
Records the intended product version in which the feature will first appear. This field
can be used to allocate features from a Vision Document into a particular baseline
release. When combined with the status field, the project team can propose, record
and discuss various features of the release without committing them to development.
Only features whose Status is set to Incorporated and whose Target Release is
defined will be implemented. When scope management occurs, the Target Release
Version Number can be increased so the item will remain in the Vision Document
but will be scheduled for a later release.
For the C-Registration System, features are planned for the first three
releases.
R 1.0 Scheduled for C-Registration Release 1.0 (Oct 1999)
R 2.0 Scheduled for C-Registration Release 2.0 (Oct 2000)
R 3.0 Scheduled for C-Registration Release 3.0 (Oct 2001)
Other Scheduled for future releases TBD
6. Assigned To
In many projects, features will be assigned to "feature teams" responsible for further
elicitation, writing the software requirements, and implementation. A simple pull-down
list will help everyone on the project team better understand responsibilities.
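Taken together, the attributes above form a simple record per product requirement. The following Python sketch is purely illustrative (the class and field names are assumptions, not RequisitePro's actual schema); it models the enumerated values defined in this section and the rule that only features whose Status is Incorporated and whose Target Release is defined will be implemented.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Status(Enum):
    PROPOSED = "Proposed"
    APPROVED = "Approved"
    INCORPORATED = "Incorporated"

class Benefit(Enum):
    CRITICAL = "Critical"
    IMPORTANT = "Important"
    USEFUL = "Useful"

class Risk(Enum):
    HIGH = "High"
    MEDIUM = "Medium"
    LOW = "Low"

@dataclass
class ProductRequirement:
    # Illustrative field names; the real attribute set lives in RequisitePro.
    name: str
    status: Status
    benefit: Benefit
    effort_person_days: int
    risk: Risk
    target_release: Optional[str]  # e.g. "R 1.0"; None if not yet scheduled
    assigned_to: str

    def is_scheduled_for_implementation(self) -> bool:
        # Per the Target Release guidance above: only features whose Status is
        # Incorporated AND whose Target Release is defined are implemented.
        return self.status is Status.INCORPORATED and self.target_release is not None
```

During scope management, deferring a feature amounts to raising its target_release rather than deleting the record, so the item remains visible in the Vision Document.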
2. Attributes for Use Case Requirements
The use case requirements (defined in the C-Registration Use Case Specifications [3-10] and
the Supplementary Specification [2]) will be managed using the attributes defined in this
section. These attributes are useful for managing the development effort, determining iteration
content, and for associating use cases with their specific Rose models.
1. Status
Set after the analyst has drafted the use cases. Tracks the progress of each use
case from initial drafting through to final validation.
Proposed Use Cases that have been identified but not yet
reviewed and approved.
Approved Use Cases approved for further design and
implementation.
Validated Use Cases which have been validated in a system test.
2. Priority
Set by the Project Manager. Determines the priority of the use case in terms of the
importance of assigning development resources to the use case and monitoring the
progress of the use case development. Priority is typically based upon the perceived
benefit to the user, the planned release, the planned iteration, complexity of the use
case (risk), and effort to implement the use case.
High Use Case is high priority relative to other use cases. Its
implementation will be monitored closely, and resources
will be assigned to it appropriately.
Medium Use Case is medium priority relative to other use cases.
Low Use Case is low priority. Implementation of this use case is
less critical and may be delayed or rescheduled to
subsequent iterations or releases.
3. Effort Estimate
Set by the development team. Because some use cases require more time and
resources than others, estimating the effort (for example, in team-weeks or
person-weeks, lines of code, or function points) is the best way to gauge complexity
and set expectations of what can and cannot be accomplished in a given time frame.
Used in managing scope and determining development priority. The Project Manager
uses these effort estimates to determine the project schedule and to plan the
resourcing of tasks effectively.
C-Registration Project estimates effort in Person Days (assume 7.5 hours in a
workday).
4. Technical Risk
Set by development team based on the probability the use case will experience
undesirable events, such as effort overruns, design flaws, high number of defects,
poor quality, poor performance, etc. Undesirable events such as these are often the
result of poorly understood or defined requirements, insufficient knowledge, lack of
resources, technical complexity, new technology, new tools, or new equipment.
C-Registration Project will categorize the technical risks of each use case as high,
medium, or low.
High The impact of the risk combined with the probability of the
risk occurring is high.
Medium The impact of the risk is less severe and the probability of
the risk occurring is less.
Low The impact of the risk is minimal and the probability of the
risk occurring is low.
5. Target Development Iteration
Records the development iteration in which the use case will be implemented. It is
anticipated that the development for each release will be performed over several
development iterations during the Construction Phase of the project.
The iteration number assigned to each use case is used by the Project Manager to
plan the activities of the project team.
The current plan is for the C-Reg Project to undergo 3-4 iterations during the
Construction Phase. In each iteration the selected set of use cases will be coded and
tested.
Iteration E-1 Scheduled for Elaboration Phase, Iteration 1
Iteration C-1 Scheduled for Construction Phase, Iteration 1
Iteration C-2 Scheduled for Construction Phase, Iteration 2
Iteration C-3 Scheduled for Construction Phase, Iteration 3
6. Assigned To
Use cases are assigned to either individuals or development teams for further
analysis, design, and implementation. A simple pull-down list will help everyone on
the project team better understand responsibilities.
7. Rose model
Identifies the Rose use case model associated with the use case requirement.
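The use case attributes above are used chiefly for iteration planning. The following sketch is illustrative only (the dictionary keys and function name are assumptions, not tied to RequisitePro or Rose): it groups Approved use cases by Target Development Iteration and orders each iteration's content by Priority, mirroring how the Project Manager uses these attributes to plan team activities.

```python
from collections import defaultdict

# Priority ordering per the definitions above: High before Medium before Low.
_PRIORITY_ORDER = {"High": 0, "Medium": 1, "Low": 2}

def plan_iterations(use_cases):
    """Group Approved use cases by target iteration (e.g. "C-1", "C-2").

    Each use case is a dict with illustrative keys: "name", "status",
    "priority", and "iteration". Proposed (not yet approved) use cases
    are excluded, since only approved use cases proceed to design and
    implementation.
    """
    plan = defaultdict(list)
    for uc in use_cases:
        if uc["status"] == "Approved":
            plan[uc["iteration"]].append(uc)
    # Within an iteration, list the highest-priority use cases first.
    for contents in plan.values():
        contents.sort(key=lambda uc: _PRIORITY_ORDER[uc["priority"]])
    return dict(plan)
```

Summing each iteration's effort estimates against available person-days would then show whether the planned iteration content is feasible.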
3. Attributes for Test Cases
The test cases (defined in the Test Plan for the C-Registration System [11]) will be planned
and tracked using the attributes defined in this section.
1. Test Status
Set by the Test Lead. Tracks status of each test case.
Untested Test Case has not been performed.
Failed Test has been conducted and failed.
Conditional Pass Test has been completed with problems. The test is
assigned a status of Pass on the condition that certain
corrective actions are completed.
Pass Test has been completed successfully.
2. Build Number
Records the system build in which the specific test case will be verified.
The C-Registration System will be implemented in several iterations of the
Construction Phase. An iteration typically requires 1-3 builds.
Build A Test Case scheduled for System Build A
Build B Test Case scheduled for System Build B
Build C Test Case scheduled for System Build C
Build D Test Case scheduled for System Build D
Build E Test Case scheduled for System Build E
Build F Test Case scheduled for System Build F
Build G Test Case scheduled for System Build G
Build H Test Case scheduled for System Build H
Build I Test Case scheduled for System Build I
Build J Test Case scheduled for System Build J
Build K Test Case scheduled for System Build K
Build L Test Case scheduled for System Build L
3. Tested By
Individual assigned to perform and verify the test case. A simple pull-down list
will help everyone on the project team better understand responsibilities.
4. Date Tested
Planned test date or actual test date.
5. Test Notes
Any notes associated with planning or executing the test.
