2.2.1.3 Sprint Charters ...................................................................... 10
2.2.1.5 Sprint Review Meetings ............................................................... 11
2.2.1.6 Sprint User Acceptance ............................................................... 11
2.2.1.7 Retrospectives ....................................................................... 12
2.2.1.7.2 SOLUTION RETROSPECTIVES ........................................................... 12
2.2.1.8 Pre-Sprint Preparation ............................................................... 12
2.2.2 Graphical Depiction of Planned Sprint Cycle ............................................ 12
2.3 Sprint 1: Project Startup Activities ..................................................... 12
2.4 Solution Engineering Approach ............................................................ 13
2.4.1 Collective Ownership of the Solution ................................................... 13
2.4.2 Sprint Execution User Validation ....................................................... 13
2.4.3 Training Material Production ........................................................... 14
2.4.4 Continuous Integration ................................................................. 14
2.4.4.1.1 SOFTWARE VERSION CONTROL .......................................................... 15
2.4.4.1.3 AUTOMATED BUILDS .................................................................. 15
2.4.4.1.4 PRIVATE DEVELOPER BUILDS .......................................................... 15
2.4.4.1.5 FAST/QUICK BUILD CYCLES ........................................................... 15
2.4.4.2 Continuous Database Integration ...................................................... 15
2.4.4.2.2 AUTOMATED DATABASE INTEGRATION .................................................... 16
2.4.4.3.1 AUTOMATED UNIT TESTING ............................................................ 16
2.4.4.4.2 MANAGING CODE COVERAGE ............................................................ 18
2.4.4.5 Documentation Compilation ............................................................ 18
2.4.4.5.3 PRODUCT DASHBOARD PUBLISHING ...................................................... 18
2.4.5 Periodic Automated Environmental Refresh ............................................... 19
2.4.5.2 Temporary Environmental Archive ...................................................... 19
2.4.5.4 Code Migration ....................................................................... 19
2.4.5.5 Structural Database Modifications .................................................... 20
2.4.5.6 Configuration Data Load .............................................................. 20
2.4.5.8 Shake-down Testing ................................................................... 20
2.4.6 Integration Test and Regression Test ................................................... 20
2.4.6.1 Verification ......................................................................... 21
2.4.7 Pair Programming ....................................................................... 21
2.4.8 Design Approach ........................................................................ 21
2.4.8.1 Configuration vs. Customization ...................................................... 21
2.4.9 Database Customization ................................................................. 21
2.4.10 Solution Refactoring .................................................................. 22
2.4.10.1 Load and Performance Testing ........................................................ 22
2.5 Executing a Production Release Sprint .................................................... 22
2.5.1 Release Planning ....................................................................... 23
2.5.1.1 Release Sprint Staffing .............................................................. 23
2.5.2 Planning End-User Training ............................................................. 23
2.5.3 Stakeholder Involvement and Communication .............................................. 24
2.5.4 Technical Implementation ............................................................... 24
2.5.5 Load and Performance Testing ........................................................... 24
2.5.6 Formal User Revalidation & Acceptance .................................................. 24
2.5.7 Execution of Data Conversion in Production ............................................. 24
SECTION 3: SUPPORT PROCESSES ................................................................. 25
3.1 Continuous Process Improvement ........................................................... 25
3.2 Project Configuration Management ......................................................... 25
3.3 Risk and Issues Management ............................................................... 25
3.3.1 Contingency Planning ................................................................... 26
3.4 Assumption and Constraint Management ..................................................... 26
3.5 Performance Metrics ...................................................................... 27
3.5.1 Scope & Schedule Management ............................................................ 27
3.5.2.3 Remaining Product Backlog ............................................................ 32
3.5.3 Quality Metrics ........................................................................ 33
3.5.3.3 UAT Defect Density ................................................................... 35
3.5.3.4 Failed Builds ........................................................................ 35
3.5.3.5 Failed Environmental Refreshes ....................................................... 36
3.6 End User Training ........................................................................ 37
3.7 Stakeholder Involvement and Communications Management .................................... 37
SECTION 4: PERSONNEL ......................................................................... 38
4.1 AGILE PROJECT ORGANIZATION ............................................................... 38
4.1.1 SUBJECT MATTER EXPERTS INVOLVEMENT ..................................................... 39
4.1.2 Project OWNER .......................................................................... 40
4.1.3 AREA PRODUCT OWNERS .................................................................... 40
4.1.4 MASTER SCRUM MASTER .................................................................... 41
4.1.5 SCRUM MASTERS .......................................................................... 41
4.2 FEATURE TEAMS ............................................................................ 42
4.2.1 FEATURE TEAM RAMPUP .................................................................... 42
SECTION 5: PROJECT FACILITIES AND RESOURCES .................................................. 43
The assigned Product Owner should meet with business users to establish the initial version of the
project/product backlog and an overall vision for the solution. Note that the backlog will change and grow
with the project as additional tasks are identified and added; the intent of the initial backlog is to establish
a good enough understanding to estimate the amount of time and resources needed to complete the
project successfully.
Teams must indicate the priority/importance of each requirement in the backlog, and each assigned
priority must be a unique number. Teams should determine what scale will be used for priorities (e.g. from
1 to N, where N is ten times the total number of requirements in the Product Backlog) for each and every
requirement included in the Product Backlog. In the example provided, priorities would be assigned in
increments of 10; this readily enables the team to insert new requirement priorities between existing
requirements as the project progresses. If the team believes that alternative approaches would be more
effective (based on past experience), they are free to adopt those practices once the project is underway
and opportunities for improvement are identified during the Sprint retrospective meetings.
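The increments-of-10 convention described above can be sketched in a few lines of code. This is an illustrative aid only, not part of any DHS template; the requirement names and function names are hypothetical.

```python
# Sketch of the increments-of-10 priority convention: initial priorities
# are assigned in steps of 10, and a new requirement can later be slotted
# between two existing ones by taking the midpoint of their priorities.

def initial_priorities(requirements):
    """Assign unique priorities in increments of 10, in backlog order."""
    return {req: (i + 1) * 10 for i, req in enumerate(requirements)}

def insert_between(priorities, new_req, before, after):
    """Give new_req a unique priority between two existing requirements."""
    gap = priorities[after] - priorities[before]
    if gap < 2:
        raise ValueError("No room left; renumber the backlog first.")
    priorities[new_req] = priorities[before] + gap // 2
    return priorities

# Hypothetical requirements, for illustration:
backlog = initial_priorities(["login", "case search", "reporting"])
insert_between(backlog, "audit log", "login", "case search")
# "audit log" now sits between "login" (10) and "case search" (20)
```

When the gap between two neighbors is exhausted, the team simply renumbers the backlog, which is the trade-off this scheme accepts in exchange for cheap insertions.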
The team's initial priorities will serve as the starting point for the Product Owner to establish overall
priorities for the solution during project startup (see section 4.1.2 Project OWNER for information about the
Product Owner's role). The assigned priorities are also expected to heavily influence the proposed
development roadmap defined in section 1.5 System Roadmap: Initial Plan of Work and the themes of
each resulting Sprint.
Teams should size requirements using the Fibonacci sequence included in the table below and in the
product backlog template. Teams should establish definitions or guidance for the use of each numeric size
that will be used by the Feature Teams on the project to establish a common understanding of size
expectations. This should promote more uniform estimation across teams (on multi-team projects). The
definitions must include information from all of the software engineering disciplines relevant to the project,
including data conversion, interface, database, user interface and architectural elements, and any other
relevant components/competencies.
Note that DHS does not intend to use the above-mentioned generic size estimates to control projects; the
size estimates are used for planning Sprints, specifically for gauging how much scope it is believed can be
addressed within each Sprint, and are NOT considered commitments.
Teams should also define which sizing metric would be most beneficial on a given project; specifically, the
use of Ideal Days versus an arbitrary generic size unit, or perhaps even using Function Points directly. The
approach should be a simple mechanism for all staff on the project that quickly and effectively enables the
tracking of software engineering tasks.
Size  Component
0     Zero
1     Very Tiny
2     Tiny
3     Almost Tiny
5     Very Small
8     Small
13    Medium
21    Large
34    Very Large
55    Huge
89    Massive (Note: DHS expects Requirements/Stories of this size to be broken down into smaller
      requirements)
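The size scale above can also be kept in tooling so that off-scale estimates are caught early. The snippet below is a minimal sketch under that assumption; the function name and error handling are illustrative, not a DHS standard.

```python
# The Fibonacci size scale from the table above, as a lookup table,
# plus a validator that rejects estimates that are not on the scale.

SIZE_SCALE = {
    0: "Zero", 1: "Very Tiny", 2: "Tiny", 3: "Almost Tiny",
    5: "Very Small", 8: "Small", 13: "Medium", 21: "Large",
    34: "Very Large", 55: "Huge", 89: "Massive",
}

def validate_estimate(points):
    """Return the size label for an estimate, or raise if off-scale."""
    if points not in SIZE_SCALE:
        raise ValueError(
            f"{points} is not on the scale {sorted(SIZE_SCALE)}"
        )
    return SIZE_SCALE[points]
```

For example, `validate_estimate(13)` returns "Medium", while an estimate of 4 raises an error, prompting the team to round to an adjacent scale value.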
Sprint themes on the roadmap will likely correlate with the eventual Sprint goal and identify the prioritized
features that are aligned with each Sprint in a way that addresses both business priorities and the greatest
risk early in the project.
The definition of done at the solution level is used to determine when the system is finished and meets
DHS's intended need(s).
The definition of done at the Sprint level indicates when a Sprint goal is achieved (see section 2.2.1
Planning and Executing the Four Week Engineering Sprints).
The definition of done for the various coding-related activities, such as development, data conversion,
training material creation, and interface development, indicates when each of those work products is
complete.
When using Scrum, a good practice is to use four-week engineering Sprints/iterations. During project
planning, the project team should define the initial Sprint duration and configuration that will be used to
manage the development activities. This guideline document provides an initial standard and structure for
how a Sprint cycle should be executed.
As a general guideline, a good approach is one that efficiently and effectively integrates the system in a
timely fashion. Project teams are encouraged to work collaboratively with DHS management and involved
stakeholders throughout the project to establish a project approach that is best suited to meeting the
specified needs of the project champion/sponsor.
In Part 1 of Sprint planning, the Project Owner (and Area Product Owners on large projects), executive
management and users meet with representatives from each of the Feature Teams and the Scrum Masters
to establish a goal for the Sprint. These meetings are also used to identify what functionality will be built in
the coming Sprint.
The part 1 Sprint planning meeting lasts no more than 4 hours. During the meeting, business priorities are
discussed and then contrasted against the priorities in the Product Backlog. If necessary, the backlog
priorities may be changed to reflect new information or changes in business priorities. As the top priorities
are selected for inclusion in the scope of the Feature Teams' Sprints, careful attention is paid to the size of
work being taken on by each team in light of the team's corresponding velocity (for more information about
Feature Team velocity see section 2.2.1.3 Sprint Charters).
It is important to note that, for large projects with multiple Feature Teams, the representatives from each
Feature Team are fully authorized and responsible for the assignment of work to their associated team. At
the end of part 1 planning, the Feature Team is fully committed to delivering the work it has agreed to take
on. Additionally, as a good practice, DHS uses an approach that openly allows all Feature Team members
the opportunity to participate as the team representative in the part 1 planning meeting over a period of
time. Establishing a rotating schedule for the team representative(s) is a good practice for accomplishing
this.
During project planning, the project team defines the accepted convention for how Feature Team members
are identified to attend the part 1 planning meeting and how the teams will maintain shared commitment to
the selected scope. Other conventions about how the part 1 planning meeting is conducted (such as when
in the Sprint cycle: beginning or end) should also be noted. Teams should also consider how the part 1
planning meeting can be structured and managed to prevent overrunning the allocated 4 hour duration.
The second part of planning for a Sprint involves each Feature Team meeting to discuss the work it
committed to complete and how it intends to build the functionality into a product increment during the
Sprint. For large multi-team projects, Area Product Owners should be available to assist the Feature
Team in the detailed planning of their Sprint activities; however, an Area Product Owner may be required
to split their time across multiple Feature Teams and hence may not be available to a specific Feature
Team for the full duration of the part 2 planning meeting. As with the first Sprint planning meeting, the
second planning meeting lasts no more than four hours.
During project planning, the team decides how they will conduct the second part of the Sprint planning
meeting while recognizing that self-organizing teams may alter those practices at a future time.
The main output of the part 2 planning meeting is the Sprint Backlog, which includes the tasks the Feature
Team will complete in building the required features, estimates of the amount of time it will take to realize
those features (preferably in Ideal Days), and a Sprint burn-down chart template. The specific media used
for the Sprint backlog items is chosen by each of the self-organized Feature Teams. Teams are encouraged
to use physical Sprint backlog media to track progress unless other specific practices are justified.
During project planning, the project team should define how the burn-down chart will be updated each day
and how records will be kept of the daily progress (e.g. pictures of the team backlog are taken and
electronically filed). Additionally, the project team should define how tasks are identified and estimated and
how unexpected complexity is to be handled.
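The daily burn-down update described above amounts to simple arithmetic, sketched below under the assumption that tasks are estimated in Ideal Days as this guideline recommends. The numbers and field layout are illustrative only.

```python
# Sketch of a Sprint burn-down record: for each Sprint day, pair the
# actual remaining Ideal Days against the straight-line "ideal" trend.

def burn_down(total_ideal_days, remaining_by_day, sprint_days=20):
    """Return (day, actual_remaining, ideal_remaining) rows.

    sprint_days defaults to 20 working days, i.e. a four-week Sprint.
    """
    rows = []
    for day, remaining in enumerate(remaining_by_day, start=1):
        ideal = total_ideal_days * (1 - day / sprint_days)
        rows.append((day, remaining, round(ideal, 1)))
    return rows

# Hypothetical first week of a Sprint that started with 40 Ideal Days:
chart = burn_down(40, [38, 36, 35, 31, 30])
```

A team updating a physical chart would plot the second column each day; a photographed or electronically filed copy of the same numbers satisfies the record-keeping convention above.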
As a general guideline, each requirement should be broken down into smaller tasks during the part 2
planning meeting, and during this decomposition each task should be assigned an Ideal Day estimate,
which is maintained by the Feature Team throughout the Sprint.
Sprint Charters
One of the main outputs of both of the Sprint planning meetings is a one-page charter that indicates the
goal of the Sprint, the dates for the beginning and ending of the Sprint, the time and location of the project
daily Scrum meeting, as well as dates, times and locations for the Sprint Review meeting and the solution
retrospective meeting. For large, multi-team projects, the solution Sprint Charter is made available to all
stakeholders interested in the project. Additionally, the Feature Teams create auxiliary charters that include
relevant details about the scope of work they have selected along with relevant details about their team's
plans for the Sprint.
As a general guideline, the Feature Team representatives bring a partially completed team charter to the
part 1 planning meeting. This partially completed charter includes the team's targeted velocity (in generic
size count) and the Feature Team's expected availability over the Sprint (e.g. accounting for known
vacations and holidays). While the content of the charters may evolve over time, project teams are
required to use the established template, TP003_Sprint_Charters_Template (see
https://ardhs.sharepointsite.net/guidelines/Shared%20Documents/TP003_Sprint_Charters_Template.docx), at
the beginning of the project.
For large multi-team projects, Daily Scrum meetings are conducted at two levels each day: at the Feature
Team level for each team, and at the Enterprise level, where representatives from each Feature Team
attend with the Product Owner and Area Product Owners; small single-team projects only require one Daily
Scrum meeting. Regardless of level, the daily Scrum meeting never lasts more than 15 minutes and
requires that all team members attend, stand, and answer the following 3 questions: What did you do
yesterday? What will you do today? What impediments are in your way?
During project planning, the project team defines how it will keep the daily meetings under the 15 minute
limit, how it will handle non-Feature-Team (uninvited) participation (not attendance), and how it will
schedule the Feature Team level meetings to facilitate a consolidated and accurate solution level meeting.
Simply put, the Sprint review meeting is a 4 hour demonstration of the new products and features that were
built during the latest 4 week Sprint. During the meeting, representatives from the Feature Team(s) present
the results of their work to the Product Owner, Area Product Owners, management, users and other
stakeholders. The big-picture intent of the review meetings is to validate the solution's progress against
DHS needs and priorities and to mitigate project risk.
Presenters at the review meeting should do as little as possible in preparation for the meeting; the meeting
is not intended to be a polished presentation requiring additional project resources to prepare, but rather a
functional view of the solution and its progress. During project planning, the project team should define
how the meeting will be organized, how Feature Teams might identify presenters, and how feedback will be
collected.
As a general guideline, DHS uses the review meetings as a stakeholder involvement and communication
forum for many of the project stakeholders. During project planning, project teams should define how DHS
can involve the broadest audience possible (video conference, webinar, etc.) while completing the meeting
within the four hour limit.
Traditional User Acceptance Testing (UAT) is not readily possible when using Agile project approaches. As
a result, DHS has adopted a three-pronged approach to addressing the need for formal acceptance of the
developed solution:
Sprint Execution User Validation - see section 2.4.2 Sprint Execution User Validation.
Sprint Review User Acceptance (this section) - at the end of each Sprint, during the Sprint Review
Meeting, DHS reviews and accepts the developed components presented in the review.
Release Sprint User Revalidation - while executing a Release Sprint (see section 2.5.6 Formal User
Revalidation & Acceptance), DHS requires that users revalidate the functionality being released.
As a general guideline, DHS provides acceptance of the developed features and components
demonstrated in the Sprint Review Meetings. All feedback that requires a system change is incorporated
into the Product Backlog for future Sprints. During project planning, the project team should define how
features will be demonstrated, how feedback will be collected, and how acceptance will be documented.
Retrospectives
Retrospection is a key activity that drives continuous improvement during the project. During project
planning, the project team should provide teams with a brief summary of techniques that they will use
during retrospective meetings. Additionally, project teams should define when retrospective meetings are
conducted within the Sprint cycle. A recommended guideline is to conduct retrospective meetings at the
end of a Sprint because any opportunities for improvement will be fresh in the teams' minds. Note that for
large, multi-Feature-Team projects, retrospective meetings are conducted at two levels.
At the end of each iteration, each Feature Team must conduct a retrospective meeting to inspect their
processes, identify potential improvements, assess any limitations and challenges faced by the team, and
determine solutions that potentially eliminate the challenges. During project planning, the project team
should define how the Feature Teams will conduct retrospectives, when they should be conducted and any
particular approaches that might be helpful (e.g. Kaizen, 5-Whys).
SOLUTION RETROSPECTIVES
In addition to the Feature Team retrospectives, the DHS Project Owner or Master Scrum Master (for very
large projects) conducts a solution-level retrospective focused on improving operations and efficiency at
the overall project level. A key input to the solution-level retrospective is the challenges and ideas from the
Feature Team retrospectives. The enterprise retrospective meetings are also conducted after the end of
each Sprint and are led by the Product Owner or designee and attended by the Area Product Owners,
DHS management and other interested stakeholders.
During project planning, the project team must provide input and recommendations about how the solution
retrospective meetings should be scheduled to enable input from the Feature Team retrospectives as well
as more general recommendations about how DHS can implement enterprise improvement activities.
Pre-Sprint Preparation
As a regular and on-going activity before each Sprint, in anticipation of the Sprint planning meetings, the
DHS Project Owner assesses and updates the Product Backlog to reflect current priorities, recent
legislative changes or other external influences. These updates also include the closing out of
requirements that were addressed in the previous Sprint as well as updates to the Product Burn-down
Chart.
Within the Defined Project Strategy, project teams should create a graphical depiction of the Sprint
cycle/timeline that will be used to meet DHS requirements. This graphic depiction should show how
activities conducted within each Sprint are laid out; it is not considered to be a final, rigid plan for each
Sprint, but rather a visual aid to help project team members to understand the timing of events within the
Sprint.
Configuration of the development toolsets (i.e. IDE, testing tools, see section 2.4.4 Continuous Integration)
Establishment of the project performance metric utilities (section 3.5 Performance Metrics)
Executing Sprint Planning (parts 1 and 2) for Sprint 2 (2.2 Scrum Management Approach)
The plan for the project startup activities within Iteration 1 should include a Sprint Charter (and Solution
Charter for large projects) and a Sprint Backlog. These materials accelerate the execution of activities for
the first Sprint. The project team should also provide details about the tasks that must be executed, any
constraints that are known, dependencies between tasks, and the level of resourcing required to complete
the work.
For all identified iteration tasks, the project team should insert requirements into the Product Backlog at the
end of the list and assign unique priorities that effectively convey that these tasks must be completed in the
first Sprint. For example, the first Sprint may be used to elicit User Stories/Requirements from business
users for a given set of functions; the elicitation of requirements in each area may be suitable tasks for the
Sprint product backlog.
In Agile terms, the four week development iterations must produce a potentially shippable product. In
general, potentially shippable is defined to mean a version of software products and any and all
associated materials required to implement that software. It is important to note that the software may
require additional components to function in the production environment (e.g. a user interface) but the unit
component (e.g. database stored procedure) is of production quality.
It is imperative that everyone on the project, from the most junior team member to the Project Owner and
management, have a strong sense of collective ownership of the successful implementation of the desired
solution within the defined schedule. This is reflected in the horizontal nature of the organization chart
shown in section 4.1 AGILE PROJECT ORGANIZATION. During project planning, the project team should
decide how they will actively foster and maintain a culture of shared ownership across the team and with
other project stakeholders.
DHS uses the concept of validation to ensure that the systems meet the intended needs, or goodness of
fit. Hence validation cannot be completed with automated testing, at least initially. Validation must involve
subject matter experts (SMEs) who understand the nature of the intended solution. Through close
collaboration during each Sprint, the Project Owner, Area Product Owners, SMEs, and other participants
validate the evolving solution during each Sprint.
Traditionally, validation is accomplished during User Acceptance Testing (UAT), which is not readily
possible when using Agile concepts. As a result, DHS has adopted a three-pronged approach to addressing
the need for formal acceptance of the developed solution:
Sprint Execution User Validation (this section) - during the execution of the Sprint, the Project Owner and
Area Product Owners regularly provide input on, use, and review the work of the Feature Teams.
Sprint Review User Acceptance - see section 2.2.1.6 Sprint User Acceptance.
Release Sprint User Revalidation - see section 2.5.6 Formal User Revalidation & Acceptance.
For an Agile approach to be successful, frequent and regular interaction must take place between the
Feature Teams and Subject Matter Experts (SMEs). As a result of this interaction, the Feature Teams gain
heightened insight about the required solution, which lowers the risk associated with delivering the right
solution. Given the nature of the feedback and SME interactions, the Feature Teams use their judgment
based on the scope of the feedback: for feedback with small resulting changes, the team assesses how to
incorporate the change within the existing Sprint; for larger changes that cannot be accommodated in the
Sprint, the Feature Team works with the Project Owner and Area Product Owner to communicate that an
additional task (or tasks) needs to be added to the Product Backlog.
During project planning, the project team should discuss how features will be demonstrated, how feedback
will be collected, and how validation will be tracked to make sure that all new components receive user
scrutiny.
As a general guideline, the Project Owners and Area Product Owners regularly engage stakeholders
external to the project to use features and components developed in the Sprint, and past Sprints, to
validate that the solution fits its intended need.
A best practice for user validation involves recording user interactions with the system in an automated
manner that enables playback of the user actions. During project planning, the project team should
discuss how this information will be captured, managed throughout the project, and used (played back) to
facilitate User Revalidation during Release Sprints (see 2.5.6 Formal User Revalidation & Acceptance for
more details).
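The record-and-playback practice above can be sketched as follows. This is a deliberately simplified illustration: a real implementation would hook UI, browser, or HTTP events, and every name below (the class, the action fields, the sample actions) is hypothetical rather than a DHS standard.

```python
# Sketch of recording user interactions as timestamped records that can
# later be saved and "played back" by re-driving a handler with each step.

import json
import time

class ActionRecorder:
    def __init__(self):
        self.actions = []

    def record(self, kind, **details):
        """Capture one user action with a timestamp."""
        self.actions.append({"t": time.time(), "kind": kind, **details})

    def save(self, path):
        """Persist the recorded session for later revalidation runs."""
        with open(path, "w") as f:
            json.dump(self.actions, f)

    def playback(self, handler):
        """Re-apply each recorded action through the supplied handler."""
        for action in self.actions:
            handler(action)

# Hypothetical session: two recorded actions, replayed into a list.
rec = ActionRecorder()
rec.record("click", target="submit_case")
rec.record("input", field="case_id", value="12345")
seen = []
rec.playback(seen.append)
```

During a Release Sprint, replaying a saved session against the release candidate would let users revalidate previously accepted behavior with minimal manual effort, which is the intent of the practice described above.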
Project teams should use engineering approaches that incorporate the development of training materials
seamlessly into each Sprint. In other words, Feature Teams should be staffed with people who have skills
necessary for the development of training materials. Project teams should also give thought to the types
of training material that will be developed and how those materials are integrated into the software
solution (e.g. on-screen help, training manuals, and video tutorials hosted on platforms such as YouTube).
Continuous Integration
DHS mandates and verifies that all project teams make full use of Continuous Integration (CI) from the
beginning of the project through the end of the project. During project planning, project teams should
define an approach to implementing continuous integration practices on the project including the
frequency and proposed times when integration cycles will be executed (e.g. 10 AM and 2 PM each
business day; full regression each Saturday at 1 PM). The project team should also include a discussion
of the tools (e.g. CI tool, build tool, version control and others) that are required to implement the proposed
approach.
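The scheduled integration cycles described above can be sketched as a small driver script. This is a minimal illustration only; the step commands, their ordering, and the fail-fast behavior are assumptions, standing in for whatever CI and build tools the project actually selects.

```python
# Minimal sketch of a scheduled integration cycle runner.
# The commands below are placeholders, not prescribed DHS tooling.
import subprocess
from datetime import datetime

# Hypothetical integration steps; a real cycle would check out sources,
# invoke the build tool, and run the test suite.
CYCLE_STEPS = [
    ["echo", "checkout latest sources"],
    ["echo", "compile solution"],
    ["echo", "run unit tests"],
]

def run_integration_cycle(steps=CYCLE_STEPS):
    """Run each integration step in order; stop at the first failure."""
    results = []
    for step in steps:
        completed = subprocess.run(step, capture_output=True, text=True)
        results.append((step, completed.returncode))
        if completed.returncode != 0:
            break  # fail fast so the team sees the breakage immediately
    return results

if __name__ == "__main__":
    print(f"Cycle started {datetime.now():%H:%M}")
    for step, code in run_integration_cycle():
        print(step, "->", "ok" if code == 0 else "FAILED")
```

A CI server would invoke such a driver at the agreed times (e.g. 10 AM and 2 PM each business day), with the weekend full-regression cycle substituting a longer step list.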
Where possible, DHS uses freely available, open-source tools for executing a continuous integration
strategy. Whenever a project team identifies software tools that are not licensed free of charge, the
project team must clearly justify why the tool is needed and why a similarly featured no-cost tool
cannot be used, and must request the purchase of the tool through the DHS Office of Systems and Technology
(OST).
Ideally, system code should be compiled at least three times daily (twice during normal business hours
and once overnight). As a general good practice, DHS also encourages more frequent compilations. Doing
so enables the rapid detection of defects and issues with the code base, which can be resolved quickly
because the events that triggered the corresponding error are still fresh in the project team's
memory. During project planning, the project team should define their approach to code compilation,
including any tools that will be used.
During project planning, project teams should define their approach and toolset that will be used to
maintain control of versions of software. Because a single program/file may be modified by multiple people
simultaneously (shared solution responsibility) it is important that the proposed tool be capable of
reconciling multiple changes as the solution progresses. It is required that ALL code be checked in every
day before the end of the day.
During project planning, project teams should define how builds will be created, specifically focusing on
hardware and network resources. As a general good practice, build cycles must complete as rapidly as
possible, in most cases executing in a few minutes. Project teams should also determine the
configuration of the build server and any specific requirements of such a machine that enable rapid build
cycles.
As a general guideline, a build server will, under normal circumstances, not require any unusual hardware
or configurations; in most cases a developer workstation can satisfy the requirements of a build server.
AUTOMATED BUILDS
During project planning, project teams should establish their build cycle, the events that are triggered, the
order of the events, dependencies and exceptions handling.
As a general guideline, the continuous integration process, and more specifically the regular solution
builds, should be uneventful. One effective means to accomplish this goal is to mandate
that all developers conduct a local build with modified code prior to checking their modified solutions into
the common repository.
During project planning, project teams should define how their project approach addresses this requirement
as well as what expectations are set for private developer builds.
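The private-build gate described above can be sketched as a small script that a developer runs (or a version-control hook invokes) before check-in. The step names and commands are illustrative assumptions for whatever toolchain the project selects.

```python
# Sketch of a "private build" gate: developers run this before checking
# modified code into the shared repository. The commands are placeholders.
import subprocess
import sys

PRIVATE_BUILD_STEPS = {
    "compile": ["echo", "compiling modified code"],
    "unit tests": ["echo", "running local unit tests"],
}

def private_build_passes(steps=PRIVATE_BUILD_STEPS):
    """Return True only if every local build step succeeds."""
    for name, cmd in steps.items():
        if subprocess.run(cmd, capture_output=True).returncode != 0:
            print(f"private build failed at: {name}", file=sys.stderr)
            return False
    return True

if __name__ == "__main__":
    # A non-zero exit blocks the check-in when wired into a
    # version-control pre-push hook.
    sys.exit(0 if private_build_passes() else 1)
```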
Build cycles should execute rapidly in order to provide timely feedback. Project teams should describe the
conventions that will be established on the project to balance the speed of the build cycle and associated
feedback against the need to provide adequate coverage of tests during the cycle. Project teams should
continually seek ways to refine the balance between useful inspection and rapid builds should the project
team experience longer than acceptable build times.
As with the continuous integration of software code based products, DHS mandates that database
technologies be continuously integrated with software solutions. Continuous database integration enables
project teams to establish a solid foundational domain model on which to build application layers. Doing
so reduces the risk that defects spanning multiple layers of the system will require heightened effort to
troubleshoot and resolve. Additionally, continuous database integration accelerates testing, as the
application and associated data model are established in a well-defined and known state.
Any development resource should be able to create or modify database components within the system.
The modifications and definition of database components must be accomplished with DDL scripts that are
to be included in the version control repository. Establishing this practice on the project enables the
Continuous Integration process to wipe and establish required database structures from scratch in a known
stable state.
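The wipe-and-rebuild practice described above can be sketched as follows. SQLite stands in here for the project's actual database engine, and the table names and DDL contents are illustrative assumptions; in practice each statement would live in a versioned .sql script applied in a defined order.

```python
# Sketch of rebuilding database structures from versioned DDL scripts.
import sqlite3

# Illustrative DDL; real projects would read these from .sql files
# checked into the version control repository.
DDL_SCRIPTS = [
    "CREATE TABLE client (client_id INTEGER PRIMARY KEY, name TEXT)",
    "CREATE TABLE case_record (case_id INTEGER PRIMARY KEY, "
    "client_id INTEGER REFERENCES client(client_id))",
]

def rebuild_schema(conn, scripts=DDL_SCRIPTS):
    """Wipe and re-establish the schema in a known, stable state."""
    cur = conn.cursor()
    # Drop dependent tables first so foreign-key references stay valid.
    for table in ("case_record", "client"):
        cur.execute(f"DROP TABLE IF EXISTS {table}")
    for ddl in scripts:
        cur.execute(ddl)
    conn.commit()

conn = sqlite3.connect(":memory:")
rebuild_schema(conn)
tables = {row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")}
```

Because the CI process runs this from scratch on every cycle, any undocumented manual schema change is surfaced immediately.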
The Continuous Integration process used by the project team must have the capability to automatically create
and refresh databases. Because database solutions are integral components of the overall solution, it is critical
to establish practices that mitigate the risk of establishing a fragile database platform.
DHS requires that DDL and DML scripts be created and managed along with the solution code. As a
result, it is possible for DBA-type resources to periodically review and tune the database construction
scripts, and this activity can be conducted with little or no impact to the Feature Teams who are
executing work on the solution.
During project planning, project teams should define how they will plan for and enable knowledgeable
individuals with experience tuning databases to tune the system to improve performance and the solution's
capabilities.
The software engineering approach used by DHS is heavily dependent on continuous integration and test
driven development as risk mitigation mechanisms. Project teams must define how they integrate test
driven development concepts and how tests are conceptualized and associated with Sprint backlog items.
The project teams should also determine what tools would help to automate the unit testing process and
should also define the strategy/architecture of the unit testing approach.
Because the regular, multi-daily build-and-test cycle must execute quickly, the project team must establish
sets of tests that vary in scope and complexity that exercise the system differently depending on the
available time. In other words, automated tests run during business hours are relatively small in scope,
while those that run each night are more comprehensive in nature. Project teams should use an approach that
enables unit tests of the system to be performed at various levels. During project planning, project teams
should describe how such tiers of tests will be established, configured, and maintained over time as the System
evolves.
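One way to realize the tiers described above is to assemble different test suites from the same test classes. This is a minimal sketch; the test class names and the fast/nightly split are illustrative assumptions.

```python
# Sketch of tiered test suites: a small, fast suite for intra-day builds
# and a comprehensive suite for the nightly cycle.
import unittest

class FastChecks(unittest.TestCase):
    def test_core_calculation(self):
        self.assertEqual(2 + 2, 4)

class NightlyChecks(unittest.TestCase):
    def test_exhaustive_range(self):
        # Stands in for a longer-running, more thorough verification.
        self.assertTrue(all(n >= 0 for n in range(10_000)))

def build_suite(tier):
    """Assemble the suite for the requested tier."""
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    suite.addTests(loader.loadTestsFromTestCase(FastChecks))
    if tier == "nightly":
        suite.addTests(loader.loadTestsFromTestCase(NightlyChecks))
    return suite

fast_count = build_suite("fast").countTestCases()
nightly_count = build_suite("nightly").countTestCases()
```

The CI configuration then selects the tier by schedule: the fast suite during business-hour builds, the full suite overnight.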
Systems often require thousands of unit tests to be run by the completion of the project. With this in mind, it
is important to establish an architecture for the unit tests in aggregate to prevent them from becoming
fragile over time. For example, hard coding a future date into a unit test makes the test sensitive to the
current date and at some point in the future the test results will change when the current date passes the
hard-coded date. Project teams should give thought to how the unit testing approach will remain robust
and avoid fragile tests over time. Project teams should also define how the unit tests will be combined
in sequence to form testing scenarios.
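The date-fragility problem above has a common remedy: instead of hard-coding a date, the code under test accepts a reference date so tests can pin "today" explicitly. The function and rule here are illustrative assumptions.

```python
# Sketch of avoiding date-fragile tests by injecting the reference date.
from datetime import date

def is_eligible(birth_date, today=None):
    """Illustrative rule: a client is eligible at age 18 or older."""
    today = today or date.today()
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day))
    return age >= 18

# The test pins the reference date, so its result never changes as the
# real calendar advances past any particular day.
pinned_today = date(2020, 6, 1)
eligible = is_eligible(date(2000, 1, 1), today=pinned_today)
not_yet = is_eligible(date(2005, 1, 1), today=pinned_today)
```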
As the system evolves, it becomes increasingly important to establish test data that maintains integrity
across the application (e.g. establishment of client demographics associated with a client ID that is
referenced throughout the system). One way to establish this data is the use of Structured Query
Language (SQL) scripts to populate data directly into the database on the back end. During project
planning, the project team defines how referentially consistent, high-integrity test data will be created and maintained in
the application in a manner that enables quick, frequent, and preferably automated, environmental
refreshes.
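The SQL-script approach to referential test data can be sketched as follows. SQLite and the table, column, and program names are illustrative assumptions; the point is that a versioned seed script establishes the same client IDs consistently across the application on every refresh.

```python
# Sketch of loading versioned seed data so that the same client_id
# resolves consistently throughout the system.
import sqlite3

SEED_SCRIPT = """
CREATE TABLE client (client_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE benefit (benefit_id INTEGER PRIMARY KEY,
                      client_id INTEGER REFERENCES client(client_id),
                      program TEXT);
INSERT INTO client VALUES (1001, 'Test Client A');
INSERT INTO benefit VALUES (1, 1001, 'SNAP');
"""

def refresh_test_data(conn, script=SEED_SCRIPT):
    """Load the versioned seed script into a clean database."""
    conn.executescript(script)

conn = sqlite3.connect(":memory:")
refresh_test_data(conn)
# Integrity check: every benefit row must reference a known client.
orphans = conn.execute(
    "SELECT COUNT(*) FROM benefit b LEFT JOIN client c "
    "ON b.client_id = c.client_id WHERE c.client_id IS NULL").fetchone()[0]
```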
It is inevitable that defects will be injected into the system as part of the engineering cycle. While
unavoidable during initial injection, it is possible to prevent future occurrences of the defect by establishing
tests that verify the defect condition does not reoccur. As a general guideline, DHS requires that all
detected defects be covered by unit tests that detect any future instance of the defect.
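A defect-pinning regression test of the kind required above might look like the following. The defect number, function, and rounding scenario are hypothetical illustrations; the pattern is that the exact input which exposed the defect stays in the suite permanently.

```python
# Sketch of a regression test that pins a fixed defect in place.

def monthly_amount(annual_amount):
    """Fixed version: a hypothetical earlier defect used integer
    division and silently dropped cents for amounts not divisible
    by 12."""
    return round(annual_amount / 12, 2)

def test_hypothetical_defect_rounding_regression():
    # Exact input that exposed the original (hypothetical) defect.
    assert monthly_amount(100) == 8.33
    # General behavior still holds.
    assert monthly_amount(1200) == 100.0

test_hypothetical_defect_rounding_regression()
```

Run on every CI cycle, such a test guarantees the defect condition cannot silently reappear.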
The conversion of data from existing applications into the system is included in the scope of Sprints as
necessary based on selected Sprint scope. With this in mind, during project planning, project teams
should define an approach to converting data from existing systems into the target/new system, what tools
will be used to facilitate the conversion (e.g. Extract-Transform-Load (ETL) tool), how errors and
discrepancies will be resolved, and how the process will be integrated into the continuous integration cycle
and environmental refreshes.
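An extract-transform-load step with discrepancy handling, as described above, can be sketched as follows. The legacy field names, target layout, and transformation rule are illustrative assumptions standing in for the project's chosen ETL tool.

```python
# Sketch of a legacy-data conversion step that routes discrepancies to
# an error list for later resolution rather than halting the load.
from datetime import datetime

# Illustrative legacy extract; a real conversion would read from the
# source system or an extract file.
legacy_rows = [
    {"CLIENT_NO": "1001", "DOB": "19800215"},
    {"CLIENT_NO": "1002", "DOB": "BAD-DATE"},  # deliberate discrepancy
]

def transform(row):
    """Convert a legacy record to the target layout or raise ValueError."""
    dob = datetime.strptime(row["DOB"], "%Y%m%d").date()
    return {"client_id": int(row["CLIENT_NO"]),
            "birth_date": dob.isoformat()}

converted, errors = [], []
for row in legacy_rows:
    try:
        converted.append(transform(row))
    except ValueError as exc:
        # Discrepancies are recorded with enough context to resolve later.
        errors.append((row["CLIENT_NO"], str(exc)))
```

The error list feeds the discrepancy-resolution process, while the converted rows load into the target system as part of the CI-driven environmental refresh.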
DHS makes production data available to the team to test data conversion routines and processes on an
as-needed basis. During project planning, project teams should define, in detail, how they will identify the need
for production data, how they will work with other DHS staff to obtain the data, how the data will be stored
to protect confidentiality, how the data will be used, and how often the data is expected to be refreshed
(e.g. new data pulled at the end of each Sprint and made available before the beginning of the next). As a
general practice, the project team should identify the specific individuals who are authorized to access
production systems and extract the sensitive production data that is made available to teams constructing
and testing data conversion programs.
DHS is interested in innovative approaches to the integration of data conversion activities into the software
engineering approach. Project teams are encouraged to identify practices and recommendations that are
believed to be relevant and potentially beneficial for DHS to use. Additionally, due to the sensitive nature of
production data, project teams must also identify ways that mandatory HIPAA requirements can be met and
how the project team will help keep the data secure from unauthorized access.
While structuring the team, project planners must not identify a single Conversion Feature Team for the
project. In the spirit of Agile software development practices each Feature Team should have skills and
competencies available to address all required conversion activities within a Sprint.
Project teams regularly and aggressively review and inspect code work products being created on the
project to maintain a high degree of technical quality. DHS appreciates highly efficient Agile engineering
teams and understands that the results of this high performance equate to a large volume of solution
code. It is therefore necessary to establish automated, continuous practices to ensure that
established standards are being met, avoiding unnecessary technical complexity and reduced efficiency.
During project planning, the project team must identify the coding conventions that will be used on the
project as well as the toolset that will be used to verify the code against established standards. If the
project team needs to modify the system's existing coding standards, the team must explain the rationale
for the change and how the new coding standards will be enforced in tandem with pre-existing system
standards. The team should also explain how deviations from the established standard will be resolved by
project teams and how the standards are modified and maintained within the selected toolset.
The process of automated code verification against the established standard must be fully integrated into
the continuous integration cycle.
During project planning, the project team defines how automated unit testing will be used to ensure
adequate test coverage of the software solution. The project team should also describe the tools that will be used
to monitor testing coverage. It is important for the project team to provide a thorough description of the
overall philosophy behind managing testing code coverage, what percentage of code coverage is targeted,
and how deviations from that target will be addressed.
During project planning, the project team should define how the code will be assessed and reviewed to
identify duplicate or redundant code or code components. DHS understands that large-scale code sets
inevitably contain duplication of code structures and that this duplication must be managed to preserve the
overall solution quality. The project team should define what tools will be used to review code for
duplication and how the review will be integrated into the regular engineering activities, including the CI
cycle.
Documentation Compilation
DHS believes that a self-documenting solution is the best mechanism through which to create solution
documentation.
During project planning, the project team should define how solution documentation will be compiled,
stored, and published within the continuous integration cycle. Specifically, the team should describe how
often the documentation will be compiled and what specifically will be compiled. The team should define
the toolset that will be used to create and publish the documentation.
Documentation outputs include a data dictionary that fully describes the data models implemented within the system.
The Test Driven Development process executes shortly before the set of activities dedicated to specifying
and coding the software. In other words, the specifications are developed at the same time as the code,
which is written to satisfy the established automated unit tests.
During project planning, the project teams define the proposed approach for automatically generating high
quality API code specifications. The project teams should also discuss any toolsets that will be used to
generate the specifications and how these specifications will be integrated into the systems integrated
library.
During project planning, the project team should define how a data dictionary will be automatically created by the
system. As a general practice, DHS prefers a dictionary that makes use of HTML technologies.
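An automatically generated, HTML-based data dictionary could be sketched as below. SQLite's catalog stands in for the project's actual database engine, and the table used for demonstration is an assumption.

```python
# Sketch of auto-generating an HTML data dictionary from the live schema.
import sqlite3
from html import escape

def data_dictionary_html(conn):
    """Emit a simple HTML table describing every table and column."""
    rows = []
    for (table,) in conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table'"):
        for col in conn.execute(f"PRAGMA table_info({table})"):
            # col layout: (cid, name, type, notnull, default, pk)
            rows.append(f"<tr><td>{escape(table)}</td>"
                        f"<td>{escape(col[1])}</td>"
                        f"<td>{escape(col[2])}</td></tr>")
    return ("<table><tr><th>Table</th><th>Column</th><th>Type</th></tr>"
            + "".join(rows) + "</table>")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE client (client_id INTEGER, name TEXT)")
html = data_dictionary_html(conn)
```

Run as a CI step, this keeps the published dictionary synchronized with the schema that the DDL scripts actually establish.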
At the completion of each CI cycle, a dashboard summarizing the results of the various activities is
created. The dashboard should be visually appealing and easily navigable so that executive DHS
management can readily understand the technical status of the solution. During project planning, the project team should
identify dashboards that will be used. In addition, the project team should indicate desired and
recommended approaches for communicating the status of the technical solution to interested
stakeholders.
DHS continually seeks to establish development and test related environments that are regularly
automatically reconstructed with fresh versions of the solution, configuration data, and test data. The intent
of the regular refresh is to avoid unknown or undocumented, but required, configurations that may lead to
issues and slow development progress. For this reason, DHS does not use a backup or
disaster recovery mechanism to satisfy this requirement, as that approach still leads to unknown and
misunderstood configurations.
Ideally, an environment refresh would involve the automated establishment of a fully functional version of
the system from a clean, base system image. During project planning, project teams should describe how
to accomplish a fully automatic refresh of all proposed environments from the base configuration to a
specified version/baseline of the system. The automated refresh approach makes heavy use of the CI
solution described in section 2.4.4 Continuous Integration. Project Teams should describe in particular
detail:
How the existing environment will be temporarily backed up and when the temporary backup is deleted
How the base system installation is reestablished
Note that DHS uses scripts that can be executed, both manually and programmatically, that accepts
parameters (target environment, DHS system version, etc.) to trigger the refresh using the proposed
Continuous Integration tools.
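A parameterized refresh trigger of the kind described above might look like the following. The parameter names, environment list, and refresh steps are illustrative assumptions; a real script would invoke the CI toolset rather than return a plan.

```python
# Sketch of a refresh trigger that can run manually or from the CI tool,
# accepting the target environment and system version as parameters.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="Environment refresh")
    parser.add_argument("--environment", required=True,
                        choices=["dev", "system-test", "uat"])
    parser.add_argument("--version", required=True,
                        help="system version/baseline to establish")
    return parser

def plan_refresh(args):
    """Return the ordered refresh steps for the requested environment."""
    return [
        f"back up {args.environment} (temporary)",
        f"restore base image for {args.environment}",
        f"deploy version {args.version}",
        "load configuration and test data",
    ]

args = build_parser().parse_args(["--environment", "uat",
                                  "--version", "2.3.1"])
steps = plan_refresh(args)
```

Because the same script serves manual and scheduled invocations, the refresh behaves identically whether a developer or the CI server triggers it.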
During project planning, the project team should define in detail their approach to automatically refreshing
each proposed environment to a base-state with a known version of software, data model, configuration
data, and test data.
The continuous integration toolset is used to accomplish the automated environment refreshes. For each
environment, the project team should specify the frequency of the refresh (e.g. development refreshed
nightly, system test weekly and UAT before each validation cycle).
During project planning, the project team should explain how a temporary backup of a given environment is
taken before beginning the refresh. Because the refresh is fully automated, the project team should explain
how success and failure of the backup impacts the refresh cycle. The project team should explain how the
backup will be stored, accessed, and if necessary, restored. The project team should also provide a brief
discussion of how long the backup will be maintained before being deleted.
During project planning, the project team should explain how the environment will be reset to a base
version of the system once the environment is backed up. If the system involves the restoring of server
images, the project team should explain how the images will be established, refreshed, stored and
maintained.
Code Migration
During project planning, the project team should briefly explain how the code for the system will be
migrated to the specified environment after the establishment of the base system.