Test Deliverables Inventory

The following baseline documents may be used as inputs to the testing process and to derive testing deliverables:

Feature Inventory

A Feature Inventory is a project deliverable derived from the project’s/system’s requirements documentation (e.g. User Requirements Definitions (URDs) or possibly User Guides in the case of third-party packages). A Feature Inventory is produced by a Business Analyst and consists of a list of system Features (sometimes referred to as software or system functions). Each entry in the Feature Inventory consists of a unique identifier, the description of the Feature itself, a cross-reference to the section(s) in the source document (typically a URD) which specifies the Feature, a new/changed/unchanged indicator and a priority.
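The structure of a Feature Inventory entry described above could be modelled as follows. This is a minimal illustrative sketch: the field names, identifier format and priority scale are assumptions, not taken from the project's actual templates.

```python
from dataclasses import dataclass

# Illustrative model of one Feature Inventory entry; field names and
# example values are assumptions, not the project's template headings.
@dataclass
class Feature:
    feature_id: str          # unique identifier, e.g. "FOA-F-001"
    description: str         # the Feature itself
    source_refs: list        # section(s) of the source document (typically a URD)
    status: str              # "new", "changed" or "unchanged"
    priority: int            # e.g. 1 = highest

inventory = [
    Feature("FOA-F-001", "Display client portfolio summary", ["URD 4.2"], "new", 1),
    Feature("FOA-F-002", "Print contract note", ["URD 4.5", "URD 4.6"], "unchanged", 3),
]
```

In practice the inventory would be held in a spreadsheet or table rather than code; the point is simply that every entry carries the same five attributes.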

Front Office Automation (FOA), Touch Tone Trader (TTT) Internet, Tarot and the AS/400 will each have its own Feature Inventory. The inputs to a Feature Inventory are shown in Appendix B.


Behaviour Inventories

A Behaviour Inventory is a project deliverable derived from the project’s Feature Inventory. A Behaviour Inventory is produced by a Business Analyst and consists of a table of system behaviours. Each Behaviour (each entry in a Behaviour Inventory) consists of a feature, a system rule and a response, with a unique identifier. The full specification of a function would normally comprise one or more behaviours.

Upon completion, each Behaviour is assigned a priority. The Behaviour Inventory is used to create the System Test Condition List, as each Behaviour can be developed into at least one System Test Condition. The Behaviour Inventory can also be used to check the coverage of the Business Scenario Inventory used in UAT.

The Behaviour Inventory can be used to provide a reasonably accurate estimate of the effort required for test preparation and test execution activities in both System Testing and UAT. The inputs to a Behaviour Inventory are shown in Appendix C.
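The two uses described above — deriving at least one System Test Condition per Behaviour, and sizing the test effort from the inventory — can be sketched as follows. All identifiers, wording and the hours-per-Behaviour rate are illustrative assumptions.

```python
# Illustrative sketch: each Behaviour (feature, rule, response) yields at
# least one System Test Condition, and the inventory size drives a simple
# effort estimate. All names, values and rates here are assumptions.
behaviours = [
    {"id": "B-001", "feature": "FOA-F-001", "rule": "the client has no holdings",
     "response": "display a 'no holdings' message", "priority": 1},
    {"id": "B-002", "feature": "FOA-F-001", "rule": "the client has holdings",
     "response": "list the holdings with valuations", "priority": 1},
]

# Derive one candidate System Test Condition per Behaviour.
conditions = [
    f"Verify that when {b['rule']}, the system will {b['response']}"
    for b in behaviours
]

# Crude effort estimate: an assumed number of hours per Behaviour covering
# preparation and execution (System Test and UAT combined).
HOURS_PER_BEHAVIOUR = 1.5
estimated_hours = len(behaviours) * HOURS_PER_BEHAVIOUR
```

A real estimate would weight Behaviours by priority and complexity rather than applying a flat rate, but the inventory supplies the raw count either way.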

Business Processes

It is assumed that, where business processes have not already been documented for an existing system, users familiar with the system (and assigned to the project) could quickly specify the salient points of the system's Business Processes. It is also assumed that a Business Analyst will document the salient points of any new Business Processes required by the project. The set of existing and new Business Processes (or their summaries) can be used to derive Business Process Flowcharts, from which UA Testers can derive Business Scenarios.

Business Process Flowcharts

Users assigned to the project can produce flowcharts for existing Business Processes to show the steps to be taken and the key decisions to be made in relation to the particular Business Process. Business Analysts can do this for new Business Processes. Once a Business Process Flowchart has been completed, paths can be traced through the whole Business Process or through different sections of it. These paths through the key steps and decision points can be documented by UA Testers as Business Scenarios and included in the Business Scenario Inventory.
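Tracing paths through the decision points can be pictured as enumerating routes through a small directed graph. The business process below ("take order") and its steps are invented purely for illustration; a real flowchart would come from the users or Business Analysts as described above.

```python
# Illustrative only: a Business Process Flowchart reduced to a directed
# graph, with each start-to-end path becoming a candidate Business Scenario.
# The process and its step names are invented for this example.
flowchart = {
    "receive order": ["credit check"],
    "credit check": ["accept order", "refer to supervisor"],
    "refer to supervisor": ["accept order", "reject order"],
    "accept order": [],
    "reject order": [],
}

def trace_paths(graph, node, path=()):
    """Enumerate every path from `node` to a terminal step."""
    path = path + (node,)
    if not graph[node]:            # terminal step: one complete scenario
        return [path]
    paths = []
    for nxt in graph[node]:
        paths.extend(trace_paths(graph, nxt, path))
    return paths

scenarios = trace_paths(flowchart, "receive order")
# Yields three candidate scenarios: accept directly, accept via the
# supervisor, and reject via the supervisor.
```

Each enumerated path would then be written up in business language, reviewed for accuracy and omissions, and prioritised before Test Procedures are developed.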

Project Specific Test Deliverables

There is a hierarchy to the set of test deliverables to be produced for each project. The first test deliverable to be produced is the Master Test Plan, which defines the strategy for the project as a whole, covering each stage of testing. Most of the other deliverables relate to System Testing and UAT. Lists that provide an overview of the tests to be developed (the System Test Condition Lists for System Testing and the Business Scenario Inventory for UA Testing) are produced from requirements documents (e.g. URDs) and from the inputs already described in section 3.2 of this document.

There is no requirement for development testing activities to be documented, but checklists will be developed and maintained to provide developers with an aide-mémoire of the types of tests to consider including as part of their development testing.


Test Specifications

Produced for both System Testing and UAT. These specifications contain or refer to the final version of the relevant list of tests (the System Test Condition List or the Business Scenario Inventory), as well as specifying any refinements to the test approach and detailed information about test data.

Test Procedures (test scripts)

Developed to implement one or more System Test Conditions/Business Scenarios.

Test Run Log

During test execution, a Test Run Log will be maintained (recording the outcome of tests) and observations will be raised whenever tests fail. Finally, at the end of the stage’s test execution, a Test Summary Report will be produced to provide a summary record of the testing activities. Tests will also be selected for inclusion in LUT and in the Regression Test Library for future use.

Master Test Plan

A project specific deliverable which describes the overall framework for the project’s testing activities. It refines the Stages of Testing document to make it applicable to the project and identifies the project’s stages of testing, with scope, objectives, deliverables, responsibilities, entry criteria, exit criteria etc. for each stage of testing. The Project Manager (or UAT Workstream Manager) will co-ordinate a number of project staff to create this document (see the diagram in Appendix A).

Development Test Checklists

Lists of tests for members of the development team to consider when testing their own software (e.g. ‘Is there a need to test that the program can handle empty input files without crashing?’). Developers themselves will be responsible for creating and maintaining these lists, particularly for adding new entries as a result of fixing software in response to observations and following Post Implementation Testing Reviews. It is intended that a checklist will be used for a number of projects and that the lists will grow in length over time, but not so much as to become impractical to use.

System Test Condition Lists

Lists of testable conditions, which provide an overview of potential tests so that the most important ones can be selected. Prioritised test conditions will be developed into test procedures and executed. Whenever Feature Inventories (or function lists) and Behaviour Inventories are produced (normally by Business Analysts), these will be used as inputs to create test conditions; otherwise, condition lists can be produced directly from requirements specifications (e.g. User Requirements Definitions (URDs)). The System Test Condition List forms part of the System Test Specification (see the diagram in Appendix D).
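The selection step — developing only the most important conditions into Test Procedures — can be sketched as a simple priority filter. The identifiers, priority scale and cut-off below are assumptions for illustration.

```python
# Illustrative sketch of selecting test conditions for development into
# Test Procedures. Identifiers, the priority scale and the cut-off are
# assumptions, not the project's actual values.
conditions = [
    {"id": "STC-001", "text": "Verify an empty portfolio is handled", "priority": 1},
    {"id": "STC-002", "text": "Verify contract note layout", "priority": 3},
    {"id": "STC-003", "text": "Verify valuation totals", "priority": 1},
]

PRIORITY_CUTOFF = 2   # develop only conditions at priority 1 or 2
selected = [c for c in conditions if c["priority"] <= PRIORITY_CUTOFF]
```

Lower-priority conditions remain in the list, so coverage decisions stay visible and can be revisited if time allows.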

System Test Specification

A document that contains or refers to the System Test Condition List. It also specifies the refinements to the test approach (if any). Once it has been approved, it can be used as the basis for developing detailed System Test Procedures (sometimes called test scripts) i.e. it forms the basis for System Test Execution. The System Test Condition List section of the specification may also be used as one of the inputs into the development of UA Test Procedures.

System Test Procedures (test scripts)

Often grouped together into sets, these documents provide instructions for the person responsible for running a test. A Test Procedure addresses how the tester can ensure that the test can start, how to navigate around the system and how to input data, as well as specifying the data values to be input and the expected results (see the diagrams in Appendices E and H).

System Test Run Log

A document which is updated after System Test Procedures are executed. It consists of a list of the identifiers of Test Procedures and groups of columns to be completed each time a Test Procedure is executed. These include the outcome of the test (pass/fail), the initials of the person executing the test, the date of the test and space for comments and the identifier of any observations raised as a result of the test. This provides a permanent record of the outcome of the test (see the diagram in Appendix H).
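The columns described above amount to one record per execution of a Test Procedure. A minimal sketch, with field names and identifiers that are assumptions rather than the project's actual log layout:

```python
import datetime

# Illustrative log entry mirroring the columns described above; one row is
# appended each time a Test Procedure is executed. Field names and the
# identifier formats are assumptions.
def record_run(log, procedure_id, outcome, initials, comments="", observation_id=None):
    log.append({
        "procedure_id": procedure_id,
        "outcome": outcome,                  # "pass" or "fail"
        "initials": initials,
        "date": datetime.date.today().isoformat(),
        "comments": comments,
        "observation_id": observation_id,    # set when an observation is raised
    })

run_log = []
record_run(run_log, "STP-014", "pass", "JB")
record_run(run_log, "STP-015", "fail", "JB", "valuation totals differ", "OBS-031")
```

Because a row is added per execution, re-runs of the same procedure accumulate in the log, preserving the permanent record the document calls for.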

System Test Observation (incidents)

Produced during System Test Execution whenever a tester identifies a discrepancy between the outcome of a test and the expected results documented in the Test Procedure or whenever the tester observes something unusual. Not all observations are faults.

System Test Report

A document produced at the end of System Test Execution. It summarises the testing performed (in terms of scope, effort and duration), details any departures from the Master Test Plan (e.g. by referencing any tests that were omitted) and summarises the outcome of the tests, presenting statistics about the number of Test Observations raised during each test run and briefly describing any significant observations which are still outstanding.
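The per-run observation statistics mentioned above reduce to a simple count of observations grouped by test run. The observation identifiers and run numbers below are invented for illustration.

```python
from collections import Counter

# Illustrative: summarising Test Observations per test run for the report.
# Identifiers and run numbers are invented.
observations = [
    {"id": "OBS-031", "run": 1},
    {"id": "OBS-032", "run": 1},
    {"id": "OBS-033", "run": 2},
]
observations_per_run = Counter(o["run"] for o in observations)
```

A falling count across successive runs is the usual signal that the software is stabilising towards exit criteria.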

Business Scenario Inventory

The main input into a project’s set of UA Test Procedures and as such forms the basis of UAT. It documents the aspects of the business processes that need to be tested, so that business oriented User Acceptance (UA) Test Procedures can be developed. It places particular emphasis on tracing paths through the decision points of business processes, enabling planned testing to be easily reviewed for accuracy and omissions and then prioritised in accordance with business users’ wishes, prior to developing detailed Test Procedures for UAT. The Business Scenario Inventory forms part of the User Acceptance Test Specification.

User Acceptance (UA) Test Specification

A document which contains or refers to the Business Scenario Inventory. It also specifies the refinements to the test approach (if any). Once it has been approved, it can be used as the basis for developing detailed UA Test Procedures (sometimes called test scripts) i.e. it forms the basis for UA Test Execution. It also specifies the sources of data and responsibilities for data provision and data administration during the UAT.

UA Test Procedures (test scripts)

Often grouped together into sets, these documents provide instructions for the person responsible for running a test. A Test Procedure addresses how the tester can ensure that the test can start, how to navigate around the system and how to input data, as well as specifying the data values to be input and the expected results.

UA Test Run Log

A document which is updated after UA Test Procedures are executed. It consists of a list of the identifiers of Test Procedures and groups of columns to be completed each time a Test Procedure is executed. These include the outcome of the test (pass/fail), the initials of the person executing the test, the date of the test and space for comments and the identifier of any observations raised as a result of the test. This provides a permanent record of the outcome of the test.

UA Test Observation (incidents)

Produced during UA Test Execution whenever a tester identifies a discrepancy between the outcome of a test and the expected results documented in the Test Procedure or whenever the tester observes something unusual. Not all observations are genuine faults.

UA Test Report

A document produced at the end of UA Test Execution. It summarises the testing performed (in terms of scope, effort and duration), details any departures from the Master Test Plan (e.g. by referencing any tests that were omitted) and summarises the outcome of the tests, presenting statistics about the number of Test Observations raised during each test run and briefly describing any significant observations which are still outstanding.

UA Test Sign-Off Form

A form signed by the Project Sponsor/Director and Managers of Business Areas impacted by the project, as well as the UA Test Manager/Test Analyst. Sign-off signifies that testing has been successfully completed in accordance with the Master Test Plan and UA Test Specification. Sign-off will typically take place at the Go Live Panel meeting in which the UA Test Report is discussed.

Updated Regression Test Library (RTL)

A selection of System Test Procedures and UA Test Procedures. After System Testing and UAT have completed, at a time specified in the UA Test Report (and quite possibly after LUT has completed) a selection of the new Test Procedures (for both System Test and UAT) will be incorporated into the Regression Test Library (RTL) to be executed as part of System Test Execution for the next relevant project. It may also be appropriate to remove some existing Test Procedures from the RTL to prevent it from growing beyond a useful size.
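The refresh described above — incorporating a selection of new procedures while retiring existing ones so the library stays at a workable size — can be sketched as follows. The identifiers and the size cap are assumptions for illustration.

```python
# Illustrative sketch of refreshing the Regression Test Library (RTL):
# add a selection of new Test Procedures and retire stale ones so the
# library does not grow beyond a useful size. Identifiers and the cap
# are assumptions.
MAX_RTL_SIZE = 4

rtl = ["STP-001", "STP-007", "UAT-003", "UAT-010"]
newly_selected = ["STP-014", "UAT-021"]            # chosen after this project
retired = ["STP-001", "UAT-003"]                   # no longer worth re-running

rtl = [p for p in rtl if p not in retired] + newly_selected
assert len(rtl) <= MAX_RTL_SIZE                     # keep the library practical
```

In practice the selection and retirement decisions are judgement calls made after System Testing, UAT and quite possibly LUT have completed; the cap simply forces those decisions to be made.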

Live User (LU) Test Procedures

Consists of a subset of System Test Procedures and UA Test Procedures to be run over the implementation weekend against the target production machine. These LU Test Procedures provide confirmation that the software is configured correctly.

LU Test Sign-Off Form

This is a form acknowledging approval from the Project Sponsor and signed by the UA Testers responsible for executing the LU Test Procedures. Sign-off signifies that the results of LUT on the target production environment are consistent with those obtained in UAT.
