MindMap Gallery ISEB
The Information Systems Examination Board (ISEB) is a professional certification body that provides qualifications and certifications in the field of information systems and software development. This mind map aims to provide an overview of ISEB and its role in the industry, as well as the various certifications it offers. ISEB was established in the United Kingdom in 1967 and has since become a globally recognized organization for professional certifications in the field of information systems. It offers a wide range of certifications, including software testing, business analysis, project management, service management, and software development. Through this mind map, we will explore the different certification paths offered by ISEB, highlighting the key areas of knowledge and skills required for each certification.
ISEB
The Fundamentals of Testing
General testing principles
Testing shows the presence of bugs
Exhaustive testing is impossible
Early testing - the earlier a problem is found, the less it costs to fix
Defect clustering - Pareto principle: 80% of the problems are found in 20% of the modules
The pesticide paradox - running the same set of tests repeatedly will not continue to find new defects
Testing is context dependent
Absence of errors fallacy - software with no errors is not necessarily ready to be shipped
Testing and risk - what and how much we test must be related in some way to the risk
Testing and quality
Testing cannot directly remove defects, nor can it directly enhance quality
Resources triangle: Money, Time, Quality
Decide when 'enough is enough'
Prioritization is the most important aspect of achieving an acceptable result from a finite and limited amount of testing
There is a level of quality that is acceptable for a given system
Every system is subject to risk of one kind or another
Completion criteria
How much of the software is to be tested
What levels of defects can be tolerated in a delivered product
A good test is one that finds a defect if there is one present
Testing is a systematic exploration of a component or system with the main aim of finding and reporting defects
Debugging is the process that developers go through to identify the cause of bugs or defects in code and undertake corrections
Static testing tests software without executing it.
Dynamic testing exercises the program under test with test data (test execution in this context)
An error (mistake) leads to a defect, which can cause an observed failure
Incorrect software can harm: people, companies, the environment
Software failures can lead to: loss of money, loss of time, loss of business reputation, injury, death
Characteristics of good testing
Early test design
Each work-product is tested
Testers are involved in reviewing requirements as early as possible
The psychology of testing
Communication
Work together rather than be confrontational
Results should be presented in a non-personal way
Attempt to understand how others feel
Confirm that you have both understood and been understood
People who could test software
Those who wrote the code
Members of the same development team
Members of a different group
Members of a different company
Fundamental Test Process
Test planning and control
what is going to be tested
how it will be achieved
define test completion criteria
Scheduling test analysis and design
Selecting test metrics for monitoring and control
compare the progress against the plan
monitoring to measure what has happened
adjust future activities in the light of experience
Test analysis and design
reviewing reqs, architecture, design
analysing test items, specification, structure
designing the tests and assigning priorities to them
determining whether the reqs and the system are testable
detailing what the test environment should look like and whether any infrastructure and tools are required
highlighting the test data required for TCs
Test implementation and execution
developing and prioritizing TCs
collecting TCs into test suites
checking the test environment
running TCs
keeping a log of testing activities
comparing actual with expected results
reporting discrepancies
reporting test activities
Evaluating exit criteria and reporting
checking if exit criteria have been met
checking if more tests are needed
writing up the result
Test closure activities
documentation in order
reports written
defects closed
archiving the test environment
passing over testware
writing lessons learned
Life Cycles
Software Development Models
V-model
Characteristics
Acceptance testing can take place before system testing starts
Each test level has objectives specific to that level
There may be fewer than 4 test levels in a V-model
Requirement specification --> Acceptance testing
Purpose is to demonstrate system conformance to the customer requirements
Types:
User acceptance testing
Operational acceptance testing
Back-up facilities
Procedures for disaster recovery
Training for end-users
Maintenance procedures
Security procedures
Contract and regulation acceptance testing
Contract acceptance testing
Regulation acceptance testing
Alpha and beta (field) testing
Alpha testing takes place at the developer's site
Beta testing takes place at the customer's site
Functional specification --> System testing
Testing functionality from an end-to-end perspective
Focusing on the behaviour of the whole system / product
The behaviour is documented in the functional specification
Functional requirements provide detail on what the application being developed will do
Security
Interoperability
Non-functional requirements detail how the application will perform in use
Installability
Maintainability
Performance
Load handling
Stress handling
Portability
Recovery
Reliability
Usability
Technical specification --> Integration testing
Integration strategies
Big-bang integration
all units are linked at once
difficult to isolate any errors found
problems may be discovered late
Top-down integration
Built in stages
Starting with those at the top
Those lower down may not have been built or integrated yet
A stub is a skeletal, passive implementation of a component that is called by other components
Bottom-up integration
The components that may not be in place are those which actively call other components
The special components written to stand in for them and call the components under test are known as drivers (see the sketch below)
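A minimal Python sketch of both ideas, using hypothetical ReportGenerator and TaxCalculator components (not part of the ISEB material): a passive stub stands in for a missing called component during top-down integration, and an active driver stands in for a missing calling component during bottom-up integration.

```python
# --- Top-down integration: the real TaxCalculator is not ready yet, so a
# passive stub stands in for it and returns a canned answer.
class TaxCalculatorStub:
    def tax_for(self, amount):
        return 0.0  # canned response, just enough for the caller to run

class ReportGenerator:          # top-level component under test
    def __init__(self, calculator):
        self.calculator = calculator

    def line_total(self, amount):
        return amount + self.calculator.tax_for(amount)

print(ReportGenerator(TaxCalculatorStub()).line_total(100))

# --- Bottom-up integration: the real TaxCalculator exists but its callers do
# not, so an active driver is written to call it and check the result.
class TaxCalculator:
    def tax_for(self, amount):
        return amount * 0.2

def driver():
    assert TaxCalculator().tax_for(100) == 20.0  # driver exercises the component

driver()
```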
Levels of integration testing
Component integration testing
System integration testing - this level occurs after System testing
Program specification --> Unit testing
Units are also called programs, modules or components
Ensures that the code written for the unit meets its specification
Performed by developer
Defects found are often not recorded
Stubs may be used
May cover resource behaviour (e.g. memory leaks)
Coding
Iterative model
Stages
Entry
Requirements
Design
Code
Test
Exit
Forms of iterative development
prototyping
rapid application development (RAD)
agile software development
Rational Unified Process (RUP)
Test Types
Functional testing
Security testing
Interoperability testing - this evaluates the capability of the system to interact with other specified components
Specification-based testing
May be performed at all test levels
Non-functional testing
behavioural aspects
usability
performance under load and stress
Structural testing
Used to measure how much testing has been carried out
Focus on the structural aspects of the system
Code itself
architectural definition of the system
Code coverage
Can be carried out at any test level
Testing related to changes
Retesting (confirmation testing) - confirms that the defect has been successfully removed
Regression testing - ensures that no additional defects have been introduced. Should also be carried out if the environment has changed.
Maintenance Testing
Testing which takes place on a system which is in operation in the live environment
Changes may be due to:
Additional features being required
The system being migrated
The system being retired
New faults being found
Impact analysis can be difficult
the specification may be out of date
original development team may have moved on to other projects
Static Testing
Types of static testing
Review - used to find and remove errors and ambiguities in documents before they are used in the development process, thus reducing one source of defects in the code. Reviews are normally completed manually.
Static analysis - enables code to be analysed for structural defects or systematic programming weaknesses that may lead to defects. Static analysis is normally completed automatically using tools.
Development testing is an emerging category covering a set of processes and tools, such as static analysis, designed to help developers, management and the business find and fix quality and security problems early in the development cycle, as the code is being written, without impacting time to market, cost or customer satisfaction.
The MAIN objective during development testing is to cause as many failures as possible so that defects in the software are identified and can be fixed
Reviews and the test process
Benefits of finding defects early in the life cycle
Productivity can be improved and timescale reduced
Testing costs and time can be reduced
On-going support costs will be lower
Improved communication
Types of defects found:
Deviations from standards
Requirements defects
Design defects
Insufficient maintainability (poor maintainability of code)
Incorrect interface specifications
Review process
Factors that influence the level of formality:
The maturity of the development process: the more mature the process is, the more formal reviews tend to be.
Legal or regulatory requirements.
The need for an audit trail.
Review objectives:
Finding defects
Gaining understanding
Generating discussion
Decision making by consensus
Basic review process
The document under review is studied by the reviewers.
Reviewers identify issues or problems and inform the author either verbally or in a documented form.
The author decides on any action to take in response to the comments and updates the document accordingly.
Phases of a formal review
Planning
Selecting the personnel
Allocating roles
Defining the entry and exit criteria
Selecting the parts of documents to be reviewed
Kick-off
Distributing documents
Explaining the objectives, process and documents to the participants
Checking entry criteria (for more formal reviews such as Inspections)
Individual preparation
This is a key task and may actually be time-boxed
Reading the source documents
Noting potential defects, questions and comments
Review meeting
The approach taken will have been decided at the kick-off stage. Factors:
Time available
Requirements of the author
Type of review
May include discussion regarding any defects found
Formal reviews will have documented results or minutes
Participants may simply note defects for the author to correct
Participants might also make recommendations for handling or correcting the defects
Rework - correcting the defects by the author
Follow-up
The review leader will check that the agreed defects have been addressed
will gather metrics
how much time was spent on the review
how many defects were found
will check the exit criteria (for more formal review types such as Inspections)
Roles and responsibilities
Manager
decides on what is to be reviewed
ensures there is sufficient time allocated in the project plan
determines if the review objectives have been met
Moderator (review leader)
leads the review of the document or set of documents
planning the review
running the meeting
follow-ups after the meeting
is often the person upon whom the success of the review rests
makes the final decision as to whether to release an updated document
Author
is the writer with chief responsibility for the development of the documents to be reviewed
takes responsibility for fixing any agreed defects
Reviewers
experienced practitioner
new or inexperienced team member
someone from a different team
a dissenter who is hard to please
Scribe (recorder)
documents all of the issues and defects, problems and open points that were identified during the meeting
Tester (not associated with reviews)
will be required to analyse a document to enable the development of tests
in analysing the document they will also review it
Types of review
Informal review
no formal process
may be documented but this is not required
there may be variations in the usefulness of the review depending on the reviewer
the main purpose is to find defects; it is an inexpensive way to achieve some limited benefit
the review may be implemented by pair programming
Walkthrough
the meeting is led by the author
review sessions are open ended and may vary from informal to very formal
optional components
preparation by reviewers before meeting
production of a review report or a list of findings
appointment of a scribe
the main purposes
to enable learning about the content of the document under review
to help team members gain an understanding of the content of the document
to find defects
explore scenarios, or conduct dry runs of code or process
Technical review
is documented and uses a well-defined defect-detection process that includes peers and technical experts
performed as a peer review led by a trained moderator who is not the author
reviewers prepare for the review meeting
using checklists
prepare a review report with a list of findings
the main purposes
discussion
decision making
evaluation of alternatives
finding defects
solving technical problems
checking conformance to specifications and standards
Inspection (most formal)
is led by a trained moderator and usually involves peer examination of a document; individual inspectors work within defined roles
formal, based on rules and checklists and uses entry and exit criteria
pre-meeting preparation is essential
an inspection report, with a list of findings, is produced
after the meeting a formal follow-up process is used
the main purpose is to find defects and process improvement may be a secondary purpose
Success factors for reviews
clearly predefined and agreed objective
any defects found are welcomed
review techniques (both formal and informal)
checklists or roles should be used
management support is essential
accent on learning and process improvement
quantitative approaches to success measurement
how many defects found
time taken to review/inspect
percentage of project budget used/saved
Static analysis by tools
the value of static analysis:
it adds the greatest value when used during component and integration testing
early detection of defects
early warning about suspicious aspects of the code or design
identification of development standard breaches
detecting dependencies and inconsistencies in software models
improved maintainability of code and design
prevention of defects
typical defects (a few are illustrated in the sketch below)
referencing a variable with an undefined value
inconsistent interface between modules and components
variables that are never used
unreachable (dead) code
programming standards violations
security vulnerabilities
syntax violations of code and software models
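A minimal hypothetical Python fragment (not from the ISEB material) containing several of the defect types listed above, which a static analysis tool or linter would flag without ever executing the code:

```python
def apply_discount(price, rate):
    discount = None           # variable later referenced while effectively undefined (None)
    unused_limit = 100        # variable that is never used
    if rate > 1:
        return price
        print("rate capped")  # unreachable (dead) code after the return
    return price - price * discount  # inconsistent use: would fail at run time
```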
Test Design Techniques
Test design - Specification of the test cases required to test a feature
Test case (TC) design techniques
specification-based (black-box) - deriving test cases directly from a specification or a model of a system or proposed system
tester's responsibility - what a system should do (a specification)
designer's responsibility - how it should work (a design)
Five techniques
Equivalence partitioning (see the combined sketch below, after boundary value analysis)
Input partitions
valid equivalence partitions
non-valid equivalence partitions
Output partitions
Other partitions
Boundary value analysis
valid boundary values - inside of a valid partition
non-valid boundary values - outside of a valid partition
the boundary value itself
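A minimal Python sketch, assuming a hypothetical rule that a field accepts whole numbers from 1 to 100 inclusive: equivalence partitioning takes one representative value from each partition, and boundary value analysis adds the boundary values themselves plus the values just inside and just outside them.

```python
def accepts(value):
    return 1 <= value <= 100  # hypothetical rule under test

# Equivalence partitioning: one representative value per partition.
ep_tests = {
    -5:  False,   # invalid partition below the range
    50:  True,    # valid partition 1..100
    200: False,   # invalid partition above the range
}

# Boundary value analysis: each boundary plus the values just inside and outside it.
bva_tests = {
    0: False, 1: True, 2: True,        # around the lower boundary
    99: True, 100: True, 101: False,   # around the upper boundary
}

for value, expected in {**ep_tests, **bva_tests}.items():
    assert accepts(value) == expected, value
```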
Decision table testing
A decision table lists all the input conditions that can occur and all the actions that can arise from them
Conditions and actions are structured into the table as rows, with the conditions at the top of the table and the possible actions at the bottom
Business rules, which involve combinations of conditions to produce some combination of actions, are arranged across the top. Each column therefore represents a single business rule and shows how input conditions combine to produce actions
Each column represents a possible test case, since it identifies both inputs and expected outputs (see the sketch below)
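A minimal Python sketch of a decision table for a hypothetical loan rule with two conditions and one action; each column of the table is run as a test case.

```python
def approve_loan(employed, good_credit):
    # Hypothetical business rule: approve only if employed AND good credit history.
    return employed and good_credit

# Decision table: conditions at the top, the resulting action at the bottom,
# one column (business rule R1..R4) per test case.
columns = [
    # employed, good_credit -> approve?
    (True,  True,  True),    # R1
    (True,  False, False),   # R2
    (False, True,  False),   # R3
    (False, False, False),   # R4
]

for employed, good_credit, expected in columns:
    assert approve_loan(employed, good_credit) == expected
```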
State transition testing
Transitions are caused by events and they may generate outputs and/ or changes of state
An event is anything that acts as a trigger for a change
could be an input to the system
could be something inside the system that changes for some reason
A state table (ST) - records all the possible events and all the possible states; for each combination of event and state it shows the outcome in terms of the new state and any outputs that are generated
The ST is the source from which we usually derive test cases (see the sketch below)
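A minimal Python sketch, assuming a hypothetical lamp with states 'off' and 'on' and a single 'press' event; each entry in the state table records the new state and becomes a test case.

```python
# State table: (current state, event) -> new state
state_table = {
    ("off", "press"): "on",
    ("on",  "press"): "off",
}

def next_state(state, event):
    return state_table[(state, event)]

# Each row of the state table is exercised as a test case.
for (state, event), expected in state_table.items():
    assert next_state(state, event) == expected
```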
Use case testing
Use cases are one way of specifying functionality as business scenarios or process flows. They capture the individual interactions between 'actors' and the system.
Use cases are a high-level view of requirements
Use case testing relates to real user processes, so it offers an opportunity to exercise a complete process flow
structure-based (white-box) - deriving test cases directly from the code written to implement a system
Reading and interpreting code
Overall program structure
executable code
non-executable code
Programming structures
Sequence
Selection
Iteration
Flow charts - visual representation of the structure
Control flow graphs - represent the decision points and the flow of control within a piece of code
nodes - represent any point where the flow of control can be modified or the points where a control structure returns to the main flow
edges - lines connecting any two nodes
Cyclomatic complexity is the number of independent paths through the program. The easiest way to calculate it is to count the decisions and add 1; for example, a fragment with 2 IF statements (2 decisions) has a cyclomatic complexity of 2 + 1 = 3 (see the sketch below).
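A minimal hypothetical Python fragment with exactly two IF decisions, so its cyclomatic complexity is 2 + 1 = 3, i.e. three independent paths.

```python
def grade(score, bonus):
    result = score
    if bonus:             # decision 1
        result += 10
    if result >= 50:      # decision 2
        return "pass"
    return "fail"
# 2 decisions + 1 = cyclomatic complexity of 3 (three independent paths).
```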
Hybrid flow graphs
Statement testing and coverage
Decision testing and coverage (a short worked example covering statement and decision coverage follows below)
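A short worked example, assuming the hypothetical grade() fragment sketched above (two IF decisions): the single test grade(60, True) executes five of its six statements (roughly 83% statement coverage) but exercises only the True outcome of each decision (50% decision coverage); adding grade(10, False) reaches the remaining statement and both False outcomes, bringing statement coverage and decision coverage to 100%.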
Other structure-based techniques
experience-based (ad hoc) - deriving test cases from the tester's experience of similar systems and general experience of testing
Error guessing (ad hoc testing) - takes advantage of a tester's skill, intuition and experience
It can identify tests not easily captured by formal techniques
It can make good use of tester experience and available defect data
Exploratory testing - combines the experience of testers with a structured approach to testing
It is useful when there are limited specification documents available
It is useful when testing is constrained due to time pressures
The test development process
Steps in the design of tests (a minimal worked sketch follows after this list)
Identify (decide on) test conditions
A test condition - an item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element
Specify (design) test cases
A test case - a set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement
A good test case should be traceable back to the test condition and the element of the specification that it is testing
Specify (write) test procedures
A test procedure specification - a sequence of actions for the execution of a test; it is often called a test script.
Write a test execution schedule
Test execution schedule - puts all the individual test procedures in the right sequence and sets up the system so that they can be run
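A minimal sketch of these steps for a hypothetical login feature (Python used only as notation; the names are illustrative, not from the ISEB material):

```python
# Test condition: an item or event that could be verified.
test_condition = "The system locks an account after three failed login attempts"

# Test case: inputs, preconditions and expected results, traceable to the condition.
test_case = {
    "id": "TC-042",
    "condition": test_condition,
    "preconditions": "account 'alice' exists and is not locked",
    "inputs": ["wrong password entered three times"],
    "expected_result": "account 'alice' is locked and a warning is shown",
}

# Test procedure (script): the ordered actions needed to execute the test case.
test_procedure = [
    "1. Open the login page",
    "2. Enter user 'alice' with an incorrect password three times",
    "3. Check that the lock-out warning is displayed",
]
```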
Test coverage - provides a quantitative assessment of the extent and quality of testing
It provides a quantitative measure of the quality of the testing that has been done by measuring what has been achieved.
It provides a way of estimating how much more testing needs to be done.
Choosing test techniques
Type of system
Regulatory standards
Customer or contractual requirements
Level of risk
Type of risk
Test objectives
Documentation available
Knowledge of the testers
Time and budget
Development life cycle
Use case models
Experience of type of defects found
Test Management
Risk and testing
Risk - a factor that could result in future negative consequences, usually expressed as impact and likelihood
Level of risk - (probability of the risk occurring) x (impact if it did happen)
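For example, under an assumed 1-5 scoring scheme (an illustration, not part of the ISEB material), a risk with likelihood 4 and impact 3 has a level of 4 x 3 = 12, so it would normally be addressed before a risk scored 2 x 5 = 10.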
A risk-based approach to testing provides proactive opportunities to reduce the levels of product risk
Testing is a risk control activity
Project risks
Supplier issues
Failure of a third party to deliver on time or at all
Contractual issues
Organizational factors
skill and staff shortages
Personnel and training issues
Political issues, such as a change of management or restructuring that will affect the project resources
Problems that stop testers communicating their needs and test results
Failure to follow up on low-level testing and reviews
Lack of appreciation of the benefits of testing
Specialist issues
Problems in defining the right requirements
The extent that requirements can be met given existing project constraints
The quality of the design, development and test team
Product risks
Failure-prone software delivered
Poor requirements leading to badly defined and built software
Potential that a defect in the software/hardware could cause harm to an individual or company
Poor software quality characteristics leading to poor user feedback
The software does not meet the requirements and delivers functionality that was not requested
In a risk-based approach the risks identified
will determine the test techniques. Motor Industry Software Reliability Association (MISRA) defines which test techniques should be used for each level of risk: the higher the risk, the higher the coverage required from test techniques
prioritize testing in an attempt to find the critical defects as early as possible
will determine any non-test activities that could be employed to reduce risk
Risk management activities provide a disciplined approach
To assess continuously what can go wrong (risks).
To determine what risks are important to deal with (probability x impact).
To implement actions to deal with those risks (mitigating actions)
Test organization
Test organization and independence
The more remote a tester is from the production of the document, the greater the level of independence
The greater the level of independence, the greater the likelihood of errors in testing arising from unfamiliarity
Levels of independent testing
The developer
Independent testers seconded to the development team
Independent permanent test team, centre of excellence, within the organization
Independent testers or test team provided by the operational business units
Specialist testers such as usability testers, security testers, or performance testers
Outsourced test team or testers, e.g. contractors or other organizations
Features of independent testing
Benefits
The tester sees other and different defects to the author
The tester is unbiased
The tester can see what has been built rather than what the developer thought had been built
The tester makes no assumptions regarding quality
Drawbacks
Isolation from the development team
The tester may be seen as the bottleneck
Developers lose a sense of responsibility for quality
The fully independent view sets developers and testers on either side of an invisible fence
Tasks of a test leader
Coordinating the development of the test strategy and plan with project managers and others
Writing or reviewing test strategies produced for the project and test policies produced for the organization
Contributing the testing perspective to other project activities, such as development delivery schedules
Planning the development of the required tests
Managing the specification, preparation, implementation and execution of tests, including the monitoring and control of all the specification and execution
Taking the required action, including adapting the planning, based on test results and progress
Ensuring that adequate configuration management of testware is in place and that the testware is fully traceable
Putting in place suitable metrics
Agreeing what should be automated, to what degree, and how
Selecting tools to support testing and ensuring any tool training requirements are met
Agreeing the structure and implementation of the test environment
Scheduling all testing activity
At the end of the project, writing a test summary report based on the information gathered during testing
Tasks of a tester (test analyst or test executor)
Reviewing and contributing to the development of test plans
Analysing, reviewing and assessing user requirements, specifications and models for testability
Creating test specifications from the test basis
Setting up the test environment
Preparing and acquiring/copying/creating test data
Implementing tests on all test levels, executing and logging the tests, evaluating the results and documenting the deviations from expected results as defects
Using test administration or management and test monitoring tools as required
Automating tests
Where required, running the tests and measuring the performance of components and systems
Reviewing tests developed by other testers
Test approaches (strategies)
Preventative - the test design process is initiated as early as possible in the life cycle to stop defects being built into the final solution
Reactive - this is where testing is the last development stage and is not started until after design and coding have been completed
Analytical such as risk-based testing - where testing is directed to areas of greatest risk
Model-based such as stochastic testing using statistical information about failure rates
Methodological such as failure-based, check-list based and quality-characteristic-based
Standard-compliant, specified by industry-specific standards such as The Railway Signalling standards (which define the levels of testing required) or the MISRA (which defines how to design, build and test reliable software for the motor industry)
Process-compliant , which adhere to the processes developed for use with the various agile methodologies or traditional waterfall approaches
Dynamic and heuristic, such as exploratory testing where testing is more reactive to events than pre-planned, and where execution and evaluation are concurrent tasks.
Consultative such as those where test coverage is driven primarily by the advice and guidance of technology and/or business domain experts outside or within the test team
Regression-averse such as those that include reuse of existing test material, extensive automation of functional regression tests and standard test suites
Factors considered when defining the strategy or approach
Risk of failure of the project, hazards to the product and risks of product failure to humans, the environment and the company
Skills and experience of the people in the proposed techniques, tools and methods.
The objective of the testing endeavour and the mission of the testing team
Regulatory aspects such as external and internal regulations for the development process
The nature of the product and the business
Test planning and estimation
Test planning
Test planning is used in development and implementation projects (sometimes called 'green field') as well as maintenance (change and fix) activities
Master test plan or a project test plan - the main document produced in test planning
The details of the test-level activities are documented within test-level plans, e.g. the system test plan.
The contents sections of a test plan for either the master test plan or test-level plans are normally identical or very similar.
The IEEE 829 Standard for Software and System Test Documentation identifies a minimum of 16 sections in a test plan
Test plan identifier
Introduction
Test items
Features to be tested
Features not to be tested
Approach
Item pass/fail criteria
Suspension criteria and resumption requirements
Test deliverables
Testing tasks
Environmental needs
Responsibilities
Staffing and training needs
Schedule
Risks and contingencies
Approvals
Test-planning activities
Working with the Project Manager and subject matter experts to determine the scope and the risks that need to be tested, as well as identifying and agreeing the objectives of the testing.
Putting together the overall approach of testing ensuring that the test levels and entry and exit criteria are defined
Liaising with the project manager and making sure that the testing activities have been included within the software life-cycle activities such as:
design
development
implementation
Working with the project to decide what needs to be tested, what roles are involved and who will perform the test activities, planning when and how the test activities should be done, deciding how the test results will be evaluated and defining when to stop testing (exit criteria)
Building a plan that identifies when and who will undertake the test analysis and design activities.
Finding and assigning resources for the different activities that have been defined
Deciding what the documentation for the test project will be
Defining the management information, including the metrics required and putting in place the processes to monitor and control test preparation and execution, defect resolution and risk issues.
Ensuring that the test documentation generates repeatable test assets, e.g. test cases.
Exit criteria
All tests planned have been run
A certain level of requirements coverage has been achieved
No high-priority or severe defects are left outstanding
All high-risk areas have been fully tested
Cost - when the budget has been spent
The schedule has been achieved
Test estimation
The metrics-based approach
The number of test conditions
The number of test cases written
The number of test cases executed
The time taken to develop test cases
The time taken to run test cases
The number of defects found
The number of environment outages and how long on average each one lasted
The expert-based approach
Business experts
Test process consultants
Developers
Technical architects
Analysts and designers
Anyone with knowledge of the application to be tested or the tasks involved in the process
Things that affect the level of effort required to fulfil the test requirements of a project
Product characteristics
size of the test basis
complexity of the final product
the amount of non-functional requirements
the security requirements (perhaps meeting BS7799, the security standard)
how much documentation is required
the availability and quality of the test basis
Development process characteristics
timescales
amount of budget available
skills of those involved in the testing and development activity
which tools are being used across the life cycle
Expected outcome of testing such as
the number of errors expected
test cases to be written
Test progress monitoring and control
Test progress monitoring
Percentage of work done in test case preparation
Percentage of work done in test environment preparation
Test case execution
Defect information
Test coverage of requirements, risk or code
Subjective confidence of testers in the product
Dates of test milestones
Testing costs
Test reporting
What has happened during a given period of time
Analysed information and metrics required to support recommendations and decisions about future actions:
an assessment of defects remaining
the economic benefit of continued testing
outstanding risks
the level of confidence in tested software
The IEEE 829 Standard for Software and System Test Documentation includes an outline of a test summary report
Test summary report identifier
Summary
Variances
Comprehensiveness assessment
Summary of results
Evaluation
Summary of activities
Approvals
Test control
Making decisions based on information from test monitoring
Reprioritize tests when an identified project risk occurs
Change the test schedule due to availability of a test environment
Adding extra test scripts to a test suite
Set an entry criterion requiring fixes to be retested by a developer before accepting them into a build
Review of product risks and perhaps changing the risk ratings to meet the target
Adjusting the scope of the testing to manage the testing of late change requests
Descoping of functionality, i.e. removing some less important planned deliverables from the initial delivered solution to reduce the time and effort required to achieve that solution
Delaying release into the production environment until exit criteria have been met
Continuing testing after delivery into the production environment so that defects are found before they occur in production
Incident management
An incident is any unplanned event occurring that requires further investigation
Incident management, according to IEEE 1044 (Standard Classification for Software Anomalies), is the process of recognizing, investigating, taking action and disposing of incidents
Incident reports have the following objectives:
To provide developers and other parties with feedback on the problem to enable identification, isolation and correction as necessary
To provide test leaders with a means of tracking the quality of the system under test and the progress of the testing. One of the key metrics used to measure progress is a view of how many incidents are raised, their priority and finally that they have been corrected and signed off.
To provide ideas for test process improvement. For each incident the point of injection should be documented
The details included on an incident report
Date of issue, issuing organization, author, approvals and status
Scope, severity and priority of the incident
References, including the identity of the test case specification that revealed the problem
Expected and actual results
Date the incident was discovered
Identification of the test item and environment
Software or system life-cycle process in which the incident was observed
Description of the incident to enable reproduction and resolution, including logs, database dumps or screenshots
Degree of impact on stakeholder(s) interests
Severity of the impact on the system
Urgency/priority to fix
Status of the incident
Conclusions, recommendations and approvals
Global issues, such as other areas that may be affected by a change resulting from the incident
Change history , such as the sequence of actions taken by project team members with respect to the incident to isolate, repair and confirm it as fixed
Test incident report outline
Test incident report identifier
Summary
Incident description
Inputs
Expected results
Actual results
Anomalies
Date and time
Procedure step
Environment
Attempts to repeat
Testers' comments
Observers' comments
Impact
Configuration management
is the process of managing products, facilities and processes by managing the information about them, including changes and ensuring they are what they are supposed to be in every case
for testing, will involve controlling both the versions of code to be tested and the documents used during the development process, e.g. requirements, design and plans
should ensure traceability throughout the test process, e.g. a requirement should be traceable through to the test cases that are run to test its level of quality and vice versa
each item of testware (such as a test procedure) should have its own version number and be linked to the version for the software it was used to test
Tool Support for Testing
General principles
Test tool - A software product that supports one or more test activities, such as planning and control, specification, building, initial files and data, test execution and test analysis
Benefit is that the amount of time and effort spent performing routine, mundane, repetitive tasks is greatly reduced
A typical objective of a pilot project for introducing a testing tool into an organisation is to assess whether the benefits will be achieved at a reasonable cost
Risks - over-optimistic expectations of what the tool can do and a lack of appreciation of the effort required to implement and obtain the benefits that the tool can bring
Tool support for management of testing and tests
Test management tools
provide support for various activities and tasks throughout the development life cycle
integration with reqs management tools allows reports to be produced on test progress against one or more reqs
integration with incident management tools allows reports also to include analysis of defects against reqs
The metrics produced can be used as input to:
Test and project management to control the current project
Estimates for future projects
Identifying weaknesses or inefficiencies in the development or test process that can be subsequently investigated with the aim of improving them
Incident management tools
creation of an incident report
maintenance of details about the incident as it progresses through the incident life cycle
Requirements management tools
used by business analysts to record, manage and prioritize the requirements of a system
traceability function enables links and references to be made between reqs, function, test conditions and other testware items
enables reqs coverage metrics calculation as traceability enables test cases to be mapped to reqs
Configuration management tools
designed for managing: the versions of different software (and hardware) components that comprise a complete build of the system; and various complete builds of systems that exist for various software platforms over a period of time
The amount of benefit depends upon
the complexity of the system architecture
the number and frequency of builds of the integrated system
how much choice is available to customers
allow traceability between testware and builds of an integrated system and versions of subsystems and modules
Traceability is useful for
identifying the correct version of test procedures to be used
determining which test procedures and other testware can be reused or need to be updated /maintained
assisting the debugging process so that a failure found when running a test procedure can be traced back to the appropriate version of a subsystem
Tool support for static testing
Review tools
provide a framework for reviews and inspections
maintaining information about the review process, such as rules and checklists
the ability to record, communicate and retain review comments and defects
the ability to amend and reissue the deliverable under review whilst retaining a history or log of the changes made
traceability functions to enable changes to deliverables under review to highlight other deliverables that may be affected by the change
the use of web technology to provide access from any geographical location to this information
can interface with configuration management tools to control the version numbers of a document under review
tend to be more beneficial for peer (or technical) reviews and inspections rather than walkthroughs and informal reviews
Static analysis tools
analyse code before it is executed
used mainly by developers prior to unit testing
some are integrated with dynamic and coverage measurement tools.
used to improve the understanding of the code and to calculate complexity and other metrics
types of defects
syntax errors
variance from programming standards
invalid code structures
unreachable code
portability
security vulnerabilities
inconsistent interfaces between components
references to variables that have a null value or are never used
Modelling tools
used by developers during the analysis and design stages of the development life cycle
are very cost effective at finding defects early in the development life cycle
Tool support for test specification
Test design tools
support the generation and creation of test cases
many are integrated with other tools
modelling tools
reqs management tools
static analysis tools
test management tools
A test oracle is a type of test design tool that automatically generates expected results (see the sketch after this list). Useful for:
Replacement systems
Migrations
Regression testing
useful for safety-critical and other high-risk software where coverage levels are higher and industry, defence or government standards need to be adhered to
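A minimal Python sketch of the oracle idea for a migration, assuming a hypothetical legacy pricing routine is still available to generate the expected results for the replacement implementation:

```python
def legacy_price(quantity):   # trusted existing behaviour acts as the oracle
    return quantity * 9.99

def new_price(quantity):      # replacement implementation under test
    return round(quantity * 9.99, 2)

for quantity in range(1, 20):
    expected = legacy_price(quantity)      # oracle generates the expected result
    assert abs(new_price(quantity) - expected) < 0.01, quantity
```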
Test data preparation tools
Tool support for test execution and logging
Test comparators
compare the contents of files, databases, XML messages, objects and other electronic data formats
useful for regression testing since the contents of output or interface files should usually be the same
Test execution tools
Types
Record (or capture playback) tools
Data-driven testing - test scripts are created to run together with their related data sets in a framework; the framework provides reusable test logic to reduce maintenance and improve test coverage. Input and result (test criteria) data values can be stored in one or more central data sources or databases; the actual format and organisation can be implementation specific.
Keyword-driven testing - action words are defined to cover specific interactions in the system, which testers can then use to build their tests (both styles are sketched below)
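A minimal Python sketch of both styles, with hypothetical data rows and action words; real test execution tools wrap this kind of framework around recorded or scripted interactions with the system under test.

```python
def is_adult(age):            # stands in for the system behaviour being tested
    return age >= 18

# Data-driven: one reusable script, many rows of inputs and expected results.
data_rows = [(17, False), (18, True), (65, True)]
for age, expected in data_rows:
    assert is_adult(age) == expected

# Keyword-driven: tests are written as action words that the framework maps
# onto implementation functions.
actions = {"check_adult": lambda age, expected: is_adult(age) == expected}

keyword_test = [
    ("check_adult", 17, False),
    ("check_adult", 18, True),
]
for keyword, *args in keyword_test:
    assert actions[keyword](*args)
```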
Requires
Technical skills
Maintenance
Effective and efficient use
Benefits
Cost savings as a result of the time saved by running automated tests rather than manual tests
Accuracy benefits from avoiding manual errors in execution and comparison
The ability and flexibility to use skilled testers on more useful and interesting tasks
The speed with which the results of the regression pack can be obtained
Test harnesses
known as unit test framework tools
used by developers to simulate a small section of the environment in which the software will operate (see the sketch below)
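A minimal sketch using Python's built-in unittest module as the unit test framework, with a hypothetical discount() function standing in for the unit under test:

```python
import unittest

def discount(price, rate):
    """Hypothetical unit under test."""
    return price - price * rate

class DiscountTest(unittest.TestCase):   # the framework supplies the harness
    def test_ten_percent(self):
        self.assertAlmostEqual(discount(100, 0.1), 90)

    def test_zero_rate(self):
        self.assertEqual(discount(100, 0), 100)

if __name__ == "__main__":
    unittest.main()
```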
Coverage measurement tools
measure the percentage of the code structure covered by white-box measurement techniques such as statement coverage and branch or decision coverage
can be used to assess test completion criteria and/or exit criteria
Security tools
Fault attacks - can evaluate the reliability of a test object by attempting to force specific failures to occur
Tool support for performance and monitoring
Dynamic analysis tools
used to detect the type of defects that are difficult to find during static testing
report static defects
report dynamic defects
provide coverage measurement figures
report upon the code being (dynamically) executed at various instrumentation points
used to:
report on the state of software during its execution
monitor the allocation, use and deallocation of memory
identify memory leaks (see the sketch after this list)
detect time dependencies
identify unassigned pointers
check pointer arithmetic
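A minimal sketch of the memory-monitoring idea using Python's built-in tracemalloc module (a simple allocation tracker rather than a full dynamic analysis tool); the ever-growing cache is a hypothetical leak:

```python
import tracemalloc

leaky_cache = []

def handle_request(payload):
    leaky_cache.append(payload * 1000)   # hypothetical leak: the cache only grows

tracemalloc.start()
for _ in range(1000):
    handle_request("x")
current, peak = tracemalloc.get_traced_memory()  # bytes currently held / peak bytes
print(f"current={current} bytes, peak={peak} bytes")
tracemalloc.stop()
```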
are often integrated with static analysis and coverage measurement tools. This allows:
the code to be analysed statically
the code to be instrumented
the code to be executed (dynamically)
Performance testing/load testing/stress testing tools
Load testing reports upon the performance of a system under test, under various loads and usage patterns
Stress testing identifies the usage pattern or load at which the system under test fails
Types of defects
general performance problems
performance bottlenecks
memory leakage
record-locking problems
concurrency problems
excess usage of system resources
exhaustion of disk space
Monitoring tools
used to check if systems are available
used to check if system performance is acceptable
Other tools
Spreadsheets
Word processors
Back-up and restore utilities
SQL and other database query tools
Project planning tools
Debugging tools