Saturday, May 17, 2025

Business Domain Testing

Overview of Software Testing: Introduction - Testing, QC and QA - Testing and SDLC Models - Agile Testing - Testing Processes - Test Levels - Test Types - Test Documents - Test Metrics

Business Domain Testing Skills: Software Testing Lifecycle (STLC) - Quality Characteristics for Business Domain Testing - Functional Testing - Mobile Application Testing - Usability Testing - Roles - Responsibilities - Tools

Introduction

o   Software testing is a set of activities to discover defects and evaluate the quality of software artifacts.

o   Testing involves verification, i.e., checking whether the system meets specified requirements. It also involves validation, i.e., checking whether the system meets users’ and other stakeholders’ needs in its operational environment.


o   Testing may be dynamic or static. Dynamic testing involves the execution of software, while static testing includes reviews and static analysis.

o   Testing needs to be properly planned, managed, estimated, monitored and controlled

o   The objectives of testing can vary depending on the context, which includes the work product being tested, the test level, risks, the software development lifecycle (SDLC) being followed, and factors related to the business context.

Testing, QC and QA

o   Testing is a major form of quality control (QC); other forms include formal methods (model checking and proof of correctness), simulation and prototyping.

o   QC is a product-oriented, corrective approach that focuses on those activities supporting the achievement of appropriate levels of quality

o   Quality Assurance (QA) is a process-oriented, preventive approach that focuses on the implementation and improvement of processes. It works on the basis that if a good process is followed correctly, then it will generate a good product.

o   Test results are used by QA and QC. In QC they are used to fix defects, while in QA they provide feedback on how well the development and test processes are performing.



Testing and SDLC Models

o   Software development lifecycle (SDLC) models include sequential development models (e.g., waterfall model, V-model), iterative development models (e.g., spiral model, prototyping), and incremental development models (e.g., Unified Process).

o   Some activities within software development processes can also be described by more detailed software development methods and Agile practices.


Agile Testing

o   One of the main differences between traditional lifecycles and Agile lifecycles is the idea of very short iterations, each iteration resulting in working software that delivers features of value to business stakeholders.


o   At the beginning of the project, there is a release planning period. This is followed by a sequence of iterations.

o   At the beginning of each iteration, there is an iteration planning period. Once the iteration scope is established, the selected user stories are developed, integrated with the system, and tested.

o   These iterations are highly dynamic, with development, integration, and testing activities taking place throughout each iteration, and with considerable parallelism and overlap. Testing activities occur throughout the iteration, not as a final activity.

o   Agile approaches and practices include the whole-team approach, early and frequent feedback, collaborative user story creation, retrospectives, continuous integration, and release and iteration planning.

o   Agile testing methods include test-driven development (TDD), acceptance test-driven development (ATDD), and behavior-driven development (BDD); a minimal TDD sketch follows at the end of this section.

o   Testing quadrants align the test levels with the appropriate test types in the Agile methodology
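
o   As a minimal illustration of the test-first idea behind TDD, the sketch below assumes Python with pytest; the requirement and the function name apply_discount are hypothetical, chosen only to show the red-green-refactor cycle.

# Minimal TDD sketch, assuming pytest. The requirement and the function
# name (apply_discount) are hypothetical illustrations, not from the post.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Return the price reduced by the given percentage."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Red: the tests below are written first and fail while apply_discount does not exist.
# Green: the implementation above is the simplest code that makes them pass.
# Refactor: the code is cleaned up while the tests keep passing.
def test_discount_is_applied():
    assert apply_discount(200.0, 10) == 180.0

def test_invalid_percentage_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(200.0, 150)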

Testing Processes

o   Organizational Test Processes

§  Organizational Test Policy

§  Organizational Test Strategy

o   Test Management Processes

§  Test Planning

§  Risk Management

§  Test Monitoring and Control

§  Test Completion

o   Dynamic Testing Processes

§  Test Analysis

§  Test Design

§  Test Implementation

§  Test Execution

§  Test Completion and Reporting

Test Levels

o   Component testing: focuses on testing components in isolation.

o   Component integration testing: focuses on testing the interfaces and interactions between components; its scope depends on the integration strategy, such as bottom-up, top-down or big-bang.

o   System testing: focuses on the overall behavior and capabilities of an entire system or product.

o   System integration testing (SIT): focuses on testing the interfaces between the system under test and other systems and external services.

o   User acceptance testing (UAT): focuses on validation and on demonstrating readiness for deployment, i.e., that the system fulfills the user’s business needs.


Test Types

o   Functional testing: evaluates the functions that a component or system should perform.

o   Non-functional testing: evaluates attributes other than functional characteristics of a component or system. Non-functional software quality characteristics include Performance, Compatibility, Usability, Reliability, Security, Maintainability and Portability.

o   Black-box testing: specification-based and derives tests from documentation external to the test object.

o   White-box testing: structure-based and derives tests from the system's implementation or internal structure (e.g., code, architecture, workflows, and data flows); a small branch-coverage sketch follows after this list.

o   Confirmation testing (retesting): confirms that an original defect has been successfully fixed. Regression testing: confirms that a change has not caused adverse consequences elsewhere in the system.
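
o   To make the white-box (structure-based) idea concrete, a hedged sketch in Python: the function and its threshold are invented, and the two tests are derived from the code's internal structure so that both branches of the if statement are executed (100% branch coverage for this function).

# White-box sketch: test cases derived from the code's branches.
# The function and its threshold are hypothetical examples.
def shipping_fee(order_total: float) -> float:
    if order_total >= 100.0:   # branch 1: free shipping
        return 0.0
    return 5.99                # branch 2: flat fee

# Two tests are enough to execute both branches of shipping_fee.
def test_free_shipping_branch():
    assert shipping_fee(150.0) == 0.0

def test_flat_fee_branch():
    assert shipping_fee(20.0) == 5.99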

Test Documents

o   Test Policy

o   Test Strategy

o   Master test plan

o   Test plan

o   Traceability Matrix

o   High level scenarios (HLS)

o   Test cases (TC)

o   Test data requirements

o   Test Execution Log

o   Test Status report (Progress report)

o   Test Completion report (Summary report)

o   Defect report

Test Metrics

o   Project progress metrics (e.g., task completion, resource usage, test effort)

o   Test progress metrics (e.g., test case implementation progress, test environment preparation progress, number of test cases run/not run, passed/failed, test execution time)

o   Product quality metrics (e.g., availability, response time, mean time to failure)

o   Defect metrics (e.g., number and priorities of defects found/fixed, defect density, defect detection percentage); a worked sketch of these metrics follows after this list

o   Risk metrics (e.g., residual risk level)

o   Coverage metrics (e.g., requirements coverage, code coverage)

o   Cost metrics (e.g., cost of testing, organizational cost of quality)
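
o   As a worked sketch of the defect metrics above (the numbers are invented): defect density is the number of defects divided by the size of the test object, and defect detection percentage (DDP) is the share of all known defects that were found by testing rather than after release.

# Illustrative defect-metric calculations with invented numbers.
defects_found_in_testing = 45
defects_found_after_release = 5
size_kloc = 12.5   # size of the test object in thousands of lines of code

# Defect density: defects per unit of size (here per KLOC).
defect_density = defects_found_in_testing / size_kloc

# Defect detection percentage: testing defects / all known defects.
ddp = 100 * defects_found_in_testing / (defects_found_in_testing + defects_found_after_release)

print(f"Defect density: {defect_density:.2f} defects/KLOC")  # 3.60
print(f"DDP: {ddp:.1f}%")                                    # 90.0%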

Software Testing Lifecycle (STLC)

o   STLC consists of key activities to ensure that all software quality goals are met:

§  Requirements Analysis

§  Test Planning

§  Test Case Design

§  Test Environment Setup

§  Test Execution

§  Test Closure

Quality Characteristics for Business Domain Testing

o   Functional Suitability

§  Functional Correctness: involves verifying the application's adherence to the specified or implied requirements and may also include computational accuracy

§  Functional Appropriateness: involves evaluating and validating the appropriateness of a set of functions for its intended specified tasks

§  Functional Completeness: performed to determine the coverage of specified tasks and user objectives by the implemented functionality. Traceability between specification items (e.g., requirements, user stories, use cases) and the implemented functionality (e.g., function, component, workflow) is essential to enable required functional completeness to be determined.

o   Usability

§  Appropriateness recognizability: Verifying that users can recognize whether a product or system is appropriate for their needs.

§  Learnability: Verifying the product or system enables the user to learn how to use it with effectiveness and efficiency in a specified context of use.

§  Operability: Verifying the product or system has attributes that make it easy to operate and control.

§  User error protection: Verifying the product or system protects users against making errors.

§  User interface aesthetics: Verifying the user interface enables pleasing and satisfying interaction for the user.

§  Accessibility: Verifying the product or system can be used by people with the widest range of characteristics and capabilities to achieve a specified goal in a specified context of use.

o   Portability

§  Installability: Verifying the effectiveness and efficiency with which a product or system can be successfully installed and/or uninstalled in a specified environment.

§  Adaptability: Verifying the product or system can effectively and efficiently be adapted for different or evolving hardware, software or other operational or usage environments.

§  Replaceability: Verifying the product can replace another specified software product for the same purpose in the same environment.

o   Compatibility

§  Interoperability: verifies the exchange of information between two or more systems or components. Tests focus on the ability to exchange information and subsequently use the information that has been exchanged.

Functional Testing

o   Functional Testing evaluates the functions that a component or system should perform. The functions are “what” the test object should do. The main objective of functional testing is checking the functional completeness, functional correctness and functional appropriateness.

o   Test analysis and design techniques include coverage-based test techniques and experience-based test techniques.

o   Coverage-based test techniques include Equivalence Partitioning (EP), Boundary Value Analysis (BVA), Decision Table Testing, State Transition Testing, Use Case Testing, Data Cycle (CRUD) Testing and others.

§  EP Technique divides data into partitions based on the expectation that all the elements of a given partition are to be processed in the same way by the test object.

§  If a test case that tests one value from an equivalence partition detects a defect, this defect should also be detected by test cases that test any other value from the same partition. Therefore, one test per partition is sufficient.
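
§  As a minimal sketch of EP, assume a hypothetical rule that ages 18 to 64 (inclusive) qualify for a standard fare; one representative value is then chosen from each partition.

# Equivalence partitioning sketch for a hypothetical fare rule:
# ages 18-64 qualify for a standard fare, younger or older ages do not.
def standard_fare_applies(age: int) -> bool:
    return 18 <= age <= 64

# One representative value per partition is considered sufficient.
partitions = {
    "below the valid range": 10,    # invalid partition
    "within the valid range": 40,   # valid partition
    "above the valid range": 70,    # invalid partition
}

def test_one_value_per_partition():
    assert standard_fare_applies(partitions["within the valid range"]) is True
    assert standard_fare_applies(partitions["below the valid range"]) is False
    assert standard_fare_applies(partitions["above the valid range"]) is False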

§  BVA is a technique based on exercising the boundaries of equivalence partitions

§  The minimum and maximum values of a partition are its boundary values

§  BVA can only be applied to ordered partitions: if two elements belong to the same partition, all elements between them must also belong to that partition.

§  In 2-Value BVA, for each boundary value there are two coverage items: this boundary value and its closest neighbor belonging to the adjacent partition. To achieve 100% coverage with 2-value BVA, test cases must exercise all coverage items, i.e., all identified boundary values.

§  In 3-Value BVA, for each boundary value there are three coverage items: this boundary value and both its neighbors. Therefore, in 3-value BVA some of the coverage items may not be boundary values. To achieve 100% coverage with 3-value BVA, test cases must exercise all coverage items, i.e., identified boundary values and their neighbors 

§  Example of EP with 3-value BVA:
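
§  The original worked example is not reproduced here; the hedged sketch below reuses the hypothetical 18 to 64 rule from the EP sketch above. The boundaries of the valid partition are 18 and 64, and 3-value BVA exercises each boundary together with both of its neighbors.

# 3-value BVA sketch for the hypothetical 18-64 partition used above.
def standard_fare_applies(age: int) -> bool:
    return 18 <= age <= 64

# Coverage items: each boundary value (18, 64) and both of its neighbors.
three_value_bva_cases = [
    (17, False), (18, True), (19, True),   # around the lower boundary 18
    (63, True), (64, True), (65, False),   # around the upper boundary 64
]

def test_three_value_bva():
    for age, expected in three_value_bva_cases:
        assert standard_fare_applies(age) is expected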

§  Decision tables are used for testing the implementation of system requirements that specify how different combinations of conditions result in different outcomes.
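
§  A hedged sketch of decision table testing: the conditions (loyalty member, order total of 100 or more) and the resulting discounts are invented, and each rule (column) of the table becomes one test case.

# Decision table sketch for a hypothetical discount rule.
def discount_percent(is_member: bool, order_total: float) -> int:
    if is_member and order_total >= 100:
        return 15
    if is_member or order_total >= 100:
        return 5
    return 0

# Rules of the decision table: (is_member, order_total, expected discount).
decision_table_rules = [
    (True,  120.0, 15),   # R1: member and large order
    (True,   50.0,  5),   # R2: member only
    (False, 120.0,  5),   # R3: large order only
    (False,  50.0,  0),   # R4: neither condition holds
]

def test_each_rule_of_the_decision_table():
    for is_member, total, expected in decision_table_rules:
        assert discount_percent(is_member, total) == expected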

§  A state transition diagram shows the possible software states, as well as how the software enters, exits, and transitions between states.

§  A transition is initiated by an event (e.g., user input of a value into a field). The same event can result in two or more different transitions from the same state. The state change may result in the software taking an action (e.g., outputting a calculation or error message).

§  A state transition table shows all valid transitions and potentially invalid transitions between states, as well as the events, and resulting actions for valid transitions. State transition diagrams normally show only the valid transitions and exclude the invalid transitions
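
§  A hedged sketch of a state transition table: the states and events describe an invented order workflow; valid transitions are listed explicitly, and any (state, event) pair not listed is treated as an invalid transition.

# State transition sketch for a hypothetical order workflow.
import pytest

# State transition table: (current state, event) -> next state.
TRANSITIONS = {
    ("created", "pay"):    "paid",
    ("paid",    "ship"):   "shipped",
    ("created", "cancel"): "cancelled",
    ("paid",    "cancel"): "cancelled",
}

def next_state(state: str, event: str) -> str:
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"invalid transition: event '{event}' in state '{state}'")
    return TRANSITIONS[(state, event)]

def test_valid_transition():
    assert next_state("created", "pay") == "paid"

def test_invalid_transition_is_rejected():
    with pytest.raises(ValueError):
        next_state("shipped", "pay")   # paying again after shipping is not allowed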

§  Tests can be derived from use cases, which are a specific way of designing interactions with software items. They incorporate requirements for the software functions. Use cases are associated with actors (human users, external hardware, or other components or systems) and subjects (the component or system to which the use case is applied).

§  The data cycle test or CRUD test (CREATE-READ-UPDATE-DELETE) is a technique for testing whether the data are being used and processed consistently by various functions from within different subsystems or even different systems. The technique is ideally suited to the testing of overall functionality, suitability and connectivity.

§  CRUD testing focuses on the coupling between different functions and how they handle common data.

§  An example of a data cycle: a subsystem that invoices orders and processes payments. The relevant part of the CRUD matrix is sketched below.
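
§  The matrix from the original post is not reproduced here; the hedged illustration below shows what such a matrix could look like for an invoicing and payment subsystem, with invented functions and entities, plus a small check that every entity has its complete CREATE-READ-UPDATE-DELETE cycle covered by some function.

# Illustrative CRUD matrix: which CRUD actions each function performs on each entity.
# All function and entity names are invented for illustration.
crud_matrix = {
    ("Enter order",        "Order"):   "C",
    ("Invoice order",      "Order"):   "RU",   # reads the order, marks it invoiced
    ("Invoice order",      "Invoice"): "C",
    ("Process payment",    "Invoice"): "RU",   # reads the invoice, marks it paid
    ("Archive old orders", "Order"):   "D",
    ("Archive old orders", "Invoice"): "D",
}

def missing_actions(matrix):
    """Report, per entity, which of C/R/U/D no function ever performs."""
    covered = {}
    for (_function, entity), actions in matrix.items():
        covered.setdefault(entity, set()).update(actions)
    return {entity: set("CRUD") - acts for entity, acts in covered.items() if set("CRUD") - acts}

print(missing_actions(crud_matrix))   # {} means the data cycle is complete for every entity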

o   Experience-based Test Techniques

§  Error Guessing

§  Exploratory testing

§  Checklist-based testing

Mobile Application Testing

o   Mobile application testing includes testing for compatibility with device hardware, app interactions with device software, and various connectivity methods.

o   Testing for Compatibility with Device Hardware

§  Testing for Device Features

§  Testing for Different Displays

§  Testing for Device Temperature

§  Testing for Device Input Sensors

§  Testing Various Input Methods

§  Testing for Screen Orientation Change

§  Testing for Typical Interrupts

§  Testing for Access Permissions to Device Features

§  Testing for Power Consumption and State

o   Testing for App Interactions with Device Software

§  Testing for Notifications

§  Testing for Quick-access Links

§  Testing for User Preferences Provided by the Operating System

§  Testing for Different Types of Apps

§  Testing for Interoperability with Multiple Platforms and Operating System Versions

§  Testing for Interoperability and Co-existence with other Apps on the Device

o   Testing for Various Connectivity Methods

§  Testing for cellular networks such as 2G, 3G, 4G and 5G

§  Testing for wireless connection types such as NFC or Bluetooth.

Usability Testing

o   Usability testing includes Testing User Interface (UI), User Experience (UX) and Accessibility of software products

§  Usability is the extent to which a software product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use

§  The User interface (UI) consists of all components of a software product that provide information and controls for the user to accomplish specific tasks with the system.

§  User experience (UX) describes a person’s perceptions and responses that result from the use and/or anticipated use of a product, system or service.

§  Accessibility is the degree to which a product or system can be used by people with the widest range of characteristics and capabilities to achieve a specified goal in a specified context of use

o   A usability test has the following three principal steps and associated tasks

§  Prepare usability test

·       Create usability test plan

·       Recruit usability test participants

·       Write usability test script(s)

·       Define usability test tasks

·       Pilot usability test session

§  Conduct usability test sessions

·       Prepare session

·       Perform briefing with pre-session instructions

·       Conduct pre-session interview

·       Moderate session

·       Conduct post-session interview

§  Communicate results and findings

·       Analyze findings

·       Write usability test report

·       Sell findings (i.e., convince people)

o   Human-centered design

§  An approach to design that aims to make software products more usable by focusing on the use of software products and applying human factors, ergonomics, and usability knowledge and techniques.

§  The human-centered design process can be summarized as follows: Analyze, Design, Evaluate, Iterate

§  The human-centered design activities are based on the following three key elements: Users, Evaluation, Iterations

Roles

o   Test management role

§  Takes overall responsibility for managing test activities, managing the product, and managing the team

§  Managing Test Activities

·        Test process: mainly focused on the activities of test planning, test monitoring and control, and test completion

·        Risk-based testing: identification, assessment, monitoring and mitigation of risks to drive testing

·        Project test strategy

·        Improving the test process

§  Managing the product

·        Test estimation: Effort, time and Cost

·        Test metrics

·        Defect management

§  Managing the team

·        Test Team

·        Stakeholder relationships

o   Testing role

§  Takes overall responsibility for the engineering aspect of testing.

§  Mainly focused on the activities of test analysis, test design, test implementation and test execution.

§  Contributes to risk-based testing (risks related to the business domain)

Responsibilities

o   Defining the test strategy consistently with the organizational test strategy and the project context.

o   Reviewing and analyzing system specifications to ensure clarity and understanding of requirements.

o   Creating detailed, comprehensive, and well-structured test plans.

o   Recognizing and classifying the risks associated with the functional correctness and usability of software systems.

o   Coordinating test plans with project managers, product owners, the development team and other stakeholders.

o   Performing the appropriate testing processes and activities based on the software development lifecycle.

o   Designing test scenarios and test cases based on specifications and requirements.

o   Executing test cases and analyzing results to report any defects or issues.

o   Conducting Functional testing and Usability testing to ensure software quality

o   Tracking defects or inconsistencies in the product's functionality and user interface

o   Collaborating with cross-functional teams to ensure quality throughout the software development lifecycle.

o   Collaborating with a cross-functional Agile team and applying practices of Agile software development.

o   Continuously monitoring and controlling testing to achieve project goals.

o   Introducing metrics for measuring test progress and evaluating the quality of the testing and product

o   Preparing and delivering test progress reports, test summary reports and test completion reports.

o   Participating in continuous improvement initiatives to enhance the efficiency and effectiveness of the testing process.

Tools

o   Tools Categories:

§  Application lifecycle management (ALM) and Test Management tools

§  Requirements management tools

§  Static testing tools

§  Test design and implementation tools

§  Test execution and coverage tools

§  Defect Tracking and Issue Management tools

§  Mobile Application testing tools.

o   Examples of Tools: Jira, Zephyr, Bugzilla, MantisBT, TestRail, qTest, Redmine, HP Quality Center, TestLink, Azure DevOps, Azure Test Plan and so on


References

ISTQB-CTFL Foundation Level 
https://www.istqb.org/certifications/certified-tester-foundation-level-ctfl-v4-0/
ISTQB-CTFL-AT Agile Tester
https://www.istqb.org/certifications/certified-tester-foundation-level-agile-tester-ctfl-at/
ISTQB-CTAL-TA Test Analyst
https://www.istqb.org/certifications/certified-tester-advanced-level-test-analyst/
ISTQB-CT-UT Usability Testing
https://www.istqb.org/certifications/certified-tester-usability-testing-ct-ut/
ISTQB-CTAL-TM Test Management
https://www.istqb.org/certifications/certified-tester-advanced-level-test-management-ctal-tm-v3-0/
GAQM-CSTE Software Test Engineer
https://gaqm.org/certifications/software_testing/cste
ASQ-CSQE Software Quality Engineer
https://www.asq.org/cert/software-quality-engineer
ISO/IEC 25010 Product Quality Model
https://www.iso.org/standard/78176.html
Test Design Techniques
https://www.tmap.net/building-blocks/test-design-techniques