*Software Engineering Management
**Initiation and Scope Definition
**Software Project Planning
**Software Project Enactment
**Software Review and Evaluation
**Software Closure
**Software Engineering Measurement
**Software Configuration Management
Keywords: ALM, areas, governance, development, maintenance, stages, evolution, ALM suites, tools
· ALM is the lifecycle management of applications, which includes governance, development, and maintenance.
· ALM includes these disciplines: requirements management, software architecture, development, testing, maintenance, change management, support, continuous integration, project management, deployment, release management, and governance.
· ALM tools provide a standardized system for communication and collaboration between software development teams and related departments, such as test and operations.
· ALM combines the disciplines concerned with all aspects of the process to achieve the goal of driving efficiency through predictable and repeatable software delivery.
· Governance includes requirements management, resource management, nurturing, and system administration such as data security, user access, change tracking, review, audit, deployment control, and rollback.
· Application development includes identifying current problems, and planning, designing, building, and testing the application and its continuous improvements. This area includes traditional developer and app maker roles.
· Maintenance includes deployment of the app and maintenance of optional and dependent technologies.
· The application lifecycle is the cyclical software development process that involves these areas: plan and track, develop, build and test, deploy, operate, monitor, and learn from discovery.
· The application management lifecycle can be divided into several stages, depending on the specific methodology or framework being used. Most ALM processes include the following key stages:
o Stage 1: Evaluation
§ Key considerations include Business Case, Requirements Gathering, Technical Feasibility, Vendor Selection
o Stage 2: Implementation
§ Key considerations include Development, Testing, Deployment, Management
o Stage 3: Live
§ Key considerations include Monitoring, Maintenance, Support, Optimization, Application Register
o Stage 4: Phasing Out
§ Key considerations include Data Migration, Communication, Deprecation
o Stage 5: Retired
§ Key considerations include Roadmaps and Migration Plans, Decommissioning, Data Retention, Lessons Learned
· The main stages of ALM include Application requirements, Application development, Application testing, Application deployment, and Application maintenance.
o Application requirements
§ Stakeholders define what they require from the application.
§ They analyze how the application will help them meet their business goals and regulatory compliance requirements.
§ Requirements management typically involves writing user stories that show how different users will interact with the application.
o Application development
§ Various teams work together to convert the requirements into a working application.
§ Project managers estimate the time and development cost.
§ Developers identify the design tasks and programming activities.
§ Quality analysts add review tasks and checkpoints for quality and progress checks.
§ The development and testing teams also plan a timeline for their software projects. They identify any interdependence among the requirements and decide the order in which to complete and release new features.
o Application testing
§ Quality analysts assess the application to verify it meets requirements.
§ They identify and prioritize any software errors or bugs, which the software development team then fixes.
§ Application testing and development often proceed simultaneously during the application’s lifecycle.
§ Agile development methodologies use automated testing tools to test the entire code base every time developers make a software change.
o Application deployment
§ Developers release the application to end users.
§ Release management also includes planning how the team deploys software changes over time.
§ Agile development teams automate deployment to speed up the release of new features and updates.
o Application maintenance
§ Support and development teams work together to resolve remaining bugs, plan new updates, and improve the product further.
§ They incorporate user feedback and release new features that are relevant to customers.
· Adoption of Agile and DevOps practices
o The widespread adoption of Agile and DevOps methodologies has transformed how software is developed and managed.
o ALM has evolved to support these practices, enabling organizations to be more adaptive and automated in delivering software while enhancing cross-team collaboration.
· Shift towards cloud-based ALM solutions
o Cloud-based ALM solutions provide organizations with the agility and scalability to manage software delivery and collaborate across distributed teams and geographies.
o Cloud-based ALM platforms offer benefits such as reduced infrastructure costs, increased accessibility, and easier maintenance.
· Use of artificial intelligence (AI)
o AI technologies are transforming ALM. AI algorithms enable predictive analytics, anomaly detection, root cause analysis, human-like automation, and workflow optimization.
o ALM platforms with AI capabilities can improve productivity, reduce time-to-market, and enhance software quality and performance.
· Emphasis on compliance and governance
o As regulatory requirements and compliance standards become more stringent, ALM has evolved to incorporate robust compliance and governance features.
o ALM tools can track changes, document audits, enforce policies, and ensure adherence to regulatory requirements.
· ALM tools are software that developers, testers, analysts, and other stakeholders use for application management. They provide a standardized environment that everyone can use to communicate and collaborate. Some common features of an integrated ALM suite:
o Project management
o Requirements management
o Source code management
o Test management
o Real-time chat support
o Project portfolio management
o Visualization tools, such as charts and graphs
· Some ALM software suites are:
o Microsoft Azure DevOps
o Atlassian JIRA
o GitLab
o Helix ALM
o Enterprise Architect
o Tuleap
o Jama Connect
o Codebeamer
o Orcanos Application Lifecycle Management
o Visure
o ClickUp
o IBM Targetprocess
o Rally Software
o Polarion ALM
o DocSheets
o SpiraTeam
o Software testing is a set of activities to discover defects and evaluate the quality of software artifacts.
o Testing involves verification, i.e., checking whether the system meets specified requirements. It also involves validation, i.e., checking whether the system meets users’ and other stakeholders’ needs in its operational environment.
o Testing may be dynamic or static. Dynamic testing involves the execution of software, while static testing includes reviews and static analysis.
o Testing needs to be properly planned, managed, estimated, monitored, and controlled.
o Objectives of testing can vary depending on the context, which includes the work product being tested, the test level, risks, the software development lifecycle (SDLC) being followed, and factors related to the business context.
o Testing is a major form of quality control (QC); other forms include formal methods (model checking and proof of correctness), simulation, and prototyping.
o QC is a product-oriented, corrective approach that focuses on those activities supporting the achievement of appropriate levels of quality.
o Quality assurance (QA) is a process-oriented, preventive approach that focuses on the implementation and improvement of processes. It works on the basis that if a good process is followed correctly, it will generate a good product.
o Test results are used by both QA and QC. In QC they are used to fix defects, while in QA they provide feedback on how well the development and test processes are performing.
o Software development lifecycle (SDLC) models include sequential development models (e.g., waterfall model, V-model), iterative development models (e.g., spiral model, prototyping), and incremental development models (e.g., Unified Process).
o Some activities within software development processes can also be described by more detailed software development methods and Agile practices.
o One of the main differences between traditional lifecycles and Agile lifecycles is the idea of very short iterations, each iteration resulting in working software that delivers features of value to business stakeholders.
o At the beginning of the project, there is a release planning period. This is followed by a sequence of iterations.
o At the beginning of each iteration, there is an iteration planning period. Once the iteration scope is established, the selected user stories are developed, integrated with the system, and tested.
o These iterations are highly dynamic, with development, integration, and testing activities taking place throughout each iteration, and with considerable parallelism and overlap. Testing activities occur throughout the iteration, not as a final activity.
o Agile approaches and aspects include Whole-Team Approach, Early and Frequent Feedback, Collaborative User Story Creation, Retrospectives, Continuous Integration, and Release and Iteration Planning.
o Agile testing methods include test-driven development, acceptance test-driven development, and behavior-driven development.
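§ A minimal test-driven development sketch in Python (the add_vat function and its rules are assumptions for illustration, and pytest is assumed as the test runner): the tests are written first and fail, then the simplest implementation makes them pass.

    # TDD sketch -- hypothetical add_vat function, pytest assumed as the runner.
    # Step 1 (red): write the tests first; they fail because add_vat does not exist yet.
    def test_add_vat_applies_20_percent():
        assert add_vat(100.00) == 120.00

    def test_add_vat_rounds_to_cents():
        assert add_vat(0.10) == 0.12

    # Step 2 (green): write the simplest implementation that makes the tests pass.
    def add_vat(net_price: float, rate: float = 0.20) -> float:
        return round(net_price * (1 + rate), 2)

    # Step 3 (refactor): improve the code while keeping all tests green.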
o Testing quadrants align the test levels with the appropriate test types in the Agile methodology.
o Organizational Test Processes
§ Organizational Test Policy
§ Organizational Test Strategy
o Test Management Processes
§ Risk Management
o Dynamic Testing Processes
§ Test Execution
§ Test Completion and Reporting
o Component testing: focuses on testing components in isolation.
o Component integration testing: focuses on testing the interfaces and interactions between components; the approach depends on the integration strategy, such as bottom-up, top-down, or big-bang.
o System testing: focuses on the overall behavior and capabilities of an entire system or product.
o System integration testing (SIT): focuses on testing the interfaces between the system under test and other systems and external services.
o User acceptance testing (UAT): focuses on validation and on demonstrating readiness for deployment, which means that the system fulfills the user’s business needs.
o Functional testing: evaluates the functions that a component or system should perform.
o Non-functional testing: evaluates attributes other than functional characteristics of a component or system. Non-functional software quality characteristics include Performance Efficiency, Compatibility, Usability, Reliability, Security, Maintainability, and Portability.
o Black-box testing: specification-based; derives tests from documentation external to the test object.
o White-box testing: structure-based; derives tests from the system's implementation or internal structure (e.g., code, architecture, workflows, and data flows).
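§ A minimal white-box sketch (the grade function is an assumption for illustration): two test cases are enough to achieve 100% branch coverage of the single decision.

    # Branch coverage sketch -- hypothetical pass/fail grading function.
    def grade(score: int) -> str:
        if score >= 50:        # branch 1: condition true
            return "pass"
        return "fail"          # branch 2: condition false

    def test_pass_branch():
        assert grade(80) == "pass"

    def test_fail_branch():
        assert grade(20) == "fail"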
o Confirmation testing or retesting confirms that an original defect has been successfully fixed; regression testing confirms that no adverse consequences have been caused by a change.
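§ A minimal sketch of both (the rounding defect is an assumption for illustration): the test that exposed the defect is re-run after the fix (confirmation), and neighboring tests are re-run as well (regression).

    # Confirmation vs. regression sketch -- hypothetical rounding defect.
    # The fixed function previously truncated with int(amount * 100),
    # so to_cents(0.29) returned 28 instead of 29.
    def to_cents(amount: float) -> int:
        return round(amount * 100)

    def test_reported_defect_is_fixed():       # confirmation test (re-test)
        assert to_cents(0.29) == 29            # failed before the fix

    def test_existing_behavior_unchanged():    # regression tests
        assert to_cents(1.00) == 100
        assert to_cents(0.50) == 50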
o Test Policy
o Test Strategy
o Master test plan
o Test plan
o Traceability Matrix
o High-level scenarios (HLS)
o Test cases (TC)
o Test data requirements
o Test Execution Log
o Test Status report (Progress report)
o Test Completion report (Summary report)
o Defect report
Project progress metrics (e.g.,
task completion, resource usage, test effort)
o
Test progress metrics (e.g.,
test case implementation progress, test environment preparation progress,
number of test cases run/not run, passed/failed, test execution time)
o
Product quality metrics (e.g.,
availability, response time, mean time to failure)
o
Defect metrics (e.g.,
number and priorities of defects found/fixed, defect density, defect detection percentage)
o
Risk metrics (e.g.,
residual risk level)
o
Coverage metrics (e.g.,
requirements coverage, code coverage)
o
Cost metrics (e.g.,
cost of testing, organizational cost of quality)
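§ A minimal sketch of how a few of these metrics can be computed from raw counts (function names and input values are assumptions for illustration; the formulas follow the usual definitions of defect density and defect detection percentage):

    # Test metrics sketch -- all input values below are made up.
    def defect_density(defects_found: int, size_kloc: float) -> float:
        """Defects per thousand lines of code (KLOC)."""
        return defects_found / size_kloc

    def defect_detection_percentage(found_in_testing: int, found_after_release: int) -> float:
        """DDP: share of all known defects that testing caught before release."""
        return 100.0 * found_in_testing / (found_in_testing + found_after_release)

    def pass_rate(passed: int, failed: int, not_run: int) -> float:
        """Passed test cases as a share of all planned test cases."""
        return 100.0 * passed / (passed + failed + not_run)

    print(defect_density(42, 12.5))                       # 3.36 defects/KLOC
    print(defect_detection_percentage(90, 10))            # 90.0
    print(pass_rate(passed=180, failed=15, not_run=5))    # 90.0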
o The software testing lifecycle (STLC) consists of key activities to ensure that all software quality goals are met:
§ Requirements Analysis
§ Test Planning
§ Test Case Design
§ Test Environment Setup
§ Test Execution
§ Test Closure
o Functional Suitability
§ Functional Correctness: involves verifying the application's adherence to the specified or implied requirements, and may also include computational accuracy.
§ Functional Appropriateness: involves evaluating and validating the appropriateness of a set of functions for its intended specified tasks.
§ Functional Completeness: performed to determine the coverage of specified tasks and user objectives by the implemented functionality. Traceability between specification items (e.g., requirements, user stories, use cases) and the implemented functionality (e.g., function, component, workflow) is essential to enable required functional completeness to be determined.
o Usability
§ Appropriateness recognizability: verifying that users can recognize whether a product or system is appropriate for their needs.
§ Learnability: verifying that the product or system enables the user to learn how to use it with effectiveness and efficiency.
§ Operability: verifying that the product or system is easy to operate and control, and appropriate to use.
§ User error protection: verifying that the product or system protects users against making errors.
§ User interface aesthetics: verifying that the user interface enables pleasing and satisfying interaction for the user.
§ Accessibility: verifying that the product or system can be used by people with the widest range of characteristics and capabilities to achieve a specified goal in a specified context of use.
o Portability
§ Adaptability: verifying that the product or system can effectively and efficiently be adapted for different or evolving hardware, software, or other operational or usage environments.
§ Installability: verifying the effectiveness and efficiency with which a product or system can be successfully installed and/or uninstalled in a specified environment.
§ Replaceability: verifying that the product can replace another specified software product for the same purpose in the same environment.
o Compatibility
§ Interoperability: verifies the exchange of information between two or more systems or components. Tests focus on the ability to exchange information and subsequently use the information that has been exchanged.
o Functional testing evaluates the functions that a component or system should perform. The functions are “what” the test object should do. The main objective of functional testing is checking functional completeness, functional correctness, and functional appropriateness.
o Test analysis and design techniques include coverage-based test techniques and experience-based test techniques.
o Coverage-based test techniques include Equivalence Partitioning (EP), Boundary Value Analysis (BVA), Decision Table Testing, State Transition Testing, Use Case Testing, Data Cycle Testing (CRUD testing), and others.
§ The EP technique divides data into partitions based on the expectation that all elements of a given partition are to be processed in the same way by the test object.
§ If a test case that tests one value from an equivalence partition detects a defect, this defect should also be detected by test cases that test any other value from the same partition. Therefore, one test for each partition is sufficient.
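· A minimal EP sketch in Python (the shipping_cost function and its three-partition specification are assumptions for illustration): one representative value per partition gives full EP coverage.

    # Equivalence Partitioning sketch -- hypothetical shipping-cost spec:
    #   order value  0.00..49.99  -> 5.00
    #   order value 50.00..99.99  -> 2.50
    #   order value 100.00 and up -> 0.00 (free shipping)
    def shipping_cost(order_value: float) -> float:
        if order_value < 50:
            return 5.00
        if order_value < 100:
            return 2.50
        return 0.00

    # Under EP, one test value per partition is sufficient.
    def test_low_value_partition():
        assert shipping_cost(20.00) == 5.00

    def test_mid_value_partition():
        assert shipping_cost(75.00) == 2.50

    def test_high_value_partition():
        assert shipping_cost(250.00) == 0.00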
§ BVA is a technique based on exercising the boundaries of equivalence partitions.
§ The minimum and maximum values of a partition are its boundary values.
§ BVA can only be applied to ordered partitions: if two elements belong to the same partition, all elements between them must also belong to that partition.
§ In 2-value BVA, for each boundary value there are two coverage items: the boundary value itself and its closest neighbor belonging to the adjacent partition. To achieve 100% coverage with 2-value BVA, test cases must exercise all coverage items, i.e., all identified boundary values.
§ In 3-value BVA, for each boundary value there are three coverage items: the boundary value itself and both its neighbors. Therefore, in 3-value BVA some of the coverage items may not be boundary values. To achieve 100% coverage with 3-value BVA, test cases must exercise all coverage items, i.e., the identified boundary values and their neighbors.
§ Example of EP with 3-value BVA:
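· A sketch assuming a hypothetical rule that accepts integer ages from 18 to 65: EP gives three partitions (below 18, 18..65, above 65), the boundary values are 17, 18, 65, and 66, and 3-value BVA additionally exercises their neighbors 16, 19, 64, and 67.

    # EP with 3-value BVA sketch -- hypothetical eligibility rule (integer ages).
    # Partitions: age <= 17 (invalid), 18..65 (valid), age >= 66 (invalid).
    # Boundary values: 17, 18, 65, 66; their neighbors add 16, 19, 64, 67.
    def is_eligible(age: int) -> bool:
        return 18 <= age <= 65

    def test_lower_boundary_3_value_bva():
        for age, expected in [(16, False), (17, False), (18, True), (19, True)]:
            assert is_eligible(age) is expected

    def test_upper_boundary_3_value_bva():
        for age, expected in [(64, True), (65, True), (66, False), (67, False)]:
            assert is_eligible(age) is expected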
§ Decision tables are used for testing the implementation of system requirements that specify how different combinations of conditions result in different outcomes.
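· A minimal decision table sketch (the discount rules are assumptions for illustration): each rule, i.e., each column of the table, becomes one test case.

    # Decision table sketch -- hypothetical discount rules:
    #   Conditions        R1    R2    R3    R4
    #   member?           T     T     F     F
    #   order >= 100?     T     F     T     F
    #   Action: discount  15%   10%   5%    0%
    import pytest

    def discount(is_member: bool, order_value: float) -> float:
        if is_member:
            return 0.15 if order_value >= 100 else 0.10
        return 0.05 if order_value >= 100 else 0.00

    # One test case per decision table rule.
    @pytest.mark.parametrize("is_member, order_value, expected", [
        (True, 150.0, 0.15),   # R1
        (True, 50.0, 0.10),    # R2
        (False, 150.0, 0.05),  # R3
        (False, 50.0, 0.00),   # R4
    ])
    def test_discount_rules(is_member, order_value, expected):
        assert discount(is_member, order_value) == expected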
§ A state transition diagram shows the possible software states, as well as how the software enters, exits, and transitions between states.
§ A transition is initiated by an event (e.g., user input of a value into a field), and the same event can result in two or more different transitions from the same state. The state change may result in the software taking an action (e.g., outputting a calculation or error message).
§ A state transition table shows all valid transitions and potentially invalid transitions between states, as well as the events and resulting actions for valid transitions. State transition diagrams normally show only the valid transitions and exclude the invalid ones.
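· A minimal state transition sketch (the document workflow, its states, and its events are assumptions for illustration): the transition table drives both the implementation stub and the tests, which cover every valid transition plus one invalid transition.

    # State transition sketch -- hypothetical document workflow.
    # (state, event) -> next state; any other pair is an invalid transition.
    import pytest

    TRANSITIONS = {
        ("draft", "submit"): "review",
        ("review", "approve"): "published",
        ("review", "reject"): "draft",
    }

    def next_state(state: str, event: str) -> str:
        if (state, event) not in TRANSITIONS:
            raise ValueError(f"invalid transition: {event!r} in state {state!r}")
        return TRANSITIONS[(state, event)]

    def test_all_valid_transitions():
        for (state, event), target in TRANSITIONS.items():
            assert next_state(state, event) == target

    def test_invalid_transition_is_rejected():
        with pytest.raises(ValueError):
            next_state("draft", "approve")  # "approve" is not allowed from "draft"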
§ Tests can be derived from use cases, which are a specific way of designing interactions with software items. Use cases incorporate requirements for the software functions and are associated with actors (human users, external hardware, or other components or systems) and subjects (the component or system to which the use case is applied).
§ The data cycle test or CRUD test (CREATE-READ-UPDATE-DELETE) is a technique for testing whether data are being used and processed consistently by various functions within different subsystems or even different systems. The technique is ideally suited to testing overall functionality, suitability, and connectivity.
§ CRUD testing focuses on the coupling between different functions and how they handle common data.
§ An example of a data cycle: a subsystem that invoices orders and processes payments. The relevant part of the CRUD matrix is sketched below.
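· A hypothetical fragment of such a CRUD matrix (an illustrative assumption; C = create, R = read, U = update, D = delete):

    Function \ Entity        Order    Invoice    Payment
    Enter order              C        -          -
    Generate invoice         R        C          -
    Process payment          R        U          C
    Archive order            U, D     R          R

· CRUD testing then checks, per entity, that each operation is exercised and that data created by one function is read, updated, and deleted consistently by the others.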
o Experience-based Test Techniques
§ Error guessing
§ Exploratory testing
§ Checklist-based testing
o Mobile Application Testing includes Testing for Compatibility with Device Hardware, App Interactions with Device Software, and Various Connectivity Methods.
o Testing for Compatibility with Device Hardware
§ Testing for Device Features
§ Testing for Different Displays
§ Testing for Device Temperature
§ Testing for Device Input Sensors
§ Testing Various Input Methods
§ Testing for Screen Orientation Change
§ Testing for Typical Interrupts
§ Testing for Access Permissions to Device Features
§ Testing for Power Consumption and State
o Testing for App Interactions with Device Software
§ Testing for Notifications
§ Testing for Quick-access Links
§ Testing for User Preferences Provided by the Operating System
§ Testing for Different Types of Apps
§ Testing for Interoperability with Multiple Platforms and Operating System Versions
§ Testing for Interoperability and Co-existence with other Apps on the Device
o Testing for Various Connectivity Methods
§ Testing for cellular networks such as 2G, 3G, 4G and 5G
§ Testing for wireless connection types such as NFC or Bluetooth.
o Usability testing includes testing the User Interface (UI), User Experience (UX), and Accessibility of software products.
§ Usability is the extent to which a software product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use.
§ The user interface (UI) consists of all components of a software product that provide information and controls for the user to accomplish specific tasks with the system.
§ User experience (UX) describes a person’s perceptions and responses that result from the use and/or anticipated use of a product, system, or service.
§ Accessibility is the degree to which a product or system can be used by people with the widest range of characteristics and capabilities to achieve a specified goal in a specified context of use.
o A usability test has the following three principal steps and associated tasks:
§ Prepare usability test
· Create usability test plan
· Recruit usability test participants
· Write usability test script(s)
· Define usability test tasks
· Pilot usability test session
§ Conduct usability test sessions
· Prepare session
· Perform briefing with pre-session instructions
· Conduct pre-session interview
· Moderate session
· Conduct post-session interview
§ Communicate results and findings
· Analyze findings
· Write usability test report
· Sell findings (i.e., convince people)
o Human-centered design
§ An approach to design that aims to make software products more usable by focusing on the use of software products and applying human factors, ergonomics, and usability knowledge and techniques.
§ The human-centered design process can be summarized as: Analyze, Design, Evaluate, Iterate.
§ The human-centered design activities are based on the following three key elements: Users, Evaluation, Iterations.
o Test management role
§ Takes overall responsibility for managing test activities, managing the product, and managing the team
§ Managing test activities
· Test process: mainly focused on the activities of test planning, test monitoring and control, and test completion
· Risk-based testing: identification, assessment, monitoring, and mitigation of risks to drive testing
· Project test strategy
· Improving the test process
§ Managing the product
· Test estimation: effort, time, and cost
· Test metrics
· Defect management
§ Managing the team
· Test team
· Stakeholder relationships
o Testing role
§ Takes overall responsibility for the engineering aspect of testing.
§ Mainly focused on the activities of test analysis, test design, test implementation, and test execution.
§ Contributes to risk-based testing (risks related to the business domain).
o Defining a test strategy consistent with the organizational test strategy and project context
o Reviewing and analyzing system specifications to ensure clarity and understanding of requirements
o Creating detailed, comprehensive, and well-structured test plans
o Recognizing and classifying the risks associated with the functional correctness and usability of software systems
o Coordinating the test plans with project managers, product owners, the development team, and other stakeholders
o Performing the appropriate testing processes and activities based on the software development lifecycle
o Designing test scenarios and test cases based on specifications and requirements
o Executing test cases and analyzing results to report any defects or issues
o Conducting functional testing and usability testing to ensure software quality
o Tracking defects or inconsistencies in the product's functionality and user interface
o Collaborating with cross-functional teams to ensure quality throughout the software development lifecycle
o Collaborating with a cross-functional Agile team and applying practices of Agile software development
o Continuously monitoring and controlling testing to achieve project goals
o Introducing metrics for measuring test progress and evaluating the quality of the testing and the product
o Preparing and delivering test progress reports, test summary reports, and test completion reports
o Participating in continuous improvement initiatives to enhance the efficiency and effectiveness of the testing process
o Tool Categories:
§ Application lifecycle management (ALM) and Test Management tools
§ Requirements management tools
§ Static testing tools
§ Test design and implementation tools
§ Test execution and coverage tools
§ Defect Tracking and Issue Management tools
§ Mobile Application testing tools.
o Examples of tools: Jira, Zephyr, Bugzilla, MantisBT, TestRail, qTest, Redmine, HP Quality Center, TestLink, Azure DevOps, Azure Test Plans, and so on.