Università della Svizzera italiana, Software Institute
Declarative Performance Testing Automation
Vincenzo Ferme
Committee Members:

Internal: Prof. Walter Binder, Prof. Mauro Pezzè

External: Prof. Lionel Briand, Prof. Dr. Dr. h. c. Frank Leymann
Research Advisor:
Prof. Cesare Pautasso
Automating Performance Testing for the DevOps Era
2
Outline
‣ Context
‣ State of the Art & Declarative Performance Engineering
‣ Problem Statement & Research Goals
‣ Main Contributions
‣ Evaluations & Overview of Case Studies
‣ Open Challenges
‣ Career and Contributions
‣ Concluding Remarks and Highlights
Context
3
4
Context
Continuous Software Development Lifecycles and DevOps ➤ Performance Engineering ➤ Performance Testing and DevOps
[Diagram: C.S.D.L., the Continuous Software Development Lifecycle: Developers, Testers, and Architects push to a Repo; a CI Server and a CD Server deliver to Production]
5
Context
Continuous Software Development Lifecycles and DevOps ➤ Performance Engineering ➤ Performance Testing and DevOps
How often do you check in code?
[CNCF Survey 2020]
Cumulative growth in commits by quarter (Q1 2015-Q4 2019)
The majority of respondents (53%) check in code multiple times a day.
How often are your release cycles?
6
Context
Continuous Software Development Lifecycles and DevOps ➤ Performance Engineering ➤ Performance Testing and DevOps
Containers: 92%
7
Context
Continuous Software Development Lifecycles and DevOps ➤ Performance Engineering ➤ Performance Testing and DevOps
Time to Market
Fast feedback-loop
Scalability and Availability
Fewer Production Errors
8
Context
Continuous Software Development Lifecycles and DevOps ➤ Performance Engineering ➤ Performance Testing and DevOps
Scalability and Availability
3rd Party Performance
Match Performance Requirements
9
Context
Continuous Software Development Lifecycles and DevOps ➤ Performance Engineering ➤ Performance Testing and DevOps
[Diagram: Developers, Testers, and Architects push Continuous Changes to the Repo; the CI Server drives Continuous Test Execution]
“Only conducting performance testing at the conclusion of system or functional testing is like conducting a diagnostic blood test on a patient who is already dead.”
Scott Barber
10
State of the Art
Perf. Testing and DevOps ➤ Declarative Perf. Engineering
Performance Testing is Rarely Applied in DevOps Processes
[Bezemer et al., ICPE 2019]
Bezemer, C.-P., Eismann, S., Ferme, V., Grohmann, J., Heinrich, R., Jamshidi, P., Shang, W., van Hoorn, A., Villavicencio, M., Walter, J., and Willnecker, F. (2019). How is Performance Addressed in DevOps? In Proceedings of the 10th ACM/SPEC International Conference on Performance Engineering (ICPE), pages 45–50.
11
State of the Art
Complexity of Def. and Exec. [Streitz et al., 2018] [Leitner and Bezemer, 2017]
Slowness of Execution [Brunnert et al., 2015]
Lack of Native Support for CI/CD Tools [Leitner and Bezemer, 2017]
Perf. Testing and DevOps ➤ Declarative Perf. Engineering
12
Declarative Performance Engineering
“Enabling the performance analyst to declaratively specify what performance-relevant questions need to be answered without being concerned about how they should be answered.” [Walter et al., 2016]
Developers, Testers, Architects, Performance Analyst …
[Walter et al., 2016] Jürgen Walter, André van Hoorn, Heiko Koziolek, Dusan Okanovic, and Samuel Kounev. Asking ”What”?, Automating the ”How”? - The Vision of Declarative Performance Engineering. In Proc. of ICPE 2016. 91–94.
[Ferme and Pautasso, ICPE 2018] Ferme, V. and Pautasso, C. (2018). A Declarative Approach for Performance Tests Execution in Continuous Software Development Environments. In Proceedings of the 9th ACM/SPEC International Conference on Performance Engineering (ICPE), pages 261–272.
13
State of the Art
[Walter, 2018]
DECLARE
Proposes languages and tools for specifying performance concerns, and declaratively querying performance knowledge collected and modelled by different tools, with the objective of providing automated answers to the specified performance concerns.
Perf. Testing and DevOps ➤ Declarative Perf. Engineering
14
State of the Art
[Schulz et al., 2020]
ContinuITy
Focuses on dealing with the challenges of continuously updating performance tests, by leveraging performance knowledge of software systems collected and modelled from the software operating in production environments.
[Avritzer et al., 2020] [Okanovic et al., 2020] [Schulz et al., 2019]
Perf. Testing and DevOps ➤ Declarative Perf. Engineering
Problem Statement & Research Goals
15
16
Problem Statement
To design new methods and techniques for the declarative specification of performance tests and their automation processes, and to provide models and frameworks enabling continuous and automated execution of performance tests, in particular referring to the target systems, target users and context of our work.
17
Research Goals
R.G. 1 (Which Tests?)
R.G. 2 (How to Specify?)
R.G. 3 (How to Automate?)
R.G. 4 (How to CSDL?)
Main Contributions
18
19
Main Contributions
Overall Contribution ➤ Main Contributions Overview
A Declarative Approach for Performance Tests Execution Automation, enabling the continuous and automated execution of performance tests alongside the Continuous Software Development Lifecycle, and embracing DevOps goals by enabling the end-to-end execution of service-level performance tests, including S.U.T. lifecycle management.
20
Main Contributions
Overall Contribution ➤ Main Contributions Overview
Automation-oriented Performance Tests Catalog
BenchFlow Declarative DSL
BenchFlow Model-driven Framework
[Diagram thumbnails: the BenchFlowTest DSL model and the model-driven framework pipeline, both detailed on the following slides]
21
Automation-oriented Performance Tests Catalog
Why a Catalog? ➤ Catalog Template ➤ Included Performance Tests ➤ Examples
Fill a gap identified in the performance testing literature by contributing an automation-oriented performance test catalog, providing a comprehensive reference to properly identify different kinds of performance tests and their automation requirements.
22
Automation-oriented Performance Tests Catalog
Why a Catalog? ➤ Catalog Template ➤ Included Performance Tests ➤ Examples
- Assumptions on the S.U.T. maturity
- Expectations on the execution environment conditions
- Workload input parameters
- Required execution process
- Checks to be performed on the S.U.T.
- Measurements to be collected and metrics to be calculated
- Preliminary performance tests to be already executed
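Instantiated for one test kind, the template could look as follows. This is a minimal sketch in YAML; every field value is an illustrative assumption, not the catalog's actual wording:

# Hypothetical catalog entry for a Load Test (illustrative values only)
test: "Load Test"
sut_maturity: "feature-complete, deployable release candidate"   # assumption
environment_conditions: "dedicated, production-like testbed"     # assumption
workload_parameters: ["users", "ramp_up", "steady_state", "ramp_down", "workload mix"]
execution_process: "deploy S.U.T., apply load function, collect data, analyse results"
sut_checks: ["health check before start", "error rate during the run"]  # assumption
measurements_and_metrics: ["THROUGHPUT", "AVG_RESPONSE_TIME", "AVG_CPU", "AVG_RAM"]
preliminary_tests: ["Smoke Test"]   # a smoke test is commonly required first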
23
Automation-oriented Performance Tests Catalog
Why a Catalog? ➤ Catalog Template ➤ Included Performance Tests ➤ Examples
24
Automation-oriented Performance Tests Catalog
Why a Catalog? ➤ Catalog Template ➤ Included Performance Tests ➤ Examples
1. Baseline Performance Test
2. Unit Performance Test
3. Smoke Test
4. Performance Regression Test
5. Sanity Test
6. Load Test
7. Scalability Test
8. Elasticity Test
9. Stress Test
10. Peak Load Test
11. Spike Test
12. Throttle Test
13. Soak or Stability Test
14. Exploratory Test
15. Configuration Test
16. Benchmark Performance Test
17. Acceptance Test
18. Capacity or Endurance Test
19. Chaos Test
20. Live-traffic or Canary Test
21. Breakpoints Perf. Test
22. Failover or Recovery Test
23. Resiliency or Reliability Test
24. Snapshot-load Test
25. Volume or Flood Test
25
Automation-oriented Performance Tests Catalog
Why a Catalog? ➤ Catalog Template ➤ Included Performance Tests ➤ Examples
Exploratory Test
26
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
BenchFlow Declarative DSL and BenchFlow Model-driven Framework
[Diagram thumbnails: the BenchFlowTest model and the framework pipeline, both detailed on the following slides]
[Ferme and Pautasso, ICPE 2018] [Ferme et al., BPM 2015] [Ferme and Pautasso, ICPE 2016]
27
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
BenchFlow Declarative DSL
Load Functions
Workloads
Simulated Users
Test Data
TestBed Management
Performance Data Analysis
Definition of Configuration Tests
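As a concrete illustration of the first element, a load function is declared in terms of users and ramp phases. A minimal sketch following the LoadFunction model shown later (users, ramp_up, steady_state, ramp_down); the time notation is an assumption:

load_function:
  users: 50          # number of simulated users at full load (Option<Int>)
  ramp_up: 2m        # Time: ramp from 0 to 50 users
  steady_state: 10m  # Time: hold the full load
  ramp_down: 1m      # Time: ramp back down to 0 users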
28
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
BenchFlow Declarative DSL
Integration in CSDL
Goal-Driven Performance Testing
SUT-awareness
29
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
BenchFlow Model-driven Framework
Test Scheduling
Manage S.U.T.
Deployment Infra.
Issue Workload
Collect Data
Analyse Data
30
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
[Diagram: BenchFlow Model-driven Framework, with three phases: Exploration, Execution, Analysis]
Test Bundle (Test Suite / Test YAML + SUT Deployment Descriptor YAML + Files)
➤ Goal Exploration ➤ Experiment Generation
➤ Experiment Bundle (Experiment YAML + SUT Deployment Descriptor YAML + Files)
➤ Experiment Execution ➤ on Success: Result Analysis ➤ Metrics; on Execution Errors: Failures
31
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
[UML class diagram: the BenchFlowTest model]
BenchFlowTest: version: TestVersion, name: String, description: Option<String>, workloads: Map<String, Workload> (workload_name 1..N), labels: Option<String>; configuration 1
«abstract» Workload: popularity: Option<Percent>
«enumeration» TestVersion: 1, 1.1, 2, 3
BenchFlowTestConfiguration: sut 1, goal 1, load_function 1, data_collection 0..1, termination_criteria 0..1, quality_gates 0..1
Sut: name: String, type: Option<SutType>, services_configuration: Option<Map<String, ServiceConfigurations>>
Goal: type: GoalType, stored_knowledge: Option<Boolean>
LoadFunction: users: Option<Int>, ramp_up: Time, steady_state: Time, ramp_down: Time
DataCollection: only_declared: Boolean, services: Option<Map<String, ServerSideConfiguration>>, workloads: Option<Map<String, ClientSideConfiguration>>
TerminationCriteria: test: TestTerminationCriterion, experiment: ExperimentTerminationCriterion
QualityGates
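In the concrete YAML syntax (see slide 43), a test following this model could be sketched as below. The goal and load function values are illustrative, and the workloads and sut sections are omitted for brevity:

version: "3"
name: "Load Test"
description: "Example of Load Test"
labels: "load_test"
configuration:
  goal:
    type: "load_test"
  load_function:
    users: 50
    ramp_up: 2m
    steady_state: 10m
    ramp_down: 1m
  # data_collection, termination_criteria, and quality_gates are optional (0..1)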
32
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
[UML class diagram: the Goal model]
Goal: type: GoalType, stored_knowledge: Option<Boolean>; observe 1, exploration 0..1
«enumeration» GoalType: LOAD, SMOKE, SANITY, CONFIGURATION, SCALABILITY, SPIKE, EXHAUSTIVE_EXPLORATION, STABILITY_BOUNDARY, CAPACITY_CONSTRAINTS, REGRESSION_COMPLETE, REGRESSION_INTERSECTION, ACCEPTANCE
Observe: services 0..N (ServiceObserve), workloads 0..N (WorkloadObserve)
ServiceObserve: service_name: List<ServiceMetric>
WorkloadObserve: workload_name: Option<List<WorkloadMetric>>, operation_name: Option<List<WorkloadMetric>>
«enumeration» ServiceMetric: AVG_RAM, AVG_CPU, RESOURCE_COST, ...
«enumeration» WorkloadMetric: AVG_RESPONSE_TIME, THROUGHPUT, AVG_LATENCY, ...
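A hedged sketch of how a goal with an observe section might be written in the YAML syntax; the service and workload names are hypothetical, and the exact nesting is assumed from the model:

goal:
  type: "load_test"
  stored_knowledge: false
  observe:
    services:
      booking-service: [AVG_CPU, AVG_RAM]             # hypothetical service name
    workloads:
      main_workload: [THROUGHPUT, AVG_RESPONSE_TIME]  # hypothetical workload name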
33
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
[UML class diagram: the Exploration model]
Exploration: exploration_space 1, exploration_strategy 1, stability_criteria 0..1
ExplorationSpace: load_function 0..1, services: Option<Map<String, ServiceExplorationSpace>> (service_name 0..*)
LoadFunctionExplorationSpace: users: Option<List<Int>>, users_range: Option<[Int,Int]>, users_step: Option<StepFunction[Int]>
ServiceExplorationSpace: resources: Option<Map<Resource, String>> (resources 0..2), configuration: Option<Map<String, List<String>>>
«abstract» Resource
Memory: values: Option<List<Bytes>>, range: Option<[Bytes,Bytes]>, step: Option<StepFunction[Bytes]>
Cpu: values: Option<List<Millicores>>, range: Option<[Millicores,Millicores]>, step: Option<StepFunction[Millicores]>
StepFunction[T]: operator: StepFunctionOperator, value: T
«enumeration» StepFunctionOperator: PLUS, MINUS, MULTIPLY, DIVIDE, POWER
ExplorationStrategy: selection: SelectionStrategyType, validation: Option<ValidationStrategyType>, regression: Option<RegressionStrategyType>
StabilityCriteria: services: Option<Map<String, ServiceStabilityCriterion>> (service_name 0..*), workloads: Option<Map<String, WorkloadStabilityCriterion>> (workload_name 0..*)
ServiceStabilityCriterion: avg_cpu: Option<StabilityCriterionSetting[Percent]>, avg_memory: Option<StabilityCriterionSetting[Percent]>
WorkloadStabilityCriterion: max_mix_deviation: Percent
StabilityCriterionSetting[T]: operator: StabilityCriterionCondition, value: T
«enumeration» StabilityCriterionCondition: GREATHER_THAN, LESS_THAN, GREATHER_OR_EQUAL_THEN, LESS_OR_EQUAL_THEN, EQUAL
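For example, an exploration over the number of users and one service's memory could be declared as sketched below. Strategy names, units, and nesting are assumptions based on the model, not the DSL's verified syntax:

exploration:
  exploration_space:
    load_function:
      users_range: [10, 100]
      users_step: { operator: PLUS, value: 10 }    # 10, 20, ..., 100 users
    services:
      booking-service:                             # hypothetical service name
        resources:
          memory:
            range: [512MB, 2GB]
            step: { operator: MULTIPLY, value: 2 } # 512MB, 1GB, 2GB
  exploration_strategy:
    selection: "ONE_AT_A_TIME"                     # hypothetical SelectionStrategyType
  stability_criteria:
    services:
      booking-service:
        avg_cpu: { operator: LESS_THAN, value: 80% }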
34
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
[The BenchFlowTest model, as on slide 31]
35
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
[UML class diagram: the QualityGate model]
QualityGate: services: Option<Map<String, ServiceQualityGate>> (service_name 0..*), workloads: Option<Map<String, WorkloadQualityGate>> (workload_name 0..*), mean_absolute_error: Option<Percent>; regression 0..1
ServiceQualityGate: gate_metric: ServiceMetric, condition: GateCondition, gate_threshold_target: String OR ServiceMetric, gate_threshold_minimum: Option<String OR ServiceMetric>
WorkloadQualityGate: max_mix_deviation: Option<Percent>, max_think_time_deviation: Option<Percent>, gate_metric: Option<WorkloadMetric>, condition: Option<GateCondition>, gate_threshold_target: Option<String OR WorkloadMetric>, gate_threshold_minimum: Option<String OR WorkloadMetric>
RegressionQualityGate: service: Option<String>, workload: Option<String>, gate_metric: ServiceMetric OR WorkloadMetric, regression_delta_absolute: Option<Time>, regression_delta_percent: Option<Percent>
«enumeration» GateCondition: GREATHER_THAN, LESS_THAN, GREATHER_OR_EQUAL_THEN, LESS_OR_EQUAL_THEN, EQUAL, PERCENT_MORE, PERCENT_LESS
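A sketch of a quality gate on a workload metric plus a regression gate; names and thresholds are illustrative assumptions:

quality_gates:
  workloads:
    main_workload:                    # hypothetical workload name
      gate_metric: AVG_RESPONSE_TIME
      condition: LESS_THAN
      gate_threshold_target: "500ms"
  regression:
    workload: "main_workload"
    gate_metric: AVG_RESPONSE_TIME
    regression_delta_percent: 10%     # fail if more than 10% slower than the baseline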
36
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
[The BenchFlowTest model, as on slide 31]
37
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
[UML class diagram: the TerminationCriteria model]
TerminationCriteria: test 0..1, experiment 0..1
TestTerminationCriterion: max_time: Time, max_number_of_experiments: Option<Int>, max_failed_experiments: Option<Percent>
TerminationCriterion: max_number_of_trials: Int, max_failed_trials: Option<Percent>, services: Option<Map<String, ServiceTerminationCriterion>> (service_name 0..*), workloads: Option<Map<String, WorkloadTerminationCriterion>> (workload_name 0..*)
ServiceTerminationCriterion: confidence_interval_metric: ServiceMetric, confidence_interval_value: Float, confidence_interval_precision: Percent
WorkloadTerminationCriterion: confidence_interval_metric: WorkloadMetric, confidence_interval_value: Float, confidence_interval_precision: Percent
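Sketched in YAML, the two levels of termination criteria could look as follows (all values illustrative):

termination_criteria:
  test:
    max_time: 2h                      # hard bound on the whole test
    max_number_of_experiments: 20
    max_failed_experiments: 10%
  experiment:
    max_number_of_trials: 5
    max_failed_trials: 20%
    workloads:
      main_workload:                  # hypothetical workload name
        confidence_interval_metric: AVG_RESPONSE_TIME
        confidence_interval_value: 0.95
        confidence_interval_precision: 5%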
38
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
[The BenchFlow Model-driven Framework pipeline, as on slide 30]
39
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
[UML class diagram: the BenchFlowExperiment model]
BenchFlowExperiment: version: ExperimentVersion, name: String, description: Option<String>, workloads: Map<String, Workload> (workload 1), labels: Option<String>; configuration 1
«abstract» Workload: popularity: Option<Percent>
«enumeration» ExperimentVersion: 1, 1.1, 1.2, 1.3, 1.4, 2, 2.1, 2.2, 3
BenchFlowExperimentConfiguration: sut 1, load_function 1, data_collection 1, termination_criteria 1
Sut: name: String, type: Option<SutType>, services_configuration: Option<Map<String, ServiceConfigurations>>; SutVersion (version 1)
LoadFunction: users: Int, ramp_up: Time, steady_state: Time, ramp_down: Time
DataCollection: only_declared: Boolean, services: Option<Map<String, ServerSideConfiguration>>, workloads: Option<Map<String, ClientSideConfiguration>>
ExperimentTerminationCriteria: max_time: Time; experiment 0..1
TerminationCriterion: max_number_of_trials: Int, max_failed_trials: Option<Percent>, services: Option<Map<String, ServiceTerminationCriterion>>, workloads: Option<Map<String, WorkloadTerminationCriterion>>
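Unlike a test, a generated experiment binds every parameter to a concrete value (e.g., users is Int rather than Option<Int>). A sketch of one generated experiment; all names and values are illustrative:

version: "3"
name: "Load Test - experiment 1"  # hypothetical generated name
configuration:
  load_function:
    users: 50        # concrete value selected during Goal Exploration
    ramp_up: 2m
    steady_state: 10m
    ramp_down: 1m
  termination_criteria:
    max_number_of_trials: 3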
40
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
Checkout ➤ Build ➤ Unit Tests ➤ Integration Tests ➤ E2e Tests (FUNCTIONAL TESTS)
➤ Smoke Tests ➤ Load Tests ➤ Acceptance Tests ➤ Regression Tests (PERFORMANCE TESTS)
➤ Deploy in Production
41
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
Suite
Environment
name: String
skip_deploy: Option<Boolean>
environments 0..N
TestSuite
version: TestSuiteVersion
name: String
description: Option<String>
Test
include_labels: Option<List<Regex>>
paths: Option<List<String>>
tests
1
Push
branches: Option<List<Regex>>
Trigger
scheduled: Option<Boolean>
triggers
0..1
PullRequest
contexts: Option<ContextType>
source_branches: Option<List<Regex>>
target_branches: Option<List<Regex>>
QualityGate
criterion: CriterionType
exclude: Option<List<String>> quality_gates
1
Event
on 0..N
Release
types: List<String>
Deployment
names: List<String>
suite
1
«enumeration»
CriterionType
ALL_SUCCESS
AT_LEAST_ONE_SUCCESS
«enumeration»
TestSuiteVersion
1
1.1
«enumeration»
ContextType
HEAD
MERGE
ALL
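As a sketch of how this suite model could be written down — the field names come from the diagram, while the concrete nesting and values are assumptions:

version: "1.1"                      # TestSuiteVersion
name: "Performance Test Suite"
description: "Suite triggered from the CI/CD pipeline"
environments:
  - name: "staging"
    skip_deploy: false
tests:
  include_labels: ["load_.*"]
  paths: ["tests/performance/"]
triggers:
  scheduled: false
  on:
    - push:
        branches: ["main", "release/.*"]
    - pull_request:
        contexts: MERGE             # ContextType: HEAD | MERGE | ALL
        source_branches: ["feature/.*"]
        target_branches: ["main"]
    - release:
        types: ["published"]
    - deployment:
        names: ["staging"]
quality_gates:
  criterion: ALL_SUCCESS            # CriterionType
  exclude: ["smoke_only"]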
42
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
Test
43
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
1 version: "3"
2 name: "Load Test"
3 description: "Example of Load Test"
4 labels: "load_test"
5 configuration:
6 goal:
7 type: "load_test"
8 # stored_knowledge: "false"
9 observe:
10 ...
11 load_function:
12 users: 1000
13 ramp_up: 5m
14 steady_state: 20m
15 ramp_down: 5m
16 termination_criteria:
17 ...
18 quality_gates:
19 ...
20 sut:
21 ...
22 workloads:
23 ...
24 data_collection:
25 # AUTOMATICALLY attached based on the observe section IF NOT specified
26 services:
27 ...
Load Test
[Figure: Workflow Engine SUT — Users reach a Task Dispatcher; the Workflow Engine (Job Executor, Core Engine, Process Navigator, Persistent Manager, Transaction Manager, Service Invoker) runs on an Application Server, persists to an Instance Database on a DBMS, and invokes external Web Services; A–D mark process activities]
[Skouradaki et al., ICPE 2015]
[Ferme et al., BPM 2015]
[Ferme et al., CLOSER 2016]
[Skouradaki et al., BPM 2016]
[Ferme et al., BPM 2016]
[Ivanchikj et al., BPM 2017]
[Rosinosky et al., OTM 2018]
44
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
20 confidence_interval_precision: 95%
21 services:
22 service_a:
23 confidence_interval_metric: avg_cpu
24 confidence_interval_value: 60%
25 confidence_interval_precision: 95%
26 sut:
27 name: "my_app"
28 version: "v1.5"
29 type: "http"
30 sut_configuration:
31 default_target_service:
32 name: "service_a"
33 endpoint: "/"
34 sut_ready_log_check: "/(.*)System started(.*)/g"
35 deployment:
36 service_a: "my_server"
37 services_configuration:
38 service_a:
39 resources:
40 cpu: 100m
41 memory: 256Mi
42 configuration:
43 NUM_SERVICE_THREAD: 12
44 service_b:
45 resources:
46 cpu: 200m
47 memory: 256Mi
48 configuration:
49 THREADPOOL_SIZE: 64
50 dbms_a:
51 resources:
52 cpu: 100m
53 memory: 256Mi
54 configuration:
55 QUERY_CACHE_SIZE: 48Mi
56 workloads:
57 workload_a:
58 popularity: 70%
59 item_a:
60 driver_type: "http"
61 inter_operation_timings: "negative_exponential"
SUT Conf.
45
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
1 version: "3"
2 name: "Configuration Test"
3 description: "Example of Configuration Test"
4 labels: "configuration"
5 configuration:
6 goal:
7 type: "configuration"
8 stored_knowledge: "true"
9 observe:
10 ...
11 exploration:
12 exploration_space:
13 services:
14 service_a:
15 resources:
16 cpu:
17 range: [100m, 1000m]
18 step: "*4"
19 memory:
20 range: [256Mi, 1024Mi]
21 step: "+768Mi"
22 configuration:
23 NUM_SERVICE_THREAD: [12, 24]
24 dbms_a:
25 resources:
26 cpu:
27 range: [100m, 1000m]
28 step: "*10"
29 memory:
30 range: [256Mi, 1024Mi]
31 step: "+768Mi"
32 configuration:
33 QUERY_CACHE_SIZE: 48Mi
34 exploration_strategy:
35 selection: "one_at_a_time"
36 load_function:
37 ...
38 termination_criteria:
39 ...
40 quality_gates:
41 ...
Configuration Test
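A quick reading of this exploration space, assuming each step is applied within the declared range bounds: service_a has 2 cpu values (100m and 400m; the next *4 step, 1600m, exceeds 1000m), 2 memory values (256Mi and 1024Mi), and 2 NUM_SERVICE_THREAD settings, i.e. 2 × 2 × 2 = 8 configurations; dbms_a has 2 cpu values (100m and 1000m) and 2 memory values, i.e. 4. The full cross product would be 8 × 4 = 32 experiments; the one_at_a_time selection strategy instead varies one dimension at a time rather than enumerating them all.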
[Figure: domain-metric approach — (1) Collection of operational data; (2) Analysis of operational data with ContinuITy, yielding an operational profile (load λ over wall clock time) and an empirical distribution of workload situations (relative mass vs. number of users, with sampled workload situations λ'); (3) Experiment generation from a load test template and architectural configurations; (4) Experiment execution with BenchFlow and Faban; (5) Domain metric calculation, producing pass/fail results and a normalized test mass per architectural configuration (e.g. 100.00% for s0, 74.81% for sn) on a domain metric dashboard]
[Avritzer et al., JSS 2020]
[Avritzer et al., ICPE 2019]
[Avritzer et al., ECSA 2018]
47
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
[State machine: test execution life cycle — States: Ready → Running; Waiting (entered on user paused or user input needed, left on user input received); Terminating; Terminated, with outcomes Goal Reached, Partially Complete, and Completed with Failure. Running activities: Determine Exploration Strategy → Add Stored Knowledge → Remove Non Reachable Experiments → Determine and Execute Experiments (Experiment Life Cycle) → Handle Experiment Result → Validate Termination Criteria; with a regression model: Determine and Execute Initial Validation Set (Experiment Life Cycle) → Handle Experiment Result → Derive Prediction Function → Validate Prediction Function, with Remove Non Reachable Experiments for Validation Set while [validation set NOT complete]. Guards: [regression model] / [no regression model]; [experiments remaining] / [all experiments executed]; [Acceptable Prediction Error] / [Not Acceptable Prediction Error]; [can reach goal] / [cannot reach goal]; [# executed experiments >= max_number_of_experiments]; user terminated OR [execution time > max_time]. In Terminating, Check Quality Gates decides the final outcome: [quality gates pass] vs. [failed quality gates]]
49
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
Entity YAML Specification → Parse to YAML Object (parse + syntactic validation) → Semantic Validation → Entity Representation, or an Exception if either validation step fails
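A spec can be well-formed YAML and still be rejected semantically. A hypothetical fragment (the popularity-sum check is an assumption about what the semantic validator enforces):

workloads:
  workload_a:
    popularity: 70%
  workload_b:
    popularity: 50%   # parses fine, but 70% + 50% > 100%: a semantic
                      # validator could reject this with an Exception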
Evaluations & Case
Studies
52
53
Evaluations
Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation
Expert Review · Summative Evaluation
Iterative Review, Case Studies · Comparative Evaluation
54
Evaluations
Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation
Objective
- Expressiveness for performance testing automation
- Usability for target users
- Effort for target users
- Reusability for target users
- Well Suited for target users
- Suitability for target users vs. Imperative
55
Evaluations
Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation
Structure
- Introduction
- Background check
- Overview of the approach
- Questions related to the Research Questions
- Questions for additional feedback
- Conclusion
56
Evaluations
Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation
[Chart: Participants (18)]
57
Evaluations
Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation
Results Highlights
[Charts: result counts per question, axis 0–18]
60
Evaluations
Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation
Objective
Learnability · Reusability
61
Evaluations
Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation
Structure
- Introduction
- Background check
- Overview of the approach
- Multiple-choice tasks related to the Research Questions
- Questions on the overall approach
- Questions for additional feedback
- Conclusion
62
Evaluations
Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation
https://www.getfeedback.com/resources/online-surveys/better-online-survey-response-rates/
[Chart: Participants (63)]
63
Evaluations
Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation
Results Highlights
64
Evaluations
Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation
Results Highlights
65
Evaluations
Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation
Results Highlights
[Charts: result counts per task, axis 0–60]
Declarative Performance Testing Automation - Automating Performance Testing for the DevOps Era
Declarative Performance Testing Automation - Automating Performance Testing for the DevOps Era
Declarative Performance Testing Automation - Automating Performance Testing for the DevOps Era
Declarative Performance Testing Automation - Automating Performance Testing for the DevOps Era
Declarative Performance Testing Automation - Automating Performance Testing for the DevOps Era
Declarative Performance Testing Automation - Automating Performance Testing for the DevOps Era
Declarative Performance Testing Automation - Automating Performance Testing for the DevOps Era
Declarative Performance Testing Automation - Automating Performance Testing for the DevOps Era
Declarative Performance Testing Automation - Automating Performance Testing for the DevOps Era
Declarative Performance Testing Automation - Automating Performance Testing for the DevOps Era
Declarative Performance Testing Automation - Automating Performance Testing for the DevOps Era
Declarative Performance Testing Automation - Automating Performance Testing for the DevOps Era
Declarative Performance Testing Automation - Automating Performance Testing for the DevOps Era
Declarative Performance Testing Automation - Automating Performance Testing for the DevOps Era
Declarative Performance Testing Automation - Automating Performance Testing for the DevOps Era
Declarative Performance Testing Automation - Automating Performance Testing for the DevOps Era

Weitere ähnliche Inhalte

Was ist angesagt?

Manual Testing Notes
Manual Testing NotesManual Testing Notes
Manual Testing Notesguest208aa1
 
Automation testing material by Durgasoft,hyderabad
Automation testing material by Durgasoft,hyderabadAutomation testing material by Durgasoft,hyderabad
Automation testing material by Durgasoft,hyderabadDurga Prasad
 
Model-Based Testing: Theory and Practice. Keynote @ MoTiP (ISSRE) 2012.
Model-Based Testing: Theory and Practice. Keynote @ MoTiP (ISSRE) 2012.Model-Based Testing: Theory and Practice. Keynote @ MoTiP (ISSRE) 2012.
Model-Based Testing: Theory and Practice. Keynote @ MoTiP (ISSRE) 2012.Wolfgang Grieskamp
 
Complete testing@uma
Complete testing@umaComplete testing@uma
Complete testing@umaUma Sapireddy
 
Some Commonly Asked Question For Software Testing
Some Commonly Asked Question For Software TestingSome Commonly Asked Question For Software Testing
Some Commonly Asked Question For Software TestingKumari Warsha Goel
 
Test Driven iOS Development (TDD)
Test Driven iOS Development (TDD)Test Driven iOS Development (TDD)
Test Driven iOS Development (TDD)Babul Mirdha
 
Model-based Testing: Taking BDD/ATDD to the Next Level
Model-based Testing: Taking BDD/ATDD to the Next LevelModel-based Testing: Taking BDD/ATDD to the Next Level
Model-based Testing: Taking BDD/ATDD to the Next LevelBob Binder
 
Performance Testing | Instamojo
Performance Testing | InstamojoPerformance Testing | Instamojo
Performance Testing | InstamojoMohit Shukla
 
Qa interview questions and answers
Qa interview questions and answersQa interview questions and answers
Qa interview questions and answersGaruda Trainings
 
Quality assurance by Sadquain
Quality assurance by Sadquain Quality assurance by Sadquain
Quality assurance by Sadquain Xad Kuain
 
Manual Testing Material by Durgasoft
Manual Testing Material by DurgasoftManual Testing Material by Durgasoft
Manual Testing Material by DurgasoftDurga Prasad
 
The Automation Firehose: Be Strategic and Tactical by Thomas Haver
The Automation Firehose: Be Strategic and Tactical by Thomas HaverThe Automation Firehose: Be Strategic and Tactical by Thomas Haver
The Automation Firehose: Be Strategic and Tactical by Thomas HaverQA or the Highway
 
Resume arti soni
Resume arti soniResume arti soni
Resume arti soniAkash gupta
 
01. testing fresher-resume
01. testing fresher-resume01. testing fresher-resume
01. testing fresher-resumemuqtar12
 
Istqb intro with question answer for exam preparation
Istqb intro with question answer for exam preparationIstqb intro with question answer for exam preparation
Istqb intro with question answer for exam preparationKevalkumar Shah
 
Performance testing interview questions and answers
Performance testing interview questions and answersPerformance testing interview questions and answers
Performance testing interview questions and answersGaruda Trainings
 

Was ist angesagt? (19)

Manual Testing Notes
Manual Testing NotesManual Testing Notes
Manual Testing Notes
 
Automation testing material by Durgasoft,hyderabad
Automation testing material by Durgasoft,hyderabadAutomation testing material by Durgasoft,hyderabad
Automation testing material by Durgasoft,hyderabad
 
Model-Based Testing: Theory and Practice. Keynote @ MoTiP (ISSRE) 2012.
Model-Based Testing: Theory and Practice. Keynote @ MoTiP (ISSRE) 2012.Model-Based Testing: Theory and Practice. Keynote @ MoTiP (ISSRE) 2012.
Model-Based Testing: Theory and Practice. Keynote @ MoTiP (ISSRE) 2012.
 
Qa management in big agile teams
Qa management in big agile teamsQa management in big agile teams
Qa management in big agile teams
 
Complete testing@uma
Complete testing@umaComplete testing@uma
Complete testing@uma
 
Some Commonly Asked Question For Software Testing
Some Commonly Asked Question For Software TestingSome Commonly Asked Question For Software Testing
Some Commonly Asked Question For Software Testing
 
Test Driven iOS Development (TDD)
Test Driven iOS Development (TDD)Test Driven iOS Development (TDD)
Test Driven iOS Development (TDD)
 
Resume
Resume Resume
Resume
 
Model-based Testing: Taking BDD/ATDD to the Next Level
Model-based Testing: Taking BDD/ATDD to the Next LevelModel-based Testing: Taking BDD/ATDD to the Next Level
Model-based Testing: Taking BDD/ATDD to the Next Level
 
Performance testing and rpt
Performance testing and rptPerformance testing and rpt
Performance testing and rpt
 
Performance Testing | Instamojo
Performance Testing | InstamojoPerformance Testing | Instamojo
Performance Testing | Instamojo
 
Qa interview questions and answers
Qa interview questions and answersQa interview questions and answers
Qa interview questions and answers
 
Quality assurance by Sadquain
Quality assurance by Sadquain Quality assurance by Sadquain
Quality assurance by Sadquain
 
Manual Testing Material by Durgasoft
Manual Testing Material by DurgasoftManual Testing Material by Durgasoft
Manual Testing Material by Durgasoft
 
The Automation Firehose: Be Strategic and Tactical by Thomas Haver
The Automation Firehose: Be Strategic and Tactical by Thomas HaverThe Automation Firehose: Be Strategic and Tactical by Thomas Haver
The Automation Firehose: Be Strategic and Tactical by Thomas Haver
 
Resume arti soni
Resume arti soniResume arti soni
Resume arti soni
 
01. testing fresher-resume
01. testing fresher-resume01. testing fresher-resume
01. testing fresher-resume
 
Istqb intro with question answer for exam preparation
Istqb intro with question answer for exam preparationIstqb intro with question answer for exam preparation
Istqb intro with question answer for exam preparation
 
Performance testing interview questions and answers
Performance testing interview questions and answersPerformance testing interview questions and answers
Performance testing interview questions and answers
 

Ähnlich wie Declarative Performance Testing Automation - Automating Performance Testing for the DevOps Era

Resume_AnujTiwari
Resume_AnujTiwariResume_AnujTiwari
Resume_AnujTiwariAnuj Tiwari
 
Agile vs. DevOps for Continuous Testing: How to Optimize Your Pipeline
Agile vs. DevOps for Continuous Testing: How to Optimize Your PipelineAgile vs. DevOps for Continuous Testing: How to Optimize Your Pipeline
Agile vs. DevOps for Continuous Testing: How to Optimize Your PipelinePerfecto by Perforce
 
Software requirements engineering
Software requirements engineeringSoftware requirements engineering
Software requirements engineeringAbdul Basit
 
Continuous testing & devops with @petemar5hall
Continuous testing & devops with @petemar5hallContinuous testing & devops with @petemar5hall
Continuous testing & devops with @petemar5hallPeter Marshall
 
5WCSQ(CFP) - Quality Improvement by the Real-Time Detection of the Problems
5WCSQ(CFP) - Quality Improvement by the Real-Time Detection of the Problems5WCSQ(CFP) - Quality Improvement by the Real-Time Detection of the Problems
5WCSQ(CFP) - Quality Improvement by the Real-Time Detection of the ProblemsTakanori Suzuki
 
AfterTest Madrid March 2016 - DevOps and Testing Introduction
AfterTest Madrid March 2016 - DevOps and Testing IntroductionAfterTest Madrid March 2016 - DevOps and Testing Introduction
AfterTest Madrid March 2016 - DevOps and Testing IntroductionPeter Marshall
 
JAVED SAYYED RESUME (2)
JAVED SAYYED RESUME (2)JAVED SAYYED RESUME (2)
JAVED SAYYED RESUME (2)Javed Sayyed
 
Primer on application_performance_testing_v0.2
Primer on application_performance_testing_v0.2Primer on application_performance_testing_v0.2
Primer on application_performance_testing_v0.2Trevor Warren
 
ISTQB Advanced Study Guide - 2
ISTQB Advanced Study Guide - 2ISTQB Advanced Study Guide - 2
ISTQB Advanced Study Guide - 2Yogindernath Gupta
 
Quality engineering & testing in DevOps IT delivery with TMAP
Quality engineering & testing in DevOps IT delivery with TMAPQuality engineering & testing in DevOps IT delivery with TMAP
Quality engineering & testing in DevOps IT delivery with TMAPRik Marselis
 
Importance of Testing in SDLC
Importance of Testing in SDLCImportance of Testing in SDLC
Importance of Testing in SDLCIJEACS
 
Manual Testing Guide1.pdf
Manual Testing Guide1.pdfManual Testing Guide1.pdf
Manual Testing Guide1.pdfKhushal Chate
 
Industry-academia collaborations in Software Engineering: 20+ Years of Experi...
Industry-academia collaborations in Software Engineering: 20+ Years of Experi...Industry-academia collaborations in Software Engineering: 20+ Years of Experi...
Industry-academia collaborations in Software Engineering: 20+ Years of Experi...Vahid Garousi
 
An introduction to Software Testing and Test Management
An introduction to Software Testing and Test ManagementAn introduction to Software Testing and Test Management
An introduction to Software Testing and Test ManagementAnuraj S.L
 
JAVED SAYYED RESUME
JAVED SAYYED RESUMEJAVED SAYYED RESUME
JAVED SAYYED RESUMEJaved Sayyed
 
International Journal of Soft Computing and Engineering (IJS
International Journal of Soft Computing and Engineering (IJSInternational Journal of Soft Computing and Engineering (IJS
International Journal of Soft Computing and Engineering (IJShildredzr1di
 
Continuous Testing - The New Normal
Continuous Testing - The New NormalContinuous Testing - The New Normal
Continuous Testing - The New NormalTechWell
 
Aginext 2021: Built-in Quality - How agile coaches can contribute
Aginext 2021: Built-in Quality - How agile coaches can contributeAginext 2021: Built-in Quality - How agile coaches can contribute
Aginext 2021: Built-in Quality - How agile coaches can contributeDerk-Jan de Grood
 

Ähnlich wie Declarative Performance Testing Automation - Automating Performance Testing for the DevOps Era (20)

Resume_AnujTiwari
Resume_AnujTiwariResume_AnujTiwari
Resume_AnujTiwari
 
Agile vs. DevOps for Continuous Testing: How to Optimize Your Pipeline
Agile vs. DevOps for Continuous Testing: How to Optimize Your PipelineAgile vs. DevOps for Continuous Testing: How to Optimize Your Pipeline
Agile vs. DevOps for Continuous Testing: How to Optimize Your Pipeline
 
Software requirements engineering
Software requirements engineeringSoftware requirements engineering
Software requirements engineering
 
Continuous testing & devops with @petemar5hall
Continuous testing & devops with @petemar5hallContinuous testing & devops with @petemar5hall
Continuous testing & devops with @petemar5hall
 
5WCSQ(CFP) - Quality Improvement by the Real-Time Detection of the Problems
5WCSQ(CFP) - Quality Improvement by the Real-Time Detection of the Problems5WCSQ(CFP) - Quality Improvement by the Real-Time Detection of the Problems
5WCSQ(CFP) - Quality Improvement by the Real-Time Detection of the Problems
 
AfterTest Madrid March 2016 - DevOps and Testing Introduction
AfterTest Madrid March 2016 - DevOps and Testing IntroductionAfterTest Madrid March 2016 - DevOps and Testing Introduction
AfterTest Madrid March 2016 - DevOps and Testing Introduction
 
JAVED SAYYED RESUME (2)
JAVED SAYYED RESUME (2)JAVED SAYYED RESUME (2)
JAVED SAYYED RESUME (2)
 
Primer on application_performance_testing_v0.2
Primer on application_performance_testing_v0.2Primer on application_performance_testing_v0.2
Primer on application_performance_testing_v0.2
 
ISTQB Advanced Study Guide - 2
ISTQB Advanced Study Guide - 2ISTQB Advanced Study Guide - 2
ISTQB Advanced Study Guide - 2
 
Software Testing
Software TestingSoftware Testing
Software Testing
 
Quality engineering & testing in DevOps IT delivery with TMAP
Quality engineering & testing in DevOps IT delivery with TMAPQuality engineering & testing in DevOps IT delivery with TMAP
Quality engineering & testing in DevOps IT delivery with TMAP
 
Importance of Testing in SDLC
Importance of Testing in SDLCImportance of Testing in SDLC
Importance of Testing in SDLC
 
Manual Testing Guide1.pdf
Manual Testing Guide1.pdfManual Testing Guide1.pdf
Manual Testing Guide1.pdf
 
Industry-academia collaborations in Software Engineering: 20+ Years of Experi...
Industry-academia collaborations in Software Engineering: 20+ Years of Experi...Industry-academia collaborations in Software Engineering: 20+ Years of Experi...
Industry-academia collaborations in Software Engineering: 20+ Years of Experi...
 
Test-Driven Code Review: An Empirical Study
Test-Driven Code Review: An Empirical StudyTest-Driven Code Review: An Empirical Study
Test-Driven Code Review: An Empirical Study
 
An introduction to Software Testing and Test Management
An introduction to Software Testing and Test ManagementAn introduction to Software Testing and Test Management
An introduction to Software Testing and Test Management
 
JAVED SAYYED RESUME
JAVED SAYYED RESUMEJAVED SAYYED RESUME
JAVED SAYYED RESUME
 
International Journal of Soft Computing and Engineering (IJS
International Journal of Soft Computing and Engineering (IJSInternational Journal of Soft Computing and Engineering (IJS
International Journal of Soft Computing and Engineering (IJS
 
Continuous Testing - The New Normal
Continuous Testing - The New NormalContinuous Testing - The New Normal
Continuous Testing - The New Normal
 
Aginext 2021: Built-in Quality - How agile coaches can contribute
Aginext 2021: Built-in Quality - How agile coaches can contributeAginext 2021: Built-in Quality - How agile coaches can contribute
Aginext 2021: Built-in Quality - How agile coaches can contribute
 

Mehr von Vincenzo Ferme

Continuous Performance Testing for Microservices
Continuous Performance Testing for MicroservicesContinuous Performance Testing for Microservices
Continuous Performance Testing for MicroservicesVincenzo Ferme
 
BenchFlow: A Platform for End-to-end Automation of Performance Testing and An...
BenchFlow: A Platform for End-to-end Automation of Performance Testing and An...BenchFlow: A Platform for End-to-end Automation of Performance Testing and An...
BenchFlow: A Platform for End-to-end Automation of Performance Testing and An...Vincenzo Ferme
 
Towards Holistic Continuous Software Performance Assessment
Towards Holistic Continuous Software Performance AssessmentTowards Holistic Continuous Software Performance Assessment
Towards Holistic Continuous Software Performance AssessmentVincenzo Ferme
 
Estimating the Cost for Executing Business Processes in the Cloud
Estimating the Cost for Executing Business Processes in the CloudEstimating the Cost for Executing Business Processes in the Cloud
Estimating the Cost for Executing Business Processes in the CloudVincenzo Ferme
 
Workflow Engine Performance Benchmarking with BenchFlow
Workflow Engine Performance Benchmarking with BenchFlowWorkflow Engine Performance Benchmarking with BenchFlow
Workflow Engine Performance Benchmarking with BenchFlowVincenzo Ferme
 
Using Docker Containers to Improve Reproducibility in Software and Web Engine...
Using Docker Containers to Improve Reproducibility in Software and Web Engine...Using Docker Containers to Improve Reproducibility in Software and Web Engine...
Using Docker Containers to Improve Reproducibility in Software and Web Engine...Vincenzo Ferme
 
A Container-Centric Methodology for Benchmarking Workflow Management Systems
A Container-Centric Methodology for Benchmarking Workflow Management SystemsA Container-Centric Methodology for Benchmarking Workflow Management Systems
A Container-Centric Methodology for Benchmarking Workflow Management SystemsVincenzo Ferme
 
Towards a Benchmark for BPMN Engines
Towards a Benchmark for BPMN EnginesTowards a Benchmark for BPMN Engines
Towards a Benchmark for BPMN EnginesVincenzo Ferme
 
BenchFlow, a Framework for Benchmarking BPMN 2.0 Workflow Management Systems
BenchFlow, a Framework for Benchmarking BPMN 2.0 Workflow Management SystemsBenchFlow, a Framework for Benchmarking BPMN 2.0 Workflow Management Systems
BenchFlow, a Framework for Benchmarking BPMN 2.0 Workflow Management SystemsVincenzo Ferme
 
On the Road to Benchmarking BPMN 2.0 Workflow Engines
On the Road to Benchmarking BPMN 2.0 Workflow EnginesOn the Road to Benchmarking BPMN 2.0 Workflow Engines
On the Road to Benchmarking BPMN 2.0 Workflow EnginesVincenzo Ferme
 

Mehr von Vincenzo Ferme (11)

Continuous Performance Testing for Microservices
Continuous Performance Testing for MicroservicesContinuous Performance Testing for Microservices
Continuous Performance Testing for Microservices
 
BenchFlow: A Platform for End-to-end Automation of Performance Testing and An...
BenchFlow: A Platform for End-to-end Automation of Performance Testing and An...BenchFlow: A Platform for End-to-end Automation of Performance Testing and An...
BenchFlow: A Platform for End-to-end Automation of Performance Testing and An...
 
Towards Holistic Continuous Software Performance Assessment
Towards Holistic Continuous Software Performance AssessmentTowards Holistic Continuous Software Performance Assessment
Towards Holistic Continuous Software Performance Assessment
 
Estimating the Cost for Executing Business Processes in the Cloud
Estimating the Cost for Executing Business Processes in the CloudEstimating the Cost for Executing Business Processes in the Cloud
Estimating the Cost for Executing Business Processes in the Cloud
 
Workflow Engine Performance Benchmarking with BenchFlow
Workflow Engine Performance Benchmarking with BenchFlowWorkflow Engine Performance Benchmarking with BenchFlow
Workflow Engine Performance Benchmarking with BenchFlow
 
Using Docker Containers to Improve Reproducibility in Software and Web Engine...
Using Docker Containers to Improve Reproducibility in Software and Web Engine...Using Docker Containers to Improve Reproducibility in Software and Web Engine...
Using Docker Containers to Improve Reproducibility in Software and Web Engine...
 
A Container-Centric Methodology for Benchmarking Workflow Management Systems
A Container-Centric Methodology for Benchmarking Workflow Management SystemsA Container-Centric Methodology for Benchmarking Workflow Management Systems
A Container-Centric Methodology for Benchmarking Workflow Management Systems
 
Towards a Benchmark for BPMN Engines
Towards a Benchmark for BPMN EnginesTowards a Benchmark for BPMN Engines
Towards a Benchmark for BPMN Engines
 
BenchFlow, a Framework for Benchmarking BPMN 2.0 Workflow Management Systems
BenchFlow, a Framework for Benchmarking BPMN 2.0 Workflow Management SystemsBenchFlow, a Framework for Benchmarking BPMN 2.0 Workflow Management Systems
BenchFlow, a Framework for Benchmarking BPMN 2.0 Workflow Management Systems
 
On the Road to Benchmarking BPMN 2.0 Workflow Engines
On the Road to Benchmarking BPMN 2.0 Workflow EnginesOn the Road to Benchmarking BPMN 2.0 Workflow Engines
On the Road to Benchmarking BPMN 2.0 Workflow Engines
 
Open Data
Open DataOpen Data
Open Data
 

Declarative Performance Testing Automation - Automating Performance Testing for the DevOps Era

  • 1. Università della Svizzera italiana, Software Institute. Declarative Performance Testing Automation. Vincenzo Ferme. Committee Members: Internal: Prof. Walter Binder, Prof. Mauro Pezzè; External: Prof. Lionel Briand, Prof. Dr. Dr. h. c. Frank Leymann. Research Advisor: Prof. Cesare Pautasso. Automating Performance Testing for the DevOps Era
  • 3–9. 2 Outline ‣ Context ‣ State of the Art & Declarative Performance Engineering ‣ Problem Statement & Research Goals ‣ Main Contributions ‣ Evaluations & Overview of Case Studies ‣ Open Challenges ‣ Career and Contributions ‣ Concluding Remarks and Highlights
  • 11. 4 Context Continuous Software Development Lifecycles and DevOps ➤ Performance Engineering ➤ Performance Testing and DevOps CI Server Repo Developers, Testers, Architects Production CD Server C.S.D.L.
  • 12–15. 5–6 Context Continuous Software Development Lifecycles and DevOps ➤ Performance Engineering ➤ Performance Testing and DevOps [CNCF Survey 2020 charts: “How often do you check in code?” (the majority of respondents, 53%, check in code multiple times a day), “How often are your release cycles?”, and cumulative growth in commits by quarter (Q1 2015–Q4 2019); slide 15 adds: Containers 92%]
  • 16–19. 7 Context Continuous Software Development Lifecycles and DevOps ➤ Performance Engineering ➤ Performance Testing and DevOps Time to Market ‣ Fast feedback-loop ‣ Scalability and Availability ‣ Fewer Production Errors
  • 20–22. 8 Context Continuous Software Development Lifecycles and DevOps ➤ Performance Engineering ➤ Performance Testing and DevOps Match Performance Requirements ‣ Scalability and Availability ‣ 3rd Party Performance
  • 23. 9 Context Continuous Software Development Lifecycles and DevOps ➤ Performance Engineering ➤ Performance Testing and DevOps CI Server Repo Developers, Testers, Architects Continuous Changes Continuous Test Execution “Only conducting performance testing at the conclusion of system or functional testing is like conducting a diagnostic blood test on a patient who is already dead.” Scott Barber
  • 24. 10 State of the Art Perf. Testing and DevOps ➤ Declarative Perf. Engineering Performance Testing is Rarely Applied in DevOps Processes [Bezemer et al., ICPE 2019] Bezemer, C.-P., Eismann, S., Ferme, V., Grohmann, J., Heinrich, R., Jamshidi, P., Shang, W., van Hoorn, A., Villavicencio, M., Walter, J., and Willnecker, F. (2019). How is Performance Addressed in DevOps? In Proceedings of the 10th ACM/SPEC International Conference on Performance Engineering (ICPE), pages 45–50.
  • 25–27. 11 State of the Art Perf. Testing and DevOps ➤ Declarative Perf. Engineering ‣ Complexity of Definition and Execution [Streitz et al., 2018] [Leitner and Bezemer, 2017] ‣ Slowness of Execution [Brunnert et al., 2015] ‣ Lack of Native Support for CI/CD Tools [Leitner and Bezemer, 2017]
  • 28–29. 12 Declarative Performance Engineering “Enabling the performance analyst to declaratively specify what performance-relevant questions need to be answered without being concerned about how they should be answered.” [Walter et al., 2016] Developers, Testers, Architects, Performance Analyst … [Ferme and Pautasso, ICPE 2018] Ferme, V. and Pautasso, C. (2018). A Declarative Approach for Performance Tests Execution in Continuous Software Development Environments. In Proceedings of the 9th ACM/SPEC International Conference on Performance Engineering (ICPE), pages 261–272. [Walter et al., 2016] Jürgen Walter, André van Hoorn, Heiko Koziolek, Dusan Okanovic, and Samuel Kounev. Asking “What”?, Automating the “How”? - The Vision of Declarative Performance Engineering. In Proc. of ICPE 2016, pages 91–94.
  • 30. 13 State of the Art [Walter, 2018] DECLARE proposes languages and tools for specifying performance concerns and for declaratively querying performance knowledge collected and modelled by different tools, with the objective of providing automated answers to the specified performance concerns. Perf. Testing and DevOps ➤ Declarative Perf. Engineering
  • 31–32. 14 State of the Art [Schulz et al., 2020] ContinuITy focuses on dealing with the challenges of continuously updating performance tests by leveraging performance knowledge of software systems collected and modelled from the software operating in production environments. [Avritzer et al., 2020] [Okanovic et al., 2020] [Schulz et al., 2019] Perf. Testing and DevOps ➤ Declarative Perf. Engineering
  • 34. 16 Problem Statement To design new methods and techniques for the declarative specification of performance tests and their automation processes, and to provide models and frameworks enabling the continuous and automated execution of performance tests, with particular reference to the target systems, target users, and context of our work.
  • 35–38. 17 Research Goals R.G. 1 (Which Tests?) R.G. 2 (How to Specify?) R.G. 3 (How to Automate?) R.G. 4 (How to CSDL?)
  • 40. 19 Main Contributions Overall Contribution ➤ Main Contributions Overview A Declarative Approach for Performance Tests Execution Automation, enabling the continuous and automated execution of performance tests alongside the Continuous Software Development Lifecycle, and embracing DevOps goals by enabling the end-to-end execution of service-level performance tests, including S.U.T. lifecycle management.
  • 41–44. 20 Main Contributions Overall Contribution ➤ Main Contributions Overview ‣ Automation-oriented Performance Tests Catalog ‣ BenchFlow Declarative DSL [model: a BenchFlowTest has a version, name, optional description and labels, one or more Workloads (each with an optional popularity), a Sut (name, type, services_configuration), a Goal (type, stored_knowledge), and a configuration with a LoadFunction (users, ramp_up, steady_state, ramp_down), optional DataCollection, TerminationCriteria, and QualityGates] ‣ BenchFlow Model-driven Framework [pipeline: Test Bundle (Test Suite / Test YAML + SUT Deployment Descriptor YAML + Files) → Goal Exploration → Experiment Generation → Experiment Bundle (Experiment YAML + SUT Deployment Descriptor YAML + Files) → Experiment Execution → Analysis → Metrics, Failures, Result Analysis; outcome: Success or Execution Errors]
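To make the DSL concrete, here is a minimal sketch of what a test definition could look like, assuming a YAML concrete syntax and assembling the model elements named above (goal, workloads, sut, configuration with load_function); the test, SUT, and workload names and all values are hypothetical, not taken from the deck:

    version: 1
    name: order-service-load-test        # hypothetical test name
    description: Load test of the order service
    sut:
      name: order-service                # hypothetical SUT
    goal:
      type: LOAD
    workloads:
      main:                              # hypothetical workload name
        popularity: 100%
    configuration:
      load_function:
        users: 50
        ramp_up: 1m
        steady_state: 10m
        ramp_down: 1m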
  • 45. 21 Automation-oriented Performance Tests Catalog Why a Catalog? ➤ Catalog Template ➤ Included Performance Tests ➤ Examples Fill a gap identified in the performance testing literature by contributing an automation-oriented performance test catalog that provides a comprehensive reference for properly identifying the different kinds of performance tests and their automation requirements.
  • 46–52. 22 Automation-oriented Performance Tests Catalog Why a Catalog? ➤ Catalog Template ➤ Included Performance Tests ➤ Examples - Assumptions on the S.U.T. maturity - Expectations on the execution environment conditions - Workload input parameters - Required execution process - Checks to be performed on the S.U.T. - Measurements to be collected and metrics to be calculated - Preliminary performance tests to be already executed (see the example entry sketched below)
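As an illustration, a catalog entry instantiated for a Load Test might look as follows; the field names mirror the template items above, while the concrete values are assumptions for illustration, not quoted from the catalog:

    test_type: Load Test
    sut_maturity: feature-complete, deployable service            # assumption
    environment_conditions: dedicated, production-like testbed    # assumption
    workload_parameters: [users, ramp_up, steady_state, ramp_down, operation mix]
    execution_process: deploy SUT, apply load function, collect data, analyse
    sut_checks: [SUT healthy before load, error rate within bounds during load]
    measurements_and_metrics: [AVG_RESPONSE_TIME, THROUGHPUT, AVG_CPU, AVG_RAM]
    preliminary_tests: [Smoke Test]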
  • 53–58. 23 Automation-oriented Performance Tests Catalog Why a Catalog? ➤ Catalog Template ➤ Included Performance Tests ➤ Examples [catalog template pages shown on slides]
  • 59–62. 24 Automation-oriented Performance Tests Catalog Why a Catalog? ➤ Catalog Template ➤ Included Performance Tests ➤ Examples 1. Baseline Performance Test 2. Unit Performance Test 3. Smoke Test 4. Performance Regression Test 5. Sanity Test 6. Load Test 7. Scalability Test 8. Elasticity Test 9. Stress Test 10. Peak Load Test 11. Spike Test 12. Throttle Test 13. Soak or Stability Test 14. Exploratory Test 15. Configuration Test 16. Benchmark Performance Test 17. Acceptance Test 18. Capacity or Endurance Test 19. Chaos Test 20. Live-traffic or Canary 21. Breakpoints Perf. Test 22. Failover or Recovery Test 23. Resiliency or Reliability 24. Snapshot-load Test 25. Volume or Flood Test
  • 63–65. 25 Automation-oriented Performance Tests Catalog Why a Catalog? ➤ Catalog Template ➤ Included Performance Tests ➤ Examples Exploratory Test
  • 66. 26 Declarative Approach for Performance Tests Execution Automation Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test [BenchFlow Model-driven Framework pipeline and BenchFlow Declarative DSL model, as above] [Ferme and Pautasso, ICPE 2018] [Ferme et al., BPM 2015] [Ferme and Pautasso, ICPE 2016]
  • 67–73. 27 Declarative Approach for Performance Tests Execution Automation Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test BenchFlow Declarative DSL: Load Functions, Workloads, Simulated Users, Test Data, TestBed Management, Performance Data Analysis, Definition of Configuration Tests (a sketch of how these appear in a test definition follows below)
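A minimal sketch of how a workload with simulated users and test data might be declared, again assuming a YAML concrete syntax; the operations, think_time, and data keys are not shown in the deck's model excerpts and are assumptions, as are all names and values:

    workloads:
      browse_and_order:                            # hypothetical workload name
        popularity: 60%
        operations: [GET /products, POST /orders]  # hypothetical operations
        think_time: 2s                             # assumed simulated-user pause
      checkout_only:
        popularity: 40%
    data:
      users: data/test-users.csv                   # hypothetical test-data file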
  • 74–76. 28 Declarative Approach for Performance Tests Execution Automation Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test BenchFlow Declarative DSL: Integration in CSDL, SUT-awareness (know your SUT), Goal-Driven Performance Testing
  • 77–82. 29 Declarative Approach for Performance Tests Execution Automation Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test BenchFlow Model-driven Framework: Test Scheduling, Manage S.U.T., Deployment Infra., Issue Workload, Collect Data, Analyse Data (a sketch of a CI integration follows below)
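To show where the framework could sit in a CSDL pipeline, here is a sketch of a CI job that schedules a BenchFlow test after each build; the GitLab-CI-style YAML, the benchflow CLI name, its flags, and the paths are assumptions for illustration, not a documented interface:

    performance-test:
      stage: test
      script:
        - benchflow run --test-bundle perf/load-test/ --wait   # hypothetical CLI
      artifacts:
        paths:
          - benchflow-results/        # metrics, failures, result analysis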
  • 83–85. 30 Declarative Approach for Performance Tests Execution Automation Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test [framework pipeline: Test Bundle (Test Suite / Test YAML + SUT Deployment Descriptor YAML + Files) → Goal Exploration → Experiment Generation → Experiment Bundle (Experiment YAML + SUT Deployment Descriptor YAML + Files) → Experiment Execution → Analysis → Metrics, Failures, Result Analysis; outcome: Success or Execution Errors]
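Based on the pipeline inputs named above (Test YAML + SUT Deployment Descriptor YAML + Files), a test bundle might be laid out as follows; the file names are assumptions, and the Docker Compose descriptor is assumed in line with the container-centric approach:

    load-test/                    # hypothetical bundle layout
      benchflow-test.yml          # declarative test definition (Test YAML)
      docker-compose.yml          # SUT deployment descriptor (assumed Docker Compose)
      files/
        test-users.csv            # test data shipped with the bundle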
  • 86–90. 31 Declarative Approach for Performance Tests Execution Automation Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test [BenchFlowTest model walked through element by element: version, name, description, labels; workloads (1..N, each with an optional popularity); sut (name, type, services_configuration); goal (type, stored_knowledge); configuration with load_function (users, ramp_up, steady_state, ramp_down), optional data_collection, termination_criteria (test and experiment criteria), and quality_gates]
  • 91–95. 32 Declarative Approach for Performance Tests Execution Automation Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test [Goal model: type: GoalType (LOAD, SMOKE, SANITY, CONFIGURATION, SCALABILITY, SPIKE, EXHAUSTIVE_EXPLORATION, STABILITY_BOUNDARY, CAPACITY_CONSTRAINTS, REGRESSION_COMPLETE, REGRESSION_INTERSECTION, ACCEPTANCE); stored_knowledge; an optional Exploration; and an Observe element listing ServiceMetrics (AVG_RAM, AVG_CPU, RESOURCE_COST, …) per service and WorkloadMetrics (AVG_RESPONSE_TIME, THROUGHPUT, AVG_LATENCY, …) per workload or operation]
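A sketch of how a goal with its observed metrics might be declared, assuming YAML syntax; the goal type and metric names come from the enumerations above, while the service and workload names are hypothetical:

    goal:
      type: SCALABILITY
      stored_knowledge: true             # reuse previously stored test knowledge
      observe:
        services:
          order-service: [AVG_CPU, AVG_RAM]        # hypothetical service
        workloads:
          main: [THROUGHPUT, AVG_RESPONSE_TIME]    # hypothetical workload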
  • 96–100. 33 Declarative Approach for Performance Tests Execution Automation Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test [Exploration model: an ExplorationSpace over the load_function (users as a list, a range, or a StepFunction) and per-service ServiceExplorationSpaces (Memory and Cpu resources as values, ranges, or StepFunctions with PLUS, MINUS, MULTIPLY, DIVIDE, POWER operators, plus configuration value lists); an ExplorationStrategy (selection, optional validation and regression strategies); and optional StabilityCriteria per service (avg_cpu, avg_memory conditions) and per workload (max_mix_deviation)]
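A sketch of how an exploration space with stability criteria might be declared, assuming YAML syntax; the field and operator names follow the model above, while the service name, the selection strategy name, and all values are hypothetical:

    exploration:
      exploration_space:
        load_function:
          users_range: [10, 200]
          users_step: { operator: PLUS, value: 10 }
        services:
          order-service:                 # hypothetical service
            resources:
              memory:
                range: [512MB, 4GB]
                step: { operator: MULTIPLY, value: 2 }
      exploration_strategy:
        selection: EXHAUSTIVE            # hypothetical strategy name
      stability_criteria:
        services:
          order-service:
            avg_cpu: { operator: LESS_THAN, value: 80% }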
  • 101–102. 34 Declarative Approach for Performance Tests Execution Automation Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test [BenchFlowTest model revisited, as above]
  • 103. 35 Declarative Approach for Performance Tests Execution Automation Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test «enumeration» GateCondition GREATHER_THAN LESS_THAN GREATHER_OR_EQUAL_THEN LESS_OR_EQUAL_THEN EQUAL PERCENT_MORE PERCENT_LESS ServiceQualityGate gate_metric: ServiceMetric condition: GateCondition gate_threshold_target: String OR ServiceMetric gate_threshold_minimum: Option<String OR ServiceMetric> service_name 0..* WorkloadQualityGate max_mix_deviation: Option<Percent> max_think_time_deviation: Option<Percent> gate_metric: Option<WorkloadMetric> condition: Option<GateCondition> gate_threshold_target: Option<String OR WorkloadMetric> gate_threshold_minimum: Option<String OR WorkloadMetric> workload_name 0..* QualityGate services: Option<Map<String ServiceQualityGate>> workloads: Option<Map<String WorkloadQualityGate>> mean_absolute_error: Option<Percent> RegressionQualityGate service: Option<String> workload: Option<String> gate_metric: ServiceMetric OR WorkloadMetric regression_delta_absolute: Option<Time> regression_delta_percent: Option<Percent> regression 0..1
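Read together, these classes suggest how a quality_gates section could look in a concrete test. The following is a minimal sketch assembled from the attributes above; the service and workload names and all threshold values are hypothetical, and avg_cpu is borrowed from the confidence-interval example shown later in the deck:

  quality_gates:
    services:
      service_a:
        gate_metric: avg_cpu           # hypothetical ServiceMetric choice
        condition: LESS_THAN
        gate_threshold_target: "70%"   # fail if avg_cpu is not below 70%
    workloads:
      workload_a:
        max_mix_deviation: 5%          # executed mix must stay close to the declared one
        max_think_time_deviation: 10%
    regression:
      service: "service_a"
      gate_metric: avg_cpu
      regression_delta_percent: 10%    # fail on >10% degradation vs. a stored baseline

The service gate bounds a server-side metric, the workload gate checks that the executed mix and think times stayed close to what was declared, and the regression gate compares the current run against previously stored results.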
  • 107. 36 Declarative Approach for Performance Tests Execution Automation Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test BenchFlowTest version: TestVersion name: String description: Option<String> workloads: Map<String, Workload> labels: Option<String> «abstract» Workload popularity: Option<Percent> Sut name: String type: Option<SutType> services_configuration: Option<Map<String, ServiceConfigurations>> Goal type: GoalType stored_knowledge: Option<Boolean> «enumeration» TestVersion 1 1.1 2 3 BenchFlowTestConfiguration configuration 1 sut 1 workload_name 1..N DataCollection only_declared: Boolean services: Option<Map<String, ServerSideConfiguration>> workloads: Option<Map<String, ClientSideConfiguration>> data_collection 0..1 goal 1 LoadFunction users: Option<Int> ramp_up: Time steady_state: Time ramp_down: Time load_function 1 TerminationCriteria +test: TestTerminationCriterion +experiment: ExperimentTerminationCriterion termination_criteria 0..1 QualityGates quality_gates 0..1
  • 109. 37 Declarative Approach for Performance Tests Execution Automation Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test TerminationCriteria TestTerminationCriterion max_time: Time max_number_of_experiments: Option<Int> max_failed_experiments: Option<Percent> TerminationCriterion max_number_of_trials: Int max_failed_trials: Option<Percent> services: Option<Map<String, ServiceTerminationCriterion>> workloads: Option<Map<String, WorkloadTerminationCriterion>> experiment 0..1 test 0..1 WorkloadTerminationCriterion confidence_interval_metric: WorkloadMetric confidence_interval_value: Float confidence_interval_precision: Percent ServiceTerminationCriterion confidence_interval_metric: ServiceMetric confidence_interval_value: Float confidence_interval_precision: Percent service_name 0..* workload_name 0..*
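A corresponding termination_criteria section could combine a test-level budget with experiment-level stopping rules. A minimal sketch with hypothetical values; the service-level confidence-interval criterion mirrors the avg_cpu example that appears later in the deck:

  termination_criteria:
    test:
      max_time: 2h                     # wall-clock budget for the whole test
      max_number_of_experiments: 20
      max_failed_experiments: 20%
    experiment:
      max_number_of_trials: 5          # repeat each experiment up to 5 trials
      max_failed_trials: 20%
      services:
        service_a:
          confidence_interval_metric: avg_cpu
          confidence_interval_value: 60%
          confidence_interval_precision: 95%

Trials can stop early once the declared metric's confidence interval reaches the requested precision, which keeps statistically stable experiments cheap while still bounding unstable ones.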
  • 110. 38 Declarative Approach for Performance Tests Execution Automation Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test Experiment Execution Exploration Execution Analysis Test Bundle:
Test Suite / Test YAML + SUT Deployment Descriptor YAML + Files Metrics Failures Result Analysis Goal Exploration Experiment Generation Experiment Bundle: Experiment YAML + SUT Deployment Descriptor YAML + Files Success Execution Errors
  • 111. 39 Declarative Approach for Performance Tests Execution Automation Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test «abstract» Workload popularity: Option<Percent> Sut name: String type: Option<SutType> services_configuration: Option<Map<String, ServiceConfigurations>> DataCollection only_declared: Boolean services: Option<Map<String, ServerSideConfiguration>> workloads: Option<Map<String, ClientSideConfiguration>> ExperimentTerminationCriteria max_time: Time BenchFlowExperiment version: ExperimentVersion name: String description: Option<String> workloads: Map<String, Workload> labels: Option<String> BenchFlowExperimentConfiguration configuration 1 sut 1 workload 1 data_collection 1 load_function 1 TerminationCriterion max_number_of_trials: Int max_failed_trials: Option<Percent> services: Option<Map<String, ServiceTerminationCriterion>> workloads: Option<Map<String, WorkloadTerminationCriterion>> termination_criteria 1 experiment 0..1 «enumeration» ExperimentVersion 1 1.1 1.2 1.3 1.4 2 2.1 2.2 3 LoadFunction users: Int ramp_up: Time steady_state: Time ramp_down: Time SutVersion version 1
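The experiment model is the fully resolved counterpart of the test model: fields that are optional at the test level (such as users in LoadFunction) become mandatory, and the SUT is pinned to a single version, so each experiment is directly executable. A minimal sketch, with names and values reused from the load-test example and otherwise hypothetical:

  version: "3"
  name: "Load Test - experiment 1"
  configuration:
    sut:
      name: "my_app"
      version: "v1.5"                  # one concrete SUT version per experiment
    load_function:
      users: 1000                      # mandatory here, optional at the test level
      ramp_up: 5m
      steady_state: 20m
      ramp_down: 5m
    termination_criteria:
      experiment:
        max_number_of_trials: 5
        max_failed_trials: 20%
  workloads:
    workload_a:
      popularity: 100%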
  • 112. 40 Declarative Approach for Performance Tests Execution Automation Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test Checkout Build Unit Tests Integration Tests E2e Tests Smoke Tests Load Tests Acceptance Tests Regression Tests Deploy in Production FUNCTIONAL TESTS FUNCTIONAL TESTS PERFORMANCE TESTS
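To place these stages in context, a performance-aware pipeline interleaves the performance tests with the functional ones rather than deferring them to a pre-release phase. A generic, hypothetical CI configuration sketch of the stage order on the slide (illustrative only, not tied to any specific CI server):

  stages:
    - checkout
    - build
    - unit_tests           # functional
    - integration_tests    # functional
    - e2e_tests            # functional
    - smoke_tests          # performance: quick sanity check under light load
    - load_tests           # performance: e.g., a declarative load test as defined below
    - acceptance_tests
    - regression_tests     # performance: compare metrics against a stored baseline
    - deploy_in_production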
  • 113. 41 Declarative Approach for Performance Tests Execution Automation Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test Suite Environment name: String skip_deploy: Option<Boolean> environments 0..N TestSuite version: TestSuiteVersion name: String description: Option<String> Test include_labels: Option<List<Regex>> paths: Option<List<String>> tests 1 Push branches: Option<List<Regex>> Trigger scheduled: Option<Boolean> triggers 0..1 PullRequest contexts: Option<ContextType> source_branches: Option<List<Regex>> target_branches: Option<List<Regex>> QualityGate criterion: CriterionType exclude: Option<List<String>> quality_gates 1 Event on 0..N Release types: List<String> Deployment names: List<String> suite 1 «enumeration» CriterionType ALL_SUCCESS AT_LEAST_ONE_SUCCESS «enumeration» TestSuiteVersion 1 1.1 «enumeration» ContextType HEAD MERGE ALL
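As a concrete illustration of this model, a suite definition could select tests by label, bind them to an environment, and declare when they run. A minimal sketch; the branch patterns, paths, and environment name are hypothetical, while the load_test and configuration labels match the test examples that follow:

  version: "1.1"
  name: "Performance Test Suite"
  environments:
    - name: "staging"
      skip_deploy: false
  tests:
    include_labels: ["load_test", "configuration"]
    paths: ["performance-tests/"]
  triggers:
    scheduled: false
    on:
      push:
        branches: ["master", "release/.*"]
      pull_request:
        contexts: MERGE
        target_branches: ["master"]
  quality_gates:
    criterion: ALL_SUCCESS             # the suite passes only if every selected test passes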
  • 117. 42 Declarative Approach for Performance Tests Execution Automation Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test Test
  • 118. 43 Declarative Approach for Performance Tests Execution Automation Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test 1 version: "3" 2 name: "Load Test" 3 description: "Example of Load Test" 4 labels: "load_test" 5 configuration: 6 goal: 7 type: "load_test" 8 # stored_knowledge: "false" 9 observe: 10 ... 11 load_function: 12 users: 1000 13 ramp_up: 5m 14 steady_state: 20m 15 ramp_down: 5m 16 termination_criteria: 17 ... 18 quality_gates: 19 ... 20 sut: 21 ... 22 workloads: 23 ... 24 data_collection: 25 # AUTOMATICALLY attached based on the observe section IF NOT specified 26 services: 27 ... Load Test
  • 119. 43 Declarative Approach for Performance Tests Execution Automation Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test Load Test [Figure: workflow engine SUT — application server hosting the workflow engine (job executor, core engine with transaction manager, persistence manager, process navigator, task dispatcher, service invoker), instance database on a DBMS, users, and external web services] [Skouradaki et al., ICPE 2015] [Ferme et al., BPM 2015] [Ferme et al., CLOSER 2016] [Skouradaki et al., BPM 2016] [Ferme et al., BPM 2016] [Ivanchikj et al., BPM 2017] [Rosinosky et al., OTM 2018]
  • 120. 44 Declarative Approach for Performance Tests Execution Automation Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test 20 confidence_interval_precision: 95% 21 services: 22 service_a: 23 confidence_interval_metric: avg_cpu 24 confidence_interval_value: 60% 25 confidence_interval_precision: 95% 26 sut: 27 name: "my_app" 28 version: "v1.5" 29 type: "http" 30 sut_configuration: 31 default_target_service: 32 name: "service_a" 33 endpoint: "/" 34 sut_ready_log_check: "/(.*)System started(.*)/g" 35 deployment: 36 service_a: "my_server" 37 services_configuration: 38 service_a: 39 resources: 40 cpu: 100m 41 memory: 256Mi 42 configuration: 43 NUM_SERVICE_THREAD: 12 44 service_b: 45 resources: 46 cpu: 200m 47 memory: 256Mi 48 configuration: 49 THREADPOOL_SIZE: 64 50 dbms_a: 51 resources: 52 cpu: 100m 53 memory: 256Mi 54 configuration: 55 QUERY_CACHE_SIZE: 48Mi 56 workloads: 57 workload_a: 58 popularity: 70% 59 item_a: 60 driver_type: "http" 61 inter_operation_timings: "negative_exponential" SUT Conf. 1 version: "3" 2 name: "Load Test" 3 description: "Example of Load Test" 4 labels: "load_test" 5 configuration: 6 goal: 7 type: "load_test" 8 # stored_knowledge: "false" 9 observe: 10 ... 11 load_function: 12 users: 1000 13 ramp_up: 5m 14 steady_state: 20m 15 ramp_down: 5m 16 termination_criteria: 17 ... 18 quality_gates: 19 ... 20 sut: 21 ... 22 workloads: 23 ... 24 data_collection: 25 # AUTOMATICALLY attached based on the observe section IF NOT specified 26 services: 27 ... Load Test
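The test bundle pairs the test YAML above with a separate SUT deployment descriptor plus supporting files; the deployment: service_a: "my_server" entry then maps a declared service onto a target server. A minimal sketch of such a descriptor, assuming a Docker Compose-style format (image names and ports are hypothetical):

  services:
    service_a:
      image: myorg/service_a:v1.5      # hypothetical image of the service under test
      ports:
        - "8080:8080"
      depends_on:
        - dbms_a
    service_b:
      image: myorg/service_b:v1.5
      depends_on:
        - dbms_a
    dbms_a:
      image: mysql:5.7                 # hypothetical DBMS backing the SUT

Keeping the descriptor separate from the test definition lets the same workload and goal be re-run against different deployments without touching the test itself.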
  • 121. 45 Declarative Approach for Performance Tests Execution Automation Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test 1 version: "3" 2 name: "Configuration Test" 3 description: "Example of Configuration Test" 4 labels: "configuration" 5 configuration: 6 goal: 7 type: "configuration" 8 stored_knowledge: "true" 9 observe: 10 ... 11 exploration: 12 exploration_space: 13 services: 14 service_a: 15 resources: 16 cpu: 17 range: [100m, 1000m] 18 step: "*4" 19 memory: 20 range: [256Mi, 1024Mi] 21 step: "+768Mi" 22 configuration: 23 NUM_SERVICE_THREAD: [12, 24] 24 dbms_a: 25 resources: 26 cpu: 27 range: [100m, 1000m] 28 step: "*10" 29 memory: 30 range: [256Mi, 1024Mi] 31 step: "+768Mi" 32 configuration: 33 QUERY_CACHE_SIZE: 48Mi 34 exploration_strategy: 35 selection: "one_at_a_time" 36 load_function: 37 ... 38 termination_criteria: 39 ... 40 quality_gates: 41 ... Configuration Test
  • 122. 45 Declarative Approach for Performance Tests Execution Automation Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test Configuration Test [Figure: domain-metric evaluation pipeline — (1) collection of operational data, (2) analysis of operational data with ContinuITy, yielding an operational profile and an empirical distribution of workload situations, (3) experiment generation from a load test template, (4) experiment execution with BenchFlow and Faban, producing baseline & test results per architectural configuration, (5) domain metric calculation feeding a domain metric dashboard; legend: step, (intermediate) artifact, tool] [Avritzer et al., JSS 2020] [Avritzer et al., ICPE 2019] [Avritzer et al., ECSA 2018]
  • 123. 45 Declarative Approach for Performance Tests Execution Automation Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test Configuration Test [Figure: the same domain-metric pipeline applied to the workflow engine SUT (application server, workflow engine, instance database on a DBMS, users, external web services)] [Avritzer et al., JSS 2020] [Avritzer et al., ICPE 2019] [Avritzer et al., ECSA 2018]
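Under a plausible reading of the step notation (apply the operator repeatedly, starting from the lower bound, while staying inside the range), the exploration space above enumerates a small set of points per dimension rather than a continuum:

  # service_a
  cpu:    100m, 400m                   # *4; the next value (1600m) exceeds 1000m
  memory: 256Mi, 1024Mi                # +768Mi
  NUM_SERVICE_THREAD: 12, 24           # explicit list
  # dbms_a
  cpu:    100m, 1000m                  # *10
  memory: 256Mi, 1024Mi                # +768Mi
  QUERY_CACHE_SIZE: 48Mi               # fixed value, not explored

With the one_at_a_time strategy, the dimensions would then plausibly be varied individually from a baseline configuration instead of exhaustively exploring the Cartesian product, keeping the number of experiments roughly linear in the number of dimension values.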
  • 124. 46 Declarative Approach for Performance Tests Execution Automation Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test Experiment Execution Exploration Execution Analysis Test Bundle:
 Test Suite / Test YAML + SUT Deployment Descriptor YAML + Files Metrics Failures Result Analysis Goal Exploration Experiment Generation Experiment Bundle: Experiment YAML + SUT Deployment Descriptor YAML + Files Success Execution Errors
  • 125. 47 Declarative Approach for Performance Tests Execution Automation Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test Terminated Ready Running user paused Handle Experiment Result Validate Prediction Function Derive Prediction Function Remove Non Reachable Experiments Determine Exploration Strategy Add Stored Knowledge Determine and Execute Experiments: Experiment Life Cycle [can reach goal] [cannot reach goal] [Acceptable Prediction Error] [Not Acceptable Prediction Error] user terminated OR [execution time > max_time] Goal Reached Completed with Failure experiment results available Waiting user input needed user input received Start [no regression model] Validate Termination Criteria [experiments remaining] [all experiments executed] [regression model] [no regression model] [regression model] Determine and Execute Initial Validation Set: Experiment Life Cycle [regression model] [no regression model] experiment results available [validation set complete] Partially Complete Terminating Check Quality Gates [quality gates pass] [failed quality gates] Handle Experiment Result [validation set NOT complete] Remove Non Reachable Experiments for Validation Set [# executed experiments >= max_number_of_experiments]
  • 126. 48 Declarative Approach for Performance Tests Execution Automation Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test Ready Running user paused Handle Experiment Result Remove Non Reachable Experiments Determine Exploration Strategy Add Stored Knowledge Determine and Execute Experiments: Experiment Life Cycle user terminated OR [execution time > max_time] experiment results available Waiting user input needed user input received Start [no regression model] Validate Termination Criteria [experiments remaining] [all experiments executed] [regression model] [no regression model] Determine and Execute Initial Validation Set: Experiment Life Cycle [regression model] [no regression model] experiment results available [validation set complete] Terminating Check Quality Gates [quality gates pass] [failed quality gates] Handle Experiment Result [validation set NOT complete] Remove Non Reachable Experiments for Validation Set [# executed experiments >= max_number_of_experiments]
  • 127. 49 Declarative Approach for Performance Tests Execution Automation Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test Ready Running user paused Handle Experiment Result Remove Non Reachable Experiments Determine Exploration Strategy Add Stored Knowledge Determine and Execute Experiments: Experiment Life Cycle user terminated OR [execution time > max_time] experiment results available Waiting user input needed user input received Start [no regression model] Validate Termination Criteria [experiments remaining] [all experiments executed] [regression model] [no regression model] Determine and Execute Initial Validation Set: Experiment Life Cycle [regression model] [no regression model] experiment results available [validation set complete] Terminating Check Quality Gates [quality gates pass] [failed quality gates] Handle Experiment Result [validation set NOT complete] Remove Non Reachable Experiments for Validation Set [# executed experiments >= max_number_of_experiments] Entity YAML Specification Parse to YAML Object Parse + Syntactic Validation Semantic Validation Entity Representation Exception
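The pipeline separates syntactic from semantic validation: a specification can be well-formed YAML that parses into the model and still be rejected with an Exception before an Entity Representation is produced. A hypothetical fragment that would pass parsing and syntactic validation but could plausibly fail semantic validation, assuming workload popularities must not sum to more than 100%:

  workloads:
    workload_a:
      popularity: 70%
    workload_b:
      popularity: 50%                  # 70% + 50% > 100%: structurally valid, semantically inconsistent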
  • 132. 51 Declarative Approach for Performance Tests Execution Automation Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test Terminated paused Handle Experiment Result Validate Prediction Function Derive Prediction Function Remove Non Reachable Experiments Add Stored Knowledge Determine and Execute Experiments: Experiment Life Cycle [can reach goal] [cannot reach goal] [Acceptable Prediction Error] [Not Acceptable Prediction Error] user terminated OR [execution time > max_time] Goal Reached Completed with Failure experiment results available needed [no regression model] Validate Termination Criteria [experiments remaining] [all experiments executed] [regression model] [no regression model] [regression model] Determine and Execute Initial Validation Set: Experiment Life Cycle [regression model] [no regression model] experiment results available [validation set complete] Partially Complete Terminating Check Quality Gates [quality gates pass] [failed quality gates] Handle Experiment Result [validation set NOT complete] Remove Non Reachable Experiments for Validation Set [# executed experiments >= max_number_of_experiments]
  • 139. 53 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Expert Review
  • 140. 53 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Expert Review Summative Evaluation
  • 141. 53 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Expert Review Summative Evaluation Iterative Review, Case Studies
  • 142. 53 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Expert Review Summative Evaluation Iterative Review, Case Studies Comparative Evaluation
  • 143. 54 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Objective - Expressiveness for performance testing automation
  • 144. 54 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Objective - Expressiveness for performance testing automation - Usability for target users
  • 145. 54 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Objective - Expressiveness for performance testing automation - Usability for target users - Effort for target users
  • 146. 54 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Objective - Expressiveness for performance testing automation - Usability for target users - Effort for target users - Reusability for target users
  • 147. 54 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Objective - Expressiveness for performance testing automation - Usability for target users - Effort for target users - Reusability for target users - Well Suited for target users
  • 148. 54 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Objective - Expressiveness for performance testing automation - Usability for target users - Effort for target users - Reusability for target users - Well Suited for target users - Suitability for target users vs. Imperative
  • 149. 55 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Structure - Introduction
  • 150. 55 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Structure - Introduction - Background check
  • 151. 55 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Structure - Introduction - Background check - Overview of the approach
  • 152. 55 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Structure - Introduction - Background check - Overview of the approach - Questions related to the Research Questions
  • 153. 55 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Structure - Introduction - Background check - Overview of the approach - Questions related to the Research Questions - Questions for additional feedback
  • 154. 55 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Structure - Introduction - Background check - Overview of the approach - Questions related to the Research Questions - Questions for additional feedback - Conclusion
  • 155. 56 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Participants (18) [bar chart of participant characteristics, counts on a 0–18 scale]
  • 156. 57 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Results Highlights [bar charts of expert responses, counts on a 0–18 scale]
  • 157. 58 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Results Highlights [bar charts of expert responses, counts on a 0–18 scale]
  • 158. 59 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Results Highlights [bar charts of expert responses, counts on a 0–18 scale]
  • 159. 60 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Objective Learnability
  • 160. 60 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Objective Learnability Reusability
  • 161. 61 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Structure - Introduction
  • 162. 61 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Structure - Introduction - Background check
  • 163. 61 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Structure - Introduction - Background check - Overview of the approach
  • 164. 61 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Structure - Introduction - Background check - Overview of the approach - Multiple-choice tasks related to the Research Questions
  • 165. 61 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Structure - Introduction - Background check - Overview of the approach - Multiple-choice tasks related to the Research Questions - Questions on the overall approach
  • 166. 61 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Structure - Introduction - Background check - Overview of the approach - Multiple-choice tasks related to the Research Questions - Questions on the overall approach - Questions for additional feedback
  • 167. 61 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Structure - Introduction - Background check - Overview of the approach - Multiple-choice tasks related to the Research Questions - Questions on the overall approach - Questions for additional feedback - Conclusion
  • 168. 62 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation https://www.getfeedback.com/resources/online-surveys/better-online-survey-response-rates/ Participants (63) [bar chart of participant characteristics, counts on a 0–60 scale]
  • 169. 63 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Results Highlights
  • 170. 64 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Results Highlights
  • 171. 65 Evaluations Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation Results Highlights [bar charts of survey responses, counts on a 0–60 scale]