This document presents Vincenzo Ferme's research on automating performance testing for continuous software development environments. It introduces the context of continuous development lifecycles and DevOps practices, noting that performance testing is rarely applied in these processes, and reviews the state of the art in declarative performance engineering together with the challenges of defining and executing performance tests. It then states the problem and research goals: how to specify performance tests and automate their execution within continuous software development lifecycles. The main contributions are an automation-oriented performance tests catalog, the BenchFlow declarative domain-specific language for specifying tests, and the BenchFlow model-driven framework for executing experiments.
Declarative Performance Testing Automation - Automating Performance Testing for the DevOps Era
Università della Svizzera italiana, Software Institute
Declarative Performance
Testing Automation
Vincenzo Ferme
Committee Members:
Internal: Prof. Walter Binder, Prof. Mauro Pezzè
External: Prof. Lionel Briand, Prof. Dr. Dr. h. c. Frank Leymann
Research Advisor:
Prof. Cesare Pautasso
Automating Performance Testing
for the DevOps Era
Outline
‣ Context
‣ State of the Art & Declarative Performance Engineering
‣ Problem Statement & Research Goals
‣ Main Contributions
‣ Evaluations & Overview of Case Studies
‣ Open Challenges
‣ Career and Contributions
‣ Concluding Remarks and Highlights
Context
Continuous Software Development Lifecycles and DevOps ➤ Performance Engineering ➤ Performance Testing and DevOps
[Diagram: Developers, Testers, and Architects commit to a Repo; a CI Server builds and tests; a CD Server releases to Production — the Continuous Software Development Lifecycle (C.S.D.L.)]
Context
Continuous Software Development Lifecycles and DevOps ➤ Performance Engineering ➤ Performance Testing and DevOps
[Charts from the CNCF Survey 2020: "How often do you check in code?" — the majority of respondents (53%) check in code multiple times a day; "How often are your release cycles?"; cumulative growth in commits by quarter (Q1 2015-Q4 2019)]
Containers: 92%
Context
Continuous Software Development Lifecycles and DevOps ➤ Performance Engineering ➤ Performance Testing and DevOps
Time to Market
Fast feedback loop
Scalability and Availability
Fewer Production Errors
Context
Continuous Software Development Lifecycles and DevOps ➤ Performance Engineering ➤ Performance Testing and DevOps
Scalability and Availability
3rd Party Performance
Match Performance Requirements
Context
Continuous Software Development Lifecycles and DevOps ➤ Performance Engineering ➤ Performance Testing and DevOps
[Diagram: Developers, Testers, and Architects push Continuous Changes to a Repo; the CI Server drives Continuous Test Execution]
“Only conducting performance testing at the conclusion of system or functional testing is like conducting a diagnostic blood test on a patient who is already dead.” — Scott Barber
State of the Art
Perf. Testing and DevOps ➤ Declarative Perf. Engineering
Performance Testing is Rarely Applied in DevOps Processes [Bezemer et al., ICPE 2019]
Bezemer, C.-P., Eismann, S., Ferme, V., Grohmann, J., Heinrich, R., Jamshidi, P., Shang, W., van Hoorn, A., Villavicencio, M., Walter, J., and Willnecker, F. (2019). How is Performance Addressed in DevOps? In Proceedings of the 10th ACM/SPEC International Conference on Performance Engineering (ICPE), pages 45–50.
State of the Art
Perf. Testing and DevOps ➤ Declarative Perf. Engineering
Complexity of Definition and Execution [Streitz et al., 2018] [Leitner and Bezemer, 2017]
Slowness of Execution [Brunnert et al., 2015]
Lack of Native Support for CI/CD Tools [Leitner and Bezemer, 2017]
Declarative Performance Engineering
“Enabling the performance analyst to declaratively specify what performance-relevant questions need to be answered without being concerned about how they should be answered.” [Walter et al., 2016]
Developers, Testers, Architects, Performance Analysts … [Ferme and Pautasso, ICPE 2018]
[Walter et al., 2016] Jürgen Walter, André van Hoorn, Heiko Koziolek, Dusan Okanovic, and Samuel Kounev. Asking “What”?, Automating the “How”? - The Vision of Declarative Performance Engineering. In Proc. of ICPE 2016, pages 91–94.
[Ferme and Pautasso, ICPE 2018] Ferme, V. and Pautasso, C. (2018). A Declarative Approach for Performance Tests Execution in Continuous Software Development Environments. In Proceedings of the 9th ACM/SPEC International Conference on Performance Engineering (ICPE), pages 261–272.
State of the Art
Perf. Testing and DevOps ➤ Declarative Perf. Engineering
DECLARE [Walter, 2018]
Proposes languages and tools for specifying performance concerns and for declaratively querying performance knowledge collected and modelled by different tools, with the objective of providing automated answers to the specified performance concerns.
State of the Art
Perf. Testing and DevOps ➤ Declarative Perf. Engineering
ContinuITy [Schulz et al., 2020]
Focuses on the challenges of continuously updating performance tests by leveraging performance knowledge of software systems, collected and modelled from the software operating in production environments.
[Avritzer et al., 2020] [Okanovic et al., 2020] [Schulz et al., 2019]
Problem Statement
To design new methods and techniques for the declarative specification of performance tests and of their automation processes, and to provide models and frameworks enabling the continuous and automated execution of performance tests, with particular reference to the target systems, target users, and context of our work.
Main Contributions
Overall Contribution ➤ Main Contributions Overview
A Declarative Approach for Performance Tests Execution Automation, enabling the continuous and automated execution of performance tests alongside the Continuous Software Development Lifecycle and embracing DevOps goals by enabling the end-to-end execution of service-level performance tests, including S.U.T. lifecycle management.
Main Contributions
Overall Contribution ➤ Main Contributions Overview
‣ Automation-oriented Performance Tests Catalog
‣ BenchFlow Declarative DSL [UML model: a BenchFlowTest with its Workload, Sut, Goal, BenchFlowTestConfiguration, DataCollection, LoadFunction, TerminationCriteria, and QualityGates elements]
‣ BenchFlow Model-driven Framework [Pipeline: Test Bundle (Test Suite / Test YAML + SUT Deployment Descriptor YAML + Files) ➤ Goal Exploration ➤ Experiment Generation ➤ Experiment Bundle (Experiment YAML + SUT Deployment Descriptor YAML + Files) ➤ Experiment Execution ➤ Result Analysis ➤ Metrics / Failures / Execution Errors]
Automation-oriented Performance Tests Catalog
Why a Catalog? ➤ Catalog Template ➤ Included Performance Tests ➤ Examples
Fills a gap identified in the performance testing literature by contributing an automation-oriented performance tests catalog, providing a comprehensive reference for properly identifying different kinds of performance tests and their automation requirements.
Automation-oriented Performance Tests Catalog
Why a Catalog? ➤ Catalog Template ➤ Included Performance Tests ➤ Examples
- Assumptions on the S.U.T. maturity
- Expectations on the execution environment conditions
- Workload input parameters
- Required execution process
- Checks to be performed on the S.U.T.
- Measurements to be collected and metrics to be calculated
- Preliminary performance tests to be already executed
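As an illustration only (not an excerpt from the catalog itself), a template instance for a Load Test could be recorded in machine-readable form along these lines; the field names mirror the template bullets above, while all values are hypothetical:

```yaml
# Hypothetical machine-readable catalog entry following the template above.
# Field names mirror the template bullets; values are illustrative only.
test_type: Load Test
sut_maturity: feature-complete service, functionally tested
environment_expectations: production-like testbed with isolated resources
workload_input_parameters: [number of simulated users, arrival rate, test duration]
execution_process: ramp-up, steady state, ramp-down
sut_checks: [health check before start, no functional errors during the run]
measurements_and_metrics: [response time, throughput, CPU and RAM utilisation]
preliminary_tests: [Smoke Test, Baseline Performance Test]
```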
Automation-oriented Performance Tests Catalog
Why a Catalog? ➤ Catalog Template ➤ Included Performance Tests ➤ Examples
1. Baseline Performance Test
2. Unit Performance Test
3. Smoke Test
4. Performance Regression Test
5. Sanity Test
6. Load Test
7. Scalability Test
8. Elasticity Test
9. Stress Test
10. Peak Load Test
11. Spike Test
12. Throttle Test
13. Soak or Stability Test
14. Exploratory Test
15. Configuration Test
16. Benchmark Performance Test
17. Acceptance Test
18. Capacity or Endurance Test
19. Chaos Test
20. Live-traffic or Canary Test
21. Breakpoints Perf. Test
22. Failover or Recovery Test
23. Resiliency or Reliability Test
24. Snapshot-load Test
25. Volume or Flood Test
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
BenchFlow Declarative DSL + BenchFlow Model-driven Framework
[Pipeline: Test Bundle (Test Suite / Test YAML + SUT Deployment Descriptor YAML + Files) ➤ Goal Exploration ➤ Experiment Generation ➤ Experiment Bundle (Experiment YAML + SUT Deployment Descriptor YAML + Files) ➤ Experiment Execution ➤ Result Analysis ➤ Metrics / Failures / Execution Errors]
[Ferme and Pautasso, ICPE 2018] [Ferme et al., BPM 2015] [Ferme and Pautasso, ICPE 2016]
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
BenchFlow Declarative DSL covers: Load Functions, Workloads, Simulated Users, Test Data, TestBed Management, Performance Data Analysis, and the Definition of Configuration Tests
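A minimal sketch of how some of these elements combine in a test definition: a workload with a load function driving simulated users. Field names follow the BenchFlowTest model presented in this talk; the workload name and all values are illustrative, not taken from a real BenchFlow test:

```yaml
# Sketch: one workload with its load function, per the BenchFlowTest model.
# "browsing_users" and all values are placeholders.
workloads:
  browsing_users:
    popularity: 100%
    load_function:
      users: 50          # simulated users
      ramp_up: 60s
      steady_state: 300s
      ramp_down: 60s
```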
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
BenchFlow Declarative DSL: Integration in the CSDL, SUT-awareness, Goal-Driven Performance Testing
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
BenchFlow Model-driven Framework: Test Scheduling, Manage S.U.T., Deployment Infra., Issue Workload, Collect Data, Analyse Data
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
[Pipeline: a Test Bundle (Test Suite / Test YAML + SUT Deployment Descriptor YAML + Files) enters Goal Exploration; Experiment Generation produces an Experiment Bundle (Experiment YAML + SUT Deployment Descriptor YAML + Files); Experiment Execution runs it and, on Success, Result Analysis yields Metrics and Failures; Execution Errors feed back into the Exploration ➤ Execution ➤ Analysis loop]
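The SUT Deployment Descriptor referenced in both bundles describes how to deploy the system under test as containers. A minimal sketch, assuming a Docker Compose-style descriptor (BenchFlow manages Docker-based testbeds); the service names and images are placeholders, not taken from the talk:

```yaml
# Hypothetical SUT Deployment Descriptor in Docker Compose style.
# Service names and images are placeholders.
version: '3'
services:
  sut:
    image: example/web-service:1.0
    ports:
      - "8080:8080"
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example
```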
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
[UML model of a BenchFlowTest:]
BenchFlowTest — version: TestVersion; name: String; description: Option<String>; workloads: Map<String, Workload>; labels: Option<String>
«enumeration» TestVersion — 1, 1.1, 2, 3
«abstract» Workload (workload_name, 1..N) — popularity: Option<Percent>
BenchFlowTestConfiguration (configuration, 1)
Sut (sut, 1) — name: String; type: Option<SutType>; services_configuration: Option<Map<String, ServiceConfigurations>>
Goal (goal, 1) — type: GoalType; stored_knowledge: Option<Boolean>
DataCollection (data_collection, 0..1) — only_declared: Boolean; services: Option<Map<String, ServerSideConfiguration>>; workloads: Option<Map<String, ClientSideConfiguration>>
LoadFunction (load_function, 1) — users: Option<Int>; ramp_up: Time; steady_state: Time; ramp_down: Time
TerminationCriteria (termination_criteria, 0..1) — test: TestTerminationCriterion; experiment: ExperimentTerminationCriterion
QualityGates (quality_gates, 0..1)
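Putting the model together, a complete test definition might look roughly as follows. The top-level field names come from the BenchFlowTest model above; all values, the workload body, and the termination criterion shown are illustrative, not an excerpt from a real BenchFlow test:

```yaml
# Sketch of a test definition instantiating the BenchFlowTest model above.
# Field names follow the model; values are placeholders.
version: 1
name: load-test-checkout
description: Load test of the checkout service
workloads:
  browsing_users:
    popularity: 100%
configuration:
  goal:
    type: LOAD
  load_function:
    users: 100
    ramp_up: 60s
    steady_state: 600s
    ramp_down: 60s
  termination_criteria:
    experiment:
      max_time: 20m        # illustrative ExperimentTerminationCriterion
sut:
  name: checkout-service
  type: http               # illustrative SutType value
data_collection:
  only_declared: false
```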
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
[UML model of a Goal:]
Goal — type: GoalType; stored_knowledge: Option<Boolean>; observe: Observe (1); exploration: Exploration (0..1)
«enumeration» GoalType — LOAD, SMOKE, SANITY, CONFIGURATION, SCALABILITY, SPIKE, EXHAUSTIVE_EXPLORATION, STABILITY_BOUNDARY, CAPACITY_CONSTRAINTS, REGRESSION_COMPLETE, REGRESSION_INTERSECTION, ACCEPTANCE
Observe — services (0..N): ServiceObserve; workloads (0..N): WorkloadObserve
ServiceObserve — service_name: List<ServiceMetric>
WorkloadObserve — workload_name: Option<List<WorkloadMetric>>; operation_name: Option<List<WorkloadMetric>>
«enumeration» ServiceMetric — AVG_RAM, AVG_CPU, RESOURCE_COST, …
«enumeration» WorkloadMetric — AVG_RESPONSE_TIME, THROUGHPUT, AVG_LATENCY, …
96. 33
Exploration
  exploration_space: ExplorationSpace [1]
  exploration_strategy: ExplorationStrategy [1]
  stability_criteria: Option<StabilityCriteria> [0..1]

ExplorationSpace
  services: Option<Map<String, ServiceExplorationSpace>>  (keyed by service_name, 0..*)
  load_function: Option<LoadFunctionExplorationSpace> [0..1]

LoadFunctionExplorationSpace
  users: Option<List<Int>>
  users_range: Option<[Int,Int]>
  users_step: Option<StepFunction[Int]>

ServiceExplorationSpace
  resources: Option<Map<Resource, String>>  (0..2 resources)
  configuration: Option<Map<String, List<String>>>

«abstract» Resource

Memory (Resource)
  values: Option<List<Bytes>>
  range: Option<[Bytes,Bytes]>
  step: Option<StepFunction[Bytes]>

Cpu (Resource)
  values: Option<List<Millicores>>
  range: Option<[Millicores,Millicores]>
  step: Option<StepFunction[Millicores]>

StepFunction[T]
  operator: StepFunctionOperator
  value: T

«enumeration» StepFunctionOperator
  PLUS, MINUS, MULTIPLY, DIVIDE, POWER

StabilityCriteria
  services: Option<Map<String, ServiceStabilityCriterion>>   (keyed by service_name, 0..*)
  workloads: Option<Map<String, WorkloadStabilityCriterion>> (keyed by workload_name, 0..*)

ServiceStabilityCriterion
  avg_cpu: Option<StabilityCriterionSetting[Percent]>
  avg_memory: Option<StabilityCriterionSetting[Percent]>

WorkloadStabilityCriterion
  max_mix_deviation: Percent

StabilityCriterionSetting[T]
  operator: StabilityCriterionCondition
  value: T

«enumeration» StabilityCriterionCondition
  GREATER_THAN, LESS_THAN, GREATER_OR_EQUAL_THAN, LESS_OR_EQUAL_THAN, EQUAL

ExplorationStrategy
  selection: SelectionStrategyType
  validation: Option<ValidationStrategyType>
  regression: Option<RegressionStrategyType>
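An exploration space is, in essence, the set of all combinations of the declared per-service parameter values. A minimal sketch (not the BenchFlow implementation) of enumerating such a space as a Cartesian product:

```python
from itertools import product

def enumerate_space(dimensions):
    """dimensions: mapping of dimension name -> list of candidate values.
    Yields one {name: value} configuration per point in the space."""
    names = list(dimensions)
    for values in product(*(dimensions[n] for n in names)):
        yield dict(zip(names, values))

# Hypothetical space: two CPU settings (millicores) x two thread counts.
space = {
    "service_a.cpu": [100, 400],
    "service_a.threads": [12, 24],
}
points = list(enumerate_space(space))
print(len(points))  # 4
print(points[0])    # {'service_a.cpu': 100, 'service_a.threads': 12}
```

The exploration strategy then decides which of these points to actually execute and in what order, so the full product is an upper bound, not necessarily the executed set.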
BenchFlowTest
  version: TestVersion
  name: String
  description: Option<String>
  workloads: Map<String, Workload>  (keyed by workload_name, 1..N)
  labels: Option<String>
  configuration: BenchFlowTestConfiguration [1]
  sut: Sut [1]

«abstract» Workload
  popularity: Option<Percent>

Sut
  name: String
  type: Option<SutType>
  services_configuration: Option<Map<String, ServiceConfigurations>>

«enumeration» TestVersion
  1, 1.1, 2, 3

BenchFlowTestConfiguration
  goal: Goal [1]
  load_function: LoadFunction [1]
  data_collection: Option<DataCollection> [0..1]
  termination_criteria: Option<TerminationCriteria> [0..1]
  quality_gates: Option<QualityGates> [0..1]

Goal
  type: GoalType
  stored_knowledge: Option<Boolean>

DataCollection
  only_declared: Boolean
  services: Option<Map<String, ServerSideConfiguration>>
  workloads: Option<Map<String, ClientSideConfiguration>>

LoadFunction
  users: Option<Int>
  ramp_up: Time
  steady_state: Time
  ramp_down: Time

TerminationCriteria
  +test: TestTerminationCriterion
  +experiment: ExperimentTerminationCriterion
QualityGate
  services: Option<Map<String, ServiceQualityGate>>    (keyed by service_name, 0..*)
  workloads: Option<Map<String, WorkloadQualityGate>>  (keyed by workload_name, 0..*)
  mean_absolute_error: Option<Percent>
  regression: Option<RegressionQualityGate> [0..1]

ServiceQualityGate
  gate_metric: ServiceMetric
  condition: GateCondition
  gate_threshold_target: String OR ServiceMetric
  gate_threshold_minimum: Option<String OR ServiceMetric>

WorkloadQualityGate
  max_mix_deviation: Option<Percent>
  max_think_time_deviation: Option<Percent>
  gate_metric: Option<WorkloadMetric>
  condition: Option<GateCondition>
  gate_threshold_target: Option<String OR WorkloadMetric>
  gate_threshold_minimum: Option<String OR WorkloadMetric>

RegressionQualityGate
  service: Option<String>
  workload: Option<String>
  gate_metric: ServiceMetric OR WorkloadMetric
  regression_delta_absolute: Option<Time>
  regression_delta_percent: Option<Percent>

«enumeration» GateCondition
  GREATER_THAN, LESS_THAN, GREATER_OR_EQUAL_THAN, LESS_OR_EQUAL_THAN,
  EQUAL, PERCENT_MORE, PERCENT_LESS
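Evaluating a quality gate boils down to applying its declared condition to a measured metric against a threshold. A minimal sketch (condition spellings normalized; the `evaluate_gate` helper is an assumption, not the BenchFlow API):

```python
import operator

# Map the GateCondition comparison operators onto Python's operator module.
CONDITIONS = {
    "GREATER_THAN": operator.gt,
    "LESS_THAN": operator.lt,
    "GREATER_OR_EQUAL_THAN": operator.ge,
    "LESS_OR_EQUAL_THAN": operator.le,
    "EQUAL": operator.eq,
}

def evaluate_gate(measured, condition, threshold):
    """Return True when the measured metric satisfies the gate."""
    return CONDITIONS[condition](measured, threshold)

# e.g. a ServiceQualityGate: AVG_CPU (percent) must stay below 80
print(evaluate_gate(65.0, "LESS_THAN", 80.0))  # True
print(evaluate_gate(91.0, "LESS_THAN", 80.0))  # False
```

PERCENT_MORE / PERCENT_LESS would additionally need a reference value (e.g. a baseline metric) to compute the relative delta, which is why they are omitted from this sketch.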
TerminationCriteria
  test: Option<TestTerminationCriterion> [0..1]
  experiment: Option<TerminationCriterion> [0..1]

TestTerminationCriterion
  max_time: Time
  max_number_of_experiments: Option<Int>
  max_failed_experiments: Option<Percent>

TerminationCriterion
  max_number_of_trials: Int
  max_failed_trials: Option<Percent>
  services: Option<Map<String, ServiceTerminationCriterion>>   (keyed by service_name, 0..*)
  workloads: Option<Map<String, WorkloadTerminationCriterion>> (keyed by workload_name, 0..*)

ServiceTerminationCriterion
  confidence_interval_metric: ServiceMetric
  confidence_interval_value: Float
  confidence_interval_precision: Percent

WorkloadTerminationCriterion
  confidence_interval_metric: WorkloadMetric
  confidence_interval_value: Float
  confidence_interval_precision: Percent
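A confidence-interval termination criterion stops repeating trials once the metric's confidence interval is tight enough relative to its mean. A sketch of that check under a normal approximation (the z-values and the `ci_reached` helper are illustrative assumptions, not the exact BenchFlow computation):

```python
from statistics import mean, stdev
from math import sqrt

# Two-sided z critical values for common confidence levels.
Z = {0.95: 1.96, 0.99: 2.576}

def ci_reached(samples, confidence, precision):
    """precision: maximum allowed CI half-width as a fraction of the mean.
    Returns True once enough trials were run for the metric to be stable."""
    if len(samples) < 2:
        return False
    m = mean(samples)
    half_width = Z[confidence] * stdev(samples) / sqrt(len(samples))
    return half_width / m <= precision

rts = [201, 198, 203, 199, 200, 202]  # avg response time per trial (ms)
print(ci_reached(rts, 0.95, 0.01))    # True: half-width ~0.7% of the mean
```

Tying termination to statistical precision instead of a fixed trial count is what lets the framework trade execution time against result reliability automatically.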
Experiment Execution — the execution pipeline has three phases (Exploration, Execution, Analysis):

Test Bundle (Test Suite / Test YAML + SUT Deployment Descriptor YAML + Files)
  → Goal Exploration → Experiment Generation
  → Experiment Bundle (Experiment YAML + SUT Deployment Descriptor YAML + Files)
  → Experiment Execution → Result Analysis (Metrics, Failures)
  → on Success: back to Goal Exploration until the goal is reached
  → on Errors: Execution Analysis
BenchFlowExperiment
  version: ExperimentVersion
  name: String
  description: Option<String>
  workloads: Map<String, Workload>  (workload, 1)
  labels: Option<String>
  configuration: BenchFlowExperimentConfiguration [1]
  sut: Sut [1]

«abstract» Workload
  popularity: Option<Percent>

Sut
  name: String
  type: Option<SutType>
  services_configuration: Option<Map<String, ServiceConfigurations>>
  version: SutVersion [1]

BenchFlowExperimentConfiguration
  data_collection: DataCollection [1]
  load_function: LoadFunction [1]
  termination_criteria: TerminationCriterion [1]

DataCollection
  only_declared: Boolean
  services: Option<Map<String, ServerSideConfiguration>>
  workloads: Option<Map<String, ClientSideConfiguration>>

TerminationCriterion
  max_number_of_trials: Int
  max_failed_trials: Option<Percent>
  services: Option<Map<String, ServiceTerminationCriterion>>
  workloads: Option<Map<String, WorkloadTerminationCriterion>>
  experiment: Option<ExperimentTerminationCriteria> [0..1]

ExperimentTerminationCriteria
  max_time: Time

LoadFunction
  users: Int  (concrete at the experiment level: no Option)
  ramp_up: Time
  steady_state: Time
  ramp_down: Time

«enumeration» ExperimentVersion
  1, 1.1, 1.2, 1.3, 1.4, 2, 2.1, 2.2, 3
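The key difference between the test and experiment models is that open parameters in a test (e.g. `users: Option<Int>` in the load function) must be bound to single concrete values in an experiment. A hedged sketch of that binding step, with simplified dict shapes that are assumptions rather than the BenchFlow schemas:

```python
def bind_experiment(test_load_function, users):
    """Bind one exploration point into a concrete experiment load
    function: 'users' is always set in the result, so the experiment
    is fully executable without further decisions."""
    exp = dict(test_load_function)  # do not mutate the test definition
    if exp.get("users") is None:
        exp["users"] = users
    return exp

test_lf = {"users": None, "ramp_up": "5m", "steady_state": "20m", "ramp_down": "5m"}
print(bind_experiment(test_lf, 1000))
# {'users': 1000, 'ramp_up': '5m', 'steady_state': '20m', 'ramp_down': '5m'}
```

Experiment generation repeats this binding for every open parameter (resources, service configuration, load), producing one experiment bundle per exploration point.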
In the continuous development pipeline, performance tests sit alongside functional tests:

Checkout → Build → Unit Tests → Integration Tests → E2E Tests  (FUNCTIONAL TESTS)
→ Smoke Tests → Load Tests → Acceptance Tests → Regression Tests  (PERFORMANCE TESTS)
→ Deploy in Production
TestSuite
  version: TestSuiteVersion
  name: String
  description: Option<String>
  suite: Suite [1]

Suite
  environments: List<Environment> [0..N]
  tests: Test [1]
  triggers: Option<Trigger> [0..1]
  quality_gates: QualityGate [1]

Environment
  name: String
  skip_deploy: Option<Boolean>

Test
  include_labels: Option<List<Regex>>
  paths: Option<List<String>>

Trigger
  scheduled: Option<Boolean>
  on: List<Event> [0..N]

Event subtypes:

Push
  branches: Option<List<Regex>>

PullRequest
  contexts: Option<ContextType>
  source_branches: Option<List<Regex>>
  target_branches: Option<List<Regex>>

Release
  types: List<String>

Deployment
  names: List<String>

QualityGate
  criterion: CriterionType
  exclude: Option<List<String>>

«enumeration» CriterionType
  ALL_SUCCESS, AT_LEAST_ONE_SUCCESS

«enumeration» TestSuiteVersion
  1, 1.1

«enumeration» ContextType
  HEAD, MERGE, ALL
Test
Load Test — an example BenchFlow test definition:

version: "3"
name: "Load Test"
description: "Example of Load Test"
labels: "load_test"
configuration:
  goal:
    type: "load_test"
    # stored_knowledge: "false"
  observe:
    ...
  load_function:
    users: 1000
    ramp_up: 5m
    steady_state: 20m
    ramp_down: 5m
  termination_criteria:
    ...
  quality_gates:
    ...
  sut:
    ...
  workloads:
    ...
  data_collection:
    # AUTOMATICALLY attached based on the observe section IF NOT specified
    services:
      ...
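The comment in the listing above says that data_collection is attached automatically from the observe section when it is omitted. A minimal sketch of that defaulting rule, with simplified dict shapes that are assumptions rather than the real schema:

```python
def default_data_collection(test):
    """If the test declares no data_collection, derive one that collects
    exactly what the observe section asks for (only_declared=True)."""
    cfg = test["configuration"]
    if "data_collection" not in cfg:
        observed_services = cfg.get("observe", {}).get("services", {})
        cfg["data_collection"] = {
            "only_declared": True,
            # one (empty, i.e. default) collection config per observed service
            "services": {name: {} for name in observed_services},
        }
    return test

t = {"configuration": {"observe": {"services": {"service_a": ["AVG_CPU"]}}}}
print(default_data_collection(t)["configuration"]["data_collection"]["services"])
# {'service_a': {}}
```

This defaulting is one of the mechanisms that keeps test definitions short: the user states the goal and what to observe, and the framework fills in the operational details.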
[Figure: case-study SUT — a Workflow Engine (Core Engine with Process Navigator, Task Dispatcher, Service Invoker, Job Executor, Transaction Manager, Persistence Manager) deployed on an Application Server, with its Instance Database on a DBMS; Users drive the engine, which invokes external Web Services.]

[Skouradaki et al., ICPE 2015] [Ferme et al., BPM 2015] [Ferme et al., CLOSER 2016] [Skouradaki et al., BPM 2016] [Ferme et al., BPM 2016] [Ivanchikj et al., BPM 2017] [Rosinosky et al., OTM 2018]
Configuration Test — an example BenchFlow test with an exploration section:

version: "3"
name: "Configuration Test"
description: "Example of Configuration Test"
labels: "configuration"
configuration:
  goal:
    type: "configuration"
    stored_knowledge: "true"
  observe:
    ...
  exploration:
    exploration_space:
      services:
        service_a:
          resources:
            cpu:
              range: [100m, 1000m]
              step: "*4"
            memory:
              range: [256Mi, 1024Mi]
              step: "+768Mi"
          configuration:
            NUM_SERVICE_THREAD: [12, 24]
        dbms_a:
          resources:
            cpu:
              range: [100m, 1000m]
              step: "*10"
            memory:
              range: [256Mi, 1024Mi]
              step: "+768Mi"
          configuration:
            QUERY_CACHE_SIZE: 48Mi
    exploration_strategy:
      selection: "one_at_a_time"
  load_function:
    ...
  termination_criteria:
    ...
  quality_gates:
    ...

[Figure: workload-situation-based testing pipeline — (1) Collection of operational data; (2) Analysis of operational data with ContinuITy, yielding an operational profile and an empirical distribution of workload situations (number of users); (3) Experiment generation from a load-test template and architectural configurations; (4) Experiment execution with BenchFlow and Faban; (5) Domain-metric calculation, producing baseline and test pass/fail results per architectural configuration and a domain-metric dashboard.]

[Avritzer et al., JSS 2020] [Avritzer et al., ICPE 2019] [Avritzer et al., ECSA 2018]
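The step strings in the listing above ("*4", "+768Mi") compactly define which resource values a range expands to. A sketch of that expansion, with parsing and unit handling simplified to plain integers (millicores / MiB) as an assumption:

```python
def expand(lo, hi, step):
    """Expand a resource range [lo, hi] with a step function string:
    '+N', '-N', '*N', '/N' applied repeatedly from lo while the value
    stays within the range."""
    op, amount = step[0], int(step[1:])
    apply = {"+": lambda v: v + amount, "-": lambda v: v - amount,
             "*": lambda v: v * amount, "/": lambda v: v // amount}[op]
    values, v = [], lo
    while lo <= v <= hi:
        values.append(v)
        v = apply(v)
    return values

# cpu range [100m, 1000m], step "*4"  -> 100m, 400m (1600m exceeds the range)
print(expand(100, 1000, "*4"))    # [100, 400]
# cpu range [100m, 1000m], step "*10" -> 100m, 1000m
print(expand(100, 1000, "*10"))   # [100, 1000]
# memory range [256Mi, 1024Mi], step "+768Mi" -> 256Mi, 1024Mi
print(expand(256, 1024, "+768"))  # [256, 1024]
```

Under this reading, service_a contributes 2 (cpu) x 2 (memory) x 2 (threads) = 8 configurations, which the one_at_a_time selection strategy then explores dimension by dimension rather than exhaustively.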
123. 45
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
1 version: "3"
2 name: "Configuration Test"
3 description: "Example of Configuration Test"
4 labels: "configuration"
5 configuration:
6 goal:
7 type: "configuration"
8 stored_knowledge: "true"
9 observe:
10 ...
11 exploration:
12 exploration_space:
13 services:
14 service_a:
15 resources:
16 cpu:
17 range: [100m, 1000m]
18 step: "*4"
19 memory:
20 range: [256Mi, 1024Mi]
21 step: "+768Mi"
22 configuration:
23 NUM_SERVICE_THREAD: [12, 24]
24 dbms_a:
25 resources:
26 cpu:
27 range: [100m, 1000m]
28 step: "*10"
29 memory:
30 range: [256Mi, 1024Mi]
31 step: "+768Mi"
32 configuration:
33 QUERY_CACHE_SIZE: 48Mi
34 exploration_strategy:
35 selection: "one_at_a_time"
36 load_function:
37 ...
38 termination_criteria:
39 ...
40 quality_gates:
41 ...
Configuration Test Application Server DBMS
Workflow Engine
Job
Executor
Core
Engine
Transaction Manager
Instance
Database
Persistent Manager
Process Navigator
A
B
C
D
Task Dispatcher
Users
Service Invoker
…
Web
Service
t
λ
wall clock time
9 AM
300
BenchFlow
Faban
Load test
template
Architect.
con g.
s0
1
ϕ
i
0.015
Γk 0.042
pass/fail (ck
) PASS
sn 1
2.164
0.108
FAIL
...
...
δk
1.26 % 2.58 %
δk
⋅ ck
1.26 % 0.00 %
norm. test mass (si
* p'(λ'))
Σ
100.00 %
74.81 %
0.142
...
...
...
...
...
...
...
...
...
...
^
Operational pro le
Empirical distribution of
workload situations
Baseline & test results
per architectural con g.
Domain metric
dashboard
#Workload
situations
ContinuITy
Analysis of
operational data
2
Experiment
generation
3
Experiment
execution
4
Domain metric
calculation
5
Collection of
operational data
1
λ'
sampled workload situation
f'
100 200 300
0.2
Relative
Mass
0.25
0.20
0.15
0.05
0
50 100 150 200
Workload Situations (Number of Users)
x
x
x x
x
x
x
0.10
x
250 300
x x
x
Step
(Intermediate)
Artifact
Tool
[Avritzer et al., JSS 2020]
[Avritzer et al., ICPE 2019]
[Avritzer et al., ECSA 2018]
124. 46
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
[Figure: approach overview — a Test Bundle (Test Suite / Test YAML + SUT Deployment Descriptor YAML + Files) enters Goal Exploration: Exploration selects experiments, Experiment Generation produces an Experiment Bundle (Experiment YAML + SUT Deployment Descriptor YAML + Files), Experiment Execution runs it and ends in Success or Errors, and Result Analysis derives Metrics and Failures whose Analysis feeds back into Exploration]
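The feedback loop in the overview can be sketched as the cycle explore → generate → execute → analyze, terminating when the goal is met or the space is exhausted. Every name below, and the toy throughput goal, is illustrative — none of it is BenchFlow API:

```python
# Sketch of the Goal Exploration loop: Exploration picks a configuration,
# Experiment Generation turns it into a bundle, Experiment Execution runs
# it, Result Analysis feeds metrics back, until the goal is reached or
# the exploration space is exhausted. All names are illustrative.

def goal_exploration(space, generate, execute, analyze, goal_reached):
    knowledge = []                          # accumulated experiment results
    for point in space:                     # Exploration: next configuration
        bundle = generate(point)            # Experiment Generation -> bundle
        outcome = execute(bundle)           # Experiment Execution
        knowledge.append(analyze(outcome))  # Result Analysis: metrics
        if goal_reached(knowledge):
            break
    return knowledge

# Toy run: find a config whose simulated throughput exceeds 300 req/s.
space = [{"cpu": "100m"}, {"cpu": "400m"}, {"cpu": "1000m"}]
results = goal_exploration(
    space,
    generate=lambda p: {"experiment": p},
    execute=lambda b: int(b["experiment"]["cpu"].rstrip("m")),
    analyze=lambda t: {"throughput": t},
    goal_reached=lambda k: k[-1]["throughput"] > 300,
)
print(len(results))  # 2 -> goal met at the second configuration
```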
125. 47
[State machine: test execution life cycle —
States: Start → Ready → Running; Waiting (entered when user input is needed, left when user input is received); Terminating (on user paused, user terminated OR [execution time > max_time]); Terminated, with final outcomes Goal Reached, Partially Complete, or Completed with Failure.
Activities in Running: Add Stored Knowledge → Determine Exploration Strategy. With a [regression model]: Determine and Execute Initial Validation Set (Experiment Life Cycle) → Handle Experiment Result as experiment results become available, removing non-reachable experiments for the validation set while the [validation set NOT complete]; once the [validation set complete], Derive Prediction Function → Validate Prediction Function, branching on [Acceptable Prediction Error] / [Not Acceptable Prediction Error]. With [no regression model]: Remove Non Reachable Experiments → Determine and Execute Experiments (Experiment Life Cycle) → Handle Experiment Result.
Validate Termination Criteria branches on [experiments remaining] / [all experiments executed] and on whether the goal can still be reached ([can reach goal] / [cannot reach goal]); Check Quality Gates ends the run with [quality gates pass] or [failed quality gates]; the run also stops once [# executed experiments >= max_number_of_experiments]]
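The coarse state changes above can be encoded as a small transition table. This sketch omits the guards and the activities inside Running, keeping only which state changes are legal; it is an illustration, not BenchFlow's implementation:

```python
# Sketch: the test life-cycle states as an explicit transition table.
# Guards ([regression model], quality gates, ...) are omitted; only the
# legal state changes from the diagram are encoded. Illustrative only.

TRANSITIONS = {
    "Ready":       {"Running"},
    "Running":     {"Waiting", "Terminating"},  # user input needed / stop
    "Waiting":     {"Running"},                 # user input received
    "Terminating": {"Terminated"},
}

class TestRun:
    def __init__(self):
        self.state = "Ready"

    def to(self, new_state):
        if new_state not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        return self

run = TestRun()
run.to("Running").to("Waiting").to("Running").to("Terminating").to("Terminated")
print(run.state)  # Terminated
```

A real implementation would attach the final outcome (Goal Reached, Partially Complete, Completed with Failure) as data on the Terminated state rather than as separate states.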
127. 49
Entity YAML Specification
→ Parse + Syntactic Validation (Parse to YAML Object)
→ Semantic Validation
→ Entity Representation, or an Exception if validation fails
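The pipeline above — parse, syntactic validation, semantic validation — can be sketched as two failure points wrapped around one loader. BenchFlow entities are YAML; to keep this sketch dependency-free it uses the stdlib JSON parser for the parse stage, and the required fields are illustrative:

```python
# Sketch of the entity loading pipeline: the parse itself is the
# syntactic validation; the semantic check verifies required fields.
# BenchFlow uses YAML; JSON stands in here to stay stdlib-only.

import json

REQUIRED_FIELDS = {"type", "exploration"}  # illustrative, not the real schema

class SpecError(Exception):
    """Raised when the entity specification is invalid."""

def load_entity(text):
    try:
        obj = json.loads(text)              # Parse + Syntactic Validation
    except json.JSONDecodeError as e:
        raise SpecError(f"syntactic error: {e}")
    missing = REQUIRED_FIELDS - obj.keys()  # Semantic Validation
    if missing:
        raise SpecError(f"semantic error: missing {sorted(missing)}")
    return obj                              # Entity Representation

entity = load_entity('{"type": "configuration", "exploration": {}}')
print(entity["type"])  # configuration
```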
163. 61
Evaluations
Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation
Structure
- Introduction
- Background check
- Overview of the approach
- Multiple-choice tasks related to the Research Questions
- Questions on the overall approach
- Questions for additional feedback
- Conclusion