NOTE

This documentation as well as the QALIPSIS software is a work in progress and not yet officially released.

The current artifacts are available on the Maven repository https://oss.sonatype.org/content/repositories/snapshots.

If you are eager to start with QALIPSIS, you can find some examples in our dedicated GitHub repository.

Thank you for your understanding.

Introduction

Overview

QALIPSIS is an enterprise-grade load, performance and end-to-end testing tool especially designed for distributed systems. It is a step above other load test applications that merely evaluate your system internally.

  • QALIPSIS is developer-centric with a low learning curve that makes it easy to use and extremely efficient.

  • QALIPSIS is developed in Kotlin to gain the benefits of the Java ecosystem and provide seamless system performance.

  • The basic out-of-the-box QALIPSIS is completely free of charge.

    Commercially licensed plugins, the cloud-based application, and other premium extensions are not included in the basic out-of-the-box package.

  • QALIPSIS is designed to test the performance of distributed and monolithic systems.

  • QALIPSIS collects data from operating systems, databases, and monitoring tools.

  • QALIPSIS cross-checks system metrics and performs data verification to ensure that the test system is performing as expected.

Features

User-friendly interface

The QALIPSIS Graphical User Interface (GUI) allows users, depending on their role, to configure and run load, performance and end-to-end tests with capabilities including, but not limited to:

  • Creating a campaign.

  • Configuring the load distribution.

  • Increasing/decreasing the number of simulated users.

  • Starting/stopping the campaign.

  • Viewing reports.

Automation friendly performance verification

Using the Command Line Interface (CLI) or the Gradle plugin, you can automatically and repeatedly verify the condition and performance of your system.

User-friendly operation

With QALIPSIS you are never far from setting up your next load, performance, and end-to-end test campaign.

Start with the creation of a pre-configured project from QALIPSIS bootstrap, then develop code in your IDE.

QALIPSIS provides a Gradle plugin to simplify the configuration, packaging and deployment of your scenarios to your on-premise installation or QALIPSIS Cloud.

You do not need to be a Kotlin expert or a master in the technology that you want to test. QALIPSIS DSL is a meta-language that most users can learn within minutes and it provides the means to execute all of the operations in your scenarios.

Further documentation and examples guide you through more advanced use cases.

Downloading a ready-to-use project for your IDE is easy with QALIPSIS and allows integration with popular DevOps tools to test your software and track the impact of recent changes.

Aggregated assertions

Aggregated assertions allow you to verify the overall results of a scenario. You can define thresholds to make a test succeed or fail and immediately get insights about the performance and consistency of your tested system.
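
As an illustration of the idea in plain Kotlin (this is not the QALIPSIS DSL, and the sample numbers and thresholds are arbitrary), an aggregated assertion evaluates thresholds over the results of all executions rather than over a single one:

```kotlin
// Illustration only: plain Kotlin, not the QALIPSIS API.
fun main() {
    // Durations and failure count collected over a whole campaign (arbitrary sample data).
    val durationsMs = listOf(120L, 95L, 210L, 180L, 99L)
    val failures = 0
    val executions = durationsMs.size

    // Aggregate the individual results before asserting on them.
    val p95 = durationsMs.sorted()[((durationsMs.size - 1) * 95) / 100]
    val failureRate = failures.toDouble() / executions

    // The campaign succeeds only if both aggregated thresholds hold.
    check(p95 <= 250) { "p95 latency too high: $p95 ms" }
    check(failureRate <= 0.05) { "failure rate too high: $failureRate" }
}
```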

Testing, performance, and geographical distribution

  • QALIPSIS can test distributed systems.

  • QALIPSIS can perform as a distributed system.

  • QALIPSIS can simulate load from several geographic locations to run comparisons based on location.

Live results, integration and monitoring

QALIPSIS events and meters provide extended details of load-test operations, giving users various options for accessing the results. The results visualizations and dashboards, with details of each test-case execution, provide valuable insight for identifying the bottlenecks in your applications and uncovering the components and conditions of failures.

The QALIPSIS Gradle plugin integrates the automated execution of QALIPSIS scenarios with your CI/CD pipelines to validate performance and functionality on a regular and frequent basis. This helps you uncover problems immediately after they are introduced in your system, saving time and money during resolution.

Alternatively, you can use QALIPSIS for constant monitoring. A long-running campaign lets you monitor the availability and responsiveness of your system over a long period of time. The statistics of running campaigns are accessible in real-time, allowing you to observe their evolution and detect issues at their earliest occurrence. Time-to-solution is faster with the ability to analyze problems in real time and under real conditions.

Concurrent load simulations and data ingestion

Combining the double benefits of a lightweight architecture and deployment as a cluster, QALIPSIS makes it possible to simulate the load from thousands to millions of users (e.g. website visitors, connected IoT devices, etc.).

You can execute load, performance and end-to-end test campaigns running several scenarios, and analyze how behaviors of certain personas affect the experience of others.

QALIPSIS not only injects outgoing data into your system to test; it also ingests incoming data sources to compare their values with the ones generated by your tests. You can cross-verify requests and responses with infrastructure metrics, messages, and database records.

Operating system independent

QALIPSIS load, performance and end-to-end testing can be performed from any operating system supporting Java with the same impressive results.

Benefits

QALIPSIS was designed with distributed systems and current developer practices in mind. It provides a testing framework to allow developers to work in a real-world environment, writing and running load, performance and end-to-end tests without a steep learning curve.

For developers

QALIPSIS empowers developers to:

  • Choose their preferred testing tools and programming language for creating test scenarios.

  • Cross-verify data from diverse sources, consolidating responses to requests, infrastructure metrics, middleware events, and database records in the same platform for comparison and validation against expectations.

  • Harness the full power of the Java ecosystem, leveraging any library, protocol or system.

  • Reuse software components to expedite scenario development.

  • Simulate users, third-party software and IoT devices.

  • Simulate real-user and device behavior to make tests more relevant.

  • Generate detailed and relevant results by using distributed tracing markers to track the complete execution workflow from end to end.

  • Customize reports and dashboards to meet requirements.

  • Perform tests on development machines, containers and virtual machines with excellent results.

  • Seamlessly integrate into a standard DevOps workflow for performance testing, incorporating health checks before deployment to production environments.

  • Create scenarios within minutes, using any Java library of choice (such as JUnit, Hamcrest, AssertJ) or other proprietary software libraries.

  • Test non-HTTP technologies, such as databases and messaging platforms, using various plugins.

For IT operations teams

QALIPSIS provides IT Operations teams solutions to:

  • Monitor the impact of deployments and other IT operations on system availability and overall performance.

  • Expose bottlenecks and uncover correlations between load and resource consumption by unifying the verification of load-test and infrastructure metrics.

  • Proactively identify issues before they reach users and customers by validating the size and configuration of the system’s infrastructure.

  • Evaluate system performance under normal and peak loads by conducting load tests from diverse geographic locations at different times.

  • Achieve exceptional results across virtual engines, containers and conventional development machines.

  • Seamlessly integrate verifications into their standard DevOps workflow, enabling implementation and alignment with best practices.

What’s new

QALIPSIS 0.6 is the initial product release.

QALIPSIS 0.6 is designed with the scalability and simplicity required to perform load, performance and end-to-end verification on both distributed and stand-alone systems.

With QALIPSIS 0.6:

  • You can create scenarios with different branches and run them with many minions.

  • You can use its many native steps and plugins to fire load on your system and feed QALIPSIS with data to verify load strength.

QALIPSIS can run in embedded, standalone and distributed modes. It has the capability to log and save a history of events and metrics using various tools such as Elasticsearch, Graphite, TimescaleDB and InfluxDB.

In future QALIPSIS releases, you can expect an expansion of plugins, an increase in load injection steps and verification processes and various other improvements.

Feedback on this release, including bugs and suggested enhancements, is welcome and may be sent to the QALIPSIS team on GitHub.

QALIPSIS core concepts

Overview

Once you have a minimal understanding of the core concepts, you will be ready to create a QALIPSIS scenario to verify that your software is performing as expected.

The aim of QALIPSIS is to execute actions that inject load into IT systems to monitor and validate their behaviors (latency, failure rates, availability, and data consistency).

The actions are performed by minions, which inject load into a system by running through a preconfigured scenario consisting of steps and assertions. If you run QALIPSIS as a distributed system, you can create clusters with head and factory nodes to inject a higher load.

Prior to configuring QALIPSIS to run on your system, it is crucial to understand the core concepts of the tool:

  • Minions

  • Scenarios

  • Steps

  • Assertions

  • Nodes

    • Head

    • Factory

Minions

Minions are simulations of website users, IoT devices, or other systems. They each perform a complete scenario.

Any number of minions can act on a scenario. The minion count for a scenario is set during scenario configuration (refer to Scenario specifications) or at runtime (refer to QALIPSIS configuration).

Note that a minion count entered at runtime overrides the minion count set during scenario configuration.

Scenarios

A QALIPSIS scenario is the single largest component of a load test. It contains steps with a detailed description of all the operations to accomplish, their dependencies, as well as assertions to ensure everything performs as expected.

A simple scenario configuration includes the following:

  1. Declare the scenario function and assign it a name.

    • Mandatory

    • QALIPSIS Signature: @Scenario("scenario name")

  2. Register the scenario.

    • Mandatory

    • QALIPSIS Signature: val myScenario = scenario()

  3. Define the total number of minions to be executed.

    • Optional

    • Value: positive number

    • Default = 1; Minimum = 1

    • QALIPSIS Signature: minionsCount = xxxxx

  4. Define the execution profile.

    • Mandatory

    • QALIPSIS Signature: profile { strategy( ) }

  5. Define the tree where the load will be injected.

    • Mandatory

    • QALIPSIS Signature: start()

  6. Call the Gradle Plugin for QALIPSIS.

    • Mandatory

      plugins {
          kotlin("jvm")
          kotlin("kapt")
      }
  7. Add steps to your scenario.
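
Taken together, the configuration elements above can be sketched as follows. This is a sketch only, with placeholder names for the scenario and profile contents, and it assumes the QALIPSIS runtime and API dependencies are on the classpath:

```kotlin
import io.qalipsis.api.annotations.Scenario

class MyScenarios {

    @Scenario("my-scenario")         // 1. Declare the scenario function and assign it a name.
    fun myScenario() {
        scenario {                   // 2. Register the scenario.
            minionsCount = 1_000     // 3. Optional: total number of minions.
            profile {                // 4. Define the execution profile.
                // strategy(...)
            }
        }.start()                    // 5. Define the tree where the load will be injected.
        // 7. Add steps to your scenario here.
    }
}
```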

Steps

Steps are the central components for working with QALIPSIS and can be developed in Kotlin using the fluent API.

Steps depend on data sources, on the results of other steps, or on assertions, either to execute or to use their data.

A step specification is configured such that:

  • The step receives the configuration closure as a parameter.

  • .configure is called after the step parameters are set.

Refer to the Step specifications section of this manual for details.
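
As a sketch of that shape (someStep is a placeholder, not a real step name, and the name property inside the closure is an assumption for illustration):

```kotlin
// Sketch only: "someStep" stands for any concrete step provided by
// QALIPSIS or one of its plugins.
myScenario.someStep("some input")  // the step receives the configuration closure as a parameter
    .configure {                   // .configure is called after the step parameters are set
        name = "my-step"           // assumed: naming the step inside the closure
    }
```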

Each step has a unique configuration depending on the purpose of the step. The step configurations all follow the same logic:

  1. Declared as a root or successor step.

  2. Provided a name.

  3. Provided with step context.

  4. Delivers output data.

Steps are configured using core operators:

  • Assertions and verifications

  • Timing

  • Conditional

  • Converting

  • Error Handling

  • Filtering

  • Joining

  • Mathematical/Aggregate

  • Transforming

  • Flow

  • User-defined

Refer to the Step specifications section of this manual for details.

Assertions

Assertions perform checks within the scenario by:

  • Providing validation of data persistence, message exchanges, and requests to remote systems.

  • Setting and testing durations to verify the system’s overall responsiveness.

Data generated by QALIPSIS and other systems, including the system under test, can be used for all assertion checks.

Each assertion failure is logged in QALIPSIS events, enabling you to track the cause of overall campaign failures.

Additionally, QALIPSIS can generate JUnit-like reports to simplify integration with CI/CD tools.

Nodes

Nodes are distributed instances within a QALIPSIS cluster. QALIPSIS nodes are designated as either a head or a factory.

Head

The head node within a QALIPSIS cluster is responsible for orchestration and reporting.

  • It is the direct interface for all the interactions with QALIPSIS, via a graphical UI and a REST API.

  • It supports seamless integration with external third-party tools for reporting, notifications and analysis.

  • It ensures continuous access to QALIPSIS features and data, even when there are no active factory nodes.

QALIPSIS Commercial edition supports additional responsibilities for the head node, including:

  • Improved security with OAuth2 authentication support; users are able to authenticate using their existing credentials.

  • Head node redundancy; if one node fails, a backup is available to seamlessly take over, eliminating a single point of failure and enhancing the reliability of the system.

Factory

The factory nodes of a QALIPSIS cluster are distributed components that host the minions that execute load tests and assertions.

The factories are the "working" nodes of the cluster and contain the executable version of your scenarios.

Factories have a dual role: they inject load into the system under test and retrieve records from various data sources like databases and messaging platforms.

The retrieval of records enables you to compare data both within and outside the system. Importantly, setting up additional firewall rules for data reception in the factory is not necessary. QALIPSIS plugins are designed with specialized poll and search operations, simplifying the process of external data retrieval without requiring adjustments to your existing infrastructure.

You can add factories to your cluster to expand your load capacity; the factories can be synchronized and establish communication through low-latency messaging platforms such as Redis Pub/Sub and Kafka.

Unlike the head node, which remains a consistent presence in the cluster, factories can be dynamically added or removed as needed. This approach helps minimize resource consumption during periods when no campaign is running.

QALIPSIS Project Deployment

Overview

There are several ways to install, operate and use QALIPSIS, depending on your use case, load needs and IT-landscape complexity.

This section discusses different approaches, including the benefits and consequences of each approach.

If you find yourself in a situation that requires professional support to assist in the preparation, installation and operation phases, do not hesitate to contact our team at https://qalipsis.io/#contact.

Embedded

It is extremely simple to use QALIPSIS as an embedded service in your existing JVM applications.

Using QALIPSIS as an embedded service is convenient for executing campaigns in your tests (JUnit or others) or for creating your own applications that can trigger campaigns.

It requires only three steps:

  1. Add the dependencies needed to run QALIPSIS and develop your scenario.

    dependencies {
        implementation(platform("io.qalipsis:qalipsis-platform:<version>"))
        kapt("io.qalipsis:api-processors")

        runtimeOnly("io.qalipsis:head")
        runtimeOnly("io.qalipsis:factory")
        implementation("io.qalipsis:runtime")

        // Examples of plugins
        implementation("io.qalipsis.plugin:cassandra")
        implementation("io.qalipsis.plugin:jackson")
    }
  2. Develop your scenario.

    import io.qalipsis.api.annotations.Scenario
    ...

    class MyScenario {

        @Scenario("do-something-useful") // Do not forget the annotation!
        fun doSomethingUseful() {
            scenario {
                minionsCount = 10_000
                profile {
                    stages {
                      ...
                    }
                }
            }.start()
            ...
        }
    }
  3. Start QALIPSIS in your application:

    Qalipsis.main("-s", "do-something-useful", "-c", "myconfig=thevalue", "my-other-args")

Alternately, you can use the following method:

  1. Import the test fixtures module for the runtime

    dependencies {
        implementation(platform("io.qalipsis:qalipsis-platform:<version>"))
        kapt("io.qalipsis:api-processors")

        runtimeOnly("io.qalipsis:head")
        runtimeOnly("io.qalipsis:factory")
        implementation("io.qalipsis:runtime")
        implementation(testFixtures("io.qalipsis:runtime")) // Here is the additional dependency.

        // Examples of plugins
        implementation("io.qalipsis.plugin:cassandra")
        implementation("io.qalipsis.plugin:jackson")
    }
  2. Use the builder to start QALIPSIS

      QalipsisTestRunner.withScenarios("do-something-useful")
          .withConfiguration("myconfig=thevalue")
          .execute("my-other-args")

Ephemeral

Ephemeral installations of QALIPSIS are designed to be directly embedded into third-party tools or for use in continuous integration/continuous deployment (CI/CD) applications.

However, the ease of build and lightness of an ephemeral installation makes it a very portable alternative for verifying test systems and performing demos.

The QALIPSIS Gradle Plugin provides the task package-ephemeral to create an archive of your project.

To configure an ephemeral installation for project deployment:

  • In your QALIPSIS project, execute the task package-ephemeral which will assemble your project and create an archive file: build/install/your-project-name-your-version-qalipsis.jar.

    The .jar file is a self-executable archive that contains your scenario, classes, and all required dependencies to execute your QALIPSIS project.

  • Check that Java Runtime Environment 11+ is installed on your system as it is required to execute the archive.

  • Move the archived file to your folder of choice; navigate to that folder.

  • If operating on a Windows system, run java -jar your-project-name-your-version-qalipsis.jar --autostart.

  • If operating on a Linux, Unix or Mac system:

    1. Run chmod +x your-project-name-your-version-qalipsis.jar

    2. Run java -jar your-project-name-your-version-qalipsis.jar --autostart

  • QALIPSIS starts, executes the scenario(s) in the project, and quits.

  • The results of the project are displayed on the console but are not saved to QALIPSIS.

  • To save the results, export them to a file(s) using any of our exporters such as JUnit.

If you want to tweak the configuration, create and include in your project a qalipsis.yml or config/qalipsis.yml file that contains the part of the configuration that you want to overwrite.

Note that the config/qalipsis.yml file takes precedence over qalipsis.yml if both are present.

If any of your use cases includes data that you want to retain in QALIPSIS for comparison, regression testing, or other use, you should use QALIPSIS in a persistent standalone installation.

Transient

Overview

Transient installations of QALIPSIS are designed for operations that do not need to keep a history of data. The most often used cases for transient installations include:

  • Continuous integration/continuous deployment (CI/CD) workflows where only the success or failure of the execution determines if the data passes through the quality gates.

  • End-to-end testing that does not require a history of data.

A transient installation is not a long-lived process.

  • QALIPSIS starts a campaign execution on demand and stops immediately after the campaign’s completion.

  • To keep external logs of the campaign execution, export the results as a file or to a third-party platform.

If any of your use cases includes data that you want to retain in QALIPSIS for comparison, regression testing, or other use, you should use QALIPSIS in a persistent standalone installation.

Configuring a transient installation for project deployment

The QALIPSIS Gradle Plugin provides tasks to create an installable archive from your project: a single task creates both a ZIP archive and a TAR.GZ archive. The installation procedure is the same for either format; this section describes installation from the ZIP archive.

In your QALIPSIS project, execute the task package-standalone which will assemble your project and create a ZIP file: build/install/qalipsis-your-project-name.zip.

The ZIP archive contains your scenario, classes, and all required dependencies to execute your QALIPSIS project.

  • Add the ZIP archive to your installation folder of choice and extract the compressed files: unzip qalipsis-your-project-name.zip.

  • Navigate to the QALIPSIS folder on your system:

    • Linux, Unix or Mac systems: `/your/installation/folder/qalipsis-your-project-name`

    • Windows systems: `C:\your\installation\folder\qalipsis-your-project-name`

  • Start QALIPSIS:

    • On Linux, Unix or Mac systems:

      1. Run chmod +x -R

      2. Run ./qalipsis.sh --autostart

    • On Windows systems: run qalipsis.bat --autostart

An additional argument is not required, as the transient, non-persistent execution is the default.

  • QALIPSIS starts, executes the scenario(s) in the project, and quits.

  • The results of the project are displayed on the console but are not saved to QALIPSIS.

  • To save the results, export them to a file(s) using an exporter such as JUnit.

If you want to tweak the configuration, create and include in your project a qalipsis.yml or config/qalipsis.yml file that contains the parts of the configuration that you want to overwrite.

Note that the config/qalipsis.yml file takes precedence over qalipsis.yml if both are present.

If any of your use cases includes data that you want to retain in QALIPSIS for comparison, regression testing, or other use, you should use QALIPSIS in a persistent standalone installation.

Standalone

Overview

Standalone installations of QALIPSIS are designed for low to normal load operations that require data tracking. They allow you to tweak the deployment configuration to track the history of executions.

A standalone installation:

  • Executes all QALIPSIS components as in a transient installation.

  • Provides durability in data and history tracking.

  • Provides REST endpoints to interact with QALIPSIS.

QALIPSIS stores the master data and the time-series data generated by the scenario (events, meters) in separate databases. The master data requires transactional consistency and the time-series data requires high throughput and strong calculation capabilities. Separate databases allow for the different requirements.

For the sake of simplicity, this guide will use PostgreSQL for both groups of data.

If you want to configure a different database for the time-series data, refer to the applicable plugin section for details (elasticsearch, influxDB, timescaleDB, and graphite are currently supported).

This section will guide you through turning a transient installation into a persistent standalone installation.

While this installation requires the installation of a PostgreSQL database, we will not go into those details here as they are provided in the official PostgreSQL documentation.

Prepare the installation

Execute the following steps as for a Transient installation:

  1. In your QALIPSIS project, execute the task package-standalone to create a ZIP file: build/install/qalipsis-your-project-name.zip.

  2. Add the ZIP archive to your installation folder of choice and extract the compressed files: unzip qalipsis-your-project-name.zip.

Prepare the database

  1. Install your PostgreSQL database

  2. Create a user with the name qalipsis.

  3. Create a database with the name qalipsis.

  4. Connect to the database qalipsis with your admin user (generally postgres).

  5. Execute the following:

    CREATE SCHEMA IF NOT EXISTS qalipsis_liquibase;
    ALTER SCHEMA qalipsis_liquibase OWNER TO qalipsis;
    CREATE SCHEMA IF NOT EXISTS qalipsis;
    ALTER SCHEMA qalipsis OWNER TO qalipsis;

    CREATE SCHEMA events AUTHORIZATION qalipsis;
    GRANT ALL ON SCHEMA events TO qalipsis;

    CREATE SCHEMA meters AUTHORIZATION qalipsis;
    GRANT ALL ON SCHEMA meters TO qalipsis;

Configure the database for the master data

  1. Navigate to your installation folder where the qalipsis.sh and qalipsis.bat files are located.

  2. Create a qalipsis.yml file to configure access to the database for the master data:

    datasources:
      default:
        url: jdbc:postgresql://your-database-host:5432/qalipsis (1)
        username: qalipsis
        password: the_password_of_your_qalipsis_user
        maximumPoolSize: 4      (2)
        minimumIdle: 2
    1 Replace the host, port, username, and password values with the appropriate values for your configuration.
    2 The maximumPoolSize and minimumIdle settings size the database connection pool used by the REST endpoints and depend on your number of users. Increase the value of maximumPoolSize if you observe errors or high latency when using the REST endpoints.

Configure the database for the events and meters

The connection to PostgreSQL for events and meters is provided by the timescaledb plugin, which is embedded by default in all the head, transient and standalone installations you will build with the QALIPSIS Gradle plugin.

In the file qalipsis.yml add the following details:

events:
  export.timescaledb:  (1)
    enabled: true
    host: your-database-host
    port: 5432
    database: qalipsis
    username: qalipsis
    password: the_password_of_your_qalipsis_user
    schema: events

  provider.timescaledb: (2)
    enabled: true
    host: your-database-host
    port: 5432
    database: qalipsis
    username: qalipsis
    password: the_password_of_your_qalipsis_user
    schema: events

meters:
  export.timescaledb: (3)
    enabled: true
    host: your-database-host
    port: 5432
    database: qalipsis
    username: qalipsis
    password: the_password_of_your_qalipsis_user
    schema: meters

  provider.timescaledb: (4)
    enabled: true
    host: your-database-host
    port: 5432
    database: qalipsis
    username: qalipsis
    password: the_password_of_your_qalipsis_user
    schema: meters
1 Export the events to PostgreSQL.
2 Render the events from PostgreSQL.
3 Export the meters to PostgreSQL.
4 Render the meters from PostgreSQL.

Start QALIPSIS

  1. Navigate to the QALIPSIS folder on your system:

    • Linux, Unix or Mac systems: `/your/installation/folder/qalipsis-your-project-name`

    • Windows systems: `C:\your\installation\folder\qalipsis-your-project-name`

  2. Start QALIPSIS:

    • On Linux, Unix or Mac systems:

      1. Run chmod +x -R

      2. Run ./qalipsis.sh --autostart

    • On Windows systems:

      Run qalipsis.bat --autostart

QALIPSIS will start and provide REST endpoints on port 8080.

Note: Open source QALIPSIS OSS does not support native security. For a native secured installation, we encourage you to contact the QALIPSIS team to request a license for QALIPSIS Commercial edition.

Cluster Installations

Overview

Clusters provide a scalable and durable installation of QALIPSIS, allowing increased load needs, more scenarios and more users.

In a cluster, QALIPSIS is split into several components which collaborate using third-party systems to exchange data.

Heads are the components that users use to interact with QALIPSIS. Factories, on the other hand, are hidden and inaccessible to users. They exclusively initiate outbound connections, enabling them to securely operate behind firewalls without limitations.

Heads should remain active permanently to ensure the best user experience. Factories can either remain active or be activated as needed, without affecting the analysis of campaign results.

Clusters can be deployed in different environments, including Kubernetes (https://kubernetes.io/docs/concepts/security/), OpenShift (https://www.redhat.com/en/technologies/cloud-computing/openshift), or a hybrid system with some factories on physical machines and others on virtual machines.

The following sections explain the installation process for various forms of QALIPSIS clusters that offer different levels of scalability and availability.

Light cluster

A light cluster requires minimal effort for installation; however, it is limited in terms of load capability.

The main advantage of a light cluster installation is to provide a secure geographical/network-based distribution of load and data consumption for private networks.

A light cluster consists of:

  • One instance of a head.

  • Several instances of factories.

  • A PostgreSQL database.

  • A Redis instance (or a Redis cluster) to host shared data and exchange messages between the QALIPSIS head and factories.

High-load cluster

A high-load cluster requires significant effort for installation and operation; however, it is extremely scalable.

NOTE

A high-load cluster is only possible with QALIPSIS Commercial edition. If you are interested in deploying a high-load cluster, contact the QALIPSIS team to request a license for QALIPSIS Commercial edition.

A high-load cluster consists of:

  • One instance of a head.

  • Several instances of factories.

  • A PostgreSQL database.

  • One or more clusters of databases for the time-series data.

  • A Redis cluster to host shared data.

  • A Kafka cluster to exchange messages between the QALIPSIS head and factories.

Amendable installations

The purpose of an amendable installation is to allow you to add scenarios to the installation without having to rebuild a new installation archive after each change.

As a prerequisite, you need to install QALIPSIS as described in the transient, standalone or cluster installation sections.

Note: When packaging an installation, you do not need to create a scenario for the project. An "empty" installation is a valid installation. Executing QALIPSIS on an "empty" installation will return a non-zero exit code and a "no scenario found" warning.

When your QALIPSIS installation is complete:

  1. Create a new QALIPSIS project using the QALIPSIS Gradle plugin.

  2. Create a scenario in your project.

  3. Add implementation or runtime dependencies to your Gradle project, as you would with any Gradle project.

  4. Execute the task package-scenario on your Gradle module, which will assemble your project and create a ZIP file:

    build/scenario/qalipsis-scenario-your-project-name.zip

    This archive contains your scenario and all the non-QALIPSIS dependencies you have manually added to your Gradle project.

  5. Copy the archive to the folder of an existing QALIPSIS installation: transient, standalone, or a factory of a cluster.

  6. Extract the compressed files: unzip qalipsis-scenario-your-project-name.zip.

  7. Copy each file from the expanded folder into the scenarios folder of your existing installation.

  8. If QALIPSIS is still running, restart it to load the new scenarios.

WARNING

Different scenarios might require different dependencies; pay attention to version conflicts when you overwrite existing files in an existing installation.

To avoid conflicts, use the same versions of dependencies across all scenarios/projects deployed in the same installation.

Containers

Docker images from QALIPSIS work like prepared installations that you can amend with your own scenarios and configuration.

Three images are available on Docker Hub, free to all QALIPSIS users:

  • qalipsis-oss:<version|latest> to start a container as a transient or standalone application.

  • qalipsis-oss-head:<version|latest> contains only the components to execute a head in a cluster.

  • qalipsis-oss-factory:<version|latest> contains only the components to execute a factory in a cluster.

Prepare the images

While the images available on the Docker hub are ready to use, empty standalone and factory images cannot do much for you.

You can add your own scenarios and dependencies, as prepared for an amendable installation, with the following Dockerfile:

FROM qalipsis-oss:latest

ADD build/scenario/qalipsis-scenario-your-project-name.zip /opt/qalipsis

Where can you use the images?

All of the images are tested to run as Docker, Kubernetes or OpenShift containers and can be configured with additional program arguments or a mounted configuration file.

Refer to the official documentation for each platform to learn how to attach a configuration file and arguments to the execution of QALIPSIS.

Quickstart guide: Testing an HTTP server with QALIPSIS

In this quickstart guide, you will learn how to test a very simple echo-like HTTP service using QALIPSIS.

Before you get started

Before starting, you need:

  • A general understanding of the [QALIPSIS Core Concepts].

  • Java 11 SDK or above installed on your system.

  • Docker installed on your system.

What this guide covers

This guide will take you through the following steps for testing a simple HTTP service:

  1. Start an HTTP server and a Docker environment for the project.

  2. Create a project.

  3. Configure a scenario.

  4. Add steps to the scenario.

    1. An HTTP step to send requests to the server.

    2. A verify step that asserts that the server’s response is as expected.

    3. An httpWith step that reuses the HTTP connection.

    4. A second assertion step to verify the server’s responses.

  5. Execute the scenario using Gradle and view the report.

  6. Create a Java archive of your project so that you can transport it as needed and run it anywhere.

  7. Enable event logging to allow you to view errors in the report.

Start an HTTP server and a Docker environment

  1. Create a file named docker-compose.yml in your working directory with the following content:

    version: '3.5'
    services:
      http-punching-ball:
        image: aerisconsulting/http-punching-ball
        command:
          - "--https=true"
          - "--ssl-key=http-server.key"
          - "--ssl-cert=http-server.crt"
        ports:
          - "18080:8080"
          - "18443:8443"
  2. Start the created docker environment with the command docker compose up -d.

Create a project

  1. Clone or download the bootstrap-project from GitHub: https://github.com/qalipsis/bootstrap-project.

    This repository contains a preconfigured project with a skeleton to develop a scenario.

  2. From the <> Code dropdown, select Download ZIP.

  3. Extract the zipped files to your project directory on your computer.

  4. Open the build.gradle.kts file and uncomment the required plugins.

  5. Reload the Gradle project.

  6. Rename MyBootstrapScenario.kt (in the src/main/kotlin/my/bootstrap folder) to better identify your scenario.

  7. Rename the Docker images in build.gradle.kts as appropriate.

Configure a scenario

Edit the downloaded scenario skeleton

  1. Open the downloaded project archive in your IDE.

  2. Edit the file that you renamed above.

The scenario skeleton should be displayed:

package quickstart

import io.qalipsis.api.annotations.Scenario

class Quickstart {

    @Scenario("quickstart")
    fun quickstart() {
        scenario {
            minionsCount = 1
            profile {
            }
        }
        .start()
    }
}

Configure the scenario basics

Configure the scenario to inject 10,000 minions at a rate of 100 minions every 500 ms.

        scenario {
            minionsCount = 10000
            profile {
                regular(periodMs = 500, minionsCountProLaunch = 100)
            }
        }
        .start()

Add steps to the scenario

Add and configure steps within the scenario that will send an HTTP request to the server and verify that the server’s response is as expected.

Send an HTTP request

Add an http step to the scenario that will send a POST request to the server.

        .start() (1)
        .netty() (2)
        .http {  (3)
            name = "quickstart-http-post"  (4)
            connect {  (5)
                url("http://localhost:18080")
                version = HttpVersion.HTTP_1_1
            }
            request { _, _ ->  (6)
                SimpleHttpRequest(HttpMethod.POST, "/echo").body(
                    "Hello World!",
                    HttpHeaderValues.TEXT_PLAIN
                )
            }
            report {
                reportErrors = true  (7)
            }
        }
1 Specifies the starting point for load injection.
2 Accesses the netty namespace.
3 Calls the http operator.
4 Gives the http step a name.
5 Configures the HTTP connection to the server.
6 Configures the HTTP request (HTTP method, path and body) that will be sent to the server.
7 Enables reporting of errors for the http step.

Execute the scenario using Gradle: ./gradlew run. This allows you to observe requests logged by the HTTP server. However, it does not verify that the server is returning what you expect.

Verify the server’s response

To verify that the server is returning what you expect, add a verify step to the scenario, just after the http step.

        .start()
        .netty()
        .http {  (1)
            // ...
        }
        .verify { result ->
            assertSoftly {
                result.asClue {
                    it.response shouldNotBe null
                    it.response!!.asClue { response ->
                        response.status shouldBe HttpResponseStatus.OK  (2)
                        response.body shouldContainOnlyOnce "Hello World!"  (3)
                        response.body shouldContainOnlyOnce "POST"
                    }
                    it.meters.asClue { meters ->
                        meters.timeToFirstByte!! shouldBeLessThanOrEqualTo Duration.ofSeconds(1)
                        meters.timeToLastByte!! shouldBeLessThanOrEqualTo Duration.ofSeconds(2)
                    }
                }
            }
        }.configure {
            name = "verify-post" (4)
        }
1 Placeholder for the http step (quickstart-http-post) configuration parameters.
2 Verifies that the HTTP status of the response is OK; the step uses io.netty.handler.codec.http.HttpResponseStatus for the verification.
3 Verifies that the body of the response contains the string ("Hello World!") sent in the request.
4 Gives the verify step the name "verify-post".

At this point, if you execute the updated scenario, a log file will display showing the status of each response and confirmation that the server is returning the "Hello World!" string.

Reuse the HTTP connection

You can strengthen the relevance of your test by:

  1. Adding an httpWith step to the scenario that reuses the original HTTP connection (quickstart-http-post).

    The same minions that passed through the first http step will pass through the httpWith step.

  2. Adding a second verify step to assert that the response from the server is as expected.

...
            configure {
                name = "verify-post"
            }
            .netty()  (1)
            .httpWith("quickstart-http-post") {  (2)
                report {
                    reportErrors = true
                }
                request { _, _ ->
                    SimpleHttpRequest(HttpMethod.PATCH, "/echo").body(
                        "Hello World!",
                        HttpHeaderValues.TEXT_PLAIN
                    )
                }
            }.verify { result ->
                assertSoftly {
                    result.asClue {
                        it.response shouldNotBe null
                        it.response!!.asClue { response ->
                            response.status shouldBe HttpResponseStatus.OK (3)
                            response.body shouldContainOnlyOnce "Hello World!" (4)
                            response.body shouldContainOnlyOnce "PATCH" (5)
                        }
                        it.meters.asClue { meters ->
                            meters.timeToFirstByte!! shouldBeLessThanOrEqualTo Duration.ofMillis(100)
                            meters.timeToLastByte!! shouldBeLessThanOrEqualTo Duration.ofMillis(200)
                        }
                    }
                }
            }.configure {
                name = "verify-patch"
            }
1 Accesses the netty namespace.
2 Calls the httpWith operator and specifies the previous http connection to reuse ("quickstart-http-post").

Note that there is no need to specify parameters for the `httpWith` connection since it reuses the parameters from the previous http connection.

3 Verifies that the HTTP status of the response is OK; the step uses io.netty.handler.codec.http.HttpResponseStatus for the verification.
4 Verifies that the body of the response contains the string ("Hello World!") sent in the request.
5 Verifies that the body of the response contains the HTTP method ("PATCH"), which the echo server includes in its response.

Execute the scenario

Execute your updated scenario as a Gradle project.

A couple of seconds after execution, a report (similar to the one below) will display on your console.

============================================================
=====================  CAMPAIGN REPORT =====================
============================================================

Campaign...........................quickstart-http-4fb6244cd3
Start..............................2022-03-30T16:41:02.597973Z
End................................2022-03-30T16:41:53.411602Z
Duration...........................50 seconds
Configured minions.................10000
Completed minions..................10000
Successful steps executions........39832
Failed steps executions............42
Status.............................FAILED



=====================  SCENARIO REPORT =====================
Scenario...........................quickstart-http
Start..............................2022-03-30T16:41:02.597973Z
End................................2022-03-30T16:41:53.411602Z
Duration...........................50 seconds
Configured minions.................10000
Completed minions..................10000
Successful steps executions........39832
Failed steps executions............42
Status.............................FAILED
Messages:
- ERROR:  step 'quickstart-http-post' - "Success: 9958, Execution errors: 42"
- INFO:   step 'verify-post' - "Success: 9958, Failures (verification errors): 0, Errors (execution errors): 0"
- INFO:   step '_622niod959' - "Success: 9958, Execution errors: 0"
- ERROR:  step 'verify-patch' - "Success: 9926, Failures (verification errors): 32, Errors (execution errors): 0"
  • Note that the information for the whole campaign as well as for each scenario is displayed.

  • In the example above, there are errors, which explains why the execution failed.

  • To understand the errors, refer to Enable Event Logging.

Create an archive of your scenario

Execute the statement ./gradlew assemble.

A JAR archive ending with -qalipsis.jar is created in the folder build/libs. This archive contains all dependencies and is self-sufficient to execute your scenario on any machine with a Java Runtime Environment installed.

To execute your scenario:

  1. Open a terminal and navigate to the folder where the JAR archive is stored.

  2. Move the JAR archive out of the build folder; this will avoid accidental deletion of the archive.

  3. Execute the command java -jar quickstart-<your-version>-qalipsis.jar.

Enable event logging

In the current directory, create a folder called config. In this new folder, create a file called qalipsis.yml with the following content:

events:
  root: warn
  export:
    slf4j:
      enabled: true

This enables the logging of the events with severity WARN and higher to a file called qalipsis-events.log.

The qalipsis-events.log file contains the details of the individual failed assertions.

For example, the details of a failed assertion may be displayed as:

2022-03-31T06:33:55.937Z WARN --- step.assertion.failure;io.qalipsis.plugins.netty.tcp.ConnectionAndRequestResult@1e869276

The pertinent parts of the above message are:

  • The exact instant when the failed assertion occurred: 2022-03-31T06:33:55.937Z.

  • The reason for the failure: PT0.10696475S should be <= PT0.1S.

  • The campaign (quickstart-campaign-4d39026fd7), scenario (quickstart-http), and verification step (verify-patch) where the failed assertion occurred.

Scenario specifications

Overview

A scenario specification comprises all of the steps required to configure the scenario and to execute it at run-time.

In order to provide maximum flexibility for configuring a scenario and the best performance while executing it, QALIPSIS splits its configuration from its execution.

  • In the configuration phase, you develop specifications for the scenario.

  • In the execution phase, you set the runtime parameters for the scenario(s) you are executing.

This section focuses on scenario configuration. Details on scenario execution are provided in QALIPSIS configuration.

Configuring a scenario specification

A scenario specification is made up of several parts.

  1. A declaration of the function that defines the scenario:

    @Scenario("hello-world")
    fun myScenario()
  2. Assigning the kebab-cased scenario name (hello-world):

    @Scenario("hello-world")
    fun myScenario(){
        val myScenario = scenario { }
    }
  3. Configuring the scenario.

    1. Assume the default configuration parameters:

      @Scenario("hello-world")
      fun myScenario() {
          val myScenario = scenario { // Scenario default configuration parameter values
          }
      }

      The default parameters include:

      • minionsCount = 1

      • retryPolicy = no retry

  4. Include defined parameters within the configuration, as in the following example:

    @Scenario("hello-world")
    fun myScenario(){
        val myScenario = scenario {
            minionsCount = 1000
            retryPolicy(instanceOfRetryPolicy)
            profile {
              regular(1000, 50)
            }
        }
    }
  5. Identify the root of the (load-injected) tree receiving all the minions.

    @Scenario("hello-world")
    fun myScenario(){
        val myScenario = scenario {
            minionsCount = 1000
            retryPolicy(instanceOfRetryPolicy)
            profile {
              regular(1000, 50)
            }
        }
        .start() // Root of the load injected tree.
    }
  6. Add step specifications to the tree:

    @Scenario("hello-world")
    fun myScenario(){
        val myScenario = scenario {
            minionsCount = 1000
            retryPolicy(instanceOfRetryPolicy)
            profile {
                regular(1000, 50)
            }
        }
        .start() // Add step specifications here.
    }

Default scenario configuration

Every scenario has a default configuration.

Number of minions

  • Optional

  • Default = 1

Whether you want to simulate users, IoT devices or third-party systems, each simulation is identified with a unique minion:

  • The more minions, the more load.

  • Minions execute concurrently and through all trees.

  • You can override the default minionsCount (1000 in the following example):

@Scenario("hello-world")
fun myScenario() {
    val myScenario = scenario {
        minionsCount = 1000
    }
}

Load-injected trees receive all of the minions; non-load injected trees receive only one minion.

Execution profile

Once you have specified the number of minions in your scenario, you need to specify the pace that the minions are injected into the system. QALIPSIS out-of-the-box execution profiles include:

  • Accelerated

    • More

    • Faster

  • Regular

  • Timeframe

  • Stages

  • User Defined

You can use one of the out-of-the-box profiles or create your own.

In the following sections we will explain how to use some of the out-of-the-box execution profiles.

  • Accelerated execution profiles

    An accelerated execution profile injects either a set number of minions at an accelerated pace or an accelerated number of minions at a set pace.

    1. Accelerated pace using faster.

      The following example injects 10 minions to start, then 10 new minions after 2000 ms, 10 new minions after another 1333 (2000 / 1.5) ms, 10 new minions after 888 (2000 / 1.5 / 1.5) ms, continuing until the interval reaches 200 ms.

      After the interval reaches 200ms, 10 minions are injected every 200 ms until all minions are active.

      @Scenario("hello-world")
      fun myScenario() {
          val myScenario = scenario {
              profile {
                  faster(2000, 1.5, 200, 10)
              }
          }
      }

      The speed-factor at runtime will be applied to the multiplication factor: a speed factor of 2 will reduce the interval time in half.

    2. Accelerated number of minions using more.

      The following example injects minions every 200 ms: first 10, then 20 new minions after 200 ms, then 40 new minions after another 200 ms, doubling at each launch up to the limit of 1000 minions per launch, until all minions are active.

      @Scenario("hello-world")
      fun myScenario() {
          val myScenario = scenario {
              profile {
                  more(periodMs = 200, minionsCountProLaunchAtStart = 10, multiplier = 2.0, maxMinionsCountProLaunch = 1000)
              }
          }
      }

      The speed-factor at runtime will be applied to the number of minions injected: a speed factor of 2 will double the number of minions at each injection.
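The pacing arithmetic of both accelerated profiles can be sketched in plain Kotlin (illustrative only; fasterIntervals and moreCounts are hypothetical names, not the QALIPSIS API):

```kotlin
// faster: each interval is the previous one divided by the acceleration
// factor, floored at a minimum period.
fun fasterIntervals(startPeriodMs: Double, acceleration: Double, minPeriodMs: Double, launches: Int): List<Long> {
    var period = startPeriodMs
    return (1..launches).map {
        val current = maxOf(period, minPeriodMs)
        period /= acceleration
        current.toLong()
    }
}

// more: the number of minions per launch is multiplied at each launch,
// capped at a maximum.
fun moreCounts(startCount: Int, multiplier: Double, maxCount: Int, launches: Int): List<Int> {
    var count = startCount.toDouble()
    return (1..launches).map {
        val current = minOf(count, maxCount.toDouble()).toInt()
        count *= multiplier
        current
    }
}

// fasterIntervals(2000.0, 1.5, 200.0, 8) -> [2000, 1333, 888, 592, 395, 263, 200, 200]
// moreCounts(10, 2.0, 1000, 8) -> [10, 20, 40, 80, 160, 320, 640, 1000]
```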

  • regular execution profile

    A regular execution profile injects a fixed number of minions at a fixed pace.

    The following example sets the execution of 200 minions every 10 ms until the total number of minions is started.

    @Scenario("hello-world")
    fun myScenario() {
        val myScenario = scenario {
            profile {
                regular(periodMs = 10, minionsCountProLaunch = 200)
            }
        }
    }

    The speed-factor at runtime will be applied to the interval: a speed factor of 2 will divide the interval by 2.
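The ramp-up arithmetic can be sketched as follows (illustrative only; regularRampUpMs is a hypothetical name, not the QALIPSIS API). With the first launch at t = 0, 10,000 minions at 200 per launch every 10 ms are fully started after 490 ms:

```kotlin
// Illustrative arithmetic: total ramp-up time of a regular profile,
// with the first launch happening at t = 0.
fun regularRampUpMs(totalMinions: Int, periodMs: Long, minionsPerLaunch: Int): Long {
    val launches = (totalMinions + minionsPerLaunch - 1) / minionsPerLaunch  // ceiling division
    return (launches - 1) * periodMs
}

// regularRampUpMs(10_000, 10, 200) -> 490  (50 launches, 10 ms apart)
```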

  • timeframe execution profile

    A time frame execution profile injects an equal number of minions at an equal interval within a set timeframe.

    The following example starts an equal number of minions every 20 ms, so that they all are active after 40000 ms.

    @Scenario("hello-world")
    fun myScenario() {
        val myScenario = scenario {
            profile {
                timeframe(periodInMs = 20, timeFrameInMs = 40000)
            }
        }
    }

    The speed-factor at runtime will be applied to the interval: a speed factor of 2 will divide the interval by 2.
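The derivation of minions per launch can be sketched as follows (illustrative only; timeframeMinionsPerLaunch is a hypothetical name, not the QALIPSIS API): with 10,000 minions, a 20 ms period and a 40,000 ms time frame, each launch starts 5 minions.

```kotlin
// Illustrative arithmetic: the timeframe profile derives the minions per
// launch from the period and the overall time frame.
fun timeframeMinionsPerLaunch(totalMinions: Int, periodMs: Long, timeFrameMs: Long): Int {
    val launches = (timeFrameMs / periodMs).toInt()  // 40000 / 20 = 2000 launches
    return totalMinions / launches                   // 10000 / 2000 = 5 minions each
}
```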

  • stages execution profile

A stages execution profile injects more and more minions into the test step by step, keeping them all active until the end of the test.

  • Each stage defines:

    • The number of minions to add.

    • How fast the minions should be added (resolution).

    • The total duration of the stage (how long to wait until the next stage starts, or until the test ends if the stage is the last one).

  • Contrary to the other execution profiles, stages ignores the total minionsCount defined in the scenario and only relies on the sum of all the minions started in the stages.

Note that resolution is an optional parameter with a default value of 500 ms.

The following example starts a stages execution profile that includes two stages.

  • The first stage:

    • Injects 234 minions into the test within 12 seconds, 500 ms apart.

      10 minions are injected in each of the first 23 launches (500 ms apart); the remaining 4 minions are injected in the final launch.

    • Lasts a total of 30 seconds.

  • The second stage:

    • Starts 30 seconds after the first stage starts.

    • Injects 15 minions within 500 ms after the stage starts.

    • Lasts a total of 1500 ms.

stages {
    stage(minionsCount = 234, rampUpDuration = Duration.ofSeconds(12), totalDuration = Duration.ofSeconds(30), resolution = Duration.ofMillis(500))
    stage(minionsCount = 15, rampUpDurationMs = 500, totalDurationMs = 1500)
}
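The per-launch arithmetic of a stage can be sketched as follows (assumed semantics for illustration; stageLaunches is not the QALIPSIS API):

```kotlin
// Illustrative sketch: how many minions each launch of a stage starts
// during the ramp-up, assuming per-launch counts are rounded up and the
// final launch starts whatever remains.
fun stageLaunches(minionsCount: Int, rampUpMs: Long, resolutionMs: Long): List<Int> {
    val launches = (rampUpMs / resolutionMs).toInt()
    val perLaunch = (minionsCount + launches - 1) / launches  // ceiling division
    var remaining = minionsCount
    val result = mutableListOf<Int>()
    repeat(launches) {
        if (remaining > 0) {
            val count = minOf(perLaunch, remaining)
            result.add(count)
            remaining -= count
        }
    }
    return result
}

// stageLaunches(234, 12_000, 500) -> twenty-three launches of 10 minions, then one launch of 4
```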
  • User defined execution profiles

    A user-defined execution profile is totally open to the developer’s creativity or needs. It consists of returning a MinionsStartingLine considering the previous period in milliseconds, the total number of minions to start, and the speed factor to apply.

    The following example releases 1/100th of all the minions after 1000 ms, then applies different rules depending on the past interval:

    • If the previous interval was longer than 2 seconds, 1/10 of all the minions are released after 100 ms;

    • Otherwise, a random interval is calculated applying the speed factor to reduce it, in order to start 1/10 of all the minions.

      @Scenario("hello-world")
      fun myScenario() {
          val myScenario = scenario {
              profile {
                  define { pastPeriodMs, totalMinions, speedFactor ->
                      when {
                          pastPeriodMs == 0L -> MinionsStartingLine(totalMinions / 100, 1000)
                          pastPeriodMs > 2000 -> MinionsStartingLine(totalMinions / 10, 100)
                          else -> MinionsStartingLine(totalMinions / 10,
                              ((1000 + pastPeriodMs * Math.random()) / speedFactor).toLong().coerceAtLeast(200))
                      }
                  }
              }
          }
      }

      Whatever definition you choose:

      • The maximum number of minions launched cannot be more than the totalMinions.

      • Generating a negative interval results in an immediate start.

      • When calling the first iteration of a user-defined strategy, the initial value of pastPeriodMs is 0.

Retry policy

  • Optional

  • Default = NoRetryPolicy

Execution of steps can fail for a variety of reasons, including, but not limited to, an unreachable remote host or invalid data.

When a step execution fails, it may be retried if a retry policy has been set within the scenario configuration or the step configuration.

Once all attempts are executed per the defined retryPolicy, the step execution context is marked as exhausted and only error processing steps are executed. Refer to the Step specifications section for more information on error processing steps.

There are two implementations of retries in QALIPSIS:

  • NoRetryPolicy: This is the default and there is no further attempt after failure.

  • BackoffRetryPolicy: Allows for limit-based retries when a step fails.

Parameters
  • retries: maximum number of attempts on a failure; positive number; default is 3.

  • delay: initial delay from a failure to the next attempt; type = Duration; default is 1 second.

  • multiplier: factor to multiply delay between each attempt; type = Double; default is 1.0.

  • maxDelay: maximal duration allowed between two attempts; type = Duration; default is 1 hour.

Example

In the example below, there are 10 retries after a step failure.

  • The 1st retry occurs 1 ms after the failure.

  • The 2nd retry occurs 2 ms after the 1st retry.

  • The 3rd retry occurs 4 ms after the 2nd retry.

  • The 4th retry occurs 8 ms after the 3rd retry.

  • The 5th retry occurs 10 ms after the 4th retry.

  • The 6th through 10th retries occur 10 ms after the previous retry.

    BackoffRetryPolicy(
        retries = 10,  (1)
        maxDelay = Duration.ofMillis(10),  (2)
        delay = Duration.ofMillis(1),  (3)
        multiplier = 2.0  (4)
    )
    1 Sets the number of retries after a step failure to 10.
    2 Sets the max delay to 10 ms.
    3 Sets the first delay at 1 ms.
    4 Sets the multiplier to 2.0.

After the defined number of retries are executed, the step execution context is marked as exhausted and only error processing steps are executed. Refer to Step specifications for more information on error processing steps.
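The delay sequence of the example above can be sketched in plain Kotlin (illustrative arithmetic only; backoffDelaysMs is a hypothetical name, not the QALIPSIS implementation):

```kotlin
// Illustrative sketch of the BackoffRetryPolicy delay sequence: each delay
// is multiplied by the multiplier, then capped at maxDelay.
fun backoffDelaysMs(retries: Int, delayMs: Long, multiplier: Double, maxDelayMs: Long): List<Long> {
    var delay = delayMs.toDouble()
    return (1..retries).map {
        val current = minOf(delay, maxDelayMs.toDouble()).toLong()
        delay *= multiplier
        current
    }
}

// backoffDelaysMs(10, 1, 2.0, 10) -> [1, 2, 4, 8, 10, 10, 10, 10, 10, 10]
```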

Load-injected tree

A scenario is a combination of directed acyclic graphs, joined by steps.

There might be several graphs starting concurrently at the scenario root, but only one graph (the specified load-injected branch) is visited by all the minions injected into the scenario.

Other graphs are considered non-injected "side-trees" and perform supporting tasks, such as data verification when the branches rejoin.

myScenario.start().doSomething() identifies which graph is the root receiving the load of all the minions at runtime.

myScenario.doSomething() identifies a second graph visited by a unique minion and generally consisting of iterative operations such as polling of data sources.

@Scenario("hello-world")
fun myScenario() {
    val myScenario = scenario {
        minionsCount = 1000
        profile {
            regular(1000, 50)
        }
    }
    myScenario.start() (1)
        .returns<String> { "My input" }
    myScenario (2)
        .returns<String>{ "My other input" }
}
1 Root branch receiving all minions.
2 Second branch visited by a unique minion.

Note that only one use of .start() is permitted, with only one step directly after it.

To create more than one load-injected branch, you can use the .split command.

.split provides the output of a unique step to several steps by initiating the divergence of two or more load-injected branches, all running concurrently and receiving all the minions.

In the following example, two branches are diverging. The purpose of the divergence is to log and save a collection of temperatures from an IoT source.

  • The first branch uses .flatten() to log the temperature values (in °C).

  • The second branch uses r2dbc().save to save the collection of values.

.flatMap then converts the logged and saved temperature values from °C to °F for use throughout the rest of the campaign.

@Scenario("hello-world")
fun myScenario() {
    scenario {
        // ...
    }
    .start()
        .split {
            flatten()   (1)
            .onEach { temperature ->
                log.info("The temperature is $temperature degrees celsius")
            }

            r2dbc().save { temperatures ->  (2)
            }
        }
        .flatMap { temperatureCelsius ->   (3)
            (temperatureCelsius * 9 / 5) + 32
        }
}
1 Log the values
2 Save the values.
3 Convert the received values from °C to °F and distribute them one by one.
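The flatten and flatMap step semantics mirror the standard Kotlin collection operators; as a plain-collection analogy of the conversion above (the values are illustrative):

```kotlin
// Plain Kotlin analogy for the step semantics: flatten distributes the items
// of a collection one by one, and the mapping converts each value from °C to °F.
val temperatureBatches = listOf(listOf(21, 23), listOf(19))  // illustrative IoT readings

val celsius = temperatureBatches.flatten()           // [21, 23, 19]
val fahrenheit = celsius.map { it * 9 / 5 + 32 }     // [69, 73, 66]
```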

Singleton step

A singleton step is designed to be executed only once by one minion. Its purpose is to gather data to provide to the other minions as they pass through it. The minions can use the data to feed other steps or verify expected results in the tested system.

Examples of a singleton step include:

  • A database poller.

  • A message consumer.

While the singleton step is executed only once, it provides data to all the minions passing through it.

A singleton step does not differ from other steps. It is executed using one minion, which looks like any other minion. Even if the singleton step is created on a load-injected branch, it remains aside and is proxied.

The proxy is executed by all the minions going through the branch, while the singleton step keeps its own single minion, whose only purpose is to trigger the step’s operation: polling, reading a file, starting a message consumer, etc.

The singleton step and its proxy are connected by a topic, which actually defines the way the polled/read/consumed records are provided to the minions.

Data distribution from the singleton step to the load minions

For convenience, QALIPSIS provides three different modes of distributing records out of a singleton step:

  • Loop

  • Unicast/Forward Once

  • Broadcast

Note: You can develop your own singleton step by creating a plugin for it. Details for creating your own plugins will be provided in future updates to this documentation.

QALIPSIS provides all required statements out-of-the-box.

Loop

When the number of records emitted by the singleton step is finite (such as a file), you might want to loop over the whole set of data to be sure there will always be data for the minions.

All minions receive all the records. When there is no more data for a minion, it receives the same data again from the beginning and loops over the whole set of records as long as it needs new ones.

Unicast/forward once

Each minion receives the next unused record emitted from the singleton step. Once there are no more records to provide, all the minions remain blocked in the step.

Broadcast

All the minions receive all the records from the beginning. Once there are no more records to provide, all of the minions, having already received all available records, remain blocked in the step.

If the size of the buffer is limited, minions landing in the step for the first time will not receive everything from the beginning, but only from the size of the buffer.
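As a plain-Kotlin sketch of the three modes (illustrative only; loopRecords, nextUnicastRecord and broadcastRecords are hypothetical names, not the QALIPSIS API):

```kotlin
// Records emitted once by a singleton step (e.g. lines of a file).
val records = listOf("r1", "r2", "r3")

// Loop: a minion needing more records than exist cycles back to the beginning.
fun loopRecords(needed: Int): List<String> =
    List(needed) { records[it % records.size] }

// Unicast/forward once: each record is consumed by exactly one minion;
// an empty queue means the minion waits (modeled here as null).
val unicastQueue = ArrayDeque(records)
fun nextUnicastRecord(): String? = unicastQueue.removeFirstOrNull()

// Broadcast: every minion reads the full sequence from the beginning.
fun broadcastRecords(): List<String> = records
```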

Step specifications

Defining and configuring a step specification

A step specification is defined, depending on the step nature and function, as either:

  • The root of a scenario (the first step after .start()).

    or

  • The successor of a previous step.

In the example below, my-tcp is the root step; reuse-tcp is a successor step.

scenario {
    minionsCount = 150
    profile {
        regular(150, minionsCount) (1)
    }
    .start()
    .netty()
    .tcp { (2)
        name = "my-tcp"
        connect {
            address("localhost", 3000)
            noDelay = true
        }
        request { tcpRequest1 }
        metrics { connectTime = true }
    }
    .reuseTcp("my-tcp") { (3)
        name = "reuse-tcp"
        iterate(20, Duration.ofMillis(100))
        request { tcpRequest2 }
    }
}
1 All 150 minions are deployed simultaneously.
2 The .tcp step ("my-tcp") is the root of the scenario.
3 The .reuseTcp step ("reuse-tcp") is a successor of the previous step ("my-tcp") and receives the previous step’s output as its input.

A step specification is configured such that:

  • The step receives the configuration closure {} as a parameter (as in .tcp and .reuseTcp in the example above).

    or

  • .configure is called after the step as in the example below:

    .reuseTcp("my-tcp") {
        request { tcpRequest2 }
    }.configure {
        name = "reuse-tcp"
        iterate(20, Duration.ofMillis(100))
        retryPolicy(instanceOfRetryPolicy)
        timeout(2000)
    }

Step context

A step’s context is the mechanism by which:

  1. Data received from the previous step is transported as input to the next steps.

  2. The next steps transform the data and provide it as output.

  3. The output is forwarded to the next steps as input.

Additionally, a step’s context determines how prior errors are transported through the step.

Common load distribution steps

Fan-out

To split a branch and add two or more concurrent steps as successors, execute a .split.

In the example below, anyStep receives the output of the previous step as input. anyStep transforms its input per its step configuration. The transformed data is provided as input to both the map and the flatten steps.

If the entity returned by the step providing the data before the .split (anyStep() in the example) is mutable, do not modify it in one branch, as this would cause side effects in the other branch. To avoid this issue, consider creating a deep clone of the record that is to be changed. In the example, the map and the flatten branches each process the data differently.
.anyStep { }
    .split {
        .map { ... }
            .validate{ ... }
    }
        .flatten()
            .filterNotNull()

Fan-in/joins

Fan-in/Join functions allow the user to re-join split branches of a scenario or to verify data from different sources.

You can join the branches in one of two ways:

  • Use the name of a concurrent step.

  • Create a step directly in the join.

The following example uses the name of a concurrent step:

.anyStep { }
    .innerJoin<EntityWithKey, EntityWithId>(
        on = { "${it.key}" },
        with = "my-datasource-poll-step", (1)
        having = { it.id } (2)
    )
    .map { } (3)
1 The step my-datasource-poll-step is declared in a parallel branch (or earlier in the same branch) and provides an output of type EntityWithId with a field id being a string in this example.
2 Closure to extract the key from the right side, which should match the left side value.
3 The result is a Pair<EntityWithKey, EntityWithId> where the string representation of the key of the EntityWithKey equals the id field of the EntityWithId.

Records coming from the right side are kept in a cache until one comes from the left side with the same key. Once used, they are evicted from the cache.

When a record comes from the left side (from anyStep) but no corresponding record has yet come from the right side (my-datasource-poll-step), the minion waits until the timeout defined on the step. Refer to Defining and configuring a step specification for more details.

The step context after the join inherits from the one coming from the left.
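The caching and eviction behavior described above can be sketched in plain Kotlin. This is an illustrative model of the semantics only, not the QALIPSIS implementation:

```kotlin
// Sketch of the inner-join cache semantics: right-side records are cached by
// key until a left-side record with the same key arrives, then evicted.
class JoinCache<K, R> {
    private val rightByKey = mutableMapOf<K, R>()

    fun onRight(key: K, record: R) {
        rightByKey[key] = record // cache until the left side catches up
    }

    // Returns the joined pair when the key is cached, or null when the left
    // side would have to wait (until the step timeout in QALIPSIS).
    fun <L> onLeft(key: K, record: L): Pair<L, R>? =
        rightByKey.remove(key)?.let { record to it } // remove = eviction after use
}

fun main() {
    val cache = JoinCache<String, String>()
    cache.onRight("42", "entity-with-id-42")
    println(cache.onLeft("42", "entity-with-key-42")) // the joined pair
    println(cache.onLeft("42", "entity-with-key-42")) // null: already evicted
}
```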

The following example creates a step directly in the join:

    anyStep  { }
        .innerJoin<EntityWithKey, EntityWithId>(
            on = { "${it.key}" },
            with = {
                r2dbc().poll{

                }
            },
            having = { it.id }
        )
        .map {}

This method is similar to naming a concurrent step, except that a step is created in the join. The receiver of the with closure is the scenario.

Error processing steps

When an error occurs while executing a step, after all potential retries, the step context carried from top to bottom is marked as "exhausted".

From that point of the process, only the steps relevant for error processing are executed; the others are simply bypassed.

The steps relevant for error processing include:

  • catchErrors: takes the collection of errors as a parameter to process them. In the example, the number of previous errors to process is 10.

    .catchErrors(10)
  • catchExhaustedContext detects when a step context has been exhausted. It takes the full exhausted step context as a parameter. Available parameters for .catchExhaustedContext include:

    • minionId = ""

    • scenarioName = ""

    • stepName = ""

    • isExhausted = ""; available values are true (do not execute next step) or false (execute next step).

      In the example, when a step context is exhausted, an error is returned indicating that the minion ID of the exhausted step is 250.

      .catchExhaustedContext(minionID = 250)

Operators

QALIPSIS provides default operators to tweak the execution of your scenario, that do not require the addition of plugins to your classpath.

Out-of-the-box QALIPSIS operators perform a variety of functions, including data transformation, data filtering, data verification, altering the behavior of the data, etc. Together with user-defined operators, the possibilities are virtually unlimited.

Some of the commonly used operators are defined in this section.

Timing operators

This section provides an explanation and examples of the delay, pace, acceleratingPace, and constantPace timing operators.

  • delay adds a constant delay before executing the next step, keeping the same execution pace as the previous step. In the example, the delay between steps is set to 123 ms.

    .delay(Duration.ofMillis(123))
  • pace ensures that the next step is executed at an expected pace, whatever the pace of the input is. The calculation applies independently to each minion. In the following examples, the pace is set to 10 ms.

    .pace { pastPeriodMs -> pastPeriodMs + 10 }

    or

    .pace(Duration.ofMillis(10))
  • acceleratingPace reduces the waiting interval at each iteration; to slow down the pace instead, use an accelerator lower than 1. In the example, the start pace is set to 20 ms, the accelerator to 2.0, and the minimum pace to 4 ms.

    .acceleratingPace(20, 2.0, 4)
  • constantPace applies a constant pace to execute the successor steps. The constant pace is set to 20 ms in the example.

    .constantPace(20)
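The acceleratingPace arithmetic can be sketched in plain Kotlin. This assumes the interval is divided by the accelerator at each iteration and clamped to the minimum; the exact QALIPSIS formula may differ, so treat this purely as an illustration of the idea:

```kotlin
// Assumed acceleratingPace arithmetic: each interval is the previous one
// divided by the accelerator, never going below the floor.
fun acceleratingIntervals(startMs: Long, accelerator: Double, minMs: Long, count: Int): List<Long> {
    val intervals = mutableListOf<Long>()
    var current = startMs.toDouble()
    repeat(count) {
        intervals += current.toLong().coerceAtLeast(minMs) // clamp to the minimum pace
        current /= accelerator
    }
    return intervals
}

fun main() {
    // Start at 20 ms, accelerator 2.0, floor at 4 ms.
    println(acceleratingIntervals(20, 2.0, 4, 5)) // [20, 10, 5, 4, 4]
}
```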

Filter operators

This section provides an explanation and examples of the validate, filter, and verify filter operators.

  • validate validates the input. If errors are returned, the step context is marked as exhausted and can then only be processed by error processing steps. In the example, inputs that are not even integers return an error and the step context is marked as exhausted.

    .validate{ if(it % 2 != 0) listOf(StepError("The input should be even")) else emptyList() }
  • filter removes the step contexts whose input does not match the filter specification. In the example, step contexts whose input is not an even integer are filtered out.

    .filter { it % 2 == 0 }

    filter is a silent equivalent of validate; it does not generate an error and does not mark the step context as exhausted.

  • verify validates assertions rather than the input data quality/safety that is validated by validate. In the example, .verify confirms that the batteryLevel property of the foundedBatteryState is exactly the same as the batteryLevel property of the savedBatteryState. If the two values are not the same, an error is returned and the step context is marked as exhausted.

    .verify { result ->
        val savedBatteryState = result.first
        val foundedBatteryState = result.second
        foundedBatteryState.batteryLevel shouldBeExactly savedBatteryState.batteryLevel
    }
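The contrast between validate and filter can be sketched in plain Kotlin. The types below (Ctx, and a local StepError) are simplified stand-ins for the QALIPSIS types, used only to illustrate the semantics:

```kotlin
// validate marks the context as exhausted when errors are returned;
// filter silently drops non-matching contexts without raising errors.
data class StepError(val message: String)
data class Ctx(val input: Int, var isExhausted: Boolean = false)

fun validate(contexts: List<Ctx>, check: (Int) -> List<StepError>): List<Ctx> =
    contexts.onEach { if (check(it.input).isNotEmpty()) it.isExhausted = true }

fun filter(contexts: List<Ctx>, predicate: (Int) -> Boolean): List<Ctx> =
    contexts.filter { predicate(it.input) }

fun main() {
    val contexts = listOf(Ctx(1), Ctx(2), Ctx(3))
    val validated = validate(contexts) {
        if (it % 2 != 0) listOf(StepError("The input should be even")) else emptyList()
    }
    println(validated.map { it.isExhausted }) // [true, false, true]: odd inputs exhausted
    println(filter(contexts) { it % 2 == 0 }.map { it.input }) // [2]: odd inputs dropped
}
```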

Transformation operators

This section provides an explanation and examples of the map, mapWithContext, flatMap, flatten, and onEach transformation operators.

  • map converts each input element into another element in the output. In the example, .map multiplies each of its input values (it) by 3.

    .map { it * 3 }
  • mapWithContext provides the same capability as map but with the step context as an additional argument. In the example, .mapWithContext concatenates the minion ID from the step context with the input value.

    .mapWithContext{ context, input -> context.minionId + "-" + input }
  • flatMap converts each input element into several elements in the output, using a user-defined strategy. In the example, flatMap converts the collection in the members property of the input into a flow. Each record of members is individually provided to the next steps.

    .flatMap { input -> flowOf(*input.members.toTypedArray()) }
  • flatten converts a collective input of a known collective type (array, iterable, etc.) into individual elements in the output, keeping each item unchanged.

    In the example, .flatten emits each element of the arrays produced by the second .map individually. For the inputs 1, 2 and 3, the resulting sequence is 3, 103, 6, 106, 9, 109.

    .map { it * 3 }
    .map { arrayOf(it, it + 100) }
    .flatten()
  • onEach executes one or several statements on each element of the input and forwards them unchanged to the output. The example prints the input of the .onEach step to the console.

    .onEach { input -> println(input) }
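The map/map/flatten chain above can be reproduced with plain Kotlin collections to make the resulting sequence explicit (this is standard Kotlin, not the QALIPSIS DSL):

```kotlin
// Plain-Kotlin equivalent of the transformation chain from the flatten example.
fun transform(input: List<Int>): List<Int> =
    input.map { it * 3 }                // 1, 2, 3 -> 3, 6, 9
        .map { listOf(it, it + 100) }   // -> [3, 103], [6, 106], [9, 109]
        .flatten()                      // -> 3, 103, 6, 106, 9, 109

fun main() {
    println(transform(listOf(1, 2, 3))) // [3, 103, 6, 106, 9, 109]
}
```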

Caching operators

This section provides an explanation and examples of the shelve and unshelve caching operators.

  • shelve caches the provided values in a shared cache. In the example, .shelve stores, for each input, the value input + 1 under the key "value-1".

    .shelve { input -> mapOf("value-1" to input + 1) }

    When using a distributed architecture, make sure the cached value can be serialized by the used implementation of shared state registry. The ID of the minion is automatically added to the key to avoid collision.

  • unshelve extracts a specific value from the cache using the keys passed as arguments. In the example, unshelve<Int, Double>("value-1") takes two type arguments, <Int, Double>, which specify the type of the input of the step and the type of the value retrieved from the cache with the key value-1.

    It then extracts the value associated with the key "value-1", which is of type Double.

    .unshelve<Int, Double>("value-1")
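The note above about minion IDs being added to cache keys can be sketched in plain Kotlin. This SharedStateRegistry is a hypothetical stand-in, only illustrating how prefixing the key with the minion ID avoids collisions between minions:

```kotlin
// Sketch of the assumed shelve/unshelve key handling: the minion ID is
// combined with the user key so concurrent minions never collide.
class SharedStateRegistry {
    private val cache = mutableMapOf<String, Any>()

    fun shelve(minionId: String, key: String, value: Any) {
        cache["$minionId-$key"] = value // minion ID avoids collisions
    }

    @Suppress("UNCHECKED_CAST")
    fun <T> unshelve(minionId: String, key: String): T? = cache["$minionId-$key"] as T?
}

fun main() {
    val registry = SharedStateRegistry()
    registry.shelve("minion-1", "value-1", 1.5)
    registry.shelve("minion-2", "value-1", 9.0) // same key, different minion
    println(registry.unshelve<Double>("minion-1", "value-1")) // 1.5
}
```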

Flow operators

This section provides an explanation and examples of the stage flow operator.

  • stage groups a linear graph of steps.

    Stages are not mandatory in a scenario, but provide a convenient way to iterate over a group of steps. In the example, after flatten() completes, the stage repeats 5 times, starting again at the first map.

    .stage("my-stage") {
        .map { it * 3 }
        .map { arrayOf(it, it + 100) }
        .flatten()
    }.configure {
        iterate(5)
    }

Combining operators

The most often used combining operator is collect.

collect collects inputs across all the minions and releases them as a list within the latest received step context. Its function is the opposite of flatten.

In the example, .collect gathers the elements with a timeout of 1 second or until there are 10 items in the collection. After one second, or once 10 items have been collected, the collection is provided to the next steps.

.collect(Duration.ofSeconds(1), 10)
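The size-based release rule can be sketched in plain Kotlin. This illustrative Collector only models the size limit; a real implementation would also release the pending batch when the timeout elapses:

```kotlin
// Sketch of the collect release rule: a batch is released as soon as it
// reaches the size limit, and a new batch is started.
class Collector<T>(private val batchSize: Int) {
    private val pending = mutableListOf<T>()

    // Returns the released batch when full, or null while still collecting.
    fun offer(value: T): List<T>? {
        pending += value
        return if (pending.size == batchSize) {
            val batch = pending.toList()
            pending.clear()
            batch
        } else null
    }
}

fun main() {
    val collector = Collector<Int>(batchSize = 3)
    println(collector.offer(1)) // null
    println(collector.offer(2)) // null
    println(collector.offer(3)) // [1, 2, 3]
}
```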

QALIPSIS configuration

Starting QALIPSIS in different modes

Two modes of QALIPSIS configuration are currently supported:

  • Interactive

  • Scenario Selection

Interactive

  1. Execute QALIPSIS with the option --prompt or -p, as in the following example:

    java -jar my-scenario-qalipsis.jar --prompt
  2. When prompted, enter the load factor (# of minions), speed factor, execution profile, etc.

Scenario selection

Execute QALIPSIS with the --scenarios or -s option, as in the following example:

java -jar my-scenario-qalipsis.jar --scenarios "my-first-scenario, ?y-*-s*n?rio"

The value is a comma-separated list of exact scenario names, scenario name patterns, or both. A pattern may contain wildcards (* or ?), as in the example above.
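The wildcard semantics map directly onto regular expressions: '*' matches any sequence of characters and '?' exactly one. The sketch below illustrates this interpretation (QALIPSIS' own matching may differ in details):

```kotlin
// Translate a scenario-selection wildcard pattern into a regular expression.
fun wildcardToRegex(pattern: String): Regex {
    val sb = StringBuilder()
    for (c in pattern) {
        when (c) {
            '*' -> sb.append(".*") // any sequence of characters
            '?' -> sb.append(".")  // exactly one character
            else -> sb.append(Regex.escape(c.toString()))
        }
    }
    return Regex(sb.toString())
}

fun main() {
    val selection = "my-first-scenario, ?y-*-s*n?rio".split(",").map { it.trim() }
    val patterns = selection.map(::wildcardToRegex)
    // Both entries select "my-first-scenario": the first by exact value,
    // the second through its wildcards.
    println(patterns.count { it.matches("my-first-scenario") }) // 2
}
```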

Runtime configuration

Command-line arguments

This section provides an overview and an example of the command-line arguments that are available at runtime.

Note that all command-line arguments can be provided in a file, as explained in External property sources.

  1. Autostart mode. The --autostart argument automatically executes all the scenarios present in the class path, when QALIPSIS starts.

  2. Number of Minions by Scenario.

    This argument defines how many minions should be used in each scenario. It cannot be combined with campaign.minions-factor.

    Following is an example of the command to use to set the number of minions:

    campaign.minions-count-per-scenario (#ofMinions)

    where #ofMinions is an integer and the default value is the minionCount set during configuration of the scenario.

  3. Load Factor.

    The load factor argument multiplies the number of minions set in the scenario specification by the provided value. It cannot be combined with campaign.minions-count-per-scenario.

    • If the value is set between 0 and 1, the minionCount is reduced.

    • If the value is set to greater than 1 (2 to double, 3 to triple, 4 to quadruple, etc.), the minionCount is increased.

    • The default value is always set to 1.

      Following is an example of the command to set the load factor:

      campaign.minions-factor, double, defaults to 1

  4. Speed Factor. The speed factor argument is applied to the execution profile to accelerate or slow down the pace at which the minions are released.

    For example:

    campaign.speed-factor, double, defaults to 1

    The value of the speed factor is defined in the same way as the load factor.

External property sources

QALIPSIS uses Micronaut’s runtime underneath. Hence, you can use its mechanism of Externalized Configuration with Property Sources.

The configuration files contain all the settings that are relevant for QALIPSIS to run, including the parameters for logging and monitoring data exports, data sources for the head node, executor sizes, etc.

You can overwrite the default QALIPSIS configuration and include your own properties by creating a file named qalipsis.{extension} or config/qalipsis.{extension} in the classpath, in the working directory, or both. The {extension} can be any of the following:

  • YAML documents (use the extension yaml or yml)

  • JSON documents (use the extension json)

  • flat properties (use the extension properties)

YAML files (yaml or yml extension) take precedence over JSON files, and JSON files take precedence over properties files. When two files with the same name are located in the same directory, the file with the yaml extension takes precedence over the one with the yml extension.

You can specify the same parameters in different property sources; the precedence of the configuration sources then determines which values are prioritized.

The precedence of the configuration sources, from lowest to highest levels is outlined below:

  • Default configuration embedded in QALIPSIS artifacts

  • Class path files: the files in /config have precedence over those in the root

  • Working directory files: the files in /config have precedence over those in the root

  • OS environment variables

  • Java system properties (-DmyArg=value)

  • Command-line arguments (using -c "myArg=value")
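Layered resolution like this can be sketched in plain Kotlin: higher-precedence sources simply override earlier ones, key by key. The property names below are taken from this document; the resolution code itself is only an illustration:

```kotlin
// Resolve layered property sources: later (higher-precedence) sources
// override earlier ones, key by key.
fun resolve(vararg sourcesLowToHigh: Map<String, String>): Map<String, String> {
    val resolved = mutableMapOf<String, String>()
    for (source in sourcesLowToHigh) resolved.putAll(source) // later wins
    return resolved
}

fun main() {
    val defaults = mapOf("campaign.speed-factor" to "1", "events.root" to "TRACE")
    val classpathFile = mapOf("events.root" to "INFO")
    val commandLine = mapOf("campaign.speed-factor" to "2")
    val config = resolve(defaults, classpathFile, commandLine)
    println(config["campaign.speed-factor"]) // 2 (from the command line)
    println(config["events.root"])           // INFO (from the classpath file)
}
```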

Dependency injection

QALIPSIS relies on the Dependency injection framework from Micronaut and supports the injection of dependencies in the constructors of the scenario declaring classes and methods annotated with @Scenario.

Ensure you have the following dependency in the project where your beans or factories are compiled: kapt("io.qalipsis:qalipsis-api-processors:0.7.c-SNAPSHOT")

Injecting a singleton bean

You can declare a singleton bean using the annotation @jakarta.inject.Singleton on a type, or use a Micronaut factory.

Creation of beans

import jakarta.inject.Singleton
@Singleton // The annotation `@jakarta.inject.Singleton` can be added to a class to automatically create a bean and declare it as singleton.
class NoRetry : RetryPolicy {
    suspend fun <I, O> execute(context: StepContext<I, O>, executable: suspend (StepContext<I, O>) -> Unit){
        executable(context)
    }
}
@Factory
class ExecutorFactory {
    @Singleton // The annotation `@jakarta.inject.Singleton` added to a method declares the returned bean as a singleton.
    fun simpleExecutor(retryPolicy: RetryPolicy): Executor { // The retry policy is automatically injected into the factory method.
        return SimpleExecutor(retryPolicy)
    }
}

Once beans are created, you can inject them in the constructor or the scenario method:

Injection of beans

class EcommercePurchase(private val retryPolicy: RetryPolicy) {// The previously created bean of type `RetryPolicy` is injected in the constructor of the scenario declaring class.

    @Scenario("login-select-and-checkout")
    fun fromLoginToCheckout(executor: Executor) {// The previously created bean of type `Executor` is injected in the scenario declaring method.
        scenario {
            retryPolicy(retryPolicy)
        }
        // ...
    }
}

Qualifying the bean to inject

When several beans exist for the same type, you might want to select the one to inject. QALIPSIS supports named injection as in the following example.

import jakarta.inject.Named

class EcommercePurchase(@param:Named("default") private val retryPolicy: RetryPolicy) { // When the constructor parameter is a property, specify the target of the annotation.

    @Scenario("login-select-and-checkout")
    fun fromLoginToCheckout(@Named("simple") executor: Executor) { // The annotation `@Named` selects the bean of type `Executor` qualified by the name "simple" for the injection.
        scenario {
            retryPolicy(retryPolicy)
        }
        // ...
    }
}

QALIPSIS only supports name qualifiers; however, you can read more about other qualifiers in the Micronaut documentation.

Injecting a property value

Properties from the runtime property sources can also be injected into scenario declaring classes, constructors, and methods.

import io.qalipsis.api.annotations.Property

class EcommercePurchase {

    @Scenario("login-select-and-checkout")
    fun fromLoginToCheckout(@Property(name = "ecommerce-purchase.action-pace", orElse = "10s") actionPaceDuration: Duration) { (1)
        scenario {
            // ...
        }
        .constantPace(actionPaceDuration.toMillis())
        // ...
    }
}

The default value set in orElse is optional.

More information about property sources is available in the Micronaut documentation.

Injecting iterable containers

QALIPSIS supports the injection of a single instance of a bean as shown above, as well as of an Iterable or any of its subtypes (Collection, List, Set and their derivatives), and of Optional for properties, as in the example below.

class EcommercePurchase(private val retryPolicies: List<RetryPolicy>) {

    @Scenario("e-commerce-purchase")
    fun fromLoginToCheckout(@Property("startDelay") executor: Optional<Duration>) {

        // ...
    }
}

Summary

The sample code below includes all the supported methods of injection for beans and properties.

The example is for a scenario method, but can also be used for the constructor of a class declaring scenarios.

    @Scenario
    fun fromLoginToCheckout(
        classToInject: ClassToInject, // An instance of `ClassToInject` is injected, failing if missing.
        mayBeOtherClassToInject: Optional<OtherClassToInject>,  // An instance of `Optional<OtherClassToInject>` is injected, replaced by an empty optional if missing.
        @Named("myInjectable") namedInterfaceToInject: InterfaceToInject, // An instance of `InterfaceToInject` qualified by the name `myInjectable` is injected, failing if missing.
        injectables: List<InterfaceToInject>, // A `List` of all the instances of classes inheriting from `InterfaceToInject` is injected
        @Named("myInjectable") namedInjectables: List<InterfaceToInject>, // A `List` of all the instances of classes inheriting from `InterfaceToInject` and qualified by the name `myInjectable` is injected
        @Property(name = "this-is-a-test") property: Duration, // The value of the property `this-is-a-test` is injected after its conversion to a `Duration`, failing if missing
        @Property(name = "this-is-another-test", orElse = "10") propertyWithDefaultValue: Int, // The value of the property `this-is-another-test` is injected after its conversion to an `Int`, replaced by 10 as an Int if missing
        @Property(name = "this-is-yet-another-test") mayBeProperty: Optional<String> // The value of the property `this-is-yet-another-test` is injected after its conversion to a `String`, replaced by an empty optional if missing
    ) {
    }

Monitoring Test Campaigns

Overview

QALIPSIS aims at giving insights into the behavior of the tested system from a performance perspective.

The insights are provided to users by way of dashboards and detailed values describing the performed operations.

Definitions

In QALIPSIS, each step generates different kinds of analytics data that supply different levels of detail about the behavior and performance of the system.

Meters

The highest level of detail is built on meters, using the Micrometer Application Monitoring library.

Meters represent the behavior or performance of a step or operation over a short period of time as defined in the meter’s configuration.

  • The shorter the time period that a meter represents, the more data will be saved into the time-series database.

  • The data can only be fetched and reported from the time-series database where they were saved. They are not available in a QALIPSIS scenario.

  • In QALIPSIS, the Micrometer meters (Counters, Timers, Gauges) are attached to a single campaign and should be created and removed according to the campaign lifecycle.

Meters can be tagged to specify the scenario, campaign, and test step that they are related to.

Events

The intermediate level of detail is built on events that are directly saved into a time-series database.

Events represent an operation to be executed at a given point in time.

  • Events can be compared to logs as managed by log4j and logback, and apply a similar concept for configuration.

  • Events are described by a severity, a log name, and values.

Events can be tagged to specify the scenario, campaign, and test step that they are related to.

Step execution meters

The finest level of detail is built on step execution meters.

Step execution meters are primitive values and Java durations.

  • Step execution meters are directly provided as an output from one step to another.

  • Step execution meters make it possible to verify the performance and the success of an operation directly in the scenario, in order to decide whether the next steps should be executed or not.

Recording events and meters in campaigns

To evaluate and analyze the performance and failures of your system during a campaign, you need to monitor events and meters at different levels and export the monitored data.

Enabling monitoring in steps

Scenarios within campaigns are made up of steps. By default, monitoring is disabled for each step.

Note that not all steps are suitable for monitoring. For example, there would be nothing to monitor in a delay step. However, load-generating and assertion steps generate data that is important to monitor.

QALIPSIS allows you to enable monitoring and configure the different levels of data that you want to record for each step.

Use the monitoring operator to enable and configure monitoring for each relevant step.

  • To generate events only (no meters) for the step:

    monitoring { events = true }
  • To generate meters only (no events) for the step:

    monitoring { meters = true }
  • To generate both events and meters for the step:

    monitoring{ all() }

    Be precise when specifying the information to monitor. Recording too much information may overshadow the information that is most important and relevant to your system’s behaviors and performance.

Exporting monitored data

QALIPSIS supports different technologies to record events and meters, providing the flexibility of using different analytic solutions that best suit your preferences and requirements. You can globally enable the export of events and meters to databases or messaging platforms.

  • To enable the export of events:

    events.export.enabled=true
  • To enable the export of meters:

    meters.export.enabled=true

Plugin steps must also be configured to export monitoring data; refer to QALIPSIS available plugins for the list of supported plugins.

It is important to balance the count of publishers (when configurable) with the amount of event data that steps generate. If the amount of data is too large for the publishers to keep up with, there may be unwanted side effects. For example, the events buffer size may increase enough to cause memory issues and QALIPSIS may need time to exit after the end of a campaign to empty the buffers.

Exporting specific event data

QALIPSIS events are very similar to, and can be configured like, logs. The major difference is that QALIPSIS events are filtered based on their name and a specified EventLevel. You can select, per event name, the minimum level of events that are to be kept.

EventLevel parameters
  • TRACE (default): all events are kept.

  • DEBUG: only events with EventLevel DEBUG, INFO, WARN, or ERROR are kept.

  • INFO: only events with EventLevel INFO, WARN, or ERROR are kept.

  • WARN: only events with EventLevel WARN or ERROR are kept.

  • ERROR: only events with EventLevel ERROR are kept.

  • OFF: no events are kept.

Example
events:
  root: INFO (1)
  elasticsearch: TRACE (2)
  cassandra.poll: DEBUG (3)
1 All events with levels of INFO or higher (WARN, ERROR) are kept.
2 All events starting with elasticsearch. are kept.
3 Events with names starting with cassandra.poll. are kept only if their level is DEBUG or higher; TRACE events under cassandra.poll are not kept.
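The filtering rule above can be sketched in plain Kotlin: the most specific configured name prefix wins, falling back to the root level. This is an illustrative model, not the QALIPSIS implementation:

```kotlin
// Event levels ordered from most to least verbose; OFF disables everything.
enum class EventLevel { TRACE, DEBUG, INFO, WARN, ERROR, OFF }

fun isKept(eventName: String, eventLevel: EventLevel, config: Map<String, EventLevel>): Boolean {
    val configured = config.keys
        .filter { it != "root" && (eventName == it || eventName.startsWith("$it.")) }
        .maxByOrNull { it.length }      // the most specific prefix wins
        ?.let { config.getValue(it) }
        ?: config.getValue("root")      // fall back to the root level
    return configured != EventLevel.OFF && eventLevel >= configured
}

fun main() {
    val config = mapOf(
        "root" to EventLevel.INFO,
        "elasticsearch" to EventLevel.TRACE,
        "cassandra.poll" to EventLevel.DEBUG,
    )
    println(isKept("elasticsearch.save", EventLevel.TRACE, config))     // true
    println(isKept("cassandra.poll.records", EventLevel.TRACE, config)) // false
    println(isKept("anything.else", EventLevel.DEBUG, config))          // false
}
```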

Plugins

Introduction

A plugin is a technical extension designed to extend the core capabilities of QALIPSIS.

Plugins:

  • Tackle the complexity implied by the multiplicity of technologies and use cases to support in the tool.

  • Reduce the coupling between the technical elements, keeping them safer and easier to maintain.

The main advantages of using plugins are:

  • A drastic reduction in the footprint of required libraries to run a scenario.

  • A better user/developer experience, including:

    • The ability to focus on what is absolutely relevant to the user/developer.

    • Having a smaller list of proposals when using auto-completion.

    • Faster compilation and start times.

    • Smaller artifacts to store, transport and share.

By nature, a plugin is an optional component of QALIPSIS. It allows you to include only what you need while ignoring components and libraries that do not serve your present concerns.

While a certain number of plugins are proposed by the community to cover the most common use cases and technologies, a plugin can also be an extension created only for yourself that remains private.

Plugin functionality

A plugin contains one or several technical functionalities of the software that are not part of the core. The most common functionalities that a plugin might implement are:

  • A step to perform an action, such as:

    • Executing requests on a remote system using a specific protocol.

    • Polling data from a source.

    • Transforming data.

  • An events logger.

  • A meter registry.

Default implementations of the above functionalities are part of the QALIPSIS core. However, the default implementations may not be suited for your scenario (e.g., if productive testing or constant monitoring are requirements). QALIPSIS plugins enhance the above functionalities by providing related implementations.

Plugin integration

To create a test scenario, a tiny Gradle project is required. The plugin is simply added as a dependency to the class path of the project. You can read more about scenario configuration within the Quickstart guide: Testing an HTTP server with QALIPSIS.

If the development of a scenario requires the plugin to compile (when using a step, for example), the dependency to the plugin artifact must be added to the implementation dependencies.

If the plugin delivers functionalities that you only need at runtime, add it to the runtimeOnly dependencies.

Using QALIPSIS plugins

QALIPSIS provides several plugins targeting different technologies, systems and protocols.

To use a plugin, add the related dependency to your project:

Gradle - Groovy DSL:

implementation group:'io.qalipsis.plugin', name: 'qalipsis-plugin-XXX', version: '0.7.c-SNAPSHOT'

Gradle - Kotlin DSL:

implementation("io.qalipsis.plugin:qalipsis-plugin-XXX:0.7.c-SNAPSHOT")

QALIPSIS available plugins

In this section, documentation is provided for currently available plugins that can be used with your QALIPSIS project.

This section will be updated as more plugins are available.

Note: Before using a plugin, confirm that the related dependency has been added to your project:

  • Gradle - Groovy DSL:

    implementation group:'io.qalipsis.plugin', name: 'qalipsis-plugin-XXX', version: '0.7.c-SNAPSHOT'

  • Gradle - Kotlin DSL:

    implementation("io.qalipsis.plugin:qalipsis-plugin-XXX:0.7.c-SNAPSHOT")

Name

Description

cassandra

Connects QALIPSIS to APACHE Cassandra.

elasticsearch

Connects QALIPSIS to Elasticsearch.

graphite

Connects QALIPSIS to Graphite using the pickle or plaintext protocol.

influxdb

Connects QALIPSIS to InfluxDB.

jackson

Supports reading a file (JSON, XML or CSV), converting the data to an object, and pushing the object to the next step.

jms

Connects QALIPSIS to a messaging platform that supports JMS (for example, ActiveMQ).

jakartaee

Connects QALIPSIS to a messaging platform that supports Jakarta EE.

kafka

Connects QALIPSIS to an Apache Kafka cluster.

mongodb

Connects QALIPSIS to MongoDb.

netty

Supports TCP, UDP, HTTP and MQTT clients.

timescaledb

Connects QALIPSIS to TimescaleDB.

r2dbc-jasync

Connects QALIPSIS to MariaDB, MySQL and PostgreSQL.

rabbitmq

Connects QALIPSIS to RabbitMQ.

redis-lettuce

Connects QALIPSIS to Redis.

slack

Sends campaign notifications to a defined Slack channel.

mail

Sends campaign notifications to a defined email address.

gradle

Connects Qalipsis to the Gradle build tool.

Cassandra plugin

Overview

The Cassandra plugin connects QALIPSIS to Apache Cassandra.

Technologies addressed
Dependency

io.qalipsis.plugin:qalipsis-plugin-cassandra

Namespace in scenario

cassandra()

Client library

DataStax: refer to the DataStax documentation.

DataStax is a modern, feature-rich and highly tunable Java client library for Apache Cassandra, using Cassandra's binary protocol and the Cassandra Query Language.

Supported steps
  • poll: polls the newest data periodically.

  • save: saves records generated from the input and step context.

  • search: searches records and builds a query from the input and step context.

Example scenarios using a combination of Cassandra supported steps are provided on GitHub.

Poll step

The poll step within the Cassandra plugin polls the newest records from the database table.

Ancestor

Scenario or step

Functionality

The poll step is created only once in a scenario; it is then used throughout the scenario as called. The poll step within the Cassandra plugin uses a strategy of “delivered at least once”.

Example
cassandra().poll {
    connect {
        servers = listOf("localhost:9042")
        keyspace = "iot"
        datacenterName = "datacenter1"
    }
    query("SELECT deviceid, timestamp, batterylevel FROM batteryState WHERE company = ? ORDER BY timestamp")
    parameters("ACME Inc.")
    tieBreaker {
        name = "timestamp"
        type = GenericType.INSTANT
    }
    pollDelay(Duration.ofSeconds(1))
}
Notable parameters
  • connect (required): configures the connection to Apache Cassandra

  • keyspace (required): the outermost object that determines how data replicates on nodes.

  • query (required): defines the prepared statement to execute when polling.

  • parameters (optional): binds values to the clauses of the prepared query.

  • tieBreaker (required): specifies the first column used for sorting and comparison (> or >=, depending on the ascending or descending order).

  • pollDelay (required): specifies the delay between query executions.

  • broadcast (optional): specifies the broadcast parameters for the step.

Tips
  • Use the poll step with a left join operator to verify the values saved by your tested system in the database.

  • query requires at least one sorting field (tiebreaker in the example) to filter the newest results and not fetch the same records repeatedly.

  • The latest known value for tiebreaker is used to filter out values already received from the database.
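
The tie-breaker mechanism described in the tips above can be sketched in plain Java, outside the QALIPSIS DSL. The record type and values here are hypothetical; the real poll step works on Cassandra rows, but the filtering idea is the same:

```java
import java.time.Instant;
import java.util.List;

public class TieBreakerSketch {
    // Hypothetical shape of a polled row.
    record BatteryState(String deviceId, Instant timestamp) {}

    public static void main(String[] args) {
        // Latest tie-breaker value observed during the previous poll.
        Instant lastSeen = Instant.parse("2024-05-01T10:00:00Z");
        List<BatteryState> polled = List.of(
            new BatteryState("car-1", Instant.parse("2024-05-01T09:59:00Z")),
            new BatteryState("car-2", Instant.parse("2024-05-01T10:00:00Z")),
            new BatteryState("car-3", Instant.parse("2024-05-01T10:01:00Z"))
        );
        // Keep only rows strictly newer than the tie-breaker (">" for an
        // ascending sort; a descending sort would compare with ">=").
        List<BatteryState> fresh = polled.stream()
            .filter(r -> r.timestamp().isAfter(lastSeen))
            .toList();
        System.out.println(fresh.size()); // 1: only car-3 is forwarded
    }
}
```

Depending on whether > or >= applies, rows sharing the exact tie-breaker value may be fetched again on the next poll, which is consistent with the “delivered at least once” strategy.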

Reference documentation

Refer to the Apache Cassandra documentation for further parameter and configuration information.

Save step

The save step within the Cassandra plugin persists records into a Cassandra database using the save query with Cassandra Query Language (CQL).

Ancestor

Step

Functionality

The save step’s input and step context are used to generate documents to save into a database.

Example
.cassandra()
.save {
    connect {
        servers = listOf("localhost:9042")
        keyspace = "iot"
        datacenterName = "datacenter1"
    }

    table { _, _ ->
        "batteryState"
    }

    columns { _, _ ->
        listOf(
            "company",
            "deviceid",
            "timestamp",
            "batterylevel"
        )
    }

    rows { _, input ->
        listOf(
            CassandraSaveRow(
                "'ACME Inc.'",
                "'${input.deviceId}'",
                "'${input.timestamp}'",
                input.batteryLevel
            )
        )
    }
}
Notable parameters
  • connect (required): configures the connection to Apache Cassandra.

  • keyspace (required): the outermost object that determines how data replicates on nodes.

Tips
  • Use the save step after a collect step to reduce the number of queries executed onto the database and reduce the load your scenario generates on it.

  • The save step is extremely flexible; table, rows, and columns depend on the context and input value received from the previous step.

Reference documentation

Refer to the Apache Cassandra documentation for further parameter and configuration information.

Search step

The search step within the Cassandra plugin searches records in the database using a search query.

Ancestor

Step

Functionality

The search step’s input and step context are used to complete the search; the list of results is then forwarded to the next step(s).

Example
cassandra().search {
    name = "search"

    connect {
        servers = listOf("localhost:9042")
        keyspace = "iot"
        datacenterName = "datacenter1"
    }
    query { _, _ ->
        "SELECT * FROM batteryState WHERE company = ? AND deviceid = ? AND timestamp = ?"
    }
    parameters { _, input ->
        listOf("ACME Inc.", input.input.deviceId, input.input.timestamp)
    }
}
Notable parameters
  • connect (required): configures the connection to Apache Cassandra.

  • keyspace (required): the outermost object that determines how data replicates on nodes.

  • query (required): defines the prepared statement to execute when searching.

  • parameters (optional): binds values to the clauses of the prepared query.

Tips
  • Use the search step after a collect() step to reduce the number of queries executed onto the database and to reduce the load the scenario generates on the test system.

  • The query parameter may contain ordering clauses.

Reference documentation
  • Refer to the Apache Cassandra documentation for further parameter and configuration information.

Elasticsearch plugin

Overview

The Elasticsearch plugin connects QALIPSIS to Elasticsearch.

Technology addressed

Elasticsearch: https://www.elastic.co/

Dependency

io.qalipsis.plugin:qalipsis-plugin-elasticsearch

Namespace in scenario

elasticsearch()

Client library

Java API: refer to the Elasticsearch Java Client.

Supported steps
  • poll: polls the newest data periodically.

  • save: saves a batch of records generated from the input and step context.

  • search: searches records and builds a query from the input and step context.

Example scenarios using a combination of Elasticsearch supported steps are provided on GitHub.

Poll step

The poll step within the Elasticsearch plugin periodically polls data from Elasticsearch. The output is a list of ElasticsearchDocument containing JSON data or deserialized to objects.

Ancestor

Scenario

Functionality

The poll step is created only once in a scenario; it is then used throughout the scenario as called. The poll step within the Elasticsearch plugin uses a strategy of “delivered exactly once”. The output of the poll step is an ElasticsearchDocument that contains maps of column to values.

Example
elasticsearch().poll {
    name = "poll"
    client { RestClient.builder(HttpHost.create("http://localhost:9200")).build() }
    query {
        """{
            "query":{
                "match_all":{}
            },
                "sort" : ["_score"]
            }""".trimIndent()
    }
    index("battery-state")
    pollDelay(Duration.ofSeconds(1))
}
Notable parameters
  • index: sets the Elasticsearch indices to use for the queries; defaults to all.

  • client: configures the REST client to connect to Elasticsearch.

  • query: configures the builder for the JSON query to perform the first poll.

  • queryParameters (optional): adds parameters to the query.

  • pollDelay (required): specifies the delay between query executions.

  • broadcast (optional): specifies the broadcast parameters for the step.

Tips
  • The poll step is generally used in conjunction with a join to assert data or inject data into a workflow.

  • The query parameter requires at least one sorting field to filter the newest results and not fetch the same records repeatedly.

  • The output of the poll step is an ElasticsearchDocument that contains maps of column names to values. If flatten is called, the records are provided one by one to the next step; otherwise each poll batch remains complete.

Reference Documentation

Refer to the Elasticsearch documentation for further parameter and configuration information.

Save step

The save step within the Elasticsearch plugin persists a batch of documents into Elasticsearch.

Ancestor

Step

Functionality

The save step’s input and step context are used to generate a list of documents that are forwarded to Elasticsearch.

Example
.elasticsearch()
.save {
    name = "save"
    client {
        RestClient.builder(HttpHost.create("http://localhost:9200"))
            .build()
    }
    documents { _, input ->
        listOf(
            Document(
                index = "battery-state",
                id = input.deviceId,
                source = objectMapper.writeValueAsString(input)
            )
        )
    }
}
Notable Parameters
  • client: configures the REST client to connect to Elasticsearch.

  • documents: defines the strategy to create documents to save from the input and step context.

Reference Documentation

Refer to the Elasticsearch documentation for further parameter information.

Search step

The search step within the Elasticsearch plugin searches documents in the Elasticsearch database using a search query.

Ancestor

Step

Functionality

The search step’s input and step context are used to generate a query; the batch of results is then forwarded to the next step(s).

Example
.elasticsearch()
.search {
    name = "search"
    client {
        RestClient.builder(HttpHost.create("http://localhost:9200")).build()
    }
    index { _, _ -> listOf("battery-state") }
    query { _, input ->
        """{
            "query": {
                "bool": {
                    "must": [
                        { "match": { "deviceId": "${input.input.deviceId}" }},
                        { "term": { "timestamp": ${input.input.timestamp.epochSecond} }}
                    ]
                }
            }
        }"""
    }
}
Notable parameters
  • client: configures the REST client to connect to Elasticsearch.

  • index: sets the Elasticsearch indices to use for the queries; defaults to all.

  • query: configures the builder for the JSON query to perform a search.

Reference Documentation

Refer to the Elasticsearch documentation for further parameter and configuration information.

Configuration of publication of events and meters to Elasticsearch

Configuration namespace

Events

events.export.elasticsearch

Meters

meters.export.elasticsearch

Example configuration file

events:
    export:
        elasticsearch:
            enabled: true
            min-level: INFO
            urls: http://localhost:9200
            index-prefix: qalipsis-events
            index-date-pattern: yyyy-MM-dd
            duration-as-nano: false
            linger-period: 10s
            batch-size: 2000
            publishers: 1
            username:
            password:
            shards: 1
            replicas: 0

meters:
    export:
        elasticsearch:
            urls: http://localhost:9200
            host: ${meters.export.elasticsearch.urls}
            enabled: true
            step: PT10S
            index: qalipsis-meters
            index-date-format: yyyy-MM-dd
            auto-create-index: true
Event parameters
  • enabled (required): boolean flag that activates/deactivates event publishing to Elasticsearch; defaults to false; must be set to true.

  • min-level (required): minimal accepted level of events; allowable values are EventLevel values: TRACE, DEBUG, INFO, WARN, ERROR, OFF; defaults to INFO.

  • urls (required): list of URLs to the Elasticsearch instances; defaults to http://localhost:9200.

  • index-prefix (required): string prefix to use for the created indices containing the events; defaults to qalipsis-events.

  • path-prefix (required): string prefix used to demarcate url paths; defaults to http://.

  • index-date-pattern (required): string format of the date part of the index as supported by [java.time.format.DateTimeFormatter]; defaults to yyyy-MM-dd.

  • refresh-interval (required): string delay to refresh the indices in Elasticsearch; defaults to 10s.

  • store-source (required): boolean flag identifies whether the source of documents is stored with the values; set to true for yes and false for no; defaults to false.

  • linger-period (required): maximum period between publication of events to Elasticsearch; positive duration, defaults to 10s.

  • batch-size (required): maximum number of events buffered between two publications of events to Elasticsearch; positive integer; defaults to 2000.

  • publishers (required): number of concurrent publications of events that can run; positive integer; defaults to 1 (no concurrency).

  • shards (required): number of shards to apply on the created indices for events; positive integer; defaults to 1.

  • replicas (required): number of replicas to apply on the created indices for events; defaults to 0.

  • username (optional): name of the user for basic authentication when connecting to Elasticsearch.

  • password (optional): password of the user for basic authentication when connecting to Elasticsearch.

    By default, Elasticsearch does not require a username or password for authentication. However, it is recommended that the information be provided to ensure secure access to the Elasticsearch data.
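
The index-prefix and index-date-pattern parameters combine into time-based index names. As an illustration only (the exact way QALIPSIS joins the two parts is an assumption here), the pattern syntax follows java.time.format.DateTimeFormatter:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class IndexNameSketch {
    public static void main(String[] args) {
        // index-prefix and index-date-pattern from the configuration above.
        String indexPrefix = "qalipsis-events";
        DateTimeFormatter datePattern = DateTimeFormatter.ofPattern("yyyy-MM-dd");

        // With the default yyyy-MM-dd pattern, one index is created per day.
        String indexName = indexPrefix + "-" + LocalDate.of(2024, 5, 1).format(datePattern);
        System.out.println(indexName); // qalipsis-events-2024-05-01
    }
}
```

A coarser pattern such as yyyy-MM would yield one index per month, trading index count against index size.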

Meter parameters
  • urls (required): list of URLs to the Elasticsearch instances; defaults to http://localhost:9200.

  • host (required): defaults to the value of urls above.

  • enabled (required): boolean flag that activates/deactivates meter publishing to Elasticsearch; defaults to false; must be set to true.

  • step (required): The step size (reporting frequency) to use; defaults to PT10S.

  • index (required): string value to use for creation of index containing the meters; defaults to qalipsis-meters.

  • index-date-format (required): string format of the date part of the index as supported by [java.time.format.DateTimeFormatter]; defaults to yyyy-MM-dd.

  • auto-create-index (required): boolean flag to enable/disable auto creation of index; defaults to true.

  • proxy (optional): string URL of the HTTP proxy to use to access Elasticsearch; defaults to null.

    It is recommended that the appropriate proxy configuration be identified (rather than letting it default to null) in order to enable support for various authentication mechanisms in Elasticsearch.

  • username (optional): name of the user for basic authentication when connecting to Elasticsearch.

  • password(optional): password of the user for basic authentication when connecting to Elasticsearch.

    By default, Elasticsearch does not require a username or password for authentication. However, it is recommended that the information be provided to ensure secure access to the Elasticsearch data.

Graphite plugin

Overview

The Graphite plugin connects QALIPSIS to Graphite using the pickle or plaintext protocol.

Technology addressed

Graphite: https://graphiteapp.org/

Dependency

io.qalipsis.plugin:qalipsis-plugin-graphite

Namespace in scenario

graphite()

Client library
Supported steps
  • poll: polls data from Graphite.

  • save: saves events and records to Graphite.

Example scenarios using a combination of Graphite supported steps are provided on GitHub.

Poll step

The poll step within the Graphite plugin collects metrics from a Graphite server and broadcasts them to other components.

Ancestor

Scenario or Step

Functionality

The poll step is typically used to collect and distribute metrics for monitoring and visualization purposes.

Example
graphite().poll {
  name = "my-poll-step"

  connect {
    server("http://localhost:8080")
  }

  query(
    GraphiteQuery("device.battery-state.*")
      .from(
        GraphiteMetricsTime(
          2,
          GraphiteMetricsTimeSignUnit.MINUS,
          GraphiteMetricsTimeUnit.DAYS
        )
      )
      .noNullPoints(true)
  )
  broadcast(123, Duration.ofSeconds(20))
  pollDelay(Duration.ofSeconds(1))
}
Notable parameters
  • connect (required): configures the connection to the Graphite server.

  • query (required): specifies a query to retrieve metrics from the Graphite server.

  • broadcast (required): specifies the broadcast parameters for the step.

  • pollDelay (required): specifies the delay between query executions.

Reference documentation

Refer to the Graphite documentation for parameter and configuration information.

Save step

The save step within the Graphite plugin persists records into a Graphite database using the save query.

Ancestor

Scenario or Step

Functionality

The save step’s input and step context are used to generate documents to save into Graphite.

Example
.graphite()
.save {

  connect {
    server("localhost", 2003)
  }

  records { _, input ->
    listOf(
      GraphiteRecord(
        metricPath = "device.battery-state.${input.deviceId.lowercase()}",
        value = input.batteryLevel,
        timestamp = input.timestamp
      )
    )
  }
}
Notable parameters
  • connect (required): configures the connection to the Graphite server.

  • records (required): generates and configures a list of saved records.

Reference documentation

Refer to the Graphite documentation for further parameter and configuration information.

Configuration of publication of events and meters to Graphite

Configuration namespace

Events

events.export.graphite

Meters

meters.export.graphite

Example configuration file

events:
  export:
    graphite:
      enabled: true
      min-level: INFO
      host: localhost
      port: 2004
      protocol: pickle
      batch-size: 2
      linger-period: 10s
      publishers: 2

meters:
  export:
    graphite:
      host: localhost
      port: 2004
      enabled: true
      step: PT10S
Event parameters
  • enabled (required): boolean flag that activates/deactivates event publishing to Graphite; defaults to false; must be set to true.

  • min-level (required): minimal accepted level of events; defaults to INFO; allowed values are EventLevel values: TRACE, DEBUG, INFO, WARN, ERROR, OFF.

  • host (required): defaults to localhost; cannot be blank.

  • port (required): positive integer; defaults to 2004.

  • protocol (required): the GraphiteProtocol type; default is pickle; available choices are pickle or plaintext.

  • batch-size (required): maximum number of events buffered between two publications of events to Graphite; positive integer; defaults to 2.

  • linger-period (required): maximum period between two publications of events to Graphite; positive duration; defaults to 10s.

  • publishers (required): maximum number of concurrent publications of events to Graphite; positive integer; defaults to 2.
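
Of the two protocol choices, pickle is a binary batch format, while plaintext is a simple line-oriented one. A plain-Java sketch of a plaintext payload, with hypothetical metric path and values, shows its shape:

```java
public class GraphitePlaintextSketch {
    public static void main(String[] args) {
        // The plaintext protocol sends one metric per line:
        // "<metric path> <value> <unix timestamp>\n"
        String path = "device.battery-state.car-1"; // hypothetical metric path
        double value = 86.0;
        long epochSeconds = 1714557600L;
        String line = path + " " + value + " " + epochSeconds + "\n";
        System.out.print(line); // device.battery-state.car-1 86.0 1714557600
    }
}
```

The pickle protocol batches many such data points into one binary message, which is why it is the default for higher publishing volumes.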

Meter parameters
  • enabled (required): boolean flag that activates/deactivates meter publishing to Graphite; defaults to false; must be set to true.

  • host (required): defaults to localhost; cannot be blank.

  • port (required): positive integer; defaults to 2004.

  • step (required): The step size (reporting frequency); defaults to PT10S.

InfluxDB plugin

Overview

The InfluxDB plugin connects QALIPSIS to InfluxDB.

Technologies addressed
Dependency

io.qalipsis.plugin:qalipsis-plugin-influxdb

Namespace in scenario

influxdb()

Client library

influxdb-client-kotlin: refer to the influxdb-client-kotlin documentation.

Supported steps
  • poll: polls the newest data periodically.

  • save: saves a batch of records generated from the input and step context.

  • search: searches records and builds a query from the input and step context.

Example scenarios using a combination of the InfluxDB supported steps are provided on GitHub.

Poll step

The poll step within the InfluxDB plugin polls the newest records from the database, with a delay between each execution of a select query.

Ancestor

Scenario

Functionality

The poll step is created only once in a scenario; it is then used throughout the scenario as called. The poll step within the InfluxDB plugin uses a strategy of “delivered at least once”.

Example
influxdb().poll {
    connect {
        server(
            url = "http://localhost:18086",
            bucket = "iot",
            org = "qalipsis"
        )
        basic(user = "DJones", password = "Oxford1987")
    }
    query(
        """
        from(bucket:"iot")
            |> range(start: -15m)
            |> filter(
                fn: (r) => r._measurement == "battery_state"
            )
    """.trimIndent()
    )
    pollDelay(Duration.ofSeconds(1))
}
Notable parameters
  • connect (required): configures the connection to InfluxDB.

  • query (required): specifies a query respecting the requirements of InfluxDB.

  • parameters (optional): binds values to the clauses of the prepared query.

  • pollDelay (required): specifies the delay between query executions.

  • broadcast (optional): specifies the broadcast parameters for the step.

Tip

Use the poll step with a join operator to verify the values that the tested system saves to the database.

Reference documentation

Refer to the InfluxDB documentation for further parameter and configuration information.

Save step

The save step within the InfluxDB plugin persists a batch of records into the InfluxDB bucket using a specific points class.

Ancestor

Step

Functionality

The save step’s input and step context are used to generate the bucket, the list of points with their measurement, and tags and fields, which are then combined and forwarded to the database.

Example
.save {
    connect {
        server(
            url =  "http://localhost:18086",
            bucket = "iot",
            org = "qalipsis"
        )
        basic(user = "DJones", password = "Oxford1987")
    }
    query {
        bucket = { _, _ -> "iot" }
        organization = { _, _ -> "qalipsis" }
        points = { _, input ->
            listOf(
                Point.measurement("battery_state")
                    .addField("battery_level", input.batteryLevel)
                    .addTag("device_id", input.deviceId)
                    .addTag("timestamp", input.timestamp.epochSecond.toString())
            )
        }
    }
}
Notable parameter
  • connect (required): configures the connection to InfluxDB.

  • query (required): specifies a query respecting the requirements of InfluxDB.

Tips
  • Use the save step after a collect step to reduce the number of queries executed onto the database and to reduce the load your scenario generates on the test system.

  • The save step is extremely flexible: bucket, tags, measurement and fields might depend on the context and input value received from the previous step.
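
Under the hood, a Point with a measurement, tags and fields serializes to InfluxDB line protocol. A plain-Java sketch of that textual form, with hypothetical values (escaping and timestamp precision are handled by influxdb-client-kotlin in practice):

```java
public class LineProtocolSketch {
    public static void main(String[] args) {
        // InfluxDB line protocol: <measurement>,<tag set> <field set>
        String measurement = "battery_state";
        String tags = "device_id=car-1,timestamp=1714557600"; // hypothetical tag values
        String fields = "battery_level=86i"; // trailing "i" marks an integer field
        String line = measurement + "," + tags + " " + fields;
        System.out.println(line);
        // battery_state,device_id=car-1,timestamp=1714557600 battery_level=86i
    }
}
```

Keeping high-cardinality values (such as raw timestamps) out of tags is generally advisable in InfluxDB, since every distinct tag combination creates a new series.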

Reference documentation

Refer to the InfluxDB documentation for further parameter and configuration information.

Search step

The search step within the InfluxDB plugin searches records in the database using a search query.

Ancestor

Step

Functionality

The search step’s input and step context are used to generate a query; the batch of results is then forwarded to the next step(s).

Example
.influxdb()
.search {
    connect {
        server(
            url = "http://localhost:18086",
            bucket = "iot",
            org = "qalipsis"
        )
        basic(user = "DJones", password = "Oxford1987")
    }
    query { _, input ->
        """
            from(bucket: "iot")
                |> range(start: -15m)
                |> filter(
                    fn: (r) => r._measurement == "battery_state" and
                    r.device_id == "${input.deviceId}" and
                    r.timestamp == "${input.timestamp.epochSecond}"
                )
        """.trimIndent()
    }
}
Notable parameters
  • connect (required): configures the connection to InfluxDB.

  • query (required): specifies a query respecting the requirements of InfluxDB.

Tip

Use the search step after a collect step to reduce the number of queries executed onto the database and to reduce the load the scenario generates on the test system.

Reference documentation

Refer to the InfluxDB documentation for further parameter and configuration information.

Configuration of publication of events and meters to InfluxDB

Configuration namespace

Events

events.export.influxdb

Meters

meters.export.influxdb

Example configuration file

events:
    export:
        influxdb:
            url: http://localhost:8086
            enabled: true
            min-level: INFO
            linger-period: 10s
            batch-size: 2000
            org: qalipsis
            bucket: qalipsis-event
            username: user
            password: paspasspass

meters:
    export:
        influxdb:
            url: http://localhost:8086
            enabled: true
            step: PT10S
            org: qalipsis
            bucket: qalipsis-meter
            username: user
            password: passpasspass
Event parameters
  • url (required): string URL to the InfluxDB instance; defaults to http://localhost:8086.

  • enabled (required): boolean flag that activates/deactivates event publishing to InfluxDB; defaults to false; must be set to true.

  • min-level (required): minimum accepted level of events; defaults to INFO; allowable values are EventLevel values: TRACE, DEBUG, INFO, WARN, ERROR, OFF.

  • linger-period (required): maximum period between publication of events to InfluxDB; positive duration, defaults to 10s.

  • batch-size (required): maximum number of events buffered between two publications of events to InfluxDB; positive integer; defaults to 2000.

  • org (required): name of the organization to use for basic authentication when connecting to InfluxDB; defaults to qalipsis.

  • bucket (required): name of the bucket to use for saving events to InfluxDB; defaults to qalipsis-event.

  • username (required): name of the user for basic authentication when connecting to InfluxDB; defaults to user.

  • password (required): password of the user for basic authentication when connecting to InfluxDB; defaults to passpasspass.

  • publishers (optional): number of concurrent publications of events that can run; positive integer; defaults to 1 (no concurrency).

Meter parameters
  • url (required): string URL to the InfluxDB instance; defaults to http://localhost:8086.

  • enabled (required): boolean flag that activates/deactivates meter publishing to InfluxDB; defaults to false; must be set to true.

  • step (required): step size (reporting frequency); defaults to PT10S.

  • org (required): name of the organization to use for basic authentication when connecting to InfluxDB; defaults to qalipsis.

  • bucket (required): name of the bucket to use for saving meters to InfluxDB; defaults to qalipsis-meter.

  • username (required): name of the user for basic authentication when connecting to InfluxDB; defaults to user.

  • password (required): password of the user for basic authentication when connecting to InfluxDB; defaults to passpasspass.
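
Values such as step (and pollDelay in the scenario DSL) are ISO-8601 durations, parsed by java.time.Duration. A small sketch of the format:

```java
import java.time.Duration;

public class StepDurationSketch {
    public static void main(String[] args) {
        // "PT10S" is an ISO-8601 duration: meters are reported every 10 seconds.
        Duration step = Duration.parse("PT10S");
        System.out.println(step.getSeconds()); // 10

        // A one-minute reporting frequency would be "PT1M".
        System.out.println(Duration.parse("PT1M").toSeconds()); // 60
    }
}
```

The shorter "10s"-style values seen in properties such as linger-period express the same idea in a simplified notation.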

Jackson plugin

Overview

The Jackson plugin supports reading a file (JSON, XML or CSV), converting the data to an object, and pushing the object to the next step.

Technologies addressed

Class JsonMapper

Dependency

io.qalipsis.plugin:qalipsis-plugin-jackson

Namespace in scenario

jackson()

Client library

JsonMapper: refer to FasterXML.

Supported steps

csvToMap and csvToObject

The csvToMap and csvToObject steps within the Jackson plugin read a CSV resource (file, classpath resource, or URL) and return each item as an instance of an object or map.

Ancestor

Step

Functionality

The csvToMap and csvToObject step specification has a targetClass property to which the output lines should be mapped. The step reads a CSV file, converts the data to a map (or an object) and pushes the map (or object) to the next step.

Example
jackson().csvToMap {
   classpath(file)
   header {
      escapeChar('E')
      quoteChar('"')
      columnSeparator(';')
      withHeader()
      allowComments()

      column("1st value").double()
      column("2nd value").string().array(",")
      column("3rd value").string(true)
      column("4th value").string(true)
   }
}
.jackson()
.csvToObject(BatteryState::class) {
   classpath("battery-levels.csv")
   header {
      column("deviceId")
      column("timestamp")
      column("batteryLevel").integer()
   }
   unicast()
}
Notable parameters
  • classpath(file): reads the data from a specified class path resource; file is the path to the resource with the leading slash ignored.

  • file(path): reads the data from a specified plain file in the file system; path is the path to the file, either absolute or relative to the current working directory.

  • url(url): specifies and links to the url of the data resource.

  • mapper: configures the JSON Mapper to deserialize the records.

Tip

Only one path (classpath(file), file(path), or url(url)) to the resource is allowed for each csvToObject or csvToMap step.

Reference documentation

Refer to the jackson documentation for further configuration and parameter information.

xmlToObject

The xmlToObject step within the Jackson plugin reads an XML resource (classpath, file, or url) and returns each item as an instance of an object of the specified class.

Ancestor

Step

Functionality

The xmlToObject step specification has a targetClass property to which the output lines should be mapped. The step reads an XML file, converts the data to an Object, and pushes the Object to the next step.

Example
jackson().xmlToObject(PojoForTest::class) {
   classpath(file)
}
Notable parameters
  • classpath(file): reads the data from a specified class path resource; file is the path to the resource with the leading slash ignored.

  • file(path): reads the data from a specified plain file in the file system; path is the path to the file, either absolute or relative to the current working directory.

  • url(url): specifies and links to the url of the data resource.

Tip

Only one path (classpath(file), file(path), or url(url)) to the resource is allowed for each xmlToObject step.

Reference documentation

Refer to the jackson documentation for further configuration and parameter information.

jsonToMap and jsonToObject

The jsonToMap and jsonToObject steps within the Jackson plugin read a JSON resource (file, classpath resource, or URL) and return each item as an instance of a map or object.

Ancestor

Step

Functionality

The jsonToMap and jsonToObject step specification has a targetClass property to which the output lines should be mapped. The step reads a JSON file, converts the data to a Map (or an Object) and pushes the Map (or Object) to the next step.

Example
jackson().jsonToMap() {
   classpath(file)
}
jackson().jsonToObject(PojoForTest::class) {
   classpath(file)
}
Notable parameters
  • classpath(file): reads the data from a specified path resource; file is the path to the resource with the leading slash ignored.

  • file(path): reads the data from a specified plain file in the file system; path is the path to the file, either absolute or relative to the current working directory.

  • url(url): specifies and links to the url of the data resource.

Tip

Only one path (classpath(file), file(path), or url(url)) to the resource is allowed for each jsonToObject or jsonToMap step.

Reference documentation

Refer to the jackson documentation for further configuration and parameter information.

JMS plugin

Overview

The JMS plugin connects QALIPSIS to a messaging platform that supports JMS.

Technology addressed

JMS: https://www.oracle.com/java/technologies/java-message-service.html

Dependency

io.qalipsis.plugin:qalipsis-plugin-jms

Namespace in scenario

jms()

Client library

JMS API: refer to Java Interface Summary.

Supported steps
  • consume: consumes data from a messaging platform supporting JMS.

  • produce: pushes data to a messaging platform supporting JMS.

An example scenario that uses a combination of the JMS consume and produce steps is provided on GitHub.

Consume step

The consume step within the JMS plugin consumes records from ActiveMQ.

Ancestor

Scenario

Functionality

The consume step polls data from the designated ActiveMQ connection, using the step context and the configured data deserializer.

Example
jms().consume {
    queues("battery_state")
    queueConnection { ActiveMQConnectionFactory("tcp://localhost:61616").createQueueConnection() }
}
    .deserialize(JmsJsonDeserializer(targetClass = BatteryState::class))
    .map { result ->
        result.record.value
    }
Notable parameters
  • consume: configures the ActiveMQ connection.

    • queues: specifies the queue(s) for the connection.

    • queueConnection: provides the Java ActiveMQ connection for the specified queues.

    • topics (optional): specifies the topic(s) for the connection.

    • topicConnection (optional): provides the Java ActiveMQ connection for the specified topics.

  • broadcast (optional): specifies the broadcast parameters for the step.

  • deserialize (optional): specifies the class to use to deserialize the body of the response.

Tip

The ActiveMQ connection can be configured for either queues (queueConnection) or topics (topicConnection), but not both.

Reference documentation

Refer to the JMS documentation for further parameter and configuration information.

Produce step

The produce step within the JMS plugin produces records in ActiveMQ using JMS.

Ancestor

Step

Functionality

The produce step produces a batch of records into one or more JMS destinations. The record values may be provided by previous steps.

Example
.jms()
.produce {
    connect {
        ActiveMQConnectionFactory("tcp://localhost:61616").createConnection()
    }
    records { _, input ->
        listOf(
            JmsProducerRecord(
                destination = ActiveMQQueue().createDestination("battery_state"),
                messageType = JmsMessageType.BYTES,
                value = objectMapper.writeValueAsBytes(input)
            )
        )
    }
}
Notable parameters
  • connect: specifies the connection type for the JmsProducerRecord(s).

  • records: generates and configures a list of produced records.

    • destination: defines the destination for each JmsProducerRecord, either a queue or a topic.

    • messageType: indicates the JmsMessageType as: TEXT, AUTO, BYTES, or OBJECT.

    • value: defines the message payload sent to the destination.

Tip

If you are not sure about the JmsMessageType, use AUTO to allow the system to infer the type and display the result.

Reference documentation

Refer to the JMS documentation for further parameter and configuration information.

Jakarta EE plugin

Overview

The Jakarta EE plugin connects QALIPSIS to a messaging platform that supports Jakarta EE Messaging.

Technology addressed

Jakarta EE Messaging and the products compliant with it.

Dependency

io.qalipsis.plugin:qalipsis-plugin-jakarta-ee-messaging

Namespace in scenario

jakarta()

Client library

Refer to Jakarta EE.

Supported steps
  • consume: consumes data from a messaging platform supporting Jakarta EE Messaging.

  • produce: pushes data to a messaging platform supporting Jakarta EE Messaging.

Example scenarios using a combination of Jakarta EE supported steps are provided on GitHub.

Consume step

The consume step within the Jakarta EE plugin consumes records from a Jakarta-EE-Messaging-compatible product.

Ancestor

Scenario

Functionality

The consume step polls data from the designated connection, using the step context and the configured data deserializer.

Example
it.jakarta().consume {
  queues("battery_state")
  queueConnection {
    ActiveMQConnectionFactory(
      "tcp://localhost:61616",
      "qalipsis_user",
      "qalipsis_password"
    ).createQueueConnection()
  }
}
  .deserialize(JakartaJsonDeserializer(targetClass = BatteryState::class))
  .map { result ->
    result.record.value
  }
Notable parameters
  • queues: specifies the queue(s) to consume.

  • queueConnection: provides the connection to attach to the queue consumer, which is generally specific to the underlying product.

  • topics (optional): specifies the topic(s) to consume.

  • topicConnection (optional): provides the connection to attach to the topic consumer, which is generally specific to the underlying product.

  • broadcast (optional): specifies the broadcast parameters for the step.

  • deserialize: specifies the class to use to deserialize the body of the response.

  • session (optional): configures the session to attach to the message consumer; see the official documentation of Jakarta EE Messaging for details.

Tip

The connection can be configured to consume either queues (queueConnection) or topics (topicConnection), but not both.

Reference documentation

Refer to the Jakarta EE documentation for further parameter and configuration information.

Produce step

The produce step within the Jakarta EE plugin produces records in a Jakarta-EE-Messaging-compatible product.

Ancestor

Step

Functionality

The produce step produces a batch of records into one or more destinations of the messaging platform. The record values may be provided by previous steps.

Example
.jakarta()
.produce {

  connect {
    ActiveMQConnectionFactory(
      "tcp://localhost:61616",
      "qalipsis_user",
      "qalipsis_password"
    ).createQueueConnection()
  }

  session {
    it.createSession(false, Session.AUTO_ACKNOWLEDGE)
  }

  records { _, input ->
    listOf(
      JakartaProducerRecord(
        destination = Queue("battery_state"),
        messageType = JakartaMessageType.BYTES,
        value = objectMapper.writeValueAsBytes(input)
      )
    )
  }
}
Notable parameters
  • connect: provides the connection for the JakartaProducerRecord(s).

  • records: generates a list of records to send to the messaging platform.

    • destination: defines the destination for each JakartaProducerRecord, either a queue or a topic.

    • messageType: indicates the JakartaMessageType as: TEXT, AUTO, BYTES, or OBJECT.

    • value: defines the message payload sent to the destination.

  • session (optional): configures the session to attach to the message producer; see the official documentation of Jakarta EE Messaging for details.

  • producers (optional, defaults to 1): number of producers to use, to scale up data production.

Tip

If you are not sure about the JakartaMessageType, use AUTO to allow the system to infer the type and display the result.

Reference documentation

Refer to the Jakarta EE documentation for further parameter and configuration information.

MongoDB plugin

Overview

The MongoDB plugin connects QALIPSIS to MongoDB.

Technology addressed

MongoDB: https://mongodb.com/

Dependency

io.qalipsis.plugin:qalipsis-plugin-mongodb

Namespace in scenario

mongodb()

Client library

MongoDB Reactive Streams Driver: refer to MongoDB Java Reactive Streams.

Supported steps
  • poll: polls the newest data periodically.

  • save: saves a batch of documents generated from the input and step context.

  • search: searches documents and builds a query from the input and step context.

Example scenarios using a combination of MongoDB supported steps are provided on GitHub.

Poll step

The poll step within the MongoDB plugin polls the newest records from a collection, with a delay between each execution.

Ancestor

Scenario

Functionality

The poll step is created only once in a scenario and is then reused wherever it is called. The poll step within the MongoDB plugin uses a “delivered at least once” strategy.

Example
mongodb().poll {
    name = "poll"
    connect {
        MongoClients.create("mongodb://James23:SantaFe1987@localhost:27017")
    }
    search {
        database = "iot"
        collection = "batteryState"
        query = Document()
        sort = linkedMapOf("deviceId" to Sorting.ASC)
        tieBreaker = "deviceId"
    }
    pollDelay(Duration.ofSeconds(1))
}
Notable parameters
  • connect (required): configures the connection to MongoDB.

  • query (required): specifies a document respecting the requirements of the target database.

  • tieBreaker (required): repeats the first field used for sorting; its latest known value is used for comparison (>= or <=, depending on the ascending or descending sort order).

  • parameters (optional): binds values to the clauses of the prepared query.

  • pollDelay (required): specifies the delay between query executions.

  • broadcast (optional): specifies the broadcast parameters for the step.

Tips
  • Use the poll step with a join operator if you need to verify the values saved by your tested system in the database.

  • The query parameter requires at least one sorting field (tieBreaker in the example) to filter the newest results and avoid fetching the same records repeatedly.

  • The latest known value of tieBreaker is used to filter out the values already received from the database.
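The tie-breaker behaviour described above can be sketched in plain Kotlin, independently of MongoDB: each poll keeps only the records whose tie-breaker value is at least the latest value already seen, which is why a record may be delivered more than once. This is a simplified model, not the plugin implementation.

```kotlin
// Simplified model of tie-breaker-based polling over records sorted in
// ascending order: records older than the latest known tie-breaker value are
// filtered out, while the record carrying that value may be re-delivered
// ("delivered at least once").
data class PolledRecord(val deviceId: String, val payload: String)

fun pollSince(sortedRecords: List<PolledRecord>, lastTieBreaker: String?): List<PolledRecord> =
    if (lastTieBreaker == null) sortedRecords
    else sortedRecords.filter { it.deviceId >= lastTieBreaker }
```

With records for device-1 to device-3 and a latest tie-breaker of device-2, the records of device-2 and device-3 are fetched on the next poll, and anything older is skipped.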

Reference Documentation

Refer to the MongoDB documentation for further parameter and configuration information.

Save step

The save step within the MongoDB plugin persists a batch of documents into MongoDB.

Ancestor

Step

Functionality

The save step’s input and step context are used to generate the database, the collection, and the list of documents, which are then combined and forwarded to the database.

Example
.mongodb()
.save {
    name = "save"
    connect {
        MongoClients.create("mongodb://James23:SantaFe1987@localhost:27017")
    }
    query {
        database { _, _ ->
            "iot"
        }
        collection { _, _ ->
            "batteryState"
        }
        documents { _, input ->
            listOf(Document.parse(objectMapper.writeValueAsString(input)))
        }
    }
}
Notable parameters
  • connect (required): configures the connection to MongoDB.

  • query (required): specifies the database, the collection, and the documents to save.

Tips
  • Use the save step after a collect step to reduce the number of queries executed against the database and the load your scenario generates on the test system.

  • The save step is extremely flexible; database, collection and documents may depend on the context and input value received from the previous step.

Reference Documentation

Refer to the MongoDB documentation for further parameter and configuration information.

Search step

The search step within the MongoDB plugin searches records in the database using a search query.

Ancestor

Step

Functionality

The search step’s input and step context are used to generate a document and the values to complete it; the batch of results is then forwarded to the next step(s).

Example
.search {
    name = "search"
    connect {
        MongoClients.create("mongodb://James23:SantaFe1987@localhost:27017")
    }
    search {
        database { _, _ -> "iot" }
        collection { _, _ -> "batteryState" }
        query { _, input ->
            Document(
                mapOf(
                    "deviceId" to input.deviceId,
                    "timestamp" to input.timestamp.epochSecond
                )
            )
        }
    }
}
Notable parameters
  • connect (required): configures the connection to MongoDB.

  • search (required): defines the search parameters, including database, collection, query, and sort.

Tip

Use the search step after a collect() step to reduce the number of queries executed against the database and the load the scenario generates on the test system.

Reference Documentation

Refer to the MongoDB documentation for further parameter and configuration information.

Netty plugin

Overview

The Netty plugin supports TCP, UDP, HTTP and MQTT clients.

Technologies addressed
Dependency

io.qalipsis.plugin:qalipsis-plugin-netty

Namespace in scenario

netty()

Client library

Netty: refer to the Netty project.

Supported steps
  • HTTP: executes an HTTP request using HTTP 1.1 or HTTP 2.0.

  • HTTP WITH: executes an HTTP request reusing the HTTP connection created by a precedent step.

  • CLOSE HTTP: closes an HTTP connection created by a precedent step.

  • MQTT Publish: sends data to an MQTT server.

  • MQTT Subscribe: consumes data from an MQTT server.

  • TCP: exchanges data with a TCP server.

  • TCP WITH: exchanges data reusing the TCP connection created by a precedent step.

  • CLOSE TCP: closes a TCP connection created by a precedent step.

  • UDP: exchanges data with a UDP endpoint.

HTTP step

The HTTP step within the Netty plugin executes an HTTP request using HTTP 1.1 or HTTP 2.0.

Ancestors

Scenario and Step

Functionality
  • The HTTP step establishes a connection with the server using either a pool of connections or an individual connection for each minion.

  • If an individual connection is used, the connection is maintained until the latest execution for the minion is performed, including the ones from the related HTTP WITH step, as in the example.

  • When the step is kept open without limit, a CLOSE HTTP step must be executed to properly close the connections and avoid saturation of the system.

    Be aware that the number of open connections that can be concurrently maintained depends on the configuration of your operating system. If there are issues with maintaining open connections, try adapting the limits of the file descriptors.

Example
netty().http {
    request { context, input -> SimpleHttpRequest(HttpMethod.HEAD, "/head") }
    connect {
        url("https://my-website.com:8080/test")
        connectTimeout = Duration.ofSeconds(5)
        inflate()
        charset(Charsets.ISO_8859_1)
        maxContentLength(123)
        followRedirections(15)
        noDelay = false
        keepConnectionAlive = false
        tls {
            disableCertificateVerification = true
        }
        proxy {
            type = HttpProxyType.SOCKS5
            address("my-proxy", 9876)
        }
    }
    pool {
        size = 143
        checkHealthBeforeUse = true
    }
}.deserialize(String::class)
Notable parameters
  • request: a factory to create the request to exchange, using the step context and the input as arguments.

  • connect: configures the connection to the remote endpoint.

    • version: version of the HTTP protocol to use: HTTP_1_1 or HTTP_2_0; defaults to HTTP_1_1.

    • url: the root URL; the path of the request is appended to it.

    • connectTimeout: limit duration to fully establish the connection; defaults to 10 seconds.

    • readTimeout: limit duration to receive a response and read it totally after a request; defaults to 10 seconds.

    • shutdownTimeout: limit duration to close a connection; defaults to 10 seconds.

    • sendBufferSize: the size of the buffer used to prepare the data to send; defaults to 1024 bytes.

    • receiveBufferSize: the size of the buffer to receive the data of the response; defaults to 1024 bytes.

    • nettyChannelOptions: overwrites the options of the Netty channel; use with care.

    • noDelay: disables the Nagle Algorithm; defaults to true.

    • keepConnectionAlive: when set to true, the HTTP header Connection is set to Keep-Alive and the client connection is kept open until closeHttp() is called.

    • tls: configuration for SSL/TLS.

      • disableCertificateVerification: set it to true when the certificate of the server should not be verified (for example, for a self-signed certificate).

      • protocols: supported protocols; defaults to the Netty default for the related HTTP version.

      • ciphers: supported cipher algorithms; defaults to the Netty default for the related HTTP version.

    • proxy: configuration to exchange HTTP data via a proxy; defaults to no configuration.

      • address: the host and port of the proxy.

      • type: type of the proxy to use: HTTP, SOCKS5; defaults to HTTP.

      • authenticate: username and optional password to authenticate to the proxy, if it supports and requires it.

    • charset: character encoding of the HTTP requests.

    • maxContentLength: maximal expected size of a response, in bytes; defaults to 1048576, which is the minimum allowed.

    • inflate: enables the acceptance of compressed responses. Compression of the request is not supported.

    • followRedirections: when the server returns an HTTP status 3xx (redirection), the client resends the request to the provided location; otherwise, the response is kept as is. The argument maxRedirections sets the maximal number of redirections to follow; defaults to 10. By default, redirections are not followed at all. Note that a followed redirection resends the request exactly as it is (same method and content).

  • pool: configuration of the pool of connections; the pool is disabled by default.

    • size: number of constant connections maintained in the pool.

    • checkHealthBeforeUse: verifies the health of the connection when polled from the pool and before it is used.

  • deserialize: specifies the class to use to deserialize the body of the response.
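The redirection rule above can be modelled as a small loop: resend the same request to each returned location until a non-redirection response arrives or maxRedirections is exhausted. This is a simplified sketch under assumed types, not the Netty client implementation.

```kotlin
// Simplified model of followRedirections: a response is either a redirection
// to another location or a final payload.
sealed class Response
data class Redirect(val location: String) : Response()
data class Final(val body: String) : Response()

// Follows redirections up to maxRedirections; when the limit is exhausted,
// the last redirection response is kept as is.
fun follow(send: (String) -> Response, start: String, maxRedirections: Int): Response {
    var response = send(start)
    var remaining = maxRedirections
    while (remaining > 0) {
        val current = response
        if (current !is Redirect) break
        response = send(current.location)
        remaining--
    }
    return response
}
```

With maxRedirections = 0, the first response is returned unchanged, matching the default behaviour of not following redirections at all.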

Reference Documentation
  • Refer to the Netty documentation for further configuration and parameter information.

HTTP WITH step

The HTTP WITH step within the Netty plugin executes an HTTP request reusing the HTTP connection created by a precedent step.

Ancestor

Step

Functionality
  • The HTTP WITH step is a direct or indirect follower of an HTTP step. HTTP WITH specifies the name of the step that owns the connection, and reuses a connection from that step as it was created (either from a pool or the connection attached to the minion).

  • HTTP WITH cannot configure the connection differently than the HTTP step it relates to.

Example
netty().httpWith("name-of-a-previous-HTTP-step") {
    request { context, input -> SimpleHttpRequest(HttpMethod.HEAD, "/head") }
}.deserialize(String::class)
Notable parameters
  • netty().httpWith("name-of-a-previous-HTTP-step"): the name of the HTTP step that owns the connection.

    • request: a factory to create the request to exchange, using the step context and the input as arguments.

  • deserialize: specifies the class to use to deserialize the body of the response.

Reference Documentation
  • Refer to the Netty documentation for further configuration and parameter information.

CLOSE HTTP step

The CLOSE HTTP step within the Netty plugin closes the HTTP connection attached to the minion, and created by a direct or indirect ancestor step, when the tail of the minion workflow is received.

It is always a best practice to use a CLOSE HTTP step when reusing an HTTP connection across several steps without a pool of connections.

Ancestor

Step

Functionality

The CLOSE HTTP step is a direct or indirect follower of an HTTP step which keeps the connection open. CLOSE HTTP simply forwards the input it receives to the next step(s).

Example
netty().closeHttp("name-of-a-previous-HTTP-step")
Notable parameter
  • closeHttp (required): specifies the name of an HTTP step owning the HTTP connection to close.

Reference Documentation
  • Refer to the Netty documentation for further configuration and parameter information.

MQTT Publish step

The MQTT Publish step within the Netty plugin sends data to an MQTT server.

Ancestor

Step

Functionality

The MQTT Publish step provides a variety of options for controlling the quality and reliability of messages that are published to the server.

Example
.netty().mqttPublish {
    connect {
        host =  "localhost"
        port = 11883
    }
    clientName("mqtt-client-1")
    records { _, batteryState ->
        listOf(
            MqttPublishRecord(
                value = objectMapper.writeValueAsString(batteryState),
                topicName = "battery-state"
            )
        )
    }
}
Notable parameters
  • connect (required): configures the host and port for the connection.

  • clientName (required): uniquely identifies the client on the MQTT server.

  • topicName (required): identifies the MQTT topic to write to.

Reference Documentation

Refer to the Netty documentation for further configuration and parameter information.

MQTT Subscribe step

The MQTT Subscribe step within the Netty plugin consumes data from an MQTT server.

Ancestor

Step

Functionality

The MQTT Subscribe step provides a variety of options for controlling the quality and reliability of the messages that are subscribed to from the server.

Example
it.netty().mqttSubscribe {
    connect {
        host = "localhost"
        port = 11883
    }
    clientName("mqtt-client-1")
    topicFilter(ServerConfiguration.TOPIC_NAME)
}
Notable parameters
  • connect (required): configures the host and port for the connection.

  • clientName (required): uniquely identifies the client on the MQTT server.

  • topicFilter (required): MQTT topics or patterns to consume.

TCP step

The TCP step within the Netty plugin is a client for TCP.

Ancestor

Scenario and Step

Functionality

The TCP step establishes a connection with the server using either a pool of connections or an individual connection for each minion.

If an individual connection is used, the connection is maintained until the latest execution for the minion is performed, including the ones from the related TCP WITH step, as in the example.

When the step is kept open without limit, a CLOSE TCP step should be executed to properly close the connections and avoid a saturation of the system.

Be aware that the number of open connections that can be concurrently maintained depends on the configuration of your operating system. If there are issues with maintaining open connections, adapting the limits for file descriptors may help.

Example
netty().tcp {
    request { context, input -> input.encodeToByteArray() }
    connect {
        address("remote-server.acme.com", 9000)
        connectTimeout = Duration.ofMinutes(1)
        noDelay = false
        keepConnectionAlive = false
        tls {
            disableCertificateVerification = true
        }
        proxy {
            type = HttpProxyType.SOCKS5
            address("my-proxy", 9876)
        }
    }
    pool {
        size = 1000
        checkHealthBeforeUse = true
    }
}
Notable parameters
  • request: a factory to create the request to exchange, using the step context and the input as arguments; the request is an array of bytes.

  • connect: configures the connection to the remote endpoint.

    • address: hostname or IP address and port of the remote server.

    • connectTimeout: limit duration to fully establish the connection; defaults to 10 seconds.

    • readTimeout: limit duration to receive and read a response after a request; defaults to 10 seconds.

    • shutdownTimeout: limit duration to close a connection; defaults to 10 seconds.

    • sendBufferSize: the size of the buffer used to prepare the data to send; defaults to 1024 bytes.

    • receiveBufferSize: the size of the buffer to receive the data of the response; defaults to 1024 bytes.

    • nettyChannelOptions: overwrites the options of the Netty channel; use with care.

    • noDelay: disables the Nagle Algorithm; defaults to true.

    • tls: configuration for SSL/TLS.

      • disableCertificateVerification: set to true if the certificate on the server should not be verified (for example, for a self-signed certificate).

      • protocols: supported protocols; defaults to the Netty default.

      • ciphers: supported cipher algorithms; defaults to the Netty default.

    • proxy: configuration to exchange data via a proxy; defaults to no configuration.

      • address: hostname or IP address and port of the proxy server.

      • type: type of the proxy to use: SOCKS4, SOCKS5; defaults to SOCKS4.

      • authenticate: username and optional password to authenticate to the proxy, if the proxy supports and requires it.

  • pool: configuration of the pool of connections; defaults to disabled.

    • size: number of constant connections maintained in the pool.

    • checkHealthBeforeUse: verifies the health of the connection when polled from the pool and before it is used.

Reference Documentation
  • Refer to the Netty documentation for further configuration and parameter information.

TCP WITH step

The TCP WITH step within the Netty plugin sends a TCP payload, reusing the connection created by a precedent step.

Ancestor

Step

Functionality
  • The TCP WITH step is a direct or indirect follower of a TCP step. TCP WITH specifies the name of the step that owns the connection, and reuses a connection from that step as it was created (either from a pool or the connection attached to the minion).

  • TCP WITH cannot configure the connection differently than the TCP step it relates to.

Example
netty().tcpWith("name-of-a-previous-TCP-step") {
    request { context, input -> input.encodeToByteArray() }
}
Notable parameters
  • request: a factory to create the request to exchange, using the step context and the input as arguments.

Reference Documentation
  • Refer to the Netty documentation for further configuration and parameter information.

CLOSE TCP step

The CLOSE TCP step within the Netty plugin closes the TCP connection, attached to a minion and created by a direct or indirect ancestor step, when the tail of the minion workflow is received.

It is best practice to use a CLOSE TCP step when reusing a TCP connection across several steps without a pool of connections.

Ancestor

Step

Functionality

The CLOSE TCP step is a direct or indirect follower of a TCP step which keeps the connection open. CLOSE TCP simply forwards the input it receives to the next step(s).

Example
netty().closeTcp("name-of-a-previous-TCP-step")

Notable parameter
  • closeTcp (required): specifies the name of a TCP step owning the TCP connection to close.

Reference Documentation
  • Refer to the Netty documentation for further configuration and parameter information.

UDP step

The UDP step within the Netty plugin is a client for UDP.


Ancestors

Scenario and Step

Functionality

The UDP step creates a UDP datagram packet from the request, sends it to the remote endpoint, and expects a response.

Example
netty().udp {
    request { context, input -> input.encodeToByteArray() }
    connect {
        address("remote-server.acme.com", 9000)
        connectTimeout = Duration.ofMinutes(1)
    }
    pool {
        size = 143
        checkHealthBeforeUse = true
    }
}
Notable parameters
  • request: a factory to create the request to exchange, using the step context and the input as arguments; the request is an array of bytes.

  • connect: configures the connection to the remote endpoint.

    • address: hostname or IP address and port of the remote endpoint.

    • connectTimeout: limit duration to fully establish the connection; defaults to 10 seconds.

    • readTimeout: limit duration to receive a response and read it totally after a request; defaults to 10 seconds.

    • shutdownTimeout: limit duration to close a connection; defaults to 10 seconds.

    • sendBufferSize: the size of the buffer used to prepare the data to send; defaults to 1024 bytes.

    • receiveBufferSize: the size of the buffer to receive the data of the response; defaults to 1024 bytes.

    • nettyChannelOptions: overwrites the options of the Netty channel; use with care.

Reference Documentation
  • Refer to the Netty documentation for further configuration and parameter information.

TimescaleDB plugin

Overview

The TimescaleDB plugin connects QALIPSIS to TimescaleDB.

Technology addressed
Dependency

io.qalipsis:plugin-timescaledb

Namespace in scenario

timescaledb()

Client library

https://docs.timescale.com/keywords/Toolkit/

Configuration of the publication of events and meters to TimescaleDB

Configuration namespace

Events

events.export.timescaledb

Meters

meters.export.timescaledb

Example configuration file

events:
  export:
    timescaledb:
      enabled: true
      min-level: INFO
      host: localhost
      port: 5432
      database: qalipsis
      schema: events
      linger-period: 10s
      batch-size: 20000
      publishers: 1
      username: qalipsis_user
      password: qalipsis-pwd

meters:
  export:
    timescaledb:
      enabled: true
      host: localhost
      port: 5432
      database: qalipsis
      schema: meters
      username: qalipsis_user
      password: qalipsis-pwd
      step: PT10S
Event Parameters
  • enabled (required): boolean flag that activates/deactivates event publishing to TimescaleDB; defaults to false; must be set to true.

  • min-level (required): minimum accepted level of events; defaults to INFO; allowed values are the EventLevel values TRACE, DEBUG, INFO, WARN, ERROR and OFF; cannot be null.

  • host (required): defaults to localhost; cannot be blank.

  • port (required): positive integer; defaults to 5432.

  • database (required): string name of the database; defaults to qalipsis; cannot be blank.

  • schema (required): string name of the schema; defaults to events; cannot be blank.

  • linger-period (required): maximum period between publication of events to TimescaleDB; positive duration, defaults to 10s.

  • batch-size (required): maximum number of events buffered between two publications of events to TimescaleDB; positive integer; defaults to 2000.

  • publishers (required): number of concurrent publication of events that can run; positive integer; defaults to 1 (no concurrency).

  • username (optional): name of the user for basic authentication when connecting to TimescaleDB; defaults to qalipsis_user.

  • password (optional): password of the user for basic authentication when connecting to TimescaleDB; defaults to qalipsis_pwd.

    By default, TimescaleDB does not require a username and password for authentication. However, it is recommended that the information be provided to ensure secure access to the TimescaleDB data.

Meter Parameters
  • enabled (required): boolean flag that activates/deactivates meter publishing to TimescaleDB; defaults to false; must be set to true.

  • host (required): defaults to localhost; cannot be blank.

  • port (required): positive integer; defaults to 5432.

  • database (required): string name of the database; defaults to qalipsis; cannot be blank.

  • schema (required): string name of the schema; defaults to events; cannot be blank.

  • step (required): the step size (reporting frequency); defaults to PT10S.

  • username (optional): name of the user for basic authentication when connecting to TimescaleDB; defaults to qalipsis_user.

  • password (optional): password of the user for basic authentication when connecting to TimescaleDB; defaults to qalipsis_pwd.

    By default, TimescaleDB does not require a username and password for authentication. However, it is recommended that the information be provided to ensure secure access to the TimescaleDB data.

R2DBC-jasync plugin

Overview

The R2DBC-jasync plugin connects QALIPSIS to MariaDB, MySQL and PostgreSQL.

Technologies addressed

MariaDB: https://mariadb.org/

MySQL: https://www.mysql.com/

PostgreSQL: https://www.postgresql.org/

Dependency

io.qalipsis.plugin:qalipsis-plugin-r2dbc-jasync

Namespace in scenario

r2dbcJasync()

Client library

Jasync-sql: refer to jasync-sql

Jasync-sql is an asynchronous database driver for PostgreSQL and MySQL written in Kotlin.

Supported steps
  • poll: polls the newest data periodically.

  • save: saves a batch of records generated from the input and step context.

  • search: searches records and builds a query from the input and step context.

Example scenarios using a combination of R2DBC-jasync supported steps are provided on GitHub.

Poll step

The poll step within the R2DBC-jasync plugin polls the newest records from the database, respecting a delay between each execution of a select query.

Ancestor

Scenario

Functionality

The poll step is created only once in a scenario; it is then used throughout the scenario as called. The poll step within the R2DBC-jasync plugin uses an “at-least-once” delivery strategy.

Example
r2dbcJasync().poll {
    protocol(Protocol.POSTGRESQL)
    connection {
        database = "iot"
        port = 5432
        username = "DavidW"
        password = "Awesome_1978"
    }
    query("select * from battery_state order by \"timestamp\"")
    pollDelay(Duration.ofSeconds(1))
}
Notable parameters
  • connection (required): specifies the connection parameters.

  • query (required): specifies a query respecting the requirements of the target database (MariaDB, MySQL or PostgreSQL).

  • pollDelay (required): specifies the delay between query executions.

  • broadcast (optional): specifies the broadcast parameters for the step.

Tips
  • Use the poll step with a join operator when you need to verify the values saved by your tested system in the database.

  • query requires at least one sorting field to filter the newest results and not fetch the same records repeatedly.
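For instance, the query from the example above can add a tie-breaker column after the timestamp, so that rows sharing the same timestamp keep a stable order between polls (a sketch reusing the column names of the earlier example):

```kotlin
query("select * from battery_state order by \"timestamp\", device_id")
```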

Reference documentation

Refer to Configuring and Managing Connections for further parameter and configuration information.

Save step

The save step within the R2DBC-jasync plugin persists a batch of records into an SQL table using a prepared statement.

Ancestor

Step

Functionality

The save step’s input and step context are used to generate the table and the list of columns and their values, which are then combined and forwarded to the database.

Example
.r2dbcJasync()
.save {
    protocol(Protocol.POSTGRESQL)
    connection {
        database = "iot"
        port = 5432
        username = "DavidW"
        password = "Awesome_1978"
    }
    tableName { _, _ ->
        "battery_state"
    }
    columns { _, _ ->
        listOf(
            "device_id",
            "timestamp",
            "battery_level"
        )
    }
    values { _, input ->
        listOf(
            JasyncSaveRecord(
                input.deviceId,
                input.timestamp.epochSecond,
                input.batteryLevel
            )
        )
    }
}
Notable parameter
  • connection (required): specifies the connection parameters.

Tips
  • Use the save step after a collect() step in order to reduce the number of queries executed against the database and the load your scenario generates on the test system.

  • The save step is extremely flexible; table, columns and values might depend on the context and input value received from the previous step.

Reference documentation

Refer to Configuring and Managing Connections for further parameter and configuration information.

Search step

The search step within the R2DBC-jasync plugin searches records in the database using a search query.

Ancestor

Step

Functionality

The search step’s input and step context are used to generate a prepared statement and the values to complete it; the batch of results is then forwarded to the next step(s).

Example
.r2dbcJasync()
.search {
    protocol(Protocol.MARIADB)
    connection {
        database = "iot"
        port = 3306
        username = "DavidW"
        password = "Awesome_1978"
    }
    query { _, _ ->
        val request = "SELECT DISTINCT * from battery_state where timestamp = ? AND device_id = ?"
        request
    }
    parameters { _, input ->
        listOf(input.timestamp.epochSecond, input.deviceId)
    }
}
Notable parameters
  • connection (required): specifies the connection parameters.

  • query: generates a prepared statement from the step’s context and input.

  • parameters (optional): binds values to the clauses of the prepared query.

Tip

Use the search step after a collect step in order to reduce the number of queries executed against the database and the load your scenario generates on the test system.

Reference documentation

Refer to Configuring and Managing Connections for further parameter and configuration information.

RabbitMQ plugins

Overview

The RabbitMQ plugin connects QALIPSIS to the RabbitMQ broker.

Technology addressed

RabbitMQ: https://www.rabbitmq.com/

Dependency

io.qalipsis.plugin:qalipsis-plugin-rabbitmq

Namespace in scenario

rabbitmq()

Client library

RabbitMQ: refer to rabbitMQ

Supported steps
  • consume: polls data from the RabbitMQ broker.

  • produce: pushes data to the RabbitMQ broker.

An example scenario that uses a combination of the rabbitmq consume and produce steps is provided on GitHub.

Consume step

The consume step within the RabbitMQ plugin consumes data from queues of the RabbitMQ broker and forwards the data to the next step.

This step is generally used in conjunction with join to assert data or inject data into a workflow.

This step supports multiple concurrent consumers through the concurrency() property. When the concurrency is set to more than 1, the order of the records transmitted to the next step is no longer guaranteed.

To learn more, visit the RabbitMQ website.

Ancestor

Scenario

Functionality

The consume step starts a polling loop for consumption of the designated queue. As new records appear, consume polls them and sends them to the next step.

Example
rabbitmq()
  .consume {
    name = "consume"
    connection {
      host = "localhost"
      port = 5672
      username = "Celia45"
      password = "Columbus1963"
    }
    queue(queueName = "battery_state")
}
Notable parameters
  • connection (required): configures the connection to the RabbitMQ broker; defaults to localhost:5672

  • queue (required): defines the name of the queue to consume.

  • concurrency (optional): defines the number of concurrent channels consuming messages from RabbitMQ; defaults to 1.

  • prefetchCount (optional): defines the prefetch count value; defaults to 10.

  • broadcast (optional): specifies the broadcast parameters for the step.
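A hedged sketch showing the optional parameters added to the consume block of the example above (the values are illustrative, and the exact setter syntax may differ):

```kotlin
rabbitmq()
  .consume {
    connection {
      host = "localhost"
      port = 5672
    }
    queue(queueName = "battery_state")
    concurrency(3)      // three concurrent channels; record order is no longer guaranteed
    prefetchCount(50)   // up to 50 unacknowledged messages fetched ahead per consumer
  }
```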

Tip

Use the consume step with join to assert data or inject data into a workflow.

Reference documentation
  • Learn more about prefetchCount by reading the RabbitMQ Prefetch documentation.

  • To configure the consume step, consult the RabbitMQ documentation.

Produce step

The produce step within the RabbitMQ plugin produces messages to the RabbitMQ broker; the messages are then forwarded as input to the next step.

Ancestor

Scenario

Functionality

The produce step’s input and step context are used to generate the values to complete the step; results are then forwarded to the next step(s).

Example
.rabbitmq()
.produce {
  connection {
    host = "localhost"
    port = 5672
    username = "Celia45"
    password = "Columbus1963"
  }
  records { _, input ->
    listOf(
      RabbitMqProducerRecord(
        exchange = "battery_state",
        routingKey = "battery_state",
        props = null,
        value = objectMapper.writeValueAsBytes(input)
      )
    )
  }
}
Notable parameters
  • concurrency (optional): defines the number of concurrent channels producing messages to RabbitMQ; defaults to 1.

  • connection: configures the connection to the RabbitMQ broker; defaults to localhost:5672.

  • records: configures and generates a list of produced records.

Reference documentation
  • For more information on configuring the connection to RabbitMQ, refer to Connecting to RabbitMQ.

  • For more information on configuring the produce step, refer to the RabbitMQ documentation.

Redis-lettuce plugin

Overview

The Redis-lettuce plugin connects QALIPSIS to Redis.

Technology addressed

Redis: https://redis.io/

Dependency

io.qalipsis.plugin:qalipsis-plugin-redis-lettuce

Namespace in scenario

redisLettuce()

Client library

Lettuce: refer to Lettuce.

Supported steps
  • poll: polls the newest data periodically.

  • save: saves a batch of records generated from the input and step context.

  • consume: consumes data from Redis streams.

  • produce: pushes messages to Redis streams.

Example scenarios using a combination of Redis supported steps are provided on GitHub.

Poll step

The poll step within the Redis-lettuce plugin polls the newest records from a Redis key, with a delay between each execution of polling.

Ancestor

Scenario

Functionality

The poll step is created only once in a scenario; it is then used throughout the scenario as called.

  • The scan command used for the poll is one of the following: SCAN, SSCAN, HSCAN or ZSCAN.

  • The native Redis-lettuce command is called using the KeyOrPattern provided.

Example
it.redisLettuce()
  .pollSscan {
    connection {
      nodes = listOf("localhost:6379")
      database = 0
      redisConnectionType = RedisConnectionType.SINGLE
      authUser = "Celia45"
      authPassword = "Columbus1963"
    }
    keyOrPattern("battery_state_save_and_poll")
    pollDelay(Duration.ofSeconds(1))
  }
Notable parameters
  • connection (required): specifies the connection parameters.

  • redisConnectionType (required): indicates the connection type as SINGLE, CLUSTER or SENTINEL.

  • keyOrPattern (required): specifies the key or pattern that will be polled after each pollDelay.

  • pollDelay (required): specifies the delay between query executions.

  • broadcast (optional): specifies the broadcast parameters for the step.

Tips
  • Use the poll step with a join operator when you need to verify the values saved by your tested system in the database.

  • In addition to SSCAN (pollSscan in the example), the poll step also supports the following Redis commands: HSCAN (pollHscan), SCAN (pollScan), and ZSCAN (pollZscan).
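The other variants follow the same pattern; a hedged sketch of the SCAN variant, assuming it mirrors the pollSscan example above (the key pattern is illustrative):

```kotlin
it.redisLettuce()
  .pollScan {
    connection {
      nodes = listOf("localhost:6379")
      database = 0
      redisConnectionType = RedisConnectionType.SINGLE
    }
    keyOrPattern("battery_state_*")   // SCAN iterates over keys matching the pattern
    pollDelay(Duration.ofSeconds(1))
  }
```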

Reference documentation

Save step

The save step within the Redis-lettuce plugin persists a batch of records into Redis.

Ancestor

Step

Functionality

The save step’s input and step context are used to generate a valid Redis command and the values to complete it; the batch of results is then forwarded to the next step(s). The records are saved individually.

Example
.redisLettuce()
.save {
  connection {
    nodes = listOf("localhost:6379")
    database = 0
    redisConnectionType = RedisConnectionType.SINGLE
    authUser = "Celia45"
    authPassword = "Columbus1963"
  }

  records { _, input ->
    listOf(
      SetRecord("battery_state_save_and_poll",
        objectMapper.writeValueAsString(input)
      )
    )
  }
}
Notable parameters
  • connection (required): specifies the connection parameters.

  • redisConnectionType (required): indicates the connection type as SINGLE, CLUSTER or SENTINEL.

  • records (required): generates and configures a list of saved records.

Tips
  • The save step supports the following Redis commands: SET, SADD, HSET, ZADD. The provided command may depend on the context and input value received from the previous step.

  • If the redisConnectionType is set to SENTINEL, the parameter masterId is necessary.

Reference documentation

Consume step

The consume step within the Redis-lettuce plugin consumes data from Redis streams.

Ancestor

Scenario

Functionality

The consume step creates a consumer group in a Redis stream with the provided connection, consumer group name, and concurrency. If concurrency is set to 1, records are transmitted sequentially to the next step.

Example
redisLettuce()
.streamsConsume {
  connection {
    nodes = listOf("localhost:6379")
    database = 0
    redisConnectionType = RedisConnectionType.SINGLE
    authUser = "Celia45"
    authPassword = "Columbus1963"
  }
  streamKey("battery_state_produce_and_consume")
  offset(LettuceStreamsConsumerOffset.FROM_BEGINNING)
  group("consumer")
}
Notable parameters
  • connection (required): configures the connection to Redis.

  • redisConnectionType (required): indicates the connection type as SINGLE, CLUSTER or SENTINEL. The default is SINGLE.

  • offset (required): sets the offset value to FROM_BEGINNING, LAST_CONSUMED or LATEST for each stream and consumer group.

  • broadcast (optional): specifies the broadcast parameters for the step.

Tip

If redisConnectionType is set to SENTINEL, the parameter masterId is necessary.
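A hedged sketch of a Sentinel connection block, based on the parameters named in this section (the node address and master name are illustrative; 26379 is the conventional Sentinel port):

```kotlin
connection {
    nodes = listOf("localhost:26379")                  // address of a Sentinel node
    database = 0
    redisConnectionType = RedisConnectionType.SENTINEL
    masterId = "mymaster"                              // required when the type is SENTINEL
}
```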

Reference documentation

Produce step

The produce step within the Redis-lettuce plugin produces records into Redis streams.

Ancestor

Step

Example
.redisLettuce()
.streamsProduce {
  connection {
    nodes = listOf("localhost:6379")
    database = 0
    redisConnectionType = RedisConnectionType.SINGLE
    authUser = "Celia45"
    authPassword = "Columbus1963"
  }

  records { _, input ->
    listOf(
      LettuceStreamsProduceRecord(
        key = "battery_state_produce_and_consume", value = mapOf(
        "device_id" to input.deviceId,
        "timestamp" to input.timestamp.epochSecond.toString(),
        "battery_level" to input.batteryLevel.toString()
        )
      )
    )
  }
}
Notable parameters
  • redisConnectionType (required): sets the connection type as SINGLE, CLUSTER or SENTINEL.

  • records (required): generates and configures a list of produced records.

Tip
  • If redisConnectionType is set to SENTINEL, the parameter masterId is necessary.

Reference documentation

Slack plugin

Overview

QALIPSIS supports Slack to notify users of the status of their campaign after it runs. The process of enabling Slack notifications comprises two stages:

  • Configuring and activating the QALIPSIS publisher for Slack.

  • Creating and installing an app into a Slack workspace.

Configuring and activating the QALIPSIS publisher for Slack

To activate the QALIPSIS publisher for Slack, create a configuration file in your QALIPSIS project.

Configuration namespace

report.export.slack

Example configuration file
report:
  export:
    slack:
      enabled: true
      token: xoxb-my-awesome-bot-token
      channel: my-qalipsis-campaign-report-channel
      status:
        - SUCCESSFUL
        - FAILED
        - WARNING
Parameters
  • enabled (required): boolean flag that activates/deactivates campaign report publishing to Slack; defaults to false; must be set to true.

  • token (required): the bot token of the app in charge of posting messages to the specified workspace channel, usually beginning with xoxb.

  • channel (required): a valid Slack channel name within a workspace indicating where notification messages should be pushed.

  • status (required): specifies which campaign report statuses should trigger a notification.

    Allowed values are any set of ReportExecutionStatus values: ALL, FAILED, SUCCESSFUL, WARNING, ABORTED; defaults to ALL.

    • Selecting ALL triggers a notification for any campaign report status.

    • Selecting two or more status values triggers a notification when the campaign report status is the same as any of the selected status values.

    • Selecting just one status value triggers a notification only when the campaign report status is the same as the selected status value.

Reference Documentation

Refer to the QALIPSIS documentation for information about adding configuration properties that set or override the default parameters.

Creating and installing an app into the Slack workspace

Creating an app

You will need to create an app on the Slack API website.

  1. Navigate to the Slack API website.

  2. Click the Create an App button.

  3. Select From an app manifest.

  4. Select the development workspace where you will build your app and click on the Next button.

  5. Replace the content of the JSON manifest with the JSON object provided below.

    You can edit information in the display_information and features blocks to your preference, but avoid making changes in the oauth_config block.

  6. Review the app summary.

  7. Click the Create App button.

JSON object
{
  "display_information": {
    "name": "QALIPSIS-REPORT-NOTIFIER",
    "description": "QALIPSIS notifications",
    "background_color": "#32373d",
    "long_description": "Receives the notifications from QALIPSIS via Slack, including execution reports. Install the App in Slack, configure the access in QALIPSIS and be notified in realtime when a campaign ends."
  },
  "features": {
    "bot_user": {
      "display_name": "QALIPSIS-REPORT-NOTIFIER",
      "always_online": false
    }
  },
  "oauth_config": {
    "scopes": {
      "bot": [
        "chat:write",
        "chat:write.public",
        "im:read",
        "channels:history",
        "groups:history",
        "mpim:history",
        "im:history",
        "incoming-webhook"
      ]
    }
  },
  "settings": {
    "org_deploy_enabled": false,
    "socket_mode_enabled": false,
    "token_rotation_enabled": false
  }
}

Installing the app into your workspace

  1. From the left side panel of the Slack API website, click on Basic information.

  2. From the Basic information menu, click on Install to Workspace.

  3. Select or create a channel where the app you created above (QALIPSIS-REPORT-NOTIFIER) should post notifications.

  4. Click Allow to automatically install the app to your workspace.

    • Your selected Slack channel is now set to receive message notifications from QALIPSIS.

    • A QALIPSIS bot token will display under Apps in the left sidebar of your Slack workspace.

Reference Documentation

For further information, refer to the official Slack documentation.

Viewing/Updating the app configuration

To view/update the app configuration, right click on the QALIPSIS bot token (displayed under Apps in the left sidebar of your Slack workspace) and select View App Details.

From the displayed Tenant Modal window, click Configuration. The QALIPSIS App Manager page displays.

  • Any user can view the description and Authorizations of the app.

  • Authorized users can view and update the app configuration.

Mail plugin

Overview

QALIPSIS supports email messaging to notify users of the status of their campaigns after they run. This is achieved by configuring and enabling the QALIPSIS publisher for mail.

Configuring and activating the QALIPSIS publisher for mail

To activate the QALIPSIS publisher for mail, create a configuration file in your QALIPSIS project.

Configuration namespace

report.export.mail

Example configuration file
report:
  export:
    mail:
      enabled: true
      username: qalipsis-user
      password: passpass
      status: (1)
        - SUCCESSFUL
        - FAILED
        - WARNING
      authenticationMode: USERNAME_PASSWORD (2)
      from: foo@bar.com
      to:
        - fooadmin1@bar.com
        - fooadmin2@bar.com
      junit: false  (3)
      ssl: true
      starttls: true
      host: smtp.gmail.com
      port: 587
1 Mail notifications are triggered only when the status of a campaign report is FAILED, SUCCESSFUL, or WARNING. All other campaign statuses are ignored.
2 Allows authentication requests for a network connection.
3 JUnit report attachments are not included in the email.
Parameters
  • enabled (required): boolean flag that activates/deactivates campaign report publishing to mail; defaults to false; must be set to true to enable notifications.

  • username (required): username to be used in authenticating network connection requests when password authentication is required (when the AuthenticationMode is set to USERNAME_PASSWORD).

  • password (required): password to be used in authenticating network connection requests when password authentication is required (when the AuthenticationMode is set to USERNAME_PASSWORD).

  • status (required): specifies which campaign report status(es) should trigger a notification.

    Allowed values are any set of ReportExecutionStatus values: ALL, FAILED, SUCCESSFUL, WARNING, ABORTED; defaults to ALL.

    • Selecting ALL triggers a notification for any campaign report status.

    • Selecting two or more status values triggers a notification when the campaign report status is the same as any of the selected status values.

    • Selecting just one status value triggers a notification only when the campaign report status is the same as the selected status value.

  • authenticationMode (required): authentication options for SMTP server; allowed values are any set of AuthenticationMode values: USERNAME_PASSWORD, PLAIN; defaults to PLAIN.

  • from (required): email address of the sender; defaults to no-reply@qalipsis.io.

  • to (required): list of valid email addresses that are recipients of the email.

  • cc (optional): list of valid email addresses to be copied in the email; defaults to an empty list.

  • junit (required): boolean flag that activates/deactivates inclusion of a JUnit report as a zipped attachment, if a report exists; defaults to false.

  • ssl (required): boolean flag that enables/disables the use of SSL to connect; defaults to false.

  • starttls (required): boolean flag to enable/disable use of the STARTTLS command (if supported by the server); defaults to false.

  • host (required): the host name of the mail server to connect to; defaults to localhost.

  • port (required): the port number of the mail server to connect to; defaults to 25.

Reference Documentation

Refer to the QALIPSIS documentation for information about adding configuration properties that set or override the default parameters.

Gradle Plugin

Overview

More information will be provided soon on how the Gradle plugin extends the capabilities of QALIPSIS and makes the programming of scenarios faster and the integrations into DevOps easier.

QALIPSIS GUI

Overview

The QALIPSIS GUI allows you to create, configure, run, monitor and view results of test campaigns from your browser. You can configure campaigns and run them immediately, at a future time, or on a recurring schedule.

  • You can view in-process campaign results in real time.

  • You can view finalized reports of a previous campaign’s data on static reports and on an interactive time-series chart.

The following topics are discussed in this section:

Logging in to QALIPSIS

  1. Navigate to the QALIPSIS Welcome page according to the installation details provided by your administrator.

    QALIPSIS Cloud is available at https://app.qalipsis.io.

  2. Enter your Username and Password.

    If your user profile is associated with several organizations, select the tenant to use from the drop-down box.

  3. Click Submit to log into QALIPSIS.

    Notes
    • If a Tenant selection message is not displayed, you will be logged in to QALIPSIS with the tenant you were using in your most recent login.

    • You can select a different tenant at any time while you are logged into QALIPSIS, using the bottom-left panel.

Navigating QALIPSIS

When you log into QALIPSIS, the Home page of the GUI and a left sidebar are displayed.

Left Sidebar

Change to a different tenant

  1. Click on your name near the bottom of the sidebar to open the Tenant Modal window.

  2. Select an available tenant to use for the current session.

View/Update Your profile

  1. Click on your name at the bottom of the left sidebar to open the Tenant Modal window.

  2. Click on the settings icon.

  3. Click on My profile to open the My Profile page.

  4. Make changes to your profile and click on Save changes.

Change your password

  1. Click on your name at the bottom of the left sidebar to open the Tenant Modal window.

  2. Click on the settings icon.

  3. Click on My profile to open the My Profile page.

  4. Click on Change password.

  5. Enter your new password in the New Password and Confirm Password fields.

  6. Click on Save changes to save your new password.

Logout

  1. Click on your name at the bottom of the left sidebar to open the Tenant Modal window.

  2. Click on the settings icon.

  3. Click the Logout button. You will be logged out and returned to the Welcome screen.

Access campaign information and functions

The Campaign page is accessible from the left side bar. From the Campaign page, you can view current or past campaign data and create a new campaign.

Further details are provided later in this documentation.

Access Series information and functions

Series are data sets that are preconfigured to monitor and display relevant campaign data. The Series screen is accessible from the left sidebar. The Series screen allows the user to view current series information, update or duplicate a series, and create a new series.

Further details are provided later in this documentation.

Working with series data and filters

Series are data sets that are preconfigured to monitor and display relevant campaign data.

  • The logged events and meters can be graphically displayed.

  • You can create a series if your profile has write permissions.

  • You can update a series if one of the following is true.

    • You are the owner

    • The owner has shared it in write mode and you have write permissions.

Creating a series

From the Tenant sidebar:

  1. Click on Series to open the Series page.

    The Series page displays a table listing currently defined series.

  2. Click the +New series button near the top right corner of the page.

    The New series modal window opens.

  3. Enter the required data into the appropriate fields.

    Note that only numeric fields can be used with an operation. For the other types, only the count of provided values can be configured.

  4. Click the Create button.

    The following text is displayed in the modal window: '[Series Name] has just been created!'.

  5. Close the New series modal window to return to the Series page.

    The newly created series is displayed in the table.

Updating a series

  1. Click on Series in the left sidebar menu.

    The Series page opens and displays a table listing currently defined series.

  2. Click on the name of the series you want to update.

    The Update series modal window opens.

  3. Enter the required data into the appropriate fields.

  4. Click the Update button.

  5. Close the Update series modal window to return to the Series page.

Configuring and running a campaign

  1. Click the +Campaign button near the top right of the Campaign page.

    The New Campaign window will open displaying the campaign name, a settings icon, a search field, and a table listing scenarios that are available to run in the campaign.

  2. Optionally, rename the campaign by hovering over the New Campaign title and selecting the pen icon to edit the name.

  3. Select the checkbox to the left of each scenario that you want to include in your campaign.

    Each scenario has a default configuration.

  4. To configure a scenario to better suit your specific requirements, click the Configure button to the right of the scenario.

  5. Update the fields on the Scenario Configuration window.

  6. Click Apply and close the window.

    Scenario changes are saved for the current campaign and do not affect the scenario’s default settings.

  7. Click the Run immediately button to start the campaign.

    Campaign data is displayed in real-time in the Report panel and on the time-series chart.

    • Minion count is displayed by default.

    • Select one or more series cards to display the generated data for that series.

Displaying campaign data

  1. Click on Campaign in the left sidebar.

    A table listing all existing and planned campaigns is displayed.

    • Click on any column title to sort the table by ascending/descending order.

    • Search for a specific campaign by clicking on the magnifying glass icon near the top right of the screen and entering the name (or partial name) of the campaign.

  2. Click on the name of a campaign to display the Campaign details page.

    • A Report panel is displayed at the top of the screen.

      The top line in the Report panel displays information for the whole campaign; the following lines display information for each scenario within the campaign.

      Click on the Campaign Summary dropdown filter to select which scenario(s) information to display in the report section.

Glossary


Archive

A self-executing package containing QALIPSIS components for installation and execution.

Assertion

A step within a scenario that compares and validates data and timing information from multiple sources to ensure that expected results are met.

Campaign

A single execution of one or a collection of scenarios, starting at a set time (or immediately), and continuing until completion, a fatal error occurs, or a stop is manually initiated.

Cluster

A group of software nodes running on interconnected machines, which may or may not be in different locations.

Configuration

Runtime settings to customize the execution of a campaign.

Data distribution

Strategy to distribute the data received by a data source step (such as polling) to the different minions passing through the step. QALIPSIS data distribution strategies include:

  • Loop

  • Unicast/forward once

  • Broadcast
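As a rough illustration of how the three strategies differ in mapping polled records to minions, here is a sketch in plain Kotlin. The function names are invented for this example and are not part of the QALIPSIS API:

```kotlin
// Broadcast: every minion receives every record.
fun <T> broadcast(records: List<T>, minionIds: List<String>): Map<String, List<T>> =
    minionIds.associateWith { records }

// Unicast/forward once: each record goes to exactly one minion (round-robin here).
fun <T> unicast(records: List<T>, minionIds: List<String>): Map<String, List<T>> =
    records.withIndex()
        .groupBy({ minionIds[it.index % minionIds.size] }, { it.value })

// Loop: the record set is replayed from the start once exhausted,
// so every minion always receives a value.
fun <T> loop(records: List<T>, minionIds: List<String>): Map<String, T> =
    minionIds.mapIndexed { i, id -> id to records[i % records.size] }.toMap()
```

With broadcast, two minions each see the full record set; with unicast, three records are split between them; with loop, a single record is reused for as many minions as needed.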

Dependency Injection

A technique in which an object receives the other objects it depends on, called dependencies. Dependency injection involves four objects, each with a different role:

  • The service object that is injected into a client.

  • The client object that receives the injected service.

  • The interface object that defines how the client uses the service.

  • The injector object that constructs the services and injects them into the client.
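The four roles can be shown in plain Kotlin, without any framework (the names below are chosen only for this illustration):

```kotlin
// The interface: defines how the client uses the service.
interface Greeter {
    fun greet(name: String): String
}

// The service: a concrete implementation that gets injected.
class EnglishGreeter : Greeter {
    override fun greet(name: String) = "Hello, $name"
}

// The client: receives its dependency through the constructor
// instead of constructing it itself.
class Welcome(private val greeter: Greeter) {
    fun welcome(name: String) = greeter.greet(name) + "!"
}

// The injector: constructs the service and wires it into the client.
fun injector(): Welcome = Welcome(EnglishGreeter())
```

Because `Welcome` depends only on the `Greeter` interface, the injector can swap in a different implementation (for example a test stub) without changing the client.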

Directed Acyclic Graph (DAG)

A subpart of a scenario made up of a single continuous sequence of steps, to be executed in the same location by the same minions. The first DAG in a scenario is typically used to initiate the scenario and set up the environment for the other DAGs to execute.

Directive

A single instruction published by a head to trigger an action in a factory.

Event

A time-stamped snapshot within a step. There are several types of events that can be monitored, including the time to respond, the status of the response, the size of the payload to send, etc.

Execution profile

Settings that define the pace at which minions are injected into the system. QALIPSIS out-of-the-box execution profiles include:

  • Timeframe

  • More

  • Faster

  • Regular
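To give an intuition for what a profile controls, the sketch below computes start offsets (in milliseconds) for minions under a constant-pace profile and an accelerating one. The names echo the Regular and Faster profiles above, but the formulas are purely illustrative and not the QALIPSIS implementation:

```kotlin
// Regular: a constant pace, one minion started every periodMs.
fun regularStarts(minions: Int, periodMs: Long): List<Long> =
    (0 until minions).map { it * periodMs }

// Faster: the delay between consecutive starts shrinks by `factor`
// at each step, so the load ramps up more and more quickly.
fun fasterStarts(minions: Int, initialDelayMs: Long, factor: Double): List<Long> {
    var delay = initialDelayMs.toDouble()
    var offset = 0.0
    val starts = mutableListOf<Long>()
    repeat(minions) {
        starts += offset.toLong()
        offset += delay
        delay /= factor
    }
    return starts
}
```

For example, four minions at a 100 ms pace start at 0, 100, 200 and 300 ms, whereas an accelerating profile with a halving delay starts them at 0, 100 and 150 ms.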

Feedback

The factory response to a directive; the feedback provides the status of the directive’s progression and/or the result of the expected action.

Gradle

The build automation tool that QALIPSIS uses to compile code, assemble resources, and package and deploy projects.

Input

The output of the previous step that is used in the current step’s execution.

Installation

The overall strategy for QALIPSIS installation within different environments.

  • Clusters: Deploying QALIPSIS across multiple locations.

  • Amendable: Installing QALIPSIS on existing infrastructures or platforms.

  • Containers: Using Docker (or similar) container technology to package and run QALIPSIS.

Load-injected tree

A branch of a scenario that receives load from all of the minions.

Meter/Step Meter

  • A meter is a value aggregating the performance of a step over a short period of time across all the minions.

  • A step meter is a unique value representing the performance of a unique execution of a step by a minion (e.g., time to response or volume of data saved, etc.)

Minion

A virtual actor simulating a website user, an IoT device, or another system that interacts with the test system to inject load and verify performance.

Monitoring/Time-series data

A set of records generated by the execution of a campaign that permits the analysis of the performance of the target system and the generation of dashboards.

Node

A software entity of a QALIPSIS cluster. Nodes communicate with each other to exchange data.

  • Head: Node of a QALIPSIS cluster that coordinates and maintains the state of the cluster, orchestrating the factories and the campaigns.

  • Factory: Node of a QALIPSIS cluster that executes the scenario to generate load to a target system and verify its behavior.

Operator

A keyword used in a scenario or step to manipulate or perform calculations on applicable data.

Output

Raw data that a step generates, which may be used as input to the following steps to verify behavior or to generate further load.

Plugin

A technical extension designed to extend the core capabilities of QALIPSIS.

Project deployment

A strategy to install and run QALIPSIS on an IT landscape, depending on the requirements for data retention, load and security. QALIPSIS deployment strategies include:

  • Embedded

  • Ephemeral

  • Transient

  • Standalone

  • Cluster

QALIPSIS Project

A set of files that represent QALIPSIS scenarios and the dependencies (libraries) they require.

Report

  • A step report is a consolidated view of the executions of a step.

  • A campaign report is a consolidated view of the step reports, organized by scenarios.

Result

The record returned by a step when executed by a minion. The result contains values (e.g., records fetched from a database or a response to a request) and the step meters of the execution. The result is dispatched as the output of the step context to the followers.

Runner

A component in charge of triggering and orchestrating the flow of data from one step to the next. The runner is responsible for ensuring that output data from each step is provided as input to the next step as appropriate. Each runner task is assigned to the minion that simulates the user or device executing it.

Run-time parameters

Scenario parameters that are set at run time, when the scenario is executed.

Scenario

A sequence of actions to be executed by minions or used to poll data from a remote system.

Specification

The configuration of a step or a scenario that describes the actions to perform and the sequence in which they are to be performed.

Step

The atomic components of each scenario. Steps are the challenges minions must complete in order to continue in the scenario. Each step has a purpose. There are several step types within QALIPSIS:

  • Load steps that inject load or data into a tested system.

  • Error processing steps that contain information for handling a failed assertion or execution.

  • Proxy steps that act as middlemen, using convenient strategies to connect steps and facilitate the flow of data and execution requests between them.

  • Operator steps that act on their data input or output before releasing it to the next step.

  • Assertion steps that validate incoming or outgoing data.
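Conceptually, a minion traverses steps in sequence, each step's output becoming the next step's input. The sketch below models a load, operator and assertion step as plain functions; real QALIPSIS steps are declared through its Kotlin DSL, so this only illustrates the roles listed above:

```kotlin
// A load step: produces the data that will be pushed to the system.
fun loadStep(): List<Int> = listOf(10, 20, 30)

// An operator step: transforms its input before releasing it.
fun operatorStep(input: List<Int>): List<Int> = input.map { it * 2 }

// An assertion step: validates the data, failing the execution otherwise.
fun assertionStep(input: List<Int>): List<Int> {
    require(input.all { it > 0 }) { "Unexpected non-positive value" }
    return input
}

// A minion traverses the steps in sequence.
fun runPipeline(): List<Int> = assertionStep(operatorStep(loadStep()))
```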

Step context

A single execution of a single step by a single minion.

  • Execution: A set of fields and values representing the single execution of a single minion on a single step.

  • Exhausted: After an execution failure, a step context is marked as exhausted, and this state is propagated to its descendants to signal that there will be no further execution with that context.

  • Tail: The last step context in the execution workflow of a single minion.

Target system

An IT system that QALIPSIS injects load into in order to verify its ability to scale and support a high volume of users or data.

Workflow operators

Operators that control the flow of data within a scenario. QALIPSIS workflow operators include:

  • Fan-Out: Splits a branch and adds two or more concurrent steps as successors.

  • Fan-in/joins: Re-joins split branches of a scenario or verifies data from different sources.

  • Error processing steps: Handle errors or unexpected events during execution.
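The data flow produced by fan-out and fan-in can be mirrored with plain functions. In QALIPSIS these are scenario operators, so the sketch below is only an analogy, with names invented for the example:

```kotlin
// Fan-out: the same input feeds two concurrent branches.
fun fanOut(input: Int): Pair<Int, Int> = input to input

// Two branches doing independent work on their copy of the input.
fun branchA(value: Int): Int = value + 1
fun branchB(value: Int): Int = value * 10

// Fan-in/join: re-joins the branches into a single downstream value.
fun fanIn(a: Int, b: Int): Int = a + b

fun runFlow(input: Int): Int {
    val (left, right) = fanOut(input)
    return fanIn(branchA(left), branchB(right))
}
```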

Frequently Asked Questions

To be provided.

Architecture

To be provided.