Can you stress-test distributed infrastructure and still see what is happening inside it?
Engineered for distributed load. Built for operational visibility.

Frequent releases. Distributed systems. Complex integrations. In today’s fast-paced environments, testing can’t slow you down – but it can’t fall short either.
Here’s how QALIPSIS helps DevOps teams deliver reliable, high-performing software at scale.
Can your load generator keep up with the infrastructure it is supposed to stress?
- A single-machine load generator saturates its own network interface before it stresses the system under test. Running from one location also hides zone-level routing and latency differences that only appear when traffic arrives from multiple origins simultaneously — exactly the condition that causes degradation during a global traffic event.
- Deploy factories across the zones you control, assign each a `factory.zone`, and trigger campaigns via the REST API with a per-scenario zone percentage split. The head orchestrates execution and collects results; factories inject load independently. Enable step-level monitoring on the steps that represent your critical service boundaries and export meters and events to your telemetry backend for live analysis during the run.
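As a sketch of what such a trigger could look like, the snippet below builds a campaign payload with a percent-based zone split and validates it before sending. The field names (`name`, `scenarios`, `minionsCount`, `zones`) are illustrative assumptions, not the authoritative schema — consult the QALIPSIS REST API page for the real payload shape.

```python
import json

# Illustrative sketch only: the exact payload schema is defined on the
# QALIPSIS REST API page; the field names below are assumptions.
def build_campaign_payload(name, scenario, minions, zones):
    """Build a campaign-trigger payload with a percent-based zone split."""
    if sum(zones.values()) != 100:
        raise ValueError("zone percentages must sum to 100")
    return {
        "name": name,
        "scenarios": {
            scenario: {
                "minionsCount": minions,
                "zones": zones,  # e.g. {"eu-west": 50, "us-east": 30, "ap-south": 20}
            }
        },
    }

payload = build_campaign_payload(
    "checkout-peak", "checkout-scenario", 10_000,
    {"eu-west": 50, "us-east": 30, "ap-south": 20},
)
print(json.dumps(payload, indent=2))
# A pipeline would POST this body to the head's REST API, e.g.:
#   curl -X POST http://<head-host>:8400/campaigns \
#        -H 'Content-Type: application/json' -d @payload.json
```

Validating the split client-side keeps a misconfigured distribution from silently skewing which zones receive load.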
What was your system doing at minute twelve of the load test?
- Most load tools produce an internal report you read when the run is over. But the questions that matter operationally — at what point did the queue start lagging, which zone saw latency diverge first, did the connection pool saturate before or after the error rate climbed — require time-series data correlated with your infrastructure metrics, in the same tool your team is already watching while the test runs.
- Enable monitoring per step — `monitoring { events = true }`, `monitoring { meters = true }`, or `monitoring { all() }` — only where signal is needed, keeping data volume intentional. Export to Graphite, Elasticsearch, Kafka, TimescaleDB/PostgreSQL, or InfluxDB. Add `factory.tags` to carry environment, team, or project metadata through to every exported data point, so test runs are distinguishable in shared dashboards alongside production signals.
How often does your infrastructure load test actually run?
- Infrastructure load tests that require manual setup and manual interpretation are not tests — they are experiments. They run infrequently, produce results no one has time to compare to the previous run, and get skipped entirely under release pressure. The value comes from running them repeatedly and automatically against a consistent baseline, exactly the way other pipeline stages work.
- Trigger campaigns programmatically via the REST API (`POST /campaigns`) with parameterised payloads — minion counts, zone distributions, execution profiles — and retrieve results via `GET /campaigns/{campaign-key}` to wire them into your pipeline logic. Schedule recurring campaigns (`POST /campaigns/schedule`) with time zone-aware recurrence for nightly or pre-release baseline runs. For Gradle-based pipelines, the Gradle plugin provides `qalipsisRunAllScenarios` or a custom `RunQalipsis` task with JUnit report export alongside.
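The pipeline-gating half of that loop can be sketched as a small poller around `GET /campaigns/{campaign-key}`. The terminal status names (`SUCCESSFUL`, `FAILED`, `ABORTED`) are assumptions for illustration — check the REST API page for the actual response schema. The `fetch_status` callable is injected so the logic can be exercised without a live head node; in a real pipeline it would issue the HTTP GET and extract the status field.

```python
import time

# Sketch of CI gating logic around GET /campaigns/{campaign-key}.
# Status names are illustrative placeholders, not the documented schema.
def wait_for_campaign(campaign_key, fetch_status, timeout_s=3600.0, poll_s=5.0):
    """Poll until the campaign reaches a terminal state; True on success."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = fetch_status(campaign_key)
        if status == "SUCCESSFUL":
            return True
        if status in ("FAILED", "ABORTED"):
            return False
        time.sleep(poll_s)
    raise TimeoutError(f"campaign {campaign_key} still running after {timeout_s}s")

# Simulate a run that completes on the third poll:
states = iter(["IN_PROGRESS", "IN_PROGRESS", "SUCCESSFUL"])
ok = wait_for_campaign("chk-123", lambda key: next(states), poll_s=0)
print("gate:", "pass" if ok else "fail")  # prints "gate: pass"
```

Returning a boolean (rather than raising on failure) lets the pipeline decide whether a failed baseline run blocks the release or merely flags it.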
QALIPSIS: Elevating software quality for DevOps teams
With QALIPSIS, your team can:
- Detect bottlenecks
- Verify database compatibility
- Automate validation in your CI/CD
- Release with confidence
What you need to know
- What QALIPSIS lets you do: Choose a packaging and run mode that matches how you operate infra tests:
  - Embed in a JVM project: embed QALIPSIS as a library and execute scenarios programmatically (standalone deployment only).
  - Run as a Java archive (JAR): execute your assembled `*-qalipsis.jar` with Java.
  - Run from a distribution archive (ZIP): package a ZIP distribution (including scenarios/dependencies) and start via the `bin/launcher` script.
  - Run from a Docker image: package your project as an image and run with `docker run`, or in Kubernetes and OpenShift.
- Independently of packaging, you can start QALIPSIS in standalone mode (single process), or cluster mode where nodes take a Head or Factory role. For cluster communication/synchronization, you can use low-latency messaging platforms such as Redis Pub/Sub and Kafka.
- Docs reference:
  - Deployment Topologies: how to choose between standalone vs cluster, container vs host, and embedding in a JVM project.
    Read more: Deployment topologies
  - Execute QALIPSIS: how to start from JAR, ZIP distribution, or Docker, and how to start head/factory roles.
    Read more: Execute QALIPSIS
  - Core concepts: what head/factory nodes are and how factories can synchronize via Redis Pub/Sub and Kafka.
    Read more: QALIPSIS core concepts
- What QALIPSIS lets you do: Scale load horizontally while keeping a single orchestration and reporting control point. The Head orchestrates execution and reporting; Factory nodes host minions (simulated actors) that run scenarios; adding/removing factories expands capacity.
- Docs reference: The Core Concepts page explains the Head/Factory architecture, what minions are, and how scenarios relate to execution.
Read more: QALIPSIS core concepts
- What QALIPSIS lets you do: When you trigger a campaign via the REST API, you can provide a per-scenario zones distribution (percent-based split) so load is distributed and attributed by zone.
- Docs reference: The REST API page shows the campaign trigger payload, including how to set zones and other per-scenario runtime parameters.
Read more: QALIPSIS REST API
- What QALIPSIS lets you do: Stream monitoring data to your existing observability stack by enabling per-step monitoring (disabled by default, so you control volume) and activating one or more exporters. Supported export targets are:
- Elasticsearch
- Graphite
- InfluxDB
- TimescaleDB/PostgreSQL
- Apache Kafka
You can run any number of exporters simultaneously, from none to all five. Each exporter lets you tune batching, publishing concurrency, and (for events) severity-level filtering. Custom factory tags (e.g. environment, team) are appended to all exported data for richer dashboard filtering.
- Docs reference: The Monitoring Test Campaigns page covers enabling step-level monitoring, exporter activation, event-level filtering, and custom tagging. Each plugin page documents backend-specific configuration.
Read more: Monitoring Test Campaigns
- What QALIPSIS lets you do: Instrument runs as they happen by enabling monitoring per step (disabled by default, so you control data volume). Monitoring produces two primitives:
- Meters: time-bucketed metrics (counters, timers, gauges, rates, throughputs) tagged by campaign/scenario/step/zone.
- Events: log-like records (timestamp, severity, name, values) tagged by campaign/scenario/step/zone.
- Docs reference: The Monitoring Test Campaigns page defines meters/events, how tags are applied, and how to enable monitoring on steps.
Read more: Monitoring the test campaigns
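To make the distinction between the two primitives concrete, here is an illustrative data model — not QALIPSIS source code: a meter aggregates values into time buckets, an event is a single log-like record, and both carry the same campaign/scenario/step/zone tag set.

```python
from dataclasses import dataclass, field

# Illustrative model of the two monitoring primitives described above.
# Bucket size, field names, and aggregation are assumptions for clarity.

@dataclass
class Meter:
    """A time-bucketed metric: aggregates (count, sum) per bucket."""
    name: str
    tags: dict
    bucket_s: int = 10
    buckets: dict = field(default_factory=dict)

    def record(self, value, ts):
        bucket = int(ts // self.bucket_s) * self.bucket_s
        count, total = self.buckets.get(bucket, (0, 0.0))
        self.buckets[bucket] = (count + 1, total + value)

@dataclass
class Event:
    """A log-like record: one timestamp, one severity, one payload."""
    name: str
    severity: str
    tags: dict
    timestamp: float
    values: dict

tags = {"campaign": "chk-123", "scenario": "checkout", "step": "pay", "zone": "eu-west"}
m = Meter("http-response-time", tags)
m.record(42.0, ts=100.0)
m.record(58.0, ts=105.0)   # lands in the same 10 s bucket as the first sample
m.record(61.0, ts=112.0)   # starts the next bucket
ev = Event("http-error", "ERROR", tags, timestamp=112.4, values={"status": 503})
print(m.buckets)  # {100: (2, 100.0), 110: (1, 61.0)}
```

The shared tag set is what makes the exported data joinable: a latency meter and an error event from the same step and zone line up in the same dashboard query.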
- What QALIPSIS lets you do: Export monitoring data to supported targets such as Graphite, Elasticsearch, Kafka, TimescaleDB/PostgreSQL, and InfluxDB. You can also add context via `factory.tags` so exported telemetry carries environment/team/project metadata.
- Docs reference: The Monitoring Test Campaigns page lists supported exporters and shows how configuration and tagging work.
Read more: Monitoring the test campaigns
- What QALIPSIS lets you do: In standalone mode, QALIPSIS provides live reporting while tests run (enabled by default). You can disable it via `report.export.console-live.enabled: false`. The GUI also provides a live-updated report with charts.
- Docs reference: The Reporting page describes the default report and live console reporting options.
Read more: Reporting
- What QALIPSIS lets you do: Use the Head REST API (default port 8400) to run repeatable, parameterized campaigns without manual clicks:
  - Trigger campaigns (`POST /campaigns`) with runtime parameters like minion counts, optional zone splits, and execution profiles (e.g., staged ramps).
  - Schedule campaigns (`POST /campaigns/schedule`) with hourly/daily/monthly recurrence and time zones.
  - Alternatively, the Gradle plugin lets you run scenarios directly as Gradle tasks, with ready-made examples for Jenkins and GitHub Actions.
  An autostart mode stops all nodes after completion for clean CI/CD teardown.
- Docs reference: The Automation, continuous integration and continuous deployment page documents endpoints, default port, request payloads, and scheduling options.
Read more: Automation, continuous integration and continuous deployment
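The time zone-aware part of scheduling is worth spelling out, since "nightly at 02:00" means different UTC instants depending on the zone and the date. The sketch below computes the next local-time occurrence and assembles an illustrative schedule payload; the field names (`recurrence`, `timeZone`, `startAt`) are placeholders — the authoritative shape for `POST /campaigns/schedule` is on the automation docs page.

```python
import json
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

# Illustrative only: payload field names below are assumptions, not the
# documented schema for POST /campaigns/schedule.
def next_nightly_run(now_utc, tz, hour):
    """Next occurrence of `hour`:00 local time in `tz`, from a UTC instant."""
    local = now_utc.astimezone(ZoneInfo(tz))
    run = local.replace(hour=hour, minute=0, second=0, microsecond=0)
    if run <= local:
        run += timedelta(days=1)
    return run

payload = {
    "name": "baseline-nightly",
    "recurrence": "DAILY",        # placeholder for the documented recurrence field
    "timeZone": "Europe/Berlin",  # time zone-aware recurrence, as described above
    "startAt": next_nightly_run(
        datetime(2024, 3, 1, 12, 0, tzinfo=ZoneInfo("UTC")), "Europe/Berlin", 2
    ).isoformat(),
}
print(json.dumps(payload))
```

Letting the scheduler own the time zone (rather than converting to UTC in the pipeline) keeps nightly baselines anchored to local time across DST transitions.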
- What QALIPSIS lets you do: Retrieve a default report including test results, execution time, and errors. Reports are accessible via GUI or REST API, and campaigns can be listed/retrieved via endpoints like `GET /campaigns` and `GET /campaigns/{campaign-key}`.
- Docs reference:
  - The Reporting page describes what’s included in the default report and how reporting behaves.
    Read more: Reporting
  - The REST API page shows how to query campaign runs and retrieve details.
    Read more: QALIPSIS REST API