Does your system fail between microservices — and do your tests even see it?
QALIPSIS helps software developers model event-driven paths, scale load across zones, and validate downstream outcomes with step-level observability – in CI or at scale.

Can your test scenarios reach past the API layer into the system that runs behind it?
The message still has to be enqueued. The record still has to be written. The inventory decrement still has to survive a thousand concurrent writers. These failures are invisible to anything that stops at the HTTP boundary.
Let’s explore the top testing challenges developers face today – and how QALIPSIS solves them.
How far behind is your consumer before the first error appears?
- The service returns 200 and the scenario moves on. What it does not see is that the message consumer is falling behind — processing one message per second while the producer is pushing ten. The lag accumulates invisibly until the queue backs up far enough to cause visible failures. By then, the window to catch it in testing has long passed, and the fix has to happen in production.
- Model the message path as steps in the scenario: produce the trigger via the API, then consume the downstream message using kafka().consume or rabbitmq().consume. A join operator correlates the original request with its corresponding consumed message — so if the message arrives late, out of order, or not at all, the assertion fails at the exact step with a timestamped event. Enable step monitoring to export consumer lag as a meter over time.
- Read more:
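As a sketch only, using the DSL pieces this article names (scenario { ... }, .start(), kafka().consume) plus assumed step names such as httpPost, innerJoin, and verify that you should check against the scenario and Kafka plugin docs, the message path might look like:

```kotlin
// Illustrative sketch, not a runnable scenario: steps marked "assumed" are
// not confirmed by this article — see the plugin docs for the exact APIs.
@Scenario("message-path-check")
fun messagePathCheck() {
    scenario {
        minionsCount = 100
        profile { /* injection profile, see Scenario specifications */ }
    }
        .start()
        .httpPost { /* assumed step: trigger the producer via the API */ }
        // Consume the downstream message; in practice this is often a separate
        // branch that the join correlates by a business key (e.g. an order id).
        .kafka().consume { /* topic + deserializer, see Kafka plugin docs */ }
        .innerJoin(/* assumed join operator: request key <-> message key */)
        .verify {
            // Assert arrival, ordering, and end-to-end latency here; a late or
            // missing message fails this exact step with a timestamped event.
        }
}
```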
What actually happened after the API returned 200?
- The API returns the right status code. The message was dispatched. But the database record was never written — or was written with a stale value because two concurrent transactions both read the same state before either committed. The bug is real, reproducible under load, and completely invisible to any test that stops at the HTTP response.
- Place assertions as steps inside the workflow, immediately after the interaction that should produce an effect. Use a join to correlate the submitted request with the persisted outcome — database record, cached value, or downstream event — and assert on the combined record. A divergence fails at the exact step where propagation broke, with an event capturing what was observed versus expected and when it happened.
- Read more:
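A sketch of that shape, assuming a hypothetical persistence-check step (fetchPersistedRecord is not a QALIPSIS API; in practice you would use a datasource or cache plugin step) alongside the join and assertion steps described above:

```kotlin
// Illustrative only: the lookup below stands in for whatever plugin step
// reads your real datastore (database, cache, or downstream event).
.start()
    .httpPost { /* submit the request whose side effect we care about */ }
    .fetchPersistedRecord { /* hypothetical: read back the record just written */ }
    .innerJoin(/* correlate the submitted request with the persisted outcome */)
    .verify {
        // Compare submitted vs persisted values; a stale or missing record
        // fails here, with observed-vs-expected captured in the step event.
    }
```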
What does your load test miss when it runs from a single machine?
- Running all load from one machine produces results that reflect that origin’s network path — not the experience of users distributed across geographies. Cross-region routing, zone-specific connection pooling, and geographic service affinity all behave differently when traffic arrives from multiple origins simultaneously. A single-location test passes; a multi-zone run finds the regional session service that sits thousands of milliseconds away from half its users.
- Deploy factories in the zones you control, assign each a factory.zone, and trigger the campaign via the REST API with a per-scenario zone percentage split in the payload. Exported meters and events are tagged by zone, so latency distributions, error rates, and throughput can be compared across regions in your existing telemetry backend — and geographic asymmetries surface before they affect users.
- Read more:
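The zone split travels in the campaign payload sent to the REST API. The field names below are an assumed shape for illustration only; check the REST API docs for the exact schema:

```json
{
  "name": "checkout-multi-zone",
  "scenarios": {
    "checkout-flow": {
      "minionsCount": 1000,
      "zones": { "EU": 50, "US": 30, "APAC": 20 }
    }
  }
}
```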
How did a 10× latency regression ship through a green pipeline?
- The scenario runs in the pipeline, exits zero, and the build passes. But the campaign report lives inside the tool, the JUnit file was never configured, and the only signal the pipeline received was “process exited cleanly.” A service degraded 10× under load throughout the run, every request eventually returned 200, and nothing failed the build. The regression shipped with a green pipeline.
- Use --autostart for non-interactive runs that stop nodes after completion. In Gradle-based pipelines, the qalipsisRunAllScenarios task or a custom RunQalipsis task publishes JUnit reports configured under report.export.junit.*. The build fails on assertion breaches, the JUnit file is available as a pipeline artefact, and campaign results are retrievable via GET /campaigns/{campaign-key} for integration with your own tooling.
- Read more:
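In a Gradle Kotlin build, the wiring might be sketched as follows. The qalipsis { } extension and the report.export.junit.* prefix come from this article; the specific property names and values are illustrative assumptions:

```kotlin
// build.gradle.kts sketch (illustrative — check the Gradle plugin docs):
qalipsis {
    plugins {
        // declare the QALIPSIS plugins your scenarios use, e.g. Kafka or Netty
    }
}

// Then configure the JUnit export so the pipeline fails on assertion breaches
// and archives the report as an artefact (keys under report.export.junit.*):
//   report.export.junit.enabled = true                      // assumed key
//   report.export.junit.folder  = "build/reports/qalipsis"  // assumed key
```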
Which step failed — and why does the report not say?
- The campaign fails. The report says “campaign status: failed.” Which step regressed? Was it latency, error rate, or a functional assertion? Was it one minion or all of them? Without step-level data, every failed campaign triggers the same manual investigation — pull logs, compare metrics, correlate timestamps — the same archaeology every time a regression surfaces.
- Enable monitoring { events = true } and monitoring { meters = true } (or monitoring { all() }) on the steps where signal matters, not globally, to keep data volume intentional. Export to the backend your team already queries. Configure report publishers to send Slack or email notifications on selected campaign statuses — so the right people know immediately which campaign failed and can open a step-level report rather than starting from scratch.
- Read more:
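Per-step enablement might look like the following sketch. The monitoring { } block and its flags come from this article; the surrounding HTTP step and the .configure { } hook are illustrative assumptions:

```kotlin
// Sketch: turn monitoring on only where the signal matters.
.httpGet { /* the request whose latency and failures you care about */ }
    .configure {   // assumed configuration hook on the step
        monitoring {
            events = true   // timestamped events: what failed, when, on which minion
            meters = true   // meters: latency distribution, error rate, throughput
            // or simply: all()
        }
    }
```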
Enterprise-grade testing for modern software
With QALIPSIS, software teams can:
- Simulate real-world traffic across async, event-driven systems
- Run performance and load testing for cloud-native applications
- Automate testing within DevOps workflows
- Gain real-time insights to boost performance and reliability
What you need to know
- What QALIPSIS lets you do: A minion simulates a user/device and runs a complete scenario; you can define minion count in scenario configuration or override it at runtime. Read more: QALIPSIS core concepts
- Reference in the docs: Look for runtime override notes and autostart keys like campaign.minions-count-per-scenario and campaign.minions-factor. Read more: Execute QALIPSIS
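As an illustrative launch-time override (the key names are from the docs; the property syntax, values, and the factor's exact semantics are assumptions):

```
# Assumed usage: passed as configuration properties when starting a campaign.
campaign.minions-count-per-scenario=2000   # absolute minion count per scenario
campaign.minions-factor=2.5                # scaling factor applied to configured counts (assumed semantics)
```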
- What QALIPSIS lets you do: Define an execution profile in the scenario (profile { ... }) to control how minions are injected. Read more: Scenario specifications
- Reference in the docs: Review the out-of-the-box profiles (immediate, regular, timeframe, stages, etc.). Read more: Scenario specifications
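A profile sketch using the out-of-the-box names listed above (the parameters are placeholders; the exact signatures are in Scenario specifications):

```kotlin
scenario {
    minionsCount = 500
    profile {
        // Pick one of the out-of-the-box profiles, e.g.:
        // immediate()    -> start every minion at once
        // regular(...)   -> steady injection over time
        // timeframe(...) -> spread minion starts across a window
        // stages { ... } -> ramp up, hold, ramp down in phases
    }
}
```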
- What QALIPSIS lets you do: Assign a factory.zone to factories (cluster mode) and use zones in campaign definitions. Read more: Cluster
- Reference in the docs: Look for the factory.zone configuration and zone distribution examples. Read more: QALIPSIS REST API
- What QALIPSIS lets you do: Use --autostart to run immediately and stop nodes after completion; the head can wait for campaign.required-factories in cluster mode. Read more: Execute QALIPSIS
- Reference in the docs: Review --autostart, --scenarios/-s, and the campaign.* keys. Read more: Execute QALIPSIS
- What QALIPSIS lets you do: Use the Gradle plugin to run scenarios and configure JUnit report output (example config shown in the docs). Read more: QALIPSIS Gradle plugin
- Reference in the docs: Look for tasks like qalipsisRunAllScenarios and config keys under report.export.junit.*. Read more: QALIPSIS Gradle plugin
- What QALIPSIS lets you do: Monitoring is disabled by default per step; enable it per step using monitoring { ... }. Read more: Monitoring the test campaigns
- Reference in the docs: Check monitoring { events = true }, monitoring { meters = true }, and monitoring { all() }. Read more: Monitoring the test campaigns
- What QALIPSIS lets you do: Enable global export and use plugins to export to supported backends (Graphite, Elasticsearch, Kafka, TimescaleDB/PostgreSQL, InfluxDB). Read more: Monitoring the test campaigns
- Reference in the docs: Review events.export.enabled=true / meters.export.enabled=true and the exporter plugin docs. Read more: Monitoring the test campaigns
- What QALIPSIS lets you do: Configure exporters, e.g. TimescaleDB via events.export.timescaledb.* and meters.export.timescaledb.* (example config in docs). Read more: TimescaleDB plugin
- Reference in the docs: Check parameters like min-level, batch-size, linger-period, publishers, and step. Read more: TimescaleDB plugin
- What QALIPSIS lets you do: Use plugin steps to produce and consume messages in scenarios (e.g., Kafka kafka(), RabbitMQ rabbitmq(), and JMS jms()).
- Reference in the docs: Check the supported step lists (consume, produce) and example configurations (e.g., concurrency()). Read more: Apache Kafka plugin, RabbitMQ plugin and JMS plugin
- What QALIPSIS lets you do: QALIPSIS generates a default campaign report (overall state, results, and messages). In standalone mode you can also use console reporting: a live report (enabled by default) or a final report (optional), but not both at once. Reports are available via the GUI or REST API, and can be persisted when data storage is enabled. You can also write a log file (logging.file) and an events file (logging.events) to disk. Read more: Reporting
- Reference in the docs: In Reporting, look for GET /api/campaigns and GET /api/campaigns/{campaign-key}, plus report.export.console-live.enabled and report.export.console.enabled. In Logging, look for logging.file and logging.events. Read more: Logging
- What QALIPSIS lets you do: You don’t need to be a Kotlin expert to be productive. Scenarios are defined as Kotlin functions (e.g., a function annotated with @Scenario("...")), but the day-to-day authoring experience is largely QALIPSIS’ Kotlin-based DSL: you declare your scenario with scenario { ... }, set a few readable configuration fields (like minionsCount and profile { ... }), then compose steps in a fluent chain starting at .start() (e.g., .start().netty().tcp { ... }.reuseTcp(...) { ... }). In practice, most edits are “DSL edits” (tweaking configuration blocks and step closures) rather than “Kotlin engineering.” Read more: Scenario specifications
- Reference in the docs: See Scenario specifications for the required structure (@Scenario, scenario { ... }, .start()), and Step specifications for how steps are assembled and configured via fluent calls and { ... } configuration blocks. Read more: Scenario specifications
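Putting the pieces above together, a minimal skeleton (the TCP chain mirrors the article's own example; the closure contents are placeholders, not a complete runnable scenario):

```kotlin
@Scenario("my-scenario")
fun myScenario() {
    scenario {
        minionsCount = 100          // a readable configuration field
        profile { /* injection profile, e.g. an out-of-the-box one */ }
    }
        .start()
        .netty().tcp {
            // connection + first exchange: a typical "DSL edit" lives here
        }
        .reuseTcp(/* reference to the step above */) {
            // follow-up exchange reusing the same connection
        }
}
```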
- What QALIPSIS lets you do: If you’re only running an existing test project, you can execute QALIPSIS from a packaged distribution (JAR/ZIP/Docker) without changing the scenario sources. Read more: Deployment topologies
- Reference in the docs: In Execute QALIPSIS, check the sections describing how to start QALIPSIS from a JAR/ZIP/Docker distribution. Read more: Execute QALIPSIS
- What QALIPSIS lets you do: A QALIPSIS project is packaged with your scenarios and dependencies (for example into a JAR or a ZIP distribution), so additional libraries on the project classpath can be bundled and used by your scenario code. Read more: Execute QALIPSIS
- Reference in the docs: In Execute QALIPSIS, find the “Distribution Archive” section describing a ZIP distribution “along with your scenarios and dependencies”. Also see the Gradle plugin docs for adding QALIPSIS plugins via qalipsis { plugins { ... } }. Read more: QALIPSIS Gradle plugin
- What QALIPSIS lets you do: QALIPSIS lets “any number of minions” act on a scenario, and you can set the minion count in the scenario configuration or override it at runtime. The documentation does not define a fixed maximum; practical limits depend on where the factories run and what your scenario does (connections, data volume, monitoring/export, etc.). Read more: QALIPSIS core concepts
- Reference in the docs: In QALIPSIS core concepts, check the Minions section noting configuration-time vs runtime minion counts. In Execute QALIPSIS, review the campaign.minions-count-per-scenario and campaign.minions-factor overrides. Read more: Execute QALIPSIS
- What QALIPSIS lets you do: Because QALIPSIS scenarios are authored as code (Kotlin + fluent API), you can store them alongside your application or in a dedicated test repo and manage them with your usual version-control workflow (branches, reviews, tags) like any other source code. Read more: QALIPSIS core concepts
- Reference in the docs: Starting from the bootstrap project, your scenarios live as normal source code in a standard project structure — so you can commit them to your version control system, run code reviews on scenario changes (diffs on steps/profiles/assertions/config), and track history over time exactly like application code. Read more: Bootstrap project
- What QALIPSIS lets you do: Run QALIPSIS as a cluster with explicit roles: the head orchestrates campaigns and exposes the GUI/REST interface, while factories host minions and execute scenarios and assertions. Read more: QALIPSIS core concepts
- Reference in the docs: In QALIPSIS core concepts, look for the section describing head/factory roles and how they split orchestration vs execution. Read more: QALIPSIS core concepts
- What QALIPSIS lets you do: Retrieve campaign outcomes via REST — useful for wiring results into your own tooling (dashboards, release notes, or CI summaries). Read more: Reporting
- Reference in the docs: In Reporting, look for REST endpoints like GET /api/campaigns and GET /api/campaigns/{campaign-key} (and the default port reference). Read more: Reporting
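For illustration, pulling a campaign report into your own tooling could look like this plain-JDK sketch (the host, port, and any authentication are deployment-specific assumptions):

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Fetch one campaign's outcome from the QALIPSIS head's REST API.
fun fetchCampaignReport(baseUrl: String, campaignKey: String): String {
    val client = HttpClient.newHttpClient()
    val request = HttpRequest.newBuilder()
        .uri(URI.create("$baseUrl/api/campaigns/$campaignKey"))
        .GET()
        .build()
    // The body is the campaign report payload (JSON) described in Reporting.
    return client.send(request, HttpResponse.BodyHandlers.ofString()).body()
}

// Usage (base URL is an assumed local deployment):
// val report = fetchCampaignReport("http://localhost:8080", "my-campaign-key")
```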