Are you testing distributed behavior – or just hammering endpoints?
Validate whole workflows under load — not just individual requests.

Can your test suite catch what breaks between services – not just at the surface?
A 200 response confirms the API accepted the call — nothing more. Whether the message was consumed, the record written, or concurrent users corrupted shared state is a different question entirely.
Here’s how QALIPSIS helps QA engineers overcome the most critical testing challenges.
Why does your test suite stay green while the queue falls behind?
- The API accepts the request and returns 200. Meanwhile the message broker queue is backing up — the consumer service cannot keep pace under peak load. No test is watching the downstream side, so the queue keeps growing, processing falls further behind, and by the time someone notices, a large backlog has accumulated. The test suite showed green throughout.
- QALIPSIS models the message path as steps inside the scenario. The API call triggers first; then the Kafka or RabbitMQ plugin consumes the downstream message and a join operator correlates the original request with its observed outcome. If the message never arrives — or arrives outside the expected window — the assertion fails at the exact step, with an event recording when and why.
- Read more:
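The correlation a join step performs can be sketched in plain Kotlin (an illustrative model, not the QALIPSIS DSL; all names here are hypothetical): each triggering request is matched against the message observed downstream, and anything unmatched or late is exactly what the assertion should flag.

```kotlin
data class Request(val id: String, val sentAtMs: Long)
data class Consumed(val requestId: String, val seenAtMs: Long)

// Returns the ids of requests whose downstream message never arrived,
// or arrived outside the allowed window -- the cases an assertion fails on.
fun unmatched(requests: List<Request>, consumed: List<Consumed>, windowMs: Long): List<String> {
    val byId = consumed.associateBy { it.requestId }
    return requests.filter { req ->
        val msg = byId[req.id]
        msg == null || msg.seenAtMs - req.sentAtMs > windowMs
    }.map { it.id }
}

fun main() {
    val requests = listOf(Request("a", 0), Request("b", 0), Request("c", 0))
    val consumed = listOf(Consumed("a", 120), Consumed("b", 5_000)) // "c" never arrives
    println(unmatched(requests, consumed, windowMs = 1_000)) // [b, c]
}
```

In the real scenario the "consumed" side comes from the Kafka or RabbitMQ plugin step; the point is that the pass/fail decision is made per request, at the step level.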
Which shared resource breaks when a thousand users write at the same time?
- The platform works correctly with one user. With hundreds of simultaneous writes to the same resource — inventory decrements, reservation commits, profile updates — silent data corruption appears. Race conditions only surface under concurrent load, and sequential test execution never reproduces them. They remain invisible until a promotional event brings them out in production against real customer data.
- QALIPSIS runs multiple minions through the same write path simultaneously under a realistic load profile. After each stage, the database plugin verifies that persisted records reflect the expected state — no duplicates, no dropped writes, no last-writer-wins overwrites. Join operators match each submitted request against its stored outcome, making divergences visible at the step level rather than in a post-incident audit.
- Read more:
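The post-stage verification idea can be shown in a few lines of plain Kotlin (again an illustrative model, not QALIPSIS steps): drive the same write path from many threads at once, then assert that the persisted state matches the expected outcome. A lost update shows up as a count that doesn't add up.

```kotlin
import java.util.concurrent.atomic.AtomicInteger

// Run the same action from `workers` threads simultaneously.
fun runConcurrently(workers: Int, action: () -> Unit) {
    val threads = (1..workers).map { Thread { action() } }
    threads.forEach { it.start() }
    threads.forEach { it.join() }
}

fun main() {
    val inventory = AtomicInteger(1_000)
    runConcurrently(workers = 200) { inventory.decrementAndGet() }
    // The verification step: persisted state must equal the expected state.
    check(inventory.get() == 800) { "lost updates detected: ${inventory.get()}" }
    println("persisted state matches expected: ${inventory.get()}")
}
```

With a non-atomic read-modify-write in place of `decrementAndGet()`, the same check fails intermittently, which is precisely the race that sequential test execution never reproduces.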
Which step in the chain is silently eating your latency budget?
- Each individual service looks fast in isolation. But the user experience depends on a chain — authentication, entitlement check, content lookup, session creation — and the SLO applies to the whole sequence, not any single call. Without a scenario that exercises the chain under load, test coverage is a collection of unit benchmarks that says nothing about actual user experience at peak.
- QALIPSIS models the full journey as an ordered sequence of HTTP steps, reusing the same connection across steps within a single minion’s execution. Each step is instrumented with meters, and verify steps assert that time-to-last-byte stays within the SLO budget. Exported per-step latency distributions make it clear which step in the chain degrades first as load increases.
- Read more:
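The budget arithmetic is worth making explicit (a plain-Kotlin sketch; step names and the budget value are illustrative): the SLO is checked against the sum of per-step latencies, while the per-step breakdown tells you which link in the chain to fix.

```kotlin
// True when the whole journey exceeds the end-to-end SLO budget.
fun breachesSlo(stepLatenciesMs: Map<String, Long>, budgetMs: Long): Boolean =
    stepLatenciesMs.values.sum() > budgetMs

// The step consuming the largest share of the budget.
fun worstStep(stepLatenciesMs: Map<String, Long>): Pair<String, Long> =
    stepLatenciesMs.maxByOrNull { it.value }!!.toPair()

fun main() {
    val journey = mapOf("auth" to 40L, "entitlement" to 35L, "lookup" to 180L, "session" to 60L)
    println("total=${journey.values.sum()}ms breach=${breachesSlo(journey, budgetMs = 250)}")
    println("slowest step=${worstStep(journey)}")
}
```

Every step here is individually "fast", yet the journey total (315 ms) breaches a 250 ms budget, which is exactly what a collection of per-service benchmarks cannot reveal.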
Did the last regression ship because the error rate stayed at zero?
- Load tests run in CI, the error rate stays at zero, and the build passes. A service degraded to ten times its normal latency on every request throughout the campaign, every call eventually resolved, and nothing failed the build. The regression shipped. Error rate is not a proxy for latency correctness or functional accuracy — it just means the server responded.
- QALIPSIS integrates into the build via the Gradle plugin. Assertion thresholds are defined per step — on latency percentiles, error rates, and functional outcome verification — and JUnit-style reports are exported for the pipeline to consume. Any breach fails the build with a report identifying the step, the failure type, and the violated assertion, giving the team an actionable signal rather than a binary exit code.
- Read more:
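Why error rate alone misses this class of regression can be shown with a small gate sketch (plain Kotlin; the thresholds and step names are illustrative, not QALIPSIS configuration): the gate checks both error rate and latency percentiles per step, and reports which step violated which assertion instead of a bare exit code.

```kotlin
data class StepResult(val step: String, val errorRate: Double, val p95Ms: Long)

// One human-readable violation per breached assertion, tagged with the step.
fun violations(results: List<StepResult>, maxErrorRate: Double, maxP95Ms: Long): List<String> =
    results.flatMap { r ->
        buildList {
            if (r.errorRate > maxErrorRate) add("${r.step}: error rate ${r.errorRate} > $maxErrorRate")
            if (r.p95Ms > maxP95Ms) add("${r.step}: p95 ${r.p95Ms}ms > ${maxP95Ms}ms")
        }
    }

fun main() {
    // Zero errors everywhere, but one step degraded to many times its normal latency.
    val results = listOf(
        StepResult("login", errorRate = 0.0, p95Ms = 80),
        StepResult("checkout", errorRate = 0.0, p95Ms = 1_900),
    )
    val report = violations(results, maxErrorRate = 0.01, maxP95Ms = 500)
    report.forEach(::println)
    check(report.isNotEmpty()) { "the gate must fail on latency even with zero errors" }
}
```

A gate that only inspected `errorRate` would pass this run; the latency assertion is what turns the regression into a failed build with an actionable report.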
Why are your test results stuck inside the test tool?
- Test results live inside the test tool. The team’s dashboards run in Grafana over InfluxDB, or Kibana over Elasticsearch. After every campaign, someone manually extracts metrics or screenshots charts. Comparing performance across releases is guesswork, and overlaying test signals with infrastructure metrics to find correlations is simply not possible in that workflow.
- QALIPSIS exports step-level events and meters to the backends your team already operates — Graphite, Elasticsearch, Kafka, TimescaleDB/PostgreSQL, or InfluxDB. Every data point is tagged by campaign, scenario, step, and zone, making it possible to filter by test run, track regressions across releases, and correlate load test signals with infrastructure behaviour in the same dashboards your team uses every day.
- Read more:
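The value of tagging is easy to see in miniature (plain Kotlin; the tag keys mirror those named above, the values are made up): because every data point carries its campaign, scenario, step, and zone, one dashboard query can slice by test run or compare releases without any manual extraction.

```kotlin
data class Point(val value: Double, val tags: Map<String, String>)

// Filter exported points by a single tag, the way a dashboard query would.
fun filterBy(points: List<Point>, tag: String, value: String): List<Point> =
    points.filter { it.tags[tag] == value }

fun main() {
    val points = listOf(
        Point(120.0, mapOf("campaign" to "rel-1.4", "step" to "login", "zone" to "eu")),
        Point(310.0, mapOf("campaign" to "rel-1.5", "step" to "login", "zone" to "eu")),
        Point(95.0, mapOf("campaign" to "rel-1.5", "step" to "login", "zone" to "us")),
    )
    // Compare the same step across releases, or across zones within one release.
    println(filterBy(points, "campaign", "rel-1.5").map { it.value }) // [310.0, 95.0]
}
```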
Performance testing made for QA engineers
With QALIPSIS, you can:
- Test event-driven, async architectures with ease
- Scale load tests on demand
- Simulate complete user journeys
- Automate validation in every release cycle
What you need to know
- What QALIPSIS lets you do: A minion simulates a user/device and runs a complete scenario; you can define minion count in scenario configuration or override it at runtime.
Read more: QALIPSIS core concepts - Reference in the docs: Runtime override notes and autostart keys like `campaign.minions-count-per-scenario` and `campaign.minions-factor`.
Read more: Execute QALIPSIS
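The arithmetic implied by those keys can be sketched as follows, assuming `campaign.minions-factor` scales the per-scenario count configured in the scenario (that scaling semantic is an assumption; check the runtime-override notes in the docs):

```kotlin
// Assumed semantic: effective minions = configured count * runtime factor.
fun effectiveMinions(configuredPerScenario: Int, factor: Double): Int =
    (configuredPerScenario * factor).toInt()

fun main() {
    // 1_000 minions configured in the scenario, scaled down to 25% for a smoke run.
    println(effectiveMinions(1_000, 0.25)) // 250
}
```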
- What QALIPSIS lets you do: Use plugin steps to produce and consume messages in scenarios (e.g., Kafka `kafka()`, RabbitMQ `rabbitmq()`, and JMS `jms()`). - Reference in the docs: Check the supported step lists (`consume`, `produce`) and example configurations (e.g., `concurrency()`).
Read more: Apache Kafka plugin, RabbitMQ plugin and JMS plugin
- What QALIPSIS lets you do: Build scenarios as sequences of steps with dependencies and assertions that validate data exchanges, persistence, and durations; assertion failures are logged as QALIPSIS events.
Read more: QALIPSIS core concepts - Reference in the docs: Look for the “Scenarios / Steps / Assertions” sections and the operators list (including joining and error handling).
- What QALIPSIS lets you do: Reuse an HTTP or TCP connection across multiple steps using `httpWith` or `tcpWith`, where the same minions that ran the first HTTP/TCP step continue into the follow-up request step over the same connection.
Read more: Testing a REST API with QALIPSIS - Reference in the docs: Look for `http` + `httpWith(...)` in the REST quickstart, and the note that "the same minions" pass through the `httpWith` step.
- What QALIPSIS lets you do: Run campaigns via Gradle tasks (including custom `RunQalipsis` tasks) or trigger them via the CLI, REST API, or GUI; the docs include CI examples (Jenkins, GitHub Actions, GitLab CI, Travis). Get notified of campaign results by email or Slack.
Read more: CI & CD and QALIPSIS Gradle plugin - Reference in the docs: Look for Gradle tasks like `qalipsisRunAllScenarios` and the CI snippets showing `./gradlew clean assemble ...`.
- What QALIPSIS lets you do: Allocate scenario load across zones when triggering a campaign via the REST API or GUI, using a per-scenario `zones` distribution (percentages summing to 100%), and analyze exported data tagged by zone.
Read more: QALIPSIS REST API and Monitoring the test campaigns - Reference in the docs: Look for `zones` in the REST API campaign payload and "meters/events are tagged by … zone."
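A small sketch of the constraint described above (plain Kotlin; the zone names and the proportional-split helper are illustrative, not part of the REST payload): per-scenario zone percentages must sum to 100%, and the split determines how much load each zone carries.

```kotlin
// The per-scenario zones distribution must sum to exactly 100%.
fun validZones(zones: Map<String, Int>): Boolean = zones.values.sum() == 100

// Illustrative: split a total minion count proportionally across zones.
fun minionsPerZone(totalMinions: Int, zones: Map<String, Int>): Map<String, Int> =
    zones.mapValues { (_, pct) -> totalMinions * pct / 100 }

fun main() {
    val zones = mapOf("eu-west" to 60, "us-east" to 40)
    check(validZones(zones))
    println(minionsPerZone(2_000, zones)) // {eu-west=1200, us-east=800}
}
```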
- What QALIPSIS lets you do: Get a default campaign report (results + messages) accessible via GUI or REST API; optionally enable live reporting in standalone mode; export JUnit-like reports for CI tooling. Optionally, export the analytics data to the systems your team already uses, integrating the results into your existing workflow.
Read more: Reporting and QALIPSIS core concepts - Reference in the docs: Look for REST endpoints like `GET /campaigns` and `GET /campaigns/{campaign-key}`, and config flags like `report.export.console-live.enabled` and `report.export.console.enabled`.