FAQs
You use QALIPSIS for load, performance, and end-to-end testing of distributed and asynchronous systems, and it also applies to monoliths. You specify realistic workflows and then verify system behavior under load – latency, failure modes, availability, and data consistency – using simulated actors (“minions”) that execute complete scenarios.
Read more: What is QALIPSIS | QALIPSIS core concepts | Plugins
You test both. QALIPSIS supports distributed and monolithic systems with the same scenario and assertion model, so your test approach stays stable even when your architecture changes.
Read more: What is QALIPSIS | QALIPSIS core concepts
You test distributed and asynchronous workflows because QALIPSIS correlates signals across the system – metrics, messages, and database records – and validates the effects your scenario should produce. You verify end-to-end outcomes across asynchronous hops instead of relying on edge responses as a proxy for correctness.
Read more: What is QALIPSIS | QALIPSIS core concepts | Develop scenarios
You correlate what your tests inject (requests, produced messages, written records) with what the system emits and stores (metrics, consumed messages, database records). This cross-verification turns distributed systems testing from inference into evidence.
Read more: What is QALIPSIS | QALIPSIS core concepts
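The cross-verification idea can be sketched in plain Kotlin. This is a conceptual illustration only – `crossVerify` and the identifiers below are made up for the example, not the QALIPSIS API:

```kotlin
// Conceptual sketch, not QALIPSIS code: cross-verification means checking that
// everything the test injected shows up in what the system emitted or stored.
fun crossVerify(injected: Set<String>, observed: Set<String>): Set<String> =
    injected - observed // records the system lost or has not yet processed

fun main() {
    val injectedOrderIds = setOf("order-1", "order-2", "order-3") // produced by the test
    val persistedOrderIds = setOf("order-1", "order-3")           // read back from the database
    val missing = crossVerify(injectedOrderIds, persistedOrderIds)
    println(missing) // [order-2] – evidence of a lost or unprocessed record
}
```

An empty result is the "evidence" the answer above refers to: every record the test produced was demonstrably processed, rather than assumed to be.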
You get more realistic and more actionable results because QALIPSIS targets the whole system: multi-protocol testing via plugins, distributed execution, and end-to-end verification of asynchronous side effects. You validate that the system behaved correctly under load, not just that an endpoint remained responsive.
Read more: What is QALIPSIS | QALIPSIS core concepts | Plugins
Realistic distributed and asynchronous scenarios, cross-verification of metrics/messages/database records, geo-distributed load generation, concurrent personas, and automation-friendly pass/fail thresholds.
Read more: What is QALIPSIS | Plugins
QALIPSIS is a protocol-level load and end-to-end testing tool for distributed systems. Understanding its boundaries is as important as understanding its capabilities.
- It does not render browsers or simulate mobile devices
Every web or mobile application involves two execution domains. Server-side execution covers everything your infrastructure controls: API routing, business logic, database queries, message brokering, cache lookups, and inter-service communication. Client-side execution covers everything the end-user’s device handles after it receives a response: on the web, that means HTML parsing, DOM construction, CSS layout, JavaScript compilation and rendering; on mobile, it means native UI rendering, view hierarchy layout, animation frame rates, and on-device business logic.
Protocol-level tools – QALIPSIS, but also JMeter, Gatling, k6, and similar tools – operate in the server-side domain. They issue requests directly over the wire (HTTP, TCP, UDP, MQTT, or messaging protocols) and measure what the infrastructure does: how fast it responds, whether the data is correct, whether the system stays consistent under concurrency. They never instantiate a browser engine or a mobile runtime, so they have no DOM, no JavaScript execution, no native view rendering, and no concept of “the screen is visually complete.”
This is a deliberate architectural choice, not a gap. The two domains answer fundamentally different questions. Server-side load testing asks: “Can the infrastructure process 50,000 concurrent workflows without degrading throughput, correctness, or availability?” Client-side performance testing asks: “Once the device receives this response, how quickly does the screen become visually complete and interactive for this user, on this hardware, on this network?” The former is a function of your backend architecture; the latter is a function of front-end asset optimization, JavaScript or native-code complexity, rendering paths, and the end-user’s device capabilities. Conflating them produces misleading results in both directions.
There is also a practical reason: browser engines and mobile emulators serialize work through a single rendering thread, making each virtual user orders of magnitude more expensive to simulate. A typical load-injection node can sustain hundreds of protocol-level virtual users but only a handful of real browser or emulator sessions. Protocol-level operation is what allows QALIPSIS to scale minions from thousands to millions on a modestly sized cluster.
If you need to measure Largest Contentful Paint, Time to Interactive, or layout shift on the web, you need browser-level tools (Lighthouse, Playwright, WebPageTest). If you need to profile mobile rendering performance – frame drops, cold-start time, or memory pressure on-device – you need platform-specific tooling (Android Profiler, Xcode Instruments) or real-device cloud services. These are complementary to QALIPSIS, not competitors.
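The capacity argument above can be made concrete with back-of-the-envelope arithmetic. The per-node figures below are illustrative assumptions, not measured QALIPSIS numbers:

```kotlin
// Back-of-the-envelope sketch of why protocol-level operation scales further
// than browser-level simulation. Per-node capacities are assumed, not measured.
fun nodesNeeded(virtualUsers: Int, usersPerNode: Int): Int =
    (virtualUsers + usersPerNode - 1) / usersPerNode // ceiling division

fun main() {
    val target = 50_000
    println(nodesNeeded(target, 1_000)) // protocol-level: 50 nodes (assuming ~1k users/node)
    println(nodesNeeded(target, 5))     // real browser sessions: 10,000 nodes (assuming ~5/node)
}
```

The exact numbers vary with scenario complexity and hardware, but the orders-of-magnitude gap is the point.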
- It does not replace application observability
QALIPSIS integrates with monitoring stacks – it can export events and meters to Elasticsearch, InfluxDB, TimescaleDB, and Graphite – but it is not an APM. It does not instrument your application code, trace internal function calls, or build service dependency maps.
The relationship works in both directions, however. QALIPSIS can also consume data from your observability and monitoring infrastructure during a test. Its plugins include poll and search steps for databases and time-series stores, which means a scenario can ingest infrastructure metrics – CPU utilization, memory pressure, queue depths, error rates – published by your APM or monitoring stack and cross-verify them against the load being injected. For example, you can assert that resource consumption stays within expected bounds at a given concurrency level, or that a spike in database response time correlates with a specific load profile. QALIPSIS produces load-test evidence that your observability tools can correlate with application-level telemetry, and it consumes observability data to enrich its own assertions – but it does not replace the instrumentation itself.
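The shape of such an assertion – polled infrastructure metrics checked against the injected load level – can be sketched conceptually. This is illustrative Kotlin, not the QALIPSIS poll-step API; the field names and limits are assumptions:

```kotlin
// Illustrative only: assert that a polled infrastructure metric stays within an
// expected budget at each injected concurrency level. Names are assumptions.
data class MetricSample(val concurrentMinions: Int, val cpuPercent: Double)

// budget maps a concurrency level to the maximum acceptable CPU at or below it
fun withinBudget(samples: List<MetricSample>, budget: Map<Int, Double>): Boolean =
    samples.all { s ->
        val limit = budget.filterKeys { it >= s.concurrentMinions }.values.minOrNull()
        limit == null || s.cpuPercent <= limit
    }

fun main() {
    val polled = listOf(MetricSample(1_000, 45.0), MetricSample(5_000, 78.0))
    val budget = mapOf(1_000 to 50.0, 5_000 to 80.0) // assumed SLO-style limits
    println(withinBudget(polled, budget)) // true: consumption stayed in bounds
}
```

In a real scenario the samples would come from a poll or search step against your monitoring backend rather than a hard-coded list.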
- It is not a single-endpoint micro-benchmark tool
If your goal is to saturate a single URL and measure its raw request-per-second ceiling or p99 latency in isolation, dedicated HTTP micro-benchmarking tools are purpose-built for that: `wrk`, `hey`, and `vegeta` can all hammer one endpoint with minimal setup and overhead. QALIPSIS can do that too, but it is not its primary design intent. QALIPSIS is designed for workflow validation across components: chaining HTTP calls with database assertions, message broker verification, and cross-layer data consistency checks within a single scenario. Its value emerges when the question is not “how fast is this endpoint?” but “does the entire chain – from API request to message broker to database to downstream service – stay correct and responsive under realistic, concurrent load?”
- It is not a contract-testing framework
Contract testing verifies that two services agree on the shape and semantics of their interface – request format, response structure, status codes, field names – typically by running lightweight, isolated checks against a mock or recorded contract. Tools like Pact, Spring Cloud Contract, or Specmatic are designed for this: they generate and verify contracts between consumer and provider independently, often as part of a CI pipeline, without deploying the full system.
QALIPSIS operates in a different part of the testing spectrum. It does not generate or verify interface contracts between services. Instead, it deploys load against your actual running infrastructure and validates that the system behaves correctly end-to-end under concurrency: that data flows through the right services, lands correctly in databases and message brokers, and that timing and consistency hold at scale. Contract testing answers “do these two services still agree on the API shape?” – QALIPSIS answers “when 10,000 users exercise the real workflow simultaneously, does the whole system still produce the right outcomes?” The two are complementary: contract testing catches interface drift early and cheaply in the development cycle; QALIPSIS catches the runtime, concurrency, and data-integrity problems that only surface under load on a deployed system.
- It is not a penetration-testing tool
QALIPSIS generates load to measure performance, correctness, and system behavior under concurrency. It does not probe for security vulnerabilities, test for injection flaws, scan for misconfigurations, or simulate adversarial attack patterns. High request volume is not the same as a security assessment: QALIPSIS sends the requests you define in your scenario; it does not crawl your application looking for exploitable surfaces.
If you need to identify vulnerabilities such as SQL injection, cross-site scripting, broken authentication, or OWASP Top 10 weaknesses, use dedicated security testing tools: OWASP ZAP (open-source, CI/CD-friendly), Burp Suite (the industry standard for manual and automated web application penetration testing), or Nikto (lightweight web server vulnerability scanner). These tools intercept, analyze, and manipulate traffic to discover security flaws – a fundamentally different objective from what QALIPSIS is designed to do.
- It does not guarantee performance improvements
QALIPSIS produces evidence – metrics, traces, assertion outcomes, and campaign reports – that inform engineering decisions. It reveals where your system degrades, at what concurrency thresholds, and with what data-consistency consequences. Acting on those findings is an engineering responsibility, not a tool capability. QALIPSIS surfaces the problem; your team owns the fix.
You define a scenario as an ordered set of steps plus assertions. Minions execute the scenario end-to-end, and assertions confirm that the system produced the expected outcomes while load is applied. If you want the exact mental model and terminology, the “core concepts” section in the docs lays it out cleanly.
Read more: What is QALIPSIS | QALIPSIS core concepts | Develop scenarios | Scenario specifications | Step specifications
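The "ordered steps plus assertions" mental model can be reduced to a toy sketch in plain Kotlin. Everything here (`Scenario`, `step`, `assertThat`, `runOnce`) is invented for illustration – it is not the QALIPSIS DSL, only the shape of the idea:

```kotlin
// Toy model of "scenario = ordered steps + assertions". NOT the QALIPSIS API.
class Scenario(val name: String) {
    private val steps = mutableListOf<Pair<String, (Any?) -> Any?>>()
    private val assertions = mutableListOf<Pair<String, (Any?) -> Boolean>>()

    fun step(name: String, action: (Any?) -> Any?) { steps += name to action }
    fun assertThat(name: String, predicate: (Any?) -> Boolean) { assertions += name to predicate }

    // A "minion" runs the steps in order; assertions then check the final outcome.
    fun runOnce(): Boolean {
        var payload: Any? = null
        for ((_, action) in steps) payload = action(payload)
        return assertions.all { (_, predicate) -> predicate(payload) }
    }
}

fun main() {
    val s = Scenario("checkout")
    s.step("create-order") { "order-42" } // each step feeds its output to the next
    s.step("fetch-order") { id -> mapOf("id" to id, "status" to "CONFIRMED") }
    s.assertThat("order-confirmed") { out -> (out as Map<*, *>)["status"] == "CONFIRMED" }
    println(s.runOnce()) // true: the workflow produced the expected outcome
}
```

In QALIPSIS itself, many minions execute such a scenario concurrently, which is what turns this single-pass check into a load test.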
No. You don’t need advanced Kotlin. You work with a Kotlin-based DSL that’s intentionally readable and quick to learn, so teams can review, version, and maintain test scenarios like any other engineering artifact.
Read more: What is QALIPSIS | Get started | Bootstrap project
You generate load with minions – simulated users/devices/systems that execute complete scenarios. You scale by increasing the number of minions; you keep realism by having each minion follow a workflow, not just repeat a single request.
Read more: QALIPSIS core concepts | What is QALIPSIS
You scale QALIPSIS through distributed execution. In cluster mode, a Head node coordinates the campaign and Factory nodes execute minions. Adding factories increases load capacity horizontally and avoids turning one host into the limiting factor.
Read more: QALIPSIS core concepts | QALIPSIS up and running | Deployment topologies | Execute QALIPSIS
Yes. You distribute load generation across locations to emulate regional users and compare results by geography. This matters when routing, regional dependencies, and network latency materially affect performance.
Read more: What is QALIPSIS | Deployment topologies | REST API
Yes. You test non-HTTP technologies via plugins, including databases and messaging platforms. That’s essential in distributed systems where correctness and performance are often determined by what happens in queues, brokers, and datastores – not only at the HTTP edge.
Read more: What is QALIPSIS | QALIPSIS core concepts | Plugins
You test a wide range of systems via official plugins, including Apache Kafka, RabbitMQ, JMS, Jakarta EE Messaging, Apache Cassandra, Elasticsearch, InfluxDB, TimescaleDB, MongoDB, Redis, and open-source relational SQL databases. When you need something proprietary or not yet supported, you extend QALIPSIS with custom plugins.
Read more: Plugins | Apache Kafka plugin | RabbitMQ plugin | JMS plugin | Jakarta EE plugin | Apache Cassandra plugin | Elasticsearch plugin | InfluxDB plugin | TimescaleDB plugin | MongoDB plugin | Redis-lettuce plugin | R2DBC-jasync plugin
Yes. You validate what happens inside the system under load – message production/consumption, persisted records, and completion of asynchronous flows. QALIPSIS is positioned to cross-check what your test generated against what the system actually processed, so you can verify outcomes instead of assuming them.
Read more: What is QALIPSIS | QALIPSIS core concepts | Develop scenarios
Yes. You define aggregated assertions with thresholds so a run passes or fails based on performance and consistency criteria. That gives you an explicit release signal and keeps performance testing actionable.
Read more: What is QALIPSIS | QALIPSIS core concepts | Scenario specifications
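The release-signal idea – a run passes or fails on aggregated criteria – can be sketched as follows. This is illustrative Kotlin, not the QALIPSIS assertion DSL, and the thresholds are assumed example values:

```kotlin
// Sketch of an aggregated pass/fail gate (illustrative, not the QALIPSIS DSL).
fun p99(latenciesMs: List<Long>): Long {
    require(latenciesMs.isNotEmpty())
    val sorted = latenciesMs.sorted()
    return sorted[(0.99 * (sorted.size - 1)).toInt()]
}

// The campaign "passes" only if aggregated performance and error criteria hold.
// 250 ms and 1% are assumed thresholds for the example.
fun campaignPasses(latenciesMs: List<Long>, errorRate: Double): Boolean =
    p99(latenciesMs) <= 250L && errorRate <= 0.01

fun main() {
    val latencies = (1L..100L).map { it * 2 } // 2 ms .. 200 ms
    println(campaignPasses(latencies, 0.005)) // true: p99 within budget, errors under 1%
}
```

A boolean outcome like this is what makes a load test usable as a CI gate: the pipeline fails the build when the aggregated criteria are violated.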
You integrate QALIPSIS into virtually any CI/CD platform because it’s automation-first: execute it via the CLI, run it through the Gradle plugin, or trigger it through the REST API. The docs include examples for common platforms, but the integration pattern is portable by default.
Read more: What is QALIPSIS | Automation, CI, CD | CI & CD | Scheduling | REST API | QALIPSIS Gradle plugin | Execute QALIPSIS
Yes. You run QALIPSIS non-interactively for pipelines and scheduled runs. It is designed to fit repeatable engineering workflows, so execution does not depend on a UI.
Read more: Execute QALIPSIS | Automation, CI, CD | What is QALIPSIS
Yes. You use the GUI to configure and run campaigns, adjust load distribution and minion counts, control execution, and view reports. It supports exploration and shared visibility – especially when not every stakeholder wants to work directly in code.
Read more: What is QALIPSIS | Execute QALIPSIS
You get real-time insight into running campaigns: execution statistics, collected metrics, and operational details via events and meters. The practical outcome is faster diagnosis – you see issues while they happen, not only after the run completes.
Read more: What is QALIPSIS | Monitoring test campaigns
Yes. You push metrics into tools such as Grafana and Kibana, and you store and analyze time-series results using multiple backends via plugins. You keep your existing observability toolchain and add higher-quality test signals.
Read more: Monitoring test campaigns | Elasticsearch plugin | InfluxDB plugin | TimescaleDB plugin | Graphite plugin
Yes. You run campaigns with multiple scenarios and observe how persona behaviors interact under load. This reveals interference effects and shared bottlenecks that single-scenario tests often miss.
Read more: What is QALIPSIS | QALIPSIS core concepts | Develop scenarios
You deploy QALIPSIS in the mode that fits your constraints: standalone vs cluster, persistent vs ephemeral, container vs host. The docs describe the tradeoffs so you can choose based on scale, repeatability, and how you want to manage state.
Read more: QALIPSIS up and running | Deployment topologies | Execute QALIPSIS
You need Java 11 or later, or a Docker environment.
Read more: Get started | Execute QALIPSIS | What is QALIPSIS
Yes. You use QALIPSIS Open Source and its plugins for free, and you extend it through plugins and the broader Java ecosystem. If you need enterprise capabilities or a managed cloud option, commercial licensing covers that.
Read more: What is QALIPSIS | Plugins
Yes. You receive completion notifications through available notification plugins, such as email and Slack.
Read more: Mail plugin | Slack plugin
You share reports and dashboards that summarize outcomes and make bottlenecks defensible. Downloadable PDF reports and test comparisons are also available for stakeholder-friendly communication.
Read more: What is QALIPSIS | Reporting
Yes. QALIPSIS Cloud supports role-based access in paid plans and custom SSO in the customized plan.
Read more: What is QALIPSIS
You run QALIPSIS in the most common CI/CD platforms. The docs show examples, but the key point is portability: if your CI can run a shell command or a Gradle task, it can run QALIPSIS.
Read more: Automation, CI, CD | CI & CD | QALIPSIS Gradle plugin | Execute QALIPSIS
You store core execution data in PostgreSQL, and you store time-series records in external platforms through plugins (for example Elasticsearch, TimescaleDB, InfluxDB, Graphite). This split is intentional: structured results stay reliable and queryable, while metrics flow to the systems your team already uses for time-series analysis.
Read more: Data storage | Configuring QALIPSIS | Elasticsearch plugin | InfluxDB plugin | TimescaleDB plugin | Graphite plugin