RPA robot task fails when triggered from automated test suite

We’re experiencing intermittent failures with RPA robot tasks when triggered through our automated test suite in our QA environment. The robots execute perfectly when triggered manually from the Mendix portal, but fail about 60% of the time during automated regression tests.

The error we’re seeing:


Error: Robot execution timeout
RPA.execute failed at step 3
Endpoint: https://qa-rpa.mendix.local/api/v1/execute
Status: Connection refused

Our setup involves robot registration in QA and environment-specific endpoint configuration that should route to the test RPA server. The automated test integration with RPA uses standard Mendix microflow calls. This is blocking our CI/CD pipeline as we can’t reliably validate RPA-dependent features. Has anyone solved similar issues with robot task execution in automated testing scenarios?

I’ve seen this before. The connection refused error usually points to timing issues with robot availability. In automated tests, the RPA service might not be fully initialized when your test fires the trigger. Are you using any wait conditions before calling the robot task? Also check whether your QA environment has the robot instances properly scaled: automated tests can spawn multiple parallel executions that overwhelm a single robot instance.

The 60% failure rate suggests a race condition or resource contention. When you run automated tests, are multiple test cases trying to use the same robot simultaneously? RPA robots typically process tasks sequentially. If your test suite fires multiple robot triggers in parallel without queuing logic, some will fail. I’d recommend implementing a queue mechanism in your microflow that checks robot availability before execution. Also, look at your robot’s task timeout settings: automated tests might need longer timeouts than manual executions due to QA environment resource constraints.
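For example, a minimal gate in Python (a hypothetical helper, not part of any Mendix API) that caps how many triggers from parallel tests can hit a sequential robot at once:

```python
import threading

class RobotGate:
    """Cap concurrent robot-task triggers so parallel tests don't
    flood a robot that processes tasks sequentially (hypothetical helper)."""

    def __init__(self, max_concurrent=1):
        # A semaphore limits how many triggers run at once;
        # 1 mimics a single robot working through its queue.
        self._slots = threading.Semaphore(max_concurrent)

    def trigger(self, run_task, *args):
        # Block until a robot slot frees up, then run the task.
        with self._slots:
            return run_task(*args)
```

Each test thread calls `gate.trigger(...)` instead of firing the robot directly, so excess callers wait instead of getting connection refused.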

Environment-specific configuration is critical here. We solved this by creating separate constant values for RPA endpoints per environment. In your app settings, define constants like RPA_QA_Endpoint and RPA_PROD_Endpoint, then use conditional logic in your microflows to select the right one based on runtime environment detection. This prevents hardcoded endpoints that might work in one environment but fail in others during automated testing.

The intermittent nature and connection refused error indicate environment configuration issues combined with test execution patterns. Your automated tests are likely overwhelming the QA RPA infrastructure.

Robot Registration in QA: First, verify your robot registration is environment-aware. In the RPA module configuration, ensure you have separate robot profiles for QA with dedicated endpoints. Check that the robot service is actually running and accessible at the endpoint your tests are calling. Use a health check microflow before test execution:


// Health check before robot execution (bounded retries)
FOR attempt IN 1..5
  GET /api/rpa/health
  IF response.status == 200 THEN BREAK
  WAIT 2000ms

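The same check sketched in Python, with an injectable transport so it can run in a test harness (the health URL, attempt count, and delay are assumptions, not values from the Mendix RPA module):

```python
import time
import urllib.request

def wait_for_robot(health_url, attempts=5, delay_s=2.0, fetch=None):
    """Poll an RPA health endpoint until it answers 200, giving up
    after a bounded number of attempts. `fetch` is injectable for tests."""
    if fetch is None:  # default transport: a plain HTTP GET
        def fetch(url):
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.status
    for _ in range(attempts):
        try:
            if fetch(health_url) == 200:
                return True
        except OSError:
            pass  # "connection refused" just means not ready yet
        time.sleep(delay_s)
    return False
```

Gate the whole suite on this returning True before any robot-dependent test runs.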
Environment-Specific Endpoint Config: Create environment constants in your Mendix app; never hardcode endpoints. In your deployment pipeline, set these values per environment.
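For illustration, the selection logic can be sketched in Python. Only the QA URL comes from the error log in the original post; the PROD value and the helper name are placeholders:

```python
# Hypothetical per-environment constants; only the QA URL appears in
# the error log above, the PROD value is a placeholder.
RPA_ENDPOINTS = {
    "QA": "https://qa-rpa.mendix.local/api/v1/execute",
    "PROD": "https://rpa.example.com/api/v1/execute",
}

def rpa_endpoint(environment):
    """Resolve the endpoint from the runtime environment name,
    failing loudly instead of silently using a hardcoded URL."""
    if environment not in RPA_ENDPOINTS:
        raise ValueError(f"no RPA endpoint configured for {environment!r}")
    return RPA_ENDPOINTS[environment]
```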

In your robot trigger microflow, always reference the constant, not a literal string.

Automated Test Integration with RPA: The key issue is synchronization. Automated tests execute faster than manual triggers, creating race conditions. Implement this pattern in your test framework:

  1. Before test suite: Verify robot availability with health check
  2. In each test: Add explicit wait for robot ready state (poll status endpoint)
  3. After robot trigger: Poll for completion status instead of assuming immediate execution
  4. Implement retry logic with exponential backoff for transient failures
  5. Add test isolation: use unique task IDs to prevent cross-test interference
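The steps above can be sketched as a single helper. All names here are hypothetical; `trigger` and `poll_status` stand in for whatever microflow or REST calls your suite actually uses:

```python
import time
import uuid

def run_robot_task(trigger, poll_status, task_params,
                   max_attempts=4, max_polls=30, base_delay_s=1.0):
    """Sketch of the pattern above: unique task IDs for isolation,
    polling for completion, and exponential backoff on failure."""
    task_id = str(uuid.uuid4())  # step 5: unique ID per run
    for attempt in range(max_attempts):
        try:
            trigger(task_id, task_params)
            # Step 3: poll for completion instead of assuming it.
            for _ in range(max_polls):
                status = poll_status(task_id)
                if status == "COMPLETED":
                    return task_id
                if status == "FAILED":
                    raise RuntimeError("robot reported failure")
                time.sleep(base_delay_s)
            raise RuntimeError("poll window exhausted")
        except (OSError, RuntimeError):
            # Step 4: exponential backoff before retrying.
            time.sleep(base_delay_s * (2 ** attempt))
    raise TimeoutError(f"robot task {task_id} never completed")
```

Transient failures (including connection refused, which surfaces as `OSError`) are retried with growing delays instead of failing the test outright.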

For the connection refused error specifically, check your QA firewall rules. Automated test runners might be executing from different network segments than manual test machines, causing connection blocks.

Also scale your QA RPA infrastructure. If you’re running parallel test suites, you need multiple robot instances. Configure a robot pool with at least 3-5 instances for QA to handle concurrent automated test execution. Use the RPA module’s queue management features to distribute tasks across available robots.
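As a minimal illustration of spreading tasks across a pool (in practice the RPA module’s queue management would do this; this round-robin helper and the robot names are hypothetical):

```python
import itertools

class RobotPool:
    """Round-robin dispatch across a pool of robot instances,
    so concurrent test suites don't all hit the same robot."""

    def __init__(self, robot_ids):
        self._cycle = itertools.cycle(robot_ids)

    def next_robot(self):
        # Rotate through the pool to spread load evenly.
        return next(self._cycle)
```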

Finally, add comprehensive logging in your robot trigger microflows. Log the endpoint being called, robot ID, task parameters, and response codes. This will help identify whether failures are configuration issues, timing problems, or actual robot execution errors. The 60% failure rate should drop to near zero once you implement proper environment configuration, health checks, and queuing logic.
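One way to keep those log lines uniform so failures can be grepped and classified (the field names are illustrative, not a Mendix convention):

```python
def format_robot_log(endpoint, robot_id, task_params, status_code):
    """Build one structured log line carrying the fields suggested
    above: endpoint, robot ID, task parameters, response code."""
    level = "INFO" if status_code == 200 else "ERROR"
    return (f"{level} rpa.trigger endpoint={endpoint} "
            f"robot={robot_id} params={task_params} status={status_code}")
```

Emit this through your normal logging at the moment of each trigger and each status poll, and the configuration-vs-timing-vs-execution question usually answers itself.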