Test execution progress reports timing out on large test sets

Our release readiness reports are failing due to timeout issues in mf-24. Test execution progress reports consistently time out after 120 seconds when analyzing test sets with over 5,000 test runs.

The Analysis View query appears straightforward but takes too long:

SELECT test_id, execution_status, COUNT(*)
FROM TEST_RUN_HISTORY
WHERE release_id = 'R-2025-Q3'
GROUP BY test_id, execution_status

I’ve tried increasing the report timeout configuration in server settings but that just delays the inevitable failure. Database performance seems normal for other queries. Anyone dealt with parameterized query optimization for large test execution datasets?

I’ve optimized test execution reporting for several large-scale ALM deployments, and a timeout like this is best addressed on several fronts at once rather than with a single fix.

Report Timeout Configuration: First, adjust the timeout settings. In Site Administration > Server Configuration, set report.execution.timeout=300 (5 minutes) and enable report.query.optimization=true. That said, raising the timeout is only a stopgap - the real fix is addressing the underlying query performance.

Database Indexing: Create a composite index that matches your query pattern exactly:

CREATE INDEX idx_test_run_progress
ON TEST_RUN_HISTORY(release_id, test_id, execution_status)

This index can be created online in most databases without downtime. The composite index covers both your WHERE clause and GROUP BY columns, enabling index-only scans that avoid full table reads. This typically reduces query time from 120+ seconds to under 10 seconds.
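The exact online-build syntax depends on the database behind your ALM instance. As a sketch for the two common backends (verify edition support before running; SQL Server requires Enterprise edition for online index builds):

```sql
-- Oracle: build the index without blocking DML on the table
CREATE INDEX idx_test_run_progress
  ON TEST_RUN_HISTORY (release_id, test_id, execution_status)
  ONLINE;

-- SQL Server (Enterprise edition): equivalent online build
CREATE INDEX idx_test_run_progress
  ON TEST_RUN_HISTORY (release_id, test_id, execution_status)
  WITH (ONLINE = ON);
```

Even with an online build, schedule it in a low-activity window - brief locks are still taken at the start and end of the operation.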

Parameterized Queries: Ensure your Analysis View uses proper parameterization. Modify the query to use bind variables instead of literal values:

SELECT test_id, execution_status, COUNT(*)
FROM TEST_RUN_HISTORY
WHERE release_id = :release_param
GROUP BY test_id, execution_status

Parameterized queries enable execution plan caching, which dramatically improves performance for repeated report generation.
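If the backend is Oracle, one way to sanity-check that the cached plan is actually being reused (rather than re-parsed per report run) is to inspect the shared SQL area. The view and columns below are standard Oracle; the LIKE filter is just an illustrative way to find the report query:

```sql
-- High EXECUTIONS with low PARSE_CALLS indicates the cached plan is reused
SELECT sql_text, executions, parse_calls
FROM   v$sql
WHERE  sql_text LIKE '%TEST_RUN_HISTORY%'
ORDER  BY executions DESC;
```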

Test Run Archiving: Implement a data lifecycle policy for TEST_RUN_HISTORY. Archive test runs older than 90 days to a separate table:


-- Pseudocode - Key archiving steps:
1. Create TEST_RUN_ARCHIVE table with identical schema
2. Move records WHERE execution_date < CURRENT_DATE - 90
3. Create scheduled job to run archiving monthly
4. Update Analysis Views to query active table only
5. Create separate historical reports for archived data

Archiving keeps your active table size manageable, which keeps report query times low and predictable even once proper indexing is in place.
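The archiving steps above could look roughly like this in Oracle-style SQL. Everything except TEST_RUN_HISTORY and execution_date is an assumption here - adjust names to your schema, and size the copy/delete batches to your undo/redo capacity:

```sql
-- One-time setup: archive table with identical structure, no rows copied
CREATE TABLE TEST_RUN_ARCHIVE AS
SELECT * FROM TEST_RUN_HISTORY WHERE 1 = 0;

-- Monthly job: copy runs older than 90 days, then remove them
INSERT INTO TEST_RUN_ARCHIVE
SELECT * FROM TEST_RUN_HISTORY
WHERE execution_date < CURRENT_DATE - 90;

DELETE FROM TEST_RUN_HISTORY
WHERE execution_date < CURRENT_DATE - 90;

COMMIT;
```

On SQL Server you would use SELECT ... INTO for the setup step and DATEADD for the cutoff; the structure of the job is the same.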

After implementing these changes, test with your largest release dataset. The combination of proper indexing, parameterized queries, appropriate timeout configuration, and data archiving should eliminate the timeouts and bring your release readiness reports down to seconds, even with 10,000+ test runs.

I agree with the archiving approach, but there’s also a configuration aspect to address. The default report timeout in mf-24 is conservative at 120 seconds. You can increase it, but more importantly, you should enable parameterized query caching. When ALM caches the execution plan for your parameterized release_id query, subsequent reports run much faster. Check if query.cache.enabled=true is set in your ALM server configuration.

Query caching is enabled. I’m willing to implement both the composite index and archiving strategy if that’s what’s needed. What’s the recommended approach for creating the composite index without impacting production test execution?

Composite indexes help with both filtering and grouping when the index columns match your query structure. However, database indexing alone might not fully solve your timeout problem if you’re analyzing historical data going back multiple releases. Consider implementing test run archiving - move completed test runs older than 90 days to an archive table. This keeps your active TEST_RUN_HISTORY table smaller and makes real-time progress reports much faster. You can still access archived data through separate historical reports when needed.

We do have indexes on release_id and test_id individually, but not a composite index. Would creating a composite index require downtime? Also, will that help with the GROUP BY aggregation performance or just the WHERE clause filtering?