MF 25.3 sprint management: automated perf test scheduling vs. traditional manual scheduling

We’re evaluating automated performance test scheduling in Performance Center with MF 25.3 sprint management versus our current manual approach. Our team runs 15-20 performance tests daily across multiple environments, and manual scheduling is becoming a bottleneck.

Interested in hearing experiences with automated triggers, especially around resource quota management and handling flaky tests. We’ve seen demos of hybrid scheduling approaches where critical tests auto-trigger on commits while regression suites run on fixed schedules.


# Current manual process:
1. QA lead reviews sprint board daily
2. Manually queues tests based on code changes
3. Monitors Controller pool availability
4. Re-runs failed tests manually

How are teams balancing automation with the need for human oversight on resource-intensive performance tests? What’s working in real-world sprint environments?

The hybrid approach works best in our experience. We use CI/CD webhooks to trigger smoke performance tests automatically on every sprint commit, but keep comprehensive load tests on a nightly schedule. This balances quick feedback with resource constraints. Controller pool management is easier when you separate fast smoke tests from heavy load scenarios.
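To make the split concrete, here's a minimal sketch of that hybrid routing decision. The suite names and the event shape are hypothetical, not Performance Center APIs; the idea is just that your CI webhook handler classifies the incoming event and picks the suite accordingly.

```python
# Hypothetical suite names -- map these to your actual test sets.
SMOKE_SUITE = "smoke-perf"    # fast checks, triggered per commit
LOAD_SUITE = "nightly-load"   # heavy scenarios, fixed nightly schedule

def suites_for_event(event: dict) -> list[str]:
    """Decide which performance suites a CI event should trigger."""
    if event.get("type") == "commit":
        # Quick feedback on every sprint commit.
        return [SMOKE_SUITE]
    if event.get("type") == "schedule" and event.get("window") == "nightly":
        # Comprehensive load tests run off-hours to protect the Controller pool.
        return [LOAD_SUITE]
    # Anything else (e.g. doc-only changes) triggers nothing.
    return []

print(suites_for_event({"type": "commit"}))                        # ['smoke-perf']
print(suites_for_event({"type": "schedule", "window": "nightly"})) # ['nightly-load']
```

Keeping the decision in one pure function like this makes the routing policy easy to unit test before wiring it into webhooks.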

That’s helpful context. How do you handle flaky tests in the automated pipeline? Our concern is that intermittent network issues or environment instability could produce spurious failures and block sprint progress unnecessarily.

We implemented a flaky test quarantine system. Tests that fail twice in a row get automatically moved to a separate queue for investigation. They still run on schedule but don’t block builds. Performance Center’s MF 25.3 test execution tracking makes it easy to identify patterns: we review quarantined tests weekly and either fix the underlying issue or adjust thresholds. This keeps the pipeline reliable while preventing good code from being blocked by environmental noise.
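The quarantine rule described above (two consecutive failures moves a test to a non-blocking queue) can be sketched as a small tracker. The class and method names here are illustrative, not Performance Center features:

```python
class QuarantineTracker:
    """Track consecutive failures and quarantine flaky tests.

    Quarantined tests still run on schedule but no longer block builds.
    The threshold of 2 matches the 'fail twice in a row' rule.
    """
    CONSECUTIVE_FAILS_TO_QUARANTINE = 2

    def __init__(self):
        self._streak = {}         # test name -> consecutive failure count
        self.quarantined = set()  # tests flagged for weekly review

    def record(self, test: str, passed: bool) -> None:
        if passed:
            self._streak[test] = 0  # any pass resets the streak
        else:
            self._streak[test] = self._streak.get(test, 0) + 1
            if self._streak[test] >= self.CONSECUTIVE_FAILS_TO_QUARANTINE:
                self.quarantined.add(test)

    def blocks_build(self, test: str, passed: bool) -> bool:
        """A failure blocks the build only if the test is not quarantined."""
        return not passed and test not in self.quarantined

# Usage: after two straight failures, 'checkout_load' stops blocking builds.
tracker = QuarantineTracker()
tracker.record("checkout_load", passed=False)
tracker.record("checkout_load", passed=False)
print("checkout_load" in tracker.quarantined)  # True
```

One design choice worth noting: resetting the streak on any pass means genuinely intermittent tests can hover just under the threshold, so the weekly review of near-misses is still worthwhile.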