We’re setting up comprehensive load testing for our Rally 2024 release planning module and want to share our approach while getting community feedback. Our challenge is creating realistic test scenarios that match actual user behavior during PI planning sessions.
We’re using JMeter combined with Rally’s WSAPI but struggling with the gap between synthetic test data and production usage patterns. Key areas we’re focusing on:
// Sample test scenario structure
Thread Group: Release Planning Users (n=75)
- Login and navigate to release view
- Load feature hierarchy with dependencies
- Update feature estimates and assignments
- Generate dependency reports
Think time: 8-15 seconds between actions
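For readers who'd rather prototype the pacing logic outside JMeter, the thread-group steps above can be sketched as a small Python driver. The step functions here are empty placeholders standing in for real WSAPI calls (names and structure are our own, not Rally's API), so this is a pacing sketch, not a drop-in load script:

```python
import random
import time

def think_time(lo=8.0, hi=15.0):
    """Uniform think time in seconds, matching the 8-15 s range above."""
    return random.uniform(lo, hi)

def run_session(steps, pause=think_time, sleep=time.sleep):
    """Run each step in order, pausing a sampled think time after each one.

    Returns the sampled think times so a harness can verify pacing.
    """
    timings = []
    for step in steps:
        step()
        t = pause()
        timings.append(t)
        sleep(t)
    return timings

# Placeholder steps mirroring the scenario: login/navigate, load the
# feature hierarchy, update estimates, generate dependency reports.
steps = [
    lambda: None,  # login and navigate to release view
    lambda: None,  # load feature hierarchy with dependencies
    lambda: None,  # update feature estimates and assignments
    lambda: None,  # generate dependency reports
]
```

Injecting `pause` and `sleep` keeps the pacing logic testable without actually waiting 8-15 seconds per step.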
Our main questions:
- How do you capture real user interaction patterns?
- What's the best way to model think times accurately?
- How do you handle complex dependency graphing under load?
- What monitoring approach works best? We're considering Elasticsearch integration for real-time metrics.
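On the think-time question, one approach we've seen is to fit a lognormal distribution to per-user inter-request gaps mined from production access logs, then sample from it during the test instead of using a fixed uniform range. A minimal stdlib sketch, with made-up gap values standing in for real log data:

```python
import math
import random
import statistics

def fit_lognormal(gaps):
    """Estimate lognormal (mu, sigma) from observed per-user gaps in seconds."""
    logs = [math.log(g) for g in gaps if g > 0]
    mu = statistics.mean(logs)
    sigma = statistics.stdev(logs)
    return mu, sigma

def sample_think_time(mu, sigma, cap=60.0):
    """Draw a think time; cap the long tail so one draw can't stall a thread."""
    return min(random.lognormvariate(mu, sigma), cap)

# Made-up gaps standing in for values mined from production access logs.
observed = [6.2, 9.1, 11.5, 8.4, 14.9, 10.3, 7.7, 12.8]
mu, sigma = fit_lognormal(observed)
```

In JMeter the equivalent would be a JSR223 timer sampling from the fitted distribution rather than a Constant or Uniform Random Timer; the lognormal's long right tail tends to match real "user wandered off" pauses better than a bounded uniform range.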
Would love to hear how others approach performance testing for release planning workflows, especially during peak PI planning periods when 100+ users hit the system simultaneously.