Automated test pipeline integration between GitLab CI and Rally

We recently automated our test result synchronization between GitLab CI and Rally, eliminating manual result logging and achieving 85% time savings for our QA team. Before automation, test engineers spent 2-3 hours daily copying test results from GitLab pipeline reports into Rally test case execution records.

Our implementation uses GitLab CI webhook integration to trigger result updates whenever a test pipeline completes. The webhook payload includes test execution metadata that we map to Rally’s REST API format, creating TestCaseResult objects automatically. We also implemented test cycle automation that creates new test sets for each sprint and links test cases based on user story associations.

The most challenging aspect was artifact synchronization - ensuring test logs, screenshots, and performance metrics from GitLab are attached to the corresponding Rally test results. I’ll share our technical approach and lessons learned.

The artifact synchronization piece is what I’m most interested in. Attaching test logs and screenshots from GitLab to Rally test results seems complex. Does Rally’s REST API support binary file uploads, or do you store artifacts elsewhere and just link them? And how do you handle large files - our performance test results can be 50-100MB with detailed traces.

This sounds like exactly what we need! How did you handle the GitLab CI webhook integration? Did you build a custom middleware service to receive webhooks and transform them into Rally API calls, or did you use GitLab’s built-in webhook features directly? Also, what happens if the Rally API is down when a pipeline completes - do you have a retry mechanism?

How did you implement the test cycle automation? Creating test sets for each sprint manually is currently a major bottleneck for us. Do you automatically detect sprint boundaries and create the test sets, or is there some manual triggering involved? Also, how do you handle test case-to-user-story associations - do you rely on tags or some other mechanism?

Let me provide a comprehensive overview of our implementation covering all four focus areas:

GitLab CI Webhook Integration: We configured GitLab to send pipeline completion webhooks to our integration service endpoint. The webhook configuration in GitLab includes pipeline events with test report artifacts enabled. Our service validates webhook signatures using GitLab’s secret token to prevent unauthorized requests. The critical design decision was making the webhook receiver stateless - it immediately queues the event and returns 200 OK to GitLab, preventing timeout issues. The actual Rally API interaction happens asynchronously in background workers.
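As a rough illustration of that validate-queue-ack flow, here is a minimal Python sketch. The handler, queue, and secret names are placeholders, and an in-process queue stands in for the real message broker - this is not the production service, just the shape of the design decision described above:

```python
import hmac
import queue

# Illustrative names only; this sketch shows the validate-queue-ack flow.
GITLAB_SECRET_TOKEN = "s3cret"
EVENT_QUEUE = queue.Queue()  # stands in for a real message broker

def handle_webhook(headers, payload):
    """Validate GitLab's secret token, queue the event, return immediately."""
    # GitLab echoes the configured secret in the X-Gitlab-Token header;
    # compare_digest avoids timing side channels.
    token = headers.get("X-Gitlab-Token", "")
    if not hmac.compare_digest(token, GITLAB_SECRET_TOKEN):
        return 401, "invalid token"
    # Queue only finished pipelines; ack everything else so GitLab never retries.
    status = payload.get("object_attributes", {}).get("status")
    if payload.get("object_kind") == "pipeline" and status in ("success", "failed"):
        EVENT_QUEUE.put(payload)  # drained asynchronously by background workers
    return 200, "queued"
```

The point of returning 200 before touching Rally is that GitLab's webhook delivery has a short timeout; anything slow belongs in the workers.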

REST API Result Mapping: The mapping layer is a five-stage transformation pipeline:

// Pseudocode - Transformation stages:
1. Parse JUnit XML from GitLab artifacts
2. Lookup Rally TestCase references from mapping table
3. Transform to Rally JSON schema with required fields
4. Validate payload against Rally API specification
5. Submit batch of TestCaseResults via POST request
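Stage 1 can be sketched in a few lines of Python. The verdict strings here are assumptions - map them to whatever verdict values your Rally workspace is configured with:

```python
import xml.etree.ElementTree as ET

def parse_junit(xml_text):
    """Stage 1: flatten a JUnit XML report into name/verdict records.

    Verdict names are illustrative; align them with your workspace's
    configured Rally verdict values.
    """
    results = []
    # JUnit reports may start at <testsuites> or <testsuite>; iter() handles both.
    for case in ET.fromstring(xml_text).iter("testcase"):
        name = f'{case.get("classname", "")}.{case.get("name", "")}'
        if case.find("failure") is not None or case.find("error") is not None:
            verdict = "Fail"
        elif case.find("skipped") is not None:
            verdict = "Blocked"
        else:
            verdict = "Pass"
        results.append({"name": name, "verdict": verdict})
    return results
```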

We handle the FormattedID-to-ObjectID mapping by caching Rally’s TestCase collection locally and refreshing it daily. This avoids repeated API queries during result submission. For test cases without Rally mappings, we log them to a review queue where QA can establish the associations.
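A sketch of that cache-plus-review-queue idea in Python - the class and field names are illustrative, and the injected fetch function stands in for paging through Rally's TestCase collection:

```python
import time

class TestCaseCache:
    """Caches FormattedID -> ObjectID pairs with a daily refresh (sketch)."""

    def __init__(self, fetch_fn, ttl_seconds=86400):
        self.fetch_fn = fetch_fn      # stands in for querying Rally's TestCase collection
        self.ttl_seconds = ttl_seconds
        self.loaded_at = 0.0
        self.mapping = {}
        self.review_queue = []        # unmapped IDs for QA to associate later

    def lookup(self, formatted_id):
        # Refresh the local cache once the TTL expires (daily, per the post).
        if time.time() - self.loaded_at > self.ttl_seconds:
            self.mapping = self.fetch_fn()
            self.loaded_at = time.time()
        object_id = self.mapping.get(formatted_id)
        if object_id is None:
            self.review_queue.append(formatted_id)  # surface for manual mapping
        return object_id
```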

Test Cycle Automation: Sprint-based test set creation is triggered by Rally’s iteration change events. We subscribe to Rally’s webhook notifications for iteration updates. When a new sprint starts, our automation:

  • Queries Rally for user stories in the current iteration
  • Extracts test case associations from user story requirements
  • Creates a test set named ‘Sprint-{N}-Automated-Tests’
  • Populates it with the associated test cases
  • Configures the test set’s TestFolder to match the sprint structure

The user story associations come from Rally’s Requirements field on test cases. We maintain these relationships as part of our test case authoring process.
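Putting those steps together, the create payload might look like this minimal Python sketch. The field names follow Rally's TestSet schema, but every ref here is a placeholder rather than a real object reference:

```python
def build_test_set_payload(sprint_number, iteration_ref, test_case_refs, project_ref):
    """Assemble the REST API create body for a sprint's test set (sketch)."""
    return {
        "TestSet": {
            "Name": f"Sprint-{sprint_number}-Automated-Tests",
            "Iteration": iteration_ref,   # placeholder ref, e.g. "/iteration/<id>"
            "Project": project_ref,
            "TestCases": [{"_ref": ref} for ref in test_case_refs],
        }
    }
```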

Artifact Synchronization: This was indeed the most complex component. Rally’s REST API supports attachment uploads via multipart/form-data, but we optimized for performance by storing large artifacts in GitLab and linking them:

# For small artifacts (<5MB): Direct upload to Rally
# (simplified - the WSAPI actually takes two POSTs: one creating the
# AttachmentContent object that holds the base64 payload, then one creating
# the Attachment that links that content to the test result)
attachment_data = {
    "AttachmentContent": {
        "Content": base64_encoded_data
    },
    "Artifact": test_result_ref,
    "Name": "test_output.log"
}

# For large artifacts: Store link in Notes field
notes = f"Test logs: {gitlab_artifact_url}"

Screenshots and small logs get uploaded directly to Rally as attachments. Performance test results and video recordings remain in GitLab, with URLs stored in Rally’s Notes field. This hybrid approach keeps Rally responsive while maintaining artifact access.
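The size-based routing can be condensed into one helper. The 5MB threshold comes from the post above; the function name, URL, and payload shape are illustrative, not the production code:

```python
import base64

SIZE_THRESHOLD = 5 * 1024 * 1024  # the 5MB cutoff described above

def route_artifact(name, content, gitlab_url, test_result_ref):
    """Return an upload payload for small artifacts, or a Notes link otherwise."""
    if len(content) < SIZE_THRESHOLD:
        return {
            "kind": "upload",
            "payload": {
                "AttachmentContent": {"Content": base64.b64encode(content).decode("ascii")},
                "Artifact": test_result_ref,
                "Name": name,
            },
        }
    # Large artifacts stay in GitLab; only the URL travels to Rally's Notes field.
    return {"kind": "link", "notes": f"Test logs: {gitlab_url}"}
```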

Results: After implementing this integration, our QA team reduced manual result logging from 2-3 hours daily to zero. Test result accuracy improved because we eliminated human transcription errors. The automation also enabled real-time test coverage dashboards in Rally that update within minutes of pipeline completion. Total development effort was about 3 weeks for a two-person team, and we’ve been running this in production for 6 months with 99.5% uptime. The 85% time savings calculation comes from comparing pre/post automation QA team time allocation - they now spend those hours on exploratory testing instead of data entry.

We built a lightweight Node.js service that acts as a webhook receiver and Rally API client. GitLab sends pipeline completion events to this service, which validates the payload, extracts test results, and makes authenticated REST API calls to Rally. For reliability, we implemented a message queue (RabbitMQ) that buffers webhook events. If Rally API is unavailable, events stay queued and get retried with exponential backoff. This approach handles network issues and Rally maintenance windows gracefully.
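The retry behavior can be sketched as a single worker function. Here `post_fn` stands in for the authenticated Rally API call, and `sleep` is injectable for testing - in the real setup redelivery happens through RabbitMQ's requeue mechanics rather than an in-process sleep:

```python
import time

def deliver_with_retry(post_fn, event, max_attempts=6, base_delay=1.0, sleep=time.sleep):
    """Post one queued event to Rally, backing off exponentially on failure."""
    for attempt in range(max_attempts):
        try:
            return post_fn(event)
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # dead-letter the event after exhausting retries
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```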

Can you elaborate on the REST API result mapping? Rally’s TestCaseResult schema is pretty specific about required fields. How do you map GitLab’s test output format to Rally’s expected structure? We’ve struggled with this because our GitLab pipelines output JUnit XML, but Rally needs JSON with specific field names and object references.

The mapping is definitely the trickiest part. We parse GitLab’s JUnit XML output and transform it into Rally’s JSON format:

const testResult = {
  TestCaseResult: {
    TestCase: `/testcase/${testCaseId}`,  // ref built from the mapping table's ObjectID
    Build: `GitLab-${pipelineId}`,
    Verdict: status === 'passed' ? 'Pass' : 'Fail',
    Date: executionTime  // ISO 8601 execution timestamp
  }
};

The key is maintaining a mapping table that correlates GitLab test names to Rally TestCase FormattedIDs. We store this in a configuration file that gets updated when new test cases are created in Rally.
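For illustration, that correlation table could be a small JSON file like this sketch - the file shape, test names, and FormattedIDs here are assumed, not the actual configuration:

```python
import json

# Assumed file shape: GitLab test name -> Rally FormattedID.
MAPPING_JSON = '''
{
  "tests.login.test_valid_credentials": "TC1042",
  "tests.checkout.test_payment_declined": "TC1077"
}
'''

def rally_formatted_id(mapping, gitlab_test_name):
    """Return the Rally FormattedID, or None to route into a review queue."""
    return mapping.get(gitlab_test_name)

mapping = json.loads(MAPPING_JSON)
```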