Automated bi-directional defect sync between test execution and defect tracking

We implemented automated bi-directional synchronization between test execution results and defect tracking to eliminate manual defect logging. Previously, testers had to manually create defects from failed tests, leading to delays and inconsistent defect documentation.

Our solution uses REST API webhooks triggered by test execution events. When a test fails, the system automatically creates a defect with full test context - including test case details, execution environment, failure screenshots, and logs. The defect is automatically linked to the failed test case for traceability.

In the reverse direction, when defects are resolved, the system automatically triggers re-execution of associated test cases and updates defect status based on test results.

This automation reduced our defect logging time by 42% and improved escape metrics by ensuring every test failure is tracked as a defect with complete context for analysis.

This sounds like exactly what we need. How did you handle the webhook configuration? Are you using Polarion’s built-in webhook support or did you build a custom integration service?

I’ll provide the complete technical implementation and the measurable business outcomes we achieved.

Technical Architecture:

Our bi-directional sync system has three main components:

  1. Test Execution Event Listener: Integrated into our test automation framework (TestNG/JUnit), this component captures test execution events and publishes them to a message queue. Key events:
  • testStarted
  • testPassed
  • testFailed
  • testSkipped
  2. Integration Service (Node.js microservice): Consumes test execution events and orchestrates Polarion interactions via the Polarion REST API (core logic shown below in simplified Java-style pseudocode):
// Simplified defect auto-creation logic
public void handleTestFailure(TestResult result) {
  // Filter out known false positives
  if (shouldCreateDefect(result)) {
    DefectWorkItem defect = new DefectWorkItem();
    defect.setTitle("Test Failure: " + result.getTestName());
    defect.setSeverity(calculateSeverity(result));
    defect.setDescription(buildDefectDescription(result));

    // Attach test context
    attachScreenshot(defect, result.getScreenshotPath());
    attachLogs(defect, result.getLogFile());

    // Create traceability link
    linkToTestCase(defect, result.getTestCaseId());

    polarionClient.createWorkItem(defect);
  }
}
  3. Polarion Webhook Handler: Receives notifications from Polarion when defects transition to ‘resolved’ status and triggers test re-execution:
// Webhook endpoint for defect status changes
@POST
@Path("/defect-status-changed")
public Response handleDefectStatusChange(DefectEvent event) {
  if ("resolved".equals(event.getNewStatus())) { // null-safe status check
    List<String> linkedTestCases =
      getLinkedTestCases(event.getDefectId());

    for (String testCaseId : linkedTestCases) {
      testExecutionQueue.scheduleRetest(testCaseId);
    }
  }
  return Response.ok().build();
}
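The event listener on the test-framework side can be sketched as a small adapter that maps framework callbacks to the four event names above and enqueues a payload for the integration service. This is a minimal illustration under assumptions: the class names, payload shape, and in-memory queue (standing in for the real message broker) are hypothetical, not the production implementation.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch: maps test-framework outcomes to the event names
// consumed by the integration service. Names and payload fields are
// illustrative only.
public class TestEventPublisher {

    public enum Outcome { STARTED, PASSED, FAILED, SKIPPED }

    // Stands in for the real message queue (e.g. a broker producer).
    private final Queue<String> queue = new ConcurrentLinkedQueue<>();

    public void publish(String testCaseId, Outcome outcome) {
        String eventName;
        switch (outcome) {
            case STARTED: eventName = "testStarted"; break;
            case PASSED:  eventName = "testPassed";  break;
            case FAILED:  eventName = "testFailed";  break;
            default:      eventName = "testSkipped"; break;
        }
        // Minimal JSON-ish payload; a real listener would also attach
        // environment, build version, and timestamps.
        queue.add("{\"event\":\"" + eventName
                + "\",\"testCaseId\":\"" + testCaseId + "\"}");
    }

    public String poll() { return queue.poll(); }
}
```

In a TestNG or JUnit setup, this publisher would be invoked from the framework's listener hooks so test code itself stays untouched.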

Test Context Capture:

We capture comprehensive test context automatically:

  1. Screenshots: Captured at failure point and uploaded to Polarion as attachments via REST API. For web tests, we capture full-page screenshots plus focused element screenshots.

  2. Logs: Test execution logs are parsed to extract relevant error messages and stack traces. These are included in the defect description with full logs attached as files.

  3. Environment Data: Automatically captured and added to defect custom fields:

  • Browser/device type and version
  • Operating system
  • Test environment (dev/staging/prod)
  • Build version under test
  • Timestamp and duration
  4. Test Data: Input parameters and test data used during execution are documented in the defect for reproducibility.
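Assembling the captured context into a defect body can be sketched as below. This is a minimal illustration; the class name, field layout, and wording are assumptions, not the actual `buildDefectDescription` implementation referenced earlier.

```java
import java.util.Map;

// Hypothetical sketch of a defect-description builder: combines the
// error, environment data, and a pointer to attachments into one body.
public class DefectDescriptionBuilder {

    public static String build(String testName, String errorMessage,
                               Map<String, String> environment) {
        StringBuilder sb = new StringBuilder();
        sb.append("Test Failure: ").append(testName).append("\n\n");
        sb.append("Error:\n").append(errorMessage).append("\n\n");
        sb.append("Environment:\n");
        for (Map.Entry<String, String> e : environment.entrySet()) {
            sb.append("  ").append(e.getKey()).append(": ")
              .append(e.getValue()).append("\n");
        }
        sb.append("\nFull logs and screenshots attached.");
        return sb.toString();
    }
}
```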

Bi-Directional Sync Implementation:

Forward Sync (Test → Defect):

  • Test fails → Event published to queue
  • Integration service receives event within 5 seconds
  • Flaky test filter applied (checks last 10 executions)
  • If legitimate failure, defect auto-created in Polarion
  • Defect linked to test case via ‘verifiedBy’ relationship
  • Test case status updated to ‘failed’ with link to new defect

Reverse Sync (Defect → Test):

  • Defect transitions to ‘resolved’ → Polarion webhook fires
  • Integration service receives notification
  • Linked test cases retrieved via traceability API
  • Test cases queued for automated re-execution (priority queue)
  • Re-test results automatically update defect:
    • If test passes: defect → ‘verified’, add comment with test run link
    • If test fails: defect → ‘reopened’, attach new failure details
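The retest-result step of the reverse sync reduces to a small state decision. A sketch, assuming hypothetical method names and status strings matching the workflow above:

```java
// Hypothetical sketch of the reverse-sync decision: maps a retest
// outcome to the defect's next workflow state plus an audit comment.
public class RetestResultHandler {

    public static String nextDefectStatus(boolean retestPassed) {
        // Mirrors the workflow above: pass -> verified, fail -> reopened.
        return retestPassed ? "verified" : "reopened";
    }

    public static String auditComment(boolean retestPassed, String testRunId) {
        return retestPassed
            ? "Verified by automated retest " + testRunId
            : "Retest " + testRunId + " failed; reopening with new failure details";
    }
}
```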

Intelligent Filtering for False Positives:

Our filtering logic significantly reduced noise:

  1. Flaky Test Detection:
  • Maintains 30-day history of test execution results
  • Tests with >20% intermittent failure rate flagged as flaky
  • Flaky test failures create ‘Test Incident’ work items, not defects
  2. Environment Issue Detection:
  • Analyzes failure logs for environment-related keywords (timeout, connection refused, service unavailable)
  • Checks if multiple unrelated tests failed simultaneously (suggests environment issue)
  • Environment issues create ‘Infrastructure Alert’ work items
  3. Known Issue Matching:
  • Compares failure signature (error message hash) against open defects
  • If match found, adds comment to existing defect instead of creating duplicate
  • Tracks occurrence count on existing defect
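The combined filtering decision can be sketched as follows. The 20% flaky-rate threshold comes from the text; the signature-as-hash representation and the plain-collection inputs are simplifications for illustration, not the production data model.

```java
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the pre-creation filtering decision.
public class DefectFilter {

    // Flaky if more than 20% of recent executions failed intermittently.
    public static boolean isFlaky(List<Boolean> recentFailures) {
        long fails = recentFailures.stream().filter(f -> f).count();
        return !recentFailures.isEmpty()
            && (double) fails / recentFailures.size() > 0.20;
    }

    // Duplicate if this failure's signature matches an open defect.
    public static boolean isKnownIssue(String errorMessage,
                                       Set<Integer> openDefectSignatures) {
        return openDefectSignatures.contains(errorMessage.hashCode());
    }

    public static boolean shouldCreateDefect(List<Boolean> recentFailures,
                                             String errorMessage,
                                             Set<Integer> openDefectSignatures) {
        // Only genuinely new, non-flaky failures become defects.
        return !isFlaky(recentFailures)
            && !isKnownIssue(errorMessage, openDefectSignatures);
    }
}
```

In the flaky and known-issue branches, the service would create a ‘Test Incident’ work item or comment on the existing defect instead, as described above.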

Measurable Business Outcomes:

Efficiency Improvements:

  • Defect logging time: reduced from 8 minutes average to 4.6 minutes (42% reduction)
  • Test-to-defect latency: reduced from 2-4 hours to under 5 minutes
  • Defect documentation completeness: improved from 67% to 94%
  • Duplicate defect rate: reduced from 18% to 6%

Escape Metrics Improvement:

  • Defect escape rate (defects found in production): reduced from 12% to 7%
  • Root cause analysis revealed that 40% of previous escapes were due to test failures that weren’t logged as defects
  • With automated sync, every test failure is now tracked, ensuring nothing slips through

Quality Metrics:

  • Test case traceability: improved from 78% to 98% (automated linking)
  • Defect retest cycle time: reduced from 3 days to 8 hours
  • Defect resolution verification rate: improved from 82% to 97%

Audit and Compliance:

  • Complete audit trail: every test failure has associated defect with full context
  • Traceability validation: automated validation that resolved defects have passing tests
  • Compliance reporting: 100% of production defects now traceable to test gaps

Implementation Timeline:

  • Week 1-2: REST API integration and webhook setup
  • Week 3-4: Test framework instrumentation and event publishing
  • Week 5-6: Filtering logic and false positive detection
  • Week 7-8: Pilot with 2 teams (20% of test cases)
  • Week 9-12: Full rollout and refinement

Key Success Factors:

  1. Start with a pilot to refine filtering logic before full rollout
  2. Invest time in flaky test detection to avoid defect noise
  3. Make test context capture automatic (screenshots, logs, environment)
  4. Use message queues for reliability (events not lost if Polarion temporarily unavailable)
  5. Monitor false positive rate weekly and tune filters
  6. Provide manual override capability for edge cases

The ROI was clear within 6 weeks - time savings, quality improvements, and reduced escapes justified the development investment.

How are you capturing test context like screenshots and logs? Are those stored in Polarion as attachments or linked externally?

We built a lightweight integration service that listens for test execution events from our test automation framework. The service makes REST API calls to Polarion to create defects and update test case status. We used Polarion’s REST API for work item creation and link management.

For the reverse sync (defect resolution triggering test re-execution), we configured Polarion webhooks to notify our integration service when defects transition to ‘resolved’ status. The service then queues the linked test cases for automated re-execution.

Good question. We implemented intelligent filtering before auto-creating defects:

if (testResult.getStatus() == Status.FAILED) {
  if (isKnownFlakyTest(testResult.getTestCase()) ||
      isEnvironmentIssue(testResult.getFailureLog())) {
    createTestIncident(testResult); // tracked separately, not a defect
  } else {
    autoCreateDefect(testResult);
  }
}

Flaky tests are tracked separately as test incidents rather than defects. We also analyze failure patterns - if the same test fails multiple times across different environments, it’s more likely a real defect and gets auto-created. Single isolated failures are flagged for manual review before defect creation.
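The cross-environment rule described above can be sketched as a simple classifier. The class name, the two-environment threshold, and the `Action` values are illustrative assumptions drawn from the description, not the exact implementation.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the failure-pattern rule: the same test
// failing across multiple environments auto-creates a defect, while a
// single isolated failure is routed to manual review.
public class FailurePatternAnalyzer {

    public enum Action { AUTO_CREATE_DEFECT, MANUAL_REVIEW }

    public static Action classify(List<String> failedEnvironments) {
        Set<String> distinct = new HashSet<>(failedEnvironments);
        // Failures in two or more environments suggest a real defect.
        return distinct.size() >= 2 ? Action.AUTO_CREATE_DEFECT
                                    : Action.MANUAL_REVIEW;
    }
}
```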