Automated tests creating duplicate defects in Jira DC defect tracker

We’re running Cypress tests in parallel across 6 containers, and when a test fails, it’s supposed to create a defect ticket in Jira DC automatically. The problem is that parallel test runs are creating duplicate defects for the same failure.

For example, if three containers hit the same login bug, we get three separate defect tickets instead of one. We’re using Xray for test management, and each Cypress runner sends its failure report independently via REST API.

// In cypress/support/e2e.js -- note Cypress.on, not cy.on:
// cy.on only binds for the current test, Cypress.on registers globally.
Cypress.on('test:after:run', (test) => {
  if (test.state === 'failed') {
    createJiraDefect({
      summary: test.title,
      stackTrace: test.err.stack
    });
  }
});

This is blocking our triage process because the team spends hours deduplicating tickets manually. We need a way to hash the stack trace or use custom field mapping to detect duplicates before creating new defects.

Here’s a comprehensive solution covering Xray configuration, failure hashing, duplicate detection, and race-condition handling:

1. Xray Deduplication Configuration: First, create a custom field in Jira DC called “Failure Signature” (type: Text Field - single line). Then configure Xray to use this field for deduplication:

  • Go to Xray Settings → Test Execution Settings
  • Enable “Defect Deduplication”
  • Select “Failure Signature” as the deduplication field
  • Set deduplication window to 7 days (adjust based on your sprint length)
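
If you prefer to script the field creation rather than click through the UI, Jira DC also exposes it over REST via POST /rest/api/2/field. A sketch of the payload (the type and searcherKey values shown are the standard keys for a single-line text field; the response returns the customfield_NNNNN id you’ll reference later):

```json
{
  "name": "Failure Signature",
  "description": "SHA-256 hash of the normalized test failure, used for defect deduplication",
  "type": "com.atlassian.jira.plugin.system.customfieldtypes:textfield",
  "searcherKey": "com.atlassian.jira.plugin.system.customfieldtypes:textsearcher"
}
```

Setting the searcher key matters: without a text searcher attached, the field won’t be queryable from JQL in the dedup lookup.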

2. Cypress Parallel Execution Strategy: Implement a failure aggregation service that sits between Cypress and Jira. Each Cypress container sends failures to this service instead of directly to Jira:

const crypto = require('crypto');

// Hash the stable parts of a failure: test title, error message, and the
// top of the stack trace with line/column numbers stripped (those shift
// between builds and would break matching).
function generateFailureSignature(test) {
  const normalizedStack = test.err.stack
    .split('\n')
    .slice(0, 3)
    .map(line => line.replace(/:\d+:\d+/g, ''))
    .join(' ');
  const hashInput = `${test.title}|${test.err.message}|${normalizedStack}`;
  return crypto.createHash('sha256').update(hashInput).digest('hex');
}

3. Stack Trace Hash Implementation: The aggregation service maintains a cache of recent failures and their Jira ticket IDs. When a new failure arrives:

  • Generate the failure signature using the method above
  • Query the cache for existing defects with this signature
  • If found, update the existing defect with a new occurrence count
  • If not found, check Jira directly using JQL: project = DEFECT AND "Failure Signature" ~ "${hash}"
  • Only create a new defect if no match is found in either cache or Jira
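
That lookup order can be sketched as follows. The Map stands in for the Redis cache, and searchBySignature / addOccurrence / createDefect are hypothetical wrappers around your real Jira REST calls, not an actual client library:

```javascript
// In-memory stand-in for the Redis cache: failure signature -> Jira issue key.
const signatureCache = new Map();

// `jira` is any object exposing searchBySignature / addOccurrence /
// createDefect (illustrative names -- wire these to your REST calls).
async function resolveDefect(signature, testData, jira) {
  // 1. Cheap path: a cache hit means the defect was seen recently.
  let issueKey = signatureCache.get(signature);

  // 2. Cache miss: fall back to a JQL search against Jira itself.
  if (!issueKey) {
    issueKey = await jira.searchBySignature(signature);
  }

  if (issueKey) {
    // 3. Known failure: record a new occurrence instead of filing a ticket.
    await jira.addOccurrence(issueKey, testData);
  } else {
    // 4. Genuinely new failure: create the defect and remember its key.
    issueKey = await jira.createDefect(signature, testData);
  }

  signatureCache.set(signature, issueKey);
  return issueKey;
}
```

The cache-then-Jira order keeps most lookups off the Jira API entirely, which matters when six containers are reporting failures at once.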

4. Custom Field Mapping for Duplicate Detection: When creating a defect, populate these custom fields:

{
  "fields": {
    "project": {"key": "DEFECT"},
    "summary": "Cypress Test Failed: ${test.title}",
    "description": "Test: ${test.title}\nError: ${test.err.message}\nStack: ${test.err.stack}",
    "customfield_10050": "${failureSignature}",
    "customfield_10051": 1,
    "customfield_10052": "${timestamp}"
  }
}

Where:

  • customfield_10050 = Failure Signature (the hash)
  • customfield_10051 = Occurrence Count (increment on duplicates)
  • customfield_10052 = First Seen Timestamp
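
A small helper to assemble that payload (the customfield IDs here are the example values above; substitute the IDs from your own instance):

```javascript
// Builds the Jira create-issue payload described above.
// customfield_10050-10052 are the example IDs from this answer;
// look up the real field IDs in your own Jira DC instance.
function buildDefectPayload(test, failureSignature, timestamp) {
  return {
    fields: {
      project: { key: 'DEFECT' },
      summary: `Cypress Test Failed: ${test.title}`,
      description:
        `Test: ${test.title}\nError: ${test.err.message}\nStack: ${test.err.stack}`,
      customfield_10050: failureSignature, // Failure Signature (the hash)
      customfield_10051: 1,                // Occurrence Count starts at 1
      customfield_10052: timestamp         // First Seen Timestamp
    }
  };
}
```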

Race Condition Prevention: Implement a distributed lock in your aggregation service so only one worker can act on a given signature at a time:

const Redis = require('ioredis'); // any client supporting SET NX EX works
const redis = new Redis();

const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

async function createOrUpdateDefect(failureSignature, testData, attempt = 0) {
  const lockKey = `defect:${failureSignature}`;
  // NX = only set if the key is absent; EX 30 = auto-expire so a crashed
  // worker can never hold the lock forever.
  const lock = await redis.set(lockKey, 'locked', 'NX', 'EX', 30);
  if (!lock) {
    // Bounded retry instead of recursing forever if the lock never frees.
    if (attempt >= 30) {
      throw new Error(`Could not acquire lock for ${lockKey}`);
    }
    await sleep(1000);
    return createOrUpdateDefect(failureSignature, testData, attempt + 1);
  }
  try {
    const existingDefect = await searchJiraDefect(failureSignature);
    if (existingDefect) {
      await updateDefectOccurrence(existingDefect.key);
    } else {
      await createNewDefect(failureSignature, testData);
    }
  } finally {
    await redis.del(lockKey);
  }
}

Deployment Architecture: Deploy the aggregation service as a separate container in your CI/CD pipeline:

  • Cypress containers → Aggregation Service → Jira DC
  • Use Redis for distributed locking and caching
  • Configure Cypress to send failures to the aggregation service endpoint instead of directly to Jira

Triage Process Improvement: With this solution, your triage board will show:

  • Single defect per unique failure with an “Occurrences” field showing how many times it happened
  • First seen timestamp to prioritize older issues
  • All parallel execution failures consolidated into one ticket
  • Comments added automatically for each new occurrence with timestamp and container ID
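
The automatic occurrence comments are a small payload against Jira DC’s POST /rest/api/2/issue/{issueKey}/comment endpoint. A sketch, where containerId is whatever identifier your CI pipeline exposes (the helper name is illustrative):

```javascript
// Comment payload for POST /rest/api/2/issue/{issueKey}/comment (Jira DC v2 API).
// containerId comes from your CI environment -- use whatever node/container
// index variable your pipeline provides.
function buildOccurrenceComment({ containerId, timestamp, occurrence }) {
  return {
    body: `Occurrence #${occurrence} at ${timestamp} (container: ${containerId}).`
  };
}
```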

This eliminates manual deduplication work and provides better visibility into failure patterns across your parallel test execution environment.

We had the same issue with parallel Cypress runs. The problem is that each test runner has no awareness of what other runners are doing. You need a centralized service that checks for existing defects before creating new ones. Try implementing a Redis cache that stores recent failure signatures, so each runner can check if a defect already exists for that failure pattern.

For parallel execution, you also need to handle race conditions. If two Cypress containers check for duplicates at the same moment and both find nothing, they’ll both create defects. Implement a distributed lock using your CI/CD system’s locking mechanism or a database lock to ensure only one container can create a defect for a given hash at a time.
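
To illustrate why the lock matters, here is a minimal in-process sketch that serializes check-then-create per hash. This only demonstrates the principle: a real multi-container setup needs the shared lock described above (Redis, a database row lock, or your CI system’s locking), since an in-process lock can’t coordinate separate containers:

```javascript
// hash -> tail of the promise chain for that hash.
const chains = new Map();

// Runs `task` only after every previously queued task for `key` has
// finished, closing the check-then-act window for that key.
function withLock(key, task) {
  const tail = chains.get(key) || Promise.resolve();
  // Run whether the previous task resolved or rejected.
  const run = tail.then(task, task);
  // Store an error-swallowing version so one failure doesn't poison the chain.
  chains.set(key, run.catch(() => {}));
  return run;
}
```

Without this serialization, two runners that both check Jira before either creates a ticket will both see “no duplicate” and both file one, which is exactly the failure mode described above.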

You should hash a combination of the test name, the error message, and the first few lines of the stack trace (excluding line numbers which change frequently). Use a SHA-256 hash and store it in a text custom field in Jira. Before creating a defect, query Jira for any existing defects with the same hash value in that custom field. If found, just add a comment to the existing defect instead of creating a new one.
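
One detail worth noting when building that query: JQL’s ~ operator performs fuzzy text matching, so wrap the hash in escaped quotes to force an exact phrase match. A sketch (the project key and field name are the examples used in this thread):

```javascript
// Builds a JQL query that matches the signature field exactly.
// A bare ~ "abc" does fuzzy text matching; the escaped inner quotes
// turn it into an exact phrase search.
function buildDedupJql(hash, project = 'DEFECT', field = 'Failure Signature') {
  return `project = ${project} AND "${field}" ~ "\\"${hash}\\""`;
}
```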

Xray has built-in deduplication features, but you need to configure them properly. In your Xray test execution settings, enable the “Link to Existing Defects” option and set up a custom field that stores a hash of the error message. When a new failure occurs, Xray will search for existing defects with the same hash value before creating a new ticket.