Linking automated test failures to requirement status in Jira

I’m trying to automatically update requirement issues in Jira 9 when linked automated tests fail in our CI pipeline. The goal is to transition requirements back to “Under Review” if any linked test execution fails.

I’ve set up Jira automation with branching on linked issues, but the requirement status gets updated incorrectly: sometimes it transitions even when tests pass, and other times it doesn’t transition when it should.

My automation rule branches on “Tests” link type and checks the test execution status field. I think the issue might be with how I’m handling the link directionality in JQL or missing guard conditions:


issueFunction in linkedIssuesOf("key = {{issue.key}}", "Tests")
AND status = "Failed"

This is breaking our quality dashboards because requirement coverage metrics show false negatives. Has anyone implemented requirement-to-test status propagation with proper guards to prevent transition loops?

We had similar issues with test-to-requirement automation. One thing that helped us was using Smart Values to check the current status before transitioning. In your branch rule, add a condition like {{issue.status.name}} does not equal “Under Review” before the transition action. Also make sure your JQL in the branch is actually returning the requirements, not the test executions themselves. The linkedIssuesOf function can be tricky with direction.

Thanks Sarah. I checked and our test executions are linked as “Tests” (outbound from requirement). So in the automation, when I branch on linked issues, should I be using “is tested by” instead? And for the guard condition, do you mean adding a condition before the transition action that checks {{requirement.status}} != "Under Review"?

The link direction is definitely your main issue here. In Jira, when you create a “Tests” link from Requirement A to Test Execution B, the link has two perspectives: outward (“tests”) and inward (“is tested by”). Your automation trigger is on the test execution, so when you branch on linked issues, you want the inward link perspective from the test’s point of view.

Try this JQL in your branch condition:


issue in linkedIssues({{triggerIssue.key}}, "is tested by")

This explicitly gets requirements that are tested by the current failed test execution. Then add a guard condition on each branched requirement issue before transitioning: check that {{issue.status.name}} is not already “Under Review” and that the issue type is actually a requirement. This prevents loops and accidental transitions on wrong issue types.

Raj, that makes sense about the link perspective. I updated the branch condition to use the inward link direction. One more question: should I also add a check to count how many linked tests are failing before transitioning the requirement? We have requirements with multiple test executions, and I don’t want to flip the status if only one out of five tests failed.

I’ve seen this exact issue. The problem is that the “Tests” link type in Jira is directional: there’s an inward and an outward side. Your linkedIssuesOf JQL might be checking the wrong direction depending on how you created the links. Try being explicit about link direction in your automation branch condition. Also, you definitely need a guard condition on the requirement transition to check that it’s not already in the target status, otherwise you’ll create endless loops when the automation re-triggers itself.

Let me give you a complete solution that addresses all three focus areas: link directionality, branching logic, and guard conditions.

1. Jira Automation Rule Structure:

Trigger: When test execution issue transitions to “Failed” status

2. Branch on Linked Issues (Requirement Perspective):

Use this JQL to get requirements linked via inward “is tested by” relationship:


issue in linkedIssues({{triggerIssue.key}}, "is tested by")
AND issuetype = Requirement

This ensures you’re only branching on actual requirement issues, not other linked items.

3. Guard Conditions (Prevent Loops and False Transitions):

Before transitioning each branched requirement, add these conditions:

  • Condition 1: {{issue.status.name}} does not equal “Under Review” (prevents re-triggering on already transitioned requirements)
  • Condition 2: Check if any linked tests are still passing (optional, depends on your policy)
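Jira automation evaluates these guards via Smart Values, but the decision itself is just a predicate. A minimal sketch of that logic in Python (the dict keys `issuetype` and `status` are illustrative, not Jira field names):

```python
def should_transition(issue: dict, target_status: str = "Under Review") -> bool:
    """Guard: skip issues of the wrong type or already in the target status."""
    if issue.get("issuetype") != "Requirement":
        return False  # only requirements should ever be transitioned
    if issue.get("status") == target_status:
        return False  # prevents the rule re-triggering on an already-moved issue
    return True

# A requirement already under review is left alone; an approved one qualifies
print(should_transition({"issuetype": "Requirement", "status": "Under Review"}))  # False
print(should_transition({"issuetype": "Requirement", "status": "Approved"}))      # True
print(should_transition({"issuetype": "Test Execution", "status": "Failed"}))     # False
```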

4. Aggregation Logic for Multiple Tests:

Use a Lookup Issues action within the branch to count linked test executions (note that issueFunction/linkedIssuesOf requires the ScriptRunner app; without it you’ll need a different way to enumerate the links):


issueFunction in linkedIssuesOf("key = {{issue.key}}", "tests")
AND issuetype = "Test Execution"
AND status = "Failed"

Store the count in a Smart Value variable. Then add a condition: only transition if failed test count > 0 AND (failed count / total linked tests) exceeds your threshold (e.g., > 20%).
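The threshold check itself is simple arithmetic; as a sketch (the 20% threshold matches the example above, and note that with a strict “greater than” comparison, one failure out of five tests is exactly 20% and does not trip it):

```python
def exceeds_failure_threshold(failed: int, total: int, threshold: float = 0.20) -> bool:
    """Transition only if at least one linked test failed AND the failure
    ratio is strictly above the configured threshold."""
    if total == 0 or failed == 0:
        return False  # no linked tests, or nothing failed: leave the requirement alone
    return failed / total > threshold

print(exceeds_failure_threshold(1, 5))  # False: 1/5 is exactly 20%, not above it
print(exceeds_failure_threshold(2, 5))  # True: 40% of linked tests are failing
```

Whether 1-of-5 should count is a policy choice; use >= instead of > if a single failure should flip the requirement.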

5. Transition Action:

Use “Transition issue” action with the target status “Under Review”. Make sure the workflow transition is available from the current status.

6. Additional Safeguards:

  • Add a custom field on requirements like “Last Test Sync” timestamp to track when automation last ran
  • Use issue properties to store aggregated test status to avoid recalculating on every trigger
  • Consider adding a cooldown period (e.g., don’t re-transition if last sync was within 1 hour) to prevent thrashing during active test runs
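The cooldown comparison against a “Last Test Sync” field boils down to a timestamp delta. A sketch of that check (field and variable names are illustrative):

```python
from datetime import datetime, timedelta
from typing import Optional

def cooldown_elapsed(last_sync: Optional[datetime], now: datetime,
                     cooldown: timedelta = timedelta(hours=1)) -> bool:
    """Allow a new transition only if the last automation run was more than
    `cooldown` ago, or if the requirement has never been synced."""
    if last_sync is None:
        return True  # first run: no cooldown to respect
    return now - last_sync > cooldown

now = datetime(2024, 1, 1, 12, 0)
print(cooldown_elapsed(datetime(2024, 1, 1, 11, 30), now))  # False: within the hour
print(cooldown_elapsed(datetime(2024, 1, 1, 10, 30), now))  # True: cooldown elapsed
```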

Link Type Directionality Deep Dive:

The key confusion is that Jira link types are directional but the terminology changes based on perspective:

  • From Requirement → Test Execution: outward link is “tests”
  • From Test Execution → Requirement: inward link is “is tested by”

When your automation trigger fires on a test execution failure, you’re starting from the test execution’s perspective, so you need the inward link (“is tested by”) to find requirements.
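If you want to sanity-check the direction outside of automation, fetch the test execution’s issuelinks field via the REST API (GET /rest/api/2/issue/{key}?fields=issuelinks) and look at which side of each link the requirement sits on. A sketch of the parsing logic in Python; the sample payload is illustrative, but the inward/outward shape matches Jira’s issuelinks field:

```python
def requirements_testing(issuelinks: list) -> list:
    """From a test execution's issuelinks, return the keys of issues linked
    via the inward 'is tested by' side -- i.e. the requirements."""
    keys = []
    for link in issuelinks:
        # On the test execution, the requirement appears as inwardIssue,
        # and the link type's inward description reads "is tested by".
        if link["type"].get("inward") == "is tested by" and "inwardIssue" in link:
            keys.append(link["inwardIssue"]["key"])
    return keys

# Illustrative payload shaped like Jira's issuelinks field (keys are made up)
links = [
    {"type": {"name": "Tests", "inward": "is tested by", "outward": "tests"},
     "inwardIssue": {"key": "REQ-101"}},
    {"type": {"name": "Blocks", "inward": "is blocked by", "outward": "blocks"},
     "outwardIssue": {"key": "BUG-7"}},
]
print(requirements_testing(links))  # ['REQ-101']
```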

Preventing Transition Loops:

Loops happen when:

  1. Test fails → Requirement transitions to “Under Review”
  2. Requirement transition triggers another automation
  3. That automation updates the test or requirement again
  4. Cycle repeats

Prevent this by:

  • Checking current status before transitioning (status != target status)
  • Using specific trigger conditions (only on test execution transitions, not requirement transitions)
  • Adding a “processed” flag or timestamp custom field to track automation execution
  • Disabling automation recursion in Jira automation settings if available

This approach has worked reliably for us with 500+ requirements and 2000+ test executions in Jira 9. The quality dashboards now accurately reflect requirement coverage, and we’ve eliminated false status transitions. The key is being explicit about link direction and adding proper guards at every step.