Best practices for audit trail coverage in test-execution module for ISO 26262 compliance

Our team is working toward ISO 26262 safety certification and we’re evaluating our audit trail coverage in the test-execution module (pol-2310). The standard requires comprehensive evidence of test execution, including environment details, execution timestamps, and result artifacts.

We’ve implemented basic test execution tracking, but our recent internal audit revealed gaps in our evidence chain. Specifically:


// Current test execution logging
TestRun run = testCase.execute();
run.setResult("PASSED");
run.setExecutor(currentUser);
// Missing: environment snapshot, screenshots, detailed logs

The auditor noted that while we capture basic pass/fail results, we lack automated screenshot capture, detailed environment tracking, and comprehensive audit report widgets that show the complete execution context.

I’d like to hear from others who have implemented ISO 26262 compliant test execution workflows. What approaches have you found effective for ensuring complete audit trail coverage while maintaining reasonable automation overhead?

Environment tracking is often overlooked but it’s crucial for ISO 26262. We created a custom work item type called “Test Environment Snapshot” that gets automatically linked to each test run. It captures software versions, hardware configuration, tool versions, and network topology. This snapshot becomes part of the audit trail and proves the test was executed in a controlled, documented environment. The traceability from test case → test run → environment snapshot → results satisfies the auditor’s requirement for complete execution context.
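To make the snapshot idea concrete, here is a minimal sketch of such a snapshot as a structured record. This is illustrative only: the field names and the `formatSummary` helper are assumptions, and the actual mapping onto a custom Polarion work item type is not shown.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative environment snapshot record; in Polarion this would map onto
// fields of a custom "Test Environment Snapshot" work item type (hypothetical).
class EnvironmentSnapshot {
    private final Map<String, String> fields = new LinkedHashMap<>();

    // Add one captured environment fact, e.g. a tool or SUT version.
    EnvironmentSnapshot put(String key, String value) {
        fields.put(key, value);
        return this;
    }

    // Render a human-readable summary suitable for attaching to a test run.
    String formatSummary() {
        StringBuilder sb = new StringBuilder("Test Environment Snapshot\n");
        fields.forEach((k, v) ->
                sb.append("  ").append(k).append(": ").append(v).append('\n'));
        return sb.toString();
    }

    Map<String, String> fields() {
        return fields;
    }
}
```

A run would then link one immutable snapshot instance, so the same record can be referenced by every audit view that needs the execution context.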

We use a tiered storage strategy. Screenshots and detailed logs are stored in external object storage (S3-compatible) with references in Polarion. Only critical evidence for safety-critical test cases is stored directly in the repository. For non-critical tests, we keep execution metadata and summary logs in Polarion, with links to full evidence in external storage. This keeps the repository performant while maintaining complete audit trail coverage. The audit report widgets can still access and display external evidence through the reference links.
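A sketch of the routing decision behind that tiered strategy, under stated assumptions: the tier names, the ASIL-based rule, and the `polarion://` / `s3://` reference schemes are all illustrative, not product conventions.

```java
// Illustrative routing of evidence artifacts between the Polarion repository
// and external S3-compatible storage. The criticality rule and URI schemes
// are assumptions for the sketch.
class EvidenceRouter {
    enum Tier { REPOSITORY, EXTERNAL }

    // Safety-critical evidence (here: ASIL C/D) stays in the repository;
    // everything else goes to external object storage.
    static Tier tierFor(String asilLevel) {
        return ("C".equals(asilLevel) || "D".equals(asilLevel))
                ? Tier.REPOSITORY : Tier.EXTERNAL;
    }

    // Build the reference string stored on the test run work item.
    static String referenceFor(String runId, String artifact, String asilLevel) {
        return tierFor(asilLevel) == Tier.REPOSITORY
                ? "polarion://attachments/" + runId + "/" + artifact
                : "s3://test-evidence/" + runId + "/" + artifact;
    }
}
```

The point of keeping the reference string on the work item is that audit widgets can resolve either tier transparently when assembling the evidence chain.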

One thing we learned the hard way: audit report widgets need to be designed from the auditor’s perspective, not the test engineer’s. Create custom widgets that answer specific ISO 26262 questions: “Show me all test executions for safety requirement SR-1234 with complete evidence chain.” The widget should display test case, execution timestamp, executor, environment snapshot, result artifacts, and any deviations in a single view. We built five specialized widgets for different audit scenarios and it dramatically reduced certification review time.

We went through ISO 26262 certification last year. The key is treating test execution evidence as a first-class artifact, not an afterthought. Every test run should automatically capture environment state, execution logs, and visual evidence. We built custom automation that hooks into the test-execution module to collect this data at runtime. The audit report widgets need to surface all this information in a format that auditors can easily review. Don’t try to retrofit evidence collection after tests run - it needs to be part of the execution flow itself.

Having supported multiple ISO 26262 certifications using Polarion, I’ll share a comprehensive framework that addresses all the key elements you’ve mentioned.

Test Execution Evidence: A Holistic Approach

The foundation is treating every test execution as an evidence package rather than just a result record. Each package must contain five core elements:

1. Automated Screenshot Capture Strategy

Implement screenshot automation at three levels:

  • Pre-execution: Capture initial state before test begins
  • Step-level: Screenshot after each significant test step
  • Post-execution: Final state capture including any error conditions

Integrate screenshot capture directly into your test automation framework. Tag each image with execution context (test case ID, step number, timestamp, environment ID). Store screenshots as attachments to the test run work item with standardized naming conventions that audit report widgets can parse.
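One way to make such a naming convention both generated and machine-parseable is a small helper like the sketch below. The exact pattern (`<testCaseId>_stepNNN_<envId>_<epoch>.png`) is an assumption, and it presumes IDs contain no underscores.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative standardized screenshot naming so audit widgets can recover
// execution context from the file name alone. The pattern is an assumption.
class ScreenshotNaming {
    // e.g. TC-42_step003_ENV-7_1700000000.png
    static String fileName(String testCaseId, int step, String envId, long epochSeconds) {
        return String.format("%s_step%03d_%s_%d.png", testCaseId, step, envId, epochSeconds);
    }

    // Parse the context back out; assumes IDs themselves contain no underscores.
    static Map<String, String> parse(String name) {
        String[] parts = name.replace(".png", "").split("_");
        Map<String, String> ctx = new LinkedHashMap<>();
        ctx.put("testCaseId", parts[0]);
        ctx.put("step", parts[1].substring("step".length()));
        ctx.put("envId", parts[2]);
        ctx.put("timestamp", parts[3]);
        return ctx;
    }
}
```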

2. Environment Tracking Framework

Create a structured environment snapshot that captures:

  • Software under test (version, build number, configuration)
  • Test tools and frameworks (versions, plugins, drivers)
  • Hardware configuration (if applicable for embedded systems)
  • Network topology and dependencies
  • Operating system and runtime environment details

Implement this as a custom work item type with fields matching ISO 26262’s traceability requirements. Link environment snapshots to test runs using an “executed-in” relationship. This creates auditable traceability from requirements through test cases to the actual execution environment.

3. Comprehensive Audit Report Widgets

Develop specialized widgets for different audit scenarios:

Widget 1: Safety Requirement Coverage

Shows all test executions for a specific safety requirement, including execution timestamps, environments, results, and evidence artifacts. Filters by ASIL level and test phase.

Widget 2: Test Execution Timeline

Visualizes test execution history over time with environment changes highlighted. Helps auditors understand testing progression and identify any gaps.

Widget 3: Evidence Completeness Matrix

Displays which test runs have complete evidence packages (screenshots, logs, environment data) versus incomplete ones. Critical for identifying audit trail gaps before formal review.

Widget 4: Traceability Verification

Validates that every test execution has proper links to test cases, requirements, and environment snapshots. Highlights broken traceability chains.
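The check behind such a widget can be sketched as a simple validation over the link structure. This is a minimal illustration; the field names are assumptions and do not reflect the Polarion data model.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative traceability check: every run must link a test case, at least
// one requirement, and an environment snapshot. Field names are assumptions.
class TraceabilityCheck {
    static class RunLinks {
        final String runId, testCaseId, envSnapshotId;
        final List<String> requirementIds;
        RunLinks(String runId, String testCaseId, String envSnapshotId, List<String> reqs) {
            this.runId = runId;
            this.testCaseId = testCaseId;
            this.envSnapshotId = envSnapshotId;
            this.requirementIds = reqs;
        }
    }

    // Return the IDs of runs whose traceability chain is broken.
    static List<String> brokenChains(List<RunLinks> runs) {
        List<String> broken = new ArrayList<>();
        for (RunLinks r : runs) {
            boolean ok = r.testCaseId != null && r.envSnapshotId != null
                    && r.requirementIds != null && !r.requirementIds.isEmpty();
            if (!ok) broken.add(r.runId);
        }
        return broken;
    }
}
```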

4. ISO 26262 Specific Considerations

For ASIL C and D level requirements:

  • Implement redundant evidence capture (primary and backup)
  • Add electronic signatures for test execution approval
  • Include deviation tracking when tests don’t execute as planned
  • Capture tool qualification evidence when using automated test tools
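One way to make redundant evidence capture verifiable, sketched under assumptions: store the same content digest with both the primary and backup copy, so a later audit can confirm neither was altered. The digest choice (SHA-256) and helper names are illustrative.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Illustrative integrity fingerprint for redundant evidence capture: the same
// SHA-256 digest is recorded with primary and backup copies of an artifact.
class EvidenceDigest {
    static String sha256Hex(byte[] evidence) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest(evidence)) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 unavailable", e);
        }
    }

    // True when the backup copy is byte-identical to the primary.
    static boolean copiesMatch(byte[] primary, byte[] backup) {
        return sha256Hex(primary).equals(sha256Hex(backup));
    }
}
```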

5. Practical Implementation Pattern

Integrate evidence collection into your CI/CD pipeline:

  1. Test execution begins → Capture environment snapshot
  2. Each test step → Screenshot + detailed log entry
  3. Test completes → Package all evidence and attach to test run
  4. Automated validation → Check evidence completeness
  5. Audit report generation → Pull complete evidence chain
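The validation step (4) above can be sketched as a completeness gate over the evidence package. The required artifact kinds listed here are assumptions for illustration; the actual set would come from your safety plan.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Illustrative completeness gate for the "automated validation" pipeline step:
// a run's evidence package must contain every required artifact kind.
// The REQUIRED set is an assumption, not a normative list.
class CompletenessGate {
    static final Set<String> REQUIRED = Set.of(
            "environment-snapshot", "execution-log", "screenshots", "result-summary");

    // Return the artifact kinds still missing (empty list = package complete).
    static List<String> missing(Set<String> present) {
        List<String> gaps = new ArrayList<>(REQUIRED);
        gaps.removeAll(present);
        gaps.sort(String::compareTo);
        return gaps;
    }
}
```

A CI job can fail the run (or flag it in the Evidence Completeness Matrix) whenever `missing` returns a non-empty list.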

Storage and Performance Optimization

For repository performance with extensive evidence:

  • Use external storage for large artifacts (videos, extensive logs)
  • Keep metadata and thumbnails in Polarion for quick access
  • Implement retention policies (archive old evidence after certification)
  • Use lazy loading in audit widgets to handle large datasets

Audit Report Best Practices

Design reports that answer auditor questions directly:

  • “Show complete evidence for requirement X”
  • “Prove all ASIL D tests executed in qualified environments”
  • “Demonstrate traceability from requirement through execution”
  • “Verify no evidence gaps exist for safety-critical features”

The combination of automated screenshot capture, structured environment tracking, and purpose-built audit report widgets creates a defensible audit trail that satisfies ISO 26262 requirements while remaining manageable for your test team. The key is automation - manual evidence collection doesn’t scale and introduces gaps that auditors will flag.

For screenshot automation, we integrated Selenium WebDriver with Polarion’s test execution API. Each test step captures screenshots automatically and attaches them to the test run record. The critical part is tagging screenshots with timestamps and environment identifiers so they’re traceable in audit reports.
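The tagging half of that can be sketched as a small sidecar-metadata helper; the Selenium capture itself is elided, and the property keys are assumptions rather than any Polarion or Selenium convention.

```java
import java.io.IOException;
import java.io.StringWriter;
import java.util.Properties;

// Illustrative tagging of a captured screenshot: the image bytes (not shown)
// are attached alongside a small metadata record so audit reports can resolve
// the execution context. Property keys are assumptions for the sketch.
class ScreenshotTagger {
    static Properties contextFor(String testCaseId, int step, String envId, long epochSeconds) {
        Properties meta = new Properties();
        meta.setProperty("testCaseId", testCaseId);
        meta.setProperty("step", Integer.toString(step));
        meta.setProperty("environmentId", envId);
        meta.setProperty("capturedAt", Long.toString(epochSeconds));
        return meta;
    }

    // Serialize the metadata for storage next to the image attachment.
    static String asSidecar(Properties meta) {
        StringWriter out = new StringWriter();
        try {
            meta.store(out, "screenshot execution context");
        } catch (IOException e) {
            throw new IllegalStateException(e);
        }
        return out.toString();
    }
}
```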

These are excellent insights. The environment snapshot approach makes a lot of sense - treating it as a linked work item rather than just metadata. How do you handle the volume of screenshots and logs? I’m concerned about repository bloat with thousands of test executions.