Test case management imports duplicate requirements from DOORS Next during bulk operations

We’re encountering a serious data integrity issue when bulk importing requirements from DOORS Next 7.0.1 into our test case management module. The import process is creating duplicate requirement entries instead of recognizing existing requirements.

Here’s our scenario:

  • Initial import of 200 requirements from DOORS Next module → successful
  • Second import to add 50 new requirements from same module → creates 250 total (duplicates all 200 existing)
  • Each duplicate has a different internal ID but same requirement number and content

Our import mapping configuration appears correct with UUID fields mapped, but something in the OSLC import validation isn’t recognizing existing requirements. This is corrupting our test planning because we now have multiple copies of the same requirement linked to different test cases.


<import:mapping>
  <source>dcterms:identifier</source>
  <target>rm:requirementId</target>
  <matchRule>UUID</matchRule>
</import:mapping>

The skip-if-exists option doesn’t seem to be working. Has anyone successfully configured bulk imports to properly detect and skip existing requirements based on UUID preservation?

You’re dealing with a multi-faceted import validation issue that requires addressing four key components:

1. Import Mapping Configuration

Your current mapping is incomplete for proper duplicate detection. You need to map both the identifier and the OSLC resource URI:

<import:mapping>
  <source>dcterms:identifier</source>
  <target>rm:requirementId</target>
  <matchRule>EXACT</matchRule>
</import:mapping>
<import:mapping>
  <source>rdf:about</source>
  <target>oslc:resource</target>
  <matchRule>URI</matchRule>
  <preserveSource>true</preserveSource>
</import:mapping>

The preserveSource attribute is critical: it tells the import engine to store the original DOORS Next URI with each imported requirement. This URI becomes the primary key for subsequent imports.
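To make the URI-as-primary-key idea concrete, here is a minimal Python sketch of that matching logic. It is an illustration, not the actual QM import engine; the dict keys (`id`, `source_uri`) and the sample URIs are made up for the example.

```python
# Sketch of duplicate detection keyed on the preserved source URI.
# `existing` stands in for requirements already imported into QM;
# `incoming` for the next batch from the DOORS Next feed.

def plan_import(existing, incoming):
    """Partition incoming requirements into new vs. already-imported,
    using the DOORS Next rdf:about URI as the primary key."""
    known_uris = {req["source_uri"] for req in existing}
    to_create, to_skip = [], []
    for req in incoming:
        if req["source_uri"] in known_uris:
            to_skip.append(req)
        else:
            to_create.append(req)
    return to_create, to_skip

existing = [{"id": "REQ-001", "source_uri": "https://dng.example.com/rm/resources/TX_1"}]
incoming = [
    {"id": "REQ-001", "source_uri": "https://dng.example.com/rm/resources/TX_1"},
    {"id": "REQ-201", "source_uri": "https://dng.example.com/rm/resources/TX_9"},
]
to_create, to_skip = plan_import(existing, incoming)
# REQ-001 is skipped, REQ-201 is created
```

The point of the sketch: if the source URI was never stored on the first import (no preserveSource), `known_uris` is effectively empty on the second run, and everything lands in `to_create`.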

2. UUID Preservation Strategy

The UUID matching rule you’re using only works if the UUID field is populated in both source and target. DOORS Next requirements have UUIDs, but your QM test case management may not be preserving them during import. Add this to your mapping:

<import:uuidHandling>
  <preserveSourceUUID>true</preserveSourceUUID>
  <conflictResolution>SKIP</conflictResolution>
</import:uuidHandling>

This ensures that when a requirement with the same UUID is encountered, the import skips it rather than creating a duplicate.

3. Skip-If-Exists Validation

The skip-if-exists mechanism relies on an OSLC query to check for existing resources. Your import configuration needs to specify the query parameters:

<import:validation>
  <skipIfExists>true</skipIfExists>
  <oslc:query>
    <oslc:where>dcterms:identifier="{sourceId}" and oslc:resource="{sourceURI}"</oslc:where>
    <oslc:select>dcterms:identifier,oslc:resource</oslc:select>
  </oslc:query>
</import:validation>

This query uses both identifier and source URI for matching, which is more reliable than UUID alone. The {sourceId} and {sourceURI} placeholders are replaced with values from each imported requirement.
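For illustration, here is a hedged Python sketch of how that placeholder substitution could work, i.e. expanding {sourceId} and {sourceURI} into a URL-encoded OSLC query string. The base URL and requirement values are hypothetical; this is a sketch of the mechanism, not QM’s actual implementation.

```python
from urllib.parse import quote

def build_validation_query(base_url, source_id, source_uri):
    """Expand the skip-if-exists placeholders into an OSLC query URL."""
    # Substitute {sourceId} and {sourceURI} into the where clause
    where = f'dcterms:identifier="{source_id}" and oslc:resource="{source_uri}"'
    select = "dcterms:identifier,oslc:resource"
    # OSLC query parameters must be percent-encoded
    return (f"{base_url}?oslc.where={quote(where, safe='')}"
            f"&oslc.select={quote(select, safe='')}")

url = build_validation_query(
    "https://qm.example.com/qm/oslc_rm/query",   # hypothetical QM query endpoint
    "REQ-001",
    "https://dng.example.com/rm/resources/TX_1", # hypothetical DOORS Next URI
)
```

If either placeholder expands to an empty string (for example, because the source URI was never preserved), the query can silently match nothing, which is exactly the duplicate-creating behavior described in the question.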

4. OSLC Feed Validation

Before running another import, validate your DOORS Next OSLC export feed. Query the feed directly:


GET /rm/oslc/query?oslc.where=dcterms:identifier="REQ-001"
Accept: application/rdf+xml

Verify the response includes:

  • Consistent rdf:about URIs (same URI for same requirement across exports)
  • Valid dcterms:identifier values
  • Proper namespace declarations for rm: and oslc: prefixes

If the rdf:about URIs are changing between exports (e.g., including timestamps or session IDs), that’s your root cause. You’ll need to configure DOORS Next to generate stable URIs.
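If you have two saved copies of the feed, you can check URI stability offline. The sketch below uses Python’s standard-library XML parser on a deliberately simplified RDF/XML shape (one rdf:Description per requirement); the real DOORS Next feed schema is richer, so treat this as a starting point, not a drop-in tool.

```python
import xml.etree.ElementTree as ET

RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"
DCTERMS = "{http://purl.org/dc/terms/}"

def uris_by_identifier(rdf_xml):
    """Map dcterms:identifier -> rdf:about for every resource in a feed."""
    root = ET.fromstring(rdf_xml)
    result = {}
    for res in root:
        ident = res.findtext(f"{DCTERMS}identifier")
        about = res.get(f"{RDF}about")
        if ident and about:
            result[ident] = about
    return result

def unstable_uris(feed_a, feed_b):
    """Identifiers whose rdf:about URI differs between two exports."""
    a, b = uris_by_identifier(feed_a), uris_by_identifier(feed_b)
    return sorted(i for i in a.keys() & b.keys() if a[i] != b[i])

# Two exports of the same requirement; the second URI carries a session ID,
# which is the instability described above.
feed_a = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                     xmlns:dcterms="http://purl.org/dc/terms/">
  <rdf:Description rdf:about="https://dng.example.com/rm/resources/TX_1">
    <dcterms:identifier>REQ-001</dcterms:identifier>
  </rdf:Description>
</rdf:RDF>"""
feed_b = feed_a.replace("TX_1", "TX_1?session=abc123")
# unstable_uris(feed_a, feed_b) flags REQ-001
```

Any identifier this flags will be treated as a brand-new requirement on re-import, no matter how the mapping is configured.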

Cleanup and Re-import

Before fixing the import:

  1. Delete the duplicate requirements from QM (keep only the original 200)
  2. Update your import mapping configuration with the four components above
  3. Run a test import with just 5-10 requirements to verify duplicate detection works
  4. Check the qm.log for “Resource already exists, skipping” messages
  5. Once validated, run the full import of your 50 new requirements

This systematic approach addresses import mapping completeness, UUID preservation, skip-if-exists validation, and OSLC feed integrity: the four pillars of reliable bulk import operations.

The UUID matching rule might not be sufficient by itself. Check whether your import configuration includes the OSLC identifier field in addition to dcterms:identifier. DOORS Next uses a composite key for requirement uniqueness that includes both the identifier and the project context.

Also verify that your import mapping preserves the source repository URI. Without that, the skip-if-exists logic can’t determine whether a requirement already exists from a previous import.

Good point about the composite key. I checked our mapping and we do have dcterms:identifier mapped, but I don’t see the OSLC identifier field explicitly included. Could that be why the duplicate detection is failing?

Also, what do you mean by the source repository URI? Should that be in the mapping configuration, or is it automatically included in the OSLC feed?

The source repository URI needs to be part of your import feed. When DOORS Next exports requirements via OSLC, each requirement should have an rdf:about attribute that includes the full repository path. This is what QM uses to determine whether a requirement was previously imported.

If your export isn’t including these URIs, or if they’re changing between exports, the duplicate detection will fail every time. Check the OSLC feed XML directly to confirm the URIs are consistent across exports.

I’ve debugged similar import issues before. The problem is usually that the skip-if-exists validation uses a different matching algorithm than you’d expect: it doesn’t just check the UUID, it actually performs an OSLC query with the identifier as a filter.

If your OSLC query configuration isn’t set up properly in the import mapping, the validation query returns no results and the import treats everything as new. You need to ensure the query uses the correct OSLC where clause with the identifier field.

That’s a critical detail about the OSLC query. The import process should be logging the validation queries it executes. Check your qm.log for entries containing “OSLC query for existing resource” and see what query string it’s using. If the query is malformed or uses the wrong field name, it will never find matches and will create duplicates every time.

You might need to adjust your import mapping to explicitly specify the OSLC query format for duplicate detection.
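That failure mode can be sketched in a few lines of Python. This is an illustration of the logic being described, not QM’s actual code; `run_oslc_query` is a hypothetical stand-in for executing the validation query against the server.

```python
def should_create(requirement, run_oslc_query):
    """skip-if-exists: create the requirement only when the
    validation query finds no existing match."""
    matches = run_oslc_query(requirement["id"], requirement["source_uri"])
    return len(matches) == 0

req = {"id": "REQ-001", "source_uri": "https://dng.example.com/rm/resources/TX_1"}

# A malformed query that silently matches nothing:
broken_query = lambda ident, uri: []
# A query that actually finds the previously imported requirement:
working_query = lambda ident, uri: [{"id": ident}]

# With broken_query, should_create returns True for every requirement,
# so every re-import creates a duplicate.
```

In other words, a query that errors out or filters on the wrong field is indistinguishable, from the import engine’s point of view, from a requirement that genuinely doesn’t exist yet, which is why the log check above matters.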