Non-conformance webhook integration sends duplicate events to external system

Our non-conformance workflow triggers webhooks to update our ticketing system whenever an NC is modified. We’re seeing duplicate events: sometimes the same update triggers two or three identical webhook calls within seconds. This creates duplicate tickets in our external system and confuses our teams.

Here’s a sample payload we receive multiple times:

{
  "eventType": "nc.updated",
  "ncId": "NC-2025-001",
  "timestamp": "2025-01-05T10:15:32Z"
}

The payload has no unique event ID, so our system can’t easily detect duplicates. We’ve tried adding delays in our webhook handler, but that doesn’t address the root cause. How do others handle webhook deduplication with Qualio? Is there a way to add idempotency to these webhook events?

One thing to watch out for: if you’re only using ncId + timestamp, you might miss legitimate updates that happen within the same second. We’ve seen cases where users make rapid changes in Qualio (like updating a status and then immediately adding a comment), and both events carry identical timestamps at second resolution. Consider including additional fields in your deduplication key, such as the specific field that changed or a hash of the entire payload. This avoids false-positive duplicates while still catching true ones.
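A rough sketch of that whole-payload variant in Python: serializing to canonical JSON (sorted keys, fixed separators) makes the hash stable regardless of field order. The `changedField` key below is hypothetical, just to show two same-second events producing distinct keys.

```python
import hashlib
import json

def payload_hash(payload):
    """SHA-256 of the canonical JSON form of the whole payload."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Two events sharing ncId + timestamp but differing in any other field now get different hashes, so neither gets dropped as a duplicate.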

Webhook duplication is unfortunately common with many systems, including Qualio. The standard approach is implementing idempotency on your receiving end. You need to generate a unique identifier for each event and track which events you’ve already processed. Most teams use a combination of ncId + timestamp + eventType as a composite key to identify duplicates within a short time window.
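As a minimal sketch of that composite-key approach (function and variable names are illustrative, not a Qualio API), an in-memory dict with a short retention window is enough to show the idea:

```python
import time

SEEN = {}             # composite key -> time first seen (epoch seconds)
WINDOW_SECONDS = 300  # treat repeats within 5 minutes as duplicates

def dedup_key(payload):
    """Composite key: ncId + timestamp + eventType."""
    return "|".join([payload["ncId"], payload["timestamp"], payload["eventType"]])

def is_duplicate(payload, now=None):
    """True if this event was already seen inside the window."""
    now = time.time() if now is None else now
    # evict expired entries so the dict stays bounded
    for key, seen_at in list(SEEN.items()):
        if now - seen_at > WINDOW_SECONDS:
            del SEEN[key]
    key = dedup_key(payload)
    if key in SEEN:
        return True
    SEEN[key] = now
    return False
```

An in-memory dict only works for a single-process handler; anything multi-worker needs the shared-store variants discussed below in the thread.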

We had the exact same issue. The problem is that Qualio’s webhook system doesn’t guarantee exactly-once delivery - it’s more of an at-least-once model. You absolutely need deduplication logic on your side. We built a Redis cache that stores event fingerprints for 24 hours. Before processing any webhook, we check if we’ve seen that fingerprint before. If yes, we return 200 OK but skip processing. Works perfectly and handles network retries gracefully.
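For anyone building something similar, here’s a minimal Python sketch of that fingerprint cache, assuming redis-py; `handle_webhook` and the key prefix are made up for illustration. `set(..., nx=True, ex=...)` does the check-and-record in one atomic step, so two concurrent deliveries can’t both win:

```python
import hashlib

DEDUP_TTL_SECONDS = 24 * 60 * 60  # keep fingerprints for 24 hours

def event_fingerprint(payload):
    """Stable hash of the fields that identify one logical event."""
    key = "|".join([payload["ncId"], payload["timestamp"], payload["eventType"]])
    return hashlib.sha256(key.encode("utf-8")).hexdigest()

def handle_webhook(payload, redis_client):
    """Process once; return False when skipping a duplicate.

    SET with nx=True only writes if the key is absent, and ex= sets the
    TTL in the same atomic command - only the first caller gets True back.
    """
    fp = "webhook:seen:" + event_fingerprint(payload)
    first_time = redis_client.set(fp, 1, nx=True, ex=DEDUP_TTL_SECONDS)
    if not first_time:
        return False  # duplicate: caller should still respond 200 OK
    # ... real ticket-creation logic would go here ...
    return True
```

In production the caller would pass a `redis.Redis()` instance; returning 200 OK even for skipped duplicates keeps an at-least-once sender from retrying.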

Yes, database-based deduplication works fine for moderate webhook volumes. Create a table with event_hash and processed_at columns, and put a unique index on event_hash so concurrent inserts can’t race past the check. The key is generating a consistent hash from the payload - use ncId + timestamp + eventType. Also add a cleanup job that deletes records older than 7 days to prevent table bloat. We process about 500 webhooks daily this way without performance issues.
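A minimal sketch of that table-based approach in Python. SQLite stands in for PostgreSQL here so the snippet is self-contained; the `ON CONFLICT ... DO NOTHING` upsert works the same in both (PostgreSQL uses `%s` placeholders instead of `?`). Table and function names are illustrative:

```python
import hashlib
import sqlite3

DDL = """
CREATE TABLE IF NOT EXISTS processed_events (
    event_hash   TEXT PRIMARY KEY,
    processed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
)
"""

def event_hash(payload):
    """Consistent hash from ncId + timestamp + eventType."""
    key = "|".join([payload["ncId"], payload["timestamp"], payload["eventType"]])
    return hashlib.sha256(key.encode("utf-8")).hexdigest()

def claim_event(conn, payload):
    """Atomically record the event.

    The PRIMARY KEY on event_hash makes the insert-or-ignore atomic:
    rowcount is 1 only for the first caller, 0 for every duplicate.
    """
    cur = conn.execute(
        "INSERT INTO processed_events (event_hash) VALUES (?) "
        "ON CONFLICT (event_hash) DO NOTHING",
        (event_hash(payload),),
    )
    conn.commit()
    return cur.rowcount == 1
```

A periodic `DELETE FROM processed_events WHERE processed_at < now() - interval '7 days'` covers the cleanup job mentioned above.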

Thanks for confirming this is a known behavior. I don’t have Redis available in our environment unfortunately. Can this be done with a simple database table? Like storing processed event hashes in PostgreSQL with a timestamp and checking before each webhook processing?