Audit reporting export truncates change history for large requirements

We’re experiencing critical issues with audit trail exports from the audit-reporting module for requirements that have extensive change histories (200+ revisions). The exported reports consistently truncate after approximately 150 revisions, leaving incomplete audit trails that fail compliance validation.

Our audit-trail pagination appears to be hitting some limit, and the export batch configuration doesn’t seem to handle large datasets properly. We’ve tried adjusting REST API chunking parameters, but the exports still cut off mid-stream. Database connection pooling might be timing out during the lengthy export process.


POST /polarion/rest/audit/export
Request timeout after 120 seconds
Partial data: 148/237 revisions exported
Error: Connection pool exhausted

This is blocking our quarterly compliance audit. Has anyone successfully exported large audit trails or found workarounds for the export batch size limitations?

We implemented a staged export approach for large requirements. Instead of trying to export the entire history in one go, we break it into time-based chunks (quarterly or monthly). This keeps individual exports manageable and prevents connection pool exhaustion. The audit-reporting module supports date range filters that work well for this pattern.
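A minimal sketch of the time-based chunking, assuming you drive the export from a script: the quarterly boundaries are computed here, and each pair would become one export request using the module's date range filters (the actual endpoint and parameter names depend on your installation, so only the date math is shown):

```python
from datetime import date, timedelta

def quarter_ranges(start: date, end: date):
    """Yield (chunk_start, chunk_end) pairs covering [start, end] by calendar quarter."""
    current = start
    while current <= end:
        # First month of the quarter containing 'current': 1, 4, 7, or 10
        q_month = ((current.month - 1) // 3) * 3 + 1
        # First day of the next quarter (rolls over to January of the next year)
        if q_month + 3 > 12:
            next_q = date(current.year + 1, 1, 1)
        else:
            next_q = date(current.year, q_month + 3, 1)
        yield current, min(end, next_q - timedelta(days=1))
        current = next_q

# Each pair becomes one export request with date-range filters, keeping every
# individual export small enough to finish before timeouts or pool exhaustion hit.
for chunk_start, chunk_end in quarter_ranges(date(2023, 1, 1), date(2023, 12, 31)):
    pass  # call the export endpoint for this range
```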

For high-volume audit exports, consider this approach that addresses all the key bottlenecks:

1. Audit-Trail Pagination Configuration

Increase the pagination size, but keep it reasonable to avoid memory issues: set the revision batch size to 75-100 items per page rather than the default of 50. This reduces the number of round trips while maintaining stability.
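The paging pattern itself is straightforward; here is a sketch where `fetch_page` stands in for whatever call retrieves one page of revisions (the function name and `(offset, limit)` signature are placeholders, not the module's actual API):

```python
def fetch_all_revisions(fetch_page, page_size=75):
    """Collect all revisions by paging until a short (final) page is returned.

    fetch_page(offset, limit) is a stand-in for the real revision query; a
    short page signals the end of the history, so no total count is needed.
    """
    revisions = []
    offset = 0
    while True:
        page = fetch_page(offset, page_size)
        revisions.extend(page)
        if len(page) < page_size:
            break  # final, partially filled page reached
        offset += page_size
    return revisions
```

With 237 revisions and a page size of 75, this makes four round trips instead of the five the default page size of 50 would need.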

2. Export Batch Configuration

Configure the export process to handle batches incrementally:


audit.export.batchSize=75
audit.export.maxRetries=3
audit.export.retryDelay=5000

3. REST API Chunking

Optimize the API layer for large payloads:


rest.api.chunkSize=100
rest.api.timeout=300000
rest.api.compression=true

4. Database Connection Pooling

This is critical for preventing the exhaustion you’re seeing:


db.pool.maxActive=100
db.pool.maxWait=180000
db.pool.testOnBorrow=true

Additional Recommendations:

  • Enable compression on REST API responses to reduce transfer time
  • Implement exponential backoff for retry logic on failed chunks
  • Schedule large exports during off-peak hours to reduce connection contention
  • Monitor connection pool metrics to identify optimal sizing for your workload
  • Consider archiving very old revisions if they’re not required for active compliance audits
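The exponential-backoff recommendation can be sketched as a small wrapper around whatever performs one chunk export (the `export_chunk` callable and the delay values are illustrative; the `sleep` parameter is injected so the behavior is testable):

```python
import time

def export_chunk_with_backoff(export_chunk, max_retries=3, base_delay=1.0, sleep=time.sleep):
    """Run export_chunk, retrying failures with exponential backoff (1s, 2s, 4s, ...)."""
    for attempt in range(max_retries + 1):
        try:
            return export_chunk()
        except Exception:
            if attempt == max_retries:
                raise  # give up after the final retry
            sleep(base_delay * (2 ** attempt))
```

Pairing this with the audit.export.maxRetries/retryDelay settings above means a transient pool timeout costs you one chunk's worth of rework rather than the whole export.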

For Immediate Relief: If you need exports now, use the date range filtering mentioned by devops_engineer. Export in quarterly chunks, then consolidate the reports. This works within existing limits while you tune the configuration.
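Consolidating the quarterly chunks back into one report is just a merge and re-sort; a sketch, assuming each exported revision record carries a sortable revision number (the `"revision"` key is an assumption about your export format, not a documented field):

```python
def consolidate(chunks):
    """Merge per-quarter export chunks into one audit trail ordered by revision.

    'chunks' is a list of lists of revision records; each record is assumed to
    be a dict with a sortable "revision" field.
    """
    merged = [record for chunk in chunks for record in chunk]
    merged.sort(key=lambda record: record["revision"])
    return merged
```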

The key is balancing all four areas: increasing one without adjusting the others will just move the bottleneck. Start with these settings and monitor your export performance, then adjust based on your specific revision volumes and infrastructure capacity.

Thanks for the suggestions. I’ve reviewed our REST API configuration and the timeout is indeed set to 120 seconds. The database connection pool shows maxActive=50 but we’re hitting the limit during peak export times. Should I be looking at the export batch configuration separately from the REST API chunking, or are these coupled in the audit-reporting module?

The connection pool exhaustion suggests your export process is opening too many concurrent connections. Check your database connection pooling settings - you might need to increase maxActive and maxWait parameters. Also, the REST API chunking should be configured to process smaller batches with proper connection reuse between chunks.
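To make the connection-reuse point concrete, here is a sketch where every batch goes through one shared session object instead of opening a fresh connection per revision; the session's `fetch(ids)` interface is a stand-in for a persistent HTTP session against the real REST API:

```python
def export_with_shared_session(session, revision_ids, batch_size=75):
    """Send the export in small batches through a single shared session.

    Holding one session for the whole export means each batch reuses the same
    underlying connection, so the pool sees one connection per export job
    rather than one per batch or per revision.
    """
    exported = []
    for i in range(0, len(revision_ids), batch_size):
        exported.extend(session.fetch(revision_ids[i:i + batch_size]))
    return exported
```

With your current maxActive=50, a handful of concurrent exports structured this way stays far below the pool limit, whereas per-revision connections would exhaust it almost immediately at 200+ revisions.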