We’re experiencing intermittent failures when importing supplier data using the Bulk Import Utility in TrackWise 9.0. The process works fine for small batches (under 100 records) but fails consistently when importing 500+ suppliers, producing connection timeout errors.
Error we’re seeing:
java.sql.SQLException: Connection timeout
at com.microsoft.sqlserver.jdbc.SQLServerConnection.terminate
Caused by: Read timed out after 30000ms
Our current batch size is set to 1000 records, and we’re using the default connection pool settings. The supplier table has around 50,000 existing records. We’ve noticed SQL Server blocking during these imports, and query performance degrades significantly during the import process.
Has anyone successfully tuned bulk imports for large supplier datasets? Particularly interested in optimal batch sizing and connection pool configuration for SQL Server.
One more thing - are you importing during business hours? Supplier lookups from other modules (CAPA, Change Control) will compete for the same table locks. We schedule bulk imports during maintenance windows to avoid contention. Also verify your tempdb configuration - bulk operations can hammer tempdb if it’s not properly sized.
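To verify tempdb configuration, a quick look at file count and sizes can be done with a query like this (sizes are stored in 8 KB pages, converted to MB below; multiple equally sized data files help reduce allocation contention):

```sql
-- Check tempdb data/log file count and sizes.
SELECT name,
       type_desc,
       size * 8 / 1024 AS size_mb,
       growth
FROM tempdb.sys.database_files;
```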
Check your SQL Server Resource Governor settings too. We had similar issues and found that the bulk import sessions were being throttled. Also, 50K records shouldn’t cause this level of blocking unless you have missing indexes or outdated statistics. Run UPDATE STATISTICS on your supplier tables before the import. Consider reducing batch size to 250-500 records as a starting point.
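As a sketch, checking whether Resource Governor is active and refreshing statistics before the import could look like the following (dbo.TW_SUPPLIER is a placeholder table name; substitute your actual supplier table):

```sql
-- Is Resource Governor enabled and classifying sessions?
SELECT is_enabled, classifier_function_id
FROM sys.resource_governor_configuration;

-- Refresh optimizer statistics before the import.
-- dbo.TW_SUPPLIER is a placeholder; use your real supplier table name.
UPDATE STATISTICS dbo.TW_SUPPLIER WITH FULLSCAN;
```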
The 30-second timeout is your smoking gun. Bulk imports with poor indexing will cause table scans that exceed it. Run sp_who2 during the import to identify blocking chains. I bet you’ll see PAGELATCH_EX waits. Your supplier table probably needs indexes on the supplier_type, status, and parent_supplier_id columns if you’re doing hierarchical imports.
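A minimal diagnostic and indexing sketch along these lines (the table and column names are assumptions; adjust to your actual schema):

```sql
-- Identify blocking chains during the import (check the BlkBy column).
EXEC sp_who2;

-- Candidate indexes for hierarchical supplier imports.
-- dbo.TW_SUPPLIER and its columns are placeholders.
CREATE NONCLUSTERED INDEX IX_TW_SUPPLIER_type
    ON dbo.TW_SUPPLIER (supplier_type);
CREATE NONCLUSTERED INDEX IX_TW_SUPPLIER_status
    ON dbo.TW_SUPPLIER (status);
CREATE NONCLUSTERED INDEX IX_TW_SUPPLIER_parent
    ON dbo.TW_SUPPLIER (parent_supplier_id);
```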
Before touching connection pools, verify your import isn’t wrapped in a single transaction. If the entire batch commits as one transaction, you’ll hold locks for the full duration. Break it into smaller transactional units. Also check if you have triggers on the supplier table - those can cause cascading locks during bulk operations.
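To see whether triggers are present on the supplier table, something like this works (the table name is a placeholder):

```sql
-- List triggers defined on the supplier table and whether they're enabled.
SELECT t.name, t.is_disabled, t.is_instead_of_trigger
FROM sys.triggers AS t
WHERE t.parent_id = OBJECT_ID('dbo.TW_SUPPLIER');
```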
I’ve seen this exact issue. Your batch size of 1000 is likely too aggressive for the default connection pool. Start by checking your connection pool max active connections - default is usually 20-30. With bulk imports, you need headroom. Also check if your supplier table has proper indexing on foreign key columns.
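To gauge whether the pool is actually exhausted during an import, you can count the application’s active sessions on the SQL Server side (the login name here is an assumption; use whatever login your TrackWise app pool connects with):

```sql
-- Count sessions held by the application login during the import.
SELECT COUNT(*) AS active_sessions
FROM sys.dm_exec_sessions
WHERE login_name = 'trackwise_app'   -- placeholder login
  AND is_user_process = 1;
```

If this number sits at the pool maximum while the import runs, raising the pool ceiling (or lowering batch size) is the next experiment.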