Our treasury team is running automated payment batch jobs through Orchestrator in JDE 9.2.1, but we’re consistently hitting SQL timeout exceptions when processing large payment runs. The orchestration works fine for small batches (under 500 payments), but fails when we try to process our daily run of 2000-3000 payments.
The error we’re seeing:
SQL Exception: Query timeout expired
at PaymentProcessor.executeBatch(line 234)
Query: SELECT * FROM F0411 WHERE RPAG='P' AND RPDGJ>=...
The Orchestrator timeout is set to 300 seconds, which should be plenty, but the query seems to be taking longer as the batch size increases. We’ve tried splitting into smaller batches manually, but that defeats the purpose of automation. The incomplete payment processing is causing cash flow reconciliation issues and vendor payment delays. Any suggestions on optimizing either the Orchestrator configuration or the underlying SQL query performance?
Good point on the indexes. I checked with our DBA and we do have indexes on those columns, but the statistics haven’t been updated in 6 months. They’re running a statistics update now. However, even with proper indexes, is 300 seconds really enough for 3000 payment records? Should we be increasing the Orchestrator timeout value, or is there a better approach to batch size optimization?
Don’t just increase the timeout blindly - that’s treating the symptom, not the cause. The real issue is that your Orchestrator is trying to process all 3000 payments in a single transaction. Instead, configure the Orchestrator to use chunking. Set up a loop that processes payments in batches of 250-500 records with explicit commits between chunks. This way each chunk completes quickly, and you avoid long-running transactions that hold database locks and consume excessive resources.
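The commit-per-chunk pattern can be sketched as below. This is an illustrative Python sketch against an in-memory SQLite stand-in for F0411, not actual Orchestrator configuration; the table and column names mirror this thread, and flipping RPAG is just a stand-in for the real payment work. The point is the transaction boundary: each chunk commits before the next begins, so no single transaction spans the whole 3000-record run.

```python
import sqlite3

CHUNK_SIZE = 250  # start conservative, per the 250-500 range above

def process_in_chunks(conn, chunk_size=CHUNK_SIZE):
    """Process pending ('P') payments in fixed-size chunks,
    committing after each chunk so locks are released early."""
    processed = 0
    while True:
        rows = conn.execute(
            "SELECT RPDOC FROM F0411 WHERE RPAG = 'P' "
            "ORDER BY RPDOC LIMIT ?", (chunk_size,)).fetchall()
        if not rows:
            break
        for (doc,) in rows:
            # stand-in for the real payment processing step
            conn.execute("UPDATE F0411 SET RPAG = '#' WHERE RPDOC = ?", (doc,))
        conn.commit()          # explicit commit per chunk
        processed += len(rows)
    return processed

# demo with synthetic data
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE F0411 (RPDOC INTEGER PRIMARY KEY, RPAG TEXT)")
conn.executemany("INSERT INTO F0411 VALUES (?, 'P')", [(i,) for i in range(3000)])
conn.commit()
print(process_in_chunks(conn))  # → 3000
```

Because the SELECT re-runs each iteration and only picks up rows still flagged 'P', an interrupted run can simply be restarted and will resume where it left off.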
Have you looked at the payment selection criteria? If you’re processing all pending payments at once, you might be better off segmenting by vendor, payment method, or bank account. This gives you natural batch boundaries and allows parallel processing.
Here’s a comprehensive solution addressing all three optimization areas:
1. Orchestrator Timeout Settings:
Don’t increase the global timeout - instead, implement intelligent timeout management. In your Orchestrator configuration:
- Keep connection timeout at 300 seconds
- Add a query timeout parameter separate from connection timeout
- Set query timeout to 60 seconds per chunk
- Configure retry logic with exponential backoff (3 retries with 30s, 60s, 120s delays)
This ensures individual operations fail fast while allowing the overall orchestration to continue.
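The retry schedule above (3 retries at 30s, 60s, 120s) is a standard exponential backoff. A minimal sketch, with the sleep function injectable so the demo does not actually wait; the function name and signature are illustrative, not an Orchestrator API:

```python
import time

def with_retries(operation, retries=3, base_delay=30, sleep=time.sleep):
    """Run `operation`; on failure, retry up to `retries` times with
    exponentially growing delays (30s, 60s, 120s for these defaults)."""
    for attempt in range(retries + 1):
        try:
            return operation()
        except Exception:
            if attempt == retries:
                raise  # out of retries: surface the error
            sleep(base_delay * 2 ** attempt)

# demo: an operation that times out twice, then succeeds
calls = {"n": 0}
def flaky_query():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("Query timeout expired")
    return "ok"

delays = []
print(with_retries(flaky_query, sleep=delays.append))  # → ok
print(delays)  # → [30, 60]
```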
2. Batch Size Optimization Strategy:
Implement dynamic batch sizing based on system load:
-- Pseudocode for adaptive batching:
1. Query current payment count: SELECT COUNT(*) FROM F0411 WHERE RPAG='P'
2. Calculate optimal chunk size: chunkSize = min(500, totalCount/6)
3. Process in loop with OFFSET/LIMIT pattern
4. Commit after each chunk, log progress
5. Monitor execution time, adjust chunk size if needed
Configure your Orchestrator with these parameters:
- Initial batch size: 250 records
- Maximum batch size: 500 records
- Adaptive threshold: if chunk takes >45 seconds, reduce batch size by 25%
- If chunk takes <15 seconds, increase batch size by 25%
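The adaptive rule above can be captured in one small function. This is a sketch of the 25%-up/25%-down heuristic with the 45s/15s thresholds from this post; the 100-record floor is my own assumption, since the thread only specifies a 500-record ceiling:

```python
def next_batch_size(current, chunk_seconds,
                    minimum=100, maximum=500,
                    slow_at=45, fast_at=15, step=0.25):
    """Adjust the chunk size by 25% based on how long the last
    chunk took, clamped to an assumed 100-500 record range."""
    if chunk_seconds > slow_at:
        current = int(current * (1 - step))   # too slow: shrink
    elif chunk_seconds < fast_at:
        current = int(current * (1 + step))   # headroom: grow
    return max(minimum, min(maximum, current))

print(next_batch_size(400, 50))  # slow chunk → 300
print(next_batch_size(400, 10))  # fast chunk → 500 (capped)
print(next_batch_size(400, 30))  # in range   → 400
```

Called once after each chunk with that chunk's elapsed time, this converges toward the largest chunk size the database currently sustains within the per-chunk timeout.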
3. SQL Query Performance Optimization:
Rewrite your payment selection query for efficiency:
SELECT RPAG, RPDGJ, RPDOC, RPSFX, RPAN8, RPAAP
FROM F0411
WHERE RPAG = 'P'
AND RPDGJ BETWEEN :startDate AND :endDate
ORDER BY RPDGJ, RPDOC
OFFSET :offset ROWS FETCH NEXT :batchSize ROWS ONLY
Key improvements:
- Explicit column list (not SELECT *) - F0411 is a wide table, so fetching only the needed columns cuts data transfer substantially
- Added date range filter to limit working set
- OFFSET/FETCH for pagination support
- ORDER BY ensures consistent processing order
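One caveat on OFFSET/FETCH worth knowing: the database still walks past all skipped rows on every page, so late chunks in a large run get progressively slower. An alternative is keyset (seek) pagination, which resumes after the last (RPDGJ, RPDOC) key seen. A sketch in Python against an in-memory SQLite stand-in (SQLite uses LIMIT rather than FETCH NEXT, but the WHERE pattern is portable); the synthetic data and function name are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE F0411 (RPAG TEXT, RPDGJ INTEGER, RPDOC INTEGER)")
conn.executemany("INSERT INTO F0411 VALUES ('P', ?, ?)",
                 [(124001 + i // 100, i) for i in range(1000)])

def keyset_pages(conn, batch_size):
    """Yield batches ordered by (RPDGJ, RPDOC), seeking past the last
    key seen instead of re-scanning skipped rows with OFFSET."""
    last_dgj, last_doc = -1, -1
    while True:
        rows = conn.execute(
            "SELECT RPDGJ, RPDOC FROM F0411 "
            "WHERE RPAG = 'P' AND (RPDGJ > ? OR (RPDGJ = ? AND RPDOC > ?)) "
            "ORDER BY RPDGJ, RPDOC LIMIT ?",
            (last_dgj, last_dgj, last_doc, batch_size)).fetchall()
        if not rows:
            return
        yield rows
        last_dgj, last_doc = rows[-1]  # resume point for the next page

total = sum(len(page) for page in keyset_pages(conn, 250))
print(total)  # → 1000
```

With the composite index on (RPAG, RPDGJ, RPDOC) discussed below, each page becomes an index seek, so chunk cost stays flat no matter how deep into the run you are.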
Additional database optimizations:
- Verify composite index exists: CREATE INDEX F0411_PAYMENT_IDX ON F0411(RPAG, RPDGJ, RPDOC)
- Update table statistics weekly, e.g. EXEC DBMS_STATS.GATHER_TABLE_STATS('PRODDTA', 'F0411') on Oracle (use UPDATE STATISTICS on SQL Server)
- Enable query result caching for payment lookup queries
- Partition F0411 table by fiscal year if volume exceeds 10M records
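It is worth verifying that the optimizer actually picks up the composite index rather than assuming it does. Every database exposes a plan facility (Oracle: EXPLAIN PLAN FOR plus DBMS_XPLAN; SQL Server: the estimated execution plan); as a self-contained illustration of the check, here is the same query shape against SQLite's EXPLAIN QUERY PLAN, with synthetic date values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE F0411 (RPAG TEXT, RPDGJ INTEGER, RPDOC INTEGER)")
conn.execute("CREATE INDEX F0411_PAYMENT_IDX ON F0411 (RPAG, RPDGJ, RPDOC)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT RPDOC FROM F0411 "
    "WHERE RPAG = 'P' AND RPDGJ BETWEEN ? AND ? "
    "ORDER BY RPDGJ, RPDOC", (124001, 124031)).fetchall()

# the plan detail should name F0411_PAYMENT_IDX, not a full table scan
print(plan[0][-1])
```

If the plan shows a full scan instead of the index after a statistics refresh, that usually points at a datatype mismatch on the bind variables or a leading-column order that does not match the predicates.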
Implementation approach:
- Start with batch size of 250 and monitor performance
- Run statistics update during off-peak hours
- Test query rewrite in development with production data volumes
- Implement chunking in Orchestrator with explicit commit points
- Add monitoring to track chunk execution times and adjust batch size dynamically
With these changes, you should process 3000 payments in 6-8 minutes total, with each chunk completing in under 60 seconds. This eliminates timeout issues while maintaining automation benefits.
That F0411 query is hitting the Accounts Payable Ledger table, which can be massive in high-volume environments. First thing to check: do you have proper indexes on RPAG and RPDGJ columns? Without those indexes, you’re doing full table scans on potentially millions of records. Run an EXPLAIN PLAN on that query to see if indexes are being used. Also check your database statistics - if they’re stale, the query optimizer might be choosing a poor execution plan.
One more thing - check your database connection pool settings. If Orchestrator is exhausting available connections or waiting for connection allocation, that adds to your timeout issues. Make sure your connection pool max size is adequate for concurrent orchestrations, and set appropriate wait timeouts. We typically configure 20-30 connections for payment processing orchestrations in production environments.