We’re experiencing severe performance issues with our revenue recognition batch processing in JD Edwards 9.2.0. Our month-end close is being delayed by 3+ hours due to the R42800 batch job crawling through transactions.
The Business Function is processing around 45,000 revenue transactions but appears to be doing row-by-row operations. SQL traces show repeated small fetches, which suggests inefficient cursor management. The batch chunk size also appears to default to 100 records, which feels too conservative for this volume.
SQL query tracing shows the same SELECT statements executing thousands of times:
SELECT * FROM F42119 WHERE RDDOC = ? AND RDDCT = ?
SELECT * FROM F0911 WHERE GLDOC = ? AND GLTYP = ?
This is impacting our financial close timeline significantly. Has anyone optimized the R42800 revenue recognition process for large transaction volumes? What batch chunk sizes and Business Function parameters worked for you?
Thanks for the quick responses. I checked and we do have indexes on those columns, but the statistics were last updated 6 months ago. Running RUNSTATS now. Where exactly do I modify the chunk size parameter? Is that in the batch job processing options or somewhere in the Business Function configuration itself?
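In case it helps anyone hitting the same thing, this is roughly what I'm running (DB2 LUW syntax; PRODDTA is the standard JDE production data schema, substitute your own):

-- Collect distribution and index statistics so the optimizer can
-- cost the repeated lookups correctly.
RUNSTATS ON TABLE PRODDTA.F42119 WITH DISTRIBUTION AND DETAILED INDEXES ALL;
RUNSTATS ON TABLE PRODDTA.F0911 WITH DISTRIBUTION AND DETAILED INDEXES ALL;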
Those repeated SELECT statements are a red flag. Look at the Business Function's data selection logic: F42119 and F0911 should be joined in a single set-based query rather than looked up row by row in nested calls (see the sketch below). Have you checked whether indexes exist on the RDDOC/RDDCT and GLDOC/GLTYP columns? Also, are your table statistics current? Outdated stats can push the optimizer into terrible execution plans.
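Roughly speaking, the selection logic should collapse into something like this. The join condition (GLDOC = RDDOC, GLTYP = RDDCT) is inferred from your traced predicates, and PRODDTA is just the conventional JDE data schema, so verify both against your environment before touching the Business Function:

-- Set-based sketch: fetch the sales history rows and their matching GL
-- records for a whole batch of documents in one round trip, instead of
-- issuing two parameterized SELECTs per transaction.
SELECT rd.*, gl.*
  FROM PRODDTA.F42119 rd
  JOIN PRODDTA.F0911  gl
    ON gl.GLDOC = rd.RDDOC
   AND gl.GLTYP = rd.RDDCT
 WHERE rd.RDDOC BETWEEN ? AND ?;  -- or an IN list of the batch's document numbers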
The chunk size is typically controlled through the UBE processing options. Open the batch application (R42800) and check the Processing Options tab for parameters like "Number of Records to Process" or "Commit Interval". You may also need to modify the Business Function itself if the value is hardcoded there. Additionally, check your JDE.INI file for the database fetch settings: there's a parameter called "FetchArraySize" that defaults to 100 but should be bumped to at least 1000 for batch operations. One more thing: make sure you're not running this during peak hours, when other processes are competing for database resources.
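To make the JDE.INI part concrete, the change would look something like the fragment below. I believe it lives under [DB SYSTEM SETTINGS], but confirm the exact key name and section against Oracle's documentation for your tools release before editing:

[DB SYSTEM SETTINGS]
; Larger array fetches mean fewer network round trips per cursor.
; Default is 100; 1000+ is a reasonable starting point for batch UBEs.
FetchArraySize=1000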
I've seen this exact pattern before. The default chunk size of 100 is far too small for modern hardware; we increased ours to 5,000 records per batch and saw immediate improvement. Also check the cursor fetch array size in the Business Function: it's probably still at the default, which causes excessive round trips to the database.
I want to add some database-level considerations. Beyond the application settings, enable SQL tracing at the database level so you can see the actual execution plans. In our environment, the optimizer was doing full table scans on F0911 because the statistics were skewed; after updating stats and adding a composite index on (GLDOC, GLTYP, GLPOST), query time dropped by 80%. Also consider partitioning these large transaction tables if you're on 9.2.0, since it helps with parallel query execution during batch processing.
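For what it's worth, here's roughly the DDL we used. The index name and PRODDTA schema are placeholders, and on E1 you'd normally register a custom index through Table Design Aid so the specs stay in sync with the database, so treat this as a sketch to hand to your DBA rather than something to run blind:

-- Composite index matching the traced predicates plus the posted-code column.
-- Verify the column names against your F0911 layout before creating anything.
CREATE INDEX PRODDTA.F0911_REVREC01
    ON PRODDTA.F0911 (GLDOC, GLTYP, GLPOST);

-- Refresh statistics so the optimizer actually considers the new index
-- (DB2 LUW syntax, matching the RUNSTATS mentioned earlier in the thread).
RUNSTATS ON TABLE PRODDTA.F0911 WITH DISTRIBUTION AND DETAILED INDEXES ALL;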