We’re experiencing significant performance issues when querying general ledger balances through the Infor OS API Gateway for our month-end financial close process. The API calls take 4-6 minutes to return results when we request balances across all accounts (approximately 15,000 GL accounts) for the current fiscal year.
Our current approach makes a single API call without pagination:
GET /financials/v1/gl-balances?fiscalYear=2025&includeSubAccounts=true
Authorization: Bearer {token}
The response eventually returns but the timeout is causing our reporting dashboards to fail. We’ve tried filtering by account ranges and date parameters but haven’t found an optimal data filtering strategy yet. Is there a recommended approach for API query optimization when dealing with large GL datasets? Should we be implementing pagination differently?
Here's a breakdown of the optimization approach for your scenario:
API Query Optimization Strategy:
Implement cursor-based pagination with proper filtering:
GET /financials/v1/gl-balances?fiscalYear=2025&limit=800&cursor={nextToken}
&fields=accountNumber,balance,period&segment1=100-199
Pagination Usage Best Practices:
- Start with limit=800 as your page size; it's a reasonable default for GL data volumes, but tune it based on observed payload size and response time
- Implement cursor-based pagination using the nextCursor token from response headers
- Make sequential calls until nextCursor is null
- Add retry logic with exponential backoff for transient failures
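The pagination loop above can be sketched in Python. This is a minimal, transport-agnostic sketch: `fetch_page` is a caller-supplied callable (not an Infor SDK function), and the `{"items": ..., "nextCursor": ...}` response shape is an assumption; adapt the field names to the actual gl-balances response.

```python
import time

def fetch_all_balances(fetch_page, max_retries=3):
    """Walk cursor-based pagination until the cursor is exhausted.

    fetch_page(cursor) is assumed to perform one API call and return a
    dict like {"items": [...], "nextCursor": "token" or None}. The real
    Infor OS response structure may differ; this only shows the loop and
    the exponential-backoff retry pattern.
    """
    cursor = None
    results = []
    while True:
        for attempt in range(max_retries):
            try:
                page = fetch_page(cursor)
                break  # success, stop retrying
            except Exception:
                if attempt == max_retries - 1:
                    raise  # transient-failure budget exhausted
                time.sleep(2 ** attempt)  # backoff: 1s, 2s, ...
        results.extend(page["items"])
        cursor = page.get("nextCursor")
        if not cursor:  # null/absent cursor means last page
            return results
```

In practice `fetch_page` would issue the GET with `limit=800&cursor={token}` and your bearer token, and you would distinguish retryable errors (429, 5xx, timeouts) from permanent ones instead of catching everything.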
Data Filtering Strategies:
- Segment-based filtering: Break your 15K accounts into logical segments (departments, divisions) and query each segment separately. This yields 10-15 API calls instead of one massive call
- Period filtering: Instead of the entire fiscal year, query by period ranges:
- Current period for real-time dashboards
- Historical periods only when needed for trend analysis
- Field selection: Use the fields parameter to request only the required data:
- Basic view: accountNumber, balance, period
- Detailed view: add description, currency, lastModified only when drilling down
- Parallel processing: If you segment your queries, run 3-4 parallel threads (respecting rate limits of ~10 requests/second)
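The parallel-segment idea can be sketched with a small thread pool. `query_segment` here is a hypothetical caller-supplied callable (one segment in, balance rows out), not an Infor API; with 4 workers and calls taking a couple of seconds each, effective concurrency stays well under a ~10 requests/second gateway limit.

```python
from concurrent.futures import ThreadPoolExecutor

def query_segments_parallel(segments, query_segment, max_workers=4):
    """Query each account segment on a bounded thread pool.

    segments: e.g. ["100-199", "200-299", ...]
    query_segment(segment): assumed to return a list of balance rows for
    one segment (internally it would run the paginated fetch for that
    segment's account range).
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # map() preserves input order, so aggregated results stay
        # aligned with the segment list
        per_segment = pool.map(query_segment, segments)
    return [row for rows in per_segment for row in rows]
```

Capping `max_workers` is the crude rate-limit control here; if you see 429 responses in gateway metrics, lower it or add a token-bucket limiter.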
Implementation Pattern:
Create a wrapper service that:
- Divides account ranges into segments
- Implements pagination loop for each segment
- Aggregates results into your reporting cache
- Runs asynchronously during off-peak hours
- Caches results for dashboard consumption
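The wrapper-service steps above can be condensed into one sketch. `fetch_segment_pages` and the dict-like `cache` are illustrative names (not Infor APIs); in a real service the cache would be your reporting tables and this loop would run as the off-peak batch job.

```python
def refresh_reporting_cache(segments, fetch_segment_pages, cache):
    """Refresh the reporting cache segment by segment.

    fetch_segment_pages(segment) is assumed to yield one list of balance
    rows per paginated API call for that segment; cache is any dict-like
    store keyed by segment. Both are placeholders for your own client
    and storage layer.
    """
    for segment in segments:
        rows = []
        for page in fetch_segment_pages(segment):
            rows.extend(page)  # aggregate all pages for this segment
        cache[segment] = rows  # atomic-per-segment replace
    return cache
```

Writing one segment at a time keeps a failed run from invalidating segments that already refreshed successfully.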
Expected Performance:
With these optimizations, your 15K account query should complete in 45-90 seconds total (distributed across multiple paginated calls) instead of 4-6 minutes. Each individual API call should return in 2-4 seconds.
Additional Considerations:
- Monitor API Gateway metrics in Infor OS Portal to identify any rate limiting
- Use ETags for caching if account balances haven’t changed
- Consider implementing a nightly batch job to pre-populate reporting tables
- For real-time dashboards, query only accounts with recent activity using lastModifiedDate filter
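The ETag suggestion above follows the standard HTTP conditional-request pattern (If-None-Match / 304 Not Modified). This sketch assumes a generic `fetch(url, headers)` helper returning `(status, headers, body)`; it is not an Infor SDK call, and whether the gl-balances endpoint actually emits ETags should be confirmed against the gateway.

```python
def conditional_get(fetch, url, etag_store):
    """GET with If-None-Match; reuse the cached body on 304.

    fetch(url, headers) -> (status, response_headers, body) is a
    hypothetical HTTP helper. etag_store maps url -> (etag, body).
    """
    headers = {}
    cached = etag_store.get(url)
    if cached:
        headers["If-None-Match"] = cached[0]
    status, resp_headers, body = fetch(url, headers)
    if status == 304 and cached:
        return cached[1]  # balances unchanged: skip re-downloading
    etag = resp_headers.get("ETag")
    if etag:
        etag_store[url] = (etag, body)  # remember for the next poll
    return body
```

For dashboards polling the same account set every few minutes, a 304 avoids transferring the full balance payload when nothing has changed.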
This comprehensive approach addresses all three focus areas: query optimization through proper pagination, effective use of cursor-based pagination patterns, and strategic data filtering to minimize payload size.
The single large query is definitely your bottleneck: the API is processing all 15K accounts in one transaction, which is what causes the timeout. Breaking the work into paginated, filtered chunks is the fix.
Thanks for the suggestions. We do have a segmented chart of accounts structure. Would it be better to make parallel API calls for different segments or sequential calls with pagination? Also, what’s the recommended page size for optimal performance?