We recently implemented an automated archiving solution for our work order database in D365 Finance & Operations 10.0.41 that dramatically improved query performance. Our work order tables had grown to over 15 million records spanning 7 years, causing significant slowdowns in daily operations.
The archiving process now runs nightly via SQL Server Agent jobs, moving completed work orders older than 2 years to archive tables while maintaining referential integrity. Query response times for active work order searches dropped from 18-25 seconds to under 3 seconds.
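A nightly SQL Server Agent job of the kind described could be defined in T-SQL roughly like this. The job, database, and procedure names here are illustrative assumptions, not the poster's actual objects:

```sql
USE msdb;
GO

-- Hypothetical nightly archiving job (all names are illustrative)
EXEC dbo.sp_add_job
    @job_name = N'WorkOrder_Nightly_Archive';

EXEC dbo.sp_add_jobstep
    @job_name      = N'WorkOrder_Nightly_Archive',
    @step_name     = N'Run archive procedure',
    @subsystem     = N'TSQL',
    @database_name = N'AXDB',
    @command       = N'EXEC dbo.usp_ArchiveWorkOrders @BatchSize = 50000;';

-- Run daily at 02:00
EXEC dbo.sp_add_schedule
    @schedule_name     = N'Nightly_2AM',
    @freq_type         = 4,       -- daily
    @freq_interval     = 1,
    @active_start_time = 020000;

EXEC dbo.sp_attach_schedule
    @job_name      = N'WorkOrder_Nightly_Archive',
    @schedule_name = N'Nightly_2AM';

EXEC dbo.sp_add_jobserver
    @job_name = N'WorkOrder_Nightly_Archive';
```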
Key implementation points:
- Partitioned archive tables by fiscal year for easier management
- Created indexed views on archive data for historical reporting
- Implemented batch archiving (50K records per run) to minimize locking
- Added audit trail tracking for all archived records
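The fiscal-year partitioning could be set up along these lines. This is a minimal sketch: the boundary dates, table, and column names are assumptions, and a production setup would typically spread partitions across filegroups rather than placing everything on PRIMARY:

```sql
-- One range per fiscal year (boundary dates are illustrative)
CREATE PARTITION FUNCTION pf_FiscalYear (datetime)
AS RANGE RIGHT FOR VALUES
    ('2019-07-01', '2020-07-01', '2021-07-01',
     '2022-07-01', '2023-07-01', '2024-07-01');

-- All partitions on PRIMARY for brevity
CREATE PARTITION SCHEME ps_FiscalYear
AS PARTITION pf_FiscalYear ALL TO ([PRIMARY]);

-- Archive table created on the partition scheme; the partitioning
-- column must be part of the clustered index key
CREATE TABLE dbo.WorkOrderArchive
(
    WorkOrderId  bigint       NOT NULL,
    Status       nvarchar(20) NOT NULL,
    CreatedDate  datetime     NOT NULL,
    ArchivedDate datetime     NOT NULL DEFAULT (GETDATE()),
    CONSTRAINT PK_WorkOrderArchive
        PRIMARY KEY CLUSTERED (CreatedDate, WorkOrderId)
) ON ps_FiscalYear (CreatedDate);
```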
The archiving logic uses custom stored procedures that handle dependent data across multiple tables including work order lines, labor transactions, and material allocations. We’re now processing approximately 200K work orders monthly with no performance degradation.
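A minimal sketch of one such batched, dependency-aware procedure might look like the following. Table and column names are assumptions, and the real procedures handle more dependent tables (including material allocations) plus error handling:

```sql
CREATE OR ALTER PROCEDURE dbo.usp_ArchiveWorkOrders
    @BatchSize int = 50000
AS
BEGIN
    SET NOCOUNT ON;

    -- One batch of completed work orders older than two years
    SELECT TOP (@BatchSize) WorkOrderId
    INTO #Batch
    FROM dbo.WorkOrderTable
    WHERE Status = 'Completed'
      AND CreatedDate < DATEADD(YEAR, -2, GETDATE());

    BEGIN TRANSACTION;

    -- Copy headers and dependent rows into the archive tables
    -- (assumes archive tables mirror the source column layout)
    INSERT INTO dbo.WorkOrderArchive
    SELECT wo.* FROM dbo.WorkOrderTable wo
    JOIN #Batch b ON b.WorkOrderId = wo.WorkOrderId;

    INSERT INTO dbo.WorkOrderLineArchive
    SELECT wol.* FROM dbo.WorkOrderLine wol
    JOIN #Batch b ON b.WorkOrderId = wol.WorkOrderId;

    INSERT INTO dbo.LaborTransArchive
    SELECT lt.* FROM dbo.LaborTrans lt
    JOIN #Batch b ON b.WorkOrderId = lt.WorkOrderId;

    -- Delete children before headers so foreign keys stay satisfied
    DELETE wol FROM dbo.WorkOrderLine wol
    JOIN #Batch b ON b.WorkOrderId = wol.WorkOrderId;

    DELETE lt FROM dbo.LaborTrans lt
    JOIN #Batch b ON b.WorkOrderId = lt.WorkOrderId;

    DELETE wo FROM dbo.WorkOrderTable wo
    JOIN #Batch b ON b.WorkOrderId = wo.WorkOrderId;

    COMMIT TRANSACTION;
END;
```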
We created union views that combine active and archive tables for reporting purposes. The views include a data_source column to indicate origin. For daily operations, users only query active tables. Historical analysis goes through Power BI reports that leverage the union views with date range filters to optimize performance. We also added a simple archive search form in D365 for occasional lookups that queries archive tables directly.
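A union view of that shape could be sketched as follows (view, table, and column names are assumptions):

```sql
-- Combine active and archive rows, tagging each row's origin
CREATE OR ALTER VIEW dbo.vw_WorkOrderAll
AS
SELECT wo.WorkOrderId, wo.Status, wo.CreatedDate,
       CAST('ACTIVE' AS varchar(7)) AS data_source
FROM dbo.WorkOrderTable AS wo
UNION ALL
SELECT a.WorkOrderId, a.Status, a.CreatedDate,
       CAST('ARCHIVE' AS varchar(7)) AS data_source
FROM dbo.WorkOrderArchive AS a;
```

With a date-range predicate such as `WHERE CreatedDate >= '2024-01-01'` on the view, the optimizer can usually push the filter into both branches, so queries that only touch recent data avoid scanning the archive side.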
Excellent implementation! The partitioning strategy by fiscal year is smart for compliance and audit requirements. How are you handling queries that need to span both active and archived data? Are you using union views or do users access separate interfaces for historical searches?
The 3-second query time is impressive. What indexes did you add or modify on the active tables after archiving? Did you rebuild statistics or update index fragmentation levels? We’re seeing similar table bloat and management is pushing for solutions.
This is a comprehensive archiving solution that addresses all three critical aspects: the archiving process itself, query speed optimization, and work order database management. Let me break down the key technical achievements:
Archiving Process Design:
The nightly automated approach using SQL Server Agent with batch processing (50K records per run) is production-grade. The dependency-aware archiving sequence (headers → lines → transactions) maintains data integrity while the fiscal year partitioning provides excellent scalability and compliance alignment. The 200K monthly processing capacity with no degradation proves the architecture is robust.
Query Speed Improvements:
The drop from 18-25 seconds to under 3 seconds (an 83-88% reduction) came from multiple optimizations working together. The filtered index on active statuses is particularly effective: it keeps the index dramatically smaller while targeting exactly the query patterns users hit most. Post-archive index rebuilds and statistics updates ensured the query optimizer had accurate information. The union views for reporting balance historical access needs without impacting operational queries.
Work Order Database Management:
Moving from 15 million records to a lean active dataset while preserving 7 years of data shows mature data lifecycle management. The tiered backup strategy (hourly transaction logs for active data, weekly full plus daily differential for the archive) optimizes both cost and compliance. The 40% backup storage reduction is significant at enterprise scale.
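The tiered backup schedule described above could be expressed roughly as follows, assuming the archive tables live in a separate database; database names and paths are illustrative:

```sql
-- Production database: hourly transaction log backups
-- (requires FULL recovery model; weekly full backup not shown)
BACKUP LOG AXDB
TO DISK = N'X:\Backups\AXDB_log.trn'
WITH COMPRESSION;

-- Archive database: weekly full backup
BACKUP DATABASE AXDB_Archive
TO DISK = N'X:\Backups\AXDB_Archive_full.bak'
WITH COMPRESSION;

-- Archive database: daily differential backup
BACKUP DATABASE AXDB_Archive
TO DISK = N'X:\Backups\AXDB_Archive_diff.bak'
WITH DIFFERENTIAL, COMPRESSION;
```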
Implementation Best Practices:
The audit trail tracking, indexed archive views for reporting, and separate search interface for historical data demonstrate thorough planning. The monthly automated maintenance for index fragmentation ensures sustained performance. For organizations facing similar challenges, this pattern provides a proven blueprint: automate archiving with dependency awareness, partition strategically, optimize indexes for active data patterns, and implement tiered backup strategies. The ROI is clear in both performance gains and operational efficiency.
What’s your retention policy for the archived data? We’re facing similar challenges with 12 million work orders. Did you encounter any issues with foreign key constraints during the archiving process? Also curious about your backup strategy - are you backing up archive tables with the same frequency as production tables?
After the initial archive run, we rebuilt all clustered indexes and updated statistics with FULLSCAN. We also added a filtered index on the work order status column excluding completed/archived statuses:
CREATE INDEX IX_WorkOrder_Status_Active
ON WorkOrderTable(Status, CreatedDate)
WHERE Status IN ('InProgress', 'Scheduled', 'OnHold');
This filtered index is what really accelerated the common queries since 85% of searches focus on active work orders. We also implemented a monthly maintenance job that checks fragmentation and rebuilds indexes above 30% fragmentation automatically.
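A maintenance job of that shape can be sketched with `sys.dm_db_index_physical_stats`; this is a minimal version that rebuilds above 30% fragmentation and reorganizes between 5% and 30% (thresholds per common guidance), plus the post-archive statistics refresh:

```sql
-- Generate REBUILD/REORGANIZE statements per fragmented index
DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql
    + N'ALTER INDEX ' + QUOTENAME(i.name) + N' ON '
    + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
    + CASE WHEN ps.avg_fragmentation_in_percent > 30
           THEN N' REBUILD;'
           ELSE N' REORGANIZE;'
      END + CHAR(10)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
JOIN sys.indexes i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
JOIN sys.tables  t ON t.object_id = ps.object_id
JOIN sys.schemas s ON s.schema_id = t.schema_id
WHERE ps.avg_fragmentation_in_percent > 5
  AND i.name IS NOT NULL;

EXEC sys.sp_executesql @sql;

-- Refresh statistics with a full scan after heavy data movement
UPDATE STATISTICS dbo.WorkOrderTable WITH FULLSCAN;
```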