Advanced planning module: Data migration vs full rebuild for DAM 2022 upgrade

Our team is planning an upgrade from DAM 2019 to DAM 2022 and we’re debating whether to migrate existing advanced planning data or do a complete rebuild. We have about 18 months of historical production schedules, material requirements, and capacity planning data that’s heavily customized.

The concern is data integrity issues during migration, especially with custom workflow mappings we’ve built. We’re also moving to a hybrid cloud deployment which adds another layer of complexity. Some team members argue a fresh start would be cleaner, while others worry about losing historical context for scheduling algorithms.

What approach have others taken for major version upgrades with the advanced planning module? Interested in hearing both the technical considerations and business impact perspectives.

Based on my experience with multiple DAM 2022 upgrades, here’s a comprehensive perspective on the migration versus rebuild decision:

Legacy Data Audit Considerations: Before deciding, perform a thorough audit of your existing planning data. Key metrics to evaluate:

  • Data completeness: Are all required fields populated across your 18 months of history?
  • Relationship integrity: Do all schedule-to-resource and material-to-order links resolve correctly?
  • Customization complexity: How many custom fields and tables have you added to the planning schema?
  • Usage patterns: Which historical data actually feeds into your current scheduling algorithms?

Run the Data Quality Analyzer tool and aim for a 90%+ integrity score for migration to be viable. Below that threshold, you’re accumulating technical debt that will haunt you in the new version.
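If you want a feel for what the audit is checking before running the vendor tool, here is a minimal sketch of the completeness and relationship-integrity checks over an exported planning dataset. All record shapes and field names (`schedule_id`, `resource_id`, `material_id`, etc.) are hypothetical stand-ins, not the actual DAM schema; adapt them to your own export.

```python
# Illustrative integrity audit over exported planning records.
# Field names are hypothetical; map them to your real planning schema.

REQUIRED_FIELDS = ("schedule_id", "resource_id", "material_id", "start_date")

def audit(schedules, resources, materials):
    """Return the fraction of schedule records that are complete
    and whose resource/material references resolve."""
    resource_ids = {r["id"] for r in resources}
    material_ids = {m["id"] for m in materials}
    ok = 0
    for s in schedules:
        complete = all(s.get(f) not in (None, "") for f in REQUIRED_FIELDS)
        resolved = (s.get("resource_id") in resource_ids
                    and s.get("material_id") in material_ids)
        if complete and resolved:
            ok += 1
    return ok / len(schedules) if schedules else 1.0

schedules = [
    {"schedule_id": 1, "resource_id": "R1", "material_id": "M1", "start_date": "2024-01-02"},
    {"schedule_id": 2, "resource_id": "R9", "material_id": "M1", "start_date": "2024-01-03"},  # orphaned resource link
    {"schedule_id": 3, "resource_id": "R1", "material_id": "M2", "start_date": ""},            # incomplete record
]
resources = [{"id": "R1"}]
materials = [{"id": "M1"}, {"id": "M2"}]

score = audit(schedules, resources, materials)
print(f"integrity score: {score:.0%}")  # 1 of 3 records pass -> 33%
```

The same two checks (required fields populated, foreign references resolving) are what drive the orphaned-record counts mentioned later in this thread.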

Hybrid Cloud Compatibility Strategy: The hybrid deployment question is somewhat independent of the migration decision, but impacts your approach:

Cloud-appropriate components: User interfaces, reporting, analytics, and long-term data storage work well in the cloud with DAM 2022’s improved architecture.

On-premises recommendations: Keep the planning calculation engine, real-time scheduling processor, and direct shop floor integrations on-prem if you need sub-second response times. The network latency for cloud-based real-time calculations can impact production schedule accuracy during high-frequency rescheduling events.

Hybrid configuration: DAM 2022 supports active-active hybrid deployment where planning calculations can run in both locations with data synchronization. This provides failover capability but requires careful configuration of data replication rules.

Workflow Mapping Reality: This is where the rebuild argument becomes strongest. DAM 2022’s workflow engine is fundamentally different from 2019:

  • Event-driven architecture replaces polling-based triggers
  • New expression language for business rules
  • Different API endpoints for custom actions
  • Enhanced state machine capabilities but incompatible with legacy state definitions

Your custom workflows will need rebuilding regardless of data migration approach. Budget 40-60% of your upgrade timeline for workflow reconfiguration. The silver lining is that 2022’s workflow tools are significantly more powerful - you can likely simplify some complex customizations.

Recommended Hybrid Approach: Migrate selectively rather than all-or-nothing:

  1. Master data migration: Bring forward materials, resources, BOMs, and routing definitions. These are relatively stable and migration tools handle them well.

  2. Partial historical migration: Import the most recent 3-6 months of schedule history for algorithm training, but archive older data externally. This gives your predictive scheduling enough context without migrating problematic legacy records.

  3. Full workflow rebuild: Start fresh with workflows using DAM 2022 patterns. Document legacy logic but don’t try to replicate it exactly - take the opportunity to optimize.

  4. Phased go-live: Run parallel systems for 2-4 weeks, comparing planning outputs between old and new. This validates that your migrated/rebuilt configuration produces acceptable schedules.
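The parallel-run comparison in step 4 can be automated with a small diff script. This is a sketch under assumed data shapes: each system's plan is exported as order-id-to-planned-start pairs, and the field names and tolerance are hypothetical, not part of any DAM tooling.

```python
# Parallel-run check: compare planned start times from the old and new
# systems for the same orders and flag divergences beyond a tolerance.
# Data shapes are illustrative; feed it your own plan exports.

from datetime import datetime

def compare_schedules(old, new, tolerance_hours=4):
    """old/new: dicts mapping order id -> planned start (ISO 8601 string).
    Returns order ids dropped from the new plan or moved beyond tolerance."""
    divergent = []
    for order, old_start in old.items():
        new_start = new.get(order)
        if new_start is None:
            divergent.append(order)  # order missing from new plan
            continue
        delta = abs(datetime.fromisoformat(new_start)
                    - datetime.fromisoformat(old_start))
        if delta.total_seconds() > tolerance_hours * 3600:
            divergent.append(order)  # planned start moved too far
    return divergent

old_plan = {"WO-1": "2024-03-01T08:00", "WO-2": "2024-03-01T10:00",
            "WO-3": "2024-03-02T08:00"}
new_plan = {"WO-1": "2024-03-01T09:00", "WO-2": "2024-03-01T16:00"}

print(compare_schedules(old_plan, new_plan))  # ['WO-2', 'WO-3']
```

Running this daily during the 2-4 week parallel period gives you a trend line: the divergence list should shrink as the new configuration stabilizes.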

Business Impact Perspective: The rebuild approach causes 4-8 weeks of reduced scheduling accuracy as algorithms retrain on new data. For high-mix manufacturing, this can mean a 15-20% increase in schedule changes during the transition period. If your business can’t tolerate this disruption, selective migration with aggressive data cleanup is worth the extra effort.

For hybrid cloud deployments specifically, start with a cloud-first architecture design, then selectively move latency-sensitive components back on-premises based on performance testing. Don’t assume on-prem is always faster - DAM 2022’s cloud optimization often surprises people.

Ultimately, the decision hinges on your data quality audit results and business tolerance for transition disruption. Clean legacy data with simple customizations favors migration. Complex customizations with questionable data quality favor a rebuild with selective import of critical master data.

Good point about the data audit. We haven’t done a thorough quality assessment yet. What tools do you recommend for analyzing planning data integrity? Also, regarding the hybrid cloud setup - are there specific planning module features that don’t work well in cloud deployments? We’re particularly concerned about real-time capacity calculations and material availability checks.

I’d recommend a hybrid approach rather than all-or-nothing. Migrate your master data and core planning structures, but rebuild custom workflows from scratch in the new version. This preserves your historical scheduling context while avoiding the workflow compatibility issues.

For hybrid cloud deployment, consider which components stay on-premises versus cloud. We kept our advanced planning database on-prem for performance reasons but moved reporting and analytics to cloud. The data sync between environments needs careful planning, especially for real-time scheduling updates. DAM 2022 has improved hybrid architecture support compared to 2019, but you’ll need to map out your network topology carefully.

We did a full rebuild last year when upgrading to DAM 2021. The migration path looked too risky with our customizations. Took about 3 months to reconfigure everything but we ended up with a cleaner system. The downside was losing historical trends for the scheduling engine - it took about 6 weeks of live data before the predictive algorithms started performing well again.

Don’t forget about workflow mapping complexity. DAM 2022 introduced new workflow engine architecture that’s incompatible with 2019 custom workflows. You’ll need to rebuild those regardless of whether you migrate data. Document your current workflow logic thoroughly before starting - we missed some edge cases in our upgrade and had to patch them post-go-live.

The legacy data audit is critical before making this decision. Run a data quality assessment on your existing advanced planning repository. Check for orphaned records, inconsistent relationships, and deprecated configuration elements that won’t migrate cleanly to DAM 2022.

In my experience, if your data quality score is below 85%, you’re better off rebuilding. The time spent cleaning legacy data for migration often exceeds the effort to reconfigure from scratch. Also consider that DAM 2022 has restructured some planning tables - certain customizations from 2019 simply won’t map directly.

For data assessment, use Apriso’s built-in Data Quality Analyzer tool (available in DAM 2020+). It specifically checks planning module integrity including schedule dependencies, resource allocations, and material links.

Regarding hybrid cloud - real-time capacity calculations work fine in cloud, but you need low-latency connectivity to your shop floor data sources. If your OPC-UA or SCADA integrations have high network latency to cloud, consider keeping the planning calculation engine on-premises and only moving the user interface and reporting layers to cloud. We measured 200-300ms additional latency for cloud-based planning calculations, which was acceptable for our use case but might not be for high-frequency rescheduling scenarios.
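If you want to reproduce that kind of latency measurement before committing to a deployment topology, a simple probe like the one below works. The `call_planning_service` function here is a stub (a sleep standing in for a network round trip); swap in a real request to your own planning endpoint, which is an assumption of your environment, not something this sketch provides.

```python
# Minimal latency probe: time repeated round trips to a planning
# endpoint and report median/worst-case in milliseconds.

import statistics
import time

def call_planning_service():
    """Stub standing in for a real network call to your planning service."""
    time.sleep(0.005)

def measure_latency(call, samples=20):
    timings_ms = []
    for _ in range(samples):
        start = time.perf_counter()
        call()
        timings_ms.append((time.perf_counter() - start) * 1000)
    return statistics.median(timings_ms), max(timings_ms)

median_ms, worst_ms = measure_latency(call_planning_service)
print(f"median {median_ms:.1f} ms, worst {worst_ms:.1f} ms")
```

Run it from both the on-prem network and the cloud environment against the same endpoint; the delta between the two medians is the added latency you'd be accepting for cloud-hosted calculations, and the worst-case number is what matters for high-frequency rescheduling windows.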