I’m looking to start a discussion about how organizations are handling the boundary between configuration management and master data in Blue Yonder Luminate, specifically for demand planning parameters.
We’re struggling to draw a clear line on parameter ownership. Some settings like forecast algorithms and smoothing factors feel like configuration, but they’re stored alongside product master data. Other parameters like safety stock policies sit in configuration tables but behave like master data that changes frequently.
This creates challenges:
- Audit complexity when trying to track who changed what and when
- Inconsistent update processes (some through config management, others through data loads)
- Unclear ownership between IT (configuration) and business users (master data)
How are other organizations drawing this line? What criteria do you use to determine if a parameter belongs in configuration management versus master data? I’m particularly interested in hearing about governance models and auditability approaches that have worked well.
From an IT perspective, configuration should be version-controlled and promoted through environments (dev, test, prod). Master data flows through different channels, often from source systems or direct user updates. We classify parameters by their promotion path. If it needs to be tested in dev before production deployment, it’s configuration. If business users need to update it directly in production based on real-time business needs, it’s master data. This also affects our change management process and approval workflows.
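To make the promotion-path rule concrete, here’s a minimal sketch of how it could be encoded. The parameter names and flags are hypothetical illustrations, not actual Blue Yonder fields:

```python
from dataclasses import dataclass
from enum import Enum


class Category(Enum):
    CONFIGURATION = "configuration"  # promoted through dev -> test -> prod
    MASTER_DATA = "master_data"      # updated directly in production


@dataclass
class Parameter:
    name: str
    requires_env_promotion: bool      # must be tested in dev before prod deployment
    business_updates_in_prod: bool    # business users change it directly in prod


def classify(param: Parameter) -> Category:
    """Promotion-path rule: test-before-deploy means configuration;
    direct business updates in production mean master data."""
    if param.requires_env_promotion and not param.business_updates_in_prod:
        return Category.CONFIGURATION
    return Category.MASTER_DATA


# Hypothetical examples of each type
algo = Parameter("forecast_algorithm", True, False)
safety = Parameter("safety_stock_level", False, True)
```

The classification then feeds the change-management workflow: anything returning `CONFIGURATION` goes through the standard release approval chain.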
I think the confusion comes from Blue Yonder’s data model itself. Some tables are hybrid - they contain both stable configuration elements and dynamic business parameters. For example, the product planning parameters table has algorithm settings (configuration-like) and forecast overrides (master data-like) in the same structure. We’ve had to create custom views and access controls to separate these logically even though they’re physically together.
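The view-based separation can be sketched like this. The table and column names below are illustrative stand-ins, not Blue Yonder’s actual schema, and sqlite3 stands in for the real database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical hybrid planning-parameters table: stable configuration
# columns and dynamic business columns physically together.
cur.execute("""
    CREATE TABLE planning_parameters (
        item_id            TEXT PRIMARY KEY,
        forecast_algorithm TEXT,   -- configuration-like
        smoothing_factor   REAL,   -- configuration-like
        forecast_override  REAL,   -- master-data-like
        safety_stock       REAL    -- master-data-like
    )
""")

# Configuration view: exposed to IT, changes go through release management.
cur.execute("""
    CREATE VIEW v_param_config AS
    SELECT item_id, forecast_algorithm, smoothing_factor
    FROM planning_parameters
""")

# Master-data view: exposed to planners for direct updates.
cur.execute("""
    CREATE VIEW v_param_master AS
    SELECT item_id, forecast_override, safety_stock
    FROM planning_parameters
""")

cur.execute(
    "INSERT INTO planning_parameters VALUES ('SKU1', 'holt_winters', 0.2, 105.0, 40.0)"
)
config_row = cur.execute("SELECT * FROM v_param_config").fetchone()
master_row = cur.execute("SELECT * FROM v_param_master").fetchone()
```

Grants on the views rather than the base table are what enforce the logical split; each audience only ever sees its half of the hybrid row.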
We use a simple rule: if it changes with business cycles (monthly, quarterly), it’s master data. If it changes with system releases or major process changes, it’s configuration. Forecast algorithm selection is configuration because you don’t change it often and it requires testing. Safety stock levels are master data because planners adjust them regularly based on market conditions. This distinction helps clarify ownership - IT owns configuration, planning team owns master data.
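The change-cadence rule and the ownership mapping it implies could be written down roughly like this; the driver labels are my own shorthand, not anything from the product:

```python
# Hypothetical change drivers grouped by the cadence rule above.
BUSINESS_CYCLE_DRIVERS = {"monthly", "quarterly", "market_conditions"}
RELEASE_DRIVERS = {"system_release", "process_change"}

OWNER = {
    "configuration": "IT",
    "master_data": "planning team",
}


def classify_by_cadence(change_driver: str) -> str:
    """Business-cycle changes -> master data; release-driven changes -> configuration."""
    if change_driver in BUSINESS_CYCLE_DRIVERS:
        return "master_data"
    if change_driver in RELEASE_DRIVERS:
        return "configuration"
    raise ValueError(f"unknown change driver: {change_driver!r}")
```

So forecast algorithm selection (`system_release`) lands with IT, while safety stock levels (`market_conditions`) land with the planning team.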
Consistency checks are another dimension to consider. Configuration typically has referential integrity enforced by the system - you can’t configure an invalid algorithm code. Master data often has softer validation - you can enter a safety stock value that’s technically valid but operationally wrong. We use data quality rules to validate master data and configuration validation to check config parameters. The validation approach often reveals which category a parameter truly belongs to.
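The hard-versus-soft validation contrast can be sketched as follows. The algorithm codes and the ten-weeks-of-demand threshold are made-up examples, not real Blue Yonder values:

```python
# Hard validation (configuration): an invalid code is rejected outright,
# mimicking system-enforced referential integrity.
VALID_ALGORITHMS = {"moving_average", "exp_smoothing", "holt_winters"}


def validate_config(algorithm: str) -> None:
    if algorithm not in VALID_ALGORITHMS:
        raise ValueError(f"invalid algorithm code: {algorithm!r}")


# Soft validation (master data): technically valid values can still be
# operationally wrong, so data-quality rules warn instead of rejecting.
def check_master_data(safety_stock: float, avg_weekly_demand: float) -> list[str]:
    warnings = []
    if safety_stock < 0:
        warnings.append("safety stock cannot be negative")
    elif safety_stock > 10 * avg_weekly_demand:
        warnings.append("safety stock exceeds 10 weeks of demand; please review")
    return warnings
```

The asymmetry is the point: if a parameter only ever needs the warning-style checks, that’s a strong hint it belongs on the master-data side.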
The auditability aspect is crucial. We implemented a governance framework where anything that impacts financial results or compliance requires audit trails. That pushed us to treat more parameters as master data because master data management systems have better audit capabilities than configuration management tools. Parameters like demand smoothing factors directly affect forecast accuracy and inventory investment, so they need full audit history including before/after values, user identity, timestamp, and business justification.