Succession planning analytics report displays duplicate candidates across multiple positions

Our succession planning analytics report is showing duplicate candidate entries when employees are identified as successors for multiple positions. In ADP 2023.2, when I run the Leadership Succession Dashboard, candidates who are tagged for 2-3 different leadership roles appear multiple times with different readiness scores for each position.

The data sync mapping seems to be creating separate records for each position-candidate pairing rather than consolidating them into a single view. I’ve tried using the built-in deduplication tool, but it’s not recognizing these as duplicates because the readiness scores differ by position.

Here’s what I’m seeing in the report output:

Employee: John Smith | Position: VP Sales | Readiness: 85%
Employee: John Smith | Position: VP Marketing | Readiness: 72%
Employee: John Smith | Position: COO | Readiness: 68%

For leadership review purposes, we need a consolidated view that shows each candidate once with all of their potential succession paths. The readiness scores also vary more across these entries than I’d expect, which makes me question whether the validation is being applied consistently. How do others handle multi-position succession candidates in analytics reporting?
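In the meantime, I’ve been collapsing the export by hand. Here’s a rough pandas sketch of the workaround; the file name and column headers are my assumptions based on the sample rows above, so adjust them to match the actual export:

# Rough workaround: collapse the exported candidate-position pairs into
# one row per candidate. File name and column headers are assumptions.
import pandas as pd

df = pd.read_csv("succession_export.csv")  # Employee, Position, Readiness

# The report shows readiness as "85%"; normalize it to a number first.
df["Readiness"] = df["Readiness"].astype(str).str.rstrip("%").astype(float)

# List the highest-readiness path first, then one row per employee.
df = df.sort_values("Readiness", ascending=False)
df["Path"] = df["Position"] + " (" + df["Readiness"].astype(int).astype(str) + "%)"

consolidated = (
    df.groupby("Employee")["Path"]
      .agg("; ".join)
      .reset_index(name="Succession Paths")
)
print(consolidated.to_string(index=False))

That at least gets leadership one line per candidate with every path listed, but I’d rather have this in the report itself.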

The readiness score variance is normal when candidates are assessed for different role types. VP Sales requires different competencies than COO, so the scores will naturally differ. However, you should verify that the same assessment framework is being applied consistently. Check your succession planning configuration to ensure all positions are using the same competency model and weighting system. Inconsistent readiness scores often indicate that different assessors are using different evaluation criteria. Run a competency alignment report to validate this.
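If you want a quick triage outside of ADP before running the alignment report, export the raw candidate-position pairs and compare the score distribution per position. A position whose scores run systematically high or low relative to the rest often points to a different assessor or weighting model. Rough sketch only; the file name and columns are assumed from the sample output in the original post:

# Compare readiness distributions per position to spot assessor drift.
# File name and column headers are assumptions based on the sample rows.
import pandas as pd

df = pd.read_csv("succession_export.csv")  # Employee, Position, Readiness
df["Readiness"] = df["Readiness"].astype(str).str.rstrip("%").astype(float)

# A position with a much higher or lower mean than its peers is worth
# reviewing against the competency model and weighting configuration.
by_position = df.groupby("Position")["Readiness"].agg(["count", "mean", "std"])
print(by_position.sort_values("mean", ascending=False))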

This is actually expected behavior in ADP’s succession planning data model. The system creates a separate relationship record for each candidate-position pair because the readiness assessment is position-specific. What you’re looking for is a pivot view or aggregated report format.
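If you can export the pair-level data, a pivot gets you that consolidated view directly. Minimal sketch, assuming a CSV export with the three columns from your sample output (the file name is a placeholder):

# Pivot the candidate-position pairs into a matrix: one row per candidate,
# one readiness column per position. File name and columns are assumptions.
import pandas as pd

df = pd.read_csv("succession_export.csv")  # Employee, Position, Readiness
df["Readiness"] = df["Readiness"].astype(str).str.rstrip("%").astype(float)

matrix = df.pivot_table(index="Employee", columns="Position",
                        values="Readiness", aggfunc="first")
print(matrix.fillna("-"))

Each candidate then appears once, with a blank cell for any position they’re not a successor for, which is usually the layout leadership review decks want.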

The matrix layout is a good suggestion. I’m still concerned about the readiness score validation, though: when I look at John Smith’s scores across the three positions, the variance seems too high. Is there a way to validate that the readiness assessments are being calculated consistently?
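For now I’m eyeballing it with a quick spread check on the export. Sketch below; the 15-point threshold is my own rule of thumb, not an ADP setting, and the file name and columns are the same assumptions as before:

# Flag candidates whose readiness spread across positions looks too wide.
# Threshold is a personal rule of thumb, not an ADP configuration value.
import pandas as pd

df = pd.read_csv("succession_export.csv")  # Employee, Position, Readiness
df["Readiness"] = df["Readiness"].astype(str).str.rstrip("%").astype(float)

spread = df.groupby("Employee")["Readiness"].agg(["min", "max"])
spread["range"] = spread["max"] - spread["min"]
print(spread[spread["range"] > 15])  # John Smith: 85 - 68 = 17, flagged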