Performance review scores not calculating correctly after go-live, causing inaccurate merit recommendations

After our recent go-live, we’re seeing incorrect performance review scores. The overall rating should be a weighted average of multiple competencies, but the results don’t match our expected calculations. Some employees are showing ratings of 3.7 when manual calculation gives us 4.1. I’ve reviewed the calculated field logic and the formula appears correct, but something is clearly off. The weighted average should multiply each competency score by its weight and sum the results, but the rounding also seems inconsistent.

<calculated_field>
  <formula>(comp1 * 0.3) + (comp2 * 0.3) + (comp3 * 0.4)</formula>
  <data_type>decimal(3,2)</data_type>
</calculated_field>

This is impacting merit increase calculations and employees are questioning their ratings. Has anyone dealt with calculated field issues in performance reviews?

The data type definition might be your issue. Decimal(3,2) means 3 total digits with 2 after the decimal, so your max value is 9.99. If any intermediate calculations exceed this, you’ll get truncation. Also check if the competency scores themselves are being stored as the correct data type. I’ve seen cases where scores stored as integers get converted incorrectly in formulas.
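To see why decimal(3,2) matters, here’s a quick sketch in plain Python (not Workday syntax) using the standard `decimal` module to emulate a decimal(3,2) column, where the precision of 3 means any value needing more than three total digits fails:

```python
from decimal import Decimal, InvalidOperation, localcontext

def store_decimal_3_2(value):
    """Emulate a decimal(3,2) column: 3 total digits, 2 after the point (max 9.99)."""
    with localcontext() as ctx:
        ctx.prec = 3  # total significant digits allowed
        return Decimal(value).quantize(Decimal("0.01"))

print(store_decimal_3_2("4.4"))   # 4.40 -- fits within 3 digits
try:
    store_decimal_3_2("10.25")    # would need 4 digits -> cannot be stored
except InvalidOperation:
    print("overflow: 10.25 does not fit decimal(3,2)")
```

A real database or Workday field may truncate or error differently, but the constraint is the same: any intermediate value at or above 10 can’t be represented.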

The discrepancy between 3.7 and 4.1 is significant and suggests more than just rounding. Verify that your formula is executing at the right point in the business process. If it’s calculating before all competency scores are finalized, you’ll get incorrect results. Also check if there are multiple versions of the calculated field and whether the correct version is active in your review template.

I’ve debugged this exact issue multiple times. You need to systematically address all three areas:

Calculated Field Logic Review: Your formula structure is correct, but there are likely hidden issues. First, verify the actual field references. Navigate to the calculated field definition and use the “View Field References” option to confirm comp1, comp2, and comp3 are pointing to the correct competency rating fields. After go-live, sometimes field IDs change if data was migrated or templates were rebuilt.

<calculated_field name="Overall_Performance_Rating">
  <formula>ROUND(((Competency_Leadership * 0.3) + (Competency_Technical * 0.3) + (Competency_Results * 0.4)), 2)</formula>
  <data_type>decimal(4,2)</data_type>
  <null_handling>treat_as_zero</null_handling>
</calculated_field>

Note I changed data_type to decimal(4,2) to accommodate ratings up to 99.99 and added explicit null handling.
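Be careful with treat_as_zero, though. A quick sketch (plain Python, not Workday syntax) of what that null handling does to the weighted average:

```python
def overall_rating(leadership, technical, results):
    """Weighted average with nulls treated as zero, mirroring treat_as_zero."""
    scores = [0.0 if s is None else s for s in (leadership, technical, results)]
    weights = [0.3, 0.3, 0.4]
    return round(sum(s * w for s, w in zip(scores, weights)), 2)

print(overall_rating(4.0, 4.0, 5.0))   # 4.4
print(overall_rating(4.0, 4.0, None))  # 2.4 -- a null competency drags the rating down
```

Depending on your policy, you may instead want the calculation to block until all competencies are populated, or to re-normalize the weights over the populated scores.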

Weighted Average Formula Validation: The key issue is usually order of operations and data type conversions. Workday processes formulas left to right unless you use explicit parentheses. Your weights (0.3, 0.3, 0.4) need to be treated as decimals, not integers. Verify this by checking the weight values in a test scenario:

  • If comp1=4, comp2=4, comp3=5: Expected result = (4×0.3)+(4×0.3)+(5×0.4) = 1.2+1.2+2.0 = 4.4
  • If you’re getting 3.7 instead, the competency scores are probably being truncated before multiplication, or the weights aren’t being applied as decimals (weights truncated to integers would zero out the rating entirely, so a partial discrepancy points at the scores)

Test by creating a simple calculated field that just displays comp1, comp2, and comp3 without any math. This will show you if the source data is correct.
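The expected math for the test scenario above can be sanity-checked outside Workday with a few lines of Python:

```python
def weighted_average(scores, weights):
    """Weighted average of competency scores, rounded once at the end."""
    return round(sum(s * w for s, w in zip(scores, weights)), 2)

# The worked example: comp1=4, comp2=4, comp3=5 with weights 0.3/0.3/0.4
print(weighted_average([4, 4, 5], [0.3, 0.3, 0.4]))  # 4.4

# If the weights were silently truncated to integers, the result collapses:
print(weighted_average([4, 4, 5], [0, 0, 0]))        # 0
```

If your tenant returns something other than 4.4 for these inputs, the source fields or data types are the problem, not the formula structure.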

Rounding and Data Type Checks: This is where your 3.7 vs 4.1 discrepancy likely originates. Check these specific configurations:

  1. Intermediate Rounding: If Workday is rounding each multiplication before adding them, you’ll get different results. Force the order with nested parentheses.

  2. Data Type Mismatches: If competency scores are stored as integers (1,2,3,4,5) but your formula expects decimals, the multiplication might truncate. Cast them explicitly: DECIMAL(comp1) * 0.3

  3. Rounding Method: Workday supports multiple rounding methods (standard, up, down, truncate). Check your calculated field configuration and ensure it’s set to standard mathematical rounding (ROUND function with 2 decimal places).

  4. Display vs Stored Values: The review form might be displaying rounded values while the calculated field uses the full precision stored values. This can create apparent discrepancies.
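Check #1 is easy to underestimate, so here’s a sketch using Python’s `decimal` module showing how rounding each weighted term before summing gives a different answer than rounding once at the end (the scores and the one-decimal intermediate precision are illustrative assumptions):

```python
from decimal import Decimal, ROUND_HALF_UP

scores  = [Decimal("3.5"), Decimal("3.5"), Decimal("4.5")]
weights = [Decimal("0.3"), Decimal("0.3"), Decimal("0.4")]

# Wrong: round each weighted term to one decimal place before summing.
per_term = sum((s * w).quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)
               for s, w in zip(scores, weights))

# Right: sum at full precision, round once at the end.
final = sum(s * w for s, w in zip(scores, weights)).quantize(
    Decimal("0.01"), rounding=ROUND_HALF_UP)

print(per_term)  # 4.0
print(final)     # 3.90
```

A 0.1 difference from a three-term formula like this is small, but chained or repeated intermediate rounding across more competencies can compound into the kind of gap you’re seeing.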

Validation Steps: Create a test performance review with known values:

  • Set comp1=4.0, comp2=4.0, comp3=5.0
  • Expected result: 4.4
  • If you get anything else, the formula or data types are wrong
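If you want to check several known-value scenarios at once, a small harness like this (plain Python, mirroring the formula, with made-up test cases) gives you the expected answers to compare against the tenant:

```python
# Each case: (comp1, comp2, comp3) -> expected overall rating at weights 0.3/0.3/0.4
cases = [
    ((4.0, 4.0, 5.0), 4.4),
    ((3.0, 3.0, 3.0), 3.0),
    ((5.0, 5.0, 5.0), 5.0),
]
for (c1, c2, c3), expected in cases:
    got = round(c1 * 0.3 + c2 * 0.3 + c3 * 0.4, 2)
    flag = "OK" if got == expected else f"MISMATCH: got {got}"
    print(f"comp=({c1}, {c2}, {c3}) expected={expected} -> {flag}")
```

Run the same inputs through a test review in your tenant and compare line by line; any row that diverges tells you exactly which input combination triggers the bug.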

Enable debug logging in your tenant (if available) and run the calculation manually through the business process. The logs will show you the exact values being used in each calculation step.

Finally, check if there are any post-calculation business process steps that might be modifying the rating. Some implementations have calibration or normalization rules that adjust ratings after the initial calculation, which could explain the discrepancies.

Check your rounding configuration in the calculated field setup. Workday has specific rounding rules that can be set at the field level. Also verify that all competency scores are actually populated before the calculation runs. If any competency is null or zero, it could throw off your weighted average. You might need to add null-handling logic to your formula to treat missing values appropriately.

Have you checked the review template configuration itself? Sometimes the issue isn’t in the calculated field but in how the template is structured. Make sure the competency sections are properly mapped to the fields your formula references. Also verify that managers are completing all required sections before submitting - partial submissions can cause calculation issues if the formula doesn’t handle incomplete data properly.