AI-generated performance comments in Redwood vs classic UI: feature comparison

We’re evaluating migration from classic UI to Redwood for our performance management cycle in ohcm-23c. The AI-generated comment feature is critical for our managers who oversee 15-20 direct reports each.

I’m curious how the AI Comment Engine’s hierarchical prompt processing differs between the two interfaces. In classic UI, we’ve configured profile options to customize comment tone and length, but I’m unclear whether Redwood offers the same level of customization.

Also concerned about data volume handling - we have 8,500 employees going through reviews simultaneously. Has anyone compared the two UIs in terms of performance when processing large batches of AI-generated feedback? Looking for real-world experiences before committing to the migration timeline.

One thing to watch: the hierarchical prompt processing in Redwood requires specific profile option configurations that aren’t obvious. Navigate to My Client Groups > Show More > AI Settings. There’s a hidden option called ‘Enable Contextual Prompt Layering’ that needs to be enabled at the enterprise level. Without it, Redwood falls back to the classic linear processing model and you lose the performance benefits. Also, the AI engine in Redwood pulls from a broader data set including competency models and succession plans, which can make comments more comprehensive but also slower if your data model isn’t optimized.

The customization capabilities are the biggest differentiator. In classic UI, you could modify AI prompt templates through profile options like ‘HCM_AI_COMMENT_TONE’ and ‘HCM_AI_COMMENT_LENGTH’. Redwood uses a visual configuration interface that’s more user-friendly but less flexible. You can set organizational tone preferences (formal, collaborative, developmental) and comment length (brief, standard, detailed), but you can’t inject custom prompt text the way you could in classic. For organizations with specific industry terminology or compliance requirements, this can be limiting.
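To make that difference concrete, here’s a minimal sketch contrasting classic’s free-form template injection with Redwood’s preset-only choices. This is purely illustrative, not Oracle’s implementation: the template text and the CUSTOM_PROMPT_SUFFIX option are hypothetical; only the two profile option names above come from the thread.

```python
# Illustrative only: mimics the customization gap described above.
TONE_PRESETS = {"formal", "collaborative", "developmental"}
LENGTH_PRESETS = {"brief", "standard", "detailed"}

def classic_prompt(template: str, options: dict) -> str:
    """Classic-style: a free-form template, so custom text can be injected."""
    return template.format(
        tone=options.get("HCM_AI_COMMENT_TONE", "formal"),
        length=options.get("HCM_AI_COMMENT_LENGTH", "standard"),
        extra=options.get("CUSTOM_PROMPT_SUFFIX", ""),  # hypothetical option
    )

def redwood_prompt(tone: str, length: str) -> str:
    """Redwood-style: preset choices only, no free-form injection."""
    if tone not in TONE_PRESETS or length not in LENGTH_PRESETS:
        raise ValueError("Redwood accepts only the preset tone/length values")
    return f"Write a {length}, {tone} performance comment."
```

The point of the sketch: anything outside the two preset sets is rejected in the Redwood-style path, which is exactly where industry-specific or compliance wording would have gone in classic.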

We completed this migration in Q1. Redwood’s AI engine actually uses a different prompt hierarchy that’s more sophisticated. The classic UI processes prompts linearly, while Redwood uses contextual layering - it analyzes the employee’s goals, past performance, and peer feedback simultaneously. Profile options work differently though - some classic configurations don’t translate directly. You’ll need to reconfigure using the Redwood-specific settings under AI Configuration workspace.
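As a rough mental model of the linear-vs-layered difference described above (purely illustrative; the data sources and field names are hypothetical, not Oracle’s schema):

```python
# Sketch of the two processing models: classic folds sources in one at a
# time, Redwood assembles one combined context considered together.
SOURCES = ("goals", "past_performance", "peer_feedback")

def linear_context(employee: dict) -> list:
    """Classic-style: sources processed one after another, in fixed order."""
    context = []
    for source in SOURCES:
        context.append(f"{source}: {employee.get(source, 'n/a')}")
    return context

def layered_context(employee: dict) -> dict:
    """Redwood-style: all sources (plus competency and succession data)
    merged into a single context analyzed simultaneously."""
    return {
        source: employee.get(source, "n/a")
        for source in SOURCES + ("competencies", "succession_plan")
    }
```

The practical consequence mentioned earlier in the thread follows from this: the layered model touches more data per employee, so an unoptimized data model slows every comment, not just some of them.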

From a technical perspective, Redwood’s AI Comment Engine makes REST API calls to Oracle’s cloud AI service, while classic UI processes comments locally within the HCM instance. This makes Redwood more sensitive to network latency but gives it access to more powerful AI models. For data volume handling with 8,500 employees, make sure your API rate limits are configured appropriately - the default of 100 requests per minute will bottleneck during peak processing.
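To put that limit in perspective: at one request per employee, 8,500 requests at 100 per minute take at least 85 minutes, so client-side throttling is worth planning for. Here’s a minimal sliding-window throttle sketch; the endpoint shown in the comment is hypothetical, not the actual Redwood API.

```python
import time
from collections import deque

class RateLimiter:
    """Sliding-window limiter: at most max_calls per period seconds."""

    def __init__(self, max_calls: int = 100, period: float = 60.0):
        self.max_calls = max_calls
        self.period = period
        self.calls = deque()  # timestamps of recent calls

    def acquire(self) -> None:
        """Block until another call is allowed, then record it."""
        while True:
            now = time.monotonic()
            # Discard timestamps that have left the window.
            while self.calls and now - self.calls[0] >= self.period:
                self.calls.popleft()
            if len(self.calls) < self.max_calls:
                self.calls.append(now)
                return
            # Sleep until the oldest call ages out of the window.
            time.sleep(self.period - (now - self.calls[0]))

limiter = RateLimiter(max_calls=100, period=60.0)

def generate_comment(employee_id: str) -> None:
    limiter.acquire()
    # Hypothetical call; the real endpoint and payload will differ:
    # requests.post(f"{base_url}/ai/comments", json={"employeeId": employee_id})
```

If Oracle raises your rate limit, only the `max_calls` argument changes; the batch-processing code around it stays the same.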