Custom OEE dashboard implementation for plant floor teams

I wanted to share our recent implementation of custom OEE dashboards in FactoryTalk MES 11.0 for our three production lines. Our plant floor teams were struggling with the standard OEE reports because they showed too much aggregated data and didn’t provide the real-time visibility supervisors needed during shifts.

We built role-specific dashboards that display current shift OEE with drill-down capability to see availability, performance, and quality losses in real-time. Each production line now has a large display showing their current OEE percentage, trending graphs for the past 4 hours, and top 3 loss categories. Supervisors can tap any metric to drill down into specific downtime events or quality defects.

The implementation took about 3 weeks including requirements gathering, dashboard design, data integration testing, and operator training. The impact has been significant: our average OEE improved from 67% to 74% in the first two months, just from increased visibility and faster response to losses.

Let me provide the complete implementation details since this seems helpful for others considering similar dashboards.

Custom KPI Configuration Approach:

We started by interviewing our production supervisors to understand what decisions they make during a shift and what data they need to make those decisions quickly. This led to three dashboard tiers:

Tier 1 - At-a-Glance View (always visible):

  • Current shift OEE percentage (large font, color-coded)
  • OEE trend line for past 4 hours
  • Production count vs target for current shift
  • Top 3 loss categories with time/count
  • Line status indicator (Running/Stopped/Changeover)

Tier 2 - Drill-Down Detail (tap to expand):

  • Availability: List of all downtime events with start time, duration, reason code
  • Performance: Cycle time distribution chart, speed loss events
  • Quality: Defect count by type, scrap rate trend
  • Resource utilization: Operator efficiency, material consumption rate

Tier 3 - Historical Analysis (separate view):

  • Shift comparison (current vs previous 5 shifts)
  • Day-over-day trending
  • Week-to-date performance summary

The key configuration decision was to calculate OEE using actual production time (shift duration minus scheduled breaks) rather than theoretical available time. This made the metric more actionable for supervisors since they’re measured on controllable losses.
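For anyone wiring this up themselves, here is a minimal sketch of that calculation. The shift numbers and variable names below are purely illustrative (not FT MES field names); the key point is that availability is measured against shift time minus scheduled breaks, not theoretical calendar time.

```python
def oee(shift_minutes, break_minutes, downtime_minutes,
        ideal_cycle_time_s, total_count, good_count):
    """OEE = availability x performance x quality, with availability
    based on actual production time (shift minus scheduled breaks)."""
    planned_time = shift_minutes - break_minutes      # controllable base
    run_time = planned_time - downtime_minutes
    availability = run_time / planned_time
    # performance: ideal time for the parts made vs. actual run time
    performance = (ideal_cycle_time_s * total_count) / (run_time * 60)
    quality = good_count / total_count
    return availability * performance * quality

# Hypothetical shift: 480 min shift, 30 min breaks, 45 min downtime,
# 12 s ideal cycle, 1800 parts made, 1755 good -> OEE = 0.78
print(round(oee(480, 30, 45, 12, 1800, 1755), 3))
```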

Drill-Down Filter Implementation:

Yes Michelle, we implemented comprehensive filtering. Each drill-down view supports:

  • Time filters: Current shift, last 4 hours, last 8 hours, custom range
  • Product filters: Current product, all products this shift, by product family
  • Resource filters: By production line, by work center, by operator team
  • Loss category filters: Availability only, performance only, quality only, all losses

The filters are implemented using FT MES 11.0’s dashboard parameter feature. We created a filter panel that appears when supervisors tap the filter icon. Selected filters persist for that user session so supervisors don’t have to reselect them every time they switch views.

Technical implementation: We use cascading parameters where selecting a time range automatically updates the available product and resource options to only show what’s relevant for that time period. This prevents confusion from empty results.
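The cascading behavior can be sketched as follows. This is a simplified illustration with made-up event records and field names, not the FT MES parameter API: given the selected time window, it derives only the product and line options that actually have data, so the other dropdowns never lead to empty results.

```python
def cascade_options(events, start, end):
    """Return the product/line filter options valid for a time window."""
    in_range = [e for e in events if start <= e["ts"] < end]
    return {
        "products": sorted({e["product"] for e in in_range}),
        "lines": sorted({e["line"] for e in in_range}),
    }

# Hypothetical event log (timestamps in minutes into the shift)
events = [
    {"ts": 10, "product": "A-100", "line": "Line1"},
    {"ts": 20, "product": "B-200", "line": "Line2"},
    {"ts": 95, "product": "C-300", "line": "Line1"},
]
# For the first hour, only A-100/B-200 and Line1/Line2 are offered
print(cascade_options(events, 0, 60))
```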

Real-Time Data Integration Architecture:

Our hybrid data approach works like this:

  1. Real-time PLC data (OPC UA subscription, 1-second updates):

    • Machine state (running/stopped)
    • Current cycle time
    • Part counter
    • Current product ID
  2. MES database data (REST API query, 30-second refresh):

    • Downtime events with reason codes
    • Quality defects logged
    • Operator assignments
    • Shift schedule and targets
  3. Calculated metrics (dashboard logic, updates on data change):

    • OEE percentage (availability × performance × quality)
    • Trend calculations (4-hour moving average)
    • Loss categorization and ranking
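The trend calculation in item 3 is a plain moving average over a fixed window. A minimal sketch, assuming the dashboard samples OEE once per minute (so a 4-hour window is 240 samples; the tiny window below is only for illustration):

```python
from collections import deque

class MovingAverage:
    def __init__(self, window=240):
        self.samples = deque(maxlen=window)   # oldest samples drop off

    def add(self, value):
        self.samples.append(value)
        return sum(self.samples) / len(self.samples)

ma = MovingAverage(window=3)                  # tiny window for illustration
print([round(ma.add(v), 2) for v in [0.60, 0.70, 0.80, 0.90]])
# [0.6, 0.65, 0.7, 0.8]
```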

We built a middleware service that sits between the dashboard and data sources. This service:

  • Subscribes to OPC UA tags for real-time PLC data
  • Polls MES database every 30 seconds for event data
  • Caches recent data to reduce database load
  • Calculates derived metrics (OEE, loss rankings, trends)
  • Exposes a WebSocket endpoint that pushes updates to dashboards

The dashboards connect to this WebSocket endpoint and receive push notifications whenever data changes. This eliminates the need for dashboards to poll constantly, reducing network traffic and improving responsiveness.
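The core of that middleware is a change-notification hub: data sources write into a shared state store, and connected dashboards are notified only when a value actually changes. The sketch below shows just that hub logic with plain callbacks standing in for WebSocket connections; the real transport layers (OPC UA subscriptions in, WebSocket pushes out) are omitted, and the class and field names are ours, not an FT MES API.

```python
class DashboardHub:
    def __init__(self):
        self.state = {}
        self.subscribers = []          # callbacks standing in for sockets

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def update(self, key, value):
        if self.state.get(key) == value:
            return                     # unchanged -> nothing pushed
        self.state[key] = value
        for cb in self.subscribers:
            cb({key: value})           # push only the changed data point

received = []
hub = DashboardHub()
hub.subscribe(received.append)
hub.update("oee", 0.74)
hub.update("oee", 0.74)                # duplicate value, suppressed
hub.update("line_state", "stopped")
print(received)                        # [{'oee': 0.74}, {'line_state': 'stopped'}]
```

The same "push only deltas" check is what makes the selective-update optimization described below in the performance section cheap to implement.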

Performance Optimization:

To handle the latency issues Raj mentioned, we implemented several optimizations:

  1. Materialized views: Created database views that pre-calculate common OEE components. These views refresh every 5 minutes via scheduled job.

  2. Aggregate tables: Store hourly OEE summaries in separate tables for faster historical queries. The Tier 3 historical analysis pulls from these aggregate tables instead of raw event data.

  3. Client-side caching: Dashboards cache historical data (Tier 3 views) locally and only refresh when time range changes.

  4. Selective updates: Only data points that changed are pushed via WebSocket. If OEE percentage hasn’t changed but a new downtime event occurred, only the downtime event list updates.

  5. Progressive loading: When supervisors drill down to detail views, we show cached summary data immediately, then load full details in background.
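The aggregate-table idea in optimization 2 is simple to sketch: raw events are rolled up into one summary row per hour, so Tier 3 historical views never scan raw event data. The example below computes an hourly quality ratio from hypothetical (timestamp, good, total) counter samples; in production this lives in a scheduled database job, not dashboard code.

```python
from collections import defaultdict

def hourly_rollup(events):
    """events: iterable of (epoch_seconds, good_count, total_count)."""
    buckets = defaultdict(lambda: [0, 0])
    for ts, good, total in events:
        hour = ts - ts % 3600          # truncate timestamp to the hour
        buckets[hour][0] += good
        buckets[hour][1] += total
    # one quality ratio per hour, ordered by hour
    return {h: good / total for h, (good, total) in sorted(buckets.items())}

events = [(0, 95, 100), (1800, 48, 50), (3600, 90, 100)]
# hour 0 quality: (95+48)/(100+50); hour 3600 quality: 90/100
print(hourly_rollup(events))
```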

Dashboard Deployment:

We deployed the dashboards three ways to maximize accessibility:

  1. Large displays: 55" touchscreen monitors mounted at each production line (primary use case)
  2. Supervisor tablets: iPad app for supervisors walking the floor
  3. Web browser: Accessible from any computer for management review

All three interfaces connect to the same middleware service and show identical data. The only difference is layout optimization for screen size.

Training and Adoption:

We ran 2-hour training sessions for each shift over one week. The training focused on:

  • How to interpret the OEE percentage and trend
  • How to drill down to find root causes of losses
  • How to use filters to analyze specific products or time periods
  • How to export data for shift handover reports

We also created quick reference cards mounted next to each display with common tasks (“How to see downtime reasons”, “How to compare to previous shift”, etc.).

Results and Impact:

After two months:

  • Average OEE improved from 67% to 74% (10.4% relative improvement)
  • Mean time to respond to downtime events decreased from 8.5 minutes to 3.2 minutes
  • Unplanned downtime reduced by 18% due to faster response
  • Scrap rate decreased from 2.8% to 2.1% due to faster quality issue detection
  • Supervisor satisfaction score increased from 6.2/10 to 8.7/10

The most significant benefit was cultural. Operators and supervisors now have objective, real-time data to guide their decisions. Morning production meetings shifted from discussing what might have happened yesterday to reviewing actual data and planning improvements based on clear loss patterns.

Lessons Learned:

  1. Start simple: Our initial design had 15+ metrics. We scaled back to 8-10 based on supervisor feedback. Less is more for at-a-glance visibility.

  2. Involve users early: We showed mockups to supervisors after week 1 and incorporated their feedback. This prevented major redesign later.

  3. Plan for scalability: We started with 3 lines but designed the architecture to support 20+ lines. The middleware service and dashboard template make adding new lines straightforward.

  4. Test with real data: We ran parallel testing for 2 weeks where supervisors used both old reports and new dashboards. This identified data accuracy issues before go-live.

  5. Provide export capability: Supervisors wanted to export dashboard data for shift handover reports. We added an “Export to Excel” button that generates a formatted report with current metrics and key events.
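The export idea from lesson 5 can be sketched with the standard library. Our actual button produces formatted Excel; the CSV version below is shown only to keep the example dependency-free, and every metric and event value in it is made up.

```python
import csv, io

def export_handover(metrics, events):
    """Dump current metrics and key events to CSV text for handover notes."""
    buf = io.StringIO()
    w = csv.writer(buf)
    w.writerow(["metric", "value"])
    for name, value in metrics.items():
        w.writerow([name, value])
    w.writerow([])                     # blank row between sections
    w.writerow(["event", "duration_min"])
    for name, duration in events:
        w.writerow([name, duration])
    return buf.getvalue()

report = export_handover({"oee": "74%", "scrap_rate": "2.1%"},
                         [("jam at filler", 12), ("changeover", 25)])
print(report)
```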

Happy to answer more specific questions about implementation details or share our dashboard template if anyone wants to try something similar.

What about the drill-down filters? Can supervisors filter by shift, product, or operator? We want to implement something similar but need flexibility for supervisors to analyze different dimensions of performance.

How did you handle the real-time data integration? Are you pulling directly from the production equipment or using the standard FT MES data collection points? We’ve had latency issues with our dashboards when trying to update every few seconds.

This sounds great, Laura. What specific KPIs did you include beyond the standard OEE breakdown? We’re looking to do something similar but want to make sure we’re tracking the right metrics for our supervisors to act on.

Good question. Beyond the standard OEE components we added: current cycle time vs target, scrap count for the current shift, unplanned downtime events with duration, and changeover time tracking. We also included a simple red/yellow/green status indicator for each line that supervisors can see from across the floor. The key was keeping it simple: no more than 8-10 data points visible at once to avoid information overload.

We use the FT MES real-time data API with a 30-second refresh interval. For critical metrics like current cycle time, we pull directly from PLC tags via OPC UA to get true real-time updates. The dashboard subscribes to OPC UA data change notifications so it updates immediately when production state changes (running to stopped, etc.). This hybrid approach gives us real-time responsiveness for critical data without overloading the MES database with constant queries.