Slow SQL query performance after integrating external data source with ThingWorx mashups

We integrated an external SQL Server database with ThingWorx 9.7 to display historical production data in mashup dashboards. Queries are extremely slow (15-30 seconds), making dashboards unusable. The database contains 5 million+ records of production metrics.

We’re using a Database Thing connected to SQL Server and executing direct SQL queries from mashup data services. No pagination is implemented - queries return all matching records which can be thousands of rows.

SELECT * FROM ProductionMetrics
WHERE timestamp >= '2025-01-01'
AND line_id = 'LINE-001'

How do we optimize SQL query performance for large datasets in ThingWorx? We need guidance on implementing result pagination and improving overall mashup dashboard performance when displaying data from external databases.

Consider implementing a materialized view or summary table in SQL Server for dashboard data. Pre-aggregate metrics by hour or day instead of querying raw records. For a 6-month date range, you’d query 4,380 hourly records instead of millions of raw records. Refresh the summary table nightly with a SQL job.
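To make the 4,380 figure concrete, here is the row-count arithmetic as a small sketch (names and the helper itself are purely illustrative):

```javascript
// Sketch: how many rows a dashboard query reads from a pre-aggregated
// summary table for a given date range - versus millions of raw records.
function summaryRowCount(startMs, endMs, bucketMs) {
    return Math.ceil((endMs - startMs) / bucketMs);
}

var HOUR = 3600 * 1000;
// Half a year (182.5 days) at hourly granularity is 182.5 * 24 = 4380 rows,
// regardless of how many raw records fall inside each hour.
```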

Comprehensive solution for optimizing SQL query performance in ThingWorx mashups:

SQL Query Optimization: Your current query has multiple performance issues. Optimize with these techniques:

  1. Select specific columns instead of SELECT *:
SELECT timestamp, line_id, metric_name, metric_value, unit
FROM ProductionMetrics
WHERE timestamp >= @startDate AND timestamp < @endDate
AND line_id = @lineId
ORDER BY timestamp DESC
  2. Create a composite index in SQL Server:
CREATE INDEX IX_ProductionMetrics_LineTime
ON ProductionMetrics(line_id, timestamp)
INCLUDE (metric_name, metric_value, unit)

This covering index allows SQL Server to satisfy the query entirely from the index without accessing the base table.

  3. Use parameterized queries in ThingWorx to prevent SQL injection and enable query plan caching. Define service parameters for startDate, endDate, and lineId and bind them to SQL parameters.

  4. Set a query timeout in the Database Thing configuration - 10 seconds is a reasonable maximum. A query that exceeds it almost certainly needs optimization.

  5. For aggregated data (daily/hourly summaries), create an indexed view - SQL Server’s equivalent of a materialized view. Indexed views require WITH SCHEMABINDING, two-part table names, and COUNT_BIG(*), and they disallow AVG and MAX, so persist the sum and row count and derive the average at query time:

CREATE VIEW vw_ProductionMetrics_Daily
WITH SCHEMABINDING AS
SELECT CAST(timestamp AS DATE) AS [date], line_id,
  SUM(metric_value) AS total_value,
  COUNT_BIG(*) AS record_count
FROM dbo.ProductionMetrics
GROUP BY CAST(timestamp AS DATE), line_id
GO
CREATE UNIQUE CLUSTERED INDEX IX_vw_ProductionMetrics_Daily
ON vw_ProductionMetrics_Daily([date], line_id)

Query this view for dashboard summary widgets - reading a few hundred pre-aggregated rows is orders of magnitude faster than scanning millions of raw records.
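Because a SQL Server indexed view cannot persist AVG (or MAX), a common pattern is to store SUM and COUNT_BIG(*) in the view and derive the average in the ThingWorx service after the query. A minimal sketch, assuming each row carries total_value and record_count columns:

```javascript
// Sketch: derive per-day averages from rows returned by a summary view
// that stores SUM(metric_value) and COUNT_BIG(*). Row/column names are
// assumptions matching the view definition, not a fixed ThingWorx API.
function withAverages(rows) {
    return rows.map(function (row) {
        return {
            date: row.date,
            line_id: row.line_id,
            avg_value: row.total_value / row.record_count
        };
    });
}
```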

Result Pagination: Implement server-side pagination to limit data transferred to mashups. Create a service with pagination parameters:

// Pseudocode - Paginated query service:
1. Accept parameters: page (default 1), pageSize (default 50), filters, sortColumn
2. Calculate offset: (page - 1) * pageSize
3. Validate sortColumn against a whitelist of column names (ORDER BY cannot
   take a SQL parameter, so interpolating it unchecked is an injection risk),
   then build SQL with OFFSET/FETCH:
   "...ORDER BY {sortColumn} OFFSET @offset ROWS FETCH NEXT @pageSize ROWS ONLY"
4. Execute query and return InfoTable with results
5. Also return totalCount (separate COUNT query) for pagination controls
// Returns: {data: InfoTable, totalCount: number, currentPage: number}
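The pagination arithmetic from the pseudocode can be sketched as runnable code. The InfoTable handling and service wiring are ThingWorx-specific and omitted here; the default page size and clamp bound of 500 are illustrative choices:

```javascript
// Sketch: turn (page, pageSize) into OFFSET/FETCH parameters.
// Pages are 1-based; inputs are clamped so a bad value can never
// produce a negative offset or an unbounded page size.
function buildPageParams(page, pageSize) {
    var p = Math.max(1, Math.floor(page) || 1);                  // default page 1
    var size = Math.min(500, Math.max(1, Math.floor(pageSize) || 50)); // default 50, cap 500
    return { offset: (p - 1) * size, pageSize: size };
}

// The computed values bind to the parameterized SQL:
//   ... ORDER BY timestamp DESC
//   OFFSET @offset ROWS FETCH NEXT @pageSize ROWS ONLY
```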

In SQL Server, use OFFSET/FETCH for pagination:

SELECT timestamp, line_id, metric_value
FROM ProductionMetrics
WHERE line_id = @lineId
ORDER BY timestamp DESC
OFFSET @offset ROWS FETCH NEXT @pageSize ROWS ONLY

Configure ThingWorx Grid widget for server-side pagination:

  • Enable “Server-Side Pagination” property
  • Bind grid’s PageNumber and PageSize to service parameters
  • Grid automatically requests new pages when user navigates

Implement efficient total count query. Don’t use COUNT(*) on the full table for every request - it’s slow. Instead:

  • Cache count for 5 minutes if filters haven’t changed
  • Use approximate counts for very large result sets (SQL Server sys.partitions)
  • Display “1000+ results” instead of exact count for large datasets
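The count-caching idea above can be sketched as a small wrapper. The 5-minute TTL and filter-key scheme are illustrative; in ThingWorx the cache could live in a Thing property or DataTable rather than a closure:

```javascript
// Sketch: cache exact counts per filter key so the slow COUNT query runs
// at most once per TTL window. countFn executes the real COUNT query.
function makeCachedCounter(countFn, ttlMs) {
    var cache = {}; // filterKey -> { value, expires }
    return function (filterKey, nowMs) {
        var hit = cache[filterKey];
        if (hit && hit.expires > nowMs) {
            return hit.value; // fresh cache hit - skip the COUNT query
        }
        var value = countFn(filterKey);
        cache[filterKey] = { value: value, expires: nowMs + ttlMs };
        return value;
    };
}
```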

Mashup Dashboard Performance: Optimize overall dashboard performance with these strategies:

  1. Implement dashboard-level caching. Add a “Refresh” button instead of auto-refreshing every few seconds. Cache query results in a Stream or DataTable for 5-10 minutes.

  2. Use mashup parameters to filter data before loading. Don’t load all production lines - let users select specific line first, then load data.

  3. Implement lazy loading for dashboard tabs. Only load data for the active tab, not all tabs simultaneously.

  4. Optimize widget refresh intervals. Don’t refresh all widgets every 5 seconds. Stagger refreshes: critical widgets every 10 seconds, others every 30-60 seconds.

  5. Execute long-running queries asynchronously - in ThingWorx, mark the wrapper service as asynchronous. Display a loading indicator while the query runs in the background, then update the widget when results arrive.

  6. Implement data aggregation in ThingWorx services rather than mashups. Pre-process data server-side and return only summary data to widgets. For example, return hourly averages instead of raw minute-by-minute data.

  7. For charts displaying time-series data, implement data downsampling. For a 6-month view, display daily data points (180 points) instead of hourly (4,380 points). Let users zoom in for higher resolution.

  8. Monitor query performance using SQL Server Extended Events or Query Store. Identify slow queries and optimize them proactively.

  9. Consider using ThingWorx Analytics or a separate OLAP database for complex analytical queries rather than querying operational database directly.
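The downsampling in point 7 can be sketched as bucketed averaging. Each bucket keeps its first timestamp; fancier algorithms (e.g. largest-triangle-three-buckets) preserve visual shape better, but the principle is the same:

```javascript
// Sketch: reduce a time-series to at most maxPoints by averaging
// consecutive fixed-size buckets. points: [{ t, value }], sorted by t.
function downsample(points, maxPoints) {
    if (points.length <= maxPoints) return points.slice(); // nothing to do
    var bucketSize = Math.ceil(points.length / maxPoints);
    var out = [];
    for (var i = 0; i < points.length; i += bucketSize) {
        var bucket = points.slice(i, i + bucketSize);
        var sum = 0;
        for (var j = 0; j < bucket.length; j++) sum += bucket[j].value;
        out.push({ t: bucket[0].t, value: sum / bucket.length });
    }
    return out;
}
```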

With proper query optimization, pagination, and dashboard performance tuning, query times like these can typically drop from 15-30 seconds to a couple of seconds, and dashboard load times from 30+ seconds to under 5 seconds - responsive enough for real-time decision making.

Use the Grid widget’s built-in pagination features. Configure the grid to use server-side pagination mode. Your data service should accept ‘page’ and ‘pageSize’ parameters. The grid automatically calls your service with appropriate parameters when users navigate pages. Also implement sorting and filtering server-side - don’t fetch all data and filter client-side.

Returning thousands of rows to a mashup is killing performance. Implement server-side pagination using SQL OFFSET/FETCH. Modify your service to accept page number and page size parameters, then use OFFSET ((page - 1) * pageSize) ROWS FETCH NEXT pageSize ROWS ONLY (with 1-based page numbers). Return only 50-100 rows per request and let users page through results rather than loading everything at once.

SELECT * is your first problem. Never select all columns when you only need a few. Specify exact columns needed for your dashboard. Also, make sure timestamp and line_id columns are indexed in SQL Server. Without indexes, you’re doing full table scans on 5 million rows.