Expense management report export fails in Odoo cloud with timeout error for large datasets

We’re running Odoo 14 Enterprise in a cloud deployment and experiencing critical failures when trying to export expense management reports. The system works fine for small date ranges (1-2 weeks), but when we try to export monthly or quarterly expense reports, the export process times out after about 90 seconds.

We need to generate comprehensive expense reports for approximately 180 employees with an average of 25-30 expense claims per employee per month. The export fails with a generic timeout error and no PDF or Excel file is generated. This is blocking our monthly financial close process and audit requirements.

The error message we see:


Report generation failed: Request timeout
Operation exceeded maximum execution time (90 seconds)
Report: Monthly Expense Summary
Date Range: 2025-05-01 to 2025-05-31
Records: 4,847 expense lines

The same report parameters worked fine in our previous version (Odoo 13 on-premise) and would generate in about 45 seconds. We’ve tried breaking it into smaller date ranges but that defeats the purpose of comprehensive monthly reporting. Is there a way to increase the timeout limit for report generation in cloud deployments?

I’ve solved this exact issue for multiple clients using Odoo 14 cloud. The problem is that the default expense report template loads all data synchronously and tries to render everything in one go. For large datasets, you need to implement paginated report generation or switch to asynchronous processing.

Here’s the solution that works reliably:

First, create a custom report action that uses Odoo’s queue_job module (if available in your cloud plan) or implement a scheduled action approach:

from odoo import models, fields
from odoo.exceptions import UserError
import base64

class ExpenseReportLarge(models.TransientModel):
    _name = 'expense.report.large.wizard'
    _description = 'Large Expense Report Generator'

    date_from = fields.Date(required=True)
    date_to = fields.Date(required=True)
    employee_ids = fields.Many2many('hr.employee')
    report_format = fields.Selection([('pdf', 'PDF'), ('xlsx', 'Excel')], default='xlsx')

    def generate_report_async(self):
        # Create a pending report job
        report_job = self.env['expense.report.job'].create({
            'date_from': self.date_from,
            'date_to': self.date_to,
            'employee_ids': [(6, 0, self.employee_ids.ids)],
            'format': self.report_format,
            'state': 'pending',
            'user_id': self.env.user.id,
        })

        # Schedule the report generation
        report_job.with_delay()._generate_report()

        return {
            'type': 'ir.actions.client',
            'tag': 'display_notification',
            'params': {
                'message': 'Report generation started. You will receive an email when complete.',
                'type': 'info',
                'sticky': False,
            }
        }

class ExpenseReportJob(models.Model):
    _name = 'expense.report.job'
    _description = 'Expense Report Generation Job'

    date_from = fields.Date()
    date_to = fields.Date()
    employee_ids = fields.Many2many('hr.employee')
    format = fields.Selection([('pdf', 'PDF'), ('xlsx', 'Excel')])
    state = fields.Selection([('pending', 'Pending'), ('processing', 'Processing'),
                             ('done', 'Done'), ('failed', 'Failed')], default='pending')
    report_file = fields.Binary('Report File')
    report_filename = fields.Char()
    user_id = fields.Many2one('res.users')

    def _generate_report(self):
        self.state = 'processing'
        try:
            # Build the search domain; if no employees were selected,
            # include all of them rather than matching nothing
            domain = [
                ('date', '>=', self.date_from),
                ('date', '<=', self.date_to),
            ]
            if self.employee_ids:
                domain.append(('employee_id', 'in', self.employee_ids.ids))
            expense_lines = self.env['hr.expense'].search(domain)

            # Generate report in chunks
            report_data = self._process_expense_data_batched(expense_lines)

            if self.format == 'xlsx':
                report_file = self._generate_excel_report(report_data)
                filename = f'expense_report_{self.date_from}_{self.date_to}.xlsx'
            else:
                report_file = self._generate_pdf_report(report_data)
                filename = f'expense_report_{self.date_from}_{self.date_to}.pdf'

            self.write({
                'report_file': base64.b64encode(report_file),
                'report_filename': filename,
                'state': 'done',
            })

            # Send email notification
            self._send_completion_email()

        except Exception as e:
            # Raising rolls back the transaction, so persist the failed
            # state before re-raising or it will be lost
            self.write({'state': 'failed'})
            self.env.cr.commit()
            raise UserError(f'Report generation failed: {e}')

    def _process_expense_data_batched(self, expense_lines, batch_size=500):
        # Process expense lines in batches to manage memory
        result = []
        for i in range(0, len(expense_lines), batch_size):
            batch = expense_lines[i:i+batch_size]
            # Use read() with specific fields to minimize memory usage
            batch_data = batch.read(['employee_id', 'date', 'name',
                                    'total_amount', 'state', 'payment_mode'])
            result.extend(batch_data)
        return result
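The `_generate_excel_report`, `_generate_pdf_report`, and `_send_completion_email` helpers are referenced above but not shown. Here is a minimal sketch of the Excel helper using `xlsxwriter` (which Odoo bundles), written as a standalone function you would adapt into the model method. The column list and dict keys mirror the fields read in `_process_expense_data_batched`; writing to an in-memory buffer produces the bytes the Binary field expects:

```python
import io

import xlsxwriter


def generate_excel_report(rows):
    """Render expense rows (list of dicts from read()) to xlsx bytes."""
    buffer = io.BytesIO()
    workbook = xlsxwriter.Workbook(buffer, {'in_memory': True})
    sheet = workbook.add_worksheet('Expenses')
    headers = ['Employee', 'Date', 'Description', 'Amount', 'State']
    for col, title in enumerate(headers):
        sheet.write(0, col, title)
    for row_idx, rec in enumerate(rows, start=1):
        # read() returns Many2one fields as (id, display_name) tuples
        employee = rec['employee_id'][1] if rec.get('employee_id') else ''
        sheet.write(row_idx, 0, employee)
        sheet.write(row_idx, 1, str(rec.get('date', '')))
        sheet.write(row_idx, 2, rec.get('name', ''))
        sheet.write(row_idx, 3, rec.get('total_amount', 0.0))
        sheet.write(row_idx, 4, rec.get('state', ''))
    workbook.close()
    return buffer.getvalue()
```

In the model, `self._generate_excel_report(report_data)` would just delegate to this with the batched rows. The PDF helper would typically call an existing QWeb report action instead of rendering by hand.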

This approach processes the report asynchronously in the background, sends an email when complete, and handles large datasets by processing in batches. For your 4,847 records, this should complete in 2-3 minutes without hitting timeout limits. You’ll need to add the queue_job dependency to your cloud instance or use scheduled actions if queue_job isn’t available.
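If queue_job isn’t available on your plan, the scheduled-action fallback is an `ir.cron` record that periodically picks up pending jobs and processes them outside the HTTP request cycle. A sketch of the data file (the record id and interval are illustrative, and it assumes the `expense.report.job` model from above):

```
<record id="ir_cron_expense_report_jobs" model="ir.cron">
    <field name="name">Process Pending Expense Report Jobs</field>
    <field name="model_id" ref="model_expense_report_job"/>
    <field name="state">code</field>
    <field name="code">model.search([('state', '=', 'pending')], limit=1)._generate_report()</field>
    <field name="interval_number">2</field>
    <field name="interval_type">minutes</field>
    <field name="numbercall">-1</field>
</record>
```

With this fallback, drop the `with_delay()` call from the wizard; creating the job record in the `pending` state is enough, and the cron picks it up on its next run. The `limit=1` keeps each cron run short so it stays within the worker’s own time budget.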

Your cloud provider might have different timeout settings for different types of operations. While the default HTTP request timeout is 90 seconds, there might be a separate configuration for report generation jobs. Contact your cloud provider’s support and ask about ‘long-running report job’ configuration. Some providers allow you to mark specific operations as long-running, which puts them in a different processing queue with extended timeouts.

Have you tried exporting to Excel instead of PDF? Excel exports are generally faster because they don’t require the rendering overhead that PDF generation involves. Also, look at your report template - if it includes complex calculations, subtotals, or formatted layouts with images/logos on every page, that significantly increases processing time. Consider creating a simplified report template for large data exports that focuses on raw data rather than formatting.

The issue is likely in how the expense report queries are constructed. If the report is doing multiple database queries per expense line (like fetching related employee data, department info, approval status, etc. for each line individually), you’ll have thousands of queries running. This is the N+1 query problem. The report needs to be optimized to use batch queries with proper JOINs instead of iterating through records. Check the report’s Python code and look for loops that call read() or browse() on individual records.
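As a framework-agnostic illustration of the difference (plain sqlite3, not Odoo code), compare issuing one query per expense line against a single JOIN; the same principle applies to calling read() or browse() inside a loop versus reading the whole recordset at once:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE expense (id INTEGER PRIMARY KEY, employee_id INTEGER, amount REAL);
    INSERT INTO employee VALUES (1, 'Alice'), (2, 'Bob');
    INSERT INTO expense VALUES (1, 1, 50.0), (2, 1, 20.0), (3, 2, 75.0);
""")

# N+1 pattern: one query for the expenses, then one more query per line
expenses = conn.execute("SELECT id, employee_id, amount FROM expense").fetchall()
n_plus_one = []
for exp_id, emp_id, amount in expenses:
    # each iteration is a separate round trip to the database
    (name,) = conn.execute(
        "SELECT name FROM employee WHERE id = ?", (emp_id,)).fetchone()
    n_plus_one.append((name, amount))

# Batched pattern: a single JOIN fetches everything in one round trip
joined = conn.execute("""
    SELECT employee.name, expense.amount
    FROM expense JOIN employee ON employee.id = expense.employee_id
    ORDER BY expense.id
""").fetchall()

assert n_plus_one == joined  # same data, 1 query instead of N+1
```

With 4,847 lines the N+1 pattern means roughly 4,848 round trips, which alone can account for most of a 90-second budget.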

This is a common issue with large data exports in cloud environments. The problem isn’t just the timeout - it’s also memory consumption. When generating PDF reports with thousands of records, Odoo loads all the data into memory at once, formats it, and then renders the PDF. With 4,847 expense lines, you’re probably hitting memory limits before you hit the timeout. Check if your cloud plan has options for temporary resource bursting during report generation, or consider upgrading to a plan with more memory allocation.
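One way to keep memory flat regardless of record count is to consume batches from a generator and write each batch out before fetching the next, instead of accumulating one big list the way `_process_expense_data_batched` above does. A framework-neutral sketch of just the chunking logic (the `write_batch` callback stands in for appending rows to the xlsx/pdf writer):

```python
def iter_batches(records, batch_size=500):
    """Yield successive slices so only one batch is resident at a time."""
    for start in range(0, len(records), batch_size):
        yield records[start:start + batch_size]


def process_streaming(records, write_batch, batch_size=500):
    """Consume batches one at a time: peak memory is one batch, not the
    whole dataset. Returns the number of records written."""
    total = 0
    for batch in iter_batches(records, batch_size):
        write_batch(batch)
        total += len(batch)
    return total
```

In the Odoo version you would pass the recordset (recordsets support slicing) and have `write_batch` call `read()` on each slice, so no more than `batch_size` rows of field data are materialized at once.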

This asynchronous approach is exactly what we needed! We implemented the custom report generator with batch processing and it’s working perfectly. Reports that were timing out now complete in about 2-3 minutes and we receive an email with the download link. The batched data processing also reduced memory usage significantly. Thank you backend_specialist_27 for the comprehensive solution!