Sharing our implementation of a custom connector in Mendix Integration Hub that automated invoice synchronization with our legacy ERP system. We were processing 500+ invoices monthly through manual CSV exports and imports, which was error-prone and time-consuming.
The challenge was that our ERP (a 20-year-old custom system) only supports flat-file exports with a proprietary format - no modern APIs available. We needed bidirectional sync: pull invoice data from ERP for approval workflows in Mendix, then push approved invoices back with updated status codes.
Built a custom connector that handles the flat-file mapping, includes automated error logging for failed transformations, and runs on a scheduled basis. Reduced our manual processing time by 85% and virtually eliminated data entry errors. Happy to share technical details if anyone’s working on similar legacy integrations.
What about the reverse sync - pushing approved invoices back to the ERP? How do you generate the flat-file format that the legacy system expects? And how do you handle scenarios where the ERP rejects the import?
Yes, error handling was crucial. We created an IntegrationLog entity that captures every step: file received, parsing started, record count, transformation status, and any errors with the specific line number and field that failed. Each error includes the raw data that caused the issue, so our team can investigate without going back to the source files. We also set up email notifications for critical failures (like complete file parsing failures) and a dashboard showing success rates and common error patterns. This visibility has been invaluable for continuous improvement.
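To make the logging structure concrete, here is a minimal sketch of what such a record-level log entry might look like. The class and field names are illustrative assumptions; in Mendix this would be a persistable entity in the domain model, not a Java class.

```java
// Illustrative sketch of a record-level integration log entry.
// Field names are assumptions, not the actual Mendix domain model.
public class IntegrationLogEntry {
    public final String fileName;
    public final int lineNumber;       // line in the source flat file
    public final String fieldName;     // field that failed, if any
    public final String rawValue;      // offending raw data, kept for investigation
    public final String expectedFormat;
    public final String errorMessage;

    public IntegrationLogEntry(String fileName, int lineNumber, String fieldName,
                               String rawValue, String expectedFormat, String errorMessage) {
        this.fileName = fileName;
        this.lineNumber = lineNumber;
        this.fieldName = fieldName;
        this.rawValue = rawValue;
        this.expectedFormat = expectedFormat;
        this.errorMessage = errorMessage;
    }

    // A compact one-line summary suitable for an email alert or dashboard row.
    public String summary() {
        return fileName + ":" + lineNumber + " [" + fieldName + "] "
                + errorMessage + " (raw: '" + rawValue + "')";
    }
}
```

Keeping the raw value on the entry is what lets the team investigate without re-opening the source files.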
Interesting approach. How are you handling the error logging? We’ve found that flat-file integrations can fail in so many ways - malformed records, missing fields, data type mismatches. Do you have a centralized error tracking mechanism?
We used a hybrid approach. The flat-file parser is a custom Java action that reads the proprietary format (fixed-width fields with some delimiter variations depending on record type) and converts it to a standardized JSON structure. Then Mendix microflows handle the business logic mapping and validation. The Java action made it much more maintainable since we could unit test the parsing logic separately. The connector configuration in Integration Hub then orchestrates the entire flow - file pickup from SFTP, parsing, transformation, and import into Mendix entities.
Great questions - let me provide the complete technical implementation details:
Custom Connector Development:
We built the connector as a reusable module in Mendix with three main components:
- File Handler (Java Action): Parses the proprietary format using a configuration-driven approach. We externalized the field definitions (position, length, type, required/optional) into a MappingConfig entity, so we can adjust without code changes. The parser validates each field and builds a structured JSON output.
```java
// Simplified parser structure: look up the field layout for this
// record type, then extract and validate each fixed-width field
FieldConfig config = getFieldMapping(recordType);
Map<String, Object> parsed = new HashMap<>();
for (Field field : config.getFields()) {
    String raw = extractField(line, field.startPos, field.length);
    parsed.put(field.name, validateAndTransform(raw, field.dataType));
}
```
- Transformation Microflows: Handle the business logic mapping between the ERP flat-file structure and the Mendix domain model. We use a staging entity pattern - import to InvoiceStaging first, validate business rules, then promote to the Invoice entity only if all validations pass.
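The staging pattern can be sketched in plain Java to show the decision logic. Everything here is illustrative: InvoiceStaging, Invoice, and the specific validation rules are assumptions standing in for Mendix entities and microflow validations.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the staging-entity pattern: records land in
// staging first and are promoted only if every business rule passes.
public class StagingPromotion {
    public record InvoiceStaging(String invoiceNumber, double amount, String accountCode) {}
    public record Invoice(String invoiceNumber, double amount, String accountCode) {}

    // Example business rules (assumed, not the real rule set).
    public static List<String> validate(InvoiceStaging s, List<String> knownInvoiceNumbers) {
        List<String> errors = new ArrayList<>();
        if (knownInvoiceNumbers.contains(s.invoiceNumber()))
            errors.add("duplicate invoice number");
        if (s.amount() <= 0)
            errors.add("amount must be positive");
        if (!s.accountCode().matches("[A-Z]{2}\\d{4}"))
            errors.add("invalid account code");
        return errors;
    }

    // Promote only fully valid staging records; callers log the rest for review.
    public static List<Invoice> promote(List<InvoiceStaging> staged, List<String> known) {
        List<Invoice> promoted = new ArrayList<>();
        for (InvoiceStaging s : staged) {
            if (validate(s, known).isEmpty()) {
                promoted.add(new Invoice(s.invoiceNumber(), s.amount(), s.accountCode()));
            }
        }
        return promoted;
    }
}
```

The benefit of the pattern is that a bad record never touches the real Invoice entity; it stays in staging where it can be corrected and re-promoted.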
- Integration Hub Configuration: Orchestrates the full workflow with error recovery. It uses the SFTP connector to monitor the ERP export folder, triggers the import process, and manages the complete lifecycle.
Legacy ERP Flat-File Mapping:
For the reverse sync (Mendix to ERP), we mirror the approach:
- Export microflow converts Invoice entities to a staging format
- Java action generates the flat-file using the same MappingConfig (ensures consistency)
- File is placed in ERP import folder via SFTP
- We monitor a separate “results” folder where ERP drops success/failure notifications
- Parse those result files and update invoice status in Mendix accordingly
The key insight was making the mapping bidirectional and configuration-driven. When the ERP team changes the format (which happens occasionally), we just update the MappingConfig records rather than rewriting code.
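As a rough illustration of the configuration-driven reverse direction, here is a fixed-width line writer. The FieldSpec layout and the padding conventions (left-aligned space-padded text, right-aligned zero-padded numbers) are assumptions standing in for what the real MappingConfig records would specify.

```java
import java.util.List;

// Hedged sketch of configuration-driven fixed-width line generation
// for the reverse sync (Mendix to ERP). Layout and padding rules are
// illustrative; the real connector reads them from MappingConfig.
public class FlatFileWriter {
    public record FieldSpec(String name, int length, boolean numeric) {}

    // Pad or truncate each value to its configured width.
    public static String formatLine(List<FieldSpec> layout, List<String> values) {
        StringBuilder line = new StringBuilder();
        for (int i = 0; i < layout.size(); i++) {
            FieldSpec spec = layout.get(i);
            String v = values.get(i);
            if (v.length() > spec.length()) {
                v = v.substring(0, spec.length()); // truncate overflow
            }
            if (spec.numeric()) {
                // right-align numbers, zero-pad (a common legacy convention)
                line.append("0".repeat(spec.length() - v.length())).append(v);
            } else {
                // left-align text, space-pad
                line.append(v).append(" ".repeat(spec.length() - v.length()));
            }
        }
        return line.toString();
    }
}
```

Because the same field definitions drive both parsing and generation, a format change in MappingConfig stays consistent in both directions.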
Automated Error Logging:
Our error framework has multiple levels:
- Record-level errors: Captured in IntegrationLog with full context (file name, line number, field name, raw value, expected format, error message). These allow partial success - we process valid records and flag problematic ones for manual review.
- File-level errors: Complete parsing failures, connection issues, missing required files. These trigger immediate email alerts to the integration team.
- Business validation errors: Even if parsing succeeds, business rules might fail (duplicate invoice numbers, invalid account codes, etc.). These are logged separately in a ValidationError entity and displayed in a review queue for finance users.
We also implemented a retry mechanism for transient errors (network timeouts, file locks) with exponential backoff.
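A minimal sketch of such a retry helper, under the assumption that every caught exception is transient (real code would inspect the cause) and with illustrative attempt counts and delays:

```java
import java.util.concurrent.Callable;

// Minimal retry-with-exponential-backoff sketch for transient errors
// (network timeouts, file locks). Attempt count and delays are
// illustrative; real values would live in connector configuration.
public class Retry {
    public static <T> T withBackoff(Callable<T> action, int maxAttempts,
                                    long initialDelayMs) throws Exception {
        long delay = initialDelayMs;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = e; // assumed transient; real code would check the cause
                if (attempt < maxAttempts) {
                    Thread.sleep(delay);
                    delay *= 2; // exponential backoff: double the wait each retry
                }
            }
        }
        throw last; // all attempts exhausted: surface the final failure
    }
}
```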
Scheduling and Concurrency:
We use Mendix scheduled events but added a locking mechanism to prevent overlaps:
- Before each run, check an IntegrationLock entity
- If locked and timestamp is recent (< 2 hours), skip this run
- Otherwise, acquire lock, process, release lock
- We run every 4 hours during business days, once overnight for full reconciliation
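The lock-and-skip logic above can be sketched as follows. In Mendix the lock would be an IntegrationLock entity read and written in a microflow; this in-memory version only shows the decision logic, using the 2-hour staleness window from the post.

```java
import java.time.Duration;
import java.time.Instant;

// Sketch of the overlap guard: skip a run if a lock exists and is
// recent, otherwise (re)acquire it. A stale lock is assumed to be
// left over from a crashed run and is taken over.
public class RunLock {
    private Instant lockedAt; // null = unlocked
    private static final Duration STALE_AFTER = Duration.ofHours(2);

    // Returns true if the caller may run (lock acquired), false to skip.
    public synchronized boolean tryAcquire(Instant now) {
        if (lockedAt != null
                && Duration.between(lockedAt, now).compareTo(STALE_AFTER) < 0) {
            return false; // a recent run is still in progress: skip this run
        }
        lockedAt = now; // free, or stale from a crashed run: take over
        return true;
    }

    public synchronized void release() {
        lockedAt = null;
    }
}
```

The staleness check matters: without it, one crashed run that never releases the lock would block every future sync.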
Results After 6 Months:
- 500+ invoices/month processed automatically (previously manual)
- Error rate dropped from ~15% (manual entry) to <2% (mostly data quality issues from source)
- Processing time: 30 seconds for typical batch vs. 2-3 days manual cycle
- Finance team can focus on exception handling rather than data entry
The custom connector approach gave us the flexibility to handle the legacy system’s quirks while leveraging Integration Hub’s monitoring and orchestration capabilities. Happy to share more specific implementation details if helpful!
Also curious about the scheduling aspect. Are you using Mendix’s built-in scheduled events or something more sophisticated? We need to sync multiple times per day but avoid overlapping runs if a previous sync is still processing.
This sounds exactly like what we need! We’re dealing with a similar legacy system. How did you handle the proprietary flat-file format? Did you build the parser in Java or were you able to do it with Mendix microflows?