Data and integration platform for food manufacturing
Summary
A mid-sized food manufacturing company migrated from an on-premises ERP to Dynamics 365 Business Central in the cloud. After the migration, BC365 data was not accessible for BI and reporting. House of Data designed and manages the complete data and integration infrastructure: a Microsoft Fabric Lakehouse with near-real-time data mirroring, an Azure API Management layer as the central integration hub, and architectural coordination across the six vendors involved.
| Aspect | Before | After |
|---|---|---|
| BC365 data in BI | Not available | Near-real-time (within 10-15 minutes of a change) |
| Integration pattern | FTP/CSV batch, point-to-point | API-first, event-driven via APIM |
| Monitoring & audit trail | None | Full logging via Application Insights |
| Architectural ownership | Spread across 6 vendors | Centralized at House of Data |
Situation
The company, which has approximately 200 employees and multiple production lines, was at a turning point. The migration from an on-premises ERP to Dynamics 365 Business Central (BC365) in the cloud had been completed, but BC365 data was locked away: BI tooling had no access to ERP data, no data pipelines existed, and there was no reporting on current business data.
Meanwhile, the MES system on the production floor exchanged data with BC365 through manual CSV exports and FTP transfers. The planning system retrieved production orders through its own dedicated connection, and external partners delivered data through similar batch mechanisms.
The environment involved six vendors: the ERP vendor managing BC365, a BI partner for reporting, an IT administration partner for infrastructure, a production automation vendor, a programme management firm, and House of Data as the platform architect.
None of these parties had oversight of the whole. Each vendor optimised their own component, and nobody guarded the coherence of the system.
Challenge
The problems were structural, not incidental.
BC365 data not accessible. After the cloud migration, ERP data was not available for BI tooling and reporting. No data pipeline existed to make BC365 data available for analysis. The organization was flying blind on operational data.
Fragile integrations. Connections between systems had grown organically over time. FTP servers served as integration points, and CSV files were dropped and picked up at fixed intervals. There was no error handling, monitoring, or audit trail. When a file failed to arrive, nobody noticed until an end user complained about missing data.
Authentication sprawl. Each system had its own authentication mechanism. BC365 required OAuth2 with token refresh, the MES system supported only API keys, and the planning system used basic authentication. Credentials were scattered across configuration files managed by different vendors.
No architectural ownership. Six vendors operated with their own scopes, timelines, and priorities. The client had no internal IT architecture capability to direct these parties, and decisions about the data layer were made by parties that did not consider it their responsibility.
Approach
House of Data took on the role of platform architect, handling the design, implementation, and ongoing management of the data and integration platform, while coordinating all involved vendors.
Unlocking BC365 data: from Synapse to Fabric Open Mirroring
The first priority was making BC365 data accessible for BI and reporting. At that time, Fabric did not yet support Open Mirroring for BC365. We designed a pipeline using the technology available: bc2adls exported CDC deltas to Azure Data Lake Storage, Synapse Spark consolidated those deltas, Azure Data Factory orchestrated loading, and a SQL Warehouse served as the query layer. Data was refreshed every hour.
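The Spark consolidation step is essentially a fold over CDC deltas: per primary key, the latest operation wins and a delete drops the row. A minimal sketch of that logic in plain Python (the production job ran on Synapse Spark; the field names `key`, `timestamp`, `op`, and `data` are illustrative, not bc2adls's actual schema):

```python
# Sketch of the CDC delta consolidation the Synapse Spark step performed.
# Each delta row carries a primary key, a timestamp, and an operation flag;
# the latest operation per key wins. Field names are illustrative.

def consolidate(deltas):
    """Fold CDC delta rows into current state: last write per key wins,
    deletes remove the key entirely."""
    state = {}
    # Apply rows in timestamp order so later changes overwrite earlier ones.
    for row in sorted(deltas, key=lambda r: r["timestamp"]):
        if row["op"] == "delete":
            state.pop(row["key"], None)
        else:  # insert or update
            state[row["key"]] = row["data"]
    return state

deltas = [
    {"key": "ORD-001", "timestamp": 1, "op": "insert", "data": {"qty": 10}},
    {"key": "ORD-001", "timestamp": 3, "op": "update", "data": {"qty": 12}},
    {"key": "ORD-002", "timestamp": 2, "op": "insert", "data": {"qty": 5}},
    {"key": "ORD-002", "timestamp": 4, "op": "delete", "data": None},
]
print(consolidate(deltas))  # ORD-001 at its updated quantity; ORD-002 gone
```

Open Mirroring applies this kind of change replication inside Fabric itself, which is why the custom consolidation step could later be dropped.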
That solution worked, but it was deliberately pragmatic: five components, each with its own failure modes. When Fabric Open Mirroring became available (bc2adls version 26.32+), we migrated to the simpler architecture. BC365 data is now picked up directly by a Fabric Lakehouse, without Spark consolidation or ADF orchestration.
This required coordination with the ERP vendor for the bc2adls extension upgrade in BC365. While technically straightforward, it was an organizational effort that required central architectural oversight to initiate.
The result is an architecture of two components instead of five. BC365 data is available in the Fabric Lakehouse within 10 to 15 minutes of a change. The SQL Analytics Endpoint is generated automatically, allowing BI tools to connect directly without manual view definitions.
Azure API Management: the integration hub
The second pillar was replacing FTP/CSV integrations with an API-first architecture using Azure API Management (APIM) as the central gateway.
APIM serves as both a translation layer and a security boundary:
- The MES system sends REST requests to APIM using an API key. APIM validates the key, exchanges it for an OAuth2 Bearer token with BC365, and forwards the request. The MES system needs no knowledge of OAuth2 flows.
- The planning system retrieves production orders through the same gateway, with its own rate limits and access rights. Only GET requests are permitted; write operations are blocked at the policy level.
- Webhooks from BC365 are received by APIM and routed to the relevant consumers. Instead of a scheduled batch, the system now works in an event-driven manner, with data pushed at the moment of change.
- External partners access the platform via API keys with strict rate limits and scope restrictions. Every call is logged in Application Insights.
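The translation APIM performs can be sketched as gateway logic: validate the incoming API key, map it to a consumer with its own method policy, and keep a cached OAuth2 token toward BC365. A simplified Python sketch (consumer names, keys, and the policy table are illustrative; the real implementation is APIM policy configuration, not application code):

```python
# Sketch of the gateway decisions APIM makes: key validation, per-consumer
# method policy (e.g. the planning system is GET-only), and OAuth2 token
# caching toward the backend. All names and keys here are illustrative.
import time

API_KEYS = {"mes-key": "mes", "planning-key": "planning"}
ALLOWED_METHODS = {"mes": {"GET", "POST"}, "planning": {"GET"}}

_token_cache = {"value": None, "expires_at": 0.0}

def get_bearer_token(fetch, now=None):
    """Return a cached OAuth2 token, refreshing when expired. `fetch` stands
    in for the client-credentials call to the identity provider and returns
    (token, lifetime_in_seconds)."""
    now = time.time() if now is None else now
    if _token_cache["value"] is None or now >= _token_cache["expires_at"]:
        token, ttl = fetch()
        _token_cache["value"] = token
        _token_cache["expires_at"] = now + ttl - 60  # refresh a minute early
    return _token_cache["value"]

def authorize(api_key, method):
    """Gateway decision: 401 for unknown keys, 405 for blocked methods."""
    consumer = API_KEYS.get(api_key)
    if consumer is None:
        return 401, None
    if method not in ALLOWED_METHODS[consumer]:
        return 405, consumer
    return 200, consumer

print(authorize("planning-key", "POST"))  # write blocked at policy level
print(authorize("mes-key", "POST"))       # allowed for the MES consumer
```

In APIM this flow lives in policy configuration rather than application code, but the decision logic is the same: the caller never sees OAuth2, and the backend never sees an API key.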
The FTP servers have been decommissioned. There are no more CSV files sitting in a directory waiting to be picked up, and no more cron jobs running at fixed times and failing silently.
Multi-vendor coordination
The architectural role encompassed more than technical design. With six vendors, alignment is the difference between a working platform and a collection of loose components.
In practice, this meant:
- Guarding the architectural overview. Every change proposed by a vendor, such as a new connection or infrastructure change, was assessed against the overarching architecture. We acted as the party that sees how all the pieces fit together.
- Writing technical specifications. API contracts, data models, and authentication requirements were defined by House of Data to ensure all parties worked from the same specifications.
- Preventing escalations. When the ERP vendor pushed a schema change with downstream consequences, House of Data flagged it before it could become a production incident.
Results
The platform runs in production and is continuously managed by House of Data.
BC365 data unlocked. For the first time, the organization has access to current ERP data in BI tooling. Operational dashboards have become usable now that data is available within 10 to 15 minutes in the Fabric Lakehouse.
Reliable integrations. Every API call is logged, monitored, and secured. Rate limiting prevents a single system from overwhelming the backend, and the OAuth2 translation layer eliminates credential sprawl. Failures are now visible rather than silent.
Architectural simplicity. A data pipeline of two components. One central integration hub instead of point-to-point spaghetti. Fewer moving parts mean fewer failure points, less maintenance, and faster turnaround on changes.
A single point of contact for the data layer. The client no longer needs six vendors to answer a data question. House of Data serves as the architectural conscience of the platform, ensuring all pieces continue to fit together correctly.
Lessons learned
Build with what is available, simplify when you can. The Synapse route was the right choice when we started: Open Mirroring did not exist yet. But we kept the architecture under review. When the simpler option became available, we migrated. Pragmatism at the start, discipline to simplify when the opportunity arises.
FTP is not an integration strategy. Exchanging CSV files via FTP works until it doesn’t, and when it fails, no one knows. The shift to API-first integrations provided essential visibility and manageability.
Multi-vendor environments require an architect. Six vendors without overarching architectural oversight produce local optimisations that are globally suboptimal. Someone must guard the coherence of the entire infrastructure.
Start with the organization. The bc2adls upgrade was technically trivial but organizationally the bottleneck. Coordinating the other five vendors was the real work.