Azure Virtual Network storage service endpoints cause routing conflicts with ExpressRoute BGP

We’re experiencing routing conflicts after enabling Azure Storage service endpoints on our Virtual Network. Our environment uses ExpressRoute with BGP for hybrid connectivity to on-premises datacenters. After enabling service endpoints for Microsoft.Storage on production subnets, some on-premises applications can no longer access Azure Storage accounts, while Azure-based VMs work fine.

The effective routes show the conflict - ExpressRoute is advertising 0.0.0.0/0 via BGP, which should send storage traffic through our on-premises network security appliances, but the service endpoint injects more specific routes (the address prefixes behind the Storage service tag) that bypass ExpressRoute entirely. This breaks our security model, which requires all internet-bound traffic to pass through on-premises firewalls.


Get-AzEffectiveRouteTable -ResourceGroupName prod-network -NetworkInterfaceName &lt;vm-nic&gt;

Source        : Default
NextHopType   : VirtualNetworkServiceEndpoint
AddressPrefix : {multiple /16 ranges from the Storage.EastUS service tag}

We need storage traffic from Azure VMs to use service endpoints for performance, but on-premises traffic needs to route through ExpressRoute. Private endpoints seem like an alternative, but we have 40+ storage accounts. How do we resolve this routing conflict?

The UDR approach sounds promising but won’t that break the service endpoint optimization for Azure VMs? We enabled service endpoints specifically to improve performance for Azure-based applications accessing storage. If we override with UDRs, doesn’t traffic go through the NVA instead of staying on the Microsoft backbone?

The fundamental issue is that service endpoints are designed for Azure-to-Azure scenarios, not hybrid. For hybrid environments with ExpressRoute and security requirements, private endpoints are the Microsoft-recommended solution despite the complexity. The cost isn’t as bad as you think - private endpoints run roughly $7-8/month each (plus a small per-GB data processing charge), so 40 endpoints is on the order of $300/month, which is usually acceptable for enterprise workloads.

You can automate private endpoint deployment with ARM templates or Terraform to reduce management overhead.
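As a rough sketch of that automation in Az PowerShell (the resource group, VNet, and subnet names below are placeholders, and 'blob' is only one sub-resource - file, queue, and table each need their own endpoint if you use them):

```powershell
# Placeholder names: prod-network, prod-vnet, storage-subnet, prod-storage.
$subnet = Get-AzVirtualNetwork -Name 'prod-vnet' -ResourceGroupName 'prod-network' |
    Select-Object -ExpandProperty Subnets |
    Where-Object Name -eq 'storage-subnet'

$accounts = Get-AzStorageAccount -ResourceGroupName 'prod-storage'

foreach ($sa in $accounts) {
    # One private endpoint per storage account, targeting the blob sub-resource.
    $conn = New-AzPrivateLinkServiceConnection -Name "$($sa.StorageAccountName)-plsc" `
        -PrivateLinkServiceId $sa.Id -GroupId 'blob'

    New-AzPrivateEndpoint -ResourceGroupName 'prod-network' `
        -Name "$($sa.StorageAccountName)-pe" `
        -Location $sa.Location -Subnet $subnet `
        -PrivateLinkServiceConnection $conn
}
```

You’d still want a privatelink.blob.core.windows.net private DNS zone linked to the VNet so the account FQDNs resolve to the private IPs.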

Service endpoints and ExpressRoute don’t mix well when you have forced tunneling (0.0.0.0/0 via BGP). Enabling a service endpoint injects system routes for that service’s address prefixes, and those routes take precedence over your BGP-learned default route for matching traffic. This is by design - service endpoints are meant to optimize Azure-to-Azure traffic by keeping it on the Microsoft backbone.
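What’s actually happening is longest-prefix match: the endpoint routes cover specific Storage ranges, so they’re preferred over the BGP-learned 0.0.0.0/0 for those destinations. A toy Python model of that selection step (the /16 and the next-hop labels are illustrative, not pulled from a real route table):

```python
import ipaddress

# Illustrative effective routes: a forced-tunnel default learned over
# ExpressRoute/BGP, and one stand-in prefix injected by a Storage
# service endpoint (real endpoints inject many such ranges).
routes = [
    ("0.0.0.0/0", "VirtualNetworkGateway"),
    ("20.60.0.0/16", "VirtualNetworkServiceEndpoint"),
]

def effective_next_hop(dest_ip, routes):
    """Pick the matching route with the longest prefix, as Azure
    route selection does before any tie-breaking."""
    dest = ipaddress.ip_address(dest_ip)
    matches = [
        (ipaddress.ip_network(prefix), next_hop)
        for prefix, next_hop in routes
        if dest in ipaddress.ip_network(prefix)
    ]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# A Storage IP hits the more specific endpoint route...
print(effective_next_hop("20.60.1.5", routes))   # VirtualNetworkServiceEndpoint
# ...while everything else still follows the forced tunnel.
print(effective_next_hop("8.8.8.8", routes))     # VirtualNetworkGateway
```

Only traffic leaving the configured subnets ever sees these endpoint routes; on-premises clients never do.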

You either need to exclude storage prefixes from your BGP advertisements or switch to private endpoints.

Excluding storage prefixes from BGP isn’t feasible - we have hundreds of prefixes in the Storage service tag and they change regularly. Private endpoints for 40 storage accounts would be complex and expensive (each endpoint costs money, and we’d need private DNS zones). Is there a way to make service endpoints work with ExpressRoute for hybrid scenarios?

There’s a middle-ground solution: use route tables with user-defined routes to override the service endpoint behavior on specific subnets. Create a UDR with next hop type ‘VirtualAppliance’ pointing to your NVA for the storage prefixes you need to control. User routes take precedence over system routes for the same or a more specific prefix, so a /24 (or tighter) UDR wins over the service endpoint route. This requires careful planning, but it lets you keep service endpoints enabled while controlling specific traffic flows.
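A sketch of one such route in Az PowerShell (the route table name, prefix, and NVA address are placeholders; you’d repeat Add-AzRouteConfig for each prefix you need to steer):

```powershell
# Placeholder names/addresses; the route table must be associated
# with the subnets whose storage traffic you want to inspect.
$rt = Get-AzRouteTable -ResourceGroupName 'prod-network' -Name 'prod-rt'

Add-AzRouteConfig -RouteTable $rt -Name 'storage-via-nva' `
    -AddressPrefix '20.60.1.0/24' `
    -NextHopType 'VirtualAppliance' `
    -NextHopIpAddress '10.10.0.4' |
    Set-AzRouteTable
```

Mind the per-route-table route limit (400 at the time of writing) if you try to cover many Storage ranges this way.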