ExpressRoute private peering at edge site not routing traffic to Azure resources

We’ve configured ExpressRoute private peering at our regional edge site to establish connectivity to Azure VNets, but traffic isn’t routing properly to our cloud resources. The ExpressRoute circuit shows as provisioned and connected, and BGP peering status appears healthy on both the Microsoft side and our edge router. However, when we try to access VMs or private endpoints in Azure from our edge location, connections time out.

Our edge router shows BGP routes being advertised from Microsoft:


show ip bgp summary
Neighbor: 10.255.255.1 (Microsoft)
Prefixes received: 47
State: Established

We have an NVA (Palo Alto firewall) between our edge network and the ExpressRoute gateway. The firewall has routes configured for the Azure address space (10.50.0.0/16) pointing to the ExpressRoute gateway interface. We’ve verified that our on-premises address space (192.168.100.0/24) is properly advertised to Azure through BGP. The connection worked briefly during initial testing but stopped routing after we implemented the NVA for security inspection. What could be blocking the traffic flow?

I’ve seen this exact scenario. The issue is usually asymmetric routing combined with NVA state tracking. Your outbound traffic goes through the NVA to ExpressRoute, but return traffic from Azure might be taking a different path if you have multiple routes or UDRs configured. Check your Azure route tables - do you have a UDR on the Azure subnet forcing traffic back through the NVA? Also verify that the NVA’s connection tracking timeout isn’t too aggressive. ExpressRoute has different MTU requirements too - check if Path MTU Discovery is working or if you need to adjust MSS clamping on the firewall.
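One quick way to confirm which path Azure will use for return traffic is to dump the effective routes on a VM's NIC with the Azure CLI (a sketch - the resource group and NIC names are placeholders for your environment):


az network nic show-effective-route-table \
  --resource-group <rg> --name <vm-nic> --output table

If the edge prefix (192.168.100.0/24) shows a next hop type of VirtualNetworkGateway rather than VirtualAppliance, return traffic is going straight to ExpressRoute and bypassing the NVA.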

For the MTU issue that Mike mentioned - this is critical with ExpressRoute. The default MTU on ExpressRoute circuits is 1500 bytes, but with VXLAN or other encapsulation at edge sites you can end up fragmenting packets. On Palo Alto, configure TCP MSS adjustment:


set deviceconfig setting session tcp-reject-non-syn no
set deviceconfig setting tcp tcp-mss-adjustment 1350

The second command clamps the MSS so full-size segments still fit after encapsulation; the first stops the firewall from rejecting sessions whose initial SYN arrived on a different path, which matters when routing is asymmetric. Together they prevent black-hole routing, where packets with the DF bit set are silently dropped. Also make sure your edge router has a matching MTU on the ExpressRoute-facing interface.
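The 1350-byte value isn't arbitrary: it's the path MTU minus IP/TCP headers and tunnel overhead, with some margin. A quick sketch of the arithmetic (header sizes are standard; the ~50-byte VXLAN figure is the commonly quoted overhead, not something specific to this circuit):

```python
# MSS budget for a given path MTU, optionally with tunnel encapsulation.
# Header sizes: IPv4 = 20 bytes, TCP = 20 bytes; VXLAN outer headers ~= 50 bytes.
IPV4_HEADER = 20
TCP_HEADER = 20
VXLAN_OVERHEAD = 50  # outer Ethernet/IP/UDP/VXLAN headers, typical figure

def max_mss(mtu: int, tunnel_overhead: int = 0) -> int:
    """Largest TCP payload that fits in one packet on a path with this MTU."""
    return mtu - tunnel_overhead - IPV4_HEADER - TCP_HEADER

# Plain 1500-byte path: 1460. With VXLAN overhead: 1410.
# Clamping to 1350 leaves extra headroom for any additional encapsulation.
```

So 1350 sits safely below the 1410-byte ceiling a VXLAN-encapsulated 1500-byte path allows.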

Perfect - glad you got it resolved. Let me summarize the complete solution for ExpressRoute private peering with NVA inspection at edge sites:

ExpressRoute Private Peering Setup: Your ExpressRoute circuit configuration was correct - BGP peering established, routes being exchanged properly. The issue wasn’t with the Microsoft Enterprise Edge (MSEE) or your edge router BGP configuration.

BGP Route Advertisement: Routes were being advertised correctly in both directions. The problem was in the data plane, not the control plane. However, for production deployments, implement these BGP best practices:


router bgp 65001
 neighbor 10.255.255.1 timers 30 90
 neighbor 10.255.255.1 route-map AZURE-IN in
 neighbor 10.255.255.1 prefix-list EDGE-OUT out

This ensures stable peering during brief connectivity issues and provides route filtering for security.
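The route-map and prefix-list referenced in the neighbor statements aren't defined in the snippet. A sketch of what they might contain, using this thread's addressing (the AZURE-PREFIXES name is illustrative - adjust the prefixes to your actual VNet ranges):


ip prefix-list AZURE-PREFIXES seq 10 permit 10.50.0.0/16 le 32
ip prefix-list EDGE-OUT seq 10 permit 192.168.100.0/24
!
route-map AZURE-IN permit 10
 match ip address prefix-list AZURE-PREFIXES

This accepts only the expected Azure VNet prefixes inbound and advertises only the edge /24 outbound, so a misconfiguration on either side can't leak unexpected routes.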

NVA/Firewall Configuration: The core issue was asymmetric routing caused by missing User-Defined Routes (UDRs) in Azure. Here’s the complete fix:

  1. Azure-side UDR: Create a route table on your Azure VNet subnets with routes pointing back through the NVA:

    • Route: 192.168.100.0/24 (edge site) → Next hop: NVA private IP (e.g., 10.50.1.4)
    • Associate this route table with all subnets that need to communicate with the edge site
  2. NVA Security Policy: Configure firewall rules to permit bidirectional traffic:

    • Allow zone: Edge → Azure (10.50.0.0/16)
    • Allow zone: Azure → Edge (192.168.100.0/24)
    • Enable application inspection but don’t block BGP (TCP 179)
  3. TCP MSS Clamping: Critical for preventing MTU black holes:


set deviceconfig setting tcp tcp-mss-adjustment 1350
set deviceconfig setting session tcp-reject-non-syn no
  4. NVA Interface Configuration: Verify these settings in Azure portal:

    • IP forwarding: Enabled on both NICs
    • NSG rules: Permit traffic between edge and Azure address spaces
    • Accelerated networking: Enabled for better throughput
  5. Edge Router MTU: Match MTU settings to prevent fragmentation:


interface GigabitEthernet0/0/1
 mtu 1500
 ip tcp adjust-mss 1350
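
The Azure-side UDR from step 1 can also be scripted. A sketch with the Azure CLI (the route-table name and resource placeholders are illustrative):


az network route-table create -g <rg> -n edge-return-rt
az network route-table route create -g <rg> --route-table-name edge-return-rt \
  -n to-edge-site --address-prefix 192.168.100.0/24 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.50.1.4
az network vnet subnet update -g <rg> --vnet-name <vnet> -n <subnet> \
  --route-table edge-return-rt

Repeat the subnet association for every subnet that talks to the edge site, or the first flow from an unassociated subnet will return asymmetrically.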

Validation Commands:

On edge router:


show ip bgp neighbors 10.255.255.1 advertised-routes
show ip route 10.50.0.0
ping 10.50.1.10 source 192.168.100.1 size 1400 df-bit
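If you'd rather script the DF-bit probe from a Linux host behind the edge, here's a sketch using a UDP datagram with Path MTU Discovery forced on. The socket options are Linux-specific; the numeric fallbacks are the Linux constant values in case the Python build doesn't expose them:

```python
import socket

# Linux-only constants; fall back to the known numeric values if absent.
IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)
IP_PMTUDISC_DO = getattr(socket, "IP_PMTUDISC_DO", 2)

def probe_df(host: str, size: int, port: int = 33434) -> bool:
    """Send a UDP datagram of `size` bytes with DF set.

    Returns True if the datagram fit the local notion of the path MTU,
    False if the send failed with EMSGSIZE (packet too big, DF set).
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
        s.sendto(b"\x00" * size, (host, port))
        return True
    except OSError:  # EMSGSIZE when the datagram exceeds the path MTU
        return False
    finally:
        s.close()
```

Like the router ping, this only reflects the locally cached path MTU; run it a couple of times so an ICMP Fragmentation Needed reply from the path has a chance to update the cache.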

On Azure NVA (via serial console or SSH):


tcpdump -i eth0 host 192.168.100.1
netstat -rn | grep 192.168.100

The key lesson: When inserting an NVA into the ExpressRoute path, you must configure symmetric routing on both sides. Azure’s default routing will try to send return traffic directly to ExpressRoute, bypassing your NVA and breaking stateful inspection. Always implement UDRs to force traffic through the security appliance in both directions.

For production resilience, consider deploying NVAs in active-passive HA configuration with Azure Load Balancer, and use BGP route preferences to control failover behavior. Monitor ExpressRoute metrics in Azure Monitor for circuit utilization, BGP availability, and packet drops.

The MTU adjustment helped with some application connectivity issues we were seeing even after fixing the routing. For anyone else hitting this, I also had to adjust the BGP timers to be more forgiving since our edge site occasionally has brief connectivity hiccups. Changed keepalive to 30 seconds and hold time to 90 seconds instead of the defaults.

First thing to check - verify that your NVA isn’t blocking the BGP keepalives or the actual data plane traffic. Palo Alto firewalls need specific security policies to allow ExpressRoute traffic. Check if you have rules permitting TCP port 179 for BGP and the appropriate application-level inspection settings. Also, confirm that IP forwarding is enabled on the NVA network interfaces in Azure. Without that, the NVA won’t pass traffic between subnets even if routing tables are correct.
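For reference, those policies might look something like this in PAN-OS set-command form. This is an approximation - the zone and rule names are placeholders, and the exact syntax varies by PAN-OS version, so treat it as a sketch rather than paste-ready config:


set rulebase security rules Edge-to-Azure from edge-zone to azure-zone source 192.168.100.0/24 destination 10.50.0.0/16 application any service any action allow
set rulebase security rules Azure-to-Edge from azure-zone to edge-zone source 10.50.0.0/16 destination 192.168.100.0/24 application any service any action allow
set rulebase security rules Allow-BGP from azure-zone to edge-zone application bgp service application-default action allow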