VNet peering traffic blocked by Network Security Group rules despite allow-all configuration

I have two VNets peered successfully (VNet-A in East US and VNet-B in West US), but resources can’t communicate across the peering even though the NSG rules look correct to me. The peering shows “Connected” status on both sides, but applications in VNet-A time out when trying to reach services in VNet-B.

I’ve verified the NSG rules allow traffic between the address spaces:


Priority: 100
Source: 10.1.0.0/16 (VNet-A)
Destination: 10.2.0.0/16 (VNet-B)
Action: Allow

Both VNets have “Allow forwarded traffic” and “Allow gateway transit” configured. Route tables are attached to the subnets, but I haven’t added custom routes yet. Is there something about NSG rule priority or the VNet peering configuration that could be blocking this?

Use Network Watcher’s IP flow verify tool to troubleshoot this. It will tell you exactly which NSG rule or route is blocking the traffic. Go to Network Watcher > IP flow verify, enter your source VM and destination IP, and it’ll show you whether traffic is allowed or denied and which rule is responsible.
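If you prefer the CLI, the same check can be run with `az network watcher test-ip-flow`. The resource group, VM name, and the IP:port pairs below are placeholders picked from the address spaces in the question:

```shell
# Check whether an outbound TCP flow from a VM in VNet-A to a service in
# VNet-B is allowed, and which NSG rule makes that decision.
az network watcher test-ip-flow \
  --resource-group <rg-name> \
  --vm <source-vm-name> \
  --direction Outbound \
  --protocol TCP \
  --local 10.1.0.4:60000 \
  --remote 10.2.0.4:443
```

The output includes `access` (Allow/Deny) and, when denied, the name of the rule responsible.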

Good point. I checked and we have NSGs at the subnet level only. VNet-B’s NSG does have an inbound allow rule for 10.1.0.0/16. But I’m wondering about the priority - could there be a deny rule with higher priority (lower number) that’s taking precedence?

First thing to check: are your NSG rules applied at the subnet level, NIC level, or both? If you have NSGs at both levels, both need to allow the traffic. Also verify you have the allow rule on BOTH the source and destination NSGs: the rule you showed needs to exist as an outbound rule in VNet-A’s NSG, and VNet-B’s NSG needs a corresponding inbound rule. NSG evaluation is stateful, so return traffic is allowed automatically, but the initial flow must be permitted outbound at the source and inbound at the destination.
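One way to see the combined subnet-level and NIC-level evaluation in one place is the effective security rules view. The NIC and resource group names below are placeholders:

```shell
# Show the aggregate NSG rules actually applied to a NIC,
# merging any subnet-level and NIC-level NSGs.
az network nic list-effective-nsg \
  --name <nic-name> \
  --resource-group <rg-name>
```

This is the same data the portal shows under the NIC’s “Effective security rules” blade.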

I had this exact issue last month. The problem was a custom route table overriding the default VNet peering routes. Peering does create system routes automatically, but Azure selects routes by longest prefix match, and when prefixes are equal a user-defined route wins over a system route. So a bare 0.0.0.0/0 route to a firewall or NVA won’t beat the more specific peering route, but a UDR whose prefix covers the peered VNet’s address space as specifically (or more specifically) than the peering route will send that traffic to the appliance instead.

Check your route tables for any routes that overlap with the peered VNet address space. If you see routes pointing to virtual appliances, make sure those appliances are actually forwarding the traffic correctly, or remove/narrow the overlapping routes so the peering system route applies again.
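To spot overlapping routes quickly, you can list the UDRs in the attached route table. The route table and resource group names are placeholders:

```shell
# List user-defined routes and their next hops; look for prefixes that
# cover the peered VNet's address space (10.2.0.0/16 in this thread).
az network route-table route list \
  --route-table-name <route-table-name> \
  --resource-group <rg-name> \
  --query "[].{Name:name, Prefix:addressPrefix, NextHop:nextHopType, NextHopIp:nextHopIpAddress}" \
  -o table
```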

Also worth checking: Service Endpoints. If either VNet has service endpoints enabled for certain Azure services, it can affect routing behavior in unexpected ways, especially if you’re trying to reach PaaS resources across the peering.

Absolutely, priority matters. NSG rules are processed in priority order (lowest number first), and once a rule matches, processing stops. Check for any deny rules with priority lower than 100. Also look for the default rules - there’s a DenyAllInbound at priority 65500. If no explicit allow rule matches before that, traffic gets blocked.
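To see where the default rules (including DenyAllInbound at 65500) sit relative to your custom rules, you can list both together sorted by priority. NSG and resource group names are placeholders:

```shell
# List custom and default rules together, sorted by priority, to spot
# any deny rule that fires before the priority-100 allow rule.
az network nsg rule list \
  --nsg-name <nsg-name> \
  --resource-group <rg-name> \
  --include-default \
  --query "sort_by([].{Priority:priority, Name:name, Direction:direction, Access:access}, &Priority)" \
  -o table
```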

Another thing: verify the VNet peering configuration has “Use remote gateways” disabled if you don’t actually have VPN gateways. Misconfigured gateway settings can cause routing issues even though the peering shows connected. And check that “Allow forwarded traffic” is enabled on BOTH sides of the peering, not just one.
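A quick way to confirm those flags on each side of the peering (peering, VNet, and resource group names are placeholders):

```shell
# Inspect the peering flags; run once per side (A->B and B->A).
az network vnet peering show \
  --name <peering-name> \
  --vnet-name <vnet-name> \
  --resource-group <rg-name> \
  --query "{State:peeringState, AllowForwarded:allowForwardedTraffic, GatewayTransit:allowGatewayTransit, UseRemoteGw:useRemoteGateways}"
```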

Based on the symptoms you’re describing, here’s a systematic approach to resolve this:

1. NSG Rule Priority Verification

List all NSG rules in priority order to identify conflicts:


az network nsg rule list --nsg-name <nsg-name> \
  --resource-group <rg-name> \
  --query "sort_by([].{Priority:priority, Name:name, Access:access}, &Priority)" \
  -o table

Check for deny rules with priority < 100 that might be blocking traffic before your allow rule executes. Remember that NSG evaluation stops at the first matching rule.

2. VNet Peering Configuration Review

Verify both sides of the peering have correct settings:

  • “Allow forwarded traffic” must be enabled on BOTH VNets
  • “Use remote gateways” should be disabled unless you have actual VPN/ExpressRoute gateways
  • “Allow gateway transit” should match your architecture needs

Incorrect gateway settings are a common cause of peering connectivity issues even when status shows “Connected”.
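The flags above can be reviewed for all peerings on a VNet in one pass (VNet and resource group names are placeholders):

```shell
# Review peering state and flags for every peering on the VNet;
# repeat for the VNet on the other side.
az network vnet peering list \
  --vnet-name <vnet-name> \
  --resource-group <rg-name> \
  --query "[].{Peer:name, State:peeringState, AllowForwarded:allowForwardedTraffic, UseRemoteGw:useRemoteGateways}" \
  -o table
```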

3. Route Table Override Analysis

This is likely your issue. Check the effective routes on the source VM’s NIC:


az network nic show-effective-route-table \
  --name <nic-name> --resource-group <rg-name>

Look for:

  • User-defined routes whose prefix covers the peered VNet address space as or more specifically than the peering route; these take precedence and typically point to an NVA or firewall (a bare 0.0.0.0/0 route does not override the more specific peering route)
  • Routes for the peered VNet address space pointing to unexpected next hops
  • Missing VNet peering system routes (traffic to the peered CIDR should show NextHopType: VNetPeering)

If a UDR is intercepting the peered traffic, note that you can’t fix it by adding a route with a peering next hop; user-defined routes don’t support VNetPeering as a next hop type. Instead, delete or narrow the overriding route so the peering system route applies again, or ensure your NVA is configured to forward inter-VNet traffic (with “Allow forwarded traffic” enabled on both sides of the peering).
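For example, removing or re-scoping an overriding UDR lets the peering system route take effect again. Route table and route names are placeholders:

```shell
# Delete the UDR that shadows the peered VNet's address space...
az network route-table route delete \
  --route-table-name <route-table-name> \
  --name <route-name> \
  --resource-group <rg-name>

# ...or re-scope it so it no longer covers 10.2.0.0/16.
az network route-table route update \
  --route-table-name <route-table-name> \
  --name <route-name> \
  --resource-group <rg-name> \
  --address-prefix <narrower-prefix>
```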

4. Testing and Validation

Use Network Watcher’s Connection Monitor or IP flow verify:

  • IP Flow Verify: Tests specific source/destination pairs and shows which rule allows/denies
  • Connection Troubleshoot: Performs end-to-end connectivity testing
  • NSG Diagnostics: Shows exactly which NSG rule matched for a specific flow
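Connection Troubleshoot is also available from the CLI as `az network watcher test-connectivity`. Resource names, destination IP, and port below are placeholders, and the source VM needs the Network Watcher agent extension installed:

```shell
# Run a one-off end-to-end connectivity check from a source VM
# to a destination IP and port in the peered VNet.
az network watcher test-connectivity \
  --resource-group <rg-name> \
  --source-resource <source-vm-name> \
  --dest-address 10.2.0.4 \
  --dest-port 443
```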

In cross-region (global) peering scenarios, also verify there are no service limitations affecting your traffic; for example, resources behind a Basic internal load balancer in the remote VNet aren’t reachable over global peering. Check the Activity Log for any peering operation warnings that might indicate configuration issues not visible in the portal status.