Shared VPC design considerations for multi-team environments with different security requirements

We’re designing a Shared VPC architecture for our organization that has 12 development teams with varying security and compliance requirements. Some teams handle PCI data, others process PHI, and several work with general corporate data.

I’m trying to establish clear IAM boundary design principles that give teams autonomy while maintaining security controls. We’re also concerned about quota management across teams and how to scope firewall rules effectively without creating a management nightmare. Has anyone implemented Shared VPC at this scale? What governance patterns worked well for balancing team independence with centralized network control?

Quota management is tricky in Shared VPC. We learned the hard way that quotas are project-level, not subnet-level, so one team can exhaust instance quotas and impact others. Our solution was setting up quota monitoring with Cloud Monitoring and alerting when any service project hits 70% of critical quotas. We also established a quota request process with automatic approval for standard increases and manual review for large requests. Documentation is critical - we maintain a central wiki with subnet assignments, firewall rule patterns, and escalation procedures.

Based on implementing Shared VPC for a similar multi-team environment, here’s a comprehensive design framework addressing IAM boundaries, quota management, and firewall rule scoping:

IAM Boundary Design:

The foundation is separating network administration from workload deployment. Create a dedicated host project owned by your central networking team, and grant that team roles/compute.networkAdmin and roles/compute.securityAdmin there. Organize service projects by compliance boundary - separate projects for PCI, PHI, and general workloads.

Grant teams roles/compute.networkUser at the subnet level, not the project level. This is the grant that actually enforces the IAM boundary: a team can only attach workloads to subnets it has been explicitly given. Use IAM conditions to further restrict access based on resource tags or time windows if needed. For example, development teams get networkUser only on dev subnets, while operations teams get it on production subnets.
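As a sketch, the subnet-level grant maps to a `gcloud compute networks subnets add-iam-policy-binding` invocation. The project, subnet, and group names below are illustrative, not from the original post:

```python
# Sketch: build the gcloud command that grants a team's group
# roles/compute.networkUser on a single subnet in the host project.
# All resource names here are hypothetical.

def subnet_network_user_binding(host_project: str, region: str,
                                subnet: str, member: str) -> str:
    return (
        "gcloud compute networks subnets add-iam-policy-binding "
        f"{subnet} --project={host_project} --region={region} "
        f"--member={member} --role=roles/compute.networkUser"
    )

cmd = subnet_network_user_binding(
    "net-host-prod", "us-central1", "dev-subnet-1",
    "group:team-alpha-dev@example.com")
print(cmd)
```

Generating these bindings from a registry of team-to-subnet assignments (rather than granting by hand) keeps the grants auditable and reviewable.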

Implement a service account strategy where each team has designated SAs for their workloads. These SAs get roles/compute.instanceAdmin on their service project but not the host project. This prevents teams from modifying shared network infrastructure.
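A minimal sketch of that service-account convention, assuming a hypothetical naming scheme and a check that no binding ever lands on the host project:

```python
# Sketch of the per-team workload SA convention: each team gets an SA in
# its own service project, with instanceAdmin granted there and never on
# the host project. The naming scheme and project IDs are assumptions.

HOST_PROJECT = "net-host-prod"  # hypothetical host project ID

def workload_sa(team: str, service_project: str) -> dict:
    email = f"{team}-workloads@{service_project}.iam.gserviceaccount.com"
    return {
        "email": email,
        "bindings": [
            # instanceAdmin scoped to the team's own service project only
            {"project": service_project,
             "role": "roles/compute.instanceAdmin.v1",
             "member": f"serviceAccount:{email}"},
        ],
    }

def touches_host_project(sa: dict) -> bool:
    """Guardrail: flag any SA whose bindings reach the host project."""
    return any(b["project"] == HOST_PROJECT for b in sa["bindings"])

sa = workload_sa("team-alpha", "svc-alpha-dev")
print(sa["email"])
print(touches_host_project(sa))  # False: no roles on the host project
```

A check like `touches_host_project` is easy to run in CI against your IaC output, so the "no host-project roles for workload SAs" rule is enforced mechanically rather than by review.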

Quota Management Strategy:

Quotas in GCP are project-scoped, which creates challenges in Shared VPC. Implement these safeguards:

  1. Set up quota monitoring using Cloud Monitoring metrics like compute.googleapis.com/quota/instances/usage. Create alerts at 70%, 85%, and 95% thresholds.

  2. Establish a quota allocation policy - each service project gets baseline quotas based on team size and workload patterns. Document these allocations in a central registry.

  3. Use resource quotas in Kubernetes (if using GKE) to prevent individual teams from consuming excessive resources within their service projects.

  4. Implement a quota request workflow using Cloud Functions and Jira/ServiceNow integration for tracking and approval.
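The request workflow in step 4 can be triaged automatically. The 25% auto-approve margin and the baseline registry below are illustrative assumptions; the actual thresholds would come from your documented allocations:

```python
# Hypothetical triage rule for the quota request workflow: increases up
# to 25% over the documented baseline auto-approve; anything larger, or
# any quota with no recorded baseline, goes to manual review.

BASELINES = {("svc-alpha-dev", "INSTANCES"): 200}  # central registry (sketch)

def triage_quota_request(project: str, quota: str, requested: int,
                         auto_margin: float = 0.25) -> str:
    baseline = BASELINES.get((project, quota))
    if baseline is None:
        return "manual-review"  # unknown quota: a human decides
    if requested <= baseline * (1 + auto_margin):
        return "auto-approve"
    return "manual-review"

print(triage_quota_request("svc-alpha-dev", "INSTANCES", 240))  # auto-approve
print(triage_quota_request("svc-alpha-dev", "INSTANCES", 400))  # manual-review
```

The Cloud Function in step 4 would run this decision and then open the Jira/ServiceNow ticket either pre-approved or routed to the network team.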

Firewall Rule Scoping:

This is where most organizations struggle. Use a three-tier approach:

  1. Hierarchical firewall policies (organization/folder level): Common deny rules that apply everywhere - deny external RDP/SSH, block known malicious IPs, enforce egress restrictions for compliance zones.

  2. VPC-level firewall rules (host project): Standard patterns like allow health checks, allow IAP tunneling, allow internal communication between subnets. Use network tags extensively - define a tagging taxonomy (e.g., tier-web, tier-app, tier-db, env-prod, env-dev).

  3. Service project firewall rules: Teams can create limited rules within their service projects, but only affecting their tagged resources. Use IAM conditions to prevent teams from creating rules that impact other teams’ tags.

For your 12 teams, create firewall rule templates for common patterns (web tier, app tier, database tier) and require teams to use these templates. This prevents rule sprawl and ensures consistent security posture.
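The template-plus-tags discipline can be enforced with a small validator. The template contents and tag names follow the taxonomy above but are otherwise invented for illustration:

```python
# Sketch of template-based rule validation: a team may only instantiate
# approved templates, and only against tags it owns. Templates, ports,
# and tag names are illustrative assumptions.

APPROVED_TEMPLATES = {
    "web-tier": {"allow": ["tcp:443"],  "target_tag": "tier-web"},
    "app-tier": {"allow": ["tcp:8080"], "target_tag": "tier-app"},
    "db-tier":  {"allow": ["tcp:5432"], "target_tag": "tier-db"},
}

def validate_rule_request(template: str, target_tags: set[str],
                          team_tags: set[str]) -> bool:
    """Reject unknown templates or rules that touch another team's tags."""
    if template not in APPROVED_TEMPLATES:
        return False
    required = APPROVED_TEMPLATES[template]["target_tag"]
    # The rule must target the template's tier tag, and every tag it
    # touches must belong to the requesting team.
    return required in target_tags and target_tags <= team_tags

team_tags = {"tier-web", "tier-app", "env-dev"}
print(validate_rule_request("web-tier", {"tier-web", "env-dev"}, team_tags))  # True
print(validate_rule_request("db-tier", {"tier-db"}, team_tags))               # False
```

Run a check like this in the pipeline that applies firewall changes, so a rule request from one team can never reference another team's tags even by accident.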

Governance Framework:

Document everything in a central repository - subnet assignments, CIDR allocations, firewall rule patterns, service account naming conventions, and escalation procedures. Run quarterly reviews of firewall rules to identify and remove unused rules. Use Config Connector or Terraform to manage infrastructure as code, which provides audit trails and prevents configuration drift.

The key success factor is balancing team autonomy with centralized control. Teams should feel empowered to deploy workloads quickly while the network team maintains security and compliance guardrails.

Use hierarchical firewall policies at the organization or folder level for common rules (deny all external RDP/SSH, allow health checks, etc.), then VPC-level firewall rules for environment-specific patterns. The key is standardization: we created firewall rule templates that teams must use. For example, all database tiers share a standard set of tags and rules. Teams can't create arbitrary rules; they select from approved templates. This reduced our rule count from 400+ to about 80 well-organized rules.