Container deployment breaks risk management data isolation between tenants

We’re running ETQ Reliance 2021 Risk Management module in Kubernetes containers and discovered a serious data isolation breach. During tenant validation testing, we found risk assessment data from TenantA appearing in TenantB’s dashboard queries.

The problem seems related to our container orchestration setup. We haven't defined any Kubernetes NetworkPolicies (so all pod-to-pod traffic is allowed by default), the risk management pods share a common database connection pool, and the tenant-context environment variables aren't being propagated to worker containers:


apiVersion: v1
kind: Pod
metadata:
  name: etq-risk-mgmt
spec:
  containers:
  - name: risk-mgmt                  # rest of the container spec omitted here
    env:
    - name: TENANT_ID
      value: "${TENANT_CONTEXT}"     # substituted by our templating, not by Kubernetes

Our API gateway performs basic authentication but doesn’t validate tenant boundaries at the request level. This creates a critical compliance violation where risk data crosses tenant boundaries. We need guidance on implementing proper isolation layers - network policies, database row-level security, and tenant validation in our container environment. Has anyone solved multi-tenant data privacy in containerized ETQ deployments?

The ETQ Risk Management module wasn’t originally designed for hard multi-tenancy in containers. You’ll need custom middleware to inject tenant context into every database query. I’ve seen teams use Hibernate filters or database proxy layers that automatically append tenant_id to all SQL statements. Your Kubernetes deployment needs to pass tenant context through request headers that persist across the entire call chain, not just environment variables.
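For illustration, the header-propagation piece could look like this minimal sketch (the class name TenantHeaderFilter and the X-Tenant-ID header are assumptions, not ETQ or Kubernetes APIs): a filter reads the tenant header at the start of each request and binds it to the handling thread so everything downstream can read it.

```java
import java.util.Map;

// Minimal sketch of request-scoped tenant propagation. TenantHeaderFilter and
// the X-Tenant-ID header name are illustrative assumptions.
public final class TenantHeaderFilter {
    private static final ThreadLocal<String> TENANT = new ThreadLocal<>();

    /** Binds the tenant from request headers; rejects requests without one. */
    static void bind(Map<String, String> headers) {
        String tenant = headers.get("X-Tenant-ID");
        if (tenant == null || tenant.isBlank())
            throw new IllegalArgumentException("request missing X-Tenant-ID header");
        TENANT.set(tenant);
    }

    /** Read by the data access layer when it scopes queries. */
    static String currentTenant() {
        return TENANT.get();
    }

    /** Must run when the request ends, or pooled threads leak stale context. */
    static void unbind() {
        TENANT.remove();
    }
}
```

In a real deployment this would run as a servlet filter or interceptor so bind/unbind wrap every request automatically.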

From a security perspective, you’re missing defense in depth. Even if one layer fails, others should catch cross-tenant access. Implement JWT tokens with tenant claims at the API gateway, use PostgreSQL row-level security policies at the database layer, and configure Kubernetes NetworkPolicies to prevent pods from different tenant namespaces from communicating. Test this with chaos engineering - deliberately try to break isolation and verify your controls hold.

This is a serious architectural gap. Your issue stems from missing tenant context propagation through the container stack. The environment variable approach won't work reliably because an environment variable is fixed for the life of the pod, while tenant identity is a per-request property - any pod that serves more than one tenant's requests has no way to switch context. You need to implement tenant validation at multiple layers: the API gateway should enforce tenant headers, database queries must include tenant filters in WHERE clauses, and network policies should segment tenant traffic at the pod level.

We hit this exact issue last year during our ISO 27001 audit. The auditors flagged it as a critical finding because risk assessment data contains sensitive business information. Beyond technical fixes, you need to document your data isolation strategy and prove it works through penetration testing. Our solution involved implementing database views per tenant and strict API gateway rules that reject any cross-tenant queries at the edge.

I’ve implemented this exact solution for ETQ Reliance in production. The fix requires changes at four critical layers working together.

First, implement Kubernetes NetworkPolicies for strict pod isolation. Create namespace-per-tenant architecture where each tenant’s risk management pods run in isolated namespaces with explicit ingress/egress rules:


apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tenant-isolation
  namespace: "${TENANT_NAMESPACE}"   # one namespace per tenant
spec:
  podSelector:
    matchLabels:
      tenant: "${TENANT_ID}"         # ${...} is filled in by your templating tool (Helm, envsubst); Kubernetes does not expand it
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tenant: "${TENANT_ID}"
  # no egress rules are listed, so all egress from these pods is denied;
  # add explicit egress rules for DNS and the database before applying this

Second, implement database row-level security. In PostgreSQL, enable RLS on every tenant-scoped table and create policies that filter all queries by the session's tenant. Each connection must set app.tenant_id before querying, and keep in mind that superusers and roles with BYPASSRLS skip these policies entirely:

ALTER TABLE risk_assessments ENABLE ROW LEVEL SECURITY;
ALTER TABLE risk_assessments FORCE ROW LEVEL SECURITY;  -- without FORCE, the table owner bypasses RLS
CREATE POLICY tenant_isolation ON risk_assessments
  USING (tenant_id = current_setting('app.tenant_id')::integer);

Third, fix the container-level tenant context. Don't hand-template static environment variables - use the Kubernetes Downward API to inject the pod's tenant label instead, and carry per-request tenant context in headers rather than in the environment (an init container can prime static per-pod config, but it never sees request headers):

env:
  - name: TENANT_ID
    valueFrom:
      fieldRef:
        fieldPath: metadata.labels['tenant']  # requires each pod to carry a tenant label

Fourth, implement API gateway tenant validation layer. Use an API gateway like Kong or Istio that validates tenant claims in JWT tokens before routing requests. The gateway should extract tenant_id from the JWT, validate it against allowed tenants, and inject it as a header that propagates through all downstream services.
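As a rough sketch of that gateway check (class and claim names are assumptions, and signature verification is deliberately omitted - a real gateway must verify the token with a proper JWT library before trusting any claim):

```java
import java.util.Base64;

// Illustrative gateway-side check: decode the JWT payload, read its "tenant"
// claim, and compare it with the tenant the request is addressed to.
// NOTE: this sketch skips signature verification, which a real gateway must
// perform before trusting any claim.
public final class TenantClaimCheck {

    /** Extracts the "tenant" claim from the (unverified) JWT payload. */
    static String tenantClaim(String jwt) {
        String[] parts = jwt.split("\\.");
        String payload = new String(Base64.getUrlDecoder().decode(parts[1]));
        int key = payload.indexOf("\"tenant\"");
        if (key < 0) return null;
        int start = payload.indexOf('"', payload.indexOf(':', key) + 1) + 1;
        return payload.substring(start, payload.indexOf('"', start));
    }

    /** True only when the token's tenant claim matches the requested tenant. */
    static boolean sameTenant(String jwt, String requestedTenant) {
        return requestedTenant != null && requestedTenant.equals(tenantClaim(jwt));
    }
}
```

The gateway would reject the request outright when sameTenant returns false, and otherwise forward the validated tenant as a header for the downstream services.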

Additional considerations: Enable audit logging at each layer so you can trace any cross-tenant access attempts. Use separate database connection pools per tenant with connection string parameters that set session variables for row-level security. Implement integration tests that simulate malicious cross-tenant queries and verify they’re blocked.
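The per-tenant pool wiring might be sketched like this (TenantPoolConfig and buildSessionInit are illustrative names): each tenant's pool runs one init statement on every new connection so the row-level security policies see the correct app.tenant_id.

```java
// Sketch of per-tenant connection pool setup. TenantPoolConfig and
// buildSessionInit are illustrative names; wire the returned SQL into your
// pool's connection-init hook (e.g. HikariCP's connectionInitSql).
public final class TenantPoolConfig {

    /** Builds the connection-init statement, refusing non-numeric tenant ids
        so a malformed id can never smuggle SQL into the session setup. */
    static String buildSessionInit(String tenantId) {
        if (!tenantId.matches("\\d+"))
            throw new IllegalArgumentException("tenant id must be numeric: " + tenantId);
        return "SET app.tenant_id = '" + tenantId + "'";
    }
}
```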

For ETQ Risk Management specifically, you’ll need to modify the data access layer to respect tenant context. Create a TenantContextFilter that intercepts all database queries and automatically appends tenant_id conditions. This prevents developers from accidentally writing queries that leak data.
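A stripped-down sketch of that filter's core (scopeToTenant is an illustrative name; a production version would work at the ORM or prepared-statement level rather than on raw SQL strings):

```java
// Core idea of a TenantContextFilter: every query leaves the filter carrying
// a tenant_id predicate, whether or not the developer remembered to add one.
public final class TenantContextFilter {

    /** Appends (or injects) a tenant_id predicate into a SELECT statement. */
    static String scopeToTenant(String sql, String tenantId) {
        String predicate = "tenant_id = '" + tenantId.replace("'", "''") + "'";
        return sql.toUpperCase().contains(" WHERE ")
            ? sql.replaceFirst("(?i)WHERE", "WHERE " + predicate + " AND")
            : sql + " WHERE " + predicate;
    }
}
```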

Monitor your Kubernetes network traffic using tools like Cilium or Calico to detect any unexpected pod-to-pod communication across tenant boundaries. Set up alerts for policy violations.

This layered approach ensures that even if one security control fails, others provide backup protection. We’ve been running this in production for 18 months across 50+ tenants with zero isolation breaches.