Introduction
Picture a fintech company running dozens of microservices on Kubernetes. Their platform handles sensitive financial transactions, and the team has invested heavily in application security. However, during a routine audit, they discover that a developer created a service account with cluster-admin privileges for debugging purposes months ago—and forgot to remove it. An attacker who compromises any pod in that namespace could use those credentials to take over the entire cluster.
Kubernetes security incidents increasingly stem not from zero-day vulnerabilities but from misconfigurations. The flexibility that makes Kubernetes powerful also introduces complexity: RBAC policies, network configurations, pod security contexts, and secrets management all present opportunities for error. This guide explores how to systematically test Kubernetes security and identify common misconfigurations.
Understanding Kubernetes Attack Surface
Kubernetes clusters present multiple attack vectors. The API server sits at the center—attackers who gain access with sufficient privileges can create workloads, read secrets, and modify cluster configurations.
RBAC misconfigurations represent one of the most common issues. Overly permissive roles and forgotten service accounts with elevated privileges create paths for privilege escalation:
```yaml
# Avoid: Overly permissive ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: overly-permissive
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
```

Pod security contexts control what capabilities containers run with. Containers running as root or with privileged mode enabled can potentially escape to the host node.
Network policies define how pods communicate. In clusters without network policies, all pods can communicate freely—a compromised pod can probe every service.
Testing RBAC Configurations
Systematic RBAC testing starts with enumeration:
```bash
# List all ClusterRoles with their rules
kubectl get clusterroles -o yaml

# Check what a specific service account can do
kubectl auth can-i --list --as=system:serviceaccount:default:my-service-account

# Find who can delete deployments cluster-wide
# (requires the who-can kubectl plugin, installable via krew)
kubectl who-can delete deployments --all-namespaces
```

Look for these common misconfigurations:
- Excessive secret access: Service accounts that can read secrets across namespaces
- Wildcard permissions: Any role using wildcards for apiGroups, resources, or verbs
- Dangerous verbs: The escalate, bind, and impersonate verbs allow privilege escalation
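As a quick sketch, the wildcard check can also be run offline against exported role definitions. Here a jq filter is applied to a small inline sample standing in for `kubectl get clusterroles -o json`; the role names are hypothetical:

```bash
# Sample export standing in for: kubectl get clusterroles -o json
cat <<'EOF' > /tmp/clusterroles.json
{"items":[
  {"metadata":{"name":"overly-permissive"},
   "rules":[{"apiGroups":["*"],"resources":["*"],"verbs":["*"]}]},
  {"metadata":{"name":"pod-reader"},
   "rules":[{"apiGroups":[""],"resources":["pods"],"verbs":["get","list"]}]}
]}
EOF

# Emit the name of any role whose apiGroups, resources, or verbs contain "*"
jq -r '.items[]
  | select(.rules[]? | (.apiGroups // []) + (.resources // []) + (.verbs // []) | index("*"))
  | .metadata.name' /tmp/clusterroles.json
# → overly-permissive
```

The same filter works against a live export, which is useful for auditing clusters where the who-can plugin isn't installed.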
Testing Pod Security
Pod security testing verifies that workloads run with appropriate restrictions:
```bash
# Find pods running as root
kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}: {.spec.containers[*].securityContext.runAsUser}{"\n"}{end}'

# Find pods with privileged containers
kubectl get pods -A -o json | jq '.items[] | select(.spec.containers[].securityContext.privileged==true) | .metadata.name'
```

A secure pod configuration should include:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-app
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: registry.example.com/app:1.0  # illustrative image
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
```

Test for dangerous configurations: privileged: true, hostNetwork: true, hostPID: true, and writable hostPath mounts to sensitive directories.
Testing Network Policies
Without network policies, Kubernetes allows unrestricted pod-to-pod communication:
```bash
# Check if any network policies exist
kubectl get networkpolicies --all-namespaces

# Deploy a test pod and verify isolation
kubectl run nettest --image=nicolaka/netshoot -- sleep 3600
kubectl exec nettest -- curl -s --connect-timeout 3 http://internal-service.production:8080
```

Effective network policies follow a deny-by-default approach:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
```

Then explicitly allow required communication paths.
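For example, a follow-up policy might admit only traffic from a labeled frontend to the API pods — a sketch with illustrative labels and port:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api          # illustrative label
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend  # illustrative label
    ports:
    - protocol: TCP
      port: 8080
```

Because the deny-all policy already blocks everything else, each allow policy documents exactly one intended communication path.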
Testing Secrets Management
Kubernetes secrets are base64-encoded but not encrypted by default. Verify that secrets are protected:
```bash
# Find pods with secrets as environment variables (can leak to logs)
kubectl get pods -A -o json | jq '.items[] | select(.spec.containers[].env[]?.valueFrom.secretKeyRef != null)'

# Verify etcd encryption is enabled
kubectl get pod kube-apiserver-<node> -n kube-system -o yaml | grep encryption-provider-config
```

Disable automatic token mounting when not needed:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: no-token-sa
automountServiceAccountToken: false
```

Cluster Component Security
Test control plane components:
- API Server: Verify TLS is enforced, anonymous auth is disabled, and audit logging captures relevant events
- Kubelet: The kubelet API should require authentication; test whether anonymous access allows reading pod information
- etcd: Verify etcd requires mutual TLS and isn't exposed to the network
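As one concrete check, the API server's static pod manifest on a control plane node can be grepped for hardening flags. The sample manifest below is a stand-in for /etc/kubernetes/manifests/kube-apiserver.yaml, trimmed to the flags of interest:

```bash
# Trimmed sample standing in for the real kube-apiserver manifest
cat <<'EOF' > /tmp/kube-apiserver.yaml
spec:
  containers:
  - command:
    - kube-apiserver
    - --anonymous-auth=false
    - --audit-log-path=/var/log/kubernetes/audit.log
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
EOF

# Warn if anonymous auth is not explicitly disabled
grep -q -- '--anonymous-auth=false' /tmp/kube-apiserver.yaml \
  && echo "anonymous auth disabled" \
  || echo "WARNING: anonymous auth may be enabled"
# → anonymous auth disabled
```

The same pattern extends to other flags (audit logging, encryption-provider-config, TLS settings), keeping the check scriptable across control plane nodes.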
Conclusion
Kubernetes security requires testing across multiple layers: RBAC configurations, pod security contexts, network policies, and secrets management. The dynamic nature of Kubernetes means security posture can shift quickly with frequent deployments and configuration changes.
Manual audits provide point-in-time snapshots but struggle to keep pace with rapid cluster changes. On-demand security testing helps teams identify misconfigurations as they're introduced. RedVeil's AI-powered platform can assess Kubernetes configurations alongside your web applications and APIs, providing a more complete view of your attack surface.
Start testing your Kubernetes security with RedVeil today.