Serverless computing has transformed how organizations build and deploy applications. AWS Lambda, Azure Functions, and Google Cloud Functions allow developers to focus on code while the cloud provider handles infrastructure. This shift brings significant security benefits—no servers to patch, automatic scaling, isolated execution environments—but also introduces new attack vectors. Misconfigured IAM permissions, exposed secrets, and inadequate input validation can lead to data breaches, privilege escalation, and resource abuse. This guide covers essential security practices for serverless functions across major cloud providers and explains how agentic penetration testing can identify vulnerabilities in your serverless architecture.
Understanding Serverless Security Risks
Serverless doesn't mean "no security." While the cloud provider secures the underlying infrastructure, you remain responsible for your code, configurations, and data. The shared responsibility model shifts, but your obligations persist.
Common serverless security risks include:
- Overly permissive IAM roles: Functions with broad permissions can be exploited for lateral movement
- Sensitive data in environment variables: Secrets exposed through logs, error messages, or function configuration
- Injection vulnerabilities: User input reaching databases, APIs, or system commands
- Insecure dependencies: Vulnerable libraries bundled with function code
- Event data injection: Malicious payloads in event sources (S3, SQS, API Gateway)
- Insufficient logging and monitoring: Attacks going undetected
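To make the injection items concrete, here is a deliberately minimal sketch (the handler names and the in-memory SQLite table are hypothetical stand-ins for a real datastore) showing how unvalidated event data reaches a query, and the parameterized fix:

```python
import sqlite3

def _demo_db():
    # Hypothetical table standing in for a real datastore
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id TEXT, name TEXT)")
    conn.execute("INSERT INTO users VALUES ('1', 'alice')")
    return conn

def vulnerable_handler(event):
    user_id = event["queryStringParameters"]["id"]
    conn = _demo_db()
    # BAD: string interpolation; the payload ' OR '1'='1 dumps every row
    return conn.execute(f"SELECT name FROM users WHERE id = '{user_id}'").fetchall()

def safe_handler(event):
    user_id = event["queryStringParameters"]["id"]
    conn = _demo_db()
    # GOOD: a parameterized query treats the payload as data, not SQL
    return conn.execute("SELECT name FROM users WHERE id = ?", (user_id,)).fetchall()
```

The same pattern applies to NoSQL queries and shell commands built from event fields: validate first, then pass input as data rather than splicing it into a command string.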
Least Privilege IAM Configuration
The principle of least privilege is critical for serverless security. Each function should have only the permissions it needs to perform its specific task.
AWS Lambda IAM
Create purpose-specific IAM roles:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"dynamodb:GetItem",
"dynamodb:PutItem",
"dynamodb:UpdateItem"
],
"Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Users"
},
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "arn:aws:logs:us-east-1:123456789012:log-group:/aws/lambda/user-service-*"
}
]
}
Avoid these common mistakes:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "dynamodb:*",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": "*"
}
]
}
Use AWS IAM Access Analyzer to identify overly permissive policies:
# List findings for unused permissions
aws accessanalyzer list-findings --analyzer-arn arn:aws:access-analyzer:us-east-1:123456789012:analyzer/my-analyzer
Azure Functions IAM
Use Managed Identities with minimal RBAC roles:
# Create a function with system-assigned managed identity
az functionapp identity assign \
--name MyFunction \
--resource-group MyResourceGroup
# Grant specific role on specific resource
az role assignment create \
--assignee <managed-identity-principal-id> \
--role "Storage Blob Data Reader" \
  --scope /subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>/blobServices/default/containers/<container>
Google Cloud Functions IAM
Use service accounts with granular permissions:
# service-account.yaml
resources:
- name: order-processor-sa
type: iam.v1.serviceAccount
properties:
accountId: order-processor
displayName: Order Processing Function
- name: order-processor-binding
type: gcp-types/storage-v1:storage.buckets.setIamPolicy
properties:
bucket: order-data
bindings:
- role: roles/storage.objectViewer
members:
      - serviceAccount:order-processor@project-id.iam.gserviceaccount.com
Input Validation in Serverless
Every event source—API Gateway, S3, SQS, EventBridge—can deliver malicious payloads. Validate all input.
API Gateway Event Validation (AWS Lambda)
import json
from typing import Optional
from pydantic import BaseModel, ValidationError, EmailStr, constr
class CreateUserRequest(BaseModel):
email: EmailStr
    username: constr(min_length=3, max_length=32, pattern=r'^[a-zA-Z0-9_]+$')  # pydantic v2 uses pattern, not regex
role: str
def validate_role(self):
allowed_roles = ['viewer', 'editor', 'admin']
if self.role not in allowed_roles:
raise ValueError(f"Invalid role. Must be one of: {allowed_roles}")
return self.role
def lambda_handler(event, context):
try:
        body = json.loads(event.get('body') or '{}')  # body can be None for requests without a payload
request = CreateUserRequest(**body)
request.validate_role()
except json.JSONDecodeError:
return {
'statusCode': 400,
'body': json.dumps({'error': 'Invalid JSON'})
}
    except ValidationError as e:
        return {
            'statusCode': 400,
            'body': json.dumps({'error': 'Validation failed', 'details': e.errors()})
        }
    except ValueError as e:
        # validate_role() raises ValueError for disallowed roles
        return {
            'statusCode': 400,
            'body': json.dumps({'error': str(e)})
        }
# Process validated request
    return create_user(request)
S3 Event Validation
S3 object keys can contain malicious characters:
import re
import urllib.parse
def lambda_handler(event, context):
for record in event['Records']:
bucket = record['s3']['bucket']['name']
# Key is URL-encoded in the event
key = urllib.parse.unquote_plus(record['s3']['object']['key'])
# Validate bucket is expected
if bucket not in ['allowed-bucket-1', 'allowed-bucket-2']:
raise ValueError(f"Unexpected bucket: {bucket}")
# Validate key format
if not re.match(r'^uploads/[a-zA-Z0-9_\-/]+\.(jpg|png|pdf)$', key):
raise ValueError(f"Invalid key format: {key}")
# Check for path traversal
if '..' in key:
raise ValueError("Path traversal detected")
        process_file(bucket, key)
Queue Message Validation (Azure Functions)
import { AzureFunction, Context } from "@azure/functions";
import { z } from "zod";
const OrderSchema = z.object({
orderId: z.string().uuid(),
customerId: z.string().uuid(),
items: z.array(z.object({
productId: z.string().uuid(),
quantity: z.number().int().positive().max(1000),
unitPrice: z.number().positive().max(1000000)
})).min(1).max(100),
shippingAddress: z.object({
street: z.string().max(200),
city: z.string().max(100),
country: z.string().length(2)
})
});
const queueTrigger: AzureFunction = async function (
context: Context,
queueMessage: unknown
): Promise<void> {
try {
const order = OrderSchema.parse(queueMessage);
await processOrder(order);
} catch (error) {
if (error instanceof z.ZodError) {
context.log.error("Invalid order format:", error.errors);
// Don't retry - send to dead letter queue
throw new Error("Validation failed - message will be dead-lettered");
}
throw error;
}
};
export default queueTrigger;
Secrets Management
Never put secrets in plaintext environment variables: they are visible to anyone who can read the function configuration and routinely leak through logs and error messages. Use managed secret stores.
AWS Secrets Manager
import json
import boto3
from botocore.exceptions import ClientError
def get_secret(secret_name: str) -> dict:
session = boto3.session.Session()
client = session.client(service_name='secretsmanager')
try:
response = client.get_secret_value(SecretId=secret_name)
return json.loads(response['SecretString'])
    except ClientError:
        # Re-raise so the invocation fails visibly; never fall back to hardcoded credentials
        raise
# Cache secrets across warm invocations, with a TTL so rotated secrets
# are picked up without waiting for a cold start
import time
_secrets_cache = {}
_SECRET_TTL_SECONDS = 300
def get_cached_secret(secret_name: str) -> dict:
    entry = _secrets_cache.get(secret_name)
    if entry is None or time.time() - entry[1] > _SECRET_TTL_SECONDS:
        entry = (get_secret(secret_name), time.time())
        _secrets_cache[secret_name] = entry
    return entry[0]
def lambda_handler(event, context):
db_credentials = get_cached_secret('prod/database/credentials')
connection = connect_to_database(
host=db_credentials['host'],
user=db_credentials['username'],
password=db_credentials['password']
    )
Azure Key Vault
import { Context } from "@azure/functions";
import { DefaultAzureCredential } from "@azure/identity";
import { SecretClient } from "@azure/keyvault-secrets";
const credential = new DefaultAzureCredential();
const vaultUrl = process.env.KEY_VAULT_URL;
if (!vaultUrl) {
    // Fail fast on misconfiguration instead of passing undefined to the client
    throw new Error("KEY_VAULT_URL is not set");
}
const client = new SecretClient(vaultUrl, credential);
let cachedSecret: string | null = null;
async function getDbPassword(): Promise<string> {
    if (!cachedSecret) {
        const secret = await client.getSecret("database-password");
        cachedSecret = secret.value ?? null;
    }
    if (!cachedSecret) {
        throw new Error("Secret 'database-password' has no value");
    }
    return cachedSecret;
}
export async function main(context: Context): Promise<void> {
const password = await getDbPassword();
// Use password
}
Google Secret Manager
from google.cloud import secretmanager
def access_secret(project_id: str, secret_id: str, version: str = "latest") -> str:
client = secretmanager.SecretManagerServiceClient()
name = f"projects/{project_id}/secrets/{secret_id}/versions/{version}"
response = client.access_secret_version(request={"name": name})
return response.payload.data.decode("UTF-8")
# Cache at module level for warm starts
_api_key = None
def get_api_key():
global _api_key
if _api_key is None:
_api_key = access_secret("my-project", "external-api-key")
    return _api_key
Secure Function Configuration
Limit Execution Time and Resources
Prevent resource exhaustion attacks:
# AWS SAM template
Resources:
MyFunction:
Type: AWS::Serverless::Function
Properties:
Handler: app.handler
Runtime: python3.11
Timeout: 30 # Maximum 30 seconds
MemorySize: 256 # Limit memory
      ReservedConcurrentExecutions: 100 # Limit concurrent executions
Enable VPC Isolation When Needed
Place functions in VPCs to access private resources:
Resources:
MyFunction:
Type: AWS::Serverless::Function
Properties:
VpcConfig:
SecurityGroupIds:
- !Ref LambdaSecurityGroup
SubnetIds:
- !Ref PrivateSubnet1
          - !Ref PrivateSubnet2
Configure Dead Letter Queues
Handle failed invocations securely:
Resources:
MyFunction:
Type: AWS::Serverless::Function
Properties:
DeadLetterQueue:
Type: SQS
        TargetArn: !GetAtt DeadLetterQueue.Arn
Dependency Security
Serverless functions often include third-party dependencies. These can contain vulnerabilities.
Scan Dependencies Before Deployment
# Python
pip install safety
safety check -r requirements.txt
# Node.js
npm audit
npm audit fix
# Use Snyk for deeper analysis
snyk test
Minimize Dependencies
Only include what you need. Every bundled package adds attack surface and cold-start time, and the AWS-provided Python runtime already ships boto3, so you rarely need to package it yourself:
# Create SDK clients once, at module level, so warm invocations reuse them
import boto3
dynamodb = boto3.client('dynamodb')
Pin Dependency Versions
# requirements.txt
requests==2.31.0
boto3==1.28.0
pydantic==2.4.0
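Version pins alone don't protect against a compromised package index serving a tampered artifact under the same version number. If you use pip, hash pinning adds that check; the hash below is an illustrative placeholder, and pip-tools can generate real ones with `pip-compile --generate-hashes`:

```text
# requirements.txt with hash pinning; install with:
#   pip install --require-hashes -r requirements.txt
# pip rejects any artifact whose sha256 does not match (placeholder shown)
requests==2.31.0 \
    --hash=sha256:<sha256-of-the-approved-wheel>
```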
Logging and Monitoring
Implement security-focused logging:
import json
import logging
from datetime import datetime
logger = logging.getLogger()
logger.setLevel(logging.INFO)
def log_security_event(event_type: str, details: dict, severity: str = "INFO"):
log_entry = {
"timestamp": datetime.utcnow().isoformat(),
"event_type": event_type,
"severity": severity,
"details": details
}
if severity == "CRITICAL":
logger.critical(json.dumps(log_entry))
elif severity == "WARNING":
logger.warning(json.dumps(log_entry))
else:
logger.info(json.dumps(log_entry))
def lambda_handler(event, context):
# Log authentication attempts
user_id = event.get('requestContext', {}).get('authorizer', {}).get('userId')
source_ip = event.get('requestContext', {}).get('identity', {}).get('sourceIp')
log_security_event("api_access", {
"user_id": user_id,
"source_ip": source_ip,
"path": event.get('path'),
"method": event.get('httpMethod')
})
# Detect suspicious patterns
if detect_anomaly(event):
log_security_event("suspicious_activity", {
"user_id": user_id,
"source_ip": source_ip,
"reason": "Unusual request pattern"
        }, severity="WARNING")
Why Traditional Pentesting Falls Short
Serverless architectures present unique challenges for traditional penetration testing. Functions are ephemeral, making it difficult to maintain a testing session. The distributed nature means hundreds of functions may need assessment, each with its own event sources, IAM roles, and dependencies.
Manual testers must understand cloud-specific concepts like IAM policies, event triggers, and managed services. They need to test not just the function code, but the entire configuration: permissions, environment variables, VPC settings, and integrations.
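Part of that configuration review can be automated in-house. As an illustrative sketch (the helper name is ours, not a standard tool), a pure function can flag IAM policy documents that grant wildcard actions, fed with JSON pulled from `aws iam get-role-policy` or your IaC templates:

```python
def has_wildcard_actions(policy_document: dict) -> bool:
    """Return True if any Allow statement grants '*' or 'service:*' actions."""
    statements = policy_document.get("Statement", [])
    if isinstance(statements, dict):
        # Single-statement policies may omit the enclosing list
        statements = [statements]
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if any(action == "*" or action.endswith(":*") for action in actions):
            return True
    return False
```

Running a check like this in CI on every template change catches dynamodb:* style policies before they reach production.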
Traditional pentests are also point-in-time. Serverless deployments happen frequently—often multiple times per day. New functions, changed permissions, or updated dependencies can introduce vulnerabilities after an assessment.
How Agentic Testing Secures Serverless
RedVeil's AI-powered penetration testing is designed for modern cloud architectures including serverless. RedVeil autonomously discovers your serverless functions through API endpoints, tests input validation across event sources, and identifies misconfigurations that could lead to security breaches.
RedVeil doesn't just test individual functions—it understands the relationships between functions, event sources, and downstream services. It identifies where overly permissive IAM roles could enable privilege escalation, where secrets might be exposed, and where input validation gaps could lead to injection attacks.
Each finding is validated through actual exploitation attempts, not theoretical analysis. When RedVeil identifies a vulnerability, it provides proof-of-concept evidence, impact assessment, and specific remediation guidance for your serverless platform.
With on-demand testing, you can verify your serverless security after deployments, IAM changes, or new function additions without waiting weeks for manual assessments.
Conclusion
Serverless computing offers significant security benefits, but requires careful attention to IAM permissions, input validation, secrets management, and function configuration. The ephemeral nature of functions and the complexity of cloud IAM create unique security challenges.
Traditional security testing struggles to comprehensively assess serverless architectures. On-demand, agentic penetration testing from RedVeil provides the autonomous, intelligent assessment serverless applications need to stay secure.
Start securing your serverless functions with RedVeil at https://app.redveil.ai/ and ensure your cloud-native applications are protected against modern threats.