Preventing Server-Side Template Injection in Python

Server-side template injection in Python frameworks like Flask and Django can lead to remote code execution when user input is rendered as template code.

Introduction

Your Flask application allows users to customize email templates with their company branding. A few weeks after launch, you discover that attackers have achieved full server access by injecting malicious template syntax into the customization fields. They leveraged Jinja2's powerful expression language to execute arbitrary Python code, exfiltrating credentials and installing backdoors.

Server-side template injection (SSTI) occurs when user input is embedded into template code rather than passed as data to be rendered. Python's popular template engines—Jinja2 (Flask) and Django Templates—provide powerful expression capabilities that become dangerous when attackers control template content. This guide explores how SSTI vulnerabilities manifest in Python applications, effective prevention strategies, and why AI-agentic testing provides superior detection of template injection attacks.

Understanding the Risk

Template injection exploits the boundary between template code and template data. When applications treat user input as template syntax rather than as values to display, attackers can inject expressions that execute during rendering.
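The distinction is easiest to see with the standard library's string.Template, used here as a minimal stand-in for a full engine: a value substituted as data survives as literal text, no matter what syntax it contains.

```python
from string import Template

# The template syntax is fixed by the developer; user input arrives
# only as a substitution VALUE.
payload = "${__class__}"  # template-looking text supplied by a user
page = Template("Hello, $name!").substitute(name=payload)

# The payload is emitted verbatim -- substitution values are never
# re-parsed as template syntax.
print(page)  # Hello, ${__class__}!
```

The vulnerabilities below all arise from crossing this boundary: the user's text becomes part of the template itself, so the engine parses it.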

Jinja2 Exploitation: Jinja2's expression language provides access to Python's object model:

# Vulnerable Flask code
from flask import Flask, request, render_template_string
 
app = Flask(__name__)
 
@app.route('/greet')
def greet():
    name = request.args.get('name', 'Guest')
    # DANGEROUS: User input directly in template
    template = f"<h1>Hello, {name}!</h1>"
    return render_template_string(template)

An attacker provides this payload as name:

{{ ''.__class__.__mro__[1].__subclasses__()[X].__init__.__globals__['os'].popen('id').read() }}

This walks Python's object hierarchy: from the empty string's class, up its MRO to object, then down into object's loaded subclasses until it finds one whose __init__ globals expose the os module, at which point it runs a shell command. The subclass index ([X] above) varies with the Python version and the modules loaded, so attackers simply enumerate it.
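The traversal can be reproduced in plain Python, which makes clear why the payload needs no import statement (a stdlib-only sketch; exact class names and counts vary by interpreter):

```python
import os  # ensure os (and its helper classes) is loaded, as in any real app

# ''.__class__ is str; the last entry of its MRO is object.
base = ''.__class__.__mro__[-1]

# object.__subclasses__() lists every class currently loaded in the process.
loaded = base.__subclasses__()

# Classes defined inside the os module (e.g. os._wrap_close) expose os's
# module globals -- including popen -- through __init__.__globals__.
gadgets = [
    c for c in loaded
    if 'popen' in getattr(getattr(c, '__init__', None), '__globals__', {})
]
# At least one such gadget exists in a normal CPython process, which is
# why the payload's subclass index [X] can always be found by enumeration.
```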

Django Template Exploitation: While Django templates are more restrictive, vulnerabilities occur when templates are constructed from user input:

from django.http import HttpResponse
from django.template import Template, Context
 
def render_message(request):
    message = request.GET.get('message', '')
    # DANGEROUS: User input in template
    template = Template(f"<p>{message}</p>")
    return HttpResponse(template.render(Context({})))

The impact is severe: SSTI typically leads to remote code execution, complete server compromise, and data exfiltration.

Prevention Best Practices

Never Embed User Input in Template Code

User input should be passed as template variables, never concatenated into template strings:

# VULNERABLE: Input in template code
template = f"<h1>Hello, {user_input}!</h1>"
render_template_string(template)
 
# SAFE: Input passed as variable
render_template_string("<h1>Hello, {{ name }}!</h1>", name=user_input)

Use Template Files Instead of Strings

Prefer pre-defined template files over dynamically constructed templates:

@app.route('/profile/<username>')
def profile(username):
    user = User.query.filter_by(username=username).first()
    return render_template('profile.html', user=user)
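A matching template file (a hypothetical sketch of templates/profile.html) keeps all template syntax fixed at deploy time; user data only ever enters through context variables:

```html
<!-- templates/profile.html: the syntax is author-controlled -->
<h1>{{ user.username }}</h1>
<p>Member since: {{ user.created_at }}</p>
```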

Sandbox User-Customizable Templates

If users must provide template content, use a sandboxed environment:

from jinja2.sandbox import SandboxedEnvironment
 
def render_user_template(template_string, context):
    env = SandboxedEnvironment()
    # Clear default globals (range, cycler, joiner, lipsum, ...) to shrink
    # the attack surface exposed to user templates
    env.globals = {}
    
    try:
        template = env.from_string(template_string)
        return template.render(**context)
    except Exception:
        return "Template error"
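A quick check of the sandbox's behavior (assuming Jinja2 is installed): benign templates render normally, while the classic traversal payload fails because the sandbox refuses access to underscore-prefixed attributes.

```python
from jinja2.sandbox import SandboxedEnvironment
from jinja2.exceptions import SecurityError

env = SandboxedEnvironment()

# A benign user template renders as expected.
greeting = env.from_string("Hello {{ name }}").render(name="Alice")

# The traversal payload raises SecurityError at render time:
# the sandbox blocks attribute names that start with an underscore.
try:
    env.from_string("{{ ''.__class__.__mro__ }}").render()
    blocked = False
except SecurityError:
    blocked = True
```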

Restrict Available Template Variables

Explicitly whitelist what's available in user templates:

from datetime import datetime
from jinja2.sandbox import SandboxedEnvironment

def render_safe_template(user_template, user_data):
    env = SandboxedEnvironment()
    
    safe_context = {
        'company_name': user_data.get('company_name', ''),
        'user_name': user_data.get('user_name', ''),
        'date': datetime.now().strftime('%Y-%m-%d'),
    }
    
    template = env.from_string(user_template)
    return template.render(**safe_context)

Use Logic-Less Templating for User Content

For maximum safety, use a logic-less template syntax:

import re
 
def simple_template(template_string, values):
    def replace(match):
        key = match.group(1).strip()
        return str(values.get(key, ''))
    
    return re.sub(r'\{([^}]+)\}', replace, template_string)
 
result = simple_template(
    "Hello {name}, your order {order_id} is ready.",
    {'name': 'Alice', 'order_id': '12345'}
)
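One property worth verifying is that this renderer never evaluates what it substitutes. Restating the helper compactly so the check stands alone: an SSTI payload passed as a value comes out as inert text.

```python
import re

def simple_template(template_string, values):
    # Same behavior as above: pure textual substitution, no evaluation.
    return re.sub(
        r'\{([^}]+)\}',
        lambda m: str(values.get(m.group(1).strip(), '')),
        template_string,
    )

# A template-injection payload supplied as a VALUE is emitted verbatim.
out = simple_template("Hello {name}", {'name': "{{ ''.__class__ }}"})
print(out)  # Hello {{ ''.__class__ }}
```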

Input Validation and Sanitization

As a secondary layer, block known SSTI patterns in user input. Pattern filters can be bypassed by determined attackers, so treat this as defense in depth rather than a primary control:

import re
 
SSTI_PATTERNS = [
    r'\{\{.*\}\}',
    r'\{%.*%\}',
    r'__class__',
    r'__mro__',
    r'__subclasses__',
    r'__globals__',
]
 
def contains_ssti_pattern(input_string):
    for pattern in SSTI_PATTERNS:
        if re.search(pattern, input_string, re.IGNORECASE):
            return True
    return False
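Exercising the filter (restated compactly so the example is self-contained) shows a traversal payload being flagged while ordinary branding text passes:

```python
import re

SSTI_PATTERNS = [
    r'\{\{.*\}\}', r'\{%.*%\}',
    r'__class__', r'__mro__', r'__subclasses__', r'__globals__',
]

def contains_ssti_pattern(input_string):
    return any(re.search(p, input_string, re.IGNORECASE)
               for p in SSTI_PATTERNS)

flagged = contains_ssti_pattern("{{ ''.__class__.__mro__ }}")  # True
clean = contains_ssti_pattern("Acme Corp Quarterly Update")    # False
```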

Django-Specific Protections

Django templates are safer by design but still require care:

# SAFE: Use Django's template system correctly
from django.shortcuts import render
 
def profile(request, user_id):
    user = User.objects.get(id=user_id)
    return render(request, 'profile.html', {'user': user})
 
# VULNERABLE: Constructing templates from user input - NEVER do this
template = Template(request.POST.get('template'))

Implement Content Security Policy

CSP cannot stop server-side code execution, but it limits the client-side fallout when injected template output reaches the browser:

@app.after_request
def add_security_headers(response):
    response.headers['Content-Security-Policy'] = (
        "default-src 'self'; script-src 'self'"
    )
    return response

Why Traditional Pentesting Falls Short

SSTI vulnerabilities require understanding the specific template engine, its expression syntax, and exploitation techniques. Manual testers may check obvious injection points but miss less apparent attack surfaces—custom template fields, PDF generators, or email builders. Exploitation payloads vary based on Python version and sandbox configurations, requiring systematic probing with multiple payload variations.

How AI-Agentic Testing Solves It

RedVeil's AI agents understand template injection across Python frameworks and systematically probe input fields for SSTI vulnerabilities. The platform tests with payloads specific to Jinja2, Django, and other template engines, validating whether expressions execute or are safely rendered as text.

When SSTI is detected, RedVeil demonstrates the impact and provides clear remediation guidance. On-demand testing allows validation after deploying new features that accept user-provided content.

Conclusion

Server-side template injection transforms simple input fields into critical vulnerabilities with remote code execution impact. The fundamental defense is architectural: never embed user input into template code. When user-customizable templates are required, sandboxed environments, restricted variables, and input validation provide layers of protection.

AI-agentic testing from RedVeil validates that your templating implementation resists injection attacks, systematically probing for SSTI across your Python application's attack surface.

Protect your Python applications from template injection—test with RedVeil today.

Ready to run your own test?

Start your first RedVeil pentest in minutes.