# Deployment Guide

This guide covers deploying Fission Python functions to Kubernetes, including configuration tuning, troubleshooting, and best practices.

## Table of Contents

1. [Prerequisites](#prerequisites)
2. [Quick Start](#quick-start)
3. [Deployment Configuration](#deployment-configuration)
4. [Executors](#executors)
5. [Resource Tuning](#resource-tuning)
6. [Environments](#environments)
7. [Secrets Management](#secrets-management)
8. [Rolling Updates](#rolling-updates)
9. [Monitoring & Logging](#monitoring--logging)
10. [Troubleshooting](#troubleshooting)

## Prerequisites

- Kubernetes cluster (v1.19+)
- Fission installed (`kubectl apply -f https://github.com/fission/fission/releases/latest/download/fission-all.yaml`)
- `fission` CLI installed and configured
- `kubectl` configured to access the cluster
- Docker registry access (for custom images if needed)

## Quick Start

Assuming you have a project set up:

```bash
# 1. Build the package (creates specs/ directory)
cd /path/to/project
./src/build.sh

# 2. Verify deployment configuration
fission spec verify --file=.fission/deployment.json

# 3. Deploy to Fission
fission deploy

# 4. Test deployed function
curl http://$FISSION_ROUTER/api/items
```

**That's it!** Fission will:
- Build package.zip from src/
- Create the environment (if it does not exist)
- Create the package
- Create functions from docstring metadata
- Set up HTTP triggers

## Deployment Configuration

### deployment.json vs fission.yaml

This template uses `deployment.json`, **not** `fission.yaml` or `fission.json`. The Fission Python builder extracts function metadata from Python docstrings directly.

### Key Sections

#### environments

Define the build environment:

```json
{
  "environments": {
    "myproject-py": {
      "image": "ghcr.io/fission/python-env",
      "builder": "ghcr.io/fission/python-builder",
      "mincpu": 50,
      "maxcpu": 100,
      "minmemory": 50,
      "maxmemory": 500,
      "poolsize": 1
    }
  }
}
```

- `image` - Runtime image (Python + libraries)
- `builder` - Builder image (compiles dependencies)
- Resource limits are in millicores (50 = 0.05 CPU) and MB

#### packages

Define how to build your code:

```json
{
  "packages": {
    "myproject": {
      "buildcmd": "./build.sh",
      "sourcearchive": "package.zip",
      "env": "myproject-py"
    }
  }
}
```

- `buildcmd` - Build script run inside the builder container
- `sourcearchive` - Generated by the builder from `sourcepath`
- `env` - Links to the environment definition

#### function_common

Default configuration for all functions:

```json
{
  "function_common": {
    "pkg": "myproject",
    "secrets": ["fission-myproject-env"],
    "configmaps": ["fission-myproject-config"],
    "executor": { ... },
    "mincpu": 50,
    "maxcpu": 100,
    "minmemory": 50,
    "maxmemory": 500
  }
}
```

- `pkg` - Package name to use
- `secrets` / `configmaps` - K8s resources to mount into functions
- `executor` - Execution strategy (poolmgr or newdeploy)

#### secrets / configmaps

**Placeholder definitions only**. These inform Fission what secret names to expect, but the actual values go in real K8s secrets:

```json
{
  "secrets": {
    "fission-myproject-env": {
      "literals": [
        "PG_HOST=localhost",
        "PG_PORT=5432"
      ]
    }
  }
}
```

Create the actual secret:

```bash
kubectl create secret generic fission-myproject-env \
  --from-literal=PG_HOST=prod-db.example.com \
  --from-literal=PG_PORT=5432 \
  --from-literal=PG_USER=myuser \
  --from-literal=PG_PASS=mypassword
```

## Executors

Fission supports two executor types:

### poolmgr (default)

Good for:
- High-concurrency HTTP functions
- Functions that should scale to zero
- Stateless request/response patterns

Configuration:

```json
"executor": {
  "select": "poolmgr",
  "poolmgr": {
    "concurrency": 1,
    "requestsperpod": 1,
    "onceonly": false
  }
}
```

- `concurrency` - How many concurrent requests each pod handles (usually 1 for Python due to the GIL)
- `poolsize` from the environment controls the number of pods in the pool

### newdeploy

Good for:
- Dedicated function instances
- Long-running or background jobs
- Functions needing a stable network identity

Configuration:

```json
"executor": {
  "select": "newdeploy",
  "newdeploy": {
    "minscale": 1,
    "maxscale": 5,
    "targetcpu": 80
  }
}
```

- `minscale` - Keep at least N pods running (0 = scale to zero)
- `maxscale` - Maximum pods for auto-scaling
- `targetcpu` - CPU utilization threshold (percent) that triggers scale-up

## Resource Tuning

Resources are defined in millicores (m) and MB:

- `mincpu` / `maxcpu`: 1000 = 1 CPU core
- `minmemory` / `maxmemory`: in MB

**Example settings**:

| Function Type | mincpu | maxcpu | minmemory | maxmemory |
|--------------|--------|--------|-----------|-----------|
| Simple API | 50 | 100 | 128 | 256 |
| DB-intensive | 200 | 500 | 256 | 512 |
| ML inference | 1000 | 2000 | 1024 | 2048 |
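
The table's integers map onto Kubernetes resource quantities. As a rough illustration of that mapping (an assumption about how Fission renders these values, not taken from the Fission source; `to_k8s_resources` is a hypothetical helper):

```python
def to_k8s_resources(mincpu, maxcpu, minmemory, maxmemory):
    """Render Fission-style integers as Kubernetes resource strings.

    CPU values are millicores ("m"); memory values are megabytes,
    rendered here as binary megabytes ("Mi") - an assumption.
    """
    return {
        "requests": {"cpu": f"{mincpu}m", "memory": f"{minmemory}Mi"},
        "limits": {"cpu": f"{maxcpu}m", "memory": f"{maxmemory}Mi"},
    }

# Example: the "Simple API" row from the table above
print(to_k8s_resources(50, 100, 128, 256))
```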

**Tips**:
- Start conservatively, monitor, then adjust
- Function pods are killed if they exceed `maxmemory`
- CPU requests guide the Kubernetes scheduler; CPU limits are enforced at runtime via cgroup throttling
- Use `minmemory` >= 128 to avoid OOM kills

### Checking Current Usage

```bash
# Get function pods
kubectl get pods -n fission

# Describe pod for resource usage
kubectl describe pod <pod-name> -n fission

# See metrics (if metrics-server installed)
kubectl top pod <pod-name> -n fission
```

## Environments

You can have multiple deployment environments (dev, staging, prod):

### Using deployment.json variants

- `deployment.json` - Production (default)
- `dev-deployment.json` - Development (used with `fission deploy --dev`)

Example `dev-deployment.json`:

```json
{
  "namespace": "fission-dev",
  "function_common": {
    "secrets": ["fission-myproject-dev-env"],
    "configmaps": ["fission-myproject-dev-config"]
  }
}
```

### Switching Environments

```bash
# Deploy to dev
fission deploy --dev

# Deploy to prod (default)
fission deploy

# Specify namespace
fission deploy --namespace fission-staging
```

## Secrets Management

### Creating Secrets

```bash
# Basic secret from literals
kubectl create secret generic fission-myproject-env \
  --from-literal=PG_HOST=localhost \
  --from-literal=PG_PORT=5432

# From file
kubectl create secret generic fission-myproject-env \
  --from-file=secrets.properties

# In a specific namespace
kubectl create secret generic fission-myproject-env \
  --namespace fission-dev \
  --from-literal=PG_HOST=dev-db.example.com
```

### Encrypted Secrets (Vault)

To encrypt sensitive values:

```python
# On your local machine (with PyNaCl installed)
from vault import encrypt_vault

key = "your-32-byte-hex-key-here..."  # 64 hex chars
encrypted = encrypt_vault("super-secret-password", key)
print(encrypted)  # vault:v1:base64...
```

Store the encrypted string in a K8s secret:

```bash
kubectl create secret generic fission-myproject-env \
  --from-literal=PG_PASS='vault:v1:base64...'
```
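
Values in this format can be sanity-checked before use. A minimal sketch, assuming the format is exactly `vault:v1:<base64 ciphertext>` (the prefix comes from the example output above; the helper itself is illustrative, not part of the template):

```python
import base64

def parse_vault_value(value: str):
    """Return the decoded ciphertext if value is a vault-encrypted
    string, or None if it is a plain value."""
    prefix = "vault:v1:"
    if not value.startswith(prefix):
        return None
    return base64.b64decode(value[len(prefix):], validate=True)

# Plain values pass through as None; encrypted ones decode to raw bytes
print(parse_vault_value("plain-password"))      # → None
cipher = base64.b64encode(b"ciphertext").decode()
print(parse_vault_value(f"vault:v1:{cipher}"))  # → b'ciphertext'
```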

Set `CRYPTO_KEY` in `helpers.py` to the hex key:

```python
CRYPTO_KEY = "e24ad6ceed96115520f6e6dc8a0da506ae9a706823d54f30a5b75447ecf477b6"
```

**Important**: Rotate keys periodically. When changing the key, re-encrypt all secrets.

### Updating Secrets

```bash
# Edit secret
kubectl edit secret fission-myproject-env

# Update a single key (recreate and apply)
kubectl create secret generic fission-myproject-env \
  --from-literal=PG_PASS='new-password' \
  --dry-run=client -o yaml | kubectl apply -f -

# Roll function to pick up new secret
fission function update --name my-function
```

## Rolling Updates

### Deploy Changes

```bash
# Build and deploy
./src/build.sh
fission deploy

# Or update a single function
fission function update --name my-function
```

### Zero-Downtime Deployments

Fission handles rolling updates automatically:
1. New package is built
2. New function pods are created with new code
3. Old pods continue serving traffic until new pods are ready
4. Old pods are terminated

**No downtime** by default for HTTP triggers.

### Canary Deployments

For canary deployments:
1. Deploy the new version under a different function name: `my-function-v2`
2. Route some traffic using ingress annotations or a service mesh
3. Gradually shift traffic
4. Delete the old function

## Monitoring & Logging

### Viewing Logs

```bash
# All function logs in namespace
kubectl logs -n fission -l fission-function=true --tail=100

# Specific function
kubectl logs -n fission -l fission-function/name=my-function --tail=100

# Follow logs
kubectl logs -n fission -l fission-function/name=my-function -f

# Container logs (if multiple containers)
kubectl logs -n fission -l fission-function/name=my-function -c builder
```

### Structured Logging

Use `logger` from `helpers.py` (already configured):

```python
logger.info("Processing request", extra={"user_id": user_id})
logger.error("Database error", exc_info=True, extra={"query": sql})
```

Logs are collected by the container runtime and available via `kubectl logs`.

### Metrics

Fission exposes Prometheus metrics:

```bash
# Get metrics endpoint
kubectl port-forward -n fission svc/fission-prometheus-server 9090:9090

# Or query pod metrics via kubectl
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/fission/pods" | jq .
```

Metrics include:
- Request rate
- Error rate
- Response latency
- Pod counts
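
With the port-forward above running, these metrics can be pulled through the Prometheus HTTP API. A sketch building the query URL with the standard library (the metric name `fission_function_calls_total` is a hypothetical example, not confirmed against Fission's metric list):

```python
from urllib.parse import urlencode

def prometheus_query_url(base: str, promql: str) -> str:
    """Build an instant-query URL for the Prometheus HTTP API."""
    return f"{base}/api/v1/query?" + urlencode({"query": promql})

url = prometheus_query_url(
    "http://localhost:9090",
    'rate(fission_function_calls_total{name="my-function"}[5m])',
)
print(url)
# Fetch with urllib.request.urlopen(url) and parse the JSON "data" field.
```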

## Troubleshooting

### Deployment Fails

**Error**: `Error building package`

Check:
- `build.sh` is executable: `chmod +x src/build.sh`
- All dependencies in `requirements.txt` are valid
- Python syntax is correct: `python -m py_compile src/*.py`

**Error**: `Function not found after deploy`

Check:
- The fission docstring block is properly formatted (a fenced code block tagged `fission`)
- No YAML/JSON syntax errors in the docstring
- The function file is in the `src/` directory

### Function Not Responding

**Check pod status**:
```bash
kubectl get pods -n fission -l fission-function/name=my-function
```

**Pod stuck in Pending** - Insufficient resources or image pull error

**Pod stuck in ContainerCreating** - Volume mount issue or image pull

**Pod CrashLoopBackOff** - Application error. Check logs:
```bash
kubectl logs -n fission <pod-name> --previous
```

### Configuration Not Loading

**Secrets not available**:
```bash
# Check secret exists in correct namespace
kubectl get secret fission-myproject-env -n fission

# Verify secret is mounted
kubectl exec -it <pod-name> -n fission -- ls /secrets/default/
```

**ConfigMaps not available**:
```bash
kubectl get configmap fission-myproject-config -n fission
```

**Secret parameters not loading**:
- Ensure `SECRET_NAME` in `helpers.py` matches the created secret name
- Path format: `/secrets/{namespace}/{secret-name}/{key}`
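
Using that path format, a function can read a mounted secret value straight off the filesystem. A minimal sketch (the mount layout follows the path format documented here; `read_secret` is an illustrative helper, not part of the template):

```python
from pathlib import Path

def read_secret(key: str,
                name: str = "fission-myproject-env",
                namespace: str = "default",
                root: str = "/secrets") -> str:
    """Read one key of a Kubernetes secret mounted at
    {root}/{namespace}/{secret-name}/{key}."""
    return (Path(root) / namespace / name / key).read_text().strip()
```

For example, `read_secret("PG_HOST")` would return the value mounted at `/secrets/default/fission-myproject-env/PG_HOST`.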

### Slow Performance

1. **Increase resources**: Raise `maxmemory` and `maxcpu`
2. **Connection pooling**: Use a connection pooler like PgBouncer for heavy DB load
3. **Database queries**: Check slow queries, add indexes
4. **Cold starts**: Set `minscale: 1` with the newdeploy executor to keep pods warm

### Database Connection Errors

**Error**: `could not connect to server: Connection refused`

- Verify the database is reachable from the cluster
- Check security groups/network policies
- Test connectivity from the pod:
```bash
kubectl exec -it <pod-name> -n fission -- nc -zv $PG_HOST $PG_PORT
```

**Error**: `password authentication failed`

- Verify credentials in the secret
- Check PG_USER format (with `plaintext:` prefix for vault)

## Advanced Topics

### Custom Runtime Image

If you need system packages:

```dockerfile
FROM ghcr.io/fission/python-env:latest
RUN apk add --no-cache gcc libffi-dev
```

Build and push:
```bash
docker build -t myregistry/python-custom:latest .
docker push myregistry/python-custom:latest
```

Update `deployment.json`:
```json
"environments": {
  "myproject-py": {
    "image": "myregistry/python-custom:latest",
    ...
  }
}
```

### Environment Variables from ConfigMap

```json
"configmaps": {
  "fission-myproject-config": {
    "literals": [
      "LOG_LEVEL=DEBUG",
      "FEATURE_FLAG_X=true"
    ]
  }
}
```

Access in code:
```python
import os
log_level = os.getenv("LOG_LEVEL", "INFO")
```

### Lifecycle Hooks

Use `function_pre_remove` and `function_post_remove` in deployment hooks:

```json
"hooks": {
  "function_pre_remove": [
    {
      "type": "http",
      "url": "http://cleanup-service/cleanup",
      "timeout": 30000
    }
  ]
}
```

## Common Commands Reference

```bash
# List functions
fission function list

# Test function manually
fission function test --name my-function

# Update single function
fission function update --name my-function

# Delete function
fission function delete --name my-function

# View function pods
kubectl get pods -n fission -l fission-function/name=my-function

# View logs
kubectl logs -n fission -l fission-function/name=my-function -f

# Exec into pod
kubectl exec -it <pod-name> -n fission -- /bin/sh

# Describe function
fission function describe --name my-function

# Get function YAML
fission function get --name my-function -o yaml

# Check Fission version
fission version

# Check Fission status
kubectl get pods -n fission
```

## Further Reading

- [Fission Deployment Documentation](https://fission.io/docs/usage/deploy/)
- [Fission Executors](https://fission.io/docs/architecture/executor/)
- [Kubernetes Resource Management](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/)
- [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/)
# Database Migrations

This guide covers managing database schema changes in Fission Python projects.

## Table of Contents

1. [Overview](#overview)
2. [Migration Files](#migration-files)
3. [Applying Migrations](#applying-migrations)
4. [Writing Migrations](#writing-migrations)
5. [Best Practices](#best-practices)
6. [Rollback Strategies](#rollback-strategies)
7. [Automation](#automation)

## Overview

Database schema changes should be managed through versioned migration scripts, not manual `CREATE TABLE` statements.

This template uses **plain SQL migration files** (`.sql`), which provide:
- Version control of schema changes
- Repeatable application to different environments
- Clear upgrade/downgrade paths
- An audit trail of schema evolution

## Migration Files

Place SQL migration scripts in the `migrates/` directory:

```
migrates/
├── 001_initial_schema.sql
├── 002_add_user_email.sql
├── 003_create_indexes.sql
└── ...
```

**Naming convention**:
- Prefix with a sequential number (zero-padded for sorting)
- Descriptive name after the underscore
- `.sql` extension
- Numbers should be unique and monotonically increasing
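
This convention is easy to enforce mechanically. A sketch that checks a list of filenames against it (illustrative, not part of the template):

```python
import re

MIGRATION_RE = re.compile(r"^(\d{3})_[a-z0-9_]+\.sql$")

def check_migrations(filenames):
    """Validate the naming convention: zero-padded prefix, descriptive
    name, .sql extension, unique and increasing version numbers."""
    versions = []
    for f in sorted(filenames):
        m = MIGRATION_RE.match(f)
        if not m:
            raise ValueError(f"bad migration name: {f}")
        versions.append(int(m.group(1)))
    if len(set(versions)) != len(versions):
        raise ValueError("duplicate version numbers")
    return versions

print(check_migrations([
    "001_initial_schema.sql",
    "002_add_user_email.sql",
    "003_create_indexes.sql",
]))  # → [1, 2, 3]
```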

### Initial Schema Example

```sql
-- migrates/001_create_items_table.sql
-- Create items table
CREATE TABLE IF NOT EXISTS items (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR(255) NOT NULL,
    description TEXT,
    status VARCHAR(50) DEFAULT 'active',
    metadata JSONB,
    created TIMESTAMPTZ DEFAULT NOW(),
    modified TIMESTAMPTZ DEFAULT NOW()
);

-- Add indexes
CREATE INDEX idx_items_status ON items(status);
CREATE INDEX idx_items_created ON items(created);

-- Add comments
COMMENT ON TABLE items IS 'Stores item records';
COMMENT ON COLUMN items.status IS 'Item status: active, inactive, pending';
```

## Applying Migrations

### Manually

```bash
# Connect to database
psql -h localhost -U postgres -d mydb

# Run migration file (inside psql)
\i migrates/001_create_items_table.sql

# Run all migrations in order (bash script)
for file in $(ls migrates/*.sql | sort); do
  echo "Applying $file..."
  psql -h localhost -U postgres -d mydb -f "$file"
done
```

### Automatically from Python

Create a simple migration runner:

```python
# src/migrate.py (not part of function, standalone script)
import os
import psycopg2
from helpers import init_db_connection

def run_migrations():
    conn = init_db_connection()
    cursor = conn.cursor()

    # Create migrations tracking table if not exists
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS schema_migrations (
            version INTEGER PRIMARY KEY,
            name VARCHAR(255) NOT NULL,
            applied_at TIMESTAMPTZ DEFAULT NOW()
        )
    """)

    # Get already-applied migrations
    cursor.execute("SELECT version FROM schema_migrations")
    applied = {row[0] for row in cursor.fetchall()}

    # Find migration files
    migrates_dir = os.path.join(os.path.dirname(__file__), "..", "migrates")
    files = sorted([
        f for f in os.listdir(migrates_dir)
        if f.endswith(".sql")
    ])

    # Apply pending migrations
    for filename in files:
        # Extract version number
        version = int(filename.split("_")[0])
        if version in applied:
            print(f"Skipping {filename} (already applied)")
            continue

        path = os.path.join(migrates_dir, filename)
        print(f"Applying {filename}...")
        with open(path, 'r') as f:
            sql = f.read()

        try:
            cursor.execute(sql)
            cursor.execute(
                "INSERT INTO schema_migrations (version, name) VALUES (%s, %s)",
                (version, filename)
            )
            conn.commit()
            print(f"  ✓ Applied {filename}")
        except Exception as e:
            conn.rollback()
            print(f"  ✗ Failed: {e}")
            raise

    conn.close()
    print("All migrations applied")

if __name__ == "__main__":
    run_migrations()
```

Run:
```bash
python src/migrate.py
```

### Using Migration Tools

For more advanced features (rollbacks, branching), consider:

- **[Alembic](https://alembic.sqlalchemy.org/)** - Database migration tool for SQLAlchemy (if using an ORM)
- **[pg migrator](https://github.com/heroku/pg-migrator)** - Heroku's migration tool
- **[goose](https://github.com/pressly/goose)** - Multi-database migration tool (can use from Python)
- **[yoyo-migrations](https://github.com/gugulet-h/yoyo-migrations)** - Python-based migrations

## Writing Migrations

### Principles

1. **Idempotent** - Script should succeed if run multiple times
2. **Additive first** - Add columns/tables before removing/dropping
3. **Backward compatible** - New schema should work with old code
4. **Atomic** - One logical change per migration file
5. **Test locally** - Apply to a test database before production

### Common Operations

#### Create Table

```sql
CREATE TABLE IF NOT EXISTS orders (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id UUID NOT NULL,
    total DECIMAL(10,2) NOT NULL,
    status VARCHAR(50) NOT NULL DEFAULT 'pending',
    created_at TIMESTAMPTZ DEFAULT NOW(),
    updated_at TIMESTAMPTZ DEFAULT NOW()
);

-- Add foreign key
ALTER TABLE orders
    ADD CONSTRAINT fk_orders_user
    FOREIGN KEY (user_id)
    REFERENCES users(id)
    ON DELETE CASCADE;

-- Index for performance
CREATE INDEX idx_orders_user_id ON orders(user_id);
CREATE INDEX idx_orders_created_at ON orders(created_at);
```

#### Add Column

```sql
-- Add nullable column (safe, backward compatible)
ALTER TABLE orders
    ADD COLUMN shipping_address JSONB;

-- Add column with default (be careful with large tables!)
-- Before PostgreSQL 11 this rewrites the entire table - use cautiously
ALTER TABLE orders
    ADD COLUMN tax_amount DECIMAL(10,2) DEFAULT 0.00;
```

#### Rename Column

```sql
-- PostgreSQL supports RENAME COLUMN
ALTER TABLE orders
    RENAME COLUMN total TO order_total;
```

#### Modify Column Type

```sql
-- Change VARCHAR length
ALTER TABLE users
    ALTER COLUMN email TYPE VARCHAR(320);

-- Convert to different type (use USING clause)
ALTER TABLE orders
    ALTER COLUMN status TYPE VARCHAR(100)
    USING status::VARCHAR(100);
```

#### Create Index

```sql
-- Simple index
CREATE INDEX idx_users_email ON users(email);

-- Unique index
CREATE UNIQUE INDEX idx_users_email_unique ON users(email);

-- Partial index (only active users)
CREATE INDEX idx_users_active ON users(id)
    WHERE status = 'active';

-- Multi-column index
CREATE INDEX idx_orders_user_status ON orders(user_id, status);
```

#### Drop Column/Table

```sql
-- First, ensure no one is using it
-- Consider deprecating first and dropping in a subsequent migration

-- Drop column
ALTER TABLE orders
    DROP COLUMN IF EXISTS old_column;

-- Drop table (dangerous!)
DROP TABLE IF EXISTS old_logs;
```

### Data Migrations

Sometimes you need to transform data:

```sql
-- Backfill new column from existing data
UPDATE orders
SET shipping_address = jsonb_build_object(
    'street', address_street,
    'city', address_city,
    'zip', address_zip
)
WHERE shipping_address IS NULL;

-- Migrate enum values
UPDATE products
SET status = 'active' WHERE status = 'ACTIVE';

-- Clean up duplicates
WITH duplicates AS (
    SELECT id, ROW_NUMBER() OVER (PARTITION BY email ORDER BY created_at) AS rn
    FROM users
)
DELETE FROM users WHERE id IN (SELECT id FROM duplicates WHERE rn > 1);
```

### Transactional Migrations

Wrap critical migrations in transactions:

```sql
BEGIN;

-- Multiple related operations
ALTER TABLE orders ADD COLUMN shipping_id UUID;
UPDATE orders SET shipping_id = uuid_generate_v4() WHERE shipping_id IS NULL;
ALTER TABLE orders ALTER COLUMN shipping_id SET NOT NULL;

COMMIT;
```

**Note**: Unlike some databases, most DDL in PostgreSQL is transactional, so the `BEGIN`/`COMMIT` above works; exceptions such as `CREATE INDEX CONCURRENTLY` and `CREATE DATABASE` cannot run inside a transaction block. For complex multi-step changes, consider using advisory locks or deployment coordination.
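
The all-or-nothing behaviour is the important property. A self-contained illustration from a migration runner's perspective, using `sqlite3` from the standard library (SQLite stands in for PostgreSQL purely so the example runs anywhere; the explicit `BEGIN` is needed because Python's `sqlite3` module does not open transactions for DDL on its own):

```python
import sqlite3

def apply_migration(conn, statements):
    """Run several related statements as one transaction:
    either all apply, or none do."""
    cur = conn.cursor()
    cur.execute("BEGIN")
    try:
        for stmt in statements:
            cur.execute(stmt)
        cur.execute("COMMIT")
    except Exception:
        cur.execute("ROLLBACK")
        raise

# isolation_level=None puts the connection in manual-transaction mode
conn = sqlite3.connect(":memory:", isolation_level=None)
apply_migration(conn, [
    "CREATE TABLE orders (id INTEGER PRIMARY KEY)",
    "ALTER TABLE orders ADD COLUMN shipping_id TEXT",
])

# A failing batch leaves no partial changes behind:
# the first ALTER below is rolled back along with the bad statement
try:
    apply_migration(conn, [
        "ALTER TABLE orders ADD COLUMN note TEXT",
        "THIS IS NOT SQL",
    ])
except sqlite3.OperationalError:
    pass
```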
|
||||
|
||||
## Best Practices
|
||||
|
||||
### ✅ Do's
|
||||
|
||||
1. **Test migrations on copy of production database** before applying to prod
|
||||
2. **Keep migrations small** - One logical change per file
|
||||
3. **Write data migrations as separate files** from schema migrations
|
||||
4. **Use `IF NOT EXISTS` and `IF EXISTS`** to make migrations idempotent
|
||||
5. **Never drop columns/tables in the same migration you add them** - Separate to allow rollback
|
||||
6. **Document why** - Add comments explaining the purpose
|
||||
7. **Consider indexes** - Add indexes for frequently queried columns in same migration as table creation
|
||||
8. **Use UUIDs** for primary keys (`gen_random_uuid()` in PostgreSQL 13+)
|
||||
9. **Add `created_at` and `updated_at` timestamps** to all tables
|
||||
10. **Version numbers must be unique and sequential**
|
||||
|
||||
### ❌ Don'ts
|
||||
|
||||
1. **Don't modify already-applied migrations** - They're part of history
|
||||
2. **Don't skip version numbers** - Creates gaps but not critical
|
||||
3. **Don't use destructive operations without backup** - `DROP COLUMN`, `DROP TABLE`
|
||||
4. **Don't run long-running migrations during peak hours** - Use low-traffic windows
|
||||
5. **Don't add NOT NULL without default** on non-empty tables - Will fail due to existing NULL rows
|
||||
6. **Don't assume order of execution** - Always number sequentially
|
||||
7. **Don't mix unrelated changes** in one migration file
|
||||
|
||||
### Zero-Downtime Migrations
|
||||
|
||||
#### Adding Column
|
||||
|
||||
```sql
|
||||
-- Step 1: Add column as nullable or with default (fast)
|
||||
ALTER TABLE orders ADD COLUMN status VARCHAR(50);
|
||||
|
||||
-- Step 2: Deploy code that writes to new column
|
||||
-- Your application updates to populate status
|
||||
|
||||
-- Step 3: Backfill existing rows (if needed)
|
||||
UPDATE orders SET status = 'completed' WHERE status IS NULL AND shipped_at IS NOT NULL;
|
||||
|
||||
-- Step 4: Make column NOT NULL (if needed) - only after all rows have values
|
||||
ALTER TABLE orders ALTER COLUMN status SET NOT NULL;
|
||||
```
|
||||
|
||||
#### Renaming Column
|
||||
|
||||
```sql
|
||||
-- Step 1: Add new column
|
||||
ALTER TABLE orders ADD COLUMN order_status VARCHAR(50);
|
||||
|
||||
-- Step 2: Deploy code writing to both old and new columns (dual-write)
|
||||
|
||||
-- Step 3: Backfill data
|
||||
UPDATE orders SET order_status = status;
|
||||
|
||||
-- Step 4: Deploy code reading from new column, stop writing to old
|
||||
|
||||
-- Step 5: Drop old column (in separate migration)
|
||||
ALTER TABLE orders DROP COLUMN status;
|
||||
```
|
||||
|
||||
## Rollback Strategies
|
||||
|
||||
### Manual Rollback
|
||||
|
||||
For each migration, you may want to write a corresponding "down" migration:
|
||||
|
||||
```sql
|
||||
-- 002_add_user_email.sql (UP)
|
||||
ALTER TABLE users ADD COLUMN email VARCHAR(320);
|
||||
|
||||
-- 002_add_user_email_rollback.sql (DOWN)
|
||||
ALTER TABLE users DROP COLUMN IF EXISTS email;
|
||||
```
|
||||
|
||||
Store rollback scripts alongside migrations or in separate `rollbacks/` directory.
|
||||
|
||||
### Point-in-Time Recovery
|
||||
|
||||
**Best strategy**: Restore database from backup to point before bad migration, then re-apply good migrations.
|
||||
|
||||
```bash
|
||||
# Restore from PITR backup (if using WAL archiving)
|
||||
pg_restore -h localhost -U postgres -d mydb --point-in-time="2025-03-18 10:30:00"
|
||||
|
||||
# Re-run migrations up to good version
|
||||
python src/migrate.py # But this applies all, so need selective
|
||||
```
|
||||
|
||||
### Selective Rollback Script
|
||||
|
||||
```python
|
||||
# rollback.py
|
||||
import sys
|
||||
from helpers import init_db_connection
|
||||
|
||||
def rollback(to_version: int):
|
||||
conn = init_db_connection()
|
||||
cursor = conn.cursor()
|
||||
|
||||
# Find migrations after target version
|
||||
cursor.execute("""
|
||||
SELECT version, name
|
||||
FROM schema_migrations
|
||||
WHERE version > %s
|
||||
ORDER BY version DESC
|
||||
""", (to_version,))
|
||||
|
||||
migrations = cursor.fetchall()
|
||||
|
||||
for version, name in migrations:
|
||||
rollback_file = f"rollbacks/{version:03d}_{name.split('_', 1)[1]}.sql"
|
||||
print(f"Rolling back {name} using {rollback_file}...")
|
||||
with open(rollback_file, 'r') as f:
|
||||
sql = f.read()
|
||||
cursor.execute(sql)
|
||||
cursor.execute("DELETE FROM schema_migrations WHERE version = %s", (version,))
|
||||
conn.commit()
|
||||
print(f" Rolled back {name}")
|
||||
|
||||
conn.close()
|
||||
print(f"Rolled back to version {to_version}")
|
||||
|
||||
if __name__ == "__main__":
|
||||
target = int(sys.argv[1])
|
||||
rollback(target)
|
||||
```
|
||||
|
||||
## Automation

### CI/CD Integration

In your deployment pipeline:

```bash
# Before deploying new code
python src/migrate.py

# If migrations fail, abort the deployment
if [ $? -ne 0 ]; then
    echo "Migrations failed, aborting deployment"
    exit 1
fi

# Deploy new code
fission deploy
```
### Pre-deployment Hooks

Use Fission hooks to run migrations automatically:

```json
{
  "hooks": {
    "function_pre_deploy": [
      {
        "type": "http",
        "url": "http://migration-service/migrate",
        "timeout": 300000
      }
    ]
  }
}
```

Or, more simply, run migrations as part of `build.sh`:

```bash
#!/bin/sh
# src/build.sh

# Install dependencies
pip3 install -r requirements.txt -t .

# Optionally run migrations against a test DB
# (often migrations are run separately from the build)
# python ../migrate.py

# Package up
cp -r . "${DEPLOY_PKG}"
```
### Database Change Management Tools

Consider specialized tools for larger teams:

- **[Flyway](https://flywaydb.org/)** - Java-based, supports repeatable migrations
- **[Liquibase](https://www.liquibase.org/)** - XML/YAML/JSON migrations
- **[Prisma Migrate](https://www.prisma.io/docs/concepts/components/prisma-migrate)** - If using the Prisma ORM
- **[Alembic](https://alembic.sqlalchemy.org/)** - Python-native, built on SQLAlchemy
## Example Workflow

1. **Create migration**:
   ```bash
   touch migrates/004_add_orders_table.sql
   ```

2. **Write SQL**:
   ```sql
   CREATE TABLE orders (
       id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
       user_id UUID NOT NULL REFERENCES users(id),
       total DECIMAL(10,2) NOT NULL,
       status VARCHAR(50) DEFAULT 'pending',
       created_at TIMESTAMPTZ DEFAULT NOW()
   );

   CREATE INDEX idx_orders_user_id ON orders(user_id);
   ```

3. **Test locally**:
   ```bash
   createdb test_migration
   psql test_migration -f migrates/004_add_orders_table.sql
   ```

4. **Commit the migration file**:
   ```bash
   git add migrates/004_add_orders_table.sql
   git commit -m "Add orders table"
   ```

5. **Apply to staging**:
   ```bash
   # Update dev-deployment.json if new env vars are needed
   fission deploy --dev
   python src/migrate.py
   ```

6. **Apply to production**:
   ```bash
   # During a maintenance window or blue-green deployment
   fission deploy
   python src/migrate.py
   ```
## Troubleshooting

### Migration Fails

Check the error message:
- **syntax error**: Validate the SQL manually with `psql -c "SQL"`
- **duplicate column**: Migration already applied; check `schema_migrations`
- **permission denied**: The DB user lacks ALTER/CREATE privileges
- **lock timeout**: Another migration is running; wait or kill the blocking process

### Migration Already Applied But Failed

If a migration was recorded in `schema_migrations` but failed midway:

1. Manually revert the partial changes or fix the broken state
2. Delete the row from `schema_migrations`: `DELETE FROM schema_migrations WHERE version = 4;`
3. Re-run the migration

### Long-Running Migration

Large table alterations can hold locks and cause downtime:

- Run during a low-traffic period
- Use `CONCURRENTLY` for index creation (PostgreSQL):
  ```sql
  CREATE INDEX CONCURRENTLY idx_orders_created ON orders(created_at);
  ```
- To add a NOT NULL constraint, populate values first with UPDATE, then add the constraint
- Consider `pg_repack` for online table reorganization
## Summary

- Store migrations in the `migrates/` directory, numbered sequentially
- Use `init_db_connection()` to run migrations programmatically
- Test migrations on a staging database before production
- Keep migrations backward compatible when possible
- Have a rollback plan (backups, down scripts)
- Integrate migrations into the CI/CD pipeline
438
fission-python/template/docs/SECRETS.md
Normal file
@@ -0,0 +1,438 @@
# Secrets and Configuration Management

This guide covers best practices for managing secrets and configuration in Fission Python functions.

## Table of Contents

1. [Overview](#overview)
2. [Kubernetes Secrets vs ConfigMaps](#kubernetes-secrets-vs-configmaps)
3. [Secrets in Fission](#secrets-in-fission)
4. [Vault Encryption](#vault-encryption)
5. [Secret Rotation](#secret-rotation)
6. [Configuration Precedence](#configuration-precedence)
7. [Best Practices](#best-practices)

## Overview

Sensitive data (passwords, API keys) should **never** be:
- Committed to Git
- Hardcoded in source code
- Passed as plaintext in deployment files

Instead, use:
- **Kubernetes Secrets** - For sensitive values
- **Kubernetes ConfigMaps** - For non-sensitive configuration
- **Vault encryption** - For encrypting secrets at rest in K8s
## Kubernetes Secrets vs ConfigMaps

| Feature | Secrets | ConfigMaps |
|---------|---------|------------|
| Purpose | Sensitive data (passwords, tokens, keys) | Non-sensitive config (endpoints, feature flags) |
| Storage | Base64 encoded (not encrypted by default) | Plain text |
| Mount as | Files in `/secrets/` | Files in `/configs/` |
| Access in code | `get_secret(key)` | `get_config(key)` |
| Max size | 1MB total | 1MB total |
| Can be encrypted | Yes, with K8s encryption at rest | Yes |

**Rule of thumb**:
- Use Secrets for: database passwords, API tokens, encryption keys
- Use ConfigMaps for: service URLs, feature flags, log levels, non-sensitive constants
## Secrets in Fission

### Defining Secret References in deployment.json

In `.fission/deployment.json`, declare the secret names your functions expect:

```json
{
  "function_common": {
    "secrets": ["fission-myproject-env"],
    "configmaps": ["fission-myproject-config"]
  },
  "secrets": {
    "fission-myproject-env": {
      "literals": [
        "PG_HOST=localhost",
        "PG_PORT=5432"
      ]
    }
  }
}
```

**Important**: The `literals` array here is **only documentation**. The actual secret values must be created separately in Kubernetes.
### Creating Actual Kubernetes Secrets

```bash
# Create a secret with multiple keys
kubectl create secret generic fission-myproject-env \
  --from-literal=PG_HOST=postgres.example.com \
  --from-literal=PG_PORT=5432 \
  --from-literal=PG_DB=mydb \
  --from-literal=PG_USER=myuser \
  --from-literal=PG_PASS='my-password'

# In a specific namespace (the Fission namespace)
kubectl create secret generic fission-myproject-env \
  --namespace fission \
  --from-literal=...

# From an environment file
kubectl create secret generic fission-myproject-env \
  --namespace fission \
  --from-env-file=.env
```
### How Secrets Are Mounted

Fission mounts secrets as files in the function pod:

```
/secrets/{namespace}/{secret-name}/{key}
```

Example path: `/secrets/default/fission-myproject-env/PG_HOST`

The `get_secret()` function in `helpers.py` reads from this path (simplified; the `default` is returned when the key file does not exist):

```python
def get_secret(key: str, default=None):
    namespace = get_current_namespace()
    path = f"/secrets/{namespace}/{SECRET_NAME}/{key}"
    try:
        with open(path, "r") as f:
            return f.read()
    except FileNotFoundError:
        return default
```

**Note**: `SECRET_NAME` must match the K8s secret name (`fission-myproject-env`).
### Reading Secrets in Code

```python
from helpers import get_secret

# With default fallback
db_host = get_secret("PG_HOST", "localhost")
db_port = int(get_secret("PG_PORT", "5432"))
db_user = get_secret("PG_USER")
db_pass = get_secret("PG_PASS")

# If the key is missing and no default is given, returns None
maybe_value = get_secret("OPTIONAL_KEY")
```

**Always provide a default** for non-critical configuration to avoid crashes if the secret is missing.
### ConfigMaps

Same pattern, different mount path: `/configs/{namespace}/{configmap-name}/{key}`

```python
from helpers import get_config

api_endpoint = get_config("API_ENDPOINT", "http://default.api")
feature_flag = get_config("FEATURE_X_ENABLED", "false")
```

Create the ConfigMap:

```bash
kubectl create configmap fission-myproject-config \
  --namespace fission \
  --from-literal=API_ENDPOINT=https://api.example.com \
  --from-literal=FEATURE_X_ENABLED=true
```
## Vault Encryption

To encrypt secrets before storing them in K8s:

### Generate Encryption Key

```bash
# Generate a 32-byte (64 hex character) random key
openssl rand -hex 32
# Example output: e24ad6ceed96115520f6e6dc8a0da506ae9a706823d54f30a5b75447ecf477b6
```

### Encrypt a Value

```python
# Encrypt locally
from vault import encrypt_vault

key = "e24ad6ceed96115520f6e6dc8a0da506ae9a706823d54f30a5b75447ecf477b6"
encrypted = encrypt_vault("my-secret-password", key)
print(encrypted)
# Output: vault:v1:base64-encrypted-data
```
### Store Encrypted Value

Create the K8s secret with the encrypted value:

```bash
kubectl create secret generic fission-myproject-env \
  --from-literal=PG_PASS='vault:v1:base64...'
```

### Configure Decryption in helpers.py

```python
CRYPTO_KEY = "e24ad6ceed96115520f6e6dc8a0da506ae9a706823d54f30a5b75447ecf477b6"
```
### Automatic Decryption

`get_secret()` and `get_config()` automatically:
1. Read the file content
2. Detect whether it starts with `vault:v1:` (using `is_valid_vault_format()`)
3. Decrypt using `CRYPTO_KEY` if encrypted
4. Return plaintext

**No code changes needed** - it "just works".
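The four steps above can be sketched as follows. This is a hypothetical illustration, not the template's actual helper: `read_and_maybe_decrypt` and the `decrypt` callable are stand-ins so the example is self-contained, and the demo "decryptor" just base64-decodes the payload.

```python
import base64

def is_valid_vault_format(value: str) -> bool:
    # Matches the documented format: "vault:v1:" followed by base64 data.
    return value.startswith("vault:v1:") and len(value) > len("vault:v1:")

def read_and_maybe_decrypt(raw: str, decrypt) -> str:
    # decrypt stands in for vault.decrypt_vault(value, CRYPTO_KEY)
    if is_valid_vault_format(raw):
        return decrypt(raw)
    return raw  # plaintext secrets pass through unchanged

# Demo with a fake "decryptor" that base64-decodes the payload:
fake_cipher = "vault:v1:" + base64.b64encode(b"s3cret").decode()
print(read_and_maybe_decrypt(
    fake_cipher,
    lambda v: base64.b64decode(v.split(":", 2)[2]).decode()))  # s3cret
print(read_and_maybe_decrypt("plaintext", lambda v: v))        # plaintext
```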
### Verification

```bash
# Check the stored (still encrypted) value
kubectl get secret fission-myproject-env -o jsonpath='{.data.PG_PASS}' | base64 -d
# Should show: vault:v1:...

# Exec into the pod and check decryption manually
kubectl exec -it <pod-name> -- python3 -c "from helpers import get_secret; print(get_secret('PG_PASS'))"
# Should print the decrypted value
```
## Secret Rotation

### Rotating a Secret

1. **Generate a new value** (new password, new API key)
2. **Encrypt it** (if using vault)
3. **Update the K8s secret**:
   ```bash
   kubectl create secret generic fission-myproject-env \
     --dry-run=client \
     --from-literal=PG_PASS='new-password' \
     -o yaml | kubectl apply -f -
   ```
4. **Update the external system** (database, API provider) with the new value
5. **Verify applications work** (check logs)
6. **Retire the old value** (during rotation, old and new credentials may need to coexist temporarily)
### Rotating Vault Encryption Key

**Warning**: Changing `CRYPTO_KEY` requires re-encrypting all secrets!

A naive rotation (swap the key and re-encrypt everything in one step) is risky: the moment the code only knows the new key, any secret still encrypted under the old key becomes unreadable.

**Recommended**: Keep two keys during the rotation:
```python
CRYPTO_KEYS = [
    "old-key-hex...",  # Keep for decrypting old secrets
    "new-key-hex..."   # Use for encrypting new/updated secrets
]
```

Then update `decrypt_vault()` to try each key until one succeeds. After all secrets have been migrated, remove the old key.
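A minimal sketch of the try-each-key idea. This is hypothetical: `decrypt_with_any` and the `decrypt` callable stand in for the real `decrypt_vault()`, and the toy "cipher" below exists only to make the example self-contained and runnable.

```python
class CryptoError(Exception):
    pass

def decrypt_with_any(vault_str, keys, decrypt):
    # Try each key in order; decrypt stands in for vault.decrypt_vault(value, key).
    last_error = None
    for key in keys:
        try:
            return decrypt(vault_str, key)
        except CryptoError as e:
            last_error = e
    raise CryptoError("no key could decrypt value") from last_error

# Toy stand-in cipher: decryption succeeds only with the matching key.
def toy_decrypt(value, key):
    prefix = f"enc[{key}]:"
    if not value.startswith(prefix):
        raise CryptoError("wrong key")
    return value[len(prefix):]

print(decrypt_with_any("enc[new]:hunter2", ["old", "new"], toy_decrypt))  # hunter2
```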
## Configuration Precedence

Fission supports multiple deployment configuration files:

1. **deployment.json** - Base configuration (committed to repo)
2. **dev-deployment.json** - Development overrides (usually not committed)
3. **local-deployment.json** - Local overrides (gitignored)

### Override Priority

When using `fission deploy --dev`, Fission loads:
- Base configuration from `deployment.json`
- Overlay from `dev-deployment.json`

Values in the overlay file replace or extend base values.
**Example**: Override the secret name for dev:

**deployment.json**:
```json
{
  "function_common": {
    "secrets": ["fission-myproject-env"]
  }
}
```

**dev-deployment.json**:
```json
{
  "function_common": {
    "secrets": ["fission-myproject-dev-env"]
  }
}
```

Now `fission deploy --dev` uses the dev secret, while `fission deploy` uses the production secret.
### Local Overrides

Create `.fission/local-deployment.json` for your workstation:

```json
{
  "function_common": {
    "secrets": ["fission-myproject-local-env"]
  }
}
```

Fission automatically uses this file if present (no flag needed). `.gitignore` typically excludes it.
## Best Practices

### Do's ✅

1. **Do use Kubernetes Secrets** - Never hardcode credentials
2. **Do encrypt with vault** - Prevents plaintext secrets in K8s
3. **Do store the vault key securely** - In a K8s sealed secret, an external vault (HashiCorp Vault, AWS Secrets Manager), or a separate K8s secret in a restricted namespace
4. **Do namespace secrets** - Use different secrets for dev/staging/prod
5. **Do rotate secrets regularly** - Especially database passwords and API tokens
6. **Do use ConfigMaps for non-sensitive config** - Cleaner separation
7. **Do provide sensible defaults** - In `get_secret()` calls
8. **Do validate required secrets** - Fail fast at startup:
   ```python
   def init():
       pg_host = get_secret("PG_HOST")
       if not pg_host:
           raise ValueError("PG_HOST secret is required")
   ```
### Don'ts ❌

1. **Don't commit secrets** - Even in `deployment.json` literals
2. **Don't put plaintext in Git** - Use placeholders or remove before committing
3. **Don't embed the vault key in code for production** - Use an environment-specific override or external secret management
4. **Don't share the vault key publicly** - It's a symmetric key; anyone with it can decrypt all secrets
5. **Don't use the same secret across namespaces** - Separate environments should have separate credentials
6. **Don't rely on obscurity** - Security through obscurity is not security
### Supply Chain Security

For production deployments:

1. **Store the vault key in sealed secrets** (if on K8s):
   ```bash
   kubectl create secret generic crypto-key \
     --from-literal=key='your-hex-key'
   # Then use the SealedSecrets controller to encrypt it in Git
   ```

2. **Use an external secrets operator**:
   ```yaml
   apiVersion: external-secrets.io/v1beta1
   kind: ExternalSecret
   metadata:
     name: db-creds
   spec:
     refreshInterval: "1h"
     secretStoreRef:
       name: vault-backend
       kind: SecretStore
     target:
       name: fission-myproject-env
       creationPolicy: Owner
     data:
       - secretKey: PG_PASS
         remoteRef:
           key: /prod/db/password
   ```

3. **Rotate automatically** with cronjobs or an external secret manager
## Environment Variable Alternative

While the template uses secret files mounted by Fission, you can also use environment variables:

```json
"function_common": {
  "environment": {
    "LOG_LEVEL": "INFO",
    "FEATURE_FLAG": "true"
  }
}
```

Access them with `os.getenv()`:

```python
import os
log_level = os.getenv("LOG_LEVEL", "INFO")
```

**However**: Environment variables are less flexible than secrets/configmaps for dynamic updates (changing them requires a function restart). Prefer secrets/configmaps for values that may change independently of code deployments.
## Troubleshooting

### Secret Not Available

```bash
# Check that the secret exists in the correct namespace
kubectl get secret fission-myproject-env -n fission

# Check the secret keys
kubectl get secret fission-myproject-env -n fission -o jsonpath='{.data}'

# Check the pod mount
kubectl exec -it <pod-name> -n fission -- ls -la /secrets/default/
```

Common issues:
- Secret in the wrong namespace (use the Fission namespace, usually `fission` or as configured)
- Secret name typo in the helpers.py `SECRET_NAME` variable
- Secret not mounted due to missing permission (service account restriction)
### Vault Decryption Failing

```python
from helpers import get_secret
from vault import is_valid_vault_format, decrypt_vault

vault_str = get_secret("PG_PASS")
print(is_valid_vault_format(vault_str))      # Should be True
print(decrypt_vault(vault_str, "wrong-key")) # Raises CryptoError
```

Check:
- `CRYPTO_KEY` is set correctly in `helpers.py`
- The key is 64 hex characters (32 bytes)
- The encrypted value format is exactly `vault:v1:base64...`
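A quick standalone way to check the key-format requirement above (`looks_like_valid_key` is a hypothetical helper name, not part of the template):

```python
def looks_like_valid_key(key: str) -> bool:
    # 32 bytes of key material encoded as 64 hex characters
    try:
        return len(bytes.fromhex(key)) == 32
    except ValueError:
        return False

print(looks_like_valid_key("e2" * 32))    # True: 64 hex chars = 32 bytes
print(looks_like_valid_key("too-short"))  # False: not valid hex
```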
### Permission Denied Reading Secret

The pod may lack permission to read the secret. Check the service account:

```bash
# Get the function pod's service account
kubectl get pod <pod-name> -n fission -o jsonpath='{.spec.serviceAccountName}'

# Check role bindings (ClusterRoleBindings are not namespaced)
kubectl get rolebinding -n fission
kubectl get clusterrolebinding

# Add permission if needed (requires cluster admin)
kubectl create clusterrolebinding fission-secret-reader \
  --clusterrole=view \
  --serviceaccount=fission:default
```
## Further Reading

- [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/)
- [Kubernetes ConfigMaps](https://kubernetes.io/docs/concepts/configuration/configmap/)
- [Fission Environment and Config](https://fission.io/docs/usage/env/)
- [PyNaCl Documentation](https://pynacl.readthedocs.io/)
- [SealedSecrets](https://github.com/bitnami-labs/sealed-secrets) - Store encrypted secrets in Git
240
fission-python/template/docs/STRUCTURE.md
Normal file
@@ -0,0 +1,240 @@
# Project Structure

This document explains the purpose and contents of each directory and file in a Fission Python project.

## Directory Layout

```
project/
├── .fission/                  # Fission configuration
│   ├── deployment.json        # Main deployment configuration
│   ├── dev-deployment.json    # Development environment overrides
│   └── local-deployment.json  # Local development overrides
├── src/                       # Source code
│   ├── __init__.py            # Package initialization
│   ├── vault.py               # Vault encryption utilities
│   ├── helpers.py             # Shared utility functions
│   ├── exceptions.py          # Custom exception classes
│   ├── models.py              # Pydantic models for validation
│   ├── build.sh               # Build script (executable)
│   └── *.py                   # Your function implementations
├── test/                      # Unit and integration tests
│   ├── __init__.py
│   ├── test_*.py              # Test files
│   └── requirements.txt       # Test dependencies
├── migrates/                  # Database migration scripts
│   └── *.sql                  # SQL migration files
├── manifests/                 # Kubernetes manifests (optional)
│   └── *.yaml                 # K8s resources
├── specs/                     # Generated Fission specs
│   ├── fission-deployment-config.yaml
│   └── ...
├── requirements.txt           # Runtime dependencies
├── dev-requirements.txt       # Development dependencies
├── .env.example               # Environment variable template
├── pytest.ini                 # Pytest configuration
├── README.md                  # Project documentation
└── (other project files)
```
## File Purposes

### .fission/deployment.json

This is **the most important configuration file** for Fission deployment. It defines:

- **environments**: Build environment configuration (image, builder, resources)
- **archives**: Source code packaging (typically "package.zip" from src/)
- **packages**: Package definitions linking source to environment
- **function_common**: Default settings applied to all functions
- **secrets**: Secret definitions (literal values are placeholders - actual secrets go in K8s)
- **configmaps**: ConfigMap definitions (non-sensitive configuration)

**Important**: The secret and configmap literals are **placeholders only**. In production, you create actual K8s secrets/configmaps with the same names containing real values.

**Placeholders**:
- `${PROJECT_NAME}` - Replaced with your project name by `create-project.sh`
- Secret name pattern: `fission-${PROJECT_NAME}-env`
- ConfigMap name pattern: `fission-${PROJECT_NAME}-config`
### src/vault.py

Provides encryption/decryption utilities using PyNaCl (SecretBox). This is used when you want to store encrypted values in K8s secrets rather than plaintext.

**Key functions**:
- `encrypt_vault(plaintext, key)` - Encrypt and return a vault format string
- `decrypt_vault(vault, key)` - Decrypt a vault format string
- `is_valid_vault_format(vault)` - Check whether a string is vault-encrypted

**Usage in helpers.py**: The `get_secret()` and `get_config()` functions automatically detect the vault format (`vault:v1:...`) and decrypt if a valid `CRYPTO_KEY` is set.
### src/helpers.py

Shared utilities used across functions:

**Database**:
- `init_db_connection()` - Creates a PostgreSQL connection from secrets
- `db_row_to_dict(cursor, row)` - Convert a row tuple to a dict
- `db_rows_to_array(cursor, rows)` - Convert multiple rows to a list of dicts

**Configuration**:
- `get_secret(key, default=None)` - Read from the K8s secret volume
- `get_config(key, default=None)` - Read from the K8s config volume
- `get_current_namespace()` - Get the current K8s namespace

**Utilities**:
- `str_to_bool(input)` - Convert a string to a boolean
- `check_port_open(ip, port, timeout)` - TCP port connectivity check
- `get_user_from_headers()` - Extract the user ID from request headers
- `format_error_response(...)` - Build a standardized error dict

**Logging**:
- The helpers use Flask's `current_app.logger` for error logging
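As an illustration, `str_to_bool` might look like this. This is a hypothetical sketch; the template's actual helper may accept a different set of truthy strings.

```python
def str_to_bool(value) -> bool:
    # Interprets common truthy strings; everything else is False.
    if isinstance(value, bool):
        return value
    return str(value).strip().lower() in ("1", "true", "yes", "on")

print(str_to_bool("True"))  # True
print(str_to_bool("0"))     # False
```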
### src/exceptions.py

Custom exception hierarchy:

```
ServiceException (base)
├── ValidationError (400) - Invalid input
├── NotFoundError (404)   - Resource not found
├── ConflictError (409)   - Duplicate/conflict
└── DatabaseError (500)   - Database failure
```

All exceptions include:
- `error_code` - Machine-readable code
- `http_status` - HTTP status
- `error_msg` - Human-readable message
- `x_user` (optional) - User identifier
- `details` (optional) - Additional context dict

When raised in a Fission function, these automatically return proper JSON error responses.
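A hypothetical sketch of what such a base class could look like; the real `exceptions.py` may differ in naming, defaults, and how the JSON body is produced.

```python
class ServiceException(Exception):
    http_status = 500
    error_code = "internal_error"

    def __init__(self, error_msg, x_user=None, details=None):
        super().__init__(error_msg)
        self.error_msg = error_msg
        self.x_user = x_user          # optional user identifier
        self.details = details or {}  # optional extra context

    def to_response(self):
        # Shape of the JSON error body returned to the caller
        return {
            "error_code": self.error_code,
            "error_msg": self.error_msg,
            "details": self.details,
        }

class NotFoundError(ServiceException):
    http_status = 404
    error_code = "not_found"

err = NotFoundError("item 42 does not exist")
print(err.http_status, err.to_response()["error_code"])  # 404 not_found
```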
### src/models.py

Pydantic models for request/response validation:

**Patterns included**:
- Enums (e.g., `Status`, `DataType`)
- Dataclass filters (e.g., `ItemFilter`, `Pagination`)
- Request models (`ItemCreateRequest`, `ItemUpdateRequest`)
- Response models (`ItemResponse`, `PaginatedResponse`)
- An `ErrorResponse` model (used by exceptions)

**Key concepts**:
- Use `Field(...)` with constraints (min_length, max_length, ge, le)
- Provide a `description` for API documentation
- Use `json_schema_extra` for example values
- Set `from_attributes = True` for ORM compatibility
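The key concepts combine like this (a hypothetical model sketched against Pydantic v2; the field names and constraints are illustrative, not the template's actual models):

```python
from pydantic import BaseModel, Field

class ItemCreateRequest(BaseModel):
    # Constraints double as validation and API documentation
    name: str = Field(..., min_length=1, max_length=100,
                      description="Display name of the item")
    quantity: int = Field(1, ge=1, le=1000,
                          description="How many to create")

    model_config = {
        "from_attributes": True,  # allow construction from ORM objects
        "json_schema_extra": {"examples": [{"name": "Widget", "quantity": 2}]},
    }

req = ItemCreateRequest(name="Widget")
print(req.quantity)  # 1 (default applied)
```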
### src/build.sh

Shell script that builds the dependency package. It:

1. Detects the OS (Debian vs Alpine)
2. Installs build dependencies (gcc, libpq-dev/python3-dev/postgresql-dev)
3. Installs Python requirements into the `src/` directory
4. Copies `src/` to the package destination

**Important**: Must be executable (`chmod +x src/build.sh`)

The script expects environment variables:
- `SRC_PKG` - Source package directory (e.g., `src`)
- `DEPLOY_PKG` - Destination package (e.g., `specs/package`)

The Fission builder sets these automatically.
### test/

Contains unit and integration tests.

**Structure**:
- `test_*.py` - Test files following pytest conventions
- `requirements.txt` - Test dependencies (pytest, pytest-mock, requests)

**Running tests**:
```bash
pip install -r dev-requirements.txt
pytest
```
## Fission Configuration in Docstrings

Each Python function that should be exposed as a Fission function **must** include a fenced `` ```fission `` block in its docstring:

````python
def my_function(event, context):
    """
    ```fission
    {
        "name": "my-function",
        "http_triggers": {
            "my-trigger": {
                "url": "/api/endpoint",
                "methods": ["GET", "POST"]
            }
        }
    }
    ```
    Human-readable description here.
    """
    # Implementation
````

The Fission Python builder parses these docstrings and generates `specs/fission-deployment-config.yaml` and the other spec files.
**Supported trigger types**:
- `http_triggers` - HTTP endpoints
- `kafka_triggers` - Kafka topics
- `timer_triggers` - Scheduled execution
- `message_queue_triggers` - MQTT, NATS, etc.
## Configuration Precedence

1. **deployment.json** - Base configuration (committed to repo)
2. **dev-deployment.json** - Overrides for the dev environment (not always committed)
3. **local-deployment.json** - Local overrides (typically .gitignored)

When deploying:
- `fission deploy` uses deployment.json
- `fission deploy --dev` uses dev-deployment.json if present
## Secrets and Configuration Flow

1. **Define placeholders** in `deployment.json`:
   ```json
   "secrets": {
     "fission-myproject-env": {
       "literals": ["PG_HOST=localhost", "PG_PORT=5432"]
     }
   }
   ```

2. **Create the actual K8s secret**:
   ```bash
   kubectl create secret generic fission-myproject-env \
     --from-literal=PG_HOST=prod-db.example.com \
     --from-literal=PG_PORT=5432
   ```

3. **Read it in code** via `get_secret()`:
   ```python
   host = get_secret("PG_HOST")
   ```

4. **For vault encryption**:
   - Set `CRYPTO_KEY` in helpers.py or as an env override
   - Store the encrypted value (`vault:v1:base64data`) in the K8s secret
   - `get_secret()` auto-decrypts
## Summary

- Keep function code in `src/`
- Define Fission metadata in docstring blocks
- Use helpers for common operations
- Define custom exceptions for error handling
- Validate inputs with Pydantic models
- Store tests in `test/` with pytest
- Manage database migrations in `migrates/`
- Do not commit actual secrets to the repository
567
fission-python/template/docs/TESTING.md
Normal file
@@ -0,0 +1,567 @@
# Testing Guide

This document covers testing strategies and best practices for Fission Python functions.

## Table of Contents

1. [Test Types](#test-types)
2. [Dependencies](#dependencies)
3. [Unit Testing](#unit-testing)
4. [Integration Testing](#integration-testing)
5. [Test Database](#test-database)
6. [Mocking](#mocking)
7. [Fixtures](#fixtures)
8. [Coverage](#coverage)
9. [Running Tests](#running-tests)
10. [CI/CD Integration](#cicd-integration)

## Test Types

### Unit Tests

Test individual functions in isolation, mocking external dependencies:
- Database calls
- HTTP requests
- File I/O
- External services

**Goal**: Verify business logic correctness without infrastructure.

### Integration Tests

Test the function with real (or test) dependencies:
- Actual database queries
- End-to-end request/response flow
- Real configuration loading

**Goal**: Verify integration points work correctly.
## Dependencies

Install test dependencies:

```bash
pip install -r test/requirements.txt
# Or for dev (includes both runtime and test deps):
pip install -r dev-requirements.txt
```

Required packages:
- `pytest` - Test framework
- `pytest-mock` - Mocking utilities (provides the `mocker` fixture)
- `requests` - For integration tests making HTTP calls
## Unit Testing

### Example Test Structure

```python
# test/test_my_function.py
import pytest
from unittest.mock import patch, MagicMock
from src.my_function import create_item
from exceptions import ValidationError


def test_create_item_success():
    """Test successful item creation."""
    # Arrange
    mock_conn = MagicMock()
    mock_cursor = MagicMock()
    mock_conn.cursor.return_value = mock_cursor
    mock_cursor.fetchone.return_value = ("item-id", "Item Name", "active")

    # Mock init_db_connection to return our mock
    with patch("src.my_function.init_db_connection", return_value=mock_conn):
        # Create a mock Flask request
        with patch("src.my_function.request") as mock_request:
            mock_request.get_json.return_value = {
                "name": "Test Item",
                "status": "active"
            }
            mock_request.view_args = {}

            # Act
            result = create_item({}, {})

            # Assert
            assert result["id"] == "item-id"
            assert result["name"] == "Test Item"
            mock_cursor.execute.assert_called_once()
            mock_conn.commit.assert_called_once()


def test_create_item_validation_error():
    """Test validation of missing required fields."""
    with patch("src.my_function.request") as mock_request:
        mock_request.get_json.return_value = {"name": ""}  # Empty name

        with pytest.raises(ValidationError) as exc_info:
            create_item({}, {})

        assert "validation" in str(exc_info.value.error_msg).lower()
```
### Mocking Helpers

Use `patch` to replace dependencies:

```python
from unittest.mock import patch, MagicMock


# Mock helpers.get_secret
@patch("src.my_function.helpers.get_secret")
def test_with_mocked_secret(mock_get_secret):
    mock_get_secret.return_value = "localhost"
    # Test code...


# Mock a database connection helper
@patch("src.my_function.helpers.init_db_connection")
def test_with_mocked_db(mock_init_db):
    mock_conn = MagicMock()
    mock_init_db.return_value = mock_conn
    # Test code...
```

### Mocking Flask Request

```python
from unittest.mock import patch


def test_with_flask_request():
    with patch("src.my_function.request") as mock_request:
        mock_request.get_json.return_value = {"key": "value"}
        mock_request.args.getlist.return_value = []
        mock_request.headers.get.return_value = "user-123"
        # Test code...
```

## Integration Testing

### Test Database Setup

Use a separate test database:

```bash
# Create test database
createdb fission_test

# Or with Docker:
docker run -d -p 5433:5432 -e POSTGRES_PASSWORD=test postgres:15
```

Set environment variables for the test database:
```bash
export PG_HOST=localhost
export PG_PORT=5433
export PG_DB=fission_test
export PG_USER=postgres
export PG_PASS=test
```

### pytest Fixtures for Database

```python
# conftest.py (placed in test/ directory)
import pytest
import psycopg2
from helpers import init_db_connection


@pytest.fixture(scope="session")
def db_connection():
    """Create a database connection for the entire test session."""
    conn = init_db_connection()
    yield conn
    conn.close()


@pytest.fixture(scope="function")
def db_cursor(db_connection):
    """Create a cursor for each test, with transaction rollback."""
    conn = db_connection
    cursor = conn.cursor()
    # Ensure a clean transaction state before the test
    conn.rollback()
    yield cursor
    # Roll back after each test to keep the DB clean
    conn.rollback()
```

### Example Integration Test

```python
# test/test_integration.py
from unittest.mock import patch


def test_create_and_retrieve_item_integration(db_connection):
    """Test full CRUD cycle with real database."""
    from src.models import ItemCreateRequest
    from src.functions import create_item, get_item

    # Remove any leftover test data
    cursor = db_connection.cursor()
    cursor.execute("DELETE FROM items WHERE name = 'Integration Test'")
    db_connection.commit()

    # Create item via function
    with patch("src.functions.request") as mock_request:
        mock_request.get_json.return_value = {
            "name": "Integration Test",
            "description": "Test item"
        }
        mock_request.view_args = {}
        result = create_item({}, {})

    item_id = result["id"]
    assert result["name"] == "Integration Test"

    # Retrieve same item
    with patch("src.functions.request") as mock_request:
        mock_request.view_args = {"id": item_id}
        result = get_item({"path": f"/items/{item_id}"}, {})
        assert result["id"] == item_id

    # Cleanup
    cursor.execute("DELETE FROM items WHERE id = %s", (item_id,))
    db_connection.commit()
```

## Test Database Migrations

Apply migrations before integration tests:

```python
# conftest.py
import os
import subprocess

import pytest


def apply_migrations():
    """Apply all SQL migrations to the test database."""
    migrates_dir = os.path.join(os.path.dirname(__file__), "..", "migrates")
    for file in sorted(os.listdir(migrates_dir)):
        if file.endswith(".sql"):
            path = os.path.join(migrates_dir, file)
            subprocess.run(
                ["psql", "-d", "fission_test", "-f", path],
                check=True
            )


@pytest.fixture(scope="session", autouse=True)
def setup_database():
    """Run migrations before any tests."""
    apply_migrations()
    yield
    # Optionally drop and recreate after tests
```

## Mocking

### Built-in unittest.mock

```python
from unittest.mock import patch, MagicMock, mock_open


# Simple patch
with patch("module.function") as mock_func:
    mock_func.return_value = "mocked"
    # call code that uses module.function

    # Assert called with specific args
    mock_func.assert_called_once_with("arg1", "arg2")


# Mock open() with canned file content
with patch("builtins.open", mock_open(read_data="file content")) as mock_file:
    # code that opens the file
    mock_file.assert_called_with("path/to/file", "r")
```

### pytest-mock Fixture

Simpler syntax using the `mocker` fixture:

```python
def test_with_mocker(mocker):
    mock_func = mocker.patch("src.function.helper")
    mock_func.return_value = {"key": "value"}
    # test code...
```

## Fixtures

Create reusable fixtures in `conftest.py`:

```python
# test/conftest.py
import pytest


@pytest.fixture
def sample_item_data():
    """Provide sample item data for tests."""
    return {
        "name": "Test Item",
        "description": "A test item",
        "status": "active"
    }


@pytest.fixture
def mock_db_connection(mocker):
    """Provide a mocked database connection."""
    mock_conn = mocker.MagicMock()
    mock_cursor = mocker.MagicMock()
    mock_conn.cursor.return_value = mock_cursor
    mock_cursor.fetchone.return_value = None
    return mock_conn
```

Fixtures are automatically available to all tests in the directory.

## Coverage

Measure test coverage with pytest-cov:

```bash
# Install
pip install pytest-cov

# Run with coverage
pytest --cov=src

# HTML report
pytest --cov=src --cov-report=html
open htmlcov/index.html

# Show missing lines
pytest --cov=src --cov-report=term-missing
```

Aim for high coverage of business logic (80%+). Don't worry about 100% coverage of trivial getters/setters.

### Excluding Files

pytest-cov has no exclude flag of its own; exclude files through coverage configuration instead. In `.coveragerc`:

```ini
[run]
omit = src/vault.py
```

The coverage options themselves can stay in `pytest.ini`:
```ini
[pytest]
addopts = --cov=src
```

## Running Tests

### Basic Commands

```bash
# Run all tests
pytest

# Verbose
pytest -v

# Run specific test file
pytest test/test_my_function.py

# Run specific test function
pytest test/test_my_function.py::test_create_item_success

# Run with markers
pytest -m "integration"  # if using @pytest.mark.integration

# Stop on first failure
pytest -x

# Show print statements
pytest -s
```

### Environment Setup

Create `test/.env` or set environment variables before tests:

```bash
# For integration tests
export PG_HOST=localhost
export PG_PORT=5432
export PG_DB=fission_test
```

Or use a pytest fixture to load from `.env`:

```python
# conftest.py
import os

import pytest
from dotenv import load_dotenv


@pytest.fixture(scope="session", autouse=True)
def load_env():
    env_path = os.path.join(os.path.dirname(__file__), ".env")
    load_dotenv(env_path)
```

### Markers

Mark tests as unit/integration/slow:

```python
import pytest


@pytest.mark.unit
def test_quick_unit():
    pass


@pytest.mark.integration
def test_full_workflow():
    pass


@pytest.mark.slow
def test_long_running():
    pass
```

Run only unit tests:
```bash
pytest -m "unit"
```

Skip tests:
```bash
pytest -m "not slow"
```

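Custom markers should also be registered, or recent pytest versions emit `PytestUnknownMarkWarning`. A `pytest.ini` sketch matching the marker names above:

```ini
[pytest]
markers =
    unit: fast, isolated unit tests
    integration: tests that require a real database
    slow: long-running tests
```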
## CI/CD Integration

### GitHub Actions Example

```yaml
# .github/workflows/test.yaml
name: Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:15
        env:
          POSTGRES_PASSWORD: test
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432

    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'
      - name: Install dependencies
        run: |
          pip install -r dev-requirements.txt
      - name: Setup database
        run: |
          createdb -h localhost -U postgres fission_test
          psql -h localhost -U postgres fission_test -f migrates/001_schema.sql
        env:
          PGPASSWORD: test
      - name: Run tests
        run: |
          pytest --cov=src --cov-report=xml
        env:
          PG_HOST: localhost
          PG_PORT: 5432
          PG_DB: fission_test
          PG_USER: postgres
          PG_PASS: test
      - name: Upload coverage
        uses: codecov/codecov-action@v3
```

## Best Practices

1. **One assertion per test** - Keep tests focused
2. **Use descriptive names** - `test_create_item_validation_error_for_missing_name`
3. **Arrange-Act-Assert** - Structure tests clearly
4. **Mock external dependencies** - Don't rely on network or external services
5. **Test error cases** - Don't just test happy paths
6. **Use fixtures** - Reuse setup/teardown code
7. **Keep tests independent** - No shared state between tests
8. **Test edge cases** - Empty inputs, null values, boundary conditions
9. **Don't test libraries** - Don't write tests for Flask/Pydantic themselves
10. **Clean up resources** - Use fixtures to ensure cleanup

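Practices 1-3 in miniature (the function under test here is hypothetical):

```python
def add_item_to_cart(cart, item):
    """Hypothetical function under test: returns a new cart with the item appended."""
    return cart + [item]


def test_add_item_to_cart_appends_single_item():
    # Arrange
    cart = ["apple"]
    # Act
    result = add_item_to_cart(cart, "banana")
    # Assert
    assert result == ["apple", "banana"]
```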
## Common Patterns

### Testing Exceptions

```python
def test_raises_not_found():
    with pytest.raises(NotFoundError) as exc:
        get_item("nonexistent-id")
    assert exc.value.http_status == 404
```

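`pytest.raises` can also check the message in one step via `match=` (a regular expression searched against the exception text). A self-contained sketch with a stand-in function:

```python
import pytest


def divide(a, b):
    # Stand-in for any code that raises with a descriptive message
    if b == 0:
        raise ValueError("division by zero is not allowed")
    return a / b


def test_divide_by_zero_message():
    with pytest.raises(ValueError, match="division by zero"):
        divide(1, 0)
```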
### Parametrized Tests

```python
import pytest


@pytest.mark.parametrize("value,expected", [
    ("true", True),
    ("false", False),
    ("", None),
    (None, None),
])
def test_str_to_bool(value, expected):
    from helpers import str_to_bool
    assert str_to_bool(value) == expected
```

### Temporary Files/Directories

```python
def test_with_temp_file(tmp_path):
    # tmp_path is a pathlib.Path to a temporary directory
    file = tmp_path / "test.txt"
    file.write_text("content")
    assert file.read_text() == "content"
```

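pytest's built-in `monkeypatch` fixture is similarly useful for the `PG_*` environment variables used throughout this guide; `setenv` changes are undone automatically after each test. The config helper below is hypothetical:

```python
import os


def get_db_host():
    # Hypothetical config helper that reads from the environment
    return os.environ.get("PG_HOST", "localhost")


def test_db_host_override(monkeypatch):
    monkeypatch.setenv("PG_HOST", "db.test.internal")
    assert get_db_host() == "db.test.internal"
```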
## Troubleshooting

### Tests Fail with Database Errors

- Check the test database is running: `pg_isready -h localhost -p 5432`
- Verify the test database exists: `psql -l | grep fission_test`
- Check environment variables: `echo $PG_HOST`

### Mock Not Working

- Ensure you're patching the **correct import location** (where it's used, not where it's defined)

```python
# Wrong: patching where it's defined
@patch("helpers.get_secret")

# Right: patching where it's used in your function module
@patch("src.my_function.helpers.get_secret")
```

### Import Errors

Ensure `PYTHONPATH` includes the project root:
```bash
export PYTHONPATH=/path/to/project:$PYTHONPATH
```

Or use pytest's `pythonpath` option in `pytest.ini`:
```ini
[pytest]
pythonpath = .
```

## Further Reading

- [pytest documentation](https://docs.pytest.org/)
- [pytest-mock documentation](https://github.com/pytest-dev/pytest-mock)
- [Python unittest.mock](https://docs.python.org/3/library/unittest.mock.html)
- [Testing Flask Applications](https://flask.palletsprojects.com/en/2.1.x/testing/)