.claude-plugin/marketplace.json (new file, 16 lines)
@@ -0,0 +1,16 @@
{
    "name": "vega-claude-marketplace",
    "owner": {"name": "tiendd", "email": "fdm.dev17@gmail.com"},
    "plugins": [
        {
            "id": "fission-python",
            "name": "FissionPython",
            "description": "Skill for creating, analyzing, and managing Fission Python projects.",
            "source": "./",
            "type": "skill",
            "path": "fission-python",
            "tools": ["create-project", "analyze-config", "update-docstring"],
            "version": "1.0.1"
        }
    ]
}
.claude/plans/distributed-coalescing-breeze.md (new file, 360 lines)
@@ -0,0 +1,360 @@
# Plan: Update Fission Python Template Based on Example Projects

## Context

The current Fission Python template (`fission-python/template/`) is essentially a copy of the `py-eom-quota` example project, making it **quota-specific** rather than a **generic starting point** for new Fission Python projects.

Three example projects were analyzed:

- `py-eom-quota` - User quota management API
- `py-eom-storage` - Storage resource management with S3 integration
- `py-ailbl-scheduler` - Background job scheduler with Dagster integration

All examples share common infrastructure patterns but differ in business logic. This plan will make the template **generic, reusable, and production-ready** by extracting shared best practices.

---

## Key Findings from Examples

### 1. Common Infrastructure (All Projects Share)

- **vault.py** - Identical across all three projects (encryption/decryption using PyNaCl)
- **helpers.py** - Nearly identical core utilities:
  - `get_secret()` / `get_config()` (K8s secrets/configmaps with vault support)
  - `init_db_connection()` (PostgreSQL connection)
  - `db_row_to_dict()` / `db_rows_to_array()`
  - `get_user_from_headers()` (extract user for audit logging)
  - `format_error_response()` (standardized error format)
  - `check_port_open()` (DB readiness check)
  - `str_to_bool()` utility
- **Fission Configuration** - Using docstring format in `main()` functions
- **Exception Patterns** - Custom exception hierarchies with:
  - `error_code` (machine-readable)
  - `http_status` (HTTP status)
  - `error_msg` (human-readable)
  - `x_user` (optional user tracking)
  - `details` (optional additional context)
- **Pydantic Models** - Request validation, response schemas, pagination/filtering
- **Project Structure** - Consistent layout:

  ```
  project/
  ├── .fission/deployment.json
  ├── src/
  │   ├── __init__.py
  │   ├── exceptions.py
  │   ├── helpers.py
  │   ├── models.py
  │   ├── vault.py
  │   ├── build.sh
  │   └── <business logic>.py
  ├── test/
  ├── migrates/
  ├── manifests/
  ├── specs/
  ├── requirements.txt
  ├── dev-requirements.txt
  └── README.md
  ```

### 2. Variations Between Projects

**Database Connection:**

- `py-eom-quota`: Advanced `DBConfig` dataclass with `from_remote_config()` support
- `py-eom-storage` & `py-ailbl-scheduler`: Simplified direct connection from secrets

**Additional Dependencies:**

- Storage: `boto3` (S3/MinIO), `botocore`
- Scheduler: `gql` (GraphQL), `cron-descriptor`
- All: `pydantic`, `psycopg2-binary`, `PyNaCl`, `Flask`, `requests`

**Executors:**

- Quota: `poolmgr` (concurrency=1)
- Storage: `poolmgr` (concurrency=3, maxscale=3)
- Scheduler: `newdeploy` (minscale=1, maxscale=1)

### 3. Issues to Fix

- **README outdated** - References `pymake`, `fission.json`, `fission.yaml` (not used)
- **Missing Flask** - `src/requirements.txt` needs Flask (currently only in dev-requirements)
- **Quota-specific code** - Template should be generic (no `QuotaException`, `QuotaResponse`, etc.)
- **No .env.example** - Missing environment variable template
- **Test dependencies minimal** - Should include `pytest`, `pytest-mock`, `requests`, `flake8`, `black`
- **build.sh** - Should handle both alpine (apk) and debian (apt) properly
- **deployment.json** - Should not hardcode `fission-eom-quota-env` secret names
- **Missing Python version** - Should specify Python 3.11+ (the scheduler uses 3.11-alpine)
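The build.sh portability issue above could be addressed with a package-manager branch along these lines (a sketch; the package names are illustrative assumptions, not the template's actual dependencies):

```shell
#!/bin/sh
# Sketch: pick apk on Alpine, apt-get on Debian. The installed packages
# below are placeholders, not the template's real dependency list.
detect_pkg_mgr() {
    if command -v apk >/dev/null 2>&1; then
        echo "apk"
    elif command -v apt-get >/dev/null 2>&1; then
        echo "apt"
    else
        echo "none"
    fi
}

case "$(detect_pkg_mgr)" in
    apk)  echo "would run: apk add --no-cache build-base" ;;
    apt)  echo "would run: apt-get update && apt-get install -y build-essential" ;;
    none) echo "no supported package manager found" >&2 ;;
esac
```

The same branch can wrap whatever install step the real build.sh performs, so one script works on both base images.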
---

## Recommended Changes

### Phase 1: Core Infrastructure (Keep Generic)

**Files to MODIFY:**

1. **`src/vault.py`** - Keep as-is (identical in all three examples; no changes needed)

2. **`src/helpers.py`** - Use the simplified pattern from `py-eom-storage`, with these adjustments:
   - Keep: `get_secret()`, `get_config()`, `init_db_connection()`, `db_row_to_dict()`, `db_rows_to_array()`, `get_current_namespace()`, `str_to_bool()`, `check_port_open()`, `get_user_from_headers()`, `format_error_response()`
   - Remove: the `DBConfig` class (too quota-specific; keep it simple)
   - Add: `.strip()` when reading files (as in the scheduler)
   - Keep `CORS_HEADERS` and other constants, but make them configurable
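A few of these helpers are simple enough to sketch from their names and the descriptions above (an assumption-based sketch, not the projects' actual code):

```python
import socket


def str_to_bool(value: str) -> bool:
    """Interpret common truthy strings ("1", "true", "yes", "on") as True."""
    return str(value).strip().lower() in {"1", "true", "yes", "on"}


def check_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """DB readiness check: True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def db_row_to_dict(cursor, row) -> dict:
    """Zip one DB-API row with the column names from cursor.description."""
    columns = [col[0] for col in cursor.description]
    return dict(zip(columns, row))


def db_rows_to_array(cursor, rows) -> list:
    """Convert a list of DB-API rows to a list of dicts."""
    return [db_row_to_dict(cursor, row) for row in rows]
```

Keeping these DB-API-level (any object with `cursor.description` works) is what makes them reusable across all three projects regardless of schema.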
3. **`src/exceptions.py`** - Replace quota-specific exceptions with generic patterns:

   ```python
   class ServiceException(Exception):
       """Base exception for service errors."""

       def __init__(self, error_code, http_status, error_msg, x_user=None, details=None):
           super().__init__(error_msg)
           self.error_code = error_code
           self.http_status = http_status
           self.error_msg = error_msg
           self.x_user = x_user
           self.details = details


   class ValidationError(ServiceException): pass  # 400
   class NotFoundError(ServiceException): pass    # 404
   class ConflictError(ServiceException): pass    # 409
   class DatabaseError(ServiceException): pass    # 500
   ```

   (Based on the `py-eom-storage` pattern - cleaner and more generic.)
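How a handler might use such a hierarchy together with `format_error_response()` - a sketch only; the helper's exact signature is an assumption, and only the field names come from this plan:

```python
class ServiceException(Exception):
    """Minimal stand-in for the base class sketched in item 3."""

    def __init__(self, error_code, http_status, error_msg, x_user=None, details=None):
        super().__init__(error_msg)
        self.error_code = error_code
        self.http_status = http_status
        self.error_msg = error_msg
        self.x_user = x_user
        self.details = details


def format_error_response(exc):
    """Build the standardized error body described above (assumed shape)."""
    body = {
        "error_code": exc.error_code,
        "http_status": exc.http_status,
        "error_msg": exc.error_msg,
    }
    if getattr(exc, "x_user", None):
        body["x_user"] = exc.x_user
    if getattr(exc, "details", None):
        body["details"] = exc.details
    return body, exc.http_status


def handle(request_fn):
    """Run a handler; translate ServiceException into the error envelope."""
    try:
        return request_fn(), 200
    except ServiceException as exc:
        return format_error_response(exc)
```

The point of the pattern: business code raises typed exceptions, and a single wrapper turns them into consistent `(body, status)` responses instead of ad-hoc `{"error": str(err)}` dicts.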
4. **`src/models.py`** - Replace with generic example patterns:
   - Remove: all quota-specific models
   - Add: generic `ItemResponse`, `PaginatedResponse`, `ErrorResponse`
   - Include examples of Pydantic models with `Field` descriptions and `json_schema_extra`
   - Show patterns for: enums, nested models, dataclasses for filters

5. **`src/requirements.txt`** - Update to include the actual runtime deps:

   ```
   Flask==2.1.1
   pydantic==2.11.7
   psycopg2-binary==2.9.10
   PyNaCl==1.6.0
   requests==2.32.2
   ```

   (Remove the commented examples - these belong in docs, not in requirements.txt.)
6. **`dev-requirements.txt`** - Expand with useful dev tools:

   ```
   Flask==2.1.1
   requests==2.32.2
   pytest==8.2.0
   pytest-mock==3.14.0
   flake8==7.0.0
   black==24.1.1
   mypy==1.8.0
   ```
7. **`README.md`** - Complete rewrite:
   - Remove references to pymake and fission.json
   - Explain the actual project structure
   - Document Fission configuration in docstrings
   - Show how to use deployment.json
   - Document environment variables (secrets/configmaps)
   - Explain the testing approach
   - Add a development workflow
   - Include examples from all three projects as inspiration

8. **`.fission/deployment.json`** - Make generic with placeholders:
   - Use `your-service-py` as the environment name
   - Use `your-package` as the package name
   - Use generic secret/configmap names: `fission-${PROJECT_NAME}-env`, `fission-${PROJECT_NAME}-config`
   - Show both `poolmgr` and `newdeploy` executor examples (commented)
   - Include optional fields like `imagepullsecret`, `runtime_envs`, `configmaps`
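A placeholder-driven deployment.json might look roughly like this. The key names below are only inferred from the bullet list above - the actual schema of the in-house `.fission/deployment.json` is not shown in this plan, so treat every field as illustrative:

```json
{
  "_note": "illustrative sketch only; field names inferred from this plan, not a verified schema",
  "environment": "your-service-py",
  "package": "your-package",
  "secrets": ["fission-${PROJECT_NAME}-env"],
  "configmaps": ["fission-${PROJECT_NAME}-config"],
  "executor": {
    "type": "poolmgr",
    "concurrency": 3,
    "maxscale": 3
  }
}
```

The `create-project.sh` placeholder substitution would replace `${PROJECT_NAME}` at generation time.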
9. **`test/requirements.txt`** - Add:

   ```
   pytest==8.2.0
   pytest-mock==3.14.0
   requests==2.32.3
   ```

10. **`build.sh`** - Verify that `${SRC_PKG}` is used properly (the current version already does this correctly)
### Phase 2: Documentation & Examples

**New Files to ADD:**

1. **`src/__init__.py`** - Already exists; keep as-is (listed here for completeness)

2. **`examples/` directory** (new) - Sample function implementations:
   - `example_crud.py` - Basic CRUD with Pydantic validation
   - `example_webhook.py` - Webhook receiver pattern
   - `example_scheduler.py` - Background job pattern (from ailbl-scheduler)
   - Each should have a proper Fission docstring config
3. **`.env.example`** - Template showing all environment variables:

   ```
   # PostgreSQL
   PG_HOST=
   PG_PORT=5432
   PG_DB=
   PG_USER=
   PG_PASS=
   PG_DBSCHEMA=

   # Optional: Service-specific config (via ConfigMap)
   # YOUR_SERVICE_CONFIG_ENDPOINT=

   # Optional: Vault encryption key (if using encrypted secrets)
   # CRYPTO_KEY=
   ```
4. **`docs/` directory** (new) - Additional documentation:
   - `STRUCTURE.md` - Detailed file structure explanation
   - `TESTING.md` - How to write and run tests
   - `DEPLOYMENT.md` - Deployment options and tuning
   - `SECRETS.md` - Managing secrets and configmaps
   - `MIGRATIONS.md` - Database migration workflow

5. **`pytest.ini`** - Default pytest configuration:

   ```ini
   [pytest]
   testpaths = test
   python_files = test_*.py
   python_classes = Test*
   python_functions = test_*
   log_cli = true
   log_cli_level = INFO
   ```
6. **`.gitignore`** - Ensure it excludes:
   - `__pycache__/`
   - `*.pyc`
   - `.env`
   - `.venv/`
   - `venv/`
   - `.pytest_cache/`
   - `.mypy_cache/`
   - `.coverage`
   - `coverage.xml`
   - `specs/` (optional - generated files)

7. **`MANIFEST.md`** - Template for Kubernetes manifests (if not using auto-generated ones)
### Phase 3: Modernization

**Update CI/CD:**

Review the `.gitea/workflows/` files:

- Ensure they install dependencies correctly
- Add linting (flake8/black) steps
- Add test execution
- Add deployment steps with proper environment detection
- Consider adding security scanning

**Python Version:**

- Ensure all files are compatible with Python 3.11+
- Update `build.sh` to use a Python 3.11 image (as the scheduler does), or keep it generic
- Consider adding `runtime.txt` or `pyproject.toml` to specify the Python version
---

## Files to Modify Summary

**Direct modifications:**

- `src/helpers.py` - Simplify, improve
- `src/exceptions.py` - Make generic
- `src/models.py` - Replace with generic patterns
- `src/requirements.txt` - Add Flask, remove the commented section
- `dev-requirements.txt` - Comprehensive dev dependencies
- `test/requirements.txt` - Test dependencies
- `README.md` - Complete rewrite
- `.fission/deployment.json` - Generic placeholders
- `build.sh` - Already good; just ensure compatibility

**New files to add:**

- `.env.example`
- `pytest.ini`
- `.gitignore` (enhance)
- `examples/` directory with sample functions
- `docs/` directory with detailed guides
- `src/example_crud.py` (or in examples/)
- `src/example_webhook.py` (or in examples/)
- `src/example_scheduler.py` (or in examples/)

**New directories:**

- `examples/`
- `docs/`
---

## Implementation Approach

1. **Backup the current template** (git branch)
2. **Modify core files** in order: helpers → exceptions → models → requirements → deployment.json → README
3. **Add new files** (examples, docs, configs)
4. **Test the template**:
   - Run `create-project.sh` to generate a new project
   - Verify build.sh works
   - Run tests
   - Check Fission spec generation
5. **Commit with a clear message**
6. **Update the plugin documentation** if needed
---

## Verification Steps

After implementing the changes:

1. **Create a test project** from the updated template:

   ```bash
   ./create-project.sh test-project ./tmp-test/
   ```

2. **Inspect the generated project**:
   - Verify all files are present
   - Check that placeholders are substituted correctly
   - Ensure imports work

3. **Build the package**:

   ```bash
   cd tmp-test
   ./src/build.sh
   ```

4. **Run tests** (if any):

   ```bash
   pip install -r dev-requirements.txt
   pytest
   ```

5. **Check syntax**:

   ```bash
   python -m py_compile src/*.py
   flake8 src/
   black --check src/
   ```

6. **Validate the Fission config** (`fission spec validate` checks the generated spec directory; deployment.json itself can be checked with any JSON linter):

   ```bash
   fission spec validate
   ```

7. **Review the README** - Does it accurately describe the project?
---

## Success Criteria

- Template is **generic**, not domain-specific
- All examples' best practices are incorporated
- Documentation is accurate and complete
- Dependencies are correctly listed (Flask in requirements, not just in dev-requirements)
- README reflects the actual Fission workflow (docstrings, not fission.yaml)
- Multiple example implementations provided (CRUD, webhook, scheduler)
- Secrets/configuration clearly explained
- Testing setup is comprehensive
- Project passes linting and type checks

---

## Risks & Mitigations

| Risk | Mitigation |
|------|------------|
| Breaking existing template users | Keep changes minimal in helpers; preserve backward compatibility where possible |
| Over-engineering | Stick to patterns that appear in at least 2 of 3 examples |
| Missing edge cases | Include optional advanced patterns (like DBConfig) in docs, not in core |
| Documentation drift | Keep docs close to code; add examples that mirror real projects |

---

## Post-Implementation

After the template is updated:

1. Consider creating a **template validation script** to ensure quality
2. Update the **plugin SKILL.md** to reflect template changes
3. Add **templating tests** to the fission-python-skill test suite
4. Document the **update process** for future template modifications
5. Consider **versioning** the template (e.g., `template-v2/`)
.claude/plans/ethereal-giggling-sunrise.md (new file, 32 lines)
@@ -0,0 +1,32 @@
# Plan: Update marketplace.json

## Context

The marketplace.json file currently has an empty plugins array. The goal is to register the existing `fission-python-skill` plugin in the marketplace by adding it to the plugins list. Owner information will remain unchanged.

## Current State

- **File**: `.claude-plugin/marketplace.json`
- **Current content**: `{ "name": "vega-claude-marketplace", "owner": {"name": "tiendd", "email": "fdm.dev17@gmail.com"}, "plugins": [] }`
- **Plugin to add**: `fission-python-skill/.claude-plugin/plugin.json` contains:
  - id: "fission-python-skill"
  - name: "Fission Python Skill"
  - description: "Skill for creating, analyzing, and managing Fission Python projects."
  - type: "skill"
  - path: "fission-python-skill"
  - tools: ["create-project", "analyze-config", "update-docstring"]
  - version: "1.0.0"

## Implementation

1. Read the current `plugin.json` from `fission-python-skill/.claude-plugin/` to extract the plugin metadata
2. Update `.claude-plugin/marketplace.json`:
   - Keep the existing name and owner unchanged
   - Add a plugin object to the plugins array with the data from plugin.json

## Critical Files

- `.claude-plugin/marketplace.json` (to be modified)
- `fission-python-skill/.claude-plugin/plugin.json` (source of plugin data)

## Verification

After modification, verify:

1. The file contains valid JSON
2. The plugins array contains the fission-python-skill object
3. Owner information is unchanged
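The implementation and verification steps above can be sketched with the standard library (the idempotency check is an assumption, not something this plan prescribes):

```python
import json
from pathlib import Path


def register_plugin(marketplace_path: Path, plugin_path: Path) -> dict:
    """Append the plugin's metadata to the marketplace plugins array.

    Leaves name/owner unchanged and skips the append if a plugin with the
    same id is already registered, so re-runs are safe.
    """
    marketplace = json.loads(marketplace_path.read_text())
    plugin = json.loads(plugin_path.read_text())
    existing_ids = {p.get("id") for p in marketplace.get("plugins", [])}
    if plugin.get("id") not in existing_ids:
        marketplace.setdefault("plugins", []).append(plugin)
    marketplace_path.write_text(json.dumps(marketplace, indent=4) + "\n")
    return marketplace
```

Because `json.loads` on the result doubles as the "file contains valid JSON" check, verification steps 1-3 reduce to reading the file back and inspecting `plugins` and `owner`.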
.claude/plans/iridescent-meandering-blanket.md (new file, 182 lines)
@@ -0,0 +1,182 @@
# README.md Generation Plan

## Context

The user requested to "review project and generate README.md file" for the Claude Marketplace repository. This repository contains a plugin ecosystem for Claude Code with two major components:

1. **Fission Python Skill** - A plugin for creating, analyzing, and managing Fission serverless Python projects
2. **SDLC Agent System** - A complete multi-agent Software Development Life Cycle system for automated planning, architecture, coding, and code review

The repository is missing a root-level README.md file that documents these components, their usage, and how they work together.

## Problem Statement

The repository needs a comprehensive README.md at the root level that:

1. Introduces the Claude Marketplace project and its purpose
2. Documents the Fission Python Skill plugin (tools, installation, usage)
3. Documents the SDLC Agent System (agents, setup, workflow)
4. Explains the project structure and key directories
5. Provides quick start guides for both components
6. Includes reference information (technologies, environment variables, common tasks)
7. Links to existing detailed documentation (CLAUDE.md, agent docs, skill docs)

## Solution Approach

### What to Include

Based on the repository analysis, the README should cover:

1. **Project Overview**
   - What is Claude Marketplace?
   - Key components (Fission Python Skill, SDLC Agents)
   - Relationship between components

2. **Fission Python Skill**
   - Purpose and use cases
   - Available tools (create-project, analyze-config, update-docstring)
   - Installation/setup (chmod +x on scripts)
   - Usage examples for each tool
   - Project structure
   - Links to detailed docs (SKILL.md, reference.md)

3. **SDLC Agent System**
   - Overview of the 7 agents (Initializer, Planning, Architect, Coding, Code Review, Curator, Retro)
   - Agent workflow and handoffs
   - Setup procedure (setup.sh script)
   - agent-context directory structure
   - Skills system (stack detection, patterns, frameworks)
   - Quality gates and harness scripts
   - Links to detailed agent docs

4. **Project Structure**
   - Directory layout with descriptions
   - Key configuration files
   - Template locations

5. **Quick Start**
   - Using the Fission Python Skill to create a project
   - Setting up SDLC Agents in an existing project
   - Development environment (devcontainer)

6. **Development**
   - Making changes to skill scripts
   - Updating plugin metadata
   - Testing approaches

7. **Configuration**
   - Environment variables (for the devcontainer)
   - Claude Code settings

8. **Related Documentation**
   - CLAUDE.md (comprehensive project guide)
   - Agent-specific documentation
   - Skill documentation

### Design Decisions

- **Structure**: Standard GitHub README with clear sections using markdown headings
- **Tone**: Professional, concise, informative
- **Format**: Single file at the repository root
- **Links**: Cross-reference existing documentation rather than duplicating content
- **Code blocks**: Include practical examples for all commands
- **Tables**: Use for quick reference (tools, skills, agents)

### Reuse Existing Content

- CLAUDE.md contains excellent detailed information - summarize and link to it
- Individual agent .md files have authoritative content - link rather than copy
- Skill files (SKILL.md) already have user-facing docs - summarize and link
- The template structure is documented in CLAUDE.md - extract the key info
## Implementation Steps

1. Create `/workspaces/claude-marketplace/README.md` with:

   a. **Header section**
      - Badges (if applicable)
      - Title and subtitle
      - One-sentence description

   b. **Table of Contents** (auto-generated with markdown-toc, or manual)

   c. **Project Overview**
      - Purpose of Claude Marketplace
      - Components summary
      - Key technologies

   d. **Fission Python Skill section**
      - Description
      - Tools table
      - Installation
      - Usage examples
      - Project structure
      - Links to SKILL.md and reference.md

   e. **SDLC Agent System section**
      - What are SDLC Agents?
      - The 7 agents with brief descriptions
      - Setup instructions
      - Agent-context structure
      - Skills system overview
      - Harness and quality gates
      - Links to detailed docs

   f. **Project Structure section**
      - Directory tree visualization
      - Key files table

   g. **Quick Start section**
      - Setting up the dev environment
      - Creating a Fission project
      - Initializing SDLC Agents

   h. **Development section**
      - Modifying skills
      - Plugin registration
      - Testing

   i. **Configuration section**
      - Environment variables table
      - Settings files

   j. **License** (check if one exists)

2. Ensure the README:
   - Is comprehensive but concise
   - Uses consistent formatting (h2 for major sections, h3 for subsections)
   - Includes practical examples with code blocks
   - Links to existing detailed documentation
   - Has a clear call-to-action for both components

3. Quality checks:
   - Verify all linked files exist
   - Ensure markdown renders properly (no broken syntax)
   - Check for consistency with CLAUDE.md
## Critical Files

- `/workspaces/claude-marketplace/README.md` - The file to create
- `/workspaces/claude-marketplace/CLAUDE.md` - Source for detailed project information
- `/workspaces/claude-marketplace/fission-python-skill/SKILL.md` - Skill documentation source
- `/workspaces/claude-marketplace/.sdlc-agents/` - Agent documentation directory
- `/workspaces/claude-marketplace/.sdlc-agents/setup.sh` - Agent setup script

## Verification

After generating the README.md:

1. Check markdown syntax (headings, lists, code blocks, tables)
2. Verify all internal links point to existing files
3. Ensure all referenced tools and scripts actually exist
4. Confirm information consistency with the source files
5. Review for completeness: does it answer "What is this repo?" and "How do I use it?"
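Verification step 2 can be automated with a small stdlib script (a sketch; the regex only handles plain `[text](target)` links, not reference-style links):

```python
import re
from pathlib import Path


def find_broken_links(readme: Path) -> list:
    """Return relative markdown link targets in `readme` that don't exist on disk.

    Skips external (http/https/mailto) links; in-page "#anchor" links never
    match the pattern, so they are skipped implicitly.
    """
    text = readme.read_text()
    # Capture the link target up to the first ')', '#', or whitespace.
    targets = re.findall(r"\[[^\]]*\]\(([^)#\s]+)", text)
    broken = []
    for target in targets:
        if target.startswith(("http://", "https://", "mailto:")):
            continue
        if not (readme.parent / target).exists():
            broken.append(target)
    return broken
```

Running this against the generated README (and failing CI if the list is non-empty) keeps the link check repeatable rather than manual.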

## Success Criteria

- README.md exists at the repository root
- Provides a clear overview of both major components
- Includes practical usage examples
- Links to authoritative detailed documentation
- Follows standard GitHub README conventions
- New users can understand the project and get started
.claude/plans/rosy-giggling-flurry.md (new file, 421 lines)
@@ -0,0 +1,421 @@
# Plan: Enhance Fission Python Projects with Exceptions, Pydantic Models, and Code Quality Improvements

## Context

Three Fission Python projects need systematic improvements to enhance error handling, data validation, and code maintainability:

- **py-eom-storage**: Storage management API (GET/POST /storages, GET/PUT/DELETE /storages/{id})
- **py-eom-quota**: Quota management API (GET/POST /quotas, POST/DELETE /users/{userId}/quotas/{quotaId})
- **py-ailbl-scheduler**: Background worker system for scheduled tasks

Currently, all projects use generic `Exception` with simple error messages returned as `{"error": str(err)}` with a 500 status. There is no structured error handling, request validation, or consistent response formatting. Some projects have Pydantic models, but they are not used comprehensively.

## Goals

1. **Custom Exceptions**: Implement domain-specific exception classes with:
   - `error_code`: Machine-readable error identifier
   - `http_status_code`: Appropriate HTTP status (400, 404, 409, 500, etc.)
   - `error_msg`: Human-readable message
   - `x_user`: User identifier from the request header (X-Fission-Params-UserId or similar)

2. **Pydantic Models**: Add comprehensive request/response models for all endpoints:
   - Request body validation (POST/PUT)
   - Query parameter validation (GET)
   - Structured response schemas
   - Consistent error response format

3. **Code Quality**: Improve maintainability with:
   - Detailed docstrings for all functions and classes
   - Refactoring of complex, multi-responsibility functions
   - Consistent error handling patterns
   - Fixes for broken imports and type issues
## Project-Specific Plans

### 1. py-eom-storage

**Current State:**

- Has Pydantic models: `S3Resource`, `S3Credential` (unused)
- Uses dataclasses: `Page`, `Filter` (should be Pydantic)
- Endpoints: `/eom/admin/storages` (filter_or_insert.py), `/eom/admin/storages/{StorageId}` (update_or_delete.py)

**Changes Needed:**

**A. Create `src/exceptions.py`:**

```python
class StorageException(Exception):
    """Base exception for storage-related errors."""

    def __init__(self, error_code: str, http_status: int, error_msg: str, x_user: str = None):
        self.error_code = error_code
        self.http_status = http_status
        self.error_msg = error_msg
        self.x_user = x_user
        super().__init__(self.error_msg)


class ValidationError(StorageException):
    """Invalid input data."""


class NotFoundError(StorageException):
    """Resource not found."""


class ConflictError(StorageException):
    """Resource conflict (e.g., duplicate name)."""


class DatabaseError(StorageException):
    """Database operation failed."""


class S3ConnectionError(StorageException):
    """S3/MinIO connection failed."""
```

**B. Create/Update `src/models.py` (or extend the existing one):**

```python
import typing
from datetime import datetime
from typing import Literal

from pydantic import BaseModel, Field


# Request models
class StorageCreateRequest(BaseModel):
    name: str = Field(..., min_length=1, max_length=255)
    description: typing.Optional[str] = None
    resource: dict  # Should validate S3 structure


class StorageUpdateRequest(BaseModel):
    name: typing.Optional[str] = None
    description: typing.Optional[str] = None
    resource: typing.Optional[dict] = None
    active: typing.Optional[bool] = None


# Query models (convert Page/Filter to Pydantic)
class StorageFilter(BaseModel):
    ids: typing.Optional[typing.List[str]] = None
    keyword: typing.Optional[str] = None
    collection_id: typing.Optional[str] = None
    enable: typing.Optional[bool] = None
    created_from: typing.Optional[datetime] = None
    created_to: typing.Optional[datetime] = None
    # ... other filters


class StorageQuery(BaseModel):
    page: int = 0
    size: int = Field(8, ge=1, le=100)
    asc: bool = True
    sortby: typing.Optional[Literal["name", "enable", "created", "modified"]] = None
    filter: StorageFilter = Field(default_factory=StorageFilter)


# Response models
class StorageResponse(BaseModel):
    id: str
    name: str
    description: typing.Optional[str]
    resource: dict
    enable: bool
    created: datetime
    modified: datetime


class ErrorResponse(BaseModel):
    error_code: str
    http_status: int
    error_msg: str
    x_user: typing.Optional[str] = None
    details: typing.Optional[dict] = None
```
**C. Refactor `filter_or_insert.py`:**

- Replace the broad try-except with handlers that catch the custom exceptions
- Validate the request body using Pydantic in `make_insert_request`
- Use Pydantic for query parsing in `make_filter_request`
- Add a helper function `handle_exception` to format error responses consistently
- Extract SQL queries into separate functions for testability
- Add comprehensive docstrings explaining each endpoint's behavior

**D. Refactor `update_or_delete.py`:**

- Similar pattern: custom exceptions, Pydantic validation
- Refactor `is_depended_on_storage` - this function does too much; split it into smaller helpers
- Add detailed comments for each database operation
- Ensure proper error messages with appropriate HTTP status codes

**E. Update `helpers.py`:**

- Add a utility `get_user_from_header(request)` to extract the x-user from various headers
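The `get_user_from_header()` utility could look like this sketch. Only `X-Fission-Params-UserId` comes from this plan; the fallback header names are illustrative assumptions:

```python
from typing import Optional


def get_user_from_header(headers: dict) -> Optional[str]:
    """Extract the acting user's id from request headers for audit logging.

    Lookup is case-insensitive. `X-Fission-Params-UserId` is named in this
    plan; "X-User" and "X-User-Id" are hypothetical fallbacks.
    """
    candidates = ("X-Fission-Params-UserId", "X-User", "X-User-Id")
    lowered = {k.lower(): v for k, v in headers.items()}
    for name in candidates:
        value = lowered.get(name.lower())
        if value:
            return value
    return None
```

Returning `None` rather than raising lets callers decide whether a missing user is an error (e.g. `ValidationError`) or merely means audit fields stay empty.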
---

### 2. py-eom-quota

**Current State:**

- Already has extensive Pydantic models in `models.py` (QuotaPage, UserQuotaPage, ScheduleCreate, etc.)
- But: `userquota_filter.py` imports from `quota_update_or_delete`, which doesn't exist (broken import)
- The models need expanding to cover all request/response scenarios
- Endpoints: `/eom/admin/quotas` (filter), `/eom/admin/users/{UserId}/quotas` (filter/insert), `/eom/admin/users/{UserId}/quotas/{QuotaId}` (update/delete)

**Changes Needed:**

**A. Create `src/exceptions.py`:**

```python
class QuotaException(Exception):
    """Base exception for quota management."""

    def __init__(self, error_code: str, http_status: int, error_msg: str, x_user: str = None):
        self.error_code = error_code
        self.http_status = http_status
        self.error_msg = error_msg
        self.x_user = x_user
        super().__init__(self.error_msg)


class QuotaNotFoundError(QuotaException):
    """Quota does not exist."""


class UserQuotaConflictError(QuotaException):
    """User already has this type of quota."""


class ValidationError(QuotaException):
    """Invalid request data."""


class DatabaseError(QuotaException):
    """Database operation failed."""
```
|
||||
|
||||
**B. Extend `src/models.py`:**

The existing models mix schedule and quota models. Need to:
- Separate them, or clearly document which are for quotas vs. schedules
- Add request models:

```python
class QuotaCreateRequest(BaseModel):
    name: str
    description: typing.Optional[str] = None
    type: QuotaType
    value: typing.Union[MaxSizeBody, MaxOrderTimesBody]
    expire: ExpireBody


class QuotaUpdateRequest(BaseModel):
    name: typing.Optional[str] = None
    description: typing.Optional[str] = None
    enable: typing.Optional[bool] = None
    type: typing.Optional[QuotaType] = None
    value: typing.Optional[typing.Union[MaxSizeBody, MaxOrderTimesBody]] = None
    expire: typing.Optional[ExpireBody] = None


class UserQuotaAssignRequest(BaseModel):
    quota_id: str
```

- Ensure response models exist (QuotaResponse, UserQuotaResponse)
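The payoff of request models like these is that malformed input fails before any handler logic runs. A minimal, self-contained sketch (assuming pydantic v2; `QuotaType` here is a simplified stand-in for the real enum, and the nested body models are omitted):

```python
import typing
from enum import Enum

from pydantic import BaseModel, ValidationError as PydanticValidationError


class QuotaType(str, Enum):  # simplified stand-in for the real enum
    MAX_SIZE = "max_size"
    MAX_ORDER_TIMES = "max_order_times"


class QuotaCreateRequest(BaseModel):
    name: str
    description: typing.Optional[str] = None
    type: QuotaType


# A valid payload parses into a typed object.
req = QuotaCreateRequest.model_validate({"name": "basic", "type": "max_size"})
assert req.type is QuotaType.MAX_SIZE

# An invalid payload raises before any handler code runs; the endpoint can
# translate PydanticValidationError into the project's own ValidationError.
try:
    QuotaCreateRequest.model_validate({"name": "basic", "type": "bogus"})
except PydanticValidationError as exc:
    print(exc.error_count())
```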
**C. Fix `userquota_filter.py`:**
- Fix the broken import: `from quota_update_or_delete import __get_by_id` → `from userquota_insert_or_delete import __get_by_id` (or better: move `__get_by_id` to a shared helpers module)
- Refactor `make_filter_request`:
  - Use the `UserQuotaPage` Pydantic model properly
  - Validate that the user_id header is present, using Pydantic
  - Replace try-except with custom exceptions
  - Add a comprehensive docstring
- The function currently sets `paging.filter.user_ids = [user_id]` by hand; this should be part of a validation layer

**D. Refactor `userquota_insert_or_delete.py`:**
- Fix the same broken import pattern (it imports nothing but uses `__get_by_id` in its filter)
- Add proper request validation using Pydantic models
- Replace generic exceptions with `UserQuotaConflictError`, `QuotaNotFoundError`, etc.
- Refactor `__validate_user_quota_type`: the SQL query is currently hardcoded; add comments explaining the business logic
- The insert SQL appears to use the wrong columns: `INSERT INTO eom_user_quota(id, name, description, type, value, expire)`, but the table likely only has `(id, user_id, quota_id)`. Verify against the database schema; from the code it looks mismatched.

**E. Improve `helpers.py`:**
- Add utility functions for extracting and validating user headers
- Add consistent error-handling helpers

---
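The header utilities proposed in **E** could look like the following minimal sketch. The `ValidationError` here is a local stand-in mirroring the planned `src/exceptions.py` class, and the default header name follows the Fission convention used elsewhere in this plan:

```python
class ValidationError(Exception):
    """Local stand-in for the planned src/exceptions.ValidationError."""

    def __init__(self, error_code: str, http_status: int, error_msg: str):
        self.error_code = error_code
        self.http_status = http_status
        self.error_msg = error_msg
        super().__init__(error_msg)


def require_user_header(headers, name: str = "X-Fission-Params-UserId") -> str:
    """Return the user id header value, or raise ValidationError if it is
    missing or blank, so endpoints fail fast with a 400 before any DB work."""
    value = (headers or {}).get(name, "")
    if not value.strip():
        raise ValidationError("MISSING_USER_HEADER", 400,
                              f"Required header '{name}' is missing or empty")
    return value.strip()


print(require_user_header({"X-Fission-Params-UserId": " user123 "}))  # → user123
```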
### 3. py-ailbl-scheduler

**Current State:**
- No HTTP endpoints (only time-triggered workers)
- No Pydantic models needed (per the user's choice)
- Needs custom exceptions and code-quality improvements
- Workers: `worker_session_picker.py`, `worker_session_poller.py`, `worker_scheduler_scan.py`, `worker_schedule_auto_disable.py`
- Common utilities in `common.py` and `helpers.py`

**Changes Needed:**

**A. Create `src/exceptions.py`:**

```python
import typing


class SchedulerException(Exception):
    """Base exception for scheduler operations."""

    def __init__(self, error_code: str, error_msg: str,
                 details: typing.Optional[dict] = None):
        self.error_code = error_code
        self.error_msg = error_msg
        self.details = details
        super().__init__(self.error_msg)


class ScheduleNotFoundError(SchedulerException):
    """Schedule does not exist."""


class SessionLockError(SchedulerException):
    """Failed to acquire session lock."""


class DagsterError(SchedulerException):
    """Dagster pipeline execution failed."""


class CronParseError(SchedulerException):
    """Invalid cron expression."""


class ConfigurationError(SchedulerException):
    """Missing or invalid configuration."""
```
**B. Refactor `worker_scheduler_scan.py`:**

This is the most complex function (446 lines). Goals:
- Extract helper functions:
  - `_normalize_cron_for_cronner` (already exists)
  - `_as_date`, `_as_time` (already exist)
  - `_within_active_window` (already exists)
  - `_is_due_by_cron` (already exists)
  - `_is_due_by_freq` (already exists)
- Extract the schedule-creation logic into `_create_session_for_schedule(cur, schedule, now, slot_start)`
- Extract the candidate-schedule selection into `_fetch_due_schedules(cur, now, slot_start, slot_end, limit=50)`
- Add detailed docstrings explaining the overall algorithm: "Scan for schedules that are due in the current time slot and create sessions atomically"
- Improve variable names (e.g., `s` → `schedule`, `cur` → `cursor`)
- Add comments explaining the advisory-lock strategy and why it's needed
- Ensure proper exception handling with custom exceptions
- The function currently catches a generic `Exception` at the end; wrap specific operations with appropriate custom exceptions instead
**C. Refactor `worker_session_picker.py`:**
- Similar breakdown: extract a `_pick_and_claim_sessions(conn, limit=20)` helper
- Extract `_process_kind5_session(session, ctx)` and `_process_kind1_session(session, ctx)` into separate functions
- Add a detailed docstring explaining the picking strategy (FOR UPDATE SKIP LOCKED)
- Replace bare `except Exception` with specific exception types
- Add comments explaining the kind-handling logic (kind 5 vs. kind 1)
- The function `_build_run_config_kind5` is specific to that kind; it could be moved to a separate module if needed
**D. Refactor `worker_session_poller.py`:**
- Extract an `_update_completed_session(cur, session_id, status_info, now)` helper
- Extract an `_update_started_session(cur, session_id, started_dt)` helper
- Add a docstring explaining the polling strategy
- Replace generic exception handling with `DagsterError` when Dagster calls fail
- Add type hints for the row unpacking: `for sid, run_id, started, cron_description, created_by in rows:`

**E. Refactor `worker_schedule_auto_disable.py`:**
- Already simple enough, but still add a comprehensive docstring
- Consider adding a custom exception for database errors

**F. Improve `helpers.py` (in scheduler):**
- The `GraphQL` class and related functions are specific to Dagster; add docstrings
- `safe_notify` is good; add a docstring
- Consider creating a `SchedulerHelper` class to group related utilities

**G. Improve `common.py`:**
- Already has good docstrings, but they could be expanded
- Add type hints to function signatures
- Break up `launch_pipeline_execution` if it proves too complex (it handles multiple error cases)
---

## Common Patterns

### Exception Hierarchy

Each project will have:

```python
class BaseProjectException(Exception):
    """Base with error_code, http_status (if applicable), message, metadata."""
    pass


# Specific exceptions inherit from the base
class NotFoundError(BaseProjectException): ...
class ValidationError(BaseProjectException): ...
class ConflictError(BaseProjectException): ...
class DatabaseError(BaseProjectException): ...

# Domain-specific: StorageNotFoundError, QuotaConflictError, ScheduleNotFoundError, etc.
```

### Error Response Format

Standardized JSON response:

```json
{
    "error_code": "STORAGE_NOT_FOUND",
    "http_status": 404,
    "error_msg": "Storage with id 'xyz' does not exist",
    "x_user": "user123",
    "details": { /* optional additional context */ }
}
```
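A small serializer can produce this payload from any project exception, omitting optional fields that are not set. A sketch; the exception class here mirrors the base proposed above rather than any existing code:

```python
class BaseProjectException(Exception):
    """Mirrors the base exception proposed above."""

    def __init__(self, error_code, error_msg,
                 http_status=None, x_user=None, details=None):
        self.error_code = error_code
        self.error_msg = error_msg
        self.http_status = http_status
        self.x_user = x_user
        self.details = details
        super().__init__(error_msg)


def error_response(exc: BaseProjectException) -> dict:
    """Serialize an exception into the standardized error payload."""
    payload = {"error_code": exc.error_code, "error_msg": exc.error_msg}
    for field in ("http_status", "x_user", "details"):
        value = getattr(exc, field, None)
        if value is not None:
            payload[field] = value
    return payload


exc = BaseProjectException("STORAGE_NOT_FOUND",
                           "Storage with id 'xyz' does not exist",
                           http_status=404, x_user="user123")
print(error_response(exc)["http_status"])  # → 404
```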
### Middleware Pattern

In each HTTP endpoint function:

```python
def main():
    try:
        # Extract user header
        x_user = request.headers.get("X-Fission-Params-UserId")
        # Route to handler
        return handler()
    except ValidationError as e:
        return error_response(e), 400
    except NotFoundError as e:
        return error_response(e), 404
    except ConflictError as e:
        return error_response(e), 409
    except StorageException as e:
        logger.error(f"Storage error: {e.error_code}: {e.error_msg}")
        return error_response(e), 500
    except Exception as e:
        logger.exception("Unexpected error")
        return {"error": "Internal server error"}, 500
```
---

## Implementation Order

1. **Phase 1**: Create exception modules for all three projects
2. **Phase 2**: Add/expand Pydantic models (storage first, then complete quota)
3. **Phase 3**: Refactor endpoints to use the exceptions and models
4. **Phase 4**: Refactor complex functions in the scheduler
5. **Phase 5**: Documentation pass: ensure all functions have docstrings
6. **Phase 6**: Test manually by running the functions (no automated tests to update)

---

## Verification Steps

1. **Manual Testing**:
   - Deploy each function to local Fission or use a test environment
   - Test error cases: invalid input, missing resources, database failures
   - Verify the error response format matches the specification
   - Check logs for proper error logging

2. **Code Review**:
   - All functions have docstrings with Args, Returns, Raises sections
   - No function exceeds ~50 lines (extract helpers where needed)
   - All exceptions are specific, not generic `Exception`
   - Request validation happens before business logic

3. **Import Verification**:
   - Fix broken imports (especially in py-eom-quota's userquota_filter.py)
   - Ensure circular dependencies are avoided

4. **Type Safety**:
   - Run a static type checker if available (mypy/pyright)
   - Ensure all functions have return type hints
---

## Critical Files to Modify

**py-eom-storage:**
- `src/exceptions.py` (new)
- `src/models.py` (create/extend)
- `src/filter_or_insert.py` (refactor)
- `src/update_or_delete.py` (refactor)
- `src/helpers.py` (add utilities)
- `src/vault.py` (minor: improve docs)

**py-eom-quota:**
- `src/exceptions.py` (new)
- `src/models.py` (extend with request models)
- `src/userquota_filter.py` (fix imports, refactor)
- `src/userquota_insert_or_delete.py` (refactor, fix SQL if needed)
- `src/helpers.py` (add utilities)

**py-ailbl-scheduler:**
- `src/exceptions.py` (new)
- `src/worker_scheduler_scan.py` (major refactor)
- `src/worker_session_picker.py` (refactor)
- `src/worker_session_poller.py` (refactor)
- `src/worker_schedule_auto_disable.py` (docs)
- `src/common.py` (docs, type hints)
- `src/helpers.py` (docs, maybe extract a class)

---

## Notes

- All changes are in `/workspaces/claude-marketplace/data/examples/`
- Preserve existing API contracts (URLs, HTTP methods)
- Do not change the database schema
- Maintain backward compatibility with existing clients
- Focus on internal improvements: error handling, validation, documentation
- Use consistent patterns across all three projects
72
.claude/plans/snappy-launching-pony.md
Normal file
@@ -0,0 +1,72 @@
# Fission Python Skill Plan

## Context
The user wanted to create a new skill called `fission-python-skill` in the `@fission-plugin/skills` directory. This skill should provide three main capabilities:
1. Create a new Fission Python project from a template (based on @data/py-eom-quota)
2. Analyze the configuration in `.fission/` of each fission-python project
3. Parse and update the docstring of a Fission function method

## Approach
Based on my exploration of the codebase:
- The example project @data/py-eom-quota shows a standard Fission Python project structure
- Fission configuration is stored in .fission/deployment.json and similar files
- Python functions contain Fission configuration in their docstrings using a specific format (between ```fission and ``` markers)
- The fission-plugin/skills directory currently contained empty SKILL.md and reference.md files

I implemented all three requested capabilities as shell scripts within the skill directory.

## Implementation Summary

### Phase 1: Skill Creation (Completed)
1. Created the skill directory: `/workspaces/claude-marketplace/fission-plugin/skills/fission-python-skill/`
2. Created SKILL.md following the skill format from the documentation
3. Created reference.md with detailed usage instructions
4. Implemented the three core tools as shell scripts:
   - `create-project.sh`: Creates a new Fission Python project from the template
   - `analyze-config.sh`: Analyzes the .fission configuration in a project
   - `update-docstring.sh`: Parses and updates docstrings in Fission function methods

### Phase 2: Tool Implementation Details (Completed)

#### create-project.sh
- Takes a project name and an optional destination directory
- Copies the template from @data/py-eom-quota (excluding .git, etc.)
- Replaces placeholder values in configuration files
- Provides usage instructions for next steps

#### analyze-config.sh
- Takes the path to a Fission project
- Reads and parses .fission/deployment.json (and related files)
- Outputs a structured summary of:
  - Environments and their resource settings
  - Packages and build commands
  - Functions and their triggers
  - Secrets and configmaps
  - Archives and source configuration

#### update-docstring.sh
- Takes the path to a Python file and, optionally, a function name
- Parses docstrings to extract embedded Fission configuration (between ```fission markers)
- Allows updating the Fission configuration within docstrings using the --set flag
- Can retrieve the current configuration using the --get flag (default)
- Preserves existing function code and documentation outside Fission blocks
- Uses a Python script for robust JSON handling and string manipulation

### Phase 3: Testing (Completed)
- Tested create-project.sh by generating a new project and verifying its structure
- Tested analyze-config.sh on the existing @data/py-eom-quota project
- Tested update-docstring.sh by retrieving and modifying Fission configuration in function docstrings
- All tools have proper help text and error handling

## Files Created
- `/workspaces/claude-marketplace/fission-python-skill/SKILL.md`
- `/workspaces/claude-marketplace/fission-python-skill/reference.md`
- `/workspaces/claude-marketplace/fission-python-skill/create-project.sh`
- `/workspaces/claude-marketplace/fission-python-skill/analyze-config.sh`
- `/workspaces/claude-marketplace/fission-python-skill/update-docstring.sh`

## Verification Results
✓ create-project.sh: Successfully creates new Fission Python projects from the template
✓ analyze-config.sh: Successfully analyzes .fission configuration, showing environments, packages, functions, secrets, etc.
✓ update-docstring.sh: Successfully extracts and updates Fission configuration in function docstrings

All tools are executable and include proper error handling and usage instructions.
156
.claude/plans/update-fission-skill.md
Normal file
@@ -0,0 +1,156 @@
# Plan: Update FissionPython Skill

## Context

The `fission-python-skill` plugin needs to be updated to meet new requirements for Fission Python projects:

1. **Build script**: `src/build.sh` must exist and be referenced correctly in `.fission/deployment.json`
2. **Dependencies**: `src/requirements.txt` must exist and contain the necessary packages (pydantic, etc.)
3. **CI/CD**: All projects must include a `.gitea/workflows/` directory with deployment workflows
4. **API design**: HTTP-trigger functions must use Pydantic models for request/response validation
5. **Documentation**: All functions must have proper docstrings and code comments
6. **Portability**: Remove hardcoded absolute paths; the plugin should work from any location

Current issues:
- `create-project.sh` uses the hardcoded path `/workspaces/claude-marketplace/data/py-eom-quota`
- The template resides in `data/examples/`, which is outside the plugin
- No validation of generated projects
- Documentation references incorrect paths

## Approach

**Step 1: Make the Plugin Portable**
- Copy the `py-eom-quota` template into `fission-python-skill/template/`
- Update `create-project.sh` to find the template relative to the script location using `dirname "$0"`
- Remove all absolute path references

**Step 2: Add Project Validation**

Enhance `create-project.sh` with post-creation validation:
- Check that `src/build.sh` exists and is executable
- Verify `.fission/deployment.json` references the correct build command (`./build.sh`)
- Check that `src/requirements.txt` exists and contains the required dependencies:
  - `pydantic==2.x`
  - `flask` (for HTTP handlers)
  - `psycopg2-binary` or `psycopg2` (if the database is used)
- Verify the `.gitea/workflows/` directory exists with the 4 standard workflow files
- Validate that function files contain Pydantic models (basic grep check)
- Warn if docstrings appear minimal or missing

**Step 3: Update Documentation**
- Fix `SKILL.md` and `reference.md` to reference the correct template path
- Document the new validation checks
- Update examples to show portable usage

**Step 4: Potentially Add a New Tool**

Consider adding a separate validation tool (`validate-project.sh`) that can be run on existing projects to check compliance with standards.
## Critical Files to Modify

1. **`fission-python-skill/create-project.sh`**
   - Change `TEMPLATE_DIR` to use a relative path: `$(dirname "$0")/template`
   - Add validation functions after project creation
   - Improve error messages
   - Add warnings for missing documentation

2. **`fission-python-skill/template/`** (new directory)
   - Copy the entire structure from `data/examples/py-eom-quota/`
   - Ensure `build.sh` has correct permissions (755)
   - Verify all configuration files

3. **`fission-python-skill/SKILL.md`** and **`reference.md`**
   - Update template path references
   - Document validation behavior
   - Update examples

4. **`.claude-plugin/marketplace.json`**
   - No changes needed (plugin registration is OK)
## Implementation Details

### Template Structure

```
fission-python-skill/
└── template/
    ├── .fission/
    │   ├── deployment.json
    │   ├── dev-deployment.json
    │   └── local-deployment.json
    ├── .gitea/
    │   └── workflows/
    │       ├── dev-deployment.yaml
    │       ├── install-dispatch.yaml
    │       ├── uninstall-dispatch.yaml
    │       └── analystic-dispatch.yaml
    ├── src/
    │   ├── build.sh (executable)
    │   ├── requirements.txt (with pydantic, flask, etc.)
    │   ├── models.py (with pydantic models)
    │   ├── exceptions.py
    │   ├── helpers.py
    │   ├── vault.py
    │   └── <example functions>.py (with docstrings and pydantic usage)
    ├── test/
    ├── manifests/
    ├── migrates/
    ├── specs/
    ├── dev-requirements.txt
    ├── README.md
    ├── .gitignore
    └── .devcontainer/
```
### Validation Checklist in create-project.sh

After copying the template and performing substitutions:
1. `[ -f "$PROJECT_PATH/src/build.sh" ]` || warning
2. `[ -x "$PROJECT_PATH/src/build.sh" ]` || chmod +x
3. Check that `deployment.json` contains `"./build.sh"` in packages.buildcmd
4. `[ -f "$PROJECT_PATH/src/requirements.txt" ]` || error
5. Check that requirements.txt contains `pydantic` (grep -q "pydantic")
6. Check that requirements.txt contains `flask` (grep -q "flask")
7. `[ -d "$PROJECT_PATH/.gitea/workflows" ]` || warning/copy from template
8. Count workflow files: there should be at least 4 .yaml files
9. Optional: check that Python files have docstrings (grep for triple quotes)
10. Optional: check for pydantic BaseModel usage in models.py

### Portable Path Resolution

```bash
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
TEMPLATE_DIR="$SCRIPT_DIR/template"
```

This ensures the plugin works regardless of where it is invoked from.
## Verification Steps

1. **Test create-project**:
   - Run `./fission-python-skill/create-project.sh test-project ./tmp/`
   - Verify all expected directories/files exist
   - Check that validation warnings/errors appear appropriately

2. **Test portability**:
   - Move the plugin to a different directory
   - Run create-project from there
   - It should still work without path adjustments

3. **Test validation**:
   - Manually delete `src/requirements.txt` from the template and create a project → should error
   - Remove pydantic from requirements.txt → should warn
   - Remove .gitea/workflows → should warn
   - Change the build.sh buildcmd to something else → should warn

4. **Test the generated project**:
   - Verify functions have docstrings with fission config blocks
   - Verify models.py uses pydantic BaseModel
   - Verify HTTP triggers are properly defined in deployment.json

## Risks and Considerations

- **Template duplication**: Moving the template into the plugin duplicates existing examples. That's acceptable: the examples in `data/examples/` are finished projects, while the template is a starter. Keep both.
- **Validation strictness**: Start with warnings for most checks; error only on critical missing files (requirements.txt, build.sh). Can tighten later.
- **Template maintenance**: When updating the template, only modify `fission-python-skill/template/`. The examples in `data/examples/` are independent and can diverge if needed.

## Post-Implementation

- Update any scripts or docs that reference the old template path
- Test the skill end-to-end through Claude Code
- Consider adding a `validate-project.sh` tool for existing projects
6
.claude/settings.json
Normal file
@@ -0,0 +1,6 @@
{
    "permissions"      : {"read": true, "write": true, "execute": true},
    "enabledPlugins"   : {"engineering-skills@claude-code-skills": true, "skill-creator@daymade-skills": true},
    "plansDirectory"   : "/workspaces/claude-marketplace/.claude/plans",
    "bypassPermissions": true
}
59
.devcontainer/devcontainer.json
Normal file
@@ -0,0 +1,59 @@
{
    "customizations"   : {
        "vscode": {
            "extensions": [
                // VS Code specific
                "ms-azuretools.vscode-docker",
                "dbaeumer.vscode-eslint",
                "j-brooke.fracturedjsonvsc",
                // Python specific
                "ms-python.python",
                "charliermarsh.ruff",
                // Markdown specific
                "yzhang.markdown-all-in-one",
                // JSON formatter
                "j-brooke.fracturedjsonvsc",
                // YAML formatter
                "kennylong.kubernetes-yaml-formatter",
                "Continue.continue" // AI
            ],
            "settings" : {
                "diffEditor.renderSideBySide": true,
                "editor.suggestSelection"    : "first",
                "editor.tabSize"             : 4,
                "editor.wordWrap"            : "off",
                "editor.wordWrapColumn"      : 200,
                "explorer.confirmDelete"     : false,
                "explorer.confirmDragAndDrop": false,
                "files.exclude"              : {
                    "**/.classpath"  : true,
                    "**/.DS_Store"   : true,
                    "**/.factorypath": true,
                    "**/.git"        : true,
                    "**/.project"    : true,
                    "**/.settings"   : true,
                    "**/*.js"        : {"when": "$(basename).ts"},
                    "**/*.js.map"    : true
                },
                "ansible.validation.enabled" : false,
                "telemetry.telemetryLevel"   : "off"
            }
        }
    },
    "forwardPorts"     : [],
    "dockerComposeFile": ["docker-compose.yaml"],
    "service"          : "devcontainer",
    "workspaceFolder"  : "/workspaces/${localWorkspaceFolderBasename}",
    "mounts"           : [
        // "source=${localEnv:HOME}/.claude,target=/home/vscode/.claude,type=bind",
        "source=${localEnv:HOME}/Workspaces/self/sdlc-agents/agents,target=/workspaces/${localWorkspaceFolderBasename}/.sdlc-agents,type=bind"
    ],
    "containerEnv"     : {
        "ANTHROPIC_API_KEY"         : "",
        "ANTHROPIC_BASE_URL"        : "https://openrouter.ai/api",
        // "ANTHROPIC_AUTH_TOKEN"   : "${localEnv:OPENROUTER_API_KEY}",
        "ANTHROPIC_MODEL"           : "stepfun/step-3.5-flash:free",
        "ANTHROPIC_SMALL_FAST_MODEL": "nvidia/nemotron-3-super-120b-a12b:free"
    },
    "postStartCommand" : "/workspaces/${localWorkspaceFolderBasename}/.devcontainer/setup.sh"
}
8
.devcontainer/docker-compose.yaml
Normal file
@@ -0,0 +1,8 @@
services:
  devcontainer:
    image: mcr.microsoft.com/vscode/devcontainers/python:3.11-bullseye
    volumes:
      - ../..:/workspaces:cached
    command: sleep infinity
    env_file:
      - .env
2
.devcontainer/example.env
Normal file
@@ -0,0 +1,2 @@
OPENROUTER_API_KEY=
ANTHROPIC_AUTH_TOKEN=$OPENROUTER_API_KEY
20
.devcontainer/setup.sh
Executable file
@@ -0,0 +1,20 @@
#!/bin/bash

# For debugging
# set -eux

sudo apt update
sudo apt install -y tmux

# install claude code
curl -fsSL https://claude.ai/install.sh | bash

# install claude plugins
claude plugin marketplace add https://github.com/daymade/claude-code-skills
claude plugin marketplace add https://github.com/alirezarezvani/claude-skills

# Marketplace name: daymade-skills (from marketplace.json)
# claude plugin install skill-creator@daymade-skills
# claude plugin install engineering-skills@claude-code-skills
5
.gitignore
vendored
Normal file
@@ -0,0 +1,5 @@
/.sdlc-agents
/.devcontainer/.env
/.vscode
/.claude/settings.local.json
/data
121
CLAUDE.md
Normal file
@@ -0,0 +1,121 @@
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Repository Overview

This is a **Claude Marketplace** repository — a plugin ecosystem for Claude Code. It contains:

- **Plugin Registry** (`.claude-plugin/marketplace.json`): Defines available plugins/skills
- **Fission Python Skill** (`fission-python/`): Plugin providing tools for Fission serverless Python projects
- **Example Projects** (`data/examples/`): Example Fission Python projects (py-eom-quota, py-eom-storage, py-ailbl-scheduler)
- **Fission Python Template** (`fission-python/template/`): Default template used when creating new Fission Python projects

> **Note:** `.sdlc-agents/` is **not** part of this repo. It is bind-mounted into the devcontainer from `~/Workspaces/self/sdlc-agents/agents` on the host.

## Plugin System Architecture

Claude Code discovers plugins via `.claude-plugin/marketplace.json` at the repository root. Each plugin entry:
- Has an `id`, `name`, `description`, `type` (`skill`), `path`, and `tools` array
- Points to a directory containing the tool implementations (executable `.sh` scripts)

The `fission-python` plugin (`fission-python/.claude-plugin/plugin.json`) exposes three tools:
- `create-project` — `create-project.sh`
- `analyze-config` — `analyze-config.sh`
- `update-docstring` — `update-docstring.sh`

Both `marketplace.json` (registry) and `plugin.json` (plugin definition) must be present for discovery to work.

## Working with the Fission Python Skill

```bash
# Make scripts executable (required after fresh clone)
chmod +x fission-python/*.sh

# Create a new Fission Python project
./fission-python/create-project.sh <project-name> [destination]

# Analyze .fission configuration in an existing project
./fission-python/analyze-config.sh <project-path>  # requires jq

# View or update Fission config embedded in a function docstring
./fission-python/update-docstring.sh <file.py> <function-name> --get
./fission-python/update-docstring.sh <file.py> <function-name> --set '<json>'
```

### Fission Configuration in Docstrings

Fission config is embedded between ` ```fission ` and ` ``` ` markers in Python function docstrings:

```python
def main():
    """
    ```fission
    {
        "name": "function-name",
        "http_triggers": {
            "trigger-name": {"url": "/endpoint", "methods": ["GET"]}
        }
    }
    ```
    """
```

The `update-docstring.sh` tool parses and updates these blocks.
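The embedded block can be extracted with a few lines of Python. This is a sketch of the parsing idea, not the actual implementation inside `update-docstring.sh` (the backticks are spelled with a regex quantifier so the snippet nests safely inside documentation):

```python
import json
import re

# Matches a fission config block delimited by triple-backtick fences.
FISSION_BLOCK = re.compile(r"`{3}fission\s*\n(.*?)\n\s*`{3}", re.DOTALL)


def get_fission_config(docstring: str):
    """Return the parsed fission config dict, or None if no block is present."""
    match = FISSION_BLOCK.search(docstring or "")
    return json.loads(match.group(1)) if match else None


fence = "`" * 3  # builds ``` without breaking this document's code fence
doc = f"""
{fence}fission
{{"name": "function-name", "http_triggers": {{}}}}
{fence}
"""
print(get_fission_config(doc)["name"])  # → function-name
```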
### Updating the Project Template

The template lives in `fission-python/template/`. When `create-project.sh` runs, it copies this directory and performs string substitutions (e.g., replacing the project name). The plugin locates the template relative to its own position, making it portable.

Key template areas to modify:
- `src/` — default function structure and helpers
- `.fission/deployment.json` — default environment/package/function config
- `.gitea/workflows/` — CI/CD pipeline workflows
- `requirements.txt` / `dev-requirements.txt` — dependencies

## Testing

No centralized test runner. Test skill scripts by running them directly with various arguments, verifying argument parsing, file operations, and JSON output. Example projects in `data/examples/` use pytest.

## Configuration

- `.claude/settings.json` — Claude Code settings (plans directory)
- `.claude-plugin/marketplace.json` — Plugin registry
- `fission-python/.claude-plugin/plugin.json` — Plugin definition

## Development Environment (Devcontainer)

The `.devcontainer/devcontainer.json` configures a VS Code dev container with:
- `postStartCommand`: `.devcontainer/setup.sh`
- Bind mount: `~/Workspaces/self/sdlc-agents/agents` → `.sdlc-agents/` (external SDLC system)

Environment variables set in the devcontainer:

| Variable | Default | Purpose |
|---|---|---|
| `ANTHROPIC_API_KEY` | (empty) | API key |
| `ANTHROPIC_BASE_URL` | `https://openrouter.ai/api` | API endpoint |
| `ANTHROPIC_MODEL` | `stepfun/step-3.5-flash:free` | Default model |
| `ANTHROPIC_SMALL_FAST_MODEL` | `nvidia/nemotron-3-super-120b-a12b:free` | Alternate model |

## Common Pitfalls

- **Plugin tools must be executable**: Run `chmod +x fission-python/*.sh`
- **jq dependency**: `analyze-config.sh` requires `jq` for JSON parsing
- **sed portability**: Scripts use GNU sed — on macOS use `sed -i ''` instead of `sed -i`
- **Template exclusions**: `create-project.sh` excludes `.git`, `__pycache__`, `*.pyc`, `.env` on copy
- **SDLC agents**: Only available inside the devcontainer (bind-mounted, not in this repo)

## AI Guidelines

### Planning Rule
Before making code changes for non-trivial tasks:
1. Use `EnterPlanMode` to create a detailed implementation plan
2. Explore the codebase to understand existing patterns
3. Present the plan for approval before implementing
4. Use `TaskList` to track progress on multi-step tasks

### Agent Usage Rule
Use agents for complex, multi-step, or parallelizable tasks:
- Research/exploration → Explore agent
- Implementation planning → Plan agent
509
README.md
Normal file
509
README.md
Normal file
@@ -0,0 +1,509 @@
|
||||
# Claude Marketplace

[](https://fission.io/)
[](https://python.org)
[](https://claude.ai/code)

A plugin ecosystem for Claude Code featuring the **Fission Python Skill** for serverless Python projects and a complete **SDLC Agent System** for automated software development workflows.

## Table of Contents

- [Overview](#overview)
- [Fission Python Skill](#fission-python-skill)
- [SDLC Agent System](#sdlc-agent-system)
- [Project Structure](#project-structure)
- [Quick Start](#quick-start)
- [Development](#development)
- [Configuration](#configuration)
- [Related Documentation](#related-documentation)
- [Key Technologies](#key-technologies)
- [Common Pitfalls](#common-pitfalls)
- [Quick Reference](#quick-reference)

## Overview

Claude Marketplace is a repository that provides extensible plugins and tools for Claude Code, Anthropic's AI-powered development environment. It contains two major components:

1. **Fission Python Skill** - A plugin for creating, analyzing, and managing Fission serverless Python projects on Kubernetes
2. **SDLC Agent System** - A complete multi-agent Software Development Life Cycle system for automated planning, architecture validation, coding, and code review

These components work independently but can be used together to create and maintain production-ready software with AI assistance.

## Fission Python Skill

The Fission Python Skill provides three essential tools for working with [Fission](https://fission.io/) serverless functions written in Python.

### When to Use

- Creating new Fission Python projects from a standardized template
- Analyzing existing Fission configuration files
- Parsing and updating Fission configuration embedded in Python function docstrings

### Available Tools

| Tool | Purpose |
|------|---------|
| `create-project.sh` | Create new Fission Python project from template |
| `analyze-config.sh` | Analyze `.fission` configuration in a project |
| `update-docstring.sh` | Parse and update Fission config in function docstrings |

### Installation

The skill scripts are ready to use. Make them executable:

```bash
chmod +x fission-python/*.sh
```

### Usage Examples

**Create a new Fission project:**
```bash
./fission-python/create-project.sh my-function ./projects/
```

**Analyze project configuration:**
```bash
./fission-python/analyze-config.sh ./my-fission-project
```

**View function configuration:**
```bash
./fission-python/update-docstring.sh ./src/func.py main --get
```

**Update function configuration:**
```bash
./fission-python/update-docstring.sh ./src/func.py main --set '{"http_triggers": {"api": {"url": "/v1/data", "methods": ["GET"]}}}'
```

### Project Structure

Projects created with `create-project.sh` follow the standard Fission Python layout:

```
project/
├── .fission/
│   ├── deployment.json        # Main configuration
│   ├── dev-deployment.json    # Development overrides
│   └── local-deployment.json  # Local overrides
├── src/                       # Python function source files
│   └── function.py            # Functions with fission config in docstrings
├── specs/                     # Generated Fission specs
├── test/                      # Unit tests
├── manifests/                 # Kubernetes manifests
├── migrates/                  # Database migrations
├── requirements.txt           # Runtime dependencies
└── dev-requirements.txt       # Development dependencies
```

### Fission Configuration in Docstrings

Fission configuration is embedded in Python function docstrings between ````fission` and ```` markers:

````python
def main():
    """
    ```fission
    {
        "name": "function-name",
        "http_triggers": {
            "trigger-name": {
                "url": "/endpoint",
                "methods": ["GET", "POST"]
            }
        }
    }
    ```
    """
    # implementation
````

The `update-docstring.sh` tool parses and updates these configurations.
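
For illustration, the embedded block can be pulled out with standard text tools — a minimal sketch, not the actual implementation of `update-docstring.sh` (the temp-file path and sample function are made up):

```shell
# Extract the JSON between ```fission and ``` markers from a function docstring.
# MARK holds the backtick fence so the snippet nests cleanly in Markdown.
MARK='```'
cat > /tmp/demo_func.py <<EOF
def main():
    """
    ${MARK}fission
    {"name": "demo-fn", "http_triggers": {"api": {"url": "/v1", "methods": ["GET"]}}}
    ${MARK}
    """
EOF

# Print only the lines strictly between the opening and closing markers.
awk -v m="$MARK" 'index($0, m"fission") {f=1; next}
                  f && index($0, m)     {f=0}
                  f' /tmp/demo_func.py
```

Piping that output into `jq` would then give structured access to fields such as `.name` and `.http_triggers`.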

## SDLC Agent System

The SDLC (Software Development Life Cycle) Agent System is a complete multi-agent framework for automated software development. It orchestrates seven specialized agents to take a feature request from planning through implementation and review.

### The Seven Agents

| Agent | Role | Key Responsibilities |
|-------|------|---------------------|
| **Initializer** | Setup | Initialize `agent-context/`, detect stack, generate harness scripts, populate domain skills |
| **Planning** | Planning | Transform requests into structured, architecture-aware plans with isolated tasks |
| **Architect** | Review | Evaluate architecture, enforce architectural rules, validate design decisions |
| **Coding** | Implementation | Implement one task at a time from self-contained task files |
| **Code Review** | Quality | Review code for correctness, architecture adherence, and debt awareness |
| **Curator** | Orchestration | Manage task progression, feature completion, and handoffs |
| **Retro** | Improvement | Conduct retrospectives and process improvement |

### Agent Workflow

```
User Request
     ↓
Initializer (one-time setup)
     ↓
Planning Agent (creates feature + tasks)
     ↓
Architect Agent (reviews plan)
     ↓
Coding Agent (implements task 1)
     ↓
Coding Agent (implements task 2)
     ↓
    ...
     ↓
Code Review Agent (reviews implementation)
     ↓
Curator (approves/requests changes)
     ↓
Feature Complete
```

### Setup

To use the SDLC Agent System in a project:

```bash
# Copy agent templates to project root
./.sdlc-agents/setup.sh /path/to/target-project

# This creates: /path/to/target-project/agent-context/
# with harness/, memory/, features/, and extensions/ directories
```

The setup is **idempotent** - safe to run multiple times. Use `-f` to force overwrite existing files.

### Agent Context Structure

After setup, the project will have:

```
agent-context/
├── harness/                    # Task execution scripts and tracking
│   ├── init-project.sh
│   ├── run-quality-gates.sh
│   ├── run-arch-tests.sh
│   ├── run-feature.sh
│   ├── start-task.sh
│   └── ...
├── memory/                     # Learning playbook and retrieval
│   └── learning-playbook.md
├── features/                   # Feature specs and task definitions
│   └── FEAT-001/
│       ├── feature.md          # Feature context
│       ├── progress-log.md     # Progress tracking
│       └── tasks/
│           ├── T01-setup.md
│           └── T02-implement.md
└── extensions/                 # Custom rules and skills
    ├── _all-agents/            # Global constraints
    ├── planning-agent/
    ├── architect-agent/
    ├── coding-agent/
    ├── codereview-agent/
    └── skills/                 # Project-specific skills
        ├── domain/
        └── ...
```

### Skill System

Agents can load domain-specific skills that provide patterns, constraints, and guidance:

**Stack Skills** (auto-detected):
- Java/Kotlin, TypeScript/JavaScript, Python, Go, Rust, .NET/C#, Ruby, PHP

**Pattern Skills** (on-demand):
- Hexagonal, Layered, Modular Monolith, Microservices, Spec-Driven

**Framework Skills**:
- Embabel

Skills are discovered using `stack-detection.md` and can be explicitly requested via skill directives:
- `#TDD` - Force-load TDD skill
- `#Hexagonal,Clean` - Load multiple skills
- `#only:Python,Security` - Use only these skills
- `!Kafka` - Exclude a skill
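
As a rough illustration of how such directives can be recognized (the real logic lives in `parse-skill-directives.sh`; this regex-based sketch is an assumption, not its actual implementation):

```shell
# Pull "#Skill" include directives and "!Skill" exclude directives out of a prompt.
prompt='Refactor the service using #Hexagonal,Clean but !Kafka'

includes=$(printf '%s\n' "$prompt" | grep -oE '#[A-Za-z0-9,:-]+' | tr -d '#' | tr ',' ' ')
excludes=$(printf '%s\n' "$prompt" | grep -oE '![A-Za-z0-9-]+' | tr -d '!')

echo "include: $includes"   # → include: Hexagonal Clean
echo "exclude: $excludes"   # → exclude: Kafka
```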

### Harness and Quality Gates

The generated harness scripts provide:

- **`init-project.sh`** - Install dependencies, compile, verify environment
- **`run-quality-gates.sh`** - Run all quality checks (tests, lint, arch, coverage, security)
- **`run-arch-tests.sh`** - Run architecture validation tests
- **`run-feature.sh`** - Run tests for a specific feature
- **`start-task.sh`** - Mark task as in_progress
- **`collect-metrics.sh`**, **`compare-metrics.sh`**, **`archive-metrics.sh`** - Metrics management

These scripts are **stack-specific** - generated based on the detected technology stack and configured tools.
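
A generated gate runner might look roughly like the following — a hypothetical skeleton only (the `run_gate` helper and gate names are illustrative; the real scripts are emitted per stack by the Initializer Agent):

```shell
# Hypothetical skeleton of a generated run-quality-gates.sh for a Python stack.
fail=0
run_gate() {            # run one named gate; record failure but keep going
  name="$1"; shift
  echo "== $name =="
  if "$@"; then echo "PASS: $name"; else echo "FAIL: $name"; fail=1; fi
}

# Stand-ins for real gate commands such as `pytest` or `ruff check .`
run_gate "unit tests" true
run_gate "lint"       true

[ "$fail" -eq 0 ] && echo "ALL GATES PASSED"   # → ALL GATES PASSED
```

Running every gate before exiting (instead of aborting on the first failure) gives the Code Review Agent a complete picture in one pass.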

## Project Structure

```
claude-marketplace/
├── .claude-plugin/
│   └── marketplace.json         # Plugin registry for Claude Code
├── .sdlc-agents/                # Complete SDLC agent system
│   ├── agents/                  # Agent configurations (7 agents)
│   │   ├── initializer-agent.md
│   │   ├── planning-agent.md
│   │   ├── architect-agent.md
│   │   ├── coding-agent.md
│   │   ├── codereview-agent.md
│   │   ├── curator-agent.md
│   │   └── retro-agent.md
│   ├── guardrails/              # Quality guidelines
│   ├── skills/                  # Stack, pattern, and framework skills
│   │   ├── stacks/              # Language-specific (Python, Java, TS, Go, etc.)
│   │   ├── patterns/            # Architecture patterns
│   │   └── frameworks/          # Framework-specific guidance
│   ├── templates/               # Templates for agent-context structure
│   ├── tools/                   # Utility scripts (discovery, validation, skills)
│   └── setup.sh                 # Initialize agent-context in a project
├── fission-python/              # Fission Python plugin
│   ├── .claude-plugin/
│   │   └── plugin.json          # Plugin definition
│   ├── template/                # Project template (copied when creating new projects)
│   │   ├── .fission/
│   │   ├── src/
│   │   ├── test/
│   │   ├── manifests/
│   │   ├── migrates/
│   │   ├── specs/
│   │   ├── requirements.txt
│   │   └── dev-requirements.txt
│   ├── create-project.sh        # Create new Fission project
│   ├── analyze-config.sh        # Analyze .fission configuration
│   ├── update-docstring.sh      # Parse/update function docstrings
│   ├── SKILL.md                 # Quick reference
│   └── reference.md             # Detailed tool documentation
├── data/
│   └── examples/                # Example Fission Python projects
│       ├── py-eom-quota/
│       ├── py-eom-storage/
│       └── py-ailbl-scheduler/
└── .devcontainer/               # VS Code dev container configuration
    └── devcontainer.json
```

## Quick Start

### Setting Up Development Environment

**Recommended:** Open the repository in a devcontainer (VS Code with the Dev Containers extension):

```bash
# Open in container from VS Code
# The devcontainer will set up dependencies and environment
```

**Manual setup:**
```bash
# Make scripts executable
chmod +x fission-python/*.sh
chmod +x .sdlc-agents/setup.sh
```

### Using the Fission Python Skill

1. **Create a new Fission project:**
```bash
./fission-python/create-project.sh my-api ./projects/
```

2. **Analyze the configuration:**
```bash
./fission-python/analyze-config.sh ./projects/my-api
```

3. **Develop your function** in `src/`, updating docstrings as needed with `update-docstring.sh`

4. **Deploy to Fission** following the Fission documentation

### Using the SDLC Agent System

1. **Initialize an existing project** (new or legacy):

```bash
./.sdlc-agents/setup.sh /path/to/your/project
```

2. The Initializer Agent will:
   - Detect the technology stack
   - Generate stack-specific harness scripts
   - Create initial feature structure
   - Populate domain skills based on project analysis
   - For legacy projects: discover and document architecture

3. **Start a new feature:**
   - Create a feature request in `agent-context/features/`
   - The Planning Agent will create detailed task files
   - The Architect Agent will review and approve the plan
   - Coding Agents implement each task
   - Code Review Agents validate each implementation
   - Curator manages handoffs and completion

4. **Run quality gates:**
```bash
cd /path/to/your/project
./agent-context/harness/run-quality-gates.sh
```

### Testing Changes

- **Fission skill scripts:** Run directly with various arguments
- **Template projects:** Use pytest in generated projects
- **SDLC agents:** Run `setup.sh` and verify `agent-context/` creation
## Development

### Modifying Fission Python Skill

1. Edit scripts in `fission-python/`
2. Update `plugin.json` to modify tools or metadata
3. Update `marketplace.json` to change the plugin registry
4. Test by running the scripts directly

### Updating the Project Template

The template at `fission-python/template/` is used by `create-project.sh`:

- Modify `src/` to change the default function structure
- Update `.fission/deployment.json` for environment configuration
- Adjust `requirements.txt` for different dependencies
- The template `build.sh` is used when building packages
- Ensure `.gitea/workflows/` includes CI/CD workflows

The plugin locates the template relative to its own location, making it portable.

### Plugin Registration

Both files must be present:
- `.claude-plugin/marketplace.json` (registry - lists all plugins)
- `fission-python/.claude-plugin/plugin.json` (plugin definition)

### Invoking Skills in Claude Code

Once registered, skills can be invoked through Claude Code's tool system. The tool names correspond to the script basenames:
- `create-project`
- `analyze-config`
- `update-docstring`

## Configuration

### Environment Variables (Dev Container)

Set in `.devcontainer/devcontainer.json`:

| Variable | Default | Purpose |
|----------|---------|---------|
| `ANTHROPIC_API_KEY` | (empty) | API key (mounted from local) |
| `ANTHROPIC_BASE_URL` | `https://openrouter.ai/api` | API endpoint |
| `ANTHROPIC_MODEL` | `stepfun/step-3.5-flash:free` | Default model |
| `ANTHROPIC_SMALL_FAST_MODEL` | `nvidia/nemotron-3-super-120b-a12b:free` | Alternate model |

### Claude Code Settings

- `.claude/settings.json` - Main settings (plans directory, hooks)
- `.claude/settings.local.json` - Local overrides (not committed)

### Dependencies

The `analyze-config.sh` tool requires `jq` for JSON parsing:
```bash
# Debian/Ubuntu
sudo apt-get install jq

# macOS
brew install jq
```

## Related Documentation

### Comprehensive Guides
- **[CLAUDE.md](CLAUDE.md)** - Complete project guide with detailed architecture and workflows

### Fission Python Skill
- **[fission-python/SKILL.md](fission-python/SKILL.md)** - Quick reference and usage patterns
- **[fission-python/reference.md](fission-python/reference.md)** - Detailed tool documentation

### SDLC Agents
- **[.sdlc-agents/skills/README.md](.sdlc-agents/skills/README.md)** - Skill system usage and discovery
- **[.sdlc-agents/agents/initializer-agent.md](.sdlc-agents/agents/initializer-agent.md)** - Setup agent documentation
- **[.sdlc-agents/agents/planning-agent.md](.sdlc-agents/agents/planning-agent.md)** - Planning agent documentation
- **[.sdlc-agents/agents/architect-agent.md](.sdlc-agents/agents/architect-agent.md)** - Architecture review agent
- **[.sdlc-agents/agents/coding-agent.md](.sdlc-agents/agents/coding-agent.md)** - Implementation agent
- **[.sdlc-agents/agents/codereview-agent.md](.sdlc-agents/agents/codereview-agent.md)** - Code review agent
- **[.sdlc-agents/agents/curator-agent.md](.sdlc-agents/agents/curator-agent.md)** - Orchestration agent
- **[.sdlc-agents/agents/retro-agent.md](.sdlc-agents/agents/retro-agent.md)** - Retrospective agent
- **[.sdlc-agents/skills/harness-spec.md](.sdlc-agents/skills/harness-spec.md)** - Harness specification

## Key Technologies

- **Fission** - Serverless framework for Kubernetes
- **Python** - Primary language for the Fission skill plugin
- **Claude Code** - AI-powered development environment by Anthropic
- **Kubernetes** - Container orchestration platform for Fission
- **SDLC Agents** - Multi-agent system for automated software development
- **Shell scripting** - Tool implementations (Bash with `jq` for JSON)

## Common Pitfalls

- **Plugin tools must be executable**: Always run `chmod +x` on `.sh` files
- **Do not commit secrets**: `.env` files are gitignored; the template creates a placeholder only
- **Plugin registration**: Both `marketplace.json` AND the plugin's `plugin.json` must exist
- **Template path**: `create-project.sh` resolves `fission-python/template/` relative to the script (portable)
- **jq dependency**: `analyze-config.sh` requires `jq` installed
- **sed portability**: Scripts use GNU sed; may need adjustment for macOS (`sed -i ''`)

## Quick Reference

### Fission Python Skill

```bash
# Create new project
fission-python/create-project.sh my-project ./output/

# Analyze configuration
fission-python/analyze-config.sh ./existing-project

# View docstring config
fission-python/update-docstring.sh ./src/func.py main --get

# Update docstring config
fission-python/update-docstring.sh ./src/func.py main --set '{"http_triggers": {"api": {"url": "/v1", "methods": ["GET"]}}}'
```

### SDLC Agent System

```bash
# Setup agents in a project
.sdlc-agents/setup.sh /path/to/project

# Resolve skill paths for planning agent
.sdlc-agents/tools/skills/resolve-skills.sh --agent planning python tdd

# Parse skill directives from user prompt
.sdlc-agents/tools/skills/parse-skill-directives.sh "Use #Hexagonal but !Kafka"
```

### Harness Scripts (after setup)

```bash
# Initialize project (install deps, compile)
./agent-context/harness/init-project.sh

# Run all quality checks
./agent-context/harness/run-quality-gates.sh

# Run architecture tests only
./agent-context/harness/run-arch-tests.sh

# Run feature-specific tests
./agent-context/harness/run-feature.sh FEAT-001
```

---

Built for the Claude Code ecosystem. See [CLAUDE.md](CLAUDE.md) for comprehensive documentation.
8
fission-python/.claude-plugin/plugin.json
Normal file
@@ -0,0 +1,8 @@
{
    "id"         : "fission-python",
    "name"       : "FissionPython",
    "description": "Skill for creating, analyzing, and managing Fission Python projects.",
    "type"       : "skill",
    "tools"      : ["create-project", "analyze-config", "update-docstring"],
    "version"    : "1.0.1"
}
83
fission-python/SKILL.md
Normal file
@@ -0,0 +1,83 @@

# Fission Python Skill

Skill for creating, analyzing, and managing Fission Python projects.

---

## When to Use

- Creating new Fission Python projects from templates
- Analyzing Fission configuration (`.fission` files) in existing projects
- Parsing and updating embedded Fission configuration in function docstrings

## Key Concepts

- **Fission Project Structure**: Standard layout with `src/`, `.fission/`, and `specs/` directories
- **Function Docstring Configuration**: Fission configuration embedded in Python function docstrings between ```` ```fission ```` markers
- **Configuration Files**: `.fission/deployment.json` defines environments, packages, functions, secrets, and configmaps
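
An illustrative shape for `.fission/deployment.json`, inferred from the fields that `analyze-config.sh` reads; all names and values below are placeholders, not an authoritative schema:

```json
{
    "namespace": "default",
    "environments": {
        "python": {
            "image": "fission/python-env",
            "builder": "fission/python-builder",
            "mincpu": 100, "maxcpu": 500,
            "minmemory": 128, "maxmemory": 512,
            "poolsize": 1
        }
    },
    "packages": {
        "my-project-pkg": {
            "buildcmd": "./build.sh",
            "sourcearchive": "src",
            "env": "python"
        }
    },
    "function_common": {
        "pkg": "my-project-pkg",
        "executor": { "select": "poolmgr" }
    },
    "secrets": {},
    "configmaps": {}
}
```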

## Patterns

### Project Template
A standard Fission Python project includes:
- `src/` - Python function source files with `build.sh` and `requirements.txt`
- `.fission/` - Fission configuration (deployment.json, etc.)
- `specs/` - Generated Fission specs
- `test/` - Unit tests
- `manifests/` - Kubernetes manifests
- `migrates/` - Database migrations
- `.devcontainer/` - Development container configuration
- `.gitea/workflows/` - CI/CD deployment workflows

All functions should use pydantic models for request/response validation and include comprehensive docstrings.

### Docstring Format
Fission configuration in docstrings follows this pattern:

````python
def main():
    """
    Function description

    ```fission
    {
        "name": "function-name",
        "http_triggers": {
            "trigger-name": {
                "url": "/endpoint",
                "methods": ["GET", "POST"]
            }
        }
    }
    ```
    """
    # function implementation
````

## Tools

| Tool | Purpose |
|------|---------|
| `create-project.sh` | Create new Fission Python project from template |
| `analyze-config.sh` | Analyze `.fission` configuration in a project |
| `update-docstring.sh` | Parse and update docstrings of Fission function methods |

## Examples

### Create a new project
```bash
fission-python/create-project.sh my-new-function ./projects/
```

### Analyze project configuration
```bash
fission-python/analyze-config.sh ./my-fission-project
```

### Update function docstring
```bash
fission-python/update-docstring.sh ./src/my_function.py main
```

## Related Skills

- None - this is a specialized skill for Fission Python projects
151
fission-python/analyze-config.sh
Executable file
@@ -0,0 +1,151 @@

#!/bin/bash

# Fission Configuration Analyzer
# Analyzes and displays fission configuration from the .fission directory

set -euo pipefail

usage() {
    echo "Usage: $0 <project-path>"
    echo "  project-path: Path to the fission project directory (should contain a .fission/ subdirectory)"
    exit 1
}

if [[ $# -ne 1 ]]; then
    usage
fi

PROJECT_PATH="$1"

# Validate project path
if [[ ! -d "$PROJECT_PATH" ]]; then
    echo "Error: Project path '$PROJECT_PATH' does not exist"
    exit 1
fi

FISSION_DIR="$PROJECT_PATH/.fission"

if [[ ! -d "$FISSION_DIR" ]]; then
    echo "Error: .fission directory not found in '$PROJECT_PATH'"
    exit 1
fi

echo "🔍 Analyzing Fission configuration in '$PROJECT_PATH'"
echo "=================================================="

# Analyze deployment.json (main configuration)
if [[ -f "$FISSION_DIR/deployment.json" ]]; then
    echo ""
    echo "📋 DEPLOYMENT CONFIGURATION"
    echo "--------------------------"

    # Extract namespace
    namespace=$(jq -r '.namespace // "default"' "$FISSION_DIR/deployment.json")
    echo "Namespace: $namespace"

    # Analyze environments
    echo ""
    echo "🌐 ENVIRONMENTS:"
    if jq -e '.environments' "$FISSION_DIR/deployment.json" >/dev/null; then
        jq -r '.environments | to_entries[] | "  \(.key):" +
            "\n    Image: \(.value.image // "N/A")" +
            "\n    Builder: \(.value.builder // "N/A")" +
            "\n    CPU: \(.value.mincpu // 0)m-\(.value.maxcpu // 0)m" +
            "\n    Memory: \(.value.minmemory // 0)Mi-\(.value.maxmemory // 0)Mi" +
            "\n    Pool Size: \(.value.poolsize // 1)"' "$FISSION_DIR/deployment.json"
    else
        echo "  No environments defined"
    fi

    # Analyze packages
    echo ""
    echo "📦 PACKAGES:"
    if jq -e '.packages' "$FISSION_DIR/deployment.json" >/dev/null; then
        jq -r '.packages | to_entries[] | "  \(.key):" +
            "\n    Build Command: \(.value.buildcmd // "N/A")" +
            "\n    Source Archive: \(.value.sourcearchive // "N/A")" +
            "\n    Environment: \(.value.env // "N/A")"' "$FISSION_DIR/deployment.json"
    else
        echo "  No packages defined"
    fi

    # Analyze functions (from function_common and individual functions)
    echo ""
    echo "⚙️ FUNCTIONS:"
    if jq -e '.function_common' "$FISSION_DIR/deployment.json" >/dev/null; then
        echo "  Common Configuration:"
        jq -r '.function_common |
            "    Package: \(.pkg // "N/A")" +
            "\n    Executor Type: \(.executor.select // "N/A")" +
            "\n    CPU: \(.mincpu // 0)m-\(.maxcpu // 0)m" +
            "\n    Memory: \(.minmemory // 0)Mi-\(.maxmemory // 0)Mi"' "$FISSION_DIR/deployment.json"
    fi

    # Look for individual function definitions
    if jq -e '.functions' "$FISSION_DIR/deployment.json" >/dev/null; then
        echo ""
        echo "  Individual Functions:"
        jq -r '.functions | to_entries[] | "  \(.key):" +
            "\n    Executor: \(.value.executor // "N/A")"' "$FISSION_DIR/deployment.json"
    else
        echo "  No individual functions defined (using function_common)"
    fi

    # Analyze secrets
    echo ""
    echo "🔐 SECRETS:"
    if jq -e '.secrets' "$FISSION_DIR/deployment.json" >/dev/null; then
        jq -r '.secrets | to_entries[] | "  \(.key):" +
            "\n    Type: \(.value.kind // "literal")" +
            "\n    Literal Count: \(.value.literals | length // 0)"' "$FISSION_DIR/deployment.json"
    else
        echo "  No secrets defined"
    fi

    # Analyze configmaps
    echo ""
    echo "⚙️ CONFIGMAPS:"
    if jq -e '.configmaps' "$FISSION_DIR/deployment.json" >/dev/null; then
        jq -r '.configmaps | to_entries[] | "  \(.key):" +
            "\n    Literal Count: \(.value.literals | length // 0)"' "$FISSION_DIR/deployment.json"
    else
        echo "  No configmaps defined"
    fi

    # Analyze archives
    echo ""
    echo "📦 ARCHIVES:"
    if jq -e '.archives' "$FISSION_DIR/deployment.json" >/dev/null; then
        jq -r '.archives | to_entries[] | "  \(.key):" +
            "\n    Source Path: \(.value.sourcepath // "N/A")"' "$FISSION_DIR/deployment.json"
    else
        echo "  No archives defined"
    fi
else
    echo ""
    echo "⚠️ deployment.json not found in .fission directory"
fi

# Check for other fission files
echo ""
echo "📄 OTHER FISSION FILES:"
shopt -s nullglob
fission_files=("$FISSION_DIR"/*.json)
if [[ ${#fission_files[@]} -gt 0 ]]; then
    for file in "${fission_files[@]}"; do
        filename=$(basename "$file")
        if [[ "$filename" != "deployment.json" ]]; then
            echo "  $filename"
            # Show a brief summary
            if [[ -s "$file" ]]; then
                line_count=$(wc -l < "$file")
                echo "    ($line_count lines)"
            fi
        fi
    done
else
    echo "  No other .json files found"
fi

echo ""
echo "✅ Analysis complete"
225
fission-python/create-project.sh
Executable file
@@ -0,0 +1,225 @@

#!/bin/bash

# Fission Python Project Creator
# Creates a new fission python project from the template

set -euo pipefail

# Get the directory where this script is located
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
TEMPLATE_DIR="$SCRIPT_DIR/template"

usage() {
    echo "Usage: $0 <project-name> [destination-directory]"
    echo "  project-name: Name for the new fission project"
    echo "  destination-directory: Optional directory where the project should be created (default: current directory)"
    exit 1
}

if [[ $# -lt 1 || $# -gt 2 ]]; then
    usage
fi

PROJECT_NAME="$1"
DEST_DIR="${2:-./}"

# Validate project name
if [[ ! "$PROJECT_NAME" =~ ^[a-zA-Z0-9_-]+$ ]]; then
    echo "Error: Project name can only contain letters, numbers, hyphens, and underscores"
    exit 1
fi

# Create destination directory if it doesn't exist
mkdir -p "$DEST_DIR"

PROJECT_PATH="$DEST_DIR/$PROJECT_NAME"

# Check if project already exists
if [[ -d "$PROJECT_PATH" ]]; then
    echo "Error: Project directory '$PROJECT_PATH' already exists"
    exit 1
fi

# Check if template exists
if [[ ! -d "$TEMPLATE_DIR" ]]; then
    echo "Error: Template directory not found at '$TEMPLATE_DIR'"
    exit 1
fi

echo "Creating fission python project '$PROJECT_NAME' in '$PROJECT_PATH'..."

# Copy template excluding unwanted files/directories
rsync -av --exclude='.git' --exclude='__pycache__' --exclude='*.pyc' --exclude='.env' \
    "$TEMPLATE_DIR/" "$PROJECT_PATH/"

# Replace placeholder values in configuration files

# 1. deployment.json - replace ${PROJECT_NAME} and old eom-quota references
if [[ -f "$PROJECT_PATH/.fission/deployment.json" ]]; then
    sed -i "s/\${PROJECT_NAME}/$PROJECT_NAME/g" "$PROJECT_PATH/.fission/deployment.json"
    sed -i "s/eom-quota/$PROJECT_NAME/g" "$PROJECT_PATH/.fission/deployment.json"
    sed -i "s/fission-eom-quota/fission-$PROJECT_NAME/g" "$PROJECT_PATH/.fission/deployment.json"
fi

# 2. Override files - dev-deployment.json and local-deployment.json
for override in dev-deployment.json local-deployment.json; do
    if [[ -f "$PROJECT_PATH/.fission/$override" ]]; then
        sed -i "s/\${PROJECT_NAME}/$PROJECT_NAME/g" "$PROJECT_PATH/.fission/$override"
        sed -i "s/eom-quota/$PROJECT_NAME/g" "$PROJECT_PATH/.fission/$override"
        sed -i "s/fission-eom-quota/fission-$PROJECT_NAME/g" "$PROJECT_PATH/.fission/$override"
    fi
done

# 3. helpers.py - update SECRET_NAME and CONFIG_NAME
if [[ -f "$PROJECT_PATH/src/helpers.py" ]]; then
    sed -i "s/\${PROJECT_NAME}/$PROJECT_NAME/g" "$PROJECT_PATH/src/helpers.py"
fi

# 4. README.md - update with project name and clean up
if [[ -f "$PROJECT_PATH/README.md" ]]; then
    sed -i "s/\${PROJECT_NAME}/$PROJECT_NAME/g" "$PROJECT_PATH/README.md"
    sed -i "s/^# Fission Python Template/# $PROJECT_NAME/g" "$PROJECT_PATH/README.md"
    sed -i "s/your-service-py/$PROJECT_NAME-py/g" "$PROJECT_PATH/README.md"
    sed -i "s/your-package/$PROJECT_NAME/g" "$PROJECT_PATH/README.md"
fi

# ========== VALIDATION ==========
echo ""
echo "🔍 Validating project structure..."

validation_errors=0
validation_warnings=0

# 1. Check build.sh exists and is executable
if [[ -f "$PROJECT_PATH/src/build.sh" ]]; then
    if [[ -x "$PROJECT_PATH/src/build.sh" ]]; then
        echo "  ✓ build.sh exists and is executable"
    else
        echo "  ⚠ build.sh exists but is not executable. Fixing with chmod +x..."
        chmod +x "$PROJECT_PATH/src/build.sh"
        validation_warnings=$((validation_warnings + 1))
    fi
else
    echo "  ✗ ERROR: src/build.sh is missing (required)"
    validation_errors=$((validation_errors + 1))
fi

# 2. Check deployment.json references ./build.sh
if [[ -f "$PROJECT_PATH/.fission/deployment.json" ]]; then
    if grep -q '"./build.sh"' "$PROJECT_PATH/.fission/deployment.json" 2>/dev/null; then
        echo "  ✓ deployment.json references ./build.sh"
    else
        echo "  ⚠ deployment.json does not reference './build.sh' in buildcmd"
        validation_warnings=$((validation_warnings + 1))
    fi
else
    echo "  ✗ ERROR: .fission/deployment.json is missing"
    validation_errors=$((validation_errors + 1))
fi

# 3. Check requirements.txt exists
if [[ -f "$PROJECT_PATH/src/requirements.txt" ]]; then
    echo "  ✓ requirements.txt exists"

    # Check for essential dependencies
    missing_deps=()
    if ! grep -qi 'pydantic' "$PROJECT_PATH/src/requirements.txt"; then
        missing_deps+=("pydantic")
|
||||
fi
|
||||
if ! grep -qi 'flask' "$PROJECT_PATH/src/requirements.txt"; then
|
||||
missing_deps+=("flask")
|
||||
fi
|
||||
|
||||
if [[ ${#missing_deps[@]} -gt 0 ]]; then
|
||||
echo " ⚠ requirements.txt missing recommended dependencies: ${missing_deps[*]}"
|
||||
validation_warnings=$((validation_warnings + 1))
|
||||
else
|
||||
echo " ✓ Contains essential dependencies (pydantic, flask)"
|
||||
fi
|
||||
else
|
||||
echo " ✗ ERROR: src/requirements.txt is missing (required)"
|
||||
validation_errors=$((validation_errors + 1))
|
||||
fi
|
||||
|
||||
# 4. Check .gitea/workflows directory exists
|
||||
if [[ -d "$PROJECT_PATH/.gitea/workflows" ]]; then
|
||||
echo " ✓ .gitea/workflows directory exists"
|
||||
|
||||
# Count workflow files
|
||||
workflow_count=$(find "$PROJECT_PATH/.gitea/workflows" -type f -name "*.yaml" 2>/dev/null | wc -l)
|
||||
if [[ $workflow_count -ge 4 ]]; then
|
||||
echo " ✓ Found $workflow_count workflow files"
|
||||
else
|
||||
echo " ⚠ Only found $workflow_count workflow files (expected at least 4)"
|
||||
validation_warnings=$((validation_warnings + 1))
|
||||
fi
|
||||
else
|
||||
echo " ⚠ .gitea/workflows directory is missing (recommended for CI/CD)"
|
||||
validation_warnings=$((validation_warnings + 1))
|
||||
fi
|
||||
|
||||
# 5. Check for Python files with docstrings (basic check, excluding __init__.py)
|
||||
python_files=$(find "$PROJECT_PATH/src" -type f -name "*.py" ! -name "__init__.py" 2>/dev/null | wc -l)
|
||||
if [[ $python_files -gt 0 ]]; then
|
||||
files_with_docstrings=$(grep -l '"""' "$PROJECT_PATH/src/"*.py 2>/dev/null | grep -v '__init__.py' | wc -l)
|
||||
if [[ $files_with_docstrings -eq $python_files ]]; then
|
||||
echo " ✓ All Python files contain docstrings"
|
||||
else
|
||||
echo " ⚠ Only $files_with_docstrings/$python_files Python files have docstrings"
|
||||
validation_warnings=$((validation_warnings + 1))
|
||||
fi
|
||||
else
|
||||
echo " ⚠ No Python files found in src/ to check docstrings"
|
||||
fi
|
||||
|
||||
# 6. Check for pydantic BaseModel usage in models.py (if exists)
|
||||
if [[ -f "$PROJECT_PATH/src/models.py" ]]; then
|
||||
if grep -q "pydantic.BaseModel" "$PROJECT_PATH/src/models.py"; then
|
||||
echo " ✓ models.py uses pydantic.BaseModel"
|
||||
else
|
||||
echo " ⚠ models.py does not appear to use pydantic.BaseModel (recommended for HTTP triggers)"
|
||||
validation_warnings=$((validation_warnings + 1))
|
||||
fi
|
||||
fi
|
||||
|
||||
# Summary
|
||||
echo ""
|
||||
if [[ $validation_errors -eq 0 && $validation_warnings -eq 0 ]]; then
|
||||
echo "✅ All validations passed!"
|
||||
elif [[ $validation_errors -eq 0 ]]; then
|
||||
echo "⚠️ $validation_warnings validation warning(s). Project is usable but review above."
|
||||
else
|
||||
echo "❌ $validation_errors validation error(s) and $validation_warnings warning(s)."
|
||||
echo " The project was created but may have issues. Review the messages above."
|
||||
fi
|
||||
|
||||
# Create basic .env file if it doesn't exist
|
||||
if [[ ! -f "$PROJECT_PATH/.env" ]]; then
|
||||
cat > "$PROJECT_PATH/.env" << EOF
|
||||
# Environment variables for $PROJECT_NAME
|
||||
# Copy this to .env.local for local overrides
|
||||
FISSION_ROUTE_SERVICE_ENDPOINT=http://router.fission.svc.cluster.local
|
||||
EOF
|
||||
fi
|
||||
|
||||
echo ""
|
||||
echo "🎉 Project '$PROJECT_NAME' created!"
|
||||
|
||||
if [[ $validation_errors -eq 0 && $validation_warnings -eq 0 ]]; then
|
||||
echo "✅ All validations passed!"
|
||||
elif [[ $validation_errors -eq 0 ]]; then
|
||||
echo "⚠️ $validation_warnings warning(s) found - review above."
|
||||
else
|
||||
echo "❌ $validation_errors error(s) and $validation_warnings warning(s) - review above."
|
||||
fi
|
||||
|
||||
echo
|
||||
echo "Next steps:"
|
||||
echo "1. cd $PROJECT_PATH"
|
||||
echo "2. Review and update configuration in .fission/deployment.json"
|
||||
echo "3. Install dependencies: pip install --upgrade --force-reinstall -r dev-requirements.txt"
|
||||
echo "4. Customize your functions in the src/ directory (see examples/ for patterns)"
|
||||
echo "5. Ensure HTTP trigger functions have proper fission config in docstrings"
|
||||
echo "6. Write tests in test/ directory"
|
||||
echo "7. Create Kubernetes secrets: kubectl create secret generic fission-$PROJECT_NAME-env --from-literal=... (see docs/SECRETS.md)"
|
||||
echo "8. Build and deploy: ./src/build.sh && fission deploy"
|
||||
185
fission-python/reference.md
Normal file
@@ -0,0 +1,185 @@
# Fission Python Skill Reference

Detailed reference for the fission-python-skill tools.

---

## create-project.sh

Create a new Fission Python project from the standard template.

### Usage
```bash
fission-python-skill create-project <project-name> [destination-directory]
```

### Arguments
- `project-name`: Name for the new fission project (used for directories and configuration)
- `destination-directory`: Optional directory where the project should be created (defaults to current directory)

### Description
Creates a new Fission Python project by copying the template from the plugin's `template/` directory. The plugin is portable and works from any location - the template is stored relative to the plugin itself.

The template includes:
- Standard directory structure (src/, .fission/, specs/, test/, manifests/, migrates/)
- Example Python functions with fission configuration in docstrings
- Configuration files (.fission/deployment.json, etc.)
- Development setup (devcontainer, requirements, etc.)
- CI/CD workflows (.gitea/workflows/)
- build.sh script for packaging

After project creation, the script validates that:
- `src/build.sh` exists and is executable
- `.fission/deployment.json` references `./build.sh`
- `src/requirements.txt` exists and contains essential dependencies (pydantic, flask)
- `.gitea/workflows/` directory exists with at least 4 workflow files
- Python files have docstrings (excluding `__init__.py`)
- `models.py` uses `pydantic.BaseModel`

Validation warnings or errors are displayed to ensure the project follows best practices.

### Examples
```bash
# Create project in current directory
fission-python-skill create-project my-function

# Create project in specific directory
fission-python-skill create-project my-function ./projects/

# Create project with different name
fission-python-skill create-project data-processing-tool ./services/
```

---

## analyze-config.sh

Analyze and display Fission configuration from a project's `.fission` directory.

### Usage
```bash
fission-python-skill analyze-config <project-path>
```

### Arguments
- `project-path`: Path to the fission project directory (should contain a .fission/ subdirectory)

### Description
Reads and parses the fission configuration files (`.fission/deployment.json` and related files) to provide a structured summary of:
- Environments and their resource allocation (CPU, memory, scaling)
- Packages and their build commands
- Functions and their HTTP triggers
- Secrets and ConfigMaps
- Archives and source configuration

### Output Format
The analysis is displayed in a human-readable format showing:
- Project overview
- Environment configurations
- Package details
- Function specifications
- Security configurations (secrets/configmaps)

### Examples
```bash
# Analyze current directory project
fission-python-skill analyze-config .

# Analyze specific project
fission-python-skill analyze-config ./my-fission-project

# Analyze project in parent directory
fission-python-skill analyze-config ../data-processing-service
```
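For intuition, the summary step can be sketched in a few lines of shell plus Python (a sketch only, not the tool's actual implementation; the fixture below uses hypothetical values, and the field names follow the template's `deployment.json`):

```bash
# Create a tiny deployment.json fixture, then summarize it the way
# analyze-config does.
mkdir -p /tmp/demo/.fission
cat > /tmp/demo/.fission/deployment.json <<'EOF'
{"environments": {"demo-py": {"mincpu": 50, "maxcpu": 100, "minmemory": 50, "maxmemory": 500, "poolsize": 1}},
 "packages": {"demo": {"buildcmd": "./build.sh", "sourcearchive": "package.zip", "env": "demo-py"}}}
EOF

# Read the project path from argv and print one line per environment/package.
python3 - /tmp/demo <<'EOF'
import json, pathlib, sys
cfg = json.loads((pathlib.Path(sys.argv[1]) / ".fission" / "deployment.json").read_text())
for name, env in cfg["environments"].items():
    print(f"env {name}: cpu {env['mincpu']}-{env['maxcpu']}, mem {env['minmemory']}-{env['maxmemory']}")
for name, pkg in cfg["packages"].items():
    print(f"package {name}: buildcmd {pkg['buildcmd']}")
EOF
```

For the fixture above this prints `env demo-py: cpu 50-100, mem 50-500` and `package demo: buildcmd ./build.sh`.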
---

## update-docstring.sh

Parse and update the embedded Fission configuration in Python function docstrings.

### Usage
```bash
fission-python-skill update-docstring <file-path> [function-name] [--set "<json>"] [--get] [--help]
```

### Arguments
- `file-path`: Path to the Python file containing the function
- `function-name`: Optional specific function name to target (if not provided, processes all functions with fission configuration)
- `--set "<json>"`: Set the fission configuration to the provided JSON string
- `--get`: Get/display the current fission configuration (the default action if neither --set nor --get is provided)
- `--help`: Show help message

### Description
This tool extracts, displays, and can modify the Fission configuration embedded in Python function docstrings. The configuration is expected to be between ```fission and ``` markers in the docstring.

The tool preserves:
- All function code outside the docstring
- Docstring content outside the fission configuration blocks
- Formatting and indentation of the existing code

Only the content between the ```fission markers is modified.

### Examples
```bash
# View current fission configuration in a function
fission-python-skill update-docstring ./src/my_function.py main --get

# Update fission configuration with new JSON
fission-python-skill update-docstring ./src/my_function.py main --set '{"name": "updated-function", "http_triggers": {"updated-trigger": {"url": "/new-endpoint", "methods": ["POST"]}}}'

# Process all functions with fission configuration in a file
fission-python-skill update-docstring ./src/functions.py --get

# View help
fission-python-skill update-docstring --help
```

### Configuration Format
The fission configuration should be valid JSON representing the function's fission definition, typically including:
- `name`: Function name
- `environment`: Optional environment override
- `http_triggers`: HTTP endpoint configuration
- `schedule_triggers`: Cron-based triggers (if applicable)
- `message_queue_triggers`: Message queue triggers (if applicable)
### Error Handling
- Returns error if file doesn't exist
- Returns error if function not found (when specified)
- Returns error if no fission configuration found in function docstring
- Returns error if provided JSON is invalid (when using --set)
- Preserves original file on error (no partial writes)

---

## Installation

The fission-python-skill is automatically available when the fission-plugin is installed. To use the tools directly:

1. Navigate to the plugin directory and run tools directly:
```bash
cd /path/to/fission-python-skill
./create-project.sh my-project
```

2. Or add the plugin directory to your PATH:
```bash
export PATH="$PATH:/path/to/fission-python-skill"
fission-python-skill create-project my-project
```

The plugin is portable and does not require any hardcoded paths - it will locate its template relative to its own location.
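The location-independent lookup follows the standard bash pattern (a sketch of the approach; variable names are illustrative, not the script's actual code):

```bash
# Resolve the directory containing this script, regardless of where it is
# invoked from, then locate the bundled template next to it.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]:-$0}")" && pwd)"
TEMPLATE_DIR="$SCRIPT_DIR/template"
echo "Using template at: $TEMPLATE_DIR"
```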
---

## Template Source

The project template is sourced from: `fission-python-skill/template/` (relative to the plugin location).

When creating new projects, the following files/directories are excluded from copying:
- .git/
- __pycache__/
- *.pyc
- .env

Note: test/ files are copied, but they may need project-specific updates.

The plugin is fully portable - it uses relative paths based on the script location, so it can be moved to different directories or systems without breaking.
20
fission-python/template/.devcontainer/.env.example
Normal file
@@ -0,0 +1,20 @@
# For downloading the Rake tool
PRIVATE_GIT_TOKEN=

# Rake tool's profile
FISSION_PROFILE=local

# Rancher K3S version (docker-compose)
K3S_VERSION=v1.32.4-k3s1
K3S_TOKEN=

FISSION_VER=v1.21.0
FISSION_NAMESPACE=fission

# Nginx ingress
NGINX_INGRESS_VER=v1.7.1

# Metrics
METRICS_NAMESPACE=monitoring
OPENTELEMETRY_NAMESPACE=opentelemetry-operator-system
JAEGER_NAMESPACE=jaeger
42
fission-python/template/.devcontainer/devcontainer.json
Normal file
@@ -0,0 +1,42 @@
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/rust
{
    // Or use a Dockerfile or Docker Compose file. More info: https://containers.dev/guide/dockerfile
    // "image": "mcr.microsoft.com/devcontainers/rust:0-1-bullseye",
    // Use docker compose file
    "dockerComposeFile": ["docker-compose.yaml", "docker-compose-k3s.yaml"],
    "service": "devcontainer",
    "workspaceFolder": "/workspaces/${localWorkspaceFolderBasename}",
    // Features to add to the dev container. More info: https://containers.dev/features.
    // "features": {},
    // Configure tool-specific properties.
    "customizations": {
        // Configure properties specific to VS Code.
        "vscode": {
            "settings": {"terminal.integrated.defaultProfile.linux": "bash"},
            "extensions": [
                // VS Code specific
                "ms-azuretools.vscode-docker",
                "dbaeumer.vscode-eslint",
                "j-brooke.fracturedjsonvsc",
                // Python specific
                "ms-python.python",
                "charliermarsh.ruff",
                // Markdown specific
                "yzhang.markdown-all-in-one",
                // YAML formatter
                "kennylong.kubernetes-yaml-formatter",
                // highlight and format `pyproject.toml`
                "tamasfe.even-better-toml"
            ]
        }
    },
    "mounts": [],
    // "runArgs": [
    //     "--env-file",
    //     ".devcontainer/.env"
    // ],
    "postStartCommand": "/workspaces/${localWorkspaceFolderBasename}/.devcontainer/initscript.sh",
    // Use 'forwardPorts' to make a list of ports inside the container available locally.
    "forwardPorts": []
}
@@ -0,0 +1,52 @@
services:
  k3s-server:
    image: "rancher/k3s:${K3S_VERSION:-latest}"
    # command: server --disable traefik --disable servicelb
    command: server --disable traefik
    hostname: k3s-server
    dns:
      - 10.10.20.100
    tmpfs: [ "/run", "/var/run" ]
    ulimits:
      nproc: 65535
      nofile:
        soft: 65535
        hard: 65535
    privileged: true
    restart: always
    environment:
      - K3S_TOKEN=${K3S_TOKEN:-secret}
      - K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml
      - K3S_KUBECONFIG_MODE=666
    volumes:
      - k3s-server:/var/lib/rancher/k3s
      # This is just so that we get the kubeconfig file out
      - .:/output
    ports:
      - 6443 # Kubernetes API Server
      - 80   # Ingress controller port 80
      - 443  # Ingress controller port 443

  k3s-agent:
    image: "rancher/k3s:${K3S_VERSION:-latest}"
    hostname: k3s-agent
    dns:
      - 10.10.20.100
    tmpfs: [ "/run", "/var/run" ]
    ulimits:
      nproc: 65535
      nofile:
        soft: 65535
        hard: 65535
    privileged: true
    restart: always
    environment:
      - K3S_URL=https://k3s-server:6443
      - K3S_TOKEN=${K3S_TOKEN:-secret}
    volumes:
      - k3s-agent:/var/lib/rancher/k3s
    profiles: [ "cluster" ] # only start the agent if run with profile `cluster`

volumes:
  k3s-server: {}
  k3s-agent: {}
13
fission-python/template/.devcontainer/docker-compose.yaml
Normal file
@@ -0,0 +1,13 @@
services:
  devcontainer:
    # All tags available at: https://mcr.microsoft.com/v2/devcontainers/rust/tags/list
    # image: mcr.microsoft.com/vscode/devcontainers/python:3.10-bullseye
    image: registry.vegastar.vn/vegacloud/fission-python:3.10-bullseye
    volumes:
      - ../..:/workspaces:cached
    command: sleep infinity
    env_file:
      - .env
    # Comment out the dependency if you only run the devcontainer
    depends_on:
      - k3s-server
166
fission-python/template/.devcontainer/initscript.sh
Executable file
@@ -0,0 +1,166 @@
#!/bin/bash

## For debugging
# set -eux

# wait a minute to ensure the k3s server is ready
sleep 60

#############################
### DEV PACKAGES
#############################
export RAKE_VER=0.1.7

curl -L https://$PRIVATE_GIT_TOKEN@registry.vegastar.vn/vegacloud/make/releases/download/$RAKE_VER/rake-$RAKE_VER-x86_64-unknown-linux-musl.tar.gz | tar xzv -C /tmp/
sudo install -o root -g root -m 0755 /tmp/rake-$RAKE_VER-x86_64-unknown-linux-musl/rake /usr/local/bin/rake

#############################
### KUBECTL
#############################

## Config kubectl
mkdir -p ~/.kube
cp ${PWD}/.devcontainer/kubeconfig.yaml ~/.kube/config
sed -i 's/127.0.0.1/k3s-server/g' ~/.kube/config

## allow insecure connection
shopt -s expand_aliases
echo 'alias kubectl="kubectl --insecure-skip-tls-verify"' >> ~/.bashrc
echo 'alias k="kubectl --insecure-skip-tls-verify"' >> ~/.bashrc

#############################
### K9S
#############################

# install k9s
wget https://github.com/derailed/k9s/releases/download/v0.50.6/k9s_linux_amd64.deb -O /tmp/k9s_linux_amd64.deb
sudo dpkg -i /tmp/k9s_linux_amd64.deb

#############################
### NGINX INGRESS
#############################

# kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-$NGINX_INGRESS_VER/deploy/static/provider/cloud/deploy.yaml
# cat <<EOT >> /tmp/nginx-service.yaml
# apiVersion: v1
# kind: Service
# metadata:
#   name: ingress-nginx-controller-loadbalancer
#   namespace: ingress-nginx
# spec:
#   selector:
#     app.kubernetes.io/component: controller
#     app.kubernetes.io/instance: ingress-nginx
#     app.kubernetes.io/name: ingress-nginx
#   ports:
#     - name: http
#       port: 80
#       protocol: TCP
#       targetPort: 80
#     - name: https
#       port: 443
#       protocol: TCP
#       targetPort: 443
#   type: LoadBalancer
# EOT
# kubectl apply -f /tmp/nginx-service.yaml
# rm -f /tmp/nginx-service.yaml

#############################
### OPEN TELEMETRY
#############################
# kubectl create namespace $JAEGER_NAMESPACE
# kubectl create namespace $OPENTELEMETRY_NAMESPACE

# ## cert-manager
# kubectl apply -f https://github.com/jetstack/cert-manager/releases/latest/download/cert-manager.yaml

# ## install jaeger
# helm repo add jaegertracing https://jaegertracing.github.io/helm-charts
# helm install jaeger jaegertracing/jaeger -n $JAEGER_NAMESPACE
# kubectl -n $JAEGER_NAMESPACE get po

# ## open telemetry operator
# kubectl apply -f https://github.com/open-telemetry/opentelemetry-operator/releases/latest/download/opentelemetry-operator.yaml

# ## create an OpenTelemetry Collector instance
# kubectl -n $OPENTELEMETRY_NAMESPACE apply -f .devcontainer/helm/opentelemetry-collector.yaml

#############################
### FISSION PODs
#############################
kubectl create namespace $FISSION_NAMESPACE

# ## install with helm
# kubectl create -k "github.com/fission/fission/crds/v1?ref=${FISSION_VER}"
# helm repo add fission-charts https://fission.github.io/fission-charts/ && helm repo update
# kubectl apply -f - <<EOF
# apiVersion: v1
# kind: Namespace
# metadata:
#   name: fission
# ---
# apiVersion: v1
# kind: Namespace
# metadata:
#   name: gh-eom
# EOF
# kubectl apply -f - <<EOF
# type: kubernetes.io/dockerconfigjson
# apiVersion: v1
# kind: Secret
# metadata:
#   name: vega-container-registry
#   namespace: fission
# data:
#   .dockerconfigjson: >-
#     eyJhdXRocyI6eyJyZWdpc3RyeS52ZWdhc3Rhci52biI6eyJ1c2VybmFtZSI6InRpZW5kZCIsInBhc3N3b3JkIjoiYTBjY2JjMDVjNzMyYzExMjU3OTg1NjMwNjY5ZTFjNjEyNDg0NzU1MyIsImF1dGgiOiJkR2xsYm1Sa09tRXdZMk5pWXpBMVl6Y3pNbU14TVRJMU56azROVFl6TURZMk9XVXhZell4TWpRNE5EYzFOVE09In19fQ==
# EOF
# helm upgrade --install fission fission-charts/fission-all --namespace $FISSION_NAMESPACE -f - <<EOF
# imagePullSecrets:
#   - name: vega-container-registry
# defaultNamespace: default
# additionalFissionNamespaces:
#   - gh-eom
# EOF

## install without helm (the namespace was already created above)
kubectl create -k "github.com/fission/fission/crds/v1?ref=${FISSION_VER}"
kubectl config set-context --current --namespace=$FISSION_NAMESPACE
kubectl apply -f https://github.com/fission/fission/releases/download/${FISSION_VER}/fission-all-${FISSION_VER}-minikube.yaml
kubectl config set-context --current --namespace=default # change context back to the default namespace after installation

#############################
### PROMETHEUS AND GRAFANA
#############################
# kubectl create namespace $METRICS_NAMESPACE

# helm repo add prometheus-community https://prometheus-community.github.io/helm-charts && helm repo update
# helm install prometheus prometheus-community/kube-prometheus-stack -n $METRICS_NAMESPACE

#############################
### UPDATE FISSION
#############################

# helm upgrade fission fission-charts/fission-all --namespace $FISSION_NAMESPACE -f .devcontainer/helm/fission-values.yaml

#############################
### PORT FORWARDING
#############################

## To access jaeger-query, you can use Kubernetes port forwarding
# kubectl -n jaeger port-forward svc/jaeger-query 8080:80 --address='0.0.0.0'
## To access Grafana, you can use Kubernetes port forwarding
# kubectl --namespace monitoring port-forward svc/prometheus-grafana 3000:80
## For the password, you'll need to run the following command:
# kubectl get secret --namespace monitoring prometheus-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

#############################
### INSTALLING PYTHON PACKAGES
#############################

pip install -r dev-requirements.txt -r src/requirements.txt
24
fission-python/template/.env.example
Normal file
@@ -0,0 +1,24 @@
# PostgreSQL Database Configuration
PG_HOST=
PG_PORT=5432
PG_DB=
PG_USER=
PG_PASS=
PG_DBSCHEMA=public

# Optional: Service-specific configuration (via ConfigMap)
# YOUR_SERVICE_CONFIG_ENDPOINT=

# Optional: Vault encryption key (32-byte hex string)
# Required if using encrypted secrets (vault:v1:...)
CRYPTO_KEY=

# Example: If using MinIO/S3
# S3_ENDPOINT=
# S3_ACCESS_KEY=
# S3_SECRET_KEY=
# S3_BUCKET=

# Example: If using external APIs
# API_ENDPOINT=
# API_KEY=
65
fission-python/template/.fission/deployment.json
Normal file
@@ -0,0 +1,65 @@
{
    "namespace": "default",
    "environments": {
        "${PROJECT_NAME}-py": {
            "image": "ghcr.io/fission/python-env",
            "builder": "ghcr.io/fission/python-builder",
            "mincpu": 50,
            "maxcpu": 100,
            "minmemory": 50,
            "maxmemory": 500,
            "poolsize": 1
        }
    },
    "archives": { "package.zip": {"sourcepath": "src"} },
    "packages": {
        "${PROJECT_NAME}": {
            "buildcmd": "./build.sh",
            "sourcearchive": "package.zip",
            "env": "${PROJECT_NAME}-py"
        }
    },
    "function_common": {
        "pkg": "${PROJECT_NAME}",
        "secrets": ["fission-${PROJECT_NAME}-env"],
        "configmaps": ["fission-${PROJECT_NAME}-config"],
        "executor": {
            "select": "poolmgr",
            "poolmgr": {
                "concurrency": 1,
                "requestsperpod": 1,
                "onceonly": false
            },
            "newdeploy": {
                "minscale": 1,
                "maxscale": 1,
                "targetcpu": 80
            }
        },
        "mincpu": 50,
        "maxcpu": 100,
        "minmemory": 50,
        "maxmemory": 500
    },
    "secrets": {
        "fission-${PROJECT_NAME}-env": {
            "literals": [
                "PG_HOST=YOUR_DB_HOST",
                "PG_PORT=5432",
                "PG_DB=YOUR_DB_NAME",
                "PG_USER=YOUR_DB_USER",
                "PG_PASS=YOUR_DB_PASSWORD",
                "PG_DBSCHEMA=public"
            ]
        }
    },
    "configmaps": {
        "fission-${PROJECT_NAME}-config": {
            "literals": [
                "FN_OPTIONAL_CONFIG=http://example.com/config"
            ]
        }
    },
    "imagepullsecret": "",
    "runtime_envs": {}
}
22
fission-python/template/.fission/dev-deployment.json
Normal file
@@ -0,0 +1,22 @@
{
    "namespace": "fission-dev",
    "secrets": {
        "fission-${PROJECT_NAME}-env": {
            "literals": [
                "PG_HOST=dev-db.example.com",
                "PG_PORT=5432",
                "PG_DB=devdb",
                "PG_USER=${PROJECT_NAME}-dev",
                "PG_PASS=dev-password"
            ]
        }
    },
    "configmaps": {
        "fission-${PROJECT_NAME}-config": {
            "literals": [
                "LOG_LEVEL=DEBUG",
                "FISSION_ROUTE_SERVICE_ENDPOINT=http://router.fission.svc.cluster.local"
            ]
        }
    }
}
32
fission-python/template/.fission/local-deployment.json
Normal file
@@ -0,0 +1,32 @@
{
    "namespace": "default",
    "environments": {
        "${PROJECT_NAME}-py": {
            "image": "ghcr.io/fission/python-env:3.11",
            "builder": "ghcr.io/fission/python-builder:3.11",
            "mincpu": 100,
            "maxcpu": 200,
            "minmemory": 128,
            "maxmemory": 256,
            "poolsize": 1
        }
    },
    "secrets": {
        "fission-${PROJECT_NAME}-env": {
            "literals": [
                "PG_HOST=localhost",
                "PG_PORT=5432",
                "PG_DB=testdb",
                "PG_USER=postgres",
                "PG_PASS=test"
            ]
        }
    },
    "configmaps": {
        "fission-${PROJECT_NAME}-config": {
            "literals": [
                "LOG_LEVEL=DEBUG"
            ]
        }
    }
}
@@ -0,0 +1,30 @@
name: "K8S Fission Code Analytics"
on:
  workflow_dispatch:
jobs:
  sonarqube:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: 🔍 SonarQube Scan
        id: scan
        uses: sonarsource/sonarqube-scan-action@master
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
          SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
        with:
          args: >
            -Dsonar.projectKey=${{ github.event.repository.name }} -Dsonar.sources=.
      - name: 🔔 Send notification
        uses: appleboy/telegram-action@master
        if: always()
        with:
          to: ${{ secrets.TELEGRAM_TO }}
          token: ${{ secrets.TELEGRAM_TOKEN }}
          format: markdown
          socks5: ${{ secrets.TELEGRAM_PROXY_URL != '' && secrets.TELEGRAM_PROXY_URL || '' }}
          message: |
            ${{ steps.scan.outcome == 'success' && '🟢 (=^ ◡ ^=)' || '🔴 (。•́︿•̀。)' }} Scanned ${{ github.event.repository.name }}
            *Msg*: `${{ github.event.commits[0].message }}`
72
fission-python/template/.gitea/workflows/dev-deployment.yaml
Normal file
@@ -0,0 +1,72 @@
name: "Development Deployment"
on:
  push:
    branches: [ main, develop ]
  workflow_dispatch:

jobs:
  deploy:
    name: Deploy to development
    runs-on: ubuntu-latest
    environment: development
    steps:
      - name: ☸️ Checkout
        uses: actions/checkout@v4

      - name: 🐍 Setup Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'

      - name: 📦 Install dependencies
        run: |
          pip install -r dev-requirements.txt

      - name: 🔍 Lint with flake8
        run: flake8 src/ --max-line-length=88 --extend-ignore=E203,W503

      - name: 🎨 Check formatting with black
        run: black --check src/

      - name: 🧪 Run tests
        run: pytest --cov=src --cov-report=xml

      - name: 📤 Upload coverage to Codecov
        uses: codecov/codecov-action@v4
        with:
          token: ${{ secrets.CODECOV_TOKEN }}
          files: coverage.xml
          fail_ci_if_error: false

      - name: ☸️ Setup kubectl
        uses: azure/setup-kubectl@v4
        with:
          version: 'v1.28.0'

      - name: 🔐 Configure Kubeconfig
        uses: azure/k8s-set-context@v4
        with:
          method: kubeconfig
          kubeconfig: ${{ secrets.KUBECONFIG_DEV }}

      - name: 🚀 Install Fission CLI
        run: |
          curl -L https://github.com/fission/fission/releases/latest/download/fission-linux-amd64 -o /tmp/fission
          sudo install /tmp/fission /usr/local/bin/fission
          fission check

      - name: 📦 Build and Deploy (dev)
        run: |
          echo "Deploying to development environment..."
          fission deploy --dev

      - name: 🔔 Notify status
        if: always()
        uses: actions/github-script@v7
        with:
          script: |
            const status = '${{ job.status }}';
            const emoji = status === 'success' ? '🟢' : '🔴';
            const message = `${emoji} Dev deployment ${status} for ${{ github.repository }}@${{ github.sha }}\nCommit: ${{ github.event.commits[0].message }}`;
            // Send to Slack/Telegram/etc - customize as needed
            console.log(message);
@@ -0,0 +1,56 @@
name: "Manual Deployment"
on:
  workflow_dispatch:
    inputs:
      environment:
        description: 'Deployment environment (dev, staging, prod)'
        required: true
        type: choice
        options:
          - dev
          - staging
          - prod

jobs:
  deploy:
    name: Deploy to ${{ github.event.inputs.environment }}
    runs-on: ubuntu-latest
    environment: ${{ github.event.inputs.environment }}
    steps:
      - name: ☸️ Checkout
        uses: actions/checkout@v4

      - name: ☸️ Setup kubectl
        uses: azure/setup-kubectl@v4
        with:
          version: 'v1.28.0'

      - name: 🔐 Configure Kubeconfig
        uses: azure/k8s-set-context@v4
        with:
          method: kubeconfig
          kubeconfig: ${{ secrets[format('KUBECONFIG_{0}', github.event.inputs.environment)] }}

      - name: 🚀 Install Fission CLI
        run: |
          curl -L https://github.com/fission/fission/releases/latest/download/fission-linux-amd64 -o /tmp/fission
          sudo install /tmp/fission /usr/local/bin/fission
          fission check

      - name: 📦 Deploy
        run: |
          echo "Deploying to ${{ github.event.inputs.environment }} environment..."
          if [ "${{ github.event.inputs.environment }}" = "dev" ]; then
            fission deploy --dev
          else
            fission deploy
          fi

      - name: 🔔 Notify
        if: always()
        uses: actions/github-script@v7
        with:
          script: |
            const env = '${{ github.event.inputs.environment }}';
            const status = '${{ job.status }}';
            console.log(`Deployment to ${env} completed with status: ${status}`);
@@ -0,0 +1,54 @@
name: "Manual Uninstall"
on:
  workflow_dispatch:
    inputs:
      environment:
        description: 'Environment to uninstall from (dev, staging, prod)'
        required: true
        type: choice
        options:
          - dev
          - staging
          - prod

jobs:
  uninstall:
    name: Uninstall from ${{ github.event.inputs.environment }}
    runs-on: ubuntu-latest
    environment: ${{ github.event.inputs.environment }}
    steps:
      - name: ☸️ Checkout
        uses: actions/checkout@v4

      - name: ☸️ Setup kubectl
        uses: azure/setup-kubectl@v4
        with:
          version: 'v1.28.0'

      - name: 🔐 Configure Kubeconfig
        uses: azure/k8s-set-context@v4
        with:
          method: kubeconfig
          kubeconfig: ${{ secrets[format('KUBECONFIG_{0}', github.event.inputs.environment)] }}

      - name: 🚀 Install Fission CLI
        run: |
          curl -L https://github.com/fission/fission/releases/latest/download/fission-linux-amd64 -o /tmp/fission
          sudo install /tmp/fission /usr/local/bin/fission
          fission check

      - name: 🗑️ Uninstall functions
        run: |
          echo "Uninstalling from ${{ github.event.inputs.environment }} environment..."
          # Delete all functions in this repository/package
          # Note: This will remove functions defined in deployment.json
          fission function list --all-namespaces | grep "${{ github.event.repository.name }}" | awk '{print $1}' | xargs -r fission function delete --name

      - name: 🔔 Notify
        if: always()
        uses: actions/github-script@v7
        with:
          script: |
            const env = '${{ github.event.inputs.environment }}';
            const status = '${{ job.status }}';
            console.log(`Uninstall from ${env} completed with status: ${status}`);
190  fission-python/template/.gitignore  vendored  Normal file
@@ -0,0 +1,190 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
# *.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/latest/usage/project/#working-with-version-control
.pdm.toml
.pdm-python
.pdm-build/

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/

## Ignore Temporary directory of Dagster
/tmp*

## Devcontainer cache files, that will make devcontainer start faster after first run
/.vscache/.vscode-server/*
!/.vscache/.vscode-server/.gitkeep
/.vscache/.devcontainer/*
!/.vscache/.devcontainer/.gitkeep

## Ignore K3S config file
/.devcontainer/kubeconfig.yaml

## Ignore packaged files
/*.zip
# !/package.zip
/*.bak

# No Makefile in this template - uses build.sh instead

## Ignore fission's specs files
/specs/*
!/specs/fission-deployment-config.yaml
!/specs/README

/manifests/*

/fission-dumps
436  fission-python/template/README.md  Normal file
@@ -0,0 +1,436 @@
# Fission Python Template

A production-ready template for building Fission serverless Python functions with best practices for configuration, database connectivity, error handling, and testing.

## Project Structure

```
project/
├── .fission/
│   ├── deployment.json        # Fission function deployment configuration
│   ├── dev-deployment.json    # Development overrides
│   └── local-deployment.json  # Local development overrides
├── src/
│   ├── __init__.py            # Package initialization
│   ├── vault.py               # Vault encryption/decryption utilities
│   ├── helpers.py             # Shared utilities (DB, secrets, configs)
│   ├── exceptions.py          # Custom exception hierarchy
│   ├── models.py              # Pydantic models (request/response schemas)
│   ├── build.sh               # Package build script
│   └── your_function.py       # Your function implementations
├── test/
│   ├── __init__.py
│   ├── test_*.py              # Unit tests
│   └── requirements.txt       # Test dependencies
├── migrates/
│   └── schema.sql             # Database migration scripts
├── manifests/                 # Kubernetes manifests (optional)
├── specs/                     # Generated Fission specs (created by fission CLI)
├── requirements.txt           # Runtime dependencies
├── dev-requirements.txt       # Development dependencies
├── .env.example               # Environment variable template
├── pytest.ini                 # Pytest configuration
└── README.md                  # Project documentation
```

## Key Components

### Fission Configuration in Docstrings

Fission reads function metadata from docstrings using the `` ```fission `` marker:

````python
def my_function(event, context):
    """
    ```fission
    {
        "name": "my-function",
        "http_triggers": {
            "my-trigger": {
                "url": "/api/my-endpoint",
                "methods": ["GET", "POST"]
            }
        }
    }
    ```
    """
    # Your implementation
    return {"message": "Hello World"}
````

**Note:** Do not use `fission.yaml` or `fission.json`. The Fission Python builder reads the docstring annotations directly from your Python source files.

### Environment Variables & Secrets

Configuration is managed through Kubernetes Secrets and ConfigMaps:

- **Secrets**: Database credentials, API keys, encryption keys (sensitive)
- **ConfigMaps**: Non-sensitive configuration, endpoints, feature flags

Access them via helper functions:

```python
from helpers import get_secret, get_config

# Read a secret (with an optional default)
db_host = get_secret("PG_HOST", "localhost")
db_port = int(get_secret("PG_PORT", "5432"))

# Read a config value
api_endpoint = get_config("EXTERNAL_API_ENDPOINT")
```

**Placeholder variables** in `deployment.json`:

- `${PROJECT_NAME}` - Replaced with your actual project name during project creation
- Secret/configmap names follow the pattern `fission-${PROJECT_NAME}-env` and `fission-${PROJECT_NAME}-config`
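Purely as a hypothetical illustration of that substitution (the actual create-project tooling is not shown in this README), a `${PROJECT_NAME}` placeholder can be resolved with `sed`:

```bash
# Hypothetical: resolve the ${PROJECT_NAME} placeholder the way a
# create-project script might. The project name "myapp" is made up here.
PROJECT_NAME=myapp
sed "s/\${PROJECT_NAME}/${PROJECT_NAME}/g" <<'EOF'
fission-${PROJECT_NAME}-env
fission-${PROJECT_NAME}-config
EOF
```

This prints the resolved resource names (`fission-myapp-env`, `fission-myapp-config`); the same substitution would be applied across `deployment.json`.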

### Database Connectivity

Use the provided `init_db_connection()` helper:

```python
from helpers import init_db_connection, db_rows_to_array

conn = init_db_connection()
cursor = conn.cursor()
cursor.execute("SELECT * FROM items")
rows = db_rows_to_array(cursor, cursor.fetchall())
```

The helper automatically:

- Reads connection parameters from secrets (PG_HOST, PG_PORT, PG_DB, PG_USER, PG_PASS, PG_DBSCHEMA)
- Checks port connectivity before connecting
- Uses LoggingConnection for query logging
- Applies the schema search path if PG_DBSCHEMA is set

### Error Handling

Use the exception hierarchy from `exceptions.py`:

```python
from exceptions import ValidationError, NotFoundError, ConflictError, DatabaseError

def get_item(item_id: str):
    item = db.fetch_one(item_id)
    if not item:
        raise NotFoundError(f"Item {item_id} not found", x_user=get_user_from_headers())
    return item
```

All exceptions return standardized error responses:

```json
{
    "error_code": "NOT_FOUND",
    "http_status": 404,
    "error_msg": "Item 123 not found",
    "x_user": "user-456",
    "details": {"item_id": "123"}
}
```
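The concrete classes live in `exceptions.py`, which is not reproduced here; a minimal sketch of a hierarchy that would produce the payload above (field names are taken from the response format, everything else is an assumption):

```python
class AppError(Exception):
    """Base class; subclasses pin the error code and HTTP status."""
    error_code = "INTERNAL_ERROR"
    http_status = 500

    def __init__(self, message, x_user=None, details=None):
        super().__init__(message)
        self.message = message
        self.x_user = x_user
        self.details = details or {}

    def to_response(self):
        # Matches the standardized error payload shown above.
        return {
            "error_code": self.error_code,
            "http_status": self.http_status,
            "error_msg": self.message,
            "x_user": self.x_user,
            "details": self.details,
        }


class NotFoundError(AppError):
    error_code = "NOT_FOUND"
    http_status = 404
```

A top-level request handler can then catch `AppError`, call `to_response()`, and return it with the matching status code.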

### Validation with Pydantic

Validate request payloads using Pydantic models:

```python
from models import ItemCreateRequest
from pydantic import ValidationError as PydanticValidationError

def create_item():
    try:
        data = ItemCreateRequest(**request.get_json())
    except PydanticValidationError as e:
        raise ValidationError(str(e), details=e.errors())
```

## Development Workflow

### 1. Install Dependencies

```bash
# Install runtime and development dependencies
pip install -r dev-requirements.txt

# Or just runtime dependencies
pip install -r requirements.txt
```

### 2. Local Testing

Fission provides `fission spec` to test specs locally:

```bash
# Verify your deployment configuration
fission spec verify --file=.fission/deployment.json

# Build and test locally
fission function test --name your-function
```

### 3. Unit Testing

Run tests with pytest:

```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=src

# Run a specific test file
pytest test/test_my_function.py

# Verbose output
pytest -v
```

**Example test structure:**

```python
# test/test_my_function.py
import pytest
from unittest.mock import MagicMock, patch
from src.my_function import main

def test_my_function_success():
    event = {"key": "value"}
    context = {}
    result = main(event, context)
    assert result["status"] == "success"

@patch("helpers.init_db_connection")
def test_my_function_with_db(mock_db):
    # Mock the database connection
    mock_conn = MagicMock()
    mock_db.return_value = mock_conn
    # Test the function
```

### 4. Building the Package

The `build.sh` script installs dependencies and packages your code:

```bash
# From the project root
./src/build.sh

# This produces a package.zip in the specs directory,
# ready for deployment with: fission deploy
```

The build script detects the OS (Debian/Alpine) and installs the correct build dependencies (gcc, libpq-dev, python3-dev).
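The exact contents of `build.sh` are not shown here; a minimal sketch of the kind of OS check described (the package lists in the comments are assumptions about what such a script would install):

```bash
# Sketch of build.sh's OS detection. Alpine ships /etc/alpine-release,
# so its presence distinguishes the two supported base images.
detect_pkg_family() {
  if [ -f /etc/alpine-release ]; then
    echo "alpine"   # would install via: apk add gcc musl-dev postgresql-dev python3-dev
  else
    echo "debian"   # would install via: apt-get install gcc libpq-dev python3-dev
  fi
}

detect_pkg_family
```

The function prints `alpine` or `debian`, and the real script would branch to the matching package manager.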

### 5. Deployment

```bash
# Deploy to Fission
fission deploy

# Or update a specific function
fission function update --name my-function --env your-env
```

## Deployment Configuration

### Executors

Choose between two executor types in `deployment.json`:

**poolmgr** (default) - Good for high-concurrency HTTP functions:

```json
"executor": {
    "select": "poolmgr",
    "poolmgr": {
        "concurrency": 1,
        "requestsperpod": 1,
        "onceonly": false
    }
}
```

**newdeploy** - Good for dedicated scaling:

```json
"executor": {
    "select": "newdeploy",
    "newdeploy": {
        "minscale": 1,
        "maxscale": 5,
        "targetcpu": 80
    }
}
```

### Resource Limits

Set resource allocation in `function_common`:

- `mincpu` / `maxcpu` - CPU allocation in millicores (50 = 0.05 cores)
- `minmemory` / `maxmemory` - Memory in MB
- Adjust based on your function's needs

### Environment-Specific Overrides

Use `dev-deployment.json` for the development environment (different secrets, lower resources). Fission will automatically use it when the `--dev` flag is passed.

## Vault Encryption

For encrypted secrets, use the vault utility functions:

```python
from vault import encrypt_vault, decrypt_vault, is_valid_vault_format

# Encrypt a value (run locally to generate the vault string)
encrypted = encrypt_vault("my-secret", "your-hex-key-here")
# Result: "vault:v1:base64-encrypted-data"

# Store the encrypted string in your K8s secret.
# The helper will auto-decrypt if is_valid_vault_format() returns True.
```

**Important:** Set `CRYPTO_KEY` in your helpers.py (or via an environment override) to your actual 32-byte key in hex format.
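`vault.py` itself is not reproduced here; a stdlib-only sketch of what a `vault:v1:` format check could look like (the real implementation in the template may differ):

```python
import base64
import re

# Matches the "vault:v1:<base64 payload>" shape described above.
VAULT_PATTERN = re.compile(r"^vault:v1:(.+)$")

def is_valid_vault_format(value: str) -> bool:
    """Return True when value looks like 'vault:v1:<valid base64>'."""
    match = VAULT_PATTERN.match(value or "")
    if not match:
        return False
    try:
        # validate=True rejects non-alphabet characters outright.
        base64.b64decode(match.group(1), validate=True)
        return True
    except ValueError:  # binascii.Error is a ValueError subclass
        return False
```

A helper like `get_secret()` can call this before attempting decryption, so plaintext secrets pass through untouched.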

## Testing Strategies

### Unit Tests

- Mock external dependencies (database, HTTP calls)
- Test business logic in isolation
- Use `pytest-mock` for convenient mocking

### Integration Tests

- Use a test database
- Clean up test data after each run
- Consider using pytest fixtures for setup/teardown

### Local Development

- Use `.fission/local-deployment.json` for a local Fission setup
- Override secrets/configmaps for the local environment
- Run with: `fission function test --local`

## Migrations

Place SQL migration scripts in `migrates/`:

```sql
-- migrates/001_create_items_table.sql
CREATE TABLE items (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR(255) NOT NULL,
    description TEXT,
    status VARCHAR(50) DEFAULT 'active',
    created TIMESTAMP DEFAULT NOW(),
    modified TIMESTAMP DEFAULT NOW()
);
```

Apply migrations manually via psql or using a migration tool like `alembic`.

## Best Practices

1. **Keep functions small** - Single responsibility per function
2. **Use Pydantic** - Validate all inputs with request models
3. **Standardize errors** - Use the provided exception classes
4. **Log appropriately** - Use `logger` from helpers (already configured)
5. **Track users** - Use `get_user_from_headers()` for audit trails
6. **Write tests** - Aim for high coverage of business logic
7. **Document functions** - Add docstrings with a fission config block
8. **Avoid global state** - Functions should be stateless and idempotent

## Continuous Integration

The template includes `.gitea/workflows/` for CI/CD:

- `install-dispatch.yaml` - Triggered on installation events
- `uninstall-dispatch.yaml` - Cleanup on uninstall
- `dev-deployment.yaml` - Development environment updates
- `analystic-dispatch.yaml` - Analytics processing

Adapt these workflows for your deployment pipeline (GitHub Actions, GitLab CI, etc.).

## Troubleshooting

### Spec Generation Fails

- Ensure all function files have a proper fission config block in their docstrings
- Run `python -m py_compile src/*.py` to check syntax
- Verify `build.sh` is executable: `chmod +x src/build.sh`

### Cannot Connect to Database

- Check that secrets are mounted correctly: `kubectl exec <pod> -- ls /secrets/default/`
- Verify PG_HOST and PG_PORT are correct
- Use the `check_port_open()` debug output
- Test the connection manually: `psql -h $PG_HOST -p $PG_PORT -U $PG_USER $PG_DB`
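`check_port_open()` lives in `helpers.py` and is not reproduced here; a typical stdlib sketch of such a connectivity probe (the real signature and timeout may differ):

```python
import socket

def check_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, DNS failures, and timeouts.
        return False
```

Running this against your PG_HOST/PG_PORT quickly distinguishes a network/mount problem from a credentials problem.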

### Missing Dependencies

- Ensure `requirements.txt` includes ALL dependencies (Flask is required!)
- Check build logs for pip errors
- Rebuild the package: `./src/build.sh`

## Example Implementations

### CRUD Operation

```python
from flask import request
from helpers import init_db_connection, db_row_to_dict, format_error_response
from exceptions import ValidationError, NotFoundError, DatabaseError
from models import ItemCreateRequest, ItemResponse

def create_item(event, context):
    """Create a new item."""
    try:
        # Validate input
        data = ItemCreateRequest(**request.get_json())
    except Exception as e:
        raise ValidationError(str(e))

    conn = init_db_connection()
    try:
        cursor = conn.cursor()
        cursor.execute(
            "INSERT INTO items (name, description, status) VALUES (%s, %s, %s) RETURNING id, created, modified",
            (data.name, data.description, data.status.value)
        )
        row = cursor.fetchone()
        conn.commit()
        item = db_row_to_dict(cursor, row)
        return item
    except Exception as e:
        conn.rollback()
        raise DatabaseError(str(e))
    finally:
        conn.close()
```

### Webhook Receiver

```python
def webhook_handler(event, context):
    """Process an incoming webhook."""
    # Webhook data is in the event
    payload = event.get("body", {})
    signature = request.headers.get("X-Webhook-Signature")

    # Verify the signature
    if not verify_signature(payload, signature):
        raise ValidationError("Invalid signature")

    # Process the webhook
    process_webhook(payload)

    return {"status": "processed"}
```
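`verify_signature` is left undefined in the snippet above; one common shape it might take is an HMAC-SHA256 check (the shared-secret name and JSON canonicalization here are assumptions, not the template's actual scheme):

```python
import hashlib
import hmac
import json

def verify_signature(payload: dict, signature: str, secret: str = "webhook-secret") -> bool:
    """Compare the sender's signature against an HMAC-SHA256 of the payload."""
    if not signature:
        return False
    # Canonicalize the payload so both sides hash identical bytes.
    body = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing attacks.
    return hmac.compare_digest(expected, signature)
```

In practice the secret would come from `get_secret()`, and many webhook providers sign the raw request body rather than a re-serialized dict.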

## Next Steps

1. Replace placeholder values in `.fission/deployment.json`
2. Update `SECRET_NAME` and `CONFIG_NAME` in `helpers.py` (or use create-project.sh)
3. Implement your business logic in new function files
4. Write tests for your functions
5. Deploy to a Kubernetes cluster with Fission

## Resources

- [Fission Documentation](https://fission.io/docs/)
- [Fission Python Builder](https://github.com/fission/fission-python-builder)
- [Pydantic Documentation](https://docs.pydantic.dev/)
- [Flask Documentation](https://flask.palletsprojects.com/)
14  fission-python/template/dev-requirements.txt  Normal file
@@ -0,0 +1,14 @@
# Runtime dependencies (include these to match the production environment)
Flask==2.1.1
pydantic==2.11.7
psycopg2-binary==2.9.10
PyNaCl==1.6.0
requests==2.32.2

# Development and testing tools
pytest==8.2.0
pytest-mock==3.14.0
flake8==7.0.0
black==24.1.1
mypy==1.8.0
pytest-cov==4.1.0
594  fission-python/template/docs/DEPLOYMENT.md  Normal file
@@ -0,0 +1,594 @@
# Deployment Guide

This guide covers deploying Fission Python functions to Kubernetes, including configuration tuning, troubleshooting, and best practices.

## Table of Contents

1. [Prerequisites](#prerequisites)
2. [Quick Start](#quick-start)
3. [Deployment Configuration](#deployment-configuration)
4. [Executors](#executors)
5. [Resource Tuning](#resource-tuning)
6. [Environments](#environments)
7. [Secrets Management](#secrets-management)
8. [Rolling Updates](#rolling-updates)
9. [Monitoring & Logging](#monitoring--logging)
10. [Troubleshooting](#troubleshooting)

## Prerequisites

- Kubernetes cluster (v1.19+)
- Fission installed (`kubectl apply -f https://github.com/fission/fission/releases/latest/download/fission-all.yaml`)
- `fission` CLI installed and configured
- `kubectl` configured to access the cluster
- Docker registry access (for custom images if needed)

## Quick Start

Assuming you have a project set up:

```bash
# 1. Build the package (creates the specs/ directory)
cd /path/to/project
./src/build.sh

# 2. Verify the deployment configuration
fission spec verify --file=.fission/deployment.json

# 3. Deploy to Fission
fission deploy

# 4. Test the deployed function
curl http://$FISSION_ROUTER/api/items
```

**That's it!** Fission will:

- Build package.zip from src/
- Create the environment (if it does not exist)
- Create the package
- Create functions from docstring metadata
- Set up HTTP triggers

## Deployment Configuration

### deployment.json vs fission.yaml

This template uses `deployment.json`, **not** `fission.yaml` or `fission.json`. The Fission Python builder extracts function metadata from Python docstrings directly.

### Key Sections

#### environments

Define the build environment:

```json
{
    "environments": {
        "myproject-py": {
            "image": "ghcr.io/fission/python-env",
            "builder": "ghcr.io/fission/python-builder",
            "mincpu": 50,
            "maxcpu": 100,
            "minmemory": 50,
            "maxmemory": 500,
            "poolsize": 1
        }
    }
}
```

- `image` - Runtime image (Python + libraries)
- `builder` - Builder image (compiles dependencies)
- Resource limits in millicores (50 = 0.05 CPU) and MB

#### packages

Define how to build your code:

```json
{
    "packages": {
        "myproject": {
            "buildcmd": "./build.sh",
            "sourcearchive": "package.zip",
            "env": "myproject-py"
        }
    }
}
```

- `buildcmd` - Build script run inside the builder container
- `sourcearchive` - Generated by the builder from `sourcepath`
- `env` - Links to the environment definition

#### function_common

Default configuration for all functions:

```json
{
    "function_common": {
        "pkg": "myproject",
        "secrets": ["fission-myproject-env"],
        "configmaps": ["fission-myproject-config"],
        "executor": { ... },
        "mincpu": 50,
        "maxcpu": 100,
        "minmemory": 50,
        "maxmemory": 500
    }
}
```

- `pkg` - Package name to use
- `secrets` / `configmaps` - K8s resources to mount into functions
- `executor` - Execution strategy (poolmgr or newdeploy)

#### secrets / configmaps

**Placeholder definitions only**. These inform Fission what secret names to expect, but the actual values go in real K8s secrets:

```json
{
    "secrets": {
        "fission-myproject-env": {
            "literals": [
                "PG_HOST=localhost",
                "PG_PORT=5432"
            ]
        }
    }
}
```

Create the actual secret:

```bash
kubectl create secret generic fission-myproject-env \
    --from-literal=PG_HOST=prod-db.example.com \
    --from-literal=PG_PORT=5432 \
    --from-literal=PG_USER=myuser \
    --from-literal=PG_PASS=mypassword
```
## Executors

Fission supports two executor types:

### poolmgr (default)

Good for:

- High-concurrency HTTP functions
- Functions that should scale to zero
- Stateless request/response patterns

Configuration:

```json
"executor": {
    "select": "poolmgr",
    "poolmgr": {
        "concurrency": 1,      // Requests per pod
        "requestsperpod": 1,
        "onceonly": false
    }
}
```

- `concurrency` - How many concurrent requests each pod handles (usually 1 for Python due to the GIL)
- `poolsize` from the environment controls the number of pods in the pool

### newdeploy

Good for:

- Dedicated function instances
- Long-running or background jobs
- Functions needing a stable network identity

Configuration:

```json
"executor": {
    "select": "newdeploy",
    "newdeploy": {
        "minscale": 1,     // Minimum pods (set to 0 for scale-to-zero)
        "maxscale": 5,     // Maximum pods
        "targetcpu": 80    // Scale up when CPU > 80%
    }
}
```

- `minscale` - Keep at least N pods running (0 = scale to zero)
- `maxscale` - Maximum pods for auto-scaling
- `targetcpu` - CPU threshold for scaling

## Resource Tuning

Resources are defined in millicores (m) and MB:

- `mincpu` / `maxcpu`: 1000 = 1 CPU core
- `minmemory` / `maxmemory`: in MB

**Example settings**:

| Function Type | mincpu | maxcpu | minmemory | maxmemory |
|--------------|--------|--------|-----------|-----------|
| Simple API | 50 | 100 | 128 | 256 |
| DB-intensive | 200 | 500 | 256 | 512 |
| ML inference | 1000 | 2000 | 1024 | 2048 |

**Tips**:

- Start conservatively, monitor, then adjust
- Function pods are killed if they exceed `maxmemory`
- CPU limits are enforced by the Kubernetes scheduler
- Use `minmemory` >= 128 to avoid OOM kills

### Checking Current Usage

```bash
# Get function pods
kubectl get pods -n fission

# Describe a pod for resource usage
kubectl describe pod <pod-name> -n fission

# See metrics (if metrics-server is installed)
kubectl top pod <pod-name> -n fission
```

## Environments

You can have multiple deployment environments (dev, staging, prod):

### Using deployment.json variants

- `deployment.json` - Production (default)
- `dev-deployment.json` - Development (used with `fission deploy --dev`)

Example `dev-deployment.json`:

```json
{
    "namespace": "fission-dev",
    "function_common": {
        "secrets": ["fission-myproject-dev-env"],
        "configmaps": ["fission-myproject-dev-config"]
    }
}
```

### Switching Environments

```bash
# Deploy to dev
fission deploy --dev

# Deploy to prod (default)
fission deploy

# Specify a namespace
fission deploy --namespace fission-staging
```

## Secrets Management

### Creating Secrets

```bash
# Basic secret from literals
kubectl create secret generic fission-myproject-env \
    --from-literal=PG_HOST=localhost \
    --from-literal=PG_PORT=5432

# From a file
kubectl create secret generic fission-myproject-env \
    --from-file=secrets.properties

# In another namespace
kubectl create secret generic fission-myproject-env \
    --namespace fission-dev \
    --from-literal=PG_HOST=dev-db.example.com
```

### Encrypted Secrets (Vault)

To encrypt sensitive values:

```python
# On your local machine (with PyNaCl installed)
from vault import encrypt_vault

key = "your-32-byte-hex-key-here..."  # 64 hex chars
encrypted = encrypt_vault("super-secret-password", key)
print(encrypted)  # vault:v1:base64...
```

Store the encrypted string in the K8s secret:

```bash
kubectl create secret generic fission-myproject-env \
    --from-literal=PG_PASS='vault:v1:base64...'
|
||||
```
|
||||
|
||||
Set `CRYPTO_KEY` in `helpers.py` to the hex key:
|
||||
|
||||
```python
|
||||
CRYPTO_KEY = "e24ad6ceed96115520f6e6dc8a0da506ae9a706823d54f30a5b75447ecf477b6"
|
||||
```
|
||||
|
||||
**Important**: Rotate keys periodically. When changing the key, re-encrypt all secrets with the new key.
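A minimal stdlib sketch of how such prefixed values can be dispatched — `plaintext:` values pass through, `vault:v1:` values go to a decrypt callable (standing in for `vault.py`'s decryption, whose exact name is an assumption):

```python
def resolve_secret_value(raw: str, decrypt=None) -> str:
    """Dispatch a stored secret value by its prefix.

    - 'plaintext:...' values are returned as-is, prefix stripped
    - 'vault:v1:...'  values are handed to the supplied decrypt callable
    - anything else is treated as plain text
    """
    if raw.startswith("plaintext:"):
        return raw[len("plaintext:"):]
    if raw.startswith("vault:v1:"):
        if decrypt is None:
            raise ValueError("encrypted value but no decrypt function supplied")
        return decrypt(raw)
    return raw

# Example with a stand-in decrypt function:
print(resolve_secret_value("plaintext:hunter2"))  # hunter2
```

During key rotation, the same dispatch lets you decrypt with the old key and re-encrypt with the new one before writing values back to the secret.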
|
||||
|
||||
### Updating Secrets
|
||||
|
||||
```bash
|
||||
# Edit secret
|
||||
kubectl edit secret fission-myproject-env
|
||||
|
||||
# Update a single key in place
kubectl patch secret fission-myproject-env \
  -p '{"stringData":{"PG_PASS":"new-password"}}'
|
||||
|
||||
# Roll function to pick up new secret
|
||||
fission function update --name my-function
|
||||
```
|
||||
|
||||
## Rolling Updates
|
||||
|
||||
### Deploy Changes
|
||||
|
||||
```bash
|
||||
# Build and deploy
|
||||
./src/build.sh
|
||||
fission deploy
|
||||
|
||||
# Or deploy single function
|
||||
fission function update --name my-function
|
||||
```
|
||||
|
||||
### Zero-Downtime Deployments
|
||||
|
||||
Fission handles rolling updates automatically:
|
||||
1. New package is built
|
||||
2. New function pods are created with new code
|
||||
3. Old pods continue serving traffic until new pods are ready
|
||||
4. Old pods are terminated
|
||||
|
||||
**No downtime** by default for HTTP triggers.
|
||||
|
||||
### Canary Deployments
|
||||
|
||||
For canary deployments:
|
||||
1. Deploy new version with different function name: `my-function-v2`
|
||||
2. Route some traffic using ingress annotations or service mesh
|
||||
3. Gradually shift traffic
|
||||
4. Delete old function
|
||||
|
||||
## Monitoring & Logging
|
||||
|
||||
### Viewing Logs
|
||||
|
||||
```bash
|
||||
# All function logs in namespace
|
||||
kubectl logs -n fission -l fission-function=true --tail=100
|
||||
|
||||
# Specific function
|
||||
kubectl logs -n fission -l fission-function/name=my-function --tail=100
|
||||
|
||||
# Follow logs
|
||||
kubectl logs -n fission -l fission-function/name=my-function -f
|
||||
|
||||
# Container logs (if multiple containers)
|
||||
kubectl logs -n fission -l fission-function/name=my-function -c builder
|
||||
```
|
||||
|
||||
### Structured Logging
|
||||
|
||||
Use `logger` from `helpers.py` (already configured):
|
||||
|
||||
```python
|
||||
logger.info("Processing request", extra={"user_id": user_id})
|
||||
logger.error("Database error", exc_info=True, extra={"query": sql})
|
||||
```
|
||||
|
||||
Logs are collected by the container runtime and available via `kubectl logs`.
|
||||
|
||||
### Metrics
|
||||
|
||||
Fission exposes Prometheus metrics:
|
||||
|
||||
```bash
|
||||
# Get metrics endpoint
|
||||
kubectl port-forward -n fission svc/fission-prometheus-server 9090:9090
|
||||
|
||||
# Or query the metrics API directly
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/fission/pods" | jq .
|
||||
```
|
||||
|
||||
Metrics include:
|
||||
- Request rate
|
||||
- Error rate
|
||||
- Response latency
|
||||
- Pod counts
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Deployment Fails
|
||||
|
||||
**Error**: `Error building package`
|
||||
|
||||
Check:
|
||||
- `build.sh` is executable: `chmod +x src/build.sh`
|
||||
- All dependencies in `requirements.txt` are valid
|
||||
- Python syntax is correct: `python -m py_compile src/*.py`
|
||||
|
||||
**Error**: `Function not found after deploy`
|
||||
|
||||
Check:
|
||||
- Fission docstring block is properly formatted (a fenced block opened with three backticks and the `fission` info string)
|
||||
- No YAML/JSON syntax errors in docstring
|
||||
- Function file is in `src/` directory
|
||||
|
||||
### Function Not Responding
|
||||
|
||||
**Check pod status**:
|
||||
```bash
|
||||
kubectl get pods -n fission -l fission-function/name=my-function
|
||||
```
|
||||
|
||||
**Pod stuck in Pending** - Insufficient resources or image pull error
|
||||
|
||||
**Pod stuck in ContainerCreating** - Volume mount issue or image pull
|
||||
|
||||
**Pod CrashLoopBackOff** - Application error. Check logs:
|
||||
```bash
|
||||
kubectl logs -n fission <pod-name> --previous
|
||||
```
|
||||
|
||||
### Configuration Not Loading
|
||||
|
||||
**Secrets not available**:
|
||||
```bash
|
||||
# Check secret exists in correct namespace
|
||||
kubectl get secret fission-myproject-env -n fission
|
||||
|
||||
# Verify secret is mounted
|
||||
kubectl exec -it <pod-name> -n fission -- ls /secrets/default/
|
||||
```
|
||||
|
||||
**ConfigMaps not available**:
|
||||
```bash
|
||||
kubectl get configmap fission-myproject-config -n fission
|
||||
```
|
||||
|
||||
**Secret values not loading via `get_secret()`**:
|
||||
- Ensure `SECRET_NAME` in helpers.py matches created secret name
|
||||
- Path format: `/secrets/{namespace}/{secret-name}/{key}`
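As an illustration of that lookup, a stdlib-only sketch of the file read `helpers.py` performs (the secret name, namespace, and mount root are illustrative defaults, overridable for testing):

```python
import os

def read_mounted_secret(key, secret_name="fission-myproject-env",
                        namespace="default", root="/secrets", default=None):
    """Read one key of a Kubernetes secret mounted by Fission.

    Fission mounts each key as a file at {root}/{namespace}/{secret_name}/{key}.
    Returns `default` when the file does not exist.
    """
    path = os.path.join(root, namespace, secret_name, key)
    try:
        with open(path) as f:
            return f.read().strip()
    except FileNotFoundError:
        return default
```

If this lookup returns `None` for a key you created, the mismatch is almost always in `secret_name` or the namespace segment of the path.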
|
||||
|
||||
### Slow Performance
|
||||
|
||||
1. **Increase resources**: Raise `maxmemory` and `maxcpu`
|
||||
2. **Connection pooling**: Use connection pooler like PgBouncer for heavy DB load
|
||||
3. **Database queries**: Check slow queries, add indexes
|
||||
4. **Cold starts**: Set `minscale: 1` with newdeploy executor to keep warm
|
||||
|
||||
### Database Connection Errors
|
||||
|
||||
**Error**: `could not connect to server: Connection refused`
|
||||
|
||||
- Verify database is reachable from cluster
|
||||
- Check security groups/network policies
|
||||
- Test connectivity from pod:
|
||||
```bash
|
||||
kubectl exec -it <pod-name> -n fission -- nc -zv $PG_HOST $PG_PORT
|
||||
```
|
||||
|
||||
**Error**: `password authentication failed`
|
||||
|
||||
- Verify credentials in secret
|
||||
- Check PG_USER format (with `plaintext:` prefix for vault)
|
||||
|
||||
## Advanced Topics
|
||||
|
||||
### Custom Runtime Image
|
||||
|
||||
If you need system packages:
|
||||
|
||||
```dockerfile
|
||||
FROM ghcr.io/fission/python-env:latest
|
||||
RUN apk add --no-cache gcc libffi-dev
|
||||
```
|
||||
|
||||
Build and push:
|
||||
```bash
|
||||
docker build -t myregistry/python-custom:latest .
|
||||
docker push myregistry/python-custom:latest
|
||||
```
|
||||
|
||||
Update `deployment.json`:
|
||||
```json
|
||||
"environments": {
|
||||
"myproject-py": {
|
||||
"image": "myregistry/python-custom:latest",
|
||||
...
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Configuration Values from ConfigMap
|
||||
|
||||
```json
|
||||
"configmaps": {
|
||||
"fission-myproject-config": {
|
||||
"literals": [
|
||||
"LOG_LEVEL=DEBUG",
|
||||
"FEATURE_FLAG_X=true"
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Access in code:
|
||||
```python
|
||||
from helpers import get_config

# ConfigMap values are mounted as files by Fission, so read them via get_config
log_level = get_config("LOG_LEVEL", "INFO")
|
||||
```
|
||||
|
||||
### Lifecycle Hooks
|
||||
|
||||
Use `function_pre_remove` and `function_post_remove` in deployment hooks:
|
||||
|
||||
```json
|
||||
"hooks": {
|
||||
"function_pre_remove": [
|
||||
{
|
||||
"type": "http",
|
||||
"url": "http://cleanup-service/cleanup",
|
||||
"timeout": 30000
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
## Common Commands Reference
|
||||
|
||||
```bash
|
||||
# List functions
|
||||
fission function list
|
||||
|
||||
# Test function manually
|
||||
fission function test --name my-function
|
||||
|
||||
# Update single function
|
||||
fission function update --name my-function
|
||||
|
||||
# Delete function
|
||||
fission function delete --name my-function
|
||||
|
||||
# View function pods
|
||||
kubectl get pods -n fission -l fission-function/name=my-function
|
||||
|
||||
# View logs
|
||||
kubectl logs -n fission -l fission-function/name=my-function -f
|
||||
|
||||
# Exec into pod
|
||||
kubectl exec -it <pod-name> -n fission -- /bin/sh
|
||||
|
||||
# Describe function
|
||||
fission function describe --name my-function
|
||||
|
||||
# Get the function custom resource as YAML
kubectl get functions.fission.io my-function -o yaml
|
||||
|
||||
# Check Fission version
|
||||
fission version
|
||||
|
||||
# Check Fission status
|
||||
kubectl get pods -n fission
|
||||
```
|
||||
|
||||
## Further Reading
|
||||
|
||||
- [Fission Deployment Documentation](https://fission.io/docs/usage/deploy/)
|
||||
- [Fission Executors](https://fission.io/docs/architecture/executor/)
|
||||
- [Kubernetes Resource Management](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/)
|
||||
- [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/)
|
||||
582
fission-python/template/docs/MIGRATIONS.md
Normal file
@@ -0,0 +1,582 @@
|
||||
# Database Migrations
|
||||
|
||||
This guide covers managing database schema changes in Fission Python projects.
|
||||
|
||||
## Table of Contents
|
||||
|
||||
1. [Overview](#overview)
|
||||
2. [Migration Files](#migration-files)
|
||||
3. [Applying Migrations](#applying-migrations)
|
||||
4. [Writing Migrations](#writing-migrations)
|
||||
5. [Best Practices](#best-practices)
|
||||
6. [Rollback Strategies](#rollback-strategies)
|
||||
7. [Automation](#automation)
|
||||
|
||||
## Overview
|
||||
|
||||
Database schema changes should be managed through versioned migration scripts, not manual `CREATE TABLE` statements.
|
||||
|
||||
This template uses **plain SQL migration files** (`.sql`), which provide:
|
||||
- Version control of schema changes
|
||||
- Repeatable application to different environments
|
||||
- Clear upgrade/downgrade paths
|
||||
- Audit trail of schema evolution
|
||||
|
||||
## Migration Files
|
||||
|
||||
Place SQL migration scripts in the `migrates/` directory:
|
||||
|
||||
```
|
||||
migrates/
|
||||
├── 001_initial_schema.sql
|
||||
├── 002_add_user_email.sql
|
||||
├── 003_create_indexes.sql
|
||||
└── ...
|
||||
```
|
||||
|
||||
**Naming convention**:
|
||||
- Prefix with sequential number (zero-padded for sorting)
|
||||
- Descriptive name after underscore
|
||||
- `.sql` extension
|
||||
- Numbers should be unique and monotonically increasing
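The convention can be enforced mechanically; a small stdlib sketch that parses and numerically orders migration filenames, rejecting ones that do not match:

```python
import re

MIGRATION_RE = re.compile(r"^(\d+)_([a-z0-9_]+)\.sql$")

def ordered_migrations(filenames):
    """Return (version, filename) pairs sorted by version number.

    Raises ValueError for files that do not match the naming convention.
    """
    parsed = []
    for name in filenames:
        m = MIGRATION_RE.match(name)
        if not m:
            raise ValueError(f"bad migration filename: {name!r}")
        parsed.append((int(m.group(1)), name))
    return sorted(parsed)

print(ordered_migrations(["002_add_user_email.sql", "001_initial_schema.sql"]))
# [(1, '001_initial_schema.sql'), (2, '002_add_user_email.sql')]
```

Zero-padding keeps plain lexical sorts (e.g. `ls | sort`) consistent with this numeric ordering.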
|
||||
|
||||
### Initial Schema Example
|
||||
|
||||
```sql
|
||||
-- migrates/001_create_items_table.sql
|
||||
-- Create items table
|
||||
CREATE TABLE IF NOT EXISTS items (
|
||||
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||
name VARCHAR(255) NOT NULL,
|
||||
description TEXT,
|
||||
status VARCHAR(50) DEFAULT 'active',
|
||||
metadata JSONB,
|
||||
created TIMESTAMPTZ DEFAULT NOW(),
|
||||
modified TIMESTAMPTZ DEFAULT NOW()
|
||||
);
|
||||
|
||||
-- Add indexes
|
||||
CREATE INDEX idx_items_status ON items(status);
|
||||
CREATE INDEX idx_items_created ON items(created);
|
||||
|
||||
-- Add comments
|
||||
COMMENT ON TABLE items IS 'Stores item records';
|
||||
COMMENT ON COLUMN items.status IS 'Item status: active, inactive, pending';
|
||||
```
|
||||
|
||||
## Applying Migrations
|
||||
|
||||
### Manually
|
||||
|
||||
```bash
|
||||
# Connect to database
|
||||
psql -h localhost -U postgres -d mydb
|
||||
|
||||
# Run migration file
|
||||
\i migrates/001_create_items_table.sql
|
||||
|
||||
# Run all migrations in order (bash script)
|
||||
for file in migrates/*.sql; do
|
||||
echo "Applying $file..."
|
||||
psql -h localhost -U postgres -d mydb -f "$file"
|
||||
done
|
||||
```
|
||||
|
||||
### Automatically from Python
|
||||
|
||||
Create a simple migration runner:
|
||||
|
||||
```python
|
||||
# src/migrate.py (not part of function, standalone script)
import os

from helpers import init_db_connection


def run_migrations():
    conn = init_db_connection()
    cursor = conn.cursor()

    # Create migrations tracking table if not exists
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS schema_migrations (
            version INTEGER PRIMARY KEY,
            name VARCHAR(255) NOT NULL,
            applied_at TIMESTAMPTZ DEFAULT NOW()
        )
    """)

    # Get already-applied migrations
    cursor.execute("SELECT version FROM schema_migrations")
    applied = {row[0] for row in cursor.fetchall()}

    # Find migration files
    migrates_dir = os.path.join(os.path.dirname(__file__), "..", "migrates")
    files = sorted(
        f for f in os.listdir(migrates_dir)
        if f.endswith(".sql")
    )

    # Apply pending migrations
    for filename in files:
        # Extract version number from the filename prefix
        version = int(filename.split("_")[0])
        if version in applied:
            print(f"Skipping {filename} (already applied)")
            continue

        path = os.path.join(migrates_dir, filename)
        print(f"Applying {filename}...")
        with open(path, "r") as f:
            sql = f.read()

        try:
            cursor.execute(sql)
            cursor.execute(
                "INSERT INTO schema_migrations (version, name) VALUES (%s, %s)",
                (version, filename),
            )
            conn.commit()
            print(f"  ✓ Applied {filename}")
        except Exception as e:
            conn.rollback()
            print(f"  ✗ Failed: {e}")
            raise

    conn.close()
    print("All migrations applied")


if __name__ == "__main__":
    run_migrations()
|
||||
```
|
||||
|
||||
Run:
|
||||
```bash
|
||||
python src/migrate.py
|
||||
```
|
||||
|
||||
### Using Migration Tools
|
||||
|
||||
For more advanced features (rollbacks, branching), consider:
|
||||
|
||||
- **[Alembic](https://alembic.sqlalchemy.org/)** - Database migration tool for SQLAlchemy (if using ORM)
|
||||
- **[pg migrator](https://github.com/heroku/pg-migrator)** - Heroku's migration tool
|
||||
- **[goose](https://github.com/pressly/goose)** - Multi-database migration tool (can use from Python)
|
||||
- **[yoyo-migrations](https://github.com/gugulet-h/yoyo-migrations)** - Python-based migrations
|
||||
|
||||
## Writing Migrations
|
||||
|
||||
### Principles
|
||||
|
||||
1. **Idempotent** - Script should succeed if run multiple times
|
||||
2. **Additive first** - Add columns/tables before removing/dropping
|
||||
3. **Backward compatible** - New schema should work with old code
|
||||
4. **Atomic** - One logical change per migration file
|
||||
5. **Test locally** - Apply to test database before production
|
||||
|
||||
### Common Operations
|
||||
|
||||
#### Create Table
|
||||
|
||||
```sql
|
||||
CREATE TABLE IF NOT EXISTS orders (
|
||||
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||
user_id UUID NOT NULL,
|
||||
total DECIMAL(10,2) NOT NULL,
|
||||
status VARCHAR(50) NOT NULL DEFAULT 'pending',
|
||||
created_at TIMESTAMPTZ DEFAULT NOW(),
|
||||
updated_at TIMESTAMPTZ DEFAULT NOW()
|
||||
);
|
||||
|
||||
-- Add foreign key
|
||||
ALTER TABLE orders
|
||||
ADD CONSTRAINT fk_orders_user
|
||||
FOREIGN KEY (user_id)
|
||||
REFERENCES users(id)
|
||||
ON DELETE CASCADE;
|
||||
|
||||
-- Index for performance
|
||||
CREATE INDEX idx_orders_user_id ON orders(user_id);
|
||||
CREATE INDEX idx_orders_created_at ON orders(created_at);
|
||||
```
|
||||
|
||||
#### Add Column
|
||||
|
||||
```sql
|
||||
-- Add nullable column (safe, backward compatible)
|
||||
ALTER TABLE orders
|
||||
ADD COLUMN shipping_address JSONB;
|
||||
|
||||
-- Add column with default (be careful with large tables!)
|
||||
-- This rewrites entire table - use cautiously
|
||||
ALTER TABLE orders
|
||||
ADD COLUMN tax_amount DECIMAL(10,2) DEFAULT 0.00;
|
||||
```
|
||||
|
||||
#### Rename Column
|
||||
|
||||
```sql
|
||||
-- PostgreSQL 9.2+ supports RENAME COLUMN
|
||||
ALTER TABLE orders
|
||||
RENAME COLUMN total TO order_total;
|
||||
```
|
||||
|
||||
#### Modify Column Type
|
||||
|
||||
```sql
|
||||
-- Change VARCHAR length
|
||||
ALTER TABLE users
|
||||
ALTER COLUMN email TYPE VARCHAR(320);
|
||||
|
||||
-- Convert to different type (use USING clause)
|
||||
ALTER TABLE orders
|
||||
ALTER COLUMN status TYPE VARCHAR(100)
|
||||
USING status::VARCHAR(100);
|
||||
```
|
||||
|
||||
#### Create Index
|
||||
|
||||
```sql
|
||||
-- Simple index
|
||||
CREATE INDEX idx_users_email ON users(email);
|
||||
|
||||
-- Unique index
|
||||
CREATE UNIQUE INDEX idx_users_email_unique ON users(email);
|
||||
|
||||
-- Partial index (only active users)
|
||||
CREATE INDEX idx_users_active ON users(id)
|
||||
WHERE status = 'active';
|
||||
|
||||
-- Multi-column index
|
||||
CREATE INDEX idx_orders_user_status ON orders(user_id, status);
|
||||
```
|
||||
|
||||
#### Drop Column/Table
|
||||
|
||||
```sql
|
||||
-- First, ensure no one is using it
|
||||
-- Consider using SET DEFAULT then dropping in subsequent migration
|
||||
|
||||
-- Drop column
|
||||
ALTER TABLE orders
|
||||
DROP COLUMN IF EXISTS old_column;
|
||||
|
||||
-- Drop table (dangerous!)
|
||||
DROP TABLE IF EXISTS old_logs;
|
||||
```
|
||||
|
||||
### Data Migrations
|
||||
|
||||
Sometimes you need to transform data:
|
||||
|
||||
```sql
|
||||
-- Backfill new column from existing data
|
||||
UPDATE orders
|
||||
SET shipping_address = jsonb_build_object(
|
||||
'street', address_street,
|
||||
'city', address_city,
|
||||
'zip', address_zip
|
||||
)
|
||||
WHERE shipping_address IS NULL;
|
||||
|
||||
-- Migrate enum values
|
||||
UPDATE products
|
||||
SET status = 'active' WHERE status = 'ACTIVE';
|
||||
|
||||
-- Clean up duplicates
|
||||
WITH duplicates AS (
|
||||
SELECT id, ROW_NUMBER() OVER (PARTITION BY email ORDER BY created_at) AS rn
|
||||
FROM users
|
||||
)
|
||||
DELETE FROM users WHERE id IN (SELECT id FROM duplicates WHERE rn > 1);
|
||||
```
|
||||
|
||||
### Transactional Migrations
|
||||
|
||||
Wrap critical migrations in transactions:
|
||||
|
||||
```sql
|
||||
BEGIN;
|
||||
|
||||
-- Multiple related operations
|
||||
ALTER TABLE orders ADD COLUMN shipping_id UUID;
|
||||
UPDATE orders SET shipping_id = uuid_generate_v4() WHERE shipping_id IS NULL;
|
||||
ALTER TABLE orders ALTER COLUMN shipping_id SET NOT NULL;
|
||||
|
||||
COMMIT;
|
||||
```
|
||||
|
||||
**Note**: PostgreSQL supports transactional DDL, so most schema changes can run inside `BEGIN`/`COMMIT` and will roll back together on failure. A few statements (e.g. `CREATE INDEX CONCURRENTLY`) cannot run inside a transaction block, and databases like MySQL auto-commit DDL. For complex multi-step changes, consider using advisory locks or deployment coordination.
|
||||
|
||||
## Best Practices
|
||||
|
||||
### ✅ Do's
|
||||
|
||||
1. **Test migrations on copy of production database** before applying to prod
|
||||
2. **Keep migrations small** - One logical change per file
|
||||
3. **Write data migrations as separate files** from schema migrations
|
||||
4. **Use `IF NOT EXISTS` and `IF EXISTS`** to make migrations idempotent
|
||||
5. **Never drop columns/tables in the same migration you add them** - Separate to allow rollback
|
||||
6. **Document why** - Add comments explaining the purpose
|
||||
7. **Consider indexes** - Add indexes for frequently queried columns in same migration as table creation
|
||||
8. **Use UUIDs** for primary keys (`gen_random_uuid()` in PostgreSQL 13+)
|
||||
9. **Add `created_at` and `updated_at` timestamps** to all tables
|
||||
10. **Version numbers must be unique and sequential**
|
||||
|
||||
### ❌ Don'ts
|
||||
|
||||
1. **Don't modify already-applied migrations** - They're part of history
|
||||
2. **Don't skip version numbers** - Gaps won't break the runner, but they make the history harder to audit
|
||||
3. **Don't use destructive operations without backup** - `DROP COLUMN`, `DROP TABLE`
|
||||
4. **Don't run long-running migrations during peak hours** - Use low-traffic windows
|
||||
5. **Don't add NOT NULL without default** on non-empty tables - Will fail due to existing NULL rows
|
||||
6. **Don't assume order of execution** - Always number sequentially
|
||||
7. **Don't mix unrelated changes** in one migration file
|
||||
|
||||
### Zero-Downtime Migrations
|
||||
|
||||
#### Adding Column
|
||||
|
||||
```sql
|
||||
-- Step 1: Add column as nullable or with default (fast)
|
||||
ALTER TABLE orders ADD COLUMN status VARCHAR(50);
|
||||
|
||||
-- Step 2: Deploy code that writes to new column
|
||||
-- Your application updates to populate status
|
||||
|
||||
-- Step 3: Backfill existing rows (if needed)
|
||||
UPDATE orders SET status = 'completed' WHERE status IS NULL AND shipped_at IS NOT NULL;
|
||||
|
||||
-- Step 4: Make column NOT NULL (if needed) - only after all rows have values
|
||||
ALTER TABLE orders ALTER COLUMN status SET NOT NULL;
|
||||
```
|
||||
|
||||
#### Renaming Column
|
||||
|
||||
```sql
|
||||
-- Step 1: Add new column
|
||||
ALTER TABLE orders ADD COLUMN order_status VARCHAR(50);
|
||||
|
||||
-- Step 2: Deploy code writing to both old and new columns (dual-write)
|
||||
|
||||
-- Step 3: Backfill data
|
||||
UPDATE orders SET order_status = status;
|
||||
|
||||
-- Step 4: Deploy code reading from new column, stop writing to old
|
||||
|
||||
-- Step 5: Drop old column (in separate migration)
|
||||
ALTER TABLE orders DROP COLUMN status;
|
||||
```
|
||||
|
||||
## Rollback Strategies
|
||||
|
||||
### Manual Rollback
|
||||
|
||||
For each migration, you may want to write a corresponding "down" migration:
|
||||
|
||||
```sql
|
||||
-- 002_add_user_email.sql (UP)
|
||||
ALTER TABLE users ADD COLUMN email VARCHAR(320);
|
||||
|
||||
-- 002_add_user_email_rollback.sql (DOWN)
|
||||
ALTER TABLE users DROP COLUMN IF EXISTS email;
|
||||
```
|
||||
|
||||
Store rollback scripts alongside migrations or in separate `rollbacks/` directory.
|
||||
|
||||
### Point-in-Time Recovery
|
||||
|
||||
**Best strategy**: Restore database from backup to point before bad migration, then re-apply good migrations.
|
||||
|
||||
```bash
|
||||
# Point-in-time recovery is a server-side operation (requires WAL archiving):
# set recovery_target_time for the restored instance, e.g.
#   recovery_target_time = '2025-03-18 10:30:00'
# then start PostgreSQL so it replays WAL up to that point

# Re-run migrations up to the good version afterwards
python src/migrate.py  # applies all pending; use a selective runner for partial re-apply
|
||||
```
|
||||
|
||||
### Selective Rollback Script
|
||||
|
||||
```python
|
||||
# rollback.py
import sys

from helpers import init_db_connection


def rollback(to_version: int):
    conn = init_db_connection()
    cursor = conn.cursor()

    # Find migrations after target version
    cursor.execute("""
        SELECT version, name
        FROM schema_migrations
        WHERE version > %s
        ORDER BY version DESC
    """, (to_version,))

    migrations = cursor.fetchall()

    for version, name in migrations:
        # e.g. name "002_add_user_email.sql" -> rollbacks/002_add_user_email_rollback.sql
        rollback_file = f"rollbacks/{name.removesuffix('.sql')}_rollback.sql"
        print(f"Rolling back {name} using {rollback_file}...")
        with open(rollback_file, "r") as f:
            sql = f.read()
        cursor.execute(sql)
        cursor.execute("DELETE FROM schema_migrations WHERE version = %s", (version,))
        conn.commit()
        print(f"  Rolled back {name}")

    conn.close()
    print(f"Rolled back to version {to_version}")


if __name__ == "__main__":
    target = int(sys.argv[1])
    rollback(target)
|
||||
```
|
||||
|
||||
## Automation
|
||||
|
||||
### CI/CD Integration
|
||||
|
||||
In your deployment pipeline:
|
||||
|
||||
```bash
|
||||
# Before deploying new code
|
||||
python src/migrate.py
|
||||
|
||||
# If migrations fail, abort deployment
|
||||
if [ $? -ne 0 ]; then
|
||||
echo "Migrations failed, aborting deployment"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Deploy new code
|
||||
fission deploy
|
||||
```
|
||||
|
||||
### Pre-deployment Hooks
|
||||
|
||||
Use Fission hooks to run migrations automatically:
|
||||
|
||||
```json
|
||||
{
|
||||
"hooks": {
|
||||
"function_pre_deploy": [
|
||||
{
|
||||
"type": "http",
|
||||
"url": "http://migration-service/migrate",
|
||||
"timeout": 300000
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Or simpler: run migration as part of `build.sh`:
|
||||
|
||||
```bash
|
||||
#!/bin/sh
|
||||
# src/build.sh
|
||||
|
||||
# Install dependencies
|
||||
pip3 install -r requirements.txt -t .
|
||||
|
||||
# Run migrations against test DB (or do nothing, migrations are separate)
|
||||
# python ../migrate.py
|
||||
|
||||
# Package up
|
||||
cp -r . ${DEPLOY_PKG}
|
||||
```
|
||||
|
||||
### Database Change Management Tools
|
||||
|
||||
Consider specialized tools for larger teams:
|
||||
- **[Flyway](https://flywaydb.org/)** - Java-based, supports repeatable migrations
|
||||
- **[Liquibase](https://www.liquibase.org/)** - XML/YAML/JSON migrations
|
||||
- **[Prisma Migrate](https://www.prisma.io/docs/concepts/components/prisma-migrate)** - If using Prisma ORM
|
||||
- **[Alembic](https://alembic.sqlalchemy.org/)** - Python, SQLAlchemy-specific
|
||||
|
||||
## Example Workflow
|
||||
|
||||
1. **Create migration**:
|
||||
```bash
|
||||
touch migrates/004_add_orders_table.sql
|
||||
```
|
||||
|
||||
2. **Write SQL**:
|
||||
```sql
|
||||
CREATE TABLE orders (
|
||||
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||
user_id UUID NOT NULL REFERENCES users(id),
|
||||
total DECIMAL(10,2) NOT NULL,
|
||||
status VARCHAR(50) DEFAULT 'pending',
|
||||
created_at TIMESTAMPTZ DEFAULT NOW()
|
||||
);
|
||||
|
||||
CREATE INDEX idx_orders_user_id ON orders(user_id);
|
||||
```
|
||||
|
||||
3. **Test locally**:
|
||||
```bash
|
||||
createdb test_migration
|
||||
psql test_migration -f migrates/004_add_orders_table.sql
|
||||
```
|
||||
|
||||
4. **Commit migration file**:
|
||||
```bash
|
||||
git add migrates/004_add_orders_table.sql
|
||||
git commit -m "Add orders table"
|
||||
```
|
||||
|
||||
5. **Apply to staging**:
|
||||
```bash
|
||||
# Update dev-deployment.json if new env vars needed
|
||||
fission deploy --dev
|
||||
python src/migrate.py
|
||||
```
|
||||
|
||||
6. **Apply to production**:
|
||||
```bash
|
||||
# Maintenance window or blue-green deployment
|
||||
fission deploy
|
||||
python src/migrate.py
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Migration Fails
|
||||
|
||||
Check error message:
|
||||
- **syntax error**: Validate SQL with `psql -c "SQL"` manually
|
||||
- **duplicate column**: Migration already applied, check `schema_migrations`
|
||||
- **permission denied**: DB user lacks ALTER/CREATE privileges
|
||||
- **lock timeout**: Another migration running, wait or kill process
|
||||
|
||||
### Migration Already Applied But Failed
|
||||
|
||||
If migration was recorded in `schema_migrations` but failed mid-way:
|
||||
|
||||
1. Manually revert partial changes or fix broken state
|
||||
2. Delete row from `schema_migrations`: `DELETE FROM schema_migrations WHERE version = 4;`
|
||||
3. Re-run migration
|
||||
|
||||
### Long-Running Migration
|
||||
|
||||
Large table alterations can lock rows and cause downtime:
|
||||
|
||||
- Run during low-traffic period
|
||||
- Use `CONCURRENTLY` for index creation (PostgreSQL):
|
||||
```sql
|
||||
CREATE INDEX CONCURRENTLY idx_orders_created ON orders(created_at);
|
||||
```
|
||||
- For adding NOT NULL, populate values first with UPDATE, then add constraint
|
||||
- Consider using `pg_repack` for online table reorganization
|
||||
|
||||
## Summary
|
||||
|
||||
- Store migrations in `migrates/` directory, numbered sequentially
|
||||
- Use `init_db_connection()` to run migrations programmatically
|
||||
- Test migrations on staging database before production
|
||||
- Keep migrations backward compatible when possible
|
||||
- Have a rollback plan (backups, down scripts)
|
||||
- Integrate migrations into CI/CD pipeline
|
||||
438
fission-python/template/docs/SECRETS.md
Normal file
@@ -0,0 +1,438 @@
|
||||
# Secrets and Configuration Management
|
||||
|
||||
This guide covers best practices for managing secrets and configuration in Fission Python functions.
|
||||
|
||||
## Table of Contents
|
||||
|
||||
1. [Overview](#overview)
|
||||
2. [Kubernetes Secrets vs ConfigMaps](#kubernetes-secrets-vs-configmaps)
|
||||
3. [Secrets in Fission](#secrets-in-fission)
|
||||
4. [Vault Encryption](#vault-encryption)
|
||||
5. [Secret Rotation](#secret-rotation)
|
||||
6. [Configuration Precedence](#configuration-precedence)
|
||||
7. [Best Practices](#best-practices)
|
||||
|
||||
## Overview
|
||||
|
||||
Sensitive data (passwords, API keys) should **never** be:
|
||||
- Committed to Git
|
||||
- Hardcoded in source code
|
||||
- Passed as plaintext in deployment files
|
||||
|
||||
Instead, use:
|
||||
- **Kubernetes Secrets** - For sensitive values
|
||||
- **Kubernetes ConfigMaps** - For non-sensitive configuration
|
||||
- **Vault encryption** - For encrypting secrets at rest in K8s
|
||||
|
||||
## Kubernetes Secrets vs ConfigMaps
|
||||
|
||||
| Feature | Secrets | ConfigMaps |
|
||||
|---------|---------|------------|
|
||||
| Purpose | Sensitive data (passwords, tokens, keys) | Non-sensitive config (endpoints, feature flags) |
|
||||
| Storage | Base64 encoded (not encrypted by default) | Plain text |
|
||||
| Mount as | Files in `/secrets/` | Files in `/configs/` |
|
||||
| Access in code | `get_secret(key)` | `get_config(key)` |
|
||||
| Max size | 1MB total | 1MB total |
|
||||
| Can be encrypted | Yes, with K8s encryption at rest | Yes |
|
||||
|
||||
**Rule of thumb**:
|
||||
- Use Secrets for: database passwords, API tokens, encryption keys
|
||||
- Use ConfigMaps for: service URLs, feature flags, log levels, non-sensitive constants
|
||||
|
||||
## Secrets in Fission
|
||||
|
||||
### Defining Secret References in deployment.json
|
||||
|
||||
In `.fission/deployment.json`, declare the secret names your functions expect:
|
||||
|
||||
```json
|
||||
{
|
||||
"function_common": {
|
||||
"secrets": ["fission-myproject-env"],
|
||||
"configmaps": ["fission-myproject-config"]
|
||||
},
|
||||
"secrets": {
|
||||
"fission-myproject-env": {
|
||||
"literals": [
|
||||
"PG_HOST=localhost",
|
||||
"PG_PORT=5432"
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Important**: The `literals` array here is **only documentation**. The actual secret values must be created separately in Kubernetes.
|
||||
|
||||
### Creating Actual Kubernetes Secrets
|
||||
|
||||
```bash
|
||||
# Create secret with multiple keys
|
||||
kubectl create secret generic fission-myproject-env \
|
||||
--from-literal=PG_HOST=postgres.example.com \
|
||||
--from-literal=PG_PORT=5432 \
|
||||
--from-literal=PG_DB=mydb \
|
||||
--from-literal=PG_USER=myuser \
|
||||
--from-literal=PG_PASS='my-password'
|
||||
|
||||
# In a specific namespace (Fission namespace)
|
||||
kubectl create secret generic fission-myproject-env \
|
||||
--namespace fission \
|
||||
--from-literal=...
|
||||
|
||||
# From environment file
|
||||
kubectl create secret generic fission-myproject-env \
|
||||
--namespace fission \
|
||||
--from-env-file=.env
|
||||
```
|
||||
|
||||
### How Secrets Are Mounted
|
||||
|
||||
Fission mounts secrets as files in the function pod:
|
||||
|
||||
```
|
||||
/secrets/{namespace}/{secret-name}/{key}
|
||||
```
|
||||
|
||||
Example path: `/secrets/default/fission-myproject-env/PG_HOST`
|
||||
|
||||
The `helpers.py` `get_secret()` function reads from this path:
|
||||
|
||||
```python
|
||||
def get_secret(key: str, default=None):
    namespace = get_current_namespace()
    path = f"/secrets/{namespace}/{SECRET_NAME}/{key}"
    try:
        with open(path, "r") as f:
            return f.read().strip()
    except FileNotFoundError:
        return default
|
||||
```
|
||||
|
||||
**Note**: `SECRET_NAME` must match the K8s secret name (`fission-myproject-env`).
|
||||
|
||||
### Reading Secrets in Code
|
||||
|
||||
```python
|
||||
from helpers import get_secret
|
||||
|
||||
# With default fallback
|
||||
db_host = get_secret("PG_HOST", "localhost")
|
||||
db_port = int(get_secret("PG_PORT", "5432"))
|
||||
db_user = get_secret("PG_USER")
|
||||
db_pass = get_secret("PG_PASS")
|
||||
|
||||
# If key missing and no default, returns None
|
||||
maybe_value = get_secret("OPTIONAL_KEY")
|
||||
```
|
||||
|
||||
**Always provide a default** for non-critical configuration to avoid crashes if a secret is missing.
|
||||
|
||||
### ConfigMaps
|
||||
|
||||
Same pattern, different mount path: `/configs/{namespace}/{configmap-name}/{key}`
|
||||
|
||||
```python
|
||||
from helpers import get_config
|
||||
|
||||
api_endpoint = get_config("API_ENDPOINT", "http://default.api")
|
||||
feature_flag = get_config("FEATURE_X_ENABLED", "false")
|
||||
```
|
||||
|
||||
Create ConfigMap:
|
||||
|
||||
```bash
|
||||
kubectl create configmap fission-myproject-config \
|
||||
--namespace fission \
|
||||
--from-literal=API_ENDPOINT=https://api.example.com \
|
||||
--from-literal=FEATURE_X_ENABLED=true
|
||||
```
|
||||
|
||||
## Vault Encryption
|
||||
|
||||
To encrypt secrets before storing in K8s:
|
||||
|
||||
### Generate Encryption Key
|
||||
|
||||
```bash
|
||||
# Generate 32-byte (64 hex char) random key
|
||||
openssl rand -hex 32
|
||||
# Example output: e24ad6ceed96115520f6e6dc8a0da506ae9a706823d54f30a5b75447ecf477b6
|
||||
```
|
||||
|
||||
### Encrypt a Value
|
||||
|
||||
```python
|
||||
# Encrypt locally
|
||||
from vault import encrypt_vault
|
||||
|
||||
key = "e24ad6ceed96115520f6e6dc8a0da506ae9a706823d54f30a5b75447ecf477b6"
|
||||
encrypted = encrypt_vault("my-secret-password", key)
|
||||
print(encrypted)
|
||||
# Output: vault:v1:base64-encrypted-data
|
||||
```
|
||||
|
||||
### Store Encrypted Value
|
||||
|
||||
Create K8s secret with encrypted value:
|
||||
|
||||
```bash
|
||||
kubectl create secret generic fission-myproject-env \
|
||||
--from-literal=PG_PASS='vault:v1:base64...'
|
||||
```
|
||||
|
||||
### Configure decryption in helpers.py
|
||||
|
||||
```python
|
||||
CRYPTO_KEY = "e24ad6ceed96115520f6e6dc8a0da506ae9a706823d54f30a5b75447ecf477b6"
|
||||
```
|
||||
|
||||
### Automatic Decryption
|
||||
|
||||
`get_secret()` and `get_config()` automatically:
|
||||
1. Read the file content
|
||||
2. Detect if it starts with `vault:v1:` (using `is_valid_vault_format()`)
|
||||
3. Decrypt using `CRYPTO_KEY` if encrypted
|
||||
4. Return plaintext
|
||||
|
||||
**No code changes needed** - it "just works".
|
||||
|
||||
### Verification
|
||||
|
||||
```bash
|
||||
# Test decryption
|
||||
kubectl get secret fission-myproject-env -o jsonpath='{.data.PG_PASS}' | base64 -d
|
||||
# Should show: vault:v1:...
|
||||
|
||||
# Exec into pod and manually check
|
||||
kubectl exec -it <pod-name> -- python3 -c "from helpers import get_secret; print(get_secret('PG_PASS'))"
|
||||
# Should print decrypted value
|
||||
```
|
||||
|
||||
## Secret Rotation
|
||||
|
||||
### Rotating a Secret
|
||||
|
||||
1. **Generate new value** (new password, new API key)
|
||||
2. **Encrypt** (if using vault)
|
||||
3. **Update K8s secret**:
|
||||
```bash
|
||||
kubectl create secret generic fission-myproject-env \
|
||||
--dry-run=client \
|
||||
--from-literal=PG_PASS='new-password' \
|
||||
-o yaml | kubectl apply -f -
|
||||
```
|
||||
4. **Update actual external system** (database, API provider) with new value
|
||||
5. **Verify applications work** (check logs)
|
||||
6. **Decommission the old credential** once all consumers use the new one (during cutover, old and new may need to coexist briefly)
|
||||
|
||||
### Rotating Vault Encryption Key
|
||||
|
||||
**Warning**: Changing `CRYPTO_KEY` requires re-encrypting all secrets!
|
||||
|
||||
1. Generate a new key (`openssl rand -hex 32`)
2. Re-encrypt every stored secret value under the new key and update the K8s secrets
3. Deploy code with the new `CRYPTO_KEY`; any window in which the key and the stored values disagree will break decryption
|
||||
|
||||
**Recommended**: Have two keys during rotation:
|
||||
```python
|
||||
CRYPTO_KEYS = [
|
||||
"old-key-hex...", # Keep for decrypting old secrets
|
||||
"new-key-hex..." # Use for encrypting new/updated secrets
|
||||
]
|
||||
```
|
||||
|
||||
Then update `decrypt_vault()` to try each key until one works. After all secrets migrated, remove old key.
|
||||
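A minimal sketch of that try-each-key loop, with stand-ins for `decrypt_vault` and `CryptoError` (the real ones come from `vault.py` and PyNaCl):

```python
class CryptoError(Exception):
    """Stand-in for nacl.exceptions.CryptoError."""


def decrypt_vault(vault_str: str, key: str) -> str:
    # Stand-in: the real function decrypts with SecretBox(bytes.fromhex(key))
    if key != "new-key-hex":
        raise CryptoError("decryption failed")
    return "plaintext"


CRYPTO_KEYS = [
    "old-key-hex",  # keep for decrypting old secrets
    "new-key-hex",  # use for encrypting new/updated secrets
]


def decrypt_with_any_key(vault_str: str, keys=CRYPTO_KEYS) -> str:
    last_error = None
    for key in keys:
        try:
            return decrypt_vault(vault_str, key)
        except CryptoError as err:
            last_error = err  # wrong key, try the next one
    raise last_error
```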
|
||||
## Configuration Precedence
|
||||
|
||||
Fission supports multiple deployment configuration files:
|
||||
|
||||
1. **deployment.json** - Base configuration (committed to repo)
|
||||
2. **dev-deployment.json** - Development overrides (usually not committed)
|
||||
3. **local-deployment.json** - Local overrides (gitignored)
|
||||
|
||||
### Override Priority
|
||||
|
||||
When using `fission deploy --dev`, Fission loads:
|
||||
- Base configuration from `deployment.json`
|
||||
- Overlay from `dev-deployment.json`
|
||||
|
||||
Values in the overlay file replace or extend base values.
|
||||
|
||||
**Example**: Override secret name for dev:
|
||||
|
||||
**deployment.json**:
|
||||
```json
|
||||
{
|
||||
"function_common": {
|
||||
"secrets": ["fission-myproject-env"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**dev-deployment.json**:
|
||||
```json
|
||||
{
|
||||
"function_common": {
|
||||
"secrets": ["fission-myproject-dev-env"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Now `fission deploy --dev` uses the dev secret, while `fission deploy` uses the prod secret.
|
||||
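The override behavior above can be sketched as a recursive dict merge. This is an illustration of the concept, not the CLI's actual merge code, which may differ in edge cases:

```python
def merge_config(base: dict, overlay: dict) -> dict:
    merged = dict(base)
    for key, value in overlay.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_config(merged[key], value)  # recurse into nested objects
        else:
            merged[key] = value  # overlay value replaces the base value
    return merged


base = {"function_common": {"secrets": ["fission-myproject-env"],
                            "configmaps": ["fission-myproject-config"]}}
dev = {"function_common": {"secrets": ["fission-myproject-dev-env"]}}
merged = merge_config(base, dev)
```

Here `secrets` is replaced by the overlay while `configmaps`, untouched by the overlay, survives from the base.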
|
||||
### Local Overrides
|
||||
|
||||
Create `.fission/local-deployment.json` for your workstation:
|
||||
|
||||
```json
|
||||
{
|
||||
"function_common": {
|
||||
"secrets": ["fission-myproject-local-env"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Fission automatically uses this if present (no flag needed). `.gitignore` typically excludes it.
|
||||
|
||||
## Best Practices
|
||||
|
||||
### Do's ✅
|
||||
|
||||
1. **Do use Kubernetes Secrets** - Never hardcode credentials
|
||||
2. **Do encrypt with vault** - Prevents plaintext secrets in K8s
|
||||
3. **Do store vault key securely** - In K8s sealed secret, external vault (HashiCorp Vault, AWS Secrets Manager), or as a separate K8s secret in restricted namespace
|
||||
4. **Do namespace secrets** - Use different secrets for dev/staging/prod
|
||||
5. **Do rotate secrets regularly** - Especially database passwords, API tokens
|
||||
6. **Do use ConfigMaps for non-sensitive config** - Cleaner separation
|
||||
7. **Do provide sensible defaults** - In `get_secret()` calls
|
||||
8. **Do validate required secrets** - Fail fast at startup:
|
||||
```python
|
||||
def init():
|
||||
pg_host = get_secret("PG_HOST")
|
||||
if not pg_host:
|
||||
raise ValueError("PG_HOST secret is required")
|
||||
```
|
||||
|
||||
### Don'ts ❌
|
||||
|
||||
1. **Don't commit secrets** - Even in `deployment.json` literals
|
||||
2. **Don't put plaintext in Git** - Use placeholders or remove before commit
|
||||
3. **Don't embed vault key in code for production** - Use environment-specific override or external secret management
|
||||
4. **Don't share vault key publicly** - It's a symmetric key - anyone with it can decrypt all secrets
|
||||
5. **Don't use same secret across namespaces** - Separate environments should have separate credentials
|
||||
6. **Don't rely on obscurity** - Security through obscurity is not security
|
||||
|
||||
### Supply Chain Security
|
||||
|
||||
For production deployments:
|
||||
|
||||
1. **Store vault key in sealed secrets** (if on K8s):
|
||||
```bash
|
||||
kubectl create secret generic crypto-key \
|
||||
--from-literal=key='your-hex-key'
|
||||
# Then use SealedSecrets controller to encrypt in Git
|
||||
```
|
||||
|
||||
2. **Use external secrets operator**:
|
||||
```yaml
|
||||
apiVersion: external-secrets.io/v1beta1
|
||||
kind: ExternalSecret
|
||||
metadata:
|
||||
name: db-creds
|
||||
spec:
|
||||
refreshInterval: "1h"
|
||||
secretStoreRef:
|
||||
name: vault-backend
|
||||
kind: SecretStore
|
||||
target:
|
||||
name: fission-myproject-env
|
||||
creationPolicy: Owner
|
||||
data:
|
||||
- secretKey: PG_PASS
|
||||
remoteRef:
|
||||
key: /prod/db/password
|
||||
```
|
||||
|
||||
3. **Rotate automatically** with cronjobs or external secret manager
|
||||
|
||||
## Environment Variable Alternative
|
||||
|
||||
While the template uses secret files mounted by Fission, you can also use environment variables:
|
||||
|
||||
```json
|
||||
"function_common": {
|
||||
"environment": {
|
||||
"LOG_LEVEL": "INFO",
|
||||
"FEATURE_FLAG": "true"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Access with `os.getenv()`:
|
||||
|
||||
```python
|
||||
import os
|
||||
log_level = os.getenv("LOG_LEVEL", "INFO")
|
||||
```
|
||||
|
||||
**However**: Environment is less flexible than secrets/configmaps for dynamic updates (requires function restart). Prefer secrets/configmaps for values that may change independently of code deployments.
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Secret Not Available
|
||||
|
||||
```bash
|
||||
# Check secret exists in correct namespace
|
||||
kubectl get secret fission-myproject-env -n fission
|
||||
|
||||
# Check secret keys
|
||||
kubectl get secret fission-myproject-env -n fission -o jsonpath='{.data}'
|
||||
|
||||
# Check pod mount
|
||||
kubectl exec -it <pod-name> -n fission -- ls -laR /secrets/
|
||||
```
|
||||
|
||||
Common issues:
|
||||
- Secret in wrong namespace (use Fission namespace, usually `fission` or as configured)
|
||||
- Secret name typo in helpers.py `SECRET_NAME` variable
|
||||
- Secret not mounted due to missing permission (service account restriction)
|
||||
|
||||
### Vault Decryption Failing
|
||||
|
||||
```python
|
||||
from vault import is_valid_vault_format, decrypt_vault
|
||||
|
||||
vault_str = get_secret("PG_PASS")
|
||||
print(is_valid_vault_format(vault_str)) # Should be True
|
||||
print(decrypt_vault(vault_str, "wrong-key")) # Raises CryptoError
|
||||
```
|
||||
|
||||
Check:
|
||||
- `CRYPTO_KEY` is set correctly in `helpers.py`
|
||||
- Key is 64 hex characters (32 bytes)
|
||||
- Encrypted value format is exactly `vault:v1:base64...`
|
||||
|
||||
### Permission Denied Reading Secret
|
||||
|
||||
Pod may lack permission to read secret. Check service account:
|
||||
|
||||
```bash
|
||||
# Get function pod's service account
|
||||
kubectl get pod <pod-name> -n fission -o jsonpath='{.spec.serviceAccountName}'
|
||||
|
||||
# Check role bindings
|
||||
kubectl get rolebinding -n fission
|
||||
kubectl get clusterrolebinding -n fission
|
||||
|
||||
# Add permission if needed (requires cluster admin)
|
||||
kubectl create clusterrolebinding fission-secret-reader \
|
||||
--clusterrole=view \
|
||||
--serviceaccount=fission:default
|
||||
```
|
||||
|
||||
## Further Reading
|
||||
|
||||
- [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/)
|
||||
- [Kubernetes ConfigMaps](https://kubernetes.io/docs/concepts/configuration/configmap/)
|
||||
- [Fission Environment and Config](https://fission.io/docs/usage/env/)
|
||||
- [PyNaCl Documentation](https://pynacl.readthedocs.io/)
|
||||
- [SealedSecrets](https://github.com/bitnami-labs/sealed-secrets) - Store encrypted secrets in Git
|
||||
240
fission-python/template/docs/STRUCTURE.md
Normal file
@@ -0,0 +1,240 @@
|
||||
# Project Structure
|
||||
|
||||
This document explains the purpose and contents of each directory and file in a Fission Python project.
|
||||
|
||||
## Directory Layout
|
||||
|
||||
```
|
||||
project/
|
||||
├── .fission/ # Fission configuration
|
||||
│ ├── deployment.json # Main deployment configuration
|
||||
│ ├── dev-deployment.json # Development environment overrides
|
||||
│ └── local-deployment.json # Local development overrides
|
||||
├── src/ # Source code
|
||||
│ ├── __init__.py # Package initialization
|
||||
│ ├── vault.py # Vault encryption utilities
|
||||
│ ├── helpers.py # Shared utility functions
|
||||
│ ├── exceptions.py # Custom exception classes
|
||||
│ ├── models.py # Pydantic models for validation
|
||||
│ ├── build.sh # Build script (executable)
|
||||
│ └── *.py # Your function implementations
|
||||
├── test/ # Unit and integration tests
|
||||
│ ├── __init__.py
|
||||
│ ├── test_*.py # Test files
|
||||
│ └── requirements.txt # Test dependencies
|
||||
├── migrates/ # Database migration scripts
|
||||
│ └── *.sql # SQL migration files
|
||||
├── manifests/ # Kubernetes manifests (optional)
|
||||
│ └── *.yaml # K8s resources
|
||||
├── specs/ # Generated Fission specs
|
||||
│ ├── fission-deployment-config.yaml
|
||||
│ └── ...
|
||||
├── requirements.txt # Runtime dependencies
|
||||
├── dev-requirements.txt # Development dependencies
|
||||
├── .env.example # Environment variable template
|
||||
├── pytest.ini # Pytest configuration
|
||||
├── README.md # Project documentation
|
||||
└── (other project files)
|
||||
```
|
||||
|
||||
## File Purposes
|
||||
|
||||
### .fission/deployment.json
|
||||
|
||||
This is **the most important configuration file** for Fission deployment. It defines:
|
||||
|
||||
- **environments**: Build environment configuration (image, builder, resources)
|
||||
- **archives**: Source code packaging (typically "package.zip" from src/)
|
||||
- **packages**: Package definitions linking source to environment
|
||||
- **function_common**: Default settings applied to all functions
|
||||
- **secrets**: Secret definitions (literal values are placeholders - actual secrets go in K8s)
|
||||
- **configmaps**: ConfigMap definitions (non-sensitive configuration)
|
||||
|
||||
**Important**: The secret and configmap literals are **placeholders only**. In production, you create actual K8s secrets/configmaps with the same names containing real values.
|
||||
|
||||
**Placeholders**:
|
||||
- `${PROJECT_NAME}` - Replaced with your project name by `create-project.sh`
|
||||
- Secret name pattern: `fission-${PROJECT_NAME}-env`
|
||||
- ConfigMap name pattern: `fission-${PROJECT_NAME}-config`
|
||||
|
||||
### src/vault.py
|
||||
|
||||
Provides encryption/decryption utilities using PyNaCl (SecretBox). This is used when you want to store encrypted values in K8s secrets rather than plaintext.
|
||||
|
||||
**Key functions**:
|
||||
- `encrypt_vault(plaintext, key)` - Encrypt and return vault format string
|
||||
- `decrypt_vault(vault, key)` - Decrypt vault format string
|
||||
- `is_valid_vault_format(vault)` - Check if string is vault-encrypted
|
||||
|
||||
**Usage in helpers.py**: The `get_secret()` and `get_config()` functions automatically detect vault format (`vault:v1:...`) and decrypt if a valid `CRYPTO_KEY` is set.
|
||||
|
||||
### src/helpers.py
|
||||
|
||||
Shared utilities used across functions:
|
||||
|
||||
**Database**:
|
||||
- `init_db_connection()` - Creates PostgreSQL connection from secrets
|
||||
- `db_row_to_dict(cursor, row)` - Convert row tuple to dict
|
||||
- `db_rows_to_array(cursor, rows)` - Convert multiple rows to list of dicts
|
||||
|
||||
**Configuration**:
|
||||
- `get_secret(key, default=None)` - Read from K8s secret volume
|
||||
- `get_config(key, default=None)` - Read from K8s config volume
|
||||
- `get_current_namespace()` - Get current K8s namespace
|
||||
|
||||
**Utilities**:
|
||||
- `str_to_bool(input)` - Convert string to boolean
|
||||
- `check_port_open(ip, port, timeout)` - TCP port connectivity check
|
||||
- `get_user_from_headers()` - Extract user ID from request headers
|
||||
- `format_error_response(...)` - Build standardized error dict
|
||||
|
||||
**Logging**:
|
||||
- Helpers use `current_app.logger` (Flask) for error logging
|
||||
|
||||
### src/exceptions.py
|
||||
|
||||
Custom exception hierarchy:
|
||||
|
||||
```
|
||||
ServiceException (base)
|
||||
├── ValidationError (400) - Invalid input
|
||||
├── NotFoundError (404) - Resource not found
|
||||
├── ConflictError (409) - Duplicate/conflict
|
||||
└── DatabaseError (500) - Database failure
|
||||
```
|
||||
|
||||
All exceptions include:
|
||||
- `error_code` - Machine-readable code
|
||||
- `http_status` - HTTP status
|
||||
- `error_msg` - Human-readable message
|
||||
- `x_user` (optional) - User identifier
|
||||
- `details` (optional) - Additional context dict
|
||||
|
||||
When raised in a Fission function, these automatically return proper JSON error responses.
|
||||
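A sketch of what this hierarchy could look like; attribute names follow the list above, but the template's actual implementation may differ:

```python
class ServiceException(Exception):
    error_code = "SERVICE_ERROR"
    http_status = 500

    def __init__(self, error_msg, x_user=None, details=None):
        super().__init__(error_msg)
        self.error_msg = error_msg
        self.x_user = x_user
        self.details = details or {}

    def to_response(self):
        # Body + status, suitable for a JSON error response
        return {
            "error_code": self.error_code,
            "error_msg": self.error_msg,
            "details": self.details,
        }, self.http_status


class ValidationError(ServiceException):
    error_code = "VALIDATION_ERROR"
    http_status = 400


class NotFoundError(ServiceException):
    error_code = "NOT_FOUND"
    http_status = 404
```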
|
||||
### src/models.py
|
||||
|
||||
Pydantic models for request/response validation:
|
||||
|
||||
**Patterns included**:
|
||||
- Enums (e.g., `Status`, `DataType`)
|
||||
- Dataclass filters (e.g., `ItemFilter`, `Pagination`)
|
||||
- Request models (`ItemCreateRequest`, `ItemUpdateRequest`)
|
||||
- Response models (`ItemResponse`, `PaginatedResponse`)
|
||||
- ErrorResponse model (used by exceptions)
|
||||
|
||||
**Key concepts**:
|
||||
- Use `Field(...)` with constraints (min_length, max_length, ge, le)
|
||||
- Provide `description` for API documentation
|
||||
- Use `json_schema_extra` for example values
|
||||
- Set `from_attributes = True` for ORM compatibility
|
||||
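A small model illustrating the `Field` constraints and descriptions mentioned above; the model and field names here are hypothetical, not the template's actual models:

```python
from pydantic import BaseModel, Field


class ItemCreateRequest(BaseModel):
    name: str = Field(..., min_length=1, max_length=100,
                      description="Display name of the item")
    quantity: int = Field(1, ge=0, le=1000,
                          description="Initial stock count")


req = ItemCreateRequest(name="Test Item")  # quantity defaults to 1
```

Constraint violations (e.g. an empty `name`) raise `pydantic.ValidationError`, which pairs naturally with the `ValidationError` exception described above.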
|
||||
### src/build.sh
|
||||
|
||||
Bash script that builds the dependency package. It:
|
||||
1. Detects OS (Debian vs Alpine)
|
||||
2. Installs build dependencies (gcc, libpq-dev/python3-dev/postgresql-dev)
|
||||
3. Installs Python requirements into `src/` directory
|
||||
4. Copies `src/` to package destination
|
||||
|
||||
**Important**: Must be executable (`chmod +x src/build.sh`)
|
||||
|
||||
The script expects environment variables:
|
||||
- `SRC_PKG` - Source package directory (e.g., `src`)
|
||||
- `DEPLOY_PKG` - Destination package (e.g., `specs/package`)
|
||||
|
||||
The Fission builder sets these automatically.
|
||||
|
||||
### test/
|
||||
|
||||
Contains unit and integration tests.
|
||||
|
||||
**Structure**:
|
||||
- `test_*.py` - Test files following pytest conventions
|
||||
- `requirements.txt` - Test dependencies (pytest, pytest-mock, requests)
|
||||
|
||||
**Running tests**:
|
||||
```bash
|
||||
pip install -r dev-requirements.txt
|
||||
pytest
|
||||
```
|
||||
|
||||
## Fission Configuration in Docstrings
|
||||
|
||||
Each Python function that should be exposed as a Fission function **must** include a `` ```fission `` fenced block in its docstring:
|
||||
|
||||
````python
def my_function(event, context):
    """
    ```fission
    {
        "name": "my-function",
        "http_triggers": {
            "my-trigger": {
                "url": "/api/endpoint",
                "methods": ["GET", "POST"]
            }
        }
    }
    ```
    Human-readable description here.
    """
    # Implementation
````
|
||||
|
||||
The Fission Python builder parses these docstrings and generates the `specs/fission-deployment-config.yaml` and other spec files.
|
||||
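For illustration, extracting such a block from a docstring could look like the sketch below; this is a guess at the idea, not the builder's actual parser, which may be more robust:

````python
import json
import re


def extract_fission_config(docstring: str):
    # Grab the JSON object between ```fission and the closing ```
    match = re.search(r"```fission\s*(\{.*?\})\s*```", docstring, re.DOTALL)
    return json.loads(match.group(1)) if match else None


def my_function(event, context):
    """
    ```fission
    {
      "name": "my-function",
      "http_triggers": {
        "my-trigger": {"url": "/api/endpoint", "methods": ["GET"]}
      }
    }
    ```
    Human-readable description.
    """


config = extract_fission_config(my_function.__doc__)
````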
|
||||
**Supported trigger types**:
|
||||
- `http_triggers` - HTTP endpoints
|
||||
- `kafka_triggers` - Kafka topics
|
||||
- `timer_triggers` - Scheduled execution
|
||||
- `message_queue_triggers` - MQTT, NATS, etc.
|
||||
|
||||
## Configuration Precedence
|
||||
|
||||
1. **deployment.json** - Base configuration (committed to repo)
|
||||
2. **dev-deployment.json** - Overrides for dev environment (not always committed)
|
||||
3. **local-deployment.json** - Local overrides (typically .gitignored)
|
||||
|
||||
When deploying:
|
||||
- `fission deploy` uses deployment.json
|
||||
- `fission deploy --dev` uses dev-deployment.json if present
|
||||
|
||||
## Secrets and Configuration Flow
|
||||
|
||||
1. **Define placeholders** in `deployment.json`:
|
||||
```json
|
||||
"secrets": {
|
||||
"fission-myproject-env": {
|
||||
"literals": ["PG_HOST=localhost", "PG_PORT=5432"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
2. **Create actual K8s secret**:
|
||||
```bash
|
||||
kubectl create secret generic fission-myproject-env \
|
||||
--from-literal=PG_HOST=prod-db.example.com \
|
||||
--from-literal=PG_PORT=5432
|
||||
```
|
||||
|
||||
3. **Read in code** via `get_secret()`:
|
||||
```python
|
||||
host = get_secret("PG_HOST")
|
||||
```
|
||||
|
||||
4. **For vault encryption**:
|
||||
- Set `CRYPTO_KEY` in helpers.py or as env override
|
||||
- Store encrypted: `vault:v1:base64data` in K8s secret
|
||||
- `get_secret()` auto-decrypts
|
||||
|
||||
## Summary
|
||||
|
||||
- Keep function code in `src/`
|
||||
- Define Fission metadata in docstring blocks
|
||||
- Use helpers for common operations
|
||||
- Define custom exceptions for error handling
|
||||
- Validate inputs with Pydantic models
|
||||
- Store tests in `test/` with pytest
|
||||
- Manage database migrations in `migrates/`
|
||||
- Do not commit actual secrets to repository
|
||||
567
fission-python/template/docs/TESTING.md
Normal file
@@ -0,0 +1,567 @@
|
||||
# Testing Guide
|
||||
|
||||
This document covers testing strategies and best practices for Fission Python functions.
|
||||
|
||||
## Table of Contents
|
||||
|
||||
1. [Test Types](#test-types)
|
||||
2. [Dependencies](#dependencies)
|
||||
3. [Unit Testing](#unit-testing)
|
||||
4. [Integration Testing](#integration-testing)
|
||||
5. [Test Database](#test-database)
|
||||
6. [Mocking](#mocking)
|
||||
7. [Fixtures](#fixtures)
|
||||
8. [Coverage](#coverage)
|
||||
9. [Running Tests](#running-tests)
|
||||
10. [CI/CD Integration](#cicd-integration)
|
||||
|
||||
## Test Types
|
||||
|
||||
### Unit Tests
|
||||
|
||||
Test individual functions in isolation, mocking external dependencies:
|
||||
- Database calls
|
||||
- HTTP requests
|
||||
- File I/O
|
||||
- External services
|
||||
|
||||
**Goal**: Verify business logic correctness without infrastructure.
|
||||
|
||||
### Integration Tests
|
||||
|
||||
Test the function with real (or test) dependencies:
|
||||
- Actual database queries
|
||||
- End-to-end request/response flow
|
||||
- Real configuration loading
|
||||
|
||||
**Goal**: Verify integration points work correctly.
|
||||
|
||||
## Dependencies
|
||||
|
||||
Install test dependencies:
|
||||
|
||||
```bash
|
||||
pip install -r test/requirements.txt
|
||||
# Or for dev (includes both runtime and test deps):
|
||||
pip install -r dev-requirements.txt
|
||||
```
|
||||
|
||||
Required packages:
|
||||
- `pytest` - Test framework
|
||||
- `pytest-mock` - Mocking utilities (provides `mocker` fixture)
|
||||
- `requests` - For integration tests making HTTP calls
|
||||
|
||||
## Unit Testing
|
||||
|
||||
### Example Test Structure
|
||||
|
||||
```python
|
||||
# test/test_my_function.py
|
||||
import pytest
|
||||
from unittest.mock import patch, MagicMock
|
||||
from src.my_function import create_item
|
||||
from src.exceptions import ValidationError
|
||||
|
||||
def test_create_item_success():
|
||||
"""Test successful item creation."""
|
||||
# Arrange
|
||||
mock_conn = MagicMock()
|
||||
mock_cursor = MagicMock()
|
||||
mock_conn.cursor.return_value = mock_cursor
|
||||
mock_cursor.fetchone.return_value = ("item-id", "Test Item", "active")
|
||||
|
||||
# Mock init_db_connection to return our mock
|
||||
with patch("src.my_function.init_db_connection", return_value=mock_conn):
|
||||
# Create a mock Flask request
|
||||
with patch("src.my_function.request") as mock_request:
|
||||
mock_request.get_json.return_value = {
|
||||
"name": "Test Item",
|
||||
"status": "active"
|
||||
}
|
||||
mock_request.view_args = {}
|
||||
|
||||
# Act
|
||||
result = create_item({}, {})
|
||||
|
||||
# Assert
|
||||
assert result["id"] == "item-id"
|
||||
assert result["name"] == "Test Item"
|
||||
mock_cursor.execute.assert_called_once()
|
||||
mock_conn.commit.assert_called_once()
|
||||
|
||||
def test_create_item_validation_error():
|
||||
"""Test validation of missing required fields."""
|
||||
with patch("src.my_function.request") as mock_request:
|
||||
mock_request.get_json.return_value = {"name": ""} # Empty name
|
||||
|
||||
with pytest.raises(ValidationError) as exc_info:
|
||||
create_item({}, {})
|
||||
|
||||
assert "validation" in str(exc_info.value.error_msg).lower()
|
||||
```
|
||||
|
||||
### Mocking Helpers
|
||||
|
||||
Use `patch` to replace dependencies:
|
||||
|
||||
```python
|
||||
# Mock helpers.get_secret
|
||||
@patch("src.my_function.helpers.get_secret")
|
||||
def test_with_mocked_secret(mock_get_secret):
|
||||
mock_get_secret.return_value = "localhost"
|
||||
# Test code...
|
||||
|
||||
# Mock entire module
|
||||
@patch("src.my_function.helpers.init_db_connection")
|
||||
def test_with_mocked_db(mock_init_db):
|
||||
mock_conn = MagicMock()
|
||||
mock_init_db.return_value = mock_conn
|
||||
# Test code...
|
||||
```
|
||||
|
||||
### Mocking Flask Request
|
||||
|
||||
```python
|
||||
from unittest.mock import patch
|
||||
|
||||
def test_with_flask_request():
|
||||
with patch("src.my_function.request") as mock_request:
|
||||
mock_request.get_json.return_value = {"key": "value"}
|
||||
mock_request.args.getlist.return_value = []
|
||||
mock_request.headers.get.return_value = "user-123"
|
||||
# Test code...
|
||||
```
|
||||
|
||||
## Integration Testing
|
||||
|
||||
### Test Database Setup
|
||||
|
||||
Use a separate test database:
|
||||
|
||||
```bash
|
||||
# Create test database
|
||||
createdb fission_test
|
||||
|
||||
# Or with Docker:
|
||||
docker run -d -p 5433:5432 -e POSTGRES_PASSWORD=test postgres:15
|
||||
```
|
||||
|
||||
Set environment variables for test database:
|
||||
```bash
|
||||
export PG_HOST=localhost
|
||||
export PG_PORT=5433
|
||||
export PG_DB=fission_test
|
||||
export PG_USER=postgres
|
||||
export PG_PASS=test
|
||||
```
|
||||
|
||||
### pytest Fixtures for Database
|
||||
|
||||
```python
|
||||
# conftest.py (placed in test/ directory)
|
||||
import pytest
|
||||
from src.helpers import init_db_connection
|
||||
|
||||
@pytest.fixture(scope="session")
|
||||
def db_connection():
|
||||
"""Create a database connection for the entire test session."""
|
||||
conn = init_db_connection()
|
||||
yield conn
|
||||
conn.close()
|
||||
|
||||
@pytest.fixture(scope="function")
|
||||
def db_cursor(db_connection):
|
||||
"""Create a cursor for each test, with transaction rollback."""
|
||||
conn = db_connection
|
||||
cursor = conn.cursor()
|
||||
    # Roll back any state left over from previous tests
|
||||
conn.rollback()
|
||||
yield cursor
|
||||
# Rollback after each test to keep DB clean
|
||||
conn.rollback()
|
||||
```
|
||||
|
||||
### Example Integration Test
|
||||
|
||||
```python
|
||||
# test/test_integration.py
from unittest.mock import patch
|
||||
def test_create_and_retrieve_item_integration(db_connection):
|
||||
"""Test full CRUD cycle with real database."""
|
||||
from src.models import ItemCreateRequest
|
||||
from src.functions import create_item, get_item
|
||||
|
||||
# Insert test data
|
||||
cursor = db_connection.cursor()
|
||||
cursor.execute("DELETE FROM items WHERE name = 'Integration Test'")
|
||||
db_connection.commit()
|
||||
|
||||
# Create item via function
|
||||
with patch("src.functions.request") as mock_request:
|
||||
mock_request.get_json.return_value = {
|
||||
"name": "Integration Test",
|
||||
"description": "Test item"
|
||||
}
|
||||
mock_request.view_args = {}
|
||||
result = create_item({}, {})
|
||||
|
||||
item_id = result["id"]
|
||||
assert result["name"] == "Integration Test"
|
||||
|
||||
# Retrieve same item
|
||||
    with patch("src.functions.request") as mock_request:
        mock_request.view_args = {"id": item_id}
        result = get_item({"path": f"/items/{item_id}"}, {})
        assert result["id"] == item_id
|
||||
|
||||
# Cleanup
|
||||
cursor.execute("DELETE FROM items WHERE id = %s", (item_id,))
|
||||
db_connection.commit()
|
||||
```
|
||||
|
||||
## Test Database Migrations
|
||||
|
||||
Apply migrations before integration tests:
|
||||
|
||||
```python
|
||||
# conftest.py
|
||||
import subprocess
|
||||
|
||||
def apply_migrations():
|
||||
"""Apply all SQL migrations to test database."""
|
||||
import os
|
||||
migrates_dir = os.path.join(os.path.dirname(__file__), "..", "migrates")
|
||||
for file in sorted(os.listdir(migrates_dir)):
|
||||
if file.endswith(".sql"):
|
||||
path = os.path.join(migrates_dir, file)
|
||||
subprocess.run(
|
||||
["psql", "-d", "fission_test", "-f", path],
|
||||
check=True
|
||||
)
|
||||
|
||||
@pytest.fixture(scope="session", autouse=True)
|
||||
def setup_database():
|
||||
"""Run migrations before any tests."""
|
||||
apply_migrations()
|
||||
yield
|
||||
# Optionally drop and recreate after tests
|
||||
```
|
||||
|
||||
## Mocking
|
||||
|
||||
### Built-in unittest.mock
|
||||
|
||||
```python
|
||||
from unittest.mock import patch, MagicMock, mock_open
|
||||
|
||||
# Simple patch
|
||||
with patch("module.function") as mock_func:
|
||||
mock_func.return_value = "mocked"
|
||||
# call code that uses module.function
|
||||
|
||||
# Assert called with specific args
|
||||
mock_func.assert_called_once_with("arg1", "arg2")
|
||||
|
||||
# Mock context manager
|
||||
with patch("builtins.open", mock_open(read_data="file content")) as mock_file:
|
||||
# code that opens file
|
||||
mock_file.assert_called_with("path/to/file", "r")
|
||||
```
|
||||
|
||||
### pytest-mock Fixture
|
||||
|
||||
Simpler syntax using `mocker` fixture:
|
||||
|
||||
```python
|
||||
def test_with_mocker(mocker):
|
||||
mock_func = mocker.patch("src.function.helper")
|
||||
mock_func.return_value = {"key": "value"}
|
||||
# test code...
|
||||
```
|
||||
|
||||
## Fixtures
|
||||
|
||||
Create reusable fixtures in `conftest.py`:
|
||||
|
||||
```python
|
||||
# test/conftest.py
|
||||
import pytest
|
||||
|
||||
@pytest.fixture
|
||||
def sample_item_data():
|
||||
"""Provide sample item data for tests."""
|
||||
return {
|
||||
"name": "Test Item",
|
||||
"description": "A test item",
|
||||
"status": "active"
|
||||
}
|
||||
|
||||
@pytest.fixture
|
||||
def mock_db_connection(mocker):
|
||||
"""Provide a mocked database connection."""
|
||||
mock_conn = mocker.MagicMock()
|
||||
mock_cursor = mocker.MagicMock()
|
||||
mock_conn.cursor.return_value = mock_cursor
|
||||
mock_cursor.fetchone.return_value = None
|
||||
return mock_conn
|
||||
```
|
||||
|
||||
Fixtures are automatically available to all tests in the directory.
## Coverage

Measure test coverage with pytest-cov:

```bash
# Install
pip install pytest-cov

# Run with coverage
pytest --cov=src

# HTML report
pytest --cov=src --cov-report=html
open htmlcov/index.html

# Show missing lines
pytest --cov=src --cov-report=term-missing
```

Aim for high coverage of business logic (80%+). Don't worry about 100% coverage of trivial getters/setters.

### Excluding Files

pytest-cov has no exclude flag of its own, so exclusions belong in coverage configuration. Keep the coverage options in `pytest.ini`:

```ini
[pytest]
addopts = --cov=src
```

and list files to skip in `.coveragerc`, which pytest-cov reads automatically:

```ini
[run]
omit = src/vault.py
```

## Running Tests

### Basic Commands

```bash
# Run all tests
pytest

# Verbose
pytest -v

# Run specific test file
pytest test/test_my_function.py

# Run specific test function
pytest test/test_my_function.py::test_create_item_success

# Run with markers
pytest -m "integration"  # if using @pytest.mark.integration

# Stop on first failure
pytest -x

# Show print statements
pytest -s
```

### Environment Setup

Create `test/.env` or set environment variables before tests:

```bash
# For integration tests
export PG_HOST=localhost
export PG_PORT=5432
export PG_DB=fission_test
```

Or use a pytest fixture to load from `.env`:

```python
# conftest.py
import os

import pytest
from dotenv import load_dotenv


@pytest.fixture(scope="session", autouse=True)
def load_env():
    env_path = os.path.join(os.path.dirname(__file__), ".env")
    load_dotenv(env_path)
```

### Markers

Mark tests as unit/integration/slow:

```python
import pytest


@pytest.mark.unit
def test_quick_unit():
    pass


@pytest.mark.integration
def test_full_workflow():
    pass


@pytest.mark.slow
def test_long_running():
    pass
```

Run only unit tests:

```bash
pytest -m "unit"
```

Skip tests:

```bash
pytest -m "not slow"
```
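Custom marks should also be registered, otherwise pytest warns about unknown markers. One way (assuming markers are declared in `pytest.ini`) is:

```ini
[pytest]
markers =
    unit: fast, isolated unit tests
    integration: tests that require external services
    slow: long-running tests
```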
## CI/CD Integration

### GitHub Actions Example

```yaml
# .github/workflows/test.yaml
name: Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:15
        env:
          POSTGRES_PASSWORD: test
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432

    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'
      - name: Install dependencies
        run: |
          pip install -r dev-requirements.txt
      - name: Setup database
        run: |
          createdb -h localhost -U postgres fission_test
          psql -h localhost -U postgres fission_test -f migrates/001_schema.sql
        env:
          PGPASSWORD: test
      - name: Run tests
        run: |
          pytest --cov=src --cov-report=xml
        env:
          PG_HOST: localhost
          PG_PORT: 5432
          PG_DB: fission_test
          PG_USER: postgres
          PG_PASS: test
      - name: Upload coverage
        uses: codecov/codecov-action@v3
```
## Best Practices

1. **One assertion per test** - Keep tests focused
2. **Use descriptive names** - `test_create_item_validation_error_for_missing_name`
3. **Arrange-Act-Assert** - Structure tests clearly
4. **Mock external dependencies** - Don't rely on network or external services
5. **Test error cases** - Don't just test happy paths
6. **Use fixtures** - Reuse setup/teardown code
7. **Keep tests independent** - No shared state between tests
8. **Test edge cases** - Empty inputs, null values, boundary conditions
9. **Don't test libraries** - Don't write tests for Flask/Pydantic themselves
10. **Clean up resources** - Use fixtures to ensure cleanup
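The Arrange-Act-Assert structure from the list above can be sketched like this. `str_to_bool` is defined inline as a stand-in for the project's helper so the example is self-contained:

```python
def str_to_bool(value):
    """Inline stand-in for the helpers.str_to_bool utility."""
    if value is None or value == "":
        return None
    return value.strip().lower() in ("true", "1", "yes")


def test_str_to_bool_parses_true():
    # Arrange: prepare the input
    raw = "True"

    # Act: call the unit under test
    result = str_to_bool(raw)

    # Assert: one focused check
    assert result is True
```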
## Common Patterns

### Testing Exceptions

```python
def test_raises_not_found():
    with pytest.raises(NotFoundError) as exc:
        get_item("nonexistent-id")
    assert exc.value.http_status == 404
```

### Parametrized Tests

```python
import pytest


@pytest.mark.parametrize("input,expected", [
    ("true", True),
    ("false", False),
    ("", None),
    (None, None),
])
def test_str_to_bool(input, expected):
    from helpers import str_to_bool
    assert str_to_bool(input) == expected
```

### Temporary Files/Directories

```python
def test_with_temp_file(tmp_path):
    # tmp_path is a pathlib.Path to a temporary directory
    file = tmp_path / "test.txt"
    file.write_text("content")
    assert file.read_text() == "content"
```
## Troubleshooting

### Tests Fail with Database Errors

- Check test database is running: `pg_isready -h localhost -p 5432`
- Verify migrations applied: `psql -l | grep fission_test`
- Check environment variables: `echo $PG_HOST`

### Mock Not Working

Ensure you're patching the **correct import location** (where the name is used, not where it's defined):

```python
# If src/my_function.py does `from helpers import get_secret`:

# Wrong: patching where it's defined
@patch("helpers.get_secret")

# Right: patching the name in the module that uses it
@patch("src.my_function.get_secret")
```

(If the module does `import helpers` instead, patching `helpers.get_secret` works, because the attribute is looked up at call time.)

### Import Errors

Ensure PYTHONPATH includes project root:

```bash
export PYTHONPATH=/path/to/project:$PYTHONPATH
```

Or use pytest's `pythonpath` option in `pytest.ini`:

```ini
[pytest]
pythonpath = .
```

## Further Reading

- [pytest documentation](https://docs.pytest.org/)
- [pytest-mock documentation](https://github.com/pytest-dev/pytest-mock)
- [Python unittest.mock](https://docs.python.org/3/library/unittest.mock.html)
- [Testing Flask Applications](https://flask.palletsprojects.com/en/2.1.x/testing/)
fission-python/template/examples/example_crud.py (new file, 433 lines)
"""
Example: Basic CRUD operations for a resource.

This demonstrates:
- Pydantic request validation
- Database operations with helpers
- Standard error handling
- Proper Fission docstring configuration
"""

from flask import request
from helpers import (
    init_db_connection,
    db_row_to_dict,
    db_rows_to_array,
    get_user_from_headers,
    format_error_response,
)
from exceptions import ValidationError, NotFoundError, ConflictError, DatabaseError
from models import ItemResponse, ItemCreateRequest, ItemUpdateRequest


# Pool manager executor, one request at a time
def create(event, context):
    """
    ```fission
    {
        "name": "create-item",
        "http_triggers": {
            "create": {
                "url": "/api/items",
                "methods": ["POST"]
            }
        }
    }
    ```
    Create a new item.

    **Request Body:**
    ```json
    {
        "name": "string (required, 1-255 chars)",
        "description": "string (optional)",
        "status": "active|inactive|pending",
        "metadata": {}
    }
    ```

    **Response:**
    - 200: Item created successfully
    - 400: Validation error
    - 409: Conflict (e.g., duplicate name)
    - 500: Database error
    """
    # Get user for audit trail
    x_user = get_user_from_headers()

    # Validate request payload
    try:
        data = ItemCreateRequest(**request.get_json())
    except Exception as e:
        raise ValidationError(f"Invalid request: {str(e)}", x_user=x_user)

    conn = None
    try:
        conn = init_db_connection()
        cursor = conn.cursor()

        # Check for conflicts (example)
        cursor.execute(
            "SELECT id FROM items WHERE name = %s",
            (data.name,)
        )
        if cursor.fetchone():
            raise ConflictError(
                f"Item with name '{data.name}' already exists",
                x_user=x_user,
                details={"name": data.name}
            )

        # Insert new item
        cursor.execute(
            """
            INSERT INTO items (name, description, status, metadata)
            VALUES (%s, %s, %s, %s)
            RETURNING id, name, description, status, metadata, created, modified
            """,
            (data.name, data.description, data.status.value, data.metadata)
        )
        row = cursor.fetchone()
        conn.commit()

        # Build response
        item = db_row_to_dict(cursor, row)
        return item

    except (ValidationError, NotFoundError, ConflictError, DatabaseError):
        # Re-raise our own exceptions
        raise
    except Exception as e:
        if conn:
            conn.rollback()
        raise DatabaseError(f"Database error: {str(e)}", x_user=x_user)
    finally:
        if conn:
            conn.close()


def list_items(event, context):
    """
    ```fission
    {
        "name": "list-items",
        "http_triggers": {
            "list": {
                "url": "/api/items",
                "methods": ["GET"]
            }
        }
    }
    ```
    List items with optional filtering and pagination.

    **Query Parameters:**
    - `page` (int): Page number, zero-based (default: 0)
    - `size` (int): Items per page (default: 10, max: 100)
    - `asc` (bool): Sort ascending (default: true)
    - `filter[ids]` (string[]): Filter by specific IDs
    - `filter[keyword]` (string): Search in name/description
    - `filter[status]` (string[]): Filter by status values
    - `filter[created_from]` (datetime): Filter created after
    - `filter[created_to]` (datetime): Filter created before

    **Response:**
    ```json
    {
        "data": [...],
        "page": 0,
        "size": 10,
        "total": 42
    }
    ```
    """
    from helpers import str_to_bool

    # Parse pagination
    page = int(request.args.get("page", 0))
    size = int(request.args.get("size", 10))
    asc = str_to_bool(request.args.get("asc", "true"))

    # Parse filters
    ids = request.args.getlist("filter[ids]")
    keyword = request.args.get("filter[keyword]")
    statuses = request.args.getlist("filter[status]")

    # Build query
    conditions = []
    params = []

    if ids:
        conditions.append(f"id IN ({', '.join(['%s'] * len(ids))})")
        params.extend(ids)
    if keyword:
        conditions.append("(name ILIKE %s OR description ILIKE %s)")
        params.extend([f"%{keyword}%", f"%{keyword}%"])
    if statuses:
        conditions.append(f"status IN ({', '.join(['%s'] * len(statuses))})")
        params.extend(statuses)

    where_clause = "WHERE " + " AND ".join(conditions) if conditions else ""

    conn = None
    try:
        conn = init_db_connection()
        cursor = conn.cursor()

        # Get total count
        count_sql = f"SELECT COUNT(*) FROM items {where_clause}"
        cursor.execute(count_sql, params)
        total = cursor.fetchone()[0]

        # Get paginated data
        offset = page * size
        data_sql = f"""
            SELECT id, name, description, status, metadata, created, modified
            FROM items
            {where_clause}
            ORDER BY created {'ASC' if asc else 'DESC'}
            LIMIT %s OFFSET %s
        """
        cursor.execute(data_sql, params + [size, offset])
        rows = cursor.fetchall()
        items = [db_row_to_dict(cursor, row) for row in rows]

        return {
            "data": items,
            "page": page,
            "size": size,
            "total": total
        }

    except Exception as e:
        raise DatabaseError(f"Failed to list items: {str(e)}")
    finally:
        if conn:
            conn.close()


def get_item(event, context):
    """
    ```fission
    {
        "name": "get-item",
        "http_triggers": {
            "get": {
                "url": "/api/items/:id",
                "methods": ["GET"]
            }
        }
    }
    ```
    Get a specific item by ID.

    **URL Parameters:**
    - `id` (string): Item UUID

    **Response:**
    - 200: Item found
    - 404: Item not found
    - 500: Database error
    """
    # Extract item ID from path (Fission passes path params differently depending on trigger)
    # For HTTP triggers, the ID would come from the URL path
    item_id = request.view_args.get('id') if hasattr(request, 'view_args') else None
    if not item_id:
        # Fallback: parse from query or request path
        item_id = request.path.rstrip('/').split('/')[-1]

    if not item_id:
        raise ValidationError("Item ID is required")

    conn = None
    try:
        conn = init_db_connection()
        cursor = conn.cursor()
        cursor.execute(
            """
            SELECT id, name, description, status, metadata, created, modified
            FROM items WHERE id = %s
            """,
            (item_id,)
        )
        row = cursor.fetchone()
        if not row:
            raise NotFoundError(f"Item {item_id} not found")

        return db_row_to_dict(cursor, row)

    except NotFoundError:
        raise
    except Exception as e:
        raise DatabaseError(f"Failed to fetch item: {str(e)}")
    finally:
        if conn:
            conn.close()


def update_item(event, context):
    """
    ```fission
    {
        "name": "update-item",
        "http_triggers": {
            "update": {
                "url": "/api/items/:id",
                "methods": ["PUT", "PATCH"]
            }
        }
    }
    ```
    Update an existing item.

    **URL Parameters:**
    - `id` (string): Item UUID

    **Request Body:**
    ```json
    {
        "name": "string (optional)",
        "description": "string (optional)",
        "status": "active|inactive|pending (optional)"
    }
    ```

    **Response:**
    - 200: Item updated successfully
    - 404: Item not found
    - 409: Conflict (duplicate name)
    - 400: Validation error
    - 500: Database error
    """
    x_user = get_user_from_headers()

    # Extract item ID
    item_id = request.view_args.get('id') if hasattr(request, 'view_args') else None
    if not item_id:
        item_id = request.path.rstrip('/').split('/')[-1]

    if not item_id:
        raise ValidationError("Item ID is required")

    # Validate request
    try:
        data = ItemUpdateRequest(**request.get_json())
    except Exception as e:
        raise ValidationError(f"Invalid request: {str(e)}", x_user=x_user)

    # Build update statement dynamically
    updates = []
    params = []

    if data.name is not None:
        updates.append("name = %s")
        params.append(data.name)
    if data.description is not None:
        updates.append("description = %s")
        params.append(data.description)
    if data.status is not None:
        updates.append("status = %s")
        params.append(data.status.value)
    if data.metadata is not None:
        updates.append("metadata = %s")
        params.append(data.metadata)

    if not updates:
        raise ValidationError("No update fields provided", x_user=x_user)

    updates.append("modified = NOW()")
    params.append(item_id)

    conn = None
    try:
        conn = init_db_connection()
        cursor = conn.cursor()

        # Check for name conflict if name is being updated
        if data.name:
            cursor.execute(
                "SELECT id FROM items WHERE name = %s AND id != %s",
                (data.name, item_id)
            )
            if cursor.fetchone():
                raise ConflictError(
                    f"Another item with name '{data.name}' already exists",
                    x_user=x_user,
                    details={"name": data.name}
                )

        # Execute update
        sql = f"UPDATE items SET {', '.join(updates)} WHERE id = %s RETURNING *"
        cursor.execute(sql, params)
        row = cursor.fetchone()
        conn.commit()

        if not row:
            raise NotFoundError(f"Item {item_id} not found", x_user=x_user)

        return db_row_to_dict(cursor, row)

    except (ValidationError, NotFoundError, ConflictError, DatabaseError):
        raise
    except Exception as e:
        if conn:
            conn.rollback()
        raise DatabaseError(f"Failed to update item: {str(e)}", x_user=x_user)
    finally:
        if conn:
            conn.close()


def delete_item(event, context):
    """
    ```fission
    {
        "name": "delete-item",
        "http_triggers": {
            "delete": {
                "url": "/api/items/:id",
                "methods": ["DELETE"]
            }
        }
    }
    ```
    Delete an item.

    **URL Parameters:**
    - `id` (string): Item UUID

    **Response:**
    - 204: Item deleted successfully
    - 404: Item not found
    - 500: Database error
    """
    x_user = get_user_from_headers()

    # Extract item ID
    item_id = request.view_args.get('id') if hasattr(request, 'view_args') else None
    if not item_id:
        item_id = request.path.rstrip('/').split('/')[-1]

    if not item_id:
        raise ValidationError("Item ID is required", x_user=x_user)

    conn = None
    try:
        conn = init_db_connection()
        cursor = conn.cursor()
        cursor.execute("DELETE FROM items WHERE id = %s", (item_id,))
        conn.commit()

        if cursor.rowcount == 0:
            raise NotFoundError(f"Item {item_id} not found", x_user=x_user)

        return None  # 204 No Content

    except NotFoundError:
        raise
    except Exception as e:
        if conn:
            conn.rollback()
        raise DatabaseError(f"Failed to delete item: {str(e)}", x_user=x_user)
    finally:
        if conn:
            conn.close()
fission-python/template/examples/example_scheduler.py (new file, 311 lines)
"""
Example: Background job / scheduled task pattern.

This demonstrates:
- Long-running job execution
- Job status tracking
- Error handling and retries
- Periodic task scheduling
- Worker session management

Use cases: report generation, batch processing, cleanup jobs, etc.
"""

import datetime
import time
import uuid
from helpers import init_db_connection, db_row_to_dict, db_rows_to_array
from exceptions import DatabaseError


def scheduled_job(event, context):
    """
    ```fission
    {
        "name": "scheduled-job",
        "http_triggers": {
            "run": {
                "url": "/jobs/run",
                "methods": ["POST"]
            }
        },
        "kafka_triggers": {
            "job-queue": {
                "topic": "job-queue",
                "consumer_group": "scheduler-workers"
            }
        }
    }
    ```
    Execute a scheduled or queued background job.

    This function can be triggered:
    - Manually via HTTP POST /jobs/run
    - Automatically by message queue (Kafka)
    - By cron schedule (via Fission timer trigger)

    **Request Body (HTTP trigger):**
    ```json
    {
        "job_type": "report_generation",
        "parameters": {
            "report_type": "daily",
            "date": "2025-03-18"
        }
    }
    ```

    **Response:**
    - 200: Job completed successfully
    - 202: Job accepted for async processing
    - 400: Invalid request
    - 500: Job failed
    """
    # Parse input
    job_type = event.get("job_type") or event.get("type", "default")
    parameters = event.get("parameters", {})

    # Generate job ID for tracking
    job_id = str(uuid.uuid4())
    started_at = datetime.datetime.utcnow()

    conn = None
    try:
        conn = init_db_connection()
        cursor = conn.cursor()

        # Record job start
        cursor.execute(
            """
            INSERT INTO jobs (id, type, parameters, status, started_at)
            VALUES (%s, %s, %s, 'running', %s)
            """,
            (job_id, job_type, parameters, started_at)
        )
        conn.commit()

        # Execute job based on type
        if job_type == "report_generation":
            result = generate_report(cursor, job_id, parameters)
        elif job_type == "data_cleanup":
            result = cleanup_old_data(cursor, job_id, parameters)
        elif job_type == "sync_external":
            result = sync_external_system(cursor, job_id, parameters)
        else:
            result = run_default_job(cursor, job_id, parameters)

        # Mark job as completed
        completed_at = datetime.datetime.utcnow()
        cursor.execute(
            """
            UPDATE jobs
            SET status = 'completed',
                result = %s,
                completed_at = %s,
                duration = EXTRACT(EPOCH FROM (%s - started_at))
            WHERE id = %s
            """,
            (result, completed_at, completed_at, job_id)
        )
        conn.commit()

        return {
            "job_id": job_id,
            "status": "completed",
            "result": result,
            "duration_seconds": (completed_at - started_at).total_seconds()
        }

    except Exception as e:
        # Mark job as failed
        if conn:
            try:
                cursor = conn.cursor()
                cursor.execute(
                    """
                    UPDATE jobs
                    SET status = 'failed',
                        error = %s,
                        completed_at = NOW()
                    WHERE id = %s
                    """,
                    (str(e), job_id)
                )
                conn.commit()
            except Exception:
                pass

        raise DatabaseError(f"Job {job_type} failed: {str(e)}")
    finally:
        if conn:
            conn.close()


def generate_report(cursor, job_id: str, parameters: dict):
    """
    Generate a report based on parameters.

    Args:
        cursor: Database cursor
        job_id: Job tracking ID
        parameters: Report configuration (report_type, date, filters, etc.)

    Returns:
        Dictionary with report metadata and summary
    """
    report_type = parameters.get("report_type", "daily")
    report_date = parameters.get("date", datetime.datetime.utcnow().strftime("%Y-%m-%d"))

    # Simulate report generation (could be complex aggregation queries)
    time.sleep(1)  # Simulate work

    # Example: Get statistics for the date
    cursor.execute(
        """
        SELECT
            COUNT(*) as total_orders,
            SUM(total) as revenue,
            COUNT(DISTINCT user_id) as unique_customers
        FROM orders
        WHERE DATE(created_at) = %s
        """,
        (report_date,)
    )
    stats = db_row_to_dict(cursor, cursor.fetchone())

    return {
        "report_type": report_type,
        "date": report_date,
        "statistics": stats,
        "generated_at": datetime.datetime.utcnow().isoformat()
    }


def cleanup_old_data(cursor, job_id: str, parameters: dict):
    """
    Clean up old records based on retention policy.

    Args:
        cursor: Database cursor
        job_id: Job tracking ID
        parameters: Cleanup configuration (table, days_to_retain, etc.)

    Returns:
        Dictionary with cleanup summary
    """
    table = parameters.get("table", "jobs")  # Table to clean
    days_to_retain = int(parameters.get("days_to_retain", 90))
    cutoff_date = datetime.datetime.utcnow() - datetime.timedelta(days=days_to_retain)

    # Safety: prevent cleaning arbitrary tables
    if table not in ["jobs", "webhook_events", "logs", "sessions"]:
        raise ValueError(f"Cannot clean table: {table}")

    # Count records to be deleted
    cursor.execute(
        f"SELECT COUNT(*) FROM {table} WHERE created_at < %s",
        (cutoff_date,)
    )
    count = cursor.fetchone()[0]

    # Delete old records
    cursor.execute(
        f"DELETE FROM {table} WHERE created_at < %s",
        (cutoff_date,)
    )

    return {
        "table": table,
        "cutoff_date": cutoff_date.isoformat(),
        "records_deleted": count
    }


def sync_external_system(cursor, job_id: str, parameters: dict):
    """
    Synchronize data with external system.

    Args:
        cursor: Database cursor
        job_id: Job tracking ID
        parameters: Sync configuration (system, endpoint, filters, etc.)

    Returns:
        Dictionary with sync summary
    """
    system = parameters.get("system")
    endpoint = parameters.get("endpoint")

    # This would typically make HTTP requests to an external API
    # using the requests library
    import requests

    # Fetch last sync timestamp
    cursor.execute(
        "SELECT last_sync_at FROM sync_state WHERE system = %s",
        (system,)
    )
    row = cursor.fetchone()
    last_sync = row[0] if row else None

    # Build query parameters
    params = {"since": last_sync.isoformat() if last_sync else ""}

    # Make request to external API
    try:
        resp = requests.get(endpoint, params=params, timeout=30)
        resp.raise_for_status()
        data = resp.json()
    except Exception as e:
        raise DatabaseError(f"Failed to fetch from {system}: {str(e)}")

    # Process and store data
    records_processed = 0
    for item in data.get("items", []):
        cursor.execute(
            """
            INSERT INTO external_data (system, external_id, data, synced_at)
            VALUES (%s, %s, %s, NOW())
            ON CONFLICT (system, external_id) DO UPDATE SET
                data = EXCLUDED.data,
                synced_at = EXCLUDED.synced_at
            """,
            (system, item["id"], item)
        )
        records_processed += 1

    # Update sync state
    cursor.execute(
        """
        INSERT INTO sync_state (system, last_sync_at)
        VALUES (%s, NOW())
        ON CONFLICT (system) DO UPDATE SET
            last_sync_at = NOW()
        """,
        (system,)
    )

    return {
        "system": system,
        "records_processed": records_processed,
        "sync_timestamp": datetime.datetime.utcnow().isoformat()
    }


def run_default_job(cursor, job_id: str, parameters: dict):
    """
    Default no-op job for testing.

    Args:
        cursor: Database cursor
        job_id: Job tracking ID
        parameters: Job parameters

    Returns:
        Simple acknowledgment
    """
    time.sleep(0.5)  # Simulate some work
    return {
        "message": "Default job executed",
        "parameters_received": parameters
    }
fission-python/template/examples/example_webhook.py (new file, 296 lines)
"""
Example: Webhook receiver pattern.

This demonstrates:
- Processing external service callbacks
- Signature verification
- Event type handling
- Idempotency checks
- Async processing patterns
"""

import hashlib
import hmac
from flask import request
from helpers import init_db_connection, get_secret, get_logger
from exceptions import ValidationError, DatabaseError

# For signed webhooks, you'll need a secret
WEBHOOK_SECRET = get_secret("WEBHOOK_SECRET", "")


def verify_signature(payload: bytes, signature: str) -> bool:
    """
    Verify HMAC-SHA256 webhook signature.

    Args:
        payload: Raw request body bytes
        signature: Signature header value (format: "sha256=<hex>")

    Returns:
        True if signature is valid, False otherwise
    """
    if not WEBHOOK_SECRET:
        return True  # Skip verification if no secret configured (for dev)

    expected = hmac.new(
        WEBHOOK_SECRET.encode(),
        payload,
        hashlib.sha256
    ).hexdigest()

    # Signature header format: "sha256=abcdef..."
    received = signature.split("=", 1)[1] if "=" in signature else signature
    return hmac.compare_digest(expected, received)


def webhook_receiver(event, context):
    """
    ```fission
    {
        "name": "webhook-receiver",
        "http_triggers": {
            "webhook": {
                "url": "/webhooks/external-service",
                "methods": ["POST"]
            }
        }
    }
    ```
    Receive and process webhook from external service.

    **Request:**
    - Raw JSON payload in body
    - Signature header: `X-Webhook-Signature: sha256=<hmac>`

    **Response:**
    - 200: Webhook accepted for processing
    - 400: Invalid signature or payload
    - 500: Processing error

    **Idempotency:**
    This function is idempotent - duplicate webhooks with the same
    event ID will not be processed twice.
    """
    # Get raw body for signature verification
    payload = request.get_data()
    signature = request.headers.get("X-Webhook-Signature", "")

    # Verify signature
    if not verify_signature(payload, signature):
        raise ValidationError("Invalid webhook signature")

    # Parse payload
    try:
        data = request.get_json()
    except Exception as e:
        raise ValidationError(f"Invalid JSON payload: {str(e)}")

    # Validate required fields
    event_id = data.get("event_id") or data.get("id")
    event_type = data.get("event_type") or data.get("type")

    if not event_id:
        raise ValidationError("Missing event_id in webhook payload")

    if not event_type:
        raise ValidationError("Missing event_type in webhook payload")

    # Idempotency check: have we already processed this event?
    conn = None
    try:
        conn = init_db_connection()
        cursor = conn.cursor()

        # Check if event already processed
        cursor.execute(
            "SELECT id FROM webhook_events WHERE event_id = %s",
            (event_id,)
        )
        if cursor.fetchone():
            # Already processed - return success (idempotent)
            return {"status": "already_processed", "event_id": event_id}

        # Record webhook event (for idempotency)
        cursor.execute(
            """
            INSERT INTO webhook_events (event_id, event_type, payload, received_at)
            VALUES (%s, %s, %s, NOW())
            """,
            (event_id, event_type, payload.decode('utf-8'))
        )

        # Process based on event type
        result = process_event(cursor, event_type, data)

        conn.commit()
        return {"status": "processed", "event_id": event_id, "result": result}

    except Exception as e:
        if conn:
            conn.rollback()
        raise DatabaseError(f"Failed to process webhook: {str(e)}")
    finally:
        if conn:
            conn.close()


def process_event(cursor, event_type: str, data: dict):
    """
    Route event to appropriate handler.

    Args:
        cursor: Database cursor
        event_type: Type of event (e.g., "user.created", "order.updated")
        data: Event payload

    Returns:
        Handler result
    """
    handlers = {
        "user.created": handle_user_created,
        "user.updated": handle_user_updated,
        "user.deleted": handle_user_deleted,
        "order.created": handle_order_created,
        "order.paid": handle_order_paid,
        "order.shipped": handle_order_shipped,
    }

    handler = handlers.get(event_type)
    if not handler:
        # Log unknown event type but don't fail
        logger = get_logger()
        logger.warning(f"Unhandled webhook event type: {event_type}")
        return {"skipped": True, "reason": "unknown_event_type"}

    return handler(cursor, data)


def handle_user_created(cursor, data: dict):
    """Handle user creation event."""
    user_id = data.get("user_id") or data.get("id")
    email = data.get("email")
    name = data.get("name")

    # Create user record
    cursor.execute(
        """
        INSERT INTO users (id, email, name, created_at)
        VALUES (%s, %s, %s, NOW())
        ON CONFLICT (id) DO UPDATE SET
            email = EXCLUDED.email,
            name = EXCLUDED.name,
            updated_at = NOW()
        """,
        (user_id, email, name)
    )

    # Send welcome email (async via message queue, etc.)
    # enqueue_welcome_email(user_id, email)

    return {"action": "user_created", "user_id": user_id}


def handle_user_updated(cursor, data: dict):
    """Handle user update event."""
    user_id = data.get("user_id") or data.get("id")
    updates = data.get("updates", {})

    # Build dynamic update
    set_clauses = []
    params = []
    for key, value in updates.items():
        set_clauses.append(f"{key} = %s")
        params.append(value)
    params.append(user_id)

    cursor.execute(
        f"UPDATE users SET {', '.join(set_clauses)}, updated_at = NOW() WHERE id = %s",
        params
    )

    return {"action": "user_updated", "user_id": user_id}


def handle_user_deleted(cursor, data: dict):
    """Handle user deletion event."""
|
||||
user_id = data.get("user_id") or data.get("id")
|
||||
|
||||
# Soft delete (mark as inactive)
|
||||
cursor.execute(
|
||||
"UPDATE users SET status = 'deleted', deleted_at = NOW() WHERE id = %s",
|
||||
(user_id,)
|
||||
)
|
||||
|
||||
return {"action": "user_deleted", "user_id": user_id}
|
||||
|
||||
|
||||
def handle_order_created(cursor, data: dict):
|
||||
"""Handle order creation event."""
|
||||
order_id = data.get("order_id") or data.get("id")
|
||||
user_id = data.get("user_id")
|
||||
total = data.get("total")
|
||||
|
||||
cursor.execute(
|
||||
"""
|
||||
INSERT INTO orders (id, user_id, total, status, created_at)
|
||||
VALUES (%s, %s, %s, 'pending', NOW())
|
||||
""",
|
||||
(order_id, user_id, total)
|
||||
)
|
||||
|
||||
return {"action": "order_created", "order_id": order_id}
|
||||
|
||||
|
||||
def handle_order_paid(cursor, data: dict):
|
||||
"""Handle order payment event."""
|
||||
order_id = data.get("order_id") or data.get("id")
|
||||
payment_id = data.get("payment_id")
|
||||
amount = data.get("amount")
|
||||
|
||||
cursor.execute(
|
||||
"""
|
||||
UPDATE orders
|
||||
SET status = 'paid',
|
||||
paid_amount = %s,
|
||||
payment_id = %s,
|
||||
paid_at = NOW()
|
||||
WHERE id = %s
|
||||
""",
|
||||
(amount, payment_id, order_id)
|
||||
)
|
||||
|
||||
# Trigger fulfillment
|
||||
# enqueue_fulfillment(order_id)
|
||||
|
||||
return {"action": "order_paid", "order_id": order_id}
|
||||
|
||||
|
||||
def handle_order_shipped(cursor, data: dict):
|
||||
"""Handle order shipment event."""
|
||||
order_id = data.get("order_id") or data.get("id")
|
||||
tracking_number = data.get("tracking_number")
|
||||
carrier = data.get("carrier")
|
||||
|
||||
cursor.execute(
|
||||
"""
|
||||
UPDATE orders
|
||||
SET status = 'shipped',
|
||||
tracking_number = %s,
|
||||
carrier = %s,
|
||||
shipped_at = NOW()
|
||||
WHERE id = %s
|
||||
""",
|
||||
(tracking_number, carrier, order_id)
|
||||
)
|
||||
|
||||
# Send shipping notification
|
||||
# send_shipping_email(order_id)
|
||||
|
||||
return {"action": "order_shipped", "order_id": order_id}
|
||||
|
||||
|
||||
def get_logger():
|
||||
"""Get logger instance."""
|
||||
import logging
|
||||
return logging.getLogger(__name__)
|
||||
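The dict-based routing used by `process_event` can be exercised without a database by swapping in a stub cursor. A minimal stdlib-only sketch — the `FakeCursor` and the trimmed-down handler and `dispatch` below are illustrative stand-ins, not part of the template:

```python
# Illustrative stand-in handler: records what it would have executed.
def handle_user_created(cursor, data):
    cursor.executed.append(("insert_user", data.get("id")))
    return {"action": "user_created", "user_id": data.get("id")}


def dispatch(cursor, event_type, data):
    handlers = {"user.created": handle_user_created}
    handler = handlers.get(event_type)
    if not handler:
        # Unknown events are skipped rather than failed, so the webhook
        # sender does not keep retrying them forever.
        return {"skipped": True, "reason": "unknown_event_type"}
    return handler(cursor, data)


class FakeCursor:
    def __init__(self):
        self.executed = []


cursor = FakeCursor()
known = dispatch(cursor, "user.created", {"id": "u-1"})
unknown = dispatch(cursor, "payment.refunded", {})
print(known)    # {'action': 'user_created', 'user_id': 'u-1'}
print(unknown)  # {'skipped': True, 'reason': 'unknown_event_type'}
```

The point of the pattern: adding a new event type is one dict entry plus one handler function, and anything unrecognized is acknowledged without erroring.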
37 fission-python/template/migrates/schema.sql Normal file
@@ -0,0 +1,37 @@
-- Migration: 001_initial_schema.sql
-- Description: Initial database schema with example items table
-- To customize: Rename tables/columns and add your own migrations

-- Create items table (example)
CREATE TABLE IF NOT EXISTS items (
    id          UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name        VARCHAR(255) NOT NULL,
    description TEXT,
    status      VARCHAR(50) NOT NULL DEFAULT 'active',
    metadata    JSONB,
    created     TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    modified    TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Create index on status for faster filtering
CREATE INDEX IF NOT EXISTS idx_items_status ON items(status);

-- Create index on created for sorting
CREATE INDEX IF NOT EXISTS idx_items_created ON items(created);

-- Optional: Trigger to auto-update the modified timestamp
CREATE OR REPLACE FUNCTION update_modified_column()
RETURNS TRIGGER AS $$
BEGIN
    NEW.modified = NOW();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE TRIGGER update_items_modtime
    BEFORE UPDATE ON items
    FOR EACH ROW
    EXECUTE FUNCTION update_modified_column();

-- Add table comment
COMMENT ON TABLE items IS 'Example items table - replace with your own schema';
8 fission-python/template/pytest.ini Normal file
@@ -0,0 +1,8 @@
[pytest]
testpaths = test
python_files = test_*.py
python_classes = Test*
python_functions = test_*
log_cli = true
log_cli_level = INFO
addopts = -v --tb=short
42 fission-python/template/specs/README Normal file
@@ -0,0 +1,42 @@
Fission Specs
=============

This is a set of specifications for a Fission app. This includes functions,
environments, and triggers; we collectively call these things "resources".

How to use these specs
----------------------

These specs are handled with the 'fission spec' command. See 'fission spec --help'.

'fission spec apply' will "apply" all resources specified in this directory to your
cluster. That means it checks what resources exist on your cluster, what resources are
specified in the specs directory, and reconciles the difference by creating, updating or
deleting resources on the cluster.

'fission spec apply' will also package up your source code (or compiled binaries) and
upload the archives to the cluster if needed. It uses 'ArchiveUploadSpec' resources in
this directory to figure out which files to archive.

You can use 'fission spec apply --watch' to watch for file changes and continuously keep
the cluster updated.

You can add YAMLs to this directory by writing them manually, but it's easier to generate
them. Use 'fission function create --spec' to generate a function spec,
'fission environment create --spec' to generate an environment spec, and so on.

You can edit any of the files in this directory, except 'fission-deployment-config.yaml',
which contains a UID that you should never change. To apply your changes simply use
'fission spec apply'.

fission-deployment-config.yaml
------------------------------

fission-deployment-config.yaml contains a UID. This UID is what fission uses to correlate
resources on the cluster to resources in this directory.

All resources created by 'fission spec apply' are annotated with this UID. Resources on
the cluster that are _not_ annotated with this UID are never modified or deleted by
fission.
0 fission-python/template/src/__init__.py Normal file
15 fission-python/template/src/build.sh Executable file
@@ -0,0 +1,15 @@
#!/bin/sh
# Detect the base distro so the right build dependencies get installed
ID=$( grep "^ID=" /etc/os-release | awk -F= '{print $2}' )

if [ "${ID}" = "debian" ]
then
    apt-get update && apt-get install -y gcc libpq-dev python3-dev
else
    apk update && apk add gcc postgresql-dev python3-dev
fi

if [ -f "${SRC_PKG}/requirements.txt" ]
then
    pip3 install -r "${SRC_PKG}/requirements.txt" -t "${SRC_PKG}"
fi
cp -r "${SRC_PKG}" "${DEPLOY_PKG}"
103 fission-python/template/src/exceptions.py Normal file
@@ -0,0 +1,103 @@
"""
|
||||
Custom exceptions for Fission Python functions.
|
||||
|
||||
All exceptions include:
|
||||
- error_code: Machine-readable error identifier
|
||||
- http_status: Appropriate HTTP status code
|
||||
- error_msg: Human-readable message
|
||||
- x_user: Optional user identifier from request headers
|
||||
- details: Optional additional error context
|
||||
"""
|
||||
|
||||
from typing import Optional
|
||||
|
||||
|
||||
class ServiceException(Exception):
|
||||
"""Base exception for service errors."""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
error_code: str,
|
||||
http_status: int,
|
||||
error_msg: str,
|
||||
x_user: Optional[str] = None,
|
||||
details: Optional[dict] = None,
|
||||
):
|
||||
self.error_code = error_code
|
||||
self.http_status = http_status
|
||||
self.error_msg = error_msg
|
||||
self.x_user = x_user
|
||||
self.details = details
|
||||
super().__init__(self.error_msg)
|
||||
|
||||
|
||||
class ValidationError(ServiceException):
|
||||
"""Invalid request data."""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
error_msg: str,
|
||||
x_user: Optional[str] = None,
|
||||
details: Optional[dict] = None,
|
||||
):
|
||||
super().__init__(
|
||||
error_code="VALIDATION_ERROR",
|
||||
http_status=400,
|
||||
error_msg=error_msg,
|
||||
x_user=x_user,
|
||||
details=details,
|
||||
)
|
||||
|
||||
|
||||
class NotFoundError(ServiceException):
|
||||
"""Resource not found."""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
error_msg: str,
|
||||
x_user: Optional[str] = None,
|
||||
details: Optional[dict] = None,
|
||||
):
|
||||
super().__init__(
|
||||
error_code="NOT_FOUND",
|
||||
http_status=404,
|
||||
error_msg=error_msg,
|
||||
x_user=x_user,
|
||||
details=details,
|
||||
)
|
||||
|
||||
|
||||
class ConflictError(ServiceException):
|
||||
"""Resource conflict (e.g., duplicate name)."""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
error_msg: str,
|
||||
x_user: Optional[str] = None,
|
||||
details: Optional[dict] = None,
|
||||
):
|
||||
super().__init__(
|
||||
error_code="CONFLICT",
|
||||
http_status=409,
|
||||
error_msg=error_msg,
|
||||
x_user=x_user,
|
||||
details=details,
|
||||
)
|
||||
|
||||
|
||||
class DatabaseError(ServiceException):
|
||||
"""Database operation failed."""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
error_msg: str,
|
||||
x_user: Optional[str] = None,
|
||||
details: Optional[dict] = None,
|
||||
):
|
||||
super().__init__(
|
||||
error_code="DB_ERROR",
|
||||
http_status=500,
|
||||
error_msg=error_msg,
|
||||
x_user=x_user,
|
||||
details=details,
|
||||
)
|
||||
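A sketch of how a handler might translate these exceptions into an HTTP response. The minimal `ServiceException`/`NotFoundError` classes below mirror the template's so the snippet is self-contained, and `lookup_item` is a hypothetical function invented for illustration:

```python
from typing import Optional

# Minimal mirrors of the template's exception classes, inlined so this
# snippet runs on its own.
class ServiceException(Exception):
    def __init__(self, error_code: str, http_status: int, error_msg: str,
                 x_user: Optional[str] = None, details: Optional[dict] = None):
        self.error_code = error_code
        self.http_status = http_status
        self.error_msg = error_msg
        self.x_user = x_user
        self.details = details
        super().__init__(error_msg)


class NotFoundError(ServiceException):
    def __init__(self, error_msg: str, **kw):
        super().__init__("NOT_FOUND", 404, error_msg, **kw)


def lookup_item(item_id: str) -> dict:
    # Hypothetical lookup that always misses, for demonstration.
    raise NotFoundError(f"Item {item_id} not found", x_user="u-42")


# Catch the base class once and build a (body, status) pair from the
# exception's fields - every subclass carries its own HTTP status.
try:
    lookup_item("abc")
except ServiceException as exc:
    body = {"error_code": exc.error_code, "error_msg": exc.error_msg,
            "http_status": exc.http_status}
    status = exc.http_status

print(status, body["error_code"])  # 404 NOT_FOUND
```

Because every exception carries `error_code` and `http_status`, one `except ServiceException` clause can serve all error paths.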
251 fission-python/template/src/helpers.py Normal file
@@ -0,0 +1,251 @@
"""
|
||||
Helper utilities for Fission Python functions.
|
||||
|
||||
Provides database connectivity, configuration/secrets access, and basic data utilities.
|
||||
"""
|
||||
|
||||
import datetime
|
||||
import logging
|
||||
import socket
|
||||
from typing import Any, Dict, Optional
|
||||
|
||||
import psycopg2
|
||||
from flask import current_app
|
||||
from psycopg2.extras import LoggingConnection
|
||||
from vault import decrypt_vault, is_valid_vault_format
|
||||
|
||||
# Configuration - these will be overridden by environment-specific values
|
||||
CORS_HEADERS = {
|
||||
"Content-Type": "application/json",
|
||||
}
|
||||
# These placeholders will be replaced by create-project.sh with actual project names
|
||||
SECRET_NAME = "${PROJECT_NAME}-env"
|
||||
CONFIG_NAME = "${PROJECT_NAME}-config"
|
||||
K8S_NAMESPACE = "default"
|
||||
CRYPTO_KEY = "" # Set this in your deployment environment
|
||||
|
||||
# Logging setup
|
||||
logging.basicConfig(level=logging.DEBUG)
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def init_db_connection():
|
||||
"""
|
||||
Initialize PostgreSQL database connection.
|
||||
|
||||
Configuration is loaded from Kubernetes secrets or defaults.
|
||||
|
||||
Returns:
|
||||
psycopg2 connection object
|
||||
|
||||
Raises:
|
||||
Exception: If database connection fails or port check fails
|
||||
"""
|
||||
db_host = get_secret("PG_HOST", "127.0.0.1")
|
||||
db_port = int(get_secret("PG_PORT", 5432))
|
||||
|
||||
if not check_port_open(ip=db_host, port=db_port):
|
||||
raise Exception(f"Failed to connect to database at {db_host}:{db_port}")
|
||||
|
||||
options = get_secret("PG_DBSCHEMA")
|
||||
if options:
|
||||
options = f"-c search_path={options}" # if specific db schema
|
||||
conn = psycopg2.connect(
|
||||
database=get_secret("PG_DB", "postgres"),
|
||||
user=get_secret("PG_USER", "postgres"),
|
||||
password=get_secret("PG_PASS", "secret"),
|
||||
host=db_host,
|
||||
port=db_port,
|
||||
options=options,
|
||||
connection_factory=LoggingConnection,
|
||||
)
|
||||
conn.initialize(logger)
|
||||
return conn
|
||||
|
||||
|
||||
def db_row_to_dict(cursor, row) -> Dict[str, Any]:
|
||||
"""
|
||||
Convert a database row to a dictionary.
|
||||
|
||||
Args:
|
||||
cursor: Database cursor (with description attribute)
|
||||
row: Database row tuple
|
||||
|
||||
Returns:
|
||||
Dictionary mapping column names to values (datetime converted to isoformat)
|
||||
"""
|
||||
record = {}
|
||||
for i, column in enumerate(cursor.description):
|
||||
data = row[i]
|
||||
if isinstance(data, datetime.datetime):
|
||||
data = data.isoformat()
|
||||
record[column.name] = data
|
||||
return record
|
||||
|
||||
|
||||
def db_rows_to_array(cursor, rows) -> list:
|
||||
"""
|
||||
Convert multiple database rows to list of dictionaries.
|
||||
|
||||
Args:
|
||||
cursor: Database cursor
|
||||
rows: List of row tuples
|
||||
|
||||
Returns:
|
||||
List of dictionaries
|
||||
"""
|
||||
return [db_row_to_dict(cursor, row) for row in rows]
|
||||
|
||||
|
||||
def get_current_namespace() -> str:
|
||||
"""
|
||||
Get current Kubernetes namespace from service account secret.
|
||||
|
||||
Returns:
|
||||
Namespace string or default K8S_NAMESPACE if not available
|
||||
"""
|
||||
try:
|
||||
with open("/var/run/secrets/kubernetes.io/serviceaccount/namespace", "r") as f:
|
||||
namespace = f.read().strip()
|
||||
except Exception:
|
||||
namespace = K8S_NAMESPACE
|
||||
return str(namespace)
|
||||
|
||||
|
||||
def get_secret(key: str, default=None) -> str:
|
||||
"""
|
||||
Read a secret from Kubernetes secrets volume.
|
||||
|
||||
Args:
|
||||
key: Secret key name
|
||||
default: Default value if secret not found
|
||||
|
||||
Returns:
|
||||
Secret value (decrypted if vault-encrypted) or default
|
||||
"""
|
||||
namespace = get_current_namespace()
|
||||
path = f"/secrets/{namespace}/{SECRET_NAME}/{key}"
|
||||
try:
|
||||
with open(path, "r") as f:
|
||||
value = f.read().strip()
|
||||
if value:
|
||||
if is_valid_vault_format(value):
|
||||
return decrypt_vault(value, CRYPTO_KEY)
|
||||
else:
|
||||
return value
|
||||
except Exception as err:
|
||||
current_app.logger.error(f"Failed to read secret {path}: {err}")
|
||||
return default
|
||||
|
||||
|
||||
def get_config(key: str, default=None) -> str:
|
||||
"""
|
||||
Read configuration from Kubernetes config volume.
|
||||
|
||||
Args:
|
||||
key: Config key name
|
||||
default: Default value if config not found
|
||||
|
||||
Returns:
|
||||
Config value (decrypted if vault-encrypted) or default
|
||||
"""
|
||||
namespace = get_current_namespace()
|
||||
path = f"/configs/{namespace}/{CONFIG_NAME}/{key}"
|
||||
try:
|
||||
with open(path, "r") as f:
|
||||
value = f.read().strip()
|
||||
if value:
|
||||
if is_valid_vault_format(value):
|
||||
return decrypt_vault(value, CRYPTO_KEY)
|
||||
else:
|
||||
return value
|
||||
except Exception as err:
|
||||
current_app.logger.error(f"Failed to read config {path}: {err}")
|
||||
return default
|
||||
|
||||
|
||||
def str_to_bool(input: Optional[str]) -> Optional[bool]:
|
||||
"""
|
||||
Convert string representation to boolean.
|
||||
|
||||
Args:
|
||||
input: String value ('true', 'false', or None)
|
||||
|
||||
Returns:
|
||||
True, False, or None if not recognized
|
||||
"""
|
||||
input = input or ""
|
||||
BOOL_MAP = {"true": True, "false": False}
|
||||
return BOOL_MAP.get(input.strip().lower(), None)
|
||||
|
||||
|
||||
def check_port_open(ip: str, port: int, timeout: int = 30) -> bool:
|
||||
"""
|
||||
Check if a TCP port is open on the given IP address.
|
||||
|
||||
Args:
|
||||
ip: IP address or hostname
|
||||
port: Port number
|
||||
timeout: Connection timeout in seconds
|
||||
|
||||
Returns:
|
||||
True if port is open, False otherwise
|
||||
"""
|
||||
try:
|
||||
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
|
||||
s.settimeout(timeout)
|
||||
result = s.connect_ex((ip, port))
|
||||
return result == 0
|
||||
except Exception as err:
|
||||
logger.error(f"Check port open error: {err}")
|
||||
return False
|
||||
|
||||
|
||||
def get_user_from_headers() -> Optional[str]:
|
||||
"""
|
||||
Extract user identifier from request headers.
|
||||
|
||||
Returns:
|
||||
User ID from X-Fission-Params-UserId or similar header, or None if not present.
|
||||
"""
|
||||
from flask import request
|
||||
|
||||
# Try common header names
|
||||
user_id = (
|
||||
request.headers.get("X-Fission-Params-UserId")
|
||||
or request.headers.get("X-User-Id")
|
||||
or request.headers.get("User-Id")
|
||||
)
|
||||
return user_id
|
||||
|
||||
|
||||
def format_error_response(
|
||||
error_code: str,
|
||||
error_msg: str,
|
||||
http_status: int,
|
||||
x_user: Optional[str] = None,
|
||||
details: Optional[dict] = None,
|
||||
) -> dict:
|
||||
"""
|
||||
Create a standardized error response dictionary.
|
||||
|
||||
Args:
|
||||
error_code: Machine-readable error identifier
|
||||
error_msg: Human-readable error message
|
||||
http_status: HTTP status code
|
||||
x_user: Optional user identifier
|
||||
details: Optional additional error context
|
||||
|
||||
Returns:
|
||||
Dictionary formatted as ErrorResponse schema
|
||||
"""
|
||||
response = {
|
||||
"error_code": error_code,
|
||||
"http_status": http_status,
|
||||
"error_msg": error_msg,
|
||||
}
|
||||
if x_user:
|
||||
response["x_user"] = x_user
|
||||
if details:
|
||||
response["details"] = details
|
||||
return response
|
||||
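The row-to-dict conversion can be tried without psycopg2 by faking the cursor's `description` attribute. The inline `db_row_to_dict` copy below duplicates the template's helper, and `SimpleNamespace` stands in for psycopg2's column objects (each of which exposes a `.name`):

```python
import datetime
from types import SimpleNamespace

# Inline copy of the template's db_row_to_dict, so the snippet runs
# without psycopg2 installed.
def db_row_to_dict(cursor, row):
    record = {}
    for i, column in enumerate(cursor.description):
        data = row[i]
        if isinstance(data, datetime.datetime):
            data = data.isoformat()
        record[column.name] = data
    return record


# A psycopg2 cursor exposes column metadata via `description`; each entry
# has a `.name` attribute, which SimpleNamespace imitates here.
fake_cursor = SimpleNamespace(description=[
    SimpleNamespace(name="id"),
    SimpleNamespace(name="created"),
])
row = ("item-1", datetime.datetime(2024, 1, 2, 3, 4, 5))

print(db_row_to_dict(fake_cursor, row))
# {'id': 'item-1', 'created': '2024-01-02T03:04:05'}
```

Note that only `datetime.datetime` values are serialized; `date` and `Decimal` columns pass through unchanged and would need their own handling if returned as JSON.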
153 fission-python/template/src/models.py Normal file
@@ -0,0 +1,153 @@
"""
|
||||
Pydantic models for request/response validation and data schemas.
|
||||
|
||||
This file provides example patterns that you can adapt for your service:
|
||||
- Enums for controlled vocabularies
|
||||
- Request models with validation
|
||||
- Response models with serialization config
|
||||
- Pagination and filtering patterns
|
||||
- Nested model relationships
|
||||
"""
|
||||
|
||||
import datetime
|
||||
import enum
|
||||
import typing
|
||||
|
||||
import pydantic
|
||||
from flask import request
|
||||
|
||||
|
||||
# ========== Example Enums ==========
|
||||
class Status(str, enum.Enum):
|
||||
"""Example status enum."""
|
||||
ACTIVE = "active"
|
||||
INACTIVE = "inactive"
|
||||
PENDING = "pending"
|
||||
|
||||
|
||||
class DataType(str, enum.Enum):
|
||||
"""Example data type enum."""
|
||||
ITEM = "ITEM"
|
||||
COLLECTION = "COLLECTION"
|
||||
|
||||
|
||||
# ========== Filter Models (for query parameters) ==========
|
||||
@typing.dataclass
|
||||
class ItemFilter:
|
||||
"""
|
||||
Example filter using dataclass.
|
||||
|
||||
Filters are often built from request query parameters.
|
||||
"""
|
||||
ids: typing.Optional[typing.List[str]] = None
|
||||
keyword: typing.Optional[str] = None
|
||||
status: typing.Optional[typing.List[str]] = None
|
||||
created_from: typing.Optional[datetime.datetime] = None
|
||||
created_to: typing.Optional[datetime.datetime] = None
|
||||
|
||||
@classmethod
|
||||
def from_request_queries(cls) -> "ItemFilter":
|
||||
"""Build filter from Flask request query parameters."""
|
||||
filter = ItemFilter()
|
||||
filter.ids = request.args.getlist("filter[ids]")
|
||||
filter.keyword = request.args.get("filter[keyword]")
|
||||
filter.status = request.args.getlist("filter[status]")
|
||||
filter.created_from = request.args.get("filter[created_from]")
|
||||
filter.created_to = request.args.get("filter[created_to]")
|
||||
return filter
|
||||
|
||||
|
||||
@typing.dataclass
|
||||
class Pagination:
|
||||
"""Pagination parameters."""
|
||||
page: int = 0
|
||||
size: int = 10
|
||||
asc: bool = True
|
||||
|
||||
@classmethod
|
||||
def from_request_queries(cls) -> "Pagination":
|
||||
"""Build pagination from request query parameters."""
|
||||
p = Pagination()
|
||||
p.page = int(request.args.get("page", 0))
|
||||
p.size = int(request.args.get("size", 10))
|
||||
p.asc = bool(request.args.get("asc", True))
|
||||
return p
|
||||
|
||||
|
||||
# ========== Request Models ==========
|
||||
class ItemCreateRequest(pydantic.BaseModel):
|
||||
"""Request model for creating a new item."""
|
||||
name: str = pydantic.Field(..., min_length=1, max_length=255, description="Item name")
|
||||
description: typing.Optional[str] = pydantic.Field(
|
||||
default=None, description="Item description"
|
||||
)
|
||||
status: Status = pydantic.Field(default=Status.ACTIVE, description="Item status")
|
||||
metadata: typing.Optional[dict] = pydantic.Field(
|
||||
default=None, description="Additional metadata as JSON"
|
||||
)
|
||||
|
||||
class Config:
|
||||
json_schema_extra = {
|
||||
"example": {
|
||||
"name": "Example Item",
|
||||
"description": "A sample item",
|
||||
"status": "active",
|
||||
"metadata": {"key": "value"},
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
class ItemUpdateRequest(pydantic.BaseModel):
|
||||
"""Request model for updating an existing item."""
|
||||
name: typing.Optional[str] = pydantic.Field(
|
||||
default=None, min_length=1, max_length=255, description="Item name"
|
||||
)
|
||||
description: typing.Optional[str] = pydantic.Field(
|
||||
default=None, description="Item description"
|
||||
)
|
||||
status: typing.Optional[Status] = pydantic.Field(
|
||||
default=None, description="Item status"
|
||||
)
|
||||
metadata: typing.Optional[dict] = pydantic.Field(
|
||||
default=None, description="Additional metadata"
|
||||
)
|
||||
|
||||
class Config:
|
||||
json_schema_extra = {
|
||||
"example": {
|
||||
"name": "Updated Item Name",
|
||||
"status": "inactive",
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
# ========== Response Models ==========
|
||||
class ItemResponse(pydantic.BaseModel):
|
||||
"""Standard item response."""
|
||||
id: str = pydantic.Field(..., description="Item unique identifier")
|
||||
name: str = pydantic.Field(..., description="Item name")
|
||||
description: typing.Optional[str] = pydantic.Field(default=None, description="Item description")
|
||||
status: Status = pydantic.Field(..., description="Item status")
|
||||
metadata: typing.Optional[dict] = pydantic.Field(default=None, description="Additional metadata")
|
||||
created: datetime.datetime = pydantic.Field(..., description="Creation timestamp")
|
||||
modified: datetime.datetime = pydantic.Field(..., description="Last modification timestamp")
|
||||
|
||||
class Config:
|
||||
from_attributes = True # Enable ORM mode for SQLAlchemy/psycopg2 compatibility
|
||||
|
||||
|
||||
class PaginatedResponse(pydantic.BaseModel):
|
||||
"""Paginated listing response."""
|
||||
data: typing.List[ItemResponse] = pydantic.Field(..., description="List of items")
|
||||
page: int = pydantic.Field(..., description="Current page number (0-indexed)")
|
||||
size: int = pydantic.Field(..., description="Page size")
|
||||
total: typing.Optional[int] = pydantic.Field(default=None, description="Total count of items")
|
||||
|
||||
|
||||
class ErrorResponse(pydantic.BaseModel):
|
||||
"""Standardized error response format (used by exceptions)."""
|
||||
error_code: str = pydantic.Field(..., description="Machine-readable error identifier")
|
||||
http_status: int = pydantic.Field(..., description="HTTP status code")
|
||||
error_msg: str = pydantic.Field(..., description="Human-readable error message")
|
||||
x_user: typing.Optional[str] = pydantic.Field(default=None, description="User identifier")
|
||||
details: typing.Optional[dict] = pydantic.Field(default=None, description="Additional error context")
|
||||
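The pagination pattern can be exercised outside Flask by feeding it a plain object in place of `request.args`. A stdlib-only sketch, where the `QueryArgs` shim is a hypothetical stand-in that imitates the two werkzeug MultiDict methods the models use (`get` and `getlist`):

```python
import dataclasses


class QueryArgs:
    """Tiny shim imitating the two MultiDict methods the models rely on."""
    def __init__(self, single, multi):
        self._single, self._multi = single, multi

    def get(self, key, default=None):
        return self._single.get(key, default)

    def getlist(self, key):
        return self._multi.get(key, [])


@dataclasses.dataclass
class Pagination:
    page: int = 0
    size: int = 10
    asc: bool = True

    @classmethod
    def from_args(cls, args: QueryArgs) -> "Pagination":
        p = cls()
        p.page = int(args.get("page", 0))
        p.size = int(args.get("size", 10))
        asc = args.get("asc")
        if asc is not None:
            # Query values are strings: bool("false") is True, so the
            # string itself must be compared.
            p.asc = asc.strip().lower() == "true"
        return p


args = QueryArgs({"page": "2", "asc": "false"}, {})
p = Pagination.from_args(args)
print(p)  # Pagination(page=2, size=10, asc=False)
```

The string comparison matters: naive `bool(...)` coercion on a query value would treat `asc=false` as True, silently inverting sort order.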
5 fission-python/template/src/requirements.txt Normal file
@@ -0,0 +1,5 @@
Flask==2.1.1
pydantic==2.11.7
psycopg2-binary==2.9.10
PyNaCl==1.6.0
requests==2.32.2
142 fission-python/template/src/vault.py Normal file
@@ -0,0 +1,142 @@
import base64

import nacl.secret


def string_to_hex(text: str) -> str:
    """
    Convert a string to its hexadecimal representation.

    Args:
        text: Input string to convert

    Returns:
        Hexadecimal string representation
    """
    return text.encode("utf-8").hex()


def hex_to_string(hex_string: str) -> str:
    """
    Convert a hexadecimal string back to a regular string.

    Args:
        hex_string: Hexadecimal string to convert

    Returns:
        Decoded string

    Raises:
        ValueError: If hex_string is not valid hexadecimal
    """
    return bytes.fromhex(hex_string).decode("utf-8")


def decrypt_vault(vault: str, key: str) -> str:
    """
    Decrypt a vault string encrypted with PyNaCl SecretBox.

    Vault format: "vault:v1:<base64_encrypted_data>"

    Args:
        vault: Vault-formatted string (e.g., "vault:v1:eW91cl9lbmNyeXB0ZWRfZGF0YQ==")
        key: Hex string representation of a 32-byte encryption key

    Returns:
        Decrypted string

    Raises:
        ValueError: If the vault format is invalid or the key is not valid hex
        nacl.exceptions.CryptoError: If decryption fails (wrong key or corrupted data)
    """
    # Parse vault format
    parts = vault.split(":", 2)
    if len(parts) != 3 or parts[0] != "vault" or parts[1] != "v1":
        raise ValueError("Invalid vault format. Expected 'vault:v1:<encrypted_data>'")

    encrypted_string = parts[2]
    # Convert hex string key to bytes
    key_bytes = bytes.fromhex(key)

    # Create a SecretBox instance with the key
    box = nacl.secret.SecretBox(key_bytes)

    # Decode the base64-encoded encrypted string
    encrypted_data = base64.b64decode(encrypted_string)

    # Decrypt the data
    decrypted_bytes = box.decrypt(encrypted_data)

    # Convert bytes to string
    return decrypted_bytes.decode("utf-8")


def encrypt_vault(plaintext: str, key: str) -> str:
    """
    Encrypt a string and return it in vault format.

    Args:
        plaintext: String to encrypt
        key: Hex string representation of a 32-byte encryption key

    Returns:
        Vault-formatted encrypted string (e.g., "vault:v1:<encrypted_data>")

    Raises:
        ValueError: If the key is not a valid hex string
    """
    # Convert hex string key to bytes
    key_bytes = bytes.fromhex(key)

    # Create a SecretBox instance with the key
    box = nacl.secret.SecretBox(key_bytes)

    # Encrypt the data
    encrypted = box.encrypt(plaintext.encode("utf-8"))

    # Encode to base64
    encrypted_string = base64.b64encode(encrypted).decode("utf-8")

    # Return in vault format
    return f"vault:v1:{encrypted_string}"


def is_valid_vault_format(vault: str) -> bool:
    """
    Check if a string is in valid vault format.

    Vault format: "vault:v1:<base64_encrypted_data>"

    Args:
        vault: String to validate

    Returns:
        True if the string matches the vault format structure, False otherwise

    Note:
        This only checks the format structure, not whether the data can be decrypted
    """
    # Parse vault format
    parts = vault.split(":", 2)

    # Check basic structure: vault:v1:<data>
    if len(parts) != 3 or parts[0] != "vault" or parts[1] != "v1":
        return False

    encrypted_data = parts[2]

    # Check if data part is not empty
    if not encrypted_data:
        return False

    # Check if data is valid base64
    try:
        decoded = base64.b64decode(encrypted_data)
    except Exception:
        return False

    # Check if decoded data has at least nonce bytes (24 bytes for NaCl)
    if len(decoded) < nacl.secret.SecretBox.NONCE_SIZE:
        return False

    return True
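The format check can be exercised without PyNaCl installed. The inline `is_valid_vault_format` copy below substitutes the literal nonce size (24 bytes for NaCl SecretBox) for `nacl.secret.SecretBox.NONCE_SIZE` so the snippet is stdlib-only:

```python
import base64
import os

NONCE_SIZE = 24  # nacl.secret.SecretBox.NONCE_SIZE


def is_valid_vault_format(vault: str) -> bool:
    # Structure check only: "vault:v1:<base64 data of at least nonce length>"
    parts = vault.split(":", 2)
    if len(parts) != 3 or parts[0] != "vault" or parts[1] != "v1":
        return False
    if not parts[2]:
        return False
    try:
        decoded = base64.b64decode(parts[2])
    except Exception:
        return False
    return len(decoded) >= NONCE_SIZE


# A SecretBox ciphertext is nonce + MAC + message, so any real vault
# payload decodes to well over 24 bytes; 40 random bytes stand in here.
payload = base64.b64encode(os.urandom(40)).decode("utf-8")
print(is_valid_vault_format(f"vault:v1:{payload}"))  # True
print(is_valid_vault_format("vault:v2:" + payload))  # False (wrong version)
print(is_valid_vault_format("plain-secret"))         # False (no prefix)
```

This is why `get_secret` can safely call it on every value: plain (non-vault) secrets fail the prefix check and are returned as-is rather than sent to the decryptor.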
0 fission-python/template/test/__init__.py Normal file
3 fission-python/template/test/requirements.txt Normal file
@@ -0,0 +1,3 @@
pytest==8.2.0
pytest-mock==3.14.0
requests==2.32.3
40 fission-python/template/test/test_example.py Normal file
@@ -0,0 +1,40 @@
"""
|
||||
Example test file for Fission Python functions.
|
||||
|
||||
This demonstrates basic testing patterns.
|
||||
"""
|
||||
|
||||
import pytest
|
||||
from unittest.mock import patch, MagicMock
|
||||
|
||||
|
||||
def test_placeholder():
|
||||
"""Placeholder test - replace with your actual tests."""
|
||||
assert True
|
||||
|
||||
|
||||
# Example: Testing a function with mocked dependencies
|
||||
@patch("helpers.init_db_connection")
|
||||
def test_example_with_mock(mock_db):
|
||||
"""Example test showing how to mock database."""
|
||||
from examples.example_crud import create_item
|
||||
|
||||
# Setup mock
|
||||
mock_conn = MagicMock()
|
||||
mock_cursor = MagicMock()
|
||||
mock_db.return_value = mock_conn
|
||||
mock_conn.cursor.return_value = mock_cursor
|
||||
mock_cursor.fetchone.return_value = ("id-123", "Test Item", "active")
|
||||
|
||||
# Mock Flask request
|
||||
with patch("examples.example_crud.request") as mock_request:
|
||||
mock_request.get_json.return_value = {"name": "Test Item", "status": "active"}
|
||||
mock_request.view_args = {}
|
||||
|
||||
# Call function
|
||||
result = create_item({}, {})
|
||||
|
||||
# Assert
|
||||
assert result["name"] == "Test Item"
|
||||
mock_cursor.execute.assert_called_once()
|
||||
mock_conn.commit.assert_called_once()
|
||||
290 fission-python/update-docstring.sh Executable file
@@ -0,0 +1,290 @@
#!/bin/bash
|
||||
|
||||
# Fission Docstring Updater
|
||||
# Parses and updates embedded fission configuration in Python function docstrings
|
||||
|
||||
set -euo pipefail
|
||||
|
||||
usage() {
|
||||
echo "Usage: $0 <file-path> [function-name] [--set \"<json>\"] [--get] [--help]"
|
||||
echo ""
|
||||
echo "Arguments:"
|
||||
echo " file-path: Path to the Python file containing the function"
|
||||
echo " function-name: Optional specific function name to target (if not provided, processes all functions with fission configuration)"
|
||||
echo ""
|
||||
echo "Options:"
|
||||
echo " --set <json>: Set the fission configuration to the provided JSON string"
|
||||
echo " --get: Get/display the current fission configuration (default action)"
|
||||
echo " --help: Show this help message"
|
||||
echo ""
|
||||
echo "Examples:"
|
||||
echo " $0 ./src/my_function.py main --get"
|
||||
echo " $0 ./src/my_function.py main --set '{\"name\": \"updated-function\"}'"
|
||||
echo " $0 ./src/functions.py --get"
|
||||
exit 1
|
||||
}
|
||||
|
||||
# Check dependencies
|
||||
if ! command -v python3 &> /dev/null; then
|
||||
echo "Error: python3 is required but not found"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Parse arguments
if [[ $# -lt 1 ]]; then
    usage
fi

FILE_PATH="$1"
shift

FUNCTION_NAME=""
ACTION="get"
SET_VALUE=""

while [[ $# -gt 0 ]]; do
    case $1 in
        --set)
            SET_VALUE="$2"
            ACTION="set"
            shift 2
            ;;
        --get)
            ACTION="get"
            shift
            ;;
        --help)
            usage
            ;;
        *)
            if [[ -z "$FUNCTION_NAME" ]]; then
                FUNCTION_NAME="$1"
            else
                echo "Error: Unexpected argument '$1'"
                usage
            fi
            shift
            ;;
    esac
done

# Validate that the file exists
if [[ ! -f "$FILE_PATH" ]]; then
    echo "Error: File '$FILE_PATH' does not exist"
    exit 1
fi

# Validate JSON if setting
if [[ "$ACTION" == "set" && -z "$SET_VALUE" ]]; then
    echo "Error: --set requires a JSON value"
    exit 1
fi

if [[ "$ACTION" == "set" ]]; then
    # Validate JSON format
    if ! echo "$SET_VALUE" | python3 -m json.tool >/dev/null 2>&1; then
        echo "Error: Invalid JSON provided for --set"
        exit 1
    fi
fi
# Python script to handle the docstring parsing and updating
PYTHON_SCRIPT=$(cat << 'EOF'
import re
import sys
import json

def extract_functions_with_fission(content):
    """Extract all functions that have fission configuration in their docstrings."""
    # Match def function_name(...): followed by a docstring containing ```fission
    pattern = r'(\s*def\s+(\w+)\s*\([^)]*\):\s*(?:\n\s*)?\"\"\"[\s\S]*?```fission[\s\S]*?```[\s\S]*?\"\"\"[\s\S]*?)(?=\n\s*def|\n\s*class|\Z)'

    matches = re.finditer(pattern, content, re.MULTILINE)
    functions = []

    for match in matches:
        full_match = match.group(1)
        # Extract the function name from the match
        func_name_match = re.search(r'def\s+(\w+)\s*\(', full_match)
        if func_name_match:
            functions.append({
                'name': func_name_match.group(1),
                'full_text': full_match,
                'start_pos': match.start(),
                'end_pos': match.end()
            })

    return functions

def extract_fission_config(docstring):
    """Extract fission configuration from a docstring."""
    # Look for ```fission ... ``` blocks
    pattern = r'```fission\s*([\s\S]*?)\s*```'
    match = re.search(pattern, docstring)
    if match:
        config_text = match.group(1).strip()
        try:
            return json.loads(config_text)
        except json.JSONDecodeError:
            return None
    return None

def replace_fission_config_in_docstring(docstring, new_config):
    """Replace the fission configuration in a docstring with the new config."""
    # Format the new config as indented JSON
    formatted_config = json.dumps(new_config, indent=4)
    # Replace the ```fission ... ``` block
    pattern = r'(```fission\s*)[\s\S]*?(\s*```)'
    replacement = r'\1' + formatted_config + r'\2'
    return re.sub(pattern, replacement, docstring, flags=re.DOTALL)

def process_file(file_path, target_function=None, action='get', set_value=None):
    """Process the Python file to get or set fission configuration."""
    try:
        with open(file_path, 'r') as f:
            content = f.read()
    except IOError as e:
        print(f"Error: Cannot read file '{file_path}': {e}", file=sys.stderr)
        sys.exit(1)

    functions = extract_functions_with_fission(content)

    if not functions:
        print("No functions with fission configuration found in file.", file=sys.stderr)
        if action == 'get':
            sys.exit(0)
        else:
            sys.exit(1)

    # Filter by function name if specified
    if target_function:
        functions = [f for f in functions if f['name'] == target_function]
        if not functions:
            print(f"Error: Function '{target_function}' with fission configuration not found.", file=sys.stderr)
            sys.exit(1)

    if action == 'get':
        # Display the current configuration for each function
        for func in functions:
            # Extract the docstring from the function text
            docstring_match = re.search(r'\"\"\"[\s\S]*?\"\"\"', func['full_text'])
            if docstring_match:
                config = extract_fission_config(docstring_match.group(0))
                if config is not None:
                    if len(functions) == 1:
                        print(json.dumps(config, indent=2))
                    else:
                        print(f"Function '{func['name']}':")
                        print(json.dumps(config, indent=2))
                        print()
                else:
                    if len(functions) == 1:
                        print("No fission configuration found in function docstring.", file=sys.stderr)
                    else:
                        print(f"Function '{func['name']}': No fission configuration found in docstring.", file=sys.stderr)
            else:
                if len(functions) == 1:
                    print("Could not extract docstring from function.", file=sys.stderr)
                else:
                    print(f"Function '{func['name']}': Could not extract docstring.", file=sys.stderr)

    elif action == 'set':
        if set_value is None:
            print("Error: No value provided for --set", file=sys.stderr)
            sys.exit(1)

        try:
            new_config = json.loads(set_value)
        except json.JSONDecodeError as e:
            print(f"Error: Invalid JSON provided for --set: {e}", file=sys.stderr)
            sys.exit(1)

        # Update each function
        updated_content = content
        offset = 0  # Track position changes caused by earlier replacements

        for func in functions:
            # Extract the docstring from the function text
            docstring_match = re.search(r'\"\"\"[\s\S]*?\"\"\"', func['full_text'])
            if docstring_match:
                docstring = docstring_match.group(0)
                # Only update functions that actually carry a fission configuration
                if extract_fission_config(docstring) is not None:
                    # Replace the fission configuration inside the docstring
                    new_docstring = replace_fission_config_in_docstring(docstring, new_config)
                    # Replace the docstring within the function text
                    new_func_text = func['full_text'].replace(docstring, new_docstring, 1)
                    # Splice the change into the overall content,
                    # adjusting positions for previous replacements
                    start_pos = func['start_pos'] + offset
                    end_pos = func['end_pos'] + offset
                    before = updated_content[:start_pos]
                    after = updated_content[end_pos:]
                    updated_content = before + new_func_text + after
                    # Update the offset for subsequent replacements
                    offset += len(new_func_text) - len(func['full_text'])
                else:
                    print(f"Warning: No fission configuration found in function '{func['name']}' to update.", file=sys.stderr)
            else:
                print(f"Warning: Could not extract docstring from function '{func['name']}'.", file=sys.stderr)

        # Write the result back to the file
        try:
            with open(file_path, 'w') as f:
                f.write(updated_content)
            if len(functions) == 1:
                print(f"Updated fission configuration in function '{functions[0]['name']}'.")
            else:
                print(f"Updated fission configuration in {len(functions)} function(s).")
        except IOError as e:
            print(f"Error: Cannot write to file '{file_path}': {e}", file=sys.stderr)
            sys.exit(1)

if __name__ == '__main__':
    if len(sys.argv) < 2:
        print("Usage: fission-docstring-updater <file-path> [function-name] [--set \"<json>\"] [--get]", file=sys.stderr)
        sys.exit(1)

    file_path = sys.argv[1]
    target_function = sys.argv[2] if len(sys.argv) > 2 and not sys.argv[2].startswith('--') else None

    # Parse the remaining arguments
    action = 'get'
    set_value = None

    i = 3 if target_function else 2
    while i < len(sys.argv):
        if sys.argv[i] == '--set' and i + 1 < len(sys.argv):
            action = 'set'
            set_value = sys.argv[i + 1]
            i += 2
        elif sys.argv[i] == '--get':
            action = 'get'
            i += 1
        else:
            print(f"Error: Unknown argument '{sys.argv[i]}'", file=sys.stderr)
            sys.exit(1)

    process_file(file_path, target_function, action, set_value)
EOF
)
# Build arguments for the Python script
PYTHON_ARGS=("$FILE_PATH")
if [[ -n "$FUNCTION_NAME" ]]; then
    PYTHON_ARGS+=("$FUNCTION_NAME")
fi

if [[ "$ACTION" == "set" ]]; then
    PYTHON_ARGS+=("--set" "$SET_VALUE")
else
    PYTHON_ARGS+=("--get")
fi

# Execute the Python script
python3 -c "$PYTHON_SCRIPT" "${PYTHON_ARGS[@]}"
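For reference, the docstring format the script targets embeds a JSON block fenced with ```fission inside the function's docstring. A minimal sketch using the same extraction pattern as the script's `extract_fission_config()` (the function name and config keys below are illustrative, not a real deployment):

```python
import json
import re

# Hypothetical function source with an embedded fission block,
# in the format update-docstring.sh looks for (names are illustrative).
source = '''
def main():
    """
    Example handler.

    ```fission
    {"name": "example-fn", "route": "/example"}
    ```
    """
    return "ok"
'''

# Same extraction pattern the script uses
match = re.search(r'```fission\s*([\s\S]*?)\s*```', source)
config = json.loads(match.group(1).strip())
print(config)
```

Running the script with `--get` against such a file would print this parsed JSON; `--set` rewrites the block in place with the new configuration.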